Pietro Filipi

Our intelligent replenishment solution using ML has helped Pietro Filipi save significant sums on logistics costs, while increasing sales through better availability of goods in stores.

Pietro Filipi was a Czech company in the luxury clothing market with 23 stores in the Czech Republic and 11 more abroad. Each year they launched two collections, each with about 400 products. They struggled to distribute these goods efficiently among the stores, a problem we at Revolt BI helped them solve using intelligent replenishment.

Problem and assignment

Number of outlets: 34 (23 in the Czech Republic)
Number of collections per year: 2
Number of products in a collection: 400 (× sizes)

Scope of the assignment

Imagine that twice a year you receive a collection of 400 products, each available in several sizes, and you need to ensure that sufficient quantities of all size combinations are available at each of the 34 stores.

The goods are mostly ordered long in advance, so you cannot react flexibly to demand; you can only estimate future sales at each location. At the same time, customers need to be able to try on and buy any of the goods in "their" store at any time, yet you also want to be nearly sold out as the new season approaches, because high-fashion collections become unsellable once the season ends and the new line arrives (or can only be disposed of at a significant discount or loss).

So you have to:

  • minimise investment in stock
  • keep the goods where they sell best
  • minimise losses on unsold or discounted goods
  • move products efficiently between stores (replenishment)

Existing solution

Number of lines of code: more than 5,000
Number of lines of documentation: 0
Orientation in the code: almost impossible
Breakdown into logical blocks: none
Principle of operation: fixed priority of outlets
Execution: manual, twice a week
Need for manual tuning: significant
Evaluation of effectiveness, KPIs: none

Status of the existing solution

These were exactly the problems that plagued Pietro Filipi. Over the course of more than a decade they had developed a custom script (a set of SQL procedures) that grew to over 5,000 lines of code, and eventually virtually no one knew how to work with it anymore.

The script worked on the principle of a fixed priority order of stores, according to which stock was moved. It was run manually twice a week and still required a great deal of hands-on intervention before and after each run to correct obvious errors.

On top of that, the solution could not evaluate the efficiency of individual stock moves or tell accurately how well items were selling in each store, so many moves left excess goods lying unsold in one store while other stores with higher demand ran short of the same items.

Our solution

We approached replenishment by working backwards from the goals to be achieved and from how those goals would be evaluated. The key questions were:

Why am I repositioning goods?

I want to achieve higher sales given the same margins.

What does it depend on?

On the likelihood of sales.

It's logical: you need to know your odds of selling a particular product at any given store to know which one to ship it to.

Example: you have 100 blouses from a new collection and need to decide how many to send to which stores. Alternatively, the blouses are already in stores and you have to decide whether they will sell there by the end of the season or whether they are more likely to sell elsewhere. Of course, with the second option you incur moving costs and lose sales during the off-shelf time (the time an item spends being packed, in transit, and being put back on the shelf).

So the first step is to calculate the likelihood of the blouse selling in each store. This is based on past sales of the same or similar products, but you can also take other factors into account, for example the effect of that year's weather and so on. There are several ways to arrive at this result, but we found it useful to combine them and use different methods depending on the amount of data we had.
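
As an illustration only (the case study does not disclose the actual model), the per-store sale probability could be estimated from historical data roughly like this; the feature set, the synthetic data and the use of scikit-learn's logistic regression are all assumptions of the sketch:

    # Minimal sketch: probability that a product sells in a given store within
    # the season, estimated from past sales of the same or similar products.
    # Features and data layout are illustrative, not Pietro Filipi's real schema.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic history: one row per (product, store) pair from past seasons.
    # Columns: store footfall index, relative price, category popularity at the
    # store, seasonal weather index (the weather effect mentioned above).
    X = rng.normal(size=(500, 4))
    y = (X @ np.array([0.8, -0.5, 1.2, 0.3])                   # "true" feature effects
         + rng.normal(scale=0.5, size=500) > 0).astype(int)    # 1 = sold within the season

    model = LogisticRegression().fit(X, y)

    # Probability of selling a new blouse at two candidate stores.
    candidates = rng.normal(size=(2, 4))   # e.g. Prague and Teplice
    print(model.predict_proba(candidates)[:, 1])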

In the second stage, you need to calculate the probability at the level of individual units in each store. For example, if I have a 75% probability of selling a blouse in Prague and a 44% probability in Teplice, is it better to deliver the blouse as the third unit of its type to Prague or as the only one in Teplice?
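
One way to answer this question (our illustrative assumption, not necessarily the exact method used) is to treat demand per store as roughly Poisson, calibrate it to the store-level sale probability, and score each additional unit by the chance that demand reaches it:

    # Sketch: marginal sale probability of the k-th unit of a product at a store,
    # assuming Poisson demand calibrated so that P(demand >= 1) matches the
    # store-level probability from the predictive model.
    import math

    def poisson_tail(lam: float, k: int) -> float:
        """P(demand >= k) for Poisson-distributed demand with rate lam."""
        return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))

    def demand_rate(p_sell_at_least_one: float) -> float:
        """Solve P(demand >= 1) = 1 - exp(-lam) for lam."""
        return -math.log(1.0 - p_sell_at_least_one)

    lam_prague = demand_rate(0.75)    # 75 % chance of selling at least one blouse
    lam_teplice = demand_rate(0.44)   # 44 % chance in Teplice

    print(poisson_tail(lam_prague, 3))   # third unit in Prague: ~0.16
    print(poisson_tail(lam_teplice, 1))  # only unit in Teplice:  0.44

Under this assumption, the single unit in Teplice is the better placement, even though Prague has the higher headline probability.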

Our combined predictive model

Regression

  • Given enough data on previous sales
  • Linear, logistic, weighted...

ML algorithms

  • Can give a good prediction even with less data
  • Analyse relationships between products or other external influences (weather, season, economic factors...)
  • Neural networks, Bayesian neural networks

Expert estimation

  • The experience of the company's experts; especially useful for initial estimates when there is almost no data (a blending sketch follows this list).
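
A minimal sketch of how the three sources can be blended, with the weight of the data-driven estimate growing as observations accumulate; the shrinkage formula and the pseudo-count are our illustrative assumptions:

    # Sketch: blend an expert prior with a regression/ML estimate, trusting the
    # data more as the number of relevant observations grows.
    def blended_probability(expert_estimate: float,
                            model_estimate: float,
                            n_observations: int,
                            pseudo_count: float = 20.0) -> float:
        """Shrink the model estimate towards the expert prior when data is scarce."""
        w_data = n_observations / (n_observations + pseudo_count)
        return w_data * model_estimate + (1.0 - w_data) * expert_estimate

    # Start of the season: hardly any sales data, the expert prior dominates.
    print(blended_probability(0.6, 0.3, n_observations=2))    # ~0.57
    # Later in the season the data-driven estimate takes over.
    print(blended_probability(0.6, 0.3, n_observations=200))  # ~0.33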

However, this is not the end of the calculation, as the sale probabilities can be interdependent and the moves are subject to business rules:

  • unacceptable movements
  • minimum stock on the shop floor
  • substitutes, complements

So we are looking for the best distribution from the set of admissible solutions...

Optimization model for replenishment - summary

KPI

  • Total expected profit = expected sales - moving costs - off-shelf-time losses (scored per candidate transfer in the sketch below)

Predictive model

  • linear or more complex combinations, ML or expert estimation

Constraints

  • all rules for transfer eligibility
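
To make the KPI concrete, a single candidate transfer could be scored roughly as follows; the parameter names and the way off-shelf losses are valued are assumptions of the sketch, not the exact production formula:

    # Sketch: expected profit of moving one unit from store A to store B,
    # following the KPI "expected sales - moving costs - off-shelf-time losses".
    def transfer_score(p_sell_source: float,   # marginal sale prob. if the unit stays
                       p_sell_target: float,   # marginal sale prob. at the target store
                       margin: float,          # contribution margin of the item
                       moving_cost: float,     # packing and transport per unit
                       off_shelf_days: int,    # days the unit cannot be sold in transit
                       daily_sale_prob: float) -> float:
        expected_gain = (p_sell_target - p_sell_source) * margin
        off_shelf_loss = off_shelf_days * daily_sale_prob * margin
        return expected_gain - moving_cost - off_shelf_loss

    # A transfer is worth considering only if it passes all eligibility rules
    # and its score is positive.
    print(transfer_score(p_sell_source=0.16, p_sell_target=0.44, margin=900.0,
                         moving_cost=60.0, off_shelf_days=2, daily_sale_prob=0.02))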

What worked for us

Decomposition into partial probabilities

  • the model estimates at a more general (aggregated) level
  • the estimates are first decomposed according to a priori rules
  • the weight of the data-driven calculation is gradually increased

Pairwise linkage

  • Prohibitive prices for blocking and storage
  • Greedy heuristics for selecting the best transfers (see the sketch below)
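
A minimal sketch of this combination, assuming candidate transfers have already been scored as above; the data structures and the minimum-stock rule are illustrative:

    # Sketch: greedy selection of transfers in order of expected profit, with
    # "prohibitive prices" making blocked moves unattractive instead of
    # enumerating them as hard constraints.
    PROHIBITIVE = 10**9   # penalty subtracted from the score of a blocked move

    def greedy_replenishment(candidate_moves, stock, min_shop_floor=1):
        """candidate_moves: dicts with sku, source, target and score."""
        chosen = []
        for move in sorted(candidate_moves, key=lambda m: m["score"], reverse=True):
            if move["score"] <= 0:
                break                              # remaining moves only lose money
            key = (move["sku"], move["source"])
            if stock.get(key, 0) - 1 < min_shop_floor:
                continue                           # would strip the shop floor bare
            stock[key] -= 1
            chosen.append(move)
        return chosen

    moves = [
        {"sku": "blouse-38", "source": "Prague", "target": "Teplice", "score": 156.0},
        {"sku": "blouse-38", "source": "Prague", "target": "Brno", "score": 40.0 - PROHIBITIVE},
    ]
    print(greedy_replenishment(moves, stock={("blouse-38", "Prague"): 3}))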

What didn't work for us

Optimization via integer programming (IP)

  • Idea: solve product placement as one or more IP problems
  • Problem: the computation itself and its running time, initial conditions, misleading shadow prices

Substitution analysis

  • Idea: "weaker" products sell better when better ones are not available
  • Problem: hypothesis inconclusive, data noise, positive correlation

Results

Our new code, which replaced the original solution, not only delivered significant savings and higher sales, but was also much clearer thanks to its division into independently editable parts. The code base shrank from more than 5,000 lines to roughly 1,000, and those 1,000 lines came with complete documentation, making the solution easier to adjust as lessons were learned or goals shifted.

As a plus, all the transfer calculations were carried out completely automatically every day without any manual intervention, and Pietro Filipi also received regular reports on the efficiency of the individual transfers, allowing them to optimize performance further.

Number of optimised transfers: 1,000 per day
Share of past transfers identified as loss-making: 17 %
Logistics cost reduction from eliminating them: 10 – 20 %
Most efficient transfers identified: 28 % of movements yield 80 % of the profit
Number of lines of code: reduced from 5,000 to 1,000
Pages of documentation: increased from 0 to 10
Orientation in the code: improved thanks to the division into blocks
Principle of operation of the new solution: combined predictive model including ML
Execution: automatic, daily
Need for manual tuning: none (tens of hours saved per week)
Efficiency evaluation, KPIs: new automatic reporting

Replenishment optimization results

What the client says

The cooperation with Revolt BI was exemplary. They were not only able to quickly understand our needs and specifications, but also came up with many improvements of their own that further increased efficiency. In the first few months we already saw significant savings in logistics costs as well as increased sales thanks to better availability of our products in stores. Our colleagues in logistics have appreciated the huge reduction in manual work and a better understanding of what they do. In addition, we in management have gained a perfect overview of the efficiency of the moves.

Lukáš Uhl, IT/operations/retail manager, Pietro Filipi
