In our travels around the industrial scene, we notice that many companies pay more attention to inventory turns than they should. We would like to deflect some of this attention to more consequential performance metrics.
In a previous post, I discussed one of the thornier problems demand planners sometimes face: working with product demand data characterized by what statisticians call skewness—a situation that can necessitate costly inventory investments. This sort of problematic data is found in several different scenarios. In at least one, the combination of intermittent demand and very effective sales promotions, the problem lends itself to an effective solution.
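To make "skewness" concrete: it measures how lopsided a demand distribution is, and intermittent demand with occasional large orders is typically skewed to the right. The short sketch below computes sample skewness from a made-up demand history (the data and the plain moment formula are illustrative, not from the post):

```python
# Illustration only: sample skewness of a made-up intermittent demand history.
def skewness(xs):
    """Moment-based sample skewness: m3 / m2^(3/2)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n   # second central moment
    m3 = sum((x - mean) ** 3 for x in xs) / n   # third central moment
    return m3 / m2 ** 1.5

demand = [0, 0, 5, 0, 0, 0, 40, 0, 3, 0, 0, 60]  # mostly zeros, a few spikes
print(round(skewness(demand), 2))  # strongly positive => right-skewed
```

A symmetric history would score near zero; the large positive value here signals the kind of lopsided demand that inflates safety stock requirements.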
Demand planners have to cope with multiple problems to get their job done. One is the Irritation of Intermittency. The “now you see it, now you don’t” character of intermittent demand, with its heavy mix of zero values, forces the use of advanced statistical methods, such as Smart Software’s patented Markov Bootstrap algorithm. But even within the dark realm of intermittent demand, there are degrees of difficulty: planners must further cope with the potentially costly Scourge of Skewness.
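The core idea behind bootstrap methods for intermittent demand can be illustrated with a deliberately simplified sketch: resample the zero-heavy history many times to build up a distribution of total demand over a replenishment lead time. Note this is a plain i.i.d. bootstrap for illustration only; Smart Software's patented Markov Bootstrap additionally models the transitions between zero and nonzero periods, a refinement omitted here. The data and function names are invented:

```python
import random

# Illustration only: a plain i.i.d. bootstrap of a zero-heavy demand history.
# (The patented Markov Bootstrap also models zero/nonzero transitions; that
# refinement is deliberately omitted from this sketch.)
def bootstrap_lead_time_demand(history, lead_time, n_samples=10000, seed=42):
    rng = random.Random(seed)
    # Each sample: draw `lead_time` periods at random from history and sum them.
    return sorted(sum(rng.choice(history) for _ in range(lead_time))
                  for _ in range(n_samples))

history = [0, 0, 7, 0, 0, 12, 0, 0, 0, 5, 0, 9]   # intermittent demand
samples = bootstrap_lead_time_demand(history, lead_time=3)
p95 = samples[int(0.95 * len(samples))]
print(p95)  # 95th-percentile lead-time demand, a candidate reorder point
```

The payoff is the full distribution of lead-time demand, from which a planner can read off the reorder point matching any target service level.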
A new metric we call the “Attention Index” will help forecasters identify situations where “data behaving badly” can distort automatic statistical forecasts (see adjacent poem). It quickly identifies those items most likely to require forecast overrides—providing a more efficient way to put business experience and other human intelligence to work maximizing the accuracy of forecasts. How does it work?
A readable, well-organized textbook could be invaluable to “help corporate forecasters-in-training understand the basics of time series forecasting,” as Tom Willemain notes in the conclusion to this review, originally published in Foresight: The International Journal of Applied Forecasting. Principally written for an academic audience, the review also serves inexperienced demand planning professionals by pointing them to an in-depth resource.
This neat little book aims to “introduce the reader to quantitative forecasting of time series in a practical, hands-on fashion.” For a certain kind of reader, it will doubtless succeed, and do so in a stylish way.
Most statistical forecasting works in one direct flow from past data to forecast. Forecasting with leading indicators works a different way. A leading indicator is a second variable that may influence the one being forecast. Applying testable human knowledge about the predictive relationship between the two series can sometimes yield superior accuracy.
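The mechanics can be sketched in a few lines: fit a simple relationship between past indicator values and later values of the series being forecast, then apply it to the most recent indicator reading. The two-period lag, the data, and the least-squares fit are all invented for illustration:

```python
# Sketch (not from the post): forecast a series from a lagged leading
# indicator via simple least-squares. Lag, data, and names are made up.
def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

indicator = [10, 12, 15, 14, 18, 21, 20]   # e.g. permits issued
sales     = [55, 57, 62, 60, 68, 75, 73]   # the series being forecast

lag = 2  # assume the indicator leads sales by two periods
# Pair indicator at time t with sales at time t + lag, then fit.
a, b = fit_line(indicator[:-lag], sales[lag:])
print(a + b * indicator[-lag])  # forecast of next period's sales
```

Whether this beats a pure time-series forecast is an empirical question; the relationship should be tested on held-out data before it is trusted.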
Fluctuations in an inventory supply chain are inevitable. Randomness, which can be a source of confusion and frustration, guarantees it. A ship carrying goods from China may be delayed by a storm at sea. A sudden upswing in demand can wipe out inventory in a single day, leaving you unable to meet the next day’s demand. Randomness creates frictions that make it hard to do your job.
At first blush, it sometimes seems best to respond to randomness with the ostrich approach: head buried in the sand. You can settle on a prediction and proceed on the assumption that it will always be spot on. The flaw in that approach is that it ignores statistical methods that allow us to make use of a wealth of knowledge about our knowledge itself—how confident we can be in our predictions, and what breadth of possibilities confronts us. The efficient approach to tackling the problems that stem from randomness is not to ignore uncertainty, but to embrace it with eyes open.
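The contrast between the two attitudes can be shown in a few lines: the "ostrich" plans to a single number, while the eyes-open planner also looks at the spread of plausible outcomes. This is a minimal sketch with invented data, using a rough normal approximation for the interval:

```python
import statistics

# Sketch under made-up assumptions: a single-point plan versus an
# acknowledged range of plausible outcomes.
demand = [42, 38, 55, 47, 61, 39, 44, 58, 50, 46]  # invented history

point = statistics.mean(demand)         # the "ostrich" plan: one number
sd = statistics.stdev(demand)
# Rough 90% interval via a normal approximation (z ~ 1.64).
lo, hi = point - 1.64 * sd, point + 1.64 * sd
print(f"plan-to number: {point:.0f}, plausible range: {lo:.0f} to {hi:.0f}")
```

Planning to the whole range (for example, stocking toward the upper end when shortages are costly) is what "embracing uncertainty with eyes open" looks like in practice.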