Foresight editor Leonard Jay Tashman
Companies launch initiatives to upgrade or improve their sales & operations planning and demand planning processes all the time. Many of these initiatives fail to deliver the results they should. Has your forecasting function fallen short of expectations? Do you struggle with “best practices” that seem incapable of producing accurate results? Continue reading
In a previous post, I discussed one of the thornier problems demand planners sometimes face: working with product demand data characterized by what statisticians call skewness, a situation that can necessitate costly inventory investments. This sort of problematic data arises in several different scenarios. In at least one of them, the combination of intermittent demand and highly effective sales promotions, the problem lends itself to an effective solution. Continue reading
Demand planners have to cope with multiple problems to get their job done. One is the Irritation of Intermittency. The “now you see it, now you don’t” character of intermittent demand, with its heavy mix of zero values, forces the use of advanced statistical methods, such as Smart Software’s patented Markov Bootstrap algorithm. But even within the dark realm of intermittent demand, there are degrees of difficulty: planners must further cope with the potentially costly Scourge of Skewness.
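The bootstrap idea behind forecasting intermittent demand can be illustrated with a much simplified sketch: resample past daily demands to build a distribution of total demand over the replenishment lead time. This is plain resampling only; Smart Software's patented Markov Bootstrap additionally models the pattern of zero and non-zero periods, which this toy version ignores. All figures are invented for illustration.

```python
import random

random.seed(42)

# Simulated intermittent daily demand: mostly zeros, occasional spikes.
history = [0, 0, 4, 0, 0, 0, 9, 0, 2, 0, 0, 0, 0, 7, 0, 0, 3, 0, 0, 0]

def bootstrap_lead_time_demand(history, lead_time_days, n_samples=10_000):
    """Build a distribution of total lead-time demand by resampling
    daily demands with replacement (a plain bootstrap sketch)."""
    totals = []
    for _ in range(n_samples):
        totals.append(sum(random.choice(history) for _ in range(lead_time_days)))
    return sorted(totals)

totals = bootstrap_lead_time_demand(history, lead_time_days=5)
mean_demand = sum(totals) / len(totals)
p95 = totals[int(0.95 * len(totals))]  # stock level for roughly 95% service
print(f"mean lead-time demand ~ {mean_demand:.1f}, 95th percentile ~ {p95}")
```

Note how the 95th percentile sits well above the mean: that gap is the skewness at work, and it is what drives the extra safety stock the post describes.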
A new metric we call the “Attention Index” will help forecasters identify situations where “data behaving badly” can distort automatic statistical forecasts (see adjacent poem). It quickly identifies those items most likely to require forecast overrides—providing a more efficient way to put business experience and other human intelligence to work maximizing the accuracy of forecasts. How does it work?
“Trillions of records of millions of people…Finding the useful and right information, understanding its quality and producing reliable analyzed data in a timely and cost-effective manner are all critical issues.”
Smart Software Senior Vice President for Research Tom Willemain recently had the opportunity to talk with Dr. Mohsen Hamoudia, President of the International Institute of Forecasters (IIF), to discuss current issues with, and opportunities for, big data analytics. The IIF informs practitioners on trends and research developments in forecasting via print and online publications and the hosting of professional conferences.
Posted in Excellence in Forecasting, Guest Posts
Tagged academic forecasting, big data, big data analytics, finance, forecasting, ict, mobile devices, on-line data, ott, professional development, retail, sense and react, telecom
A readable, well-organized textbook could be invaluable to “help corporate forecasters-in-training understand the basics of time series forecasting,” as Tom Willemain notes in the conclusion to this review, originally published in Foresight: The International Journal of Applied Forecasting. Principally written for an academic audience, the review also serves inexperienced demand planning professionals by pointing them to an in-depth resource.
This neat little book aims to “introduce the reader to quantitative forecasting of time series in a practical, hands-on fashion.” For a certain kind of reader, it will doubtless succeed, and do so in a stylish way.
In my previous post in this series on essential concepts, “What is ‘A Good Forecast’”, I discussed the basic effort to discover the most likely future in a demand planning scenario. I defined a good forecast as one that is unbiased and as accurate as possible. But I also cautioned that, depending on the stability or volatility of the data we have to work with, there may still be some inaccuracy in even a good forecast. The key is to have an understanding of how much.
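The two properties of a good forecast can be checked with simple arithmetic: bias is the average signed error, and a measure such as mean absolute error captures the typical size of the misses. A toy check, with all numbers invented for illustration:

```python
# Toy comparison of a forecast series against actual demand.
actuals   = [100, 110, 95, 105, 120]
forecasts = [ 98, 112, 97, 103, 118]

errors = [f - a for f, a in zip(forecasts, actuals)]
bias = sum(errors) / len(errors)                  # near 0 means unbiased
mae  = sum(abs(e) for e in errors) / len(errors)  # typical miss size

print(f"bias = {bias:+.1f} units, MAE = {mae:.1f} units")
# prints: bias = -0.4 units, MAE = 2.0 units
```

A bias near zero with a non-trivial MAE is exactly the situation described above: an unbiased forecast that still carries some irreducible inaccuracy, whose size you should know.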
Most statistical forecasting works in one direct flow from past data to forecast. Forecasting with leading indicators works differently. A leading indicator is a second variable that may influence the one being forecasted. Applying testable human knowledge about the predictive relationship between the two series can sometimes yield superior accuracy.
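As a sketch of the idea, the simple regression below relates a hypothetical leading indicator (housing starts, lagged one quarter) to the demand being forecasted. The series names and every number are invented for illustration; a real analysis would also test whether the relationship is statistically meaningful.

```python
# Hypothetical quarterly data: housing starts may lead appliance demand
# by one quarter. All values are illustrative only.
starts = [100, 120, 90, 140, 110, 130, 95, 150]
demand = [215, 205, 245, 185, 280, 225, 265, 195]

# Pair each quarter's demand with the PRIOR quarter's indicator value.
x = starts[:-1]   # indicator, lagged one period
y = demand[1:]    # demand it may help predict

# Ordinary least-squares fit of y on x.
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
intercept = my - slope * mx

# Forecast next quarter's demand from the latest indicator reading.
forecast = intercept + slope * starts[-1]
print(f"demand ~ {intercept:.1f} + {slope:.2f} * lagged starts; next ~ {forecast:.0f}")
```

The positive slope is the "testable human knowledge" in numeric form: high housing starts this quarter suggest high appliance demand next quarter, information no amount of appliance history alone can supply.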
Smart Software President Nelson Hartunian, PhD
Tremendous cost-saving efficiencies can result from optimizing inventory stocking levels using the best predictions of future demand. Familiarity with forecasting basics is an important part of being effective with the software tools designed to capture these savings. This concise introduction (the first in a short series of blog posts) offers the busy professional a primer in the basic ideas you need to bring to bear on forecasting. How do you evaluate your forecasting efforts, and how reliable are the results?
Fluctuations in an inventory supply chain are inevitable. Randomness, which can be a source of confusion and frustration, guarantees it. A ship carrying goods from China may be delayed by a storm at sea. A sudden upswing in demand can wipe out inventory in a single day, leaving you unable to meet the next day’s demand. Randomness creates frictions that make it hard to do your job.
At first blush, it sometimes seems best to respond to randomness with the ostrich approach: head buried in the sand. You can settle on a prediction and proceed on the assumption that the prediction will always be spot on. The flaw in that approach is that it ignores statistical methods that allow us to make use of a wealth of knowledge about our knowledge itself—how confident we can be in our predictions, and what breadth of possibilities confront us. The efficient approach to tackling the problems that stem from randomness is not to ignore uncertainty, but to embrace it with eyes open.
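Embracing uncertainty can be as simple as pairing the single "most likely" number with a range that reflects how volatile the history has been. A minimal sketch, assuming roughly normal and stable demand; the figures are invented for illustration:

```python
import statistics

# Illustrative monthly demand history (units); values are made up.
history = [52, 48, 61, 55, 49, 58, 53, 60, 47, 56, 54, 59]

point_forecast = statistics.mean(history)   # the ostrich stops here
spread = statistics.stdev(history)          # how volatile the history is

# A rough ~95% interval under the normality assumption: the
# "eyes open" companion to the single-number forecast.
low = point_forecast - 2 * spread
high = point_forecast + 2 * spread
print(f"forecast ~ {point_forecast:.0f} units, likely range {low:.0f} to {high:.0f}")
```

The interval is the "knowledge about our knowledge": plan stock against the upper end, not the point forecast, and the randomness stops being a daily surprise.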
Posted in Excellence in Forecasting
Tagged average, contingencies, forecasting, inventory, overstocking, randomness, raw materials, reliability, staffing, stocking, supply chain, uncertainty, understocking
In order to reap the efficiency benefits of forecasting, you need the most accurate forecasts—forecasts built on the most appropriate historical data. Most discussions of this issue tend to focus on the merits of using demand vs. shipment history—and I’ll comment on this later. But first, let’s talk about the use of net vs. gross data.
Net vs. Gross History
Many planners are inclined to use net sales data to create their forecasts. Systems that track sales capture transactions as they occur and aggregate results into weekly or monthly periodic totals. In some cases, sales records account for returned purchases as negative sales and compute a net total. These net figures, which often mask real sales patterns, are fed into the forecasting system. The historical data then presents a false picture of what customers wanted, and when they wanted it. This distortion carries forward into the forecast, with less than optimal results.
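A tiny worked example makes the net-versus-gross distortion concrete. The item and every figure below are hypothetical:

```python
# Hypothetical monthly transactions for one item: sales are positive,
# returned purchases are logged as negative sales.
transactions = {
    "Jan": [30, 25, -10],   # 55 units demanded, 10 later returned
    "Feb": [0],             # no sales
    "Mar": [40, -5],
}

# Gross keeps only what customers actually bought; net folds returns in.
gross = {m: sum(t for t in txns if t > 0) for m, txns in transactions.items()}
net   = {m: sum(txns) for m, txns in transactions.items()}

for month in transactions:
    print(f"{month}: gross={gross[month]:3d}  net={net[month]:3d}")
```

Net history makes January look like 45 units of demand when customers actually wanted 55; a model trained on the net column learns a flattened version of the real sales pattern.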
Posted in Excellence in Forecasting
Tagged accuracy, demand, demand data, efficiency, forecasting, gross history, net history, returns, sales pattern, shipment data, stock-out