Research Grants GR/60181/01 and GR/60198/01
How can short-term demand forecasting in companies be improved? - a research program
Robert Fildes, Lancaster University Centre for Forecasting & Paul Goodwin, University of Bath
Supported by Kostas Nikolopoulos, Wing Yee Lee, Michael Lawrence
The Background
• Complex statistical methods are not commonly used in supply chain forecasting
• Exponential smoothing, or variants of it, is employed
• Complexity in the environment is captured by incorporating judgemental interventions
• On average, 75% of supply chain forecasts incorporate judgement
• Accuracy is important: 90% of respondents in our survey said it is their principal objective
The Issue
• How can such judgemental adjustments be made most effective?
• What is the role of forecasting software?
Outline
• How forecasting is carried out at the company level
• Analysis of company-level forecasts
• Explaining the excessive use of judgmental interventions
• Potential improvement strategies
Research Methods
• Collaborating companies as a basis for case studies
• Data collected on their forecasts and actual outcomes
• Observations and interviews to capture the company forecasting process, the role of forecasting software and the forecasters' activities
• Statistical appraisal of company forecasting accuracy
• Experimental testing of alternative software designs
A 'triangulation' of statistical analysis, case observation and experiment
Focus
• Short- and medium-term demand forecasting in supply chain companies
• Forecasts in these companies are used in decisions relating to: logistics, human resource planning, stock control, purchasing, cash flow management...
• Service levels trade off against inventory investment; the wrong product in the wrong place at the wrong time means forecasters are motivated to improve accuracy
Information affecting the supply chain
[Diagram: promotions (e.g. 2-for-1), economic factors and customer sales feed EPOS information to the retailer, whose orders drive manufacturer shipments]
The full model: Orders = f(past orders, sales, forecast sales, promotions, events)
The basic model: Orders = f(past orders) + judgemental estimates of promotions
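To make the contrast concrete, here is a minimal, hedged sketch in Python on invented data (not the companies' data or the project's models): the basic model explains orders from past orders alone, while the full model adds a promotion indicator as one extra driver.

```python
# Sketch only: "basic" vs "full" order model on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60
promo = (rng.random(n) < 0.2).astype(float)        # hypothetical promotion flag
orders = 100 + 40 * promo + rng.normal(0, 5, n)    # promotions lift orders
lag_orders = pd.Series(orders).shift(1).bfill()    # past orders

basic = sm.OLS(orders, sm.add_constant(lag_orders)).fit()
full = sm.OLS(orders, sm.add_constant(
    pd.DataFrame({"lag": lag_orders, "promo": promo}))).fit()

print(f"Basic model R^2: {basic.rsquared:.2f}")    # past orders alone
print(f"Full model  R^2: {full.rsquared:.2f}")     # past orders + promotion driver
```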
A complete statistical model? Is it possible?
The problems:
• Too complex
• Incomplete data on many drivers (e.g. marketing factors)
• 'Unique' events
• No available statistical expertise
• Management understanding & acceptance
• Belief that managerial expertise is best
• Cost of cleaning past data to remove unusual events
The practical solution: capture the unusual complexity by managerial judgement
Organisationally Based Forecasting Combines Statistical Analysis with Managerial Judgement
Complementary nature of statistical forecasts & management judgment:
• Humans are adaptable and can take into account one-off events, but they are inconsistent and suffer from cognitive biases
• Statistical methods are rigid, but consistent, and can take into account large volumes of information
How Companies Make Their Forecasts
1. A package embodying simple, robust time-series methods is used (e.g. simple exponential smoothing, Holt-Winters)
2. The statistical forecast is then often judgmentally adjusted, usually at a meeting, ostensibly for special events like promotion campaigns
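A minimal sketch of this two-step process, assuming an invented demand history and a hypothetical adjustment value (this is not the companies' software, just an illustration using statsmodels' simple exponential smoothing):

```python
# Step 1: statistical system forecast; Step 2: judgmental adjustment.
import numpy as np
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

history = np.array([102, 98, 105, 110, 97, 101, 99, 104, 108, 100], dtype=float)

# Statistical system forecast from a simple, robust time-series method.
system_forecast = SimpleExpSmoothing(history).fit().forecast(1)[0]

# Judgmental adjustment agreed at the forecast review meeting,
# ostensibly for a special event such as a promotion campaign.
judgmental_adjustment = 25.0        # hypothetical uplift for next month's promotion
final_forecast = system_forecast + judgmental_adjustment

print(f"System forecast: {system_forecast:.1f}")
print(f"Final (adjusted) forecast: {final_forecast:.1f}")
```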
The EPSRC Research Project: Company Evidence
Data from 4 UK-based companies:
753 SKUs, monthly:
• Company A: major UK manufacturer of laundry, household cleaning and personal care products - 244 SKUs x 22 months -> 3012 triplets
• Company B: major international pharmaceutical manufacturer - 213 SKUs x 36 months -> 5428 triplets
• Company C: major international canned food manufacturer - 296 SKUs x 20 months -> 2856 triplets
783 SKUs, weekly:
• Company D: major UK retailer (over 26000 SKUs in total) - 104 weeks -> 57688 triplets
Data collected on: actuals, statistical system forecasts, and final adjusted forecasts
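For illustration only, each triplet can be thought of as a row of a table like the following (all values invented):

```python
# Illustrative layout of the SKU-period "triplets" collected in the study.
import pandas as pd

triplets = pd.DataFrame({
    "company": ["A", "A", "B"],
    "sku":     ["A-001", "A-001", "B-042"],
    "period":  ["2004-01", "2004-02", "2004-01"],
    "actual":  [120.0, 95.0, 310.0],
    "system":  [110.0, 100.0, 290.0],   # statistical system forecast
    "final":   [130.0, 100.0, 305.0],   # final adjusted forecast
})
triplets["adjustment"] = triplets["final"] - triplets["system"]
print(triplets)
```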
Research Issues and Hypotheses (practical and theoretical)
• Does adjustment improve accuracy? Under what circumstances?
• Are the organisational forecasts rational? Do biases and inefficiencies arise from organisational processes?
• Why is judgment used so extensively?
• Can the forecasts be improved?
  - By model-based improvements
  - By software improvements
  - By process improvements
Evidence on the frequency of judgmental adjustments from the 4 companies
[Chart: a significant percentage of forecasts are judgmentally adjusted in each company; Company D covers over 26000 SKUs]
Size of adjustments, as a % of the statistical system forecast
[Charts: distribution of adjustment sizes for Companies A-C and for Company D]
Results: MAPE by Adjustment Size
[Charts: MAPE for small vs large adjustments, for Group A-C and for the retailer (Company D)]
Accuracy: a summary
For group A-C:
• Positive adjustments damage the MAPE (but not the median)
• Negative adjustments improve both
• The naive forecast beats the system forecast on MAPE (but not on the median)
For group D:
• The naive forecast is better than the system forecast
• The system forecast is better than the final forecast
And the size of the adjustment matters
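As an illustration of how such a comparison can be computed (the numbers below are simulated, not the companies' results), MAPE and MdAPE for the naive, system and final forecasts can be summarised by adjustment direction:

```python
# Sketch: accuracy comparison of naive, system and final forecasts by adjustment sign.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 200
actual = rng.normal(100, 15, n)
naive = actual + rng.normal(0, 20, n)    # stand-in for last period's actual
system = actual + rng.normal(0, 12, n)
adjust = rng.normal(2, 10, n)            # judgmental adjustments, slight optimism
final = system + adjust

def ape(forecast):
    return 100 * np.abs(actual - forecast) / np.abs(actual)

df = pd.DataFrame({"naive": ape(naive), "system": ape(system), "final": ape(final),
                   "direction": np.where(adjust >= 0, "positive", "negative")})
print(df.groupby("direction").agg(["mean", "median"]).round(1))  # MAPE and MdAPE
```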
Three Types of Error (illustrated for positive adjustments; all are replicated for negative adjustments)
• Wrong direction: the adjustment moves the forecast away from actual demand, when the system forecast was closer
• Adjustment too large: the adjusted forecast overshoots actual demand
• Adjustment too small: the adjusted forecast moves towards actual demand but not far enough
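A hedged sketch of how each adjustment can be classified into these types from its triplet (an illustrative rule, not necessarily the exact definition used in the project):

```python
# Classify an adjustment given the system forecast, final forecast and actual demand.
def classify_adjustment(system: float, final: float, actual: float) -> str:
    adjustment = final - system
    needed = actual - system              # the adjustment that would have been ideal
    if adjustment == 0:
        return "no adjustment"
    if adjustment * needed < 0:
        return "wrong direction"
    return "too large" if abs(adjustment) > abs(needed) else "too small"

print(classify_adjustment(system=100, final=120, actual=95))   # wrong direction
print(classify_adjustment(system=100, final=130, actual=110))  # too large
print(classify_adjustment(system=100, final=105, actual=115))  # too small
```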
[Chart: % of adjustments that were in the wrong direction or too large]
The Characteristics of the Forecasts - Bias
[Charts of the system and final forecasts]
Modelling the Forecasts: Statistical Issues
Testing for unbiasedness:
• Errors are heteroscedastic, with outliers
• Can firms be pooled?
Solutions:
• Errors normalised by the standard deviation of actuals and analysed by size of adjustment
Result: the forecasts are biased - both mean-biased and regression-biased
Final forecasts are biased - but are they inefficient?
Efficiency: all available information is being used effectively, i.e. the error models have no explanatory power for the jth SKU in the ith company
• To estimate: normalise, pool across SKUs, remove outliers, test for seasonality
• The result?
  - The forecasts are inefficient, and different companies embody different inefficiencies
  - Positively adjusted forecasts are more inefficient
  - There is a persistent optimism bias
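The slide's error-model equations are not reproduced above; purely as an illustration of the kind of test involved, the sketch below runs a Mincer-Zarnowitz-style regression of actuals on final forecasts (unbiasedness requires an intercept of 0 and a slope of 1) and regresses the forecast error on the adjustment (efficiency requires the adjustment to have no explanatory power). The data are simulated.

```python
# Sketch: unbiasedness and efficiency tests on simulated forecasts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
system = rng.normal(100, 20, n)            # statistical system forecasts
actual = system + rng.normal(0, 10, n)     # actuals scatter around the system forecast
adjust = rng.normal(8, 12, n)              # adjustments: mostly noise plus optimism
final = system + adjust                    # final (adjusted) forecasts

# Unbiasedness: regress actuals on final forecasts; want intercept ~ 0, slope ~ 1.
bias_test = sm.OLS(actual, sm.add_constant(final)).fit()
print(bias_test.params)

# Efficiency: the adjustment should not explain the final forecast's error.
error = actual - final
efficiency_test = sm.OLS(error, sm.add_constant(adjust)).fit()
print(efficiency_test.params, efficiency_test.pvalues)
```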
Can we model the error to ensure an efficient, improved forecast?
The company's final forecast is FinalFor = 1 x SysFor + 1 x Adjust. The correction models re-estimate these weights; the last of them is the 50/50 model of Blattberg & Hoch, which weights the statistical and judgmental inputs equally. We can then use these models to predict the actual and compare with the final forecast.
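An illustrative stand-in for these correction models (simulated data, not the project's estimates): fit weights on the system forecast and the adjustment over a fitting sample, then compare the re-weighted forecast and a 50/50-style damped adjustment with the company's final forecast on a hold-out sample.

```python
# Sketch: re-weighting the system forecast and adjustment, with a split sample.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
system = rng.normal(100, 20, n)
adjust = rng.normal(5, 15, n)                         # adjustments with an optimism bias
actual = system + 0.4 * adjust + rng.normal(0, 8, n)  # only part of each adjustment is signal

fit, test = slice(0, 200), slice(200, None)           # split sample: estimate, then evaluate
X = np.column_stack([system, adjust])
weights = sm.OLS(actual[fit], X[fit]).fit().params    # estimated weights b1, b2

final = system + adjust                               # company final forecast (weights 1 and 1)
reweighted = X[test] @ weights
fifty_fifty = system[test] + 0.5 * adjust[test]       # equal weight on statistical and judgmental forecasts

def mae(forecast):
    return np.mean(np.abs(actual[test] - forecast))

print(f"Final forecast MAE: {mae(final[test]):.1f}")
print(f"Re-weighted MAE:    {mae(reweighted):.1f}")
print(f"50/50 model MAE:    {mae(fifty_fifty):.1f}")
```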
Comparative Results
• Overall gains; major gains with some companies (particularly the retailer)
• Tested on a split sample
• Overall, the adjustment models perform well: substantial improvements are possible, and the gains are much larger than those shown in statistical method-selection comparisons
Conclusions from the Empirical Analysis
a) 1 out of 3 adjustments is in the wrong direction (very costly!)
b) Biases and inefficiencies persist; the effects are exaggerated for positive adjustments (wishful thinking?)
c) Small adjustments (<10-20%) are of doubtful value
d) Final forecasts are (usually) more accurate than system forecasts
e) Reweighting the system forecast and the market adjustment leads to improved accuracy
f) The effectiveness of adjustments depends on the company - is it the data, the drivers or the forecasting process that makes the difference?
Can we, by understanding the process, develop an FSS that supports improved accuracy?
The adjusted forecasts are biased and inefficient: the process through which the forecasts are adjusted and the market intelligence is estimated leads to inaccuracies. Hence there may be gains from improving the quality of judgmental inputs to forecasts. But most forecasting software does not provide facilities to support judgment - instead, packages promote their statistical power!
Next steps:
• To understand the process of adjustment
• To influence the adjustments through enhanced software design
Why do managers adjust forecasts excessively? (1) The 'advice' perspective
• We can regard the computer system's forecasts as a form of advice
• Yaniv & Kleinberger (2000): people trust their own forecasts more because they have greater access to the rationale for them
• No rationale or explanation is provided in most forecasting software packages employing univariate methods
Why do managers adjust forecasts excessively? (2)
• Yaniv & Kleinberger (2000): the weight attached to advice depends on the reputation of the adviser, but negative information about an adviser is perceived to be more diagnostic than positive information. In forecasting, noise and special events may contribute to a negative perception.
• Kaplan et al. (2001) found people were more likely to rely on a system when its accuracy was not disclosed.
Why do managers adjust forecasts excessively? (3) Misconceptions of randomness We tend to see systematic patterns in what are really random movements in graphs….
Why do managers adjust forecasts excessively? (4) Illusion of control
• The ability to manipulate the system and carry out 'what if' analyses leads to an illusion of control (Davis et al., 1994)
• Manipulation involves effort and is perceived to involve skill
• People become overconfident in their judgments
Manipulations:
• Change the responsiveness of the forecasts
• Change the forecasting method
• Change the amount of past data used
Result of overconfidence: overweighting of judgmental forecasts relative to statistical forecasts, even when the evidence shows judgment is less accurate. In one study people continued to rely on their judgment despite receiving messages from the computer like: "Please be aware that you are 18.1% LESS ACCURATE than the statistical forecast provided to you."
Why do managers adjust forecasts excessively? (5) Confusion between forecasts and other quantities, e.g. targets, decisions, politically acceptable numbers
Why do managers adjust forecasts excessively? (6) Need for 'ownership' of the forecasts
• Demonstration that you've contributed to the forecasting process and earned your salary
• Demonstration of your marketing expertise to your colleagues at meetings
Why do managers adjust forecasts excessively? (7) Underweighting of base-rate information vs case-specific information
"Tom W. is of high intelligence, although lacking in true creativity. He has a need for order and clarity, ... in which every detail finds its appropriate place. His writing is rather dull and mechanical, ... enlivened by somewhat corny puns ... He has a strong drive for competence. He seems to feel little sympathy for other people ... Self-centered, he nonetheless has a deep moral sense." (Kahneman and Tversky)
Group A: this description has been drawn randomly from a folder containing descriptions of 30 engineers & 70 lawyers
Group B: this description has been drawn randomly from a folder containing descriptions of 70 engineers & 30 lawyers
What is the probability that Tom W is an engineer?
Why do managers adjust forecasts excessively? (8)
• People over-emphasise case-specific information at the expense of base-rate information
• The narrative fallacy: we have a story to explain every random movement in the graph (Taleb 2007). Bloomberg News, Dec 2003: 13:01 "US Treasuries rise; Hussein capture may not curb terrorism"; 13:31 "US Treasuries fall; Hussein capture boosts allure of risky assets"
• Our company forecasters focused on each demand figure on the graph as being a special case; the base-rate information represented by the statistical forecasts was thus underweighted
Why do managers adjust forecasts excessively? (9) Recency bias
• Only the recent past was thought to be relevant
• Statistical methods were fitted to very short sets of data (often 6 months to 2 years)
• The statistical methods were not given much of a chance, reinforcing the forecasters' confidence in the relative accuracy of judgment
Why do managers adjust forecasts excessively? (10) Optimism bias
Recall: 66% of positive adjustments led to sales forecasts that were over-optimistic
Lessons relating to when to intervene
• Beware misconceptions of randomness, underweighting of base-rate information & recency biases - require the recording of reasons for judgmental intervention
• Beware overconfidence in judgment - restrict what-if analyses?
• Beware numbers masquerading as forecasts
• Build explanations into software, so its advice carries greater weight
Lessons relating to improving judgmental estimation (1): supporting the use of analogies
• Involves identifying similar products, or similar promotion campaigns, and using these as a basis for the judgmental forecast
• E.g. "We have a 2-week B.O.G.O.F. promotion in the North region starting on 5 June 2006..." - the database identifies the most similar past cases
Supporting the use of analogies - support for different stages of the judgmental process:
• Memory support - so the forecaster avoids the need to recall past cases
• Similarity support - helps the forecaster to identify the most similar past cases
• Adaptation support - helps the forecaster to adapt from similar past cases to the specifics of the current case. E.g. the most similar past BOGOF promotion may have lasted 3 weeks while the forthcoming promotion lasts only 2 weeks; the database will then estimate the effect of 2 weeks rather than 3 from all past promotions
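A minimal sketch of these three kinds of support, assuming a hypothetical database of past promotions with a few describing attributes (all names, attributes and uplift figures are invented); the crude attribute-matching distance stands in for whatever similarity measure the prototype actually used.

```python
# Sketch: memory, similarity and adaptation support over a tiny promotion database.
import pandas as pd

past_promotions = pd.DataFrame({
    "promo_type": ["BOGOF", "BOGOF", "3for2", "BOGOF"],
    "region":     ["North", "South", "North", "North"],
    "weeks":      [3, 2, 2, 4],
    "uplift_pct": [45.0, 38.0, 22.0, 60.0],     # observed sales uplift in each case
})
new_promo = {"promo_type": "BOGOF", "region": "North", "weeks": 2}

# Similarity support: crude distance mixing attribute mismatches and duration gap.
def distance(row):
    return ((row["promo_type"] != new_promo["promo_type"])
            + (row["region"] != new_promo["region"])
            + abs(row["weeks"] - new_promo["weeks"]) / 4)

ranked = (past_promotions
          .assign(dist=past_promotions.apply(distance, axis=1))
          .sort_values("dist"))
print(ranked)                                   # memory + similarity support

# Adaptation support: scale the most similar case's uplift to the new duration.
best = ranked.iloc[0]
adapted_uplift = best["uplift_pct"] * new_promo["weeks"] / best["weeks"]
print(f"Suggested uplift for the new promotion: {adapted_uplift:.0f}%")
```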
[Screenshot: the prototype interface, showing memory support, similarity judgment support and adaptation judgment support]
Lessons relating to improving judgmental estimation (2): using profiles to estimate the effects of special events over time, e.g. the week-to-week effects of a sales promotion
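For example (hypothetical numbers), a normalised week-by-week profile can spread a single judgmental estimate of the total uplift across the weeks of the campaign:

```python
# Sketch: a weekly uplift profile applied to a baseline forecast.
import numpy as np

baseline = np.array([500.0, 510.0, 505.0, 495.0])  # system forecast for 4 promotion weeks
profile = np.array([0.5, 0.3, 0.15, 0.05])         # share of the total uplift in each week
total_uplift = 800.0                               # forecaster's single holistic estimate

forecast = baseline + total_uplift * profile       # the profile handles the time pattern
print(forecast.round(0))                           # big week-1 spike, then decay
```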
Lessons relating to improving judgmental estimation (3): using decomposition to reduce the demands of holistic estimation
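A small, hypothetical example of such a decomposition: the forecaster supplies several simpler component judgments, which are recombined arithmetically rather than estimated holistically.

```python
# Sketch: decomposing a promotional sales estimate into simpler judgments.
baseline_weekly_sales = 500.0     # judged normal weekly demand
promo_uplift_factor = 1.6         # judged sales multiplier during the promotion
promo_weeks = 2                   # length of the campaign
forward_buying_share = 0.15       # judged share of uplift that is demand pulled forward

gross_promo_sales = baseline_weekly_sales * promo_uplift_factor * promo_weeks
incremental_sales = ((gross_promo_sales - baseline_weekly_sales * promo_weeks)
                     * (1 - forward_buying_share))
print(f"Gross promotional sales: {gross_promo_sales:.0f}")
print(f"Incremental sales:       {incremental_sales:.0f}")
```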
Summary: The Research Programme
• Analysed the accuracy of > 12000 adjustments in 4 companies
• Examined the processes and software used
• Based on hypotheses about human judgement in FSS
• Developed software to support the adjustment process
• Showed the FSS enhancements delivered improved accuracy
• Explored the viability of implementing such software improvements
Conclusions
• Judgment is employed extensively in short-term company forecasting
• This can lead to improved accuracy, but there is still much scope for improvement
• Methods that correct judgmental interventions for inefficiency and bias may help to improve accuracy
• But methods that support forecasters in their task of applying judgment are more likely to be accepted
• However, even then, organisational politics and other pressures may restrict the extent of improvements that can be achieved
References
Fildes, R., Goodwin, P. and Lawrence, M. (2006) "The design features of forecasting support systems and their effectiveness." Decision Support Systems, 42, 351-361.
Goodwin, P., Fildes, R., Lawrence, M. and Nikolopoulos, K. (accepted for publication) "The process of using a forecasting support system." International Journal of Forecasting.
Syntetos, A., Nikolopoulos, K., Boylan, J., Fildes, R. and Goodwin, P. (accepted for publication) "The effects of integrating management judgement into intermittent demand forecasts." International Journal of Production Economics.
Lee, W.Y., Goodwin, P., Fildes, R., Nikolopoulos, K. and Lawrence, M. (accepted for publication) "Providing support for the use of analogies in demand forecasting tasks." International Journal of Forecasting.