FORECASTING
Forecasting is an attempt to tell what is most likely to happen in the future. Perfect forecasting is impossible, but forecasting methodology can still be put to good use in management.
Practical problems in forecasting:
- How to select the best forecasting method for a given situation
- How to evaluate forecast accuracy
Methods of forecasting can be put into 3 classes:
- Extrapolation (also called "time series methods")
- Causal
- Judgemental
Extrapolation:
- Moving averages and exponential smoothing (both use special kinds of averages of the most recent data to forecast)
- Trend line analysis: regression models of the rate of growth of the data over time, e.g. dependent variable = sales, independent variable = f(t); a straight-line projection uses a linear trend
- Classical decomposition: assumes that the data are made up of at least 3 components (seasonality, trend and randomness) and attempts to separate them
- Box-Jenkins: a sophisticated statistical technique which attempts to pick an optimal model from a large number of possibilities (details not required)
Causal:
- Causal regression (beyond the scope of the course), e.g. sales as a function of advertisement and price
- Simulation: develops a model of a process and then conducts a series of trial-and-error experiments to predict the behaviour of the process over time
Judgemental forecasting: done when there are too few data to build a quantitative model (e.g. sales forecast of a new product)
- Used when historical data are no longer representative (e.g. OPEC decisions, Gulf War)
- A quantitative forecast can still be used as a benchmark to evaluate judgement accuracy
- Quantitative methods are used to adjust the data for seasonality and give a better picture of trends
- Judgement may be biased, so it is compared with a quantitative forecast
- Gambler's fallacy
- Conservatism in forecasting
- Bias can be reduced by averaging forecasts from different sources
- Forecasting is an input to planning
Time Series Patterns (Extrapolation): assumes that the time series follows some pattern which can be extrapolated into the future.
Four types of trends:
- Constant level trend (the forecast for any period in the future is a horizontal line)
- Linear trend (straight-line trend with constant growth)
- Exponential trend (the amount of growth increases continuously)
- Damped trend (used for longer-range forecasting; the trend becomes a horizontal line in a later stage)
3 types of seasonality:
- No seasonality
- Additive seasonality (seasonal fluctuations are of constant size) – less common
- Multiplicative seasonality (seasonal fluctuations are proportional to the data)
The Naive Forecasting Model (benchmark model): Ft+1 = Xt
Forecast for next period = observed value this period
Evaluating Forecast Accuracy:
- Mean absolute deviation (MAD)
- Mean absolute percentage error (MAPE)
- Mean squared error (MSE)
Forecasting models may be ranked differently according to the accuracy measure used.
- MAD gives equal weight to each error, whereas MSE gives more weight to larger errors
- MSE is most often used in practice
- The forecast accuracy of a given model is compared with that of a benchmark model (discard the model if its error is higher)
Forecast error equation: et = Xt – Ft (error = data – forecast)
Example of a naive forecasting model:
- The mean error measures are computed only for the last half of the data
- The first part of the data is used to fit the forecasting model
- Fitting data = warm-up sample; second part = forecasting sample
Rule of thumb:
- Put at least six non-seasonal data points, or two complete seasons of seasonal data, in the warm-up sample
- If there are fewer data, there is no need to bother with two samples
- In a long time series, common practice is simply to divide the data in half
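The three accuracy measures can be sketched for the naive model on the 12-period example series used in the tables below; periods 7-12 serve as the forecasting sample, matching the MSE of 18.3 quoted later:

```python
# Accuracy measures for the naive model F(t+1) = X(t) on the notes'
# example series; MSE over periods 7-12 matches the 18.3 quoted later.
data = [28, 27, 33, 25, 34, 33, 35, 30, 33, 35, 27, 29]  # X1..X12

# Naive forecast for period t (t >= 2) is the observation at t-1,
# so the error is e_t = X_t - X_{t-1}.
errors = {t: data[t - 1] - data[t - 2] for t in range(2, 13)}

sample = [errors[t] for t in range(7, 13)]          # forecasting sample, t = 7..12
mad = sum(abs(e) for e in sample) / len(sample)     # mean absolute deviation
mse = sum(e * e for e in sample) / len(sample)      # mean squared error
mape = sum(abs(errors[t]) / data[t - 1] for t in range(7, 13)) / 6 * 100

print(round(mad, 1), round(mape, 1), round(mse, 1))  # 3.7 12.3 18.3
```

Note how the single large error (-8 in period 11) dominates the MSE but not the MAD, illustrating why the two measures can rank models differently.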
MOVING AVERAGE FORECASTING
- Unweighted (equal weights to old and new data)
- Weighted (more weight to the most recent data)
Ft+1 = (Xt + Xt-1 + Xt-2)/3
- Here N = 3: the mean of the last N data points
Forecast = mean of the last N data points. The value of N can be other than 3; the best one is determined by experiment. Better than the naive model, as the MSE of the moving average is improved (smaller).

t    Data (Xt)   Forecast (Ft)   Error et = Xt - Ft
1    28
2    27
3    33
4    25          29.3            -4.3
5    34          28.3             5.7
6    33          30.7             2.3
7    35          30.7             4.3
8    30          34.0            -4.0
9    33          32.7             0.3
10   35          32.7             2.3
11   27          32.7            -5.7
12   29          31.7            -2.7
13               30.3

Forecast for t+1: Ft+1 = (Xt + Xt-1 + Xt-2)/3
MSE (periods 7-12) = (4.3² + (-4.0)² + 0.3² + 2.3² + (-5.7)² + (-2.7)²)/6 = 13.3
MSE (for naive model) = 18.3
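The three-period moving average above can be computed directly:

```python
# Three-period moving average forecast for the example series; the
# forecast for period t+1 is the mean of the last N = 3 observations.
data = [28, 27, 33, 25, 34, 33, 35, 30, 33, 35, 27, 29]  # X1..X12
N = 3

forecasts = {}  # F_t for t = N+1 .. 13
for t in range(N + 1, 14):
    forecasts[t] = sum(data[t - 1 - N:t - 1]) / N  # mean of X(t-3)..X(t-1)

# MSE over the forecasting sample, periods 7-12.
errs = [data[t - 1] - forecasts[t] for t in range(7, 13)]
mse = sum(e * e for e in errs) / len(errs)
print(round(forecasts[13], 1), round(mse, 1))  # 30.3 13.3
```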
SIMPLE EXPONENTIAL SMOOTHING
If et is positive, the forecast is increased; if et is negative, the forecast is decreased. (This process of adjustment continues until the errors reach zero. This never quite happens, but it is always the goal.)
THE SIMPLE SMOOTHING EQUATION
Ft+1 = Ft + α·et
Forecast for t+1 = forecast for t + α × error in t
Here α = smoothing parameter (0 < α < 1)
t    Data (Xt)   Forecast (Ft)   Error et = Xt - Ft
1    28          30.0            -2.0
2    27          29.8            -2.8
3    33          29.5             3.5
4    25          29.9            -4.9
5    34          29.4             4.6
6    33          29.9             3.1
7    35          30.2             4.8
8    30          30.7            -0.7
9    33          30.6             2.4
10   35          30.8             4.2
11   27          31.2            -4.2
12   29          30.8            -1.8
13               30.6

Forecast for t+1: Ft+1 = Ft + α·et (with α = 0.1)
F2 = 30.0 + 0.1(-2.0) = 29.8
F3 = 29.8 + 0.1(-2.8) = 29.5
F4 = 29.9, F5 = 29.4, F6 = 29.9, F7 = 30.2, F8 = 30.7, F9 = 30.6, F10 = 30.8, F11 = 31.2, F12 = 30.8, F13 = 30.6
MSE (periods 7-12) = (4.8² + 0.7² + 2.4² + 4.2² + 4.2² + 1.8²)/6 = 11.3
MSE (moving average) = 13.3; MSE (naive model) = 18.3
To choose α, a range of trial values must be tested; the value giving the best fit (minimum MSE) is chosen as best (nine trials, from 0.1 to 0.9).
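The table can be reproduced with a short loop. Note that the table rounds every value to one decimal at each step; carrying full precision gives an MSE of about 11.4 rather than 11.3, a small artifact of the hand computation:

```python
# Simple exponential smoothing with alpha = 0.1 and initial forecast
# F1 = 30.0, following the table above.  Full precision is carried, so
# the MSE comes out 11.4 vs the table's rounded 11.3.
data = [28, 27, 33, 25, 34, 33, 35, 30, 33, 35, 27, 29]  # X1..X12
alpha = 0.1

F = 30.0
errors = {}
for t, x in enumerate(data, start=1):
    errors[t] = x - F          # e_t = X_t - F_t
    F = F + alpha * errors[t]  # F_{t+1} = F_t + alpha * e_t

mse = sum(errors[t] ** 2 for t in range(7, 13)) / 6
print(round(F, 1), round(mse, 1))  # 30.6 11.4
```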
Basic idea of simple exponential smoothing:
A new average can be computed from an old average and the most recent observed demand:
At = α·Xt + (1 – α)·At-1, with Ft+1 = At (so Ft = At-1)
Hence Ft+1 = α·Xt + (1 – α)·Ft, which after rearranging gives:
Ft+1 = Ft + α·et
New forecast = old forecast + a proportion of the error between the observed demand and the old forecast; α controls the proportion of the error used.
How exponential smoothing weights past demand: starting from Ft+1 = α·Xt + (1 – α)·Ft and substituting repeatedly:
Ft+1 = α·Xt + α(1 – α)·Xt-1 + α(1 – α)²·Xt-2 + ... + α(1 – α)^(t-1)·X1 + (1 – α)^t·F1
This expression shows that the weights on each preceding demand decrease exponentially, by a factor of (1 – α), until the demand from the first period and the initial forecast F1 are reached.
If α = 0.3 and t = 3:
F4 = 0.3·X3 + 0.21·X2 + 0.147·X1 + 0.343·F1
Notice that the weights on the demands decrease exponentially over time and all the weights add up to 1. Exponential smoothing is therefore just a special form of the weighted average, with weights decreasing exponentially over time.
TIME SERIES REGRESSION (data containing trend): two ways
- Fit a trend line to past data and then project the line into the future
- Smooth the trend with an expanded version of the simple smoothing model
Regression: the process of estimating the relationship between two variables (e.g. sales and time). The best-fitting line is the one that gives the minimum sum of squared errors.
The Least Squares Method is used to fit the Regression model.
Ft = a + bt (linear regression calculation)
b = slope of the best-fitting trend line
a = intercept of the best-fitting trend line
An example: the Computerland forecasting problem (sales vs. time, 12 months); the first six periods:

t    X
1    60
2    55
3    64
4    51
5    69
6    66

b = (Σ tX – n·(mean t)·(mean X)) / (Σ t² – n·(mean t)²)
a = mean X – b·(mean t)
Disadvantages:
- All regression forecasts are based on a single equation
- Recomputing a changing trend is tedious
- Equal weight is assigned to all observations
- Even if the model fits the warm-up sample, it may not fit the later forecasting sample
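The least-squares formulas above, applied to the six warm-up observations, give the values later used as the initial level and trend (S0 = 54.9, T0 = 1.7):

```python
# Least-squares fit of F_t = a + b*t to the first six Computerland
# observations; a and b become the initial level S0 and trend T0.
t_vals = [1, 2, 3, 4, 5, 6]
x_vals = [60, 55, 64, 51, 69, 66]
n = len(t_vals)

mean_t = sum(t_vals) / n
mean_x = sum(x_vals) / n
sum_tx = sum(t * x for t, x in zip(t_vals, x_vals))
sum_t2 = sum(t * t for t in t_vals)

b = (sum_tx - n * mean_t * mean_x) / (sum_t2 - n * mean_t ** 2)  # slope
a = mean_x - b * mean_t                                          # intercept
print(round(a, 1), round(b, 1))  # 54.9 1.7
```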
SMOOTHING A LINEAR TREND: an expanded version of the simple smoothing model.
Simple exponential smoothing continually adjusts the forecasts according to the errors. Smoothing a linear trend works similarly, except that the errors are used to continually adjust two things:
- the intercept (level) of the trend line
- the slope of the trend line
The adjustments are made with a sequence of equations repeated each period:
Smoothed level at the end of t: St = Ft + α1·et
Smoothed trend at the end of t: Tt = Tt-1 + α2·et
α1 = smoothing parameter for the level, which controls the rate at which the level is adjusted
α2 = smoothing parameter for the trend, which controls the rate at which the trend is adjusted
(We need two parameters because the trend in any period is usually very small compared with the level.) The best values are found through experimentation.
Ft+1 = St + Tt, just like Ft = a + bt
Steps (exponential smoothing with linear trend):
- Run a time series regression on the warm-up sample; the intercept (a) and slope (b) of the regression are always used as the initial values of S and T (as S0 and T0)
- Choose α1 and α2 (here α1 = 0.1 and α2 = 0.01)
- Evaluate F1 = S0 + T0
- Evaluate et = Xt – Ft (i.e. e1)
- Evaluate St = Ft + α1·et
- Evaluate Tt = Tt-1 + α2·et
- Evaluate Ft+1 = St + Tt
Smoothing picks up the changing trend in the second half of the data.
The general forecasting equation for exponential smoothing of a linear trend:
Ft+m = St + m·Tt, where m = the number of periods into the future we want to forecast.
F13 = S12 + (1)·T12
F14 = S12 + (2)·T12
Exponential smoothing with a non-linear (damped) trend:
St = Ft + α1·et
Tt = φ·Tt-1 + α2·et
Ft+1 = St + φ·Tt
where φ is the non-linear trend modification parameter, with a value other than 1.
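A small sketch of how damping flattens long-range forecasts. The value φ = 0.8 is assumed for illustration (the notes give no example), and the m-step-ahead form F(t+m) = St + (φ + φ² + ... + φ^m)·Tt is the standard extension of the one-step equation Ft+1 = St + φ·Tt above:

```python
# Damped-trend forecast function sketch; phi = 0.8 is an assumed
# illustration value, and S, T are the end-of-warm-up level and trend
# from the linear-trend example below.
phi = 0.8           # damping parameter (assumed, 0 < phi < 1)
S, T = 82.6, 2.3    # level and trend at the end of period 12

def damped_forecast(S, T, phi, m):
    # F_{t+m} = S_t + (phi + phi^2 + ... + phi^m) * T_t
    return S + sum(phi ** k for k in range(1, m + 1)) * T

print(round(damped_forecast(S, T, phi, 1), 1))   # 84.4
print(round(damped_forecast(S, T, phi, 12), 1))  # approaches S + T*phi/(1-phi)
```

With φ < 1 the forecasts approach the horizontal asymptote S + T·φ/(1 – φ), which is exactly the "trend becomes a horizontal line in a later stage" behaviour described for the damped trend.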
Exponential smoothing with a linear trend (α1 = 0.10, α2 = 0.01)

t    Data (Xt)   Forecast (Ft)   Error et = Xt - Ft   Level St = Ft + α1·et   Trend Tt = Tt-1 + α2·et
0                                                     S0 = 54.9               T0 = 1.7
1    60.0        56.6             3.4                 56.9                    1.7
2    55.0        58.6            -3.6                 58.2                    1.7
3    64.0        59.9             4.1                 60.3                    1.7
4    51.0        62.0           -11.0                 60.9                    1.6
5    69.0        62.5             6.5                 63.2                    1.7
6    66.0        64.9             1.1                 65.0                    1.7
7    83.0        66.7            16.3                 68.3                    1.9
8    90.0        70.2            19.8                 72.2                    2.1
9    76.0        74.3             1.7                 74.5                    2.1
10   95.0        76.6            18.4                 78.4                    2.3
11   72.0        80.7            -8.7                 79.8                    2.2
12   88.0        82.0             6.0                 82.6                    2.3
13               84.9

Sample calculations:
F1 = S0 + T0 = 54.9 + 1.7 = 56.6
S1 = 56.6 + 0.1(3.4) = 56.9;  T1 = 1.7 + 0.01(3.4) = 1.7
F2 = 56.9 + 1.7 = 58.6
S2 = 58.6 + 0.1(-3.6) = 58.2;  T2 = 1.7 + 0.01(-3.6) = 1.7
F3 = 58.2 + 1.7 = 59.9
MSE(Periods 7 – 12) = 185.1
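The whole computation can be sketched as a loop. The table rounds every value to one decimal at each step, so carrying full precision gives slightly different end values (F13 ≈ 84.8 vs 84.9, MSE ≈ 184.6 vs 185.1):

```python
# Exponential smoothing with a linear trend for the Computerland data,
# starting from S0 = 54.9, T0 = 1.7, alpha1 = 0.1, alpha2 = 0.01.
# Full precision is carried (the notes round each step), so F13 and
# the MSE differ slightly from the hand-computed table.
data = [60, 55, 64, 51, 69, 66, 83, 90, 76, 95, 72, 88]  # X1..X12
a1, a2 = 0.1, 0.01
S, T = 54.9, 1.7  # initial level and trend from the warm-up regression

errors = {}
F = S + T                       # F1 = S0 + T0
for t, x in enumerate(data, start=1):
    e = x - F                   # e_t = X_t - F_t
    errors[t] = e
    S = F + a1 * e              # S_t = F_t + alpha1 * e_t
    T = T + a2 * e              # T_t = T_{t-1} + alpha2 * e_t
    F = S + T                   # F_{t+1} = S_t + T_t

mse = sum(errors[t] ** 2 for t in range(7, 13)) / 6
print(round(F, 1), round(mse, 1))  # 84.8 184.6
```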
DECOMPOSITION OF SEASONAL DATA
SEASONAL: a time series pattern which repeats itself, at least approximately, each year.
For seasonal data:
- The peaks and troughs should be consistent
- There should be an explanation (weather, holidays)
DECOMPOSITION: separation of a time series into its component parts (seasonality, trend, cycle, and randomness).
DESEASONALIZED (SEASONALLY ADJUSTED) DATA: the data after removing the seasonal pattern. Forecasting is done on the deseasonalized data, and the seasonal pattern is then put back to get the seasonalized forecast.
DESEASONALIZED DATA = ACTUAL DATA / SEASONAL INDEX
SEASONAL INDEX = ACTUAL DATA / AVERAGE FOR THE YEAR
If data are monthly, there are 12 seasonal indices; if data are quarterly, 4 seasonal indices.
THE WOLFPACK FORECASTING PROBLEM: choosing a warm-up sample (12) and a forecasting sample (4).
9 STEPS IN FORECASTING SEASONAL DATA (MOVING AVERAGE APPROACH)
1. Compute a moving average based on the length of seasonality. The average should be placed next to the center period, or immediately next to the center point. If there are N periods of quarterly data there are N-3 moving averages; if there are N periods of monthly data, N-11 moving averages.
2. Divide the actual data by the corresponding moving average (to get approximate indices).
3. Average the ratios to eliminate as much randomness as possible.
4. Compute a normalization factor to adjust the mean ratios so they sum to 4 (quarterly data) or 12 (monthly data).
5. Multiply the mean ratios by the normalization factor to get the final seasonal indices.
6. Deseasonalize the data by dividing by the seasonal index.
7. Forecast the deseasonalized data (simple exponential smoothing here, with α = 0.1 minimizing the MSE for the deseasonalized data in the warm-up sample).
8. Seasonalize the forecasts from step 7 to get the final forecasts.
9. Compute the MSE (using the seasonal errors) for the forecasting sample.
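The nine steps can be sketched for quarterly data. The series below is hypothetical (the Wolfpack data are not reproduced in the notes), and the alignment of each moving average with the period just past its center is one reasonable reading of step 1:

```python
# Sketch of the decomposition steps on hypothetical quarterly data
# (three years of a series with a strong second-quarter peak).
data = [10, 20, 14, 8, 12, 24, 16, 10, 14, 28, 18, 12]  # 12 quarters
L = 4  # length of seasonality (quarterly)

# Step 1: moving averages of length L; N periods give N-3 of them.
ma = [sum(data[i:i + L]) / L for i in range(len(data) - L + 1)]

# Step 2: ratio of actual data to the corresponding moving average,
# aligning each average with the observation just past its center.
ratios = {q: [] for q in range(L)}
for i, m in enumerate(ma):
    season = (i + 2) % L
    ratios[season].append(data[i + 2] / m)

# Step 3: average the ratios per season to reduce randomness.
mean_ratio = {q: sum(r) / len(r) for q, r in ratios.items()}

# Steps 4-5: normalize the mean ratios so the indices sum to L.
norm = L / sum(mean_ratio.values())
index = {q: mean_ratio[q] * norm for q in range(L)}

# Step 6: deseasonalize by dividing by the seasonal index.
deseas = [x / index[i % L] for i, x in enumerate(data)]

# Step 7: simple exponential smoothing on the deseasonalized data.
alpha, F = 0.1, deseas[0]
for x in deseas:
    F += alpha * (x - F)        # F is now the deseasonalized forecast

# Step 8: reseasonalize the forecast for the next quarter.
final_forecast = F * index[len(data) % L]
print(round(sum(index.values()), 1))  # 4.0 -- indices sum to L
```

Step 9 (the MSE of the seasonalized forecasts) is omitted here because the hypothetical series has no held-out forecasting sample; with real data it is computed exactly as in the earlier examples.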