CHAPTER 9: Input Modeling

Purpose & Overview
Input models provide the driving force for a simulation model.
The quality of the output is no better than the quality of inputs.
In this chapter, we will discuss the 4 steps of input model development:
o Collect data from the real system
o Identify a probability distribution to represent the input process
o Choose parameters for the distribution
o Evaluate the chosen distribution and parameters for goodness of fit
Data Collection

One of the biggest tasks in solving a real problem. GIGO – garbage-in-garbage-out.
Suggestions that may enhance and facilitate data collection:
o Plan ahead: begin with a practice or pre-observing session; watch for unusual circumstances
o Analyze the data as it is being collected: check adequacy
o Combine homogeneous data sets, e.g. successive time periods, or the same time period on successive days
o Be aware of data censoring: the quantity is not observed in its entirety; danger of leaving out long process times
o Check for relationships between variables, e.g. build a scatter diagram
o Check for autocorrelation
o Collect input data, not performance data

Identifying the Distribution
o Histograms
o Selecting families of distributions
o Parameter estimation
o Goodness-of-fit tests
o Fitting a non-stationary process
1. HISTOGRAMS

A frequency distribution or histogram is useful in determining the shape of a distribution.
The number of class intervals depends on:
o The number of observations
o The dispersion of the data
o Suggested: the square root of the sample size
For continuous data:
o Corresponds to the probability density function of a theoretical distribution
For discrete data:
o Corresponds to the probability mass function
If few data points are available: combine adjacent cells to eliminate the ragged appearance of the histogram.

Vehicle Arrival Example: the number of vehicles arriving at an intersection between 7 am and 7:05 am was monitored for 100 random workdays. There are ample data, so the histogram may have a cell for each possible value in the data range:
Arrivals per Period    Frequency
0                      12
1                      10
2                      19
3                      17
4                      10
5                      8
6                      7
7                      5
8                      5
9                      3
10                     3
11                     1
The same data can be displayed with different interval sizes.
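The histogram construction above can be sketched in code. This is a minimal illustration using only the standard library; the counts are taken from the frequency table, and the square-root rule gives the suggested number of class intervals.

```python
import math
from collections import Counter

# Vehicle-arrival data reconstructed from the frequency table above:
# value -> observed frequency over 100 workdays
freqs = {0: 12, 1: 10, 2: 19, 3: 17, 4: 10, 5: 8,
         6: 7, 7: 5, 8: 5, 9: 3, 10: 3, 11: 1}
data = [x for x, f in freqs.items() for _ in range(f)]

n = len(data)                   # 100 observations
k = round(math.sqrt(n))         # square-root rule: about 10 class intervals

# For discrete data each value gets its own cell (there are ample data here)
hist = Counter(data)
for value in sorted(hist):
    print(f"{value:2d} | {'*' * hist[value]}")
```

For continuous data one would instead divide the data range into k equal-width intervals and count observations per interval.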
2. SELECTING THE FAMILY OF DISTRIBUTIONS
A family of distributions is selected based on:
o The context of the input variable
o The shape of the histogram
Frequently encountered distributions:
o Easier to analyze: exponential, normal and Poisson
o Harder to analyze: beta, gamma and Weibull
Use the physical basis of the distribution as a guide, for example:
o Binomial: # of successes in n trials
o Poisson: # of independent events that occur in a fixed amount of time or space
o Normal: distribution of a process that is the sum of a number of component processes
o Exponential: time between independent events, or a process time that is memoryless
o Weibull: time to failure for components
o Discrete or continuous uniform: models complete uncertainty
o Triangular: a process for which only the minimum, most likely, and maximum values are known
o Empirical: resample from the actual data collected
Remember the physical characteristics of the process:
o Is the process naturally discrete or continuous valued? Is it bounded?
No "true" distribution exists for any stochastic input process.
o Goal: obtain a good approximation
Quantile-Quantile Plots

A Q-Q plot is a useful tool for evaluating distribution fit.
If X is a random variable with cdf F, then the q-quantile of X is the γ such that

    F(γ) = P(X ≤ γ) = q,    for 0 < q < 1

When F has an inverse, γ = F⁻¹(q).
Let {x_i, i = 1, 2, ..., n} be a sample of data from X and {y_j, j = 1, 2, ..., n} be the observations in ascending order, where j is the ranking or order number. The plot of y_j versus F⁻¹((j − 0.5)/n) is:
o Approximately a straight line if F is a member of an appropriate family of distributions
o A straight line with slope 1 if F is a member of an appropriate family of distributions with appropriate parameter values

Example: Check whether the door installation times follow a normal distribution.
j    Value      j    Value      j     Value
1    99.55      6    99.98      11    100.26
2    99.56      7    100.02     12    100.27
3    99.62      8    100.06     13    100.33
4    99.65      9    100.17     14    100.41
5    99.79      10   100.23     15    100.47

The observations are now ordered from smallest to largest; y_j are plotted versus F⁻¹((j − 0.5)/n), where F has a normal distribution with the sample mean (99.99 sec) and sample variance (0.2832² sec²).
Example (continued): Check whether the door installation times follow a normal distribution.
[Figure: Q-Q plot forms a straight line, supporting the hypothesis of a normal distribution]
[Figure: superimposed density function of the normal distribution]
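The Q-Q pairs for the door example can be computed with the standard library's NormalDist. This is a sketch: the fitted mean and standard deviation are computed from the listed sample rather than copied from the slide.

```python
from statistics import NormalDist, mean, stdev

# Door installation times (seconds) from the table above
times = [99.55, 99.56, 99.62, 99.65, 99.79, 99.98, 100.02, 100.06,
         100.17, 100.23, 100.26, 100.27, 100.33, 100.41, 100.47]

n = len(times)
y = sorted(times)                      # order statistics y_(j)
fitted = NormalDist(mu=mean(times), sigma=stdev(times))

# Q-Q pairs: plot y_(j) against F^-1((j - 0.5) / n)
pairs = [(fitted.inv_cdf((j - 0.5) / n), y[j - 1]) for j in range(1, n + 1)]
for q, obs in pairs:
    print(f"theoretical {q:7.2f}   observed {obs:7.2f}")
```

If the normal family is appropriate, the printed pairs should be roughly equal, i.e. the plotted points should lie near a line of slope 1.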
Consider the following while evaluating the linearity of a Q-Q plot:
o The observed values never fall exactly on a straight line
o The ordered values are ranked and hence not independent, so it is unlikely for the points to be scattered evenly about the line
o The variance of the extremes is higher than that of the middle; linearity of the points in the middle of the plot is more important
A Q-Q plot can also be used to check homogeneity:
o Check whether a single distribution can represent both sample sets
o Plot the ordered values of the two data samples against each other
3. PARAMETER ESTIMATION
Next step after selecting a family of distributions
If the observations in a sample of size n are X1, X2, ..., Xn (discrete or continuous), the sample mean and variance are:

    X̄ = (Σ_{i=1}^{n} X_i) / n        S² = (Σ_{i=1}^{n} X_i² − n X̄²) / (n − 1)
If the data are discrete and have been grouped in a frequency distribution:

    X̄ = (Σ_{j=1}^{k} f_j X_j) / n        S² = (Σ_{j=1}^{k} f_j X_j² − n X̄²) / (n − 1)

where k is the number of distinct values and f_j is the observed frequency of value X_j.

When raw data are unavailable (data are grouped into class intervals), the approximate sample mean and variance are:
    X̄ = (Σ_{j=1}^{c} f_j m_j) / n        S² = (Σ_{j=1}^{c} f_j m_j² − n X̄²) / (n − 1)

where f_j is the observed frequency in the jth class interval, m_j is the midpoint of the jth interval, and c is the number of class intervals.

A parameter is an unknown constant, but an estimator is a statistic.

Vehicle Arrival Example (continued): the frequency table in the histogram example above (Table 9.1 in the book) can be analyzed to obtain:
    n = 100, f_1 = 12, X_1 = 0, f_2 = 10, X_2 = 1, ...,
    Σ_{j=1}^{k} f_j X_j = 364,    Σ_{j=1}^{k} f_j X_j² = 2080

The sample mean and variance are:

    X̄ = 364 / 100 = 3.64
    S² = (2080 − 100 · 3.64²) / 99 = 7.63

The histogram suggests X to have a Poisson distribution.
However, note that the sample mean is not equal to the sample variance.
o Reason: each estimator is a random variable and is not perfect.
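The grouped-data estimators above can be verified with a few lines of code; this sketch reuses the frequency table from the histogram example.

```python
# Grouped-data estimators for the vehicle-arrival example:
# Xbar = (sum f_j X_j) / n,  S^2 = (sum f_j X_j^2 - n Xbar^2) / (n - 1)
freqs = {0: 12, 1: 10, 2: 19, 3: 17, 4: 10, 5: 8,
         6: 7, 7: 5, 8: 5, 9: 3, 10: 3, 11: 1}

n = sum(freqs.values())                               # 100
sum_fx = sum(f * x for x, f in freqs.items())         # 364
sum_fx2 = sum(f * x * x for x, f in freqs.items())    # 2080
xbar = sum_fx / n                                     # 3.64
s2 = (sum_fx2 - n * xbar * xbar) / (n - 1)            # about 7.63

print(f"mean = {xbar:.2f}, variance = {s2:.2f}")
```

Note the gap between 3.64 and 7.63: for a true Poisson sample these estimates would tend to agree, but each is a random variable.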
4. GOODNESS-OF-FIT TESTS

Conduct hypothesis testing on the input data distribution using:
o Kolmogorov-Smirnov test
o Chi-square test
No single correct distribution exists in a real application.
o If very little data are available, it is unlikely to reject any candidate distribution
o If a lot of data are available, it is likely to reject all candidate distributions
Chi-Square Test

Intuition: compare the histogram of the data to the shape of the candidate density or mass function.
Valid for large sample sizes when parameters are estimated by maximum likelihood.
By arranging the n observations into a set of k class intervals or cells, the test statistic is:

    χ₀² = Σ_{i=1}^{k} (O_i − E_i)² / E_i

where O_i is the observed frequency and E_i = n·p_i is the expected frequency, with p_i the theoretical probability of the ith interval (suggested minimum E_i = 5).
The statistic approximately follows the chi-square distribution with k − s − 1 degrees of freedom, where s = # of parameters of the hypothesized distribution estimated by the sample statistics.
The hypotheses of a chi-square test are:
o H0: the random variable, X, conforms to the distributional assumption with the parameter(s) given by the estimate(s)
o H1: the random variable X does not conform
If the distribution tested is discrete and combining adjacent cells is not required (so that E_i > the minimum requirement), each value of the random variable should be a class interval, unless combining is necessary, and

    p_i = p(x_i) = P(X = x_i)
If the distribution tested is continuous:

    p_i = ∫_{a_{i−1}}^{a_i} f(x) dx = F(a_i) − F(a_{i−1})

where a_{i−1} and a_i are the endpoints of the ith class interval, f(x) is the assumed pdf, and F(x) is the assumed cdf.

Recommended number of class intervals (k):

Sample Size, n    Number of Class Intervals, k
20                Do not use the chi-square test
50                5 to 10
100               10 to 20
> 100             n^(1/2) to n/5

o Caution: different groupings of the data (i.e., different k) can affect the hypothesis testing result.
Vehicle Arrival Example (continued):
H0: the random variable is Poisson distributed.
H1: the random variable is not Poisson distributed.

    E_i = n·p(x_i) = n · e^(−α) α^(x_i) / x_i!

x_i     Observed Frequency, O_i    Expected Frequency, E_i    (O_i − E_i)²/E_i
0       12                         2.6                        7.87 (combined)
1       10                         9.6
2       19                         17.4                       0.15
3       17                         21.1                       0.80
4       10                         19.2                       4.41
5       8                          14.0                       2.57
6       7                          8.5                        0.26
7       5                          4.4                        11.62 (combined)
8       5                          2.0
9       3                          0.8
10      3                          0.3
≥ 11    1                          0.1
Total   100                        100.0                      27.68

Cells are combined because of the minimum-E_i requirement: cells 0 and 1, and all cells from 7 upward.

The degrees of freedom are k − s − 1 = 7 − 1 − 1 = 5; since

    χ₀² = 27.68 > χ²(0.05, 5) = 11.1

the hypothesis is rejected at the 0.05 level of significance.
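The whole chi-square computation above, including the cell combining, can be sketched in code. This is an illustrative implementation (the greedy `combine` helper is my own, not from the book), using α = 3.64 from the parameter-estimation step.

```python
import math

# Chi-square GOF sketch for the vehicle-arrival example (H0: Poisson, alpha = 3.64)
alpha = 3.64
observed = [12, 10, 19, 17, 10, 8, 7, 5, 5, 3, 3, 1]   # x = 0..10, then >= 11
n = sum(observed)

# Poisson pmf for x = 0..10; the last cell takes the remaining tail probability
p = [math.exp(-alpha) * alpha**x / math.factorial(x) for x in range(11)]
p.append(1.0 - sum(p))
expected = [n * pi for pi in p]

def combine(obs, exp, minimum=5.0):
    """Merge adjacent cells until every expected count is at least `minimum`."""
    o, e = obs[:], exp[:]
    i = 0
    while i < len(e):
        if e[i] < minimum and len(e) > 1:
            j = i + 1 if i + 1 < len(e) else i - 1   # merge into a neighbour
            o[j] += o[i]; e[j] += e[i]
            del o[i], e[i]
        else:
            i += 1
    return o, e

obs_c, exp_c = combine(observed, expected)
chi2 = sum((o - e) ** 2 / e for o, e in zip(obs_c, exp_c))
df = len(obs_c) - 1 - 1                # k - s - 1, s = 1 estimated parameter
print(f"chi-square = {chi2:.2f}, df = {df}")
```

The combining collapses the 12 original cells to k = 7, matching the table; the statistic lands near 27.7 (slightly off the table value, which used rounded expected frequencies) and exceeds the 11.1 critical value, so H0 is rejected.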
Kolmogorov-Smirnov Test

Intuition: formalizes the idea behind examining a Q-Q plot.
Recall from Chapter 7.4.1:
o The test compares the continuous cdf, F(x), of the hypothesized distribution with the empirical cdf, S_N(x), of the N sample observations.
o Based on the maximum difference statistic (tabulated in Table A.8): D = max |F(x) − S_N(x)|
A more powerful test, particularly useful when:
o Sample sizes are small
o No parameters have been estimated from the data
When parameter estimates have been made:
o Critical values in Table A.8 are biased: too large
o The test is more conservative, i.e., smaller Type I error than specified

p-Values and "Best Fits"

p-value for the test statistic:
o The significance level at which one would just reject H0 for the given test statistic value
o A measure of fit: the larger, the better
o Large p-value: good fit; small p-value: poor fit
Vehicle Arrival Example (cont.):
o H0: data are Poisson
o Test statistic: χ₀² = 27.68, with 5 degrees of freedom
o p-value = 0.00004, meaning we would reject H0 at the 0.00004 significance level; hence Poisson is a poor fit
Many software packages use the p-value as the ranking measure to automatically determine the "best fit". Things to be cautious about:
o Software may not know about the physical basis of the data; the distribution families it suggests may be inappropriate
o Close conformance to the data does not always lead to the most appropriate input model
o The p-value does not say much about where the lack of fit occurs
Recommended: always inspect the automatic selection using graphical methods.
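The K-S statistic D = max |F(x) − S_N(x)| can be computed directly, since the empirical cdf jumps at each order statistic. This sketch tests the door times against a fully specified N(100, 0.2832²) — the hypothesized mean of 100 sec is an assumption here, chosen so that no parameters are estimated from the data and Table A.8 would apply directly.

```python
from statistics import NormalDist

# Door installation times (seconds), ordered
times = sorted([99.55, 99.56, 99.62, 99.65, 99.79, 99.98, 100.02, 100.06,
                100.17, 100.23, 100.26, 100.27, 100.33, 100.41, 100.47])
N = len(times)
F = NormalDist(mu=100.0, sigma=0.2832).cdf   # hypothesized cdf (no estimation)

# The empirical cdf S_N steps from (j-1)/N to j/N at the jth order statistic,
# so the maximum gap occurs at one side of some step
D = max(max(F(x) - (j - 1) / N, j / N - F(x)) for j, x in enumerate(times, 1))
print(f"D = {D:.4f}")
```

D would then be compared against the Table A.8 critical value for N = 15; had the mean and variance been estimated from these same data, that critical value would be too large and the test too conservative.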
5. FITTING A NON-STATIONARY POISSON PROCESS

Fitting an NSPP to arrival data is difficult; possible approaches:
o Fit a very flexible model with lots of parameters, or
o Approximate a constant arrival rate over some basic interval of time, but vary it from time interval to time interval (our focus here)
Suppose we need to model arrivals over time [0, T]; this approach is the most appropriate when we can:
o Observe the time period repeatedly, and
o Count arrivals / record arrival times
The estimated arrival rate during the ith time period is:

    λ̂(t) = (1 / (n Δt)) Σ_{j=1}^{n} C_ij

where n = # of observation periods, Δt = the time interval length, and C_ij = # of arrivals during the ith time interval on the jth observation period.
Example: Divide a 10-hour business day [8am, 6pm] into k = 20 equal intervals whose length Δt = ½ hour, and observe over n = 3 days:

Time Period     Number of Arrivals         Estimated Arrival Rate
                Day 1   Day 2   Day 3      (arrivals/hr)
8:00 - 8:30     12      14      10         24
8:30 - 9:00     23      26      32         54
9:00 - 9:30     27      18      32         52
9:30 - 10:00    20      13      12         30

For instance, (1/3)(0.5)⁻¹(23 + 26 + 32) = 54 arrivals/hour.
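The rate estimator λ̂(t) = (1/(n Δt)) Σ_j C_ij can be applied to the three observed days in a few lines:

```python
# NSPP rate estimation for the example above: dt = 0.5 hour, n = 3 days
dt, n_days = 0.5, 3
counts = {                      # interval -> arrivals on day 1, 2, 3
    "8:00-8:30": [12, 14, 10],
    "8:30-9:00": [23, 26, 32],
    "9:00-9:30": [27, 18, 32],
    "9:30-10:00": [20, 13, 12],
}

# lambda_hat(t) = (1 / (n * dt)) * sum over days of C_ij, per interval
rates = {period: sum(c) / (n_days * dt) for period, c in counts.items()}
for period, rate in rates.items():
    print(f"{period}: {rate:.1f} arrivals/hour")
```

The resulting piecewise-constant rate function is what a simulation would use to drive a non-stationary Poisson arrival process (e.g. via thinning).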
Selecting Model without Data

If data are not available, some possible sources of information about the process are:
o Engineering data: often a product or process has performance ratings provided by the manufacturer, or company rules specify time or production standards
o Expert opinion: people who are experienced with the process or similar processes can often provide optimistic, pessimistic and most-likely times, and they may know the variability as well
o Physical or conventional limitations: physical limits on performance, limits or bounds that narrow the range of the input process
o The nature of the process
The uniform, triangular, and beta distributions are often used as input models.

Example: Production planning simulation.
o Input of the sales volume of various products is required; the salesperson of product XYZ says that:
  No fewer than 1,000 units and no more than 5,000 units will be sold.
  Given her experience, she believes there is a 90% chance of selling more than 2,000 units, a 25% chance of selling more than 2,500 units, and only a 1% chance of selling more than 4,500 units.
o Translating this information into a cumulative probability of being less than or equal to those goals for simulation input:

i    Interval (Sales)       Cumulative Frequency, c_i
1    1000 ≤ x ≤ 2000        0.10
2    2000 < x ≤ 3000        0.75
3    3000 < x ≤ 4000        0.99
4    4000 < x ≤ 5000        1.00

Multivariate and Time-Series Input Models
Multivariate:
o For example, lead time and annual demand for an inventory model: an increase in demand results in an increase in lead time, hence the variables are dependent.
Time-series:
o For example, times between arrivals of orders to buy and sell stocks: buy and sell orders tend to arrive in bursts, hence times between arrivals are dependent.
Covariance and Correlation

Consider the model that describes the relationship between X1 and X2:

    (X1 − μ1) = β (X2 − μ2) + ε

where ε is a random variable with mean 0 that is independent of X2.
o β = 0: X1 and X2 are statistically independent
o β > 0: X1 and X2 tend to be above or below their means together
o β < 0: X1 and X2 tend to be on opposite sides of their means

Covariance between X1 and X2:

    cov(X1, X2) = E[(X1 − μ1)(X2 − μ2)] = E(X1·X2) − μ1μ2

where cov(X1, X2) = 0, < 0, or > 0 exactly when β = 0, < 0, or > 0, respectively.

Correlation between X1 and X2 (values between −1 and 1):

    ρ = corr(X1, X2) = cov(X1, X2) / (σ1σ2)

where corr(X1, X2) = 0, < 0, or > 0 exactly when β = 0, < 0, or > 0, respectively. The closer ρ is to −1 or 1, the stronger the linear relationship between X1 and X2.

A time series is a sequence of random variables X1, X2, X3, ... that are identically distributed (same mean and variance) but dependent.
o cov(X_t, X_{t+h}) is the lag-h autocovariance
o corr(X_t, X_{t+h}) is the lag-h autocorrelation
o If the autocovariance value depends only on h and not on t, the time series is covariance stationary
Multivariate Input Models

If X1 and X2 are normally distributed, dependence between them can be modeled by the bivariate normal distribution with μ1, μ2, σ1², σ2² and correlation ρ.
o To estimate μ1, μ2, σ1², σ2², see "Parameter Estimation" above (Section 9.3.2 in the book)
o To estimate ρ, suppose we have n independent and identically distributed pairs (X11, X21), (X12, X22), ..., (X1n, X2n); then:

    côv(X1, X2) = (1/(n − 1)) Σ_{j=1}^{n} (X1j − X̄1)(X2j − X̄2)
                = (1/(n − 1)) ( Σ_{j=1}^{n} X1j·X2j − n X̄1 X̄2 )

    ρ̂ = côv(X1, X2) / (σ̂1 σ̂2)

where σ̂1 and σ̂2 are the sample standard deviations.
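The estimator ρ̂ can be computed directly from paired observations. The pairs below are made-up illustration data (imagine lead time in days and annual demand), not from the book:

```python
from statistics import mean, stdev

# Hypothetical paired observations (X1j, X2j), j = 1..n
x1 = [6.5, 4.3, 6.9, 6.0, 6.9, 6.9, 5.8, 7.3, 4.5, 6.3]   # e.g. lead time
x2 = [103, 83, 116, 97, 112, 104, 106, 109, 92, 96]       # e.g. annual demand

n = len(x1)
m1, m2 = mean(x1), mean(x2)

# cov_hat = (1/(n-1)) * sum (X1j - X1bar)(X2j - X2bar)
cov = sum((a - m1) * (b - m2) for a, b in zip(x1, x2)) / (n - 1)
rho = cov / (stdev(x1) * stdev(x2))     # rho_hat = cov_hat / (s1 * s2)
print(f"cov = {cov:.2f}, rho = {rho:.2f}")
```

A positive ρ̂ here would support modeling the pair with a bivariate normal whose correlation parameter is set to ρ̂.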
Time-Series Input Models

If X1, X2, X3, ... is a sequence of identically distributed, but dependent and covariance-stationary random variables, then we can represent the process with models such as:
o Autoregressive order-1 model, AR(1)
o Exponential autoregressive order-1 model, EAR(1)
Both have the characteristic that:

    ρ_h = corr(X_t, X_{t+h}) = ρ^h,    for h = 1, 2, ...

i.e., the lag-h autocorrelation decreases geometrically as the lag increases, so observations far apart in time are nearly independent.
AR(1) Time-Series Input Models

Consider the time-series model:

    X_t = μ + φ(X_{t−1} − μ) + ε_t,    for t = 2, 3, ...

where ε2, ε3, ... are i.i.d. normally distributed with mean με = 0 and variance σε².
If X1 is chosen appropriately, then:
o X1, X2, ... are normally distributed with mean μ and variance σε² / (1 − φ²)
o Autocorrelation ρ_h = φ^h
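Generating such a series is straightforward; a minimal sketch (the parameter values are arbitrary illustrations), with X1 drawn from the stationary distribution so the whole series has the properties above:

```python
import random

def ar1(mu, phi, sigma_eps, n, rng=random):
    """Generate a stationary AR(1) series X_t = mu + phi*(X_{t-1} - mu) + eps_t."""
    # X_1 ~ N(mu, sigma_eps^2 / (1 - phi^2)): the stationary distribution
    x = rng.gauss(mu, sigma_eps / (1 - phi**2) ** 0.5)
    series = [x]
    for _ in range(n - 1):
        x = mu + phi * (x - mu) + rng.gauss(0, sigma_eps)
        series.append(x)
    return series

random.seed(7)
series = ar1(mu=10.0, phi=0.7, sigma_eps=1.0, n=1000)
```

With φ = 0.7 the lag-1 autocorrelation is 0.7, the lag-2 autocorrelation 0.49, and so on, decaying geometrically.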
To estimate φ, μ, σε²:

    μ̂ = X̄,    σ̂ε² = σ̂²(1 − φ̂²),    φ̂ = côv(X_t, X_{t+1}) / σ̂²

where côv(X_t, X_{t+1}) is the lag-1 autocovariance.

EAR(1) Time-Series Input Models
Consider the time-series model:

    X_t = φ X_{t−1}            with probability φ
    X_t = φ X_{t−1} + ε_t      with probability 1 − φ

for t = 2, 3, ..., where ε2, ε3, ... are i.i.d. exponentially distributed with mean με = 1/λ, and 0 ≤ φ < 1.
If X1 is chosen appropriately, then:
o X1, X2, ... are exponentially distributed with mean 1/λ
o Autocorrelation ρ_h = φ^h; only positive autocorrelation is allowed
To estimate φ, λ:

    λ̂ = 1 / X̄,    φ̂ = ρ̂ = côv(X_t, X_{t+1}) / σ̂²
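An EAR(1) generator, together with moment-based estimates recovered from the generated series, can be sketched as follows. The estimators used (λ̂ = 1/X̄ and φ̂ = lag-1 sample autocorrelation) follow the same pattern as the AR(1) case and are assumptions of this sketch:

```python
import random
from statistics import mean, variance

def ear1(lam, phi, n, rng=random):
    """Generate an EAR(1) series; marginals stay Exp(lambda)."""
    x = rng.expovariate(lam)            # X_1 ~ Exp(lambda)
    series = [x]
    for _ in range(n - 1):
        x = phi * x                     # X_t = phi * X_{t-1} ...
        if rng.random() >= phi:         # ... plus eps_t with probability 1 - phi
            x += rng.expovariate(lam)
        series.append(x)
    return series

random.seed(42)
xs = ear1(lam=0.5, phi=0.6, n=5000)

xbar = mean(xs)
lam_hat = 1 / xbar                      # mean of Exp(lambda) is 1/lambda
lag1_cov = sum((a - xbar) * (b - xbar)
               for a, b in zip(xs, xs[1:])) / (len(xs) - 1)
phi_hat = lag1_cov / variance(xs)       # lag-1 autocorrelation estimate
print(f"lambda_hat = {lam_hat:.2f}, phi_hat = {phi_hat:.2f}")
```

The recovered estimates should land near the generating values λ = 0.5 and φ = 0.6; note that every X_t stays positive, consistent with exponential marginals.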
Summary

In this chapter, we described the 4 steps in developing input data models:
o Collecting the raw data
o Identifying the underlying statistical distribution
o Estimating the parameters
o Testing for goodness of fit