BITS Pilani, Pilani Campus
Dr Rakhee, Department of Mathematics
MATH F111 & AAOC C111: Probability and Statistics

Chapter 3: Discrete Distributions
Random variables
Random variables are variables whose values are determined by chance. (They can be thought of as arising from the sample space of a random experiment whose outcomes are real numbers.)
Random variables • The outcomes of a random experiment may be numerical or non-numerical (descriptive). For example, when we throw a die the outcomes 1, 2, 3, 4, 5, 6 are numerical values, whereas when we toss a coin we get either a head or a tail, which are non-numerical values. Instead of dealing with the non-numerical values, we can assign numerical values to them, say 1 for head and 0 for tail.
Random Variable • A random variable is a real-valued function which maps the (numerical or non-numerical) sample space of the random experiment (the domain) to real values (the codomain or range). • The mapping must be such that each outcome corresponds to exactly one real value. • That is, a random variable is a single-valued function, never multi-valued: the mapping may be one-to-one or many-to-one, but never one-to-many.
Example Suppose that we toss three coins and consider the sample space associated with the experiment S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}
X: number of tails obtained in the toss of three coins. Hence, X(HHH) = 0, X(TTT) = 3, X(THT) = 2, X(HTT) = 2, X(TTH) = 2, X(THH) = 1, X(HHT) = 1, X(HTH) = 1
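This mapping can be sketched in a few lines of Python (an illustrative sketch; the names S and X simply mirror the slide):

```python
from itertools import product

# Sample space for tossing three coins; X counts the tails in each outcome.
S = ["".join(faces) for faces in product("HT", repeat=3)]
X = {s: s.count("T") for s in S}

print(X["HHH"])  # 0
print(X["TTT"])  # 3
print(X["THT"])  # 2
```

Note that several outcomes (THT, HTT, TTH) map to the same value 2: the mapping is many-to-one, as the definition allows.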
Random Variable
Definition: Let E be a random experiment and S a sample space associated with it. A function X assigning to every element s ∈ S a real number X(s) is called a random variable. Though X is a function, we call it a random variable.

(Schematically: s ∈ S is mapped by X to the real number X(s).)
• Generally, random variables are denoted by capital letters X, Y, Z, etc. (or X1, X2, etc.), whereas their possible values are denoted by the corresponding lowercase letters x, y, z (or x1, x2, etc.) respectively.
Examples
Let E be the experiment of rolling two fair dice, and let X be the random variable defined as the sum of the numbers shown. Then X takes the values 2, 3, 4, …, 11, 12.
P[X=2] = P[(1,1)] = 1/36
P[X=3] = P[(2,1),(1,2)] = 2/36
P[X=4] = P[(2,2),(3,1),(1,3)] = 3/36
P[X=5] = P[(2,3),(3,2),(1,4),(4,1)] = 4/36
…
P[X=11] = P[(6,5),(5,6)] = 2/36
P[X=12] = P[(6,6)] = 1/36
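The probabilities above can be obtained by simply counting outcomes; a small Python check (exact arithmetic via fractions):

```python
from itertools import product
from fractions import Fraction

# pmf of X = sum of two fair dice, built by counting the 36 equally likely outcomes.
pmf = {}
for a, b in product(range(1, 7), repeat=2):
    pmf[a + b] = pmf.get(a + b, 0) + Fraction(1, 36)

print(pmf[2])             # 1/36
print(pmf[5])             # 1/9 (i.e. 4/36)
print(sum(pmf.values()))  # 1
```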
Example Let Z = the time of peak demand for electricity at a power plant. Time is measured continuously, and Z can conceivably assume any value in the interval [0, 24), where 0 means midnight one day and 24 means midnight the next day. In this case, the set of possible values is neither finite nor countably infinite, and hence Z is not a discrete random variable.
Example
The random variable denoting the lifetime of a car, when the car's lifetime is assumed to take any value in some interval [a, c], is likewise not a discrete r.v.
Section 3.1 (page no 81)
T : the turnaround time for a computer job -- not discrete random variable M : the number of meteorites hitting a satellite per day. -- discrete random variable
The uncertain behavior of the random variable is predicted by: (i) Probability density function f (x) (ii) Cumulative distribution function F(x)
Discrete Probability Density
Definition: The density function of a discrete random variable X is defined by

f(x) = P(X = x) for all real x.

From the density, one can evaluate the probability of any subset A of real numbers (i.e. event):

P(A) = Σ f(x), the sum taken over all values x of X with x ∈ A.

Conversely, if we are given the probabilities of all events of a discrete random variable, we get a density function.

The necessary and sufficient conditions for a function f to be a discrete density function are:

f(x) ≥ 0 for all x, and Σ_(all x) f(x) = 1.
The cumulative distribution function F of a discrete random variable X is defined by

F(x) = P(X ≤ x) = Σ_(k ≤ x) f(k)

for any real number x, where f denotes the density of X.

The density and cumulative distribution function determine each other. If the random variable takes integer values, then

f(n) = F(n) − F(n−1).
If F is the CDF (cumulative distribution function) of a discrete random variable X, then

P(a < X ≤ c) = P(X ≤ c) − P(X ≤ a) = F(c) − F(a),

since the set of all x such that X ≤ a is a subset of the set of all x such that X ≤ c.
The cumulative distribution function of a discrete random variable is a step function; its value changes at the points where the density is positive.
Note: F(x) is non-decreasing, and lim_(x→−∞) F(x) = 0, lim_(x→∞) F(x) = 1.
Exercise: Given that f(x) = k/2^x, x = 0, 1, 2, 3, 4, is a density function of a random variable taking only these values, find k.
Exercise: Given that f(x) = k/2^x, x = 0, 1, 2, 3, …, is a density function of a random variable taking only these values,
(a) Find k. (b) Find P(3 < X < 100). (c) Find the cumulative distribution function of X.
Tabular way of defining the density (pdf): tabulate the values of the density at the points where it is nonzero. Tabular way of defining the cumulative distribution function (cdf): tabulate the values of F(x) at the points where the steps change.
Exercise 8
The density for X, the number of holes that can be drilled per bit while drilling into limestone, is given by the following table:

x     1     2     3     4     5     6     7     8
f(x)  0.02  0.03  0.05  0.2   0.4   0.2   0.07  ?
(i) Find f (8), (ii) Find the table for F(x). (iii) Use F to find the probability that a randomly selected bit can be used to drill between three and five holes inclusive.
Example
If the CDF F(x) for a r.v. is given as

x     -1    0     1/3   1/2   2/3   1     3
F(x)  0.1   0.3   0.35  0.4   0.5   0.8   1.0

(i) Find the probability density function f(x) for all x.
(ii) Find P(2 < X ≤ 3) & P(2 ≤ X < 3).
(iii) Find F(-2) & F(4).
(iv) Find P(X < 3) & P(X > 0).

(i) Taking successive differences of the table values of F:

x     -1    0     1/3   1/2   2/3   1     3
f(x)  0.1   0.2   0.05  0.05  0.1   0.3   0.2

f(x) = 0 at all other real numbers x.

(ii) P(2 < X ≤ 3) = F(3) − F(2) = 1.0 − 0.8 = 0.2, while P(2 ≤ X < 3) = 0, since X has no mass at 2 or anywhere in [2, 3).

(iii) Find F(-2) & F(4):
F(-2) = 0 & F(4) = 1.

(iv) Find P(X < 3) & P(X > 0):
P(X < 3) = 0.8 and P(X > 0) = 1 − P(X ≤ 0) = 1 − F(0) = 1 − 0.3 = 0.7.
Exercise 10: It is known that the probability of being able to log on to a computer from a remote terminal at any given time is 0.7. Let X denote the number of attempts that must be made to gain access to the computer.
(a) Find the first 4 terms of the density table.
(b) Find a closed-form expression for f(x).
(c) Find a closed-form expression for F(x).
(d) Use F to find the probability that at most 4 attempts are required to access the computer.
Expectation
The density function of a random variable completely describes the behavior of the random variable. Random variables can also be characterized by the numerical values of three parameters: the mean (µ), the variance (σ2), and the standard deviation (σ).
Example
Consider the roll of a single fair die, and let X denote the number obtained. The possible values for X are 1, 2, 3, 4, 5, 6, and since the die is fair, the probability associated with each value is 1/6. So the density function for X is given by

f(x) = 1/6, x = 1, 2, 3, 4, 5, 6

Suppose we repeat the rolling over and over, recording the value of X on each roll. We ask: what is the theoretical average value of the rolls as the number of rolls approaches infinity? Since the density is symmetric and is known, this average can be found intuitively. As P[X = 1] = P[X = 6] = 1/6, the average of this pair is (1 + 6)/2 = 3.5.
Similarly, P[X = 2] = P[X = 5] = 1/6, with pair average (2 + 5)/2 = 3.5. In the long run the average value is 3.5, so we write E[X] = 3.5.
Expectations
Definition: Let X be a discrete random variable and H(X) a function of X. Then the expected value of H(X), denoted by E(H(X)), is defined by

E(H(X)) = Σ H(x) f(x), the sum taken over every value x of X,

where f(x) is the density of X, provided Σ_x |H(x)| f(x) is finite.
Notes:
1) E[H(X)] can be interpreted as the weighted average value of H(X).
2) If Σ_(all x) |H(x)| f(x) diverges, then E[H(X)] does not exist, irrespective of the convergence of Σ_(all x) H(x) f(x); see Ex. 22.
3) E[X] measures the average value of X; it is called the mean of X and denoted by µX or µ.
4) The distribution is scattered around µ. Thus µ indicates the location of the center of the values of X, and hence is called a location parameter.
Variance and Standard Deviation
The mean does not measure variability. We need a parameter that reflects consistency or the lack of it: a measure that takes a large (small) positive value if the random variable fluctuates, in the sense that it often assumes values far from (close to) its mean.
Variance and Standard deviation Definition : If a discrete random variable X has mean µ, its variance Var(X) or σ2 is defined by Var(X) = E[(X-µ)2]. The standard deviation σ is the nonnegative square root of Var( X).
Notes : 1) Note that Var(X) is always nonnegative, if it exists. 2) Variance measures the dispersion or variability of X. It is large if values of X away from µ have large probability, i.e. values of X are more likely to be spread. This indicates inconsistency or instability of random variable.
Properties of Mean
Theorem: If X is a random variable and c is a real number, then E[c] = c and E[cX] = cE[X].
Proof: E[c] = Σ c f(x) = c Σ f(x) = c(1) = c. E[cX] = Σ c x f(x) = c Σ x f(x) = cE[X].
Ex.: Prove that for reals a, b, E[aX + b] = aE[X] + b.
Properties of Variance
Theorem: Var[X] = E[X²] − (E[X])².
Theorem: For a real number c, Var[c] = 0 and Var[cX] = c²Var[X].
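Both identities can be checked on the fair-die density (exact fractions; c = 3 is an arbitrary choice of ours):

```python
from fractions import Fraction

# Check Var[X] = E[X^2] - (E[X])^2 and Var[cX] = c^2 Var[X] on a fair die.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}
mean = sum(x * p for x, p in pmf.items())                # E[X] = 7/2
var = sum((x - mean) ** 2 * p for x, p in pmf.items())   # E[(X - mu)^2]
second = sum(x * x * p for x, p in pmf.items())          # E[X^2] = 91/6

assert var == second - mean ** 2                         # shortcut formula

c = 3
var_cx = sum((c * x - c * mean) ** 2 * p for x, p in pmf.items())
assert var_cx == c ** 2 * var

print(var)  # 35/12
```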
Exercise 15: The density for X, the number of holes that can be drilled per bit while drilling into limestone, is given by the following table:

x     1     2     3     4     5     6     7     8
f(x)  0.02  0.03  0.05  0.2   0.4   0.2   0.07  0.03
Find E[X], E[X²], Var[X], σX. Find the unit of σX.
Example: Let X be a random variable with density function

f(x) = (1/3)(2/3)^(x−1), x = 1, 2, 3, …
f(x) = 0, otherwise.

Find E(X).

Sol. E(X) = Σ_(x≥1) x (1/3)(2/3)^(x−1) = (1/3) · 1/(1 − 2/3)² = (1/3)(9) = 3.
Section 3.3, page 84, 17
The probability of being able to log on to a computer from a remote terminal at any given time is 0.7. Let X denote the number of attempts that must be made to gain access to the computer. Find E[X]. Can you express E[X] in terms of p?
Section 3.3, page 84, 22
Consider the function f defined by

f(x) = (1/2) 2^(−|x|), x = ±1, ±2, ±3, …

a) Verify that this is the density function for a discrete r.v.
b) Let g(x) = (−1)^(|x|−1) 2^(|x|) / |x|. Show that Σ g(x) f(x) converges.
c) Show that Σ |g(x)| f(x) does not converge.
Ordinary Moments: For any positive integer k, the kth ordinary moment of a discrete random variable X with density f(x) is defined to be E[X^k]. Thus for k = 1 we get the mean. Using the 1st and 2nd ordinary moments, we can evaluate the variance. There is a tool, the moment generating function (m.g.f.), which helps to evaluate all ordinary moments in one go.
Moment generating function
Definition: Let X be any random variable with density f. The m.g.f. for X is denoted by m_X(t) and is given by

m_X(t) = E[e^(tX)]

provided the expectation is finite for all real numbers t in some open interval (−h, h).
Theorem 3.4.2: If m_X(t) is the m.g.f. for a random variable X, then

d^k/dt^k m_X(t) |_(t=0) = E[X^k].
Proof:

e^(tX) = 1 + tX + t²X²/2! + … + tⁿXⁿ/n! + …

Hence

m_X(t) = E[e^(tX)] = E[1 + tX + t²X²/2! + … + tⁿXⁿ/n! + …]
= 1 + tE[X] + t²E[X²]/2! + … + tⁿE[Xⁿ]/n! + …

Differentiating k times,

d^k/dt^k m_X(t) = E[X^k] + tE[X^(k+1)] + … + t^(n−k) E[Xⁿ]/(n−k)! + …

Now put t = 0 to get the result.
Section 3.4, page no. 87, 31
Consider the random variable X whose density is given by:

f(x) = (x − 3)²/5, x = 3, 4, 5

a) Verify that this function is a density for a discrete random variable.
b) Find E[X] directly; that is, evaluate Σ_(all x) x f(x).
c) Find the moment generating function for X.
d) Use the m.g.f. to find E[X].
e) Find E[X²] directly.
f) Use the m.g.f. to find E[X²].
g) Find σ² and σ.
Bernoulli trials A trial which has exactly 2 possible outcomes, success s and failure f, is called a Bernoulli trial. For any random experiment, if we are only interested in the occurrence or non-occurrence of a particular event, we can treat it as a Bernoulli trial. Thus if we throw a die but are interested only in whether the top face shows an even number or not, we can treat it as a Bernoulli trial.
Geometric distribution If we perform a series of identical and independent Bernoulli trials, X = the number of trials required to get the first success is a discrete random variable, known as a geometric random variable. Its probability distribution is called the Geometric distribution.
The sample space of this experiment is S = {s, fs, ffs, fffs, …}. The probability of success on any trial is the same, p. Then

P(X = i) = (1 − p)^(i−1) p for i = 1, 2, …

In fact, the function f is called the density of a geometric distribution with parameter p, 0 < p < 1, if

f(x) = (1 − p)^(x−1) p, x = 1, 2, 3, …
f(x) = 0, otherwise.

(Verify it is a density of a discrete random variable.)

We write q = 1 − p. Then the c.d.f. of the geometric distribution is

F(x) = 1 − q^[x] for any real x > 0, and F(x) = 0 otherwise,

where [x] denotes the greatest integer ≤ x.
Theorem 3.4.1
The m.g.f. of a geometric random variable with parameter p, 0 < p < 1, is

m_X(t) = p e^t / (1 − q e^t), for t < −ln q, where q = 1 − p.

Proof: The density of a geometric distribution is f(x) = q^(x−1) p for x = 1, 2, 3, … and 0 otherwise. By definition,

m_X(t) = E[e^(tX)] = Σ_(x=1)^∞ e^(tx) f(x) = p q^(−1) Σ_(x=1)^∞ (q e^t)^x.

The series on the right is a geometric series with first term q e^t and common ratio q e^t, so

m_X(t) = p q^(−1) · q e^t / (1 − q e^t) = p e^t / (1 − q e^t),

provided |q e^t| < 1. Since the exponential function is positive and 0 < q < 1, this restriction means q e^t < 1, i.e. e^t < 1/q, so

t = ln(e^t) < ln(1/q) = ln 1 − ln q = −ln q.
Theorem 3.4.3
Let X be a geometric random variable with parameter p. Then

E[X] = 1/p and Var[X] = q/p².

Proof: m_X(t) = p e^t / (1 − q e^t), where q = 1 − p. Then

d/dt m_X(t) = p e^t / (1 − q e^t)²,

so

E[X] = d/dt m_X(t) |_(t=0) = p / (1 − q)² = 1/p.

Now take the second derivative at t = 0:

d²/dt² m_X(t) = p e^t (1 + q e^t) / (1 − q e^t)³,

so

E[X²] = d²/dt² m_X(t) |_(t=0) = p(1 + q) / (1 − q)³ = (1 + q)/p².

Thus, with E[X] = 1/p and E[X²] = (1 + q)/p²,

Var[X] = (1 + q)/p² − 1/p² = q/p².
Expectation of Geometric r.v. by Definition

E[X] = Σ_(x=1)^∞ x f(x) = Σ_(x=1)^∞ x p q^(x−1) = p Σ_(x=1)^∞ x q^(x−1)
= p (1 + 2q + 3q² + 4q³ + …)

Sum of the AGP:
S = 1 + 2q + 3q² + 4q³ + …
qS = q + 2q² + 3q³ + …
(1 − q)S = 1 + q + q² + q³ + … = 1/(1 − q), so S = 1/(1 − q)².

∴ E[X] = p · 1/(1 − q)² = p/p² = 1/p.

Similarly,

E[X²] = Σ_(x=1)^∞ x² f(x) = Σ_(x=1)^∞ x² p q^(x−1) = p Σ_(x=1)^∞ x² q^(x−1)
= p (1 + 4q + 9q² + 16q³ + …)
= p (1 + q)/(1 − q)³ = (1 + q)/p².

Var[X] = E[X²] − E[X]² = (1 + q)/p² − 1/p² = q/p².
Exercise 25 The zinc phosphate coating on the threads of steel tubes used in oil and gas wells is critical to their performance. To monitor the coating process, an uncoated metal sample with known outside area is weighed and treated along with the lot of tubing. This sample is then stripped and reweighed. From this it is possible to determine whether or not the proper amount of coating was applied to the tubing.
Assume that the probability that a given lot is unacceptable is 0.05. Let X denote the number of runs conducted to produce an unacceptable lot. Assume that the runs are independent in the sense that the outcome of one run has no effect on that of any other. Verify that X is geometric. What is "success"? What is p? What are the density, E[X], E[X²], σ², and the m.g.f.? Find the probability that the number of runs required to produce an unacceptable lot is at least 3.
Example
In a Video game the player attempts to capture a treasure lying behind one of five doors. The location of treasure varies randomly in such a way that at any given time it is just as likely to be behind one door as any other. When the player knocks on a given door, the treasure is his if it lies behind that door.
Otherwise he must return to his original starting point and approach the doors through a dangerous maze again. If the treasure is captured, the game ends. Let X be the number of trials needed to capture the treasure. Find the average number of trials needed to capture the treasure. Find P(X ≤ 3) and P(X > 3).
Binomial Distribution Let an experiment consist of a fixed number n of Bernoulli trials. Assume all trials are identical and independent. Thus p = the probability of success is the same for each trial. Let X = the number of successes in these n trials. What is P(X = x)?
Example Consider a case in which n = 3; then the sample space is S = {fff, sff, fsf, ffs, ssf, sfs, fss, sss}. Since the trials are independent, the probability assigned to each sample point is found by multiplying. For instance, P(fff) = (1−p)(1−p)(1−p) = (1−p)³ and P(sff) = p(1−p)(1−p) = p(1−p)².
The r.v. X assumes the value 0 only if the experiment results in the outcome fff. That is, P[X = 0] = (1−p)³. However, X assumes the value 1 if exactly one of the three trials is a success (sff, fsf, ffs), so P[X = 1] = 3p(1−p)². Similarly, P[X = 2] = 3p²(1−p) and P[X = 3] = p³.
It is evident that

P[X = x] = c(x) p^x (1−p)^(3−x), x = 0, 1, 2, 3,

where c(x) = C(3, x), the number of ways of choosing which x of the 3 trials are successes.

A discrete random variable X has a binomial distribution with parameters n and p, where n is a positive integer and 0 < p < 1, if its density function is

f(x) = C(n, x) p^x (1−p)^(n−x), x = 0, 1, 2, …, n
f(x) = 0, otherwise.

(Verify it is a density; use the binomial theorem.)
Theorem: Let X be a binomial random variable with parameters n and p. Then
1) The m.g.f. of X is m_X(t) = (q + p e^t)^n, with q = 1 − p.
2) E[X] = np and Var[X] = npq.
Proof:

1) m_X(t) = E[e^(tX)] = Σ_(x=0)^n C(n, x) p^x (1−p)^(n−x) e^(tx)
= Σ_(x=0)^n C(n, x) (p e^t)^x (1−p)^(n−x)
= (q + p e^t)^n, where q = 1 − p.

2) m_X(t) = (q + p e^t)^n. Thus

E[X] = d/dt m_X(t) |_(t=0) = n p e^t (q + p e^t)^(n−1) |_(t=0) = np(q + p)^(n−1) = np.

Also

E[X²] = d²/dt² m_X(t) |_(t=0) = d/dt [n p e^t (q + p e^t)^(n−1)] |_(t=0)
= [n(n−1) p² e^(2t) (q + p e^t)^(n−2) + n p e^t (q + p e^t)^(n−1)] |_(t=0)
= n(n−1) p² + np.

Thus Var[X] = E[X²] − E[X]² = n(n−1)p² + np − n²p² = np − np² = np(1 − p) = npq.
Expectation of Binomial r.v. by Definition

E[X] = Σ_(x=0)^n x C(n, x) p^x (1−p)^(n−x)
= Σ_(x=0)^n x · n!/(x!(n−x)!) · p^x q^(n−x)
= Σ_(x=0)^n x · n!/(x(x−1)!(n−x)!) · p^x q^(n−x).

Since the x = 0 term is zero,

E[X] = Σ_(x=1)^n n!/((x−1)!(n−x)!) · p^x q^(n−x)
= Σ_(x=1)^n n(n−1)!/((x−1)!((n−1)−(x−1))!) · p · p^(x−1) q^((n−1)−(x−1)).

Let s = x − 1; as x runs from 1 to n, s runs from 0 to n − 1:

E[X] = Σ_(s=0)^(n−1) n(n−1)!/(s!(n−1−s)!) · p · p^s q^(n−1−s)
= np Σ_(s=0)^(n−1) C(n−1, s) p^s q^((n−1)−s)
= np (p + q)^(n−1) = np (p + 1 − p)^(n−1) = np.

Expectation of X² by Definition

E[X²] = Σ_(x=0)^n x² C(n, x) p^x (1−p)^(n−x)
= Σ_(x=0)^n [x(x−1) + x] · n!/(x!(n−x)!) · p^x q^(n−x)
= Σ_(x=0)^n x(x−1) · n!/(x!(n−x)!) · p^x q^(n−x) + Σ_(x=0)^n x · n!/(x!(n−x)!) · p^x q^(n−x)
= Σ_(x=0)^n n!/((x−2)!(n−x)!) · p^x q^(n−x) + np.

Since the x = 0 and x = 1 terms of the first sum are zero,

E[X²] = Σ_(x=2)^n n(n−1)(n−2)!/((x−2)!((n−2)−(x−2))!) · p² p^(x−2) q^((n−2)−(x−2)) + np.

Let s = x − 2; as x runs from 2 to n, s runs from 0 to n − 2:

E[X²] = Σ_(s=0)^(n−2) n(n−1)(n−2)!/(s!(n−2−s)!) · p² p^s q^(n−2−s) + np
= n(n−1) p² Σ_(s=0)^(n−2) C(n−2, s) p^s q^((n−2)−s) + np
= n(n−1) p² (p + q)^(n−2) + np
= n(n−1) p² + np.

Var(X) by definition

Var[X] = E[X²] − E[X]²
= n(n−1)p² + np − (np)²
= np − np² = np(1 − p) = npq.
c.d.f. of the binomial distribution It is difficult to write an explicit formula, so values are given in Table I, App. A, pp. 687–691.
From the c.d.f. we can find the density: f(x) = F(x) − F(x−1) for x = 0, 1, 2, …, n.
P(a ≤ X ≤ b) = F(b) − F(a−1) for integers a, b.
Example 3.5.3 Let X represent the number of signals properly identified in a 30-minute time period in which 10 signals are received. Assume that any signal is identified with probability p = 1/2 and that identifications of different signals are independent of each other. (i) Find the probability that at most seven signals are identified correctly. (ii) Find the probability that at most 7 and at least 2 signals are identified correctly.
(i) Here n = 10 and p = 0.5. From the binomial c.d.f. table (n = 10, column p = 0.5),

P[X ≤ 7] = F(7) = 0.9453,

which includes the probability associated with 0 and 1 as well.

(ii) P[2 ≤ X ≤ 7] = P[X ≤ 7] − P[X < 2] = P[X ≤ 7] − P[X ≤ 1] = F(7) − F(1)
= 0.945 − 0.011 = 0.934.
Section 3.4, page no 90, 45
Assume that an experiment is conducted and that the outcome is considered to be either a success or a failure. Let p denote the probability of success. Define X to be 1 if the experiment is a success and 0 if it is a failure. X is said to have a point binomial distribution (Bernoulli distribution) with parameter p. i) Argue that X is a binomial random variable with n = 1.
ii) Find the density of X. iii) Find the moment generating function for X. iv) Find the mean and variance of X. v) In DNA replication, errors can occur that are chemically induced. Some of these errors are "silent" in that they do not lead to an observable mutation. Growing bacteria are exposed to a chemical that has probability 0.14 of inducing an observable error. Let X be 1 if an observable mutation results and let X be 0 otherwise. Find E[X].
Sampling with replacement If we choose randomly with replacement a sample of n objects from N objects of which r are favorable and X = number of favorable objects in the sample chosen then X has binomial distribution with parameters n and p = r/N.
Example
From a usual pack of 52 cards, 10 cards are picked randomly with replacement. Find the probability that they will contain at least 4 and at most 7 spades. Identify Bernoulli trials and success and random variable X together with its distribution. n = 10, p = 13/52 = 0.25 .
Required probability = F(7) - F(3)
From the binomial c.d.f. table for n = 10:

x    p=.01     p=.05     p=.10     p=.15     p=.20     p=.25
0    0.90438   0.59874   0.34868   0.19687   0.10737   0.05631
1    0.99573   0.91386   0.73610   0.54430   0.37581   0.24403
2    0.99989   0.98850   0.92981   0.82020   0.67780   0.52559
3    1.00000   0.99897   0.98720   0.95003   0.87913   0.77588
4    1.00000   0.99994   0.99837   0.99013   0.96721   0.92187
5    1.00000   1.00000   0.99985   0.99862   0.99363   0.98027
6    1.00000   1.00000   0.99999   0.99987   0.99914   0.99649
7    1.00000   1.00000   1.00000   0.99999   0.99992   0.99958
8    1.00000   1.00000   1.00000   1.00000   1.00000   0.99997

Required probability = F(7) − F(3) = 0.99958 − 0.77588 (by tables) = 0.22370.
Hypergeometric distribution
If we choose, without replacement, a sample of size n from N objects of which r are favorable, and X = the number of favorable objects in the sample, then

P[X = x] = C(r, x) C(N−r, n−x) / C(N, n)

if max[0, n − (N − r)] ≤ x ≤ min(n, r), and 0 otherwise.

Picture: of the N objects, r have the trait ("success") and (N − r) do not ("failure"); we draw n of them.

Properties:
• The experiment consists of drawing a random sample of size n without replacement and without regard to order from a collection of N objects.
• Of the N objects, r have a trait of interest to us; the other (N − r) do not have the trait.
• The random variable X is the number of objects in the sample with the trait.

Definition: A random variable X with integer values has a hypergeometric distribution with parameters N, n, r if its density is

f(x) = C(r, x) C(N−r, n−x) / C(N, n), if max[0, n − (N − r)] ≤ x ≤ min(n, r).

Theorem: If X is a hypergeometric random variable with parameters N, n, r, then

E[X] = n(r/N) and Var(X) = n (r/N) ((N−r)/N) ((N−n)/(N−1)).
Section 3.7, page no. 91, 54
Suppose that X is hypergeometric with N = 20, r = 17, n = 5. What are the possible values for X? What are E[X] and Var(X)?

Sol: f(x) = C(r, x) C(N−r, n−x) / C(N, n) if max[0, n − (N − r)] ≤ x ≤ min(n, r).

max[0, 5 − (20 − 17)] ≤ x ≤ min(5, 17), i.e. max[0, 2] ≤ x ≤ 5,
i.e. X = 2, 3, 4 and 5.
E[X] = 5(17/20) = 4.25
Var(X) = 5(17/20)(3/20)(15/19) = 0.5033
Hypergeometric ≈ binomial When the sample size n is small compared to the population size N, the composition of the sampled group does not change much from trial to trial even when sampling is without replacement. Thus we can use the binomial distribution with parameters n and p = r/N. This is done if n/N ≤ 0.05.
Example 3.7.3 During a course of an hour, 1000 bottles of beer are filled by a particular machine. Each hour a sample of 20 bottles is randomly selected and number of ounces of beer per bottle is checked. Let X denote the number of bottles selected that are underfilled. Suppose during a particular hour, 100 underfilled bottles are produced. Find the probability that at least 3 underfilled bottles will be among those sampled.
Solution (using hypergeometric)
X = the number of bottles selected that are underfilled; N = 1000, n = 20, r = 100.

Required probability = P[X ≥ 3] = 1 − P[X = 0] − P[X = 1] − P[X = 2]
= 1 − C(100,0)C(900,20)/C(1000,20) − C(100,1)C(900,19)/C(1000,20) − C(100,2)C(900,18)/C(1000,20)
= 0.3224
Using Binomial Approximation with n = 20, p = 100 /1000 = 0.1 (n/N = 20/1000 = 0.02 < 0.05) P[X ≥ 3] = 1 - F(2) = 1 - 0.6769 = 0.3231.
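The two computations can be placed side by side to see how close the approximation is (a sketch using the bottle example's numbers):

```python
from math import comb

# Exact hypergeometric answer vs. binomial approximation: N = 1000, n = 20, r = 100.
N, n, r = 1000, 20, 100

def hyper_pmf(x):
    return comb(r, x) * comb(N - r, n - x) / comb(N, n)

def binom_pmf(x, p=r / N):
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

exact = 1 - sum(hyper_pmf(x) for x in range(3))   # exact P[X >= 3]
approx = 1 - sum(binom_pmf(x) for x in range(3))
print(round(exact, 4))
print(round(approx, 4))  # 0.3231
```

The two values agree to about three decimal places, consistent with n/N = 0.02 being well under the 0.05 rule of thumb.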
Sometimes the population size is large but not known, and only the proportion of favorable items in the population is given. Then we can use the binomial distribution, for sampling both with and without replacement, where p is the favorable proportion of the population.
Ex : A vegetable vendor has a large pile of tomatoes of which 30% are green. A buyer randomly puts 10 tomatoes in his basket. What is the probability that more than 5 of them are green?
Poisson Distribution
This distribution is named after the French mathematician Siméon Denis Poisson. Let k > 0 be a constant and, for any real number x,

f(x) = e^(−k) k^x / x!, for x = 0, 1, 2, …
f(x) = 0, otherwise.

Verify that f(x) is a probability density function: f(x) is nonnegative, and

Σ_(x=0)^∞ f(x) = Σ_(x=0)^∞ e^(−k) k^x / x! = e^(−k) (1 + k/1! + k²/2! + …) = e^(−k) e^k = 1.
A random variable X with this density f is said to have a Poisson distribution with parameter k.
Theorem
The m.g.f. of a Poisson random variable X with parameter k > 0 is

m_X(t) = e^(k(e^t − 1)),

and E[X] = k and Var[X] = k.
Proof:

m_X(t) = E[e^(tX)] = Σ_(x=0)^∞ e^(tx) e^(−k) k^x / x! = e^(−k) Σ_(x=0)^∞ (k e^t)^x / x!
= e^(−k) (1 + k e^t + (k e^t)²/2! + …) = e^(−k) e^(k e^t) = e^(k(e^t − 1)).

E[X] = d/dt m_X(t) |_(t=0) = e^(k(e^t − 1)) k e^t |_(t=0) = k.

E[X²] = d²/dt² m_X(t) |_(t=0) = [e^(k(e^t − 1)) (k e^t)² + e^(k(e^t − 1)) k e^t] |_(t=0) = k² + k.

Hence, Var(X) = E[X²] − (E[X])² = k² + k − k² = k.
Expectation of Poisson r.v. by Definition

E[X] = Σ_(x=0)^∞ x e^(−k) k^x / x! = Σ_(x=1)^∞ e^(−k) k^x / (x − 1)! = e^(−k) k Σ_(x=1)^∞ k^(x−1) / (x − 1)!

Let x − 1 = y:

E[X] = e^(−k) k Σ_(y=0)^∞ k^y / y! = e^(−k) k (1 + k + k²/2! + …) = e^(−k) k e^k = k.

Similarly,

E[X²] = Σ_(x=0)^∞ x² e^(−k) k^x / x! = Σ_(x=0)^∞ {x(x−1) + x} e^(−k) k^x / x!
= Σ_(x=0)^∞ x(x−1) e^(−k) k^x / x! + Σ_(x=0)^∞ x e^(−k) k^x / x!
= Σ_(x=2)^∞ e^(−k) k^x / (x − 2)! + k
= e^(−k) k² Σ_(x=2)^∞ k^(x−2) / (x − 2)! + k.

Let x − 2 = y:

E[X²] = e^(−k) k² Σ_(y=0)^∞ k^y / y! + k = e^(−k) k² (1 + k + k²/2! + …) + k = e^(−k) k² e^k + k = k² + k.

Var[X] = E[X²] − E[X]² = k² + k − k² = k.
Poisson Process: A process occurring discretely over a continuous interval of time, length, or space is called a Poisson process. Let λ = the average number of successes occurring in a unit interval.

Assumptions of a Poisson process:
(i) The probability of a success in a very small interval of time ∆t is λ∆t.
(ii) The probability of more than one success in such a very small interval is negligible.
(iii) The probability of a success in such a small interval does not depend on what happened prior to that time.

Let X = the number of times the discrete event occurs in a given interval of length s in a Poisson process. Then X has a Poisson distribution with parameter k = λs. Thus the density of X is:

f(x) = e^(−λs) (λs)^x / x!, for x = 0, 1, 2, …
f(x) = 0, otherwise.
c.d.f. of the Poisson distribution This is provided by the table on p. 692. Values of k = λs, the parameter of the Poisson distribution, correspond to columns; values t of the random variable correspond to rows; and the values of the c.d.f. F(t) are the entries inside the table.
• The expected value of X is λs.
• The average number of occurrences of the event of interest in an interval of s units is λs.
• Thus the average number of occurrences of the event in 1 unit of time, length, area or space is λs/s = λ.
Steps in solving Poisson Problem • Determine the basic unit of measurement being used. • Determine the average number of occurrences of the event per unit. This number is denoted by λ. • The random variable X, the number of occurrences of the event in the interval of size s follows a Poisson distribution with parameter k = λs.
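The steps above can be sketched directly; the rate λ = 2 per unit and interval length s = 3 here are made-up numbers for illustration:

```python
import math

# Poisson-process sketch: rate lam per unit over an interval of length s,
# so X ~ Poisson(k = lam * s).
def poisson_pmf(x, k):
    return math.exp(-k) * k ** x / math.factorial(x)

lam, s = 2, 3          # hypothetical rate and interval length
k = lam * s            # parameter of the Poisson distribution: 6
at_most_3 = sum(poisson_pmf(x, k) for x in range(4))  # P(X <= 3)
print(round(at_most_3, 4))  # 0.1512
```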
Poisson approximation to Binomial
If a binomial random variable X has parameter p very small and n large so that np = k is moderate then X can be approximated by a Poisson random variable Y with parameter k.
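A quick side-by-side comparison illustrates the approximation; n = 1000 and p = 0.005 (so k = np = 5) are illustrative values of ours:

```python
from math import comb, exp, factorial

# Poisson approximation to the binomial: n large, p small, k = np moderate.
n, p = 1000, 0.005
k = n * p  # 5.0

def binom_pmf(x):
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

def poisson_pmf(x):
    return exp(-k) * k ** x / factorial(x)

for x in (0, 5, 10):
    print(x, round(binom_pmf(x), 5), round(poisson_pmf(x), 5))
```

The printed pairs agree to about three decimal places, which is the practical content of the approximation.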
Exercise 63 Geophysicists determine the age of a zircon by counting the number of uranium fission tracks on a polished surface. A particular zircon is of such an age that the average number of tracks per square centimeter is five. What is the probability that a 2-centimeter-square sample of this zircon will reveal at most 3 tracks, thus leading to an underestimation of the age of the material?
Example:
A large microprocessor chip contains multiple copies of circuits. If a circuit fails, the chip knows how to select the proper logic to repair itself. The average number of defects per chip is 300. Find the probability that 10 or fewer defects will be found in a randomly selected region that comprises 5% of the total surface area.
Ex. 64: California is hit by approximately 500 earthquakes per year that are large enough to be felt. However, those of destructive magnitude occur on average once a year. Find the probability that at least one earthquake of this magnitude occurs during a 6-month period. Would it be unusual to have 3 or more earthquakes of destructive magnitude in a 6-month period? Explain.