Continuous Distributions

1. Uniform Distribution
1.1 Definition
A continuous random variable X is said to follow the uniform distribution on an interval [a,b] with a < b, denoted by U(a,b), if:

$$f_X(x) = \begin{cases} \dfrac{1}{b-a} & x \in [a,b] \\ 0 & \text{otherwise} \end{cases}$$

1.2 Significance
In probability theory and statistics, the continuous uniform distribution or rectangular distribution is a family of symmetric probability distributions. The distribution describes an experiment where there is an arbitrary outcome that lies between certain bounds. The bounds are defined by the parameters, a and b, which are the minimum and maximum values. The interval can either be closed (e.g. [a,b]) or open (e.g. ]a,b[).
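The defining property, a constant density of 1/(b−a) on [a,b], can be illustrated with a short NumPy sketch (the bounds a = 2 and b = 5 are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 5.0  # arbitrary bounds with a < b

# Draw samples from U(a, b)
samples = rng.uniform(a, b, size=100_000)

# Every sample falls inside [a, b]
assert samples.min() >= a and samples.max() <= b

# The density is constant at 1/(b - a); a normalized histogram approximates it
density, _ = np.histogram(samples, bins=10, range=(a, b), density=True)
print(np.allclose(density, 1 / (b - a), atol=0.05))
```

Each histogram bin sits near 1/3, the height of the rectangle that gives the distribution its alternative name.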
The standard uniform distribution is the uniform distribution on the interval [0,1]:

$$U = U(0,1)$$

1.3 Affine Transformations

Let X ∼ U(a,b).
$$\forall x \in \mathbb{R},\; F_{-X}(x) = P(-X < x) = P(X > -x) = 1 - F_X(-x)$$

$$\forall x \in \mathbb{R},\; f_{-X}(x) = f_X(-x) = \frac{1}{b-a}\mathbb{1}_{[a,b]}(-x) = \frac{1}{(-a)-(-b)}\mathbb{1}_{[-b,-a]}(x)$$

$$\implies -X \sim U(-b,-a)$$

Now consider an affine transformation with a positive coefficient:

- Let α ∈ R+*, β ∈ R
- Let a,b∈R with a<b
- Let X ∼ U(a,b) and Y = αX + β
$$\forall x \in \mathbb{R},\; F_Y(x) = P(Y < x) = P(\alpha X < x - \beta) = P\!\left(X < \frac{x-\beta}{\alpha}\right) = F_X\!\left(\frac{x-\beta}{\alpha}\right)$$

$$\implies \forall x \in \mathbb{R},\; f_Y(x) = \frac{1}{\alpha} f_X\!\left(\frac{x-\beta}{\alpha}\right) = \frac{1}{\alpha(b-a)}\mathbb{1}_{[a,b]}\!\left(\frac{x-\beta}{\alpha}\right) = \frac{1}{(\alpha b+\beta)-(\alpha a+\beta)}\mathbb{1}_{[\alpha a+\beta,\,\alpha b+\beta]}(x)$$

$$\implies Y \sim U(\alpha a + \beta,\, \alpha b + \beta)$$

In particular:

$$X \sim U(a,b) \iff \frac{X-a}{b-a} \sim U(0,1)$$

For α < 0, we write Y = αX + β = −(−αX − β).
Since −α > 0, we have:

$$-\alpha X - \beta \sim U(-\alpha a - \beta,\, -\alpha b - \beta) \implies \alpha X + \beta \sim U(\alpha b + \beta,\, \alpha a + \beta)$$

1.4 Moments & Central Moments
1.4.1 Moments
$$\forall n \in \mathbb{N}^*,\; E[X^n] = \int_a^b \frac{x^n}{b-a}\,dx = \frac{b^{n+1}-a^{n+1}}{(n+1)(b-a)}$$

In particular, the expected value E[X] is:

$$E[X] = \frac{a+b}{2}$$

1.4.2 Central Moments
For n ∈ N*, the nth central moment of X is the nth moment of X − E[X].
But:

$$X - E[X] \sim U\!\left(\frac{a-b}{2},\, \frac{b-a}{2}\right)$$
$$\forall n \in \mathbb{N}^*,\; E[(X-E[X])^n] = \frac{\left(\frac{b-a}{2}\right)^{n+1} - \left(\frac{a-b}{2}\right)^{n+1}}{(n+1)\left(\frac{b-a}{2} - \frac{a-b}{2}\right)} = \frac{1-(-1)^{n+1}}{2^{n+1}(n+1)}\,(b-a)^n$$

In particular, the variance V[X] is:

$$V[X] = \frac{(b-a)^2}{12}$$

2. Exponential Distribution
2.1 Definition
A continuous random variable X is said to follow the exponential distribution with parameter λ ∈ R+* if:

$$f_X(x) = \begin{cases} \lambda e^{-\lambda x} & x \in \mathbb{R}_+ \\ 0 & \text{otherwise} \end{cases}$$

We denote it by:

$$X \sim \mathcal{E}(\lambda)$$

2.2 Significance
The exponential distribution is the probability distribution of the time between events in a Poisson point process. It is the continuous analogue of the geometric distribution, and it has the key property of being memoryless.
It is used to model radioactive decay.
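Both the 1/λ mean (derived below) and the memoryless property can be checked empirically. A minimal sketch with NumPy, where the rate λ = 1.5 and the thresholds T = 0.8, r = 0.5 are arbitrary choices (note that NumPy parameterizes the exponential by the scale 1/λ, not the rate):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1.5  # arbitrary rate parameter
samples = rng.exponential(scale=1 / lam, size=1_000_000)  # scale = 1/lambda

# Empirical mean should be close to 1/lambda
print(samples.mean())

# Memorylessness: P(X >= T + r | X >= T) == P(X >= r)
T, r = 0.8, 0.5
p_cond = np.mean(samples >= T + r) / np.mean(samples >= T)
p_uncond = np.mean(samples >= r)
print(abs(p_cond - p_uncond) < 0.01)
```

The conditional and unconditional tail probabilities agree up to sampling noise, which is exactly the memoryless property proved in section 2.4.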
2.3 Moments
2.3.1 Raw Moments
$$\forall n \in \mathbb{N},\; E[X^n] = \int_{\mathbb{R}_+} \lambda t^n e^{-\lambda t}\,dt = \int_{\mathbb{R}_+} \left(\frac{u}{\lambda}\right)^n e^{-u}\,du \quad \text{with } u = \lambda t,\; du = \lambda\,dt$$

$$= \lambda^{-n} \int_{\mathbb{R}_+} u^n e^{-u}\,du = \frac{\Gamma(n+1)}{\lambda^n} = \frac{n!}{\lambda^n}$$

In particular, the expected value E[X] is:

$$E[X] = \frac{1}{\lambda}$$

2.3.2 Central Moments
$$\forall n \in \mathbb{N},\; E[(X-E[X])^n] = \sum_{k=0}^n (-1)^{n-k}\binom{n}{k} E[X^k]\,E[X]^{n-k} = \sum_{k=0}^n (-1)^{n-k}\binom{n}{k}\frac{k!}{\lambda^n} = \frac{1}{\lambda^n}\sum_{k=0}^n (-1)^{n-k}\frac{n!}{(n-k)!}$$

In particular, the variance V[X] is:

$$V[X] = E[X^2] - E[X]^2 = \frac{2}{\lambda^2} - \frac{1}{\lambda^2} = \frac{1}{\lambda^2}$$

2.4 Memorylessness
Memorylessness is a fundamental property of the exponential distribution. It states:

$$\forall T, r \in \mathbb{R}_+,\; P(X \ge T + r \mid X \ge T) = P(X \ge r)$$

The proof is as follows:

$$\forall T, r \in \mathbb{R}_+,\; P(X \ge T + r \mid X \ge T) = \frac{P(X \ge T + r)}{P(X \ge T)} = \frac{\int_{T+r}^{+\infty} \lambda e^{-\lambda u}\,du}{\int_{T}^{+\infty} \lambda e^{-\lambda u}\,du} = \frac{e^{-\lambda(T+r)}}{e^{-\lambda T}} = e^{-\lambda r} = P(X \ge r)$$

2.5 Scaling
- Let k∈R+∗
- Let X∼E(λ) and Y=kX
We compute the probability density function of Y:
$$\forall x \in \mathbb{R}_+,\; f_Y(x) = \frac{1}{k} f_X\!\left(\frac{x}{k}\right) = \frac{\lambda}{k} e^{-\frac{\lambda}{k}x}$$

Hence:

$$\forall k \in \mathbb{R}_+^*,\; X \sim \mathcal{E}(\lambda) \iff kX \sim \mathcal{E}\!\left(\frac{\lambda}{k}\right)$$
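The scaling identity can be verified by comparing a scaled sample of E(λ) against a direct sample of E(λ/k). A minimal sketch, where λ = 2 and k = 3 are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, k = 2.0, 3.0  # arbitrary rate and positive scale factor

x = rng.exponential(scale=1 / lam, size=1_000_000)       # X ~ E(lam)
y = k * x                                                # Y = kX, claimed ~ E(lam / k)
direct = rng.exponential(scale=k / lam, size=1_000_000)  # E(lam / k) sampled directly

# The two samples should agree in mean and in quantiles
print(np.allclose(y.mean(), direct.mean(), rtol=0.01))
qs = [0.25, 0.5, 0.75]
print(np.allclose(np.quantile(y, qs), np.quantile(direct, qs), rtol=0.02))
```

Matching a few quantiles, not just the mean, gives some evidence that the whole distribution matches, which is what the density calculation above establishes exactly.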