Expected Value of Random Variables

Subjects: Probability Theory
Links: Random Variables, Probability Functions for Random Variables, Riemann-Stieltjes Integral on R, Measurable Functions

Expected Value

Let $X$ be a random variable with distribution function $F$. The expected value of $X$, denoted $E[X]$, is defined as the number

$$E[X] := \int_{\Bbb R} x \, dF(x)$$

when this integral is absolutely convergent, i.e., when $\int_{\Bbb R} |x| \, dF(x) < \infty$. In this case we say that $X$ is integrable or that it has finite expected value.

Let $X$ be a discrete random variable with probability mass function $f$. The expected value of $X$ is defined to be

$$E[X] = \sum_x x f(x)$$

provided that the sum is absolutely convergent, i.e., that $\sum_x |x| f(x) < \infty$.
Similarly, let $X$ be an absolutely continuous random variable with probability density function $f$. Then the expected value is

$$E[X] = \int_{\Bbb R} x f(x) \, dx$$
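
As a quick sanity check, both definitions can be evaluated numerically. A minimal sketch in Python, using a fair die for the discrete case and the density $f(x) = 2x$ on $[0,1]$ for the continuous case (both hypothetical examples, not from the text; the continuous integral is approximated by a midpoint Riemann sum):

```python
from fractions import Fraction

# Discrete case: fair six-sided die, f(x) = 1/6 for x in {1, ..., 6}.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}
e_discrete = sum(x * p for x, p in pmf.items())  # E[X] = sum_x x f(x)

# Continuous case: E[X] = \int_0^1 x * 2x dx = 2/3, approximated by a
# midpoint Riemann sum with n subintervals.
n = 100_000
h = 1.0 / n
e_continuous = sum((i + 0.5) * h * (2 * (i + 0.5) * h) * h for i in range(n))

print(e_discrete)           # 7/2
print(round(e_continuous, 4))  # close to 2/3
```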

Let $X$ be a random variable with distribution function $F_X$, and let $g : \Bbb R \to \Bbb R$ be a Borel measurable function. Then $g(X)$ is a random variable, and if we try to calculate its expected value from the definition we get:

$$E[g(X)] = \int_{\Bbb R} x \, dF_{g(X)}(x)$$

Law of the unconscious statistician

Let $X$ be a random variable with distribution function $F_X$, and let $g : \Bbb R \to \Bbb R$ be a Borel measurable function such that the random variable $g(X)$ has finite expected value. Then

$$E[g(X)] = \int_{\Bbb R} g(x) \, dF_X(x)$$

In the particular cases:
Let $X$ be a continuous random variable with probability density function $f$, and let $g : \Bbb R \to \Bbb R$ be a function such that $g(X)$ is a random variable with finite expected value. Then

$$E[g(X)] = \int_{\Bbb R} g(x) f(x) \, dx$$

Let $X$ be a discrete random variable with probability mass function $f$, and let $g : \Bbb R \to \Bbb R$ be a function such that $g(X)$ is a random variable with finite expected value. Then

$$E[g(X)] = \sum_x g(x) f(x)$$
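
The law of the unconscious statistician is easy to verify numerically in the discrete case: summing $g(x)f(x)$ over the distribution of $X$ agrees with first deriving the distribution of $Y = g(X)$ and applying the original definition. The die and the function $g$ below are hypothetical examples:

```python
from collections import defaultdict
from fractions import Fraction

# Hypothetical example: X uniform on {1, ..., 6}, g(x) = (x - 3)^2.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}
g = lambda x: (x - 3) ** 2

# LOTUS: sum g(x) f(x) directly over the distribution of X.
lotus = sum(g(x) * p for x, p in pmf.items())

# Definition: build the pmf of Y = g(X) (values of g collapse), then sum y f_Y(y).
pmf_y = defaultdict(Fraction)
for x, p in pmf.items():
    pmf_y[g(x)] += p
direct = sum(y * p for y, p in pmf_y.items())

assert lotus == direct
print(lotus)  # 19/6
```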

Let $(X,Y)$ be a random vector and let $\phi : \Bbb R^2 \to \Bbb R$ be Borel measurable such that $\phi(X,Y)$ is a random variable with finite expected value. By the definition above,

$$E[\phi(X,Y)] = \int_{\Bbb R} x \, dF_{\phi(X,Y)}(x)$$

Expected Value of a Function of a Random Vector

Let $(X,Y)$ be a random vector and let $\phi : \Bbb R^2 \to \Bbb R$ be Borel measurable such that $\phi(X,Y)$ is a random variable with finite expected value. Then we can also compute the expectation as

$$E[\phi(X,Y)] = \int_{\Bbb R^2} \phi(x,y) \, dF_{X,Y}(x,y)$$

using the Riemann-Stieltjes Integral in $\Bbb R^n$. In the special case where $X$ and $Y$ are independent, the integrator factors and we can simplify to

$$E[\phi(X,Y)] = \int_{\Bbb R^2} \phi(x,y) \, dF_X(x) \, dF_Y(y)$$

Prop: Let $X$ and $Y$ be continuous random variables defined on the same probability space with joint probability density function $f(x,y)$. Let $g : \Bbb R^2 \to \Bbb R$ be a function such that $g(X,Y)$ is a random variable with finite expected value. Then

$$E[g(X,Y)] = \int_{\Bbb R} \int_{\Bbb R} g(x,y) f(x,y) \, dy \, dx$$

If the random variables are discrete with joint probability mass function $f(x,y)$, and $g : \Bbb R^2 \to \Bbb R$ is a function such that $g(X,Y)$ is a random variable with finite expected value, then

$$E[g(X,Y)] = \sum_x \sum_y g(x,y) f(x,y)$$
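
A minimal sketch of the discrete double-sum formula, using a hypothetical joint pmf (two independent fair coins coded 0/1) that is not from the text:

```python
from fractions import Fraction

# Hypothetical joint pmf: two independent fair coins, f(x, y) = 1/4
# for each pair (x, y) in {0, 1}^2; take g(x, y) = x + y.
f = {(x, y): Fraction(1, 4) for x in (0, 1) for y in (0, 1)}
g = lambda x, y: x + y

# E[g(X, Y)] = sum_x sum_y g(x, y) f(x, y)
e = sum(g(x, y) * p for (x, y), p in f.items())
print(e)  # 1
```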

Prop: Properties of the expected value. Let $X$ and $Y$ be random variables with finite expected value and $c$ a constant. Then

$$E[c] = c, \qquad E[cX] = cE[X], \qquad E[X+Y] = E[X] + E[Y]$$

that is, the expected value behaves linearly.

Let $X$ be a random variable with distribution function $F$ that admits a decomposition:

$$F(x) = \alpha F_d(x) + (1-\alpha) F_c(x)$$

with $0 \le \alpha \le 1$, where $F_d$ is a discrete distribution function and $F_c$ is a continuous one. Let $X_d$ have the distribution $F_d$ and $X_c$ have the distribution $F_c$. Then $X$ has a finite expected value, iff, $X_d$ and $X_c$ have finite expected value, and in that case

$$E[X] = \alpha E[X_d] + (1-\alpha) E[X_c]$$
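
A small exact check of the decomposition formula, with a hypothetical mixture not from the text ($\alpha = 1/2$, $X_d$ a fair die, $X_c$ uniform on $[0,1]$):

```python
from fractions import Fraction

# Hypothetical mixture: alpha = 1/2, X_d a fair die (E[X_d] = 7/2),
# X_c uniform on [0, 1] (E[X_c] = \int_0^1 x dx = 1/2).
alpha = Fraction(1, 2)
e_d = sum(x * Fraction(1, 6) for x in range(1, 7))  # 7/2
e_c = Fraction(1, 2)

# E[X] = alpha E[X_d] + (1 - alpha) E[X_c]
e_x = alpha * e_d + (1 - alpha) * e_c
print(e_x)  # 2
```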

Prop: Let $X$ and $Y$ be independent and both with finite expected values, and let $g, h : \Bbb R \to \Bbb R$ be Borel measurable and such that $g(X)$ and $h(Y)$ have a finite expected value. Then

$$E[g(X)h(Y)] = E[g(X)]E[h(Y)]$$

in particular,

$$E[XY] = E[X]E[Y]$$
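
The product rule can be checked exactly on a hypothetical pair of independent fair dice, where the joint pmf factors as $1/36$:

```python
from fractions import Fraction
from itertools import product

# Hypothetical: X, Y independent fair dice; joint pmf is (1/6)(1/6) = 1/36.
p = Fraction(1, 6)
e_x = sum(x * p for x in range(1, 7))  # E[X] = 7/2
e_y = e_x                              # same distribution for Y

# E[XY] computed from the factored joint pmf via the double sum.
e_xy = sum(x * y * p * p for x, y in product(range(1, 7), repeat=2))

assert e_xy == e_x * e_y  # E[XY] = E[X]E[Y] under independence
print(e_xy)  # 49/4
```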

Prop: Let $X$ be a discrete random variable with cumulative distribution function $F(x)$, with a finite expected value and possible values in the set $\Bbb N$. Then

$$E[X] = \sum_{x \in \Bbb N} (1 - F(x))$$

Similarly, let $X$ be a continuous random variable with cumulative distribution function $F$, with finite expected value and values in the interval $[0, \infty)$. Then

$$E[X] = \int_0^\infty (1 - F(x)) \, dx$$
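
A numeric check of the discrete tail-sum formula, assuming the convention $\Bbb N = \{0, 1, 2, \dots\}$ and using a hypothetical geometric variable on $\{1, 2, \dots\}$ (neither is specified in the text):

```python
# Geometric variable on {1, 2, ...} with success probability p = 0.25,
# so E[X] = 1/p = 4 and F(x) = 1 - (1 - p)**x for x = 0, 1, 2, ...
p = 0.25
F = lambda x: 1 - (1 - p) ** x

# E[X] = sum_{x in N} (1 - F(x)); the terms decay geometrically, so
# truncating the series at 500 terms loses a negligible amount.
e_tail = sum(1 - F(x) for x in range(500))
print(round(e_tail, 6))  # 4.0
```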