Expected Value and Covariance of Random Vectors

Subjects: Probability Theory
Links: Random Vectors, Expected Value of Random Variables, Variance of Random Variables, Positive definite matrix

We define the expected value of a random vector (X, Y), composed of two random variables with finite expected values, as the vector of the expected values:

$$E[(X, Y)] = (E[X], E[Y])$$

The same definition applies componentwise to an n-dimensional random vector.
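A small numerical illustration of the componentwise definition (the distributions and parameters here are arbitrary choices for the sketch): the sample mean of each coordinate estimates the corresponding component of the expected value.

```python
import numpy as np

rng = np.random.default_rng(1)
# 100_000 draws of a 2-dimensional random vector (X, Y):
# X ~ Normal(2, 1) and Y ~ Uniform(0, 4), so E[(X, Y)] = (2, 2).
xy = np.column_stack([rng.normal(2.0, 1.0, 100_000),
                      rng.uniform(0.0, 4.0, 100_000)])

# The expected value of the vector is the vector of expected values,
# estimated here componentwise by the sample mean of each column.
e_xy = xy.mean(axis=0)

assert np.allclose(e_xy, [2.0, 2.0], atol=0.05)
```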

We can also define the covariance of two random variables, which measures how much each changes with respect to the other; it is defined as

$$\operatorname{Cov}(X, Y) = E[(X - E[X])(Y - E[Y])]$$

Expanding the product and using linearity of expectation, we can see that

$$\operatorname{Cov}(X, Y) = E[XY] - E[X]E[Y]$$
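The algebra, written out (using linearity of expectation and the fact that E[X] and E[Y] are constants):

$$\begin{aligned}
\operatorname{Cov}(X, Y) &= E[(X - E[X])(Y - E[Y])] \\
&= E\big[XY - X\,E[Y] - E[X]\,Y + E[X]E[Y]\big] \\
&= E[XY] - E[X]E[Y] - E[X]E[Y] + E[X]E[Y] \\
&= E[XY] - E[X]E[Y]
\end{aligned}$$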

and that

$$\operatorname{Cov}(X, Y) = \operatorname{Cov}(Y, X)$$

It also satisfies the Cauchy–Schwarz inequality:

$$|\operatorname{Cov}(X, Y)| \le \sqrt{\operatorname{Var}(X)\operatorname{Var}(Y)}$$

With this we can define the variance of a random vector composed of n random variables, each with finite variance. It is the n × n matrix

$$\operatorname{Var}(X_1, \ldots, X_n) = \big(\operatorname{Cov}(X_i, X_j)\big)_{i,j}$$

This matrix is symmetric, since Cov(X_i, X_j) = Cov(X_j, X_i).
Writing X = (X_1, \ldots, X_n) as a column vector, we get the identity:

$$\operatorname{Var}(X) := E\big[(X - E[X])(X - E[X])^{T}\big]$$

The matrix Var(X) is symmetric and positive semi-definite; it is positive definite precisely when no nontrivial linear combination of the components is almost surely constant.
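A quick numerical sketch of both properties, using NumPy's `np.cov` on sample data (the dimensions and the linear dependence among columns are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# 1000 draws of a 3-dimensional random vector; columns are X1, X2, X3.
samples = rng.normal(size=(1000, 3))
# Make X3 a linear combination of X1 and X2, so the covariance
# matrix is singular: positive SEMI-definite but not positive definite.
samples[:, 2] = samples[:, 0] + 0.5 * samples[:, 1]

# Sample covariance matrix: entry (i, j) estimates Cov(Xi, Xj).
C = np.cov(samples, rowvar=False)

# Symmetric: Cov(Xi, Xj) = Cov(Xj, Xi).
assert np.allclose(C, C.T)

# Positive semi-definite: all eigenvalues >= 0 (up to float tolerance).
eigvals = np.linalg.eigvalsh(C)
assert (eigvals >= -1e-10).all()
```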

Expected Value of Functions of a Random Vector

Let (X, Y) be a random vector and let $\varphi : \Bbb R^2 \to \Bbb R$ be a Borel measurable function such that the random variable φ(X, Y) has a finite expected value. Then $$E[\varphi(X, Y)] = \int_{\Bbb R^2} \varphi(x, y) \, dF_{X, Y}(x, y)$$
From this we obtain two useful consequences:

Let X and Y have finite expected values; then $$E[X + Y] = E[X] + E[Y]$$
Let X and Y be independent random variables, and let g, h be Borel measurable functions such that g(X) and h(Y) have finite expected values. Then

$$E[g(X)h(Y)] = E[g(X)]\,E[h(Y)]$$
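A Monte Carlo sanity check of this product rule; the distributions and test functions here are assumptions chosen for illustration (X, Y independent Uniform(0, 1), g(x) = x², h(y) = eʸ, so the exact value is (e − 1)/3):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
x = rng.uniform(0.0, 1.0, size=n)   # X ~ Uniform(0, 1)
y = rng.uniform(0.0, 1.0, size=n)   # Y ~ Uniform(0, 1), independent of X

g = x ** 2        # g(X) = X^2,  E[g(X)] = 1/3
h = np.exp(y)     # h(Y) = e^Y,  E[h(Y)] = e - 1

lhs = np.mean(g * h)              # Monte Carlo estimate of E[g(X)h(Y)]
rhs = np.mean(g) * np.mean(h)     # E[g(X)] * E[h(Y)]

# By independence, both estimates converge to (e - 1)/3 ≈ 0.573.
assert abs(lhs - rhs) < 0.01
assert abs(lhs - (np.e - 1) / 3) < 0.01
```

Note that without independence the rule fails: taking Y = X in the setup above would make E[g(X)h(X)] differ from E[g(X)]E[h(X)].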