Chapter 6: Functions of Random Variables

6.6: Multivariable Transformations Using Jacobians

The Bivariate Transformation Method

Suppose that $Y_1$ and $Y_2$ are continuous random variables with joint density function $f_{Y_1, Y_2}(y_1, y_2)$ and that for all $(y_1, y_2)$ such that $f_{Y_1, Y_2}(y_1, y_2) > 0$,

$$u_1 = h_1(y_1, y_2) \quad \text{and} \quad u_2 = h_2(y_1, y_2)$$

is a one-to-one transformation from $(y_1, y_2)$ to $(u_1, u_2)$ with inverse

$$y_1 = h_1^{-1}(u_1, u_2) \quad \text{and} \quad y_2 = h_2^{-1}(u_1, u_2).$$

If $h_1^{-1}(u_1, u_2)$ and $h_2^{-1}(u_1, u_2)$ have continuous partial derivatives with respect to $u_1$ and $u_2$ and Jacobian

$$J = \det \begin{bmatrix} \frac{\partial h_1^{-1}}{\partial u_1} & \frac{\partial h_1^{-1}}{\partial u_2}\\ \frac{\partial h_2^{-1}}{\partial u_1} & \frac{\partial h_2^{-1}}{\partial u_2} \end{bmatrix} = \frac{\partial h_1^{-1}}{\partial u_1} \frac{\partial h_2^{-1}}{\partial u_2} - \frac{\partial h_2^{-1}}{\partial u_1} \frac{\partial h_1^{-1}}{\partial u_2} \neq 0,$$

then the joint density of $U_1$ and $U_2$ is

$$f_{U_1, U_2}(u_1, u_2) = f_{Y_1, Y_2}\bigl(h_1^{-1}(u_1, u_2),\, h_2^{-1}(u_1, u_2)\bigr)\, |J|,$$

where $|J|$ is the absolute value of the Jacobian.
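As an illustration (not part of the text's derivation), the theorem can be checked numerically. The sketch below, assuming NumPy is available, applies it to the transformation $U_1 = Y_1 + Y_2$, $U_2 = Y_1 - Y_2$ of two independent Uniform$(0, 1)$ variables, the setting of Exercise 6.70: the inverse is $y_1 = (u_1 + u_2)/2$, $y_2 = (u_1 - u_2)/2$, so $|J| = 1/2$ and the joint density should equal $1 \cdot |J| = 1/2$ on the image region.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Y1, Y2 independent Uniform(0, 1); transform u1 = y1 + y2, u2 = y1 - y2.
y1 = rng.uniform(size=n)
y2 = rng.uniform(size=n)
u1, u2 = y1 + y2, y1 - y2

# Inverse: y1 = (u1 + u2)/2, y2 = (u1 - u2)/2, so
# J = det [[1/2, 1/2], [1/2, -1/2]] = -1/2 and |J| = 1/2.
# The theorem gives f_{U1,U2}(u1, u2) = 1 * |J| = 1/2 on the image region.
# Estimate the joint density on a small box well inside that region:
in_box = (u1 > 0.4) & (u1 < 0.6) & (u2 > -0.1) & (u2 < 0.1)
empirical_density = in_box.mean() / (0.2 * 0.2)
print(empirical_density)  # close to 0.5
```

With a million samples the Monte Carlo estimate of the density on the box agrees with $|J| = 1/2$ to about two decimal places.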

A word of caution is in order. Be sure that the bivariate transformation $u_1 = h_1(y_1, y_2)$, $u_2 = h_2(y_1, y_2)$ is one-to-one for all $(y_1, y_2)$ such that $f_{Y_1, Y_2}(y_1, y_2) > 0$. This step is easily overlooked. If the transformation is not one-to-one and this method is applied blindly, the resulting "density" function will not have the necessary properties of a valid density function.

The multivariable transformation method is also useful if we are interested in a single function of $Y_1$ and $Y_2$, say $U_1 = h(Y_1, Y_2)$. Because we have only one function of $Y_1$ and $Y_2$, we can use the method of bivariate transformations to find the joint distribution of $U_1$ and another function $U_2 = h_2(Y_1, Y_2)$ and then find the desired marginal density of $U_1$ by integrating the joint density. Because we are really interested only in the distribution of $U_1$, we would typically choose the other function $U_2 = h_2(Y_1, Y_2)$ so that the bivariate transformation is easy to invert and the Jacobian is easy to work with. We illustrate this technique in the following example.

Example 6.14

Let $Y_1$ and $Y_2$ be independent exponential random variables, both with mean $\beta > 0$. Find the density function of

$$U = \frac{Y_1}{Y_1 + Y_2}.$$
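The answer (which Exercise 6.63 below records: $U_1 = Y_1/(Y_1 + Y_2)$ is uniform on $(0, 1)$) can be sanity-checked by simulation before working through the transformation. A sketch assuming NumPy; note the choice of $\beta$ does not matter:

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 2.0          # any beta > 0 gives the same distribution for U
n = 500_000

y1 = rng.exponential(scale=beta, size=n)
y2 = rng.exponential(scale=beta, size=n)
u = y1 / (y1 + y2)

# A Uniform(0, 1) variable has mean 1/2 and satisfies P(U <= t) = t.
u_mean = u.mean()
p_quarter = (u <= 0.25).mean()
print(u_mean, p_quarter)  # close to 0.5 and 0.25
```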

If $Y_1, Y_2, \ldots, Y_k$ are jointly continuous random variables and

$$U_1 = h_1(Y_1, Y_2, \ldots, Y_k),\quad U_2 = h_2(Y_1, Y_2, \ldots, Y_k),\quad \ldots,\quad U_k = h_k(Y_1, Y_2, \ldots, Y_k),$$

where the transformation

$$u_1 = h_1(y_1, y_2, \ldots, y_k),\quad u_2 = h_2(y_1, y_2, \ldots, y_k),\quad \ldots,\quad u_k = h_k(y_1, y_2, \ldots, y_k)$$

is a one-to-one transformation from $(y_1, y_2, \ldots, y_k)$ to $(u_1, u_2, \ldots, u_k)$ with inverse

$$y_1 = h_1^{-1}(u_1, u_2, \ldots, u_k),\quad y_2 = h_2^{-1}(u_1, u_2, \ldots, u_k),\quad \ldots,\quad y_k = h_k^{-1}(u_1, u_2, \ldots, u_k),$$

and $h_1^{-1}(u_1, u_2, \ldots, u_k), h_2^{-1}(u_1, u_2, \ldots, u_k), \ldots, h_k^{-1}(u_1, u_2, \ldots, u_k)$ have continuous partial derivatives with respect to $u_1, u_2, \ldots, u_k$ and Jacobian

$$J = \det \begin{bmatrix} \frac{\partial h_1^{-1}}{\partial u_1} & \frac{\partial h_1^{-1}}{\partial u_2} & \cdots & \frac{\partial h_1^{-1}}{\partial u_k}\\ \frac{\partial h_2^{-1}}{\partial u_1} & \frac{\partial h_2^{-1}}{\partial u_2} & \cdots & \frac{\partial h_2^{-1}}{\partial u_k}\\ \vdots & \vdots & \ddots & \vdots\\ \frac{\partial h_k^{-1}}{\partial u_1} & \frac{\partial h_k^{-1}}{\partial u_2} & \cdots & \frac{\partial h_k^{-1}}{\partial u_k} \end{bmatrix} \neq 0,$$

then a result analogous to the one presented in this section can be used to find the joint density of $U_1, U_2, \ldots, U_k$. This requires the user to find the determinant of a $k \times k$ matrix, a skill that is not required in the rest of this text.
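For one common special case the $k \times k$ determinant is easy to compute numerically: a linear transformation. The sketch below, assuming NumPy, uses an arbitrary invertible $3 \times 3$ matrix chosen for illustration.

```python
import numpy as np

# For a linear transformation u = A y with A an invertible k x k matrix,
# the inverse is y = A^{-1} u, the matrix of partials (dy_i / du_j) is
# A^{-1} itself, and so J = det(A^{-1}) = 1 / det(A).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])   # arbitrary invertible 3 x 3 example

J = np.linalg.det(np.linalg.inv(A))
print(J, 1.0 / np.linalg.det(A))  # the two agree (here both are 1/2)
```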

Exercises

6.63

In Example 6.14, $Y_1$ and $Y_2$ were independent exponentially distributed random variables, both with mean $\beta$. We defined $U_1 = Y_1 / (Y_1 + Y_2)$ and $U_2 = Y_1 + Y_2$ and determined the joint density of $(U_1, U_2)$ to be

$$f_{U_1, U_2}(u_1, u_2) = \begin{cases} \frac{1}{\beta^2} u_2 e^{-u_2 / \beta}, & 0 < u_1 < 1,\ 0 < u_2,\\ 0, & \text{otherwise.} \end{cases}$$

a) Show that $U_1$ is uniformly distributed over the interval $(0, 1)$.

b) Show that $U_2$ has a gamma density with parameters $\alpha = 2$ and $\beta$.

c) Establish that $U_1$ and $U_2$ are independent.
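The claims in parts b) and c) can be sanity-checked by simulation before proving them; the sketch below (assuming NumPy, with $\beta = 3$ chosen arbitrarily) is no substitute for the proofs, but catches algebra slips quickly.

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 3.0          # arbitrary choice of the mean
n = 500_000

y1 = rng.exponential(scale=beta, size=n)
y2 = rng.exponential(scale=beta, size=n)
u1 = y1 / (y1 + y2)
u2 = y1 + y2

# A Gamma(alpha = 2, beta) variable has mean alpha*beta and variance
# alpha*beta**2; independence implies Corr(U1, U2) = 0.
u2_mean, u2_var = u2.mean(), u2.var()
corr = np.corrcoef(u1, u2)[0, 1]
print(u2_mean, u2_var, corr)  # close to 6, 18, and 0
```

Zero sample correlation is of course only consistent with independence, not a proof of it; part c) requires factoring the joint density.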

6.64

Refer to Exercise 6.63 and Example 6.14. Suppose that $Y_1$ has a gamma distribution with parameters $\alpha_1$ and $\beta$, that $Y_2$ is gamma distributed with parameters $\alpha_2$ and $\beta$, and that $Y_1$ and $Y_2$ are independent. Let $U_1 = Y_1 / (Y_1 + Y_2)$ and $U_2 = Y_1 + Y_2$.

a) Derive the joint density function for $U_1$ and $U_2$.

b) Show that the marginal distribution of $U_1$ is a beta distribution with parameters $\alpha_1$ and $\alpha_2$.

c) Show that the marginal distribution of $U_2$ is a gamma distribution with parameters $\alpha = \alpha_1 + \alpha_2$ and $\beta$.

d) Establish that $U_1$ and $U_2$ are independent.

6.65

Let $Z_1$ and $Z_2$ be independent standard normal random variables and $U_1 = Z_1$ and $U_2 = Z_1 + Z_2$.

a) Derive the joint density of $U_1$ and $U_2$.

b) Use Theorem 5.12 to give $E(U_1)$, $E(U_2)$, $V(U_1)$, $V(U_2)$, and $\operatorname{Cov}(U_1, U_2)$.

c) Are $U_1$ and $U_2$ independent? Why?

d) Refer to Section 5.10. Show that $U_1$ and $U_2$ have a bivariate normal distribution. Identify all the parameters of the appropriate bivariate normal distribution.
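The moments in part b) follow from linearity and independence: $E(U_1) = E(U_2) = 0$, $V(U_1) = 1$, $V(U_2) = 1 + 1 = 2$, and $\operatorname{Cov}(U_1, U_2) = \operatorname{Cov}(Z_1, Z_1 + Z_2) = V(Z_1) = 1$. A simulation sketch, assuming NumPy, against which these values can be compared:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
z1 = rng.standard_normal(n)
z2 = rng.standard_normal(n)
u1, u2 = z1, z1 + z2

# Since Z1 and Z2 are independent standard normals:
# E(U1) = E(U2) = 0, V(U1) = 1, V(U2) = 2, Cov(U1, U2) = V(Z1) = 1.
mean1, mean2 = u1.mean(), u2.mean()
var1, var2 = u1.var(), u2.var()
cov12 = np.cov(u1, u2)[0, 1]
print(mean1, mean2, var1, var2, cov12)
```

The nonzero covariance already answers part c): $U_1$ and $U_2$ cannot be independent.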

6.66

Let $Y_1$ and $Y_2$ have joint density function $f_{Y_1, Y_2}(y_1, y_2)$ and let $U_1 = Y_1 + Y_2$ and $U_2 = Y_2$.

a) Show that the joint density of $(U_1, U_2)$ is

$$f_{U_1, U_2}(u_1, u_2) = f_{Y_1, Y_2}(u_1 - u_2, u_2).$$

b) Show that the marginal density function for $U_1$ is

$$f_{U_1}(u_1) = \int_{-\infty}^{\infty} f_{Y_1, Y_2}(u_1 - u_2, u_2)\, du_2.$$

c) If $Y_1$ and $Y_2$ are independent, show that the marginal density function for $U_1$ is

$$f_{U_1}(u_1) = \int_{-\infty}^{\infty} f_{Y_1}(u_1 - u_2) f_{Y_2}(u_2)\, du_2.$$

That is, the density of $Y_1 + Y_2$ is the convolution of the densities $f_{Y_1}(\cdot)$ and $f_{Y_2}(\cdot)$.
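As a concrete check of part c): if $Y_1$ and $Y_2$ are independent Uniform$(0, 1)$ variables, the convolution integral should reproduce the triangular density of $Y_1 + Y_2$ given in Exercise 6.70. A numerical sketch, assuming NumPy:

```python
import numpy as np

# Density of Y1 (and of Y2): Uniform(0, 1).
def f_unif(y):
    return np.where((y > 0) & (y < 1), 1.0, 0.0)

def conv_density(u, m=200_000):
    """Midpoint-rule approximation of the convolution integral at u."""
    dt = 2.0 / m                      # integrate t over (0, 2)
    t = (np.arange(m) + 0.5) * dt
    return float((f_unif(u - t) * f_unif(t)).sum() * dt)

d1, d2 = conv_density(0.7), conv_density(1.4)
print(d1, d2)  # close to 0.7 and 2 - 1.4 = 0.6, the triangular density
```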

6.67

Let $(Y_1, Y_2)$ have joint density function $f_{Y_1, Y_2}(y_1, y_2)$ and let $U_1 = Y_1 / Y_2$ and $U_2 = Y_2$.

a) Show that the joint density of $(U_1, U_2)$ is

$$f_{U_1, U_2}(u_1, u_2) = f_{Y_1, Y_2}(u_1 u_2, u_2)\, |u_2|.$$

b) Show that the marginal density function for $U_1$ is

$$f_{U_1}(u_1) = \int_{-\infty}^{\infty} f_{Y_1, Y_2}(u_1 u_2, u_2)\, |u_2|\, du_2.$$

c) If $Y_1$ and $Y_2$ are independent, show that the marginal density function for $U_1$ is

$$f_{U_1}(u_1) = \int_{-\infty}^{\infty} f_{Y_1}(u_1 u_2) f_{Y_2}(u_2)\, |u_2|\, du_2.$$
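As a concrete check of part c): if $Y_1$ and $Y_2$ are independent exponentials with mean 1, the integral evaluates to $\int_0^\infty u_2 e^{-(1 + u_1) u_2}\, du_2 = 1/(1 + u_1)^2$, the density that appears for $Y_1 / Y_2$ in Exercise 6.71. A numerical sketch, assuming NumPy:

```python
import numpy as np

u = 0.8                             # point at which to evaluate f_{U1}
m = 400_000
dt = 50.0 / m                       # truncate the integral at t = 50
t = (np.arange(m) + 0.5) * dt       # midpoint grid on (0, 50)

# Integrand f_{Y1}(u * t) * f_{Y2}(t) * |t| with f(y) = exp(-y) for y > 0:
ratio_density = float((np.exp(-u * t) * np.exp(-t) * t).sum() * dt)

print(ratio_density, 1.0 / (1.0 + u) ** 2)  # the two agree closely
```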

6.68

Let $Y_1$ and $Y_2$ have joint density function

$$f_{Y_1, Y_2}(y_1, y_2) = \begin{cases} 8 y_1 y_2, & 0 \leq y_1 < y_2 \leq 1,\\ 0, & \text{otherwise,} \end{cases}$$

and $U_1 = Y_1 / Y_2$ and $U_2 = Y_2$.

a) Derive the joint density function for $(U_1, U_2)$.

b) Show that $U_1$ and $U_2$ are independent.

6.69

The random variables $Y_1$ and $Y_2$ are independent, both with density

$$f(y) = \begin{cases} \frac{1}{y^2}, & 1 < y,\\ 0, & \text{otherwise.} \end{cases}$$

Let $U_1 = \frac{Y_1}{Y_1 + Y_2}$ and $U_2 = Y_1 + Y_2$.

a) What is the joint density of $Y_1$ and $Y_2$?

b) Show that the joint density of $U_1$ and $U_2$ is given by

$$f_{U_1, U_2}(u_1, u_2) = \begin{cases} \frac{1}{u_1^2 (1 - u_1)^2 u_2^3}, & 1 / u_1 < u_2,\ 0 < u_1 < 1/2 \text{ and}\\ & 1 / (1 - u_1) < u_2,\ 1/2 \leq u_1 \leq 1,\\ 0, & \text{otherwise.} \end{cases}$$

c) Sketch the region where $f_{U_1, U_2}(u_1, u_2) > 0$.

d) Show that the marginal density of $U_1$ is

$$f_{U_1}(u_1) = \begin{cases} \frac{1}{2 (1 - u_1)^2}, & 0 \leq u_1 < 1/2,\\ \frac{1}{2 u_1^2}, & 1/2 \leq u_1 \leq 1,\\ 0, & \text{otherwise.} \end{cases}$$

e) Are $U_1$ and $U_2$ independent? Why or why not?

6.70

Suppose that $Y_1$ and $Y_2$ are independent and that both are uniformly distributed on the interval $(0, 1)$, and let $U_1 = Y_1 + Y_2$ and $U_2 = Y_1 - Y_2$.

a) Show that the joint density of $U_1$ and $U_2$ is given by

$$f_{U_1, U_2}(u_1, u_2) = \begin{cases} 1/2, & -u_1 < u_2 < u_1,\ 0 < u_1 < 1 \text{ and}\\ & u_1 - 2 < u_2 < 2 - u_1,\ 1 \leq u_1 < 2,\\ 0, & \text{otherwise.} \end{cases}$$

b) Sketch the region where $f_{U_1, U_2}(u_1, u_2) > 0$.

c) Show that the marginal density of $U_1$ is

$$f_{U_1}(u_1) = \begin{cases} u_1, & 0 < u_1 < 1,\\ 2 - u_1, & 1 \leq u_1 < 2,\\ 0, & \text{otherwise.} \end{cases}$$

d) Show that the marginal density of $U_2$ is

$$f_{U_2}(u_2) = \begin{cases} 1 + u_2, & -1 < u_2 < 0,\\ 1 - u_2, & 0 \leq u_2 < 1,\\ 0, & \text{otherwise.} \end{cases}$$

e) Are $U_1$ and $U_2$ independent? Why or why not?

6.71

Suppose that $Y_1$ and $Y_2$ are independent exponentially distributed random variables, both with mean $\beta$, and define $U_1 = Y_1 + Y_2$ and $U_2 = Y_1 / Y_2$.

a) Show that the joint density of $(U_1, U_2)$ is

$$f_{U_1, U_2}(u_1, u_2) = \begin{cases} \frac{1}{\beta^2} u_1 e^{-u_1 / \beta} \frac{1}{(1 + u_2)^2}, & 0 < u_1,\ 0 < u_2,\\ 0, & \text{otherwise.} \end{cases}$$

b) Are $U_1$ and $U_2$ independent? Why or why not?