MaPal93

MaplePrimes Activity


These are questions asked by MaPal93

Linear combinations of random variables: why does Maple not "inherit" the distributional assumptions when adding two random variables?

In the script attached below, I first define a vector of two uncorrelated Gaussian RVs, [epsilon[1], epsilon[2]], and then a vector of two correlated Gaussian RVs, [nu[1], nu[2]]. Both epsilon[1] and epsilon[2] are also uncorrelated with nu[1] and nu[2].

Now I want to create a vector S of two correlated Gaussian RVs, where S[1] = nu[1] + epsilon[1] and S[2] = nu[2] + epsilon[2]. The means and variances of [S[1], S[2]] come out correct, but the covariance (the off-diagonal element of the covariance matrix) comes out wrong. How can I do this properly in Maple?

Please check this script:

RVs_sum.mw
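For concreteness, here is a stripped-down sketch (illustrative parameters, not my actual worksheet) of the construction I would expect to work. Since distinct RandomVariables are treated as independent by the Statistics package, the correlation of [nu[1], nu[2]] has to be encoded by building them from independent standard normals:

    with(Statistics):
    rho := 1/2:                            # illustrative correlation of nu[1], nu[2]
    Z1 := RandomVariable(Normal(0, 1)):
    Z2 := RandomVariable(Normal(0, 1)):
    eps1 := RandomVariable(Normal(0, 1)):
    eps2 := RandomVariable(Normal(0, 1)):
    nu1 := Z1:
    nu2 := rho*Z1 + sqrt(1 - rho^2)*Z2:    # correlated with nu1 by construction
    S1 := nu1 + eps1:
    S2 := nu2 + eps2:
    Covariance(S1, S2);                    # 1/2 = rho, as expected
    Variance(S1), Variance(S2);            # 2, 2: Var(nu) + Var(eps)

Is this kind of construction the right way to make the off-diagonal element come out correctly, or is there a more direct way to impose the covariance structure on S?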

 

Hello everyone,

Here is a stylized version of my problem. Given three normally distributed random variables {A, B, C}, I want to find the {X_1, X_2, X_3} that maximize the following expression (gamma is a constant, and A is a linear combination of other normally distributed random variables):

max{ E[A|B,C] - (gamma/2)*Var[A|B,C] }

All the details are here: 070423_Optimization.mw

In particular, I am seeking your help to:

  1. Correlate three random variables (an example of the procedure for correlating two random variables is already provided in the script).
  2. Verify my understanding of the linear projection theorem (https://stats.stackexchange.com/questions/30588/deriving-the-conditional-distributions-of-a-multivariate-normal-distribution) in two dimensions, that is, how to compute conditional means and variances of the form E[X|Y,Z] and V[X|Y,Z].
  3. Implement and adapt the linear projection theorem to my problem in Maple.
  4. Combine everything to obtain Expr = E[A|B,C] - (gamma/2)*Var[A|B,C], and find the optimal {X_1, X_2, X_3} by solving the linear system of three equations in three unknowns obtained by setting the partial derivatives of Expr with respect to {X_1, X_2, X_3} to zero (a sketch of this step follows the list).
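For point 4, the pattern I have in mind is sketched below, with an illustrative quadratic standing in for the Expr that steps 1-3 would produce (the real expression comes from the projection theorem):

    # Placeholder objective; the actual Expr = E[A|B,C] - (gamma/2)*Var[A|B,C]
    # would be assembled from the conditional moments of steps 1-3.
    Expr := X[1]*X[2] - (gamma/2)*(X[1]^2 + X[2]^2 + X[3]^2) + X[3]:
    focs := {seq(diff(Expr, X[i]) = 0, i = 1 .. 3)}:
    solve(focs, {X[1], X[2], X[3]});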

 

In relation to point 2, did I correctly interpret the matrix form of the three-dimensional version of the linear projection theorem?
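For reference, the statement I am working from (this is the standard result in the linked thread): if $X$ and $Y = (Y_1, Y_2, Y_3)^\top$ are jointly Gaussian, then

$$
E[X \mid Y] = \mu_X + \Sigma_{XY}\,\Sigma_{YY}^{-1}(Y - \mu_Y), \qquad
\operatorname{Var}[X \mid Y] = \Sigma_{XX} - \Sigma_{XY}\,\Sigma_{YY}^{-1}\,\Sigma_{YX},
$$

where $\Sigma_{XY}$ is the $1 \times 3$ row of covariances between $X$ and the $Y_j$, and $\Sigma_{YY}$ is the $3 \times 3$ covariance matrix of $Y$.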

 

In relation to point 3, I attach a stylized script for the three-dimensional version (note that I need the two-dimensional version for my problem): LinearProjectionTheorem_3dimensions.mw. Assuming my interpretation of the theorem is correct:

  • Did I correctly implement E[X_2|Y_1,Y_2,Y_3] and E[X_3|Y_1,Y_2,Y_3] as in the matrix form above?
  • How do I adapt the implementation to also cover E[X_1|Y_1,Y_2,Y_3] and V[X_1|Y_1,Y_2,Y_3], V[X_2|Y_1,Y_2,Y_3], and V[X_3|Y_1,Y_2,Y_3]? (A sketch of my understanding follows the list.)
  • How do I apply it to the random variables in my script to eventually find the optimal {X_1, X_2, X_3}?
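To make the implementation question concrete, here is a bare-bones sketch with placeholder symbols (muX, SigmaXX, s1..s3, v[i,j], m1..m3 are illustrative, standing in for whatever my worksheet defines):

    with(LinearAlgebra):
    SigmaXY := <s1 | s2 | s3>:                               # 1x3 row: Cov(X, Y_j)
    SigmaYY := Matrix(3, 3, symbol = v, shape = symmetric):  # 3x3 Var(Y)
    muY := <m1, m2, m3>:
    Y := <Y1, Y2, Y3>:                                       # conditioning variables
    # Linear projection theorem in matrix form:
    condMean := muX + SigmaXY . MatrixInverse(SigmaYY) . (Y - muY):
    condVar  := SigmaXX - SigmaXY . MatrixInverse(SigmaYY) . Transpose(SigmaXY):

Is this the right template, instantiated once per X_i with the corresponding SigmaXY row and SigmaXX entry?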

 

You can play around with my script 070423_Optimization.mw and send me the updated version. The problem I am trying to solve is quite convoluted, so let me know if you need any further clarification. Thanks a lot!

I simplified my setup as much as possible. Please check lambdas.mw.

While I think I managed to obtain some analytical solutions, they look a bit strange for two reasons:

1) They do not depend on the exogenous parameters in the way I expected: mu_jk and mu_ki should depend only on q_0jk and q_0ki, while lambda_jk and lambda_ki should depend only on BigSigma_0jk, BigSigma_0ki, smallsigma_ujk, and smallsigma_uki.

2) They depend strongly on q_0jk and q_0ki: if I set these two parameters to zero, or to the same value, I can no longer obtain solutions (especially for the lambdas). Does this mean that they are not really "free" parameters?

I noticed that if I combine the two equations from the FOCs of mu_jk and mu_ki into one system (is this even legitimate?), I get q_0jk = -q_0ki*(lambda_jk/lambda_ki). This is also easy to see if I apply the calibration at the beginning of the script (uncomment all the parameters except q_0jk and q_0ki) and then divide lambda_jk by lambda_ki. Why does this relation hold?
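To illustrate what I mean by combining the FOCs, here is a toy system (NOT my worksheet's equations; they only mimic the structure of a shared unknown appearing in both conditions):

    # Toy FOCs sharing one unknown x; eliminating x leaves a pure
    # parameter restriction of exactly the kind I am seeing.
    eq_jk := lambda_jk*x + q_0jk = 0:
    eq_ki := lambda_ki*x - q_0ki = 0:
    eliminate({eq_jk, eq_ki}, {x});
    # leftover relation: q_0jk*lambda_ki + q_0ki*lambda_jk = 0,
    # i.e. q_0jk = -q_0ki*(lambda_jk/lambda_ki)

If my two FOCs likewise pin down a common quantity, that would explain why q_0jk and q_0ki cannot be chosen independently, but I would like to confirm this.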

I am quite sure the computations are correct (I checked them multiple times), but I am now questioning my setup. In what ways does my setup differ from the one below?

Essentially, I am trying to extend the following problem. As shown below, mu depends only on p_0 (the one-dimensional equivalent of my q_0jk and q_0ki), and lambda depends only on BigSigma_0 and smallsigma_u (the one-dimensional equivalents of my BigSigma_0jk, BigSigma_0ki, smallsigma_ujk, and smallsigma_uki).

Thank you.

I'd like to plot row i of M_jk (left axis) against row i of M_ki (right axis), where the x-axis is i, which varies discretely from 1 to 10 (the number of runs). See the attached screenshot below for details. My result is off in terms of values. How do I fix my plot command?
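For what it is worth, here is the shape of plot I am after, mocked up with plots:-dualaxisplot on random data (the 10x1 matrices below are hypothetical stand-ins for my M_jk and M_ki, whose scales differ):

    with(plots):
    # One value per run i = 1..10; the scales differ, hence the two axes.
    M_jk := LinearAlgebra:-RandomMatrix(10, 1, generator = 0 .. 100) / 10.0:
    M_ki := LinearAlgebra:-RandomMatrix(10, 1, generator = 0 .. 100) / 1000.0:
    p1 := plot([seq([i, M_jk[i, 1]], i = 1 .. 10)], style = pointline, color = blue):
    p2 := plot([seq([i, M_ki[i, 1]], i = 1 .. 10)], style = pointline, color = red):
    dualaxisplot(p1, p2);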

 

matrix_inverse.mw

Attached is my script. The execution of the worksheet gets stuck at the MatrixInverse(M) step. What do you suggest to speed up the computation?

As you can see, my matrix M is a 3x3 symmetric matrix, but its entries are quite convoluted. Eventually, I need to multiply the resulting inverse by another (row) vector.
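In case it helps to know the end goal: symbolically I only need v . M^(-1) for a row vector v, so perhaps the explicit inverse can be avoided altogether. A sketch with placeholder entries (m[i,j], a, b, c are illustrative, not my actual expressions):

    with(LinearAlgebra):
    M := Matrix(3, 3, symbol = m, shape = symmetric):   # placeholder symmetric 3x3
    v := <a | b | c>:                                   # placeholder row vector
    # Since M is symmetric, v . M^(-1) = Transpose(LinearSolve(M, Transpose(v))),
    # so a single LinearSolve replaces MatrixInverse plus the product.
    result := Transpose(LinearSolve(M, Transpose(v))):
    # For a 3x3, the adjugate form may also be cheaper than MatrixInverse:
    Minv := Adjoint(M) / Determinant(M):

Would either of these routes be expected to run faster on a matrix this messy?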

Thank you for looking into this!
