PatrickT

MaplePrimes Activity


These are replies submitted by PatrickT

Thanks a lot Robert, yes that works very well (the output is huge, but I won't need all of it). I won't be able to think about this much until next week, but you have given me all I need here to solve this little challenge.

thanks!

So after a little more reading, I now follow your suggestions, Robert.

There seems to be no tractable way to compute the MatrixExponential symbolically in the fully general case:

Q := Matrix([
  [-r[2, 3] - r[1, 2] , r[2, 3] , r[1, 2] , 0 , 0 , 0],
  [r[3, 2] , -r[3, 2] - r[1, 3] , 0 , r[1, 3] , 0 , 0],
  [r[2, 1] , 0 , -r[2, 1] - r[1, 3] , 0 , r[1, 3] , 0],
  [0 , r[3, 1] , 0 , -r[3, 1] - r[1, 2] , 0 , r[1, 2]],
  [0 , 0 , r[3, 1] , 0 , -r[3, 1] - r[2, 3] , r[2, 3]],
  [0 , 0 , 0 , r[2, 1] , r[3, 2] , -r[2, 1] - r[3, 2]]
]);

(Earlier I inadvertently changed the notation from Robert's choice of Q to P; I have reverted to Q.)

So I considered special cases of interest:

1. the case where all the cars are identical, so all the rates are equal to r:

  parameters1 := [r[1,2]=r, r[2,1]=r, r[1,3]=r, r[3,1]=r, r[2,3]=r, r[3,2]=r];

2. the case where there are two types of cars.

The case of identical cars can be handled well by Maple -- the eigenvalues and eigenvectors suggest that the MatrixExponential calculations simplify nicely:
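
For reference, the decomposition shown below can presumably be reproduced with a call along the following lines (a sketch, not re-run here); Eigenvectors returns the eigenvalues as a Vector and the corresponding eigenvectors as the columns of a Matrix:

  e, V := LinearAlgebra:-Eigenvectors(eval(Q, parameters1));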

e, V := Vector(6, {(1) = 0, (2) = -3*r, (3) = -3*r, (4) = -r, (5) = -r, (6) = -4*r}), Matrix(6, 6, {(1, 1) = 1, (1, 2) = 1, (1, 3) = 0, (1, 4) = -1, (1, 5) = 0, (1, 6) = -1, (2, 1) = 1, (2, 2) = 0, (2, 3) = 1, (2, 4) = 0, (2, 5) = -1, (2, 6) = 1, (3, 1) = 1, (3, 2) = -1, (3, 3) = -1, (3, 4) = -1, (3, 5) = 1, (3, 6) = 1, (4, 1) = 1, (4, 2) = -1, (4, 3) = -1, (4, 4) = 1, (4, 5) = -1, (4, 6) = -1, (5, 1) = 1, (5, 2) = 0, (5, 3) = 1, (5, 4) = 0, (5, 5) = 1, (5, 6) = -1, (6, 1) = 1, (6, 2) = 1, (6, 3) = 0, (6, 4) = 1, (6, 5) = 0, (6, 6) = 1});

Selecting one element for illustration, e.g.

MatrixExponential(t*eval(Q,parameters1))[1,1];
        1/6 + (1/6)*exp(-4*t*r) + (1/3)*exp(-3*t*r) + (1/3)*exp(-t*r)
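
As a quick sanity check on this entry (a small sketch; p11 is just my name for the expression above):

  p11 := 1/6 + (1/6)*exp(-4*t*r) + (1/3)*exp(-3*t*r) + (1/3)*exp(-t*r):
  eval(p11, t = 0);                            # returns 1: at t = 0 the initial ordering is certain
  limit(p11, t = infinity) assuming r > 0;     # returns 1/6: in the long run all 6 orderings are equally likely

This matches the intuition that, with identical cars, every ordering of the three cars is equally likely in the long run.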

The case of two types of car, by contrast, is quite a mess in general, so I looked at special cases:

2. the slow cars can never overtake the fast cars, the fast cars can overtake the slow cars at rate R, and cars of the same type can overtake each other at rate r, where r < R:

  parameters2 := [r[1,2]=R, r[2,1]=0, r[1,3]=R, r[3,1]=0, r[2,3]=r, r[3,2]=r];

3. as case 2, but with the possibility that slow cars overtake fast cars at the low rate r, with r < R:

  parameters3 := [r[1,2]=R, r[2,1]=r, r[1,3]=R, r[3,1]=r, r[2,3]=r, r[3,2]=r];

Case 2 can be handled as well as case 1, and gives, for example:

MatrixExponential(t*eval(Q,parameters2))[1,1];
(1/2)*exp(-t*R)+(1/2)*exp(-t*(2*r+R))

Case 3 crashes Maple, but numerical examples can be handled easily, e.g.

P3 := LinearAlgebra:-MatrixExponential(t*eval(eval(Q, parameters3), {r = 0.01, R = 0.1})):
plot(P3[1,1], t = 0..50);
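
To read probabilities off this numerical solution at a specific time, the matrix can be evaluated pointwise; a minimal sketch (the time t = 20 and the name P3at20 are mine):

  P3at20 := map(eval, P3, t = 20):             # entry [i,j] is Pr(state j at t = 20 | state i at t = 0)
  P3at20[1, 1];                                # the plotted quantity, evaluated at t = 20
  add(P3at20[1, j], j = 1 .. 6);               # each row should sum to 1, up to rounding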

After 10 minutes, the symbolic computation still hadn't returned anything in the case of N = 3 cars and 6 general transition rates, despite the many zeros in the transition matrix.

Thus it would seem to be rather difficult to get a compact formula in the general case of N cars and general transition rates r(i,j).

The next thing I need to look into is how to use the information contained in the matrix exponential to compute probabilities about different race outcomes, e.g. the probability that car i finishes in the jth position while car j finishes in the ith position, etc.

At this point in my reading, I think I understand the basics of discrete-time Markov chains, as explained in Professor Byron Schmuland's notes, summarized below:

Definition.
The transition matrix of the Markov chain (X_n), n = 0, 1, 2, ..., is the N x N matrix P whose (i,j)th entry is p(i,j) = Pr(X_{n+1} = j | X_n = i).

Theorem.
The conditional probability Pr(X_n = j | X_0 = i) is the (i,j)th entry of the matrix P^n.

Let f_0 = (f_0(1), ..., f_0(N)) be the 1 x N row vector whose ith entry is Pr(X_0 = i).
The vector f_0 gives the distribution of the random variable X_0.
The vector f_n = f_0 P^n gives the distribution of X_n.

To find the probability that the process follows a particular path, multiply the initial probability by the transition probabilities along the path.

For example, consider a two-state Markov chain with states 0 and 1.

What is the probability that the process begins with 01010?
Pr(X_0 = 0, X_1 = 1, X_2 = 0, X_3 = 1, X_4 = 0) = f_0(0) p(0,1) p(1,0) p(0,1) p(1,0)

What is the probability that the process begins with 01010, conditional on starting in state zero, X_0 = 0?
Pr(X_1 = 1, X_2 = 0, X_3 = 1, X_4 = 0 | X_0 = 0) = p(0,1) p(1,0) p(0,1) p(1,0)

What is the probability that X_4 = 0, conditional on X_0 = 0?
Instead of one path, this probability includes all possible paths that start in state 0 at time 0, and end up in state 0 at time 4.
The vector f_4 = f_0 P^4 gives the distribution of X_4.
If we start in state zero, then f_0 = (1,0).
If instead we flip a coin to choose the starting position, then f_0 = (1/2, 1/2).
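
A minimal Maple sketch of this two-state example (the transition probabilities 0.8, 0.2, 0.6, 0.4 are made up for illustration, and states 0 and 1 correspond to row/column indices 1 and 2):

  with(LinearAlgebra):
  P2 := Matrix([[0.8, 0.2], [0.6, 0.4]]):      # hypothetical transition matrix p(i,j)
  f0 := Vector[row]([1, 0]):                   # start in state 0 with certainty
  f0[1]*P2[1, 2]*P2[2, 1]*P2[1, 2]*P2[2, 1];   # Pr(X_0 = 0, X_1 = 1, X_2 = 0, X_3 = 1, X_4 = 0)
  f4 := f0 . P2^4;                             # distribution of X_4, all paths combined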

So now we must turn to continuous-time Markov chains, as the unwritten rules of the challenge were that the race must be run in continuous time. I suspect that the matrix exp(t*P) will make an appearance there. To be continued...

So to get me started I'll read this:

www.stat.ualberta.ca/~schmu/stat580/notes-2010.pdf

and if I really want to get to the bottom of it all, I'll try these:

https://netfiles.uiuc.edu/meyn/www/spm.html

http://dspace.mit.edu/handle/1721.1/4852

A minor suggestion. Currently shown on the front page are:

the title, the date, the identity of the poster, the product, the tags, and the number of thumbs-up and answers.

I'd find it useful to also see the identity of the last poster. Here's why. I ask a question, someone requests clarification, I clarify and then check back to see if a reply has been posted. If I appear as the last to have posted within the thread, then I can infer that there has been no reply to my last post. Currently I'd need to click on the link to find out, and if it's a long exchange, loading the page can take a while.

I'd also find it useful to either remove the distinction between an answer and a comment or to have a counter for both. I find that the so-called "comments" often contain information that is just as useful as that in "answers." And from the front page, only the "answers" are counted, not the "comments". So if I know that there have been 2 answers to my question, I cannot currently use that information to infer that no recent reply has been posted, because someone could have replied as a "comment".

I found some relevant code to assist in the calculations. I'm still in the middle of learning this, so I'll record the relevant bits here for future reference.

The following was found on the Maplesoft application website, slightly edited:

SteadyStateVector := proc(P::Matrix)
  options `Copyright (c) Maplesoft, a division of Waterloo Maple Inc., 2004`;
  description "Given an n x n transition matrix P,
    let I be the n x n identity matrix and Q = P - I,
    let e be the n-vector of entries equal to 1,
    let b be the (n+1)-vector with a 1 in position n+1 and 0 elsewhere.
    The steady-state vector of the Markov chain pi solves QT*pi = b
    the solution vector pi must have components summing to 1.";
  local n, Q, e, QT, b, i;
  n := LinearAlgebra:-Dimension(P)[1];
  Q := P - LinearAlgebra:-IdentityMatrix(n);
  e := <seq(1, i=1..n)>;
  QT := LinearAlgebra:-Transpose(<Q | e>);
  b := LinearAlgebra:-UnitVector(n+1, n+1);
  return LinearAlgebra:-LeastSquares(QT, b);
end proc:

It computes the steady-state vector, which is useful. Hence, given some parameters:

p := [r[1,2]=0.1,r[2,1]=0.1,r[1,3]=0.15,r[3,1]=0.05,r[2,3]=0.15,r[3,2]=0.2];
and the Matrix given by Robert:
P := Matrix([
  [-r[2, 3] - r[1, 2] , r[2, 3] , r[1, 2] , 0 , 0 , 0],
  [r[3, 2] , -r[3, 2] - r[1, 3] , 0 , r[1, 3] , 0 , 0],
  [r[2, 1] , 0 , -r[2, 1] - r[1, 3] , 0 , r[1, 3] , 0],
  [0 , r[3, 1] , 0 , -r[3, 1] - r[1, 2] , 0 , r[1, 2]],
  [0 , 0 , r[3, 1] , 0 , -r[3, 1] - r[2, 3] , r[2, 3]],
  [0 , 0 , 0 , r[2, 1] , r[3, 2] , -r[2, 1] - r[3, 2]]
]);

pi := SteadyStateVector(eval(P,p));

yields:

.145287987160869,
.127631549748810,
.132265311441818,
.153045024592411,
.158913954580861,
.139999029618087.
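
Note that SteadyStateVector is described in terms of a discrete-time transition matrix P, whereas Robert's matrix is a continuous-time generator whose rows sum to 0; for a generator, the stationary vector solves pi.Q = 0 with entries summing to 1, which can also be obtained directly along these lines (a sketch; Qnum, A, b, piv are my names):

  with(LinearAlgebra):
  Qnum := eval(P, p):                           # the generator with the numerical rates above
  A := Transpose(<Qnum | Vector(6, fill = 1)>): # Transpose(Qnum) stacked on top of a row of ones
  b := UnitVector(7, 7):                        # right-hand side: six zeros and a final 1
  piv := LeastSquares(A, b);                    # solves Transpose(Qnum).pi = 0 together with sum(pi) = 1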

I hope I can get back to this soon, but the next few days are going to be busy with other things...

> No, I'm saying that in the long run (with all cars identical) all permutations are equally likely, and each car is equally likely to be in any of the positions.

thanks Robert, that makes sense.

> I don't understand how you can say "P(i,j) was the probability that the i-car overtakes the j-car during any one lap of the race".  Surely this depends on the positions of the cars. You can't overtake a car that's already behind you, and you're unlikely to overtake a car that's many positions ahead of you.  But maybe you meant that as the probability that car number i overtakes car number j in one lap when car j is directly ahead of car i.

you're absolutely right, that's what I meant, sloppy writing on my part.

> The generator for the continuous-time Markov chain is then an (N!) x (N!) matrix Q with positive entries for these transitions, and diagonal elements that make the sum of the entries in each row 0.

Thanks Robert, that's a perfect start. I'll need a bit of time to play around with this, to make sure I understand it. I've found some Maple tutorials on Markov chains that seem quite relevant, and when I google Maple Markov chains your name comes up a lot, so I'll follow those links too. I'll try to write back within a week or so with a progress report.

once again Robert, thank you so much.

Yes, the integers run from 1 to N, as there are N cars and N starting positions, and i and j refer to the starting positions of the cars currently racing (the probability of a crash may be added to the model later on!).
