Education

Teaching and learning about math, Maple and MapleSim

The Zassenhaus formula and the algebra of the Pauli matrices

 

Edgardo S. Cheb-Terrab1 and Bryan C. Sanctuary2

(1) Maplesoft

(2) Department of Chemistry, McGill University, Montreal, Quebec, Canada

 

  


The implementation of the Pauli matrices and their algebra was reviewed during 2018, including the algebraic manipulation of nested commutators, resulting in faster computations using simpler and more flexible input. As frequently happens, improvements of this kind suddenly turn research problems presented in the literature as intractable in practice into tractable ones.

  

As an illustration, we tackle below the derivation of the coefficients entering the Zassenhaus formula shown in section 4 of [1] for the Pauli matrices up to order 10 (results in the literature go up to order 5). The computation presented can be reused to compute these coefficients up to any desired higher order (hardware limitations may apply). A number of examples which exploit this formula and its dual, the Baker-Campbell-Hausdorff formula, occur in connection with the Weyl prescription for converting a classical function to a quantum operator (see sec. 5 of [1]), as well as when solving the eigenvalue problem for classes of mathematical-physics partial differential equations [2].
To reproduce the results below - a worksheet with these contents is linked at the end - you need to have your Maple 2018.2.1 updated with the
Maplesoft Physics Updates version 280 or higher.

References

 
  

[1] R.M. Wilcox, "Exponential Operators and Parameter Differentiation in Quantum Physics", Journal of Mathematical Physics, V.8, 4, 1967.

  

[2] S. Steinberg, "Applications of the Lie algebraic formulas of Baker, Campbell, Hausdorff, and Zassenhaus to the calculation of explicit solutions of partial differential equations", Journal of Differential Equations, V.26, 3, 1977.

  

[3] K. Huang, "Statistical Mechanics", John Wiley & Sons, Inc. 1963, p217, Eq.(10.60).

 

Formulation of the problem

The Zassenhaus formula expresses exp(lambda*(A+B)) as an infinite product of exponential operators involving nested commutators of increasing complexity

"(e)^(lambda (A+B))   =    (e)^(lambda A) * (e)^(lambda B) * (e)^(lambda^2 C[2]) * (e)^(lambda^3 C[3]) *  ...  "
                                                                       =   exp(lambda*A)*exp(lambda*B)*exp(-(1/2)*lambda^2*%Commutator(A, B))*exp((1/6)*lambda^3*(%Commutator(B, %Commutator(A, B))+2*%Commutator(A, %Commutator(A, B))))

Given A, B and their commutator E = %Commutator(A, B), if A and B commute with E, then C[n] = 0 for n >= 3 and the Zassenhaus formula reduces to the product of the first three exponentials above. The interest here is in the general case, when %Commutator(A, E) <> 0 and %Commutator(B, E) <> 0, and the goal is to compute the Zassenhaus coefficients C[n] in terms of A, B for arbitrary finite n. Following [1], in that general case, differentiating the Zassenhaus formula with respect to lambda and multiplying from the right by exp(-lambda*(A+B)) one obtains

"A+B=A+(e)^(lambda A) B (e)^(-lambda A)+(e)^(lambda A)+(e)^(lambda B) 2 lambda C[2] (e)^(-lambda B) (e)^(-lambda A)+ ..."

This is an intricate formula, which however (see eq.(4.20) of [1]) can be represented in abstract form as

 

"0=((&sum;)(lambda^n)/(n!) {A^n,B})+2 lambda ((&sum;) (&sum;)(lambda^(n+m))/(n! m!) {A^m,B^n,C[2]})+3 lambda^2 ((&sum;) (&sum;) (&sum;)(lambda^(n+m+k))/(n! m! k!) {A^k,B^m,(C[2])^n,C[3]})+ ..."

from where an equation to be solved for each C[n] is obtained by equating to 0 the coefficient of lambda^(n-1). In this formula, the repeated commutator bracket is defined inductively in terms of the standard commutator %Commutator(A, B) by

{B, A^0} = B, {B, A^(n+1)} = %Commutator(A, {B, A^n})

{C[j], A^0, B^n} = {C[j], B^n}, {C[j], A^(m+1), B^n} = %Commutator(A, {C[j], A^m, B^n})

and higher-order repeated-commutator brackets are defined similarly. For example, taking the coefficients of lambda and lambda^2 and solving each of them for C[2] and C[3], respectively, one obtains

C[2] = -(1/2)*%Commutator(A, B)

C[3] = (1/3)*%Commutator(B, %Commutator(A, B))+(1/6)*%Commutator(A, %Commutator(A, B))
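
As a quick consistency check of the first of these (added here; it follows from matching powers of lambda in the product formula at the top of this section), the coefficient of lambda^2 on both sides gives (1/2)*(A+B)^2 = (1/2)*A^2 + A*B + (1/2)*B^2 + C[2], so that C[2] = (1/2)*(B*A - A*B) = -(1/2)*%Commutator(A, B), in agreement with the expression above.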

This method is used in [3] to treat quantum deviations from the classical limit of the partition function for both a Bose-Einstein and Fermi-Dirac gas. The complexity of the computation of C[n] grows rapidly and in the literature only the coefficients up to C[5] have been published. Taking advantage of developments in the Physics package during 2018, below we show the computation up to C[10] and provide a compact approach to compute them up to arbitrary finite order.

 

Computing up to C[10]

Set the signature of spacetime so that its space part is equal to +++, and use lowercase Latin letters to represent space indices. Set also A, B and C[n] to represent quantum operators

with(Physics)

Setup(op = {A, B, C}, signature = `+++-`, spaceindices = lowercaselatin)

`* Partial match of  '`*op*`' against keyword '`*quantumoperators*`' `

 

_______________________________________________________

 

[quantumoperators = {A, B, C}, signature = `+ + + -`, spaceindices = lowercaselatin]

(1)

To illustrate the computation up to C[10], a convenient example, where the commutator algebra is closed, consists of taking A and B as Pauli matrices which, multiplied by the imaginary unit, form a basis of the su(2) algebra, which in turn exponentiates to the relevant special unitary group SU(2). The algebra of the Pauli matrices involves a commutator and an anticommutator

Library:-DefaultAlgebraRules(Psigma)

%Commutator(Physics:-Psigma[i], Physics:-Psigma[j]) = (2*I)*Physics:-LeviCivita[i, j, k]*Physics:-Psigma[k], %AntiCommutator(Physics:-Psigma[i], Physics:-Psigma[j]) = 2*Physics:-KroneckerDelta[i, j]

(2)

Assign now A and B to two Pauli matrices, for instance

A := Psigma[1]

Physics:-Psigma[1]

(3)

B := Psigma[3]

Physics:-Psigma[3]

(4)

Next, to extract the coefficient of lambda^n from

"0=((&sum;)(lambda^n)/(n!) {A^n,B})+2 lambda ((&sum;) (&sum;)(lambda^(n+m))/(n! m!) {A^m,B^n,C[2]})+3 lambda^2 ((&sum;) (&sum;) (&sum;)(lambda^(n+m+k))/(n! m! k!) {A^k,B^m,(C[2])^n,C[3]})+..."

and to solve it for C[n+1], we note that each term has a factor lambda^m multiplying a sum, so we only need to take into account the first n+1 terms (sums), and in each sum replace infinity by the corresponding n - m. For example, given C[2] = -(1/2)*%Commutator(A, B), to compute C[3] we only need to compute these first three terms:

0 = Sum(lambda^n*{B, A^n}/factorial(n), n = 1 .. 2)+2*lambda*(Sum(Sum(lambda^(n+m)*{C[2], A^m, B^n}/(factorial(n)*factorial(m)), n = 0 .. 1), m = 0 .. 1))+3*lambda^2*(Sum(Sum(Sum(lambda^(n+m+k)*{C[3], A^k, B^m, C[2]^n}/(factorial(n)*factorial(m)*factorial(k)), n = 0 .. 0), m = 0 .. 0), k = 0 .. 0))

then solving for C[3] one gets C[3] = (1/3)*%Commutator(B, %Commutator(A, B))+(1/6)*%Commutator(A, %Commutator(A, B)).

Also, since to compute C[n] we only need the coefficient of lambda^(n-1), it is not necessary to compute all the terms of each multiple sum. One way of restricting the multiple sums to only one power of lambda consists of using multi-index summation, available in the Physics package (see Physics:-Library:-Add). For that purpose, redefine sum to extend its functionality with multi-index summation

Setup(redefinesum = true)

[redefinesum = true]

(5)
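
As a small illustration (added here, not part of the original worksheet, using plain commutative symbols x and y), an equation such as n + m = 2 given as the summation range means summing over all non-negative integer values of the indices that satisfy it:

sum(x^n*y^m/(factorial(n)*factorial(m)), n + m = 2)

which should return (1/2)*x^2 + x*y + (1/2)*y^2, that is, the three terms with (n, m) equal to (2, 0), (1, 1) and (0, 2).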

Now we can represent the same computation of C[3] without multiple sums and without computing unnecessary terms as

0 = Sum(lambda^n*{B, A^n}/factorial(n), n = 1)+2*lambda*(Sum(lambda^(n+m)*{C[2], A^m, B^n}/(factorial(n)*factorial(m)), n+m = 1))+3*lambda^2*(Sum(lambda^(n+m+k)*{C[3], A^k, B^m, C[2]^n}/(factorial(n)*factorial(m)*factorial(k)), n+m+k = 0))

Finally, we need a computational representation for the repeated commutator bracket 

{B, A^0} = B, {B, A^(n+1)} = %Commutator(A, {B, A^n})

One way of representing this commutator bracket operation is defining a procedure, say F, with a cache to avoid recomputing lower order nested commutators, as follows

F := proc (A, B, n) options operator, arrow; if n::negint then 0 elif n = 0 then B elif n::posint then %Commutator(A, F(A, B, n-1)) else 'F(A, B, n)' end if end proc

proc (A, B, n) options operator, arrow; if n::negint then 0 elif n = 0 then B elif n::posint then %Commutator(A, F(A, B, n-1)) else 'F(A, B, n)' end if end proc

(6)

Cache(procedure = F)

 

For example,

F(A, B, 1)

%Commutator(Physics:-Psigma[1], Physics:-Psigma[3])

(7)

F(A, B, 2)

%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], Physics:-Psigma[3]))

(8)

F(A, B, 3)

%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], Physics:-Psigma[3])))

(9)

We can now set the value of C[2]

C[2] := -(1/2)*Commutator(A, B)

I*Physics:-Psigma[2]

(10)

and enter the formula that involves only multi-index summation

H := sum(lambda^n*F(A, B, n)/factorial(n), n = 2)+2*lambda*(sum(lambda^(n+m)*F(A, F(B, C[2], n), m)/(factorial(n)*factorial(m)), n+m = 1))+3*lambda^2*(sum(lambda^(n+m+k)*F(A, F(B, F(C[2], C[3], n), m), k)/(factorial(n)*factorial(m)*factorial(k)), n+m+k = 0))

(1/2)*lambda^2*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], Physics:-Psigma[3]))+2*lambda*(lambda*%Commutator(Physics:-Psigma[1], I*Physics:-Psigma[2])+lambda*%Commutator(Physics:-Psigma[3], I*Physics:-Psigma[2]))+3*lambda^2*C[3]

(11)

from where we compute C[3] by solving the coefficient of lambda^2 for it; since, due to the multi-index summation, this expression already contains lambda^2 as an overall factor, it is enough to solve H directly for C[3]:

C[3] = Simplify(solve(H, C[3]))

C[3] = (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]

(12)

In order to generalize the formula for H for higher powers of lambda, the right-hand side of the multi-index summation limit can be expressed in terms of an abstract N, and H transformed into a mapping:

 

H := unapply(sum(lambda^n*F(A, B, n)/factorial(n), n = N)+2*lambda*(sum(lambda^(n+m)*F(A, F(B, C[2], n), m)/(factorial(n)*factorial(m)), n+m = N-1))+3*lambda^2*(sum(lambda^(n+m+k)*F(A, F(B, F(C[2], C[3], n), m), k)/(factorial(n)*factorial(m)*factorial(k)), n+m+k = N-2)), N)

proc (N) options operator, arrow; lambda^N*F(Physics:-Psigma[1], Physics:-Psigma[3], N)/factorial(N)+2*lambda*(sum(Physics:-`*`(Physics:-`^`(lambda, n+m), Physics:-`^`(Physics:-`*`(factorial(n), factorial(m)), -1), F(Physics:-Psigma[1], F(Physics:-Psigma[3], I*Physics:-Psigma[2], n), m)), n+m = N-1))+3*lambda^2*(sum(Physics:-`*`(Physics:-`^`(lambda, n+m+k), Physics:-`^`(Physics:-`*`(factorial(n), factorial(m), factorial(k)), -1), F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(I*Physics:-Psigma[2], C[3], n), m), k)), n+m+k = N-2)) end proc

(13)

Now we have

H(0)

Physics:-Psigma[3]

(14)

H(1)

lambda*%Commutator(Physics:-Psigma[1], Physics:-Psigma[3])+(2*I)*lambda*Physics:-Psigma[2]

(15)

The following is already equal to (11)

H(2)

(1/2)*lambda^2*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], Physics:-Psigma[3]))+2*lambda*(lambda*%Commutator(Physics:-Psigma[1], I*Physics:-Psigma[2])+lambda*%Commutator(Physics:-Psigma[3], I*Physics:-Psigma[2]))+3*lambda^2*C[3]

(16)

In this way, we can reproduce the results published in the literature for the coefficients of the Zassenhaus formula up to C[5] by adding two more multi-index sums to (13). Unassign C first

unassign(C)

H := unapply(sum(lambda^n*F(A, B, n)/factorial(n), n = N)+2*lambda*(sum(lambda^(n+m)*F(A, F(B, C[2], n), m)/(factorial(n)*factorial(m)), n+m = N-1))+3*lambda^2*(sum(lambda^(n+m+k)*F(A, F(B, F(C[2], C[3], n), m), k)/(factorial(n)*factorial(m)*factorial(k)), n+m+k = N-2))+4*lambda^3*(sum(lambda^(n+m+k+l)*F(A, F(B, F(C[2], F(C[3], C[4], n), m), k), l)/(factorial(n)*factorial(m)*factorial(k)*factorial(l)), n+m+k+l = N-3))+5*lambda^4*(sum(lambda^(n+m+k+l+p)*F(A, F(B, F(C[2], F(C[3], F(C[4], C[5], n), m), k), l), p)/(factorial(n)*factorial(m)*factorial(k)*factorial(l)*factorial(p)), n+m+k+l+p = N-4)), N)

We now compute up to C[5] in one go

for j to 4 do C[j+1] := Simplify(solve(H(j), C[j+1])) end do

I*Physics:-Psigma[2]

 

(2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]

 

-((1/3)*I)*((3*I)*Physics:-Psigma[1]+(6*I)*Physics:-Psigma[3]-4*Physics:-Psigma[2])

 

-(8/9)*Physics:-Psigma[1]-(158/45)*Physics:-Psigma[3]-((16/3)*I)*Physics:-Psigma[2]

(17)

The nested-commutator expression solved in the last step for C[5] is

H(4)

(1/24)*lambda^4*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], Physics:-Psigma[3]))))+2*lambda*((1/6)*lambda^3*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], I*Physics:-Psigma[2])))+(1/2)*lambda^3*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[3], I*Physics:-Psigma[2])))+(1/2)*lambda^3*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[3], %Commutator(Physics:-Psigma[3], I*Physics:-Psigma[2])))+(1/6)*lambda^3*%Commutator(Physics:-Psigma[3], %Commutator(Physics:-Psigma[3], %Commutator(Physics:-Psigma[3], I*Physics:-Psigma[2]))))+3*lambda^2*((1/2)*lambda^2*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]))+lambda^2*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[3], (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]))+(1/2)*lambda^2*%Commutator(Physics:-Psigma[3], %Commutator(Physics:-Psigma[3], (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]))+lambda^2*%Commutator(Physics:-Psigma[1], %Commutator(I*Physics:-Psigma[2], (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]))+lambda^2*%Commutator(Physics:-Psigma[3], %Commutator(I*Physics:-Psigma[2], (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]))+(1/2)*lambda^2*%Commutator(I*Physics:-Psigma[2], %Commutator(I*Physics:-Psigma[2], (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1])))+4*lambda^3*(lambda*%Commutator(Physics:-Psigma[1], -((1/3)*I)*((3*I)*Physics:-Psigma[1]+(6*I)*Physics:-Psigma[3]-4*Physics:-Psigma[2]))+lambda*%Commutator(Physics:-Psigma[3], -((1/3)*I)*((3*I)*Physics:-Psigma[1]+(6*I)*Physics:-Psigma[3]-4*Physics:-Psigma[2]))+lambda*%Commutator(I*Physics:-Psigma[2], -((1/3)*I)*((3*I)*Physics:-Psigma[1]+(6*I)*Physics:-Psigma[3]-4*Physics:-Psigma[2]))+lambda*%Commutator((2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1], -((1/3)*I)*((3*I)*Physics:-Psigma[1]+(6*I)*Physics:-Psigma[3]-4*Physics:-Psigma[2])))+5*lambda^4*(-(8/9)*Physics:-Psigma[1]-(158/45)*Physics:-Psigma[3]-((16/3)*I)*Physics:-Psigma[2])

(18)

With everything understood, we now want to extend these results, generalizing them into an approach to compute an arbitrarily large coefficient C[n], then use that generalization to compute all the Zassenhaus coefficients up to C[10]. Typing the formula for H for higher powers of lambda is, however, prone to typographical mistakes. The following is a program, using the Maple programming language, that produces these formulas for an arbitrary integer power of lambda:

Formula := proc(A, B, C, Q)

 

This Formula program uses a sequence of summation indices with as many indices as the order of the coefficient C[n] we want to compute; in this case we need 10 of them

summation_indices := n, m, k, l, p, q, r, s, t, u

n, m, k, l, p, q, r, s, t, u

(19)
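
The body of Formula is not displayed above (it is included in the linked worksheet). As a guide, the following is a minimal sketch of a procedure reproducing the input/output behavior shown in (20), (21) and (22) below; it is a reconstruction, not the original code, and it relies on the globals F, lambda, N and the sequence summation_indices defined in this worksheet:

Formula := proc(A, B, C, Q)
  local j, i, ops, idx, deg, nested, result;
  # Build the first Q terms of the nested-commutator identity, each one as a
  # multi-index sum; F, lambda, N and summation_indices are the worksheet globals.
  result := 0;
  for j to Q do
    ops := [A, B, seq(C[i], i = 2 .. j)];       # operators entering the j-th term: A, B, C[2], ..., C[j]
    idx := [summation_indices][1 .. j];         # one summation index per nesting level
    deg := `+`(op(idx));                        # total power of lambda carried by the indices
    nested := ops[-1];                          # innermost operand ...
    for i from j by -1 to 1 do
      nested := F(ops[i], nested, idx[j + 1 - i])   # ... wrapped as F(C[j-1], ..), ..., F(B, ..), F(A, ..)
    end do;
    result := result + j*lambda^(j - 1)*
      sum(lambda^deg*nested/mul(factorial(idx[i]), i = 1 .. j), deg = N - (j - 1))
  end do;
  result
end proc:

Called as Formula(A, B, C, 2), a procedure built this way should reproduce (20).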

To avoid interference of the results computed in the loop (17), unassign C again

unassign(C)

 

Now the formulas typed by hand, used in the lines above to compute each of C[2], C[3] and C[5], are respectively constructed by the computer

Formula(A, B, C, 2)

sum(lambda^n*F(Physics:-Psigma[1], Physics:-Psigma[3], n)/factorial(n), n = N)+2*lambda*(sum(lambda^(n+m)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], C[2], n), m)/(factorial(n)*factorial(m)), n+m = N-1))

(20)

Formula(A, B, C, 3)

sum(lambda^n*F(Physics:-Psigma[1], Physics:-Psigma[3], n)/factorial(n), n = N)+2*lambda*(sum(lambda^(n+m)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], C[2], n), m)/(factorial(n)*factorial(m)), n+m = N-1))+3*lambda^2*(sum(lambda^(n+m+k)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], C[3], n), m), k)/(factorial(n)*factorial(m)*factorial(k)), n+m+k = N-2))

(21)

Formula(A, B, C, 5)

sum(lambda^n*F(Physics:-Psigma[1], Physics:-Psigma[3], n)/factorial(n), n = N)+2*lambda*(sum(lambda^(n+m)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], C[2], n), m)/(factorial(n)*factorial(m)), n+m = N-1))+3*lambda^2*(sum(lambda^(n+m+k)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], C[3], n), m), k)/(factorial(n)*factorial(m)*factorial(k)), n+m+k = N-2))+4*lambda^3*(sum(lambda^(n+m+k+l)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], C[4], n), m), k), l)/(factorial(n)*factorial(m)*factorial(k)*factorial(l)), n+m+k+l = N-3))+5*lambda^4*(sum(lambda^(n+m+k+l+p)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], C[5], n), m), k), l), p)/(factorial(n)*factorial(l)*factorial(m)*factorial(k)*factorial(p)), n+m+k+l+p = N-4))

(22)

 

Then construct the formula for C[10] and make it a mapping with respect to N, as done for C[5] after (16)

H := unapply(Formula(A, B, C, 10), N)

proc (N) options operator, arrow; sum(lambda^n*F(Physics:-Psigma[1], Physics:-Psigma[3], n)/factorial(n), n = N)+2*lambda*(sum(lambda^(n+m)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], C[2], n), m)/(factorial(n)*factorial(m)), n+m = N-1))+3*lambda^2*(sum(lambda^(n+m+k)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], C[3], n), m), k)/(factorial(n)*factorial(m)*factorial(k)), n+m+k = N-2))+4*lambda^3*(sum(lambda^(n+m+k+l)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], C[4], n), m), k), l)/(factorial(n)*factorial(m)*factorial(k)*factorial(l)), n+m+k+l = N-3))+5*lambda^4*(sum(lambda^(n+m+k+l+p)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], C[5], n), m), k), l), p)/(factorial(n)*factorial(l)*factorial(m)*factorial(k)*factorial(p)), n+m+k+l+p = N-4))+6*lambda^5*(sum(lambda^(n+m+k+l+p+q)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], F(C[5], C[6], n), m), k), l), p), q)/(factorial(n)*factorial(l)*factorial(m)*factorial(p)*factorial(k)*factorial(q)), n+m+k+l+p+q = N-5))+7*lambda^6*(sum(lambda^(n+m+k+l+p+q+r)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], F(C[5], F(C[6], C[7], n), m), k), l), p), q), r)/(factorial(n)*factorial(l)*factorial(m)*factorial(p)*factorial(q)*factorial(k)*factorial(r)), n+m+k+l+p+q+r = N-6))+8*lambda^7*(sum(lambda^(n+m+k+l+p+q+r+s)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], F(C[5], F(C[6], F(C[7], C[8], n), m), k), l), p), q), r), s)/(factorial(n)*factorial(r)*factorial(l)*factorial(m)*factorial(p)*factorial(q)*factorial(k)*factorial(s)), n+m+k+l+p+q+r+s = N-7))+9*lambda^8*(sum(lambda^(n+m+k+l+p+q+r+s+t)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], F(C[5], F(C[6], F(C[7], F(C[8], C[9], n), m), k), l), p), q), r), s), t)/(factorial(s)*factorial(n)*factorial(r)*factorial(l)*factorial(m)*factorial(p)*factorial(q)*factorial(k)*factorial(t)), n+m+k+l+p+q+r+s+t = N-8))+10*lambda^9*(sum(lambda^(n+m+k+l+p+q+r+s+t+u)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], F(C[5], F(C[6], F(C[7], F(C[8], F(C[9], C[10], n), m), k), l), p), q), r), s), t), u)/(factorial(s)*factorial(n)*factorial(t)*factorial(r)*factorial(l)*factorial(m)*factorial(p)*factorial(q)*factorial(k)*factorial(u)), n+m+k+l+p+q+r+s+t+u = N-9)) end proc

(23)

Compute now the coefficients of the Zassenhaus formula up to C[10] all in one go

for j to 9 do C[j+1] := Simplify(solve(H(j), C[j+1])) end do

I*Physics:-Psigma[2]

 

(2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]

 

-((1/3)*I)*((3*I)*Physics:-Psigma[1]+(6*I)*Physics:-Psigma[3]-4*Physics:-Psigma[2])

 

-(8/9)*Physics:-Psigma[1]-(158/45)*Physics:-Psigma[3]-((16/3)*I)*Physics:-Psigma[2]

 

(1030/81)*Physics:-Psigma[1]-(8/81)*Physics:-Psigma[3]+((1078/405)*I)*Physics:-Psigma[2]

 

((11792/243)*I)*Physics:-Psigma[2]+(358576/42525)*Physics:-Psigma[1]+(12952/135)*Physics:-Psigma[3]

 

(87277417/492075)*Physics:-Psigma[1]+(833718196/820125)*Physics:-Psigma[3]+((35837299048/17222625)*I)*Physics:-Psigma[2]

 

-((449018539801088/104627446875)*I)*Physics:-Psigma[2]-(263697596812424/996451875)*Physics:-Psigma[1]+(84178036928794306/2197176384375)*Physics:-Psigma[3]

 

(3226624781090887605597040906/21022858292748046875)*Physics:-Psigma[1]+(200495118165066770268119656/200217698026171875)*Physics:-Psigma[3]+((2185211616689851230363020476/4204571658549609375)*I)*Physics:-Psigma[2]

(24)

Notes: with the material above you can compute higher order values of C[n]. For that you need:

1. 

Unassign C, as done above on two occasions, to avoid interference from the results just computed.

2. 

Indicate more summation indices in the sequence summation_indices in (19), as many as the maximum value of n in C[n].

3. 

Keep in mind that the growth in size and complexity is significant, with each C[n] taking significantly more time than the computation of all the previous ones.

4. 

Re-execute the input line (23) and the loop (24).



Download The_Zassenhause_formula_and_the_Pauli_Matrices.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

The Joint Mathematics Meetings are taking place next week (January 16 – 19) in Baltimore, Maryland, U.S.A. This will be the 102nd annual winter meeting of the Mathematical Association of America (MAA) and the 125th annual meeting of the American Mathematical Society (AMS).

Maplesoft will be exhibiting at booth #501 as well as in the networking area. Please stop by to chat with me and other members of the Maplesoft team, as well as to pick up some free Maplesoft swag or win some prizes.

This year we will be hosting a hands-on workshop on Maple: A Natural Way to Work with Math

This special event will take place on Thursday, January 17 at 6:00 -8:00 P.M. in the Holiday Ballroom 4 at the Hilton Baltimore.

 

There are also several other interesting Maple related talks:

MYMathApps Tutorials

MAA General Contributed Paper Session on Mathematics and Technology 

Wednesday January 16, 2019, 1:00 p.m.-1:55 p.m.

Room 323, BCC
Matthew Weihing*, Texas A&M University 
Philip B Yasskin, Texas A&M University 

 

The Logic Behind the Turing Bombe's Role in Breaking Enigma. 

MAA General Contributed Paper Session on Mathematics and Technology 

Wednesday January 16, 2019, 1:00 p.m.-1:55 p.m.
Room 323, BCC
Neil Sigmon*, Radford University 
Rick Klima, Appalachian State University 

 

On a software accessible database of faithful representations of Lie algebras. 

MAA General Contributed Paper Session on Algebra, I 

Wednesday January 16, 2019, 2:15 p.m.-6:25 p.m.
Room 348, BCC
Cailin Foster*, Dixie State University 
 

Discussion of Various Technical Strategies Used in College Math Teaching. 

MAA Contributed Paper Session on Open Educational Resources: Combining Technological Tools and Innovative Practices to Improve Student Learning, IV 

Friday January 18, 2019, 8:00 a.m.-10:55 a.m.
Room 303, BCC
Lina Wu*, Borough of Manhattan Community College-The City University of New York 
 

An Enticing Simulation in Ordinary Differential Equations that predict tangible results. 

MAA Contributed Paper Session on The Teaching and Learning of Undergraduate Ordinary Differential Equations 

Friday January 18, 2019, 1:00 p.m.-4:55 p.m.
Room 324, BCC
Satyanand Singh*, New York City College of Technology of CUNY 
 

An Effort to Assess the Impact a Modeling First Approach has in a Traditional Differential Equations Class. 

AMS Special Session on Using Modeling to Motivate the Study of Differential Equations, I 
Saturday January 19, 2019, 8:00 a.m.-11:50 a.m.

Room 336, BCC
Rosemary C Farley*, Manhattan College 
Patrice G Tiffany, Manhattan College 

 

If you are attending the Joint Math meetings this week and plan on presenting anything on Maple, please feel free to let me know and I'll update this list accordingly.


See you in Baltimore!

Daniel

Maple Product Manager


Coherent States in Quantum Mechanics

 

Pascal Szriftgiser1 and Edgardo S. Cheb-Terrab2 

(1) Laboratoire PhLAM, UMR CNRS 8523, Université Lille 1, F-59655, France

(2) Maplesoft

 

  

Coherent states are among the most relevant representations for the state of a quantum system. These states, which form an overcomplete basis, minimize the quantum uncertainty between position x and momentum p (they satisfy the Heisenberg uncertainty principle with equality, and their expectation values satisfy the classical equations of motion). Coherent states are widely used in quantum optics and quantum mechanics in general; they also mathematically characterize the concept of Planck cells. Part of this development is present in Maple 2018.2.1. To reproduce what you see below, however, you need a more recent version, such as the one distributed within the Maplesoft Physics Updates (version 276 or higher). A worksheet with these contents is linked at the end of this post.

Definition and the basics

 

with(Physics)

 

Set a quantum operator A and corresponding annihilation / creation operators

Setup(quantumoperators = A)

[quantumoperators = {A}]

(1.1)

am := Annihilation(A)

`#msup(mi("a"),mo("&uminus0;"))`

(1.2)

ap := Creation(A)

`#msup(mi("a"),mo("&plus;"))`

(1.3)

In what follows, on the left-hand sides the product operator used is `*`, which properly represents, but does not perform, the attachment of Bras, Kets and operators. On the right-hand sides the product operator is `.`, which performs the attachments. Since the introduction of Physics in the Maple system, we have that

am*Ket(A, n) = am.Ket(A, n)

Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(A, n)) = n^(1/2)*Physics:-Ket(A, n-1)

(1.4)

(%Bracket = Bracket)(Bra(A, n), Ket(A, n))

%Bracket(Physics:-Bra(A, n), Physics:-Ket(A, n)) = 1

(1.5)

(%Bracket = Bracket)(Bra(A, n), Ket(A, m))

%Bracket(Physics:-Bra(A, n), Physics:-Ket(A, m)) = Physics:-KroneckerDelta[m, n]

(1.6)

New development during 2018: coherent states, the eigenstates of the annihilation operator `#msup(mi("a",mathcolor = "olive"),mo("&uminus0;",mathcolor = "olive"))`, with all of their properties, are now understood as such by the system

am*Ket(am, alpha) = am.Ket(am, alpha)

Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = alpha*Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)

(1.7)

Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) is an eigenket of `#msup(mi("a",mathcolor = "olive"),mo("&uminus0;",mathcolor = "olive"))` but not of  `#msup(mi("a",mathcolor = "olive"),mo("&plus;",mathcolor = "olive"))`

ap.Ket(am, alpha)

Physics:-`.`(`#msup(mi("a"),mo("&plus;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha))

(1.8)

The norm of these states is equal to 1

(%Bracket = Bracket)(Bra(am, alpha), Ket(am, alpha))

%Bracket(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, alpha), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = 1

(1.9)

These states, however, are not orthonormal as the occupation number states Ket(A, n) are, and since `#msup(mi("a",mathcolor = "olive"),mo("&uminus0;",mathcolor = "olive"))` is not Hermitian, its eigenvalues are not real but complex numbers. Instead of (1.6) , we now have

(%Bracket = Bracket)(Bra(am, alpha), Ket(am, beta))

%Bracket(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, alpha), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, beta)) = exp(-(1/2)*abs(alpha)^2-(1/2)*abs(beta)^2+conjugate(alpha)*beta)

(1.10)

At alpha = beta,

simplify(eval(%Bracket(Physics[Bra](`#msup(mi("a"),mo("&uminus0;"))`, alpha), Physics[Ket](`#msup(mi("a"),mo("&uminus0;"))`, beta)) = exp(-(1/2)*abs(alpha)^2-(1/2)*abs(beta)^2+conjugate(alpha)*beta), alpha = beta))

1 = 1

(1.11)

Their scalar product with the occupation number states Ket(A, m), using the inert %Bracket on the left-hand side and the active Bracket on the other side:

(%Bracket = Bracket)(Bra(A, n), Ket(am, alpha))

%Bracket(Physics:-Bra(A, n), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = exp(-(1/2)*abs(alpha)^2)*alpha^n/factorial(n)^(1/2)

(1.12)

The expansion of coherent states into occupation number states, first representing the product operation using `*`, then performing the attachments replacing `*` by `.`

Projector(Ket(A, n), dimension = infinity)

Sum(Physics:-`*`(Physics:-Ket(A, n), Physics:-Bra(A, n)), n = 0 .. infinity)

(1.13)

Ket(am, alpha) = (Sum(Physics[`*`](Physics[Ket](A, n), Physics[Bra](A, n)), n = 0 .. infinity))*Ket(am, alpha)

Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Physics:-`*`(Sum(Physics:-`*`(Physics:-Ket(A, n), Physics:-Bra(A, n)), n = 0 .. infinity), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha))

(1.14)

eval(Physics[Ket](`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Physics[`*`](Sum(Physics[`*`](Physics[Ket](A, n), Physics[Bra](A, n)), n = 0 .. infinity), Physics[Ket](`#msup(mi("a"),mo("&uminus0;"))`, alpha)), `*` = `.`)

Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Physics:-Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity)

(1.15)

Hide now the ket label. When in doubt, input show to see the Kets with their labels explicitly shown

Setup(hide = true)

`* Partial match of  '`*hide*`' against keyword '`*hideketlabel*`' `

 

_______________________________________________________

 

[hideketlabel = true]

(1.16)

Define eigenkets of the annihilation operator, with two different eigenvalues for experimentation

`K__&alpha;` := Ket(am, alpha)

Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)

(1.17)

`K__&beta;` := Ket(am, beta)

Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, beta)

(1.18)

Because the properties of coherent states are now known to the system, the following computations proceed automatically. The left-hand sides use the `*`, while the right-hand sides use the `.`

(`*` = `.`)(Dagger(`K__&alpha;`), ap, am, `K__&alpha;`)

Physics:-`*`(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, alpha), `#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = abs(alpha)^2

(1.19)

(`*` = `.`)(Dagger(`K__&alpha;`), ap+am, `K__&alpha;`)

Physics:-`*`(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, alpha), `#msup(mi("a"),mo("&plus;"))`+`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = conjugate(alpha)+alpha

(1.20)

(`*` = `.`)(Dagger(`K__&alpha;`), ap-am, `K__&alpha;`)

Physics:-`*`(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, alpha), `#msup(mi("a"),mo("&plus;"))`-`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = conjugate(alpha)-alpha

(1.21)

(`*` = `.`)(Dagger(`K__&alpha;`), (ap+am)^2, `K__&alpha;`)

Physics:-`*`(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, alpha), Physics:-`^`(`#msup(mi("a"),mo("&plus;"))`+`#msup(mi("a"),mo("&uminus0;"))`, 2), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = conjugate(alpha)^2+2*abs(alpha)^2+1+alpha^2

(1.22)

Properties of Coherent states

 

The mean value of the occupation number N

 

 

The occupation number operator N is given by

N := ap.am

Physics:-`*`(`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`)

(2.1.1)

N is Hermitian

%Dagger(N) = N

%Dagger(Physics:-`*`(`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`)) = Physics:-`*`(`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`)

(2.1.2)

value(%Dagger(Physics[`*`](`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`)) = Physics[`*`](`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`))

Physics:-`*`(`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`) = Physics:-`*`(`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`)

(2.1.3)

N is diagonal in the Ket(A, n) basis of the Fock (occupation number) space

(`*` = `.`)(Bra(A, n), N, Ket(A, p))

Physics:-`*`(Physics:-Bra(A, n), `#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(A, p)) = p*Physics:-KroneckerDelta[n, p]

(2.1.4)
• 

The mean value of N in a coherent state `&equiv;`(Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha), Ket(alpha))

Bracket(%N)[alpha] = %Bracket(Bra(am, alpha), N, Ket(am, alpha))

Physics:-Bracket(%N)[alpha] = %Bracket(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, alpha), Physics:-`*`(`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha))

(2.1.5)

value(Physics[Bracket](%N)[alpha] = %Bracket(Physics[Bra](`#msup(mi("a"),mo("&uminus0;"))`, alpha), Physics[`*`](`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`), Physics[Ket](`#msup(mi("a"),mo("&uminus0;"))`, alpha)))

Physics:-Bracket(%N)[alpha] = abs(alpha)^2

(2.1.6)

The mean value of N^2

Bracket(%N^2)[alpha] = %Bracket(Bra(am, alpha), N^2, Ket(am, alpha))

Physics:-Bracket(%N^2)[alpha] = %Bracket(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, alpha), Physics:-`^`(Physics:-`*`(`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`), 2), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha))

(2.1.7)

value(Physics[Bracket](%N^2)[alpha] = %Bracket(Physics[Bra](`#msup(mi("a"),mo("&uminus0;"))`, alpha), Physics[`^`](Physics[`*`](`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`), 2), Physics[Ket](`#msup(mi("a"),mo("&uminus0;"))`, alpha)))

Physics:-Bracket(%N^2)[alpha] = abs(alpha)^4+abs(alpha)^2

(2.1.8)

The standard deviation `&Delta;N` = sqrt(-Bracket(%N)[alpha]^2+Bracket(%N^2)[alpha]) for a state Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)

((Physics[Bracket](%N^2)[alpha] = abs(alpha)^4+abs(alpha)^2)-(Physics[Bracket](%N)[alpha] = abs(alpha)^2)^2)^(1/2)

(-Physics:-Bracket(%N)[alpha]^2+Physics:-Bracket(%N^2)[alpha])^(1/2) = abs(alpha)

(2.1.9)

In conclusion, a coherent state "| alpha >" has a finite spreading `&Delta;N` = abs(alpha). Coherent states are good approximations for the states of a laser, where the laser intensity I is proportional to the mean value of the photon number, I proportional to Bracket(%N)[alpha] = abs(alpha)^2, so that the intensity fluctuation `&Delta;N` = abs(alpha) is proportional to sqrt(I).
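
A remark added here for clarity: from (2.1.6) and (2.1.9), the relative fluctuation is `&Delta;N`/Bracket(%N)[alpha] = abs(alpha)/abs(alpha)^2 = 1/abs(alpha), so the more intense the beam (the larger abs(alpha)), the smaller the relative intensity fluctuations.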

• 

The mean value of the occupation number N in an occupation number state `&equiv;`(Ket(A, n), Ket(n))

Bracket(%N)[n] = %Bracket(Bra(A, n), N, Ket(A, n))

Physics:-Bracket(%N)[n] = %Bracket(Physics:-Bra(A, n), Physics:-`*`(`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`), Physics:-Ket(A, n))

(2.1.10)

value(Physics[Bracket](%N)[n] = %Bracket(Bra(A, n), Physics[`*`](`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`), Ket(A, n)))

Physics:-Bracket(%N)[n] = n

(2.1.11)

The mean value of the occupation number N in a state Ket(A, n) is thus n itself, as expected, since Ket(A, n) represents a (Fock space) state of n (quasi-) particles. Accordingly,

Bracket(%N^2)[n] = %Bracket(Bra(A, n), N^2, Ket(A, n))

Physics:-Bracket(%N^2)[n] = %Bracket(Physics:-Bra(A, n), Physics:-`^`(Physics:-`*`(`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`), 2), Physics:-Ket(A, n))

(2.1.12)

value(Physics[Bracket](%N^2)[n] = %Bracket(Bra(A, n), Physics[`^`](Physics[`*`](`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`), 2), Ket(A, n)))

Physics:-Bracket(%N^2)[n] = n^2

(2.1.13)

The standard deviation `&Delta;N` = sqrt(-Bracket(%N)[n]^2+Bracket(%N^2)[n]) for a state Ket(A, n) is thus

((Physics[Bracket](%N^2)[n] = n^2)-(Physics[Bracket](%N)[n] = n)^2)^(1/2)

(-Physics:-Bracket(%N)[n]^2+Physics:-Bracket(%N^2)[n])^(1/2) = 0

(2.1.14)

That is, in a Fock state, `&Delta;N` = 0,  there is no intensity fluctuation.

"a^(-)| alpha > = alpha| alpha >"

 

 

The specific properties of coherent states implemented can be derived explicitly, departing from the projection of Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) into the Ket(A, n) basis of occupation number states and the definition of `#msup(mi("a",mathcolor = "olive"),mo("&uminus0;",mathcolor = "olive"))` as the operator that annihilates the vacuum, `#msup(mi("a",mathcolor = "olive"),mo("&uminus0;",mathcolor = "olive"))`*Ket(A, 0) = 0

Ket(am, alpha) = (Sum(Physics[`*`](Ket(A, n), Bra(A, n)), n = 0 .. infinity))*Ket(am, alpha)

Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Physics:-`*`(Sum(Physics:-`*`(Physics:-Ket(A, n), Physics:-Bra(A, n)), n = 0 .. infinity), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha))

(2.2.1)

eval(Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Physics[`*`](Sum(Physics[`*`](Ket(A, n), Bra(A, n)), n = 0 .. infinity), Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)), `*` = `.`)

Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Physics:-Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity)

(2.2.2)

To derive `#msup(mi("a",mathcolor = "olive"),mo("&uminus0;",mathcolor = "olive"))`*Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = alpha*Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) from the formula above, start multiplying by `#msup(mi("a",mathcolor = "olive"),mo("&uminus0;",mathcolor = "olive"))`

am*(Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity))

Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Physics:-Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity))

(2.2.3)

In view of `#msup(mi("a",mathcolor = "olive"),mo("&uminus0;",mathcolor = "olive"))`*Ket(A, 0) = 0, discard the first term of the sum

subs(0 = 1, Physics[`*`](`#msup(mi("a"),mo("&uminus0;"))`, Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Physics[`*`](`#msup(mi("a"),mo("&uminus0;"))`, Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity)))

Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Physics:-Ket(A, n)/factorial(n)^(1/2), n = 1 .. infinity))

(2.2.4)

Change variables n = k+1; in the result, rename k -> n

subs(k = n, PDEtools:-dchange(n = k+1, Physics[`*`](`#msup(mi("a"),mo("&uminus0;"))`, Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Physics[`*`](`#msup(mi("a"),mo("&uminus0;"))`, Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Ket(A, n)/factorial(n)^(1/2), n = 1 .. infinity)), `@`(combine, simplify)))

Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Sum(exp(-(1/2)*abs(alpha)^2)*Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(A, n+1))*alpha^(n+1)/(factorial(n)^(1/2)*(n+1)^(1/2)), n = 0 .. infinity)

(2.2.5)

Activate the product `#msup(mi("a",mathcolor = "olive"),mo("&uminus0;",mathcolor = "olive"))`*Ket(A, n+1) by replacing, in the right-hand side, the product operator `*` by `.`

lhs(Physics[`*`](`#msup(mi("a"),mo("&uminus0;"))`, Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Sum(exp(-(1/2)*abs(alpha)^2)*Physics[`*`](`#msup(mi("a"),mo("&uminus0;"))`, Ket(A, n+1))*alpha^(n+1)/(factorial(n)^(1/2)*(n+1)^(1/2)), n = 0 .. infinity)) = eval(rhs(Physics[`*`](`#msup(mi("a"),mo("&uminus0;"))`, Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Sum(exp(-(1/2)*abs(alpha)^2)*Physics[`*`](`#msup(mi("a"),mo("&uminus0;"))`, Ket(A, n+1))*alpha^(n+1)/(factorial(n)^(1/2)*(n+1)^(1/2)), n = 0 .. infinity)), `*` = `.`)

Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Sum(exp(-(1/2)*abs(alpha)^2)*Physics:-Ket(A, n)*alpha^(n+1)/factorial(n)^(1/2), n = 0 .. infinity)

(2.2.6)

By inspection the right-hand side of (2.2.6) is equal to alpha times the right-hand side of (2.2.2)

alpha*(Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity))-(Physics[`*`](`#msup(mi("a"),mo("&uminus0;"))`, Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Sum(exp(-(1/2)*abs(alpha)^2)*Ket(A, n)*alpha^(n+1)/factorial(n)^(1/2), n = 0 .. infinity))

alpha*Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)-Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = alpha*(Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Physics:-Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity))-(Sum(exp(-(1/2)*abs(alpha)^2)*Physics:-Ket(A, n)*alpha^(n+1)/factorial(n)^(1/2), n = 0 .. infinity))

(2.2.7)

combine(alpha*Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)-Physics[`*`](`#msup(mi("a"),mo("&uminus0;"))`, Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = alpha*(Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity))-(Sum(exp(-(1/2)*abs(alpha)^2)*Ket(A, n)*alpha^(n+1)/factorial(n)^(1/2), n = 0 .. infinity)))

alpha*Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)-Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = 0

(2.2.8)
• 

Overview of the coherent states distribution

 

Consider the projection of Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) over an occupation number state Ket(A, n)

%Bracket(Bra(A, n), lhs(Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Physics[`*`](Sum(Physics[`*`](Ket(A, n), Bra(A, n)), n = 0 .. infinity), Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)))) = Bracket(Bra(A, n), rhs(Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Physics[`*`](Sum(Physics[`*`](Ket(A, n), Bra(A, n)), n = 0 .. infinity), Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha))))

%Bracket(Physics:-Bra(A, n), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = exp(-(1/2)*abs(alpha)^2)*alpha^n/factorial(n)^(1/2)

(2.2.9)

An overview of the distribution of coherent states Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) for a sample of values of n and alpha is thus as follows

plot3d(rhs(%Bracket(Bra(A, n), Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = exp(-(1/2)*abs(alpha)^2)*alpha^n/factorial(n)^(1/2)), n = 0 .. 25, alpha = 0 .. 10, axes = boxed, caption = lhs(%Bracket(Bra(A, n), Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = exp(-(1/2)*abs(alpha)^2)*alpha^n/factorial(n)^(1/2)))

 

The distribution can be explored for ranges of values of n and alpha using Explore

NA := Typesetting:-Typeset(Bracket(Bra(A, n), Ket(am, alpha)))

Explore(plot(rhs(%Bracket(Bra(A, n), Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = exp(-(1/2)*abs(alpha)^2)*alpha^n/factorial(n)^(1/2)), n = 0 .. 200, view = 0 .. .6, labels = [n, NA]), parameters = [alpha = 0 .. 10], initialvalues = [alpha = 5])

"a^(+)| alpha >= (&PartialD;)/(&PartialD;alpha) | alpha >+(alpha)/2 | alpha >"

   

exp(-(1/2)*abs(alpha)^2)*exp(alpha*`#msup(mi("a",mathcolor = "olive"),mo("&plus;",mathcolor = "olive"))`)"| 0 >" = "| alpha >"

   

 exp(alpha*`#msup(mi("a",mathcolor = "olive"),mo("&plus;",mathcolor = "olive"))`-conjugate(alpha)*a)" | 0 >" = "| alpha >"

   

`<|>`(beta, alpha) = exp(conjugate(beta)*alpha-(1/2)*abs(beta)^2-(1/2)*abs(alpha)^2)

 


The identity in the title can be derived departing again from the projection of a coherent state Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) into the Ket(A, n) basis of occupation number states

Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity)

Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Physics:-Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity)

(2.6.1)

Dagger(subs({alpha = beta, n = k}, Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity)))

Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, beta) = Sum(exp(-(1/2)*abs(beta)^2)*conjugate(beta)^k*Physics:-Bra(A, k)/factorial(k)^(1/2), k = 0 .. infinity)

(2.6.2)

Taking the `*` product of these two expressions

(Bra(`#msup(mi("a"),mo("&uminus0;"))`, beta) = Sum(exp(-(1/2)*abs(beta)^2)*conjugate(beta)^k*Bra(A, k)/factorial(k)^(1/2), k = 0 .. infinity))*(Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity))

Physics:-`*`(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, beta), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Physics:-`*`(Sum(exp(-(1/2)*abs(beta)^2)*conjugate(beta)^k*Physics:-Bra(A, k)/factorial(k)^(1/2), k = 0 .. infinity), Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Physics:-Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity))

(2.6.3)

Perform the attachment of Bras and Kets on the right-hand side by replacing `*` by `.`, evaluating the sum and simplifying the result

lhs(Physics[`*`](Bra(`#msup(mi("a"),mo("&uminus0;"))`, beta), Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Physics[`*`](Sum(exp(-(1/2)*abs(beta)^2)*conjugate(beta)^k*Bra(A, k)/factorial(k)^(1/2), k = 0 .. infinity), Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity))) = simplify(value(eval(rhs(Physics[`*`](Bra(`#msup(mi("a"),mo("&uminus0;"))`, beta), Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Physics[`*`](Sum(exp(-(1/2)*abs(beta)^2)*conjugate(beta)^k*Bra(A, k)/factorial(k)^(1/2), k = 0 .. infinity), Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity))), `*` = `.`)))

Physics:-`*`(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, beta), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = exp(-(1/2)*abs(beta)^2-(1/2)*abs(alpha)^2+alpha*conjugate(beta))

(2.6.4)
• 

Overview of the real and imaginary part of `<|>`(beta, alpha)

 

In most cases, alpha and beta are complex-valued numbers. Below, the plots assume that alpha and beta are both real. To take the general case into account, the possibility of tuning a phase difference theta between alpha and beta is explicitly added, so that (2.6.4) becomes

 

%Bracket(Bra(am, beta), Ket(am, alpha)) = subs(conjugate(beta) = conjugate(beta)*exp(I*theta), rhs(Physics[`*`](Bra(`#msup(mi("a"),mo("&uminus0;"))`, beta), Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = exp(conjugate(beta)*alpha-(1/2)*abs(beta)^2-(1/2)*abs(alpha)^2)))

%Bracket(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, beta), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = exp(-(1/2)*abs(beta)^2-(1/2)*abs(alpha)^2+alpha*conjugate(beta)*exp(I*theta))

(2.6.5)

Explore(plot3d(Re(rhs(%Bracket(Bra(`#msup(mi("a"),mo("&uminus0;"))`, beta), Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = exp(-(1/2)*abs(beta)^2-(1/2)*abs(alpha)^2+alpha*conjugate(beta)*exp(I*theta)))), alpha = -10 .. 10, beta = -10 .. 10, view = -1 .. 1, orientation = [-12, 74, 3], axes = boxed), parameters = [theta = 0 .. 2*Pi], initialvalues = [theta = (1/10)*Pi])

 

 

Download Coherent_States_in_Quantum_Mechanics.mw

 

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft
Editor, Computer Physics Communications

 

Recently I examined a piece of code of mine in an attempt to possibly convert it to another language as it is a numeric code and as such slower in Maple than I'd like it to run. In doing this I ran across the following strangeness, here reproduced in a minimum working example (file attached).

Consider this trivial integral:

x1:=Int(3.52*10^8, ti = 0 .. 1);
(4)

and also this one:

x2:=sin(2*Pi*x1);
(5)

I can then evaluate (4) and take the sine of 2*Pi times that evaluation:

value(x1);
(6)

sin(2*Pi*(6));
(7)

Hmm... let's evaluate x2, which should be the same, right?

value(x2);
0.00000556012229902952                                                              (8)

Oddly enough, it is not. Now the reason they are not 0 is due to round-off error (running the same sheet with Digits := 40 confirms that); but at the same time, (6) is in fact exact. More oddly, if I wrap the input leading to (7) in evalf() then it outputs 0., i.e. exact and correct. I suspect the problem must lie in the different treatments of Pi in the three cases.

I am not ready to call this behaviour a bug, since I can see that different ways of evaluating what is essentially the same expression lead to a different round-off. What strikes me is the relatively large errors in this case. The sheet was run with Digits being 15 (my default, set in my .mapleinit); I initially expected somewhat more accuracy in the sine function than a mere 6 digits or so. On second thought, however, what is going on seems to be that the evaluation of the integral must be numerical, and the large no. of cycles limits the accuracy; if one replaces 3.52E8 (a float) with 352E6 (an exact number) then (7) becomes 0 (exact) while (8) remains at the above value. Why

evalf(sin(2*Pi*(6)))

yields an exact value I do not quite understand.
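
A small illustration of that mechanism (added here, separating the two effects; the exact digits obtained are version dependent and indicative only):

x := evalf[15](2*Pi*3.52*10^8);  # a 15-digit float of size ~2.2*10^9, so its absolute rounding
                                 # error can be as large as ~5*10^(-6)
evalf[40](sin(x));               # the sine evaluated accurately at the already-rounded argument:
                                 # at most of the order of that rounding error, but not 0
sin(2*Pi*352*10^6);              # with the exact integer, sin(704000000*Pi) simplifies to exactly 0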

So, caveat computor once again. This example, while it may look contrived, actually arose from a real-world case I was dealing with (the 352E6 is a frequency in Hz, in my actual application it can vary in time therefore the integration to get the no. of cycles in a given time interval). One annoyance here is that the "right" way to do this is not obvious, at least not to me.

M.D.

integration_test.mw

Tensor product of Quantum States using Dirac's Bra-Ket Notation - 2018

 

There has been increasing interest in the details of the Maple implementation of tensor products using Dirac's notation, developed during 2018. Tensor products of Hilbert spaces and related quantum states are relevant in a myriad of situations in quantum mechanics, in particular regarding quantum information. Below is an up-to-date presentation of the design and implementation, with input/output and examples, organized in four sections:

 

• 

The basic ideas and design implemented

• 

Tensor product notation and the hideketlabel option

• 

Entangled States and the Bell basis

• 

Entangled States, Operators and Projectors

 

Part of this development is present in Maple 2018.2. To reproduce what you see below, however, you need a more recent version, such as the one distributed within the Maplesoft Physics Updates (version 272 or higher).

 

The basic ideas and design implemented

 

 

Suppose A and B are quantum operators and Ket(A, n), Ket(B, m) are, respectively, their eigenkets. The following works since the introduction of the Physics package in Maple

with(Physics)

Setup(op = {A, B})

`* Partial match of  '`*op*`' against keyword '`*quantumoperators*`' `

 

_______________________________________________________

 

[quantumoperators = {A, B}]

(1.1)

A*Ket(A, alpha) = A.Ket(A, alpha)

Physics:-`*`(A, Physics:-Ket(A, alpha)) = alpha*Physics:-Ket(A, alpha)

(1.2)

B*Ket(B, beta) = B.Ket(B, beta)

Physics:-`*`(B, Physics:-Ket(B, beta)) = beta*Physics:-Ket(B, beta)

(1.3)

In previous Maple releases, all quantum operators were assumed to act on the same Hilbert space. New: suppose that A and B act on different, disjointed, Hilbert spaces.

 

1) To represent that situation, a new keyword in Setup, hilbertspaces, is introduced. With it you can indicate the quantum operators that act on each Hilbert space, as in hilbertspaces = {{A}, {B}}, with the meaning that the operator A acts on one Hilbert space while B acts on another one.

 

The Hilbert space thus has no particular name (as in 1, 2, 3 ...) and is instead identified by the operators that act on it. There can be one or more, and operators acting on one space can act on other spaces too. The disjointedspaces keyword is a synonym for hilbertspaces and hereafter all Hilbert spaces are assumed to be disjointed.

 

NOTE: noncommutative quantum operators acting on disjointed spaces commute between themselves, so after setting - for instance - hilbertspaces = {{A}, {B}}, automatically, A and B become quantum operators satisfying (see comment (ii) on page 156 of ref.[1])

 

"[A,B][-]=0"

 

2) Products of Kets and Bras that belong to different Hilbert spaces are understood as tensor products satisfying (see footnote on page 154 of ref. [1]):

 

`&otimes;`(Ket(A, alpha), Ket(B, beta)) = `&otimes;`(Ket(B, beta), Ket(A, alpha)) 

 

`&otimes;`(Bra(A, alpha), Ket(B, beta)) = `&otimes;`(Ket(B, beta), Bra(A, alpha)) 

 

while

Bra(A, alpha)*Ket(A, alpha) <> Ket(A, alpha)*Bra(A, alpha)

 

3) All the operators of one Hilbert space act transparently over operators, Bras and Kets of other Hilbert spaces. For example

 

A*Ket(B, n) = Ket(B, n)*A

  

and the same for the Dagger of this equation, that is

Bra(B, n)*Dagger(A) = Dagger(A)*Bra(B, n)

 

  

Hence, when we write the left-hand sides of the two equations above and press enter, they are automatically rewritten (returned) as the right-hand sides.

 

4) Every other quantum operator, set as such using Setup , and not indicated as acting on any particular Hilbert space, is assumed to act on all spaces.

 

5) Notation:

 

• 

Tensor products formed with operators, or Bras and Kets, belonging to different Hilbert spaces (set as such using Setup and the keyword hilbertspaces) are now displayed with the symbol `&otimes;` in between, as in `&otimes;`(Ket(A, n), Ket(B, n)) instead of Ket(A, n)*Ket(B, n), and `&otimes;`(A, B) instead of A*B. The product of an operator A of one space and a Ket of another space, Ket(B, n), however, is displayed as A Ket(B, n), without `&otimes;`.

• 

A new Setup option, hideketlabel, hides the labels of Kets and Bras at the time of displaying Kets, Bras and Brackets, so when you set it entering Setup(hideketlabel = true),

Ket(A, m, n, l)

is displayed as

Ket(m, n, l)

 

  

This is the notation frequently used when working with angular momentum or in quantum information, where tensor products of Hilbert spaces are used.

Design details

   

Tensor product notation and the hideketlabel option

 

 

According to the design section, set now two disjointed Hilbert spaces with operators A, C acting on one of them and B, C on the other one (you can think of  C = `&otimes;`(A, B))

 

Setup(hilbertspaces = {{A, C}, {B, C}})

[disjointedspaces = {{A, C}, {B, C}}]

(2.1)

 

Consider a tensor product of Kets, each of which belongs to one of these different spaces; note the new notation using `&otimes;`

Ket(A, 1)*Ket(B, 0)

Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0))

(2.2)
• 

As explained in the Details of the design section, the ordering of the Hilbert spaces in tensor products is now preserved: Bras (Kets) of the first space always appear before Bras (Kets) of the second space. For example, construct a projector into the state (2.2)

Physics[`*`](Physics[Ket](A, 1), Physics[Ket](B, 0))*Dagger(Physics[`*`](Physics[Ket](A, 1), Physics[Ket](B, 0)))

Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0), Physics:-Bra(A, 1), Physics:-Bra(B, 0))

(2.3)

You see that in the product of Bras, and also in the product of Kets, A comes first, then B.


Remark: some textbooks prefer a dyadic style for sorting the operands in products of Bras and Kets that belong to different spaces, for example, `&otimes;`(Ket(A, 1)*Bra(A, 1), Ket(B, 0)*Bra(B, 0)) instead of the projector sorting style of (2.3). Both reorderings of Kets and Bras are mathematically equal.

 

• 

Because that ordering is preserved, one can now hide the label of Bras and Kets without ambiguity, as is usual in textbooks (e.g. in Quantum Information). For that purpose use the new keyword option hideketlabel

Setup(hide = true)

`* Partial match of  '`*hide*`' against keyword '`*hideketlabel*`' `

 

_______________________________________________________

 

[hideketlabel = true]

(2.4)

The display for (2.3) is now

Physics[`*`](Physics[Ket](A, 1), Physics[Ket](B, 0), Physics[Bra](A, 1), Physics[Bra](B, 0))

Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0), Physics:-Bra(A, 1), Physics:-Bra(B, 0))

(2.5)

Important: this new option only hides the label while displaying the Bra or Ket. The label, however, is still there, both in the input and in the output. One can "see" what is behind this new display using show, which works the same way as it does in the context of CompactDisplay. The actual contents being displayed in (2.5) is thus (2.3)

show

Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0), Physics:-Bra(A, 1), Physics:-Bra(B, 0))

(2.6)

Operators of each of these spaces act on their eigenkets as usual. Here we distribute over both sides of an equation, using `*` on the left-hand side, to see the product uncomputed, and `.` on the right-hand side to see it computed:

(`*` = `.`)(A, Ket(A, 1))

Physics:-`*`(A, Physics:-Ket(A, 1)) = Physics:-Ket(A, 1)

(2.7)

(`*` = `.`)(A, Ket(A, 0))

Physics:-`*`(A, Physics:-Ket(A, 0)) = 0

(2.8)
• 

The tensor product of operators belonging to different Hilbert spaces is also displayed using `&otimes;`

A*B

Physics:-`*`(A, B)

(2.9)
• 

As mentioned in the preceding design section, using the commutativity between operators, Bras and Kets that belong to different Hilbert spaces, within a product, operators are placed contiguous to the Kets and Bras belonging to the space where the operator acts. For example, consider the delayed product represented using the star `*` operator

'Physics[`*`](A, B)*Physics[`*`](Physics[Ket](A, 1), Physics[Ket](B, 0), Physics[Bra](A, 1), Physics[Bra](B, 0))'

Physics:-`*`(A, B, Physics:-Ket(A, 1), Physics:-Ket(B, 0), Physics:-Bra(A, 1), Physics:-Bra(B, 0))

(2.10)

Release the product

%

Physics:-`*`(A, Physics:-Ket(A, 1), B, Physics:-Ket(B, 0), Physics:-Bra(A, 1), Physics:-Bra(B, 0))

(2.11)

The same operation but now using the dot product `.` operator. Start by delaying the operation

'Physics[`*`](A, B).Physics[`*`](Physics[Ket](A, 1), Physics[Ket](B, 0), Physics[Bra](A, 1), Physics[Bra](B, 0))'

Typesetting:-delayDotProduct(Physics:-`*`(A, B), Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0), Physics:-Bra(A, 1), Physics:-Bra(B, 0)))

(2.12)

Recalling that this product is mathematically the same as (2.11), and that

B.Ket(B, 0)

0

(2.13)

by releasing the delayed product (2.12) we have

Typesetting[delayDotProduct](Physics[`*`](A, B), Physics[`*`](Ket(A, 1), Ket(B, 0), Bra(A, 1), Bra(B, 0)))

0

(2.14)

Reset hideketlabel

Setup(hideketlabel = false)

[hideketlabel = false]

(2.15)

Implementation details

   

Entangled States and the Bell basis

 

 

With the introduction of disjointed Hilbert spaces in Maple it is possible to represent entangled quantum states in a simple way, basically as done with paper and pencil.

 

Recalling, the Hilbert spaces set at this point are

Setup(hilbert)

`* Partial match of  '`*hilbert*`' against keyword '`*hilbertspaces*`' `

 

_______________________________________________________

 

[disjointedspaces = {{A, C}, {B, C}}]

(3.1)

where C acts on the tensor product of the spaces where A and B act. A state of C can then always be written as

Ket(C, m, n) = Sum(Sum(M[j, p]*Ket(A, j)*Ket(B, p), j), p)

Physics:-Ket(C, m, n) = Sum(Sum(M[j, p]*Physics:-`*`(Physics:-Ket(A, j), Physics:-Ket(B, p)), j), p)

(3.2)

where M[j, p] is a matrix of complex coefficients. Bra  states of C are formed as usual taking the Dagger

Dagger(Ket(C, m, n) = Sum(Sum(M[j, p]*Physics[`*`](Ket(A, j), Ket(B, p)), j), p))

Physics:-Bra(C, m, n) = Sum(Sum(conjugate(M[j, p])*Physics:-`*`(Physics:-Bra(A, j), Physics:-Bra(B, p)), j), p)

(3.3)

 

• 

By definition, all states Ket(C, alpha, beta) that can be written exactly as `&otimes;`(Ket(A, alpha), Ket(B, beta)), that is, the product of an arbitrary state of the subspace A and another of the subspace B, are product states, and all the other ones are entangled states. Entanglement is a property that is independent of the basis `&otimes;`(Ket(A, j), Ket(B, p)) used in (3.2).

The physical interpretation is the standard one: when the state of a system constituted by two subsystems A and B is represented by a product state, the properties of the subsystem A are well defined and all given by Ket(A, alpha), while those of the subsystem B are given by Ket(B, beta). When the system is in an entangled state one typically cannot assign definite properties to the individual subsystems A or B; each subsystem has no independent reality.

To determine whether a state Ket(C, alpha, beta) is entangled or not it then suffices to check the rank R of the matrix M[j, p] (see LinearAlgebra:-Rank): when R = 1 the state is a product state, otherwise it is an entangled state. When the state being analyzed belongs to the tensor product of two subspaces, R = 1 is equivalent to having the determinant of M[j, p] equal to 0. The condition R = 1, however, is more general, and suffices to determine whether a state is a product state also on a Hilbert space that is the tensor product of three or more subspaces: "`&Hscr;`^()=`&Hscr;`^((1))&otimes;`&Hscr;`^((2))&otimes;`&Hscr;`^((3))... `&Hscr;`^((n))", in which case the matrix M will have more rows and columns and a determinant equal to 0 would only warrant the possibility of factorizing one Ket.
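
As a minimal illustration of this rank criterion, here is a sketch using plain LinearAlgebra, independent of the Physics setup above; the two matrices are hypothetical coefficient matrices M[j, p] for the two-subspace, 2-dimensional case:

with(LinearAlgebra):
M__product := Matrix([[1, 0], [0, 0]]):        # coefficients of the product state Ket(A,0)*Ket(B,0)
M__bell := Matrix([[1, 0], [0, 1]])/sqrt(2):   # coefficients of (Ket(A,0)*Ket(B,0)+Ket(A,1)*Ket(B,1))/sqrt(2)
Rank(M__product), Rank(M__bell);               # returns 1, 2: the first is a product state, the second is entangled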

 

Example: the Bell basis for a system of two qubits

 

Consider a 2-dimensional space of states acted upon by the operator A, and let B act upon another, disjointed, Hilbert space that is a replica of the Hilbert space on which A acts. Set the dimensions of A, B and C respectively equal to 2, 2 and 2x2 (see Setup)

Setup(quantumbasisdimension = {A = 2, B = 2, C[1] = 2, C[2] = 2})

[quantumbasisdimension = {A = 2, B = 2, C[1] = 2, C[2] = 2}]

(3.4)

The system C with the two subsystems A and B represents a two-qubit system. The standard basis for C can be constructed in a natural way from the bases of Kets of A and B, {Ket(A, 0), Ket(A, 1), Ket(B, 0), Ket(B, 1)}, by taking their tensor products:

seq(seq(Ket(A, j)*Ket(B, k), k = 0 .. 1), j = 0 .. 1)

Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 0)), Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 1)), Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0)), Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 1))

(3.5)

Set a more mathematical display for the imaginary unit

interface(imaginaryunit = i)

 

The four entangled Bell states also form a basis of C and are given by

Setup(op = `&Bscr;`)

`* Partial match of  '`*op*`' against keyword '`*quantumoperators*`' `

 

_______________________________________________________

 

[quantumoperators = {`&Bscr;`, A, B, C, E}]

(3.6)

Ket(`&Bscr;`, 0) = (Ket(A, 0)*Ket(B, 0)+Ket(A, 1)*Ket(B, 1))/('sqrt')(2)

Physics:-Ket(`&Bscr;`, 0) = (Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 0))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 1)))/sqrt(2)

(3.7)

Ket(`&Bscr;`, 1) = (Ket(A, 0)*Ket(B, 1)+Ket(A, 1)*Ket(B, 0))/('sqrt')(2)

Physics:-Ket(`&Bscr;`, 1) = (Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 1))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0)))/sqrt(2)

(3.8)

Ket(`&Bscr;`, 2) = i*(Ket(A, 0)*Ket(B, 1)-Ket(A, 1)*Ket(B, 0))/('sqrt')(2)

Physics:-Ket(`&Bscr;`, 2) = I*(Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 1))-Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0)))/sqrt(2)

(3.9)

Ket(`&Bscr;`, 3) = (Ket(A, 0)*Ket(B, 0)-Ket(A, 1)*Ket(B, 1))/('sqrt')(2)

Physics:-Ket(`&Bscr;`, 3) = (Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 0))-Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 1)))/sqrt(2)

(3.10)

There is no standard notation for denoting a Bell state (the linear combinations on the right-hand sides above). The convention used here relates to the construction of the Bell states from the Pauli matrices shown below. Regardless of the convention used, this basis is orthonormal. That can be verified by taking dot products, for example:

Dagger(Ket(`&Bscr;`, 0) = (Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1)))/sqrt(2)).(Ket(`&Bscr;`, 0) = (Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1)))/sqrt(2))

1 = 1

(3.11)

In steps, perform the same operation but using the star (`*`) operator, so that the contraction is represented but not performed

Dagger(Ket(`&Bscr;`, 0) = (Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1)))/sqrt(2))*(Ket(`&Bscr;`, 0) = (Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1)))/sqrt(2))

Physics:-`*`(Physics:-Bra(`&Bscr;`, 0), Physics:-Ket(`&Bscr;`, 0)) = (1/2)*Physics:-`*`(Physics:-`*`(Physics:-Bra(A, 0), Physics:-Bra(B, 0))+Physics:-`*`(Physics:-Bra(A, 1), Physics:-Bra(B, 1)), Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 0))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 1)))

(3.12)

Evaluate now the result at `*` = `.`, that is transforming the star product into a dot product

eval(Physics[`*`](Bra(`&Bscr;`, 0), Ket(`&Bscr;`, 0)) = (1/2)*Physics[`*`](Physics[`*`](Bra(A, 0), Bra(B, 0))+Physics[`*`](Bra(A, 1), Bra(B, 1)), Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1))), `*` = `.`)

1 = 1

(3.13)

Dagger(Ket(`&Bscr;`, 0) = (Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1)))/sqrt(2))*(Ket(`&Bscr;`, 1) = (Physics[`*`](Ket(A, 0), Ket(B, 1))+Physics[`*`](Ket(A, 1), Ket(B, 0)))/sqrt(2))

Physics:-`*`(Physics:-Bra(`&Bscr;`, 0), Physics:-Ket(`&Bscr;`, 1)) = (1/2)*Physics:-`*`(Physics:-`*`(Physics:-Bra(A, 0), Physics:-Bra(B, 0))+Physics:-`*`(Physics:-Bra(A, 1), Physics:-Bra(B, 1)), Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 1))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0)))

(3.14)

eval(Physics[`*`](Bra(`&Bscr;`, 0), Ket(`&Bscr;`, 1)) = (1/2)*Physics[`*`](Physics[`*`](Bra(A, 0), Bra(B, 0))+Physics[`*`](Bra(A, 1), Bra(B, 1)), Physics[`*`](Ket(A, 0), Ket(B, 1))+Physics[`*`](Ket(A, 1), Ket(B, 0))), `*` = `.`)

0 = 0

(3.15)
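
The orthonormality of the whole basis can also be cross-checked with a plain vector representation, outside the Physics setup (a sketch; the components are taken in the order Ket(A,0)Ket(B,0), Ket(A,0)Ket(B,1), Ket(A,1)Ket(B,0), Ket(A,1)Ket(B,1)):

with(LinearAlgebra):
Bell := [<1, 0, 0, 1>/sqrt(2), <0, 1, 1, 0>/sqrt(2), I*<0, 1, -1, 0>/sqrt(2), <1, 0, 0, -1>/sqrt(2)]:
Matrix(4, 4, (m, n) -> simplify(HermitianTranspose(Bell[m]) . Bell[n]));   # expected: the 4x4 identity matrix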

The Bell basis and its relation with the Pauli matrices

 

The Bell basis can be constructed departing from Ket(`&Bscr;`, 0) using the Pauli matrices sigma[j]. For that purpose, using a Vector representation for Ket(B, j),

Physics:-Ket(`&Bscr;`, 0)

(3.16)

Ket(B, 0) = Vector([1, 0]), Ket(B, 1) = Vector([0, 1])

Physics:-Ket(B, 0) = Vector[column](%id = 18446744078301209294), Physics:-Ket(B, 1) = Vector[column](%id = 18446744078301209414)

(3.17)

Multiplying Ket(B, 0) by each of the sigma[j] Pauli matrices and performing the matrix operations we have

"[seq(Psigma[j] . ?[1], j=1..3)]"

[Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 0)) = Physics:-Psigma[1].Vector[column](%id = 18446744078301209294), Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(B, 0)) = Physics:-Psigma[2].Vector[column](%id = 18446744078301209294), Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(B, 0)) = Physics:-Psigma[3].Vector[column](%id = 18446744078301209294)]

(3.18)

"map(u -> lhs(u) =Library:-PerformMatrixOperations(rhs(u)),?)"

[Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 0)) = Vector[column](%id = 18446744078376366918), Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(B, 0)) = Vector[column](%id = 18446744078376368838), Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(B, 0)) = Vector[column](%id = 18446744078376358606)]

(3.19)

In this result we see that sigma[1] and sigma[2] flip the state, transforming Ket(B, 0) into Ket(B, 1); sigma[2] also multiplies the state by the imaginary unit I, while sigma[3] leaves the state Ket(B, 0) unchanged.

We can rewrite all that by removing from (3.19) the Vector representations of (3.17). For that purpose, create a list of substitution equations, replacing the Vectors by the Kets

"map(rhs = lhs,[?, i *~ ?])"

[Vector[column](%id = 18446744078301209294) = Physics:-Ket(B, 0), Vector[column](%id = 18446744078301209414) = Physics:-Ket(B, 1), Vector[column](%id = 18446744078376351494) = I*Physics:-Ket(B, 0), Vector[column](%id = 18446744078376351734) = I*Physics:-Ket(B, 1)]

(3.20)

So the action of sigma[j] on Ket(B, 0) is given by

"Library:-SubstituteMatrix(?,?)"

[Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 0)) = Physics:-Ket(B, 1), Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(B, 0)) = I*Physics:-Ket(B, 1), Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(B, 0)) = Physics:-Ket(B, 0)]

(3.21)

For Ket(B, 1), the same operations result in

"[seq(Psigma[j] . ?[2], j=1..3)]"

[Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 1)) = Physics:-Psigma[1].Vector[column](%id = 18446744078301209414), Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(B, 1)) = Physics:-Psigma[2].Vector[column](%id = 18446744078301209414), Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(B, 1)) = Physics:-Psigma[3].Vector[column](%id = 18446744078301209414)]

(3.22)

"map(u -> lhs(u) =Library:-PerformMatrixOperations(rhs(u)),?)"

[Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 1)) = Vector[column](%id = 18446744078464860518), Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(B, 1)) = Vector[column](%id = 18446744078464862438), Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(B, 1)) = Vector[column](%id = 18446744078464856182)]

(3.23)

"Library:-SubstituteMatrix(?,?)"

[Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 1)) = Physics:-Ket(B, 0), Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(B, 1)) = -I*Physics:-Ket(B, 0), Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(B, 1)) = -Physics:-Ket(B, 1)]

(3.24)
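
The relations (3.21) and (3.24) can also be cross-checked with explicit 2x2 matrices and column vectors, independently of the Physics setup (a plain linear-algebra sketch; the names sigma1, sigma2, sigma3, ket0, ket1 are introduced here only for this check):

sigma1 := Matrix([[0, 1], [1, 0]]):  sigma2 := Matrix([[0, -I], [I, 0]]):  sigma3 := Matrix([[1, 0], [0, -1]]):
ket0 := Vector([1, 0]):  ket1 := Vector([0, 1]):
seq(sigma || j . ket0, j = 1 .. 3);   # ket1, I*ket1, ket0, as in (3.21)
seq(sigma || j . ket1, j = 1 .. 3);   # ket0, -I*ket0, -ket1, as in (3.24)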

To obtain the other three Bell states using the results (3.21) and (3.24), indicate to the system that the Pauli matrices operate in the subspace where B operates

Setup(hilbert = {{B, C, Psigma}})

`* Partial match of  '`*hilbert*`' against keyword '`*hilbertspaces*`' `

 

_______________________________________________________

 

[disjointedspaces = {{A, C}, {B, C, Physics:-Psigma}}]

(3.25)

 

Multiplying Ket(`&Bscr;`, 0) given in (3.7) by each of the three sigma[j] we get the other three Bell states

Ket(`&Bscr;`, 0) = (Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1)))/sqrt(2)

Physics:-Ket(`&Bscr;`, 0) = (1/2)*2^(1/2)*(Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 0))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 1)))

(3.26)

Psigma[1]*(Ket(`&Bscr;`, 0) = (Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1)))/sqrt(2))

Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*Physics:-`*`(Physics:-Psigma[1], Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 0))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 1)))

(3.27)

Substitute in this result the first equations of (3.21) and (3.24)

[Physics[`*`](Physics[Psigma][1], Ket(B, 0)) = Ket(B, 1), Physics[`*`](Physics[Psigma][2], Ket(B, 0)) = I*Ket(B, 1), Physics[`*`](Physics[Psigma][3], Ket(B, 0)) = Ket(B, 0)][1], [Physics[`*`](Physics[Psigma][1], Ket(B, 1)) = Ket(B, 0), Physics[`*`](Physics[Psigma][2], Ket(B, 1)) = -I*Ket(B, 0), Physics[`*`](Physics[Psigma][3], Ket(B, 1)) = -Ket(B, 1)][1]

Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 0)) = Physics:-Ket(B, 1), Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 1)) = Physics:-Ket(B, 0)

(3.28)

map(rhs = lhs, [Physics[`*`](Physics[Psigma][1], Ket(B, 0)) = Ket(B, 1), Physics[`*`](Physics[Psigma][1], Ket(B, 1)) = Ket(B, 0)])

[Physics:-Ket(B, 1) = Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 0)), Physics:-Ket(B, 0) = Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 1))]

(3.29)

subs([Ket(B, 1) = Physics[`*`](Physics[Psigma][1], Ket(B, 0)), Ket(B, 0) = Physics[`*`](Physics[Psigma][1], Ket(B, 1))], Physics[`*`](Physics[Psigma][1], Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*Physics[`*`](Physics[Psigma][1], Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1))))

Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*Physics:-`*`(Physics:-Psigma[1], Physics:-`*`(Physics:-Ket(A, 0), Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 1)))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 0))))

(3.30)

factor(Simplify(Physics[`*`](Physics[Psigma][1], Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*Physics[`*`](Physics[Psigma][1], Physics[`*`](Ket(A, 0), Physics[`*`](Physics[Psigma][1], Ket(B, 1)))+Physics[`*`](Ket(A, 1), Physics[`*`](Physics[Psigma][1], Ket(B, 0))))))

Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*(Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 1))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0)))

(3.31)

This is Ket(`&Bscr;`, 1) defined in (3.8)

Ket(`&Bscr;`, 1) = (Physics[`*`](Ket(A, 0), Ket(B, 1))+Physics[`*`](Ket(A, 1), Ket(B, 0)))/sqrt(2)

Physics:-Ket(`&Bscr;`, 1) = (1/2)*2^(1/2)*(Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 1))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0)))

(3.32)

(Physics[`*`](Physics[Psigma][1], Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*(Physics[`*`](Ket(A, 0), Ket(B, 1))+Physics[`*`](Ket(A, 1), Ket(B, 0))))-(Ket(`&Bscr;`, 1) = (1/2)*2^(1/2)*(Physics[`*`](Ket(A, 0), Ket(B, 1))+Physics[`*`](Ket(A, 1), Ket(B, 0))))

Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(`&Bscr;`, 0))-Physics:-Ket(`&Bscr;`, 1) = 0

(3.33)

Multiplying now by sigma[2] and substituting Ket(B, j) using the 2^nd equations of (3.21) and (3.24) we get Ket(`&Bscr;`, 2)

Psigma[2]*(Ket(`&Bscr;`, 0) = (Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1)))/sqrt(2))

Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*Physics:-`*`(Physics:-Psigma[2], Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 0))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 1)))

(3.34)

[Physics[`*`](Physics[Psigma][1], Ket(B, 0)) = Ket(B, 1), Physics[`*`](Physics[Psigma][2], Ket(B, 0)) = I*Ket(B, 1), Physics[`*`](Physics[Psigma][3], Ket(B, 0)) = Ket(B, 0)][2], [Physics[`*`](Physics[Psigma][1], Ket(B, 1)) = Ket(B, 0), Physics[`*`](Physics[Psigma][2], Ket(B, 1)) = -I*Ket(B, 0), Physics[`*`](Physics[Psigma][3], Ket(B, 1)) = -Ket(B, 1)][2]

Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(B, 0)) = I*Physics:-Ket(B, 1), Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(B, 1)) = -I*Physics:-Ket(B, 0)

(3.35)

zip(isolate, [Physics[`*`](Physics[Psigma][2], Ket(B, 0)) = I*Ket(B, 1), Physics[`*`](Physics[Psigma][2], Ket(B, 1)) = -I*Ket(B, 0)], [Ket(B, 1), Ket(B, 0)])

[Physics:-Ket(B, 1) = -I*Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(B, 0)), Physics:-Ket(B, 0) = I*Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(B, 1))]

(3.36)

factor(Simplify(subs([Ket(B, 1) = -I*Physics[`*`](Physics[Psigma][2], Ket(B, 0)), Ket(B, 0) = I*Physics[`*`](Physics[Psigma][2], Ket(B, 1))], Physics[`*`](Physics[Psigma][2], Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*Physics[`*`](Physics[Psigma][2], Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1))))))

Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(`&Bscr;`, 0)) = ((1/2)*I)*2^(1/2)*(Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 1))-Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0)))

(3.37)

The above is Ket(`&Bscr;`, 2) defined in (3.9)

Ket(`&Bscr;`, 2) = I*(Physics[`*`](Ket(A, 0), Ket(B, 1))-Physics[`*`](Ket(A, 1), Ket(B, 0)))/sqrt(2)

Physics:-Ket(`&Bscr;`, 2) = ((1/2)*I)*2^(1/2)*(Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 1))-Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0)))

(3.38)

Expand((Physics[`*`](Physics[Psigma][2], Ket(`&Bscr;`, 0)) = ((1/2)*I)*2^(1/2)*(Physics[`*`](Ket(A, 0), Ket(B, 1))-Physics[`*`](Ket(A, 1), Ket(B, 0))))-(Ket(`&Bscr;`, 2) = ((1/2)*I)*2^(1/2)*(Physics[`*`](Ket(A, 0), Ket(B, 1))-Physics[`*`](Ket(A, 1), Ket(B, 0)))))

Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(`&Bscr;`, 0))-Physics:-Ket(`&Bscr;`, 2) = 0

(3.39)

Finally, multiplying Ket(`&Bscr;`, 0) by sigma[3]

Psigma[3]*(Ket(`&Bscr;`, 0) = (Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1)))/sqrt(2))

Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*Physics:-`*`(Physics:-Psigma[3], Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 0))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 1)))

(3.40)

Substituting

[Physics[`*`](Physics[Psigma][1], Ket(B, 0)) = Ket(B, 1), Physics[`*`](Physics[Psigma][2], Ket(B, 0)) = I*Ket(B, 1), Physics[`*`](Physics[Psigma][3], Ket(B, 0)) = Ket(B, 0)][3], [Physics[`*`](Physics[Psigma][1], Ket(B, 1)) = Ket(B, 0), Physics[`*`](Physics[Psigma][2], Ket(B, 1)) = -I*Ket(B, 0), Physics[`*`](Physics[Psigma][3], Ket(B, 1)) = -Ket(B, 1)][3]

Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(B, 0)) = Physics:-Ket(B, 0), Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(B, 1)) = -Physics:-Ket(B, 1)

(3.41)

(rhs = lhs)((Physics[`*`](Physics[Psigma][3], Ket(B, 0)) = Ket(B, 0), Physics[`*`](Physics[Psigma][3], Ket(B, 1)) = -Ket(B, 1))[1]), (rhs = lhs)(-(Physics[`*`](Physics[Psigma][3], Ket(B, 0)) = Ket(B, 0), Physics[`*`](Physics[Psigma][3], Ket(B, 1)) = -Ket(B, 1))[2])

Physics:-Ket(B, 0) = Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(B, 0)), Physics:-Ket(B, 1) = -Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(B, 1))

(3.42)

We get

factor(Simplify(subs(Ket(B, 0) = Physics[`*`](Physics[Psigma][3], Ket(B, 0)), Ket(B, 1) = -Physics[`*`](Physics[Psigma][3], Ket(B, 1)), Physics[`*`](Physics[Psigma][3], Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*Physics[`*`](Physics[Psigma][3], Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1))))))

Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*(Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 0))-Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 1)))

(3.43)

which is Ket(`&Bscr;`, 3) defined in (3.10)

Ket(`&Bscr;`, 3) = (Physics[`*`](Ket(A, 0), Ket(B, 0))-Physics[`*`](Ket(A, 1), Ket(B, 1)))/sqrt(2)

Physics:-Ket(`&Bscr;`, 3) = (1/2)*2^(1/2)*(Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 0))-Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 1)))

(3.44)

Expand((Physics[`*`](Physics[Psigma][3], Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*(Physics[`*`](Ket(A, 0), Ket(B, 0))-Physics[`*`](Ket(A, 1), Ket(B, 1))))-(Ket(`&Bscr;`, 3) = (1/2)*2^(1/2)*(Physics[`*`](Ket(A, 0), Ket(B, 0))-Physics[`*`](Ket(A, 1), Ket(B, 1)))))

Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(`&Bscr;`, 0))-Physics:-Ket(`&Bscr;`, 3) = 0

(3.45)

Entangled States, Operators and Projectors

 

 

Consider a fourth operator, H, that is Hermitian and acts on the same space as C, and thus has the same dimension,

Setup(additionally, hermitian = H, basisdimension = {H[1] = 2, H[2] = 2}, hilbertspaces = {{A, C, H}, {B, C, H}})

`* Partial match of  '`*hermitian*`' against keyword '`*hermitianoperators*`' `

 

`* Partial match of  '`*basisdimension*`' against keyword '`*quantumbasisdimension*`' `

 

_______________________________________________________

 

[disjointedspaces = {{A, C, H}, {B, C, H}, {B, C, Physics:-Psigma}}, hermitianoperators = {H}, quantumbasisdimension = {A = 2, B = 2, C[1] = 2, C[2] = 2, H[1] = 2, H[2] = 2}]

(4.1)

To operate in a practical way with these operators, Bras and Kets, however, bracket rules reflecting their relationship are necessary. From the definition of C as acting on the tensor product of  spaces where A and B act (see (3.2)) and taking into account the dimensions specified for A, B and C we have

Ket(C, a, b) = Sum(Sum(M[a, j, b, p]*Ket(A, j)*Ket(B, p), j = 0 .. 1), p = 0 .. 1)

Physics:-Ket(C, a, b) = Sum(Sum(M[a, j, b, p]*Physics:-`*`(Physics:-Ket(A, j), Physics:-Ket(B, p)), j = 0 .. 1), p = 0 .. 1)

(4.2)

Bra(A, k).(Ket(C, a, b) = Sum(Sum(M[a, j, b, p]*Physics[`*`](Ket(A, j), Ket(B, p)), j = 0 .. 1), p = 0 .. 1))

Physics:-Bracket(Physics:-Bra(A, k), Physics:-Ket(C, a, b)) = Sum(M[a, k, b, p]*Physics:-Ket(B, p), p = 0 .. 1)

(4.3)

Bra(B, k).(Ket(C, a, b) = Sum(Sum(M[a, j, b, p]*Physics[`*`](Ket(A, j), Ket(B, p)), j = 0 .. 1), p = 0 .. 1))

Physics:-Bracket(Physics:-Bra(B, k), Physics:-Ket(C, a, b)) = Sum(M[a, j, b, k]*Physics:-Ket(A, j), j = 0 .. 1)

(4.4)

Bra(A, k).Bra(B, l).(Ket(C, a, b) = Sum(Sum(M[a, j, b, p]*Physics[`*`](Ket(A, j), Ket(B, p)), j = 0 .. 1), p = 0 .. 1))

Physics:-`*`(Physics:-Bra(A, k), Physics:-Bracket(Physics:-Bra(B, l), Physics:-Ket(C, a, b))) = M[a, k, b, l]

(4.5)

The bracket rules for A, B and C are the first two of these; set these rules, so that the system can take them into account

Setup(Bracket(Bra(A, k), Ket(C, a, b)) = Sum(M[a, k, b, p]*Ket(B, p), p = 0 .. 1), Bracket(Bra(B, k), Ket(C, a, b)) = Sum(M[a, j, b, k]*Ket(A, j), j = 0 .. 1))

[bracketrules = {%Bracket(%Bra(A, k), %Ket(C, a, b)) = Sum(M[a, k, b, p]*Physics:-Ket(B, p), p = 0 .. 1), %Bracket(%Bra(B, k), %Ket(C, a, b)) = Sum(M[a, j, b, k]*Physics:-Ket(A, j), j = 0 .. 1)}]

(4.6)

If we now recompute (4.5), the left-hand side is also computed

Bracket(C, i, j, H, C, k, l) = `&Hscr;`

Physics:-Bracket(Physics:-Bra(C, I, j), H, Physics:-Ket(C, k, l)) = `&Hscr;`

(4.7)

Bra(A, k).Bra(B, l).(Ket(C, a, b) = Sum(Sum(M[a, j, b, p]*Physics[`*`](Ket(A, j), Ket(B, p)), j = 0 .. 1), p = 0 .. 1))

M[a, k, b, l] = M[a, k, b, l]

(4.8)

Suppose now that you want to compute with the Hermitian operator H, which operates on the same space as C, both using C and using the operators A and B, as in

 

Bracket(Bra(C, I, j), H, Ket(C, k, l)) = `&Hscr;`[i, j, k, l]

 

`&otimes;`(Bra(A, I), Bra(B, j))*H*`&otimes;`(Ket(A, k), Ket(B, l)) = H[I, j, k, l]

 

where `&Hscr;`[i, j, k, l] = H[I, j, k, l] when Ket(C, a, b) is a product (not entangled) state.

 

For Bracket(Bra(C, I, j), H, Ket(C, k, l)) = `&Hscr;`[I, j, k, l] it suffices to set a bracket rule

Setup(%Bracket(Bra(C, a, b), H, Ket(C, c, d)) = `&Hscr;`[a, b, c, d], real = `&Hscr;`)

`* Partial match of  '`*real*`' against keyword '`*realobjects*`' `

 

_______________________________________________________

 

[bracketrules = {%Bracket(%Bra(A, k), %Ket(C, a, b)) = Sum(M[a, k, b, p]*Physics:-Ket(B, p), p = 0 .. 1), %Bracket(%Bra(B, k), %Ket(C, a, b)) = Sum(M[a, j, b, k]*Physics:-Ket(A, j), j = 0 .. 1), %Bracket(%Bra(C, a, b), H, %Ket(C, c, d)) = `&Hscr;`[a, b, c, d]}, realobjects = {`&Hscr;`, x, y, z}]

(4.9)

After that, the system operates taking the rule into account

Bra(C, j, k).H.Ket(C, m, n)

`&Hscr;`[j, k, m, n]

(4.10)

Regarding `&otimes;`(Bra(A, I), Bra(B, j))*H*`&otimes;`(Ket(A, k), Ket(B, l)) = H[I, j, k, l], since H belongs to the tensor product of spaces A and B, it can be an entangled operator, one that you cannot represent just as a product of one operator acting on A times another one acting on B. A computational representation for the operator Bra(B, j)*H*Ket(A, k) (one that is not just the expression itself, kept abstract) is not possible in the general case. For that you can use a different feature: define the action of the operator H on Kets of A and B.

 

Basically, we want:

 

"H*Ket(A,k)-> H[k]"

"H[k]*Ket(B,l)->H[k,l]"

A program sketch for that would be:


if H is applied to a Ket of A or B then

    if H itself is indexed then
        return H accumulating its indices, followed by the index of the Ket
    else

        return H indexed by the index of the Ket;
otherwise
    return the dot product operation uncomputed, unevaluated

 

In Maple language, that program-sketch becomes

 

"H := K ->   if K::Ket and op(1, K)::'identical(A,B)' then      if procname::'indexed' then         if nops(procname) <4 then             H[op(procname), op(2, K)]    #` accumulate indices`         else             'H . K'         fi     else          H[op(2, K)]     fi  else      'procname . K'  fi:"

 

Let's see it in action. Start by erasing the Physics performance remember tables, which remember results computed before the definition of H

 

Library:-Forget()

H.Ket(A, k)

H[k]

(4.11)

Recalling that H is Hermitian,

Bra(B, j).H

H[j]

(4.12)

Bra(B, j).H.Ket(A, k)

H[j, k]

(4.13)

Bra(B, j).H.Ket(A, k).Ket(B, l)

H[j, k, l]

(4.14)

Bra(A, i).Bra(B, j).H.Ket(A, k).Ket(B, l)

H[I, j, k, l]

(4.15)

Note that the definition of H as a procedure does not interfere with the setting of a bracket rule for it with Ket(C, a, b), which is still working

Bra(C, i, j).H.Ket(C, k, l)

`&Hscr;`[I, j, k, l]

(4.16)

but the definition takes precedence, so if in it you indicate what to do with a C Ket, that will be taken into account before the bracket rule. Finally, in the typical case, the first four results, (4.11), (4.12), (4.13) and (4.14), are operators while (4.15) is a scalar; you can always represent the scalar aspect by substituting the noncommutative operator H by a related scalar, say H.

 

• 

You can set the projectors for all these operators / spaces. For example,

`&Iopf;__A` := Projector(Ket(A, i)); `&Iopf;__B` := Projector(Ket(B, i)); `&Iopf;__C` := Projector(Ket(C, a, b))

Sum(Physics:-`*`(Physics:-Ket(A, n), Physics:-Bra(A, n)), n = 0 .. 1)

 

Sum(Physics:-`*`(Physics:-Ket(B, n), Physics:-Bra(B, n)), n = 0 .. 1)

 

Sum(Sum(Physics:-`*`(Physics:-Ket(C, a, b), Physics:-Bra(C, a, b)), a = 0 .. 1), b = 0 .. 1)

(4.17)

Since the algebra rules for computing with eigenkets of A, B and C were already set in (4.6), from the projectors above you can construct any subspace projector, for example

Bra(A, m).`&Iopf;__C`

Sum(Sum(Sum(M[a, m, b, p]*Physics:-`*`(Physics:-Ket(B, p), Physics:-Bra(C, a, b)), p = 0 .. 1), a = 0 .. 1), b = 0 .. 1)

(4.18)

`&Iopf;__C`.Ket(A, m)

Sum(Sum(Sum(conjugate(M[a, m, b, p])*Physics:-`*`(Physics:-Ket(C, a, b), Physics:-Bra(B, p)), p = 0 .. 1), a = 0 .. 1), b = 0 .. 1)

(4.19)

The conjugate of M[a, m, b, p] appears because in (4.19) the contraction, or attachment, is from the right (compare with (4.18)), that is, with

Dagger(Ket(C, a, b) = Sum(Sum(M[a, j, b, p]*Physics[`*`](Ket(A, j), Ket(B, p)), j = 0 .. 1), p = 0 .. 1))

Physics:-Bra(C, a, b) = Sum(Sum(conjugate(M[a, j, b, p])*Physics:-`*`(Physics:-Bra(A, j), Physics:-Bra(B, p)), j = 0 .. 1), p = 0 .. 1)

(4.20)

 

The coefficients M[a, m, b, p] satisfy constraints due to the normalization of the Kets of A and B. One can derive these constraints by inserting the unit operator `&Iopf;__C`, constructing this identity

Sum(Sum(Physics:-`*`(Physics:-Ket(C, a, b), Physics:-Bra(C, a, b)), a = 0 .. 1), b = 0 .. 1)

(4.21)

Bra(A, m).Bra(B, n).`&Iopf;__C`.Ket(A, r).Ket(B, s) = Bra(A, m).Bra(B, n).Ket(A, r).Ket(B, s)

Sum(Sum(conjugate(M[a, r, b, s])*M[a, m, b, n], a = 0 .. 1), b = 0 .. 1) = Physics:-KroneckerDelta[m, r]*Physics:-KroneckerDelta[n, s]

(4.22)

Transform this result into a function P  to explore the identity further

P := unapply(subs(Sum = sum, Sum(Sum(conjugate(M[a, r, b, s])*M[a, m, b, n], a = 0 .. 1), b = 0 .. 1) = Physics[KroneckerDelta][m, r]*Physics[KroneckerDelta][n, s]), m, n, r, s)

proc (m, n, r, s) options operator, arrow; sum(sum(conjugate(M[a, r, b, s])*M[a, m, b, n], a = 0 .. 1), b = 0 .. 1) = Physics:-KroneckerDelta[m, r]*Physics:-KroneckerDelta[n, s] end proc

(4.23)

The first and third indices refer to the quantum numbers of A, the second and fourth to B, so the right-hand sides in the following are respectively 1 and 0

P(1, 0, 1, 0)

conjugate(M[0, 1, 0, 0])*M[0, 1, 0, 0]+conjugate(M[1, 1, 0, 0])*M[1, 1, 0, 0]+conjugate(M[0, 1, 1, 0])*M[0, 1, 1, 0]+conjugate(M[1, 1, 1, 0])*M[1, 1, 1, 0] = 1

(4.24)

P(1, 0, 0, 0)

conjugate(M[0, 0, 0, 0])*M[0, 1, 0, 0]+conjugate(M[1, 0, 0, 0])*M[1, 1, 0, 0]+conjugate(M[0, 0, 1, 0])*M[0, 1, 1, 0]+conjugate(M[1, 0, 1, 0])*M[1, 1, 1, 0] = 0

(4.25)

To get the whole system of equations satisfied by the coefficients M[a, m, b, n], use P to construct an Array with four indices running from 0..1

Array(`$`(0 .. 1, 4), P)

(a 2 x 2 x 2 x 2 Array of equations, not displayed here; its entries are collected into a set next)

(4.26)

Convert the whole Array into a set of equations

"simplify(convert(Typesetting:-msub(Typesetting:-mi("_rtable",italic = "true",mathvariant = "italic"),Typesetting:-mrow(Typesetting:-mn("18446744078376377150",mathvariant = "normal")),subscriptshift = "0"),setofequations))"

{abs(M[0, 0, 0, 0])^2+abs(M[1, 0, 0, 0])^2+abs(M[0, 0, 1, 0])^2+abs(M[1, 0, 1, 0])^2 = 1, abs(M[0, 0, 0, 1])^2+abs(M[1, 0, 0, 1])^2+abs(M[0, 0, 1, 1])^2+abs(M[1, 0, 1, 1])^2 = 1, abs(M[0, 1, 0, 0])^2+abs(M[1, 1, 0, 0])^2+abs(M[0, 1, 1, 0])^2+abs(M[1, 1, 1, 0])^2 = 1, abs(M[0, 1, 0, 1])^2+abs(M[1, 1, 0, 1])^2+abs(M[0, 1, 1, 1])^2+abs(M[1, 1, 1, 1])^2 = 1, conjugate(M[0, 0, 0, 0])*M[0, 0, 0, 1]+conjugate(M[1, 0, 0, 0])*M[1, 0, 0, 1]+conjugate(M[0, 0, 1, 0])*M[0, 0, 1, 1]+conjugate(M[1, 0, 1, 0])*M[1, 0, 1, 1] = 0, conjugate(M[0, 0, 0, 0])*M[0, 1, 0, 0]+conjugate(M[1, 0, 0, 0])*M[1, 1, 0, 0]+conjugate(M[0, 0, 1, 0])*M[0, 1, 1, 0]+conjugate(M[1, 0, 1, 0])*M[1, 1, 1, 0] = 0, conjugate(M[0, 0, 0, 0])*M[0, 1, 0, 1]+conjugate(M[1, 0, 0, 0])*M[1, 1, 0, 1]+conjugate(M[0, 0, 1, 0])*M[0, 1, 1, 1]+conjugate(M[1, 0, 1, 0])*M[1, 1, 1, 1] = 0, conjugate(M[0, 0, 0, 1])*M[0, 0, 0, 0]+conjugate(M[1, 0, 0, 1])*M[1, 0, 0, 0]+conjugate(M[0, 0, 1, 1])*M[0, 0, 1, 0]+conjugate(M[1, 0, 1, 1])*M[1, 0, 1, 0] = 0, conjugate(M[0, 0, 0, 1])*M[0, 1, 0, 0]+conjugate(M[1, 0, 0, 1])*M[1, 1, 0, 0]+conjugate(M[0, 0, 1, 1])*M[0, 1, 1, 0]+conjugate(M[1, 0, 1, 1])*M[1, 1, 1, 0] = 0, conjugate(M[0, 0, 0, 1])*M[0, 1, 0, 1]+conjugate(M[1, 0, 0, 1])*M[1, 1, 0, 1]+conjugate(M[0, 0, 1, 1])*M[0, 1, 1, 1]+conjugate(M[1, 0, 1, 1])*M[1, 1, 1, 1] = 0, conjugate(M[0, 1, 0, 0])*M[0, 0, 0, 0]+conjugate(M[1, 1, 0, 0])*M[1, 0, 0, 0]+conjugate(M[0, 1, 1, 0])*M[0, 0, 1, 0]+conjugate(M[1, 1, 1, 0])*M[1, 0, 1, 0] = 0, conjugate(M[0, 1, 0, 0])*M[0, 0, 0, 1]+conjugate(M[1, 1, 0, 0])*M[1, 0, 0, 1]+conjugate(M[0, 1, 1, 0])*M[0, 0, 1, 1]+conjugate(M[1, 1, 1, 0])*M[1, 0, 1, 1] = 0, conjugate(M[0, 1, 0, 0])*M[0, 1, 0, 1]+conjugate(M[1, 1, 0, 0])*M[1, 1, 0, 1]+conjugate(M[0, 1, 1, 0])*M[0, 1, 1, 1]+conjugate(M[1, 1, 1, 0])*M[1, 1, 1, 1] = 0, conjugate(M[0, 1, 0, 1])*M[0, 0, 0, 0]+conjugate(M[1, 1, 0, 1])*M[1, 0, 0, 0]+conjugate(M[0, 1, 1, 1])*M[0, 0, 1, 0]+conjugate(M[1, 1, 1, 1])*M[1, 0, 1, 0] = 0, conjugate(M[0, 1, 0, 1])*M[0, 0, 0, 1]+conjugate(M[1, 1, 0, 1])*M[1, 0, 0, 1]+conjugate(M[0, 1, 1, 1])*M[0, 0, 1, 1]+conjugate(M[1, 1, 1, 1])*M[1, 0, 1, 1] = 0, conjugate(M[0, 1, 0, 1])*M[0, 1, 0, 0]+conjugate(M[1, 1, 0, 1])*M[1, 1, 0, 0]+conjugate(M[0, 1, 1, 1])*M[0, 1, 1, 0]+conjugate(M[1, 1, 1, 1])*M[1, 1, 1, 0] = 0}

(4.27)



 

Download Tensor_Products_of_Quantum_States_-_2018.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

It can be considered a continuation of these topics: “Determination of the angles of the manipulator with the help of its mathematical model. Inverse problem.”  and  “The use of manipulators as multi-axis CNC machines”.
There is a simple two-link manipulator with three degrees of freedom. We replace its last link, of fixed length, with a variable-length link. By this action we add one degree of freedom to our manipulator. Now we have a two-link manipulator with 4 degrees of freedom.
In this particular mathematical model the variable length of the link is represented by x7, and in the figure the total length of the last link is displayed as the result of subtracting the fixed part of the link (L2) from x7 (equation f2). At the same time, the value of the variable x7 depends on the inclination of the first link (equation f4). x1, x2, x3 are the coordinates of the moving point of the first link, and x4, x5, x6 are the coordinates of the operating point. The equation f2 defines the type of connection between the links, and the equations f5 and f6 define the trajectory of the working point.

MAN_2_4_Variable_length_MP.mw

Here is a classic puzzle:
You are camping, and have an 8-liter bucket which is full of fresh water. You need to share this water fairly into exactly two portions (4 + 4 liters). But you only have two empty buckets: a 5-liter and a 3-liter. Divide the 8 liters in half in as short a time as possible.

This is not an easy task; it requires at least 7 pourings.
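
For reference, one such shortest sequence of pourings, written as successive states [8-litre, 5-litre, 3-litre] (easy to verify by hand, and also found by the procedure below), is:

[[8,0,0], [3,5,0], [3,2,3], [6,2,0], [6,0,2], [1,5,2], [1,4,3], [4,4,0]];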

The procedure Pouring solves a similar problem for the general case: given n buckets of known volumes, with a known amount of water in each. Buckets can be partially filled, full or empty (of course the cases when all are empty or all are full are excluded). With each individual pouring, either the bucket from which we pour must be completely emptied, or the bucket into which we pour must be completely filled. It is forbidden to pour water anywhere other than the indicated buckets.

Formal parameters of the procedure: BucketVolumes is a list of bucket volumes, WaterVolumes is a list of water volumes in these buckets. The procedure returns all possible states that can occur during the pourings in the form of a tree (the initial state WaterVolumes is its root).

restart;
# Pouring returns all states reachable by successive admissible pourings.
#   BucketVolumes - list of bucket capacities
#   WaterVolumes  - list of initial amounts of water in the buckets
#   Output        - graph (default, draws the tree of states), set, or trails
Pouring := proc(BucketVolumes::list(And(positive, {integer, float, fraction})),
                WaterVolumes::list(And(nonnegative, {integer, float, fraction})),
                Output := graph)
local S, W, n, N, OneStep, j, v, H, G;
uses ListTools, GraphTheory;

n := nops(BucketVolumes);
if nops(WaterVolumes) <> n then error "The lists should be the same length" fi;
if n < 2 then error "Must have at least 2 buckets" fi;
if not `or`(op(WaterVolumes >~ 0)) then error "There must be at least one non-empty bucket" fi;
if BucketVolumes = WaterVolumes then error "At least one bucket should not be full" fi;
if not `and`(seq(WaterVolumes[i] <= BucketVolumes[i], i = 1..n)) then error "The amount of water in each bucket cannot exceed its volume" fi;
W := [[WaterVolumes]];

# OneStep extends every trail in W by all admissible single pourings;
# the global set L accumulates all states seen so far
OneStep := proc(W)
local w, k, i, v, V, k1, v0;
global L;
L := convert(Flatten(W, 1), set);
k1 := 0;
for w in W do
  k := 0; v := 'v';
  for i from 1 to n do
    for j from 1 to n do
      # pour from bucket i into bucket j: i must not be empty and j must not be full
      if i <> j and w[-1][i] <> 0 and w[-1][j] < BucketVolumes[j] then
        k := k+1;
        v[k] := subsop(i = `if`(w[-1][i] <= BucketVolumes[j]-w[-1][j], 0, w[-1][i]-(BucketVolumes[j]-w[-1][j])),
                       j = `if`(w[-1][i] <= BucketVolumes[j]-w[-1][j], w[-1][j]+w[-1][i], BucketVolumes[j]),
                       w[-1]);
      fi;
    od;
  od;
  v := convert(v, list);
  # keep the trail as is if it produces nothing new, otherwise extend it by each new state
  if `and`(seq(v0 in L, v0 = v)) then k1 := k1+1; V[k1] := w else
    for v0 in v do
      if not (v0 in L) then k1 := k1+1; V[k1] := [op(w), v0] fi;
    od;
  fi;
  L := L union convert(v, set);
od;
convert(V, list);
end proc:

# iterate OneStep until no new states appear
S[0] := {};
for N from 1 do
  H[N] := (OneStep@@N)(W);
  S[N] := L;
  if S[N-1] = S[N] then break fi;
od;
if Output = set then return L else
  if Output = trails then interface(rtablesize = infinity);
    return <H[N-1]> else
    G := Graph(seq(Trail(map(t -> t[2..-2], convert~(h, string))), h = H[N-1]));
    DrawGraph(G, style = tree, root = convert(WaterVolumes, string)[2..-2],
              stylesheet = [vertexcolor = "Yellow", vertexfont = [TIMES, 20]], size = [800, 500])
  fi;
fi;

end proc:

Examples of use:

Here is the solution to the original puzzle above. We see that at least 7 pourings are required to get equal volumes (4 + 4) in two buckets

Pouring([8,5,3], [8,0,0]);
           

 

With an increase in the number of buckets, the number of solutions grows dramatically. Here is the solution to the following problem: is it possible to equalize the amounts of water (7+7+7+7) in this example?

Pouring([14,10,9,9],[14,10,4,0]);
S:=Pouring([14,10,9,9],[14,10,4,0], set);
is([7,7,7,7] in S);
nops(S);
         

 

 

Download Pouring.mw

 

 

Am pondering how to best provide user-configurable options for a few packages I've written. The easiest method is to use global variables, preassign their default values during the package definition (but don't protect them) and save them with the mla used for the package. A user could then assign new values, say in their Maple initialization file or in a worksheet.  For example

  FooDefaultBar := true:
  FooDefaultBaz := false:

That works for a few variables, but is unwieldy if there are many, as the names generally have to be long and verbose to avoid accidental collision. Better may be to use a single record

  FooDefaults := Record('Bar' = true, 'Baz' = false):

To change one or more values, the user could do

   use FooDefaults in
      Bar := false;
   end use:

A drawback of using a global variable or record is that the user can assign a value of any type to the variable, so the program using it will have to check it. While one could use a record with typed fields, for example,

  FooDefaults := Record('Bar' :: truefalse = true, 'Baz' :: truefalse = false):

that only has an effect on assignments if kernelopts(assertlevel) is 2, which isn't the default.
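
For completeness, here is a small sketch of that behaviour, assuming the user is willing to set kernelopts(assertlevel) globally:

FooDefaults := Record('Bar' :: truefalse = true, 'Baz' :: truefalse = false):
kernelopts(assertlevel = 2):
FooDefaults:-Bar := false;     # accepted, matches the declared type truefalse
FooDefaults:-Baz := "yes";     # raises an assertion error, "yes" is not of type truefalse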

A different approach is to use a Maple object to handle configuration variables. The object should be defined separate from the package it is configuring, so that the target package doesn't have to be loaded to customize its configuration. I've created a small object for this, but am not satisfied with its usage. Here is how it is currently used

# Create configuration object for package foo
Configure('fooDefaults', 'Bar' :: truefalse = true, 'Baz' :: truefalse = false):

The Assign method is used to reassign one or more fields

Assign(fooDefaults, 'Bar' = false, 'Baz' = true):

If a value does not match the declared type, an error is raised. Values from the object are available via the index operator:

   fooDefaults['Bar'];

Am not wild about this approach; the assignment seems clunky and would require a user to consult a help page to learn about the existence of the Assign method, though that would probably be necessary, regardless, to learn about the defaults themselves. Any thoughts on improvements? Attached is the current code.

Configure := module()

option object;

local Default # record of values
    , Type    # record of types
    , nomen   # string corresponding to name of assigned object
    , all :: static := {}
    ;

export
    ModuleApply :: static := proc()
        Object(Configure, _passed);
    end proc;

export
    ModuleCopy :: static := proc(self :: Configure
                                 , proto :: Configure
                                 , nm :: name
                                 , defaults :: seq(name :: type = anything)
                                 , $
                                )
    local eq;
        self:-Default := Record(defaults);
        self:-Type    := Record(seq(op([1,1], eq) = op([1,2], eq), eq = [defaults]));
        self:-nomen   := convert(nm,'`local`');
        nm := self;
        protect(nm);
        self:-all := {op(self:-all), self:-nomen};
        nm;
    end proc;

export
    ModulePrint :: static := proc(self :: Configure)
    local default;
        if self:-Default :: 'record' then
            self:-nomen(seq(default = self:-Default[default]
                            , default = exports(self:-Default)
                           ));
        else
            self:-nomen();
        end if;
    end proc;

export
    Assign :: static := proc(self :: Configure
                             , eqs :: seq(name = anything)
                             , $
                            )
    local eq, nm, val;
        # Check eqs
        for eq in [eqs] do
            (nm, val) := op(eq);
            if not assigned(self:-Default[nm]) then
                error "%1 is not a default of %2", nm, self:-nomen;
            elif not val :: self:-Type[nm] then
                error ("%1 must be of type %2, received %3"
                       , nm, self:-Type[nm], val);
            end if;
        end do;
        # Assign defaults
        for eq in [eqs] do
            (nm, val) := op(eq);
            self:-Default[nm] := val;
        end do;
        self;
    end proc;

export
    `?[]` :: static := proc(self :: Configure
                            , indx :: list
                            , val :: list
                           )
    local opt;
        opt := op(indx);
        if not assigned(self:-Default[opt]) then
            error "'%0' is not an assigned field of this Configure object", indx[];
        elif nargs = 2 then
            self:-Default[opt];
        elif not val :: [self:-Type[opt]] then
            error "value for %1 must be of type %2", opt, self:-Type[opt];
        else
            self:-Default[opt] := op(val);
        end if;
    end proc;

export
    ListAll :: static := proc(self :: Configure)
        self:-all;
    end proc;

end module:

Later: Observing that this is just a glorified record with an assurance that the values match their declared types, but with less nice methods to set and get the values, I concluded that what I really want is a record that enforces types regardless of the setting of kernelopts(assertlevel). Maybe created with

   FooDefaults := Record[strict]('Bar' :: truefalse = true, 'Baz' :: truefalse = false):

In the meantime, I'll probably just use a record and not worry about whether a user has assigned an invalid value.

Hoping to get your very own copy of Maple for Christmas? 

No, I'm not about to announce a sale. No crass commercialism allowed on MaplePrimes!  But we have created this handy-dandy letter for Santa that students can use to try to convince him to put Maple under the tree this year*.

 

 

Happy holidays from Maplesoft!

 

*Results not guaranteed, but we hope you will agree we made a valiant attempt. :-)

 

On the convex part of the surface we place a curve (not necessarily flat, as in this case). We divide this curve into segments of equal length (Ls[i] in the text) and divide the path along which our surface will roll (L[i] in the text) into segments of the same length as the segments of the curve. We take the next segment of the trajectory L[i] and the corresponding segment Ls[i] on the curve, and calculate the angles between them. After that, we perform well-known transformations that place the curve in space so that the segment Ls[i] coincides with the segment L[i]. At the same time, we apply exactly the same transformations to the equation of the surface.

For example, the ellipsoid rolls on the oX1 axis, and each position of the ellipsoid in space corresponds to the equation in the figure.
Rolling_of_surface.mw

 

And similar examples:

Exact solutions for PDE and Boundary / Initial Conditions

 

Significant developments happened during 2018 in Maple's ability to solve PDE with boundary / initial conditions exactly. This is work in collaboration with Katherina von Bülow. Part of these developments were mentioned in previous posts. The project is still active, but it's December, time to summarize.

 

First of all, thanks to all of you who provided feedback. The new functionality is described below, in 11 brief sections, with 30 selected examples and a few comments. A worksheet with these contents is linked at the end of this post. Some of these improvements appeared first in 2018.1, then in 2018.2, while other ones came later. To reproduce the input/output below in Maple 2018.2.1, the latest Maplesoft Physics Updates (version 269 or higher) needs to be installed.

 

1. PDE and BC problems solved using linear change of variables

 

PDE and BC problems often require that the boundary and initial conditions be given at certain evaluation points (usually points at which one of the variables is equal to zero). Using linear changes of variables, however, it is possible to change the evaluation points of the BC, obtain the solution for the new variables, and then change back to the original variables. This is now automatically done by the pdsolve command.

 

Example 1: A heat PDE & BC problem in a semi-infinite domain:

pde__1 := diff(u(x, t), t) = (1/4)*(diff(u(x, t), x, x))

iv__1 := u(-A, t) = 0, u(x, B) = 10

 

Note the evaluation points A and B. The method typically described in textbooks requires the evaluation points to be A = 0, B = 0. The change of variables automatically used in this case is:

transformation := {t = tau+B, x = xi-A, u(x, t) = upsilon(xi, tau)}

{t = tau+B, x = xi-A, u(x, t) = upsilon(xi, tau)}

(1)

so that pdsolve's task becomes solving this other problem, now with the appropriate evaluation points

PDEtools:-dchange(transformation, [pde__1, iv__1], {tau, upsilon, xi})

[diff(upsilon(xi, tau), tau) = (1/4)*(diff(diff(upsilon(xi, tau), xi), xi)), upsilon(0, tau) = 0, upsilon(xi, 0) = 10]

(2)

and then changing the variables back to the original {x, t, u} and giving the solution. The process all in one go:

`assuming`([pdsolve([pde__1, iv__1])], [abs(A) < x, abs(B) < t])

u(x, t) = 10*erf((x+A)/(t-B)^(1/2))

(3)

 

Example 2: A heat PDE with a source and a piecewise initial condition

pde__2 := diff(u(x, t), t)+1 = mu*(diff(u(x, t), x, x))

iv__2 := u(x, 1) = piecewise(0 <= x, 0, x < 0, 1)

`assuming`([pdsolve([pde__2, iv__2])], [0 < mu, 0 < t])

u(x, t) = 3/2-(1/2)*erf((1/2)*x/(mu^(1/2)*(t-1)^(1/2)))-t

(4)

 

Example 3: A wave PDE & BC problem in a semi-infinite domain:

pde__3 := diff(u(x, t), t, t) = diff(u(x, t), x, x)

iv__3 := u(x, 1) = exp(-(x-6)^2)+exp(-(x+6)^2), (D[2](u))(x, 1) = 1/2

`assuming`([pdsolve([pde__3, iv__3])], [0 < t])

u(x, t) = (1/2)*exp(-(-x+t+5)^2)+(1/2)*exp(-(-x+t-7)^2)+(1/2)*exp(-(x+t-7)^2)+(1/2)*exp(-(x+t+5)^2)+(1/2)*t-1/2

(5)

 

Example 4: A wave PDE & BC problem in a semi-infinite domain:

pde__4 := diff(u(x, t), t, t)-(1/4)*(diff(u(x, t), x, x)) = 0

iv__4 := (D[1](u))(1, t) = 0, u(x, 0) = exp(-x^2), (D[2](u))(x, 0) = 0

`assuming`([pdsolve([pde__4, iv__4])], [1 < x, 0 < t])

u(x, t) = piecewise((1/2)*t < x-1, (1/2)*exp(-(1/4)*(t+2*x)^2)+(1/2)*exp(-(1/4)*(t-2*x)^2), x-1 < (1/2)*t, (1/2)*exp(-(1/4)*(t+2*x)^2)+(1/2)*exp(-(1/4)*(t-2*x+4)^2))

(6)

 

Example 5: A wave PDE with a source:

pde__5 := diff(u(x, t), t, t)-c^2*(diff(u(x, t), x, x)) = f(x, t)

iv__5 := u(x, 1) = g(x), (D[2](u))(x, 1) = h(x)

pdsolve([pde__5, iv__5], u(x, t))

u(x, t) = (1/2)*(Int(Int((diff(diff(h(zeta), zeta), zeta))*c^2*tau+(diff(diff(g(zeta), zeta), zeta))*c^2+f(zeta, tau+1), zeta = (-t+tau+1)*c+x .. x+c*(t-1-tau)), tau = 0 .. t-1)+(2*t-2)*c*h(x)+2*g(x)*c)/c

(7)

pdetest(u(x, t) = (1/2)*(Int(Int((diff(diff(h(zeta), zeta), zeta))*c^2*tau+(diff(diff(g(zeta), zeta), zeta))*c^2+f(zeta, tau+1), zeta = (-t+tau+1)*c+x .. x+c*(t-1-tau)), tau = 0 .. t-1)+(2*t-2)*c*h(x)+2*g(x)*c)/c, [pde__5, iv__5])

[0, 0, 0]

(8)

2. It is now possible to specify or exclude method(s) for solving

 

The pdsolve/BC solving methods can now be indicated, either to be used for solving, as in methods = [method__1, method__2, ...] to be tried in the indicated order, or to be excluded, as in exclude = [method__1, method__2, ...]. The methods and sub-methods available are organized in a table,
`pdsolve/BC/methods`

indices(`pdsolve/BC/methods`)

[1], [2], [3], [2, "Series"], [2, "Heat"], ["high_order"], ["system"], [2, "Wave"], [2, "SpecializeArbitraryFunctions"]

(9)


So, for example, the methods for PDEs of first order and second order are, respectively,

`pdsolve/BC/methods`[1]

["SpecializeArbitraryFunctions", "Fourier", "Laplace", "Generic", "PolynomialSolutions", "LinearDifferentialOperator"]

(10)

`pdsolve/BC/methods`[2]

["SpecializeArbitraryFunctions", "SpecializeArbitraryConstants", "Wave", "Heat", "Series", "Laplace", "Fourier", "Generic", "PolynomialSolutions", "LinearDifferentialOperator", "Superposition"]

(11)

 

Some methods have sub-methods (their existence is visible in (9)):

`pdsolve/BC/methods`[2, "Series"]

["ThreeBCsincos", "FourBC", "ThreeBC", "ThreeBCPeriodic", "WithSourceTerm", "ThreeVariables"]

(12)

`pdsolve/BC/methods`[2, "Heat"]

["SemiInfiniteDomain", "WithSourceTerm"]

(13)

 

Example 6:

pde__6 := diff(u(r, theta), r, r)+diff(u(r, theta), theta, theta) = 0

iv__6 := u(2, theta) = 3*sin(2*theta)+1

pdsolve([pde__6, iv__6])

u(r, theta) = -_F2(-I*r+2*I+theta)+1-3*sin((2*I)*r-4*I-2*theta)+_F2(I*r-2*I+theta)

(14)

pdsolve([pde__6, iv__6], method = Fourier)

u(r, theta) = ((3/2)*I)*exp(2*r-4-(2*I)*theta)-((3/2)*I)*exp(-2*r+4+(2*I)*theta)+1

(15)

Example 7:

pde__7 := diff(u(x, y), x, x)+diff(u(x, y), y, y) = 0

iv__7 := u(x, 0) = Dirac(x)

pdsolve([pde__7, iv__7])

u(x, y) = Dirac(x)-(1/2)*Dirac(2, x)*y^2+_C3*y

(16)

pdsolve([pde__7, iv__7], method = Fourier)

u(x, y) = invfourier(exp(-s*y), s, x)

(17)

convert(u(x, y) = invfourier(exp(-s*y), s, x), Int)

u(x, y) = (1/2)*(Int(exp(-s*y+I*s*x), s = -infinity .. infinity))/Pi

(18)

pdsolve([pde__7, iv__7], method = Generic)

u(x, y) = -_F2(-y+I*x)+Dirac(x+I*y)+_F2(y+I*x)

(19)
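
For instance, both forms of the new option can be tried on the examples above (a sketch, assuming the methods = [...] and exclude = [...] syntax exactly as stated at the beginning of this section; depending on the version the method names may need to be given as strings, and the solution returned is whichever of the allowed methods succeeds first):

pdsolve([pde__7, iv__7], methods = [Generic, Laplace]);    # try only these two methods, in this order
pdsolve([pde__7, iv__7], exclude = [Fourier]);             # try everything except the Fourier method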

3. Series solutions for linear PDE and BC problems solved via product separation with eigenvalues that are the roots of algebraic expressions which cannot be inverted

 

Linear problems for which the PDE can be separated by product, giving rise to Sturm-Liouville problems for the separation constant (eigenvalue) and separated functions (eigenfunctions), do not always result in solvable equations for the eigenvalues. Below are examples where the eigenvalues are, respectively, roots of a sum of BesselJ functions and of the equation tan(lambda[n])+lambda[n] = 0, which cannot be inverted.

 

Example 8: This problem represents the temperature distribution in a thin circular plate whose lateral surfaces are insulated (Articolo example 6.9.2):

pde__8 := diff(u(r, theta, t), t) = (diff(u(r, theta, t), r)+r*(diff(u(r, theta, t), r, r))+(diff(u(r, theta, t), theta, theta))/r)/(25*r)

iv__8 := (D[1](u))(1, theta, t) = 0, u(r, 0, t) = 0, u(r, Pi, t) = 0, u(r, theta, 0) = (r-(1/3)*r^3)*sin(theta)

pdsolve([pde__8, iv__8])

u(r, theta, t) = `casesplit/ans`(Sum(-(4/3)*BesselJ(1, lambda[n]*r)*sin(theta)*exp(-(1/25)*lambda[n]^2*t)*(BesselJ(0, lambda[n])*lambda[n]^3-BesselJ(1, lambda[n])*lambda[n]^2+4*lambda[n]*BesselJ(0, lambda[n])-8*BesselJ(1, lambda[n]))/(lambda[n]^3*(BesselJ(0, lambda[n])^2*lambda[n]+BesselJ(1, lambda[n])^2*lambda[n]-2*BesselJ(0, lambda[n])*BesselJ(1, lambda[n]))), n = 0 .. infinity), {And(-BesselJ(1, lambda[n])+BesselJ(2, lambda[n])*lambda[n] = 0, 0 < lambda[n])})

(20)

 

In the above we see that the eigenvalue `&lambda;__n` satisfies -BesselJ(1, lambda[n])+BesselJ(2, lambda[n])*lambda[n] = 0. When `&lambda;__n` is the root of one single BesselJ or BesselY function of integer order, the Maple functions BesselJZeros and BesselYZeros are used instead. That is the case, for instance, if we slightly modify this problem changing the first boundary condition to be u(1, theta, t) = 0 instead of (D[1](u))(1, theta, t) = 0

`iv__8.1` := u(1, theta, t) = 0, u(r, 0, t) = 0, u(r, Pi, t) = 0, u(r, theta, 0) = (r-(1/3)*r^3)*sin(theta)

pdsolve([pde__8, `iv__8.1`])

u(r, theta, t) = `casesplit/ans`(Sum(-(4/3)*BesselJ(1, lambda[n]*r)*sin(theta)*exp(-(1/25)*lambda[n]^2*t)*(lambda[n]^2+4)/(BesselJ(0, lambda[n])*lambda[n]^3), n = 1 .. infinity), {And(lambda[n] = BesselJZeros(1, n), 0 < lambda[n])})

(21)

Example 9: This problem represents the temperature distribution in a thin rod whose left end is held at a fixed temperature of 5 and whose right end loses heat by convection into a medium whose temperature is 10. There is also an internal heat source term in the PDE (Articolo's textbook, example 8.4.3):

pde__9 := diff(u(x, t), t) = (1/20)*(diff(u(x, t), x, x))+t

iv__9 := u(0, t) = 5, u(1, t)+(D[1](u))(1, t) = 10, u(x, 0) = -40*x^2*(1/3)+45*x*(1/2)+5

pdsolve([pde__9, iv__9], u(x, t))

u(x, t) = `casesplit/ans`(Sum(piecewise(lambda[n] = 0, 0, (80/3)*exp(-(1/20)*lambda[n]^2*t)*sin(lambda[n]*x)*(lambda[n]^2*cos(lambda[n])+lambda[n]*sin(lambda[n])+4*cos(lambda[n])-4)/(lambda[n]^2*(sin(2*lambda[n])-2*lambda[n]))), n = 0 .. infinity)+Int(Sum(piecewise(lambda[n] = 0, 0, 4*exp(-(1/20)*lambda[n]^2*(t-tau))*sin(lambda[n]*x)*tau*(cos(lambda[n])-1)/(sin(2*lambda[n])-2*lambda[n])), n = 0 .. infinity), tau = 0 .. t)+(5/2)*x+5, {And(tan(lambda[n])+lambda[n] = 0, 0 < lambda[n])})

(22)

For information on how to test or plot a solution like the one above, please see the end of the Mapleprimes post "Sturm-Liouville problem with eigenvalues that are the roots of algebraic expressions which cannot be inverted" 
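
As a first step in that direction, the eigenvalues can be computed numerically; for the problem above the positive roots of tan(lambda) + lambda = 0 lie one in each interval ((2 n - 1) Pi/2, n Pi), since tan is monotonic between its singularities, so a sketch of their computation is:

[seq(fsolve(tan(lambda) + lambda = 0, lambda = (2*n - 1)*Pi/2 + 1/100 .. n*Pi), n = 1 .. 5)];
# approximately [2.029, 4.913, 7.979, 11.086, 14.207]; these values can then be substituted
# into a truncated version of the sum in (22) for testing or plotting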

 

4. Superposition method for linear PDE with more than one non-homogeneous BC

 

Previously, for linear homogeneous PDE problems with non-periodic initial and boundary conditions, pdsolve was only consistently able to solve the problem as long as at most one of those conditions was non-homogeneous. The superposition method works by taking advantage of the linearity of the problem and the fact that the solution to such a problem in which two or more of the BC are non-homogeneous can be given as

u = u__1+u__2 + ...,  where each u__i is a solution of the PDE with all but one of the BC homogenized.

 

Example 10: A Laplace PDE with one homogeneous and three non-homogeneous conditions:

pde__10 := diff(u(x, y), x, x)+diff(u(x, y), y, y) = 0

iv__10 := u(0, y) = 0, u(Pi, y) = sinh(Pi)*cos(y), u(x, 0) = sin(x), u(x, Pi) = -sinh(x)

pdsolve([pde__10, iv__10])

u(x, y) = ((exp(2*Pi)-1)*(Sum((-1)^n*n*(exp(2*Pi)-1)*exp(n*(Pi-y)-Pi)*sin(n*x)*(exp(2*n*y)-1)/(Pi*(n^2+1)*(exp(2*Pi*n)-1)), n = 1 .. infinity))+(exp(2*Pi)-1)*(Sum(2*sin(n*y)*exp(n*(Pi-x))*n*sinh(Pi)*((-1)^n+1)*(exp(2*n*x)-1)/(Pi*(exp(2*Pi*n)-1)*(n^2-1)), n = 2 .. infinity))+sin(x)*(exp(-y+2*Pi)-exp(y)))/(exp(2*Pi)-1)

(23)
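
To see the superposition idea at work by hand on this example (a sketch; pdsolve does this automatically, and each partial solution may come out in a slightly different but equivalent form), homogenize all but one of the non-homogeneous conditions in turn and add the three solutions:

iv__10a := u(0, y) = 0, u(Pi, y) = sinh(Pi)*cos(y), u(x, 0) = 0, u(x, Pi) = 0:
iv__10b := u(0, y) = 0, u(Pi, y) = 0, u(x, 0) = sin(x), u(x, Pi) = 0:
iv__10c := u(0, y) = 0, u(Pi, y) = 0, u(x, 0) = 0, u(x, Pi) = -sinh(x):
u(x, y) = rhs(pdsolve([pde__10, iv__10a])) + rhs(pdsolve([pde__10, iv__10b])) + rhs(pdsolve([pde__10, iv__10c]));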

 

5. Polynomial solutions method:

 

This new method gives pdsolve better performance when the PDE & BC problems admit polynomial solutions.

 

Example 11:

pde__11 := diff(u(x, y), x, x)+y*(diff(u(x, y), y, y)) = 0

iv__11 := u(x, 0) = 0, (D[2](u))(x, 0) = x^2

pdsolve([pde__11, iv__11], u(x, y))

u(x, y) = y*(x^2-y)

(24)
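
As usual, the solution can be quickly verified with pdetest, expected to return a list of zeros (one for the PDE and one for each condition):

pdetest(u(x, y) = y*(x^2 - y), [pde__11, iv__11]);
# expected: [0, 0, 0]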

 

6. Solving more problems using the Laplace transform or the Fourier transform

 

These methods now solve more problems and are no longer restricted to PDE of first or second order.

 

Example 12: A third order PDE & BC problem:

pde__12 := diff(u(x, t), t) = -(diff(u(x, t), x, x, x))

iv__12 := u(x, 0) = f(x)

pdsolve([pde__12, iv__12])

u(x, t) = (1/4)*(Int((4/3)*Pi*f(-zeta)*(-(x+zeta)/(-t)^(1/3))^(1/2)*BesselK(1/3, -(2/9)*3^(1/2)*(x+zeta)*(-(x+zeta)/(-t)^(1/3))^(1/2)/(-t)^(1/3))/(-t)^(1/3), zeta = -infinity .. infinity))/Pi^2

(25)

 

Example 13: A PDE & BC problem that is solved using Laplace transform:

pde__13 := diff(u(x, y), y, x) = sin(x)*sin(y)

iv__13 := u(x, 0) = 1+cos(x), (D[2](u))(0, y) = -2*sin(y)

pdsolve([pde__13, iv__13])

u(x, y) = (1/2)*cos(x-y)+(1/2)*cos(x+y)+cos(y)

(26)
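
This solution too can be verified with pdetest (expected to return a list of zeros):

pdetest(u(x, y) = (1/2)*cos(x - y) + (1/2)*cos(x + y) + cos(y), [pde__13, iv__13]);
# expected: [0, 0, 0]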

To see the computational flow, the solving methods used and the order in which they are tried, use

infolevel[pdsolve] := 2

2

(27)

Example 14:

pde__14 := diff(u(x, y), x, x)+diff(u(x, y), y, y) = 0

iv__14 := u(x, 0) = 0, u(x, b) = h(x)

pdsolve([pde__14, iv__14])

* trying method "SpecializeArbitraryFunctions" for 2nd order PDEs
   -> trying "LinearInXT"
   -> trying "HomogeneousBC"
* trying method "SpecializeArbitraryConstants" for 2nd order PDEs
* trying method "Wave" for 2nd order PDEs
   -> trying "Cauchy"
   -> trying "SemiInfiniteDomain"
   -> trying "WithSourceTerm"
* trying method "Heat" for 2nd order PDEs
   -> trying "SemiInfiniteDomain"
   -> trying "WithSourceTerm"
* trying method "Series" for 2nd order PDEs
   -> trying "ThreeBCsincos"
   -> trying "FourBC"
   -> trying "ThreeBC"
   -> trying "ThreeBCPeriodic"
   -> trying "WithSourceTerm"
      * trying method "SpecializeArbitraryFunctions" for 2nd order PDEs
         -> trying "LinearInXT"
         -> trying "HomogeneousBC"
            Trying travelling wave solutions as power series in tanh ...
               Trying travelling wave solutions as power series in ln ...
      * trying method "SpecializeArbitraryConstants" for 2nd order PDEs
         Trying travelling wave solutions as power series in tanh ...
            Trying travelling wave solutions as power series in ln ...
      * trying method "Wave" for 2nd order PDEs
         -> trying "Cauchy"
         -> trying "SemiInfiniteDomain"
         -> trying "WithSourceTerm"
      * trying method "Heat" for 2nd order PDEs
         -> trying "SemiInfiniteDomain"
         -> trying "WithSourceTerm"
      * trying method "Series" for 2nd order PDEs
         -> trying "ThreeBCsincos"
         -> trying "FourBC"
         -> trying "ThreeBC"
         -> trying "ThreeBCPeriodic"
         -> trying "WithSourceTerm"
         -> trying "ThreeVariables"
      * trying method "Laplace" for 2nd order PDEs
         -> trying a Laplace transformation
      * trying method "Fourier" for 2nd order PDEs
         -> trying a fourier transformation

      * trying method "Generic" for 2nd order PDEs
         -> trying a solution in terms of arbitrary constants and functions to be adjusted to the given initial conditions
      * trying method "PolynomialSolutions" for 2nd order PDEs

      * trying method "LinearDifferentialOperator" for 2nd order PDEs
      * trying method "Superposition" for 2nd order PDEs
   -> trying "ThreeVariables"
* trying method "Laplace" for 2nd order PDEs
   -> trying a Laplace transformation
* trying method "Fourier" for 2nd order PDEs
   -> trying a fourier transformation

   <- fourier transformation successful
<- method "Fourier" for 2nd order PDEs successful

 

u(x, y) = invfourier(exp(s*(b+y))*fourier(h(x), x, s)/(exp(2*s*b)-1), s, x)-invfourier(exp(s*(b-y))*fourier(h(x), x, s)/(exp(2*s*b)-1), s, x)

(28)

convert(u(x, y) = invfourier(exp(s*(b+y))*fourier(h(x), x, s)/(exp(2*s*b)-1), s, x)-invfourier(exp(s*(b-y))*fourier(h(x), x, s)/(exp(2*s*b)-1), s, x), Int)

u(x, y) = (1/2)*(Int((Int(h(x)*exp(-I*x*s), x = -infinity .. infinity))*exp(s*(b+y)+I*s*x)/(exp(2*s*b)-1), s = -infinity .. infinity))/Pi-(1/2)*(Int((Int(h(x)*exp(-I*x*s), x = -infinity .. infinity))*exp(s*(b-y)+I*s*x)/(exp(2*s*b)-1), s = -infinity .. infinity))/Pi

(29)

Reset the infolevel to avoid displaying the computational flow:

infolevel[pdsolve] := 1

7. Improvements to solving heat and wave PDE, with or without a source:

 

Example 15: A heat PDE:

pde__15 := diff(u(x, t), t) = 13*(diff(u(x, t), x, x))

iv__15 := (D[1](u))(0, t) = 0, (D[1](u))(1, t) = 1, u(x, 0) = (1/2)*x^2+x

pdsolve([pde__15, iv__15], u(x, t))

u(x, t) = 1/2+Sum(2*cos(n*Pi*x)*exp(-13*Pi^2*n^2*t)*(-1+(-1)^n)/(Pi^2*n^2), n = 1 .. infinity)+13*t+(1/2)*x^2

(30)

To verify an infinite series solution such as this one, you can first use pdetest:

pdetest(u(x, t) = 1/2+Sum(2*cos(n*Pi*x)*exp(-13*Pi^2*n^2*t)*(-1+(-1)^n)/(Pi^2*n^2), n = 1 .. infinity)+13*t+(1/2)*x^2, [pde__15, iv__15])

[0, 0, 0, 1/2+Sum(2*cos(n*Pi*x)*(-1+(-1)^n)/(Pi^2*n^2), n = 1 .. infinity)-x]

(31)

To verify that the last condition, the one for u(x, 0), is satisfied, we plot the first 1000 terms of the series solution at t = 0 and compare with the plot of the right-hand side of the initial condition u(x, 0) = (1/2)*x^2+x. Expected: the two plots superimpose.

plot([value(subs(t = 0, infinity = 1000, rhs(u(x, t) = 1/2+Sum(2*cos(n*Pi*x)*exp(-13*Pi^2*n^2*t)*(-1+(-1)^n)/(Pi^2*n^2), n = 1 .. infinity)+13*t+(1/2)*x^2))), (1/2)*x^2+x], x = 0 .. 1)

 

Example 16: A heat PDE in a semi-bounded domain:

pde__16 := diff(u(x, t), t) = (1/4)*(diff(u(x, t), x, x))

iv__16 := (D[1](u))(alpha, t) = 0, u(x, beta) = 10*exp(-x^2)

`assuming`([pdsolve([pde__16, iv__16], u(x, t))], [0 < x, 0 < t])

u(x, t) = -5*exp(x^2/(-t+beta-1))*((erf(((t-beta-1)*alpha+x)/((t-beta+1)^(1/2)*(t-beta)^(1/2)))-1)*exp(4*alpha*(-x+alpha)/(-t+beta-1))+erf(((t-beta+1)*alpha-x)/((t-beta+1)^(1/2)*(t-beta)^(1/2)))-1)/(t-beta+1)^(1/2)

(32)

 

Example 17: A wave PDE in a semi-bounded domain:

pde__17 := diff(u(x, t), t, t)-9*(diff(u(x, t), x, x)) = 0

iv__17 := (D[1](u))(0, t) = 0, u(x, 0) = 0, (D[2](u))(x, 0) = x^3

`assuming`([pdsolve([pde__17, iv__17])], [0 < x, 0 < t])

u(x, t) = piecewise(3*t < x, 9*t^3*x+t*x^3, x < 3*t, (27/4)*t^4+(9/2)*t^2*x^2+(1/12)*x^4)

(33)

 

Example 18: A wave PDE with a source

pde__18 := diff(u(x, t), t, t) = diff(u(x, t), x, x)+x*exp(-t)

iv__18 := u(0, t) = 0, u(1, t) = 0, u(x, 0) = 0, (D[2](u))(x, 0) = 1

pdsolve([pde__18, iv__18])

u(x, t) = Sum(((-Pi^2*(-1)^n*n^2+Pi^2*n^2+2*(-1)^(n+1)+1)*cos(n*Pi*(t-x))-Pi*(-1)^n*n*sin(n*Pi*(t-x))+(Pi^2*(-1)^n*n^2-Pi^2*n^2+2*(-1)^n-1)*cos(n*Pi*(t+x))+Pi*n*(2*exp(-t)*(-1)^(n+1)*sin(n*Pi*x)+sin(n*Pi*(t+x))*(-1)^n))/(Pi^2*n^2*(Pi^2*n^2+1)), n = 1 .. infinity)

(34)

 

Example 19: Another wave PDE with a source

pde__19 := diff(u(x, t), t, t) = 4*(diff(u(x, t), x, x))+(1+t)*x

iv__19 := u(0, t) = 0, u(Pi, t) = sin(t), u(x, 0) = 0, (D[2](u))(x, 0) = 0

pdsolve([pde__19, iv__19])

u(x, t) = ((Sum(-2*((1/2)*cos(n*x-t)*n^3-(1/2)*cos(n*x+t)*n^3+((-2*n^4-(1/2)*Pi*n^2+(1/8)*Pi)*sin(2*n*t)+(t-cos(2*n*t)+1)*n*(n-1/2)*Pi*(n+1/2))*sin(n*x))*(-1)^n/(Pi*n^4*(4*n^2-1)), n = 1 .. infinity))*Pi+x*sin(t))/Pi

(35)

8. Improvements in series methods for Laplace PDE problems

 

"  Example 20:A Laplace PDE with BC representing the inside of a quarter circle of radius 1. The solution we seek is bounded as r approaches 0:"

pde__20 := diff(u(r, theta), r, r)+(diff(u(r, theta), r))/r+(diff(u(r, theta), theta, theta))/r^2 = 0

iv__20 := u(r, 0) = 0, u(r, (1/2)*Pi) = 0, (D[1](u))(1, theta) = f(theta)

`assuming`([pdsolve([pde__20, iv__20], u(r, theta), HINT = boundedseries(r = [0]))], [0 <= theta, theta <= (1/2)*Pi, 0 <= r, r <= 1])

u(r, theta) = Sum(2*(Int(f(theta)*sin(2*n*theta), theta = 0 .. (1/2)*Pi))*r^(2*n)*sin(2*n*theta)/(Pi*n), n = 1 .. infinity)

(36)
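A simple way to spot-check (36) is to specialize the arbitrary boundary function; the choice f(theta) = sin(2*theta) below is only illustrative (and shadows the arbitrary f used above). By orthogonality only the n = 1 term survives, and the Neumann condition at r = 1 is recovered.

# Spot check of (36) with f(theta) = sin(2*theta)
f := theta -> sin(2*theta):
term := n -> 2*int(f(theta)*sin(2*n*theta), theta = 0 .. Pi/2)*r^(2*n)*sin(2*n*theta)/(Pi*n):
term(1);                        # expected: (1/2)*r^2*sin(2*theta); terms with n >= 2 vanish
eval(diff(term(1), r), r = 1);  # expected: sin(2*theta), i.e. (D[1](u))(1, theta) = f(theta)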

 

Example 21: A Laplace PDE for which we seek a solution that remains bounded as y approaches infinity:

pde__21 := diff(u(x, y), x, x)+diff(u(x, y), y, y) = 0

iv__21 := u(0, y) = A, u(a, y) = 0, u(x, 0) = 0

`assuming`([pdsolve([pde__21, iv__21], HINT = boundedseries(y = infinity))], [a > 0])

u(x, y) = ((Sum(-2*A*sin(n*Pi*x/a)*exp(-n*Pi*y/a)/(n*Pi), n = 1 .. infinity))*a-A*(x-a))/a

(37)

 

9. Better simplification of answers:

 

 

Example 22: For this wave PDE with a source term, pdsolve used to return a solution with uncomputed integrals:

pde__22 := diff(u(x, t), t, t) = A*x+diff(u(x, t), x, x)

iv__22 := u(0, t) = 0, u(1, t) = 0, u(x, 0) = 0, (D[2](u))(x, 0) = 0

pdsolve([pde__22, iv__22], u(x, t))

u(x, t) = Sum(2*(-1)^n*A*sin(n*Pi*x)*cos(n*Pi*t)/(n^3*Pi^3), n = 1 .. infinity)+(1/6)*(-x^3+x)*A

(38)

 

Example 23: A BC at x = infinity is now handled by pdsolve:

pde__23 := diff(u(x, y), x, x)+diff(u(x, y), y, y) = 0

iv__23 := u(0, y) = sin(y), u(x, 0) = 0, u(x, a) = 0, u(infinity, y) = 0

`assuming`([pdsolve([pde__23, iv__23], u(x, y))], [0 < a])

u(x, y) = Sum(2*piecewise(a = Pi*n, (1/2)*Pi*n, -Pi*(-1)^n*sin(a)*n*a/(Pi^2*n^2-a^2))*exp(-n*Pi*x/a)*sin(n*Pi*y/a)/a, n = 1 .. infinity)

(39)

 

Example 24: A reduced Helmholtz PDE in a square of side Pi. Previously, pdsolve returned a series starting at n = 0, even though the limit of the n = 0 term is 0.

pde__24 := diff(u(x, y), x, x)+diff(u(x, y), y, y)-k*u(x, y) = 0

iv__24 := u(0, y) = 1, u(Pi, y) = 0, u(x, 0) = 0, u(x, Pi) = 0

`assuming`([pdsolve([pde__24, iv__24], u(x, y))], [0 < k])

u(x, y) = Sum(-2*sin(n*y)*(-1+(-1)^n)*(exp(-(-2*Pi+x)*(n^2+k)^(1/2))-exp((n^2+k)^(1/2)*x))/((exp(2*(n^2+k)^(1/2)*Pi)-1)*Pi*n), n = 1 .. infinity)

(40)
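The statement about the n = 0 term can be checked directly; this is only a sketch, and the assumption on k mirrors the one used above.

# The summand of (40); its limit as n -> 0 is expected to be 0
term__24 := -2*sin(n*y)*(-1 + (-1)^n)*(exp(-(-2*Pi + x)*sqrt(n^2 + k)) - exp(sqrt(n^2 + k)*x))/((exp(2*sqrt(n^2 + k)*Pi) - 1)*Pi*n):
limit(term__24, n = 0) assuming 0 < k;  # expected: 0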

 

10. Linear differential operator: more solutions are now successfully computed

 

 

Example 25:

pde__25 := diff(w(x1, x2, x3, t), t) = diff(w(x1, x2, x3, t), x1, x1)+diff(w(x1, x2, x3, t), x2, x2)+diff(w(x1, x2, x3, t), x3, x3)

iv__25 := w(x1, x2, x3, 1) = exp(a)*x1^2+x2*x3

pdsolve([pde__25, iv__25])

w(x1, x2, x3, t) = (x1^2+2*t-2)*exp(a)+x2*x3

(41)
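The result (41) can also be understood by hand: for initial data polynomial in the space variables, the formal exponential of the Laplacian terminates after finitely many terms. The computation below is only a sketch of that idea, not pdsolve's internal code; IC and Lap are names introduced here.

# exp((t-1)*Laplacian) applied to the polynomial initial data of Example 25
IC  := exp(a)*x1^2 + x2*x3:
Lap := F -> diff(F, x1, x1) + diff(F, x2, x2) + diff(F, x3, x3):
IC + (t - 1)*Lap(IC) + (t - 1)^2/2*Lap(Lap(IC));
# expected: exp(a)*x1^2 + 2*(t - 1)*exp(a) + x2*x3, which is solution (41)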

 

Example 26:

pde__26 := diff(w(x1, x2, x3, t), t)-(D[1, 2](w))(x1, x2, x3, t)-(D[1, 3](w))(x1, x2, x3, t)-(D[3, 3](w))(x1, x2, x3, t)+(D[2, 3](w))(x1, x2, x3, t) = 0

iv__26 := w(x1, x2, x3, a) = exp(x1)+x2-3*x3

pdsolve([pde__26, iv__26])

w(x1, x2, x3, t) = exp(x1)+x2-3*x3

(42)

 

Example 27:

pde__27 := diff(w(x1, x2, x3, t), t, t) = (D[1, 2](w))(x1, x2, x3, t)+(D[1, 3](w))(x1, x2, x3, t)+(D[3, 3](w))(x1, x2, x3, t)-(D[2, 3](w))(x1, x2, x3, t)

iv__27 := w(x1, x2, x3, a) = x1^3*x2^2+x3, (D[4](w))(x1, x2, x3, a) = -x2*x3+x1

pdsolve([pde__27, iv__27], w(x1, x2, x3, t))

w(x1, x2, x3, t) = x1^3*x2^2+3*x2*(-t+a)^2*x1^2+(1/2)*(-t+a)*(a^3-3*a^2*t+3*a*t^2-t^3-2)*x1-(1/6)*a^3+(1/2)*a^2*t+(1/6)*(-3*t^2+6*x2*x3)*a+(1/6)*t^3-t*x2*x3+x3

(43)

 

 

11. More problems in 3 variables are now solved

 

 

Example 28: A Schrödinger-type PDE in two space dimensions, where `&hbar;` is Planck's constant divided by 2*Pi.

pde__28 := I*`&hbar;`*(diff(f(x, y, t), t)) = -`&hbar;`^2*(diff(f(x, y, t), x, x)+diff(f(x, y, t), y, y))/(2*m)

iv__28 := f(x, y, 0) = sqrt(2)*(sin(2*Pi*x)*sin(Pi*y)+sin(Pi*x)*sin(3*Pi*y)), f(0, y, t) = 0, f(1, y, t) = 0, f(x, 1, t) = 0, f(x, 0, t) = 0

pdsolve([pde__28, iv__28], f(x, y, t))

f(x, y, t) = 2^(1/2)*sin(Pi*x)*(2*exp(-((5/2)*I)*`&hbar;`*t*Pi^2/m)*cos(Pi*x)*sin(Pi*y)+exp(-(5*I)*`&hbar;`*t*Pi^2/m)*sin(3*Pi*y))

(44)
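A quick consistency check of (44) (a sketch): at t = 0 it must reproduce the initial profile, which follows from sin(2*Pi*x) = 2*sin(Pi*x)*cos(Pi*x).

# Check that (44) reduces to the initial condition at t = 0
sol__28 := 2^(1/2)*sin(Pi*x)*(2*exp(-((5/2)*I)*`&hbar;`*t*Pi^2/m)*cos(Pi*x)*sin(Pi*y)+exp(-(5*I)*`&hbar;`*t*Pi^2/m)*sin(3*Pi*y)):
simplify(eval(sol__28, t = 0) - sqrt(2)*(sin(2*Pi*x)*sin(Pi*y) + sin(Pi*x)*sin(3*Pi*y)));
# expected: 0 (expand can be used instead of simplify if needed)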

 

Example 29: This problem represents the temperature distribution in a thin rectangular plate whose lateral surfaces are insulated and which loses heat by convection along the boundary x = 1 into a surrounding medium at temperature 0 (Articolo, example 6.6.3):

pde__29 := diff(u(x, y, t), t) = 1/50*(diff(u(x, y, t), x, x)+diff(u(x, y, t), y, y))

iv__29 := (D[1](u))(0, y, t) = 0, (D[1](u))(1, y, t)+u(1, y, t) = 0, u(x, 0, t) = 0, u(x, 1, t) = 0, u(x, y, 0) = (1-(1/3)*x^2)*y*(1-y)

`assuming`([pdsolve([pde__29, iv__29], u(x, y, t))], [0 <= x, x <= 1, 0 <= y, y <= 1])

u(x, y, t) = `casesplit/ans`(Sum(Sum((32/3)*exp(-(1/50)*t*(Pi^2*n^2+lambda[n1]^2))*(-1+(-1)^n)*cos(lambda[n1]*x)*sin(n*Pi*y)*(-lambda[n1]^2*sin(lambda[n1])+lambda[n1]*cos(lambda[n1])-sin(lambda[n1]))/(Pi^3*n^3*lambda[n1]^2*(sin(2*lambda[n1])+2*lambda[n1])), n1 = 0 .. infinity), n = 1 .. infinity), {And(tan(lambda[n1])*lambda[n1]-1 = 0, 0 < lambda[n1])})

(45)

 

Example 30: Articolo's Exercise 7.15, with six boundary/initial conditions, two for each variable:

pde__30 := diff(u(x, y, t), t, t) = 1/4*(diff(u(x, y, t), x, x)+diff(u(x, y, t), y, y))-(1/10)*(diff(u(x, y, t), t))

iv__30 := (D[1](u))(0, y, t) = 0, (D[1](u))(1, y, t)+u(1, y, t) = 0, (D[2](u))(x, 0, t)-u(x, 0, t) = 0, (D[2](u))(x, 1, t) = 0, u(x, y, 0) = (1-(1/3)*x^2)*(1-(1/3)*(y-1)^2), (D[3](u))(x, y, 0) = 0

 

This problem is tricky: with three independent variables there are two eigenvalues (the separation constants that appear when separating variables by product) in the Sturm-Liouville problem. After solving the separated system and the equations for the eigenvalues, however, the second eigenvalue turns out to be equal to the first one and, in addition, cannot be expressed in terms of known functions, because the equation it satisfies cannot be inverted.

 

pdsolve([pde__30, iv__30])

u(x, y, t) = `casesplit/ans`(Sum((1/6)*(lambda[n]^2*sin(lambda[n])-lambda[n]*cos(lambda[n])+sin(lambda[n]))*cos(lambda[n]*x)*(exp((1/10)*t*(-200*lambda[n]^2+1)^(1/2))*(-200*lambda[n]^2+1)^(1/2)+exp((1/10)*t*(-200*lambda[n]^2+1)^(1/2))+(-200*lambda[n]^2+1)^(1/2)-1)*exp(-(1/20)*t*((-200*lambda[n]^2+1)^(1/2)+1))*(cos(lambda[n]*y)*lambda[n]+sin(lambda[n]*y))*(6*lambda[n]^2*cos(lambda[n])^2+cos(lambda[n])^2-5*lambda[n]^2-1)/((-200*lambda[n]^2+1)^(1/2)*(-cos(lambda[n])^4+(lambda[n]*sin(lambda[n])-1)*cos(lambda[n])^3+(lambda[n]^2+lambda[n]*sin(lambda[n])+1)*cos(lambda[n])^2+(lambda[n]^2+2*lambda[n]*sin(lambda[n])+1)*cos(lambda[n])+lambda[n]*(lambda[n]+sin(lambda[n])))*lambda[n]^4*(cos(lambda[n])-1)), n = 0 .. infinity), {And(tan(lambda[n])*lambda[n]-1 = 0, 0 < lambda[n])})

(46)


 


 

Download PDE_and_BC_during_2018.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Yesterday, I accidentally discovered a nasty bug in a fairly simple example:

restart;
Expr:=a*sin(x)+b*cos(x);
maximize(Expr, x=0..2*Pi);
minimize(Expr, x=0..2*Pi);

I am sure the correct answers are  sqrt(a^2+b^2)  and  -sqrt(a^2+b^2)  for any real values  a  and  b. This is easy to prove in many ways. The simplest method requires no calculations and can be done in one's head: consider  Expr  as the scalar (dot) product of the two vectors  <a, b>  and  <sin(x), cos(x)>, the second of which is a unit vector. The maximum of this scalar product is then obviously reached when the vectors are codirectional, and it equals the length of the first vector, that is, sqrt(a^2+b^2).
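The same conclusion follows from ordinary calculus, which gives a quick symbolic cross-check with Expr as defined above (a sketch; the assumption b > 0 only fixes the branch of arctan, the case b < 0 is symmetric):

# Calculus cross-check: critical point of Expr and the values there
xc := solve(diff(Expr, x) = 0, x);                  # arctan(a/b)
simplify(eval(Expr, x = xc)) assuming b > 0;        # expected: sqrt(a^2 + b^2)
simplify(eval(Expr, x = xc + Pi)) assuming b > 0;   # expected: -sqrt(a^2 + b^2)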

Bugs in these commands have been reported by users before (search MaplePrimes for the keywords bug, maximize, minimize), but unfortunately they are still not fixed.

The attached worksheet develops a procedure for extrapolating boundary data from a square to its interior.  Specifically, let's consider the square [−1,1]×[−1,1] and the continuous functions u1, u2, u3, u4 defined on its edges. The procedure constructs a continuous function u(x,y) in the interior of the square which matches the boundary data.

The function u(x,y) is necessarily discontinuous at a corner if the boundary data of the edges meeting at that corner are inconsistent.  However, if the boundary data are consistent at all corners, then u(x,y) is continuous everywhere including the boundary.

Here is what the extrapolating function looks like:

proc (x, y) options operator, arrow; (1/2)*(1-y)*(u__1(x)-(1/2)*(1-x)*u__1(-1))+(1/2)*(x+1)*(u__2(y)-(1/2)*(1-y)*u__2(-1))+(1/2)*(y+1)*(u__3(x)-(1/2)*(x+1)*u__3(1))+(1/2)*(1-x)*(u__4(y)-(1/2)*(y+1)*u__4(1))+(1/4)*(u__1(-1)-u__4(-1))*(-x^2+1)*(1-y)/((x+1)^2+(y+1)^2)^(1/2)+(1/4)*(u__2(-1)-u__1(1))*(-y^2+1)*(x+1)/((y+1)^2+(1-x)^2)^(1/2)+(1/4)*(u__3(1)-u__2(1))*(-x^2+1)*(y+1)/((1-x)^2+(1-y)^2)^(1/2)+(1/4)*(u__4(1)-u__3(-1))*(-y^2+1)*(1-x)/((1-y)^2+(x+1)^2)^(1/2) end proc
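A minimal usage sketch, assuming the displayed procedure has been assigned to a name, say F, and that the four edge functions have been defined; the particular choices below are only illustrative. As can be read off the weights in the procedure, u__1 is the data on the edge y = -1, u__2 on x = 1, u__3 on y = 1 and u__4 on x = -1.

# Hypothetical usage of the extrapolating procedure (assumed assigned to F)
u__1 := x -> 1 - x^2:  u__2 := y -> 0:
u__3 := x -> x:        u__4 := y -> y^2:
plot3d(F(x, y), x = -1 .. 1, y = -1 .. 1);  # surface matching the edge data (up to corner mismatches)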

 

The worksheet Extrapolant.mw contains the details of the derivation, and an example.

Edit: The worksheet Extrapolant-ver2.mw presents a slightly different version of the previous result. Here the square's corners are treated symmetrically, leading to a more pleasing interpolating function.

PS: A challenge.  Extend the result to 3D, that is, construct a function u(x,y,z) on the cube [−1,1]×[−1,1]×[−1,1] which matches prescribed functions on the cube's six faces.

In this post an interesting geometric problem is solved: for an arbitrary convex polygon, find a straight line that divides both the area and the perimeter in half. The following results on this problem are well known:
1. For any convex polygon there is such a straight line.
2. For any convex polygon into which a circle can be inscribed, in particular for any triangle, the desired line must pass through the center of the inscribed circle.
3. For a triangle, the number of solutions can be 1, 2, or 3.
4. If the polygon has symmetry with respect to a point, then any straight line passing through this point is a solution.

The procedure called  InHalf  (the code below) symbolically solves this problem. The formal parameter of the procedure is the list of coordinates of the vertices of a convex polygon (the vertices can be passed either counterclockwise or clockwise). The procedure returns all solutions as a list of pairs of points lying on the perimeter of the polygon; each pair gives the endpoints of a segment that realizes the desired division.


 

restart;

InHalf:=proc(V::listlist)
local L, n, a, b, M, N, i, j, P, Q, L1, L2, Area, Area1, Area2, Perimeter, Perimeter1, Perimeter2, sol, m, k, Sol;
uses LinearAlgebra, ListTools;
L:=map(convert,[V[],V[1]],rational); n:=nops(L)-1;
a:=<(V[2]-V[1])[1],(V[2]-V[1])[2],0>; b:=<(V[n]-V[1])[1],(V[n]-V[1])[2],0>;
if is(CrossProduct(a,b)[3]<0) then L:=Reverse(L) fi;
M:=[seq([L[i],L[i+1]], i=1..n)]:
N:=0;
for i from 1 to n-1 do
for j from i+1 to n do
P:=map(t->t*(1-s),M[i,1])+map(t->t*s,M[i,2]); Q:=map(s->s*(1-t),M[j,1])+map(s->s*t,M[j,2]);
L1:=[P,L[i+1..j][],Q,P];
L2:=[Q,L[j+1..-1][],L[1..i][],P,Q];
Area:=L->(1/2)*add(L[k, 1]*L[k+1, 2]-L[k, 2]*L[k+1, 1], k = 1 .. nops(L)-1);
Area1:=Area(L1);
Area2:=Area(L2);
Perimeter:=L->add(sqrt((L[k,1]-L[k+1,1])^2+(L[k,2]-L[k+1,2])^2), k=1..nops(L)-2);
Perimeter1:=Perimeter(L1);
Perimeter2:=Perimeter(L2);
sol:=[solve({Area1=Area2,Perimeter1=Perimeter2,s>=0,s<1,t>=0,t<1}, {s,t}, explicit)] assuming real;
if sol<>[] then m:=nops(sol);
for k from 1 to m do
N:=N+1; if nops(sol[k])=2 then Sol[N]:=simplify(eval([P,Q],sol[k])) else Sol[N]:=simplify(eval([P,Q],s=t)) fi;
od; fi;
od; od;
Sol:=convert(Sol, list);
`if`(indets(Sol)={},Sol,op([Sol,t>=0 and t<1]));
end proc:  


Examples of use

# For the Pythagorean triangle with sides 3, 4, 5, we have a unique solution

L:=[[4,3],[4,0],[0,0]]:
P:=InHalf(L);
plots:-display(plot([L[],L[1]], color=green, thickness=3), plot(P,  color=red), scaling=constrained);

[[[8/5-(2/5)*6^(1/2), 6/5-(3/10)*6^(1/2)], [4, (1/2)*6^(1/2)]]]

 

 

# For an isosceles right triangle, there are 3 solutions. We see that all the cuts pass through the center of the inscribed circle

L:=[[0,0],[4,0],[4,4]]:
P:=InHalf(L);
r:=(4+4-4*sqrt(2))/2: a:=4-r: b:=r:
plots:-display(plot([L[],L[1]], color=green, thickness=3), plot(P,  color=red), plot([r*cos(t)+a,r*sin(t)+b, t=0..2*Pi], color=blue), scaling=constrained);

[[[2*2^(1/2), 0], [2*2^(1/2), 2*2^(1/2)]], [[4, 0], [2, 2]], [[4, -2*2^(1/2)+4], [-2*2^(1/2)+4, -2*2^(1/2)+4]]]

 


 

 

# There are 3 solutions for the quadrilateral below

L:=[[0,0],[4.5,0],[4,3],[0,2]]:
P:=InHalf(L);
plots:-display(plot([L[],L[1]], color=green, thickness=3), plot(P,  color=red), scaling=constrained);

[[[(1/44844)*6^(1/2)*(17^(1/2)-13/2)*37^(3/4)*(2*17^(1/2)+13)^(1/2)*((1836*37^(1/2)-6956)*17^(1/2)+7995*37^(1/2)-56425)^(1/2)-(1/4)*17^(1/2)-(1/8)*37^(1/2)+23/8, 0], [(6^(1/2)*37^(1/4)*((1836*37^(1/2)-6956)*17^(1/2)+7995*37^(1/2)-56425)^(1/2)*(2*17^(1/2)+13)^(1/2)+(-156*37^(1/2)+7770)*17^(1/2)-711*37^(1/2)+50505)/(1776*17^(1/2)+11544), (-6^(1/2)*37^(1/4)*((1836*37^(1/2)-6956)*17^(1/2)+7995*37^(1/2)-56425)^(1/2)*(2*17^(1/2)+13)^(1/2)+(156*37^(1/2)+222)*17^(1/2)+711*37^(1/2)+1443)/(296*17^(1/2)+1924)]], [[(1/90576)*17^(3/4)*(37^(1/2)+37)^(1/2)*(37^(1/2)-37)*((1461*37^(1/2)+29415)*17^(1/2)-986*37^(1/2)-149702)^(1/2)+(5/4)*17^(1/2)+(1/8)*37^(1/2)-27/8, 0], [(17^(1/4)*((1461*37^(1/2)+29415)*17^(1/2)-986*37^(1/2)-149702)^(1/2)*(37^(1/2)+37)^(1/2)+(37*17^(1/2)-51)*37^(1/2)+703*17^(1/2)-1887)/(17*37^(1/2)+629), (17^(1/4)*((1461*37^(1/2)+29415)*17^(1/2)-986*37^(1/2)-149702)^(1/2)*(37^(1/2)+37)^(1/2)+(37*17^(1/2)+85)*37^(1/2)+703*17^(1/2)+3145)/(68*37^(1/2)+2516)]], [[-(1/90576)*17^(3/4)*(37^(1/2)+37)^(1/2)*(37^(1/2)-37)*((1461*37^(1/2)+29415)*17^(1/2)-986*37^(1/2)-149702)^(1/2)+(5/4)*17^(1/2)+(1/8)*37^(1/2)-27/8, 0], [(-17^(1/4)*((1461*37^(1/2)+29415)*17^(1/2)-986*37^(1/2)-149702)^(1/2)*(37^(1/2)+37)^(1/2)+(37*17^(1/2)-51)*37^(1/2)+703*17^(1/2)-1887)/(17*37^(1/2)+629), (-17^(1/4)*((1461*37^(1/2)+29415)*17^(1/2)-986*37^(1/2)-149702)^(1/2)*(37^(1/2)+37)^(1/2)+(37*17^(1/2)+85)*37^(1/2)+703*17^(1/2)+3145)/(68*37^(1/2)+2516)]]]

 

 

# There are infinitely many solutions for a polygon with a center of symmetry. Any cut through the center solves the problem. The picture shows 2 solutions.

L:=[[1,0],[1+2*sqrt(3),2],[2*sqrt(3),sqrt(3)+2],[0,sqrt(3)]]:
P:=InHalf(L);
plots:-display(plot([L[],L[1]], color=green, thickness=3), plot(eval(P[1],t=1/3),  color=red), scaling=constrained);

[[[2*3^(1/2)*t+1, 2*t], [-2*3^(1/2)*(t-1), 3^(1/2)-2*t+2]], [[2*3^(1/2)-t+1, 3^(1/2)*t+2], [t, -3^(1/2)*(t-1)]]], 0 <= t and t < 1
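A quick symbolic check (a sketch): for the first family returned above, the midpoint of the cut does not depend on t and coincides with the polygon's center of symmetry, in agreement with property 4.

# Midpoint of the cut from the first family of solutions
P1 := [2*sqrt(3)*t + 1, 2*t]:
Q1 := [-2*sqrt(3)*(t - 1), sqrt(3) - 2*t + 2]:
simplify((P1 + Q1)/2);  # expected: [1/2 + sqrt(3), 1 + (1/2)*sqrt(3)], the center of symmetry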

 

 

 


 

Download In_Half.mw

The attached worksheet shows how to evaluate and graphically analyze an autonomous first-order nonlinear recurrence with two dependent variables and multiple symbolic parameters. 

This worksheet shows how a small module that simply encapsulates the given information of a problem combined with some use statements can greatly facilitate the organization of one's work, can encapsulate the setting of parameter values, and can allow one to work with symbolic parameters.

Edit: In the first version of this Post, I forgot to include the qualifier "autonomous".  The system being autonomous substantially simplifies its treatment.
 

Autonomous first-order nonlinear recurrences with parameters and multiple dependent variables

Author: Carl Love <carl.j.love@gmail.com> 20-Oct-2018

 

The techniques used in this worksheet can be applied to most autonomous first-order nonlinear recurrences with multiple dependent variables and parameters.

 

This worksheet shows how a small module that simply encapsulates the given information of a problem combined with some use statements

• 

can greatly facilitate the organization of one's work,

• 

can encapsulate the setting of parameter values,

• 

can allow one to work with symbolic parameters.

 

A Problem from MaplePrimes: A discrete Lotka-Volterra population model is applied to an isolated island with a population of predators (foxes), R, and prey (rabbits), K. [Note that R is the foxes, not the rabbits! Perhaps this problem statement originated in another language.] The change over one time period is given by

K[n+1]:= K[n]*(-b*R[n]+a+1);  R[n+1]:= R[n]*(b*e*K[n]-c+1),

where a, b, c, e are parameters of the model. In this problem we will use a= 0.15, b= 0.01, c= 0.02, e= 0.01, when numeric values are needed.

 

a) Show that there exists an equilibrium (values of K[n] and R[n] such that K[n+1] = K[n] and R[n+1] = R[n]).

 

b) Write Maple code that solves the recurrence numerically. Assume that if any population is less than 0.5 then it has gone extinct and set the value to 0. Check that your program is idempotent at the equilibrium.

 

restart:

We begin by collecting all the given information (except for specific numeric values) into a module. The ModuleApply lets the user set the numeric values later.

 

For all two-element vectors used in this worksheet, K is the first value and R is the second value.

KandR:= module()
local
   a, b, c, e, #parameters

   #procedure that lets user set parameter values:
   ModuleApply:= proc({
       a::algebraic:= KandR:-a, b::algebraic:= KandR:-b,
       c::algebraic:= KandR:-c, e::algebraic:= KandR:-e
   })
   local k;
      for k to _noptions do thismodule[lhs(_options[k])]:= rhs(_options[k]) od;
      return
   end proc,

   Extinct:= (x::realcons)-> `if`(x < 0.5, 0, x) #force small, insignificant values to 0
;
export
   #Procedure that does one symbolic iteration
   #(Note that this procedure uses Vector input and output.)
   iter_symb:= KR-> KR *~ <-b*KR[2]+a+1, b*e*KR[1]-c+1>, 

   #Such simple treatment as above is only possible for autonomous
   #recurrences.

  
   iter_num:= Extinct~@iter_symb #one numeric iteration
;
end module:

#The following expression is the discrete equivalent of the derivative (or gradient).
#It represents the change over one time period.
P:= <K,R>:  
OneStep:= KandR:-iter_symb(P) - P

Vector(2, {(1) = K*(-R*b+a+1)-K, (2) = R*(K*b*e-c+1)-R})

#An equilibrium occurs when the gradient is 0.
Eq:= <K__e, R__e>:
Eqs:= solve({seq(eval(OneStep=~ 0, [seq(P=~ Eq)]))}, [seq(Eq)]);

[[K__e = 0, R__e = 0], [K__e = c/(b*e), R__e = a/b]]

#We're only interested here in nonzero solutions.
EqSol:= remove(S-> 0 in rhs~(S), Eqs)[];

[K__e = c/(b*e), R__e = a/b]

#Set parameters:
KandR(a= 0.15, b= 0.01, c= 0.02, e= 0.01);

#Show idempotency at equilibrium:
use KandR in Eq0:= eval(Eq, EqSol); print(Eq0 = iter_num(Eq0)) end use:

(Vector(2, {(1) = 200.0000000, (2) = 15.00000000})) = (Vector(2, {(1) = 200.0000000, (2) = 15.00000000}))

#procedure that fills a Matrix with computed values of a 1st-order recurrence.
#(A more-efficient method than this can be used for linear recurrences.)
#This procedure has no dependence on the module.
Iterate:= proc(n::nonnegint, iter, init::Vector[column])
local M:= Matrix((n+1, numelems(init)), init^+, datatype= hfloat), i;
   for i to n do M[i+1,..]:= iter(M[i,..]) od;
   M
end proc:

We want to see what happens if the initial conditions deviate slightly from the equilibrium. It turns out that any deviation (as long as the initial values are still nonnegative!) will cause the same effect. I simply chose the deviation <7,2> because it was the smallest for which the plot clearly shows what happens at the scale I wanted to use. With a finer scale, it is possible to see the "outward spiral" effect from even the tiniest deviation.

dev:= <7,2>:
use KandR in KR:= Iterate(1000, iter_num, Eq0 + dev) end use:

plot(
   [
       KR, #trajectory of population
       KR[[1,1],..], #1st point
       KR[-[1,1],..], #last point,
       <Eq0|Eq0>^+, #equilibrium
       #every 100th point (helps show time scale):
       KR[100*[$1..iquo(numelems(KR[..,1]), 100)-1], ..]
   ],
   #This group of options are all lists, each element of which corresponds
   #to one of the above components of the plot:
   style= [line, point$4],
   symbol= [solidcircle$4, soliddiamond],
   symbolsize= [18$4, 12],
   color= [black, green, red, brown, blue],
   thickness= [0$5],
   legend= [`pop.`, init, final, equilibrium, `100 periods`],

   #This group of options are lists, each element of which corresponds to one
   #coordinate axis (horizontal, then vertical).
   view= [0..max(KR[..,1]), 0..max(KR[..,2])],
   labels= [rabbits, foxes],
   labeldirections= [horizontal,vertical],
   size= [700,700], #measured in pixels

   #options applied to whole plot:
   labelfont= [TIMES, BOLDITALIC, 14],
   title= "Population of foxes and rabbits over time" "\n", titlefont= [TIMES,16],
   caption=
      "\n" "Choosing an initial point near the equilibrium causes"
      "\n" "outward spiraling divergence." "\n",
   gridlines= false
);
 

A fieldplot helps show what happens for any starting values. An arrow is drawn from each point of a 2-D grid; the magnitude and direction of each arrow show the discrete gradient (as a vector) at that point.

plots:-fieldplot(
   rtable_eval(OneStep),
   K= 0..max(KR[..,1]),  R= 0..max(KR[..,2]), grid= [16,16],

   #arrow-specific options:
   anchor= tail, fieldstrength= log, arrows= slim, color= "DarkGreen",

   #other options (same as any 2D plot):
   labels= [rabbits, foxes], labeldirections= [horizontal,vertical],
   labelfont= [TIMES, BOLDITALIC, 14],
   title= "One-step population changes from any point" "\n", titlefont= [TIMES,16],
   caption= "\n" "All trajectories spiral outward from the equilibrium." "\n",
   size= [700,700],
   gridlines= false
);

The above plot is computed only from the symbolic discrete gradient expression OneStep; it does not use the computed population values from the first plot. It only uses the maxima of those computed values to determine the length of the axes.

 

Conclusion: While this is interesting stuff mathematically, and makes for great plots, divergence from the equilibrium doesn't seem realistic to me.

 


 

Download FoxesAndRabbits.mw
