ecterrab

These are Posts that have been published by ecterrab


 

Solving PDEs with initial and boundary conditions:

Sturm-Liouville problem with RootOf eigenvalues

 

Computer algebra systems have traditionally failed to compute exact solutions for a linear PDE with initial/boundary conditions when the eigenvalues of the corresponding Sturm-Liouville problem cannot be solved exactly - that is, when they can at most be represented abstractly, using a RootOf construction.

 

This post then illustrates a new Maple development that tackles this kind of problem (work in collaboration with Katherina von Bülow), including testing and plotting the resulting exact solution. To reproduce the computation below in Maple 2018.1 you need to install the Maplesoft Physics Updates (version 134 or higher).

 

As an example, consider

pde := diff(u(x, t), t) = k*(diff(u(x, t), x, x)); iv := u(x, 0) = 1-(1/4)*x^3, eval(diff(u(x, t), x), x = 0) = 0, eval(diff(u(x, t), x), x = 1)+u(1, t) = 0

diff(u(x, t), t) = k*(diff(diff(u(x, t), x), x))

 

u(x, 0) = 1-(1/4)*x^3, eval(diff(u(x, t), x), {x = 0}) = 0, eval(diff(u(x, t), x), {x = 1})+u(1, t) = 0

(1)

This problem represents the temperature distribution in a thin rod whose lateral surface is insulated over the interval 0 < x < 1, with the left end of the rod insulated and the right end experiencing convection heat loss, as explained in George A. Articolo's Partial Differential Equations and Boundary Value Problems with Maple, example 3.6.4.

 

The formulation as a Sturm-Liouville problem starts by solving the PDE, separating the variables by product

pdsolve(pde, HINT = `*`)

PDESolStruc(u(x, t) = _F1(x)*_F2(t), [{diff(_F2(t), t) = k*_c[1]*_F2(t), diff(diff(_F1(x), x), x) = _c[1]*_F1(x)}])

(2)

Substituting this separation by product into the last two (out of three) initial/boundary conditions (iv) transforms the original pde and these conditions into an ODE system with initial conditions

{_F1(1)+(D(_F1))(1) = 0, diff(_F2(t), t) = -k*_c[1]*_F2(t), diff(_F1(x), x, x) = -_c[1]*_F1(x), (D(_F1))(0) = 0}

{_F1(1)+(D(_F1))(1) = 0, diff(_F2(t), t) = -k*_c[1]*_F2(t), diff(diff(_F1(x), x), x) = -_c[1]*_F1(x), (D(_F1))(0) = 0}

(3)

This is actually a problem in three unknowns, including _c[1], and the solution can be computed using dsolve

dsolve({_F1(1)+(D(_F1))(1) = 0, diff(_F2(t), t) = -k*_c[1]*_F2(t), diff(diff(_F1(x), x), x) = -_c[1]*_F1(x), (D(_F1))(0) = 0}, {_F1, _F2, _c[1]})

{_c[1] = 0, _F1(x) = 0, _F2(t) = _C5}, {_c[1] = _C2, _F1(x) = 0, _F2(t) = _C5/exp(k*_C2*t)}, {_c[1] = RootOf(tan(_Z)*(_Z^2)^(1/2)-1)^2, _F1(x) = _C4*cos((RootOf(tan(_Z)*(_Z^2)^(1/2)-1)^2)^(1/2)*x), _F2(t) = _C5/exp(k*RootOf(tan(_Z)*(_Z^2)^(1/2)-1)^2*t)}

(4)

We are interested in a solution of the form u(x, t) = _F1(x)*_F2(t) with _F1(x)*_F2(t) <> 0, so discard the first two solutions above and keep only the third one

solution_to_SL := ({_c[1] = 0, _F1(x) = 0, _F2(t) = _C5}, {_c[1] = _C2, _F1(x) = 0, _F2(t) = _C5/exp(k*_C2*t)}, {_c[1] = RootOf(tan(_Z)*(_Z^2)^(1/2)-1)^2, _F1(x) = _C4*cos((RootOf(tan(_Z)*(_Z^2)^(1/2)-1)^2)^(1/2)*x), _F2(t) = _C5/exp(k*RootOf(tan(_Z)*(_Z^2)^(1/2)-1)^2*t)})[3]

{_c[1] = RootOf(tan(_Z)*(_Z^2)^(1/2)-1)^2, _F1(x) = _C4*cos((RootOf(tan(_Z)*(_Z^2)^(1/2)-1)^2)^(1/2)*x), _F2(t) = _C5/exp(k*RootOf(tan(_Z)*(_Z^2)^(1/2)-1)^2*t)}

(5)

If we were able to express _c[1] in closed form, with a formula depending on an integer parameter identifying one of the roots of the expression, the rest of the method would be straightforward: the product u(x, t) = _F1(x)*_F2(t) is a solution that by construction satisfies the boundary conditions in (1), and we have one such solution for each value of _c[1] (for each root of the expression within the RootOf construction). The first condition in iv is then used to adjust the remaining constant (combining _C4 times _C5 into one) using orthogonality properties, and by linearly superimposing these different solutions we construct an infinite series solution. All this is explained in standard textbooks.
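As a side note not part of the original worksheet (a sketch of that standard orthogonality step, using the hypothetical names f and c[n]), the projection of the initial profile on each eigenfunction cos(lambda[n]*x) would read

# Sketch: Fourier-type coefficient for the eigenfunction cos(lambda[n]*x),
# with lambda[n] one root of tan(lambda)*lambda = 1
f := x -> 1 - x^3/4:                                   # initial profile from iv
c[n] := int(f(x)*cos(lambda[n]*x), x = 0 .. 1)/int(cos(lambda[n]*x)^2, x = 0 .. 1);
# each term of the series is then c[n]*exp(-k*lambda[n]^2*t)*cos(lambda[n]*x)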

 

The problem is what to do when _c[1] cannot be expressed in closed form, as in the example above. To understand the situation clearly, remove that RootOf and plot its contents:

RootOf(tan(_Z)*sqrt(_Z^2)-1) = lambda[n]

RootOf(tan(_Z)*(_Z^2)^(1/2)-1) = lambda[n]

(6)

DEtools[remove_RootOf](RootOf(tan(_Z)*(_Z^2)^(1/2)-1) = lambda[n])

tan(lambda[n])*(lambda[n]^2)^(1/2)-1 = 0

(7)

This equation has infinitely many roots. For instance, between -2*Pi and 2*Pi:

plot(lhs(tan(lambda[n])*(lambda[n]^2)^(1/2)-1 = 0), lambda[n] = -2*Pi .. 2*Pi)

 

A closed form expression representing these values of lambda[n] (also called the eigenvalues of the Sturm-Liouville problem) is necessary in the intermediate solving steps, and to express the solution to the original problem in closed form.

 

We cannot do magic to overcome this mathematical limitation, but Maple offers abstract representational alternatives: we can use, in all the intermediate computations, a RootOf construction with a label identifying each of the roots and, at the end, remove the RootOf construction, presenting something readable and understandable.
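For instance (a sketch, not part of the computation below), a RootOf carrying a numeric selector identifies one specific root of the same equation, and evalf then returns precisely that root:

# Sketch: RootOf with a numeric selector singles out one particular root
r1 := RootOf(tan(_Z)*_Z - 1, 0.8):    # the root near 0.8
evalf(r1);                            # ~ 0.8603335890, the first positive root
r2 := RootOf(tan(_Z)*_Z - 1, 3.5):    # the next positive root
evalf(r2);                            # ~ 3.4256184595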

 

This is the representation of the solution that we are proposing, whose automatic derivation from the given pde and iv is already implemented:

pdsolve([pde, iv])

u(x, t) = `casesplit/ans`(Sum(3*exp(-k*lambda[n]^2*t)*cos(lambda[n]*x)*((-lambda[n]^2+2)*cos(lambda[n])-2+(lambda[n]^3+2*lambda[n])*sin(lambda[n]))/(lambda[n]^3*(sin(2*lambda[n])+2*lambda[n])), n = 0 .. infinity), {And(tan(lambda[n])*lambda[n]-1 = 0, 0 <= lambda[n])})

(8)

So the solution is an expression restricted by equations, inequations and possibly subject to conditions. This is the same type of representation we use in pdsolve, PDEtools:-casesplit and the FunctionAdvisor.

 

Note that, since there are infinitely many roots to the left and to the right of the origin, we assumed for simplicity that lambda[n] >= 0, which suffices to construct the infinite series solution above. The initial value of the summation index n could be 0 or 1; it doesn't matter, since n itself does not appear in the summand, it only identifies one of the values of lambda solving tan(lambda[n])*lambda[n]-1 = 0. This is a very nice development.

 

So we can now compute and represent the solution algebraically even when the eigenvalue lambda[n] cannot be expressed in closed form. But how do you test such a solution? Or even plot it?

 

For now that needs some human intervention. Start with testing (part of what follows is in the plans for further development of pdetest). Separate the solution expressed in terms of lambda[n] from the equation defining it

solution := lhs(u(x, t) = `casesplit/ans`(Sum(3*exp(-k*lambda[n]^2*t)*cos(lambda[n]*x)*((-lambda[n]^2+2)*cos(lambda[n])-2+(lambda[n]^3+2*lambda[n])*sin(lambda[n]))/(lambda[n]^3*(sin(2*lambda[n])+2*lambda[n])), n = 0 .. infinity), {And(tan(lambda[n])*lambda[n]-1 = 0, 0 <= lambda[n])})) = op(1, rhs(u(x, t) = `casesplit/ans`(Sum(3*exp(-k*lambda[n]^2*t)*cos(lambda[n]*x)*((-lambda[n]^2+2)*cos(lambda[n])-2+(lambda[n]^3+2*lambda[n])*sin(lambda[n]))/(lambda[n]^3*(sin(2*lambda[n])+2*lambda[n])), n = 0 .. infinity), {And(tan(lambda[n])*lambda[n]-1 = 0, 0 <= lambda[n])})))

u(x, t) = Sum(3*exp(-k*lambda[n]^2*t)*cos(lambda[n]*x)*((-lambda[n]^2+2)*cos(lambda[n])-2+(lambda[n]^3+2*lambda[n])*sin(lambda[n]))/(lambda[n]^3*(sin(2*lambda[n])+2*lambda[n])), n = 0 .. infinity)

(9)

op([2, 2, 1, 1], u(x, t) = `casesplit/ans`(Sum(3*exp(-k*lambda[n]^2*t)*cos(lambda[n]*x)*((-lambda[n]^2+2)*cos(lambda[n])-2+(lambda[n]^3+2*lambda[n])*sin(lambda[n]))/(lambda[n]^3*(sin(2*lambda[n])+2*lambda[n])), n = 0 .. infinity), {And(tan(lambda[n])*lambda[n]-1 = 0, 0 <= lambda[n])}))

tan(lambda[n])*lambda[n]-1 = 0

(10)

By inspection, solution contains sin(lambda[n]) and cos(lambda[n]), not tan(lambda[n]), so rewrite (10), and on the way isolate cos(lambda[n])

condition := isolate(convert(tan(lambda[n])*lambda[n]-1 = 0, sincos), cos(lambda[n]))

cos(lambda[n]) = sin(lambda[n])*lambda[n]

(11)

Start by pdetesting

pdetest(solution, [pde, iv])

[0, Sum(3*cos(lambda[n]*x)*((I*lambda[n]^3+(2*I)*lambda[n]-lambda[n]^2+2)*exp(-I*lambda[n])-4+(-I*lambda[n]^3-(2*I)*lambda[n]-lambda[n]^2+2)*exp(I*lambda[n]))/(2*lambda[n]^3*sin(2*lambda[n])+4*lambda[n]^4), n = 0 .. infinity)-1+(1/4)*x^3, 0, Sum(-3*((I*lambda[n]^3+(2*I)*lambda[n]-lambda[n]^2+2)*exp(-I*lambda[n])-4+(-I*lambda[n]^3-(2*I)*lambda[n]-lambda[n]^2+2)*exp(I*lambda[n]))*sin(lambda[n])*exp(-k*lambda[n]^2*t)/(2*lambda[n]^2*sin(2*lambda[n])+4*lambda[n]^3), n = 0 .. infinity)+Sum(3*((I*lambda[n]^3+(2*I)*lambda[n]-lambda[n]^2+2)*exp(-I*lambda[n])-4+(-I*lambda[n]^3-(2*I)*lambda[n]-lambda[n]^2+2)*exp(I*lambda[n]))*cos(lambda[n])*exp(-k*lambda[n]^2*t)/(2*lambda[n]^3*sin(2*lambda[n])+4*lambda[n]^4), n = 0 .. infinity)]

(12)

A further manipulation, substituting condition and combining the sums, results in

simplify(subs(condition, combine([0, Sum(3*cos(lambda[n]*x)*((I*lambda[n]^3+(2*I)*lambda[n]-lambda[n]^2+2)*exp(-I*lambda[n])-4+(-I*lambda[n]^3-(2*I)*lambda[n]-lambda[n]^2+2)*exp(I*lambda[n]))/(2*lambda[n]^3*sin(2*lambda[n])+4*lambda[n]^4), n = 0 .. infinity)-1+(1/4)*x^3, 0, Sum(-3*((I*lambda[n]^3+(2*I)*lambda[n]-lambda[n]^2+2)*exp(-I*lambda[n])-4+(-I*lambda[n]^3-(2*I)*lambda[n]-lambda[n]^2+2)*exp(I*lambda[n]))*sin(lambda[n])*exp(-k*lambda[n]^2*t)/(2*lambda[n]^2*sin(2*lambda[n])+4*lambda[n]^3), n = 0 .. infinity)+Sum(3*((I*lambda[n]^3+(2*I)*lambda[n]-lambda[n]^2+2)*exp(-I*lambda[n])-4+(-I*lambda[n]^3-(2*I)*lambda[n]-lambda[n]^2+2)*exp(I*lambda[n]))*cos(lambda[n])*exp(-k*lambda[n]^2*t)/(2*lambda[n]^3*sin(2*lambda[n])+4*lambda[n]^4), n = 0 .. infinity)], Sum)))

[0, Sum(3*cos(lambda[n]*x)*((I*lambda[n]^3+(2*I)*lambda[n]-lambda[n]^2+2)*exp(-I*lambda[n])-4+(-I*lambda[n]^3-(2*I)*lambda[n]-lambda[n]^2+2)*exp(I*lambda[n]))/(2*lambda[n]^3*sin(2*lambda[n])+4*lambda[n]^4), n = 0 .. infinity)-1+(1/4)*x^3, 0, 0]

(13)

So of the four elements (one PDE and three iv), three are satisfied without having to specify what lambda[n] is - it sufficed to say that cos(lambda[n]) = sin(lambda[n])*lambda[n], and this is the case in most of the examples we analyzed. You really don't need the exact closed form of lambda[n] in these examples.

For the one expression which remains to be proven equal to zero, there is no clear way to perform the sum and show that it equals 1-(1/4)*x^3 without further information on the value of lambda[n]. So this part needs to be tested using a plot.

 

We need to perform the summation using, say, the first 100 terms instead of infinitely many, and for that purpose we need the first 100 positive values of lambda[n]. These values can be obtained using fsolve. Increase Digits to get a better approximation:

Digits := 20

20

(14)

L := [fsolve(condition, lambda[n], 0 .. 10^10, maxsols = 100)]

[.86033358901937976248, 3.4256184594817281465, 6.4372981791719471204, 9.5293344053619636029, 12.645287223856643104, 15.771284874815882047, 18.902409956860024151, 22.036496727938565083, 25.172446326646664714, 28.309642854452012364, 31.447714637546233553, 34.586424215288923664, 37.725612827776501051, 40.865170330488067836, 44.005017920830842726, 47.145097736761031075, 50.285366337773650463, 53.425790477394666341, 56.566344279821518125, 59.707007305335457045, 62.847763194454453706, 65.988598698490388394, 69.129502973895260447, 72.270467060308959618, 75.411483488848152399, 78.552545984242931733, 81.693649235601687434, 84.834788718042289254, 87.975960552493213068, 91.117161394464748699, 94.258388345039861151, 97.399638879073768561, 100.54091078684231954, 103.68220212628958019, 106.82351118369472752, 109.96483644107604716, 113.10617654902325890, 116.24753030393208866, 119.38889662883081820, 122.53027455715460386, 125.67166321895208822, 128.81306182910932798, 131.95446967725504430, 135.09588611907366660, 138.23731056880233903, 141.37874249272782439, 144.52018140353123171, 147.66162685535436266, 150.80307843948249426, 153.94453578055557821, 157.08599853323391302, 160.22746637925593824, 163.36893902483538786, 166.51041619835300015, 169.65189764830461611, 172.79338314147304750, 175.93487246129575380, 179.07636540640428965, 182.21786178931479917, 185.35936143525164220, 188.50086418108862526, 191.64236987439434602, 194.78387837256989967, 197.92538954206868814, 201.06690325768935430, 204.20841940193396857, 207.34993786442454883, 210.49145854137182078, 213.63298133509084236, 216.77450615355873900, 219.91603291001033966, 223.05756152256797778, 226.19909191390213620, 229.34062401091997921, 232.48215774447913415, 235.62369304912436649, 238.76522986284504017, 241.90676812685147396, 245.04830778536849850, 248.18984878544468993, 251.33139107677590895, 254.47293461154190896, 257.61447934425489811, 260.75602523161904694, 263.89757223240002997, 267.03912030730377471, 270.18066941886366904, 273.32221953133554631, 276.46377061059982966, 279.60532262407027259, 282.74687554060878265, 285.88842933044586060, 289.02998396510622761, 292.17153941733925030, 295.31309566105380609, 298.45465267125726198, 301.59621042399826673, 304.73776889631308125, 307.87932806617519465, 311.02088791244799345]

(15)

For convenience, construct a procedure, as a function of n, that returns each of these values

Lambda := proc (n) options operator, arrow; if n::nonnegint then L[n+1] else 'procname(args)' end if end proc

proc (n) options operator, arrow; if n::nonnegint then L[n+1] else 'procname(args)' end if end proc

(16)

Replace lambda[n] by Lambda(n), infinity by 99, and the expression to be plotted is

remain := subs(lambda[n] = Lambda(n), infinity = 99, [0, Sum(3*cos(lambda[n]*x)*((I*lambda[n]^3+(2*I)*lambda[n]-lambda[n]^2+2)*exp(-I*lambda[n])-4+(-I*lambda[n]^3-(2*I)*lambda[n]-lambda[n]^2+2)*exp(I*lambda[n]))/(2*lambda[n]^3*sin(2*lambda[n])+4*lambda[n]^4), n = 0 .. infinity)-1+(1/4)*x^3, 0, 0][2])

Sum(3*cos(Lambda(n)*x)*((I*Lambda(n)^3+(2*I)*Lambda(n)-Lambda(n)^2+2)*exp(-I*Lambda(n))-4+(-I*Lambda(n)^3-(2*I)*Lambda(n)-Lambda(n)^2+2)*exp(I*Lambda(n)))/(2*Lambda(n)^3*sin(2*Lambda(n))+4*Lambda(n)^4), n = 0 .. 99)-1+(1/4)*x^3

(17)

Perform the sum and plot. The plotting range is the one present in the iv (1): x goes from 0 to 1

R := eval(remain, Sum = add)

plot(R, x = 0 .. 1)

 

This result is very good. With the first 100 terms (constructed using the first 100 roots of tan(lambda[n])*lambda[n]-1 = 0) we verified that this last of the four testing conditions is sufficiently close to zero, and this concludes the verification.
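To complement the plot, a quick numeric check (a sketch, not in the original worksheet) of how far that truncated remainder strays from zero over 0 <= x <= 1:

# Sketch: sample the truncated remainder R on a grid and report the largest deviation
max(seq(abs(evalf(eval(R, x = i/100.0))), i = 0 .. 100));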

To plot the solution, the idea is the same: in (9), give a value to k - say k = 1/5 - then construct the sum of the first 100 terms as done in (17), but this time using (9) instead of (13)

 

subs(k = 1/5, lambda[n] = Lambda(n), infinity = 99, u(x, t) = Sum(3*exp(-k*lambda[n]^2*t)*cos(lambda[n]*x)*((-lambda[n]^2+2)*cos(lambda[n])-2+(lambda[n]^3+2*lambda[n])*sin(lambda[n]))/(lambda[n]^3*(sin(2*lambda[n])+2*lambda[n])), n = 0 .. infinity))

u(x, t) = Sum(3*exp(-(1/5)*Lambda(n)^2*t)*cos(Lambda(n)*x)*((-Lambda(n)^2+2)*cos(Lambda(n))-2+(Lambda(n)^3+2*Lambda(n))*sin(Lambda(n)))/(Lambda(n)^3*(sin(2*Lambda(n))+2*Lambda(n))), n = 0 .. 99)

(18)

S := eval(u(x, t) = Sum(3*exp(-(1/5)*Lambda(n)^2*t)*cos(Lambda(n)*x)*((-Lambda(n)^2+2)*cos(Lambda(n))-2+(Lambda(n)^3+2*Lambda(n))*sin(Lambda(n)))/(Lambda(n)^3*(sin(2*Lambda(n))+2*Lambda(n))), n = 0 .. 99), Sum = add)

plot3d(rhs(S), x = 0 .. 1, t = 0 .. 1)

 

Compare with the numerical solution one could obtain using pdsolve with the numeric option. So substitute k = 1/5 and, following the corresponding help page, rewrite the initial/boundary conditions using D notation; this is the syntax and corresponding plot

pds := pdsolve(subs(k = 1/5, pde), convert({iv}, D), numeric, time = t, range = 0 .. 1)

_m5021120320

(20)

pds:-plot3d(t = 0 .. 1, x = 0 .. 1)
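One can also compare the two answers pointwise (a sketch; it assumes S and pds as computed above and the value export of the module returned by pdsolve with the numeric option):

# Sketch: compare the truncated series S with the numerical solution at x = 1/2, t = 1/2
evalf(eval(rhs(S), {x = 1/2, t = 1/2}));   # value from the 100-term series
pu := pds:-value(t = 1/2):                 # numerical solution as a procedure of x
pu(1/2);                                   # expected to return [x = .5, t = .5, u(x, t) = (numerical value)]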

 



 

Download PDE_and_BC_Example_with_RootOf_eigenvalues.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

The first update to the Maple 2018 Physics, Differential Equations and Mathematical Functions packages is available. As has been the case since 2013, this update contains fixes, enhancements to existing functionality, and new developments in the three areas. 

The webpage for these updates will continue to be the Maplesoft R&D Physics webpage. Starting with Maple 2018, however, this update is also available from the MapleCloud.

To install the update: open Maple and click the Cloud icon (upper-right corner), select "Packages" and search for "Physics Updates". Then, in the corresponding "Actions" column, click the third icon (install pop-up).

NOTE May/1: the "Updates" icon of the MapleCloud toolbar (that opens when you click the upper-right icon within a Maple document / worksheet), now works fine, after having installed the Physics Updates version 32 or higher.

These first updates include:

  • New Physics functionality regarding Tensor Products of Quantum States and Coherent States;
  • Updates to pdsolve regarding PDE & Boundary Conditions (exact solutions);
  • A change in notation: d_(x), the differential of a coordinate in the Physics package, is now displayed as shown in this Mapleprimes post.


Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

This is about the recent implementation of tensor products of quantum state spaces in the Physics package, in connection with an exchange with the Physics of Information Lab of the University of Waterloo. As usual this development is available to everybody from the Maplesoft R&D Physics webpage. This is the last update for Maple 2017. The updates for Maple 2018, starting with this same material, will begin being distributed through the MapleCloud next week.

Tensor Product of Quantum State Spaces

 

Basic ideas and design

 

 

Suppose A and B are quantum operators and Ket(A, n), Ket(B, m) are, respectively, their eigenkets. The following has worked since the introduction of the Physics package in Maple

with(Physics)

Setup(op = {A, B})

`* Partial match of  'op' against keyword 'quantumoperators'`

 

[quantumoperators = {A, B}]

(1)

A*Ket(A, alpha) = A.Ket(A, alpha)

Physics:-`*`(A, Physics:-Ket(A, alpha)) = alpha*Physics:-Ket(A, alpha)

(2)

B*Ket(B, beta) = B.Ket(B, beta)

Physics:-`*`(B, Physics:-Ket(B, beta)) = beta*Physics:-Ket(B, beta)

(3)

where on the left-hand sides the product operator `*` is used as a sort of inert form (it has all the correct mathematical properties but does not perform the contraction) of the dot product operator `.`, used on the right-hand sides.

 

Suppose now that A and B act on different, disjointed, Hilbert spaces.

 

1) To represent that, a new keyword in Setup is introduced, to indicate which spaces are disjointed, say as in disjointedhilbertspaces = {A, B}.  We want this notation to pop up at some point as {ℋ[A], ℋ[B]}, where the indexation indicates all the operators acting on that Hilbert space. The disjointedspaces keyword has for synonyms disjointedhilbertspaces and hilbertspaces. The display ℋ[A] is not yet implemented.

 

NOTE: noncommutative quantum operators acting on disjointed spaces commute between themselves, so after setting - for instance - disjointedspaces = {A, B}, automatically A and B become quantum operators satisfying (see comment (ii) on page 156 of ref. [1])

 

"[A,B][-]=0"

 

2) Products of Kets and Bras (KK, BB, KB and BK) where K and B belong to disjointed spaces are understood as tensor products satisfying, for instance with disjointed spaces A and B (see footnote on page 154 of ref. [1]),

 

Ket(A, alpha) ⊗ Ket(B, beta) = Ket(B, beta) ⊗ Ket(A, alpha)

 

Bra(A, alpha) ⊗ Ket(B, beta) = Ket(B, beta) ⊗ Bra(A, alpha)

 

while of course

Bra(A, alpha)*Ket(A, alpha) <> Ket(A, alpha)*Bra(A, alpha)

 

Details

   

 

3) All the operators of one disjointed space act transparently over operators, Bras and Kets of the other disjointed spaces, for example

 

A*Ket(B, n) = A*Ket(B, n)

and the same for the Dagger of this equation, that is

Bra(B, n)*Dagger(A) = Bra(B, n)*Dagger(A)

 

And this happens automatically. Hence, when we write the left-hand sides and press enter, they are automatically rewritten (returned) as the right-hand sides.

 

Note that for the product of an operator times a Bra or a Ket we are not using the notation that expresses the product with the symbol ⊗.

 

Regarding the display of Bras and Kets and their tensor products, two enhancements are happening:

 

• 

A new Setup option hideketlabel makes all the labels in Kets and Bras hidden when displaying Kets, Bras and Brackets, with the indices presented one level up, as if they were a sequence of labels, so that

Ket(A, m, n, l)

is displayed without its label A, as

|m, n, l>

 

  

This is the notation used more frequently when working in quantum information. This hideketlabel option is already implemented; enter Setup(hideketlabel = true)

• 

Tensor products formed with operators, or Bras and Kets, that belong to disjointed spaces (set as such using Setup) are displayed with the symbol ⊗ in between, as in Ket(A, n) ⊗ Ket(B, n) instead of Ket(A, n)*Ket(B, n), and A ⊗ B instead of A*B. A small usage sketch follows below.
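A minimal usage sketch of the design described above (hypothetical session, using the disjointedspaces keyword syntax of this development; the comments state the expected behavior rather than reproduced output):

# Sketch: two operators acting on disjointed Hilbert spaces
with(Physics):
Setup(op = {A, B}, disjointedspaces = {A, B}):
Commutator(A, B);              # expected: 0, operators acting on disjointed spaces commute
Ket(A, alpha) * Ket(B, beta);  # expected: displayed as a tensor product, with ⊗ in between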

Tensor product notation and the hideketlabel option

   

The implementation of tensor products using `*` and `.`

   

Basic exercising with the new functionality

   

Related functionality already in place before these changes

   

Reference

 

[1] Cohen-Tannoudji, Diu, Laloë, Quantum Mechanics, Chapter 2, section F.

[2] Griffiths Robert B., Hilbert Space Quantum Mechanics, Quantum Computation and Quantum Information Theory Course, Physics Department, Carnegie Mellon University, 2014.

See also

   

 


 

Download TensorProductDesign.mw
 

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

 

Automatic handling of collision of tensor indices in products

 

 

The design of products of tensorial expressions that have contracted indices has been enhanced. The idea: repeated indices in certain subexpressions are actually dummies. So suppose T[a, b] and B[b] are tensors; then in T[trace] = T[a, `~a`], a is just a dummy, and therefore T[a, `~a`]*B[a] = T[b, `~b`]*B[a] is a well defined object. The new design automatically maps input like T[a, `~a`]*B[a] into T[b, `~b`]*B[a]. This significantly simplifies the manipulation of indices when working with products of tensorial expressions: each tensorial expression being multiplied has its repeated indices automatically transformed into different ones when they would collide with the free or repeated indices of the other expressions being multiplied.

 

This functionality is available within the Physics update distributed at the Maplesoft R&D Physics webpage (except for what you see under Preview, which makes use of a new kernel feature of the Maple version under development).

 

restart

with(Physics); Setup(spacetimeindices = lowercaselatin, quiet)

[spacetimeindices = lowercaselatin]

(1)

Define(T[a, b], B[b])

`Defined objects with tensor properties`

 

{B[b], Physics:-Dgamma[a], Physics:-Psigma[a], T[a, b], Physics:-d_[a], Physics:-g_[a, b], Physics:-KroneckerDelta[a, b], Physics:-LeviCivita[a, b, c, d]}

(2)

This shows the automatic handling of collision of indices

T[a, a]*B[a]

T[b, `~b`]*B[a]

(3)

T[a, a]^2

T[a, `~a`]*T[b, `~b`]

(4)


Preview: only in the upcoming version under development

 

Consider now the case of three tensors

Define(A[a], C[a])

`Defined objects with tensor properties`

 

{A[a], B[b], C[a], Physics:-Dgamma[a], Physics:-Psigma[a], T[a, b], Physics:-d_[a], Physics:-g_[a, b], Physics:-KroneckerDelta[a, b], Physics:-LeviCivita[a, b, c, d]}

(5)

A[a]*B[a]*C[a]

A[a]*B[a]*C[a]

(6)

The product above indeed has the index a repeated more than once, therefore none of its occurrences got automatically transformed into contravariant in the output, and Check detects the problem, interrupting with an error message

Check(A[a]*B[a]*C[a])

Error, (in Physics:-Check) wrong use of the summation rule for repeated indices: `a repeated 3 times`, in A[a]*B[a]*C[a]

 

 

However, it is now also possible to indicate, using parentheses, that the product of two of these tensors actually forms a subexpression, so that the following two tensorial expressions are well defined, and the colliding dummy is automatically replaced, making that explicit

(A[a]*B[a])*C[a]

A[b]*B[`~b`]*C[a]

(7)

A[a]*(B[a]*C[a])

A[a]*B[b]*C[`~b`]

(8)

 

 

This change in design makes the use of indices concretely simpler in that it eliminates the need for manually replacing dummies. For example, consider the tensorial expression for the angular momentum in terms of the coordinates and momentum vectors, in 3 dimensions

 

Setup(coordinates = cartesian, dimension = 3, metric = euclidean, quiet)

[coordinatesystems = {X}, dimension = 3, metric = {(1, 1) = 1, (2, 2) = 1, (3, 3) = 1}]

(9)

Define L[j], p[k] respectively representing angular and linear momentum

Define(L[j], p[k])

`Defined objects with tensor properties`

 

{Physics:-Dgamma[a], L[j], Physics:-Psigma[a], Physics:-d_[a], Physics:-g_[a, b], p[k], Physics:-KroneckerDelta[a, b], Physics:-LeviCivita[a, b, c], Physics:-SpaceTimeVector[a](X)}

(10)

Introduce the tensorial expression for L[a]

L[a] = LeviCivita[a, b, c]*X[b]*p[c]

L[a] = Physics:-LeviCivita[a, b, c]*Physics:-SpaceTimeVector[b](X)*p[c]

(11)

The left-hand side has one free index, a, while the right-hand side has two dummy indices b and c

Check(L[a] = Physics[LeviCivita][a, b, c]*Physics[SpaceTimeVector][b](X)*p[c], all)

The repeated indices per term are: [{...}, {...}, ...]; the free indices are: {...}

 

([{}], {a}) = ([{b, c}], {a})

(12)

If we want to compute ||L||^2 = L[a]^2, we can now take the square of (11) directly, and the dummy indices on the right-hand side are automatically handled; there is now no need to manually substitute the repeated indices to avoid their collision

(L[a] = Physics[LeviCivita][a, b, c]*Physics[SpaceTimeVector][b](X)*p[c])^2

L[a]^2 = Physics:-LeviCivita[a, b, c]*Physics:-SpaceTimeVector[b](X)*p[c]*Physics:-LeviCivita[a, d, e]*Physics:-SpaceTimeVector[d](X)*p[e]

(13)

The repeated indices on the right-hand side are now a, b, c, d, e

Check(L[a]^2 = Physics[LeviCivita][a, b, c]*Physics[SpaceTimeVector][b](X)*p[c]*Physics[LeviCivita][a, d, e]*Physics[SpaceTimeVector][d](X)*p[e], all)

The repeated indices per term are: [{...}, {...}, ...]; the free indices are: {...}

 

([{a}], {}) = ([{a, b, c, d, e}], {})

(14)
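As a follow-up not in the original worksheet (a sketch), Simplify should contract the product of the two LeviCivita tensors in (13), reproducing the familiar identity ||L||^2 = X^2*p^2 - (X.p)^2 for commuting components:

# Sketch: contract the LeviCivita product in the square of (11) using Simplify
Simplify((L[a] = LeviCivita[a, b, c]*X[b]*p[c])^2);
# expected, up to relabelling of the dummies:
# L[a]^2 = X[b]*X[`~b`]*p[c]*p[`~c`] - X[b]*p[`~b`]*X[c]*p[`~c`]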



 

Download AutomaticHandlingCollisionOfTensorIndices.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

 

Minimize the number of tensor components according to the tensor's symmetries
(and relabel, redefine or count the number of independent tensor components)

 

 

The nice development described below is work in collaboration with Pascal Szriftgiser from Laboratoire PhLAM, Université Lille 1, France, and is used in the Mapleprimes post Magnetic traps in cold-atom physics.

 

A new keyword in Define and Setup, minimizetensorcomponents, allows for automatically minimizing the number of tensor components taking into account the tensor symmetries. For example, if a tensor with two indices in a 4D spacetime is defined as antisymmetric using Define with this new keyword, the number of different tensor components will be exactly 6, and the elements of the diagonal are automatically set equal to 0. After setting this keyword to true with Setup, all subsequent definitions of tensors automatically minimize the number of components, while using this keyword with Define makes this minimization happen only for the tensors being defined in that call to Define.

 

Related to this new functionality, 4 new Library routines were added: MinimizeTensorComponents, NumberOfIndependentTensorComponents, RelabelTensorComponents and RedefineTensorComponent

 

Example:

restart; with(Physics)

 

Define an antisymmetric tensor with two indices

Define(F[mu, nu], antisymmetric)

`Defined objects with tensor properties`

 

{Physics:-Dgamma[mu], F[mu, nu], Physics:-Psigma[mu], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-KroneckerDelta[mu, nu], Physics:-LeviCivita[alpha, beta, mu, nu]}

(1.1)

Although the system knows that F[mu, nu] is antisymmetric, you need to use Simplify to apply the (anti)symmetry

F[mu, nu]+F[nu, mu]

F[mu, nu]+F[nu, mu]

(1.2)

 

Simplify(F[mu, nu]+F[nu, mu])

0

(1.3)

so by default the components of F[mu, nu] do not automatically reflect the (anti)symmetry; likewise

F[1, 2]+F[2, 1]

F[1, 2]+F[2, 1]

(1.4)

Simplify(F[1, 2]+F[2, 1])

0

(1.5)

and computing the array form of F[mu, nu] we do not see the elements of the diagonal equal to zero, nor the lower-left triangle equal to the upper-right triangle but for a different sign:

TensorArray(F[mu, nu])

Matrix(%id = 18446744078270093062)

(1.6)

 

On the other hand, this new functionality, here called minimizetensorcomponents, makes the symmetries of the tensor explicitly reflected in its components.

 

There are three ways to use it. First, one can minimize the number of tensor components of a tensor previously defined. For example

 

Library:-MinimizeTensorComponents(F)

Matrix(%id = 18446744078270064630)

(1.7)

After this, both (1.2) and (1.4) are automatically equal to 0 without having to use Simplify

F[mu, nu]+F[nu, mu]

0

(1.8)

F[1, 2]+F[2, 1]

0

(1.9)

And the output of TensorArray  in (1.6) becomes equal to (1.7).

 

NOTE: in addition, after using minimizetensorcomponents in the definition of a tensor, say F, all the keywords implemented for Physics tensors are available for F:

 

F[]

F[mu, nu] = Matrix(%id = 18446744078247910206)

(1.10)

F[trace]

0

(1.11)

F[nonzero]

F[mu, nu] = {(1, 2) = F[1, 2], (1, 3) = F[1, 3], (1, 4) = F[1, 4], (2, 1) = -F[1, 2], (2, 3) = F[2, 3], (2, 4) = F[2, 4], (3, 1) = -F[1, 3], (3, 2) = -F[2, 3], (3, 4) = F[3, 4], (4, 1) = -F[1, 4], (4, 2) = -F[2, 4], (4, 3) = -F[3, 4]}

(1.12)

"F[~1,mu,matrix]"

F[`~1`, mu] = Vector[row](%id = 18446744078247885990)

(1.13)

Alternatively, one can define a tensor specifying that the symmetries should be taken into account to minimize the number of its components, by passing the keyword minimizetensorcomponents to Define.

 

Example:

 

Define a tensor with the symmetries of the Riemann tensor, that is, a tensor of 4 indices that is symmetric with respect to interchanging the positions of the 1st and 2nd pairs of indices, and antisymmetric with respect to interchanging the positions of its 1st and 2nd indices, or of its 3rd and 4th indices, and define it minimizing the number of tensor components

 

Define(R[alpha, beta, mu, nu], symmetric = {[[1, 2], [3, 4]]}, antisymmetric = {[1, 2], [3, 4]}, minimizetensorcomponents)

`Defined objects with tensor properties`

 

{Physics:-Dgamma[mu], F[mu, nu], Physics:-Psigma[mu], R[mu, nu, alpha, beta], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-KroneckerDelta[mu, nu], Physics:-LeviCivita[alpha, beta, mu, nu]}

(1.14)

We now have

R[1, 2, 3, 4]+R[2, 1, 3, 4]

0

(1.15)

R[alpha, beta, mu, nu]-R[mu, nu, alpha, beta]

0

(1.16)
• 

One can always retrieve the symmetry properties, in the abstract notation used by the Define command, using the new Library:-GetTensorSymmetryProperties; its output is ordered: first the symmetric, then the antisymmetric properties

 

Library:-GetTensorSymmetryProperties(R)

{[[1, 2], [3, 4]]}, {[1, 2], [3, 4]}

(1.17)
• 

After making the symmetries explicit (and also before that), it is frequently useful to know the number of independent components of a given tensor. For this purpose you can use the new Library:-NumberOfIndependentTensorComponents

 

Library:-NumberOfIndependentTensorComponents(R)

21

(1.18)

and, beyond the symmetries, in the case of the Riemann tensor, taking into account the first Bianchi identity further reduces this number of components to 20.
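The counting itself is simple enough to verify directly (a sketch, not part of the worksheet):

# Sketch: counting behind (1.18); the m = dim*(dim-1)/2 antisymmetric index pairs
# combine symmetrically, and the first Bianchi identity adds binomial(dim, 4) relations
dim := 4: m := dim*(dim - 1)/2:    # m = 6 index pairs
m*(m + 1)/2;                       # 21 independent components
m*(m + 1)/2 - binomial(dim, 4);    # 20 once the first Bianchi identity is taken into account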

 

A third way of using the new minimizetensorcomponents functionality is via Setup, so that, automatically, every subsequent definition of tensors with symmetries is performed minimizing the number of their components using the indicated symmetries

 

Example:

Setup(minimizetensorcomponents = true)

[minimizetensorcomponents = true]

(1.19)

So from here on you can define tensors taking their symmetries explicitly into account, without having to include the keyword minimizetensorcomponents in each definition

 

Define(C[alpha, beta], antisymmetric)

`Defined objects with tensor properties`

 

{C[mu, nu], Physics:-Dgamma[mu], F[mu, nu], Physics:-Psigma[mu], R[mu, nu, alpha, beta], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-KroneckerDelta[mu, nu], Physics:-LeviCivita[alpha, beta, mu, nu]}

(1.20)

 

C[]

C[mu, nu] = Matrix(%id = 18446744078408747598)

(1.21)
• 

Two new related functionalities are provided via Library:-RelabelTensorComponents and Library:-RedefineTensorComponent, the first one to have the labelling of the independent tensor components directly reflected in their names, the second one to redefine only one of these components

Library:-RelabelTensorComponents(C)

Matrix(%id = 18446744078408729774)

(1.22)

 

Suppose now we want to make one of these components equal to 1, say C__2

Library:-RedefineTensorComponent(C[1, 2] = 1)

C[mu, nu] = Matrix(%id = 18446744078270104390)

(1.23)

This nice development is work in collaboration with Pascal Szriftgiser from Laboratoire PhLAM, UMR CNRS 8523, Université Lille 1, F-59655, France.


 

Download MinimizeTensorComponents.mw

 

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft
