ecterrab


MaplePrimes Activity


These are Posts that have been published by ecterrab


The PDE & BC project, a very nice and challenging one, and one where Maple is a pioneer among computer algebra systems, has restarted, now also with the collaboration of Katherina von Bülow.

Recapping: the PDE & BC project started 5 years ago, implementing some of the basic methods found in textbooks to match arbitrary functions and constants to given PDE boundary conditions of different kinds. At this point we aim to fill gaps, and the first one we tackled is the case of a 1st order PDE that can be solved without boundary conditions in terms of an arbitrary function, where a single boundary condition (BC) is given for the PDE unknown function, and this BC does not depend on the independent variables of the problem. It looks simple ... It can be rather tricky, though. The method we implemented is a simple yet ingenious use of differential invariants to match the boundary condition.


The resulting new code, the portion already tested, is available for download on the Maplesoft R&D webpage for Differential Equations and Mathematical Functions (the development itself is bundled within the library that contains the new developments for the Physics package, in turn within the zip linked on that webpage).


The examples that can now be handled, although restricted in generality to "only one 1st order linear or nonlinear PDE and only one boundary condition for the unknown function itself", illustrate well how powerful it can be to use more advanced methods to tackle these tricky situations where we need to match an arbitrary function to a boundary condition.


To illustrate the idea, consider first a linear example, among the simplest one could imagine:

PDEtools:-declare(f(x, y, z))

f(x, y, z)*`will now be displayed as`*f

(1)

pde := diff(f(x, y, z), x)+diff(f(x, y, z), y)+diff(f(x, y, z), z) = f(x, y, z)

diff(f(x, y, z), x)+diff(f(x, y, z), y)+diff(f(x, y, z), z) = f(x, y, z)

(2)

Input now a boundary condition (bc) for the unknown f(x, y, z) such that this bc does not depend on the independent variables {x, y, z}; this bc can however depend on arbitrary symbolic parameters, for instance

bc := f(alpha+beta, alpha-beta, 1) = alpha*beta

f(alpha+beta, alpha-beta, 1) = alpha*beta

(3)

With the recent development, this kind of problem can now be solved in one go:

sol := pdsolve([pde, bc])

f(x, y, z) = (1/4)*(x-2*z+2+y)*(x-y)*exp(z-1)

(4)

Nice! And how do you verify this result for correctness? With pdetest , which actually also tests the solution against the boundary conditions:

pdetest(sol, [pde, bc])

[0, 0]

(5)

And what has been done to obtain the solution (4)? First, the PDE was solved regardless of the boundary condition, obtaining the general solution:

pdsolve(pde)

f(x, y, z) = _F1(-x+y, -x+z)*exp(x)

(6)

In a second step, the arbitrary function _F1(-x+y, -x+z) got determined such that the boundary condition f(alpha+beta, alpha-beta, 1) = alpha*beta is matched. Concretely, the mapping _F1 is what got determined. You can see this mapping by reversing the solving process in two steps. Start by taking the difference between the general solution (6) and the solution (4) that matches the boundary condition

(f(x, y, z) = _F1(-x+y, -x+z)*exp(x))-(f(x, y, z) = (1/4)*(x-2*z+2+y)*(x-y)*exp(z-1))

0 = _F1(-x+y, -x+z)*exp(x)-(1/4)*(x-2*z+2+y)*(x-y)*exp(z-1)

(7)

and isolate here _F1(-x+y, -x+z)

PDEtools:-Solve(0 = _F1(-x+y, -x+z)*exp(x)-(1/4)*(x-2*z+2+y)*(x-y)*exp(z-1), _F1(-x+y, -x+z))

_F1(-x+y, -x+z) = (1/4)*exp(-x+z-1)*(x^2-2*x*z-y^2+2*y*z+2*x-2*y)

(8)

So this is the value _F1(-x+y, -x+z) that got determined. To see now the actual solving mapping _F1, which takes -x+y and -x+z as arguments and returns the right-hand side of (8), one can perform a change of variables introducing the two parameters `τ__1` and `τ__2` of the _F1 mapping:

{tau__1 = -x+y, tau__2 = -x+z, tau__3 = z}

{tau__1 = -x+y, tau__2 = -x+z, tau__3 = z}

(9)

solve({tau__1 = -x+y, tau__2 = -x+z, tau__3 = z}, {x, y, z})

{x = -tau__2+tau__3, y = -tau__2+tau__1+tau__3, z = tau__3}

(10)

PDEtools:-dchange({x = -tau__2+tau__3, y = -tau__2+tau__1+tau__3, z = tau__3}, _F1(-x+y, -x+z) = (1/4)*exp(-x+z-1)*(x^2-2*x*z-y^2+2*y*z+2*x-2*y), proc (u) options operator, arrow; simplify(u, size) end proc)

_F1(tau__1, tau__2) = -(1/4)*exp(tau__2-1)*tau__1*(tau__1-2*tau__2+2)

(11)

So the solving mapping _F1 is

_F1 = unapply(rhs(_F1(tau__1, tau__2) = -(1/4)*exp(tau__2-1)*tau__1*(tau__1-2*tau__2+2)), tau__1, tau__2)

_F1 = (proc (tau__1, tau__2) options operator, arrow; -(1/4)*exp(tau__2-1)*tau__1*(tau__1-2*tau__2+2) end proc)

(12)

Wow! Although this pde & bc problem really looks very simple, this solution (12) is highly non-obvious, as is the way to get it just from the boundary condition f(alpha+beta, alpha-beta, 1) = alpha*beta and the solution (6). Let's first verify that this mapping is correct (even though we know, by construction, that it is correct). For that, apply (12) to the arguments of the arbitrary function; we should obtain (8)

(_F1 = (proc (tau__1, tau__2) options operator, arrow; -(1/4)*exp(tau__2-1)*tau__1*(tau__1-2*tau__2+2) end proc))(-x+y, -x+z)

_F1(-x+y, -x+z) = -(1/4)*exp(-x+z-1)*(-x+y)*(x-2*z+2+y)

(13)

Indeed this is equal to (8)

normal((_F1(-x+y, -x+z) = -(1/4)*exp(-x+z-1)*(-x+y)*(x-2*z+2+y))-(_F1(-x+y, -x+z) = (1/4)*exp(-x+z-1)*(x^2-2*x*z-y^2+2*y*z+2*x-2*y)))

0 = 0

(14)
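The same mapping can also be constructed forward, directly from the boundary condition (3) and the general solution (6). What follows is only a sketch of that construction done by hand with standard Maple commands (it is not the code used internally by pdsolve): evaluate the two differential invariants at the point specified by the bc, invert that relationship to express alpha and beta in terms of tau__1 and tau__2, and read the mapping off the bc itself.

# the invariants -x+y and -x+z evaluated at the bc point x = alpha+beta, y = alpha-beta, z = 1
inv_at_bc := eval({tau__1 = -x+y, tau__2 = -x+z}, {x = alpha+beta, y = alpha-beta, z = 1});
# invert: alpha and beta as functions of tau__1 and tau__2
params := solve(inv_at_bc, {alpha, beta});
# at the bc point, (6) reads _F1(tau__1, tau__2)*exp(alpha+beta) = alpha*beta; isolate _F1
simplify(eval(alpha*beta*exp(-(alpha+beta)), params));

The last line reproduces the right-hand side of (11).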

Skipping the technical details, the key observation to compute a solving mapping is that, given a 1st order PDE where the unknown depends on k independent variables, if the boundary condition depends on k-1 arbitrary symbolic parameters alpha, beta, one can always seek a "relationship between these k-1 parameters and the k-1 differential invariants that enter as arguments in the arbitrary function _F1 of the solution", and get the form of the mapping _F1 from this relationship and the bc. The method works in general. Change, for instance, the bc (3), making its right-hand side a sum instead of a product

bc := f(alpha+beta, alpha-beta, 1) = alpha+beta

f(alpha+beta, alpha-beta, 1) = alpha+beta

(15)

sol := pdsolve([pde, bc])

f(x, y, z) = (x-z+1)*exp(z-1)

(16)

pdetest(sol, [pde, bc])

[0, 0]

(17)

An interesting case happens when the boundary condition depends on fewer than k-1 parameters, for instance:

bc__1 := subs(beta = alpha, bc)

f(2*alpha, 0, 1) = 2*alpha

(18)

sol__1 := pdsolve([pde, bc__1])

f(x, y, z) = ((x-z+1)*_C1+x-y)*exp(((z-1)*_C1+y)/(1+_C1))/(1+_C1)

(19)

As we see in this result, the additional difficulty represented by having fewer parameters was tackled by introducing an arbitrary constant _C1 (this is likely to evolve into something more general...)

pdetest(sol__1, [pde, bc__1])

[0, 0]

(20)

Finally, consider a nonlinear example

PDEtools:-declare(u(x, y))

u(x, y)*`will now be displayed as`*u

(21)

pde := 3*(u(x, y)-y)^2*(diff(u(x, y), x))-(diff(u(x, y), y)) = 0

3*(u(x, y)-y)^2*(diff(u(x, y), x))-(diff(u(x, y), y)) = 0

(22)

Here we have 2 independent variables, so for illustration purposes use a boundary condition that depends on only one arbitrary parameter

bc := u(0, alpha) = alpha

u(0, alpha) = alpha

(23)

All looks OK, but we still have another problem: check the arbitrary function _F1 entering the general solution of pde when tackled without any boundary condition:

pdsolve(pde)

u(x, y) = RootOf(-y^3+3*y^2*_Z-3*y*_Z^2+_Z^3-_F1(_Z)-x)

(24)

Remove this RootOf to see the underlying algebraic expression

DEtools[remove_RootOf](u(x, y) = RootOf(-y^3+3*y^2*_Z-3*y*_Z^2+_Z^3-_F1(_Z)-x))

-y^3+3*y^2*u(x, y)-3*y*u(x, y)^2+u(x, y)^3-_F1(u(x, y))-x = 0

(25)

So this is a pde whose general solution is implicit, actually depending on an arbitrary function of the unknown u(x, y). The code handles this problem in the same way; it is just that in cases like this there may be more than one solution. For this very particular bc (23) there are actually three solutions:

pdsolve([pde, bc])

u(x, y) = x^(1/3)+y, u(x, y) = -(1/2)*x^(1/3)-((1/2)*I)*3^(1/2)*x^(1/3)+y, u(x, y) = -(1/2)*x^(1/3)+((1/2)*I)*3^(1/2)*x^(1/3)+y

(26)

Verify these three solutions against the pde and the boundary condition

map(pdetest, [u(x, y) = x^(1/3)+y, u(x, y) = -(1/2)*x^(1/3)-((1/2)*I)*3^(1/2)*x^(1/3)+y, u(x, y) = -(1/2)*x^(1/3)+((1/2)*I)*3^(1/2)*x^(1/3)+y], [pde, bc])

[[0, 0], [0, 0], [0, 0]]

(27)

:)


Download PDEs_and_Boundary_Conditions.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft


In connection with recent developments for symbolic sequences, a number of improvements were implemented regarding symbolic differentiation, that is, the computation of nth order derivatives where n is a symbol, the simplest example being the nth derivative of the exponential, which of course is the exponential itself. This post is about these developments, done in collaboration with Katherina von Bülow, and available for download as usual from the Maplesoft R&D web page for Differential Equations and Mathematical functions (the update itself is bundled with the official updates of the Maple Physics package).

 

It is important to note that Maple is a pioneer in having an actual implementation of symbolic differentiation, something that has worked for real for several releases. The development, however, was somewhat stuck because we were unable to compute the symbolic nth derivative of a composite function f(g(z)). A formula for this problem is actually known, the Faà di Bruno formula, but in order to implement it we were first missing the incomplete Bell functions, which got implemented in Maple 15; we were then still missing the ability to differentiate symbolic sequences, and functions whose arguments are symbolic sequences (i.e. the number of arguments of the function is n, a symbol, of unknown value at the time of differentiating). All this has now been implemented within the new MathematicalFunctions:-Sequences package, opening the door widely to these improvements in nth differentiation.

 

The symbolic differentiation code works as most other computer algebra code does, by mapping complicated problems into a composition of simpler problems, all of which are tractable; what follows is then an illustration of these basic cases.

 

Among the simplest new cases that can now be handled is that of a power where the exponent is linear in the differentiation variable. This is actually an easy problem

(%diff = diff)(f^(alpha*z+beta), `$`(z, n))

%diff(f^(alpha*z+beta), `$`(z, n)) = alpha^n*f^(alpha*z+beta)*ln(f)^n

(1)

More complicated, consider the k^th power of a generic function; the corresponding symbolic derivative can be mapped into a sum of symbolic derivatives of powers of g(z) with lower degree

(%diff = diff)(g(z)^k, `$`(z, n))

%diff(g(z)^k, `$`(z, n)) = k*binomial(n-k, n)*(Sum((-1)^_k1*binomial(n, _k1)*g(z)^(k-_k1)*(Diff(g(z)^_k1, [`$`(z, n)]))/(k-_k1), _k1 = 0 .. n))

(2)

In some cases where g(z) is a known function, the computation can be carried further. For example, for g = ln the result can be expressed using Stirling numbers of the first kind

(%diff = diff)(ln(alpha*z+beta)^k, `$`(z, n))

%diff(ln(alpha*z+beta)^k, `$`(z, n)) = alpha^n*(Sum(pochhammer(k-_k1+1, _k1)*Stirling1(n, _k1)*ln(alpha*z+beta)^(k-_k1), _k1 = 0 .. n))/(alpha*z+beta)^n

(3)

The cases of sin and cos are relatively simpler, but assumptions on the exponent are required in order to proceed beyond (2); for example

`assuming`([(%diff = diff)(sin(alpha*z+beta)^k, `$`(z, n))], [k::posint])

%diff(sin(alpha*z+beta)^k, `$`(z, n)) = (-1)^k*piecewise(n = 0, (-sin(alpha*z+beta))^k, alpha^n*I^n*(Sum(binomial(k, _k1)*(2*_k1-k)^n*exp(I*(2*_k1-k)*(alpha*z+beta+(1/2)*Pi)), _k1 = 0 .. k))/2^k)

(4)

The case of functions of an arbitrary number of variables (a typical situation where symbolic sequences are required) is now handled properly. This is the pFq hypergeometric function of symbolic order p and q

(%diff = diff)(hypergeom([`$`(a[i], i = 1 .. p)], [`$`(b[j], j = 1 .. q)], z), `$`(z, n))

%diff(hypergeom([`$`(a[i], i = 1 .. p)], [`$`(b[j], j = 1 .. q)], z), `$`(z, n)) = (product(pochhammer(a[i], n), i = 1 .. p))*hypergeom([`$`(a[i]+n, i = 1 .. p)], [`$`(b[j]+n, j = 1 .. q)], z)/(product(pochhammer(b[j], n), j = 1 .. q))

(5)
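As a quick independent check of (5) (not part of the original worksheet), specialize it by hand to p = q = 1 and to first order, n = 1, and compare with Maple's built-in derivative of the hypergeometric function:

# (5) at p = q = 1, n = 1, minus the corresponding explicit derivative; the result should be 0
simplify(diff(hypergeom([a[1]], [b[1]], z), z) - pochhammer(a[1], 1)*hypergeom([a[1]+1], [b[1]+1], z)/pochhammer(b[1], 1));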

The case of the MeijerG function is more complicated but, in practice, once the computer knows how to handle symbolic sequences, this more involved problem also becomes computable

(%diff = diff)(MeijerG([[`$`(a[i], i = 1 .. n)], [`$`(b[i], i = n+1 .. p)]], [[`$`(b[i], i = 1 .. m)], [`$`(b[i], i = m+1 .. q)]], z), `$`(z, k))

%diff(MeijerG([[`$`(a[i], i = 1 .. n)], [`$`(b[i], i = n+1 .. p)]], [[`$`(b[i], i = 1 .. m)], [`$`(b[i], i = m+1 .. q)]], z), `$`(z, k)) = MeijerG([[-k, `$`(a[i]-k, i = 1 .. n)], [`$`(b[i]-k, i = n+1 .. p)]], [[`$`(b[i]-k, i = 1 .. m)], [0, `$`(b[i]-k, i = m+1 .. q)]], z)

(6)

Not only is the mathematics of this result correct: the object returned is actually computable to the end (if you provide the values of n, p, m and q), and the typesetting is fully readable, as in textbooks, including copy and paste working properly; all this is new.

The nth derivatives of a number of mathematical functions that were not implemented before are now also implemented, covering those gaps; for example:

(%diff = diff)(BellB(a, z), `$`(z, n))

%diff(BellB(a, z), `$`(z, n)) = Sum(Stirling2(a, _k1)*pochhammer(_k1-n+1, n)*z^(_k1-n), _k1 = 0 .. a)

(7)

(%diff = diff)(bernoulli(nu, z), `$`(z, n))

%diff(bernoulli(nu, z), `$`(z, n)) = pochhammer(nu-n+1, n)*bernoulli(nu-n, z)

(8)

(%diff = diff)(binomial(z, m), `$`(z, n))

%diff(binomial(z, m), `$`(z, n)) = (Sum((-1)^(_k1+m)*Stirling1(m, _k1)*pochhammer(_k1-n+1, n)*(z-m+1)^(_k1-n), _k1 = 1 .. m))/factorial(m)

(9)

(%diff = diff)(euler(a, z), `$`(z, n))

%diff(euler(a, z), `$`(z, n)) = pochhammer(a-n+1, n)*euler(a-n, z)

(10)

In the same way, the fundamental formulas for the nth derivative of all 12 elliptic Jacobi functions, as well as the four elliptic JacobiTheta functions, LambertW, LegendreP and some others, are now all implemented.

Finally there is the "holy grail" of this problem: the nth derivative of a composite function f(g(z)) - the until-now-unreachable implementation of the Faà di Bruno formula. We now have it :)

(%diff = diff)(f(g(z)), `$`(z, n))

%diff(f(g(z)), `$`(z, n)) = Sum(((D@@k)(f))(g(z))*IncompleteBellB(n, k, `$`(diff(g(z), [`$`(z, j)]), j = 1 .. n-k+1)), k = 0 .. n)

(11)

Note the symbolic sequence of symbolic-order derivatives of lower degree, both of f and of g, also within the arguments of the IncompleteBellB function. This is a very abstract formula ... And does this really work? Of course it does :). Consider, for instance, a case where the nth derivatives of f(z) and g(z) can both be computed by the system:

sin(cos(alpha*z+beta))

sin(cos(alpha*z+beta))

(12)

This is the nth derivative expressed using the Faà di Bruno formula, in turn expressed using symbolic sequences within the IncompleteBellB function

(%diff = diff)(sin(cos(alpha*z+beta)), `$`(z, n))

%diff(sin(cos(alpha*z+beta)), `$`(z, n)) = Sum(sin(cos(alpha*z+beta)+(1/2)*k*Pi)*IncompleteBellB(n, k, `$`(cos(alpha*z+beta+(1/2)*j*Pi)*alpha^j, j = 1 .. n-k+1)), k = 0 .. n)

(13)

These results can all be verified. Take for instance n = 3

eval(%diff(sin(cos(alpha*z+beta)), `$`(z, n)) = Sum(sin(cos(alpha*z+beta)+(1/2)*k*Pi)*IncompleteBellB(n, k, `$`(cos(alpha*z+beta+(1/2)*j*Pi)*alpha^j, j = 1 .. n-k+1)), k = 0 .. n), n = 3)

%diff(sin(cos(alpha*z+beta)), z, z, z) = Sum(sin(cos(alpha*z+beta)+(1/2)*k*Pi)*IncompleteBellB(3, k, `$`(cos(alpha*z+beta+(1/2)*j*Pi)*alpha^j, j = 1 .. 4-k)), k = 0 .. 3)

(14)

Compute now the inert functions: on the left-hand side this is just the (now explicit) 3rd order derivative, while on the right-hand side we have a sum of IncompleteBellB functions, whose number of arguments, expressed in (13) using symbolic sequences that depend on the summation index k and the differentiation order n, in (14) depends only on k, and gets transformed into explicit sequences of arguments when the summation is performed and k assumes integer values

value(%diff(sin(cos(alpha*z+beta)), z, z, z) = Sum(sin(cos(alpha*z+beta)+(1/2)*k*Pi)*IncompleteBellB(3, k, `$`(cos(alpha*z+beta+(1/2)*j*Pi)*alpha^j, j = 1 .. 4-k)), k = 0 .. 3))

alpha^3*sin(alpha*z+beta)*cos(cos(alpha*z+beta))-3*alpha^3*cos(alpha*z+beta)*sin(alpha*z+beta)*sin(cos(alpha*z+beta))+alpha^3*sin(alpha*z+beta)^3*cos(cos(alpha*z+beta)) = alpha^3*sin(alpha*z+beta)*cos(cos(alpha*z+beta))-3*alpha^3*cos(alpha*z+beta)*sin(alpha*z+beta)*sin(cos(alpha*z+beta))+alpha^3*sin(alpha*z+beta)^3*cos(cos(alpha*z+beta))

(15)

Take left-hand side minus right-hand side

simplify((lhs-rhs)(alpha^3*sin(alpha*z+beta)*cos(cos(alpha*z+beta))-3*alpha^3*cos(alpha*z+beta)*sin(alpha*z+beta)*sin(cos(alpha*z+beta))+alpha^3*sin(alpha*z+beta)^3*cos(cos(alpha*z+beta)) = alpha^3*sin(alpha*z+beta)*cos(cos(alpha*z+beta))-3*alpha^3*cos(alpha*z+beta)*sin(alpha*z+beta)*sin(cos(alpha*z+beta))+alpha^3*sin(alpha*z+beta)^3*cos(cos(alpha*z+beta))))

0

(16)


:)


Download SymbolicOrderDifferentiation.mw


Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

 

Symbolic sequences enter various formulations in mathematics. This post is about a related new subpackage, Sequences, within the MathematicalFunctions package, available for download on Maplesoft's R&D page for Mathematical Functions and Differential Equations (currently bundled with updates to the Physics package).

 

Perhaps the most typical cases of symbolic sequences are:

 

1) A sequence of numbers - say from n to m - frequently displayed as

n, `...`, m

 

2) A sequence of one object, say a, repeated say p times, frequently displayed as

 "((a,`...`,a))"

3) A more general sequence, as in 1), but of different objects and not necessarily numbers, frequently displayed as

a[n], `...`, a[m]

or likewise a sequence of functions

f(n), `...`, f(m)

In all these cases, of course, none of n, m, or p are known: they are just symbols, or algebraic expressions, representing integer values.

 

These most typical cases of symbolic sequences have been implemented in Maple since day 1 using the `$` operator. Cases 1), 2) and 3) above are respectively entered as `$`(n .. m), `$`(a, p), and `$`(a[i], i = n .. m) or `$`(f(i), i = n .. m). To have computer algebra representations for all these symbolic sequences is something wonderful, I would say unique in Maple.

Until recently, however, the typesetting of these symbolic sequences was frankly poor, input like `$`(a[i], i = n .. m) or `$`(a, p) just being echoed in the display. More relevant: too little could be done with these objects; the rest of Maple didn't know how to add, multiply, differentiate or map an operation over the elements of the sequence, nor for instance count the sequence's number of elements.

 

All this has now been implemented.  What follows is a brief illustration.

restart

First of all, now these three types of sequences have textbook-like typesetting:

`$`(n .. m)

`$`(n .. m)

(1)

`$`(a, p)

`$`(a, p)

(2)

For the above, a$p works the same way

`$`(a[i], i = n .. m)

`$`(a[i], i = n .. m)

(3)

Moreover, this now permits textbook display of mathematical functions that depend on sequences of parameters, for example:

hypergeom([`$`(a[i], i = 1 .. p)], [`$`(b[i], i = 1 .. q)], z)

hypergeom([`$`(a[i], i = 1 .. p)], [`$`(b[i], i = 1 .. q)], z)

(4)

IncompleteBellB(n, k, `$`(factorial(j), j = 1 .. n-k+1))

IncompleteBellB(n, k, `$`(factorial(j), j = 1 .. n-k+1))

(5)

More interestingly, these new developments now permit differentiating these functions even when their arguments are symbolic sequences, and displaying the result as in textbooks, with copy and paste working properly, for instance

(%diff = diff)(hypergeom([`$`(a[i], i = 1 .. p)], [`$`(b[i], i = 1 .. q)], z), z)

%diff(hypergeom([`$`(a[i], i = 1 .. p)], [`$`(b[i], i = 1 .. q)], z), z) = (product(a[i], i = 1 .. p))*hypergeom([`$`(a[i]+1, i = 1 .. p)], [`$`(b[i]+1, i = 1 .. q)], z)/(product(b[i], i = 1 .. q))

(6)

It is very interesting how much this enhances the representation capabilities; to mention but one example, this makes 100% possible the implementation of the Faà di Bruno formula for the nth symbolic derivative of composite functions (more on this in a post to follow this one).

But the bread-and-butter first: the new package for handling sequences is

with(MathematicalFunctions:-Sequences)

[Add, Differentiate, Map, Multiply, Nops]

(7)

The five commands that got loaded do what their names suggest. Consider for instance the first kind of sequence mentioned above, i.e.

`$`(n .. m)

`$`(n .. m)

(8)

Check what is behind this nice typesetting

lprint(`$`(n .. m))

`$`(n .. m)

 

All OK. How many operands (an abstract version of Maple's nops  command):

Nops(`$`(n .. m))

m-n+1

(9)

That was easy, ok. Add the sequence

Add(`$`(n .. m))

(1/2)*(m-n+1)*(n+m)

(10)

Multiply the sequence

Multiply(`$`(n .. m))

factorial(m)/factorial(n-1)

(11)

Map an operation over the elements of the sequence

Map(f, `$`(n .. m))

`$`(f(j), j = n .. m)

(12)

lprint(`$`(f(j), j = n .. m))

`$`(f(j), j = n .. m)

 

Map works as map does, i.e. you can pass extra arguments as well

MathematicalFunctions:-Sequences:-Map(Int, `$`(n .. m), x)

`$`(Int(j, x), j = n .. m)

(13)

All this works the same way with symbolic sequences of the forms a, `...`, a and a[n], `...`, a[m]. For example:

`$`(a, p)

`$`(a, p)

(14)

lprint(`$`(a, p))

`$`(a, p)

 

MathematicalFunctions:-Sequences:-Nops(`$`(a, p))

p

(15)

Add(`$`(a, p))

a*p

(16)

Multiply(`$`(a, p))

a^p

(17)

Differentiation also works

Differentiate(`$`(a, p), a)

`$`(1, p)

(18)

MathematicalFunctions:-Sequences:-Map(f, `$`(a, p))

`$`(f(a), p)

(19)

MathematicalFunctions:-Sequences:-Differentiate(`$`(f(a), p), a)

`$`(diff(f(a), a), p)

(20)

For a symbolic sequence of type 3)

`$`(a[i], i = n .. m)

`$`(a[i], i = n .. m)

(21)

MathematicalFunctions:-Sequences:-Nops(`$`(a[i], i = n .. m))

m-n+1

(22)

Add(`$`(a[i], i = n .. m))

sum(a[i], i = n .. m)

(23)

Multiply(`$`(a[i], i = n .. m))

product(a[i], i = n .. m)

(24)

The following is nontrivial: differentiating the sequence a[n], `...`, a[m] with respect to a[k] should return 1 whenever the running index i equals k (so that a[i] is a[k]), and 0 otherwise. That is how it works now:

Differentiate(`$`(a[i], i = n .. m), a[k])

`$`(piecewise(k = i, 1, 0), i = n .. m)

(25)

lprint(`$`(piecewise(k = i, 1, 0), i = n .. m))

`$`(piecewise(k = i, 1, 0), i = n .. m)

 

MathematicalFunctions:-Sequences:-Map(f, `$`(a[i], i = n .. m))

`$`(f(a[i]), i = n .. m)

(26)

Differentiate(`$`(f(a[i]), i = n .. m), a[k])

`$`((diff(f(a[i]), a[i]))*piecewise(k = i, 1, 0), i = n .. m)

(27)

lprint(`$`((diff(f(a[i]), a[i]))*piecewise(k = i, 1, 0), i = n .. m))

`$`((diff(f(a[i]), a[i]))*piecewise(k = i, 1, 0), i = n .. m)

 

 

And that is it. Summarizing: in addition to the former implementation of symbolic sequences, we now have textbook-like typesetting for them and, more importantly, Add, Multiply, Differentiate, Map and Nops. :)

 

The first large application we have been working on, taking advantage of this, is symbolic differentiation, with very nice results; I will summarize them in a post to follow in a couple of days.

 

Download MathematicalFunctionsSequences.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

 

This post aims at summarizing the developments of the last 12 months of the Physics package, which were focused on: Vector Analysis, symbolic Tensor manipulations, Quantum Mechanics, and General Relativity, including a new Tetrads subpackage. Besides that, there is a new command, Assume, with valuable new features, and a new computing modality, automaticsimplification, as well as an enlargement of the database of solutions to Einstein's equations with more than 100 new metrics. As is always the case, there is still a lot more to do, of course. But it has been a year of tremendous developments, worth recapping.

I would also like to acknowledge and specially thank Pascal Szriftgiser, Directeur de recherche CNRS at the "Laboratoire de Physique des Lasers Atomes et Molécules" (Lille, France), and Denitsa Staicova from the Bulgarian Academy of Sciences, "Institute for Nuclear Research and Nuclear Energy", for their constant, constructive and valuable feedback throughout the year, respectively in the areas of Quantum Mechanics and General Relativity. Thanks also to all of you who reported bugs and emailed suggestions to physics@maplesoft.com; this kind of feedback is a pillar of the development of this Physics project.

As usual, the latest version of the package including the developments mentioned below can be downloaded from the Maple Physics: Research and Development web site. Some examples illustrating the use of the new capabilities in the context of more general problems are found in the 2014 MaplePrimes post Computer Algebra for Theoretical Physics. (At the end of this post there are two links: a pdf file with all the sections open, and this worksheet itself for whoever wants to play around).

Simplification

 

Simplification is perhaps the most common operation performed in a computer algebra system. In Physics, this typically entails simplifying tensorial expressions, or expressions involving noncommutative operators that satisfy certain commutator/anticommutator rules, or sums and integrals involving quantum operators and Dirac delta functions in the summands and integrands. Relevant enhancements were introduced for all these cases, including enhancements in the simplification of:

• Products of LeviCivita tensors in curved spacetimes when LeviCivita represents the Galilean pseudo-tensor (related to Setup(levicivita = Galilean)), instead of its generalization to curved spaces (related to Setup(levicivita = nongalilean)).

• Tensorial expressions in general that have spacetime, space, and/or tetrad contracted indices, possibly at the same time (see the sketch under Examples below).

• New option tryhard, which resolves zero recognition in an important number of nontrivial situations.

• Expressions involving the Dirac function.

• Vectorial expressions involving cylindrical or spherical coordinates and related unit vectors.

• Expressions simplified with respect to side relations (equations) in the presence of quantum vectorial equations.

• Expressions involving products of quantum operators entering parameterized algebra rules.

• Expressions involving vectorial quantum operators simplified with respect to other vectorial equations.

• Support for the simplification and integration of spherical harmonics (SphericalY) relevant in quantum mechanics.

Examples
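A minimal sketch of the second item above (simplification of tensorial expressions with contracted indices), assuming a fresh session and the default 4-dimensional Minkowski spacetime:

with(Physics):
# contracting the metric with its all-contravariant form over both indices
# returns the trace of the identity, i.e. the spacetime dimension, 4
Simplify(g_[mu, nu]*g_[`~mu`, `~nu`]);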

   

 

Tensors

 

A number of relevant changes happened in the tensor routines of the Physics package, towards making the routines pack more functionality, the simplification more powerful, and the handling of symmetries, substitutions, and other operations more flexible and natural.

• Physics now works with four kinds of Minkowski spaces (different signatures) to accommodate the typical situations seen in textbooks; to these correspond the signatures +---, ---+, -+++ and +++-.

• Allow setting the metric by specifying the signature directly, as in g_[`-`] or g_[`+---`], or Setup(metric = `---+`) or Setup(g[mu, nu] = `---+`).

• The signature keyword of the Physics Setup is now in use, to set the metric and to indicate the form of the orthonormal tetrad, in turn used to derive the form of a null tetrad.

• Automatic detection of the position of t as the time variable: when you set the coordinates, the signature of the default Minkowski spacetime metric is automatically set accordingly, to ---+ or +---.

• New keywords with special meaning when indexing the Physics (also the user-defined) tensors (see the sketch under Examples below):
· `~`; for example, g_[`~`] returns the all-contravariant matrix form of the metric.
· definition; for example, Ricci[definition] returns the definition of the Ricci tensor; this works also with user-defined tensors.
· scalars; for example, Weyl[scalars] and Ricci[scalars] return the five Weyl and seven Ricci scalars used to perform a Petrov classification and in the Newman-Penrose formalism.
· scalarsdefinition and invariantsdefinition; for example, Weyl[scalarsdefinition] or Riemann[invariantsdefinition] return the corresponding definitions for the scalars and invariants.
· nullvectors; for example, when the new Tetrads subpackage is loaded, e_[nullvectors] returns a sequence of null vectors with their products normalized according to the Newman-Penrose formalism.
· matrix; this keyword was introduced in previous releases, and now it can appear after a space index (not spacetime), in which case a matrix with only the space components is returned.

• Tensorial expressions can now have spacetime indices (related to a global system of references) and tetrad indices (related to a local system of references) at the same time, or they can be rewritten in one (spacetime) or the other (tetrad) frame.

• The matrix keyword can be used with spacetime, space, or tetrad indices, resulting in the corresponding matrix.

• Implement automatic determination of symmetry under permutation of tensor indices when the tensor is defined as a matrix.

• New conversions from the Weyl to the Ricci tensors, and from Weyl to the Christoffel symbols.

• New option evaluatetrace = true or false within convert/Ricci, to avoid automatically evaluating the Ricci trace when performing conversions that involve this trace.

• New option 'evaluate' for convert/g_, convert/Christoffel and convert/Ricci. With this option set to false, it is possible to see the algebraic form of the result (that is, of the tensors involved) before evaluating it.

• The Maple 18 Library:-SubstituteTensor command got enhanced and transformed into one of the main Physics commands; it substitutes tensorial equation(s) Eqs into an expression, taking care of the free and repeated indices, such that: 1) equations in Eqs are interpreted as mappings having the free indices as parameters, 2) repeated indices in Eqs do not clash with repeated indices in the expression, and 3) spacetime, space, and tetrad indices are handled independently, so they can all be present in Eqs and in the expression at the same time. This new command can also substitute algebraic sub-expressions of type product or sum within the expression, generalizing and unifying the functionality of the subs and algsubs commands for algebraic tensor expressions.

Examples
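A minimal sketch of two of the items above (setting the metric from its signature, and the new definition and `~` indexing keywords), assuming a fresh session and using only commands named in the list:

with(Physics):
Setup(metric = `---+`):   # set the Minkowski metric directly from its signature
g_[`~`];                  # all-contravariant (matrix) form of the metric
Ricci[definition];        # the definition of the Ricci tensor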

   

 

Tetrads in General Relativity

 

The formalism of tetrads in general relativity got implemented within Physics as a new subpackage, Physics:-Tetrads, with 13 commands: mainly, the null vectors of the Newman-Penrose formalism and the tetrad tensors `𝔢`[a, mu], eta[a, b], gamma[a, b, c], lambda[a, b, c], respectively the tetrad, the tetrad metric, the Ricci rotation coefficients, and the lambda tensor, plus five algebraic manipulation commands: IsTetrad, NullTetrad, OrthonormalTetrad, SimplifyTetrad, and TransformTetrad, to construct orthonormal and null tetrads of different forms and using different methods.

Examples
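A minimal sketch, assuming the g_[sc] shorthand loads the Schwarzschild solution from the database and using only the tetrad tensors named above (an illustration, not the worksheet's own examples):

with(Physics): with(Physics:-Tetrads):
g_[sc];    # set the spacetime metric to the Schwarzschild solution
e_[];      # the tetrad for this metric
eta_[];    # the tetrad metric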

   

 

More Metrics in the Database of Solutions to Einstein's Equations

 

A database of solutions to Einstein's equations  was added to the Maple library in Maple 15 with a selection of metrics from "Stephani, H.; Kramer, D.; MacCallum, M.; Hoenselaers, C.; and Herlt, E.,  Exact Solutions to Einstein's Field Equations" and "Hawking, Stephen; and Ellis, G. F. R., The Large Scale Structure of Space-Time". More metrics from these two books were added for Maple 16, Maple 17, and Maple 18. These metrics can be searched using g_  (the Physics command representing the spacetime metric that also sets the metric to your choice in one go) or using the command DifferentialGeometry:-Library:-MetricSearch .

• One hundred and four new metrics were added to the database from various chapters of the aforementioned book "Exact Solutions to Einstein's Field Equations". Besides new metrics for other chapters, with this addition all the solutions found in the literature and collected in Chapters 13 and 14 of that book are now present in the database of solutions to Einstein's equations.

• It is now possible to manipulate algebraically the properties of these metrics, for example computing tetrads and null vectors for them, using the 13 commands of the new Physics:-Tetrads package.

Examples

   

 

Commutators, AntiCommutators, and Dirac notation in quantum mechanics

 

When computing with products of noncommutative operators, the results depend on the algebra of commutators and anticommutators that you previously set. Besides that, in Physics, various mathematical objects themselves satisfy specific commutation rules. You can query about these rules using the Library commands Commute and AntiCommute. Previously existing functionality in this area was refined and enhanced; the changes include:

• Computing Commutators and Anticommutators between equations, or of an expression with an equation.

• Library:-Commute(A, F(A)) = true whenever A is a quantum operator and F is a commutative mapping (see Cohen-Tannoudji, Quantum Mechanics, page 171).

• Differentiating with respect to a noncommutative variable whenever all the variables present in the derivand commute with the differentiation variable.

• Automatic computation of F(X).Ket(X, x) = F(x)*Ket(X, x), that is, the automatic computation of a function of an operator applied to its eigenkets (see Cohen-Tannoudji, Quantum Mechanics, page 171, and the sketch under Examples below).

• Parameterized commutators; for example, when setting the rule Commutator(A, exp(lambda*B)) = lambda*C, take lambda as a parameter, so Commutator(A, exp(alpha*B)) now returns alpha*C, not "lambda C".

• Automatic derivation of a commutator rule: Commutator(A, F(B)) = C*F'(B) when Commutator(A, C) = Commutator(B, C) = 0 and C = Commutator(A, B) (see Cohen-Tannoudji, Quantum Mechanics, page 171).

• f(A).Ket(A, a) = f(a)*Ket(A, a), including cases like, for instance, f(alpha*A).Ket(A, a) = f(alpha*a)*Ket(A, a), or exp(I*f(t)*A/hbar).Ket(A, a) = exp(I*f(t)*a/hbar)*Ket(A, a).

• New mechanism to have more than one algebra rule related to the same function (for example, a function of two arguments that come in different order).

• The dot product of the inverse of an operator, Inverse(A) · Ket, now returns the same as 1/A · Ket.

• F(H) is Hermitian if H is Hermitian and F is assumed to be real, via Assume, assuming, or Setup(realobjects = F).

• Implement that F(H) is Hermitian if H is Hermitian and F is a mathematical real function, that is, one that maps real objects into real objects; in this change, only exp, the trigonometric functions and their inert forms are included.

• Add a few previously missing Unitary and Hermitian operator cases:
a) if U and V are unitary, then U V is also unitary.
b) if A is Hermitian, then exp(I*A) is unitary.
c) if U is unitary and A is Hermitian, then U*A*U^† is also Hermitian.

• Make the type definition for ExtendedQuantumOperator more precise, to include as such any arbitrary function of an ExtendedQuantumOperator.

Examples
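A minimal sketch of the second and fourth items above, assuming A and X are set as quantum operators and F is an ordinary commutative mapping:

with(Physics):
Setup(quantumoperators = {A, X}):
Library:-Commute(A, F(A));   # true: a commutative mapping of A commutes with A
F(X) . Ket(X, x);            # automatically computed as F(x)*Ket(X, x)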

   

 

New Assume command and new enhanced Mode: automaticsimplification

 

One new enhanced mode was added to the Physics setup, automaticsimplification, and a new Physics:-Assume command was implemented; together they make expressions be automatically expressed in simpler forms and allow for very flexible ways of implementing assumptions, making the Physics environment concretely more expressive.

Assume

 

In almost any mathematical formulation in Physics, there are objects that are real, positive, or just angles that have a restricted range; for example: Planck's constant, time, the mass and position of particles, and so on. When placing assumptions using the assume command, however, expressions entered before placing the assumptions and those entered with the assumptions cannot be reused once the assumptions are removed. Also, when using assume, variables get redefined, so that geometrical coordinates (spacetime, Cartesian, cylindrical, and spherical) lose their identity. These issues got addressed with a new Assume command, which does not redefine the variables, implementing the concept of an "extended assuming" and allowing for reusing expressions entered before placing assumptions and also after removing them. Assume also includes the functionality of the additionally command.

Examples
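A minimal sketch of the new command (the same ideas are shown in more detail, using the Library:-Assume name, in the last post further down this page):

with(Physics):
Assume(0 < x and x < Pi/2):
simplify(arccos(cos(x)));    # returns x; the variable x itself is not redefined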

   

Automatic simplification

 

This new Physics mode of computation means that, after you enter Setup(automaticsimplification = true), the output corresponding to every single input (not just related to Physics) gets automatically simplified in size before being returned to the screen. This is fantastically convenient for interactive work in most situations.

Examples

   

 

Vectors Package

 

A number of changes were performed in the Vectors  subpackage to make the computations more natural and versatile:

• Enhancement in the algebraic manipulations of inert vectorial differential operators.

• Improvements in the manipulation of scalar products of vector or scalar functions (to the left) with vectorial differential operators (to the right), which result in vectorial or scalar differential operators.

• Several improvements in the use of trigonometric simplifications when changing the basis or the coordinates in vectorial expressions.

• Add new functionality mapping Vectors:-Component over equations, automatically changing basis if the two sides are not projected onto the same basis.

• Implement the expansion of the square of a vectorial expression as the scalar (dot) product of the expression with itself, including the case of a vectorial quantum operator expression.

• Allow multiplying equations also when the product operator is in scalar and vector products (Vectors:-`.` and Vectors:-`&x`).

• ChangeBasis: allow changing coordinates between sets of orthogonal coordinates also when the expression is not vectorial.

• New command: ChangeCoordinates, to rewrite, using Cartesian, cylindrical, and spherical coordinates, an algebraic expression that involves these coordinates, either a scalar expression or a vectorial one (in the latter case without changing the orthonormal basis).

Examples
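A minimal sketch using only the Cartesian unit vectors _i, _j, _k of Physics:-Vectors, the scalar product, and Norm (the trailing underscore is the package's convention for displaying a name as a vector):

with(Physics:-Vectors):
v_ := 2*_i + 3*_j + 6*_k:
v_ . v_;     # scalar product of v_ with itself: 49
Norm(v_);    # Euclidean norm, now the default (see the design changes further below): 7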

   

 

The Physics Library

 

Twenty-six new commands, useful for programming and interactive computation, have been added to the Physics:-Library  package. These are:

• ClearCaches
• ExpandProductsInExpression
• FlipCharacterOfFreeIndices
• FromMinkowskiKindToSignature
• FromSignatureToMinkowskiKind
• FromTetradToTetradMetric
• GetByMatchingPattern
• GetTypeOfTensorIndices
• GetVectorRootName
• HasOriginalTypeOfIndices
• IsEuclideanSignature
• IsGalileanSignature
• IsMinkowskiSignature
• IsValidSignature
• IsEuclideanMetric
• IsGalileanMetric
• IsMinkowskiMetric
• IsOrthonormalTetradMetric
• IsNullTetradMetric
• IsNullTetrad
• IsOrthonormalTetrad
• IsTensorFunctionalForm
• RepositionRepeatedIndicesAsIn
• RestoreRepeatedIndices
• RewriteTypeOfIndices
• SplitIndicesByType

 

Additionally, several improvements in the previously existing Physics:-Library  commands have been implemented:

• Add the types spacetimeindex, spaceindex, spinorindex, gaugeindex, and tetradindex to the exports of the Library:-PhysicsType package.

• Improvements to Library:-ToCovariant and Library:-ToContravariant when the spacetime is curved and some of the 'tensors' involved are not actually tensors in a curved space.

• Add new options changefreeindices and flipcharacterofindices to the Library:-ToCovariant and Library:-ToContravariant commands, to actually lower and raise the free indices as necessary, instead of the default behavior of returning an expression that is mathematically equivalent to the given one.

• Extend the Library commands GetCommutativeSymbol, GetAntiCommutativeSymbol, and GetNonCommutativeSymbol to return vectorial symbols when Vectors is loaded and a vectorial symbol is requested.

• Add functionality to the Library command GetSymbolsWithSameType so that, when the input is a list of objects, it returns a list with new symbols of the corresponding types, automatically taking into account the vectorial (Y/N) kind of the symbols.

Examples

   

 

Miscellaneous

 
• Add several fields to the Physics:-Setup() applet in order to allow for manipulating all the Physics settings from within the applet.

• New Physics:-Setup options: automaticsimplification and normusesconjugate.

• When any of Physics or Physics:-Vectors is loaded, dtheta, dphi, etc. are now displayed as `dθ`, `dφ`, etc.

• Implement, within the `*` operator (both the global one and the Physics one), the product of equations as the product of left-hand sides equal to the product of right-hand sides, eliminating the frequently tedious typing lhs(eq[1])*lhs(eq[2]) = rhs(eq[1])*rhs(eq[2]). You can now just enter eq[1]*eq[2] (see the sketch under Examples below).

• Automatically distribute dot products over lists, as in A.[a, b, c] = [A.a, A.b, A.c].

• Allow (A = B) - C also when A, B, and C are Matrices.

• Add convert(`...`, setofequations) and convert(`...`, listofequations) to convert Physics:-Vectors, Matrices of equations, etc. into sets or lists of equations.

• Annihilation and Creation operators are now displayed as in textbooks, using a^- and a^+.

• It is now possible to use equation labels to copy and paste expressions involving Annihilation and Creation operators.

• Implement the ability in Fundiff to compute functional derivatives by passing only a function name as second argument. This works when the derivand contains this function with only one dependency (perhaps on many variables), say X, permitting varying a function much as one does with paper and pencil.

• The determination of symmetries and antisymmetries of tensorial expressions got enhanced.

• The metric g[mu, nu], as well as the tetrad `𝔢`[a, mu] and the tetrad metric eta[a, b], can now be (re)defined using the standard Physics:-Define command for defining tensors. Also, the definition can now be given directly in terms of a tensorial expression.

• Add the keyword option attemptzerorecognition in TensorArray, so that each component of the array is tested for 0.

• Allow summing over a list of objects, or over `in` structures like '`in`(j, [a, b, c])', when redefining sum, and also in Physics:-Library:-Add.

• Harmonize the use of simplify/siderels with Physics, so that anticommutative and noncommutative objects, whether they are vectorial or not, are respected as such and not transformed into commutative objects when the simplification is performed.

• Changes in design:

a. The output of KillingVectors now has the format of a vector solution by default, that is, a 4-D vector on the left-hand side and a list with its components on the right-hand side, and as such it can be repassed to the Define command for later use as a tensor. To recover the old format of a set of equation solutions for each vector component, a new optional argument, output = componentsolutions, got implemented.

b. Vectors:-Norm now returns the Euclidean real norm by default, that is, Norm(v_) = sqrt(v_ . v_), and only uses conjugate, as in sqrt(v_ . conjugate(v_)), when the option conjugate is passed or the setting normusesconjugate is set using Physics:-Setup.

c. The output of FeynmanDiagrams now discards, by default, all terms that include tadpoles. Also, an option, includetadpoles, to have these terms included as in previous releases, got implemented.

d. When Physics is loaded, 0^m does not return 0, in view of the fact that, in Maple, 0^0 returns 1.

e. The dot product A.B of quantum operators A and B now returns as a (noncommutative) product A B when neither A nor B involves Bras or Kets.

f. If A is a quantum operator and Vectors is loaded, then the vector A_ (A under an arrow) is also a quantum operator; likewise, if Z is a noncommutative prefix and Vectors is loaded, then Z_ is also a noncommutative object.

g. When Vectors is loaded, the Hermitian and Unitary properties of operators set using the Setup command are now propagated to "the name under the arrow" and vice versa, so that if A is a Hermitian or Unitary operator, then A_ is too.

h. The SpaceTimeVector can now have a dependency other than a coordinate system.

i. It is now possible to enter `∂`[mu](A[mu]*B[mu]) even when the index is repeated twice, considering that mu in A[mu]*B[mu] = A.B is actually a dummy, so that a collision with mu in `∂`[mu] can be programmatically avoided.

j. Diminish the use of KroneckerDelta as a tensor, using the metric g_ instead in the output of Physics commands, reserving KroneckerDelta to be used as the standard corresponding symbol in quantum mechanics, not as a tensor.

Examples
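A minimal sketch of the product-of-equations item above and of dot products distributing over lists, assuming nothing beyond loading Physics:

with(Physics):
eq[1] := a = b: eq[2] := c = d:
eq[1]*eq[2];      # the product of equations: a*c = b*d
A . [a, b, c];    # distributes as [A . a, A . b, A . c] (here, with commutative A, that is [A*a, A*b, A*c])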

   

 

See Also

 

Physics , Computer Algebra for Theoretical Physics, The Physics project


OneYearOfPhysics.mw

OneYearOfPhysics.pdf

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Hi
Two new things recently added to the latest version of Physics available on Maplesoft's R&D Physics webpage are worth mentioning outside the framework of Physics.

  • automaticsimplification. This means that after "Physics:-Setup(automaticsimplification=true)", the output corresponding to every single input (literally) gets automatically simplified in size before being returned to the screen. This is fantastically convenient for interactive work in most situations.

  • Add Physics:-Library:-Assume, to perform the same operations one typically performs with the  assume command, but without the side effect that the variables get redefined. So the variables do not get redefined, they only receive assumptions.

This new Assume implements the concept of an "extended assuming". It permits re-using expressions involving the variables being assumed, expressions that were entered before the assumptions were placed, as well as reusing all the expressions computed while the variables had assumptions, even after removing the variable's assumptions. None of this is possible when placing assumptions using the standard assume. The new routine also permits placing assumptions on global variables that have special meaning, that cannot be redefined, e.g. the cartesian, cylindrical or spherical coordinates sets, or the coordinates of a coordinate spacetime system within the Physics package, etc.

Examples:

 

with(Physics):

This is Physics from today:

Physics:-Version()[2]

`2014, December 9, 16:51 hours`

(1.1)
• Automatic simplification is here. At this point, automaticsimplification is OFF by default.

Setup(automaticsimplification)

[automaticsimplification = false]

(1.2)

Hence, for instance, if you input the following expression, the computer just echoes your input:

Physics:-`*`(a, c)+Physics:-`*`(a, d)+Physics:-`*`(b, c)+Physics:-`*`(b, d)

a*c+a*d+b*c+b*d

(1.3)

There is however some structure behind (1.3) and, in most situations, it is convenient to have these structures apparent, in part because they frequently provide hints on how to proceed ahead, but also because a more compact expression is, roughly speaking, simpler to understand. To see this automaticsimplification in action, turn it ON:

Setup(automaticsimplification = true)

[automaticsimplification = true]

(1.4)

Recall this same expression (you could input it with the equation label (1.3) as well) 

Physics:-`*`(a, c)+Physics:-`*`(a, d)+Physics:-`*`(b, c)+Physics:-`*`(b, d)

(c+d)*(a+b)

(1.5)

What happened: this output, like everything else after you set automaticsimplification = true and with no exceptions, is now further processed with simplify/size before being returned. Enjoy computing with frankly shorter expressions all around! And no need anymore for "simplify(%, size)" every three or four input lines.

Another example, typical in computer algebra, where expressions become uncomfortably large and difficult to read: convert the following input to 2D math input mode first, in order to compare what is being entered with the automatically simplified output on the screen

-Physics:-`*`(Physics:-`*`(Physics:-`*`(3, sin(x)^(1/2)), cos(x)^2), sin(x)^m)+Physics:-`*`(Physics:-`*`(Physics:-`*`(3, sin(x)^(1/2)), cos(x)^2), cos(x)^n)+Physics:-`*`(Physics:-`*`(Physics:-`*`(4, sin(x)^(1/2)), cos(x)^4), sin(x)^m)-Physics:-`*`(Physics:-`*`(Physics:-`*`(4, sin(x)^(1/2)), cos(x)^4), cos(x)^n)

-4*(cos(x)^n-sin(x)^m)*sin(x)^(1/2)*cos(x)^2*(cos(x)^2-3/4)

(1.6)

You can turn automaticsimplification OFF the same way

Setup(automaticsimplification = false)

[automaticsimplification = false]

(1.7)
• New Library:-Assume facility; welcome to the world of "extended assuming" :)

 

Consider a generic variable, x. Nothing is known about it

about(x)

x:

  nothing known about this object

 

Each variable has an associated number that depends on the session, and the computer (internally) uses this number to refer to the variable.

addressof(x)

18446744078082181054

(1.8)

When using the assume command to place assumptions on a variable, this number associated with it changes; for example:

assume(0 < x and x < Physics:-`*`(Pi, 1/2))

addressof(x)

18446744078179060574

(1.9)

Indeed, the variable x got redefined and renamed; it is no longer the variable x referenced in (1.8).

about(x)

Originally x, renamed x~:

  is assumed to be: RealRange(Open(0),Open(1/2*Pi))

 


The semantics may seem confusing but that is what happened: you enter x and the computer thinks x~, not x anymore. This means two things:

1) All the equations/expressions entered before placing the assumptions on x using assume involve a variable x that is different from the one that exists after placing the assumptions, and so these previous expressions cannot be reused. They involve a different variable.

2) Also, because after placing the assumptions using assume x refers to a different object, programs that depend on the x that existed before placing the assumptions will not recognize the new x redefined by assume.

 

For example, if x was part of a coordinate system and the spacetime metric g[mu, nu] depends on it, the new variable x redefined within assume, being a different symbol, will not be recognized as part of the dependency of g[mu, nu]. This posed constant obstacles to working with curved spacetimes that depend on parameters or on coordinates that have a restricted range. These problems are resolved entirely with the new Library:-Assume, because it does not redefine the variables. It only places assumptions on them, and in this sense it works like assuming, not assume. As another example, all the Physics:-Vectors commands look for the Cartesian, cylindrical or spherical coordinate sets [x, y, z], [rho, phi, z], [r, theta, phi] in order to determine how to proceed, but these variables disappear if you use assume to place assumptions on them. For that reason, only assuming was fully compatible with Physics, not assume.

 

To undo assumptions placed using the assume command one reassigns the variable x to itself:

x := 'x'

x

(1.10)

Check the numerical address: it is again equal to (1.8) 

addressof(x)

18446744078082181054

(1.11)

All these issues get resolved with the new Library:-Assume, which uses all the implementation of the existing assume command but with a different approach: the variables being assumed do not get redefined, and hence you can reuse expressions/equations entered before placing the assumptions; you can also undo the assumptions and reuse results obtained with assumptions. This is the concept of an extended assuming. Also, commands that depend on these assumed variables will all continue to work normally, before, during or after placing the assumptions, because the variables do not get redefined.

Example:

about(x)

x:

  nothing known about this object

 

So this simplification attempt accomplishes nothing

simplify(arccos(cos(x)))

arccos(cos(x))

(1.12)

Let's assume now that 0 < x and x < (1/2)*Pi

Library:-Assume(0 < x and x < Physics:-`*`(Pi, 1/2))

{x::(RealRange(Open(0), Open((1/2)*Pi)))}

(1.13)

The new command echoes the internal format representing the assumption placed.

The address is still the same as (1.8)

addressof(x)

18446744078082181054

(1.14)

So the variable did not get redefined. The system however knows about the assumption - all the machinery of the assume command is being used

about(x)

Originally x, renamed x:

  is assumed to be: RealRange(Open(0),Open(1/2*Pi))

 


Note that the renaming is to the variable itself - i.e. no renaming.

Hence, expressions entered before placing assumptions can be reused. For example, for (1.12), we now have

simplify(arccos(cos(x)))

x

(1.15)

To clear the assumptions on x, you can use either Library:-Assume(x = x), or Library:-Assume(clear = {x, ...}) in the case of many variables being cleared in one go, or, in the case of a single variable being cleared:

Library:-Assume(clear = x)

about(x)

x:

  nothing known about this object

 


The implementation includes the additionally functionality; for that purpose, add the keyword additionally anywhere in the calling sequence. For example:

Library:-Assume(x::positive)

{x::(RealRange(Open(0), infinity))}

(1.16)

about(x)

Originally x, renamed x:

  is assumed to be: RealRange(Open(0),infinity)

 

Library:-Assume(additionally, x < 1)

{x::(RealRange(Open(0), Open(1)))}

(1.17)

Library:-Assume(x = x)

In summary, the new Library:-Assume command implements the concept of an extended assuming, that can be turned ON and OFF at will at any moment without changing the variables involved.


Download AutomaticSimplificationAndAssume.mw

 

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft
