ecterrab



These are answers submitted by ecterrab


Hi umbli,

What you are doing is correct, i.e. you can set the coordinates and the line element the way you posted:

restart

with(Physics)

Note the signature:

Setup(sign)

`* Partial match of 'sign' against keyword 'signature'`

 

_______________________________________________________

 

[signature = `- - - +`]

(1)

So place t in position 4 in the list of coordinates

Setup(coordinates = {X = [x, sigma, theta, t]}, g_ = a(t, x, sigma)*dt^2+b(t, x, sigma)*dt*dx+dx^2+dsigma^2+sigma^2*dtheta^2)

Systems of spacetime coordinates are: {X = (x, sigma, theta, t)}

 

_______________________________________________________

 

[coordinatesystems = {X}, metric = {(1, 1) = 1, (1, 4) = (1/2)*b(t, x, sigma), (2, 2) = 1, (3, 3) = sigma^2, (4, 4) = a(t, x, sigma)}]

(2)

This is the metric

g_[]

Physics:-g_[mu, nu] = Matrix(%id = 18446744078318363702)

(3)

And that automatically indicated the dependence of the a and b functions. Independently of that, it is frequently convenient to avoid the redundant display of functionality (you know a and b are functions, so there is no need to show their arguments everywhere). For that purpose,

"CompactDisplay(?)"

a(t, x, sigma) will now be displayed as a

b(t, x, sigma) will now be displayed as b

(4)

g_[]

Physics:-g_[mu, nu] = Matrix(%id = 18446744078318363702)

(5)

The above only hides the functionality, but it is there

show

Physics:-g_[mu, nu] = Matrix(%id = 18446744078318363702)

(6)

See CompactDisplay. By the way, one thing I think is of the utmost importance when you use computer algebra: always take a look at the help page of a command before using it (even a very brief look). That avoids frustration and saves a lot of your time.

 

Regarding the Christoffel symbols, they are computed automatically as soon as you enter the metric, e.g.:

Christoffel[nonzero]

Physics:-Christoffel[alpha, mu, nu] = {(1, 2, 4) = (1/4)*(diff(b(t, x, sigma), sigma)), (1, 4, 2) = (1/4)*(diff(b(t, x, sigma), sigma)), (1, 4, 4) = (1/2)*(diff(b(t, x, sigma), t))-(1/2)*(diff(a(t, x, sigma), x)), (2, 1, 4) = -(1/4)*(diff(b(t, x, sigma), sigma)), (2, 3, 3) = -sigma, (2, 4, 1) = -(1/4)*(diff(b(t, x, sigma), sigma)), (2, 4, 4) = -(1/2)*(diff(a(t, x, sigma), sigma)), (3, 2, 3) = sigma, (3, 3, 2) = sigma, (4, 1, 1) = (1/2)*(diff(b(t, x, sigma), x)), (4, 1, 2) = (1/4)*(diff(b(t, x, sigma), sigma)), (4, 1, 4) = (1/2)*(diff(a(t, x, sigma), x)), (4, 2, 1) = (1/4)*(diff(b(t, x, sigma), sigma)), (4, 2, 4) = (1/2)*(diff(a(t, x, sigma), sigma)), (4, 4, 1) = (1/2)*(diff(a(t, x, sigma), x)), (4, 4, 2) = (1/2)*(diff(a(t, x, sigma), sigma)), (4, 4, 4) = (1/2)*(diff(a(t, x, sigma), t))}

(7)

"seq(Christoffel[~j,mu,nu, matrix], ~j = [~0, ~1, ~2, ~3])"

Physics:-Christoffel[`~4`, mu, nu] = Matrix(%id = 18446744078247888406), Physics:-Christoffel[`~1`, mu, nu] = Matrix(%id = 18446744078247884190), Physics:-Christoffel[`~2`, mu, nu] = Matrix(%id = 18446744078247885630), Physics:-Christoffel[`~3`, mu, nu] = Matrix(%id = 18446744078318387558)

(8)

I noticed in another post you made that you prefer to work with a signature with time in position 1. That is all OK:

Redefine(setall, tosignature = `-+++`)

[X], Matrix(%id = 18446744078366166782)

(9)

So now you have

Coordinates()

Systems of spacetime coordinates are: {X = (t, x, sigma, theta)}

 

{X}

(10)

g_[]

Physics:-g_[mu, nu] = Matrix(%id = 18446744078247898518)

(11)

Just be careful when using 0 as an index, since it always points to the timelike position, which now is 1.

"Christoffel[~0,mu,nu, matrix]"

Physics:-Christoffel[`~1`, mu, nu] = Matrix(%id = 18446744078318380086)

(12)



 

Download signature_and_line_element.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Hi,
Instead of assume, you can use Physics:-Assume, which is free of this issue of redefining the variable each time you place an assumption on it or add to the assumptions already placed. The syntax / use of Physics:-Assume is exactly the same as that of the older lowercase assume; Physics:-Assume also combines the functionality of assume and additionally into one command (check the help page for details).
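For illustration, here is a minimal sketch of the difference (the name x, the expression e and the assumption are mine, not taken from your worksheet):

restart
e := x + 1:                 # an expression built before placing any assumption
assume(x > 0):
has(e, x);                  # false: assume redefined x, so e no longer contains "the" x
restart
e := x + 1:
Physics:-Assume(x > 0):
has(e, x);                  # true: x was not redefined, so old expressions still refer to it
about(x);                   # and the assumption x > 0 is nevertheless in place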

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft.

 

These first three work fine, as you say in your post


 

The next was the problematic one. A fix for this issue is available to everybody, provided within the Maplesoft Physics Updates v.358.

Physics:-Version()[2]

`2019, May 21, 15:22 hours, version in the MapleCloud: 358, version installed in this computer: 358.`

(1)

This is the system

sys := [diff(u(t, x), t) = diff(u(t, x), x, x), u(0, x) = 1, u(t, -1) = 0, u(t, 1) = 0]

[diff(u(t, x), t) = diff(diff(u(t, x), x), x), u(0, x) = 1, u(t, -1) = 0, u(t, 1) = 0]

(2)

The solution

sol := pdsolve(sys)

u(t, x) = Sum(-4*exp(-(1/4)*Pi^2*(2*n-1)^2*t)*cos((1/2)*(2*n-1)*Pi*x)*(-1)^n/((2*n-1)*Pi), n = 1 .. infinity)

(3)

pdetest(sol, sys)

[0, Sum(-4*cos((1/2)*(2*n-1)*Pi*x)*(-1)^n/((2*n-1)*Pi), n = 1 .. infinity)-1, 0, 0]

(4)

So we need to test the first condition, u(0, x) = 1, by hand. For that, take the solution at t = 0 and replace infinity by, say, 1000 terms:

u(0, x) = eval(rhs(u(t, x) = Sum(-4*exp(-(1/4)*Pi^2*(2*n-1)^2*t)*cos((1/2)*(2*n-1)*Pi*x)*(-1)^n/((2*n-1)*Pi), n = 1 .. infinity)), [t = 0, infinity = 1000])

u(0, x) = Sum(-4*cos((1/2)*(2*n-1)*Pi*x)*(-1)^n/((2*n-1)*Pi), n = 1 .. 1000)

(5)

Plot from -1 to 1 (the values of x in the 2nd and 3rd boundary conditions); expected: a constant segment at height 1 on the vertical axis:

plot(rhs(u(0, x) = Sum(-4*cos((1/2)*(2*n-1)*Pi*x)*(-1)^n/((2*n-1)*Pi), n = 1 .. 1000)), x = -1 .. 1)

 


 

Download PDE_4th_working_fine.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

 

  

pde := (a*y+b*x+c)*(diff(w(x, y), x))-(b*y+k*x+s)*(diff(w(x, y), y)) = 0

(a*y+b*x+c)*(diff(w(x, y), x))-(b*y+k*x+s)*(diff(w(x, y), y)) = 0

(1)

This pde admits several differential invariants. The pdsolve command finds one and proceeds to construct the solution. It is unclear to me whether you can predict, without trial and error, that there exists a differential invariant leading to a simpler solution; searching for one risks significantly complicating the process. In other words, if another program using the same (characteristic strip) method (as Mathematica does) arrives at a simpler solution, it is just by chance.

 

Having said that, here is a human-guided way to have the "simpler" solution discovered before the other ones: change variables x -> 1/x (i.e., x = 1/xi), solve the transformed pde, then change variables back.

PDEtools:-dchange({x = 1/xi, w(x, y) = W(xi, y)}, pde, [xi, W])

-(a*y+b/xi+c)*(diff(W(xi, y), xi))*xi^2-(b*y+k/xi+s)*(diff(W(xi, y), y)) = 0

(2)

pdsolve(-(a*y+b/xi+c)*(diff(W(xi, y), xi))*xi^2-(b*y+k/xi+s)*(diff(W(xi, y), y)) = 0)

W(xi, y) = _F1(-(1/2)*(a*xi^2*y^2+2*c*xi^2*y+2*b*xi*y+2*s*xi+k)/xi^2)

(3)

PDEtools:-dchange({xi = 1/x, W(xi, y) = w(x, y)}, W(xi, y) = _F1(-(1/2)*(a*xi^2*y^2+2*c*xi^2*y+2*b*xi*y+2*s*xi+k)/xi^2), known = all, [x, w], normal)

w(x, y) = _F1(-(1/2)*a*y^2-b*y*x-(1/2)*k*x^2-c*y-s*x)

(4)

pdetest(w(x, y) = _F1(-(1/2)*a*y^2-b*y*x-(1/2)*k*x^2-c*y-s*x), pde)

0

(5)



 

Download simplier_solution.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft


Maybe I am missing something ... but what you suggest already exists: it is called macro

macro(v^2 = a^2+b^2)

v^2

a^2+b^2

(1)

v := 5

5

(2)

v^2

a^2+b^2

(3)

v^3

125

(4)

macro(a*b = 3*x+5*y2)

a*b

3*x+5*y2

(5)

a, b := 1, 2

1, 2

(6)

a*b

3*x+5*y2

(7)


 

Download macro.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Remove from the problem your bc[4], written as an uncomputed limit, and compute the solution with only the first three bcs. The solution returned by pdsolve in that case depends on an arbitrary function of the summation index (displayed as _C5(n)). You then need to analyze the result ... and choose a value of _C5(n) such that the result is 0 for x going to infinity.

Alternatively, change variables x -> 1/xi (use PDEtools:-dchange), and try with xi = 0 (just write F(0, y) = 0, instead of using limit).
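Here is a runnable sketch of that second suggestion, with a toy PDE of my own standing in for yours (the names F, G and xi are also mine):

restart
with(PDEtools):
pde := diff(F(x, y), x, x) + diff(F(x, y), y, y) = 0;     # toy stand-in for your PDE
# change of variables x -> 1/xi, so that x -> infinity corresponds to xi = 0
pde_xi := dchange({x = 1/xi, F(x, y) = G(xi, y)}, pde, [xi, G], normal);
# instead of bc[4] written as limit(F(x, y), x = infinity) = 0, now simply impose
bc4 := G(0, y) = 0;
# and pass [pde_xi, <the other three transformed conditions>, bc4] to pdsolve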

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Hi
I couldn't reproduce your problem, but I anyway updated v.350 of the Physics Updates with a guard against issues with Heaviside. So, whatever this problem with Heaviside is, I imagine that with this change it will stop happening. As a reminder, these versions of the Updates work with Maple 2019 - not previous Maple releases (for Maple 2018.2, you can still install the Updates up to v.329).

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

restart

Physics:-Version()[2]

`2019, March 9, 0:38 hours, version in the MapleCloud: 348, version installed in this computer: 348.`

(1)

unassign(r, u, t)

pde := diff(u(r, t), t) = (diff(r*u(r, t), `$`(r, 2)))/r

diff(u(r, t), t) = (2*(diff(u(r, t), r))+r*(diff(diff(u(r, t), r), r)))/r

(2)

ic := u(r, 0) = 1; bc := u(1, t) = 0

u(r, 0) = 1

 

u(1, t) = 0

(3)

pdsolve([pde, ic, bc], u(r, t)) assuming t > 0

u(r, t) = (-invlaplace(sinh(s^(1/2)*r)/(sinh(s^(1/2))*s), s, t)+r)/r

(4)

Note the syntax r = [0] in the HINT:

pdsolve([pde, ic, bc], u(r, t), HINT = boundedseries(r = [0])) assuming t > 0

u(r, t) = (-invlaplace(sinh(s^(1/2)*r)/(sinh(s^(1/2))*s), s, t)+r)/r

(5)

It works with r = 0 but the flow goes through a different path

pdsolve([pde, ic, bc], u(r, t), HINT = boundedseries(r = 0)) assuming t > 0

u(r, t) = (r+invlaplace(sinh(s^(1/2)*r)*_F2(s), s, t)-invlaplace(cosh(s^(1/2)*r)/(cosh(s^(1/2))*s), s, t)-invlaplace(cosh(s^(1/2)*r)*sinh(s^(1/2))*_F2(s)/cosh(s^(1/2)), s, t))/r

(6)

I will adjust the syntax r = 0 to map into r = [0].



 

Download it_works_OK.mw

I'm not sure exactly what you intend to do, but keep in mind that Physics:-Assume wraps around assume - so you can do everything you can do with assume, including 'additionally' - but does not redefine the variables, so that the addressof is always the same. This is useful in several scenarios, and is also more intuitive than the redefinition of variables done in the background (such that x remains looking like x but is no longer equal to :-x).

Also, regarding assuming, one option of the Physics package (that you can turn ON without loading the Physics package) is Physics:-Setup(assumingusesAssume = true). After that, assuming will also use Physics:-Assume (instead of the lowercase assume), and therefore the variables used when computing with assuming will also not be redefined (as they would be when assuming uses 'assume').

All in all: this issue of the redefinition of variables is known, 'assume' is a very old command, and there is a more modern alternative free of those redefinitions. Take a look at the help page ?Physics,Assume.
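A minimal sketch of turning that assumingusesAssume option ON (the computation below is a simple example of mine, just to show the calling sequence):

restart
Physics:-Setup(assumingusesAssume = true);   # Physics does not need to be loaded for this
# from now on, assuming places its temporary assumptions through Physics:-Assume,
# so the variables involved are not redefined while the computation runs
simplify(sqrt(x^2)) assuming x > 0;          # returns x, as expected
about(x);                                    # afterwards, x carries no assumption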

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Try with(Physics), and take a look at its help pages (e.g. for Physics:-`*`). Generally speaking, the main difference between symbolic algebra and symbolic matrix algebra is that, in the latter, the product A*B is not commutative, i.e. A*B <> B*A. To compute within such a domain you can use the Physics package, set up A and B as noncommutative, and then proceed normally. A matrix function would then be A(x) as usual. Differentiation, exponentiation, and simplification taking into account specific algebra rules for the commutator [A, B] that you can set (which, as said, can be different from zero) are all implemented.
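Here is a minimal sketch of that setup; A, B, the function A(x), the commutator value c and the algebra rule are my own example, not something from your question:

restart
with(Physics):
Setup(quantumoperators = {A, B});               # declare A and B noncommutative
A*B - B*A;                                       # not simplified to 0: the product keeps its order
Setup(algebrarules = {%Commutator(A, B) = c});   # set [A, B] = c, a nonzero commutator of my choosing
Simplify(A*B - B*A);                             # now uses the algebra rule, returning c
diff(A(x)*B(x), x);                              # the product rule preserves the operator order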

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Hi JagenWu,
Change your assumption from 0 < x < m to 0 <= x < m and you get a solution (involving Piecewise and Heaviside functions) that is not u = 0. The problem with your assumption is that Dirac(x-0) = Dirac(x) and for 0 < x you have Dirac(x) = 0.
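A minimal illustration of that point (these lines are mine, not from your worksheet):

Dirac(x - 0);   # the shift by 0 is irrelevant: this is just Dirac(x)
Dirac(1);       # 0: Dirac vanishes at any nonzero point
# so under the strict assumption 0 < x the Dirac source contributes nothing and u = 0 follows,
# while with 0 <= x the point x = 0, where Dirac(x) acts, stays inside the domain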

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Hi
Whether or not you post a worksheet, you may want to take a look at the answer to this other post: Einstein tensor components of Kerr metric. In the context of Physics you can set the tensorsimplifier to whatever you want (for arbitrary metrics, tensorsimplifier = normal is recommended).

Another thing you may want to do is to use Physics:-KillingVectors with the options output = equations and integrabilityconditions = false, and then analyze the best strategy for tackling the resulting PDE system using the symmetry commands of PDEtools (mainly InvariantSolutions, exploring its options for the infinitesimals). You can also check the dimension of the system (i.e. the number of independent KillingVectors that exist) before and without having to compute them. Also, you can explore using different differential-elimination engines (RIF, diffalg and DifferentialThomas) when tackling that PDE system.

There are indeed several things you can do.
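Here is a runnable sketch of those suggestions, using the Schwarzschild metric only as a stand-in for yours (the vector name V and the example metric are my choices):

restart
with(Physics):
Setup(coordinates = spherical, tensorsimplifier = normal):   # normal, as recommended above
g_[sc];                                                       # Schwarzschild, standing in for your metric
KillingVectors(V, output = equations);                        # the determining PDE system for V[mu]
KillingVectors(V, output = equations, integrabilityconditions = false);
# these systems can then be handed to the symmetry commands of PDEtools (mainly
# InvariantSolutions) or tackled with a different differential-elimination engine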

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft.

Hi Escorpsy,

Using Maple 2019 I get a result - no hangs, as shown below. Perhaps more importantly (take a look at the help page for Physics:-Setup), keep in mind that simplification matters are tricky, in some sense more art than exact science. The default tensor simplifier aims at doing a good job in a wide range of algebraic situations, but there is no single best alternative for all cases. For that reason, the design is such that you can set it to anything else more suitable for your problem if you prefer; below you see how to play around with that.

 

with(Physics); Setup(dimension = 4, coordinates = (X = [t, r, theta, phi]), metric = -((r + a^2 - 2*m*r)/(r^2 + a^2*cos(theta)^2) - sin(theta)^2*a^2/(r^2 + a^2*cos(theta)^2))*dt^2 + (r^2 + a^2*cos(theta)^2)*dr^2 + (r^2 + a^2*cos(theta)^2)*dtheta^2 + (((r^2 + a^2)^2*sin(theta)^2 - a^2*(r^2 + a^2)*sin(theta)^4)/(r^2 + a^2*cos(theta)^2))*dphi^2 - (4*r*m*a*sin(theta)^2/(r^2 + a^2*cos(theta)^2))*dphi*dtheta, quiet)

g_[]

Physics:-g_[mu, nu] = Matrix(%id = 18446744078650807950)

(1)

First, just let it run. On my computer I get a result in ~90 sec.

time(); simplified := Einstein[1, 1]; time_consumed = (time()-`%%`)*sec

time_consumed = 91.275*sec

(2)

Check the tensor simplifier being used: it is a wide-range cocktail of simplifications called Physics:-TensorSimplifier.

Setup(tensorsimplifier)

[tensorsimplifier = Physics:-TensorSimplifier]

(3)

Set it to normal (i.e., just the very basic simplification)

Setup(tensorsimplifier = normal)

[tensorsimplifier = normal]

(4)

time(); unsimplified := Einstein[1, 1]; time_consumed = (time()-`%%`)*sec

time_consumed = 2.265*sec

(5)

:) :) :)  

 

But the lengths are

length(simplified) < length(unsimplified)

3560 < 38446

(6)

:( :(

Anyway, you can simplify after computing, to get

simplify(unsimplified)

(a^2*cos(theta)^2+(-2*m+1)*r)*((a^20+(2*r^2-1)*a^18+a^16*r^4)*cos(theta)^16+(a^6+(12*r^2-1)*a^4+(21*r^4-10*r^2)*a^2+10*r^6-r^4)*a^14*cos(theta)^14+5*(a^6*r^2+(10*r^4-r^2)*a^4+(17*r^6+4*r^2*m^2-(38/5)*r^4-(3/5)*m^2)*a^2+8*r^8+4*m^2*r^4-r^6+(1/5)*r^2*m^2)*a^12*cos(theta)^12+9*a^10*(a^6*r^4+((104/9)*r^6-(32/9)*r^2*m^2-r^4+(1/3)*m^2)*a^4+((181/9)*r^8+(44/9)*m^2*r^4-(74/9)*r^6+(28/9)*r^2*m^2)*a^2+(86/9)*r^10+(76/9)*m^2*r^6-r^8+(25/9)*m^2*r^4)*cos(theta)^10+8*r^2*a^8*(((5/8)*r^4+m^2)*a^6+(15*r^6-(29/2)*r^2*m^2-(5/8)*r^4-(29/8)*m^2)*a^4+((225/8)*r^8-2*m^2*r^4-10*r^6+(57/8)*r^2*m^2)*a^2+(55/4)*r^10+(27/2)*m^2*r^6-(5/8)*r^8+(13/4)*m^2*r^4+(1/2)*m^4)*cos(theta)^8+28*r^2*((-(5/28)*r^6+r^2*m^2)*a^6+((19/7)*r^8-(39/7)*m^2*r^4+(5/28)*r^6-(41/14)*r^2*m^2)*a^4+((167/28)*r^10-(29/7)*m^2*r^6-(23/14)*r^8+(4/7)*m^2*r^4-(2/7)*m^4)*a^2+(26/7)*((43/52)*r^10+(17/26)*m^2*r^6+(5/104)*r^8-(31/52)*m^2*r^4+m^4)*r^2)*a^6*cos(theta)^6+36*((-(1/4)*r^8+m^2*r^4)*a^6+((11/18)*r^10-(23/9)*m^2*r^6+(1/4)*r^8-(7/6)*m^2*r^4+(1/9)*m^4)*a^4+((71/36)*r^12-(28/9)*m^2*r^8-(5/18)*r^10+(31/36)*m^2*r^6-(52/9)*m^4*r^2)*a^2-(11/9)*(-(10/11)*r^10-(4/11)*m^2*r^6-(9/44)*r^8+(107/44)*m^2*r^4+m^4)*r^4)*r^2*a^4*cos(theta)^4+20*r^4*((-(1/4)*r^8+m^2*r^4)*a^6+(-m^2*r^6+(1/4)*r^8+(31/20)*m^2*r^4+(26/5)*m^4)*a^4+((3/4)*r^12-2*m^2*r^8+(1/10)*r^10+(21/5)*m^2*r^6+(22/5)*m^4*r^2)*a^2+(1/2)*r^14+(1/4)*r^12-(43/20)*m^2*r^8)*a^2*cos(theta)^2+4*r^6*((-(1/4)*r^8+m^2*r^4)*a^6+(-(1/4)*r^10+(1/4)*r^8+(23/4)*m^2*r^4-11*m^4)*a^4-(-(1/4)*r^6+r^2*m^2-(1/4)*r^4-(43/4)*m^2)*r^6*a^2+(1/4)*r^12*(r^2+1)))/(((a^8+a^6*r^2)*cos(theta)^6+(3*a^6*r^2+3*a^4*r^4)*cos(theta)^4+(3*a^4*r^4+(3*r^6+4*m^2*r^2)*a^2)*cos(theta)^2+(r^6-4*m^2*r^2)*a^2+r^8)^2*(r^2+a^2*cos(theta)^2)^4)

(7)

length(%)

2270

(8)

Playing around with the simplifier can get you the best result in different situations. Here is another option, apparently the best one for your problem, which involves no radicals and only trig functions:

Setup(tensorsimplifier = (u -> simplify(u, trig)))

[tensorsimplifier = (u -> simplify(u, trig))]

(9)

time(); simplifiedtrig := Einstein[1, 1]; time_consumed = (time()-`%%`)*sec; length(simplifiedtrig)

time_consumed = 18.744*sec

 

2270

(10)



 

Download Kerr_-_tensorsimplifier_(reviewed).mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

The command that computes the characteristic strip for 1st order PDEs is PDEtools:-charstrip. Take a look at its help page. The command for plotting the PDE solution following the characteristics is PDEtools:-PDEplot.
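For example, here is a minimal sketch with a first order PDE of my own (not from your question):

restart
with(PDEtools):
pde := x*diff(u(x, y), x) + y*diff(u(x, y), y) = u(x, y);
charstrip(pde, u(x, y));           # the characteristic strip: an ODE system in the strip variables
# plot the solution surface built from the characteristics through the initial curve x = t, y = 1, u = t
PDEplot(pde, [t, 1, t], t = -2 .. 2);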

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft


 

Hi Carl, Alaloush,

with(PDEtools)

declare(m(t, x), u(t, x))

m(t, x) will now be displayed as m

u(t, x) will now be displayed as u

(1)

pde := diff(m(t, x), t)+diff((u(t, x)^2-(diff(u(t, x), x))^2)*m(t, x), x) = 0

diff(m(t, x), t)+(2*u(t, x)*(diff(u(t, x), x))-2*(diff(u(t, x), x))*(diff(diff(u(t, x), x), x)))*m(t, x)+(u(t, x)^2-(diff(u(t, x), x))^2)*(diff(m(t, x), x)) = 0

(2)

The first substitution, m(t, x) = u(t, x)-(diff(u(t, x), x, x)), is not a change of variables but only a substitution

m(t, x) = u(t, x)-(diff(u(t, x), x, x))

m(t, x) = u(t, x)-(diff(diff(u(t, x), x), x))

(3)

eval(pde, m(t, x) = u(t, x)-(diff(u(t, x), x, x)))

diff(u(t, x), t)-(diff(diff(diff(u(t, x), t), x), x))+(2*u(t, x)*(diff(u(t, x), x))-2*(diff(u(t, x), x))*(diff(diff(u(t, x), x), x)))*(u(t, x)-(diff(diff(u(t, x), x), x)))+(u(t, x)^2-(diff(u(t, x), x))^2)*(diff(u(t, x), x)-(diff(diff(diff(u(t, x), x), x), x))) = 0

(4)

The next one, y = -a*t+x, is a change of variables. Since you don't say how you are transforming the dependent variable u(t, x), I'll assume u(t, x) = upsilon(tau, y). So this is how you use dchange to perform such a transformation

declare(upsilon(tau, y))

upsilon(tau, y) will now be displayed as upsilon

(5)

tr := {t = tau, x = a*tau+y, u(t, x) = upsilon(tau, y)}

{t = tau, x = a*tau+y, u(t, x) = upsilon(tau, y)}

(6)

This is the call to dchange

dchange(tr, diff(u(t, x), t)-(diff(diff(diff(u(t, x), t), x), x))+(2*u(t, x)*(diff(u(t, x), x))-2*(diff(u(t, x), x))*(diff(diff(u(t, x), x), x)))*(u(t, x)-(diff(diff(u(t, x), x), x)))+(u(t, x)^2-(diff(u(t, x), x))^2)*(diff(u(t, x), x)-(diff(diff(diff(u(t, x), x), x), x))) = 0, [tau, y, upsilon(y, tau)])

diff(upsilon(tau, y), tau)-(diff(upsilon(tau, y), y))*a-(diff(diff(diff(upsilon(tau, y), tau), y), y))+(diff(diff(diff(upsilon(tau, y), y), y), y))*a+(2*upsilon(tau, y)*(diff(upsilon(tau, y), y))-2*(diff(upsilon(tau, y), y))*(diff(diff(upsilon(tau, y), y), y)))*(upsilon(tau, y)-(diff(diff(upsilon(tau, y), y), y)))+(upsilon(tau, y)^2-(diff(upsilon(tau, y), y))^2)*(diff(upsilon(tau, y), y)-(diff(diff(diff(upsilon(tau, y), y), y), y))) = 0

(7)

Note however that the resulting PDE has derivatives with respect to tau and therefore the transformation you show (including the equation I guessed for the dependent variable) does not reduce the number of independent variables.

What Carl did, which I reproduce here, implicitly assumes u(t, x) = upsilon(y), which amounts to just removing one variable from the problem by hand. In this example it works, for no evident reason (the reason is shown further below).

So this is Carl's input/output

PDE1 := diff(m(t, x), t)+diff((u(t, x)^2-(diff(u(t, x), x))^2)*m(t, x), x) = 0; PDE2 := eval(PDE1, m(t, x) = u(t, x)-(diff(u(t, x), x, x))); ODE := convert(simplify(eval(PDE2, u(t, x) = U(-a*t+x)), {-a*t+x = y}), diff)

diff(m(t, x), t)+(2*u(t, x)*(diff(u(t, x), x))-2*(diff(u(t, x), x))*(diff(diff(u(t, x), x), x)))*m(t, x)+(u(t, x)^2-(diff(u(t, x), x))^2)*(diff(m(t, x), x)) = 0

 

diff(u(t, x), t)-(diff(diff(diff(u(t, x), t), x), x))+(2*u(t, x)*(diff(u(t, x), x))-2*(diff(u(t, x), x))*(diff(diff(u(t, x), x), x)))*(u(t, x)-(diff(diff(u(t, x), x), x)))+(u(t, x)^2-(diff(u(t, x), x))^2)*(diff(u(t, x), x)-(diff(diff(diff(u(t, x), x), x), x))) = 0

 

-(diff(U(y), y))^3+(diff(U(y), y))^2*(diff(diff(diff(U(y), y), y), y))+(3*U(y)^2-4*(diff(diff(U(y), y), y))*U(y)+2*(diff(diff(U(y), y), y))^2-a)*(diff(U(y), y))+(diff(diff(diff(U(y), y), y), y))*(-U(y)^2+a) = 0

(8)

dsolve(ODE)

U(y) = _C1, Intat(1/(_a^2-a-(-4*_C1*_a+a^2+4*_C2)^(1/2))^(1/2), _a = U(y))-y-_C3 = 0, Intat(1/(_a^2-a+(-4*_C1*_a+a^2+4*_C2)^(1/2))^(1/2), _a = U(y))-y-_C3 = 0, Intat(-1/(_a^2-a-(-4*_C1*_a+a^2+4*_C2)^(1/2))^(1/2), _a = U(y))-y-_C3 = 0, Intat(-1/(_a^2-a+(-4*_C1*_a+a^2+4*_C2)^(1/2))^(1/2), _a = U(y))-y-_C3 = 0

(9)

You now reverse the substitution and you get

subs(y = -a*t+x, U(-a*t+x) = u(t, x), (U(y) = _C1, Intat(1/(_a^2-a-(-4*_C1*_a+a^2+4*_C2)^(1/2))^(1/2), _a = U(y))-y-_C3 = 0, Intat(1/(_a^2-a+(-4*_C1*_a+a^2+4*_C2)^(1/2))^(1/2), _a = U(y))-y-_C3 = 0, Intat(-1/(_a^2-a-(-4*_C1*_a+a^2+4*_C2)^(1/2))^(1/2), _a = U(y))-y-_C3 = 0, Intat(-1/(_a^2-a+(-4*_C1*_a+a^2+4*_C2)^(1/2))^(1/2), _a = U(y))-y-_C3 = 0)[2])

Intat(1/(_a^2-a-(-4*_C1*_a+a^2+4*_C2)^(1/2))^(1/2), _a = u(t, x))+a*t-x-_C3 = 0

(10)

pdetest(Intat(1/(_a^2-a-(-4*_C1*_a+a^2+4*_C2)^(1/2))^(1/2), _a = u(t, x))+a*t-x-_C3 = 0, PDE2)

0

(11)

Note, however, that what you did, Carl, amounts to actually (and only) removing one variable by hand. For example, forget about -a*t+x = y:

eval(PDE2, u(t, x) = U(x))

(2*U(x)*(diff(U(x), x))-2*(diff(U(x), x))*(diff(diff(U(x), x), x)))*(U(x)-(diff(diff(U(x), x), x)))+(U(x)^2-(diff(U(x), x))^2)*(diff(U(x), x)-(diff(diff(diff(U(x), x), x), x))) = 0

(12)

The above is also an ODE (there is no t around), and this happens regardless of -a*t+x = y; there is no need for it. Also, if the idea is to use u(t, x) = upsilon(y) instead of u(t, x) = upsilon(tau, y), then the answer to your question on how to do it using dchange (instead of simplify/siderels as you did, Carl) is

tr__2 := {t = tau, x = a*tau+y, u(t, x) = upsilon(y)}

{t = tau, x = a*tau+y, u(t, x) = upsilon(y)}

(13)

dchange(tr__2, diff(u(t, x), t)-(diff(diff(diff(u(t, x), t), x), x))+(2*u(t, x)*(diff(u(t, x), x))-2*(diff(u(t, x), x))*(diff(diff(u(t, x), x), x)))*(u(t, x)-(diff(diff(u(t, x), x), x)))+(u(t, x)^2-(diff(u(t, x), x))^2)*(diff(u(t, x), x)-(diff(diff(diff(u(t, x), x), x), x))) = 0, [tau, y, upsilon(y)])

-(diff(upsilon(y), y))*a+(diff(diff(diff(upsilon(y), y), y), y))*a+(2*upsilon(y)*(diff(upsilon(y), y))-2*(diff(upsilon(y), y))*(diff(diff(upsilon(y), y), y)))*(upsilon(y)-(diff(diff(upsilon(y), y), y)))+(upsilon(y)^2-(diff(upsilon(y), y))^2)*(diff(upsilon(y), y)-(diff(diff(diff(upsilon(y), y), y), y))) = 0

(14)

And there you are with your ODE

dsolve(-(diff(upsilon(y), y))*a+(diff(diff(diff(upsilon(y), y), y), y))*a+(2*upsilon(y)*(diff(upsilon(y), y))-2*(diff(upsilon(y), y))*(diff(diff(upsilon(y), y), y)))*(upsilon(y)-(diff(diff(upsilon(y), y), y)))+(upsilon(y)^2-(diff(upsilon(y), y))^2)*(diff(upsilon(y), y)-(diff(diff(diff(upsilon(y), y), y), y))) = 0)

upsilon(y) = _C1, Intat(1/(_a^2-a-(-4*_C1*_a+a^2+4*_C2)^(1/2))^(1/2), _a = upsilon(y))-y-_C3 = 0, Intat(1/(_a^2-a+(-4*_C1*_a+a^2+4*_C2)^(1/2))^(1/2), _a = upsilon(y))-y-_C3 = 0, Intat(-1/(_a^2-a-(-4*_C1*_a+a^2+4*_C2)^(1/2))^(1/2), _a = upsilon(y))-y-_C3 = 0, Intat(-1/(_a^2-a+(-4*_C1*_a+a^2+4*_C2)^(1/2))^(1/2), _a = upsilon(y))-y-_C3 = 0

(15)


You see this solution is the same as the one I computed in (10) for your ODE.

All OK, but how is it that simply removing one variable by hand reduces the number of independent variables? The interesting question for me is, actually, how do you unveil the possible transformations that reduce the number of independent variables.

The answer is in InvariantSolutions. It is one of the most powerful and advanced commands in the Maple library. In brief, InvariantSolutions returns a sequence of lists of the form [{inverse_transformation}, {transformation}], where transformation involves equations for the independent and dependent variables (in this case there is only one PDE, but the command works for PDE systems as well). Independent variables that do not change are not mentioned.

So these are the transformations you can actually use to reduce the number of independent variables (remember that u(t, x) is being displayed as u)

TR := InvariantSolutions(PDE2, onlythetransformation)

[{_t1 = t, _u1(_t1) = u(t, x)}, {t = _t1, u(t, x) = _u1(_t1)}], [{_t1 = x, _u1(_t1) = u(t, x)}, {x = _t1, u(t, x) = _u1(_t1)}], [{_t1 = x, _t2 = t, _u1(_t1) = u(t, x)*t^(1/2)}, {t = _t2, x = _t1, u(t, x) = _u1(_t1)/_t2^(1/2)}]

(16)

The first and second ones are the same as removing one of the variables by hand, which happens to work for this particular PDE (and not at all in general).

These transformations are derived from the symmetries of PDE2. To see the infinitesimals of the corresponding symmetry transformations, use Infinitesimals

Infinitesimals(pde, display = false)

`* Partial match of  'display' against keyword 'displayfunctionality'`

 

[_xi[t] = _F1(t), _xi[x] = 1, _eta[m] = 0, _eta[u] = -(1/2)*(diff(_F1(t), t))*u], [_xi[t] = _F2(t), _xi[x] = 0, _eta[m] = m, _eta[u] = -(1/2)*(diff(_F2(t), t))*u]

(17)

In the above you also see an arbitrary function _F1, which means there are more transformations than just those shown in (16); they are obtained by (automatically) specializing these arbitrary functions.


So let's see how you use dchange to transform the variables using these transformations TR, actually diminishing the number of independent variables of PDE2 in a way that is systematic and whose origin is understandable.

subs(_t1 = y, _t2 = tau, _u1 = upsilon, TR[1][2])

{t = y, u(t, x) = upsilon(y)}

(18)

dchange({t = y, u(t, x) = upsilon(y)}, PDE2)

diff(upsilon(y), y) = 0

(19)

The second transformation in TR is equivalent to what you did, Carl

subs(_t1 = y, _t2 = tau, _u1 = upsilon, TR[2][2])

{x = y, u(t, x) = upsilon(y)}

(20)

dchange({x = y, u(t, x) = upsilon(y)}, PDE2)

(2*upsilon(y)*(diff(upsilon(y), y))-2*(diff(upsilon(y), y))*(diff(diff(upsilon(y), y), y)))*(upsilon(y)-(diff(diff(upsilon(y), y), y)))+(upsilon(y)^2-(diff(upsilon(y), y))^2)*(diff(upsilon(y), y)-(diff(diff(diff(upsilon(y), y), y), y))) = 0

(21)

subs(_t1 = y, _t2 = tau, _u1 = upsilon, TR[3][2])

{t = tau, x = y, u(t, x) = upsilon(y)/tau^(1/2)}

(22)

dchange({t = tau, x = y, u(t, x) = upsilon(y)/tau^(1/2)}, PDE2, simplify)

(1/2)*(6*upsilon(y)^2*(diff(upsilon(y), y))-2*upsilon(y)^2*(diff(diff(diff(upsilon(y), y), y), y))-8*upsilon(y)*(diff(upsilon(y), y))*(diff(diff(upsilon(y), y), y))-2*(diff(upsilon(y), y))^3+2*(diff(upsilon(y), y))^2*(diff(diff(diff(upsilon(y), y), y), y))+4*(diff(upsilon(y), y))*(diff(diff(upsilon(y), y), y))^2-upsilon(y)+diff(diff(upsilon(y), y), y))/tau^(3/2) = 0

(23)

In the three cases you get an ODE, and each case leads to a different solution of PDE2. Equation (23) seems to depend on tau but it doesn't: tau only occurs in the overall denominator, and the numerator is free of tau.



 

Download changes_of_variables_and_reduction_of_their_number.mw

 

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft
