ecterrab


MaplePrimes Activity


These are replies submitted by ecterrab

@tomleslie 

Describing how Maple's dsolve works regarding integration constants is not rocket science, and that is what you describe. What is also not rocket science, but not convincing to me, is to say that there "may be a bug", as in "this or that command may be reusing constants the wrong way", without showing a single example of any sort after perhaps 15 replies. Anyway, I need to move on to other tasks, so I am leaving this conversation for now.

Best

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@John Fredsted 

Hi, just to say that I am pretty easy to convince, but you need to show an example. In this whole thread, with so many replies, I've seen not one! I'm also aware that bugs exist in software, of course, but again: you need to show at least one example. Without that example, the claim that "there may be a bug" is empty (no offense meant, please).

Best

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

 @tomleslie 

What you show is the opposite of what was mentioned, i.e. PDEtools:-Solve is not reusing an integration constant in a way that makes the solution wrong or incomplete.

As for the idea conveyed by your post, this is it: the integration constant of a 1st order ODE is _C1 (unless someone assigned it, or unless it is itself found in the given ODE). So if you send 1st order ODEs one at a time, the integration constant is always _C1. It was this way in 1996 when I wrote a new (the current) dsolve, and because I find this the correct design I kept it in place. The _C1 is an arbitrary constant: it can have any value in the first solution returned, and any other value in another solution returned some lines later.

Now if you send two of them in one go, as a system of equations, think a bit: would it be correct to return _C1 for both? No. Why? Because, within a single solution, the value of _C1 is arbitrary in itself but must be the same in both parts of that single solution, so reusing _C1 would give "a particular case" (the two integration constants equal to each other), not the actual general solution (the two integration constants independent of each other).
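As a minimal sketch of the behaviour described above (the specific ODEs are toy examples I chose, not from the thread):

```maple
# Two 1st order ODEs sent one at a time: each solution carries _C1,
# because within each separate solution the constant is arbitrary
dsolve(diff(y(x), x) = y(x));         # y(x) = _C1*exp(x)
dsolve(diff(z(x), x) = 2*z(x));       # z(x) = _C1*exp(2*x)

# The same two ODEs sent as one system: now the constants must be
# distinct within the single solution, so dsolve releases _C1 and _C2
dsolve({diff(y(x), x) = y(x), diff(z(x), x) = 2*z(x)});
```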

As for the consistency issue mentioned by Preben, I have two comments. First, we know that mathematical software keeps evolving; some design decisions made on day 1 are not ideal on day 10. We keep them in place nevertheless, mainly to not complicate the life of people who got used to that design and have work based on it. But there is another thing here: _C1 is completely arbitrary. Change it to 1/_C1 and the solution is still a solution. Change it to f(a, b, c) (not depending on the independent variable of the ODE) and the solution is still a correct solution. On the other hand, _Zn is not as arbitrary: change it to _Zn / 2 and the solution returned by solve is not a solution anymore. Although I can see how annoying it is to receive a different _Zn each time you call solve (even with the same equation!), the situation is not entirely the same as that of the integration constants _Cn returned by dsolve.
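A small sketch of that difference, using toy equations of my own (the _Z1 may display with an assumption tilde, _Z1~, depending on the release):

```maple
# _C1 from dsolve is fully arbitrary: replace it by 1/_C1 and odetest
# still returns 0, i.e. it is still a solution
ode := diff(y(x), x) = y(x):
sol := dsolve(ode):                    # y(x) = _C1*exp(x)
odetest(eval(sol, _C1 = 1/_C1), ode);  # 0

# _Z1 from solve is an integer parameter, not an arbitrary constant
solve(sin(x) = 0, x, allsolutions);    # Pi*_Z1
# replacing _Z1 by _Z1/2 would no longer give zeros of sin
# for all integer values of _Z1
```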

Having said that, for instance in the new pdsolve & BC code (check it out in Maple 2017.2), the summation dummy is no longer of the form _Zn (with a different _Zn released by solve each time) but is now uniformly equal to 'n', or to 'k' if n is itself found in the PDE, or to another letter if 'k' is also there. In fact, it is not difficult to tweak PDEtools:-Solve to return the same _Zn at least when the same equation is entered ... (at the cost of PDEtools:-Solve not returning exactly what solve returns, since solve releases a new _Zn each time).

Anyway, just to say that if you, John, think that Tom Leslie summarized your problem, then my opinion here is that there is no problem, really. Having dsolve release a different _C1 each time a 1st order ODE is passed would only complicate life, and there is no "erroneous reuse of integration constants" by PDEtools:-Solve at all.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@John Fredsted 

To reproduce a problem like the one you describe, it may be sufficient to post a worksheet containing, only, the two differential equation systems.


Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@John Fredsted 

g_[~mu,nu] is used in physics all the time, in textbooks, papers and calculations by hand. Forcing it to be displayed as KroneckerDelta would be artificial, for me at least. The same goes for KroneckerDelta[a, b] or KroneckerDelta[mu, nu]; I see it written as such here and there. You then raise and lower indices in the Kronecker delta the same way, contracting with the metric. And the components are also well defined, as shown above for the (1,1), (2,0) and (0,2) cases; nothing illegal.

Anyway, just to say that I prefer things the way they are. People actually working with the package also use both kd_ and g_ normally, with both input and output as expected, all OK, except for the help page I mentioned, which needs rewriting.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@nm 

It is all about the method used. Try the userinfo messages and you will see:
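For instance, a minimal sketch (my own toy ODE; the exact userinfo lines printed depend on the equation and the Maple release):

```maple
# Raise the info level for dsolve so it reports the methods it tries
infolevel[dsolve] := 3:
dsolve(diff(y(x), x) = x/y(x));
# The userinfo output lists the classification methods attempted,
# showing e.g. that the 'separable' method is the one that succeeds
```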

@Rouben Rostamian  

Yes, that is another way to put it. Check the documentation; it is mentioned in ?dsolve,details that reversing expressions involving fractional powers has these problems, so we can tell "in advance" that what you get by doing that is not correct all around. The behaviour observed is intentional, programmed to be that way.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft.

@J F Ogilvie 

I still think that special functions are not the topic of this post, but you want to comment on them and on different aspects of Maple development anyway; OK. My comments are that some of the things you say could start a separate, valuable blog/exchange, but some others look inaccurate to me.

First, topics not related to this post but that I find interesting, worth an exchange of ideas.

  • All computer algebra systems (CAS) nowadays have very advanced mathematics/physics functionality.

Why? Because modern CAS are no longer "just a computational research aid" but have become "true learning environments". Within a CAS worksheet, difficult stuff is - just - easy, fun for real. One can concentrate on learning and working with the concepts at the same time, while leaving the algebraic difficulty to the computer. For me, bringing otherwise difficult and advanced material to the tip of my fingers is not only exciting but changes the research and learning game entirely. Some good examples of this are:

  1. The FunctionAdvisor brought a myriad of special functions so close to everybody that we all now feel like special function experts :)
  2. The symmetry methods implemented in PDEtools and DEtools also brought the Lie symmetry theory unbelievably close to everybody. I truly think Lie methods are nowadays so much more understood by many, just because of this fully detailed CAS implementation.
  3. The tensor and general relativity routines of the Physics package, that you mention, also brought this otherwise very difficult area close to everybody who is curious about it, not just physics students. This Maple package indeed allows for studying general relativity with incredible ease, e.g. following any textbook reading and reproducing results on a Maple worksheet, with all the heavy algebra performed by the computer. I recall that general relativity was one of the first motivations people had to write the first computer algebra systems (e.g REDUCE).
  4. Special functions are one more example of the same: CAS are making them surprisingly accessible. It took only a few CAS years for the Gauss 2F1 function (mentioned in Abramowitz and Stegun (A&S)) to be trivially generalized to the pFq hypergeometric function (not mentioned in A&S), for the Heun functions to become a solid part of the mathematical language (to the point where they are now mentioned in NIST), and now the doubly hypergeometric Appell functions, F1 to F4 (also part of NIST; and in Maple we implemented not just a few, as you say, but all the Appell functions - there are no more than 4). Appell functions are popping up in the literature more and more by the day. Check in Google and you will be surprised to discover that during 2016 the number of hits for Appell functions is approximately 10 times that for Lame functions.
  5. The developments Maple is bringing to the computer about PDEs & Boundary Conditions is yet one more example of the same: difficult knowledge becoming available to everybody, the same I could say about the DifferentialGeometry package, etc.
  6. Last but not least, the whole CAS benefits enormously from the side-developments necessary to implement this more advanced material. As an example, take the developments that happened in the numerical evaluation of special functions in Maple 2017, only due to the requirements for the implementation of Appell functions. We don't advertise all of this, but it is there, and among the advertised things there is this new MathematicalFunctions:-Evalf package.

I frankly see all these as wonderful developments. They change not just the research but also the learning game, concretely and for good, merging the two things. It is difficult for me to understand how you can call any of these "irrelevant distractions". It seems to me you have a different vision of what a CAS could be for.

Summarizing:

  • We bring into Maple otherwise very complicated knowledge close to everybody, and in that way we play a bold role in both popularizing and helping develop otherwise difficult areas of knowledge.
  • We bring, into the Maple CAS, knowledge that is popping up more and more in the current scientific literature (that is changing now faster than ever), and in this way we participate in the wave of expansion of knowledge that is happening today.
  • The developments in Maple 2017 are not restricted to General Relativity - a relevant area of Physics - but also include Particle Physics, Partial Differential Equations with Boundary Conditions, and Special Functions, not restricted to the new - exciting and increasingly relevant - Appell functions.

Next are things that you say which look inaccurate to me.

You say “there is much more than general relativity in physics”. But the Maple Physics package is not only, nor mainly, about general relativity: vector analysis and quantum mechanics are well-developed fundamental parts of this package - see the links in my previous reply. There is also thorough educational material developed for the package, the mini-course for physicists. Physics actually covers most of what you study from the 1st year of undergrad to what you see in a master's course in Physics.

Then you say: “Meanwhile, whilst all the attention has been devoted to general relativity, other aspects of Physics and more general mathematical applications have been neglected.” The facts, for me, are that no aspects of Physics have been neglected; the Maple Physics project just advances by some significant amount each release, here and there. And from what I see as new in Maple 2017, there are also relevant developments in the special functions and differential equations areas.

You say “Even only a small fraction of physicists will ever use this [general relativity] functionality”. But anyone googling for ‘ “General relativity” physics “2016” ’ (2.5 million hits), and then for ‘ “Lame functions” mathematics “2016” ’ (750 hits), could conclude the opposite of what you are suggesting in your replies: the number of people interested in Lame functions is, frankly speaking, rather small compared with the number interested in general relativity.

You say “what has happened to the long urged inclusion of all of Abramowitz and Stegun into Maple”. I hope you don’t take me wrong, but it is my impression that, of the special functions documented by A & S in 1950, about what was relevant in 1950, basically only the spheroidal wave and the Lame functions are missing in Maple. I do think the spheroidal ones are worth implementing. Having said that, even regarding the Appell functions, appearing increasingly in the literature during the last 5 to 10 years, I have heard more people asking about them than about the spheroidal wave functions. Appell functions, like Heun functions, also have a tremendous generality, enlarging the mathematical language - and so our ability to express concepts - much more than the spheroidal wave or Lame functions do.

You say “still other [functions] are incompletely installed, with poor or no transformations to included functions that run at better than a snail's pace”. From your replies as a whole, you seem to be referring to the Heun functions. These functions are neither incompletely installed nor do they have “poor or no transformations” included. The Heun functions just happen to be much more general than the rest of the functions of the mathematical language, so they can be expressed in terms of the other functions only in very few cases, and for those cases the transformations are in fact provided in Maple. Try, for instance, FunctionAdvisor(specialize, HeunG) and you will see; change HeunG to HeunC or any of the other Heun functions and you will see more.

By the way, regarding this style of communication you use, with words of disqualifying connotation - “negligence, snail speed, poor, irrelevant, distraction, etc.” - I can see that other people, as happened above in this thread, can find it harsh or even disrespectful. Generally speaking, I believe that to promote changes you can: a) try to make people feel ashamed in public about their work, thinking that this shame will make them do the developments you want, or b) try to excite people about their work, honestly showing how much advancement and potential there is in their developments, and how some bits more here and there would complement their work beautifully.

And then, this is important, I feel that more than anything else: accept that, regardless of your communication style, people may differ from you. Think tolerance. Tolerance also for whatever you perceive as imperfection. This is one of the most important things. The other that springs to my mind is to stick to the facts: make your points based only on the facts. I remember Russell saying these two things - that great British mathematician who was also a philosopher. Since then his words have come to my mind repeatedly.

I hope these comments enrich the debate you are trying to start on these other topics, beyond my original intention of describing what I think are rather exciting novelties in the Physics package for Maple 2017.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

 

@J F Ogilvie 

The developments in the Physics package for Maple 2017 were not restricted to General Relativity and Particle Physics.

Also, as people using the package know, Physics has basic and advanced functionality for Quantum Mechanics too since it entered the Maple library, including a full implementation of anticommutative and noncommutative operators and functions, related operations (including functional differentiation), Commutators, Anticommutators, Creation and Annihilation operator commands, pre-defined and customizable algebra rules, a whole implementation of Dirac notation for vector calculus on a space of quantum physical states, … to mention but a few; the list of functionality available is really large.
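As a small sketch of the kind of quantum-mechanics functionality just listed (the operators A and B are toy names of my own, not from the original post):

```maple
with(Physics):
# Tell the package that A and B are noncommutative quantum operators
Setup(quantumoperators = {A, B}):
# The commutator A*B - B*A is now a well-defined algebraic object
Commutator(A, B);
# Customizable algebra rules: impose [A, B] = I as the algebra
Setup(algebrarules = {%Commutator(A, B) = I}):
# Products of A and B are now simplified using that rule
Simplify(A*B - B*A);
```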

By the way, we are able to advance General Relativity and Particle Physics using computer algebra in Maple 2017 only because the basic functionality is already in place, and, as is the case in science in general, it is the simultaneous development of both - basic and advanced - that makes this package evolve so fantastically.

To stick to the facts, it is useful to point to some application examples on Quantum Mechanics posted in this Mapleprimes forum in the past, developed using the same Physics package that recently implemented this most thorough digital database for Exact Solutions to Einstein's equations in existence and now the ThreePlusOne package towards Numerical Relativity:

The following link is also interesting because it shows a balanced set of applications in different physics areas; its section on Quantum Mechanics also features a subsection on using the Physics package to develop proofs about properties of operations between quantum operators, something I don't recall having seen before in any computer algebra system, commercial or brewed at universities.

For completeness, this other link, to a mini-course on computer algebra for physicists, hits the Education aspect and is somewhat ambitious, in that it shows an entry point to using such a wide-range-of-areas package as Physics is, while at the same time being a compact tutorial for people who - simply put - have never used computer algebra.

So, strictly sticking to the facts, I'd say that Maple's Physics is rather thorough regarding the areas of physics it covers, and for which it provides both basic and advanced functionality, as opposed to what you say, that "in Maple Physics, all the attention has been devoted to general relativity" and "other aspects of physics have been neglected".

The rest of your reply is about special functions. There, too, I think Maple is pretty strong, thorough, and with wide - unique - functionality: from the FunctionAdvisor to the conversion network for mathematical functions, the package for Symbolic Sequences, and the level of development of symbolic differentiation; the list is large ... and there are also very interesting new special functions and related developments in Maple 2017. But all this is not the topic of this post, so I'll not comment here.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@John Fredsted 

After fixing the "too stringent type checking on the input", the command works as expected (until further improvement): it displays an error message saying why it cannot perform the computation. That is why the message reads "unable to handle <this>" - a programmed message from the program to the user. As I said, I am also rather busy nowadays, but improving the code to get past this interruption looks easy a priori ... I may dedicate some time to this next week.

As for the other error, in simplify/do, that you show: before using a command, or before posting about an error that results from using it, it is convenient to look at its help page, for instance to see whether you are using it correctly. Your use, "PerformOnAnticommutativeSystem(PDEtools:-ConservedCurrents(pde));", is incorrect and not according to what the help page says.

PS: I have no idea who moved your reply.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@deniscr 
 

restart

with(Physics)

This is your line element / metric

`#msup(mi("ds"),mn("2"))` := -exp(2*v)*dt^2+exp(2*psi)*(-dt*omega-dx__2*q__2-dx__3*q__3+dphi)^2+exp(2*`&mu;__2`)*dx__2^2+exp(2*`&mu;__3`)*dx__3^2

-exp(2*v)*dt^2+exp(2*psi)*(-dt*omega-dx__2*q__2-dx__3*q__3+dphi)^2+exp(2*mu__2)*dx__2^2+exp(2*mu__3)*dx__3^2

(1)

I am "guessing" that the ordering of the coordinates is [t, phi, x__2, x__3]; this ordering is irrelevant when you indicate the line element, but it is relevant when you indicate the tetrad as a matrix. So set the signature as you indicated, the coordinates using the ordering mentioned, and the metric (I use the shortcut g_ instead of 'metric', it is all the same)

Setup(signature = -` + + +`, coordinates = (X = [t, phi, x__2, x__3]), g_ = `#msup(mi("ds"),mn("2"))`)

`* Partial match of  'coordinates' against keyword 'coordinatesystems'`

 

`Default differentiation variables for d_, D_ and dAlembertian are: `*{X = (t, phi, x__2, x__3)}

 

`Systems of spacetime Coordinates are: `*{X = (t, phi, x__2, x__3)}

 

[coordinatesystems = {X}, metric = {(1, 1) = -exp(2*v)+exp(2*psi)*omega^2, (1, 2) = -exp(2*psi)*omega, (1, 3) = exp(2*psi)*omega*q__2, (1, 4) = exp(2*psi)*omega*q__3, (2, 2) = exp(2*psi), (2, 3) = -exp(2*psi)*q__2, (2, 4) = -exp(2*psi)*q__3, (3, 3) = exp(2*psi)*q__2^2+exp(2*mu__2), (3, 4) = exp(2*psi)*q__2*q__3, (4, 4) = exp(2*psi)*q__3^2+exp(2*mu__3)}, signature = `- + + +`]

(2)

Check the metric

g_[]

Physics:-g_[mu, nu] = Matrix(%id = 18446744078150038342)

(3)

Is this the metric of your problem? Otherwise you need to revise the line element you showed.

 

Now on the tetrad

with(Tetrads)

`Setting lowercaselatin_ah letters to represent tetrad indices `

 

0, "%1 is not a command in the %2 package", Tetrads, Physics

 
(4)

You know, the tetrad is defined up to a Lorentz rotation, so it can be written in infinitely many ways.

 

The defaults used by the Physics package are: an orthonormal tetrad system, so

eta_[]

Physics:-Tetrads:-eta_[a, b] = Matrix(%id = 18446744078139391742)

(5)

and this is in accordance with what you show; the definition of the tetrad is also as you mention in your post

e_[definition]

Physics:-Tetrads:-e_[a, mu]*Physics:-Tetrads:-e_[b, `~mu`] = Physics:-Tetrads:-eta_[a, b]

(6)

The value computed automatically by the package is just one possible value

e_[]

Physics:-Tetrads:-e_[a, mu] = Matrix(%id = 18446744078348784334)

(7)

You can verify that this is indeed a tetrad using the IsTetrad command, which also indicates the type of tetrad

"IsTetrad(?)"

`Type of tetrad: orthonormal `

 

true

(8)

Alternatively you can directly verify the definition (6)

Physics:-Tetrads:-e_[a, mu]*Physics:-Tetrads:-e_[b, `~mu`] = Physics:-Tetrads:-eta_[a, b]

Physics:-Tetrads:-e_[a, mu]*Physics:-Tetrads:-e_[b, `~mu`] = Physics:-Tetrads:-eta_[a, b]

(9)

TensorArray(Physics:-Tetrads:-e_[a, mu]*Physics:-Tetrads:-e_[b, `~mu`] = Physics:-Tetrads:-eta_[a, b], simplifier = simplify)

Matrix(%id = 18446744078348022838)

(10)

So indeed this is a tetrad.

 

Let's introduce now the tetrad you say "you need to get" and check first whether it is or not a tetrad:

Matrix(4, 4, [[-exp(v), 0, 0, 0], [-omega*exp(psi), exp(psi), -q__2*exp(psi), -q__3*exp(psi)], [0, 0, exp(`&mu;__2`), 0], [0, 0, 0, exp(`&mu;__3`)]])

Matrix(%id = 18446744078139386686)

(11)

According to the computer, your suggested tetrad is indeed a tetrad (so far so good)

"IsTetrad(?)"

`Type of tetrad: orthonormal `

 

true

(12)

Just for illustration / fun, one can verify this same thing in steps, for instance defining a tensor with your suggested tetrad (11) (note that in the definition I indicate the mix of indices, spacetime and tetradic; in situations like this one that indication is relevant)

"T[a,mu] = ?"

T[a, mu] = Matrix(%id = 18446744078139386686)

(13)

"Define(?)"

`Defined objects with tensor properties`

 

{Physics:-Dgamma[mu], Physics:-Psigma[mu], T[a, mu], Physics:-d_[mu], Physics:-Tetrads:-e_[a, mu], Physics:-Tetrads:-eta_[a, b], Physics:-g_[mu, nu], Physics:-Tetrads:-gamma_[a, b, c], Physics:-Tetrads:-l_[mu], Physics:-Tetrads:-lambda_[a, b, c], Physics:-Tetrads:-m_[mu], Physics:-Tetrads:-mb_[mu], Physics:-Tetrads:-n_[mu], Physics:-KroneckerDelta[mu, nu], Physics:-LeviCivita[alpha, beta, mu, nu], Physics:-SpaceTimeVector[mu](X)}

(14)

Check the components and definition

T[definition]

T[a, mu] = Matrix(%id = 18446744078348787334)

(15)

Verify now whether this satisfies the definition of a tetrad

subs(e_ = T, Physics:-Tetrads:-e_[a, mu]*Physics:-Tetrads:-e_[b, `~mu`] = Physics:-Tetrads:-eta_[a, b])

T[a, mu]*T[b, `~mu`] = Physics:-Tetrads:-eta_[a, b]

(16)

TensorArray(T[a, mu]*T[b, `~mu`] = Physics:-Tetrads:-eta_[a, b], simplifier = simplify)

Matrix(%id = 18446744078142948526)

(17)

So this is all about finding a transformation that maps the default tetrad into your tetrad (i.e. a reorientation of the axes of the tetrad (local) system of references). For that purpose you need to use the command TransformTetrad (check the help page); in view of the varied options of that command, I understand finding this transformation should not be difficult. If, however, you have a problem with that, please post again and we will move from there (to move any computation forward, it is useful to post it as a worksheet, so that results can be reproduced and worked further).


 

Download TetradComputation.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@John Fredsted 

I see you are using the Physics package and working with PDE systems involving anticommutative variables and functions - not quite the simple conserved-currents case ... Performing this kind of computation systematically and correctly (not just on a toy example) is the kind of thing I call non-trivial. As you may know, within Physics there is a rather ambitious command aiming at that kind of computation, called PerformOnAnticommutativeSystem. It is an experimental command, as explained in its help page. It works by either returning a correct result or giving up, explaining why the problem is out of its scope.

The error message you show, however, points to input restrictions that are too stringent. I fixed this now (and already uploaded the fix to the Maplesoft R&D Physics webpage; this is the 8th update since Maple 2017 got released), but the problem still appears out of reach. I'm rather busy right now with further developments in the Physics:-ThreePlusOne package, new in Maple 2017, but will return to this problem in a couple of days. Your example, and further ones you may end up bringing, are always helpful for tuning this advanced code (tackling PDE systems that involve anticommutative variables and functions).

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@John Fredsted 
In order to tell you something about the message you show, or about what happened in your calculation, I'd need to see the input that produces that message.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@John Fredsted 


 

We are talking about the same thing.

restart; with(PDEtools)

declare((f, g)(x, y))

f(x, y)*`will now be displayed as`*f

 

g(x, y)*`will now be displayed as`*g

(1)

pde := (diff(f(x, y), x))*g(x, y)+f(x, y)*(diff(g(x, y), x))

(diff(f(x, y), x))*g(x, y)+f(x, y)*(diff(g(x, y), x))

(2)

Your pde is already a divergence: it satisfies Euler's equations

Euler(pde)

{0}

(3)

It also admits a rather general integrating factor: an arbitrary function of y and of the product f*g:

IntegratingFactors(pde)

[_mu[1](x, y, f(x, y), g(x, y)) = _F1(y, g(x, y)*f(x, y))]

(4)

Check it out:

PDE := rhs(%[1])*pde

_F1(y, g(x, y)*f(x, y))*((diff(f(x, y), x))*g(x, y)+f(x, y)*(diff(g(x, y), x)))

(5)

Euler(PDE)

{0}

(6)

Now on the conserved currents

ConservedCurrents(pde)

[_J[x](x, y, f(x, y), g(x, y)) = Int(-(diff(_F1(x, y), y)), x)+_F2(y, g(x, y)*f(x, y)), _J[y](x, y, f(x, y), g(x, y)) = _F1(x, y)]

(7)

The above is of the form [_J[x] = ..., _J[y] = ...]. In your reply you write this current as the right-hand sides only; it is the same:

J := map(rhs, %)

[Int(-(diff(_F1(x, y), y)), x)+_F2(y, g(x, y)*f(x, y)), _F1(x, y)]

(8)

This result is correct, check it out:

diff(J[1], x)+diff(J[2], y)

(D[2](_F2))(y, g(x, y)*f(x, y))*((diff(f(x, y), x))*g(x, y)+f(x, y)*(diff(g(x, y), x)))

(9)

You see that this is equal to 0: it is the product of two factors, one of which is proportional to pde itself.

 

So this current is not "almost gibberish" but just more general than the one you mentioned: [f(x, y)*g(x, y), 0]. Maple's result is correct for any value of the arbitrary mappings _F1 and _F2. The solution you posted is the one you get for _F1 = 0, _F2(y, f*g) = f*g.

 

To mention but one, just another example could be

J_bis := value(eval(J, [_F1 = `*`, _F2 = `+`]))

[-(1/2)*x^2+y+g(x, y)*f(x, y), x*y]

(10)

diff(J_bis[1], x)+diff(J_bis[2], y)

(diff(f(x, y), x))*g(x, y)+f(x, y)*(diff(g(x, y), x))

(11)

ReducedForm(%, pde)

`casesplit/ans`([0], [])

(12)

Instead of testing manually you can always test using ConservedCurrentTest

ConservedCurrentTest([-(1/2)*x^2+y+f(x, y)*g(x, y), x*y], pde)

{0}

(13)

On the weak side: ConservedCurrentTest doesn't return 0 for the general result, because ReducedForm gets confused by the D construction:

ConservedCurrentTest([_J[x](x, y, f(x, y), g(x, y)) = Int(-(diff(_F1(x, y), y)), x)+_F2(y, f(x, y)*g(x, y)), _J[y](x, y, f(x, y), g(x, y)) = _F1(x, y)], pde)

{(D[2](_F2))(y, g(x, y)*f(x, y))*((diff(f(x, y), x))*g(x, y)+f(x, y)*(diff(g(x, y), x)))}

(14)

But you see by eye that this is proportional to pde itself, therefore equal to zero.

 

About the more complicated examples you mention, I suggest you give them a try. Remember the format of the output, of the form [_J[x] = ..., _J[y] = ...], and if you prefer not to display the functionality in the left-hand sides, use

ConservedCurrents(pde, displayfunctionality = false)

[_J[x] = Int(-(diff(_F1(x, y), y)), x)+_F2(y, g(x, y)*f(x, y)), _J[y] = _F1(x, y)]

(15)



 

Download ConservedCurrentsIsOK.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@John Fredsted 

Given an expression EE - a differential expression, say of differential order k - compute a first integral: another expression, of differential order k-1, whose total derivative is equal to the original expression EE. This is the general idea, also applicable to PDEs of course. In the case of ODEs, the command is DEtools[firint]. Complementary command: DEtools[intfactor]; related (say, inverse) command: DEtools[redode].

In the PDE case, replace "total derivative" in the paragraph above by "divergence", and the command you need to have a look at is PDEtools:-ConservedCurrents. Complementary commands are PDEtools:-ConservedCurrentTest, PDEtools:-IntegratingFactors and PDEtools:-IntegratingFactorsTest. I never wrote the equivalent of redode for PDEs, but I explained the idea I used in sufficient detail in the help page for redode; it is not difficult to extend it to the PDE case.
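For the ODE case, the idea can be sketched as follows (a toy exact ODE of my own; the exact form of the returned first integral may vary by release):

```maple
with(DEtools):
# An exact 2nd order ODE: its left-hand side is the total derivative
# d/dx ( y(x)*diff(y(x), x) ), so a first integral of order 1 exists
ode := y(x)*diff(y(x), x, x) + diff(y(x), x)^2 = 0:
firint(ode);        # a first integral, of differential order k-1 = 1

# For a non-exact ODE, intfactor finds an integrating factor first;
# here exp(y(x)) makes the equation exact, since
# d/dx ( exp(y)*y' ) = exp(y)*(y'' + y'^2)
ode2 := diff(y(x), x, x) + diff(y(x), x)^2 = 0:
mu := intfactor(ode2):
firint(mu*ode2);
```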

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft
