ecterrab

MaplePrimes Activity


These are replies submitted by ecterrab

@trace 

To define k_[mu, nu] indicating its components, please do as you did for N[mu] and F[mu, nu] in "Plugging in a metric ansatz", a thread you also started. If that does not resolve your problem, please show me what you are doing to define k_[mu, nu] indicating its components that does not work. Thanks.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@trace 

To discard terms of degree greater than one in g(r), use series. That command, however, does not accept a second argument of type function (such as g(r)), so call it through frontend, as in

> frontend(series, [(17), g(r), 2])

 

Regarding "how can i put k components" I do not understand what you mean by that - an example? Do you mean assigning values to the components of k? If so it suffices to define k (using Define) indicating its components, instead of just Define(k_[mu,nu], symmetric) as I did in the review of your worksheet, or otherwise leave that definition as it is and redefine k_ indicating its components just before (or even after) using series to discard terms O(g(r)^2).

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@Rouben Rostamian 

As is clear from the previous replies, although I understand you think abs(x) in this context is bogus, I think it is not. Taking the previous comments as reference, however, the solution with x instead of abs(x) is definitely simpler, and the introduction of complex components (abs, csgn, signum) when simplifying solutions to linear equations is a pattern that can be checked at low cost: identify the radicals that lead to these complex components through simplification, take advantage of the integration constants to reabsorb constant factors, and in this way rewrite the solution in simpler form. Depending on the program, this kind of manipulation may be tricky, but not in this case: the symmetry routines in Maple are pretty modern code.

I implemented this change (you can install it by updating your Maple 2017.3 with the update for Physics, Differential Equations and Mathematical Functions distributed at the Maplesoft R&D Physics webpage), and now you get the one you were expecting, which I call the simpler solution:

> pde := diff(u(x, t), t) = diff(u(x, t), x, x);
> sols := PDEtools:-SimilaritySolutions(pde) assuming t > 0, x::real;
> sol := sols[4];

u(x, t) = _C1+erf((1/2)*x/t^(1/2))*_C2


Download solution_without_abs.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@Rouben Rostamian  

A constant doesn't need to be continuous; a piecewise constant is valid. Note, however, that the problem is at its discontinuity, the origin, exactly where signum(1, x) is undefined. All I am saying (have been saying) is that the solution returned for the symbolic problem without initial/boundary conditions is correct except at one point, that arbitrary constants can be piecewise constants (except for their value at that point), and that the root of all this is in the simplifier, which changes the solution computed by the PDE symmetry routines by introducing abs. That introduction is not OK, but the consequence, for x::real, is only to make the solution invalid at the origin.

Anyway, arguments went back and forth; I enjoyed the exchange, but I need to return to other matters.

Best

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@Rouben Rostamian  

Hold on: the problem posted has no initial conditions, leaving a lot of freedom in the choice of the arbitrary constants _C1 and _C2, enough to overcome almost any argument. On the other hand, the example you present as an argument has initial conditions. With (please!) no offense, I think you cannot compare one with the other. More concretely, look at the solution, with the term under focus of the form _C2*erf(abs(x)) where x::real. You know erf(-x) = -erf(x); reabsorb that sign within _C2 and you can even say that for x >= 0 you have _C2*erf(abs(x)) = _C2*erf(x), while for x < 0 you have _C3*erf(x) (relabeling the constant), and there you are, looking at a solution arguably valid for all of x::real.
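As a quick check of the oddness of erf used in that reabsorption (Maple applies this symmetry automatically, no simplify needed):

> erf(-x) + erf(x);
0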

The key observation here is that a problem without initial conditions has some extra freedom that a problem with initial conditions (provided it is well defined) has not.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@Rouben Rostamian  

I'm happy to see exploration to the sides; the PDEtools symmetry commands are really powerful and have a ton of options. By the way, SimilaritySolutions, the one you used (an approach frequently presented in symmetry textbooks), is a rather restricted and watered-down version of InvariantSolutions, in turn only presented in full in more advanced symmetry textbooks.

Now on the sqrt(x): this is not really "a bug". If you pdetest(sol, pde) assuming x::real, you see the remainder has signum(1, x) as a factor, which is equal to 0 for all nonzero real x. So the solution returned could be seen as inappropriate only at x = 0.

But more important: where is this abs(x) coming from? It comes from simplify. The solution actually computed by SimilaritySolutions is

> sol_0 := u(x, t) = _C1 + erf((1/2)/(t/x^2)^(1/2))*_C2

which tests OK right away, even with something as simple as normal(eval(pde, sol_0)). Try now simplify(sol_0) assuming x::real, and you see abs(x) in the output, which, when testing, generates signum(1, x).
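The sequence just described can be retraced as follows (a sketch following the text above):

> pde := diff(u(x, t), t) = diff(u(x, t), x, x):
> sol_0 := u(x, t) = _C1 + erf((1/2)/(t/x^2)^(1/2))*_C2:
> normal(eval(pde, sol_0));                      # tests OK right away
> sol_abs := simplify(sol_0) assuming x::real:   # abs(x) now appears
> pdetest(sol_abs, pde) assuming x::real;        # remainder with signum(1, x) as a factor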

Best

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@_Maxim_ 

I see now there are two assuming constructs in the same item 3. First of all: from the explanation in the previous reply, what works is assume; your comment can then only apply to it (whether it tries to place assumptions on 0), and my take on this one is that assume(x(0) > 0) should be a valid assumption, because in general nothing is known about x. Regarding why "the first construct does not interrupt with an error": it is because assuming notices that abs(x(0)) has no indets of type name, and therefore shortcuts the computation without calling assume at all.

So, one thing to think about: assume could handle assumptions on x(0), at least when x is not assigned. I will forward this comment to the person taking care of assume (I wrote assuming, but am not involved in assume itself).
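To see the distinction at work (a minimal sketch): assumptions on a name work,

> assume(x > 0):
> is(x > 0);
true

while assume(x(0) > 0), with a function call instead of a name on the left-hand side, is the case under discussion.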

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@illuminates 

Now it is clear. This is fixed in Maple 2017. The updates of the Physics package, including fixes and the new material under development, are distributed at the Maplesoft R&D Physics webpage. So in Maple 2017 + updates you get -6, not -24.

This Maplesoft R&D Physics webpage also includes updates for Maple 2016 that I recommend, containing fixes and Physics developments that entered Maple 2017; the last update for Maple 2016, however, happened Apr/15. Only updates for the current release get posted every week. A workaround in Maple 2016 for the example you posted is to use SumOverRepeatedIndices, as you indicated.
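For reference, a hypothetical contraction illustrating the call (not the exact example of this thread):

> with(Physics):
> SumOverRepeatedIndices(g_[mu, nu]*g_[~mu, ~nu])

which performs the sum over the repeated indices mu and nu explicitly, instead of leaving the contracted expression unevaluated.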

@digerdiga 

The answers are on the help pages.

For the signature, see Physics:-Setup and search for the word signature on that page; in more recent versions of Maple you can just search for signature within the whole help system, and Physics:-Setup is one of the options that comes close to the top. For D_ see ?D_, and for Christoffel see ?Christoffel; to express covariant derivatives in terms of Christoffel symbols, also see ?Christoffel.
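For instance (a sketch; the signature keyword accepts any of the four orderings):

> with(Physics):
> Setup(signature = `+---`);
> Setup(signature);   # query the value currently in use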

I am not sure how familiar you are with using computer algebra ... Help pages are a core part of it. There are so many commands that it is pointless to try to remember everything. You just consult the help pages, always, and you rapidly remember things about the commands you use the most. Otherwise, without consulting the documentation, computer algebra systems are of little use.

 

@digerdiga 

differentialoperators is a new feature of Maple 2017 (that is four releases after Maple 17), so it doesn't work in Maple 17. The ability to work with any of the four signatures mentioned in the first answer (above) was introduced in Maple 2015 or the release after it, so that also doesn't work as such in the older Maple 17. Without the differentialoperators feature, you can compute with D_[mu] and A(X), but not as operands of a product: D_[mu] is a differentiation operator, and to use it you need to apply it, as in D_[mu](A(X)). Of course the interesting thing is to have this working properly also when you use multiplication (with `*`, not `.`) instead of just application, but that requires updating to the latest Maple.
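A sketch of the kind of setting meant here, in Maple 2017 (see ?Physics,Setup for the exact calling sequence; A here is just an illustrative tensor name):

> with(Physics):
> Setup(coordinates = X, differentialoperators = {[D_[mu], [mu]]}):
> D_[mu] * A[nu](X);   # a product with a differential operator, not an application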

Regarding your question about Physics:-Simplify, recall that all these commands have a corresponding help page, and the answer to your question is in that help page: Simplify is physics-oriented; it performs simplification of noncommutative products taking algebra rules into account, as well as simplification of contracted indices taking Einstein's sum rule and tensor symmetries into account, plus some other things. So it is complementary to the standard simplify.
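For example, Simplify resolves contracted indices:

> with(Physics):
> Simplify(g_[mu, nu]*g_[~mu, ~nu])

returns the spacetime dimension (4 with the default setup), something the standard simplify knows nothing about.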

In summary, some of the topics presented in today's IOP talk work in Maple 17, but several only work in newer releases or the latest one, Maple 2017.3.

 

@digerdiga 

Could you please post a worksheet with your input/output? (From your reply I am unable to understand what you are saying.) Also, which Maple version? Current is Maple 2017. Thanks.

@_Maxim_ 

The case of _Y is different: it is produced by the DESol routines, and it has been a global since before the existence of the FunctionAdvisor. The same happens with _Z and RootOf.

Regarding the FunctionAdvisor, I don't remember exactly all the situations (there are too many), but almost always the variables introduced are local ones. This design has advantages and disadvantages. The most obvious disadvantage is precisely the situation you ran into: you expected the f in f(z) to be global; that is, of course, an understandable expectation. But when you stop to think about the design, returning globals that also have a reasonable visual appearance (e.g. 'f', not '_f', which is kind of FORTRANish) is nontrivial: the global 'f' can always be assigned, macroed, aliased, have attributes, etc., and then you need to spin around ideas (returning a letter that looks different, making it indexed, and so on), all not as simple as returning a local f.
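To see one of those pre-FunctionAdvisor globals, a quick illustration:

> solve(x^5 - x - 1 = 0, x);

returns RootOf expressions written in terms of the global _Z.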

In some more modern parts of the FunctionAdvisor, nonetheless, not entirely happy with locals, I made it work with globals (regarding some summation indices), but as said it was complicated, and with advantages and disadvantages. Either way, I made this design decision 15+ years ago, when writing the first version of the FunctionAdvisor; for now, this is how it works, and I do not foresee that changing.

@vv 

Thanks for your comment; is it a complaint about the things implemented in Physics, or rather a sign that you appreciate what is in this package but would like to see more of it outside the package? Physics needs to redefine a large number of things because, otherwise, the dense notation used in physics simply cannot be used in a Computer Algebra System (CAS), and so most of the computations done in theoretical physics are just not possible the way we do them with paper and pencil or in textbooks. Think about it: CAS do not implement even the addition of two non-projected vectors, nowadays part of the Physics:-Vectors package. This is the first thing that caught my attention when I discovered Mathematica and Maple years ago: not even that letter with an arrow on top ...

As a more advanced example, still basic, in the post above you see things in blue, olive and purple, respectively identifying commutative, noncommutative and anticommutative objects with respect to the product `*`. Such a thing is impossible in a pre-Physics CAS, where the product operator `*` assumes all its operands commute, and the same limitation affects diff. You may think of the implementation of these things in Physics as "a state within a state", as you say; indeed, in all these respects Physics is unique. By the way, I have already heard the same comment about PDEtools. Either way, Maple is the only CAS that implements this advanced functionality, and I think this openness of Maplesoft to extend functionality as you see is a very good thing, a strong feature of the Maple system. In the same line I would mention the official Maplesoft Physics updates and fixes distributed every week.

You ask whether other parts of Maple are going to evolve in these directions. Unfortunately, I'm not sure anyone has the answer to that question; it is too general. I can tell you that several of the things you saw implemented in previous releases started as features of the Physics package (to mention but some: the "everything inert" approach and a large number of improvements in int, assume, is, Typesetting and the GUI in general), similar to what happened with former features of PDEtools, dsolve and pdsolve (to mention but three significant and non-obvious ones: `assuming`, the FunctionAdvisor and simplify/size). The merge of Physics:-Assume (which actually is an official Maple command) with the older assume (also an official Maple command) is most certainly going to happen too; there is agreement about that.

On why Maple is missing an AbstractLinearAlgebra (or a Rings) package, I don't have the answer, except for noting (just my perception, not talking in the name of Maplesoft) that the Maplesoft development group looks small to me for the tasks at hand, while the amount of things being developed is not small. Anyway, in my experience, as soon as something starts to pop up more frequently as a request, it draws attention to the point where it gets developed.

@John Fredsted 

The matrix command may appear in the documentation as 'deprecated', but you see it is not: its functionality is unique, not available as such elsewhere in the Maple language. BTW, there is an internal conversation about exactly that.

Then the use you suggest, of a mapping on the rhs of the algebra rule, has an issue: you know, in a CAS, multiplication and function application are not the same thing. So, if you use a mapping as you suggest, on the lhs of the algebra rule you 'multiply' but on the rhs you 'apply' ... (?). Not OK. On the other hand, as you do with paper and pencil, on the computer you multiply equation (6); then you need product operands, and if you want the matricial dimension of the lhs and rhs to be explicitly the same, you need a matrix with which you can also operate algebraically. That is what this post shows: how to do something like that.

Regarding changing Physics defaults to return a rhs that differs from the rhs of (6) in that it has the same matricial dimension as the lhs: I still think the current output in (6) is simpler for the purposes we use these algebra rules for (i.e., with the matricial dimensions omitted), but only after finishing revamping the use of Dirac and other spinors typical of particle physics formulations (on the way) will I be able to see the pros and cons for real, and then think about this again.

Finally, you enter that II symbol from the palette of symbols on the left, see under "Open face".

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@John Fredsted 

I just posted how to set the algebra of Dirac matrices with an identity matrix on the right-hand side. I preferred to make it a post so that it doesn't get lost in the middle of this thread with so many replies. In a week, maybe, I will finish revamping this spinor sector related to particle physics. Yes, I agree with you that spinor notation in GR is relevant, in my opinion mainly in connection with quantum gravity. That, however, will still take some more time to be fully implemented. We will arrive there.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@John Fredsted 

So `.` now uses the same commutation rules as `*` with regard to Dgamma[mu] and an object set as anticommutative (in your example, psi). The fix is, again, available to everyone at the Maplesoft R&D Physics webpage.

You naturally moved the focus to something else, not mentioned in your original post, which is: "if we do not enter, say, psi, a spinor, as a non-algebraic Vector construct, then how do we see the matricial form of an abstract expression involving psi declared only as anticommutative?" This is actually an entire question by itself.

The answer involves two or three things. First, within the Physics:-Library package there are the routines Library:-RewriteInMatrixForm and Library:-PerformMatrixOperations. The first one is expected to just display the given algebraic expression, replacing abstract objects by the corresponding underlying matrices; it works fine, but not yet with this psi. The second one not only replaces the abstract objects but also carries out the matrix operations; this one too is not yet handling "just-an-anticommutative-psi" as a 4-spinor. So these two routines are evolving into something similar to Physics:-TensorArray but with regard to 'spinor indices'.

The last sentence touches the representation problem you are focusing on: originally, my idea was to represent spinors just as indexed (tensorial) anticommutative objects. So 'spinorindices' is one of the things you can set using Setup, and indeed in ?FeynmanDiagrams you see these indices used and the equivalent of the mass term in a Lagrangian being represented. But then general relativity, in its extended form, uses spacetimeindices, spaceindices and tetradindices, and will soon use spinorindices as well; to that you add su2indices and su3indices, introduced in Maple 2017 with the StandardModel package, plus the fact that, with paper and pencil, we (almost) never write spinor indices; instead we omit them.

All this to say that the original idea is by now obsolete. I am considering changing this into a concrete "omit Dirac-spinor indices" (and in fact all other particle-physics spinor indices), and instead enhancing Library:-RewriteInMatrixForm and Library:-PerformMatrixOperations to also handle the psi we are discussing, by automatically transforming psi -> Vector[row/column](4, symbol = psi). Then we will see the matrix form and also have the operations performed, while: a) being able to work with the Lagrangian in compact form, b) not artificially expressing it with too many indices, and c) keeping spinor indices for something else, possibly GR or supersymmetry. If the wind doesn't change, this development step may be ready in a week, among other related things in the StandardModel package (including the default setting of the algebra rules for the Gell-Mann matrices, which are already present and implemented in the StandardModel package).

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft
