Alejandro Jakubi

MaplePrimes Activity


These are answers submitted by Alejandro Jakubi

In my opinion, at the mathematical (semantic, higher) level it should not, while it has to at the data-structure (syntactic, lower) level. Both Vector and Matrix (upper case) are variations of the rtable data structure (see ?rtable):

> v:=Vector(3,[a,b,c]);
                                         [a]
                                         [ ]
                                    v := [b]
                                         [ ]
                                         [c]
> lprint(%);
Vector[column](3,{1 = a, 2 = b, 3 = c},datatype = anything,storage = 
rectangular,order = Fortran_order,shape = [])
> m:=Matrix(3,1,[x,y,z]);
                                         [x]
                                         [ ]
                                    m := [y]
                                         [ ]
                                         [z]
> lprint(%);
Matrix(3,1,{(1, 1) = x, (2, 1) = y, (3, 1) = z},datatype = anything,storage = 
rectangular,order = Fortran_order,shape = [])

Now, mathematically, a vector and a one-column (or one-row) matrix are "isomorphic". So it should be possible to add them. Since Maple 16 there is a handy tool, coercion (see ?coercion), that helps bridge these data-type differences. In particular, you can do:

> ~Matrix(v)+m;
                                    [a + x]
                                    [     ]
                                    [b + y]
                                    [     ]
                                    [c + z]

But `rtable/Sum` has not been updated to use coercion.
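For what it is worth, other systems face the same shape distinction. In Python's NumPy (an analogy only, not Maple), adding a 1-D vector to a 3x1 matrix does not fail but silently broadcasts to a 3x3 array, so an explicit coercion (a reshape) is needed, much like ~Matrix(v) above:

```python
import numpy as np

# A 1-D vector and a 3x1 matrix, NumPy analogues of v and m above:
v = np.array([1, 2, 3])           # shape (3,)
m = np.array([[10], [20], [30]])  # shape (3, 1)

# Plain addition does not fail: broadcasting silently produces a 3x3 array,
# which is not the mathematical sum of two "column vectors":
broadcast = v + m                 # shape (3, 3)

# The explicit coercion, analogous to ~Matrix(v) + m:
coerced = v.reshape(3, 1) + m     # shape (3, 1)
```

So a library can also make the opposite design choice, which is arguably worse: there the mismatch is silently "bridged" into something mathematically unintended.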

If by "why this happened" you mean what the code is doing so as to return 0, debugging these lines of product shows it. A boolean expression at line 53 of product evaluates to false, and the return for false is 0:

> stopat(product,48):
> product(cos(Pi/k), k = 3 .. infinity);   
true
product:
  48*            roots := [solve(f,n,'AllSolutions')];
DBG> into 10
[2/(1+2*_Z1)]
product:
  49             if has(roots,indets(f,'name')) or hastype(roots,'function') then
                   ...
                 else
                   ...
                 end if
[2/(1+2*_Z1)]
product:
  51               r := FAIL or evalb(SolutionsMayBeLost <> false);
true
product:
  52               if type([a, b],['integer', 'pos_infinity']) then
                     ...
                   elif type([a, b],['neg_infinity', 'integer']) then
                     ...
                   elif type([a, b],['neg_infinity', 'pos_infinity']) then
                     ...
                   else
                     ...
                   end if
true
product:
  53                 r := r and not ormap(coulditbe,roots,AndProp('integer',RealRange(a,infinity)));
false
product:
  54                 if r = false then
                       ...
                     end if
false
product:
  55                   return 0
                                       0

You can see that many other expressions would produce the same result as the value of roots above does:

> ormap(coulditbe,a,AndProp('integer',RealRange(3,infinity)));
                              true

> ormap(coulditbe,1+exp(u),AndProp('integer',RealRange(3,infinity)));
                              true

So the fact that this boolean expression at line 53 evaluates to false is not very significant. What the developer intended with this code is unclear to me, and we have no certain way to find out, as the documentation of the code is not available to users.
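For what it is worth, a brute-force scan outside Maple (a Python sketch, not product's actual code) supports what line 53 presumably tries to establish: no root k = 2/(1+2*n) is an integer in 3..infinity, so no factor cos(Pi/k) of this product vanishes, and returning 0 is indeed wrong:

```python
from fractions import Fraction

# roots above says cos(Pi/k) vanishes at k = 2/(1 + 2*n), n integer.
# The product runs over integer k = 3..infinity, so a factor is zero only
# if some 2/(1 + 2*n) is an integer >= 3. A brute-force scan finds none
# (the only integer values are 2 at n = 0 and -2 at n = -1):
hits = [n for n in range(-1000, 1000)
        if Fraction(2, 1 + 2*n).denominator == 1
        and Fraction(2, 1 + 2*n) >= 3]
```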

It looks like the default algorithm for the assumptions goes into an involved expression that cannot be handled symbolically:

> trace(assume):
> solve(eqs, [omega[n],theta[n]], UseAssumptions) assuming omega[n]>=0.0;
...
{--> enter assume, args = x_1 = RootOf(25*tan(_Z)*_Z^4-51*tan(_Z)*_Z^2-50*_Z^3+
tan(_Z)+26*_Z), _X000001 = -arctan(25*RootOf(25*tan(_Z)*_Z^4-51*tan(_Z)*_Z^2-50
*_Z^3+tan(_Z)+26*_Z)/(5*RootOf(25*tan(_Z)*_Z^4-51*tan(_Z)*_Z^2-50*_Z^3+tan(_Z)+
26*_Z)-1)/(5*RootOf(25*tan(_Z)*_Z^4-51*tan(_Z)*_Z^2-50*_Z^3+tan(_Z)+26*_Z)+1)) ...

So, it appears that the "magic" here is to filter out the undesired solutions:

> s:=solve(eqs, [omega[n],theta[n]]);
s := [[omega[n] = -0.5240921270, theta[n] = 1.149798694],
    [omega[n] = 0., theta[n] = 0.],
    [omega[n] = 0.5240921270, theta[n] = -1.149798694],
    [omega[n] = -1.487355972, theta[n] = 0.6003934395]]


> select(x->op([1,2],x)>=0,s);
[[omega[n] = 0., theta[n] = 0.],
    [omega[n] = 0.5240921270, theta[n] = -1.149798694]]
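The same post-filtering idea can be sketched in any language; e.g. in Python, over a hypothetical list of solution records mirroring s above:

```python
# Hypothetical records mirroring the solve output s above:
sols = [{"omega": -0.5240921270, "theta": 1.149798694},
        {"omega": 0.0,           "theta": 0.0},
        {"omega": 0.5240921270,  "theta": -1.149798694},
        {"omega": -1.487355972,  "theta": 0.6003934395}]

# The analogue of select(x -> op([1,2],x) >= 0, s):
kept = [s for s in sols if s["omega"] >= 0]
```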

According to ?plotsetup:

plotoptions is a string containing comma separated keywords that are recognized by the device drivers. For example, plotsetup(ps, plotoptions=`color,portrait`) tells the PostScript driver to perform color plotting in a portrait orientation. For a complete list of the plotoptions keywords supported for each device, see the plot/device help page.

But ?plot,device and ?plot,ps say nothing of font specifications being recognized.

It might be a driver/renderer problem with the Standard GUI, if this is what you are using. First check that the plot data structure is properly created:

a;
              CURVES([[0., 0., 0.], [1., 1., 1.]])

If so, try with the "basic" character device:

plotsetup(char):
display([a],axes=none);
                                                                              
                                       |                                      
                                       |                                      
                                       |                                      
                                       |                                      

If it works, then try with the maplet device:

plotsetup(maplet):
display([a],axes=none);

Because it is computed using the method cook:

> infolevel[IntegrationTools]:=3:
> int(BesselJ(1, x), x = 0 .. infinity);
Definite Integration:   Integrating expression on x=0..infinity
Definite Integration:   Using the integrators [distribution, piecewise, series, o, polynomial, ln, lookup, cook, ratpoly, elliptic, elliptictrig, meijergspecial, improper, asymptotic, ftoc, ftocms, meijerg, contour]
LookUp Integrator:   unable to find the specified integral in the table
Cook LookUp Integrator:   returning answer from cook pattern 3
Definite Integration:   Method cook succeeded.
Definite Integration:   Finished sucessfully.
                                       -1

which has a regression bug after Maple 9.52. Note that "Finished sucessfully" does not mean a correct result! So, a workaround is to avoid the cook method being used:

> int(BesselJ(1, x), x = 0 .. infinity,method=nocook);
memory used=1.1MB, alloc=30.3MB, time=0.07
Definite Integration:   Integrating expression on x=0..infinity
Definite Integration:   Using the integrators [distribution, piecewise, series, o, polynomial, ln, lookup, ratpoly, elliptic, elliptictrig, meijergspecial, improper, asymptotic, ftoc, ftocms, meijerg, contour]
LookUp Integrator:   unable to find the specified integral in the table
Definite Integration:   Method meijergspecial succeeded.
Definite Integration:   Finished sucessfully.
                                       1

You see that this way the method meijergspecial is used instead. Alternatively, the method ftoc could be chosen:

> int(BesselJ(1, x), x = 0 .. infinity,method=ftoc);
                                       1

Or a numerically checked result could be requested by using the undocumented method _VERIFYFLOAT:

> int(BesselJ(1, x), x = 0 .. infinity,method=_VERIFYFLOAT);
                                       1
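As a cross-check outside Maple, the value 1 is easy to confirm numerically; a Python sketch with mpmath (the antiderivative of BesselJ(1, x) is -BesselJ(0, x), so the integral over 0..infinity is J0(0) = 1):

```python
from mpmath import mp, besselj, quadosc, inf, pi

mp.dps = 15
# quadosc handles the oscillatory tail: asymptotically J1 oscillates with
# period ~2*pi, and the integral converges to 1:
val = quadosc(lambda x: besselj(1, x), [0, inf], period=2*pi)
```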

A priori, it does not seem so easy. Tracing shows where that change from inert to active form occurs:

> trace(`dchange/diff/tr`):
> trace(`PDEtools/eval`):
> Reynolds:=Diff(p(x)*h(x)^3/(12*mu)*Diff(p(x),x),x)-u(x)/2*Diff(p(x)*h(x),x)+Diff(p(x)*h(x)^3/(12*mu)*Diff(p(x),z),z)=Diff(p(x)*h(x),t):
> PDEtools:-dchange({p(x)=P(X)*Pa,x=Lx*X,h(x)=H(X)*h2},Reynolds,{P,h,X,u,H});
{--> enter PDEtools/eval, args = Diff(p(x)*h(x),x), [Diff = `dchange/Diff_aux`,
...
{--> enter dchange/diff/tr, args = P(X)*Pa*H(X)*h2, x, {x = Lx*X}, {}, {h(x) =
H(X)*h2, p(x) = P(X)*Pa}, {}, [X], [H(X), P(X)], {}, {}, {}
                               F, piff := F, piff
                                zz := [x = Lx X]
                                      true
                                  ans := [Lx]
                                        [ 1  ]
                                 ans := [----]
                                        [ Lx ]
                                          1
                                 ans := [----]
                                          Lx
                                      piff(F, X)
                               ans := ----------
                                          Lx
{--> enter PDEtools/eval, args = 1/Lx*piff(F,X), [piff = diff, F = P(X)*Pa*H(X)
*h2]
                  /d      \                      /d      \
                  |-- P(X)| Pa H(X) h2 + P(X) Pa |-- H(X)| h2
                  \dX     /                      \dX     /
                  -------------------------------------------
                                      Lx
...

As you see, the procedure `dchange/diff/tr` makes the change of variable, expressing the result in terms of an auxiliary function call piff(...), and then an evaluation is done calling `PDEtools/eval` where piff is replaced with diff, even when the original expression was an inert derivative (Diff or %diff). 

At least conceptually, this is a design bug, in my opinion. Getting the desired behavior would probably require patching PDEtools:-dchange:-`dchange/diff`, and perhaps something else. I think that it could be done (given spare time). But I would prefer that Edgardo do it...

It is related to the internal ordering of objects. Compare with:

> cos(a-b);
                                   cos(a - b)
> cos(b-a);
                                   cos(a - b)

Using sort may help. In a basic way:

> sort(cos(Phi1(x)-psi));
                               cos(psi - Phi1(x))
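For comparison, SymPy (Python, an aside, not Maple) applies the same kind of canonical ordering automatically, using the evenness of cos to pick one sign of the argument:

```python
from sympy import cos, symbols

a, b = symbols('a b')
# Both arguments are reduced to one canonical form, as Maple does internally:
e1 = cos(a - b)
e2 = cos(b - a)
```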

To control what is expanded (here sin calls and sums) and what is not (the rest; here products, etc.), the frontend facility can be used to advantage, without the need to craft ad hoc procedures. For acer's examples:

> (simplify@frontend)(expand,[2*sin(x+Pi/4)],[{`+`},{sin}]);
                              1/2
                             2    (sin(x) + cos(x))

> (simplify@frontend)(expand,[2*sin(2*x+Pi/4)],[{`+`},{sin}]);
                              1/2
                             2    (sin(2 x) + cos(2 x))

> (simplify@frontend)(expand,[2*sin(2*x+4*k)],[{`+`},{sin}]);
                   2 sin(2 x) cos(4 k) + 2 cos(2 x) sin(4 k)

> (simplify@frontend)(expand,[2*sin(4*k)],[{`+`},{sin}]);
                                  2 sin(4 k)

Certainly, in my opinion, the default of expand, namely to expand "everything", is upside-down. It should instead perform no specific expansions unless they are explicitly requested. That way, the statements that perform such controlled expansions would typically be much shorter.
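The freezing idea behind frontend can be mimicked in other systems too; a SymPy sketch (Python, with A and B as my own frozen placeholders standing in for the products 2*x and 4*k, which would otherwise be expanded as double angles):

```python
from sympy import symbols, sin, cos, expand, simplify

x, k = symbols('x k')
A, B = symbols('A B')  # frozen placeholders for the products 2*x and 4*k

# Expand only over the sum inside sin, then thaw the placeholders:
frozen = expand(2*sin(A + B), trig=True)
result = frozen.subs({A: 2*x, B: 4*k})
```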

This is a partial list of threads discussing LaTeX export issues and alternative code for this purpose:

A simple Latex example

maple2latex

Maple Expressions to LaTeX converter

Better LaTeX output from Maple?

Maple 12 - Wish List


No, it does not seem to be the right tool. GRTensor is designed for computations with a specified metric.

Both integrals are computed using the method FTOC. And while the primitive function computed for f1 has a discontinuity within the interval (0,1) at t=1/2 (actually a branch cut crossing the real axis):

> F1:=int(f1,t);
                                                    2
                      F1 := -1/4 ln(cot(2 t - 1 - I)  + 1)
> ll:=limit(F1,t=1/2,left);
           ll := 1/4 I Pi + 1/4 ln(cosh(1) - 1) + 1/4 ln(cosh(1) + 1)
> lr:=limit(F1,t=1/2,right);
          lr := -1/4 I Pi + 1/4 ln(cosh(1) - 1) + 1/4 ln(cosh(1) + 1)
> lr-ll;
                                   -1/2 I Pi

the primitive function computed for f2 has none within this interval. So the computation of the definite integral test1 looks for discontinuities but fails to detect this one:

> trace(discont):
> test1:=int(f1,t=0..1);
{--> enter discont, args = cot(2*t-1-I), t
                            _EnvAllSolutions := true
                                 Pi _Z1~
                        disr := {------- + 1/2 + 1/2 I}
                                    2
                                   disc := {}
                                   disc := {}
                                       {}
<-- exit discont (now in GetPoles) = {}}
{--> enter discont, args = -1/4*ln(cot(2*t-1-I)^2+1), t
                            _EnvAllSolutions := true
                                 Pi _Z2~
                        disr := {------- + 1/2 + 1/2 I}
                                    2
                                   disc := {}
                                   disc := {}
                                       {}
<-- exit discont (now in FindDisconts) = {}}

Meaning that it is computed as if the primitive function F1 were continuous...

Really, with the BranchCuts (sub)package in place since Maple 17, it seems to me that it is time to improve discontinuity checking.
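The jump of F1 across t=1/2 can be confirmed numerically outside Maple; a Python sketch with cmath (principal log branch), reproducing lr - ll = -1/2*I*Pi:

```python
import cmath
import math

def F1(t):
    # F1 = -1/4*ln(cot(2*t - 1 - I)^2 + 1), with the principal branch of log
    z = 2*t - 1 - 1j
    c = cmath.cos(z) / cmath.sin(z)   # cot(z)
    return -0.25 * cmath.log(c*c + 1)

# Approach t = 1/2 from both sides; the difference approximates lr - ll:
jump = F1(0.5 + 1e-7) - F1(0.5 - 1e-7)
```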

@taro yamada 

For such symbolic manipulations, it is better to use the inert form Int, as there are already tools for handling it:

> equ:=p(k)=alpha-gamma*q(k):
> map(IntegrationTools:-Expand@Int,equ,k=0..M);
                  M              M                     M
                 /              /                     /
                |              |                     |
                |   p(k) dk =  |   alpha dk - gamma  |   q(k) dk
                |              |                     |
               /              /                     /
                 0              0                     0
> value(%);
                     M                              M
                    /                              /
                   |                              |
                   |   p(k) dk = alpha M - gamma  |   q(k) dk
                   |                              |
                  /                              /
                    0                              0
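SymPy (Python) has an analogous inert form, Integral, supporting the same style of manipulation; a sketch (the names are mine, not from the original question):

```python
from sympy import symbols, Function, Integral, Eq

k, M, alpha, g = symbols('k M alpha gamma')
p, q = Function('p'), Function('q')

equ = Eq(p(k), alpha - g*q(k))

# Map the inert definite integral over both sides of the equation:
inert = Eq(Integral(equ.lhs, (k, 0, M)), Integral(equ.rhs, (k, 0, M)))

# Evaluate what can be evaluated; integrals of the unknown p, q stay inert:
done = inert.doit()
```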

Basically, what is tried, and fails, is the tangent half-angle substitution followed by the Risch algorithm:

> infolevel[int]:=3:
> trace(`int/trigon`):
> int(tan(x)^(1/3)*sec(x)^2, x);
...
{--> enter int/trigon, args = tan(_X)^(1/3)*sec(_X)^2
...
                                     /sin(_X)\1/3
                                     |-------|
                                     \cos(_X)/
                               ff := ------------
                                              2
                                       cos(_X)
...
                             1/3 /   _X   \1/3    2
                             2    |--------|    (_X  + 1)
                                  |   2    |
                                  \-_X  + 1/
                       ff := ----------------------------
                                         2     2
                                     (-_X  + 1)
                                  r := FAIL
...
{--> enter int/risch/alg1, args = _th[1]^2/(_th[1]^2+1)^2*(_th[1]^2-1)/_th[2], 
[x, exp(RootOf(_Z^2+1,index = 1)*x), _root(RootOf(_Z^2+1,index = 1)^2*(exp(
RootOf(_Z^2+1,index = 1)*x)^2-1)^2*(exp(RootOf(_Z^2+1,index = 1)*x)^2+1),3)]
...
<-- ERROR in int/risch/alg1 (now in int/risch/int) = 
"cannot determine if this expression is true or false: %1"}

On the other hand, the expected substitution is tried with Carl's case:

> trace(`int/indef1`):
> int(tan(x)^(1/3)*(1+tan(x)^2), x);
...
{--> enter int/indef1, args = _X^(1/3)
...
                                             4/3
                                         3 _X
                               answer := -------
                                            4
<-- exit int/indef1 (now in int/indef2) = 3/4*_X^(4/3)}
                                                (4/3)
                           answer := 3/4 tan(_X)
           

Indeed floor and family should be revised some day. See also this thread.

You might try this alternative to floor:

> f:=z->-1/Pi*(-Pi*z-1/2*Pi+arccot(cot(Pi*z)))-1/2:
> simplify(f(p)) assuming p>0,p<1;
                                       0
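A quick numerical check of this floor alternative (a Python sketch outside Maple; acot here mimics Maple's arccot, with range (0, Pi)):

```python
import math

def acot(y):
    # Maple's arccot for real arguments: range (0, Pi)
    return math.pi/2 - math.atan(y)

def f(z):
    # The proposed floor alternative, valid for non-integer z
    return -(-math.pi*z - math.pi/2 + acot(1/math.tan(math.pi*z)))/math.pi - 0.5
```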