sursumCorda

MaplePrimes Activity


These are replies submitted by sursumCorda

@Christian Wolinski Thank you for your explanation. This design seems inconvenient. Anyway, is there any workaround to reduce the above three cases?

@MDD Most of the commands in the MTM package are actually encapsulations of corresponding commands in the LinearAlgebra (or linalg) package.
As for MTM:-rref,

showstat(MTM:-rref, 12):

MTM:-rref := proc(A)
local dims, ndims, r, eval_return, inplace_convert;
       ...
  12           r := LinearAlgebra:-ReducedRowEchelonForm(r)
       ...
end proc


showstat(LinearAlgebra:-ReducedRowEchelonForm, 3):

LinearAlgebra:-ReducedRowEchelonForm := proc(A::~Matrix, {inplace::truefalse := false, outputoptions::list := []})
local OO;
       ...
   3   return LUDecomposition(A, (':-method') = (':-GaussianElimination'),
         (':-output') = [':-R'], ':-conjugate', _options['inplace'],
         `outputoptions[R]` = [OO])
end proc


so its algorithm is Gaussian elimination. As the documentation says, 

The selection of pivots in the GaussianElimination method differs according to the type of the Matrix entries. For Matrices with numeric entries and at least one floating-point entry, pivots are selected according to absolute magnitude. For Matrices with only exact rational or symbolic entries, pivots are selected as the first nonzero element in the current column, moving downwards from the current row.
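A minimal illustration of that pivot-selection difference (just a sketch; the two test matrices below are made up):

LinearAlgebra:-GaussianElimination(Matrix([[1, 2], [3, 4]]));    # exact entries: pivots on the first nonzero entry, the 1
LinearAlgebra:-GaussianElimination(Matrix([[1.0, 2], [3, 4]]));  # a float entry present: pivots by absolute magnitude, the 3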

Besides, the algorithm of linalg:-rref is said to be Gauss-Jordan elimination. (Note that there exists an undocumented `DEtools/pcolechelon`.)

In practice, MATLAB 9.13 (R2022b) is compatible with Maple 2023 (and works well with it), although full support for the corresponding MATLAB versions has not yet been qualified by Maplesoft.
I believe that such outdated information stems from the fact that the Matlab package is in the shadow of the Python package at present. (Note that the underlying engine of MATLAB's Symbolic Math Toolbox (since 2007 or 2008) is still the "vanished" MuPAD, rather than a new product.)

@Scot Gould Many thanks. Such an evaluation strategy is quite powerful indeed! 

Calculations are performed through input that is applied spatially, not temporally. 

@Scot Gould Sorry. What does this mean? Any examples? 

@Carl Love Many thanks. Since convert(..., 'elsymfun') currently does not support arbitrary polynomials, I tried to use PolynomialReduce to split a non-symmetric polynomial into a combination of basis polynomials plus a remainder (with a fixed variable ordering), and then reconstruct the original polynomial from them. But it appears that this time none of the orderings works:

P := (a - c*1)*(b - a*2)*(c - b*4):
B := [(a + b + c)**3, (a*b + b*c + c*a)*(a + b + c), a*b*c]:
PolynomialReduce(P, B, [a, b, c]);
Error, (in quo) arguments must be polynomial in c
PolynomialReduce(P, B, [a, c, b]);
Error, (in quo) arguments must be polynomial in b
PolynomialReduce(P, B, [b, a, c]);
Error, (in quo) arguments must be polynomial in c
PolynomialReduce(P, B, [b, c, a]);
Error, (in quo) arguments must be polynomial in a
PolynomialReduce(P, B, [c, a, b]);
Error, (in quo) arguments must be polynomial in b
PolynomialReduce(P, B, [c, b, a]);
Error, (in quo) arguments must be polynomial in a

@Carl Love Thanks. Another strange thing is that sometimes the output seems incorrect: 

PolynomialReduce(x^5 - x*y^6 - x*y, [x^2 + y, x^2 - y^3], [x, y]):
expand(inner([x^2 + y, x^2 - y^3], %[1]) + %[2]);
            -x*y^6 + x^3*y^3 + x^5 + x*y^4 - x^3*y - x*y^2 - x*y

PolynomialReduce(x^5 - x*y^6 - x*y, [x^2 + y, x^2 - y^3], [y, x]):
expand(inner([x^2 + y, x^2 - y^3], %[1]) + %[2]);
            x^13 + x^7*y^3 - x^9 - x*y^6 - x^3*y^3 + x^5 - x*y


Though the result is often not unique, I think that the reconstructed polynomial should at least equal the original one. Perhaps there is a bug?
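For reference, the same check can be written as a single difference that should expand to zero (a quick sketch):

p := x^5 - x*y^6 - x*y:
B := [x^2 + y, x^2 - y^3]:
q, r := PolynomialReduce(p, B, [x, y]):
expand(p - (inner(B, q) + r));   # should be 0 if the reconstruction is faithful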

@Carl Love Thanks. But I found something strange:

[PolynomialReduce](2*x^4 + y^3 - x^2 + y^2, [x^3 - x, y^3 - y], [x, y]);
                    [[2*x, 1], x^2 + y^2 + y]

[PolynomialReduce](2*x^4 + y^3 - x^2 + y^2, [x^3 - x, y^3 - y], [y, x]);
Error, (in quo/polynom) division by zero


@Carl Love Thanks. The only regret is that this does not output the other solution "[x + y, 2*(y - a)]" (although Maple may have found it internally).

Besides, I think that the original function is also an extension of solve(identity(…, ...), ...) and PDEtools:-Solve(…, ..., 'independentof' = ...) when working with polynomials, since the coefficients in the second example can easily be obtained by those commands when a = 0. But if the remainder is not zero, these two commands will not work.
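A toy sketch of the identity-based approach I mean (the quadratic here is made up, just to show the calling pattern):

solve(identity(p*x^2 + q*x + r = (x + 1)^2, x), {p, q, r});
       # expected: {p = 1, q = 2, r = 1}
PDEtools:-Solve(p*x^2 + q*x + r = (x + 1)^2, {p, q, r}, 'independentof' = x);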

@Rouben Rostamian  Six years have passed; regretfully, such convenient syntax is still not built into Maple. What is the reason?

@sand15 Thanks. Strangely, the older Eigenvals function will return the newer default LinearAlgebra:-Eigenvectors form (instead of the linalg:-eigenvectors form).

@Preben Alsholm I read it in 

showstat(`evalf/matrixexp`, 66);

`evalf/matrixexp` := proc(A)
local a, t, ss, s, i, i1, bk, p, z, n, m, j, k, l, M, N, oldD, islist, tmp;
       ...
  66   if _EnvLinalg95 = true and islist then
           ...
       else
           ...
       end if
end proc


I believe that it is similar to: 

  1. MTM:-expm
  2. `linalg/matrixexp`
  3. linalg:-matrixexp
  4. linalg:-exponential
  5. Student:-LinearAlgebra:-MatrixExponential, and 
  6. LinearAlgebra:-MatrixExponential.

@Preben Alsholm Thanks. The problem is: if I understand correctly, shouldn't setting “_EnvLinalg95 := true:” in theory only affect the obsolete `linalg` package (without influencing the most recent `LinearAlgebra` package)?

The modern `LinearAlgebra` package was originally introduced in Maple 6, released in 2000. Twenty-three years have passed; while `LinearAlgebra:-Eigenvalues` and `LinearAlgebra:-EigenConditionNumbers` work well, the related `LinearAlgebra:-Eigenvectors` still has a cryptic dependency on `_EnvLinalg95` (even if I never execute “with(linalg):”). This is really weird.
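For anyone reading along, this is the minimal experiment behind that remark (a sketch; I only claim that the two result formats differ, as discussed above):

M := Matrix([[2, 0], [0, 3]]):
LinearAlgebra:-Eigenvectors(M);    # default output format
_EnvLinalg95 := true:
LinearAlgebra:-Eigenvectors(M);    # format changes, even though linalg was never loaded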

I am not very familiar with signal processing, but the spline option in the new SignalProcessing:-SavitzkyGolayFilter command seems to be explained in SignalProcessing:-DifferentiateData's documentation: 

• … the savitzkygolay method … use extrapolation on the left and right ends to lengthen data container X so that the derivative of order n is of the same size as X. This extrapolation can be performed in three ways:
    • extrapolation=periodic: The data is assumed to be periodic, so that X[i+m] = X[i] for each integer i, with m being the size of X.
    • extrapolation=polynomial: Using the CurveFitting['PolynomialInterpolation'] command, interpolating polynomials of degree (at most) d are used to extend X on the left and right.
    • extrapolation=spline: Using the CurveFitting['ArrayInterpolation'] command, splines of degree (at most) d are used to extend X on the left and right.

(The Wikipedia article says that this method "has been extended for the treatment of 2- and 3-dimensional data". Can Maple's SignalProcessing['SavitzkyGolayFilter'] be generalized?)

@awass Even if ordinary users read the help page, they can still be confused.
For instance, the documentation of `convert/Vector` mentions: 

• copy : truefalse
Indicates whether a new rtable should be allocated when converting from other rtable types such as Array or Matrix. The default is false, meaning that convert will attempt to provide a reference to the existing rtable instead of allocating a new one.

However, 

Sv := <1, 2>:
Tv := convert(Sv, Vector, copy = false):
evalb(Sv = Tv);
                             false

Will this not confuse users? I think it will.
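A small follow-up sketch: evalb applied to an equation between two rtables does not compare entries, so something like addressof or LinearAlgebra:-Equal is probably what one actually needs in order to see what copy = false did:

Sv := <1, 2>:
Tv := convert(Sv, Vector, copy = false):
evalb(addressof(Sv) = addressof(Tv));   # true only if the very same rtable is returned
LinearAlgebra:-Equal(Sv, Tv);           # entry-wise comparison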

@nm Sorry, there is a typo; I have edited my reply. Thanks. 

 
