Venkat Subramanian

386 Reputation

13 Badges

15 years, 242 days

MaplePrimes Activity


These are replies submitted by Venkat Subramanian

@nm 

I believe the old vendor who supported the development of the classic worksheet no longer exists. Maplesoft moved to the Java-based interface (the mw versions), which is a pain and is very slow compared to the classic worksheet.

The current help files are useless if their examples cannot be copied, pasted and tested.

Maplesoft urgently needs to invest in people who can build a simple, easy-to-use GUI for editing/loading/saving/executing (i.e., no gimmicks, just mimic *.mws as much as possible, using 1D input and 2D output in mw format) and shut everything else off. Call this mwlean. Keep the mw format for those who type only one line of code to run some calculations (like using a calculator).


@Carl Love 

Your post clarifies Maple's consistent behavior from a storage and memory point of view. From a new user's point of view (in particular, students just learning programming), I think it is not trivial to understand why the results differ for sets, scalars and matrices.

 

@Carl Love 

In general, there could be better documentation and help about which commands work in evalhf form (compiled form, when appropriate), with encouragement for users to use only those commands as much as possible.

 

 

@Carl Love 

> x:=[1,2,3];

x := [1, 2, 3]

> y:=x;

y := [1, 2, 3]

> y[2]:=3;

y[2] := 3

> x;y;

[1, 2, 3]


[1, 3, 3]

 

In other languages, I don't think it is possible to assign one matrix to another directly with A := B, though I might be wrong. I typically write a loop.

@Carl Love 

 

> x:=Matrix(1,1,[1]);

x := [1]

> y:=Matrix(1,1);

y := [0]

> y:=x;

y := [1]

> y;x;

[1]


[1]

> y[1]:=2;

y[1] := 2

> x;

[2]

> x1:=1;

x1 := 1

> y1:=x1;

y1 := 1

> y1:=2;

y1 := 2

> x1;

1

 

We may be on the same page, but to me it looks like := means different things for scalars and for vectors/matrices.
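For comparison, here is a minimal sketch in Python (illustration only, not Maple) of the same general distinction: rebinding an immutable scalar leaves the original name untouched, while mutating a container through a second name is visible through the first name too.

```python
# Mutable containers are shared by reference; immutable scalars rebind.
x = [1, 2, 3]
y = x          # y and x now name the SAME list object
y[1] = 30      # mutate through y ...
print(x)       # ... and the change is visible through x: [1, 30, 3]

a = 1
b = a          # b is bound to the value 1, independently of a
b = 2          # rebinding b does not touch a
print(a)       # 1
```

In this analogy, Maple's Matrices behave like the Python list here (shared storage), while Maple lists and scalars behave like the scalar case.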

@Carl Love 

 

It took me a long time to realize this simple thing when I started using Maple. However, there is some inconsistency.

If I use x:=2; y:=x; y:=3; x still remains as 2. 

 

@Carl Love 

Carl, thanks. Just fixed the mistake. Yes, I meant 2D input.

@Carl Love 

 

Don't you think Maple should drop 2D input in the future, or at least alert new users? What is the point of 2D input? I never figured out why it exists.

@dharr 

Maple has implemented lsode incorrectly.

Please read my comments in the link given in this post (at the top of this page).

 

When an adaptive solver is used (with error control), there is no problem, because Maple takes a large number of time steps to control the error (this is inefficient). In fact, I am not sure why you are posting the dsolve solution. I have posted the expected answer at t = 0.1. I am also highlighting what you are supposed to get when dt = 0.1 (Singlestep was chosen to integrate with the Euler backward/LSODE method).

Rosenbrock involves only linear solves. It seems to be implemented well for ODEs (not DAEs, for which it will try to convert the DAEs to ODEs of the form dy/dt = f, which fails for genuinely nonlinear DAEs).

Lsode involves Newton-Raphson iterations. That is not done properly (it is stopped after one iteration).

 

When the error tolerance is set to one and the time step to 0.1, you can force lsode to mimic Euler backward with a single time step. Lsode is not doing what it is supposed to do (the results do not match fsolve for Euler backward).
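To make the distinction concrete, here is a minimal sketch in Python (illustration only; the function names are mine) of a single backward-Euler step for the nonlinear test problem dy/dt = -y^2*exp(-y) - 10*y*(1 + 0.01*exp(y)), y(0) = 1, dt = 0.1, solved with one Newton iteration versus Newton iterated to convergence:

```python
import math

# Backward-Euler residual for one step of size 0.1 from y(0) = 1:
#   f(y1) = (y1 - 1)/0.1 + y1^2*exp(-y1) + 10*y1*(1 + 0.01*exp(y1))
def f(y):
    return 10.0*(y - 1.0) + y*y*math.exp(-y) + 10.0*y*(1.0 + 0.01*math.exp(y))

def df(y):  # analytic derivative of the residual
    return 20.0 + (2.0*y - y*y)*math.exp(-y) + 0.1*math.exp(y)*(1.0 + y)

def newton(y, iters):
    for _ in range(iters):
        y -= f(y)/df(y)
    return y

print(newton(1.0, 1))    # one Newton correction only: ~0.4912
print(newton(1.0, 50))   # iterated to convergence:    ~0.4887
```

The two answers differ in the third digit even for this mild nonlinearity; a solver that silently stops after one correction is not doing a backward-Euler step, which is the point being made above.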

 

I am not sure how much you know about Maple's implementation of numerical algorithms for differential equations (I don't mean to be offensive, but it looks to me now that some of the algorithms from the literature are simply implemented incorrectly in Maple, for example LSODE and MEBDFI). Even the mgear algorithm seems to do only 3 iterations (I am not sure it solves Newton-Raphson to convergence). If you had implemented mgear, then my guess/request is that you could help bring mgear back to Maple, perhaps with the ability to solve M dy/dt = f with M singular (zero rows for the algebraic variables); this would give Maple a real DAE solver. If I can be of any assistance, please let me know.

EDIT: Dharr, I just looked at your profile. It is clear that you are very knowledgeable about numerical methods and also electrochemistry (we share similar research interests). On the surface, Maple's dsolve/numeric is fine, but when you dig deeper, it turns out that some of the algorithms are not properly implemented.

 

The issue is not being unable to solve this example. This simple example was chosen to show why lsode fails in Maple for the complicated example given in the link for "Can we trust Maple?". It also highlights that Maple is not doing what it claims to be doing.

 

Note that this is not to say that Maple's solvers don't work. The objective is to highlight the buggy implementation of certain algorithms, which will bite you on difficult, real problems. As a professor and Maple Ambassador, I have to insist on accuracy to make sure Maple doesn't lose credibility. If Maple claims to do certain things and they are not properly done, I am going to say so. I hope that is fine.

 

 

@Preben Alsholm 

Thanks. As I mentioned in my post, mgear is hidden in Maple 7 (I used showstat to find it). Maple removed it from Maple 8 onwards, when it started including the Rosenbrock algorithms.

There should be a way to use it in Maple 7; I am not sure how.

@ 

Current tests suggest that mgear gives better results than lsode.

@abcd 

 

Glad you liked my analysis. You are welcome.

I agree with you that Maple should focus on existing Fortran or C algorithms and just provide wrappers for them instead of writing its own solvers. The issue might have been that some of the linear solvers require licenses; I am not sure.

 

I prefer Maple because at least I can learn the algorithms and concepts behind any command (at least up to Maple 15 or so; I am not sure about the new economics or other packages).

For some reason (perhaps commercial), Maplesoft has decided to move towards Maple T.A. and elementary/high-school math teaching, and to move most of its interesting algorithms and applications to MapleSim (I hate the MapleSim interface).

MapleSim's solvers are more robust (including the ability to solve DAEs), so if you have the patience for the MapleSim interface, try it (after using the Maple classic worksheet, MATLAB, Mathematica and the newer Maple modes are just too slow for me). My codes are more than 1000 lines long, just like yours.

 

I hope I didn't give the impression that the Maple folks are not smart. My interaction with Allan Wittkopf suggests that he knows a lot about algorithms, programming environments and solvers. Maplesoft is probably restricted, for commercial purposes, to providing robust stiff algorithms with full Newton-Raphson only via MapleSim. I respect that. However, they should document lsode/mebdfi and the DAE solvers properly, stating the current limitation of handling only linear equations correctly.

I have used DASSL, DASKR, DASPK in Fortran, IDA in C and RADAU in FORTRAN. Those are the best solvers. I dream of the day when Maple will have a wrapper for these algorithms.

In my group, we routinely use all of these algorithms, but we cheat. We write the equations in C or FORTRAN from Maple, run a command prompt from Maple to compile and execute those codes, and then plot the profiles in Maple. So Maple is the interface, and it also provides the ability to write analytic Jacobians and to create a reduced set of model equations.

If I can do this, I am sure the Maplesoft folks can easily manage to write better wrappers and make better use of external calling.

 

@abcd 

Maple unfortunately has not implemented LSODE properly. Only a single iteration (or maybe 2 iterations) is performed when solving the nonlinear equations. The attached code analyzes dsolve/numeric/lsode in Maple 14.

1. The codes were run with abserr=relerr=1, with a minstep of 0.1 and an initstep of 0.1.

2. lsode[backfull] with Ctrl[9]=1 was used. This is just the Euler backward method with a numerical Jacobian.

3. For linear problems, the expected results are obtained.

4. For nonlinear problems, wrong results are obtained when the error is not controlled with abserr or relerr. That is, lsode in Maple is not equal to lsode in FORTRAN for nonlinear problems.

5. For highly nonlinear problems, very bad results are obtained. This could be due to finding a wrong root among the multiple solutions of the nonlinear equation. A manual Euler backward step is implemented with fsolve to show what the answer from lsode should have been.

 

When time permits (I am not sure I am knowledgeable enough to do this), I will try to take the lsode algorithm and modify it to add a Newton-Raphson loop to fix it. It would be great if someone could show how to run a processed code (with the procedures for f and y0 given by the user) and then directly call the final solver in lsode. If an algorithm is not implemented properly, that should be documented in the help pages/manual. This explains why a DAE solver was not included. (It is straightforward to modify lsode-type algorithms to solve DAEs.)
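To illustrate that last remark, here is a minimal sketch in Python (a toy example of my own, not lsode's code) of one backward-Euler step for a semi-explicit DAE, i.e. M dy/dt = f with a singular mass matrix, where the Newton-Raphson loop is iterated to convergence:

```python
# Toy semi-explicit DAE (my own illustrative example):
#     y' = -y           (differential equation)
#      0 = z - y^2      (algebraic constraint -> zero row in the mass matrix M)
# so M*[y, z]' = F with M = diag(1, 0) singular.  One backward-Euler step of
# size h solves the coupled nonlinear system
#     (y1 - y0)/h + y1 = 0
#     z1 - y1^2        = 0
# by Newton-Raphson iterated to convergence on the 2x2 residual.

def dae_step(y0, z0, h, tol=1e-13, maxit=25):
    y1, z1 = y0, z0                      # predictor: previous values
    for _ in range(maxit):
        r1 = (y1 - y0)/h + y1            # differential residual
        r2 = z1 - y1*y1                  # algebraic residual
        # Jacobian is lower triangular: [[1/h + 1, 0], [-2*y1, 1]],
        # so the Newton update solves by forward substitution.
        dy = -r1/(1.0/h + 1.0)
        dz = -r2 + 2.0*y1*dy
        y1, z1 = y1 + dy, z1 + dz
        if abs(dy) + abs(dz) < tol:
            break
    return y1, z1

y1, z1 = dae_step(1.0, 1.0, 0.1)
print(y1, z1)    # y1 = 1/1.1 ~ 0.909091, z1 = y1^2 ~ 0.826446
```

The algebraic rows simply contribute residual equations with no dy/dt term, which is why an implicit method with a fully converged Newton loop handles them naturally, while a solver that stops after one correction cannot.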

 

restart;

Digits:=15;

Digits := 15

(1)

eq:=diff(y(t),t)=-y(t);

eq := diff(y(t), t) = -y(t)

(2)

C:=array([0$22]);

C := Vector[row](22, {(1) = 0, (2) = 0, (3) = 0, (4) = 0, (5) = 0, (6) = 0, (7) = 0, (8) = 0, (9) = 0, (10) = 0, (11) = 0, (12) = 0, (13) = 0, (14) = 0, (15) = 0, (16) = 0, (17) = 0, (18) = 0, (19) = 0, (20) = 0, (21) = 0, (22) = 0})

(3)

C[9]:=1;

C[9] := 1

(4)

sol:=dsolve({eq,y(0)=1},type=numeric,method=lsode[backfull],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1):

sol(0.1);

[t = .1, y(t) = .909090909090834]

(5)

subs(diff(y(t),t)=(y1-1)/0.1,y(t)=y1,eq);

0.1e2*y1-0.1e2 = -y1

(6)

fsolve(%,y1=0.5);

.909090909090909

(7)

While it gave the expected result for the linear problem, it gives wrong results for nonlinear problems.

sol1:=dsolve({eq,y(0)=1},type=numeric):

sol1(0.1);

[t = .1, y(t) = .904837355407810]

(8)

eq:=diff(y(t),t)=-y(t)^2*exp(-y(t))-10*y(t)*(1+0.01*exp(y(t)));

eq := diff(y(t), t) = -y(t)^2*exp(-y(t))-10*y(t)*(1+0.1e-1*exp(y(t)))

(9)

sol:=dsolve({eq,y(0)=1},type=numeric,method=lsode[backfull],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1):

sol(0.1);

[t = .1, y(t) = .501579294869466]

(10)

subs(diff(y(t),t)=(y1-1)/0.1,y(t)=y1,eq);

0.1e2*y1-0.1e2 = -y1^2*exp(-y1)-10*y1*(1+0.1e-1*exp(y1))

(11)

fsolve(%,y1=1);

.488691779256025

(12)

sol1:=dsolve({eq,y(0)=1},type=numeric):

sol1(0.1);

[t = .1, y(t) = .349614721994122]

(13)

The results obtained are worse than a single Newton iteration using the Jacobian.

eq2:=(lhs-rhs)(subs(diff(y(t),t)=(y1-1)/0.1,y(t)=y1,eq));

eq2 := 0.1e2*y1-0.1e2+y1^2*exp(-y1)+10*y1*(1+0.1e-1*exp(y1))

(14)

jac:=unapply(diff(eq2,y1),y1);

jac := proc (y1) options operator, arrow; 20.+2*y1*exp(-y1)-y1^2*exp(-y1)+.10*exp(y1)+.10*y1*exp(y1) end proc

(15)

f:=unapply(eq2,y1);

f := proc (y1) options operator, arrow; 0.1e2*y1-0.1e2+y1^2*exp(-y1)+10*y1*(1+0.1e-1*exp(y1)) end proc

(16)

y0:=1;

y0 := 1

(17)

dy:=-evalf(f(y0)/jac(y0));

dy := -.508796088545793

(18)

ynew:=y0+dy;

ynew := .491203911454207

(19)

The following procedures confirm that the right-hand side is indeed called only at 0 and 0.1, with backdiag giving slightly better results.

myfun:= proc(x,y) if not type(x,'numeric') or not type(evalf(y),numeric) then 'procname'(x,y);
    else lprint(`Request at x=`,x); -y^2*exp(-y)-10*y*(1+0.01*exp(y)); end if; end proc;

myfun := proc (x, y) if not (type(x, 'numeric') and type(evalf(y), numeric)) then ('procname')(x, y) else lprint(`Request at x=`, x); -y^2*exp(-y)-10*y*(1+0.1e-1*exp(y)) end if end proc

(20)

sol1:=dsolve({diff(y(x),x)=myfun(x,y(x)),y(0)=1},numeric,method=lsode[backfull],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1,known={myfun}):

sol1(0.1);

`Request at x=`, 0.

`Request at x=`, 0.

`Request at x=`, .1

`Request at x=`, .1

[x = .1, y(x) = .501579304183583]

(21)

sol2:=dsolve({diff(y(x),x)=myfun(x,y(x)),y(0)=1},numeric,method=lsode[backdiag],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1,known={myfun}):

sol2(0.1);

`Request at x=`, 0.

`Request at x=`, 0.

`Request at x=`, .1

`Request at x=`, .1

[x = .1, y(x) = .497831388424072]

(22)

 

Very bad results are obtained if the nonlinearity is strong. For example, replace 10 with 100:

eq:=diff(y(t),t)=-y(t)^2*exp(-y(t))-100*y(t)*(1+0.01*exp(y(t)));

eq := diff(y(t), t) = -y(t)^2*exp(-y(t))-100*y(t)*(1+0.1e-1*exp(y(t)))

(23)

sol:=dsolve({eq,y(0)=1},type=numeric,method=lsode[backfull],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1):

sol(0.1);

[t = .1, y(t) = -8.18645608888381]

(24)

subs(diff(y(t),t)=(y1-1)/0.1,y(t)=y1,eq);

0.1e2*y1-0.1e2 = -y1^2*exp(-y1)-100*y1*(1+0.1e-1*exp(y1))

(25)

fsolve(%,y1=1);

0.899472065885205e-1

(26)

sol1:=dsolve({eq,y(0)=1},type=numeric):

sol1(0.1);

[t = .1, y(t) = 0.402875805278245e-4]

(27)


Download Lsodeanalysis.mws

@abcd 

 

Though I am not exactly sure, reading the algorithms behind the earlier mgear in Maple 6 suggests that lsode is perhaps not properly implemented in Maple. That is, instead of iterating Newton-Raphson to convergence for the BDF methods, it does only one or maybe two iterations and stops. This can give wrong results. However, by starting from the initial values and adjusting the time steps, accuracy is controlled (I am not sure this is stable). Whenever Newton-Raphson is not solved to high precision, accuracy is lost (the methods lose their order). If this is true, then it is unfortunate that Maple claims to implement standard methods but implements them in an incomplete manner.

 

Rosenbrock methods seem to be correctly implemented for ODEs.

 

@ Currently running some tests; lsode[backfull] may be close to mgear with a numerical Jacobian. Not sure yet, but I am checking.
