Venkat Subramanian

MaplePrimes Activity


These are replies submitted by Venkat Subramanian

Preben's answer is correct.

Similar problems in cylindrical or other coordinates will have a different particular solution. Typically, the separation of variables method handles non-homogeneous boundary conditions by writing c(x,t) = u(x,t) + w(x).

But in this case that won't work; you can get the answer by using c(x,t) = u(x,t) + w(x) + v(t) and solving for w(x) and v(t).
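
As a concrete illustration (my own assumed example, not the original poster's exact problem), take planar diffusion with an insulated face at x = 0 and a constant applied flux at x = 1, which has no steady state:

% Assumed example: constant-flux planar diffusion (no steady state exists)
\frac{\partial c}{\partial t} = \frac{\partial^2 c}{\partial x^2}, \qquad
\left.\frac{\partial c}{\partial x}\right|_{x=0} = 0, \qquad
\left.\frac{\partial c}{\partial x}\right|_{x=1} = q, \qquad c(x,0) = c_0.
% Writing c = u(x,t) + w(x) + v(t), with v carrying the accumulation and w the flux,
v(t) = q\,t, \qquad w(x) = q\left(\tfrac{x^2}{2} - \tfrac{1}{6}\right),
% leaves u_t = u_{xx} with homogeneous Neumann conditions and u(x,0) = c_0 - w(x),
% which separation of variables handles directly.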

More details are in the paper http://www.maple.eece.wustl.edu/pubs/9-JPS-VS-RW-sepvar-2001.pdf. I have also shown this example in the book I wrote on using Maple for chemical engineers.

 

Good luck. The Laplace transform will solve this problem directly, but you will end up with repeated roots and will have to use the Heaviside expansion theorem to invert back to time.
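
For reference, the repeated-root form of the expansion theorem I mean is the standard higher-order-pole residue formula: for a pole of order m at s = s_0 in cbar(s) = P(s)/Q(s), that pole contributes

% Contribution of a pole of order m at s = s_0 to the inverse Laplace transform
\frac{1}{(m-1)!}\;\lim_{s \to s_0}\;\frac{d^{\,m-1}}{ds^{\,m-1}}\!\left[(s - s_0)^m\,\frac{P(s)}{Q(s)}\,e^{st}\right]

to the time-domain solution.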

 

Two flux conditions are common in electrochemical systems (batteries), and such problems don't actually have a steady state.


@ecterrab 

 

Edgardo, thanks for trying the model I gave. My point is that we should not manipulate the index-1 DAE; just take it as given and solve with an implicit formula, as explained before.

As you asked, I am attaching a MATLAB script for the same model (which fails in Maple as of today for N greater than 7). You will see that I am not very good at MATLAB. As of now you have to change both N1 and N (both mean the same thing) to set the number of node points. MATLAB solves it without any trouble. Plots for both N = 2 and N = 100 are attached. I even ran N = 1000 node points; it took 2-3 minutes or so. N = 100 was instantaneous and N = 2 was comparable in speed with Maple. In addition, attaching MATLAB files here is a pain (I have to print multiple files, merge them into a PDF and upload).

For reasons unknown to me MATLAB pushes ode45, but ode15i and ode15s are its best solvers in our experience. Note that while MATLAB is superior for this particular model, the same is not true for ill-conditioned problems (as of today, Maple's analytical Jacobian makes dsolve/numeric more robust and efficient for small, ill-conditioned ODE systems). For index-1 DAEs, I think Maple's approach of manipulating and converting to ODEs should be modified; an option to solve directly with a singular M would be great.

MATLABscripts_and_pl.pdf

I am sure one can code the same in Mathematica.

Venkat Subramanian

www.maple.eece.wustl.edu

@ecterrab 

 

Edgardo,

I completely agree about the C code conversion; the codegen feature of Maple, for example, is a great tool, and there is no argument from my side on the successful conversion to C. My comments are about the implementation of the algorithm, the choice of solver, and the method of solving DAEs as ODEs.

When you use standalone C or FORTRAN solvers, hardly anyone would enter a full dense analytic Jacobian for a large-scale system. There are more examples I can provide. For instance, Burgers' equation in 1D or 2D is a standard problem (solved and demonstrated in Mathematica, MATLAB, etc.). As of now, one will hit the RAM limit with dsolve/numeric when the number of node points is increased.
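
To make the Burgers' example concrete, here is a minimal sketch of the kind of script I mean (my own discretization and parameter choices, not from any Maplesoft example); increasing N is what exposes the dense-Jacobian memory limit:

restart:
# 1D Burgers' equation u_t + u*u_x = nu*u_xx on 0..1 with u(0,t)=1, u(1,t)=0,
# central differences at N interior nodes (assumed boundary/initial data).
N := 100:  h := 1.0/(N+1):  nu := 0.01:
u[0] := t -> 1:  u[N+1] := t -> 0:          # Dirichlet boundary values
odes := { seq( diff(u[i](t), t)
               = -u[i](t)*(u[i+1](t) - u[i-1](t))/(2*h)
                 + nu*(u[i+1](t) - 2*u[i](t) + u[i-1](t))/h^2, i = 1..N ) }:
ics  := { seq( u[i](0) = 1 - i*h, i = 1..N ) }:
sol := dsolve(odes union ics, type = numeric, stiff = true, compile = true, maxfun = 0):
sol(0.5);                                   # solution values at t = 0.5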

Please do note that despite these limitations, our group's first choice is still, and will always be, Maple to start a model, with hand-coded C or FORTRAN when needed. We very rarely start with other software because Maple lets us develop a very complicated model in 15 minutes to an hour. However, MATLAB has very good reach in the systems community. For optimization and other toolboxes MATLAB is far ahead, and we use it for that. In addition, MATLAB allows files to be sent to non-MATLAB users.

Developing these models directly in C would take weeks or months.

 

@ecterrab 

 

Please see the attached code. It comes from a finite-difference discretization of one PDE with a time derivative coupled with one PDE without a time derivative. Don't run this code for high values of N; even N = 5 or 10 (a total of fewer than 25 DAEs or so) would crash any computer.

 

The only way to run this code as of today is to convert all the algebraic equations into ODEs and solve the resulting dy/dt = f system with implicit=true (see the sketch after the worksheet below); there again, size becomes the limitation.

 

restart:

Digits:=15;

Digits := 15

 

Code written by Dr. Venkat Subramanian and MAPLE group members at WU.

N:=2;

N := 2

 

eq1[0]:=3*u1[0](t)-4*u1[1](t)+u1[2](t);

eq1[0] := 3*u1[0](t)-4*u1[1](t)+u1[2](t)

 

eq1[N+1]:=u1[N+1](t)-1;

eq1[3] := u1[3](t)-1

 

eq2[0]:=3*u2[0](t)-4*u2[1](t)+1*u2[2](t);

eq2[N+1]:=u2[N+1](t);

eq2[0] := 3*u2[0](t)-4*u2[1](t)+u2[2](t)

eq2[3] := u2[3](t)

 

 

h:=1/(N+1);

h := 1/3

 

 

for i from 1 to N do eq1[i]:=diff(u1[i](t),t)=(u1[i+1](t)-2*u1[i](t)+u1[i-1](t))/h^2-u1[i](t)*(1+u2[i](t));od:

for i from 1 to N do eq2[i]:=0=(u2[i+1](t)-2*u2[i](t)+u2[i-1](t))/h^2-(1-u1[i](t)^2)*(exp(-u2[i](t)));od:

Eqs:=seq(eq1[i],i=0..N+1),seq(eq2[i],i=0..N+1):

ics:=seq(u1[i](0)=1,i=0..N+1),seq(u2[i](0)=0,i=0..N+1):

#infolevel[all]:=10;

tt := time():
sol:=dsolve({Eqs,ics},type=numeric, optimize=false,stiff=true,compile=true, maxfun=0):
time()-tt;

.265

 

 

with(plots):

 

odeplot(sol,[t,u1[0](t)],0..1,thickness=3,axes=boxed);

 

odeplot(sol,[t,u2[0](t)],0..1,axes=boxed,thickness=3);

 

 

 

PDE-DAE-To_post.mw
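
For completeness, the workaround mentioned above (differentiating the purely algebraic equations once so that everything becomes dy/dt = f) would look roughly like this, reusing the eq1, eq2, ics names from the worksheet in the same session; this is a sketch of the idea, not the attached file:

# Differentiate each algebraic equation once in t, then solve the resulting
# ODE system with the stiff solver and implicit=true.  Consistent initial
# values for the former algebraic unknowns are still required (here the
# worksheet's ics already satisfy the algebraic equations at t = 0).
to_ode := proc(e)
  if type(e, `=`) then diff(lhs(e) - rhs(e), t) = 0
  else diff(e, t) = 0
  end if
end proc:
algEqs := {eq1[0], eq1[N+1], eq2[0], eq2[N+1], seq(eq2[i], i = 1..N)}:
sysODE := {seq(eq1[i], i = 1..N)} union map(to_ode, algEqs):
sol2 := dsolve(sysODE union {ics}, type = numeric,
               stiff = true, implicit = true, maxfun = 0):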

Hope I am not annoying you; my objective is to keep restating the current limitations so that they can be improved in the future.

Venkat Subramanian

www.maple.eece.wustl.edu

pdsolve/numeric seems to use pdsolve/numeric/sparseLU. I am not sure whether it is really a sparse solver or whether it is available in compiled form.

 

Allan,

Hope this comment and discussion are useful. My apologies if this comment is trivial.

 

Consider an index-1 DAE system:

du/dt = -u - exp(-z)

0 = log(z) + z - exp(1-u)

with initial conditions u = z = 1 at t = 0.

Applying the Euler backward scheme:

(u1 - 1)/dt = -u1 - exp(-z1)

0 = log(z1) + z1 - exp(1-u1)

This system can be solved to find u1 and z1 and march in time (without having to differentiate the algebraic equation).
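
A minimal Maple sketch of that marching procedure (my own variable names and step size, just to illustrate the point):

# March the index-1 DAE du/dt = -u - exp(-z), 0 = ln(z) + z - exp(1-u)
# with Euler backward, solving the two nonlinear equations each step by fsolve.
h  := 0.01:                       # time step (assumed value)
u0 := 1.0:  z0 := 1.0:            # initial conditions u = z = 1 at t = 0
for n from 1 to 100 do            # march to t = 1
  sol := fsolve({ (u1 - u0)/h = -u1 - exp(-z1),
                  0 = ln(z1) + z1 - exp(1 - u1) },
                { u1 = u0, z1 = z0 }):    # previous values as initial guesses
  u0 := eval(u1, sol):  z0 := eval(z1, sol):
end do:
u0, z0;                           # u and z at t = 1

Note that the algebraic equation is never differentiated anywhere in this loop.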

For this case the mass matrix M is singular: M = [[1,0],[0,0]]. The Euler backward scheme is

M(Y1 - Y0) = h f(Y1), where Y = [u, z] and h is the time step.

When Newton-Raphson is written for this, one can see that M being singular is not a problem. The Newton-Raphson scheme only requires inverting, or a linear solve with, [M - hJ]. Note that J is nonsingular for index-1 DAEs. Also note that for many large-scale systems, M - hJ is better conditioned than J alone (which is what you face if you convert all DAEs to first-order ODEs). In addition, many DAEs have discontinuous non-homogeneous terms in the algebraic equations.
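
Written out in my notation, with G(Y1) = M(Y1 - Y0) - h f(Y1), the Newton-Raphson iteration is

% Newton-Raphson for the Euler-backward step M(Y_1 - Y_0) = h f(Y_1)
Y_1^{(k+1)} = Y_1^{(k)} - \bigl[M - h\,J\bigr]^{-1} G\!\left(Y_1^{(k)}\right),
\qquad J = \left.\frac{\partial f}{\partial Y}\right|_{Y_1^{(k)}},

so only [M - hJ] ever needs to be factored, and the singularity of M by itself never enters.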

We should leave index-1 DAEs in M dy/dt = f(y) form and solve by applying implicit methods (higher-order versions of Euler backward, or implicit RK methods).

Right now it seems that DAEs are converted to ODEs before numerical integration in Maple's standard DAE solvers. This is not recommended for large-scale index-1 DAEs.

MEBDFI seems to just evaluate residuals as needed (as discussed here) for this system, but it also seems to check the time derivatives of the algebraic variables for the initial conditions, and it includes a procedure for the time derivatives of the algebraic variables as well. I am not sure whether the original FORTRAN MEBDFI code does this.

My experience with dsolve/numeric is that implicit=true, stiff=true is the best solver. Right now it solves only dy/dt = f; changing the algorithm to solve M dy/dt = f should help with many large-scale index-1 DAE systems. M can be singular and can also be a function of y.

M reduces to the identity matrix for ODEs, and to a singular matrix with zero rows for the algebraic variables. This will also benefit from banded or sparse storage.

Venkat Subramanian

www.maple.eece.wustl.edu

@Allan Wittkopf 

 

Allan,

Thanks for the fast response. I am not sure I understand your answers to (1) completely. If you take the rhs (f) and create row and column indices for the nonzero entries, then one should be able to store the Jacobian in a vector instead of a matrix and then fill a separate Matrix (with storage = sparse) from it. Using the Jacobian option of VectorCalculus will not help here. This is not truly a sparse solver, but the memory consumption will be much lower. There would be three (compiled) procedures: one for f, one for Jvec (storing only the nonzero entries), and a numerical Jac which calls Jvec at a particular index. This would help solve much larger systems. I would be very surprised if this were any slower than the regular full Jacobian we have now. I will try to answer more on (1) later.
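
A rough sketch of what I have in mind (hypothetical procedure names; the sparsity pattern here is the tridiagonal one of a 1D finite-difference Laplacian, just for illustration):

# Three pieces: a known sparsity pattern (rowJ, colJ), Jvec filling only the
# nonzero Jacobian entries into a vector, and Jac assembling a storage=sparse
# Matrix from that vector.  The rhs procedure f is omitted for brevity.
N := 10:  nnz := 3*N - 2:
rowJ := Vector(nnz):  colJ := Vector(nnz):
k := 0:
for i from 1 to N do
  for j from max(1, i-1) to min(N, i+1) do
    k := k + 1:  rowJ[k] := i:  colJ[k] := j:
  end do:
end do:
Jvec := proc(y::Vector, v::Vector)     # nonzero entries only (constant here)
  local m;
  for m to nnz do
    if rowJ[m] = colJ[m] then v[m] := -2.0*(N+1)^2;
    else v[m] := 1.0*(N+1)^2;
    end if;
  end do;
end proc:
Jac := proc(y::Vector)                 # assemble a sparse Matrix from Jvec
  local v, J, m;
  v := Vector(nnz):  Jvec(y, v):
  J := Matrix(N, N, storage = sparse):
  for m to nnz do J[rowJ[m], colJ[m]] := v[m] end do:
  J;
end proc:
Jac(Vector(N));                        # example call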

 

Regarding (6), a standalone option would be great even if it means shipping the necessary Maple code as dll and exe files in folders of 100 MB or more (like MATLAB). At the very least I will take your option 1, in which I can send the code to other users (who would need to have Maple), provided we can send it without revealing the equations or parameters. (This is often needed when we work with industry under restrictions on design equations, parameters, etc.) As a first step, even a constant, fixed-step-size code would do.

A numerical Jacobian is a great step forward; Newton-Krylov can follow from that at a later stage. Please do explore sparse matrices whenever you can.

 

Thanks for the attention, Allan. I always tell everyone that I use Maple heavily in research, and I wanted to make sure that if advanced researchers in the field visit this page, they don't leave with the wrong impression from inaccurate claims. That is why I took the time to post.

 

Most researchers in the large-scale DAE field are familiar with the need for good parallel linear solvers integrated into the time integrators. Also, you should allow M to be singular (to address index-1 DAEs). Backward difference schemes don't require M to be non-singular, so the codes should work without requiring the dy/dt = f form.

Venkat Subramanian

www.maple.eece.wustl.edu

Another option to include is Newton-Krylov based parallel solvers for reasonably conditioned systems (they won't work for badly conditioned DAEs). These solvers operate on Jacobian-vector products and don't store the Jacobian at all.
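
The key point, for readers not familiar with these methods: the Krylov iteration only ever needs Jacobian-vector products, which can be approximated matrix-free with one extra residual evaluation,

% Matrix-free Jacobian-vector product used by Newton-Krylov methods
J\,v \;\approx\; \frac{f(y + \varepsilon v) - f(y)}{\varepsilon}, \qquad \varepsilon \text{ small},

so the Jacobian is never formed or stored.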

Venkat Subramanian

www.maple.eece.wustl.edu

@ecterrab 

 

Edgardo,

 

I respectfully disagree on (1). I am a longtime and passionate user of Maple (from Maple V) and I have authored a book on using Maple for chemical engineers.

My group has been successful in using Maple to develop approximate, symbolic, reformulated and perhaps the most efficient models for lithium-ion batteries. This has given me good visibility in academia and in the battery and automobile industries. However, when it comes to numerical efficiency for nonlinear DAEs, in particular large-scale systems, Maple is still not competitive with Fortran or C. Below, I summarize the problems with the current status of Maple's dsolve/numeric solvers.

I have been using kernelopts(opaquemodules = false) and infolevel[all] := 10 to dig more into how dsolve works.

 

(1) For ODEs: Maple's solvers are very good, even for stiff problems. In particular, the use of an analytical Jacobian helps solve stiff ODEs very efficiently. However, dsolve/numeric can currently solve only a limited number of equations. (I have benefited a lot from interacting with Allan Wittkopf. I think compile=true is a great speedup, but it is still not optimal for large-scale systems.)

For example, take a simple parabolic PDE and discretize in x using finite differences. Solve the resulting system using Maple (with compile=true). You will see that it fails when N > 200, or maybe 500. I think this is because Maple computes the Jacobian of every row and stores the full matrix rather than using a sparse matrix. (Both MATLAB and Mathematica switch to sparse solvers for larger systems.) In addition, while the analytical Jacobian is robust, Maple does not provide a way to solve ODEs with an automated numerical Jacobian. My experience is that Maple is an order of magnitude or more slower for systems of 200 or more ODEs. In particular, dsolve cannot be used for the larger systems arising from the discretization of 2D and 3D PDEs. I had explored giving the Jacobian as a procedure, but that rules out using compile=true and events.
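
For concreteness, the test I mean looks roughly like this (a sketch with my own boundary and initial conditions; push N up toward a few hundred to see the failure):

restart:
# Heat equation u_t = u_xx on 0..1 with u(0,t) = u(1,t) = 0, central
# differences at N interior nodes, handed to dsolve/numeric with compile=true.
N := 50:  h := 1.0/(N+1):
u[0] := t -> 0:  u[N+1] := t -> 0:          # boundary values
odes := { seq( diff(u[i](t), t)
               = (u[i+1](t) - 2*u[i](t) + u[i-1](t))/h^2, i = 1..N ) }:
ics  := { seq( u[i](0) = evalf(sin(Pi*i*h)), i = 1..N ) }:
sol := dsolve(odes union ics, type = numeric, stiff = true, compile = true, maxfun = 0):
sol(0.1);                                   # solution values at t = 0.1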

It would be great if Maple could use a compiled sparse solver for large-scale systems, including a numerical Jacobian (with the sparsity pattern or banded structure detected automatically or provided by the user).

For example, my group has written a first-order backward difference code in time with compiled functions and Jacobians (converted to sparse format) that can run 10000 or more ODEs/DAEs without compilation, and maybe 2000 or more ODEs/DAEs with compile=true.

 

(2) For DAEs:

The newer version (Maple 17) seems to have some new, hidden options for Euler backward; I have yet to test them all. Maple's dsolve/numeric is very good and robust even for index-3 DAEs. However, my experience is that even for index-1 nonlinear DAEs (a total of 25 equations), Maple's dsolve fails (it fails in MATLAB and Mathematica as well), though I was able to do some tweaking to solve it approximately in Maple.

I think the reduce step reported by Maple (visible with infolevel = 10) should be optional and should not be used for nonlinear stiff DAEs. Maple spends too much time and RAM there and doesn't solve (RAM goes to 4 GB). This is reduced if I set optimize=false, but it is still not helpful. Maple should provide an option to enter DAEs (index-1, large-scale) as M dy/dt = f where M can be singular, and then just use a backward difference and solve (don't change or rearrange anything, just solve). I think MEBDFI is a step in this direction, but it doesn't offer compile options or event detection, so it is not very useful. There are also sparse and parallel solver versions of MEBDFI in the literature.

 

I will be glad to send codes that fail in Maple for both ODEs and DAEs. Note that I will probably use Maple forever, and I use it far more frequently than MATLAB and Mathematica. But as a professor, I wanted to be accurate and state the current weaknesses so they can be improved. In short, my suggestions would be:

 

(1) Use sparse storage for the Jacobian (when it is written or created). The Jacobian appears to be a regular dense matrix in the current solvers.

(2) Use a parallel sparse solver for the linear-solve part of the code (very few codes in the world do this, so this would be a great advantage).

(3) Provide options for a numerical Jacobian to be used.

(4) Solve the system M dy/dt = f, or f(dy/dt, y) = 0, directly without any rearrangement or reduction.

(5) Enable the sparsity pattern to be provided or detected, and passed to numerical, parallel, compiled sparse linear solvers.

(6) Last, but not least, provide a way to send the code as an exe to others. I am not sure why this is restricted (it is available only with MapleSim for certain solvers; however, dsolve has better options for events than MapleSim). For example, if the dlls created by dsolve could be exported as an exe, that would be great. MATLAB allows files to be sent to non-MATLAB users, which enables faster adoption of codes outside one's group. Currently, we convert codes to C (the resulting solvers are faster but slightly less robust than Maple's) and send those as exe files.

Because of my current fortunate situation, if anyone is interested, I may even be able to pay someone to develop this for my projects.

Thanks

Venkat Subramanian

www.maple.eece.wustl.edu
