Preben Alsholm

13471 Reputation

22 Badges

20 years, 263 days

MaplePrimes Activity


These are replies submitted by Preben Alsholm

For a very long time (since Maple 6) new packages in Maple have been modules. LinearAlgebra was introduced in Maple 6 as a module. Later, some of the old table-based packages have been turned into modules as well.

Examples:
type(DEtools,`module`); #false
type(DEtools,table); # true
type(PDEtools,`module`); #true
type(plots,`module`); #true
All three packages were introduced before Maple 6 as tables; as the output shows, PDEtools and plots have since been converted to modules, whereas DEtools has not.

@YasH Yes, automatic simplification is prevented in computing your a:

'5^( evalf(1.25, 30) )';


@torabi I suppose that I shall ignore the delta in front of all quantities?

I take the vertical bar |x=0,1  to mean evaluated at x=0 and also at x=1, so there are two conditions for each equation.

Now the two second order conditions at x=0 and theta=0 (respectively) should be written

D[1,1](w)(0,theta)=0 and D[2,2](w)(x,0)=0.

D[1]    means the first order derivative w.r.t. the first variable (here x).
D[1,1] means the second order derivative w.r.t. the first variable (here x).
D[2]   means the first order derivative w.r.t. the second variable (here theta).
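A quick illustration with a made-up function w (not the one from your worksheet):

```
w := (x, theta) -> x^2*sin(theta):
D[1](w)(x, theta);     # 2*x*sin(theta): first derivative w.r.t. x
D[1,1](w)(0, theta);   # 2*sin(theta): second derivative w.r.t. x, evaluated at x=0
D[2,2](w)(x, 0);       # 0: second derivative w.r.t. theta, evaluated at theta=0
```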

@marc sancandi Please notice that I revised the solution procedure. It had an unfortunate error: neither X[1] nor X[2] should be applied to t. This mistake doesn't produce wrong results, but it prevents the use of hardware floats (see below).
The computation with rosenbrock as well as with adamsfull now takes about 0.4s.

##
The following shows that the erroneous t in solproc prevents the use of hardware floats:

3(2); # Result: 3. In Maple, numbers can act as constant functions.
evalhf(3(2));
Error, unable to evaluate function `3` in evalhf
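For comparison, a genuine constant procedure is fine under evalhf; it is only a bare number applied as a function that fails:

```
f := t -> 3:      # a real constant procedure
evalhf(f(2));     # works and returns 3.
```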



@torabi After I remove quite a few implicit multiplication signs in your BC (the 2D parser interprets a space as multiplication) so that I have:
BC := {D[1](U)(0, theta) = 0, D[1](U)(1, theta) = 0, D[1](V)(0, theta) = 0, D[1](V)(1, theta) = 0, D[1](V)(x, 0) = 0, D[1](V)(x, 1) = 0, D[1](W)(0, theta) = 0, D[1](W)(1, theta) = 0, D[1](W)(x, 0) = 0, D[1](W)(x, 1) = 0, D[2](W)(0, theta) = 0, D[2](W)(1, theta) = 0, D[2](W)(x, 1) = 0, U(0, theta) = 0, U(1, theta) = 0, U(x, 0) = 0, U(x, 1) = 0, V(0, theta) = 0, V(1, theta) = 0, V(x, 0) = 0, V(x, 1) = 0, W(0, theta) = 0, W(1, theta) = 0, W(x, 0) = 0, W(x, 1) = 0, (D[1](U))(x, 0) = 0, (D[1](U))(x, 1) = 0, (D[2](W))(x, 0) = 0}
#
then when I run
res := pdsolve({PD1, PD2, PD3}, BC, numeric);

I get


Error, (in pdsolve/numeric/process_IBCs) initial/boundary conditions can only contain derivatives which are normal to the boundary, got (D[1](U))(x, 0)

which tells you that you cannot have D[1](U)(x,0) and the similar ones. You can have e.g. D[2](U)(x,0), which is a derivative normal to the boundary theta=0.


@vv No, I hadn't been aware of this potential problem until now.

I noticed that it doesn't help the rosenbrock method perform any faster in the example in the link given above when used nonprocedurally. That may be because rosenbrock needs to compute the Jacobian, and in doing so expansion occurs. That is just a guess.

 

@vv Here is a procedure for isolating derivatives in a first order system of odes which tries to avoid expansion.

It works in the example given in the link in my question. The problem there was that DEtools[convertsys] expanded a very complicated right side. It also works in the two toy examples below.

IsolateDerivatives:=proc(sys::set(equation)) local nms,SYS,vars,p;
  # p freezes any operand that does not depend on the frozen derivative names
  p:=proc(u,vars) if not depends(u,vars) then freeze(u) else u end if end proc;
  nms:=indets(sys,name);                                   # names present before freezing
  SYS:=evalindets((lhs-rhs)~(sys),specfunc(diff),freeze);  # freeze all derivatives
  vars:=indets(SYS,name) minus nms;                        # the frozen derivative placeholders
  SYS:=map~(rcurry(p,vars),SYS);                           # freeze terms free of the derivatives
  SYS:=solve(SYS,vars);                                    # solve for the frozen derivatives
  thaw(SYS)
end proc:
sys1:={diff(x(t),t)=x(t)*(a-y(t)), diff(y(t),t)=-y(t)*(b-x(t))};
sys2:={2*diff(x(t),t)=x(t)*(1-y(t))-diff(y(t),t), diff(y(t),t)=-y(t)*(1-x(t))};
res1:=IsolateDerivatives(sys1);
res2:=IsolateDerivatives(sys2);
DEtools[convertsys](sys1,{},[x,y],t,Y,YP)[1]; #No expansion
DEtools[convertsys](sys2,{},[x,y],t,Y,YP)[1]; #Expansion
##Now use convertsys on res2:
DEtools[convertsys](res2,{},[x,y],t,Y,YP)[1]; #No expansion
##
## I think it is important that DEtools[convertsys] doesn't change a system in which the derivatives are already isolated.
The person isolating the derivatives may well have known what he was doing.
The little evidence I have so far suggests that convertsys lives up to that. It also does so in the example from the link, after the derivatives have been isolated with (e.g.) IsolateDerivatives.

@vv Yes, freezing the right hand sides will work if we know that the unknowns are not present there.

Freezing could be extended to cover terms that don't depend on the unknowns as in this example:

restart;
eq1:=a+b=b*(c+d)+2*a+7*e;
eq2:=e*d=b*(c+d)+f*(8+k)+k*a;
solve({eq1,eq2},{a,e});
sys:=(lhs-rhs)~({eq1,eq2});
p:=proc(u) if not depends(u,{a,e}) then freeze(u) else u end if end proc;
map~(p,sys);
solve(%,{a,e});
thaw(%);
#####################
I should mention that `DEtools/convertsys` does not behave quite like solve when isolating the derivatives.
This is seen e.g. in this familiar system (Lotka-Volterra), where solve expands, but convertsys doesn't.
restart;
sys:={diff(x(t),t)=x(t)*(1-y(t)), diff(y(t),t)=-y(t)*(1-x(t))}:
DEtools[convertsys](sys,{},[x,y],t,YP):
%[1]; 
        [YP[1] = YP[1]*(1-YP[2]), YP[2] = -YP[2]*(1-YP[1])]
solve(sys,{diff(x(t),t),diff(y(t),t)});
        {diff(x(t), t) = -x(t)*y(t)+x(t), diff(y(t), t) = x(t)*y(t)-y(t)}
## Doing the same in this example shows that convertsys expands some parts, whereas solve expands everything.
sys:={2*diff(x(t),t)=x(t)*(1-y(t))-diff(y(t),t), diff(y(t),t)=-y(t)*(1-x(t))};

1. Remove the two occurrences of units.
2. Be aware that in ?solve,detail there is a section Using Assumptions, which starts out saying
  In most cases, solve ignores assumptions on the variables for which it is solving.
3. Did you really mean L__L(g__ap*Unit('m'), N__1), i.e. that L__L is a function of two variables with input (in this case):
g__ap*Unit('m') and N__1 ?

@vitato According to your link at surrey.ac.uk the denominator should be sin(theta(t)), not sin(phi(t)).

I have corrected my answer below to reflect that change.

@Carl Love I just tried giving a vote up. Now the OP has a reputation of 5.

@marc sancandi I admit to having been put off by your heavy use of sarcasm, which I think is not conducive to a sober and rational discussion.

Does the problem you have show up in an example simple enough to view on a computer screen?
If so, why not upload a worksheet that doesn't rely on an .m file?

In your reply to Tom Leslie you say that the system is huge, so I guess that for now you don't have a simpler example.
Often enough I have heard (or read) numerical analysts say that for any numerical method there exist inputs for which the method won't work.

Should we just trust your statement:

# File "MC.m" below contains an example of one of them (a second degree ODE
# reduced to a couple of 2 first order ODEs ; details about these equations
# are of no importane here).


?

@shahid Try not assigning to RE, and then do:

res := dsolve(eval({bc, eq1, eq2},RE=15), numeric); #Works for me

Now you can experiment with continuation in RE or with using an approximate solution to get success for higher values of RE.
See  ?dsolve,numeric,bvp
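A minimal sketch of the continuation approach, assuming eq1, eq2, and bc are as in your worksheet (the target value 500 is just for illustration; lambda is the continuation parameter, which dsolve takes from 0 to 1):

```
sys := eval({bc, eq1, eq2}, RE = 15 + lambda*(500 - 15)):
res := dsolve(sys, numeric, continuation = lambda);
```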

@xhimi I make 4 (randomly chosen) 16x16 matrices with complex entries (no symbolic entries). I give them shape=hermitian; then I know that the eigenvalues will be real and, more importantly, Maple knows it too.
If you happen to have a Hermitian matrix that hasn't been given the shape hermitian, then there will no doubt be imaginary roundoff errors in the eigenvalues. Those can be removed by using fnormal and then simplify; see the example at the bottom.
restart;
with(LinearAlgebra):
n:=16:
p:=RandomTools:-Generate(complex(float),makeproc);
A:=RandomMatrix(n,generator=p,shape=hermitian);
B:=RandomMatrix(n,generator=p,shape=hermitian);
C:=RandomMatrix(n,generator=p,shape=hermitian);
D1:=RandomMatrix(n,generator=p,shape=hermitian);
L:=[A,B,C,D1];
LE:=[Eigenvectors]~(L):
LEV:=map2(op,2,LE);
LEV[1]; #Matrix of eigenvectors of A
seq(LEV[1][..,i],i=1..n); #Eigenvectors for A
m:=nops(L);
res:=IntersectionBasis([seq([seq(LEV[j][..,i],i=1..n)],j=1..m)]);
interface(rtablesize=16);
res;
LE[1][1]; #Eigenvalues for matrix A
##############################
D2:=Matrix(D1): #Making a new matrix with the same elements as D1, but with default options
MatrixOptions(D2);
ev:=Eigenvectors(D2):
ev[1]; #Vector of eigenvalues
simplify(fnormal(ev[1])); #Stripping roundoff errors


