Venkat Subramanian

386 Reputation

13 Badges

15 years, 331 days

MaplePrimes Activity


These are replies submitted by Venkat Subramanian

@eithne 

At this point, it seems to me that Maple 2019.2 should be withdrawn. There are some benefits, but the negatives are significant.

 

@acer 

Most users would know a good initial guess, and GlobalSolve should first attempt a local search based on the initial guess provided by the user. This is not just a Maple issue; many packages seem to ignore the user-provided initial guess when running a GA.

@acer 

Any global solver should check its result against a local solver before displaying the final answer. The word "global" is abused, and there are no guarantees of global optima, in particular for blackbox objectives.

This may be because the global solver probably ignores the initial guess provided and starts searching from scratch. It should ideally first do a gradient-based search from the user's initial guess.
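As a minimal sketch of the local-first workflow I have in mind (the objective below is a hypothetical example, not from any particular post):

```
restart;
# Hypothetical objective with a user-supplied rough starting point
obj := (x - 1.3)^2 + 100*(y - x^2)^2;

# Polish the initial guess with a gradient-based local search first;
# a global method would then be invoked only if this result is unsatisfactory.
Optimization:-NLPSolve(obj, initialpoint = {x = 1, y = 1});
```

The point is simply that the local result from the user's guess should be compared against whatever the global search returns before anything is reported.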

But we can't win against the craze/push for GA.

Though MaplePrimes will help you with Maple-related questions, I will be surprised if you find help for this. In my opinion, Maple's initial goal was to call define_external and build on external routines. They should have just done this instead of rewriting (sometimes incorrect) code for dsolve/numeric, pdsolve/numeric, etc. Perhaps compiler changes make this complicated, but many commands won't work well with define_external (for Fortran or C). Though I like MapleSim, in my opinion the Maple folks dropped the ball on this after MapleSim came online.

For example, I would like to see RK or other Fortran codes (MEBDF, e.g.) in Fortran, with someone explaining how these get compiled into DLL files callable from Maple. With help on this (including sparse matrices/vectors), Maple could become very powerful by calling all the available open-source ODE/DAE/PDE/FEM solvers and optimizers.
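The kind of bridge I mean would look roughly like the sketch below. Everything here is hypothetical: the routine name, argument list, and library file are placeholders for whatever a compiled MEBDF-style stepper would actually export.

```
# Hypothetical: a Fortran routine  mebdf_step(n, t, y)  compiled into myodes.dll.
# The names and the argument list are placeholders, not a real MEBDF interface.
mystep := define_external('mebdf_step',
    FORTRAN,                          # Fortran calling/name conventions
    'n'::integer[4],                  # number of equations
    't'::float[8],                    # current time
    'y'::ARRAY(1..n, float[8]),       # state vector, updated in place
    'LIB' = "./myodes.dll");
```

Documented, working examples of this pattern (including sparse storage) are what I would like Maplesoft to provide.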

Edits: I have learned a lot from the current implementation of ODE/DAE solvers in Maple, and the transparent nature of the implementation helps in identifying errors when the solvers fail. The same cannot be said about NLPSolve, which frequently fails for gradient-based, inequality-constrained optimization of blackbox objectives. If the plan is to call external routines, implementing them correctly is very important.

@mmcdara 

I had been wanting to reply to this post.

We should write the discretization in flux form and set the interface conductivity to the harmonic mean of the values at the two neighbouring nodes. This applies in 2D as well.
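For a uniform 1D grid, the idea can be sketched in a couple of lines (variable names are mine, for illustration only):

```
# Flux-form finite-volume sketch: the conductivity at the face between
# nodes i and i+1 is the harmonic mean of the two nodal values.
kface := (ki, kip1) -> 2*ki*kip1/(ki + kip1);          # harmonic mean

# Flux across that face for nodal temperatures Ti, Tip1 and spacing h:
flux := (ki, kip1, Ti, Tip1, h) -> -kface(ki, kip1)*(Tip1 - Ti)/h;
```

The harmonic mean keeps the flux continuous across a jump in conductivity, which an arithmetic mean does not.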

 

@tomleslie 

I believe a procedure is the best way to approach this, but there are situations when the OP's approach is better. For example, suppose we need to call fsolve, Roots, or something similar and it fails. With the OP's approach it is easier to debug; inside a procedure it is a pain.

Once someone is absolutely sure of all the commands in a code, a procedure makes sense. Also, the OP may not be able to share the code (copyright, etc.).

@Daniel Skoog 

Please don't stop supporting the classic worksheet. Though it is unstable now, it is what I use; for me it is 100x faster than any of the Java-based interfaces.

Also, it would be great to have an fsolve that works and gives answers to a specified tolerance. Provide an option that uses Newton-Raphson for multiple variables, which is guaranteed to give a better answer than the starting initial guess with just Digits := 15, without increasing Digits to seek unrealistic tolerances.
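What I mean by a plain multivariate Newton-Raphson at Digits := 15 is something like the following sketch (the 2x2 system is a made-up example):

```
restart;
Digits := 15:
# Hypothetical 2x2 system F(x, y) = 0 and its Jacobian
F := [x^2 + y^2 - 4, exp(x) + y - 1];
J := VectorCalculus:-Jacobian(F, [x, y]);

X := Vector([1.0, -2.0]):                 # user-supplied initial guess
for k to 20 do
    Fv := evalf(eval(Vector(F), [x = X[1], y = X[2]]));
    if LinearAlgebra:-Norm(Fv, 2) < 1e-12 then break end if;
    Jv := evalf(eval(J, [x = X[1], y = X[2]]));
    X := X - LinearAlgebra:-LinearSolve(Jv, Fv);   # Newton step
end do:
X;
```

Each step solves the linearized system, so the residual only improves from the starting guess; no inflation of Digits is needed.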

 

@David1 

FWIW, I never delete any old installations, precisely for this reason.

It may be hard to accept, but your best option would be to remove Maple 2018 and reinstall Maple 2017 and MATLAB 2017.

 

@Daniel Skoog 

I wish MapleCloud were disabled by default and enabled only if the user chooses to enable it.

@ Solve the case with the BC u = 1 and then apply Duhamel's superposition theorem; Maple can be used to do this. Or just apply the Laplace transform method: solve the BVP in x, then invert back to the time domain.
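As a minimal sketch of the Laplace route, for the semi-infinite heat equation with a unit boundary condition (this is an illustrative model problem, not the OP's exact equation):

```
restart;
with(inttrans):
# Model problem: u_t = u_xx on x > 0, u(x,0) = 0, u(0,t) = 1.
# Transforming in t gives s*U = U_xx with U(0,s) = 1/s, so the bounded
# solution of the BVP in x is:
U := exp(-sqrt(s)*x)/s;
# Invert back to the time domain; the standard transform pair gives
# u(x,t) = erfc(x/(2*sqrt(t))).
u := invlaplace(U, s, t) assuming x > 0, t > 0;
```

For a time-dependent boundary condition, Duhamel's theorem then superposes this unit-BC response.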

 

I realize there are significant new additions in Maple lately, and with the Physics package many more PDEs are being solved analytically.

IMHO, we should expect analytical solutions for linear ODEs, not PDEs. For PDEs, the boundary conditions complicate the existence, convergence, etc., of any solution approach.

Just presenting a Fourier or SOV solution will not work for all PDEs. I wish the focus and expectation were on guiding the user to use Maple to arrive at a separation-of-variables solution from scratch, or via a Laplace transform or a similarity solution, instead of getting a solution at the click of a button.

I can break any software on the solution of a PDE: just take a simple linear PDE and see if any software can produce both the short-time and the long-time solutions.

Thanks for bringing up bugs, etc. This will be a never-ending game. (For example, I saw another post with a fortuitous analytical solution for the wave/Burgers' equation.)

@ 

Modifying the nonlinear constraint procedure to return its output in W[1] seems to fix this mistake, and the Matrix form works in Maple 2017 with this corrected format. The use of 'evalf' mode for the Matrix/operator form is still seen.

nlc2 := proc(x, W)
    W[1] := x[1]*x[2] + log(x[1] - 0.0001) - 4;
end proc;

@ 

It appears that the result depends on the version of Maple.

The Matrix approach fails in Maple 14.

The Matrix approach fails in Maple 2017 if there is a nonlinear constraint.

Both the operator and Matrix approaches use evalf mode, compared to the algebraic form (which uses evalhf). (By setting infolevel to 3 or more, one can verify this.)

So the help file is technically correct in stating that the Matrix form is more efficient.

 

restart;

obj:=100*(x[1]-2.1)^2+10000.0*(x[2]-3.3)^2+0.00003*exp(-x[1])*exp(-x[2]);

obj := 100*(x[1]-2.1)^2+10000.0*(x[2]-3.3)^2+0.3e-4*exp(-x[1])*exp(-x[2])

(1)

#infolevel[all]:=3;

Optimization:-NLPSolve(obj);

[0.135497428232022180e-6, [x[1] = 2.10000000067749, x[2] = 3.30000000000677]]

(2)

Optimization:-NLPSolve(obj,[x[1]<=2.0],x[1]=1..2,x[2]=3.5..4.5);

[401.000000122603808, [x[1] = 2., x[2] = 3.50000000000000]]

(3)

constraint:=x[1]*x[2]+1*log(x[1]-0.0001)-4;

constraint := x[1]*x[2]+ln(x[1]-0.1e-3)-4

(4)

Optimization:-NLPSolve(obj,[constraint<=0],x[1]=1..2,x[2]=3.5..4.5);

[497.527740765090868, [x[1] = 1.11243865776721, x[2] = 3.50000000000000]]

(5)

Next, the operator method is used

obj1:=unapply(obj,x[1],x[2]);

obj1 := proc (x_1, x_2) options operator, arrow; 100*(x_1-2.1)^2+10000.0*(x_2-3.3)^2+0.3e-4*exp(-x_1)*exp(-x_2) end proc

(6)

 

Optimization:-NLPSolve(obj1,1..2,3.5..4.5);

[401.000000122603808, Vector(2, {(1) = 2., (2) = 3.50000000000000})]

(7)

nlc1:=unapply(constraint,x[1],x[2]);

nlc1 := proc (x_1, x_2) options operator, arrow; x_1*x_2+ln(x_1-0.1e-3)-4 end proc

(8)

Optimization:-NLPSolve(obj1,{nlc1},1..2,3.5..4.5);

[497.527740765090868, Vector(2, {(1) = 1.11243865776721, (2) = 3.50000000000000})]

(9)

nlc1(1.11,3.5);

-0.10730079e-1

(10)

Next, the Matrix approach is used

obj2:=unapply(obj,x);

obj2 := proc (x) options operator, arrow; 100*(x[1]-2.1)^2+10000.0*(x[2]-3.3)^2+0.3e-4*exp(-x[1])*exp(-x[2]) end proc

(11)

Optimization:-NLPSolve(2,obj2);

[1.35497428232022180*10^(-7), Vector(2, {(1) = 2.10000000067749, (2) = 3.30000000000677})]

(12)

Optimization:-NLPSolve(2,obj2,[],[Vector([1,3.5]),Vector([2,4.5])]);

[401.000000122603808, Vector(2, {(1) = 2., (2) = 3.50000000000000})]

(13)

nlc2:=unapply(constraint,x);

nlc2 := proc (x) options operator, arrow; x[1]*x[2]+ln(x[1]-0.1e-3)-4 end proc

(14)

#infolevel[all]:=10;

Optimization:-NLPSolve(2,obj2,1,nlc2,[],[Vector([1,3.5]),Vector([2,4.5])]);

[401.000000122603808, Vector(2, {(1) = 2., (2) = 3.50000000000000})]

(15)

nlc2(Vector([2,3.5]));

3.693097179

(16)

The Matrix form seems to fail when there is a nonlinear constraint in Maple 2017; any Matrix form fails in Maple 14.

Download optimizationexamples.mws

@acer 

Perhaps Maplesoft should not assume that all users want only a simple call/GUI, and should provide more examples of directly calling NAG's subroutines. I wonder if what you found is the reason NLPSolve (SQP approaches) fails for bounded or constrained optimization problems in Matrix or operator form.

For example, take any arbitrary set of nonlinear equations in x1 and x2 and solve it using fsolve. Then solve the same set of equations using NLPSolve with an objective of 1. This should work well.

Then write a procedure that returns 10*(x1-x1sol)^2 + 1000*(x2-x2sol)^2, or something like that, or just 1.

Give bounds and arbitrary nonlinear or linear constraints (stating x1 = x1sol, etc.).

NLPSolve won't give the expected solution.
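The test recipe above can be sketched as follows; the system and its root here are a made-up example with a known solution:

```
restart;
# Hypothetical system with a known root at x1 = 1, x2 = 2:
eqs := {x1^2 + x2^2 - 5 = 0, x1*x2 - 2 = 0};
fsolve(eqs, {x1 = 0.5, x2 = 1.5});

# Same root posed as a constrained problem with a near-constant objective:
Optimization:-NLPSolve(1 + 0*x1, eqs, initialpoint = {x1 = 0.5, x2 = 1.5});

# Or a quadratic objective centred on the known root, with bounds:
Optimization:-NLPSolve(10*(x1 - 1)^2 + 1000*(x2 - 2)^2, eqs,
    x1 = 0 .. 3, x2 = 0 .. 3);
```

If NLPSolve were robust, all three calls should agree with the fsolve root; in my experience the constrained forms often do not.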

@Robert Israel 

In Maple 2015, 2016, and 2017 the following commands work selectively for pdsolve/numeric with compile = true. I am not sure whether the value saved at t = 1 as a function of x is a compiled procedure, but it works. However, trying to recover the solution at different times does not work.
I am restating my request and interest in storing compiled dsolve/numeric solutions.

Thanks

restart;

eq:=diff(y(x,t),t)=diff(y(x,t),x$2);
sol:=pdsolve(eq,{y(x,0)=1,y(0,t)=0,y(1,t)=exp(-t)},numeric,compile=true):
s1:=sol:-value(t=1,output=listprocedure):
uu:=subs(s1,y(x,t)):
uu(0);uu(0.23);uu(1);
save uu,sol,"pdesol.m";
restart;
read("pdesol.m"):
uu(0);uu(0.23);uu(1);
uu(0.1);
s2:=sol:-value(t=1/2,output=listprocedure):

Error, (in pdsolve/numeric/value) cannot determine if this expression is true or false: INFO[errorest]
