These are answers submitted by acer

@C_R As you've seen, one aspect of your original approach is that f(x__0) gets evaluated with x__0 being a mere name. In this case you encountered problems with that.

That is sometimes called "premature evaluation": procedure f is being called before its argument gets an actual numeric value, and a problem ensues because f does not handle a mere name the way you'd prefer.

Previously you were given some ways in which the root-finding could be accomplished (using Maple 2021 at the time).

Your procedure f was defined as,
   f := x__0 -> int(1/sqrt(sin(x__0) - sin(x)), x = 0 .. x__0)

Note that for,
    fsolve(f(x__0) = 2*sqrt(alpha), x__0 = 0 .. 0.9*Pi/2)
the call f(x__0) for x__0 a mere name occurs before fsolve ever receives any arguments. The call f(x__0) is being evaluated prematurely.

In my Maple 2023.0 the approach of delaying the evaluation of f(x__0) does succeed, ie,
    g := alpha -> fsolve('f(x__0) = 2*sqrt(alpha)', x__0 = 0 .. Pi*1/2*0.9):
    g(0.6336176953);

produces the answer 0.5614162053. It is slow, taking about 75 seconds, because the numeric integrations are not fast.

For me, it doesn't take "forever". But it's slow. It's slow because the way in which the numeric integrations are requested makes them slow (not because fsolve itself is slow).

You had used a so-called operator calling sequence in a call to plot. That's another way to avoid the call f(x__0) for x__0 a name. A similar approach for fsolve can also avoid the premature evaluation. This too takes about 75 seconds for me.
   g := alpha -> fsolve(f - 2*sqrt(alpha), 0 .. Pi*1/2*0.9):
   g(0.6336176953);

At the risk of being irritating, I'll mention that (as shown before), with a bit of care numeric integration can do well here. Here's a slightly different tweak from the one I showed before (that earlier one used a coarser error tolerance together with a slightly higher working precision).
    f := x__0 -> int(1/sqrt(sin(x__0) - sin(x)), x = 0 .. x__0, method = _d01ajc):
    g := alpha -> fsolve(f - 2*sqrt(alpha), 0 .. Pi*1/2*0.9):
    g(0.6336176953);

That produces the answer 0.5614162054 in less than one second, for me.
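For completeness, here is a rough sketch of that other kind of tweak, ie. relaxing the error tolerance of the numeric integration while raising the working precision a little. The particular epsilon and precision values below are placeholders of my own, not necessarily the ones from the earlier thread.

    f := x__0 -> evalf[15](Int(1/sqrt(sin(x__0) - sin(x)), x = 0 .. x__0, epsilon = 1e-7)):
    g := alpha -> fsolve(f - 2*sqrt(alpha), 0 .. Pi*1/2*0.9):
    g(0.6336176953);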

As Preben has shown, the integral can be solved symbolically under assumptions and a procedure formed from the result using unapply. Previously I too had shown a symbolic integration under assumptions, using Maple 2021 and a change of variables, followed by unapply. The ensuing fsolve attempt is very fast. As you've seen, cases that do not do well here include that in which the integration is attempted symbolically but without the key assumptions.
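In outline, that symbolic route looks something like the following sketch. Whether int actually returns a closed form here depends on the assumption supplied and on the Maple version; the assumption below is just an illustrative choice of mine.

    F := int(1/sqrt(sin(x__0) - sin(x)), x = 0 .. x__0) assuming x__0 > 0, x__0 < Pi/2:
    f := unapply(F, x__0):
    g := alpha -> fsolve(f(x__0) = 2*sqrt(alpha), x__0 = 0 .. 0.9*Pi/2):
    g(0.6336176953);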

The most notable aspect of your worksheet is that you have formed the equation (1) before you assign values with units to the parameters. You don't have to do it that way, but I will retain that approach because it's key to some of the things you've noticed.

restart

with(Units)

Automatically loading the Units[Simple] subpackage

`α__1` = arctan(l__1*sin(alpha)/(l__1*cos(alpha)+l__2))

alpha__1 = arctan(l__1*sin(alpha)/(l__1*cos(alpha)+l__2))

(1)

l__1 := 50*Unit('mm')

l__2 := 40*Unit('mm')
alpha := 120*Unit('deg')

combine(alpha__1 = arctan(l__1*sin(alpha)/(l__1*cos(alpha)+l__2)), units)

alpha__1 = arctan((5/3)*3^(1/2))

(2)

:-simplify(alpha__1 = arctan(l__1*sin(alpha)/(l__1*cos(alpha)+l__2)))

alpha__1 = arctan((5/3)*3^(1/2))

(3)

Units:-Standard:-simplify(alpha__1 = arctan(l__1*sin(alpha)/(l__1*cos(alpha)+l__2)))

alpha__1 = arctan((5/3)*3^(1/2))

(4)

simplify(alpha__1 = arctan(l__1*sin(alpha)/(l__1*cos(alpha)+l__2)))

alpha__1 = arctan(5*sin(120*Units:-Unit(arcdeg))/(5*cos(120*Units:-Unit(arcdeg))+4))

(5)

Now that the parameters have been assigned their values with units, newly entered input gets parsed with the rebound arithmetic operators.

`α__1` = arctan(l__1*sin(alpha)/(l__1*cos(alpha)+l__2))

alpha__1 = arctan((5/3)*3^(1/2))

(6)

Download units_after.mw

If the Units (or Units:-Simple, or Units:-Standard) package is loaded then the arithmetic operators `+` and `*` are rebound to package exports of the same names, which are units-aware.

The reason why it works directly if you "retype" the input (after assigning values with units to the parameters) is that the input gets parsed using those units-aware versions.

But your original equation (1) is formed before the parameters have values with units. And so the resulting expression contains calls to the original global (units-unaware) arithmetic operators. Mere re-evaluation does not reparse, or re-utilize the rebound units-aware versions, even after the parameters are assigned values with units.

It is unfortunate that Units:-Simple:-simplify is inadequate here, taking longer and not getting the desired result. I will submit a bug report against that aspect. But combine(...,units) or the global :-simplify do fine.
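As a minimal 1-D illustration of that point (my own sketch, paralleling your worksheet): the equation formed before the assignments retains the global operators, so mere re-evaluation leaves the units uncombined, while combine(...,units) or the global :-simplify resolve it.

restart;
with(Units):
eq := alpha__1 = arctan(l__1*sin(alpha)/(l__1*cos(alpha) + l__2)):   # formed before the assignments
l__1 := 50*Unit('mm'):  l__2 := 40*Unit('mm'):  alpha := 120*Unit('deg'):
eq;                   # re-evaluation alone does not combine the units
combine(eq, units);   # units-aware; returns alpha__1 = arctan((5/3)*3^(1/2))
:-simplify(eq);       # the global simplify also handles it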

Based on your description, you might consider line-printing, eg.

restart;

with(StringTools):

S := $1..20;

1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20

Ls := SubstituteAll(String([S])[2..-2]," ",""):

lprint~([LengthSplit(Ls,10)]):

1,2,3,4,5,
6,7,8,9,10
,11,12,13,
14,15,16,1
7,18,19,20

printf~("%s\n",[LengthSplit(Ls,10)]):

1,2,3,4,5,
6,7,8,9,10
,11,12,13,
14,15,16,1
7,18,19,20

 

Download lineprint_ex1.mw

Your complete requirements are not clear, based only on your original Question. Is a fixed width font wanted? Could your "output" contain non-ASCII characters? Does all whitespace need to be removed? What did you mean by the trailing "/", if something other than the linebreaks? Etc.

This might be as much as I can recover, while also being able to reliably Save&Reopen using Maple 2022.2. (I noticed that your original attachment was last saved in Maple 2022.2.)

Opgaver_til_ULA_13_addition_af_spin_ac22.mw

I suggest that you save a copy of this, on the side as a backup, for now.

Is this what you're after?

L := [a,b,c,d];
foldl(`&^`, L[]);

In a worksheet,

L := [a,b,c,d]

[a, b, c, d]

foldl(`&^`, L[]);

`&^`(`&^`(`&^`(a, b), c), d)

lprint(%);

((a &^ b) &^ c) &^ d

a &^ b &^ c &^ d;

`&^`(`&^`(`&^`(a, b), c), d)

lprint(%);

((a &^ b) &^ c) &^ d

Download foldl_ex1.mw

Since the eigenvectors of a nonsymmetric Matrix with floating-point entries can be nonreal in general, the eigenvalue and eigenvector results are consistently returned as a Vector and a Matrix with datatype=complex[8] (at hardware working precision; complex(sfloat) at higher working precision), regardless of whether the values are mixed real/nonreal or all purely real.

In such a case as yours, you are free to cast the results to a float[8] or other Vector and Matrix. See below.

Alternatively, you can change a kernelopts setting so that the visual display of zero imaginary components is suppressed for such Vectors/Matrices.

restart

kernelopts(display_zero_complex_part = true)

A := Matrix(2, 2, {(1, 1) = .8, (1, 2) = .3, (2, 1) = .2, (2, 2) = .7})

Matrix(%id = 36893628602552940292)

H := LinearAlgebra:-Eigenvectors(A)

Vector[column](%id = 36893628602552927044), Matrix(%id = 36893628602552928724)

simplify([H], zero)[]

Vector[column](%id = 36893628602552925588), Matrix(%id = 36893628602552925708)

Vector(H[1], datatype = float[8]), Matrix(H[2], datatype = float[8])

Vector[column](%id = 36893628602552916196), Matrix(%id = 36893628602552916316)

restart

kernelopts(display_zero_complex_part = false)

A := Matrix(2, 2, {(1, 1) = .8, (1, 2) = .3, (2, 1) = .2, (2, 2) = .7})

Matrix(%id = 36893628602552940292)

LinearAlgebra:-Eigenvectors(A)

Vector[column](%id = 36893628602552926924), Matrix(%id = 36893628602552928604)

NULL

Download Eigenvectors_ac.mw
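Since the worksheet output above shows only rtable placeholders, here is the same pair of approaches in compact 1-D form.

A := Matrix([[0.8, 0.3], [0.2, 0.7]]):
vals, vecs := LinearAlgebra:-Eigenvectors(A):

Vector(vals, datatype = float[8]), Matrix(vecs, datatype = float[8]);   # cast to real storage (safe here; the values are purely real)

kernelopts(display_zero_complex_part = false):   # or just suppress display of the zero imaginary parts
vals, vecs;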

For this example you could perform the simplification after substituting your example numeric values, and in this case avoid the small imaginary artefact due to floating-point round-off error.

For this example that also happens to give better accuracy than applying evalf without the simplification (even after removal of the spurious imaginary artefact).

restart;

INV := invztrans((z - 1)^2/(a*z^2 + b*z + c), z, n):

INVnew := simplify(allvalues(INV)):

G :=eval(INVnew, [a=2,b=1,c=4,n=16])

-(1/496)*(-(1/65536)*(-2*(-31)^(1/2)+38)*(-1/2+(1/2)*(-31)^(1/2))^16+(2*(-31)^(1/2)+38)*(-1/4-(1/4)*(-31)^(1/2))^16)*(-31)^(1/2)

simplify(G);

-16429327/131072

evalf(%); # decent accuracy

-125.3458176

INVnew2 := rationalize(allvalues(INV)):

H := eval(INVnew2, [a=2,b=1,c=4,n=16]);

(1/3968)*(-16*(-31)^(1/2)*(-1/4+(1/4)*(-31)^(1/2))^16-16*(-31)^(1/2)*(-1/4-(1/4)*(-31)^(1/2))^16+304*(-1/4+(1/4)*(-31)^(1/2))^16-304*(-1/4-(1/4)*(-31)^(1/2))^16)*(-31)^(1/2)

simplify(H);

-16429327/131072

evalf(%); # decent accuracy

-125.3458176

evalf(G);
simplify(fnormal(%, 9),'zero'); # less accurate

-125.3458180-0.1122533138e-7*I

-125.345818

evalf(H);
simplify(%,'zero'); # less accurate

-125.3458181+0.*I

-125.3458181

Download ztrans_ex1.mw

Your attempt at procedure df is calling diff(f(x),x) where x has a numeric value. That does not make sense.

Also, your loop will not terminate if the process is not converging to a root (because of the choice of initial x0).

Here is an adjustment that computes the derivative more properly, and which also terminates the loop after a maximum number of iterations. It also uses evalf in a few key places to help ensure that the computations are done in floating-point even if x0 is numeric but not a float (eg. an exact rational).

I haven't changed your overall process, mostly because I suppose that this is coursework and that you are supposed to figure out and refine the approach. Perhaps it would be better if you could repeat the process at other initial x0 choices. Putting the code into a reusable procedure could make it easier for you to try that.

restart;

f := x -> exp(x^2)*sin(x - 5);
df := D(f);
#df := unapply(diff(f('x'),'x'),'x');
x0 := -1.0;
tol := 0.00001;
for i to 3 do
    x := x0;
    n := 0;
    while n<10 and tol < abs(evalf(f(x))) do
        x := evalf(x - f(x)/df(x));
        n := n + 1;
    end do;
    printf("Root %d: %.5f (after %d iterations f=%.5e)\n", i, x, n, f(x));
    x0 := x + 1.0;
end do:

proc (x) options operator, arrow; exp(x^2)*sin(x-5) end proc

proc (x) options operator, arrow; 2*x*exp(x^2)*sin(x-5)+exp(x^2)*cos(x-5) end proc

-1.0

0.1e-4

Root 1: -1.28319 (after 7 iterations f=9.31896e-10)
Root 2: -13.12697 (after 10 iterations f=4.53770e+74)
Root 3: -11.71441 (after 10 iterations f=3.34134e+59)

Download rf0.mw
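Along the lines of that last remark, here is one hypothetical way to wrap the iteration in a reusable procedure (my own sketch; the name, defaults, and return values are made up), which makes it easier to try other starting points.

NewtonRoot := proc(f, x0, {tol::positive := 1.0e-5, maxiter::posint := 50})
    local df, x, n;
    df := D(f);
    x := evalf(x0);
    n := 0;
    # Newton iteration, stopping at the residual tolerance or the iteration cap
    while n < maxiter and tol < abs(evalf(f(x))) do
        x := evalf(x - f(x)/df(x));
        n := n + 1;
    end do;
    return x, n;
end proc:

NewtonRoot(x -> exp(x^2)*sin(x - 5), -1.0);   # root near -1.28319, plus the iteration count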

By splitting up the surface we can arrive at a nicer result. Not only is the gap narrower here, but the jaggedness along the top and bottom boundaries has cleared up.

Unfortunately this is tricky to do programmatically in general.

The gap gets smaller if you increase the resolution (eg. grid=[200,200]), and the overall appearance looks quite nice with style=surface to suppress the grid-lines.

restart;

f := piecewise(x=0, 0, x <> 0, x*y^2/(x^2+y^4));

f := piecewise(x = 0, 0, x <> 0, x*y^2/(y^4+x^2))

plots:-display(
plot3d([x,y,f], y=-1..0, x=-1..-y^2),
plot3d([x,y,f], y=-1..0, x=-y^2..0),
plot3d([x,y,f], y=-1..0, x=0..y^2),
plot3d([x,y,f], y=-1..0, x=y^2..1),
plot3d([x,y,f], y=0..1, x=-1..-y^2),
plot3d([x,y,f], y=0..1, x=-y^2..0),
plot3d([x,y,f], y=0..1, x=0..y^2),
plot3d([x,y,f], y=0..1, x=y^2..1),
labels=[x,y,'f']);

Download plot3d_ridge.mw

There are other ways to accomplish something similar. (You could even get the gap to vanish altogether...) This is simply the first thing I considered.
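For instance, the single-surface variant mentioned at the start (finer grid, with style=surface suppressing the mesh lines) is simply,

plot3d(f, x = -1 .. 1, y = -1 .. 1, grid = [200, 200], style = surface,
       labels = [x, y, 'f']);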

You are passing an equation,
    C_prev = -0.7676394482e16
to SFloatMantissa and SFloatExponent. That's why those two commands are returning their very same argument, unchanged.

This behaviour is documented in the very first bullet point of the Description section of the relevant Help page.

Try passing the actual floating-point value instead of that equation.

C_prev_V := eval(C_prev, it[1]);

                              15
   C_prev_V := -7.676394482 10  

SFloatMantissa(C_prev_V);

        -7676394482

SFloatExponent(C_prev_V);

             6

Or, in full,

restart;

it := solve({
83.0 =  (57.9467777777778) * C_prev  + (-1.001) * C_ksteps  + (-67.1782222222222) * C_fat  + (-91.8695555555555) * C_carb  + (-24.4021555555556) * C_prot  + (-11.5003777777778) * C_fiber  + (-3.21432222222223) * C_sugar  + (-14.1697111111111) * C_saturated  + (-1.61272222222222) * C_fasted  + (1.01),
153.0 =  (-11.1222222222222) * C_prev  + (-1.001) * C_ksteps  + (99.9887777777778) * C_fat  + (22.2444444444444) * C_carb  + (48.3705444444444) * C_prot  + (17.2283222222222) * C_fiber  + (-5.21632222222223) * C_sugar  + (9.15358888888888) * C_saturated  + (0.389277777777778) * C_fasted  + (1.01),
84.0 =  (-17.1282222222222) * C_prev  + (-1.001) * C_ksteps  + (49.9387777777778) * C_fat  + (17.2394444444444) * C_carb  + (7.72994444444445) * C_prot  + (5.01612222222222) * C_fiber  + (29.8186777777778) * C_sugar  + (10.6550888888889) * C_saturated  + (1.39027777777778) * C_fasted  + (1.01),
78.0 =  (-8.11922222222223) * C_prev  + (0.) * C_ksteps  + (-23.1342222222222) * C_fat  + (16.2384444444445) * C_carb  + (-26.1038555555556) * C_prot  + (-12.7015777777778) * C_fiber  + (31.8206777777778) * C_sugar  + (6.05048888888888) * C_saturated  + (-1.61272222222222) * C_fasted  + (1.01),
87.0 =  (-8.11922222222223) * C_prev  + (3.003) * C_ksteps  + (-19.1302222222222) * C_fat  + (28.2504444444444) * C_carb  + (-16.4942555555556) * C_prot  + (-5.09397777777778) * C_fiber  + (6.79567777777777) * C_sugar  + (14.7591888888889) * C_saturated  + (-0.111222222222222) * C_fasted  + (1.01),
87.0 =  (37.9267777777778) * C_prev  + (-4.004) * C_ksteps  + (-48.1592222222222) * C_fat  + (-83.8615555555555) * C_carb  + (-6.08385555555555) * C_prot  + (0.611722222222222) * C_fiber  + (-58.0691222222222) * C_sugar  + (-20.0756111111111) * C_saturated  + (-1.61272222222222) * C_fasted  + (1.01),
133.0 =  (-17.1282222222222) * C_prev  + (3.003) * C_ksteps  + (-56.1672222222222) * C_fat  + (15.2374444444444) * C_carb  + (13.4356444444444) * C_prot  + (3.01412222222222) * C_fiber  + (13.8026777777778) * C_sugar  + (-12.2678111111111) * C_saturated  + (-0.111222222222222) * C_fasted  + (1.01),
78.0 =  (-11.1222222222222) * C_prev  + (1.001) * C_ksteps  + (52.9417777777778) * C_fat  + (-2.78055555555555) * C_carb  + (5.82804444444444) * C_prot  + (-0.589477777777777) * C_fiber  + (-39.5506222222222) * C_sugar  + (-0.0556111111111198) * C_saturated  + (4.39327777777778) * C_fasted  + (1.01),
84.0 =  (-23.1342222222222) * C_prev  + (0.) * C_ksteps  + (10.8997777777778) * C_fat  + (79.3014444444444) * C_carb  + (-2.28005555555555) * C_prot  + (4.01512222222222) * C_fiber  + (23.8126777777778) * C_sugar  + (5.95038888888888) * C_saturated  + (-1.11222222222222) * C_fasted  + (1.01) },
[C_prev, C_ksteps, C_fat, C_carb,
C_prot, C_fiber, C_sugar, C_saturated,
C_fasted]);

[[C_prev = -0.7676394482e16, C_ksteps = 0.1095758760e18, C_fat = 0.7856937061e16, C_carb = -0.5792833066e16, C_prot = -0.1954662069e17, C_fiber = 0.3025217874e17, C_sugar = 0.7496058869e15, C_saturated = -0.1779505040e17, C_fasted = -0.1062355291e18]]

C_prev_V := eval(C_prev, it[1]);

-0.7676394482e16

SFloatMantissa(C_prev_V);

-7676394482

SFloatExponent(C_prev_V);

6

Download SFloatMantissa.mw

Confirmed with,

   restart;
   int((b*g*x+a*g)^2/(A+B*ln(e*(b*x+a)/(d*x+c))),x):

using Maple 2023.0.

It appears to be occurring here, where an argument to Utils:-Userinfo:-PrintInfo is wrapped in print (instead of, say, sprintf).

   showstat(IntegrationTools:-Indefinite:-Stage2,25);

This is just an idea I saw elsewhere; I'll have to wait and see if it makes a difference.

I have toggled off "Enable MapleCloud connection" in the popup dialogue,
     Tools -> Options -> Network

Elementwise mapping of DrawGraph works fine for me over CubicVT, your table of graphs. The result will, naturally, also be a table.

tablemap_ac.mw

I elected to convert that resulting table to a list, after which there are several easy ways to plot them together.
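In outline that goes as follows, where CubicVT stands for your table of graphs (I'm assuming it was constructed as in your worksheet, with GraphTheory loaded).

with(GraphTheory):
plts := DrawGraph~(CubicVT):       # elementwise over the table; the result is also a table
Lplts := convert(plts, list):      # convert the table of plots to a list
plots:-display(Array(1 .. nops(Lplts), Lplts));   # eg. show them side by side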

Your attachment seems to have unevaluated Graph calls in it, which makes it doubtful whether GraphTheory was actually loaded before those constructions were attempted. (Orphaned output, and a lack of restart, are additional small hints of a possible lack of care about programmatic flow...)

If I were to do it programmatically then I might start with something more like the following, which I think is a little less opaque,

restart;

 

shiftsum := proc(S, s)
  local p2 := op(2,S), v := lhs(p2);
  op(0,S)(eval(op(1,S), v=v+s),
          v = map(`-`, rhs(p2), s),
          op(3..,S));
end proc:

 

S0 := sum(a[k]*(k + r)*(k + r - 1)*x^(k + r - 1), k = 0 .. infinity);

sum(a[k]*(k+r)*(k+r-1)*x^(k+r-1), k = 0 .. infinity)

shiftsum(S0, 1);

sum(a[k+1]*(k+1+r)*(k+r)*x^(k+r), k = -1 .. infinity)

shiftsum(Sum(f(r), r = a .. b), u);

Sum(f(r+u), r = a-u .. b-u)

Download shiftsum.mw

Of course, one could also add type-checks, eg. ensuring that S is of type {Sum,sum}, that op(2,S) is of type name=range, etc.
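For instance, a lightly checked variant might look like the following sketch (the particular checks and error message are just my own choices).

shiftsum := proc(S::specfunc({Sum, sum}), s::algebraic)
  local p2, v;
  p2 := op(2, S);
  if not type(p2, name = range) then
    error "expected a summation with second operand of the form k = a .. b, but received %1", S;
  end if;
  v := lhs(p2);
  op(0, S)(eval(op(1, S), v = v + s),
           v = map(`-`, rhs(p2), s),
           op(3 .., S));
end proc: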

You are passing a flat sequence of eight arguments, after s.

foldl doesn't magically read your mind and know that you intended it to take successive pairs from that flat sequence of eight things. You could see that by changing SubstituteAll to a dummy name like, say, SA.

How about passing four two-element lists (pairs) instead, and having a custom folding operator take the operands from its second argument before it calls SubstituteAll?

  foldl( (a, b)->SubstituteAll( a, b[] ), s, seq( [L1[i], L2[i]], i=1..4 ) );

Eg,

s:="[ (0, 1), (1, 2), (1, 10), (2, 3), (3, 4), (4, 5),
(4, 9), (5, 6), (6, 7), (7, 8),(8, 9), (10, 11), (11, 12),
(11, 16), (12, 13), (13, 14), (14, 15), (15, 16)]":

with(StringTools):

L1:= "()[]": L2:= "{}{}":

foldl((a,b)->SubstituteAll(a,b[]),s,seq([L1[i],L2[i]],i=1..length(L1)));

"{ {0, 1}, {1, 2}, {1, 10}, {2, 3}, {3, 4}, {4, 5},
{4, 9}, {5, 6}, {6, 7}, {7, 8},{8, 9}, {10, 11}, {11, 12},
{11, 16}, {12, 13}, {13, 14}, {14, 15}, {15, 16}}"

Download foldl_ex.mw

ps. I find that much more legible than a mixture of extra unevaluation quotes and eval calls.
