dharr

Dr. David Harrington

5712 Reputation

21 Badges

20 years, 342 days
University of Victoria
Professor or university staff
Victoria, British Columbia, Canada


I am a professor of chemistry at the University of Victoria, BC, Canada, where my research areas are electrochemistry and surface science. I have been a user of Maple since about 1990.

MaplePrimes Activity


These are replies submitted by dharr

@Tamour_Zubair There isn't an analytical solution for y as a function of t; the RootOf is the best you can do. On the other hand, from the implicit solution you can get t as an analytical function of y. 

Please check your boundary condition D(f)(0) = lambda + xi*(D^2)(f)(0). Note that if you wanted the second derivative and not the square of the first derivative, then you need D(f)(0) = lambda + xi*(D@@2)(f)(0).
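Written out side by side (a sketch using only the names f, lambda, and xi from your condition, not your full system):

```maple
# square of the first derivative at 0: (f'(0))^2
bc_square := D(f)(0) = lambda + xi*D(f)(0)^2;

# second derivative at 0: f''(0), via the repeated-composition operator @@
bc_second := D(f)(0) = lambda + xi*(D@@2)(f)(0);
```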

@NeraSnow Your procedure to upload the worksheet was correct, but the website is not working correctly today, as happens sometimes.

@Carl Love Yes, I tried something like this expecting the new inline loops might be more efficient, but was disappointed (as I've found a couple of times before).

There is also the issue of counts>9, which I missed originally. So "1111111111111" gives "131" in my code, but "11111111111111" in yours.

@sursumCorda Arrays are usually efficient, but here adding on to the end with the ,= operation is part of the secret.
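As a minimal sketch of that append idiom (the ,= operator on a 1-D Array, available since Maple 2018):

```maple
A := Array([]):      # empty 1-D Array
for i to 5 do
  A ,= i^2;          # ,= appends in place, avoiding a copy of A each time
end do:
A;                   # Array with entries 1, 4, 9, 16, 25
```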

@AngieS7 It means it can't find a solution. You can help it by providing ranges (or an initial guess), but it might also mean there is no solution.

@AngieS7 You can tell fsolve to only look for solutions in specific ranges by using something like

fsolve({eq1, eq2, eq3, eq4, eq5, eq6}, {A=0..5, B=3..7, C=0..infinity, omega=0..10, x1=0..1, x2=0..1});

@AngieS7 It is hard to tell why this hasn't worked without seeing your worksheet. You can upload it with the green up-arrow in the MaplePrimes editor.

Edit: looks like you might have changed vb:=0.5 to vb:=5

@hojat Lambda still has a 1[1] and a 2[1].

@hojat Please confirm that you are minimizing a function of 12 variables (from indets(Lambda,name) ). If so this will be an extremely difficult problem and I'm guessing it generates some complex numbers because it doesn't know enough about these values. What are the approximate values of these variables? For nonlinear problems, an initialpoint is almost certainly needed.
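For example, the initialpoint option looks like this (with a made-up objective, since I can't see the actual Lambda):

```maple
with(Optimization):
obj := (x - 1)^2 + 100*(y - x^2)^2:        # stand-in nonlinear objective (Rosenbrock)
Minimize(obj, initialpoint = [x = -1.2, y = 1]);   # starting point guides the local solver
```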

@ianmccr This recasting into alternate forms happens in several situations, including just printing the code of a procedure with a semicolon. Since these are often less readable, I find this frustrating but don't know of a workaround.

@sursumCorda I see there that the symmetric NAG routine is recommended over the general one because it is "more accurate", so a NAG general routine is probably not useful here (one can't increase Digits then), though the specialized ones may be. Maple probably has a special routine for exp (I didn't check), which is why @Axel Vogt's method works.

Detecting the non-diagonalizable case will always be difficult if Jordan forms are used, because that is a difficult computation. Eigenvectors returns a "zero eigenvector" in that case but how close to zero does it have to be to be detected? [Edit: I see Eigenvectors can just output the true eigenvectors, so has some detection method for this.] These situations, including when two numerical eigenvalues are the same, rely on some careful assessments and are tricky in general - maybe the linalg method that worked well in this case would not work so well in other cases.

However, the fact that it is so bad in quite a few cases means it needs some improvement for sure.

@tomleslie Yes, it is interesting how often this question is asked. Perhaps it should be in huge letters on the help pages for plot and plot3d.

@sursumCorda I had a closer look into how the algorithm works, and my analysis below suggests that in the numerical case, close eigenvalues that should be treated as degenerate are not detected in LinearAlgebra:-MatrixFunction, but are in linalg:-matfunc. I have submitted an SCR.


To go beyond this, the normal case is well-behaved and can be done just by applying the function to the eigenvalues in a unitary decomposition (Horn and Johnson, Theorem 6.2.37). So for numerical matrices with shape=symmetric or shape=hermitian (or the skew versions), that would be a separate case worth doing.
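For the symmetric case that separate path could be sketched like this (my own sketch, not Maple's internal code; MatrixInverse is used rather than assuming the eigenvector Matrix is exactly unitary):

```maple
with(LinearAlgebra):
A := Matrix([[2., 1.], [1., 3.]], shape = symmetric):   # small numeric symmetric example
E, V := Eigenvectors(A):                                # real eigenvalues, well-conditioned eigenvectors
# f(A) = V . f(Lambda) . V^(-1), here with f = exp applied entrywise to the eigenvalues
fA := V . DiagonalMatrix(map(exp, E)) . MatrixInverse(V);
```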

Matrix function algorithm

restart

with(LinearAlgebra)

A := Matrix([[1, 1, 3], [0, 1, 0], [0, I, 2]]); f := x/(exp(x)-1)

(Matrix output elided)

x/(exp(x)-1)

Exact answer

fA := MatrixFunction(A, f, x)

(Matrix output elided)

The Matrix is non-diagonalizable, with eigenvalues 2, 1, 1

JordanForm(A); cp := CharacteristicPolynomial(A, x)

(Matrix output elided)

x^3-4*x^2+5*x-2

As a first example, let's try not the function f but a quintic polynomial:

p := add((i+1)*x^i, i = 0 .. 5)

6*x^5+5*x^4+4*x^3+3*x^2+2*x+1

MatrixFunction(A, p, x); eval(p, x = A)

Matrix([[21, 70+690*I, 900], [0, 21, 0], [0, 300*I, 321]])

(Matrix output elided)

Any polynomial can be reduced to one of degree < n that gives the same result: we just find the remainder on dividing by a polynomial that annihilates A, so the characteristic polynomial or minimal polynomial (the same in this case) can be used.

r := rem(p, cp, x); eval(r, x = A)

r := 230*x^2-390*x+181

(Matrix output elided)

mp := MinimalPolynomial(A, x)

x^3-4*x^2+5*x-2

The polynomial r is an interpolating polynomial since it is equal to p at the eigenvalues. The slopes are also equal at the doubly degenerate eigenvalue.

plot([p, r], x = 0 .. 5, 0 .. 1000, color = [red, blue])

For a general function described by an "infinite-degree" polynomial (a Taylor series), we may expect its values (and some slopes) at the eigenvalues to be matched by a polynomial r of degree less than n. This is the interpolating polynomial. Then f(A) = r(A).

In the present case r has to have the same values as f at the eigenvalues 1 and 2, and at the doubly degenerate eigenvalue 1 the slopes have to be the same.

f1 := eval(f, x = 1); f11 := eval(diff(f, x), x = 1); f2 := eval(f, x = 2)

1/(exp(1)-1)

1/(exp(1)-1)-exp(1)/(exp(1)-1)^2

2/(exp(2)-1)

fs := evalf([f1, f11, f2])

[.5819767070, -.3386968875, .3130352854]

Let's find a quadratic with the same values. An interpolating algorithm is used in practice, but conceptually it is simple to do it this way:

q := a*x^2+b*x+c

a*x^2+b*x+c

ans := solve({eval(q, x = 1) = f1, eval(q, x = 2) = f2, eval(diff(q, x), x = 1) = f11}, {a, b, c})

{a = -(exp(2)*exp(1)-2*(exp(1))^2-2*exp(2)+3*exp(1))/(((exp(1))^2-2*exp(1)+1)*(exp(2)-1)), b = (2*exp(2)*exp(1)-4*(exp(1))^2-5*exp(2)+6*exp(1)+1)/((exp(1)-1)^2*(exp(2)-1)), c = 2*((exp(1))^2+exp(2)-2*exp(1))/(((exp(1))^2-2*exp(1)+1)*(exp(2)-1))}

qnum := eval(q, ans); qf := evalf(%)

-(exp(2)*exp(1)-2*(exp(1))^2-2*exp(2)+3*exp(1))*x^2/(((exp(1))^2-2*exp(1)+1)*(exp(2)-1))+(2*exp(2)*exp(1)-4*(exp(1))^2-5*exp(2)+6*exp(1)+1)*x/((exp(1)-1)^2*(exp(2)-1))+2*((exp(1))^2+exp(2)-2*exp(1))/(((exp(1))^2-2*exp(1)+1)*(exp(2)-1))

0.6975546596e-1*x^2-.4782078194*x+.9904290612

eval(qf, x = 1), eval(diff(qf, x), x = 1), eval(qf, x = 2)

.5819767078, -.3386968875, .3130352862

And we get the correct exact result.

simplify(eval(qnum, x = A)); simplify(%-fA)

Matrix([[1/(exp(1)-1), ((-1+9*I)*exp(1)-(3*I)*exp(2)-1)/((exp(2)-1)*(exp(1)-1)), -3/(exp(1)+1)], [0, 1/(exp(1)-1), 0], [0, -I/(exp(1)+1), 2/(exp(2)-1)]])

(Matrix output elided)

But we have to know whether the eigenvalues are degenerate, which for numerical eigenvalues is difficult. For example, suppose we assumed the eigenvalue pair at 1 was actually two close values:

eps := 0.5e-8; ff := [eval(f, x = 1.0), eval(f, x = 1+eps), eval(f, x = 2.)]

0.5e-8

[.5819767070, .5819767052, .3130352855]

Then the quadratic we find is

`~`[`=`]([eval(q, x = 1.0), eval(q, x = 1+eps), eval(q, x = 2.)], ff); ans2 := solve({%[]}, {a, b, c})

[1.00*a+1.0*b+c = .5819767070, 1.000000010*a+1.000000005*b+c = .5819767052, 4.*a+2.*b+c = .3130352855]

{a = 0.9105857850e-1, b = -.5421171570, c = 1.033035286}

qf2 := eval(q, ans2); eval(qf2, x = 1.), eval(qf2, x = 1+eps), eval(qf2, x = 2.)

0.9105857850e-1*x^2-.5421171570*x+1.033035286

.5819767075, .5819767057, .3130352860

and the matrix function is

simplify(eval(qf2, x = A))

(Matrix output elided)

This is significantly different from the correct answer:

evalf(fA)

(Matrix output elided)

In the linalg case, the input to the polynomial interpolator `matfunc/pinterp` is the list of eigenvalues [1.+0.*I, 1.+0.*I, 2.+0.*I] and b, which is what I have called fs above; i.e., it is clear that the degeneracy of the eigenvalues has been detected.

On the other hand, in the LinearAlgebra case, `MatrixFunction/PolynomialInterpolation` receives the list of eigenvalues [HFloat(0.9999999999999997)+HFloat(0.0)*I, HFloat(1.0)+HFloat(0.0)*I, HFloat(1.9999999999999998)+HFloat(0.0)*I] and the vector Vector(3, {(1) = HFloat(0.5819767068693266)+HFloat(0.0)*I, (2) = HFloat(0.5819767068693265)+HFloat(0.0)*I, (3) = HFloat(0.31303528549933135)+HFloat(0.0)*I}). The first two entries of the vector are very slightly different, suggesting that the degeneracy is not detected. (However, it could be that this detection is delegated to `MatrixFunction/PolynomialInterpolation`.)


Download MatrixFunctionAlgorithm.mw

@ishan_ae I'm guessing that you didn't put two unevaluation quotes around the add: ''add'' (these are two ' characters, not one "). This is an endless problem in Maple. One way to avoid having to know how many quotes are needed is to use just one in the definition, and then add a "numeric" check, which effectively tells animate not to evaluate until a number is provided. The output of the fourier_f definition is now ugly, though.

fourier_analysis_numeric.mw
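In outline, the "numeric" idea looks like this (a sketch with a placeholder square-wave series, since the names in your worksheet may differ):

```maple
fourier_f := proc(N)
  # return unevaluated until N is an actual number, so animate can substitute it later
  if not N::numeric then return 'procname'(N) end if;
  add(sin((2*k - 1)*x)/(2*k - 1), k = 1 .. N);
end proc:
plots:-animate(plot, [fourier_f(n), x = -Pi .. Pi], n = 1 .. 10);
```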

Or you can use Sum, but then you need value:

fourier_analysis_Sum.mw
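The Sum variant, again in outline with a placeholder series: Sum stays inert until value forces the addition.

```maple
fourier_f := N -> value(Sum(sin((2*k - 1)*x)/(2*k - 1), k = 1 .. N)):
fourier_f(3);   # sin(x) + sin(3*x)/3 + sin(5*x)/5
```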

 
