Axel Vogt

5821 Reputation

20 Badges

20 years, 229 days
Munich, Bavaria, Germany

MaplePrimes Activity


These are replies submitted by Axel Vogt

Dave - Thanks, it is a good improvement! PS: It would be fine if the Marketing guys could provide more technical details for new releases, for people who are more interested in functionality than in advertising mumble & pictures.
Is there any additional functionality added to use NAG, or is it through a toolbox to be paid for separately? The question is based on http://www.maplesoft.com/applications/app_center_view.aspx?AID=2042
The help page for ?Matrix gives an example of a filling function if you want it explicitly; in your case that could be something like (i,j) -> `if`(i=j,1,0): E := Matrix(theDim, %); But probably not as fine as Jacques' way ... PS: I wanted to reply to the original question, but was only offered a way to reply to the first comment ...
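The (i,j) -> value idea can be illustrated outside Maple as well; a minimal Python sketch (the helper name matrix_from_fill is made up here) mirroring Maple's Matrix(n, (i,j) -> ...) constructor:

```python
def matrix_from_fill(n, fill):
    # Build an n x n matrix by calling fill(i, j) for each entry,
    # using 1-based indices as Maple's Matrix constructor does.
    return [[fill(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]

# The Kronecker-delta filling function yields the identity matrix.
E = matrix_from_fill(3, lambda i, j: 1 if i == j else 0)
```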
Even if Maple does not provide a friendly answer: I guess you are
talking about the real case.

Maybe a basic geometric view of eigenvalues and eigenvectors helps:

Let f be linear with f(x) = lambda*x and lambda =/= 0, x =/= 0,
lambda in the field, x in the vector space. 

Then the line L = IR*x given by x is an invariant subspace under f
[i.e. f(L) = L as a set], and g = f restricted to the subspace L
is simply multiplication by lambda [i.e. g(y) = lambda*y for y
in L]; just write it down with paper and pencil:

for y in L write y = alpha*x and you have f(y)= alpha*f(x) =
alpha*lambda*x = lambda*y, so it is in L.

So an eigenvector is not *one* vector, it *is* a 1-dim subspace
and for lambda=0 it is the whole kernel of f.

But a rotation in the plane always rotates any line and never leaves
it invariant - unless the rotation is by 180° or 360°, i.e. your
theta is a multiple of Pi. Then your rotation equals +- identity.

More formally: you were already told that the eigenvalues are
complex, so in general they do not exist in the Reals. Assuming that
theta is between +-Pi, the command Eigenvalues(A) will give you
cos(theta) +- sqrt(1-cos(theta)^2)*I.

The imaginary part has to vanish: solve(1-cos(theta)^2=0, theta);

                                Pi, 0

However, if you insist on working over the complex numbers, it was
already said what has to be done ...
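As a numerical cross-check of the argument (in Python rather than Maple; rotation_eigenvalues is an ad-hoc helper): the characteristic polynomial of the rotation matrix A is x^2 - 2*cos(theta)*x + 1, so its roots are complex conjugates cos(theta) +- sin(theta)*I, real only when theta is a multiple of Pi:

```python
import cmath
import math

def rotation_eigenvalues(theta):
    # A = [[c, -s], [s, c]] has characteristic polynomial x^2 - 2c*x + 1;
    # the quadratic formula gives the two (generally complex) roots.
    c = math.cos(theta)
    disc = cmath.sqrt(4 * c * c - 4)  # = 2*I*|sin(theta)| for real theta
    return ((2 * c + disc) / 2, (2 * c - disc) / 2)

# Generic angle: complex-conjugate eigenvalues, nonzero imaginary part.
l1, l2 = rotation_eigenvalues(0.7)

# theta = Pi: the rotation is -identity, eigenvalue -1 twice, purely real.
m1, m2 = rotation_eigenvalues(math.pi)
```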
At the NG it also was said "do not use numerical JNF, try ..." and I wanted to form my own view/understanding/experience of that. The appended sheet gives examples of small dimension (namely 7), some reasonable determinant (say of 'size 1'), and arbitrary Jordan blocks and eigenvalues (see the sheet for what I mean). Higher precision does not really seem able to heal the problem ... generateJNF_simple.mws
generateJNF_simple.mws.pdf
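Why higher precision barely helps can already be seen on the smallest example; a Python sketch (perturbed_jordan_eigs is a made-up name): perturbing the (2,1) entry of a 2x2 Jordan block by eps splits the double eigenvalue by 2*sqrt(eps), so noise of size 1e-12 already moves the eigenvalues by about 1e-6:

```python
import math

def perturbed_jordan_eigs(lam, eps):
    # J = [[lam, 1], [0, lam]] with the (2,1) entry perturbed to eps has
    # characteristic polynomial (x - lam)^2 - eps, hence roots lam +- sqrt(eps):
    # the eigenvalue perturbation is of order sqrt(eps), not eps.
    r = math.sqrt(eps)
    return (lam - r, lam + r)

lo, hi = perturbed_jordan_eigs(2.0, 1e-12)
```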
I meant something like the paper reviewed below (sorry, I do not have access to libraries; certainly others can). (Zentralblatt)Bujosa,Criado,Vega Jordan normal form via elementary transformations.pdf (1998)
restart: interface(version); ``;
sqrt(x^2); simplify(%);
% assuming (x::real);

  Classic Worksheet Interface, Maple 10.06, Windows, Oct 2 2006 Build ID 255401


                                 2 1/2
                               (x )

                              csgn(x) x

                             signum(x) x
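For a plain numerical cross-check (Python, outside Maple): for real x the three expressions sqrt(x^2), signum(x)*x and |x| agree, which is exactly what the result under 'assuming x::real' states:

```python
import math

def signum(x):
    # Sign function on the reals: -1, 0, or 1 (Maple's signum restricted to R).
    return (x > 0) - (x < 0)

# For real x: sqrt(x^2) = |x| = signum(x)*x; without a reality assumption
# a CAS must keep csgn(x)*x, since sqrt(x^2) = x fails e.g. for x = -1.
for x in (-3.5, -1.0, 0.0, 2.25):
    assert math.sqrt(x * x) == signum(x) * x == abs(x)
```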
I will have to save this thread to disk, it is really fine!

Not really within Maple, but I made a compiled DLL (for Windows) for
normally distributed numbers; find it as the appended upload. I manually
deleted the Arrays used (otherwise the saved mws would be quite large),
but left the timings in it (on an Acer ~ 2 GHz, 500 MB on Win XP;
compiling was done with MSVC6, hence not a high-powered system).

It uses 'Ziggurat' and two variants of it, based on the article
"Improved Ziggurat ..." by Jurgen Doornik (and helpful mails with
him concerning his code).

The idea is to provide memory space through Maple and fill it in the
external program (hence no copying is done, which makes it fast).
Of course this approach is limited to floating doubles ...

Hope it is of interest, even if it is a bit off the original intention.
Download RNGnormal.zip (8 Kb: DLL + Maple worksheet)
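The fill-preallocated-memory idea can be sketched in Python (a rough analogy only: random.gauss stands in for the compiled Ziggurat sampler, and the buffer size and seed are arbitrary choices here): the caller allocates a buffer of C doubles once, and the generator fills it in place, so no copy of the result is made.

```python
import array
import random

def fill_normals(buf, rng=random.Random(42)):
    # Fill the caller-provided buffer in place with N(0,1) samples,
    # mimicking "Maple provides the memory, the external routine fills it".
    for i in range(len(buf)):
        buf[i] = rng.gauss(0.0, 1.0)

# Preallocate 10000 zeroed C doubles ('d'), then fill them in place.
buf = array.array('d', bytes(8 * 10000))
fill_normals(buf)
```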
Sorry for the stupid error, of course :-( Anyway: if one has the roots (say, approximated by rationals), 'the' construction does not need floats any longer (that is what I recall) and should be 'exact' ... or would it be too slow?
I have not followed the discussion on the usenet in detail. But if I remember correctly, the JNF is 'constructive' up to eigenvalues. And if it is done over the complex numbers only, that reduces to finding the roots of the characteristic polynomial (with multiplicities), as it is always a diagonal matrix. It is not difficult to descend to the Reals (but for intermediate fields over the Rationals it may be some effort; I think one needs the splitting factors of the polynomial). At least that is what I recall from old lectures. It seems you just make use of those facts in your sheet. So what was the problem in the NG? Precision only? Edited to add: so the precision needed is the one to find the roots, no?
I forgot a square while typing and used 3*Im(z) - 1 < -abs(z)^2, so take it as a new exercise ...