roman_pearce

Mr. Roman Pearce

1678 Reputation

19 Badges

20 years, 216 days
CECM/SFU
Research Associate
Abbotsford, British Columbia, Canada

I am a research associate at Simon Fraser University and a member of the Computer Algebra Group at the CECM.

MaplePrimes Activity


These are replies submitted by roman_pearce

The restart command would fix this though. The issue must be with Java or the GUI. Or it could simply be that once the OS starts using virtual memory it takes a while to stop using it.
Thanks for the timings; I found them very interesting. I am using this on structured linear systems, which may be very large, and the BFS will do potentially O(n) steps, as in your example above. The reason sets were fast on the random graphs is that the BFS didn't do many steps. With random edges going everywhere you can cover the graph in, I think, O(log[m](n)) steps (n = 10^5, m = 10 took 8 steps). In that case, the incremental cost of adding a vertex doesn't matter.
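To illustrate the point, here is a small Python sketch (mine, not from the thread) that builds a random directed graph with average out-degree m and counts how many BFS levels it takes to visit everything reachable from one vertex. With n = 10^5 and m = 10 the count lands near log base m of n, in line with the 8 steps mentioned above.

```python
# Sketch: BFS level count on a random graph with out-degree m.
# With random edges, each frontier grows by roughly a factor of m,
# so the level count is about log_m(n) rather than O(n).
import random

def bfs_levels(adj, start):
    """Return the number of BFS levels needed to visit every reachable vertex."""
    seen = {start}
    frontier = [start]
    levels = 0
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    nxt.append(v)
        frontier = nxt
        if frontier:
            levels += 1
    return levels

random.seed(0)
n, m = 10**5, 10
adj = [[random.randrange(n) for _ in range(m)] for _ in range(n)]
levels = bfs_levels(adj, 0)
print(levels)  # a handful of levels, not anywhere near n
```

On a structured system the frontier can stay small for a long time, which is where the per-step cost starts to matter.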
That is a neat, clever trick. I'm going to file that one away. For what it's worth, the true/false version is for selectremove, in case you want the duplicates.
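For readers outside Maple, here is a Python analog (my construction, not from the thread) of that trick: a predicate that returns true the first time it sees a value and false afterwards. With a select/remove-style split it removes duplicates in order and also hands back the duplicates, which is the point of using selectremove.

```python
# A stateful predicate: true on first sight of a value, false after.
def first_occurrence():
    seen = set()
    def pred(x):
        if x in seen:
            return False
        seen.add(x)
        return True
    return pred

# A minimal analog of Maple's selectremove: split a sequence into
# (items where pred is true, items where pred is false).
def selectremove(pred, seq):
    yes, no = [], []
    for x in seq:
        (yes if pred(x) else no).append(x)
    return yes, no

uniq, dups = selectremove(first_occurrence(), [3, 1, 3, 2, 1, 1])
print(uniq, dups)  # -> [3, 1, 2] [3, 1, 1]
```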
If you only can make use of eigenvectors corresponding to a specific value as an eigenvalue, then why not test that eigenvalue first? ... And to get the nullspace, you could also use LinearSolve with the RHS as zero-Vector, in which case you could control via Normalizer & Testzero how hard you would want it to work on RootOfs. All of this is true of course, but I like to whack everything with the golden hammer.
The nullspace command will tell you which columns are linearly dependent. If you transpose the matrix and compute the nullspace, you will get information about the dependent rows. It looks like you compute Z1 := A^(-1)*D^(-1)*C*A;. If you want to solve Z1*v = v then v would be an eigenvector of Z1 with eigenvalue 1. Try the Eigenvectors command:
V, E := Eigenvectors(Z1);
This will compute a vector V of the eigenvalues and the columns of E are the corresponding eigenvectors. You need eigenvectors with eigenvalue 1. I am running the command right now but it seems to be taking a long time. Maybe it is using a bad algorithm. You can get numerical results quickly though by doing the following:
Z2 := Matrix(Z1, storage=rectangular, datatype=float[8]);
V, E := Eigenvectors(Z2);
Now how do you find out (automatically) if there is an eigenvalue equal to 1 ? Try the following:
V2 := map(trunc, convert(V, list));  # convert V to list of integers (truncating)
member(1, V2, 'i');  # find 1 in V2, assign index to i
When I ran it, I got i=2. The member command will return false if 1 is not in V2. Now you can extract the ith column of E to get the eigenvector, which is the solution.
S := E[1..-1,i];  # rows 1..-1 (-1=end) and column i
Hope this helps.
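An alternative route (my sketch, not from the reply above): Z1*v = v is the same as (Z1 - I)*v = 0, so you can compute the nullspace of Z1 - I directly instead of the full eigen-decomposition. A small exact-arithmetic version in Python, using rational Gaussian elimination:

```python
# Exact nullspace via row reduction over the rationals.
from fractions import Fraction

def nullspace(M):
    """Return a basis (list of vectors) for the nullspace of matrix M."""
    m = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(m), len(m[0])
    pivots = []
    r = 0
    for c in range(cols):
        p = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if p is None:
            continue                       # no pivot in this column: free variable
        m[r], m[p] = m[p], m[r]
        m[r] = [x / m[r][c] for x in m[r]] # normalize pivot row
        for i in range(rows):
            if i != r and m[i][c] != 0:    # eliminate column c elsewhere
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(cols) if c not in pivots]
    basis = []
    for f in free:                         # one basis vector per free variable
        v = [Fraction(0)] * cols
        v[f] = Fraction(1)
        for i, c in enumerate(pivots):
            v[c] = -m[i][f]
        basis.append(v)
    return basis

# Z1*v = v  <=>  (Z1 - I)*v = 0; tiny example matrix (mine):
Z1 = [[1, 1], [0, 2]]
ZmI = [[Z1[i][j] - (i == j) for j in range(2)] for i in range(2)]
print(nullspace(ZmI))  # basis vector [1, 0]: an eigenvector for eigenvalue 1
```

This avoids the truncation step entirely when the matrix entries are exact, though it does nothing for the floating-point case.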
I think Will's point is that the feature is intended to display simple expressions, not recreate Maple's typesetting system.
Disclaimer: I don't work for Maplesoft, but as the author of many bugs I feel I can contribute. When is a bug a bug? Here are some hard and fast rules that I use to identify "bugs" in my code.

When the answer should be written down in a unique way, e.g. integer factorization, rank of a matrix:
1) you compute an answer that is wrong
2) the time required to compute the answer varies by more than 30%
3) you can't compute an answer but you really should be able to

When the answer describes a mathematically unique object that can be expressed in different ways, e.g. nullspace of a matrix: 1), 2), and 3) above, plus
4) the size of the answer varies by more than 30%
5) equivalent answers cannot be recognized

When answers are well-defined but not mathematically unique, e.g. primary decomposition of a polynomial system: 1), 2), 3), 5)

When answers are not well-defined: 1) and 2), plus
6) you return anything other than FAIL or an error

Now, it may not always be possible to meet these criteria, but if you do I think it's safe to say that you have produced reliable software. As I tried to point out in my other posts in this topic, ordering issues are an inherent part of symbolic computation. There are times when a computation can take different paths and there may be no meaningful or consistent way to decide which path to take. Some systems take the approach of asking the user to order everything up front (by defining the domain), but this has downsides too.
Personally I don't think that a new plot3d data structure is really needed, but that would be one way to fix the problem. They should rewrite things from scratch if needed and then write a conversion routine to convert the old data structures into the new ones automatically. Nobody cares about backwards compatibility if you make something 100 or 1000 times faster and more feature complete. At least, I don't think anybody should care, and if they do, the company should ignore them and build the better product.
"Do you suspect that, even if you find reasonably good defaults for those magic numbers, some user-based control via options may be necessary for some problems to be solved?"

No. I will pick slightly conservative defaults, sacrificing some performance to guarantee that if the routines can solve the problem then they always do. The current magic numbers are actually pretty good. I'm hoping people will test a few systems and report back if there is a problem.

"Will there be some mechanism for repeated solving using different RHS's (but *not* all done at once)?"

No, and this was a difficult decision. The plan is to replace the code in solve, which solves linear systems with one right hand side. I have made some optimizations for that case. I wanted to focus on this one specific instance because:
1) the general problem of fixing sparse linear algebra is a megaproject
2) some people are skeptical of whether it can or should be done
3) it would be massively faster if we wrote it in C

Basically, I decided not to try to do too much. I want to show the potential for solving these types of problems and provide a useful routine to attract support. I know these computations are important, which is why I am even more concerned that they are done right if they are done at all.
I don't really want a bunch of options, but there are some magic numbers at the top of the file:
- the cutoff between SGE and Markowitz pivoting (30k rows or 12% dense)
- the cutoff between Markowitz and LAModular (95% dense)
- the size of the blocks in the Markowitz solver (n/3)
- the max number of heavy columns in SGE (4/3*sqrt(|A|))
- when to stop light block elimination in SGE (< sqrt(max_heavy) eliminated)
- the number of light rows to recurse on in SGE (n-sqrt(n))

Right now I am trying to choose good defaults. I think the SGE values work well, but the Markowitz block size seems insane. It tries to solve n/3 equations in every step, and n/2 is often faster! Now if multiple equations pivot on the same variable, only one equation is selected and the others are not replaced, so as the system gets smaller there are more collisions and fewer equations are solved. But still, this seems very counter-intuitive. It's probably just a side effect of Maple's design, which always makes a copy when things are modified.

As for the NAG solvers, it's true that most iterative methods require a symmetric (and sometimes even positive-definite) matrix for efficiency and/or convergence. Often there are modifications that can be made (like the biconjugate gradient method) which slow the algorithm down but allow non-symmetric matrices to be solved. Overall I am not that familiar with iterative methods, which is why I didn't try to implement one this time around. If I have time or if there is demand for it I may try in the future. There are some systems where I think it could really help.
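For context, the Markowitz rule chooses a nonzero pivot a[i][j] minimizing (r_i - 1)*(c_j - 1), where r_i and c_j count the nonzeros in row i and column j, to limit fill-in. A minimal Python sketch of just the pivot selection (my illustration, not the actual solver code):

```python
# Markowitz pivot selection on a sparse matrix stored as
# rows: dict i -> {j: value}.  Cost (r_i-1)*(c_j-1) estimates the
# fill-in created by eliminating with pivot a[i][j].
def markowitz_pivot(rows):
    col_count = {}
    for nz in rows.values():
        for j in nz:
            col_count[j] = col_count.get(j, 0) + 1
    best, best_cost = None, None
    for i, nz in rows.items():
        r = len(nz)
        for j, v in nz.items():
            if v == 0:
                continue
            cost = (r - 1) * (col_count[j] - 1)
            if best_cost is None or cost < best_cost:
                best, best_cost = (i, j), cost
    return best

# Tiny example (mine): row 1 and row 2 each offer a cost-1 pivot,
# cheaper than anything in the denser row 0.
A = {0: {0: 2, 1: 1, 2: 1},
     1: {0: 1, 1: 3},
     2: {1: 1, 2: 5}}
print(markowitz_pivot(A))
```

The block-size question above is then about how many such pivots to commit to before recomputing the counts.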
I don't see the problem with people posting in other languages. It wouldn't be bad for MaplePrimes to attract a wider audience. Of course, if you are posting a reply and you expect the person to understand it, then it's probably best to use their language. But plenty of people can read French even if they would prefer not to look like an idiot trying to write it, and there are lots of Japanese users of Maple too. If I'm really interested in what someone might be saying, there are lots of automatic translation tools such as BabelFish. I would prefer not to exclude people who don't speak English, or who don't speak it well enough to feel comfortable posting.