roman_pearce

Mr. Roman Pearce

1678 Reputation

19 Badges

20 years, 216 days
CECM/SFU
Research Associate
Abbotsford, British Columbia, Canada

I am a research associate at Simon Fraser University and a member of the Computer Algebra Group at the CECM.

MaplePrimes Activity


These are replies submitted by roman_pearce

Arrays and tables are subject to "last name evaluation", which essentially means that they are passed by reference automatically. The following code will work:
A := Matrix(1..5,1..5);
initblock := proc(A,m,n)
  local i,j;
   for i from 1 to m do
     for j from 1 to n do
       A[i,j] := 1;
     end do;
   end do;
end proc:
initblock(A,3,3);
A;
Note that arrays and matrices in Maple are initialized to zero automatically. Hash tables are not, although you can make sparse hash tables that look like they are.
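A quick illustration of that last remark (just a sketch): the sparse indexing function makes unassigned entries evaluate to 0, so the table only looks zero-initialized.
T := table(sparse):   # unassigned entries evaluate to 0
T[2,3];               # returns 0, yet nothing is stored for this index
T[1,1] := 7:          # only explicitly assigned entries take up memory
eval(T);              # shows just the stored entry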
I use Maple 10 on a 1GHz iBook G4 with 768MB of RAM on OS X 10.3.9. After a reboot, it takes 15 seconds to load and open a new worksheet. If you're still interested, I recommend you try Maple 10.06: http://www.maplesoft.com/support/downloads/index.aspx Also, check that your hard drive is not more than 80% full, because that can cause serious performance problems on Macs.
You might want to try the define command. I don't normally use it, but it seems to do what you want.
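As a rough sketch of the sort of thing define can do (the operator name L and the linear property are just an illustration, not necessarily what you need):
define(L, linear):        # declare L to be a linear operator
L(3*f(x) + 2*g(x));       # returns 3*L(f(x)) + 2*L(g(x))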
For polynomial systems (a frequent cause of solve taking hours) I don't think it's possible to write such a procedure. The worst-case bound is so large that it is almost always irrelevant, and there is no reliable way to guess how long a computation will take. This may improve somewhat when better algorithms are used for solve (the Buchberger algorithm is particularly bad), but the problem will never go away entirely. Modular trace algorithms with lifting or Chinese remaindering are much more predictable overall, but the time for the initial computation mod p is still unpredictable.
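Since the running time can't be predicted in advance, the practical alternative is to cap it. A minimal sketch using timelimit; the system shown is just a placeholder:
try
  sols := timelimit(60, solve({x^2 + y^2 = 1, x^3 - y = 2}, {x, y}));
catch "time expired":
  printf("solve did not finish within 60 seconds\n");
end try;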
I just tried it and got inconclusive timings (too much cache interference) so I looked at the source. I believe there is an extra eval with [op](f) that is not done with [op(f)], so you're right. Interesting.
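For reference, the two spellings in question; both produce the same list of operands, apparently via slightly different evaluation paths in the kernel:
f := a*b + c*d:
[op(f)];    # [a*b, c*d]: op applied to f, wrapped in a list
[op](f);    # same result: the one-element list [op] applied to f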
The numeric DAE code was added in Maple 9.5.
Gsolve, or Groebner:-Solve in Maple 10, uses the factoring Buchberger algorithm. The algorithm factors every polynomial before it is added to the basis. When a polynomial factors, say f = f[1]*f[2]*...*f[k], it splits the system and calls itself recursively. One f[i] is added to each component, and the other f[j] are added to that component's set of non-zero constraints. These constraints are reduced by the current basis at the beginning and end of Buchberger's algorithm, and if any of them reduce to zero the component is discarded as redundant. The assumption is that if f = f[1]*f[2]*...*f[k], then only one f[i] can vanish on a prime (or primary) component. For an introduction, you can try "On Factorized Gröbner Bases" by Hans-Gert Gräbe.

The first Maple implementation (in Maple V R4) is based on "Solving algebraic equations: combining Buchberger's algorithm with multivariate factorization" by Stephen Czapor. Czapor uses additional strategies, such as detecting linear variables and applying a substitution to eliminate them. On large problems this strategy is bad because it requires you to reorder the variables and restart Buchberger's algorithm. The algorithm was reimplemented in Maple V R5 and is much slower. The basic problem is that Buchberger's algorithm is restarted every time a factorization occurs, so syzygies that were dealt with previously are reduced again (to zero). These issues should be fixed in a future release of Maple.

A limitation of the algorithm is that you typically can't solve systems for which Buchberger's algorithm cannot compute a lexicographic Groebner basis. That is a lot of systems (hence the FGLM and Groebner Walk conversion strategies), so for hard systems it is suggested that you compute a lex Groebner basis first using conversion and then apply the factoring Buchberger algorithm to efficiently split the result.

Finally, it should be noted that gsolve is only a heuristic: it does not actually compute prime decompositions. For example, the ideal {x^2+1, y^2+1} factors as {x^2+1, x-y}, {x^2+1, x+y}, but this is not detected because gsolve does not introduce algebraic extensions. Nevertheless, it is a very useful preprocessor for positive dimensional ideals when trivial factors can be found.
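A minimal illustration of that last point (just a sketch; Groebner:-Basis is the name in recent releases, older ones call it gbasis): modulo x^2+1 the polynomial y^2+1 factors as (y-x)*(y+x), which is why the ideal splits into those two components, even though neither generator factors over Q.
with(Groebner):
expand( (y - x)*(y + x) + (x^2 + 1) );   # returns y^2 + 1
Basis([x^2 + 1, x - y], plex(x, y));     # lex basis of the first component
Basis([x^2 + 1, x + y], plex(x, y));     # lex basis of the second component
factor(x^2 + 1), factor(y^2 + 1);        # no factorization over Q, so gsolve finds no split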
The trick above does the computation in two steps over Q(x). Aside from intermediate expression swell, I am concerned that the adjoint algorithm could produce zero as a linear combination of the input. For example, if the polynomial x*(x^2+1)+1 were to appear it would not be recognized as zero. Similarly 2*f(x) for any polynomial f(x). One way around this problem is to encode it using bivariate polynomials (for example, by replacing 1 with z) so that minor expansion is used to compute the adjoint. This method is likely to be useless however, so alec's heuristic method is really the best bet. It has the nice property that modp1 arithmetic is used as well.
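A small check of the zero-recognition concern. The modulus x^3+x+1 and characteristic 2 are only my inference from the two examples above (x*(x^2+1)+1 expands to x^3+x+1, and 2*f(x) vanishes mod 2), so treat them as assumptions:
p := expand(x*(x^2 + 1) + 1);            # x^3 + x + 1, nonzero over Q(x)
Rem(p, x^3 + x + 1, x) mod 2;            # 0: it vanishes in GF(2)[x]/(x^3+x+1)
Rem(2*(x^2 + 5), x^3 + x + 1, x) mod 2;  # 0: any 2*f(x) vanishes as well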
I can't help but think that this advice and your user graphic (which is hilarious by the way) are somehow related :) I use only Maple, but I can really empathize with that stick figure sometimes.