roman_pearce

Mr. Roman Pearce

1678 Reputation

19 Badges

20 years, 217 days
CECM/SFU
Research Associate
Abbotsford, British Columbia, Canada

I am a research associate at Simon Fraser University and a member of the Computer Algebra Group at the CECM.

MaplePrimes Activity


These are replies submitted by roman_pearce

I appreciate hearing your perspective on this since you've been using Maple a lot longer than I have. I am probably not qualified to say what is or is not "core functionality" for a symbolic computation package, but that won't stop me from trying:

primitives
- data structures such as lists, arrays, and hash tables
- arbitrary expressions and evaluation (Maple is great)
- integer arithmetic (Maple is pretty good)
- floating point arithmetic (very good)
- logical operations and a programming language (great)

algebraic algorithms and data structures
- dense polynomials (good)
- sparse polynomials (ok)
- dense matrices (good)
- sparse matrices (poor)

I'll restrict myself to talking about exact computations and ignore numerical algorithms, because it's not my area of expertise and also because I think the case for fast numerics should be self-evident by now :)

I'm pretty sure that you can implement any exact algorithm efficiently using only the primitives and data structures above. For example, power series can be represented as polynomials, graphs could use a sparse matrix or lists of neighbors, and so on. On the other hand, when you don't have one of those nice things, your life quickly becomes hard. Imagine trying to do a reasonable implementation of GF(p^k) without a proper data structure or algorithms for dense polynomials mod p. I think it would be quite difficult. Likewise, you can't do any multivariate polynomial arithmetic without some kind of sparse data structure.

Perhaps the need for linear algebra is less obvious, but you tend to find it at the core of almost every algorithm for a "hard" computational problem. Off the top of my head, here are some computations that directly use sparse linear algebra mod p:
- integer factorization
- discrete logarithm
- polynomial factorization

There are many more specialized algorithms, but my point is this: every time you want to implement one of these, you have to write the linear algebra yourself.
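The GF(p^k) point is easy to make concrete. Below is a minimal sketch, in Python and purely for illustration, of field multiplication built on dense polynomial arithmetic mod p; without a polynomial data structure, even this toy version has nothing to stand on. The modulus x^2+1 over GF(3) is my own choice of example.

```python
# Toy GF(p^k) multiplication via dense polynomials mod p.
# Elements are coefficient lists, low degree first; the field modulus m
# must be monic. Illustrative sketch only, not production code.

def poly_mul_mod(a, b, m, p):
    """Multiply coefficient lists a, b mod p, then reduce by monic modulus m."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    # reduce from the top down: x^k = -(m[0] + m[1]*x + ... ) since m is monic
    k = len(m) - 1
    while len(prod) > k:
        c = prod.pop()
        for t in range(1, len(m)):
            prod[-t] = (prod[-t] - c * m[-1 - t]) % p
    return prod

# GF(9) = GF(3)[x]/(x^2+1); element a0 + a1*x stored as [a0, a1]
m = [1, 0, 1]  # x^2 + 1, irreducible mod 3
print(poly_mul_mod([1, 1], [1, 1], m, 3))  # (1+x)^2 = 1+2x+x^2 = 2x, i.e. [0, 2]
```

The whole point is that the reduction step assumes fast dense arithmetic underneath; everything else in GF(p^k) (inversion, exponentiation) is built on this one routine.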
I bet there are at least 20 implementations of sparse linear algebra mod p in the Maple library, and probably all of them use Gaussian elimination, hopefully with a bit of sparse pivoting, and a data structure which is not optimized for the task. A dedicated project would blow them all away. It's a sad state of affairs. Even worse, nobody can hope to improve sparse linear algebra over the rationals, because doing so would require those modular algorithms. As a result, there has been no improvement in sparse linear algebra for over a decade, probably more. Projects like Magma, LinBox, and now SAGE have turned Maple into a toy, and new developments are rightfully being done in those systems because we don't have the appropriate primitives.

These things are really just the basic building blocks of computer algebra, and you can't expect to stay relevant for very long if you don't have them. Things like a nice GUI and network and document functionality are big improvements, but they cannot come at the expense of algorithm development. I think it's telling that the glory days of Maple are often remembered as a time when exciting new algorithms were being added to the system while everyone was still typing cryptic commands into a terminal.

I also think Maplesoft has focused far too much on commercial competitors like Wolfram Research (they have made the same mistake), when we are really competing with everyone in the entire field. It's a strange sort of competition, because the Maple project is big enough to make them collaborators, and it's very lopsided because Maple is a commercial project with a revenue source while most of these people are working on research grants. The logical response is to invite the best of them to join the Maple project, and to try to cover every area of mathematics with people and expertise through partnerships. This doesn't even have to be expensive. The long-term goal is to build the best possible system in every area.
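For reference, the kind of routine being duplicated is small but fiddly. Here is a toy sparse Gaussian elimination mod p in Python (my own illustration, not any of the Maple library versions), with rows stored as hash tables so zero entries are never touched:

```python
# Toy sparse Gaussian elimination over GF(p), p prime.
# Rows are dicts {column: nonzero value}. A serious implementation would
# add Markowitz-style pivot selection to control fill-in.

def sparse_rank_mod_p(rows, p):
    """Return the rank over GF(p) of a matrix given as a list of sparse rows."""
    pivots = {}  # pivot column -> normalized reduced row
    rank = 0
    for row in rows:
        row = dict(row)
        while row:
            c = min(row)  # leftmost nonzero column
            if c not in pivots:
                inv = pow(row[c], p - 2, p)  # Fermat inverse; p must be prime
                pivots[c] = {j: (v * inv) % p for j, v in row.items()}
                rank += 1
                break
            # eliminate column c using the stored pivot row
            factor = row[c]
            for j, v in pivots[c].items():
                row[j] = (row.get(j, 0) - factor * v) % p
            row = {j: v for j, v in row.items() if v}
    return rank

# two independent rows plus a dependent one: third row is 2x the first
rows = [{0: 1, 1: 2}, {1: 3}, {0: 2, 1: 4}]
print(sparse_rank_mod_p(rows, 7))  # rank 2
```

Even this toy version shows why a shared, tuned implementation pays off: the data structure, the pivot bookkeeping, and the normalization all have to be rewritten every time someone needs it.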
As for what to optimize, the answer is simple: optimize everything. If a new algorithm is better, use it. If a computation is slow, find out why. Maybe the algorithm is bad or it was programmed incorrectly, or maybe the problem is hard. Try to develop new algorithms for hard problems. I think everyone knows what has to be done, but with a system as old and as popular as Maple there is a lot of inertia and it is easy to resist change. We need to break the mold to get past this, and a big part of that is changing people's expectations. The entire library has to be fast, and there are no excuses. We will have to invest more in the system.

Aren't mwz files compressed? If so, that would be one reason. Otherwise, it would be convenient for classroom notes, etc.


I think speed is a major concern. People will migrate away from Maple if it cannot do the problems they want to do. The lack of fast routines inherently limits the scope of the system, at a time when the range of applications for symbolic computation is growing rapidly. In order to compete, we need fast algorithms for core functionality. These become the building blocks for future applications.

I am going to pick on my perennial favorite topic: exact linear algebra. Maple has fast routines for dense linear algebra mod p, but it doesn't have anything for sparse matrices or matrices with rational entries. If my application needs to solve a large sparse system, I can't use Maple unless I want to program all the linear algebra myself. A user in this situation isn't likely to complain. If they say anything it will be in the form of a question, such as "why can't Maple do anything with matrices larger than 1000 x 1000?", and such an inquiry either here or in the newsgroups is likely to be met with indignation. In the end they will just use something else, because they have a job to do, and that job doesn't include nagging Maplesoft to improve their system.

I think the only people who do complain are the long-term users, and most of them have grown so accustomed to the status quo that they restrict their complaints to 1) bugs and 2) performance problems of an absolutely gross magnitude. Nobody is asking for speed because nobody thinks it will happen.

I would like to end my rant by pointing out that while I have picked on one particular area where Maple is bad, there are a few areas where Maple has consistently been among the best software available. I think we all know what those areas are, and how much of a benefit that leadership position brings to the system as a whole. My complaint is that we should strive to be among the very best in every area, and not dismiss obvious shortcomings by saying that "users don't care about speed".
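To illustrate the connection between the modular and rational cases: the standard route to linear algebra over the rationals is to solve modulo one or more primes and then lift, recovering fractions by rational reconstruction. A sketch of the reconstruction step (Python, the usual half-Euclidean method; illustrative only):

```python
from fractions import Fraction
from math import isqrt

# Rational reconstruction: recover n/d from its image u mod m. This is the
# step that turns a solve mod p (or mod a product of primes) into an exact
# rational answer. Sketch only; a full version also verifies gcd(n, d) = 1.

def rational_reconstruct(u, m):
    """Find n/d with u*d = n (mod m) and |n|, d <= sqrt(m/2), if one exists."""
    bound = isqrt(m // 2)
    r0, r1 = m, u % m
    t0, t1 = 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 < 0:
        r1, t1 = -r1, -t1
    if t1 == 0 or t1 > bound:
        return None
    return Fraction(r1, t1)

p = 10007
u = (3 * pow(4, p - 2, p)) % p       # image of 3/4 mod p
print(rational_reconstruct(u, p))    # recovers Fraction(3, 4)
```

The fast part of the whole pipeline is the modular solve, which is exactly why sparse linear algebra mod p is a prerequisite for the rational case.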
Because your constraints are all equalities, I tried constructing a polynomial system. It turns out to have an exact solution. If you have Maple 10, try the following:
Obj := (1/4*a[1]+3/4*a[2]+7/8*a[3]+a[4]+15/8*a[5]+2*a[6]
+19/8*a[7]+21/8*a[8]-1/4*b[1]-3/4*b[2]-7/8*b[3]-b[4]-15/8*b[5]
-2*b[6]-19/8*b[7]-21/8*b[8])^2+(1/4*b[1]+3/4*b[2]+7/8*b[3]+b[4]
+15/8*b[5]+2*b[6]+19/8*b[7]+21/8*b[8]-1/4*a[1]-3/4*a[2]-7/8*a[3]
-a[4]-15/8*a[5]-2*a[6]-19/8*a[7]-21/8*a[8])^2:

C1:=add(a[n], n = 1 .. 8) = 4:
C2:=add(b[n], n = 1 .. 8) = 4:
C3:=seq(a[n]+b[n] = 1, n = 1 .. 8):
C4:=seq(a[n]*(a[n]-1) = 0, n = 1 .. 8):
C5:=seq(b[n]*(b[n]-1) = 0, n = 1 .. 8):
F := expand(map(lhs-rhs,[C1,C2,C3,C4,C5])):

# factor the system of constraints
with(PolynomialIdeals):
infolevel[GroebnerBasis] := 4:
J := PolynomialIdeal(F):
P := Simplify(PrimeDecomposition(J));

# only 70 points satisfy the constraints
# evaluate the function and sort by value
S := seq(solve(Generators(i)), i=P);
M := sort([seq([subs(i,Obj), i], i=S)], proc(a,b) evalb(a[1] < b[1]) end proc):
M[1];  # min
M[-1]; # max
If you don't have Maple 10, here are the minimum and maximum values:
> lprint(M[1]);
[0, {a[2] = 1, b[7] = 0, a[7] = 1, b[6] = 1, b[1] = 1, b[5] = 0,
     a[8] = 0, a[5] = 1, a[1] = 0, b[4] = 1, b[3] = 0, a[4] = 0,
     a[3] = 1, b[2] = 0, a[6] = 0, b[8] = 1}]
> lprint(M[2]);
[0, {a[3] = 0, a[2] = 0, a[1] = 1, b[3] = 1, b[7] = 1, b[5] = 1,
     b[1] = 0, a[8] = 1, b[8] = 0, b[6] = 0, b[4] = 0, a[7] = 0,
     a[5] = 0, b[2] = 1, a[6] = 1, a[4] = 1}]
>
> lprint(M[-1]);
[72, {a[2] = 1, b[6] = 1, a[1] = 1, b[7] = 1, b[5] = 1, a[8] = 0,
      b[1] = 0, b[4] = 0, a[7] = 0, a[5] = 0, a[4] = 1, b[3] = 0,
      a[3] = 1, b[2] = 0, a[6] = 0, b[8] = 1}]
> lprint(M[-2]);
[72, {a[3] = 0, a[2] = 0, b[7] = 0, a[7] = 1, b[3] = 1, b[1] = 1,
      b[5] = 0, a[5] = 1, a[8] = 1, b[8] = 0, b[6] = 0, a[1] = 0,
      b[2] = 1, a[6] = 1, b[4] = 1, a[4] = 0}]
Try rewriting (a[n]=0 OR b[n]=1) as a[n]*(b[n]-1)=0.
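This works because a[n] and b[n] are already constrained to {0,1}, and on those values the product vanishes exactly when the disjunction holds. A quick exhaustive check (Python, for illustration):

```python
# Verify the polynomial encoding of the boolean constraint over 0/1 values:
# "a = 0 OR b = 1" holds exactly when a*(b-1) = 0.
for a in (0, 1):
    for b in (0, 1):
        assert (a == 0 or b == 1) == (a * (b - 1) == 0)
print("encoding agrees on all 0/1 assignments")
```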
Here's a simple implementation of Gaussian elimination.
gauss := proc(M1::listlist(integer))
  local i,j,k,M,v,r,piv;
   piv := table('sparse'):  # piv[j] = index of the row with its pivot in column j (0 = none)
   M := table();            # reduced rows M[1..i]
   i := 0:
   for v in M1 do
     r := v;
     # reduce the new row against all existing pivot rows
     for j to nops(r) do
       if r[j] <> 0 and piv[j] <> 0 then
         k := piv[j];
         r := r - r[j]/M[k][j]*M[k]
       end if
     end do:
     # clear denominators, then store the row divided by its integer content
     r := r*ilcm(1, seq(denom(j), j=r));
     for j from 1 to nops(r) do
       if r[j] <> 0 then
         i := i+1;
         M[i] := r/igcd(op(r));
         piv[j] := i;
         break
       end if
     end do
   end do;
   [seq(M[j], j=1..i)]
end proc:

# example usage
n := 5;
A := mods([seq([seq(rand(),j=1..n)], i=1..n)], 20);
gauss(A); 
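For anyone who wants to check the logic outside Maple, here is a rough Python translation of the same scheme (my own port, not the original: exact fractions internally, each stored row rescaled to coprime integers):

```python
from fractions import Fraction
from math import gcd

# Gaussian elimination over the rationals, mirroring the Maple proc above:
# reduce each incoming row against stored pivot rows, then clear
# denominators and divide out the integer content before storing.

def gauss(rows):
    piv = {}       # pivot column -> index into reduced
    reduced = []
    for v in rows:
        r = [Fraction(x) for x in v]
        for j in range(len(r)):
            if r[j] != 0 and j in piv:
                m = reduced[piv[j]]
                c = r[j] / Fraction(m[j])
                r = [ri - c * mi for ri, mi in zip(r, m)]
        # clear denominators (lcm), then remove the content (gcd)
        den = 1
        for x in r:
            den = den * x.denominator // gcd(den, x.denominator)
        r = [int(x * den) for x in r]
        g = 0
        for x in r:
            g = gcd(g, x)
        if g:  # skip rows that reduced to zero
            r = [x // g for x in r]
            for j, x in enumerate(r):
                if x != 0:
                    piv[j] = len(reduced)
                    reduced.append(r)
                    break
    return reduced

print(gauss([[2, 4], [1, 2], [0, 3]]))  # [[1, 2], [0, 1]]
```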
Here are a few routines for manipulating expanded polynomials:
# select terms of maximal degree in X
initform := proc(f::polynom, X::{list(name),set(name)}) 
  local h, i;
   lcoeff(subs(seq(i=i*h,i=X), f), h)
end proc:

# make a polynomial homogeneous using a new variable h
homogenize := proc(f::polynom, X::{list(name),set(name)}, h::name)
  local i;
   subs(seq(i=i/h, i=X), expand(h^degree(f,X)*f))
end proc:

# compute the leading term with respect 
# to graded reverse lexicographic order
grevlex := proc(f::polynom, X::list(name))
  local X1, f1, i, h, m;
   X1 := [seq(X[-i], i=1..nops(X))];
   f1 := lcoeff(subs(seq(i=i*h,i=X), f), h);
   tcoeff(f1, X1, 'm')*m;
end proc:

# compute the weighted degree wrt a vector of weights
wdegree := proc(f::polynom, X::list(name), W::list(posint))
  local S, w, i;
   S := {seq(X[i]=X[i]*w^W[i], i=1..nops(X))};
   degree(subs(S, f), w);
end proc:

# example usage:
f := randpoly([x,y,z]);
initform(f,[x,y,z]);
homogenize(f,[x,y,z],h);
grevlex(f,[x,y,z]);
grevlex(f,[z,y,x]);
wdegree(f,[x,y,z],[1,2,3]);
wdegree(f,[x,y,z],[3,2,1]);
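The grevlex comparison can also be stated directly on exponent vectors, which makes a handy cross-check for the routine above: higher total degree wins, and ties go to the monomial with the smaller exponent in the last variable where they differ. A small Python illustration (the function name is mine):

```python
# Leading monomial under graded reverse lexicographic order, computed on
# exponent tuples. The sort key (total degree, negated reversed exponents)
# reproduces the grevlex tie-breaking rule.

def grevlex_leading(exponents):
    """Leading exponent vector under grevlex among tuples like (2, 0, 1)."""
    return max(exponents, key=lambda e: (sum(e), [-x for x in reversed(e)]))

# monomials x^2*y, x*y*z, z^3 in variables (x, y, z): all degree 3,
# x^2*y wins because it has the smallest z exponent
print(grevlex_leading([(2, 1, 0), (1, 1, 1), (0, 0, 3)]))  # (2, 1, 0)
```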
You might also want to try the routines I posted here: non-neg-int solutions to x+y+z=n
Sorry, I checked this quickly before uploading it, but messed it up. My code assigns to the array on the left hand side of eq1, not to the variables in the array. Try this:
AssignVars := proc(x) local i, n, v, r;
  v := lhs(x): r := rhs(x); n := op(1,v):
  for i to n do 
    unassign(v[i]);
    assign(v[i], r[i]);
  end do:
end proc;
I wouldn't bother reinstalling OS X; it's not likely to help. Instead, try completely deleting Maple and reinstalling it using the 10.06 installer. In addition to the Maple 10 folder in Applications, you have to delete /Library/Frameworks/Maple.framework/Versions/10. Ask Maplesoft tech support for a download link.
Most laptop computers throttle the CPU to a lower speed. Although you noted setting the laptop to "full speed", this really is the likely cause of the problem. Use SpeedSwitchXP to set the processor speed exactly: http://www.majorgeeks.com/download4300.html Note that some Dell laptops will also slow down the CPU using the BIOS after the machine has been under heavy load for a while. I don't know of any safe way to prevent that.
This might be a Java problem. In any case you should contact Maplesoft support, see http://www.maplesoft.com/contact/index.aspx

If you want to try some things, open Terminal and run:

/Library/Frameworks/Maple.framework/Versions/10/bin/maple -x

This should start the GUI version of Maple. If this doesn't work then you probably have a Java problem. If you're using OS X 10.4, make sure your software is up to date. If you're using 10.3, having up-to-date software actually causes this problem, because Apple broke Java with QuickTime > 7.01 on OS X 10.3. You should contact Maplesoft support to download the Maple 10.06 installer, which was rewritten to support 10.3 customers.

If you're using 10.4 or the command above works, then the Maple10.app bundle was probably damaged, and you should try deleting it and reinstalling Maple. If Maple won't install for any reason, contact Maplesoft support and download the 10.06 installer, which really should work in all circumstances.
Can you start command-line Maple? Open Terminal (in Applications -> Utilities) and type: /Library/Frameworks/Maple.framework/Versions/10/bin/maple
Arrays and tables are subject to "last name evaluation", which essentially means that they are passed by reference automatically. The following code will work:
A := Matrix(1..5,1..5);
initblock := proc(A,m,n)
  local i,j;
   for i from 1 to m do
     for j from 1 to n do
       A[i,j] := 1;
     end do;
   end do;
end proc:
initblock(A,3,3);
A;
Note that arrays and matrices in Maple are initialized to zero automatically. Hash tables are not, although you can make sparse hash tables that look like they are.
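The same mutation-through-a-call behavior is what you get with mutable containers in most languages. For comparison, here is a Python analogue of the initblock example (illustrative only):

```python
# Mutable containers in Python behave like Maple's last-name-evaluated
# arrays: a function can modify the caller's object in place.

def initblock(A, m, n):
    """Set the top-left m x n block of nested-list matrix A to 1, in place."""
    for i in range(m):
        for j in range(n):
            A[i][j] = 1

A = [[0] * 5 for _ in range(5)]
initblock(A, 3, 3)
print(A[0][:4], A[3][:4])  # the 3x3 block is 1, the rest stays 0
```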