roman_pearce

Mr. Roman Pearce

1678 Reputation

19 Badges

20 years, 215 days
CECM/SFU
Research Associate
Abbotsford, British Columbia, Canada

I am a research associate at Simon Fraser University and a member of the Computer Algebra Group at the CECM.

MaplePrimes Activity


These are replies submitted by roman_pearce

You are correct that Maple is using the classical division algorithm, which does an order of magnitude too many monomial comparisons and, under some conditions, an order of magnitude too much coefficient arithmetic as well. There is extra overhead because Maple uses general-purpose data structures for polynomials. In Maple 11 and up, you can do F4-style polynomial division if the first argument to Groebner[Reduce] or NormalForm is a list of polynomials. That is, replace Reduce(f,G,tord) with Reduce([f],G,tord)[1]. This will almost certainly be faster in your case. Keep in mind that this is still Maple code, so although the algorithm has good complexity, it is not comparable to a high-performance implementation. My implementation of polynomial division in C can handle much larger problems.
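For readers unfamiliar with the classical division algorithm mentioned above, here is a minimal Python sketch (my own illustration, not Maple's implementation) of sparse polynomial division by a single divisor under lex order. The inner subtraction loop is where the extra monomial comparisons and coefficient arithmetic accumulate.

```python
# Illustrative sketch only: classical sparse polynomial division by one
# divisor, lex order, exact rational coefficients. Polynomials are dicts
# mapping exponent tuples to coefficients, e.g. x^2 - y^2 in (x, y) is
# {(2, 0): 1, (0, 2): -1}.
from fractions import Fraction

def lead(p):
    # Leading exponent under lex order: tuples compare lexicographically.
    return max(p)

def divides(a, b):
    # True if monomial x^a divides x^b.
    return all(x <= y for x, y in zip(a, b))

def classical_divide(f, g):
    """Return (quotient, remainder) of f divided by g."""
    q, rem = {}, {}
    work = dict(f)
    lg = lead(g)
    while work:
        lw = lead(work)
        if divides(lg, lw):
            m = tuple(a - b for a, b in zip(lw, lg))
            c = Fraction(work[lw]) / g[lg]
            q[m] = q.get(m, Fraction(0)) + c
            # Subtract c * x^m * g term by term: this loop is the source
            # of the repeated monomial comparisons and coefficient work.
            for e, ce in g.items():
                t = tuple(a + b for a, b in zip(m, e))
                work[t] = work.get(t, Fraction(0)) - c * ce
                if work[t] == 0:
                    del work[t]
        else:
            # Leading term is not reducible: move it to the remainder.
            rem[lw] = work.pop(lw)
    return q, rem

# (x^2 - y^2) / (x - y) = x + y with zero remainder.
q, r = classical_divide({(2, 0): 1, (0, 2): -1}, {(1, 0): 1, (0, 1): -1})
```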
Can Sage factor a multivariate polynomial efficiently yet? How about solve a differential equation? I don't mean to be disparaging, but they have a long way to go in many areas before Sage can be called a leading computer algebra system. These things take a lot of time to develop, and that will become increasingly obvious as the options for integrating existing software start to dwindle. I don't want to sound too harsh here. I think it's great to have a common interface to so many good open source programs, and some of their homegrown projects are very impressive. But realistically, their prospects for overthrowing Maple, Mathematica, or Matlab in the next 10 years are slim (but after that, who knows?)
Yikes. They need a high-performance graphics programmer. The Java interface does not need more features. They need to embed OpenGL in the Java interface with no loss of performance or, failing that, open a new window for high-performance plots (like the X11 plotdevice).
Then memory use is also out of control. 10^6 points with double-precision [x,y,z] values is 24MB; you should be able to plot 10^7 points on any modern computer. I know the classic interface can do 10^6 points, and that was on a machine from 2001. Did the Java interface set us back two decades? Are the results we see today the same as optimized C on a 486? It looks like it.

I tried the following on a MacBook 2 GHz Core2 with 2GB of RAM (Java interface), varying N:

N := 10^3: A := rtable(1..N,1..3,frandom(0..1,1),datatype=float[8]); plots[pointplot3d](A,style=point,symbol=point);

N=10^3 was slow to display and choppy to rotate, 10^4 was very slow to display and extremely choppy to rotate, and 10^5 almost killed my system. This is pathetic. Then I tried the Maple 11 classic interface on a Dell Pentium M 1.6 GHz: N=10^3 was instant and smooth, 10^4 was quick and smooth, and 10^5 took a few seconds to display but rotated perfectly smoothly. N=10^6 took far too long to display and gobbled up 600MB of RAM, so I killed it.

In short, Maple was bad before and now it is much, much worse. At this rate, in another decade we won't even be able to add 2+2.
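The 24MB figure above is simple arithmetic, which can be checked with a throwaway Python sketch (my own, not from the post):

```python
# Back-of-envelope check: 10^6 points, each an [x, y, z] triple of
# 8-byte (float64) coordinates.
points = 10**6
bytes_per_point = 3 * 8                    # three double-precision values
total_mb = points * bytes_per_point / 10**6
print(total_mb)                            # → 24.0
```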
That's disappointing. The user has 10^6 points, but computers run on the order of 10^9 operations per second, and any decent graphics card can handle on the order of 10^8 triangles per second. There's really no excuse. Hopefully newer versions of Maple do a bit better with this stuff. One million of anything isn't big.
Maple puts the polynomials of the Groebner basis into increasing order by leading term. Magma uses decreasing order.
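A small illustration of the two conventions, using hypothetical leading-exponent vectors under lex order (not actual output from either system):

```python
# Three hypothetical leading exponent vectors in (x, y), lex order.
leads = [(0, 1), (1, 0), (2, 0)]

maple_order = sorted(leads)                # increasing leading term (Maple)
magma_order = sorted(leads, reverse=True)  # decreasing leading term (Magma)

# One list is simply the reverse of the other.
assert maple_order == list(reversed(magma_order))
```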
I think the out-of-core problem will mostly go away. Most computers will have integrated graphics, and all the data will be in main memory. Ultimately, this is the technology that every customer will have, so that is what should be targeted. High-end graphics cards will have a lot of RAM: they have 1 GB today, and it's going to 4 GB quickly. There the model is: transfer data to the card, compute, and transfer the result back. And I agree, PDEs and all kinds of simulations will benefit hugely from this technology. GPUs are essentially a supercomputer on a chip. By "it" above, I meant "will Maple be able to solve these problems?" Will it be competitive? I expect every computer will have a good GPU within two years, and the programming model for GPUs is fairly straightforward, so I expect rapid adoption of the technology. Photoshop is already using it. This won't be like the multicore situation, where everyone basically missed the boat. BTW, regarding the multicore situation, we have some good news for sparse polynomial arithmetic coming out of Simon Fraser :)
Maple needs to start taking advantage of GPUs. Now that the OpenCL standard has been finalized, there is no longer any risk to development. A $500 graphics card today can do dense linear algebra on a 10000 x 10000 matrix. In two years it will be 20000 x 20000, and CPUs won't be able to compete. Furthermore, previously "high-end" GPUs will be integrated into motherboards, so users can expect to solve 5000 x 5000 systems easily on a junk computer. Will Maple do it, or will it be obsolete? This is not like general-purpose parallel programming (or heterogeneous computing - yikes!); this is considerably easier, and there's really no excuse for missing the boat.
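To make the matrix sizes above concrete, here is a back-of-envelope Python sketch (my own calculation, not from the post): storage for an n x n float64 matrix, and the roughly (2/3)n^3 floating-point operations of an LU factorization.

```python
# Rough sizing for the dense systems mentioned in the reply.

def matrix_gb(n):
    # n x n matrix of 8-byte doubles, in gigabytes.
    return n * n * 8 / 1e9

def lu_flops(n):
    # LU factorization costs about (2/3) n^3 flops.
    return (2 / 3) * n**3

for n in (5000, 10000, 20000):
    print(n, matrix_gb(n), f"{lu_flops(n):.1e}")
```

A 10000 x 10000 double matrix is 0.8 GB, which is consistent with the reply's point that a 1 GB card can just hold it, while 20000 x 20000 (3.2 GB) needs the 4 GB cards it predicts.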
There are no solutions to that system; you might try solving for all variables. For example: solve({eq});