Yesterday I attended a lecture by Fran Allen, as part of the "David R. Cheriton School of Computer Science, University of Waterloo, Distinguished Lecture Series".  Allen worked at IBM Research from 1957 to 2002, and she was awarded the ACM's Turing Award in 2006.  Here is her biography from Wikipedia (let's hope it is accurate). 

Aside from some technical issues (why can't a room full of computer science professors and students successfully attach a laptop to a projector?), the talk was quite interesting.  There were two main sections: the first covered Allen's career at IBM, and the second was about the future of computer science.  Allen's work at IBM focused mostly on compilers and high performance computing.  She made a few interesting comments about the importance of high performance computing.  For example, one of the systems she worked on was designed and used to model the detonation of nuclear weapons.  The development of this system ended the need for the United States to perform actual test detonations.

However, more interesting from my point of view was her discussion of the future of computing.  Allen reiterated the need for going parallel, which should be no surprise to readers of this blog, but she also made her own suggestions for how to do it.  Her suggestion, as I understood it, was to create new domain specific high level languages with very smart compilers capable of automatically parallelizing the code.  By using a high level language you can describe the problem in a more abstract way, and that abstract description gives the compiler more flexibility to automatically parallelize the resulting code.  Languages like C, and even Java, describe what the processor should be doing in a very concrete way, which limits the compiler's ability to transform the code into something that can be parallelized.  She also had one surprising suggestion: she feels that hardware designers should stop using caches.  Unfortunately there was not enough time to get into the details of how this would work.
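To make that contrast concrete in Maple terms, here is a small sketch of my own (not something Allen presented).  The loop version pins down a sequential order of updates, so a compiler has to prove the iterations independent before it can parallelize them.  The map version only says what should be computed, which leaves an implementation free to spread the work across cores, for example by swapping map for Threads:-Map.

# Concrete, loop-based style: the explicit index order must be shown
# to be irrelevant before the iterations can safely run in parallel.
squares_loop := proc( L::list )
    local i, R;
    R := Array( 1..nops(L) );
    for i from 1 to nops(L) do
        R[i] := L[i]^2;
    end do;
    return convert( R, list );
end proc:

# Abstract style: "apply x -> x^2 to every element" says nothing about
# evaluation order, so the work can be divided among cores.
squares_map := L -> map( x -> x^2, L ):

squares_loop( [1, 2, 3, 4] );   # [1, 4, 9, 16]
squares_map( [1, 2, 3, 4] );    # [1, 4, 9, 16]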

Although it was not explicitly stated, it seemed that Allen's ideas are really aimed at high performance computing and specialized systems more than general computation.  It seems unlikely that domain specific languages with smart auto-parallelizing compilers could be created for all the different types of applications that run on a typical desktop machine.

So how does Maple fit into this world view?  Maple is a reasonably high level language, which is good, but we don't currently have code analysis tools in the kernel that can automatically optimize user code.  Automatic parallelization also requires that the kernel be able to determine the thread safety of user defined routines on its own, so that is something else we'd need to add.  If we had automatic parallelization, how would it interact with explicit parallel programming tools like the Task Programming Model?  If we implement some automatic parallelization features in the next few years, I think they would help with fine grained parallelism, but I don't think they would be able to exploit coarse grained parallelism very well.  Thus Task programming would still be useful for exploiting high level parallelism.
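As a reminder of what the explicit, coarse grained style looks like today, here is a small Task Programming Model example using the standard divide-and-conquer pattern (nothing new, just an illustration): the programmer decides where to split the work, whereas an auto-parallelizing compiler would ideally discover a split like this on its own.

# Sum the integers from lo to hi by splitting the range into tasks.
parallelAdd := proc( lo, hi )
    local mid, i;
    if hi - lo < 10000 then
        # Small enough: do the work directly in this task.
        return add( i, i = lo..hi );
    end if;
    mid := floor( (lo + hi)/2 );
    # Create two child tasks and combine their results with `+`.
    Threads:-Task:-Continue( `+`,
        Task = [ parallelAdd, lo, mid ],
        Task = [ parallelAdd, mid+1, hi ] );
end proc:

Threads:-Task:-Start( parallelAdd, 1, 10^6 );   # 500000500000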

If Allen's ideas really take off (and there is some very active, very interesting research exploring these ideas) we may even want to add a higher level mathematical description language on top of the current Maple language.  This high level description language would allow the kernel more freedom to generate parallel solutions to given problems.

I also wanted to mention that we at Maplesoft are hard at work developing the next version of Maple.  We will soon start beta testing, so anyone interested in testing the parallel programming tools (and Maple in general) should consider applying to be a beta tester.  In the past the parallel tools have received pretty limited attention from our beta testers, which can make it difficult for us to really flush out all the bugs.  If you are interested in helping, please go to the Maplesoft beta site for more information:

http://beta.maplesoft.com

Here is the beta program FAQ:

http://beta.maplesoft.com/betafaq.html

Also, I will be taking some vacation time throughout December so I won't be blogging quite as often as I generally do.

