Applications, Examples and Libraries

Share your work here

In this post I will describe a little about the OU course MS325: Computer Algebra, Chaos and Simulations, which I took last year.

MS325 is a level 3 OU applied mathematics course, which means, roughly, that it is pitched at the level of a final-year mathematics undergraduate. It is split into three components: Computer Algebra, which teaches the use of Maple and Maple programming; Chaos, which teaches dynamical systems, deterministic chaos and fractals, with an emphasis...

Time and time again you get caught up working with Maple, and before you know it your 20 minutes of allotted time has turned into almost an hour.

Here's a little alarm procedure to remind you it's time to start shutting down.  Start this procedure in a separate worksheet and enter the number of minutes before you want your Maple alarm message to pop up.  Open a new worksheet and start your work.  When the time is up, your message will pop up unless...
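
For anyone who wants to roll their own, here is a minimal sketch of such an alarm (my own illustration, not the procedure from the post). It waits with Threads:-Sleep and then pops up a Maplets alert dialog; the name AlarmAfter is just made up for this example.

with(Maplets[Elements]):

AlarmAfter := proc(minutes)
    # block this session for the requested number of minutes...
    Threads:-Sleep(60*minutes);
    # ...then pop up a reminder dialog
    Maplets[Display](Maplet(AlertDialog(
        "Time is up -- start shutting down your Maple session.",
        'onapprove' = Shutdown(), 'ondecline' = Shutdown())));
end proc:

# start this in its own worksheet, then open a new worksheet for your work
AlarmAfter(20);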

All too often one wants a list of dates from one date to another for use in a graph or table. 

All my searches on the subject have come up empty.  I do not have Maple 15, but I think the tools for such luxuries are there in the new Finance package.

Without Maple 15, how can I do this in a simple manner?

** edit ** I have converted this to a post and added more information below.

Well, I have created a procedure with one way to achieve...
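
Here, for comparison, is one simple way to do it without the Finance package (my own sketch; the name DateRange and its argument order are invented for this illustration). It steps through the calendar day by day, handling month lengths and leap years, and returns the dates as strings:

DateRange := proc(y1, m1, d1, y2, m2, d2)
    # list of "YYYY-MM-DD" strings from the first date to the second, inclusive
    # (assumes the second date is not earlier than the first)
    local isleap, daysin, y, m, d, result;
    isleap := yr -> evalb(irem(yr, 4) = 0 and (irem(yr, 100) <> 0 or irem(yr, 400) = 0));
    daysin := (yr, mo) -> [31, `if`(isleap(yr), 29, 28), 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][mo];
    y, m, d := y1, m1, d1;
    result := [];
    while [y, m, d] <> [y2, m2, d2] do
        result := [op(result), sprintf("%04d-%02d-%02d", y, m, d)];
        d := d + 1;
        if d > daysin(y, m) then d := 1; m := m + 1; end if;
        if m > 12 then m := 1; y := y + 1; end if;
    end do;
    [op(result), sprintf("%04d-%02d-%02d", y2, m2, d2)];
end proc:

DateRange(2011, 2, 26, 2011, 3, 2);
    # ["2011-02-26", "2011-02-27", "2011-02-28", "2011-03-01", "2011-03-02"]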

Another feature added to Maple 15 partially in response to the MaplePrimes forums is the new/improved ?HTTP package.  It provides one-step commands for fetching data from the web: much simpler than using the ?Sockets package directly. In most cases, the command ?HTTP,Get is what you would use:

 (s, page, h) := HTTP:-Get("http://en.wikipedia.org/wiki/List_of_Crayola_crayon_colors"):

The above fetches the HTML source of a page from Wikipedia and stores it in the string 'page'. The other two outputs are 's', an integer HTTP status code, and 'h', a table of the headers returned in the HTTP response from the server.  Compare this to the amount of code needed to fetch data in my Baby Names application for Maple 12, for example.
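
As a quick follow-up illustration (my own, not part of the original post), the returned status and headers can be checked before using the page string; the header key and the search term here are just examples:

if s = 200 then
    # count occurrences of the word "crayon" in the page source
    nops([StringTools:-SearchAll("crayon", StringTools:-LowerCase(page))]);
    # the response headers live in the table h, e.g.
    h["Content-Type"];
else
    error "request failed with HTTP status %1", s;
end if;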

In high school I was briefly fascinated by a classic triangular "jump all but one" peg board game, commonly found at Cracker Barrel restaurants.  The basic premise is that any peg can "jump" over an adjacent peg to occupy the empty hole next to the jumped peg.  The jumped peg is then removed.  The goal is to continue jumping pegs until there is only one left.


The instructions on the face of the Cracker Barrel version of this game say, "LEAVE ONLY ONE -- YOU'RE A GENIUS".  Wanting to claim the right to call myself a genius, unlike ordinary kids, who might just play the game a few times, I sat down at my Turbo-XT and started writing BASIC code.  The algorithm I came up with ran a bit slowly, so I directed output to my printer and let it run overnight.  In the morning the program was still chugging along.  I advanced the paper feed on the dot-matrix line printer -- the kind that used continuous-feed paper with perforated edges and holes on each side.   Into view came three solutions, each represented by a string of numbers.   A quick check verified that I was now a genius.
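
Nowadays the whole search fits comfortably in a few lines of Maple. Here is a sketch of a brute-force solver (mine, not a reconstruction of the original BASIC program): holes are numbered 1 to 15 from the top row down, each legal move is a [from, over, to] triple, and a depth-first search looks for a sequence of jumps leaving one peg.

jumps := [[1,2,4],[4,2,1],[2,4,7],[7,4,2],[4,7,11],[11,7,4],
          [3,5,8],[8,5,3],[5,8,12],[12,8,5],[6,9,13],[13,9,6],
          [1,3,6],[6,3,1],[3,6,10],[10,6,3],[6,10,15],[15,10,6],
          [2,5,9],[9,5,2],[5,9,14],[14,9,5],[4,8,13],[13,8,4],
          [4,5,6],[6,5,4],[7,8,9],[9,8,7],[8,9,10],[10,9,8],
          [11,12,13],[13,12,11],[12,13,14],[14,13,12],[13,14,15],[15,14,13]]:

SolvePegs := proc(board::set)
    # board is the set of occupied holes; returns a list of [from, over, to]
    # moves reducing it to one peg, or FAIL if no such sequence exists
    local j, sol;
    if nops(board) = 1 then return []; end if;
    for j in jumps do
        if member(j[1], board) and member(j[2], board) and not member(j[3], board) then
            sol := procname((board minus {j[1], j[2]}) union {j[3]});
            if sol <> FAIL then return [j, op(sol)]; end if;
        end if;
    end do;
    FAIL;
end proc:

# classic start: the top hole empty, the other fourteen filled
SolvePegs({seq(i, i = 2 .. 15)});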

Now that Maple 15 is out, I thought I should share this little application I made: GoalTracker.mw. It is partially inspired by the BMI tracker in Nintendo's Wii Fit; you could easily use it to track a weight-loss goal, but it could also be used to track other quantifiable goals. I am posting it here mostly because it takes advantage of two new features in Maple 15.

Each of my two previous blog posts (Maple Gems, More Maple Gems) contained five "gems" from my Little Red Book of Maple Magic, a red ring-binder in which I record...

Dear all,

Here is a part of my previous post "2D finite element method": a worksheet for a 2D triangular mesh. One part of the mesh can rotate with respect to the other by modifying theta.

restart: with(geometry): with(plots):
ms := 8: n := 6: mr := 5:                                     # numbers of mesh subdivisions
theta := (1/180)*(23*Pi):                                     # rotation angle of one part of the mesh (23 degrees in radians)
wrs := (1/1000)*<30.8, 33, 36, 39, 40, 43, 46, 49, 53.25>:    # radial positions of the outer part (mm converted to m)
wrr := (1/1000)*<20, 23, 26, 28, 30, 30.8>:                   # radial positions of the inner part (mm converted to m)
wt := (1/180)*Pi*<0, 10, 20, 30, 40, 50, 60>:                 # angular positions (degrees converted to radians)
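
As a quick visual check (my own addition, not part of the original worksheet), the polar grid of nodes implied by these radii and angles can be plotted directly. Here I assume the inner set of radii wrr belongs to the part that rotates by theta:

# illustrative plot of the node grid defined above
outerpts := [seq(seq([wrs[i]*cos(wt[j]), wrs[i]*sin(wt[j])], j = 1 .. n+1), i = 1 .. ms+1)]:
innerpts := [seq(seq([wrr[i]*cos(wt[j]+theta), wrr[i]*sin(wt[j]+theta)], j = 1 .. n+1), i = 1 .. mr+1)]:
display(pointplot(outerpts, symbol = solidcircle, colour = blue),
        pointplot(innerpts, symbol = solidcircle, colour = red),
        scaling = constrained);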

In a theme similar to the movie The Matrix, here is a little procedure to create a falling-random-number animation; in this example the digits are 0s and 1s, hence the name binary rain.

MatrixRotaterows := proc(arows, bcolumns, randstart, randend, iterations)
  local a, b, i, frames;
  a := Matrix(arows, bcolumns, 0);
  frames := Array(1 .. iterations);
  for i to iterations do
    b := Matrix(1, bcolumns, rand(randstart .. randend));   # new random row of digits
    a := ArrayTools:-CircularShift(a, 1, 0);                # shift every row down by one
    a[1, ..] := b;                                          # drop the new row in at the top
    frames[i] := copy(a);                                   # keep a snapshot of this frame
  end do;
  convert(frames, list);
end proc:
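
To actually see the "rain", the returned frames can be rendered as text and chained into an animation. This display step is my own sketch, not part of the original code: textplot draws each digit at a grid position, and insequence = true plays the frames in order.

with(plots):
frames := MatrixRotaterows(15, 20, 0, 1, 40):
digitframe := m -> textplot(
    [seq(seq([j, -i, sprintf("%d", m[i, j])], j = 1 .. 20), i = 1 .. 15)],
    axes = none, colour = green):
display([seq(digitframe(m), m in frames)], insequence = true);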

Hi all,

I noticed that there are not many applications that deal with the finite element method in Maple. The file attached below contains code for magnetostatic problems, which allows the calculation of the magnetic field in a permanent-magnet synchronous machine.

mafem.zip

I hope this work will help many users of Maple with numerical analysis using the finite element method.

I posted...

I was recently asked a question about using regular expressions with ?type, and I thought it was interesting enough to share here.

I have been reading through the following book:

Hilderman, Robert J. and Hamilton, Howard J., "Knowledge Discovery and Measures of Interest," Kluwer Academic Publishers, 2001.

To better understand the material in Chapter 3, "A Data Mining Technique", I have written a Maple worksheet implementing...

The goal of computing only a select number of eigenvectors of a real symmetric floating-point Matrix comes up now and then. For very large Matrices the memory requirements can be more restrictive than the timing.

The attached worksheet and code compute this more quickly, and with significantly less memory allocation, than the usual task of computing all eigenvectors. By using the supplied Matrix itself as a partial "workspace", the amount of additional workspace and memory allocation for the task is negligible.

For example, having created the very large Matrix in the first place, essentially no further memory allocation is required to compute the largest eigenvalue and its associated eigenvector.

A little about this routine `SelectedEigenvectors` follows.

It only works in hardware double precision. It expects a float[8] datatype Matrix (because you are serious about using minimal memory!). It uses the CLAPACK function dsyevx, via the "wrapperless" version of Maple's external-calling mechanism. It seems to work fine on the systems I've tried so far: Maple 13 and 14 on both 32-bit and 64-bit Linux and Windows.

Whether it computes and returns the selected eigenvectors (alongside the selected eigenvalues, which are always returned) is controlled by the 'vectors=truefalse' optional argument. By default it uses the Matrix argument as partial workspace and so destroys the original data; but this can be overridden with the 'preserve=true' optional argument. The requested accuracy can be relaxed with the 'epsilon=float' optional argument, which might sometimes speed it up.

The input Matrix is presumed to be symmetric. By default it uses the data in the lower triangle, but this can be changed to be the upper triangle with the 'uplo' optional argument.

The choice of eigenvalues is controlled by the two integer arguments `il` and `iu`. If il=iu=n then only the nth largest eigenvalue is computed. If il=1 and iu=4 then the four smallest eigenvalues are computed.

It returns three things: a Vector of dimension n whose first m entries are the selected eigenvalues, an n x m Matrix whose columns are the m associated eigenvectors, and a Vector of dimension n whose entries indicate whether the corresponding eigenvectors failed to converge.
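
To make the calling convention concrete, a call for just the largest eigenpair of a random symmetric float[8] Matrix might look like the following. This is only a sketch based on the description above (the output names vals, vecs and failed are mine); the attached worksheet has the actual code and calling sequence.

n := 2000:
A := LinearAlgebra:-RandomMatrix(n, n, generator = -1.0 .. 1.0):
M := Matrix(A + A^%T, datatype = float[8]):    # symmetric, hardware double precision
# il = iu = n selects only the largest eigenvalue and, with vectors = true, its eigenvector
(vals, vecs, failed) := SelectedEigenvectors(M, n, n, 'vectors' = true, 'preserve' = true):
vals[1];         # the selected eigenvalue
vecs[.., 1];     # the associated eigenvector (column 1 of the n x 1 result)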

I didn't enable float arguments such as `vl` and `vu` which in principle could allow one to supply a floating-point range in which to find eigenvalues.

I didn't make the optimization of having it do an initial "dummy" external call in which no calculation would be done, but which would instead query for and subsequently use the optimal-performance additional float workspace size.

For reasons mysterious to me, on Windows the 64bit version runs almost exactly half as fast as the 32bit version.

Usually, the workspace for eigen-solving is implemented to be at least O(n^2) for an nxn Matrix. But this routine does only O(m+n) extra workspace allocation to compute the m eigenvectors. And that is linear. Which is the Big Deal.

A 5000x5000 datatype=float[8] (i.e. hardware double precision) Matrix takes 200MB of memory. With the preserve=true option, this routine can compute just the largest eigenvector with only about 200MB of additional allocation. And if the original Matrix is no longer required, then with the preserve=false option this routine can do that task with less than 1MB of further allocation. In comparison, the regular LinearAlgebra:-Eigenvectors command would require about 600MB of additional memory allocation while computing all eigenvectors.

At size 5000x5000 this routine is only about four times faster than LinearAlgebra:-Eigenvectors. I suspect that is because it still has to compute in full the reduction to tridiagonal form.

Download dsyevx.mw

One must agree that the most popular mathematics encyclopedia was created on the foundation of Mathematica.

Our response to Chamberlain.

I propose to establish a global practicum of elementary and higher mathematics: Mapler.
My Russian version already contains several thousand multiple-choice programs with complete solutions, tests, tutors, graphics, etc.
This practicum would be in demand by an audience an order of magnitude larger than MathWorld's.

I hope it will prove useful to someone, especially beginners.

ani.zip
