MaplePrimes Posts

MaplePrimes Posts are for sharing your experiences, techniques and opinions about Maple, MapleSim and related products, as well as general interests in math and computing.

Latest Post
  • I recently upgraded to Microsoft Internet Explorer 9 x64. I was unable to get the "Insert Link" or "Insert Content" buttons of the "File Uploader" to work. I found that enabling "Compatibility View" seems to resolve the problems with those buttons in the "File Uploader."

    Unfortunately, even with "Compatibility View" enabled, I seem to be unable to comment on posts. When I hit the Submit button I am taken to the post view; however, my comment does not appear.

    I was recently asked a question on using regular expressions with ?type, and I thought it was interesting enough to share here.
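    For instance, one way to get a regular-expression test into Maple's type system is to wrap StringTools:-RegMatch in a satisfies() structured type (a minimal sketch; the predicate name and pattern here are hypothetical, not taken from the original question):

        # A "string of digits" check expressed as a type, by combining the
        # satisfies() structured type with StringTools:-RegMatch.
        isDigits := s -> StringTools:-RegMatch("^[0-9]+$", s):
        type("12345", 'satisfies'(isDigits));   # true
        type("12a45", 'satisfies'(isDigits));   # false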

    I have been reading through the following book:

    Hilderman, Robert J. and Hamilton, Howard J., "Knowledge Discovery and Measures of Interest," Kluwer Academic Publishers, 2001.

    To better understand the material in Chapter 3, "A Data Mining Technique," I have written a Maple worksheet implementing...

    I had originally planned a light-hearted post about a customer visit I recently made in Europe, but in light of the events in Japan these past few days, it somehow seemed terribly inappropriate. Around the world, people are coming to grips with this recent series of disasters, but for us at Maplesoft, being part of the global Cybernet corporate team, there are very personal dimensions.

    The good news is that all of our colleagues at Cybernet Systems, headquartered...

    The first few convergents from the continued fraction expansion of the MRB constant are 0, 1/5, 3/16, 31/165, 34/181, and 65/346. If you were to use those convergents as terms of a generalized continued fraction, it would represent, approximately,

    0 + 1/(0.20 + 1/(0.1875 + 1/(0.1878787 + 1/(0.187845 + 1/0.187861)))) = 1.83346...
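    As a quick check, that nested expression can be evaluated directly in Maple (a minimal sketch, using the exact convergents listed above in place of their decimal truncations):

        # Evaluate the generalized continued fraction whose terms are the first
        # six convergents of the MRB constant's simple continued fraction.
        c := [0, 1/5, 3/16, 31/165, 34/181, 65/346]:
        gcf := c[1] + 1/(c[2] + 1/(c[3] + 1/(c[4] + 1/(c[5] + 1/c[6])))):
        evalf(gcf);   # approximately 1.83346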

    This year I am organizing East Coast Computer Algebra Day (ECCAD) 2011 at the University of Waterloo, on April 9th. 

    ECCAD is a long-running series of annual one-day meetings for those interested in...

    In a recent blog post, I discussed five "gems" in my Little Red Book of Maple Magic, a notebook I use to keep track of the Maple wisdom I glean from interactions with the Maple programmers in the building. Here are five more such "gems" that appeared in a Tips & Techniques column in a recent issue of the ...

    The goal of computing only a select number of eigenvectors of a real symmetric floating-point Matrix comes up now and then. For very large Matrices the memory requirements can be more restrictive than the timing.

    The attached worksheet and code compute this more quickly, and with significantly less memory allocation, than the usual approach of computing all eigenvectors. By using the supplied Matrix itself as a partial "workspace", the amount of additional workspace and memory allocation for the task is negligible.

    For example, having created the very large Matrix in the first place, essentially no further memory allocation is required to compute the largest eigenvalue and its associated eigenvector.

    A little about this routine `SelectedEigenvectors` follows.

    It works only in hardware double precision. It expects a float[8] datatype Matrix (because you are serious about using minimal memory!). It calls the CLAPACK function dsyevx through the "wrapperless" version of Maple's external-calling mechanism. It seems to work fine on the systems I've tried so far: Maple 13 and 14 on both 32-bit and 64-bit Linux and Windows.

    Whether it computes and returns the selected eigenvectors (alongside the selected eigenvalues, which are always returned) is controlled by the 'vectors=truefalse' optional argument. By default it uses the Matrix argument as partial workspace and so destroys the original data; but this can be overridden with the 'preserve=true' optional argument. The requested accuracy can be relaxed with the 'epsilon=float' optional argument, which might sometimes speed it up.

    The input Matrix is presumed to be symmetric. By default it uses the data in the lower triangle, but this can be changed to be the upper triangle with the 'uplo' optional argument.

    The choice of eigenvalues is controlled by the two integer arguments `il` and `iu`. If il = iu = n then only the nth eigenvalue in ascending order (that is, the largest) is computed. If il = 1 and iu = 4 then the four smallest eigenvalues are computed.

    It returns three things: a Vector of dimension n whose first m entries are the selected eigenvalues, an n x m Matrix whose columns are the m associated eigenvectors, and a Vector of dimension n whose entries indicate whether the corresponding eigenvectors failed to converge.
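    A usage sketch, pieced together from the description above; the exact calling sequence is defined in the attached worksheet, so the argument style shown here (keyword options for il, iu, vectors, and preserve) is an assumption for readability:

        # Hypothetical call: build a large symmetric float[8] Matrix and request
        # only its largest eigenvalue (il = iu = n) plus the associated eigenvector.
        n := 5000:
        A := LinearAlgebra:-RandomMatrix(n, 'datatype' = float[8],
                                         'generator' = 0.0 .. 1.0):
        A := A + A^%T:                     # symmetrize; still datatype=float[8]
        evals, evecs, convinfo := SelectedEigenvectors(A, 'il' = n, 'iu' = n,
                                                       'vectors' = true,
                                                       'preserve' = false):
        evals[1];       # the selected (largest) eigenvalue
        evecs[.., 1];   # its eigenvector, the single column returned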

    I didn't enable float arguments such as `vl` and `vu` which in principle could allow one to supply a floating-point range in which to find eigenvalues.

    I didn't implement the optimization of making an initial "dummy" external call, in which no computation would be done but which would instead query for (and subsequently use) the optimal-performance additional float workspace size.

    For reasons mysterious to me, on Windows the 64bit version runs almost exactly half as fast as the 32bit version.

    Usually, the workspace for eigen-solving is at least O(n^2) for an n x n Matrix. But this routine does only O(m+n) extra workspace allocation to compute the m eigenvectors. That is linear, which is the Big Deal.

    A 5000 x 5000 datatype=float[8] (i.e. hardware double precision) Matrix takes 200MB of memory. With the preserve=true option, this routine can compute just the largest eigenvector with only about 200MB of additional allocation. And if the original Matrix is no longer required, then with the preserve=false option this routine can do that task with less than 1MB of further allocation. In comparison, the regular LinearAlgebra:-Eigenvectors command would require about 600MB of additional memory allocation while computing all eigenvectors.
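    For reference, the 200MB figure is simply the storage of the dense Matrix itself:

        # 5000 x 5000 entries at 8 bytes per hardware double-precision value
        evalf(5000^2 * 8 / 10^6);   # 200. (megabytes)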

    At size 5000x5000 this routine is only about four times faster than LinearAlgebra:-Eigenvectors. I suspect that is because it still has to compute in full the reduction to tridiagonal form.

    Download dsyevx.mw

    One must agree that the most popular mathematics encyclopedia was created on the foundation of Mathematica.

    Our response to Chamberlain.

    I propose to establish a Global Practicum of elementary and higher mathematics: Mapler.
    My Russian version already contains several thousand multiple-choice programs with complete solutions, tests, tutors, graphics, etc.
    Such a practicum would be an order of magnitude more in demand with its audience than MathWorld.

    I hope that someone will find this useful, especially beginners.

    ani.zip

    Maple version (86 .mws worksheets with links): course_zno.zip   HTML interactive version: tar.zip

    Maple in Ukraine
    A training course for the entrance examination in mathematics (External Independent Evaluation)

    When I went to the library, I found that there were very few books about Maple. I turned to the online bookstores in my country and found that the situation was similar. The most incredible part was that the latest book about Maple was based on Maple 8. But when I typed 'matlab' in the search...

    countries.zip

    I thought I'd create a partial map of the Middle East using a procedure in Maple. Let me say it's not the quickest thing to do when the data format is not particularly favourable.

    I downloaded data from the Coastline Extractor at ngdc.noaa.gov/mgg/coast. This gives you point data sets. You can then use Maple to pointplot...
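    A minimal sketch of that last step; the file name and the plain two-column longitude/latitude layout are assumptions about the extracted data, not details from the original post:

        # Read a whitespace-delimited file of longitude/latitude points and plot it.
        pts := ImportMatrix("middle_east_coast.dat", 'source' = 'delimited',
                            'delimiter' = " ", 'datatype' = float[8]):
        plots:-pointplot(pts, 'symbolsize' = 1, 'scaling' = 'constrained');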


    Update - April 4, 2011: I corrected a typo in Table 2, first column, bottom row.  What was sqrt(6) has been changed to sqrt(5).


    This is just a programmatic twist on Robert Lopez's very nice original post on this topic.

    I'd rather be able to control such declarations with code than have to manipulate the palettes or context menus in order to get what are, in essence, additional programming and authoring tools. Such code can be dynamic and flexible, and could be inserted in Code or Startup regions.

    Download atomicpartials.mw
