Product Tips & Techniques

Tips and tricks on how to get the most out of Maple and MapleSim
r := abs(z)^(Re(a)) * exp(-Im(a) * argument(z));
w:= r * abs(z)^(Im(a)*I) * (z/abs(z))^Re(a);

I want to see that z^a = w.
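For reference, the identity behind this decomposition, writing z = |z|*exp(I*theta) with theta = argument(z) and a = alpha + I*beta (principal branch, theta in (-Pi, Pi]):

$$z^a = e^{a\ln z} = e^{(\alpha+i\beta)(\ln|z|+i\theta)} = \underbrace{|z|^{\alpha}e^{-\beta\theta}}_{r}\,\cdot\,|z|^{i\beta}\cdot\left(\frac{z}{|z|}\right)^{\alpha} = w.$$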

But simplify(w) gives a wrong result; it differs from w:

tstData:= [z=1+3*I, a=-3+I];
z^a; eval(%, tstData):  evalf(%);
'w'; eval(%, tstData): evalf(%);
'simplify(w)'; eval(%, tstData): evalf(%);

tstData:= [z=-2*I, a=+I];
z^a; eval(%, tstData):  evalf(%);
'w'; eval(%, tstData): evalf(%);
'simplify(w)'; eval(%, tstData): evalf(%);

In the last case, simplify(w) results in a purely real value,
while w has a nonvanishing imaginary part.

In my previous posts I discussed the basic difference between parallel programming and single threaded programming. I also showed how controlling access to shared variables can be used to solve some of those problems. In this post, I am going to discuss further difficulties of writing good parallel algorithms.

Here are some definitions used in this post:

  • scale: the ability of a program to get faster as more cores are available
  • load balancing: how effectively work is distributed over the available cores
  • coarse grained parallelism: parallelizing routines at a high level
  • fine grained parallelism: parallelizing routines at a low level

Consider the following example
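The post's own example is truncated in this excerpt. As a hedged sketch in the same spirit, here is a fine grained, recursively split sum using the Task Programming Model; because the work is divided into many small tasks, the scheduler can balance the load over however many cores are available. The name parSum and the serial cutoff of 1000 are illustrative, not from the original post.

parSum := proc(f, lo::integer, hi::integer)
    local mid;
    if hi - lo < 1000 then
        add(f(i), i = lo .. hi);                 # small chunk: run it serially
    else
        mid := iquo(lo + hi, 2);
        Threads:-Task:-Continue(`+`,             # combine the two halves
            Task = [parSum, f, lo, mid],
            Task = [parSum, f, mid + 1, hi]);
    end if;
end proc:

Threads:-Task:-Start(parSum, i -> i^2, 1, 10^6);

Making the cutoff larger gives coarser grained parallelism: fewer, bigger tasks mean less scheduling overhead, but also less opportunity for load balancing.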

As of 9 October 2009:

http://www.mapleprimes.com/mapleranking?sort=desc&order=Points
 

I just want to reiterate how dynamic programming problems can be solved in Maple, especially the dynamic programming models that frequently appear in economics.

First of all, it is important to note that it is close to impossible to find an easy-to-understand, step-by-step road map to dynamic programming. Why is that?! The Maple code below was basically "discovered" by trial and error and pure stubbornness (caveman 101).
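As a hedged sketch of the general pattern, here is plain value iteration on a toy cake-eating problem, V(k) = max over 0 < c <= k of ln(c) + beta*V(k - c), discretized on a uniform grid. The name valueIter, the grid size, and the log utility are illustrative, not from any particular economic model discussed here.

valueIter := proc(n::posint, beta::numeric, iters::posint)
    local V, Vnew, i, j, best, cand;
    V := Array(1 .. n, fill = 0.0);      # V[i]: value of holding i/n units
    to iters do
        Vnew := Array(1 .. n, fill = 0.0);
        for i to n do
            best := -infinity;
            for j to i do                # consume j/n, keep (i-j)/n for later
                cand := evalf(ln(j/n))
                        + beta * `if`(i > j, V[i - j], 0.0);
                if cand > best then best := cand end if;
            end do;
            Vnew[i] := best;
        end do;
        V := Vnew;
    end do;
    V;
end proc:

V := valueIter(100, 0.95, 200):   # Bellman iteration on a 100-point grid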

 

In the previous post, I described why parallel programming is hard. Now I am going to start describing techniques for writing parallel code that works correctly.

First some definitions.

  • thread safe: code that works correctly even when called in parallel.
  • critical section: an area of code that will not work correctly if run in parallel.
  • shared: a resource that can be accessed by more than one thread.
  • mutex: a programming tool that controls access to a section of code.
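As a hedged illustration of the last two definitions (the shared counter and the names m and incr are mine, not from the post): the increment of counter is a critical section, and the mutex is what makes incr thread safe.

with(Threads):
m := Mutex:-Create():
counter := 0:
incr := proc(n)
    local i;
    global counter, m;
    for i to n do
        Mutex:-Lock(m);           # enter the critical section
        counter := counter + 1;   # update the shared resource
        Mutex:-Unlock(m);         # leave the critical section
    end do;
    NULL;
end proc:
Task:-Start(() -> counter, Task = [incr, 10^4], Task = [incr, 10^4]);
    # returns 20000; without the mutex the two threads could interleave
    # their read-modify-write steps and lose increments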
Hi there, I have a quick question and I just want to confirm what I found. I was trying to do some Fourier transforms using the MTM package, and I was successful in doing so for one-dimensional problems where it's only f(x). But when I tried to do something more complicated, like a 2-D Fourier transform where f is a function of x and y, i.e. f(x,y), Maple seems unable to do it. I checked the help menu and it seems MTM supports only 1-D transforms. Is there any other package that I'm not aware of that is capable of such a thing?
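For what it is worth, a hedged workaround sketch (using inttrans rather than MTM): since the 2-D Fourier kernel factors over x and y, the 1-D fourier command can be iterated. The Gaussian example is mine, not from the question.

with(inttrans):
f := exp(-x^2 - y^2):
F := fourier(fourier(f, x, u), y, v);
    # Pi*exp(-u^2/4 - v^2/4): a Gaussian transforms to a Gaussian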

In my previous post, I tried to convince you that going parallel is necessary for high performance applications. In this post, I am going to show what makes parallel programming different, and why that makes it harder.

Here are some definitions used in this post:

  • process: the operating system level representation of a running executable. Each process has memory that is protected from other processes.
  • thread: within a single process each thread represents an independent, concurrent path of execution. Threads within a process share memory.

We think of a function as a series of discrete steps. Let's say a function f is composed of steps f1, f2, f3, ... e.g.
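The example itself is cut off in this excerpt; as a hedged stand-in, here is what such steps look like when f updates a shared global, which is exactly where concurrent interleaving causes trouble.

f := proc()
    global x;
    local t;
    t := x;         # f1: read the shared value
    t := t + 1;     # f2: compute with it
    x := t;         # f3: write it back
end proc:

If two threads run f at the same time, their steps can interleave as T1:f1, T2:f1, T1:f2, T2:f2, T1:f3, T2:f3, and one of the two increments is lost.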

I started six months ago with what I thought at the time to be a simple question.

Why is the mean in the Black-Scholes model assumed to be (mu-(1/2)*sigma^2)*T?

I had seen numerous attempts at deriving such a relationship on the Internet, but every solution that I found had some flaw in the step-by-step mathematical logic, which rendered it useless.
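For context, a standard sketch of where that term comes from (not necessarily the derivation eventually found): under geometric Brownian motion $dS_t = \mu S_t\,dt + \sigma S_t\,dW_t$, applying Itô's lemma to $\ln S_t$ gives

$$d(\ln S_t) = \left(\mu - \tfrac{1}{2}\sigma^2\right)dt + \sigma\,dW_t,$$

so $\ln(S_T/S_0)$ is normally distributed with mean $(\mu - \tfrac{1}{2}\sigma^2)T$; the $-\tfrac{1}{2}\sigma^2$ correction is the Itô term.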

A few years ago I wrote a tool, mgrep, for searching Maple repositories;  download mgrep.zip. The zip file includes the noweb source (mgrep.nw), however, it is missing some of the files needed to rebuild the documentation—I will add them later when I bring an old drive back online.  You should not, however, need to rebuild the documentation (mgrep.pdf) because it is included along with the shell-script (mgrep) and gawk file (mgrep.awk).  To use the tool you will need to install mgrep and mgrep.awk in a directory in your path.  The --help option prints a brief help page.

Here I use mgrep to partially explore a question that acer poses: whether % may be usefully employed in a Maple procedure. A reasonable start is to see whether it is so used in the distributed Maple library. First I go to the lib subdirectory of the Maple installation, then call mgrep to search maple.mla for all procedures that use % as a name.

Computers with multiple processors have been around for a long time, and people have been studying parallel programming techniques for just as long. However, only in the last few years have multi-core processors and parallel programming become truly mainstream. What changed?

Here are some definitions for terms used in this post:

  • core: the part of a processor responsible for executing a single series of instructions at a time.
  • processor: the physical chip that plugs into a motherboard. A computer can have multiple processors, and each processor can have multiple cores.
  • process: a running instance of a program. A process's memory is usually protected from access by other processes.
  • thread: a running instance of a process's code. A single process can have multiple threads, and multiple threads can be executing at the same time on multiple cores.
  • parallel: the ability to utilize more than one processor at a time to solve problems more quickly, usually by being multi-threaded.

For years, processor designers had been able to increase the performance of processors by increasing their clock speeds. However, a few years ago they ran into a few serious problems. RAM access speeds were not able to keep up with the increased speed of processors, causing processors to waste clock cycles waiting for data. The speed at which electrons can flow through wires is limited, leading to delays within the chip itself. Finally, increasing a processor's clock speed also increases its power requirements. Increased power requirements lead to the processor generating more heat (which is why overclockers come up with such ridiculous cooling solutions). All of these issues meant that it was getting harder and harder to continue to increase clock speeds. The designers realized that instead of increasing the core's clock speed, they could keep the clock speed fairly constant but put more cores on the chip. Thus was born the multi-core revolution.

My name is Darin Ohashi and I am a senior kernel developer at Maplesoft. For the last few years I have been focused on developing tools to enable parallel programming in Maple. My background is in Mathematics and Computer Science, with a focus on algorithm and data structure design and analysis. Much of my experience with parallel programming has been acquired while working at Maplesoft, and it has been a very interesting ride.

In Maple 13 we added the Task Programming Model, a high level parallel programming system. With the addition of this feature, and a few significant kernel optimizations, useful parallel programs can now be written in Maple. Although there are still limitations and lots more work to be done on our side, adventurous users may want to try writing parallel code for themselves.

To encourage those users, and to help make information about parallel programming more available, I have decided to write a series of blog posts here at Maple Primes. My hope is that I can help explain parallel programming in general terms, with a focus on the tools available in Maple 13. Along the way I may post links to sites, articles and blogs that discuss parallel programming issues, as well as related topics, such as GPU programming (CUDA, OpenCL, etc.).

In my next post, the first real one, I am going to explain why parallel programming has suddenly become such an important topic.

Not all objects can be saved to .m and retrieved successfully in a restarted or new session. This is the case not only for "escaped" locals, but also for some objects implemented as function calls of a module member.

> restart:

> t := ScientificConstants:-Constant('c'):

> type(t, specfunc(anything,ScientificConstants:-Constant));
                                     true
 
> ScientificConstants:-GetValue(t...
I've been using a combination of Fortran and gnuplot to do my plots; my research requires an intensive amount of data plotting. I've been trying to use Maple to do scatter plots and curve fitting for my data. Here is the code:

> K := readdata("/Users/xxx/Work/dt40s5l10.110/ALL/dt40s5l10.110.1.DFIR.AVE", 8);  # the file contains 2972 lines and 8 columns of data
> unassign('G', 'X');
> seq(assign(G[i], (K[i+1, 5]-K[i, 5])/(K[i+1, 1]-K[i, 1])), i = 1 .. 2970);  # to calculate the derivative
> seq(assign(X[i], K[i, 1]), i = 1 .. 2970);

When I tried to plot X vs G using scatterplots
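As a hedged alternative sketch (assuming K has been read as above): build the derivative data as a list of points and plot it directly, instead of assigning to the indexed names G and X. The fit expression a*x + b is illustrative.

pts := [seq([K[i, 1], (K[i+1, 5] - K[i, 5])/(K[i+1, 1] - K[i, 1])],
            i = 1 .. 2970)]:
plots:-pointplot(pts, symbol = point);    # scatter plot of X vs G
Statistics:-ScatterPlot(map2(op, 1, pts), map2(op, 2, pts),
                        fit = [a*x + b, x]);    # with a least-squares line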

Good morning!

I am currently involved (though hopefully near the end) in a lengthy discussion regarding how to use the MaplePrimes editor. I should say up front that I am grateful for any forum that provides me with a place to ask questions and to help others. I've been around for a few years, but only lately have I decided to put substantial effort into learning Maple. And it is a substantial effort, despite the Maple ads about how easy Maple is to use.

I was just reminded of an aspect of Maple GUI Components, new to Maple 13, that has sometimes come in very useful to me. It is the ability to refresh a component programmatically.

I should explain. The old (and current default) behaviour is for the GUI not to refresh any other components until the current component is finished (i.e., returns control).

The relevant situation in which this matters is when a given component has action code whose
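The excerpt is cut off here. For reference, a minimal hedged sketch of the kind of call involved, assuming the DocumentTools:-SetProperty interface (with its refresh option) and an embedded Label component named "Label0", neither of which is from the original post:

with(DocumentTools):
SetProperty("Label0", 'caption', "working...", 'refresh' = true);
    # ... long-running computation here; without refresh = true the new
    # caption would only appear once this action code returned ...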
