Maplesoft Blog

The Maplesoft blog contains posts from the heart of Maplesoft. Find out what’s coming next in the world of Maple, and get the best tips and tricks from the Maple experts.

Atomic operations are CPU instructions that are guaranteed to execute indivisibly: the operation completes without being interrupted by the actions of another thread, and no other thread ever sees it half-done. Although this may not sound too exciting, careful programming using these instructions can lead to algorithms and data structures that can be used in parallel without needing locks. Maple does not currently support atomic operations; however, they are an interesting tool, and they are used in the kernel to help improve Maple's parallelism in general.

Dual- and quad-core PCs are now ubiquitous.  While they make your operating system a better multi-tasking environment, they’ve had a limited effect on the code that most technical professionals write, largely because of the perceived difficulty of parallel programming.  The evolution, however, of high-level languages that support multi-threading throughout the 90s and beyond removed the need to manage threads at the low level, allowing engineers to concentrate on which parts of an algorithm could be run in parallel.  Given the ever-increasing complexity of the systems that have to be simulated, multi-threaded programming can offer significant time savings for the many problems that are easily parallelized (and for which the savings outweigh the overhead).
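
In Maple, for example, this high-level style can be as simple as swapping map for the Threads package's parallel map. The sketch below is illustrative (the workload f is a stand-in, not from the original post):

  # Threads:-Map has the same interface as map, but spreads the calls
  # across the available cores.  This only pays off when each call does
  # enough work to outweigh the scheduling overhead.
  f := proc( x ) local k; evalf( add( sin(x + k), k = 1 .. 1000 ) ); end proc:
  results := Threads:-Map( f, [ seq( i, i = 1 .. 100 ) ] ):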

It’s a small world, but there are still too many borders.

I’ve recently become a fan of country music.  It amazes and amuses my wife and children, but I find that country music tells stories that contain some very basic truths.

Brad Paisley sings a song named “Welcome to the Future”.  He begins that song by telling his grandfather’s story of being a soldier in the Philippines fighting the Japanese during World...

In this post I'll take a closer look at the ways in which Maple code can be thread unsafe. If you have not already seen my post on Thread Safety, consider reading that post first. As a brief review, a procedure is thread safe if it works correctly when run in parallel.

The most obvious way in which procedures can be thread unsafe is if they share data without synchronizing access (using a Mutex, for example). So how can two threads share data?
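
One common answer is through a global variable. As a minimal sketch (the names and counts are illustrative, not from the original post), here is how a Mutex from the Threads package can serialize updates to a shared global so that no increment is lost:

  counter := 0:
  m := Threads:-Mutex:-Create():

  bump := proc()
      global counter;
      Threads:-Mutex:-Lock( m );     # only one thread past this point...
      counter := counter + 1;        # ...so the read-modify-write is safe
      Threads:-Mutex:-Unlock( m );
  end proc:

  Threads:-Seq( bump(), i = 1 .. 1000 ):   # 1000 increments in parallel
  counter;                                 # 1000; without the Mutex, some
                                           # updates could be lost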

There is something profoundly satisfying when something that goes “viral” on the Web has some connection to your life. This happened recently when I and my colleagues were pointed towards a video of some laboratory robots that somehow drew almost a million views on YouTube alone. For an engineer,...

The first professional training course I gave involved a 275-mile late-evening drive in a 1-litre European econobox from Letchworth in the UK to a dingy hotel in Alnwick.  I was pretty nervous – some of my delegates were engineers who had been using Mathcad for over ten years, and I was being paid to tell them what they didn’t know.  The following day, after drinking several litres of coffee, I drove another five miles to the training location, only to find that just one delegate had turned up.  Luckily he was just an intern who’d never used Mathcad before – and to him I was an expert.

Those of you who know me know that besides my family I have three great passions: history, travel, and technology.

I have always been an amateur student of history, reading and learning as much as I can.  But reading only gets you so far.  I think it was Mark Twain who said, “You can’t understand a country until you smell it.”  Smell it?  I think by that he meant that you can’t smell a country unless you are there, which is really the only way to begin to truly understand it.  He was right, of course, and travel is the perfect complement to my love of history.

I'd like to start by thanking all those readers who left feedback on my last post. It was good to hear that most of you enjoy reading my posts and that they are generally helpful. I would like to encourage you to continue posting feedback, especially questions or comments about anything I fail to explain sufficiently.

The following is a discussion of the limitations of parallel programming in Maple. These are the issues that we are aware of and are hoping to fix in future releases.

A leading motorcycle manufacturer has been using MapleSim to model their powertrain, and now they want to include a realistic battery model. This would let them choose batteries and accessories (like starters and alternators) that they can simulate, along with their powertrain model, under a variety of operating conditions. The company turned to Maplesoft to help with this modeling exercise, and I was put on the task. My background is in circuits, so I thought this would be a straightforward project. In my mind, batteries were just constant voltage sources that eventually ran out of charge. But after finding several recent research papers on battery models, I realized that a battery's behavior is much more complicated than that of a simple voltage source.

My son Eric began high school this year (grade 9), and a marvelous thing happened. In my previous posts, I lamented that I was generally unable to spark in him an interest in math, but something changed this year. The first sign was his first math test, given within the first two weeks of the new school year. It was an assessment of sorts to see who knows what, and he scored 90%. Although it was a review of basic arithmetic with complicated fractions, order of operations, and such, this was the first time he had ever ranked within the top few of his class in math. Fast forward a few days. He came up to me with a large grin and said, “Dad, you’re in my math textbook!” Actually, it wasn’t me, but there was an indirect reference to Maple in one of the later chapters of the book, which he was perusing out of curiosity (another good sign). “This is your stuff, isn’t it?” With tears welling up inside, I proudly answered “yes.”

A favorite diversion of mine (and of many around the Maplesoft office) is xkcd. Its author, Randall Munroe, bills it as “a webcomic of romance, sarcasm, math, and language.” Since 2005, he’s been entertaining many self-proclaimed geeks with his unique and slightly skewed jokes on technology, computer science, mathematics, and relationships.

I really like the post in which a substitute teacher – hm, Mr. Munroe...

In my previous posts I have discussed various difficulties encountered when writing parallel algorithms. At the end of the last post I concluded that the best way to solve some of these problems is to introduce a higher level programming model. This blog post will discuss the Task Programming Model, the high level parallel programming model introduced in Maple 13.
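
As a small taste of the model, here is a sketch of a parallel sum (the cutoff and range are illustrative): each task either computes its piece directly or splits it into two child tasks, with `+` as the continuation that combines their results:

  psum := proc( lo, hi )
      local mid, i;
      if hi - lo < 1000 then
          add( i, i = lo .. hi );        # small enough: just do the work
      else
          mid := lo + floor( (hi - lo)/2 );
          Threads:-Task:-Continue( `+`,  # `+` combines the child results
              Task = [ psum, lo, mid ],
              Task = [ psum, mid + 1, hi ] );
      end if;
  end proc:

  Threads:-Task:-Start( psum, 1, 10^6 );   # returns 500000500000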

Unless you’ve spent the past five years on an isolated island in the middle of the Pacific, you’ll have heard of Facebook and Twitter and LinkedIn and MySpace and Flickr. Social media sites: whether you love them, hate them, or just don’t get them, they’re going to be here for a while. If you’re like many of us, you may have a few accounts on these sites, whether you’re a power user or occasional dabbler. Social media allow us to re-connect with old friends and colleagues, share our thoughts – and photos, advertise, network... and generally waste time. :)

The evolution of written language started in earnest around 3500 BC with cuneiform, spurring a step-change in the volume of information that could be recorded and transmitted over large distances.

This evolved into a wide spectrum of other methods of information transmission. The first transatlantic telegraph cables, for example, were laid in the mid-to-late nineteenth century by information pioneers – industrialists who saw the vast benefit in increasing the rate of information exchange by many orders of magnitude. This led to a Cambrian explosion in the sheer volume of information transmitted internationally, raising trade and commerce to hitherto unseen levels.

In my previous posts I discussed the basic differences between parallel programming and single-threaded programming. I also showed how controlling access to shared variables can solve some of those problems. In this post, I am going to discuss further difficulties of writing good parallel algorithms.

Here are some definitions used in this post:

  • scale: the ability of a program to get faster as more cores are available
  • load balancing: how effectively work is distributed over the available cores
  • coarse grained parallelism: parallelizing routines at a high level
  • fine grained parallelism: parallelizing routines at a low level
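
To make the last two definitions concrete, here is a sketch of the same sum split both ways using the Threads package's parallel seq (the workload f and the chunk sizes are illustrative):

  f := x -> evalf( sin(x)^2 ):   # stand-in for real per-element work

  # coarse grained: 4 big chunks.  Low overhead, but if one chunk runs
  # long, the other cores sit idle (poor load balancing).
  coarse := `+`( Threads:-Seq( add( f(i), i = 25000*j + 1 .. 25000*(j + 1) ),
                               j = 0 .. 3 ) ):

  # fine grained: 1000 small chunks.  Work spreads evenly over the cores,
  # but every chunk pays the task-creation overhead.
  fine := `+`( Threads:-Seq( add( f(i), i = 100*j + 1 .. 100*(j + 1) ),
                             j = 0 .. 999 ) ):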

Consider the following example...
