Maplesoft Blog

The Maplesoft blog contains posts coming from the heart of Maplesoft. Find out what is coming next in the world of Maple, and get the best tips and tricks from the Maple experts.

I’ve written in the past about how the push for more efficient, “greener” designs is driving innovation in important industries like auto, aerospace, and power.  Over the past few years, we’ve met countless engineers around the world who are working hard to transform conventional designs into highly refined optimal designs in tune with modern realities, and some are, of course, throwing out old ideas altogether and venturing into exotic power sources and radical platforms that used to be the stuff of science fiction. Last week I had one of the more interesting and enjoyable encounters with such a group of very talented green engineers.

Recently, I had to write a brief introduction to the precalculus topic "Vertical Translation of Graphs." Figure 1 (the original curve in black, its upward translate in red) says just about everything.

 

[Figure 1: 2-D plot of the two curves]

Figure 1   The red curve is the black curve vertically translated upward by one unit.

 

But is the issue all that trivial? Although the curves are vertically separated by one unit, they don't look uniformly spaced. The animation in Figure 2 helps overcome the optical illusion that makes it seem like the black curve bends towards the red curve, even though the curves are congruent.
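For anyone who wants to reproduce the picture and check the claim, here is a minimal sketch in Python with matplotlib (standing in for the Maple plot commands; the parabola is only an illustrative choice, not necessarily the curve used in the figures). It draws a curve and its vertical translate and marks the one-unit gap at a few points.

    import numpy as np
    import matplotlib.pyplot as plt

    # Illustrative curve only -- the actual f(x) from Figure 1 is not reproduced here.
    f = lambda x: 0.25 * x**2

    x = np.linspace(-3, 3, 400)
    plt.plot(x, f(x), color="black", label="f(x)")
    plt.plot(x, f(x) + 1, color="red", label="f(x) + 1")

    # The vertical gap is exactly 1 everywhere; the eye tends to judge the
    # perpendicular distance instead, which shrinks as the curves steepen.
    for xi in (-2.5, 0.0, 2.5):
        plt.plot([xi, xi], [f(xi), f(xi) + 1], linestyle=":", color="blue")

    plt.legend()
    plt.show()

Every one of the dotted segments has length exactly one, even where the eye insists the curves are pinching together.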

Ten years ago, I wrote an article for Dr. Dobb’s Journal on Analytical Computing. Many of the techniques I discussed there, like hybrid symbolic-numeric computing and automated code generation, have since revealed themselves as indispensable tools for engineering. Others, like exact computing, have yet to reveal their potential.

A lot has happened since that article, of course, and it’s about time I shared some thoughts about the current challenges. There are three areas that are top of mind for me and that I would like to discuss here: parallel computing, collaborative software, and user interface abstractions.

I’ve always been a big fan of languages and an even bigger fan of those who master multiple languages with relative ease. My late brother was a linguist with five or so distinct languages in his portfolio. Yes, there were many things that I thought I could do better, but that one gift of his was the thing I would remember him by as time went on.  The other day, my son Eric asked me for advice on what courses to take in Grade 10. He essentially had three electives and, as with most public schools in our country, there were countless choices, all of which sounded tantalizingly interesting and enriching. In the end he came to the conclusion (OK, I drove him to the conclusion) that French, German, and Computer Science would be the right choices.

I’m not a morning person. Well, that’s not entirely true: I am not particularly a morning person, but relative to my wife, Amy, I seem awfully crusty and curmudgeonly for about an hour after waking up. She, on the other hand, is definitely of the “up and at ‘em” variety. As such, I would like to credit coffee with contributing significantly to our happy marriage these last five years.

With so many data points, I can now reliably say that it is in everyone’s best interest for me to wake up first, or for us to wake up at the same time. If Amy gets up first, by the time I wake up she is reciting lists of “things I’d like to do today” as I groggily attempt to get that first double espresso to my lips. This is where something interesting happens: if I don’t perk up, Amy gets extra happy in an attempt to cheer me up (just give me time to wake up!). This implicitly suggests that she is using her mood as a forcing function for my mood. If I still don’t perk up, then things turn ugly, as I am clearly being insensitive to her generous efforts to cheer me up, and her mood drops. Conversely, if I do perk up, whether from the coffee or her cheerfulness, all is well.

It was 1992 when Mel Maron and I had just published the third edition of Numerical Analysis: A Practical Approach.  One of our editors made the suggestion that a Maple version of an advanced engineering math book should be written. For the next five years I steadfastly resisted the challenge.  Finally, in 1997 I signed a contract with Addison Wesley for a 1000-page AEM text, the manuscript due in two years. 

Rose-Hulman Institute of Technology, where I was teaching in the math department, is on the quarter system, and math faculty normally teach twelve contact hours.  Calculus classes are five hours per week, so for each calculus course taught, a faculty member picks up an extra hour.  To minimize prep time, I wrangled three identical courses, but they had to be calculus courses, so I was teaching fifteen contact hours and writing what turned out to be a 1200-page text.

After the first two quarters of academic year 1997, I needed to come up for air, so I set aside the project and spent several months putting together a Maple-based tensor calculus course. Happily, I even got to teach it in the following school year. One of the high points for me was animating a parallel vector field along a latitude on a sphere.

Around the time that Windows 98 was at its most popular, I used to dabble in programming Windows user interfaces with Visual C++ and the help of several thick MFC (Microsoft Foundation Class) manuals.  I wanted to create packaged (and admittedly simple) engineering applications. But for a chemical engineer with little background in Windows programming, combining math functionality with a user interface was time-consuming and cumbersome. (MFC can be arcane unless you’ve invested considerable time in learning the API.)

Later, I migrated to VB6.  Designing an interface was an order of magnitude easier, but I still had to roll many of my own math routines, or link to external libraries. While I may have been interested in the mathematical mechanics of adaptive step sizing in Runge-Kutta algorithms at an intellectual level, that was secondary to my goal at the time.
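To give a flavour of what “rolling your own” meant, here is a minimal sketch of the kind of routine involved: an adaptive step-size loop wrapped around a classical fourth-order Runge-Kutta step, written here in Python for readability. It uses simple step doubling for the error estimate rather than an embedded pair, and it is not the code I actually wrote back then.

    import numpy as np

    def rk4_step(f, t, y, h):
        # One classical fourth-order Runge-Kutta step for y' = f(t, y).
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    def rk4_adaptive(f, t0, y0, t_end, h=0.1, tol=1e-6):
        # Step doubling: compare one step of size h against two steps of size
        # h/2 and use the difference as a local error estimate.
        t, y = t0, np.atleast_1d(np.asarray(y0, dtype=float))
        while t < t_end:
            h = min(h, t_end - t)
            y_big = rk4_step(f, t, y, h)
            y_half = rk4_step(f, t, y, h / 2)
            y_small = rk4_step(f, t + h / 2, y_half, h / 2)
            err = np.max(np.abs(y_small - y_big))
            if err <= tol:
                t, y = t + h, y_small            # accept the step
            # RK4 local error scales like h**5, hence the 1/5 exponent.
            h *= min(4.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.2))
        return y

    # Quick check: y' = -y with y(0) = 1, so y(1) should be about exp(-1) = 0.3679.
    print(rk4_adaptive(lambda t, y: -y, 0.0, 1.0, 1.0))

Getting something like this right, and then validating it, was exactly the sort of detour that kept pulling me away from the application I actually wanted to build.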

In his last blog post, “Watching the Dawn”, Fred Kern comments on the life of an engineer before the realization that symbolic approaches to computing can get you better results faster. The analogy is, of course, that prior to this revelation we were in some sense in the dark. I’d like to add my two cents’ worth, as I was indeed one of those engineers lurking in the dark for many years.

Flash back about 20 or so years.  I was a poor graduate student, and to feed myself I began doing small jobs for this new company called Waterloo Maple Software (which eventually became Maplesoft).  Mostly, my work was to develop small applications or demonstrations with an engineering focus.  I remember with great fondness the look of shock and awe that would come over my engineering colleagues’ faces when I showed them how I computed symbolic matrix products or performed a cumbersome simplification in seconds. For me, it was an obvious thing to do because I had access to the technology and I didn’t know any better. But for them, it seemed like pure voodoo. In reality, though, the common themes that I somehow stumbled upon during those early presentations would later reappear in much richer, more exciting forms as core themes in the eventual “symbolic sunrise” twenty years later.
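For anyone who never saw that “voodoo” first-hand, the flavour of those demonstrations is easy to reproduce today. Here is a tiny illustrative sketch using Python’s sympy rather than Maple.

    import sympy as sp

    a, b, c, d, theta = sp.symbols("a b c d theta")

    # A symbolic matrix product: the entries stay as expressions, not numbers.
    A = sp.Matrix([[a, b], [c, d]])
    R = sp.Matrix([[sp.cos(theta), -sp.sin(theta)],
                   [sp.sin(theta),  sp.cos(theta)]])
    print(R * A)

    # And a "cumbersome simplification" done in a blink.
    expr = (sp.sin(theta)**2 + sp.cos(theta)**2) * (a + b)**2 - (a**2 + 2*a*b + b**2)
    print(sp.simplify(expr))   # prints 0

Doing either of these by hand on a whiteboard is exactly the kind of drudgery that made the demos feel like magic.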

I’ve flown across the oceans hundreds of times, but anyone who has done it even once has experienced the beautiful view of a dawn or a sunset.  That is, if you weren’t asleep.

I’ve had the good fortune to witness other dawns and sunsets – the dawn of new technologies, and the sunset of others.  I’m old enough to remember the dawn of ATMs, fax machines, the internet, wireless technology, transistors, personal computers and several other things that are basic to our lives today.  I actually contributed in a small way to at least two of those “dawns”.

The truth is that most technology dawns are more obvious in the “afternoon” – a few years after the dawn.  When it’s happening, it often seems like a complicated and possibly interesting thing, but the full potential impact isn’t always clear (at least to me).

I’m quite sure that I’m witnessing another new dawn today.  It’s the dawn of symbolic computing technology revolutionizing the world of engineering.

A few months ago, I needed to prepare for a customer on-site training session. As part of the request for topics to be covered during the training, my contact there wanted to talk about contact! Contact models are important for multi-body systems because they describe the interactions between objects.  An important example of a contact model is a tire component that interacts with the road. In this case, the training topic requested was a more generic question: “how to create contact models in MapleSim”.  There are, of course, lots of examples available within MapleSim that already contain contact models. Two particular examples came to mind: 1) the bouncing ball; and 2) the catapult. However, this being a training session, simply presenting the example models would not accomplish the purpose of the session. So I broadened my scope and turned my attention to the question: “how does one model contact in general?”
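By way of a sketch of the general idea (and emphatically not the MapleSim components themselves): the most common general-purpose approach is a penalty, or spring-damper, contact force, in which penetration between two bodies generates a restoring force proportional to the penetration depth and its rate. The little Python simulation below drops a ball onto a floor using that idea; the stiffness and damping values are illustrative, not numbers taken from the training models.

    # Penalty (spring-damper) contact: once the ball penetrates the floor,
    # push back with a force proportional to penetration depth and speed.
    k_contact = 1.0e4          # contact stiffness (N/m), illustrative value
    c_contact = 5.0            # contact damping (N*s/m), illustrative value
    m, g, r = 1.0, 9.81, 0.1   # mass (kg), gravity (m/s^2), ball radius (m)

    def contact_force(y, v):
        penetration = r - y                      # the floor sits at height 0
        if penetration <= 0.0:
            return 0.0                           # no contact, no force
        # The spring pushes the ball out, the damper removes energy;
        # clamp at zero so contact never pulls the ball down.
        return max(0.0, k_contact * penetration - c_contact * v)

    # Semi-implicit Euler simulation of a ball dropped from 1 m.
    y, v, dt = 1.0, 0.0, 1.0e-4
    heights = []
    for _ in range(int(3.0 / dt)):               # simulate 3 seconds
        a = -g + contact_force(y, v) / m
        v += a * dt
        y += v * dt
        heights.append(y)

    print("lowest point reached:", min(heights))
    print("height after 3 s:", heights[-1])

A real contact component also has to deal with friction, restitution, and how the solver handles the stiff force during impact; the point of the sketch is only the basic force law.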

Undergraduate engineering and science consists of learning various rules and laws that govern the domains of interest. For me, it was Maxwell’s Equations for electromagnetics, the Navier-Stokes equation for acoustics, the Rayleigh criterion for imaging, the speed of light, et cetera ad nauseam. What is frequently missed or neglected in teaching and in practice is how these rules and limits are simply the boundaries of the game – endpoints on a spectrum of possibilities. That’s why a recent headline caught my attention: “Computers to Get Faster Only for 75 More Years”. I find it hard to believe that humans a thousand years from now will be commemorating 2084 as “The Year Computers Stopped Getting Faster”. After reading the research paper from which this headline arose, I was reminded that innovative science doesn’t set limits; it uses them as tools. Since this is precisely what we do in Applications Engineering at Maplesoft, I thought it would be worth looking into a little further.

If you were to stroll into the Application Engineering office at Maplesoft, you might be led to believe that we subsist on nothing but donuts, pizza, chocolate and coffee.  It’s even worse at this time of year when we have many more opportunities to over-consume. I try to have a balanced diet, but there are too many temptations scattered around the office (including candy at the office entrance – our receptionist, Walli, expects me at 3pm each day without fail). It doesn’t help that a virtually limitless supply of donuts is only a three-minute drive away.

One of the most common foods prepared at this time of the year, and arguably the most common kitchen disaster, is turkey. 

There are several employees here at Maplesoft (myself included) who are full-fledged foodies:  not only do we enjoy eating good food, but we enjoy preparing it with all our cool kitchen gadgets.  Just as mathies may compare calculators, we compare chef’s knives.  So being a foodie and a mathie, I was quite intrigued when a co-worker sent me an article that found the optimal cooking temperature for a turkey.

For those of you who have had to take on the task of preparing a turkey, you’re probably familiar with this basic rule of thumb (thousands of burnt turkeys must have contributed to this rule): preheat the oven to 400°F, and then cook the bird for 20 min/lb at 350°F.  Essentially what this rule means is that the time required to cook a turkey is directly proportional to its mass.  We know that this cannot be quite right, because some people who adhere to this rule end up with a turkey that is moist and tender, and others end up with a turkey that is dry and tough.  If we take more variables into account, like the size of the turkey (l), oven temperature (T), average density (ρ) and thermal conductivity (κ), we can express the cooking time as a function of these variables.  We can then do a bit of dimensional analysis to evaluate the accuracy of the traditional rule of thumb.  By using dimensional analysis, we can formulate a relation between a set of known variables, even though we are not sure of the relationship between them. The immediate advantage of this procedure is that less experimentation is required to establish a relationship between the variables, allowing us to take given data and see how it fits with the equations that come out of the analysis.  I won’t go into full detail here, but I’ve created a Maple worksheet that shows the calculations used in the analysis.  The important part comes from the graphs that are generated:

The black dots represent various cooking times for various sizes of birds.  The red line is the old rule of thumb, which you can clearly see is not very reliable.  The green line represents the new rule of thumb, which falls in line much better.  So, what is the magical formula that you should use?  It is the one that comes out of the analysis in the worksheet, taking the weight in lbs and giving the cooking time in minutes.  Now I will be honest, I haven’t put this to the test yet, but I’ll be sure to try it out this Christmas.
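For readers who want to see where a better-than-linear rule could come from without opening the worksheet, the dimensional-analysis step can be sketched roughly as follows. Writing α for the thermal diffusivity of the meat (κ divided by ρ times the specific heat, the specific heat being an ingredient I am adding here purely for the bookkeeping), heat has to diffuse a distance comparable to the bird’s size, so

    t \sim \frac{l^{2}}{\alpha}, \qquad
    m \sim \rho\, l^{3} \;\Rightarrow\; l \sim \left(\frac{m}{\rho}\right)^{1/3}, \qquad
    \text{and therefore} \quad t \sim \frac{1}{\alpha}\left(\frac{m}{\rho}\right)^{2/3}.

In other words, cooking time should grow roughly like the two-thirds power of the weight, which is why a straight minutes-per-pound rule cannot fit both small and large birds at once; the constant in front is the part that has to come from data, presumably what the green line above encodes.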

Resellers buy products from a manufacturer, and sell to consumers.  They are an important factor in many industries, including the one in which I work.  Maplesoft operates through a network of resellers throughout the world (apart from North America and a few other territories).  Some may suspect I’m somewhat biased in promoting the importance of resellers; I spent seven years working for Adept Scientific, Maplesoft’s partner in the UK.

The largest resellers are based in larger, better developed markets with a strong manufacturing and research base (like Cybernet and Scientific Computers in Japan and Germany).  Conversely, many smaller resellers, like Multi-On and Czech Software First in Mexico and the Czech Republic, operate in markets with significant growth potential.

Although the digital world has provided me with a wonderful career and countless enriching experiences, in my heart I will always have a special passion for the analog world: vinyl LPs, multiple print sets of the Encyclopædia Britannica, a manual-wind watch, fountain pens, film cameras and a darkroom, and carbureted motorcycles all have privileged spots in my house. With digital equivalents being so much more accurate, faster, convenient, and cheaper, what could possibly be the appeal of these ancient artifacts?
