DJKeenan

437 Reputation

9 Badges

18 years, 112 days

MaplePrimes Activity


These are replies submitted by DJKeenan

There is a nice spinning globe in the Application Center. See http://www.maplesoft.com/applications/app_center_view.aspx?AID=158. This includes maps of the continents, etc. Note that the direction of spin is reversed.
Other people have had the same problem. But for myself and most (all?) others, closing and relaunching Maple resolves things. Are you certain that Maple is fully closed before relaunching? A couple of times I have had problems because Windows XP does not always end tasks when it says it does. To check this, open the Windows Task Manager (via Ctrl-Alt-Del) and click on the Processes tab; then click on the “Image Name” column header, to put the names in alphabetical order; then check whether any tasks are running whose name begins with “maple”. It is regrettable that Microsoft failed to make even something as basic as ending a user task work reliably.
The L2 cache typically runs 3–4 times faster than RAM. The bigger gain is in latency, though: there the L2 cache is faster by a factor of roughly 20. So this could come into play if Maple's data is spread out over memory. The L2 cache, however, is usually only about 2 MB, and some of that is used by the operating system. How much can Maple process within less than 2 MB? How L2 and TLB misses interact with Maple is not something that I understand well. Has anyone done tests recently?
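In case anyone wants to experiment, here is a rough sketch of a timing test. The sizes and style are just illustrative, and Maple's interpreter overhead may well swamp any cache effect; both loops touch the same number of elements.

# Two float[8] Arrays: one small enough to fit in a ~2 MB L2 cache,
# one large enough to spill out to main memory. The stored values do not matter.
small := Array(1 .. 10^5, datatype = float[8]):   # about 0.8 MB
large := Array(1 .. 10^7, datatype = float[8]):   # about 80 MB
st := time():
for k to 100 do add(small[i], i = 1 .. 10^5) end do:   # 10^7 accesses, cache-friendly
time() - st;
st := time():
add(large[i], i = 1 .. 10^7):                          # 10^7 accesses, spread over memory
time() - st;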
Above is a complaint about “skimpy” Help pages and a suggestion for an expensive rewrite. I replied that, for me, skimpiness was not a serious issue. Here is some evidence that skimpy Help pages might even be better: http://www.sciencedaily.com/releases/2007/08/070801161511.htm The article is about monkeys, but it says that the same result has been well established in humans. The original paper presumably has references.
I think that by “cache” JacquesC means Translation Lookaside Buffer (TLB). I had thought that TLB misses were, in general, supposed to decrease speed by a factor of 10. The factors seem to be much larger here though.
Neither of your methods exactly reproduces curry. The first has g calling f; so if f later changes its definition, then so will g, which does not happen with curry. The second is subject to name conflicts: it does not work if the variable was previously assigned. The second can be made to work, though:

g := unapply(f(2,'y'), 'y')

This gets awkward when f has several arguments:

g := unapply(f(2,'y','z','s','t'), 'y','z','s','t')

versus

g := curry(f, 2)

The name is commonly used in combinatory logic. As Joe Riel says, it derives from the much-liked logician Haskell Curry.
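For concreteness, here is a small illustration; the function f below is just made up for the example.

f := (x, y, z, s, t) -> x + y + z + s + t:
g := curry(f, 2):                                             # fixes the first argument to 2
h := unapply(f(2, 'y', 'z', 's', 't'), 'y', 'z', 's', 't'):   # same effect, more typing
g(1, 1, 1, 1), h(1, 1, 1, 1);                                 # both give 6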
Those are pretty strong terms, “disgusting” and “horrendous”! I guess that if you are used to 1-D input, 2-D input can be a little strange at first. Maple, though, is behaving fine. As another example, suppose that you type “2^3+4” (without the quotes). In 2-D, as soon as you type the “^”, you are put into superscript; thereafter, everything that you type stays in the superscript, until you specify otherwise. Thus we get the following.

“2^3+4” in 2-D: the whole of 3+4 goes into the exponent, i.e. 2^(3+4) = 128
“2^3+4” in 1-D: 2^3 + 4 = 12

We have typed exactly the same keys in both cases, yet we get different results, and correctly so. As Schivnorr observes, you can prevent a character from being treated specially by escaping it.

“2\^3+4” in 2-D: 2^3 + 4 = 12
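For reference, here are the equivalent 1-D (Maple notation) inputs for the two readings:

2^(3+4);   # what the unescaped 2-D entry means: 128
2^3 + 4;   # what 1-D input, or the escaped 2-D entry, means: 12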
Suppose that f, g, h are functions. What does f(g+h) mean: f×(g+h), or f applied to g+h? (This assumes that f has an appropriately large domain.) As this illustrates, even knowing the types, you cannot unambiguously parse all expressions in mathematics. I suspect that what users really want is for Maple to have a psychic mode.

André Heck discusses something a little related in his Introduction to Maple [2003: p.250]: evaluate int(exp(-c*x^2), x=0..infinity). The answer given by Maple is too complicated and includes a limit; reevaluating with the assumption c>0 gives the expected (simple) answer. Heck comments thus: “… you may think of c as a positive real constant, but Maple does not start from this assumption!” Until the developers implement psychic mode, we will have issues.

I agree that having an unambiguous notation has strong advantages. Way back when, Ken Iverson proposed revising mathematical notation, or at least a subset of it. Modern notation for floor/ceiling is due to him; previously, brackets were commonly used. Little else made it into the mainstream. (Some of his ideas were implemented as a computer language, APL; originally, though, his plan was for inter-human communication.) I thought that Iverson's basic idea was really good, so good that I spent time working as a colleague of his on the plan. It is fair to say, then, that I am a really big supporter of unambiguous notation.

But mathematics has its conventional notation, and Maple should support this on output. Given that and WYSIWYG, then, as I said above, you have current Maple input, or something very similar.
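For anyone who wants to try Heck's example, something like the following should do; the assuming clause is just one way to supply the assumption (Heck may use assume instead).

int(exp(-c*x^2), x = 0 .. infinity);                  # general answer, involving a limit
int(exp(-c*x^2), x = 0 .. infinity) assuming c > 0;   # the simple answer, (1/2)*sqrt(Pi)/sqrt(c)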
Implicit multiplication is common in mathematics. And function application is commonly written as f(x), which makes z(x+y) syntactically ambiguous. Maple is just trying to be like mathematics. It is not Maple's fault if mathematics has inherent syntactic ambiguities, no explicit indication of function application, and so on. People have been happy with Maple's 2-D output. Now they are starting to use 2-D input, and are discovering the problem. But WYSIWYG input is naturally wanted by many users. And it is surely nice to be able to copy some output (Ctrl+C), paste it (Ctrl+V), and have the pasted input look exactly the same. In other words, if you agree that Maple output should look like common mathematics, then you should agree with allowing input that looks the same. The choice is either to keep common mathematical notation, with its problems, or to use some other notation that is unambiguous. I like the Maple choice.
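In 1-D notation the ambiguity does not arise, because multiplication must be explicit:

z(x + y);    # function application: z applied to x+y
z*(x + y);   # multiplication: z times (x+y)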
My view is that listdensityplot is very nice. By default, listdensityplot plots the input exactly; and then there is an option for smoothing. The smooth option is really good (and will likely be used most of the time). I would prefer that densityplot operated the same way. First, as a principle, I think graphics should represent their data as accurately as feasible; second, it would be consistent with listdensityplot; third, the new plot could cause some people to be misled.
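A small sketch of the two behaviours; the data here are made up, and I believe the option is spelled smooth.

with(plots):
data := [seq([seq(evalf(sin(i/2)*cos(j/2)), j = 1 .. 8)], i = 1 .. 8)]:
listdensityplot(data);                  # default: each data value shown as its own cell
listdensityplot(data, smooth = true);   # the smoothing option discussed above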