Applications, Examples and Libraries

Share your work here

The Saturday edition of our local newspaper (Waterloo Region Record) carries, as part of The PUZZLE Corner column, a weekly puzzle "STICKELERS" by Terry Stickels. Back on December 13, 2014, the puzzle was:

What number comes next?

1   4   18   96   600   4320   ?

The solution given was the number 35280, obtained by setting k = 7 in the general term k⋅k!.

On September 5, 2015, the column contained the puzzle:

What number comes next?

2  3  3  5  10  13  39  43  172  177  ?

The proposed solution was the number 885, obtained as a10 from the recursion

a[2k-1] = a[2k-2] + k,   a[2k] = k*a[2k-1],   k = 1, 2, 3, ...,

where a0 = 2 (each term is obtained by alternately adding k and then multiplying by k).

As a youngster, one of my uncles delighted in teasing me with a similar question for the sequence 36, 9, 50, 55, 62, 71, 79, 18, 20. Ignoring the fact that there is a missing entry between 9 and 50, the next member of the sequence is "Bay Parkway," which is what 22nd Avenue is actually called in the Brooklyn neighborhood of my youth. These are subway stops on what was then called the West End line of the subway that went out to Stillwell Avenue in Coney Island.

Armed with the skepticism inspired by this provincial chestnut, I looked at both of these puzzles with the attitude that the "next number" could be anything I chose it to be. After all, a sequence is a mapping from the (nonnegative) integers to the reals, and unless the mapping is completely specified, the function values are not well defined.

Indeed, for the first puzzle, the polynomial f(x) interpolating the points


(1, 1), (2, 4), (3, 18), (4, 96), (5, 600), (6, 4320)

reproduces the first six members of the given sequence, and gives 18593 (not 35280) for f(7). In other words, the pattern k⋅k! is not a unique representation of the sequence, given just the first six members. The worksheet NextNumber derives the interpolating polynomial f and establishes that f(n) is an integer for every nonzero integer n.
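
For readers without the attached worksheet, a minimal Maple sketch of this check (my own, not the NextNumber code) is:

with(CurveFitting):
X := [1, 2, 3, 4, 5, 6]:
Y := [1, 4, 18, 96, 600, 4320]:           # the six given terms, term k placed at x = k
f := PolynomialInterpolation(X, Y, x):    # the degree-5 interpolating polynomial
eval(f, x = 7);                           # returns 18593, not 7*7! = 35280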

Likewise, for the second puzzle, the polynomial g(x) interpolating the points

(1, 2) ,(2, 3) ,(3, 3) ,(4, 5) ,(5, 10) ,(6, 13) ,(7, 39) ,(8, 43), (9, 172) ,(10, 177)

reproduces the first ten members of the given sequence, and gives -7331 (not 885) for g(11). Once again, the pattern proposed as the "solution" is not unique. The worksheet NextNumber also contains g(x) and a proof that g(n) is an integer for every integer n.

The upshot of these observations is that without some guarantee of uniqueness, questions like "what is the next number" are meaningless. It would be far better to pose such challenges with the words "Find a pattern for the given members of the following sequence" and warn that the function capturing that pattern might not be unique.

I leave it to the interested reader to prove or disprove the following conjecture: Interpolate the first n terms of either sequence. The interpolating polynomial p will reproduce these n terms, but for k>n, p(k) will differ from the corresponding member of the sequence determined by the stated patterns. (Results of limited numerical experiments are consistent with the truth of this conjecture.)
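
For the first sequence, such an experiment can be sketched as follows (assuming, as above, that term k of the stated pattern is k*k!):

with(CurveFitting):
for n from 3 to 6 do
    p := PolynomialInterpolation([seq(k, k = 1 .. n)], [seq(k*k!, k = 1 .. n)], x):
    printf("n = %d:  p(%d) = %d,  pattern gives %d\n",
           n, n + 1, eval(p, x = n + 1), (n + 1)*(n + 1)!):
end do: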

Attached: NextNumber.mw

Bruce Jenkins is President of Ora Research, an engineering research and advisory service. Maplesoft commissioned him to examine how systems-driven engineering practices are being integrated into the early stages of product development, the results of which are available in a free whitepaper entitled System-Level Physical Modeling and Simulation. In this series of blog posts, Mr. Jenkins discusses the results of his research.

This is the third entry in the series.

My last post, System-level physical modeling and simulation: Adoption drivers vs. adoption constraints, described my firm’s research project to investigate the contemporary state of adoption and application of systems modeling software technologies, and their attendant methods and work processes, in the engineering design of off-highway equipment and mining machinery.

In this project, I interviewed some half-dozen expert practitioners at leading manufacturers, including both engineering management and senior discipline leads, to identify key technological factors as well as business and competitive issues driving adoption and use of systems modeling at current levels.

After identifying present-day adoption drivers as well as current constraints on adoption, I finally sought to learn practitioners’ visions, strategies and best practices for accelerating and institutionalizing the implementation and usage of systems modeling tools and practices in their organizations.

I was strongly encouraged to find a wealth of avenues and opportunities for exploiting enterprise business drivers, current industry disruptions, and related internal realignments and change-management initiatives to help drive introduction—or proliferation—of these technologies and their associated new ways of working into engineering organizations:

  • Systems modeling essential to compete by creating differentiated products
  • Mechatronics revolution in off-highway equipment
  • Industry downturns and disruptions create opportunities for disruptive innovation
    • Opportunities to leverage change in underlying industry competitive dynamics
    • Mining industry down-cycle creates opportunity to innovate, find new ways of working
    • Some manufacturers are using current down-cycle in mining industry to change their product innovation strategy
  • Strategies of manufacturers pursuing disruptive innovation
    • Best odds are in companies with deep culture of continually inculcating new skills into their people, and rethinking methods and work processes
    • Some managements willing to take radical corporate measures to replace old-thinking engineering staff with “systems thinkers”
    • Downsizing in off-highway equipment manufacturers may push them to seek more systems-level value-add from their component suppliers
  • New technology opportunities inside manufacturers ready to move more deeply into systems modeling
    • Opportunities in new/emerging industries/companies without legacy investments in systems modeling tools and libraries
    • Best practice for introducing systems modeling: start with work process, then bring in software
    • Capitalizing on engineering’s leeway and autonomy in specifying systems modeling software compared with enterprise-standard CAD/PLM tools
  • Systems modeling technology advances anticipated by practitioner advocates
    • Improving software integration, interoperability, data interchange
    • Improving co-simulation across domain tools
    • Better, more complete FMI (Functional Mock-up Interface) implementation/compliance
    • Higher-fidelity versions of FMI or similar

The white paper detailing the findings of this research is intended to offer guidance and advice for implementing change, as well as documentation to help convince colleagues, management and partners that new ways of working exist, and that the software technologies to support and enable them are available, accessible, and delivering payback and business advantage to forward-thinking engineering organizations today.

My hope is that this research finds utility as a practical, actionable aid for engineers and engineering management in helping their organizations to adopt and implement—or to strengthen and deepen—a simulation-led, systems-driven approach to product development.

You can download the full white paper reporting our findings here.

Bruce Jenkins, Ora Research
oraresearch.com

The award for most effective built-in operator code goes to the people who wrote the code for the union and intersect set operations in Maple. A very simple but important example of one of their applications is below.

 

When I work with algorithms, probably one of my primary ports of enquiry (figuratively, jeez, Skynet) is to set up an if statement that terminates the loop once the operation performed by any further cycle is idempotent. This doesn't always mean the output is convergent in every case, but it allows you to minimize the amount of time the CPU needs to collect data (i.e., it stops at the point where a further iteration would produce the same set as the previous loop).
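
To illustrate the idea in isolation (a minimal sketch with a made-up update rule, not the worksheet code that follows):

S := {1, 2, 3}:
step := X -> X union map(x -> x mod 7, X +~ 3):   # hypothetical set-valued update rule
do
    Snew := step(S):
    if Snew = S then break end if:                # a further cycle would be idempotent, so stop
    S := Snew:
end do:
S;                                                # {0, 1, 2, 3, 4, 5, 6}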

 

 

Y := proc (X)
    local N, S1, `&Sopf;`;
    if X <> `union`(X, S1[N]) then
        N := (rand(1 .. NrANGE))();
        S1[N] := {K[1](4+N), K[1](5+N), K[1](6+N), K[1](7+N), K[1](8+N), K[1](9+N),
                  K[1](10+N), K[1](11+N), K[1](12+N), K[1](13+N), K[1](14+N), K[1](15+N)};
        `&Sopf;` := `union`(X, S1[N])
    else
    end if
end proc

proc (X) local N, S1, `&Sopf;`; if X <> `union`(X, S1[N]) then N := (rand(1 .. NrANGE))(); S1[N] := {K[1](4+N), K[1](5+N), K[1](6+N), K[1](7+N), K[1](8+N), K[1](9+N), K[1](10+N), K[1](11+N), K[1](12+N), K[1](13+N), K[1](14+N), K[1](15+N)}; `&Sopf;` := `union`(X, S1[N]) else  end if end proc

(1)


 

Download idempotency.mw

 

 

The presentation below is on undergrad Quantum Mechanics. Tackling this topic within a computer algebra worksheet in the way it's done below, however, is an exciting novelty and illustrates well the level of abstraction that is now possible using the Physics package.

 

Quantum Mechanics: Schrödinger vs Heisenberg picture

Pascal Szriftgiser (1) and Edgardo S. Cheb-Terrab (2)

(1) Laboratoire PhLAM, UMR CNRS 8523, Université Lille 1, F-59655, France

(2) Maplesoft

 

Within the Schrödinger picture of Quantum Mechanics, the time evolution of the state of a system, represented by a Ket "| psi(t) >", is determined by Schrödinger's equation:

I*`&hbar;`*(diff(Ket(psi, t), t)) = H*Ket(psi, t)

where H, the Hamiltonian, as well as the quantum operators O__S representing observable quantities, are all time-independent.

 

Within the Heisenberg picture, a Ket Ket(psi, 0) representing the state of the system does not evolve with time, but the operators O__H(t) representing observable quantities, and through them the Hamiltonian H, do.

 

Problem: Departing from Schrödinger's equation,

  

a) Show that the expected value of a physical observable in Schrödinger's and Heisenberg's representations is the same, i.e. that

Bra(psi, t)*O__S*Ket(psi, t) = Bra(psi, 0)*O__H(t)*Ket(psi, 0)

  

b) Show that the evolution equation of an observable O__H in Heisenberg's picture, equivalent to Schrödinger's equation,  is given by:

diff(O__H(t), t) = (-I*Physics:-Commutator(O__H(t), H))*(1/`&hbar;`)

where in the right-hand-side we see the commutator of O__H with the Hamiltonian of the system.

Solution: Let O__S and O__H respectively be operators representing one and the same observable quantity in Schrödinger's and Heisenberg's pictures, and H be the operator representing the Hamiltonian of a physical system. All of these operators are Hermitian. So we start by setting up the framework for this problem accordingly, including that the time t and Planck's constant are real. To automatically combine powers of the same base (happening frequently in what follows) we also set combinepowersofsamebase = true. The following input/output was obtained using the latest Physics update (Aug/31/2016) distributed on the Maplesoft R&D Physics webpage.

with(Physics):

Physics:-Setup(hermitianoperators = {H, O__H, O__S}, realobjects = {`&hbar;`, t}, combinepowersofsamebase = true, mathematicalnotation = true)

[combinepowersofsamebase = true, hermitianoperators = {H, O__H, O__S}, mathematicalnotation = true, realobjects = {`&hbar;`, t}]

(1)

Let's consider Schrödinger's equation

I*`&hbar;`*(diff(Ket(psi, t), t)) = H*Ket(psi, t)

I*`&hbar;`*(diff(Physics:-Ket(psi, t), t)) = Physics:-`*`(H, Physics:-Ket(psi, t))

(2)

Now, H is time-independent, so (2) can be formally solved: psi(t) is obtained from the solution psi(0) at time t = 0, as follows:

T := exp(-I*H*t/`&hbar;`)

exp(-I*t*H/`&hbar;`)

(3)

Ket(psi, t) = T*Ket(psi, 0)

Physics:-Ket(psi, t) = Physics:-`*`(exp(-I*t*H/`&hbar;`), Physics:-Ket(psi, 0))

(4)

To check that (4) is a solution of (2), substitute it in (2):

eval(I*`&hbar;`*(diff(Physics[Ket](psi, t), t)) = Physics[`*`](H, Physics[Ket](psi, t)), Physics[Ket](psi, t) = Physics[`*`](exp(-I*H*t/`&hbar;`), Physics[Ket](psi, 0)))

Physics:-`*`(H, exp(-I*t*H/`&hbar;`), Physics:-Ket(psi, 0)) = Physics:-`*`(H, exp(-I*t*H/`&hbar;`), Physics:-Ket(psi, 0))

(5)

Next, to relate the Schrödinger and Heisenberg representations of an Hermitian operator O representing an observable physical quantity, recall that the value expected for this quantity at time t during a measurement is given by the mean value of the corresponding operator (i.e., bracketing it with the state of the system Ket(psi, t)).

So let O__S be an observable in the Schrödinger picture: its mean value is obtained by bracketing the operator with equation (4):

Dagger(Ket(psi, t) = Physics[`*`](exp(-I*H*t/`&hbar;`), Ket(psi, 0)))*O__S*(Ket(psi, t) = Physics[`*`](exp(-I*H*t/`&hbar;`), Ket(psi, 0)))

Physics:-`*`(Physics:-Bra(psi, t), O__S, Physics:-Ket(psi, t)) = Physics:-`*`(Physics:-Bra(psi, 0), exp(I*t*H/`&hbar;`), O__S, exp(-I*t*H/`&hbar;`), Physics:-Ket(psi, 0))

(6)

The composed operator within the bracket on the right-hand-side is the operator O in Heisenberg's picture, O__H(t)

Dagger(T)*O__S*T = O__H(t)

Physics:-`*`(exp(I*t*H/`&hbar;`), O__S, exp(-I*t*H/`&hbar;`)) = O__H(t)

(7)

Analogously, inverting this equation,

(T*(Physics[`*`](exp(I*H*t/`&hbar;`), O__S, exp(-I*H*t/`&hbar;`)) = O__H(t)))*Dagger(T)

O__S = Physics:-`*`(exp(-I*t*H/`&hbar;`), O__H(t), exp(I*t*H/`&hbar;`))

(8)

As an aside to the problem, we note from these two equations, and since the operator T = exp((-I*H*t)*(1/`&hbar;`)) is unitary (because H is Hermitian), that the switch between Schrödinger's and Heisenberg's pictures is accomplished through a unitary transformation.
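
As a quick aside check (my own addition, not part of the original worksheet), the unitarity of T can be verified directly in the same setup:

Simplify(Dagger(T) * T);
# With H Hermitian and t, `&hbar;` real, this is expected to return 1, confirming that T is unitary.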

 

Inserting now this value of O__S from (8) in the right-hand-side of (6), we get the answer to item a)

lhs(Physics[`*`](Bra(psi, t), O__S, Ket(psi, t)) = Physics[`*`](Bra(psi, 0), exp(I*H*t/`&hbar;`), O__S, exp(-I*H*t/`&hbar;`), Ket(psi, 0))) = eval(rhs(Physics[`*`](Bra(psi, t), O__S, Ket(psi, t)) = Physics[`*`](Bra(psi, 0), exp(I*H*t/`&hbar;`), O__S, exp(-I*H*t/`&hbar;`), Ket(psi, 0))), O__S = Physics[`*`](exp(-I*H*t/`&hbar;`), O__H(t), exp(I*H*t/`&hbar;`)))

Physics:-`*`(Physics:-Bra(psi, t), O__S, Physics:-Ket(psi, t)) = Physics:-`*`(Physics:-Bra(psi, 0), O__H(t), Physics:-Ket(psi, 0))

(9)

where, on the left-hand-side, the Ket representing the state of the system is evolving with time (Schrödinger's picture), while on the right-hand-side the Ket Ket(psi, 0) is constant and it is O__H(t), the operator representing an observable physical quantity, that evolves with time (Heisenberg picture). As expected, both pictures result in the same expected value for the physical quantity represented by O.

 

To complete item b), the derivation of the evolution equation for O__H(t), we take the time derivative of equation (7):

diff((rhs = lhs)(Physics[`*`](exp(I*H*t/`&hbar;`), O__S, exp(-I*H*t/`&hbar;`)) = O__H(t)), t)

diff(O__H(t), t) = I*Physics:-`*`(H, exp(I*t*H/`&hbar;`), O__S, exp(-I*t*H/`&hbar;`))/`&hbar;`-I*Physics:-`*`(exp(I*t*H/`&hbar;`), O__S, H, exp(-I*t*H/`&hbar;`))/`&hbar;`

(10)

To rewrite this equation in terms of the commutator  Physics:-Commutator(O__S, H), it suffices to re-order the product  H  exp(I*H*t/`&hbar;`) placing the exponential first:

Library:-SortProducts(diff(O__H(t), t) = I*Physics[`*`](H, exp(I*H*t/`&hbar;`), O__S, exp(-I*H*t/`&hbar;`))/`&hbar;`-I*Physics[`*`](exp(I*H*t/`&hbar;`), O__S, H, exp(-I*H*t/`&hbar;`))/`&hbar;`, [exp(I*H*t/`&hbar;`), H], usecommutator)

diff(O__H(t), t) = I*Physics:-`*`(exp(I*t*H/`&hbar;`), H, O__S, exp(-I*t*H/`&hbar;`))/`&hbar;`-I*Physics:-`*`(exp(I*t*H/`&hbar;`), Physics:-`*`(H, O__S)+Physics:-Commutator(O__S, H), exp(-I*t*H/`&hbar;`))/`&hbar;`

(11)

Normal(diff(O__H(t), t) = I*Physics[`*`](exp(I*H*t/`&hbar;`), H, O__S, exp(-I*H*t/`&hbar;`))/`&hbar;`-I*Physics[`*`](exp(I*H*t/`&hbar;`), Physics[`*`](H, O__S)+Physics[Commutator](O__S, H), exp(-I*H*t/`&hbar;`))/`&hbar;`)

diff(O__H(t), t) = -I*Physics:-`*`(exp(I*t*H/`&hbar;`), Physics:-Commutator(O__S, H), exp(-I*t*H/`&hbar;`))/`&hbar;`

(12)

Finally, to express the right-hand-side in terms of Physics:-Commutator(O__H(t), H) instead of Physics:-Commutator(O__S, H), we take the commutator of equation (8) with the Hamiltonian

Commutator(O__S = Physics[`*`](exp(-I*H*t/`&hbar;`), O__H(t), exp(I*H*t/`&hbar;`)), H)

Physics:-Commutator(O__S, H) = Physics:-`*`(exp(-I*t*H/`&hbar;`), Physics:-Commutator(O__H(t), H), exp(I*t*H/`&hbar;`))

(13)

Combining these two expressions, we arrive at the expected result for b), the evolution equation of a given observable O__H in Heisenberg's picture

eval(diff(O__H(t), t) = -I*Physics[`*`](exp(I*H*t/`&hbar;`), Physics[Commutator](O__S, H), exp(-I*H*t/`&hbar;`))/`&hbar;`, Physics[Commutator](O__S, H) = Physics[`*`](exp(-I*H*t/`&hbar;`), Physics[Commutator](O__H(t), H), exp(I*H*t/`&hbar;`)))

diff(O__H(t), t) = -I*Physics:-Commutator(O__H(t), H)/`&hbar;`

(14)


Download:    Schrodinger_vs_Heisenberg_picture.mw     Schrodinger_vs_Heisenberg_picture.pdf

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft
Editor, Computer Physics Communications

Hanze University of Applied Sciences Groningen has created 105 questions related to engineering mechanics for structures (statics/construction). These 105 randomised questions with graphics are used for first year students in civil engineering, structural engineering, architectural engineering and building engineering.

The topics of the course modules are as follows:
- Force Vectors (10)
- Support Reactions (26)
- Internal Forces (31)
- Stress (21)
- Trusses (17)

All questions have a translation button which makes it easy to switch from English to any other language. The questions are first shown in Dutch [NL], but clicking [UK] in the Preview displays the English version. The text can easily be edited and changed into the language of choice in the Maple T.A. question editor. Only the button needs adjustment in the question source.

60 questions are “exercises”, which means that they have extended feedback. The remaining 45 questions are “tests”, meaning they include no feedback.

Cone.zip - construction exercises (60 questions)

Cont.zip - construction tests (45 questions)

Jonny
Maplesoft Product Manager, Online Education Products

Bruce Jenkins is President of Ora Research, an engineering research and advisory service. Maplesoft commissioned him to examine how systems-driven engineering practices are being integrated into the early stages of product development, the results of which are available in a free whitepaper entitled System-Level Physical Modeling and Simulation. In this series of blog posts, Mr. Jenkins discusses the results of his research.

This is the second entry in the series.

My last post, Strategies for accelerating the move to simulation-led, systems-driven engineering, described my firm’s research project to investigate the contemporary state of adoption and application of systems modeling software technologies, and their attendant methods and work processes, in the engineering design of off-highway equipment and mining machinery.

Adoption drivers

In this project, I conducted in-depth, structured but open-ended interviews with some half-dozen expert practitioners at leading manufacturers, including both engineering management and senior discipline leads. These interviews identified the following key technological factors as well as business and competitive issues driving adoption and use of systems modeling tools and methods at current levels:

  • Fuel economy and emissions mandates, powertrain electrification and autonomous operation requirements
  • Software’s ability to drive down product cost of ownership and delivery times
  • Traditional development processes often fail to surface system-level issues until fabrication or assembly, or even until operational deployment
  • Detailed analysis tools such as FEA and CFD focus on behaviors at the component level, and are not optimal for studies of the complete system
  • Engineering departments/groups enjoy greater freedom in systems modeling software selection and purchase decisions than in enterprise-controlled CAD/PDM/PLM decisions
  • Good C/VP-level visibility of systems modeling tools, especially in off-highway equipment

Adoption constraints

At the same time, there was widespread agreement among all the experts interviewed that these tools and methods are not being brought to bear with anywhere near the breadth or depth that practitioner advocates would like, and that they believe would be greatly beneficial to their organizations and industries.

In probing why this is, the interviews revealed an array of factors constraining broader adoption at present. These range from legacy engineering culture issues, through human resource limitations and constraints imposed by business models and corporate cultures, to entrenched shortcomings in how long-established systems modeling software toolsets have been deployed and applied to the product development process:

  • Legacy engineering culture constraints
    • Conservatism of mining machinery product development culture
    • Engineering practices in long-standardized industries grounded in handbook formulas and rules of thumb
    • Perceived lack of time in schedule to do systems modeling
  • Human resource constraints
    • Low availability of engineers with systems modeling skills
    • Shortage of engineers trained in systems thinking
    • Legacy engineering processes compound shortage of systems-thinking engineers
    • Industry downturns put constraints on professional staff development
  • Business-model and corporate-culture constraints
    • Culture of seeking to mitigate cost and risk by staying with legacy designs instead of advancing and innovating the product
    • Corporate awareness of need to innovate in mining machinery gets stifled at engineering level
    • Low C/VP-level visibility of systems modeling tools in mining machinery
  • Engineering organization constraints on innovating/modernizing their systems modeling technology infrastructure
    • Power users wedded to legacy systems modeling tools
    • Weak integration at many/most points of the engineering digital toolset chain
    • Implementing systems modeling software as a sales configuration/costing aid seen as taking too much time

My next post will detail practitioners’ visions, strategies and best practices for accelerating and institutionalizing the implementation and usage of systems modeling tools and practices in their organizations.

You can download the full white paper reporting our findings here.

Bruce Jenkins, Ora Research
oraresearch.com

Hi,

In the following example I introduce some commutation rules that are standard in Quantum Mechanics. A major feature of the Maple Physics package is that it is possible to define tensors as quantum operators. This is of great interest because powerful tensor simplification rules can then be used in Quantum Mechanics. For an example, see the commutation rules of the components of the angular momentum operator in ?Physics,Examples. Here, I focus on a possible issue: when destroying all quantum operators, the pre-defined commutation rules still apply, which should not be the case. As shown in the post, this is linked to the fact that these operators are also tensors.
 


 

Physics:-Version()[2]

`2016, August 16, 18:56 hours`

(1)


restart; with(Physics); interface(imaginaryunit = I)

First, set a 3D Euclidean space

Setup(mathematicalnotation = true, dimension = 3, signature = `+`, spacetimeindices = lowercaselatin, quiet)

[dimension = 3, mathematicalnotation = true, signature = `+ + +`, spacetimeindices = lowercaselatin]

(2)

Define two rank 1 tensors

Define(x[k], p[k])

`Defined objects with tensor properties`

 

{Physics:-Dgamma[a], Physics:-Psigma[a], Physics:-d_[a], Physics:-g_[a, b], p[k], x[k], Physics:-KroneckerDelta[a, b], Physics:-LeviCivita[a, b, c]}

(3)

Now, further define these tensors as quantum operators and give the usual commutation rules between the position and momentum operators (Quantum Mechanics).

Setup(hermitianoperators = {p, x}, algebrarules = {%Commutator(p[k], p[n]) = 0, %Commutator(x[k], p[l]) = I*`&hbar;`*KroneckerDelta[k, l], %Commutator(x[k], x[l]) = 0}, realobjects = {`&hbar;`})

[algebrarules = {%Commutator(p[k], p[n]) = 0, %Commutator(x[k], p[l]) = I*`&hbar;`*Physics:-KroneckerDelta[k, l], %Commutator(x[k], x[l]) = 0}, hermitianoperators = {p, x}, realobjects = {`&hbar;`}]

(4)

As expected:

(%Commutator = Commutator)(p[a], x[b])

%Commutator(p[a], x[b]) = -I*`&hbar;`*Physics:-KroneckerDelta[a, b]

(5)

Now, reset all the Hermitian operators, so that all quantum operators are destroyed. This is useful if, for instance, one needs to compare some of the results with the commutative case.

Setup(redo, hermitianoperators = {})

[hermitianoperators = none]

(6)

As expected, there are no quantum operators anymore...

Setup(quantumoperators)

[quantumoperators = {}]

(7)

...so that the following expressions should commute (result should be true)

Library:-Commute(p[a], x[b])

false

(8)

Result should be 0

Commutator(p[a], x[b])

-I*`&hbar;`*Physics:-KroneckerDelta[a, b]

(9)

p[a], x[b]

p[a], x[b]

(10)


Below is just a copy & paste of the above section. The only difference is that "Define(x[k], p[k])" has been commented out, so that x[k] and p[k] are not tensors. In that case, everything behaves as expected (but of course, the interesting feature of tensors is not available).


restart; with(Physics); interface(imaginaryunit = I)

First, set a 3D Euclidean space

Physics:-Setup(mathematicalnotation = true, dimension = 3, signature = `+`, spacetimeindices = lowercaselatin, quiet)

[dimension = 3, mathematicalnotation = true, signature = `+ + +`, spacetimeindices = lowercaselatin]

(11)

#Define two rank 1 tensors

Now, further define these tensors as quantum operators and give the usual commutation rules between the position and momentum operators (Quantum Mechanics)

Physics:-Setup(hermitianoperators = {p, x}, algebrarules = {%Commutator(p[k], p[n]) = 0, %Commutator(x[k], p[l]) = Physics:-`*`(Physics:-`*`(I, `&hbar;`), Physics:-KroneckerDelta[k, l]), %Commutator(x[k], x[l]) = 0}, realobjects = {`&hbar;`})

[algebrarules = {%Commutator(p[k], p[n]) = 0, %Commutator(x[k], p[l]) = I*`&hbar;`*Physics:-KroneckerDelta[k, l], %Commutator(x[k], x[l]) = 0}, hermitianoperators = {p, x}, realobjects = {`&hbar;`}]

(12)

As expected:

(%Commutator = Physics:-Commutator)(p[a], x[b])

%Commutator(p[a], x[b]) = -I*`&hbar;`*Physics:-KroneckerDelta[a, b]

(13)

Now, reset all the Hermitian operators, so that all quantum operators are destroyed.

Physics:-Setup(redo, hermitianoperators = {})

[hermitianoperators = none]

(14)

As expected, there are no quantum operators anymore...

Physics:-Setup(quantumoperators)

[quantumoperators = {}]

(15)

...so that the following expressions should commute (result should be true)

Physics:-Library:-Commute(p[a], x[b])

true

(16)

Result should be 0

Physics:-Commutator(p[a], x[b])

0

(17)

p[a], x[b]

p[a], x[b]

(18)



Download Quantum_operator_as_Tensors_August_23_2016.mw

Bruce Jenkins is President of Ora Research, an engineering research and advisory service. Maplesoft commissioned him to examine how systems-driven engineering practices are being integrated into the early stages of product development, the results of which are available in a free whitepaper entitled System-Level Physical Modeling and Simulation. In the coming weeks, Mr. Jenkins will discuss the results of his research in a series of blog posts.

This is the first entry in the series.

Discussions of how to bring simulation to bear starting in the early stages of product development have become commonplace today. Driving these discussions, I believe, is growing recognition that engineering design in general, and conceptual and preliminary engineering in particular, face unprecedented pressures to move beyond the intuition-based, guess-and-correct methods that have long dominated product development practices in discrete manufacturing. To continue meeting their enterprises’ strategic business imperatives, engineering organizations must move more deeply into applying all the capabilities for systematic, rational, rapid design development, exploration and optimization available from today’s simulation software technologies.

Unfortunately, discussions of how to simulate early still fixate all too often on 3D CAE methods such as finite element analysis and computational fluid dynamics. This reveals a widespread dearth of awareness and understanding—compounded by some fear, intimidation and avoidance—of system-level physical modeling and simulation software. This technology empowers engineers and engineering teams to begin studying, exploring and optimizing designs in the beginning stages of projects—when product geometry is seldom available for 3D CAE, but when informed engineering decision-making can have its strongest impact and leverage on product development outcomes. Then, properly applied, systems modeling tools can help engineering teams maintain visibility and control at the subsystems, systems and whole-product levels as the design evolves through development, integration, optimization and validation.

As part of my ongoing research and reporting intended to help remedy the low awareness and substantial under-utilization of system-level physical modeling software in too many manufacturing industries today, earlier this year I produced a white paper, “System-Level Physical Modeling and Simulation: Strategies for Accelerating the Move to Simulation-Led, Systems-Driven Engineering in Off-Highway Equipment and Mining Machinery.” The project that resulted in this white paper originated during a technology briefing I received in late 2015 from Maplesoft. The company had noticed my commentary in industry and trade publications expressing the views set out above, and approached me to explore what they saw as shared perspectives.

From these discussions, I proposed that Maplesoft commission me to further investigate these issues through primary research among expert practitioners and engineering management, with emphasis on the off-highway equipment and mining machinery industries. In this research, focused not on software-brand-specific factors but instead on industry-wide issues, I interviewed users of a broad range of systems modeling software products including Dassault Systèmes’ Dymola, Maplesoft’s MapleSim, The MathWorks’ Simulink, Siemens PLM’s LMS Imagine.Lab Amesim, and the Modelica tools and libraries from various providers. Interviewees were drawn from manufacturers of off-highway equipment and mining machinery as well as some makers of materials handling machinery.

At the outset, I worked with Maplesoft to define the project methodology. My firm, Ora Research, then executed the interviews, analyzed the findings and developed the white paper independently of input from Maplesoft. That said, I believe the findings of this project strongly support and validate Maplesoft’s vision and strategy for what it calls model-driven innovation. You can download the white paper here.

Bruce Jenkins, Ora Research
oraresearch.com


A more honest and specific version of lemma 3.

CONGRUENT_FUNCTIONS_OF_THE_FRACTIONAL_PART_OVER_Q_LEMMA_4.mw


Download CONGRUENT_FUNCTIONS_OF_THE_FRACTIONAL_PART_OVER_Q_LEMMA_4.mw

In a recent blog post, I found a single rotation that was equivalent to a sequence of Givens rotations, the underlying message being that teaching, learning, and doing mathematics is more effective and efficient when implemented with a tool like Maple. This post has the same message, but the medium is now the Householder reflection.

Given a vector x, the Householder matrix H = I - 2uu^T reflects x to y = Hx, where I is the appropriate identity matrix, u = (x - y) / ||x - y|| is a unit normal for the plane (or hyperplane) across which x is reflected, and y necessarily has the same norm as x. The matrix H is orthogonal but its determinant is -1, making it a reflection instead of a rotation.

Starting with x and u, H can be constructed and the reflection y calculated. Starting with x and y, u and H can be determined. But what does any of this look like? Besides, when the Householder matrix is introduced as a tool for upper-triangularizing a matrix, or for putting it into upper Hessenberg form, a recipe such as the one stated in Table 1 is the starting point.

In other words, the recipe in Table 1 reflects x to a vector y in which all entries below the kth are zero. Again, can any of this be visualized and rendered more concrete? (The chair who hired me into my first job averred that there are students who can learn from the general to the particular. Maybe some of my classmates in graduate school could, but in 40 years of teaching, I've never met one such student. Could that be because all things are known through the eyes of the beholder?)

In the attached worksheet, Householder matrices that reflect x = <5, -2, 1> to vectors y along the coordinate axes are constructed. These vectors and the reflecting planes are drawn, along with the appropriate normals u. In addition, the recipe in Table 1 is implemented, and the recipe itself examined. If you look at the worksheet, I believe you will agree that without Maple, the explorations shown would have been exceedingly difficult to carry out by hand.
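
For readers who want to experiment outside the attached worksheet, here is a minimal sketch of my own (using LinearAlgebra, not the code in RHR.mw) that builds the reflector taking x = <5, -2, 1> to a vector along the first coordinate axis:

with(LinearAlgebra):
x := <5, -2, 1>:
y := <Norm(x, 2), 0, 0>:                        # target along the first axis, same norm as x
u := Normalize(x - y, 2):                       # unit normal of the reflecting plane
H := IdentityMatrix(3) - 2 * u . Transpose(u):  # Householder reflector
map(simplify, H . x);                           # should reproduce y = <sqrt(30), 0, 0>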

Attached: RHR.mw

Hello, I was just looking back on some stuff I did a few months back. Although I'm aware there is already a function for generating the set of primes up to a given number featured in a Maple package, I'm just curious to know how this one measures up in terms of computational efficiency, etc.

 

Anyway, this is the code. If anyone has the time to give it a try and let me know what they think, i.e. whether there is a faster or more logical way to go about it, any feedback is appreciated. Cheers.

 

restart;
interface(showassumed = 0, rtablesize = infinity);
with(plots); with(numtheory); with(Statistics); with(LinearAlgebra); with(RandomTools); with(codegen, makeproc); with(combinat); with(Maplets[Elements]);
unprotect(real, rational, integer, complex);
alias(P[In] = CurveFitting[PolynomialInterpolation]); alias(L[In] = CurveFitting[LeastSquares]); alias(R[In] = CurveFitting[RationalInterpolation]); alias(S[In] = CurveFitting[Spline]); alias(B[In] = CurveFitting[BSplineCurve]); alias(L[In] = CurveFitting[ThieleInterpolation], rho = frac); alias(`&Nscr;` = Count); alias(`&Dopf;` = numtheory:-divisors); alias(sigma = numtheory:-sigma); alias(`&Fscr;` = ListTools['Flatten']); alias(`&Sopf;` = seq);
delta := proc (x, y) options operator, arrow; piecewise(x = y, 1, x <> y, 0) end proc;   # Kronecker delta
`&Mopf;` := proc (X, Y) options operator, arrow; map(X, Y) end proc;                     # shorthand for map
`&Cscr;`[S, L] := proc (X) options operator, arrow; convert(X, 'list') end proc;         # convert to list
`&Cscr;`[L, S] := proc (X) options operator, arrow; convert(X, 'set') end proc;          # convert to set
# &Popf;(N): k contributes k exactly when the flattened list of divisors of the divisors of k
# has 3 elements, which happens only when k is prime; all other k contribute 0, which is removed.
`&Popf;` := proc (N) options operator, arrow; `minus`({`&Sopf;`(k*delta(`&Nscr;`(`&Fscr;`(`&Cscr;`[S, L](`&Mopf;`(`&Cscr;`[S, L], `&Mopf;`(`&Dopf;`, `&Dopf;`(k)))))), 3), k = 1 .. N)}, {0}) end proc;
n[P] := proc (N) options operator, arrow; `&Nscr;`(`&Cscr;`[S, L](`&Popf;`(N)))-1 end proc;
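
For anyone who wants a rough comparison against the built-ins, a sketch along the following lines (run in the same session as the code above, with an arbitrary N; not a careful benchmark) might be a starting point:

N := 5000:
CodeTools:-Usage(`&Popf;`(N)):                                     # the procedure defined above
CodeTools:-Usage(select(isprime, {$ 1 .. N})):                     # built-in primality test
CodeTools:-Usage({seq(ithprime(i), i = 1 .. numtheory:-pi(N))}):   # ithprime with the prime-counting function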




Download prime_subset_up_to_N.mw

The 5th International Congress on Mathematical Software was held in Berlin from July 11 to July 14, 2016.

I'd like to draw attention to the following sessions:

The talks demonstrate what is going on at the frontiers of mathematical software nowadays and what the cutting-edge research topics in it are.

This post is about the relationship between the number of processors used in parallel processing with the Threads package and the resultant real times and cpu times for a computation.

In the worksheet below, I perform the same computation using each possible number of processors on my machine, one through eight. The computation is adding a list of 2^25 (about 33.5 million) pre-selected random integers. The real times and cpu times are collected from each run, and these are analyzed with a variety of metrics that I devised. Note that garbage-collection (gc) time is not an issue in the timings; as you can see below, the gc times are zero throughout.

My conclusion is that there are severely diminishing returns as the number of processors increases. There is a major benefit in going from one processor to two; there is a not-as-great-but-still-substantial benefit in going from two processors to four. But the real-time reduction in going from four processors to eight is very small compared to the substantial increase in resource consumption.

Please discuss the relevance of my six metrics, the soundness of my test technique, and how the presentation could be better. If you have a computer capable of running more than eight threads, please modify and run my worksheet on it.

Diminishing Returns from Parallel Processing: Is it worth using more than four processors with Threads?

Author: Carl J Love, 2016-July-30 

Set up tests

 

restart:

currentdir(kernelopts(homedir)):
if kernelopts(numcpus) <> 8 then
     error "This worksheet needs to be adjusted for your number of CPUs."
end if:
try fremove("ThreadsData.m") catch: end try:
try fremove("ThreadsTestData.m") catch: end try:
try fremove("ThreadsTest.mpl") catch: end try:

#Create and save random test data
L:= RandomTools:-Generate(list(integer, 2^25)):
save L, "ThreadsTestData.m":

#Create code file to be read for the tests.
fd:= FileTools:-Text:-Open("ThreadsTest.mpl", create):
fprintf(
     fd,
     "gc();\n"
     "read \"ThreadsTestData.m\":\n"
     "CodeTools:-Usage(Threads:-Add(x, x= L)):\n"
     "fd:= FileTools:-Text:-Open(\"ThreadsData.m\", create, append):\n"
     "fprintf(\n"
     "     fd, \"%%m%%m%%m\\n\",\n"
     "     kernelopts(numcpus),\n"
     "     CodeTools:-Usage(\n"
     "          Threads:-Add(x, x= L),\n"
     "          iterations= 8,\n"
     "          output= [realtime, cputime]\n"
     "     )\n"
     "):\n"
     "fclose(fd):"
):
fclose(fd):

#Code review
fd:= FileTools:-Text:-Open("ThreadsTest.mpl"):
while not feof(fd) do
     printf("%s\n", FileTools:-Text:-ReadLine(fd))
end do:

fclose(fd):

gc();
read "ThreadsTestData.m":
CodeTools:-Usage(Threads:-Add(x, x= L)):
fd:= FileTools:-Text:-Open("ThreadsData.m", create, append):
fprintf(
     fd, "%m%m%m\n",
     kernelopts(numcpus),
     CodeTools:-Usage(
          Threads:-Add(x, x= L),
          iterations= 8,
          output= [realtime, cputime]
     )
):
fclose(fd):

 

Run the tests

restart:

kernelopts(numcpus= 1):
currentdir(kernelopts(homedir)):
read "ThreadsTest.mpl":

memory used=0.79MiB, alloc change=0 bytes, cpu time=2.66s, real time=2.66s, gc time=0ns

memory used=0.78MiB, alloc change=0 bytes, cpu time=2.26s, real time=2.26s, gc time=0ns

 

Repeat above test using numcpus= 2..8.

 

restart:

kernelopts(numcpus= 2):
currentdir(kernelopts(homedir)):
read "ThreadsTest.mpl":

memory used=0.79MiB, alloc change=2.19MiB, cpu time=2.73s, real time=1.65s, gc time=0ns

memory used=0.78MiB, alloc change=0 bytes, cpu time=2.37s, real time=1.28s, gc time=0ns

 

restart:

kernelopts(numcpus= 3):
currentdir(kernelopts(homedir)):
read "ThreadsTest.mpl":

memory used=0.79MiB, alloc change=4.38MiB, cpu time=2.98s, real time=1.38s, gc time=0ns

memory used=0.78MiB, alloc change=0 bytes, cpu time=2.75s, real time=1.05s, gc time=0ns

 

restart:

kernelopts(numcpus= 4):
currentdir(kernelopts(homedir)):
read "ThreadsTest.mpl":

memory used=0.80MiB, alloc change=6.56MiB, cpu time=3.76s, real time=1.38s, gc time=0ns

memory used=0.78MiB, alloc change=0 bytes, cpu time=3.26s, real time=959.75ms, gc time=0ns

 

restart:

kernelopts(numcpus= 5):
currentdir(kernelopts(homedir)):
read "ThreadsTest.mpl":

memory used=0.80MiB, alloc change=8.75MiB, cpu time=4.12s, real time=1.30s, gc time=0ns

memory used=0.78MiB, alloc change=0 bytes, cpu time=3.74s, real time=910.88ms, gc time=0ns

 

restart:

kernelopts(numcpus= 6):
currentdir(kernelopts(homedir)):
read "ThreadsTest.mpl":

memory used=0.81MiB, alloc change=10.94MiB, cpu time=4.59s, real time=1.26s, gc time=0ns

memory used=0.78MiB, alloc change=0 bytes, cpu time=4.29s, real time=894.00ms, gc time=0ns

 

restart:

kernelopts(numcpus= 7):
currentdir(kernelopts(homedir)):
read "ThreadsTest.mpl":

memory used=0.81MiB, alloc change=13.12MiB, cpu time=5.08s, real time=1.26s, gc time=0ns

memory used=0.78MiB, alloc change=0 bytes, cpu time=4.63s, real time=879.00ms, gc time=0ns

 

restart:

kernelopts(numcpus= 8):
currentdir(kernelopts(homedir)):
read "ThreadsTest.mpl":

memory used=0.82MiB, alloc change=15.31MiB, cpu time=5.08s, real time=1.25s, gc time=0ns

memory used=0.78MiB, alloc change=0 bytes, cpu time=4.69s, real time=845.75ms, gc time=0ns

 

Analyze the data

restart:

currentdir(kernelopts(homedir)):

(R,C):= 'Vector(kernelopts(numcpus))' $ 2:
N:= Vector(kernelopts(numcpus), i-> i):

fd:= FileTools:-Text:-Open("ThreadsData.m"):
while not feof(fd) do
     (n,Tr,Tc):= fscanf(fd, "%m%m%m\n")[];
     (R[n],C[n]):= (Tr,Tc)
end do:

fclose(fd):

plot(
     (V-> <N | 100*~V>)~([R /~ max(R), C /~ max(C)]),
     title= "Raw timing data (normalized)",
     legend= ["real", "CPU"],
     labels= [`number of processors\n`, `%  of  max`],
     labeldirections= [HORIZONTAL,VERTICAL],
     view= [DEFAULT, 0..100]
);

The metrics:

 

R[1] /~ R /~ N:          Gain: The gain from parallelism expressed as a percentage of the theoretical maximum gain given the number of processors

C /~ R /~ N:               Evenness: How evenly the task is distributed among the processors

1 -~ C[1] /~ C:           Overhead: The percentage of extra resource consumption due to parallelism

R /~ R[1]:                   Reduction: The percentage reduction in real time

1 -~ R[2..] /~ R[..-2]:  Marginal Reduction: Percentage reduction in real time by using one more processor

C[2..] /~ C[..-2] -~ 1:  Marginal Consumption: Percentage increase in resource consumption by using one more processor

 

plot(
     [
          (V-> <N | 100*~V>)~([
               R[1]/~R/~N,             #gain from parallelism
               C/~R/~N,                #how evenly distributed
               1 -~ C[1]/~C,           #overhead
               R/~R[1]                 #reduction
          ])[],
          (V-> <N[2..] -~ .5 | 100*~V>)~([
               1 -~ R[2..]/~R[..-2],   #marginal reduction rate
               C[2..]/~C[..-2] -~ 1    #marginal consumption rate        
          ])[]
     ],
     legend= typeset~([
          'r[1]/r/n',
          'c/r/n',
          '1 - c[1]/c',
          'r/r[1]',
          '1 - `Delta__%`(r)',
          '`Delta__%`(c) - 1'       
     ]),
     linestyle= ["solid"$4, "dash"$2], thickness= 2,
     title= "Efficiency metrics\n", titlefont= [HELVETICA,BOLD,16],
     labels= [`number of processors\n`, `% change`], labelfont= [TIMES,ITALIC,14],
     labeldirections= [HORIZONTAL,VERTICAL],
     caption= "\nr = real time,  c = CPU time,  n = # of processors",
     size= combinat:-fibonacci~([16,15]),
     gridlines
);

 

 

Download Threads_dim_ret.mw
