MaplePrimes Posts

MaplePrimes Posts are for sharing your experiences, techniques and opinions about Maple, MapleSim and related products, as well as general interests in math and computing.

Latest Posts
  • Maple users often want to write a derivative evaluated at a point using Leibniz notation, as a matter of presentation, with appropriate variables and coordinates. For instance:

     

    Now, Maple uses the D operator for evaluating derivatives at a point, but this can be a little clunky:

    p := D[1,2,2,3](f)(a,b,c);
    
    q := convert( p, Diff );

    u := D[1,2,2,3](f)(5,10,15);
    
    v := convert( u, Diff );

    How can we tell Maple, programmatically, to print this in a nicer way? We amended the print command (see below) to do this. For example:

    print( D[1,2,2,3](f)(a,b,c), [x,y,z] );
    
    print( D[1,2,2,3](f)(5,10,15), [x,y,z] );

    print( 'D(sin)(Pi/6)', theta );

    Here's the definition of the custom version of print:

    # Type to check if an expression is a derivative using 'D', e.g. D(f)(a) and D[1,2](f)(a,b).
    TypeTools:-AddType(
            'Dexpr',
            proc( f )
                   if op( [0,0], f ) <> D and op( [0,0,0], f ) <> D then
                           return false;
                   end if;
                   if not type( op( [0,1], f ), 'name' ) or not type( { op( f ) }, 'set(algebraic)' ) then
                           return false;
                   end if;
                   if op( [0,0,0], f ) = D and not type( { op( [0,0,..], f ) }, 'set(posint)' ) then
                           return false;
                   end if;
                   return true;
            end proc
    ):

    # Create a local version of 'print', which will print expressions like D[1,2](f)(a,b) in a custom way,
    # but otherwise print in the usual fashion.
    local print := proc()

            local A, B, f, g, L, X, Y, Z;

            # Check that a valid expression involving 'D' is passed, along with a variable name or list of variable names.
            if ( _npassed < 2 ) or ( not _passed[1] :: 'Dexpr' ) or ( not _passed[2] :: 'Or'('name','list'('name')) ) then
                   return :-print( _passed );
            end if;

            # Extract important variables from the input.
            g := _passed[1]; # expression
            X := _passed[2]; # variable name(s)
            f := op( [0,1], g ); # function name in expression
            A := op( g ); # point(s) of evaluation

            # Check that the number of variables is the same as the number of evaluation points.
            if nops( X ) <> nops( [A] ) then
                   return :-print( _passed );
            end if;

            # The differential operator.
            L := op( [0,0], g );

            # Find the variable (univariate) or indices (multivariate) for the derivative(s).
            B := `if`( L = D, X, [ op( L ) ] );

            # Variable name(s) as expression sequence.
            Y := op( X );

            # Check that the point(s) of evaluation is/are distinct from the variable name(s).
            if numelems( {Y} intersect {A} ) > 0 then
                   return :-print( _passed );
            end if;

            # Find the expression sequence of the variable names.
            Z := `if`( L = D, X, X[B] );

            return print( Eval( Diff( f(Y), Z ), (Y) = (A) ) );

    end proc:
    
    

    Do you use Leibniz Notation often? Or do you have an alternate method? We’d love to hear from you!

    Last year, I read a fascinating paper that presented evidence of an exoplanet, inferred through the “wobble” (or radial velocity) of the star it orbits, HD 3651. A periodogram of the radial velocity revealed the orbital period of the exoplanet – about 62.2 days.

    I found the experimental data and attempted to reproduce the periodogram. However, the data was irregularly sampled, as is most astronomical data. This meant I couldn’t use the standard Fourier-based tools from the signal processing package.

    I started hunting for the techniques used in the spectral analysis of irregularly sampled data, and found that the Lomb Scargle approach was often used for astronomical data. I threw together some simple prototype code and successfully reproduced the periodogram in the paper.

     

    After some (not so) gentle prodding, Erik Postma’s team wrote their own, far faster and far more robust, implementation.

    This new functionality makes its debut in Maple 2019 (and the final worksheet is here.)

    From a simple germ of an idea, to a finished, robust, fully documented product that we can put in front of our users – that, for me, is incredibly satisfying.

    That’s a minor story about a niche I’m interested in, but these stories are repeated time and time again.  Ideas spring from users and from those that work at Maplesoft. They’re filtered to a manageable set that we can work on. Some projects reach completion in under a year, while other, more ambitious, projects take longer.

    The result is software developed by passionate people invested in their work, and used by passionate people in universities, industry and at home.

    We always pack a lot into each release. Maple 2019 contains improvements to the most commonly used Maple functions – such as solve, simplify and int – as well as features that target specific groups (such as those that share my interest in signal processing!)

    I’d like to highlight a few of the new features that I find particularly impressive, or that have just caught my eye because they’re cool.

    Of course, this is only a small selection of the shiny new stuff – everything is described in detail on the Maplesoft website.

    Edgardo, a research fellow at Maplesoft, recently sent me an independent comparison of Maple’s PDE solver versus those in Mathematica (in case you’re not aware, he’s the senior developer for that function). He was excited – this test suite demonstrated that Maple was far ahead of its closest competitor, both in the number of PDEs solved, and in the time taken to return those solutions.

    He’s spent another release cycle working on pdsolve – it’s now more powerful than before. Here’s a PDE that Maple now successfully solves.

    Maplesoft tracks visits to our online help pages - simplify is well inside the top ten most visited pages. It’s one of those core functions that nearly everyone uses.

    For this release, R&D has made many improvements to simplify. For example, Maple 2019 better simplifies expressions that contain powers, exponentials and trig functions.

    Everyone who touches Maple uses the same programming language. You could be an engineer that’s batch processing some data, or a mathematical researcher prototyping a new algorithm – everyone codes in the same language.

    Maple now supports C-style increment, decrement, and assignment operators, giving you more concise code.
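    For example, the following is now valid (a minimal sketch of the new syntax):

    i := 0:
    i++;     # increment i by 1
    i += 5;  # add 5 to i
    i *= 2;  # multiply i by 2
    i--;     # decrement i by 1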

    We’ve made a number of improvements to the interface, including a redesigned start page. My favorite is the display of large data structures (or rtables).

    You now see the header (that is, the top-left) of the data structure.

    For an audio file, you see useful information about its contents.

    I enjoy creating new and different types of visualizations using Maple's sandbox of flexible plots and plotting primitives.

    Here’s a new feature that I’ll use regularly: given a name (and optionally a modifier), polygonbyname draws a variety of shapes.

    In other breaking news, I now know what a Reuleaux hexagon looks like.

    Since I can’t resist talking about another signal processing feature, FindPeakPoints locates the local peaks or valleys of a 1D data set. Several options let you filter out spurious peaks or valleys.

    I’ve used this new function to find the fundamental frequencies and harmonics of a violin note from its periodogram.

    Speaking of passionate developers who are devoted to their work, Edgardo has written a new e-book that teaches you how to perform tensor computations using Physics. You get this e-book when you install Maple 2019.

    The new LeastTrimmedSquares command fits data to an equation while not being significantly influenced by outliers.

    In this example, we:

    • Artificially generate a noisy data set with a few outliers, but with the underlying trend Y = 5X + 50
    • Fit straight lines using CurveFitting:-LeastSquares and Statistics:-LeastTrimmedSquares

    The LeastTrimmedSquares function correctly predicts the underlying trend.
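    For reference, here is a minimal sketch of this kind of comparison. The data generation and the CurveFitting:-LeastSquares call are standard; the Statistics:-LeastTrimmedSquares calling sequence is assumed here to mirror it (x data, y data, then the fitting variable), so check ?Statistics,LeastTrimmedSquares for the exact syntax and options.

    with(Statistics):
    n := 50:
    noise := Sample(Normal(0, 5), n):
    X := Vector([seq(1.0*i, i = 1 .. n)]):
    Y := Vector([seq(5*X[i] + 50 + noise[i], i = 1 .. n)]):   # underlying trend Y = 5X + 50
    Y[10] := 500:  Y[25] := 450:  Y[40] := 480:               # inject a few outliers
    CurveFitting:-LeastSquares(X, Y, x);   # ordinary least squares, pulled toward the outliers
    LeastTrimmedSquares(X, Y, x);          # assumed calling sequence; robust against the outliers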

    We try to make every release faster and more efficient. We sometimes target key changes in the core infrastructure that benefit all users (such as the parallel garbage collector in Maple 17). Other times, we focus on specific functions.

    For this release, I’m particularly impressed by this improved benchmark for factor, in which we’re factoring a sparse multivariate polynomial.

    On my laptop, Maple 2018 takes 4.2 seconds to compute and consumes 0.92 GiB of memory.

    Maple 2019 takes a mere 0.27 seconds, and only needs 45 MiB of memory!
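    The benchmark polynomial itself isn't included in this post, but if you'd like to run a similar experiment on your own machine, CodeTools:-Usage reports both the time and the memory used by a computation. The polynomial below is just a stand-in, not the actual benchmark.

    # Build a sparse multivariate polynomial as an expanded product, then time factor.
    p := randpoly([w, x, y, z], degree = 15, terms = 20):
    q := randpoly([w, x, y, z], degree = 15, terms = 20):
    CodeTools:-Usage(factor(expand(p*q)));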

    I’m a visualization nut, and I always get a vicarious thrill when I see a shiny new plot, or a well-presented application.

    I was immediately drawn to this new Maple 2019 app – it illustrates the transition between day and night on a world map. You can even change the projection used to generate the map. Shiny!

     

    So that’s my pick of the top new features in Maple 2019. Everyone here at Maplesoft would love to hear your comments!

    It is my pleasure to announce the return of the Maple Conference! On October 15-17th, in Waterloo, Ontario, Canada, we will gather a group of Maple enthusiasts, product experts, and customers, to explore and celebrate the different aspects of Maple.

    Specifically, this conference will be dedicated to exploring Maple’s impact on education, new symbolic computation algorithms and techniques, and the wide range of Maple applications. Attendees will have the opportunity to learn about the latest research, share experiences, and interact with Maple developers.

    In preparation for the conference we are welcoming paper and extended abstract submissions. We are looking for presentations which fall into the broad categories of “Maple in Education”, “Algorithms and Software”, and “Applications of Maple” (a more extensive list of topics can be found here).

    You can learn more about the event, plus find our call-for-papers and abstracts, here: https://www.maplesoft.com/mapleconference/

    There have been several posts, over the years, related to visual cues about the values associated with particular 2D contours in a plot.

    Some people ask or post about color-bars [1]. Some people ask or post about inlined labelling of the curves [1, 2, 3, 4, 5, 6, 7]. And some post about mouse popup/hover-over functionality [1], which got added as general new 2D plot annotation functionality in Maple 2017 and is available for the plots:-contourplot command via its contourlabels option.

    Another possibility consists of a legend for 2D contour plots, with distinct entries for each contour value. That is not currently available from the plots:-contourplot command as documented. This post is about obtaining such a legend.

    Aside from the method used below, a similar effect may be possible (possibly with a little effort) using contour-plotting approaches based on individual plots:-implicitplot calls for each contour level, e.g. using Kitonum's procedure, or an undocumented, alternate internal driver for plots:-contourplot.

    Since I like the functionality provided by the contourlabels option I thought that I'd highjack that (and the _HOVERCONTENT plotting substructure that plot-annotations now generate) and get a relatively convenient way to get a color-key via the 2D plotting legend.  This is not supposed to be super-efficient.

    Here below are some examples. I hope that it illustrates some useful functionality that could be added to the contourplot command. It can also be used to get a color-key for use with densityplot.

    restart;
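    # contplot is a wrapper around plots:-contourplot: it converts the _HOVERCONTENT
    # annotations produced by the contourlabels option into LEGEND entries, so that
    # the contour values appear in the 2D plot legend.  Pass legend=true (or a list
    # of legend entries, where "null" suppresses an entry) and optionally
    # contourlabels=... in addition to the usual contourplot options.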

    contplot:=proc(ee, rng1, rng2)
      local clabels, clegend, i, ncrvs, newP, otherdat, others, tcrvs, tempP;
      (clegend,others):=selectremove(type,[_rest],identical(:-legend)=anything);
      (clabels,others):= selectremove(type,others,identical(:-contourlabels)=anything);
      if nops(clegend)>0 then
        tempP:=:-plots:-contourplot(ee,rng1,rng2,others[],
                                    ':-contourlabels'=rhs(clegend[-1]));
        tempP:=subsindets(tempP,'specfunc(:-_HOVERCONTENT)',
                          u->`if`(has(u,"null"),NULL,':-LEGEND'(op(u))));
        if nops(clabels)>0 then
          newP:=plots:-contourplot(ee,rng1,rng2,others[],
                                  ':-contourlabels'=rhs(clabels[-1]));
          tcrvs:=select(type,[op(tempP)],'specfunc(CURVES)');
          (ncrvs,otherdat):=selectremove(type,[op(newP)],'specfunc(CURVES)');
          return ':-PLOT'(seq(':-CURVES'(op(ncrvs[i]),op(indets(tcrvs[i],'specfunc(:-LEGEND)'))),
                              i=1..nops(ncrvs)),
                          op(otherdat));
        else
          return tempP;
        end if;
      elif nops(clabels)>0 then
        return plots:-contourplot(ee,rng1,rng2,others[],
                                  ':-contourlabels'=rhs(clabels[-1]));
      else
        return plots:-contourplot(ee,rng1,rng2,others[]);
      end if;
    end proc:
     

    contplot(x^2+y^2, x=-2..2, y=-2..2,
          coloring=["Yellow","Blue"],
          contours = 9,
          size=[500,400],
          legendstyle = [location = right],
          legend=true,
          contourlabels=true,
          view=[-2.1..2.1,-2.1..2.1]
    );

    contplot(x^2+y^2, x=-2..2, y=-2..2,
          coloring=["Yellow","Blue"],
          contours = 17,
          size=[500,400],
          legendstyle = [location = right],
          legend=['contourvalue',$("null",7),'contourvalue',$("null",7),'contourvalue'],
          contourlabels=true,
          view=[-2.1..2.1,-2.1..2.1]
    );

    # Apparently legend items must be unique, to persist on document re-open.

    contplot(x^2+y^2, x=-2..2, y=-2..2,
          coloring=["Yellow","Blue"],
          contours = 11,
          size=[500,400],
          legendstyle = [location = right],
          legend=['contourvalue',seq(cat($(` `,i)),i=2..5),
                  'contourvalue',seq(cat($(` `,i)),i=6..9),
                  'contourvalue'],
          contourlabels=true,
          view=[-2.1..2.1,-2.1..2.1]
    );

    contplot(x^2+y^2, x=-2..2, y=-2..2,
          coloring=["Green","Red"],
          contours = 8,
          size=[400,450],
          legend=true,
          contourlabels=true
    );

    contplot(x^2+y^2, x=-2..2, y=-2..2,
          coloring=["Yellow","Blue"],
          contours = 13,
          legend=['contourvalue',$("null",5),'contourvalue',$("null",5),'contourvalue'],
          contourlabels=true
    );

    (low,high,N):=0.1,7.6,23:
    conts:=[seq(low..high*1.01, (high-low)/(N-1))]:
    contplot(x^2+y^2, x=-2..2, y=-2..2,
          coloring=["Yellow","Blue"],
          contours = conts,
          legend=['contourvalue',$("null",floor((N-3)/2)),'contourvalue',$("null",ceil((N-3)/2)),'contourvalue'],
          contourlabels=true
    );

    plots:-display(
      subsindets(contplot((x^2+y^2)^(1/2), x=-2..2, y=-2..2,
                          coloring=["Yellow","Blue"],
                          contours = 7,
                          filledregions),
                 specfunc(CURVES),u->NULL),
      contplot((x^2+y^2)^(1/2), x=-2..2, y=-2..2,
          coloring=["Yellow","Blue"],
          contours = 7, #grid=[50,50],
          thickness=0,
          legendstyle = [location=right],
          legend=true),
      size=[600,500],
      view=[-2.1..2.1,-2.1..2.1]
    );

     

    plots:-display(
      contplot(x^2+y^2, x=-2..2, y=-2..2,
          coloring=["Yellow","Blue"],
          contours = 5,
          thickness=0, filledregions),
      contplot(x^2+y^2, x=-2..2, y=-2..2,
          coloring=["Yellow","Blue"],
          contours = 5,
          thickness=3,
          legendstyle = [location=right],
          legend=typeset("<=",contourvalue)),
      size=[700,600],
      view=[-2.1..2.1,-2.1..2.1]
    );

    N:=11:
    plots:-display(
      contplot(sin(x)*y, x=-2*Pi..2*Pi, y=-1..1,
          coloring=["Yellow","Blue"],
          contours = [seq(-1+(i-1)*(1-(-1))/(N-1),i=1..N)],
          thickness=3,
          legendstyle = [location=right],
          legend=true),
       plots:-densityplot(sin(x)*y, x=-2*Pi..2*Pi, y=-1..1,
          colorscheme=["zgradient",["Yellow","Blue"],colorspace="RGB"],
          grid=[100,100],
          style=surface, restricttoranges),
       plottools:-line([-2*Pi,-1],[-2*Pi,1],thickness=3,color=white),
       plottools:-line([2*Pi,-1],[2*Pi,1],thickness=3,color=white),
       plottools:-line([-2*Pi,1],[2*Pi,1],thickness=3,color=white),
       plottools:-line([-2*Pi,-1],[2*Pi,-1],thickness=3,color=white),
       size=[600,500]
    );

    N:=13:
    plots:-display(
      contplot(sin(x)*y, x=-2*Pi..2*Pi, y=-1..1,
          coloring=["Yellow","Blue"],
          contours = [seq(-1+(i-1)*(1-(-1))/(N-1),i=1..N)],
          thickness=6,
          legendstyle = [location=right],
          legend=['contourvalue',seq(cat($(` `,i)),i=2..3),
                  'contourvalue',seq(cat($(` `,i)),i=5..6),
                  'contourvalue',seq(cat($(` `,i)),i=8..9),
                  'contourvalue',seq(cat($(` `,i)),i=11..12),
                  'contourvalue']),
       plots:-densityplot(sin(x)*y, x=-2*Pi..2*Pi, y=-1..1,
          colorscheme=["zgradient",["Yellow","Blue"],colorspace="RGB"],
          grid=[100,100],
          style=surface, restricttoranges),
       plottools:-line([-2*Pi,-1],[-2*Pi,1],thickness=6,color=white),
       plottools:-line([2*Pi,-1],[2*Pi,1],thickness=6,color=white),
       plottools:-line([-2*Pi,1],[2*Pi,1],thickness=6,color=white),
       plottools:-line([-2*Pi,-1],[2*Pi,-1],thickness=6,color=white),
      size=[600,500]
    );

     

    Download contour_legend_post.mw

     

     

     


    A Complete Guide for performing Tensors computations using Physics

     

    This is an old request, a complete guide for using Physics to perform tensor computations. This guide, shown below with Sections closed, is linked at the end of this post as a pdf file with all the sections open, and also as a Maple worksheet that allows for reproducing its contents. Most of the computations shown are reproducible in Maple 2018.2.1, and a significant part also in previous releases, but to reproduce everything you need to have the Maplesoft Physics Updates version 283 or higher installed. Feedback on how to improve this presentation is welcome.

     

    Physics is a package developed by Maplesoft, an integral part of the Maple system. In addition to its commands for Quantum Mechanics, Classical Field Theory and General Relativity, Physics includes 5 other subpackages, three of them also related to General Relativity: Tetrads, ThreePlusOne and NumericalRelativity (work in progress), plus one to compute with Vectors and another related to the Standard Model (also a work in progress).

     

    The presentation is organized as follows. Section I is complete regarding the functionality provided with the Physics package for computing with tensors  in Classical and Quantum Mechanics (so including Euclidean spaces), Electrodynamics and Special Relativity. The material of section I is also relevant in General Relativity, for which section II is all devoted to curved spacetimes. (The sub-section on the Newman-Penrose formalism needs to be filled with more material and a new section devoted to the EnergyMomentum tensor is appropriate. I will complete these two things as time permits.) Section III is about transformations of coordinates, relevant in general.

     

    For an alphabetical list of the Physics commands with a brief one-line description and a link to the corresponding help page see Physics: Brief description of each command .

     

    I. Spacetime and tensors in Physics

     

     

    This section contains all that is necessary for working with tensors in Classical and Quantum Mechanics, Electrodynamics and Special Relativity. This material is also relevant for computing with tensors in General Relativity, for which there is a dedicated Section II, Curved spacetimes.

     

    Default metric and signature, coordinate systems

       

    Tensors, their definition, symmetries and operations

     

     

    Physics comes with a set of predefined tensors, mainly the spacetime metric  g[mu, nu], the space metric  gamma[j, k], and all the standard tensors of  General Relativity. In addition, one of the strengths of Physics is that you can define tensors, in natural ways, by indicating a matrix or array with its components, or indicating any generic tensorial expression involving other tensors.

     

    In Maple, tensor indices are letters, as when computing with paper and pencil, lowercase or uppercase, Latin or Greek, entered using indexation, as in A[mu], and displayed as subscripts. Contravariant indices are entered by preceding the letter with ~, as in A[`~mu`], and are displayed as superscripts. You can work with two or more kinds of indices at the same time, e.g., spacetime and space indices.

     

    To input Greek letters, you can spell them, as in mu for μ, or simpler: use the shortcuts for entering Greek characters. Right-click your input and choose Convert To → 2-D Math Input to give your spelled-out tensorial expression textbook-quality typesetting.

     

    Not every indexed object or function is, however, automatically a tensor. You first need to define it as such using the Define  command. You can do that in two ways:

     

    1. 

    Passing the tensor being defined, say F[mu, nu], possibly indicating symmetries and/or antisymmetries for its indices.

    2. 

    Passing a tensorial equation where the left-hand side is the tensor being defined as in 1. and the right-hand side is a tensorial expression - or an Array or Matrix - such that the components of the tensor being defined are equal to the components of the tensorial expression. (Both forms are sketched right after this list.)
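    As a minimal sketch of both forms (the tensor names A, F, G and the expressions below are only illustrative placeholders, not taken from the guide):

    with(Physics):
    Define(A[mu]);                                 # 1. define A as a tensor with one spacetime index
    Define(F[mu, nu], antisymmetric);              # 1. define F, declaring its indices antisymmetric
    Define(G[mu, nu] = A[mu]*A[nu] + F[mu, nu]);   # 2. define G through a tensorial equation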

     

    After defining a tensor - say A[mu] or F[mu, nu] - you can perform the following operations on algebraic expressions involving them (a brief sketch follows the list):

     

    • 

    Automatic formatting of repeated indices, one covariant the other contravariant

    • 

    Automatic handling of collisions of repeated indices in products of tensors

    • 

    Simplify  products using Einstein's sum rule for repeated indices.

    • 

    SumOverRepeatedIndices  of the tensorial expression.

    • 

    Use TensorArray  to compute the expression's components

    • 

    TransformCoordinates .
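    A brief sketch of a few of these operations, reusing the illustrative tensors A and F defined in the earlier sketch (the expressions themselves are arbitrary):

    Simplify(A[mu]*A[nu]*g_[`~mu`, `~nu`]);   # apply Einstein's sum rule over the repeated indices
    SumOverRepeatedIndices(A[mu]*A[`~mu`]);   # write the implicit sum out explicitly
    TensorArray(F[mu, nu]);                   # compute all the components of F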

     

    If you define a tensor using a tensorial equation, in addition to the items above you can:

     

    • 

    Get each tensor component by indexing, say as in A[1] or A[`~1`]

    • 

    Get all the covariant and contravariant components by respectively using the shortcut notation A[] and A[`~`].

    • 

    Use any of the special indexing keywords valid for the pre-defined tensors of Physics; they are: definition, nonzero, and in the case of tensors of 2 indices also trace, and determinant.

    • 

    No need to specify the tensor dependency for differentiation purposes - it is inferred automatically from its definition.

    • 

    Redefine any particular tensor component using Library:-RedefineTensorComponent

    • 

    Minimize the number of independent tensor components using Library:-MinimizeTensorComponent

    • 

    Compute the number of independent tensor components - relevant for tensors with several indices and different symmetries - using Library:-NumberOfTensorComponents .

     

    The first two sections illustrate these two ways of defining a tensor and the features described. The next sections present the existing functionality of the Physics package to compute with tensors.

     

    Defining a tensor passing the tensor itself

       

    Defining a tensor passing a tensorial equation

       

    Automatic formatting of repeated tensor indices and handling of their collisions in products

       

    Tensor symmetries

       

    Substituting tensors and tensor indices

       

    Simplifying tensorial expressions

       

    SumOverRepeatedIndices

       

    Visualizing tensor components - Library:-TensorComponents and TensorArray

       

    Modifying tensor components - Library:-RedefineTensorComponent

       

    Enhancing the display of tensorial expressions involving tensor functions and derivatives using CompactDisplay

       

    The LeviCivita tensor and KroneckerDelta

       

    The 3D space metric and decomposing 4D tensors into their 3D space part and the rest

       

    Total differentials, the d_[mu] and dAlembertian operators

       

    Tensorial differential operators in algebraic expressions

       

    Inert tensors

       

    Functional differentiation of tensorial expressions with respect to tensor functions

       

    The Pauli matrices and the spacetime Psigma[mu] 4-vector

       

    The Dirac matrices and the spacetime Dgamma[mu] 4-vector

       

    Quantum not-commutative operators using tensor notation

       

    II. Curved spacetimes

     

     

    Physics comes with a set of predefined tensors, mainly the spacetime metric g[mu, nu], the space metric gamma[j, k], and all the standard tensors of General Relativity, respectively entered and displayed as: Einstein[mu,nu] = G[mu, nu], Ricci[mu,nu] = R[mu, nu], Riemann[alpha, beta, mu, nu] = R[alpha, beta, mu, nu], Weyl[alpha, beta, mu, nu] = C[alpha, beta, mu, nu], and the Christoffel symbols Christoffel[alpha, mu, nu] = GAMMA[alpha, mu, nu] and Christoffel[`~alpha`, mu, nu] = GAMMA[`~alpha`, mu, nu], respectively of the first and second kinds. The Tetrads and ThreePlusOne subpackages have other predefined related tensors. This section is thus all about computing with tensors in General Relativity.

     

    Loading metrics from the database of solutions to Einstein's equations

       

    Setting the spacetime metric indicating the line element or a Matrix

       

    Covariant differentiation: the D_[mu] operator and the Christoffel symbols

       

    The Einstein, Ricci, Riemann and Weyl tensors of General Relativity

       

    A conversion network for the tensors of General Relativity

       

    Tetrads and the local system of references - the Newman-Penrose formalism

       

    The ThreePlusOne package and the 3+1 splitting of Einstein's equations

       

    III. Transformations of coordinates

       

    See Also

     

    Physics , Conventions used in the Physics package , Physics examples , Physics Updates

     


     

    Download Tensors_-_A_Complete_Guide.mw, or the pdf version with sections open: Tensors_-_A_Complete_Guide.pdf

    Edgardo S. Cheb-Terrab
    Physics, Differential Equations and Mathematical Functions, Maplesoft

    Recently, my research team at the University of Waterloo was approached by Mark Ideson, the skip for the Canadian Paralympic men’s curling team, about developing a curling end-effector, a device to give wheelchair curlers greater control over their shots. A gold medalist and multi-medal winner at the Paralympics, Mark has a passion to see wheelchair curling performance improve and entrusted us to assist him in this objective. We previously worked with Mark and his team on a research project to model the wheelchair curling shot and help optimize their performance on the ice. The end-effector project was the next step in our partnership.

    The use of technology in the sports world is increasing rapidly, allowing us to better understand athletic performance. We are able to gather new types of data that, when coupled with advanced engineering tools, allow us to perform more in-depth analysis of the human body as it pertains to specific movements and tasks. As a result, we can refine motions and improve equipment to help athletes maximize their abilities and performance. As a professor of Systems Design Engineering at the University of Waterloo, I have overseen several studies on the motor function of Paralympic athletes. My team focuses on modelling the interactions between athletes and their equipment to maximize athletic performance, and we rely heavily on Maple and MapleSim in our research and project development.

    The end-effector project was led by my UW students Borna Ghannadi and Conor Jansen. The objective was to design a device that attaches to the end of the curler’s stick and provides greater command over the stone by pulling it back prior to release.  Our team modeled the end effector in Maple and built an initial prototype, which has undergone several trials and adjustments since then. The device is now on its 7th iteration, which we felt appropriate to name the Mark 7, in recognition of Mark’s inspiration for the project. The device has been a challenge, but we have steadily made improvements with Mark’s input and it is close to being a finished product.

    Currently, wheelchair curlers use a device that keeps the stone static before it’s thrown. Having the ability to pull back on the stone and break the friction prior to release will provide great benefit to the curlers. As a curler, if you can only push forward and the ice conditions aren’t perfect, you’re throwing at a different speed every time. If you can pull the stone back and then go forward, you’ve broken that friction and your shot is far more repeatable. This should make the game much more interesting.

    For our team, the objective was to design a mechanism that not only allowed curlers to pull back on the stone, but also had a release option with no triggers on the curler’s hand. The device we developed screws on to the end of the curler’s stick, and is designed to rest firmly on the curling handle. Once the curler selects their shot, they can position the stone accordingly, slide the stone backward and then forward, and watch the device gently separate from the stone.

    For our research, the increased speed and accuracy of MapleSim’s multibody dynamic simulations, made possible by the underlying symbolic modelling engine, Maple, allowed us to spend more time on system design and optimization. MapleSim combines principles of mechanics with linear graph theory to produce unified representations of the system topology and modelling coordinates. The system equations are automatically generated symbolically, which enables us to view and share the equations prior to a numerical solution of the highly-optimized simulation code.

    The Mark 7 is an invention that could have significant ramifications in the curling world. Shooting accuracy across wheelchair curling is currently around 60-62%, and if new technology like the Mark 7 is adopted, that number could grow to 70 or 75%. Improved accuracy will make the game more enjoyable and competitive. Having the ability to pull back on the stone prior to release will eliminate some instability for the curlers, which can help level the playing field for everyone involved. Given the work we have been doing with Mark’s team on performance improvements, it was extremely satisfying for us to see them win the bronze medal in South Korea. We hope that our research and partnership with the team can produce gold medals in the years to come.

     

    Throughout the course of a year, Maple users create wildly varying applications on all sorts of subjects. To mark the end of 2018, I thought I’d share some of the 2018 submissions to the Maple Application Center that I personally found particularly interesting.

    Solving the 15-puzzle, by Curtis Bright. You know those puzzles where you have to move the pieces around inside a square to put them in order, and there’s only one free space to move into?  I’m not good at those puzzles, but it turns out Maple can help. This is one of collection of new, varied applications using Maple’s SAT solvers (if you want to solve the world’s hardest Sudoku, Maple’s SAT solvers can help with that, too).

    Romeo y Julieta: Un clasico de las historias de amor... y de las ecuaciones diferenciales [Romeo and Juliet: A classic love story... and differential equations], by Ranferi Gutierrez. This one made me laugh (and even more so once I put some of it in Google Translate, which is more than enough to let you appreciate the application even if you don’t speak Spanish). What’s not to like about modeling a high drama love story using DEs?

    Prediction of malignant/benign of breast mass with DNN classifier, by Sophie Tan. Machine learning can save lives.

    Hybrid Image of a Cat and a Dog, by Samir Khan. Signal processing can be more fun than I realized. This is one of those crazy optical illusions where the picture changes depending on how far away you are.

    Beyond the 8 Queens Problem, by Yury Zavarovsky. In true mathematical fashion, why have 8 queens when you can have n?  (If you are interested in this problem, you can also see a different solution that uses SAT solvers.)

    Gödel's Universe, by Frank Wang.  Can’t say I understood much of it, but it involves Gödel, Einstein, and Hawking, so I don’t need to understand it to find it interesting.


    Overview of the Physics Updates

     

    One of the problems pointed out several times about the Physics package documentation is that the information is scattered. There are the help pages for each Physics command, then there is that page on Physics conventions, one other with Examples in different areas of physics, one "What's new in Physics" page at each release with illustrations only shown there. Then there are a number of Mapleprimes posts describing the Physics project and showing how to use the package to tackle different problems. We seldom find the information we are looking for fast enough.

     

    This post thus organizes and presents all those elusive links in one place. All the hyperlinks below are alive from within a Maple worksheet. A link to this page will also appear in all the Physics help pages in the next Maple release. Comments on practical ways to improve this presentation of information are welcome.

    Description

     

    As part of its commitment to providing the best possible environment for algebraic computations in Physics, Maplesoft launched, during 2014, a Maple Physics: Research and Development website. That enabled users to ask questions, provide feedback and download updated versions of the Physics package, around the clock.

    The "Physics Updates" include improvements, fixes, and the latest new developments, in the areas of Physics, Differential Equations and Mathematical Functions. Since Maple 2018, you can install/uninstall the "Physics Updates" directly from the MapleCloud .

    Maplesoft incorporated the results of this accelerated exchange with people around the world into the successive versions of Maple. Below are two sections:

    • 

    The Updates of Physics, as  an organized collection of links per Maple release, where you can find a description with examples of the subjects developed in the Physics package, from 2012 till 2019.

    • 

    The Mapleprimes Physics posts, containing the most important posts describing the Physics project and showing the use of the package to tackle problems in General Relativity and Quantum Mechanics.

    The update of Physics in Maple 2018 and back to Maple 16 (2012)

     

     

    • 

    Physics Updates during 2018

    a. 

    Tensor product of Quantum States using Dirac's Bra-Ket Notation

    b. 

    Coherent States in Quantum Mechanics

    c. 

    The Zassenhaus formula and the algebra of the Pauli matrices

    d. 

    Multivariable Taylor series of expressions involving anticommutative (Grassmannian) variables

    e. 

    New SortProducts command

    f. 

    A Complete Guide for Tensor computations using Physics

     

    • 

    Physics Maple 2018 updates

    g. 

    Automatic handling of collision of tensor indices in products

    h. 

    User defined algebraic differential operators

    i. 

    The Physics:-Cactus package for Numerical Relativity

    j. 

    Automatic setting of the EnergyMomentumTensor for metrics of the database of solutions to Einstein's equations

    k. 

    Minimize the number of tensor components according to its symmetries, relabel, redefine or count the number of independent tensor components

    l. 

    New functionality and display for inert names and inert tensors

    m. 

    Automatic setting of Dirac, Pauli and Gell-Mann algebras

    n. 

    Simplification of products of Dirac matrices

    o. 

    New Physics:-Library commands to perform matrix operations in expressions involving spinors with omitted indices

    p. 

    Miscellaneous improvements

     

    • 

    Physics Maple 2017 updates

    q. 

    General Relativity: classification of solutions to Einstein's equations and the Tetrads package

    r. 

    The 3D metric and the ThreePlusOne (3 + 1) new Physics subpackage

    s. 

    Tensors in Special and General Relativity

    t. 

    The StandardModel new Physics subpackage

     

    • 

    Physics Maple 2016 updates

    u. 

    Completion of the Database of Solutions to Einstein's Equations

    v. 

    Operatorial Algebraic Expressions Involving the Differential Operators d_[mu], D_[mu] and Nabla

    w. 

    Factorization of Expressions Involving Noncommutative Operators

    x. 

    Tensors in Special and General Relativity

    y. 

    Vectors Package

    z. 

    New Physics:-Library commands

    aa. 

    Redesigned Functionality and Miscellaneous

     

    • 

    Physics Maple 2015 updates

    ab. 

    Simplification

    ac. 

    Tensors

    ad. 

    Tetrads in General Relativity

    ae. 

    More Metrics in the Database of Solutions to Einstein's Equations

    af. 

    Commutators, AntiCommutators, and Dirac notation in quantum mechanics

    ag. 

    New Assume command and new enhanced Mode: automaticsimplification

    ah. 

    Vectors Package

    ai. 

    New Physics:-Library commands

    aj. 

    Miscellaneous

     

    • 

    Physics Maple 18 updates

    ak. 

    Simplification

    al. 

    4-Vectors, Substituting Tensors

    am. 

    Functional Differentiation

    an. 

    More Metrics in the Database of Solutions to Einstein's Equations

    ao. 

    Commutators, AntiCommutators

    ap. 

    Expand and Combine

    aq. 

    New Enhanced Modes in Physics Setup

    ar. 

    Dagger

    as. 

    Vectors Package

    at. 

    New Physics:-Library commands

    au. 

    Miscellaneous

     

    • 

    Physics Maple 17 updates

    av. 

    Tensors and Relativity: ExteriorDerivative, Geodesics, KillingVectors, LieDerivative, LieBracket, Antisymmetrize and Symmetrize

    aw. 

    Dirac matrices, commutators, anticommutators, and algebras

    ax. 

    Vector Analysis

    ay. 

    A new Library of programming commands for Physics

     

    • 

    Physics Maple 16 updates

    az. 

    Tensors in Special and General Relativity: contravariant indices and new commands for all the General Relativity tensors

    ba. 

    New commands for working with expressions involving anticommutative variables and functions: Gtaylor, ToFieldComponents, ToSuperfields

    bb. 

    Vector Analysis: geometrical coordinates with functional dependency

    Mapleprimes Physics posts

     

     

    1. 

    The Physics project at Maplesoft

    2. 

    Mini-Course: Computer Algebra for Physicists

    3. 

    A Complete Guide for Tensor computations using Physics

    4. 

    Perimeter Institute-2015, Computer Algebra in Theoretical Physics (I)

    5. 

    IOP-2016, Computer Algebra in Theoretical Physics (II)

    6. 

    ACA-2017, Computer Algebra in Theoretical Physics (III) 

     

    • 

    General Relativity

     

    7. 

    General Relativity using Computer Algebra

    8. 

    Exact solutions to Einstein's equations 

    9. 

    Classification of solutions to Einstein's equations and the ThreePlusOne (3 + 1) package 

    10. 

    Tetrads and Weyl scalars in canonical form 

    11. 

    Equivalence problem in General Relativity 

    12. 

    Automatic handling of collision of tensor indices in products 

    13. 

    Minimize the number of tensor components according to its symmetries

    • 

    Quantum Mechanics

     

    14. 

    Quantum Commutation Rules Basics 

    15. 

    Quantum Mechanics: Schrödinger vs Heisenberg picture 

    16. 

    Quantization of the Lorentz Force 

    17. 

    Magnetic traps in cold-atom physics 

    18. 

    The hidden SO(4) symmetry of the hydrogen atom

    19. 

    (I) Ground state of a quantum system of identical boson particles 

    20. 

    (II) The Gross-Pitaevskii equation and Bogoliubov spectrum 

    21. 

    (III) The Landau criterion for Superfluidity 

    22. 

    Simplification of products of Dirac matrices

    23. 

    Algebra of Dirac matrices with an identity matrix on the right-hand side

    24. 

    Factorization with non-commutative variables

    25. 

    Tensor Products of Quantum State Spaces 

    26. 

    Coherent States in Quantum Mechanics 

    27. 

    The Zassenhaus formula and the Pauli matrices 

     

    • 

    Physics package generic functionality

     

    28. 

    Automatic simplification and a new Assume (as in "extended assuming")

    29. 

    Wirtinger derivatives and multi-index summation

    See Also

     

    Conventions used in the Physics package , Physics , Physics examples , A Complete Guide for Tensor computations using Physics


     

    Download Physics-Updates.mw
     

    Edgardo S. Cheb-Terrab
    Physics, Differential Equations and Mathematical Functions, Maplesoft

    The Zassenhaus formula and the algebra of the Pauli matrices

     

    Edgardo S. Cheb-Terrab1 and Bryan C. Sanctuary2

    (1) Maplesoft

    (2) Department of Chemistry, McGill University, Montreal, Quebec, Canada

     

      


    The implementation of the Pauli matrices and their algebra was reviewed during 2018, including the algebraic manipulation of nested commutators, resulting in faster computations using simpler and more flexible input. As frequently happens, improvements of this type suddenly transform research problems presented in the literature as intractable in practice into tractable ones.

      

    As an illustration, we tackle below the derivation of the coefficients entering the Zassenhaus formula shown in section 4 of [1] for the Pauli matrices up to order 10 (results in the literature go up to order 5). The computation presented can be reused to compute these coefficients up to any desired higher order (hardware limitations may apply). A number of examples which exploit this formula and its dual, the Baker-Campbell-Hausdorff formula, occur in connection with the Weyl prescription for converting a classical function to a quantum operator (see sec. 5 of [1]), as well as when solving the eigenvalue problem for classes of mathematical-physics partial differential equations [2].
    To reproduce the results below - a worksheet with these contents is linked at the end - you need to have your Maple 2018.2.1 updated with the Maplesoft Physics Updates version 280 or higher.

    References

     
      

    [1] R.M. Wilcox, "Exponential Operators and Parameter Differentiation in Quantum Physics", Journal of Mathematical Physics, V.8, 4, (1967).

      

    [2] S. Steinberg, "Applications of the Lie algebraic formulas of Baker, Campbell, Hausdorff, and Zassenhaus to the calculation of explicit solutions of partial differential equations", Journal of Differential Equations, V.26, 3, 1977.

      

    [3] K. Huang, "Statistical Mechanics", John Wiley & Sons, Inc. 1963, p217, Eq.(10.60).

     

    Formulation of the problem

    The Zassenhaus formula expresses exp(lambda*(A+B)) as an infinite product of exponential operators involving nested commutators of increasing complexity

    exp(lambda*(A+B)) = exp(lambda*A)*exp(lambda*B)*exp(lambda^2*C[2])*exp(lambda^3*C[3])* ...
                      = exp(lambda*A)*exp(lambda*B)*exp(-(1/2)*lambda^2*%Commutator(A, B))*exp((1/6)*lambda^3*(2*%Commutator(B, %Commutator(A, B)) + %Commutator(A, %Commutator(A, B))))* ...

    Given A, B and their commutator E = %Commutator(A, B), if A and B commute with E, C[n] = 0 for n >= 3 and the Zassenhaus formula reduces to the product of the first three exponentials above. The interest here is in the general case, when %Commutator(A, E) <> 0 and %Commutator(B, E) <> 0, and the goal is to compute the Zassenhaus coefficients C[n] in terms of A, B for arbitrary finite n. Following [1], in that general case, differentiating the Zassenhaus formula with respect to lambda and multiplying from the right by exp(-lambda*(A+B)) one obtains

    A+B = A + exp(lambda*A)*B*exp(-lambda*A) + exp(lambda*A)*exp(lambda*B)*2*lambda*C[2]*exp(-lambda*B)*exp(-lambda*A) + ...

    This is an intricate formula, which however (see eq.(4.20) of [1]) can be represented in abstract form as

     

    0 = Sum(lambda^n*{B, A^n}/factorial(n), n = 1 .. infinity) + 2*lambda*(Sum(Sum(lambda^(n+m)*{C[2], A^m, B^n}/(factorial(n)*factorial(m)), n = 0 .. infinity), m = 0 .. infinity)) + 3*lambda^2*(Sum(Sum(Sum(lambda^(n+m+k)*{C[3], A^k, B^m, C[2]^n}/(factorial(n)*factorial(m)*factorial(k)), n = 0 .. infinity), m = 0 .. infinity), k = 0 .. infinity)) + ...

    from where an equation to be solved for each C[n] is obtained by equating to 0 the coefficient of lambda^(n-1). In this formula, the repeated commutator bracket is defined inductively in terms of the standard commutator %Commutator(A, B) by

    {B, A^0} = B,   {B, A^(n+1)} = %Commutator(A, {B, A^n})

    {C[j], B^n, A^0} = {C[j], B^n},   {C[j], B^n, A^m} = %Commutator(A, {C[j], B^n, A^(m-1)})

    and higher-order repeated-commutator brackets are similarly defined. For example, taking the coefficient of lambda and lambda^2 and respectively solving each of them for C[2] and C[3] one obtains

    C[2] = -(1/2)*%Commutator(A, B)

    C[3] = (1/3)*%Commutator(B, %Commutator(A, B))+(1/6)*%Commutator(A, %Commutator(A, B))

    This method is used in [3] to treat quantum deviations from the classical limit of the partition function for both a Bose-Einstein and Fermi-Dirac gas. The complexity of the computation of C[n] grows rapidly and in the literature only the coefficients up to C[5] have been published. Taking advantage of developments in the Physics package during 2018, below we show the computation up to C[10] and provide a compact approach to compute them up to arbitrary finite order.

     

    Computing up to C[10]

    Set the signature of spacetime such that its space part is equal to +++ and use lowercase Latin letters to represent space indices. Also set A, B and C[n] to represent quantum operators

    with(Physics)

    Setup(op = {A, B, C}, signature = `+++-`, spaceindices = lowercaselatin)

    * Partial match of 'op' against keyword 'quantumoperators'

     

    _______________________________________________________

     

    [quantumoperators = {A, B, C}, signature = `+ + + -`, spaceindices = lowercaselatin]

    (1)

    To illustrate the computation up to C[10], a convenient example, where the commutator algebra is closed, consists of taking A and B as Pauli matrices which, multiplied by the imaginary unit, form a basis for the su(2) algebra, which in turn exponentiates to the relevant Special Unitary Group SU(2). The algebra for the Pauli matrices involves a commutator and an anticommutator

    Library:-DefaultAlgebraRules(Psigma)

    %Commutator(Physics:-Psigma[i], Physics:-Psigma[j]) = (2*I)*Physics:-LeviCivita[i, j, k]*Physics:-Psigma[k], %AntiCommutator(Physics:-Psigma[i], Physics:-Psigma[j]) = 2*Physics:-KroneckerDelta[i, j]

    (2)

    Assign now A and B to two Pauli matrices, for instance

    A := Psigma[1]

    Physics:-Psigma[1]

    (3)

    B := Psigma[3]

    Physics:-Psigma[3]

    (4)

    Next, to extract the coefficient of lambda^n from

    0 = Sum(lambda^n*{B, A^n}/factorial(n), n = 1 .. infinity) + 2*lambda*(Sum(Sum(lambda^(n+m)*{C[2], A^m, B^n}/(factorial(n)*factorial(m)), n = 0 .. infinity), m = 0 .. infinity)) + 3*lambda^2*(Sum(Sum(Sum(lambda^(n+m+k)*{C[3], A^k, B^m, C[2]^n}/(factorial(n)*factorial(m)*factorial(k)), n = 0 .. infinity), m = 0 .. infinity), k = 0 .. infinity)) + ...

    to solve it for C[n+1] we note that each term has a factor lambda^m multiplying a sum, so we only need to take into account the first n+1 terms (sums) and in each sum replace infinity by the corresponding n-m. For example, given C[2] = -(1/2)*%Commutator(A, B), to compute C[3] we only need to compute these first three terms:

    0 = Sum(lambda^n*{B, A^n}/factorial(n), n = 1 .. 2)+2*lambda*(Sum(Sum(lambda^(n+m)*{C[2], A^m, B^n}/(factorial(n)*factorial(m)), n = 0 .. 1), m = 0 .. 1))+3*lambda^2*(Sum(Sum(Sum(lambda^(n+m+k)*{C[3], A^k, B^m, C[2]^n}/(factorial(n)*factorial(m)*factorial(k)), n = 0 .. 0), m = 0 .. 0), k = 0 .. 0))

    then solving for C[3] one gets C[3] = (1/3)*%Commutator(B, %Commutator(A, B))+(1/6)*%Commutator(A, %Commutator(A, B)).

    Also, since to compute C[n] we only need the coefficient of lambda^(n-1), it is not necessary to compute all the terms of each multiple-sum. One way of restricting the multiple-sums to only one power of lambda consists of using multi-index summation, available in the Physics package (see Physics:-Library:-Add ). For that purpose, redefine sum to extend its functionality with multi-index summation

    Setup(redefinesum = true)

    [redefinesum = true]

    (5)

    Now we can represent the same computation of C[3] without multiple sums and without computing unnecessary terms as

    0 = Sum(lambda^n*{B, A^n}/factorial(n), n = 1)+2*lambda*(Sum(lambda^(n+m)*{C[2], A^m, B^n}/(factorial(n)*factorial(m)), n+m = 1))+3*lambda^2*(Sum(lambda^(n+m+k)*{C[3], A^k, B^m, C[2]^n}/(factorial(n)*factorial(m)*factorial(k)), n+m+k = 0))

    Finally, we need a computational representation for the repeated commutator bracket 

    {B, A^0} = B,   {B, A^(n+1)} = %Commutator(A, {B, A^n})

    One way of representing this commutator bracket operation is defining a procedure, say F, with a cache to avoid recomputing lower order nested commutators, as follows

    F := proc (A, B, n)
           options operator, arrow;
           if n::negint then 0
           elif n = 0 then B
           elif n::posint then %Commutator(A, F(A, B, n-1))
           else 'F(A, B, n)'
           end if
    end proc

    proc (A, B, n) options operator, arrow; if n::negint then 0 elif n = 0 then B elif n::posint then %Commutator(A, F(A, B, n-1)) else 'F(A, B, n)' end if end proc

    (6)

    Cache(procedure = F)

     

    For example,

    F(A, B, 1)

    %Commutator(Physics:-Psigma[1], Physics:-Psigma[3])

    (7)

    F(A, B, 2)

    %Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], Physics:-Psigma[3]))

    (8)

    F(A, B, 3)

    %Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], Physics:-Psigma[3])))

    (9)

    We can now set the value of C[2]

    C[2] := -(1/2)*Commutator(A, B)

    I*Physics:-Psigma[2]

    (10)

    and enter the formula that involves only multi-index summation

    H := sum(lambda^n*F(A, B, n)/factorial(n), n = 2)+2*lambda*(sum(lambda^(n+m)*F(A, F(B, C[2], n), m)/(factorial(n)*factorial(m)), n+m = 1))+3*lambda^2*(sum(lambda^(n+m+k)*F(A, F(B, F(C[2], C[3], n), m), k)/(factorial(n)*factorial(m)*factorial(k)), n+m+k = 0))

    (1/2)*lambda^2*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], Physics:-Psigma[3]))+2*lambda*(lambda*%Commutator(Physics:-Psigma[1], I*Physics:-Psigma[2])+lambda*%Commutator(Physics:-Psigma[3], I*Physics:-Psigma[2]))+3*lambda^2*C[3]

    (11)

    from where we compute C[3] by solving for it the coefficient of lambda^2; since, due to the multi-index summation, this expression already contains lambda^2 as a factor, we can solve H for C[3] directly:

    C[3] = Simplify(solve(H, C[3]))

    C[3] = (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]

    (12)

    In order to generalize the formula for H for higher powers of lambda, the right-hand side of the multi-index summation limit can be expressed in terms of an abstract N, and H transformed into a mapping:

     

    H := unapply(sum(lambda^n*F(A, B, n)/factorial(n), n = N)+2*lambda*(sum(lambda^(n+m)*F(A, F(B, C[2], n), m)/(factorial(n)*factorial(m)), n+m = N-1))+3*lambda^2*(sum(lambda^(n+m+k)*F(A, F(B, F(C[2], C[3], n), m), k)/(factorial(n)*factorial(m)*factorial(k)), n+m+k = N-2)), N)

    proc (N) options operator, arrow; lambda^N*F(Physics:-Psigma[1], Physics:-Psigma[3], N)/factorial(N)+2*lambda*(sum(Physics:-`*`(Physics:-`^`(lambda, n+m), Physics:-`^`(Physics:-`*`(factorial(n), factorial(m)), -1), F(Physics:-Psigma[1], F(Physics:-Psigma[3], I*Physics:-Psigma[2], n), m)), n+m = N-1))+3*lambda^2*(sum(Physics:-`*`(Physics:-`^`(lambda, n+m+k), Physics:-`^`(Physics:-`*`(factorial(n), factorial(m), factorial(k)), -1), F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(I*Physics:-Psigma[2], C[3], n), m), k)), n+m+k = N-2)) end proc

    (13)

    Now we have

    H(0)

    Physics:-Psigma[3]

    (14)

    H(1)

    lambda*%Commutator(Physics:-Psigma[1], Physics:-Psigma[3])+(2*I)*lambda*Physics:-Psigma[2]

    (15)

    The following is already equal to (11)

    H(2)

    (1/2)*lambda^2*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], Physics:-Psigma[3]))+2*lambda*(lambda*%Commutator(Physics:-Psigma[1], I*Physics:-Psigma[2])+lambda*%Commutator(Physics:-Psigma[3], I*Physics:-Psigma[2]))+3*lambda^2*C[3]

    (16)

    In this way, we can reproduce the results published in the literature for the coefficients of the Zassenhaus formula up to C[5] by adding two more multi-index sums to (13). Unassign C first

    unassign(C)

    H := unapply(sum(lambda^n*F(A, B, n)/factorial(n), n = N)+2*lambda*(sum(lambda^(n+m)*F(A, F(B, C[2], n), m)/(factorial(n)*factorial(m)), n+m = N-1))+3*lambda^2*(sum(lambda^(n+m+k)*F(A, F(B, F(C[2], C[3], n), m), k)/(factorial(n)*factorial(m)*factorial(k)), n+m+k = N-2))+4*lambda^3*(sum(lambda^(n+m+k+l)*F(A, F(B, F(C[2], F(C[3], C[4], n), m), k), l)/(factorial(n)*factorial(m)*factorial(k)*factorial(l)), n+m+k+l = N-3))+5*lambda^4*(sum(lambda^(n+m+k+l+p)*F(A, F(B, F(C[2], F(C[3], F(C[4], C[5], n), m), k), l), p)/(factorial(n)*factorial(m)*factorial(k)*factorial(l)*factorial(p)), n+m+k+l+p = N-4)), N)

    We compute now up to C[5] in one go

    for j to 4 do C[j+1] := Simplify(solve(H(j), C[j+1])) end do

    I*Physics:-Psigma[2]

     

    (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]

     

    -((1/3)*I)*((3*I)*Physics:-Psigma[1]+(6*I)*Physics:-Psigma[3]-4*Physics:-Psigma[2])

     

    -(8/9)*Physics:-Psigma[1]-(158/45)*Physics:-Psigma[3]-((16/3)*I)*Physics:-Psigma[2]

    (17)

    The nested-commutator expression solved in the last step for C[5] is

    H(4)

    (1/24)*lambda^4*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], Physics:-Psigma[3]))))+2*lambda*((1/6)*lambda^3*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], I*Physics:-Psigma[2])))+(1/2)*lambda^3*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[3], I*Physics:-Psigma[2])))+(1/2)*lambda^3*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[3], %Commutator(Physics:-Psigma[3], I*Physics:-Psigma[2])))+(1/6)*lambda^3*%Commutator(Physics:-Psigma[3], %Commutator(Physics:-Psigma[3], %Commutator(Physics:-Psigma[3], I*Physics:-Psigma[2]))))+3*lambda^2*((1/2)*lambda^2*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]))+lambda^2*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[3], (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]))+(1/2)*lambda^2*%Commutator(Physics:-Psigma[3], %Commutator(Physics:-Psigma[3], (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]))+lambda^2*%Commutator(Physics:-Psigma[1], %Commutator(I*Physics:-Psigma[2], (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]))+lambda^2*%Commutator(Physics:-Psigma[3], %Commutator(I*Physics:-Psigma[2], (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]))+(1/2)*lambda^2*%Commutator(I*Physics:-Psigma[2], %Commutator(I*Physics:-Psigma[2], (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1])))+4*lambda^3*(lambda*%Commutator(Physics:-Psigma[1], -((1/3)*I)*((3*I)*Physics:-Psigma[1]+(6*I)*Physics:-Psigma[3]-4*Physics:-Psigma[2]))+lambda*%Commutator(Physics:-Psigma[3], -((1/3)*I)*((3*I)*Physics:-Psigma[1]+(6*I)*Physics:-Psigma[3]-4*Physics:-Psigma[2]))+lambda*%Commutator(I*Physics:-Psigma[2], -((1/3)*I)*((3*I)*Physics:-Psigma[1]+(6*I)*Physics:-Psigma[3]-4*Physics:-Psigma[2]))+lambda*%Commutator((2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1], -((1/3)*I)*((3*I)*Physics:-Psigma[1]+(6*I)*Physics:-Psigma[3]-4*Physics:-Psigma[2])))+5*lambda^4*(-(8/9)*Physics:-Psigma[1]-(158/45)*Physics:-Psigma[3]-((16/3)*I)*Physics:-Psigma[2])

    (18)

With everything understood, we now want to extend these results, generalizing them into an approach that computes an arbitrarily large coefficient C[n], and then use that generalization to compute all the Zassenhaus coefficients up to C[10]. Typing the formula for H for higher powers of lambda is, however, prone to typographical mistakes. The following is a program, written in the Maple programming language, that produces these formulas for an arbitrary integer power of lambda:

    Formula := proc(A, B, C, Q)
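    The body of Formula is collapsed in this export of the worksheet. As a hedged reconstruction (an assumption on my part, not the author's original code), the following sketch reproduces the pattern visible in (20), (21) and (22), reusing the global sequence summation_indices defined a few lines below in (19):

    Formula := proc(A, B, C, Q)
        local i, j, ops, G, idx, S;
        global summation_indices, lambda, F, N;
        idx := [summation_indices];
        # leading term: sum(lambda^n*F(A, B, n)/n!, n = N)
        S := sum(lambda^idx[1]*F(A, B, idx[1])/factorial(idx[1]), idx[1] = N);
        for i from 2 to Q do
            # operands entering the nested F's: A, B, C[2], ..., C[i]
            ops := [A, B, seq(C[j], j = 2 .. i)];
            # innermost F pairs the last two operands; wrap outwards, one index per level
            G := F(ops[i], ops[i + 1], idx[1]);
            for j from 2 to i do
                G := F(ops[i + 1 - j], G, idx[j]);
            end do;
            # i-th term: coefficient i*lambda^(i-1), multi-index sum constrained to N - (i-1)
            S := S + i*lambda^(i - 1)*sum(
                     lambda^convert(idx[1 .. i], `+`)*G
                     /convert(map(factorial, idx[1 .. i]), `*`),
                     convert(idx[1 .. i], `+`) = N - (i - 1));
        end do;
        return S;
    end proc: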

     

    This Formula program uses a sequence of summation indices, with as many indices as the order of the coefficient C[n] we want to compute; in this case we need 10 of them

    summation_indices := n, m, k, l, p, q, r, s, t, u

    n, m, k, l, p, q, r, s, t, u

    (19)

    To avoid interference from the results computed in the loop (17), unassign C again

    unassign(C)

     

    Now the formulas typed by hand a few lines above, used to compute each of C[2], C[3] and C[5], are respectively constructed by the computer

    Formula(A, B, C, 2)

    sum(lambda^n*F(Physics:-Psigma[1], Physics:-Psigma[3], n)/factorial(n), n = N)+2*lambda*(sum(lambda^(n+m)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], C[2], n), m)/(factorial(n)*factorial(m)), n+m = N-1))

    (20)

    Formula(A, B, C, 3)

    sum(lambda^n*F(Physics:-Psigma[1], Physics:-Psigma[3], n)/factorial(n), n = N)+2*lambda*(sum(lambda^(n+m)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], C[2], n), m)/(factorial(n)*factorial(m)), n+m = N-1))+3*lambda^2*(sum(lambda^(n+m+k)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], C[3], n), m), k)/(factorial(n)*factorial(m)*factorial(k)), n+m+k = N-2))

    (21)

    Formula(A, B, C, 5)

    sum(lambda^n*F(Physics:-Psigma[1], Physics:-Psigma[3], n)/factorial(n), n = N)+2*lambda*(sum(lambda^(n+m)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], C[2], n), m)/(factorial(n)*factorial(m)), n+m = N-1))+3*lambda^2*(sum(lambda^(n+m+k)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], C[3], n), m), k)/(factorial(n)*factorial(m)*factorial(k)), n+m+k = N-2))+4*lambda^3*(sum(lambda^(n+m+k+l)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], C[4], n), m), k), l)/(factorial(n)*factorial(m)*factorial(k)*factorial(l)), n+m+k+l = N-3))+5*lambda^4*(sum(lambda^(n+m+k+l+p)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], C[5], n), m), k), l), p)/(factorial(n)*factorial(l)*factorial(m)*factorial(k)*factorial(p)), n+m+k+l+p = N-4))

    (22)

     

    Now construct the formula for C[10] and make it a mapping with respect to N, as was done for C[5] after (16)

    H := unapply(Formula(A, B, C, 10), N)

    proc (N) options operator, arrow; sum(lambda^n*F(Physics:-Psigma[1], Physics:-Psigma[3], n)/factorial(n), n = N)+2*lambda*(sum(lambda^(n+m)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], C[2], n), m)/(factorial(n)*factorial(m)), n+m = N-1))+3*lambda^2*(sum(lambda^(n+m+k)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], C[3], n), m), k)/(factorial(n)*factorial(m)*factorial(k)), n+m+k = N-2))+4*lambda^3*(sum(lambda^(n+m+k+l)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], C[4], n), m), k), l)/(factorial(n)*factorial(m)*factorial(k)*factorial(l)), n+m+k+l = N-3))+5*lambda^4*(sum(lambda^(n+m+k+l+p)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], C[5], n), m), k), l), p)/(factorial(n)*factorial(l)*factorial(m)*factorial(k)*factorial(p)), n+m+k+l+p = N-4))+6*lambda^5*(sum(lambda^(n+m+k+l+p+q)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], F(C[5], C[6], n), m), k), l), p), q)/(factorial(n)*factorial(l)*factorial(m)*factorial(p)*factorial(k)*factorial(q)), n+m+k+l+p+q = N-5))+7*lambda^6*(sum(lambda^(n+m+k+l+p+q+r)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], F(C[5], F(C[6], C[7], n), m), k), l), p), q), r)/(factorial(n)*factorial(l)*factorial(m)*factorial(p)*factorial(q)*factorial(k)*factorial(r)), n+m+k+l+p+q+r = N-6))+8*lambda^7*(sum(lambda^(n+m+k+l+p+q+r+s)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], F(C[5], F(C[6], F(C[7], C[8], n), m), k), l), p), q), r), s)/(factorial(n)*factorial(r)*factorial(l)*factorial(m)*factorial(p)*factorial(q)*factorial(k)*factorial(s)), n+m+k+l+p+q+r+s = N-7))+9*lambda^8*(sum(lambda^(n+m+k+l+p+q+r+s+t)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], F(C[5], F(C[6], F(C[7], F(C[8], C[9], n), m), k), l), p), q), r), s), t)/(factorial(s)*factorial(n)*factorial(r)*factorial(l)*factorial(m)*factorial(p)*factorial(q)*factorial(k)*factorial(t)), n+m+k+l+p+q+r+s+t = N-8))+10*lambda^9*(sum(lambda^(n+m+k+l+p+q+r+s+t+u)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], F(C[5], F(C[6], F(C[7], F(C[8], F(C[9], C[10], n), m), k), l), p), q), r), s), t), u)/(factorial(s)*factorial(n)*factorial(t)*factorial(r)*factorial(l)*factorial(m)*factorial(p)*factorial(q)*factorial(k)*factorial(u)), n+m+k+l+p+q+r+s+t+u = N-9)) end proc

    (23)

    Compute now the coefficients of the Zassenhaus formula up to C[10] all in one go

    for j to 9 do C[j+1] := Simplify(solve(H(j), C[j+1])) end do

    I*Physics:-Psigma[2]

     

    (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]

     

    -((1/3)*I)*((3*I)*Physics:-Psigma[1]+(6*I)*Physics:-Psigma[3]-4*Physics:-Psigma[2])

     

    -(8/9)*Physics:-Psigma[1]-(158/45)*Physics:-Psigma[3]-((16/3)*I)*Physics:-Psigma[2]

     

    (1030/81)*Physics:-Psigma[1]-(8/81)*Physics:-Psigma[3]+((1078/405)*I)*Physics:-Psigma[2]

     

    ((11792/243)*I)*Physics:-Psigma[2]+(358576/42525)*Physics:-Psigma[1]+(12952/135)*Physics:-Psigma[3]

     

    (87277417/492075)*Physics:-Psigma[1]+(833718196/820125)*Physics:-Psigma[3]+((35837299048/17222625)*I)*Physics:-Psigma[2]

     

    -((449018539801088/104627446875)*I)*Physics:-Psigma[2]-(263697596812424/996451875)*Physics:-Psigma[1]+(84178036928794306/2197176384375)*Physics:-Psigma[3]

     

    (3226624781090887605597040906/21022858292748046875)*Physics:-Psigma[1]+(200495118165066770268119656/200217698026171875)*Physics:-Psigma[3]+((2185211616689851230363020476/4204571658549609375)*I)*Physics:-Psigma[2]

    (24)

    Notes: with the material above you can compute higher-order coefficients C[n]. To do that, you need to:

    1. 

    Unassign C, as done above on two occasions, to avoid interference from the results just computed.

    2. 

    Indicate more summation indices in the sequence summation_indices in (19), as many as the maximum value of n in C[n].

    3. 

    Keep in mind that the growth in size and complexity is significant, with each C[n] taking significantly more time to compute than all of the previous ones.

    4. 

    Re-execute the input line (23) and the loop (24); a sketch combining these steps is shown below.
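    For instance, a hedged sketch of these four steps for a hypothetical computation up to C[12] (the two extra index names v and w are illustrative choices, not taken from the worksheet):

    unassign(C):
    summation_indices := n, m, k, l, p, q, r, s, t, u, v, w:
    H := unapply(Formula(A, B, C, 12), N):
    for j to 11 do C[j+1] := Simplify(solve(H(j), C[j+1])) end do;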



    Download The_Zassenhause_formula_and_the_Pauli_Matrices.mw

    Edgardo S. Cheb-Terrab
    Physics, Differential Equations and Mathematical Functions, Maplesoft

    A common question to our tech support team is about completing the square for a univariate polynomial of even degree, and how to do that in Maple. We’ve put together a solution that we think you’ll find useful. If you have any alternative methods or improvements to our code, let us know!

    restart;
    
    # Procedure to complete the square for a univariate
    # polynomial of even degree.
    
    CompleteSquare := proc( f :: depends( 'And'( polynom, 'satisfies'( g -> ( type( degree(g,x), even ) ) ) ) ), x :: name )
    
           local a, g, k, n, phi, P, Q, r, S, T, u:
    
           # Degree and parameters of polynomial.
           n := degree( f, x ):
           P := indets( f, name ) minus { x }:
    
           # General polynomial of square plus constant.
           g := add( a[k] * x^k, k=0..n/2 )^2 + r:
    
           # Solve for unknowns in g.
           Q := indets( g, name ) minus P:
    
           S := map( expand, { solve( identity( expand( f - g ) = 0, x ), Q ) } ):
    
           if numelems( S ) = 0 then
                  return NULL:
           end if:
    
           # Evaluate g at the solution, and re-write square term
           # so that the polynomial within the square is monic.
    
           phi := u -> lcoeff(op(1,u),x)^2 * (expand(op(1,u)/lcoeff(op(1,u),x)))^2:  
           T := map( evalindets, map( u -> eval(g,u), S ), `^`(anything,identical(2)), phi ):
    
           return `if`( numelems(T) = 1, T[], T ):
    
    end proc:
    
    
    # Examples.
    
    CompleteSquare( x^2 + 3 * x + 2, x );
    CompleteSquare( a * x^2 + b * x + c, x );
    CompleteSquare( 4 * x^8 + 8 * x^6 + 4 * x^4 - 246, x );
    
    m, n := 4, 10;
    r := rand(-10..10):
    for i from 1 to n do
           CompleteSquare( r() * ( x^(m/2) + randpoly( x, degree=m-1, coeffs=r ) )^2 + r(), x );
    end do;
    
    # Compare quadratic examples with Student:-Precalculus:-CompleteSquare()
    # (which is restricted to quadratic expressions).
    
    Student:-Precalculus:-CompleteSquare( x^2 + 3 * x + 2 );
    Student:-Precalculus:-CompleteSquare( a * x^2 + b * x + c );
    

    For a higher-order example:

    f := 5*x^4 - 70*x^3 + 365*x^2 - 840*x + 721;
    g := CompleteSquare( f, x ); # 5 * ( x^2 - 7 * x + 12 )^2 + 1
    h := evalindets( f, `*`, factor ); # 5 * (x-3)^2 * (x-4)^2 + 1
    p1 := plot( f, x=0..5, y=-5..5, color=blue ):
    p2 := plots:-pointplot( [ [3,1], [4,1] ], symbol=solidcircle, symbolsize=20, color=red ):
    plots:-display( p1, p2 );
    

    tells us that the minimum value of the expression is 1, and it occurs at x=3 and x=4.
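    As a quick cross-check (an addition here, not part of the original reply), Maple's minimize command reports the same minimum; the location option, assuming it is available in your version, also reports where it is attained:

    f := 5*x^4 - 70*x^3 + 365*x^2 - 840*x + 721:
    minimize( f, x );            # 1
    minimize( f, x, location );  # 1, together with the minimizing points x = 3 and x = 4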

    The Joint Mathematics Meetings are taking place next week (January 16 – 19) in Baltimore, Maryland, U.S.A. This will be the 102nd annual winter meeting of the Mathematical Association of America (MAA) and the 125th annual meeting of the American Mathematical Society (AMS).

    Maplesoft will be exhibiting at booth #501 as well as in the networking area. Please stop by to chat with me and other members of the Maplesoft team, as well as to pick up some free Maplesoft swag or win some prizes.

    This year we will be hosting a hands-on workshop on Maple: A Natural Way to Work with Math

    This special event will take place on Thursday, January 17 at 6:00 -8:00 P.M. in the Holiday Ballroom 4 at the Hilton Baltimore.

     

    There are also several other interesting Maple related talks:

    MYMathApps Tutorials

    MAA General Contributed Paper Session on Mathematics and Technology 

    Wednesday January 16, 2019, 1:00 p.m.-1:55 p.m.

    Room 323, BCC
    Matthew Weihing*, Texas A&M University 
    Philip B Yasskin, Texas A&M University 

     

    The Logic Behind the Turing Bombe's Role in Breaking Enigma. 

    MAA General Contributed Paper Session on Mathematics and Technology 

    Wednesday January 16, 2019, 1:00 p.m.-1:55 p.m.
    Room 323, BCC
    Neil Sigmon*, Radford University 
    Rick Klima, Appalachian State University 

     

    On a software accessible database of faithful representations of Lie algebras. 

    MAA General Contributed Paper Session on Algebra, I 

    Wednesday January 16, 2019, 2:15 p.m.-6:25 p.m.
    Room 348, BCC
    Cailin Foster*, Dixie State University 
     

    Discussion of Various Technical Strategies Used in College Math Teaching. 

    MAA Contributed Paper Session on Open Educational Resources: Combining Technological Tools and Innovative Practices to Improve Student Learning, IV 

    Friday January 18, 2019, 8:00 a.m.-10:55 a.m.
    Room 303, BCC
    Lina Wu*, Borough of Manhattan Community College-The City University of New York 
     

    An Enticing Simulation in Ordinary Differential Equations that predict tangible results. 

    MAA Contributed Paper Session on The Teaching and Learning of Undergraduate Ordinary Differential Equations 

    Friday January 18, 2019, 1:00 p.m.-4:55 p.m.
    Room 324, BCC
    Satyanand Singh*, New York City College of Technology of CUNY 
     

    An Effort to Assess the Impact a Modeling First Approach has in a Traditional Differential Equations Class. 

    AMS Special Session on Using Modeling to Motivate the Study of Differential Equations, I 
    Saturday January 19, 2019, 8:00 a.m.-11:50 a.m.

    Room 336, BCC
    Rosemary C Farley*, Manhattan College 
    Patrice G Tiffany, Manhattan College 

     

    If you are attending the Joint Math meetings this week and plan on presenting anything on Maple, please feel free to let me know and I'll update this list accordingly.


    See you in Baltimore!

    Daniel

    Maple Product Manager

    Over the holidays I reconnected with an old friend and occasional
    chess partner who, upon hearing I was getting soundly thrashed by run
    of the mill engines, recommended checking out the ChessTempo site.  It
    has online tools for training your chess tactics.  As you attempt to
    solve chess problems your rating is computed depending on how well you
    do.  The chess problems, too, are rated and adjusted as visitors
    attempt them.  This should be familiar to any chess or table-tennis
    player.  Rather than the Elo rating system, the Glicko rating system is
    used.

    You have a choice of the relative difficulty of the problems.
    After attempting a number of easy puzzles and seeing my rating slowly
    climb, I wondered what was the most effective technique to raise my
    rating (the classical blunder).  Attempting higher rated problems would lower my
    solving rate, but this would be compensated by a smaller loss and
    larger gain.  Assuming my actual playing strength is greater than my
    current rating (a misconception common to us patzers), there should be a
    rating that maximizes the rating gain per problem.

    The following Maple module computes the expected rating change
    using the Glicko system.

    Glicko := module()
    
    export DeltaRating
        ,  ExpectedDelta
        ,  Pwin
        ;
    
        # Return the change in rating for a loss and a win
        # for player 1 against player2
        DeltaRating := proc(r1,rd1,r2,rd2)
        local E, K, g, g2, idd, q;
    
            q := ln(10)/400;
            g := rd -> 1/sqrt(1 + 3*q^2*rd^2/Pi^2);
            g2 := g(rd2);
            E := 1/(1+10^(-g2*(r1-r2)/400));
            idd := q^2*(g2^2*E*(1-E));
    
            K := q/(1/rd1^2+idd)*g2;
    
            (K*(0-E), K*(1-E));
    
        end proc:
    
        # Compute the probability of a win
        # for a player with strength s1
        # vs a player with strength s2.
    
        Pwin := proc(s1, s2)
        local p;
            p := 10^((s1-s2)/400);
            p/(1+p);
        end proc:
    
        # Compute the expected rating change for
        # player with strength s1, rating r1 vs a player with true rating r2.
        # The optional rating deviations are rd1 and rd2.
    
        ExpectedDelta := proc(s1,r1,r2,rd1 := 35, rd2 := 35)
        local P, l, w;
            P := Pwin(s1,r2);
            (l,w) := DeltaRating(r1,rd1,r2,rd2);
            P*w + (1-P)*l;
        end proc:
    
    end module:
    

    Assume a player has a rating of 1500 but an actual playing strength of 1700.  Compute the expected rating change for a given puzzle rating, then plot it.  As expected the graph has a peak.

     

    Ept := Glicko:-ExpectedDelta(1700,1500,r2):
    plot(Ept, r2 = 1000 .. 2000);
    

    Compute the optimum problem rating

     

    fsolve(diff(Ept,r2));
    
                         {r2 = 1599.350691}
    

    As your rating improves, you'll want to adjust the rating of the problems (the site doesn't allow that fine tuning). Here we plot the optimum puzzle rating (r2) for a given player rating (r1), assuming the player's strength remains at 1700.

    Ept := Glicko:-ExpectedDelta(1700, r1, r2):
    dEpt := diff(Ept,r2):
    r2vsr1 := r -> fsolve(eval(dEpt,r1=r)):
    plot(r2vsr1, 1000..1680);
    

    Here is a Maple worksheet with the code and computations.

    Glicko.mw

    Later

    After pondering this, I realized there is a more useful way to present the results. The shape of the optimal curve is independent of the user's actual strength. Showing that is trivial: just substitute a symbolic value for the player's strength, offset the ratings from it, and verify that the result does not depend on the strength.

    Ept := Glicko:-ExpectedDelta(S, S+r1, S+r2):
    has(Ept, S);
                        false
    

    Here's the general curve, shifted so the player's strength is 0; r1 and r2 are relative to that.

    r2_r1 := r -> rhs(Optimization:-Maximize(eval(Ept,r1=r), r2=-500..0)[2][]):
    p1 := plot(r2_r1, -500..0, 'numpoints'=30);
    

    Compute and plot the expected points gained when playing the optimal opponent while your rating is r points higher than your strength.

    EptMax := r -> eval(Ept, [r1=r, r2=r2_r1(r)]):
    plot(EptMax, -200..200, 'numpoints'=30, 'labels' = ["r","Ept"]);
    

    When your playing strength matches your rating, the optimal opponent has a relative rating of

    r2_r1(0);
                           -269.86
    

    The expected points you win is

    evalf(EptMax(0));
                           0.00956
    

    Coherent States in Quantum Mechanics

     

    Pascal Szriftgiser1 and Edgardo S. Cheb-Terrab2 

    (1) Laboratoire PhLAM, UMR CNRS 8523, Université Lille 1, F-59655, France

    (2) Maplesoft

     

      

    Coherent states are among the most relevant representations for the state of a quantum system. These states, which form an overcomplete basis, minimize the quantum uncertainty between position x and momentum p (they satisfy the Heisenberg uncertainty principle with equality, and their expectation values satisfy the classical equations of motion). Coherent states are widely used in quantum optics and quantum mechanics in general; they also mathematically characterize the concept of Planck cells. Part of this development is present in Maple 2018.2.1. To reproduce what you see below, however, you need a more recent version, such as the one distributed within the Maplesoft Physics Updates (version 276 or higher). A worksheet with these contents is linked at the end of this post.

    Definition and the basics

     

    with(Physics)

     

    Set a quantum operator A and corresponding annihilation / creation operators

    Setup(quantumoperators = A)

    [quantumoperators = {A}]

    (1.1)

    am := Annihilation(A)

    `#msup(mi("a"),mo("&uminus0;"))`

    (1.2)

    ap := Creation(A)

    `#msup(mi("a"),mo("&plus;"))`

    (1.3)

    In what follows, on the left-hand sides the product operator used is `*`, which represents, but does not perform, the attachment of Bras, Kets and operators. On the right-hand sides the product operator is `.`, which performs the attachments. Since the introduction of Physics in the Maple system, we have that

    am*Ket(A, n) = am.Ket(A, n)

    Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(A, n)) = n^(1/2)*Physics:-Ket(A, n-1)

    (1.4)

    (%Bracket = Bracket)(Bra(A, n), Ket(A, n))

    %Bracket(Physics:-Bra(A, n), Physics:-Ket(A, n)) = 1

    (1.5)

    (%Bracket = Bracket)(Bra(A, n), Ket(A, m))

    %Bracket(Physics:-Bra(A, n), Physics:-Ket(A, m)) = Physics:-KroneckerDelta[m, n]

    (1.6)

    New development during 2018: coherent states, the eigenstates of the annihilation operator a^-, with all of their properties, are now understood as such by the system

    am*Ket(am, alpha) = am.Ket(am, alpha)

    Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = alpha*Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)

    (1.7)

    Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) is an eigenket of `#msup(mi("a",mathcolor = "olive"),mo("&uminus0;",mathcolor = "olive"))` but not of  `#msup(mi("a",mathcolor = "olive"),mo("&plus;",mathcolor = "olive"))`

    ap.Ket(am, alpha)

    Physics:-`.`(`#msup(mi("a"),mo("&plus;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha))

    (1.8)

    The norm of these states is equal to 1

    (%Bracket = Bracket)(Bra(am, alpha), Ket(am, alpha))

    %Bracket(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, alpha), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = 1

    (1.9)

    These states, however, are not orthonormal as the occupation number states Ket(A, n) are, and since a^- is not Hermitian, its eigenvalues are not real but complex numbers. Instead of (1.6), we now have

    (%Bracket = Bracket)(Bra(am, alpha), Ket(am, beta))

    %Bracket(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, alpha), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, beta)) = exp(-(1/2)*abs(alpha)^2-(1/2)*abs(beta)^2+conjugate(alpha)*beta)

    (1.10)

    At alpha = beta,

    simplify(eval(%Bracket(Physics[Bra](`#msup(mi("a"),mo("&uminus0;"))`, alpha), Physics[Ket](`#msup(mi("a"),mo("&uminus0;"))`, beta)) = exp(-(1/2)*abs(alpha)^2-(1/2)*abs(beta)^2+conjugate(alpha)*beta), alpha = beta))

    1 = 1

    (1.11)

    Their scalar product with the occupation number states Ket(A, m), using the inert %Bracket on the left-hand side and the active Bracket on the other side:

    (%Bracket = Bracket)(Bra(A, n), Ket(am, alpha))

    %Bracket(Physics:-Bra(A, n), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = exp(-(1/2)*abs(alpha)^2)*alpha^n/factorial(n)^(1/2)

    (1.12)

    The expansion of coherent states into occupation number states, first representing the product operation using `*`, then performing the attachments replacing `*` by `.`

    Projector(Ket(A, n), dimension = infinity)

    Sum(Physics:-`*`(Physics:-Ket(A, n), Physics:-Bra(A, n)), n = 0 .. infinity)

    (1.13)

    Ket(am, alpha) = (Sum(Physics[`*`](Physics[Ket](A, n), Physics[Bra](A, n)), n = 0 .. infinity))*Ket(am, alpha)

    Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Physics:-`*`(Sum(Physics:-`*`(Physics:-Ket(A, n), Physics:-Bra(A, n)), n = 0 .. infinity), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha))

    (1.14)

    eval(Physics[Ket](`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Physics[`*`](Sum(Physics[`*`](Physics[Ket](A, n), Physics[Bra](A, n)), n = 0 .. infinity), Physics[Ket](`#msup(mi("a"),mo("&uminus0;"))`, alpha)), `*` = `.`)

    Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Physics:-Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity)

    (1.15)

    Hide now the ket label. When in doubt, input show to see the Kets with their labels explicitly shown

    Setup(hide = true)

    `* Partial match of  '`*hide*`' against keyword '`*hideketlabel*`' `

     

    _______________________________________________________

     

    [hideketlabel = true]

    (1.16)

    Define eigenkets of the annihilation operator, with two different eigenvalues for experimentation

    `K__&alpha;` := Ket(am, alpha)

    Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)

    (1.17)

    `K__&beta;` := Ket(am, beta)

    Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, beta)

    (1.18)

    Because the properties of coherent states are now known to the system, the following computations proceed automatically. The left-hand sides use the `*`, while the right-hand sides use the `.`

    (`*` = `.`)(Dagger(`K__&alpha;`), ap, am, `K__&alpha;`)

    Physics:-`*`(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, alpha), `#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = abs(alpha)^2

    (1.19)

    (`*` = `.`)(Dagger(`K__&alpha;`), ap+am, `K__&alpha;`)

    Physics:-`*`(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, alpha), `#msup(mi("a"),mo("&plus;"))`+`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = conjugate(alpha)+alpha

    (1.20)

    (`*` = `.`)(Dagger(`K__&alpha;`), ap-am, `K__&alpha;`)

    Physics:-`*`(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, alpha), `#msup(mi("a"),mo("&plus;"))`-`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = conjugate(alpha)-alpha

    (1.21)

    (`*` = `.`)(Dagger(`K__&alpha;`), (ap+am)^2, `K__&alpha;`)

    Physics:-`*`(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, alpha), Physics:-`^`(`#msup(mi("a"),mo("&plus;"))`+`#msup(mi("a"),mo("&uminus0;"))`, 2), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = conjugate(alpha)^2+2*abs(alpha)^2+1+alpha^2

    (1.22)

    Properties of Coherent states

     

    The mean value of the occupation number N

     

     

    The occupation number operator N is given by

    N := ap.am

    Physics:-`*`(`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`)

    (2.1.1)

    N is Hermitian

    %Dagger(N) = N

    %Dagger(Physics:-`*`(`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`)) = Physics:-`*`(`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`)

    (2.1.2)

    value(%Dagger(Physics[`*`](`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`)) = Physics[`*`](`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`))

    Physics:-`*`(`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`) = Physics:-`*`(`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`)

    (2.1.3)

    N is diagonal in the Ket(A, n) basis of the Fock (occupation number) space

    (`*` = `.`)(Bra(A, n), N, Ket(A, p))

    Physics:-`*`(Physics:-Bra(A, n), `#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(A, p)) = p*Physics:-KroneckerDelta[n, p]

    (2.1.4)
    • 

    The mean value of N in a coherent state Ket(a^-, alpha) ≡ Ket(alpha)

    Bracket(%N)[alpha] = %Bracket(Bra(am, alpha), N, Ket(am, alpha))

    Physics:-Bracket(%N)[alpha] = %Bracket(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, alpha), Physics:-`*`(`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha))

    (2.1.5)

    value(Physics[Bracket](%N)[alpha] = %Bracket(Physics[Bra](`#msup(mi("a"),mo("&uminus0;"))`, alpha), Physics[`*`](`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`), Physics[Ket](`#msup(mi("a"),mo("&uminus0;"))`, alpha)))

    Physics:-Bracket(%N)[alpha] = abs(alpha)^2

    (2.1.6)

    The mean value of N^2

    Bracket(%N^2)[alpha] = %Bracket(Bra(am, alpha), N^2, Ket(am, alpha))

    Physics:-Bracket(%N^2)[alpha] = %Bracket(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, alpha), Physics:-`^`(Physics:-`*`(`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`), 2), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha))

    (2.1.7)

    value(Physics[Bracket](%N^2)[alpha] = %Bracket(Physics[Bra](`#msup(mi("a"),mo("&uminus0;"))`, alpha), Physics[`^`](Physics[`*`](`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`), 2), Physics[Ket](`#msup(mi("a"),mo("&uminus0;"))`, alpha)))

    Physics:-Bracket(%N^2)[alpha] = abs(alpha)^4+abs(alpha)^2

    (2.1.8)

    The standard deviation ΔN = sqrt(Bracket(%N^2)[alpha] - Bracket(%N)[alpha]^2) for a state Ket(a^-, alpha)

    ((Physics[Bracket](%N^2)[alpha] = abs(alpha)^4+abs(alpha)^2)-(Physics[Bracket](%N)[alpha] = abs(alpha)^2)^2)^(1/2)

    (-Physics:-Bracket(%N)[alpha]^2+Physics:-Bracket(%N^2)[alpha])^(1/2) = abs(alpha)

    (2.1.9)

    In conclusion, a coherent state |alpha> has a finite spreading ΔN = abs(alpha). Coherent states are good approximations for the states of a laser, where the laser intensity I is proportional to the mean value of the photon number, I ∝ Bracket(%N)[alpha] = abs(alpha)^2, and so the intensity fluctuation is proportional to sqrt(I) = abs(alpha); the relative fluctuation ΔN/Bracket(%N)[alpha] = 1/abs(alpha) thus becomes negligible for intense beams.

    • 

    The mean value of the occupation number N in an occupation number state Ket(A, n) ≡ Ket(n)

    Bracket(%N)[n] = %Bracket(Bra(A, n), N, Ket(A, n))

    Physics:-Bracket(%N)[n] = %Bracket(Physics:-Bra(A, n), Physics:-`*`(`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`), Physics:-Ket(A, n))

    (2.1.10)

    value(Physics[Bracket](%N)[n] = %Bracket(Bra(A, n), Physics[`*`](`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`), Ket(A, n)))

    Physics:-Bracket(%N)[n] = n

    (2.1.11)

    The mean value of the occupation number N in a state Ket(A, n) is thus n itself, as expected, since Ket(A, n) represents a (Fock space) state of n (quasi-)particles. Accordingly,

    Bracket(%N^2)[n] = %Bracket(Bra(A, n), N^2, Ket(A, n))

    Physics:-Bracket(%N^2)[n] = %Bracket(Physics:-Bra(A, n), Physics:-`^`(Physics:-`*`(`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`), 2), Physics:-Ket(A, n))

    (2.1.12)

    value(Physics[Bracket](%N^2)[n] = %Bracket(Bra(A, n), Physics[`^`](Physics[`*`](`#msup(mi("a"),mo("&plus;"))`, `#msup(mi("a"),mo("&uminus0;"))`), 2), Ket(A, n)))

    Physics:-Bracket(%N^2)[n] = n^2

    (2.1.13)

    The standard deviation ΔN = sqrt(Bracket(%N^2)[n] - Bracket(%N)[n]^2) for an occupation number state Ket(A, n) is thus

    ((Physics[Bracket](%N^2)[n] = n^2)-(Physics[Bracket](%N)[n] = n)^2)^(1/2)

    (-Physics:-Bracket(%N)[n]^2+Physics:-Bracket(%N^2)[n])^(1/2) = 0

    (2.1.14)

    That is, in a Fock state ΔN = 0: there is no intensity fluctuation.

    "a^(-)| alpha > = alpha| alpha >"

     

     

    The specific properties of coherent states that are implemented can be derived explicitly, starting from the projection of Ket(a^-, alpha) into the Ket(A, m) basis of occupation number states and the definition of a^- as the operator that annihilates the vacuum, a^- Ket(A, 0) = 0

    Ket(am, alpha) = (Sum(Physics[`*`](Ket(A, n), Bra(A, n)), n = 0 .. infinity))*Ket(am, alpha)

    Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Physics:-`*`(Sum(Physics:-`*`(Physics:-Ket(A, n), Physics:-Bra(A, n)), n = 0 .. infinity), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha))

    (2.2.1)

    eval(Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Physics[`*`](Sum(Physics[`*`](Ket(A, n), Bra(A, n)), n = 0 .. infinity), Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)), `*` = `.`)

    Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Physics:-Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity)

    (2.2.2)

    To derive `#msup(mi("a",mathcolor = "olive"),mo("&uminus0;",mathcolor = "olive"))`*Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = alpha*Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) from the formula above, start multiplying by `#msup(mi("a",mathcolor = "olive"),mo("&uminus0;",mathcolor = "olive"))`

    am*(Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity))

    Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Physics:-Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity))

    (2.2.3)

    In view of `#msup(mi("a",mathcolor = "olive"),mo("&uminus0;",mathcolor = "olive"))`*Ket(A, 0) = 0, discard the first term of the sum

    subs(0 = 1, Physics[`*`](`#msup(mi("a"),mo("&uminus0;"))`, Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Physics[`*`](`#msup(mi("a"),mo("&uminus0;"))`, Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity)))

    Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Physics:-Ket(A, n)/factorial(n)^(1/2), n = 1 .. infinity))

    (2.2.4)

    Change variables n = k+1; in the result, rename k -> n

    subs(k = n, PDEtools:-dchange(n = k+1, Physics[`*`](`#msup(mi("a"),mo("&uminus0;"))`, Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Physics[`*`](`#msup(mi("a"),mo("&uminus0;"))`, Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Ket(A, n)/factorial(n)^(1/2), n = 1 .. infinity)), `@`(combine, simplify)))

    Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Sum(exp(-(1/2)*abs(alpha)^2)*Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(A, n+1))*alpha^(n+1)/(factorial(n)^(1/2)*(n+1)^(1/2)), n = 0 .. infinity)

    (2.2.5)

    Activate the product a^-*Ket(A, n+1) by replacing, in the right-hand side, the product operator `*` by `.`

    lhs(Physics[`*`](`#msup(mi("a"),mo("&uminus0;"))`, Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Sum(exp(-(1/2)*abs(alpha)^2)*Physics[`*`](`#msup(mi("a"),mo("&uminus0;"))`, Ket(A, n+1))*alpha^(n+1)/(factorial(n)^(1/2)*(n+1)^(1/2)), n = 0 .. infinity)) = eval(rhs(Physics[`*`](`#msup(mi("a"),mo("&uminus0;"))`, Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Sum(exp(-(1/2)*abs(alpha)^2)*Physics[`*`](`#msup(mi("a"),mo("&uminus0;"))`, Ket(A, n+1))*alpha^(n+1)/(factorial(n)^(1/2)*(n+1)^(1/2)), n = 0 .. infinity)), `*` = `.`)

    Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Sum(exp(-(1/2)*abs(alpha)^2)*Physics:-Ket(A, n)*alpha^(n+1)/factorial(n)^(1/2), n = 0 .. infinity)

    (2.2.6)

    By inspection the right-hand side of (2.2.6) is equal to alpha times the right-hand side of (2.2.2)

    alpha*(Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity))-(Physics[`*`](`#msup(mi("a"),mo("&uminus0;"))`, Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Sum(exp(-(1/2)*abs(alpha)^2)*Ket(A, n)*alpha^(n+1)/factorial(n)^(1/2), n = 0 .. infinity))

    alpha*Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)-Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = alpha*(Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Physics:-Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity))-(Sum(exp(-(1/2)*abs(alpha)^2)*Physics:-Ket(A, n)*alpha^(n+1)/factorial(n)^(1/2), n = 0 .. infinity))

    (2.2.7)

    combine(alpha*Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)-Physics[`*`](`#msup(mi("a"),mo("&uminus0;"))`, Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = alpha*(Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity))-(Sum(exp(-(1/2)*abs(alpha)^2)*Ket(A, n)*alpha^(n+1)/factorial(n)^(1/2), n = 0 .. infinity)))

    alpha*Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)-Physics:-`*`(`#msup(mi("a"),mo("&uminus0;"))`, Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = 0

    (2.2.8)
    • 

    Overview of the coherent states distribution

     

    Consider the projection of Ket(a^-, alpha) over an occupation number state Ket(A, n)

    %Bracket(Bra(A, n), lhs(Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Physics[`*`](Sum(Physics[`*`](Ket(A, n), Bra(A, n)), n = 0 .. infinity), Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)))) = Bracket(Bra(A, n), rhs(Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Physics[`*`](Sum(Physics[`*`](Ket(A, n), Bra(A, n)), n = 0 .. infinity), Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha))))

    %Bracket(Physics:-Bra(A, n), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = exp(-(1/2)*abs(alpha)^2)*alpha^n/factorial(n)^(1/2)

    (2.2.9)

    An overview of the distribution of coherent states Ket(a^-, alpha) for a sample of values of n and alpha is thus as follows

    plot3d(rhs(%Bracket(Bra(A, n), Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = exp(-(1/2)*abs(alpha)^2)*alpha^n/factorial(n)^(1/2)), n = 0 .. 25, alpha = 0 .. 10, axes = boxed, caption = lhs(%Bracket(Bra(A, n), Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = exp(-(1/2)*abs(alpha)^2)*alpha^n/factorial(n)^(1/2)))

     

    The distribution can be explored for ranges of values of n and alpha using Explore

    NA := Typesetting:-Typeset(Bracket(Bra(A, n), Ket(am, alpha)))

    Explore(plot(rhs(%Bracket(Bra(A, n), Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = exp(-(1/2)*abs(alpha)^2)*alpha^n/factorial(n)^(1/2)), n = 0 .. 200, view = 0 .. .6, labels = [n, NA]), parameters = [alpha = 0 .. 10], initialvalues = [alpha = 5])

    "a^(+)| alpha >= (&PartialD;)/(&PartialD;alpha) | alpha >+(alpha)/2 | alpha >"

       

    exp(-(1/2)*abs(alpha)^2) * exp(alpha*a^+) |0> = |alpha>

       

     exp(alpha*`#msup(mi("a",mathcolor = "olive"),mo("&plus;",mathcolor = "olive"))`-conjugate(alpha)*a)" | 0 >" = "| alpha >"

       

    <beta|alpha> = exp(conjugate(beta)*alpha - (1/2)*abs(beta)^2 - (1/2)*abs(alpha)^2)

     


    The identity in the title can be derived starting again from the projection of a coherent state Ket(a^-, alpha) into the Ket(A, m) basis of occupation number states

    Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity)

    Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Physics:-Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity)

    (2.6.1)

    Dagger(subs({alpha = beta, n = k}, Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity)))

    Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, beta) = Sum(exp(-(1/2)*abs(beta)^2)*conjugate(beta)^k*Physics:-Bra(A, k)/factorial(k)^(1/2), k = 0 .. infinity)

    (2.6.2)

    Taking the `*` product of these two expressions

    (Bra(`#msup(mi("a"),mo("&uminus0;"))`, beta) = Sum(exp(-(1/2)*abs(beta)^2)*conjugate(beta)^k*Bra(A, k)/factorial(k)^(1/2), k = 0 .. infinity))*(Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha) = Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity))

    Physics:-`*`(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, beta), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Physics:-`*`(Sum(exp(-(1/2)*abs(beta)^2)*conjugate(beta)^k*Physics:-Bra(A, k)/factorial(k)^(1/2), k = 0 .. infinity), Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Physics:-Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity))

    (2.6.3)

    Perform the attachment of Bras and Kets on the right-hand side by replacing `*` by `.`, evaluating the sum and simplifying the result

    lhs(Physics[`*`](Bra(`#msup(mi("a"),mo("&uminus0;"))`, beta), Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Physics[`*`](Sum(exp(-(1/2)*abs(beta)^2)*conjugate(beta)^k*Bra(A, k)/factorial(k)^(1/2), k = 0 .. infinity), Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity))) = simplify(value(eval(rhs(Physics[`*`](Bra(`#msup(mi("a"),mo("&uminus0;"))`, beta), Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = Physics[`*`](Sum(exp(-(1/2)*abs(beta)^2)*conjugate(beta)^k*Bra(A, k)/factorial(k)^(1/2), k = 0 .. infinity), Sum(exp(-(1/2)*abs(alpha)^2)*alpha^n*Ket(A, n)/factorial(n)^(1/2), n = 0 .. infinity))), `*` = `.`)))

    Physics:-`*`(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, beta), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = exp(-(1/2)*abs(beta)^2-(1/2)*abs(alpha)^2+alpha*conjugate(beta))

    (2.6.4)
    • 

    Overview of the real and imaginary parts of <beta|alpha>

     

    In most cases, alpha and beta are complex-valued numbers. Below, the plots assume that alpha and beta are both real. To take the general case into account, the possibility of tuning a phase difference theta between alpha and beta is explicitly added, so that (2.6.4) becomes

     

    %Bracket(Bra(am, beta), Ket(am, alpha)) = subs(conjugate(beta) = conjugate(beta)*exp(I*theta), rhs(Physics[`*`](Bra(`#msup(mi("a"),mo("&uminus0;"))`, beta), Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = exp(conjugate(beta)*alpha-(1/2)*abs(beta)^2-(1/2)*abs(alpha)^2)))

    %Bracket(Physics:-Bra(`#msup(mi("a"),mo("&uminus0;"))`, beta), Physics:-Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = exp(-(1/2)*abs(beta)^2-(1/2)*abs(alpha)^2+alpha*conjugate(beta)*exp(I*theta))

    (2.6.5)

    Explore(plot3d(Re(rhs(%Bracket(Bra(`#msup(mi("a"),mo("&uminus0;"))`, beta), Ket(`#msup(mi("a"),mo("&uminus0;"))`, alpha)) = exp(-(1/2)*abs(beta)^2-(1/2)*abs(alpha)^2+alpha*conjugate(beta)*exp(I*theta)))), alpha = -10 .. 10, beta = -10 .. 10, view = -1 .. 1, orientation = [-12, 74, 3], axes = boxed), parameters = [theta = 0 .. 2*Pi], initialvalues = [theta = (1/10)*Pi])

     

     

    Download Coherent_States_in_Quantum_Mechanics.mw

     

    Edgardo S. Cheb-Terrab
    Physics, Differential Equations and Mathematical Functions, Maplesoft
    Editor, Computer Physics Communications

     

    Recently I examined a piece of code of mine in an attempt to possibly convert it to another language, as it is numeric code and as such runs slower in Maple than I'd like. In doing this I ran across the following strangeness, here reproduced in a minimal working example (file attached).

    Consider this trivial integral:

    x1:=Int(3.52*10^8, ti = 0 .. 1);
    (4)

    and also this one:

    x2:=sin(2*Pi*x1);
    (5)

    I can then evaluate (4) and take sine(2Pi * the evaluation of (4)):

    value(x1);
    (6)

    sin(2*Pi*(6));
    (7)

    Hmm... let's evaluate x2, which should be the same, right?

    value(x2);
    0.00000556012229902952                                                              (8)

    Oddly enough, it is not. Now, the reason they are not 0 is round-off error (running the same sheet with Digits := 40 confirms that); but at the same time, (6) is in fact exact. More oddly, if I wrap the input leading to (7) in evalf() then it outputs 0., i.e. exact and correct. I suspect the problem must lie in the different treatments of Pi in the three cases.

    I am not ready to call this behaviour a bug, since I can see that different ways of evaluating what is essentially the same expression lead to a different round-off. What strikes me is the relatively large error in this case. The sheet was run with Digits set to 15 (my default, set in my .mapleinit); I initially expected somewhat more accuracy in the sine function than a mere 6 digits or so. On second thought, however, what seems to be going on is that the evaluation of the integral must be numerical, and the large number of cycles limits the accuracy; if one replaces 3.52E8 (a float) with 352E6 (an exact number) then (7) becomes 0 (exact) while (8) remains at the above value. Why

    evalf(sin(2*Pi*(6)))

    yields an exact value I do not quite understand.
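    For what it's worth, here is a compact, hedged reconstruction of the comparison (it mirrors the steps described above rather than reproducing the attached worksheet):

    Digits := 15:
    sin(2*Pi*(352*10^6));          # exact integer multiple of 2*Pi: simplifies to 0
    x1 := Int(3.52*10^8, ti = 0 .. 1):
    value(sin(2*Pi*x1));           # the float route: shows the round-off reported in (8)
    evalf(sin(2*Pi*value(x1)));    # the evalf route that, as noted above, returns 0.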

    So, caveat computor once again. This example, while it may look contrived, actually arose from a real-world case I was dealing with (the 352E6 is a frequency in Hz; in my actual application it can vary in time, hence the integration to get the number of cycles in a given time interval). One annoyance here is that the "right" way to do this is not obvious, at least not to me.

    M.D.

    integration_test.mw

    Tensor product of Quantum States using Dirac's Bra-Ket Notation - 2018

     

    There has been increasing interest in the details of the Maple implementation of tensor products using Dirac's notation, developed during 2018. Tensor products of Hilbert spaces and related quantum states are relevant in a myriad of situations in quantum mechanics, and in particular regarding quantum information. Below is an up-to-date presentation of the design and implementation, with input/output and examples, organized in four sections:

     

    • 

    The basic ideas and design implemented

    • 

    Tensor product notation and the hideketlabel option

    • 

    Entangled States and the Bell basis

    • 

    Entangled States, Operators and Projectors

     

    Part of this development is present in Maple 2018.2. To reproduce what you see below, however, you need a more recent version, such as the one distributed within the Maplesoft Physics Updates (version 272 or higher).

     

    The basic ideas and design implemented

     

     

    Suppose A and B are quantum operators and Ket(A, n), Ket(B, m) are, respectively, their eigenkets. The following has worked since the introduction of the Physics package in Maple

    with(Physics)

    Setup(op = {A, B})

    `* Partial match of  '`*op*`' against keyword '`*quantumoperators*`' `

     

    _______________________________________________________

     

    [quantumoperators = {A, B}]

    (1.1)

    A*Ket(A, alpha) = A.Ket(A, alpha)

    Physics:-`*`(A, Physics:-Ket(A, alpha)) = alpha*Physics:-Ket(A, alpha)

    (1.2)

    B*Ket(B, beta) = B.Ket(B, beta)

    Physics:-`*`(B, Physics:-Ket(B, beta)) = beta*Physics:-Ket(B, beta)

    (1.3)

    In previous Maple releases, all quantum operators were assumed to act on the same Hilbert space. New: suppose that A and B act on different, disjointed, Hilbert spaces.

     

    1) To represent that situation, a new keyword in Setup, hilbertspaces, is introduced. With it you can indicate the quantum operators that act on each Hilbert space, as in hilbertspaces = {{A}, {B}}, with the meaning that the operator A acts on one Hilbert space while B acts on another one.

     

    The Hilbert space thus has no particular name (as in 1, 2, 3 ...) and is instead identified by the operators that act on it. There can be one or more, and operators acting on one space can act on other spaces too. The disjointedspaces keyword is a synonym for hilbertspaces and hereafter all Hilbert spaces are assumed to be disjointed.

     

    NOTE: noncommutative quantum operators acting on disjointed spaces commute between themselves, so after setting - for instance - hilbertspaces = {{A}, {B}}, automatically, A and B become quantum operators satisfying (see comment (ii) on page 156 of ref. [1]) the relation below; a short sketch illustrating this follows.

     

    "[A,B][-]=0"

     

    2) Products of Kets and Bras that belong to different Hilbert spaces are understood as tensor products satisfying (see the footnote on page 154 of ref. [1]):

     

    Ket(A, alpha) ⊗ Ket(B, beta) = Ket(B, beta) ⊗ Ket(A, alpha)

     

    Bra(A, alpha) ⊗ Ket(B, beta) = Ket(B, beta) ⊗ Bra(A, alpha)

     

    while

    Bra(A, alpha)*Ket(A, alpha) <> Ket(A, alpha)*Bra(A, alpha)
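    Continuing that sketch, the two rules above can be probed directly: a product of Kets from different spaces is re-sorted into a canonical tensor-product order, while a Bra and a Ket of the same space keep their relative order.

    Ket(A, alpha)*Ket(B, beta) - Ket(B, beta)*Ket(A, alpha);    # expected: 0, the same tensor product
    Bra(A, alpha)*Ket(A, alpha), Ket(A, alpha)*Bra(A, alpha);   # different objects: a bracket vs a projector-type product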

     

    3) All the operators of one Hilbert space act transparently over operators, Bras and Kets of other Hilbert spaces. For example

     

    A*Ket(B, n) = A*Ket(B, n)

      

    and the same for the Dagger of this equation, that is

    Bra(B, n)*Dagger(A) = Bra(B, n)*Dagger(A)

     

      

    Hence, when we write the left-hand sides of the two equations above and press enter, they are automatically rewritten (returned) as the right-hand sides.

     

    4) Every other quantum operator, set as such using Setup , and not indicated as acting on any particular Hilbert space, is assumed to act on all spaces.

     

    5) Notation:

     

    • 

    Tensor products formed with operators, or Bras and Kets, belonging to different Hilbert spaces (set as such using Setup and the keyword hilbertspaces) are now displayed with the symbol ⊗ in between, as in Ket(A, n) ⊗ Ket(B, n) instead of Ket(A, n)*Ket(B, n), and A ⊗ B instead of A*B. The product of an operator A of one space and a Ket of another space, Ket(B, n), however, is displayed as A Ket(B, n), without ⊗.

    • 

    A new Setup option, hideketlabel, makes all the labels in Kets and Bras hidden at the time of displaying Kets, Bras and Brackets, so when you set it by entering Setup(hideketlabel = true),

    Ket(A, m, n, l)

    is displayed as

    Ket(m, n, l)

     

      

    This is the notation frequently used when working with angular momentum or in quantum information, where tensor products of Hilbert spaces are used.

    Design details

       

    Tensor product notation and the hideketlabel option

     

     

    According to the design section, now set two disjointed Hilbert spaces, with operators A, C acting on one of them and B, C on the other one (you can think of C = A ⊗ B)

     

    Setup(hilbertspaces = {{A, C}, {B, C}})

    [disjointedspaces = {{A, C}, {B, C}}]

    (2.1)

     

    Consider a tensor product of Kets, each of which belongs to one of these different spaces; note the new notation using ⊗

    Ket(A, 1)*Ket(B, 0)

    Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0))

    (2.2)
    • 

    As explained in the Details of the design section, the ordering of the Hilbert spaces in tensor products is now preserved: Bras (Kets) of the first space always appear before Bras (Kets) of the second space. For example, construct a projector into the state (2.2)

    Physics[`*`](Physics[Ket](A, 1), Physics[Ket](B, 0))*Dagger(Physics[`*`](Physics[Ket](A, 1), Physics[Ket](B, 0)))

    Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0), Physics:-Bra(A, 1), Physics:-Bra(B, 0))

    (2.3)

    You see that in the product of Bras, and also in the product of Kets, A comes first, then B.


    Remark: some textbooks prefer a dyadic style for sorting the operands in products of Bras and Kets that belong to different spaces, for example, (Ket(A, 1)*Bra(A, 1)) ⊗ (Ket(B, 0)*Bra(B, 0)) instead of the projector sorting style of (2.3). Both reorderings of Kets and Bras are mathematically equivalent.

     

    • 

    Because that ordering is preserved, one can now hide the label of Bras and Kets without ambiguity, as is usual in textbooks (e.g. in Quantum Information). For that purpose use the new keyword option hideketlabel

    Setup(hide = true)

    `* Partial match of  '`*hide*`' against keyword '`*hideketlabel*`' `

     

    _______________________________________________________

     

    [hideketlabel = true]

    (2.4)

    The display for (2.3) is now

    Physics[`*`](Physics[Ket](A, 1), Physics[Ket](B, 0), Physics[Bra](A, 1), Physics[Bra](B, 0))

    Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0), Physics:-Bra(A, 1), Physics:-Bra(B, 0))

    (2.5)

    Important: this new option only hides the label while displaying the Bra or Ket. The label, however, is still there, both in the input and in the output. One can "see" what is behind this new display using show, which works the same way as it does in the context of CompactDisplay. The actual content being displayed in (2.5) is thus (2.3)

    show

    Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0), Physics:-Bra(A, 1), Physics:-Bra(B, 0))

    (2.6)

    Operators of each of these spaces act on their eigenkets as usual. Here we distribute over both sides of an equation, using `*` on the left-hand side, to see the product uncomputed, and `.` on the right-hand side to see it computed:

    (`*` = `.`)(A, Ket(A, 1))

    Physics:-`*`(A, Physics:-Ket(A, 1)) = Physics:-Ket(A, 1)

    (2.7)

    (`*` = `.`)(A, Ket(A, 0))

    Physics:-`*`(A, Physics:-Ket(A, 0)) = 0

    (2.8)
    • 

    The tensor product of operators belonging to different Hilbert spaces is also displayed using ⊗

    A*B

    Physics:-`*`(A, B)

    (2.9)
    • 

     As mentioned in the preceding design section, using the commutativity between operators, Bras and Kets that belong to different Hilbert spaces, within a product, operators are placed contiguous to the Kets and Bras belonging to the space on which the operator acts. For example, consider the delayed product represented using the star `*` operator

    'Physics[`*`](A, B)*Physics[`*`](Physics[Ket](A, 1), Physics[Ket](B, 0), Physics[Bra](A, 1), Physics[Bra](B, 0))'

    Physics:-`*`(A, B, Physics:-Ket(A, 1), Physics:-Ket(B, 0), Physics:-Bra(A, 1), Physics:-Bra(B, 0))

    (2.10)

    Release the product

    %

    Physics:-`*`(A, Physics:-Ket(A, 1), B, Physics:-Ket(B, 0), Physics:-Bra(A, 1), Physics:-Bra(B, 0))

    (2.11)

    The same operation but now using the dot product `.` operator. Start by delaying the operation

    'Physics[`*`](A, B).Physics[`*`](Physics[Ket](A, 1), Physics[Ket](B, 0), Physics[Bra](A, 1), Physics[Bra](B, 0))'

    Parse:-ConvertTo1D, "invalid input %1", Typesetting:-mprintslash([A*B.Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0), Physics:-Bra(A, 1), Physics:-Bra(B, 0))], [A*B.Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0), Physics:-Bra(A, 1), Physics:-Bra(B, 0))])

    (2.12)

    Recalling that this product is mathematically the same as (2.11), and that

    B.Ket(B, 0)

    0

    (2.13)

    by releasing the delayed product (2.12) we have

    Typesetting[delayDotProduct](Physics[`*`](A, B), Physics[`*`](Ket(A, 1), Ket(B, 0), Bra(A, 1), Bra(B, 0)))

    0

    (2.14)

    Reset hideketlabel

    Setup(hideketlabel = false)

    [hideketlabel = false]

    (2.15)

    Implementation details

       

    Entangled States and the Bell basis

     

     

    With the introduction of disjointed Hilbert spaces in Maple it is possible to represent entangled quantum states in a simple way, basically as done with paper and pencil.

     

    Recall that the Hilbert spaces set at this point are

    Setup(hilbert)

    `* Partial match of  '`*hilbert*`' against keyword '`*hilbertspaces*`' `

     

    _______________________________________________________

     

    [disjointedspaces = {{A, C}, {B, C}}]

    (3.1)

    where C acts on the tensor product of the spaces where A and B act. A state of C can then always be written as

    Ket(C, m, n) = Sum(Sum(M[j, p]*Ket(A, j)*Ket(B, p), j), p)

    Physics:-Ket(C, m, n) = Sum(Sum(M[j, p]*Physics:-`*`(Physics:-Ket(A, j), Physics:-Ket(B, p)), j), p)

    (3.2)

    where M[j, p] is a matrix of complex coefficients. Bra states of C are formed as usual by taking the Dagger

    Dagger(Ket(C, m, n) = Sum(Sum(M[j, p]*Physics[`*`](Ket(A, j), Ket(B, p)), j), p))

    Physics:-Bra(C, m, n) = Sum(Sum(conjugate(M[j, p])*Physics:-`*`(Physics:-Bra(A, j), Physics:-Bra(B, p)), j), p)

    (3.3)

     

    • 

    By definition, all states Ket(C, alpha, beta) that can be written exactly as Ket(A, alpha) ⊗ Ket(B, beta), that is, as the product of an arbitrary state of the subspace A and another of the subspace B, are product states, and all the other ones are entangled states. Entanglement is a property that is independent of the basis Ket(A, j) ⊗ Ket(B, p) used in (3.2).

    The physical interpretation is the standard one: when the state of a system constituted by two subsystems A and B is represented by a product state, the properties of the subsystem A are well defined and all given by Ket(A, alpha), while those of the subsystem B are given by Ket(B, beta). When the system is in an entangled state, one typically cannot assign definite properties to the individual subsystems A or B; each subsystem has no independent reality.

    To determine whether a state Ket(C, alpha, beta) is entangled or not, it then suffices to check the rank R of the matrix M[j, p] (see LinearAlgebra:-Rank): when R = 1 the state is a product state, otherwise it is an entangled state. When the state being analyzed belongs to the tensor product of two subspaces, R = 1 is equivalent to having the determinant of M[j, p] equal to 0. The condition R = 1, however, is more general, and suffices to determine whether a state is a product state also on a Hilbert space that is the tensor product of three or more subspaces, H = H^(1) ⊗ H^(2) ⊗ H^(3) ⊗ ... ⊗ H^(n), in which case the matrix M will have more rows and columns and a determinant equal to 0 would only guarantee the possibility of factoring out one Ket.
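    As a plain linear-algebra illustration of this criterion (an addition, not part of the original worksheet), the coefficient matrix of a product state has rank 1, while that of an entangled Bell-type state has rank 2:

    with(LinearAlgebra):
    M_product := Matrix([[1, 0], [0, 0]]):                   # |0>|0>: M[j,p] nonzero only for j = p = 0
    M_bell    := Matrix([[1/sqrt(2), 0], [0, 1/sqrt(2)]]):   # (|0>|0> + |1>|1>)/sqrt(2)
    Rank(M_product), Rank(M_bell);                           # 1, 2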

     

    Example: the Bell basis for a system of two qubits

     

    Consider a 2-dimensional space of states acted upon by the operator A, and let B act upon another, disjointed, Hilbert space that is a replica of the Hilbert space on which A acts. Set the dimensions of A, B and C respectively equal to 2, 2 and 2x2 (see Setup)

    Setup(quantumbasisdimension = {A = 2, B = 2, C[1] = 2, C[2] = 2})

    [quantumbasisdimension = {A = 2, B = 2, C[1] = 2, C[2] = 2}]

    (3.4)

    The system C with the two subsystems A and B represents a two-qubit system. The standard basis for C can be constructed in a natural way from the bases of Kets of A and B, {Ket(A, 0), Ket(A, 1), Ket(B, 0), Ket(B, 1)}, by taking their tensor products:

    seq(seq(Ket(A, j)*Ket(B, k), k = 0 .. 1), j = 0 .. 1)

    Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 0)), Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 1)), Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0)), Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 1))

    (3.5)

    Set a more mathematical display for the imaginary unit

    interface(imaginaryunit = i)

     

    The four entangled Bell states also form a basis of C and are given by

    Setup(op = `&Bscr;`)

    * Partial match of 'op' against keyword 'quantumoperators'

     

    _______________________________________________________

     

    [quantumoperators = {`&Bscr;`, A, B, C, E}]

    (3.6)

    Ket(`&Bscr;`, 0) = (Ket(A, 0)*Ket(B, 0)+Ket(A, 1)*Ket(B, 1))/('sqrt')(2)

    Physics:-Ket(`&Bscr;`, 0) = (Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 0))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 1)))/sqrt(2)

    (3.7)

    Ket(`&Bscr;`, 1) = (Ket(A, 0)*Ket(B, 1)+Ket(A, 1)*Ket(B, 0))/('sqrt')(2)

    Physics:-Ket(`&Bscr;`, 1) = (Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 1))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0)))/sqrt(2)

    (3.8)

    Ket(`&Bscr;`, 2) = i*(Ket(A, 0)*Ket(B, 1)-Ket(A, 1)*Ket(B, 0))/('sqrt')(2)

    Physics:-Ket(`&Bscr;`, 2) = I*(Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 1))-Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0)))/sqrt(2)

    (3.9)

    Ket(`&Bscr;`, 3) = (Ket(A, 0)*Ket(B, 0)-Ket(A, 1)*Ket(B, 1))/('sqrt')(2)

    Physics:-Ket(`&Bscr;`, 3) = (Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 0))-Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 1)))/sqrt(2)

    (3.10)

    There is no standard notation for denoting a Bell state (the linear combinations on the right-hand sides above). The convention used here relates to the definition of the Bell states in terms of the Pauli matrices shown below. Regardless of the convention used, this basis is orthonormal, which can be verified by taking dot products, for example:

    Dagger(Ket(`&Bscr;`, 0) = (Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1)))/sqrt(2)).(Ket(`&Bscr;`, 0) = (Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1)))/sqrt(2))

    1 = 1

    (3.11)

    In steps, perform the same operation but using the star (`*`) operator, so that the contraction is represented but not performed

    Dagger(Ket(`&Bscr;`, 0) = (Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1)))/sqrt(2))*(Ket(`&Bscr;`, 0) = (Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1)))/sqrt(2))

    Physics:-`*`(Physics:-Bra(`&Bscr;`, 0), Physics:-Ket(`&Bscr;`, 0)) = (1/2)*Physics:-`*`(Physics:-`*`(Physics:-Bra(A, 0), Physics:-Bra(B, 0))+Physics:-`*`(Physics:-Bra(A, 1), Physics:-Bra(B, 1)), Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 0))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 1)))

    (3.12)

    Evaluate now the result at `*` = `.`, that is transforming the star product into a dot product

    eval(Physics[`*`](Bra(`&Bscr;`, 0), Ket(`&Bscr;`, 0)) = (1/2)*Physics[`*`](Physics[`*`](Bra(A, 0), Bra(B, 0))+Physics[`*`](Bra(A, 1), Bra(B, 1)), Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1))), `*` = `.`)

    1 = 1

    (3.13)

    Dagger(Ket(`&Bscr;`, 0) = (Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1)))/sqrt(2))*(Ket(`&Bscr;`, 1) = (Physics[`*`](Ket(A, 0), Ket(B, 1))+Physics[`*`](Ket(A, 1), Ket(B, 0)))/sqrt(2))

    Physics:-`*`(Physics:-Bra(`&Bscr;`, 0), Physics:-Ket(`&Bscr;`, 1)) = (1/2)*Physics:-`*`(Physics:-`*`(Physics:-Bra(A, 0), Physics:-Bra(B, 0))+Physics:-`*`(Physics:-Bra(A, 1), Physics:-Bra(B, 1)), Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 1))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0)))

    (3.14)

    eval(Physics[`*`](Bra(`&Bscr;`, 0), Ket(`&Bscr;`, 1)) = (1/2)*Physics[`*`](Physics[`*`](Bra(A, 0), Bra(B, 0))+Physics[`*`](Bra(A, 1), Bra(B, 1)), Physics[`*`](Ket(A, 0), Ket(B, 1))+Physics[`*`](Ket(A, 1), Ket(B, 0))), `*` = `.`)

    0 = 0

    (3.15)
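    The same orthonormality can be cross-checked outside the bracket formalism, writing the Bell states as explicit 4-component Vectors. This is only a verification sketch: the names B0, B1, B2, B3 and the basis ordering |A,0>|B,0>, |A,0>|B,1>, |A,1>|B,0>, |A,1>|B,1> are choices made here.

    with(LinearAlgebra):

    # The four Bell states (3.7)-(3.10) as explicit 4-component column Vectors
    B0 := Vector([1, 0, 0, 1])/sqrt(2):
    B1 := Vector([0, 1, 1, 0])/sqrt(2):
    B2 := I*Vector([0, 1, -1, 0])/sqrt(2):
    B3 := Vector([1, 0, 0, -1])/sqrt(2):

    bell := [B0, B1, B2, B3]:

    # Gram matrix of the four Bell vectors; orthonormality <=> the 4 x 4 identity
    simplify(Matrix(4, 4, (i, j) -> HermitianTranspose(bell[i]) . bell[j]));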

    The Bell basis and its relation with the Pauli matrices

     

    The Bell basis can be constructed starting from Ket(`&Bscr;`, 0) using the Pauli matrices sigma[j]. For that purpose, introduce a Vector representation for Ket(B, j),

    Physics:-Ket(`&Bscr;`, 0)

    (3.16)

    Ket(B, 0) = Vector([1, 0]), Ket(B, 1) = Vector([0, 1])

    Physics:-Ket(B, 0) = Vector[column](%id = 18446744078301209294), Physics:-Ket(B, 1) = Vector[column](%id = 18446744078301209414)

    (3.17)

    Multiplying Ket(B, 0) by each of the sigma[j] Pauli matrices and performing the matrix operations, we have

    "[seq(Psigma[j] . ?[1], j=1..3)]"

    [Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 0)) = Physics:-Psigma[1].Vector[column](%id = 18446744078301209294), Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(B, 0)) = Physics:-Psigma[2].Vector[column](%id = 18446744078301209294), Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(B, 0)) = Physics:-Psigma[3].Vector[column](%id = 18446744078301209294)]

    (3.18)

    "map(u -> lhs(u) =Library:-PerformMatrixOperations(rhs(u)),?)"

    [Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 0)) = Vector[column](%id = 18446744078376366918), Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(B, 0)) = Vector[column](%id = 18446744078376368838), Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(B, 0)) = Vector[column](%id = 18446744078376358606)]

    (3.19)

    In this result we see that sigma[1] and sigma[2] flip the state, transforming Ket(B, 0) into Ket(B, 1); sigma[2] also multiplies the state by the imaginary unit I, while sigma[3] leaves the state Ket(B, 0) unchanged.
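    The same action can be cross-checked with explicit 2 x 2 matrices and column Vectors, outside the Physics notation; this is only a verification sketch, where sigma1, sigma2, sigma3, ket0 and ket1 are names introduced here for the standard Pauli matrices and the two basis vectors.

    # Standard Pauli matrices
    sigma1 := Matrix([[0, 1], [1, 0]]):
    sigma2 := Matrix([[0, -I], [I, 0]]):
    sigma3 := Matrix([[1, 0], [0, -1]]):

    ket0 := Vector([1, 0]):   # plays the role of Ket(B, 0)
    ket1 := Vector([0, 1]):   # plays the role of Ket(B, 1)

    sigma1 . ket0;   # = ket1        (flips the state)
    sigma2 . ket0;   # = I * ket1    (flips and multiplies by I)
    sigma3 . ket0;   # = ket0        (leaves the state unchanged)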

    We can rewrite all of that by removing from (3.19) the Vector representations introduced in (3.17). For that purpose, create a list of substitution equations, replacing the Vectors by the Kets

    "map(rhs = lhs,[?, i *~ ?])"

    [Vector[column](%id = 18446744078301209294) = Physics:-Ket(B, 0), Vector[column](%id = 18446744078301209414) = Physics:-Ket(B, 1), Vector[column](%id = 18446744078376351494) = I*Physics:-Ket(B, 0), Vector[column](%id = 18446744078376351734) = I*Physics:-Ket(B, 1)]

    (3.20)

    So the action of sigma[j] on Ket(B, 0) is given by

    "Library:-SubstituteMatrix(?,?)"

    [Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 0)) = Physics:-Ket(B, 1), Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(B, 0)) = I*Physics:-Ket(B, 1), Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(B, 0)) = Physics:-Ket(B, 0)]

    (3.21)

    For Ket(B, 1), the same operations result in

    "[seq(Psigma[j] . ?[2], j=1..3)]"

    [Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 1)) = Physics:-Psigma[1].Vector[column](%id = 18446744078301209414), Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(B, 1)) = Physics:-Psigma[2].Vector[column](%id = 18446744078301209414), Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(B, 1)) = Physics:-Psigma[3].Vector[column](%id = 18446744078301209414)]

    (3.22)

    "map(u -> lhs(u) =Library:-PerformMatrixOperations(rhs(u)),?)"

    [Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 1)) = Vector[column](%id = 18446744078464860518), Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(B, 1)) = Vector[column](%id = 18446744078464862438), Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(B, 1)) = Vector[column](%id = 18446744078464856182)]

    (3.23)

    "Library:-SubstituteMatrix(?,?)"

    [Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 1)) = Physics:-Ket(B, 0), Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(B, 1)) = -I*Physics:-Ket(B, 0), Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(B, 1)) = -Physics:-Ket(B, 1)]

    (3.24)

    To obtain the other three Bell states using the results (3.21) and (3.24), indicate to the system that the Pauli matrices operate on the subspace where B operates

    Setup(hilbert = {{B, C, Psigma}})

    * Partial match of 'hilbert' against keyword 'hilbertspaces'

     

    _______________________________________________________

     

    [disjointedspaces = {{A, C}, {B, C, Physics:-Psigma}}]

    (3.25)

     

    Multiplying Ket(`&Bscr;`, 0) given in (3.7) by each of the three sigma[j] we get the other three Bell states

    Ket(`&Bscr;`, 0) = (Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1)))/sqrt(2)

    Physics:-Ket(`&Bscr;`, 0) = (1/2)*2^(1/2)*(Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 0))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 1)))

    (3.26)

    Psigma[1]*(Ket(`&Bscr;`, 0) = (Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1)))/sqrt(2))

    Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*Physics:-`*`(Physics:-Psigma[1], Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 0))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 1)))

    (3.27)

    Substitute in this result the first equations of (3.21) and (3.24)

    [Physics[`*`](Physics[Psigma][1], Ket(B, 0)) = Ket(B, 1), Physics[`*`](Physics[Psigma][2], Ket(B, 0)) = I*Ket(B, 1), Physics[`*`](Physics[Psigma][3], Ket(B, 0)) = Ket(B, 0)][1], [Physics[`*`](Physics[Psigma][1], Ket(B, 1)) = Ket(B, 0), Physics[`*`](Physics[Psigma][2], Ket(B, 1)) = -I*Ket(B, 0), Physics[`*`](Physics[Psigma][3], Ket(B, 1)) = -Ket(B, 1)][1]

    Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 0)) = Physics:-Ket(B, 1), Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 1)) = Physics:-Ket(B, 0)

    (3.28)

    map(rhs = lhs, [Physics[`*`](Physics[Psigma][1], Ket(B, 0)) = Ket(B, 1), Physics[`*`](Physics[Psigma][1], Ket(B, 1)) = Ket(B, 0)])

    [Physics:-Ket(B, 1) = Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 0)), Physics:-Ket(B, 0) = Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 1))]

    (3.29)

    subs([Ket(B, 1) = Physics[`*`](Physics[Psigma][1], Ket(B, 0)), Ket(B, 0) = Physics[`*`](Physics[Psigma][1], Ket(B, 1))], Physics[`*`](Physics[Psigma][1], Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*Physics[`*`](Physics[Psigma][1], Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1))))

    Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*Physics:-`*`(Physics:-Psigma[1], Physics:-`*`(Physics:-Ket(A, 0), Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 1)))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(B, 0))))

    (3.30)

    factor(Simplify(Physics[`*`](Physics[Psigma][1], Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*Physics[`*`](Physics[Psigma][1], Physics[`*`](Ket(A, 0), Physics[`*`](Physics[Psigma][1], Ket(B, 1)))+Physics[`*`](Ket(A, 1), Physics[`*`](Physics[Psigma][1], Ket(B, 0))))))

    Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*(Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 1))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0)))

    (3.31)

    This is Ket(`&Bscr;`, 1) defined in (3.8)

    Ket(`&Bscr;`, 1) = (Physics[`*`](Ket(A, 0), Ket(B, 1))+Physics[`*`](Ket(A, 1), Ket(B, 0)))/sqrt(2)

    Physics:-Ket(`&Bscr;`, 1) = (1/2)*2^(1/2)*(Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 1))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0)))

    (3.32)

    (Physics[`*`](Physics[Psigma][1], Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*(Physics[`*`](Ket(A, 0), Ket(B, 1))+Physics[`*`](Ket(A, 1), Ket(B, 0))))-(Ket(`&Bscr;`, 1) = (1/2)*2^(1/2)*(Physics[`*`](Ket(A, 0), Ket(B, 1))+Physics[`*`](Ket(A, 1), Ket(B, 0))))

    Physics:-`*`(Physics:-Psigma[1], Physics:-Ket(`&Bscr;`, 0))-Physics:-Ket(`&Bscr;`, 1) = 0

    (3.33)

    Multiplying now by sigma[2] and substituting Ket(B, j) using the 2^nd equations of (3.21) and (3.24) we get Ket(`&Bscr;`, 2)

    Psigma[2]*(Ket(`&Bscr;`, 0) = (Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1)))/sqrt(2))

    Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*Physics:-`*`(Physics:-Psigma[2], Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 0))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 1)))

    (3.34)

    [Physics[`*`](Physics[Psigma][1], Ket(B, 0)) = Ket(B, 1), Physics[`*`](Physics[Psigma][2], Ket(B, 0)) = I*Ket(B, 1), Physics[`*`](Physics[Psigma][3], Ket(B, 0)) = Ket(B, 0)][2], [Physics[`*`](Physics[Psigma][1], Ket(B, 1)) = Ket(B, 0), Physics[`*`](Physics[Psigma][2], Ket(B, 1)) = -I*Ket(B, 0), Physics[`*`](Physics[Psigma][3], Ket(B, 1)) = -Ket(B, 1)][2]

    Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(B, 0)) = I*Physics:-Ket(B, 1), Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(B, 1)) = -I*Physics:-Ket(B, 0)

    (3.35)

    zip(isolate, [Physics[`*`](Physics[Psigma][2], Ket(B, 0)) = I*Ket(B, 1), Physics[`*`](Physics[Psigma][2], Ket(B, 1)) = -I*Ket(B, 0)], [Ket(B, 1), Ket(B, 0)])

    [Physics:-Ket(B, 1) = -I*Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(B, 0)), Physics:-Ket(B, 0) = I*Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(B, 1))]

    (3.36)

    factor(Simplify(subs([Ket(B, 1) = -I*Physics[`*`](Physics[Psigma][2], Ket(B, 0)), Ket(B, 0) = I*Physics[`*`](Physics[Psigma][2], Ket(B, 1))], Physics[`*`](Physics[Psigma][2], Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*Physics[`*`](Physics[Psigma][2], Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1))))))

    Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(`&Bscr;`, 0)) = ((1/2)*I)*2^(1/2)*(Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 1))-Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0)))

    (3.37)

    The above is Ket(`&Bscr;`, 2) defined in (3.9)

    Ket(`&Bscr;`, 2) = I*(Physics[`*`](Ket(A, 0), Ket(B, 1))-Physics[`*`](Ket(A, 1), Ket(B, 0)))/sqrt(2)

    Physics:-Ket(`&Bscr;`, 2) = ((1/2)*I)*2^(1/2)*(Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 1))-Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 0)))

    (3.38)

    Expand((Physics[`*`](Physics[Psigma][2], Ket(`&Bscr;`, 0)) = ((1/2)*I)*2^(1/2)*(Physics[`*`](Ket(A, 0), Ket(B, 1))-Physics[`*`](Ket(A, 1), Ket(B, 0))))-(Ket(`&Bscr;`, 2) = ((1/2)*I)*2^(1/2)*(Physics[`*`](Ket(A, 0), Ket(B, 1))-Physics[`*`](Ket(A, 1), Ket(B, 0)))))

    Physics:-`*`(Physics:-Psigma[2], Physics:-Ket(`&Bscr;`, 0))-Physics:-Ket(`&Bscr;`, 2) = 0

    (3.39)

    Finally, multiplying Ket(`&Bscr;`, 0) by sigma[3]

    Psigma[3]*(Ket(`&Bscr;`, 0) = (Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1)))/sqrt(2))

    Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*Physics:-`*`(Physics:-Psigma[3], Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 0))+Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 1)))

    (3.40)

    Substituting

    [Physics[`*`](Physics[Psigma][1], Ket(B, 0)) = Ket(B, 1), Physics[`*`](Physics[Psigma][2], Ket(B, 0)) = I*Ket(B, 1), Physics[`*`](Physics[Psigma][3], Ket(B, 0)) = Ket(B, 0)][3], [Physics[`*`](Physics[Psigma][1], Ket(B, 1)) = Ket(B, 0), Physics[`*`](Physics[Psigma][2], Ket(B, 1)) = -I*Ket(B, 0), Physics[`*`](Physics[Psigma][3], Ket(B, 1)) = -Ket(B, 1)][3]

    Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(B, 0)) = Physics:-Ket(B, 0), Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(B, 1)) = -Physics:-Ket(B, 1)

    (3.41)

    (rhs = lhs)((Physics[`*`](Physics[Psigma][3], Ket(B, 0)) = Ket(B, 0), Physics[`*`](Physics[Psigma][3], Ket(B, 1)) = -Ket(B, 1))[1]), (rhs = lhs)(-(Physics[`*`](Physics[Psigma][3], Ket(B, 0)) = Ket(B, 0), Physics[`*`](Physics[Psigma][3], Ket(B, 1)) = -Ket(B, 1))[2])

    Physics:-Ket(B, 0) = Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(B, 0)), Physics:-Ket(B, 1) = -Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(B, 1))

    (3.42)

    We get

    factor(Simplify(subs(Ket(B, 0) = Physics[`*`](Physics[Psigma][3], Ket(B, 0)), Ket(B, 1) = -Physics[`*`](Physics[Psigma][3], Ket(B, 1)), Physics[`*`](Physics[Psigma][3], Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*Physics[`*`](Physics[Psigma][3], Physics[`*`](Ket(A, 0), Ket(B, 0))+Physics[`*`](Ket(A, 1), Ket(B, 1))))))

    Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*(Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 0))-Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 1)))

    (3.43)

    which is Ket(`&Bscr;`, 3) defined in (3.10)

    Ket(`&Bscr;`, 3) = (Physics[`*`](Ket(A, 0), Ket(B, 0))-Physics[`*`](Ket(A, 1), Ket(B, 1)))/sqrt(2)

    Physics:-Ket(`&Bscr;`, 3) = (1/2)*2^(1/2)*(Physics:-`*`(Physics:-Ket(A, 0), Physics:-Ket(B, 0))-Physics:-`*`(Physics:-Ket(A, 1), Physics:-Ket(B, 1)))

    (3.44)

    Expand((Physics[`*`](Physics[Psigma][3], Ket(`&Bscr;`, 0)) = (1/2)*2^(1/2)*(Physics[`*`](Ket(A, 0), Ket(B, 0))-Physics[`*`](Ket(A, 1), Ket(B, 1))))-(Ket(`&Bscr;`, 3) = (1/2)*2^(1/2)*(Physics[`*`](Ket(A, 0), Ket(B, 0))-Physics[`*`](Ket(A, 1), Ket(B, 1)))))

    Physics:-`*`(Physics:-Psigma[3], Physics:-Ket(`&Bscr;`, 0))-Physics:-Ket(`&Bscr;`, 3) = 0

    (3.45)

    Entangled States, Operators and Projectors

     

     

    Consider a fourth operator, H, that is Hermitian and acts on the same space as C, and therefore has the same dimension,

    Setup(additionally, hermitian = H, basisdimension = {H[1] = 2, H[2] = 2}, hilbertspaces = {{A, C, H}, {B, C, H}})

    * Partial match of 'hermitian' against keyword 'hermitianoperators'

     

    * Partial match of 'basisdimension' against keyword 'quantumbasisdimension'

     

    _______________________________________________________

     

    [disjointedspaces = {{A, C, H}, {B, C, H}, {B, C, Physics:-Psigma}}, hermitianoperators = {H}, quantumbasisdimension = {A = 2, B = 2, C[1] = 2, C[2] = 2, H[1] = 2, H[2] = 2}]

    (4.1)

    To operate in a practical way with these operators, Bras and Kets, however, bracket rules reflecting their relationships are necessary. From the definition of C as acting on the tensor product of the spaces on which A and B act (see (3.2)), and taking into account the dimensions specified for A, B and C, we have

    Ket(C, a, b) = Sum(Sum(M[a, j, b, p]*Ket(A, j)*Ket(B, p), j = 0 .. 1), p = 0 .. 1)

    Physics:-Ket(C, a, b) = Sum(Sum(M[a, j, b, p]*Physics:-`*`(Physics:-Ket(A, j), Physics:-Ket(B, p)), j = 0 .. 1), p = 0 .. 1)

    (4.2)

    Bra(A, k).(Ket(C, a, b) = Sum(Sum(M[a, j, b, p]*Physics[`*`](Ket(A, j), Ket(B, p)), j = 0 .. 1), p = 0 .. 1))

    Physics:-Bracket(Physics:-Bra(A, k), Physics:-Ket(C, a, b)) = Sum(M[a, k, b, p]*Physics:-Ket(B, p), p = 0 .. 1)

    (4.3)

    Bra(B, k).(Ket(C, a, b) = Sum(Sum(M[a, j, b, p]*Physics[`*`](Ket(A, j), Ket(B, p)), j = 0 .. 1), p = 0 .. 1))

    Physics:-Bracket(Physics:-Bra(B, k), Physics:-Ket(C, a, b)) = Sum(M[a, j, b, k]*Physics:-Ket(A, j), j = 0 .. 1)

    (4.4)

    Bra(A, k).Bra(B, l).(Ket(C, a, b) = Sum(Sum(M[a, j, b, p]*Physics[`*`](Ket(A, j), Ket(B, p)), j = 0 .. 1), p = 0 .. 1))

    Physics:-`*`(Physics:-Bra(A, k), Physics:-Bracket(Physics:-Bra(B, l), Physics:-Ket(C, a, b))) = M[a, k, b, l]

    (4.5)

    The bracket rules for A, B and C are the first two of these. Set these rules so that the system can take them into account

    Setup(Bracket(Bra(A, k), Ket(C, a, b)) = Sum(M[a, k, b, p]*Ket(B, p), p = 0 .. 1), Bracket(Bra(B, k), Ket(C, a, b)) = Sum(M[a, j, b, k]*Ket(A, j), j = 0 .. 1))

    [bracketrules = {%Bracket(%Bra(A, k), %Ket(C, a, b)) = Sum(M[a, k, b, p]*Physics:-Ket(B, p), p = 0 .. 1), %Bracket(%Bra(B, k), %Ket(C, a, b)) = Sum(M[a, j, b, k]*Physics:-Ket(A, j), j = 0 .. 1)}]

    (4.6)

    If we now recompute (4.5), the left-hand side is also computed

    Bracket(C, i, j, H, C, k, l) = `&Hscr;`

    Physics:-Bracket(Physics:-Bra(C, I, j), H, Physics:-Ket(C, k, l)) = `&Hscr;`

    (4.7)

    Bra(A, k).Bra(B, l).(Ket(C, a, b) = Sum(Sum(M[a, j, b, p]*Physics[`*`](Ket(A, j), Ket(B, p)), j = 0 .. 1), p = 0 .. 1))

    M[a, k, b, l] = M[a, k, b, l]

    (4.8)

    Suppose now that you want to compute with the Hermitian operator H, which operates on the same space as C, both using C and using the operators A and B, as in

     

    Bracket(Bra(C, I, j), H, Ket(C, k, l)) = `&Hscr;`[i, j, k, l]

     

    `&otimes;`(Bra(A, I), Bra(B, j))*H*`&otimes;`(Ket(A, k), Ket(B, l)) = H[I, j, k, l]

     

    where `&Hscr;`[i, j, k, l] = H[i, j, k, l] when Ket(C, a, b) is a product (not entangled) state.

     

    For Bracket(Bra(C, i, j), H, Ket(C, k, l)) = `&Hscr;`[i, j, k, l] it suffices to set a bracket rule

    Setup(%Bracket(Bra(C, a, b), H, Ket(C, c, d)) = `&Hscr;`[a, b, c, d], real = `&Hscr;`)

    * Partial match of 'real' against keyword 'realobjects'

     

    _______________________________________________________

     

    [bracketrules = {%Bracket(%Bra(A, k), %Ket(C, a, b)) = Sum(M[a, k, b, p]*Physics:-Ket(B, p), p = 0 .. 1), %Bracket(%Bra(B, k), %Ket(C, a, b)) = Sum(M[a, j, b, k]*Physics:-Ket(A, j), j = 0 .. 1), %Bracket(%Bra(C, a, b), H, %Ket(C, c, d)) = `&Hscr;`[a, b, c, d]}, realobjects = {`&Hscr;`, x, y, z}]

    (4.9)

    After that, the system operates taking the rule into account

    Bra(C, j, k).H.Ket(C, m, n)

    `&Hscr;`[j, k, m, n]

    (4.10)

    Regarding `&otimes;`(Bra(A, i), Bra(B, j))*H*`&otimes;`(Ket(A, k), Ket(B, l)) = H[i, j, k, l], since H belongs to the tensor product of the spaces of A and B, it can be an entangled operator, one that you cannot represent just as a product of one operator acting on A times another one acting on B. A computational representation of the operator Bra(B, j)*H*Ket(A, k) (other than just leaving the expression itself abstract) is therefore not possible in the general case. For that you can use a different feature: define the action of the operator H on the Kets of A and B.

     

    Basically, we want:

    H * Ket(A, k)  ->  H[k]

    H[k] * Ket(B, l)  ->  H[k, l]

    A program sketch for that would be:


    if H is applied to a Ket of A or B then
        if H itself is indexed then
            return H accumulating its indices, followed by the index of the Ket
        else
            return H indexed by the index of the Ket
    otherwise
        return the dot product operation uncomputed, unevaluated

     

    In Maple language, that program-sketch becomes

    H := K ->
        if K::Ket and op(1, K)::'identical(A,B)' then
            if procname::'indexed' then
                if nops(procname) < 4 then
                    H[op(procname), op(2, K)]    # accumulate indices
                else
                    'H . K'
                fi
            else
                H[op(2, K)]
            fi
        else
            'procname . K'
        fi:

     

    Let's see it in action. Start by erasing the Physics performance remember tables, which remember results computed before the definition of H

     

    Library:-Forget()

    H.Ket(A, k)

    H[k]

    (4.11)

    Recalling that H is Hermitian,

    Bra(B, j).H

    H[j]

    (4.12)

    Bra(B, j).H.Ket(A, k)

    H[j, k]

    (4.13)

    Bra(B, j).H.Ket(A, k).Ket(B, l)

    H[j, k, l]

    (4.14)

    Bra(A, i).Bra(B, j).H.Ket(A, k).Ket(B, l)

    H[I, j, k, l]

    (4.15)

    Note that the definition of H as a procedure does not interfere with the bracket rule set for it with Ket(C, a, b), which still works

    Bra(C, i, j).H.Ket(C, k, l)

    `&Hscr;`[I, j, k, l]

    (4.16)

    but the definition takes precedence, so if in it you indicate what to do with a C Ket, that will be taken into account before the bracket rule. Finally, in the typical case the first four results, (4.11), (4.12), (4.13) and (4.14), are operators while (4.15) is a scalar; you can always represent the scalar aspect by substituting the noncommutative operator H by a related scalar.

     

    • 

    You can set the projectors for all these operators / spaces. For example,

    `&Iopf;__A` := Projector(Ket(A, i)); `&Iopf;__B` := Projector(Ket(B, i)); `&Iopf;__C` := Projector(Ket(C, a, b))

    Sum(Physics:-`*`(Physics:-Ket(A, n), Physics:-Bra(A, n)), n = 0 .. 1)

     

    Sum(Physics:-`*`(Physics:-Ket(B, n), Physics:-Bra(B, n)), n = 0 .. 1)

     

    Sum(Sum(Physics:-`*`(Physics:-Ket(C, a, b), Physics:-Bra(C, a, b)), a = 0 .. 1), b = 0 .. 1)

    (4.17)

    Since the algebra rules for computing with eigenkets of A, B and C were already set in (4.6), from the projectors above you can construct any subspace projector, for example

    Bra(A, m).`&Iopf;__C`

    Sum(Sum(Sum(M[a, m, b, p]*Physics:-`*`(Physics:-Ket(B, p), Physics:-Bra(C, a, b)), p = 0 .. 1), a = 0 .. 1), b = 0 .. 1)

    (4.18)

    `&Iopf;__C`.Ket(A, m)

    Sum(Sum(Sum(conjugate(M[a, m, b, p])*Physics:-`*`(Physics:-Ket(C, a, b), Physics:-Bra(B, p)), p = 0 .. 1), a = 0 .. 1), b = 0 .. 1)

    (4.19)

    The conjugate of M[a, m, b, p] in (4.19) is due to the contraction, or attachment, being from the right (compare with (4.18)), that is, with

    Dagger(Ket(C, a, b) = Sum(Sum(M[a, j, b, p]*Physics[`*`](Ket(A, j), Ket(B, p)), j = 0 .. 1), p = 0 .. 1))

    Physics:-Bra(C, a, b) = Sum(Sum(conjugate(M[a, j, b, p])*Physics:-`*`(Physics:-Bra(A, j), Physics:-Bra(B, p)), j = 0 .. 1), p = 0 .. 1)

    (4.20)

     

    The coefficients M[a, m, b, p] satisfy constraints due to the normalization of the Kets of A and B. One can derive these constraints by inserting the unit operator `&Iopf;__C` to construct the identity below

    Sum(Sum(Physics:-`*`(Physics:-Ket(C, a, b), Physics:-Bra(C, a, b)), a = 0 .. 1), b = 0 .. 1)

    (4.21)

    Bra(A, m).Bra(B, n).`&Iopf;__C`.Ket(A, r).Ket(B, s) = Bra(A, m).Bra(B, n).Ket(A, r).Ket(B, s)

    Sum(Sum(conjugate(M[a, r, b, s])*M[a, m, b, n], a = 0 .. 1), b = 0 .. 1) = Physics:-KroneckerDelta[m, r]*Physics:-KroneckerDelta[n, s]

    (4.22)

    Transform this result into a function P to explore the identity further

    P := unapply(subs(Sum = sum, Sum(Sum(conjugate(M[a, r, b, s])*M[a, m, b, n], a = 0 .. 1), b = 0 .. 1) = Physics[KroneckerDelta][m, r]*Physics[KroneckerDelta][n, s]), m, n, r, s)

    proc (m, n, r, s) options operator, arrow; sum(sum(conjugate(M[a, r, b, s])*M[a, m, b, n], a = 0 .. 1), b = 0 .. 1) = Physics:-KroneckerDelta[m, r]*Physics:-KroneckerDelta[n, s] end proc

    (4.23)

    The first and third indices refer to the quantum numbers of A, the second and fourth to B, so the right-hand sides in the following are respectively 1 and 0

    P(1, 0, 1, 0)

    conjugate(M[0, 1, 0, 0])*M[0, 1, 0, 0]+conjugate(M[1, 1, 0, 0])*M[1, 1, 0, 0]+conjugate(M[0, 1, 1, 0])*M[0, 1, 1, 0]+conjugate(M[1, 1, 1, 0])*M[1, 1, 1, 0] = 1

    (4.24)

    P(1, 0, 0, 0)

    conjugate(M[0, 0, 0, 0])*M[0, 1, 0, 0]+conjugate(M[1, 0, 0, 0])*M[1, 1, 0, 0]+conjugate(M[0, 0, 1, 0])*M[0, 1, 1, 0]+conjugate(M[1, 0, 1, 0])*M[1, 1, 1, 0] = 0

    (4.25)

    To get the whole system of equations satisfied by the coefficients M[a, m, b, n], use P to construct an Array with four indices running from 0..1

    Array(`$`(0 .. 1, 4), P)

    _rtable[18446744078376377150]

    (4.26)

    Convert the whole Array into a set of equations

    simplify(convert(_rtable[18446744078376377150], setofequations))

    {abs(M[0, 0, 0, 0])^2+abs(M[1, 0, 0, 0])^2+abs(M[0, 0, 1, 0])^2+abs(M[1, 0, 1, 0])^2 = 1, abs(M[0, 0, 0, 1])^2+abs(M[1, 0, 0, 1])^2+abs(M[0, 0, 1, 1])^2+abs(M[1, 0, 1, 1])^2 = 1, abs(M[0, 1, 0, 0])^2+abs(M[1, 1, 0, 0])^2+abs(M[0, 1, 1, 0])^2+abs(M[1, 1, 1, 0])^2 = 1, abs(M[0, 1, 0, 1])^2+abs(M[1, 1, 0, 1])^2+abs(M[0, 1, 1, 1])^2+abs(M[1, 1, 1, 1])^2 = 1, conjugate(M[0, 0, 0, 0])*M[0, 0, 0, 1]+conjugate(M[1, 0, 0, 0])*M[1, 0, 0, 1]+conjugate(M[0, 0, 1, 0])*M[0, 0, 1, 1]+conjugate(M[1, 0, 1, 0])*M[1, 0, 1, 1] = 0, conjugate(M[0, 0, 0, 0])*M[0, 1, 0, 0]+conjugate(M[1, 0, 0, 0])*M[1, 1, 0, 0]+conjugate(M[0, 0, 1, 0])*M[0, 1, 1, 0]+conjugate(M[1, 0, 1, 0])*M[1, 1, 1, 0] = 0, conjugate(M[0, 0, 0, 0])*M[0, 1, 0, 1]+conjugate(M[1, 0, 0, 0])*M[1, 1, 0, 1]+conjugate(M[0, 0, 1, 0])*M[0, 1, 1, 1]+conjugate(M[1, 0, 1, 0])*M[1, 1, 1, 1] = 0, conjugate(M[0, 0, 0, 1])*M[0, 0, 0, 0]+conjugate(M[1, 0, 0, 1])*M[1, 0, 0, 0]+conjugate(M[0, 0, 1, 1])*M[0, 0, 1, 0]+conjugate(M[1, 0, 1, 1])*M[1, 0, 1, 0] = 0, conjugate(M[0, 0, 0, 1])*M[0, 1, 0, 0]+conjugate(M[1, 0, 0, 1])*M[1, 1, 0, 0]+conjugate(M[0, 0, 1, 1])*M[0, 1, 1, 0]+conjugate(M[1, 0, 1, 1])*M[1, 1, 1, 0] = 0, conjugate(M[0, 0, 0, 1])*M[0, 1, 0, 1]+conjugate(M[1, 0, 0, 1])*M[1, 1, 0, 1]+conjugate(M[0, 0, 1, 1])*M[0, 1, 1, 1]+conjugate(M[1, 0, 1, 1])*M[1, 1, 1, 1] = 0, conjugate(M[0, 1, 0, 0])*M[0, 0, 0, 0]+conjugate(M[1, 1, 0, 0])*M[1, 0, 0, 0]+conjugate(M[0, 1, 1, 0])*M[0, 0, 1, 0]+conjugate(M[1, 1, 1, 0])*M[1, 0, 1, 0] = 0, conjugate(M[0, 1, 0, 0])*M[0, 0, 0, 1]+conjugate(M[1, 1, 0, 0])*M[1, 0, 0, 1]+conjugate(M[0, 1, 1, 0])*M[0, 0, 1, 1]+conjugate(M[1, 1, 1, 0])*M[1, 0, 1, 1] = 0, conjugate(M[0, 1, 0, 0])*M[0, 1, 0, 1]+conjugate(M[1, 1, 0, 0])*M[1, 1, 0, 1]+conjugate(M[0, 1, 1, 0])*M[0, 1, 1, 1]+conjugate(M[1, 1, 1, 0])*M[1, 1, 1, 1] = 0, conjugate(M[0, 1, 0, 1])*M[0, 0, 0, 0]+conjugate(M[1, 1, 0, 1])*M[1, 0, 0, 0]+conjugate(M[0, 1, 1, 1])*M[0, 0, 1, 0]+conjugate(M[1, 1, 1, 1])*M[1, 0, 1, 0] = 0, conjugate(M[0, 1, 0, 1])*M[0, 0, 0, 1]+conjugate(M[1, 1, 0, 1])*M[1, 0, 0, 1]+conjugate(M[0, 1, 1, 1])*M[0, 0, 1, 1]+conjugate(M[1, 1, 1, 1])*M[1, 0, 1, 1] = 0, conjugate(M[0, 1, 0, 1])*M[0, 1, 0, 0]+conjugate(M[1, 1, 0, 1])*M[1, 1, 0, 0]+conjugate(M[0, 1, 1, 1])*M[0, 1, 1, 0]+conjugate(M[1, 1, 1, 1])*M[1, 1, 1, 0] = 0}

    (4.27)
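    Read as a single matrix statement, the equations in (4.27) say that if the coefficients M[a, m, b, n] are flattened into a 4 x 4 matrix, with rows labelled by the pair (a, b) and columns by the pair (m, n), then that matrix must have orthonormal columns, i.e. be unitary. Below is an illustrative cross-check of that reading; the name MM, the flattening convention and the identification of Ket(C, a, b) with the Bell state of index 2*a + b from (3.7)-(3.10) are choices made for this sketch.

    with(LinearAlgebra):

    # Coefficients of the four Bell states in the basis Ket(A, m)*Ket(B, n):
    # row index = 2*a + b + 1 (label of Ket(C, a, b)), column index = 2*m + n + 1
    MM := (1/sqrt(2)) * Matrix([[1,  0,  0,  1],
                                [0,  1,  1,  0],
                                [0,  I, -I,  0],
                                [1,  0,  0, -1]]):

    # The constraints (4.27) amount to the columns of MM being orthonormal,
    # i.e. to MM being a unitary matrix
    simplify(HermitianTranspose(MM) . MM);   # the 4 x 4 identity matrix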



     

    Download Tensor_Products_of_Quantum_States_-_2018.mw

    Edgardo S. Cheb-Terrab
    Physics, Differential Equations and Mathematical Functions, Maplesoft
