MaplePrimes Posts

MaplePrimes Posts are for sharing your experiences, techniques and opinions about Maple, MapleSim and related products, as well as general interests in math and computing.

  • With the new features added to the Student[LinearAlgebra] package, I wanted to go over some of the basics of how someone can do Linear Algebra in Maple without requiring them to do any programming. I was recently asked about this and thought that the information might be useful to others.
     

    This post is aimed at new Maple users. I hope that it will be helpful to students using Maple for the first time and to professors who want their students to use Maple without needing to spend time learning the language.
     

    In addition to the following post you can find a detailed video on using Maple to do Linear Algebra without programming here. You can also find some of the tools that are new to Maple 2020 for Linear Algebra here.

    The biggest tools you'll be using are the Matrix palette on the left of Maple, and the Context Panel on the right of Maple.

    First you should load the Student[LinearAlgebra] package by entering:

    with(Student[LinearAlgebra]);

    at the beginning of your document. If you end the command with a colon rather than a semicolon, Maple will not display the list of commands in the package.

    Use the Matrix palette on the left to input matrices.
    Once you have a matrix, you can use the Context Panel on the right to apply a variety of operations to it.
    The Student Linear Algebra menu in the Context Panel gives you access to many linear algebra commands.
    You can also access Maple's Tutors from the Tools Menu

    Tools > Tutors > Linear Algebra



    If you're interested in using the commands from Student[LinearAlgebra] in Maple, you can view the help pages here or by entering:

    ?Student[LinearAlgebra]

    into Maple.

    I hope that this helps you get started using Maple for Linear Algebra.



    Maple_for_Beginners.mw

    Edgardo S. Cheb-Terrab
    Physics, Differential Equations and Mathematical Functions, Maplesoft

    Over the past weeks, we have spoken with many of our academic customers throughout the world, many of whom have decided to continue their academic years online. As you can imagine, this is a considerable challenge for instructors and students alike. Academia has quickly had to pivot to virtual classrooms, online testing and other collaborative technologies, while at the same time dealing with the stress and uncertainty that has resulted from this crisis.

    We have been working with our customers to help them through this time in a variety of ways, but we know that there are still classes and students out there who are having trouble getting all the resources they need to complete their school year. So starting today, Maple Student Edition is being made free for every student, anywhere in the world, until the end of June. It is our hope that this action will remove a barrier for instructors to complete their Maple-led math instruction, and will help make things a bit simpler for everyone.

    If you are a student, you can get your free copy of Maple here.

    In addition, many of you have asked us about the best way to work on your engineering projects from home and/or teaching and learning remotely during this global crisis. We have put together resources for both that you can use as a starting point, and I invite you to contact us if you have any questions, or are dealing with challenges of your own. We are here to support you, and will be very flexible as we work together through these uncertain times.

    I wish you all the best,

    Laurent
    President & CEO


    Vectors in Spherical Coordinates using Tensor Notation

    Edgardo S. Cheb-Terrab(1) and Pascal Szriftgiser(2)

    (1) Maplesoft

    (2) Laboratoire PhLAM, UMR CNRS 8523, Université de Lille, F-59655, France

     

    The following is a topic that appears frequently in formulations: given a 3D vector in spherical (or any curvilinear) coordinates, how do you represent and relate, in simple terms, the vector and the corresponding vectorial operations Gradient, Divergence, Curl and Laplacian using tensor notation?

     

    The core of the answer is in the relation between the - say physical - vector components and the more abstract tensor covariant and contravariant components. Focusing on the case of a transformation from Cartesian to spherical coordinates, the presentation below starts by establishing that relationship between 3D vector and tensor components in Sec.I. In Sec.II, we verify the transformation formulas for covariant and contravariant components on the computer using TransformCoordinates. In Sec.III, those tensor transformation formulas are used to derive the vectorial form of the Gradient in spherical coordinates. In Sec.IV, we switch to using full tensor notation, a curvilinear metric and covariant derivatives to derive the traditional 3D vector analysis formulas in spherical coordinates for the Divergence, Curl, Gradient and Laplacian. Along the way, some useful techniques are shown, such as changing variables in 3D vectorial expressions and differential operators using Jacobians, and shortcut notations.

     

    The computation below is reproducible in Maple 2020 using the Maplesoft Physics Updates v.640 or newer.

     

    Start by setting the spacetime to be 3-dimensional and Euclidean, and use Cartesian coordinates

    with(Physics); with(Vectors)

    Setup(dimension = 3, coordinates = cartesian, g_ = `+`, spacetimeindices = lowercaselatin)

    `The dimension and signature of the tensor space are set to `[3, `+ + +`]

    `Default differentiation variables for d_, D_ and dAlembertian are: `{X = (x, y, z)}

    `Systems of spacetime coordinates are: `{X = (x, y, z)}

    _______________________________________________________

    `The Euclidean metric in coordinates `[x, y, z]

    Physics:-g_[mu, nu] = Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])

    `Defined Pauli sigma matrices (Psigma): `sigma[1], sigma[2], sigma[3]

    _______________________________________________________

    (1)

    I. The line element in spherical coordinates and the scale-factors

     

     

    In vector calculus, at the root of everything there is the line element dr_, which in Cartesian coordinates has the simple form

    dr_ = _i*dx+_j*dy+_k*dz

    dr_ = _i*dx+_j*dy+_k*dz

    (1.1)

    To compute the line element dr_ in spherical coordinates, the starting point is the transformation

    tr := `~`[`=`]([X], ChangeCoordinates([X], spherical))

    [x = r*sin(theta)*cos(phi), y = r*sin(theta)*sin(phi), z = r*cos(theta)]

    (1.2)

    Coordinates(S = [r, theta, phi])

    `Systems of spacetime coordinates are:`*{S = (r, theta, phi), X = (x, y, z)}

    (1.3)

    Since in (1.1) the differentials [dx, dy, dz] are just symbols with no relationship to [x, y, z], start by transforming these differentials using the chain rule, i.e. computing the Jacobian of the transformation (1.2). In this Jacobian J, the first row is [∂x/∂r, ∂x/∂theta, ∂x/∂phi]

    J := VectorCalculus:-Jacobian(map(rhs, [x = r*sin(theta)*cos(phi), y = r*sin(theta)*sin(phi), z = r*cos(theta)]), [S])

    Matrix([[sin(theta)*cos(phi), r*cos(theta)*cos(phi), -r*sin(theta)*sin(phi)], [sin(theta)*sin(phi), r*cos(theta)*sin(phi), r*sin(theta)*cos(phi)], [cos(theta), -r*sin(theta), 0]])

    So in matrix notation,

    Vector([dx, dy, dz]) = J.Vector([dr, dtheta, dphi])

    Vector([dx, dy, dz]) = Vector([sin(theta)*cos(phi)*dr + r*cos(theta)*cos(phi)*dtheta - r*sin(theta)*sin(phi)*dphi, sin(theta)*sin(phi)*dr + r*cos(theta)*sin(phi)*dtheta + r*sin(theta)*cos(phi)*dphi, cos(theta)*dr - r*sin(theta)*dtheta])

    (1.4)
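    Not part of the worksheet: the matrix relation (1.4) can be sanity-checked numerically outside Maple. A minimal Python sketch (the helper names `to_cartesian` and `check` are ours) compares J . [dr, dtheta, dphi] against an actual small displacement of the Cartesian coordinates.

```python
# Numeric check of (1.4): [dx, dy, dz] ≈ J . [dr, dtheta, dphi]
from math import sin, cos

def to_cartesian(r, theta, phi):
    # the transformation (1.2)
    return (r*sin(theta)*cos(phi), r*sin(theta)*sin(phi), r*cos(theta))

def jacobian(r, theta, phi):
    # rows: d(x, y, z)/d(r, theta, phi), as computed by VectorCalculus:-Jacobian
    return [
        [sin(theta)*cos(phi), r*cos(theta)*cos(phi), -r*sin(theta)*sin(phi)],
        [sin(theta)*sin(phi), r*cos(theta)*sin(phi),  r*sin(theta)*cos(phi)],
        [cos(theta),         -r*sin(theta),           0.0],
    ]

def check(r, theta, phi, eps=1e-6):
    # compare the linearized displacement J . dS against the true displacement dX
    x0 = to_cartesian(r, theta, phi)
    x1 = to_cartesian(r + eps, theta + 2*eps, phi + 3*eps)
    dS = (eps, 2*eps, 3*eps)
    J = jacobian(r, theta, phi)
    dX_lin = [sum(J[i][k]*dS[k] for k in range(3)) for i in range(3)]
    return max(abs((x1[i] - x0[i]) - dX_lin[i]) for i in range(3))

print(check(1.3, 0.7, 0.4))  # tiny: the difference is O(eps^2)
```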

    To complete the computation of dr_ in spherical coordinates we can now use ChangeBasis, provided that next we substitute (1.4) in the result, expressing the abstract objects [dx, dy, dz] in terms of [dr, dtheta, dphi].

     

    In two steps:

    lhs(dr_ = _i*dx+_j*dy+_k*dz) = ChangeBasis(rhs(dr_ = _i*dx+_j*dy+_k*dz), spherical)

    dr_ = (dx*sin(theta)*cos(phi)+dy*sin(theta)*sin(phi)+dz*cos(theta))*_r+(dx*cos(phi)*cos(theta)+dy*sin(phi)*cos(theta)-dz*sin(theta))*_theta+(cos(phi)*dy-sin(phi)*dx)*_phi

    (1.5)

    The line element

    "simplify(subs(convert(lhs(?) =~ rhs(?),set),dr_ = (dx*sin(theta)*cos(phi)+dy*sin(theta)*sin(phi)+dz*cos(theta))*_r+(dx*cos(phi)*cos(theta)+dy*sin(phi)*cos(theta)-dz*sin(theta))*_theta+(cos(phi)*dy-sin(phi)*dx)*_phi))"

    dr_ = _phi*dphi*r*sin(theta)+_theta*dtheta*r+_r*dr

    (1.6)

    This result is important: it gives us the so-called scale factors, the key that connects 3D vectors with the related covariant and contravariant tensors in curvilinear coordinates. The scale factors are computed from (1.6) by taking the scalar product with each of the unit vectors [_r, _theta, _phi], then taking the coefficients of the differentials [dr, dtheta, dphi] (just substitute them by the number 1)

    h := subs(`~`[`=`]([dr, dtheta, dphi], 1), [seq(rhs(dr_ = _phi*dphi*r*sin(theta) + _theta*dtheta*r + _r*dr) . q, q = [_r, _theta, _phi])])

    [1, r, r*sin(theta)]

    (1.7)
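    As a quick check of (1.7), outside Maple: the scale factors can also be read off as the lengths of the columns of the Jacobian J. A Python sketch (the helper name `scale_factors` is ours):

```python
# h_j is the Euclidean length of the j-th column of the Jacobian of (1.2)
from math import sin, cos, sqrt, isclose

def scale_factors(r, theta, phi):
    cols = [
        (sin(theta)*cos(phi), sin(theta)*sin(phi), cos(theta)),         # d/dr
        (r*cos(theta)*cos(phi), r*cos(theta)*sin(phi), -r*sin(theta)),  # d/dtheta
        (-r*sin(theta)*sin(phi), r*sin(theta)*cos(phi), 0.0),           # d/dphi
    ]
    return [sqrt(sum(c*c for c in col)) for col in cols]

h = scale_factors(2.0, 0.6, 1.1)
print(h)  # numerically equal to [1, r, r*sin(theta)]
```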

    The scale factors are relevant because the components of a 3D vector and of the corresponding tensor are not the same in curvilinear coordinates. For instance, representing the differential of the coordinates as the tensor dS^j = [dr, dtheta, dphi], we see that the corresponding vector, the line element dS_ in spherical coordinates, is not constructed by directly equating its components to the components of dS^j, so

     

     `#mover(mi("dS"),mo("&rarr;"))` <> `d&phi;`*`#mover(mi("&phi;",fontstyle = "normal"),mo("&and;"))`+dr*`#mover(mi("r"),mo("&and;"))`+`d&theta;`*`#mover(mi("&theta;",fontstyle = "normal"),mo("&and;"))` 

     

    The vector dS_ is constructed multiplying these contravariant components [dr, dtheta, dphi] by the scale factors, as

     

     `#mover(mi("dS"),mo("&rarr;"))` = `d&phi;`*`h__&phi;`*`#mover(mi("&phi;",fontstyle = "normal"),mo("&and;"))`+dr*h__r*`#mover(mi("r"),mo("&and;"))`+`d&theta;`*`h__&theta;`*`#mover(mi("&theta;",fontstyle = "normal"),mo("&and;"))` 

     

    This rule applies in general: the vectorial components of a 3D vector in an orthogonal system (curvilinear or not) are always expressed in terms of the contravariant components A^j in the same way we did above with the line element, using the scale factors h__j, so that

     

     `#mover(mi("A"),mo("&rarr;"))` = Sum(h[j]*A^j*`#mover(mi("\`e__j\`"),mo("&circ;"))`, j = 1 .. 3)

     

    where on the right-hand side we see the contravariant components A[`~j`] and the scale factors h[j], and the e__j are the unit basis vectors. Because the system is orthogonal, each vector component A_(j) satisfies

    A_(j) = h[j]*A[`~j`]

     

    The scale-factors h[j] do not constitute a tensor, so on the right-hand side we do not sum over j.  Also, from

     

    LinearAlgebra[Norm](A_)^2 = A[j]*A[`~j`]

    it follows that,

    A_(j) = A[j]/h[j]

    where on the right-hand side we now have the covariant tensor components A__j.

     

    • This relationship between the components of a 3D vector and the contravariant and covariant components of a tensor representing the vector is the key to translating vector-component formulas into the corresponding tensor-component formulas.
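    A numerical illustration of this key relationship, outside Maple and assuming the diagonal spherical metric g = diag(1, r^2, r^2*sin(theta)^2) (so that h_j = sqrt(g_jj)): multiplying A^j by h_j and dividing A_j by h_j give the same vector components.

```python
# Sketch (not from the worksheet): A_(j) = h_j * A^j = A_j / h_j, no sum over j
from math import sin, isclose

r, theta = 1.7, 0.9
g = [1.0, r**2, (r*sin(theta))**2]            # diagonal metric components g_jj
h = [1.0, r, r*sin(theta)]                    # scale factors h_j = sqrt(g_jj)
A_contra = [0.3, -1.2, 2.5]                   # arbitrary contravariant A^j
A_cov = [g[j]*A_contra[j] for j in range(3)]  # lower the index: A_j = g_jj * A^j

vec_from_contra = [h[j]*A_contra[j] for j in range(3)]  # A_(j) = h_j * A^j
vec_from_cov = [A_cov[j]/h[j] for j in range(3)]        # A_(j) = A_j / h_j
print(vec_from_contra, vec_from_cov)
```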

     

    II. Transformation of contravariant and covariant tensors

     

     

    Define here two representations for one and the same tensor: A__c will represent A in Cartesian coordinates, while A__s will represent A in spherical coordinates.

    Define(A__c[j], A__s[j])

    `Defined objects with tensor properties`

     

    {A__c[j], A__s[j], Physics:-Dgamma[a], Physics:-Psigma[a], Physics:-d_[a], Physics:-g_[a, b], Physics:-LeviCivita[a, b, c], Physics:-SpaceTimeVector[a](S), Physics:-SpaceTimeVector[a](X)}

    (2.1)

    Transformation rule for a contravariant tensor

     

    We know, by definition, that the transformation rule for the components of a contravariant tensor is A^mu(y) = (∂y^mu/∂x^nu) A^nu(x), the same rule as for the differential of the coordinates. Then the transformation rule from A__c[`~j`] to A__s[`~j`] computed using TransformCoordinates should give the same relation (1.4). The application of the command, however, requires attention because, as in (1.4), we want the Cartesian (not the spherical) components isolated. That is like performing a reversed transformation. So we will use

     

    "TensorArray(`A__c`[]^(j))=TransformCoordinates(tr,`A__s`[]^(j),[X],[S])"

    where on the left-hand side we get, isolated, the three components of A in Cartesian coordinates, and on the right-hand side we transform the spherical components A__s[`~j`] from spherical S = (r, theta, phi) (4th argument) to Cartesian X = (x, y, z) (3rd argument), which according to the 5th bullet of TransformCoordinates will result in a transformation expressed in terms of the old coordinates (here the spherical [S]). Expand things to make the comparison with (1.4) possible by eye

     

    Vector[column](TensorArray(A__c[`~j`])) = TransformCoordinates(tr, A__s[`~j`], [X], [S], simplifier = expand)

    Vector([A__c[`~1`], A__c[`~2`], A__c[`~3`]]) = Vector([sin(theta)*cos(phi)*A__s[`~1`] + r*cos(theta)*cos(phi)*A__s[`~2`] - r*sin(theta)*sin(phi)*A__s[`~3`], sin(theta)*sin(phi)*A__s[`~1`] + r*cos(theta)*sin(phi)*A__s[`~2`] + r*sin(theta)*cos(phi)*A__s[`~3`], cos(theta)*A__s[`~1`] - r*sin(theta)*A__s[`~2`]])

    (2.2)

    We see that the transformation rule for a contravariant vector A__c[`~j`] is, indeed, the same as the transformation (1.4) for the differential of the coordinates.

    Transformation rule for a covariant tensor

     

    For the transformation rule for the components of a covariant tensor A__c[j], we know, by definition, that it is A_mu(y) = (∂x^nu/∂y^mu) A_nu(x), i.e. the same transformation rule as for the gradient [∂__x, ∂__y, ∂__z], where ∂__x = (u -> diff(u, x)) and so on. We can experiment with this by directly changing variables in the differential operators [∂__x, ∂__y, ∂__z]; for example

    d_[x] = PDEtools:-dchange(tr, proc (u) options operator, arrow; diff(u, x) end proc, simplify)

    Physics:-d_[x] = (proc (u) options operator, arrow; ((-r*cos(theta)^2+r)*cos(phi)*(diff(u, r))+sin(theta)*cos(phi)*cos(theta)*(diff(u, theta))-(diff(u, phi))*sin(phi))/(r*sin(theta)) end proc)

    (2.3)

    This result, and the equivalent ones obtained by replacing x by y or z in the input above, can be computed in one go, in matrix and simplified form, using the Jacobian J of the transformation computed earlier. We need to take the transpose of the inverse of J, because now we are transforming the components of the gradient [∂__x, ∂__y, ∂__z]

    H := simplify(LinearAlgebra:-Transpose(1/J))

    Vector([d_[x], d_[y], d_[z]]) = H.Vector([d_[r], d_[theta], d_[phi]])

    Vector([d_[x], d_[y], d_[z]]) = Vector([sin(theta)*cos(phi)*d_[r] + cos(theta)*cos(phi)*d_[theta]/r - sin(phi)*d_[phi]/(r*sin(theta)), sin(theta)*sin(phi)*d_[r] + cos(theta)*sin(phi)*d_[theta]/r + cos(phi)*d_[phi]/(r*sin(theta)), cos(theta)*d_[r] - sin(theta)*d_[theta]/r])

    (2.4)

    The corresponding transformation equations relating the tensors A__c and A__s in Cartesian and spherical coordinates are computed with TransformCoordinates as in (2.2), just lowering the indices on the left- and right-hand sides (i.e., removing the tilde ~)

    Vector[column](TensorArray(A__c[j])) = TransformCoordinates(tr, A__s[j], [X], [r, theta, phi], simplifier = expand)

    Vector([A__c[1], A__c[2], A__c[3]]) = Vector([sin(theta)*cos(phi)*A__s[1] + cos(theta)*cos(phi)*A__s[2]/r - sin(phi)*A__s[3]/(r*sin(theta)), sin(theta)*sin(phi)*A__s[1] + cos(theta)*sin(phi)*A__s[2]/r + cos(phi)*A__s[3]/(r*sin(theta)), cos(theta)*A__s[1] - sin(theta)*A__s[2]/r])

    (2.5)

    We see that the transformation rule for a covariant vector A__c[j] is, indeed, as the transformation rule (2.4) for the gradient.

     

    To the side: once it is understood how to compute these transformation rules, we can obtain the inverse of (2.5) as follows

    Vector[column](TensorArray(A__s[j])) = TransformCoordinates(tr, A__c[j], [S], [X], simplifier = expand)

    Vector([A__s[1], A__s[2], A__s[3]]) = Vector([sin(theta)*cos(phi)*A__c[1] + sin(theta)*sin(phi)*A__c[2] + cos(theta)*A__c[3], r*cos(theta)*cos(phi)*A__c[1] + r*cos(theta)*sin(phi)*A__c[2] - r*sin(theta)*A__c[3], -r*sin(theta)*sin(phi)*A__c[1] + r*sin(theta)*cos(phi)*A__c[2]])

    (2.6)
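    The covariant rules (2.5) and (2.6) are inverses of each other. Here is a numerical round-trip check in Python (not from the worksheet; `J`, `H`, `matvec` and `transpose` are our own helper names), with the Jacobian of (1.4) and its transposed inverse written out explicitly:

```python
# Round trip: A_s = J^T . A_c (covariant rule), then H . A_s recovers A_c,
# since H = (J^{-1})^T and (J^{-1})^T J^T = identity.
from math import sin, cos, isclose

def J(r, theta, phi):  # dX/dS, as in (1.4)
    return [
        [sin(theta)*cos(phi), r*cos(theta)*cos(phi), -r*sin(theta)*sin(phi)],
        [sin(theta)*sin(phi), r*cos(theta)*sin(phi),  r*sin(theta)*cos(phi)],
        [cos(theta),         -r*sin(theta),           0.0],
    ]

def H(r, theta, phi):  # transpose of the inverse Jacobian, rows [dr/dx_i, dtheta/dx_i, dphi/dx_i]
    return [
        [sin(theta)*cos(phi), cos(theta)*cos(phi)/r, -sin(phi)/(r*sin(theta))],
        [sin(theta)*sin(phi), cos(theta)*sin(phi)/r,  cos(phi)/(r*sin(theta))],
        [cos(theta),         -sin(theta)/r,           0.0],
    ]

def matvec(M, v): return [sum(M[i][k]*v[k] for k in range(3)) for i in range(3)]
def transpose(M): return [[M[k][i] for k in range(3)] for i in range(3)]

r, theta, phi = 1.4, 0.8, 2.1
A_c = [0.5, -0.7, 1.3]                            # covariant Cartesian components
A_s = matvec(transpose(J(r, theta, phi)), A_c)    # as in (2.6)
A_back = matvec(H(r, theta, phi), A_s)            # as in (2.5)
print(A_back)  # recovers A_c
```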

    III. Deriving the transformation rule for the Gradient using TransformCoordinates

     

     

    Turn ON the CompactDisplay  notation for derivatives, so that the differentiation variable is displayed as an index:

    ON


    The gradient of a function f in Cartesian coordinates and spherical coordinates is respectively given by

    (%Nabla = Nabla)(f(X))

    %Nabla(f(X)) = (diff(f(X), x))*_i+(diff(f(X), y))*_j+(diff(f(X), z))*_k

    (3.1)

    (%Nabla = Nabla)(f(S))

    %Nabla(f(S)) = (diff(f(S), r))*_r+(diff(f(S), theta))*_theta/r+(diff(f(S), phi))*_phi/(r*sin(theta))

    (3.2)

    What we want now is to depart from (3.1) in Cartesian coordinates and obtain (3.2) in spherical coordinates using the transformation rule for a covariant tensor computed with TransformCoordinates in (2.5). (An equivalent derivation, simpler and with fewer steps, is done in Sec. IV.)

     

    Start changing the vector basis in the gradient (3.1)

    lhs(%Nabla(f(X)) = (diff(f(X), x))*_i+(diff(f(X), y))*_j+(diff(f(X), z))*_k) = ChangeBasis(rhs(%Nabla(f(X)) = (diff(f(X), x))*_i+(diff(f(X), y))*_j+(diff(f(X), z))*_k), spherical)

    %Nabla(f(X)) = ((diff(f(X), x))*sin(theta)*cos(phi)+(diff(f(X), y))*sin(theta)*sin(phi)+(diff(f(X), z))*cos(theta))*_r+((diff(f(X), x))*cos(phi)*cos(theta)+(diff(f(X), y))*sin(phi)*cos(theta)-(diff(f(X), z))*sin(theta))*_theta+(-(diff(f(X), x))*sin(phi)+cos(phi)*(diff(f(X), y)))*_phi

    (3.3)

    By eye, we see that in this result the coefficients of [_r, _theta, _phi] are the three lines of the right-hand side of (2.6) after replacing the covariant components A__s[j] by the derivatives of f with respect to the j-th coordinate, here displayed using indexed notation due to CompactDisplay

    `~`[`=`]([A__s[1], A__s[2], A__s[3]], [diff(f(S), r), diff(f(S), theta), diff(f(S), phi)])

    [A__s[1] = Physics:-Vectors:-diff(f(S), r), A__s[2] = Physics:-Vectors:-diff(f(S), theta), A__s[3] = Physics:-Vectors:-diff(f(S), phi)]

    (3.4)

    `~`[`=`]([A__c[1], A__c[2], A__c[3]], [diff(f(X), x), diff(f(X), y), diff(f(X), z)])

    [A__c[1] = Physics:-Vectors:-diff(f(X), x), A__c[2] = Physics:-Vectors:-diff(f(X), y), A__c[3] = Physics:-Vectors:-diff(f(X), z)]

    (3.5)

    So, since (2.5) is the inverse of (2.6), replace A by ∂f in (2.5), the formula computed using TransformCoordinates, then insert the result in (3.3) to relate the gradient in Cartesian and spherical coordinates. We expect to arrive at the formula (3.2) for the gradient in spherical coordinates.

    "subs([A__s[1] = Physics:-Vectors:-diff(f(S),r), A__s[2] = Physics:-Vectors:-diff(f(S),theta), A__s[3] = Physics:-Vectors:-diff(f(S),phi)],[A__c[1] = Physics:-Vectors:-diff(f(X),x), A__c[2] = Physics:-Vectors:-diff(f(X),y), A__c[3] = Physics:-Vectors:-diff(f(X),z)],?)"

    Vector([diff(f(X), x), diff(f(X), y), diff(f(X), z)]) = Vector([sin(theta)*cos(phi)*diff(f(S), r) + cos(theta)*cos(phi)*diff(f(S), theta)/r - sin(phi)*diff(f(S), phi)/(r*sin(theta)), sin(theta)*sin(phi)*diff(f(S), r) + cos(theta)*sin(phi)*diff(f(S), theta)/r + cos(phi)*diff(f(S), phi)/(r*sin(theta)), cos(theta)*diff(f(S), r) - sin(theta)*diff(f(S), theta)/r])

    (3.6)

    "subs(convert(lhs(?) =~ rhs(?),set),%Nabla(f(X)) = (diff(f(X),x)*sin(theta)*cos(phi)+diff(f(X),y)*sin(theta)*sin(phi)+diff(f(X),z)*cos(theta))*_r+(diff(f(X),x)*cos(phi)*cos(theta)+diff(f(X),y)*sin(phi)*cos(theta)-diff(f(X),z)*sin(theta))*_theta+(-diff(f(X),x)*sin(phi)+cos(phi)*diff(f(X),y))*_phi)"

    %Nabla(f(X)) = ((sin(theta)*cos(phi)*(diff(f(S), r))+cos(theta)*cos(phi)*(diff(f(S), theta))/r-sin(phi)*(diff(f(S), phi))/(r*sin(theta)))*sin(theta)*cos(phi)+(sin(theta)*sin(phi)*(diff(f(S), r))+cos(theta)*sin(phi)*(diff(f(S), theta))/r+cos(phi)*(diff(f(S), phi))/(r*sin(theta)))*sin(theta)*sin(phi)+(cos(theta)*(diff(f(S), r))-sin(theta)*(diff(f(S), theta))/r)*cos(theta))*_r+((sin(theta)*cos(phi)*(diff(f(S), r))+cos(theta)*cos(phi)*(diff(f(S), theta))/r-sin(phi)*(diff(f(S), phi))/(r*sin(theta)))*cos(phi)*cos(theta)+(sin(theta)*sin(phi)*(diff(f(S), r))+cos(theta)*sin(phi)*(diff(f(S), theta))/r+cos(phi)*(diff(f(S), phi))/(r*sin(theta)))*sin(phi)*cos(theta)-(cos(theta)*(diff(f(S), r))-sin(theta)*(diff(f(S), theta))/r)*sin(theta))*_theta+(-(sin(theta)*cos(phi)*(diff(f(S), r))+cos(theta)*cos(phi)*(diff(f(S), theta))/r-sin(phi)*(diff(f(S), phi))/(r*sin(theta)))*sin(phi)+cos(phi)*(sin(theta)*sin(phi)*(diff(f(S), r))+cos(theta)*sin(phi)*(diff(f(S), theta))/r+cos(phi)*(diff(f(S), phi))/(r*sin(theta))))*_phi

    (3.7)

    Simplifying, we arrive at (3.2)

    (lhs = `@`(`@`(expand, simplify), rhs))(%Nabla(f(X)) = ((sin(theta)*cos(phi)*(diff(f(S), r))+cos(theta)*cos(phi)*(diff(f(S), theta))/r-sin(phi)*(diff(f(S), phi))/(r*sin(theta)))*sin(theta)*cos(phi)+(sin(theta)*sin(phi)*(diff(f(S), r))+cos(theta)*sin(phi)*(diff(f(S), theta))/r+cos(phi)*(diff(f(S), phi))/(r*sin(theta)))*sin(theta)*sin(phi)+(cos(theta)*(diff(f(S), r))-sin(theta)*(diff(f(S), theta))/r)*cos(theta))*_r+((sin(theta)*cos(phi)*(diff(f(S), r))+cos(theta)*cos(phi)*(diff(f(S), theta))/r-sin(phi)*(diff(f(S), phi))/(r*sin(theta)))*cos(phi)*cos(theta)+(sin(theta)*sin(phi)*(diff(f(S), r))+cos(theta)*sin(phi)*(diff(f(S), theta))/r+cos(phi)*(diff(f(S), phi))/(r*sin(theta)))*sin(phi)*cos(theta)-(cos(theta)*(diff(f(S), r))-sin(theta)*(diff(f(S), theta))/r)*sin(theta))*_theta+(-(sin(theta)*cos(phi)*(diff(f(S), r))+cos(theta)*cos(phi)*(diff(f(S), theta))/r-sin(phi)*(diff(f(S), phi))/(r*sin(theta)))*sin(phi)+cos(phi)*(sin(theta)*sin(phi)*(diff(f(S), r))+cos(theta)*sin(phi)*(diff(f(S), theta))/r+cos(phi)*(diff(f(S), phi))/(r*sin(theta))))*_phi)

    %Nabla(f(X)) = (diff(f(S), r))*_r+(diff(f(S), theta))*_theta/r+(diff(f(S), phi))*_phi/(r*sin(theta))

    (3.8)

    %Nabla(f(S)) = (diff(f(S), r))*_r+(diff(f(S), theta))*_theta/r+(diff(f(S), phi))*_phi/(r*sin(theta))

    %Nabla(f(S)) = (diff(f(S), r))*_r+(diff(f(S), theta))*_theta/r+(diff(f(S), phi))*_phi/(r*sin(theta))

    (3.9)
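    The equality of (3.8) and (3.2) can also be confirmed numerically, outside Maple. Below, a Python sketch (our own helper names, central finite differences) compares the spherical-gradient formula against the Cartesian gradient projected on the unit vectors _r, _theta, _phi.

```python
# Gradient in spherical coordinates, (3.2), vs projected Cartesian gradient
from math import sin, cos, exp

def f_cart(x, y, z):  # an arbitrary smooth test function
    return x*y + z**2 + exp(0.1*x)

def spherical_to_cart(r, theta, phi):
    return r*sin(theta)*cos(phi), r*sin(theta)*sin(phi), r*cos(theta)

def num_diff(g, args, i, eps=1e-6):
    # central finite difference of g with respect to its i-th argument
    a = list(args); a[i] += eps; hi = g(*a)
    a[i] -= 2*eps; lo = g(*a)
    return (hi - lo) / (2*eps)

r, theta, phi = 1.5, 0.8, 0.3
f_sph = lambda r, theta, phi: f_cart(*spherical_to_cart(r, theta, phi))
S = (r, theta, phi)
grad_sph = [num_diff(f_sph, S, 0),                      # f_r
            num_diff(f_sph, S, 1)/r,                    # f_theta / r
            num_diff(f_sph, S, 2)/(r*sin(theta))]       # f_phi / (r sin(theta))

X = spherical_to_cart(r, theta, phi)
gc = [num_diff(f_cart, X, i) for i in range(3)]         # Cartesian gradient
e_r     = (sin(theta)*cos(phi), sin(theta)*sin(phi),  cos(theta))
e_theta = (cos(theta)*cos(phi), cos(theta)*sin(phi), -sin(theta))
e_phi   = (-sin(phi), cos(phi), 0.0)
proj = [sum(g*e for g, e in zip(gc, basis)) for basis in (e_r, e_theta, e_phi)]
print(grad_sph, proj)
```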

    IV. Deriving the transformation rule for the Divergence, Curl, Gradient and Laplacian, using TransformCoordinates and Covariant derivatives

     

     

    • 

    The Divergence

     

    Introducing the vector A in spherical coordinates, its Divergence is given by

    A__s_ := A__r(S)*_r+`A__&theta;`(S)*_theta+`A__&phi;`(S)*_phi

    A__r(S)*_r+`A__&theta;`(S)*_theta+`A__&phi;`(S)*_phi

    (4.1)

    CompactDisplay(%)

    ` A__r`(S)*`will now be displayed as`*A__r

     

    ` A__&phi;`(S)*`will now be displayed as`*`A__&phi;`

     

    ` A__&theta;`(S)*`will now be displayed as`*`A__&theta;`

    (4.2)

    %Divergence(%A__s_) = Divergence(A__s_)

    %Divergence(%A__s_) = ((diff(A__r(S), r))*r+2*A__r(S))/r+((diff(`A__&theta;`(S), theta))*sin(theta)+`A__&theta;`(S)*cos(theta))/(r*sin(theta))+(diff(`A__&phi;`(S), phi))/(r*sin(theta))

    (4.3)

    We want to see how this result, (4.3), can be obtained using TransformCoordinates and departing from a tensorial representation of the object, this time the covariant derivative D_[j](A__s[`~j`]). For that purpose, we first transform the coordinates and the metric, introducing nonzero Christoffel symbols

    TransformCoordinates(tr, g_[j, k], [S], setmetric)

    `Systems of spacetime coordinates are:`*{S = (r, theta, phi), X = (x, y, z)}

     

    `Changing the differentiation variables used to compute the Christoffel symbols from `[x, y, z]*` to `[r, theta, phi]*` while the spacetime metric depends on `[r, theta]

     

    `Default differentiation variables for d_, D_ and dAlembertian are:`*{S = (r, theta, phi)}

     

    _______________________________________________________

     

    `Coordinates: `[r, theta, phi]*`. Signature: `(`+ + -`)

     

    _______________________________________________________

     

    Physics:-g_[a, b] = Matrix([[1, 0, 0], [0, r^2, 0], [0, 0, r^2*sin(theta)^2]])

     

    _______________________________________________________

     

    `Setting `*greek*` letters to represent `*space*` indices`

    (4.4)

    To the side: despite having nonzero Christoffel symbols, the space still has no curvature, all the components of the Riemann tensor are equal to zero

    Riemann[nonzero]

    Physics:-Riemann[a, b, c, d] = {}

    (4.5)

    Consider now the divergence of the contravariant tensor A__s[`~j`], computed in tensor notation

    CompactDisplay(A__s(S))

    ` A__s`(S)*`will now be displayed as`*A__s

    (4.6)

    D_[j](A__s[`~j`](S))

    Physics:-D_[j](A__s[`~j`](S), [S])

    (4.7)

    To the side: the covariant derivative  expressed using the D_  operator can be rewritten in terms of the non-covariant d_  and Christoffel  symbols as follows

    D_[j](A__s[`~j`](S), [S]) = convert(D_[j](A__s[`~j`](S), [S]), d_)

    Physics:-D_[j](A__s[`~j`](S), [S]) = Physics:-d_[j](A__s[`~j`](S), [S])+Physics:-Christoffel[`~j`, a, j]*A__s[`~a`](S)

    (4.8)

    Summing over the repeated indices in (4.7), we have

    %D_[j](%A__s[`~j`]) = SumOverRepeatedIndices(D_[j](A__s[`~j`](S), [S]))

    %D_[j](%A__s[`~j`]) = diff(A__s[`~1`](S), r)+diff(A__s[`~2`](S), theta)+diff(A__s[`~3`](S), phi)+2*A__s[`~1`](S)/r+cos(theta)*A__s[`~2`](S)/sin(theta)

    (4.9)

    How is this related to the expression of Nabla . A__s_ in (4.3)? The answer is in the relationship established at the end of Sec. I between the components of the tensor A__s[`~j`] and the components of the vector A__s_, namely that the vector components are obtained multiplying the contravariant tensor components by the scale factors h__j. So, in the above we need to substitute the contravariant A__s[`~j`] by the vector components divided by the scale factors

    [seq(A__s[Library:-Contravariant(j)](S) = Component(A__s_, j)/h[j], j = 1 .. 3)]

    [A__s[`~1`](S) = A__r(S), A__s[`~2`](S) = `A__&theta;`(S)/r, A__s[`~3`](S) = `A__&phi;`(S)/(r*sin(theta))]

    (4.10)

    subs[eval]([A__s[`~1`](S) = A__r(S), A__s[`~2`](S) = `A__&theta;`(S)/r, A__s[`~3`](S) = `A__&phi;`(S)/(r*sin(theta))], %D_[j](%A__s[`~j`]) = diff(A__s[`~1`](S), r)+diff(A__s[`~2`](S), theta)+diff(A__s[`~3`](S), phi)+2*A__s[`~1`](S)/r+cos(theta)*A__s[`~2`](S)/sin(theta))

    %D_[j](%A__s[`~j`]) = diff(A__r(S), r)+(diff(`A__&theta;`(S), theta))/r+(diff(`A__&phi;`(S), phi))/(r*sin(theta))+2*A__r(S)/r+cos(theta)*`A__&theta;`(S)/(sin(theta)*r)

    (4.11)

    Comparing with (4.3), we see these two expressions are the same:

    expand(%Divergence(%A__s_) = ((diff(A__r(S), r))*r+2*A__r(S))/r+((diff(`A__&theta;`(S), theta))*sin(theta)+`A__&theta;`(S)*cos(theta))/(r*sin(theta))+(diff(`A__&phi;`(S), phi))/(r*sin(theta)))

    %Divergence(%A__s_) = diff(A__r(S), r)+(diff(`A__&theta;`(S), theta))/r+(diff(`A__&phi;`(S), phi))/(r*sin(theta))+2*A__r(S)/r+cos(theta)*`A__&theta;`(S)/(sin(theta)*r)

    (4.12)
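    The agreement between (4.3) and (4.11) can also be checked numerically, outside Maple. A Python sketch (our helper names), assuming an arbitrary smooth field given by its spherical components: the formula (4.12) is compared against the divergence computed in Cartesian coordinates by finite differences.

```python
# Divergence: spherical formula (4.12) vs Cartesian finite differences
from math import sin, cos, sqrt, atan2

def A_sph(r, theta, phi):  # arbitrary smooth components (A_r, A_theta, A_phi)
    return (r*sin(theta), r*cos(phi), sin(theta)*cos(theta)*r**2)

def A_cart(x, y, z):
    # same field expressed through its Cartesian components
    r = sqrt(x*x + y*y + z*z)
    theta = atan2(sqrt(x*x + y*y), z)
    phi = atan2(y, x)
    Ar, At, Ap = A_sph(r, theta, phi)
    e_r = (sin(theta)*cos(phi), sin(theta)*sin(phi),  cos(theta))
    e_t = (cos(theta)*cos(phi), cos(theta)*sin(phi), -sin(theta))
    e_p = (-sin(phi), cos(phi), 0.0)
    return tuple(Ar*e_r[i] + At*e_t[i] + Ap*e_p[i] for i in range(3))

def d(g, args, i, eps=1e-6):
    a = list(args); a[i] += eps; hi = g(*a); a[i] -= 2*eps; lo = g(*a)
    return (hi - lo) / (2*eps)

r, theta, phi = 1.3, 0.9, 0.5
x, y, z = r*sin(theta)*cos(phi), r*sin(theta)*sin(phi), r*cos(theta)

div_cart = sum(d(lambda *X: A_cart(*X)[i], (x, y, z), i) for i in range(3))

S = (r, theta, phi)
Ar, At, Ap = A_sph(*S)
div_sph = (d(lambda *s: A_sph(*s)[0], S, 0)
           + d(lambda *s: A_sph(*s)[1], S, 1)/r
           + d(lambda *s: A_sph(*s)[2], S, 2)/(r*sin(theta))
           + 2*Ar/r + cos(theta)*At/(sin(theta)*r))   # formula (4.12)
print(div_cart, div_sph)
```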
    • 

    The Curl

     

    The Curl of the vector A__s_ in spherical coordinates is given by

    %Curl(%A__s_) = Curl(A__s_)

    %Curl(%A__s_) = ((diff(`A__&phi;`(S), theta))*sin(theta)+`A__&phi;`(S)*cos(theta)-(diff(`A__&theta;`(S), phi)))*_r/(r*sin(theta))+(diff(A__r(S), phi)-(diff(`A__&phi;`(S), r))*r*sin(theta)-`A__&phi;`(S)*sin(theta))*_theta/(r*sin(theta))+((diff(`A__&theta;`(S), r))*r+`A__&theta;`(S)-(diff(A__r(S), theta)))*_phi/r

    (4.13)

     

    One could think that the expression for the Curl in tensor notation is the same as in a non-curvilinear system

     

    "`&epsilon;`[i,j,k] `&dtri;`[]^(j)(`A__s`[]^(k))"

     

    But in a curvilinear system epsilon[i, j, k] is not a tensor; we need to use the non-Galilean form Epsilon[i, j, k] = sqrt(%g_[determinant])*epsilon[i, j, k], where %g_[determinant] is the determinant of the metric. Moreover, since the expression Epsilon[i, j, k]*D_[`~j`](A__s[`~k`]) has one free covariant index (the first one), to compare with the vectorial formula (4.13) this index also needs to be rewritten as a vector component, as discussed at the end of Sec. I, using

    A_(j) = A[j]/h[j]

    The formula (4.13) for the vectorial Curl is thus expressed using tensor notation as

    Setup(levicivita = nongalilean)

    [levicivita = nongalilean]

    (4.14)

    %Curl(%A__s_) = LeviCivita[i, j, k]*D_[`~j`](A__s[`~k`](S))/%h[i]

    %Curl(%A__s_) = Physics:-LeviCivita[i, j, k]*Physics:-D_[`~j`](A__s[`~k`](S), [S])/%h[i]

    (4.15)

    followed by replacing the contravariant tensor components A__s[`~k`] by the vector components A_(k)/h__k using (4.10). Proceeding the same way we did with the Divergence, expand this expression. We could use TensorArray, but Library:-TensorComponents places a comma between components, making things more readable in this case

    lhs(%Curl(%A__s_) = Physics[LeviCivita][i, j, k]*D_[`~j`](A__s[`~k`](S), [S])/%h[i]) = Library:-TensorComponents(rhs(%Curl(%A__s_) = Physics[LeviCivita][i, j, k]*D_[`~j`](A__s[`~k`](S), [S])/%h[i]))

    %Curl(%A__s_) = [(sin(theta)^3*(diff(A__s[`~3`](S), theta))*r^2+2*sin(theta)^2*cos(theta)*A__s[`~3`](S)*r^2-(diff(A__s[`~2`](S), phi))*sin(theta)*r^2)/(%h[1]*sin(theta)^2*r^2), (-sin(theta)^3*(diff(A__s[`~3`](S), r))*r^4-2*sin(theta)^3*A__s[`~3`](S)*r^3+(diff(A__s[`~1`](S), phi))*sin(theta)*r^2)/(%h[2]*sin(theta)^2*r^2), (sin(theta)^3*(diff(A__s[`~2`](S), r))*r^4+2*sin(theta)^3*A__s[`~2`](S)*r^3-sin(theta)^3*(diff(A__s[`~1`](S), theta))*r^2)/(%h[3]*sin(theta)^2*r^2)]

    (4.16)

    Replace now the components of the tensor A__s[`~j`] by the components of the 3D vector A__s_ using (4.10)

    lhs(%Curl(%A__s_) = [(sin(theta)^3*(diff(A__s[`~3`](S), theta))*r^2+2*sin(theta)^2*cos(theta)*A__s[`~3`](S)*r^2-(diff(A__s[`~2`](S), phi))*sin(theta)*r^2)/(%h[1]*sin(theta)^2*r^2), (-sin(theta)^3*(diff(A__s[`~3`](S), r))*r^4-2*sin(theta)^3*A__s[`~3`](S)*r^3+(diff(A__s[`~1`](S), phi))*sin(theta)*r^2)/(%h[2]*sin(theta)^2*r^2), (sin(theta)^3*(diff(A__s[`~2`](S), r))*r^4+2*sin(theta)^3*A__s[`~2`](S)*r^3-sin(theta)^3*(diff(A__s[`~1`](S), theta))*r^2)/(%h[3]*sin(theta)^2*r^2)]) = value(subs[eval]([A__s[`~1`](S) = A__r(S), A__s[`~2`](S) = `A__&theta;`(S)/r, A__s[`~3`](S) = `A__&phi;`(S)/(r*sin(theta))], rhs(%Curl(%A__s_) = [(sin(theta)^3*(diff(A__s[`~3`](S), theta))*r^2+2*sin(theta)^2*cos(theta)*A__s[`~3`](S)*r^2-(diff(A__s[`~2`](S), phi))*sin(theta)*r^2)/(%h[1]*sin(theta)^2*r^2), (-sin(theta)^3*(diff(A__s[`~3`](S), r))*r^4-2*sin(theta)^3*A__s[`~3`](S)*r^3+(diff(A__s[`~1`](S), phi))*sin(theta)*r^2)/(%h[2]*sin(theta)^2*r^2), (sin(theta)^3*(diff(A__s[`~2`](S), r))*r^4+2*sin(theta)^3*A__s[`~2`](S)*r^3-sin(theta)^3*(diff(A__s[`~1`](S), theta))*r^2)/(%h[3]*sin(theta)^2*r^2)])))

    %Curl(%A__s_) = [(sin(theta)^3*((diff(`A__&phi;`(S), theta))/(r*sin(theta))-`A__&phi;`(S)*cos(theta)/(r*sin(theta)^2))*r^2+2*sin(theta)*cos(theta)*`A__&phi;`(S)*r-(diff(`A__&theta;`(S), phi))*r*sin(theta))/(h[1]*sin(theta)^2*r^2), (-sin(theta)^3*((diff(`A__&phi;`(S), r))/(r*sin(theta))-`A__&phi;`(S)/(r^2*sin(theta)))*r^4-2*sin(theta)^2*`A__&phi;`(S)*r^2+(diff(A__r(S), phi))*sin(theta)*r^2)/(h[2]*sin(theta)^2*r^2), (sin(theta)^3*((diff(`A__&theta;`(S), r))/r-`A__&theta;`(S)/r^2)*r^4+2*sin(theta)^3*`A__&theta;`(S)*r^2-sin(theta)^3*(diff(A__r(S), theta))*r^2)/(h[3]*sin(theta)^2*r^2)]

    (4.17)

    (lhs = `@`(simplify, rhs))(%Curl(%A__s_) = [(sin(theta)^3*((diff(`A__&phi;`(S), theta))/(r*sin(theta))-`A__&phi;`(S)*cos(theta)/(r*sin(theta)^2))*r^2+2*sin(theta)*cos(theta)*`A__&phi;`(S)*r-(diff(`A__&theta;`(S), phi))*r*sin(theta))/(h[1]*sin(theta)^2*r^2), (-sin(theta)^3*((diff(`A__&phi;`(S), r))/(r*sin(theta))-`A__&phi;`(S)/(r^2*sin(theta)))*r^4-2*sin(theta)^2*`A__&phi;`(S)*r^2+(diff(A__r(S), phi))*sin(theta)*r^2)/(h[2]*sin(theta)^2*r^2), (sin(theta)^3*((diff(`A__&theta;`(S), r))/r-`A__&theta;`(S)/r^2)*r^4+2*sin(theta)^3*`A__&theta;`(S)*r^2-sin(theta)^3*(diff(A__r(S), theta))*r^2)/(h[3]*sin(theta)^2*r^2)])

    %Curl(%A__s_) = [((diff(`A__&phi;`(S), theta))*sin(theta)+`A__&phi;`(S)*cos(theta)-(diff(`A__&theta;`(S), phi)))/(r*sin(theta)), (diff(A__r(S), phi)-(diff(`A__&phi;`(S), r))*r*sin(theta)-`A__&phi;`(S)*sin(theta))/(r*sin(theta)), ((diff(`A__&theta;`(S), r))*r+`A__&theta;`(S)-(diff(A__r(S), theta)))/r]

    (4.18)

    We see these are exactly the components of the Curl (4.13)

    %Curl(%A__s_) = ((diff(`A__&phi;`(S), theta))*sin(theta)+`A__&phi;`(S)*cos(theta)-(diff(`A__&theta;`(S), phi)))*_r/(r*sin(theta))+(diff(A__r(S), phi)-(diff(`A__&phi;`(S), r))*r*sin(theta)-`A__&phi;`(S)*sin(theta))*_theta/(r*sin(theta))+((diff(`A__&theta;`(S), r))*r+`A__&theta;`(S)-(diff(A__r(S), theta)))*_phi/r

    %Curl(%A__s_) = ((diff(`A__&phi;`(S), theta))*sin(theta)+`A__&phi;`(S)*cos(theta)-(diff(`A__&theta;`(S), phi)))*_r/(r*sin(theta))+(diff(A__r(S), phi)-(diff(`A__&phi;`(S), r))*r*sin(theta)-`A__&phi;`(S)*sin(theta))*_theta/(r*sin(theta))+((diff(`A__&theta;`(S), r))*r+`A__&theta;`(S)-(diff(A__r(S), theta)))*_phi/r

    (4.19)
    • 

    The Gradient

     

    Once the problem is fully understood, it is easy to redo the computations of Sec.III for the Gradient, this time using tensor notation and the covariant derivative. In tensor notation, the components of the Gradient are given by the components of the right-hand side

    %Nabla(f(S)) = `&dtri;`[j](f(S))/%h[j]

    %Nabla(f(S)) = Physics:-d_[j](f(S), [S])/%h[j]

    (4.20)

    where on the left-hand side we have the vectorial Nabla differential operator and on the right-hand side, since f(S) is a scalar, the covariant derivative `&dtri;`[j](f) becomes the standard derivative `&PartialD;`[j](f).

    lhs(%Nabla(f(S)) = Physics[d_][j](f(S), [S])/%h[j]) = eval(value(Library:-TensorComponents(rhs(%Nabla(f(S)) = Physics[d_][j](f(S), [S])/%h[j]))))

    %Nabla(f(S)) = [Physics:-Vectors:-diff(f(S), r), (diff(f(S), theta))/r, (diff(f(S), phi))/(r*sin(theta))]

    (4.21)

    The above is the expected result (3.2)

    %Nabla(f(S)) = (diff(f(S), r))*_r+(diff(f(S), theta))*_theta/r+(diff(f(S), phi))*_phi/(r*sin(theta))

    %Nabla(f(S)) = (diff(f(S), r))*_r+(diff(f(S), theta))*_theta/r+(diff(f(S), phi))*_phi/(r*sin(theta))

    (4.22)
    • 

    The Laplacian

     

    Likewise we can compute the Laplacian directly as

    %Laplacian(f(S)) = D_[j](D_[j](f(S)))

    %Laplacian(f(S)) = Physics:-D_[j](Physics:-d_[`~j`](f(S), [S]), [S])

    (4.23)

    In this case there are no free indices nor tensor components to be rewritten as vector components, so there is no need for scale-factors. Summing over the repeated indices,

    SumOverRepeatedIndices(%Laplacian(f(S)) = D_[j](Physics[d_][`~j`](f(S), [S]), [S]))

    %Laplacian(f(S)) = Physics:-dAlembertian(f(S), [S])+2*(diff(f(S), r))/r+cos(theta)*(diff(f(S), theta))/(sin(theta)*r^2)

    (4.24)

    Evaluating the Vectors:-Laplacian on the left-hand side,

    value(%Laplacian(f(S)) = Physics[dAlembertian](f(S), [S])+2*(diff(f(S), r))/r+cos(theta)*(diff(f(S), theta))/(sin(theta)*r^2))

    ((diff(diff(f(S), r), r))*r+2*(diff(f(S), r)))/r+((diff(diff(f(S), theta), theta))*sin(theta)+cos(theta)*(diff(f(S), theta)))/(r^2*sin(theta))+(diff(diff(f(S), phi), phi))/(r^2*sin(theta)^2) = Physics:-dAlembertian(f(S), [S])+2*(diff(f(S), r))/r+cos(theta)*(diff(f(S), theta))/(sin(theta)*r^2)

    (4.25)

    On the right-hand side we see the dAlembertian, `&square;`(f(S)), in curvilinear coordinates; rewrite it using standard diff derivatives and expand both sides of the equation for comparison

    expand(convert(((diff(diff(f(S), r), r))*r+2*(diff(f(S), r)))/r+((diff(diff(f(S), theta), theta))*sin(theta)+cos(theta)*(diff(f(S), theta)))/(r^2*sin(theta))+(diff(diff(f(S), phi), phi))/(r^2*sin(theta)^2) = Physics[dAlembertian](f(S), [S])+2*(diff(f(S), r))/r+cos(theta)*(diff(f(S), theta))/(sin(theta)*r^2), diff))

    diff(diff(f(S), r), r)+(diff(diff(f(S), theta), theta))/r^2+(diff(diff(f(S), phi), phi))/(r^2*sin(theta)^2)+2*(diff(f(S), r))/r+cos(theta)*(diff(f(S), theta))/(sin(theta)*r^2) = diff(diff(f(S), r), r)+(diff(diff(f(S), theta), theta))/r^2+(diff(diff(f(S), phi), phi))/(r^2*sin(theta)^2)+2*(diff(f(S), r))/r+cos(theta)*(diff(f(S), theta))/(sin(theta)*r^2)

    (4.26)

    This is an identity, the left and right hand sides are equal:

    evalb(diff(diff(f(S), r), r)+(diff(diff(f(S), theta), theta))/r^2+(diff(diff(f(S), phi), phi))/(r^2*sin(theta)^2)+2*(diff(f(S), r))/r+cos(theta)*(diff(f(S), theta))/(sin(theta)*r^2) = diff(diff(f(S), r), r)+(diff(diff(f(S), theta), theta))/r^2+(diff(diff(f(S), phi), phi))/(r^2*sin(theta)^2)+2*(diff(f(S), r))/r+cos(theta)*(diff(f(S), theta))/(sin(theta)*r^2))

    true

    (4.27)
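Identity (4.26) can also be sanity-checked outside Maple. The sketch below (plain Python, finite differences, using the hypothetical test function f = r², i.e. x² + y² + z² in Cartesian coordinates) evaluates the spherical-coordinate Laplacian numerically and recovers the Cartesian value Δf = 6:

```python
import math

h = 1e-5  # finite-difference step

def f(r, theta, phi):
    return r**2          # test function: r^2 = x^2 + y^2 + z^2

def laplacian_spherical(r, theta, phi):
    # second and first derivatives by central differences
    f_rr = (f(r+h, theta, phi) - 2*f(r, theta, phi) + f(r-h, theta, phi)) / h**2
    f_r  = (f(r+h, theta, phi) - f(r-h, theta, phi)) / (2*h)
    f_tt = (f(r, theta+h, phi) - 2*f(r, theta, phi) + f(r, theta-h, phi)) / h**2
    f_t  = (f(r, theta+h, phi) - f(r, theta-h, phi)) / (2*h)
    f_pp = (f(r, theta, phi+h) - 2*f(r, theta, phi) + f(r, theta, phi-h)) / h**2
    # the spherical Laplacian of (4.26), term by term
    return (f_rr + 2*f_r/r
            + (f_tt + f_t*math.cos(theta)/math.sin(theta)) / r**2
            + f_pp / (r**2 * math.sin(theta)**2))

val = laplacian_spherical(1.3, 0.7, 0.4)
print(val)   # close to 6, the Cartesian Laplacian of x^2 + y^2 + z^2
```

The evaluation point (1.3, 0.7, 0.4) is arbitrary; any point with r > 0 and sin(theta) ≠ 0 gives the same answer, as it must for this f.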


     

    Download Vectors_and_Spherical_coordinates_in_tensor_notation.mw

    Edgardo S. Cheb-Terrab
    Physics, Differential Equations and Mathematical Functions, Maplesoft

    A way of cutting holes in an implicit plot. This comes from the field of numerical parameterization of surfaces. Using the surface  x3 = 0.01*exp(x1) / (0.01 + x1^4 + x2^4 + x3^4)  as an example, consider an approach to producing holes. The surface is locally parameterized in some suitable way, and the place for the hole and its size are selected. In the first example, the parametrization is based on sections of the initial surface by perpendicular planes. In the second example, a "round" parametrization is made on the basis of a cylinder and the planes passing through its axis. Holes can be of any size and any shape. In the figures, the cut-out surface sections are colored green and are located above their own holes, equidistant to the original surface.
    HOLE_1.mw  HOLE_2.mw

    Hi, 

    The present work aims to show how Bayesian inference methods can be used to infer (that is, to assess) the probability that a person found to be infected by SARS-CoV-2 will die (note I did not write "will die of it", because one can never be sure of the cause of death).
    A lot of details are available in the attached pdf file (I tried to be pedagogic enough so that people not familiar with Bayesian inference can get a global understanding of the subject; many links are provided for quick access to the different notions).

    In particular, I explain why simple mathematics cannot provide a reliable estimate of this probability of death (sometimes referred to as the "death rate") as long as the epidemic continues to spread.

    Even if the approach presented here is rather original, that is not the purpose of this post. 
    For a long time I have had in mind to post here an application concerning Bayesian methods. The CoVid19 outbreak has only provided me with the most high-profile topic to do so.
    I will say no more about the inference procedure itself (all the material is given in the attached pdf file) and will concentrate only on the MAPLE implementation of the solution algorithm.

    Bayesian inference generally uses simple algorithms such as MCMC (Markov Chain Monte Carlo) or ABC (Approximate Bayesian Computation), to mention a few, and their pseudo code generally runs to a few tens of lines.
    This is something I have already done with other languages, but I found the task comparatively more difficult with Maple. Probably I was too obsessed with not coding in Maple the way you would code in Matlab or R, for instance.
    In the end the code I wrote is rather slow, because of the amount of memory it allocates.
    In a question I posed weeks ago (How can I prevent the creation of random variables...) Preben gave a solution to limit the burst of memory: the trick works well, but I'm still stuck with memory size problems (Acer also proposed a solution but I wasn't able to make it work... maybe I was too lazy to modify my code deeply).

    Anyway, the code is there, in case anyone would like to take up the challenge to make it more efficient (in which case I'll take it).
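For readers unfamiliar with these algorithms, the skeleton of a Metropolis-Hastings sampler really does fit in a few tens of lines. The Python sketch below is purely illustrative (made-up counts, a uniform prior and a binomial likelihood; it is not the algorithm in the attached worksheet):

```python
import math
import random

# Hypothetical counts (NOT real data): d deaths among n confirmed cases.
n, d = 1000, 45

def log_posterior(p):
    # binomial likelihood times a uniform Beta(1,1) prior, up to a constant
    if not 0 < p < 1:
        return -math.inf
    return d * math.log(p) + (n - d) * math.log(1 - p)

def metropolis(n_samples, step=0.01, seed=0):
    rng = random.Random(seed)
    p = 0.5                                  # arbitrary starting point
    samples = []
    for _ in range(n_samples):
        q = p + rng.gauss(0, step)           # symmetric random-walk proposal
        if math.log(rng.random()) < log_posterior(q) - log_posterior(p):
            p = q                            # accept the move
        samples.append(p)
    return samples

chain = metropolis(20000)
burned = chain[5000:]                        # discard the burn-in phase
print(sum(burned) / len(burned))             # posterior mean, near d/n
```

The posterior mean lands close to the naive ratio d/n here only because the counts are fixed; the point of the full analysis is precisely that such a ratio is unreliable while the epidemic is still spreading.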

    Note 1: this code contains a small "Maplet" to help you choose any country in the data file on which you would like to run the inference.
    Note 2: Be careful: doing statistics, even Bayesian statistics, needs enough data: some countries have history records ranging over only a few days, or no recorded deaths at all; inferring something from such loose data will probably be disappointing

    The attached files:

    • The pdf file is the "companion document" where all or most of it is explained. It was written a few days ago for another purpose, and the results it presents were not obtained from the latest data (March 21, 2020: coronavirus)
    • The xls files are data files; they were loaded yesterday (March 28, 2020) from here: coronavirus
    • the mw file... well, I guess you know what it is.
       

    Bayesian_inference.pdf

    total-cases-covid-19_NF.xls

    total-deaths-covid-19_NF.xls

    Bayesian_Inference_ABC+MCMC_NF_2.mw


     

     

    In Maple plots, many symbols such as diamond, star, and solidcircle are available. Many of them may also have been used for teaching purposes.

    Recently, someone encountered the need to draw graphs with arrowheads, and many solutions may be available as well. But it requires a thorough understanding of Maple's features, which are very many. My feeling is that an arrow symbol could also be added to the symbol feature, so that the option can be used as a plot point at the graph end points very easily. It would be just like adding a solidbox symbol at any point on the curve.

    Hope my suggestions are in order.

    Thanks.

    Ramakrishnan V

    The following puzzle prompted me to write this post: "A figure is drawn on checkered paper that needs to be cut into 2 equal parts (the cuts must pass along the sides of the squares.)" (parts are called equal if, after cutting, they can be superimposed on one another, that is, if one of them can be moved, rotated and (if need to) flip so that they completely coincide) (see the first picture below). 
    I could not solve it manually and wrote a procedure called  CutTwoParts  that does this automatically (of course, this procedure applies to other similar puzzles). This procedure uses my procedure  AreIsometric  published earlier  https://www.mapleprimes.com/posts/200157-Testing-Of-Two-Plane-Sets-For-Isometry  (for convenience, I have included its text here). In the procedure  CutTwoParts  the figure is specified by the coordinates of the centers of the squares of which it consists.

    I advise everyone to first try to solve this puzzle manually in order to feel its non-triviality, and only then load the worksheet with the procedure for automatic solution.


    For some reason, the worksheet did not load and I was only able to insert the link.

    Cuttings.mw



     

    With this application, our science and engineering students in the area of physics can check the first condition of equilibrium using Maple technology. By entering only masses and angles, we obtain graphs and data for a better interpretation.

    First_equilibrium_condition.zip

    Lenin AC

    Ambassador of Maple

    Until now I have been reading Maple Help files on the MAPLE website.  For convenience and mark-up, I have often downloaded the help files and printed them on paper, only to find that the text over-runs the margins and is therefore annoyingly incomplete.  On reflection, this is not surprising, as the content is formatted for internet/web display!

    TIP:  Instead, go to the bottom of the MAPLE help webpage of interest, click on the "Download Help Document" link and so download the Maple file:  e.g. pdsolve-numeric.mw.  The helpfile can then be read (& executed) using MAPLE.

    Melvin

    So here's something silly but cool you can do with Maple while you're "working" from home.

    • Record a few seconds of your voice on a microphone that's close to your mouth (probably using a headset). This is your dry audio.
    • On your phone, record a single clap of your hands in an enclosed space, like your shower cubicle or a closet. Trim this audio to the clap, and the reverb created by your enclosed space. This is your impulse response.
    • Send both sound files to whatever computer you have Maple on.
    • Using AudioTools:-Convolution, convolve the dry audio with the impulse response. This is your wet audio, and it should sound a little bit like your voice was recorded in your enclosed space.

    Here's some code. I've also attached my dry audio, an impulse response recorded in my shower (yes, I stood inside my shower, closed the door, and recorded a single clap of my hands on my phone), and the resulting wet audio.

    with( AudioTools ):
    dry_audio := Read( "MaryHadALittleLamb_sc.wav" ):
    impulse_response := Read( "clap_sc.wav" ):
    wet_audio := Normalize( Convolution( dry_audio, impulse_response ) ):
    Write("wet_audio.wav", wet_audio );
    

    A full Maple worksheet is here.

    AudioSamplesForReverb.zip
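For the curious, the convolution step is, in essence, the ordinary discrete convolution wet[n] = sum_k dry[k]*ir[n-k]. A naive pure-Python sketch of the operation (illustrative only; AudioTools:-Convolution is of course far faster on real audio):

```python
def convolve(dry, ir):
    """Discrete convolution; output length is len(dry) + len(ir) - 1."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for k, x in enumerate(dry):
        for j, h in enumerate(ir):
            out[k + j] += x * h     # each input sample excites the whole IR
    return out

# Convolving with a single unit impulse reproduces the signal;
# a delayed impulse reproduces it with a delay (the simplest "reverb").
print(convolve([1.0, 2.0, 3.0], [1.0]))        # [1.0, 2.0, 3.0]
print(convolve([1.0, 2.0, 3.0], [0.0, 1.0]))   # [0.0, 1.0, 2.0, 3.0]
```

A real impulse response is just a long decaying train of such impulses, which is why the wet audio sounds like the dry voice played in the recorded space.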

    Hi,

    Two weeks ago, I started loading data on the CoVid19 outbreak in order to understand, out of any official communication from any country, what is really going on.

    From February 29 to March 9 these data come from https://bnonews.com/index.php/2020/02/the-latest-coronavirus-cases/ and from March 10 until now from https://www.worldometers.info/coronavirus/#repro. In all cases the loading was done manually (copy-paste onto a LibreOffice spreadsheet, plus correction, and save into a xls file) because I wasn't able to find csv data (csv data do exist here https://github.com/CSSEGISandData/COVID-19, but they end February 15th).
    So I copied-pasted the results from the two sources above into a LibreOffice spreadsheet, adjusted the names of some countries because they appeared differently (for instance "United States" instead of "USA"), removed the unnecessary commas and saved the result in a xls file.

    I also used data from https://www.worldometers.info/world-population/population-by-country/ to get the populations of more than 260 countries around the world and, finally, csv data from https://ourworldindata.org/coronavirus#covid-19-tests to get synthetic histories of confirmed and death cases (I have discovered this site only yesterday evening and I think it could replace all the data I initially loaded).

    The two worksheets here are aimed at exploration and visualization only.
    Another one is in progress whose goal is to infer the true death rate (also known as CFR, Case Fatality Rate).

    No analysis is presented, if for no other reason than that the available data (except the numbers of deaths) are extremely dependent on the testing policies in place. But some features can be drawn from the data used here.
    For instance, if you select country = "China" in the file Covid19_Evolution_bis.mw, you will observe the well-known behaviour that the "Apparent Death Rate", which I define as the ratio of the cumulated number of deaths at time t to the cumulated number of confirmed cases at the same time, is always an underestimation of the death rate one can only know once the outbreak has ended. With this in mind, changing the country in this worksheet from China to Italy seems to lead to frightening interpolations... But here again, without knowing the test policy no solid conclusion can be drawn: maybe Italy mainly tests elderly people with acute symptoms, hence the huge "Apparent Death Rate" Italy seems to have?


    The work has been done with Maple 2015, and some graphics could be improved if a newer version were used (for instance, as Maple 2015 doesn't allow changing the direction of tickmarks, I overcame this limitation by assigning the date to the vertical axis on some plots).
    The second Explore plot could probably be improved by using newer versions, or Maplets or Embedded Components.

    Explore data from https://bnonews.com/index.php/2020/02/the-latest-coronavirus-cases/ and https://www.worldometers.info/coronavirus/#repro
    Files to use
    Covid19_Evolution.mw
    Covid19_Data.m.zip
    Population.xls

    Explore data from  https://ourworldindata.org/coronavirus#covid-19-tests
    Files to use
    Covid19_Evolution_bis.mw
    daily-deaths-covid-19-who.xls
    total-cases-covid-19-who.xls
    Population.xls


    I would be interested by any open collaboration with people interested by this post (it's not in my intention to write papers on the subject, my only motivation is scientific curiosity).

     

    An expression sequence is the underlying data structure for lists, sets, and function call arguments in Maple. Conceptually, a list is just a sequence enclosed in "[" and "]", a set is a sequence (with no duplicate elements) enclosed in "{" and "}", and a function call is a sequence enclosed in "(" and ")". A sequence can also be used as a data structure itself:

    > Q := x, 42, "string", 42;
                               Q := x, 42, "string", 42
    
    > L := [ Q ];
                              L := [x, 42, "string", 42]
    
    > S := { Q };
                                S := {42, "string", x}
    
    > F := f( Q );
                              F := f(x, 42, "string", 42)
    

    A sequence, like most data structures in Maple, is immutable. Once created, it cannot be changed. This means the same sequence can be shared by multiple data structures. In the example above, the list assigned to L and the function call assigned to F both share the same instance of the sequence assigned to Q. The set assigned to S refers to a different sequence, one with the duplicate 42 removed, and sorted into a canonical order.

    Appending an element to a sequence creates a new sequence. The original remains unaltered, and still referenced by the list and function call:

    > Q := Q, a+b;
                            Q := x, 42, "string", 42, a + b
    
    > L;
                                 [x, 42, "string", 42]
    
    > S;
                                   {42, "string", x}
    
    > F;
                                f(x, 42, "string", 42)
    

    Because appending to a sequence creates a new sequence, building a long sequence by appending one element at a time is very inefficient in both time and space. Building a sequence of length N this way creates sequences of lengths 1, 2, ..., N-1, N. The extra space used will eventually be reclaimed by Maple's garbage collector, but this takes time.

    This leads to the subject of this article, which is how to create long sequences efficiently. For the remainder of this article, the sequence we will use is the Fibonacci numbers, which are defined as follows:

    • Fib(0) = 0
    • Fib(1) = 1
    • Fib(N) = Fib(N-1) + Fib(N-2) for all N > 1

    In a computer algebra system like Maple, the simplest way to generate individual members of this sequence is with a recursive function. This is also very efficient if option remember is used (and very inefficient if it is not; computing Fib(N) requires 2 Fib(N+1) - 1 calls, and Fib(N) grows exponentially):

    > Fib := proc(N)
    >     option remember;
    >     if N = 0 then
    >         0
    >     elif N = 1 then
    >         1
    >     else
    >         Fib(N-1) + Fib(N-2)
    >     end if
    > end proc:
    > Fib(1);
                                           1
    
    > Fib(2);
                                           1
    
    > Fib(5);
                                           5
    
    > Fib(10);
                                          55
    
    > Fib(20);
                                         6765
    
    > Fib(50);
                                      12586269025
    
    > Fib(100);
                                 354224848179261915075
    
    > Fib(200);
                      280571172992510140037611932413038677189525
    
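For comparison with other languages: Maple's option remember corresponds to memoization, for example Python's functools.lru_cache. A sketch of the same recursion:

```python
from functools import lru_cache

@lru_cache(maxsize=None)        # plays the role of Maple's 'option remember'
def fib(n):
    # each fib(n) is computed once, then served from the cache
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(20))    # 6765
print(fib(100))   # 354224848179261915075, matching Fib(100) above
```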

    Let's start with the most straightforward, and most inefficient way to generate a sequence of the first 100 Fibonacci numbers, starting with an empty sequence and using a for-loop to append one member at a time. Part of the output has been elided below in the interests of saving space:

    > Q := ();
                                         Q :=
    
    > for i from 0 to 99 do
    >     Q := Q, Fib(i)
    > end do:
    > Q;
    0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584,
    
        4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811,
    
        ...
    
        51680708854858323072, 83621143489848422977, 135301852344706746049,
    
        218922995834555169026
    

    As mentioned previously, this actually produces 100 sequences of lengths 1 to 100, of which 99 will (eventually) be recovered by the garbage collector. This method is O(N^2) (Big O Notation) in time and space, meaning that producing a sequence of 200 values this way will take 4 times the time and memory as a sequence of 100 values.
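The same quadratic trap exists in any language with immutable sequences. In Python, for example, a tuple plays the role of a Maple sequence: each "append" builds a brand-new tuple while the original is unchanged:

```python
q = ()
for i in range(5):
    q = q + (i,)       # allocates a brand-new tuple each iteration;
                       # the old one becomes garbage, as in Maple

print(q)               # (0, 1, 2, 3, 4)
# Building a length-N tuple this way copies 1 + 2 + ... + N elements,
# i.e. O(N^2) work, exactly like repeated sequence appends in Maple.
```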

    The traditional Maple wisdom is to use the seq function instead, which produces only the requested sequence, and no intermediate ones:

    > Q := seq(Fib(i),i=0..99);
    Q := 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597,
    
        2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418,
    
        ...
    
        51680708854858323072, 83621143489848422977, 135301852344706746049,
    
        218922995834555169026
    

    This is O(N) in time and space; generating a sequence of 200 elements takes twice the time and memory required for a sequence of 100 elements.

    As of Maple 2019, it is also possible to achieve O(N) performance by constructing a sequence directly using a for-expression, without the cost of constructing the intermediate sequences that a for-statement would incur:

    > Q := (for i from 0 to 99 do Fib(i) end do);
    Q := 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597,
    
        2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418,
    
        ...
    
        51680708854858323072, 83621143489848422977, 135301852344706746049,
    
        218922995834555169026
    

    This method is especially useful when you wish to add a condition to the elements selected for the sequence, since the full capabilities of Maple loops can be used (see The Two Kinds of Loops in Maple). The following two examples produce a sequence containing only the odd members of the first 100 Fibonacci numbers, and the first 100 odd Fibonacci numbers respectively:

    > Q := (for i from 0 to 99 do
    >           f := Fib(i);
    >           if f :: odd then
    >               f
    >           else
    >               NULL
    >           end if
    >       end do);
    Q := 1, 1, 3, 5, 13, 21, 55, 89, 233, 377, 987, 1597, 4181, 6765, 17711, 28657,
    
        75025, 121393, 317811, 514229, 1346269, 2178309, 5702887, 9227465,
    
        ...
    
        19740274219868223167, 31940434634990099905, 83621143489848422977,
    
        135301852344706746049
    
    > count := 0:
    > Q := (for i from 0 while count < 100 do
    >           f := Fib(i);
    >           if f :: odd then
    >               count += 1;
    >               f
    >           else
    >               NULL
    >           end if
    >       end do);
    Q := 1, 1, 3, 5, 13, 21, 55, 89, 233, 377, 987, 1597, 4181, 6765, 17711, 28657,
    
        75025, 121393, 317811, 514229, 1346269, 2178309, 5702887, 9227465,
    
        ...
    
        898923707008479989274290850145, 1454489111232772683678306641953,
    
        3807901929474025356630904134051, 6161314747715278029583501626149
    
    > i;
                                          150
    

    A for-loop used as an expression generates a sequence, producing one member for each iteration of the loop. The value of that member is the last expression computed during the iteration. If the last expression in an iteration is NULL, no value is produced for that iteration.

    Examining i after the second loop completes, we can see that 150 Fibonacci numbers, Fib(0) through Fib(149), were generated to find the first 100 odd ones. (The loop control variable is incremented before the while condition is checked, hence its final value is one more than the last index used.)
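In other languages this filtering idiom is typically written with a generator. As a point of comparison, a Python sketch that produces the first 100 odd Fibonacci numbers:

```python
from itertools import islice

def fibs():
    # infinite generator of Fibonacci numbers: 0, 1, 1, 2, 3, 5, ...
    f0, f1 = 0, 1
    while True:
        yield f0
        f0, f1 = f1, f0 + f1

# keep only the odd members, then take the first 100 of them
odd_fibs = list(islice((f for f in fibs() if f % 2 == 1), 100))
print(odd_fibs[:8])      # [1, 1, 3, 5, 13, 21, 55, 89]
print(len(odd_fibs))     # 100
```

Like the Maple for-expression, the generator only computes as many Fibonacci numbers as the condition requires.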

    Until now, we've been using calls to the Fib function to generate the individual Fibonacci numbers. These numbers can of course also be generated by a simple loop which, together with assignment of its initial conditions, can be written as a single sequence:

    > Q := ((f0 := 0),
    >       (f1 := 1),
    >       (for i from 2 to 99 do
    >            f0, f1 := f1, f0 + f1;
    >            f1
    >        end do));
    Q := 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597,
    
        2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418,
    
        ...
    
        51680708854858323072, 83621143489848422977, 135301852344706746049,
    
        218922995834555169026
    

    A Maple Array is a mutable data structure. Changing an element of an Array modifies the Array in-place; no new copy is generated:

    > A := Array([a,b,c]);
                                    A := [a, b, c]
    
    > A[2] := d;
                                       A[2] := d
    
    > A;
                                       [a, d, c]
    

    It is also possible to append elements to an array, either by using programmer indexing, or the recently introduced ,= operator:

    > A(numelems(A)+1) := e; # () instead of [] denotes "programmer indexing"
                                   A := [a, d, c, e]
    
    > A;
                                     [a, d, c, e]
    

    Like appending to a sequence, this sometimes causes the existing data to be discarded and new data to be allocated, but this is done in chunks proportional to the current size of the Array, resulting in time and memory usage that is still O(N). This can be used to advantage to generate sequences efficiently:

    > A := Array(0..1,[0,1]);
                                  [ 0..1 1-D Array       ]
                             A := [ Data Type: anything  ]
                                  [ Storage: rectangular ]
                                  [ Order: Fortran_order ]
    
    > for i from 2 to 99 do
    >     A ,= A[i-1] + A[i-2]
    > end do:
    > A;
                               [ 0..99 1-D Array      ]
                               [ Data Type: anything  ]
                               [ Storage: rectangular ]
                               [ Order: Fortran_order ]
    
    > Q := seq(A);
    Q := 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597,
    
        2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418,
    
        ...
    
        51680708854858323072, 83621143489848422977, 135301852344706746049,
    
        218922995834555169026
    
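A Python list behaves much like a growable Maple Array: append over-allocates in chunks, so N appends cost amortized O(N) rather than O(N^2). A sketch mirroring the Array-based construction above:

```python
def fib_list(count):
    a = [0, 1]                    # initial conditions, like Array(0..1,[0,1])
    while len(a) < count:
        a.append(a[-1] + a[-2])   # amortized O(1) per append, like A ,= ...
    return a[:count]

q = fib_list(100)
print(q[:10])    # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(q[-1])     # 218922995834555169026, i.e. Fib(99)
```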

    Although unrelated specifically to the goal of producing sequences, the same techniques can be used to construct Maple strings efficiently:

    > A := Array("0");
                                       A := [48]
    
    > for i from 1 to 99 do
    >    A ,= " ", String(Fib(i))
    > end do:
    > A;
                               [ 1..1150 1-D Array     ]
                               [ Data Type: integer[1] ]
                               [ Storage: rectangular  ]
                               [ Order: Fortran_order  ]
    
    > A[1..10];
                       [48, 32, 49, 32, 49, 32, 50, 32, 51, 32]
    
    > S := String(A);
    S := "0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 6765 \
        10946 17711 28657 46368 75025 121393 196418 317811 514229 832040 134626\
        9 2178309 3524578 5702887 9227465 14930352 24157817 39088169 63245986 1\
        02334155 165580141 267914296 433494437 701408733 1134903170 1836311903 \
        2971215073 4807526976 7778742049 12586269025 20365011074 32951280099 53\
        316291173 86267571272 139583862445 225851433717 365435296162 5912867298\
        79 956722026041 1548008755920 2504730781961 4052739537881 6557470319842\
         10610209857723 17167680177565 27777890035288 44945570212853 7272346024\
        8141 117669030460994 190392490709135 308061521170129 498454011879264 80\
        6515533049393 1304969544928657 2111485077978050 3416454622906707 552793\
        9700884757 8944394323791464 14472334024676221 23416728348467685 3788906\
        2373143906 61305790721611591 99194853094755497 160500643816367088 25969\
        5496911122585 420196140727489673 679891637638612258 1100087778366101931\
         1779979416004714189 2880067194370816120 4660046610375530309 7540113804\
        746346429 12200160415121876738 19740274219868223167 3194043463499009990\
        5 51680708854858323072 83621143489848422977 135301852344706746049 21892\
        2995834555169026"
    

    A call to the Array constructor with a string as an argument produces an array of bytes (Maple data type integer[1]). The ,= operator can then be used to append additional characters or strings, with O(N) efficiency. Finally, the Array can be converted back into a Maple string.
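The string-building advice translates directly to other languages too: in Python, for instance, the idiomatic equivalent is to collect the pieces in a list and join them once at the end, avoiding quadratic repeated concatenation:

```python
def fib_string(count):
    f0, f1 = 0, 1
    pieces = []
    for _ in range(count):
        pieces.append(str(f0))    # amortized O(1), like the ,= appends above
        f0, f1 = f1, f0 + f1
    return " ".join(pieces)       # one final O(N) concatenation

s = fib_string(100)
print(s[:30])   # "0 1 1 2 3 5 8 13 21 34 55 89 1"
```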

    Constructing sequences in Maple is a common operation when writing Maple programs. Maple gives you many ways to do this, and it's worthwhile taking the time to choose a method that is efficient, and suitable to the task at hand.

    Maple 2020 offers many improvements motivated and driven by our users.

    Every single update in a new release has a story behind it. It might be a new function that a customer wants, a response to some feedback about usability, or an itch that a developer needs to scratch.

    I’ll end this post with a story about acoustic guitars and how they drove improvements in signal and audio processing. But first, here are some of my personal favorites from Maple 2020.

    Graph theory is a big focus of Maple 2020. The new features include more control over visualization, additional special graphs, new analysis functions, and even an interactive layout tool.

    I’m particularly enamoured by these:

    • We’ve introduced new centrality measures - these help you determine the most influential vertices, based on their connections to other vertices
    • You now have more control over the styling of graphs – for example, you can vary the size or color of a node based on its centrality

    I’ve used these two new features to identify the most influential MaplePrimes users. Get the worksheet here.

    @Carl Love – looks like you’re the biggest mover and shaker on MaplePrimes (well, according to the eigenvector centrality of the MaplePrimes interaction graph).
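Eigenvector centrality itself is easy to experiment with: it is the dominant eigenvector of the graph's adjacency matrix, which power iteration finds. A toy Python sketch (a hypothetical 4-vertex star graph, not the real MaplePrimes interaction data):

```python
def eigenvector_centrality(adj, iterations=100):
    """Power iteration on A + I (the shift guarantees convergence
    even for bipartite graphs, whose extreme eigenvalues tie in magnitude)."""
    n = len(adj)
    v = [1.0] * n
    for _ in range(iterations):
        w = [v[i] + sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(w)                 # normalize so the largest entry is 1
        v = [x / norm for x in w]
    return v

# A star graph: vertex 0 interacts with everyone, the rest only with vertex 0.
star = [[0, 1, 1, 1],
        [1, 0, 0, 0],
        [1, 0, 0, 0],
        [1, 0, 0, 0]]
c = eigenvector_centrality(star)
print(c)   # vertex 0, the hub, has the largest centrality
```

For the star graph the leaves converge to 1/sqrt(3) of the hub's score, matching the exact dominant eigenvector (sqrt(3), 1, 1, 1).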

    We’ve also started using graph theory elsewhere in Maple. For example, you can generate a static call graph to visualize the dependencies between procedure calls.

    You now get smoother edges for 3d surfaces with non-numeric values. Just look at the difference between Maple 2019 and 2020 for this plot.

    Printing and PDF export has gotten a whole lot better.  We’ve put a lot of work into the proper handling of plots, tables, and interactive components, so the results look better than before.

    For example, plots now maintain their aspect ratio when printed. So your carefully constructed psychrometric chart will not be squashed and stretched when exported to a PDF.

    We’ve overhauled the start page to give it a cleaner, less cluttered look – this is much more digestible for new users (experienced users might find the new look attractive as well!). There’s a link to the Maple Portal, and an updated Maple Fundamentals guide that helps new users learn the product.

    We’ve also linked to a guide that helps you choose between Document and Worksheet, and a link to a new movie.

    New messages also guide new users away from some very common mistakes. For example, students often type “e” when referring to the exponential constant – a warning now appears if that is detected.

    We’re always tweaking existing functions to make them faster. For example, you can now compute the natural logarithm of large integers much more quickly and with less memory.

    This calculation is about 50 times faster in Maple 2020 than in prior versions:
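As an illustration of the kind of calculation that benefits (this is a representative sketch, not necessarily the exact benchmark referred to above):

```maple
# Build an integer with roughly 477,000 digits, then take its
# natural logarithm numerically -- this class of computation is
# much faster and lighter on memory in Maple 2020
n := 3^(10^6):
evalf(ln(n));
```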

    Many of our educators have asked for this – the linear algebra tutorials now return step-by-step solutions to the main document, so you have a record of what you did after the tutor is closed.

    Continuing with this theme, the Student:-LinearAlgebra context menu now features several new linear algebra visualizations. This, for example, is an eigenvector plot.

    Maple can now numerically evaluate various integral transforms.

    The numerical inversion of integral transforms has application in many branches of science and engineering.
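A short sketch of the idea, assuming the standard inttrans package (the symbolic calls below are long-standing; the exact calling patterns supported for purely numerical evaluation in Maple 2020 are documented on the ?inttrans help pages):

```maple
with(inttrans):
# A classic symbolic transform: returns 1/(s^2 + 1)
laplace(sin(t), t, s);
# An inverse Laplace transform evaluated at a numeric point;
# evalf forces a floating-point result
evalf(invlaplace(exp(-sqrt(s)), s, 1.0));
```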

    Maple is the world’s best tool for the symbolic solution of ODEs and PDEs, and in each release we push the boundary back further.

    For example, Maple 2020 has improved tools for finding hypergeometric solutions to linear PDEs.

    This might seem like a minor improvement that’s barely worth mentioning, but it’s one I now use all the time! You can now reorder worksheet tabs just by clicking and dragging.

    The Hough transform lets you detect straight lines and line segments in images.

    Hough transforms are widely used in automatic lane detection systems for autonomous driving. You can even detect the straight lines on a Sudoku grid!

    The Physics package is always a pleasure to write about because it's something we do far better than the competition.

    The new explore option in TensorArray combines two themes in Maple - Physics and interactive components. It's an intuitive solution to the real problem of viewing the contents of higher dimensional tensorial expressions.

    There are many more updates to Physics in Maple 2020, including a completely rewritten FeynmanDiagrams command.

    The Quantum Chemistry Toolbox has been updated with more analysis tools and curriculum material.

    There’s more teaching content for general chemistry.

    Among the many new analysis functions, you can now visualize transition orbitals.

    I promised you a story about acoustic guitars and Maple 2020, didn’t I?

    I often start a perfectly innocuous conversation about Maple that descends into several weeks of intense, feverish work.

    The work is partly for me, but mostly for my colleagues. They don’t like me for that.

    That conversation usually happens on a Friday afternoon, when we’re least prepared for it. On the plus side, this often means a user has planted a germ of an idea for a new feature or improvement, and we just have to will it into existence.

    One Friday afternoon last year, I was speaking to a user about acoustic guitars. He wanted to synthetically generate guitar chords with reverb, and export the sound to a 32-bit Wave file. All of this, in Maple.

    This started a chain of events that involved least-squares filters, frequency response curves, convolution, Karplus-Strong string synthesis and more. We’ll package up the results of this work, and hand it over to you – our users – over the next one or two releases.

    Let me tell you what made it into Maple 2020.

    Start by listening to this:

    It’s a guitar chord played twice, the second time with reverb, both generated with Maple.

    The reverb was simulated by convolving the artificially generated guitar chord with an impulse response. I had a choice of convolution functions in the SignalProcessing and AudioTools packages.

    Both gave the same results, but we found that SignalProcessing:-Convolution was much faster than its AudioTools counterpart.

    There was no good reason for the speed difference, so R&D modified AudioTools:-Convolution to leverage SignalProcessing:-Convolution in the cases where their options are compatible. In this application, AudioTools:-Convolution is 25 times faster in Maple 2020 than in Maple 2019!

    We also discovered that the underlying library we use for the SignalProcessing package (the Intel IPP) provides two methods for convolution that we were previously not exploiting: one that uses an explicit formula, and a “fast” method that uses FFTs. We modified SignalProcessing:-Convolution to accept both options (previously, we used just one of the methods).
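A minimal call looks like this (the post doesn't name the new method option, so this sketch shows only the basic usage; see the SignalProcessing:-Convolution help page for the method choices):

```maple
with(SignalProcessing):
# Two short float[8] signals -- the datatype matters, since the
# computation is handed to compiled Intel IPP routines
a := Array([1.0, 2.0, 3.0], datatype = float[8]):
b := Array([0.0, 1.0, 0.5], datatype = float[8]):
# Full linear convolution: returns an Array of length
# numelems(a) + numelems(b) - 1
Convolution(a, b);
```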

    That’s the story behind two new features in Maple 2020. Look at the entirety of what’s new in this release – there’s a tale for each new feature. I’d love to tell you more, but I’d run out of ink before I finish.

    To read about everything that’s new in Maple 2020, go to the new features page.

    Today we celebrated International Women's Day at Maplesoft. As part of our celebration, we had a panel of 5 successful women from within the community share their experiences and insights with us. 

    Hearing these women speak has given me the courage to share my personal experience and advice to women in technology. If what I write here helps even one woman, then I will have accomplished something great today. 

    -----

    What do you do at Maplesoft?

    My name is Karishma. I'm the Director, Product Management - Academic. 

     

    Where did you grow up and where did you go to school (Diploma/degree)?

    I was born and raised in Montreal to parents of Indian descent. Like most Indian parents, they “encouraged" me to pursue a career in either Law, Medicine, or Engineering, despite my true calling to pursue a career in theatre (at least that's what I believed it to be at the time).

    Given that I had no siblings to break the ice, and that rebelling wasn't my Modus Operandi (that came much later), I did what any obedient teenager would do: I pursued a career in Electrical Engineering at McGill University. In my mind, this was the fastest way to landing a job and fleeing the proverbial nest. 

    Electrical Engineering was far from glamorous, and after two years, I was ready to switch. It was due to the sheer insistence of my mother that I completed the degree. 

    So how did I end up pursuing a graduate degree in Biomedical Engineering at McGill University? It wasn't the future I envisioned, but the economic downturn in 2001-2002 saw a massive decrease in hiring, and the job that I had held out patiently for during those four years became a far-off dream. So I did the thing I never imagined I would: I accepted the offer to pursue a Master's and the very generous stipend that came with it. In case you are wondering, I only applied because my father nagged me into submission. (Insistence and nagging are two innate traits of Indian parents.)

    Contrary to what I expected, I loved my Master's degree! It gave me the freedom to immerse myself wholly in a topic I found exciting and allowed me to call the shots on my schedule, which led to my involvement in student government as VP Internal. But apart from the research and the independence, pursuing a master's degree opened doors to opportunities that I couldn't have imagined, such as an internship with the International Organization for Migration in Kenya, a job offer in Europe, and the chance to work at Maplesoft. (I guess my parents did know what was best for me.) 

     

    What is the best part of your job?

    It's figuring out how to solve problems our users have as well as the ones they might not realize they have. 

    At Maplesoft, I work with some of the most brilliant minds I've ever encountered to build a product that makes math more accessible to our users, whether they be a student, researcher, scientist, or engineer. 

    Some of the aspects of my role that I love the most include: 

    • speaking to and learning from our customers, 
    • interpreting the meaning behind their words, facial expressions, vocal intonations, and body language, and
    • collaborating with the sales, marketing, and development teams to turn what was 'said' into tangible actions that will enhance the product and user experience. 

    Most nights, when I leave work, I do so with a sense of excitement because I know my actions and the actions of those I work with will help our users achieve their goals and ambitions. There's no better high. 

     

    What advice do you have for young women interested in a career in your field? 
    Throughout my career, I've had the privilege to work with some amazing women and men who've given me advice that I wish I had known when I was an undergraduate student. If you are a woman pursuing a STEM degree or starting your first job in a tech firm, here are three tips that may help you: 

    1.   Don't be scared of the 'N' word. 
    Don't be scared of NETWORKING. I know it can be intimidating, but it truly is the best way to land a job, advance your career, or meet the person you admire most. Remember that networking can take place anywhere - it's not exclusive to networking events. Some advice that I received that helped me overcome my fear of networking: 

    • Smile - Before you approach a person or enter a networking session, force yourself to smile. It will help you defuse any tension you are holding and will make you appear more approachable. 
    • Research - Take the time to research the person(s) you would like to meet. Find out as much as you can about them and their company. Prepare some icebreaker questions and other questions to help carry the conversation forward ahead of time. Remember that people like to talk about themselves and their experiences. 
    • Don't take it personally - The person you approach may find networking equally tricky. So if they seem disinterested or aloof, don't take it personally. 
    • Just do it - Networking gets easier with practice. Don't let a failed attempt set you back. The worst thing that will happen is that you don't make a connection. 

     

     2.   It's ok to ask for help.
    If you are a woman in an environment that is dominated by men, you might hesitate to ask for help. DON'T! There's nothing wrong with asking for help. That said, many women ask for help in a way that undermines their confidence and thus erodes others’ perception of them. Next time you need help, have a question or require clarification, take a moment to phrase your request, so you don't inadvertently put yourself down. 

     

    3.   Play to your strengths
    Don't think you need to know everything. Nobody expects it. If you landed a new job or co-op placement and find yourself doing things you've never done before and that don't come naturally to you yet, don't let your brain convince you that you don't deserve it. Remember that you earned it because of your qualities and strengths. 
