MaplePrimes Posts

MaplePrimes Posts are for sharing your experiences, techniques and opinions about Maple, MapleSim and related products, as well as general interests in math and computing.

Latest Posts
It seems once in a while someone asks about converting from degrees-minutes-seconds format to decimal degrees or the other way around.

    I created a little procedure for just that purpose. 

    restart; gc()


A little procedure to convert degrees, minutes, seconds to decimal degrees and vice versa.


dms := proc(d, m := 0, s := 0)
  local con, d1, d2, d3;
  if 0 < frac(d) then
    d1  := floor(d);
    d2  := floor(60*frac(d));
    d3  := 60*frac(60*frac(d));
    con := cat(d1, `° `, d2, `' `, d3, `"`)
  else
    con := d + m/60. + s/3600.
  end if;
  print(con)
end proc


    Examples of use.

     

    dms(45.2365)

    `45° 14' 11.4000"`

    (1)

    dms(45, 14, 11.4)

    45.23650000

    (2)


    Download dms.mw

Edit: a quick fix is to remove the decimal points (which turn the fractions into floats) and to change print(con) to print(evalf(con)), to avoid rounding issues.
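For reference, here is a minimal sketch of that revision (my own reading of the edit above, not the attached worksheet; the name dms2 is only used to distinguish it from the original):

dms2 := proc(d, m := 0, s := 0)
  local con, d1, d2, d3;
  if 0 < frac(d) then
    d1  := floor(d);
    d2  := floor(60*frac(d));
    d3  := 60*frac(60*frac(d));
    con := cat(d1, `° `, d2, `' `, d3, `"`)
  else
    con := d + m/60 + s/3600     # exact rationals, no premature rounding
  end if;
  print(evalf(con))              # evaluate to a float only when printing
end proc:

dms2(45, 14, 11.4);              # 45.23650000, as in example (2) above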

    This post is motivated by a question asked by @vs140580  ( The program is making intercept zero even though There is a intercept in regression Fit (A toy code showing the error attached) ).

The problem met by @vs140580 comes from the large magnitudes of the (two) regressors and the failure of Fit/LinearFit to find the correct solution unless an unreasonably large value of Digits is used.
This problem has been answered by @dharr after scaling the data (which is, whenever possible, good practice) and by
myself using explicitly the method called "Normal Equations" (see https://en.wikipedia.org/wiki/Least_squares).

The method of "Normal Equations" relies upon the inversion of a symmetric square matrix H whose dimension equals the number of coefficients of the model to fit.
It is well known that this method can lead to matrices H that are extremely ill-conditioned, and thus to severe numerical problems (the most common situation being the fit of a high-degree polynomial).
     

For more about the alternative methods used below (QR decomposition, Givens rotations):

    • In English: http://www.math.kent.edu/~reichel/courses/intr.num.comp.1/fall09/lecture4/lecture4.pdf
    • In French: https://moodle.utc.fr/pluginfile.php/24407/mod_resource/content/5/MT09-ch3_ecran.pdfI


    The attached file illustrates how the QR decomposition method works.
    The test case is @vs140580's.

    Maybe the development team could enhance Fit/LinearFit in future versions by adding an option which specifies what method is to be used?

     

    restart:

    with(Statistics):

    interface(version)

    `Standard Worksheet Interface, Maple 2015.2, Mac OS X, December 21 2015 Build ID 1097895`

    (1)

    Data := Matrix([[4.74593554708566, 11385427.62, 2735660038000], [4.58252830679671, 25469809.77, 12833885700000], [4.29311160501838, 1079325200, 11411813200000000], [4.24176959154225, 1428647556, 18918585950000000], [5.17263072694618, 1428647556, 18918585950000000], [4.39351114955735, 1877950416, 30746202150000000], [4.39599006758777, 1428647556, 18918585950000000], [5.79317412396815, 2448320309, 49065217290000000], [4.48293612651735, 2448320309, 49065217290000000], [4.19990181982522, 2448320309, 49065217290000000], [5.73518217699046, 1856333905, 30648714900000000], [4.67943831980476, 3071210420, 75995866910000000], [4.215240105336, 2390089264, 48670072110000000], [4.41566877563247, 3049877383, 75854074610000000], [4.77780395369828, 2910469403, 74061327950000000], [4.96617430604669, 1416936352, 18891734280000000], [4.36131111330988, 1416936352, 18891734280000000], [5.17783192063198, 1079325200, 11411813200000000], [4.998266287191, 1067513353, 11402362980000000], [4.23366152474871, 2389517120, 48661380410000000], [4.58252830679671, 758079709.3, 5636151969000000], [6.82390874094432, 1304393838, 14240754750000000], [4.24176959154225, 912963601.2, 8621914602000000], [4.52432881167557, 573965555.4, 3535351888000000], [4.84133601918601, 573965555.4, 3535351888000000], [6.88605664769316, 732571773.2, 5558875538000000], [4.35575841415627, 1203944381, 13430693320000000], [4.42527441640593, 955277678, 8795128298000000], [6.82390874094432, 997591754.9, 8968341995000000], [4.35144484433733, 143039477.1, 305355143300000]]):

    # Direct use of LinearFit.
    #
    # As far as I know LinearFit is based on the resolution of the "Normal Equations"
    # (see further down), a system of equations that is known to be ill-conditioned
    # when regressors have large values (in particular when polynomial regression
    # is used).

    X := Data[.., [2, 3]]:
    Y := Data[.., 1]:


    LinearFit(C1+C2*v+C3*w, X, Y, [v, w]);

    Warning, model is not of full rank

     

    HFloat(6.830889923844425e-9)*v-HFloat(2.275143726335622e-16)*w

    (2)

# Due to roundoff, the 3-by-3 matrix involved in the "Normal Equations" (NE)
# appears to be of rank < 3.
# The rank of this matrix is equal to 1+rank(X), and one can easily verify that
# the 2 columns of X are linearly independent:

    LinearAlgebra:-LinearSolve(X, Vector(numelems(Y), 0));
    LinearAlgebra:-Rank(X);

     

    Vector[column]([[0.], [-0.]])

     

    2

    (3)

# Solve the least squares problem by using the NE explicitly.
#
# To account for an intercept we augment X with a column vector of "1"s,
# traditionally put in column one.
    Z := `<|>`(Vector(numelems(Y), 1), X):  
    A := (Z^+ . Z)^(-1) . Z^+ . Y;          # Normal Equations

    A := Vector(3, {(1) = 4.659353816079307, (2) = 0.5985084089529947e-9, (3) = -0.27350964718426345e-16}, datatype = float[8])

    (4)

# What is the rank of Z?
# Due to the scale of X compared to "1", Rank fails to return the correct value
# of rank(Z), which is obviously equal to rank(X)+1.

    LinearAlgebra:-LinearSolve(Z, Vector(numelems(Y), 0));
    LinearAlgebra:-Rank(Z);

    Vector[column]([[0.], [0.], [-0.]])

     

    2

    (5)


    A WORKAROUND : SCALING THE DATA

    model := unapply( LinearFit(C1+C2*v+C3*w, Scale(X), Scale(Y), [v, w]), [v, w] );

    proc (v, w) options operator, arrow; -HFloat(1.264691577813453e-15)+HFloat(0.6607154853418553)*v-HFloat(0.8095150669884322)*w end proc

    (6)

    mX, sX := (Mean, StandardDeviation)(X);
    mY, sY := (Mean, StandardDeviation)(Y);

    mX, sX := Vector[row](2, {(1) = 1447634550.7963333, (2) = 24441399854567932.}, datatype = float[8]), Vector[row](2, {(1) = 871086770.7242773, (2) = 23354440973344224.}, datatype = float[8])

     

    HFloat(4.857279402730572), HFloat(0.789073010656694)

    (7)

    MODEL := model((x1-mX[1])/sX[1], (x2-mX[2])/sX[2]) * sY + mY

    HFloat(4.659353816079309)+HFloat(5.985084089530032e-10)*x1-HFloat(2.7350964718426736e-17)*x2

    (8)

# Check that the vector of regression coefficients is almost equal to A found above
# (relative error less than 10^(-14))

    A_from_scaling       := < coeffs(MODEL) >:
    Relative_Discrepancy := (A_from_scaling - A) /~ A

    Relative_Discrepancy := Vector(3, {(1) = 0.5718679809000842e-15, (2) = 0.14166219140514066e-13, (3) = 0.14308415396983588e-13}, datatype = float[8])

    (9)


    THE QR DECOMPOSITION  (applied on raw data)

The QR decomposition, as well as the Givens rotation method, are two alternatives to the NE method
for finding the vector of regression coefficients.
Both of them are known to be less sensitive to the magnitudes of the regressors and do not (always) require
a scaling of the data (which can be quite complex with polynomial regression, or when some
transformation is used to linearize the statistical model, for instance Y=a*exp(b*X) --> log(Y)=log(a)+b*X).

    N := numelems(Y);
    P := numelems(Z[1]);

    30

     

    3

    (10)

    # Perform the QR decomposition of Z.

    Q, R := LinearAlgebra:-QRDecomposition(Z, fullspan);

    Q, R := Vector(4, {(1) = ` 30 x 30 `*Matrix, (2) = `Data Type: `*float[8], (3) = `Storage: `*rectangular, (4) = `Order: `*Fortran_order}), Vector(4, {(1) = ` 30 x 3 `*Matrix, (2) = `Data Type: `*float[8], (3) = `Storage: `*triangular[upper], (4) = `Order: `*Fortran_order})

    (11)

# Let C be the column vector of length P defined by:

    C := (Q^+ . Y)[1..P];

    C := Vector(3, {(1) = -26.6044149698075, (2) = -.517558353158176, (3) = -.881007519371895})

    (12)

    # Then the vector of regression coefficients is given by:

    A_QR                 := (R[1..P, 1..P])^(-1) . C;
    Relative_Discrepancy := (A_QR - A) /~ A

    A_QR := Vector(3, {(1) = 4.65935381607931, (2) = 0.5985084090e-9, (3) = -0.2735096472e-16})

     

    Relative_Discrepancy := Vector(3, {(1) = 0.3812453206e-15, (2) = 0.1105656128e-13, (3) = 0.1216778632e-13})

    (13)

# The matrix H = Z^+ . Z can be written as

    H                    := Z^+ . Z:
    H_QR                 := R^+ . Q^+ . Q . R:
    Relative_Discrepancy := (H_QR - H) /~ H

    Relative_Discrepancy := Matrix(3, 3, {(1, 1) = -0.1184237893e-15, (1, 2) = 0., (1, 3) = -0.3491343943e-15, (2, 1) = 0., (2, 2) = 0.1930383052e-15, (2, 3) = 0.3369103254e-15, (3, 1) = -0.1745671971e-15, (3, 2) = -0.5053654881e-15, (3, 3) = -0.1366873891e-15})

    (14)

    # H_QR expression is required to obtain the covariance matrix of the regression coefficients.


     

    Download LeastSquares_and_QRdecomposition.mw


     

     

# contourplot3d piggybacks on top of plot3d.
# For the "coloring=[lowColor, highColor]" option to work, the "filledregions=true" option must be present.
# If "filledregions=true" is not present, plot3d will throw an error.
# This code shows the three cases, only one of which will work.
Note to support: I cannot add a new tag; contourplot3d should be a tag.

    restart;
    with(plots);
    with(ColorTools);
    cGr4s := Color([0.50, 0.50, 0.50]);
    contourplot3d(-5*d/(d^2 + y^2 + 1), d = -3 .. 3, y = -3 .. 3, color = black, thickness = 3, coloring = [cGr4s, cGr4s], contours = [-2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2]);
    contourplot3d(-5*d/(d^2 + y^2 + 1), d = -3 .. 3, y = -3 .. 3, filledregions = false, color = black, thickness = 3, coloring = [cGr4s, cGr4s], contours = [-2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2]);
    contourplot3d(-5*d/(d^2 + y^2 + 1), d = -3 .. 3, y = -3 .. 3, filledregions = true, color = black, thickness = 3, coloring = [cGr4s, cGr4s], contours = [-2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2]);

    CREATING GRAPH EQUATION OF "DNA" IN CARTESIAN SPACE USING PARAMETRIC SURFACE EQUATION RUN ON MAPLE SOFTWARE

    ENJOY...

     

    Happy Springtime to all in the MaplePrimes Community! Though some in our community may not live in the northern hemisphere where flowers are beginning to bloom, many will be celebrating April holidays like Ramadan, Passover, and Easter.

    One of my favorite springtime activities is decorating eggs. Today, the practice is typically associated with the Christian holiday of Easter. However, painted eggs have roots in many cultures.

    For over 3,000 years, painting eggs has been a custom associated with the holiday of Nowruz, or Persian New Year, during the spring equinox. Furthermore, in the Bronze Age, decorated ostrich eggs were traded as luxury items across the Mediterranean and Northern Africa. Dipped eggs have also played an important role in the Jewish holiday of Passover since the 16th century.

    To celebrate this tradition, I would like to invite all of the Maplesoft community to create a decorated egg of their own with the Easter Egg Art Maple Learn document. In this document, an ovoid egg equation is used to define the shape of an egg. 



    The ovoid egg equation mimics the shape of a typical hen’s egg. Each bird species lays differently shaped eggs. For example, an ostrich’s egg is more oblong than an owl’s, and an owl’s egg is rounder than a goose’s. Surprisingly, every egg can be described by a single equation with four parameters:



    Learn more about this equation and others like it with John May’s Egg Formulas Maple Learn document.
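The equation itself appears as an image in the original post and is not reproduced here. Purely as an illustrative stand-in (this is not the document's four-parameter equation), the following implicit curve already produces a recognizable egg shape, with a and b controlling the half-axes and c the asymmetry between the blunt and pointed ends:

restart:
a, b, c := 2, 1.5, 0.2:    # illustrative values only
plots:-implicitplot(x^2/a^2 + y^2*(1 + c*x)/b^2 = 1,
                    x = -2.2 .. 2.2, y = -1.8 .. 1.8,
                    scaling = constrained, thickness = 3);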

    The Easter Egg Art document includes 9 different decorative elements; users can change the color, position, and size of each in order to create their own personal egg! The egg starts out looking like this:



    In just a couple of minutes, you can create a unique egg. Have fun exploring this document and share a screenshot of your egg in the comments below!  Here’s one I made:


    How to Create Graph Equation of Water Drop Wave in Cartesian Space using single Implicit Function only run by Maple Software

The equation is:   z = -cos( (x^2 + y^2)^0.5 - a ),  with parameter a moving from 0 to 2*Pi.

    Enjoy...

Please click the link below to see the full equation in Maple software:

    Water_Drop_Wave.mw
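As a minimal sketch of what this surface looks like (here plotted explicitly with plot3d rather than through the implicit form used in the attached worksheet), the animation below sweeps the phase parameter a from 0 to 2*Pi:

restart:
with(plots):
# z = -cos(sqrt(x^2 + y^2) - a), animated over the phase parameter a
animate(plot3d,
        [-cos(sqrt(x^2 + y^2) - a), x = -4*Pi .. 4*Pi, y = -4*Pi .. 4*Pi,
         grid = [80, 80], style = surface],
        a = 0 .. 2*Pi, frames = 24);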

MapleFlow is showing the exact same icon in the taskbar as Maple when both are open.  It would be nice if they were slightly different.

     

    Several studies, such as “Seeing and feeling volumes: The influence of shape on volume perception”, have shown that people have a tendency to overestimate the volume of common objects, such as glasses and containers, that are tall and thin and underestimate those that are short and wide; this phenomenon is called “elongation bias”. 

     

    Sue Palmberg, an instructor at Edwin O. Smith High School, created and shared with us a lab activity for students to design a glass in Maple and use volumes of revolution to determine the amount of liquid it can hold. This lab was then turned into this Maple Learn document: Piecewise Volumes of Revolution Activity.

     

    Use this document to create your own glass or goblet shape and determine its volume. Simply create a piecewise function that will define the outside shape of your glass between your chosen bounds and another piecewise function to define the hollowed-out part of your creation. The document will graph the volumes of revolution that represent your glass and calculate the relevant volume integral for you.

     

    Here is my own goblet-shaped creation: 

    I used this piecewise function to define it:

    After creating the outline of my goblet, I constructed a function for the hollow part of the goblet – the part that can actually hold liquid.

     

     

Using Context Panel operations and the volume integral provided by the document, I know that the volume of the hollow part of my goblet is approximately 63.5, so my goblet would hold around 63.5 cubic units of liquid when full.
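For readers who prefer to reproduce this kind of computation directly in Maple rather than in Maple Learn, here is a minimal sketch of the idea; the piecewise profiles below are invented for illustration and are not my goblet:

restart:
# Radius of the goblet as a function of height x: a flat base, a thin stem,
# then a widening bowl (all numbers invented for this sketch).
r_outer := x -> piecewise(x <= 3/10, 1, x <= 2, 1/5, 1/5 + 13/20*(x - 2)):
# Radius of the hollowed-out interior (the part that actually holds liquid).
r_inner := x -> piecewise(x <= 11/5, 0, 3/20 + 13/20*(x - 11/5)):

plot([r_outer(x), r_inner(x)], x = 0 .. 4, labels = ["height", "radius"]);

# Volume of revolution about the x-axis: V = Pi * int(r(x)^2, x).
V_liquid := evalf(Pi*int(r_inner(x)^2, x = 0 .. 4));                  # what the goblet can hold
V_glass  := evalf(Pi*int(r_outer(x)^2 - r_inner(x)^2, x = 0 .. 4));   # the glass material itself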

    Create your own goblets of varying shapes and see if their volumes surprise you; elongation bias can be tricky! For some extra help, check out the Piecewise Functions and Plots and Solids of Revolution - Volume Derivation documents!

    Creating Graph Equation of An Apple in Cartesian Space using single Implicit Function only run by Maple software

    Enjoy...

    Please click the link below to see full equation on Maple file:

    2._Apel_3D_A.mw

     

    Creating Graph Equation of A Candle on Cartesian Plane using single Implicit Function only run by Maple software

    Enjoy...

    3D_Candle.mw

    Sea_Shells.mw

     

     

     

Today I'm very grateful to have the inspiration to create a graph equation of a 3D Candle in Cartesian space using a single 3D implicit function only, run by Maple software.

    Enjoy... 

    Candle_1.mw

     

Today I got the inspiration to create a graph equation of a "Petrol Truck" using only a single implicit equation in Cartesian space, run by Maple software.

    Maple software is amazing...

    Enjoy...

     

CREATING A 3D GRAPH EQUATION OF A BACTERIOPHAGE USING ONLY A SINGLE IMPLICIT EQUATION IN CARTESIAN SPACE RUN BY MAPLE SOFTWARE

    MAPLE SOFTWARE IS AMAZING...

    ENJOY...

     

    GRAPH EQUATION OF A FEATHER

    AND THE EQUATION IS:

    ENJOY...

    I like this Equation and post it because it is so beautiful...

    Click this link below to see full equation and download the Maple file: 

    Bulu_Angsa_3.mw

     

    GRAPH EQUATION OF "383" CREATED BY DHIMAS MAHARDIKA

    ENJOY...

    with(plots):

    DHIMAS MAHARDIKA EQUATION

    plots:-implicitplot(15-8.*cos(y)^(79/2)-32.*cos(y)^(37/2)+96.*cos(y)^(33/2)-96.*cos(y)^(29/2)+4.*cos(x)^(61/2)+4.*cos(x)^(31/2)-12.*cos(x)^(27/2)+12.*cos(x)^(23/2)+24.*cos(y)^29-48.*cos(y)^27+16.*cos(y)^8-64.*cos(y)^6+96.*cos(y)^4-4.*cos(x)^(19/2)-6.*cos(x)^19-4.*cos(x)^(57/2)+32.*cos(y)^(25/2)+24.*cos(y)^25+8.*cos(y)^(75/2)-cos(x)^38+cos(y)^50-64.*cos(y)^2+4.*cos(x)^2-6.*cos(x)^4+4.*cos(x)^6-cos(x)^8+12.*cos(x)^21-6.*cos(x)^23, x = -15 .. 15, y = -15 .. 15, numpoints = 50000, thickness = 4, colour = blue)

     


    Download 383.mw

     

Drawing the Eiffel Tower using an Implicit Equation in Cartesian Space

     

    The recent Maple 2023 release comes with a multitude of new features, including a new Canvas Scripting Gallery full of templates for creating interactive Maple Learn documents.

    The Maple Learn Scripting Gallery can be accessed through Maple, by searching “BuildInteractiveContent Maple2023” in the search bar at the top of the application and clicking on the only result that appears. This will bring you to the help page titled “Build and Share Interactive Content”, which can also be found by searching “scripting gallery” in the search bar of a Maple help page window. The link to the Maple Learn Scripting Gallery is found under the “Canvas Scripting” section on this help page and clicking on it will open a Maple workbook full of examples and templates for you to explore.

    The interactive content in the Scripting Gallery is organized into five main categories – Graphing, Visualization, Quiz, Add-ons and Options, and Applications Optimized for Maple Learn – each with its own sub-categories, templates, and examples.

    One of the example scripts that I find particularly interesting is the “Normal Distribution” script, under the Visualizations category.

     

     

    All of the code for each of the examples and templates in the gallery is provided, so we can see exactly how the Normal Distribution script creates a Maple Learn canvas. It displays a list of grades, a plot for the grade distribution to later appear on, math groups for the data’s mean and variance, and finally a “Calculate” button that runs a function called UpdateStats.

    The initial grades loaded into the document result in the below plot, created using Maple’s DensityPlot and Histogram functions, from the Statistics package. 
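Outside the scripting context, the kind of plot the script produces can be sketched in a few lines of plain Maple; the grade list below and the overlay of a fitted normal density on a histogram are my own guesses at what the script does, not its actual code:

restart:
with(Statistics):

grades := [62, 71, 75, 78, 80, 83, 85, 88, 90, 94]:      # invented sample data
mu     := evalf(Mean(grades));
sigma  := evalf(StandardDeviation(grades));

# Histogram of the data with the density of a fitted normal distribution on top.
X := RandomVariable(Normal(mu, sigma)):
plots:-display(
    Histogram(grades, frequencyscale = relative),
    DensityPlot(X, color = red, thickness = 3)
);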




     

    The UpdateStats function takes the data provided in the list of grades and uses a helper function, getDist, to generate the new plot to display the data, the distribution, the mean, and the variance. Then, the function uses a Script object to update the Maple Learn canvas with the new plot and information.

    The rest of the code is contained in the getDist function, which uses a variety of functions from Maple’s Statistics package. The Normal Distribution script takes advantage of Maple’s ability to easily calculate mean and variance for data sets, and to use that information to create different types of random variable distributions.

Using the “Interactive Visualization” template, provided in the gallery, many more interactive documents can be created, like this Polyhedra Visualization and this Damped Harmonic Oscillator – both from the Scripting Gallery – or like my own Linear Regression: Method of Least Squares document.


     

    Another new feature of Maple 2023 is the Quiz Builder, also featured in the Scripting Gallery. Quizzes created using Quiz Builder can be displayed in Maple or launched as Maple Learn quizzes, and the process for creating such a quiz is short.

    The QuizBuilder template also provides access to many structured examples, available from a dropdown list:


    As an example, check out this Maple Learn quiz on Expected Value: Continuous Practice. Here is what the quiz looks like when generated in Maple:


     

    This quiz, in particular, is “Fill-in the blank” style, but Maple users can also choose “Multiple Choice”, “True/False”, “Multiple Select”, or “Multi-Line Feedback”. It also makes use of all of the featured code regions from the template, providing functionality for checking inputted answers, generating more questions, showing comprehensive solutions, and providing a hint at the press of a button.

    Check out the Maple Learn Scripting Gallery for yourself and see what kinds of interactive content you can make for Maple and Maple Learn!

     

    Maple 2023: The colorbar option for contour plots does not work when used with the Explore command. See the example below.

    No_colorbar_when_exploring_contour_plots.mw
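The attached worksheet contains the actual example; a minimal call of the kind being described (my own reconstruction, not the attached code) looks something like this:

restart:
# On its own, contourplot displays its colorbar as expected ...
plots:-contourplot(sin(x*y), x = -2 .. 2, y = -2 .. 2, colorbar = true);

# ... but when the same kind of plot is wrapped in Explore (Maple 2023),
# the colorbar is not shown.
Explore(plots:-contourplot(sin(a*x*y), x = -2 .. 2, y = -2 .. 2, colorbar = true),
        a = 0.5 .. 2.0);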
     

    Just installed Maple 2023 on Macbook Air M1.

Maple Tasks don't work :-(



This post is closely related to this one, Centered Divided Difference approximations, whose purpose is to build Finite-Difference (FD) approximation schemes.
In that post I also talked, without explicitly naming it, about the Truncation Error, see for instance https://en.wikiversity.org/wiki/Numerical_Analysis/Truncation_Errors.

I am focusing here on a less known concept named the "Equivalent Equation" (sometimes named "Modified Equation").
The seminal paper (no free access, which is surprising since it was first published 50 years ago) is this one by Warming and Hyett: https://www.sciencedirect.com/science/article/pii/0021999174900114.
For a textbook example you can see here: https://guillod.org/teaching/m2-b004/TD2-solution.pdf.
An alternative method to that of Warming and Hyett to derive the Equivalent Equation is given here (in French):
http://www.numdam.org/item/M2AN_1997__31_4_459_0.pdf (Carpentier et al.)

I have never heard of the concept of the Modified Equation applied to elliptic PDEs; its main domain of application is advection PDEs (parabolic PDEs in simpler cases).

Basically, any numerical scheme for solving an ODE or a PDE has a Truncation Error: this means there is some discrepancy between the exact solution of the PDE and the solution the scheme provides.
This discrepancy depends on the truncation error, in space alone for ODEs or elliptic PDEs, or in space and time for hyperbolic or advection PDEs.

One main problem with the Truncation Error is that it does not tell you what impact it will have on the computed solution: for instance, will the scheme introduce diffusion, antidiffusion, scattering, ...?
The aim of the Modified Equation is to answer these questions by constructing the continuous equation that a given numerical scheme solves with a zero truncation error.
This is the original point of view Warming and Hyett developed in their 1974 paper.
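To make the idea concrete, here is a minimal sketch (mine, not taken from the attached worksheet) of how Maple recovers the modified equation of the classical first-order upwind scheme for the advection equation u_t + c*u_x = 0, with space step h and time step k:

restart:
# First-order upwind scheme for u_t + c*u_x = 0 (c > 0):
#   (u(x, t+k) - u(x, t))/k + c*(u(x, t) - u(x-h, t))/h = 0
scheme := (u(x, t + k) - u(x, t))/k + c*(u(x, t) - u(x - h, t))/h:

# Expand the scheme around (x, t) in powers of the steps h and k.
T := convert(mtaylor(scheme, [h = 0, k = 0], 3), Diff):
collect(T, [h, k]);

# The zeroth-order part reproduces u_t + c*u_x; the leading truncation error is
#   (k/2)*u_tt - (c*h/2)*u_xx.
# Eliminating u_tt with the PDE itself (u_tt = c^2*u_xx) gives the modified equation
#   u_t + c*u_x = (c*h/2)*(1 - c*k/h)*u_xx,
# i.e. the upwind scheme actually solves an advection-diffusion equation whose
# numerical diffusion coefficient (c*h/2)*(1 - c*k/h) vanishes only when c*k/h = 1.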

This is a subject I worked on 30 years ago (Maple V), and later in 2010.
It is very easy with Maple to do the painstaking development that Warming and Hyett did by hand half a century ago. It is even so easy that the trick used by Carpentier et al. to make the calculations easier has lost some of its interest.

Two examples are given in the attached file
    EquivalentEquation.mw

    If a moderator thinks that this post should be merged with Centered Divided Difference approximations, he can do it freely, I won't be offended.
     


This code enables building Centered Divided Difference (CDD) approximations of derivatives of a univariate function.
Depending on the stencil we choose, we can get arbitrarily high order approximations.

The extension to bivariate functions is based upon what is often named tensorization in numerical analysis: for instance diff(f(x, y), [x, y]) is obtained this way (the description here is purely notional)

1. Let CDD_x be the CDD approximation of diff( f(x), x ).
  CDD_x is a linear combination of shifted replicates of f(x)
2. Let s be one of these shifted replicates.
  Let CDD_y(s) be the CDD approximation of diff( s(y), y ).
3. Replace in CDD_x all shifted replicates by their corresponding expressions CDD_y(s)


    REMARKS:

• When I write, for instance, "approximation of diff(f(x), x)", this must be understood as shorthand for "approximation of diff(f(x), x) at the point x=a"
• I agree that a notation like, for instance, diff(f(a), a) is not rigorous and that something like a Leibniz notation would be better. Unfortunately I don't know how to get it when I use mtaylor.
       

    restart:


CDDF stands for Centered Divided Difference Formula

    CDDF := proc(f, A, H, n, stencil)
      description "f = target function,\nA = point where the derivatives are approximated,\nH = step,\nn = order of the derivative,\nstencil = list of points for the divided differenceCDDF\n";
      local tay, p, T, sol, unknown, Unknown, expr:

      tay := (s, m) -> convert(
                         eval(
                           convert(
                             taylor(op(0, f)(op(1, f)), op(1, f)=A, m),
                             Diff
                           ),
                           op(1, f)=A+s*H),
                         polynom
                       ):

      p   := numelems(stencil):
      T   := add(alpha[i]*tay(i, p+1), i in stencil):
      T   := convert(%, diff):

      if p > n+1 then
    sol := solve([seq(coeff(T, H, i)=0, i in subsop(n+1=NULL, [$0..p]))], [seq(alpha[i], i in stencil)])[];
      else
        sol := solve([seq(coeff(T, H, i)=0, i in subsop(n+1=NULL, [$0..n]))], [seq(alpha[i], i in stencil)])[];
      end if:

      if `and`(is~(rhs~(sol)=~0)[]) then
        WARNING("no solution found"):
        return
      else
        unknown := `union`(indets~(rhs~(sol))[])[];
        Unknown := simplify(solve(eval(T, sol) = Diff(op(0, f)(A), A$n), unknown));
        sol     := lhs~(sol) =~ eval(rhs~(sol), unknown=Unknown);
        expr    := normal(eval(add(alpha[i]*op(0, f)(A+i*H), i in stencil), sol));
      end if:

      return expr
    end proc:

    Describe(CDDF)


    # f = target function,
    # A = point where the derivatives are approximated,
    # H =
    # step,
    # n = order of the derivative,
# stencil = list of points for the divided difference
    #
    CDDF( f, A, H, n, stencil )
     

     

    # 2-point approximation of diff(f(x), x) | x=a

    CDDF(f(x), a, h, 1, [-1, 1]);
    convert(simplify(mtaylor(%, h=0, 4)), Diff);

    -(1/2)*(f(a-h)-f(a+h))/h

     

    Diff(f(a), a)+(1/6)*(Diff(Diff(Diff(f(a), a), a), a))*h^2

    (1)

    # 3-point approximation of diff(f(x), x$2) | x=a

    CDDF(f(x), a, h, 2, [-1, 0, 1]);
    convert(simplify(mtaylor(%, h=0)), Diff);

    -(-f(a-h)+2*f(a)-f(a+h))/h^2

     

    Diff(Diff(f(a), a), a)+(1/12)*(Diff(Diff(Diff(Diff(f(a), a), a), a), a))*h^2

    (2)

# 5-point approximation of diff(f(x), x$2) | x=a

    CDDF(f(x), a, h, 2, [$-2..2]);
    convert(simplify(mtaylor(%, h=0, 8)), Diff);

    -(1/12)*(f(a-2*h)-16*f(a-h)+30*f(a)-16*f(a+h)+f(a+2*h))/h^2

     

    Diff(Diff(f(a), a), a)-(1/90)*(Diff(Diff(Diff(Diff(Diff(Diff(f(a), a), a), a), a), a), a))*h^4

    (3)

    # 7-point approximation of diff(f(x), x$2) | x=a

    CDDF(f(x), a, h, 2, [$-3..3]);
    # simplify(taylor(%, h=0, 10));
    convert(simplify(mtaylor(%, h=0, 10)), Diff);

    -(1/180)*(-2*f(a-3*h)+27*f(a-2*h)-270*f(a-h)+490*f(a)-270*f(a+h)+27*f(a+2*h)-2*f(a+3*h))/h^2

     

    Diff(Diff(f(a), a), a)+(1/560)*(Diff(Diff(Diff(Diff(Diff(Diff(Diff(Diff(f(a), a), a), a), a), a), a), a), a))*h^6

    (4)

    # 4-point staggered approximation of diff(f(x), x$3) | x=a

    CDDF(f(x), a, h, 3, [seq(-3/2..3/2, 1)]);
    convert(simplify(mtaylor(%, h=0, 6)), Diff);

    -(f(a-(3/2)*h)-3*f(a-(1/2)*h)+3*f(a+(1/2)*h)-f(a+(3/2)*h))/h^3

     

    Diff(Diff(Diff(f(a), a), a), a)+(1/8)*(Diff(Diff(Diff(Diff(Diff(f(a), a), a), a), a), a))*h^2

    (5)

    # 6-point staggered approximation of diff(f(x), x$3) | x=a

    CDDF(f(x), a, h, 3, [seq(-5/2..5/2, 1)]);
    # simplify(taylor(%, h=0, 8));
    convert(simplify(mtaylor(%, h=0, 8)), Diff);

    (1/8)*(f(a-(5/2)*h)-13*f(a-(3/2)*h)+34*f(a-(1/2)*h)-34*f(a+(1/2)*h)+13*f(a+(3/2)*h)-f(a+(5/2)*h))/h^3

     

    Diff(Diff(Diff(f(a), a), a), a)-(37/1920)*(Diff(Diff(Diff(Diff(Diff(Diff(Diff(f(a), a), a), a), a), a), a), a))*h^4

    (6)

    # 5-point approximation of diff(f(x), x$4) | x=a

    CDDF(f(x), a, h, 4, [$-2..2]);
    convert(simplify(mtaylor(%, h=0, 8)), Diff);

    (f(a-2*h)-4*f(a-h)+6*f(a)-4*f(a+h)+f(a+2*h))/h^4

     

    Diff(Diff(Diff(Diff(f(a), a), a), a), a)+(1/6)*(Diff(Diff(Diff(Diff(Diff(Diff(f(a), a), a), a), a), a), a))*h^2

    (7)

    # 7-point approximation of diff(f(x), x$4) | x=a

    CDDF(f(x), a, h, 4, [$-3..3]);
    convert(simplify(mtaylor(%, h=0, 10)), Diff);

    (1/6)*(-f(a-3*h)+12*f(a-2*h)-39*f(a-h)+56*f(a)-39*f(a+h)+12*f(a+2*h)-f(a+3*h))/h^4

     

    Diff(Diff(Diff(Diff(f(a), a), a), a), a)-(7/240)*(Diff(Diff(Diff(Diff(Diff(Diff(Diff(Diff(f(a), a), a), a), a), a), a), a), a))*h^4

    (8)


    A FEW 2D EXTENSIONS

    # diff(f(x, y), [x, y]) approximation over a (2 by 2)-point stencil

    stencil := [-1, 1]:

    # step 1: approximate diff(f(x, y), x) over stencil < stencil >

    fx  := CDDF(f(x), a, h, 1, stencil):
    fx  := eval(% , f=(u -> f[u](y))):
    ix  := [indets(fx, function)[]]:

    # step 2: approximate diff(g(y), y) over stencil < stencil > where
    #         g represents any function in fx.

    fxy := add(map(u -> CDDF(u, b, k, 1, stencil)*coeff(fx, u), ix)):

    # step 3: rewrite fxy in a more convenient form

    [seq(u=op([0, 0], u)(op([0, 1], u), op(1, u)), u in indets(fxy, function))]:
    fxy := simplify( eval(fxy, %) );

    convert(mtaylor(fxy, [h=0, k=0]), Diff)

    (1/4)*(f(a-h, b-k)-f(a-h, b+k)-f(a+h, b-k)+f(a+h, b+k))/(k*h)

     

    Diff(Diff(f(a, b), a), b)+(1/6)*(Diff(Diff(Diff(Diff(f(a, b), a), a), a), b))*h^2+(1/6)*(Diff(Diff(Diff(Diff(f(a, b), a), b), b), b))*k^2

    (9)

# Approximation of diff(f(x, y), [x, x, y, y]) over a (3 by 3)-point stencil


    stencil := [-1, 0, 1]:

# step 1: approximate diff(f(x, y), x$2) over stencil < stencil >

    fx  := CDDF(f(x), a, h, 2, stencil):
    fx  := eval(% , f=(u -> f[u](y))):
    ix  := [indets(fx, function)[]]:

# step 2: approximate diff(g(y), y$2) over stencil < stencil > where
    #         g represents any function in fx.

    fxy := add(map(u -> CDDF(u, b, k, 2, stencil)*coeff(fx, u), ix)):

    # step 3: rewrite fxy in a more convenient form

    [seq(u=op([0, 0], u)(op([0, 1], u), op(1, u)), u in indets(fxy, function))]:
    fxy := simplify( eval(fxy, %) );

    convert(mtaylor(fxy, [h=0, k=0], 8), Diff)

    -(2*f(a, b-k)-4*f(a, b)+2*f(a, b+k)-f(a-h, b-k)+2*f(a-h, b)-f(a-h, b+k)-f(a+h, b-k)+2*f(a+h, b)-f(a+h, b+k))/(h^2*k^2)

     

    Diff(Diff(Diff(Diff(f(a, b), a), a), b), b)+(1/12)*(Diff(Diff(Diff(Diff(Diff(Diff(f(a, b), a), a), a), a), b), b))*h^2+(1/12)*(Diff(Diff(Diff(Diff(Diff(Diff(f(a, b), a), a), b), b), b), b))*k^2

    (10)

# Approximation of diff(f(x, y), [x, x, y]) over a (3 by 2)-point stencil

    stencil_x := [-1, 0, 1]:
    stencil_y := [-1, 1]:

# step 1: approximate diff(f(x, y), x$2) over stencil < stencil_x >

    fx  := CDDF(f(x), a, h, 2, stencil_x):
    fx  := eval(% , f=(u -> f[u](y))):
    ix  := [indets(fx, function)[]]:

# step 2: approximate diff(g(y), y) over stencil < stencil_y > where
    #         g represents any function in fx.

    fxy := add(map(u -> CDDF(u, b, k, 1, stencil_y)*coeff(fx, u), ix)):

    # step 3: rewrite fxy in a more convenient form

    [seq(u=op([0, 0], u)(op([0, 1], u), op(1, u)), u in indets(fxy, function))]:
    fxy := simplify( eval(fxy, %) );

    convert(mtaylor(fxy, [h=0, k=0], 6), Diff)

    (1/2)*(2*f(a, b-k)-2*f(a, b+k)-f(a-h, b-k)+f(a-h, b+k)-f(a+h, b-k)+f(a+h, b+k))/(h^2*k)

     

    Diff(Diff(Diff(f(a, b), a), a), b)+(1/12)*(Diff(Diff(Diff(Diff(Diff(f(a, b), a), a), a), a), b))*h^2+(1/6)*(Diff(Diff(Diff(Diff(Diff(f(a, b), a), a), b), b), b))*k^2

    (11)

    # Approximation of the laplacian of f(x, y)

    stencil := [-1, 0, 1]:

# step 1: approximate diff(f(x, y), x$2) and diff(f(x, y), y$2) over stencil < stencil >

    fx  := CDDF(f(x), a, h, 2, stencil):
    fy  := CDDF(f(y), b, k, 2, stencil):

    fxy := simplify( eval(fx, f=(u -> f(u, b))) + eval(fy, f=(u -> f(a, u))) );

    convert(mtaylor(fxy, [h=0, k=0], 6), Diff)

    (f(a-h, b)-2*f(a, b)+f(a+h, b))/h^2+(f(a, b-k)-2*f(a, b)+f(a, b+k))/k^2

     

    Diff(Diff(f(a, b), a), a)+Diff(Diff(f(a, b), b), b)+(1/12)*(Diff(Diff(Diff(Diff(f(a, b), a), a), a), a))*h^2+(1/12)*(Diff(Diff(Diff(Diff(f(a, b), b), b), b), b))*k^2

    (12)

     


     

    Download CDD.mw

    Recently Mapleprimes user @vs140580  asked here about finding the shortest returning walk in a graph (explained more below). I provided an answer using the labelled adjacency matrix. @Carl Love pointed out that the storage requirements are significant. They can be reduced by storing only the vertices in the walks and not the walks themselves. This can be done surprisingly easily by redefining how matrix multiplication is done using Maple's LinearAlgebra:-Generic package.

    (The result is independent of the chosen vertex.)

    restart

    with(GraphTheory)

    Consider the following graph. We want to find, for a given vertex v, the shortest walk that visits all vertices and returns to v. A walk traverses the graph along the edges, and repeating an edge is allowed (as distinct from a path, in which an edge can be used only once).

    G := AddVertex(PathGraph(5), [6, 7]); AddEdge(G, {{3, 7}, {4, 6}, {6, 7}}); DrawGraph(G, layout = circle, size = [250, 250])

    GRAPHLN(undirected, unweighted, [1, 2, 3, 4, 5, 6, 7], Array(1..7, {(1) = {2}, (2) = {1, 3}, (3) = {2, 4}, (4) = {3, 5}, (5) = {4}, (6) = {}, (7) = {}}), `GRAPHLN/table/2`, 0)

    n := NumberOfVertices(G)

    7

    A := AdjacencyMatrix(G)

    As is well known, the (i, j) entry of A^k gives the number of walks of length k between vertices i and j. The labelled adjacency matrix shows the actual walks.

    Alab := `~`[`*`](A, Matrix(n, n, symbol = omega))

    Matrix(%id = 36893490455680637396)

For example, the (3,3) entry of Alab^2 has 3 terms, one for each walk of length 2 that starts and ends at vertex 3. The omega[i, j] factors in a term give the edges visited.

    So the algorithm to find the shortest walk is to raise Alab to successively higher powers until one of the terms in the diagonal entry for the vertex of interest has indices for all the vertices.

    Alab^2

    Matrix(%id = 36893490455709504684)

    The expressions for higher powers get very large quickly, so an algorithm that only retains the sets of vertices in each term as a set of sets will use less storage space. So we can consider the following modified labelled adjacency matrix.

    B := Matrix(n, n, proc (i, j) options operator, arrow; if A[i, j] = 1 then {{i, j}} else {} end if end proc)

    Matrix(%id = 36893490455703852204)

    Now we need to modify how we do matrix multiplication, but Maple has the LinearAlgebra:-Generic package to do this. We can redefine addition and multiplication to operate correctly on the sets of sets.

    Consider two sets of sets of vertices for walks.

    a := {{7}, {1, 3}, {2, 6, 7}}; b := {{1, 2}, {2, 3, 8}}

    {{7}, {1, 3}, {2, 6, 7}}

    {{1, 2}, {2, 3, 8}}

    Addition is just combining the possibilities, and set union will do this. And addition of "zero" should add no possibilities, so we take {} as zero.

    `union`(a, b); `union`(a, {})

    {{7}, {1, 2}, {1, 3}, {2, 3, 8}, {2, 6, 7}}

    {{7}, {1, 3}, {2, 6, 7}}

    Multiplication is just combining all pairs by union. Notice here that {7} union {1,3,5} and {1,5} union {3,7} give the same result, but that we do not get duplicates in the set.

    {seq(seq(`union`(i, j), `in`(i, a)), `in`(j, b))}

    {{1, 2, 3}, {1, 2, 7}, {1, 2, 3, 8}, {1, 2, 6, 7}, {2, 3, 7, 8}, {2, 3, 6, 7, 8}}

    The unit for multiplication should leave the set of sets unchanged, so {{}} can be used

    {seq(seq(`union`(i, j), `in`(i, a)), `in`(j, {{}}))}

    {{7}, {1, 3}, {2, 6, 7}}

    And we should check that zero multiplied by a is zero

    {seq(seq(`union`(i, j), `in`(i, a)), `in`(j, {}))}

    {}

Define these operations for the LinearAlgebra:-Generic package:

    F[`=`] := `=`; F[`/`] := `/`; F[`0`] := {}; F[`1`] := {{}}; F[`+`] := `union`; F[`*`] := proc (x, y) options operator, arrow; {seq(seq(`union`(i, j), `in`(i, x)), `in`(j, y))} end proc

    Warning, (in F[*]) `j` is implicitly declared local

    Warning, (in F[*]) `i` is implicitly declared local

    Compare B^2 with Alab^2. We have lost information about the details of the walks except for the vertices visited.

    LinearAlgebra:-Generic:-MatrixMatrixMultiply[F](B, B)

    Matrix(%id = 36893490455680647164)

    So here is a procedure for finding the length of the shortest walk starting and ending at vertex v.

    WalkLength:=proc(G::Graph,v)
      uses GraphTheory;
      local x,y,i,j,vv,A,B,F,n,vertset;
      if IsDirected(G) then error "graph must be undirected" end if;
      if not member(v,Vertices(G),'vv') then error "vertex not in graph" end if;
      if not IsConnected(G) then return infinity end if;
      F[`=`]:=`=`:F[`/`]:=`/`: # not required but have to be specified
      F[`0`]:={}:
      F[`1`]:={{}}:
      F[`+`]:=`union`;
      F[`*`]:=(x,y)->{seq(seq(i union j, i in x), j in y)};
      n:=NumberOfVertices(G);
      vertset:={$(1..n)};
      A:=Matrix(n,n, (i, j)-> if AdjacencyMatrix(G)[i, j] = 1 then {{i, j}} else {} end if);
      B:=copy(A);
      for i from 2 do
        B:=LinearAlgebra:-Generic:-MatrixMatrixMultiply[F](B,A);
      until vertset in B[vv,vv];
      i;
    end proc:

    WalkLength(G, 1)

    10


    Download WalksGenericSetOfSets.mw
