Applications, Examples and Libraries

Share your work here

In order to explore the use of the signal processing package in image processing, @Samir Khan and I created this application, which performs the discrete Haar wavelet transform on images to achieve both lossy (irreversible) and lossless (reversible) compression.

Haar wavelet compression modifies the image matrix to increase the number of zero entries, resulting in a sparse matrix that can be stored more efficiently, thus reducing the file size. A Haar wavelet transform groups adjacent entries of the matrix in pairs and stores the average and the difference of each pair. Notice that this computation is reversible: given a and b such that x1 - x2 = a and (x1 + x2)/2 = b, we can solve for x1 and x2. Haar wavelet compression takes advantage of the fact that neighboring pixels in an image usually have very similar values; hence recursively applying the Haar wavelet transform to the rows and columns of an image matrix significantly increases the number of zero entries. In the application we achieved a compression ratio of 1.296 (number of non-zero entries in the original matrix : number of non-zero entries in the processed matrix).
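As a rough illustration of the transform step (a minimal sketch of my own, not the application's code; it assumes a Vector of even length):

HaarStep := proc(v::Vector)
    local n, avg, d;
    n := LinearAlgebra:-Dimension(v)/2;
    avg := Vector(n, i -> (v[2*i-1] + v[2*i])/2);   # averages b = (x1+x2)/2
    d   := Vector(n, i -> v[2*i-1] - v[2*i]);       # differences a = x1-x2
    return <avg, d>;   # recoverable: x1 = b + a/2, x2 = b - a/2
end proc:

Applying this step to the rows and columns of the image matrix, and recursing on the averaged part, gives the full 2D transform.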

The fact that the Haar wavelet transform is reversible means that we can use it to perform lossless image compression: the decompressed image will be exactly the same as the image before compression. Transmission and temporary storage of the data become more efficient without any loss of detail in the image.

But what if the file size is still too big, or we simply don’t need that much detail in the image? That is when lossy compression comes into use. By omitting some detail/fidelity, lossy compression achieves a notably smaller file size. In this application, we apply a filter to the transformed image matrix, setting entries that are close to zero to exactly zero (i.e. we pick a positive number ϵ such that all entries with |x| < ϵ are changed to 0 in the matrix). The value of ϵ directly impacts the quality of the decompressed image, so it should be chosen carefully in practice. In this application we chose ϵ = 0.01, which results in a compression ratio of 19.38 but produces a very blurry image after reversing the compression.
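The filter itself is essentially a one-liner; a hedged sketch, assuming M is the transformed image Matrix:

epsilon := 0.01:
M := map(x -> `if`(abs(x) < epsilon, 0, x), M):   # zero out near-zero entries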

(left: Original image, right: lossy compression with ϵ = 0.01)

The application can be accessed here for more details.


 

For Maple 2018.1, there are improvements in pdsolve's ability to solve PDEs with boundary and initial conditions. This is work done together with E.S. Cheb-Terrab. The improvements include an extended ability to solve problems involving non-homogeneous PDEs and/or non-homogeneous boundary and initial conditions, as well as improved simplification of solutions and better handling of functions such as piecewise in the arguments and in the processing of solutions. This is also an ongoing project, with updates being distributed regularly within the Physics Updates.

Solving more problems involving non-homogeneous PDEs and/or non-homogeneous boundary and initial conditions

 

 

Example 1: Pinchover and Rubinstein's exercise 6.17: we have a non-homogeneous PDE, and boundary and initial conditions that are also non-homogeneous:

pde__1 := diff(u(x, t), t)-(diff(u(x, t), x, x)) = 1+x*cos(t)
iv__1 := (D[1](u))(0, t) = sin(t), (D[1](u))(1, t) = sin(t), u(x, 0) = 1+cos(2*Pi*x)

pdsolve([pde__1, iv__1])

u(x, t) = 1+cos(2*Pi*x)*exp(-4*Pi^2*t)+t+x*sin(t)

(1)
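As a quick check (in the style of the pdetest verifications shown in later examples), the solution can be tested against the PDE and its conditions:

pdetest(u(x, t) = 1+cos(2*Pi*x)*exp(-4*Pi^2*t)+t+x*sin(t), [pde__1, iv__1]);

A list of zeros, [0, 0, 0, 0], indicates that the PDE and all three conditions are satisfied.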

How we solve the problem, step by step:

   

 

Example 2: the PDE is homogeneous but the boundary conditions are not. We solve the problem through the same process, which means we end up solving a non-homogeneous PDE with homogeneous BC as an intermediate step:

pde__2 := diff(u(x, t), t) = 13*(diff(u(x, t), x, x))
iv__2 := (D[1](u))(0, t) = 0, (D[1](u))(1, t) = 1, u(x, 0) = (1/2)*x^2+x

pdsolve([pde__2, iv__2])

u(x, t) = 1/2+Sum(2*(-1+(-1)^n)*cos(n*Pi*x)*exp(-13*Pi^2*n^2*t)/(Pi^2*n^2), n = 1 .. infinity)+13*t+(1/2)*x^2

(12)

How we solve the problem, step by step:

   

 

Example 3: a wave PDE with a source that does not depend on time:

pde__3 := (diff(u(x, t), x, x))*a^2+1 = diff(u(x, t), t, t)
iv__3 := u(0, t) = 0, u(L, t) = 0, u(x, 0) = f(x), (D[2](u))(x, 0) = g(x)

`assuming`([pdsolve([pde__3, iv__3])], [L > 0])

u(x, t) = (1/2)*(2*(Sum(sin(n*Pi*x/L)*(2*L*(Int(sin(n*Pi*x/L)*g(x), x = 0 .. L))*sin(a*Pi*t*n/L)*a-Pi*(Int(sin(n*Pi*x/L)*(-2*f(x)*a^2+L*x-x^2), x = 0 .. L))*cos(a*Pi*t*n/L)*n)/(Pi*n*a^2*L), n = 1 .. infinity))*a^2+L*x-x^2)/a^2

(23)

How we solve the problem, step by step:

   

 

Example 4: Pinchover and Rubinstein's exercise 6.23 - we have a non-homogeneous PDE and initial condition:

pde__4 := diff(u(x, t), t)-(diff(u(x, t), x, x)) = g(x, t)
iv__4 := (D[1](u))(0, t) = 0, (D[1](u))(1, t) = 0, u(x, 0) = f(x)

pdsolve([pde__4, iv__4], u(x, t))

u(x, t) = Int(f(tau1), tau1 = 0 .. 1)+Sum(2*(Int(f(tau1)*cos(n*Pi*tau1), tau1 = 0 .. 1))*cos(n*Pi*x)*exp(-Pi^2*n^2*t), n = 1 .. infinity)+Int(Int(g(x, tau1), x = 0 .. 1)+Sum(2*(Int(g(x, tau1)*cos(n1*Pi*x), x = 0 .. 1))*cos(n1*Pi*x)*exp(-Pi^2*n1^2*(t-tau1)), n1 = 1 .. infinity), tau1 = 0 .. t)

(30)

If we now make the functions f and g into specific mappings, we can compare pdsolve's solutions to the general and specific problems:

f := x -> 3*cos(42*x*Pi)
g := (x, t) -> exp(3*t)*cos(17*x*Pi)

 

Here is what pdsolve's solution to the general problem looks like when taking into account the new values of f(x) and g(x,t):

value(simplify(evalindets(u(x, t) = Int(f(tau1), tau1 = 0 .. 1)+Sum(2*(Int(f(tau1)*cos(n*Pi*tau1), tau1 = 0 .. 1))*cos(n*Pi*x)*exp(-Pi^2*n^2*t), n = 1 .. infinity)+Int(Int(g(x, tau1), x = 0 .. 1)+Sum(2*(Int(g(x, tau1)*cos(n1*Pi*x), x = 0 .. 1))*cos(n1*Pi*x)*exp(-Pi^2*n1^2*(t-tau1)), n1 = 1 .. infinity), tau1 = 0 .. t), specfunc(Int), proc (u) options operator, arrow; `PDEtools/int`(op(u), AllSolutions) end proc)))

u(x, t) = 3*cos(42*Pi*x)*exp(-1764*Pi^2*t)+cos(Pi*x)*(65536*cos(Pi*x)^16-278528*cos(Pi*x)^14+487424*cos(Pi*x)^12-452608*cos(Pi*x)^10+239360*cos(Pi*x)^8-71808*cos(Pi*x)^6+11424*cos(Pi*x)^4-816*cos(Pi*x)^2+17)*(exp(289*Pi^2*t+3*t)-1)*exp(-289*Pi^2*t)/(289*Pi^2+3)

(31)

 

Here is pdsolve's solution to the specific problem:

pdsolve([pde__4, iv__4], u(x, t))

u(x, t) = ((867*Pi^2+9)*cos(42*Pi*x)*exp(-1764*Pi^2*t)+cos(17*Pi*x)*(exp(3*t)-exp(-289*Pi^2*t)))/(289*Pi^2+3)

(32)

 

And the two solutions are equal:

simplify(combine((u(x, t) = 3*cos(42*x*Pi)*exp(-1764*Pi^2*t)+cos(x*Pi)*(65536*cos(x*Pi)^16-278528*cos(x*Pi)^14+487424*cos(x*Pi)^12-452608*cos(x*Pi)^10+239360*cos(x*Pi)^8-71808*cos(x*Pi)^6+11424*cos(x*Pi)^4-816*cos(x*Pi)^2+17)*(exp(289*Pi^2*t+3*t)-1)*exp(-289*Pi^2*t)/(289*Pi^2+3))-(u(x, t) = ((867*Pi^2+9)*cos(42*x*Pi)*exp(-1764*Pi^2*t)+cos(17*x*Pi)*(exp(3*t)-exp(-289*Pi^2*t)))/(289*Pi^2+3)), trig))

0 = 0

(33)

f := 'f'; g := 'g'

 

Improved simplification in integrals, piecewise functions, and sums in the solutions returned by pdsolve

 

 

Example 1: exercise 6.21 from Pinchover and Rubinstein is a non-homogeneous heat problem. Its solution used to include unevaluated integrals and sums, but is now returned in a significantly simpler format.

pde__5 := diff(u(x, t), t)-(diff(u(x, t), x, x)) = t*cos(2001*x)
iv__5 := (D[1](u))(0, t) = 0, (D[1](u))(Pi, t) = 0, u(x, 0) = Pi*cos(2*x)

pdsolve([pde__5, iv__5])

u(x, t) = (1/16032024008001)*(4004001*t+exp(-4004001*t)-1)*cos(2001*x)+Pi*cos(2*x)*exp(-4*t)

(34)

pdetest(%, [pde__5, iv__5])

[0, 0, 0, 0]

(35)

 

Example 2: example 6.46 from Pinchover and Rubinstein is a non-homogeneous heat equation with non-homogeneous boundary and initial conditions. Its solution used to involve two separate sums with unevaluated integrals, but is now returned with only one sum and one unevaluated integral.

pde__6 := diff(u(x, t), t)-(diff(u(x, t), x, x)) = exp(-t)*sin(3*x)
iv__6 := u(0, t) = 0, u(Pi, t) = 1, u(x, 0) = phi(x)

pdsolve([pde__6, iv__6], u(x, t))

u(x, t) = (1/8)*(8*(Sum(2*(Int(-(-phi(x)*Pi+x)*sin(n*x), x = 0 .. Pi))*sin(n*x)*exp(-n^2*t)/Pi^2, n = 1 .. infinity))*Pi-Pi*(exp(-9*t)-exp(-t))*sin(3*x)+8*x)/Pi

(36)

pdetest(%, [pde__6, iv__6])

[0, 0, 0, (-phi(x)*Pi^2+Pi*x+2*(Sum((Int(-(-phi(x)*Pi+x)*sin(n*x), x = 0 .. Pi))*sin(n*x), n = 1 .. infinity)))/Pi^2]

(37)

 

More accuracy when returning series solutions that have exceptions for certain values of the summation index or a parameter

 

 

Example 1: the answer to this problem was previously given with n = 0 .. infinity instead of n = 1 .. infinity as it should be:

pde__7 := diff(v(x, t), t, t)-(diff(v(x, t), x, x))

iv__7 := v(0, t) = 0, v(x, 0) = -(exp(2)*x-exp(x+1)-x+exp(1-x))/(exp(2)-1), (D[2](v))(x, 0) = 1+(exp(2)*x-exp(x+1)-x+exp(1-x))/(exp(2)-1), v(1, t) = 0

pdsolve([pde__7, iv__7])

v(x, t) = Sum(-2*sin(n*Pi*x)*((Pi^2*(-1)^n*n^2-Pi^2*n^2+2*(-1)^n-1)*sin(Pi*t*n)-(-1)^n*cos(Pi*t*n)*Pi*n)/(Pi^2*n^2*(Pi^2*n^2+1)), n = 1 .. infinity)

(38)

 

Example 2: the answer to exercise 6.25 from Pinchover and Rubinstein is now given in a much simpler format, with the special limit case for w = 0 calculated separately:

pde__8 := diff(u(x, t), t) = k*(diff(u(x, t), x, x))+cos(w*t)
iv__8 := (D[1](u))(L, t) = 0, (D[1](u))(0, t) = 0, u(x, 0) = x

`assuming`([pdsolve([pde__8, iv__8], u(x, t))], [L > 0])

u(x, t) = piecewise(w = 0, (1/2)*L+Sum(2*L*(-1+(-1)^n)*cos(n*Pi*x/L)*exp(-Pi^2*n^2*k*t/L^2)/(n^2*Pi^2), n = 1 .. infinity)+t, (1/2)*(L*w+2*(Sum(2*L*(-1+(-1)^n)*cos(n*Pi*x/L)*exp(-Pi^2*n^2*k*t/L^2)/(n^2*Pi^2), n = 1 .. infinity))*w+2*sin(w*t))/w)

(39)
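As a sanity check of the piecewise result (my own, with S standing in for the common infinite sum, which does not depend on w), the w = 0 branch is just the limit of the general branch:

limit((1/2)*(L*w + 2*S*w + 2*sin(w*t))/w, w = 0);
# returns (1/2)*L + S + t, matching the first branch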

 

Improved handling of piecewise functions and of eval/diff constructions in the given problem

 

 

Example 1: this problem, which contains a piecewise function in the initial condition, can now be solved:

pde__9 := diff(f(x, t), t) = diff(f(x, t), x, x)
iv__9 := f(0, t) = 0, f(1, t) = 1, f(x, 0) = piecewise(x = 0, 0, 1)

pdsolve([pde__9, iv__9])

f(x, t) = Sum(2*sin(n*Pi*x)*exp(-Pi^2*n^2*t)/(n*Pi), n = 1 .. infinity)+x

(40)

 

Example 2: this problem, which contains a derivative written using eval/diff, can now be solved:

pde__10 := -(diff(u(x, t), t, t))-(diff(u(x, t), x, x))+u(x, t) = 2*exp(-t)*(x-(1/2)*x^2+(1/2)*t-1)

iv__10 := u(x, 0) = x^2-2*x, u(x, 1) = u(x, 1/2)+((1/2)*x^2-x)*exp(-1)-((3/4)*x^2-(3/2)*x)*exp(-1/2), u(0, t) = 0, eval(diff(u(x, t), x), {x = 1}) = 0

pdsolve([pde__10, iv__10], u(x, t))

u(x, t) = -(1/2)*exp(-t)*x*(x-2)*(t-2)

(41)

 

References:

 

Pinchover, Y., and Rubinstein, J. An Introduction to Partial Differential Equations. Cambridge University Press, 2005.


 

Download What_is_New_after_Maple_2018.mw

Katherina von Bülow

In an attempt to explore the field of image processing, @Samir Khan and I created an application (download here) that demonstrates the removal of two types of noise from an image through frequency and spatial filtering.

Periodic noise and salt & pepper noise are two common types of image noise, usually caused by errors during image capture or data transmission. Periodic noise results in repetitive patterns being added onto the original image, while salt & pepper noise is the irregular appearance of dark pixels in bright areas and bright pixels in dark areas of the image. In this application, we artificially generate these noises and pollute a clean picture in order to demonstrate the removal techniques.

(Fig 1: Picture of the Waterloo office, taken by Sophie Tan. Fig 2: Converted to greyscale for processing, with the two noises added.)

In order to remove periodic noise from the image, we apply a 2D Fourier transform to convert the image from the spatial domain to the frequency domain, where periodic noise can be visually detected as separate, discrete spikes and therefore easily removed.

(Fig 3: The magnitude of the image in the frequency domain)

One way to remove salt & pepper noise is to apply a median filter to the image. In this application, we run a 3-by-3 window across the image matrix, sorting the 9 elements under the window and placing their median in the corresponding entry of a new matrix, so that the whole image is median-filtered.
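A minimal sketch of such a filter (my own illustration, not necessarily the application's code; M is assumed to be a greyscale image Matrix, and border pixels are left unchanged for simplicity):

MedianFilter3 := proc(M::Matrix)
    local r, c, i, j, a, b, window, R;
    r, c := LinearAlgebra:-Dimensions(M);
    R := Matrix(M);                    # start from a copy of the image
    for i from 2 to r-1 do
        for j from 2 to c-1 do
            window := sort([seq(seq(M[i+a, j+b], a = -1 .. 1), b = -1 .. 1)]);
            R[i, j] := window[5];      # the median of the 9 sorted values
        end do;
    end do;
    return R;
end proc: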

Comparison of the image before and after noise removal:

Please refer to the application for more details on the implementation of the two removal techniques.

 

Hello, everyone! My name’s Sophie and I’m an intern at Maplesoft. @Samir Khan asked me to develop a couple of demonstration applications using the DeepLearning package; my work is featured on the Application Center.

I thought I’d describe two critical commands used in the applications – DNNClassifier() and DNNRegressor().

The DNNClassifier calls tf.estimator.DNNClassifier from the Tensorflow Python API. This command builds a feedforward multilayer neural network that is trained with a set of labeled data in order to perform classification on similar, unlabeled data.

The dataset used for training and validating the classifier has the type DataFrame in Maple. In the prediction of malignant/benign breast mass example, the training set is a DataFrame with 32 columns in total, with column labels: “ID Number”, “Diagnosis”, “radius”, “texture”, etc. Note that labeling the columns of the dataset is mandatory, as the neural network later needs to identify which feature column corresponds to which list of values.

Feature columns are what come between the raw input data and the classifier model; they are required by Tensorflow to specify how the input data should be transformed before being given to the model. Maple now supports three types of feature columns:

  • NumericColumn, which represents real, numerical data,
  • CategoricalColumn, which denotes categorical (ordinal) data,
  • BucketizedColumn, which organizes continuous data into a discrete number of buckets with specified boundaries.

In this application, the input data consists of 30 real, numeric values that represent physical traits of a cell nucleus computed from a digitized image of the breast mass. We create a list of NumericColumns by calling

with(DeepLearning):
fc := [seq(NumericColumn(u,shape=[1]), u in cols[3..])]:

where cols is a list of column labels and shape=[1] indicates that each data input is just a single numeric value.

When we create a DNNClassifier, we need to specify the feature columns (input layer), the architecture of the neural network (hidden layers) and the number of classes (output layer). Recall that the DNNClassifier builds a feedforward multilayer neural network; hence, when we call the function, we need to indicate how many hidden layers we want and how many nodes there should be on each layer. This is done by passing a list of positive integers as the parameter hidden_units when we call the function. In the example, we did:

classifier := DNNClassifier(fc, hidden_units=[20,40,20],num_classes=2):

where we set 3 hidden layers with 20, 40, and 20 nodes respectively. In addition, there are 30 input nodes (i.e. the number of feature columns) and 1 output node (i.e. binary classification). The diagram below illustrates a simpler example, with an input layer of 3 nodes, 2 hidden layers of 7 and 5 nodes, and an output layer of 1 node.

(Created using NN-SVG by https://github.com/zfrenchee/NN-SVG)

After we build the model, we can train it by calling

classifier:-Train(train_data[3..32], train_data[2], steps = 256, num_epochs = 3, shuffle = true):

where we

  1. give the training data (train_data[3..32]) and the corresponding labels (train_data[2]) to the model;
  2. specify that the entire dataset will be passed to the model three times (num_epochs = 3), training for 256 steps;
  3. specify that the data batches for training will be created by randomly shuffling the tensors.

Now that the training process is complete, we can use the validation set to evaluate the effectiveness of our model.

classifier:-Evaluate(test_data[3..32],test_data[2], steps = 32);

The output indicates an accuracy of ~92.11% in this case. There are more metrics, such as accuracy_baseline, auc, and average_loss, that help us decide whether we need to modify the architecture for better performance.

We then build a predictor function that takes an arbitrary set of measurements as a DataSeries and returns a prediction generated by the trained DNN classifier.

predictor := proc (ds) classifier:-Predict(Transpose(DataFrame(ds)), num_epochs = 1, shuffle = false)[1] end proc;

Now we can pass a DataSeries with 30 labeled rows to the predictor (recall that cols is a list of the column names):

ds := DataSeries([11.49, 14.59, 73.99, 404.9, 0.1046, 8.23E-02, 5.31E-02, 1.97E-02, 0.1779, 6.57E-02, 0.2034, 1.166, 1.567, 14.34, 4.96E-03, 2.11E-02, 4.16E-02, 8.04E-03, 1.84E-02, 3.61E-03, 12.4, 21.9, 82.04, 467.6, 0.1352, 0.201, 0.2596, 7.43E-02, 0.2941, 9.18E-02], labels = cols[3..]); 
predictor(ds);

The output indicates that the probability of this data point belonging to class_id [0] is ~90.79%. In other words, according to our model, the probability of this breast mass cell being benign is ~90.79%.

The use of the DNNRegressor is very similar (almost identical) to that of the classifier; the only significant difference is that while the classifier predicts discrete labels as classes, the regressor predicts a continuous quantitative result from the provided data (note that CategoricalColumn is still applicable). For more details about the basic usage of the DNNRegressor, please refer to Predicting the burnt area of forest fires with DNN Regressor.
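For instance, a hedged sketch mirroring the classifier calls above (the exact options used in the forest-fire example may differ):

regressor := DNNRegressor(fc, hidden_units = [20, 40, 20]):
regressor:-Train(train_features, train_labels, steps = 256, num_epochs = 3, shuffle = true):

where train_features and train_labels are placeholders for the feature columns and the continuous target values of the training DataFrame.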

 

Mukhametshina Liya

Games with pseudo-fractals
 

Homothety_Fractals.mw

 

  

  

Aleksandrov Denis,
7th grade
secondary school #57 of Kazan

_ANIMATED_PICTURE_ON_THE_COORDINATE_PLANE_Aleksandrov_D..mw

 

vv, if you could please help adjust your code. I've adjusted the start of the Euro Cup code to match the World Cup, however I haven't decoded your coding and probably won't have time to before the World Cup starts. I've got as far as adding the teams, flags and ratings of each team.

Let me just say, while copying and pasting the flag bytes into the code, Maple became a bitch to work with (pardon my language) and I became so frustrated because my laptop locked up twice. The more I worked with Maple, the slower it got, until it froze right up. Copying and pasting large data in Maple is near IMPOSSIBLE. Perhaps this could be a side conversation.

Here's the world cup file so far.

2018_World_Cup.mw

**edit added**
Fixed flag sizes, made a couple of other fixes to other stats, and added some additional stats.
2018_World_Cup7.mw

 

ANIMATED image of a cascade of opening matryoshkas

E.R. Ibragimova

 

 

ИбрагимоваЭ.Р_03_Казань_Матрёшки.mws

Murtazin Shamil, 6th grade

Simulation of the animated image "Flask with bubbles"

 

FLASC_Murtazin_S.A..mw

 

At a recent undergraduate competition the students had to compute the following limit:

 

Limit( n * Diff( (exp(x)-1)/x, x$n), n=infinity ) assuming x<>0;

Limit(n*(Diff((exp(x)-1)/x, `$`(x, n))), n = infinity)

(1)

 

Maple is able to compute the symbolic n-fold derivative, and I hoped that the limit would be computed at once.

Unfortunately, it is not so easy.
Maybe someone will find a more straightforward way.

 

restart;

f := n * diff( (exp(x)-1)/x, x$n );

n*(-1/x)^n*(-GAMMA(n+1)+GAMMA(n+1, -x))/x

(2)

limit(%, n=infinity);

limit(n*(-1/x)^n*(-GAMMA(n+1)+GAMMA(n+1, -x))/x, n = infinity)

(3)

simplify(%) assuming x>0;

limit(-(-1)^n*x^(-n-1)*n*(GAMMA(n+1)-GAMMA(n+1, -x)), n = infinity)

(4)

 

So, Maple cannot compute the limit directly.

 

convert(f, Int) assuming n::posint;

-n*(-1/x)^n*(-x)^(n+1)*GAMMA(2+n)*(Int(exp(_t1*x)*_t1^n, _t1 = 0 .. 1))/((n+1)*(Int(_k1^n*exp(-_k1), _k1 = 0 .. infinity))*x)

(5)

J:=simplify(%)  assuming n::posint;

n*(Int(exp(x*_k1)*_k1^n, _k1 = 0 .. 1))*GAMMA(n+1)/(Int(_k1^n*exp(-_k1), _k1 = 0 .. infinity))

(6)

L:=convert(J, Int) assuming n::posint;

n*(Int(exp(x*_k1)*_k1^n, _k1 = 0 .. 1))

(7)

L:=subs(_k1=u, L);

n*(Int(exp(x*u)*u^n, u = 0 .. 1))

(8)

 

Now it should be easy, but Maple needs help.

 

with(IntegrationTools):

L1:=Change(L, u^n = t, t) assuming n::posint;

Int(exp((x*t^(1/n)*n+ln(t))/n), t = 0 .. 1)

(9)

limit(L1, n=infinity);  # OK

exp(x)

(10)

####################################################################

Note that the limit can also be computed using an integration by parts, but Maple refuses to finalize:

Parts(L, exp(u*x)) assuming n::posint;

n*(exp(x)/(n+1)-(Int(u^(n+1)*x*exp(x*u)/(n+1), u = 0 .. 1)))

(11)

simplify(%);

n*(-x*(Int(u^(n+1)*exp(x*u), u = 0 .. 1))+exp(x))/(n+1)

(12)

limit(%, n=infinity);

limit(n*(-x*(Int(u^(n+1)*exp(x*u), u = 0 .. 1))+exp(x))/(n+1), n = infinity)

(13)

value(%);  # we are almost back!

limit(n*((-x)^(-n)*(-(n+1)*n*GAMMA(n)/x-(-x)^n*(x-n-1)*exp(x)/x+(n+1)*n*GAMMA(n, -x)/x)+exp(x))/(n+1), n = infinity)

(14)

 

This post can be considered a continuation of the theme “Determination of the angles of the manipulator with the help of its mathematical model. Inverse problem”.
Consider the use of manipulators as multi-axis CNC machines.
Here we have a three-link manipulator with 5 degrees of freedom. In these examples, one of the restrictions on the movement of the manipulator links is that the position of the last link coincides with the normal to the surface along the entire trajectory of the working point's movement.
That is, we mathematically transform a system with many degrees of freedom into an analog of a lever mechanism with one degree of freedom, so that we can do the necessary work in a way that is convenient to us.
This approach seems to be fully applicable directly to multi-axis CNC machines.

(In the texts of the programs, normalization is carried out with respect to the coordinates of the last point, so that the length of the integration interval coincides with the path length.)
MAN_3_5_for_MP.mw

MAN_3_5_for_MP_TR.mw

add, floats, and Kahan sum

 

I found an interesting fact about the Maple command add for floating-point values.
It seems that add in this case uses a summation algorithm that reduces the numerical error.
It is probably the Kahan summation algorithm (see the Wikipedia article), but I wonder why this fact is not documented.

Here is a simple Maple procedure describing and implementing the algorithm.

 

 

restart;

Digits:=15;

15

(1)

KahanSum := proc(f::procedure, ab::range)  
local S,c,y,t, i;      # https://en.wikipedia.org/wiki/Kahan_summation_algorithm
S := 0.0;              # S = result (final sum: add(f(n), n=a..b))
c := 0.0;              # c = compensation for lost low-order bits.
for i from lhs(ab) to rhs(ab) do
    y := f(i) - c;     
    t := S + y;              
    c := (t - S) - y;        
    S := t;                  
od;                         
return S
end proc:

 

Now, a numerical example.

 

 

f:= n ->  evalf(1/(n+1/n^3+1) - 1/(n+1+1/(n+1)^3+1));

proc (n) options operator, arrow; evalf(1/(n+1/n^3+1)-1/(n+2+1/(n+1)^3)) end proc

(2)

n := 50000;
K := KahanSum(f, 1..n);

50000

 

.333313334133301

(3)

A := add(f(k),k=1..n);

.333313334133302

(4)

s:=0.0:  for i to n do s:=s+f(i) od:
's' = s;

s = .333313334133413

(5)

exact:=( 1/3 - 1/(n+1+1/(n+1)^3+1) );

6250249999999900000/18751875067501050009

(6)

evalf( [errK = K-exact, errA = A-exact, err_for=s-exact] );

[errK = 0., errA = 0.1e-14, err_for = 0.112e-12]

(7)

evalf[20]( [errK = K-exact, errA = A-exact, err_for=s-exact] );

[errK = -0.33461e-15, errA = 0.66539e-15, err_for = 0.11166539e-12]

(8)

 


Download KahanSum.mw

Because our students learn through a mechanistic process, with little creativity in analysis in the schools and universities where they are professionally trained, STEM education (science, technology, engineering and mathematics) has emerged: a new model that is being adopted in other countries, and at a very slow pace in our city. In this work, the STEM methods will be visualized using computational tools provided by Maplesoft, a company that leads online education for adolescents and adults in the current market. In Spanish.

ECI_UNT_2018.pdf

ECI_UNT_2018.mw

Lenin Araujo Castillo

Ambassador of Maple

There is a bug in inttrans:-hilbert:

restart;

inttrans:-hilbert(sin(a)*sin(t+b), t, s);
# should be:
sin(a)*cos(s+b);   expand(%);

sin(a)*cos(s)

 

sin(a)*cos(s+b)

 

sin(a)*cos(s)*cos(b)-sin(a)*sin(s)*sin(b)

(1)

########## correction ##############

The problem is in the internal routine `inttrans/expandc`: when expanding sin or cos of a sum, it took only the first two operands of the argument (op(1) and op(2)), silently dropping any remaining terms. The two lines marked ## below fix this by splitting the argument into the part that contains t and everything else:

`inttrans/expandc` := proc(expr, t)
local xpr, j, econst, op1, op2;
      xpr := expr;      
      for j in indets(xpr,specfunc(`+`,exp)) do
          econst := select(type,op(j),('freeof')(t));
          if 0 < nops(econst) and econst <> 0 then
              xpr := subs(j = ('exp')(econst)*combine(j/('exp')(econst),exp),xpr)
          end if
      end do;
      for j in indets(xpr,{('cos')(linear(t)), ('sin')(linear(t))}) do
          if type(op(j),`+`) then
              op1:=select(has, op(j),t); ##
              op2:=op(j)-op1;            ##
              #op1 := op(1,op(j));
              #op2 := op(2,op(j));
              if op(0,j) = sin then
                  xpr := subs(j = cos(op2)*sin(op1)+sin(op2)*cos(op1),xpr)
              else
                  xpr := subs(j = cos(op1)*cos(op2)-sin(op1)*sin(op2),xpr)
              end if
          end if
      end do;
      return xpr
end proc:

#######################################

inttrans:-hilbert(sin(a)*sin(t+b), t, s); expand(%);

-(1/2)*cos(a-b)*sin(s)+(1/2)*sin(a-b)*cos(s)+(1/2)*cos(a+b)*sin(s)+(1/2)*sin(a+b)*cos(s)

 

sin(a)*cos(s)*cos(b)-sin(a)*sin(s)*sin(b)

(2)

 


Download hilbert.mw

 

To demonstrate Maple 2018’s new Python connectivity, we wanted to integrate a large Python library. The result is the DeepLearning package - this offers an interface to a subset of the Tensorflow framework for machine learning.

I thought I’d share an application that demonstrates how the DeepLearning package can be used to recognize the numbers in images of handwritten digits.

The application employs a very small subset of the MNIST database of handwritten digits. Here’s a sample image for the digit 0.

This image can be represented as a matrix of pixel intensities.        

The application generates weights for each digit by training a two-layer neural network using multinomial logistic regression. When visualized, the weights for each digit might look like this.

Let’s say that we’re comparing an image of a handwritten digit to the weights for the digit 0. If a pixel with a high intensity lands in

  • an intensely red area, the evidence is high that the number in the image is 0
  • an intensely blue area, the evidence is low that the number in the image is 0

While this explanation is technically simplistic, the application offers more detail.
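For the curious, here is a minimal sketch (my own illustration, not the application's code) of how the evidence is turned into probabilities; W, b, and img are hypothetical names for a 10x784 weights Matrix, a 10-element bias Vector, and a flattened 784-element image Vector:

evidence := W . img + b:                       # one score per digit, 0..9
expEv := map(exp, evidence):
probs := expEv / add(expEv[i], i = 1 .. 10):   # softmax: normalize to probabilities

The digit with the largest probability is the prediction.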

Get the application here
