Maple Questions and Posts

These are Posts and Questions associated with the product, Maple

Linear combinations of random variables: why doesn't Maple "inherit" the distributional assumptions when adding two random variables?

In the script attached below, I first define a vector of two uncorrelated Gaussian RVs, [epsilon[1], epsilon[2]], and then a vector of two correlated Gaussian RVs, [nu[1], nu[2]]. Both epsilon[1] and epsilon[2] are also uncorrelated with nu[1] and nu[2].

Now I want to create a vector of two correlated Gaussian RVs, S, where S[1] = nu[1] + epsilon[1] and S[2] = nu[2] + epsilon[2]. The means and the variances of [S[1], S[2]] are correct, but the covariance (the off-diagonal element of the covariance matrix) comes out wrong. How can I do this in Maple?

Please check this script:

RVs_sum.mw
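For reference, a minimal sketch (not taken from the attached worksheet; the correlation rho and all names below are my own assumptions) of one way to get the expected covariance: build the correlated pair nu[1], nu[2] from independent standard normals, so that the sums S[1], S[2] keep an explicit representation that Mean can work with.

restart;
with(Statistics):
rho := 1/2:                                  # assumed correlation between nu1 and nu2
Z1 := RandomVariable(Normal(0, 1)):
Z2 := RandomVariable(Normal(0, 1)):
eps1 := RandomVariable(Normal(0, 1)):        # independent of Z1 and Z2
eps2 := RandomVariable(Normal(0, 1)):
nu1 := Z1:
nu2 := rho*Z1 + sqrt(1 - rho^2)*Z2:          # correlated with nu1 by construction
S1 := nu1 + eps1:
S2 := nu2 + eps2:
simplify(Mean((S1 - Mean(S1))*(S2 - Mean(S2))));   # covariance of S1 and S2; equals rho here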

 

If I have a PDE containing several constants (for example a and b), where some are fixed constants (say k) and some terms carry undetermined constants, how do I set up the PDE so that Maple knows to solve the equation not for arbitrary k (in case no solution exists for arbitrary k), but for any k that is not a function of the differentiation variable?
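A minimal sketch (with a hypothetical heat-type PDE, since the original equation is not shown): names such as k, a and b that are not declared as functions of the independent variables are treated by pdsolve as constant parameters, so it is usually enough to state the unknown function explicitly.

restart;
pde := diff(u(x, t), t) = k*diff(u(x, t), x, x) + a*u(x, t) + b;   # k, a, b are plain names, hence constants
sol := pdsolve(pde, u(x, t));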

Hey guys, please look at my file and help me. When I call A[m], it only shows the last value. How can I get all of the A[m] values, for example in a seq or a table?

restart

with(LinearAlgebra):

i := [seq(2*i-1, i = 1 .. 10)];

[1, 3, 5, 7, 9, 11, 13, 15, 17, 19]

(1)

for i in i do A[m] := (x/a)^(i+1)*(1-x/a)^2 end do;

x^2*(1-x/a)^2/a^2

 

x^4*(1-x/a)^2/a^4

 

x^6*(1-x/a)^2/a^6

 

x^8*(1-x/a)^2/a^8

 

x^10*(1-x/a)^2/a^10

 

x^12*(1-x/a)^2/a^12

 

x^14*(1-x/a)^2/a^14

 

x^16*(1-x/a)^2/a^16

 

x^18*(1-x/a)^2/a^18

 

x^20*(1-x/a)^2/a^20

(2)


Download 3.mw
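A minimal sketch of one possible fix (the names idx and Avals are mine, not from the worksheet): index A by the loop counter instead of the unassigned name m, so that every value is kept and can be collected afterwards with seq.

restart;
idx := [seq(2*i - 1, i = 1 .. 10)]:
for m from 1 to nops(idx) do
    A[m] := (x/a)^(idx[m] + 1)*(1 - x/a)^2;        # A[1], A[2], ... are now distinct entries
end do:
Avals := [seq(A[m], m = 1 .. nops(idx))];          # all ten expressions in one list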

When I call A[m], it only shows the last value. How can I get all of the A[m] values, for example in a seq or a table?

I want to solve this; here is what is happening. I have not used Maple in a long time, so please help me.

restart; with(LinearAlgebra)

i := 1;

1

(1)

w := c[i]*(x/a)^(i+1)*(1-x/a)^2*(y/b)^(i+1)*(1-y/b)^2;

c[1]*x^2*(1-x/a)^2*y^2*(1-y/b)^2/(a^2*b^2)

(2)

(1/2)*(int(int([D__11*(diff(w, x, x))^2+2*D__12*(diff(w, x, x))*(diff(w, y, y))+4*D[66]*(diff(w, x, y))^2+D[22]*(diff(w, y, y))^2-2*q*w], x = 0 .. b), y = 0 .. a))

Error, (in int) wrong number (or type) of arguments: for an operator integrand a range without a variable of integration is expected, got x = 0 .. b

 


Download 2.mw
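For what it is worth, the error message is about the square brackets: int is being handed a list instead of an algebraic expression. A minimal sketch of the likely fix (U is a name I introduce; the worksheet's coefficients and integration ranges are kept exactly as they are) is simply to drop the brackets:

U := (1/2)*int(int(
         D__11*diff(w, x, x)^2
       + 2*D__12*diff(w, x, x)*diff(w, y, y)
       + 4*D[66]*diff(w, x, y)^2
       + D[22]*diff(w, y, y)^2
       - 2*q*w,
       x = 0 .. b), y = 0 .. a);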

Given a list of equations, say eq1 := [diff(kk(r), r) = 0, ff(r)*1/2 = 0], how would I extract just the left-hand side of each element, i.e. [diff(kk(r), r), ff(r)*1/2]?
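A minimal sketch of one way to do it: map lhs over the list.

eq1 := [diff(kk(r), r) = 0, ff(r)*1/2 = 0]:
map(lhs, eq1);        # [diff(kk(r), r), (1/2)*ff(r)]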

Is there a way to compute the Riemann tensor for two different metrics in the Physics package without setting up the metric each time? For comparison, in the DifferentialGeometry package you simply pass the metric as an argument.
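For comparison, a minimal sketch of the DifferentialGeometry route mentioned above (the coordinates and the two metrics are made-up examples): the metric is passed explicitly, so nothing has to be re-initialized between the two computations.

with(DifferentialGeometry): with(Tensor):
DGsetup([t, r], M):
g1 := evalDG(dt &t dt - r^2*(dr &t dr)):
g2 := evalDG(dt &t dt + dr &t dr):
R1 := CurvatureTensor(Christoffel(g1)):    # curvature tensor of the first metric
R2 := CurvatureTensor(Christoffel(g2)):    # curvature tensor of the second metric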

I obtain the adjacency matrix from a graph. We know that it is indexed according to the order of the vertices returned by Vertices. But what if I want to rearrange it into a different order? Here is a specific example.

with(GraphTheory):
g:=Graph({{2,3},{1,2}});
Vertices(g); #[1,2,3]
AdjacencyMatrix(g);

I would like to display this matrix with the vertices in the order [3, 1, 2].
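A minimal sketch of one way to do it (perm and neworder are names I introduce): look up the position of each desired vertex in Vertices(g) and use that permutation to index both the rows and the columns of the adjacency matrix.

with(GraphTheory):
g := Graph({{2, 3}, {1, 2}}):
A := AdjacencyMatrix(g):
neworder := [3, 1, 2]:
perm := map(v -> ListTools:-Search(v, Vertices(g)), neworder):   # positions of 3, 1, 2 in Vertices(g)
A[perm, perm];                                                   # rows and columns in the order [3, 1, 2]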

Hi

I get the following error: "Error, (in dsolve) invalid input: 'PDEtools/sdsolve' expects its 1st argument, SYS, to be of type OR(set(..."

I don't know what's wrong. My equations look like a set to me.

My equations:

{0 = -F__Ay - F__By, 0 = 25*F__Oy + 25*F__Ay - 25*F__By, diff(theta__1(t), t)*t - theta__1(t) = 0, -x__1(t) + 25*cos(theta__1(t)) = 0, -y__1(t) + 25*sin(theta__1(t)) = 0, (2500*cos(theta__1(t))*diff(theta__1(t), t)*pi)/3 - 50*cos(theta__1(t))*diff(theta__2(t), t)*theta__2(t) = F__Ax + F__Ox + F__Bx, (2500*cos(theta__1(t))*diff(theta__1(t), t)*pi)/3 - 50*cos(theta__1(t))*diff(theta__2(t), t)*theta__2(t) = F__Ay + F__Oy + F__By, diff(theta__2(t), t)*t - Pi/4 - theta__2(t) = 0, x__1(t) + 25*cos(theta__1(t)) - x__2(t) - 50*cos(theta__2(t)) = 0, y__1(t) + 25*sin(theta__1(t)) - y__2(t) - 50*sin(theta__2(t)) = 0, (1250*cos(theta__1(t))*diff(theta__1(t), t)*pi)/3 = -F__Ax - F__Bx, (1250*cos(theta__1(t))*diff(theta__1(t), t)*pi)/3 = -50*F__Ay + 50*F__By}

My solve:

dsolve({eqs[1] = 0, eqs[2] = 0, eqs[3] = 0, eqs[4] = 0, eqs[5] = 0, eqs[6] = 0, eqs_Mq[1] = eqs_g[3], eqs_Mq[1] = eqs_g[6], eqs_Mq[2] = eqs_g[1], eqs_Mq[2] = eqs_g[2], eqs_Mq[3] = eqs_g[4], eqs_Mq[3] = eqs_g[5]}, numeric)

Can anyone tell me what I'm doing wrong?

clutch.mw

I have, say,

1) an n × n matrix A

2) an n × 1 matrix B

I want to elementwise divide (that is, /~) each column of A by B and store the result in A itself, like this:

if b_1 is the first entry of B and a_11 is the corresponding entry of A, I want b_1/a_11 stored back in a_11.

That is, I compute

B /~ (first column of A)

B /~ (second column of A)

and so on for all n columns.

Kind help please
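A minimal sketch under that setup (the random data are just for illustration; the loop writes the result back into A column by column):

with(LinearAlgebra):
n := 3:
A := RandomMatrix(n, n, generator = 1 .. 9):
B := RandomMatrix(n, 1, generator = 1 .. 9):
for j to n do
    A[1 .. n, j] := B[1 .. n, 1] /~ A[1 .. n, j];   # b_i/a_ij stored back into column j of A
end do:
A;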

It seems once in a while someone asks about converting from degrees minutes seconds format to decimal or the other way around. 

I created a little procedure for just that purpose. 

restart; gc()


A little procedure to convert the form of degrees, minutes, seconds to decimal and vice versa.  


dms := proc(d, m := 0, s := 0)
    local con, d1, d2, d3;
    if 0 < frac(d) then
        # decimal degrees -> degrees, minutes, seconds
        d1 := floor(d);
        d2 := floor(60*frac(d));
        d3 := 60*frac(60*frac(d));
        con := cat(d1, `° `, d2, `' `, d3, `"`)
    else
        # degrees, minutes, seconds -> decimal degrees
        con := d + m/60. + s/3600.
    end if;
    print(con)
end proc


Examples of use.

 

dms(45.2365)

`45° 14' 11.4000"`

(1)

dms(45, 14, 11.4)

45.23650000

(2)


Download dms.mw

Edit: a quick fix is to remove the decimal points (which turn the fractions into floating-point numbers) and to change print(con) to print(evalf(con)) to avoid rounding issues.
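A minimal sketch of that edit (the same procedure with exact fractions and evalf applied to the printed result; it is renamed dms2 here only to keep the two versions apart):

dms2 := proc(d, m := 0, s := 0)
    local con, d1, d2, d3;
    if 0 < frac(d) then
        d1 := floor(d);
        d2 := floor(60*frac(d));
        d3 := 60*frac(60*frac(d));
        con := cat(d1, `° `, d2, `' `, d3, `"`)
    else
        con := d + m/60 + s/3600       # exact fractions instead of m/60. and s/3600.
    end if;
    print(evalf(con))
end proc: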

This post is motivated by a question asked by @vs140580  ( The program is making intercept zero even though There is a intercept in regression Fit (A toy code showing the error attached) ).

The problem met by @vs140580 comes from the large magnitudes of the (two) regressors and the failure of Fit/LinearFit to find the correct solution unless an unreasonably large value of Digits is used.
This problem has been answered by @dharr after scaling the data (which is always a good practice, when possible) and by
myself by using explicitly the method called "Normal Equations" (see https://en.wikipedia.org/wiki/Least_squares).

The method of "Normal Equations" relies upon the inversion of a symmetric square matrix H whose dimension equals the number of coefficients of the model to fit.
It is well known that this method can lead to matrices H that are extremely ill-conditioned, and thus to severe numerical problems (the most common situation being the fit of a high-degree polynomial). Alternative methods, such as the QR decomposition or Givens rotations, avoid forming H explicitly and are far less sensitive to these issues.

About these alternative methods:

  • In English: http://www.math.kent.edu/~reichel/courses/intr.num.comp.1/fall09/lecture4/lecture4.pdf
  • In French: https://moodle.utc.fr/pluginfile.php/24407/mod_resource/content/5/MT09-ch3_ecran.pdf


The attached file illustrates how the QR decomposition method works.
The test case is @vs140580's.

Maybe the development team could enhance Fit/LinearFit in future versions by adding an option which specifies what method is to be used?

 

restart:

with(Statistics):

interface(version)

`Standard Worksheet Interface, Maple 2015.2, Mac OS X, December 21 2015 Build ID 1097895`

(1)

Data := Matrix([[4.74593554708566, 11385427.62, 2735660038000], [4.58252830679671, 25469809.77, 12833885700000], [4.29311160501838, 1079325200, 11411813200000000], [4.24176959154225, 1428647556, 18918585950000000], [5.17263072694618, 1428647556, 18918585950000000], [4.39351114955735, 1877950416, 30746202150000000], [4.39599006758777, 1428647556, 18918585950000000], [5.79317412396815, 2448320309, 49065217290000000], [4.48293612651735, 2448320309, 49065217290000000], [4.19990181982522, 2448320309, 49065217290000000], [5.73518217699046, 1856333905, 30648714900000000], [4.67943831980476, 3071210420, 75995866910000000], [4.215240105336, 2390089264, 48670072110000000], [4.41566877563247, 3049877383, 75854074610000000], [4.77780395369828, 2910469403, 74061327950000000], [4.96617430604669, 1416936352, 18891734280000000], [4.36131111330988, 1416936352, 18891734280000000], [5.17783192063198, 1079325200, 11411813200000000], [4.998266287191, 1067513353, 11402362980000000], [4.23366152474871, 2389517120, 48661380410000000], [4.58252830679671, 758079709.3, 5636151969000000], [6.82390874094432, 1304393838, 14240754750000000], [4.24176959154225, 912963601.2, 8621914602000000], [4.52432881167557, 573965555.4, 3535351888000000], [4.84133601918601, 573965555.4, 3535351888000000], [6.88605664769316, 732571773.2, 5558875538000000], [4.35575841415627, 1203944381, 13430693320000000], [4.42527441640593, 955277678, 8795128298000000], [6.82390874094432, 997591754.9, 8968341995000000], [4.35144484433733, 143039477.1, 305355143300000]]):

# Direct use of LinearFit.
#
# As far as I know LinearFit is based on the resolution of the "Normal Equations"
# (see further down), a system of equations that is known to be ill-conditioned
# when regressors have large values (in particular when polynomial regression
# is used).

X := Data[.., [2, 3]]:
Y := Data[.., 1]:


LinearFit(C1+C2*v+C3*w, X, Y, [v, w]);

Warning, model is not of full rank

 

HFloat(6.830889923844425e-9)*v-HFloat(2.275143726335622e-16)*w

(2)

# Because of roundoff issues the 3-by-3 matrix involved in the "Normal Equations" (NE)
# appears to be of rank < 3.
# The rank of this matrix is equal to 1+rank(X), and one can easily verify that
# the 2 columns of X are linearly independent:

LinearAlgebra:-LinearSolve(X, Vector(numelems(Y), 0));
LinearAlgebra:-Rank(X);

 

Vector[column]([[0.], [-0.]])

 

2

(3)

# Solve the least squares problem by using the NE explicitly.
#
# To account for an intercept we augment X by a vector column of "1"
# traditionally put in column one.
Z := `<|>`(Vector(numelems(Y), 1), X):  
A := (Z^+ . Z)^(-1) . Z^+ . Y;          # Normal Equations

A := Vector(3, {(1) = 4.659353816079307, (2) = 0.5985084089529947e-9, (3) = -0.27350964718426345e-16}, datatype = float[8])

(4)

# What is the rank of Z?
# Due to the scale of X compared to "1", Rank fails to return the correct value
# of rank(Z), which is obviously equal to rank(X)+1.

LinearAlgebra:-LinearSolve(Z, Vector(numelems(Y), 0));
LinearAlgebra:-Rank(Z);

Vector[column]([[0.], [0.], [-0.]])

 

2

(5)


A WORKAROUND : SCALING THE DATA

model := unapply( LinearFit(C1+C2*v+C3*w, Scale(X), Scale(Y), [v, w]), [v, w] );

proc (v, w) options operator, arrow; -HFloat(1.264691577813453e-15)+HFloat(0.6607154853418553)*v-HFloat(0.8095150669884322)*w end proc

(6)

mX, sX := (Mean, StandardDeviation)(X);
mY, sY := (Mean, StandardDeviation)(Y);

mX, sX := Vector[row](2, {(1) = 1447634550.7963333, (2) = 24441399854567932.}, datatype = float[8]), Vector[row](2, {(1) = 871086770.7242773, (2) = 23354440973344224.}, datatype = float[8])

 

HFloat(4.857279402730572), HFloat(0.789073010656694)

(7)

MODEL := model((x1-mX[1])/sX[1], (x2-mX[2])/sX[2]) * sY + mY

HFloat(4.659353816079309)+HFloat(5.985084089530032e-10)*x1-HFloat(2.7350964718426736e-17)*x2

(8)

# Check that the vector of regression coefficients is almost equal to A found above
# relative error less than 10^(-14)

A_from_scaling       := < coeffs(MODEL) >:
Relative_Discrepancy := (A_from_scaling - A) /~ A

Relative_Discrepancy := Vector(3, {(1) = 0.5718679809000842e-15, (2) = 0.14166219140514066e-13, (3) = 0.14308415396983588e-13}, datatype = float[8])

(9)


THE QR DECOMPOSITION  (applied on raw data)

The QR decomposition, as well as the Givens rotation method, are two alternatives to the NE method
for finding the vector of regression coefficients.
Both of them are known to be less sensitive to the magnitudes of the regressors and do not (always)
require a scaling of the data (which can be quite complex with polynomial regression, or when some
transformation is used to linearize the statistical model, for instance Y=a*exp(b*X) --> log(Y)=log(a)+b*X).

N := numelems(Y);
P := numelems(Z[1]);

30

 

3

(10)

# Perform the QR decomposition of Z.

Q, R := LinearAlgebra:-QRDecomposition(Z, fullspan);

Q, R := Vector(4, {(1) = ` 30 x 30 `*Matrix, (2) = `Data Type: `*float[8], (3) = `Storage: `*rectangular, (4) = `Order: `*Fortran_order}), Vector(4, {(1) = ` 30 x 3 `*Matrix, (2) = `Data Type: `*float[8], (3) = `Storage: `*triangular[upper], (4) = `Order: `*Fortran_order})

(11)

# Let C the column vector of length P defined by:

C := (Q^+ . Y)[1..P];

C := Vector(3, {(1) = -26.6044149698075, (2) = -.517558353158176, (3) = -.881007519371895})

(12)

# Then the vector of regression coefficients is given by:

A_QR                 := (R[1..P, 1..P])^(-1) . C;
Relative_Discrepancy := (A_QR - A) /~ A

A_QR := Vector(3, {(1) = 4.65935381607931, (2) = 0.5985084090e-9, (3) = -0.2735096472e-16})

 

Relative_Discrepancy := Vector(3, {(1) = 0.3812453206e-15, (2) = 0.1105656128e-13, (3) = 0.1216778632e-13})

(13)

# The matrix H = Z^+ . Z writes

H                    := Z^+ . Z:
H_QR                 := R^+ . Q^+ . Q . R:
Relative_Discrepancy := (H_QR - H) /~ H

Relative_Discrepancy := Matrix(3, 3, {(1, 1) = -0.1184237893e-15, (1, 2) = 0., (1, 3) = -0.3491343943e-15, (2, 1) = 0., (2, 2) = 0.1930383052e-15, (2, 3) = 0.3369103254e-15, (3, 1) = -0.1745671971e-15, (3, 2) = -0.5053654881e-15, (3, 3) = -0.1366873891e-15})

(14)

# The H_QR expression is required to obtain the covariance matrix of the regression coefficients.


 

Download LeastSquares_and_QRdecomposition.mw


 

 

To illustrate, here is an HTML example that overlays a circle and a letter

<span style="position: relative; font-size: 2em;">&#x25CB;<span style="position: absolute; top: 1.0em; right: 0.4em; font-size: 0.4em;">Y</span></span>

that can be pasted here to visualize it.

I am not sure whether that is possible with Maple's typesetting tags.

Hello everyone!

I have an expression for the resonant frequency in terms of some geometric parameter, say "x", i.e. f(x). I want to plot it together with the resonant wavelength (lambda = c/f) in the same plot. The "dualaxisplot" command produces two curves (f(x), lambda(x)) with two uniform axes, one on the left and one on the right. I am wondering: is there a reasonably simple way to make it look like one curve, but with the second (e.g. lambda) axis nonuniformly scaled to fit the curve f(x)?

Many thanks in advance for your suggestions!

restart;

Frac_C := proc (expr, a, t, alpha) local ig, m, tau;

m := ceil(alpha);

ig := (t-tau)^(m-alpha-1)*(diff(eval(expr, t = tau), tau$m));

`assuming`([(int(ig, tau = a .. t))/GAMMA(m-alpha)], [a < t]);

end proc;
r := .5;

k := .7;

eq1 := Frac_C(x, 0, t, r)-y(t) = 0;

eq2 := Frac_C(y, 0, t, k)-x(t)-2*t = 0;

eq3 := x(0)-y(1) = 0;

eq4 := Frac_C(x, 0, t, r)-(eval(diff(y(x), x), x = 1)) = 0;

eq5 := Frac_C(x, 0, t, r)-(eval(diff(y(x), x, x), x = 1)) = 0;

eq6 := eval(diff(y(x), x), x = 0)-x(1)-2 = 0;

eq7 := y(0) = 0;

N := 5;

x[c] := [seq(a[i], i = 0 .. N)];

y[c] := [seq(b[i], i = 0 .. N)];

for n to N do

subs([seq(x(i) = x[c][i], i = 0 .. n), seq(y(i) = y[c][i], i = 0 .. n)], {eq1, eq2, eq3, eq4, eq5, eq6, eq7});
soln := solve({eq3, eq4, eq5, eq6, eq7, seq(coeff(lhs(eq), t, j) = 0, eq in {eq1, eq2})}, {a[n+1], b[n+1]});

x[c][n+1] := eval(a[n+1], soln);

y[c][n+1] := eval(b[n+1], soln);

end do;

x[s] := add(x[c][i]*t^(i-1), i = 1 .. N+1);

y[s] := add(y[c][i]*t^(i-1), i = 1 .. N+1);

x[s];

y[s];

Hi, I want to calculate the Fourier transform of functions with fractional powers. How can I do this? For example, what is the Fourier transform of sqrt(x)? I want a function or an expression as output, not the integral itself. Thanks in advance.

restart:with(inttrans):

f := x -> x^(1/2);
int(f(x)*exp(-I*w*x), x = -infinity .. infinity);

proc (x) options operator, arrow; x^(1/2) end proc

 

int(x^(1/2)*exp(-I*w*x), x = -infinity .. infinity)

(1)

fourier(f(x),x,w)

fourier(x^(1/2), x, w)

(2)

 

 

Download fracfourier.mw
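Since fourier returns unevaluated here, a minimal sketch of a common workaround for the x >= 0 part (a convergence factor exp(-epsilon*x) is inserted and removed again in the limit; whether a closed form comes back may depend on the Maple version):

restart;
F_half := int(sqrt(x)*exp(-I*w*x - epsilon*x), x = 0 .. infinity) assuming epsilon > 0, w::real;
limit(F_half, epsilon = 0, right) assuming w > 0;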
