mmcdara

MaplePrimes Activity


These are answers submitted by mmcdara

The key is that solve applied to CharacteristicPolynomial always returns the eigenvalues in the same order

restart:
with(LinearAlgebra):

# Sy is the 3x3 matrix from the question.
# Eigenvalues, in the order solve returns them:
lambda := < [solve(CharacteristicPolynomial(Sy, t))] >:
# For each eigenvalue L, solve (Sy - L*I) v = 0; LinearSolve returns a parametric
# solution whose free parameters are then all set to 1.
`<|>`(seq(LinearSolve(Sy-DiagonalMatrix([L$3]), <0,0,0>), L in lambda)):
lambda, eval(%, [indets(%)[]]=~[1$3]);

 

 

Here is an answer to your first question only.
It uses two classical results about Legendre polynomials:

# Legendre's differential equation

LDE := (1-x^2)*Diff(P(n, x), x$2) = 2*x*Diff(P(n, x), x) - n*(n+1)*P(n, x);

# 1st Derivative (classical result)

FD := Diff(P(n, x), x) = n/(1-x^2) * (P(n-1, x) - x*P(n, x));

# Expression of Diff(P(n, x), x, x):

map(collect, isolate(eval(LDE, FD), Diff(P(n, x), x$2)), P(n, x))

When P(n, x) is written P(n, cos(theta)), the final result is of the form

Diff(P(n, cos(theta)), theta$2) = a(n, theta)*P(n-1, cos(theta)) + b(n, theta)*P(n, cos(theta))

where a(n, theta) and b(n, theta) are given in closed form in the attached file
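A minimal sketch (not taken from the attached worksheet) of the change-of-variable step: letting Maple differentiate P(n, cos(theta)) directly shows how the theta-derivatives relate to the x-derivatives (D[2](P) denotes the derivative of P with respect to its second argument, i.e. x); combining this with LDE and FD above gives the stated form.

# chain rule applied by diff: theta-derivatives in terms of x-derivatives
diff(P(n, cos(theta)), theta, theta);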

legendre.mw

PS: hope I wasn't mistaken...

A workaround

restart:
alias(u=u(x,y)):
int(diff(u,x)^2+u*diff(u, x$2), x):  # doesn't return the expected antiderivative u*diff(u, x)

# workaround
with(IntegrationTools):
J := Expand(Int(diff(u*diff(u, x), x), x));  # note the use of Int instead of int
K := Parts(op(2, J), u);
sol := op(1, J) + K

integration.mw

Seems simpler

map(k -> v[.., k]/norm(v[.., k], 2), [$1..3])
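A minimal check of this one-liner, with a hypothetical 3x3 Matrix v and LinearAlgebra loaded (Norm from that package is used here for the column 2-norm):

restart:
with(LinearAlgebra):
v := RandomMatrix(3, 3, generator = -1.0 .. 1.0):            # hypothetical data
normalized := map(k -> v[.., k]/Norm(v[.., k], 2), [$1..3]):
map(Norm, normalized, 2);                                    # each column now has unit 2-norm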

 

In four steps:

  1. Do as Kitonum said (export as image.jpg for instance)
  2. Use some digitizer (on Mac I use Engauge Digitizer, see here http://digitizer.sourceforge.net ) to digitize the curve (quite easy, the main problem is a correct definition of the axes).
    It's quite fast (less than 2 minutes for the result shown below).
    Export the digitization as digitalized.csv
  3. Open digitalized.csv and replace the decimal separator (comma) by a dot.
    Save the result (same name for instance)
  4. Read digitalized.csv in your worksheet (you can use CurveFitting:-Spline to create a finer digitization, see the sketch after this list):
    restart:
    file := cat(currentdir(), "/Desktop/digitalized.csv"):
    M := ImportMatrix(file):
    plots:-pointplot(M[2..-1], style=line, size=[800, 400])
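    A hedged sketch of the refinement mentioned in step 4, assuming column 1 of M holds the abscissas (sorted, with distinct values) and column 2 the ordinates:

    # refine the digitized curve with a cubic spline (assumes M has been read as above)
    S := CurveFitting:-Spline(M[2..-1, 1], M[2..-1, 2], x, degree = 3):
    plot(S, x = min(M[2..-1, 1]) .. max(M[2..-1, 1]), size = [800, 400]);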
    

    Excerpt from M

    numelems(M[..,1]):
    convert(M[100..102,..], listlist)
    
    [[202.3029, 204.884], [203.3591, 205.464], [203.6805, 205.47]]
    
    


    Result: plot of the digitized curve (shown as an image in the original answer)

This can be done using MathML coding.
Example:

restart

 # example 1

`#msup(mo(u),mo(2))` := 3;
`#msub(msup(mo(u),mo(2)),mo(h))` := 4;
3*`#msup(mo(u),mo(2))` + a*`#msub(msup(mo(u),mo(2)),mo(h))`;

# example 2
 
`#msup(mo(u),mo(2))` := '`#msup(mo(u),mo(2))`':
`#msub(msup(mo(u),mo(2)),mo(h))` := ' `#msub(msup(mo(u),mo(2)),mo(h))`':

f := (x, y) -> 1/log(x)+y^3:
f(`#msup(mo(u),mo(2))` , `#msub(msup(mo(u),mo(2)),mo(h))`);

But manipulating such names is not that easy.
Using aliases can help simplify all this:

restart

# alias definitions

alias(`#msup(mo(u),mo(2))`=__x, `#msub(msup(mo(u),mo(2)),mo(h))`=__y);
       #msup(mo(u),mo(2)), #msub(msup(mo(u),mo(2)),mo(h))

# example 1

3*__x+a*__y;

# example 2

f := (x, y) -> 1/log(x)+y^3:

f(__x, __y);  # typeset output
lprint(%);    # underlying (detailed) output

 

Here are some improvements to what tomleslie already did:

  1. Use of a simple strategy to distribute the values of the contours according to the distribution of the 25x25 values
    (based on Statistics:-Quantile; see the sketch after this list)
  2. Use of CurveFitting:-ArrayInterpolation to smooth the contour plot.
  3. Use of Explore to visualize how the rendering is affected by the changes of the number K of contours, the size N of the interpolation grid and the method T of interpolation.
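A minimal sketch of points 1 and 2, with a hypothetical 25x25 data Matrix (the names Z, K, cvals and F are illustrative only, not those used in the attached worksheet):

restart:
with(Statistics): with(CurveFitting):
# hypothetical 25x25 data grid standing in for the real data
Z := Matrix(25, 25, (i, j) -> evalf(sin(i/4)*cos(j/4))):
K := 10:                                                  # number of contours
# point 1: contour values chosen according to the distribution of the data
cvals := [seq(Quantile(convert(Z, list), k/(K+1.)), k = 1..K)]:
# point 2: spline interpolation of the 25x25 grid, evaluated on the fly
old := [seq(1..25)]:
F := (u, v) -> ArrayInterpolation([old, old], Z, [[u], [v]], method = spline)[1, 1]:
plots:-contourplot(F, 1..25, 1..25, contours = cvals, grid = [50, 50]);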

Cleverer strategies for choosing the contours could be devised, and Maple also offers other strategies (in the ImageTools package) to "smooth" the rendering of the plot.
contours.mw

Look at this

Student:-LinearAlgebra:-LinearSolveTutor( M );

Next click on the "Next" button, look at the top-right text field to see what is done, and observe the result in the main window.

 

I don't understand how your Mi matrix could be useful to you.
If you really want to solve your system of equations using matrices (the LinearAlgebra package) you can do this

eqs := [ 5*x+4*y+3*z = 2, 7*x+6*y+4*z = 5, 7*x+3*y+5*z = -11];
vars := [indets(eqs)[]];
A, B := LinearAlgebra:-GenerateMatrix(eqs, vars);
sol := LinearAlgebra:-LinearSolve(A, B);
# check
eval(eqs, vars =~ convert(sol, list));

But solve(eqs) will do the job just the same.
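For comparison, the direct approach (eqs and vars as defined above):

solve(eqs, vars);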

What do you mean exactly by multivariate polynomial regression? Here are different possibilities.
Given P regressors X1.., XP and Q dependent variables Y1.., YQ:

  1. some say multivariate in the case P > 1 and Q = 1
    (the univariate case being P = Q = 1)
  2. others say multivariate if Q > 1 (and generally P > 1 too)

Case 1 can be handled by the "native" Statistics:-LinearFit procedure.
Case 2 can or cannot be, depending on the type of regression model you are looking for:

  1. the simplest thing is to run Q independent "multivariate regressions" of type 1 (P > 1 and Q = 1);
    this will give you Q independent regression models.
  2. but you might be looking for a unique combination of X1.., XP (or f1(X1.., XP), ..., fR(X1.., XP) in the case of polynomial regression) that globally explains the Y1.., YQ:
    1. if the Y1.., YQ are correlated, possible ways are regression on principal component factors or redundancy analysis
    2. if the X1.., XP are correlated too, you must turn to partial least squares regression

Situation 1 can be done with Statistics:-LinearFit again; situation 2 requires specific development (a minimal sketch of situation 1 follows).
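A minimal sketch of case/situation 1 with hypothetical data (P = 3 regressors, Q = 1 response, a degree-2 polynomial basis); the data X, Y and the basis are illustrative only:

restart:
with(Statistics):
N := 50:
X := Matrix(N, 3, (i, j) -> evalf(sin(i*j/7.) + i/N)):                  # hypothetical regressors
Y := Vector(N, i -> 1 + 2*X[i, 1] - X[i, 2]*X[i, 3] + 0.01*sin(11*i)):  # hypothetical response
basis := [1, x1, x2, x3, x1*x2, x1*x3, x2*x3, x1^2, x2^2, x3^2]:
model := LinearFit(basis, X, Y, [x1, x2, x3]);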

Here is an example for P = 3 and Q = 2 where Y1 and Y2 are assumed independent.
Let me know if you are interested in the case where Y1.., YQ are correlated (a weaker notion than dependency).

example.mw


Step 1: type this

restart: 
P1:=plot(sin(x), x=0..Pi);
dataMat:=plottools:-getdata(P1);
whattype(dataMat);

dataMat is a list of 3 elements:

  1. "curve"
  2. a list of ranges
  3. a matrix

Converting this to a Matrix as you do won't help.
What you want is to "extract" this matrix alone.

# do this to see what P1 is made of
op(P1);
# and now this to extract the matrix of points
dataMat := op([1, 1], P1);
# then export
ExportMatrix( "test.mat", dataMat, target=Matlab);

For your 3D example:

restart:
z := sin(x)*cos(y):
P2 := plot3d(z, x = -Pi .. Pi, y = -Pi .. Pi);  # plot3d needs explicit ranges for an expression
op(P2);
dataMat := convert(op([1, 3], P2), Matrix);  # as op(...) is an Array, not a Matrix
ExportMatrix( "test.mat", dataMat, target=Matlab);



Done with Maple 2015

At the end of your worksheet type this

lprint(B[..,3]);
lprint(B[..,4]);

You will see that the third column of B has elements of type Hfloat while the fourth is of "simple" type float.
This means that some computation in your worksheet is done with Hfloats (from the Hfloat help page: "The presence of a hardware floating-point number in an expression generally implies that the computation will use hardware floating-point evaluation, unless the settings of Digits and UseHardwareFloats specify otherwise (see UseHardwareFloats).").
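A small standalone illustration (not from your worksheet) of how lprint reveals the difference:

restart:
a := HFloat(1/3):   # hardware float
b := evalf(1/3):    # software float (depends on Digits)
lprint(a);
lprint(b);
type(a, hfloat), type(b, hfloat);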

The simplest (?) trick to avoid this is to write this at the top of your worksheet (it's better to write the restart in a separate block) 

UseHardwareFloats:=false

I didn't try to track down where the Hfloats are generated (I had some difficulties accommodating my eyesight to the colors you use :-) )

You write:
In an academic assignment, the load function acting on top of a beam contains dirac function and its derivatives.

No, the load function acting on the top of a beam contains "concentrated forces" (the ones you call "Dirac forces") and densities of forces ("distributed forces"), never derivatives or Dirac functions.
Euler–Bernoulli_beam_theory

The first thing to notice is that "Dirac forces" do not exist: they are just a useful mathematical representation of reality (a force always acts on a domain of strictly positive measure, otherwise the local pressure would be infinite). Moreover, a concentrated force has a finite magnitude while the amplitude of a Dirac is not defined.

If you really want (but it is useless) to define a loading in terms of a set of Dirac(s), here is a simple trick

# a force of magnitude M at location x = a

F := M * diff(Heaviside(x-a), x);

Another trick would be to define a uniform density of force A on the interval [a-epsilon, a+epsilon] such that 2*epsilon*A = M.

Why is this useless?
Because the shear force, which is a key element in the ODE that governs the flexion of the beam, is the antiderivative of the loading (concentrated and distributed forces)... and thus integrating Dirac(s) gives Heaviside(s).
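A quick check of this last statement:

# integrating a concentrated load of magnitude M at x = a gives a Heaviside jump in the shear force
int(M*Dirac(x - a), x);
                        M*Heaviside(x - a)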


The attached file gives an illustration
Beam_for_fun.mw

If I'm not mistaken, here is the "first order" asymptotic expansion.
(This could be done by hand using Laplace's method.)

The result is of the form

G(x) * N*exp(-N*sqrt(x^2+1)+N*arcsinh(1/x))

(G is given below and in the attached file).

Note: this asymptotic expansion goes to 0 if x is larger than a critical value equal to 0.6627 and goes to +oo if x < 0.6627.
This means this asymptotic expansion is wrong (at least) for x < 0.6627 (the integral vanishes for any positive x as N --> +oo)... either one should push the expansion to larger orders or I did something wrong.

# the critical value of x verifies
fsolve(-sqrt(x^2+1)+arcsinh(1/x))
                     0.66274341934918158097
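A hedged numerical check (not in the attached file) that the integral itself does vanish as N grows for positive x, here at x = 0.5, below the critical value:

evalf(Int(exp(-10*(0.5*cosh(t) + t)), t = 0 .. infinity));
evalf(Int(exp(-20*(0.5*cosh(t) + t)), t = 0 .. infinity));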



I do not know if Maple can do this without help
 

restart
f := (n, x) -> Int(exp(-n*(x*cosh(t)+t)), t = 0 .. infinity);

# the previous relation is of the form

f__x(epsilon) = Int(g(t)*exp(phi__x(t)/epsilon), t = 0 .. infinity);

# with 
#   g(t)=1, 
#   phi__x(t)=-(x*cosh(t)+t)   (x is considered as a known parameter)
#   epsilon=1/n

# Rewritten this way one can apply Laplace's method:
#     g(t) and phi__x(t) are smooth functions
#     epsilon is a small positive parameter
# Doing this one must assume:
#     1/ that phi__x has a global maximum at some value t=c
#        (cf below)
#     2/ diff(phi__x(t), t$2) < 0 at t=c
#        The positivity of x implies this condition is verified too 

phi__x := t -> -(x*cosh(t)+t) ;
c      := solve(diff(phi__x(t), t)=0, t);

diff(phi__x(t), t$2);
eval(%, t=c);         # negativeness of diff(phi__x(t), t$2) at t=c


# Laplace's Method:
# replace phi__x(t) by its taylor expansion up to order 2
# to get 

f__x(epsilon) = Int(exp(('phi__x(c)'+1/2*D[2]('phi__x(c)')*(t-'c')^2)/epsilon), t = 0 .. infinity);

h :=convert(taylor(phi__x(t), t=c, 3), polynom);

# verify this is identical to 

phi__x(c) +1/2*eval(diff(phi__x(t), t$2), t=c)*(t-c)^2;

simplify(%-h)
         
                               0

# note that phi__x(c) doesn't depend on t, thus 

f__x(epsilon) = exp('phi__x(c)'/epsilon)*Int(exp(1/2*D[2]('phi__x(c)')*(t-'c')^2/epsilon), t = 0 .. infinity);


# The integral could easily be computed by hand (an erf function)
# but I use Maple
#    I set A = D[2]('phi__x(c)') (which is negative)
#    and C=c to avoid premature evaluation and keep the expression simple.


K := int(exp(1/2*A*(t-C)^2)/epsilon, t=0..+infinity) assuming A < 0, epsilon > 0;

# Finally

J := simplify( 
       exp(phi__x(c)/epsilon) 
       * 
       eval(K, [C=c, A=eval(diff(phi__x(t), t$2), t=c)]) 
     ) assuming x > 0;


# and thus the asymptotic expansion with regard to N:

Asympt_J := combine(asympt(eval(J, epsilon=1/N), N), exp);

AsymptoticExpansion.mw

In the plotsetup help page it is written :
          A list of colors cannot be provided to the plots[pointplot] or plots[pointplot3d] command.

Here is a workaround (Maple 2015)
 

restart:
List := [[[0, 0], 1], [[1, 2], 0], [[2, 3], 0], [[3, 1], 1]]:
C := n -> COLOR(RGB, op(`if`(List[n][2]=1, [1, 0, 0], [0, 0, 1])));

p := PLOT(seq( POINTS([List[i][1]], C(i)), i=1..nops(List) )):
# verify
plots:-display(p);

plotsetup(ps, plotoutput=cat(currentdir(), "/Desktop/test"), plotoptions=`color=cmyk`);
plots:-display(p);
plotsetup(default):

Here is the content of the file test.eps (the extension is automatically added).

As MaplePrimes doesn't accept eps files for upload, I opened the eps file with a PostScript viewer and exported it to a png file... but believe me, test.eps contains colors.

 
