These are answers submitted by mmcdara

  1. Within the loop: remove the square brackets in the rhs of the two expressions.
  2. Within the loop: replace sum by add (a short illustration follows this list).
  3. Within the loop: you wrote (brackets removed)
    F*(sum(lambda^i*u[i], i = 0 .. n))

    instead of

    F(sum(lambda^i*u[i], i = 0 .. n))

    Note that F is undefined at this point.

  4. You forgot to define u[0] before entering the loop:
    u[0] := 1+t*e^(x+y)
  5. The definition of A[n], at least in Maple 2015.2, is not correct when n=0
  6. The integral is not correctly written:
    1. dt*dt means nothing
    2. int(..., t=0..t, t=0..t) is not correct (Maple 2015.2); write instead
      int(int(..., t=0..t), t=0..t)
  7. ... and I probably missed some other mistakes.
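Here is a minimal illustration of point 2; the procedure f below is hypothetical, chosen only to show why add is safer than sum for explicit finite sums:

restart:
f := i -> `if`(i = 0, 1, x^i):   # a procedure that needs a numeric index
add(f(i), i = 0 .. 2);           # fine: add delays evaluation and returns 1 + x + x^2
sum(f(i), i = 0 .. 2);           # errors: f(i) is first evaluated with a SYMBOLIC i,
                                 # so `if`(i = 0, ...) cannot be decided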

Here is a code which works, but I do not claim it correctly implements the Adomian method as you define it (browse this site, a lot of questions about it have already been posted and answered).
Try to analyze it and correct it while respecting Maple's syntax.

restart

PDEtools[declare](prime = x):

derivatives with respect to x of functions of one variable will now be displayed with '

(1)

# The two next lines are useless
# equ1 := u[tt] = 1/2*(u[xx]+u[yy])+u^2:
# ICS := u(x, y, 0) = 1, u[t](x, y, 0) = e^(x+y), lambda = 0, u[0] = 1+t*e^(x+y), F(u[0]) = u[0]^2

F    := v -> v^2:
u[0] := 1+t*e^(x+y);
A[0] := F(u[0]);

for n from 0 to 5 do
  u[n+1] := (1/2)*int(int(diff(add(u[i], i = 0 .. n), x, x)+diff(add(u[i], i = 0 .. n), y, y), t=0 .. t), t=0..t)
            +
            int(A[n], t = 0 .. t):
  A[n+1] := diff(F(sum(lambda^i*u[i], i = 0 .. n+1)), lambda$(n+1))/factorial(n);
end do:
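# NB (added remark): the classical Adomian polynomial is usually defined as
#   A[n] = eval(diff(F(sum(lambda^i*u[i], i = 0 .. n)), lambda$n), lambda = 0)/factorial(n);
# the A[n+1] line above divides by factorial(n) instead of factorial(n+1) and
# omits the eval at lambda = 0, so check it against the definition you use.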

1+t*e^(x+y)

 

(1+t*e^(x+y))^2

(2)

# Extremely lengthy expressions

Download ADM-1_mmcdara.mw

Last point: even using simplify(..., size) the expressions rapidly become hugely complex.
So do not try to visualize u[n] for n > 3.

As you have all the formulas for step 2 and beyond, I answer only your first question.
The attached file shows how to build a random vector whose gaussian components are linearly correlated.

The underlined bold text defines the limits of the method: all random variables have to be Gaussian and the correlation must be linear.
So do not try to extend this to other situations, such as the linear correlation of uniform random variables: the result will be wrong.
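As a quick illustration of that warning, here is a minimal sketch (the names U1, U2, V are mine): the linear combination acquires the requested correlation but destroys uniformity.

with(Statistics):
U1 := RandomVariable(Uniform(0, 1)):
U2 := RandomVariable(Uniform(0, 1)):
V  := (1/2)*U1 + (sqrt(3)/2)*U2:   # linear combination aiming at rho = 1/2
Correlation(U1, V);                # should return 1/2, as requested ...
PDF(V, v);                         # ... but the density is piecewise (trapezoidal),
                                   # so V is no longer uniform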

restart

with(LinearAlgebra):
with(Statistics):
with(RealDomain):

N := 3:

MarginalStDev := Vector(N, symbol=sigma);
cor_Matrix    := Matrix(N$2, (i, j) -> `if`(j=i, 1, rho[i, j]), shape=symmetric);
cov_Matrix    := DiagonalMatrix(MarginalStDev) . cor_Matrix . DiagonalMatrix(MarginalStDev);

MarginalStDev := Vector(3, {(1) = sigma[1], (2) = sigma[2], (3) = sigma[3]})

 

cor_Matrix := Matrix(3, 3, {(1, 1) = 1, (1, 2) = rho[1, 2], (1, 3) = rho[1, 3], (2, 2) = 1, (2, 3) = rho[2, 3], (3, 3) = 1}, storage = triangular[upper], shape = [symmetric])

 

cov_Matrix := Matrix(3, 3, {(1, 1) = sigma[1]^2, (1, 2) = sigma[1]*rho[1, 2]*sigma[2], (1, 3) = sigma[1]*rho[1, 3]*sigma[3], (2, 1) = sigma[1]*rho[1, 2]*sigma[2], (2, 2) = sigma[2]^2, (2, 3) = sigma[2]*rho[2, 3]*sigma[3], (3, 1) = sigma[1]*rho[1, 3]*sigma[3], (3, 2) = sigma[2]*rho[2, 3]*sigma[3], (3, 3) = sigma[3]^2})

(1)

# Conditions for cor_Matrix to be positive definite

rhos := indets(cor_Matrix):

# Sylvester's criterion: the N main minors must be strictly positive.

det_11 := cor_Matrix[1, 1]:        # this minor is obviously positive
det_22 := Minor(cor_Matrix, 3, 3):
det_33 := Determinant(cor_Matrix):

rho_conds := solve({det_22 > 0, det_33 > 0}, rhos):
print~(rho_conds):
 

-1 < rho[1, 2]

 

-1 < rho[2, 3]

 

rho[1, 2] < 1

 

rho[1, 3] < rho[2, 3]*rho[1, 2]+((rho[2, 3]^2-1)*(rho[1, 2]^2-1))^(1/2)

 

rho[2, 3] < 1

 

rho[2, 3]*rho[1, 2]-((rho[2, 3]^2-1)*(rho[1, 2]^2-1))^(1/2) < rho[1, 3]

(2)

# Cholesky decomposition assuming rho_conds are met

conditions := map(x -> op([x > -1, x < 1]), indets(cor_Matrix))[], map(x -> x > 0, indets(MarginalStDev))[];
LUDecomposition(cov_Matrix, method='Cholesky'):
L := simplify(%) assuming conditions;

conditions := -1 < rho[1, 2], -1 < rho[1, 3], -1 < rho[2, 3], rho[1, 2] < 1, rho[1, 3] < 1, rho[2, 3] < 1, 0 < sigma[1], 0 < sigma[2], 0 < sigma[3]

 

Warning, Matrix is non-hermitian; using only data from upper triangle

 

(the symbolic Cholesky factor L is too lengthy to display here; it failed to render during conversion)

(3)
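Since the N = 3 factor does not display above, here is the N = 2 analogue as a sketch (the matrix cov2 and the scalar rho are mine; the N = 3 structure is similar, just lengthier):

cov2 := Matrix(2, 2, [[sigma[1]^2, rho*sigma[1]*sigma[2]],
                      [rho*sigma[1]*sigma[2], sigma[2]^2]]):
LinearAlgebra:-LUDecomposition(cov2, method='Cholesky'):
simplify(%) assuming sigma[1] > 0, sigma[2] > 0, -1 < rho, rho < 1;

# expected lower factor:
#   < sigma[1]       |  0                        >
#   < rho*sigma[2]   |  sigma[2]*sqrt(1-rho^2)   >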

# Correlation by linear combination:
#
# step 1: Let A be a random vector of N independent centered gaussian variables

A := `<,>`(seq(RandomVariable(Normal(0, 1)), i=1..N));

A := Vector(3, {(1) = _R, (2) = _R0, (3) = _R1})

(4)

# step 2: Define the random vector B this way

B := L . A

(output: B is a 3-component column Vector of linear combinations of _R, _R0, _R1; its display was truncated during conversion)

(5)

# step 3: check B

'Expectation(B)' = Mean~(B);
'Cov(B)' =  simplify(Mean~(B . Transpose(B)))

"?=RTABLE(18446744074197627966,MATRIX([[0], [0], [0]]),Vector[column])"

 

Cov(B) = (Matrix(3, 3, {(1, 1) = sigma[1]^2, (1, 2) = sigma[1]*rho[1, 2]*sigma[2], (1, 3) = sigma[1]*rho[1, 3]*sigma[3], (2, 1) = sigma[1]*rho[1, 2]*sigma[2], (2, 2) = sigma[2]^2, (2, 3) = sigma[2]*rho[2, 3]*sigma[3], (3, 1) = sigma[1]*rho[1, 3]*sigma[3], (3, 2) = sigma[2]*rho[2, 3]*sigma[3], (3, 3) = sigma[3]^2}))

(6)

# final step: Define the random vector C as B + the vector of desired means.
#             Check C
C := B + Vector(N, symbol=mu):

'Expectation(C)' = Mean~(C);
'Cov(C)' =  simplify( Mean~(C . Transpose(C))  - Mean~(C) . Transpose(Mean~(C)) )

"?=RTABLE(18446744074197628238,MATRIX([[mu[1]], [mu[2]], [mu[3]]]),Vector[column])"

 

Cov(C) = (Matrix(3, 3, {(1, 1) = sigma[1]^2, (1, 2) = sigma[1]*rho[1, 2]*sigma[2], (1, 3) = sigma[1]*rho[1, 3]*sigma[3], (2, 1) = sigma[1]*rho[1, 2]*sigma[2], (2, 2) = sigma[2]^2, (2, 3) = sigma[2]*rho[2, 3]*sigma[3], (3, 1) = sigma[1]*rho[1, 3]*sigma[3], (3, 2) = sigma[2]*rho[2, 3]*sigma[3], (3, 3) = sigma[3]^2}))

(7)

 

Download 3_gaussian.mw

restart:
r := 4*t1+4*sqrt(t1^2-4*t2):
cond := t1 < 0:
piecewise(cond, S(cond), S(eval(cond, t1=-t1)))

piecewise(t1 < 0, {t2 < 0}, {t2 <= (1/4)*t1^2})

 

What technical problems?

Given the shape of the surface expr = f(k, r), the solution found by NonlinearFit is extremely doubtful.

You will find a quite detailed analysis of your problem in the attached worksheet:

  1. a recall of @tomleslie's solution,
  2. a rapid evaluation of the stability of the corresponding best fit: "What happens if the initial point is changed?",
  3. an alternative approach based on Optimization:-Minimize which,
    1. when applied to the same expr, leads to a better agreement between the data and the fitted curve,
    2. when applied to the mean values (see details in the attached file), also leads to a close result which, at least theoretically, is less sensitive to roundoff errors,
    3. after having observed that the surface expr = f(k, r) contains 3 discontinuity loci, enables locating the valley where the minimum is to be sought,
    4. when applied over this valley and after proper re-parameterizations, gives an even deeper minimum.

fit_mmcdara.mw

And how to avoid them?

All the problems related to the discontinuities of the objective function can be removed with an ad hoc re-parameterization of the problem:

Let t be the name of the regressor and y the name of the dependent variable.
Define tau as exp(t) (so that t = log(tau))
Define z as 1/y

The only singularity is then located at k=0

With this double transformation, coupled (though this is not necessary) with the "mean trick", one gets a unique solution without any problem:

restart;


NATIVE DATA & MODEL

with(plots):
alias(ST = Statistics):

dat := Matrix([
               [1.0, 60],  [1.0, 70],  [1.0, 80],  [1.0, 90],
               [1.0, 100], [1.0, 110], [1.0, 120], [1.0, 150],
               [2.0, 70],  [2.0, 80],  [2.0, 90],  [2.0, 100], [2.0, 120],
               [2.0, 130], [2.0, 140], [2.0, 150], [2.0, 160], [2.0, 170],
               [3.0, 100], [3.0, 110], [3.0, 120], [3.0, 130], [3.0, 140], [3.0, 150],
               [3.0, 160], [3.0, 170], [3.0, 180], [3.0, 200], [3.0, 210], [3.0, 220]
             ]):

expr := k/(1 + (k/80 - 1)*2.73^(r*t)):


RE-PARAMETERIZATION OF THE ORIGINAL PROBLEM

# In order to reduce roundoff errors, it's better to replace all the couples by
# just 3 couples [1.0, m1], [2.0, m2], [3.0, m3] where m1, m2 and m3 represent
# the mean values of the data at locations 1.0, 2.0 and 3.0 .
#
# Doing this we lose some information and we have to define a new objective function
# J_mean which weights the 3 squared residuals by the number of times each "t site"
# occurs.
#
# Theory says that the minimum of J and the minimum of J_mean, as well as their
# minimizers, should be the same. But using J_mean instead of J will reduce
# roundoff errors:


L      := convert(dat, listlist):
L_mean := [ seq([x, ST:-Mean(map2(op, 2, (select((u -> op(1, u)=x), L))))], x in {map2(op, 1, L)[]}) ];
W_mean := rhs~(ST:-Tally(map2(op, 1, L)));

[[1.0, HFloat(97.5)], [2.0, HFloat(121.0)], [3.0, HFloat(157.5)]]

 

[8, 12, 10]

(1)

y = k/(1 + (k/80 - 1)*2.73^(r*t));
eval(%, t=log(tau));
simplify(%) assuming r > 0;
map((x -> 1/x), %);

y = k/(1+((1/80)*k-1)*2.73^(r*t))

 

y = k/(1+((1/80)*k-1)*2.73^(r*ln(tau)))

 

y = k/(1.+0.125e-1*tau^(1.004301609*r)*k-tau^(1.004301609*r))

 

1/y = (1.+0.125e-1*tau^(1.004301609*r)*k-tau^(1.004301609*r))/k

(2)

# Re-parameterization
#   tau = exp(t) where t represents the first  column of dat
#   z   = 1/y    where y represents the second column of dat

L_mean_Re := map(x -> [exp(x[1]), 1/x[2]], L_mean)

[[2.718281828, HFloat(0.010256410256410256)], [7.389056099, HFloat(0.008264462809917356)], [20.08553692, HFloat(0.006349206349206349)]]

(3)

# Define the model MODEL : tau ---> z

model := 1/expr:
model := simplify(eval(model, t=log(tau))):

MODEL := unapply(model, tau):
MODEL(tau);

(1+((1/80)*k-1)*tau^(1.004301609*r))/k

(4)

# The objective function J to minimize is the weighted residual sum of squares.

J_mean := ( add(W_mean *~ (MODEL~(map2(op, 1, L_mean_Re)) - map2(op, 2, L_mean_Re))^~2) );
J_mean := simplify(J_mean) assuming r > 0;

plot3d(J_mean, k=0..10, r=-1..1, style=surface, contours=50, axis[3]=[mode=log]);


opt_mean := Optimization:-Minimize(J_mean, { k >= 0, r >= 0}, initialpoint={k=1, r=0})

8*(-HFloat(0.010256410256410256)+(1+((1/80)*k-1)*2.718281828^(1.004301609*r))/k)^2+12*(-HFloat(0.008264462809917356)+(1+((1/80)*k-1)*7.389056099^(1.004301609*r))/k)^2+10*(-HFloat(0.006349206349206349)+(1+((1/80)*k-1)*20.08553692^(1.004301609*r))/k)^2

 

(-HFloat(0.4894337985247076)*k-HFloat(0.0020512820512820513)*exp(1.004301609*r)*k^2-HFloat(0.0012293388429752067)*exp(2.008603218*r)*k^2+HFloat(0.29834710743801657)*exp(2.008603218*r)*k-16.*exp(2.008603218*r)+0.1875e-2*exp(4.017206436*r)*k^2-.3*exp(4.017206436*r)*k+12*exp(4.017206436*r)-HFloat(0.0015873015873015873)*exp(3.012904827*r)*k^2+0.15625e-2*exp(6.025809654*r)*k^2-.25*exp(6.025809654*r)*k+10*exp(6.025809654*r)-16.*exp(1.004301609*r)-20.*exp(3.012904827*r)+HFloat(0.002064291969868487)*k^2+30.+HFloat(0.36410256410256414)*exp(1.004301609*r)*k+HFloat(0.37698412698412703)*exp(3.012904827*r)*k)/k^2

 

 

[0.379608116406440121e-6, [k = HFloat(1.0005661663654444), r = HFloat(0.002095863450755464)]]

(5)

display(
  [
    plot(eval(eval(expr, tau=exp(t)), opt_mean[2]), t=(min..max)(dat[..,1]), color=red),
    plot(L, style=point, symbol=solidcircle, symbolsize=16, color=blue)
  ]
);

 

FIT := unapply( eval(eval(expr, tau=exp(t)), opt_mean[2]), t);

ResidualSumOfSquares := add( (FIT~(dat[.., 1]) - dat[.., 2])^~2 );

proc (t) options operator, arrow; HFloat(1.0005661663654444)/(1-HFloat(0.9874929229204319)*2.73^(HFloat(0.002095863450755464)*t)) end proc

 

HFloat(34172.640157119225)

(6)

 

Download fit_1_mmcdara.mw


You can get the answer using ShortestPath instead:

restart
with(GraphTheory):
with(SpecialGraphs):
g:=IcosahedronGraph():
HighlightVertex(g, {1,7}, green):
G := CopyGraph(g):
d := NULL:

while IsConnected(G) do
  s := ShortestPath(G, 1, 7);
  c := s[2..-2]:
  HighlightEdges(G, {seq({s[i], s[i+1]}, i=1..nops(s)-1)}, red);
  d := d, DrawGraph(G);
  G := DeleteVertex(G, c);
  print(s);
end do:
d := d, DrawGraph(G):

plots:-display(`<|>`(d))
                           [1, 2, 7]
                           [1, 3, 7]
                          [1, 5, 6, 7]
                          [1, 9, 8, 7]
                       [1, 4, 10, 12, 7]



ThisWay.mw

This answers your question and not another.

MTM:-sort([x, x/3])
                            [1     ]
                            [- x, x]
                            [3     ]

Using sort with options `>` or `<` (the default) is useless because we don't know whether x/3 is larger or smaller than x:

sort([1, 1/3]);          # 1/3, 1
sort([1, 1/3], `>`);     # 1, 1/3
sort([-1, -1/3]);        # -1, -1/3
sort([-1, -1/3], `>`);   # -1/3, -1

Hi, 

I guess it's more a problem of visualization than a problem of storage:

V := Vector[row](6, {1 = T[1], 2 = T[2]}, storage = sparse, fill = {}); 
op(%)
            Vector[row](%id = 18446744078202550262)
6, {1 = T[1], 2 = T[2]}, datatype = anything, storage = sparse, 

  order = Fortran_order, shape = []
addressof(V[1]);
addressof(V[2]);
addressof(V[3]);
addressof(V[4]);
addressof(V[5]);
addressof(V[6]);
                      18446744078202681118
                      18446744078202681166
                      18446744073709551615
                      18446744073709551615
                      18446744073709551615
                      18446744073709551615

The addresses of all the unassigned elements of V being the same, I understand (but maybe I'm wrong) that no memory is allocated to them.
By comparison 

V := Vector[row](6, symbol=s); 
addressof(V[1]);
addressof(V[2]);
addressof(V[3]);
addressof(V[4]);
addressof(V[5]);
addressof(V[6]);
                      18446744078205130166
                      18446744078205130190
                      18446744078205130214
                      18446744078205130238
                      18446744078205130262
                      18446744078205130286



UPDATED

BEFORE ALL: the LogNormal distribution already exists in Maple.


Let us suppose that LogNormal is not a member of the ListKnownDistributions list.
There are two ways to work with a LogNormal (LN) distribution:

  1. The first way is to create a LN variable Y from a Normal one (the underlying normal rv Carl referred to); see the short sketch below.
    1. define X as a Normal rv
    2. set Y = exp(X)
       
  2. A more complex way is to define a Distribution of LN type (see the attached file).
    Note this is purely academic since this distribution already exists in Maple, but it may help you define new distributions (a LN distribution parameterized differently, a normal distribution parameterized by mean and variance, a Dirichlet distribution, ...); the process is exactly the same as the one described here.



At the end of the file you will see a list of all the statistics you can define for a continuous random variable. You can add them to the definition of LogNormal to build a more complete distribution.
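Here is a minimal sketch of the first way (the symbols mu and sigma are mine; the closed form in the comment is the classical lognormal mean):

with(Statistics):
assume(sigma > 0):
X := RandomVariable(Normal(mu, sigma)):
Y := exp(X):                 # Y is LogNormal with parameters mu, sigma
simplify(Mean(Y));           # should return exp(mu + sigma^2/2)
PDF(Y, y) assuming y > 0;    # should return the classical lognormal density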

LogNormal.mw

 

I don't know about such a built-in function, but you can proceed this way

restart:

NR := 10:
NC := 8:
M  := LinearAlgebra:-RandomMatrix(NR, NC, generator=0..1)

M := Matrix(10, 8, {(1, 1) = 1, (1, 2) = 0, (1, 3) = 1, (1, 4) = 0, (1, 5) = 0, (1, 6) = 1, (1, 7) = 1, (1, 8) = 0, (2, 1) = 1, (2, 2) = 0, (2, 3) = 0, (2, 4) = 1, (2, 5) = 0, (2, 6) = 1, (2, 7) = 1, (2, 8) = 1, (3, 1) = 1, (3, 2) = 0, (3, 3) = 1, (3, 4) = 0, (3, 5) = 0, (3, 6) = 1, (3, 7) = 0, (3, 8) = 0, (4, 1) = 1, (4, 2) = 1, (4, 3) = 0, (4, 4) = 0, (4, 5) = 0, (4, 6) = 1, (4, 7) = 1, (4, 8) = 1, (5, 1) = 0, (5, 2) = 0, (5, 3) = 1, (5, 4) = 0, (5, 5) = 1, (5, 6) = 1, (5, 7) = 0, (5, 8) = 0, (6, 1) = 1, (6, 2) = 1, (6, 3) = 1, (6, 4) = 0, (6, 5) = 1, (6, 6) = 1, (6, 7) = 0, (6, 8) = 1, (7, 1) = 1, (7, 2) = 1, (7, 3) = 1, (7, 4) = 1, (7, 5) = 0, (7, 6) = 1, (7, 7) = 0, (7, 8) = 0, (8, 1) = 1, (8, 2) = 0, (8, 3) = 1, (8, 4) = 1, (8, 5) = 1, (8, 6) = 1, (8, 7) = 1, (8, 8) = 1, (9, 1) = 1, (9, 2) = 0, (9, 3) = 0, (9, 4) = 1, (9, 5) = 0, (9, 6) = 0, (9, 7) = 0, (9, 8) = 0, (10, 1) = 1, (10, 2) = 1, (10, 3) = 0, (10, 4) = 1, (10, 5) = 0, (10, 6) = 0, (10, 7) = 0, (10, 8) = 0})

(1)

ZerosInRow := i -> NC-numelems(op(M[i])[2])

proc (i) options operator, arrow; NC-numelems(op(M[i])[2]) end proc

(2)

ZerosInRow(1);
ZerosInRow(2);

4

 

3

(3)
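If the op-based trick above ever breaks (it relies on the internal representation of M), here is a storage-independent sketch (the name ZerosInRow2 is mine):

ZerosInRow2 := i -> add(`if`(M[i, j] = 0, 1, 0), j = 1 .. NC):
ZerosInRow2(1);   # should also return 4 on the matrix above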

 

Download ZerosInRow.mw

@MaPal93 

It's 11 pm here, so I'm going to quit Mapleprimes in a few minutes.
I just have time to provide you with a worksheet which implements your ideas but does not validate your last result.
Please check what I did (all the intermediate results about the expectations are correct).

There is just an ambiguous point about the last operation you mention.

restart:


PART 1

with(Statistics):

X := RandomVariable(Normal(mu__X, sigma__X)):
Y := RandomVariable(Normal(mu__Y, sigma__Y)):

Correlation(X, Y) assuming sigma__X > 0, sigma__Y > 0

0

(1)

# To correlate X and Y use a third variable Z defined this way:

Z := a*X + b*Y

_R*a+_R0*b

(2)

# We want these two relations to be verified:

rel_1 := Correlation(X, Z) = rho    assuming sigma__X::positive, sigma__Y::positive;
rel_2 := Variance(Z) = sigma__Y^2   assuming sigma__X::positive, sigma__Y::positive;

(a*mu__X^2+a*sigma__X^2+b*mu__X*mu__Y-mu__X*(a*mu__X+b*mu__Y))/(sigma__X*(a^2*sigma__X^2+b^2*sigma__Y^2)^(1/2)) = rho

 

a^2*sigma__X^2+b^2*sigma__Y^2 = sigma__Y^2

(3)

# Solve these two relations with respect to a and b.

solve([rel_1, rel_2], [a, b]);
av := allvalues(%);

[[a = -rho*sigma__Y/sigma__X, b = RootOf(_Z^2+rho^2-1)], [a = rho*sigma__Y/sigma__X, b = RootOf(_Z^2+rho^2-1)]]

 

[[a = -rho*sigma__Y/sigma__X, b = (-rho^2+1)^(1/2)], [a = rho*sigma__Y/sigma__X, b = (-rho^2+1)^(1/2)]], [[a = -rho*sigma__Y/sigma__X, b = -(-rho^2+1)^(1/2)], [a = rho*sigma__Y/sigma__X, b = -(-rho^2+1)^(1/2)]]

(4)

# Let's take the first solution (this is an arbitrary decision).

ab := av[1][1]

[a = -rho*sigma__Y/sigma__X, b = (-rho^2+1)^(1/2)]

(5)

# Finally copy Z into Y if you want to keep working with X and Y.
# Don't forget to check the result.
Y := copy(Z):

eval([Correlation(X, Y), Variance(Y)], ab)   assuming sigma__X::positive, sigma__Y::positive:
simplify(%)   assuming sigma__Y::positive;

[-rho, sigma__Y^2]

(6)


APPLICATION TO YOUR PROBLEM

nu__jk := RandomVariable(Normal(q__0jk, sqrt(Sigma__0jk))):
nu__ki := RandomVariable(Normal(q__0ki, sqrt(Sigma__0ki))):

# Correlate nu_jk and nu_ki (correlation coeff = rho__XX__23)

XX := eval(ab, [sigma__X=sqrt(Sigma__0jk), sigma__Y=sqrt(Sigma__0ki), rho=rho__XX__23]);

[a = -rho__XX__23*Sigma__0ki^(1/2)/Sigma__0jk^(1/2), b = (-rho__XX__23^2+1)^(1/2)]

(7)

nu__ki := eval(a*nu__jk + b*nu__ki,  XX)

-_R7*rho__XX__23*Sigma__0ki^(1/2)/Sigma__0jk^(1/2)+_R8*(-rho__XX__23^2+1)^(1/2)

(8)

# WATCH OUT!!!
# The expectation of nu_ki is no longer q__0ki:

expand(Mean(nu__ki));

# thus we must shift nu__ki

nu__ki := nu__ki - expand(Mean(nu__ki)) + q__0ki:

 

q__0ki*(-rho__XX__23^2+1)^(1/2)-q__0jk*rho__XX__23*Sigma__0ki^(1/2)/Sigma__0jk^(1/2)

(9)

# Check

eval([Correlation(nu__jk, nu__ki), Variance(nu__ki), Mean(nu__ki)])   assuming Sigma__0jk::positive, Sigma__0ki::positive:
simplify(%)   assuming sigma__Y::positive;

[-rho__XX__23, Sigma__0ki, q__0ki]

(10)

# It's probably interesting to define some assumptions to use hereafter

assumptions := Sigma__0jk::positive, Sigma__0ki::positive:

u__jk := RandomVariable(Normal(0, sigma__ujk)):
u__ki := RandomVariable(Normal(0, sigma__uki)):

Variance(nu__jk + nu__ki);
Mean(nu__jk^2);
Mean(nu__ki^2);
Mean(u__jk^2);
Mean(u__ki^2);

-2*Sigma__0ki^(1/2)*Sigma__0jk^(1/2)*rho__XX__23+Sigma__0jk+Sigma__0ki

 

q__0jk^2+Sigma__0jk

 

q__0ki^2+Sigma__0ki

 

sigma__ujk^2

 

sigma__uki^2

(11)

allrv := ["nu__jk", "nu__ki", "u__jk", "u__ki"]:

for i from 1 to 2 do
  rvi := parse(allrv[i]):
  for j from 3 to 4 do
    rvj := parse(allrv[j]):
    printf("E(%s, %s) = %a\n", allrv[i], allrv[j], Mean(rvi*rvj)):
  end do:
end do:

E(nu__jk, u__jk) = 0
E(nu__jk, u__ki) = 0
E(nu__ki, u__jk) = 0

E(nu__ki, u__ki) = 0

 

Mean(nu__jk*nu__ki);


PART 2  (minimization problem)

argmin := Mean( (nu__jk*(-beta__jk2*lambda__jk+1)-lambda__jk*(nu__ki*beta__jk3+u__jk)-alpha__jk*lambda__jk-mu__jk)^2 )

-2*Sigma__0ki^(1/2)*Sigma__0jk^(1/2)*beta__jk2*beta__jk3*lambda__jk^2*rho__XX__23+q__0jk^2*beta__jk2^2*lambda__jk^2+2*q__0jk*q__0ki*beta__jk2*beta__jk3*lambda__jk^2+q__0ki^2*beta__jk3^2*lambda__jk^2+2*Sigma__0ki^(1/2)*Sigma__0jk^(1/2)*beta__jk3*lambda__jk*rho__XX__23+2*q__0jk*alpha__jk*beta__jk2*lambda__jk^2+2*q__0ki*alpha__jk*beta__jk3*lambda__jk^2+Sigma__0jk*beta__jk2^2*lambda__jk^2+lambda__jk^2*Sigma__0ki*beta__jk3^2+2*mu__jk*q__0jk*beta__jk2*lambda__jk+2*mu__jk*q__0ki*beta__jk3*lambda__jk-2*q__0jk^2*beta__jk2*lambda__jk-2*q__0jk*q__0ki*beta__jk3*lambda__jk+alpha__jk^2*lambda__jk^2+2*mu__jk*alpha__jk*lambda__jk-2*q__0jk*alpha__jk*lambda__jk-2*Sigma__0jk*beta__jk2*lambda__jk+mu__jk^2-2*mu__jk*q__0jk+q__0jk^2+Sigma__0jk+lambda__jk^2*sigma__ujk^2

(12)

FOC_1A := diff(argmin, mu__jk) = 0;

2*q__0jk*beta__jk2*lambda__jk+2*q__0ki*beta__jk3*lambda__jk+2*alpha__jk*lambda__jk+2*mu__jk-2*q__0jk = 0

(13)

FOC_1B := diff(argmin, lambda__jk) = 0;

-4*Sigma__0ki^(1/2)*Sigma__0jk^(1/2)*beta__jk2*beta__jk3*lambda__jk*rho__XX__23+2*q__0jk^2*beta__jk2^2*lambda__jk+4*q__0jk*q__0ki*beta__jk2*beta__jk3*lambda__jk+2*q__0ki^2*beta__jk3^2*lambda__jk+2*Sigma__0ki^(1/2)*Sigma__0jk^(1/2)*beta__jk3*rho__XX__23+4*q__0jk*alpha__jk*beta__jk2*lambda__jk+4*q__0ki*alpha__jk*beta__jk3*lambda__jk+2*Sigma__0jk*beta__jk2^2*lambda__jk+2*lambda__jk*Sigma__0ki*beta__jk3^2+2*mu__jk*q__0jk*beta__jk2+2*mu__jk*q__0ki*beta__jk3-2*q__0jk^2*beta__jk2-2*q__0jk*q__0ki*beta__jk3+2*alpha__jk^2*lambda__jk+2*mu__jk*alpha__jk-2*q__0jk*alpha__jk-2*Sigma__0jk*beta__jk2+2*lambda__jk*sigma__ujk^2 = 0

(14)

# I don't really understand your phrase
#       "Plug FOC_1A into FOC_1B and FOC_1B can now be solved for lambda__jk."
# I guess you want to do this (?)

solve([FOC_1A, FOC_1B], [mu__jk, lambda__jk]);

[[mu__jk = -(q__0jk*Sigma__0jk^(1/2)*Sigma__0ki^(1/2)*beta__jk2*beta__jk3*rho__XX__23-q__0ki*Sigma__0jk^(1/2)*Sigma__0ki^(1/2)*beta__jk3^2*rho__XX__23-q__0jk*Sigma__0ki*beta__jk3^2+q__0ki*Sigma__0jk*beta__jk2*beta__jk3-Sigma__0jk^(1/2)*Sigma__0ki^(1/2)*alpha__jk*beta__jk3*rho__XX__23+Sigma__0jk*alpha__jk*beta__jk2-q__0jk*sigma__ujk^2)/(-2*Sigma__0jk^(1/2)*Sigma__0ki^(1/2)*beta__jk2*beta__jk3*rho__XX__23+Sigma__0jk*beta__jk2^2+Sigma__0ki*beta__jk3^2+sigma__ujk^2), lambda__jk = Sigma__0jk^(1/2)*(-Sigma__0ki^(1/2)*beta__jk3*rho__XX__23+Sigma__0jk^(1/2)*beta__jk2)/(-2*Sigma__0jk^(1/2)*Sigma__0ki^(1/2)*beta__jk2*beta__jk3*rho__XX__23+Sigma__0jk*beta__jk2^2+Sigma__0ki*beta__jk3^2+sigma__ujk^2)]]

(15)

eval(lambda__jk, %[]);

Sigma__0jk^(1/2)*(-Sigma__0ki^(1/2)*beta__jk3*rho__XX__23+Sigma__0jk^(1/2)*beta__jk2)/(-2*Sigma__0jk^(1/2)*Sigma__0ki^(1/2)*beta__jk2*beta__jk3*rho__XX__23+Sigma__0jk*beta__jk2^2+Sigma__0ki*beta__jk3^2+sigma__ujk^2)

(16)

# In fact "Plug FOC_1A into FOC_1B" means nothing in Maple.
# The closest thing to it is eval(FOC_1B, something = something_else)
# where:
#     something is an expression that FOC_1B contains,
#     something_else is an expression used as a replacement for something.
#
# For instance

replacement := isolate(FOC_1A, mu__jk);

plug_into_FOC_1B := eval(FOC_1B, replacement);

mu__jk = -q__0jk*beta__jk2*lambda__jk-q__0ki*beta__jk3*lambda__jk-alpha__jk*lambda__jk+q__0jk

 

-4*Sigma__0ki^(1/2)*Sigma__0jk^(1/2)*beta__jk2*beta__jk3*lambda__jk*rho__XX__23+2*q__0jk^2*beta__jk2^2*lambda__jk+4*q__0jk*q__0ki*beta__jk2*beta__jk3*lambda__jk+2*q__0ki^2*beta__jk3^2*lambda__jk+2*Sigma__0ki^(1/2)*Sigma__0jk^(1/2)*beta__jk3*rho__XX__23+4*q__0jk*alpha__jk*beta__jk2*lambda__jk+4*q__0ki*alpha__jk*beta__jk3*lambda__jk+2*Sigma__0jk*beta__jk2^2*lambda__jk+2*lambda__jk*Sigma__0ki*beta__jk3^2+2*(-q__0jk*beta__jk2*lambda__jk-q__0ki*beta__jk3*lambda__jk-alpha__jk*lambda__jk+q__0jk)*q__0jk*beta__jk2+2*(-q__0jk*beta__jk2*lambda__jk-q__0ki*beta__jk3*lambda__jk-alpha__jk*lambda__jk+q__0jk)*q__0ki*beta__jk3-2*q__0jk^2*beta__jk2-2*q__0jk*q__0ki*beta__jk3+2*alpha__jk^2*lambda__jk+2*(-q__0jk*beta__jk2*lambda__jk-q__0ki*beta__jk3*lambda__jk-alpha__jk*lambda__jk+q__0jk)*alpha__jk-2*q__0jk*alpha__jk-2*Sigma__0jk*beta__jk2+2*lambda__jk*sigma__ujk^2 = 0

(17)

solve(plug_into_FOC_1B, lambda__jk);

(Sigma__0ki^(1/2)*Sigma__0jk^(1/2)*beta__jk3*rho__XX__23-Sigma__0jk*beta__jk2)/(2*Sigma__0jk^(1/2)*Sigma__0ki^(1/2)*beta__jk2*beta__jk3*rho__XX__23-Sigma__0jk*beta__jk2^2-Sigma__0ki*beta__jk3^2-sigma__ujk^2)

(18)

# Which is obviously the same thing as solving a 2-equation system in mu__jk and lambda__jk

Download 230323_Problem_mmcdara.mw

combinat:-numbcomb(6, 3);
                           20
combinat:-choose(6, 3);  # still tractable
   [[1, 2, 3], [1, 2, 4], [1, 2, 5], [1, 2, 6], [1, 3, 4], 

     [1, 3, 5], [1, 3, 6], [1, 4, 5], [1, 4, 6], [1, 5, 6], 

     [2, 3, 4], [2, 3, 5], [2, 3, 6], [2, 4, 5], [2, 4, 6], 

     [2, 5, 6], [3, 4, 5], [3, 4, 6], [3, 5, 6], [4, 5, 6]]
combinat:-numbcomb(100, 3);
                             161700 

CodeTools:-Usage(combinat:-choose(100, 3)):
memory used=19.90MiB, alloc change=77.36MiB, cpu time=220.00ms, real time=221.00ms, gc time=40.07ms

combinat:-choose(100, 3)[2023]
                          [1, 25, 47]

 


A few are given in the attached file

 

restart

with(plots):

f := proc (u) options operator, arrow; e^(-theta*u) end proc;

proc (u) options operator, arrow; e^Student:-VectorCalculus:-`-`(Student:-VectorCalculus:-`*`(theta, u)) end proc

(1)

R := int(Student:-VectorCalculus:-`*`(1/e^Student:-VectorCalculus:-`*`(theta, t), int(Student:-VectorCalculus:-`*`(f(u), e^Student:-VectorCalculus:-`*`(theta, u)), u = t .. t1)), t = 0 .. t1);

(theta*ln(e)*t1-1+e^(-theta*t1))/(theta^2*ln(e)^2)

(2)
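# Remark (added observation): here < e > is a plain name, not Maple's
# exponential exp(1); this is why ln(e) factors survive in the results.
# With exp(...) instead, R evaluates to the cleaner closed form
#     (theta*t1 - 1 + exp(-theta*t1))/theta^2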

with(IntegrationTools):

# Writing < Int > instead of < int > enables using IntegrationTools,
# and thus extracting the integrand < c > in a simple way.
# As a drawback you must use value to evaluate the integral

r := Int(1/e^(theta*t)*Int(f(u)*e^(theta*u), u = t..t1), t=0..t1);
c := IntegrationTools:-GetIntegrand(r);

Int((Int(e^(-theta*u)*e^(theta*u), u = t .. t1))/e^(theta*t), t = 0 .. t1)

 

(Int(e^(-theta*u)*e^(theta*u), u = t .. t1))/e^(theta*t)

(3)

value(r);

(theta*ln(e)*t1-1+e^(-theta*t1))/(theta^2*ln(e)^2)

(4)

ci := IntegrationTools:-GetIntegrand(c);

e^(-theta*u)*e^(theta*u)

(5)

G := unapply( Int(ci, u=0..t1) / denom(c), t)

proc (t) options operator, arrow; (Int(e^(-theta*u)*e^(theta*u), u = 0 .. t1))/e^(theta*t) end proc

(6)

value(G(t))

t1/e^(theta*t)

(7)

value(G(0))

t1

(8)


BUT...

G := t -> Int(ci, u=0..t1) / denom(c);
value(G(t));
value(G(0));
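# Why the arrow form fails: the body of < t -> ... > is not evaluated when G is
# defined, so the t hidden inside denom(c) stays the global t rather than the
# procedure's parameter; unapply evaluates the expression first, which is why
# the previous definition behaved correctly.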

# You can "fix" this in the following way

eval(value(G(t)), t=0)

proc (t) options operator, arrow; Student:-VectorCalculus:-`*`(Int(ci, u = 0 .. t1), 1/denom(c)) end proc

 

t1/e^(theta*t)

 

t1/e^(theta*t)

 

t1

(9)


FINALLY you can simplify the "unapply" definition of G this way

G := unapply( value(Int(ci, u=0..t1)) / denom(c), t)

proc (t) options operator, arrow; t1/e^(theta*t) end proc

(10)

G(0)

t1

(11)

 


 

Download M2_mmcdara.mw

Let's note that 

u := (1 + 20230321)*x*y - (x^2 + y^2)/2

is a homogeneous polynomial wrt x and y.
One can rewrite u by setting [x=lambda*cos(t), y=lambda*sin(t)], which gives

v := collect(eval(u, [x=lambda*cos(t), y=lambda*sin(t)]), lambda);
(20230322*cos(t)*sin(t) - (1/2)*cos(t)^2 - (1/2)*sin(t)^2)*lambda^2

The maxima of v are obtained when lambda is equal to +/- infinity and when the t function (op(1, v)) reaches its maxima:

v       := collect(eval(u, [x=lambda*cos(t), y=lambda*sin(t)]), lambda):
opt_t_1 := Optimization:-NLPSolve(op(1, v), t=0..Pi, maximize):
opt_t_2 := Optimization:-NLPSolve(op(1, v), t=Pi..2*Pi, maximize):

opt_x_1 := op(2, v) *~ eval([cos(t), sin(t)], opt_t_1[2]):
opt_x_2 := op(2, v) *~ eval([cos(t), sin(t)], opt_t_2[2]):

identify(evalf[8](opt_x_1));
identify(evalf[8](opt_x_2));

[(1/2)*2^(1/2)*lambda^2, (1/2)*2^(1/2)*lambda^2]

[-(1/2)*2^(1/2)*lambda^2, -(1/2)*2^(1/2)*lambda^2]

So the maxima are pushed out to x = +/- infinity along the direction y = x.
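A quick sanity check along the ray y = x (a one-line sketch):

eval(u, y = x);   # returns 20230321*x^2, which indeed grows without bound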

Which is a far better result than the one Mathematica provides.

So, let me ask you this question (and @QM too): suppose you are part of an MMA users blog and MMA tells you that

 {\[Infinity], {x -> Indeterminate, y -> Indeterminate}}

wouldn't you say that MMA doesn't give a completely satisfactory answer?

I feel a sense of unfairness in the question and this two-man bashing of Maple, am I wrong? 

d1 is a centered divided difference approximation of diff(u, x$3) at "point" [i+1/2, j], not an approximation of the Laplacian of u at [i, j].
 

restart

tau := (d, s) -> p -> p + s*~[2-d, d-1]

proc (d, s) options operator, arrow; proc (p) options operator, arrow; p+`~`[:-`*`](s, ` $`, [2-d, d-1]) end proc end proc

(1)

# examples

tau(1, h)([0, 0]), tau(1, -h)([0, 0]);
tau(2, k)([0, 0]), tau(2, -k)([0, 0]);

[h, 0], [-h, 0]

 

[0, k], [0, -k]

(2)

# forward and backward divided differences as approximations of diff(u(x, y), x)

fd_x := (u[op(tau(1, 1)([i, j]))] - u[op(tau(1, 0)([i, j]))]) / h;
bd_x := (u[op(tau(1, 0)([i, j]))] - u[op(tau(1, -1)([i, j]))]) / h;
 

(u[1+i, j]-u[i, j])/h

 

(u[i, j]-u[-1+i, j])/h

(3)

# approximation of diff(u(x, y), x$2) from fd and bd

d2_x := simplify((fd_x-bd_x)/h)

-(-u[1+i, j]+2*u[i, j]-u[-1+i, j])/h^2

(4)

# laplacian at point [i, j] 

fd_y := (u[op(tau(2, 1)([i, j]))] - u[op(tau(2, 0)([i, j]))]) / k;
bd_y := (u[op(tau(2, 0)([i, j]))] - u[op(tau(2, -1)([i, j]))]) / k;

d2_y := simplify((fd_y-bd_y)/k);

lap := simplify(d2_x+d2_y):
lap := collect(numer(lap), u[i, j]) / denom(lap)

(u[i, 1+j]-u[i, j])/k

 

(u[i, j]-u[i, -1+j])/k

 

-(-u[i, 1+j]+2*u[i, j]-u[i, -1+j])/k^2

 

((-2*h^2-2*k^2)*u[i, j]+h^2*u[i, -1+j]+h^2*u[i, 1+j]+k^2*u[-1+i, j]+k^2*u[1+i, j])/(h^2*k^2)

(5)

# approximation of diff(u(x, y), x$3) at point [i+1/2, j]


d3_x := simplify((eval(d2_x, i=i+1) - d2_x) / h)

(u[2+i, j]-3*u[1+i, j]+3*u[i, j]-u[-1+i, j])/h^3

(6)
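As a sanity check of the [i+1/2, j] claim, substitute a smooth function (the name U and the point x are mine) and expand in h:

check := eval(d3_x, [u[2+i, j] = U(x+2*h), u[1+i, j] = U(x+h),
                     u[i, j] = U(x), u[-1+i, j] = U(x-h)]):
series(check, h = 0, 2);
# expected: (D@@3)(U)(x) + (1/2)*(D@@4)(U)(x)*h + O(h^2),
# i.e. the third derivative taken at x + h/2, up to O(h^2)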

 


Download DividedDifferences.mw


This code allows you to print your list in a way that is close to what you want.
Printed lines have around n=10 characters.

Of course adjustments have to be made if some elements of the list are numbers with more than n digits.
Moreover, the list is converted into a string before being parsed, which means that spaces and commas are included in the computation of the string to print.
But I think this can give you some hints to go further (for instance, if you want to print numbers only, you must augment the range to print, 1..pc[cut], by 2*p, where p is the number of occurrences of a comma in newLs[1..pc[cut]]).

 

restart:

SmartPrint := proc(L::list, n::posint)
local newLs, pc, cut:

newLs  := convert(L, string)[2..-2]:
pc     := [ StringTools:-SearchAll(",", newLs)]:
cut    := ListTools:-BinaryPlace(pc, n+1/2):

while length(newLs) > n do
  if pc[cut+1] = n+1 then
    cut := cut+1:
  end if:
  printf("%-20s (length = %d)\n", StringTools:-Trim(newLs[1..pc[cut]]), pc[cut]):
  newLs  := StringTools:-Trim(newLs[pc[cut]+1..-1]);
  pc     := [ StringTools:-SearchAll(",", newLs)];
  cut    := ListTools:-BinaryPlace(pc, n+1/2);
end do:

printf("%-20s (length = %d)\n", StringTools:-Trim(newLs), length(StringTools:-Trim(newLs))):
end proc:

L := [seq(i,i=1..20)]:
SmartPrint(L, 10)

1, 2, 3, 4,          (length = 11)
5, 6, 7, 8,          (length = 11)
9, 10, 11,           (length = 10)
12, 13, 14,          (length = 11)
15, 16, 17,          (length = 11)
18, 19, 20           (length = 10)

 

r := 1000*rand(0..1)*rand(0..1) + 100*rand(0..1)*rand(0..1) + 10*rand(0..10)*rand(0..1) + rand(0..10):
L := [seq(r(), i=1..20)];
SmartPrint(L, 10)

[60, 1100, 13, 1, 110, 66, 2, 2, 101, 1008, 28, 1038, 100, 3, 8, 33, 1161, 4, 50, 101]

 

60, 1100,            (length = 9)
13, 1, 110,          (length = 11)
66, 2, 2,            (length = 9)
101, 1008,           (length = 10)
28, 1038,            (length = 9)
100, 3, 8,           (length = 10)
33, 1161,            (length = 9)
4, 50, 101           (length = 10)

 

 


 

Download SmartPrint.mw
