Klausklabauter

22 Reputation

3 Badges

6 years, 314 days

MaplePrimes Activity


These are replies submitted by Klausklabauter

SOLUTION FOUND!

There is actually a way to get the decomposition in the new LDL^T form!

Try

with(Student[NumericalAnalysis]);

and the MatrixDecompositionTutor for LDL^T.


 

with(Student[NumericalAnalysis]):

A := Matrix(3, 3, {(1, 1) = m[1, 1], (1, 2) = m[1, 2], (1, 3) = m[1, 3], (2, 1) = m[2, 1], (2, 2) = m[2, 2], (2, 3) = m[2, 3], (3, 1) = m[3, 1], (3, 2) = m[3, 2], (3, 3) = m[3, 3]});

MatrixDecompositionTutor(A);

(The tutor returned the three factor matrices of the decomposition.)

Download LDLtCholeskyZerpflueckung.mw
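
In case it helps someone else, here is a minimal sketch (not part of the worksheet above; the numeric test matrix and the names L1, D1, U1 are my own) of checking the three matrices the tutor hands back:

with(Student[NumericalAnalysis]):
with(LinearAlgebra):
A := Matrix([[4, 2, 2], [2, 5, 3], [2, 3, 6]]):    # assumed s.p.d. example matrix
L1, D1, U1 := MatrixDecompositionTutor(A):         # interactive: choose LDL^T in the tutor window; assumed to return L, D, L^T as in the worksheet output
simplify(L1 . D1 . U1 - A);                        # should give the zero matrix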

@vv 

Our final goal is to obtain the LDL^T decomposition, nothing more :D. But as already said, Maple computes the other variant.

I don't know whether we are going to use the three extra lines of code to obtain the LDL^T variant.

Time is short, and the Cholesky part is only a very small portion of the big exam. So it is quite error-prone to memorize another three lines, and if you forget them you are f****.

Maybe it's faster just to do the sub-steps with Maple and do the LDL^T step manually.
Thanks anyway! :)
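
In case the "three extra lines" route gets used after all, here is one possible version; this is only my own sketch (the numeric example matrix is assumed), rescaling the columns of Maple's LL^T factor to get the unit-diagonal L and the diagonal D:

with(LinearAlgebra):
A := Matrix([[4, 2, 2], [2, 5, 3], [2, 3, 6]]):          # assumed s.p.d. example
L := LUDecomposition(A, method = 'Cholesky'):            # Maple's LL^T factor
n := RowDimension(A):
D1 := DiagonalMatrix([seq(L[i, i]^2, i = 1 .. n)]):      # D = diag(l[i,i]^2); the name D itself is protected in Maple
L1 := L . DiagonalMatrix([seq(1/L[i, i], i = 1 .. n)]):  # divide each column by its diagonal entry -> unit-diagonal L
L1 . D1 . Transpose(L1);                                 # reproduces A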

 

@Carl Love 


Oh, that's very kind of you, great work!

But we are not allowed to use third-party programs in Maple.

@acer 
We are working with this book in the lecture (German only):
https://link.springer.com/book/10.1007/978-3-540-76493-9


LDL^T tests whether the input matrix A is s.p.d. and aborts if it is not; that is one benefit.
If a square matrix is not s.p.d., Gaussian elimination has to be used.
Page 15 (German only, sorry; it is the lecture script by the author of the book):
https://www.igpm.rwth-aachen.de/Numa/NumaMB/SS17/handouts/Handout20170516.pdf


The LDL^T algorithm does no pivoting at all; I think you are referring to Gaussian elimination with pivoting?
Pivoting in LDL^T would destroy the symmetry. The diagonal entries of A have to be strictly positive, of course (page 11 in the handout).


The old variant LL^T is more expensive because it takes square roots, which should be avoided.
We never used the old method (the book does not even explain it), so we won't use it; we just know that it is worse than LDL^T.
That is why we are so confused that Maple uses the LL^T variant.
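
To make the difference concrete, here is a minimal sketch of the square-root-free LDL^T recursion itself (the procedure name LDLt and the numeric test matrix are my own, not taken from the course material; numeric entries are assumed):

LDLt := proc(A::Matrix)
    local n, L, d, i, j, k;
    n := LinearAlgebra:-RowDimension(A);
    L := Matrix(n, n);
    d := Vector(n);
    for j to n do
        L[j, j] := 1;                                         # L has a unit diagonal
        d[j] := A[j, j] - add(L[j, k]^2*d[k], k = 1 .. j-1);  # no square root anywhere
        if d[j] <= 0 then
            error "matrix is not symmetric positive definite";
        end if;
        for i from j+1 to n do
            L[i, j] := (A[i, j] - add(L[i, k]*d[k]*L[j, k], k = 1 .. j-1))/d[j];
        end do;
    end do;
    return L, LinearAlgebra:-DiagonalMatrix(d);
end proc:

A := Matrix([[4, 2, 2], [2, 5, 3], [2, 3, 6]]):               # assumed s.p.d. example
L1, D1 := LDLt(A):
L1 . D1 . LinearAlgebra:-Transpose(L1);                       # reproduces A

With a non-s.p.d. input the procedure stops at the first non-positive pivot, which is the abort behaviour described above.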

 

@Carl Love 

No, he is referring to the Cholesky decomposition.

We never had an introduction to Maple. We have to teach ourselves Maple.

Of course we could calculate everything by hand, but using Maple is much faster and should be preferred in the exam.

 

Thank you for the fast reply; we are going to try it with the additional calculations. :)

Well, okay. We were afraid of that: Maple uses the 'old' Cholesky decomposition.

We were taught not to use the old method, but we are 'forced' to use Maple in our upcoming exam.

Very conflicting...

@rlopez 

 

Oooh, of course. I used the first row instead of the column. My fault, thanks.

And I was wondering why I was getting these weird results.

 

@tomleslie

Okay, next time :)!
