I was reminded of this by another thread.

It is faster to add a large storage=sparse float[8] Matrix in-place into a new, empty storage=rectangular float[8] Matrix than it is to do that same conversion with the Matrix() or rtable() constructors.

Here's an example. First I'll do it with in-place Matrix addition, and then with a call to Matrix(). In each case I measure the elapsed time as well as the increase in bytes-allocated and bytes-used.

> with(LinearAlgebra):

> N := 500:
> A := RandomMatrix(N,'density'=0.1,
>                   'outputoptions'=['storage'='sparse',
>                                    'datatype'=float[8]]):

> st,ba,bu := time(),kernelopts(bytesalloc),kernelopts(bytesused):

> B := Matrix(N,'datatype'=float[8]):
> MatrixAdd(B,A,'inplace'=true):

> time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;
                            0.022, 2489912, 357907

And now, for a fair comparison, in a totally fresh session (not just a restart, but a brand new instance of Maple):

> with(LinearAlgebra):

> N := 500:
> A := RandomMatrix(N,'density'=0.1,
>                   'outputoptions'=['storage'='sparse',
>                                    'datatype'=float[8]]):

> st,ba,bu := time(),kernelopts(bytesalloc),kernelopts(bytesused):

> Matrix(A,'storage'='rectangular','datatype'=float[8]):

> time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;
                            8.969, 3800392, 483272

At size N=1000 the MatrixAdd way ends with,

> time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;
                            0.041, 8387072, 1105999

while the Matrix() call ends with,

> time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;
                          167.094, 11925368, 1518315

At size N=2000 I killed the session after half an hour, with the Matrix() call still incomplete.

The reason that I post this as a tip is that quite a lot of LinearAlgebra has no specialized sparse algorithms implemented. For hardware datatypes there are MatrixAdd, MatrixVectorMultiply, DotProduct, and LinearSolve. But almost all the other numerical linear algebra routines have to resort to copying the sparse-storage Matrix to a dense-storage Matrix; there is no other choice right now. And the LinearAlgebra routines which do that conversion usually do it by calling the Matrix() constructor.

Such a conversion from sparse to dense is sometimes called explosion, or scattering.

But if you didn't know all this, then you might just start sending your 6000x6000 float[8] storage=sparse Matrix off to lots of other LinearAlgebra routines, and encounter the above issue. Or you might try using the Matrix() constructor to do the conversion yourself.

So, until someone improves the Matrix and rtable constructors, the in-place MatrixAdd trick can help.
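For convenience, the trick can be wrapped up as a small procedure. This is just a sketch of mine (the name SparseToDense is my own, not a LinearAlgebra export, and it assumes a float[8] sparse Matrix as input, like A above):

> SparseToDense := proc(A::Matrix)
>   local m, n, B;
>   # row and column dimensions of the sparse input
>   m, n := LinearAlgebra:-Dimension(A);
>   # new zero-filled float[8] Matrix (dense rectangular storage by default)
>   B := Matrix(m, n, 'datatype'=float[8]);
>   # scatter A's stored entries into B via in-place addition
>   LinearAlgebra:-MatrixAdd(B, A, 'inplace'=true);
>   return B;
> end proc:

> C := SparseToDense(A):  # dense copy of the sparse A, ready for routines with no sparse support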

acer
