sursumCorda

MaplePrimes Activity


These are replies submitted by sursumCorda

@C_R Unexpectedly, Wolfram's marketing literature provides an extensive test suite for benchmarking performance on a significant subset of numerical computations.
Here is the accompanying source code: the_entire_test_suite.mpl

Note. Although Mathematica was faster in 813 of the suite's 828 tasks (covering different operations on different types of numerical data), the fairness of this document is doubtful: Maple's numeric computation environment follows the international IEEE 754 floating-point standard (though not IEEE Std 754-2019), which is thoroughly understood and documented (cf. numericrefs), whereas Mathematica uses a proprietary numeric model derived from something called “significance arithmetic”, which is claimed to be "an extremely powerful technique that offers many advantages" over floating-point arithmetic (see details of the implementation). Still, we may ignore such marketing fluff and focus only on those examples that can be run on different computers as a benchmark.
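Incidentally, any individual task of that kind can be re-timed locally. The following is only a minimal sketch of such a timing harness (the matrix product is a placeholder task of mine, not one taken from the suite):

M := LinearAlgebra:-RandomMatrix(500, 500, outputoptions = [datatype = float[8]]):
CodeTools:-Usage(M . M, iterations = 5);   # reports average CPU/real time and memory over 5 runs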

Another test suite can be found here: Exact computational linear algebra tools.

@MDD This can be done in much the same way: 
 

restart;

# aux0: the set of all monomials of total degree d in the given variables
aux0 := proc(d::nonnegint, bases::Or(list, set)(symbol))::set; local w; coeffs(expand(eval(convert(`+`, function, convert(bases, list))**d)), bases, 'w'); {w} end:

# aux1: the set of expanded pairwise products of the elements of the two sets
aux1 := (m::set, n::set) -> expand({eval}(MmaTranslator:-Mma:-Distribute(m*n, set, `*`), set = expand@``)):
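As a quick illustration of the two helpers (these outputs are what the definitions are intended to produce): aux0 collects all monomials of a given total degree in the chosen variables, and aux1 returns the expanded pairwise products of two sets.

aux0(2, {x, y, z});           # {x^2, x*y, x*z, y^2, y*z, z^2}
aux1({x - y}, {x^2, x*y});    # {x^3 - x^2*y, x^2*y - x*y^2}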

F := table([F1 = {x-y}, F2 = {-x*y*z+z^3, x*y^2+y*z^2-z^3}, F3 = {-x^2*y*z+z^4, x*y^3+y*z^3-z^4}]):

F[F2] union= aux1(F[F1], aux0(2, {x, y, z}));

{x*y^2-y^3, x^2*y-x*y^2, x^3-x^2*y, x*z^2-y*z^2, -x*y*z+z^3, x*y*z-y^2*z, x^2*z-x*y*z, x*y^2+y*z^2-z^3}

(1)

F[F3] union= aux1(F[F1], aux0(4, {x, y, z})) union aux1(F[F2], aux0(2, {x, y, z}));

{x*y^4-y^5, x^2*y^3-x*y^4, x^3*y^2-x^2*y^3, x^4*y-x^3*y^2, x^5-x^4*y, -x^2*y*z+z^4, x*z^4-y*z^4, -x*y*z^3+z^5, x*y*z^3-y^2*z^3, -x*y^2*z^2+y*z^4, x*y^2*z^2-y^3*z^2, -x*y^3*z+y^2*z^3, x*y^3*z-y^4*z, x^2*z^3-x*y*z^3, -x^2*y*z^2+x*z^4, x^2*y*z^2-x*y^2*z^2, -x^2*y^2*z+x*y*z^3, x^2*y^2*z-x*y^3*z, x^3*z^2-x^2*y*z^2, -x^3*y*z+x^2*z^3, x^3*y*z-x^2*y^2*z, x^4*z-x^3*y*z, x*y^3+y*z^3-z^4, x*y^2*z^2+y*z^4-z^5, x*y^3*z+y^2*z^3-y*z^4, x*y^4+y^3*z^2-y^2*z^3, x^2*y^2*z+x*y*z^3-x*z^4, x^2*y^3+x*y^2*z^2-x*y*z^3, x^3*y^2+x^2*y*z^2-x^2*z^3}

(2)

op(op(F));

[F1 = {x-y}, F2 = {x*y^2-y^3, x^2*y-x*y^2, x^3-x^2*y, x*z^2-y*z^2, -x*y*z+z^3, x*y*z-y^2*z, x^2*z-x*y*z, x*y^2+y*z^2-z^3}, F3 = {x*y^4-y^5, x^2*y^3-x*y^4, x^3*y^2-x^2*y^3, x^4*y-x^3*y^2, x^5-x^4*y, -x^2*y*z+z^4, x*z^4-y*z^4, -x*y*z^3+z^5, x*y*z^3-y^2*z^3, -x*y^2*z^2+y*z^4, x*y^2*z^2-y^3*z^2, -x*y^3*z+y^2*z^3, x*y^3*z-y^4*z, x^2*z^3-x*y*z^3, -x^2*y*z^2+x*z^4, x^2*y*z^2-x*y^2*z^2, -x^2*y^2*z+x*y*z^3, x^2*y^2*z-x*y^3*z, x^3*z^2-x^2*y*z^2, -x^3*y*z+x^2*z^3, x^3*y*z-x^2*y^2*z, x^4*z-x^3*y*z, x*y^3+y*z^3-z^4, x*y^2*z^2+y*z^4-z^5, x*y^3*z+y^2*z^3-y*z^4, x*y^4+y^3*z^2-y^2*z^3, x^2*y^2*z+x*y*z^3-x*z^4, x^2*y^3+x*y^2*z^2-x*y*z^3, x^3*y^2+x^2*y*z^2-x^2*z^3}]

(3)


 

Download 237017-How-Can-I-Do-A-Particular-Type-Of-Partitioning.mw

@Carl Love Many thanks. This version works now. I hope such functionality appears in ListTools in a future release (at present that package seems to pay attention only to `list` and `listlist` and largely neglects their generalizations).

@Carl Love Thanks. This procedure returns correct results with the default second parameter `max`. But I suspect there is some misunderstanding. Your procedure seems to check whether the input is an exactly depth-`max` rectangular nested list (am I right?), while my main purpose is to determine whether the input can be rectangular when considered as a nested list of depth up to `max`. (I'm not sure whether this is clear, as my English is poor.) 
Let me try to explain with some examples. If the second argument is not specified, `IsRectangular` simply determines whether a nested list is completely rectangular and yields its deepest "rectangular" level. (This is correct.) The two-argument form, however, should behave like this: 

IsRectangular([[[[1], 2], [3, 4]], [[5, 6], [7, 8]]], 2); 
# should return “true, 2”, since if we treat it as a depth-2 nested 
# list (that is, [[…, …], […, …]]), it becomes rectangular again.
IsRectangular([[([1], 2), [3, 4]], [[5, 6], [7, 8]]], 2); 
# should return “false, 1”, since here the level-2 part is ragged.

Besides, I think that to distinguish these two cases (i.e., `max` only, and 1 through `max`), one may consider using `IsRectangular(..., [max])` and `IsRectangular(..., max)`, respectively.
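To make the requested semantics concrete, here is a minimal sketch of my own (not the procedure above) of the two-argument behaviour: at every level from 1 to the given depth, all sibling sublists must have the same length, and anything deeper is ignored.

IsRectangularUpTo := proc(L::list, d::posint)
local level, current;
    current := [L];
    for level to d do
        # every node at this level must be a list, and all of them must have the same length
        if not andmap(type, current, list) or nops({op(map(nops, current))}) <> 1 then
            return false, level - 1
        end if;
        current := map(op, current)   # descend one level
    end do;
    true, d
end proc:

IsRectangularUpTo([[[[1], 2], [3, 4]], [[5, 6], [7, 8]]], 2);   # true, 2
IsRectangularUpTo([[[1], 2, [3, 4]], [[5, 6], [7, 8]]], 2);     # false, 1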

@dharr I don't know why the built-in degree is not purely syntactic. Sometimes applyrule (which works from the "structure", i.e. the pattern) seems to be more powerful instead.

@dharr For instance, 

powers(sqrt(y)*((x^y - 1)/y - ln(x)), x, y);
 = [[((x^y/x - 1/x)*x)/((x^y - 1)/y - ln(x)),
     (((x^y - 1)/y - ln(x))/(2*sqrt(y)) + sqrt(y)*(x^y*ln(x)/y - (x^y - 1)/y^2))*sqrt(y)/((x^y - 1)/y - ln(x))]]


But the output should be "[[0, 1/2]]".

@acer Thanks. Actually, I don't quite understand why the creators and designers chose such a rule; in my opinion, it is somewhat unnecessary. For instance, without automatic simplification, “(a-b)/3” would take only one `-` and one `/`, but since automatic simplification exists, it immediately becomes “a*(1/3)+b*((-1)/3)”, which is, in effect, less simple. Doesn't this just make some (numerical) evaluations more complicated and expensive?

The help page says that automatic simplification … cannot be controlled. Does this mean that it is always impossible to switch off some of the rules used by the automatic simplification process? (Although this may not be essential, it is not very convenient for so-called "live programming".)
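For instance, the automatically simplified form can be inspected directly: lprint shows the flattened sum-of-products form, and dismantle shows the internal DAG (its exact printout may vary between versions).

lprint((a - b)/3);       # 1/3*a-1/3*b
dismantle((a - b)/3);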

@minhthien2016 For some unknown reason, GraphTheory:-FindClique only returns the first clique it finds (by contrast, Mathematica's FindClique does attempt to find all cliques), so one has to enumerate all vertex subsets …. However, one may count the "130 results" manually:

restart;
# All sorted triples of nonnegative integers [a, b, c] with a^2 + b^2 + c^2 = (33*sqrt(3))^2 = 3267:
Sol := ListTools:-MakeUnique(sort~(ListTools:-FlattenOnce([seq(map((x, y) -> [x[], y], numtheory:-sum2sqr((33*sqrt(3))**2 - z^2), z), z = 0 .. trunc(sqrt((33*sqrt(3))**2/3)))]))):
# Expand to all coordinate permutations and sign patterns: every lattice point on the sphere of radius 33*sqrt(3), one point per row of the Matrix.
Sol := convert(convert(ArrayTools:-GeneralOuterProduct(ListTools:-FlattenOnce(combinat:-permute~(Sol)), `*`~, ListTools:-Flatten(eval(MmaTranslator:-Mma:-Distribute([[-1, 1] $ 3], list), list = `[]`), 3 - 1)), list), Matrix):
# Gram matrix: entry (i, j) is the dot product of points i and j; its diagonal holds the squared norms.
M__0 := Sol.Sol^%T:
M__1, M__2 := ArrayTools:-Replicate(LinearAlgebra:-Diagonal(M__0), 1, 416), ArrayTools:-Replicate(LinearAlgebra:-Diagonal(M__0)^%T, 416, 1):
# M__1 + M__2 - 2*M__0 holds the pairwise squared distances; keep the index pairs {i, j} whose squared
# distance is 8/3*(33*sqrt(3))^2, the squared edge length of a regular tetrahedron inscribed in this sphere.
G0 := GraphTheory:-Graph((`{}`@lhs)~(select[4](foldl, eval(apply), rcurry(`=`, 8/3*(33*sqrt(3))**2), rhs, op(2, M__1 + M__2 - 2*M__0)))):
# Split into connected components and check that they are all isomorphic.
G || (1 .. 26) := op(GraphTheory:-InducedSubgraph~(G0, GraphTheory:-ConnectedComponents(G0))):
andseq(GraphTheory:-IsIsomorphic(G || i, G || (i + 1)), i = 1 .. 25);
                              true

GraphTheory:-DrawGraph(G1, style = spring, stylesheet = "legacy");

Now we know that there are 5*26 = 130 tetrahedra in total.
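For the record, the "5 per component" count can be double-checked by brute force on any single (small) component; the following helper is only a sketch of mine, not part of the worksheet above:

CountTetrahedra := proc(G)
local V;
    V := {op(GraphTheory:-Vertices(G))};
    # a tetrahedron is a 4-subset of vertices all 6 of whose pairs are edges of G
    nops(select(S -> andmap(e -> GraphTheory:-HasEdge(G, e), combinat:-choose(S, 2)),
                combinat:-choose(V, 4)))
end proc:

CountTetrahedra(G1);   # expected to return 5, hence 5*26 = 130 in total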

@Carl Love Thanks. As for my StringTools-based version, I have found something strange. 

In the auxiliary procedure aux, I only use 

parse(String(seq('length(str), str[1]', str = temp)))

which turns out to be inefficient.
According to string buffer constructor, the StringBuffer constructor efficiently builds long strings in pieces by appending them one at a time, yet after changing the code to 

 local sb := StringBuffer();
 seq(sb:-appendf("%d%c", length(str), str[1]), str in temp);
 parse(sb:-value(clear))

the new version is still less efficient.
Nevertheless, if I simply replace the original String with cat, i.e., 

parse(cat(seq('length(str), str[1]', str = temp)))

(leaving everything else unchanged), the updated code immediately becomes unexpectedly efficient (nearly 3.5 s, which is very close to the Mma implementation's 2.5 s). 
Why is there such a mysterious performance difference between String and cat? The documentation only tells us that "unlike the cat function, String always returns a single string, never a name, unevaluated concatenation (||), or sequence", but that says nothing about performance. 
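To make the comparison reproducible, this is the kind of micro-benchmark I have in mind (a rough sketch only; `temp` here is a stand-in for the actual list of strings used in aux, and absolute timings will differ between machines):

temp := [seq(convert(i, string), i = 1 .. 10^5)]:
CodeTools:-Usage(String(seq('length(str), str[1]', str = temp))):   # the slow variant
CodeTools:-Usage(cat(seq('length(str), str[1]', str = temp))):      # the fast variant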

@Carl Love I believe that the fastest way is: 

if rtable_is_zero(A[2]) then
    …

Surprisingly, EqualEntries({entries(A[2], 'nolist')}, {0}) is also very fast.
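A quick way to compare the two tests (a sketch; the all-zero Array below is merely an illustrative stand-in for A[2]):

B := Array(1 .. 10^6):   # filled with exact zeros by default
CodeTools:-Usage(rtable_is_zero(B));
CodeTools:-Usage(EqualEntries({entries(B, 'nolist')}, {0}));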

@dharr Thanks. I'm astonished that your code actually beats mine (which manipulates text directly) in speed, since a number of routines in the StringTools package essentially make use of, and accordingly benefit from, compiled libraries, while your code doesn't even call any external functions. (Anyway, I still don't know why the uncompiled Mathematica implementation runs so fast. Maybe there will be a dramatic speed-up in some future release; let's look forward to it.)

@Axel Vogt Hmm, this means that one has to switch software (and ways of thinking) at each stage, which may slow down (or even waste) productive computing in certain cases. The help page notes that "since the importing of NAG routines into Maple is an ongoing project, future releases will include the ability to handle a larger selection of input shapes and storage modes". That sentence is about the LinearAlgebra package, but I would like to know whether future Maple releases will introduce more highly efficient optimization routines (since the add-on Global Optimization Toolbox cannot solve non-convex mixed-integer nonlinear problems either). 

@Axel Vogt I read that "the Optimization package solvers rely on a built-in library of optimization routines provided by the Numerical Algorithms Group". And according to Optimization : NAG FL Interface, "mixed integer nonlinear programming" is in fact supported, yet Maple's documentation simply states that assume = integer is "only accepted by the Optimization:-LPSolve command". This is strange.
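For reference, the documented integer support looks like this (a trivial linear example of mine, only to show the syntax the help page refers to):

Optimization:-LPSolve(2*x + 5*y, {3*x - y <= 1, x - y >= -5}, assume = integer, maximize);
# the maximum is 46, attained at x = 3, y = 8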

@vv Thanks. But the elapsed time is quite long; on my low-end computer, executing this code takes about six minutes. And strangely, the returned values are not the optimum: 

Actually, the original problem (not posted here, since it is even more expensive) involves twenty variables rather than nine. Is it possible to speed up the optimization (so that a medium-scale analogue can also be solved in an acceptable time)?
