Axel Vogt


MaplePrimes Activity


These are replies submitted by Axel Vogt

I do not think so: it prevents bots, but not humans - say, people working for 1 EUR/h in some countries.

Maybe Marketing is more afraid of not getting new users than of vexing members.

There are various ways. For example, users with score=0 could be put on hold for review. Or: e-mail systems have good spam filters - so generating an e-mail from an intended post and sending it to a local mail server for analysis should cover 95% or so ...

@vv 

I meant F(z) + Int( sqrt(z-linear) * y(sigma), sigma ): does the approach 'depend' on the sqrt?

@vv

Is it some general "recipe" (since a method is given for it), or does it 'just' work for sqrt?

Likewise you can solve the system for the polynomial of minimal degree n-1 through n integer points. If I remember correctly, there is some general result that shows it maps integers to integers; see the sketch below.
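A minimal sketch of what I mean (the data are hypothetical, and I assume consecutive integer nodes, where the Newton form has integer divided differences):

  # hypothetical integer values at the consecutive nodes 0, 1, 2, 3
  pts := [[0, 1], [1, 2], [2, 5], [3, 16]]:
  # the polynomial of minimal degree n-1 = 3 through the n = 4 points
  p := CurveFitting:-PolynomialInterpolation(pts, x);
  # spot check: it takes integer values at further integers as well
  seq(eval(p, x = k), k = -3 .. 6);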

A question might be: what is expected when varying the constant w (and over what field)?

@Kitonum: I once filed this away from Wikipedia (which is terribly unstable on some topics),
https://en.wikipedia.org/wiki/Polynomial#Solving_polynomial_equations:

It has been shown by Richard Birkeland and Karl Mayr that the roots of any
polynomial may be expressed in terms of multivariate hypergeometric functions.

Ferdinand von Lindemann and Hiroshi Umemura showed that the roots may also be
expressed in terms of Siegel modular functions, generalizations of the theta
functions that appear in the theory of elliptic functions. These characterisations
of the roots of arbitrary polynomials are generalisations of the methods previously
discovered to solve the quintic equation.


R. Birkeland. Über die Auflösung algebraischer Gleichungen durch hypergeometrische
Funktionen. Mathematische Zeitschrift, vol. 26 (1927), pp. 565–578.

K. Mayr. Über die Auflösung algebraischer Gleichungssysteme durch hypergeometrische
Funktionen. Monatshefte für Mathematik und Physik, vol. 45 (1937), pp. 280–313.

Umemura's result is in some lectures of Hartshorne or Mumford (the Tata lectures?), IIRC.

But practically ...

There are (at least) 5 zeros on the diagonal:

g(x+x*I);
gg:=unapply(%, x);
gg(x):
tmp:=[Re(%), Im(%)];
plot(tmp, x=-2 .. 2);
plot(tmp, x=-1.01 .. 1.01); # Diff is very steep ...

fsolve(gg(x), x=1, complex);  

                       0.999999986309701 - 0. I

gg(%);
                              0. + 0. I

'gg(x) = - gg(-x)';
is(%);
                           gg(x) = -gg(-x)
                                 true

It is odd (gg(x) = -gg(-x)), so it suffices to check small x with 0 < x:

plot(tmp, x=0 .. 0.05); # indicates another zero

fsolve(gg(x), x=0.04, complex);
gg(%);

                      0.0388684654358755 - 0. I

                  0.7*10^(-15) + 0.7*10^(-15)*I

Added: moreover, on the diagonal the function is real-valued:

eval(eq, beta = x+x*I);
evalc(Im(%)); combine(%); evalc(Im(%));

                                  0


20 <= j, I would say. I did it for 'large' beta to get an initial guess, since for small ones you already have code, and the locations can be seen in another poster's picture. The essential point is: there are infinitely many solutions, almost all close to the axes and obtained by rotating by roughly 90° (?), with the spacing as indicated. The rotation is not exact, but good enough as a starting location to find the zero (I edited my answer accordingly); you can 'see' it if you plot as suggested.

Edited: I think that if x is a solution, then so is -x, and also x*I - 2*Re(x*I) for those close to the axes. At least that seems a reasonable guess after checking the plot; a sketch of a check follows.
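A numerical sketch of such a check (assuming eq from this thread with L=10, d=1 as below, and starting from the diagonal zero found above; not a proof):

  F := unapply(eval(eq, [L = 10, d = 1]), beta):
  x0 := fsolve(F(z), z = 0.039 + 0.039*I, complex);  # zero near the diagonal
  abs(F(-x0)), abs(F(x0*I - 2*Re(x0*I)));            # both should be ~ 0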

 

And closer to zero there are 3 or 5 (?) solutions on the diagonal.

 eq:= beta*(1+cos(beta*L)*cosh(beta*L)) +
      I*d*(cosh(beta*L)*sin(beta*L) - sinh(beta*L)*cos(beta*L));
 
 convert(%, expln): simplify(%, size):         # rewrite trig/hyperbolics as exponentials
 eval(%, beta=z/L): %*L*4: simplify(%, size):  # substitute beta = z/L, then multiply by 4*L
 eval(%, d=s/L): simplify(%):                  # substitute d = s/L
 f:=(%);
 beta=z/L, d=s/L; z=beta*L, s=d*L;             # the substitutions and their inverses, for reference

 sol:={a[1]=RootOf(x^4+4, index=4), a[2]=RootOf(x^4+4, index=1),
       a[3]=RootOf(x^4+4, index=2), a[4]=RootOf(x^4+4, index=3)};

 'f' =
 'eval( sum( (a[j]*s+z)*exp(RootOf(x^4+4, index=j)*z), j=1 ..4)+4*z, sol)';
 allvalues(%):
 is(%);

                                 true
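For reference (my addition): the four roots of x^4 + 4 are exactly ±1 ± I, so f is built from the exponentials exp((±1±I)*z), which fits the roughly 90° rotation pattern of the zeros:

 [solve(x^4 + 4 = 0, x)];    # [1+I, 1-I, -1+I, -1-I], up to ordering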


To get more zeros:

  eq; subs(L=10, d=1, %);
  g:=unapply(%,beta);
 
  j:=200; # integer, not too small
  rng:='(1/2+j)*Pi/L .. (1/2+j + 1)*Pi/L';  # window of width Pi/L, the indicated spacing
  rng:=eval(rng, L=10);
  beta0:=RootFinding:-Analytic(g(z), re= rng, im= -1e-2..1);

The 'picture' given by another poster suggests you get the others by rotation (*):

  '[beta0, beta0*I, beta0*I^2,beta0*I^3]';

You may check these with RootFinding:-Analytic, e.g. by searching a small box around a rotated candidate; a sketch (the box size 0.1 is an ad-hoc choice):
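  cand := beta0*I:   # one of the rotated candidates
  RootFinding:-Analytic(g(z), re = Re(cand) - 0.1 .. Re(cand) + 0.1,
                              im = Im(cand) - 0.1 .. Im(cand) + 0.1);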

Edited: (*) roughly

NB: it is difficult to see that a point is a zero location just by applying g (for large beta the terms involved are huge).

Do you mean something like this ?

     diag      (f,g)      h
 R^1 ----> R^2 ----> R^2 ---> R

 h:=sqrt(1 + z) * (2 + w);
 mtaylor(%, [z=0,w=0], 3);     # multivariate Taylor up to total degree 2
 subs(z=f(x), w=g(x), %);      # compose with (f, g)

             2 + g(x) + f(x) + 1/2*f(x)*g(x) - 1/4*f(x)^2
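A cross-check (my sketch): introduce a scaling parameter t for the 'small' directions and expand in t instead:

 sqrt(1 + t*f(x)) * (2 + t*g(x)):
 series(%, t = 0, 3):
 eval(convert(%, polynom), t = 1);   # gives the same expression as above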

Int((((2/3)*t^3-2*t^2+t)/(t+m*((2/3)*t^3-2*t^2+t)))^2*(t+m*((2/3)*t^3-2*t^2+t)), t = 0 .. 1)+
Int(((-(2/3)*t^3+2*t^2-t-2/3)/(2-t+m*(-(2/3)*t^3+2*t^2-t-2/3)))^2*(2-t+m*(-(2/3)*t^3+2*t^2-t-2/3)), t = 1 .. 2):
value(%):
simplify(%) assuming -1 < m, m < 1:
simplify(%, size);

and

 

Int(-ln(t+m*((2/3)*t^3-2*t^2+t))*(t+m*((2/3)*t^3-2*t^2+t)), t = 0 .. 1)+
Int(-ln(2-t+m*(-(2/3)*t^3+2*t^2-t-2/3))*(2-t+m*(-(2/3)*t^3+2*t^2-t-2/3)), t = 1 .. 2):
value(%):
simplify(%) assuming -1 < m, m < 1:
simplify(%, size);

I have not checked the results.

Plot to "see" there are no imaginary results by the command

plot( [Re(%), Im(%)], m=-1 ..1, color = [red, blue]);

 

@Carl Love 

Carl, I used Digits=30, as in his uploaded sheet.

PS: The change of variables t = arcsin(x) - suggested in another thread by the poster - leads to a version that can be done quickly by Gauss-Kronrod (splitting at t=0), I think, since that method avoids numerical trouble at the endpoints (and the integrand becomes 'nice'). However, I needed higher precision (as above), and I do not have a stable, adaptive version beyond GK 7-15 (though Peter Stone's page should provide better ones).
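A sketch of the substitution step (F stands for the actual integrand, which I do not repeat here):

  J := Int(F(x), x = -1 .. 1):
  IntegrationTools:-Change(J, x = sin(t), t);
       # -> Int(F(sin(t))*cos(t), t = -Pi/2 .. Pi/2), to be split at t = 0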

@vv 

Increasing the precision for the integrand allows for smaller relative integration errors. The following modification of your approach works for me, giving a relative error as in double precision:

for j from 0 to N do 
for i from 0 to N do
  phi[i]*S[j]*w1; simplify(%, size);
  tmp1 := unapply(%, x);
  tmp2 := x -> evalf[80](tmp1(x));   # evaluate the integrand with 80 digits
  B[i+1, j+1] := evalf(Int(tmp2(x), x = -1 .. 1, epsilon = 1e-14, method = _Dexp));
end do; end do;

Is that for computing Gauss integration rules, or what is the reason for the question(s)?

@Adam Ledger 

Your writing is difficult to follow for non-native English speakers like me.

 

The command should be int(1/x^3, x=-1..1). And if you use
  Int(1/x^3, x=-1..1, CauchyPrincipalValue); value(%);
then you get what you perhaps want to see (the principal value, which is 0 by odd symmetry).

That sounds like improper usage.

 

What you want is usually called "de-compiling and re-use" for
modifying software. Are you aware of the copyright issues? In some
special cases Maple opens its secrets, but not always.

If you see a specific bug: just describe it.

I played with it in the following way, setting Digits:=10 (well, if 9.81 is 'exact', then ...) and L_1 = 2/10:

convert(f_2, rational): # which is your f_3
combine(%):
Int(%, s): expand(%): combine(%, exp):
V:=value(%):

eval(V, s=2/10) - eval(V, s=0):
simplify(%, size):
collect(%, W):
evalf(%); # length(%);

The result comes up in a short time, so the symbolics are not the issue, I would say.

And I agree: I also have some doubts that such a huge (= lengthy) expression is useful in numerics.
