The most trivial task in option pricing seems to be computing values through the Black-Scholes formula: type in the formula, feed Maple with data, done. The same goes for other computational environments like C programs or Excel (assuming a good implementation of the cumulative normal distribution). Really? What about the limiting cases, or getting close to them? How about small volatility (say below 10%, as in FX trading) or short expiry times (say a few weeks)? And is it really no problem to back out volatility from prices to fit models?

First I give an example showing that with the common formula even Maple quickly runs into numerical errors. That can be avoided by decomposing calls and puts into their so-called intrinsic value (the discounted pay-off) and their premium (what has to be paid beyond that, the actual 'speculation'). The premium can be written in a simple way, the same for puts and calls, after passing to a normed situation; the variables are then the 'logarithmic moneyness' and the standard deviation = volatility * sqrt(time).

Staring long enough at that formula, and playing with it even longer, the idea is to re-scale it in a way similar to how the Mills ratio is related to the cumulative normal distribution and, since the main motivation was small volatility, to try something like a Taylor series at volatility = 0. At first the result does not look promising, it is quite complicated. But using the packages 'PDEtools' and 'gfun' one can actually work out a law that computes the Taylor coefficients through a 4-term recursion. After a while one sees that a simple coding is possible, and testing shows it to be both stable and fast, and a bit better than the reformulation just mentioned. An implementation as a C DLL shows that one can even gain one more decimal digit of precision compared to simulating that with evalhf.

The uploaded zip file contains the worksheet and the DLL (no virus, according to my scanner ...); a pdf would be too large due to the graphics. A few rough sketches of the individual steps are appended below.

Download 102_tiny_vola_primes.zip
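As a first illustration, here is a minimal sketch of the plain textbook formula and of the cancellation that eats the premium for small sigma*sqrt(time). This is my own toy code, not the code from the worksheet; the procedure name and the sample inputs are made up.

    # Textbook Black-Scholes call, normal CDF written via erf
    BS_call := proc(S, K, r, sigma, T)
        local N, d1, d2;
        N  := x -> (1 + erf(x/sqrt(2)))/2;
        d1 := (ln(S/K) + (r + sigma^2/2)*T)/(sigma*sqrt(T));
        d2 := d1 - sigma*sqrt(T);
        S*N(d1) - K*exp(-r*T)*N(d2);
    end proc:

    # In-the-money call with tiny sigma*sqrt(T): both products are close to the
    # intrinsic value, so subtracting it loses almost all digits of the premium.
    Digits := 10:
    evalf(BS_call(110, 100, 0.01, 0.05, 0.1) - (110 - 100*exp(-0.01*0.1)));  # mostly round-off noise
    Digits := 30:
    evalf(BS_call(110, 100, 0.01, 0.05, 0.1) - (110 - 100*exp(-0.01*0.1)));  # the tiny premium, now resolved

The particular numbers do not matter: any clearly in- or out-of-the-money option with small standard deviation runs into the same subtraction of nearly equal quantities.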
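The decomposition into intrinsic value and premium can be sketched as follows. I normalize the price by exp(-r*T)*sqrt(F*K), where F is the forward; that is one common way to arrive at a 'normed' premium in the log-moneyness and the standard deviation, and it may differ in details from the normalization used in the worksheet.

    # Normed premium (time value), the same for call and put:
    #   m = |ln(F/K)|      absolute logarithmic moneyness (F = forward, K = strike)
    #   s = sigma*sqrt(T)  standard deviation
    # The normed option price is the normed intrinsic value plus this premium.
    Phi := x -> (1 + erf(x/sqrt(2)))/2:
    premium_normed := (m, s) -> exp(-m/2)*Phi(-m/s + s/2) - exp(m/2)*Phi(-m/s - s/2):

    # At the money (m = 0) the premium collapses to erf(s/(2*sqrt(2)))
    evalf(premium_normed(0, 0.1) - erf(0.1/(2*sqrt(2))));   # essentially zero

For m > 0 and small s the two terms are still nearly equal, so this reformulation alone only partly removes the cancellation; that is where the series expansion comes in.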
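I cannot reproduce the worksheet's expansion or its 4-term recursion here, but the gfun step it relies on is of the following kind: hand gfun a list of series coefficients, let it guess a linear recurrence, and turn that recurrence into a procedure. As a stand-in list I use the coefficients of the asymptotic expansion of the Mills ratio (1, -1, 3, -15, ...), since that is the guiding analogy; the names c, u(n) and p are just illustrative.

    with(gfun):
    # Stand-in coefficients: (-1)^k*(2k-1)!!, written via (2k)!/(2^k*k!)
    c := [seq((-1)^k*(2*k)!/(2^k*k!), k = 0..9)];

    # Guess a linear recurrence satisfied by the listed coefficients
    rec := listtorec(c, u(n));

    # Turn the recurrence (with its initial conditions) into a procedure
    # that computes the coefficients one after another
    p := rectoproc(op(1, rec), u(n)):
    p(5);   # should give the k = 5 coefficient, (-1)^5*9!! = -945

In the worksheet the analogous recursion has four terms, and coding it directly gives the stable and fast evaluation mentioned above.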