Compute convergence rates of numerical integration methods.
Most numerical methods have a discretization parameter, call it n, such that if n increases (or decreases), the method performs better. Often, the relation between the error in the numerical approximation (compared with the exact analytical result) and n can be written as
E = C n^r,
where E is the error, and C and r are constants.
Suppose you have performed an experiment with a numerical method using discretization parameters n_0, n_1, ..., n_N. You have computed the corresponding errors E_0, E_1, ..., E_N in a test problem with an analytical solution. One way to estimate r goes as follows. For two successive experiments we have
E_{i-1} = C n_{i-1}^r

and

E_i = C n_i^r.
Divide the first equation by the second to eliminate C, and then take the logarithm to solve for r:

r = ln(E_{i-1}/E_i) / ln(n_{i-1}/n_i).
We can compute r for all pairs of two successive experiments. Usually, the "last r", corresponding to i = N in the formula above, is the "best" r value. Knowing r, we can compute C as

C = E_N n_N^{-r}.
Having stored the n_i and E_i values in two lists n and E, the following code snippet computes r and C:
from scitools.convergencerate import convergence_rate
C, r = convergence_rate(n, E)
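If scitools is not available, the same computation can be done by hand. The following is a minimal sketch that implements the two formulas above directly (the function name estimate_rate is ours, not part of any library):

from math import log

def estimate_rate(n, E):
    """Estimate C and r in the error model E = C*n^r from lists n and E."""
    # r_i = ln(E_{i-1}/E_i) / ln(n_{i-1}/n_i) for each pair of successive experiments
    r = [log(E[i-1]/E[i])/log(n[i-1]/n[i]) for i in range(1, len(n))]
    r_last = r[-1]              # the "last" (best) r, corresponding to i = N
    C = E[-1]*n[-1]**(-r_last)  # C = E_N*n_N^(-r)
    return C, r_last

A quick self-test with synthetic errors of known rate:

n = [2**k + 1 for k in range(2, 12)]
E = [0.5*ni**(-2.0) for ni in n]   # constructed so that r = -2 exactly
C, r = estimate_rate(n, E)         # gives C = 0.5 and r = -2 (up to rounding)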
Construct a test problem for integration where you know the analytical result of the integral. Run different numerical methods (the Midpoint method, the Trapezoidal method, Simpson’s method, Monte Carlo integration) with the number of evaluation points n = 2^k + 1 for k = 2, ..., 11, compute the corresponding errors, and use the code snippet above to compute the r value for the different methods in question. The larger the absolute value of r is, the faster the method converges to the exact result as n increases, and the better the method is. Which is the best and which is the worst method?
Let the program file import methods from the integrate module and the module with the Monte Carlo integration method from Exercise 9.14. Name of program file: integrators_convergence.py.
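A possible skeleton of the program is sketched below. The function names midpoint, trapezoidal, and simpson in the integrate module, the module and function name MCint from Exercise 9.14, and the common call signature method(f, a, b, n) are assumptions about those modules; adapt them to the actual interfaces. The test integral of e^x over [0, 1] is just one convenient choice of problem with a known analytical answer.

from math import exp
from scitools.convergencerate import convergence_rate
from integrate import midpoint, trapezoidal, simpson  # assumed function names
from MCint import MCint                               # assumed module/function from Exercise 9.14

def f(x):
    return exp(x)

a, b = 0, 1
I_exact = exp(1) - 1   # integral of e^x from 0 to 1, known analytically

for method in [midpoint, trapezoidal, simpson, MCint]:
    n_values = []
    E_values = []
    for k in range(2, 12):              # k = 2, ..., 11
        n = 2**k + 1                    # number of evaluation points
        approx = method(f, a, b, n)     # assumed common signature
        n_values.append(n)
        E_values.append(abs(approx - I_exact))
    C, r = convergence_rate(n_values, E_values)
    print('%-12s r = %5.2f' % (method.__name__ + ':', r))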