
Gauss-Kronrod Quadrature

Overview

Gauss-Kronrod quadrature is an extension of Gaussian quadrature which provides an a posteriori error estimate for the integral.

The idea behind Gaussian quadrature is to choose n nodes and weights in such a way that polynomials of order 2n-1 are integrated exactly. However, integration of polynomials is trivial, so it is rarely done via numerical methods. Instead, transcendental and numerically defined functions are integrated via Gaussian quadrature, and the defining problem becomes how to estimate the remainder. Gaussian quadrature alone (without some form of interval splitting) cannot answer this question.

It is possible to compute a Gaussian quadrature of order n and another of order (say) 2n+1, and use the difference as an error estimate. However, this is not optimal, as the zeros of the Legendre polynomials (nodes of the Gaussian quadrature) are never the same for different orders, so 3n+1 function evaluations must be performed. Kronrod considered the problem of how to interleave nodes into a Gaussian quadrature in such a way that all previous function evaluations can be reused, while increasing the order of polynomials that can be integrated exactly. This allows an a posteriori error estimate to be provided while still preserving exponential convergence. Kronrod discovered that by adding n+1 nodes (computed from the zeros of the Legendre-Stieltjes polynomials) to a Gaussian quadrature of order n, he could integrate polynomials of order 3n+1.
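In standard notation (a sketch for orientation, not a formula taken from a specific reference), the n-point Gauss rule and its (2n+1)-point Kronrod extension on [-1, 1] take the form

   G_n[f] = \sum_{i=1}^{n} w_i f(x_i),    K_{2n+1}[f] = \sum_{i=1}^{n} a_i f(x_i) + \sum_{j=1}^{n+1} b_j f(\xi_j),

where the x_i are the Gauss nodes (zeros of the Legendre polynomial P_n), the ξ_j are the added Kronrod nodes (zeros of a Legendre-Stieltjes polynomial), and the a posteriori error estimate is |K_{2n+1}[f] - G_n[f]|.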

The integration routines provided here perform either adaptive or non-adaptive quadrature; they should be chosen for the integration of smooth functions with no end-point singularities. For difficult functions, or those with end-point singularities, please refer to the double-exponential integration schemes.

#include <boost/math/quadrature/gauss_kronrod.hpp>

template <class Real, unsigned N, class Policy = boost::math::policies::policy<> >
class gauss_kronrod
{
public:
   static const RandomAccessContainer& abscissa();
   static const RandomAccessContainer& weights();

   template <class F>
   static auto integrate(F f,
                         Real a, Real b,
                         unsigned max_depth = 15,
                         Real tol = tools::root_epsilon<Real>(),
                         Real* error = nullptr,
                         Real* pL1 = nullptr)->decltype(std::declval<F>()(std::declval<Real>()));
};

Description

static const RandomAccessContainer& abscissa();
static const RandomAccessContainer& weights();

These functions provide direct access to the abscissa values and weights used to perform the quadrature: the return type depends on the N template parameter (the number of points), but is always a RandomAccessContainer type. Note that only the positive (or zero) abscissa values and their weights are stored, and that they contain both the Gauss and Kronrod points.
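For instance, a minimal sketch of inspecting the stored nodes and weights of the 15-point rule might look like this:

#include <boost/math/quadrature/gauss_kronrod.hpp>
#include <iostream>

int main()
{
   using boost::math::quadrature::gauss_kronrod;
   // Only the non-negative abscissa values are stored; the rule is symmetric about zero.
   const auto& x = gauss_kronrod<double, 15>::abscissa();
   const auto& w = gauss_kronrod<double, 15>::weights();
   for (std::size_t i = 0; i < x.size(); ++i)
      std::cout << "node " << x[i] << ", weight " << w[i] << "\n";
}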

template <class F>
static auto integrate(F f,
                      Real a, Real b,
                      unsigned max_depth = 15,
                      Real tol = tools::root_epsilon<Real>(),
                      Real* error = nullptr,
                      Real* pL1 = nullptr)->decltype(std::declval<F>()(std::declval<Real>()));

Performs adaptive Gauss-Kronrod quadrature on function f over the range (a,b).

max_depth sets the maximum number of interval splittings permitted before stopping. Set this to zero for non-adaptive quadrature. Note that the algorithm descends the tree depth first, so only "difficult" areas of the integral result in interval splitting.

tol sets the maximum relative error in the result; this should not be set too close to machine epsilon or the function will simply thrash and probably not return accurate results. On the other hand, the default may be overly pessimistic.

error When non-null, *error is set to the difference between the (N-1)/2-point Gauss approximation and the N-point Gauss-Kronrod approximation.

pL1 When non-null, *pL1 is set to the L1 norm of the result; if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned.
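As an illustration, here is a minimal sketch that exercises all of these parameters (the integrand is an arbitrary smooth function chosen for this example, not taken from the library documentation):

#include <boost/math/quadrature/gauss_kronrod.hpp>
#include <cmath>
#include <iostream>

int main()
{
   using boost::math::quadrature::gauss_kronrod;
   // Example integrand: a smooth function on [0, 1].
   auto f = [](double x) { return std::cos(x) / (1 + x * x); };
   double error, L1;
   // Adaptive quadrature: up to 15 levels of interval splitting, relative tolerance 1e-9.
   double Q = gauss_kronrod<double, 15>::integrate(f, 0.0, 1.0, 15, 1e-9, &error, &L1);
   std::cout << "Q = " << Q
             << ", error estimate = " << error
             << ", L1/|Q| = " << L1 / std::abs(Q) << "\n";  // a ratio near 1 indicates a well-conditioned result
}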

Choosing the number of points

The number of points specified in the N template parameter must be an odd number: this gives an (N-1)/2-point Gauss quadrature as the comparison for error estimation.

Internally class gauss_kronrod has pre-computed tables of abscissa values and weights for 15, 31, 41, 51 and 61 Gauss-Kronrod points at up to 100-decimal-digit precision. That means that using, for example, gauss_kronrod<double, 31>::integrate incurs absolutely zero set-up overhead from computing the abscissa/weight pairs. When using multiprecision types with less than 100 digits of precision, there is a small one-time initial cost while the abscissa/weight pairs are constructed from strings.

However, for types with higher precision, or numbers of points other than those given above, the abscissa/weight pairs are computed when first needed and then cached for future use, which does incur a noticeable one-time overhead. If this is likely to be an issue, the rule can be primed once during program start-up (for example by calling abscissa() for the type and point count in question) so that the cost is not paid inside time-critical code.
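As an illustration (a sketch, not taken from the library's own examples), here is an integral evaluated at 50 decimal digits with the cached 61-point rule; since 50 is below the 100-digit limit, only the small one-time string-parsing cost applies:

#include <boost/math/quadrature/gauss_kronrod.hpp>
#include <boost/multiprecision/cpp_bin_float.hpp>
#include <iomanip>
#include <iostream>
#include <limits>

int main()
{
   using boost::multiprecision::cpp_bin_float_50;
   using boost::math::quadrature::gauss_kronrod;

   // Integrate exp(-t*t/2) over (0, +inf) at 50 decimal digits.
   // The explicit return type avoids returning a dangling expression template.
   auto f = [](const cpp_bin_float_50& t) -> cpp_bin_float_50 { return exp(-t * t / 2); };
   // 61-point rule: the abscissa/weight pairs are parsed from pre-computed strings on first use.
   cpp_bin_float_50 Q = gauss_kronrod<cpp_bin_float_50, 61>::integrate(
      f, cpp_bin_float_50(0), std::numeric_limits<cpp_bin_float_50>::infinity());
   std::cout << std::setprecision(50) << Q << "\n";  // expect sqrt(pi/2)
}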

Complex Quadrature

The Gauss-Kronrod quadrature supports integrands defined on the real line and returning complex values. In this case, the template argument is the real type, and the complex type is deduced via the return type of the function.
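For example, a minimal sketch integrating exp(it) over (0, π) (chosen purely for illustration; the exact answer is 2i) might look like this:

#include <boost/math/quadrature/gauss_kronrod.hpp>
#include <boost/math/constants/constants.hpp>
#include <complex>
#include <iostream>

int main()
{
   using boost::math::quadrature::gauss_kronrod;
   // Real argument, complex return value: the complex type is deduced from the lambda.
   auto f = [](double t) { return std::exp(std::complex<double>(0.0, t)); };
   std::complex<double> Q = gauss_kronrod<double, 15>::integrate(
      f, 0.0, boost::math::constants::pi<double>());
   std::cout << Q << "\n";  // expect approximately (0, 2)
}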

Examples

We'll begin by integrating exp(-t²/2) over (0,+∞) using a 7-term Gauss rule and a 15-term Kronrod rule, defining the function to integrate as a C++ lambda expression:

using namespace boost::math::quadrature;

auto f1 = [](double t) { return std::exp(-t*t / 2); };

We'll start off with a one-shot (i.e. non-adaptive) integration, and keep track of the estimated error:

double error;
double Q = gauss_kronrod<double, 15>::integrate(f1, 0, std::numeric_limits<double>::infinity(), 0, 0, &error);

This yields Q = 1.25348207361, which has an absolute error of 1e-4 compared to the estimated error of 5e-3: this is fairly typical, with the difference between the Gauss and Gauss-Kronrod schemes being much higher than the actual error. Before moving on to adaptive quadrature, let's try again with more points, in fact with the largest Gauss-Kronrod scheme we have cached (30/61):

Q = gauss_kronrod<double, 61>::integrate(f1, 0, std::numeric_limits<double>::infinity(), 0, 0, &error);

This yields an absolute error of 3e-15 against an estimate of 1e-8, which is about as good as we're going to get at double precision.

However, instead of continuing with ever more points, let's switch to adaptive integration, and set the desired relative error to 1e-14 against a maximum depth of 5:

Q = gauss_kronrod<double, 15>::integrate(f1, 0, std::numeric_limits<double>::infinity(), 5, 1e-14, &error);

This yields an actual error of zero, against an estimate of 4e-15. In fact, in this case the requested tolerance was almost certainly set too low: as we've seen above, for smooth functions, the precision achieved is often double that of the estimate, so if we integrate with a tolerance of 1e-9:

Q = gauss_kronrod<double, 15>::integrate(f1, 0, std::numeric_limits<double>::infinity(), 5, 1e-9, &error);

We still achieve 1e-15 precision, with an error estimate of 1e-10.
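For reference, a complete, self-contained version of the walk-through above (assembled here for convenience; the printed values may differ slightly from those quoted) might read:

#include <boost/math/quadrature/gauss_kronrod.hpp>
#include <cmath>
#include <iostream>
#include <limits>

int main()
{
   using namespace boost::math::quadrature;

   // Exact value is sqrt(pi/2) = 1.2533141373155...
   auto f1 = [](double t) { return std::exp(-t * t / 2); };
   const double inf = std::numeric_limits<double>::infinity();

   double error;
   // Non-adaptive 15-point rule.
   double Q = gauss_kronrod<double, 15>::integrate(f1, 0, inf, 0, 0, &error);
   std::cout << "15-point: Q = " << Q << ", error estimate = " << error << "\n";

   // Non-adaptive 61-point rule.
   Q = gauss_kronrod<double, 61>::integrate(f1, 0, inf, 0, 0, &error);
   std::cout << "61-point: Q = " << Q << ", error estimate = " << error << "\n";

   // Adaptive 15-point rule, tolerance 1e-9, maximum depth 5.
   Q = gauss_kronrod<double, 15>::integrate(f1, 0, inf, 5, 1e-9, &error);
   std::cout << "adaptive: Q = " << Q << ", error estimate = " << error << "\n";
}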

Caveats

For essentially all analytic integrands bounded on the domain, the error estimates provided by the routine are woefully pessimistic. In fact, in this case the estimated error is very nearly equal to the error of the Gaussian quadrature formula of order (N-1)/2, whereas the Kronrod extension converges exponentially beyond the Gaussian estimate. Very sophisticated methods exist to estimate the error, but all require the integrand to lie in a particular function space. A more sophisticated a posteriori error estimate for an element of a particular function space is left to the user.

These routines are deliberately kept relatively simple: when they work, they work very well and very rapidly. However, no effort has been made to make these routines work well with end-point singularities or other "difficult" integrals. In such cases please use one of the double-exponential integration schemes which are generally much more robust.
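For instance (a sketch assuming the tanh_sinh double-exponential integrator described elsewhere in this library), an endpoint singularity such as 1/sqrt(x) on (0, 1) is better handled like this:

#include <boost/math/quadrature/tanh_sinh.hpp>
#include <cmath>
#include <iostream>

int main()
{
   using boost::math::quadrature::tanh_sinh;
   // Integrand with an integrable singularity at x = 0; exact value is 2.
   auto f = [](double x) { return 1.0 / std::sqrt(x); };
   tanh_sinh<double> integrator;
   double Q = integrator.integrate(f, 0.0, 1.0);
   std::cout << Q << "\n";  // expect approximately 2
}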
