
tanh_sinh

#include <boost/math/quadrature/tanh_sinh.hpp>

template<class Real>
class tanh_sinh
{
public:
    tanh_sinh(size_t max_refinements = 15, const Real& min_complement = tools::min_value<Real>() * 4);

    template<class F>
    auto integrate(const F f, Real a, Real b,
                   Real tolerance = tools::root_epsilon<Real>(),
                   Real* error = nullptr,
                   Real* L1 = nullptr,
                   std::size_t* levels = nullptr)->decltype(std::declval<F>()(std::declval<Real>())) const;

    template<class F>
    auto integrate(const F f,
                   Real tolerance = tools::root_epsilon<Real>(),
                   Real* error = nullptr,
                   Real* L1 = nullptr,
                   std::size_t* levels = nullptr)->decltype(std::declval<F>()(std::declval<Real>())) const;

};

The tanh-sinh quadrature routine provided by Boost is a rapidly convergent numerical integration scheme for holomorphic integrands. By this we mean that the integrand is the restriction to the real line of a complex-differentiable function which is bounded on the interior of the unit disk |z| < 1, so that it lies within the so-called Hardy space. If your integrand obeys these conditions, it can be shown that tanh-sinh integration is optimal, in the sense that, for a random element of the Hardy space, it requires the fewest function evaluations of any quadrature algorithm to achieve a given accuracy. A basic example of how to use the tanh-sinh quadrature is shown below:

tanh_sinh<double> integrator;
auto f = [](double x) { return 5*x + 7; };
// Integrate over native bounds of (-1,1):
double Q = integrator.integrate(f);
// Integrate over (0,1.1) instead:
Q = integrator.integrate(f, 0.0, 1.1);

The basic idea of tanh-sinh quadrature is that a variable transformation can cause the endpoint derivatives to decay rapidly. When the derivatives at the endpoints decay much faster than the Bernoulli numbers grow, the Euler-Maclaurin summation formula tells us that simple trapezoidal quadrature converges faster than any power of h. This means the number of correct digits in the result should roughly double with each new level of integration (each halving of h); hence the default termination condition for integration is usually set to the square root of machine epsilon. Most well-behaved integrals should converge to full machine precision with this termination condition, in 6 or fewer levels at double precision, or 7 or fewer levels at quad precision.
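As a minimal sketch (the integrand, tolerance, and refinement count below are illustrative, not taken from the text above), both the maximum number of levels and the termination condition can be adjusted:

#include <boost/math/quadrature/tanh_sinh.hpp>
#include <cmath>

// Allow up to 20 refinement levels rather than the default 15:
boost::math::quadrature::tanh_sinh<double> integrator(20);
auto f = [](double x) { return std::exp(-x) * std::cos(x); };
// Terminate once successive levels agree to roughly 1e-9 (relative to the L1 norm):
double Q = integrator.integrate(f, 0.0, 1.0, 1e-9);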

One very nice property of tanh-sinh quadrature is that it can handle singularities at the endpoints of the integration domain. For instance, the following integrand, singular at both endpoints, can be efficiently evaluated to 100 binary digits:

auto f = [](Real x) { return log(x)*log1p(-x); };
Real Q = integrator.integrate(f, (Real) 0, (Real) 1);
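For illustration, one way such a high-precision evaluation might be set up is with a Boost.Multiprecision type; the choice of cpp_bin_float_100 and the use of boost::math::log1p below are assumptions made for this sketch rather than anything prescribed above:

#include <boost/math/quadrature/tanh_sinh.hpp>
#include <boost/math/special_functions/log1p.hpp>
#include <boost/multiprecision/cpp_bin_float.hpp>

using Real = boost::multiprecision::cpp_bin_float_100;   // ~100 decimal digits

boost::math::quadrature::tanh_sinh<Real> integrator;
// log is found by ADL for multiprecision arguments; boost::math::log1p also
// accepts them.  The exact value of this integral is 2 - pi^2/6, which can be
// used to check the result.
auto f = [](Real x) { return log(x) * boost::math::log1p(-x); };
Real Q = integrator.integrate(f, Real(0), Real(1));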

Now onto the caveats: as stated before, the integrands must lie in a Hardy space to ensure rapid convergence. Attempting to use tanh-sinh to integrate a function which is not bounded on the unit disk can lead to very slow convergence. For example, take the Runge function:

auto f1 = [](double t) { return 1/(1+25*t*t); };
Q = integrator.integrate(f1, (double) -1, (double) 1);

This function has poles at ± ⅈ/5, and as such it is not bounded on the unit disk. However, the related function

auto f2 = [](double t) { return 1/(1+0.04*t*t); };
Q = integrator.integrate(f2, (double) -1, (double) 1);

has poles outside the unit disk (at ± 5ⅈ), and is therefore in the Hardy space. Our benchmarks show that the second integration is performed 22x faster than the first! If you do not understand the structure of your integrand in the complex plane, do performance testing before deployment.

Like the trapezoidal quadrature, the tanh-sinh quadrature produces an estimate of the L1 norm of the integral along with the requested integral. This is to establish a scale against which to measure the tolerance, and to provide an estimate of the condition number of the summation. This can be queried as follows:

tanh_sinh<double> integrator;
auto f = [](double x) { return 5*x + 7; };
double termination = std::sqrt(std::numeric_limits<double>::epsilon());
double error;
double L1;
size_t levels;
double Q = integrator.integrate(f, 0.0, 1.0, termination, &error, &L1, &levels);
double condition_number = L1/std::abs(Q);

If the condition number is large, the computed integral is worthless: typically one can assume that Q has lost roughly n digits of precision when the condition number is of order 10^n. The returned error term is not the actual error in the result, but merely an a posteriori error estimate: it is the absolute difference between the last two approximations, and for well-behaved integrals the actual error should be very much smaller than this. The following table illustrates how the errors and conditioning vary for a few sample integrals; in each case the termination condition was set to the square root of epsilon, and all tests were conducted in double precision:

Integral                                                        | Range   | Error   | Actual measured error | Levels | Condition Number | Comments
----------------------------------------------------------------|---------|---------|-----------------------|--------|------------------|---------
5 * x + 7                                                       | (0,1)   | 3.5e-15 | 0                     | 5      | 1                | This trivial case shows just how accurate these methods can be.
log(x) * log(x)                                                 | (0,1)   | 0       | 0                     | 5      | 1                | This is an example of an integral that Gaussian integrators fail to handle.
exp(-x) / sqrt(x)                                               | (0,+∞)  | 8.0e-10 | 1.1e-15               | 5      | 1                | Gaussian integrators typically fail to handle the singularities at the endpoints of this one.
x * sin(2 * exp(2 * sin(2 * exp(2 * x))))                       | (-1,1)  | 7.2e-16 | 4.9e-17               | 9      | 1.89             | This is a truly horrible integral that oscillates wildly and unpredictably, with some very sharp "spikes" in its graph. The higher number of levels used reflects the difficulty of sampling the more extreme features.
x == 0 ? 1 : sin(x) / x                                         | (-∞,∞)  | 3.0e-1  | 4.0e-1                | 15     | 159              | This highly oscillatory integral isn't handled at all well by tanh-sinh quadrature: there is so much cancellation in the sum that the result is essentially worthless. The argument transformation of the infinite integral behaves somewhat badly as well; in fact we do slightly better integrating over two symmetrical, large, finite limits.
sqrt(x / (1 - x * x))                                           | (0,1)   | 1e-8    | 1e-8                  | 5      | 1                | This is an example of an integral that has all its area close to a non-zero endpoint; the problem here is that the function being integrated returns "garbage" values for x very close to 1. We can easily fix this issue by passing a 2-argument functor to the integrator: the second argument gives the distance to the nearest endpoint, and we can use that information to return accurate values and thus fix the integral calculation.
x < 0.5 ? sqrt(x) / sqrt(1 - x * x) : sqrt(x / ((x + 1) * xc))  | (0,1)   | 0       | 0                     | 5      | 1                | This is the 2-argument version of the previous integral; the second argument xc is 1-x in this case, and we use 1-x² == (1-x)(1+x) to calculate 1-x² with greater accuracy (a sketch of this 2-argument form follows the table).
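To make the last two table entries concrete, here is a sketch of the 2-argument form they describe: the second lambda argument is the distance to the nearest endpoint, and the lambda body mirrors the expression in the final row (the surrounding setup is illustrative):

#include <boost/math/quadrature/tanh_sinh.hpp>
#include <cmath>

using std::sqrt;

boost::math::quadrature::tanh_sinh<double> integrator;
// xc is the distance from x to the nearest endpoint of (0,1): x on the left half,
// 1-x on the right half.  Using it avoids the catastrophic cancellation in
// 1 - x*x when x is very close to 1.
auto f = [](double x, double xc)
{
   return x < 0.5 ? sqrt(x) / sqrt(1 - x * x) : sqrt(x / ((x + 1) * xc));
};
double Q = integrator.integrate(f, 0.0, 1.0);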

Although the tanh-sinh quadrature can compute integrals over infinite domains by variable transformations, these transformations can create a very poorly behaved integrand. For this reason, double-exponential variable transformations tailored to such domains have also been provided: the exp-sinh quadrature for semi-infinite intervals and the sinh-sinh quadrature for the whole real line, both of which allow stable computation over infinite domains. A minimal usage sketch follows.
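As a sketch of these alternatives (the integrands chosen here are illustrative; see the exp_sinh and sinh_sinh documentation for their full interfaces):

#include <boost/math/quadrature/exp_sinh.hpp>
#include <boost/math/quadrature/sinh_sinh.hpp>
#include <cmath>

// exp_sinh integrates over (0, +∞) by default:
boost::math::quadrature::exp_sinh<double> exp_sinh_integrator;
auto g = [](double x) { return std::exp(-x) / std::sqrt(x); };
double Q1 = exp_sinh_integrator.integrate(g);

// sinh_sinh integrates over the whole real line (-∞, +∞):
boost::math::quadrature::sinh_sinh<double> sinh_sinh_integrator;
auto h = [](double x) { return std::exp(-x * x); };
double Q2 = sinh_sinh_integrator.integrate(h);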

Complex integrals

The tanh_sinh integrator supports integration of functions which return complex results; for example the sine-integral Si(z) has the integral representation

    Si(z) = π/2 − ∫₀^(π/2) e^(−z·cos t) · cos(z·sin t) dt

which we can code up directly as:

#include <boost/math/quadrature/tanh_sinh.hpp>
#include <boost/math/constants/constants.hpp>

template <class Complex>
Complex Si(Complex z)
{
   typedef typename Complex::value_type value_type;
   using std::sin;  using std::cos; using std::exp;
   auto f = [&z](value_type t) { return -exp(-z * cos(t)) * cos(z * sin(t)); };
   boost::math::quadrature::tanh_sinh<value_type> integrator;
   return integrator.integrate(f, 0, boost::math::constants::half_pi<value_type>()) + boost::math::constants::half_pi<value_type>();
}
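For illustration only (this usage snippet is not part of the original documentation), the Si template above might be invoked with std::complex arguments like so:

#include <complex>
#include <iostream>

std::complex<double> z(1.0, 1.0);
std::cout << Si(z) << std::endl;   // sine-integral evaluated at 1 + i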
