...one of the most highly regarded and expertly designed C++ library projects in the world.

— Herb Sutter and Andrei Alexandrescu, C++ Coding Standards

A few things to keep in mind while using the tanh-sinh, exp-sinh, and sinh-sinh quadratures:

These routines are **very** aggressive about approaching the endpoint singularities. This allows many significant digits to be extracted, but also has a downside: roundoff error can cause the function to be evaluated at the endpoints themselves. A few ways to avoid this: narrow the bounds of integration to, say, [a + ε, b - ε]; make sure (a+b)/2 and (b-a)/2 are representable; and finally, if you think the compromise between accuracy and usability has gone too far in the direction of accuracy, file a ticket.

Both the exp-sinh and sinh-sinh quadratures evaluate the functions they are passed at **very** large arguments. You might understand that x^{12}exp(-x) should be zero when x^{12} overflows, but IEEE floating point arithmetic does not. Hence `std::pow(x, 12)*std::exp(-x)` is an indeterminate form whenever `std::pow(x, 12)` overflows. So make sure your functions have the correct limiting behavior; for example

```
auto f = [](double x) { double t = exp(-x); if (t == 0) { return 0.0; } return t*pow(x, 12); };
```

has the correct behavior for large *x*, but

```
auto f = [](double x) { return exp(-x)*pow(x, 12); };
```

does not.

Oscillatory integrals, such as the sinc integral, are poorly approximated by double-exponential quadrature. Fortunately, the error estimates and L1 norm are massive for these integrals, so the failure is easy to detect; nonetheless, oscillatory integrals require different techniques.

A special mention should be made about integrating through zero: while our range adaptors preserve precision when one endpoint is zero, things get harder when the origin is neither in the center of the range, nor at an endpoint. Consider integrating:

1 / (1 + x^2)

Over (a, ∞). As long as `a >= 0`, both the tanh_sinh and the exp_sinh integrators will handle this just fine; in fact they provide a rather efficient method for this kind of integral. However, if we have `a < 0` then we are forced to adapt the range in a way that produces abscissa values near zero that have an absolute error of ε, and since all of the area of the integral is near zero, both integrators thrash around trying to reach the target accuracy but never actually get there for `a << 0`. On the other hand, the simple expedient of breaking the integral into two domains, (a, 0) and (0, b), and integrating each separately using the tanh-sinh integrator works just fine.

Finally, some endpoint singularities are too strong to be handled by tanh_sinh or equivalent methods. For example, consider integrating the function:

```
double p = some_value;
tanh_sinh<double> integrator;
auto f = [&](double x){ return pow(tan(x), p); };
auto Q = integrator.integrate(f, 0, constants::half_pi<double>());
```

The first problem with this function is that the singularity is at π/2, so if we're integrating over (0, π/2) then we can never approach closer to the singularity than ε, and for p less than but close to 1, we need to get *very* close to the singularity to find all the area under the function. If we recall the identity `tan(π/2 - x) == 1/tan(x)`, then we can rewrite the function like this:

```
auto f = [&](double x){ return pow(tan(x), -p); };
```

And now the singularity is at the origin and we can get much closer to it when evaluating the integral: all we have done is swap the integral endpoints over.

This actually works just fine for p < 0.95, but after that the tanh_sinh integrator starts thrashing around and is unable to converge on the integral. The problem is actually a lack of exponent range: if we simply swap type double for something with a greater exponent range (an 80-bit long double or a quad precision type), then we can get to at least p = 0.99. If we want to go beyond that, or stick with type double, then we have to get smart.

The easiest method is to notice that for small x, `tan(x) ≅ x`, and so we are simply integrating x^{-p}. Therefore we can use this approximation over (0, small) and integrate numerically over (small, π/2); using ε as the crossover point seems sensible:

```
double p = some_value;
double crossover = std::numeric_limits<double>::epsilon();
tanh_sinh<double> integrator;
auto f = [&](double x){ return pow(tan(x), p); };
auto Q = integrator.integrate(f, crossover, constants::half_pi<double>())
         + pow(crossover, 1 - p) / (1 - p);
```

There is an alternative, more complex method, which is applicable when we are dealing with expressions which can be simplified by evaluating by logs. Let's suppose that, as in this case, all the area under the graph is infinitely close to zero; now imagine that we could expand that region out over a much larger range of abscissa values. That's exactly what happens if we perform argument substitution, replacing `x` by `exp(-x)` (note that we must also multiply by the derivative of `exp(-x)`). Now the singularity at zero is moved to +∞, and the π/2 bound to -log(π/2). Initially our argument-substituted function looks like:

```
auto f = [&](double x){ return exp(-x) * pow(tan(exp(-x)), -p); };
```

Which is hardly any better, as we still run out of exponent range just as before. However, if we replace `tan(exp(-x))` by `exp(-x)` for suitably small `exp(-x)`, and therefore `x > -log(ε)`, we can greatly simplify the expression and evaluate by logs:

```
auto f = [&](double x)
{
   static const double crossover = -log(std::numeric_limits<double>::epsilon());
   return x > crossover ? exp((p - 1) * x) : exp(-x) * pow(tan(exp(-x)), -p);
};
```

This form integrates just fine over (-log(π/2), +∞) using either the tanh_sinh or exp_sinh classes.