Cohen Acceleration

Synopsis
#include <boost/math/tools/cohen_acceleration.hpp>

namespace boost::math::tools {

template<class G>
auto cohen_acceleration(G& generator, int64_t n = -1);

} // namespaces

The function cohen_acceleration rapidly computes the limiting value of an alternating series via a technique developed by Henri Cohen et al. To compute

    S = \sum_{k=0}^{\infty} (-1)^k a_k

we first define a callable that produces a_k on the kth call. For example, suppose we wish to compute the alternating Basel series

    \sum_{k=0}^{\infty} \frac{(-1)^k}{(k+1)^2} = \frac{\pi^2}{12}

First, we need to define a callable which returns the requisite terms:

template<typename Real>
class G {
public:
    G(){
        k_ = 0;
    }

    // The kth call returns a_k = 1/(k+1)^2.
    Real operator()() {
        k_ += 1;
        // k_ is a Real, so this is floating-point division, not integer division:
        return 1/(k_*k_);
    }

private:
    Real k_;
};

Then we pass this into the cohen_acceleration function:

auto gen = G<double>();
double computed = cohen_acceleration(gen);

See cohen_acceleration.cpp in the examples directory for more.
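As a rough end-to-end sketch, the pieces above can be assembled into a complete program; the comparison against π²/12 and the use of boost::math::constants::pi are additions for illustration, not part of the library example:

#include <iostream>
#include <boost/math/tools/cohen_acceleration.hpp>
#include <boost/math/constants/constants.hpp>

using boost::math::tools::cohen_acceleration;

template<typename Real>
class G {
public:
    G(){
        k_ = 0;
    }

    Real operator()() {
        k_ += 1;
        return 1/(k_*k_);
    }

private:
    Real k_;
};

int main() {
    auto gen = G<double>();
    double computed = cohen_acceleration(gen);
    // The alternating Basel series sums to pi^2/12:
    double pi = boost::math::constants::pi<double>();
    double expected = pi*pi/12;
    std::cout << "computed: " << computed << "\nexpected: " << expected << "\n";
}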

The number of terms consumed is computed from the error model

    |S - S_n| \leq \frac{2}{(3 + \sqrt{8})^n} |S|

and must be computed a priori. If we read the reference carefully, we notice that this error model is derived under the assumption that the terms a_k are the moments of a positive measure on [0, 1]. If this assumption does not hold, then the number of terms chosen by the method is incorrect. Hence we permit the user to provide a second argument to specify the number of terms:

double computed = cohen_acceleration(gen, 5);
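For intuition, under the error model above the a priori term count for a target relative error ε satisfies 2/(3 + √8)^n ≈ ε. The helper below is only a sketch of that calculation; it is not the library's internal code, and estimated_terms is a hypothetical name:

#include <cmath>
#include <cstdint>

// Sketch: solve 2/(3 + sqrt(8))^n ~ eps for n (hypothetical helper, not part of Boost).
int64_t estimated_terms(double eps) {
    return static_cast<int64_t>(std::ceil(std::log(2/eps) / std::log(3 + std::sqrt(8.0))));
}

// For double precision (eps ~ 2.2e-16) this gives roughly 21 terms,
// i.e. about 1.3 terms per decimal digit of accuracy.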

Nota bene: When experimenting with this option, we found that adding more terms was no guarantee of additional accuracy, and we could not find an example where a user-provided number of terms outperformed the default. In addition, it is easy to generate intermediates which overflow if n grows too large. Hence we recommend using this parameter only to decrease the default number of terms and thereby increase speed.

Performance

To see that Cohen acceleration is in fact faster than naive summation for the same level of relative accuracy, we can run the reporting/performance/cohen_acceleration_performance.cpp file. This benchmark computes the alternating Basel series discussed above:

Running ./reporting/performance/cohen_acceleration_performance.x
Run on (16 X 2300 MHz CPU s)
CPU Caches:
  L1 Data 32 KiB (x8)
  L1 Instruction 32 KiB (x8)
  L2 Unified 256 KiB (x8)
  L3 Unified 16384 KiB (x1)
Load Average: 4.13, 3.71, 3.30
-----------------------------------------------------------------
Benchmark                                                    Time
-----------------------------------------------------------------
CohenAcceleration<float>                                  20.7 ns
CohenAcceleration<double>                                 64.6 ns
CohenAcceleration<long double>                             115 ns
NaiveSum<float>                                           4994 ns
NaiveSum<double>                                     112803698 ns
NaiveSum<long double>                               5009564877 ns

In fact, not only does the naive sum take orders of magnitude longer to compute, it is also less accurate.
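The gap is easy to explain: the truncation error of the naive alternating sum is bounded by the first omitted term, roughly 1/n², so reaching double-precision accuracy requires on the order of 10^8 terms, and long double requires billions. A sketch of such a naive loop (an illustration, not the benchmark's exact code) is:

#include <cmath>
#include <limits>

// Naive alternating sum of the same series; needs roughly 1/sqrt(eps) iterations.
template<typename Real>
Real naive_alternating_basel() {
    Real sum = 0;
    Real sign = 1;
    for (Real k = 1; ; k += 1) {
        Real term = 1/(k*k);
        // Stop once the next term can no longer change the sum:
        if (term < std::numeric_limits<Real>::epsilon()*sum) {
            break;
        }
        sum += sign*term;
        sign = -sign;
    }
    return sum;
}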

References

Henri Cohen, Fernando Rodriguez Villegas, and Don Zagier, Convergence Acceleration of Alternating Series, Experimental Mathematics 9:1 (2000), 3-12.
