Boost C++ Libraries
"...one of the most highly regarded and expertly designed C++ library projects in the world." (Herb Sutter and Andrei Alexandrescu, C++ Coding Standards)


Log Gamma

#include <boost/math/special_functions/gamma.hpp>
namespace boost{ namespace math{

template <class T>
calculated-result-type lgamma(T z);

template <class T, class Policy>
calculated-result-type lgamma(T z, const Policy&);

template <class T>
calculated-result-type lgamma(T z, int* sign);

template <class T, class Policy>
calculated-result-type lgamma(T z, int* sign, const Policy&);

}} // namespaces

The lgamma function is defined by:

lgamma(z) = log|tgamma(z)|

The second form of the function takes a pointer to an integer, which if non-null is set on output to the sign of tgamma(z).

The final Policy argument is optional and can be used to control the behaviour of the function: how it handles errors, what level of precision to use etc. Refer to the policy documentation for more details.

The return type of these functions is computed using the result type calculation rules: the result is of type double if T is an integer type, or type T otherwise.


Accuracy

The following table shows the peak errors (in units of epsilon, ε) found on various platforms with various floating-point types, along with comparisons to various other libraries. Unless otherwise specified, any floating-point type that is narrower than the one shown will have effectively zero error.

Note that while the relative errors near the positive roots of lgamma are very low, the lgamma function has an infinite number of irrational roots for negative arguments: very close to these negative roots only a low absolute error can be guaranteed.

Table 8.3. Error rates for lgamma

Columns:
  [1] GNU C++ version 7.1.0, double
  [2] GNU C++ version 7.1.0, long double
  [3] Sun compiler version 0x5150, Sun Solaris, long double
  [4] Microsoft Visual C++ version 14.1, double

factorials
  [1] Max = 0ε (Mean = 0ε)   (GSL 2.1: Max = 33.6ε (Mean = 2.78ε))   (Rmath 3.2.3: Max = 1.55ε (Mean = 0.592ε))
  [2] Max = 0.991ε (Mean = 0.308ε)   (<cmath>: Max = 1.67ε (Mean = 0.487ε))   (<math.h>: Max = 1.67ε (Mean = 0.487ε))
  [3] Max = 0.991ε (Mean = 0.383ε)   (<math.h>: Max = 1.36ε (Mean = 0.476ε))
  [4] Max = 0.914ε (Mean = 0.175ε)   (<math.h>: Max = 0.958ε (Mean = 0.38ε))

near 0
  [1] Max = 0ε (Mean = 0ε)   (GSL 2.1: Max = 5.21ε (Mean = 1.57ε))   (Rmath 3.2.3: Max = 0ε (Mean = 0ε))
  [2] Max = 1.42ε (Mean = 0.566ε)   (<cmath>: Max = 0.964ε (Mean = 0.543ε))   (<math.h>: Max = 0.964ε (Mean = 0.543ε))
  [3] Max = 1.42ε (Mean = 0.566ε)   (<math.h>: Max = 0.964ε (Mean = 0.543ε))
  [4] Max = 0.964ε (Mean = 0.462ε)   (<math.h>: Max = 0.962ε (Mean = 0.372ε))

near 1
  [1] Max = 0ε (Mean = 0ε)   (GSL 2.1: Max = 442ε (Mean = 88.8ε))   (Rmath 3.2.3: Max = 7.99e+04ε (Mean = 1.68e+04ε))
  [2] Max = 0.948ε (Mean = 0.36ε)   (<cmath>: Max = 0.615ε (Mean = 0.096ε))   (<math.h>: Max = 0.615ε (Mean = 0.096ε))
  [3] Max = 0.948ε (Mean = 0.36ε)   (<math.h>: Max = 1.71ε (Mean = 0.581ε))
  [4] Max = 0.867ε (Mean = 0.468ε)   (<math.h>: Max = 0.906ε (Mean = 0.565ε))

near 2
  [1] Max = 0ε (Mean = 0ε)   (GSL 2.1: Max = 1.17e+03ε (Mean = 274ε))   (Rmath 3.2.3: Max = 2.63e+05ε (Mean = 5.84e+04ε))
  [2] Max = 0.878ε (Mean = 0.242ε)   (<cmath>: Max = 0.741ε (Mean = 0.263ε))   (<math.h>: Max = 0.741ε (Mean = 0.263ε))
  [3] Max = 0.878ε (Mean = 0.242ε)   (<math.h>: Max = 0.598ε (Mean = 0.235ε))
  [4] Max = 0.591ε (Mean = 0.159ε)   (<math.h>: Max = 0.741ε (Mean = 0.473ε))

near -10
  [1] Max = 0ε (Mean = 0ε)   (GSL 2.1: Max = 24.9ε (Mean = 4.6ε))   (Rmath 3.2.3: Max = 4.22ε (Mean = 1.26ε))
  [2] Max = 3.81ε (Mean = 1.01ε)   (<cmath>: Max = 0.997ε (Mean = 0.412ε))   (<math.h>: Max = 0.997ε (Mean = 0.412ε))
  [3] Max = 3.81ε (Mean = 1.01ε)   (<math.h>: Max = 3.04ε (Mean = 1.01ε))
  [4] Max = 4.22ε (Mean = 1.33ε)   (<math.h>: Max = 0.997ε (Mean = 0.444ε))

near -55
  [1] Max = 0ε (Mean = 0ε)   (GSL 2.1: Max = 7.02ε (Mean = 1.47ε))   (Rmath 3.2.3: Max = 250ε (Mean = 60.9ε))
  [2] Max = 0.821ε (Mean = 0.513ε)   (<cmath>: Max = 1.58ε (Mean = 0.672ε))   (<math.h>: Max = 1.58ε (Mean = 0.672ε))
  [3] Max = 1.59ε (Mean = 0.587ε)   (<math.h>: Max = 0.821ε (Mean = 0.674ε))
  [4] Max = 0.821ε (Mean = 0.419ε)   (<math.h>: Max = 249ε (Mean = 43.1ε))

The following error plots are based on an exhaustive search of the function's domain, using MSVC-15.5 at double precision, and GCC-7.1/Ubuntu for long double and __float128.


Testing

The main tests for this function involve comparisons against the logs of the factorials, which can be independently calculated to very high accuracy.

Random tests in key problem areas are also used.


Implementation

The generic version of this function is implemented using Stirling's approximation for large arguments:

lgamma(z) = (z - 1/2)log(z) - z + log(2π)/2 + 1/(12z) - 1/(360z^3) + ...

For small arguments, the logarithm of tgamma is used.

For negative z the logarithm version of the reflection formula is used:

lgamma(z) = log(π / |z sin(πz)|) - lgamma(-z)

For types of known precision, the Lanczos approximation is used: a traits class boost::math::lanczos::lanczos_traits maps type T to an appropriate approximation. The logarithmic version of the Lanczos approximation is:

lgamma(z) = (z - 1/2) log((z + g - 1/2) / e) + log(L_e,g(z))

Where L_e,g is the Lanczos sum, scaled by e^g.

As before the reflection formula is used for z < 0.

When z is very near 1 or 2, then the logarithmic version of the Lanczos approximation suffers very badly from cancellation error: indeed for values sufficiently close to 1 or 2, arbitrarily large relative errors can be obtained (even though the absolute error is tiny).

For types with up to 113 bits of precision (up to and including 128-bit long doubles), root-preserving rational approximations devised by JM are used over the intervals [1,2] and [2,3]. Over the interval [2,3] the approximation form used is:

lgamma(z) = (z-2)(z+1)(Y + R(z-2));

Where Y is a constant, and R(z-2) is the rational approximation, optimised so that its absolute error is tiny compared to Y. In addition, small values of z greater than 3 can be handled by argument reduction using the recurrence relation:

lgamma(z+1) = log(z) + lgamma(z);

Over the interval [1,2] two approximations have to be used; the one for small z uses:

lgamma(z) = (z-1)(z-2)(Y + R(z-1));

Once again Y is a constant, and R(z-1) is optimised for low absolute error compared to Y. For z > 1.5 the above form wouldn't converge to a minimax solution but this similar form does:

lgamma(z) = (2-z)(1-z)(Y + R(2-z));

Finally for z < 1 the recurrence relation can be used to move to z > 1:

lgamma(z) = lgamma(z+1) - log(z);

Note that while this involves a subtraction, it appears not to suffer from cancellation error: as z decreases from 1 the -log(z) term grows positive much more rapidly than the lgamma(z+1) term becomes negative. So in this specific case, significant digits are preserved, rather than cancelled.

For other types which do have a Lanczos approximation defined for them the current solution is as follows: imagine we balance the two terms in the Lanczos approximation by dividing the power term by its value at z = 1, and then multiplying the Lanczos coefficients by the same value. Now each term will take the value 1 at z = 1, and we can rearrange the power terms in terms of log1p. Likewise, if we subtract 1 from the Lanczos sum part (algebraically, by subtracting the value of each term at z = 1), we obtain a new summation that can also be fed into log1p. Crucially, all of the terms tend to zero as z -> 1:

The C_k terms in the above are the same as in the Lanczos approximation.

A similar rearrangement can be performed at z = 2: