"...one of the most highly regarded and expertly designed C++ library projects in the world."
— Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
#include <boost/math/special_functions/gamma.hpp>

namespace boost{ namespace math{

template <class T1, class T2>
calculated-result-type gamma_p(T1 a, T2 z);

template <class T1, class T2, class Policy>
calculated-result-type gamma_p(T1 a, T2 z, const Policy&);

template <class T1, class T2>
calculated-result-type gamma_q(T1 a, T2 z);

template <class T1, class T2, class Policy>
calculated-result-type gamma_q(T1 a, T2 z, const Policy&);

template <class T1, class T2>
calculated-result-type tgamma_lower(T1 a, T2 z);

template <class T1, class T2, class Policy>
calculated-result-type tgamma_lower(T1 a, T2 z, const Policy&);

template <class T1, class T2>
calculated-result-type tgamma(T1 a, T2 z);

template <class T1, class T2, class Policy>
calculated-result-type tgamma(T1 a, T2 z, const Policy&);

}} // namespaces
There are four incomplete gamma functions: two are normalised versions (also known as regularized incomplete gamma functions) that return values in the range [0, 1], and two are non-normalised versions that return values in the range [0, Γ(a)]. Users interested in statistical applications should use the normalised versions (gamma_p and gamma_q).
All of these functions require a > 0 and z >= 0; otherwise they return the result of domain_error.
The final Policy argument is optional and can be used to control the behaviour of the function: how it handles errors, what level of precision to use, and so on. Refer to the policy documentation for more details.
The return type of these functions is computed using the result type calculation rules when T1 and T2 are different types, otherwise the return type is simply T1.
template <class T1, class T2>
calculated-result-type gamma_p(T1 a, T2 z);

template <class T1, class T2, class Policy>
calculated-result-type gamma_p(T1 a, T2 z, const Policy&);
Returns the normalised lower incomplete gamma function of a and z:

gamma_p(a, z) = P(a, z) = γ(a, z) / Γ(a) = (1 / Γ(a)) ∫_0^z t^(a-1) e^(-t) dt
This function changes rapidly from 0 to 1 around the point z == a.
template <class T1, class T2>
calculated-result-type gamma_q(T1 a, T2 z);

template <class T1, class T2, class Policy>
calculated-result-type gamma_q(T1 a, T2 z, const Policy&);
Returns the normalised upper incomplete gamma function of a and z:

gamma_q(a, z) = Q(a, z) = Γ(a, z) / Γ(a) = (1 / Γ(a)) ∫_z^∞ t^(a-1) e^(-t) dt
This function changes rapidly from 1 to 0 around the point z == a.
template <class T1, class T2>
calculated-result-type tgamma_lower(T1 a, T2 z);

template <class T1, class T2, class Policy>
calculated-result-type tgamma_lower(T1 a, T2 z, const Policy&);
Returns the full (non-normalised) lower incomplete gamma function of a and z:

tgamma_lower(a, z) = γ(a, z) = ∫_0^z t^(a-1) e^(-t) dt
template <class T1, class T2>
calculated-result-type tgamma(T1 a, T2 z);

template <class T1, class T2, class Policy>
calculated-result-type tgamma(T1 a, T2 z, const Policy&);
Returns the full (non-normalised) upper incomplete gamma function of a and z:

tgamma(a, z) = Γ(a, z) = ∫_z^∞ t^(a-1) e^(-t) dt
The following tables give peak and mean relative errors over various domains of a and z, along with comparisons to the GSL 1.9 and Cephes libraries. Note that only results for the widest floating-point type on the system are given, as narrower types have effectively zero error.
Note that errors grow as a grows larger.
Note also that the higher error rates for the 80- and 128-bit long double results are somewhat misleading: expected results that are zero at 64-bit double precision may be non-zero (but exceptionally small) with the larger exponent range of a long double. These results therefore reflect the more extreme nature of the tests conducted for these types.
All values are in units of epsilon.
Table 6.9. Error rates for gamma_p

Test set | Microsoft Visual C++ version 12.0 | GNU C++ version 5.1.0 | GNU C++ version 5.1.0 | Sun compiler version 0x5130
tgamma(a, z) medium values | Max = 35.1ε (Mean = 6.97ε) | Max = 0.955ε (Mean = 0.05ε) | Max = 41ε (Mean = 8.09ε) | Max = 239ε (Mean = 30.2ε)
tgamma(a, z) small values | Max = 1.54ε (Mean = 0.439ε) | Max = 0ε (Mean = 0ε) | Max = 2ε (Mean = 0.461ε) | Max = 2ε (Mean = 0.472ε)
tgamma(a, z) large values | Max = 244ε (Mean = 20.2ε) | Max = 0ε (Mean = 0ε) | Max = 3.08e+04ε (Mean = 1.86e+03ε) | Max = 3.02e+04ε (Mean = 1.91e+03ε)
tgamma(a, z) integer and half integer values | Max = 13ε (Mean = 2.93ε) | Max = 0ε (Mean = 0ε) | Max = 11.8ε (Mean = 2.65ε) | Max = 71.6ε (Mean = 9.47ε)
Table 6.10. Error rates for gamma_q

Test set | Microsoft Visual C++ version 12.0 | GNU C++ version 5.1.0 | GNU C++ version 5.1.0 | Sun compiler version 0x5130
tgamma(a, z) medium values | Max = 23.7ε (Mean = 4.03ε) | Max = 0.927ε (Mean = 0.035ε) | Max = 31.3ε (Mean = 6.56ε) | Max = 199ε (Mean = 26.6ε)
tgamma(a, z) small values | Max = 2.26ε (Mean = 0.732ε) | Max = 0ε (Mean = 0ε) | Max = 2.45ε (Mean = 0.832ε) | Max = 2.25ε (Mean = 0.81ε)
tgamma(a, z) large values | Max = 470ε (Mean = 31.5ε) | Max = 0ε (Mean = 0ε) | Max = 6.82e+03ε (Mean = 414ε) | Max = 1.15e+04ε (Mean = 733ε)
tgamma(a, z) integer and half integer values | Max = 8.48ε (Mean = 1.42ε) | Max = 0ε (Mean = 0ε) | Max = 11.1ε (Mean = 2.09ε) | Max = 54.7ε (Mean = 6.16ε)
Table 6.11. Error rates for tgamma_lower

Test set | Microsoft Visual C++ version 12.0 | GNU C++ version 5.1.0 | GNU C++ version 5.1.0 | Sun compiler version 0x5130
tgamma(a, z) medium values | Max = 5.62ε (Mean = 1.43ε) | Max = 0.833ε (Mean = 0.0315ε) | Max = 6.79ε (Mean = 1.38ε) | Max = 363ε (Mean = 63.8ε)
tgamma(a, z) small values | Max = 1.57ε (Mean = 0.527ε) | Max = 0ε (Mean = 0ε) | Max = 1.97ε (Mean = 0.552ε) | Max = 1.97ε (Mean = 0.567ε)
tgamma(a, z) integer and half integer values | Max = 2.69ε (Mean = 0.866ε) | Max = 0ε (Mean = 0ε) | Max = 4.83ε (Mean = 1.12ε) | Max = 84.7ε (Mean = 17.5ε)
Table 6.12. Error rates for tgamma (incomplete)

Test set | Microsoft Visual C++ version 12.0 | GNU C++ version 5.1.0 | GNU C++ version 5.1.0 | Sun compiler version 0x5130
tgamma(a, z) medium values | Max = 8.14ε (Mean = 1.71ε) | Max = 0ε (Mean = 0ε) | Max = 7.35ε (Mean = 1.69ε) | Max = 412ε (Mean = 95.5ε)
tgamma(a, z) small values | Max = 2.53ε (Mean = 0.66ε) | Max = 0.753ε (Mean = 0.0474ε) | Max = 2.13ε (Mean = 0.717ε) | Max = 2.13ε (Mean = 0.712ε)
tgamma(a, z) integer and half integer values | Max = 5.16ε (Mean = 1.44ε) | Max = 0ε (Mean = 0ε) | Max = 5.52ε (Mean = 1.52ε) | Max = 79.6ε (Mean = 20.9ε)
There are two sets of tests: spot tests compare values taken from Mathworld's online evaluator with this implementation to perform a basic "sanity check". Accuracy tests use data generated at very high precision (using NTL's RR class set at 1000-bit precision) using this implementation with a very high precision 60-term Lanczos approximation, and with some (but not all) of the special case handling disabled. This is less than satisfactory: an independent method should really be used, but no such methods appear to be available. We can't even use a deliberately naive implementation without special case handling, since Legendre's continued fraction (see below) is unstable for small a and z.
These four functions share a common implementation since they are all related via:

1)   γ(a, z) = P(a, z) · Γ(a)

2)   Γ(a, z) = Q(a, z) · Γ(a)

3)   P(a, z) + Q(a, z) = 1
The lower incomplete gamma function is computed from its series representation:

4)   γ(a, x) = x^a e^(-x) Σ_{n=0}^∞ x^n / (a (a+1) (a+2) ⋯ (a+n))
Or by subtraction of the upper integral from either Γ(a) or 1 when x − (1/(3x)) > a and x > 1.1.
The upper integral is computed from Legendre's continued fraction representation:

5)   Γ(a, x) = x^a e^(-x) / (x + 1 − a − 1(1−a)/(x + 3 − a − 2(2−a)/(x + 5 − a − ⋯)))
This is used when x > 1.1; alternatively the upper integral is computed by subtraction of the lower integral from either Γ(a) or 1 when x − (1/(3x)) < a.
For x < 1.1 computation of the upper integral is more complex, as the continued fraction representation is unstable in this area. However, there is another series representation for the lower integral:

6)   γ(a, x) = x^a Σ_{k=0}^∞ (−x)^k / (k! (a + k))

That lends itself to calculation of the upper integral via rearrangement to:

7)   Q(a, x) = 1 − x^a/Γ(a+1) − (x^a/Γ(a)) Σ_{k=1}^∞ (−x)^k / (k! (a + k))
Refer to the documentation for powm1 and tgamma1pm1 for details of their implementation. Note however that the precision of tgamma1pm1 is capped to either around 35 digits, or to that of the Lanczos approximation associated with type T (if there is one), whichever of the two is the greater. That therefore imposes a similar limit on the precision of this function in this region.
For x < 1.1 the crossover point where the result is ~0.5 no longer occurs for x ~ a. Using x * 0.75 < a as the crossover criterion for 0.5 < x <= 1.1 keeps the maximum value computed (whether it's the upper or lower integral) to around 0.75. Likewise for x <= 0.5, using -0.4 / log(x) < a as the crossover criterion keeps the maximum value computed to around 0.7 (whether it's the upper or lower integral).
There are two special cases used when a is an integer or half-integer, and the crossover conditions listed above indicate that we should compute the upper integral Q. If a is an integer in the range 1 <= a < 30 then the following finite sum is used:

9)   Q(a, x) = e^(-x) Σ_{n=0}^{a-1} x^n / n!

While for half-integers in the range 0.5 <= a < 30 the following finite sum is used:

10)  Q(a, x) = erfc(√x) + e^(-x) Σ_{k=1}^{a-1/2} x^(k-1/2) / Γ(k + 1/2)
These are both more stable and more efficient than the continued fraction alternative.
When the argument a is large and x ~ a, the series (4) and continued fraction (5) above are very slow to converge. In this area an expansion due to Temme is used:

11)  Q(a, x) = (1/2) erfc(η √(a/2)) + R(a, η)

12)  (1/2) η² = λ − 1 − ln(λ),   λ = x / a,   sign(η) = sign(λ − 1)

13)  R(a, η) = e^(−(1/2) a η²) / √(2πa) · S(a, η)

14)  S(a, η) = Σ_{n=0}^∞ (Σ_k C_k^n η^k) / a^n
The double sum is truncated to a fixed number of terms (to give a specific target precision) and evaluated as a polynomial-of-polynomials. There are versions for up to 128-bit long double precision: types requiring greater precision than that do not use these expansions. The coefficients C_k^n are computed in advance using the recurrence relations given by Temme. The zone where these expansions are used is

(a > 20) && (a < 200) && (fabs(x - a) / a < 0.4)

And:

(a > 200) && (fabs(x - a) / a < 4.5 / sqrt(a))
The latter range is valid for all types up to 128-bit long doubles and is designed to ensure that the result is larger than 10^-6; the first range is used only for types up to 80-bit long doubles. These domains are narrower than the ones recommended by either Temme or Didonato and Morris. However, using a wider range results in large and inexact (i.e. computed) values being passed to the exp and erfc functions, resulting in significantly larger error rates. In other words, there is a fine trade-off here between efficiency and error. The current limits should keep the number of terms required by (4) and (5) to no more than ~20 at double precision.
For the normalised incomplete gamma functions, calculation of the leading power terms is central to the accuracy of the function. For smallish a and x, combining the power terms with the Lanczos approximation gives the greatest accuracy:

15)  x^a e^(−x) / Γ(a) = (x / agh)^a √agh e^(agh − x) / L(a),   agh = a + g − 1/2

where g is the Lanczos parameter and L(a) is the Lanczos sum, so that Γ(a) = agh^(a−1/2) e^(−agh) L(a).
In the event that this causes underflow/overflow then the exponent can be reduced by a factor of a and brought inside the power term.
When a and x are large, we end up with a very large exponent with a base near one: this will not be computed accurately via the pow function, and taking logs simply leads to cancellation errors. The worst of the errors can be avoided by using:

16)  (x / agh)^a e^(agh − x) = exp( a (log(1 + u) − u) − (x − agh)(g − 1/2) / agh ),   u = (x − agh) / agh

when a − x is small and a and x are large. There is still a subtraction, and therefore some cancellation errors, but the terms are small so the absolute error will be small; and it is absolute rather than relative error that counts in the argument to the exp function. Note that for sufficiently large a and x the errors will still get you eventually, although this does delay the inevitable much longer than other methods. Use of log(1+x)−x here is inspired by Temme (see references below).