...one of the most highly regarded and expertly designed C++ library projects in the world.
— Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
   #include <boost/math/special_functions/hypergeometric_1F1.hpp>

   namespace boost { namespace math {

   template <class T1, class T2, class T3>
   calculated-result-type hypergeometric_1F1(T1 a, T2 b, T3 z);

   template <class T1, class T2, class T3, class Policy>
   calculated-result-type hypergeometric_1F1(T1 a, T2 b, T3 z, const Policy&);

   template <class T1, class T2, class T3>
   calculated-result-type hypergeometric_1F1_regularized(T1 a, T2 b, T3 z);

   template <class T1, class T2, class T3, class Policy>
   calculated-result-type hypergeometric_1F1_regularized(T1 a, T2 b, T3 z, const Policy&);

   template <class T1, class T2, class T3>
   calculated-result-type log_hypergeometric_1F1(T1 a, T2 b, T3 z);

   template <class T1, class T2, class T3, class Policy>
   calculated-result-type log_hypergeometric_1F1(T1 a, T2 b, T3 z, const Policy&);

   template <class T1, class T2, class T3>
   calculated-result-type log_hypergeometric_1F1(T1 a, T2 b, T3 z, int* sign);

   template <class T1, class T2, class T3, class Policy>
   calculated-result-type log_hypergeometric_1F1(T1 a, T2 b, T3 z, int* sign, const Policy&);

   }} // namespaces
The function hypergeometric_1F1(a, b, z) returns the non-singular solution to Kummer's equation

   z w'' + (b − z) w' − a w = 0,

which has the hypergeometric series expansion

   _{1}F_{1}(a, b, z) = Σ_{n=0}^{∞} (a)_{n} z^{n} / ((b)_{n} n!),

where (a)_{n} denotes the rising factorial; the series converges for all finite z. This function has the same definition as Mathematica's Hypergeometric1F1[a, b, z] and Maple's KummerM(a, b, z).
The "regularized" versions of the function return

   _{1}F_{1}(a, b, z) / Γ(b)
The "log" versions of the function return

   log(|_{1}F_{1}(a, b, z)|)
When the optional sign argument is supplied, it is set on exit to either +1 or −1 depending on the sign of _{1}F_{1}(a, b, z).
Both the regularized and the logarithmic versions of these functions return results without the spurious under/overflow that plagues naive implementations.
This function is still very much the subject of active research, and a full range of methods capable of calculating the function over its entire domain is not yet available. We have worked extremely hard to ensure that problem domains result in an exception being thrown (an evaluation_error) rather than a garbage result being returned. Domains that are still unsolved include:
Domain: a, b and z all large.
Example: _{1}F_{1}(814723.75, 13586.87890625, 15.87335205078125)

Domain: a < b, b > z; this is really the same domain as above.
Comment: There is a gap in between methods where no reliable implementation is possible; the issue becomes much worse for a, b and z all large.
Example: _{1}F_{1}(9057.91796875, 1252.51318359375, 15.87335205078125)

Comment: This domain is mostly handled by A&S 13.3.6 (see implementation notes below), but there are some values where either that series is non-convergent (most particularly for b < 0) or where the series becomes divergent after a few terms, limiting the precision that can be achieved.
Example: _{1}F_{1}(5.9981750131794866e15, 0.499999999999994, 240.42092034220695)
The following graph illustrates the problem area for b < 0 and a,z > 0:
There are 3 main groups of tests:
We also have a small program for testing random values over a user-specified domain of a, b and z; this program is also used for the error-rate plots below and has been extremely useful in fine-tuning the implementation.
It should be noted, however, that there are some domains for large negative a and b that have poor test coverage, as we were simply unable to compute test values in reasonable time: it is not uncommon for the pFq series to cancel many hundreds of digits, and sometimes thousands.
We divide the domain of _{1}F_{1} into several regions to explain the error rates.
In the following graphs we ran 100,000 random test cases over each region. Note that the scatter plots show only the very worst errors, as otherwise the graphs are both incomprehensible and virtually unplottable (as in sudden browser death).
For 0 < a,b,z the error rates are trivially small unless we are forced to take steps to avoid very slow convergence and use a method other than direct evaluation of the series for performance reasons. Even so, the error rates are broadly acceptable, while the scatter graph shows where the worst errors are located:
For −1000 < a < 0 and 0 < b,z < 1000 the maximum error rates are higher, but most are still low, and the worst errors are fairly evenly distributed:
For −1000 < b < 0 and a,z > 0 the errors are mostly low at double precision, except near the "unimplementable zone". Note, however, that some of the methods used here fail to converge to much better than double precision.
For both a and b less than zero, the worst errors are clustered in a "difficult zone" with b < a and z ≈ −a:
The implementation for this function is remarkably complex; readers will have to refer to the code for the transitions between methods, as we can only give an overview here.
In almost all cases where z < 0 we use Kummer's relation to make z positive:

   _{1}F_{1}(a, b, z) = e^{z} _{1}F_{1}(b − a, b, −z)
The main series representation is used only when it can be summed both accurately and reasonably quickly; see the code for the precise conditions.
The implementation of this series is complicated by the fact that for b < 0, what appears to be a fully converged series can in fact suddenly jump back to life again as b crosses the origin.
A&S 13.3.6 gives an expansion (in terms of Bessel functions) which is particularly useful for a ≅ b and z > 0, or for a ≪ 1 and z < 0.
Then we have Tricomi's approximation (given in simplified form in A&S 13.3.7) (7), with the auxiliary function E_{v} as defined there. This approximation is especially effective when a < 0.
For a and b both large and positive, and somewhat similar in size, an expansion in terms of gamma functions can be used (6).
For z large we have the asymptotic expansion

   _{1}F_{1}(a, b, z) ≈ (Γ(b) / Γ(a)) e^{z} z^{a−b} [1 + O(1/z)],

which is a special case of the gamma function expansion above.
In the situation where ab/z is reasonably small, Luke's rational approximation is used; this is too complex to document here, so refer either to the code or to the original paper (3).
The special case _{1}F_{1}(1, b, z) is treated via Luke's Padé approximation (3).
That effectively completes the "direct" methods used; the remaining areas are treated indirectly via recurrence relations. These require extreme care in their use, as they often change the direction of stability, or else are not stable at all. Sometimes this is a well-defined and characterized change in direction: for example, with a, b and z all positive, recurrence on a via

   (b − a) _{1}F_{1}(a−1, b, z) + (2a − b + z) _{1}F_{1}(a, b, z) − a _{1}F_{1}(a+1, b, z) = 0

abruptly changes from stable forwards recursion to stable backwards recursion if 2a − b + z changes sign. Thus recurrence on a, even when _{1}F_{1}(a+N, b, z) is strictly increasing, needs careful handling as a → 0.
The transitory nature of the stable recurrence relations is well documented in the literature: for example, (10) gives many examples, including the anomalous behaviour of recurrence on a and b for large z as first documented by Gautschi (12). Gautschi also provides the definitive reference on three-term recurrence relations in general in (11).
Unfortunately, not all recurrence stabilities are so well defined. For example, when considering _{1}F_{1}(a, b, z) it would be convenient to use the continued fractions associated with the recurrence relations to calculate _{1}F_{1}(a, b, z) / _{1}F_{1}(a, 1−b, z) and then normalize, either by iterating forwards on b until b > 0 and comparing with a reference value, or by using the Wronskian

   W{ _{1}F_{1}(a, b, z), z^{1−b} _{1}F_{1}(a−b+1, 2−b, z) } = (1 − b) z^{−b} e^{z},

which is of course the well-known Miller's method (12).
Unfortunately, forwards recursion on b is stable only for b ≪ z; as b increases in magnitude it passes through a region where recursion is unstable in both directions before eventually becoming stable in the backwards direction (at least for a while!). This transition is moderated not by a change of sign in the recurrence coefficients themselves, but by a change in the behaviour of the values of _{1}F_{1}: from alternating in sign when b is small to having the same sign when b is larger. During the actual transition, _{1}F_{1} will either pass through a zero or a minimum depending on whether b is even or odd (if there is a minimum at _{1}F_{1}(a, b, z) then there is necessarily a zero of _{1}F_{1}(a+1, b+1, z), as per the derivative of the function). As a result, the behaviour of the recurrence relations is almost impossible to reason about in this area, and we are left using heuristic-based approaches to "guess" which region we are in.
In this implementation we use recurrence relations as follows:
For a,b,z > 0 and large we can either:
For b < 0 and a tiny we would normally use A&S 13.3.6, but where that is non-convergent we can instead use the forward recurrence relation on b to calculate the ratio _{1}F_{1}(a, b, z) / _{1}F_{1}(a, 1−b, z), then iterate forwards until b > 0, compute a reference value, and normalize (Miller's method). In the same domain, when b is near 1 we can use a single backwards recursion on b to compute _{1}F_{1}(a, b, z) from _{1}F_{1}(a, 2−b, z) and _{1}F_{1}(a, 1−b, z), even though technically we are recursing in the unstable direction.
For _{1}F_{1}(−N, b, z) with integer N, backwards recursion from _{1}F_{1}(0, b, z) and _{1}F_{1}(−1, b, z) works well.
For a or b < 0, if all the direct methods have failed then we use various fallbacks:
For _{1}F_{1}(a, b, z) we can use backwards recursion on a as long as b > z; otherwise a more complex scheme is required which starts from _{1}F_{1}(a + N, b + M, z) and recurses backwards in up to 3 stages: first on a, then on a and b together, and finally on b alone.
For b < 0 we have no good methods in some domains (see the unsolved issues above). However, in some circumstances we can either use:
The latter two methods use a lookup table to determine whether inputs are in either of the domains, or in neither. Unfortunately the methods are basically limited to double precision: calculation of the ratios requires iteration towards the no-man's-land between the two methods, where iteration is unstable in both directions. As a result, only low-precision results, which require few iterations to calculate the ratio, are available.
If all else fails, we fall back to a checked series summation which will raise an evaluation_error if more than half the digits are destroyed by cancellation.