The Moore-Penrose inverse, or pseudoinverse, of a matrix is a generalization of the matrix inverse to non-square or ill-conditioned matrices. The most confusing part of coding a `pinv` function is how to choose an appropriate tolerance for truncating near-zero singular values.
Basics of the Pseudoinverse
Mathematically, the pseudoinverse of a matrix $A$ is defined as the matrix $A^+$ satisfying four specific criteria (the Moore-Penrose conditions). The pseudoinverse has many good properties, such as being equal to the inverse $A^{-1}$ if $A$ is square and invertible, and it exists for any matrix, etc. The computation of the pseudoinverse is quite intuitive: $A^+ = V \Sigma^+ U^T$, where $A = U \Sigma V^T$ is the SVD of the matrix $A$ and $\Sigma^+$ takes the reciprocal of each nonzero entry on the diagonal of $\Sigma$.
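As a quick sanity check, here is a minimal NumPy sketch of this formula (the matrix `A` is just an arbitrary full-rank example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])  # an arbitrary 3x2 (non-square) matrix

# Thin SVD: A = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# A+ = V @ diag(1/s) @ U^T (all singular values are nonzero here)
A_pinv = Vt.T @ np.diag(1.0 / s) @ U.T

# It matches NumPy's built-in pinv and satisfies A @ A+ @ A = A
assert np.allclose(A_pinv, np.linalg.pinv(A))
assert np.allclose(A @ A_pinv @ A, A)
```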
More importantly, the pseudoinverse relates to least-squares problems with Tikhonov regularization:

$$\min_x \; \|Ax - b\|_2^2 + \delta^2 \|x\|_2^2, \tag{1}$$

where the solution is:

$$x = \left(A^T A + \delta^2 I\right)^{-1} A^T b. \tag{2}$$

The pseudoinverse is exactly the limit when $\delta \to 0$:

$$A^+ = \lim_{\delta \to 0} \left(A^T A + \delta^2 I\right)^{-1} A^T. \tag{3}$$
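A quick numerical check of this limit (the system `A`, `b` is a made-up overdetermined example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
b = np.array([1.0, 0.0, 1.0])

x_pinv = np.linalg.pinv(A) @ b  # minimum-norm least-squares solution

# The Tikhonov solutions approach the pseudoinverse solution as delta -> 0
for delta in (1e-1, 1e-4, 1e-8):
    x_ridge = np.linalg.solve(A.T @ A + delta**2 * np.eye(2), A.T @ b)
    print(delta, np.linalg.norm(x_ridge - x_pinv))  # distance shrinks toward 0
```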
If $A$ is square and ill-conditioned, then it has a high condition number, and many singular values (or eigenvalues) gradually decay toward 0 on the diagonal of $\Sigma$. Inverting $\Sigma$ then usually causes overflow in real applications due to division-by-zero errors. A common workaround is to replace $A^{-1}$ with the pseudoinverse $A^+$. Recalling equation (3) and the SVD, we have:

$$A^+ = \lim_{\delta \to 0} V \left(\Sigma^2 + \delta^2 I\right)^{-1} \Sigma\, U^T = \sum_{\sigma_i > 0} \frac{1}{\sigma_i} v_i u_i^T,$$

since each filter factor $\sigma_i / (\sigma_i^2 + \delta^2)$ tends to $1/\sigma_i$ for $\sigma_i > 0$ and to $0$ for $\sigma_i = 0$. So the pseudoinverse is a truncated SVD method that discards all components corresponding to zero singular values. Theoretically, it would not encounter any divide-by-zero issues.
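To see the truncation at work, consider a rank-deficient matrix where the plain inverse fails outright (a made-up rank-1 example; the cutoff mimics NumPy's default `rcond=1e-15` from the table below):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # rank 1: the second row is twice the first

# np.linalg.inv(A) would raise LinAlgError: Singular matrix

U, s, Vt = np.linalg.svd(A)
tol = 1e-15 * s.max()          # treat anything below this as zero
keep = s > tol
s_inv = np.zeros_like(s)
s_inv[keep] = 1.0 / s[keep]    # reciprocal only for "nonzero" singular values
A_pinv = Vt.T @ np.diag(s_inv) @ U.T

assert np.allclose(A_pinv, np.linalg.pinv(A))
```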
Engineering Considerations
However, determining what zero means in practical numerical computing is a little tricky. Typically, we classify singular values that fall below some small tolerance as zero, and the choice of the default tolerance varies among implementations. Here I list some of them:
Software | Implementation | Note |
---|---|---|
Matlab | `max(m,n)*eps(norm(s,inf))` | `eps(x)` returns the positive distance from `abs(x)` to the next larger floating-point number of the same precision as `x` |
Scipy | `atol + max(m,n)*np.finfo(dtype).eps*max(s)` | `np.finfo(dtype).eps` returns the difference between 1.0 and the smallest floating-point number greater than 1.0 in precision `dtype`; `atol` is an absolute tolerance defaulting to 0 |
Numpy | `1e-15*max(s)` | |
Octave | `max(m,n)*max(s)*std::numeric_limits<T>::epsilon()` | `std::numeric_limits<T>::epsilon()` returns the difference between 1.0 and the smallest floating-point number greater than 1.0 in precision `T` (that is GCC's behavior) |
Julia | `max(eps(T)*min(m,n)*maximum(s), atol)` | `eps(T)` returns the distance between 1.0 and the next larger representable floating-point value in precision `T`; the Julia community also recommends `sqrt(eps(T))` for dense ill-conditioned matrices |
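To get a concrete feel for how much these defaults differ, the formulas in the table can be evaluated side by side. A small sketch, assuming double precision, `atol = 0`, a 3x3 matrix, and the singular values from the Stata excerpt quoted below (`np.spacing` plays the role of Matlab's `eps(x)`):

```python
import numpy as np

m, n = 3, 3
s = np.array([5.5e14, 2.4e13, 8.7e-5])  # singular values from the Stata excerpt
eps = np.finfo(np.float64).eps          # ~2.22e-16

tols = {
    "Matlab": max(m, n) * np.spacing(s.max()),      # eps(norm(s, inf))
    "Scipy":  0.0 + max(m, n) * eps * s.max(),      # atol defaults to 0
    "Numpy":  1e-15 * s.max(),
    "Julia":  max(eps * min(m, n) * s.max(), 0.0),  # atol defaults to 0
}
for name, tol in tols.items():
    print(f"{name:6s} tol = {tol:.2e} -> numerical rank = {np.sum(s > tol)}")
# all four defaults land between ~0.19 and ~0.55, so each reports rank 2 here
```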
I'm not an expert in numerical computing, so I won't go into the considerations behind these implementations. If you are interested, you can refer to the community discussions on this topic, such as those in Numpy and Julia, or classic books such as Golub and Van Loan's Matrix Computations and Vetterling's Numerical Recipes.
Generally speaking, there is no one-size-fits-all tolerance for all kinds of problems, and Matlab's implementation is considered the most conservative. Nearly all implementations consider two kinds of tolerance: absolute and relative. Here is an excerpt from Stata's manual:
An absolute tolerance is a fixed number that is used to make direct comparisons. If the tolerance for a particular routine were 1e-14, then 8.99e-15 in some calculation would be considered to be close enough to zero to act as if it were, in fact, zero, and 1.000001e-14 would be considered a valid, nonzero number.

But is 1e-14 small? The number may look small to you, but whether 1e-14 is small depends on what is being measured and the units in which it is measured. If all the numbers in a certain problem were around 1e-12, you might suspect that 1e-14 is a reasonable number. That leads to relative measures of tolerance. Rather than treating, say, a predetermined quantity as being so small as to be zero, one specifies a value (for example, 1e-14) multiplied by something and uses that as the definition of small.

…

For the above matrix, the diagonal of U turns out to be (5.5e+14, 2.4e+13, 0.000087).

An absolutist would tell you that the matrix is of full rank; the smallest number along the diagonal of U is 0.000087 (8.7e-5), and that is still a respectable number, at least when compared with computer precision, which is about 2.22e-16.

Most Mata routines would tell you that the matrix has rank 2. Numbers such as 0.000087 may seem respectable when compared with machine precision, but 0.000087 is, relatively speaking, a very small number, being about 4.6e-19 relative to the average value of the diagonal elements.
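The relative comparison in that excerpt is easy to reproduce with the quoted diagonal values:

```python
import numpy as np

diag_u = np.array([5.5e14, 2.4e13, 0.000087])

print(diag_u.min() / diag_u.mean())  # ~4.6e-19: tiny relative to the problem's scale
print(np.finfo(float).eps)           # ~2.22e-16: machine precision for doubles
```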
Vetterling's Numerical Recipes makes a similar recommendation (p. 795):
Moreover, if a singular value $w_i$ is nonzero but very small, you should also define its reciprocal to be zero, since its apparent value is probably an artifact of roundoff error, not a meaningful number. A plausible answer to the question "how small is small?" is to edit in this fashion all singular values whose ratio to the largest singular value is less than $N$ times the machine precision $\epsilon$. (This is a more conservative recommendation than the default in section 2.6, which scales as $\sqrt{N}$.)
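Read literally, that recommendation translates into something like the following sketch (my interpretation, taking `N` as the larger matrix dimension):

```python
import numpy as np

def pinv_nr(A):
    """Pseudoinverse using the conservative N*eps truncation quoted above."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    cutoff = max(A.shape) * np.finfo(A.dtype).eps  # ratio threshold: N * eps
    keep = s > cutoff * s.max()    # singular values regarded as nonzero
    s_inv = np.zeros_like(s)
    s_inv[keep] = 1.0 / s[keep]
    return Vt.T @ (s_inv[:, None] * U.T)
```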