More on Matrices
Learn how we can perform some advanced matrix operations using JAX.
While the vast world of matrices and their implementations is more than even a dedicated course can cover, we'll try to do the subject some justice by concluding with one more lesson covering a few important concepts.
Determinant
Like the norm, the determinant is a scalar value characterizing a square matrix. It is quite useful in defining some properties of the matrix.
For example, a matrix is singular if (and only if) its determinant is zero.
We can calculate it as:
a = linalg.det(A)
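As a minimal sketch (the 2×2 matrices here are hypothetical, chosen only for illustration), we can compute the determinant and use it to spot a singular matrix:

```python
import jax.numpy as jnp
from jax.numpy import linalg

# A hypothetical 2x2 matrix; any square matrix works.
A = jnp.array([[4.0, 2.0],
               [1.0, 3.0]])
a = linalg.det(A)   # 4*3 - 2*1 = 10
print(a)

# A singular matrix (the second row is a multiple of the first)
# has a determinant of (approximately) zero.
S = jnp.array([[1.0, 2.0],
               [2.0, 4.0]])
print(linalg.det(S))
```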
Inverse
The inverse of a matrix is defined as:

$$A A^{-1} = A^{-1} A = I$$
The linalg sublibrary provides the function for inverse calculation as:
A_inv = linalg.inv(A)
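A quick sketch to confirm the defining property above, assuming a hypothetical invertible 2×2 matrix:

```python
import jax.numpy as jnp
from jax.numpy import linalg

# A hypothetical invertible matrix.
A = jnp.array([[4.0, 2.0],
               [1.0, 3.0]])
A_inv = linalg.inv(A)

# Multiplying a matrix by its inverse yields the identity,
# up to floating-point error.
print(jnp.allclose(A @ A_inv, jnp.eye(2), atol=1e-5))
```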
Note: The linalg.pinv() function can be used to calculate the pseudo-inverse and doesn't support int members. While the inverse is calculated only for square, full-rank matrices, the pseudo (Moore–Penrose) inverse can be calculated for any m×n matrix.
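To illustrate, here is a sketch with a hypothetical 3×2 matrix, where inv() would fail but pinv() works:

```python
import jax.numpy as jnp
from jax.numpy import linalg

# A hypothetical non-square (3x2) matrix.
B = jnp.array([[1.0, 2.0],
               [3.0, 4.0],
               [5.0, 6.0]])
B_pinv = linalg.pinv(B)
print(B_pinv.shape)   # the pseudo-inverse of an m x n matrix is n x m

# One of the Moore-Penrose conditions: B @ B_pinv @ B == B.
print(jnp.allclose(B @ B_pinv @ B, B, atol=1e-4))
```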
Power
Similarly, we can raise a square matrix A to an integer power using matrix_power() as:

A_square = linalg.matrix_power(A, 2)

Here, A is multiplied by itself, giving A².
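A small sketch (with a hypothetical matrix) showing that matrix_power() performs repeated matrix multiplication, not element-wise exponentiation:

```python
import jax.numpy as jnp
from jax.numpy import linalg

A = jnp.array([[1.0, 2.0],
               [3.0, 4.0]])

A_square = linalg.matrix_power(A, 2)

# matrix_power(A, 2) is the matrix product A @ A,
# not the element-wise square A * A.
print(jnp.allclose(A_square, A @ A))
```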
Eigenvectors and eigenvalues
If we recall, an eigenvector v of a square matrix A is defined as:

$$A v = \lambda v$$
where λ is the corresponding eigenvalue.
lambda_values, eigen_vect = linalg.eig(A)
Compared to normal NumPy, the JAX version treats every value as complex, so don’t be surprised by the j we observe in the output values.
It may be tempting to use "lambda" as a variable identifier. Don't do this, since lambda is a keyword. Instead, we can use λ itself as an identifier.
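The complex output is easy to see with a sketch. The diagonal matrix below is hypothetical, chosen so the eigenvalues are obvious; note that, at the time of writing, JAX's eig() is supported on the CPU backend only:

```python
import jax.numpy as jnp
from jax.numpy import linalg

# A hypothetical diagonal matrix: its eigenvalues are simply
# the diagonal entries, 2 and 3.
A = jnp.array([[2.0, 0.0],
               [0.0, 3.0]])
lambda_values, eigen_vect = linalg.eig(A)

# Even for a real matrix with real eigenvalues, JAX returns
# complex values (with zero imaginary parts), hence the "j".
print(lambda_values)
```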
Definiteness
The eigenvalues of a matrix introduce another related concept.
We call a square matrix positive definite if all its eigenvalues are positive, while a positive semidefinite matrix has all its eigenvalues either zero or positive.
We can extend this concept to negative definite and semidefinite matrices as well.
If a matrix has some positive and some negative eigenvalues, it's called an indefinite matrix. These matrices are pretty helpful, as we'll shortly see in the follow-up lessons.
Note: The concept of definiteness is usually restricted to real-valued square matrices, but it can be extended to complex matrices if needed.
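One possible way to check definiteness is to inspect the signs of the eigenvalues. The matrix below is a hypothetical real symmetric matrix, which lets us use eigvalsh() (the specialized routine for Hermitian/symmetric matrices that returns real eigenvalues):

```python
import jax.numpy as jnp
from jax.numpy import linalg

# A hypothetical real symmetric matrix.
A = jnp.array([[ 2.0, -1.0],
               [-1.0,  2.0]])

# eigvalsh() returns real eigenvalues in ascending order;
# here they are 1 and 3.
eigenvalues = linalg.eigvalsh(A)
print(eigenvalues)

if jnp.all(eigenvalues > 0):
    print("positive definite")
elif jnp.all(eigenvalues >= 0):
    print("positive semidefinite")
```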
Singular Value Decomposition
We'll conclude the lesson by performing a singular value decomposition (SVD) on a given matrix. If you recall, an SVD of matrix A is:

$$A = U \Sigma V^T$$
If A is m×n, then the dimensions of the decomposed matrices are: U is m×m, Σ is m×n, and Vᵀ is n×n.
We can verify the dimensions in this example:
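Here is a sketch using a hypothetical 3×2 matrix; to reconstruct A, we embed the returned singular values into a full 3×2 Σ matrix:

```python
import jax.numpy as jnp
from jax.numpy import linalg

# A hypothetical 3x2 matrix (m = 3, n = 2).
A = jnp.array([[1.0, 2.0],
               [3.0, 4.0],
               [5.0, 6.0]])
U, s, Vt = linalg.svd(A)

# U is m x m, s holds the n singular values, Vt is n x n.
print(U.shape, s.shape, Vt.shape)

# Embed the singular values into an m x n Sigma and reconstruct A.
Sigma = jnp.zeros((3, 2)).at[:2, :2].set(jnp.diag(s))
print(jnp.allclose(U @ Sigma @ Vt, A, atol=1e-4))
```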
If we look closely, while the shapes of U and Vᵀ are consistent with the formula, Σ is returned as a one-dimensional vector of singular values, which is consistent with NumPy.