5.1 Determinants: Basic Properties
5.1.1 Conventions
For the rest of this section, we fix a commutative ring \(K\). In most examples, \(K\) will be \(\mathbb {Z}\) or \(\mathbb {Q}\) or a polynomial ring.
Let \(n, m \in \mathbb {N}\).
(a) If \(A\) is an \(n \times m\)-matrix, then \(A_{i,j}\) shall mean the \((i,j)\)-th entry of \(A\), that is, the entry of \(A\) in row \(i\) and column \(j\).
(b) If \(a_{i,j}\) is an element of \(K\) for each \(i \in [n]\) and each \(j \in [m]\), then \(\left( a_{i,j} \right)_{1 \leq i \leq n,\; 1 \leq j \leq m}\) shall denote the \(n \times m\)-matrix whose \((i,j)\)-th entry is \(a_{i,j}\) for all \(i \in [n]\) and \(j \in [m]\).
(c) We let \(K^{n \times m}\) denote the set of all \(n \times m\)-matrices with entries in \(K\). This is a \(K\)-module. If \(n = m\), this is also a \(K\)-algebra.
(d) Let \(A \in K^{n \times m}\) be an \(n \times m\)-matrix. The transpose \(A^T\) of \(A\) is defined to be the \(m \times n\)-matrix whose entries are given by \(\left( A^T \right)_{j,i} = A_{i,j}\) for all \(i \in [n]\) and \(j \in [m]\).
5.1.2 Definition
Let \(n \in \mathbb {N}\). Let \(A \in K^{n \times n}\) be an \(n \times n\)-matrix. The determinant \(\det A\) of \(A\) is defined to be the element
\[ \det A = \sum _{\sigma \in S_n} (-1)^{\sigma } \prod _{i=1}^{n} A_{i, \sigma (i)} \]
of \(K\). Here:
we let \(S_n\) denote the \(n\)-th symmetric group (i.e., the group of permutations of \([n] = \{ 1, 2, \ldots , n\} \));
we let \((-1)^{\sigma }\) denote the sign of the permutation \(\sigma \) (as defined in Definition 3.109).
5.1.3 Small determinants and basic API
The determinant of a \(0 \times 0\) matrix is \(1\) (empty product convention).
The sum over \(S_0 = \{ \mathrm{id}\} \) gives \((-1)^{\mathrm{id}} \cdot 1 = 1\) (empty product is \(1\)).
The determinant of a \(1 \times 1\) matrix is its single entry: \(\det (a) = a\).
The sum over \(S_1 = \{ \mathrm{id}\} \) gives \((-1)^{\mathrm{id}} \cdot a = a\).
The determinant of a \(2 \times 2\) matrix satisfies \(\det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc\).
\(S_2\) has two elements: the identity (sign \(+1\)) and the transposition (sign \(-1\)). The sum gives \(a \cdot d - b \cdot c\).
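These small cases can be checked numerically against the Leibniz expansion. The following Python sketch is an informal sanity check, not part of the formalization; the helper names `sign` and `det` are ad hoc, and the code uses 0-based indexing.

```python
from itertools import permutations
from math import prod

def sign(p):
    # sign of a permutation, given as a tuple of 0-based values:
    # (-1) raised to the number of inversions
    return (-1) ** sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def det(A):
    # Leibniz expansion of Definition 5.3:
    # det A = sum over sigma in S_n of sign(sigma) * prod_i A[i][sigma(i)]
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

assert det([]) == 1                    # 0 x 0: empty sum over S_0 = {id}, empty product = 1
assert det([[7]]) == 7                 # 1 x 1: the single entry
a, b, c, d = 2, 3, 5, 7
assert det([[a, b], [c, d]]) == a * d - b * c   # 2 x 2: ad - bc
```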
The determinant of the identity matrix is \(1\): \(\det I_n = 1\).
In the Leibniz sum for \(\det I_n\), the only permutation \(\sigma\) with \(\prod_{i} \left( I_n \right)_{i, \sigma(i)} \neq 0\) is the identity, and it contributes \((-1)^{\mathrm{id}} \cdot 1 = 1\).
For \(n \geq 1\), the determinant of the zero matrix is \(0\). (For \(n = 0\), the zero matrix is the empty matrix, and \(\det (\emptyset ) = 1\).)
The zero matrix has a zero row, so its determinant is \(0\) by Theorem 5.20 (b).
The determinant of a diagonal matrix equals the product of its diagonal entries: \(\det (\mathrm{diag}(d_1, \ldots , d_n)) = d_1 d_2 \cdots d_n\).
A diagonal matrix is both upper- and lower-triangular, so this follows from Theorem 5.19.
For \(n \geq 2\), a matrix with all entries equal to \(c\) has determinant \(0\). This is a corollary of Proposition 5.12 with \(x_i = 1\) and \(y_j = c\).
The constant matrix \((c)_{i,j}\) is the outer product of \(\mathbf{1}\) and \((c, c, \ldots , c)\). By Proposition 5.12, its determinant is \(0\).
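The identity, zero, diagonal and constant-matrix facts above can likewise be spot-checked via the Leibniz expansion. This is an informal illustration (helper names ad hoc, 0-based indexing), not part of the formalization.

```python
from itertools import permutations
from math import prod

def sign(p):
    # sign of a permutation: (-1)^(number of inversions)
    return (-1) ** sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def det(A):
    # Leibniz expansion of Definition 5.3
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

n = 4
I4 = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
Z = [[0] * n for _ in range(n)]
ds = [2, 3, 5, 7]
D = [[ds[i] if i == j else 0 for j in range(n)] for i in range(n)]
C = [[5] * n for _ in range(n)]        # all entries equal to c = 5

assert det(I4) == 1                    # det I_n = 1
assert det(Z) == 0                     # zero matrix, n >= 1
assert det(D) == 2 * 3 * 5 * 7         # diagonal: product of diagonal entries
assert det(C) == 0                     # constant matrix, n >= 2
```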
Let \(n \geq 2\). Then,
\[ \sum _{\sigma \in S_n} (-1)^{\sigma } = 0 . \]
The proof uses a sign-reversing involution: for \(n \geq 2\), pick two distinct elements \(i, j\) and their transposition \(t_{i,j}\). The map \(\sigma \mapsto \sigma \cdot t_{i,j}\) pairs up permutations with opposite signs, so all summands cancel.
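The cancellation argument can be illustrated by brute force: the signs sum to zero for small \(n\), and composing with a fixed transposition pairs each permutation with one of opposite sign. An informal check (ad hoc helper, 0-based positions):

```python
from itertools import permutations

def sign(p):
    # sign of a permutation: (-1)^(number of inversions)
    return (-1) ** sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

# the signs of all permutations cancel for n >= 2
for n in range(2, 6):
    assert sum(sign(p) for p in permutations(range(n))) == 0

# the sign-reversing pairing: composing with the transposition of the first
# two positions (t_{1,2} in the 1-based notation of the text) flips the sign
for p in permutations(range(4)):
    q = (p[1], p[0]) + p[2:]
    assert sign(q) == -sign(p)
```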
Let \(n \in \mathbb {N}\) be such that \(n \geq 2\). Let \(x_1, x_2, \ldots , x_n \in K\) and \(y_1, y_2, \ldots , y_n \in K\). Then,
\[ \det \left( \left( x_i y_j \right)_{1 \leq i \leq n,\; 1 \leq j \leq n} \right) = 0 . \]
The definition of the determinant yields (using Lemma 5.11 for the last step)
\[ \det \left( \left( x_i y_j \right)_{1 \leq i \leq n,\; 1 \leq j \leq n} \right) = \sum _{\sigma \in S_n} (-1)^{\sigma } \prod _{i=1}^{n} x_i y_{\sigma (i)} = \left( \prod _{i=1}^{n} x_i \right) \left( \prod _{j=1}^{n} y_j \right) \sum _{\sigma \in S_n} (-1)^{\sigma } = 0 , \]
since each \(\sigma \in S_n\) is a bijection and thus \(\prod _{i=1}^{n} y_{\sigma (i)} = \prod _{j=1}^{n} y_j\).
Let \(A\) be a \(5 \times 5\) matrix such that \(A_{i,j} = 0\) whenever \(i, j \in \{ 2, 3, 4\} \). Then \(\det A = 0\). This formalizes the \(5 \times 5\) hollow matrix example: a matrix with a \(3 \times 3\) block of zeros in the middle has determinant \(0\).
By the pigeonhole principle: any permutation \(\sigma \) of \([5]\) must map some element of \(\{ 2,3,4\} \) to an element of \(\{ 2,3,4\} \), because \(|\{ 2,3,4\} | = 3 {\gt} 2 = |\{ 1,5\} |\). Hence, for every \(\sigma \), the product \(\prod _i A_{i,\sigma (i)}\) contains a zero factor, making every term in the determinant sum equal to \(0\).
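Both the pigeonhole step and the conclusion can be checked exhaustively over the \(120\) permutations of \([5]\). In the 0-based indexing of the code, the middle block \(\{2,3,4\}\) becomes \(\{1,2,3\}\); the non-block entries are arbitrary (an informal check, ad hoc helpers):

```python
from itertools import permutations
from math import prod

def sign(p):
    # sign of a permutation: (-1)^(number of inversions)
    return (-1) ** sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def det(A):
    # Leibniz expansion of Definition 5.3
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

BLOCK = {1, 2, 3}                      # {2, 3, 4} in the 1-based notation of the text
A = [[0 if (i in BLOCK and j in BLOCK) else 10 * i + j + 1 for j in range(5)]
     for i in range(5)]

# pigeonhole: every permutation maps some element of the block into the block,
# so every Leibniz term has a zero factor
for p in permutations(range(5)):
    assert any(i in BLOCK and p[i] in BLOCK for i in range(5))

assert det(A) == 0
```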
5.1.4 Helpers for Proposition 5.17
The factor matrix \(A\) in the factorization of the sum matrix \(\left( x_i + y_j \right)\) has \(x_i\) in its first column, \(1\) in its second column, and \(0\) everywhere else. Hence, when \(n \geq 3\), its third column (column index \(2\) in the \(0\)-based indexing of the formalization) is a zero column.
By direct computation: for the column of \(0\)-based index \(2\), neither \(j = 0\) nor \(j = 1\) holds, so the entry is \(0\).
For \(n \geq 3\), the factor matrix \(A\) has determinant \(0\).
For \(n \geq 2\), the matrix \(\left( x_i + y_j \right)_{i,j}\) factors as \(A \cdot B\), where \(A\) has \(x_i\) in its first column, \(1\) in its second column, and \(0\) elsewhere, while \(B\) has \(1\) in its first row, \(y_j\) in its second row, and \(0\) elsewhere.
Direct matrix multiplication: \((AB)_{i,j} = \sum _k A_{i,k} B_{k,j} = x_i \cdot 1 + 1 \cdot y_j = x_i + y_j\).
Let \(n \in \mathbb {N}\) be such that \(n \geq 3\). Let \(x_1, x_2, \ldots , x_n \in K\) and \(y_1, y_2, \ldots , y_n \in K\). Then,
\[ \det \left( \left( x_i + y_j \right)_{1 \leq i \leq n,\; 1 \leq j \leq n} \right) = 0 . \]
Define two \(n \times n\)-matrices \(A\) and \(B\) by
\[ A_{i,1} = x_i, \quad A_{i,2} = 1, \quad A_{i,j} = 0 \text{ for } j > 2, \qquad B_{1,j} = 1, \quad B_{2,j} = y_j, \quad B_{i,j} = 0 \text{ for } i > 2 . \]
Then \(\left( x_i + y_j \right)_{1 \leq i \leq n,\; 1 \leq j \leq n} = AB\) (since \(\left( AB \right)_{i,j} = x_i \cdot 1 + 1 \cdot y_j\)), so
\[ \det \left( \left( x_i + y_j \right)_{1 \leq i \leq n,\; 1 \leq j \leq n} \right) = \det A \cdot \det B \]
(by Theorem 5.23). However, the matrix \(A\) has a zero column (since \(n \geq 3\)), and thus \(\det A = 0\) (by Theorem 5.21 (b)). Hence the product is \(0\).
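The factorization in this proof is easy to verify numerically for a sample \(n = 4\). The check below is informal (ad hoc helpers, 0-based indexing: column \(0\) of \(A\) holds the \(x_i\), column \(1\) holds \(1\); row \(0\) of \(B\) holds \(1\), row \(1\) holds the \(y_j\)):

```python
from itertools import permutations
from math import prod

def sign(p):
    # sign of a permutation: (-1)^(number of inversions)
    return (-1) ** sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def det(A):
    # Leibniz expansion of Definition 5.3
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

xs, ys = [2, 3, 5, 7], [1, 4, 6, 8]
n = len(xs)
A = [[xs[i], 1] + [0] * (n - 2) for i in range(n)]
B = [[1] * n, list(ys)] + [[0] * n for _ in range(n - 2)]
S = [[xs[i] + ys[j] for j in range(n)] for i in range(n)]

assert matmul(A, B) == S               # the factorization (x_i + y_j) = A * B
assert det(A) == 0                     # A has a zero column since n >= 3
assert det(S) == 0                     # hence the sum matrix has determinant 0
```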
5.1.5 Basic properties
Let \(n \in \mathbb {N}\). If \(A \in K^{n \times n}\) is any \(n \times n\)-matrix, then \(\det (A^T) = \det A\).
Let \(n \in \mathbb {N}\). Let \(A \in K^{n \times n}\) be a triangular (i.e., lower-triangular or upper-triangular) \(n \times n\)-matrix. Then, the determinant of the matrix \(A\) is the product of its diagonal entries. That is,
\[ \det A = A_{1,1} A_{2,2} \cdots A_{n,n} . \]
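An informal numeric illustration (ad hoc helpers, 0-based indexing) on a lower-triangular matrix and its upper-triangular transpose:

```python
from itertools import permutations
from math import prod

def sign(p):
    # sign of a permutation: (-1)^(number of inversions)
    return (-1) ** sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def det(A):
    # Leibniz expansion of Definition 5.3
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

A = [[3, 0, 0, 0],
     [1, -2, 0, 0],
     [4, 5, 7, 0],
     [2, -6, 8, -1]]                  # lower-triangular

diag_prod = prod(A[i][i] for i in range(4))
assert det(A) == diag_prod            # det = product of diagonal entries

U = [list(r) for r in zip(*A)]        # the transpose is upper-triangular
assert det(U) == diag_prod
```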
Let \(n \in \mathbb {N}\). Let \(A \in K^{n \times n}\) be an \(n \times n\)-matrix. Then:
(a) If we swap two rows of \(A\), then \(\det A\) gets multiplied by \(-1\).
(b) If \(A\) has a zero row (i.e., a row that consists entirely of zeroes), then \(\det A = 0\).
(c) If \(A\) has two equal rows, then \(\det A = 0\).
(d) Let \(\lambda \in K\). If we multiply a row of \(A\) by \(\lambda \) (that is, we multiply all entries of this one row by \(\lambda \), while leaving all other entries of \(A\) unchanged), then \(\det A\) gets multiplied by \(\lambda \).
(e) If we add a row of \(A\) to another row of \(A\) (that is, we add each entry of the former row to the corresponding entry of the latter), then \(\det A\) stays unchanged.
(f) Let \(\lambda \in K\). If we add \(\lambda \) times a row of \(A\) to another row of \(A\) (that is, we add \(\lambda \) times each entry of the former row to the corresponding entry of the latter), then \(\det A\) stays unchanged.
(g) Let \(B, C \in K^{n \times n}\) be two further \(n \times n\)-matrices. Let \(k \in [n]\). Assume that
\[ C_{k,j} = A_{k,j} + B_{k,j} \quad \text{for each } j \in [n], \]
whereas each \(i \neq k\) satisfies
\[ C_{i,j} = A_{i,j} = B_{i,j} \quad \text{for each } j \in [n]. \]
Then,
\[ \det C = \det A + \det B . \]
(a) See [Grinbe15, Exercise 6.7 (a)]. This is also a particular case of [Strick13, Corollary B.19].
(b) See [Grinbe15, Exercise 6.7 (c)]. This is also near-obvious from Definition 5.3.
(c) See [Grinbe15, Exercise 6.7 (e)] or [Laue15, §5.3.3, property (iii)].
(d) See [Grinbe15, Exercise 6.7 (g)] or [Laue15, §5.3.3, property (ii)].
(f) See [Grinbe15, Exercise 6.8 (a)].
(e) This is the particular case of part (f) for \(\lambda = 1\).
(g) See [Grinbe15, Exercise 6.7 (i)] or [Laue15, §5.3.3, property (i)].
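The row-operation properties of Theorem 5.20 can be spot-checked numerically. The following informal check (ad hoc helpers, 0-based row indices) exercises parts (a), (d), (f) and (g) on a \(3 \times 3\) integer matrix:

```python
from itertools import permutations
from math import prod

def sign(p):
    # sign of a permutation: (-1)^(number of inversions)
    return (-1) ** sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def det(A):
    # Leibniz expansion of Definition 5.3
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

A = [[2, -1, 3], [4, 0, 5], [1, 7, -2]]
d = det(A)
lam = 5

# (a) swapping two rows multiplies the determinant by -1
assert det([A[1], A[0], A[2]]) == -d
# (d) scaling one row by lam multiplies the determinant by lam
assert det([A[0], [lam * x for x in A[1]], A[2]]) == lam * d
# (f) adding lam times row 0 to row 2 leaves the determinant unchanged
assert det([A[0], A[1], [A[2][j] + lam * A[0][j] for j in range(3)]]) == d
# (g) additivity in a single row
R = [1, 2, 3]
assert det([A[0], [A[1][j] + R[j] for j in range(3)], A[2]]) == d + det([A[0], R, A[2]])
```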
Theorem 5.20 also holds if we replace “row” by “column” throughout it.
Let \(n \in \mathbb {N}\). Let \(A \in K^{n \times n}\) and \(\tau \in S_n\). Then,
\[ \det \left( \left( A_{\tau (i), j} \right)_{1 \leq i \leq n,\; 1 \leq j \leq n} \right) = (-1)^{\tau } \det A \tag{7} \]
and
\[ \det \left( \left( A_{i, \tau (j)} \right)_{1 \leq i \leq n,\; 1 \leq j \leq n} \right) = (-1)^{\tau } \det A . \tag{8} \]
Let us first prove (8).
The definition of \(\det A\) yields
\[ \det A = \sum _{\sigma \in S_n} (-1)^{\sigma } \prod _{i=1}^{n} A_{i, \sigma (i)} . \]
The definition of \(\det \left( \left( A_{i, \tau (j)} \right)_{1 \leq i \leq n,\; 1 \leq j \leq n} \right)\) yields
\[ \det \left( \left( A_{i, \tau (j)} \right)_{1 \leq i \leq n,\; 1 \leq j \leq n} \right) = \sum _{\sigma \in S_n} (-1)^{\sigma } \prod _{i=1}^{n} A_{i, \tau (\sigma (i))} . \]
However, \(S_n\) is a group. Thus, the map \(\sigma \mapsto \tau ^{-1} \sigma \) is a bijection on \(S_n\). Substituting \(\tau ^{-1} \sigma \) for \(\sigma \) in the sum, we obtain (using Proposition 3.110 (d) and (f))
\[ \det \left( \left( A_{i, \tau (j)} \right)_{1 \leq i \leq n,\; 1 \leq j \leq n} \right) = \sum _{\sigma \in S_n} (-1)^{\tau ^{-1} \sigma } \prod _{i=1}^{n} A_{i, \tau \left( \tau ^{-1} \sigma (i) \right)} = (-1)^{\tau ^{-1}} \sum _{\sigma \in S_n} (-1)^{\sigma } \prod _{i=1}^{n} A_{i, \sigma (i)} = (-1)^{\tau } \det A . \]
This proves (8).
Now, (7) follows by applying (8) to \(A^T\) instead of \(A\) and using Theorem 5.18.
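Both formulas can be verified numerically for a sample permutation. An informal check (ad hoc helpers; `tau` is a 0-based permutation tuple, so `A[i][tau[j]]` realizes \(A_{i, \tau(j)}\)):

```python
from itertools import permutations
from math import prod

def sign(p):
    # sign of a permutation: (-1)^(number of inversions)
    return (-1) ** sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def det(A):
    # Leibniz expansion of Definition 5.3
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

A = [[2, -1, 3, 0], [4, 0, 5, 1], [1, 7, -2, 3], [0, 2, 6, -4]]
n = len(A)
tau = (2, 0, 3, 1)                     # a sample permutation, 0-based

# (7): permuting the rows by tau multiplies the determinant by sign(tau)
assert det([[A[tau[i]][j] for j in range(n)] for i in range(n)]) == sign(tau) * det(A)
# (8): permuting the columns by tau multiplies the determinant by sign(tau)
assert det([[A[i][tau[j]] for j in range(n)] for i in range(n)]) == sign(tau) * det(A)
```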
Let \(n \in \mathbb {N}\). Let \(A, B \in K^{n \times n}\) be two \(n \times n\)-matrices. Then,
\[ \det (AB) = \det A \cdot \det B . \]
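Multiplicativity is easy to spot-check on small integer matrices. An informal illustration (ad hoc helpers, 0-based indexing):

```python
from itertools import permutations
from math import prod

def sign(p):
    # sign of a permutation: (-1)^(number of inversions)
    return (-1) ** sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def det(A):
    # Leibniz expansion of Definition 5.3
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A = [[2, -1, 3], [4, 0, 5], [1, 7, -2]]
B = [[1, 2, 0], [-3, 4, 1], [5, 0, 2]]
assert det(matmul(A, B)) == det(A) * det(B)
```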
Let \(n \in \mathbb {N}\). Let \(A \in K^{n \times n}\) and \(d_1, d_2, \ldots , d_n \in K\). Then,
\[ \det \left( \left( d_i A_{i,j} \right)_{1 \leq i \leq n,\; 1 \leq j \leq n} \right) = d_1 d_2 \cdots d_n \cdot \det A \]
and
\[ \det \left( \left( d_j A_{i,j} \right)_{1 \leq i \leq n,\; 1 \leq j \leq n} \right) = d_1 d_2 \cdots d_n \cdot \det A . \]
The definition of a determinant yields
\[ \det A = \sum _{\sigma \in S_n} (-1)^{\sigma } \prod _{i=1}^{n} A_{i, \sigma (i)} \]
and
\[ \det \left( \left( d_i A_{i,j} \right)_{1 \leq i \leq n,\; 1 \leq j \leq n} \right) = \sum _{\sigma \in S_n} (-1)^{\sigma } \prod _{i=1}^{n} d_i A_{i, \sigma (i)} = d_1 d_2 \cdots d_n \cdot \det A \]
and
\[ \det \left( \left( d_j A_{i,j} \right)_{1 \leq i \leq n,\; 1 \leq j \leq n} \right) = \sum _{\sigma \in S_n} (-1)^{\sigma } \prod _{i=1}^{n} d_{\sigma (i)} A_{i, \sigma (i)} = d_1 d_2 \cdots d_n \cdot \det A \]
(where the last step uses \(\prod _{i=1}^{n} d_{\sigma (i)} = \prod _{j=1}^{n} d_j\), since each \(\sigma \in S_n\) is a bijection).
5.1.6 Permutation images and submatrix determinants
The image of a finite subset \(P \subseteq [n]\) under a permutation \(\sigma \in S_n\) is \(\sigma (P) = \{ \sigma (i) \mid i \in P\} \).
- PermFinset.imageFinset σ P = Finset.map { toFun := ⇑σ, inj' := ⋯ } P
The image of a finset under a permutation has the same cardinality: \(|\sigma (P)| = |P|\).
Since \(\sigma \) is injective, the image has the same cardinality as the original set.
The image of a complement equals the complement of the image: \(\sigma (P^c) = \sigma (P)^c\).
By the injectivity of \(\sigma \): \(x \in \sigma (P^c)\) iff \(\sigma ^{-1}(x) \notin P\) iff \(x \notin \sigma (P)\).
If \(\sigma (P) = Q\), then \(\sigma ^{-1}(Q) = P\).
Apply \(\sigma ^{-1}\) to both sides of \(\sigma (P) = Q\) and use \(\sigma ^{-1} \circ \sigma = \mathrm{id}\).
The set of permutations mapping \(P\) to \(Q\): \(\mathrm{permsMapping}(P, Q) = \{ \sigma \in S_n \mid \sigma (P) = Q\} \).
- PermFinset.permsMapping P Q = {σ : Equiv.Perm (Fin n) | PermFinset.imageFinset σ P = Q}
If \(|P| \neq |Q|\), then no permutation maps \(P\) to \(Q\): \(\mathrm{permsMapping}(P, Q) = \emptyset \).
If \(\sigma (P) = Q\), then \(|Q| = |\sigma (P)| = |P|\) by Lemma 5.26, contradicting \(|P| \neq |Q|\).
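The image lemmas above can be checked exhaustively over \(S_4\). An informal brute-force check (ad hoc code, 0-based elements; sets play the role of finsets):

```python
from itertools import permutations

n = 4
P = {0, 2}
Pc = set(range(n)) - P                 # the complement of P in [n]

for p in permutations(range(n)):
    image = {p[i] for i in P}
    # image under a permutation has the same cardinality
    assert len(image) == len(P)
    # image of the complement equals the complement of the image
    assert {p[i] for i in Pc} == set(range(n)) - image

# permsMapping(P, Q) is empty when the cardinalities differ
Q = {1}
assert not [p for p in permutations(range(n)) if {p[i] for i in P} == Q]
```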
The submatrix determinant of an \(n \times n\) matrix \(A\) with row set \(P\) and column set \(Q\) (both subsets of \([n]\)) is
\[ \mathrm{submatrixDet}(A, P, Q) = \begin{cases} \det \left( A[P, Q] \right), & \text{if } |P| = |Q|; \\ 0, & \text{otherwise}, \end{cases} \]
where \(A[P, Q]\) is the submatrix of \(A\) with rows indexed by \(P\) and columns indexed by \(Q\), each taken in their natural (increasing) order.
- PermFinset.submatrixDet A P Q = if h : P.card = Q.card then (A.submatrix ⇑(P.orderEmbOfFin ⋯) ⇑(Q.orderEmbOfFin ⋯)).det else 0
When \(|P| = |Q|\), the submatrix determinant equals the actual determinant of the submatrix \(A[P, Q]\).
Immediate from the definitions, using the hypothesis \(|P| = |Q|\).
When \(|P| \neq |Q|\), the submatrix determinant is \(0\).
Immediate from the definitions, using the hypothesis \(|P| \neq |Q|\).
The submatrix determinant of the empty row and column sets is \(1\): \(\mathrm{submatrixDet}(A, \emptyset , \emptyset ) = 1\).
Both sets are empty (so \(|P| = |Q| = 0\)), and the determinant of the \(0 \times 0\) submatrix is \(1\).
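The three conventions above can be mirrored in an informal Python sketch of `submatrixDet` (ad hoc helpers, 0-based indices; not the Lean definition itself):

```python
from itertools import permutations
from math import prod

def sign(p):
    # sign of a permutation: (-1)^(number of inversions)
    return (-1) ** sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def det(A):
    # Leibniz expansion of Definition 5.3
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def submatrix_det(A, P, Q):
    # informal mirror of PermFinset.submatrixDet: 0 when |P| != |Q|, otherwise
    # the determinant of the rows-P, columns-Q submatrix, taken in increasing order
    if len(P) != len(Q):
        return 0
    rows, cols = sorted(P), sorted(Q)
    return det([[A[i][j] for j in cols] for i in rows])

A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 1, 2, 6], [3, 5, 8, 4]]
assert submatrix_det(A, set(), set()) == 1       # empty row and column sets: det = 1
assert submatrix_det(A, {0, 1}, {2}) == 0        # mismatched cardinalities: 0
assert submatrix_det(A, {0, 2}, {1, 3}) == 2 * 6 - 4 * 1   # det of [[2, 4], [1, 6]]
```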