5.3 Determinants: Factor Hunting, Laplace Expansion, and Desnanot–Jacobi
5.3.1 Factor hunting and the Vandermonde determinant
Let \(n \in \mathbb {N}\). Let \(a_1, a_2, \ldots , a_n\) be \(n\) elements of \(K\). Then:
(a) We have
(b) We have
(c) We have
(d) We have
(a) Follows from Lemma 5.76 by substituting \(a_1, a_2, \ldots , a_n\) for the indeterminates \(x_1, x_2, \ldots , x_n\). More precisely, the equality
follows from the polynomial identity of Lemma 5.76 by applying the evaluation homomorphism \(x_i \mapsto a_i\).
(b) The matrix \(\left( a_j^{n-i} \right)_{1 \leq i \leq n,\; 1 \leq j \leq n}\) is the transpose of the matrix \(\left( a_i^{n-j} \right)_{1 \leq i \leq n,\; 1 \leq j \leq n}\). Thus part (b) follows from part (a) using \(\det (A^T) = \det A\).
(c) The matrix \(\left( a_i^{j-1} \right)\) is obtained from \(\left( a_i^{n-j} \right)\) by reversing the column order. Let \(\tau \in S_n\) send each \(j\) to \(n+1-j\). Then
Since \((-1)^\tau = (-1)^{\binom {n}{2}}\) and the product has exactly \(\binom {n}{2}\) factors, distributing one minus sign to each factor \((a_i - a_j)\) turns it into \((a_j - a_i)\); the result becomes \(\prod _{1 \leq j {\lt} i \leq n} (a_i - a_j)\).
(d) Follows from part (c) using \(\det (A^T) = \det A\).
Let \(n \in \mathbb {N}\). Consider the polynomial ring \(\mathbb {Z}[x_1, x_2, \ldots , x_n]\) in \(n\) indeterminates \(x_1, x_2, \ldots , x_n\) with integer coefficients. In this ring, we have
Set
We must prove that \(f = g\).
By the definition of the determinant,
which is a homogeneous polynomial of degree \(\frac{n(n-1)}{2}\). The monomial \(x_1^{n-1} x_2^{n-2} \cdots x_n^0\) appears with coefficient \(1\) (from the identity permutation \(\sigma = \mathrm{id}\)).
For any \(u {\lt} v\) in \([n]\), setting \(x_u = x_v\) makes two rows of the matrix identical, so \(f\) vanishes. By the root factoring-off theorem (viewing \(f\) as a polynomial in \(x_u\) over \(\mathbb {Z}[x_1, \ldots , \widehat{x_u}, \ldots , x_n]\)), we conclude that \(x_u - x_v\) divides \(f\). Since the ring \(\mathbb {Z}[x_1, \ldots , x_n]\) is a UFD and the linear factors \(x_u - x_v\) are irreducible and mutually non-associate, the product \(g = \prod _{1 \leq i {\lt} j \leq n} (x_i - x_j)\) divides \(f\). Thus \(f = gq\) for some \(q \in \mathbb {Z}[x_1, \ldots , x_n]\).
Since \(\deg f \leq \frac{n(n-1)}{2} = \deg g\), we get \(\deg q \leq 0\), so \(q \in \mathbb {Z}\). Comparing the coefficient of \(x_1^{n-1} x_2^{n-2} \cdots x_n^0\) on both sides gives \(1 = q \cdot 1\), hence \(q = 1\) and \(f = g\).
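As an informal sanity check (not part of the formalization), the resulting identity \(\det \left( a_i^{n-j} \right) = \prod _{1 \leq i {\lt} j \leq n} (a_i - a_j)\) can be verified numerically. The following Python sketch, with sample values and helper names of my own and 0-based indices, does this for one concrete choice of integers.

```python
from itertools import combinations

def det(M):
    # Laplace expansion along the first row; exact on integer entries
    if not M:
        return 1
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

a = [2, 5, -1, 7]   # sample stand-ins for a_1, ..., a_n
n = len(a)
# entry in row i, column j is a_i^{n-j} (written 0-based as a[i]**(n-1-j))
V = [[a[i] ** (n - 1 - j) for j in range(n)] for i in range(n)]
lhs = det(V)
rhs = 1
for i, j in combinations(range(n), 2):  # all pairs i < j
    rhs *= a[i] - a[j]
assert lhs == rhs   # det(a_i^{n-j}) = prod_{i<j} (a_i - a_j)
```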
The formalization relates our matrix convention to Mathlib’s via column reversal, and uses the fact that swapping the order of subtraction in each factor introduces a sign that cancels with the sign from the column permutation.
Let \(n \in \mathbb {N}\). Let \(x_1, x_2, \ldots , x_n \in K\) and \(y_1, y_2, \ldots , y_n \in K\). Then,
Let \(C = \left( (x_i + y_j)^{n-1} \right)_{1 \leq i \leq n,\; 1 \leq j \leq n}\). By the binomial formula, for any \(i, j \in [n]\),
Define two \(n \times n\)-matrices
Then \(C = PQ\), so \(\det C = \det P \cdot \det Q\).
From \(Q = \left( y_j^{n-i} \right)\), Theorem 5.75 (b) gives \(\det Q = \prod _{1 \leq i {\lt} j \leq n} (y_i - y_j)\).
From \(P = \left( \binom {n-1}{j-1} x_i^{j-1} \right)\), factoring out the binomial coefficients from each column and applying Theorem 5.75 (c) gives
Hence,
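The factorization \(C = PQ\) at the heart of this proof can be checked concretely. The sketch below (sample values, helper names, and 0-based indexing are my own) verifies both the matrix factorization coming from the binomial formula and the resulting determinant product.

```python
from math import comb

def det(M):
    # Laplace expansion along the first row; exact on integer entries
    if not M:
        return 1
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

x = [1, 4, -2]
y = [3, 0, 5]
n = len(x)
# C has entries (x_i + y_j)^{n-1}; the binomial formula factors it as C = P Q
C = [[(x[i] + y[j]) ** (n - 1) for j in range(n)] for i in range(n)]
P = [[comb(n - 1, k) * x[i] ** k for k in range(n)] for i in range(n)]
Q = [[y[j] ** (n - 1 - k) for j in range(n)] for k in range(n)]
PQ = [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]
assert C == PQ
assert det(C) == det(P) * det(Q)
```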
5.3.2 Laplace expansion
Let \(n \in \mathbb {N}\). Let \(A\) be an \(n \times n\)-matrix. Let \(i, j \in [n]\). Then, we set
This is the \((n-1) \times (n-1)\)-matrix obtained from \(A\) by removing its \(i\)-th row and its \(j\)-th column.
Let \(n \in \mathbb {N}\). Let \(A \in K^{n \times n}\) be an \(n \times n\)-matrix.
(a) For every \(p \in [n]\), we have
(b) For every \(q \in [n]\), we have
Let \(n \in \mathbb {N}\). Let \(A \in K^{n \times n}\) be an \(n \times n\)-matrix. Let \(r \in [n]\).
(a) For every \(p \in [n]\) satisfying \(p \neq r\), we have
(b) For every \(q \in [n]\) satisfying \(q \neq r\), we have
(a) Let \(p \in [n]\) satisfy \(p \neq r\). Let \(C\) be the matrix \(A\) with its \(p\)-th row replaced by its \(r\)-th row. Then \(C\) has two equal rows, so \(\det C = 0\). On the other hand, expanding \(\det C\) along the \(p\)-th row (using Theorem 5.79 (a) applied to \(C\)) yields
Comparing gives the claim. Part (b) follows similarly.
The formalization uses the adjugate: \((-1)^{p+q} \det (A_{\sim p, \sim q}) = (\operatorname {adj} A)_{q,p}\), so the sum becomes \(\sum _q A_{r,q} (\operatorname {adj} A)_{q,p} = (A \cdot \operatorname {adj} A)_{r,p} = (\det A) \cdot I_{r,p} = 0\) when \(r \neq p\).
Let \(n \in \mathbb {N}\). Let \(A \in K^{n \times n}\) be an \(n \times n\)-matrix. We define a new \(n \times n\)-matrix \(\operatorname {adj} A \in K^{n \times n}\) by
This matrix \(\operatorname {adj} A\) is called the adjugate (or classical adjoint) of the matrix \(A\). Note the index swap: the \((i,j)\) entry involves removing row \(j\) and column \(i\).
The formalization shows that this definition equals Mathlib’s Matrix.adjugate, which is defined via \((\operatorname {adj} A)_{i,j} = \det (A')\) where \(A'\) is the matrix \(A\) with row \(j\) replaced by the \(i\)-th standard basis vector.
- AlgebraicCombinatorics.Determinants.adjugateMat A = Matrix.of fun (i j : Fin (m + 1)) => (-1) ^ (↑i + ↑j) * (AlgebraicCombinatorics.Determinants.submatrixRemove A j i).det
Let \(n \in \mathbb {N}\). Let \(A \in K^{n \times n}\) be an \(n \times n\)-matrix. Let \(I_n\) denote the \(n \times n\) identity matrix. Then,
See [Grinbe15, Theorem 6.100], [Ford21, Lemma 4.7.9], [Loehr11, Theorem 9.50], or [Strick13, Proposition B.28].
To show \(A \cdot (\operatorname {adj} A) = (\det A) \cdot I_n\), it suffices to check that the \((i,j)\)-th entry of \(A \cdot (\operatorname {adj} A)\) equals \(\det A\) when \(i = j\) and equals \(0\) otherwise.
Case \(i = j\): The \((i,i)\) entry is \(\sum _k A_{i,k} (\operatorname {adj} A)_{k,i} = \sum _k (-1)^{i+k} A_{i,k} \det (A_{\sim i, \sim k}) = \det A\) by Laplace expansion along row \(i\) (Theorem 5.79 (a)).
Case \(i \neq j\): The \((i,j)\) entry is \(\sum _k A_{i,k} (\operatorname {adj} A)_{k,j} = \sum _k (-1)^{j+k} A_{i,k} \det (A_{\sim j, \sim k}) = 0\) by Proposition 5.80 (a) (using row \(i\) entries with row \(j\) cofactors).
Similarly, \((\operatorname {adj} A) \cdot A = (\det A) \cdot I_n\) follows using column versions.
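As a quick numeric illustration (my own sketch, not the formalized proof), the cofactor definition of the adjugate — including the row/column index swap noted in the definition — can be implemented directly and checked against \(A \cdot (\operatorname {adj} A) = (\det A) \cdot I_n\):

```python
def det(M):
    # Laplace expansion along the first row; exact on integer entries
    if not M:
        return 1
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def strike(M, i, j):
    # remove row i and column j
    return [r[:j] + r[j + 1:] for k, r in enumerate(M) if k != i]

def adjugate(M):
    # (adj M)[i][j] = (-1)^{i+j} det(M with row j and column i removed);
    # note the index swap in which row/column gets removed
    n = len(M)
    return [[(-1) ** (i + j) * det(strike(M, j, i)) for j in range(n)]
            for i in range(n)]

A = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]
n = len(A)
d = det(A)
B = adjugate(A)
prod_ = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
# A * adj A = det(A) * I
assert prod_ == [[d if i == j else 0 for j in range(n)] for i in range(n)]
```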
5.3.3 Laplace expansion along multiple rows
For any subset \(I\) of \([n]\), we write \(\widetilde{I} = [n] \setminus I\) for its complement, and \(\operatorname {sum} S = \sum _{s \in S} s\) for any finite set \(S\) of integers.
Let \(n \in \mathbb {N}\). Let \(A \in K^{n \times n}\) be an \(n \times n\)-matrix.
(a) For every subset \(P\) of \([n]\), we have
(b) For every subset \(Q\) of \([n]\), we have
See [Grinbe15, Theorem 6.156].
The proof partitions the symmetric group based on how permutations map \(P\) to subsets \(Q\) of the same size. For each such \(Q\), the permutations \(\sigma \) with \(\sigma (P) = Q\) decompose as a bijection \(\tau : P \to Q\) (contributing to \(\det (\operatorname {sub}_P^Q A)\)) and a bijection \(\rho : \widetilde{P} \to \widetilde{Q}\) (contributing to \(\det (\operatorname {sub}_{\widetilde{P}}^{\widetilde{Q}} A)\)). The sign decomposes as \((-1)^\sigma = (-1)^{\operatorname {sum} P + \operatorname {sum} Q} (-1)^\tau (-1)^\rho \).
Part (b) follows from part (a) applied to \(A^T\) using \(\det (A^T) = \det A\).
5.3.4 Desnanot–Jacobi and Dodgson condensation
Let \(n \in \mathbb {N}\) be such that \(n \geq 2\). Let \(A \in K^{n \times n}\) be an \(n \times n\)-matrix.
Let \(A'\) be the \((n-2) \times (n-2)\)-matrix
(This is what remains of \(A\) when we remove the first row, the last row, the first column and the last column.) Then,
See [Grinbe15, Proposition 6.122], [Bresso99, §3.5, proof of Theorem 3.12], or [Zeilbe98].
The proof uses the adjugate matrix.
The right-hand side equals \((\operatorname {adj} A)_{0,0} \cdot (\operatorname {adj} A)_{n-1,n-1} - (\operatorname {adj} A)_{0,n-1} \cdot (\operatorname {adj} A)_{n-1,0}\), which is the \(2 \times 2\) determinant of the corner submatrix of \(\operatorname {adj} A\).
By Jacobi’s complementary minor theorem for the \(2 \times 2\) case (a special case proved independently), this \(2 \times 2\) determinant of the adjugate equals \(\det A \cdot \det (A')\), giving the left-hand side.
The proof of the special case uses the polynomial ring approach: reduce to the generic matrix over the multivariate polynomial ring, then work in its field of fractions where the matrix is invertible.
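The condensation identity itself is easy to spot-check numerically. This sketch (sample \(4 \times 4\) matrix and helper names my own, 0-based indices) compares \(\det A \cdot \det A'\) with the product-minus-product of the four corner minors:

```python
def det(M):
    # Laplace expansion along the first row; exact on integer entries
    if not M:
        return 1
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def strike(M, i, j):
    # remove row i and column j
    return [r[:j] + r[j + 1:] for k, r in enumerate(M) if k != i]

A = [[2, 3, 1, 5], [4, 1, 2, 7], [1, 6, 3, 2], [5, 2, 4, 1]]
n = len(A)
inner = [row[1:-1] for row in A[1:-1]]  # A': drop first/last row and column
lhs = det(A) * det(inner)
rhs = (det(strike(A, 0, 0)) * det(strike(A, n - 1, n - 1))
       - det(strike(A, 0, n - 1)) * det(strike(A, n - 1, 0)))
assert lhs == rhs   # Desnanot-Jacobi / Dodgson condensation
```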
Let \(n \in \mathbb {N}\). Let \(x_1, x_2, \ldots , x_n\) be \(n\) elements of \(K\). Let \(y_1, y_2, \ldots , y_n\) be \(n\) elements of \(K\). Assume that \(x_i + y_j\) is invertible in \(K\) for each \((i, j) \in [n]^2\). Then,
The proof uses the polynomial version of the identity (factor hunting).
Step 1: Clear denominators. Multiply both sides by \(\prod _{i,j}(x_i + y_j)\) to obtain the “cleared” polynomial identity:
Step 2: Base cases. The cases \(n = 0, 1, 2, 3\) are verified by direct computation.
Step 3: Factor hunting (\(n \geq 4\)): The determinant on the left is a polynomial in \(x_1, \ldots , x_n, y_1, \ldots , y_n\). Setting \(x_i = x_j\) (for \(i \neq j\)) makes two rows identical, so \((x_i - x_j)\) divides the determinant. Similarly, \((y_i - y_j)\) divides the determinant for \(i \neq j\). Since these linear factors are pairwise coprime irreducibles in the UFD \(\mathbb {Z}[x_1, \ldots , x_n, y_1, \ldots , y_n]\), their product divides the determinant.
Step 4: Degree comparison. Both the determinant and the product have total degree \(n(n-1)\), so the quotient is a constant. Evaluating at a specific point shows the constant is \(1\).
Step 5: Recover the Cauchy formula. Dividing the cleared identity by \(\prod _{i,j}(x_i + y_j)\) yields the Cauchy determinant formula.
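The end result of these steps can be exercised on concrete rational numbers. The following sketch (sample values my own) checks the Cauchy determinant formula with exact `Fraction` arithmetic:

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    # Laplace expansion along the first row; exact on Fraction entries
    if not M:
        return 1
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

x = [1, 2, 4]
y = [1, 3, 9]   # chosen so that every x_i + y_j is nonzero
n = len(x)
C = [[Fraction(1, x[i] + y[j]) for j in range(n)] for i in range(n)]
num = 1
for i, j in combinations(range(n), 2):  # all pairs i < j
    num *= (x[i] - x[j]) * (y[i] - y[j])
den = 1
for xi in x:
    for yj in y:
        den *= xi + yj
# det(1 / (x_i + y_j)) = prod_{i<j} (x_i - x_j)(y_i - y_j) / prod_{i,j} (x_i + y_j)
assert det(C) == Fraction(num, den)
```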
Let \(n \in \mathbb {N}\). Let \(x_1, \ldots , x_n \in R\) and \(y_1, \ldots , y_n \in R\) be elements of a commutative ring \(R\). Then
This is equivalent to the Cauchy determinant formula after dividing both sides by \(\prod _{(i,j) \in [n]^2} (x_i + y_j)\).
The proof uses factor hunting in the polynomial ring. Both sides are polynomials of degree \(n(n-1)\) in the \(x_i\)’s and \(y_j\)’s. The left-hand side vanishes when \(x_u = x_v\) (two rows become identical) and when \(y_u = y_v\) (two columns become identical). Thus the product \(\prod _{i {\lt} j} (x_i - x_j)(y_i - y_j)\) divides the left-hand side. Degree comparison shows the quotient is a constant, and coefficient comparison shows the constant is \(1\).
5.3.5 Generalized Desnanot–Jacobi
Let \(n \in \mathbb {N}\) be such that \(n \geq 2\). Let \(p, q, u, v\) be four elements of \([n]\) such that \(p {\lt} q\) and \(u {\lt} v\). Let \(A\) be an \(n \times n\)-matrix. Then,
See [Grinbe15, Theorem 6.126].
The proof combines two identities:
Adjugate identity (Lemma 5.94, generalized): The \(2 \times 2\) determinant \((\operatorname {adj} A)_{u,p} (\operatorname {adj} A)_{v,q} - (\operatorname {adj} A)_{u,q} (\operatorname {adj} A)_{v,p}\) equals \((-1)^{p+u+q+v}\) times the product-minus-product expression.
Jacobi \(2 \times 2\) identity (Theorem 5.89 for \(|P|=|Q|=2\)): The same \(2 \times 2\) determinant of the adjugate equals \((-1)^{p+u+q+v} \det A \cdot \det (\operatorname {sub}_{[n] \setminus \{ p,q\} }^{[n] \setminus \{ u,v\} } A)\).
Equating the two expressions and canceling \((-1)^{p+u+q+v}\) (which squares to \(1\)) yields the result.
Theorem 5.85 is the particular case \(p = 1\), \(q = n\), \(u = 1\), \(v = n\).
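The generalized identity \(\det A \cdot \det (\operatorname {sub}_{[n] \setminus \{ p,q\} }^{[n] \setminus \{ u,v\} } A)\) = product-minus-product of the four single-strike minors can also be spot-checked with non-corner indices. Matrix, indices, and helper names in this sketch are my own (0-based):

```python
def det(M):
    # Laplace expansion along the first row; exact on integer entries
    if not M:
        return 1
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def strike(M, rows, cols):
    # remove the given sets of rows and columns
    return [[v for j, v in enumerate(r) if j not in cols]
            for i, r in enumerate(M) if i not in rows]

A = [[2, 3, 1, 5], [4, 1, 2, 7], [1, 6, 3, 2], [5, 2, 4, 1]]
p, q = 0, 2   # rows, p < q
u, v = 1, 3   # columns, u < v
lhs = det(A) * det(strike(A, {p, q}, {u, v}))
rhs = (det(strike(A, {p}, {u})) * det(strike(A, {q}, {v}))
       - det(strike(A, {p}, {v})) * det(strike(A, {q}, {u})))
assert lhs == rhs   # generalized Desnanot-Jacobi
```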
5.3.6 Jacobi’s complementary minor theorem
Let \(n \in \mathbb {N}\). For any subset \(I\) of \([n]\), let \(\widetilde{I}\) denote the complement \([n] \setminus I\). Set \(\operatorname {sum} S = \sum _{s \in S} s\) for any finite set \(S\) of integers.
Let \(A \in K^{n \times n}\) be an \(n \times n\)-matrix. Let \(P\) and \(Q\) be two subsets of \([n]\) such that \(|P| = |Q| \geq 1\). Then,
The proof uses the polynomial ring approach (following Prasolov’s Problems and Theorems in Linear Algebra).
Step 1: Reduce to generic matrix. Let \(A'\) be the generic \(n \times n\) matrix with polynomial entries in \(\mathbb {Z}[(x_{i,j})_{1 \leq i,j \leq n}]\). The identity for arbitrary \(A\) over any commutative ring \(R\) follows by applying the evaluation homomorphism \(x_{i,j} \mapsto A_{i,j}\).
Step 2: Work in the field of fractions. Since \(\det (A')\) is a nonzero polynomial, we can embed \(A'\) into the field of fractions \(K\) of the polynomial ring, where \(A'\) becomes invertible.
Step 3: Use \(\operatorname {adj}(A) = \det (A) \cdot A^{-1}\). For invertible \(A\) with \(|P| = |Q| = k\):
Step 4: Complementary minor theorem for inverse. The classical result for invertible \(A\) states:
This is proved using block matrix decomposition.
Step 5: Combine. \(\det (\operatorname {sub}_P^Q(\operatorname {adj} A)) = (\det A)^k \cdot (-1)^{\operatorname {sum} P + \operatorname {sum} Q} \cdot \det (\operatorname {sub}_{\widetilde{Q}}^{\widetilde{P}} A) / \det A = (-1)^{\operatorname {sum} P + \operatorname {sum} Q} (\det A)^{k-1} \det (\operatorname {sub}_{\widetilde{Q}}^{\widetilde{P}} A)\).
Step 6: Pull back. Since the identity holds in \(K\) and the canonical map from the polynomial ring to \(K\) is injective, the identity holds in the polynomial ring, and hence for all matrices \(A\) over any commutative ring.
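Because the pulled-back statement is a polynomial identity, it can be spot-checked over the integers at any test point, with no invertibility needed. The sketch below uses my own sample matrix, index sets, and row/column convention (rows \(P\), columns \(Q\) on the adjugate side), with 0-based indices; the parity of \(\operatorname {sum} P + \operatorname {sum} Q\) is unaffected by the index shift since \(|P| = |Q|\).

```python
def det(M):
    # Laplace expansion along the first row; exact on integer entries
    if not M:
        return 1
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def strike(M, i, j):
    return [r[:j] + r[j + 1:] for k, r in enumerate(M) if k != i]

def adjugate(M):
    # (adj M)[i][j] = (-1)^{i+j} det(M with row j and column i removed)
    n = len(M)
    return [[(-1) ** (i + j) * det(strike(M, j, i)) for j in range(n)]
            for i in range(n)]

def sub(M, rows, cols):
    return [[M[i][j] for j in cols] for i in rows]

A = [[2, 3, 1, 5], [4, 1, 2, 7], [1, 6, 3, 2], [5, 2, 4, 1]]
n = len(A)
P, Q = [0, 2], [1, 3]
k = len(P)
comp = lambda S: [i for i in range(n) if i not in S]
lhs = det(sub(adjugate(A), P, Q))
rhs = ((-1) ** (sum(P) + sum(Q)) * det(A) ** (k - 1)
       * det(sub(A, comp(Q), comp(P))))
assert lhs == rhs   # Jacobi's complementary minor theorem
```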
Theorem 5.88 is the particular case of Theorem 5.89 for \(P = \{ u, v\} \) and \(Q = \{ p, q\} \). Theorem 5.85 is, of course, the particular case of Theorem 5.88 for \(p = 1\), \(q = n\), \(u = 1\), \(v = n\).
5.3.7 Helpers for the Vandermonde determinant proof
Let \(v_1, \ldots , v_m\) be elements of a commutative ring \(S\). Then
Each factor \((v_i - v_j) = -(v_j - v_i)\), and there are \(\binom {m}{2} = m(m-1)/2\) such factors.
The sign of the column reversal permutation \(\tau \) on \(\{ 1, 2, \ldots , m\} \) (sending \(j\) to \(m + 1 - j\)) satisfies
For \(i {\lt} j\), we have \(\tau (i) {\gt} \tau (j)\), so every pair is an inversion. The total number of inversions is \(\binom {m}{2}\).
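A one-screen sketch (0-based indexing, my own variable names) confirming that the reversal inverts every pair:

```python
from math import comb

m = 6
tau = [m - 1 - j for j in range(m)]   # reversal permutation, 0-based
inversions = sum(1 for i in range(m) for j in range(i + 1, m)
                 if tau[i] > tau[j])
assert inversions == comb(m, 2)       # every pair i < j is an inversion
sign = (-1) ** inversions             # = (-1)^{m(m-1)/2}
```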
Let \(A\) be an \(m \times m\) matrix and \(\sigma \) a permutation of \([m]\). Then \(\det (A \circ \sigma ) = \operatorname {sign}(\sigma ) \cdot \det A\), where \(A \circ \sigma \) denotes the column permutation \(A_{i,\sigma (j)}\).
Transpose to a row permutation, apply the row permutation determinant formula, then transpose back.
The determinant of the polynomial Vandermonde matrix (with entries \(x_i^{m-1-j}\)) equals
where \(\tau \) is the column reversal permutation (sending \(j\) to \(m - 1 - j\)). This relates the textbook matrix convention to Mathlib’s via column reversal.
The polynomial Vandermonde matrix is Mathlib’s Vandermonde matrix with columns permuted by the reversal permutation, so the determinant picks up the sign factor.
5.3.8 Helpers for the Desnanot–Jacobi proof
Let \(A \in K^{n \times n}\) with \(n \geq 2\). Then
Direct computation using \((\operatorname {adj} A)_{i,j} = (-1)^{i+j} \det (A_{\sim j, \sim i})\).
The Desnanot–Jacobi identity holds for \(2 \times 2\) matrices by direct computation.
The identity \(\det A \cdot \det (A') = \det (A_{\sim 1,\sim 1}) \cdot \det (A_{\sim 2,\sim 2}) - \det (A_{\sim 1,\sim 2}) \cdot \det (A_{\sim 2,\sim 1})\) reduces to \(\det A \cdot 1 = A_{2,2} \cdot A_{1,1} - A_{1,2} \cdot A_{2,1}\) for a \(2 \times 2\) matrix, which is the definition of \(\det A\).
For a \(2 \times 2\) matrix \(A\), the \(2 \times 2\) determinant of the corner entries of \(\operatorname {adj} A\) satisfies
This is the base case of the Jacobi complementary minor theorem (since \(\det (A') = 1\) for the \(0 \times 0\) inner submatrix).
Direct computation using the explicit \(2 \times 2\) adjugate formula.
For a \(3 \times 3\) matrix \(A\), the \(2 \times 2\) determinant of the corner entries of \(\operatorname {adj} A\) at positions \(\{ 0, 2\} \times \{ 0, 2\} \) satisfies
This verifies the Jacobi complementary minor theorem for \(3 \times 3\) matrices with \(P = Q = \{ 0, 2\} \).
Direct expansion of the adjugate entries and the \(3 \times 3\) determinant, followed by algebraic simplification.
5.3.9 Helpers for the generalized Desnanot–Jacobi proof
Let \(n \in \mathbb {N}\) with \(n \geq 2\). Let \(A\) be an \(n \times n\)-matrix. For \(p {\lt} q\) in \([n]\) and \(u {\lt} v\) in \([n]\), define
as the \((n-2) \times (n-2)\)-submatrix of \(A\) obtained by removing rows \(p, q\) and columns \(u, v\). The row and column indexing uses the order-preserving injection \([n-2] \hookrightarrow [n]\) that skips the two removed indices in order.
Let \(A\) be an \((m+2) \times (m+2)\) matrix, and let \(p {\lt} q\) and \(u {\lt} v\) be indices. Then
where the right-hand side uses the complement-based submatrix construction. This connects the explicit order-preserving injection used in the submatrix construction to the general complement-based definition, enabling the use of Jacobi’s complementary minor theorem.
The proof shows that the order-preserving injection that skips \(p\) and \(q\), being strictly monotone and having range equal to \(\{ p,q\} ^c\), coincides with the canonical order-preserving enumeration of the complement set, so the two submatrix constructions yield the same matrix.
5.3.10 Helpers for the adjugate and inverse relationship
Let \(K\) be a field and \(A \in K^{n \times n}\) with \(\det A \neq 0\). Then for all \(i, j\),
Follows from \(A^{-1} = (\det A)^{-1} \cdot \operatorname {adj} A\).
Let \(A\) be an \(n \times n\) matrix with \(\det A\) a unit. Then for any index functions \(f\) and \(g\),
That is, the adjugate submatrix equals \(\det A\) times the corresponding inverse submatrix.
Entry-wise, \((\operatorname {adj} A)_{i,j} = \det A \cdot A^{-1}_{i,j}\) for invertible \(A\).
Let \(A\) be an \(m \times m\) matrix with \(\det A\) a unit. Let \(P, Q \subseteq [m]\) with \(|P| = |Q| = k\). Then
This reduces Jacobi’s complementary minor theorem to the complementary minor theorem for inverse matrices.
By Lemma 5.101, the adjugate submatrix is \(\det A\) times the inverse submatrix. The determinant of a scalar multiple is \((\det A)^k\) times the original determinant.
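The scaling argument can be checked directly with exact rationals; the sample matrix, index sets, and helper names below are my own (0-based):

```python
from fractions import Fraction

def det(M):
    # Laplace expansion along the first row; exact on int/Fraction entries
    if not M:
        return 1
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def strike(M, i, j):
    return [r[:j] + r[j + 1:] for k, r in enumerate(M) if k != i]

def adjugate(M):
    n = len(M)
    return [[(-1) ** (i + j) * det(strike(M, j, i)) for j in range(n)]
            for i in range(n)]

A = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]
n = len(A)
d = det(A)                 # nonzero, hence a unit in Q
adjA = adjugate(A)
Ainv = [[Fraction(adjA[i][j], d) for j in range(n)] for i in range(n)]
P, Q = [0, 2], [1, 2]
k = len(P)
sub = lambda M, rows, cols: [[M[i][j] for j in cols] for i in rows]
# adj A = det(A) * A^{-1} entrywise, so a k x k minor scales by (det A)^k
assert det(sub(adjA, P, Q)) == d ** k * det(sub(Ainv, P, Q))
```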
5.3.11 Helpers for block matrix decomposition
For a block matrix \(M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}\) with \(D\) invertible and the Schur complement \(A - BD^{-1}C\) invertible, we have
By the Schur complement formula, \(\det (M) = \det (D) \cdot \det (A - BD^{-1}C)\). The top-left block of \(M^{-1}\) is \((A - BD^{-1}C)^{-1}\), whose determinant is \(\det (A - BD^{-1}C)^{-1}\). Multiplying gives \(\det (D)\).
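Both facts used here — \(\det M = \det D \cdot \det (A - BD^{-1}C)\) and that the top-left block of \(M^{-1}\) is the inverse of the Schur complement — can be checked with exact rationals. The sample \(4 \times 4\) matrix and helpers below are my own:

```python
from fractions import Fraction

def det(M):
    # Laplace expansion along the first row; exact on int/Fraction entries
    if not M:
        return 1
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def strike(M, i, j):
    return [r[:j] + r[j + 1:] for k, r in enumerate(M) if k != i]

def inv(M):
    # inverse via adjugate / determinant (exact over Fractions)
    n, d = len(M), det(M)
    return [[(-1) ** (i + j) * Fraction(det(strike(M, j, i))) / d
             for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

M = [[2, 1, 3, 0], [1, 3, 1, 2], [0, 2, 4, 1], [3, 1, 0, 2]]
A = [r[:2] for r in M[:2]]; B = [r[2:] for r in M[:2]]
C = [r[:2] for r in M[2:]]; D = [r[2:] for r in M[2:]]
BDC = matmul(B, matmul(inv(D), C))
S = [[A[i][j] - BDC[i][j] for j in range(2)] for i in range(2)]  # Schur complement
assert det(M) == det(D) * det(S)
TL = [row[:2] for row in inv(M)[:2]]   # top-left block of M^{-1}
assert TL == inv(S)
```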
Let \(M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}\) be an invertible block matrix with \(D\) invertible and Schur complement \(A - BD^{-1}C\) invertible. Let \(k = |A|\) (the size of the top-left block) with \(k \geq 1\). Then
Since \(\operatorname {adj}(M) = \det (M) \cdot M^{-1}\) for invertible \(M\), the top-left block of \(\operatorname {adj}(M)\) is \(\det (M) \cdot (M^{-1})_{\text{top-left}}\). Taking the \(k \times k\) determinant yields \((\det M)^k \cdot \det ((M^{-1})_{\text{top-left}})\). By Lemma 5.103, \(\det (M) \cdot \det ((M^{-1})_{\text{top-left}}) = \det D\), so \(\det ((M^{-1})_{\text{top-left}}) = \det D / \det M\), giving the result.
Given a subset \(P\) of \([m]\), the sorting equivalence is a bijection
that sends the \(i\)-th element of the left component to the \(i\)-th smallest element of \(P\) and the \(j\)-th element of the right component to the \(j\)-th smallest element of \(P^c\). This equivalence is used to decompose matrices into block form with respect to the partition \([m] = P \sqcup P^c\).
Let \(K\) be a field, \(A \in K^{m \times m}\) with \(\det A \neq 0\), and \(\sigma , \tau \) bijections from \([n]\) to \([m]\). Then
where subscripts denote the submatrix obtained by reindexing rows and columns via the given bijections.
Verify directly that \(A_{\sigma ,\tau } \cdot (A^{-1})_{\tau ,\sigma } = I\) using \((A \cdot A^{-1})_{\sigma (i), \sigma (j)} = \delta _{ij}\).
Let \(A \in K^{m \times m}\) and \(f, g : [n] \xrightarrow {\sim } [m]\) be bijections. Then
Rewrite the submatrix as a reindexed matrix and apply the standard determinant reindexing formula.
5.3.12 Helpers for the complementary minor theorem proof
Let \(K\) be a field. Let \(A \in K^{m \times m}\) be an invertible matrix. Let \(P, Q \subseteq [m]\) with \(|P| = |Q|\). Then
The proof uses block matrix decomposition. Define sorting equivalences \(\sigma \) and \(\tau \) for \(Q\) and \(P\) respectively (Definition 5.105), which reorder indices so that \(P\)-elements come first. The permuted matrix \(M = A_{\sigma , \tau }\) decomposes into blocks: \(M_{\text{top-left}} = \operatorname {sub}_Q^P(A)\) and \(M_{\text{bottom-right}} = \operatorname {sub}_{Q^c}^{P^c}(A)\). By Lemma 5.106, \((M^{-1})_{\text{top-left}} = \operatorname {sub}_P^Q(A^{-1})\). By Lemma 5.103, \(\det (M) \cdot \det ((M^{-1})_{\text{top-left}}) = \det (M_{\text{bottom-right}})\). The sign factor \((-1)^{\operatorname {sum} P + \operatorname {sum} Q}\) arises from the determinant of the sorting permutation (Lemma 5.107).
Let \(K\) be a field. Let \(A \in K^{m \times m}\) with \(\det A \neq 0\). Let \(P, Q \subseteq [m]\) with \(|P| = |Q| \geq 1\). Then
By Lemma 5.102,
By Lemma 5.108,
so \(\det (\operatorname {sub}_P^Q(A^{-1})) = (-1)^{\operatorname {sum} P + \operatorname {sum} Q} \det (\operatorname {sub}_{\widetilde{Q}}^{\widetilde{P}} A) / \det A\). Since \(|P| = |Q|\), the exponent becomes \(|Q| - 1\).
5.3.13 Helpers for the Cauchy determinant proof (factor hunting)
The variable substitution \(\varphi _{i \to j}\) is the algebra homomorphism \(R[x_1, \ldots , x_n] \to R[x_1, \ldots , x_n]\) that sends \(x_i \mapsto x_j\) and fixes all other variables.
- AlgebraicCombinatorics.Determinants.substXiToXj i j = MvPolynomial.aeval fun (k : σ) => if k = i then MvPolynomial.X j else MvPolynomial.X k
Let \(p \in R[x_1, \ldots , x_n]\) be a multivariate polynomial and \(i \neq j\). Then \((x_i - x_j) \mid p - p|_{x_i = x_j}\), where \(p|_{x_i = x_j}\) denotes the substitution of \(x_j\) for \(x_i\).
By induction on the structure of \(p\) (using the polynomial recursor on constants, sums, and products by a variable). The base case (constants) is trivial. The additive case follows from divisibility of sums. The multiplicative case \(p \cdot x_k\) splits: if \(k = i\), factor as \((p - p|_{x_i = x_j}) \cdot x_i + p|_{x_i = x_j} \cdot (x_i - x_j)\); otherwise, factor as \((p - p|_{x_i = x_j}) \cdot x_k\).
Let \(R\) be an integral domain, and let \(P \in R[x_0, x_1, \ldots , x_{m+1}]\). If \(P\) vanishes when \(x_0\) is replaced by \(x_1\) (i.e., \(P|_{x_0 = x_1} = 0\)), then \((x_0 - x_1) \mid P\).
View \(P\) as a univariate polynomial in \(x_0\) over \(R[x_1, \ldots , x_{m+1}]\) (using the canonical isomorphism between \(R[x_0, x_1, \ldots , x_{m+1}]\) and \(R[x_1, \ldots , x_{m+1}][x_0]\)). The hypothesis says that \(x_1\) is a root of this univariate polynomial, so \((X - x_1)\) divides it by the polynomial root theorem. Pulling back through the isomorphism yields \((x_0 - x_1) \mid P\).
For distinct variables \(x_i, x_j\) in \(\mathbb {Z}[x_1, \ldots , x_n]\), the polynomial \(x_i - x_j\) is irreducible.
Since \(x_i - x_j\) has total degree \(1\) and is primitive (its content is \(1\)), it is irreducible. The degree-\(1\) argument: any nontrivial factorization would require one factor of degree \(0\) (a constant), but a primitive polynomial of positive degree cannot have a non-unit constant factor.
For distinct \(i, j\), the total degree of \(x_i - x_j\) in \(\mathbb {Z}[x_1, \ldots , x_n]\) is \(1\).
The upper bound follows from \(\deg (f - g) \leq \max (\deg f, \deg g)\) and \(\deg (x_i) = 1\). The lower bound follows from the monomial \(x_i\) having nonzero coefficient in \(x_i - x_j\).
For distinct \(i, j\), the polynomial \(x_i - x_j \in \mathbb {Z}[x_1, \ldots , x_n]\) is primitive (its content is \(1\)).
The coefficient of the monomial \(x_i^1\) is \(1\), so the GCD of the coefficients is \(1\).
Let \(i \neq j\) and \(k \neq l\) be pairs of distinct indices such that \(\{ i,j\} \) and \(\{ k,l\} \) are not equal as unordered pairs. Then \((x_i - x_j)\) and \((x_k - x_l)\) are coprime in \(\mathbb {Z}[x_1, \ldots , x_n]\).
Both are irreducible by Lemma 5.113, and they are non-associate (not equal up to unit multiple) since they involve different variable pairs. Distinct irreducibles in a UFD are coprime.
Let \(C(m)\) be the \(m \times m\) matrix with entry \((i,j) = \prod _{k \neq j}(X_i + Y_k)\) over \(\mathbb {Z}[X_1, \ldots , X_m, Y_1, \ldots , Y_m]\). For distinct \(i, j\), we have \((X_i - X_j) \mid \det (C(m))\).
Substituting \(X_j\) for \(X_i\) makes rows \(i\) and \(j\) identical, so the determinant vanishes. By Lemma 5.112, \((X_i - X_j)\) divides the determinant.
For distinct \(i, j\), we have \((Y_i - Y_j) \mid \det (C(m))\), where \(C(m)\) is the polynomial Cauchy matrix from Lemma 5.117.
Substituting \(Y_j\) for \(Y_i\) makes columns \(i\) and \(j\) identical (in row \(r\), both become \(\prod _{k \neq i, k \neq j}(X_r + Y_k) \cdot (X_r + Y_j)\)), so the determinant vanishes. The divisibility follows by Lemma 5.112.
Let \(K\) be a field. Let \(x_1, \ldots , x_m, y_1, \ldots , y_m \in K\) with \(x_i + y_j \neq 0\) for all \(i, j\). Then
The polynomial Cauchy identity (Lemma 5.87) gives \(\det (\prod _{k \neq j}(x_i + y_k)) = \prod _{i {\lt} j}(x_i - x_j)(y_i - y_j)\). The left-hand side equals \(\prod _{i,j}(x_i + y_j) \cdot \det (1/(x_i + y_j))\) since each row of the Cauchy matrix can be multiplied by \(\prod _j (x_i + y_j)\) to clear denominators. Dividing gives the result.
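The cleared identity of Lemma 5.87 can also be spot-checked over the integers directly; sample values, helper names, and 0-based indexing in this sketch are my own:

```python
from itertools import combinations
from math import prod

def det(M):
    # Laplace expansion along the first row; exact on integer entries
    if not M:
        return 1
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

x = [1, 4, 6]
y = [2, 3, 7]
n = len(x)
# cleared Cauchy matrix: entry (i, j) is the product over k != j of (x_i + y_k)
C = [[prod(x[i] + y[k] for k in range(n) if k != j) for j in range(n)]
     for i in range(n)]
rhs = prod((x[i] - x[j]) * (y[i] - y[j])
           for i, j in combinations(range(n), 2))
assert det(C) == rhs   # cleared polynomial Cauchy identity
```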