Formalization Blueprint for “An Introduction to Algebraic Combinatorics” by Darij Grinberg

1.17 Multivariate formal power series

Definition 1.522

Let \(k\in \mathbb {N}\). The \(K\)-algebra of all FPSs in \(k\) variables \(x_{1},x_{2},\ldots ,x_{k}\) over \(K\) will be denoted by \(K\left[ \left[ x_{1},x_{2},\ldots ,x_{k}\right] \right] \).

Sometimes we will use different names for our variables. For example, if we work with \(2\) variables, we will commonly call them \(x\) and \(y\) instead of \(x_{1}\) and \(x_{2}\). Correspondingly, we will use the notation \(K\left[ \left[ x,y\right] \right] \) for the \(K\)-algebra of FPSs in these two variables.
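In a Mathlib-style formalization, the algebra \(K\left[\left[x_1,\ldots,x_k\right]\right]\) is typically represented as `MvPowerSeries (Fin k) K`. A minimal illustrative sketch (standard Mathlib names; the abbreviation `Bivariate` and the convention that index `0` is \(x\) and `1` is \(y\) are ours):

```lean
import Mathlib

-- K[[x₁, …, x_k]] is modelled as multivariate power series indexed by `Fin k`.
-- For two variables we index by `Fin 2`, thinking of `0` as `x` and `1` as `y`.
abbrev Bivariate (K : Type*) [CommRing K] := MvPowerSeries (Fin 2) K

-- Coefficients are indexed by finitely supported exponent vectors `Fin 2 →₀ ℕ`.
-- Example: extracting the coefficient of x² y³ from a bivariate series `f`.
example {K : Type*} [CommRing K] (f : Bivariate K) : K :=
  MvPowerSeries.coeff K (Finsupp.single 0 2 + Finsupp.single 1 3) f
```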

1.17.1 Embedding univariate power series into bivariate power series

Definition 1.523

Let \(f_0, f_1, f_2, \ldots \) be FPSs in a single variable \(x\), and write \(f\) for the sequence \(\left(f_0, f_1, f_2, \ldots\right)\). Define the bivariate power series \(\operatorname {embed}(f) := \sum _{k \in \mathbb {N}} f_k \, y^k \in K[[x,y]]\), whose coefficient at \(x^n y^k\) is the coefficient of \(x^n\) in \(f_k\).
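In Lean, `embed` can be defined directly by its coefficients, since `MvPowerSeries σ R` is by definition a function from exponent vectors to `R`. A sketch (the name `embed` and the use of `Fin 2` with `0` for \(x\) and `1` for \(y\) are our conventions, not Mathlib's):

```lean
import Mathlib

/-- Sketch of `embed`: the bivariate series whose coefficient at x^n y^k
is the coefficient of x^n in `f k`. Here `0` indexes x and `1` indexes y. -/
noncomputable def embed {K : Type*} [CommRing K] (f : ℕ → PowerSeries K) :
    MvPowerSeries (Fin 2) K :=
  fun m => PowerSeries.coeff K (m 0) (f (m 1))
```

With this definition, Lemma 1.524 holds essentially by unfolding: the coefficient of \(x^n y^k\) in `embed f` is literally the coefficient of \(x^n\) in `f k`.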

Lemma 1.524

Let \(f_0, f_1, f_2, \ldots \) be FPSs in a single variable \(x\). Then for all \(n, k \in \mathbb {N}\),

\[ [x^n y^k]\, \operatorname {embed}(f) = [x^n]\, f_k. \]
Proof

Immediate from the definition of \(\operatorname {embed}\).

1.17.2 Comparing coefficients of \(y^k\)

Proposition 1.525

Let \(f_{0},f_{1},f_{2},\ldots \) and \(g_{0},g_{1},g_{2},\ldots \) be FPSs in a single variable \(x\) such that

\begin{equation} \sum _{k\in \mathbb {N}}f_{k}y^{k}=\sum _{k\in \mathbb {N}}g_{k}y^{k}\ \ \ \ \ \ \ \ \ \ \text{in }K\left[ \left[ x,y\right] \right] . \tag{71} \label{eq.prop.fps.mulvar.comp-y-coeff.ass} \end{equation}

Then, \(f_{k}=g_{k}\) for each \(k\in \mathbb {N}\).

Proof

For each \(k\in \mathbb {N}\), let us write the two FPSs \(f_{k}\) and \(g_{k}\) as \(f_{k}=\sum _{n\in \mathbb {N}}f_{k,n}x^{n}\) and \(g_{k}=\sum _{n\in \mathbb {N}}g_{k,n}x^{n}\) with \(f_{k,n},g_{k,n}\in K\). Then, the equality (71) can be rewritten as

\[ \sum _{k\in \mathbb {N}}\ \ \sum _{n\in \mathbb {N}}f_{k,n}x^{n}y^{k}=\sum _{k\in \mathbb {N}}\ \ \sum _{n\in \mathbb {N}}g_{k,n}x^{n}y^{k}. \]

Now, comparing coefficients in front of \(x^{n}y^{k}\) in this equality, we obtain \(f_{k,n}=g_{k,n}\) for each \(k,n\in \mathbb {N}\). Therefore, \(f_{k}=g_{k}\) for each \(k\in \mathbb {N}\).
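The extraction step at the heart of this proof, recovering \(f_k = g_k\) from equality of all coefficients, corresponds to extensionality of power series in Mathlib. A sketch of that step in isolation:

```lean
import Mathlib

-- If two families of univariate series agree coefficientwise, they agree
-- componentwise; this is the final step of Proposition 1.525.
example {K : Type*} [CommRing K] (f g : ℕ → PowerSeries K)
    (h : ∀ n k : ℕ, PowerSeries.coeff K n (f k) = PowerSeries.coeff K n (g k)) :
    ∀ k, f k = g k :=
  fun k => PowerSeries.ext fun n => h n k
```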

1.17.3 Application: binomial generating function identity

Definition 1.526

The binomial generating function is the bivariate power series

\[ \operatorname {BinGen} := \sum _{n,k\in \mathbb {N}} \binom {n}{k}\, x^n y^k \in \mathbb {Q}[[x,y]], \]

whose coefficient at \(x^n y^k\) is \(\binom {n}{k}\).
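Since a multivariate power series in Mathlib is a function on exponent vectors, \(\operatorname{BinGen}\) admits a direct definition by coefficients. A sketch (conventions as before: variable `0` is \(x\), variable `1` is \(y\); the Lean name `BinGen` is ours):

```lean
import Mathlib

/-- Sketch: the binomial generating function, whose coefficient at x^n y^k
is the binomial coefficient C(n, k), cast into ℚ. -/
noncomputable def BinGen : MvPowerSeries (Fin 2) ℚ :=
  fun m => ((m 0).choose (m 1) : ℚ)
```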

Definition 1.527

The closed form of the binomial generating function is

\[ \frac{1}{1 - x(1+y)} \in \mathbb {Q}[[x,y]]. \]
Theorem 1.528

The binomial generating function equals its closed form:

\[ \sum _{n,k\in \mathbb {N}} \binom {n}{k}\, x^n y^k = \frac{1}{1-x(1+y)}. \]
Proof

It suffices to show that \((1 - x(1+y)) \cdot \operatorname {BinGen} = 1\), since the constant term of \(1 - x(1+y)\) is \(1 \neq 0\) and inverses in power series rings are unique.

Since \(1 - x(1+y) = 1 - x - xy\), the coefficient of \(x^n y^k\) in \((1 - x - xy) \cdot \operatorname {BinGen}\) is

\[ \binom {n}{k} - \binom {n-1}{k} - \binom {n-1}{k-1} \]

(where we set \(\binom {n-1}{k} = 0\) when \(n = 0\) and \(\binom {n-1}{k-1} = 0\) when \(n = 0\) or \(k = 0\)).

For \(n = 0\), this equals \(\binom {0}{k} = \delta _{k,0}\), which is the coefficient of \(x^0 y^k\) in \(1\).

For \(n \geq 1\) and \(k = 0\), this equals \(\binom {n}{0} - \binom {n-1}{0} = 1 - 1 = 0\).

For \(n \geq 1\) and \(k \geq 1\), Pascal’s identity gives \(\binom {n}{k} = \binom {n-1}{k-1} + \binom {n-1}{k}\), so the expression is \(0\).

Thus \((1 - x(1+y)) \cdot \operatorname {BinGen} = 1\), and uniqueness of inverses yields \(\operatorname {BinGen} = (1 - x(1+y))^{-1}\).
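The instance of Pascal's identity used in the case \(n \geq 1\), \(k \geq 1\) is available in Lean (written with successors instead of subtraction):

```lean
import Mathlib

-- Pascal's identity: C(n+1, k+1) = C(n, k) + C(n, k+1),
-- i.e. C(n, k) = C(n-1, k-1) + C(n-1, k) for n, k ≥ 1.
example (n k : ℕ) : (n + 1).choose (k + 1) = n.choose k + n.choose (k + 1) :=
  Nat.choose_succ_succ n k
```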

Expanding the closed form as a geometric series, \(\frac{1}{1-x\left(1+y\right)} = \frac{1}{\left(1-x\right)\left(1-\frac{xy}{1-x}\right)} = \sum _{k\in \mathbb {N}} \frac{x^{k}}{\left(1-x\right)^{k+1}}\, y^{k}\), and comparing this with the expansion of \(\sum _{n,k} \binom {n}{k} x^n y^k\) by powers of \(y\), we find

\begin{equation} \sum _{k\in \mathbb {N}}\frac{x^{k}}{\left( 1-x\right) ^{k+1}}y^{k}=\sum _{k\in \mathbb {N}}\left( \sum _{n\in \mathbb {N}}\binom {n}{k}x^{n}\right) y^{k}. \tag{72} \label{eq.fps.mulvar.exa1.xyk} \end{equation}

1.17.4 Helper lemmas from the formalization

Lemma 1.529

The inverse of \(1 - x\) in \(\mathbb {Q}[[x]]\) equals the power series with all coefficients equal to \(1\):

\[ (1 - x)^{-1} = \sum _{n \in \mathbb {N}} x^n. \]
Proof

We verify \((1-x) \cdot \sum _{n \in \mathbb {N}} x^n = 1\) by direct computation, and the constant term of \(1-x\) is \(1 \neq 0\).
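This verification can be sketched in Lean, where \(\sum_n x^n\) is `PowerSeries.mk fun _ => 1`. The tactic script below is a sketch and the exact simp lemma names may differ:

```lean
import Mathlib

open PowerSeries in
-- (1 - X) * (∑ₙ Xⁿ) = 1: the coefficient at 0 is 1·1 = 1, and at n+1
-- it is 1 - 1 = 0, matching the coefficients of the constant series 1.
example {K : Type*} [CommRing K] :
    (1 - X) * (mk fun _ => (1 : K)) = 1 := by
  ext n
  cases n with
  | zero => simp
  | succ n => simp [sub_mul, coeff_succ_X_mul]
```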

Lemma 1.530

For each \(k \in \mathbb {N}\),

\[ \frac{x^k}{(1-x)^{k+1}} = \sum _{n \in \mathbb {N}} \binom {n}{k}\, x^n \quad \text{in } \mathbb {Q}[[x]]. \]
Proof

We compare coefficients. By the formula for the inverse of \((1-x)^{k+1}\), \((1-x)^{-(k+1)} = \sum _n \binom {n+k}{k}\, x^n\). Multiplying by \(x^k\) shifts the index: the coefficient of \(x^n\) in \(x^k \cdot (1-x)^{-(k+1)}\) is \(0\) for \(n < k\) and \(\binom {n-k+k}{k} = \binom {n}{k}\) for \(n \geq k\). For \(n < k\), \(\binom {n}{k} = 0\) as well, so both sides agree.
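For completeness, the expansion of \((1-x)^{-(k+1)}\) invoked here can be justified by stars and bars:

\[ \frac{1}{(1-x)^{k+1}} = \left( \sum _{n\in \mathbb {N}} x^n \right)^{k+1} = \sum _{n\in \mathbb {N}} \binom {n+k}{k}\, x^n, \]

since the coefficient of \(x^n\) in the \((k+1)\)-fold product counts the ordered ways of writing \(n\) as a sum of \(k+1\) nonnegative integers, of which there are \(\binom {n+k}{k}\).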

Theorem 1.531

Equation (72) holds: as embedded sequences of univariate power series,

\[ \operatorname {embed}\! \left(k \mapsto \frac{x^k}{(1-x)^{k+1}}\right) = \operatorname {embed}\! \left(k \mapsto \sum _{n \in \mathbb {N}} \binom {n}{k}\, x^n\right). \]
Proof

Follows immediately from Lemma 1.530, which shows that the two sequences are equal term by term.

Theorem 1.532

For each \(k \in \mathbb {N}\),

\[ \frac{x^{k}}{\left( 1-x\right) ^{k+1}}=\sum _{n\in \mathbb {N}}\binom {n}{k}x^{n} \]

as an identity in \(\mathbb {Q}[[x]]\). This is an equality between univariate FPSs, even though we have obtained it by manipulating bivariate FPSs.

Proof

By Theorem 1.531, the two embedded sequences are equal. Apply Proposition 1.525 to extract the \(k\)-th component of this equality.

1.17.5 Partial derivatives

Definition 1.533

Let \(\sigma \) be a type of variable indices and let \(i \in \sigma \). The partial derivative of a multivariate power series \(f = \sum _{\mathbf{m}} a_{\mathbf{m}}\, \mathbf{x}^{\mathbf{m}}\) with respect to \(x_i\) is the power series

\[ \frac{\partial f}{\partial x_i} = \sum _{\mathbf{m}} (m_i + 1) \cdot a_{\mathbf{m} + \mathbf{e}_i} \cdot \mathbf{x}^{\mathbf{m}}, \]

where \(\mathbf{e}_i\) is the \(i\)-th standard basis vector (i.e., \(\mathbf{e}_i = (\delta _{j,i})_{j \in \sigma }\)).

This defines an \(R\)-linear map \(\partial /\partial x_i \colon R[[\mathbf{x}]] \to R[[\mathbf{x}]]\).
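As a sanity check of the index shift: take \(f = x^2 y \in K[[x,y]]\) and differentiate with respect to \(x\). The only nonzero coefficient of \(\partial f / \partial x\) sits at \(\mathbf{m} = (1,1)\), where it equals \((m_x + 1) \cdot a_{\mathbf{m} + \mathbf{e}_x} = (1+1) \cdot a_{(2,1)} = 2\), so

\[ \frac{\partial}{\partial x}\left( x^2 y \right) = 2xy, \]

as expected from calculus.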

noncomputable def AlgebraicCombinatorics.partialDeriv {R : Type u_1} [CommSemiring R] {σ : Type u_2} [DecidableEq σ] (i : σ) :
  MvPowerSeries σ R →ₗ[R] MvPowerSeries σ R
Lemma 1.534

For any multivariate power series \(f\) and any multi-index \(\mathbf{m}\),

\[ [\mathbf{x}^{\mathbf{m}}]\, \frac{\partial f}{\partial x_i} = (m_i + 1) \cdot [\mathbf{x}^{\mathbf{m} + \mathbf{e}_i}]\, f. \]
theorem AlgebraicCombinatorics.coeff_partialDeriv {R : Type u_1} [CommSemiring R] {σ : Type u_2} [DecidableEq σ] (i : σ) (f : MvPowerSeries σ R) (m : σ →₀ ℕ) :
(MvPowerSeries.coeff R m) ((partialDeriv i) f) = ↑((m + fun₀ | i => 1) i) * (MvPowerSeries.coeff R (m + fun₀ | i => 1)) f
Proof

Immediate from the definition.

Lemma 1.535

The map \((\mathbf{a}, \mathbf{b}) \mapsto (\mathbf{a} + \mathbf{e}_i, \mathbf{b})\) gives a bijection from \(\operatorname {antidiagonal}(\mathbf{m})\) to the subset of \(\operatorname {antidiagonal}(\mathbf{m} + \mathbf{e}_i)\) consisting of pairs \((\mathbf{a}, \mathbf{b})\) with \(\mathbf{e}_i \leq \mathbf{a}\).

theorem AlgebraicCombinatorics.antidiag_shift_fst {σ : Type u_2} [DecidableEq σ] (i : σ) (m : σ →₀ ℕ) :
Finset.map { toFun := fun (p : (σ →₀ ℕ) × (σ →₀ ℕ)) => (p.1 + fun₀ | i => 1, p.2), inj' := ⋯ } (Finset.antidiagonal m) = {p ∈ Finset.antidiagonal (m + fun₀ | i => 1) | (fun₀ | i => 1) ≤ p.1}
Proof

Verified by checking the membership conditions in both directions.

Lemma 1.536

The map \((\mathbf{a}, \mathbf{b}) \mapsto (\mathbf{a}, \mathbf{b} + \mathbf{e}_i)\) gives a bijection from \(\operatorname {antidiagonal}(\mathbf{m})\) to the subset of \(\operatorname {antidiagonal}(\mathbf{m} + \mathbf{e}_i)\) consisting of pairs \((\mathbf{a}, \mathbf{b})\) with \(\mathbf{e}_i \leq \mathbf{b}\).

theorem AlgebraicCombinatorics.antidiag_shift_snd {σ : Type u_2} [DecidableEq σ] (i : σ) (m : σ →₀ ℕ) :
Finset.map { toFun := fun (p : (σ →₀ ℕ) × (σ →₀ ℕ)) => (p.1, p.2 + fun₀ | i => 1), inj' := ⋯ } (Finset.antidiagonal m) = {p ∈ Finset.antidiagonal (m + fun₀ | i => 1) | (fun₀ | i => 1) ≤ p.2}
Proof

Symmetric to Lemma 1.535.

Lemma 1.537

In the sum over \(\operatorname {antidiagonal}(\mathbf{m} + \mathbf{e}_i)\), the terms with \(a_i = 0\) (i.e., \(\mathbf{e}_i \not\leq \mathbf{a}\)) contribute zero to \(\sum _{(\mathbf{a},\mathbf{b})} a_i \cdot [{\mathbf{x}}^{\mathbf{a}}]\, f \cdot [{\mathbf{x}}^{\mathbf{b}}]\, g\).

theorem AlgebraicCombinatorics.sum_filter_zero_fst {R : Type u_1} [CommSemiring R] {σ : Type u_2} [DecidableEq σ] (i : σ) (m : σ →₀ ℕ) (f g : MvPowerSeries σ R) :
∑ p ∈ Finset.antidiagonal (m + fun₀ | i => 1) with ¬(fun₀ | i => 1) ≤ p.1, ↑(p.1 i) * (MvPowerSeries.coeff R p.1) f * (MvPowerSeries.coeff R p.2) g = 0
Proof

Each such term has \(a_i = 0\), so the factor \(a_i\) makes the whole summand vanish.

Lemma 1.538

In the sum over \(\operatorname {antidiagonal}(\mathbf{m} + \mathbf{e}_i)\), the terms with \(b_i = 0\) (i.e., \(\mathbf{e}_i \not\leq \mathbf{b}\)) contribute zero to \(\sum _{(\mathbf{a},\mathbf{b})} [\mathbf{x}^{\mathbf{a}}]\, f \cdot b_i \cdot [\mathbf{x}^{\mathbf{b}}]\, g\).

theorem AlgebraicCombinatorics.sum_filter_zero_snd {R : Type u_1} [CommSemiring R] {σ : Type u_2} [DecidableEq σ] (i : σ) (m : σ →₀ ℕ) (f g : MvPowerSeries σ R) :
∑ p ∈ Finset.antidiagonal (m + fun₀ | i => 1) with ¬(fun₀ | i => 1) ≤ p.2, (MvPowerSeries.coeff R p.1) f * (↑(p.2 i) * (MvPowerSeries.coeff R p.2) g) = 0
Proof

Each such term has \(b_i = 0\), so the factor \(b_i\) makes the whole summand vanish.

Theorem 1.539 Product rule for partial derivatives

The partial derivative satisfies the product rule:

\[ \frac{\partial (fg)}{\partial x_i} = \frac{\partial f}{\partial x_i} \cdot g + f \cdot \frac{\partial g}{\partial x_i}. \]
theorem AlgebraicCombinatorics.partialDeriv_mul {R : Type u_1} [CommSemiring R] {σ : Type u_2} [DecidableEq σ] (i : σ) (f g : MvPowerSeries σ R) :
(partialDeriv i) (f * g) = (partialDeriv i) f * g + f * (partialDeriv i) g
Proof

We compare coefficients at an arbitrary multi-index \(\mathbf{m}\). Set \(\mathbf{m}' = \mathbf{m} + \mathbf{e}_i\). On the left side, by Lemma 1.534 and the product formula,

\[ [\mathbf{x}^{\mathbf{m}}]\, \frac{\partial (fg)}{\partial x_i} = m'_i \cdot \sum _{\mathbf{a} + \mathbf{b} = \mathbf{m}'} [\mathbf{x}^{\mathbf{a}}]\, f \cdot [\mathbf{x}^{\mathbf{b}}]\, g. \]

We distribute the scalar \(m'_i\) into each summand. For each pair \((\mathbf{a},\mathbf{b})\) in \(\operatorname {antidiagonal}(\mathbf{m}')\), we have \(m'_i = a_i + b_i\), so

\[ m'_i \cdot [\mathbf{x}^{\mathbf{a}}]\, f \cdot [\mathbf{x}^{\mathbf{b}}]\, g = a_i \cdot [\mathbf{x}^{\mathbf{a}}]\, f \cdot [\mathbf{x}^{\mathbf{b}}]\, g + [\mathbf{x}^{\mathbf{a}}]\, f \cdot b_i \cdot [\mathbf{x}^{\mathbf{b}}]\, g. \]

Summing over the antidiagonal, the first sum equals \([\mathbf{x}^{\mathbf{m}}]\, (\frac{\partial f}{\partial x_i} \cdot g)\) and the second equals \([\mathbf{x}^{\mathbf{m}}]\, (f \cdot \frac{\partial g}{\partial x_i})\), after reindexing via Lemmas 1.535 and 1.536 and discarding the zero terms via Lemmas 1.537 and 1.538.