1.17 Multivariate formal power series
Let \(k\in \mathbb {N}\). The \(K\)-algebra of all FPSs in \(k\) variables \(x_{1},x_{2},\ldots ,x_{k}\) over \(K\) will be denoted by \(K\left[ \left[ x_{1},x_{2},\ldots ,x_{k}\right] \right] \).
Sometimes we will use different names for our variables. For example, if we work with \(2\) variables, we will commonly call them \(x\) and \(y\) instead of \(x_{1}\) and \(x_{2}\). Correspondingly, we will use the notation \(K\left[ \left[ x,y\right] \right] \) for the \(K\)-algebra of FPSs in these two variables.
1.17.1 Embedding univariate power series into bivariate power series
Let \(f_0, f_1, f_2, \ldots \) be FPSs in a single variable \(x\), and write \(f = (f_0, f_1, f_2, \ldots)\) for the sequence they form. Define the bivariate power series \(\operatorname {embed}(f) := \sum _{k \in \mathbb {N}} f_k \, y^k \in K[[x,y]]\), whose coefficient at \(x^n y^k\) is the coefficient of \(x^n\) in \(f_k\).
- AlgebraicCombinatorics.embedUnivInBiv f nm = (PowerSeries.coeff (nm 0)) (f (nm 1))
Let \(f_0, f_1, f_2, \ldots \) be FPSs in a single variable \(x\). Then for all \(n, k \in \mathbb {N}\),
\[
  [x^n y^k]\, \operatorname {embed}(f) = [x^n]\, f_k .
\]
Immediate from the definition of \(\operatorname {embed}\).
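As a small illustration of this coefficient rule (the concrete choice \(f_k = x^k\) is our own, not from the text):

```latex
% Take f_k = x^k for every k, so that embed(f) = \sum_k x^k y^k.
% The lemma then reads:
\[
  [x^n y^k]\,\operatorname{embed}(f)
  \;=\; [x^n]\, f_k
  \;=\; [x^n]\, x^k
  \;=\; \delta_{n,k},
\]
% which matches the fact that \sum_k x^k y^k has coefficient 1
% exactly on the diagonal n = k and coefficient 0 elsewhere.
```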
1.17.2 Comparing coefficients of \(y^k\)
Let \(f_{0},f_{1},f_{2},\ldots \) and \(g_{0},g_{1},g_{2},\ldots \) be FPSs in a single variable \(x\) such that
\[
  \sum _{k\in \mathbb {N}} f_{k}\, y^{k} = \sum _{k\in \mathbb {N}} g_{k}\, y^{k}
  \quad \text{in } K[[x,y]]. \tag{71}
\]
Then, \(f_{k}=g_{k}\) for each \(k\in \mathbb {N}\).
For each \(k\in \mathbb {N}\), let us write the two FPSs \(f_{k}\) and \(g_{k}\) as \(f_{k}=\sum _{n\in \mathbb {N}}f_{k,n}x^{n}\) and \(g_{k}=\sum _{n\in \mathbb {N}}g_{k,n}x^{n}\) with \(f_{k,n},g_{k,n}\in K\). Then, the equality (71) can be rewritten as
\[
  \sum _{k\in \mathbb {N}} \sum _{n\in \mathbb {N}} f_{k,n}\, x^{n} y^{k}
  = \sum _{k\in \mathbb {N}} \sum _{n\in \mathbb {N}} g_{k,n}\, x^{n} y^{k} .
\]
Now, comparing coefficients in front of \(x^{n}y^{k}\) in this equality, we obtain \(f_{k,n}=g_{k,n}\) for each \(k,n\in \mathbb {N}\). Therefore, \(f_{k}=g_{k}\) for each \(k\in \mathbb {N}\).
1.17.3 Application: binomial generating function identity
The binomial generating function is the bivariate power series
\[
  \operatorname {BinGen} := \sum _{n\in \mathbb {N}} \sum _{k\in \mathbb {N}} \binom {n}{k}\, x^n y^k \in K[[x,y]],
\]
whose coefficient at \(x^n y^k\) is \(\binom {n}{k}\).
- AlgebraicCombinatorics.binomialGenFun nm = ↑((nm 0).choose (nm 1))
The closed form of the binomial generating function is
\[
  \bigl(1 - x(1+y)\bigr)^{-1} \in K[[x,y]].
\]
The binomial generating function equals its closed form:
\[
  \operatorname {BinGen} = \bigl(1 - x(1+y)\bigr)^{-1} .
\]
It suffices to show that \((1 - x(1+y)) \cdot \operatorname {BinGen} = 1\), since the constant term of \(1 - x(1+y)\) is \(1 \neq 0\) and inverses in power series rings are unique.
Since \(1 - x(1+y) = 1 - x - xy\), the coefficient of \(x^n y^k\) in \((1 - x - xy) \cdot \operatorname {BinGen}\) is
\[
  \binom {n}{k} - \binom {n-1}{k} - \binom {n-1}{k-1}
\]
(where we set \(\binom {n-1}{k} = 0\) when \(n = 0\) and \(\binom {n-1}{k-1} = 0\) when \(n = 0\) or \(k = 0\)).
For \(n = 0\), this equals \(\binom {0}{k} = \delta _{k,0}\), which is the coefficient of \(x^0 y^k\) in \(1\).
For \(n \geq 1\) and \(k = 0\), this equals \(\binom {n}{0} - \binom {n-1}{0} = 1 - 1 = 0\).
For \(n \geq 1\) and \(k \geq 1\), Pascal’s identity gives \(\binom {n}{k} = \binom {n-1}{k-1} + \binom {n-1}{k}\), so the expression is \(0\).
Thus \((1 - x(1+y)) \cdot \operatorname {BinGen} = 1\), and uniqueness of inverses yields \(\operatorname {BinGen} = (1 - x(1+y))^{-1}\).
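The closed form can also be expanded into powers of \(y\) by the standard geometric-series manipulation; the following sketch (our own spelling-out) is the computation behind the coefficient comparison that follows:

```latex
% Factor out (1-x) and expand the geometric series in xy/(1-x),
% which is legitimate since xy/(1-x) has zero constant term:
\[
  \bigl(1 - x(1+y)\bigr)^{-1}
  = \bigl((1-x) - xy\bigr)^{-1}
  = \frac{1}{1-x}\cdot\sum_{k\in\mathbb{N}}\Bigl(\frac{xy}{1-x}\Bigr)^{k}
  = \sum_{k\in\mathbb{N}} \frac{x^k}{(1-x)^{k+1}}\, y^k .
\]
```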
Expanding the closed form as a geometric series in \(y\) and comparing with the defining sum \(\sum _{n,k} \binom {n}{k} x^n y^k\), we find
\[
  \sum _{k\in \mathbb {N}} \Bigl( \sum _{n\in \mathbb {N}} \binom {n}{k}\, x^n \Bigr) y^k
  = \sum _{k\in \mathbb {N}} \frac{x^k}{(1-x)^{k+1}}\, y^k . \tag{72}
\]
1.17.4 Helper lemmas from the formalization
The inverse of \(1 - x\) in \(\mathbb {Q}[[x]]\) equals the power series with all coefficients equal to \(1\):
\[
  (1-x)^{-1} = \sum _{n \in \mathbb {N}} x^n .
\]
We verify \((1-x) \cdot \sum _{n \in \mathbb {N}} x^n = 1\) by direct computation, and the constant term of \(1-x\) is \(1 \neq 0\).
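Spelled out, the direct computation is a telescoping sum:

```latex
% All terms cancel in pairs except x^0:
\[
  (1-x)\sum_{n\in\mathbb{N}} x^n
  = \sum_{n\in\mathbb{N}} x^n \;-\; \sum_{n\in\mathbb{N}} x^{n+1}
  = x^0
  = 1 .
\]
```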
For each \(k \in \mathbb {N}\),
\[
  x^k \cdot (1-x)^{-(k+1)} = \sum _{n\in \mathbb {N}} \binom {n}{k}\, x^n
  \quad \text{in } \mathbb {Q}[[x]].
\]
We compare coefficients. By the formula for the inverse of \((1-x)^{k+1}\), \((1-x)^{-(k+1)} = \sum _n \binom {n+k}{k}\, x^n\). Multiplying by \(x^k\) shifts the index: the coefficient of \(x^n\) in \(x^k \cdot (1-x)^{-(k+1)}\) is \(0\) for \(n {\lt} k\) and \(\binom {n-k+k}{k} = \binom {n}{k}\) for \(n \geq k\). For \(n {\lt} k\), \(\binom {n}{k} = 0\) as well, so both sides agree.
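For concreteness, here is the case \(k = 1\) (a sanity check of our own):

```latex
% Since (1-x)^{-2} = \sum_n (n+1) x^n, multiplying by x
% shifts the exponent by one:
\[
  x \cdot (1-x)^{-2}
  = \sum_{n\in\mathbb{N}} (n+1)\, x^{n+1}
  = \sum_{n\ge 1} n\, x^{n}
  = \sum_{n\in\mathbb{N}} \binom{n}{1}\, x^n ,
\]
% using that \binom{n}{1} = n, with the n = 0 term vanishing.
```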
Equation (72) holds: as embedded sequences of univariate power series,
\[
  \operatorname {embed}\Bigl( k \mapsto \sum _{n\in \mathbb {N}} \tbinom {n}{k}\, x^n \Bigr)
  = \operatorname {embed}\Bigl( k \mapsto x^k \cdot (1-x)^{-(k+1)} \Bigr) .
\]
Follows immediately from Lemma 1.530, which shows that the two sequences are equal term by term.
For each \(k \in \mathbb {N}\),
\[
  \sum _{n\in \mathbb {N}} \binom {n}{k}\, x^n = \frac{x^k}{(1-x)^{k+1}}
\]
as an identity in \(\mathbb {Q}[[x]]\). This is an equality between univariate FPSs, even though we have obtained it by manipulating bivariate FPSs.
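At \(k = 0\), the identity \(\sum _{n} \binom {n}{k} x^n = x^k/(1-x)^{k+1}\) specializes to the familiar geometric series (a quick sanity check of our own):

```latex
% With k = 0, every \binom{n}{0} equals 1:
\[
  \sum_{n\in\mathbb{N}} \binom{n}{0}\, x^n
  = \sum_{n\in\mathbb{N}} x^n
  = \frac{1}{1-x}
  = \frac{x^0}{(1-x)^{0+1}} .
\]
```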
1.17.5 Partial derivatives
Let \(\sigma \) be a type of variable indices and let \(i \in \sigma \). The partial derivative of a multivariate power series \(f = \sum _{\mathbf{m}} a_{\mathbf{m}}\, \mathbf{x}^{\mathbf{m}}\) with respect to \(x_i\) is the power series
\[
  \frac{\partial f}{\partial x_i}
  := \sum _{\mathbf{m}} (m_i + 1)\, a_{\mathbf{m} + \mathbf{e}_i}\, \mathbf{x}^{\mathbf{m}},
\]
where \(\mathbf{e}_i\) is the \(i\)-th standard basis vector (i.e., \(\mathbf{e}_i = (\delta _{j,i})_{j \in \sigma }\)).
This defines an \(R\)-linear map \(\partial /\partial x_i \colon R[[\mathbf{x}]] \to R[[\mathbf{x}]]\).
For any multivariate power series \(f\) and any multi-index \(\mathbf{m}\),
\[
  [\mathbf{x}^{\mathbf{m}}]\, \frac{\partial f}{\partial x_i}
  = (m_i + 1)\, [\mathbf{x}^{\mathbf{m} + \mathbf{e}_i}]\, f .
\]
Immediate from the definition.
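The coefficient rule \([\mathbf{x}^{\mathbf{m}}]\, \partial f / \partial x_i = (m_i + 1)\, [\mathbf{x}^{\mathbf{m} + \mathbf{e}_i}]\, f\) can be checked on a small example (the choice of \(f\) is ours, not from the text):

```latex
% In two variables x_1, x_2, let f = x_1^2 x_2, i.e. a_{(2,1)} = 1
% and all other coefficients vanish. Differentiating in x_1:
% the only multi-index m with a_{m + e_1} \ne 0 is m = (1,1),
% which contributes the coefficient (m_1 + 1) a_{(2,1)} = 2. Hence
\[
  \frac{\partial}{\partial x_1}\bigl(x_1^2 x_2\bigr) = 2\, x_1 x_2 ,
\]
% in agreement with the usual calculus rule.
```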
The map \((\mathbf{a}, \mathbf{b}) \mapsto (\mathbf{a} + \mathbf{e}_i, \mathbf{b})\) gives a bijection from \(\operatorname {antidiagonal}(\mathbf{m})\) to the subset of \(\operatorname {antidiagonal}(\mathbf{m} + \mathbf{e}_i)\) consisting of pairs \((\mathbf{a}, \mathbf{b})\) with \(\mathbf{e}_i \leq \mathbf{a}\).
Verified by checking the membership conditions in both directions.
The map \((\mathbf{a}, \mathbf{b}) \mapsto (\mathbf{a}, \mathbf{b} + \mathbf{e}_i)\) gives a bijection from \(\operatorname {antidiagonal}(\mathbf{m})\) to the subset of \(\operatorname {antidiagonal}(\mathbf{m} + \mathbf{e}_i)\) consisting of pairs \((\mathbf{a}, \mathbf{b})\) with \(\mathbf{e}_i \leq \mathbf{b}\).
Symmetric to Lemma 1.535.
In the sum over \(\operatorname {antidiagonal}(\mathbf{m} + \mathbf{e}_i)\), the terms with \(a_i = 0\) (i.e., \(\mathbf{e}_i \not\leq \mathbf{a}\)) contribute zero to \(\sum _{(\mathbf{a},\mathbf{b})} a_i \cdot [{\mathbf{x}}^{\mathbf{a}}]\, f \cdot [{\mathbf{x}}^{\mathbf{b}}]\, g\).
Each such term has \(a_i = 0\), so the factor \(a_i\) makes the whole summand vanish.
In the sum over \(\operatorname {antidiagonal}(\mathbf{m} + \mathbf{e}_i)\), the terms with \(b_i = 0\) (i.e., \(\mathbf{e}_i \not\leq \mathbf{b}\)) contribute zero to \(\sum _{(\mathbf{a},\mathbf{b})} [\mathbf{x}^{\mathbf{a}}]\, f \cdot b_i \cdot [\mathbf{x}^{\mathbf{b}}]\, g\).
Each such term has \(b_i = 0\), so the factor \(b_i\) makes the whole summand vanish.
The partial derivative satisfies the product rule:
\[
  \frac{\partial (f \cdot g)}{\partial x_i}
  = \frac{\partial f}{\partial x_i} \cdot g + f \cdot \frac{\partial g}{\partial x_i} .
\]
We compare coefficients at an arbitrary multi-index \(\mathbf{m}\). Set \(\mathbf{m}' = \mathbf{m} + \mathbf{e}_i\). On the left side, by Lemma 1.534 and the product formula,
\[
  [\mathbf{x}^{\mathbf{m}}]\, \frac{\partial (f \cdot g)}{\partial x_i}
  = m'_i \cdot [\mathbf{x}^{\mathbf{m}'}]\, (f \cdot g)
  = m'_i \sum _{(\mathbf{a},\mathbf{b}) \in \operatorname {antidiagonal}(\mathbf{m}')}
    [\mathbf{x}^{\mathbf{a}}]\, f \cdot [\mathbf{x}^{\mathbf{b}}]\, g .
\]
We distribute the scalar \(m'_i\) into each summand. For each pair \((\mathbf{a},\mathbf{b})\) in \(\operatorname {antidiagonal}(\mathbf{m}')\), we have \(m'_i = a_i + b_i\), so
\[
  m'_i \cdot [\mathbf{x}^{\mathbf{a}}]\, f \cdot [\mathbf{x}^{\mathbf{b}}]\, g
  = a_i \cdot [\mathbf{x}^{\mathbf{a}}]\, f \cdot [\mathbf{x}^{\mathbf{b}}]\, g
  + [\mathbf{x}^{\mathbf{a}}]\, f \cdot b_i \cdot [\mathbf{x}^{\mathbf{b}}]\, g .
\]
Summing over the antidiagonal, the first sum equals \([\mathbf{x}^{\mathbf{m}}]\, (\frac{\partial f}{\partial x_i} \cdot g)\) and the second equals \([\mathbf{x}^{\mathbf{m}}]\, (f \cdot \frac{\partial g}{\partial x_i})\), after reindexing via Lemmas 1.535 and 1.536 and discarding the zero terms via Lemmas 1.537 and 1.538.
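As a sanity check of the product rule in \(K[[x,y]]\) (the concrete choice \(f = x\), \(g = xy\) is ours):

```latex
% Both sides agree on f = x, g = xy, since fg = x^2 y:
\[
  \frac{\partial (f g)}{\partial x}
  = \frac{\partial}{\partial x}\bigl(x^2 y\bigr)
  = 2\, x y ,
  \qquad
  \frac{\partial f}{\partial x}\cdot g + f\cdot\frac{\partial g}{\partial x}
  = 1 \cdot x y + x \cdot y
  = 2\, x y .
\]
```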