Theta Functions, Elliptic Hypergeometric Series, and Kawanaka's Macdonald Polynomial Conjecture

We give a new theta-function identity, a special case of which is utilised to prove Kawanaka's Macdonald polynomial conjecture. The theta-function identity further yields a transformation formula for multivariable elliptic hypergeometric series which appears to be new even in the one-variable, basic case.


Introduction
The recent discovery of elliptic hypergeometric series (EHS) by Frenkel and Turaev [2] has led to a renewed interest in theta-function identities. Such identities are at the core of many of the proofs of identities for EHS associated with root systems. For |p| < 1 and x ∈ C*, let

\[ \theta(x) = \theta(x; p) = \prod_{k \geq 0} (1 - x p^k)(1 - p^{k+1}/x). \]

Two examples of such theta-function identities are

\[ \sum_{i=1}^n \frac{\prod_{j=1}^n \theta(x_i/y_j)}{\prod_{j=1,\, j \neq i}^n \theta(x_i/x_j)} = 0 \quad \text{for } x_1 \cdots x_n = y_1 \cdots y_n, \tag{1.1} \]

and

\[ \sum_{\substack{I \subseteq [n] \\ |I| = r}} \prod_{\substack{i \in I \\ j \in [n]}} \frac{\theta(x_i y_j)}{\theta(q x_i y_j)} \prod_{\substack{i \in I \\ j \notin I}} \frac{\theta(q x_i/x_j)}{\theta(x_i/x_j)} = \sum_{\substack{I \subseteq [n] \\ |I| = r}} \prod_{\substack{i \in I \\ j \in [n]}} \frac{\theta(y_i x_j)}{\theta(q y_i x_j)} \prod_{\substack{i \in I \\ j \notin I}} \frac{\theta(q y_i/y_j)}{\theta(y_i/y_j)}, \tag{1.3} \]

where [n] := {1, 2, . . . , n}. The identity (1.1) is standard and can for example be found in the classic text of Whittaker and Watson [24, page 451]. It was employed by Gustafson in [4] to derive an A_{n−1} extension of Bailey's very-well-poised 6ψ6 summation. In the same paper, Gustafson discovered the identity (1.2) [4, Lemma 4.14] and used it to derive a C_n extension of Bailey's very-well-poised 6ψ6 summation. The identities (1.1) and (1.2) were also employed by Rosengren in [14] to prove elliptic analogues of Milne's A_{n−1} Jackson sum [12] and Schlosser's D_n Jackson sum [20]. The identity (1.3) was used ([17], see also [19]) to show the commutativity of what is nowadays known as the Macdonald–Ruijsenaars difference operator [17]. Let

\[ (a)_n = (a; q, p)_n = \prod_{k=0}^{n-1} \theta\bigl(a q^k; p\bigr) \]

be a theta shifted factorial (cf. [3, Chapter 11]), and set (a_1, . . . , a_k)_n = (a_1)_n · · · (a_k)_n.
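Identity (1.1) lends itself to direct numerical testing. The following sketch (the helper names are ours, not the paper's; the theta function is computed as a truncated product, which is accurate for small |p|) checks the n = 3 case at an admissible point with x_1 x_2 x_3 = y_1 y_2 y_3.

```python
# Numerical sanity check of identity (1.1): if x_1*...*x_n = y_1*...*y_n then
#   sum_i prod_j theta(x_i/y_j) / prod_{j != i} theta(x_i/x_j) = 0.
# theta(x; p) is computed as a truncated product, accurate for small |p|.

def theta(x, p, K=60):
    """Truncation of theta(x; p) = prod_{k>=0} (1 - x p^k)(1 - p^{k+1}/x)."""
    val = 1.0
    for k in range(K):
        val *= (1 - x * p**k) * (1 - p**(k + 1) / x)
    return val

def identity_11(x, y, p):
    """Left-hand side of (1.1); should vanish when prod(x) == prod(y)."""
    n = len(x)
    total = 0.0
    for i in range(n):
        num = 1.0
        for j in range(n):
            num *= theta(x[i] / y[j], p)
        den = 1.0
        for j in range(n):
            if j != i:
                den *= theta(x[i] / x[j], p)
        total += num / den
    return total

# An admissible point: 2 * 3 * 5 == 4 * 7.5 * 1
x, y, p = [2.0, 3.0, 5.0], [4.0, 7.5, 1.0], 0.1
residual = identity_11(x, y, p)
```

Dropping the balancing condition on y makes the residual of order one, which is a useful control; identity (1.3) can be tested in exactly the same fashion.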
The main result of this paper is the following new identity for theta functions.
As will be shown in Section 4, Theorem 1.1 can be used to obtain identities for EHS for the root systems of type A. More unexpectedly, however, it also implies a combinatorial identity in the theory of Macdonald polynomials [11, Chapter VI], conjectured in 1999 by Kawanaka [8]. For λ a partition let P_λ(x; q, t) be the Macdonald symmetric function in countably many independent variables x = (x_1, x_2, . . . ), and let a(s) and l(s) be the arm-length and leg-length of the square s in the diagram of λ.

Conjecture 1.2 (Kawanaka). The following formal identity holds:

\[ \sum_{\lambda} \prod_{s \in \lambda} \frac{1 + q^{a(s)} t^{l(s)+1}}{1 - q^{a(s)+1} t^{l(s)}}\, P_\lambda(x; q^2, t^2) = \prod_{i \geq 1} \frac{(-t x_i; q)_\infty}{(x_i; q)_\infty} \prod_{i < j} \frac{(t^2 x_i x_j; q^2)_\infty}{(x_i x_j; q^2)_\infty}. \tag{1.6} \]

Using elementary results from Macdonald polynomial theory and (a special limiting case of) Theorem 1.1 we can claim the following.

Theorem 1.3. Kawanaka's conjecture is true.
We finally remark that although Theorem 1.1 is new, a special limiting case coincides with a limiting case of another theta-function identity, implicit in [14,Corollary 5.3].
Theorem 1.4 (Rosengren). Let n be a nonnegative integer and v, w, y, z, q, x_1, . . . , x_n ∈ C* such that vw = q^{n−1} yz and such that both sides are well defined.

If we set p = 0 in Theorems 1.1 and 1.4 using θ(a; 0) = 1 − a, and then let t → ∞ in the former (so that its right-hand side vanishes unless r = 0) and (y, z) → (0, ∞) (such that yz = q^{1−n} vw) in the latter, we obtain one and the same rational-function identity (up to the rescaling x_i → wq x_i in Theorem 1.1).
2 Proof of Theorem 1.1

Preliminary remarks
Let the left- and right-hand sides of the identity of the theorem be denoted by L(x; v, w, q, t; p) and R(x; v, w, q, t; p) respectively, where x := (x_1, . . . , x_n). Theorem 1.1 may then be reformulated as a symmetry between L and R. Alternatively, by the substitutions I → [n] − I and r → n − r, it follows that

\[ L(x; v, w, q, t; p) = R\bigl(x; q^{-n} t w^{-1}, q^{-n} t v^{-1}, q, t; p\bigr)\, \frac{(v, w)_n}{(qv/t, qw/t)_n} \left(\frac{q}{t}\right)^{n}. \]
Hence Theorem 1.1 is also equivalent to

\[ L(x; v, w, q, t; p) = L\bigl(x; q^{-n} t w^{-1}, q^{-n} t v^{-1}, q, t; p\bigr)\, \frac{(v, w)_n}{(qv/t, qw/t)_n} \left(\frac{q}{t}\right)^{n}. \]

Proof of Theorem 1.1
Recall that x = (x_1, . . . , x_n). We begin by introducing a scalar variable u into the theorem by making the substitution x → x/u. Let L(x; u, v, w, q, t; p) and R(x; u, v, w, q, t; p) denote the left-hand and right-hand sides of the resulting identity, and further define F(x; u, v, w, q, t; p) := L(x; u, v, w, q, t; p) − R(x; u, v, w, q, t; p). Comparing with our earlier definitions we thus have L(x; u, v, w, q, t; p) = L(x/u; v, w, q, t; p) and R(x; u, v, w, q, t; p) = R(x/u; v, w, q, t; p).
We are mainly interested in the u-dependence of L, R and F, and will frequently write L(u), R(u) and F(u). The claim of the theorem is thus F(u) = 0, which will be proved by induction on n, the cardinality of the alphabet x.
From θ(z) = −z θ(pz) it immediately follows that F is periodic along annuli, with period p: F(pu) = F(u). The function F(u) has simple poles in u, and it is sufficient to only consider the residues of the poles at u = x_n/t and u = q^{−m} x_n/w.

First we compute the residue at u = x_n/t. For L(u) only terms such that n ∈ I contribute and for R(u) only terms with n ∉ I do. Hence, after an elementary calculation which includes the use of (2.1), the limit

\[ \lim_{u \to a} \frac{u - a}{\theta(a/u; p)} = \frac{a}{(p; p)_\infty^2}, \]

and a shift r → r + 1 in R(u), we find

\[ \operatorname{Res}_{u = t^{-1} x_n} F(x; u, v, w, q, t; p) = F\bigl(x^{(n)}; x_n/qt, v, qw, q, t; p\bigr)\, \frac{q x_n}{t^2}, \]

where x^{(n)} := (x_1, . . . , x_{n−1}). By induction on n this vanishes.
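The limit used above can itself be checked numerically. In the sketch below (helper names ours; all products are truncated, which is accurate for small |p|), u approaches the simple zero of θ(a/u; p) at u = a.

```python
# Numerical check of the limit used in the residue computation:
#   lim_{u -> a} (u - a)/theta(a/u; p) = a/(p; p)_inf^2 .

def theta(x, p, K=60):
    """Truncation of theta(x; p) = prod_{k>=0} (1 - x p^k)(1 - p^{k+1}/x)."""
    val = 1.0
    for k in range(K):
        val *= (1 - x * p**k) * (1 - p**(k + 1) / x)
    return val

def qpoch_inf(p, K=60):
    """Truncation of (p; p)_inf = prod_{k>=1} (1 - p^k)."""
    val = 1.0
    for k in range(1, K + 1):
        val *= 1 - p**k
    return val

p, a = 0.15, 2.3
u = a + 1e-6                      # approach the simple zero of theta(a/u; p)
approx = (u - a) / theta(a / u, p)
exact = a / qpoch_inf(p) ** 2
rel_err = abs(approx - exact) / abs(exact)
```

The agreement improves linearly as u → a, consistent with the zero of θ(a/u; p) at u = a being simple.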
Next we consider the pole at u = q^{−m} x_n/w. The only contributions to its residue come from L(u) with (i) n ∉ I and r = m or (ii) n ∈ I and r = m − 1. An elementary calculation shows that these two contributions are the same up to a sign, and thus cancel.

Now that we have established that all poles of F(u) have zero residue we may conclude that F(u) is independent of u. To show that it is actually identically zero we take u = x_n/q. In L(x_n/q) only terms such that n ∉ I contribute and in R(x_n/q) only terms with n ∈ I do. Again using (2.1) and making a shift r → r + 1 in L(x_n/q), we find

\[ F(x; x_n/q, v, w, q, t; p) = F\bigl(x^{(n)}; x_n, qv, w, q, t; p\bigr)\, \frac{\theta(v)\,\theta(qv/tw)}{\theta(qv/t)\,\theta(v/w)}. \]
By induction this once again vanishes.

Preliminary remarks
Kawanaka's identity complements a set of four Macdonald polynomial identities discovered by Macdonald [11, page 349]. In slightly more general form, as given in [23], these identities may be stated as the following pair of results (Macdonald's formulae correspond to b = 0 and b = 1), where c(λ) and r(λ) are the number of columns and rows of odd length, respectively. Due to its quadratic nature, Kawanaka's identity is significantly harder to prove than (3.1).

If x contains a single variable then (1.6) simplifies to the classical q-binomial theorem [3]

\[ \sum_{k \geq 0} \frac{(a; q)_k}{(q; q)_k}\, x^k = \frac{(ax; q)_\infty}{(x; q)_\infty}, \tag{3.2} \]

where, for integer k, (a; q)_k := (a; q)_\infty/(a q^k; q)_\infty with (a; q)_\infty := \prod_{i \geq 0} (1 - a q^i).

If q = t then (1.6) reduces to an identity for the Schur function s_λ proved by Kawanaka [8]. Specifically, using that P_λ(x; q, q) = s_λ(x) and a(s) + l(s) + 1 = h(s), with h(s) the hook-length of the square s, it follows that the q = t case of (1.6) is

\[ \sum_{\lambda} \prod_{s \in \lambda} \frac{1 + q^{h(s)}}{1 - q^{h(s)}}\, s_\lambda(x) = \prod_{i \geq 1} \frac{(-q x_i; q)_\infty}{(x_i; q)_\infty} \prod_{i < j} \frac{(q^2 x_i x_j; q^2)_\infty}{(x_i x_j; q^2)_\infty}. \tag{3.3} \]

This result was reproved and reinterpreted by Rosengren in [15]. If Q_µ is Schur's Q-function, see [11, Section III.8], then Rosengren observed an expression for Q_µ(1, q, q^2, . . . ), for µ a partition of length m, in terms of (3.3). By (3.3) this results in a product form for Q_µ(1, q, q^2, . . . ) and, consequently, in a product form for the generating function of marked shifted tableaux [15, Corollary 3.1].
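The one-variable reduction just mentioned can be tested directly. The sketch below (truncated sums and products; helper names ours) compares the two sides of the q-binomial theorem at a = −t, which is the specialisation in which it arises from Kawanaka's identity.

```python
# Sanity check of the q-binomial theorem:
#   sum_{k>=0} (a; q)_k/(q; q)_k * x^k = (a x; q)_inf/(x; q)_inf,
# checked at a = -t, the one-variable case of Kawanaka's identity.

def qpoch(a, q, n):
    """(a; q)_n = prod_{k=0}^{n-1} (1 - a q^k)."""
    val = 1.0
    for k in range(n):
        val *= 1 - a * q**k
    return val

def q_binomial_lhs(a, q, x, terms=80):
    return sum(qpoch(a, q, k) / qpoch(q, q, k) * x**k for k in range(terms))

def q_binomial_rhs(a, q, x, K=200):
    return qpoch(a * x, q, K) / qpoch(x, q, K)

q, t, x = 0.3, 0.45, 0.2
lhs = q_binomial_lhs(-t, q, x)
rhs = q_binomial_rhs(-t, q, x)
```

Taking a = q instead collapses both sides to the geometric series 1/(1 − x), a quick internal consistency check.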
It is an open problem to find a corresponding interpretation of (1.6). Another special case of (1.6) proved by Kawanaka corresponds to q = 0 [7]. Using P_λ(x; 0, t) = P_λ(x; t), with the right-hand side a Hall–Littlewood symmetric function, it follows that the q = 0 case of (1.6) is an identity for Hall–Littlewood functions involving m_i(λ), the multiplicity of the part i in λ. This is in fact a special case of a much more general identity for Hall–Littlewood functions proved in [23, Theorem 1.1]. So far our attempts to generalise (1.6) so as to include this more general result for Hall–Littlewood functions have been unsuccessful.

Macdonald polynomials
Let λ = (λ_1, λ_2, . . . ) be a partition, i.e., λ_1 ≥ λ_2 ≥ · · · with finitely many λ_i unequal to zero. The length and weight of λ, denoted by l(λ) and |λ|, are the number and the sum of the non-zero λ_i, respectively. As usual we identify two partitions that differ only in their string of zeros, so that (6, 3, 3, 1, 0, 0) and (6, 3, 3, 1) represent the same partition. When |λ| = N we say that λ is a partition of N, and the unique partition of zero is denoted by 0. The multiplicity of the part i in the partition λ is denoted by m_i(λ). We identify a partition with its Ferrers graph, defined as the set of points (i, j) ∈ Z² such that 1 ≤ j ≤ λ_i, and further make the usual identification between Ferrers graphs and (Young) diagrams by replacing points by squares. The conjugate λ′ of λ is the partition obtained by reflecting the diagram of λ in the main diagonal.
If λ and µ are partitions then µ ⊆ λ if (the diagram of) µ is contained in (the diagram of) λ, i.e., µ_i ≤ λ_i for all i ≥ 1. If µ ⊆ λ then the skew-diagram λ − µ denotes the set-theoretic difference between λ and µ, i.e., those squares of λ not contained in µ. The skew diagram λ − µ is a vertical r-strip if |λ − µ| := |λ| − |µ| = r and if, for all i ≥ 1, λ_i − µ_i is at most one, i.e., each row of λ − µ contains at most one square. For example, if λ = (5, 4, 2, 2, 1) and µ = (4, 3, 1, 1, 1) then λ − µ is a vertical 4-strip. The set of all vertical r-strips is denoted by V_r and the set of all vertical strips by V. Similarly, λ − µ is a horizontal r-strip if |λ − µ| = r and if, for all i ≥ 1, λ′_i − µ′_i is at most one, i.e., each column of λ − µ contains at most one square. The set of all horizontal r-strips is denoted by H_r and the set of all horizontal strips by H.
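The strip conditions are easy to check by machine. The following sketch (plain Python, names ours) encodes a partition as a weakly decreasing list and tests the vertical-strip example from the text.

```python
# Sketch of the strip definitions above. A partition is a weakly decreasing
# list of positive integers; la - mu is a vertical r-strip when mu is
# contained in la, |la| - |mu| = r, and each row of la - mu contains at most
# one square (for a horizontal strip: each column at most one square).

def contains(la, mu):
    """mu subseteq la: mu_i <= la_i for all i."""
    mu_padded = list(mu) + [0] * (len(la) - len(mu))
    return len(mu) <= len(la) and all(m <= l for l, m in zip(la, mu_padded))

def is_vertical_strip(la, mu, r):
    if not contains(la, mu):
        return False
    mu_padded = list(mu) + [0] * (len(la) - len(mu))
    diffs = [l - m for l, m in zip(la, mu_padded)]
    return sum(diffs) == r and all(d <= 1 for d in diffs)

# The example from the text: la - mu is a vertical 4-strip.
la, mu = [5, 4, 2, 2, 1], [4, 3, 1, 1, 1]
```

The horizontal-strip condition is obtained by applying the same test to the conjugate partitions.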
Let s = (i, j) be a square in the diagram of λ, and let a(s) and l(s) be the arm-length and leg-length of s, given by

\[ a(s) = \lambda_i - j \qquad \text{and} \qquad l(s) = \lambda'_j - i. \]

Then we define the rational functions b^±_λ(q, t). The function b^+_λ(q, t) is standard in Macdonald polynomial theory and is usually denoted by b_λ(q, t); below we use both notations. The function b^−_λ(q, t) corresponds to the product in the summand of Kawanaka's conjecture. Since under conjugation arms and legs are interchanged, it easily follows that b^±_{λ′} is simply related to b^±_λ with the roles of q and t interchanged (3.5). Subsequently we require non-combinatorial expressions for both b^+_λ(q, t) and b^−_λ(q, t), valid for any integer n such that l(λ) ≤ n; from these and (3.5) we also find analogous expressions where, again, l(λ) ≤ n.

Let S_n denote the symmetric group, acting on x = (x_1, . . . , x_n) by permuting the x_i, and let Λ_n = Z[x_1, . . . , x_n]^{S_n} and Λ denote the ring of symmetric polynomials in n independent variables and the ring of symmetric functions in countably many variables, respectively.
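As a concrete check of the conjugation symmetry just mentioned, the sketch below (helper names ours) uses the standard normalisation constant b_λ(q, t) = ∏_{s∈λ} (1 − q^{a(s)} t^{l(s)+1})/(1 − q^{a(s)+1} t^{l(s)}) of Macdonald theory (the paper's b^± variants are not reproduced here) and verifies numerically that b_{λ′}(q, t) · b_λ(t, q) = 1, which holds because conjugation interchanges arms and legs.

```python
# Numerical check that conjugation interchanges arms and legs: with the
# standard Macdonald normalisation constant
#   b_la(q, t) = prod_{s in la} (1 - q^{a(s)} t^{l(s)+1})/(1 - q^{a(s)+1} t^{l(s)}),
# one has b_{la'}(q, t) * b_la(t, q) = 1.

def conjugate(la):
    """Conjugate partition: column lengths of the diagram of la."""
    return [sum(1 for part in la if part >= j) for j in range(1, la[0] + 1)]

def b(la, q, t):
    """Standard b_la(q, t), with a(s) = la_i - j and l(s) = la'_j - i."""
    conj = conjugate(la)
    val = 1.0
    for i, row in enumerate(la, start=1):
        for j in range(1, row + 1):
            a, l = row - j, conj[j - 1] - i
            val *= (1 - q**a * t**(l + 1)) / (1 - q**(a + 1) * t**l)
    return val

la = [3, 2]
q, t = 0.37, 0.61
product = b(conjugate(la), q, t) * b(la, t, q)
```

Per square the two factors are exact reciprocals of each other, so the product is identically 1 as a rational function of q and t.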
For λ = (λ_1, . . . , λ_n) a partition of at most n parts the monomial symmetric function m_λ is defined as

\[ m_\lambda(x) = \sum_{\alpha} x^\alpha, \]

where the sum is over all distinct permutations α of λ, and x^α = x_1^{α_1} · · · x_n^{α_n}. For l(λ) > n we set m_λ(x) = 0. The monomial symmetric functions m_λ for l(λ) ≤ n form a Z-basis of Λ_n.
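For illustration, here is a minimal implementation of m_λ in finitely many variables (our own helper, not from the paper), summing x^α over the distinct permutations α of λ.

```python
# Monomial symmetric polynomial m_la(x_1, ..., x_n): the sum of x^alpha over
# all DISTINCT permutations alpha of la (padded with zeros to length n).
from itertools import permutations

def m(la, x):
    """m_la(x); returns 0 when l(la) > n, as in the text."""
    n = len(x)
    if len(la) > n:
        return 0
    padded = list(la) + [0] * (n - len(la))
    total = 0
    for alpha in set(permutations(padded)):   # distinct permutations only
        term = 1
        for xi, ai in zip(x, alpha):
            term *= xi**ai
        total += term
    return total

x = [2, 3, 5]
p2 = m([2], x)        # the power sum p_2 = m_(2) = 4 + 9 + 25
m21 = m([2, 1], x)    # the six distinct monomials x_i^2 x_j
```

The `set(...)` is what enforces distinctness when λ has repeated parts, e.g. for λ = (1, 1).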
For r a nonnegative integer the power sums p_r are given by p_0 = 1 and p_r = m_{(r)} for r ≥ 1. Hence

\[ p_r(x) = x_1^r + \cdots + x_n^r. \]

More generally the power-sum products are defined as p_λ(x) = p_{λ_1}(x) · · · p_{λ_{l(λ)}}(x). Define the Macdonald scalar product ⟨·, ·⟩_{q,t} on the ring of symmetric functions by

\[ \langle p_\lambda, p_\mu \rangle_{q,t} = \delta_{\lambda\mu}\, z_\lambda \prod_{i=1}^{l(\lambda)} \frac{1 - q^{\lambda_i}}{1 - t^{\lambda_i}}, \]

where z_λ = ∏_{i≥1} i^{m_i(λ)} m_i(λ)! and δ_{λµ} is the Kronecker delta. If we denote the ring of symmetric functions in n variables over the field F = Q(q, t) of rational functions in q and t by Λ_{n,F}, then the Macdonald polynomial P_λ(x; q, t) is the unique symmetric polynomial in Λ_{n,F} such that [11, Section VI.4, Equation (4.7)]

\[ P_\lambda(x; q, t) = m_\lambda(x) + \sum_{\mu < \lambda} u_{\lambda\mu}\, m_\mu(x), \qquad u_{\lambda\mu} \in F, \]

and ⟨P_λ, P_µ⟩_{q,t} = 0 if λ ≠ µ.
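The orthogonality can be made concrete in the smallest nontrivial case. The sketch below (helper names ours) implements the standard scalar product on the power-sum basis and checks that the two Macdonald polynomials indexed by the partitions of 2, in their standard power-sum expansions (Macdonald, Chapter VI), are orthogonal.

```python
# Numerical check of <P_la, P_mu>_{q,t} = 0 for la != mu, for |la| = 2.
# In the power-sum basis the scalar product is diagonal:
#   <p_la, p_la> = z_la * prod_i (1 - q^{la_i})/(1 - t^{la_i}).
# Standard expansions:
#   P_(1,1) = (p_1^2 - p_2)/2,
#   P_(2)   = p_2 + c*(p_1^2 - p_2)/2,   c = (1 + q)(1 - t)/(1 - q*t).

q, t = 0.31, 0.57

def ip(f, g):
    """<f, g>_{q,t} for f, g given as {partition(tuple): coeff} in the p-basis."""
    def weight(la):
        zla = 1.0
        for part in set(la):
            mult = la.count(part)
            zla *= part**mult * [1, 1, 2, 6, 24][mult]   # part^m * m!
        for part in la:
            zla *= (1 - q**part) / (1 - t**part)
        return zla
    return sum(cf * g.get(la, 0.0) * weight(la) for la, cf in f.items())

c = (1 + q) * (1 - t) / (1 - q * t)
P11 = {(1, 1): 0.5, (2,): -0.5}
P2 = {(2,): 1 - c / 2, (1, 1): c / 2}
orth = ip(P2, P11)        # should vanish
norm11 = ip(P11, P11)     # should be positive here
```

Any other choice of the coefficient c destroys the orthogonality, which is exactly how the u_{λµ} are pinned down by the two defining conditions.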
A second Macdonald symmetric function is defined as Q_λ(x; q, t) = b_λ(q, t) P_λ(x; q, t). The normalisation of the Macdonald inner product is then ⟨P_λ, Q_λ⟩_{q,t} = 1.

Proof of Theorem 1.3
By (3.4), Kawanaka's conjecture can be restated in the form (3.10). Our initial steps closely follow Macdonald's proof of (3.1). It suffices to prove (3.10) for the finite set of variables x = (x_1, . . . , x_n). If we also denote x′ = (x_1, . . . , x_n, y) and define (3.11), then, by induction on n, it is enough to prove (3.12). We will expand both sides of (3.12) in terms of P_µ(x; q², t²) y^r. After comparing coefficients this results in an identity for Pieri coefficients, given in Proposition 3.3 below.
Proof. By (3.11), the q-binomial theorem (3.2), and the generating function (3.8) for the g_r, the right-hand side of (3.12) can be expanded as a double sum. Recalling the Pieri rule (3.9a), this can be further rewritten as a sum over k, r ≥ 0 and partitions µ, ν with µ − ν ∈ H_r, where the last equality is true since 1/(q; q)_k = 0 for k a negative integer.

Proposition 3.3. For µ a partition and r a nonnegative integer,
(3.13)

Notationally it turns out to be slightly simpler to prove this in a form involving the Pieri coefficient ψ′_{λ/µ}(q, t), given by

\[ P_\mu(x; q, t)\, e_r(x) = \sum_{\lambda - \mu \in V_r} \psi'_{\lambda/\mu}(q, t)\, P_\lambda(x; q, t), \]

where e_r = P_{(1^r)} is the r-th elementary symmetric function. Hence we replace all partitions in (3.13) by their conjugates and use [11, page 341], as well as (3.5). Finally, dividing both sides by b^−_{µ′}(q, t), it follows that (3.13) can be rewritten as follows.

Proposition 3.3′. For µ a partition and r a nonnegative integer,
Crucial in our proof below is an explicit formula for ψ′_{λ/µ} due to Macdonald [11, Section VI.6, Equation (6.13)] (3.14).

Proof of Proposition 3.3′. Let us denote the left- and right-hand sides of the identity of Proposition 3.3′ by LHS and RHS. Let n denote the length of the partition µ, i.e., µ = (µ_1, . . . , µ_n) with µ_n ≥ 1. We can then replace the sum over ν by a sum over k-subsets I of [n] such that i ∈ I iff µ_i − ν_i = 1; in other words, I encodes the parts of ν that differ from those of µ. Using this notation as well as (3.6) and (3.14), we obtain an expression for LHS as a sum over the subsets I.

We next turn to the right-hand side, which is a sum over partitions λ such that λ − µ ∈ V_r. Recall that µ has exactly n parts. The maximum number of parts of λ is thus n + r (when λ = µ ∪ (1^r)) and the minimum number of parts of λ is n. Hence we can write RHS as a sum over s, the number of parts of λ exceeding n. Again we let I ⊆ [n] (with |I| = r − s) be the set of indices of those parts of µ to which a square is added to form λ; i ∈ I iff λ_i − µ_i = 1 for i ∈ [n]. For example, if µ = (3, 2, 2, 1) and λ = (4, 3, 2, 2, 1, 1) then n = 4, r = 5, s = 2 and I = {1, 2, 4}. From (3.7), after a tedious calculation, we obtain the corresponding expression for the summand and, furthermore, from (3.14), the value of ψ′_{λ/µ}. Putting these results together yields an explicit formula for RHS.

Finally, equating LHS and RHS, we obtain (3.15). This is a limiting case of Theorem 1.1. To see this, take the theorem with p = 0 (recall that θ(z; 0) = 1 − z) and replace the summation index r by s. Then carry out the simultaneous substitutions (for all i ∈ [n]) and take the ε → 0 limit. Finally, replacing s → r − s and multiplying both sides by (−t; q)_r/(q; q)_r yields (3.15).

4 Elliptic hypergeometric series

4.1 A new multivariable transformation formula
To turn the theta-function identity of Theorem 1.1 into an identity for elliptic hypergeometric series we apply the well-known procedure of multiple principal specialisation, see e.g., [5, 6, 13, 16], in which the x_i are specialised to N geometric progressions of lengths m_1, . . . , m_N, where m_1 + · · · + m_N = n. (In the notation of λ-rings [9] we are making the corresponding plethystic substitution.) Since θ(1) = 0 it follows that the summand vanishes unless the index set I selects k_i elements from the i-th progression, where k_1, . . . , k_N are integers such that 0 ≤ k_i ≤ m_i for each i. Since |I| = r we must of course further impose that |k| := k_1 + · · · + k_N = r.

The rest is essentially a straightforward calculation, and we only sketch the details pertaining to the right-hand side of Theorem 1.1, where we have replaced n by |m| := m_1 + · · · + m_N. After suitable substitutions one arrives at a multiple series in k_1, . . . , k_N, where, for t = (t_1, . . . , t_N) and m = (m_1, . . . , m_N), the summand is expressed in terms of the theta shifted factorials of Section 1. We note that a particularly succinct way to express this elliptic hypergeometric series follows by introducing k_{N+1} := −k_1 − · · · − k_N. A similar calculation may be carried out for the left-hand side of the theorem, and we find exactly the same multiple series, but with a replaced by â := cd/ab and t_i replaced by s_i := q^{−m_i}/t_i for all i. As a result we can claim the following transformation formula for A_{N−1} elliptic hypergeometric series.
For N = 1, after rescaling (a, â) → (a/t_1, â/s_1) and replacing m_1 → n, this gives

\[ \sum_{k=0}^n \frac{\theta(aq^{2k})}{\theta(a)} \frac{(a, b, c, d, ab/c, ab/d, abq^n, q^{-n})_k}{(q, aq/b, aq/c, aq/d, \ldots)_k} \cdots \tag{4.2} \]

where â = q^{−n} cd/ab. Curiously, even this one-dimensional case, which may also be written as a transformation between 20V19 elliptic hypergeometric series (or, for p = 0, as a transformation between 14W13 basic hypergeometric series), is new. To the best of our knowledge it is the first-ever example of a transformation that does not yield a summation upon specialisation of some of its parameters. We award AU$25 for a proof of (4.2) based on known identities for one-variable elliptic hypergeometric series, and AU$10 for a proof of the p = 0 case using identities for basic hypergeometric series. We do remark that (4.2) may be viewed as a somewhat strange generalisation of Jackson's 6φ5 summation [3, Equation (II.21)]. Indeed, after taking p = 0 the limit b → 0 can be taken. Close inspection reveals that on the right the summand then vanishes unless k = 0, resulting in Jackson's sum

\[ {}_6W_5\bigl(a; c, d, q^{-n}; q, aq^{n+1}/cd\bigr) = \frac{(aq, aq/cd; q)_n}{(aq/c, aq/d; q)_n}. \]
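The 6W5 evaluation just quoted is easy to confirm numerically. The sketch below (helper names ours) implements the terminating very-well-poised sum in the normalisation of Gasper and Rahman and compares it with the closed product form.

```python
# Numerical check of the terminating very-well-poised 6phi5 (6W5) summation:
#   sum_{k=0}^n (1 - a q^{2k})/(1 - a) * (a, c, d, q^{-n}; q)_k
#       / (q, aq/c, aq/d, aq^{n+1}; q)_k * (a q^{n+1}/(c d))^k
#     = (aq, aq/(c d); q)_n / ((aq/c; q)_n (aq/d; q)_n).

def qpoch(a, q, n):
    """(a; q)_n = prod_{k=0}^{n-1} (1 - a q^k)."""
    val = 1.0
    for k in range(n):
        val *= 1 - a * q**k
    return val

def jackson_6phi5_lhs(a, c, d, q, n):
    total = 0.0
    for k in range(n + 1):
        term = (1 - a * q**(2 * k)) / (1 - a)
        for top in (a, c, d, q**(-n)):
            term *= qpoch(top, q, k)
        for bot in (q, a * q / c, a * q / d, a * q**(n + 1)):
            term /= qpoch(bot, q, k)
        term *= (a * q**(n + 1) / (c * d))**k
        total += term
    return total

def jackson_6phi5_rhs(a, c, d, q, n):
    return (qpoch(a * q, q, n) * qpoch(a * q / (c * d), q, n)
            / (qpoch(a * q / c, q, n) * qpoch(a * q / d, q, n)))

a, c, d, q, n = 0.5, 0.7, 1.3, 0.4, 4
lhs = jackson_6phi5_lhs(a, c, d, q, n)
rhs = jackson_6phi5_rhs(a, c, d, q, n)
```

Since the sum terminates, the identity is a rational-function identity in a, c, d and q, so a single generic numerical point is already a strong check.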
The same in fact applies for general N: setting p = 0 and then taking the b → 0 limit in Theorem 4.1 leads to the following A_{N−1} extension of Jackson's 6φ5 summation. For t = (t_1, . . . , t_N) let

\[ \Delta(t) = \prod_{1 \leq i < j \leq N} (t_i - t_j) \]

be the Vandermonde product. This same result also follows by taking the d → ∞ limit in the U(n) (or A_{N−1}) Jackson sum [12, Theorem 6.14], or, alternatively, by taking the b → ∞ limit in the D_N Jackson sum [1, Theorem A.12].
By a standard analytic argument, see e.g., [14], the sum over the N -dimensional hyperrectangle in Theorem 4.1 may be transformed into a sum over the N -simplex k 1 , . . . , k N ≥ 0, k 1 + · · · + k N ≤ n.
From the p = 0 case of Theorem 4.1 we may also deduce the following double multi-sum identity.
Corollary 4.4. For m_1, . . . , m_N nonnegative integers and |m| = m_1 + · · · + m_N we have the following double multi-sum identity.

Proof. We define a matrix M_{mk} so that the p = 0 case of Theorem 4.1 takes the form of a lower-triangular matrix identity. The inverse of the infinite-dimensional lower-triangular matrix M_{mk} is known explicitly, and hence the p = 0 case of Theorem 4.1 is equivalent to the claimed identity.

We remark that for N = 1 the identity in Corollary 4.4 (in which case the determinant appearing in the summand of the double sum factorises) admits the following elliptic extension:

\[ \cdots \frac{(aq^m, q^{-m})_{l+k}}{(bq^{1-m}, abq^{m+1})_{l+k}} \frac{(b, ab/c, ab/d, aq/cd)_l}{(q, aq/c, aq/d, ab/cd)_l}\, q^l \times \frac{\theta(cdq^{k-l}/ab)}{\theta(cdq^{-l}/ab)} \cdots \]