Multivariate Quadratic Transformations and the Interpolation Kernel

We prove a number of quadratic transformations of elliptic Selberg integrals (conjectured in an earlier paper of the author), and study in depth the "interpolation kernel", an analytic continuation of the author's elliptic interpolation functions which plays a major role in the proof, as well as acting as the kernel for a Fourier transform on certain elliptic double affine Hecke algebras (discussed in a later paper). In the process, we give a number of examples of a new approach to proving elliptic hypergeometric integral identities, by reduction to a Zariski dense subset of a formal neighborhood of the trigonometric limit.


Introduction
In [23], the author conjectured a number of multivariate integral identities, which could be viewed either as elliptic analogues of the vanishing results of [24] (which in turn were Macdonald polynomial analogues of, e.g., the classical fact that the integral of a Schur function over the orthogonal group is 0 unless the corresponding partition has no odd parts), or as multivariate elliptic hypergeometric quadratic transformations. (In the latter interpretation, the conjectures were new even in the "trigonometric" (q-hypergeometric) limit.) The first purpose of the present note is to prove these conjectures, as well as to offer a partial explanation of some of the commonalities of structure shared by the seven main conjectures.
These conjectures were originally formulated by first guessing one conjecture (and verifying it in various special cases), then using certain symmetries to transform the given conjecture into qualitatively quite different forms. Unfortunately, for the most part, those symmetries did not actually apply to the specific contour integrals being considered, but rather only to certain degenerations of those integrals to finite sums. As a result, even though the main conjectures form a single orbit under the symmetries, it would seem that we need to prove each conjecture independently. This impression is especially reinforced by the fact that in many cases, the conjectures actually involved two qualitatively different finite degenerations (corresponding to the fact that, unlike most elliptic hypergeometric integral identities, these quadratic transformations break the symmetry between p and q). As a result, the formulation of the conjectures involved several steps in which the author had to essentially guess half of the identity. On the other hand, two special cases of the conjectures were proved in [4]; since both of those were separated by at least one such guess from the original conjecture, this suggested that there might in fact be more structure to the symmetries than was apparent.
Note that although the conjectures had multivariate quadratic transformations as special cases, the full version also incorporated "representation theory"-ish information. For the analogues of results of [24], this involved introducing one of the biorthogonal functions of [22,21] into the integrand; similarly, the full quadratic transformations used the corresponding "interpolation functions". One difficulty in working with the symmetries is that the biorthogonal and interpolation functions are not in general elliptic; rather, they are products of two functions, each of which lives on a different elliptic curve. Thus even if we knew one of the conjectures in the elliptic case, this would still not suffice to prove the conjecture in general. It turns out, however, that there is a way to finesse this issue. In addition to five continuous parameters and n variables, the interpolation functions also depend on a pair of partitions (one for each elliptic factor). It turns out that this discrete family of functions can instead be obtained as a discrete family of specializations of a single function (the "interpolation kernel") depending on four continuous parameters and two sets of n variables; for each pair of partitions, there is a corresponding 1-parameter family of specializations for the second set of variables giving the corresponding interpolation function.
Thus, rather than attempt to prove the various quadratic transformations in their original versions, we could instead try to prove the analogous identities involving this kernel. Although at first glance, this does not appear to be any easier (though it does, at least, make the expressions of the identities somewhat simpler), a surprising thing happens that allows us to reduce to simpler cases. It turns out that under fairly weak conditions, an identity involving the interpolation kernel is actually equivalent to the specialization not just to interpolation functions, but to elliptic interpolation functions. In fact, in many cases, it is even equivalent to an identity between sums of elliptic functions. This is a special case of a general principle, which is likely to be useful in proving a wide variety of elliptic hypergeometric integral identities. (Indeed, an important secondary objective of the present note is to introduce this technique, and give some examples of various ways it can be applied.) Suppose we are given an elliptic hypergeometric integral that becomes an evaluation under some suitable limit as p → 0. (E.g., most of the quadratic transformations we wish to prove have limits in which both sides are integrals over the Koornwinder density.) In particular, since the integral is holomorphic at p = 0, we may expand it in a power series (or, more likely, a Puiseux series) in p around that point. In many cases, we can then show that the coefficients of that Puiseux series are rational functions in the remaining parameters (possibly after dividing by the constant term). We can then prove equality of two such integrals by showing that those rational functions agree. In particular, it suffices to prove the identity for a Zariski dense set of points, rather than needing equality on a set with a limit point (or something analogous if we have specialized multiple parameters). 
For instance, most elliptic hypergeometric integrals that have been considered in the literature have degenerations coming from residue calculus in which they become finite sums. This typically involves a degree of freedom being specialized to a power of q; since the parameters generally live on C * , the powers of q do not even have a limit point, but are certainly Zariski dense. (The specializations that turn the interpolation kernel into elliptic interpolation functions are similarly dense.) This reduction to finite identities is, of course, particularly powerful when the identity of interest was formulated as the integral analogue of a sum.
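The underlying algebraic fact here is elementary; for concreteness, it can be stated as follows (our formulation, not taken from the text):

```latex
% Zariski density of geometric progressions in C^*:
\begin{lemma}
Let $f \in \mathbb{C}(z)$ and let $q \in \mathbb{C}^*$ not be a root of
unity.  If $f(q^k) = 0$ for every $k \in \mathbb{Z}_{\ge 0}$ at which $f$
is defined, then $f = 0$.
\end{lemma}
\begin{proof}
Write $f = g/h$ with $g, h$ polynomials.  Since $q$ is not a root of
unity, the points $q^k$ are pairwise distinct, so $g$ has infinitely many
zeros and must vanish identically.
\end{proof}
```

Thus a rational-function identity verified on a geometric progression of parameter values holds identically, even though the progression has no limit point in $\mathbb{C}^*$.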
Once we begin thinking about the Puiseux series associated to an integral, we find that there are other possible ways to establish equalities between such series. Here the additional observation we make is that although the Puiseux series we obtain are, of course, convergent, we never use that fact in establishing the identities. Once we think of the problem as establishing an identity between formal Puiseux series, we see that there is, for instance, no need to restrict our attention to convergent formulas for those series. For instance, one of the key ideas in implementing one of the symmetries we need for the quadratic transformations is the observation that as long as we are willing to forego convergence, we can replace the dimension of the integrals by a continuous parameter (in such a way that the finite-dimensional case is Zariski dense). This analytic continuation then has additional symmetries not preserving the finite-dimensional specialization, allowing us to prove some of the trickier quadratic transformations. Other approaches used below involve writing the Puiseux series as a formally convergent infinite sum (sidestepping the otherwise formidable obstacles to making sense of nonterminating elliptic hypergeometric sums), or showing that the series satisfies a family of difference equations having a unique formal solution.
Although we initially introduced the interpolation kernel as merely a tool for using the above formal method to analytically continue results involving interpolation functions, we quickly found that it had a number of remarkable properties justifying studying it in its own right, bringing us to the main purpose of the present note beyond proving the quadratic transformations. One key property we will make only limited use of at present (but will use more extensively in future work) is the fact that the interpolation kernel can be viewed as the kernel of an integral transformation, a sort of multivariate elliptic generalization of the Fourier-Laplace transform. Just as the usual Fourier transformation acts on the algebra of differential operators, this transformation acts on a certain algebra of elliptic difference operators; among other things, it preserves (up to an additive scalar and an action on the parameters) the van Diejen Hamiltonian [27] (as well as the filtered algebra of operators commuting with this integrable Hamiltonian). Another application of the kernel is that we can use it in "Bailey lemma"-type arguments; thus, for instance, we will be able to directly derive the W (E 7 )-symmetry of the order 1 elliptic Selberg integral [22,Thm. 9.7] from the kernel analogue (the "braid relation") of the much simpler Theorem 9.2 of [22]. With this in mind, we will devote a fair amount of space in the present note to developing a theory of this kernel.
We will take as our primary definition of the interpolation kernel a certain sum of interpolation functions, generalizing the Cauchy identity of [23]. A priori, this sum is nonconvergent (indeed, we cannot even avoid poles of terms without excluding a dense set!), but this is avoided if instead we take a Puiseux series in p and only insist on formal convergence, which turns out to hold without much difficulty as long as the remaining parameters behave as suitable powers of p. We then find without much difficulty that the coefficients of the resulting series are rational functions in the variables and parameters, with well-controlled poles. If we specialize one of the sets of variables to make the sum terminate (a Zariski dense set of specializations), we find that the result can be evaluated as an interpolation function. (This function now lives on a formal elliptic curve, a.k.a. a "Tate curve" [26].) Thus various results about interpolation functions have immediate consequences for this formal kernel. In particular, we obtain an identity to the effect that a certain integral operator built from this kernel has a very nice action on interpolation functions.
As it stands, the formal kernel would be of only limited usefulness. There are, however, two important ways in which one can extend this kernel. The first involves the fact that one can express the n-dimensional formal kernel as an integral involving the (n − 1)-dimensional formal kernel (integrating term-by-term), and thus in general as an n(n − 1)/2-dimensional integral. This expression turns out to be analytically convergent on a large open set of parameters, and by general considerations of [22, §10] extends to a meromorphic function on parameter space, giving us the true interpolation kernel. In other words, although the infinite sum defining the formal kernel is not analytically convergent, the limiting formal power series often converges. And, of course, the analytic interpolation kernel inherits the identities of the formal kernel! It also has additional symmetries of its own; in addition to the symmetry between p and q that those familiar with known elliptic hypergeometric integrals might expect, there is also a symmetry between t and pq/t. We also find that there are values of the parameter outside the domain of formal convergence in which we recover known operators: not only the integral operators of [22], but the difference operators as well. These difference operators in turn extend to a family of formal difference operators; in this case, not formal in their coefficients (which are functions on a general elliptic curve) but in that they are formal series in the set of possible shifts. (This means that the operators cannot be applied to actual functions, but as we will see does not cause any particular difficulty in multiplying (and dividing!) them.) These formal operators will not play a direct role in the present work, but in future work will play an important role in understanding a certain algebra of (actual) difference operators, by giving an alternate approach to the view of the integral operators as generalized Fourier transformations.
The other important way in which we can extend the formal kernel is that we can analytically continue in the dimension. More precisely, the coefficients of the n-dimensional formal kernel are symmetric Laurent polynomials in the two sets of variables, and in each case can be expressed as a suitable specialization of a certain symmetric function depending only mildly on n. Again, this gives rise to additional symmetries, the two main ones being an action of the Macdonald involution and a certain plethystic symmetry. The key ingredient in this construction is a corresponding symmetric function version of the interpolation functions, which (once we start thinking in terms of formal series) is a straightforward consequence of identities of [23]. Although it would be surprising if either symmetric function analogue converged analytically, we can still use it to prove identities: simply check the identity on a Zariski dense set of points, then specialize to a case where the series converges.
As we mentioned above, the expression of the formal kernel as a sum can be viewed as a deformation of a nonterminating version of the elliptic Cauchy identity of [23]. That paper also established an elliptic analogue of the Littlewood identity, which also immediately extends to a nonterminating formal identity. It turns out that the resulting sum also has a 1-parameter deformation which sums to the Puiseux series of a certain meromorphic function. This also arises in a straightforward way from the kernel analogue of Conjecture L1 of [23] (not one of the main sequence of quadratic transformations). This transformation (which is relatively straightforward to prove given the machinery we develop for the interpolation kernel) has a special case in which the symmetry becomes a continuous one; the resulting function with a one-parameter family of integral representations also has a formal expression as a deformed Littlewood sum. A "Bailey Lemma"-type argument turns this family of integral representations into a transformation, only one side of which involves this "Littlewood kernel". The form of the right-hand side turns out to appear in two other conjectures from [23], which can thus be interpreted as describing certain degenerations of this function. Moreover, the Littlewood kernel has a special case (i.e., when the deformed Littlewood sum is not actually deformed) that can be expressed as a product, and thus gives rise to its own set of identities along the lines of [23]. In fact, this identity, together with the t → pq/t symmetry as well as the various other symmetries used in [23], is what will let us prove the main quadratic transformations.
The plan of the paper is as follows. Apart from a discussion of notation and reminders of the interpolation functions in the remainder of this introduction, we will begin in Section 2 by defining the formal kernel and proving some initial properties, culminating in the main integral representation. In Section 3, we will then use this to define the full analytic version of the kernel, and establish a number of its properties and special cases. Section 4 (which will not be used elsewhere in the present work) uses the kernel to construct a certain family of formal difference operators and again considers their main properties. The main result (Theorem 4.10) of this section is that these operators can be used to construct twisted representations of a certain sequence of Coxeter groups; the one nontrivial braid relation in this interpretation turns out to be the difference operator form of the main identity satisfied by the kernel (Proposition 2.8), which we thus refer to as "the" braid relation. Using these operators, we also make precise a weak version (Theorem 4.11 below) of the fact that the kernel is the kernel of a generalized Fourier transformation. Section 5 constructs the symmetric function version of the formal kernel, as well as reminding the reader of some of the properties (especially duality) of the corresponding analogue of the Koornwinder integral that will be needed in order to apply this kernel. Of particular note here is Lemma 5.10, which shows how a certain 0/0 issue in degenerating these integrals results in their expansion as a sum of two finite-dimensional integrals, in particular explaining why Conjecture Q5 of [23] involved such sums. Section 6 proves the kernel version of Conjecture L1 of [23], and as mentioned above uses this to construct the Littlewood kernel and understand a number of identities satisfied by the kernel. 
(One notable special case is an elliptic analogue (Theorem 6.24) of Conjecture 1 of [1], which expressed a certain deformation of the usual Littlewood identity for Macdonald polynomials as a pfaffian related to the 6-vertex model.) Section 7 proves Conjectures L2 and L3 of [23], and studies the corresponding analogues of the Littlewood kernel (the dual Littlewood and Kawanaka kernels, respectively). In Section 8, we use the machinery developed for these various kernels to finally prove the remaining conjectures of [23], the promised multivariate quadratic transformations, and consider a few new transformations that arise by viewing these as statements about degenerations of the Littlewood and other kernels. We finish with an appendix of sorts that uses properties of the interpolation polynomials of [17,19] to establish that certain difference and integral equations have unique polynomial solutions. (This will then imply that certain equations with formal coefficients have unique formal solutions, which will in turn be used in proving Theorems 6.24 and 8.10 below.)

Acknowledgements
The author would particularly like to thank P. Etingof for an initial suggestion that taking p to be a formal variable might allow one to extend the W (E 7 ) symmetry of the order 1 elliptic Selberg integral to W (E 8 ); this turned out not to work (some symmetries are, indeed, gained, but at the expense of others), but led the author to a more general study of the formal limit. In addition, the author would like to thank D. Betea, M. Wheeler, and P. Zinn-Justin for discussions relating to Izergin-Korepin determinants and their elliptic analogues, and especially for discussions relating to Conjecture 1 of [1] (which led the author to consider the general case of the Littlewood kernel below). The author would also like to thank O. Warnaar for additional discussions related to the Macdonald polynomial limit. The author would finally like to thank H. Rosengren for providing extra motivation to finish writing the present work, as well as some helpful pointers to the vertex model literature. The author was partially supported by the National Science Foundation (grant number DMS-1001645).
Notation
As in [23], we will be using the notation of [21] and [22]. In particular, bold-face Greek letters refer to pairs of partitions; if only one of the partitions is nonzero, we will either give the partition pair explicitly, or rewrite using the notation of [21], explicitly breaking the symmetry between p and q. Thus the interpolation functions are denoted by
$$\mathcal R^{*(n)}_{\boldsymbol\lambda}(z_1,\dots,z_n;a,b;t;p,q),\tag{1.1}$$
which factors as
$$\mathcal R^{*(n)}_{\lambda,\mu}(z_1,\dots,z_n;a,b;t;p,q)=\mathcal R^{*(n)}_{\lambda}(z_1,\dots,z_n;a,b;p,t;q)\,\mathcal R^{*(n)}_{\mu}(z_1,\dots,z_n;a,b;q,t;p),\tag{1.2}$$
with the first factor q-elliptic, and the second p-elliptic. (In fact, we will nearly always be using the elliptic notations, as in the vast majority of the cases in which we would want to use the full versions, we will be using the kernel instead!) Relations and operations on single partitions extend to partition pairs in the obvious way; in particular, $\boldsymbol\lambda\subset\boldsymbol\mu$ denotes the product of the usual inclusion orders on the two pieces. We will need some additional notations for partitions. Of particular importance are $\lambda^2$, denoting the partition with $(\lambda^2)_i=\lambda_{\lceil i/2\rceil}$, and $2\lambda$, denoting the partition with $(2\lambda)_i=2\lambda_i$, both extending immediately to partition pairs. (The latter will appear in the form $(1,2)(\lambda,\mu)=(\lambda,2\mu)$.) (We will use a similar notation for the biorthogonal functions, but these will only appear briefly in certain corollaries not otherwise used.) We will also find it convenient to let $\vec z$ denote the tuple $z_1,\dots,z_n$ of arguments, and similarly for $\vec x$, $\vec y$, etc.
We specifically recall the elliptic Gamma function
$$\Gamma_{p,q}(x):=\prod_{0\le i,j}\frac{1-p^{i+1}q^{j+1}/x}{1-p^iq^jx},$$
with the convention here (and for $\Gamma^+$, $\theta$, etc.) that multiple arguments express a product:
$$\Gamma_{p,q}(x_1,\dots,x_k):=\prod_{1\le r\le k}\Gamma_{p,q}(x_r).$$
This satisfies the functional equations
$$\Gamma_{p,q}(qx)=\theta_p(x)\Gamma_{p,q}(x),\qquad \Gamma_{p,q}(px)=\theta_q(x)\Gamma_{p,q}(x),\qquad \Gamma_{p,q}(pq/x)=\Gamma_{p,q}(x)^{-1},$$
where
$$\theta_p(x):=\prod_{0\le i}(1-p^ix)(1-p^{i+1}/x)$$
is a theta function ($\theta_p(\exp(2\pi ix))$ is doubly quasiperiodic), as well as the "quadratic" functional equations
$$\Gamma_{p,q}(x^2)=\Gamma_{p,q}(\pm x,\pm p^{1/2}x,\pm q^{1/2}x,\pm(pq)^{1/2}x),\qquad \Gamma_{p^2,q}(x^2)=\Gamma_{p,q}(\pm x,\pm q^{1/2}x)$$
(with $\pm$ denoting a product over both choices of sign), which will be useful below. Certain special values of $\Gamma_{p,q}$ and $\theta_p$ will arise as well, in the process of various (omitted) simplifications. We will also make brief use of the triple gamma function $\Gamma^+_{p,q}$, with its analogous functional equations and so forth.
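As a quick numerical sanity check on the product form of the elliptic Gamma function and its shift and reflection equations, one can truncate the products; the following sketch (the truncation depths and function names are our own arbitrary choices) verifies both to high precision:

```python
def elliptic_gamma(x, p, q, N=60):
    # truncation of Gamma_{p,q}(x) = prod_{i,j>=0} (1 - p^{i+1} q^{j+1}/x)/(1 - p^i q^j x)
    val = 1.0
    for i in range(N):
        for j in range(N):
            val *= (1 - p**(i+1) * q**(j+1) / x) / (1 - p**i * q**j * x)
    return val

def theta(x, p, N=200):
    # truncation of theta_p(x) = prod_{i>=0} (1 - p^i x)(1 - p^{i+1}/x)
    val = 1.0
    for i in range(N):
        val *= (1 - p**i * x) * (1 - p**(i+1) / x)
    return val

p, q, z = 0.15, 0.2, 0.7
shift_lhs = elliptic_gamma(q * z, p, q)            # Gamma(q z)
shift_rhs = theta(z, p) * elliptic_gamma(z, p, q)  # theta_p(z) Gamma(z)
reflection = elliptic_gamma(z, p, q) * elliptic_gamma(p * q / z, p, q)  # should be 1
```

For |p|, |q| < 1 the neglected tail of the double product is exponentially small, so modest truncation depths already agree to many digits.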
We will also need two families of densities. The simpler of the two is the elliptic Dixon density
$$\Delta^{(n)}_I(z_1,\dots,z_n;p,q)\propto\prod_{1\le i<j\le n}\frac{1}{\Gamma_{p,q}(z_i^{\pm1}z_j^{\pm1})}\prod_{1\le i\le n}\frac{1}{\Gamma_{p,q}(z_i^{\pm2})},$$
which will also appear in a form with "univariate" parameters, via a factor $\prod_{1\le i\le n}\prod_{0\le r<2n+2m+4}\Gamma_{p,q}(u_rz_i^{\pm1})$. We allow m negative here, but note that it in general measures the complexity of the integral; e.g., for m = 0, the integral of this density has an explicit evaluation ([22, Cor. 3.2], originally conjectured in [9] as the "Type I" integral):
$$\frac{((p;p)_\infty(q;q)_\infty)^n}{2^nn!}\int_{C^n}\prod_{1\le i<j\le n}\frac{1}{\Gamma_{p,q}(z_i^{\pm1}z_j^{\pm1})}\prod_{1\le i\le n}\frac{\prod_{0\le r<2n+4}\Gamma_{p,q}(u_rz_i^{\pm1})}{\Gamma_{p,q}(z_i^{\pm2})}\frac{dz_i}{2\pi\sqrt{-1}\,z_i}=\prod_{0\le r<s<2n+4}\Gamma_{p,q}(u_ru_s),\tag{1.19}$$
subject to the "balancing" condition $\prod_ru_r=pq$. The contour here must separate the double geometric progressions of poles converging to 0 from those converging to ∞; if $|u_r|<1$ for all r, we may take the contour to be the unit circle. (This contour may not always exist, but by general considerations [22, §10] such an integral always gives a well-defined meromorphic function.) The other density we will need is the elliptic Selberg density,
$$\Delta^{(n)}(z_1,\dots,z_n;t;p,q)\propto\prod_{1\le i<j\le n}\frac{\Gamma_{p,q}(tz_i^{\pm1}z_j^{\pm1})}{\Gamma_{p,q}(z_i^{\pm1}z_j^{\pm1})}\prod_{1\le i\le n}\frac{1}{\Gamma_{p,q}(z_i^{\pm2})}.\tag{1.20}$$
(The ratio between the two densities will also appear quite a few times below.) Again, this will typically be given additional parameters, via a factor $\prod_{1\le i\le n}\prod_{0\le r<2m+6}\Gamma_{p,q}(u_rz_i^{\pm1})$. This has the evaluation ([22, Thm. 6.1], originally conjectured in [8])
$$\frac{((p;p)_\infty(q;q)_\infty)^n}{2^nn!}\int_{C^n}\prod_{1\le i<j\le n}\frac{\Gamma_{p,q}(tz_i^{\pm1}z_j^{\pm1})}{\Gamma_{p,q}(z_i^{\pm1}z_j^{\pm1})}\prod_{1\le i\le n}\frac{\prod_{0\le r<6}\Gamma_{p,q}(u_rz_i^{\pm1})}{\Gamma_{p,q}(z_i^{\pm2})}\frac{dz_i}{2\pi\sqrt{-1}\,z_i}=\prod_{1\le j\le n}\Bigl(\frac{\Gamma_{p,q}(t^j)}{\Gamma_{p,q}(t)}\prod_{0\le r<s<6}\Gamma_{p,q}(t^{j-1}u_ru_s)\Bigr),$$
now with balancing condition $t^{2n-2}\prod_{0\le r<6}u_r=pq$. The t-dependent factors of the density force us to insist that C contains tC; again, as long as the $u_r$ and t are in the unit circle, there is no difficulty taking $C=S^1$. There is also a transformation for order m = 1, [22, Thm. 9.7].
(As an aside, we note that both of the above evaluations are special cases of the analytic form of Proposition 2.8 below; similarly, the transformation of the order 1 elliptic Selberg integral is a special case of Theorem 3.9 below.) Note that for either density, if two parameters multiply to pq, then the reflection relation of the elliptic gamma function causes the two parameters to cancel out, thus reducing the order by 1.
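For the m = 0, n = 1 case of the Type I evaluation (Spiridonov's elliptic beta integral, six parameters with product pq), the identity can be checked numerically by truncating the elliptic Gamma products and applying the midpoint rule on the unit circle. A sketch (our own choices of parameters, truncation depths, and function names; all six parameters are taken equal so that the balancing condition holds):

```python
import cmath

def egamma(x, p, q, N=30):
    # truncated elliptic Gamma function Gamma_{p,q}(x)
    v = 1.0 + 0.0j
    for i in range(N):
        for j in range(N):
            v *= (1 - p**(i+1) * q**(j+1) / x) / (1 - p**i * q**j * x)
    return v

def qpoch(a, q, N=200):
    # truncated q-Pochhammer symbol (a; q)_infinity
    v = 1.0
    for i in range(N):
        v *= 1 - a * q**i
    return v

p = q = 0.2
t = [(p * q) ** (1.0 / 6)] * 6   # balancing: t_0 * ... * t_5 = pq, all |t_r| < 1
M = 100                          # midpoint rule on the unit circle
acc = 0.0j
for k in range(M):
    z = cmath.exp(2j * cmath.pi * (k + 0.5) / M)  # offset avoids z = +-1
    term = 1.0 + 0.0j
    for tr in t:
        term *= egamma(tr * z, p, q) * egamma(tr / z, p, q)
    term /= egamma(z * z, p, q) * egamma(1 / (z * z), p, q)
    acc += term
lhs = qpoch(p, p) * qpoch(q, q) / 2 * acc / M   # ((p;p)(q;q)/(4 pi i)) contour integral of (...) dz/z
rhs = 1.0 + 0.0j
for r in range(6):
    for s in range(r + 1, 6):
        rhs *= egamma(t[r] * t[s], p, q)
```

Since the integrand is analytic in an annulus around the unit circle, the midpoint rule converges geometrically, and the two sides agree to well beyond the tolerance used below.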
Connected to the elliptic Selberg density is a family of difference operators satisfying (formal) adjointness relations with respect to that density. The simplest such operator, $D_q(t;p)$, is self-adjoint with respect to the 0-parameter elliptic Selberg density, and acts on hyperoctahedrally symmetric functions by an expression of the form
$$(D_q(t;p)f)(z_1,\dots,z_n)=\sum_{\sigma\in\{\pm1\}^n}c_\sigma(\vec z;t;p)\,f(q^{\sigma_1/2}z_1,\dots,q^{\sigma_n/2}z_n),\tag{1.24}$$
with theta-function coefficients $c_\sigma$ as in [22]. More generally, we define an operator $D_q(u_1,\dots,u_{2m+2};t;p)$; this has the effect of multiplying the term corresponding to σ by $\prod_{1\le i\le n,\,1\le j\le 2m+2}\theta_p(u_jz_i^{\sigma_i})$. In the case of ambiguity regarding the variables on which a given difference operator acts, we will specify those variables as a subscript, as $D_{q,z}$.

We will also need some finite products. The factors
$$\Delta^0_{\boldsymbol\lambda}(a|b_0,\dots,b_{n-1};t;p,q)\qquad\text{and}\qquad\Delta_{\boldsymbol\lambda}(a|b_0,\dots,b_{n-1};t;p,q)\tag{1.26}$$
that appear below are certain multivariate q-symbols (see the introduction of [22], where the first is defined). Note that
$$\Delta^0_{\lambda,\mu}(a|b_0,\dots,b_{n-1};t;p,q)=\Delta^0_{\lambda,0}(a|b_0,\dots,b_{n-1};t;p,q)\,\Delta^0_{0,\mu}(a|b_0,\dots,b_{n-1};t;p,q),\tag{1.30}$$
and if $n=2m$, $\prod_{0\le r<2m}b_r=(pqa)^m$, then both factors are elliptic subject to this constraint; i.e., each factor is invariant under shifting the parameters by integer powers of p such that the balancing condition remains satisfied.
As with the interpolation functions, we also use an elliptic version, and similarly for $\Delta^0$. We also take the convention of omitting p in the limit p = 0.
The key property of the interpolation functions is their vanishing property ([22, Cor. 8.12]); this property and the triangularity property are related by a complementation symmetry, and together determine the interpolation function up to normalization, which is determined by its values at the principal specialization. These values of interpolation functions appear frequently enough to merit their own notation: we define the corresponding binomial coefficients, where the first factor is q-elliptic in a, b, p, and t, and similarly for the second factor. In actuality, we will essentially only use the alternate normalization of [21] (in its p, q-symmetric version). The binomial coefficients so normalized are products of elliptic binomial coefficients.

Finally, as mentioned above, we will quite frequently be taking p to be a formal variable. More precisely, we will take the various parameters to be elements of some field of formal power series, in such a way that $|p|<1$. For a nonzero element of such a field, we define
$$\operatorname{ord}_p(x)=\frac{\log|x|}{\log|p|},\tag{1.43}$$
and will typically omit the subscript p. (We will, in fact, only use the subscript in a few cases, in which it is more natural to let the formal parameter be q.) Note that here $|\cdot|$ denotes the non-Archimedean absolute value, i.e., $\exp(-d)$ where d is the degree of the leading term of the power series. We will generally take the field to be power series in some fixed N-th root of p (with coefficients meromorphic or rational in the remaining variables, as appropriate), but will in general simply refer to it as the field of formal Puiseux series. (For simplicity, we only allow rational exponents with finite common denominator; this could be weakened in general, but we will never need more than fourth roots in any event.) We will also use this order notation in the analytic case, to cover the case in which the formal Puiseux series arises as an actual Puiseux series. 
That is, given a limit of functions, integrals, etc., in which x is a parameter or variable, we will define ord p (x) to be the limit of log |x|/ log |p|, this time using the usual complex absolute value. In general, we will take these limits in such a way that x = p α x 0 for x 0 fixed.
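Concretely, the order function on formal Puiseux series can be modeled by representing a series as a map from rational exponents to coefficients, with $\operatorname{ord}_p$ reading off the leading exponent (a toy model of our own devising, normalized so that $\operatorname{ord}_p(p)=1$):

```python
from fractions import Fraction

def ord_p(series):
    # order of a formal Puiseux series {exponent: coefficient}:
    # the smallest exponent carrying a nonzero coefficient
    support = [e for e, c in series.items() if c != 0]
    if not support:
        raise ValueError("ord_p(0) is undefined")
    return min(support)

def mul(f, g):
    # product of two such series; ord_p is additive on products
    h = {}
    for ef, cf in f.items():
        for eg, cg in g.items():
            h[ef + eg] = h.get(ef + eg, 0) + cf * cg
    return h

# c = p^(1/2) - p has order 1/2; x = 3 + p^(1/2) has order 0
c = {Fraction(1, 2): 1, Fraction(1, 1): -1}
x = {Fraction(0, 1): 3, Fraction(1, 2): 1}
```

Exact rational exponents (via `Fraction`) match the convention above that all exponents have a finite common denominator.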

The formal kernel
As we mentioned in the introduction, although normally nonterminating elliptic hypergeometric series fail to converge, we can finesse this issue by taking p to be a formal variable. The general principle with natural series of this kind is that the valuation of the terms depends linearly on the index of summation; e.g., for a sum over partitions, the term associated to λ will be O(p α|λ| ) for some α. Thus in practice, a series will converge formally iff it becomes the trivial sum 1 (or any other single-term sum) in the limit p → 0. In particular, given a sufficiently large family of finite sums with this property, we can hope to have a straightforward continuation to the nonterminating case.
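The bookkeeping behind formal convergence is simple enough to mechanize: if the term indexed by λ has order α|λ| with α > 0, then the coefficient of any fixed power of p receives contributions from only finitely many partitions. A sketch (with a placeholder term weight; the helper names are ours), where taking the weight identically 1 makes the coefficient of $p^{\alpha k}$ count the partitions of k:

```python
def partitions(n, max_part=None):
    # all partitions of n as weakly decreasing tuples
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def truncated_sum(term, alpha, N):
    # formal sum over partitions lam of term(lam) * p^(alpha*|lam|),
    # truncated below order N; returns {order: coefficient}
    out = {}
    size = 0
    while alpha * size < N:
        for lam in partitions(size):
            order = alpha * size
            out[order] = out.get(order, 0) + term(lam)
        size += 1
    return out

series = truncated_sum(lambda lam: 1, 1, 6)   # coefficients: partition numbers
```

A genuinely elliptic summand would replace the placeholder weight by the (rational) normalized coefficient attached to λ, but the truncation logic is unchanged.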
For our purposes, the most important sum will be the one expanding an interpolation function from one basis in terms of the interpolation functions from another basis. The coefficients here are "elliptic binomial coefficients", but these are essentially just values of interpolation functions. As a result, we may express the expansion in the following form (with some reparametrization).
Lemma 2.1. ([21, Cor. 4.14]) For $t_0,u_0,c,q,t\in\mathbb C^*$, and any $|p|<1$, we have
$$\mathcal R^{*(n)}_\lambda(x_1,\dots,x_n;cu_0,c/t^{n-1}u_0;q,t;p)=\Delta^0_\lambda(t^{2n-2}u_0^2|cu_0/t_0,ct^{n-1}t_0u_0;q,t;p)\sum_{\mu\subset\lambda}\Delta_\mu(t^{2n-2}t_0u_0/c|t^n,pqt^{n-1}/c^2;q,t;p)\,\mathcal R^{*(n)}_\mu(x_1,\dots,x_n;t_0,c/t^{n-1}u_0;q,t;p)\,\mathcal R^{*(n)}_\mu(\dots,q^{\lambda_i}t^{n-i}u_0,\dots;u_0,c/t^{n-1}t_0;q,t;p).\tag{2.1}$$
As mentioned, we need the sum to be dominated by its first term, and to understand when that happens, we need to understand the valuations of the individual components of the summand. To make our initial calculations easier, we assume (as we always will) that q, t have order 0, and, at least initially, that $t_0,u_0,x_1,\dots,x_n$ have order 0. Since the Δ symbol is defined as a product, it is straightforward to compute its valuation, and we find that it has order $\operatorname{ord}(c)|\mu|$, as long as $0<\operatorname{ord}(c)\le1/2$. (Here, of course, this is only the generic valuation; dividing by this power of p makes the limit a generically nonzero rational function.) The interpolation functions are a priori harder to control, but luckily their valuations were computed in [6], and we find that both interpolation functions have order 0. Combining, we find that the μ term in the sum has order $\operatorname{ord}(c)|\mu|$. In particular, we find that even if the second interpolation function is evaluated at a generic point (with coordinates of order 0), the corresponding nonterminating sum will converge formally. More generally, if $x_1,\dots,x_n$ have order $x\in(-1/2,1/2)$, the results of [6] indicate that the interpolation function has order $-|x||\mu|$, and the argument there further demonstrates that this is a lower bound so long as $|\operatorname{ord}(x_1)|,\dots,|\operatorname{ord}(x_n)|\le x$.
We are led to introduce some additional prefactors to maximize symmetry, and arrive at the following definition.
Definition 1. For p a formal variable, $|q|,|t|<1$, and x, y, c parameters with $0<\operatorname{ord}(c)\le1/2$, the formal interpolation kernel $K^{(n)}_c(\vec x;\vec y;q,t;p)$ is defined by the following infinite sum (depending on auxiliary variables $t_0$, $u_0$ of valuation 0):

Remark 1. The reason we take $\operatorname{ord}(c)\le1/2$ is to accommodate the factor $\prod_{1\le i\le n}\Gamma_{p,q}(t^{i-n}c^2)^{-1}$ above. We could of course omit this factor, but at the cost of making a number of later formulas (in particular for the analytic kernel) more complicated. We find in general that the given expression for
$$\prod_{1\le i\le n}\Gamma_{p,q}(t^{i-n}c^2,t^i)\,K^{(n)}_c(\vec x;\vec y;q,t;p)\tag{2.4}$$
converges formally (and has limit 1) as long as (2.5) holds, and agrees with the corresponding formal expansion of the analytic kernel (q.v.) as long as $\operatorname{ord}(t_0)=\operatorname{ord}(u_0)=0$.
Lemma 2.2. For a, q, t, $\vec x$ of order 0, and $0<\operatorname{ord}(b)$, the coefficient of $p^\alpha$ in $\mathcal R^{*(n)}_\lambda(\vec x;a,b;q,t;p)$ vanishes for $\alpha<0$, and is otherwise a hyperoctahedrally symmetric Laurent polynomial in $\vec x$ of degree at most $\alpha/\operatorname{ord}(b)+|\lambda|$, with coefficients rational functions in the remaining parameters.
Proof. That the coefficient of $p^\alpha$ vanishes for $\alpha<0$ follows by the valuation calculation of [6]. That the coefficients are hyperoctahedrally symmetric follows from the corresponding symmetry of the interpolation functions. Finally, that they are polynomials of the appropriate degree follows either by induction using the branching rule as in [6], or by using the expansion formula of [23, Thm. 2.5]. The latter expresses the interpolation function as a finite sum in which the dependence of each term on $\vec x$ is as a product over the variables and their reciprocals. (Compare Definition 2 below.)

Remark. One should note that the expression from [23, Thm. 2.5] involves a significant amount of cancellation; each individual term has order $-|\lambda|\operatorname{ord}(b)$, but all terms of negative order cancel.
Plugging this into the definition of the formal kernel gives the following result.
Corollary 2.3. Each coefficient of the Puiseux expansion of (2.4) is a hyperoctahedrally symmetric Laurent polynomial in each of $\vec x$ and $\vec y$, the coefficient of $p^\alpha$ having degree at most $\alpha/\operatorname{ord}(c)$, with coefficients rational functions in the remaining parameters.
The significance of this result is that it allows us to extend identities of $K^{(n)}$ from the case in which $\vec x$ and $\vec y$ have valuation 0.
Theorem 2.4. The formal interpolation kernel is well-defined, symmetric between $\vec x$ and $\vec y$, and has the following specialization for any a of order 0:
Proof. By construction, the given specialization holds as long as we take a = u 0 . This, in particular, shows that the sum is independent of t 0 as long as y has order 0. (Indeed, apart from a simple prefactor, the coefficients of the Puiseux series are rational functions of the parameters, and for a Zariski dense set of possible y the coefficients are independent of t 0 .) We can then extend to more general valuations of y using the corollary.
Since the sum is unchanged if we swap t 0 and u 0 as well as x and y, we conclude that the sum is also independent of u 0 , so well-defined. The remaining claims are then immediate.
In particular, if we specialize one set of variables to a geometric progression, we have an explicit evaluation.
This only works directly in the case that the base of the progression has order 0, but easily extends to more general valuations. (2.8) In particular,

Remark. This gives an initial indication of why we call this a "kernel": K^{(1)}_c is essentially the kernel of a univariate integral operator considered in [25].

We also have some special cases with explicit formulas, corresponding to similar special cases of the interpolation functions.
Proposition 2.6. The kernel has the special cases

Proof. In each case, it suffices to verify that the identity holds when y_i = t^{n−i} q^{λ_i} a for any partition λ. These correspond to the Cauchy, Schur, and monomial cases of the interpolation functions; see [21].
If we rewrite the first special case as an identity for sums, we find that the result is a (nonterminating) version of the elliptic Cauchy identity [23, Thm. 3.6]. (The latter was expressed in terms of certain plethystic generalizations of the interpolation functions, which (in the formal case) turn out to be special cases of the symmetric function analogue; see below.) Naïvely, the usual Cauchy identity for Macdonald polynomials arises by a term-by-term limit, using the fact that (2.12) holds. If we apply this to K^{(n)}_c termwise, we obtain a formal limit as p → 0. The problem, of course, is that the left-hand side is not defined, and the right-hand side need not converge. Now, the right-hand side does converge as a formal series in x and/or y, which is the key to making the limit rigorous. Indeed, we find that the identity (2.14), whose terms involve P_µ(p x; q, t) P_µ(p y; q, t), holds as a limit of formal power series in p. In this way, any formula for K^{(n)}_c yields a corresponding Macdonald polynomial identity.

The name "kernel" comes from the fact that K^{(n)}_c forms the kernel of an integral operator having the interpolation functions as (generalized) eigenfunctions. Here we of course define the integral of a formal power series by integrating term-by-term. As long as the u_r parameters of valuation 0 are inside the unit circle, the contour can be taken to be a power of the unit circle; one can then extend to general parameters as in [22, §10].
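For comparison, the Cauchy identity for Macdonald polynomials recovered in this limit reads, in Macdonald's standard notation (with Q_µ the basis dual to P_µ):

```latex
\sum_{\mu} P_\mu(x;q,t)\,Q_\mu(y;q,t)
\;=\;\prod_{i,j}\frac{(t\,x_i y_j;q)_\infty}{(x_i y_j;q)_\infty}.
```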
In particular, the integration variables here will always have order 0.
Proposition 2.7. If u_0, u_1, u_2, u_3 are parameters of nonnegative valuation such that t^{n−1} u_0 u_1 u_2 u_3 = pq/c^2, then

Proof. We first note that the kernel and interpolation function are both formal power series with coefficients polynomial in x and y, so the integral is indeed well-defined, and has coefficients polynomial in y. Specializing to the Zariski dense set y_i = q^{λ_i} t^{n−i} u_0 reduces to Theorem 9.2 of [22].
One thing of particular note about this integral equation is that the right-hand side has roughly the same form as other known equations for the interpolation functions. For instance, the integral operators considered in [22] satisfy precisely such an equation, except that c = √t has valuation 0. Though this is of course outside the range of formal convergence, it still suggests an "identity" by comparing integrands:

(2.16)
Though this is nonsense for the formal kernel, we will see below that it indeed holds for the analytic kernel.
Perhaps even more surprising is the fact that essentially the same right-hand side appears in [22,Lem. 9.8], which describes a difference equation satisfied by the interpolation functions. Thus in general we expect that when c = q −m/2 , the kernel should act as a multivariate difference operator of order m (i.e., shifting each variable by at most m/2). Presumably this can again be made rigorous at the analytic level, but we will content ourselves with using it in the formal context. (See also Section 4 below.) It is of course straightforward to extend the above integral equation to an equation satisfied by the formal kernel, by the usual "compare when y is a partition" argument. This gives us the following identity, which we will refer to as the "braid relation".
Indeed, one can use the braid relation together with the integral equation to express the sum as an integral involving only the special cases in which one of the valuations is 0, for which the above arguments suffice. We omit the details.
There are, of course, corresponding identities for c = t^{1/2} or c = q^{−m/2}, since all we are using is that the corresponding operators preserve the space of formal series with polynomial coefficients, and act in the appropriate way on interpolation functions. Thus, for instance, the above identity continues to hold if we take c = t^{1/2} and expand K^{(n)}_{t^{1/2}} via the (nonsense) formula (2.16). Similarly, there is a difference equation, which we postpone until the analytic case.
The integral representation for interpolation functions also involves an integrand very similar to K^{(n)}_{t^{1/2}}, and in particular also extends to an identity for the formal kernel. To extend it, we need only establish that the corresponding integral operator preserves the space of formal Puiseux series with polynomial coefficients. This follows from the fact that the kernel of the operator can be factored as a formal Puiseux series with polynomial coefficients times its value for p = 0; since the limiting operator was shown to preserve polynomials in [20, Thm. 3.2], the claim follows.

Lemma 2.9. If 0 < ord(c) ≤ 1/2 and all other parameters have valuation 0 (with |q|, |t| < 1), then

This is a special case of a much more general integral formula. We omit the convergence conditions, as we will see shortly that the identity holds (as do those above) whenever both sides converge.
Theorem 2.10. For any integers 0 ≤ k ≤ n, we have the following identity.
Proof. Again, it suffices to verify this when x has been specialized to a partition. In that case, both resulting interpolation functions can be expanded via the generalized branching rule of [21,Thm. 4.16], and the claim follows upon applying Proposition 2.7 term-by-term.
Remark. Note that when k = 0, this is just the braid relation, while when k = n, it is just Corollary 2.5; it also agrees in the usual sense with the integral representation, by taking k = 1, c = t 1/2 and replacing that instance of the kernel as per usual.

The interpolation kernel
The key benefit of Lemma 2.9 is that it expresses the n-dimensional formal kernel as an integral of the (n − 1)-dimensional formal kernel, and thus we can iterate to obtain an expression as an n(n − 1)/2-dimensional integral, in which the integrand is a suitable product of elliptic Gamma functions alone. In particular, the resulting integrand is the formal power series expansion of an honest meromorphic function.
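As a sanity check on the conventions for the elliptic Gamma function, the following minimal numerical sketch verifies the standard shift and reflection identities Γ_{p,q}(qz) = θ_p(z) Γ_{p,q}(z) and Γ_{p,q}(z) Γ_{p,q}(pq/z) = 1 using truncated products (the truncation level N and the sample values of p, q, z below are arbitrary choices, not taken from the text):

```python
def theta(z, p, N=60):
    """Truncated theta function: theta_p(z) = prod_{k>=0} (1 - p^k z)(1 - p^{k+1}/z)."""
    val = 1.0
    for k in range(N):
        val *= (1 - p**k * z) * (1 - p**(k + 1) / z)
    return val

def gamma_pq(z, p, q, N=60):
    """Truncated elliptic Gamma: prod_{i,j>=0} (1 - p^{i+1} q^{j+1}/z) / (1 - p^i q^j z)."""
    val = 1.0
    for i in range(N):
        for j in range(N):
            val *= (1 - p**(i + 1) * q**(j + 1) / z) / (1 - p**i * q**j * z)
    return val

p, q, z = 0.15, 0.2, 0.6 + 0.3j
# shift identity: Gamma(qz) = theta_p(z) Gamma(z)
assert abs(gamma_pq(q * z, p, q) - theta(z, p) * gamma_pq(z, p, q)) < 1e-9
# reflection identity: Gamma(z) Gamma(pq/z) = 1
assert abs(gamma_pq(z, p, q) * gamma_pq(p * q / z, p, q) - 1) < 1e-9
```

The truncation error is of order p^N, so the assertions hold to well within the stated tolerance for these parameter values.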
Theorem 3.1. There exists a function K^{(n)}_c(x; y; t; p, q), meromorphic on the region such that any Puiseux expansion of this function with ord(q) = ord(t) = 0 and 0 < ord(c) ≤ 1/2 agrees with the formal kernel. This function satisfies the symmetries and on a suitable open subset can be defined inductively by

Proof. We first note that if |t|^{1/2} < |x_i| < |t|^{−1/2} and max_i |ord(y_i)| < ord(c) for each i, then the integrand has the following property: if every integration variable is on the unit circle, then every elliptic Gamma function in the integrand has argument of absolute value between |pq| and 1. Thus if we fix the other parameters, the integral is holomorphic (apart from an algebraic singularity) near p = 0, and the formal Puiseux series expansion of the integral is the same as the term-by-term integral of the formal Puiseux series expansion of the integrand.
In other words, in this range, the formal kernel is actually a convergent Puiseux series, and converges to the value of the integral. The existence of a meromorphic extension follows from [22,Thm. 10.2].
In particular, it follows that all of the above identities for the formal kernel continue to hold (with suitably deformed contours) for K^{(n)}, in particular the braid relation. Thus to extend to the full set of valuations, note that the integration variables in the braid relation have order 0.
The symmetry between p and q is by inspection of the integral representation, while the symmetry between x and y follows from the corresponding symmetry of the formal kernel.
Remark. Presumably if we multiply by ∏_{1≤i≤n} Γ_{p,q}(t^{1−i} c^2, t^i), then the analytic kernel has a formal expansion with rational function coefficients whenever (3.4) holds, and this expansion agrees with the sum defining the formal kernel. This certainly holds when ord(x) = 0, as the integral representation remains valid in that case. For more general valuations of x, some sort of symmetry-breaking limit will be required.
Of course, when we specialize x to a partition, we recover the integral representation of the corresponding interpolation function. In fact, we obtain even more: the analytic kernel is manifestly symmetric between p and q, and thus we can also obtain q-elliptic interpolation functions by a suitable specialization. In fact, we can obtain the full analytic interpolation functions of [22].
Proposition 3.2. Let λ, µ be a pair of partitions with at most n parts. Then we have the following identity of meromorphic functions.
Remark. This might appear at first glance to be incompatible with Proposition 2.6. For instance, for t = q, Proposition 2.6 says that the kernel can be expressed as a simple determinant; on the other hand, the general interpolation function for t = q can only be expressed as a sum of n! determinants in general. We can resolve this by noting that the interpolation function specialization only holds for generic values of the parameters; if we first specialize a, b, p, q, t before specializing the variables, we can obtain a different result. This can only occur at poles of the interpolation kernel, but it follows easily from the results below on such poles that when t = q, the point y_i = p^{λ_i} q^{µ_i} t^{n−i} a/√(t^{n−1} ab) is on such a polar divisor whenever λ ≠ 0.
The fact that the general interpolation function is a special case of the kernel is a quite powerful tool, as it allows us to extend integral formulas involving p-elliptic interpolation functions to integral formulas involving general interpolation functions. Indeed, any formula involving p-elliptic interpolation functions that satisfies suitable formal convergence properties implies a corresponding identity for the formal kernel, thus a corresponding identity for the analytic kernel, so by specializing gives the identity for general interpolation functions!
In addition to the symmetry between p and q, there is another symmetry of the interpolation kernel that does not make sense for the formal kernel; in fact, this additional symmetry also does not make sense for interpolation functions. The key point is that the analytic version of Theorem 2.10 also gives an explicit integral representation, by taking k = 1, c = pq/t. Comparing the two integral representations gives the following.
Proposition 3.3. The interpolation kernel satisfies the following identity.
In particular, .

Here, of course, the product formula for K^{(n)}_{√t} follows via the symmetry from the product formula for K^{(n)}_{pq/t}.

Corollary 3.4. The corresponding kernels are invariant under permutation and inversion of the 2n variables.
Proof. The first claim is a simple consequence of the braid relation for c = d = pq/t; the second follows by the t → pq/t symmetry.
Of course, simply knowing that the kernel is meromorphic is of only limited use without more specific information about the poles. It is difficult to control all of the poles, but we can at least control the poles depending on the x and y variables.
Proof. We proceed by induction on n, so that we need only analyze an (n − 1)-dimensional integral, rather than an n(n − 1)/2-dimensional integral. (Note that for n = 1, we can verify the claim by inspection, while for n = 2, the integral representation is just an order 1 elliptic beta integral, and the claim follows from known properties of such integrals.)

The construction of meromorphic extensions of integrals in [22] comes with a very crude bound on the set of possible poles. Indeed, if we multiply the integrand by the appropriate product of Gamma factors, the result is a holomorphic function of the integration variables. It follows that the integral (ignoring prefactors) is holomorphic whenever there exists a contour C invariant under z → 1/z such that C contains (pq/t)C as well as every point of the relevant form. We thus find that for |pq/t| < 1, the integral can only introduce poles where two numbers (duplication allowed) from these lists multiply to a nonnegative power of t/pq. Of course, this allows plenty of poles that we claim do not occur, and does not control the multiplicities of those poles that should occur. (There are results from [22] that could be used to control the multiplicities of these poles, and rule some of them out entirely, but this would still give a wild overestimate of the polar divisor!)

The key fact that allows us to control the poles is that the integral representation, and thus the corresponding upper bound on the set of poles, has less symmetry than the actual kernel. In particular, the poles involving y_n are quite different from those involving y_1 through y_{n−1}, but the result should be invariant under permuting all of the y variables. In addition, we know from the formal kernel that the analytic kernel is invariant under swapping the x and y variables. We find (for generic c) that the only poles consistent with these symmetries are those of the form given.
Applying the t → pq/t symmetry shows that the same bound on poles applies for |t| < 1, and since |p|, |q| < 1, we conclude that the bound on poles holds in general.
Remark. We can also gain some control over the poles that depend on c but not the x and y variables, using the braid relation. The point there is that most of the poles coming from the integral in the braid relation depend on the auxiliary parameters, so cannot actually be present. We find that the only possible such poles arise on divisors of the form with max(0, l) < min(j, k). (Strictly speaking, this only applies for |pq| < |t| < 1, but should hold in general; note also that in that region, there are no poles depending only on t, p, and q.)

One thing control over the poles allows us to do is take certain limits involving pinched contours. For instance, the case d = √t of the braid relation becomes singular whenever a subsequence of x is a geometric progression of step t. Since such limits occur below, we give the corresponding limit in significant generality.
Proposition 3.6. Let k_1, . . . , k_m be a sequence of positive integers with k_1 + · · · + k_m = n. Then for otherwise generic parameters,

Proof. If m = n, this is just the usual integral representation; in general, one can proceed by induction on n − m. Indeed, the limit x_m → t^{−(k_m+k_{m−1})/2} x_{m−1} of the left-hand side is the general case with m − 1 geometric sequences, so it suffices to verify that the above formula is consistent with this limit. Before taking the limit, the constraint on the contour C for the integrals is that (a) C = C^{−1} (corresponding to the symmetry of the integral), (b) C contains (pq/t)C (corresponding to the factors ((pq/t) z_i^{±1} z_j^{±1}; p, q) of the poles), and (c) C contains every doubly-geometric sequence of poles converging to 0. If p is sufficiently small and the parameters are otherwise generic, the only obstruction to these conditions is the requirement that C contain t^{k_m/2} x_m and exclude t^{−k_{m−1}/2} x_{m−1}. Thus we can compute the limit by moving the contour through t^{k_m/2} x_m before taking the limit; the prefactor Γ_{p,q}(t^{(k_{m−1}+k_m)/2} x_m / x_{m−1}) ensures that only the residues contribute to the limit, which is then straightforward to compute.
Similarly, the case d = √ t of the braid relation has the following geometric progression limit.
Proposition 3.7. If k_1, . . . , k_m are positive integers summing to n, and u_0 u_1 = pq/(t c^2), then

A natural question, given that the kernel has the above simple poles, is whether we can characterize the residues along those poles. The poles involving two x or two y variables can be resolved using the t → pq/t symmetry; for the simplest instance of the remaining poles, we have the following. Note that when y is specialized to a partition, this is simply the case k = 1 of equation (3.43) of [21].
Lemma 3.8. The interpolation kernel has the limiting case

Proof. If we represent the left-hand side via the integral representation, branching on y_n, the limit becomes a simple substitution, and the resulting integral is just the case d = t^{1/2} of the braid relation.
One advantage of the kernel over interpolation functions is the fact that the braid relation acts as a sort of Bailey Lemma. In particular, this allows us to greatly simplify (and generalize to the kernel) the arguments of [22, §9]. The main identity there is [22,Thm. 9.7] (the W (E 7 ) symmetry of the elliptic Selberg integral), which becomes the following identity in terms of the interpolation kernel.
Proof. Use the braid relation to expand K (n) c on the left as an integral involving K (n) c/u and K (n) u , then change the order of integration and apply the braid relation again. Note that there is a range of parameters where the contours can all be taken to be the unit circle, so the change in order of integration is legal, and extends to an identity of meromorphic functions.
Remark. This argument is essentially the same as the proof of the multivariate elliptic Bailey transformation, [21,Thm. 4.9], except that we have replaced the elliptic binomial coefficients by the interpolation kernel.
The left-hand side is invariant under the natural action of S 4 on v 0 , v 1 , w 0 , w 1 , and together with the above transformation gives an action of D 4 . As in [22], this gives another identity, corresponding to the third nontrivial double coset of S 4 in D 4 .
This can be viewed as a sort of commutation relation; indeed, it corresponds directly to a commutation relation for the corresponding integral operators acting on interpolation functions.
We note some special cases of interest. If c and d are both √t (or, by the t → pq/t symmetry, if both are equal to pq/t), the commutation relation becomes an explicit integral transformation originally proved by van de Bult [3]. If one is √t and the other is pq/t, the result is a special case of the elliptic Dixon integral.

We record the following degeneration (à la Propositions 3.6 and 3.7 above) for use in Section 6 below.
Proposition 3.11. If k_1, . . . , k_m are positive integers summing to n, and u_0 u_1 u_2 u_3 = p^2 q^2/(t c^2), then

As noted above, if c = q^{−1/2}, the integral equation for interpolation functions has the same right-hand side as a known difference equation. This extends to the following difference analogue of the braid relation.
Proposition 3.12. The interpolation function satisfies the generalized eigenvalue equation

Proof. It suffices to prove this for the formal kernel, and thus when y is specialized to a partition; this is simply the known difference equation.

Remark. This can also be proved by induction using the integral representation, together with the special case c = √t of the commutation relation below (see [22, Thm. 7.9] for a direct proof). One can also show that for 0 < ord(c) < 1/2 or generic c of order 1/2, K^{(n)}_c(x; y; t; p, q) is determined up to a factor independent of x by the fact that the product over 1 ≤ i ≤ n is independent of v, together with the existence of a formal expansion. Indeed, by Lemma 9.1 below the limit of the equation as p → 0 has no nonconstant solutions, and thus any solution of this system of equations becomes constant in that limit; it follows (essentially by Nakayama's Lemma) that any two nonzero solutions are proportional. This would allow one to develop most of the theory of the interpolation kernel without using interpolation functions, though of course not the Cauchy-type series expression itself (which plays a crucial role in constructing the symmetric function variant of the formal kernel).
If we view this formally as the special case (c, d) → (q −1/2 , q 1/2 c) of the braid relation, then we immediately find that we obtain corresponding special cases of the Bailey transformation and the commutation relation. (We can also obtain identities involving difference operators alone, but postpone consideration of those to Section 4.) For the Bailey transformation, we have the following.
The commutation relation becomes the following identity.
Proposition 3.14. Let u_0, u_1, u_2, u_3, c be parameters such that u_0 u_1 u_2 u_3 c^2 = p^2 q. Then

We can also obtain identities by specializing one of the sets of variables to a geometric progression. This has the effect of replacing one of the interpolation kernels by a product of elliptic Gamma functions. (Of course, we could replace both sets of variables by geometric progressions, but this would simply recover results of [22], albeit with new proofs.) Specializing the braid relation in this way gives the following generalization of the Kadell-type integral of [22, Cor. 9.3].
Remark. We can also obtain transformations in this way, but omit the (straightforward) details. The one thing one should note is that (in direct analogy to [22,Cor. 9.13]), the symmetry group is extended from D 4 to D 6 , and we acquire an additional double coset.
We also note the following curious identity, a multivariate analogue of the main result of [5]; the proof below is a direct adaptation of van de Bult's argument for the univariate case. Note that since both x and y are specialized to z, there is no way to specialize this to a statement about interpolation functions.
Theorem 3.16. The integral

Proof. Take the identity of Theorem 3.9, specialized so that v_0 w_0 = pq/cd and y = x. If we multiply both sides by a suitable factor with t_0 t_1 = pq/(c^2 d^2), the integrals over x on both sides are special cases of the braid relation. The result is the general case of the claimed identity.
Remark. As in [5], this symmetry, together with the visible symmetries, generates the Weyl group W (F 4 ).
The action of the kernel on interpolation functions extends in a natural way to an action on biorthogonal functions, generalizing the difference and integral equations of [22, §8].
Proposition 3.17. The multivariate elliptic biorthogonal functions satisfy the following integral equation, for

Proof. Simply apply the usual integral equation to the binomial formula [21, Defn. 12] term-by-term.
If we set u_1 = 1/(t^{n−1} c^2 t_2), then the biorthogonal function in the integrand becomes an interpolation function, giving a variant of [22, Thm. 9.4], and a representation of the general biorthogonal function as an integral involving the kernel and an interpolation function. Analytically continuing the interpolation function to another instance of the kernel gives an analytic continuation of the biorthogonal function as a function of the indexing partition. In the absence of a particular application for this analytic continuation, we omit the details.
If we instead expand the biorthogonal function via the binomial formula, we obtain the following integral equation.
Corollary 3.18. For otherwise generic parameters satisfying t^{n−1} u_0 u_1 u_2 u_3 = pq/c^2, one has

We close by mentioning a special case with an unexpected determinantal representation. If we combine the difference equation with the explicit formula for the case c = pq/t, we obtain the expression

The right-hand side is a sum of 2^n terms, each of which can be expressed as an explicit product of Gamma and theta functions. We find that although the individual terms depend on q, the ratios of the terms do not. As a result, we conclude that K is independent of q. Setting q = t gives the determinantal expression.
This determinant has appeared in work of Filali [10] on a certain variant of the "8VSOS" model, a generalization of the usual 6-vertex model. In the Macdonald limit, this becomes a known expression for the Izergin–Korepin determinant (essentially the partition function of the 6-vertex model [12, 14]) as a sum of Macdonald polynomials [28].
Applying the t → pq/t symmetry gives a similar expression for c = t/q. The case t = p^{1/3} is of particular interest, since this is in the intersection of the c = (p/t)^{1/2} and c = t cases; this implies (using Corollary 3.4 above) that the corresponding determinant over 1 ≤ i, j ≤ n is invariant under arbitrary permutations of the 2n variables, recovering a result of [29, App. C]. This corresponds to the well-known fact that the partition function for the 6-vertex model acquires additional symmetries when the parameter is a cube root of unity.

Formal difference operators
Although the analytic kernel most naturally corresponds to a family of integral operators, it is difficult to make this precise, given issues with contours; even basic questions concerning the domain of the operators are difficult to approach. Now, we recall that when c = q^{−n/2}, the integral operator at least formally becomes a difference operator. Although this is only a sparse set of specializations, it turns out that there is a natural analytic continuation in c. At first glance, this seems impossible, since the number of different shifts appearing in the operator for c = q^{−n/2} depends on n; however, we can avoid this issue by working with formal difference operators.
For c ∈ C^*, let D_c be the vector space of formal sums of the form

where each coefficient F_k is a meromorphic function. We can multiply two such formal sums using the following rules:

and, for any meromorphic function F,

(And, of course, we multiply meromorphic functions in the usual way.) In this way, we obtain a product D_c × D_d → D_{cd}. Since T(1) is the identity for this product, we will omit it from the notation for D_1.
We call the resulting C * -graded algebra the algebra of formal difference operators. Note that any formal difference operator with only finitely many nonzero coefficients acts in a natural way on the space of meromorphic functions (i.e., right-multiply by the function, then take the sum of the coefficients), thus justifying the name.
One important observation about formal difference operators is that a formal difference operator is invertible whenever F_0 ≠ 0. (This is by the usual argument for formal power series: if c = 1 and F_0 = 1, we invert using the power series for 1/(1 + z); in general, we can always extract the invertible (right) factor F_0(x) T(c) to reduce to that case.) Similarly, the algebra of formal difference operators has no zero-divisors.
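The inversion argument can be illustrated concretely. The following is a minimal univariate toy model (an assumption of this sketch: the shift acts multiplicatively, (T f)(x) = f(cx), so that the commutation rule is T F(x) = F(cx) T; the coefficient x/(1+x) is an arbitrary example), computing a truncated Neumann-series inverse of an operator with F_0 ≠ 0:

```python
# Toy model of formal difference operators: an operator is a dict {k: F_k}
# representing sum_k F_k(x) T^k, with commutation rule T^k G(x) = G(c^k x) T^k.
import sympy

x, c = sympy.symbols('x c')

def op_mul(A, B):
    """Product of two formal difference operators (shift exponent -> coefficient)."""
    out = {}
    for k, F in A.items():
        for l, G in B.items():
            # move T^k past G: T^k G(x) = G(c^k x) T^k
            out[k + l] = out.get(k + l, 0) + F * G.subs(x, c**k * x)
    return out

def op_inv(D, order):
    """Invert D = F_0 (1 - N) with F_0 != 0 by a Neumann series, truncated at `order`."""
    F0 = D[0]
    N = {k: -F / F0 for k, F in D.items() if k != 0}  # D = F_0 (1 - N)
    inv = {0: sympy.Integer(1)}
    power = {0: sympy.Integer(1)}
    for _ in range(order):
        power = {k: F for k, F in op_mul(power, N).items() if k <= order}
        for k, F in power.items():
            inv[k] = inv.get(k, 0) + F
    # right-multiply by F_0^{-1}, shifting it past T^k via the commutation rule
    return {k: sympy.simplify(F / F0.subs(x, c**k * x)) for k, F in inv.items()}

D = {0: sympy.Integer(1), 1: -x / (1 + x)}   # arbitrary example with F_0 = 1
E = op_inv(D, 3)
res = op_mul(D, E)
# D * E agrees with the identity up to shifts beyond the truncation order
assert all(sympy.simplify(res.get(k, 0) - (1 if k == 0 else 0)) == 0
           for k in range(4))
```

The same truncated series shows why F_0 ≠ 0 is essential: the construction divides by F_0 (and its shifts) at every order.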
We associate a formal difference operator to the interpolation kernel in the following way:

Roughly speaking, this arises by considering an integral and attempting to compute it as an infinite sum of residues (taking into account only the simplest possible residues). Note that by applying Lemma 3.8 repeatedly, we may compute the leading coefficient of D^{(n)}_c(q, t; p).
For the next few lemmas, we will view the operators D^{(n)}_q(u_0, . . . , u_{2m−1}; t; p) as elements of D_{q^{−1/2}}. The first two lemmas are direct translations of Propositions 3.12 and 3.14, respectively.
The first lemma is particularly useful, for the following reason.
Lemma 4.4. Suppose q is non-torsion in C^*/⟨p⟩, and let D ∈ D_c be an operator with leading coefficient 0 such that the given product of operators is independent of u. Then D = 0.
Proof. Write the operator as a sum with F_0(x) = 0. The fact that the given product of operators is independent of u implies that a certain expression vanishes if we set u = x_j for any 1 ≤ j ≤ n. We take j = n for notational simplicity; the other cases are analogous. If we take the coefficient of ∏_i T_i^{k_i} T(q^{−1/2} c) in this product, we obtain a linear relation between the coefficients F_l for 0 ≤ l ≤ k (in the product partial order). The coefficient of F_k in this relation is (4.14), and thus as long as θ_p(q^{−k_n}) ≠ 0, we obtain an expression for F_k in terms of coefficients F_l with ∑_i l_i < ∑_i k_i. Since F_0 = 0, this implies by induction that F_k = 0.
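The role of the non-torsion hypothesis is the standard fact that the theta function vanishes exactly on a single coset:

```latex
\theta_p(z)=\prod_{k\ge 0}(1-p^k z)(1-p^{k+1}/z)=0
\quad\Longleftrightarrow\quad z\in p^{\mathbb Z},
```

so the theta factors arising in the induction are nonzero for all relevant powers of q precisely when no nontrivial power of q lies in p^Z, i.e., when q is non-torsion in C^*/⟨p⟩.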
In particular, the operator D^{(n)}_c(q, t; p) is well-defined whenever q is non-torsion. (This is in contrast to K^{(n)}_c, which certainly does have poles depending on c but not on the variables!)

Proposition 4.6. If c^2 ∈ p^Z, then

is independent of u.

Proof. This reduces to checking that

is independent of u, where c = p^l, which in turn reduces easily to the case l = 1.
In particular, D 1 (q, t; p) = 1. Plugging this into Lemma 4.2 gives the following identification.
Proposition 4.7. We have

More generally, for any nonnegative integer m,

has finite support, with theta function coefficients, and the corresponding true difference operator commutes with the natural action of the hyperoctahedral group (by permuting and inverting the variables).
Proof. If we write

we see that the case m = 1 immediately gives the first claim, while the second claim follows by induction from the corresponding fact for m = 1.

Remark. Indeed, we see that D^{(n)}_{q^{−m/2}}(q, t; p) is an operator of the form considered in [22] (introduced in the proof of Theorem 9.7 op. cit.); in that notation, we have (4.21). We also note that a straightforward induction shows that we may replace the integral operator corresponding to K_{q^{−m/2}} in the same way as for m = 1.

The key identity satisfied by our formal difference operators is the following analogue of the braid relation, Proposition 2.8.
Proof. We give two arguments. The first is to use the residue definition of the coefficients of D (n) cd (q, t; p), and expand using the braid relation to obtain a limit of integrals. The natural contour conditions on the integral cannot be satisfied, so we must first move the contour before taking the limit; the result is a sum of residues, and gives the desired result.
The second, more algebraic argument is to note that it suffices to show that the operators D^{(n)}_c(q, t; p) and D^{(n)}_{1/c}(q, t; p) are inverses.
The name "braid relation" for this identity (and thus for Proposition 2.8) comes from the following observation. For an integer m ≥ 2, consider the following involutions acting on (C^*)^{m+1}:

and for 3 ≤ k ≤ m − 1, we have

Under the identification of Aut((C^*)^{m+1}) with GL_{m+1}(Z), we find that these are precisely the simple reflections in the standard reflection representation of a Coxeter group of type "E_{m+1}", i.e., the sequence

to any element w ∈ W(E_{m+1}) and any element g ∈ (C^*)^{m+1} such that

and satisfying the cocycle conditions

One application of this construction is that it associates an identity of difference operators to any pair of words for the same element of W(E_{m+1}). For instance, take m = 4, and consider the element (4.36). It is straightforward to verify that this normalizes the subgroup W(D_4) generated by s_Γ and s_i, 1 ≤ i ≤ 3, and thus any element of that subgroup gives rise to a different representation of the corresponding difference operator by left- and right-multiplying by elements of W(D_4). Since the s_i act trivially, there are a total of 8 resulting representations, giving two transformations. After reparametrizing so that the original operator is

where e = c^2 t_0 t_1/pq = pq/(d^2 t_2 t_3), and

∏_{1≤i≤n} ∏_{0≤r<4}

These, of course, are just the analogues of Theorem 3.9 and Corollary 3.10 respectively.
In future work, we will use this cocycle of formal difference operators over W (E m+1 ) to construct isomorphisms between certain noncommutative rational varieties. (In particular, we will see that the above appearance of W (E m+1 ) is related to the appearance of that Coxeter group in the theory of rational surfaces.) One particularly nice consequence is related to the following fact.
Theorem 4.11. Suppose q is non-torsion in C * / p . Let A c denote the algebra generated by the difference operators Moreover, for any D ∈ A c , we have the identity

(4.42)
where ad denotes the formal adjoint with respect to ∆.

Proof. The isomorphism F c is simply conjugation by the (invertible) formal operator D (n) c (q, t; p), so is an isomorphism, and acts in the correct way on the generators by Lemma 4.3.
For the claim about K (n) c , we need merely note that it holds for the generators, and is preserved under multiplication of operators. For the generators, we need merely note that D (n) q (cu 0 , cu 1 , cu 2 , cu 3 ; t; p) ad = D (n) q (pq 1/2 /cu 0 , . . . , pq 1/2 /cu 3 ; t; p), (4.43) so that the claim is simply Proposition 3.14.
Remark. Modulo issues with contours, the second claim should be viewed as saying that F c agrees with conjugation by the integral operator associated to K (n) c . Since we see that, roughly speaking, F c interchanges multiplication and difference operators (see also Lemma 4. c , there is a significant difficulty in that we would need to show that the relevant algebra of difference operators acts faithfully on K (n) c . Although it follows from the formal difference operator approach that the operators generically act faithfully, it is difficult to determine the precise hypersurfaces on which faithfulness fails; in contrast, since formal difference operators form a domain, that definition only fails when q is torsion. In addition, there are a number of natural conditions on difference operators (e.g., support, vanishing of leading coefficients along suitable divisors) that can be defined in terms of modules over the ring of formal difference operators, and are thus preserved by F c . In future work, we will characterize the algebra A (n) c (in fact, a somewhat larger algebra to which the claim still applies), which will enable us to show, for instance, that there is an identity of the above form in which both D and F c (D) are (general) instances of the van Diejen Hamiltonian [27], up to an additive scalar. (This is a multivariate analogue of the results of [18].) There is, in fact, a region in which we can control faithfulness of difference operators; we record the following result for use in future work.

Lemma 4.12. Suppose |pq| < |t|, |c| 2 < 1, and let m be a nonnegative integer such that q k c 2 ∉ p Z for 1 ≤ k ≤ m.
Then the (m + 1) n functions K (n) c (q k x; y; t; p, q), k ∈ {0, . . . , m} n (4.44) are linearly independent over the field of meromorphic functions independent of y.
Proof. When |t|, |pq/t| < 1, the integral representation gives us an easy inductive proof that K (n) c (; ; t; p, q) has no poles depending only on p, q, t. As we remarked following Theorem 3.5, we can then use the braid relation to understand those poles depending on c, p, q, t. It turns out that any such pole has |c| 2 ≤ |pq|, and thus cannot occur in the given region of parameter space. We thus conclude that is a holomorphic function of the parameters as well as the variables. Moreover, since |c| < 1, we find that (for generic y) the y-dependent poles in x are at most order 1. As a result, the residue of K (n) c ( x; y; t; p, q) along any such pole can be computed via the limit from generic c. Now, for l ∈ {0, . . . , m} n , consider the matrix of residues Res x=q − l y K (n) c (q k x; y; t; p, q) = Res x=q k− l y K (n) c ( x; y; t; p, q) (4.46) For generic y, this vanishes unless k i − l i > 0 or k i − l i < −m (the latter coming from the possibility that q −j c 2 ∈ p Z for some j > m). Thus this matrix of residues is triangular; since the diagonal residues are nonzero meromorphic functions of y, the matrix of residues is nonsingular; the claim follows immediately.
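The final step of this proof is a general linear-algebra fact; the following sketch, in notation matching the proof, records it under the stated vanishing pattern for the residues:

```latex
% Claim: a matrix $M = (M_{\vec k,\vec l})$ over the field of meromorphic
% functions of $y$, indexed by $\vec k,\vec l \in \{0,\dots,m\}^n$, which
% vanishes off one side of a partial order and has nonzero diagonal
% entries, is nonsingular.  Choose any total order refining the partial
% order; then $M$ is triangular, so
\[
  \det M \;=\; \prod_{\vec l \in \{0,\dots,m\}^n} M_{\vec l,\vec l} \;\neq\; 0 .
\]
% Now suppose $\sum_{\vec k} c_{\vec k}\, K^{(n)}_c(q^{\vec k} x; y; t; p, q) = 0$
% with coefficients $c_{\vec k}$ independent of $y$.  Taking residues at
% $x = q^{-\vec l} y$ for each $\vec l$ gives $M (c_{\vec k})_{\vec k} = 0$,
% whence every $c_{\vec k} = 0$, which is the asserted linear independence.
```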
Remark. Since the application involves an algebraic statement, the constraints on t and c 2 are not particularly stringent, as the allowed region contains a fundamental domain for the p-periodicities. One consequence is that F c is a well-defined homomorphism in the above region, and thus (combining with the Theorem) on the complement of a codimension 2 subvariety of the region |pq| < |t|, |c| 2 < 1; further results will then permit the application of Hartogs' Lemma, so that F c actually extends to the entirety of parameter space.
We close by noting some simple consequences for these formal operators arising from properties of the interpolation kernel. The simplest is the t → pq/t symmetry.
Proposition 4.13. We have Proof. Using Lemma 4.4, we may immediately reduce to the case c = q −1/2 , where this is straightforward.
Remark. In fact, this symmetry came first (via a somewhat different approach to these operators); it was only later that it became apparent that the symmetry extended to the kernel itself.
The explicit formula for the univariate interpolation kernel gives the following expression (which can also be obtained by using the proof of Lemma 4.4 to obtain a first-order recurrence satisfied by the coefficients).
Proposition 4.14. We have This should be compared with the formula of [7] for the powers of the Askey-Wilson operator, as well as the elliptic analogue [11]. There is a similar, but more complicated, expression for D (n) t 1/2 (q, t; p), which we omit, as well as the corresponding expression for D (n) (pq/t) 1/2 (q, t; p). Similar reductions to c = q −1/2 give the following.
Of course, we also obtain a formula for D (n) c (q, pq; p) x coming from the t → pq/t symmetry. This arises from a quasiperiodicity of the coefficients under t → pt, which we omit. (We will show something far stronger in future work: if we divide by the leading coefficient, and introduce suitable additional factors, the resulting formal difference operator will not only be elliptic in all parameters and variables, but will extend to algebraic elliptic curves in a canonical (thus modular) way.)

The kernel as symmetric function
As we mentioned in the introduction, another important extension of the formal kernel involves analytically continuing in the dimension, along the same lines as the lifting of Koornwinder polynomials to symmetric functions in [19]. Clearly the analytic definition by induction in the dimension will not be of use in this regard, so we must return to the deformed Cauchy identity definition. We thus see that our first order of business must be to extend the interpolation functions themselves to symmetric functions. Such an extension will be a symmetric function (more properly, a formal series in p with symmetric function coefficients) depending on an auxiliary parameter T such that when T = t n and we specialize the variables to x 1 , 1/x 1 , . . . , x n , 1/x n , we recover the formal series expansion of the relevant n-variable interpolation function.
For any partition µ, we recall from [19] the specialization µ q,t,T ;a of the ring Λ of symmetric functions defined by where p k denotes the power sum symmetric function. Note that the summands vanish once µ i = 0, so this is a well-defined homomorphism Λ → Q(q, t)[a, T, 1/a, 1/T ]. One significance of this specialization is that for any symmetric function f , There is one special case of the interpolation functions which is quite straightforward to lift. When t n ab = pq, the interpolation function has an explicit expression as a product over the variables. Thus the only potential obstacle to lifting is formal convergence, and this turns out not to be an issue. With this in mind, we define, for b a formal parameter with 0 < ord(b) < 1, a family of symmetric functions The constraint on ord(b) ensures that ord(p k/2 p k ( λ ′ t,q,1;p −1/2 b )) > 0, (5.4) and thus the sum converges formally to a series of positive order, making the exponential well-defined as well.
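The assertion that the exponential is well-defined is an instance of a standard formal-convergence principle; the following sketch assumes only that the k-th summand has order bounded below by a positive multiple of k, as (5.4) provides:

```latex
% Suppose $F = \sum_{k \ge 1} f_k$ with $\operatorname{ord}(f_k) \ge k\,\varepsilon$
% for some fixed $\varepsilon > 0$ (here the factor $p^{k/2}$ in (5.4)
% contributes order $k/2$).  Then
\[
  \operatorname{ord}\!\bigl(F^m/m!\bigr) \;\ge\; m\,\varepsilon ,
\]
% so for any fixed order $r$ only finitely many terms of
% $\exp(F) = \sum_{m \ge 0} F^m/m!$ contribute coefficients of order $< r$,
% and the exponential is a well-defined formal series of positive order.
```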
This is indeed a lift of the "Cauchy" special case of the interpolation function: for any n ≥ ℓ(λ), we have (z 1 , . . . , z n ; pq/t n b, b; q, t; p).

(5.5)
With this in mind, we can define more general lifted interpolation functions using connection coefficients.

(5.7)
Moreover, for any partition µ, R̃ * λ ( µ q,t,T ;a ; a, b; q, t, T ; p) = 0 (5.8) unless µ ⊃ λ. More generally, R̃ * λ ( µ q,t,T ;a ; a, b; q, t, T ; p) = ∆ λ (T a/tb|t/T ab; q, t; p) −1 µ λ [T 2 a 2 /t 2 ,T ab/t];q,t;p . (5.9)

Proof. The specialization for n ≥ ℓ(λ) follows from the connection coefficient identity for interpolation functions, and the claim about the coefficients is manifest from the definition. (The only nontrivial contribution comes from the elliptic binomial coefficient, but this is the restriction of an algebraic function to a Tate curve, so has rational function coefficients.) Since the interpolation functions that R̃ specializes to have valuation 1 ([6]), the same is true for R̃.
The vanishing condition and relation to binomial coefficients hold since they hold for all sufficiently large n; it then follows that the specialization vanishes for n ≥ ℓ(λ), since it is a polynomial Puiseux series such that every coefficient vanishes at all partitions with at most n parts. This is a formal symmetric function analogue of the skew interpolation functions of [23], as can be seen by applying the specialization

Remark 3. We also note that the constant (p 0 ) term of the lifted interpolation function is essentially just the lifted interpolation polynomial of [19]: has no z-independent poles for generic q, t. Indeed, one has ∆ 0 λ (t n a/b|t n+1 ; q, t; p) R * (n+1) λ (z 1 , . . . , z n+1 ; a, b; q, t; p) = κ λ κ [t n a/b,t](t n azn+1,t n a/zn+1);q,t;p (5.13) ∆ 0 κ (t n−1 a/b|t n ; q, t; p) R * (n) κ (z 1 , . . . , z n ; a, b; q, t; p), so the only relevant factor is λ κ [t n a/b,t];q,t;p , (5.14) which can be controlled using the explicit product formula [21, Cor. 4.5].
This immediately implies that for generic q, t, the only poles of the coefficients of R̃ * λ (x; a, b; q, t, T ; p) are functions of T alone. (Any other pole would be visible in the reduction to interpolation functions for all sufficiently large n.) However, a careful look at the definition shows that there can be no such poles. Indeed, for each factor of the summand, the parameters that appear generate a field over which T is transcendental! We thus find that the coefficients of R̃ * λ (x; a, b; q, t, T ; p) lie in Λ ⊗ Q(q, t)[lc(a) ±1 , lc(b) ±1 , T ±1 ]. The specialization to ordinary interpolation functions immediately gives us some symmetries of the lifted interpolation function, by checking that the identity holds for T = t n for all sufficiently large n. There is also a plethystic symmetry of the following form. Let τ a;t denote the endomorphism of Λ given by noting that τ −1 a;t = τ t/a;t , and that any two such endomorphisms commute. In that light, we adopt the shorthand τ a1,...,am;t = ∏ 1≤r≤m τ ar;t . (5.18) We note the particularly nice special cases

Proposition 5.3. The function τ a;t R̃ * µ (; a, b; q, t, T /a; p) is independent of a.
Proof. We equivalently need to show τ t/a ′ ,a;t R̃ * µ (x; a, b; q, t, T /a; p) = R̃ * µ (x; a ′ , b; q, t, T /a ′ ; p) (5.22) for all a, a ′ . If we specialize to T = t n+m a, a ′ = t m a for m ≥ 0, n sufficiently large, and specialize x̂ to z ±1 1 , . . . , z ±1 n , this becomes an identity of ordinary interpolation functions, [21, (3.43)]. The claim then follows in the usual way.
Remark. Note that this symmetry gives rise to the expression R̃ * µ (; a, b; q, t, T ; p) = τ T a,t/a;t R̃ * µ (; T a, b; q, t, 1; p), (5.23) giving an alternate argument for the lack of poles depending only on T .
As in the Koornwinder case, a major benefit of lifting to symmetric functions is the action of a slightly modified Macdonald involution. Recall from [19] that ω̃ q,t is the involution acting on symmetric functions by and satisfies (ω̃ q,t f )( µ t,q,1/T ;− √ qt/a ) = f ( µ ′ q,t,T ;a ).

Proof. Indeed, we can verify this by direct calculation in the Cauchy case a = pq/T b, and the connection coefficients transform correctly.
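For orientation, it may help to recall the classical Macdonald involution which ω̃ q,t modifies; the precise modification in [19] involves additional signs and parameter shifts, but the unmodified version is standard:

```latex
% The unmodified Macdonald involution acts on power sums by
\[
  \omega_{q,t}\, p_r \;=\; (-1)^{r-1}\,\frac{1 - q^r}{1 - t^r}\, p_r ,
\]
% and satisfies $\omega_{q,t} P_\lambda(x; q, t) = Q_{\lambda'}(x; t, q)$,
% exchanging $q \leftrightarrow t$ and conjugating the indexing partition;
% the specialization identity above is the lifted analogue of this
% conjugation symmetry.
```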
The Cauchy and Littlewood identities of [23] directly translate to the lifted interpolation functions; again, we need simply observe that the claim holds for a sufficiently general class of specializations to ordinary interpolation functions.
Proposition 5.5. If ord(a), ord(b) = 0, then Applying the modified Macdonald involution to the latter sum immediately gives a dual Littlewood identity.
At this point, it is relatively straightforward to come up with a candidate for the lifted kernel.
It is fairly straightforward to relate this to the formal kernel. The only nontrivial issue is that the ∆ µ symbol has a pole at T = t n , so unlike for the lifted interpolation function, we must be careful about order of specialization. In particular, we must specialize one or both of the sets of variables before setting T → t n ; specializing ŷ → y 1 , 1/y 1 , . . . , y n , 1/y n has the effect of cancelling the pole at T = t n , making the remaining limits commute. We thus find that for all n ≥ 0, . . . , z n ; w 1 , . . . , w n ; q, t; p).

(5.35)
Since this is independent of t 0 , u 0 for all n, the same is true for K̃ c , making the latter well-defined.

(5.36)
For the lifted kernel and interpolation function to be useful, we need to be able to substitute them into integral identities, and thus need to have similar symmetric function analogues of the elliptic Selberg integral.
This is mostly straightforward, since in any case in which the elliptic Selberg integral reduces as p → 0 to a Koornwinder integral, the ratio between the two integrands is essentially a symmetric function. For instance, if ord(a) > 0, we find as a specialization x̂ → z 1 , 1/z 1 , . . . , z n , 1/z n of a symmetric function, and similarly for the univariate factors.
As a result, to extend an identity involving integrals of formal kernels to an identity for the lifted kernel, it suffices to understand integrals of symmetric functions against the Koornwinder density. That is, we want a linear functional I K (; q, t, T ; t 0 , t 1 , t 2 , t 3 ) such that for otherwise generic parameters and any symmetric function where I (n) K denotes the normalized n-dimensional Koornwinder integral. That is, for any symmetric Laurent polynomial g, . (5.40) Such integrals were already considered in [19]; we will, however, need some slightly better control over the poles. The key idea of the construction in [19] is that the normalized integral of a symmetric Laurent polynomial against the Koornwinder density can be computed by expanding the polynomial in the corresponding orthogonal polynomials and taking the constant term. This extends immediately to symmetric functions using the symmetric function analogues of the Koornwinder polynomials.
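The expand-and-take-the-constant-term mechanism is easiest to see in a toy one-variable analogue: with the uniform density on the unit circle in place of the Koornwinder density, the orthogonal polynomials are the Chebyshev polynomials T k (cos θ) = cos kθ, and the constant term of the expansion is the normalized integral. A minimal sketch (all names here are illustrative, not from [19]):

```python
import numpy as np
from numpy.polynomial import chebyshev

# Toy analogue of the virtual-integral construction: for a Laurent
# polynomial symmetric under z -> 1/z on |z| = 1, i.e. a polynomial
# f(x) in x = cos(theta), the normalized integral against the uniform
# density equals the constant term of the Chebyshev expansion of f,
# since T_k(cos(theta)) = cos(k*theta) integrates to 0 for k > 0.

# f(x) = 3*T_0(x) + 2*T_1(x) + 5*T_3(x), written in the monomial basis
# (coefficients from low degree to high): 3 - 13 x + 20 x^3.
poly_coeffs = [3.0, -13.0, 0.0, 20.0]

# "Integrate" by expanding in the orthogonal basis and taking the
# constant term, as in the virtual-integral construction.
cheb_coeffs = chebyshev.poly2cheb(poly_coeffs)
constant_term = cheb_coeffs[0]

# Direct numerical check: (1/2π) ∫ f(cos θ) dθ, computed as a mean of
# equally spaced samples (exact for trigonometric polynomials).
theta_grid = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
numeric = np.polyval(poly_coeffs[::-1], np.cos(theta_grid)).mean()

print(constant_term, numeric)  # both equal 3.0 (up to rounding)
```

With the Koornwinder density, the cosines are replaced by Koornwinder polynomials and the constant term by the coefficient of the trivial partition, but the bookkeeping is the same.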
To control the poles, it will be useful to take a slightly different approach. Rather than take as the basic identity the fact that K λ integrates to δ λ0 , we use the analogue of Kadell's lemma, which here gives a formula for the integral of a suitable interpolation polynomial against the Koornwinder density. Thus (where I K denotes the "virtual Koornwinder integral" of [19]) we have . Since this expansion is triangular with respect to the inclusion partial order, we find after integrating term-by-term The pole at t 0 = 0 can be removed by symmetry; the pole at T = 0 can also be removed using the explicit formulas for that case in [19].
Since the Macdonald polynomials are a basis for generic q and t, a similar statement applies to the poles of the integral of an arbitrary symmetric function.
Lemma 5.9. For any symmetric function f of degree ≤ k,

We recall from [19, Cor. 7.6] the following symmetries of the virtual Koornwinder integral (after fixing a couple of typos): (We can double-check these identities by setting f = P * λ (; q, t, T ; t 0 ).) This last symmetry generates an action of W (D 4 ) on the parameters, which gives rise to a symmetry I K (f ; q, t, T ; t 0 , t 1 , t 2 , t 3 ) = I K (τ t0,t1,t2,t3;t f ; q, t, T t 0 t 1 t 2 t 3 /t 2 ; t/t 0 , t/t 1 , t/t 2 , t/t 3 ). (5.48) It is tempting here to specialize the parameters so that T = t n and T t 0 t 1 t 2 t 3 /t 2 = t n ′ , so that both sides become finite integrals. The difficulty, of course, is that the specialization to a finite-dimensional integral only works for otherwise generic parameters, so we need to ensure that the direction in which we take the limit has no effect. This turns out to be a problem, for the simple reason that the virtual integral has a pole when T 2 t 0 t 1 t 2 t 3 /t 2 ∈ t N ! As a result, we cannot expect to obtain an identity of finite-dimensional integrals from this symmetry.
Despite this fact, it turns out that the symmetry is quite useful! When applying the virtual integral below, we will in general have little control over the parameters of the Koornwinder integral, and in at least one case find ourselves having to understand the limit in a case when the direction of the limit is important. Since the polar divisor of the integral has multiplicity 1 at the generic point with T 2 t 0 t 1 t 2 t 3 /t 2 ∈ t N , in order to compute the integral in a general direction, we only need to understand the limits in two directions. The symmetry, in particular, gives us two directions in which we can express the limit as a finite-dimensional integral.
We thus obtain the following, in the special case of interest below.
(5.51)

By inspection, g(1, u) is an n-dimensional integral, while g(u, 1) becomes an n ′ -dimensional integral once we apply the symmetry; in each case, the resulting expression is holomorphic at u = 1.
It will be useful to know how various natural products transform under duality and the homomorphisms τ b1,...,bm;t . The key facts are the liftings valid whenever |a| < 1; given the expressions on the right, it is straightforward to apply either homomorphism.
For duality, we have the following correspondences; in each case, we take ord(q) = ord(t) = 0, and choose the remaining parameters so that the Gamma functions have arguments of order in [0, 1]. Then the claim is that if we divide by the limit as p → 0, the residual functions are related by ω q,t . For interaction factors, we have: If a = pq/t, we can take the square root to obtain For univariate factors, we have We also note that for ord(x) ∈ (0, 1), Γ p,1/q (x) = 1/Γ p,q (qx).
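The univariate gamma-function facts used in this section (such as the reflection identity Γ p,q (pq/x)Γ p,q (x) = 1 and the shift Γ p,q (qx) = θ p (x)Γ p,q (x), both standard) can be sanity-checked numerically with truncated products; the helper names below are our own:

```python
def theta(x, p, terms=60):
    # Truncated theta function: θ_p(x) = Π_{i≥0} (1 - p^i x)(1 - p^{i+1}/x).
    val = 1.0 + 0.0j
    for i in range(terms):
        val *= (1 - p**i * x) * (1 - p**(i + 1) / x)
    return val

def ell_gamma(x, p, q, terms=60):
    # Truncated elliptic gamma:
    # Γ_{p,q}(x) = Π_{j,k≥0} (1 - p^{j+1} q^{k+1}/x) / (1 - p^j q^k x).
    val = 1.0 + 0.0j
    for j in range(terms):
        for k in range(terms):
            val *= (1 - p**(j + 1) * q**(k + 1) / x) / (1 - p**j * q**k * x)
    return val

p, q, x = 0.15, 0.25, 0.4 + 0.3j

# Reflection identity: Γ_{p,q}(pq/x) Γ_{p,q}(x) = 1
# (the truncated products cancel factor by factor).
refl = ell_gamma(p * q / x, p, q) * ell_gamma(x, p, q)

# Shift identity: Γ_{p,q}(qx) = θ_p(x) Γ_{p,q}(x)
# (the truncated products telescope in k, up to errors of size q^terms).
shift = ell_gamma(q * x, p, q) / (theta(x, p) * ell_gamma(x, p, q))

print(abs(refl - 1.0), abs(shift - 1.0))  # both are tiny
```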

The Littlewood kernel
If we translate Conjecture L1 of [23] into a statement about the kernel, we obtain the following.
Theorem 6.1. The interpolation kernel satisfies the integral identity

Proof. This identity certainly holds in the limit p → 0, c ∼ p 1/2 , v r ∼ 1, as then both integrals become Koornwinder integrals. Moreover, if we divide both sides by the common limit, then both sides have formal Puiseux series expansions in p with rational function coefficients. It thus suffices to show that the two sides agree for a Zariski dense set of parameters consistent with this scaling. Now, suppose we already know a particular case of the identity, with parameters (c, v 0 , v 1 , v 2 , v 3 ). Using this, it turns out to be relatively straightforward to establish that the identity also holds in the case . Indeed, starting with the integral on the left, we can expand K (2n) t 1/2 c using the braid relation, in such a way that after exchanging the order of integration (which is not a problem as long as all parameters are inside the unit circle), the inner integral becomes the known instance of the transformation.
Apply that instance, then exchange the order of integration again. At this point, the inner integral is of the form to which commutation applies (in the form of Proposition 3.11). After commutation, we obtain an integral over two sets of n variables, one of which we can simplify using Proposition 3.7. The resulting integral is precisely the desired right-hand side. Now, the identity trivially holds whenever v 0 v 1 = pqt/c 2 , and thus a simple induction using the preceding paragraph shows that it holds when v 0 v 1 = pqt k /c 2 for any integer k ≥ 1. This is a Zariski dense set of parameters, and thus the identity holds in general.
Remark 1. This is dual to Theorem 7.1 below, in the sense that if we analytically continue both sides in the dimension and apply the modified Macdonald involution, we obtain the analytic continuation of Theorem 7.1.
In particular, if the reader prefers difference operators to degenerate integral operators, the reader may first prove Theorem 7.1 (say by following the argument given in the remark following said Theorem), then apply duality.
Remark 2. An alternate approach involves taking v 0 v 1 = t 2−2n q −m , so that the transformation becomes an identity of theta functions which, when y is a suitable partition, becomes [23,Thm. 4.7].
An interesting special case of this transformation comes when v 2 v 3 = pq. In that case, the left-hand side is independent of v 2 , while the right-hand side is (up to simple gamma factors) independent of v 0 . We thus immediately obtain the following Corollary.
This suggests the following definition.
Definition 4. The Littlewood kernel is the meromorphic function (defined for |p|, |q| < 1) When x is specialized to a geometric progression, the integral on the right becomes an elliptic Selberg integral, thus giving an explicit expression.
Similarly, when n = 1, the interpolation kernel in the integrand simplifies so that we obtain an elliptic beta integral.
Proposition 6.4. We have When t = q (or t = p), the interpolation kernel is essentially a determinant, so that (following [2]) the Littlewood kernel becomes a pfaffian.
Another case with a reasonably nice expression is when c = pq/t, so the interpolation kernel in the integrand can be expressed as a product.
Proof. It suffices to prove this in the case ord(t 0 ) = ord(z i ) = 0, since both sides have well-behaved Puiseux series expansions. We can then specialize so that z is a partition based at t 0 , at which point the claim follows from [23,Thm. 4.7].
Remark 1. Note that the right-hand side converges formally whenever If | ord(t 0 )| = | ord( z)| = 0, it follows from the branching rule below that the sum converges to the Littlewood kernel for the full range 0 < ord(c) < 1/2.
P µ 2 ( x; q, t), (6.11) which can be made rigorous in the same way as the Macdonald limit of the interpolation kernel. Note that when c = (qt) 1/4 here, the coefficient is essentially the coefficient in the usual Littlewood identity for Macdonald polynomials.
In particular, when c = (pqt) 1/4 , this becomes the usual elliptic Littlewood sum, and we obtain the following evaluation.
Theorem 6.8. When c = (pqt) 1/4 , the Littlewood kernel has the product expression

Remark. This can also be proved using integral manipulations alone: if one expands the interpolation kernel using the degenerate branching rule (Proposition 3.6 above), swaps the two resulting integrals, then applies the degenerate braid relation (Proposition 3.7), one obtains an (n − 1)-dimensional integral involving a (2n − 1)-dimensional instance of the interpolation kernel. If we then perform the same steps again, we obtain the (n − 1)-dimensional instance of the Theorem. Working backwards, this gives an inductive proof of this evaluation.
Another consequence of the formal deformed Littlewood sum expression is that the Littlewood kernel satisfies a branching rule.
Corollary 6.9. The Littlewood kernel satisfies the branching rule

Proof. Expand the left-hand side via the formal sum for t 0 = v, and note that this gives an expansion in (2n − 1)-variable interpolation functions indexed by partitions with ≤ 2n − 2 parts. As a result, we can expand those interpolation functions using the integral representation; simplifying gives the desired result.
Since the Littlewood kernel is defined using the interpolation kernel, we can use the braid relation to obtain a transformation of sorts.
Theorem 6.10. The Littlewood kernel satisfies the integral identity

Proof. Expand L (2n) d using the definition, exchange the two integrals (allowable since we can choose the parameters so that all singularities are inside the unit circle), then use the braid relation to simplify the inner integral.
When x is specialized to a partition pair, we obtain the following.
Corollary 6.11. For otherwise generic parameters satisfying t 2n t 0 t 1 t 2 u 0 = pqd 2 , If we take d = √ −1 in this identity, we find that the right-hand side agrees (even including the prefactors) with the right-hand side of Conjecture Q1 of [23]; similarly, the case d = p −1/4 recovers (up to shifting v) the right-hand side of Conjecture Q2 of [23]. We may thus view those conjectures as claims about certain degenerations of the Littlewood kernel (specifically for c ∈ { √ −t, (t 2 /p) 1/4 , (t 2 /q) 1/4 }). To be precise, those conjectures do not give formulas for the Littlewood kernel, but rather describe how to integrate certain test functions against the Littlewood kernel.
Since those three degenerate examples all (conjecturally in [23], but see below) give rise to vanishing identities, this suggests that the same should apply to an arbitrary instance of the Littlewood kernel.
Theorem 6.12. For generic parameters satisfying t 2n t 0 t 1 u 0 = √ pqt, the integral vanishes unless λ has the form µ 2 , when the integral is .

(6.17)
Here, Z is a normalization constant explicitly given by

Proof. Set v = √ t, t 2 = pq/td 2 in Corollary 6.11, and observe that the right-hand side can be expressed as a sum via [23, Cor. 4.8]. Inverting the binomial coefficient in this sum turns the remaining interpolation function into a biorthogonal function, giving the above identity.
Although the Littlewood kernel is ill-defined for c = 1, since the same applies to the interpolation kernel, naïve manipulations suggest that for suitable test functions f , the point is that the interpolation kernel for c = 1 corresponds to the identity as an integral operator. The Littlewood kernel similarly has issues for c = t 1/2 , but again we can introduce a test function to obtain (essentially via the same limit as Proposition 3.7) D ( y; p, q).

(6.20)
Although this is not well-defined in general, we can check that the required manipulations are valid when f is the interpolation kernel (or, more precisely, a suitable product of the interpolation kernel and gamma functions).
This gives a new explicit vanishing integral following the above argument.

(6.22)
The normalization constant Z is given by When t 2n t 0 u 0 = 1/d 2 in the vanishing result, the biorthogonal function becomes an interpolation function.
It turns out that there is a more general vanishing result for interpolation functions.
Theorem 6.14. For t 2n t 0 u 0 = 1, the integral vanishes unless λ has the form µ 2 , when it equals Z µ ∆ µ (1/t 2 u 2 0 |t 2n , pqt 2n−2 , √ pqv ±1 d 2 /u 0 ; q, t 2 ; p)

Proof. If we attempt to substitute the above parameters into Corollary 6.11, we find that the integral on the right-hand side becomes singular (two parameters multiply to (t 2 ) 1−n ). In particular, the right-hand side becomes a finite sum in this limit, and in fact at most one term can be nonzero (corresponding to z = t 2n−2i (p, q) µ i tt 0 with µ 2 = λ). The desired vanishing property follows; the specific nonzero values are then obtained by taking the appropriate residue.
Remark. The residue calculation is rather tedious, so it may be worth noting the following shortcut: It is quite simple to determine the dependence of the right-hand side on d and v (as these only appear in the residue via univariate factors of the integrand), so that one can reduce to Theorem 6.12 (taking d = 1 or v = (d 2 √ pq/t 0 ) ±1 ).
In addition to vanishing results, another nice special case of Theorem 6.10 involves taking v = t 1/2 , w = cd 2 /t, so that the integral on the right-hand side becomes an instance of the definition of the Littlewood kernel. We thus find the following.
Theorem 6.15. The Littlewood kernel satisfies the identity Remark. Note that when c = q −1/2 d, the integral on the right-hand side should naïvely become a difference operator; of course the corresponding identity holds, and by the same proof.
Again a "Bailey Lemma"-like manipulation gives a transformation.
is invariant under swapping d and e.
We obtain a different transformation by specializing the parameters in Theorem 6.10 so that the right-hand side transforms under Theorem 6.1.
Corollary 6.17. The expression is invariant under v → 1/v.
In the limit c → q −1/2 , this becomes a difference equation.
Corollary 6.18. The expression 1 It turns out that in many cases, this 1-parameter family of difference equations suffices to uniquely determine the Littlewood kernel; see below, where we use it to evaluate the Littlewood kernel in the case d = (pt) 1/4 .
As usual for branching rules, the right-hand side of the branching rule for L (2n) c appears to have less symmetry than the left-hand side. This, of course, corresponds to a transformation, which generalizes to the following.
is invariant under t r → pq/c 2 t r .
Proof. Expand the Littlewood kernel using the definition, choosing v so that the integral over y still has only four parameters. Applying the degenerate version of commutation to this integral gives an integral in which the desired symmetry is manifest.
When two parameters multiply to pq, the integral again is independent of the remaining parameter, and once more gives rise to the Littlewood kernel.
Corollary 6.20. We have K (2n) t 1/2 c ( x; t; p, q). (6.32) Taking c = (pqt) 1/4 gives another semi-explicit special case of the Littlewood kernel: This can also be combined with the "distributional" formula for L (2n) Two more such formulas will follow from the "distributional" expressions of L q −1/4 t 1/2 ; we state them here, but note that they are properly viewed as corollaries of Theorems 8.4 and 8.10 below. . . . , x 2n ; t; p, q) = K (n) t 2 (x 2 1 , . . . , x 2 n ; x 2 n+1 , . . . , x 2 2n ; p 2 q 2 /t 2 ; p 2 , q 2 ) Γ p,q (t) 2n 1≤i<j≤2n Γ p,q (tx ±1 i x ±1 j ) Since the interpolation kernel at c = (p/t) 1/2 had an unexpected determinantal expression, this suggests that we should investigate L (2n) (pt) 1/4 . Although the argument for the interpolation kernel case does not carry over, we can still use the cases t = q and 2n = 2 as a guide. In particular, if we guess that dividing by a suitable product makes L (2n) (pt) 1/4 independent of q, then there is a natural possibility for that product. We are thus led to guess that is independent of q; this would give us a pfaffian expression for L (2n) (pt) 1/4 ( x; t; p, q). None of the methods we have used above (or will use in Section 8) appears to be applicable to derive such an expression. It turns out, however, that given such a guess, there is a method we can use to prove it. The key observation is that Corollary 6.18 gives a family of difference equations which, in a suitable limit, has a unique formal solution. Indeed, taking ord(v) = 0, ord(d) = 1/4 in Corollary 6.18 gives a difference equation with formal series coefficients that in the limit p → 0 becomes the equation of Corollary 9.2 for u = lim p→0 √ pqt/d 2 .
Thus as long as lim p→0 d 4 /p is not of the form qt n+2−i , the equation has a unique (up to scalars) formal solution.
(Again, this is essentially Nakayama's Lemma: Any nonzero solution must have constant leading coefficient, and thus we can repeatedly subtract constant multiples of a fixed solution to make the other solution have valuation as small as we would like; i.e., expressing that other solution as a formal limit of constant multiples of the fixed solution.) Since in our case lim p→0 d 4 /p = t, there is no difficulty, so it will suffice to prove the equation holds (and verify that we have the correct scalar multiple). It will be convenient to replace q by q 2 and t by p/t 2 , so that the equation becomes After substituting in the claimed value for the Littlewood kernel, we find that we need to show that and R(x) denotes the operator x → 1/x. The quantity in brackets is set apart for the following reason.
Lemma 6.21. The function is holomorphic.
Proof. The only poles of the pfaffian come from poles of F(x_i, x_j; t), and thus are cancelled by the prefactor. The only poles of the prefactor are at zeros of x_i^{-1} θ_p(x_i x_j, x_i/x_j), but these are cancelled by the pfaffian (since the pfaffian is antisymmetric and quasiperiodic).
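The quasi-periodicity invoked here is the standard one for the theta function θ_p(x) = ∏_{k≥0}(1 − p^k x)(1 − p^{k+1}/x); a minimal numerical sketch with truncated products (the helper `theta` is our own illustration, not notation from the paper):

```python
def theta(x, p, K=60):
    """Truncated product for theta_p(x) = prod_{k>=0} (1 - p^k x)(1 - p^(k+1)/x)."""
    val = 1.0
    for k in range(K):
        val *= (1 - p**k * x) * (1 - p**(k + 1) / x)
    return val

p, x = 0.15, 0.7
# Quasi-periodicity theta_p(p x) = -theta_p(x)/x, and inversion theta_p(1/x) = -theta_p(x)/x:
# these are exactly the multipliers an antisymmetric, quasiperiodic pfaffian must reproduce
# for its zeros to cancel the poles of the prefactor.
assert abs(theta(p * x, p) + theta(x, p) / x) < 1e-10
assert abs(theta(1 / x, p) + theta(x, p) / x) < 1e-10
```

The truncation error is of order p^K, so for |p| well inside the unit disc the checks hold to essentially machine precision.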
It will be helpful to first consider a somewhat simpler version of this identity.

Lemma 6.22. For any parameters q, t, x, we have the identity

Proof. Let G(q) denote the given sum as a function of q, after first replacing x_{2n} → x_{2n}/q. We then find (by checking this for every term) that G(pq) = (p/t^2)^{(2n-1)(n-1)} G(q). As a result, in order to show that G(q) = 0, it suffices to show that it is holomorphic. Since the term in brackets is holomorphic, the only poles come from the factors θ_p(t/q^2 x_i x_j) (and their images under the symmetry). It thus suffices to show that the residue of the sum along any such divisor is 0. Taking the residue in x_{2n-1} along the divisor q^2 x_{2n-2} x_{2n-1} = t gives a smaller instance of the identity, and thus the identity follows by induction.
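The vanishing mechanism used in this proof (and repeatedly below) is worth isolating; a sketch of the standard argument, in our own notation:

```latex
\text{Suppose } G \text{ is holomorphic on } \mathbb{C}^{\times} \text{ with } G(pq) = \lambda\, G(q),
\text{ and expand } G(q) = \sum_{m \in \mathbb{Z}} a_m q^m.
\text{Comparing coefficients of } q^m \text{ gives } a_m p^m = \lambda\, a_m,
\text{ so } a_m = 0 \text{ unless } p^m = \lambda.
\text{For generic } t, \text{ the multiplier } \lambda = (p/t^2)^{(2n-1)(n-1)}
\text{ is not an integral power of } p, \text{ and hence } G \equiv 0.
```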
Proof. If G(q) denotes the left-hand side of (6.39) as a function of q, we note that G(pq) = p^{2n^2+n} t^{-4n^2+2n} G(q), so that again it suffices to prove that G(q) is holomorphic. There are now two types of poles to consider, coming from the factors θ_p(t/qv x_i) and θ_p(t/q^2 x_i x_j). The residue in v along the first type of pole vanishes by Lemma 6.22, while the residue in x_j along the second type of pole vanishes by induction.
Proof. As we have already noted, that the right-hand side satisfies the requisite difference equation is simply (up to reparametrization) equation (6.39), and thus this fact holds by Lemma 6.23. It follows that the above expression holds up to a factor independent of x. Since this is equivalent to the expression (6.44) for L^{(2n)}_{(pt)^{1/4}}(x; t; p, q), it is straightforward to verify that this takes the correct value when x = (..., t^{2n-i} v, ...), giving the desired result.
As in the interpolation kernel case, the special case t = p^{1/3} is particularly nice, since we then have an alternate expression, equation (6.34). Cancelling common factors gives the following theta function identity. (Some similar factorizations appeared in [29], but the above appears to be new.) This is, in a somewhat disguised way, a special case of the more general identity (6.46). (This is the special case of [16, Thm. 4.7] in which the two factors agree.) To see this, apply a substitution of the indicated form, then remove the unwanted factors from the rows and columns. We also note here that z_i is a rational function of degree 3 in y_i, and dimension considerations show that the general such function appears in this way.
A particularly nice consequence of the pfaffian expression for L^{(2n)}_{(pt)^{1/4}} arises in the Macdonald polynomial limit of Proposition 6.7. Recall that to obtain such a limit, we compare the two expressions for L^{(2n)}_{p^N t^{1/4}}(p^{-N+1} x; t; p^{4N}, q), (6.48) and take the limit as N → ∞, noting that the Littlewood kernel converges formally in this limit, so the limiting identity continues to hold.
Corollary 6.25. The Macdonald polynomials satisfy the following summation identity.
This is a special case of Conjecture 1 of [1]. It turns out to be straightforward to prove that identity as well.
Corollary 6.26. The Macdonald polynomials satisfy the following summation identity.
Proof. Consider the ratio of symmetric functions. We can evaluate the denominator using the usual Littlewood identity for Macdonald polynomials, and thus find by the previous Corollary that the ratio takes the stated form.
Since the right-hand side is independent of q, so is the left-hand side. Moreover, the left-hand side remains independent of q if we set some of the variables to 0, and thus F(x_1, ..., x_m; t^{2n+1}, q, t) is independent of q for all integers n ≥ m/2. Since this is a Zariski dense set, it follows that F(x_1, ..., x_m; v, q, t) is independent of q.
The claim follows from the case q = 0, which was established in [1].
Remark. In addition, the usual argument allows us to evaluate the case q = t directly as a pfaffian, giving an alternate proof. One can also show directly that the identity is consistent under setting two of the variables to 0, which allows one to prove the case u = t^{2k+1} of the identity from the case u = t; again the result (which is an identity of polynomials in u) follows.
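The Zariski density step used in the proof of Corollary 6.26 (and repeatedly in this paper) is a specialization argument for rational identities; schematically, in our own notation:

```latex
\text{Fix } q_1, q_2. \text{ The difference } F(x_1,\dots,x_m; v, q_1, t) - F(x_1,\dots,x_m; v, q_2, t)
\text{ is rational in } v \text{ and vanishes at } v = t^{2n+1} \text{ for all integers } n \ge m/2.
\text{For } t \text{ not a root of unity these are infinitely many distinct points, so the numerator,}
\text{a polynomial in } v, \text{ vanishes identically; hence } F \text{ is independent of } q \text{ for all } v.
```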

More kernels
Just as Conjecture L1 of [23] has an analogue in terms of the interpolation kernel (Theorem 6.1 above), the same applies to Conjecture L2.
Theorem 7.1. The interpolation kernel satisfies the integral identity

Proof. Again, this becomes an identity of Koornwinder integrals in the limit p → 0, c ~ p^{1/2}, and dividing by the common limit gives formal Puiseux series with rational function coefficients. But by the remark following [23, Prop. 4.13], the identity holds on the Zariski dense set v_0 v_1 = pq^k/c^2, k ∈ Z, so holds in general.
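The trigonometric degeneration invoked in this proof rests on the standard limits θ_p(x) → 1 − x and Γ_{p,q}(x) → 1/(x; q)_∞ as p → 0, which is how elliptic identities degenerate to q-hypergeometric (Koornwinder-level) ones. A numerical sketch with truncated products (the helpers `theta`, `egamma`, `qpoch` are our own illustrations):

```python
def theta(x, p, K=60):
    """Truncated theta_p(x) = prod_{k>=0} (1 - p^k x)(1 - p^(k+1)/x)."""
    val = 1.0
    for k in range(K):
        val *= (1 - p**k * x) * (1 - p**(k + 1) / x)
    return val

def egamma(x, p, q, K=60):
    """Truncated elliptic Gamma: prod_{j,k>=0} (1 - p^(j+1) q^(k+1)/x)/(1 - p^j q^k x)."""
    val = 1.0
    for j in range(K):
        for k in range(K):
            val *= (1 - p**(j + 1) * q**(k + 1) / x) / (1 - p**j * q**k * x)
    return val

def qpoch(x, q, K=60):
    """Truncated q-Pochhammer symbol (x; q)_infinity."""
    val = 1.0
    for k in range(K):
        val *= (1 - q**k * x)
    return val

q, x = 0.2, 0.5
# As p -> 0: theta_p(x) -> 1 - x, and Gamma_{p,q}(x) -> 1/(x; q)_infinity.
assert abs(theta(x, 1e-6) - (1 - x)) < 1e-5
assert abs(egamma(x, 1e-6, q) - 1 / qpoch(x, q)) < 1e-4
```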
Remark. In fact, the argument given there applies directly to the kernel. The proof is essentially the same as that given above for Theorem 6.1, except that instead of the braid and commutation relations for c = t^{1/2}, we use the corresponding difference equations (Propositions 3.12 and 3.14). All but one of the applications of Fubini reduce to linearity of integration (since the difference operator is expressed as a finite sum). The remaining application of Fubini is replaced by a combination of the self-adjointness of D^{(n)}_q(t; p) with respect to the Selberg density together with the fact that ∏_{1≤i≤n, 0≤r<2m} (···) ∆^{(n)}_S(z; t; p, q) is invariant under v_r → pq^2/v_r. (This substitution has the same effect on each of the 2^n terms as inverting all the variables, so has no effect on the sum.) As before, we have the following special case.
is independent of v.
Definition 5. The dual Littlewood kernel is the meromorphic function

Again, though this appears to depend on a choice of √p, this choice can be absorbed by negating v; similarly, it is invariant under negating c or x. We also have the following important symmetry.
Proposition 7.3. The dual Littlewood kernel satisfies the following t → pq/t symmetry.
Proof. Apply the t → pq/t symmetry to the interpolation kernel in the integrand, and simplify using the relation (7.6) between ∆^{(n)}_S(z; t; p, q^2) and ∆^{(n)}_S(z; pq/t; p, q^2), a straightforward application of (1.9).
Proposition 7.4. The dual Littlewood kernel has the following specialization.
Remark. In particular, when t = q, so that the interpolation kernel can be expressed as a determinant, the remainder of the integrand can be expressed as a pfaffian, and thus again [2] shows that the dual Littlewood kernel is a pfaffian.
Since the entries of the pfaffian appear no longer to have nice expressions (they are 2-dimensional instances of the dual Littlewood kernel), we omit the details.
As above, we have expressions in terms of the interpolation kernel when c = pq/t or c = √ t. We give the former, as the latter has complicated prefactors and can in any event be obtained via the t → pq/t symmetry.
Proposition 7.6. We have (7.10).

Proof. We need to take the limit c → 1 in the expression (7.11). Now, we naïvely expect K^{(n)}_c(z; x; t; p, q) ∆^{(n)}_S(z; t; p, q) to behave like a delta function in this limit, from which the given expression would follow. To make this precise, we note that although the right-hand side of (7.10) has a fairly complicated (albeit product) limit as p → 0, it converges to 1 as q → 0, and indeed has a polynomial Puiseux series in q. Moreover, the integral operator we are applying to this formal series differs by formal factors from an instance of the usual integral operator associated to the kernel, and thus (since the formal factors cancel) indeed converges to the identity as we would expect.
Again, the name "dual Littlewood kernel" comes from an expression as a deformation of the dual Littlewood identity.
Proof. This follows from the dual of [23,Thm. 4.7] as above.
Corollary 7.8. When c = (p/qt)^{1/4}, the dual Littlewood kernel has the product expansion

Remark. Of course, this also follows directly by duality from the corresponding evaluation for L^{(2n)}_{(pqt)^{1/4}}. The image of this under the t → pq/t symmetry has a particularly nice expression.

Corollary 7.9. When c = q^{-1/2} t^{1/4}, the dual Littlewood kernel has the expression (7.16).

One major difference between the Littlewood kernel and its dual is that the dual appears not to satisfy any branching rule. There is, however, an analogue of Theorem 6.10, with essentially the same "Bailey lemma"-type proof.
Theorem 7.10. The dual Littlewood kernel satisfies the integral identity For comparison with Conjectures Q3, Q4, and Q5 of [23], we record the interpolation function version of this identity.
Corollary 7.11. For otherwise generic parameters satisfying t^{n-1} t_0 t_1 t_2 u_0 = pq^2 d^2,

Again, a suitable specialization allows us to evaluate the right-hand side in terms of the dual Littlewood kernel.
Corollary 7.12. We have the identity

The case d = q^{-1/2} t^{1/4} is particularly useful.
Unlike the defining integral for L'^{(n)}_c, this has a well-behaved formal expansion in q for a range of valuations of c, namely -1/2 < ord_q(c) < 0 (which extends to ord_q(c) = 0 if we divide by the limit as q → 0). (This in particular explains why L'^{(n)}_{(p/qt)^{1/4}} has a nice formal expansion in q.) The case c = 1 of Corollary 7.12 can be generalized somewhat, as it is then easier to cancel parameters.
Note that unlike Corollary 7.12, the resulting identity is actually equivalent (by the usual argument) to the full Theorem 7.10.
Corollary 7.14. The dual Littlewood kernel satisfies the identity S ( x; t; p, q) .

(7.21)
Proof. We find that both sides have well-behaved formal expansions in q so long as -1/2 < ord_q(c) < 0, so that we may argue as in the proof of Proposition 7.6.
We also have the following identity obtained by specializing Theorem 7.10 so that we may apply Theorem 7.1 to the right-hand side.
Corollary 7.15. The integral

We also have an analogue of Theorem 6.12, using Corollary 4.14 of [23] in place of Corollary 4.8 op. cit.
(Note that there is a typo there: qt^{n-2} should read qt^{n-1}.)

Theorem 7.16. For generic parameters satisfying t^{n-1} t_0 t_1 u_0 = √(pq), the integral vanishes unless λ has the form (1, 2)µ, when it equals

Z ∆_µ(q/u_0^2 | t^n, t^{n-1} t_0^2, 1/t^{n-1} t_0 u_0, q/t^{n-1} t_0 u_0; t; p, q^2) / ∆_{(1,2)µ}(q/u_0^2 | t^n, t^{n-1} t_0^2, 1/t^{n-1} t_0 u_0, q/t^{n-1} t_0 u_0; t; p, q), (7.24)

where

The analogue of Theorem 6.14 is even simpler than in the Littlewood case, as we can simply specialize x to a partition in Corollary 7.14. This is particularly lucky, since once λ becomes nontrivial, we would need to compute residues at second order poles of the integrand!

Theorem 7.17. For t^{n-1} t_0 u_0 = q, the integral vanishes unless λ has the form (1, 2)µ for some µ, when it equals

We also have an analogue of Theorem 6.19, this time in the form of a difference equation.
is invariant under t_r → p/c^2 t_r.
Corollary 7.19. We have

Proof. When v = c, this is a special case of Corollary 7.12. That the right-hand side is independent of v follows from the case t_2 t_3 = p of the difference equation.
In addition to the Littlewood and dual Littlewood kernel, there is one more such kernel we wish to consider, this time related to Conjecture L3 of [23].
Theorem 7.20. The interpolation kernel satisfies the integral identity, subject to the balancing condition v_0 v_1 v_2 v_3 = pq/c^2.
Proof. As usual, both sides have the same limit as p → 0, c ∼ p 1/2 , and dividing by the common limit gives formal Puiseux series in p with rational function coefficients, so it suffices to prove a Zariski dense set of special cases.
In a suitable limit v_0 v_1 → q^{-m/2}, this becomes an identity involving finite sums of interpolation functions.
The interpolation functions are modular ([21, §6]), but the evaluation at z_i^2 is not preserved by the modular group. In other words, the identity depends on a choice of 2-isogeny, which we may replace by any other 2-isogeny. Upon doing so, we find that the identity we require is a special case of Theorem 7.1; to be precise, it is obtained from that identity by first swapping p and q and then taking the limit v_0 v_1 → q^{-m}.
Remark. Unlike the cases of Theorems 6.1 and 7.1, we have been unable to come up with a direct argument (i.e., one not using a modular transformation). The difficulty is that the relevant analogue of the integral and difference operators we used above is the operator corresponding to the kernel with c = -1, but this is trivial!

Once more, if we cancel two of the parameters, we find that the result is independent of the remaining degree of freedom, motivating the following definition.

Definition 6. The Kawanaka kernel is defined by

Note here that unlike the Littlewood and dual Littlewood kernels, the interpolation kernel in the integrand has parameters (t^2; p^2, q^2) rather than (t; p, q). This is important, as otherwise the right-hand side would have square roots, and the result would depend on the choices of sign. (The residual choice of a square root of pq can be absorbed by negating v.) We have an analogue of the t → pq/t symmetry.
When c = t, pq/t, we can express the result in terms of the interpolation kernel; we give the c = pq/t case, as the other follows from the symmetry (and is more complicated).
Proposition 7.23. The Kawanaka kernel has the special case

Proof. This follows by substituting the expression for K^{(n)}_{pq/t}(z^2; x; t^2; p^2, q^2) into the definition of the Kawanaka kernel, then simplifying using the braid relation.
We again omit the pfaffian cases t = ±q, t = ±p.
As before, the name comes from a formal expansion. The undeformed version of this expansion is the elliptic analogue of Kawanaka's identity, an identity of Macdonald polynomials conjectured by Kawanaka in [13] and proved by Langer et al. in [15].

(7.39)
This has the usual special case giving a product, though in this case the resulting identity is actually new.
Theorem 7.25. The Kawanaka kernel has the following special case with a product expansion.
Proof. Both sides clearly have well-behaved formal expansions, so it suffices to prove this in the case x_i = t^{2n-2i} q^{2λ_i} t_0 for some partition λ. The resulting identity of multivariate elliptic functions is a modular transform of a special case of Corollary 7.8.
Remark. If we replace the left-hand side by its formal expansion, the resulting formal sum may be viewed as an elliptic analogue of Kawanaka's identity. This gives an alternate proof of the latter by a careful limit (i.e., replace p by p^{4N} and multiply x by p, then take the limit N → ∞).
Again, this is particularly nice after applying the t → pq/t symmetry.
Corollary 7.26. For c = t 1/2 , the Kawanaka kernel has the following expression.
(7.41) Again, the kernel does not appear to satisfy any simple branching rule.
The analogues of most of the integral identities are straightforward.
Theorem 7.27. The Kawanaka kernel satisfies the integral identity

Corollary 7.28. For otherwise generic parameters satisfying t^{n-1} t_0 t_1 t_2 u_0 = pqd^2,

Corollary 7.29. The Kawanaka kernel satisfies the identity, which is invariant under swapping d and e.
If we attempt to obtain an identity by specializing the right-hand side to an instance of Theorem 7.20, we find that the resulting identity is trivial. We also do not have an analogue of Theorems 6.19 or 7.18.
The analogue of the vanishing integrals of Theorems 6.12 and 7.16 is again straightforward, now using [23,Cor. 4.16]. Note that in this case, the integral never vanishes.
Remark. Of course, since the above Selberg integral has 8 parameters, one can obtain a large number of other quadratic evaluations by applying the W (E 7 ) symmetry of the integral, [22, §9].
Now, if we take ord(c) = 1/2, ord(t_0) = ord(t_1) = 0, ord(v_0) = ord(v_1) = 1/2 in Theorem 8.1, then the integrals on either side become Koornwinder integrals in the limit p → 0, so that we may apply the results of Section 5 to analytically continue in the dimension. Applying the Macdonald involution, reparametrizing, then specializing to a finite-dimensional integral gives the following result.
Again, this becomes a quadratic transformation when x = (t^{2n-1}w, ..., w). Unlike Theorem 8.1, this does not give an evaluation of L^{(2n)}_{q^{-1/4}t^{1/2}}, but does correspond to the following "distributional" statement involving ∆^{(n)}_S(z; t; p, q^{1/2}), (8.5), for suitable test functions f. Although it is difficult to make precise the notion of "suitable" here, it certainly follows that, since Theorem 8.4 is obtained from Theorem 6.10 via this substitution, the same substitution applies to the corollaries of this theorem. In particular, substituting into Corollary 6.11 (i.e., specializing x to a partition pair) proves Conjecture [23, Q2]. (To be precise, we must also swap p and q, but this is no difficulty.) There is also a quadratic evaluation, but since two of the parameters multiply to a negative (but large) power of t, there are significant contour issues, so we omit the details.
A similar calculation applies when dualizing Theorem 8.2; the only difference is that in the resulting elliptic Selberg integral, two of the parameters are 1 and t^{1/2}, so that we may apply the τ_{1,t^{1/2};t} symmetry (5.47) without affecting the finite dimensionality of either the kernel or the integral. We obtain the following result, which we state in "distributional" form for concision. Substituting this into Corollary 7.28 proves Conjecture Q6 of [23].
We would expect (following the derivation of [23]) to obtain another such result by dualizing Theorem 8.1, after first swapping p and q. At first glance, however, this appears impossible, for a very simple reason: there is no way to assign valuations to the parameters so that the integral becomes a Koornwinder integral in the limit! (Indeed, it is not even clear whether we can specialize the valuations in such a way that the limiting integral has an evaluation at all...) The simplest way to avoid this problem is to note that Corollary 7.14 implies Theorem 7.10 in much the same way as the definition of L'^{(n)}_c. And, as we already observed in the proof of that corollary, the right-hand side of that identity does have a well-behaved formal expansion (albeit in q, rather than p).
In other words, we need only dualize the following identity, simply the special case c = q −1/2 t 1/4 of Corollary 7.14, except with p and q swapped.
At this point, if ord(v) = ord(x) = 0, both sides have perfectly well-behaved formal expansions, and the integral on the left becomes a Koornwinder integral in the limit, so there is no difficulty in analytically continuing in the dimension and dualizing. We do encounter one more difficulty when specializing to finite dimension, however, as the Koornwinder parameters are then ±1, ±√t, and thus we encounter a pole of the lifted Koornwinder integral. Of course, we have anticipated this problem, so need only apply Lemma 5.10.
(This, in particular, explains why Conjecture Q5 of [23] involved sums of two integrals.) In this way, we obtain the following result, again stated in distributional form. in the sense that Theorem 7.10 and its corollaries continue to hold after the stated specialization.
Remark. This proves Conjecture Q5 of [23]. There is a corresponding evaluation coming from Theorem 7.17, but as this has two versions (depending on the parity of the dimension), each of which evaluates a sum of two integrals differing by a simple elliptic factor from an integral with an evaluation (so equivalent to a univariate sum), we omit the details.
We now have two conjectures remaining, Q1 and Q4 of [23]. It is straightforward to verify that these two conjectures (in kernel form) are dual to each other, so it will suffice to prove one, say Q1 (which is slightly simpler). One natural approach is to follow the development of [23] in reverse, and prove Q1 by a modular transform from Q2 (i.e., Theorem 8.4 above). This requires a suitable choice of algebraic degeneration of the identity, but it turns out that the relevant special cases of Theorem 6.12 are suitable for that purpose, in that, although the corresponding kernel identity is only a special case of the version of Theorem 6.10 we require, it is sufficiently general that a couple of "Bailey Lemma"-type steps suffice to prove the full version. (There would normally be a difficulty, in that the first step of obtaining Theorem 6.12 from Theorem 6.10 was to specialize the auxiliary parameter v, but that parameter happens to disappear in the special case of interest.) Rather than give the details of the above approach, we will take an alternate approach that, although it still takes some advantage of the special structure of our particular case, has a better chance of being adaptable to other special cases. The idea is that if we were dealing with a special case that had a well-behaved formal series expansion, then it would be enough to show that the putative Littlewood kernel satisfied the integral equation of Corollary 6.20 (or, more precisely, the corresponding special case of Theorem 6.19). Although our case is not formal, it turns out that we can finesse this issue, at the cost of having to prove the full version of Theorem 6.19, which in our case becomes the following, involving the density ∆^{(n)}_S(z; t, pt, qt, pqt, p^2q^2/t^2u_0, ..., p^2q^2/t^2u_3; t^2; p^2, q^2).
Proof. Since K^{(2n)}_{t^{1/2}} can be written as a product, we find that

K^{(2n)}_{t^{1/2}}(±√(-z); x; t; p, q) ∆^{(n)}_S(z; t, pt, qt, pqt; t^2; p^2, q^2) (8.12)
= ∏_{1≤i≤n, 1≤j≤2n} Γ_{p^2,q^2}(-t x_j^{±2} z_i^{±1}) ∆^{(n)}_S(z; p^2 q^2/t^2; p^2, q^2) Γ_{p,q}(t)^{2n} ∏_{1≤i<j≤2n} Γ_{p,q}(t x_i^{±1} x_j^{±1}),

and thus the given identity reduces to the main theorem of [3] (a.k.a. the case c = d = pq/t of Corollary 3.10 above).

Proof. We find that both sides have the same limit as p → 0 with 0 < ord(c) ≤ 1/4, and dividing by the common limit makes both Puiseux series have rational function coefficients. Thus, by Lemma 9.3 below, it will suffice to show that if we denote the right-hand side by F_c(x), then the integral of K^{(2n)}_{t^{1/2}}(z; x; t; p, q) F_c(z) ∆^{(2n)}_S(z; √(pq) v^{±1}/c^2; t; p, q) (8.14) is independent of v. (This only determines the right-hand side up to a scalar, but setting x_i = t^{2n-i} v makes the right-hand side an elliptic Selberg integral, so that we can explicitly evaluate it.) This is a straightforward combination of commutation (Corollary 3.10) and the previous Lemma.
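The manipulations of elliptic Gamma factors in the proof above rest on the standard functional equations of Γ_{p,q}(x) = ∏_{j,k≥0}(1 − p^{j+1}q^{k+1}/x)/(1 − p^j q^k x); a numerical sanity check with truncated products (the helpers `theta` and `egamma` are our own illustrations, not the paper's notation):

```python
def theta(x, p, K=50):
    """Truncated theta_p(x) = prod_{k>=0} (1 - p^k x)(1 - p^(k+1)/x)."""
    val = 1.0
    for k in range(K):
        val *= (1 - p**k * x) * (1 - p**(k + 1) / x)
    return val

def egamma(x, p, q, K=50):
    """Truncated elliptic Gamma: prod_{j,k>=0} (1 - p^(j+1) q^(k+1)/x)/(1 - p^j q^k x)."""
    val = 1.0
    for j in range(K):
        for k in range(K):
            val *= (1 - p**(j + 1) * q**(k + 1) / x) / (1 - p**j * q**k * x)
    return val

p, q, x = 0.10, 0.12, 0.6
# Difference equation Gamma_{p,q}(q x) = theta_p(x) Gamma_{p,q}(x):
assert abs(egamma(q * x, p, q) - theta(x, p) * egamma(x, p, q)) < 1e-8
# Reflection Gamma_{p,q}(x) Gamma_{p,q}(pq/x) = 1, the mechanism behind
# cancelling pairs of Gamma factors in Selberg densities:
assert abs(egamma(x, p, q) * egamma(p * q / x, p, q) - 1) < 1e-8
```

The same check with p and q exchanged verifies Γ_{p,q}(px) = θ_q(x) Γ_{p,q}(x), reflecting the p ↔ q symmetry of the elliptic Gamma function.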
Theorem 8.10. The Littlewood kernel has the "distributional" limit involving ∆^{(n)}_S(z; t, pt, qt, pqt; t^2; p^2, q^2), (8.15), in the sense that Theorem 6.10 and its corollaries continue to hold after the stated specialization.
Proof. The previous Lemma shows that this holds in the special case of Theorem 6.15. A straightforward "Bailey Lemma" step shows that the general case of Theorem 6.10 holds as well.
Remark. We again used the fact that the "v" parameter of Theorem 6.10 disappears when d = √ −t. This issue would need to be worked around to make the above argument work for other values of d, but most likely one could show that the known test functions span a sufficiently large space to include the required test functions.
This proves Conjecture Q1 of [23]; dualizing in the usual way gives the following, which proves Conjecture Q4 of [23], finishing the set. Note that as above, we obtain two cases depending on the parity of the dimension. The limit involves ∆^{(n)}_S(z; t^2, p, t, pt; t^2; p^2, q^2), (8.17), in the sense that Theorem 7.10 and its corollaries continue to hold after the stated specialization.
Since it is quite straightforward to perform the required substitutions into the various corollaries of Theorems 6.10, 7.10, and 7.27, we omit the details. The one exception is Corollary 6.16 and its analogues for the dual Littlewood and Kawanaka kernels, where something interesting occurs. Recall that Corollary 6.16 gave a transformation between two integrals involving instances of the Littlewood kernel with differing parameters. If we specialize the parameters so that both kernels correspond to one of the special cases computed in this section, we obtain new quadratic transformations. Curiously, if we do the same for the dual Littlewood kernels instead, we find that most of the "new" transformations already appeared in the list corresponding to the Littlewood kernel. This appears to be related to the relations (8.19) (involving the kernel at c = (p/qt)^{1/4}), which arise from the fact that the three kernels all have nearly the same product expressions. If we substitute the first relation into the case d = (pqt)^{1/4} of Theorem 6.15, we obtain an expression for the Littlewood kernel as an integral involving the dual Littlewood kernel. The relevant Selberg density has four parameters, but if c^2 = -t or c^4 = t^2/p, two of the Gamma factors cancel to give an expression for the dual Littlewood kernel. Now, this is in fact not a legal substitution (since specializing c in this way causes issues with singularities), but this calculation at least suggests that the corresponding instances of the Littlewood and dual Littlewood kernels should be closely related. And, indeed, if we (for instance) compare Theorems 8.10 and 8.4, we see that the two "distributions" differ by simple univariate factors.
In any event, by taking all pairs of parameters coming from the above theorems, we obtain the following special cases of Corollary 6.16 et al., each of which is invariant under swapping p and q.

Appendix: Uniqueness of formal solutions
A key step in using integral or difference equations to determine various formal series was the fact that the limiting systems have no nonconstant polynomial solutions. As this requires some properties of interpolation polynomials [17,19], we address those statements in this appendix.
For the difference equations, we have the following, where P*^{(n)}_µ is Okounkov's interpolation polynomial [17] (in the notation of [19]). The limit p → 0 of Corollary 3.18 gives an expansion of the form

D^{(n)}_q(v, u/v; t; p) P*^{(n)}_λ(; q, t, s) = Σ_{µ⊂λ} s^{-|λ/µ|} C^0_{λ/µ}(q^{1/2} t^{n-1} s v, q^{1/2} t^{n-1} s u/v; q, t) C^0_{(1^n+µ)/λ}(t^{n-1} u; q, t) d_{λµ}(q, t) P*^{(n)}_µ(; q, t, q^{1/2} s),

where d_{λµ}(q, t) is independent of s, u, v, and is nonzero precisely when λ/µ is a vertical strip. We thus see that the RHS is a Laurent polynomial (symmetric under v → u/v) of degree ≤ ℓ(λ) in v. Moreover, the only term that can contribute to order v^{ℓ(λ)} is the one with µ = λ - 1^{ℓ(λ)}, and the hypothesis ensures that this coefficient is nonzero. Now, among those λ such that c_λ ≠ 0, choose one with ℓ(λ) maximal. Then this term gives a nonzero contribution to the coefficient of v^{ℓ(λ)} P*^{(n)}_µ(; q, t, q^{1/2} s) (9.4) in the output of the difference operator, while no other term can contribute to this coefficient. It follows that this coefficient is nonzero, and the result follows.
Remark. The constraint on u is equivalent to saying that D_q(v, u/v; t) 1 ≠ 0.
We also needed the following, apparently weaker, system of equations.
Corollary 9.2. Let q and t be generic, and u such that ∏_{1≤i≤n} (1 - t^{n-i} u^2) ≠ 0. Let f(z) be a BC_n-symmetric Laurent polynomial such that D^{(n)}_q(u(t^{1/2}v)^{±1}; t; p) f (9.5) is invariant under v → 1/v as a polynomial in v. Then f is constant.
Proof. Let D_v be the given operator. Since D_v = D_{1/tv} as operators, we conclude that D_v f = D_{tv} f for all v, and thus D_v f = D_{t^k v} f for all integers k. By Zariski density, we conclude that D_v f is independent of v, and the result follows.
For the integral equations, essentially the same argument applies; the only difference is that "vertical strip" is replaced by "horizontal strip", and we must choose λ to maximize λ 1 rather than ℓ(λ). We obtain the following.
Lemma 9.3. Let q, t be generic, and let u be such that (t^n u; q) ≠ 0. Then for any nonconstant BC_n-symmetric Laurent polynomial f(z), the result is a nonconstant function of v.
Remark. Note that the excluded values of u are precisely those for which the integral operator becomes singular.