Vector-Valued Polynomials and a Matrix Weight Function with $B_2$-Action

The structure of orthogonal polynomials on $\mathbb{R}^{2}$ with the weight function $| x_{1}^{2}-x_{2}^{2}|^{2k_{0}}| x_{1}x_{2}|^{2k_{1}}e^{-(x_{1}^{2}+x_{2}^{2})/2}$ is based on the Dunkl operators of type $B_{2}$. This refers to the full symmetry group of the square, generated by reflections in the lines $x_{1}=0$ and $x_{1}-x_{2}=0$. The weight function is integrable provided $k_{0},k_{1},k_{0}+k_{1}>-\frac{1}{2}$. Dunkl operators can be defined for polynomials taking values in a module of the associated reflection group, that is, a vector space on which the group acts by an irreducible representation. The unique 2-dimensional representation of the group $B_{2}$ is used here. The specific operators for this group and an analysis of the inner products on the harmonic vector-valued polynomials are presented in this paper. An orthogonal basis for the harmonic polynomials is constructed, and is used to define an exponential-type kernel. In contrast to the ordinary scalar case, the inner product structure is positive-definite only when $(k_{0},k_{1})$ satisfies $-\frac{1}{2}<k_{0}\pm k_{1}<\frac{1}{2}$. For vector polynomials $(f_{i})_{i=1}^{2}$, $(g_{i})_{i=1}^{2}$ the inner product has the form $\iint_{\mathbb{R}^{2}}f(x) K(x) g(x)^{T}e^{-(x_{1}^{2}+x_{2}^{2})/2}dx_{1}dx_{2}$, where the matrix function $K(x)$ must satisfy various transformation and boundary conditions. The matrix $K$ is expressed in terms of hypergeometric functions.


Introduction
The algebra of operators on polynomials generated by multiplication and the Dunkl operators associated with some reflection group is called the rational Cherednik algebra. It is parametrized by a multiplicity function, which is defined on the set of roots of the group and is invariant under the group action. For scalar-valued polynomials there exists a Gaussian-type weight function which demonstrates the positivity of a certain bilinear form on polynomials, for positive values (and a small interval of negative values) of the multiplicity function. The algebra can also be represented on polynomials with values in an irreducible module of the group. In this case the problem of finding a Gaussian-type weight function, and the multiplicity-function values for which it is positive and integrable, becomes much more complicated. Here we initiate the study of this problem on the smallest two-parameter, two-dimensional example, namely the group of type $B_2$ (the full symmetry group of the square).
Griffeth [7] defined and studied analogues of the nonsymmetric Jack polynomials for arbitrary irreducible representations of the complex reflection groups in the family $G(r,1,n)$. This paper introduced many useful methods for dealing with vector-valued polynomials. In the present paper we consider $B_2$, which is the member $G(2,1,2)$ of the family, but we use harmonic polynomials rather than Griffeth's Jack polynomials, because the former play a crucial part in the analysis of the Gaussian weight. There is a detailed study of the unitary representations of the rational Cherednik algebra for the symmetric and dihedral groups in Etingof and Stoica [6].
We begin with a brief discussion of vector-valued polynomials and the results which hold for any real reflection group; this includes the definition of the Dunkl operators and the basic bilinear form. Section 3 specializes to the group $B_2$ and contains the construction of an orthogonal basis of harmonic homogeneous polynomials, as well as a brief discussion of the radical. Section 4 uses this explicit basis to construct the appropriate analogue of the exponential function. Section 5 contains the derivation of the Gaussian-type weight function; it is a $2\times 2$ matrix function whose entries involve hypergeometric functions. This is much more complicated than the scalar case. The method of solution is to set up a system of differential equations, find a fundamental solution, and then impose several geometric conditions, involving behavior on the mirrors (the walls of the fundamental region of the group), to construct the desired solution.

General results
Suppose $R$ is a root system in $\mathbb{R}^N$ and $W = W(R)$ is the finite Coxeter group generated by the reflections $\{\sigma_v : v \in R\}$, where $\langle x, y\rangle := \sum_{i=1}^N x_i y_i$, $|x| := \langle x, x\rangle^{1/2}$, and
$$x\sigma_v := x - 2\,\frac{\langle x, v\rangle}{\langle v, v\rangle}\, v$$
for $x, y, v \in \mathbb{R}^N$ and $v \neq 0$. Let $\kappa$ be a multiplicity function on $R$ (that is, $u = vw$ for some $w \in W$ and $u, v \in R$ implies $\kappa(u) = \kappa(v)$). Suppose $\tau$ is an irreducible representation of $W$ on a (real) vector space $V$ of polynomials in $t \in \mathbb{R}^N$, of dimension $n_\tau$. (There is a general result for these groups that real representations suffice; see [1, Chapter 11].) Let $\mathcal{P}_V$ be the space of polynomial functions $\mathbb{R}^N \to V$; that is, the generic $f \in \mathcal{P}_V$ can be expressed as $f(x,t)$, where $f$ is a polynomial in $x, t$ and $f(x,t) \in V$ for each fixed $x \in \mathbb{R}^N$. There is an action of $W$ on $\mathcal{P}_V$ given by $wf(x,t) := f(xw, tw)$, $w \in W$.
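As a concrete illustration (not from the paper), the reflection formula above is easy to implement and sanity-check numerically; the helper `reflect` below is a hypothetical name, and the sample roots are the four $B_2$ mirror directions.

```python
def reflect(x, v):
    """x sigma_v = x - 2 <x,v>/<v,v> v : reflection in the hyperplane v-perp."""
    s = 2 * sum(a * b for a, b in zip(x, v)) / sum(b * b for b in v)
    return [a - s * b for a, b in zip(x, v)]

x = [1.0, 2.0]
# B_2 root directions: coordinate mirrors and diagonals
for v in ([1.0, 0.0], [0.0, 1.0], [1.0, -1.0], [1.0, 1.0]):
    y = reflect(reflect(x, v), v)        # involution: sigma_v^2 = identity
    assert all(abs(a - b) < 1e-12 for a, b in zip(x, y))

# the reflection for the root (1, -1) fixes the line x_1 = x_2, so it swaps coordinates
assert reflect([1.0, 2.0], [1.0, -1.0]) == [2.0, 1.0]
```
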
Define the Dunkl operators on $\mathcal{P}_V$, for $1 \le i \le N$, by
$$\mathcal{D}_i f(x,t) := \frac{\partial}{\partial x_i} f(x,t) + \sum_{v \in R_+} \kappa(v)\, \frac{f(x,t) - f(x\sigma_v, t\sigma_v)}{\langle x, v\rangle}\, v_i,$$
where $R_+$ is a fixed positive subsystem of $R$. For $u \in \mathbb{R}^N$ set $\mathcal{D}_u := \sum_{i=1}^N u_i \mathcal{D}_i$; there is an equivariance relation, for $u \in \mathbb{R}^N$, $w \in W$,
$$w\,\mathcal{D}_u = \mathcal{D}_{uw}\, w. \tag{1}$$
Ordinary (scalar) polynomials act by multiplication on $\mathcal{P}_V$. For $1 \le i, j \le N$ and $f \in \mathcal{P}_V$ the basic commutation rule is
$$\mathcal{D}_i (x_j f) - x_j \mathcal{D}_i f = \delta_{ij} f + 2\sum_{v \in R_+} \kappa(v)\, \frac{v_i v_j}{\langle v, v\rangle}\, \sigma_v f. \tag{2}$$
The abstract algebra generated by $\{x_i, \mathcal{D}_i : 1 \le i \le N\} \cup \mathbb{R}W$ with the commutation relations (2) and equivariance relations like (1) is called the rational Cherednik algebra of $W$ parametrized by $\kappa$, henceforth denoted by $\mathcal{A}_\kappa$. Then $\mathcal{P}_V$ is called the standard module of $\mathcal{A}_\kappa$ determined by the $W$-module $V$.
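The commutation rule (2) can be checked in the simplest case, the rank-one group $\mathbb{Z}_2$ acting on $\mathbb{R}$ with a single positive root; this is a scalar sketch of that simpler situation, not the vector-valued $B_2$ setting, and the coefficient-list representation is an illustrative choice.

```python
# Rank-one (N = 1, W = Z_2) check of the commutation rule:
# D(x f) - x (D f) = f + 2*kappa*(sigma f), where (sigma f)(x) = f(-x).
# Polynomials are coefficient lists: p[n] is the coefficient of x^n.

def dunkl(p, kappa):
    # D x^n = (n + kappa*(1 - (-1)^n)) x^(n-1)
    return [(n + kappa * (1 - (-1) ** n)) * p[n] for n in range(1, len(p))]

def mult_x(p):
    return [0.0] + list(p)

def reflect(p):
    return [(-1) ** n * c for n, c in enumerate(p)]

kappa = 0.7
p = [1.0, 2.0, -3.0, 0.5]                     # 1 + 2x - 3x^2 + 0.5 x^3
lhs = [a - b for a, b in zip(dunkl(mult_x(p), kappa),
                             mult_x(dunkl(p, kappa)))]
rhs = [a + 2 * kappa * b for a, b in zip(p, reflect(p))]
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```
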
We introduce symmetric bilinear $W$-invariant forms on $\mathcal{P}_V$. There is a $W$-invariant form $\langle\cdot,\cdot\rangle_\tau$ on $V$; it is unique up to multiplication by a constant, because $\tau$ is irreducible. The form is extended to $\mathcal{P}_V$ subject to
$$\langle x_i f(x,t),\, g(x,t)\rangle_\tau = \langle f(x,t),\, \mathcal{D}_i g(x,t)\rangle_\tau$$
for $f, g \in \mathcal{P}_V$ and $1 \le i \le N$. To be more specific, let $\{\xi_i(t) : 1 \le i \le n_\tau\}$ be a basis for $V$; any $f \in \mathcal{P}_V$ has a unique expression $f(x,t) = \sum_i f_i(x)\,\xi_i(t)$, where each $f_i(x)$ is a polynomial. The form satisfies $\langle wf, wg\rangle_\tau = \langle f, g\rangle_\tau$ for $w \in W$; this is a general result for standard modules of the rational Cherednik algebra, see [4]. The proof is based on induction on the degree and the eigenfunction decomposition of the operator $\sum_{i=1}^N x_i \mathcal{D}_i$. The Dunkl Laplacian is $\Delta_\kappa := \sum_{i=1}^N \mathcal{D}_i^2$; explicitly,
$$\Delta_\kappa f(x,t) = \Delta f(x,t) + \sum_{v \in R_+} \kappa(v)\left(2\,\frac{\langle \nabla f(x,t), v\rangle}{\langle x, v\rangle} - |v|^2\, \frac{f(x,t) - f(x\sigma_v, t\sigma_v)}{\langle x, v\rangle^2}\right),$$
where $\Delta$ and $\nabla$ denote the ordinary Laplacian and gradient, respectively (acting in the $x$-variables). Because $\tau$ is irreducible there are integers $c_\tau(v)$, constant on conjugacy classes of reflections (namely, values of the character of $\tau$), such that $\sum_{v \in R_+} \kappa(v)\,\tau(\sigma_v)$ acts on $V$ as a scalar, denoted $\gamma(\kappa;\tau)$. Motivated by the Gaussian inner product for scalar polynomials (the case $\tau = 1$), which is defined by
$$\langle f, g\rangle_G := c_\kappa \int_{\mathbb{R}^N} f(x)\, g(x) \prod_{v \in R_+} |\langle x, v\rangle|^{2\kappa(v)}\, e^{-|x|^2/2}\, dx,$$
where $c_\kappa$ is a normalizing (Macdonald--Mehta) constant, and which satisfies $\langle f, g\rangle_\tau = \langle e^{-\Delta_\kappa/2} f, e^{-\Delta_\kappa/2} g\rangle_G$, we define a bilinear Gaussian form on $\mathcal{P}_V$ by $\langle f, g\rangle_G := \langle e^{\Delta_\kappa/2} f, e^{\Delta_\kappa/2} g\rangle_\tau$.
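The relation $\langle f,g\rangle_\tau = \langle e^{-\Delta_\kappa/2}f, e^{-\Delta_\kappa/2}g\rangle_G$ can be verified in the rank-one scalar case, where the moments of the weight $|x|^{2\kappa}e^{-x^2/2}$ are explicit; this is a sketch under that simplification, with $f = g = x^2$, and all names are illustrative.

```python
import math

kappa = 0.3

def moment(m):
    # E[x^(2m)] for the normalized weight c_k |x|^(2 kappa) e^(-x^2/2) on R:
    # E[x^(2m)] = 2^m * Gamma(m + kappa + 1/2) / Gamma(kappa + 1/2)
    return 2 ** m * math.gamma(m + kappa + 0.5) / math.gamma(kappa + 0.5)

# Rank-one Dunkl Laplacian on x^2: Delta_k x^2 = 2(1 + 2 kappa), hence
# e^(-Delta_k/2) x^2 = x^2 - (1 + 2 kappa).
shift = 1 + 2 * kappa

# Gaussian side: E[(x^2 - shift)^2]
gauss = moment(2) - 2 * shift * moment(1) + shift ** 2

# tau side: <x^2, x^2>_tau = Delta_k x^2 = 2(1 + 2 kappa)
tau = 2 * (1 + 2 * kappa)

assert abs(gauss - tau) < 1e-12
```
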
Thus the multiplication operator $x_i$ is self-adjoint for this form (since $x_i = \mathcal{D}_i + \mathcal{D}_i^{*}$, the adjoint being taken with respect to $\langle\cdot,\cdot\rangle_G$). This suggests that the form may have an expression as an actual integral over $\mathbb{R}^N$, at least for some restricted set of the parameter values $\kappa(v)$. As in the scalar case, harmonic polynomials are involved in the analysis of the Gaussian form. For $f \in \mathcal{P}_V$,
$$\Delta_\kappa\big(|x|^2 f\big) = |x|^2 \Delta_\kappa f + 4\sum_{i=1}^N x_i \partial_i f + \big(2N + 4\gamma(\kappa;\tau)\big) f.$$
For $n = 0, 1, 2, \ldots$ let $\mathcal{P}_{V,n} := \{f \in \mathcal{P}_V : f(rx,t) = r^n f(x,t),\ \forall\, r \in \mathbb{R}\}$, the polynomials homogeneous of degree $n$, and let $\mathcal{H}_{V,\kappa,n} := \{f \in \mathcal{P}_{V,n} : \Delta_\kappa f = 0\}$, the harmonic homogeneous polynomials. As a consequence of the previous formula, for $m = 1, 2, 3, \ldots$ and $f \in \mathcal{H}_{V,\kappa,n}$ one obtains
$$\Delta_\kappa^m\big(|x|^{2m} f(x,t)\big) = 4^m\, m!\, \Big(\tfrac{N}{2} + \gamma(\kappa;\tau) + n\Big)_m f(x,t),$$
where $(a)_m := \prod_{i=1}^m (a + i - 1)$ is the Pochhammer symbol, and $(-n)_k = 0$ for $n = 0, \ldots, k-1$. Thus $\Delta_\kappa^k\big(|x|^{2m} f(x,t)\big) = 0$ for $k > m$. With the same proofs as for the scalar case [5, Theorem 5.1.15] one obtains, for generic $\kappa$,
$$\mathcal{P}_{V,n} = \bigoplus_{0 \le m \le n/2} |x|^{2m}\, \mathcal{H}_{V,\kappa,n-2m}. \tag{5}$$
From the definition of $\langle\cdot,\cdot\rangle_\tau$ it follows that $f \in \mathcal{P}_{V,m}$, $g \in \mathcal{P}_{V,n}$ and $m \neq n$ imply $\langle f, g\rangle_\tau = 0$. Also, if $f \in \mathcal{H}_{V,\kappa,m}$, $g \in \mathcal{H}_{V,\kappa,n}$ and $m \neq n$, then $\langle |x|^{2a} f, |x|^{2b} g\rangle_\tau = 0$ for any $a, b = 0, 1, 2, \ldots$: this follows from the previous statement when $2a + m \neq 2b + n$; otherwise $n = m + 2a - 2b$, and assuming $m < n$ (by symmetry of the form), $\langle |x|^{2a} f, |x|^{2b} g\rangle_\tau = \langle f, \Delta_\kappa^a |x|^{2b} g\rangle_\tau = 0$ because $a > b$. This shows that for generic $\kappa$ there is an orthogonal decomposition of $\mathcal{P}_V$ as the sum of the spaces $|x|^{2m} \mathcal{H}_{V,\kappa,n}$ over $m, n = 0, 1, 2, \ldots$. If $f, g \in \mathcal{H}_{V,\kappa,n}$ then $\langle |x|^{2m} f, |x|^{2m} g\rangle_\tau = 4^m\, m!\, \big(\tfrac{N}{2} + \gamma(\kappa;\tau) + n\big)_m \langle f, g\rangle_\tau$, so to find an orthogonal basis for $\mathcal{P}_{V,m}$ it suffices to find an orthogonal basis for each $\mathcal{H}_{V,\kappa,n}$. The decomposition formula (5) implies the dimensionality result $\dim \mathcal{H}_{V,\kappa,n} = \dim \mathcal{P}_{V,n} - \dim \mathcal{P}_{V,n-2}$ (valid for generic $\kappa$). We will need a lemma about integrating closed 1-forms. Consider an $N$-tuple $f = (f_1, \ldots, f_N) \in \mathcal{P}_V^N$ as a vector on which $W$ can act on the right. Say $f$ is a closed 1-form if $\mathcal{D}_i f_j - \mathcal{D}_j f_i = 0$ for all $i, j$.
Lemma 1. Suppose f is a closed 1-form and 1 ≤ j ≤ N then Proof . By the commutation relations (2) The calculation is finished with the use of (3).
Corollary 1. Suppose $f$ is a closed 1-form, homogeneous of degree $n$, and for some constant $\lambda_\kappa$ then

The values of $\kappa$ on the conjugacy classes $\{\sigma_{12}^{+}, \sigma_{12}^{-}\}$ and $\{\sigma_1, \sigma_2\}$ will be denoted by $k_0$ and $k_1$, respectively. We consider the unique 2-dimensional representation $\tau$ and set $V := \operatorname{span}\{t_1, t_2\}$.
The reflections act on this polynomial as follows. Here is the formula for $\mathcal{D}_1$ ($\mathcal{D}_2$ is similar). Since the matrices for the reflections all have trace zero, we find that $\gamma(\kappa;\tau) = 0$. We investigate the properties of $\langle\cdot,\cdot\rangle_\tau$ by constructing bases for each $\mathcal{H}_{V,\kappa,n}$; note $\dim \mathcal{H}_{V,\kappa,n} = 4$ for $n \ge 1$.
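The dimension count just quoted follows from the generic decomposition (5) with $N = 2$ and $\dim V = 2$; a short verification:

```latex
\dim\mathcal{P}_{V,n} = 2(n+1), \qquad
\dim\mathcal{H}_{V,\kappa,n}
  = \dim\mathcal{P}_{V,n} - \dim\mathcal{P}_{V,n-2}
  = 2(n+1) - 2(n-1) = 4 \quad (n \ge 2),
```

while for $n = 1$ one has $\dim\mathcal{H}_{V,\kappa,1} = \dim\mathcal{P}_{V,1} = 4$ directly (the span of $x_i t_j$, $1 \le i, j \le 2$).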
Definition 1. The polynomials p n,i ∈ P V,n for n ≥ 1 and 1 ≤ i ≤ 4 are given by for m ≥ 1.
Proposition 1. If n = 4m + 1 or 4m with m ≥ 1 then Proof . The formula is clearly valid for n = 1. The typical step in the inductive proof is because c 1 + c 2 = 2. If n is odd then = 3, otherwise = 1. The same argument works for p n+1,3 , p n+1,4 .
Proof. Using induction, one needs to show that the validity of the statements for $2m-1$ implies the validity for $2m$, and that the validity for $2m$ implies the validity for $2m+1$, for each $m \ge 1$. Suppose the statements hold for some $2m$. The types of $p_{2m+1,i}$ are easy to verify ($1 \le i \le 4$), and by similar calculations $\sigma_{12}^{+} p_{2m+1,3} = -p_{2m+1,3}$ and $\sigma_{12}^{+} p_{2m+1,4} = -p_{2m+1,4}$. Now suppose the statements hold for some $2m-1$. As before, the types of $p_{2m,i}$ are easy to verify. Since $(\sigma_{12}^{+})^2 = 1$, this finishes the inductive proof.

Proof. These polynomials are eigenfunctions of the (self-adjoint) reflections $\sigma_1$, $\sigma_{12}^{+}$ with different pairs of eigenvalues.
Proof. By the above formulae, $\mathcal{D}_1^2 p_{n,i} = -\mathcal{D}_2^2 p_{n,i}$ in each case. It suffices to check the even degree cases, because $\Delta_\kappa f = 0$ implies $\Delta_\kappa \mathcal{D}_1 f = 0$.
Because $p_{2m,1}$ and $p_{2m,3}$ are both of type OE, one can use an appropriate self-adjoint operator to prove orthogonality. Indeed, let $U_{12} := \sigma_{12}^{+}(x_2 \mathcal{D}_1 - x_1 \mathcal{D}_2)$, which is self-adjoint for the form $\langle\cdot,\cdot\rangle_\tau$.
Proof. This is a simple verification; for example, $\sigma_{12}^{+} p_{2m,3} = -p_{2m,4}$.

Proof. The fact that $p_{n,i} \neq 0$ for all $n, i$ (Proposition 1) and the eigenvector properties from Propositions 2 and 3 imply linear independence for each set $\{p_{n,i} : 1 \le i \le 4\}$.
These norm results are used to analyze the representation of the rational Cherednik algebra $\mathcal{A}_\kappa$ on $\mathcal{P}_V$ for arbitrary parameter values. For fixed $k_0, k_1$ the radical $\operatorname{rad}_V(k_0, k_1)$ is the subspace of $f \in \mathcal{P}_V$ with $\langle f, g\rangle_\tau = 0$ for all $g \in \mathcal{P}_V$. The radical is an $\mathcal{A}_\kappa$-module, and the representation is called unitary if the form $\langle\cdot,\cdot\rangle_\tau$ is positive-definite on $\mathcal{P}_V/\operatorname{rad}_V(k_0, k_1)$. By Proposition 4, $\{|x|^{2m} p_{n,i}\}$ (with $m, n \in \mathbb{N}_0$, $1 \le i \le 4$, except $1 \le i \le 2$ when $n = 0$) is a basis for $\mathcal{P}_V$ for any parameter values (see formula (5)).

The reproducing kernel
In the $\tau = 1$ setting there is a ("Dunkl") kernel $E(x,y)$ which satisfies $\mathcal{D}_i^{(x)} E(x,y) = y_i E(x,y)$, $E(x,y) = E(y,x)$, and $\langle E(\cdot, y), p\rangle_1 = p(y)$ for any polynomial $p$. We show there exists such a function in this $B_2$-setting which is real-analytic in its arguments, provided $\frac{1}{2} \pm k_0 \pm k_1 \notin \mathbb{Z}$. This kernel takes values in $V \otimes V$; for notational convenience we will use expressions of the form $\sum_{i=1}^{2}\sum_{j=1}^{2} f_{ij}(x,y)\, s_i t_j$, where each $f_{ij}(x,y)$ is a polynomial in $x_1, x_2, y_1, y_2$ (technically, we should write $s_i \otimes t_j$ for the basis elements of $V \otimes V$). For a polynomial $f(x) = f_1(x)t_1 + f_2(x)t_2$ let $f(y)^{*}$ denote $f_1(y)s_1 + f_2(y)s_2$. The kernel $E$ is defined as a sum of terms of the form $\frac{1}{\nu(p_{n,i})}\, p_{n,i}(y)^{*}\, p_{n,i}(x)$; by the orthogonality relations (for $m, n = 0, 1, 2, \ldots$ and $1 \le i, j \le 4$)
$$\Big\langle \frac{1}{\nu(p_{n,i})}\, p_{n,i}(y)^{*}\, p_{n,i},\ p_{m,j}\Big\rangle_\tau = \delta_{mn}\,\delta_{ij}\, p_{n,i}(y)^{*}.$$
We will find upper bounds on $\{p_{n,i}\}$ and lower bounds on $\{\nu(p_{n,i})\}$ in order to establish convergence properties of $E(x,y)$. For $u \in \mathbb{R}$ set $d(u) := \min_{m \in \mathbb{Z}} \left|u + \tfrac{1}{2} + m\right|$.
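The function $d$ is a distance-to-nearest-integer computation; a small sketch (the implementation below is illustrative):

```python
def d(u):
    # d(u) = min over integers m of |u + 1/2 + m|,
    # i.e. the distance from u + 1/2 to the nearest integer
    r = (u + 0.5) % 1.0
    return min(r, 1.0 - r)

assert d(0.0) == 0.5             # 1/2 is half a unit away from the integers
assert d(0.5) == 0.0             # u + 1/2 lands on an integer
assert abs(d(0.3) - 0.2) < 1e-12
assert abs(d(-0.3) - 0.2) < 1e-12
```
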
The condition that $\nu(p_{n,i}) \neq 0$ for all $n \ge 1$, $1 \le i \le 4$ is equivalent to $d(k_+)\, d(k_-) > 0$ (where $k_\pm := k_0 \pm k_1$), since each factor in the numerator of $\Pi$ is of the form $\frac{1}{4} + \frac{1}{2}(\pm k_0 \pm k_1) + m$ or $\frac{3}{4} + \frac{1}{2}(\pm k_0 \pm k_1) + m$ for $m = 0, 1, 2, \ldots$. Moreover $\mathcal{D}_i^{(x)} E_n(x,y) = y_i E_{n-1}(x,y)$ for $i = 1, 2$.

Proof. By the hypothesis on $k_0, k_1$ the polynomial $E_n(x,y)$ exists, and is a rational function of $k_0, k_1$. The reproducing property is a consequence of the orthogonal decomposition $\mathcal{P}_{V,n} = \bigoplus_{0 \le m \le n/2} |x|^{2m} \mathcal{H}_{V,\kappa,n-2m}$. For the second part, suppose $f \in \mathcal{P}_{V,n-1}$; then the claimed relation follows. This is an algebraic (rational) relation which holds on an open set of its arguments and is thus valid for all $k_0, k_1$ except for the poles of $E_n$.
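In the rank-one scalar analogue the intertwining relation $\mathcal{D}_i^{(x)} E_n = y_i E_{n-1}$ can be checked directly on a truncated kernel series; everything below (`c`, `nu`, `E_trunc`) is an illustrative sketch of that simpler case, not the $B_2$ vector-valued kernel.

```python
import math

# Rank-one scalar analogue: E(x,y) = sum_n (xy)^n / nu(n), where
# nu(n) = c(1)*...*c(n) and c(n) = n + kappa*(1 - (-1)^n),
# since D x^n = c(n) x^(n-1); then D^(x) E_n = y * E_{n-1} termwise.

def c(n, kappa):
    return n + kappa * (1 - (-1) ** n)

def nu(n, kappa):
    out = 1.0
    for j in range(1, n + 1):
        out *= c(j, kappa)
    return out

def E_trunc(x, y, kappa, N):
    return sum((x * y) ** n / nu(n, kappa) for n in range(N + 1))

# kappa = 0 reduces to the classical exponential kernel e^{xy}
assert abs(E_trunc(0.7, 1.3, 0.0, 60) - math.exp(0.7 * 1.3)) < 1e-9

# termwise intertwining: D^(x) (x^n y^n / nu(n)) = y * (xy)^(n-1) / nu(n-1)
x, y, n, kappa = 0.7, 1.3, 5, 0.4
lhs = c(n, kappa) * x ** (n - 1) * y ** n / nu(n, kappa)
rhs = y * (x * y) ** (n - 1) / nu(n - 1, kappa)
assert abs(lhs - rhs) < 1e-12
```
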
Proof. This follows by an argument similar to the previous one.
By Stirling's formula we obtain growth bounds. For the purpose of analyzing lower bounds on $|\nu(p_{n,i})|$ we consider an infinite product.
Proof. The product converges to an entire function by the comparison test: $\sum_n \frac{|z|^2}{(u+n)^2} < \infty$ (this means that the partial products converge to a nonzero limit, unless one of the factors vanishes). Suppose that $|z| < u$; then $\operatorname{Re}(u \pm z) \ge u - |z| > 0$, and the claim follows by Stirling's formula. The entire function $\omega(u; \cdot)$ agrees with the latter expression (in $\Gamma$) on an open set in $\mathbb{C}$, hence for all $z$.

2. If $n \equiv 0, 1 \pmod 4$ and $i = 3, 4$, or $n \equiv 2, 3 \pmod 4$ and $i = 1, 2$, then the analogous estimate holds.

Proof. The statements follow from formulae (8), (9) and (10).
For $a \in \mathbb{R}$ with $a + \frac{1}{2} \notin \mathbb{Z}$, let $((a))$ denote the nearest integer to $a$.
As in the scalar ($\tau = 1$) theory, the function $E(x,y)$ can be used to define a generalized Fourier transform.

The Gaussian-type weight function
In this section we use vector notation for $\mathcal{P}_V$: $f(x) = (f_1(x), f_2(x))$ for the previous $f_1(x)t_1 + f_2(x)t_2$. The action of $W$ is written as $(wf)(x) = f(xw)w^{-1}$. We propose to construct a $2\times 2$ positive-definite matrix function $K(x)$ on $\mathbb{R}^2$ such that
$$\langle f, g\rangle_G = \int_{\mathbb{R}^2} f(x)\, K(x)\, g(x)^T\, e^{-|x|^2/2}\, dx,$$
with the restriction $|k_0 \pm k_1| < \frac{1}{2}$; the need for this was demonstrated in the previous section. There are two necessary algebraic conditions, (11) and (12), holding for all $f, g \in \mathcal{P}_V$. We will assume that $K$ is differentiable on $\Omega$, the complement in $\mathbb{R}^2$ of the mirrors $\{x : x_1 x_2 (x_1^2 - x_2^2) = 0\}$. The integral formula is defined if $K$ is integrable, but for the purpose of dealing with the singularities implicit in $\mathcal{D}_i$ we introduce the region $\Omega_\varepsilon$, consisting of the points of $\Omega$ at distance greater than $\varepsilon$ from the mirrors. Condition (11) implies, for each $w \in W$ and $f, g \in \mathcal{P}_V$, the corresponding invariance of the integral; for the second step change the variable from $x$ to $xw^{-1}$ (note $w^T = w^{-1}$). Thus we impose the condition
$$K(xw) = w^{-1} K(x)\, w, \qquad w \in W.$$
This implies that it suffices to determine $K$ on the fundamental region $C_0 := \{x : 0 < x_2 < x_1\}$ and then extend to all of $\Omega$ by using this formula. Set $\partial_i := \frac{\partial}{\partial x_i}$. Recall that in the scalar situation the analogous weight function is $|x_1^2 - x_2^2|^{2k_0}\, |x_1 x_2|^{2k_1}\, e^{-|x|^2/2}$.

Start by solving the system (13) for a $2\times 2$ matrix function $L$ on $C_0$, extended to $\Omega$ by the transformation rule above (from the facts that $w^{-1}\sigma_v w = \sigma_{vw}$ and $\kappa(vw) = \kappa(v)$ it follows that equation (13) is satisfied on all of $\Omega$), and set $K := L^T L$, with the result that $K$ is positive-semidefinite and $\sigma_v K(x\sigma_v) = K(x)\,\sigma_v$ (note $\sigma_v^T = \sigma_v$). Then for $f, g \in \mathcal{P}_V$ (and $i = 1, 2$) we verify the second necessary condition (12). In the second part, for each $v \in R_+$ change the variable to $x\sigma_v$; then the numerator is invariant, because $\sigma_v K(x\sigma_v) = K(x)\sigma_v$ and $\langle x\sigma_v, v\rangle = -\langle x, v\rangle$, and thus each term vanishes (note $\Omega_\varepsilon$ is $W$-invariant). So establishing the validity of the inner product formula reduces to showing
$$\lim_{\varepsilon\to 0^{+}} \int_{\Omega_\varepsilon} \partial_i\big\{f(x)\, K(x)\, g(x)^T e^{-|x|^2/2}\big\}\, dx = 0, \qquad i = 1, 2.$$
By the polarization identity it suffices to prove this for $g = f$. Set $Q(x) := f(x)\, K(x)\, f(x)^T e^{-|x|^2/2}$. By symmetry ($\sigma_{12}^{+}\mathcal{D}_1 \sigma_{12}^{+} = \mathcal{D}_2$) it suffices to prove the formula for $i = 2$.
Consider the part of $\Omega_\varepsilon$ in $\{x_1 > 0\}$ as the union of $\{x : \varepsilon < |x_2| < x_1 - \varepsilon\}$ and $\{x : \varepsilon < x_1 < |x_2| - \varepsilon\}$ (with vertices $(2\varepsilon, \pm\varepsilon)$ and $(\varepsilon, \pm 2\varepsilon)$, respectively). In the iterated integral, evaluate the inner integral over $x_2$. See Fig. 1 for a diagram of $\Omega_\varepsilon$ and a typical inner integral.
By differentiability the inner integrals are controlled in terms of $h_{ij}(x) := f_i(x) f_j(x) e^{-|x|^2/2}$ for $1 \le i, j \le 2$; the factors $C_1, C_2$ depend on $x_1$, but there is a global bound $|C_i(x_1)| < C_0$ depending only on $f$, because of the exponential decay. Thus the behavior of $K(x_1, \varepsilon)$ and $K(x_1, x_1 - \varepsilon)$ is crucial in analyzing the limit as $\varepsilon \to 0^{+}$, near the lines $x_2 = -x_1, 0, x_1$. It suffices to consider the fundamental region $0 < x_2 < x_1$: the edge $\{x_2 = 0\}$ corresponds to $\sigma_2$, and the edge $\{x_1 - x_2 = 0\}$ corresponds to $\sigma_{12}^{+}$. To get zero limits as $\varepsilon \to 0^{+}$, some parts rely on the uniform continuity of $h_{ij}$ and bounds on certain entries of $K$; the other parts impose various conditions on $K$ near the edges, as described above.

We turn to the solution of the system (13). A form of the solutions can be obtained by computer algebra, and then a desirable solution can be verified. With $s := u^2$ the equations become a hypergeometric system, and the solution regular at $s = 0$ is expressed in terms of $F$. (We use $F$ to denote the hypergeometric function ${}_2F_1$; it is the only type appearing here.) The verification uses two hypergeometric identities (for arbitrary parameters $a, b, c$ with $c \notin -\mathbb{N}_0$). To get the other solutions we use the symmetry of the system: replace $k_1$ by $-k_1$ and interchange $f_1$ and $f_2$. We thus have a fundamental solution $L(u)$. Observe that $\lim_{u\to 0^{+}} \det L(u) = 1$, and thus $\det L(u) = 1$ for all $u$. We can write $L$ in a form in which each $c_{ij}$ is even in $x_2$ and is real-analytic in $\{0 < |x_2| < x_1\}$; in fact $L(x)$ is thus defined on $C_0 \cup C_0\sigma_2$. It follows that $K = (ML)^T ML$ (for a constant matrix $M$) is integrable near $\{x_2 = 0\}$ if $|k_1| < \frac{1}{2}$, and $\lim_{\varepsilon\to 0^{+}} K(x_1, \varepsilon)_{12} = 0$ exactly when $M^T M$ is diagonal. The standard identity [8, 15.8] shows that there is a hidden symmetry for $k_0$, and similar equations hold for the other entries of $L$.
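Hypergeometric identities of the kind used in this verification are easy to spot-check numerically with a truncated ${}_2F_1$ series; the paper's specific identities are not reproduced here, so Euler's transformation serves as a representative example (the series implementation is a sketch, valid for $|z| < 1$).

```python
def hyp2f1(a, b, c, z, terms=400):
    # truncated Gauss series for 2F1(a, b; c; z), |z| < 1
    term, total = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
    return total

a, b, c, z = 0.3, 0.45, 1.2, 0.35
lhs = hyp2f1(a, b, c, z)
# Euler's transformation: F(a,b;c;z) = (1-z)^(c-a-b) F(c-a, c-b; c; z)
rhs = (1 - z) ** (c - a - b) * hyp2f1(c - a, c - b, c, z)
assert abs(lhs - rhs) < 1e-10
```
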
With $M^T M$ diagonal we obtain the required limits (note $x_1 > 2\varepsilon$ in the region). By the exponential decay we can assume that the double integral is over a bounded box. Next we analyze the behavior of this solution in a neighborhood of $t = 1$, that is, the ray $\{(x_1, x_1) : x_1 > 0\}$. The identity [8, 15.10.21] is used; the latter equation follows from $\Gamma(2a)/\Gamma(a) = 2^{2a-1}\,\Gamma\big(a + \tfrac{1}{2}\big)/\sqrt{\pi}$ (the duplication formula). We will need the identity proved by use of
$$\Gamma\Big(\tfrac{1}{2} + a\Big)\,\Gamma\Big(\tfrac{1}{2} - a\Big) = \frac{\pi}{\cos \pi a}.$$
Thus by use of identity (16), and transforming again using (15), we obtain the required expressions. All the hypergeometric functions we use are of one form, and it is convenient to introduce
$$H(a, b; s) := F\Big(a,\ a + b + \tfrac{1}{2};\ 2a + 1;\ 1 - s\Big).$$
Also $\det K = d_1 d_2 = c_1^2 - \tan^2 \pi k_0\, \tan^2 \pi k_1$. The expressions for $K_{ij}$ can be rewritten somewhat by using the transformations (16). Observe that the conditions $-\frac{1}{2} < \pm k_0 \pm k_1 < \frac{1}{2}$ are needed for $d_1, d_2 > 0$. The normalization constant is to be determined from the condition $\int_{\mathbb{R}^2} K(x)_{11}\, e^{-|x|^2/2}\, dx = 1$. By the homogeneity of $K$ this is equivalent to evaluating $\int_{-\pi}^{\pi} K(\cos\theta, \sin\theta)_{11}\, d\theta$ (or $\int_0^{\pi/4} (K_{11} + K_{22})(\cos\theta, \sin\theta)\, d\theta$); the integral looks difficult because $K_{11}$ involves squares of hypergeometric functions with argument $\tan^2\theta$. Numerical experiments suggest the following conjecture for the normalizing constant:
$$c = \frac{\cos \pi k_0\, \cos \pi k_1}{2\pi}.$$
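The two classical gamma identities invoked above (the duplication formula and $\Gamma(\frac12+a)\Gamma(\frac12-a) = \pi/\cos\pi a$) can be checked numerically:

```python
import math

# duplication: Gamma(2a)/Gamma(a) = 2^(2a-1) * Gamma(a + 1/2) / sqrt(pi)
for a in (0.23, 0.4, 1.7):
    lhs = math.gamma(2 * a) / math.gamma(a)
    rhs = 2 ** (2 * a - 1) * math.gamma(a + 0.5) / math.sqrt(math.pi)
    assert abs(lhs - rhs) < 1e-9 * abs(rhs)

# reflection-type: Gamma(1/2 + a) * Gamma(1/2 - a) = pi / cos(pi a), |a| < 1/2
for a in (0.1, 0.23, 0.4):
    lhs = math.gamma(0.5 + a) * math.gamma(0.5 - a)
    rhs = math.pi / math.cos(math.pi * a)
    assert abs(lhs - rhs) < 1e-9 * abs(rhs)
```
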
Using the same arguments as in the scalar case (see [5, Section 5.7]) we define the Fourier transform (for suitably integrable functions $f$). To adapt $E(x,y)$ to vector notation, write $E(x,y)$ in matrix form. For $m, n \ge 0$ and $1 \le i \le 4$ let $\varphi_{m,n,i}(x) := L_m^{(n)}\big(|x|^2\big)\, p_{n,i}(x)\, e^{-|x|^2/2}$, with $L_m^{(n)}$ a Laguerre polynomial.
This establishes a Plancherel theorem for $F$ by use of the density (from Hamburger's theorem) of $\operatorname{span}\{\varphi_{m,n,i}\}$ in $L^2\big(K(x)\,dx,\ \mathbb{R}^2\big)$.

Closing remarks
The well-developed theory of the hypergeometric function allowed us to find the weight function satisfying both a differential equation and geometric conditions. The analogous problem can be stated for any real reflection group, and there are some known results about the differential system (13) (see [2, 3]); it appears that some new insights are needed to cope with the geometric conditions. The fact that the Gaussian inner product $\langle\cdot,\cdot\rangle_G$ is well-defined supports the speculation that Gaussian-type weight functions exist in general settings. However, it has not been shown that $K$ can always be produced as a product $L^T L$, and the effect of the geometry of the mirrors (walls) on the solutions of the differential system is subtle, as seen in the $B_2$ case.