The Smallest Singular Values and Vector-Valued Jack Polynomials

There is a space of vector-valued nonsymmetric Jack polynomials associated with any irreducible representation of a symmetric group. Singular polynomials for the smallest singular values are constructed in terms of the Jack polynomials. The smallest singular values bound the region of positivity of the bilinear symmetric form for which the Jack polynomials are mutually orthogonal. As background there are some results about general finite reflection groups and singular values in the context of standard modules of the rational Cherednik algebra.


Introduction
Suppose $W$ is the finite reflection group generated by the reflections in the reduced root system $R$. This means $R$ is a finite set of nonzero vectors in $\mathbb{R}^N$ such that $u,v \in R$ implies $\mathbb{R}u \cap R = \{\pm u\}$ and $v\sigma_u \in R$, where $\sigma_u$ is the reflection $x \mapsto x - 2\frac{\langle x,u\rangle}{\langle u,u\rangle}u$ and $\langle\cdot,\cdot\rangle$ is the standard inner product. For a fixed vector $b_0$ such that $\langle u,b_0\rangle \neq 0$ for all $u \in R$ there is the decomposition $R = R_+ \cup R_-$ with $R_+ := \{u \in R : \langle u,b_0\rangle > 0\}$. The set $R_+$ serves as index set for the reflections in $W$. The group $W$ is represented on the space $\mathcal{P}$ of polynomials in $x = (x_1,\ldots,x_N)$ by $wp(x) = p(xw)$ for $w \in W$. Denote $\mathbb{N}_0 := \{0,1,2,\ldots\}$ and for $\alpha \in \mathbb{N}_0^N$ let $|\alpha| := \sum_{i=1}^N \alpha_i$ and $x^\alpha := \prod_{i=1}^N x_i^{\alpha_i}$, a monomial. Then $\mathcal{P} := \operatorname{span}\{x^\alpha : \alpha \in \mathbb{N}_0^N\}$ and $\mathcal{P}_n := \operatorname{span}\{x^\alpha : \alpha \in \mathbb{N}_0^N, |\alpha| = n\}$, the space of polynomials homogeneous of degree $n$. Let $\kappa$ be a parameter (called a multiplicity function), a function on $R$ constant on $W$-orbits. For indecomposable groups $W$ there are at most two orbits in $R$ (two for types $B_N$, $F_4$, $I_2(2k)$; one for types $A_N$, $D_N$, $E_m$, $I_2(2k+1)$). The Dunkl operators $\{\mathcal{D}_i : 1 \le i \le N\}$ are defined by
$$\mathcal{D}_i p(x) := \frac{\partial p}{\partial x_i}(x) + \sum_{v \in R_+} \kappa(v)\,\frac{p(x) - p(x\sigma_v)}{\langle x,v\rangle}\,v_i.$$
Then $\mathcal{D}_i\mathcal{D}_j = \mathcal{D}_j\mathcal{D}_i$ for $1 \le i,j \le N$, $\mathcal{D}_i$ maps $\mathcal{P}_n$ to $\mathcal{P}_{n-1}$, and the Laplacian is $\Delta_\kappa := \sum_{i=1}^N \mathcal{D}_i^2$. The abstract algebra generated by $W$, the $\mathcal{D}_i$, and multiplication by the $x_i$ ($1 \le i \le N$) acting on $\mathcal{P}$ is the rational Cherednik algebra. There are two $W$-invariant bilinear symmetric forms of interest here, denoted $\langle\cdot,\cdot\rangle_\kappa$ and $\langle\cdot,\cdot\rangle_{\kappa,G}$. The first one satisfies $\langle\mathcal{D}_i f, g\rangle_\kappa = \langle f, x_i g\rangle_\kappa$ for all $i$ and $f,g \in \mathcal{P}$, and $\langle f,g\rangle_\kappa = 0$ if $f,g$ are homogeneous of different degrees; also $\langle 1,1\rangle_\kappa = 1$ and $\langle wf,wg\rangle_\kappa = \langle f,g\rangle_\kappa$ for $w \in W$. The Gaussian form is derived from the first one by $\langle f,g\rangle_{\kappa,G} := \langle e^{\Delta_\kappa/2}f, e^{\Delta_\kappa/2}g\rangle_\kappa$. This form satisfies $\langle\mathcal{D}_i f, g\rangle_{\kappa,G} = \langle f, (x_i - \mathcal{D}_i)g\rangle_{\kappa,G}$ for all $i$, and thus multiplication by $x_i$ is self-adjoint because $\langle f, x_i g\rangle_{\kappa,G} = \langle\mathcal{D}_i f, g\rangle_{\kappa,G} + \langle f, \mathcal{D}_i g\rangle_{\kappa,G}$.
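The commutativity $\mathcal{D}_i\mathcal{D}_j = \mathcal{D}_j\mathcal{D}_i$ can be checked symbolically. The following is a minimal sketch for the symmetric-group (type $A$) specialization, where $v = e_i - e_j$ gives $\langle x,v\rangle = x_i - x_j$; it assumes sympy, and the helper names `dunkl` and `swap` are illustrative, not from the paper.

```python
# Minimal sketch of the type-A (symmetric group) scalar Dunkl operator, using sympy.
import sympy as sp

N = 3
kappa = sp.Symbol('kappa')
X = sp.symbols('x1:%d' % (N + 1))

def swap(p, i, j):
    """p(x sigma_ij): exchange the variables x_i and x_j."""
    return p.subs({X[i]: X[j], X[j]: X[i]}, simultaneous=True)

def dunkl(p, i):
    """D_i p = d/dx_i p + kappa * sum_{j != i} (p - p(x sigma_ij))/(x_i - x_j)."""
    out = sp.diff(p, X[i])
    for j in range(N):
        if j != i:
            # the numerator is divisible by x_i - x_j, so cancel() returns a polynomial
            out += kappa * sp.cancel((p - swap(p, i, j)) / (X[i] - X[j]))
    return sp.expand(out)

p = X[0]**2 * X[1]
lhs = dunkl(dunkl(p, 0), 1)   # D_2 D_1 p
rhs = dunkl(dunkl(p, 1), 0)   # D_1 D_2 p
```

One also sees directly that `dunkl` lowers the degree by one, as stated above.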
For certain constant values of $\kappa$ the Gaussian form is realized as an integral with respect to a finite positive measure on $\mathbb{R}^N$; in fact
$$\langle f,g\rangle_{\kappa,G} = c_\kappa \int_{\mathbb{R}^N} f(x)\,g(x) \prod_{v \in R_+} |\langle x,v\rangle|^{2\kappa(v)}\, e^{-|x|^2/2}\, dm_N(x),$$
where $m_N$ is Lebesgue measure on $\mathbb{R}^N$ (see [2, Thm. 3.10]). The constant $c_\kappa$ is a normalizing constant to match $\langle 1,1\rangle_{\kappa,G} = 1$. The explanation of the value of $c_\kappa$ is in terms of the fundamental degrees of $W$. By a theorem of Chevalley the ring of $W$-invariant polynomials is generated by $N$ algebraically independent homogeneous polynomials of degrees $d_1 \le d_2 \le \cdots \le d_N$ (generally "$<$" holds), and these are the fundamental degrees (see [10, Sec. 3.5]). In the one-orbit case they satisfy the Macdonald–Mehta formula
$$\frac{1}{c_\kappa} = c \prod_{i=1}^N \frac{\Gamma(1+d_i\kappa)}{\Gamma(1+\kappa)},$$
where $c$ is independent of $\kappa$. There is a version of this for the $B_N$ and $F_4$ types. Etingof [6, Thm. 3.1] gave a proof of the formula valid for all finite reflection groups. The integral shows that the measure is finite and positive for $\kappa > -\frac{1}{d_N}$. This number appears in another context. Suppose for some specific rational value of $\kappa$ in the one-orbit case, or some linear equation satisfied by the values $\kappa(v)$, there exists a nonconstant polynomial $p$ for which $\mathcal{D}_i p = 0$ for $1 \le i \le N$; then $p$ is called a singular polynomial and $\kappa$ is a singular value. We can assume that $p$ is homogeneous. In this case $\langle x^\alpha, p\rangle_\kappa = \langle 1, \prod_{i=1}^N \mathcal{D}_i^{\alpha_i} p\rangle_\kappa = 0$ for all $\alpha \in \mathbb{N}_0^N$ with $\alpha \neq (0,\ldots,0)$, and thus $\langle f,p\rangle_\kappa = 0$ for all $f \in \mathcal{P}$. Furthermore $\Delta_\kappa p = 0$, implying $e^{\Delta_\kappa/2}p = p$ and $\langle p,p\rangle_{\kappa,G} = 0$. It follows that $\kappa \le -\frac{1}{d_N}$ (taking $\kappa$ constant). In fact the smallest (in absolute value) singular value is indeed $-\frac{1}{d_N}$ ([1, Thm. 4.9]). The theory can be extended to polynomials taking values in modules of $W$. Suppose $\tau$ is an irreducible orthogonal representation of $W$ on a (finite-dimensional) real vector space $V$. There is a representation of $W$ on $\mathcal{P}_\tau := \mathcal{P} \otimes V$ defined to be the linear extension of $w(p(x) \otimes u) := p(xw) \otimes \tau(w)u$. The associated Dunkl operators are the linear extension of
$$\mathcal{D}_i(p(x) \otimes u) := \frac{\partial p}{\partial x_i}(x) \otimes u + \sum_{v \in R_+} \kappa(v)\,\frac{p(x)-p(x\sigma_v)}{\langle x,v\rangle}\,v_i \otimes \tau(\sigma_v)u.$$
The two forms can be defined just as in the scalar case, and the definition of singular polynomials is the same.
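A concrete instance of the bound: for $W = \mathcal{S}_N$ acting on $\mathbb{R}^N$ (type $A_{N-1}$, largest degree $d_N = N$) the degree-one polynomial $p = x_1 - \frac{1}{N}(x_1+\cdots+x_N)$ satisfies $\mathcal{D}_i p = 0$ at $\kappa = -\frac{1}{N} = -\frac{1}{d_N}$, which can be verified by a direct computation. A sketch, assuming sympy; the helper names are illustrative.

```python
# Sketch: for W = S_N (type A_{N-1}, d_N = N), the polynomial
# p = x_1 - (x_1 + ... + x_N)/N is singular at kappa = -1/N: every D_i p = 0.
import sympy as sp

N = 3
kappa = sp.Rational(-1, N)          # the smallest singular value -1/d_N
X = sp.symbols('x1:%d' % (N + 1))

def swap(p, i, j):
    return p.subs({X[i]: X[j], X[j]: X[i]}, simultaneous=True)

def dunkl(p, i):
    out = sp.diff(p, X[i])
    for j in range(N):
        if j != i:
            out += kappa * sp.cancel((p - swap(p, i, j)) / (X[i] - X[j]))
    return sp.expand(out)

p = X[0] - sum(X) / N
images = [dunkl(p, i) for i in range(N)]   # all three vanish identically
```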
The interesting question is: for which $\kappa$ is the Gaussian form given by a positive measure? Shelley-Abrahamson [12] proved there is a small interval about $\kappa = 0$ for which this occurs. It is the purpose of this note to show that this neighborhood is bounded by the smallest singular values, and to construct vector-valued Jack polynomials which specialize to singular polynomials in the case of the symmetric groups. In this situation the representation is determined by a partition $\tau$ of $N$ and the smallest singular values are $\pm\frac{1}{h_\tau}$, where $h_\tau$ is the longest hook-length of the Ferrers diagram of $\tau$ (see Etingof and Stoica [7, Sect. 5]). There are two ways of finding singular polynomials: either define them directly, or describe the nonsymmetric Jack polynomials which become singular when specialized to the appropriate parameter value. Feigin and Silantyev [8] found explicit formulas for all singular polynomials which span a $W$-module isomorphic to the reflection representation of $W$. We will construct the lowest-degree singular polynomials for the exterior powers of the reflection representation of $W$, and then concentrate on Jack polynomials.
The presentation starts with the result on the positivity of the Gaussian form, then the definition and properties of P τ , the nonsymmetric Jack polynomials, results about the action of D i and the construction of the singular polynomials. The theory of vector-valued nonsymmetric Jack polynomials, originated by Griffeth [9], allows detailed analyses of P τ .

Region of Positivity of the Gaussian Form
Fix an irreducible representation $\tau$ of $W$. The form $\langle\cdot,\cdot\rangle_\kappa$ is normalized by $\langle 1 \otimes u, 1 \otimes u'\rangle_\kappa = \langle u,u'\rangle$ for $u,u' \in V$. The following is due to Shelley-Abrahamson [12]: there is an interval $\Omega$ of parameter values containing $0$ on which the form $\langle\cdot,\cdot\rangle_\kappa$ is positive-semidefinite. This result includes the existence of a matrix measure on $\mathbb{R}^N$ which realizes the Gaussian form.

Lemma 1. Suppose for some $\kappa \in \Omega$ there is a polynomial $f \in \mathcal{P}_\tau$ such that $f \neq 0$ and $\langle f,f\rangle_\kappa = 0$; then the space $X_f := \operatorname{span}\{wf : w \in W\}$ can be decomposed as a sum of irreducible $W$-modules, and $g \in X_f$ implies $\langle g,p\rangle_\kappa = 0$ for all $p \in \mathcal{P}_\tau$.
Proof. The decomposability is a group-theoretic property. By the $W$-invariance of $\langle\cdot,\cdot\rangle_\kappa$ it follows that $\langle wf,wf\rangle_\kappa = 0$ for all $w$. The Cauchy–Schwarz inequality for $\langle\cdot,\cdot\rangle_\kappa$ is valid because $\kappa \in \Omega$, thus $|\langle wf,p\rangle_\kappa|^2 \le \langle wf,wf\rangle_\kappa\,\langle p,p\rangle_\kappa = 0$ for all $w \in W$ and $p \in \mathcal{P}_\tau$. In particular $\langle w_1f + w_2f, p\rangle_\kappa = 0$ for any $w_1,w_2$, and by linearity $g \in X_f$ implies $\langle g,p\rangle_\kappa = 0$.
For the sake of simplicity we restrict our consideration to groups with a single conjugacy class of reflections; it is not hard to extend the arguments to the other groups. We will show that the set of singular values is a subset of a set of rational numbers with no accumulation point, that is, there is a minimum nonzero distance between elements. The key fact is that the eigenvalues of $\sum_{v \in R_+}\sigma_v$, acting by the right regular representation on $\mathbb{R}W$, are integers.

Proof. The basic idea is that the solutions of the characteristic equation are algebraic integers. The details are in [5, p. 194].
Because the right regular representation of $W$ on $\mathbb{R}W$ is a direct sum of all irreducible representations of $W$, the integer property of eigenvalues applies to $\sum_{v \in R_+}\rho(\sigma_v)$ for any irreducible representation $\rho$. Since $\rho$ is irreducible and $\sum_{v \in R_+}\rho(\sigma_v)$ is central, by Schur's lemma it is a scalar; there is just one eigenvalue, denoted by $\varepsilon(\rho)$.
If $f$ is singular and homogeneous of degree $n$ then $\kappa = \frac{n}{\varepsilon(\rho)-\varepsilon(\tau)}$, where $\rho$ is some irreducible representation of $W$.
Proof. Let $p \in \mathcal{P}_n$ and $u \in V$; then
$$\sum_{i=1}^N x_i\mathcal{D}_i(p \otimes u) = n\,p \otimes u + \kappa\sum_{v \in R_+}\bigl(p \otimes \tau(\sigma_v)u - \sigma_v(p \otimes u)\bigr),$$
where $\sigma_v(p \otimes u) = (p \circ \sigma_v) \otimes \tau(\sigma_v)u$ denotes the diagonal action. The relation is extended to all of $\mathcal{P}_\tau$ by linearity. If $f$ is singular then the left side vanishes on $f$, the space $\operatorname{span}\{wf : w \in W\}$ consists of singular polynomials (for the same $\kappa$, of course) and can be decomposed into irreducible $W$-submodules, giving rise to the eigenvalues $\varepsilon(\rho)$: on a component of isotype $\rho$ the diagonal sum $\sum_{v \in R_+}\sigma_v$ acts as $\varepsilon(\rho)$, while $\sum_{v \in R_+}\tau(\sigma_v) = \varepsilon(\tau)$ on the values, so $0 = n + \kappa(\varepsilon(\tau)-\varepsilon(\rho))$.
We suspect that only one $\rho$ can appear for any singular polynomial. Recall $\{\varepsilon(\rho)\}$ is a set of integers contained in $[-\#R_+, \#R_+]$, so the possible denominators $\varepsilon(\rho)-\varepsilon(\tau)$ are nonzero integers of absolute value at most $2\#R_+$; consequently the distance between two distinct singular values is bounded below by $\frac{1}{(2\#R_+)^2}$. For each $n \ge 1$ restrict the form $\langle\cdot,\cdot\rangle_\kappa$ to $\mathcal{P}_n \otimes V$. The condition that the form is positive-definite is that the leading principal minors of the Gram matrix are positive (for example use the basis $\{x^\alpha \otimes u_i : |\alpha| = n, 1 \le i \le \dim V\}$). The minors are polynomials in $\kappa$ and are positive in a neighborhood of $0$. Let $z_n$ denote the positive zero of the minors closest to $0$; that is, the form is positive-definite for $0 \le \kappa < z_n$, positive-semidefinite for $\kappa = z_n$, and there exists $f_n \in \mathcal{P}_n \otimes V$ such that $f_n \neq 0$ and $\langle f_n,f_n\rangle_\kappa = 0$. If there are no positive zeros set $z_n = \infty$.
(The reason for the following careful argument is to avoid the hypothetical situation $z_n = 1+\frac{1}{n}$, $\inf z_n = 1$, where there is no nonzero polynomial $f$ with $\langle f,f\rangle_\kappa = 0$ for $\kappa = 1$.)

Lemma 2. Suppose $z_n > z_{n+1}$; then $z_{n+1}$ is a singular value.
Proof. Set $\kappa = z_{n+1}$. By the definition of $z_{n+1}$ there exists $f \in \mathcal{P}_{n+1} \otimes V$ with $f \neq 0$ and $\langle f,f\rangle_\kappa = 0$, and the form is positive-semidefinite on $\mathcal{P}_{n+1} \otimes V$, so by the Cauchy–Schwarz inequality
$$\langle\mathcal{D}_i f, \mathcal{D}_i f\rangle_\kappa^2 = \langle f, x_i\mathcal{D}_i f\rangle_\kappa^2 \le \langle f,f\rangle_\kappa \langle x_i\mathcal{D}_i f, x_i\mathcal{D}_i f\rangle_\kappa = 0$$
for each $i$. By hypothesis the form is positive-definite on $\mathcal{P}_n \otimes V$ for $\kappa = z_{n+1} < z_n$, so $\mathcal{D}_i f = 0$ for all $i$. Thus $f$ is singular.
Define the subsequence $\{z_{n_i}\}$ by $n_1 = \min\{n : z_n < \infty\}$ and $n_{i+1} = \min\{n : n > n_i, z_n < z_{n-1}\}$ (essentially the points of decrease of the sequence). If there are no positive zeros then each $z_n = \infty$ and the form is positive-definite for all $\kappa \ge 0$. Now assume there is at least one $z_n < \infty$. Each finite $z_n$ satisfies $z_n \ge z_{n_i}$ for some $i$.

Theorem 2. Let $z_0 = \min\{z_{n_i} : i \ge 1\}$; then $z_0 = z_{n_j}$ for some $j$, the form $\langle\cdot,\cdot\rangle_\kappa$ is positive-definite for $0 \le \kappa < z_0$, and $z_0$ is a singular value.
Proof. By Lemma 2 the subsequence consists of singular values. The spacing of singular values implies there is no accumulation point, thus the minimum $z_0$ is attained at one of the values $z_{n_j}$. Hence there exists $f \in \mathcal{P}_{n_j} \otimes V$ such that $f \neq 0$ and $f$ is singular for $\kappa = z_0$.
The same argument can be applied to negative $\kappa$: let $z'_n$ be the negative zero closest to $0$ of the leading principal minors of the form restricted to $\mathcal{P}_n \otimes V$, so the form is positive-definite for $z'_n < \kappa \le 0$, and define $z'_0$ analogously. To summarize, there is an interval $z'_0 < \kappa < z_0$ on which $\langle\cdot,\cdot\rangle_\kappa$ is positive-definite, and $z_0$, $z'_0$ are singular values when finite, respectively.
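The Gram-matrix mechanism can be seen in the smallest case. For $W = \mathcal{S}_2$ (type $A_1$, $d_N = 2$) with the trivial $\tau$, the entries on $\mathcal{P}_1$ are $\langle x_i, x_j\rangle_\kappa = \langle 1, \mathcal{D}_i x_j\rangle_\kappa$, i.e. the constant term of $\mathcal{D}_i x_j$, and the determinant locates $z'_1 = -\frac{1}{2} = -\frac{1}{d_N}$. A sketch assuming sympy; the helper name `dunkl` is illustrative.

```python
# Sketch for W = S_2 (type A_1, d_N = 2): the Gram matrix of <.,.>_kappa on P_1
# with basis {x_1, x_2} has entries <x_i, x_j> = constant term of D_i x_j; its
# leading principal minors bound the region of positivity at kappa = -1/2.
import sympy as sp

kappa = sp.Symbol('kappa')
x1, x2 = sp.symbols('x1 x2')

def dunkl(p, xi, xj):
    """D_i p for S_2, where xi is the active variable and xj the other one."""
    return sp.expand(sp.diff(p, xi)
                     + kappa * sp.cancel(
                         (p - p.subs({xi: xj, xj: xi}, simultaneous=True)) / (xi - xj)))

const = {x1: 0, x2: 0}   # extract the constant term
G = sp.Matrix([
    [dunkl(x1, x1, x2).subs(const), dunkl(x2, x1, x2).subs(const)],
    [dunkl(x1, x2, x1).subs(const), dunkl(x2, x2, x1).subs(const)],
])
minors = [G[0, 0], sp.expand(G.det())]         # 1 + kappa and 1 + 2*kappa
negative_zero = sp.solve(minors[1], kappa)[0]  # z'_1 = -1/2
```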

Exterior Powers of the Reflection Representation
Suppose $W$ has only one conjugacy class of reflections and $\operatorname{span} R = \mathbb{R}^N$; then $\sum_{v \in R_+}\frac{v_iv_j}{|v|^2} = \frac{\#R_+}{N}\delta_{ij}$.

Proof. By hypothesis the reflection representation is irreducible, and each rank-one matrix $\bigl[\frac{v_iv_j}{|v|^2}\bigr]_{i,j=1}^N$ has trace $1$, thus the trace of the sum over $v \in R_+$ is $\#R_+$. The sum commutes with the action of $W$, hence is a multiple of the identity matrix, which has trace $N$; the conclusion follows by comparing traces. Henceforth assume $|v|^2 = 2$ for all $v \in R$, and set $\gamma := \frac{2\#R_+}{N}$; thus $\sum_{v \in R_+}v_iv_j = \gamma\delta_{ij}$. The computations use a boundary operator.
For $a \in \mathbb{R}^N$ define $\partial(a)$ on decomposable elements of $\wedge^m(V)$ by
$$\partial(a)\,(b_1 \wedge b_2 \wedge \cdots \wedge b_m) := \sum_{i=1}^m (-1)^{i-1}\langle a,b_i\rangle\, b_1 \wedge \cdots \wedge \widehat{b_i} \wedge \cdots \wedge b_m,$$
where the caret indicates the omitted factor. The operator $\partial(a)$ is extended to all of $\wedge^m(V)$ by linearity.
Proof. The first part follows directly from the definition; for the second, apply $\partial(a)$ to both sides of the equation and compute the terms in $\mathcal{D}_i p$. Here is a table with data on the indecomposable groups with one conjugacy class of reflections; the subscripts indicate the rank $N$ of the group.

$W$: $A_N$, $D_N$, $E_6$, $E_7$, $E_8$, $H_3$, $H_4$, $I_2(2k+1)$
$\#R_+$: $\frac{N(N+1)}{2}$, $N(N-1)$, $36$, $63$, $120$, $15$, $60$, $2k+1$
$d_N$: $N+1$, $2N-2$, $12$, $18$, $30$, $10$, $30$, $2k+1$

Considering the known situation for the symmetric groups and for the reflection representation, we conjecture that for any representation $\tau$ of degree greater than one the interval $z'_0 < \kappa < z_0$ of positivity is symmetric, $z'_0 = -z_0$, and $z_0 \ge \frac{1}{d_N}$.
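The tabulated data satisfy a standard consistency check: for a group with a single conjugacy class of reflections, $d_N$ is the Coxeter number and $\#R_+ = \frac{N d_N}{2}$. A sketch over the standard data (the group names and numbers are classical facts, not computed from the paper):

```python
# Cross-check of the table: 2 * #R_+ == N * d_N for the one-class groups,
# and the smallest (in absolute value) singular value is -1/d_N.
groups = {
    # name: (rank N, #R_+, largest fundamental degree d_N)
    'A5 (= S6)': (5, 15, 6),
    'D5':        (5, 20, 8),
    'E6':        (6, 36, 12),
    'E7':        (7, 63, 18),
    'E8':        (8, 120, 30),
    'H3':        (3, 15, 10),
    'H4':        (4, 60, 30),
    'I2(7)':     (2, 7, 7),
}
checks = {name: 2 * nroots == N * dN for name, (N, nroots, dN) in groups.items()}
smallest_singular = {name: -1.0 / dN for name, (_, _, dN) in groups.items()}
```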

Representations of the Symmetric Groups
The symmetric group $\mathcal{S}_N$, the set of permutations of $\{1,2,\ldots,N\}$, acts on $\mathbb{C}^N$ by permutation of coordinates. The space of polynomials is $\mathcal{P} := \operatorname{span}_{\mathbb{R}(\kappa)}\{x^\alpha : \alpha \in \mathbb{N}_0^N\}$, where $\kappa$ is a parameter. The action of $\mathcal{S}_N$ is extended to polynomials by $wp(x) = p(xw)$ where $(xw)_i = x_{w(i)}$ (consider $x$ as a row vector and $w$ as a permutation matrix). This is a representation of $\mathcal{S}_N$, that is, $w_1(w_2p) = (w_1w_2)p$ for $w_1,w_2 \in \mathcal{S}_N$. Furthermore $\mathcal{S}_N$ is generated by the reflections $(i,j)$ in the mirrors $\{x : x_i = x_j\}$, in particular by the adjacent transpositions $s_i := (i,i+1)$ for $1 \le i \le N-1$. They are the key devices for applying inductive methods, and satisfy the braid relations
$$s_is_{i+1}s_i = s_{i+1}s_is_{i+1},\qquad s_is_j = s_js_i \ \text{for}\ |i-j| \ge 2.$$
We consider the situation where the group $\mathcal{S}_N$ acts on the range as well as on the domain of the polynomials. We use vector spaces, called $\mathcal{S}_N$-modules, on which $\mathcal{S}_N$ has an irreducible orthogonal representation $\tau$. See James and Kerber [11] for representation theory, including a modern discussion of Young's methods.
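The generating relations above can be checked mechanically by modeling each $s_i$ as a one-line permutation tuple and composing; a minimal sketch (the helper names are illustrative):

```python
# Sketch: adjacent transpositions s_i = (i, i+1) of S_n as tuples, with the
# braid, commutation, and involution relations verified by composition.
def transposition(i, n):
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

def compose(p, q):
    """(p o q)(k) = p(q(k))."""
    return tuple(p[q[k]] for k in range(len(p)))

n = 4
s = [transposition(i, n) for i in range(n - 1)]
braid_ok = all(
    compose(s[i], compose(s[i + 1], s[i]))
    == compose(s[i + 1], compose(s[i], s[i + 1]))
    for i in range(n - 2))
commute_ok = compose(s[0], s[2]) == compose(s[2], s[0])   # |i - j| >= 2
involution_ok = all(compose(t, t) == tuple(range(n)) for t in s)
```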
Denote the set of partitions by $\mathbb{N}_0^{N,+} := \{\lambda \in \mathbb{N}_0^N : \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_N\}$. We identify $\tau$ with a partition of $N$ given the same label, that is, $\tau \in \mathbb{N}_0^{N,+}$ and $|\tau| = N$. The length of $\tau$ is $\ell(\tau) := \max\{i : \tau_i > 0\}$. There is a Ferrers diagram of shape $\tau$ (also given the same label), with boxes at points $(i,j)$ with $1 \le i \le \ell(\tau)$ and $1 \le j \le \tau_i$. A tableau of shape $\tau$ is a filling of the boxes with numbers, and a reverse standard Young tableau (RSYT) is a filling with the numbers $\{1,2,\ldots,N\}$ so that the entries decrease in each row and each column.
Denote the set of RSYT's of shape $\tau$ by $\mathcal{Y}(\tau)$ and let $V_\tau := \operatorname{span}_{\mathbb{R}(\kappa)}\{T : T \in \mathcal{Y}(\tau)\}$, with orthogonal basis $\mathcal{Y}(\tau)$. For $1 \le i \le N$ and $T \in \mathcal{Y}(\tau)$ the entry $i$ is at coordinates $(\mathrm{rw}(i,T), \mathrm{cm}(i,T))$ and the content is $c(i,T) := \mathrm{cm}(i,T) - \mathrm{rw}(i,T)$. Each $T \in \mathcal{Y}(\tau)$ is uniquely determined by its content vector $[c(i,T)]_{i=1}^N$. There is an irreducible representation of $\mathcal{S}_N$ on $V_\tau$, also denoted by $\tau$ (a slight abuse of notation). To specify the action of $\tau$ it suffices for our purposes to give only the formulae for $\tau(s_i)$, in Young's seminormal form: with $b := \frac{1}{c(i,T)-c(i+1,T)}$,
$$\tau(s_i)T = T \ \text{if } \mathrm{rw}(i,T) = \mathrm{rw}(i+1,T),\qquad \tau(s_i)T = -T \ \text{if } \mathrm{cm}(i,T) = \mathrm{cm}(i+1,T),$$
and when $c(i,T)-c(i+1,T) \ge 2$ (so that $T^{(i)}$, the tableau with $i$ and $i+1$ interchanged, is an RSYT)
$$\tau(s_i)T = T^{(i)} + bT,\qquad \tau(s_i)T^{(i)} = (1-b^2)T - bT^{(i)}.$$
The $\mathcal{S}_N$-invariant inner product on $V_\tau$ is unique up to multiplication by a constant. The Jucys–Murphy elements $\omega_i := \sum_{j=i+1}^N(i,j)$ satisfy $\tau(\omega_i)T = c(i,T)\,T$. Thus $\sum_{1\le i<j\le N}\tau((i,j))$ acts on $V_\tau$ as multiplication by $\varepsilon(\tau) = \sum_{j=1}^N c(j,T) = \sum_{i\ge 1}\frac{\tau_i(\tau_i+1-2i)}{2}$.
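The combinatorics above is easy to experiment with. The following sketch enumerates RSYT's of a small hypothetical shape $\tau = (3,1)$, computes content vectors, and checks that the content vector determines the tableau and that $\sum_j c(j,T) = \varepsilon(\tau)$ is independent of $T$ (the function names are illustrative):

```python
# Sketch: enumerate reverse standard Young tableaux (entries N, N-1, ..., 1
# decreasing along rows and columns) and compute content vectors c(i, T).
def rsyts(shape):
    N = sum(shape)
    boxes = [(i, j) for i, r in enumerate(shape) for j in range(r)]

    def extend(tab, next_entry):
        if next_entry == 0:
            yield dict(tab)          # all N entries placed
            return
        for (i, j) in boxes:
            if (i, j) in tab:
                continue
            # a box may be filled only after its west and north neighbors,
            # so earlier (larger) entries sit to the left and above
            if (i > 0 and (i - 1, j) not in tab) or (j > 0 and (i, j - 1) not in tab):
                continue
            tab[(i, j)] = next_entry
            yield from extend(tab, next_entry - 1)
            del tab[(i, j)]

    yield from extend({}, N)

def contents(tab, N):
    pos = {e: b for b, e in tab.items()}       # entry -> (row, col), 0-indexed
    return tuple(pos[e][1] - pos[e][0] for e in range(1, N + 1))

shape = (3, 1)                                 # example shape, N = 4
tabs = list(rsyts(shape))
vecs = [contents(t, sum(shape)) for t in tabs]
```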

Vector-valued Jack Polynomials
For a given partition $\tau$ of $N$ there is a space of vector-valued nonsymmetric Jack polynomials, also called a standard module of the rational Cherednik algebra. The nonsymmetric vector-valued Jack polynomials (NSJP's) form a basis of $\mathcal{P}_\tau := \mathcal{P} \otimes V_\tau$, the space of $V_\tau$-valued polynomials in $x$, equipped with the $\mathcal{S}_N$-action $(i,j)\,(p(x) \otimes T) := p(x(i,j)) \otimes \tau((i,j))T$, extended by linearity to all of $\mathcal{P}_\tau$.
For each $\alpha \in \mathbb{N}_0^N$ and $T \in \mathcal{Y}(\tau)$ there is a NSJP $\zeta_{\alpha,T}$ with leading term $x^\alpha \otimes \tau(r_\alpha^{-1})T$, where $r_\alpha$ is the rank function of $\alpha$ (the permutation sorting $\alpha$ into its nonincreasing rearrangement $\alpha^+$). The NSJP's are simultaneous eigenfunctions of a commuting family of Cherednik operators $\{\mathcal{U}_i\}$, and the list of eigenvalues is called the spectral vector $\xi_{\alpha,T}$, with $\xi_{\alpha,T}(i) = \alpha_i + 1 + \kappa c(r_\alpha(i),T)$. The NSJP's can be constructed by means of a Yang–Baxter graph. The details are in [4]; that paper has several figures illustrating some typical graphs.
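The rank function is easily computed; a sketch assuming the usual convention of the NSJP literature (ties broken by position, so that $r_\alpha$ is the identity when $\alpha$ is a partition):

```python
# Sketch of the rank function r_alpha: r_alpha(i) counts entries larger than
# alpha_i, plus earlier entries equal to alpha_i, so alpha_i = (alpha^+)_{r_alpha(i)}.
def rank_function(alpha):
    N = len(alpha)
    return [sum(1 for j in range(N) if alpha[j] > alpha[i])
            + sum(1 for j in range(i + 1) if alpha[j] == alpha[i])
            for i in range(N)]

alpha = (0, 2, 1)
r = rank_function(alpha)                         # 1-indexed values
alpha_plus = tuple(sorted(alpha, reverse=True))  # the partition alpha^+
consistent = all(alpha[i] == alpha_plus[r[i] - 1] for i in range(len(alpha)))
```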
A node consists of $(\alpha, T, \xi_{\alpha,T}, r_\alpha, \zeta_{\alpha,T})$ where $\alpha \in \mathbb{N}_0^N$, $T \in \mathcal{Y}(\tau)$, and $\xi_{\alpha,T}$ is the spectral vector. The root is $(0^N, T_0, \xi_{0,T_0}, I, 1 \otimes T_0)$, with $\xi_{0,T_0}(i) = 1 + \kappa c(i,T_0)$. If for particular $T$ and $i$ the relation $c(i,T) - c(i+1,T) \ge 2$ holds, then $T^{(i)}$, the tableau formed by interchanging $i$ and $i+1$ in $T$, is also an RSYT (the relation is equivalent to $\mathrm{rw}(i,T) < \mathrm{rw}(i+1,T)$ and $\mathrm{cm}(i,T) > \mathrm{cm}(i+1,T)$; the RSYT property implies that these two inequalities are logically equivalent). In this case $\mathrm{inv}(T^{(i)}) = \mathrm{inv}(T) - 1$. The inv-maximal tableau $T_0$ is formed by entering $N, N-1, \ldots, 1$ column-by-column, and the inv-minimal tableau $T_1$ by entering them row-by-row.
For any $\kappa$ there is a unique bilinear symmetric $\mathcal{S}_N$-invariant form on $\mathcal{P}_\tau$ which satisfies (for $f,g \in \mathcal{P}_\tau$, $u,u' \in V_\tau$, $w \in \mathcal{S}_N$)
$$\langle 1 \otimes u, 1 \otimes u'\rangle_\kappa = \langle u,u'\rangle,\qquad \langle wf, wg\rangle_\kappa = \langle f,g\rangle_\kappa,\qquad \langle\mathcal{D}_i f, g\rangle_\kappa = \langle f, x_i g\rangle_\kappa.$$
As a consequence $x_i\mathcal{D}_i$ is self-adjoint for each $i$; furthermore $\langle\zeta_{\alpha,T}, \zeta_{\beta,T'}\rangle_\kappa = 0$ whenever $(\alpha,T) \neq (\beta,T')$, because $\xi_{\alpha,T} \neq \xi_{\beta,T'}$ for generic $\kappa$. The form is defined in terms of the norms $\langle\zeta_{\alpha,T}, \zeta_{\alpha,T}\rangle_\kappa$ and is extended by linearity and orthogonality to all polynomials. The norm formula is a special case of a result of Griffeth [9]. The first ingredient is the formula for $\langle\zeta_{\lambda,T}, \zeta_{\lambda,T}\rangle_\kappa$ for partitions $\lambda \in \mathbb{N}_0^{N,+}$; the second ingredient expresses the relationship between $\langle\zeta_{\alpha,T}, \zeta_{\alpha,T}\rangle_\kappa$ and $\langle\zeta_{\alpha^+,T}, \zeta_{\alpha^+,T}\rangle_\kappa$. From the bounds on $c(i,T)-c(j,T)$ and the formulae it follows that $\langle\zeta_{\alpha,T}, \zeta_{\alpha,T}\rangle_\kappa > 0$ provided $-\frac{1}{h_\tau} < \kappa < \frac{1}{h_\tau}$. Denote $\langle f,f\rangle_\kappa$ by $\|f\|^2$ for any generic value of $\kappa$ (a slight abuse of notation).
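For a partition the longest hook-length is attained at the corner box $(1,1)$, so $h_\tau = \tau_1 + \ell(\tau) - 1$; the positivity interval $(-\frac{1}{h_\tau}, \frac{1}{h_\tau})$ is then immediate. A sketch with an illustrative helper:

```python
# Sketch: hook-lengths of a partition diagram, 0-indexed boxes (i, j);
# hook = arm + leg + 1. The maximum is h(1,1) = tau_1 + ell(tau) - 1.
def hook_lengths(shape):
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]  # conjugate partition
    return {(i, j): (shape[i] - j) + (conj[j] - i) - 1
            for i in range(len(shape)) for j in range(shape[i])}

tau = (3, 2, 1)
h = hook_lengths(tau)
h_tau = max(h.values())
interval = (-1.0 / h_tau, 1.0 / h_tau)   # region where the NSJP norms are positive
```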

Differentiation Formulae
First we prove formulae for $\mathcal{D}_j\zeta_{\alpha,T}$ for $\ell(\alpha) \le j \le N$ (recall the length of $\alpha$ is $\ell(\alpha) := \max\{i : \alpha_i > 0\}$). We need the commutation relations (part of the defining relations of the rational Cherednik algebra): for $p \in \mathcal{P}_\tau$
$$\mathcal{D}_i(x_ip) = x_i\mathcal{D}_ip + p + \kappa\sum_{j \neq i}(i,j)p,\qquad \mathcal{D}_i(x_jp) = x_j\mathcal{D}_ip - \kappa(i,j)p \quad (j \neq i).$$
Recall the Jucys–Murphy elements $\omega_i := \sum_{j=i+1}^N(i,j)$ for $1 \le i < N$ and $\omega_N := 0$.

Proposition 4. Suppose $p \in \mathcal{P}_\tau$ and $1 \le i \le N$; then $\mathcal{D}_ip = 0$ if and only if $\mathcal{U}_ip = p + \kappa\omega_ip$.
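In the scalar case (trivial $\tau$) the commutation relations read $\mathcal{D}_i(x_ip) - x_i\mathcal{D}_ip = p + \kappa\sum_{j\neq i}(i,j)p$ and $\mathcal{D}_i(x_jp) - x_j\mathcal{D}_ip = -\kappa(i,j)p$, and can be verified symbolically; a sketch assuming sympy, with illustrative helper names:

```python
# Sketch: scalar-case check of the rational Cherednik commutation relations
# for S_3, with (i,j)p meaning p with x_i and x_j exchanged.
import sympy as sp

N = 3
kappa = sp.Symbol('kappa')
X = sp.symbols('x1:4')

def swap(p, i, j):
    return p.subs({X[i]: X[j], X[j]: X[i]}, simultaneous=True)

def dunkl(p, i):
    out = sp.diff(p, X[i])
    for j in range(N):
        if j != i:
            out += kappa * sp.cancel((p - swap(p, i, j)) / (X[i] - X[j]))
    return sp.expand(out)

p = X[0] * X[2]**2
comm_ii = dunkl(X[0] * p, 0) - X[0] * dunkl(p, 0)   # D_1(x_1 p) - x_1 D_1 p
rhs_ii = p + kappa * sum(swap(p, 0, j) for j in range(1, N))
comm_ij = dunkl(X[1] * p, 0) - X[1] * dunkl(p, 0)   # D_1(x_2 p) - x_2 D_1 p
rhs_ij = -kappa * swap(p, 0, 1)
```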
We specialize the formula to partition labels.
There are several ingredients to the proof. The first $\tau_l$ coordinates of the spectral vector of $\zeta_{\alpha,T_1}$ are computed directly; the contents $c(r_\alpha(i),T_1)$ for $\tau_l+1 \le i \le N$ make up $l-1$ lists of consecutive integers, one for each row, from row $\#1$ to row $\#(l-1)$. The needed product identity over a list of consecutive contents is easily proved by induction.
Consider the part of the product in (6.3) corresponding to row $\#i$: the contents are $1-i, 2-i, \ldots, \tau_i-i$, and by the Lemma this row contributes a factor expressed through the hook-length $h(i,1) = \tau_i + l - i$; the other factors telescope.
Proof. The first $m$ coordinates of the spectral vector of $\zeta_{\alpha,T_0}$ are computed directly. The contents $c(r_\alpha(i),T_0)$ for $m+1 \le i \le N$ make up $\tau_1-1$ lists of consecutive integers, one for each column, from column $\#1$ to column $\#(\tau_1-1)$. Consider the part of the product in (6.3) corresponding to column $\#j$: the contents are $j-\tau'_j, j-\tau'_j+1, \ldots, j-1$, and by the Lemma this column contributes a factor expressed through the corresponding hook-length; the other factors telescope.
The proof of the following is essentially the same as that of Theorem 8.
As mentioned before the polynomials in span {wζ α,T0 : w ∈ S N } are singular for the same κ.