A Linear System of Differential Equations Related to Vector-Valued Jack Polynomials on the Torus

For each irreducible module of the symmetric group $\mathcal{S}_{N}$ there is a set of parametrized nonsymmetric Jack polynomials in $N$ variables taking values in the module. These polynomials are simultaneous eigenfunctions of a commutative set of operators, self-adjoint with respect to two Hermitian forms, one called the contravariant form and the other defined by a matrix-valued measure on the $N$-torus. The latter is valid for the parameter lying in an interval about zero which depends on the module. In a previous paper [SIGMA 12 (2016), 033, 27 pages, arXiv:1511.06721] the author proved the existence of the measure and showed that its absolutely continuous part satisfies a system of linear differential equations. In this paper the system is analyzed in detail. The $N$-torus is divided into $(N-1)!$ connected components by the hyperplanes $x_{i}=x_{j}$, $i<j$, which are the singularities of the system. The main result is that the orthogonality measure has no singular part with respect to the Haar measure, and thus is given by a matrix function times the Haar measure. This function is analytic on each of the connected components.


Introduction
The Jack polynomials form a parametrized basis of symmetric polynomials. A special case of these consists of the Schur polynomials, important in the character theory of the symmetric groups. By means of a commutative algebra of differential-difference operators the theory was extended to nonsymmetric Jack polynomials, again a parametrized basis but now for all polynomials in $N$ variables. These polynomials are orthogonal for several different inner products, and in each case they are simultaneous eigenfunctions of a commutative set of self-adjoint operators. These inner products are invariant under permutations of the coordinates, that is, the symmetric group. One of these inner products is that of $L^2\big(\mathbb{T}^N, K_\kappa(x)\,dm(x)\big)$, where $x_j = \exp(\mathrm{i}\theta_j)$, $-\pi < \theta_j \le \pi$, $1 \le j \le N$, parametrizes the $N$-torus $\mathbb{T}^N$, $dm$ is the Haar measure on the torus, and $K_\kappa$ is the weight function. Beerends and Opdam [1] discovered this orthogonality property of symmetric Jack polynomials. Opdam [9] established orthogonality structures on the torus for trigonometric polynomials associated with Weyl groups; the nonsymmetric Jack polynomials form a special case. Griffeth [7] constructed vector-valued Jack polynomials for the family $G(n,p,N)$ of complex reflection groups. These are the groups of permutation matrices (exactly one nonzero entry in each row and each column) whose nonzero entries are $n$-th roots of unity and the product of these entries is an $(n/p)$-th root of unity. The symmetric groups and the hyperoctahedral groups are the special cases $G(1,1,N)$ and $G(2,1,N)$ respectively. The term "vector-valued" means that the polynomials take values in irreducible modules of the underlying group, and the action of the group is on the range as well as the domain of the polynomials. The author [2] together with Luque [5] investigated the symmetric group case more intensively.
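To make the membership condition for $G(n,p,N)$ concrete, here is a small Python sketch (the encoding of a group element as a permutation together with a vector of exponents is our own illustrative choice, not notation from the paper): a monomial matrix with entries that are $n$-th roots of unity lies in $G(n,p,N)$ exactly when the exponent sum is divisible by $p$.

```python
import cmath

def in_G(n, p, N, perm, exps):
    """Test membership in G(n, p, N) (illustrative encoding).

    The element is the monomial matrix with entry exp(2*pi*1j*exps[i]/n)
    in row i, column perm[i].  The nonzero entries are n-th roots of
    unity; their product is an (n/p)-th root of unity exactly when
    sum(exps) is divisible by p.
    """
    assert n % p == 0 and sorted(perm) == list(range(N)) and len(exps) == N
    prod = 1.0 + 0j
    for e in exps:
        prod *= cmath.exp(2j * cmath.pi * e / n)
    # prod is an (n/p)-th root of unity iff prod**(n/p) == 1
    is_root = abs(prod ** (n // p) - 1) < 1e-9
    assert is_root == (sum(exps) % p == 0)   # the arithmetic criterion
    return is_root
```

With $p = 1$ the constraint is vacuous, recovering the full wreath products: $G(1,1,N)$ is the symmetric group and $G(2,1,N)$ the hyperoctahedral group.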
The basic setup is an irreducible representation of the symmetric group, specified by a partition $\tau$ of $N$, and a parameter $\kappa$ restricted to an interval determined by the partition, namely $-1/h_\tau < \kappa < 1/h_\tau$ where $h_\tau$ is the maximum hook-length of the partition $\tau$. More recently [3] we showed that there does exist a positive matrix measure on the torus for which the nonsymmetric vector-valued Jack polynomials (henceforth NSJP's) form an orthogonal set. The proof depends on a matrix version of Bochner's theorem about the relation between positive measures on a compact abelian group and positive-definite functions on the dual group, which is a discrete abelian group. In the present situation the torus is the compact (multiplicative) group and the dual is $\mathbb{Z}^N$. By using known properties of the NSJP's we produced a positive-definite matrix function on $\mathbb{Z}^N$ and this implied the existence of the desired orthogonality measure. Additionally we showed that the part of the measure supported by $\mathbb{T}^N_{\mathrm{reg}} := \mathbb{T}^N \setminus \bigcup_{i<j}\{x : x_i = x_j\}$ is absolutely continuous with respect to the Haar measure $dm$ and satisfies a first-order differential system. In this paper we complete the description of the measure by proving there is no singular part. The idea is to use the functional equations satisfied by the inner product to establish a correspondence to the differential system. The main reason for the argument being so complicated is that the "obvious" integration-by-parts argument, which works smoothly for the scalar case with $\kappa > 1$, has great difficulty with the singularities of the measure of the form $|x_i - x_j|^{-2|\kappa|}$. We use a Cauchy principal-value argument based on a weak continuity condition across the faces $\{x : x_i = x_j\}$ (as an over-simplified one-dimensional example consider the integral $\int_{-1}^{1}\frac{d}{dx}f(x)\,dx$ with $f(x) = |2x + x^2|^{-1/4}$: the integral is divergent but the principal value $\lim_{\varepsilon\to0^+}\big(\int_{-1}^{-\varepsilon} + \int_{\varepsilon}^{1}\big)\frac{d}{dx}f(x)\,dx$ exists, since the boundary terms combine to $f(-\varepsilon) - f(\varepsilon) = O\big(\varepsilon^{3/4}\big)$).
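The over-simplified one-dimensional example can be checked numerically; the following Python sketch (our own illustration, not from the paper) evaluates the symmetric truncation of the divergent integral via the fundamental theorem of calculus and confirms that the boundary terms cancel at the stated rate.

```python
def f(x):
    # f(x) = |2x + x^2|^{-1/4}, singular at x = 0 (and at x = -2)
    return abs(2 * x + x * x) ** (-0.25)

def pv(eps):
    # integral of f'(x) over [-1, -eps] U [eps, 1] by the fundamental
    # theorem of calculus; individually f(eps), f(-eps) blow up like eps^{-1/4}
    return (f(-eps) - f(-1.0)) + (f(1.0) - f(eps))

# The divergent pieces cancel, f(-eps) - f(eps) = O(eps^{3/4}),
# so pv(eps) converges to f(1) - f(-1) = 3**(-0.25) - 1 as eps -> 0.
```

Evaluating `pv` at decreasing values of `eps` shows rapid convergence to the finite limit, even though `f(eps)` itself diverges.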
The differential system is a two-sided version of a Knizhnik-Zamolodchikov equation (see [6]) modified to have solutions homogeneous of degree zero, that is, constant on circles $\{(ux_1,\dots,ux_N) : |u| = 1\}$. The purpose of the latter condition is to allow solutions analytic on connected components of $\mathbb{T}^N_{\mathrm{reg}}$. Denote the degree of $\tau$ by $n_\tau$. The solutions of the differential system are locally analytic $n_\tau\times n_\tau$ matrix functions with initial condition given by a constant matrix. That is, the solution space is of dimension $n_\tau^2$, but only one solution can provide the desired weight function. Part of the analysis deals with conditions specifying this solution; they turn out to be commutation relations involving certain group elements. In the subsequent discussion it is shown that the weight function property holds for a very small interval of $\kappa$ values if these relations are satisfied. This is combined with the existence theorem of the positive-definite matrix measure to finally demonstrate that the measure has no singular part for any $\kappa$ in $-1/h_\tau < \kappa < 1/h_\tau$.
In a subsequent development [4] it is shown that the square root of the matrix weight function, multiplied by vector-valued symmetric Jack polynomials, provides novel wavefunctions of the Calogero-Sutherland quantum mechanical model of identical particles on a circle with $1/r^2$ interactions.
Here is an outline of the contents of the individual sections: • Section 2: a short description of the representation of the symmetric group associated to a partition; the definition of Dunkl operators for vector-valued polynomials and the definition of nonsymmetric Jack polynomials (NSJP's) as simultaneous eigenvectors of a commutative set of operators; and the Hermitian form given by an integral over the torus, for which the NSJP's form an orthogonal basis.
• Section 3: the definition of the linear system of differential equations which will be demonstrated to have a unique matrix solution $L(x)$ such that $L(x)^*L(x)\,dm(x)$ is the weight function for the Hermitian form; the proof that the system is Frobenius integrable; and the analyticity and monodromy properties of the solutions on the torus.
• Section 4: the use of the differential equation to relate the Hermitian form to $L(x)^*L(x)$ by means of integration by parts; the result of this is to isolate the role of the singularities in the process of proving the orthogonality of the NSJP's with respect to $L^*L\,dm$.
• Section 5: deriving power series expansions of $L(x)$ near the singular set $\bigcup_{i<j}\{x\in\mathbb{T}^N : x_i = x_j\}$, in particular near the set $\{x : x_{N-1} = x_N\}$; description of commutation properties of the coefficients with respect to the reflection $(N-1,N)$; the behavior of $L$ across the mirror $\{x : x_{N-1} = x_N\}$.
• Section 6: the derivation of global bounds on L(x) and local bounds on the coefficients of the power series, needed to analyze convergence properties of the integration by parts.
• Section 7: the proof of a sufficient condition for the validity of the Hermitian form; the condition is partly that κ lies in a small interval around 0 and that the boundary value of L(x) satisfies a commutativity condition; the proof involves very detailed analysis of bounds on L, since the local bounds have to be integrated over the entire torus.
• Section 8: further analysis of the orthogonality measure constructed in [3], in particular the proof of the formal differential system satisfied by the Fourier-Stieltjes (Laurent) series of the measure; this is used to show that the measure has no singular part on the open faces, such as $\big\{(e^{\mathrm{i}\theta_1}, e^{\mathrm{i}\theta_2}, \dots, e^{\mathrm{i}\theta_{N-1}}, e^{\mathrm{i}\theta_{N-1}}) : \theta_1 < \theta_2 < \cdots < \theta_{N-2} < \theta_{N-1} < \theta_1 + 2\pi\big\}$; in turn this property is shown to imply the validity of the sufficient condition set up in Section 7.
• Section 9: analyticity properties of the solutions of matrix equations with analytic coefficients; the results are used to extend the validity of the Hermitian form from the smaller interval found in Section 7 to the desired interval $-1/h_\tau < \kappa < 1/h_\tau$.

Modules of the symmetric group
The symmetric group $S_N$, the set of permutations of $\{1,2,\dots,N\}$, acts on $\mathbb{C}^N$ by permutation of coordinates. For $\alpha\in\mathbb{Z}^N$ the norm is $|\alpha| := \sum_{i=1}^N|\alpha_i|$ and the monomial is $x^\alpha := \prod_{i=1}^N x_i^{\alpha_i}$. Elements of $\mathrm{span}_{\mathbb{C}}\{x^\alpha : \alpha\in\mathbb{Z}^N\}$ are called Laurent polynomials. The action of $S_N$ is extended to polynomials by $wp(x) = p(xw)$ where $(xw)_i = x_{w(i)}$ (consider $x$ as a row vector and $w$ as a permutation matrix). This is a representation of $S_N$, that is, $w_1(w_2p) = (w_1w_2)p$ for $w_1, w_2\in S_N$. Furthermore $S_N$ is generated by the reflections in the mirrors $\{x : x_i = x_j\}$, $i<j$, namely the transpositions $(i,j)$. The adjacent transpositions $s_i := (i,i+1)$, $1\le i\le N-1$, are the key devices for applying inductive methods, and satisfy the braid relations $s_is_{i+1}s_i = s_{i+1}s_is_{i+1}$ and $s_is_j = s_js_i$ for $|i-j|\ge2$. We consider the situation where the group $S_N$ acts on the range as well as on the domain of the polynomials. We use vector spaces, called $S_N$-modules, on which $S_N$ has an irreducible unitary (orthogonal) representation $\tau: S_N\to O_m(\mathbb{R})$, $\tau(w)^{-1} = \tau(w^{-1}) = \tau(w)^T$. See James and Kerber [8] for representation theory, including a modern discussion of Young's methods.
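These conventions can be sanity-checked in a few lines of Python (an illustration we add here, with permutations in 1-indexed one-line notation): the right action $(xw)_i = x_{w(i)}$ is compatible with the composition $(uw)(i) = u(w(i))$, so that $w\mapsto(p\mapsto p(xw))$ is a homomorphism, and the adjacent transpositions satisfy the braid relations.

```python
def act(w, x):
    # right action (xw)_i = x_{w(i)}; w is a tuple in one-line notation
    return tuple(x[w[i] - 1] for i in range(len(x)))

def compose(u, w):
    # (uw)(i) = u(w(i)), chosen so that x(uw) = (xu)w for the action above
    return tuple(u[w[i] - 1] for i in range(len(w)))

def s(i, N):
    # adjacent transposition s_i = (i, i+1)
    w = list(range(1, N + 1))
    w[i - 1], w[i] = w[i], w[i - 1]
    return tuple(w)
```

For example, `act(compose(u, w), x)` equals `act(w, act(u, x))`, which is exactly the associativity $x(uw) = (xu)w$ underlying the representation property.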
Denote the set of partitions with at most $N$ parts by $\mathbb{N}_0^{N,+} := \{\lambda\in\mathbb{N}_0^N : \lambda_1\ge\lambda_2\ge\cdots\ge\lambda_N\}$. We identify $\tau$ with a partition of $N$ given the same label, that is, $\tau\in\mathbb{N}_0^{N,+}$ and $|\tau| = N$. The length of $\tau$ is $\ell(\tau) := \max\{i : \tau_i > 0\}$. There is a Ferrers diagram of shape $\tau$ (also given the same label), with boxes at points $(i,j)$ with $1\le i\le\ell(\tau)$ and $1\le j\le\tau_i$. A tableau of shape $\tau$ is a filling of the boxes with numbers, and a reverse standard Young tableau (RSYT) is a filling with the numbers $\{1,2,\dots,N\}$ so that the entries decrease in each row and each column. We exclude the one-dimensional representations corresponding to the one-row partition $(N)$ and the one-column partition $(1,1,\dots,1)$ (the trivial and determinant representations, respectively). We need the important quantity $h_\tau := \tau_1 + \ell(\tau) - 1$, the maximum hook-length of the diagram (the hook-length of the node $(i,j)\in\tau$ is defined to be $\tau_i - j + \#\{k : i < k\le\ell(\tau),\ j\le\tau_k\} + 1$). Denote the set of RSYT's of shape $\tau$ by $\mathcal{Y}(\tau)$ and let $V_\tau = \mathrm{span}\{T : T\in\mathcal{Y}(\tau)\}$ (the field is $\mathbb{C}(\kappa)$) with orthogonal basis $\mathcal{Y}(\tau)$. For $1\le i\le N$ and $T\in\mathcal{Y}(\tau)$ the entry $i$ is at coordinates $(\mathrm{rw}(i,T),\mathrm{cm}(i,T))$ and the content is $c(i,T) := \mathrm{cm}(i,T) - \mathrm{rw}(i,T)$; the sum $S_1(\tau) := \sum_{i=1}^N c(i,T)$ depends only on $\tau$, and $\gamma := S_1(\tau)/N$. The $S_N$-invariant inner product $\langle\cdot,\cdot\rangle_0$ on $V_\tau$ is defined so that $\mathcal{Y}(\tau)$ is an orthogonal basis; it is unique up to multiplication by a constant.
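As an added illustration (not from the paper), the hook-lengths, the maximum hook-length $h_\tau = \tau_1 + \ell(\tau) - 1$, and the dimension $n_\tau = \#\mathcal{Y}(\tau)$ can be computed in Python; the dimension is obtained from the classical hook length formula, which counts standard (equivalently, reverse standard) Young tableaux.

```python
from math import factorial

def hook_lengths(shape):
    # hook of node (i, j), 1-indexed: tau_i - j + #{k > i : tau_k >= j} + 1
    ell = len(shape)
    return [[shape[i] - (j + 1)
             + sum(1 for k in range(i + 1, ell) if shape[k] >= j + 1) + 1
             for j in range(shape[i])] for i in range(ell)]

def h_max(shape):
    # maximum hook length h_tau = tau_1 + ell(tau) - 1, attained at node (1, 1)
    return shape[0] + len(shape) - 1

def n_tau(shape):
    # dimension of the irreducible module = #RSYT, by the hook length formula
    d = factorial(sum(shape))
    for row in hook_lengths(shape):
        for h in row:
            d //= h
    return d
```

For the shape $(3,2)$ this gives $h_\tau = 4$ and $n_\tau = 5$, matching the five standard tableaux of that shape.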

Jack polynomials
The main concerns of this paper are measures and matrix functions on the torus associated to $\mathcal{P}_\tau := \mathcal{P}\otimes V_\tau$, the space of $V_\tau$-valued polynomials, which is equipped with the $S_N$ action $wp(x) = \tau(w)p(xw)$, $p\in\mathcal{P}_\tau$, extended by linearity. There is a parameter $\kappa$ which may be generic/transcendental or complex.
The commutation relations analogous to the scalar case hold. The simultaneous eigenfunctions of $\{U_i\}$ are called (vector-valued) nonsymmetric Jack polynomials (NSJP). For generic $\kappa$ these eigenfunctions form a basis of $\mathcal{P}_\tau$ (this property fails for certain rational numbers outside the interval $-1/h_\tau < \kappa < 1/h_\tau$). There is a partial order on $\mathbb{N}_0^N\times\mathcal{Y}(\tau)$ for which the NSJP's have a triangular expression with leading term indexed by $(\alpha,T)\in\mathbb{N}_0^N\times\mathcal{Y}(\tau)$. The polynomial with this label is denoted by $\zeta_{\alpha,T}$; it is homogeneous of degree $\sum_{i=1}^N\alpha_i$. The rank function determines $r_\alpha\in S_N$, and $r_\alpha = I$ if and only if $\alpha$ is a partition. The associated vector is called the spectral vector for $(\alpha,T)$. The NSJP structure can be extended to Laurent polynomials: let $e_N := \prod_{i=1}^N x_i$ and $\mathbf{1} := (1,1,\dots,1)\in\mathbb{N}_0^N$; then $r_{\alpha+m\mathbf{1}} = r_\alpha$ for any $\alpha\in\mathbb{N}_0^N$ and $m\in\mathbb{Z}$. The commutations $U_i(e_N^mp) = e_N^m(m+U_i)p$ for $1\le i\le N$ and $p\in\mathcal{P}_\tau$ imply that $e_N^m\zeta_{\alpha,T}$ and $\zeta_{\alpha+m\mathbf{1},T}$ have the same spectral vector for any $m\in\mathbb{N}_0$. They also have the same leading term (see [3, Section 2.2]) and hence $e_N^m\zeta_{\alpha,T} = \zeta_{\alpha+m\mathbf{1},T}$ for $\alpha\in\mathbb{N}_0^N$. This fact allows the definition of $\zeta_{\alpha,T}$ for any $\alpha\in\mathbb{Z}^N$: let $m = -\min_i\alpha_i$, so that $\alpha+m\mathbf{1}\in\mathbb{N}_0^N$, and set $\zeta_{\alpha,T} := e_N^{-m}\zeta_{\alpha+m\mathbf{1},T}$. For a complex vector space $V$ a Hermitian form is a mapping $\langle\cdot,\cdot\rangle: V\times V\to\mathbb{C}$ satisfying $\langle u,v\rangle = \overline{\langle v,u\rangle}$. The form is positive semidefinite if $\langle u,u\rangle\ge0$ for all $u\in V$. The concern of this paper is with a particular Hermitian form on $\mathcal{P}_\tau$ which satisfies, for all $f,g\in\mathcal{P}_\tau$ and $w\in S_N$, the invariance $\langle wf,wg\rangle = \langle f,g\rangle$, among the properties listed in (2.1). Thus uniqueness of the spectral vectors (for all but a certain set of rational $\kappa$ values) implies that $\langle\zeta_{\alpha,T},\zeta_{\beta,T'}\rangle = 0$ whenever $(\alpha,T)\ne(\beta,T')$. In particular polynomials homogeneous of different degrees are mutually orthogonal, by the basis property of $\{\zeta_{\alpha,T}\}$. For this particular Hermitian form, multiplication by any $x_i$ is an isometry for all $1\le i\le N$. The form involves an integral over the torus.
The equations (2.1) determine the form uniquely (up to a multiplicative constant if the first condition is removed). Denote $\mathbb{C}^\times := \mathbb{C}\setminus\{0\}$ and $\mathbb{C}^N_{\mathrm{reg}} := (\mathbb{C}^\times)^N\setminus\bigcup_{i<j}\{x : x_i = x_j\}$. The torus is a compact multiplicative abelian group; in polar coordinates $\mathbb{T}^N = \{(e^{\mathrm{i}\theta_1},\dots,e^{\mathrm{i}\theta_N}) : -\pi < \theta_j\le\pi\}$ and its Haar measure is $dm(x) = (2\pi)^{-N}d\theta_1\cdots d\theta_N$. Let $\mathbb{T}^N_{\mathrm{reg}} := \mathbb{T}^N\cap\mathbb{C}^N_{\mathrm{reg}}$; then $\mathbb{T}^N_{\mathrm{reg}}$ has $(N-1)!$ connected components and each component is homotopic to a circle (if $x$ is in some component then so is $ux = (ux_1,\dots,ux_N)$ for each $u\in\mathbb{T}$). Thus $C_0$ is the component consisting of the points $(e^{\mathrm{i}\theta_1},\dots,e^{\mathrm{i}\theta_N})$ with $\theta_1 < \theta_2 < \cdots < \theta_N < \theta_1 + 2\pi$.
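The count of $(N-1)!$ components can be illustrated with a short Python check (our own sketch): a point of $\mathbb{T}^N_{\mathrm{reg}}$ is labeled by the cyclic order of its coordinates on the circle, normalized so that coordinate $1$ comes first; rotating the whole point along the circle $\{ux : |u| = 1\}$ does not change the label, and exactly $(N-1)!$ labels occur.

```python
import math
from itertools import permutations

def component_label(theta):
    # x = (e^{i theta_1}, ..., e^{i theta_N}) with distinct angles mod 2*pi;
    # the label is the cyclic order of the coordinates, started at coordinate 1
    order = sorted(range(len(theta)), key=lambda k: theta[k] % (2 * math.pi))
    i = order.index(0)
    return tuple(order[i:] + order[:i])

N = 4
base = [0.5, 1.5, 2.5, 3.5]                     # distinct angles in [0, 2*pi)
labels = {component_label(p) for p in permutations(base)}
```

All $4! = 24$ permuted points fall into $3! = 6$ components, and shifting every angle by a common constant (a rotation $x\mapsto ux$) leaves the label fixed.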
We introduced the notation $\langle f,g\rangle = \int_{\mathbb{T}^N}f(x)^*\,d\mu(x)\,g(x)$, where $f,g\in\mathcal{P}_\tau$ have the components $(f_T)$, $(g_T)$ with respect to the orthonormal basis of $V_\tau$; this Hermitian form satisfies (2.1). Furthermore we showed that $d\mu = d\mu_s + L(x)^*H(C)L(x)\,dm(x)$ on each connected component $C$ of $\mathbb{T}^N_{\mathrm{reg}}$, where the singular part $\mu_s$ is the restriction of $\mu$ to $\mathbb{T}^N\setminus\mathbb{T}^N_{\mathrm{reg}}$, $H(C)$ is constant and positive-definite on each connected component $C$ of $\mathbb{T}^N_{\mathrm{reg}}$, and $L(x)$ is a matrix function solving a system of differential equations. That system is the subject of this paper. In a way the main problem is to show that $\mu$ has no singular part.
The differential system

The effect of the term $\gamma x_iI$ is to make $L(x)$ homogeneous of degree zero, that is, $L(ux) = L(x)$ for $|u| = 1$. The differential system is defined on $\mathbb{C}^N_{\mathrm{reg}}$, Frobenius integrable and analytic, thus any local solution can be continued analytically to any point in $\mathbb{C}^N_{\mathrm{reg}}$. Different paths may produce different values; if the analytic continuation is done along a closed path then the resultant solution is a constant matrix multiple of the original solution, called the monodromy matrix; however if the closed path is contained in a simply connected subset of $\mathbb{C}^N_{\mathrm{reg}}$ then there is no change. Integrability means that $\partial_i(\kappa L(x)A_j(x)) = \partial_j(\kappa L(x)A_i(x))$ for $i\ne j$, writing the system as $\partial_iL(x) = \kappa L(x)A_i(x)$ ($A_i$ is defined by equation (3.1)). The condition becomes, for $\{i,k\}\cap\{j,\ell\} = \emptyset$, an identity whose terms involving the 3-cycles $(i,j,k)$ and $(j,i,k)$ occur in pairs (because $(i,j)(j,k) = (i,k)(j,i) = (i,j,k)$ and $(i,k)(j,k) = (j,i,k)$) and the latter two terms are symmetric in $i,j$. We consider only fundamental solutions, that is, $\det L(x)\ne0$. Recall Jacobi's identity $\frac{d}{dt}\det F(t) = \mathrm{tr}\big(\mathrm{adj}(F(t))\,F'(t)\big)$, where $F(t)$ is a differentiable matrix function and $\mathrm{adj}(F(t))$ is the adjugate, satisfying $F(t)\,\mathrm{adj}(F(t)) = \det F(t)\,I$. This can be solved: from $\sum_{i<j}\tau((i,j)) = S_1(\tau)I$ it follows that $\mathrm{tr}(\tau((i,j))) = \binom{N}{2}^{-1}S_1(\tau)n_\tau = \frac{2}{N-1}\gamma n_\tau$ (and $n_\tau = \#\mathcal{Y}(\tau)$). We obtain the determinant formula (with the principal branch of the power function, positive on positive reals). This implies $\det L(x)\ne0$ for $x\in\mathbb{C}^N_{\mathrm{reg}}$ (and of course $\det L(x)$ is homogeneous of degree zero).
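Jacobi's identity can be verified concretely; the following Python sketch (an added illustration) checks $\frac{d}{dt}\det F(t) = \mathrm{tr}(\mathrm{adj}(F(t))F'(t))$ for a hand-picked $2\times2$ polynomial family, where the adjugate is explicit.

```python
def jacobi_residual(t):
    # sample family F(t) = [[1+t, t^2], [2t, 3-t]]
    F  = [[1 + t, t * t], [2 * t, 3 - t]]
    dF = [[1.0, 2 * t], [2.0, -1.0]]                 # entrywise derivative F'(t)
    # adjugate of a 2x2 matrix: adj([[a,b],[c,d]]) = [[d,-b],[-c,a]]
    adj = [[F[1][1], -F[0][1]], [-F[1][0], F[0][0]]]
    # det F(t) = (1+t)(3-t) - 2t^3, so d/dt det F = 2 - 2t - 6t^2
    lhs = 2 - 2 * t - 6 * t * t
    rhs = sum(sum(adj[i][k] * dF[k][i] for k in range(2)) for i in range(2))
    return lhs - rhs        # zero (up to rounding) for every t
```

The residual vanishes identically, which is the mechanism behind the closed-form expression for $\det L(x)$.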
Henceforth we use $L(x)$ to denote the solution of (3.1) in $C_0$ which satisfies $L(x_0) = I$.
Proof. Consider the solution $L(xw_0)\tau(w_0)^{-1}$, which agrees with $\Xi L(x)$ for all $x\in C_0$ for some fixed matrix $\Xi$. Because of its frequent use denote $\upsilon := \tau(w_0)$ (the letter $\upsilon$ occurs in the Greek word for cycle). Definition 3.3. For $w\in S_N$ set $\nu(w) := \upsilon^{1-w(1)}$. For any $x\in\mathbb{T}^N_{\mathrm{reg}}$ there is a unique $w_x$ such that $w_x(1) = 1$ and $xw_x^{-1}\in C_0$. Set $M(w,x) := \nu(w_xw)$.
This completes the proof.
We can now extend L(x) to all of T N reg from its values on C 0 .
Proposition 3.7. For any $x\in\mathbb{T}^N_{\mathrm{reg}}$ and $w\in S_N$

Proof. Let $w_1 = w_{xw}$, that is, $w_1(1) = 1$ and $(xw)w_1^{-1}\in C_0$.

The adjoint operation on Laurent polynomials and L(x)
The purpose is to define an operation which agrees with taking complex conjugates of functions and Hermitian adjoints of matrix functions when restricted to T N , and which preserves analyticity. The parameter κ is treated as real in this context even where it may be complex (to preserve analyticity in κ).
Loosely speaking, $F^*(x)$ is obtained by replacing $x$ by $\phi x$, conjugating the complex constants and transposing. The fundamental chamber $C_0$ is mapped by $\phi$ onto a chamber of the same type. Transposing this system (note $\tau(w)^T = \tau(w)^{-1}$), using part (3) of Definition 3.8, we set up the system whose solution satisfying $L^*(x_0) = I$ is denoted by $L^*(x)$. The constants in the system are all real, so replacing complex constants by their complex conjugates preserves solutions of the system. The effect is that $L^*(x)$ agrees with the Hermitian adjoint of $L(x)$ for $x\in C_0$ (for real $\kappa$). The goal here is to establish conditions on a constant Hermitian matrix $H$ so that $K(x) := L^*(x)HL(x)$ has desirable properties, such as $K(xw) = \tau(w)^{-1}K(x)\tau(w)$ and $K(x) > 0$ (i.e., positive definite). Similarly to the above, $\tau((i,j))L^*(x(i,j))$ is also a solution of (3.2), implying that $\tau(w)L^*(xw)$ is a solution for any $w\in S_N$ (by an inductive step). In analogy to $L$, for $x\in\mathbb{T}^N_{\mathrm{reg}}$ and the same $w_x$ as above, $L^*$ is extended from $C_0$. For any nonsingular constant matrix $C$ the function $CL(x)$ also satisfies (3.1). This formulation can be slightly generalized by replacing $C^*C$ by a Hermitian matrix $H$ (not necessarily positive-definite) without changing the equation.
For the purpose of realizing the form (2.1) we want $K$ to satisfy an additional condition, which is now added to the hypotheses, summarized here:

Integration by parts
In this section we establish the relation between the differential system and the abstract relation (2.1). We demonstrate how close $L$ is to providing the desired inner product, by performing an integration by parts over an $S_N$-invariant closed set $\Omega_\delta\subset\mathbb{T}^N_{\mathrm{reg}}$. Here $L(x)$ and $H$ satisfy the hypotheses listed in Condition 3.9 above. We use the identity (4.1). The set $\Omega_\delta$ is invariant under $S_N$ and $K(x)$ is bounded and smooth on it. Thus the following integrals exist.
Proof. By definition $x(i,j)$ interchanges $x_i$ and $x_j$, and $x_i - x_j$ changes sign under this transformation. Thus the corresponding terms cancel for each $j\ne i$, because $\Omega_\delta$ and $dm$ are invariant under $(i,j)$.
Observe the value of $\kappa$ is not involved in the proof. Since $x_j\partial_j = -\mathrm{i}\frac{\partial}{\partial\theta_j}$ when $x_j = e^{\mathrm{i}\theta_j}$, and $dm(x) = (2\pi)^{-N}d\theta_1\cdots d\theta_N$, one step of integration can be directly evaluated. Consider the case $i = N$: for a fixed $(N-1)$-tuple $(\theta_1,\dots,\theta_{N-1})$ with $\theta_1 < \theta_2 < \cdots < \theta_{N-1} < \theta_1 + 2\pi$ such that $|e^{\mathrm{i}\theta_j} - e^{\mathrm{i}\theta_i}|\ge\delta$, the integral over $\theta_N$ is over a union of closed intervals, the complement of the excluded neighborhoods. This results in an alternating sum of values of $f^*Kg$ at the end-points of the closed intervals. Analyzing the resulting integral (over $(\theta_1,\dots,\theta_{N-1})$ with respect to $d\theta_1\cdots d\theta_{N-1}$) is one of the key steps in showing that a given $K$ provides the desired inner product. In other parts of this paper we find that $H$ must satisfy another commuting relation.

Local power series near the singular set
In this section assume $\kappa\notin\mathbb{Z}+\frac12$. We consider the system (3.1) in a neighborhood of the face $\{x : x_{N-1} = x_N\}$. We use a coordinate system $x(u,z)$ which treats the singularity in a simple way. We consider the system in terms of the variable $x(u,z)$ subject to the conditions that the points $x_1, x_2, \dots, x_{N-2}, u$ are pairwise distinct and $|z| < \min_{1\le j\le N-2}|x_j - u|$, also $|z| < |u|$ and $\mathrm{Im}\,\frac{z}{u} > 0$ (these conditions imply $\arg(u-z) < \arg(u+z)$). This allows power series expansions in $z$.
Note $\sigma B_n(x(u,0))\sigma = (-1)^{n+1}B_n(x(u,0))$. Suggested by this relation we look for a solution in series form in powers of $z$, where each coefficient $\alpha_n(x(u,0))$ is matrix-valued and analytic in $x(u,0)$, and the initial condition is $\alpha_0(x^{(0)}) = I$. The equations for $\partial_u$ and $\partial_j$ simplify accordingly. We only need the equations for $\alpha_0(x(u,0))$ (that is, the coefficient of $z^0$) to initialize the $\partial_z$ equation (this is valid because the system is Frobenius integrable). Proof. By hypothesis $\alpha_0(x^{(0)}) = I$. The right-hand sides of the system are invariant under the transformation $Q\mapsto\sigma Q\sigma$, thus $\alpha_0(x(u,0))$ and $\sigma\alpha_0(x(u,0))\sigma$ satisfy the same system. They agree at the base-point $x^{(0)}$, hence everywhere in the domain. By Jacobi's identity the determinant satisfies a first-order equation (where $\lambda := \mathrm{tr}(\sigma) = n_\tau - 2m_\tau$); the multiplicative constant follows from $\alpha_0(x^{(0)}) = I$. Thus $\alpha_0(x(u,0))$ is nonsingular in its domain.
Henceforth denote the series (5.1), solving (3.1) with the normalization $\alpha_0(x^{(0)}) = I$, by $L_1(x)$. It is defined for all $x(u,z)\in C_0$ subject to $|z| < \min_{1\le j\le N-2}|x_j - u|$, also $|z| < |u|$ and $\mathrm{Im}\,\frac{z}{u} > 0$. The radius of convergence depends on $x(u,0)$. Return to using $L(x)$ to denote the solution normalized by $L(x_0) = I$. At $x_0$ one has $\min_{1\le j\le N-2}|x_j - u| = \sin\frac{\pi}{N}\sqrt{5 + 4\cos\frac{2\pi}{N}}$ and $|z| = \sin\frac{\pi}{N}$ (also $\frac{z}{u} = \mathrm{i}\tan\frac{\pi}{N}$), so $x_0$ is in the domain of convergence of the series $L_1(x)$. Thus the relation $L_1(x) = L_1(x_0)L(x)$ holds in the domain of $L_1$ in $C_0$. This implies the important fact that $L_1(x_0)$ is an analytic function of $\kappa$, to be exploited in Section 9.

Behavior on boundary
The term $\rho(z^{-\kappa}, z^{\kappa})$ implies that $L_1(x)$ is not continuous at $z = 0$, that is, on the boundary $\{x : x_{N-1} = x_N\}$. However there may be a weak type of continuity, specifically $\lim_{z\to0}\big(K(x) - K(x(N-1,N))\big) = 0$.
With the aim of expressing the desired $K(x)$ in the form $L(x)^*C^*CL(x)$ (where $C$ is unknown at this stage) we consider $CL(x)$ in series form, that is, $CL_1(x_0)^{-1}L_1(x)$ (recall $\det L(x)\ne0$ in $C_0$). We analyze the effect of $C$ on the weak continuity condition. Abusing notation, denote $C := CL_1(x_0)^{-1}$.
The term of lowest order in $z$ in $K(x(u,z)) - K(x(u,-z))$ is computed in terms of the $\sigma$-block decomposition; it tends to zero as $z\to0$ if and only if $c_{12} = 0$, that is, $\sigma C^*C\sigma = C^*C$.

Bounds
In this section we derive bounds on $L(x)$ of global and local type. Throughout we adopt the normalization $L(x_0) = I$. The operator norm on $n_\tau\times n_\tau$ complex matrices is defined by $\|M\| = \sup\{|Mv| : |v| = 1\}$.
The proof is a series of steps starting with a general result which applies to matrix functions satisfying a linear differential equation in one variable.

Sufficient condition for the inner product property
In this section we will use the series $L_2$, which is a solution of (3.1) by Proposition 3.1 and has behavior analogous to $L_1$. We claim that the Hermitian matrix $H_2$, defined analogously, satisfies the corresponding commutation. There is a subtle change: the base point $x^{(0)} = (1,\omega,\dots,\omega^{N-2},\omega^{-3/2},\omega^{-3/2})$ is replaced by $(\omega,\dots,\omega^{N-2},\omega^{-3/2},\omega^{-3/2},1)$, and now $\omega x_0 = (\omega,\dots,\omega^{N-1},1)$ is in the domain of convergence of $L_2$. Set $x = \omega x_0$ in (7.2) to obtain the claim, because $H$ commutes with $\upsilon$ (and $L(\omega x_0) = L(x_0) = I$ by the homogeneity). Thus $H_2$ commutes with $\tau((N-2,N-1))$. From Theorem 6.1 we have a bound. Denote $K(x) = L(x)^*HL(x)$. We will show that there is an interval $-\kappa_1 < \kappa < \kappa_1$, where $\kappa_1$ depends on $N$, such that the inner product property holds. The integral is broken up into three pieces. The aim is to let $\delta\to0$, where $\delta$ satisfies the upper bound $\delta < \min\big\{\big(2\sin\frac{\pi}{N}\big)^2, \frac19\big\}$; the first term comes from the maximum spacing of $N$ points on $\mathbb{T}$ and the second is equivalent to $3\delta < \delta^{1/2}$. Also $\delta' := 2\arcsin\frac{\delta}{2}$. 1. On the first piece the integrand is bounded and the measure of the set is $O(\delta)$; the limit as $\delta\to0$ is zero by the dominated convergence theorem.
This is done with a detailed analysis using the double series from (6.4).
Thus the sum of the first-order terms is bounded. The second-last step is to relate $\alpha_0(x(u,0))$ to $L_1(\eta^{(1)})$. Similarly to (7.5), by Theorem 6.1 the first two groups of terms satisfy the bound $|x_i - x_j|\ge\delta^{1/2}$. Combining everything we obtain a bound whose constant is independent of $\eta^{(1)}$ and where the exponent on $\delta$ is $\frac12 - |\kappa|(N^2 - N + 2)$. Thus the integral of part (3) goes to zero as $\delta\to0$ if $|\kappa| < \frac12(N^2 - N + 2)^{-1}$. This is a crude bound, considering that we know everything works for $-1/h_\tau < \kappa < 1/h_\tau$, but as we will see, an open interval of $\kappa$ values suffices.
It is important that we can derive uniqueness of $H$ from the relation, because the conditions $\langle wf,wg\rangle = \langle f,g\rangle$, $\langle x_if,x_ig\rangle = \langle f,g\rangle$, and $\langle x_iD_if,g\rangle = \langle f,x_iD_ig\rangle$ for $w\in S_N$ and $1\le i\le N$ determine the Hermitian form uniquely up to multiplication by a constant. Thus the measure $K(x)dm(x)$ is similarly determined, by the density of Laurent polynomials.

The orthogonality measure on the torus
At this point there are two logical threads in the development. On the one hand there is a sufficient condition implying the desired orthogonality measure is of the form $L^*HL\,dm$: specifically, if $H$ commutes with $\upsilon$, $(L_1(x_0)^*)^{-1}HL_1(x_0)^{-1}$ commutes with $\sigma$, and $|\kappa| < \big(2(N^2-N+2)\big)^{-1}$. However we have not yet proven that $H$ exists. On the other hand, in [3] we showed that there does exist an orthogonality measure of the form $d\mu = d\mu_S + L^*HL\,dm$ where $\mathrm{spt}\,\mu_S\subset\mathbb{T}^N\setminus\mathbb{T}^N_{\mathrm{reg}}$, $H$ commutes with $\upsilon$, and $-1/h_\tau < \kappa < 1/h_\tau$ (the support of a Baire measure $\nu$, denoted by $\mathrm{spt}\,\nu$, is the smallest compact set whose complement has $\nu$-measure zero). In the next sections we will show that $(L_1(x_0)^*)^{-1}HL_1(x_0)^{-1}$ commutes with $\sigma$ and that $H$ is an analytic function of $\kappa$ in a complex neighborhood of this interval. Combined with the above sufficient condition this is enough to show that there is no singular part, that is, $\mu_S = 0$. The proof involves the formal differential equation satisfied by the Fourier-Stieltjes series of $\mu$, which is used to show $\mu_S = 0$ on $\{x\in\mathbb{T}^N : \#\{x_j\}_{j=1}^N\ge N-1\}$ (that is, where $x$ has at least $N-1$ distinct components). In turn this implies $(L_1(x_0)^*)^{-1}HL_1(x_0)^{-1}$ commutes with $\sigma$. The proofs unfortunately are not short. In the sequel $H$ refers to the Hermitian matrix in the formula for $d\mu$ and $K$ denotes $L^*HL$. Also $H$ is positive-definite since the measure $\mu$ is positive (else there exists a vector $v$ with $Hv = 0$, and then the $C^{(1)}(\mathbb{T}^N_{\mathrm{reg}}; V_\tau)$ function given by $f(x) := L(x)^{-1}vg(x)$, where $g$ is a smooth scalar nonnegative function with support in a sufficiently small neighborhood of $x_0$, has norm $\langle f,f\rangle = 0$, a contradiction). Thus $H$ has a positive-definite square root $C$ which commutes with $\upsilon$. Now extend $CL(x)$ from $C_0$ to all of $\mathbb{T}^N_{\mathrm{reg}}$ by Definition 3.6, so that $K(x) = L^*(x)C^*CL(x)$ for all $x\in\mathbb{T}^N_{\mathrm{reg}}$ (this follows from the fact that $K\,dm$ is the absolutely continuous part of the finite Baire measure $\mu$).
We will show that $(L_1(x_0)^*)^{-1}C^*CL_1(x_0)^{-1}$ commutes with $\sigma$. The proof begins by establishing a recurrence relation for the Fourier coefficients of $K(x)$, which comes from equation (3.3). For $F(x)$ integrable on $\mathbb{T}^N$, possibly matrix-valued, and $\alpha\in\mathbb{Z}^N$ let $F_\alpha := \int_{\mathbb{T}^N}x^{-\alpha}F(x)\,dm(x)$; if $x_i\partial_iF$ is also integrable then (integration by parts) $(x_i\partial_iF)_\alpha = \alpha_iF_\alpha$, equation (8.1). For a subset $J\subset\{1,2,\dots,N\}$ let $\varepsilon_J\in\mathbb{N}_0^N$ be defined by $(\varepsilon_J)_i = 1$ if $i\in J$ and $=0$ otherwise; also $\varepsilon_i := \varepsilon_{\{i\}}$. For $1\le i\le N$ let
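The integration-by-parts identity for Fourier coefficients, $(x_i\partial_iF)_\alpha = \alpha_iF_\alpha$ in the normalization used here, can be checked numerically in the one-variable case $N = 1$ (a sketch we add for illustration; for a Laurent polynomial the uniform Riemann sum computes Fourier coefficients exactly up to rounding).

```python
import cmath, math

coeffs = {2: 1.0, -1: 3.0, 0: 0.5}       # sample Laurent polynomial F(x) = sum c_k x^k

def F(th):
    return sum(c * cmath.exp(1j * k * th) for k, c in coeffs.items())

def xdF(th):
    # x F'(x) = sum k c_k x^k evaluated on x = e^{i theta}
    return sum(k * c * cmath.exp(1j * k * th) for k, c in coeffs.items())

def fourier(g, k, n=256):
    # F_k = (2 pi)^{-1} int e^{-i k theta} g(theta) dtheta, by a uniform grid sum
    return sum(g(2 * math.pi * j / n) * cmath.exp(-1j * k * 2 * math.pi * j / n)
               for j in range(n)) / n
```

Multiplying the coefficient of index $k$ by $k$ is exactly the effect of $x\frac{d}{dx}$, which is the $N = 1$ instance of (8.1).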

Equation (3.3) can be rewritten as
this is a polynomial relation which shows that $p_i(x)x_i\partial_iK(x)$ is integrable, and which has implications for the Fourier coefficients of $K$.
Proof. Multiply both sides of (8.2) by $x_i^{1-N}$; this makes the terms homogeneous of degree zero. Multiply the right side by $x^{-\alpha}dm(x)$ and integrate over $\mathbb{T}^N$. The sum is zero unless $\alpha\in\mathcal{Z}_N$, where $\mathcal{Z}_N := \{\alpha\in\mathbb{Z}^N : \sum_{j=1}^N\alpha_j = 0\}$, by the homogeneity. For the left side start with (8.1) applied to the integrand. Combining the two sides finishes the proof. If $\alpha\notin\mathcal{Z}_N$ then both sides are trivially zero.
This system of recurrences has the easy (and quite undesirable) solution $K_\alpha = I$ for all $\alpha\in\mathcal{Z}_N$ and $K_\alpha = 0$ otherwise. The right side becomes $2\kappa$ times the corresponding expression (an underlying assumption), and the left side corresponds to the measure $\frac{1}{2\pi}d\theta$ on the circle $\{e^{\mathrm{i}\theta}(1,\dots,1) : -\pi < \theta\le\pi\}$. Next we show that $\mu_\alpha := \int_{\mathbb{T}^N}x^{-\alpha}d\mu(x)$ satisfies the same recurrences. Proposition 5.2 of [3] asserts the corresponding identity for $\alpha,\beta\in\mathbb{N}_0^N$ with $\sum_{j=1}^N(\alpha_j - \beta_j) = 0$. The relation $\tau(w)^*\mu_{w\alpha}\tau(w) = \mu_\alpha$ is shown in [3, Theorem 4.4]. Introduce the formal Laurent series; the purpose of the definition is to produce a formal Laurent series satisfying the differential system. The ambiguity in the solution is removed by the second condition (note that $\sum_\alpha\big(B^{(i,j)}_\alpha - cI\big)x^\alpha$ also solves the first equation for any constant $c$).
Proof. Start with $\mu_\alpha\tau((i,j)) = \tau((i,j))\mu_{(i,j)\alpha}$ and the defining relations; subtract the second equation from the first. By two-sided induction the identity holds for all $s\in\mathbb{Z}$, in particular for $s = \alpha_j$, where the right-hand side vanishes by definition.
In the following there is no implied claim about convergence, because any term x α appears only a finite number of times in the equation.
Proof. Start by multiplying equation (8.5) by the indicated series; by construction

Thus the equation becomes
This completes the proof.

Maximal singular support
Above we showed that $\mu$ and $K$ satisfy the same Laurent series differential systems (8.2) and (8.6); thus the singular part $\mu_S$ also satisfies this relation. The singular part $\mu_S$ is the restriction of $\mu$ to $\mathbb{T}^N\setminus\mathbb{T}^N_{\mathrm{reg}}$. The set $T_{i,j}$ is an intersection of a closed set and an open set, hence $T_{i,j}$ is a Baire set and the restriction $\mu_{i,j}$ of $\mu$ to $T_{i,j}$ is a Baire measure. Informally $T_{i,j}$ consists of the points $x\in\mathbb{T}^N$ with $x_i = x_j$ and no other coincidences among the coordinates. We will prove that $\mu_{i,j} = 0$ for all $i\ne j$. That is, $\mu_S$ is supported by $\{x\in\mathbb{T}^N : \#\{x_k\}\le N-2\}$ (the number of distinct coordinate values is $\le N-2$). In [3, Corollary 4.15] there is an approximate identity $\sigma_n^{N-1}(x)$, which satisfies $\sigma_n^{N-1}(x)\ge0$ and $\sigma_n^{N-1}*\nu\to\nu$ as $n\to\infty$, in the weak-$*$ sense, for any finite Baire measure $\nu$ on $\mathbb{T}^N/\mathbb{D}$ (referring to functions and measures on $\mathbb{T}^N$ homogeneous of degree zero as Laurent series). The set $T_{i,j}$ is pointwise invariant under $(i,j)$, thus $d\mu_{i,j}(x) = d\mu_{i,j}(x(i,j)) = \tau((i,j))\,d\mu_{i,j}(x)\,\tau((i,j))$. Let $K_n^s = \sigma_n^{N-1}*\mu_S$ (convolution), a Laurent polynomial, fix $\ell$ with $1\le\ell\le N$, and consider the functionals $F_{\ell,n}$, $G_{\ell,n}$ on scalar functions $p\in C^{(1)}(\mathbb{T}^N/\mathbb{D})$. By construction the functionals annihilate $x^\alpha$ for $\alpha\notin\mathcal{Z}_N$. For a fixed $\alpha\in\mathcal{Z}_N$ the value involves the coefficients $b_n(\gamma)$ (from the Laurent series of $\sigma_n^{N-1}$) and $A_\gamma := \int_{\mathbb{T}^N}x^{-\gamma}d\mu_S$. For fixed $\alpha$ the coefficients $b_n(\cdot)\to1$ as $n\to\infty$ and the expression tends to the differential system (8.2). This result extends to any Laurent polynomial by linearity. From the approximate identity property the functionals are bounded by a constant $M$ depending on $\mu_S$, for Laurent polynomials $p$. By density of Laurent polynomials in $C^{(1)}(\mathbb{T}^N/\mathbb{D})$ ($\mathbb{D} = \{(u,u,\dots,u) : |u| = 1\}$; functions homogeneous of degree zero on $\mathbb{T}^N$ can be considered as functions on the quotient group $\mathbb{T}^N/\mathbb{D}$) we obtain the relation for all $p\in C^{(1)}(\mathbb{T}^N/\mathbb{D})$. In formula (8.7) (with $\ell = 1$) applied to $f$, the measure $\mu_S$ can be replaced with $\mu_{1,2}$.
Evaluate the derivative. Each term vanishes on $\bigcup_{i<j}\{x : x_i = x_j\}\setminus T_{1,2}$, and restricted to $T_{1,2}$ the value is given in terms of $f(x)$. The right-hand side of the formula reduces, since $d\mu_{1,2}(x)\tau((1,2)) = \tau((1,2))\,d\mu_{1,2}(x)$. Thus the integral is a matrix $F(f)$ such that $(I + 2\kappa\tau((1,2)))F(f) = 0$; applying this with factors of the form $\big(1 - x_jx_1^{-1}\big)$ shows that $\mu_{1,2} = 0$, since $E$ was arbitrarily chosen.

Boundary values for the measure
In this subsection we will show that $K$ satisfies the weak continuity condition $\lim\big(K(x) - K(x(N-1,N))\big) = 0$ at the faces of $C_0$, and then deduce that $H_1$ commutes with $\sigma$ (as described in Theorem 7.1). The idea is to use the inner product property of $\mu$ on functions supported in a small enough neighborhood of $x^{(0)} = (1,\omega,\dots,\omega^{N-3},\omega^{-3/2},\omega^{-3/2})$, where $\mu_S$ vanishes, so that only $K$ is involved, then argue that a failure of the continuity condition leads to a contradiction. Let $0 < \delta\le\frac{2\pi}{3N}$ and define the boxes $\Omega_\delta$, with $e^{\mathrm{i}\phi_0} = \omega^{-3/2}$. We consider the identity (4.1). The support hypothesis and the construction of $\Omega_\delta$ imply that $\Omega_\delta\cap(\mathbb{T}^N\setminus\mathbb{T}^N_{\mathrm{reg}})\subset T_{N-1,N}$ and thus $d\mu$ can be replaced by $K(x)dm(x)$ in the formula. Recall the general identity (4.1) involving $x(N-1,N)$.
Specialize to $\mathrm{spt}(f)\subset\Omega_\delta$ and $\mathrm{spt}(g)\subset\Omega_\delta$ and $x\in\Omega_\delta$; then only the $j = N-1$ term in the sum remains, and this term changes sign under $x\mapsto x(N-1,N)$.
Hence the hypotheses of Theorem 9.1 are satisfied, and there exists a nontrivial solution $B_1(\kappa)$ which is analytic in $|\kappa| < \frac12$. Since the Hermitian form is positive definite for $-1/h_\tau < \kappa < 1/h_\tau$ we can use the fact that $B_1(\kappa)$ is a multiple of a positive-definite matrix when $\kappa$ is real (in fact, of the matrix $H_1$ arising from $\mu$ as in Section 8) and its trace is nonzero (at least on a complex neighborhood of $\{\kappa : -1/h_\tau < \kappa < 1/h_\tau\}$, by continuity). Set $B_1(\kappa) := \big(\sum_{i=1}^{n_\tau}B_1(\kappa)_{ii}\big)^{-1}B_1(\kappa)$, analytic with $\mathrm{tr}(B_1(\kappa)) = 1$; thus the normalization produces a unique analytic (and Hermitian for real $\kappa$) matrix in the null-space of $M(\kappa)$. Let $H(\kappa) = L_1(x_0;\kappa)^*B_1(\kappa)L_1(x_0;\kappa)$; then for fixed $f,g\in C^{(1)}$ the difference is an analytic function of $\kappa$ which vanishes for $-b_N < \kappa < b_N$, hence for all $\kappa$ in $-1/h_\tau < \kappa < 1/h_\tau$; this condition is required for integrability. This completes the proof.
By very complicated means we have shown that the torus Hermitian form for the vector-valued Jack polynomials is given by the measure $L^*HL\,dm$. The orthogonality measure we constructed in [3] is absolutely continuous with respect to the Haar measure. We conjecture that $L^*(x;\kappa)H(\kappa)L(x;\kappa)$ is integrable for $-1/\tau_1 < \kappa < 1/\ell(\tau)$ but $H(\kappa)$ is not positive outside $|\kappa| < 1/h_\tau$ (the length of $\tau$ is $\ell(\tau) := \max\{i : \tau_i\ge1\}$). In as yet unpublished work we have found explicit formulas for $L^*HL$ for the two-dimensional representations $(2,1)$ and $(2,2)$ of $S_3$ and $S_4$ respectively, using hypergeometric functions. It would be interesting to find the normalization constant, that is, determine the scalar multiple of $H(\kappa)$ which results in $\langle1\otimes T, 1\otimes T\rangle_{H(\kappa)} = \langle T,T\rangle_0$ (see (2.1)), the "initial condition" for the form. In [3, Theorem 4.17(3)] there is an infinite series for $H(\kappa)$ but it involves all the Fourier coefficients of $\mu$.