Nonsymmetric Macdonald Superpolynomials

There are representations of the type-A Hecke algebra on spaces of polynomials in anti-commuting variables. Luque and the author [S\'em. Lothar. Combin. 66 (2012), Art. B66b, 68 pages, arXiv:1106.0875] constructed nonsymmetric Macdonald polynomials taking values in arbitrary modules of the Hecke algebra. In this paper the two ideas are combined to define and study nonsymmetric Macdonald polynomials taking values in the aforementioned anti-commuting polynomials, in other words, superpolynomials. The modules, their orthogonal bases and their properties are first derived. In terms of the standard Young tableau approach to representations these modules correspond to hook tableaux. The details of the Dunkl-Luque theory and the particular application are presented. There is an inner product on the polynomials for which the Macdonald polynomials are mutually orthogonal. The squared norms for this product are determined. By using techniques of Baker and Forrester [Ann. Comb. 3 (1999), 159-170, arXiv:q-alg/9707001] symmetric Macdonald polynomials are built up from the nonsymmetric theory. Here "symmetric" means in the Hecke algebra sense, not in the classical group sense. There is a concise formula for the squared norm of the minimal symmetric polynomial, and some formulas for anti-symmetric polynomials. For both symmetric and anti-symmetric polynomials there is a factorization when the polynomials are evaluated at special points.


Introduction
Nonsymmetric Macdonald polynomials [13] are simultaneous eigenfunctions of a set of mutually commuting operators derived from an action of the type-A Hecke algebra on the space of polynomials in N variables. They are significantly different from the symmetric Macdonald polynomials in the technique of their respective definitions, and yet Baker and Forrester [1] established a strong relation between them. In the analogous theory of nonsymmetric Jack polynomials Griffeth [11] constructed such polynomials which take values in modules of the underlying groups, specifically the complex reflection groups in the infinite family G(r, p, N). These polynomials constitute a standard module of the rational Cherednik algebra. Luque and the author [9] extended the theory of nonsymmetric Macdonald polynomials in the direction suggested by Griffeth's work by studying polynomials taking values in modules of the Hecke algebra. The development relies on exploiting standard Young tableaux and the Yang-Baxter graph technique of Lascoux [12].
The superpolynomials considered here are generated by N anti-commuting and N commuting variables. By defining representations of the Hecke algebra on anti-commuting variables the theory of vector-valued nonsymmetric Macdonald polynomials is applied to define and analyze superpolynomials. There is a theory of symmetric Macdonald superpolynomials initiated by Blondeau-Fournier, Desrosiers, Lapointe, and Mathieu [3] with further developments on norm and special point values by González and Lapointe [10]. Their approach and definitions are based on differential operators and linear combinations of the classical nonsymmetric Macdonald polynomials, whose coefficients involve anti-commuting variables. The theory developed in the present paper is different due to the method of using anti-commuting variables to form Hecke algebra modules.
Nonsymmetric Macdonald polynomials associated with general root systems were intensively studied by Cherednik [5]. By specializing to root systems of type A it becomes possible to develop more detailed relations, formulas and structure. In particular, the papers of Noumi and Mimachi [14] and of Baker and Forrester [1] provide important background for the present paper. Note that some authors use different axioms for the quadratic relation of the Hecke algebra, such as (T − t^{1/2})(T + t^{−1/2}) = 0, rather than (T − t)(T + 1) = 0.
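The two conventions are related by rescaling the generators; a minimal check, assuming the relation (T − t)(T + 1) = 0 used in this paper:
\[
\widetilde T := t^{-1/2}T \quad\Longrightarrow\quad \widetilde T^{\,2} = t^{-1}\bigl((t-1)T + t\bigr) = \bigl(t^{1/2}-t^{-1/2}\bigr)\widetilde T + 1,
\]
that is, \((\widetilde T - t^{1/2})(\widetilde T + t^{-1/2}) = 0\).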
The theory of Hecke algebras of type A and their representations is briefly described in Section 2 and then applied to modules of polynomials in anti-commuting variables. In general the irreducible representations are constructed as spans of standard Young tableaux whose shape corresponds to a fixed partition of N. In the present situation it is the hook tableaux which arise. The basis vectors are constructed and the important transformation formulas are stated. There is an inner product in which the generators of the Hecke algebra are self-adjoint, which leads to the evaluation of the squared norms of the basis elements.
In Section 3 the theory of vector-valued nonsymmetric Macdonald polynomials developed in [9] is applied to produce superpolynomials, considered as polynomials taking values in modules of anti-commuting variables. The main results are stated without proofs but some important details are carefully worked out. In [8] the author constructed an inner product in which the nonsymmetric Macdonald polynomials are mutually orthogonal, in the general vector-valued situation. This structure is worked out for the superpolynomials in Section 3.3 and the squared norms are computed. In Section 4 the techniques of Baker and Forrester [1] are used to produce supersymmetric Macdonald polynomials and their squared norms. From results of [9] the labels of these polynomials correspond to the superpartitions of Desrosiers, Lapointe, and Mathieu [6]. It has to be emphasized that in this paper the meaning of symmetric is with respect to the Hecke algebra, not the symmetric group. Also the squared norm of the lowest degree supersymmetric polynomial is determined; this formula is more elegant than the general one, and its calculation can use telescoping arguments for simplification. There is a derivation of formulas for antisymmetric Macdonald polynomials in Section 4.5. In the conclusion some further topics of investigation, such as evaluation at special points, are discussed.
2 The Hecke algebra of type A

Definitions and Jucys-Murphy elements
The Hecke algebra H_N(t) of type A_{N−1} with parameter t is the associative algebra over an extension field of Q, generated by T_1, . . . , T_{N−1} subject to the braid relations (2.1b) and the quadratic relations (T_i − t)(T_i + 1) = 0, where t is a generic parameter (this means t^n ≠ 1 for 2 ≤ n ≤ N). The quadratic relation implies T_i^{-1} = t^{-1}(T_i + 1 − t). There is a commutative set in H_N(t) of Jucys-Murphy elements defined by ω_N = 1 and ω_i = t^{-1} T_i ω_{i+1} T_i for 1 ≤ i < N. Simultaneous eigenvectors of {ω_i} form bases of irreducible representations of the algebra. The symmetric group S_N is the group of permutations of {1, 2, . . . , N} and is generated by the simple reflections (adjacent transpositions) {s_i : 1 ≤ i < N}, where s_i interchanges i, i + 1 and fixes the other points (the s_i satisfy the braid relations and s_i^2 = 1). There is a linear isomorphism ZS_N → H_N(t) given by Σ_{u∈S_N} a_u u ↦ Σ_{u∈S_N} a_u T(u), where T(u) = T_{i_1} · · · T_{i_ℓ} with u = s_{i_1} · · · s_{i_ℓ} being a shortest expression for u (in fact ℓ = #{(i, j) : i < j, u(i) > u(j)}); T(u) is well-defined because of the braid relations (see [7]).
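For reference, a standard summary of the relations being invoked, assuming the conventions stated above (quadratic relation (T_i − t)(T_i + 1) = 0):
\begin{align*}
&T_iT_{i+1}T_i = T_{i+1}T_iT_{i+1}, \qquad T_iT_j = T_jT_i \quad (|i-j|\ge 2),\\
&T_i^{\,2} = (t-1)T_i + t, \qquad\text{hence}\qquad T_i^{-1} = t^{-1}(T_i + 1 - t),\\
&\omega_N = 1,\qquad \omega_i = t^{-1}T_i\omega_{i+1}T_i = t^{\,i-N}\,T_iT_{i+1}\cdots T_{N-1}\,T_{N-1}\cdots T_{i+1}T_i .
\end{align*}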
There is a symmetric bilinear form on P which is positive-definite for t > 0 and in which T_i is self-adjoint for 1 ≤ i < N. The purpose of the form is to make the simultaneous eigenvectors of {ω_i} mutually perpendicular. The form is defined on the basis elements indexed by E and extended to P by linearity.
Proof. This follows from ω_N = 1 and the recurrence ω_i = t^{-1} T_i ω_{i+1} T_i.

There are two degree-changing linear maps which commute with the Hecke algebra action.
It is clear that this term is canceled out in MD + DM.

Representations of H_N(t)
These representations correspond to partitions of N, namely λ = (λ_1, . . . , λ_N) ∈ N_0^N with λ_1 ≥ λ_2 ≥ · · · ≥ λ_N and Σ_{i=1}^{N} λ_i = N. The length of λ is ℓ(λ) = max{i : λ_i ≥ 1}. There is a graphical device to picture λ, called the Ferrers diagram, which has boxes at {(i, j) : 1 ≤ i ≤ ℓ(λ), 1 ≤ j ≤ λ_i} (integer points). A reverse standard Young tableau (RSYT) is a filling of the Ferrers diagram with the numbers {1, 2, . . . , N} such that the entries decrease in each row and in each column. The relevant representation of H_N(t) is defined on the span of the RSYT's of shape λ by explicit transformation formulas, where Y denotes an RSYT of shape λ. In the present work only hook tableaux will occur, namely partitions of the form λ = (N − n, 1^n) (the part 1 is repeated n times), so that ℓ(λ) = n + 1.
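As a small illustration (an example supplied here, assuming the content convention c = column − row and the eigenvalue rule ω_i v_Y = t^{c(i,Y)} v_Y used below, with v_Y denoting the basis vector indexed by Y): for N = 5 and the hook λ = (3, 1, 1), the filling with 5, 3, 1 in row 1 and 4, 2 below the corner is an RSYT, the boxes holding 5, 3, 1, 4, 2 have contents 0, 1, 2, −1, −2, and
\[
\omega_5 v_Y = v_Y,\qquad \omega_3 v_Y = t\,v_Y,\qquad \omega_1 v_Y = t^{2} v_Y,\qquad \omega_4 v_Y = t^{-1} v_Y,\qquad \omega_2 v_Y = t^{-2} v_Y .
\]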
We will show that P_m is a direct sum of the H_N(t)-modules corresponding to (N − m, 1^m) and (N − m + 1, 1^{m−1}). Here is a structure for labeling the φ_E of interest. These sets are associated to RSYT's of shape (N − m, 1^m) and (N − m + 1, 1^{m−1}) respectively, and this correspondence will be used to define content vectors for E. This leads to Theorem 2.14.
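A dimension count consistent with this decomposition (a routine check, added for orientation; the number of RSYT's of a hook shape is a binomial coefficient):
\[
\dim P_m = \binom{N}{m},\qquad \#\mathrm{RSYT}\bigl(N-m,1^{m}\bigr) = \binom{N-1}{m},\qquad \#\mathrm{RSYT}\bigl(N-m+1,1^{m-1}\bigr) = \binom{N-1}{m-1},
\]
and indeed \(\binom{N-1}{m} + \binom{N-1}{m-1} = \binom{N}{m}\).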
and T_j p_{j+1} = p_j. To set up an induction argument let U_{N−m−1} = T_{N−m−1} and U_{i+1} = T_{i+1} U_i for i < N − 1. We claim a formula for the action of these operators; at the start of the induction the claim holds, as is to be shown. We also need a further identity. Turning to the isotype (N − m + 1, 1^{m−1}), let E_1 := {1, 2, . . . , m − 1} ∈ Y_1. Then T_j p_j = p_{j+1}, so that T_{N−1} T_{N−2} · · · T_{m−1} p_{m−1} = p_N. Also T_j p_{j+1} = t p_j + (t − 1) p_{j+1} and T_j p_i = t p_i for i > j + 1. By induction we prove the stated formula: it is valid for i = N − 1, and assuming it is true for i, apply T_{i−1} to both sides; then the first term becomes t^{N−i}(p_{i−1} + (t − 1) p_i) and the second term is multiplied by t. Substituting Mφ_F = Σ_{j=m−1}^{N} p_j in the formula with i = m − 1 completes the computation.

Steps
Having found two polynomials which are simultaneous {ω_i}-eigenfunctions, we describe the method for constructing one for each label; the construction is well defined by the braid relations.

Proof. If j > i + 1 or j < i then ω_j T_i = T_i ω_j and thus ω_j g = λ_j g. The cases j = i, i + 1 follow from (2.3).

Given the hypotheses of the proposition and the self-adjointness of ω_i (Corollary 2.8) it follows that ⟨f, g⟩ = 0.

Proof. The transformation rules follow from Definition 2.3 and DT_i = T_i D (see Proposition 2.10).
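A minimal sketch of the commutation relations of this type, derived from ω_i = t^{-1}T_iω_{i+1}T_i and T_i^2 = (t − 1)T_i + t (offered as a plausible reading of the relations cited as (2.3)):
\begin{align*}
\omega_iT_i &= t^{-1}T_i\omega_{i+1}T_i^{\,2} = t^{-1}T_i\omega_{i+1}\bigl((t-1)T_i + t\bigr) = (t-1)\omega_i + T_i\omega_{i+1},\\
T_i\omega_i &= t^{-1}T_i^{\,2}\omega_{i+1}T_i = (t-1)\omega_i + \omega_{i+1}T_i,
\end{align*}
while ω_jT_i = T_iω_j for j ≠ i, i + 1, as used in the proof above.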
Suppose for some n and for each E ∈ Y_0 with inv(E) = n there is a polynomial τ_E such that ω_i τ_E = t^{c(i,E)} τ_E for all i; then this property holds for n + 1.
Proof. The existence follows from induction starting with τ_{E_0} = ψ_{E_0} and Theorem 2.14. Uniqueness follows from the leading term. The {ω_i}-eigenvalues of τ_E determine E uniquely.

Isotype (N − m + 1, 1^{m−1})
This concerns the polynomials in P_{m,1} = ker M ∩ P_m = M P_{m−1}.
Proof. The transformation rules follow from Definition 2.3 and MT_i = T_i M.
Theorem 2.30. Suppose for some n and for each E ∈ Y_1 with inv(E) = n there is a polynomial τ_E such that ω_i τ_E = t^{c(i,E)} τ_E for all i; then this property holds for n − 1.
Proof. The existence follows from induction starting with τ_{E_1} = Mφ_{E_1} = η_{E_1} and Theorem 2.15. Uniqueness follows from the leading term. The eigenvalues of τ_E determine E uniquely.
Proof . This has the same proof as Corollary 2.24.
In the product for |τ_F|^2 the factors for the corresponding pairs agree; the extra factor in the product for |τ_E|^2 has the desired value.

Isomorphisms
This section concerns the action of the maps M , D on the irreducible H N (t)-modules. The following is a version of Schur's lemma for irreducible representations.
Proof. The argument is based on orthogonal bases defined in the previous sections. By hypothesis V_1 has an orthogonal basis consisting of {ω_i}-eigenfunctions. The image of this basis under µ has the same property. For a typical basis element this follows from T_i being self-adjoint and ⟨f, g⟩ = 0. By hypothesis ω_j µf = λ_j µf for all j and µg = (T_i + b)µf satisfies ω_i µg = λ_{i+1} µg, ω_{i+1} µg = λ_i µg. By Lemma 2.17, ‖µg‖^2 = (1 − b)(t + b)‖µf‖^2, and so γ := ‖µg‖^2/‖g‖^2 = ‖µf‖^2/‖f‖^2. By the step constructions ‖µf‖^2/‖f‖^2 = γ holds for every basis vector of V_1.
Proof. M and D commute with each T_i and hence with each ω_i. Furthermore if f ∈ P_{m,0} then (MD + DM)f = DMf = [N]_t f (by Proposition 2.11) and thus M is one-to-one on P_{m,0}. Similarly if g ∈ P_{m+1,1} then [N]_t g = (MD + DM)g = MDg and D is one-to-one. By the lemma there are constants γ_1, γ_2 such that ‖Mf‖^2 = γ_1‖f‖^2 and ‖Dg‖^2 = γ_2‖g‖^2. From DMf = [N]_t f it follows that [N]_t^2‖f‖^2 = ‖DMf‖^2 = γ_2‖Mf‖^2 = γ_1γ_2‖f‖^2, hence γ_1γ_2 = [N]_t^2.

Operators on polynomials
The following presents the key concepts for our constructions: the definition of the action of H_N(t) on superpolynomials and the ingredients necessary to define the Cherednik operators whose simultaneous eigenvectors are the nonsymmetric Macdonald superpolynomials. Here we extend the polynomials in {θ_i} by adjoining N commuting variables x_1, . . . , x_N (and recall λ ∈ N_0^{N,+} if and only if λ_1 ≥ λ_2 ≥ · · · ≥ λ_N). The fermionic degree of a monomial x^α φ_E is #E and the bosonic degree is |α| := Σ_{i=1}^{N} α_i. Let sP_m := span{x^α φ_E : α ∈ N_0^N, #E = m}. Then, using the decomposition P_m = P_{m,0} ⊕ P_{m,1}, let sP_{m,0} and sP_{m,1} denote the corresponding subspaces of sP_m. The Hecke algebra H_N(t) is represented on sP_m. This allows us to apply the theory of nonsymmetric Macdonald polynomials taking values in H_N(t)-modules (see [8, 9]).
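As a small illustration of the grading (an example supplied here, not from the text): the θ_i anti-commute, θ_iθ_j = −θ_jθ_i and θ_i^2 = 0, while the x_i commute, so with N = 4
\[
p = x_1^{2}x_3\,\theta_1\theta_4 \in sP_2,\qquad \text{fermionic degree} = \#\{1,4\} = 2,\qquad \text{bosonic degree} = |(2,0,1,0)| = 3 .
\]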
Definition 3.1 specifies T_i p for p ∈ sP_m and 1 ≤ i < N; note that T_i acts on the θ variables according to Definition 2.3.
We also use an operator w. The operators ξ_i are Cherednik operators, defined by Baker and Forrester [1] (see Braverman et al. [4] for the significance of these operators in double affine Hecke algebras). They mutually commute (the proof in the vector-valued situation is in [9, Theorem 3.8]). Their key properties lead to the following: there is a basis of sP_m consisting of simultaneous eigenvectors of {ξ_i}, and these are the nonsymmetric Macdonald superpolynomials (henceforth abbreviated to "NSMP").
Suppose p(θ) is independent of x; then the action of T_i on p reduces to that of Definition 2.3 and ξ_i p = ω_i p, that is, ξ_i agrees with ω_i on polynomials of bosonic degree 0. Also wT_{i+1} = T_i w, and a similar commutation holds for j > i + 1.

Properties of nonsymmetric Macdonald polynomials
They have a triangularity property with respect to a partial order ▷ on the compositions N_0^N, which is derived from the dominance order. The rank function on compositions is involved in the formula for an NSMP.
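For reference, the standard type-A definitions (stated under the assumption that this paper follows the conventions of [1, 9]): for α, β ∈ N_0^N with |α| = |β|,
\[
\alpha \succ \beta \iff \sum_{j=1}^{i}\alpha_j \ge \sum_{j=1}^{i}\beta_j \ (1\le i\le N),\ \alpha\ne\beta;\qquad
\alpha \rhd \beta \iff \alpha^{+}\succ\beta^{+}\ \text{or}\ \bigl(\alpha^{+}=\beta^{+}\ \text{and}\ \alpha\succ\beta\bigr),
\]
and the rank function is
\[
r_\alpha(i) = \#\{j : \alpha_j > \alpha_i\} + \#\{j\le i : \alpha_j = \alpha_i\},\qquad 1\le i\le N,
\]
so that r_α α = α^+, as used below.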

where v_{α,β,E}(θ; q, t) ∈ P_{m,k} and its coefficients are rational functions of q, t. Note that the leading term involves R_α τ_E(θ), with R_α ∈ H_N(t) acting on the fermionic variables. The explanation for the exponents e(α^+) and b(α) is in Proposition 3.10 below. The relations (2.3) hold when ω_i is replaced by ξ_i, and this leads to the following, which has the same proof as Proposition 2.16:

Proposition 3.6. Suppose ξ_j f = λ_j f for 1 ≤ j ≤ N (f ≠ 0 and f ∈ sP_m) and g := T_i f + (t − 1)/(λ_{i+1}/λ_i − 1) f; then ξ_j g = λ_j g for all j ≠ i, i + 1 and ξ_i g = λ_{i+1} g, ξ_{i+1} g = λ_i g. If λ_{i+1} = t^{±1} λ_i then g = 0.
This together with a degree-raising operation provides the method for constructing the Macdonald polynomials.
Suppose α ∈ N_0^N, E ∈ Y_0 ∪ Y_1 and α_i = α_{i+1}; then let z = ζ_{α,E}(i + 1)/ζ_{α,E}(i). In all cases ζ_{α,s_iE} = s_i ζ_{α,E}. The above equations are implicit formulas for T_i M_{α,E}. Formula (3.3) is the same as that for the scalar case, as in [1, 14].
Here is a brief discussion of the effect of T_i on x^α R_α τ_E for the c_{α,E} = 1 cases. For α ∈ N_0^N let inv(α) := #{(i, j) : i < j, α_i < α_j}; then r_α α = α^+ and r_α = s_{i_1} · · · s_{i_ℓ}, where ℓ = inv(α). Recall R_α = T^{-1}_{i_ℓ} · · · T^{-1}_{i_1}, and the value of R_α is independent of the chosen expression for r_α of length ℓ. Suppose α_i < α_{i+1}; then inv(α) = inv(s_iα) + 1. Write s_iα = r^{-1}_{s_iα} α^+ and r_{s_iα} = s_{i_1} · · · s_{i_{ℓ'}} with ℓ' = inv(s_iα). Thus r^{-1}_α = s_i s_{i_1} · · · s_{i_{ℓ'}} and R_α = T^{-1}_i R_{s_iα}, and so T_i x^α R_α τ_E = x^{s_iα} R_{s_iα} τ_E + p(x; θ), where p is a sum of terms x^β p'(θ) with s_iα ▷ β.
Assuming the existence of the nonsymmetric Macdonald polynomials M_{α,E}, the argument for showing that D_i f is a polynomial is the following. Replace g_{12}(θ) by (θ_1θ_2θ_3 + θ_1θ_2θ_4) for the P_{3,1} version (by applying M).

Symmetric bilinear form
In this section we define an inner product (symmetric bilinear form) on sP_m in which T_i, ξ_i are self-adjoint, the Macdonald polynomials are pairwise orthogonal, and which is positive-definite for t, q > 0, q ≠ 1 and min(q^{1/N}, q^{−1/N}) < t < max(q^{1/N}, q^{−1/N}). The background and proofs for this section are in [8]. The hypotheses ⟨T_i f, g⟩ = ⟨f, T_i g⟩ for 1 ≤ i < N and ⟨ξ_N f, g⟩ = ⟨f, ξ_N g⟩ already imply that ⟨ξ_i f, g⟩ = ⟨f, ξ_i g⟩ for all i, since ξ_i = t^{−1} T_i ξ_{i+1} T_i, and thus ⟨M_{α,E}, M_{β,F}⟩ = 0 if (α, E) ≠ (β, F) (at least one different {ξ_i}-eigenvalue). Denote ⟨f, f⟩ = ‖f‖^2, even if possibly nonpositive. The aim is to determine a formula for ‖M_{α,E}‖^2 which, other than leading coefficients q^* t^*, involves only linear factors of the form 1 − q^a t^b.

Proposition 3.15. Suppose there is a symmetric bilinear form on sP in which each T_i and ξ_i is self-adjoint, and suppose E ∈ Y_0 ∪ Y_1, α ∈ N_0^N and α_i < α_{i+1} for some i; then the corresponding relation between ‖M_{α,E}‖^2 and ‖M_{s_iα,E}‖^2 holds.

Proof. This is the same argument used in Lemma 2.17.
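The orthogonality assertion is the usual eigenvalue argument; a minimal sketch under the stated self-adjointness hypotheses:
\[
\xi_i f = \lambda_i f,\quad \xi_i g = \mu_i g,\quad \lambda_i \ne \mu_i
\ \Longrightarrow\
\lambda_i\langle f,g\rangle = \langle \xi_i f, g\rangle = \langle f, \xi_i g\rangle = \mu_i\langle f,g\rangle
\ \Longrightarrow\ \langle f,g\rangle = 0,
\]
applied with f = M_{α,E} and g = M_{β,F}, whose spectral vectors differ in at least one coordinate when (α, E) ≠ (β, F).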
We introduce a product for expressing ‖M_{α,E}‖^2 in terms of ‖M_{α^+,E}‖^2.
There are inv(α) terms in the product. The next proposition assumes the same hypotheses on the bilinear form.
Proof. With the same argument as in Proposition 2.26 one shows that α_i < α_{i+1} implies the stated relation.

Another hypothesis is required to define the inner product for all polynomials, starting with bosonic degree 0 (M_{0,E} = τ_E). The approach of making D_i the adjoint of multiplication by x_i, or of making an isometry out of the latter (torus norm), as is done in the Jack polynomial situation, does not work here without a modification.
It follows from ξ_i = t^{−1} T_i ξ_{i+1} T_i that ⟨ξ_i f, g⟩ = ⟨f, ξ_i g⟩ for all i. The reason for the factor (1 − q) is to allow the limit as t → 1 when q = t^{1/κ}, to obtain nonsymmetric Jack polynomials.
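To see why dividing by (1 − q) produces finite limits of this kind, consider a generic factor (an illustrative computation, not a formula from the text): with q = t^{1/κ},
\[
\lim_{t\to 1}\frac{1 - q^{a}t^{b}}{1-q} = \lim_{t\to 1}\frac{1 - t^{a/\kappa + b}}{1 - t^{1/\kappa}} = \frac{a/\kappa + b}{1/\kappa} = a + \kappa b .
\]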
In [8] hypothesis (3.6) is stated in an equivalent form, and this expression follows from w = t^{N−1} T^{-1}_{N−1} · · · T^{-1}_2 T^{-1}_1 ξ_1. Next we use hypothesis (3.6) to relate norms for polynomials of different bosonic degrees.
Proof. In (3.6) set g = M_{α,E} and f = M_{Φα,E}; then (1 − q)⟨f, x_N wg⟩ = (1 − q)‖M_{Φα,E}‖^2, while the other side of (3.6) is expressed in terms of ‖M_{α,E}‖^2. With this formula and Proposition 3.17 we can use induction to find ‖M_{α,E}‖^2 for any α. The first step uses α = 0 and any E ∈ Y_0 ∪ Y_1, where M_{0,E} = τ_E with known spectral vector. The argument for establishing the formula for ‖M_{λ,E}‖^2, where λ ∈ N_0^{N,+}, uses the following steps, starting with the assumption that λ_k ≥ 1 and λ_j = 0 for k < j ≤ N. Throughout E is fixed. In the resulting formula we use a slightly different expression. As Griffeth [11] pointed out, there is not much cancellation between successive terms in general; there is a certain amount for the extreme cases T_i f = tf for all i or T_i f = −f for all i, symmetric or antisymmetric, respectively. (The meaning of symmetric here is not the same as for the symmetric group situation, as will be shown by example.)

There are two approaches to producing symmetric polynomials. One way is to identify a set of M_{α,E} which is closed under the steps f → (T_i + b)f of the type described in Proposition 3.6 and then to apply the symmetry conditions to a linear combination of these polynomials with undetermined coefficients. The other way is to apply a symmetrization operator to one polynomial. The original idea for these approaches comes from Baker and Forrester [2]. It is a consequence of the transformation rules that if (β, F) is in the same orbit as (α, E) then the spectral vector ζ_{β,F} is a permutation of ζ_{α,E}. Furthermore M(α, E) is an H_N(t)-module.

In [6] the authors defined a superpartition with N parts and fermionic degree m as an N-tuple (Λ_1, . . . , Λ_m; Λ_{m+1}, . . . , Λ_N) which satisfies Λ_1 > Λ_2 > · · · > Λ_m and Λ_{m+1} ≥ Λ_{m+2} ≥ · · · ≥ Λ_N. Suppose λ ∈ N_0^{N,+}, E ∈ Y_0 and (λ, E) is column strict; then Λ_i = (λ, E)[m + 2 − i, 1] for 1 ≤ i ≤ m and Λ_i = (λ, E)[1, N + 1 − i] for m + 1 ≤ i ≤ N, and also Λ_m > Λ_N. Alternatively suppose λ ∈ N_0^{N,+}, E ∈ Y_1 and (λ, E) is column strict; then Λ_i = (λ, E)[m + 1 − i, 1] for 1 ≤ i ≤ m and Λ_i = (λ, E)[1, N + 2 − i] for m + 1 ≤ i ≤ N, and also Λ_m ≤ Λ_N (because Λ_m = (λ, E)[1, 1] and Λ_N = (λ, E)[1, 2]). Thus the inequalities Λ_m > Λ_N and Λ_m ≤ Λ_N distinguish Y_0 from Y_1.
As a standardization for the labels use λ = α^+, and for E use the root E_R or the sink E_S. The root and the sink are produced by minimizing the entries of E in row 1, respectively minimizing the entries of E in column 1. For E ∈ Y_1 the definitions of E_R and E_S are reversed.
The same argument as in Proposition 2.26 applies.
Theorem 4.9. Suppose λ ∈ N_0^{N,+}, E ∈ Y_0, and (λ, E) is column-strict; then M(λ, E) contains a supersymmetric polynomial, unique when the coefficient of M_{λ,E_S} is 1.

Symmetrization operator and norms
The symmetrization operator is defined analogously to the group case.
Proof. Consider the same formulas with T_i replaced by s_i and denote X_n = 1 + s_n X_{n−1}.
In the full expansion there are (n + 1)! terms and the coefficient of t^k in [n + 1]_t! is the number of terms with k factors. Claim that S^{(n)} = X_1 X_2 · · · X_n = Σ_{u∈S_{n+1}} u ∈ ZS_{n+1}. Proceeding by induction, the statement is true for n = 1, where X_1 = 1 + s_1; now suppose it is true for n and consider Σ_{u∈S_{n+1}} u (1 + s_{n+1} + s_{n+1}s_n + · · · + s_{n+1} · · · s_1) acting on γ = (γ_1, . . . , γ_{n+2}); then s_{n+1} · · · s_i γ = (γ_1, . . . , γ_{i−1}, γ_{i+1}, . . . , γ_{n+2}, γ_i). Thus Σ_{u∈S_{n+1}} u s_{n+1} · · · s_i is the sum of all u' such that (u'γ)_{n+2} = γ_i. This shows S^{(n+1)} = Σ_{u∈S_{n+2}} u. Since the number of terms with k factors in S^{(n)} is the same as the number of u of length k, each term is of minimum length (the shortest expression of u as a product of {s_i}). Thus replacing each s_i by T_i shows that S^{(n)} = Σ_{u∈S_{n+1}} T(u). Replacing T_i by T_{n+1−i} for 1 ≤ i ≤ n in S^{(n)} does not affect the sum (implicitly the braid relations are used). Given j ≤ n apply the map T_i → T_{j+1−i} in X_1 X_2 · · · X_j to obtain S^{(n)} = (1 + T_j)(1 + T_{j−1} + T_{j−1}T_j) · · · (1 + T_1 + · · · + T_1 · · · T_j) X_{j+1} · · · X_n, and it is now obvious that (T_j − t) S^{(n)} = 0.

Proof. The effect of X_j on an invariant polynomial is to multiply it by 1 + t + t^2 + · · · + t^j.
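A minimal worked instance of the claim, for n = 2 (added as an illustration):
\[
X_1 = 1 + s_1,\qquad X_2 = 1 + s_2X_1 = 1 + s_2 + s_2s_1,
\]
\[
X_1X_2 = (1+s_1)(1+s_2+s_2s_1) = 1 + s_1 + s_2 + s_1s_2 + s_2s_1 + s_1s_2s_1 = \sum_{u\in S_3} u,
\]
and each of the 3! = 6 words is reduced, so replacing s_i by T_i gives S^{(2)} = Σ_{u∈S_3} T(u).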
Proof. Suppose u ∈ S_N and u = s_{i_1} · · · s_{i_ℓ} is a shortest expression for u, so that T(u) = T_{i_1} · · · T_{i_ℓ}; then ⟨T(u)f, g⟩ = ⟨f, T_{i_ℓ} · · · T_{i_1} g⟩ = ⟨f, T(u^{−1})g⟩. Since Σ_{u∈S_N} T(u) = Σ_{u∈S_N} T(u^{−1}) this completes the proof.
There is a summation-free formula for ‖p_{λ,E}‖^2, derived as follows. Suppose (α, F) is in the same orbit as (λ, E); then S^{(N−1)} M_{α,F} = c p_{λ,E} for some constant c, because of the uniqueness of p_{λ,E} in M(λ, E). The evaluation depends on determining c, which can be done by using M_{λ^−,E_R}, where λ^− is the nondecreasing rearrangement of λ. For each i ≤ λ_1 let m_i = #{j : (λ, E_S)[1, j] = i} (the multiplicity of i in row 1 of (λ, E_S)). We will show how the coefficient of M_{λ,E_S} is determined. (This was shown in [9, Theorem 5.39]; we are outlining a proof here with simplifications due to the simple hook shape (N − m, 1^m), also to accommodate the different notation.) Here is an illustration of the following theorem and the method of proof. Suppose λ = (3, 2, 2, 2, 1, 0); we demonstrate the effect on the significant terms by using the spectral vectors, and the result is the spectral vector of (λ, E_S). Note that X_j does not affect the variables x_k for k > j + 1. It (almost) suffices to consider the coefficient of x^{α(1)} in X_{N−1} M_{λ^−,E_R}. (Throughout we use Σ to denote a linear combination of terms M_{β,F} which cannot be transformed into M_{λ,E_S} by the operators X_1 · · · X_j.) Suppose X_{N−1} M_{λ^−,E_R} = M_{α(1),E_R} + Σ; then the process is repeated with M_{α(1),E_R}. The other possibility is described next.
The last case to consider is handled similarly to the previous one: T_i M_{α(i),E} = t M_{α(i),E} for 1 ≤ i ≤ k − 1. The set E here is an intermediate step in a series of transpositions transforming E_R to E_S, at this stage using only certain s_j, and inv increases by 1 at each such step. Eventually these steps transform E_R to E_S and λ^− to λ. Each set of m_i (contiguous) values λ_j = i in row 1 of (λ, E_S) contributes a factor of [m_i]_t!. By beginning with E_R the factors appearing in (T_i + b)M_{β,F} are always 1 (see (3.3) and (3.2)).
for some function g. So each term in F_{λ^−,E} matches one in the stated λ-product.
Note that E_R can be replaced by E_S in the first two lines of the formula for ‖p_{λ,E_S}‖^2. By using the M map the formulas produce supersymmetric polynomials in P_{m,1}: consider the polynomials M(p_{λ,E_S}), where p_{λ,E_S} ∈ P_{m−1,0}. This is why we do not go into detail about the E ∈ Y_1 case. The norm formula implies an identity which was checked by computer algebra for a "small" example: N = 5, m = 2, λ = (2, 2, 1, 1, 0); there are 120 labels (β, F) in the orbit of (λ, E_S), that is, dim M(λ, E_S) = 120.

Special values
We claim by induction that (z_i − t z_{i+k}) is a factor of p(z) for 1 ≤ i < i + k ≤ m: this is valid for k = 1, so suppose that (z_i − t z_{i+k}) is a factor of p(z); since p(z)/(z_{i+k} − t z_{i+k+1}) is s_{i+k}-invariant, it follows that (z_i − t z_{i+k+1}) is a factor (where i + k + 1 ≤ m). Suppose z_m = t^{N−m} = t z_{N−m+1}; then T_m p(z)τ_F = t p(z)τ_F, but this implies p(z) = 0 or ω_m τ_F = t^{N−m} τ_F, which is impossible. Thus z_m − t^{N−m} is a factor of p(z). The symmetry properties imply z_i − t^{N−m} is a factor of p(z) for 1 ≤ i ≤ m.
An example appears to show there is no such general result in our version. However there may be one for the special case where λ, E S [1, j] = 0 for 1 ≤ j ≤ N − m. At this point we offer no conjecture, but some very small examples with N = 3, 4 and |λ| ≤ 4 suggest there is something to be found.
Definition 4.23. For n ≥ 1 let X^a_0 = 1 and X^a_n = 1 − (1/t) T_n X^a_{n−1}, and A^{(n)} = X^a_1 X^a_2 · · · X^a_n.
Proof. The operators −(1/t)T_i satisfy the braid relations, so the same approach as in Theorem 4.11 works here, and the proof then follows from (T_i + 1)(1 − (1/t)T_i) = 0.
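A one-line verification of the identity used at the end of the proof, from the quadratic relation T_i^2 = (t − 1)T_i + t:
\[
(T_i + 1)\Bigl(1 - \tfrac{1}{t}T_i\Bigr) = 1 + T_i - \tfrac{1}{t}T_i - \tfrac{1}{t}\bigl((t-1)T_i + t\bigr) = 1 + T_i - \tfrac{1}{t}T_i - T_i + \tfrac{1}{t}T_i - 1 = 0 .
\]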
Similarly to Corollary 4.13 one can show an analogous statement, and there is a result analogous to Proposition 4.18.