Double lowering operators on polynomials

Recently Sarah Bockting-Conrad introduced the double lowering operator $\psi$ for a tridiagonal pair. Motivated by $\psi$ we consider the following problem about polynomials. Let $\mathbb F$ denote a field. Let $x$ denote an indeterminate, and let $\mathbb F\lbrack x \rbrack$ denote the algebra consisting of the polynomials in $x$ that have all coefficients in $\mathbb F$. Let $N$ denote a positive integer or $\infty$. Let $\lbrace a_i\rbrace_{i=0}^{N-1}$, $\lbrace b_i\rbrace_{i=0}^{N-1}$ denote scalars in $\mathbb F$ such that $\sum_{h=0}^{i-1} a_h \not= \sum_{h=0}^{i-1} b_h$ for $1 \leq i \leq N$. For $0 \leq i \leq N$ define polynomials $\tau_i, \eta_i \in \mathbb F\lbrack x \rbrack$ by $\tau_i = \prod_{h=0}^{i-1} (x-a_h)$ and $\eta_i = \prod_{h=0}^{i-1} (x-b_h)$. Let $V$ denote the subspace of $\mathbb F\lbrack x \rbrack$ spanned by $\lbrace x^i\rbrace_{i=0}^N$. An element $\psi \in {\rm End}(V)$ is called double lowering whenever $\psi \tau_i \in \mathbb F \tau_{i-1}$ and $\psi \eta_i \in \mathbb F \eta_{i-1}$ for $0 \leq i \leq N$, where $\tau_{-1}=0$ and $\eta_{-1}=0$. We give necessary and sufficient conditions on $\lbrace a_i\rbrace_{i=0}^{N-1}$, $\lbrace b_i\rbrace_{i=0}^{N-1}$ for there to exist a nonzero double lowering map. There are four families of solutions, which we describe in detail.


Introduction
This paper is mainly about polynomials and special functions, but in order to motivate things we first discuss a topic in linear algebra. The topic has to do with tridiagonal pairs [17] and their associated double lowering operator [7][8][9][10]. Let V denote a vector space with finite positive dimension. A tridiagonal pair on V is an ordered pair of linear maps A : V → V and A* : V → V that satisfy the following conditions: (i) each of A, A* is diagonalizable; (ii) there exists an ordering {V_i}_{i=0}^d of the eigenspaces of A such that A*V_i ⊆ V_{i−1} + V_i + V_{i+1} for 0 ≤ i ≤ d, where V_{−1} = 0 and V_{d+1} = 0; (iii) there exists an ordering {V*_i}_{i=0}^δ of the eigenspaces of A* such that AV*_i ⊆ V*_{i−1} + V*_i + V*_{i+1} for 0 ≤ i ≤ δ, where V*_{−1} = 0 and V*_{δ+1} = 0; (iv) there does not exist a subspace W ⊆ V such that AW ⊆ W and A*W ⊆ W and W ≠ 0 and W ≠ V.
Let A, A* denote a tridiagonal pair on V, as in the above definition. By [17, Lemma 4.5] the integers d and δ from (ii), (iii) are equal; this common value is called the diameter of the pair. For 0 ≤ i ≤ d let θ_i (resp. θ*_i) denote the eigenvalue of A (resp. A*) for the eigenspace V_i (resp. V*_i). By [17, Theorem 11.1] the scalars (θ_{i−2} − θ_{i+1})/(θ_{i−1} − θ_i) and (θ*_{i−2} − θ*_{i+1})/(θ*_{i−1} − θ*_i) are equal and independent of i for 2 ≤ i ≤ d − 1. For this constraint the solutions can be given in closed form [17, Theorem 11.2]. The "most general" solution is called q-Racah, and will be described shortly.
By construction the vector space V has a direct sum decomposition into the eigenspaces {V_i}_{i=0}^d of A, and another into the eigenspaces {V*_i}_{i=0}^d of A*. The vector space V has two other direct sum decompositions of interest, called the first split decomposition {U_i}_{i=0}^d and the second split decomposition {U⇓_i}_{i=0}^d. By [17, Theorem 4.6] the first split decomposition satisfies U_0 + U_1 + ··· + U_i = V*_0 + V*_1 + ··· + V*_i and U_i + U_{i+1} + ··· + U_d = V_i + V_{i+1} + ··· + V_d for 0 ≤ i ≤ d. By [17, Theorem 4.6] the second split decomposition satisfies analogous conditions, where U_{−1} = 0, U_{d+1} = 0 and U⇓_{−1} = 0, U⇓_{d+1} = 0. In [7, Sections 11, 15] Sarah Bockting-Conrad introduces a linear map Ψ : V → V such that ΨU_i ⊆ U_{i−1} and ΨU⇓_i ⊆ U⇓_{i−1} for 0 ≤ i ≤ d. This map is called the double lowering operator or Bockting operator. In [7, Sections 9, 15] Bockting-Conrad introduces an invertible linear map ∆ : V → V that commutes with Ψ and sends U_i onto U⇓_i for 0 ≤ i ≤ d. The maps Ψ and ∆ are related in the following way. For 0 ≤ i ≤ d define two polynomials (1), (2) in a variable x, and define the scalars ϑ_1, ϑ_2, . . . , ϑ_d. By [7, Theorem 17.1], ∆ can be expressed in terms of Ψ and these data, provided that each of ϑ_1, ϑ_2, . . . , ϑ_d is nonzero.
Shortly we will describe Ψ and ∆ in more detail, but first we restrict to the q-Racah case.
In this case there exist nonzero scalars a, b, q ∈ F such that q^4 ≠ 1 and θ_i = aq^{d−2i} + a^{−1}q^{2i−d}, θ*_i = bq^{d−2i} + b^{−1}q^{2i−d} for 0 ≤ i ≤ d. By [8, Theorem 9.8] the map ψ is equal to each of several explicit expressions. This result is used in [8, Theorem 9.9] to obtain a formula for ∆. By (3) and [9, Theorem 7.2], ∆ admits a factorization. Motivated by this factorization, in [9, Sections 6, 7] Bockting-Conrad introduces an invertible linear map M : V → V. By [9, Section 6] and [9, Lemma 6.2], M is equal to each of (I − a^{−1}qψ)^{−1}K and several similar expressions; see also [9, Lemma 6.7]. We just listed many results about ψ, ∆, K, B, M. In the present paper, we interpret these results in terms of polynomials. The polynomials in question are essentially (1), (2), although we adopt a more general point of view. In the next section we will describe a problem about polynomials, and in the rest of the paper we will describe the solution. In this description we will encounter analogs of the above results. We hope that the above results are illuminated by our description.

Definitions and first steps
We now begin our formal argument. The following assumptions and notational conventions apply throughout the paper. Recall the natural numbers N = {0, 1, 2, . . .} and integers Z = {0, ±1, ±2, . . .}. Let F denote a field. Every vector space discussed in this paper is over F. Every algebra discussed in this paper is associative, over F, and has a multiplicative identity. Let x denote an indeterminate. Let F[x] denote the algebra consisting of the polynomials in x that have all coefficients in F. Throughout the paper we use the following convention: The symbols N, n refer to an integer or ∞. The symbols i, j, k refer to an integer.
We now describe a problem about polynomials. Let N denote a positive integer or ∞. We consider an ordered pair of sequences

{a_i}_{i=0}^{N−1}, {b_i}_{i=0}^{N−1} (4)

such that a_i, b_i ∈ F for 0 ≤ i ≤ N − 1. To avoid degenerate situations, we assume that

Σ_{h=0}^{i−1} a_h ≠ Σ_{h=0}^{i−1} b_h for 1 ≤ i ≤ N. (5)

We call the ordered pair (4) the data.
For 0 ≤ i ≤ N define polynomials τ_i, η_i ∈ F[x] by

τ_i = (x − a_0)(x − a_1)···(x − a_{i−1}), (6)
η_i = (x − b_0)(x − b_1)···(x − b_{i−1}). (7)

The polynomials τ_i, η_i are monic of degree i. For notational convenience define τ_{−1} = 0 and η_{−1} = 0. Let V denote the subspace of F[x] spanned by {x^i}_{i=0}^N, and for 0 ≤ n ≤ N let V_n denote the subspace of F[x] spanned by {x^i}_{i=0}^n. Define ∆ ∈ End(V) such that

∆τ_i = η_i (0 ≤ i ≤ N). (8)

Note that ∆ is invertible.
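To make the setup concrete, here is a small computational sanity check (our illustration, not part of the formal argument). Over F = Q, take both sequences constant, say a_i = 2 and b_i = 5. Then τ_i = (x − 2)^i and η_i = (x − 5)^i, the sums Σ a_h = 2i and Σ b_h = 5i are distinct for i ≥ 1, and the derivative d/dx is a nonzero double lowering map, since it sends τ_i to iτ_{i−1} and η_i to iη_{i−1}.

```python
from fractions import Fraction

def mul_linear(p, c):
    """Multiply the polynomial p (ascending coefficients) by (x - c)."""
    q = [Fraction(0)] * (len(p) + 1)
    for k, coef in enumerate(p):
        q[k + 1] += coef       # x * coef*x^k
        q[k]     -= c * coef   # -c * coef*x^k
    return q

def tau(seq, i):
    """Coefficients of prod_{h=0}^{i-1} (x - seq[h])."""
    p = [Fraction(1)]
    for h in range(i):
        p = mul_linear(p, seq[h])
    return p

def deriv(p):
    """Coefficients of dp/dx."""
    return [Fraction(k) * p[k] for k in range(1, len(p))] or [Fraction(0)]

def scale(p, c):
    return [c * coef for coef in p]

N = 4
a = [Fraction(2)] * N   # a_i = 2, so tau_i = (x - 2)^i; sum = 2i
b = [Fraction(5)] * N   # b_i = 5, so eta_i = (x - 5)^i; sum = 5i != 2i

# tau_i is monic of degree i
for i in range(N + 1):
    assert len(tau(a, i)) == i + 1 and tau(a, i)[-1] == 1

# d/dx lowers both families: D tau_i = i tau_{i-1}, D eta_i = i eta_{i-1}
for seq in (a, b):
    for i in range(1, N + 1):
        assert deriv(tau(seq, i)) == scale(tau(seq, i - 1), Fraction(i))
print("d/dx is double lowering for constant data")
```

Exact rational arithmetic (`fractions.Fraction`) keeps the check free of floating-point noise.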
Definition 2.1. An element ψ ∈ End(V) is called double lowering (with respect to the given data) whenever both ψτ_i ∈ Fτ_{i−1} and ψη_i ∈ Fη_{i−1} for 0 ≤ i ≤ N.

Definition 2.2. Let L denote the set of double lowering elements of End(V). Note that L is a subspace of the vector space End(V). We call L the double lowering space for the given data.

Definition 2.3. The data (4) is said to be double lowering whenever L ≠ 0.

Problem 2.4. Find necessary and sufficient conditions for the data (4) to be double lowering. In this case describe L and ∆.
The above problem is solved in the present paper. The necessary and sufficient conditions are given in Theorem 12.1. By that theorem, there are four cases. For the first three cases, L and ∆ are described in Section 6. For the fourth case, L and ∆ are described in Section 13. We would like to acknowledge that the above problem was previously solved by R. Vidunas [41], under an additional assumption.
We have some remarks. The polynomials (6), (7) satisfy

τ_{i+1} = (x − a_i)τ_i, η_{i+1} = (x − b_i)η_i (0 ≤ i ≤ N − 1). (9)

Moreover τ_0 = 1, η_0 = 1, and τ_1 = x − a_0, η_1 = x − b_0.

Lemma 2.5. Assume that ψ ∈ End(V) is double lowering. Then ψ1 = 0. Moreover ψτ_1 = ψη_1 = ψx, and this common value is contained in F.
Proof. Apply ψ to each side of the equations in (9), and use Definition 2.1.

Lemma 2.6. For 0 ≤ n ≤ N, each of {τ_i}_{i=0}^n, {η_i}_{i=0}^n is a basis for V_n.
Proof. Each of τ i , η i has degree i for 0 ≤ i ≤ N.
Lemma 2.7. Let ψ ∈ End(V) be double lowering. Then ψV_i ⊆ V_{i−1} for 0 ≤ i ≤ N, where V_{−1} = 0.

Proof. By Definition 2.1 and Lemma 2.6.
For T ∈ End(V), T is called nilpotent whenever there exists a positive integer j such that T^j = 0. The map T is called locally nilpotent whenever for all v ∈ V there exists a positive integer j such that T^j v = 0. If T is nilpotent then T is locally nilpotent. For N ≠ ∞, if T is locally nilpotent then T is nilpotent.
Lemma 2.8. Each element of L is locally nilpotent. If N ≠ ∞ then each element of L is nilpotent.
Proof. By Lemma 2.7 and the comments below it.
We mention an elementary result for later use.
Lemma 2.14. Assume that T ∈ End(V) is locally nilpotent. Then I − T is invertible, and (I − T)^{−1}v = Σ_{j=0}^∞ T^j v for v ∈ V, where the sum contains finitely many nonzero terms.
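The inverse in Lemma 2.14 is the geometric series Σ_{j≥0} T^j, which terminates on each vector because T is locally nilpotent. A minimal matrix illustration (our example; the superdiagonal entries are arbitrary):

```python
from fractions import Fraction

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_sub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

n = 4
I = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
# T has a single superdiagonal, so T^4 = 0 (nilpotent, hence locally nilpotent)
T = [[Fraction(int(j == i + 1)) * (i + 1) for j in range(n)] for i in range(n)]

# (I - T)^{-1} = I + T + T^2 + T^3; the series stops once the powers vanish
inv, power = I, I
for _ in range(n - 1):
    power = mat_mul(power, T)
    inv = [[inv[i][j] + power[i][j] for j in range(n)] for i in range(n)]

assert mat_mul(mat_sub(I, T), inv) == I
assert mat_mul(inv, mat_sub(I, T)) == I
print("(I - T)^{-1} equals the terminating geometric series")
```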

Adjusting the data
In this section we describe how the double lowering space is affected when we adjust the data in an affine way.
Let GL 2 (F) denote the group of invertible 2 × 2 matrices that have all entries in F.
For 0 ≠ s ∈ F and t ∈ F consider the matrix in GL_2(F) with rows (s, t) and (0, 1). The above matrix is denoted g(s, t). Let G denote the subgroup of GL_2(F) consisting of the matrices g(s, t) for 0 ≠ s ∈ F and t ∈ F.

Lemma 3.2. The following (i), (ii) hold: (i) g(s, t)g(s′, t′) = g(ss′, st′ + t); (ii) the inverse of g(s, t) is g(s^{−1}, −s^{−1}t).
For an algebra A, an automorphism of A is an algebra isomorphism A → A.
Lemma 3.3. The group G acts on the algebra F[x] as a group of automorphisms in the following way: each element g(s, t) ∈ G sends x → sx + t.
Proof. This is routinely checked using Lemma 3.2.
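The following sketch (ours) checks Lemma 3.3 on examples: the substitution x ↦ sx + t is an algebra automorphism of F[x], and the inverse substitution corresponds to g(s^{−1}, −s^{−1}t) as in Lemma 3.2(ii).

```python
from fractions import Fraction

def poly_mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            r[i + j] += x * y
    return r

def affine_sub(p, s, t):
    """Coefficients of p(s*x + t), computed by Horner's rule."""
    res = [Fraction(0)]
    for c in reversed(p):
        new = [Fraction(0)] * (len(res) + 1)
        for k, rc in enumerate(res):   # new = res * (s*x + t)
            new[k + 1] += s * rc
            new[k]     += t * rc
        new[0] += c
        while len(new) > 1 and new[-1] == 0:   # trim trailing zeros
            new.pop()
        res = new
    return res

s, t = Fraction(2), Fraction(-3)
p = [Fraction(1), Fraction(0), Fraction(4)]   # 1 + 4x^2
q = [Fraction(5), Fraction(1)]                # 5 + x

# algebra morphism: (pq)(sx + t) = p(sx + t) q(sx + t)
assert affine_sub(poly_mul(p, q), s, t) == poly_mul(affine_sub(p, s, t),
                                                    affine_sub(q, s, t))
# inverse substitution: g(1/s, -t/s) undoes g(s, t)
assert affine_sub(affine_sub(p, s, t), 1 / s, -t / s) == p
print("x -> sx + t acts as an algebra automorphism of F[x]")
```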
Recall from Definition 2.2 the double lowering space L for the data (4). Pick 0 ≠ s ∈ F and t ∈ F. Let L′ denote the double lowering space for the data {sa_i + t}_{i=0}^{N−1}, {sb_i + t}_{i=0}^{N−1}.

Proposition 3.4. The following (i)-(iii) hold for the above scalars s, t and g = g(s, t): (i) there exists an F-linear map L → L′, ψ ↦ g^{−1}ψg; (ii) there exists an F-linear map L′ → L, ζ ↦ gζg^{−1}; (iii) the maps in (i), (ii) above are inverses, and hence bijections.
(ii) Similar to the proof of (i) above. (iii) By construction.
Corollary 3.5. The data (4) is double lowering if and only if the data {sa_i + t}_{i=0}^{N−1}, {sb_i + t}_{i=0}^{N−1} is double lowering.
Proof. By Definition 2.3 and Proposition 3.4.
We have a comment.
Lemma 3.6. Referring to the data (4), let a′_0, b′_0 denote distinct scalars in F. Then there exist s, t ∈ F with s ≠ 0 such that sa_0 + t = a′_0 and sb_0 + t = b′_0.

Proof. Routine.
Corollary 3.5 and Lemma 3.6 show that for double lowering data (4), the scalars a_0 and b_0 are "free", with a_0 ≠ b_0 being the only constraint.

The parameters ϑ i
We continue to discuss the double lowering space L for the data (4). In this section we use L to define some scalars {ϑ i } N i=0 that will play a role in our theory.
Referring to Definition 4.1, we have ϑ_0 = 0 and ϑ_1 = 1. By (5) we have ϑ_i ≠ 0 for 1 ≤ i ≤ N.

Proposition 4.2. The following (i)-(iii) hold for ψ ∈ L and 1 ≤ i ≤ N:

Proof. We use induction on i. First assume that i = 1, and recall ϑ_1 = 1. Assertion (i) is vacuously true. Assertions (ii), (iii) hold by (11) and since τ_0 = 1 = η_0. Next assume that i ≥ 2. By Lemma 2.7, ψx^i ∈ V_{i−1}. Let the scalar α_i denote the coefficient of x^{i−1} in ψx^i, as in (14), (15). We show that α_i = ϑ_i ψx. Using (14), (15) we see that the polynomial (16) times the scalar (17) is equal to the sum of (18), (19), (20). By Lemma 2.10 and Definition 4.1 we obtain the inclusion (21). In this inclusion, we apply ψ to each side and use Lemma 2.7 to find that (18) is contained in V_{i−3}. In (21) we replace i by i − 1, to find that (19) is contained in V_{i−3}. By induction (20) is contained in V_{i−3}. By these comments the polynomial (16) times the scalar (17) is contained in V_{i−3}. Consider the factors in the polynomial (16). We have a_0 − b_0 ≠ 0 and ϑ_{i−1} ≠ 0 and x^{i−2} ∉ V_{i−3}. So the polynomial (16) is not contained in V_{i−3}. Consequently the scalar (17) is zero, so α_i = ϑ_i ψx. The result follows from this and (14), (15).

Proof. The vector space L has dimension 1, and its normalized element is nonzero.

(i) ψ ∈ L and ψ is normalized;

Proof. (i) ⇒ (ii) Set ψx = 1 in Proposition 4.2(ii),(iii).

Describing L using ∆
We continue to discuss the double lowering space L for the data (4). In this section we describe L using the map ∆ from (8).
We introduce some notation.
Proposition 5.2. The following (i)-(iii) are equivalent: Assume that (i)-(iii) hold, and let ψ ∈ L be normalized. Then (28), (29) hold.

Proof. For 0 ≤ i ≤ j we show (30), (31). Let ψ ∈ L be normalized. In (30) we apply ψ^i to each side, and evaluate the result using Lemma 4.8. We then set x = a_0, and use the fact that τ_0 = 1 and τ_k(a_0) = 0 for 1 ≤ k ≤ N. By these comments we obtain an equation; from this equation we obtain (31). By (30), (31) we obtain (26), so (ii) holds.
This is similar to the proof of (i) ⇔ (ii). Now assume that (i)-(iii) hold. We saw in the proof of (ii) ⇒ (i) that (28) holds. Interchanging the roles of τ and η in that proof, we see that (29) holds. Later in the paper, we will obtain necessary and sufficient conditions for the data (4) to satisfy conditions (i)-(iii) in Proposition 5.2; our result is Theorem 12.1. In order to motivate this result, we look at some examples of double lowering data. This will be done in the next section.

First examples of double lowering data
We continue to discuss the double lowering space L for the data (4). In this section we give three assumptions under which this data is double lowering. Under each assumption we describe the polynomials (6), (7), the parameters {ϑ i } N i=0 from Definition 4.1, and the map ∆ from (8).
As a warmup, we examine the condition (26) for some small values of j.
Then the following (i)-(v) hold, as in Lemma 6.4. For the rest of this section, assume that N ≥ 2. Also for the rest of this section, fix θ ∈ F and assume (35)-(37). Using Definition 4.1, and for N ≠ ∞, Using (25), and for N ≠ ∞, For 0 ≤ i ≤ N the polynomials τ_i, η_i are described in the table below:

Lemma 6.6. Under assumptions (35)-(37) the following (i)-(iii) hold: In the above lines ψ ∈ L is normalized.
Proof. (i) We invoke Proposition 5.2(i),(ii). For 0 ≤ j ≤ N we verify (26). We may assume that 2 ≤ j ≤ N; otherwise we are done by Lemma 6.1. For N ≠ ∞ we separate the cases 2 ≤ j ≤ N − 1 and j = N. It suffices to show (38), (39). For 2 ≤ j ≤ N the values of η_j − τ_j and η_j(a_0) are given in the table above the lemma statement, along with the remaining quantities needed. Using the above comments we routinely verify (38), (39).
(ii) We will verify the equation by showing that the two sides agree on V_j for 2 ≤ j ≤ N. Using (28) and ψ^{j+1}V_j = 0 we see that the two sides agree on V_j. The result follows.
We just gave some examples of double lowering data. There is another example that is somewhat more involved; it will be described later in the paper.

Extending the data
Throughout this section, we assume that N is an integer at least 2. Recall the data (4), and assume that this data is double lowering. Let a_N, b_N denote scalars in F, giving the extended data {a_i}_{i=0}^N, {b_i}_{i=0}^N. (41) In this section we obtain necessary and sufficient conditions on a_N, b_N for the data (41) to be double lowering. By (10),

Lemma 7.1. The following (i)-(iii) are equivalent: (i) the data (41) is double lowering; (ii) we have (iii) we have

Proof. By Proposition 5.2.
Proposition 7.3. The following (i)-(iii) are equivalent:

Proof. (i) ⇔ (ii) We invoke Lemma 7.1(i),(ii). Subtract (42) from (44) to obtain an equation with coefficients d_i. Computing each d_i, we find that (42) holds if and only if (44) holds.

Our next general goal is to solve the equations in Proposition 7.3(ii),(iii). The main solution will involve a type of sequence, said to be recurrent.

Recurrent sequences
Throughout this section let n denote an integer at least 2, or ∞. Let {a_i}_{i=0}^n denote scalars in F.
(iv) The sequence {a i } n i=0 is said to be recurrent whenever there exists β ∈ F such that {a i } n i=0 is β-recurrent.
Lemma 8.2. The following are equivalent: (i) the sequence {a i } n i=0 is recurrent; (ii) there exists β ∈ F such that {a i } n i=0 is β-recurrent.
Lemma 8.3. For β ∈ F the following are equivalent:

Proof. The expression (49) is zero by assumption, so the left-hand side of (48) is independent of i, and the result follows.
Let p i denote the expression on the left in (47), and observe for 1 ≤ i ≤ n − 1. Assertions (i), (ii) are both routine consequences of this.
Definition 8.5. Assume that {a i } n i=0 is recurrent. By a parameter triple for {a i } n i=0 we mean a 3-tuple β, γ, ̺ of scalars in F such that {a i } n i=0 is (β, γ)-recurrent and (β, γ, ̺)-recurrent. Note that a recurrent sequence has at least one parameter triple.
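As a concrete check of these notions (our sketch; it assumes the standard definitions, namely that {a_i} is (β, γ)-recurrent whenever a_{i−1} − βa_i + a_{i+1} = γ for 1 ≤ i ≤ n − 1, and (β, γ, ̺)-recurrent whenever a_{i−1}² − βa_{i−1}a_i + a_i² − γ(a_{i−1} + a_i) = ̺ for 1 ≤ i ≤ n), consider the family a_i = α_1 + α_2 q^i + α_3 q^{−i} with β = q + q^{−1}:

```python
from fractions import Fraction

q = Fraction(3)
a1, a2, a3 = Fraction(1), Fraction(2), Fraction(5)   # alpha_1, alpha_2, alpha_3
n = 8
a = [a1 + a2 * q**i + a3 * q**-i for i in range(n + 1)]

beta = q + 1 / q
# (beta, gamma)-recurrent: a_{i-1} - beta*a_i + a_{i+1} is independent of i
gammas = {a[i - 1] - beta * a[i] + a[i + 1] for i in range(1, n)}
assert len(gammas) == 1
gamma = gammas.pop()
assert gamma == a1 * (2 - beta)

# (beta, gamma, rho)-recurrent: the quadratic expression is also constant
rhos = {a[i - 1]**2 - beta * a[i - 1] * a[i] + a[i]**2
        - gamma * (a[i - 1] + a[i]) for i in range(1, n + 1)}
assert len(rhos) == 1
print("a_i = alpha1 + alpha2 q^i + alpha3 q^{-i} is recurrent with beta = q + 1/q")
```

The quadratic expression is automatically constant once the linear one is, since the difference of consecutive values factors as (a_{i+1} − a_{i−1})(a_{i+1} + a_{i−1} − βa_i − γ) = 0.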

Recurrent sequences in closed form
In this section, we describe the recurrent sequences in closed form. Let n denote an integer at least 2, or ∞.
Lemma 9.1. The recurrent sequences {a i } n i=0 are described in the table below: In the above table q, α 1 , α 2 , α 3 are scalars in F.
Lemma 9.2. The following scalars β, γ, ̺ give a parameter triple for the recurrent sequence {a_i}_{i=0}^n in Lemma 9.1. Case I:

Lemma 9.3. Referring to Lemma 9.1, for 0 ≤ i ≤ n + 1 the sum a_0 + a_1 + ··· + a_{i−1} is given in the table below:

Note 9.4. Referring to case III of the above table, the subcases i even and i odd can be handled in the following uniform way. For 0 ≤ i ≤ n + 1,

Twin recurrent sequences

Proof. First assume that the sequences {a_i}_{i=0}^n, {b_i}_{i=0}^n are related in the specified way. Then these sequences share the parameter triple β, γ, ̺ from Lemma 9.2. Therefore these sequences are twins. Next assume that the sequences {a_i}_{i=0}^n, {b_i}_{i=0}^n are twins. It follows from Definition 8.1 and Lemma 9.1 that they are related in the specified way.

A characterization of twin recurrent sequences
In this section we explain what twin recurrent sequences have to do with the equations in Proposition 7.3. Let n denote an integer at least 2, or ∞. Let {a i } n i=0 , {b i } n i=0 denote scalars in F.
Proof. This is routinely checked.
Proposition 11.3. Assume that the sequences {a_i}_{i=0}^n, {b_i}_{i=0}^n are recurrent and twins. Then E(i, j) holds for 0 ≤ i ≤ j ≤ n.
Proof. This is routinely verified for each case I-III in Lemma 10.2. To carry out the verification, use the formulas in Lemma 9.3.
In the next two lemmas, we give some additional solutions for the equations E(i, j) in Definition 11.1.
Lemma 11.5. Pick θ ∈ F and assume Then E(i, j) holds for 0 ≤ i ≤ j ≤ n.
Proof. By Lemma 11.2 it suffices to verify E(i, j) for 1 ≤ i < j ≤ n. Let i, j be given. For j < n, each side of E(i, j) is zero. For n ≠ ∞ and j = n, the equation E(i, j) becomes an identity which is a reformulation of (50).
Proposition 11.6. Assume that a_1 ≠ b_0 and a_1 ≠ b_1. Further assume that E(i, j) holds for 1 ≤ i ≤ 2 and i + 1 ≤ j ≤ n. Then the sequences {a_i}_{i=0}^n, {b_i}_{i=0}^n are recurrent and twins.
Proof. Using E(1, 2), we obtain (51). Since a_1 ≠ b_1, there exists a unique pair β, γ of scalars in F such that a_0 − βa_1 + a_2 = γ and b_0 − βb_1 + b_2 = γ. Using these equations we eliminate a_2, b_2 in (51). In the resulting equation we rearrange terms to get a_0² − βa_0a_1 + a_1² − γ(a_0 + a_1) = b_0² − βb_0b_1 + b_1² − γ(b_0 + b_1). Let ̺ denote this common value. We show that each of {a_i}_{i=0}^n, {b_i}_{i=0}^n is (β, γ)-recurrent and (β, γ, ̺)-recurrent. To do this, we show that for 2 ≤ j ≤ n, each of {a_i}_{i=0}^j, {b_i}_{i=0}^j is (β, γ)-recurrent and (β, γ, ̺)-recurrent. We will use induction on j. First assume that j = 2. By construction {a_i}_{i=0}^2 and {b_i}_{i=0}^2 are (β, γ)-recurrent. By construction and Lemma 8.4(i), each of these sequences is (β, γ, ̺)-recurrent. We are done for j = 2. Next assume that j ≥ 3. By E(1, j) we obtain (52). By E(2, j) we obtain (53). The equations (52), (53) give a linear system in the unknowns a_j, b_j. For this system the coefficient matrix is C. We compute det(C), and simplify the right-hand side using (51), to obtain that det(C) ≠ 0. So the system (52), (53) has a unique solution for a_j, b_j. We now describe the solution. By induction the sequences {a_i}_{i=0}^{j−1}, {b_i}_{i=0}^{j−1} are (β, γ)-recurrent and (β, γ, ̺)-recurrent. Consider the two sequences a_0, a_1, . . . , a_{j−1}, a′_j (54) and b_0, b_1, . . . , b_{j−1}, b′_j, (55) where a′_j, b′_j ∈ F are chosen so that each of (54), (55) is (β, γ)-recurrent. By construction and Lemma 8.4(i), each of (54), (55) is (β, γ, ̺)-recurrent. We show that a_j = a′_j and b_j = b′_j. The sequences (54), (55) are recurrent and twins, so by Proposition 11.3 they satisfy E(1, j) and E(2, j). Therefore, the equations (52), (53) still hold if we replace a_j, b_j by a′_j, b′_j. We mentioned earlier that the system (52), (53) has a unique solution for a_j, b_j. By these comments a_j = a′_j and b_j = b′_j. Consequently each of {a_i}_{i=0}^j, {b_i}_{i=0}^j is (β, γ)-recurrent and (β, γ, ̺)-recurrent. The above argument shows that each of {a_i}_{i=0}^n, {b_i}_{i=0}^n is (β, γ)-recurrent and (β, γ, ̺)-recurrent.
Lemma 11.7. Assume that a i−1 = b i for 1 ≤ i ≤ n. Then the equation E(1, j) holds for 1 ≤ j ≤ n. However, in general it is not the case that E(i, j) holds for 0 ≤ i ≤ j ≤ n.
Proof. If i = 1 then each side of (56) is zero, so E(1, j) holds. For the last assertion, assume that n = 3 and that the sequences are chosen suitably; then (56) fails for i = 2 and j = 3.

The classification of the double lowering data
Recall the data (4). In this section we obtain necessary and sufficient conditions for this data to be double lowering. In view of Lemma 6.2 we assume N ≥ 3.
Theorem 12.1. Let N denote an integer at least 3, or ∞. Let {a_i}_{i=0}^{N−1}, {b_i}_{i=0}^{N−1} (57) denote scalars in F such that Σ_{h=0}^{i−1} a_h ≠ Σ_{h=0}^{i−1} b_h for 1 ≤ i ≤ N. Then the data (57) is double lowering if and only if at least one of the following (i)-(iv) holds: (iv) the sequences (57) are recurrent and twins.
Proof. First assume that at least one of (i)-(iii) holds. Then (57) is double lowering, by Lemmas 6.4, 6.5, 6.6. Next assume that (iv) holds and N ≠ ∞. By Proposition 11.3 (with n = N − 1) the equations E(i, j) hold for 0 ≤ i ≤ j ≤ N − 1. We delete a_{N−1}, b_{N−1} from (57) and consider the data (59). By Lemma 6.2 and induction on N, we may assume that the data (59) is double lowering. By Proposition 7.3(i),(ii) (with N replaced by N − 1) we find that (57) is double lowering. Next assume that (iv) holds and N = ∞. Then for all integers j ≥ 2 the sequences {a_i}_{i=0}^{j−1}, {b_i}_{i=0}^{j−1} are recurrent and twins, so by the previous case the corresponding data is double lowering; consequently (57) is double lowering. We are done in one direction.
Conversely, assume that (57) is double lowering. We break the argument into cases.
Proceeding as in the previous case, we find that (ii) holds.
We show that (iv) holds. Using Proposition 7.3(i),(ii) as in the previous case, we find that E(i, j) holds for 1 ≤ i ≤ 2 and i + 1 ≤ j ≤ N − 1. Now by Proposition 11.6 (with n = N − 1), the sequences (57) are recurrent and twins. We have shown that (iv) holds.

L and ∆ for twin recurrent data
Our goal for the rest of the paper is to give a comprehensive description of L and ∆ for twin recurrent data. We will focus on Case I in Lemma 10.2, or more precisely, an adjusted version of this case as described in Section 3.
For the rest of this paper we assume that N is an integer at least 2, or ∞. Recall the double lowering space L for the data (4). For the rest of this paper, fix nonzero a, b, q ∈ F and assume for 0 ≤ i ≤ N − 1. By (5) we have a ≠ b and

Lemma 13.2. The sequences (4) are recurrent and twins.
Our next general goal is to describe the polynomials (6), (7), the parameters {ϑ_i}_{i=0}^N from Definition 4.1, and the map ∆ from (8). We mention some formulas for later use.
(ii) By Note 13.1 and (i) above.
We mention some formulas for later use.
(ii) Similar to the proof of (i) above.
(ii) Similar to the proof of (i) above.
Our data is double lowering, so L ≠ 0. For the rest of the paper, let ψ ∈ L be normalized. The maps ∆, ψ are related by (28), (29). Our next goal is to interpret (28), (29) using the q-exponential function. This function is defined as follows. For locally nilpotent T ∈ End(V), The map exp_q(T) is invertible; its inverse is

Lemma 13.12. For locally nilpotent T ∈ End(V),

Proof. To verify this equation, for 0 ≤ i ≤ N compare the coefficient of T^i on each side.
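For intuition about the inverse of exp_q, here is an exact check (ours), using one common normalization exp_q(T) = Σ_j T^j/([j]_q!) with [j]_q = 1 + q + ··· + q^{j−1}; the paper's normalization may differ. For nilpotent T one has exp_q(T)^{−1} = exp_{q^{−1}}(−T):

```python
from fractions import Fraction

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def exp_q(q, T):
    """exp_q(T) = sum_j T^j / [j]_q!  (a finite sum, since T is nilpotent)."""
    n = len(T)
    I = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    result, power, qfact = I, I, Fraction(1)
    for j in range(1, n):
        power = mat_mul(power, T)
        qfact *= sum(q**h for h in range(j))   # [j]_q = 1 + q + ... + q^{j-1}
        result = [[result[r][c] + power[r][c] / qfact for c in range(n)]
                  for r in range(n)]
    return result

n = 4
q = Fraction(2)
I = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
T = [[Fraction(int(j == i + 1)) for j in range(n)] for i in range(n)]  # T^4 = 0
negT = [[-x for x in row] for row in T]

# exp_q(T)^{-1} = exp_{1/q}(-T), checked exactly
assert mat_mul(exp_q(q, T), exp_q(1 / q, negT)) == I
assert mat_mul(exp_q(1 / q, negT), exp_q(q, T)) == I
print("exp_q(T) exp_{1/q}(-T) = I for nilpotent T")
```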
Proof. For 0 ≤ j ≤ N we compare the coefficient of ψ j on each side of (71). For the left-hand side these coefficients are obtained from (28). We require By (67) and the construction, By Lemma 13.11(ii), Using these comments and (66), the equation (72) becomes where z = a −1 bq j . Basic hypergeometric series are discussed in [13,25]. In (73) the sum on the right is the basic hypergeometric series This observation reveals that (73) is an instance of the q-binomial theorem [13, Section 1.3]. The result follows. Proposition 13.13 gives a factorization of ∆. We now investigate the factors.
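The q-binomial theorem used above can be tested exactly in its terminating form, Σ_{k=0}^n ((q^{−n}; q)_k/(q; q)_k) z^k = (q^{−n}z; q)_n, where (a; q)_k = Π_{j=0}^{k−1}(1 − aq^j). A quick check over Q (our sketch):

```python
from fractions import Fraction

def qpoch(a, q, k):
    """q-Pochhammer symbol (a; q)_k = prod_{j=0}^{k-1} (1 - a q^j)."""
    result = Fraction(1)
    for j in range(k):
        result *= 1 - a * q**j
    return result

q = Fraction(2)
for n in range(6):
    for z in (Fraction(3, 7), Fraction(-1, 2), Fraction(5)):
        lhs = sum(qpoch(q**-n, q, k) / qpoch(q, q, k) * z**k
                  for k in range(n + 1))
        rhs = qpoch(q**-n * z, q, n)
        assert lhs == rhs
print("terminating q-binomial theorem verified exactly over Q")
```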
Proof. By (71) and the comments above Lemma 13.12, To obtain (74), apply each side of (76) to τ i and evaluate the result using (8). For the equation (71), the two factors on the right commute; swapping these factors and proceeding as above, To obtain (75), apply each side of (77) to τ i and evaluate the result using (8).
Proof. By Definition 13.15 and the comments above Lemma 13.12.
Note 13.17. Referring to Definition 13.15, the polynomials

Example 13.18. The following (i)-(iii) hold: (i) w_0 = 1; (ii) w_1 is equal to each of (iii) w_2 is equal to each of

Lemma 13.19. The following (i)-(iii) hold: for 0 ≤ n ≤ N, {w_i}_{i=0}^n is a basis for the vector space V_n; moreover {w_i}_{i=0}^N is a basis for the vector space V.
Note 13.21. The polynomials {w_i}_{i=0}^N and {w′_i}_{i=0}^N are in the Al-Salam/Chihara family [25, Section 14.8] if N = ∞, and the dual q-Krawtchouk family [25, Section 14.17] if N ≠ ∞. The Al-Salam/Chihara and dual q-Krawtchouk polynomials satisfy a 3-term recurrence; the details will be given below.

Proof. By Definition 13.15 and since ψτ_i = ϑ_i τ_{i−1}.
Our next general goal is to describe in more detail how the bases are related. To this end, we introduce some maps K, B, M ∈ End(V ).
Definition 13.23. Define K, B, M ∈ End(V ) such that for 0 ≤ i ≤ N, Each of K, B, M is invertible.
(ii), (iii) Similar to the proof of (i) above.
Lemma 13.25. The following (i)-(iii) hold: Proposition 13.26. The following (i)-(iv) hold: Proof. (i) The map T = b −1 ξψ is locally nilpotent. We have KT K −1 = qT by Lemma 13.24(i), and Kexp q (T ) = exp q (T )M by Lemma 13.25(ii). By these comments and Lemma 13.12, By this and since exp q (T ) is invertible, The result follows from this and ξ = 1 − ab.
Proposition 13.28. We have Proof. Compute b times Proposition 13.26(i) minus a times Proposition 13.26(iii).
Lemma 13.29. Each of the following is invertible: Proof. By Proposition 13.28 and since M is invertible.
Our next goal is to describe how K, B are related. We will use the following result.
We mention some reformulations of Proposition 13.31.
Corollary 13.32. We have Proposition 13.33. We have In the above fractions the denominator is invertible since ψ is locally nilpotent. Lemma 13.34. The following mutually commute: Proof. By Proposition 13.33.
Proposition 13.35. We have In the above fractions the denominator is invertible by Lemma 13.29.
Proof. In Proposition 13.33 solve for ψ.
We have a comment.
Lemma 13.37. The relations in Proposition 13.31 and Corollary 13.32 still hold if we replace Proof. Use Proposition 13.31 and Lemma 13.34.
We recall some notation. Let Mat_{N+1}(F) denote the set of N + 1 by N + 1 matrices that have all entries in F. We index the rows and columns by 0, 1, . . . , N. Let {v_i}_{i=0}^N denote a basis for V. For M ∈ Mat_{N+1}(F) and T ∈ End(V), we say that M represents T with respect to {v_i}_{i=0}^N whenever Tv_j = Σ_{i=0}^N M_{ij}v_i for 0 ≤ j ≤ N. Our next goal is to display the matrices that represent ψ, K^{±1}, M^{±1}, B^{±1} with respect to the bases {τ_i}_{i=0}^N, {w_i}_{i=0}^N, {η_i}_{i=0}^N.

Definition 13.38. Let ψ denote the matrix in Mat_{N+1}(F) that has (i − 1, i)-entry ϑ_i for 1 ≤ i ≤ N, and all other entries 0.

Lemma 13.39. The matrix ψ represents ψ with respect to {τ_i}_{i=0}^N and {w_i}_{i=0}^N and {η_i}_{i=0}^N.
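To illustrate the representation convention (column j of the matrix holds the coordinates of the image of the j-th basis vector), take the constant data a_i = 2, b_i = 5 over Q, for which the normalized double lowering map is ψ = d/dx and ϑ_i = i. The matrix representing ψ is then the same superdiagonal matrix with respect to both {τ_i} and {η_i}, matching the shape in Definition 13.38 (our example; the data of this section is different):

```python
from fractions import Fraction

def affine_sub(p, s, t):
    """Coefficients (ascending) of p(s*x + t)."""
    res = [Fraction(0)]
    for c in reversed(p):
        new = [Fraction(0)] * (len(res) + 1)
        for k, rc in enumerate(res):   # new = res*(s*x + t)
            new[k + 1] += s * rc
            new[k]     += t * rc
        new[0] += c
        res = new
    return res[:len(p)]

def deriv(p):
    return [Fraction(k) * p[k] for k in range(1, len(p))] or [Fraction(0)]

def matrix_of_D(c, N):
    """Column j = coefficients of (d/dx)(x - c)^j expanded in the basis
    {(x - c)^i}; entry (i, j) is the coefficient of (x - c)^i."""
    M = [[Fraction(0)] * (N + 1) for _ in range(N + 1)]
    for j in range(N + 1):
        e_j = [Fraction(int(k == j)) for k in range(N + 1)]
        tau_j = affine_sub(e_j, Fraction(1), -c)           # (x - c)^j in powers of x
        coeffs = affine_sub(deriv(tau_j), Fraction(1), c)  # back to the (x - c)^i basis
        for i, coef in enumerate(coeffs):
            M[i][j] = coef
    return M

N = 4
for c in (Fraction(2), Fraction(5)):   # the bases {tau_i} (a_i = 2) and {eta_i} (b_i = 5)
    M = matrix_of_D(c, N)
    for i in range(N + 1):
        for j in range(N + 1):
            assert M[i][j] == (j if i == j - 1 else 0)
print("d/dx has (i-1, i)-entry theta_i = i with respect to both bases")
```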
Lemma 13.41. We give the matrix in Mat N +1 (F) that represents K with respect to All other entries are 0.
All other entries are 0.
Lemma 13.43. We give the matrix in Mat N +1 (F) that represents M −1 with respect to All other entries are 0.
Lemma 13.44. We give the matrix in Mat N +1 (F) that represents M −1 with respect to We give a variation on Proposition 13.26.
Lemma 13.45. We have Proof. For each equation in Proposition 13.26, take the inverse of each side and evaluate the result using Lemma 2.14.
Lemma 13.46. We give the matrix in Mat N +1 (F) that represents M with respect to All other entries are 0.
Lemma 13.47. We give the matrix in Mat N +1 (F) that represents K −1 with respect to All other entries are 0.
Lemma 13.48. We give the matrix in Mat N +1 (F) that represents M with respect to All other entries are 0.
We give a variation on Proposition 13.33.
Lemma 13.50. We have Proof. Evaluate each equation in Proposition 13.33 using Lemma 2.14.
Lemma 13.51. We give the matrix in Mat N +1 (F) that represents K with respect to All other entries are 0.
Lemma 13.52. We give the matrix in Mat N +1 (F) that represents B with respect to All other entries are 0.
Lemma 13.54. We give the matrix in Mat N +1 (F) that represents B −1 with respect to All other entries are 0.
We recall some notation. Let {u_i}_{i=0}^N and {v_i}_{i=0}^N denote bases for V. By the transition matrix from {u_i}_{i=0}^N to {v_i}_{i=0}^N we mean the matrix S ∈ Mat_{N+1}(F) such that v_j = Σ_{i=0}^N S_{ij}u_i for 0 ≤ j ≤ N. Our next goal is to display the transition matrices between the bases {τ_i}_{i=0}^N, {w_i}_{i=0}^N, {η_i}_{i=0}^N. Consider the following matrices: exp_q(a^{−1}ξψ), exp_q(b^{−1}ξψ).
The inverse of (107) is The matrices (107), (108) are upper triangular. Shortly we will give their entries.
Proof. We first show (111). For 0 ≤ i ≤ N − 1 apply each side of (111) to τ i , and evaluate the result using Lemma 2.13 along with (65) and Definition 13.23. We have By these comments we obtain (111). Equation (112) is similarly obtained.
By the above comments and Lemmas 13.4, 13.7 we get the result.
Lemma 13.62. On V_{N−1}, (114) holds.

Proof. For 0 ≤ i ≤ N − 1 apply each side of (114) to τ_i, and evaluate the result using (22).

Proof. Let X denote the expression on either side of (113). Compute (qAX − XA)/(q − 1) and evaluate the result using Lemma 13.62.
Proposition 13.65. On V N −2 , Proof. Let Y denote the expression on either side of (114). Compute qY A−AY and evaluate the result using Lemma 13.61.
In Note 13.21 we mentioned a 3-term recurrence satisfied by the polynomials {w i } N i=0 and {w ′ i } N i=0 . Our next goal is to describe this recurrence.