Doubling (Dual) Hahn Polynomials: Classification and Applications

We classify all pairs of recurrence relations in which two Hahn or dual Hahn polynomials with different parameters appear. Such couples are referred to as (dual) Hahn doubles. The idea and interest come from an example appearing in a finite oscillator model [Jafarov E.I., Stoilova N.I., Van der Jeugt J., J. Phys. A: Math. Theor. 44 (2011), 265203, 15 pages, arXiv:1101.5310]. Our classification shows there exist three dual Hahn doubles and four Hahn doubles. The same technique is then applied to Racah polynomials, also yielding four doubles. Each dual Hahn (Hahn, Racah) double gives rise to an explicit new set of symmetric orthogonal polynomials related to the Christoffel and Geronimus transformations. For each case, we also obtain an interesting class of two-diagonal matrices with closed-form expressions for the eigenvalues. This extends the class of Sylvester-Kac matrices by remarkable new test matrices. We also examine the algebraic relations underlying the dual Hahn doubles, and discuss their usefulness for the construction of new finite oscillator models.


Introduction
The tridiagonal (N + 1) × (N + 1) matrix C_{N+1} of the following form appears in the literature under several names: the Sylvester-Kac matrix, the Kac matrix, the Clement matrix, and so on. It was already considered by Sylvester [28], used by M. Kac in some of his seminal work [17] and by Clement as a test matrix for eigenvalue computations [9], and it continues to attract attention [6,7,29]. The main property of the matrix C_{N+1} is that its eigenvalues are given explicitly by (1.2). Because of this simple property, C_{N+1} is a standard test matrix for numerical eigenvalue computations, and part of some standard test matrix toolboxes (e.g., [12]).
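Since the displays (1.1), (1.2) are not reproduced above, the following numerical sketch assumes the standard Sylvester-Kac convention (zero diagonal, subdiagonal 1, 2, . . . , N, superdiagonal N, N − 1, . . . , 1), for which the eigenvalues are −N, −N + 2, . . . , N:

```python
import numpy as np

def sylvester_kac(N):
    """(N+1) x (N+1) Sylvester-Kac matrix: zero diagonal,
    subdiagonal 1, ..., N and superdiagonal N, ..., 1."""
    M = np.zeros((N + 1, N + 1))
    for n in range(N):
        M[n + 1, n] = n + 1      # subdiagonal entries 1, 2, ..., N
        M[n, n + 1] = N - n      # superdiagonal entries N, N-1, ..., 1
    return M

N = 7
eigs = np.sort(np.linalg.eigvals(sylvester_kac(N)).real)
# eigenvalues: -7, -5, -3, -1, 1, 3, 5, 7 (up to rounding)
assert np.allclose(eigs, np.arange(-N, N + 1, 2))
```

The transpose has the same spectrum, so the orientation of the two diagonals is immaterial for this check.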
This spectrum simplifies even further for δ = −γ − 1; in this case one gets back the eigenvalues (1.2).
What is the context for these new tridiagonal matrices with simple eigenvalue properties? Recall that C_{N+1} also appears as the simplest example of a family of Leonard pairs [24,30]. In that context, this matrix is related to symmetric Krawtchouk polynomials [13,19,23]. Indeed, let K_n(x) ≡ K_n(x; 1/2, N), where K_n(x; p, N) are the Krawtchouk polynomials [13,19,23]. Their recurrence relation [19, equation (9.11.3)] then yields

nK_{n−1}(x) + (N − n)K_{n+1}(x) = (N − 2x)K_n(x), n = 0, 1, . . . , N. (1.5)

Writing this down for x = 0, 1, . . . , N and putting it in matrix form shows that the eigenvalues of C_{N+1} (or rather, of its transpose C^T_{N+1}) are indeed given by (1.2). Moreover, it shows that the components of the kth eigenvector of C^T_{N+1} are given by K_n(k). So we can identify the matrix C_{N+1} with the Jacobi matrix of the symmetric Krawtchouk polynomials, one of the families of finite and discrete hypergeometric orthogonal polynomials. The other matrices C_N(γ, δ) appearing in this introduction are not directly related to Jacobi matrices of a simple set of finite orthogonal polynomials. In this paper, however, we show how two sets of distinct dual Hahn polynomials [13,19,23] can be combined in an appropriate way such that the eigenvalues of matrices like C_N(γ, δ) become apparent, and such that the eigenvector components are given in terms of these two dual Hahn polynomials. This process of combining two distinct sets is called "doubling". We examine this not only for the case related to the matrix C_N(γ, δ); more strongly, we classify all possible ways in which two sets of dual Hahn polynomials can be combined so as to yield a two-diagonal "Jacobi matrix". It turns out that there are exactly three ways in which dual Hahn polynomials can be "doubled" (for a precise formulation, see later).
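The Krawtchouk identification can be spot-checked numerically: evaluating K_n(x; 1/2, N) from its terminating hypergeometric series 2F1(−n, −x; −N; 2) and assembling (1.5) into a matrix (with the convention that row n carries the entries n and N − n, as the unstated display (1.1) may use the transposed convention), the vector (K_0(x), . . . , K_N(x)) is indeed an eigenvector with eigenvalue N − 2x. A minimal sketch:

```python
import numpy as np

def krawtchouk(n, x, N):
    """Symmetric Krawtchouk K_n(x; 1/2, N) = 2F1(-n, -x; -N; 2),
    evaluated as a terminating hypergeometric sum."""
    total, term = 0.0, 1.0
    for k in range(n + 1):
        total += term
        if k < n:
            # ratio of consecutive terms of the 2F1 series at z = 2
            term *= (k - n) * (k - x) * 2 / ((k - N) * (k + 1))
    return total

N = 6
# recurrence matrix of (1.5): row n has entry n in column n-1
# and entry N - n in column n+1
C = np.zeros((N + 1, N + 1))
for n in range(N):
    C[n + 1, n] = n + 1
    C[n, n + 1] = N - n

for x in range(N + 1):
    v = np.array([krawtchouk(n, x, N) for n in range(N + 1)])
    assert np.allclose(C @ v, (N - 2 * x) * v)
```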
By the doubling procedure, one automatically gets the eigenvalues (and eigenvectors) of the corresponding two-diagonal matrix in explicit form.
This process of doubling and investigating the corresponding two-diagonal Jacobi matrix can be applied to other classes of orthogonal polynomials (with a finite and discrete support) as well. In this paper, we turn our attention also to Hahn and to Racah polynomials. The classification process becomes rather technical, however. Therefore, we have decided to present the proof of the complete classification only for dual Hahn polynomials (Section 3). For Hahn polynomials (Section 4) we give the final classification and corresponding two-diagonal matrices (but omit the proof), and for Racah polynomials we give the final classification and some examples of two-diagonal matrices in Appendix A.
We should also note that the two-diagonal matrices appearing as a result of the doubling process are symmetric. So matrices like (1.3) do not appear directly but in their symmetrized form. Of course, as far as eigenvalues are concerned, this makes no difference (see Section 6).
The doubling process of the polynomials considered here also gives rise to "new" sets of orthogonal polynomials. One could argue whether the term "new" is appropriate, since they arise by combining two known sets. The peculiar property, however, is that the combined set has a single common weight function. Moreover, we shall see that the support set of these doubled polynomials is interesting, see the examples in Section 5. In that section, we also interpret the doubling process in the framework of Christoffel-Geronimus transforms. It will become clear from our doubling process for which Christoffel parameter the Christoffel transform of a Hahn, dual Hahn or Racah polynomial is again a Hahn, dual Hahn or Racah polynomial with shifted parameters.
In Section 6 we reconsider the two-diagonal matrices that have appeared in the previous sections. It should be clear that we get several classes of two-diagonal matrices (with parameters) for which the eigenvalues (and eigenvectors) have an explicit and rather simple form. This section reviews such matrices as new and potentially interesting examples of eigenvalue test matrices.
In Section 7 we explore relations with other structures. Recall that in finite-dimensional representations of the Lie algebra su(2), with standard generators J_+, J_− and J_0, the matrix of J_+ + J_− also has a symmetric two-diagonal form. The new two-diagonal matrices appearing in this paper can be seen as representation matrices of deformations or extensions of su(2). We give the algebraic relations that follow from the "representation matrices" obtained here. The algebras are not studied in detail, but it is clear that they could be of interest in their own right. The general algebras have two parameters, and we indicate how special cases with only one parameter are of importance for the construction of finite oscillator models.

Introductory example
We start our analysis by explaining a known example taken from [27]. For this example, we first recall the definition of the Hahn and dual Hahn polynomials and some classical notation and properties.

Related to the Hahn polynomials are the dual Hahn polynomials R_n(λ(x); γ, δ, N) of degree n, n = 0, 1, . . . , N, in the variable λ(x) = x(x + γ + δ + 1), with parameters γ > −1 and δ > −1 (or γ < −N and δ < −N), which are defined similarly to (2.1) [13,19,23]. As is well known, the (discrete) orthogonality relation of the dual Hahn polynomials is just the "dual" of (2.3). Orthonormal dual Hahn functions are defined by means of the normalization factors (2.8). In [27], the following difference equations (2.10), (2.11) involving two sets of Hahn polynomials were derived (for convenience we use the notation Q_n(x) ≡ Q_n(x; α, β+1, N) and Q̃_n(x) ≡ Q_n(x; α+1, β, N)). Writing out these difference equations for x = 0, 1, . . . , N, the resulting set of equations can easily be written in matrix form. For this matrix form, let us use the normalized version of the polynomials, and construct the (2N + 2) × (2N + 2) matrix U with elements (2.12), (2.13), where x, n ∈ {0, 1, . . . , N}. By construction, this matrix is orthogonal [27]: the fact that the columns of U are orthonormal follows from the orthogonality relation of the Hahn polynomials and from the signs in the matrix U. Thus U^T U = U U^T = I, the identity matrix.
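Since the displays of this section did not survive reproduction here, the following sketch evaluates R_n(λ(x); γ, δ, N) directly from its standard series 3F2(−n, −x, x + γ + δ + 1; γ + 1, −N; 1) [19] and checks the discrete orthogonality numerically, using the dual Hahn weight that reappears in Section 5; the exact normalization is left aside:

```python
from math import prod, factorial

def poch(a, k):
    """Pochhammer symbol (a)_k."""
    return prod(a + j for j in range(k))

def dual_hahn(n, x, g, d, N):
    """R_n(lambda(x); g, d, N) as the terminating series
    3F2(-n, -x, x+g+d+1; g+1, -N; 1)."""
    return sum(poch(-n, k) * poch(-x, k) * poch(x + g + d + 1, k)
               / (poch(g + 1, k) * poch(-N, k) * factorial(k))
               for k in range(n + 1))

def weight(x, g, d, N):
    """Dual Hahn weight at the point lambda(x) (cf. Section 5)."""
    return ((-1) ** x * (2 * x + g + d + 1) * poch(g + 1, x) * poch(-N, x)
            * factorial(N)
            / (poch(x + g + d + 1, N + 1) * poch(d + 1, x) * factorial(x)))

g, d, N = 1.5, 0.5, 5
for m in range(N + 1):
    for n in range(m):
        s = sum(weight(x, g, d, N) * dual_hahn(m, x, g, d, N)
                * dual_hahn(n, x, g, d, N) for x in range(N + 1))
        assert abs(s) < 1e-8   # orthogonality for m != n
```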
The normalized difference equations (2.10), (2.11) for x = 0, 1, . . . , N can then be cast in matrix form. The coefficients in the left-hand sides of (2.10), (2.11) give rise to a tridiagonal (2N + 2) × (2N + 2) matrix M of the form (2.14). Note that the eigenvalues of the matrix M are (up to a factor 2) the same as those of the matrix C_{2N+2}(α, β), the two-parameter extension of the Sylvester-Kac matrix. As we will further discuss in Section 6, the above result proves that the eigenvalues of C_{2N+2}(α, β) are indeed given by (1.4). Even more: the orthonormal eigenvectors of M are just the columns of U.
Another way of looking at (2.16) is in terms of the dual Hahn polynomials. Interchanging x and n in the expressions (2.12), (2.13), we obtain (2.18), (2.19), where x, n ∈ {0, 1, . . . , N}. In this way, each row of the matrix U consists of a dual Hahn polynomial of a certain degree, having different parameter values for even and odd rows. Now the relation (2.16) can be interpreted as a three-term recurrence relation with M being the Jacobi matrix. Two sets of (dual) Hahn polynomials (with different parameters) are thus combined into a new set of polynomials such that the Jacobi matrix for this new set has a simple two-diagonal form, with simple eigenvalues. The pair of difference equations (2.10), (2.11) involving two sets of Hahn polynomials then corresponds to the relations (2.20), (2.21) involving the dual Hahn polynomials R_n(x) ≡ R_n(λ(x); γ, δ + 1, N) and R̃_n(x) ≡ R_n(λ(x); γ + 1, δ, N). This is in fact a special case of the so-called Christoffel transform of a dual Hahn polynomial, with its transformation parameter chosen specifically so that the result is again a dual Hahn polynomial (with different parameters). We will elaborate further on this in Section 5. This introductory example, taken from [27], raises the following question: in how many ways can two sets of (dual) Hahn polynomials be combined such that the Jacobi matrix is two-diagonal? This will be answered in the following section.

Doubling dual Hahn polynomials: classification
The essential relation in the previous example is the existence of a pair of "recurrence relations" (2.20), (2.21) intertwining two types of dual Hahn polynomials (or, equivalently, a couple of difference equations (2.10), (2.11) for two types of their duals, the Hahn polynomials). Let us therefore examine the existence of such relations in general. Say we have two types of dual Hahn polynomials with different parameter values for γ and δ (and possibly N), denoted by R_n(λ(x); γ, δ, N) and R_n(λ̃(x); γ̃, δ̃, Ñ), that are related in the manner of (3.1), (3.2). If we want these relations to correspond to a matrix identity like (2.16), then it is indeed necessary that the (unknown) functions a(n), â(n), b(n) and b̂(n) are functions of n and not of x, and that d(x) and d̂(x) are functions of x and not of n. Of course, the parameters γ, δ, N, γ̃, δ̃, Ñ can appear in these functions.
In order to lift this technique to polynomials other than the dual Hahn polynomials, say we have the relations (3.3), (3.4) between two sets of orthogonal polynomials of the same class, denoted by y_n and ŷ_n, but with different parameter values, where a, â, b, b̂ are independent of x and d, d̂ are independent of n. Although (3.3), (3.4) are not actual recurrence relations, since they involve both y_n and ŷ_n, we will refer to a couple of such relations intertwining two types of orthogonal polynomials as "a pair of recurrence relations". Substituting (3.4) in (3.3), we arrive at the recurrence relation (3.5) for ŷ_n. In the same manner, ŷ_n can be eliminated to find the recurrence relation (3.6) for y_n. Of course, the orthogonal polynomials y_n already satisfy a three-term recurrence relation of the form (2.4). A comparison of the coefficients of y_{n+1}, y_n, y_{n−1} in (3.5), (3.6) with the known coefficients given in (2.9) leads to the set of requirements (3.7)-(3.12) for a, â, b, b̂, d, d̂. After a slight rearrangement of terms in the requirements (3.9) and (3.10), we arrive at two new equations in which the left-hand side is independent of x while the right-hand side is independent of n; hence, the two sides must be independent of both n and x. By means of (3.7)-(3.12) we can eliminate A, Â, C, Ĉ, and subtracting one of the resulting equations from the other yields (3.15). Now, for a given class of orthogonal polynomials with recurrence relation of the form (2.4), we determine all possible functions a, â, b, b̂, d, d̂ satisfying the list of requirements (3.7)-(3.12). To this end, we proceed as follows.
• From (3.7) and (3.8) we observe that, up to a multiplicative factor, C(n) is split into two functions, a(n − 1) and â(n − 1). When a(n − 1) is shifted by 1 in n and multiplied again by â(n − 1), we must arrive at Ĉ(n). Hence, C and Ĉ consist of an identical part, and a part which differs by a shift of 1 in n. This observation gives a first list of possibilities for a and â.
• Similarly we find a list for b and b̂ by means of (3.11) and (3.12).
• These possibilities are then to be compared with requirements (3.9) and (3.10).
The actual execution of the procedure just described is still quite long and tedious when carried out for a fixed class of polynomials. In what follows we do this for the dual Hahn polynomials, which have the simplest recurrence relation, and even then it takes about three pages to present. The reader who wishes to skip the details can advance to Theorem 1.
For dual Hahn polynomials, the data is given by (2.9), with y_n = R_n(λ(x); γ, δ, N), ŷ_n = R_n(λ̃(x); γ̃, δ̃, Ñ), and with similar expressions for Λ̃(x), Â(n) and Ĉ(n) (with x, γ, δ, N replaced by x̃, γ̃, δ̃, Ñ). From (3.15), the resulting expression must be independent of x. In order for the term in x² to disappear, we must have x̃ = x + ξ, and requiring the coefficient of x to vanish gives the condition (3.16) for ξ. From (3.8) we see that there are four distinct possible combinations for a(n − 1) and â(n − 1), with c_a a factor. Combining this with (3.7), we see immediately that c_a is independent of n, and (a1)-(a4) yield the corresponding possibilities (a1′)-(a4′). Because of the restriction on δ, the option (a4′) is ineligible, leaving (a1′)-(a3′) as the only viable options.
In a similar way, from (3.12) we see that there are four possible combinations (b1)-(b4) for b(n) and b̂(n). Combining these with (3.11), we find that c_b is independent of n and that each of (b1)-(b4) yields a corresponding restriction on the parameters. We thus have four viable options for b, b̂ and three for a, â, giving a total of 12 possible combinations, which we now treat systematically.
Case (b1). Plugging (b1) into (3.14), the right-hand side is independent of n, so the left-hand side must be as well. This eliminates options (a2) and (a3) for a, â, as either would result in a third-order term in n which cannot vanish. On the other hand, (a1) yields an expression which must be independent of n, so the coefficient of n² in the left-hand side must vanish; hence c_a/c_b + c_b/c_a + 2 = 0, that is, c_a/c_b = −1. For this value of c_a/c_b the left-hand side equals zero and is indeed independent of n. Note that this leaves one degree of freedom, as only the ratio c_a/c_b is fixed; this is just a global scalar factor for (3.3) and (3.4), also present in (2.4). Henceforth, for convenience, we set c_a = 1 and c_b = −1.
The combined options (b1) and (a1) thus give a valid set of equations of the form (3.3) and (3.4), corresponding to the parameter values γ̃ = γ + 1, δ̃ = δ + 1, Ñ = N − 1. Moreover, by means of (3.16) we find ξ = −1 and so x̃ = x − 1. Finally, plugging these a, â, b, b̂ into (3.3) and (3.4) and putting n = 0, we find d̂(x), and similarly d(x). Interchanging x and n, these recurrence relations for dual Hahn polynomials are precisely the known actions of the forward and backward shift operators for Hahn polynomials [19, equations (9.5.6) and (9.5.8)].
Case (b2). Next, we consider the option (b2) for b, b̂. Plugging (b2) into (3.14), the left-hand side must be independent of n, which rules out option (a1). Option (a2) is also ruled out: using (a2) and δ + N + 1 = 0 (from (a2′)), the left-hand side again cannot be independent of n. Only (a3) remains. In order for the n² term in the left-hand side to vanish, we again require c_a/c_b = −1, and with this value both sides are indeed independent of n.
The combined options (b2) and (a3) also give a valid set of equations of the form (3.3) and (3.4), now corresponding to the parameter values γ̃ = γ, δ̃ = δ, Ñ = N − 1. Moreover, by means of (3.16) we find ξ = 0 and so x̃ = x. Putting n = 0 in (3.3) and (3.4) for these a, â, b, b̂, we find d̂(x), and similarly d(x) = N. The relations in question are then those for R_n(x) ≡ R_n(λ(x); γ, δ, N) and R̃_n(x) ≡ R_n(λ(x); γ, δ, N − 1), which can be verified algebraically or by means of a computer algebra package.
Case (b3). The next option to consider is (b3), for which the left-hand side of (3.14) must again be independent of n; this rules out options (a1) and (a2), while for (a3) we once more require c_a/c_b = −1 to arrive at a left-hand side independent of n. The combined options (b3) and (a3) thus give a valid set of equations of the form (3.3) and (3.4), with corresponding parameter values γ̃ = γ + 1, δ̃ = δ − 1, Ñ = N; by means of (3.16) we find ξ = 0 and so x̃ = x. Finally, plugging these a, â, b, b̂ into (3.3) and (3.4) and putting n = 0, we find d̂(x), and similarly d(x). These relations can again be verified algebraically or by means of a computer algebra package. Note that they coincide with (2.20), (2.21) from the previous section (up to a shift δ → δ + 1).
Case (b4). The final option (b4) for b, b̂ does not correspond to a valid set of equations of the form (3.3) and (3.4), as the left-hand side of (3.14) can never be independent of n for any of the options (a1), (a2), (a3).
This completes the analysis in the case of dual Hahn polynomials, and we have the following result.

Theorem 1. The only ways to double dual Hahn polynomials, i.e., to combine two sets of dual Hahn polynomials such that they satisfy a pair of recurrence relations of the form (3.1), (3.2), are the three cases:
dual Hahn I, with R_n(x) ≡ R_n(λ(x); γ, δ, N) and R̃_n(x) ≡ R_n(λ(x − 1); γ + 1, δ + 1, N − 1);
dual Hahn II, with R_n(x) ≡ R_n(λ(x); γ, δ, N) and R̃_n(x) ≡ R_n(λ(x); γ, δ, N − 1);
dual Hahn III, with R_n(x) ≡ R_n(λ(x); γ, δ, N) and R̃_n(x) ≡ R_n(λ(x); γ + 1, δ − 1, N).

By interchanging x and n, each of the recurrence relations for dual Hahn polynomials in the previous theorem gives rise to a set of forward and backward shift operators for ordinary Hahn polynomials. The case dual Hahn I corresponds to the known forward and backward shift operators for Hahn polynomials [19]. The case dual Hahn III corresponds to our introductory example (2.10), (2.11) (up to a shift β → β + 1), and appears already in [27]. The case dual Hahn II yields a new set of relations (encountered recently in [16, equations (16), (17)]), for Q_n(x) ≡ Q_n(x; α, β, N) and Q̃_n(x) ≡ Q_n(x; α, β, N − 1). The most important point, however, is that we have classified the possible cases. Because the sets of recurrence relations are of the form (3.1), (3.2), they can be cast in matrix form, as in (2.16), with a simple two-diagonal matrix. For the case dual Hahn I, note that the N-values of R_n(x) and R̃_n(x) differ by 1, so the definition of the matrix U (again in terms of the normalized version of the polynomials) requires a little more attention. The matrix U is now of order (2N + 1) × (2N + 1), with matrix elements (3.17), among which

U_{2n,N−x} = U_{2n,N+x} = (−1)^n √2 R_n(λ(x); γ, δ, N), x = 1, . . . , N,

where the row index of the matrix U (denoted here by 2n or 2n + 1, depending on the parity of the index) also runs over the integers from 0 up to 2N.
This matrix U is orthogonal: the orthogonality relation (2.7) of the dual Hahn polynomials and the signs in the matrix U imply that its rows are orthonormal. Thus U^T U = U U^T = I, the identity matrix. The recurrence relations for dual Hahn I of Theorem 1 can then be reformulated in terms of a two-diagonal (2N + 1) × (2N + 1) matrix of the form (3.18), with entries (3.19),

and U the orthogonal matrix determined in (3.17). For the case dual Hahn II, the matrix U is again of order (2N + 1) × (2N + 1), with matrix elements (3.21), where the row indices are as in (3.17). The orthogonality relation of the dual Hahn polynomials and the signs in the matrix U imply that its rows are orthonormal, so U^T U = U U^T = I. The pair of recurrence relations for dual Hahn II of Theorem 1 then yields the two-diagonal matrix (3.22). Note that the order in which the normalized dual Hahn polynomials appear in the matrix U is different for (3.17) and (3.21). This is related to the indices of the polynomials in the relations of Theorem 1.
Finally, for the case dual Hahn III, the matrix U is given by (2.18), (2.19) and we recapitulate the results given at the end of the previous section, now in terms of the dual Hahn parameters γ and δ.
Proposition 4 (dual Hahn III). Suppose γ > −1, δ > −1 or γ < −N − 1, δ < −N − 1. Let M be the tridiagonal matrix (2.14) with entries (2.15); then its eigenvalues are given by (2.17).

To conclude for dual Hahn polynomials: there are three sets of recurrence relations of the form (3.1), (3.2). Each of the three cases gives rise to a two-diagonal matrix with simple and explicit eigenvalues, and eigenvectors given in terms of two sets of dual Hahn polynomials.
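As a spot-check of the dual Hahn I case, the forward shift relation for Hahn polynomials [19, equation (9.5.6)], Q_n(x + 1; α, β, N) − Q_n(x; α, β, N) = −n(n + α + β + 1)/((α + 1)N) Q_{n−1}(x; α + 1, β + 1, N − 1), can be verified numerically from the 3F2 series of Q_n (the quoted form of the relation is taken from [19], not from the displays above):

```python
from math import prod, factorial

def poch(a, k):
    """Pochhammer symbol (a)_k."""
    return prod(a + j for j in range(k))

def hahn(n, x, a, b, N):
    """Q_n(x; a, b, N) = 3F2(-n, n+a+b+1, -x; a+1, -N; 1)."""
    return sum(poch(-n, k) * poch(n + a + b + 1, k) * poch(-x, k)
               / (poch(a + 1, k) * poch(-N, k) * factorial(k))
               for k in range(n + 1))

a, b, N = 0.7, 1.3, 6
for n in range(1, N + 1):
    for x in range(N):
        lhs = hahn(n, x + 1, a, b, N) - hahn(n, x, a, b, N)
        rhs = (-n * (n + a + b + 1) / ((a + 1) * N)
               * hahn(n - 1, x, a + 1, b + 1, N - 1))
        assert abs(lhs - rhs) < 1e-8
```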

Doubling Hahn polynomials
The technique presented in the previous section can be applied to other types of discrete orthogonal polynomials with a finite spectrum. We have done this for the Hahn polynomials. One level up in the hierarchy of hypergeometric orthogonal polynomials are the Racah polynomials. We have applied the technique to Racah polynomials as well, but there the description of the results becomes very technical, so we leave the results for Racah polynomials to Appendix A.
For the Hahn polynomials the analysis is again straightforward but tedious, so let us skip the details of the computation and present just the final outcome. Applying the technique described in (3.3)-(3.15), with y_n = Q_n(x; α, β, N) and ŷ_n = Q_n(x̃; α̃, β̃, Ñ), yields the following result.
For the two remaining cases we need not give all details: the matrix M for the case Hahn III is equal to the matrix M for the case Hahn I with the replacement α ↔ β, and so its eigenvalues are ±√(k + β + 1), k = 0, 1, . . . , N. And the matrix M for the case Hahn IV is equal to the matrix M for the case Hahn II with the same replacement α ↔ β, so its eigenvalues are 0 and ±√k, k = 1, . . . , N.

Polynomial systems, Christoffel and Geronimus transforms
So far, we have only partially explained why the technique in the previous sections is referred to as "doubling" polynomials. It is indeed a fact that the combination of two sets of polynomials, each with different parameters, yields a new set of orthogonal polynomials. This can be compared to the well-known situation of combining two sets of generalized Laguerre polynomials (with parameters α and α − 1) into the set of "generalized Hermite polynomials" [8]. There, for α > 0, one defines the polynomials (5.1); the orthogonality relation of the Laguerre polynomials then leads to the orthogonality (5.2) of the polynomials (5.1). Note that the even polynomials are Laguerre polynomials in x² (for parameter α − 1), and the odd polynomials are Laguerre polynomials in x² (for parameter α) multiplied by a factor x. The weight function in (5.2) is common to both types of polynomials. It is this phenomenon that appears in our doubling process of Hahn or dual Hahn polynomials too. From a more general point of view, this fits in the context of obtaining a new family of orthogonal polynomials starting from a set of orthogonal polynomials and its kernel partner related by a Christoffel transform [8,22,32]. In a way, our classification determines for which Christoffel parameter ν (see [32] for the notation) the Christoffel transform of a Hahn, dual Hahn or Racah polynomial is again a Hahn, dual Hahn or Racah polynomial with possibly different parameters. Moreover, this determines the common weight function quite explicitly.
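Since the displays (5.1), (5.2) did not survive here, the following sketch reproduces the described construction up to normalization: even-index polynomials L_n^(α−1)(x²), odd-index polynomials x L_n^(α)(x²), orthogonal with respect to the single weight |x|^(2α−1) e^(−x²) on the real line (this standard generalized Hermite weight is an assumption of the sketch). The half-line integrals are computed exactly through Gamma-function moments:

```python
from math import gamma, factorial, prod

alpha = 1.8

def poch(a, k):
    """Pochhammer symbol (a)_k."""
    return prod(a + j for j in range(k))

def laguerre_coeffs(n, a):
    """Coefficients of L_n^{(a)}(t) in powers of t:
    L_n^{(a)}(t) = sum_k binom(n+a, n-k) (-t)^k / k!."""
    return [(-1) ** k * poch(a + k + 1, n - k)
            / (factorial(n - k) * factorial(k)) for k in range(n + 1)]

def half_line_inner(m, n, a):
    """integral_0^inf t^a e^{-t} L_m^{(a)}(t) L_n^{(a)}(t) dt,
    evaluated exactly via the moments Gamma(a + i + j + 1)."""
    cm, cn = laguerre_coeffs(m, a), laguerre_coeffs(n, a)
    return sum(cm[i] * cn[j] * gamma(a + i + j + 1)
               for i in range(m + 1) for j in range(n + 1))

def doubled_inner(m1, m2):
    """<H_m1, H_m2> with weight |x|^(2 alpha - 1) e^{-x^2}, where
    H_{2n} = L_n^{(alpha-1)}(x^2) and H_{2n+1} = x L_n^{(alpha)}(x^2)."""
    if (m1 - m2) % 2:              # opposite parity: vanishes by symmetry
        return 0.0
    n1, n2 = m1 // 2, m2 // 2
    a = alpha if m1 % 2 else alpha - 1.0
    return half_line_inner(n1, n2, a)

for m1 in range(6):
    for m2 in range(m1):
        assert abs(doubled_inner(m1, m2)) < 1e-8
```

Substituting t = x² turns the even-even and odd-odd integrals over the real line into the usual Laguerre orthogonality on the half-line, which is what the check exploits.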
For a dual Hahn polynomial R_n(x) ≡ R_n(λ(x); γ, δ, N), with data given in (2.9), and a Christoffel parameter ν, the kernel partner P_n(x) is given by the transform (5.3). Because of the recurrence relation (2.4) and what is called the Geronimus transform, the original polynomials can also be expressed in terms of the kernel partners. This is usually done for monic polynomials (see [32, equations (3.2) and (3.3)]), but it can be extended to non-monic dual Hahn polynomials as in (5.4), where the coefficients a_n, b_n are related to the recurrence relation (2.4) as follows:

b_n a_{n−1} = C(n), A(n) a_n + b_n = A(n) + C(n) + Λ(ν). (5.5)

Our classification now shows that only for ν equal to one of the values 0, N or −δ is the kernel partner P_n(x) again a dual Hahn polynomial. Indeed, taking for example ν = 0 in (5.3), we have R_n(0) = 1, and the resulting expression follows from the first relation of dual Hahn II. For the reverse transform (5.4) we use the second relation of dual Hahn II with n shifted to n − 1. For the last case, taking ν = −δ in (5.3), we have R_n(−δ) = (−N − δ)_n/(−N)_n, and the resulting expression follows from the first relation of dual Hahn III. For the transform (5.4) we obtain an expression which equals R_n(x) by the second relation of dual Hahn III.
In a similar way for the Hahn polynomials, putting Q_n(x) ≡ Q_n(x; α, β, N) and using the data (2.5) in (5.3) and in (5.4), (5.5), the cases Hahn I, II, III, IV correspond respectively to the choices −α − 1, 0, N + β + 1 and N for ν. The task of determining for which Christoffel parameter ν the kernel partner of a dual Hahn polynomial is again of the same family is not trivial: it comes down to finding a pair of recurrence relations of the form (3.1), (3.2) with coefficients related to ν as in (5.3). We have classified these for general coefficients, without reference to ν, and we observe that each solution indeed corresponds to a specific choice of ν.
The transforms (5.3), (5.4) give rise to new orthogonal systems, but in general there is no way of writing down the common weight function. However, since here both sets are of the same family, we can actually do this. Let us begin with the dual Hahn polynomials, in particular the case dual Hahn I, for which the corresponding matrix U is given in (3.17). They give rise to a new family of discrete orthogonal polynomials, with the relation M U = U D corresponding to their three-term recurrence relation with Jacobi matrix M (3.19). In general the support of the weight function is equal to the spectrum of the Jacobi matrix [5,18,20,21]. After simplifying with the normalization factors (2.8), this leads to a discrete orthogonality of polynomials, with support equal to the eigenvalues of M (so in this case, the support follows from (3.20)). Concretely, for the case under consideration, the polynomials P_n satisfy a discrete orthogonality relation over the support S = {0, ±√(k(k + γ + δ + 1)), k = 1, 2, . . . , N}, in which the weight at a point q ∈ S with q² = k(k + γ + δ + 1) is proportional to

(−1)^k (2k + γ + δ + 1)(γ + 1)_k (−N)_k N! (1 + δ_{q,0}) / [(k + γ + δ + 1)_{N+1} (δ + 1)_k k!],

and the squared norms involve binomial coefficients of the type (γ + n/2 over n/2).
The ideas described in the three propositions of this section should be clear. It would lead us too far to also give the explicit forms corresponding to the remaining cases. Let us just mention that for these cases too, the support of the new polynomials coincides with the spectrum of the corresponding two-diagonal matrix M.

First application: eigenvalue test matrices
In Sections 3 and 4 we have encountered a number of symmetric two-diagonal matrices M with explicit expressions for the eigenvalues and eigenvectors. In general, one can consider a two-diagonal matrix A of size (m + 2) × (m + 2) of the form (6.1); the eigenvalues of A and of its symmetrized version A′ are the same, and the eigenvectors of A are those of A′ after multiplication by a diagonal matrix (the diagonal matrix that is used in the similarity transformation from A to A′). The importance of the Sylvester-Kac matrix as a test matrix for numerical eigenvalue routines has already been emphasized in the Introduction. In this context, it is also significant that the matrix itself has integer entries only (so there is no rounding error when it is represented on a digital computer), and that the eigenvalues are integers as well. Of course, matrices with rational entries suffice too, since one can always multiply the matrix by an appropriate integer factor.
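The symmetrization is a diagonal similarity: if A has superdiagonal b_k and subdiagonal c_k with b_k c_k > 0, then D^{−1} A D is symmetric with off-diagonal entries √(b_k c_k) for a suitable diagonal D, and the spectrum is unchanged. A small numpy sketch with generic positive entries (not one of the specific matrices of this section):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 6
b = rng.uniform(0.5, 2.0, m)          # superdiagonal of A
c = rng.uniform(0.5, 2.0, m)          # subdiagonal of A

A = np.diag(b, 1) + np.diag(c, -1)

# diagonal similarity D^{-1} A D making A symmetric:
# d_{k+1}/d_k = sqrt(c_k/b_k) balances the two off-diagonals
d = np.ones(m + 1)
for k in range(m):
    d[k + 1] = d[k] * np.sqrt(c[k] / b[k])
D = np.diag(d)
S = np.linalg.inv(D) @ A @ D

assert np.allclose(S, S.T)                          # symmetric form
assert np.allclose(np.diag(S, 1), np.sqrt(b * c))   # entries sqrt(b_k c_k)
ea = np.sort(np.linalg.eigvals(A).real)
es = np.sort(np.linalg.eigvalsh(S))
assert np.allclose(ea, es)                          # same spectrum
```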
Let us now systematically consider the two-diagonal matrices encountered in the classification process of doubling Hahn or dual Hahn polynomials. For the matrix (3.18) of the dual Hahn I case, the corresponding non-symmetric form can be chosen as the two-diagonal matrix (6.2). The eigenvalues are determined by Proposition 2 and given by 0, ±√(k(k + γ + δ + 1)), k = 1, . . . , N. This is (up to a factor 2) the matrix (1.3) mentioned in the Introduction. As a test matrix, the choice γ + δ + 1 = 0 (leaving one free parameter) is interesting, as it gives rise to integer eigenvalues. In Proposition 2 there is the initial condition γ > −1, δ > −1. Clearly, if one is only dealing with eigenvalues, the condition for (6.2) is just γ + δ + 2 ≥ 0. And when one substitutes δ = −γ − 1 in (6.2), there is no condition at all for the one-parameter family of matrices of the form (6.2). For the dual Hahn II case, the matrix (3.22) is given in Proposition 3, and its non-symmetric form can be taken as (6.3). The eigenvalues are given by 0, ±√(k(γ + δ + 1 + 2N − k)), k = 1, . . . , N. There is no simple substitution that reduces these eigenvalues to integers. For the dual Hahn III case, the matrix (2.15) is given in Proposition 4, and its simplest non-symmetric form is (6.4). The eigenvalues are given by (2.17), i.e., ±√((γ + k + 1)(δ + k + 1)), k = 0, . . . , N. Up to a factor 2, this is the third matrix mentioned in the Introduction. The substitution δ = γ leads to a one-parameter family of two-diagonal matrices with square-root-free eigenvalues; in particular, when moreover γ is an integer, all matrix entries and all eigenvalues are integers. The two-diagonal matrices arising from the Hahn doubles or the Racah doubles can also be written in a square-root-free form of type (6.1). However, for these cases the entries of the two-diagonal matrices M are already quite involved (see, e.g., Propositions 6, 7, 12 or 13), and we shall not discuss them further in this context.
The three examples given here, (6.2)-(6.4), are already interesting extensions of the Sylvester-Kac matrix, and potential eigenvalue test matrices.

Further applications: related algebraic structures and finite oscillator models
The original example of a (dual) Hahn double, described here in Section 2, was encountered in the context of a finite oscillator model [14]. In that context, there is also a related algebraic structure. In particular, the two-diagonal matrices M of the form (2.14) or (3.18) are interpreted as representation matrices of an algebra, which can be seen as a deformation of the Lie algebra su(2). Once an algebraic formulation is clear, this structure can be used to model a finite oscillator. The close relationship comes from the fact that for the corresponding finite oscillator model the spectrum of the position operator coincides with the spectrum of the matrix M. Therefore, it is worthwhile to examine the algebraic structures behind the current matrices M. We shall do this explicitly for the three dual Hahn doubles.
For the case dual Hahn I, we return to the form of the matrix M given in (3.18) or (3.19). For any positive integer N, let J_+ denote the lower-triangular tridiagonal (2N + 1) × (2N + 1) matrix (7.1), and let J_− be its transpose. Let us also define the diagonal matrix

J_0 = diag(−N, −N + 1, . . . , N), (7.2)

and the "parity matrix" P given by (7.3). Then it is easy to check that these matrices satisfy the relations (7.4) (as usual, I denotes the identity matrix). The last of these relations is especially interesting: from the algebraic point of view, it introduces a two-parameter deformation or extension of su(2). When γ = δ = −1/2, the equations coincide with the su(2) relations. Another important case is δ = −γ − 1, leaving a one-parameter extension of su(2) without quadratic terms. For the case dual Hahn II, the corresponding expressions for J_+, J_−, J_0 and P are the same as above in (7.1)-(7.3), but with M_k-values given by (3.22). As far as the algebraic relations are concerned, they are also given by (7.4), but with the last relation replaced by (7.5). For the case dual Hahn III, the size of the matrices changes to (2N + 2) × (2N + 2). For J_+ and J_− one can use (7.1), with M_k-values given by (3.23). P has the same expression (7.3), but for J_0 a different diagonal matrix is needed. With these expressions, the algebraic relations are given by (7.4), but with the last relation replaced by

[J_+, J_−] = 2J_0 + 2(γ − δ)J_0 P − ((2N + 2)(γ + δ + 1) + (2γ + 1)(2δ + 1))P + (γ − δ)I.
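For orientation, the undeformed limit can be checked directly: when γ = δ = −1/2 the relations (7.4) reduce to the su(2) relations, which the familiar spin-j representation matrices satisfy exactly (a generic sanity check, not the deformed matrices (7.1)):

```python
import numpy as np

def su2_matrices(j2):
    """Spin-j representation with j = j2/2: J0 diagonal in the
    weight basis, (J+)_{m+1,m} = sqrt((j-m)(j+m+1))."""
    dim = j2 + 1
    ms = np.array([-j2 / 2 + k for k in range(dim)])
    J0 = np.diag(ms)
    Jp = np.zeros((dim, dim))
    for k in range(dim - 1):
        m = ms[k]
        Jp[k + 1, k] = np.sqrt((j2 / 2 - m) * (j2 / 2 + m + 1))
    return Jp, Jp.T, J0

Jp, Jm, J0 = su2_matrices(7)      # spin 7/2, an 8-dimensional irrep
comm = lambda X, Y: X @ Y - Y @ X
assert np.allclose(comm(J0, Jp), Jp)        # [J0, J+] = J+
assert np.allclose(comm(J0, Jm), -Jm)       # [J0, J-] = -J-
assert np.allclose(comm(Jp, Jm), 2 * J0)    # [J+, J-] = 2 J0
```

Note that J_+ is lower triangular here, matching the convention used for (7.1).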
The structure of these algebras is related to that of the so-called algebra H of the dual −1 Hahn polynomials, see [11, 31]. It is not hard to verify that the algebra H, determined by [11, equations (3.4)-(3.6)] or [11, equations (6.2)-(6.4)], can be cast in the form (7.4) (or vice versa). Indeed, starting from the form [11, equations (6.2)-(6.4)] coming from the dual −1 Hahn polynomials, one can take a suitable identification of the generators to get the same form as (7.4):

P² = 1,   P J0 = J0 P,   P J± = −J± P,

where ν, σ, ρ depend on the parameters α, β, N of the dual −1 Hahn polynomials through [11, equations (3.4)-(3.6)]. In our case, the algebraic relations are the same, but the dependence of the "structure constants" in (7.6) on the parameters γ, δ, N of the dual Hahn polynomials is different.
As far as we can see, the doubling of dual Hahn polynomials as classified in this paper gives a set of polynomials that is similar to, but in general not the same as, a set of dual −1 Hahn polynomials [31] (except for specific parameter values: e.g., for δ = −γ − 1 it does coincide with a specific set of dual −1 Hahn polynomials). For general parameters, the support of the weight function, the recurrence (or difference) relations, and the hypergeometric series expression are all different.
The algebraic structures obtained here (or special cases thereof) can be of interest for the construction of finite oscillator models [1,2,3,14]. Two familiar finite oscillator models fall within this framework: the model discussed in [14] corresponds to (7.5) with δ = γ, and the one analysed in [15] to (7.4) with δ = γ. There are other interesting special values as well. For example, the case (7.4) with δ = −γ − 1 gives rise to an interesting algebra and, in particular, to the very simple spectrum (3.20). We intend to investigate the finite oscillator modeled by this case, in particular the corresponding finite Fourier transform; this will be the topic of a separate paper.

Conclusion
We have classified all pairs of recurrence relations for two types of dual Hahn polynomials (i.e., dual Hahn polynomials with different parameters), and refer to these as dual Hahn doubles. The analysis is quite straightforward, and the result is given in Theorem 1, yielding three cases. For each case, we have given the corresponding symmetric two-diagonal matrix M , its matrix of orthonormal eigenvectors U and its eigenvalues in explicit form. The same classification has been obtained for Hahn polynomials and Racah polynomials.
The orthogonality of the matrix U gives rise to new sets of orthogonal polynomials. These sets could in principle also be obtained from, for example, a set of dual Hahn polynomials and a certain Christoffel transform. In our approach, the possible cases where such a transform gives rise to a polynomial of the same type follow naturally, and also the explicit polynomials and their orthogonality relations arise automatically.
As an interesting secondary outcome, we obtain nice one-parameter and two-parameter extensions of the Sylvester-Kac matrix with explicit eigenvalue expressions. Such matrices can be of interest for testing numerical eigenvalue routines.
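As a concrete illustration of such a test, the eigenvalue property of the classical Sylvester-Kac matrix C_{N+1} from the introduction (zero diagonal, subdiagonal 1, 2, . . . , N, superdiagonal N, N − 1, . . . , 1, eigenvalues −N, −N + 2, . . . , N) can be verified numerically; the entries of the new γ, δ extensions are given in Section 3 and are not repeated in this sketch:

```python
import numpy as np

N = 6  # matrix size N + 1
# Sylvester-Kac (Clement) matrix: zero diagonal,
# subdiagonal 1, ..., N and superdiagonal N, ..., 1
C = np.diag(np.arange(1.0, N + 1), -1) + np.diag(np.arange(N, 0.0, -1), 1)

eig = np.sort(np.linalg.eigvals(C).real)
expected = np.arange(-N, N + 1, 2, dtype=float)  # -N, -N+2, ..., N
print(np.allclose(eig, expected))                # True
```

The matrix is nonsymmetric, and for large N its eigenproblem becomes badly conditioned, which is exactly what makes this family (and its parameter-dependent extensions) useful for stress-testing eigenvalue routines.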
The first example of a (dual) Hahn double appeared in a finite oscillator model [14]. For this model, the Hahn polynomials (or their duals) describe the discrete position wavefunction of the oscillator, and the two-diagonal matrix M lies behind an underlying algebraic structure. Here, we have examined the algebraic relations corresponding to the three dual Hahn cases. It is clear that the analysis of finite oscillators for some of these cases is worth pursuing.

A Appendix: doubling Racah polynomials
The technique presented in Sections 3 and 4 is applied here to Racah polynomials.
Note that after interchanging n and x, and α ↔ γ and β ↔ δ, the relations in Racah III coincide with the known forward and backward shift operator relations [19, equations (9.2.6) and (9.2.8)]. The relations in Racah I were already found in [16, equations (5) and (6)].
For each of the four cases, one can translate the set of difference relations into a matrix identity of the form M U = U D. In fact, for each of the four cases, there are three subcases depending on the choice of −N in (A.1). We shall not give all of these cases explicitly: they are easy to construct for any reader who needs them. Let us just give an example or two.
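The translation of a recurrence into a matrix identity M U = U D can be illustrated in the simplest setting of the introduction, since the explicit Racah data are lengthy. A sketch, assuming the symmetric Krawtchouk polynomials K_n(x; 1/2, N) = 2F1(−n, −x; −N; 2): writing the recurrence n K_{n−1}(x) + (N − n) K_{n+1}(x) = (N − 2x) K_n(x) for x = 0, 1, . . . , N gives C^T U = U D with U_{n,k} = K_n(k) and D = diag(N − 2k):

```python
import numpy as np

def krawtchouk(n, x, N):
    # K_n(x; 1/2, N) = 2F1(-n, -x; -N; 2), as a terminating sum
    total, term = 0.0, 1.0
    for j in range(n + 1):
        total += term
        if j < n:  # update the ratio of consecutive hypergeometric terms
            term *= (j - n) * (j - x) / (j - N) * 2 / (j + 1)
    return total

N = 5
# U_{n,k} = K_n(k), eigenvector components of C^T
U = np.array([[krawtchouk(n, k, N) for k in range(N + 1)]
              for n in range(N + 1)])
# C^T: (C^T)_{n,n-1} = n, (C^T)_{n,n+1} = N - n (from the recurrence)
CT = np.diag(np.arange(1.0, N + 1), -1) + np.diag(np.arange(N, 0.0, -1), 1)
D = np.diag([N - 2.0 * k for k in range(N + 1)])

print(np.allclose(CT @ U, U @ D))  # True
```

For the Racah doubles, the same mechanism applies: the two difference relations of a double fill in the two diagonals of M, and the columns of U carry the polynomial values at the spectral points collected in D.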