Inverse Spectral Problems for Tridiagonal N by N Complex Hamiltonians

In this paper, the concept of generalized spectral function is introduced for finite-order tridiagonal symmetric matrices (Jacobi matrices) with complex entries. The structure of the generalized spectral function is described in terms of spectral data consisting of the eigenvalues and normalizing numbers of the matrix. The inverse problems from the generalized spectral function as well as from the spectral data are investigated. In this way, a procedure for constructing complex tridiagonal matrices having real eigenvalues is obtained.


Introduction
Consider the $N \times N$ tridiagonal symmetric matrix (Jacobi matrix) with complex entries
\[
J=\begin{pmatrix}
b_0 & a_0 & 0 & \cdots & 0 & 0\\
a_0 & b_1 & a_1 & \cdots & 0 & 0\\
0 & a_1 & b_2 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & b_{N-2} & a_{N-2}\\
0 & 0 & 0 & \cdots & a_{N-2} & b_{N-1}
\end{pmatrix}, \tag{1.1}
\]
where for each $n$, $a_n$ and $b_n$ are arbitrary complex numbers such that $a_n$ is different from zero:
\[
a_n, b_n \in \mathbb{C}, \qquad a_n \neq 0. \tag{1.2}
\]
In the real case
\[
a_n, b_n \in \mathbb{R}, \qquad a_n \neq 0, \tag{1.3}
\]
the matrix $J$ is Hermitian (self-adjoint), and in this case many versions of the inverse spectral problem for $J$ have been investigated in the literature; see [1,2,3] and the references given therein.
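For the numerical illustrations added below, it is convenient to have a small helper that assembles the matrix (1.1) from the sequences $(a_n)$ and $(b_n)$. The following Python sketch is ours (the name `build_jacobi` is an assumption introduced for illustration, not the paper's notation):

```python
import numpy as np

def build_jacobi(a, b):
    """Assemble the tridiagonal symmetric matrix (1.1): diagonal b_0..b_{N-1},
    off-diagonals a_0..a_{N-2}; complex entries are allowed, as in (1.2)."""
    a = np.asarray(a, dtype=complex)
    b = np.asarray(b, dtype=complex)
    return np.diag(b) + np.diag(a, 1) + np.diag(a, -1)
```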
In the complex case (1.2), the matrix $J$ is in general non-Hermitian (non-selfadjoint), and our aim in this paper is to introduce appropriate spectral data for such a matrix and then to consider the inverse spectral problem, which consists in determining the matrix from its spectral data.
As is known [4,5,6,7,8,9], for non-selfadjoint differential and difference operators a natural spectral characteristic is the so-called generalized spectral function which is a linear continuous functional on an appropriate linear topological space. In general very little is known about the structure of generalized spectral functions.
Given the matrix $J$ of the form (1.1) with the entries satisfying (1.2), consider the eigenvalue problem $Jy = \lambda y$ for a column vector $y = \{y_n\}_{n=0}^{N-1}$, which is equivalent to the second-order linear difference equation
\[
a_{n-1}y_{n-1} + b_n y_n + a_n y_{n+1} = \lambda y_n, \qquad n \in \{0, 1, \ldots, N-1\},
\]
together with the boundary conditions $y_{-1} = y_N = 0$. The case of infinite Jacobi matrices was considered earlier in the papers [6,7,8,9], in which the generalized spectral function was introduced and the inverse problem from the generalized spectral function was studied. However, in the case of infinite Jacobi matrices the structure of the generalized spectral function does not admit any explicit description because of its complexity. Our main achievement in the present paper is that we describe explicitly the structure of the generalized spectral function for the finite-order Jacobi matrices (1.1), (1.2).
The paper is organized as follows. In Section 2, the generalized spectral function is introduced for Jacobi matrices of the form (1.1) with the entries satisfying (1.2). In Section 3, the inverse problem from the generalized spectral function is investigated. It turns out that the matrix (1.1) is not uniquely restored from the generalized spectral function: there are precisely $2^{N-1}$ distinct Jacobi matrices possessing the same generalized spectral function. The inverse problem is solved uniquely from the data consisting of the generalized spectral function and a sequence $\{\sigma_1, \sigma_2, \ldots, \sigma_{N-1}\}$ of signs $+$ and $-$. Section 4 is devoted to some examples. In Section 5, we describe the structure of the generalized spectral function and in this way we define the concept of spectral data for matrices (1.1). In Section 6, the inverse problem from the spectral data is considered. In Section 7, we characterize the generalized spectral functions of real Jacobi matrices among the generalized spectral functions of complex Jacobi matrices. In Section 8, we describe the structure of the generalized spectral functions and spectral data of real Jacobi matrices. Finally, in Section 9, we consider the inverse problem for real Jacobi matrices from the spectral data.
Note that considerations of complex (non-Hermitian) Hamiltonians in quantum mechanics and of complex discrete models have recently received a lot of attention [10,11,12,13]. For some recent papers dealing with the spectral theory of difference (and differential) operators with complex coefficients, see [14,15,16,17,18]. For further reading on the spectral theory of the Jacobi difference equation (three-term recurrence relation), the books [19,20,21,22,23] are excellent sources.
Generalized spectral function

Applying the functional $\Omega$ to both sides of the last equation, and recalling (2.13), (2.14), and (2.15), we obtain for $A_{mn}$ the boundary value problem

Inverse problem from the generalized spectral function
The inverse problem is stated as follows:

1. To see whether it is possible to reconstruct the matrix $J$ from its generalized spectral function $\Omega$ and, if it is possible, to describe the reconstruction procedure.

2. To find necessary and sufficient conditions for a given linear functional $\Omega$ on $\mathbb{C}_{2N}[\lambda]$ to be the generalized spectral function of some matrix $J$ of the form (1.1) with entries belonging to the class (1.2).
If this system is uniquely solvable, and
\[
s_{2n} + \sum_{k=0}^{n-1} \chi_{nk} s_{k+n} \neq 0 \qquad \text{for } n \in \{1, 2, \ldots, N-1\},
\]
then the entries $a_n$, $b_n$ of the required matrix $J$ can be found from (3.2) and (3.3), respectively, $\alpha_n$ being found from (3.10). The next theorem gives the conditions under which the indicated procedure of solving the inverse problem is rigorously justified.
Theorem 2. In order for a given linear functional $\Omega$, defined on $\mathbb{C}_{2N}[\lambda]$, to be the generalized spectral function for some Jacobi matrix $J$ of the form (1.1) with entries belonging to the class (1.2), it is necessary and sufficient that the following conditions be satisfied:

(i) $\langle \Omega, 1 \rangle = 1$;

(ii) if for a polynomial $G(\lambda)$ with $\deg G(\lambda) = n \le N-1$ we have $\langle \Omega, G(\lambda)H(\lambda) \rangle = 0$ for all polynomials $H(\lambda)$ with $\deg H(\lambda) \le n$, then $G(\lambda) \equiv 0$;

(iii) there exists a polynomial $T(\lambda)$ of degree $N$ such that $\langle \Omega, G(\lambda)T(\lambda) \rangle = 0$ for all polynomials $G(\lambda)$ with $\deg G(\lambda) \le N$.

Proof. Necessity. We obtain (i) from (2.7) with $n = m = 0$, recalling (2.5). To prove (ii), we write the expansion $G(\lambda) = \sum_{k=0}^{n} c_k P_k(\lambda)$ and take as $H(\lambda)$ the polynomial $H(\lambda) = \sum_{k=0}^{n} \overline{c_k}\, P_k(\lambda)$, where the bar over a complex number denotes the complex conjugation. Then we find from (2.7) that
\[
0 = \langle \Omega, G(\lambda)H(\lambda) \rangle = \sum_{k=0}^{n} |c_k|^2,
\]
whence $c_0 = c_1 = \cdots = c_n = 0$, that is, $G(\lambda) \equiv 0$. The condition (iii) follows from (2.8) if we take $T(\lambda) = P_N(\lambda)$.

Sufficiency. The proof will be given in several stages.

(a) First we show that, for each $n \in \{1, 2, \ldots, N\}$, the fundamental equation (3.8) is uniquely solvable. Assume the contrary: then for some $n$ the corresponding homogeneous equation (3.11) has a nontrivial solution $(g_k)_{k=0}^{n-1}$. Let $(h_m)_{m=0}^{n-1}$ be an arbitrary vector. We multiply both sides of (3.11) by $h_m$ and sum over $m$ between $0$ and $n-1$. Substituting expression (3.7) for $s_{k+m}$ in the resulting equation and denoting
\[
G(\lambda) = \sum_{k=0}^{n-1} g_k \lambda^k, \qquad H(\lambda) = \sum_{m=0}^{n-1} h_m \lambda^m,
\]
we obtain (3.12): $\langle \Omega, G(\lambda)H(\lambda) \rangle = 0$. Since $(h_m)_{m=0}^{n-1}$ is an arbitrary vector, we find from (3.12), in the light of condition (ii) of the theorem, that $G(\lambda) \equiv 0$, and hence $g_0 = g_1 = \cdots = g_{n-1} = 0$, in spite of our assumption. Thus, for any $n \in \{1, 2, \ldots, N\}$, equation (3.8) has a unique solution.
(b) Let us show that
\[
s_{2n} + \sum_{k=0}^{n-1} \chi_{nk} s_{k+n} \neq 0, \tag{3.13}
\]
where $(\chi_{nk})_{k=0}^{n-1}$ is the solution of the fundamental equation (3.8). (For $n = 0$, the left-hand side of (3.13) is $s_0 = \langle \Omega, 1 \rangle = 1$.) Assume the contrary, i.e., for some $n \in \{1, 2, \ldots, N-1\}$,
\[
s_{2n} + \sum_{k=0}^{n-1} \chi_{nk} s_{k+n} = 0.
\]
Joining this equation to the fundamental equation (3.8), we obtain
\[
s_{n+m} + \sum_{k=0}^{n-1} \chi_{nk} s_{k+m} = 0, \qquad m \in \{0, 1, \ldots, n\}. \tag{3.14}
\]
Let $(h_m)_{m=0}^{n}$ be an arbitrary vector. Multiplying both sides of (3.14) by $h_m$ and summing over $m$ from $0$ to $n$, we obtain
\[
\sum_{m=0}^{n} \Bigl( s_{n+m} + \sum_{k=0}^{n-1} \chi_{nk} s_{k+m} \Bigr) h_m = 0.
\]
Replacing $s_l$ in this by its expression (3.7), we obtain $\langle \Omega, G(\lambda)H(\lambda) \rangle = 0$, where
\[
G(\lambda) = \lambda^n + \sum_{k=0}^{n-1} \chi_{nk} \lambda^k, \qquad H(\lambda) = \sum_{m=0}^{n} h_m \lambda^m.
\]
Since $(h_m)_{m=0}^{n}$ is an arbitrary vector, we obtain from the last equation, in the light of condition (ii) of the theorem, that $G(\lambda) \equiv 0$, which is impossible because $G(\lambda)$ has leading coefficient $1$. Our assumption is therefore false.
Remark 1. It follows from the above solution of the inverse problem that the matrix (1.1) is not uniquely restored from the generalized spectral function. This is linked with the fact that the $\alpha_n$ are determined from (3.10) only up to a sign. To ensure that the inverse problem is uniquely solvable, we have to specify additionally a sequence of signs $+$ and $-$. Namely, let $\{\sigma_1, \sigma_2, \ldots, \sigma_{N-1}\}$ be a given finite sequence, where for each $n \in \{1, 2, \ldots, N-1\}$ the $\sigma_n$ is $+$ or $-$. There are $2^{N-1}$ such distinct sequences. Now, to determine $\alpha_n$ uniquely from (3.10) for $n \in \{1, 2, \ldots, N-1\}$ (remember that we always take $\alpha_0 = 1$), we choose the sign $\sigma_n$ when extracting the square root. In this way we get precisely $2^{N-1}$ distinct Jacobi matrices possessing the same generalized spectral function: for example, two such matrices when $N = 2$ and four such matrices when $N = 3$. The inverse problem is solved uniquely from the data consisting of $\Omega$ and a sequence $\{\sigma_1, \sigma_2, \ldots, \sigma_{N-1}\}$ of signs $+$ and $-$.
Using the numbers
\[
s_l = \langle \Omega, \lambda^l \rangle, \qquad l = 0, 1, \ldots, 2N, \tag{3.23}
\]
let us introduce the determinants
\[
D_n = \begin{vmatrix}
s_0 & s_1 & \cdots & s_n\\
s_1 & s_2 & \cdots & s_{n+1}\\
\vdots & \vdots & \ddots & \vdots\\
s_n & s_{n+1} & \cdots & s_{2n}
\end{vmatrix}, \qquad n = 0, 1, \ldots, N. \tag{3.24}
\]
It turns out that Theorem 2 is equivalent to the following theorem.

Theorem 3. In order for a given linear functional $\Omega$, defined on $\mathbb{C}_{2N}[\lambda]$, to be the generalized spectral function for some Jacobi matrix $J$ of the form (1.1) with entries belonging to the class (1.2), it is necessary and sufficient that
\[
D_0 = 1; \qquad D_n \neq 0, \quad n \in \{1, 2, \ldots, N-1\}; \qquad D_N = 0. \tag{3.25}
\]

Proof. Necessity. The condition $D_0 = 1$ follows from $1 = \langle \Omega, 1 \rangle = s_0 = D_0$. By Theorem 2, if for a polynomial
\[
G(\lambda) = \sum_{k=0}^{n} g_k \lambda^k, \qquad n \le N-1, \tag{3.26}
\]
the relation
\[
\langle \Omega, G(\lambda)H(\lambda) \rangle = 0 \tag{3.27}
\]
holds for all polynomials
\[
H(\lambda) = \sum_{m=0}^{n} h_m \lambda^m, \tag{3.28}
\]
then $G(\lambda) \equiv 0$. Taking $H(\lambda) = \lambda^m$, $m = 0, 1, \ldots, n$, in (3.27), we arrive at the system
\[
\sum_{k=0}^{n} g_k s_{k+m} = 0, \qquad m \in \{0, 1, \ldots, n\}. \tag{3.29}
\]
This is a linear homogeneous system of algebraic equations with respect to $g_0, g_1, \ldots, g_n$, and the determinant of this system coincides with the determinant $D_n$. Since this system has only the trivial solution $g_0 = g_1 = \cdots = g_n = 0$, we have $D_n \neq 0$ for $n \le N-1$.
To prove that $D_N = 0$, we write equation (3.8) for $n = N$:
\[
s_{N+m} + \sum_{k=0}^{N-1} \chi_{Nk} s_{k+m} = 0, \qquad m \in \{0, 1, \ldots, N-1\}.
\]
This equation has the unique solution $\chi_{N0}, \chi_{N1}, \ldots, \chi_{N,N-1}$. Next, these equalities together with (3.9) can be written in the form
\[
s_{N+m} = -\sum_{k=0}^{N-1} \chi_{Nk} s_{k+m}, \qquad m \in \{0, 1, \ldots, N\}.
\]
This means that the last column in the determinant $D_N$ is a linear combination of the remaining columns. Therefore $D_N = 0$.

Sufficiency. Given the linear functional $\Omega : \mathbb{C}_{2N}[\lambda] \to \mathbb{C}$ satisfying the conditions (3.25), it is enough to show that then the conditions of Theorem 2 are satisfied. We have $\langle \Omega, 1 \rangle = s_0 = D_0 = 1$. Next, let (3.27) hold for a polynomial $G(\lambda)$ of the form (3.26) and all polynomials $H(\lambda)$ of the form (3.28). Then (3.29) holds. Since the determinant of this system is $D_n$ and $D_n \neq 0$ for $n \le N-1$, we get that $g_0 = g_1 = \cdots = g_n = 0$, that is, $G(\lambda) \equiv 0$. Finally, we have to show that there is a polynomial $T(\lambda)$ of degree $N$ such that $\langle \Omega, G(\lambda)T(\lambda) \rangle = 0$ for all polynomials $G(\lambda)$ with $\deg G(\lambda) \le N$. For this purpose we consider the homogeneous system
\[
\sum_{k=0}^{N} t_k s_{k+m} = 0, \qquad m \in \{0, 1, \ldots, N\}.
\]
Since the determinant of this system is $D_N = 0$, it has a nontrivial solution $t_0, t_1, \ldots, t_N$; moreover, $t_N \neq 0$, since otherwise the first $N$ equations together with $D_{N-1} \neq 0$ would force all the $t_k$ to vanish. Hence $T(\lambda) = \sum_{k=0}^{N} t_k \lambda^k$ is a polynomial of degree $N$ with $\langle \Omega, \lambda^m T(\lambda) \rangle = 0$ for $m = 0, 1, \ldots, N$, and therefore $\langle \Omega, G(\lambda)T(\lambda) \rangle = 0$ for every polynomial $G(\lambda)$ with $\deg G(\lambda) \le N$.

Thus, if the conditions of Theorem 3 or, equivalently, the conditions of Theorem 2 are satisfied, then the entries $a_n$, $b_n$ of the matrix $J$ for which $\Omega$ is the generalized spectral function are recovered by the formulas (3.35), (3.36), where $D_n$ is defined by (3.24) and (3.23), and $\Delta_n$ is the determinant obtained from $D_n$ by replacing its last column by the column with the components $s_{n+1}, s_{n+2}, \ldots, s_{2n+1}$.
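The reconstruction can be sketched numerically. Since the displays (3.35) and (3.36) are not reproduced above, the Python sketch below uses the classical Hankel-determinant expressions $a_n^2 = D_{n-1}D_{n+1}/D_n^2$ and $b_n = \Delta_n/D_n - \Delta_{n-1}/D_{n-1}$ (with the conventions $D_{-1} = 1$, $\Delta_{-1} = 0$); these are the standard formulas of this type and agree with the quantities defined in the text, but treat them as our reading rather than a verbatim transcription. The argument `signs` realizes the sequence $\{\sigma_1, \ldots, \sigma_{N-1}\}$ of Remark 1.

```python
import numpy as np

def hankel(s, n):
    """Moment matrix (s_{j+k})_{j,k=0..n} built from s_l = <Omega, lambda^l>."""
    return np.array([[s[j + k] for k in range(n + 1)] for j in range(n + 1)],
                    dtype=complex)

def D(s, n):
    """Determinant D_n of (3.24); D_{-1} is taken to be 1."""
    return 1.0 if n < 0 else np.linalg.det(hankel(s, n))

def Delta(s, n):
    """Delta_n: D_n with its last column replaced by s_{n+1}, ..., s_{2n+1}."""
    if n < 0:
        return 0.0
    m = hankel(s, n)
    m[:, n] = [s[j + n + 1] for j in range(n + 1)]
    return np.linalg.det(m)

def reconstruct_jacobi(s, N, signs=None):
    """Recover b_0..b_{N-1} and a_0..a_{N-2} from the moments s_0..s_{2N-1}.
    `signs` is the sequence (sigma_1, ..., sigma_{N-1}) fixing the branch of
    each square root; all '+' by default."""
    signs = signs or [+1] * (N - 1)
    b = [Delta(s, n) / D(s, n) - Delta(s, n - 1) / D(s, n - 1) for n in range(N)]
    a = [signs[n] * np.sqrt(D(s, n - 1) * D(s, n + 1) / D(s, n) ** 2)
         for n in range(N - 1)]
    return np.array(a), np.array(b)
```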

Examples
In this section we consider some simple examples to illustrate the solution of the inverse problem given above in Section 3. Let $\lambda_1, \ldots, \lambda_N$ be distinct real numbers, let $c_1, \ldots, c_N$ be complex numbers with
\[
\sum_{k=1}^{N} c_k = 1, \qquad \operatorname{Re} c_k > 0 \quad (k = 1, \ldots, N),
\]
and consider the functional
\[
\langle \Omega, G(\lambda) \rangle = \sum_{k=1}^{N} c_k G(\lambda_k), \qquad G(\lambda) \in \mathbb{C}_{2N}[\lambda].
\]
Let us check that $\Omega$ satisfies the conditions of Theorem 2. In fact, obviously, $\langle \Omega, 1 \rangle = 1$. Next, suppose that for a polynomial $G(\lambda) = \sum_k g_k \lambda^k$ with $\deg G(\lambda) \le N-1$ we have $\langle \Omega, G(\lambda)H(\lambda) \rangle = 0$ for all polynomials $H(\lambda)$ with $\deg H(\lambda) \le \deg G(\lambda)$. Taking, in particular, $H(\lambda) = \sum_k \overline{g_k} \lambda^k$, where the bar over a complex number denotes the complex conjugation, and using the reality of the $\lambda_k$, we get
\[
\sum_{k=1}^{N} c_k |G(\lambda_k)|^2 = 0.
\]
Hence, taking the real part and using the condition $\operatorname{Re} c_k > 0$ ($k = 1, \ldots, N$), we get $G(\lambda_k) = 0$, $k = 1, \ldots, N$. Therefore $G(\lambda) \equiv 0$, because $\lambda_1, \ldots, \lambda_N$ are distinct and $G(\lambda)$ is a polynomial with $\deg G(\lambda) \le N-1$.
Further, for the polynomial $T(\lambda) = (\lambda - \lambda_1)(\lambda - \lambda_2)\cdots(\lambda - \lambda_N)$ we have $\langle \Omega, G(\lambda)T(\lambda) \rangle = 0$ for all polynomials $G(\lambda)$, so that the condition (iii) of Theorem 2 is also satisfied. Thus the functional $\Omega$ satisfies all the conditions of Theorem 2. Consider the case $N = 2$ and take the functional $\Omega$ defined by the formula
\[
\langle \Omega, G(\lambda) \rangle = c\,G(\lambda_1) + (1 - c)\,G(\lambda_2),
\]
where $c$ is any complex number such that $c \neq 0$ and $c \neq 1$. Let us solve the inverse problem for this functional by using formulas (3.35) and (3.36). Computing the moments $s_l = \langle \Omega, \lambda^l \rangle$ and the determinants (3.24), we find $D_0 = 1$, $D_1 = c(1-c)(\lambda_1 - \lambda_2)^2 \neq 0$, and $D_2 = 0$; therefore the functional $\Omega$ satisfies all the conditions of Theorem 3. According to formulas (3.35) and (3.36), we find the entries $b_0$, $b_1$ and $a_0^2$; extracting the square root gives two matrices $J_{\pm}$, differing in the sign of $a_0$, for which $\Omega$ is the generalized spectral function. The characteristic polynomials of the matrices $J_{\pm}$ coincide and are equal to $(\lambda - \lambda_1)(\lambda - \lambda_2)$, so that $J_{\pm}$ are complex tridiagonal matrices with the real eigenvalues $\lambda_1$, $\lambda_2$. In the case $N = 3$ the analogous functional leads, in the same way, to four matrices possessing the same generalized spectral function.
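As a concrete run (the numbers $\lambda_1 = 0$, $\lambda_2 = 1$ and the weight $c$ are our sample choices, made for illustration), the $N = 2$ example can be fed to the sketches given earlier:

```python
# Moments of <Omega, G> = c*G(0) + (1-c)*G(1) for a complex weight c (ours).
c, l1, l2 = 0.3 + 0.4j, 0.0, 1.0
s = [c * l1**l + (1 - c) * l2**l for l in range(4)]   # s_0..s_3
a, b = reconstruct_jacobi(s, N=2)                     # complex entries b_0, b_1, a_0
J = build_jacobi(a, b)
print(np.linalg.eigvals(J))                           # ~ {0, 1}: real spectrum
```

One finds $b_0 = 1 - c$, $b_1 = c$, $a_0^2 = c(1-c)$: a genuinely complex matrix whose eigenvalues are nevertheless the real numbers $0$ and $1$.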

Structure of the generalized spectral function and spectral data
Let J be a Jacobi matrix of the form (1.1) with the entries satisfying (1.2). Next, let Ω be the generalized spectral function of J, defined above in Section 2. The following theorem describes the structure of Ω.
Theorem 4. Let $\Omega$ be the generalized spectral function of a matrix $J$ of the form (1.1), (1.2). Then there exist distinct complex numbers $\lambda_1, \ldots, \lambda_p$ ($1 \le p \le N$), positive integers $m_1, \ldots, m_p$ with $m_1 + \cdots + m_p = N$, and complex numbers $\beta_{kj}$ ($j = 1, \ldots, m_k$, $k = 1, \ldots, p$) such that
\[
\langle \Omega, G(\lambda) \rangle = \sum_{k=1}^{p} \sum_{j=1}^{m_k} \frac{\beta_{kj}}{(j-1)!}\, G^{(j-1)}(\lambda_k) \qquad \text{for all } G(\lambda) \in \mathbb{C}_{2N}[\lambda], \tag{5.1}
\]
where $\lambda_1, \ldots, \lambda_p$ are the distinct eigenvalues of $J$ and $m_1, \ldots, m_p$ are their multiplicities.

Proof. Let $J$ be a matrix of the form (1.1), (1.2). Consider the second-order linear difference equation
\[
a_{n-1}y_{n-1} + b_n y_n + a_n y_{n+1} = \lambda y_n, \qquad n \in \{0, 1, \ldots, N-1\}, \tag{5.2}
\]
where $\{y_n\}_{n=-1}^{N}$ is a desired solution (here we set $a_{-1} = 1$). Denote by $\{P_n(\lambda)\}_{n=-1}^{N}$ and $\{Q_n(\lambda)\}_{n=-1}^{N}$ the solutions of equation (5.2) satisfying the initial conditions
\[
P_{-1}(\lambda) = 0, \quad P_0(\lambda) = 1; \qquad Q_{-1}(\lambda) = -1, \quad Q_0(\lambda) = 0. \tag{5.3}
\]
For each $n \ge 0$, $P_n(\lambda)$ is a polynomial of degree $n$ and is called a polynomial of the first kind (note that $P_n(\lambda)$ is the same polynomial as in Section 2), and $Q_n(\lambda)$ is a polynomial of degree $n-1$ and is known as a polynomial of the second kind. It is straightforward to verify that the entries $R_{nm}(\lambda)$ of the matrix $R(\lambda) = (J - \lambda I)^{-1}$ (the resolvent of $J$) are expressed through the polynomials $P_n(\lambda)$, $Q_n(\lambda)$ and the ratio $Q_N(\lambda)/P_N(\lambda)$. Let $f$ be an arbitrary element (column vector) of $\mathbb{C}^N$, with the components $f_0, f_1, \ldots, f_{N-1}$. Since $R(\lambda) = -\lambda^{-1} I + O(\lambda^{-2})$ as $|\lambda| \to \infty$, we have for each $n \in \{0, 1, \ldots, N-1\}$
\[
f_n = -\frac{1}{2\pi i} \oint_{\Gamma_r} (R(\lambda)f)_n \, d\lambda,
\]
where $r$ is a sufficiently large positive number and $\Gamma_r$ is the circle in the $\lambda$-plane of radius $r$ centered at the origin. Denote by $\lambda_1, \ldots, \lambda_p$ all the distinct roots of the polynomial $P_N(\lambda)$ (which coincides by (2.6) with the characteristic polynomial of the matrix $J$ up to a constant factor) and by $m_1, \ldots, m_p$ their multiplicities, respectively:
\[
P_N(\lambda) = c(\lambda - \lambda_1)^{m_1} \cdots (\lambda - \lambda_p)^{m_p}, \tag{5.8}
\]
where $c$ is a constant. We have $1 \le p \le N$ and $m_1 + \cdots + m_p = N$. By (5.8), we can rewrite the rational function $Q_N(\lambda)/P_N(\lambda)$ as the sum of partial fractions:
\[
\frac{Q_N(\lambda)}{P_N(\lambda)} = \sum_{k=1}^{p} \sum_{j=1}^{m_k} \frac{\beta_{kj}}{(\lambda - \lambda_k)^{j}}, \tag{5.9}
\]
where the $\beta_{kj}$ are some uniquely determined complex numbers depending on the matrix $J$. Substituting (5.6) in (5.7) and taking into account (5.5), (5.9), we get, applying the residue theorem and then passing to the limit as $r \to \infty$, an expansion of $f_n$ over the values of the polynomials $P_m$ and their derivatives at the points $\lambda_k$, with coefficients expressed through the numbers $\beta_{kj}$. Now define on $\mathbb{C}_{2N}[\lambda]$ the functional $\Omega$ by the formula (5.1); the obtained expansion shows that this $\Omega$ is the generalized spectral function of $J$. The collection of the quantities $\{\lambda_k, \beta_{kj}\ (j = 1, \ldots, m_k,\ k = 1, \ldots, p)\}$ is called the spectral data of the matrix $J$, and the sequence $\beta_{k1}, \ldots, \beta_{k m_k}$ we call the normalizing chain (of the matrix $J$) associated with the eigenvalue $\lambda_k$ (the sense of "normalizing" will be clear below in Section 8).
If we delete the first row and the first column of the matrix $J$ given in (1.1), then we get the new matrix
\[
J^{(1)}=\begin{pmatrix}
b_1 & a_1 & \cdots & 0 & 0\\
a_1 & b_2 & \cdots & 0 & 0\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & \cdots & b_{N-2} & a_{N-2}\\
0 & 0 & \cdots & a_{N-2} & b_{N-1}
\end{pmatrix}.
\]
The matrix $J^{(1)}$ is called the first truncated matrix (with respect to the matrix $J$).
Theorem 5. The normalizing numbers $\beta_{kj}$ of the matrix $J$ can be calculated by decomposing the rational function
\[
\frac{\det(\lambda I - J^{(1)})}{\det(\lambda I - J)}
\]
into partial fractions.
Proof. Let us denote the polynomials of the first and the second kinds corresponding to the matrix $J^{(1)}$ by $P_n^{(1)}(\lambda)$ and $Q_n^{(1)}(\lambda)$, respectively. Then
\[
Q_{n+1}(\lambda) = \frac{1}{a_0}\, P_n^{(1)}(\lambda), \qquad n \in \{-1, 0, \ldots, N-1\}. \tag{5.16}
\]
Indeed, both sides of this equality are solutions of the same difference equation (equation (5.2) written for the matrix $J^{(1)}$), and the sides coincide for $n = -1$ and $n = 0$. Therefore the equality holds by the uniqueness theorem for solutions. Consequently, taking into account Lemma 1 and using (5.16), we find that $Q_N(\lambda)$ coincides, up to a constant factor, with the characteristic polynomial of the truncated matrix $J^{(1)}$. Comparing this with (2.6), we get
\[
\frac{Q_N(\lambda)}{P_N(\lambda)} = \frac{\det(\lambda I - J^{(1)})}{\det(\lambda I - J)},
\]
so that the statement of the theorem follows from (5.9).
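Restricting to simple eigenvalues (so that each normalizing chain reduces to a single number $\beta_{k1}$), the partial-fraction computation of Theorem 5 can be sketched numerically as follows; the function `normalizing_numbers` and its residue formula are our illustration of the theorem under the reading above, not the paper's notation:

```python
import numpy as np

def normalizing_numbers(J):
    """Residues of det(lambda*I - J1)/det(lambda*I - J) at the eigenvalues of J,
    where J1 is the first truncated matrix; eigenvalues are assumed simple
    (multiple eigenvalues would require the full chains beta_kj)."""
    J = np.asarray(J, dtype=complex)
    lam = np.linalg.eigvals(J)
    chi = np.poly(J)            # coefficients of det(lambda*I - J)
    chi1 = np.poly(J[1:, 1:])   # same for the truncated matrix J^(1)
    beta = np.polyval(chi1, lam) / np.polyval(np.polyder(chi), lam)
    return lam, beta
```

For the $2 \times 2$ example of Section 4 this returns $\beta_1 = c$ at $\lambda_1$ and $\beta_2 = 1 - c$ at $\lambda_2$, in agreement with the defining formula of $\Omega$.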
Inverse problem from the spectral data

By the inverse spectral problem we mean the problem of recovering the matrix $J$, i.e., its entries $a_n$ and $b_n$, from the spectral data.

Theorem 6. Let an arbitrary collection of complex numbers
\[
\{\lambda_k,\ \beta_{kj}\ (j = 1, \ldots, m_k,\ k = 1, \ldots, p)\} \tag{6.1}
\]
be given, where $\lambda_1, \lambda_2, \ldots, \lambda_p$ ($1 \le p \le N$) are distinct, $1 \le m_k \le N$, and $m_1 + \cdots + m_p = N$. In order for this collection to be the spectral data for some Jacobi matrix $J$ of the form (1.1) with entries belonging to the class (1.2), it is necessary and sufficient that the following two conditions be satisfied:

(i) $\sum_{k=1}^{p} \beta_{k1} = 1$;

(ii) $D_n \neq 0$ for $n \in \{1, 2, \ldots, N-1\}$, and $D_N = 0$, where $D_n$ is defined by (3.24) in which
\[
s_l = \sum_{k=1}^{p} \sum_{j=1}^{n_{kl}} \binom{l}{j-1} \beta_{kj} \lambda_k^{\,l-j+1}, \qquad n_{kl} = \min\{m_k, l+1\}, \tag{6.2}
\]
and $\binom{l}{j-1}$ is a binomial coefficient.

Proof. The necessity of the conditions of the theorem follows from Theorem 3, because the generalized spectral function of the matrix $J$ is defined by the spectral data according to formula (5.1), and therefore the quantity (6.2) coincides with $\langle \Omega, \lambda^l \rangle$. Besides, $\sum_{k=1}^{p} \beta_{k1} = \langle \Omega, 1 \rangle = s_0 = D_0$.
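The quantity (6.2) is straightforward to evaluate; the following sketch (ours, written against the formula as reconstructed above) computes the moments $s_l$ from a collection (6.1), with `beta[k]` holding the normalizing chain of $\lambda_k$:

```python
from math import comb

def moments_from_spectral_data(lam, beta, L):
    """s_l of (6.2): sum over k and j <= min(m_k, l+1) of C(l, j-1) * beta_kj
    * lam_k^(l-j+1), for l = 0..L; beta[k] is the chain (beta_k1, ..., beta_km_k)."""
    s = []
    for l in range(L + 1):
        val = 0j
        for lk, chain in zip(lam, beta):
            for j, bkj in enumerate(chain, start=1):
                if j - 1 <= l:                  # realizes n_kl = min(m_k, l+1)
                    val += comb(l, j - 1) * bkj * lk ** (l - j + 1)
        s.append(val)
    return s
```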
Note that the condition (iii) of Theorem 2 holds with
\[
T(\lambda) = \prod_{k=1}^{p} (\lambda - \lambda_k)^{m_k}. \tag{6.3}
\]
Let us prove the sufficiency. Assume that we have a collection of quantities (6.1) satisfying the conditions of the theorem. Using these data we construct the functional $\Omega$ on $\mathbb{C}_{2N}[\lambda]$ by formula (5.1). Then this functional $\Omega$ satisfies the conditions of Theorem 3, and therefore there exists a matrix $J$ of the form (1.1), (1.2) for which $\Omega$ is the generalized spectral function. Now we have to prove that the collection (6.1) is the spectral data for the recovered matrix $J$. For this purpose we define the polynomials $P_{-1}(\lambda), P_0(\lambda), \ldots, P_N(\lambda)$ as the solutions of equation (5.2) satisfying the initial conditions (5.3), so that the relations (6.4) and (6.5) hold. We show that (5.8) holds, which will mean, in particular, that $\lambda_1, \ldots, \lambda_p$ are eigenvalues of the matrix $J$ with the multiplicities $m_1, \ldots, m_p$, respectively. Let $T(\lambda)$ be defined by (6.3). Let us show that there exists a constant $c$ such that
\[
(\lambda - b_{N-1})P_{N-1}(\lambda) - a_{N-2}P_{N-2}(\lambda) = c\,T(\lambda) \tag{6.6}
\]
for all $\lambda \in \mathbb{C}$. If we prove this, then from here and (5.2) with $y_k = P_k(\lambda)$ and $n = N-1$ we get that $P_N(\lambda) = cT(\lambda)$ (with a new constant $c$).
Hence, taking into account the relations (2.7), (2.8) and (6.4), (6.5), we find from (6.7) that the identity (6.6) holds with some constant $c$; moreover, $c \neq 0$, since $P_N(\lambda)$ has degree $N$. Consequently $P_N(\lambda) = cT(\lambda)$, so that (5.8) holds, and the collection (6.1) is the spectral data of the recovered matrix $J$.

Generalized spectral functions of real Jacobi matrices

In this section we characterize the generalized spectral functions of real Jacobi matrices among the generalized spectral functions of complex Jacobi matrices. Denote by $\mathbb{R}_{2m}[\lambda]$ the set of polynomials of degree at most $2m$ with real coefficients.

Lemma 3. Let $\Omega$ be a linear functional on $\mathbb{C}_{2N}[\lambda]$ and set $s_l = \langle \Omega, \lambda^l \rangle$. Then $\Omega$ is positive, in the sense that $\langle \Omega, G(\lambda) \rangle > 0$ for every polynomial $G(\lambda) \in \mathbb{R}_{2m}[\lambda]$ ($m \le N-1$) which is not identically zero and satisfies $G(\lambda) \ge 0$ for all $\lambda \in \mathbb{R}$, if and only if the numbers $s_0, s_1, \ldots, s_{2N-2}$ are real and $D_n > 0$ for $n \in \{0, 1, \ldots, N-1\}$.

Proof. Any polynomial $G(\lambda) \in \mathbb{R}_{2m}[\lambda]$ which is not identically zero and which satisfies the inequality
\[
G(\lambda) \ge 0 \quad \text{for all } \lambda \in \mathbb{R} \tag{7.1}
\]
can be represented in the form
\[
G(\lambda) = A^2(\lambda) + B^2(\lambda), \tag{7.2}
\]
where $A(\lambda)$, $B(\lambda)$ are polynomials of degrees $\le m$ with real coefficients. Indeed, it follows from (7.1) that the polynomial $G(\lambda)$ has even degree: $\deg G(\lambda) = 2p$, where $p \le m$. Therefore its decomposition into linear factors has the form
\[
G(\lambda) = c \prod_{k=1}^{p} (\lambda - \alpha_k - i\beta_k)(\lambda - \alpha_k + i\beta_k),
\]
where $c > 0$, $\beta_k \ge 0$, and the $\alpha_k$ are real (among the roots $\alpha_k + i\beta_k$ there may, of course, be equal ones). Now setting
\[
A(\lambda) + iB(\lambda) = \sqrt{c} \prod_{k=1}^{p} (\lambda - \alpha_k - i\beta_k),
\]
we get that the polynomials $A(\lambda)$, $B(\lambda)$ have real coefficients and (7.2) holds. Now writing
\[
A(\lambda) = \sum_{k=0}^{m} x_k \lambda^k, \qquad B(\lambda) = \sum_{k=0}^{m} y_k \lambda^k,
\]
where $x_k$, $y_k$ are real numbers, we find
\[
\langle \Omega, G(\lambda) \rangle = \sum_{j,k=0}^{m} s_{j+k} x_j x_k + \sum_{j,k=0}^{m} s_{j+k} y_j y_k.
\]
This implies the statement of the lemma.

Theorem 7. In order for a given linear functional $\Omega$, defined on $\mathbb{C}_{2N}[\lambda]$, to be the generalized spectral function of some real Jacobi matrix $J$ of the form (1.1) with entries belonging to the class (1.3), it is necessary and sufficient that the following conditions be satisfied:

(i) $\langle \Omega, 1 \rangle = 1$;

(ii) $\langle \Omega, G(\lambda) \rangle > 0$ for every polynomial $G(\lambda) \in \mathbb{R}_{2N-2}[\lambda]$ which is not identically zero and satisfies $G(\lambda) \ge 0$ for all $\lambda \in \mathbb{R}$;

(iii) there exists a polynomial $T(\lambda)$ of degree $N$ such that $\langle \Omega, G(\lambda)T(\lambda) \rangle = 0$ for all polynomials $G(\lambda)$ with $\deg G(\lambda) \le N$.
Proof. Necessity. The condition $\langle \Omega, 1 \rangle = 1$ follows from (2.7) with $m = n = 0$. To prove the positivity on $\mathbb{R}_{2N-2}[\lambda]$ of the generalized spectral function $\Omega$ of the real Jacobi matrix $J$, take an arbitrary polynomial $G(\lambda) \in \mathbb{R}_{2N-2}[\lambda]$ which is not identically zero and which satisfies the inequality $G(\lambda) \ge 0$ for all $\lambda \in \mathbb{R}$. This polynomial can be represented in the form (see the proof of Lemma 3)
\[
G(\lambda) = A^2(\lambda) + B^2(\lambda), \tag{7.3}
\]
where $A(\lambda)$, $B(\lambda)$ are polynomials of degrees $\le N-1$ with real coefficients. Since the polynomials $P_0(\lambda), P_1(\lambda), \ldots, P_{N-1}(\lambda)$ have real coefficients (because $J$ is a real matrix) and they form a basis of $\mathbb{R}_{N-1}[\lambda]$, we can write the decompositions
\[
A(\lambda) = \sum_{k=0}^{N-1} c_k P_k(\lambda), \qquad B(\lambda) = \sum_{k=0}^{N-1} d_k P_k(\lambda),
\]
where $c_k$, $d_k$ are real numbers, not all zero. Therefore, using the "orthogonality" property (2.7), we get from (7.3)
\[
\langle \Omega, G(\lambda) \rangle = \sum_{k=0}^{N-1} c_k^2 + \sum_{k=0}^{N-1} d_k^2 > 0.
\]
The property of $\Omega$ indicated in the condition (iii) of the theorem follows from (2.8) if we take $T(\lambda) = P_N(\lambda)$.
Sufficiency. It follows from the conditions of the theorem that all the conditions of Theorem 2 are satisfied. In fact, we need to verify only the condition (ii) of Theorem 2. Let for some polynomial $G(\lambda)$, $\deg G(\lambda) = n \le N-1$,
\[
\langle \Omega, G(\lambda)H(\lambda) \rangle = 0 \tag{7.4}
\]
for all polynomials $H(\lambda)$ with $\deg H(\lambda) \le n$. We have to show that then $G(\lambda) \equiv 0$. Setting $G(\lambda) = \sum_{k=0}^{n} g_k \lambda^k$, we get from (7.4) that
\[
\sum_{k=0}^{n} g_k s_{k+m} = 0, \qquad m \in \{0, 1, \ldots, n\}. \tag{7.5}
\]
This is a linear homogeneous system of algebraic equations with respect to $g_0, g_1, \ldots, g_n$, and the determinant of this system coincides with the determinant $D_n$. From the condition (ii) of the theorem it follows by Lemma 3 that $D_n > 0$. So $D_n \neq 0$, and hence system (7.5) has only the trivial solution $g_0 = g_1 = \cdots = g_n = 0$. Thus, all the conditions of Theorem 2 are satisfied. Therefore there exists, generally speaking, a complex Jacobi matrix $J$ of the form (1.1), (1.2) for which $\Omega$ is the generalized spectral function. This matrix $J$ is constructed by using formulas (3.35), (3.36). It remains to show that the matrix $J$ is real. But this follows from the fact that, by Lemma 2 and Lemma 3, we have $D_n > 0$ for $n \in \{0, 1, \ldots, N-1\}$ and the determinants $\Delta_n$ are real. Therefore formulas (3.35), (3.36) imply that the matrix $J$ is real.
If we take into account Lemma 3, then it is easily seen from the proof of Theorem 3 that Theorem 7 is equivalent to the following theorem.

Theorem 8. In order for a given linear functional $\Omega$, defined on $\mathbb{C}_{2N}[\lambda]$, to be the generalized spectral function of some real Jacobi matrix $J$ of the form (1.1) with entries belonging to the class (1.3), it is necessary and sufficient that the numbers $s_l = \langle \Omega, \lambda^l \rangle$ be real and
\[
D_0 = 1; \qquad D_n > 0, \quad n \in \{1, 2, \ldots, N-1\}; \qquad D_N = 0.
\]
Structure of generalized spectral functions and spectral data of real Jacobi matrices

Lemma 5. The equation
\[
a_n\bigl[P'_{n+1}(\lambda)P_n(\lambda) - P'_n(\lambda)P_{n+1}(\lambda)\bigr] - a_{n-1}\bigl[P'_n(\lambda)P_{n-1}(\lambda) - P'_{n-1}(\lambda)P_n(\lambda)\bigr] = P_n^2(\lambda)
\]
holds, where the prime denotes the derivative with respect to $\lambda$.
Summing the last equation for the values $n = 0, 1, \ldots, m$ ($m \le N-1$) and using the initial conditions (5.3), we obtain
\[
a_m\bigl[P'_{m+1}(\lambda)P_m(\lambda) - P'_m(\lambda)P_{m+1}(\lambda)\bigr] = \sum_{n=0}^{m} P_n^2(\lambda). \tag{8.3}
\]

Lemma 6. All the roots of the polynomial $P_N(\lambda)$ constructed from a real Jacobi matrix $J$ are simple.

Proof. Let $\lambda_0$ be a root of the polynomial $P_N(\lambda)$. The root $\lambda_0$ is an eigenvalue of the matrix $J$ by (2.6), and hence it is real by the Hermiticity of $J$. Putting $\lambda = \lambda_0$ in (8.3) with $m = N-1$ and using $P_N(\lambda_0) = 0$, we get
\[
a_{N-1}P'_N(\lambda_0)P_{N-1}(\lambda_0) = \sum_{n=0}^{N-1} P_n^2(\lambda_0). \tag{8.5}
\]
The right-hand side of (8.5) is different from zero, because the polynomials $P_n(\lambda)$ have real coefficients and hence are real for real values of $\lambda$, and $P_0(\lambda) = 1$. Consequently $P'_N(\lambda_0) \neq 0$, that is, the root $\lambda_0$ of the polynomial $P_N(\lambda)$ is simple.

Corollary. Each real Jacobi matrix $J$ of the form (1.1), (1.3) has precisely $N$ distinct real eigenvalues.

Proof. The reality of the eigenvalues of $J$ follows from its Hermiticity. Next, the eigenvalues of $J$ coincide, by (2.6), with the roots of the polynomial $P_N(\lambda)$. This polynomial of degree $N$ has $N$ roots, and these roots are pairwise distinct by Lemma 6.
The following theorem describes the structure of generalized spectral functions of real Jacobi matrices.
Theorem 9. The generalized spectral function $\Omega$ of a real Jacobi matrix $J$ of the form (1.1), (1.3) has the form
\[
\langle \Omega, G(\lambda) \rangle = \sum_{k=1}^{N} \beta_k G(\lambda_k), \tag{8.6}
\]
where $\lambda_1, \ldots, \lambda_N$ are the (real and distinct) eigenvalues of $J$ and
\[
\beta_k = \Bigl(\sum_{n=0}^{N-1} P_n^2(\lambda_k)\Bigr)^{-1} > 0, \qquad \sum_{k=1}^{N} \beta_k = 1. \tag{8.10}
\]
Since $\{P_n(\lambda_k)\}_{n=0}^{N-1}$ is an eigenvector of the matrix $J$ corresponding to the eigenvalue $\lambda_k$, it is natural, according to the formula (8.10), to call the numbers $\beta_k$ the normalizing numbers of the matrix $J$.

Remark 2. Assuming that $\lambda_1 < \lambda_2 < \cdots < \lambda_N$, let us introduce the nondecreasing step function $\omega(\lambda)$ on $(-\infty, \infty)$ by
\[
\omega(\lambda) = \sum_{k : \lambda_k \le \lambda} \beta_k,
\]
where $\omega(\lambda) = 0$ if there is no $\lambda_k \le \lambda$. So the eigenvalues of the matrix $J$ are the points of increase of the function $\omega(\lambda)$. Then the equality (8.6) can be written as
\[
\langle \Omega, G(\lambda) \rangle = \int_{-\infty}^{\infty} G(\lambda)\, d\omega(\lambda),
\]
where the integral is a Stieltjes integral. Therefore the orthogonality relation (5.14) can be written as
\[
\int_{-\infty}^{\infty} P_m(\lambda)P_n(\lambda)\, d\omega(\lambda) = \delta_{mn}.
\]
Such a function $\omega(\lambda)$ is known as a spectral function (see, e.g., [21]) of the operator (matrix) $J$. This explains the source of the term "generalized spectral function" used in the complex case.
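The orthogonality relation of Remark 2 is easy to test numerically: with the $\beta_k$ computed as in the sketch of Section 5 and the values $P_n(\lambda_k)$ generated by the recurrence (5.2), the matrix $\bigl(\sum_k \beta_k P_m(\lambda_k)P_n(\lambda_k)\bigr)_{m,n}$ should be the identity. The sketch below is ours, reuses the helpers `build_jacobi` and `normalizing_numbers` defined earlier, and the sample entries are arbitrary:

```python
def first_kind_values(a, b, x):
    """P_0(x), ..., P_{N-1}(x) via the recurrence (5.2), with P_{-1} = 0, P_0 = 1."""
    P, Pm1 = [1.0 + 0j], 0j
    for n in range(len(b) - 1):
        P.append(((x - b[n]) * P[-1] - a[n - 1] * Pm1) / a[n])
        Pm1 = P[-2]
    return np.array(P)

a, b = [1.0, 2.0], [0.0, 1.0, -1.0]          # a sample real Jacobi matrix (ours)
lam, beta = normalizing_numbers(build_jacobi(a, b))
G = sum(bk * np.outer(first_kind_values(a, b, lk), first_kind_values(a, b, lk))
        for lk, bk in zip(lam, beta))
print(np.round(G.real, 8))                   # identity matrix: relation (5.14)
```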
Inverse spectral problem for real Jacobi matrices

By the inverse spectral problem for real Jacobi matrices we mean the problem of recovering the matrix, i.e., its entries, from the spectral data.

Theorem 10. Let an arbitrary collection
\[
\{\lambda_k,\ \beta_k\ (k = 1, \ldots, N)\} \tag{9.1}
\]
of numbers be given. In order for this collection to be the spectral data for some real Jacobi matrix $J$ of the form (1.1) with entries belonging to the class (1.3), it is necessary and sufficient that the numbers $\lambda_1, \ldots, \lambda_N$ be real and distinct and the numbers $\beta_1, \ldots, \beta_N$ be positive with $\sum_{k=1}^{N} \beta_k = 1$.

Proof. The necessity of the conditions of the theorem was proved above. To prove the sufficiency, assume that we have a collection of quantities (9.1) satisfying the conditions of the theorem. Using these data we construct the functional $\Omega$ on $\mathbb{C}_{2N}[\lambda]$ by the formula
\[
\langle \Omega, G(\lambda) \rangle = \sum_{k=1}^{N} \beta_k G(\lambda_k). \tag{9.2}
\]
One checks directly that $\langle \Omega, 1 \rangle = 1$, that $\Omega$ is positive in the sense of condition (ii) of Theorem 7, and that condition (iii) of Theorem 7 holds with $T(\lambda) = (\lambda - \lambda_1)\cdots(\lambda - \lambda_N)$. Thus, the functional $\Omega$ defined by the formula (9.2) satisfies all the conditions of Theorem 7. Therefore there exists a real Jacobi matrix $J$ of the form (1.1), (1.3) for which $\Omega$ is the generalized spectral function. Further, from the proof of the sufficiency of the conditions of Theorem 6 it follows that the collection $\{\lambda_k, \beta_k\ (k = 1, \ldots, N)\}$ is the spectral data for the recovered matrix $J$.
Note that under the conditions of Theorem 10 the entries $a_n$ and $b_n$ of the matrix $J$, for which the collection (9.1) is the spectral data, are recovered by the formulas (3.35), (3.36).
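Finally, the whole real-case procedure can be run end to end with the sketches above (the sample numbers are our choices): pick distinct real $\lambda_k$ and positive $\beta_k$ summing to $1$, form the moments (6.2), and apply the determinant formulas.

```python
lam = [-1.0, 0.5, 2.0]                           # distinct real eigenvalues (ours)
beta = [[0.2], [0.5], [0.3]]                     # positive, sum to 1; chains of length 1
s = moments_from_spectral_data(lam, beta, L=5)   # s_0..s_5 suffice for N = 3
a, b = reconstruct_jacobi(s, N=3)
J = build_jacobi(a.real, b.real)                 # Theorem 10: the entries come out real
print(np.round(np.linalg.eigvals(J), 8))         # recovers lam up to ordering
```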