Spectra of observables in the q-oscillator and q-analogue of the Fourier transform

Spectra of the position and momentum operators of the Biedenharn-Macfarlane q-oscillator (with the main relation aa^+ − qa^+a = 1) are studied when q > 1. These operators are symmetric but not self-adjoint. They have a one-parameter family of self-adjoint extensions. These extensions are derived explicitly. Their spectra and eigenfunctions are given. Spectra of different extensions do not intersect. The results show that the creation and annihilation operators a^+ and a of the q-oscillator for q > 1 cannot determine a physical system without further specification. In order to determine a physical system we have to choose appropriate self-adjoint extensions of the position and momentum operators.


Introduction
Approximately 20 years ago the first papers on quantum groups and quantum algebras appeared. Quantum groups and quantum algebras can be considered as q-deformations of semisimple Lie groups and Lie algebras. Soon after that, an appropriate q-deformation of the quantum harmonic oscillator, related to the quantum group SU_q(2), was introduced (see [1] and [2]). There exist several variants of the q-oscillator; they are obtained from each other by transformations (see [3] and [4, Chapter 5]). For their relation to the nilpotent part of the quantum algebra U_q(sl_3), see [5].
One of the main problems for the different forms of the q-oscillator is the form of the spectra of the main operators, such as the Hamiltonian, the position operator, the momentum operator, etc. There is no problem with the spectrum of the Hamiltonian H = (1/2)(aa^+ + a^+a): this spectrum is discrete and the corresponding eigenvectors are easily determined. But in some cases (similar to the case of representations of noncompact quantum algebras; see, for example, [6]) there are difficulties with the spectra of the position and momentum operators (see [7,8,9]). It was shown that if the position operator Q = a^+ + a (or the momentum operator P = i(a^+ − a)) is not bounded, then this symmetric operator is not essentially self-adjoint. Moreover, in this case it has deficiency indices (1,1), that is, it has a one-parameter family of self-adjoint extensions. Finding self-adjoint extensions of a closed symmetric (but not self-adjoint) operator is a complicated problem. We need to know the self-adjoint extensions in order to be able to find their spectra. As we shall see, different self-adjoint extensions of Q and P have different spectra.
In this paper we study self-adjoint extensions of the position and momentum operators Q and P for the q-oscillator with the main relation aa^+ − qa^+a = 1 when q > 1. For these values of q, the operators Q and P are unbounded and not essentially self-adjoint (for q < 1, these operators are bounded and, therefore, self-adjoint; they were studied in [8]). These operators can be represented in an appropriate basis by a Jacobi matrix. This means that they can be studied by means of properties of the q-orthogonal polynomials associated with them (see Section 2 below). These q-orthogonal polynomials are expressed in terms of the q^{-1}-continuous Hermite polynomials h_n(x|q) introduced by R. Askey [10]. These polynomials correspond to an indeterminate moment problem and, therefore, are orthogonal with respect to infinitely many positive measures (see Section 2 below). Using orthogonality measures for these polynomials, we shall find spectra of the self-adjoint extensions of Q and P. This paper is an extended exposition of the results of the paper [11].
It follows from the conclusions of this paper that the creation and annihilation operators a^+ and a of the q-oscillator for q > 1 do not determine a physical system uniquely. In order to fix a physical system, we have to choose appropriate self-adjoint extensions of the position and momentum operators. This fact must be taken into account in applications of q-oscillators with q > 1. Thus, we cannot operate with the creation and annihilation operators of the q-oscillator as freely as in the case of the usual quantum harmonic oscillator.
Below we use (without additional explanation) the notations of the theory of q-special functions (see [12]). In order to study the position and momentum operators Q and P we shall need results on Jacobi matrices, orthogonal polynomials, and symmetric operators representable by a Jacobi matrix. In the next section we give a combined exposition of some results on this connection from the books [13, Chapter VII] and [14] and from the paper [15], in a form appropriate for use below, together with some of their consequences.
Operators representable by a Jacobi matrix, and orthogonal polynomials

Jacobi matrices and orthogonal polynomials

Many operators used in theoretical and mathematical physics are representable by a Jacobi matrix. There exists a well-developed mathematical method for studying such operators.
In what follows we shall use only symmetric Jacobi matrices, and the word "symmetric" will often be omitted. By a symmetric Jacobi matrix we mean a (finite or infinite) symmetric matrix of the form

M = | b_0 a_0 0   0   ... |
    | a_0 b_1 a_1 0   ... |
    | 0   a_1 b_2 a_2 ... |
    | ................... |    (1)

We assume below that a_i ≠ 0, i = 0, 1, 2, . . .. All a_i and b_i are real. Let L be a closed symmetric operator on a Hilbert space H, representable by a Jacobi matrix M. Then there exists an orthonormal basis e_n, n = 0, 1, 2, . . ., in H such that

L e_n = a_n e_{n+1} + b_n e_n + a_{n−1} e_{n−1},    (2)

where e_{−1} ≡ 0. Let |x⟩ = Σ_n p_n(x) e_n be an eigenvector of L with eigenvalue x, that is, L|x⟩ = x|x⟩. Then

Σ_n p_n(x) [a_n e_{n+1} + b_n e_n + a_{n−1} e_{n−1}] = x Σ_n p_n(x) e_n.

Equating coefficients of the vector e_n, one comes to a recurrence relation for the coefficients p_n(x):

a_n p_{n+1}(x) + b_n p_n(x) + a_{n−1} p_{n−1}(x) = x p_n(x).    (3)

Since p_0(x) = 1 (normalization), from (3) with n = 0 we find p_1(x) = (x − b_0)/a_0. Similarly we can find successively p_n(x), n = 2, 3, . . .. Thus, the relation (3) completely determines the coefficients p_n(x). Moreover, the recursive computation of p_n(x) shows that each p_n(x) is a polynomial in x of degree n. Since the coefficients a_n and b_n are real (as the matrix M is symmetric), all coefficients of the polynomials p_n(x) themselves are real. Since a_n > 0 and b_n ∈ R in (3), by Favard's characterization theorem the polynomials p_n(x) are orthogonal with respect to some positive measure µ(x). It is known that orthogonal polynomials are orthogonal either with respect to a unique positive measure or with respect to infinitely many positive measures.
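The recursive construction of the polynomials p_n(x) from the three-term recurrence can be sketched in a few lines. This is a minimal illustration, not from the paper; the coefficient callables a(n), b(n) are placeholders chosen for the sanity check.

```python
# Sketch: generate p_0(x), ..., p_nmax(x) from the recurrence
#   a_n p_{n+1}(x) + b_n p_n(x) + a_{n-1} p_{n-1}(x) = x p_n(x),
# starting from p_0(x) = 1.

def jacobi_polynomials(a, b, x, nmax):
    """Values [p_0(x), ..., p_nmax(x)] for the Jacobi matrix with
    off-diagonal entries a(n) and diagonal entries b(n)."""
    p = [1.0]                                    # p_0(x) = 1
    p.append((x - b(0)) / a(0))                  # recurrence with n = 0
    for n in range(1, nmax):
        # p_{n+1} = ((x - b_n) p_n - a_{n-1} p_{n-1}) / a_n
        p.append(((x - b(n)) * p[n] - a(n - 1) * p[n - 1]) / a(n))
    return p

# Sanity check with a_n = 1/2, b_n = 0: the recurrence becomes
# p_{n+1} = 2x p_n - p_{n-1}, i.e. the Chebyshev polynomials U_n(x).
p = jacobi_polynomials(lambda n: 0.5, lambda n: 0.0, 0.5, 3)
```

For example, p then holds U_0(0.5), ..., U_3(0.5), and U_2(0.5) = 4(0.5)^2 − 1 = 0.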
The polynomials p_n(x) are very important for studying properties of the closed symmetric operator L. Namely, the following statements are true (see, for example, [13] and [15]):
I. Let the polynomials p_n(x) be orthogonal with respect to a unique orthogonality measure µ,

∫ p_m(x) p_n(x) dµ(x) = δ_{mn}.

Then the corresponding closed operator L is self-adjoint. Moreover, the spectrum of the operator L is simple and coincides with the set on which the polynomials p_n(x) are orthogonal (that is, with the support of the measure µ). The measure µ(x) also determines the spectral measure of the operator L (for details see [13, Chapter VII]).
II. Let the polynomials p_n(x) be orthogonal with respect to infinitely many different orthogonality measures µ. Then the closed symmetric operator L is not self-adjoint and has deficiency indices (1, 1), that is, it has infinitely many (in fact, a one-parameter family of) self-adjoint extensions. It is known that among the orthogonality measures there are so-called extremal measures (that is, measures µ such that the set of polynomials {p_n(x)} is complete in the Hilbert space L^2(µ); see Subsection 2.3 below). These measures uniquely determine the self-adjoint extensions of the symmetric operator L. There exists a one-to-one correspondence between essentially distinct extremal orthogonality measures and self-adjoint extensions of the operator L. The extremal orthogonality measures determine the spectra of the corresponding self-adjoint extensions.
The inverse statements are also true:
I′. Let the operator L be self-adjoint. Then the corresponding polynomials p_n(x) are orthogonal with respect to a unique orthogonality measure µ, whose support coincides with the spectrum of L. Moreover, the measure µ is uniquely determined by the spectral measure of the operator L (for details see [13, Chapter VII]).
II′. Let the closed symmetric operator L be not self-adjoint. Since it is representable by a Jacobi matrix (1) with a_n ≠ 0, n = 0, 1, 2, . . ., it admits a one-parameter family of self-adjoint extensions (see [13, Chapter VII]). Then the polynomials p_n(x) are orthogonal with respect to infinitely many orthogonality measures µ. Moreover, the spectral measures of the self-adjoint extensions of L determine extremal orthogonality measures for the polynomials {p_n(x)} (and the set of polynomials {p_n(x)} is complete in the Hilbert space L^2(µ) for each corresponding extremal measure µ).
On the other hand, with the orthogonal polynomials p_n(x), n = 0, 1, 2, . . ., the classical moment problem is associated (see [14] and [16]). Namely, with these polynomials (that is, with the coefficients a_n and b_n of the corresponding recurrence relation) one associates real numbers c_n, n = 0, 1, 2, . . ., which determine the corresponding classical moment problem. (The numbers c_n are uniquely determined by a_n and b_n; see [14].) The classical moment problem is formulated as follows. Let a set of real numbers c_n, n = 0, 1, 2, . . ., be given. We look for a positive measure µ(x) such that

∫ x^n dµ(x) = c_n,    n = 0, 1, 2, . . . ,    (4)

where the integration is taken over R (in this case we deal with the Hamburger moment problem).
There are two principal questions in the theory of moment problem: (i) Does there exist a measure µ(x), such that relations (4) are satisfied? (ii) If such a measure exists, is it determined uniquely?
The answer to the first question is positive, if the numbers c n , n = 0, 1, 2, . . . are those corresponding to a family of orthogonal polynomials. Moreover, then the measure µ(x) coincides with the measure with respect to which these polynomials are orthogonal.
If a measure µ in (4) is determined uniquely, we say that we deal with the determinate moment problem. (In particular, it is the case when the measure µ is supported on a bounded set.) Then the corresponding polynomials {p n (x)} are orthogonal with respect to this measure and the corresponding symmetric operator L is self-adjoint.
If a measure with respect to which relations (4) hold is not unique, then we say that we deal with the indeterminate moment problem. In this case there exist infinitely many measures µ(x) for which (4) take place. Then the corresponding polynomials are orthogonal with respect to all these measures, and the corresponding symmetric operator L is not self-adjoint. In this case the set of solutions of the moment problem for the numbers {c n } coincides with the set of orthogonality measures for the corresponding polynomials {p n (x)}.
Note that not every set of real numbers c_n, n = 0, 1, 2, . . ., is associated with a set of orthogonal polynomials. In other words, there are sets of real numbers c_n, n = 0, 1, 2, . . ., such that the corresponding moment problem does not have a solution, that is, there is no positive measure µ for which the relations (4) hold. But if for some set of real numbers c_n, n = 0, 1, 2, . . ., the moment problem (4) has a solution µ, then this set corresponds to a set of polynomials p_n(x), n = 0, 1, 2, . . ., which are orthogonal with respect to this measure µ. There exist criteria indicating when, for a given set of real numbers c_n, n = 0, 1, 2, . . ., the moment problem (4) has a solution (see, for example, [14]). Moreover, there exist procedures that associate a collection of orthogonal polynomials to a set of real numbers c_n, n = 0, 1, 2, . . ., for which the moment problem (4) has a solution (see [14]).
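One classical solvability criterion (Hamburger's theorem, not spelled out in the text; see [14]) is that (4) has a solution if and only if all Hankel matrices built from the moments are positive semidefinite. A rough numerical check of the necessary determinant conditions might look as follows; this is an illustrative sketch only, and all names are ours.

```python
# Necessary-condition check for solvability of the Hamburger moment
# problem: the Hankel matrices H_k = (c_{i+j}), 0 <= i, j <= k, must be
# positive semidefinite.  Here we only test the leading sections via
# their determinants (a full PSD test would examine all principal
# minors or eigenvalues).

def det(m):
    """Determinant by Gaussian elimination with partial pivoting."""
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[piv][i]) < 1e-15:
            return 0.0
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for col in range(i, n):
                m[r][col] -= f * m[i][col]
    return d

def hankel_psd(c):
    """False as soon as some leading Hankel determinant is negative."""
    kmax = (len(c) - 1) // 2
    for k in range(kmax + 1):
        h = [[c[i + j] for j in range(k + 1)] for i in range(k + 1)]
        if det(h) < -1e-12:
            return False
    return True

# Moments 1, 0, 1, 0, 3 (standard Gaussian) pass the check;
# the sequence 1, 0, -1 cannot be moments, since c_2 = ∫ x^2 dµ >= 0.
```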
Thus, we see that the following three theories are closely related: (i) the theory of symmetric operators L representable by a Jacobi matrix; (ii) the theory of orthogonal polynomials in one variable; (iii) the theory of the classical moment problem.

Self-adjointness
We have seen that the orthogonal polynomials {p_n(x)} associated with the symmetric operator L determine whether it is self-adjoint or not. There exist other criteria of self-adjointness of a closed symmetric operator:
(a) If the coefficients a_n and b_n in (2) are bounded, then the operator L is bounded and, therefore, self-adjoint.
(b) If in (2) the b_n are arbitrary real numbers and the a_n are such that Σ_{n=0}^∞ a_n^{−1} = ∞, then the operator L is self-adjoint.
(c) Let |b_n| ≤ C, n = 0, 1, 2, . . ., let a_{n−1} a_{n+1} ≤ a_n^2 for all n greater than some positive j, and let Σ_{n=0}^∞ a_n^{−1} < ∞; then the operator L is not self-adjoint.
Each closed symmetric operator representable by a Jacobi matrix which is not self-adjoint has deficiency indices (1, 1), that is, it has a one-parameter family of self-adjoint extensions. Let us explain this statement in more detail. Let A be a closed operator (not necessarily symmetric) on a Hilbert space H. Suppose that the domain D(A) of A is an everywhere dense subspace of H. There are pairs v and v′ of elements of H such that

(Au, v) = (u, v′) for all u ∈ D(A).

We set v′ = A*v and call A* the operator conjugate to A. It is proved that A* is defined on an everywhere dense subspace D(A*) of H. If A is symmetric, that is, (Au, v) = (u, Av) for all u, v ∈ D(A), and D(A) does not coincide with D(A*), then A can have self-adjoint extensions. The fact that the operator A has a self-adjoint extension A_ext means that (A_ext)* = A_ext (that is, A_ext is a self-adjoint operator), D(A) ⊂ D(A_ext), and the operators A and A_ext coincide on D(A).
To determine whether a symmetric operator A has self-adjoint extensions or not, one uses the notion of deficiency indices (m, n) of A (m and n are nonnegative integers). If these indices are equal to each other, then A has self-adjoint extensions. Self-adjoint extensions are constructed with the help of two deficiency subspaces (they are of dimensions m and n, respectively). A detailed description of deficiency indices and deficiency subspaces can be found in [17].
The important fact is that different self-adjoint extensions of a symmetric operator can have different spectra (this can happen when their spectra have discrete parts). We shall meet this situation in the forthcoming sections.

Extremal orthogonality measures
As we have seen in Subsection 2.1, a closed symmetric operator L representable by a Jacobi matrix that is not self-adjoint can be studied by means of orthogonal polynomials, associated with L. Self-adjoint extensions of L are connected with extremal orthogonality measures for these polynomials. Let us consider such measures in more detail.
Let p_n(x), n = 0, 1, 2, . . ., be a set of orthogonal polynomials associated with an indeterminate moment problem (4). Then the orthogonality measures µ(x) for these polynomials are described by the formula

∫_R dµ(t)/(z − t) = −(A(z)σ(z) − C(z))/(B(z)σ(z) − D(z)),    (5)

where A(z), B(z), C(z), D(z) are entire functions which are the same for all orthogonality measures µ. These functions are related to the asymptotics of the polynomials p_n(x) and of an associated set of polynomials p*_n(x) (see [14] for details). For expressions of A(z), B(z), C(z), D(z) as infinite sums (in n) of the polynomials p_n(x) and p*_n(x), see [13, Section VII.7]. In (5), σ(z) is a Nevanlinna function. Moreover, to each such function σ(z) (including the cases of constant σ(z) and σ(z) = ±∞) there corresponds a single orthogonality measure µ(t) ≡ µ_σ(t) and, conversely, to each orthogonality measure µ there corresponds a function σ such that formula (5) holds. There exists the Stieltjes inversion formula, which inverts formula (5): if Φ(z) = ∫_R dµ(t)/(t − z), then

µ((s, t)) = lim_{ε→+0} (1/π) ∫_s^t Im Φ(x + iε) dx

at points of continuity s, t of µ. Thus, the orthogonality measures for a given set of polynomials p_n(x), n = 0, 1, 2, . . ., can in principle be found. However, it is very difficult to evaluate the functions A(z), B(z), C(z), D(z).
(In [19] they are evaluated for a particular example of polynomials, namely, for the q^{-1}-continuous Hermite polynomials h_n(x|q).) So, as a rule, other methods are used for the derivation of orthogonality measures.
The importance of extremal measures is explained by Riesz's theorem. Suppose that a set of polynomials p_n(x), n = 0, 1, 2, . . ., associated with an indeterminate moment problem is orthogonal with respect to a positive measure µ (that is, µ is a solution of the moment problem (4)). Let L^2(µ) be the Hilbert space of functions square integrable with respect to the measure µ. Evidently, the polynomials p_n(x) belong to the space L^2(µ). Riesz's theorem states the following: the set of polynomials p_n(x), n = 0, 1, 2, . . ., is complete in the Hilbert space L^2(µ) (that is, they form a basis of this Hilbert space) if and only if the measure µ is extremal.
Note that if a set of polynomials p n (x), n = 0, 1, 2, . . . corresponds to a determinate moment problem and µ is an orthogonality measure for them, then this set of polynomials is also complete in the Hilbert space L 2 (µ).
Riesz's theorem is often used in order to determine whether a certain orthogonality measure is extremal or not. Namely, if we know that a given set of orthogonal polynomials corresponding to an indeterminate moment problem is not complete in the Hilbert space L 2 (µ), where µ is an orthogonality measure, then this measure is not extremal.
Note that for applications in physics and in functional analysis it is of interest to have extremal orthogonality measures. If an orthogonality measure µ is not extremal, then it is important to find a system of orthogonal functions {f m (x)}, which together with a given set of polynomials constitute a complete set of orthogonal functions (that is, a basis in the Hilbert space L 2 (µ)). Sometimes, it is possible to find such systems of functions (see, for example, [18]).
Extremal orthogonality measures have many interesting properties [14]:
(a) An extremal measure µ_σ(x) associated (according to formula (5)) with a number σ is discrete. Its spectrum (that is, the set on which the corresponding polynomials p_n(x), n = 0, 1, 2, . . ., are orthogonal) coincides with the set of zeros of the denominator B(z) − σD(z) in (5). The mass concentrated at a spectral point x_j (that is, the jump of µ_σ(x) at the point x_j) equals (Σ_{n=0}^∞ p_n(x_j)^2)^{−1}.
(b) Spectra of extremal measures are real and simple. This means that the corresponding self-adjoint operators, which are self-adjoint extensions of the operator L, have simple spectra, that is, all spectral points are of multiplicity 1.
(c) Spectra of two different extremal measures µ_σ and µ_{σ′}, σ ≠ σ′, are mutually separated: they have no common points.
(d) For a given real number x_0 there always exists a (unique) real value σ_0 such that the measure µ_{σ_0}(x) has x_0 as a spectral point. The points of the spectrum of µ_σ(x) are analytic, monotone functions of σ.
It is difficult to find all extremal orthogonality measures for a given set of orthogonal polynomials (that is, the self-adjoint extensions of the corresponding closed symmetric operator). As far as we know, at present they are known only for one family of polynomials corresponding to an indeterminate moment problem, namely, the q^{-1}-continuous Hermite polynomials h_n(x|q) (see [19]).
As noted in [14], if extremal measures µ σ are known then by multiplying µ σ by a suitable factor (depending on σ) and integrating it with respect to σ, one can obtain infinitely many continuous orthogonality measures (which are not extremal).
Since spectra of self-adjoint extensions of the operator L coincide with the spectra of the corresponding extremal orthogonality measures for the polynomials p_n(x), the properties (a)-(d) can be reformulated for the spectra of these self-adjoint extensions:
(a′) Spectra of self-adjoint extensions of L are discrete.
(b′) Self-adjoint extensions of L have simple spectra, that is, spectral points are not multiple.
(c′) Spectra of two different self-adjoint extensions of L are mutually separated.
(d′) For a given real number x_0, there exists a (unique) self-adjoint extension L_ext such that x_0 is a spectral point of L_ext.

The Biedenharn-Macfarlane q-oscillator
There are different forms of the q-oscillator algebra (see [3]; some forms of q-oscillators go back to the paper by R. Santilli [21]) which mathematically are not completely equivalent. For our definition of the q-oscillator we use the following relations for the creation and annihilation operators a^+, a and for the number operator N:

aa^+ − qa^+a = 1,    [N, a^+] = a^+,    [N, a] = −a.

The Fock representation of this q-oscillator acts on the Hilbert space H with the orthonormal basis |n⟩, n = 0, 1, 2, . . ., and is given by the formulas

a^+|n⟩ = {n + 1}_q^{1/2} |n + 1⟩,    a|n⟩ = {n}_q^{1/2} |n − 1⟩,    N|n⟩ = n|n⟩,    (6)

where the expression

{n}_q := (q^n − 1)/(q − 1)    (7)

is called a q-number. Note that the basis vectors |n⟩ are eigenvectors of the Hamiltonian H = (1/2)(aa^+ + a^+a). It follows from (6) that

H|n⟩ = (1/2)({n}_q + {n + 1}_q)|n⟩ = (1/2)(q^n(q + 1) − 2)/(q − 1) |n⟩.

Thus, the spectrum of H consists of the points (1/2)(q^n(q + 1) − 2)/(q − 1), n = 0, 1, 2, . . ..
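The q-number and the resulting eigenvalue of H can be checked numerically; a minimal sketch (function names are ours, not from the paper):

```python
# The q-number {n}_q = (q^n - 1)/(q - 1) and the eigenvalue of the
# Hamiltonian H = (1/2)(aa^+ + a^+a) on the Fock vector |n>.

def q_number(n, q):
    return (q**n - 1) / (q - 1)

def h_eigenvalue(n, q):
    # H|n> = (1/2)({n}_q + {n+1}_q)|n>
    return 0.5 * (q_number(n, q) + q_number(n + 1, q))

def h_eigenvalue_closed(n, q):
    # Closed form quoted in the text: (1/2)(q^n (q+1) - 2)/(q - 1)
    return 0.5 * (q**n * (q + 1) - 2) / (q - 1)
```

As a quick consistency check, the two expressions agree for several n and q, and {n}_q reduces to n in the limit q → 1.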

Functional realization of the Fock representation
The Fock representation of the q-oscillator can be realized on many spaces of functions. We shall need the realization related to the space of polynomials in one variable. Let P be the space of all polynomials in a variable y (this variable has no relation to the coordinate of the position operator Q considered below). We introduce in P a scalar product such that the monomials

e_n ≡ e_n(y) := (−1)^{n/2} (q; q)_n^{−1/2} y^n,    n = 0, 1, 2, . . . ,    (8)

where (q; q)_n := (1 − q)(1 − q^2) · · · (1 − q^n), constitute an orthonormal basis of P. The orthonormality of this basis determines the scalar product in P. Closing the space P with respect to this scalar product, we obtain a Hilbert space which can be considered as a realization of the Hilbert space H. The operators a^+ and a are realized on this space as operators of multiplication by y and of q-differentiation, respectively (formulas (9)), where D_q is the q-derivative determined by

D_q f(y) = (f(qy) − f(y)) / ((q − 1) y).

Then the operators a^+ and a act upon the basis elements (8) by formulas (6). Everywhere below we assume that H is the Hilbert space of functions in y introduced above.

Position and momentum operators
We are interested in the position and momentum operators Q = a^+ + a and P = i(a^+ − a) of the q-oscillator. It is clear from (7) that these operators are symmetric. We have the formulas

Q e_n = {n}_q^{1/2} e_{n−1} + {n + 1}_q^{1/2} e_{n+1},    (10)

P e_n = i{n}_q^{1/2} e_{n−1} − i{n + 1}_q^{1/2} e_{n+1},    (11)

which follow from (6). When q < 1, it is clear from the definition (7) of {n}_q that Q and P are bounded and, therefore, self-adjoint operators. When q > 1, it follows from (10) and (11) that Q and P are unbounded symmetric operators. Since {n − 1}_q {n + 1}_q ≤ {n}_q^2 and Σ_n {n}_q^{−1/2} < ∞, it follows from criterion (c) of Subsection 2.2 that the closures of these operators are not self-adjoint. Each of these operators has a one-parameter family of self-adjoint extensions. One of the aims of this paper is to give these self-adjoint extensions using the reasoning of Section 2.

Eigenfunctions of the position operator
We assume below that q is a fixed real number such that q > 1. For convenience, we also introduce the notation q~ := q^{−1}. The aim of this section is to derive formulas for the eigenfunctions ϕ_x(y) of the position operator Q: Qϕ_x(y) = xϕ_x(y).

Let us show that the eigenfunctions ϕ_x(y) are given by formula (12), where x′ := (1/2)(q − 1)^{1/2} x. Indeed, using formula (9) and the definition of the q-derivative D_q, one verifies directly that Qϕ_x(y) = xϕ_x(y), that is, the functions (12) are eigenfunctions of the operator Q.
Let us reduce the functions (12) to another form. Comparing the right-hand side of the resulting expression with the right-hand side of the generating function

Σ_{n=0}^∞ (q~^{n(n−1)/2} t^n / (q~; q~)_n) h_n(y|q~) = (−t(√(y^2 + 1) + y); q~)_∞ (t(√(y^2 + 1) − y); q~)_∞

(see formula (2.4) in [19]) for the q^{−1}-Hermite polynomials h_n(x|q~), defined by

h_n(x|q~) = Σ_{k=0}^n (−1)^k q~^{k(k−n)} [(q~; q~)_n / ((q~; q~)_k (q~; q~)_{n−k})] (√(x^2 + 1) + x)^{n−2k},

we conclude that the functions ϕ_x(y) can be decomposed into the orthogonal polynomials h_n(x′|q~) (other expressions for h_n(x|q~) can be obtained from expressions for the continuous q-Hermite polynomials H_n(x|q) in [20], since H_n(ix|q) = i^n h_n(x|q); see [10]). Taking into account the expression (8) for the basis elements e_n and the formula

(q~; q~)_n = (−1)^n (q; q)_n q^{−n(n+1)/2},    q~ = q^{−1},

we obtain the following decomposition of the eigenfunctions ϕ_x(y) in the basis elements (8) of the Hilbert space H:

ϕ_x(y) = Σ_{n=0}^∞ P_n(x) e_n(y),    (14)

where the coefficients P_n(x) are, up to explicit normalization factors, the polynomials h_n(x′|q~) (formula (15)) and, as before, x′ = (1/2)(q − 1)^{1/2} x. We have found that eigenfunctions of the position operator Q are given by formula (13). However, we do not know the spectra of the self-adjoint extensions Q_ext of Q. In order to find these extensions and their spectra we use the assertions of Section 2. Namely, since eigenfunctions of Q are expressed in terms of the basis elements e_n(y) by formula (14), the self-adjoint extensions Q_ext and their spectra are determined by the orthogonality relations of the polynomials (15).
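The two expressions for h_n(x|q~) can be compared numerically. The sketch below uses the Ismail-Masson three-term recurrence 2x h_n = h_{n+1} + (q~^{−n} − 1) h_{n−1} together with the explicit sum quoted above (all function names are ours):

```python
# q^{-1}-Hermite polynomials h_n(x | q~), 0 < q~ < 1, computed two ways.

def qpoch(a, q, n):
    """(a; q)_n = prod_{s=0}^{n-1} (1 - a q^s)."""
    r = 1.0
    for s in range(n):
        r *= 1 - a * q**s
    return r

def qbinom(n, k, q):
    """q-binomial coefficient [n choose k]_q."""
    return qpoch(q, q, n) / (qpoch(q, q, k) * qpoch(q, q, n - k))

def h_rec(n, x, qt):
    """Via the recurrence h_{m+1} = 2x h_m - (qt^{-m} - 1) h_{m-1}."""
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for m in range(1, n):
        h0, h1 = h1, 2 * x * h1 - (qt**(-m) - 1) * h0
    return h1

def h_sum(n, x, qt):
    """Via the explicit sum, with s = x + sqrt(x^2 + 1)."""
    s = x + (x * x + 1) ** 0.5
    return sum((-1)**k * qt**(k * (k - n)) * qbinom(n, k, qt)
               * s**(n - 2 * k) for k in range(n + 1))
```

For instance, both routes give h_2(x|q~) = 4x^2 + 1 − q~^{−1}.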

Spectra of self-adjoint extensions of Q
The polynomials h_n(z|q~), n = 0, 1, 2, . . ., 0 < q~ < 1, admit infinitely many orthogonality relations. The extremal orthogonality measures are parametrized by a real number b, q~ ≤ b < 1, which is related to the parameter σ of Section 2 (see [19]). It is shown in [19] that for a fixed b the corresponding orthogonality measure is concentrated on the discrete set of points

z_r(b) = (1/2)(b^{−1} q~^{−r} − b q~^r),    r = 0, ±1, ±2, . . . ,    (16)

and the orthogonality relation is given by

Σ_{r=−∞}^∞ m_r h_n(z_r(b)|q~) h_{n′}(z_r(b)|q~) = d_n δ_{nn′},

where the weight function m_r ≡ m_r(b) is given by the explicit formula (17), expressed in terms of the infinite products (a; q~)_∞ := Π_{s=0}^∞ (1 − a q~^s), and the normalization constants d_n are given in [19].
Therefore, the orthogonality relations for the polynomials (15) with respect to extremal orthogonality measures are labelled by the same parameter b, q~ ≤ b < 1, and for fixed b the measure is concentrated on the discrete set

x_b(r) = (q − 1)^{−1/2}(b^{−1} q^r − b q^{−r}),    r = 0, ±1, ±2, . . . ,    (18)

obtained from (16) by the substitution x = 2(q − 1)^{−1/2} z. The corresponding orthogonality relation is

Σ_{r=−∞}^∞ m_r P_n(x_b(r)) P_{n′}(x_b(r)) = δ_{nn′},    (19)

where m_r is given by (17). These orthogonality relations and the assertions of Section 2 allow us to make the following conclusions:

Theorem 1. The self-adjoint extensions Q_ext^b of the position operator Q are given by the parameter b, q~ ≤ b < 1. Moreover, the spectrum of the extension Q_ext^b coincides with the set of points

x_b(r) = (q − 1)^{−1/2}(b^{−1} q^r − b q^{−r}),    r = 0, ±1, ±2, . . . .    (20)

These points are the values of the coordinate of our physical system fixed by the parameter b. It follows from (15) that to the eigenvalues (20) there correspond the eigenfunctions

ϕ_{x_b(r)}(y),    r = 0, ±1, ±2, . . . .    (21)

From (20) and the assertions of Section 2 we derive the following corollary: for each real number x_0 there exists a unique value of b, q~ ≤ b < 1, such that x_0 is a spectral point of Q_ext^b.
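Assuming the spectral points have the form x_b(r) = (q − 1)^{−1/2}(b^{−1}q^r − b q^{−r}) (our reading of the spectral set described in the text), the disjointness of the spectra for different b can be observed numerically:

```python
# Sketch: discrete spectrum of the extension Q_ext^b for q > 1 and
# q^{-1} <= b < 1, taken here as
#   x_b(r) = (q - 1)^{-1/2} (q^r / b - b q^{-r}),  r in Z.
# We check that each spectrum is strictly increasing in r and that the
# spectra for two different values of b stay well separated.

def spectrum(q, b, rmax):
    return [(q - 1) ** -0.5 * (q**r / b - b * q**-r)
            for r in range(-rmax, rmax + 1)]

q = 2.0
s1 = spectrum(q, 0.6, 6)
s2 = spectrum(q, 0.8, 6)
increasing = all(u < v for u, v in zip(s1, s1[1:]))
gap = min(abs(u - v) for u in s1 for v in s2)
```

With these sample values the two spectra interlace without touching; the minimal distance between points of the two sets stays bounded away from zero.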
For a fixed b, the eigenfunctions (21) are linearly independent (and, moreover, orthogonal), since they correspond to different eigenvalues of Q_ext^b. Since the corresponding orthogonality measure in (19) is extremal, they constitute a basis of the Hilbert space H. Let us normalize these basis elements. To do this, we multiply each ϕ_{x_b(r)}(y) by an appropriate normalization constant c_b(r):

ϕ^norm_{x_b(r)}(y) = c_b(r) ϕ_{x_b(r)}(y).

These functions form an orthonormal basis of H. The matrix (a_{rn}), a_{rn} = c_b(r) P_n(x_b(r)), where r = 0, ±1, ±2, . . . and n = 0, 1, 2, . . ., connects two orthonormal bases of the Hilbert space H. Therefore, this matrix is unitary, that is,

Σ_n a_{rn} a_{r′n} = δ_{rr′},    Σ_r a_{rn} a_{rn′} = δ_{nn′}.

Comparing this formula with relation (19), we find c_b(r) = m_r^{1/2} and

ϕ^norm_{x_b(r)}(y) = m_r^{1/2} ϕ_{x_b(r)}(y),

where m_r ≡ m_r(b) is given by (17).

Coordinate realization of H
In order to realize Q_ext^b as a self-adjoint operator, we construct a one-to-one isometry Ω of the Hilbert space H onto the Hilbert space L^2_b(m_r) of functions F on the set of points (20) (coinciding with the set of values of the coordinate) with the scalar product

⟨F, F′⟩ = Σ_r m_r F(x_b(r)) F′(x_b(r)).

It follows from (19) that the polynomials P_n(x_b(r)) are orthogonal on the set (20) and constitute an orthonormal basis of L^2_b(m_r). For a fixed b, the isometry Ω is given by the formula

Ωf = F,    F(x_b(r)) := ⟨f, ϕ_{x_b(r)}⟩.    (22)

It follows from (22) that Ω maps the orthonormal basis {e_n(y)} of H onto the orthonormal basis {P_n(x_b(r))} of L^2_b(m_r). This shows that Ω is indeed a one-to-one isometry. The operator Q_ext^b acts on L^2_b(m_r) as the multiplication operator:

(Q_ext^b F)(x_b(r)) = x_b(r) F(x_b(r)).

It is known (see [17]) that such a multiplication operator is self-adjoint. The Hilbert space L^2_b(m_r) is the space of states of our physical system in the coordinate representation. Since the elements e_n(y) ∈ H are eigenfunctions of the Hamiltonian H = (1/2)(aa^+ + a^+a), the functions P_n(x_b(r)) ∈ L^2_b(m_r) are eigenfunctions of the same Hamiltonian if its action is considered in L^2_b(m_r).

Eigenfunctions and spectra of the momentum operator

By replacing the basis {e_n(y)} with the basis {e′_n(y)}, where e′_n(y) = i^{−n} e_n(y), we see that the momentum operator P = i(a^+ − a) is given in the latter basis by the same formula as the position operator in the basis {e_n(y)}. This means that the operator P is symmetric but not self-adjoint. Moreover, it has infinitely many (in fact, a one-parameter family of) self-adjoint extensions.
Eigenfunctions of the momentum operator can be found (by using the basis {e ′ n (y)}) in the same way as in the case of the position operator. For this reason, we adduce only the results.
The eigenfunctions ξ_p(y) of the momentum operator P, Pξ_p(y) = pξ_p(y), have a form analogous to (12), with p′ := (1/2)(q − 1)^{1/2} p playing the role of x′. The function ξ_p(y) can be decomposed into the q^{−1}-Hermite polynomials h_n(p′|q~). Thus, we have the following decomposition of the eigenfunctions ξ_p(y) in the basis elements (8) of the Hilbert space H:

ξ_p(y) = Σ_{n=0}^∞ P~_n(p) e_n(y),

where the coefficients P~_n(p) are expressed, as in (15), through the polynomials h_n(p′|q~). Using the orthogonality relations for the polynomials h_n(x′|q~) described above, we can make the following conclusion: the self-adjoint extensions P_ext^b of the momentum operator P are given by the parameter b, q~ ≤ b < 1, and the spectrum of the extension P_ext^b coincides with the set of points

p_b(r) = (q − 1)^{−1/2}(b^{−1} q^r − b q^{−r}),    r = 0, ±1, ±2, . . . .    (24)

It follows from (24) and from the assertions of Section 2 that the spectra of the extensions of the operator P have the same properties as the spectra of the extensions of the position operator Q:
(a) The spectrum of each extension P_ext^b is discrete and simple.
(b) Spectra of two different self-adjoint extensions P_ext^b and P_ext^{b′}, b ≠ b′, do not intersect.
(c) For a given real number p_0 there exists a (unique) self-adjoint extension P_ext^b such that p_0 is a spectral point of P_ext^b.

To the eigenvalues (24) there correspond the eigenfunctions

ξ_{p_b(r)}(y),    r = 0, ±1, ±2, . . . .    (25)

In the same way as in the case of the position operator we derive that the eigenfunctions (25) constitute a basis of the Hilbert space H. Let us normalize it. Repeating the reasoning of Section 7, we obtain that the functions

ξ^norm_{p_b(r)}(y) = m_r^{1/2} ξ_{p_b(r)}(y)

form an orthonormal basis of H, where m_r ≡ m_r(b) is given by (17).

Momentum realization of H
In order to realize P_ext^b as a self-adjoint operator, we use the reasoning of Section 8: we construct a one-to-one isometry Ω′ of the Hilbert space H onto the Hilbert space L~^2_b(m_r) of functions F on the set of points (24) (which coincides with the set of values of the momentum) with the scalar product

⟨F, F′⟩ = Σ_r m_r F(p_b(r)) F′(p_b(r)).

It follows from (19) that the polynomials P~_n(p_b(r)) are orthogonal and constitute an orthonormal basis of L~^2_b(m_r). For a fixed b, the isometry Ω′ is given by the formula

Ω′f = F~,    F~(p_b(r)) := ⟨f, ξ_{p_b(r)}⟩.
This formula shows that Ω′ is indeed a one-to-one isometry. The operator P_ext^b acts on L~^2_b(m_r) as the multiplication operator:

(P_ext^b F)(p_b(r)) = p_b(r) F(p_b(r)),

and this operator is self-adjoint. The Hilbert space L~^2_b(m_r) is the space of states of our physical system in the momentum representation. Since the elements e_n(y) ∈ H are eigenfunctions of the Hamiltonian H = (1/2)(aa^+ + a^+a), the functions P~_n(p_b(r)) ∈ L~^2_b(m_r) are eigenfunctions of the same Hamiltonian if its action is considered in L~^2_b(m_r). For different values of b the sets (24) of values of the momentum are different. Therefore, the spaces L~^2_b(m_r) are different, since they consist of functions defined on different sets. Clearly, we may identify L^2_b(m_r) with L~^2_b(m_r).

Physical conclusion. Our considerations show that the creation and annihilation operators a^+ and a of Section 3 for q > 1 cannot determine a physical system without further specification. Namely, in order to determine a physical system we have to take appropriate self-adjoint extensions of the operators Q and P. Thus, the q-oscillator algebra of Section 3 in fact determines a two-parameter family of q-oscillators. We denote them by O(b, b′).

Fourier transforms related to the q-oscillator with q > 1

Let us first consider what we have in the case of the usual quantum harmonic oscillator. This oscillator is determined by the relation

aa^+ − a^+a = 1.

For the position and momentum operators we have Q_A = a^+ + a and P_A = i(a^+ − a). The Hilbert space of states H_A is spanned by the orthonormal vectors |n⟩, n = 0, 1, 2, . . . .

For eigenvectors of Q_A and P_A one obtains expansions of an arbitrary state vector in the coordinate and in the momentum representations (formulas (26)). In this way we obtain a realization of H_A as a space of functions in the coordinate or as a space of functions in the momentum. Then the functions h(x) and ĥ(p) from (26) are related to each other by the usual Fourier transform:

h(x) = (2π)^{−1/2} ∫_R ĥ(p) e^{ipx} dp.
An analog of this Fourier transform for the q-oscillator in the case when 0 < q < 1 is derived in [22]. The aim of this section is to give an analog of the Fourier transform for the q-oscillator O(b, b ′ ) for fixed b and b ′ (when q > 1). This analog is a transform on a discrete set since the coordinate and the momentum run over discrete sets.
We fix b and b′ in the interval [q~, 1). Let f ∈ H, and let Ωf = F(x_b(r)) ∈ L^2_b(m_r) and Ω′f = F~(p_{b′}(r′)) ∈ L~^2_{b′}(m_{r′}). We have to find a linear transform F: L~^2_{b′}(m_{r′}) → L^2_b(m_r) such that F F~ = F. By the definition of Ω and Ω′, one has

F(x_b(r)) = ⟨f, ϕ_{x_b(r)}⟩,    F~(p_{b′}(r′)) = ⟨f, ξ_{p_{b′}(r′)}⟩.

It is clear that

F(x_b(r)) = Σ_{r′} T^{b′b}_{r′r} F~(p_{b′}(r′)),    where T^{b′b}_{r′r} = ⟨ξ^norm_{p_{b′}(r′)}, ϕ^norm_{x_b(r)}⟩.

Thus, an analog of the Fourier transform for the q-oscillator O(b, b′) is given by the matrix (T^{b′b}_{r′r}). For the entries of this matrix we have

T^{b′b}_{r′r} = (m_{r′}(b′) m_r(b))^{1/2} Σ_{n=0}^∞ P~_n(p_{b′}(r′)) P_n(x_b(r)).

In order to sum up the last sum we set q = e^τ, b = e^σ, b′ = e^{σ′}.