Symmetry, Integrability and Geometry: Methods and Applications

Bäcklund–Darboux Transformation for Non-Isospectral Canonical System and Riemann–Hilbert Problem

A GBDT version of the Bäcklund–Darboux transformation is constructed for a non-isospectral canonical system, which plays an essential role in the theory of random matrix models. The corresponding Riemann–Hilbert problem is treated and some explicit formulas are obtained. A related inverse problem is formulated and solved.


Introduction
We shall consider the non-isospectral system (1.1). When the Hamiltonian H ≥ 0 and the spectral parameter λ does not depend on x, the system is a classical canonical system. A version of the Bäcklund–Darboux transformation (BDT) for the classical canonical system has been constructed in [15]. In our case (1.1) the spectral parameter λ = (z − x)^{-1} depends on x, and here we construct a BDT for this case.
BDT is a fruitful approach to obtaining solutions of linear differential equations and systems. It is also widely used to construct explicit solutions of integrable nonlinear systems; for that purpose BDT is applied simultaneously to the two auxiliary linear systems of the integrable equation. BDT is closely related to symmetry properties. Since the original works of Bäcklund and Darboux, a much deeper understanding of this transformation has been achieved, and various interesting versions of the Bäcklund–Darboux transformation have been introduced (see, for instance, [1, 2, 6, 8, 11, 12, 13, 21, 23]). Important works on the Bäcklund–Darboux transformation, both in the continuous and discrete cases, have been written by V.B. Kuznetsov and his coauthors (see [9, 10] and references therein).
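As a toy illustration of how a Darboux transformation produces new solutions from known ones, the following sketch applies the classical single Darboux transformation for the one-dimensional Schrödinger operator; this is only an illustrative example, not the canonical-system GBDT constructed below, and all functions and parameters in it are illustrative choices.

```python
import numpy as np

# Toy illustration (not the paper's GBDT): the classical Darboux
# transformation for the 1-D Schroedinger equation -u'' + V u = E u.
# The seed solution psi = cosh(x) with V = 0 (seed energy E0 = -1) gives
# the transformed potential V_new = V - 2 (ln psi)'' = -2 / cosh(x)^2 and
# maps any solution u to u_new = u' - (psi'/psi) u at the same energy E.
x = np.linspace(-5.0, 5.0, 4001)
h = x[1] - x[0]
k = 1.3
E = k**2

V_new = -2.0 / np.cosh(x) ** 2
u = np.exp(1j * k * x)                    # free solution of -u'' = E u
u_new = 1j * k * u - np.tanh(x) * u       # Darboux-transformed solution

# check -u_new'' + V_new u_new = E u_new by central differences
d2 = (u_new[2:] - 2.0 * u_new[1:-1] + u_new[:-2]) / h**2
residual = -d2 + V_new[1:-1] * u_new[1:-1] - E * u_new[1:-1]
print(np.max(np.abs(residual)))           # small discretization error only
```

The transformed wave function solves the Schrödinger equation with the new potential exactly; only the finite-difference discretization contributes to the residual.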
We apply the BDT to construct explicitly new solutions of the Riemann–Hilbert problem (1.2) on the interval [0, l], where W(z) is analytic for z ∉ [0, l] and W(z) → I_m as z → ∞ (I_m is the m × m identity matrix). For important classes of R the solution of problem (1.2) takes the form (1.3), where the m × m fundamental solution w of (1.1) is normalized by condition (1.4). Necessary and sufficient conditions for (1.3) are given in [20, p. 209] (see also [16, 19]). It is useful to obtain explicit formulas for H and R.
Problem (1.2) is of interest in random matrix theory: the Markov parameters appearing in the series representation w(l, z) = I_m + z^{-1} M_1(l) + z^{-2} M_2(l) + ··· are essential for random matrix problems [3, 4]. In particular, in the bulk scaling limit of the Gaussian unitary ensemble of Hermitian matrices, the probability that an interval of length l contains no eigenvalues is given by the function P(l), which satisfies equality (1.5). When J = I_m, system (1.2) is essential in prediction theory [22]. We construct a Bäcklund–Darboux transformation for system (1.1) in Section 2. Section 3 is dedicated to explicit solutions and Section 4 to an inverse problem.
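The Markov parameters mentioned above can be illustrated numerically. Assuming, for this sketch only, a matrix function with a state-space realization W(z) = I + C(zI − A)^{-1}B (an illustrative construction, not the fundamental solution w of system (1.1)), its Markov parameters in the expansion at infinity are M_k = C A^{k-1} B:

```python
import numpy as np

# Illustrative realization W(z) = I + C (zI - A)^{-1} B; its expansion
# W(z) = I + z^{-1} M_1 + z^{-2} M_2 + ... has Markov parameters
# M_k = C A^{k-1} B.  A, B, C are arbitrary example matrices.
rng = np.random.default_rng(1)
n, m = 3, 2
A = 0.3 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))

M = [C @ np.linalg.matrix_power(A, k - 1) @ B for k in range(1, 6)]

z = 10.0                                  # expansion point far from spec(A)
W = np.eye(m) + C @ np.linalg.inv(z * np.eye(n) - A) @ B
series = np.eye(m) + sum(Mk / z**k for k, Mk in enumerate(M, start=1))
trunc_err = np.max(np.abs(W - series))
print(trunc_err)                          # small truncation error
```

The truncated series with five Markov parameters already reproduces W(z) up to the geometrically small tail of the expansion.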

Bäcklund-Darboux transformation
To construct the Bäcklund–Darboux transformation we use the methods developed in [14, 15] for non-isospectral problems and for canonical systems, respectively. For this purpose fix an integer n > 0 and n × n parameter matrices A(0) and S(0) = S(0)*. Fix also an n × m parameter matrix Π(0) so that the matrix identity (2.1) holds. Then it can be checked by direct differentiation that the matrix identity holds for each x. Notice that the equation A_x = A² is motivated by the similar equation λ_x = λ² for the spectral parameter λ, because A can be viewed as a generalized spectral parameter (see [14]). At the points of invertibility of S we can introduce a transfer matrix function in the Lev Sakhnovich form [17, 18, 19]. This transfer matrix function has an important J-property [17]. Here the matrix function w_0 is defined by relations (2.8) and (2.9) up to a J-unitary initial value w_0(0). (We sometimes omit the argument x in the formulas for brevity.)

Theorem 1. If the matrix function w of the form (2.10) is well defined, then it satisfies the transformed system (2.11), and the fundamental solution of system (2.11), normalized by (2.13), is given by formula (2.10).

Proof. The proof is based on equation (2.18) for the transfer matrix function. We apply (2.18), as well as the second relation in (2.2), to differentiate w_A(x, z) and collect terms to rewrite (2.19) in the form (2.14). According to formulas (2.7)–(2.9), (2.14) and (2.15) we have (2.20). Taking into account (2.8) we get (2.21). Thus we rewrite (2.20) as (2.22), where H is given by (2.12). From (1.1) and (2.22) it follows that (2.11) holds for w of the form (2.10). In view of (1.4) one can see that normalization (2.13) yields w(0, z) = I_m.
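The J-property of a transfer matrix function in the Lev Sakhnovich form can be checked numerically. The sketch below assumes, for illustration, the form w_A(λ) = I_m − iJΠ*S^{-1}(A − λI_n)^{-1}Π together with the matrix identity AS − SA* = iΠJΠ*, S = S*; the data A, Π, J are arbitrary illustrative choices, not objects from the text.

```python
import numpy as np

# Sketch of the J-property, assuming the transfer matrix function
#   w_A(lam) = I_m - i J Pi^* S^{-1} (A - lam I_n)^{-1} Pi
# with S = S^* solving the matrix identity A S - S A^* = i Pi J Pi^*.
n, m = 2, 2
J = np.diag([1.0, -1.0])                      # J = J^* = J^{-1}
A = np.diag([1j, 1.0 + 2j])                   # spec(A) disjoint from spec(A^*)
Pi = np.array([[1.0, 1.0], [0.0, 1.0]], dtype=complex)

# solve the Sylvester identity by vectorization; the solution is unique
# here and therefore automatically Hermitian
Q = 1j * Pi @ J @ Pi.conj().T
Mmat = np.kron(np.eye(n), A) - np.kron(A.conj(), np.eye(n))
S = np.linalg.solve(Mmat, Q.reshape(-1, order="F")).reshape((n, n), order="F")
assert np.allclose(S, S.conj().T)

def w_A(lam):
    resolvent = np.linalg.inv(A - lam * np.eye(n))
    return np.eye(m) - 1j * J @ Pi.conj().T @ np.linalg.solve(S, resolvent @ Pi)

lam = 0.7                                     # a real point
j_err = np.max(np.abs(w_A(lam).conj().T @ J @ w_A(lam) - J))
print(j_err)                                  # ~ 0: w_A is J-unitary for real lam
```

For real λ the algebraic identity forces w_A(λ)* J w_A(λ) = J exactly, so the printed residual is at the level of rounding error.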
Our next proposition provides conditions for the invertibility of S.
Then, in view of (2.3) and (2.23), we have (2.24). It follows that S is invertible.
In view of the first equality in (2.2), the invertible matrix function A is of the form A = (B − x I_n)^{-1}. Further we shall suppose that A is defined and that both A and S are invertible on some interval [0, l].

Remark 1. Suppose A(x) and S(x) are invertible on the interval [0, l]. Using (2.17) we can differentiate the corresponding expression, and in this way, similarly to (2.14), we can show that the matrix function w_0, which satisfies (2.8), (2.9) and the initial condition w_0(0) = U, admits representation (2.25). In view of (1.2) and (2.25) we obtain the corresponding relation.
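The closed form A = (B − xI_n)^{-1} can be checked against a direct integration of A_x = A². The following sketch uses an arbitrary illustrative initial matrix A(0) (so B = A(0)^{-1}) and compares a Runge–Kutta solution with the closed form:

```python
import numpy as np

# A_x = A^2 with invertible A has the closed-form solution
# A(x) = (B - x I_n)^{-1}, B = A(0)^{-1} -- the matrix analogue of
# lambda = (z - x)^{-1}, lambda_x = lambda^2.  A(0) is illustrative.
A0 = np.array([[1.0, 0.5], [0.0, 2.0]])
n = A0.shape[0]
B = np.linalg.inv(A0)                         # B = A(0)^{-1}

def closed_form(x):
    return np.linalg.inv(B - x * np.eye(n))

def f(M):
    return M @ M                              # right-hand side of A_x = A^2

A, x, dx = A0.copy(), 0.0, 1e-3
while x < 0.3:                                # stay left of the singularity at x = 0.5
    k1 = f(A)
    k2 = f(A + 0.5 * dx * k1)
    k3 = f(A + 0.5 * dx * k2)
    k4 = f(A + dx * k3)
    A = A + dx / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    x += dx
ode_err = np.max(np.abs(A - closed_form(x)))
print(ode_err)                                # RK4 matches the closed form
```

The closed form follows from d/dx (B − xI_n)^{-1} = (B − xI_n)^{-1} (B − xI_n)^{-1}, and the numerical solution agrees with it to integrator accuracy.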

Explicit solutions
If we know A, S and Π then, using the results of the previous section, we can construct explicit expressions for H and R. Consider the simplest case (3.1). Then in formula (1.2) we have (3.2). Indeed, in view of (1.1) and (3.1) we get

βw(x, z) = βw(0, z) = β,   βJw(x, z) = −2i ln(z − x) β + const,   (3.3)

where const means some constant (vector). In the first relation in (3.3) we use normalization condition (1.4). Taking into account (1.4) again, from the second relation in (3.3) we derive

βJw(x, z) = 2i ln(z/(z − x)) β + βJ.

So according to (3.6) we have (3.7). Moreover, formula (3.5) implies (3.8). Hence, we obtain (3.9). Substitute (3.9) into (3.8) to see that R² = I_2 + 2πJβ*β, i.e., we can assume (3.2). Also we can set (3.10). From Π_x = −iAΠJH and (3.1) we get an equation for Π, and it follows that (3.11) holds. Formulas (3.10) and (3.11) give us Π. We shall assume that b_k ∈ [0, ∞), so that Π is well defined on [0, ∞). Taking into account (3.7) we also get (3.12). The matrix function S is easily derived from the identity AS − SA* = iΠJΠ*. Finally, in view of (2.24) and (3.10) we get an expression which, taking into account (2.12) and (3.1), implies (3.13). Thus the matrix functions H(x) and R(s) are given by formulas (3.13) and (3.14), respectively.
Example 1. Consider the simplest case n = 1. Put b_1 = b and assume b ∈ R. Rewrite (3.12) as (3.15); here ḡ denotes the complex conjugate of g. Hence, in view of (2.4), we get the corresponding expression. Put now h = 0 to derive (3.17). Rewrite (3.14) in the form

R(s) = I_2 + πJU* r(s)* r(s) U.   (3.16)

Finally, using (3.15) and (3.17), rewrite h in the explicit form.

Inverse problem: explicit solutions
In view of (2.6) it is immediate that formula (3.14) can be written in the form (3.16), R(s) = I_2 + πJU* r(s)* r(s) U, where the vector function r (see (4.1)) is rational and satisfies properties (4.2). Here J is defined in (3.1). Introduce the matrices K and j by (4.3), and consider the corresponding function u. From (4.2) and (4.3) we get rKjK* r* = 0, and so (4.5) holds. A rational function u satisfying (4.5) admits [17] a so-called minimal realization (4.6) with entries described by (4.7) and (4.8). Using (4.6) one recovers H (explicitly, though not necessarily uniquely) from the given function u. Introduce now the matrix functions S and Π by (4.9) and the associated equations; the next theorem states the resulting correspondence.

Proof. First notice that equations (4.7) and (4.9) and the first relation in (4.8) imply the required identities. The correspondence between R and H follows now from the results of Section 3. It remains to prove (4.4). For this purpose notice that, by (2.5), (4.1), (4.3) and (4.9), we have a representation in which the 2 × 2 matrix function W takes the corresponding form. In view of (4.12) we get (4.16). Using system-theory results on the realization of the inverse matrix function, from the first relation in (4.9) and from (4.16) we derive the required formula. Compare this result with the recovery of the so-called pseudo-exponential potentials (see [5, 7] and references therein).
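The two system-theory facts invoked here, the existence of a realization and the realization of the inverse function, can be sketched with purely illustrative scalar data (the numbers alpha, b, c, d below are not the matrices of the realization (4.6)):

```python
# Illustrative scalar sketch: a proper rational function admits a realization
#   u(lam) = d + c (lam - alpha)^{-1} b,
# and its inverse has the realization
#   u(lam)^{-1} = d^{-1} - d^{-1} c (lam - alpha_x)^{-1} b d^{-1},
# where alpha_x = alpha - b d^{-1} c.
alpha, b, c, d = 0.5, 1.0, 2.0, 1.0

def u(lam):
    return d + c * b / (lam - alpha)

def u_inv(lam):
    alpha_x = alpha - b * c / d               # "state matrix" of the inverse
    return 1.0 / d - (c / d) * b / (lam - alpha_x) / d

lam = 3.0                                     # any point away from the poles
inv_err = abs(u(lam) * u_inv(lam) - 1.0)
print(inv_err)                                # ~ 0: the inverse realization checks out
```

The same formula, with matrices in place of scalars, is what allows H to be recovered explicitly from u in this section.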

Remark 2. Since |u| = 1, the matrix α in the realization (4.6) is invertible. Therefore we can choose θ_2 satisfying the first relation in (4.8) and sufficiently small for α − i c θ θ_2^* S_0^{-1} to be invertible too.

Summary
The first new result of this paper is the construction of the Bäcklund–Darboux transformation for the non-isospectral canonical system (1.1), which is important both in prediction theory and in random matrix theory. The GBDT version of the Bäcklund–Darboux transformation constructed in Theorem 1 is more general than the iterated BDT and admits a parameter matrix A with an arbitrary Jordan structure. (For applications of GBDT to non-isospectral integrable systems see [14].) In Section 3 we apply GBDT to an initial system with H ≡ const to obtain a family of explicit solutions of system (1.1) and of the corresponding Riemann–Hilbert problem (1.2). In particular, we construct the transformed Hamiltonians H and the transformed jump functions R (see formula (3.13) for H and formula (3.14) for R). The subcase of Example 1 is treated in greater detail. The interesting case of a non-diagonal matrix A and applications to prediction theory will be treated elsewhere.
Finally, in Section 4, using methods of system theory, we recover H and R from partial information on R, similarly to the way in which the Dirac system is recovered explicitly from its Weyl function in [7].