Negative Times of the Davey-Stewartson Integrable Hierarchy

We use the example of the Davey-Stewartson hierarchy to show that, in addition to the standard equations given by the Lax operator and the evolutions with respect to times labeled by positive numbers, one can consider time evolutions labeled by negative numbers with the same Lax operator. We derive the corresponding Lax pairs and integrable equations.


Introduction
In [3], we proposed a method for the derivation of (2 + 1)-dimensional nonlinear integrable equations based on commutator identities on associative algebras. Taking into account the algebraic similarity of operator commutators and derivatives, we transformed commutator identities into linear partial differential equations. A characteristic property of these linear equations is the possibility of lifting them up to nonlinear, integrable ones. In [4,5], this approach was extended to differential-difference and difference equations, where the analogy between similarity transformations and shifts of independent variables was used. In [6], we developed this result for non-Abelian commutator identities.
To formulate the main aspects of this approach, we start here with the simplest examples. Let A and B be arbitrary elements of an arbitrary associative algebra A. Then they obey the commutator identity
$[A,\,4[A^3,B]-[A,[A,[A,B]]]] = 3[A^2,[A^2,B]].$   (1.1)
Being a trivial consequence of associativity, this identity easily proves that a function B of the variables $t_1, t_2, t_3$, i.e., such that $B_{t_n} = [A^n, B]$, n = 1, 2, 3, obeys the linearized Kadomtsev-Petviashvili (KP) equation with respect to the variables $t_j$:
$\partial_{t_1}\bigl(4\partial_{t_3}-\partial_{t_1}^3\bigr)B = 3\partial_{t_2}^2 B.$
(This paper is a contribution to the Special Issue on Mathematics of Integrable Systems: Classical and Quantum in honor of Leon Takhtajan.)
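Since a commutator identity of this type, e.g. $[A, 4[A^3,B] - [A,[A,[A,B]]]] = 3[A^2,[A^2,B]]$, which reproduces the linearized KP equation under $B_{t_n} = [A^n, B]$, is a consequence of associativity alone, it can be checked mechanically. A minimal sketch with noncommutative symbols (the names `A`, `B`, `comm` are ours):

```python
from sympy import Symbol, expand

# Noncommutative symbols model arbitrary elements of an associative algebra.
A = Symbol('A', commutative=False)
B = Symbol('B', commutative=False)

def comm(X, Y):
    """Commutator [X, Y] = XY - YX, expanded into monomials."""
    return expand(X*Y - Y*X)

# [A, 4[A^3, B] - [A, [A, [A, B]]]] = 3 [A^2, [A^2, B]]
lhs = comm(A, 4*comm(A**3, B) - comm(A, comm(A, comm(A, B))))
rhs = 3*comm(A**2, comm(A**2, B))

print(expand(lhs - rhs))  # 0: the identity holds in any associative algebra
```

All monomials cancel identically, without any assumption on A and B beyond associativity.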
It was stated in [3,7] that there are similar relations for higher commutators. In the case of KP, they lead to the higher linear equations
$2^n\,\partial_{t_n}\partial_{t_1}^n B = \bigl[(\partial_{t_2}+\partial_{t_1}^2)^n-(\partial_{t_2}-\partial_{t_1}^2)^n\bigr]B, \qquad n = 3, 4, \ldots.$
Similar results were obtained in [4,5,6] for difference and differential-difference equations. In that case, we replace (1.1) with, say, the commutator identity
$[A,[A^{-1},B]] = 2B - ABA^{-1} - A^{-1}BA,$
where the element A is assumed to be invertible. Thus, in addition to commutators of the kind (1.1), we get similarity transformations here (commutators in the group sense). Therefore, we introduce the element B depending on the number $n_1$ and the continuous variables $t_1$ and $t_{-1}$ by means of
$B_{t_1} = [A, B], \qquad B_{t_{-1}} = [A^{-1}, B],$
and denote the shift with respect to the variable $n_1$ as $B^{(1)} = ABA^{-1}$. Accordingly, this element B obeys the linear differential-difference equation
$\partial_{t_1}\partial_{t_{-1}}B = 2B - B^{(1)} - B^{(-1)},$
which gives a linearized version of the two-dimensional Toda system [8,9].
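The differential-difference analogue can be checked in the same spirit. Below is a minimal numerical sketch in which random invertible matrices stand in for the abstract algebra; the form of the identity, $[A,[A^{-1},B]] = 2B - ABA^{-1} - A^{-1}BA$, is our reading of the linearized two-dimensional Toda relation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + np.eye(n) * 5.0  # diagonally dominant => invertible
B = rng.standard_normal((n, n))
Ainv = np.linalg.inv(A)

comm = lambda X, Y: X @ Y - Y @ X

# [A, [A^{-1}, B]] = 2B - A B A^{-1} - A^{-1} B A,
# i.e. B_{t_1 t_{-1}} = 2B - B^{(1)} - B^{(-1)} with the shift B^{(1)} = A B A^{-1}.
lhs = comm(A, comm(Ainv, B))
rhs = 2 * B - A @ B @ Ainv - Ainv @ B @ A

print(np.allclose(lhs, rhs))  # True
```

The right-hand side is exactly the second difference of B with respect to the discrete variable, which is the content of the linearized Toda system.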
In [3,4,5], we proved that any linear equation resulting from a commutator identity can be lifted up to a nonlinear integrable equation using a special dressing procedure. In this paper, our goal is to extend the class of commutator identities. For this purpose one can use arbitrary functions f(A), with commutativity being the only condition they should obey: [f(A), g(A)] = 0. A natural generalization of the choice of functions of the element A was suggested in [3,7]. We assume that in the algebra A there exists an element σ such that
$\sigma^2 = 1, \qquad [\sigma, A] = 0, \qquad \{\sigma, B\} = 0,$   (1.4)
where {·, ·} denotes the anticommutator. In particular, we can consider elements of A as 2 × 2 matrices, where A is proportional to the unity matrix I, B is off-diagonal, and σ = diag(1, −1). This gives two sets of commutator identities, (1.6) and (1.7), labeled by n ≥ 1. These two sets of commutator identities give two sets of differential hierarchies if, in addition to (1.2), we introduce two sets of variables, t = {t_1, t_2, . . .} and x = {x_1, x_2, . . .}, given by the equations
$\partial_{t_n}B = [A^n, B],$   (1.8a)
$\partial_{x_n}B = [\sigma A^n, B] = \sigma\{A^n, B\}.$   (1.8b)
Taking n = 1 here, we get $B_{t_1} = [A, B]$ and $B_{x_1} = \sigma\{A, B\}$, so thanks to (1.6) and (1.7), we get linear differential equations for B(t, x, z):
$2^n\,\partial_{t_n}B = \bigl[(\partial_{t_1}+\sigma\partial_{x_1})^n-(\sigma\partial_{x_1}-\partial_{t_1})^n\bigr]B,$   (1.10)
$2^n\,\sigma\partial_{x_n}B = \bigl[(\partial_{t_1}+\sigma\partial_{x_1})^n+(\sigma\partial_{x_1}-\partial_{t_1})^n\bigr]B.$   (1.11)
For n = 2, these equalities read $\sigma B_{t_2} = B_{t_1x_1}$ and $2\sigma B_{x_2} = B_{t_1t_1} + B_{x_1x_1}$, respectively. In [7], these linear equations were lifted to the Davey-Stewartson equation (see [1]) and to the higher equations of its hierarchy. Here we consider the "negative" version of this hierarchy, i.e., we assume negative values of n in (1.8). In Section 2, we derive the corresponding commutator identities and the corresponding linear differential equations. In Section 3, we introduce a realization of elements of the associative algebra A by means of pseudo-differential operators. On this basis, in Section 4, we consider the dressing procedure that enables introduction of the dressing operator and its time evolutions. The Lax pair and nonlinear equations are derived in Section 5. Section 6 is devoted to (1 + 1)-dimensional reductions of the systems under consideration. Some concluding remarks are given in Section 7.
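The n = 2 relations above can be verified directly in the 2 × 2 matrix realization. In the sketch below, σ = diag(1, −1), A is proportional to the unit matrix (with a noncommutative scalar a), and B is off-diagonal; the factor conventions and the form of the x-evolution, $B_{x_n} = [\sigma A^n, B]$, are our reading of (1.8b):

```python
from sympy import Symbol, Matrix, expand, zeros

a = Symbol('a', commutative=False)
b = Symbol('b', commutative=False)
c = Symbol('c', commutative=False)

I2 = Matrix([[1, 0], [0, 1]])
sg = Matrix([[1, 0], [0, -1]])     # sigma = diag(1, -1)
A = a * I2                         # A proportional to the unit matrix
B = Matrix([[0, b], [c, 0]])       # B off-diagonal

comm = lambda X, Y: (X*Y - Y*X).applyfunc(expand)

# t- and x-evolutions: B_{t_n} = [A^n, B], B_{x_n} = [sigma A^n, B]
Bt1, Bx1 = comm(A, B), comm(sg*A, B)
Bt2, Bx2 = comm(A**2, B), comm(sg*A**2, B)

# sigma B_{t_2} = B_{t_1 x_1}  and  2 sigma B_{x_2} = B_{t_1 t_1} + B_{x_1 x_1}
print((sg*Bt2 - comm(A, Bx1)).applyfunc(expand) == zeros(2, 2))
print((2*sg*Bx2 - comm(A, Bt1) - comm(sg*A, Bx1)).applyfunc(expand) == zeros(2, 2))
```

Both checks succeed with fully noncommutative entries, i.e., the relations are commutator identities, not properties of a particular representation.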
Taking into account that all these commutators mutually commute, we consider B as a function of $t_1$, $x_1$, and $t_{-1}$ (or $x_{-1}$), such that the corresponding evolutions hold simultaneously.

Commutator identities and linear equations
Thus, we again have two versions of the equations: one involving $\partial_{t_{-1}}$, and the other involving $\partial_{x_{-1}}$. Taking into account the symmetry of these two equations with respect to the substitution $x_{-1} \leftrightarrow -t_{-1}$, we study here mainly (2.4). By extending (1.8) to negative values of n, we arrive at a hierarchy of commutator identities and linear equations. Indeed, we can use (1.10) and (1.11), substitute n → −n into these equations, and multiply them both by $(\partial_{x_1}^2-\partial_{t_1}^2)^n$, which clears the negative powers of the derivatives and gives
$(\partial_{x_1}^2-\partial_{t_1}^2)^n\,\partial_{t_{-n}}B = 2^n\bigl[(\sigma\partial_{x_1}-\partial_{t_1})^n-(\partial_{t_1}+\sigma\partial_{x_1})^n\bigr]B,$   (2.6)
$(\partial_{x_1}^2-\partial_{t_1}^2)^n\,\sigma\partial_{x_{-n}}B = 2^n\bigl[(\sigma\partial_{x_1}-\partial_{t_1})^n+(\partial_{t_1}+\sigma\partial_{x_1})^n\bigr]B,$   (2.7)
where n = 1, 2, . . . and where, by analogy with (2.3b), $\partial_{x_{-n}}B = [\sigma A^{-n}, B]$. We omit here the form of (2.6) and (2.7) in terms of commutator identities; it can be easily restored with the help of (1.9). In the case of n = 1, equations (2.6) and (2.7) reduce to (2.1) and (2.2). Now we have to show that all these linear equations admit a lift up to nonlinear integrable ones.

Realization of elements of the associative algebra
To this end, we consider a special realization of the elements of the associative algebra A, see [3,4,5,6]. By analogy with the standard definition of pseudo-differential operators, we define an element F of A by its symbol F(t, x, z). Here t and x denote (finite) subsets of {t_1, t_2, . . .} and {x_1, x_2, . . .}, and z ∈ C denotes a complex parameter. The subsets t and x always include the variables $t_1$ and $x_1$ and at least one of the other variables of these lists; in what follows we call such subsets minimal. The symbol of the composition of two elements of the algebra is given by means of the symbols of the cofactors in the form
$(FG)(t,x,z) = \sum_{k=0}^{\infty}\frac{1}{k!}\,\partial_z^kF(t,x,z)\,\partial_{t_1}^kG(t,x,z),$   (3.1)
so that the dependence on the variables of t other than $t_1$, as well as on the variables of x, enters pointwise. We see that the variable $t_1$ plays a special role here: the composition with respect to the other variables is pointwise. In what follows, we consider elements of the algebra A whose symbols belong to the space of tempered distributions of their arguments. The symbol of the unity operator is 1, and we choose the symbol of the operator A as
$A(t,x,z) = z.$
Thanks to (3.1), we have for any F that
$(FA^n)(t,x,z) = z^nF(t,x,z), \qquad (A^nF)(t,x,z) = (z+\partial_{t_1})^nF(t,x,z),$   (3.2)
where $A^n$ is understood as the n-th power of the composition (3.1) and now n ∈ Z. Then, for n = 1, we get $[A, F] = \partial_{t_1}F$ in accordance with (1.8a). Because of our assumption, the symbol B(t, x, z) admits a Fourier transform with respect to the variable $t_1$, so the above relations show that
$B(t,x,z) = \int dp\,\exp\Bigl(\sum_n\bigl[\bigl((z+ip)^n-z^n\bigr)t_n+\sigma\bigl((z+ip)^n+z^n\bigr)x_n\bigr]\Bigr)f(p,z),$   (3.3)
where n ∈ Z and f(p, z) is an arbitrary 2 × 2 off-diagonal matrix function independent of all $t_n$ and $x_n$. Note that here we do not specify the set of "times" $t_i$ and $x_i$ involved in the evolution equations. We know that this set includes at least three times: $t_1$, $x_1$, and one of the times $t_n$ or $x_n$ with n ≠ 0, 1. It can include more times, but $t_1$, $x_1$, and every third time give an evolution equation generated by the commutator identity. Thus, in (3.3), the summation in the exponent runs over the finite number of terms corresponding to the times that are "switched on", while the other times are set equal to zero.
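A pseudo-differential composition law of the type sketched above, $(FG)(z) = \sum_k \partial_z^kF\,\partial_{t_1}^kG/k!$, is one natural realization consistent with the requirement $[A, F] = \partial_{t_1}F$ when A has symbol z (our reading of (3.1)-(3.2)). A short symbolic check:

```python
from sympy import symbols, Function, diff, factorial, simplify

t1, z = symbols('t1 z')
F = Function('F')(t1, z)  # a generic symbol F(t_1, z); other variables enter pointwise

def compose(f, g, order=8):
    """Pseudo-differential composition: sum_k (1/k!) d_z^k f * d_{t1}^k g."""
    return sum(diff(f, z, k) * diff(g, t1, k) / factorial(k) for k in range(order))

A = z  # symbol of the operator A

# [A, F] = A o F - F o A reduces to the t_1-derivative of F
assert simplify(compose(A, F) - compose(F, A) - diff(F, t1)) == 0

# F o A^n is plain multiplication by z^n (the symbol z^n is independent of t_1)
assert simplify(compose(F, A**3) - z**3 * F) == 0
print("composition checks passed")
```

The asymmetry between left and right composition with A is exactly what makes $t_1$ a distinguished variable of the realization.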
It is natural to impose on B(t, x, z) the condition of convergence of the integral and the condition of boundedness of the limits of B(t, x, z) as t or x tends to infinity. Two obvious reductions are sufficient for this. The first one is given by the choice $f(p, z) = \delta(p + 2z_{\rm Im})g(z)$, where δ denotes the delta-function, so that (3.3) takes the form
$B(t,x,z) = \exp\Bigl(\sum_n\bigl[(\bar z^n-z^n)t_n+\sigma(\bar z^n+z^n)x_n\bigr]\Bigr)g(z),$   (3.4)
where g(z) is an arbitrary bounded function of its argument. Here the coefficients $\bar z^n - z^n$ are pure imaginary, but in order to get B(t, x, z) bounded with respect to the variables $x_n$, it is necessary to perform the substitution
$x_n \to ix_n,$   (3.5)
where the new $x_n$ are real. The second case is given by the reduction $f(p, z) = \delta(z_{\rm Re})h(p, z_{\rm Im})$, where $z = z_{\rm Re} + iz_{\rm Im}$ and $h(p, z_{\rm Im})$ is an arbitrary function of its arguments. Then (3.3) takes the form
$B(t,x,z) = \int dp\,\exp\Bigl(\sum_n\bigl[i^n\bigl((z_{\rm Im}+p)^n-z_{\rm Im}^n\bigr)t_n+\sigma i^n\bigl((z_{\rm Im}+p)^n+z_{\rm Im}^n\bigr)x_n\bigr]\Bigr)h(p,z_{\rm Im})\,\delta(z_{\rm Re}).$   (3.6)
Here we see that B(t, x, z) is bounded with respect to the variables $t_n$ and $x_n$ with odd numbers, and in order to make it bounded for the variables with even numbers, we need to make the substitution
$t_n \to it_n, \qquad x_n \to ix_n, \qquad n \text{ even}.$   (3.7)
Thus we have two types of systems, defined by the choices (3.4) and (3.6).

Dressing procedure
A specific property of the above set of operators is the possibility of defining the operation of $\bar\partial$-differentiation with respect to the complex variable z, $F \to \bar\partial F$. In terms of symbols, this is defined, see [3], as
$(\bar\partial F)(t,x,z) = \frac{\partial F(t,x,z)}{\partial\bar z},$   (4.1)
where the derivative is understood in the sense of distributions. Thanks to (3.2), we get the equality
$\bar\partial(FA^n) = (\bar\partial F)A^n, \qquad n \geq 0,$   (4.2)
which plays an essential role in what follows.
Now we can define a dressing operator K with symbol K(t, x, z) by means of the $\bar\partial$-problem
$\bar\partial K = KB,$   (4.3)
where the product on the r.h.s. is understood in the sense of the composition law (3.1). Thanks to (3.1) and (4.1), the equality (4.3) takes the explicit form
$\frac{\partial K(t,x,z)}{\partial\bar z} = K(t,x,\bar z)\exp\Bigl(\sum_n\bigl[(\bar z^n-z^n)t_n+\sigma(\bar z^n+z^n)x_n\bigr]\Bigr)g(z)$   (4.4)
for the time evolutions given by (3.4), and the form
$\frac{\partial K(t,x,z)}{\partial\bar z} = \delta(z_{\rm Re})\int dp\,K(t,x,z+ip)\exp\Bigl(\sum_n\bigl[i^n\bigl((z_{\rm Im}+p)^n-z_{\rm Im}^n\bigr)t_n+\sigma i^n\bigl((z_{\rm Im}+p)^n+z_{\rm Im}^n\bigr)x_n\bigr]\Bigr)h(p,z_{\rm Im})$   (4.5)
for the time evolutions given by (3.6). Thus, in the case of (4.4), the equation (4.3) gives a $\bar\partial$-problem, while in the case of (4.5) we get a Riemann-Hilbert problem. In both cases, we normalize the solution K of the equation (4.3) by the asymptotic condition
$K(t,x,z) \to 1, \qquad z \to \infty.$   (4.6)
In what follows, we assume unique solvability of the problem (4.3), (4.6). The time evolution of the dressing operator follows from these equations. Say, due to (1.8) and (2.3), we get
$\bar\partial K_{t_n} = K_{t_n}B + K[A^n,B].$   (4.7)
Accordingly, taking into account the commutativity of $A^m$ and $A^n$, we get $\bar\partial(K_{t_mt_n}-K_{t_nt_m}) = (K_{t_mt_n}-K_{t_nt_m})B$ by (4.3). Thus, the commutativity of the derivatives,
$K_{t_mt_n} = K_{t_nt_m},$   (4.8)
follows from the unique solvability of the problem (4.3), (4.6). Similarly, we prove that $K_{x_mt_n} = K_{t_nx_m}$ and $K_{x_mx_n} = K_{x_nx_m}$. In [7], the time derivatives of the dressing operator for positive times (n > 0 in (1.8)) were calculated in terms of the asymptotic expansion of the dressing operator K,
$K(t,x,z) = 1 + \frac{u(t,x)}{z} + \frac{v(t,x)}{z^2} + \frac{w(t,x)}{z^3} + \cdots,$   (4.9)
where u, v, and w are multiplication operators, i.e., their symbols do not depend on z. Say, using (4.7) for n = 1, we get $\bar\partial K_{t_1} = K_{t_1}B + K[A,B]$. This can be written as $\bar\partial(K_{t_1}+KA) = (K_{t_1}+KA)B$, where (4.2) and (4.3) were used. Due to the condition of unique solvability of (4.3), (4.6), we derive that there exists a multiplication operator X such that $K_{t_1}+KA = (A+X)K$. Thanks to (4.9), it is easy to see that X equals zero, so we have
$K_{t_1}+KA = AK.$   (4.10)
The situation with $K_{x_1}$ is more involved: here the analogous multiplication operator does not vanish, and by (4.6) and (4.9) we get
$K_{x_1}+K\sigma A = (\sigma A-[\sigma,u])K,$   (4.11)
where the multiplication operator u is defined in (4.9). Combining (4.10) and (4.11), we get
$K_{x_1}-\sigma K_{t_1} = \sigma KA - K\sigma A - [\sigma,u]K.$   (4.12)
Our goal here is to extend the approach of [7] to the negative numbers of times in (1.8).
More exactly, we start with the times t 1 and x 1 as above and we choose either t −1 or x −1 as the third time according to (2.3b).
To determine the evolutions of the dressing operator with respect to $t_{-1}$ or $x_{-1}$, we differentiate (4.3) and use (2.3b):
$\bar\partial K_{t_{-1}} = K_{t_{-1}}B + K[A^{-1},B], \qquad \bar\partial K_{x_{-1}} = K_{x_{-1}}B + K[\sigma A^{-1},B],$   (4.13)
so for the first equality, we have
$\bar\partial K_{t_{-1}} = K_{t_{-1}}B + K(A^{-1}BA-B)A^{-1}.$   (4.14)
We see that the situation here is more complicated than in the case of positive numbers of times. There we were able to reduce the equations to the form $\bar\partial(K_{t_n}+KA^n) = (K_{t_n}+KA^n)B$ due to (4.2), while for negative n this equality gives an additional delta-term. Therefore, to use the relation (4.14), we must find a replacement for $A^{-1}BA$. This can be done by introducing a discrete variable, cf. [4] and (1.3) here. We assume that the symbols of B, K, etc., depend on an intermediate variable n ∈ Z. Denote $B^{(1)}(t, x, n, z) = B(t, x, n + 1, z)$, $K^{(1)}(t, x, n, z) = K(t, x, n + 1, z)$, and set
$B^{(1)} = ABA^{-1}.$   (4.15)
It is easy to see that these shifts commute with the times t and x: $(B_{t_n})^{(1)} = (B^{(1)})_{t_n}$, and we extend the definition of the composition law (3.1) to symbols that depend on n pointwise with respect to this variable. Now $\bar\partial K^{(1)} = K^{(1)}ABA^{-1}$ because of (4.3), so that, due to the unique solvability of the problem (4.3), (4.6), there exists a multiplication operator ψ such that
$K^{(1)}A = (A+\psi)K,$   (4.16)
and thanks to (4.9) we get
$\psi = u^{(1)} - u,$   (4.17)
where $u^{(1)}(t, x, n) = u(t, x, n + 1)$. Let us shift n → n + 1 in (4.14), which due to (4.15) gives $\bar\partial(K^{(1)}_{t_{-1}}A+K^{(1)}) = (K^{(1)}_{t_{-1}}A+K^{(1)})B$, so that, because of (4.6), there exists a multiplication operator Z such that $K^{(1)}_{t_{-1}}A+K^{(1)} = ZK$. Thanks to (4.9), we get that $Z = 1 + u^{(1)}_{t_{-1}}$. It looks like we have constructed a (3+1)-dimensional integrable system with the independent variables $t_1$, $x_1$, $t_{-1}$, and n. But in fact, we have two different systems here: one in $t_1$, $x_1$, n (see (4.16)) and one in $t_1$, $x_1$, $t_{-1}$, because the dependence on n can be excluded. Indeed, substituting $K^{(1)}$ for K by means of (4.16) and using ψ as the new dependent variable in (4.17) instead of $u^{(1)}$, we get
$(A+\psi)(K_{t_{-1}}+KA^{-1}) = (1+u_{t_{-1}})K.$   (4.18)
The equation (4.19) for the $x_{-1}$-evolution is derived by analogy, using the second equality in (4.13). The compatibility of any of these equations with (4.12) can be proved as in (4.8).
Compatible evolutions (4.12) and (4.18), or (4.12) and (4.19), admit higher (in fact, lower) versions that involve the times $t_{-n}$ and $x_{-n}$, n > 1, see (2.8). By analogy with (4.13), in this case we get by (2.8)
$\bar\partial K_{t_{-n}} = K_{t_{-n}}B + K[A^{-n},B].$   (4.20)
Multiplying this equality by $A^n$ from the right, we use the n-fold application of (4.15): $B^{[-n]} = A^{-n}BA^n$. Thus (4.20) takes a form analogous to (4.14). Again, thanks to the assumed unique solvability of the inverse problem (4.3), (4.6), we get that there exist multiplication operators $\alpha_0, \ldots, \alpha_{n-1}$ such that
$K^{[n]}A^n = (A^n+\alpha_{n-1}A^{n-1}+\cdots+\alpha_0)K,$   (4.21)
where we applied the n-fold shift operation. The operators $\alpha_j$ are expressed in terms of the operators u, v, w, etc., in (4.9); we omit these calculations here. Next, we perform an (n − 1)-fold shift of the discrete variable in equation (4.16), which gives
$K^{[n]}A = (A+\psi^{[n-1]})K^{[n-1]},$   (4.22)
where the multiplication operator ψ was defined in (4.17). The final expression follows by inserting $K^{[n]}$ from (4.22) into (4.21), which again cancels the dependence on the auxiliary variable n. The consideration of the dependence on $x_{-n}$ is similar.

Lax pair and nonlinear equations
In this section, we rewrite the above relations in terms of the Jost solution
$\varphi(t,x,z) = K(t,x,z)\exp\Bigl(\sum_n\bigl(z^nt_n+\sigma z^nx_n\bigr)\Bigr),$   (5.1)
where the summation runs over the switched-on times. Here we omit the dependence on the discrete variable n, since it was excluded from (4.18) and (4.19). Thanks to this substitution, the coefficients of the equations (4.12), (4.18), and (4.19) become independent of z:
$\varphi_{x_1} = \sigma\varphi_{t_1} - [\sigma,u]\varphi,$   (5.2)
$\varphi_{t_1t_{-1}} + \psi\varphi_{t_{-1}} = (1+u_{t_{-1}})\varphi,$   (5.3)
and the analogous equation (5.4) for the $x_{-1}$-evolution, where the first equation is the famous two-dimensional linear Zakharov-Shabat problem. One can also rewrite (4.3) in terms of the Jost solutions. Say, by means of (3.4) we get
$\frac{\partial\varphi(t,x,z)}{\partial\bar z} = \varphi(t,x,\bar z)g(z),$   (5.5)
and by means of (3.6),
$\frac{\partial\varphi(t,x,z)}{\partial\bar z} = \delta(z_{\rm Re})\int dp\,\varphi(t,x,ip)h(p-z_{\rm Im},z_{\rm Im}).$   (5.6)
We see that the equations for the Jost solutions are independent of all the "time" variables t and x.
The dependence on these variables, as well as on z in (5.2)-(5.4), is given by (4.6), which, thanks to (5.1), takes the form
$\varphi(t,x,z) \to \exp\Bigl(\sum_n\bigl(z^nt_n+\sigma z^nx_n\bigr)\Bigr), \qquad z \to \infty.$   (5.7)
Note that (5.5) is a standard $\bar\partial$-problem with the normalization condition (5.7), where we must perform the substitution mentioned in (3.5). At the same time, (5.6) shows that the Jost solution in this case is analytic in the left and right half-planes of z, with a discontinuity on the imaginary axis. Thus, here the inverse problem is given in terms of a Riemann-Hilbert problem, i.e., we define the boundary values of the Jost solution as $\varphi^\pm(t,x,iz_{\rm Im}) = \lim_{z_{\rm Re}\to\pm0}\varphi(t,x,z)$ and set the corresponding jump condition under the condition (5.7) and the substitution given in (3.7). The difference between these two formulations of the inverse problem results from the condition of boundedness of the symbol of the operator B in (3.4) and (3.6). In the case of (5.5), the $t_n$ are real and the $x_n$ are pure imaginary, while in the case of (5.6), the $t_n$ and $x_n$ with odd n are real and those with even n are pure imaginary. The compatibility of (5.2) with (5.3) and (5.4) follows from (4.8) and (5.1). Thus, we get the following theorem.
It is natural to decompose both matrices u and ψ into their diagonal and anti-diagonal parts:
$u = u_d + u_a, \qquad \psi = \psi_d + \psi_a,$
so that $[\sigma,u_d] = 0$ and $[\sigma,u_a] = 2\sigma u_a$ thanks to (1.4). Then the anti-diagonal parts of the equations (5.8) and (5.10) give the evolution equations, while their diagonal parts reduce to the derivative of one and the same equation, with respect to $t_{-1}$ for (5.8) and with respect to $x_{-1}$ for (5.10); we have integrated this equation here with respect to $t_{-1}$ (or, correspondingly, $x_{-1}$) under the assumption of rapid decay of u as $(t_1, x_1) \to \infty$.
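The commutation relations used in this decomposition are elementary to verify; a minimal sketch with σ = diag(1, −1) and generic noncommutative entries (the entry names are ours):

```python
from sympy import Symbol, Matrix, expand, zeros

p, q, r, s = [Symbol(n, commutative=False) for n in 'pqrs']
sg = Matrix([[1, 0], [0, -1]])    # sigma = diag(1, -1)
u_d = Matrix([[p, 0], [0, q]])    # diagonal part of u
u_a = Matrix([[0, r], [s, 0]])    # anti-diagonal part of u

comm = lambda X, Y: (X*Y - Y*X).applyfunc(expand)

# [sigma, u_d] = 0 and [sigma, u_a] = 2 sigma u_a
print(comm(sg, u_d) == zeros(2, 2))                                  # True
print((comm(sg, u_a) - 2*sg*u_a).applyfunc(expand) == zeros(2, 2))   # True
```

These two relations are what split any equation for u into its diagonal and anti-diagonal projections.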

Dimensional reductions
Here we introduce (1 + 1)-dimensional reductions of the (2 + 1)-dimensional nonlinear integrable equations constructed above. Such reductions follow from the time evolutions (3.4), (3.6), which, due to (4.3) and (4.6), lead to the same reductions of the dressing operator K, and then to reductions of all coefficients of the series (4.9). The reduction of the time dependence of the operator B is, in turn, the result of conditions on the supports of the functions g(z) and $h(p, z_{\rm Im})$ in (3.4) and (3.6), which reduce the number of independent time variables. For example, for the operator B(t, x, z) in (3.4), depending on the times $t_1$, $x_1$, and $t_{-1}$, we can cancel the dependence on $x_1$ by imposing the condition
$g(z) = \delta(z_{\rm Re})\,g(z_{\rm Im}).$   (6.1)
Thanks to (3.4), this gives
$B(t,x,z) = \delta(z_{\rm Re})\exp\bigl(-2iz_{\rm Im}t_1+2iz_{\rm Im}^{-1}t_{-1}\bigr)g(z_{\rm Im}).$
It is clear that this dependence on two variables is preserved under evolution, and that, thanks to the $\bar\partial$-problem (4.3) and (4.6) and the composition law (3.1) (or due to (4.4)), the symbol of the operator K is also independent of $x_1$. Moreover, this operator is now an analytic function for $z_{\rm Re} \neq 0$. Taking into account the independence of the operator K of $x_1$, we must change the definition of the Jost solution, cf. (5.1), to
$\varphi(t_1,t_{-1},z) = K(t_1,t_{-1},z)e^{zt_1+z^{-1}t_{-1}}.$   (6.2)
We see that the $\bar\partial$-problem in this case is a Riemann-Hilbert problem for a function analytic in the right and left half-planes of the complex z-plane, with the discontinuity given by (6.1) on the imaginary axis. The function K is normalized by the condition (4.6) at z → ∞. Summarizing, the linear problem satisfied by (6.2) is nothing but the Zakharov-Shabat linear problem [10], which has been extensively studied in the literature, e.g., [2]. This is not the only reduction applicable to (3.4). Setting there $g(z) = \delta(|z|-1)g(z_{\rm Im})$, we get the scattering data, i.e., the symbol of the operator B, depending on the two variables $t_1 - t_{-1}$ and $x_1$:
$B(t,x,z) = \delta(|z|-1)\exp\bigl(-2iz_{\rm Im}(t_1-t_{-1})+2\sigma z_{\rm Re}x_1\bigr)g(z_{\rm Im}).$   (6.3)
Thus, after the shift $t_1 \to t_1 + t_{-1}$, we exclude the dependence on $t_{-1}$ from B, and then from K.
Now, because of the delta-function in (6.3), the inverse problem (4.3) reduces to a Riemann-Hilbert problem on the circle |z| = 1 with the normalization condition (4.6). We define the Jost solution by means of the relation $\varphi(t_1, x_1, z) = K(t_1 + t_{-1}, t_{-1}, x_1, z)e^{zt_1+\sigma zx_1}$, where the r.h.s. does not depend on $t_{-1}$. The integrable equation follows from (5.8), while the second equation (5.9) is left unchanged. By analogy, one can consider the reductions of the symbol of the operator B in (4.5), i.e., when $t_1$, $x_1$, and $x_{-1}$ are chosen as independent variables.
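The mechanism behind (6.3) — on the unit circle the $t_1$- and $t_{-1}$-exponents differ only by sign, so the symbol depends on $t_1 - t_{-1}$ alone — can be seen numerically. The exponent coefficients $\bar z^n - z^n$ are taken in the form assumed in Section 3:

```python
import numpy as np

theta = np.linspace(0.1, 6.2, 7)
z = np.exp(1j * theta)            # points on the circle |z| = 1

c_t1 = np.conj(z) - z             # coefficient of t_1 in the exponent
c_tm1 = np.conj(z)**-1 - z**-1    # coefficient of t_{-1}

# on |z| = 1 we have z^{-1} = conj(z), so the t_{-1} coefficient is minus
# the t_1 one: the exponent depends only on the combination t_1 - t_{-1}
print(np.allclose(c_tm1, -c_t1))        # True
print(np.allclose(c_t1, -2j * z.imag))  # True
```

This is exactly why the shift $t_1 \to t_1 + t_{-1}$ removes the $t_{-1}$-dependence from the scattering data.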

Concluding remarks
In the above derivation of nonlinear integrable equations, we needed some essential assumptions, the main one being the condition of unique solvability of the $\bar\partial$-problem (4.3), (4.6). But once a nonlinear equation is derived, these assumptions are no longer necessary: the nonlinear equation is given as the compatibility condition of a Lax pair. On the other hand, the existence of linear equations given by commutator identities always leads to nonlinear integrable equations, as was shown above.