Pseudo-Exponential-Type Solutions of Wave Equations Depending on Several Variables

Using matrix identities, we construct explicit pseudo-exponential-type solutions of linear Dirac, Loewner and Schr\"odinger equations depending on two variables and of nonlinear wave equations depending on three variables.


Introduction
Explicit solutions of linear and nonlinear equations of mathematical physics play an important role in theory and applications. The theory is well developed for the case of linear equations depending on one variable and of nonlinear integrable equations depending on two variables, and includes, in particular, various versions of the commutation methods, algebro-geometric methods and Bäcklund-Darboux transformations (see [11, 16-18, 20, 28, 36, 38, 46] and the references therein). In spite of various interesting results on the cases of more variables (see, e.g., [1, 5, 7-9, 27, 32, 33, 35, 42, 44, 48]), these cases are more complicated and contain more open problems. Pseudo-exponential-type potentials and solutions, that is, potentials and solutions which, roughly speaking, depend rationally on exponentials (see, e.g., [15, 19] for the definition of the term pseudo-exponential potential), are of special interest. When we deal with rational functions of matrix exponentials, rational potentials may appear as a special subcase.
In this paper we apply the S-nodes approach from [35] and the S-multinodes approach from [37] in order to construct explicitly pseudo-exponential-type potentials and solutions of several important equations of mathematical physics depending on several variables. S-multinodes were introduced in [37] as a certain generalization of the S-nodes of L.A. Sakhnovich [39-41], on the one hand, and of the commutative colligations of M.S. Livšic [24, 25], on the other hand, and were used in [37] in order to construct explicit solutions of the time-dependent Schrödinger equation.

We start (see Subsection 2.1) with the construction of the explicit solutions of the nonstationary Dirac equation. Then, in Subsection 2.2, we consider the well-known Loewner system, where L is an m × m matrix function (the case m = 2, with applications to the hodograph equation, was dealt with in the seminal paper [26] by C. Loewner). System (1.1) was studied in the interesting papers [29, 43]; see also the references therein. For the Loewner system, its transformations, generalizations and applications in mechanics, physics and soliton surfaces see, for instance, [10, 22, 26, 30, 31] and the references therein. Section 3 is dedicated to nonlinear integrable equations.

In our paper, Ψ_tx = (∂/∂x)(∂/∂t)Ψ = ∂²Ψ/(∂x∂t); σ(D) stands for the spectrum of D; [G, F] stands for the commutator GF − FG; and ⊗ stands for the Kronecker product. By diag{b_1, b_2, …, b_m} we denote the diagonal matrix with the entries b_1, b_2, … on the main diagonal.

First we note that in the GBDT version [36, 38] of the Bäcklund-Darboux transformation (BDT) the solution of the transformed equation is represented in the form Π*S^{-1}, where Π* is the solution of the initial equation. Here we construct solutions of (1.1) in the same form. Namely, we set Π as in (2.1), where 𝒞 is an N × 2 matrix whose columns are g_1^* and g_2^*, A_1 and A_2 are N × N matrices, and C is an n × N matrix (n, N ∈ N). We assume that the equalities (2.2) hold.
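A sketch of the pseudo-exponential ansatz (2.1) may be written as follows; the pairing of the matrices A_1, A_2 with the variables t, y is an assumption of this sketch, made by analogy with the three-variable formula (3.23) of Section 3:

```latex
% A sketch of the ansatz (2.1); the pairing of A_1, A_2 with t, y is
% assumed here by analogy with (3.23), where
% E_A(x,t,y) = \exp\{x A_1 + t A_2 + y A_3\}.
\Pi(t,y) = C\, E_A(t,y)\, \mathcal{C}, \qquad
E_A(t,y) = \exp\{t A_1 + y A_2\},
\qquad C \in \mathbb{C}^{n \times N},\quad \mathcal{C} \in \mathbb{C}^{N \times 2}.
```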
From (2.1) and (2.2), we easily see that relation (2.3) holds. The matrices A_1, A_2, R, ν_1, ν_2 and 𝒞 form a symmetric 2-node if A_1 and A_2 commute and the following identities are valid. It is immediate that the matrix function S given by (2.5) satisfies the equations ∂S/∂t = Πν_1Π^* and ∂S/∂y = Πν_2Π^*. These equations and equation (2.3) yield the proposition below.
where H has the form (1.1).
The important part of the problem is to find the cases where the conditions of Proposition 2.1 hold.
are the n_1 × n_1 and n_2 × n_2 diagonal blocks of the diagonal matrix D, n_1 + n_2 = n. We uniquely define R_11 and R_22 by the matrix identities. Then the conditions of Proposition 2.1 hold.
Thus, according to Proposition 2.1 and Example 2.2, each pair consisting of a vector g_1 and a diagonal matrix D (such that σ(D_k) ∩ σ(−D_k^*) = ∅ for k = 1, 2) determines a family of pseudo-exponential-type potentials and explicit solutions of (1.1).
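The spectral condition σ(D_k) ∩ σ(−D_k^*) = ∅ is exactly the condition under which a Sylvester-type identity D_k X + X D_k^* = Q has a unique solution X. A minimal numerical sketch (the matrix D1 and the right-hand side below are illustrative choices, not data from the paper):

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Illustrative diagonal matrix: sigma(D1) = {1+i, 2-0.5i} is disjoint
# from sigma(-D1^*) = {-1+i, -2-0.5i}, so the identity below has a
# unique solution R11.
D1 = np.diag([1.0 + 1.0j, 2.0 - 0.5j])
g = np.array([[1.0], [1.0j]])      # illustrative column vector
Q = g @ g.conj().T                 # Hermitian right-hand side

# Solve the matrix identity D1 @ R11 + R11 @ D1^* = Q.
R11 = solve_sylvester(D1, D1.conj().T, Q)

residual = np.linalg.norm(D1 @ R11 + R11 @ D1.conj().T - Q)
```

Since Q is Hermitian and the solution is unique, R11 comes out Hermitian as well, matching the requirement R = R^* appearing later in the paper.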

Loewner's system
Direct calculation proves the following proposition.

Proposition 2.3
Let the m × m and m × n (respectively) matrix functions Λ_1 and Λ_2 satisfy a linear differential equation. Then, at the points of invertibility of Λ_1, the corresponding matrix function satisfies the Loewner system. For some special kinds of similarity transformations of L, see also [26, formulas (5.10a) and (5.27)]. Pseudo-exponential-type Ψ and L are constructed in the next proposition.
Proposition 2.4 Introduce the m × m and m × n (respectively) matrix functions Λ_1 and Λ_2 by the equalities (2.9). Here ⊗ is the Kronecker product, e_k is the column vector given by e_k = {δ_jk}_{j=1}^{m}, and δ_jk is the Kronecker delta. Then, at the points of invertibility of Λ_1, the corresponding matrix functions Ψ and L are of pseudo-exponential type.

Proof. It is easy to see that Λ_1 and Λ_2 given by (2.9) satisfy the equation Λ_x = DΛ_y. Now, Proposition 2.4 follows from Proposition 2.3.
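For a diagonal D = diag{d_1, …, d_m}, the equation Λ_x = DΛ_y decouples row by row: row k of Λ depends on (x, y) only through y + d_k x, so exponential entries along these characteristics give explicit solutions. A small symbolic check (the specific entries below are illustrative, not the ones from (2.9)):

```python
import sympy as sp

x, y = sp.symbols('x y')
d1, d2 = 1, 2                      # illustrative eigenvalues of the diagonal matrix D
D = sp.diag(d1, d2)

# Row k depends on (x, y) only through y + d_k x, so the x-derivative
# of row k equals d_k times its y-derivative.
Lam = sp.Matrix([[sp.exp(y + d1*x), sp.exp(2*(y + d1*x))],
                 [sp.exp(y + d2*x), sp.exp(3*(y + d2*x))]])

# Residual of the equation Lambda_x = D * Lambda_y (should vanish).
diff_check = (Lam.diff(x) - D * Lam.diff(y)).applyfunc(sp.simplify)
```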
In a way similar to the construction of Λ_i in the proposition above, matrix functions Π satisfying (3.20) are constructed in (3.23)-(3.25).

Nonlinear integrable equations
Among the (2+1)-dimensional integrable equations, the Kadomtsev-Petviashvili, Davey-Stewartson (DS) and generalized nonlinear optics (also called N-wave) equations are, perhaps, the most actively studied systems. S-nodes were applied to the construction and study of the pseudo-exponential, rational and lump solutions of the Kadomtsev-Petviashvili equations in [35]. Here we investigate the remaining two of the three equations above.
Introduce Φ_1, Φ_2 and S via the relations and identities (3.6)-(3.9), where C_1 and C_2 are n × N matrices; A_1, A_2, R_1 = R_1^* and R_2 = R_2^* are N × N matrices; 𝒞_1 and 𝒞_2 are N × m_1 and N × m_2 matrices, respectively; and S_0 is an n × n matrix. It is immediate from (3.6)-(3.9) that Π = [Φ_1 Φ_2] and S satisfy relations (3.3) and the first two relations in (3.4). In order to prove the third equality in (3.4), we note that (3.10) holds; here we used (3.6) and the first identity in (3.9).
In a similar way we show that (3.11) holds. Equalities (3.8), (3.10) and (3.11) yield the last equality in (3.4). Hence, the conditions of Proposition 3.1 are valid, and so we have proved the following proposition.

Remark 3.3
It is easy to see that if σ(A_1) = σ(A_2) = {0}, then Φ_1, Φ_2 and S are rational matrix functions. Thus, in this case the solutions u, q_1 and q_2 of the DS I system constructed in Proposition 3.2 are rational matrix functions as well.
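The mechanism behind Remark 3.3 is elementary: σ(A) = {0} means A is nilpotent, so the exponential series for exp{xA} terminates and exp{xA} is a matrix polynomial in x; rational expressions in such exponentials therefore remain rational. A small symbolic check on the simplest nilpotent example:

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[0, 1], [0, 0]])   # sigma(A) = {0}: A is nilpotent, A**2 = 0

# The exponential series terminates after the linear term, so exp(xA)
# is a matrix *polynomial* in x (hence rational in x):
E = sp.eye(2) + x * A             # = exp(xA), since A**2 = 0

# Sanity check: E solves the defining ODE E' = A E with E(0) = I,
# which characterizes the matrix exponential.
ode_residual = E.diff(x) - A * E
```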
The compatibility condition w_tx = w_xt of the auxiliary systems

w_x = ±ijw_y + jVw,   w_t = 2ijw_yy ± 2jVw_y ± jQw,   (3.12)

is equivalent (in the case where the solution w is a non-degenerate matrix function) to the matrix DS II equation.

Open problem. Use the approach from Proposition 3.1 in order to construct explicit pseudo-exponential solutions of the matrix DS II equation.
We note that various results on DS II, including BDT results, are not quite analogous to the results on DS I (see, e.g., [20]).

Generalized nonlinear optics equation
The generalized nonlinear optics equation (GNOE) (3.17) was dealt with in [3, 47]. This system is a generalization of the well-known equation first studied in [45] (see also [2]). The GBDT version of the Bäcklund-Darboux transformation for GNOE (as well as the GBDT version for related equations) was constructed in [34]. When the initial system in the GBDT for GNOE from [34, Theorem 4] is trivial (i.e., ξ_0 ≡ 0), Theorem 4 takes the following form: let an n × m matrix function Π and an n × n matrix function S satisfy the corresponding equations (cf. (3.20)). Then the corresponding matrix function satisfies (at the points of invertibility of S) GNOE (3.17) and the reduction condition (3.18).
In order to construct pseudo-exponential-type solutions ξ, we consider matrix functions Π and S of the form (2.1) and (2.5), respectively, where E_A now depends on three variables and N = ml, l ∈ N. Namely, we set

Π(x, t, y) = C E_A(x, t, y) 𝒞,   E_A(x, t, y) = exp{xA_1 + tA_2 + yA_3},   (3.23)

where C is an n × N matrix, A is an l × l matrix, N = ml, ⊗ is the Kronecker product, c is an l × m matrix, e_k is a column vector and δ_ik is the Kronecker delta. It is immediate that the matrices A_k (k = 1, 2, 3) commute. Hence, the matrices A, C and c determine (via (3.23)-(3.25)) a matrix function Π satisfying (3.20). The matrix function S is given by (3.26), where the N × N matrix R (N = ml, R = R^*) satisfies the matrix identities (3.27) and (3.28). We note that, according to (3.25), the right-hand sides of the equalities in (3.27) and (3.28) are block diagonal matrices with l × l blocks. Therefore, we construct a block diagonal matrix R whose blocks R_kk are also l × l matrices:
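One way to see how Kronecker-product constructions of the kind in (3.23)-(3.25) yield commuting matrices A_k: if each A_k has the form D_k ⊗ A with D_k diagonal (an illustrative assumption for this sketch, not the exact formula (3.24)), then (D_j ⊗ A)(D_k ⊗ A) = D_j D_k ⊗ A², and diagonal matrices commute. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
m, l = 2, 3                       # block sizes; N = m*l
A = rng.standard_normal((l, l))   # an arbitrary l x l matrix A

# Illustrative assumption: A_k = D_k (kron) A with D_k diagonal m x m.
# Then A_j A_k = (D_j D_k) (kron) A^2 = A_k A_j, since diagonal
# matrices commute.
Dk = [np.diag(rng.standard_normal(m)) for _ in range(3)]
Ak = [np.kron(D, A) for D in Dk]

commute = all(np.allclose(Ak[i] @ Ak[j], Ak[j] @ Ak[i])
              for i in range(3) for j in range(3))
```

Commutativity is what allows exp{xA_1 + tA_2 + yA_3} to factor into a product of one-variable exponentials, which is the source of the pseudo-exponential structure of Π.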