Polynomial Bundles and Generalised Fourier Transforms for Integrable Equations on A.III-type Symmetric Spaces

A special class of integrable nonlinear differential equations related to A.III-type symmetric spaces and possessing additional reductions is analyzed via the inverse scattering method (ISM). Using the dressing method we construct two classes of soliton solutions associated with the Lax operator. Next, using the Wronskian relations, the mapping between the potential and the minimal sets of scattering data is constructed. Furthermore, completeness relations for the 'squared solutions' (generalized exponentials) are derived. Then expansions of the potential and its variation are obtained. This demonstrates that the interpretation of the inverse scattering method as a generalized Fourier transform holds true. Finally, the Hamiltonian structures of these generalized multi-component Heisenberg ferromagnetic (MHF) type integrable models on A.III-type symmetric spaces are briefly analyzed.


Introduction
It is well known [9] that two important classes of integrable nonlinear evolution equations (NLEE) are related to symmetric spaces: the multi-component nonlinear Schrödinger equations and their gauge-equivalent Heisenberg ferromagnet (HF) equations. Here S takes values in some simple Lie algebra. In the simplest case of the algebra su(2) the function S describes the spin of a one-dimensional ferromagnet [4]. The HF equation is integrable in the sense of the inverse scattering transform [9,45]. Its Lax representation has the form: Since the complete integrability of the HF equation was discovered, many attempts to construct its generalizations have been made [22]. The main goal of this paper is to derive and analyze a special class of NLEE obtained from HF-type equations by applying additional algebraic symmetries. A well-known method for constructing new integrable NLEE is based on the so-called reduction group, introduced in [32,33,34,35] and further developed in [13,30,28,29]. It led to the discovery of the 2-dimensional Toda field theories [34,36]. The latest developments of the method led to the discovery of new automorphic Lie algebras and their classification [30,28,29].
The hierarchies of integrable NLEE are generated by the so-called recursion (generating) operator. Such operators have been constructed and analyzed for a wide class of Lax operators L and turn out to generate not only the Lax representations, but also the hierarchy of NLEEs related to a given Lax operator L, their conservation laws and their hierarchy of Hamiltonian structures; see [7,40,45,20,21] and the numerous references therein. Such operators can also be viewed as the Lax operators L taken in the adjoint representation of the underlying Lie algebra g.
The (generating) recursion operator first appeared in the AKNS approach [1] as a tool to generate the class of all A-operators, as well as the NLEE related to a given Lax operator L(λ). Next, I.M. Gel'fand and L.A. Dickey [10] discovered that the class of these A-operators is contained in the diagonal of the resolvent of L. The kernel of the resolvent of L can be constructed explicitly in terms of the so-called fundamental analytic solutions (FAS), see [11,17,12].
The construction of the recursion operator for Lax operators whose explicit dependence on the spectral parameter λ is comparatively simple (say, linear or quadratic) was done long ago [1,27,15,16]. Furthermore, the completeness property of the set of eigenfunctions of the recursion operator (the 'squared solutions' of L) is of paramount importance. The completeness of the 'squared solutions' plays a fundamental role in proving that the inverse scattering method is, in fact, a nonlinear analogue of the Fourier transform, which allows one to linearize the NLEE. Using these relations one is able to derive all fundamental properties of the NLEE on a common basis.
In the symmetry approach to integrable equations the so-called 'formal recursion operator' plays a crucial role. Its general property is to map a symmetry of a given integrable NLEE into another symmetry [41]. If the NLEE under consideration possesses an infinite hierarchy of symmetries [26] or conservation laws [43] of arbitrarily high order, or can be linearized by a differential substitution [44], then it has a formal recursion operator [39,38,42]. For extensive and up-to-date surveys on the symmetry approach to integrability we refer to [2,37] and the references therein. We just note that in the symmetry approach the formal recursion operator does not depend on the boundary conditions imposed on the potential of the associated Lax operator. An alternative approach to constructing a recursion operator from a given Lax operator is presented by M. Gürses, A. Karasu and V. Sokolov in [24]. This method is purely algebraic; it is not yet related to the spectral theory of the Lax operator L and is not sensitive to the choice of the class of admissible potentials of L.
The problem of deriving recursion operators becomes more difficult when we impose additional reductions on L. If the additional reduction is compatible with L being linear or quadratic in λ, the construction of the recursion operators is not a difficult task (see [20]; an alternative construction of Λ is given in [24,23]). The effect of the Z_n-reduction is as follows: i) the relevant 'squared solutions' have analyticity properties in sectors of the complex λ-plane with angles π/n; ii) the grading of the Lie algebra g ≡ ⊕_{k=0}^{n-1} g^{(k)} is more involved and, as a consequence, the recursion operator factorizes into a product of n factors Λ = ∏_{k=0}^{n-1} Λ_k, where each factor Λ_k maps Λ_k : g^{(k-1)} → g^{(k)}.
The applications of differential-geometric and Lie-algebraic methods to soliton-type equations led to the discovery of a close relationship between the multi-component (matrix) integrable equations (of nonlinear Schrödinger type) and the symmetric and homogeneous spaces [9]. Later on, this approach was extended to other types of multi-component integrable models, like the derivative NLS, Korteweg-de Vries and modified Korteweg-de Vries, N-wave, Davey-Stewartson, and Kadomtsev-Petviashvili equations [3,8].
The purpose of the present paper is to derive nonlinear evolution equations generalizing Heisenberg's model and to study some properties of their Lax operators. We focus our attention on equations whose Lax representation is related to su(3).
This paper is a natural continuation of our previous papers [18,19,14]. In Section 2 below we give the necessary preliminaries. We also study the Z_2-reductions of the generalized Heisenberg ferromagnets (Z_2-HF) related to symmetric spaces and outline the spectral properties of the relevant Lax operator and the construction of its fundamental analytic solutions. Section 3 is dedicated to the soliton solutions of the Z_2-HF models. In their derivation we make use of the dressing method [40,49,50,51]. Due to the additional reductions the Lax operator L may possess two types of discrete eigenvalues: generic ones, coming in quadruplets ±λ_k, ±λ_k^*, and purely imaginary ones, coming as doublets ±iκ_j. Therefore we have two types of soliton solutions, which we call quadruplet and doublet solitons respectively. We outline the purely algebraic construction for deriving the N-soliton solutions of both types and provide the explicit expressions for the one-soliton solutions. Section 4 is devoted to the derivation of the recursion operators Λ. Here we first derive Λ using the Gürses-Karasu-Sokolov (GKS) method [24]. Next we analyze the Wronskian relations as a basic tool of the inverse scattering method [5,6]. From them there naturally arise the 'squared solutions', which also play a fundamental role in the analysis of the mapping between the set of admissible potentials and the minimal sets of scattering data. The 'squared solutions' can also be viewed as eigenfunctions of the recursion operators Λ_±, which can be used as an alternative definition of the recursion operators. One can check that both approaches lead to equivalent expressions for Λ_±. Section 5 is dedicated to the spectral properties of the recursion operators. There we start by proving the completeness relation for the 'squared solutions'.
Next we use this relation to derive the expansions of ad_{L_1}^{-1} L_{1,x}, ad_{L_1}^{-1} L_{2,x} and ad_{L_1}^{-1} δL_1 over the 'squared solutions'. In the last Section 6 these expansions are treated as generalized Fourier transforms, allowing one to linearize the NLEE. We also demonstrate that all fundamental properties of the class of NLEE can be formulated through the recursion operators. These include not only the description of the class of NLEE related to L, but also their integrals of motion and the hierarchy of Hamiltonian structures. We end with a discussion and conclusions. In Appendix A we collect some useful formulae for the Gauss factors of the scattering matrix and explicitly derive the operator ad_{L_1}^{-1}.

Basic notations and general form of equations
The main object of our paper is the following 2-component system of NLEEs: where the functions u : R × R → C and v : R × R → C are assumed to be infinitely smooth and to satisfy the following boundary conditions Moreover, u and v are not functionally independent, but obey the constraint |u|^2 + |v|^2 = 1.
The system (2.1) represents a reduction of the generalized Heisenberg ferromagnet equations related to the symmetric space SU(3)/S(U(1) × U(2)), see [18,19]. It possesses a zero curvature representation with Lax operators of the form where λ is a spectral parameter and the matrix coefficients (potentials) read

The specific structure of the matrices above results from the simultaneous action of two Z_2 reductions on the generic Lax operators L and A, namely where the operation ˘ is defined as follows The latter reduction represents Cartan's involutive automorphism [25,31] involved in the definition of the symmetric space SU(3)/S(U(1) × U(2)); that is, it induces a Z_2-grading in the Lie algebra sl(3, C) It is evident that the grading condition is fulfilled. This grading will be used widely in our further considerations. Due to the existence of the grading (2.9), any function X(x, t, λ) with values in sl(3) can be represented in the form: Each component X_{0,1}, in its turn, splits into a term commuting with L_1 and its orthogonal complement, namely The brackets above stand for the Killing form, defined as: It is evident that the functions κ_0 and κ_1 can be expressed as follows One can easily check that the norms of L_1 and L_2 are and hence κ_0 and κ_1 are given by As we shall see, in some cases it proves more convenient to deal with Lax operators gauge equivalent to (2.3), (2.4). The gauge transformation, which brings L_1 and A_2 to their diagonal form, is given by the unitary matrix Further on we shall also use the explicit form of U_0, given by the following formula
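To make the splitting into an L_1-commuting part and its orthogonal complement concrete, here is a minimal numerical sketch (ours, not from the paper). It assumes L_1 is Hermitian and works in the diagonal gauge L_1 = J = diag(1, 0, −1); the Killing form is modeled by the trace form tr(XY), to which it is proportional on sl(3).

```python
import numpy as np

def split_commuting(L1, X):
    """Split X = X_par + X_perp with [L1, X_par] = 0 and
    tr(X_par X_perp) = 0 (orthogonality w.r.t. the trace form)."""
    w, U = np.linalg.eigh(L1)                  # L1 assumed Hermitian
    Y = U.conj().T @ X @ U                     # pass to the eigenbasis of L1
    same = np.isclose(w[:, None], w[None, :])  # entries with equal eigenvalues
    Y_par = np.where(same, Y, 0)               # centralizer (commuting) part
    Y_perp = Y - Y_par                         # orthogonal complement
    back = lambda Z: U @ Z @ U.conj().T
    return back(Y_par), back(Y_perp)

rng = np.random.default_rng(0)
L1 = np.diag([1.0, 0.0, -1.0])                 # diagonal form J of L1
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
X = X - np.trace(X) / 3 * np.eye(3)            # an element of sl(3)
X_par, X_perp = split_commuting(L1, X)

print(np.allclose(L1 @ X_par - X_par @ L1, 0))  # [L1, X_par] = 0
print(np.isclose(np.trace(X_par @ X_perp), 0))  # the two parts are orthogonal
```

In the eigenbasis the commuting part occupies the entries with equal eigenvalues of L_1, so its trace pairing with the complementary entries vanishes identically.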

Direct scattering problem for the Lax operator (2.3)
Here we provide a short summary of basic results and notions concerning the spectral theory of the Lax operator L and the direct scattering problem, and introduce the so-called fundamental analytic solutions. The spectral properties of L depend on the choice of the class of admissible potentials, i.e. the potentials are subject to certain boundary conditions. In the case of boundary conditions of type (2.2), the continuous spectrum of L coincides with the real axis in the complex λ-plane, see [19]. In order to formulate the direct scattering problem for L, one needs to introduce fundamental sets of solutions ψ of the auxiliary (spectral) linear system L(λ)ψ(x, t, λ) ≡ i∂_x ψ(x, t, λ) + λL_1(x, t)ψ(x, t, λ) = 0. (2.16) Since L(λ) and A(λ) commute, the fundamental solutions ψ also satisfy the equation The matrix-valued function is called the dispersion law of the nonlinear equation. A special type of fundamental solutions are the so-called Jost solutions ψ_±, which are normalized as follows Further on we shall simply call them fundamental solutions for short.
diagonalizes the asymptotics L_{1,±} = lim_{x→±∞} L_1(x, t). Due to (2.18) one can show that the asymptotic behavior of ψ_± does not depend on t, i.e. the definition is correct. The transition matrix is called the scattering matrix. It can easily be deduced from relation (2.17) that the scattering matrix evolves with time according to a linear differential equation, which is integrated straight away to give Since the time parameter t does not play a significant role in our further considerations, we shall omit it (it will be fixed).
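The spectral problem (2.16) is easy to probe numerically. The sketch below (our illustration, with a hypothetical sample potential L_1(x) tending to J = diag(1, 0, −1) as |x| → ∞) propagates ψ_x = iλL_1ψ by unimodular midpoint steps; since L_1 is traceless and Hermitian and λ is real, the resulting solution stays unimodular and unitary, mirroring properties of the Jost solutions.

```python
import numpy as np
from scipy.linalg import expm

J = np.diag([1.0, 0.0, -1.0])
V = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)  # Hermitian, traceless

def L1(x):
    # hypothetical smooth potential with L1 -> J as |x| -> infinity
    return J + 0.3 / np.cosh(x) * V

lam = 1.5                                  # real lambda: continuous spectrum
xs = np.linspace(-15.0, 15.0, 1501)
psi = np.eye(3, dtype=complex)             # Jost-type normalization at the left end
for a, b in zip(xs[:-1], xs[1:]):
    # each factor solves psi_x = i*lam*L1*psi on a small interval
    psi = expm(1j * lam * (b - a) * L1((a + b) / 2)) @ psi

print(abs(np.linalg.det(psi) - 1.0) < 1e-8)        # tr L1 = 0  =>  det psi = 1
print(np.allclose(psi.conj().T @ psi, np.eye(3)))  # i*lam*L1 anti-Hermitian => psi unitary
```

Each exponential factor has unit determinant and is unitary, so both properties hold to machine precision regardless of the step size.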
The action of the Z_2-reductions (2.7), (2.8) imposes the following restrictions on the Jost solutions and the scattering matrix. From now on we will assume that φ_+ = φ_- = 0. Hence the asymptotic values of L_1 and g become equal to each other: The complex λ-plane is separated by the continuous spectrum of L (the real axis) into two regions of analyticity: the upper half-plane C_+ and the lower one C_-. We are going to sketch two ways of constructing fundamental analytic solutions (FAS) χ^+(x, λ) and χ^-(x, λ), which are analytic functions in C_+ and C_- respectively; see [19] for more detailed explanations. The first way is based on introducing auxiliary functions which satisfy the auxiliary system: with the boundary conditions lim_{x→±∞} η_±(x, λ) = 𝟙. Equivalently, η_±(x, λ) can be regarded as solutions of the following Volterra-type integral equations: Then we define ξ^+(x, λ) as a solution of the following set of integral equations: and ξ^-(x, λ) as a solution of It is easy to check that ξ^+ and ξ^- have the analyticity properties in C_+ and C_- respectively, due to the appropriate choice of the lower integration limits. The initial fundamental analytic solutions χ^±(x, λ) are obtained from ξ^±(x, λ) by applying the transformation: Another way to construct the FAS is by multiplying the Jost solutions by appropriately chosen matrices. It turns out that these matrix factors are involved in the Gauss decomposition of the scattering matrix T(λ). Here is a list of all Gauss factors in the parametrization we are going to use further in our exposition: where m_k^+(λ) (resp. m_k^-(λ)) are the principal minors of T(λ) of order k = 1, 2. Then χ^+ and χ^- are expressed as follows The relation above can be rewritten in the following manner, using (2.22): Thus the FAS can be regarded as solutions of a local Riemann-Hilbert problem.
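The parametrization of the Gauss factors through principal minors rests on a standard linear-algebra fact: in an LDU (Gauss) decomposition the diagonal factor consists of ratios of leading principal minors. A small self-contained check (ours, with a generic sample matrix standing in for T(λ)):

```python
import numpy as np

def ldu(T):
    """LDU (Gauss) decomposition T = L @ D @ U without pivoting:
    L unit lower-triangular, D diagonal, U unit upper-triangular.
    Valid when all leading principal minors of T are nonzero."""
    n = T.shape[0]
    A = T.astype(complex).copy()
    L = np.eye(n, dtype=complex)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / A[k, k]
            A[i, :] -= L[i, k] * A[k, :]   # eliminate below the pivot
    D = np.diag(np.diag(A))
    U = np.diag(1 / np.diag(A)) @ A        # rescale rows to unit diagonal
    return L, D, U

T = np.array([[2, 1, 0], [1, 3, 1], [0, 1, 2]], dtype=complex)
L, D, U = ldu(T)
m1 = T[0, 0]                               # principal minor of order 1
m2 = np.linalg.det(T[:2, :2])              # principal minor of order 2
# D = diag(m1, m2/m1, det(T)/m2): ratios of leading principal minors
print(np.allclose(L @ D @ U, T))
print(np.allclose(np.diag(D), [m1, m2 / m1, np.linalg.det(T) / m2]))
```

This is the same mechanism behind the factors m_1^±, m_2^± above; the triangular factors play the role of the s^±, r^± matrices.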
The established interrelation between the inverse scattering method and the local Riemann-Hilbert problem proves useful for constructing solutions of NLEEs. It can be shown that the reduction conditions (2.20), (2.21) and equation (2.23) lead to the following requirements on the Gauss factors According to (2.24) these relations can also be written as: (s^+(λ^*))^† = −s^-(λ), (r^+(λ^*))^† = −r^-(λ). Finally, combining all this information, we see that the FAS obey the symmetry conditions Remark 1. The Riemann-Hilbert problem allows singular solutions as well. The simplest types of singularities are simple poles and zeroes of the FAS; generically they correspond to discrete eigenvalues of the Lax operator L. Due to the reduction symmetries the discrete eigenvalues must form orbits of the reduction group. Generic orbits contain quadruplets: if µ is an eigenvalue, then −µ and ±µ^* are eigenvalues too. However, we can have degenerate orbits as well: if the eigenvalue lies on the imaginary axis, we have doublets of eigenvalues.

Dressing method and soliton solutions
In the present section we derive and analyze the 1-soliton solutions of the 2-component system (2.1). To this end we apply the dressing method proposed in [49] and developed in [50,51,34,35]. In the previous section (see Remark 1) we established that the operator L may possess two types of discrete eigenvalues: generic ones, coming in quadruplets, as well as purely imaginary ones, coming as doublets ±iκ_j. Therefore we will have two types of soliton solutions: generic (or quadruplet) ones and doublet ones (kinks).

Rational dressing
The idea of the dressing method is the following. Firstly, one starts from known solutions u_0(x, t), v_0(x, t) of (2.1) and a solution ψ_0(x, t, λ) of the auxiliary linear problems, where L_1^{(0)} and A_2^{(0)} have the same form as given in (2.5), (2.6) but with u and v replaced by u_0 and v_0 respectively. Then one constructs another function ψ_1(x, t, λ) = Φ(x, t, λ)ψ_0(x, t, λ), which is assumed to be a fundamental solution of a similar system of equations The matrices L_1^{(1)} and A_2^{(1)} have the same structure as L_1^{(0)} and A_2^{(0)}, but instead of u_0(x, t) and v_0(x, t) one plugs in some new solutions u_1(x, t) and v_1(x, t) of (2.1), which are to be found.
The dressing factor Φ(x, t, λ) is assumed to be regular at |λ| → 0, ∞. Due to the reduction conditions (2.7), (2.8) the dressing factor has the symmetries: From (3.1) and (3.2) it follows that Φ(x, t, λ) is a solution of the following equations: Since Φ(x, t, λ) is regular at |λ| → ∞, from (3.5) one can derive the following relation between L This equation will play a fundamental role in our further considerations, since it allows one to generate a new solution of (2.1) from a given one. If we consider a λ-independent dressing factor, then from (3.5) and (3.6) we deduce that Φ does not depend on x and t either. Thus we have a simple rotation, related to the U(2) symmetry of the model. In order to get something more nontrivial we shall assume that Φ(x, t, λ) is a rational function of λ with a minimal number of simple poles. From the reduction condition (3.3) it follows that if µ is a pole of Φ, then −µ is a pole too. It is natural to assume that these poles do not overlap with the continuous spectrum of L, i.e. Im µ ≠ 0. On the other hand, (3.4) leads to the conclusion that Φ^{-1} has poles at µ^* and −µ^*.
We first consider the generic case µ ≠ −µ^*, when the poles of Φ do not coincide with the poles of Φ^{-1}, and choose the following ansatz for the dressing factor and its inverse: From (3.8) it follows that the matrix M satisfies the following equation: If the residue M were an invertible matrix (rank M = 3), it would be proportional to 𝟙 and the dressing would be trivial. In order to obtain a non-trivial dressing, M ought to be singular. For our purposes it suffices to consider the case rank M = 1. Then the residue can be represented as a product M = |n⟩⟨m| of a vector |n⟩ = (n_1, n_2, n_3)^T and a co-vector ⟨m| = (m_1^*, m_2^*, m_3^*). Substituting this representation into (3.9) leads to a linear equation for the vector |n⟩: Introduce the functions Then the solution of (3.10) reads:
The vector |m⟩ is an element of the projective space CP^2. Indeed, it is easily seen from the above that a rescaling |m⟩ → f|m⟩ with any nonzero complex f does not affect the matrix M. Taking into account the ansatz (3.8), one can rewrite (3.7) as: Notice that the dressing preserves the matrix structure of the Lax operator L, since the dressing factor 𝟙 + M + CMC is a block-diagonal matrix. From this equality and from (3.12) it follows that the potentials u_1, v_1 can be expressed through u_0, v_0 in the following way: where It is easy to verify that the matrix M is unitary for a generic choice of µ and a nonzero vector |m⟩. Thus we have expressed all needed quantities in terms of |m⟩. It remains to find |m⟩ itself. For that purpose we rewrite equations (3.5), (3.6) in the form: (3.14) It is obviously satisfied at λ = 0. From (3.14) it follows that the residues at λ = ±µ^*, λ = ±µ should vanish. It suffices to require that the residue at λ = µ^* vanishes (the vanishing of the other residues follows from the symmetry conditions (3.3)). Computing the residue at λ = µ^* and taking into account equation (3.9), we get i.e. |m(x, t)⟩ is an eigenfunction of the dressed Lax operator. A general solution of equations (3.15), (3.16) is where f(x, t) is an arbitrary non-vanishing complex function and |m_0⟩ ∈ C^3 is a non-zero, but otherwise arbitrary, complex vector. Without any loss of generality we can set f(x, t) = 1 (see the discussion above about the projective nature of the vector |m⟩). Thus we have proved the following proposition: Proposition 1. Let u_0(x, t), v_0(x, t) form a solution of the system (2.1) and let ψ_0(x, t, λ) be a simultaneous fundamental solution of (3.1). Let also Re µ ≠ 0, Im µ ≠ 0 and let |m_0⟩ ∈ C^3 be a complex vector. Then u_1(x, t), v_1(x, t), given by (3.13) with m_k being the components of the vector |m⟩ = ψ_0(x, t, µ^*)|m_0⟩, satisfy (2.1) as well.
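The pole mechanism behind the ansatz can be illustrated in the simplest rank-one setting. The sketch below (ours; a generic Zakharov-Shabat-type factor with a single pole, ignoring the additional ±µ symmetry of the present model) verifies numerically that 𝟙 + (µ − µ^*)P/(λ − µ), with P a Hermitian rank-one projector, is inverted by moving the pole to µ^*:

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
P = np.outer(v, v.conj()) / (v.conj() @ v)    # rank-1 Hermitian projector, P @ P = P

mu = 0.7 + 1.3j                               # pole off the real axis
I = np.eye(3)

def Phi(lam):        # dressing factor with a simple pole at mu
    return I + (mu - mu.conjugate()) / (lam - mu) * P

def Phi_inv(lam):    # its inverse has the pole at mu^*
    return I + (mu.conjugate() - mu) / (lam - mu.conjugate()) * P

lam = 2.1 - 0.4j
print(np.allclose(Phi(lam) @ Phi_inv(lam), I))   # True: the residues cancel
```

Expanding the product and using P² = P, the λ-dependent coefficient vanishes identically, which is the same cancellation that equation (3.9) enforces for the residue M.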
The corresponding solution ψ_1(x, t, λ) of the associated linear system (3.2) is given by The new solution u_1(x, t), v_1(x, t) of equations (2.1) and the fundamental solution ψ_1(x, t, λ) of the corresponding linear system are parameterized by a complex number µ and a complex vector |m_0⟩ ∈ C^3.
Let us now consider the special case µ = iκ. Then the sequence of steps needed to determine Φ changes slightly. Indeed, suppose we have a dressing factor of the form: Due to the reduction (3.4) its inverse has the same poles as Φ itself, and therefore the identity ΦΦ^{-1} = 𝟙 already contains second-order poles. The vanishing of the second- and first-order poles respectively leads to the following algebraic conditions: P is a degenerate matrix, which means it can be presented as P = |q⟩⟨p|. Then the former algebraic constraint transforms into a quadratic relation for the vector |p⟩ Equation (3.18) reduces to a linear system for |q⟩, where σ is some auxiliary real function to be found. The above linear system allows one to express |q⟩ through |p⟩ and σ In order to find |p⟩ and σ we consider again the partial differential equations (3.14). The requirement that the second- and first-order poles in (3.14) vanish identically yields differential relations for |p⟩ and σ: It is not hard to verify that |p(x, t)⟩ and σ(x, t) are expressed through the initial solution ψ_0(x, t, λ) as follows: where |p_0⟩ and σ_0 are constants of integration and ψ̇_0 := ∂_λ ψ_0. After substituting (3.21) into (3.19) and taking into account the first Z_2 reduction, we see that the components of the polarization vector |p_0⟩ are no longer independent but satisfy the constraint: (3.23) Thus, to calculate the soliton solution itself, one just substitutes the result for |p⟩ and ⟨p| into M and uses formula (3.12). Now we are able to formulate a result quite similar to Proposition 1: Proposition 2. Let the functions u_0(x, t), v_0(x, t) satisfy (2.1) and let ψ_0(x, t, λ) be a fundamental solution of (3.1). Let also κ ∈ R\{0}, σ_0 ∈ R, and let |p_0⟩ be a non-vanishing complex vector satisfying (3.23). Then u_1(x, t), v_1(x, t), given by where p_k are the components of the vector |p⟩ = ψ_0(x, t, −iκ)|p_0⟩, is a solution of the system (2.1) too.
The solution ψ_1(x, t, λ) of the linear system (3.2) is given by ψ_1 = Φψ_0, where Φ is defined by As can be seen, the new solution is parametrized by the polarization vector |p_0⟩, the real number σ_0 and the pole iκ.
One can apply the dressing procedure repeatedly to build a sequence of exact solutions of the system, parametrized by a set of complex constants µ_k and constant vectors |m_k⟩ ∈ CP^2 (resp. a set of real constants κ_j, σ_{0,j} and complex three-vectors |p_j⟩ in the case of purely imaginary poles).

One-soliton solutions
Let us apply the dressing procedure to the trivial solution u_0 = 0, v_0 = 1 of equation (2.1). In this case We consider the generic case first, i.e. we have 4 distinct poles of Φ and Φ^{-1} forming a 'quadruplet' {µ, −µ, µ^*, −µ^*}. It is convenient to decompose the constant complex vector |m_0⟩ according to the eigenspaces of the endomorphism ψ_0 (3.24): where α, β, γ are arbitrary complex constants. If the vector |m_0⟩ is proportional to one of the eigenvectors of the endomorphism ψ_0, then the corresponding matrix M does not depend on the variables x and t (due to the projective nature of the vector |m⟩) and the corresponding solution (3.13) is a simple unitary rotation of the constant solution u_0 = 0, v_0 = 1.
Elementary solitons correspond to vectors |m_0⟩ belonging to essentially two-dimensional invariant subspaces of the endomorphism ψ_0. In other words, elementary soliton solutions correspond to choices of the vector |m_0⟩ with exactly one vanishing coefficient in the expansion (3.25). Let us consider each of these three cases in more detail.
(i) In this case the solution does not depend on the variable t. It follows from (3.13) that the solution can be written in the following form The function v_1 can be written in the form v_1 = exp(4iθ(x)), where It is easy to check that u = 0, v = exp(if(x)) is an exact solution of (2.1) for any differentiable function f(x). This resembles the case of the three-wave equation [48], where one wave of an arbitrary shape is an exact solution of the system while the two other waves are identically zero. (ii) In this case, from (3.13) it follows that the solution can be written in the form where (iii) The solution now can be obtained from the solution in case (ii) by changing α → β and x → −x.
In case (ii) the solution (3.27) is a soliton of width 1/κ moving with velocity 2ω. The corresponding soliton in case (iii) moves with velocity −2ω.
In the generic case, when all three constants are non-zero, the solution (3.13) represents a nonlinear interaction of the solitons described above. For κ > 0 it describes the decay of an unstable time-independent soliton from case (i) into two solitons corresponding to cases (ii) and (iii) (see Fig. 2). For κ < 0, the solution describes the fusion of two colliding solitons into a stationary one.
Let us now consider the case of imaginary poles, i.e. µ = iκ. There exist two essentially different cases. 1. Suppose p_{0,2} = γ = 0. Then from (3.23) it follows that |p_{0,1}| = |p_{0,3}|. It suffices to pick p_{0,1} = 1, and for the third component we have p_{0,3} = exp(iϕ), ϕ ∈ R. In this case the 1-soliton solution is stationary: The function v can be presented as A plot of the solution (3.28) is shown in Fig. 3. In particular, when ϕ = 0, π, i.e. p_{0,1} = ±p_{0,3}, we have for the doublet solution: It is evident that the presence of the constant σ_0 here is essential, since otherwise the solution coincides with the vacuum. 2. Now let us assume p_{0,2} ≠ 0. For simplicity we fix p_{0,2} = 1. Then the norms of p_{0,1} and p_{0,3} are interrelated through This is why it proves convenient to parametrize them as follows: The 1-soliton solution reads: The above solution can be significantly simplified if one assumes that p_{0,3}/p_{0,1} ∈ R. In these boundary cases the doublet solution reads: when p_{0,3}/p_{0,1} < 0 (φ = π/2). A plot of the solution (3.29) is depicted in Fig. 4.

Multisoliton solutions
As mentioned above, the dressing procedure can be applied several times in succession. Thus, after dressing the 1-soliton solution one derives a 2-soliton solution; after dressing the 2-soliton solution one obtains a 3-soliton solution, and so on. Of course, in doing this one is allowed to apply either of the dressing factors (3.8) and (3.17). Therefore the multisoliton solution will be a certain combination of quadruplet and doublet solitons. Another way of deriving the multisoliton solution consists in using a dressing factor with multiple poles. For example, if one wants to generate N quadruplet solitons one should use a dressing factor of the form: which is evidently compatible with the reduction condition (3.3). Due to (3.4) its inverse has the form: In order to determine the residues of Φ one follows the same steps as in the case of a 2-pole dressing factor. Firstly, the identity ΦΦ^{-1} = 𝟙 implies that the residues of Φ and Φ^{-1} satisfy certain algebraic relations, namely: which ensure the vanishing of the residue of ΦΦ^{-1} at λ = µ_k. Of course, due to the Z_2 reductions we have an additional set of algebraic relations, obtained from (3.30) by hermitian conjugation.
As discussed before, in order to obtain a nontrivial dressing the residues must be degenerate matrices. Thus one introduces the factorization M_k = |n_k⟩⟨m_k| and then reduces (3.30) to a linear system for |n_k⟩ By solving it one can express the vectors |n_l⟩, l = 1, ..., N, through all the |m_k⟩ and in that way determine the whole dressing factor in terms of the |m_k⟩. This step contains the main technical difficulty of the whole scheme. The vectors |m_k⟩ can be found from the natural requirement that the poles in the differential equations (3.14) vanish. The result reads Thus we have proved that, as in the 2-pole case, the dressing factor is determined once one knows the seed solution ψ_0(x, t, λ). The quadruplet N-soliton solution itself can be derived through the following formula From all said above it follows that the algorithm for obtaining the N-soliton solution can be presented symbolically as follows.
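The algebraic step "solve a linear system for the |n_k⟩ in terms of the |m_k⟩" can be sketched in the simplest multi-pole setting. The code below is our generic illustration with simple poles and the reduction Φ^{-1}(λ) = Φ^†(λ^*), without the paper's additional ±µ pairing; the coefficient kernel is the standard one for this ansatz and stands in for the elided linear system.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 2, 3
mus = np.array([0.8 + 1.1j, -0.5 + 0.7j])    # poles mu_k, Im mu_k != 0
# columns of m are the vectors |m_k>:
m = rng.standard_normal((d, N)) + 1j * rng.standard_normal((d, N))

# Vanishing of the residues of Phi * Phi^{-1} gives, for each k,
#   |m_k> + sum_l |n_l> <m_l|m_k> / (mu_k^* - mu_l) = 0.
A = np.empty((N, N), dtype=complex)
for l in range(N):
    for k in range(N):
        A[l, k] = (m[:, l].conj() @ m[:, k]) / (mus[k].conjugate() - mus[l])
n = -m @ np.linalg.inv(A)                    # columns of n are the vectors |n_l>

def Phi(lam):
    return np.eye(d) + sum(np.outer(n[:, k], m[:, k].conj()) / (lam - mus[k])
                           for k in range(N))

def Phi_inv(lam):                            # Phi^{-1}(lam) = Phi(lam^*)^dagger here
    return Phi(lam.conjugate()).conj().T

lam = 1.9 - 0.3j
print(np.allclose(Phi(lam) @ Phi_inv(lam), np.eye(d)))   # True
```

Once the residue conditions hold at every pole, ΦΦ^{-1} − 𝟙 is a rational function of λ vanishing at infinity with no poles, hence it is identically zero; this is the same argument used for (3.30).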
Similarly, one is able to generate a sequence of N doublet solitons by making use of the following factor Following the same steps as in the 1-soliton case, one can verify that the vectors |q_l⟩ involved in the decomposition P_l = |q_l⟩⟨p_l| satisfy the linear system: On the other hand, the vectors |p_l⟩ and the real functions σ_l are expressed in terms of the initial solution ψ_0 and its derivative with respect to λ as follows: where |p_{l,0}⟩ and σ_{l,0}, l = 1, ..., N, are constants of integration that parametrize the solution.
The components of |p_l⟩ are not independent but satisfy In order to derive the doublet N-soliton solution one uses: In this case the diagram describing the algorithm for obtaining the multisoliton solution reads It is clear that, apart from the 'pure' multisolitons discussed above, there is a 'mixed' type of multisolitons, i.e. multisolitons containing both species of 1-soliton solutions. In order to obtain such a solution one needs to use a dressing factor possessing both types of poles.

Generalized Fourier transform
In this section we aim to develop the generalized Fourier transform interpretation of the inverse scattering method for the Lax operator (2.3). To this end we use some basic notions such as Wronskian relations, 'squared solutions', recursion operators, etc. The 'squared solutions', also known as adjoint solutions, are generalizations of the exponential functions of usual Fourier analysis: they form a complete system. They occur naturally in the Wronskian relations, which interrelate the scattering data and the potential L_1(x) (as well as their variations) in a form resembling a scalar product, called a skew-scalar product. The Wronskian relations allow one to express the expansion coefficients of L_1(x) and its variation over these generalized exponentials in terms of the scattering data and its variations, as we shall see later on. Within this framework the role of the operator i d/dx, whose eigenfunctions are exp(ikx), is played by the recursion operators. We shall present two substantially different ways of obtaining the recursion operators and compare them.

Recursion operators and integrable hierarchy
Recursion operators appear in many situations in the theory of integrable systems [1,20,41]. Their existence is tightly related to the fact that each integrable NLEE belongs to a whole family of infinitely many integrable NLEEs associated with the same Lax operator L. One can also assign to this family an infinite set of symmetries and integrals of motion (Hamiltonians). In this subsection we derive the recursion operators by analyzing the interrelations between members of the hierarchy of NLEEs. In doing this we follow the ideas of [24,46].
The hierarchy of integrable NLEEs associated with L is generated through the zero curvature representation, which we rewrite in the original Lax form Each choice of V corresponds to an individual member of the hierarchy. Following the ideas of [24], we interrelate two adjacent flows V and Ṽ through the equality where k(λ) is a scalar function and the remainder B has the structure of the Lax operator A(λ) All quantities above must be invariant with respect to the action of the reductions (2.7), (2.8). This is why we pick k(λ) = λ^2, while the operator B obeys the condition This means that the coefficients of B are hermitian matrices satisfying B_1 ∈ sl_1(3), B_2 ∈ sl_0(3). After substituting (4.1) into the Lax representation one obtains the following relation where τ is the evolution parameter of the flow Ṽ. Then the recursion operator R is defined as the mapping In order to find it we need to solve (4.2) and thus calculate the remainder B.
Since relation (4.2) holds identically with respect to λ, it splits into the following set of recurrence relations: In accordance with our conventions explained in the Preliminaries (see formulae (2.10)-(2.12)), the coefficients B 1 and B 2 split into parts B ⊥ 1 and B ⊥ 2 not commuting with L 1 and their orthogonal (L 1 -commuting) complements. After substituting (4.6) into (4.3) it becomes possible to express B ⊥ 2 through L 1,t as follows, where ad −1 L 1 acts on the orthogonal parts only, i.e. the kernel of ad L 1 is factored out. In this case one can prove that the representation below holds true. The coefficient b 2 is determined from relation (4.4). Indeed, after plugging (4.6) into (4.4) we get the relation below. Its L 1 -commuting part is separated by taking · , L 2 on both sides of the equation to give the expression where we have denoted by ∂ −1 x the integral x ±∞ dy. On the other hand, by projecting (4.4) with π = ad −1 L 1 ad L 1 one extracts its orthogonal part. Taking into account (4.9) one can find the following recurrence relation. Formally speaking, one should write Λ ± 2 instead of Λ 2 for the integro-differential operator introduced above, since the lower integration limit in ∂ −1 x differs. This choice of signs, however, is unessential for the considerations in the current subsection and this is why we prefer the simplified notation.
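The inversion of ad L 1 on the orthogonal complement of its kernel can be illustrated numerically. A minimal sketch, in which the x-dependent L 1 is replaced by its constant diagonal asymptotic J = diag(1, 0, −1) (an assumption made only for this illustration):

```python
import numpy as np

# Model case: L1 replaced by its diagonal asymptotic J = diag(1, 0, -1).
J = np.diag([1.0, 0.0, -1.0])

def ad_J(X):
    """ad_J X = [J, X]; acts entrywise as (J_i - J_j) X_ij."""
    return J @ X - X @ J

def ad_J_inv(X):
    """Inverse of ad_J on the orthogonal complement of its kernel,
    i.e. on the off-diagonal part where J_i != J_j."""
    d = np.diag(J)
    D = d[:, None] - d[None, :]            # matrix of eigenvalue differences
    Y = np.zeros_like(X)
    mask = D != 0
    Y[mask] = X[mask] / D[mask]            # divide out the nonzero eigenvalues
    return Y

# An off-diagonal (kernel-free) test matrix
X = np.array([[0.0, 1.0, 2.0],
              [3.0, 0.0, 4.0],
              [5.0, 6.0, 0.0]])
assert np.allclose(ad_J(ad_J_inv(X)), X)
```

The same entrywise division underlies ad −1 L 1 in the text, after conjugating to the frame where L 1 is diagonal.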
After a similar treatment of (4.5) we obtain the analogous relation. Finally, combining (4.7), (4.10) and (4.11) one derives the recursion operator. It can also be presented conveniently as a 4 × 4 matrix mapping the column vector (u, v, u * , v * ) T t to the column vector (u, v, u * , v * ) T τ . Then it can be verified that R obeys the splitting below, where the factors π and A look as follows. The matrices are a local and a nonlocal part respectively, originating from the orthogonal part B ⊥ 2 and the L 1 -commuting part b 2 L 2 . Finally, the last term is connected with the term b 1 L 1 in the splitting of B 1 .

Wronskian relations and 'squared solutions'
Wronskian relations [5,6] provide an important tool for analyzing the relevant class of NLEE and the mapping F : M → T, where M is the set of allowed potentials of L (in our case L 1 ) and T is the minimal set of scattering data. In deriving them we will need, along with the first equation in (2.16), two other related equations, where the variation of χ(x, λ) is due to the variation δL 1 (x). The first type of Wronskian relations interrelates the asymptotics of the FAS with L 1 and its powers, as shown in the examples below. We remind the reader that J = diag(1, 0, −1) and I = diag(1/3, −2/3, 1/3) are the diagonal forms of (the asymptotics of) L 1 and L 2 = L 2 1 − (2/3)1 1 respectively, that we introduced at the very beginning in the section Preliminaries (see formulae (2.13), (2.14)).
A second class of Wronskian relations connects the variation δχ(x, λ) with the variation of L 1 , i.e. it involves χ̂ δχ(x, λ). The left hand sides of the Wronskian relations involve the scattering data and their variations, while the right hand sides can be viewed as Fourier-type integrals. To make this obvious, let us take the Killing form of the Wronskian relation (4.13) with a Cartan-Weyl generator E α and use the invariance of the Killing form. The quantities e α (x, λ) = χ E α χ̂ (x, λ) introduced above are called 'squared solutions'. Due to the fact that we have two FAS χ + (x, λ) and χ − (x, λ), we obtain two types of 'squared solutions' e ± α (x, λ). Similarly, taking the Killing form in (4.14) and (4.15) we find the relations below. We are interested more specifically in variations that are due to the time evolution of L 1 (x), i.e.
Therefore, up to first-order terms in δt, we obtain the expressions below. Now we can explain why the Wronskian relations are important for analyzing the mapping F : M L 1 → T. Indeed, taking χ(x, λ) to be a fundamental analytic solution of L, we can express the left hand sides of (4.16) and (4.17) (resp. (4.18)) through the Gauss factors S ± , T ∓ and D ± (resp. through the Gauss factors and their variations). The right hand sides of (4.16) and (4.17) (resp. (4.18)) can be interpreted as a Fourier-like transformation of the potential L 1 (x) (resp. of the variation δL 1 (x)). As a natural generalization of the usual exponentials there appear the 'squared solutions'. The 'squared solutions' are analytic functions of λ. This fact underlies the proof of their completeness in the space of allowed potentials, as we shall see.

The skew-scalar product and the mapping M L 1 → T j
It is obvious that only some of the matrix elements of the squared solutions e α (x, λ) contribute to the right-hand sides of the Wronskian relations. To make this clearer we will use the Z 2 -grading of the Lie algebra, which hints that we should split the squared solutions as in (2.10), namely: In addition, according to (2.11), (2.12), each of the components H α (x, λ) and K α (x, λ) can be split as shown below, where H ⊥ α (x, λ) and K ⊥ α (x, λ) do not commute with L 1 (x). It is not difficult to realize that only H ⊥ α (x, λ) and K ⊥ α (x, λ) contribute to the right-hand sides of the Wronskian relations.
In what follows we will also make use of the skew-scalar product, where X and Y are functions with values in sl(3)/ ker ad L 1 vanishing for x → ±∞. We will denote the linear space of such functions by M L 1 . Note that we can express the right hand sides of the Wronskian relations using the skew-scalar product: We should also point out that the skew-scalar product is non-degenerate on M L 1 . We finish this subsection by listing the effective results from the Wronskian relations. Below α ∈ ∆ + : In deriving the above formulae, besides the splitting (4.19) we also used the fact that ad −1 L 1 δL 1 , ad −1 L 1 L 1,x ∈ sl 0 (3) and ad −1 L 1 L 2,x ∈ sl 1 (3). The quantities ρ (k),± α , τ (k),± α , k = 1, 2, can be viewed as reflection coefficients. These formulae provide the basis for the analysis of the mapping M L 1 → T k , k = 1, 2. Indeed, L 1,x ∈ M L 1 , while the sets of coefficients, completed with additional sets characterizing the discrete spectrum of L, are candidates for the minimal sets of scattering data of the Lax operator. The above considerations show that, following [1], we can treat the 'squared solutions' as generalized exponentials. Therefore the mapping M L 1 → T can be viewed as a generalized Fourier transform.
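The defining property of the skew-scalar product can be checked numerically. A hedged sketch, using arbitrary smooth matrix-valued samples decaying at infinity (illustrative choices, not actual solutions of the Lax equation):

```python
import numpy as np

# Skew-scalar product  [[X, Y]] = integral of tr( X(x) [L1(x), Y(x)] ) dx,
# checked here for skew-symmetry.
rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 401)
h = x[1] - x[0]
bump = np.exp(-x**2)[:, None, None]        # ensures vanishing for x -> +-inf

L1 = rng.standard_normal((3, 3))[None] * bump
X  = rng.standard_normal((3, 3))[None] * bump
Y  = rng.standard_normal((3, 3))[None] * bump

def skew(X, Y):
    comm = L1 @ Y - Y @ L1                 # [L1, Y], pointwise in x
    integrand = np.trace(X @ comm, axis1=1, axis2=2)
    return integrand.sum() * h             # simple quadrature

# Skew-symmetry: tr(X [L1, Y]) = -tr(Y [L1, X]) by cyclicity of the trace
assert np.isclose(skew(X, Y), -skew(Y, X))
```

The skew-symmetry follows purely from the cyclicity of the trace, which is why the product is well defined on the quotient sl(3)/ ker ad L 1 .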
Of course, the justifications of the above statements must be based on a proof of the completeness relation for the 'squared solutions' which will be presented in the next section.

Recursion operators -an alternative approach
Our goal in this subsection is to present an alternative definition of the recursion operators. We introduce them as operators Λ ± whose eigenfunctions are the 'squared solutions'. More precisely, we shall see that the following equality holds: Our considerations start from an analysis of the equation satisfied by each of the 'squared solutions'. We have omitted the superscript ± in the notation of the squared solutions since it does not play a significant role in most of our further considerations. We shall use it again at the end of this subsection, where it becomes essential. Due to the grading condition (2.9) the components H α and K α are interrelated through the system below. After we extract the terms proportional to L 1 from the above equations we obtain the relations where h α,0 and k α,0 are some integration constants. On the other hand, the orthogonal part of (4.24) reads: After substituting h α and k α in the equations above we get the system where the operators Λ 2 and Λ 1 are given in (4.10) and (4.11) respectively. It is worth mentioning here that, remarkably, the operators Λ 1,2 are invertible. Indeed, they admit the factorization below. It is not hard to see that the operators J 1 and J 2 can be inverted as follows. Therefore Λ 1 and Λ 2 are invertible as well, namely: Let us apply Λ 2 to (4.25) and Λ 1 to (4.26). After restoring the index notation where needed, the result reads: This is the right place to remind the reader that the constants h α,0 and k α,0 are determined by the asymptotics of the relevant 'squared solution' for x → ∞ (or for x → −∞), depending on the proper definition of the recursion operator. A more detailed analysis shows that for each of Λ ± 1 and Λ ± 2 there exist certain roots for which the constants k α,0 and h α,0 vanish. More specifically, the following equalities hold for all roots α > 0, and therefore: The operator Λ ± introduced above is the recursion operator we have been looking for.
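The factorizations of Λ 1 and Λ 2 involve the antiderivative ∂ −1 x f = ∫ x ±∞ f dy, and their invertibility ultimately rests on ∂ x ∘ ∂ −1 x = id on rapidly decaying functions. A minimal numerical check of this fact on a Schwartz-type sample (an illustrative profile, not a 'squared solution'):

```python
import numpy as np

# Check that d/dx of the antiderivative from -inf recovers the function.
x = np.linspace(-10.0, 10.0, 2001)
h = x[1] - x[0]
f = np.exp(-x**2) * np.sin(3 * x)          # rapidly decaying test function

# Antiderivative: cumulative trapezoid from the left end (~ -inf for this f)
F = np.concatenate([[0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * h)])
# Derivative: central differences in the interior
dF = np.gradient(F, h)

assert np.max(np.abs(dF - f)) < 1e-3
```

Choosing the lower limit −∞ or +∞ corresponds to the two operators Λ ± mentioned in the text; the composition property holds for either choice on decaying functions.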
If we compare (4.27) and (4.12) we can conclude that both approaches lead to compatible results which coincide up to some conjugation by ad L 1 .

Asymptotics of the fundamental analytic solution
To perform the contour integration needed to derive the completeness relations for the squared solutions, one needs the asymptotics of the squared solutions for |λ| → ∞. The starting point here is the asymptotic expansion of the FAS for λ → ∞: Introduce: which satisfy: Together with properly chosen asymptotic conditions on χ̃ as (x, λ), the last equation provides the asymptotics of the fundamental analytic solution. Therefore it admits an asymptotic expansion of the form: This leads to recurrence relations for the expansion coefficients χ̃ k,as (x). Inserting this expansion into the equation above, from (5.2) it follows that χ̃ 0,as (x) must be a diagonal matrix, and using the diagonal part of (5.3) one can derive it in explicit form: and for the off-diagonal part of χ̃ f 1,as (x) we have (see equation (2.15)): As a result, the asymptotic behavior of χ(x, λ) for λ → ∞ is given by: Note that the fundamental analytic solution χ ± does not allow a canonical normalization for |λ| → ∞. This difficulty can be overcome by applying a suitable gauge transformation. Using the asymptotic behavior (5.5) of the FAS, one can derive the asymptotics of the squared solutions e α (x, λ) = (χ(x, λ)E α χ −1 (x, λ)) f . Skipping the details:

Completeness of squared solutions
Below we will derive the completeness relation for the 'squared solutions' for a class of potentials L 1 (x) of the Lax operator L which for every fixed value of t satisfy the following conditions:
C1) L 1 (x) − L 1,± is a complex-valued function of Schwartz type, i.e. it is an infinitely smooth function of x falling off for |x| → ∞ faster than any power of x;
C2) L 1 (x) is such that the corresponding functions m ± 1 (λ) and m ± 2 (λ) have at most a finite number of simple zeroes. For simplicity we assume also that m + 1 (λ) and m + 2 (λ) (resp. m − 1 (λ) and m − 2 (λ)) have no common zeroes;
C3) the corresponding functions m ± 1 (λ) and m ± 2 (λ) have no zeroes on the real axis of the complex λ-plane.

Remark 2.
It is well known that the zeroes of the functions m ± 1 (λ) and m ± 2 (λ) determine the discrete spectrum of L. The proof of this fact is beyond the scope of the present paper. For other classes of Lax operators it is well known; see, e.g., [11,20].
Below, for definiteness, we introduce notations for the discrete eigenvalues. As we already mentioned, due to the two Z 2 -reductions they are of two types. Let there be N 1 discrete eigenvalues of generic type, which come in quadruplets (see Fig. 5). We will denote by λ + 1 , λ + 2 , . . . , λ + N 1 those of them that are in the first quadrant, i.e. 0 < arg λ + k < π/2, 1 ≤ k ≤ N 1 . The eigenvalues λ + N 1 +k = −(λ + k ) * belong also to C + but are in the second quadrant. Besides, we have also λ − j = (λ + j ) * , j = 1, . . . , 2N 1 , which lie in C − . The second type of discrete eigenvalues are the purely imaginary pairs ±iν a , a = 1, . . . , N 2 .
Figure 5. The bold line defines the continuous spectrum of L, the regular lines show the integration contours γ ± ; by γ ±,∞ we denote the 'infinite' semi-circles. By • we denote the discrete eigenvalues relevant for one generic soliton which come in quadruplets ±λ ± j ; the second type of solitons correspond to purely imaginary pairs of discrete eigenvalues ±iν a , denoted by ×.
Consider the Green function: In order to derive the completeness relations for the squared solutions, one needs to apply contour integration to the integral: where the integration contours γ ± = R ∪ γ ±,∞ are shown in Fig. 5. Let us recall that the Lax operator L has two types of discrete eigenvalues, whose total number is 4N 1 + 2N 2 ; the first type corresponds to generic solitons and comes in quadruplets ±λ ± j ; the second type corresponds to purely imaginary pairs of discrete eigenvalues ±iν a . Thus, according to Cauchy's residue theorem, we obtain the relation below, presuming that the radii of the 'infinite' semicircles are large enough so that all discrete eigenvalues are inside the contours γ ± . Condition C3 above ensures that G ± (x, y, λ) has no poles on the continuous spectrum.
Next we integrate along the contours: In order to calculate the integrals over γ ±,∞ it is enough to use the asymptotic behavior of the Green function for λ → ∞. From equations (5.4) and (5.5) we have: and As a result we get: The next step is to simplify the integral along the continuous spectrum; namely, we will calculate the jump of the Green function, with the result: The proof of this fact goes as follows. From the expression for the Green function (equations (5.6)-(5.9)) and using the relation θ(x − y) + θ(y − x) = 1 we easily get: so it will be enough to prove that the coefficient in front of θ(x − y) vanishes. Indeed, we have: Here H 1 = J, H 2 = J 4 . Note that Π 2 is the second Casimir endomorphism of the algebra sl(3). It remains to use equation (2.25) and the basic property of Π 2 that states: for any element G(λ) of the corresponding group SL(3). We evaluate the contribution from the discrete spectrum assuming that the squared solutions e ± α (x, λ) have at most simple poles at the points of the discrete spectrum. Therefore, in the vicinity of the eigenvalues λ ± k we use the following expansions of the 'squared solutions': where p α are integers and p α = −p −α . Thus we obtain: Therefore the completeness relations take the form:
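The contour argument can be illustrated on a scalar toy model: for a function with a simple pole at λ = iν (mimicking a doublet eigenvalue on the imaginary axis, and standing in for the Green function only as an illustration), the integral over γ+ = R ∪ γ+,∞ reproduces 2πi times the residue.

```python
import numpy as np

def trapz(y, x):
    """Composite trapezoid rule (works for complex integrands)."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1]))

nu = 2.0
f = lambda lam: 1.0 / (lam**2 + nu**2)     # simple poles at +-i*nu

R = 200.0                                  # radius of the 'infinite' semicircle
t = np.linspace(-R, R, 400001)             # straight part along the real axis
I_line = trapz(f(t), t)

th = np.linspace(0.0, np.pi, 20001)        # arc part: lam = R e^{i th}
lam = R * np.exp(1j * th)
I_arc = trapz(f(lam) * 1j * lam, th)       # d(lam) = i R e^{i th} d(th)

residue = 1.0 / (2j * nu)                  # Res_{lam = i*nu} f
assert np.abs((I_line + I_arc) - 2j * np.pi * residue) < 1e-3
```

Taking R large enough that all poles in C+ lie inside the contour is precisely the presumption made for γ ± in the text.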

Expansion over the 'squared solutions'
Using the completeness relations of the squared solutions, one can expand any generic element F (x) of the phase space M over the complete sets of squared solutions. We remind that F (x) is a generic element of M if it tends fast enough to zero for |x| → ∞ and is 'perpendicular', i.e.
It can be parametrized as follows: Let us now multiply both sides of the completeness relation (5.10) by 1 1 ⊗ [L 1 (y), F (y)] on the right, take the Killing form of the second factors in the tensor product and integrate over y. In the left hand side after some manipulations we get: The same operations applied to the right hand sides of (5.10) replaces each of the terms e ± α (x, λ)⊗ e ± −α (y, λ) by e ± α (x, λ) e ± α (y, λ), F (y) . Thus the expansion coefficients of F (x) are provided by its skew-scalar products with the 'squared solutions'.
Corollary 1. i) The function F (x) ≡ 0 if and only if all its expansion coefficients (5.12) vanish, i.e.: ii) The function F (x) ≡ 0 if and only if all its expansion coefficients (5.14) vanish, i.e.: Proof . Inserting F (x) ≡ 0 into the r.h.s. of the inversion formulae (5.12) we obtain the relations (5.15). Next, let us assume that equation (5.15) holds and insert it into the r.h.s. of equation (5.11). This immediately gives F (x) = 0. Thus i) is proved. Statement ii) is proved analogously, using equations (5.14) and (5.13).
In other words, we have established a one-to-one correspondence between the element F (x) ∈ M and its expansion coefficients. Now let us take F (x) = ad −1 L 1 L j,x (x), j = 1, 2. Then the corresponding expansion coefficients are given by: From the Wronskian relations we have: Taking this into account, we obtain the following expansions of these functions over the 'squared solutions' (for details see the appendix): Similarly: Here δ + 1 = {α 1 , α 2 }. Next, taking F (x) = ad −1 L 1 δL 1 (x) we derive the corresponding expansions of the variation of L 1 (x) over the squared solutions. Thus we obtain the following expansions for ad −1 L 1 δL 1 (x): The corresponding expansion coefficients are given by: The calculation of the expansion coefficients for the discrete spectrum is explained in the appendix.
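The logic of these expansions parallels the ordinary Fourier expansion: the coefficients are (skew-)scalar products with the generalized exponentials, and completeness guarantees reconstruction. A minimal illustration with the usual exponentials exp(ikx) on a periodic grid (the 'squared solutions' of the trivial case):

```python
import numpy as np

N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
F = np.exp(np.cos(x)) - 1.0                # a generic smooth 'potential'

k = np.fft.fftfreq(N, d=1.0 / N)           # integer wave numbers
# Expansion coefficients: scalar products with the exponentials
coeff = np.array([np.sum(F * np.exp(-1j * kk * x)) / N for kk in k])
# Reconstruction from the coefficients (completeness of the exponentials)
F_rec = np.real(sum(c * np.exp(1j * kk * x) for c, kk in zip(coeff, k)))

assert np.max(np.abs(F - F_rec)) < 1e-10
```

In the integrable setting, the exponentials are replaced by e ± α (x, λ), the scalar product by the skew-scalar product, and the sum over k by an integral over the continuous spectrum plus a sum over the discrete eigenvalues.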
Presuming that the variation of the potential is due to the time evolution of L 1 (x), one can get similar expansions for the time-derivative ad −1 L 1 L 1,t (x), where the expansion coefficients δ t τ (1),± α and δ t ρ (1),± α are given by: The details of calculating the expansion coefficients relevant for the discrete spectrum of L are given in Appendix A. The completeness relation (5.10) and the expansions constructed above can be viewed as spectral decompositions for the recursion operators Λ ± and Λ̃ ± (4.27). Indeed, one may check that In other words, we have proved the completeness of the set of eigenfunctions of Λ ± and Λ̃ ± (4.27).

Conjugation properties of the recursion operator
Here we derive the recursion operators that are adjoint with respect to the skew-scalar product, i.e. we define Λ * 1,2 by the relations: where X 1,2 = X ⊥ 1,2 ∈ sl 1 (3) and Y 1,2 = Y ⊥ 1,2 ∈ sl 0 (3). In doing this we repeatedly use integration by parts and the following properties of the operator ad −1 L 1 . The derivation of Λ ± 1 * goes as follows: The other recursion operators are treated analogously. Thus we obtain:

Fundamental properties of the NLEE

Integrals of motion
In this subsection we are going to apply a method by Drinfel'd and Sokolov [7] to derive the integrals of motion of system (2.1). In order to do that it proves to be technically more convenient to deal with the Lax pair (2.13), (2.14). Then we map the operators L̃ and Ã into L = P −1 L̃ P = i∂ x + λJ + L 0 + L 1 λ + · · · , (6.1) where all matrix coefficients L k , A −1 and A k , k = 0, 1, . . . are diagonal, and make use of the following asymptotic expansion for P(x, t, λ): In order to make all further considerations unambiguous we assume that all coefficients p l (l = 1, 2, . . .) are off-diagonal matrices. The zero curvature representation is gauge invariant, i.e. [L, A] = 0 is fulfilled. Since [L k , A l ] = 0, the commutativity of L and A is equivalent to the following requirements. Hence the L k represent the densities of the integrals of motion we are interested in. Equality (6.1), rewritten as shown, holds identically with respect to λ. This requirement leads to the following set of recurrence relations: In order to solve them we apply the well-known procedure of splitting each relation into a diagonal and an off-diagonal part. For example, treating this way the first relation above one gets the equations where the superscripts d and f denote projection onto the diagonal and off-diagonal parts of a matrix respectively. Taking into account the explicit form of U 0 (see formula (2.15)), for L 0 we have the expression below. Thus as the density of our first integral we can choose I 0 = u * u x + v * v x . It represents the momentum density of our system. For the stationary solutions (3.26) and (3.28) the momentum density is depicted in Fig. 6. It is evident that in both cases it is a well-localised function of x.
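The localisation of the momentum density can be checked directly. A hedged sketch with hypothetical one-soliton-like profiles satisfying |u|² + |v|² = 1 (an illustrative choice, not the actual stationary solutions (3.26), (3.28)):

```python
import numpy as np

x = np.linspace(-15.0, 15.0, 3001)
c = 2.0
u = np.tanh(x).astype(complex)             # tanh^2 + sech^2 = 1 keeps the constraint
v = (1.0 / np.cosh(x)) * np.exp(1j * c * x)
assert np.allclose(np.abs(u)**2 + np.abs(v)**2, 1.0)

ux = np.gradient(u, x)
vx = np.gradient(v, x)
I0 = np.conj(u) * ux + np.conj(v) * vx     # momentum density I_0 = u* u_x + v* v_x

# As in Fig. 6, the density is well localised in x
assert np.abs(I0[0]) < 1e-8 and np.abs(I0[-1]) < 1e-8
assert np.abs(I0[len(x) // 2]) > 0.1
```

For these profiles the density decays like sech²(x), so the total momentum integral converges, as required of a conserved quantity density.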
On the other hand, after inverting the commutator in the second equation in (6.3) one obtains the expression below. Similarly, for the second matrix L 1 one needs to extract the diagonal part of (6.2). The result reads: After substituting the expression (6.4) for p 1 into (6.5) one obtains the relation below. Hence the second integral density is: In general, one is able to calculate the k-th integral of motion through the formula below. The matrix p k in its turn is obtained from the following recursive formula
As we did many times in our exposition, we shall split each element A k into two mutually orthogonal parts A ⊥ 2q−1 and f 2q−1 L 1 (resp. A ⊥ 2q and f 2q L 2 in the case of even indices): Substituting the splitting of A N −1 into (6.7) we obtain the relation below. After taking the Killing form · , L 2 to separate the L 1 -commuting part and its orthogonal complement, we deduce that f N = c N = const and: Similarly, after inserting the splitting (6.10) into a generic recurrence relation (6.8) we obtain: After extracting the L 1 -commuting part from the equations above one obtains the coefficient, where c 2q−1 (resp. c 2q ) is a constant of integration. On the other hand, for A ⊥ 2q−1 (resp. A ⊥ 2q ) we have: where the integro-differential operators Λ 2 and Λ 1 are given by (4.10) and (4.11) respectively.
The last recurrence relation (6.9) yields the final relation. What remains is to successively substitute the expressions for A ⊥ k , k = 1, . . . , 2p − 1, above in order to write the NLEE in terms of the operators Λ 1 and Λ 2 . Here we give the results for both cases a) and b): The coefficients c k enter the dispersion laws of the NLEEs. By analogy with (2.18), the dispersion law of a general NLEE is defined through the formula below. The dispersion law governs the time evolution of the scattering matrix (2.19) through the linear equation. It is not hard to check that the equalities below are valid. We remind the reader that the constant diagonal matrices J and I represent the diagonal forms of the asymptotics of L 1 and L 2 respectively, see (2.13), (2.14). As one can easily check, the initial NLEE (2.1) can be derived from the above formulae in the simplest case N = 2 after plugging in c 2 = −1 and c 1 = 0.

The hierarchy of NLEE and generalized Fourier transforms
In the previous subsection we described the class of NLEE related to the Lax operator L. In particular we showed that if L 1 (x, t) is a solution to one of the NLEE (6.11) then the corresponding scattering matrix T (λ, t) satisfies the linear evolution equation (6.12) with dispersion law f (λ) given by (6.13).
Here we will prove a more general theorem.
Theorem 1. Let the Lax operator L be such that its potential satisfies the conditions C1)-C3).
Then the NLEE (6.11) are equivalent to each of the following set of linear evolution equations: Proof . Let us first consider the NLEE (6.11a) and let us denote its left hand side by N a) (x, t).
Next, let us expand it over the complete set of eigenfunctions H ± ±α (x, λ). If we make use of the expansion (5.11), then we need to calculate the expansion coefficients (5.12). Using the properties of the recursion operators (5.16) and (5.17) we have: From equation (6.16) we find the relation below, where H(λ) ∈ h. It remains to take the Killing form of (6.17) with the Cartan elements J and I of sl(3) to determine that H(λ) = f (λ). Thus we have proved that from the NLEE one gets the first of the equations in (6.14): Next we rewrite equation (2.23) in the form: and equate the principal upper and lower minors of both sides. As a result we get, for λ ∈ R: We remind the reader that the scalar functions m + 1 (λ) and m + 2 (λ) (resp. m − 1 (λ) and m − 2 (λ)) are analytic for λ ∈ C + (resp. λ ∈ C − ). Then the relations (6.18) can be viewed as an RHP which can be solved by the Plemelj-Sokhotsky formulae. Thus equations (6.18) allow us to recover m ± 1 (λ) and m ± 2 (λ), and effectively D ± (λ), in their whole regions of analyticity from S ± (λ). Likewise, equations (6.19) allow us to recover D ± (λ) from T ± (λ).
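The Plemelj-Sokhotsky step can be illustrated on a scalar toy model: a function analytic in C+ and decaying at infinity is recovered at interior points of C+ from its boundary values on the real axis by a Cauchy integral (a stand-in for recovering m + 1 (λ) from data on R; the function below is an arbitrary illustrative choice).

```python
import numpy as np

f = lambda s: 1.0 / (s + 1j)               # analytic in C+, its only pole is at -i

s = np.linspace(-2000.0, 2000.0, 800001)   # truncated real line
h = s[1] - s[0]
lam = 0.7 + 1.3j                           # an interior point of C+
g = f(s) / (s - lam)
# Cauchy integral (1 / 2*pi*i) * integral of f(s)/(s - lam) ds over R
cauchy = np.sum(0.5 * (g[1:] + g[:-1])) * h / (2j * np.pi)

assert np.abs(cauchy - f(lam)) < 1e-3
```

For λ approaching the real axis the same integral produces the boundary values via the Plemelj-Sokhotsky formulae, which is what allows solving (6.18) for m ± 1,2 (λ) from their jump across R.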
From equations (6.18) it also follows that We can also reconstruct T ± (λ) as the Gauss factors of D + Ŝ + S − D̂ − (λ) and check that Thus we have proved the statement of the theorem for the scattering data on the continuous spectrum of L. A similar procedure allows us to recover the equations for the data on the discrete spectrum of L.
In order to complete the proof of the equivalence between the NLEE (6.11) and (6.14) it remains to apply Corollary 1.
The equivalence of the NLEE (6.11) and (6.15) is proved along the same lines, by analyzing the expansion coefficients θ ± N a) (λ).
Proof . The fact that each of the sets S k determines the scattering matrix T (λ) follows easily from the theorem. Indeed, we showed how, starting from S k , one can construct each of the Gauss factors of T (λ); it remains just to take the corresponding products. In order to reconstruct the corresponding potential we have to solve the RHP with canonical normalization at λ = 0. Considering the Taylor expansion of χ ± (x, λ): we get

Hierarchy of Hamiltonian formulations
Our analysis here is again based on the Wronskian relations, extending the ones in Section 4. We first use a slightly modified version of equation (4.18): A third class of Wronskian relations connects the variation of the derivative χ̇(x, λ) ≡ ∂ λ χ(x, λ) with L 1 , i.e.
Thus we derive the interrelations between the generating functionals of integrals of motion and the corresponding potential: Note that the second reduction in equation (2.21) applied to h ± J (x, λ) and h ± I (x, λ) gives which means that where C J;2s+1 and C I;2s are integrals of motion.
Let us note that neither h ± J (x, λ) nor h ± I (x, λ) are eigenfunctions of the recursion operators. Indeed, if we separate the parts 'orthogonal' and 'parallel' to L 1 and L 2 and split them as: we get: Therefore, or in other words: We can rewrite equations (6.20) in the form: Thus in fact we have proved that the conserved quantities of these NLEE satisfy the well-known Lenard relation δC J,2s+1 /δL 1 (x) = Λ 2 Λ 1 δC J,2s−1 /δL 1 (x), δC I,2s /δL 1 (x) = Λ 2 Λ 1 δC I,2s−2 /δL 1 (x). (6.21) We finish by formulating the Hamiltonian properties of these NLEE. It is only natural that the Hamiltonians must be linear combinations of the integrals of motion: Next we introduce a symplectic structure using the symplectic form: Using (6.21) we find that the equation of motion (6.11) can be written down as Ω ∨ 1 ≡ ad −1 L 1 δL 1 · Λ 2 ad −1 L 1 L 1,t = δH, (6.22) with H = H a) or H = H b) respectively.
Thus we conclude that the family of symplectic forms (6.23) is dynamically compatible.

Discussion and conclusions
A system of coupled equations generalizing the Heisenberg ferromagnet equations has been studied. The system is associated with a polynomial bundle Lax operator L related to the symmetric space SU(3)/S(U(1) × U(2)). The spectral properties of the operator L in the case of the simplest constant boundary condition (2.2) have been described. The continuous spectrum of L fills up the real axis of the complex λ-plane and divides the plane into two regions: the upper half plane C + and the lower half plane C − . Each region is an analyticity domain of a fundamental analytic solution to the auxiliary linear problem.
By using the dressing method [40,49,50,51] we have constructed 1-soliton solutions of the (Z 2 -HF) models. Due to the additional symmetries the Lax operator L may possess two types of discrete eigenvalues: generic ones and purely imaginary ones. Therefore we have two types of soliton solutions: quadruplet and doublet solitons, respectively. We outlined the purely algebraic construction for deriving the N -soliton solutions for both types of solitons.
Using the Wronskian relations one is able to construct 'squared solutions' and an integro-differential operator, called the recursion operator, whose eigenfunctions they are. There exists another viewpoint on recursion operators: they generate the hierarchy of symmetries of the NLEEs. Thus one can derive the recursion operator of an NLEE from purely symmetry considerations.
Using the interpretation of the ISM as a generalized Fourier transform and the expansions over the 'squared solutions' we studied the fundamental properties of the class of NLEE admitting a Lax operator of the form (2.3), including the description of the whole class of NLEE, the infinite set of integrals of motion and the Hamiltonian formulation of the corresponding hierarchy.
The results of the present article can be extended in several directions. Firstly, one can develop the theory in the case of a rational bundle L. The simplest NLEEs related to a Lax pair of this type take the form (7.1), where u and v are functions of x and t subject to the same condition as in the polynomial case, i.e. |u| 2 + |v| 2 = 1. The system (7.1) can also be seen as an anisotropic deformation of (2.1) with k = 1, N = 3 and is a special case of the models proposed in [23]. That modification is required when one imposes an additional Z 2 reduction of the form λ → 1/λ, see [18]. This case is more complicated and much richer than the one we have studied here. It requires the construction of automorphic Lie algebras and the study of their properties, following the ideas of [30,29,46].
Remark 3. A more precise treatment of the contribution from the discrete spectrum starts by considering potentials for which L 1 (x) − L 1,as has finite support. Then the Jost solutions of L, as well as the scattering matrix T (λ), become meromorphic functions of λ, which ensures the validity of all considerations above. The next step would be taking the limit to potentials of Schwartz type. These considerations are beyond the scope of the present paper.
When we analyze the discrete spectrum we make use of the conditions C2 and C3. The discrete eigenvalues are zeroes of the principal minors; assuming that λ + k (resp. λ − k ) is a zero of, say, m + 1 (λ), we have: and similar expansions for m ± 2 (λ). Using Remark 2 and formulae (A.1) we easily find that τ (j),± α (λ) have simple zeroes in the vicinity of λ ± k . We end by recalling a few useful formulae for frequently appearing expressions in the main text; they are derived in [18,19].
Since L 1 satisfies the characteristic equation L 3 1 = L 1 , the operator ad L 1 has eigenvalues ±2, ±1, 0 and satisfies the characteristic equation: If we choose X ∈ sl 1 (3) we have the expression below, where w = ua * + vb * . If in particular X = L 1,x , we find: where w 0 = uu * x + vv * x . Similarly for Y ∈ sl 0 (3) and ad −1 L 1 . Choosing Y = L 2,x we get:
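The characteristic equation of ad L 1 can be verified numerically. Since the eigenvalues of L 1 lie in {1, 0, −1}, those of ad L 1 are their pairwise differences, i.e. lie in {0, ±1, ±2}, so ad L 1 is annihilated by z(z² − 1)(z² − 4) = z⁵ − 5z³ + 4z. A minimal check with the diagonal representative J = diag(1, 0, −1) (an assumption made only for this sketch; the x-dependent L 1 is conjugate to J):

```python
import numpy as np

J = np.diag([1.0, 0.0, -1.0])              # diagonal form of L1, satisfies J^3 = J
ad = lambda X: J @ X - X @ J               # ad_J X = [J, X]

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3))            # arbitrary test matrix
a1 = ad(X)                                 # ad X
a3 = ad(ad(a1))                            # ad^3 X
a5 = ad(ad(a3))                            # ad^5 X

# Characteristic equation z^5 - 5 z^3 + 4 z = 0 satisfied by ad_J
assert np.allclose(a5 - 5 * a3 + 4 * a1, 0)
```

The same polynomial identity is what allows writing ad −1 L 1 on the orthogonal complement of the kernel as a polynomial in ad L 1 .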