Manin Matrices for Quadratic Algebras

We give a general definition of Manin matrices for arbitrary quadratic algebras in terms of idempotents. We establish their main properties and give their interpretation in terms of category theory. The notion of a minor is generalised to the case of a general Manin matrix. We give some examples of Manin matrices, describe their relations with Lax operators and obtain formulae for some minors. In particular, we consider Manin matrices of the types $B$, $C$ and $D$ introduced by A. Molev and their relation with Brauer algebras. Infinite-dimensional Manin matrices and their connection with Lax operators are also considered.


Introduction
In the second half of the 80s Yuri Manin proposed to consider non-commutative quadratic algebras as a generalisation of vector spaces and called them 'quantum linear spaces' [Man87,Man88,Man91]. A usual finite-dimensional vector space is presented by the algebra of polynomials. The linear maps correspond to the homomorphisms of these algebras in the category of graded algebras. One can consider homomorphisms of a quadratic algebra over a noncommutative ring (algebra). Such 'non-commutative' homomorphisms of finitely generated quadratic algebras ('finite-dimensional' quantum linear spaces) are described by matrices with non-commutative entries satisfying some commutation relations. We call them Manin matrices.
The main example proposed by Manin is the 'quantum plane' defined by the algebra with 2 generators x, y and the commutation relation yx = qxy. He established the connection of the quantum plane with some quantum group (Hopf algebra). This quantum group gives a 'non-commutative' endomorphism of the quantum plane.
This picture was also generalised to the case of n generators with the commutation relations x_j x_i = q x_i x_j, i < j. The algebra describing all the 'non-commutative' endomorphisms of the corresponding quantum linear space was called the right quantum algebra in [GLZ], where the authors proved a q-analogue of the MacMahon Master Theorem. The 'non-commutative' endomorphisms and homomorphisms of these quadratic algebras are described by square and rectangular matrices respectively. We call them q-Manin matrices.
'Non-commutative' endomorphisms and homomorphisms of a usual finite-dimensional vector space generalise matrices with commutative entries to matrices with non-commutative entries satisfying certain commutation relations (the commutation relations of the right quantum algebra for q = 1). Matrices of this type were observed in Talalaev's formula [T] and were later called Manin matrices [CF]. These Manin matrices have a lot of applications to integrable systems, Sugawara operators, affine Lie algebras and quantum groups [CF, RTS, CM]. Since they are an immediate generalisation of the usual matrices to the non-commutative case, almost all the formulae of matrix theory remain valid for these Manin matrices [CFR]. Most of them are also generalised to the q-Manin matrices [CFRS].
The 'non-commutative' homomorphisms between two quantum linear spaces are described by an algebra that can be constructed as an internal hom [Man88]. These 'non-commutative' homomorphisms give us a notion of a Manin matrix for a pair of finitely generated quadratic algebras (A, B). These are the matrices with non-commutative entries satisfying the commutation relations of the internal hom algebra. In previous works on Manin matrices the attention was concentrated on the case A = B corresponding to the 'non-commutative' endomorphisms. The general case allows us to consider more examples and to answer some questions about q-Manin matrices which were unclear before.
For instance, the column determinant is a natural generalisation of the usual determinant for the q = 1 Manin matrices, but some properties of this determinant are proved by using only a part of the commutation relations for the entries of these matrices. We show here that this part of the commutation relations defines Manin matrices for a certain pair of different quadratic algebras.
For q ≠ 1 the column determinant is generalised to the (column) q-determinant, which is the natural generalisation for q-Manin matrices. Permutation of the columns of a q-Manin matrix gives a matrix which is not a q-Manin matrix; however, the natural analogue of the determinant for it is again the q-determinant. In contrast, the q-determinant is not a relevant operation for a q-Manin matrix with permuted rows. To consider the permuted q-Manin matrices as another type of Manin matrices and to define natural determinants for them one needs to consider certain pairs of different quadratic algebras. It is convenient to do this in a more universal setting, where the role of the quadratic algebras is played by multi-parametric deformations of the polynomial algebras.
A multi-parametric deformation of super-vector spaces was considered by Manin in the article [Man89]. The super-versions of Manin matrices and MacMahon Master Theorem for the non-deformed case were also considered in [MR]. Here we do not consider the super-case, but we plan to do it in future works.
The notion of Manin matrices for two different quadratic algebras includes Manin matrices of types B, C and D introduced by Molev in [Molev].
In this work we use tensor notations to describe quadratic algebras and Manin matrices, generalising the tensor approach of [CFR, CFRS]. In this framework a quadratic algebra is defined by an idempotent that gives the commutation relations of this algebra. For example, the commutative algebra of polynomials is defined by the anti-symmetrizer acting on the tensor product of two copies of a vector space. Two different idempotents may define the same quadratic algebra; this gives an equivalence relation between the idempotents.
The relations for Manin matrices can be written in tensor notations with the corresponding idempotents. In the case of two different quadratic algebras these commutation relations are given by a pair of idempotents A and B, so we call the matrices, satisfying these relations, (A, B)-Manin matrices. In the case A = B we call them A-Manin matrices.
An important notion in matrix theory is that of a minor, which is usually defined as the determinant of a square submatrix. The minors (defined via the column determinant) play the role of decomposition coefficients in the expression for a 'non-commutative' homomorphism of the anti-commutative polynomial algebras (Grassmann algebras). The dual notion is the permanent of square submatrices (where some rows and columns may be repeated), which gives the coefficients in the case of commutative polynomial algebras. This is directly generalised to some kinds of Manin matrices including the q-Manin matrices and the multi-parametric case. However, in the general situation an analogue of minors is not defined by an operation (determinant or permanent) on submatrices; the minors are defined immediately, without such operations. To do this we introduce some auxiliary 'dual' quadratic algebras for a given idempotent and define a non-degenerate pairing between quadratic algebras (if it exists). In the tensor notation this pairing is written by using some higher idempotents, which we call pairing operators. These operators allow us to define minors as entries of certain operators with non-commutative entries, which we call minor operators. For the main examples the pairing operators are related with the representation theory of the symmetric groups, Hecke algebras and Brauer algebras.
The paper is organised as follows.
In Section 2 we consider quadratic algebras and related Manin matrices. In Subsection 2.1 we consider the quadratic algebras in terms of idempotents. Subsection 2.2 is devoted to the equivalence relations between idempotents. In Subsections 2.3 and 2.4 we define A- and (A, B)-Manin matrices in terms of matrix commutation relations and in terms of quadratic algebras, and we give the main properties of these matrices. In Subsection 2.5 we interpret Manin matrices in terms of comma categories and define a general right quantum algebra via adjoint functors. Subsection 2.6 is devoted to the multiplication of Manin matrices and to the bialgebra structure on the right quantum algebra. A generalisation of Manin matrices to the infinite-dimensional case is given in Subsection 2.7.
In Section 3 we consider the following particular cases: the Manin matrices of [CF, CFR], q-Manin matrices [CFRS] and multi-parametric case [Man89] (the Subsections 3.1, 3.2 and 3.3 respectively). We recall basic properties of these Manin matrices and their determinants.
In Subsection 3.4 we consider a generalisation of the n = 3 case from Subsection 3.3.
Section 4 is devoted to the relationship between Manin matrices and Lax operators. In Subsection 4.1 we explain the known connection of q-Manin matrices with some Lax operators (without spectral parameter) via a decomposition of an R-matrix into idempotents. This gives an important example of the equivalence of idempotents. In Subsection 4.2 we interpret Lax operators (with a spectral parameter) for the rational R-matrix as infinite-dimensional Manin matrices.
In Section 5 we generalise the notion of minors for Manin matrices. Subsection 5.1 has a motivating role: there we consider minors for q-Manin matrices (defined via determinant and permanent) as certain decomposition coefficients. In Subsections 5.2 and 5.3 we define 'dual' quadratic algebras and the corresponding pairings via pairing operators. In Subsections 5.4 and 5.5 we define minors as entries of minor operators and give some properties of these operators. In Subsection 5.6 we investigate the relation between pairing and minor operators for equivalent idempotents. Subsection 5.7 is devoted to an algebraic approach to the construction of pairing operators.
We consider examples of pairing and minor operators in Section 6. Subsection 6.1 is devoted to the multi-parametric case (which includes the case of q-Manin matrices). In Subsection 6.2 we again consider the case of q-Manin matrices, but we construct the pairing operators by using another idempotent (equivalent to the idempotent used in Subsection 6.1), which was found in Subsection 4.1. We show how they give related minor operators. Subsection 6.3 is devoted to the case defined in Subsection 3.4.
Section 7 is devoted to the Manin matrices of types B, C and D. In Subsection 7.1 we recall Molev's definition of these matrices and interpret them as Manin matrices for pairs of idempotents. We consider the corresponding quadratic algebras. In Subsection 7.2 we construct the pairing operators by means of the Brauer algebra.
In Appendix A we give a formula for a set of inversions in the case of an arbitrary Weyl group. In Appendix B the universal enveloping algebras of Lie algebras are represented as quadratic algebras.
Acknowledgements. The author is grateful to A. Chervov, V. Rubtsov, An. Kirillov and A. Molev for useful discussions and advice.

Quadratic algebras and Manin matrices
As a basic field we choose the field of complex numbers C (though any algebraically closed field of characteristic zero can be taken). All the vector spaces are supposed to be defined over C. By an algebra we understand an associative unital algebra over C (not necessarily commutative). By a graded algebra we mean an N_0-graded associative unital algebra over C, where N_0 is the set of non-negative integers: the grading of such an algebra has the form A = ⊕_{k∈N_0} A_k and the condition A_k A_l ⊂ A_{k+l} holds for all k, l ∈ N_0. By a quadratic algebra we mean a graded algebra of the form A = R ⊗ (T V/I), where R is an arbitrary algebra, T V = ⊕_{k≥0} V^{⊗k} is the tensor algebra of a finite-dimensional vector space V and I ⊂ T V is a (two-sided) ideal generated by a subspace of V ⊗ V; in particular, A_0 = R, A_1 = R ⊗ V.
Let C^n be the space of complex column vectors of size n. Its dual (C^n)^* = Hom(C^n, C) is the space of complex row vectors. Their standard bases (e_i)_{i=1}^n and (e^i)_{i=1}^n are dual to each other: e^i e_j = δ^i_j. Let E_{ij} be the n × m matrix with entries (E_{ij})_{kl} = δ_{ik} δ_{jl}. It acts on C^m from the left and on (C^n)^* from the right as E_{ij} e_k = δ_{jk} e_i, e^i E_{jk} = δ_{ij} e^k. We have E_{ij} = e_i e^j.
We use the following tensor notations. Let M ∈ R ⊗ Hom(C^m, C^n) be an n × m matrix over an algebra R. It can be considered as an operator from C^m to C^n with entries M_{ij} ∈ R, that is M = Σ_{i,j} M_{ij} E_{ij}. Introduce the notation

    M^{(a)} = Σ_{i,j} M_{ij} · 1 ⊗ ... ⊗ E_{ij} ⊗ ... ⊗ 1,

where E_{ij} is placed in the a-th site and 1 are identity matrices (of different sizes in general). This is an operator from C^{m_1} ⊗ ... ⊗ C^{m_r} to C^{n_1} ⊗ ... ⊗ C^{n_r}, where m_a = m, n_a = n and m_b = n_b for all b ≠ a. Analogously, for a matrix T = Σ_{i,j,k,l} T_{ij,kl} E_{ik} ⊗ E_{jl} ∈ R ⊗ Hom(C^m ⊗ C^{m′}, C^n ⊗ C^{n′}) with entries T_{ij,kl} ∈ R we denote by T^{(ab)} the operator with E_{ik} placed in the a-th site and E_{jl} in the b-th site. The numbers a and b should be different, but both cases a < b and a > b are possible. In particular, for r = 2 we have M^{(1)} = M ⊗ 1, M^{(2)} = 1 ⊗ M and T^{(12)} = T. In the same way the notations M^{(a)} and T^{(ab)} can be defined for any vector spaces V, W, V′, W′ and any operators M ∈ R ⊗ Hom(V, W), T ∈ R ⊗ Hom(V ⊗ V′, W ⊗ W′) (in the infinite-dimensional case the tensor product with R may need to be completed in some way).
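These conventions can be sanity-checked numerically. A minimal numpy sketch for r = 2 sites (the variable names M1, M2 below are ours, standing for M^{(1)}, M^{(2)}):

```python
import numpy as np

# For two tensor sites, M^(1) = M ⊗ 1 and M^(2) = 1 ⊗ M.
rng = np.random.default_rng(0)
n = 3
M = rng.standard_normal((n, n))
I = np.eye(n)

M1 = np.kron(M, I)   # E_ij placed in the first site
M2 = np.kron(I, M)   # E_ij placed in the second site

# With scalar (hence commuting) entries the two legs commute...
assert np.allclose(M1 @ M2, M2 @ M1)
# ...and their product is the Kronecker square M ⊗ M.
assert np.allclose(M1 @ M2, np.kron(M, M))
```

For a matrix over a non-commutative algebra R the first assertion fails in general, since the entries sitting in different legs no longer commute.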

Quadratic algebras
Consider a quadratic algebra with n generators, that is an algebra generated by elements x_1, ..., x_n over C with some quadratic commutation relations. Since the number of the elements x_i x_j equals n^2, the number of independent quadratic relations is at most n^2. Hence these relations can be presented in the form

    Σ_{i,j=1}^n A_{kl,ij} x_i x_j = 0,   k, l = 1, ..., n,   (2.4)

where A_{kl,ij} ∈ C. The quadratic algebra with these relations is the quotient T V/I, where V = (C^n)^* and I is the ideal generated by the elements Σ_{i,j=1}^n A_{kl,ij} e^i ⊗ e^j. An element x_i ∈ T V/I is the class of e^i ∈ (C^n)^* ⊂ T(C^n)^*.
The coefficients A_{kl,ij} can be considered as entries of a matrix A acting on C^n ⊗ C^n. In terms of the basis this action looks as follows: A(e_i ⊗ e_j) = Σ_{k,l=1}^n A_{kl,ij} (e_k ⊗ e_l). Note that for any invertible n^2 × n^2 matrix G the product GA defines the same quadratic algebra, since the relations Σ_{i,j=1}^n (GA)_{kl,ij} x_i x_j = 0 are equivalent to (2.4).
Proposition 2.1. For each quadratic algebra the matrix A defining its relations can be chosen to be an idempotent:

    A^2 = A.   (2.5)

Proof. Let the given quadratic algebra be defined by the relations with a matrix A. If one proves that there exists an invertible G ∈ Aut(C^n ⊗ C^n) such that (GA)^2 = GA, then we can choose GA as an idempotent matrix defining the quadratic relations for this algebra.

Since G should be invertible, the equation GAGA = GA is equivalent to AGA = A. Let us reduce the matrix A to a Jordan form: in the corresponding basis of C^n ⊗ C^n it is block-diagonal with Jordan cells α_1, ..., α_r on the diagonal. We search for G in the block-diagonal form with blocks β_1, ..., β_r, where each β_k is an invertible square submatrix of the same size as the Jordan cell α_k. Then the equation AGA = A is equivalent to the system of r equations α_k β_k α_k = α_k. If the Jordan cell α_k corresponds to a non-zero eigenvalue of the matrix A, then α_k is invertible and we can choose β_k = α_k^{-1}. Otherwise α_k is a nilpotent Jordan cell (zeros on the diagonal and ones over it); in this case one can take β_k = α_k^⊤ + E_{1d}, the cyclic permutation matrix (here d is the size of the cell), which satisfies α_k β_k α_k = α_k. For α_k = 0 we choose β_k = 1. In all the cases the matrices β_k chosen in this way are invertible.

Further we will suppose that the matrix A is an idempotent unless otherwise specified. Denote the corresponding quadratic algebra by X_A(C).
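For a diagonalizable A the choice of G in this proof can be illustrated numerically (a simplified sketch of our own: it does not cover non-trivial nilpotent Jordan cells):

```python
import numpy as np

# Build a diagonalizable, non-idempotent A with some zero eigenvalues.
rng = np.random.default_rng(1)
V = rng.standard_normal((4, 4))            # generic, hence invertible
lam = np.array([2.0, -1.5, 0.0, 0.0])      # eigenvalues of A
A = V @ np.diag(lam) @ np.linalg.inv(V)

# beta_k = alpha_k^{-1} on non-zero eigenvalues, beta_k = 1 on the kernel.
beta = np.array([1.0 / l if l != 0 else 1.0 for l in lam])
G = V @ np.diag(beta) @ np.linalg.inv(V)

assert abs(np.linalg.det(G)) > 1e-12          # G is invertible
assert np.allclose(A @ G @ A, A)              # the condition AGA = A
assert np.allclose((G @ A) @ (G @ A), G @ A)  # hence GA is an idempotent
```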
More generally, for an algebra R define the quadratic algebra X_A(R) = R ⊗ X_A(C). This is a graded algebra generated by the elements x_1, ..., x_n over R with the quadratic commutation relations (2.4) and r x_i = x_i r, r ∈ R, i = 1, ..., n. The formulae deg x_i = 1, deg r = 0 ∀ r ∈ R give the grading on X_A(R). An idempotent A ∈ End(C^n ⊗ C^n) defines a functor X_A from the category of algebras to the category of graded algebras: X_A(f)(r x_{i_1} ··· x_{i_k}) = f(r) x_{i_1} ··· x_{i_k} for all f ∈ Hom(R, R′) and r ∈ R. Due to Proposition 2.1 any quadratic algebra is isomorphic to X_A(R) for some A and R.
The relations (2.4) can be rewritten in matrix notations as follows. Consider the column vector X = Σ_{i=1}^n x_i e_i. Then the relations (2.4) take the form

    A(X ⊗ X) = 0,   (2.10)

where X ⊗ X = Σ_{i,j=1}^n x_i x_j (e_i ⊗ e_j).
The fact that there are no other independent relations for x_i besides (2.4) can be reformulated as follows: if Σ_{i,j} T_{ij} x_i x_j = 0 for some T_{ij} ∈ R, then T_{ij} = Σ_{k,l=1}^n G_{kl} A_{kl,ij} for some G_{kl} ∈ R.
Lemma 2.2. The following equations for a matrix T ∈ R ⊗ End(C^n ⊗ C^n) are equivalent:

    T(X ⊗ X) = 0,   (2.11)
    T(1 − A) = 0.   (2.12)

Proof. The equation (2.11) is derived from (2.12) by multiplying by (X ⊗ X) from the right and using (2.10). Conversely, if the equation (2.11) holds, then we have n^2 relations Σ_{i,j=1}^n T_{ab,ij} x_i x_j = 0 enumerated by a, b = 1, ..., n, where T_{ab,ij} are entries of the matrix T. This implies T_{ab,ij} = Σ_{k,l=1}^n G_{ab,kl} A_{kl,ij} for some matrix G = (G_{ab,kl}), that is T = GA. By multiplying the last equality by (1 − A) on the right and taking into account (2.5) one obtains (2.12).
Let us consider the 'change of variables' y_i = Σ_{k=1}^m M_{ik} x_k, i = 1, ..., n, where x_1, ..., x_m are the generators of X_A(C). In general, the transition matrix M is an n × m matrix with non-commutative entries M_{ik} ∈ R, so that y_i ∈ X_A(R). In terms of Y = Σ_{i=1}^n y_i e_i we have

    Y = MX.   (2.13)

Lemma 2.3. The equation

    A(Y ⊗ Y) = 0   (2.14)

is equivalent to

    A M^{(1)} M^{(2)} (1 − A) = 0.   (2.15)

Proof. Note that the generators x_i ∈ X_A(C) commute with the entries M_{kl} ∈ R as elements of the algebra X_A(R) = R ⊗ X_A(C). Then by substituting (2.13) into (2.14) we obtain A M^{(1)} M^{(2)} (X ⊗ X) = 0, which in turn is equivalent to (2.15) by Lemma 2.2.

Consider the quadratic algebra Ξ_A(R) generated by the elements ψ_1, ..., ψ_n over R with the commutation relations

    Σ_{i,j=1}^n ψ_i ψ_j A_{ij,kl} = ψ_k ψ_l,   k, l = 1, ..., n,   (2.16)

and rψ_i = ψ_i r, r ∈ R. Each idempotent A ∈ End(C^n ⊗ C^n) defines a functor Ξ_A from the category of algebras to the category of graded algebras. Consider the row vector Ψ = Σ_{i=1}^n ψ_i e^i = (ψ_1, ..., ψ_n). Then the relations (2.16) take the form

    (Ψ ⊗ Ψ)(1 − A) = 0.   (2.17)

If X_A(C) is a Koszul algebra, then the algebra Ξ_A(C) is its Koszul dual algebra.
We will use the following conventions. We write X_A(R) = X_{A′}(R) iff there exists an algebra homomorphism X_A(R) → X_{A′}(R) identical on the generators, that is x_i → x_i ∀ i and r → r ∀ r ∈ R. We write Ξ_A(R) = Ξ_{A′}(R) iff there exists an algebra homomorphism Ξ_A(R) → Ξ_{A′}(R) identical on the generators ψ_i and on r ∈ R. We also write Ξ_A(R) = X_{A′}(R) iff there exists an algebra homomorphism Ξ_A(R) → X_{A′}(R) acting as ψ_i → x_i and r → r. The latter equalities are exactly the isomorphisms of quadratic algebras T V/I as T V-modules (left, right or two-sided), where in the case Ξ_A(C) = X_{A′}(C) we identify C^n with (C^n)^* via e_i → e^i.
Note that S = 1−A is an idempotent iff A is an idempotent. The transformation S → SG by an invertible matrix G ∈ Aut(C^n ⊗ C^n) does not change the relation (2.17). By transposing the relation (2.17) we see that it is equivalent to the relation (2.10) with X replaced by Ψ^⊤ and A replaced by S^⊤ = 1 − A^⊤, where (·)^⊤ means the matrix transposition. Hence Ξ_A(R) = X_{S^⊤}(R) and the functor Ξ_A can be identified with the functor X_{S^⊤} = X_{1−A^⊤}.

Left and right equivalence of idempotents
A fixed quadratic algebra can be defined by different idempotents. Here we give a condition describing when this happens. To do so we introduce two equivalence relations for idempotent endomorphisms acting on a vector space. We prove some useful properties of these idempotents and their equivalence relations.
Let V be a finite-dimensional vector space. Note that the Jordan form of an idempotent A ∈ End(V) (in an appropriate basis of V) is a diagonal matrix diag(1, ..., 1, 0, ..., 0). It can be written in the block form

    A = ( 1  0 ; 0  0 )   (2.18)

with blocks of sizes r×r, r×d, d×r, d×d, where r is the rank of A and d = dim V − r.
We obtain the first property of the idempotents.
Proposition 2.6. The rank of an idempotent A ∈ End(V) equals its trace: rk A = tr A.
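This is easy to check numerically; the idempotent below is built as a generic (non-orthogonal) projection, a construction of our own choosing:

```python
import numpy as np

# A = X (Y X)^{-1} Y is an idempotent of rank r for generic X, Y.
rng = np.random.default_rng(2)
n, r = 5, 2
X = rng.standard_normal((n, r))
Y = rng.standard_normal((r, n))
A = X @ np.linalg.inv(Y @ X) @ Y

assert np.allclose(A @ A, A)               # idempotent
assert np.linalg.matrix_rank(A) == r       # rk A = r
assert np.isclose(np.trace(A), r)          # tr A = r = rk A
```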
Let us say that two idempotents A, A′ ∈ End(V) are left-equivalent (to each other) if there exists G ∈ Aut(V) such that A′ = GA, and we call them right-equivalent if there exists G ∈ Aut(V) such that A′ = AG. Both relations are indeed equivalence relations.
Proposition 2.7. Two idempotents A, A′ ∈ End(V) are left-equivalent iff the idempotents S = 1 − A and S′ = 1 − A′ are right-equivalent.

Proof. Let us fix a basis of V such that the idempotent A has the form (2.18). Suppose that A is left-equivalent to A′; then there exists an invertible matrix G such that A′ = GA. Since A annihilates the last d basis vectors, the matrix A′ = GA has the block form A′ = ( α  0 ; γ  0 ), where α and γ are r×r and d×r matrices respectively. The idempotency of A′ gives α^2 = α and γα = γ; since rk A′ = rk A = r, this forces α = 1. Hence A′ = G_γ A for the invertible matrix G_γ = ( 1  0 ; γ  1 ), and S′ = 1 − A′ = S H_γ, where H_γ = ( 1  0 ; −γ  1 ), so S and S′ are right-equivalent. Further, note that two idempotents S_1 and S_2 are right-equivalent iff the idempotents S_1^⊤ and S_2^⊤ are left-equivalent. Thus the right equivalence of S and S′ implies the left equivalence of A = 1 − S and A′ = 1 − S′.
Remark 2.8. We see from the proof that any matrix left-equivalent to (2.18) has the form A_γ = G_γ A. Analogously, any matrix right-equivalent to S = 1 − A has the form S H_γ.

Proposition 2.9. If idempotents S, S′ ∈ End(V) are right-equivalent, then SV = S′V. If idempotents A, A′ ∈ End(V) are left-equivalent, then V^* A = V^* A′.

Proof. If G ∈ Aut(V) then GV = V, so from S′ = SG one obtains S′V = SGV = SV. The second statement follows from the first one for S = A^⊤, S′ = (A′)^⊤.
Remark 2.10. By using the Jordan forms one can prove the converse statement: the equalities SV = S ′ V and V * A = V * A ′ imply the right equivalence of S, S ′ and the left equivalence of A, A ′ respectively.
Lemma 2.11. Let A, A′ ∈ End(V) be idempotents satisfying A′(1 − A) = 0 and A(1 − A′) = 0. Then A and A′ are left-equivalent.

Proof. By substituting (2.18) and A′ = ( α  β ; γ  δ ) into these relations we obtain β = 0, δ = 0 and α = 1, so that A′ = A_γ = G_γ A (see the proof of Proposition 2.7).

Consider the case V = C^n ⊗ C^n. Recall that any idempotent A ∈ End(C^n ⊗ C^n) defines quadratic algebras X_A(C) and Ξ_A(C).
Proposition 2.12. Let A, A′ ∈ End(C^n ⊗ C^n) be idempotents. Then the following conditions are equivalent:
(a) the idempotents A and A′ are left-equivalent;
(b) the idempotents S = 1 − A and S′ = 1 − A′ are right-equivalent;
(c) X_A(C) = X_{A′}(C);
(d) Ξ_A(C) = Ξ_{A′}(C).
Proof. The conditions (a) and (b) are equivalent due to Proposition 2.7. Two left-equivalent idempotents A and A ′ define the same commutation relations (2.10). This means that (a) implies (c). Let us prove the converse implication. If X A (C) = X A ′ (C) then A ′ (X ⊗ X) = 0 and A(X ⊗ X) = 0. By Lemma 2.2 we obtain the relations A ′ (1 − A) = 0 and A(1 − A ′ ) = 0. Due to Lemma 2.11 these relations imply the left equivalence of A and A ′ . Since Ξ A (C) = X S ⊤ (C) the equivalence of (b) and (d) is derived by considering the transposed matrices.

A-Manin matrices
Definition 2.13. An n × n matrix M over an algebra R satisfying the equation

    A M^{(1)} M^{(2)} (1 − A) = 0   (2.20)

is called a Manin matrix corresponding to the idempotent A, or simply an A-Manin matrix.
The definition (2.20) can be rewritten in the following form:

    A M^{(1)} M^{(2)} = A M^{(1)} M^{(2)} A.   (2.21)

This relation means that the expression A M^{(1)} M^{(2)} is invariant with respect to multiplication by A from the right.
Proposition 2.14. Let M be an n × n matrix over R. Let x_i and ψ_i be the generators of X_A(C) and Ξ_A(C) respectively. Consider the elements y_i ∈ X_A(R) and φ_j ∈ Ξ_A(R) defined as follows:

    y_i = Σ_{j=1}^n M_{ij} x_j,   (2.22)
    φ_j = Σ_{i=1}^n ψ_i M_{ij}.   (2.23)

Then the following three conditions are equivalent:
• The matrix M is an A-Manin matrix.
• The elements y_i satisfy the relations A(Y ⊗ Y) = 0.   (2.24)
• The elements φ_j satisfy the relations (Φ ⊗ Φ)(1 − A) = 0.

Proof. Note that the formulae (2.22) and (2.23) have the form Y = MX and Φ = ΨM. The second condition is equivalent to the first one due to Lemma 2.3. The third condition can be written as (1 − A^⊤)(Φ^⊤ ⊗ Φ^⊤) = 0. Since the operator 1 − A^⊤ is also an idempotent we can again apply Lemma 2.3 (with Ψ^⊤ playing the role of X and M^⊤ the role of M), so the third condition is equivalent to the equation (1 − A^⊤) M^{⊤(1)} M^{⊤(2)} A^⊤ = 0. Transposition of this equation and the formula (M^{(1)} M^{(2)})^⊤ = M^{⊤(1)} M^{⊤(2)} give exactly (2.20), which is the first condition.
Note that the relation (2.20) has the form A M^{(1)} M^{(2)} S = 0, where S = 1 − A. It is equivalent to

    M^{(1)} M^{(2)} S = S M^{(1)} M^{(2)} S.   (2.25)

Let P = 1 − 2A. Then P^2 = 1. We have A = (1 − P)/2 and S = (1 + P)/2. The idempotents A and S are often given by a matrix P satisfying P^2 = 1. The relation (2.20) in terms of P takes the form

    (1 − P) M^{(1)} M^{(2)} (1 + P) = 0.   (2.26)

Remark 2.15. The notations A, S, P and the sign of P are chosen in accordance with the basic case: the algebra of commutative polynomials. In this case the roles of A, S and P are played by the antisymmetrizer, the symmetrizer and the permutation matrix respectively. See Subsection 3.1 for details.
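In the basic case just mentioned these relations are easy to test numerically: A is the antisymmetrizer, P the permutation (swap) matrix, and any matrix with commuting (e.g. scalar) entries is an A-Manin matrix. A minimal numpy sketch:

```python
import numpy as np

n = 2
# Permutation matrix P on C^n ⊗ C^n: P(e_i ⊗ e_j) = e_j ⊗ e_i.
P = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        P[j * n + i, i * n + j] = 1.0
A = (np.eye(n * n) - P) / 2      # antisymmetrizer
S = np.eye(n * n) - A            # symmetrizer

rng = np.random.default_rng(3)
M = rng.standard_normal((n, n))  # scalar entries commute
M1 = np.kron(M, np.eye(n))
M2 = np.kron(np.eye(n), M)

assert np.allclose(A @ A, A) and np.allclose(P @ P, np.eye(n * n))
assert np.allclose(A @ M1 @ M2 @ S, 0)                                # relation (2.20)
assert np.allclose((np.eye(n*n) - P) @ M1 @ M2 @ (np.eye(n*n) + P), 0)  # P-form
```

For genuinely non-commutative entries the same relation becomes a non-trivial condition on the commutators of the entries of M.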
Due to Propositions 2.12 and 2.14 the notion of A-Manin matrix does not change if we substitute A by a left-equivalent idempotent A ′ . This means that M is an A-Manin matrix iff it is an A ′ -Manin matrix. This implies that A-Manin matrices are associated to the quadratic algebras X A (C) and Ξ A (C) (with fixed generators).
More generally, the definition of A-Manin matrix can be written in the form Ã M^{(1)} M^{(2)} S̃ = 0, where Ã = GA and S̃ = SG̃ for arbitrary G, G̃ ∈ Aut(C^n ⊗ C^n).

(A, B)-Manin matrices
Let A ∈ End(C^n ⊗ C^n) and B ∈ End(C^m ⊗ C^m) be two idempotents and let R be an algebra. Let x_1, ..., x_m be generators of X_B(C) and ψ_1, ..., ψ_n be generators of Ξ_A(C). They satisfy the relations B(X ⊗ X) = 0 and (Ψ ⊗ Ψ)(1 − A) = 0, where X = Σ_{j=1}^m x_j e_j and Ψ = Σ_{k=1}^n ψ_k e^k.
Let M be an n × m matrix with entries M_{ij} ∈ R. Consider a 'change of variables'

    y_i = Σ_{j=1}^m M_{ij} x_j,   (2.29)
    φ_j = Σ_{k=1}^n ψ_k M_{kj},   (2.30)

and set Y = Σ_{i=1}^n y_i e_i, Φ = Σ_{j=1}^m φ_j e^j. Then one can rewrite (2.29), (2.30) in the matrix form: Y = MX and Φ = ΨM. Now let us generalise Proposition 2.14 to this case.
Proposition 2.16. The following three conditions are equivalent:
• The matrix M satisfies

    A M^{(1)} M^{(2)} (1 − B) = 0.   (2.31)

• The elements y_i satisfy the relations

    A(Y ⊗ Y) = 0.   (2.32)

• The elements φ_j satisfy the relations

    (Φ ⊗ Φ)(1 − B) = 0.   (2.33)

The proof of this proposition repeats the proof of Proposition 2.14 with B in place of A where appropriate.
Definition 2.17. An n × m matrix M over an algebra R satisfying the equation (2.31) is called an (A, B)-Manin matrix, or a Manin matrix corresponding to the pair of idempotents (A, B).
The defining relation for the (A, B)-Manin matrices can also be written in terms of the dual idempotents S and the matrices P. For instance, the relation (2.31) is equivalent to (2.26), where the matrix P placed to the left of the matrices M should be understood as 1 − 2A and the matrix P placed to the right as 1 − 2B. In a similar way one can generalise the relation (2.25).
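A rectangular toy instance (our own illustration): take A and B to be the antisymmetrizers on C^n ⊗ C^n and C^m ⊗ C^m; any n × m matrix with commuting entries then satisfies the relation (2.31).

```python
import numpy as np

def antisym(k):
    # Antisymmetrizer (1 - P)/2 on C^k ⊗ C^k, P the swap matrix.
    P = np.zeros((k * k, k * k))
    for i in range(k):
        for j in range(k):
            P[j * k + i, i * k + j] = 1.0
    return (np.eye(k * k) - P) / 2

n, m = 2, 3
A, B = antisym(n), antisym(m)

rng = np.random.default_rng(4)
M = rng.standard_normal((n, m))
M1 = np.kron(M, np.eye(n))   # second leg already maps to C^n
M2 = np.kron(np.eye(m), M)   # acts first: C^m ⊗ C^m -> C^m ⊗ C^n

# M^(1) M^(2) is the Kronecker square, an operator C^{m^2} -> C^{n^2}.
assert np.allclose(M1 @ M2, np.kron(M, M))
assert np.allclose(A @ M1 @ M2 @ (np.eye(m * m) - B), 0)   # relation (2.31)
```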
Proposition 2.18. Let 𝔄 be an algebra and M be an (A, B)-Manin matrix with entries M_{ij} ∈ 𝔄. Let x_1, ..., x_m ∈ 𝔄 be elements commuting with all the entries M_{ij} and satisfying Σ_{i,j=1}^m B_{kl,ij} x_i x_j = 0, k, l = 1, ..., m (in particular, this is valid for the algebra 𝔄 = X_B(R), the generators x_i ∈ X_B(C) and an (A, B)-Manin matrix M over R). Then the elements y_i = Σ_{j=1}^m M_{ij} x_j satisfy Σ_{i,j=1}^n A_{kl,ij} y_i y_j = 0, k, l = 1, ..., n.
Proof. We use the matrix notations introduced above. From Y = MX and (2.31) we obtain

    A(Y ⊗ Y) = A M^{(1)} M^{(2)} (X ⊗ X) = A M^{(1)} M^{(2)} B (X ⊗ X) = 0,

since A M^{(1)} M^{(2)} = A M^{(1)} M^{(2)} B and B(X ⊗ X) = 0.

By means of the identification of the functors Ξ_A and X_{1−A^⊤} we can write Proposition 2.18 in the following form.
Proposition 2.19. Let 𝔄 be an algebra and M be an (A, B)-Manin matrix with entries M_{ij} ∈ 𝔄. Let ψ_1, ..., ψ_n ∈ 𝔄 be elements commuting with all the entries M_{ij} and satisfying Σ_{i,j=1}^n ψ_i ψ_j A_{ij,kl} = ψ_k ψ_l, k, l = 1, ..., n (in particular, this is valid for the algebra 𝔄 = Ξ_A(R), the generators ψ_i ∈ Ξ_A(C) and an (A, B)-Manin matrix M over R). Then the elements φ_j = Σ_{i=1}^n ψ_i M_{ij} satisfy Σ_{i,j=1}^m φ_i φ_j B_{ij,kl} = φ_k φ_l, k, l = 1, ..., m.

For arbitrary operators A′ ∈ End(C^n ⊗ C^n) and S′ ∈ End(C^m ⊗ C^m) one can consider the relation A′ M^{(1)} M^{(2)} S′ = 0. As in the proof of Proposition 2.1 one can find invertible operators G_1 and G_2 such that G_1 A′ and S′ G_2 are idempotents. Hence this relation is equivalent to (2.31) for some idempotents A and B, which have the forms A = G_1 A′ and B = 1 − S′ G_2. In particular, if A′ is an idempotent it is left-equivalent to the idempotent A; if S′ is an idempotent it is right-equivalent to 1 − B, which means that 1 − S′ is left-equivalent to B (see Proposition 2.7). Thus we obtain the following statement.
Proposition 2.20. Let M be an n × m matrix over R. Let A, A′ ∈ End(C^n ⊗ C^n) and B, B′ ∈ End(C^m ⊗ C^m) be idempotents such that X_A(C) = X_{A′}(C) and X_B(C) = X_{B′}(C). Then M is an (A, B)-Manin matrix iff M is an (A′, B′)-Manin matrix.

We see from this proposition that the property of the matrix M to be an (A, B)-Manin matrix effectively depends on the algebras X_A(C) = X_{A′}(C) and X_B(C) = X_{B′}(C) (with fixed generators). So the notion of (A, B)-Manin matrix can be associated with the pair of X-quadratic algebras X_A(C) and X_B(C). Alternatively it can be associated with the pair of Ξ-quadratic algebras Ξ_A(C) and Ξ_B(C). We will see this more explicitly in Subsection 2.5.
Consider the question of permutation of rows and columns of a Manin matrix. Let S_n be the n-th symmetric group. For a permutation σ ∈ S_n let us permute the rows of an n × m matrix M in the following way: we put the i-th row in the place of the σ(i)-th row. We denote the obtained matrix by ^σM. By permuting the columns of M with a permutation τ ∈ S_m in the same way we obtain a matrix denoted by _τM. More explicitly, we have (^σM)_{σ(i)j} = M_{ij} and (_τM)_{iτ(j)} = M_{ij}. We write the permutation index on the left since ^τ(^σM) = ^{τσ}M and _τ(_σM) = _{τσ}M for any τ, σ from S_n and S_m respectively. Note that the space C^n has a structure of an S_n-module: a permutation σ ∈ S_n acts by the formula σe_i = e_{σ(i)}, and we denote the corresponding operator C^n → C^n by the same letter σ. In this notation we have ^σM = σM and _τM = Mτ^{-1}.
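The matrix form of these permutation actions can be checked directly (a small numpy sketch; the permutations sigma and tau below are our sample choices):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 3, 4
M = rng.standard_normal((n, m))

sigma = [1, 2, 0]                 # i -> sigma(i), acting on rows
S = np.zeros((n, n))
for i, si in enumerate(sigma):
    S[si, i] = 1.0                # sigma e_i = e_{sigma(i)}

rowM = S @ M                      # row i of M ends up as row sigma(i)
for i in range(n):
    assert np.allclose(rowM[sigma[i]], M[i])

tau = [2, 0, 3, 1]                # j -> tau(j), acting on columns
T = np.zeros((m, m))
for j, tj in enumerate(tau):
    T[tj, j] = 1.0

colM = M @ np.linalg.inv(T)       # M tau^{-1}: column j ends up as column tau(j)
for j in range(m):
    assert np.allclose(colM[:, tau[j]], M[:, j])
```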
Proposition 2.21. Let M be an n × m matrix over R. Let σ ∈ S_n and τ ∈ S_m or, more generally, σ ∈ GL(n, C), τ ∈ GL(m, C). Then the following statements are equivalent:
• M is an (A, B)-Manin matrix;
• σMτ^{-1} is an (Ã, B̃)-Manin matrix, where Ã = (σ ⊗ σ)A(σ^{-1} ⊗ σ^{-1}) and B̃ = (τ ⊗ τ)B(τ^{-1} ⊗ τ^{-1}).

Proof. Multiplication of the condition (2.31) by σ ⊗ σ from the left and by τ^{-1} ⊗ τ^{-1} from the right gives the relation Ã (σMτ^{-1})^{(1)} (σMτ^{-1})^{(2)} (1 − B̃) = 0.

Proposition 2.21 can be interpreted in terms of linear changes of generators of the corresponding quadratic algebras. Indeed, consider other generators ψ̃_i = Σ_{j=1}^n α_{ji} ψ_j of the algebra Ξ_A(C), where α_{ji} are entries of an invertible matrix α ∈ GL(n, C). Let σ ∈ GL(n, C) be the inverse matrix: σ = α^{-1}. The quadratic commutation relations for the generators ψ̃_i are Σ_{i,j} ψ̃_i ψ̃_j Ã_{ij,kl} = ψ̃_k ψ̃_l, where Ã_{ij,kl} are the entries of the idempotent matrix Ã = (σ ⊗ σ)A(σ^{-1} ⊗ σ^{-1}).
Analogously, the matrix Mτ^{-1} corresponds to the new generators x̃_i = Σ_{j=1}^m τ_{ij} x_j of the algebra X_B(C), where τ_{ij} are entries of τ ∈ GL(m, C). The transition to these generators corresponds to a change of basis in C^m: the new basis elements are ẽ_i = Σ_{j=1}^m (τ^{-1})_{ji} e_j, so that X = Σ_j x_j e_j = Σ_i x̃_i ẽ_i. Thus the notion of (A, B)-Manin matrix is not associated with a pair of isomorphism classes of quadratic algebras. However, an isomorphism of quadratic algebras gives a transformation of the Manin matrix in the same way as a change of bases of vector spaces V and W transforms the matrix of a linear operator V → W. The notion associated with isomorphism classes of quadratic algebras is the notion of Manin operator, which we will introduce in Subsection 2.7.
Finally, consider the cases A = 0 and B = 1: in both cases the defining relation (2.31) reads 0 = 0. This means that any n × m matrix is a (0, B)-Manin matrix as well as an (A, 1)-Manin matrix, where 0 ∈ End(C^n ⊗ C^n) and 1 ∈ End(C^m ⊗ C^m). Further, the definition (2.31) for B = 0 ∈ End(C^m ⊗ C^m) has the form A M^{(1)} M^{(2)} = 0. By multiplying it by (1 − B) from the right with an arbitrary idempotent B ∈ End(C^m ⊗ C^m) we obtain (2.31). Thus any (A, 0)-Manin matrix is in particular an (A, B)-Manin matrix. Analogously, any (1, B)-Manin matrix is also an (A, B)-Manin matrix, where 1 ∈ End(C^n ⊗ C^n).

Comma categories and Manin matrices
Consider first the set Hom(X_A(C), X_B(R)) with A and B as above. It consists of the algebra homomorphisms f : X_A(C) → X_B(R) preserving the grading. Let x^A_1, ..., x^A_n and x^B_1, ..., x^B_m be the generators of X_A(C) and X_B(C) respectively; they satisfy Σ_{i,j=1}^n A_{kl,ij} x^A_i x^A_j = 0 and Σ_{i,j=1}^m B_{kl,ij} x^B_i x^B_j = 0. To give a homomorphism f ∈ Hom(X_A(C), X_B(R)) it is enough to give its values on the generators x^A_i; since f(x^A_i) has degree 1 in X_B(R), it has the form

    f(x^A_i) = Σ_{j=1}^m M_{ij} x^B_j,   (2.34)

where M_{ij} ∈ R. This means that each f is given by a matrix M = (M_{ij}) over R. Proposition 2.16 implies that the formula (2.34) defines a homomorphism f iff the matrix M = (M_{ij}) is an (A, B)-Manin matrix. Denote this homomorphism f : X_A(C) → X_B(R) by f_M. Thus we obtain a bijection f_M ↔ M between Hom(X_A(C), X_B(R)) and the set of all (A, B)-Manin matrices over the algebra R. Note that this bijection depends on the choice of generators of the algebras X_A(C) and X_B(C).

More generally, consider the set Hom(X_A(S), X_B(R)) for two algebras S and R. It can be identified with a subset of Hom(S, R) × Hom(X_A(C), X_B(R)), since each graded homomorphism f : X_A(S) → X_B(R) is determined by its values on the zero-degree elements s ∈ S and on the generators x^A_i ∈ X_A(C). Let α : S → R be an algebra homomorphism and f_M ∈ Hom(X_A(C), X_B(R)) be a graded algebra homomorphism defined by an (A, B)-Manin matrix M = (M_{ij}); then the formulae

    f(s) = α(s),   (2.35)
    f(x^A_i) = Σ_{j=1}^m M_{ij} x^B_j   (2.36)

(for all s ∈ S, i = 1, ..., n and j = 1, ..., m) define a graded algebra homomorphism f : X_A(S) → X_B(R). We write f = (α, f_M) in this case.

Analogously, we obtain that the set Hom(Ξ_B(C), Ξ_A(R)) consists of the algebra homomorphisms g_M : Ξ_B(C) → Ξ_A(R) given by the (A, B)-Manin matrices M over R via g_M(ψ^B_j) = Σ_{i=1}^n ψ^A_i M_{ij}.

Definition 2.22. [MacLane] Let C and D be categories, c be an object of C and G : D → C be a functor. The comma category (c ↓ G) consists of the pairs (d, f), where d is an object of D and f ∈ Hom_C(c, Gd) is a morphism in C; a morphism (d, f) → (d′, f′) of this category is a morphism h ∈ Hom_D(d, d′) such that G(h) ∘ f = f′.

Let A be the category of associative unital algebras over C and G be the category of associative unital N_0-graded algebras over C. By setting C = G, D = A, c = X_A(C) and G = X_B in Definition 2.22 we obtain the comma category X_A(C) ↓ X_B.
It consists of the pairs (R, f_M), where R is an algebra and f_M ∈ Hom(X_A(C), X_B(R)) is the homomorphism corresponding to some (A, B)-Manin matrix M over R. Thus we can interpret the (A, B)-Manin matrices as objects of the comma category (X_A(C) ↓ X_B).
The morphisms of this comma category connect the Manin matrices over different algebras R. In the same way the (A, B)-Manin matrices can be interpreted as objects of the comma category (Ξ_B(C) ↓ Ξ_A), so that the latter is equivalent to (X_A(C) ↓ X_B).
One application of comma categories in category theory is the notion of a universal morphism (universal arrow).
Definition 2.23. [MacLane] Let C and D be categories, c be an object of C and G: D → C be a functor. The universal morphism from c to G is the initial object of the comma category (c ↓ G), that is a pair (r, u) of an object r ∈ D and a morphism u ∈ Hom_C(c, Gr) such that for any object d ∈ D and any morphism f ∈ Hom_C(c, Gd) there is a unique morphism h ∈ Hom_D(r, d) satisfying (Gh) ∘ u = f, i.e. making the corresponding triangle commutative.
As an initial object the universal morphism is unique up to an isomorphism. Let us describe the universal morphism (U_{A,B}, u_{A,B}) from the object X_A(C) to the functor X_B. It consists of the algebra U_{A,B} generated by the elements M_ij, i = 1, …, n, j = 1, …, m, with the commutation relations

Σ_{k,l,a,b} A_{ij,kl} M_ka M_lb (δ_ar δ_bs − B_{ab,rs}) = 0,

and of the homomorphism u_{A,B} = f_M: X_A(C) → X_B(U_{A,B}) defined by the matrix M = (M_ij). Via the equivalence of the comma categories (X_A(C) ↓ X_B) and (Ξ_B(C) ↓ Ξ_A) we obtain the universal morphism from the object Ξ_B(C) to the functor Ξ_A; this is the pair consisting of the same algebra U_{A,B} and the corresponding homomorphism Ξ_B(C) → Ξ_A(U_{A,B}). Let us call the matrix M the universal (A, B)-Manin matrix. The algebra U_{A,B} is a generalisation of the right quantum algebra [GLZ], so we call it the right quantum algebra for the pair of idempotents (A, B).
Consider the more general comma category (X_A(S) ↓ X_B). It consists of pairs (R, f) of an algebra R and a homomorphism f ∈ Hom(X_A(S), X_B(R)). Such a homomorphism f can be identified with a pair of homomorphisms (α, f_M) in the sense of the formulae (2.35), (2.36); the corresponding unique morphism h is given by the formulae h(s) = α(s) and h(M_ij) = M_ij and makes the corresponding diagram commutative.

Recall also (see [MacLane]) that a functor F: C → D is called left adjoint to the functor G: D → C (while the functor G: D → C is called right adjoint to F: C → D) iff there is an isomorphism Hom_C(c, Gd) ≅ Hom_D(Fc, d) natural in c and d. To construct a left adjoint functor to a functor G: D → C it is enough to construct a universal morphism (r_c, u_c) from each c ∈ C to G. Then the left adjoint functor on objects is defined as Fc = r_c. For a morphism α: c → c′ the morphism Fα: Fc → Fc′ is the unique morphism h: Fc → Fc′ making the corresponding diagram commutative (as a consequence, we obtain a natural transformation u: id_C → GF with components u_c: c → GFc).
Let Q be the full subcategory of G consisting of the graded algebras of the form X A (S). This is the category of quadratic algebras. We can consider the functors X A and Ξ A as functors from A to Q. This means that we can substitute G by Q in the previous considerations.
From the diagram (2.39) we see that the functor G = X B : A → Q has a left adjoint functor F = F B : Q → A. It is defined on objects as F B X A (S) = S ⊗ U A,B . In particular, on the quadratic algebras X A (C) ∈ Q the functor F B gives the right quantum algebra U A,B .
Let us calculate the functor F_B on a morphism (α, f_N): X_A(S) → X_{A′}(S′), where α: S → S′ is a homomorphism and f_N is defined by an (A, A′)-Manin matrix N = (N_ij). We need to construct the corresponding commutative diagram. The left adjoint to the functor Ξ_A: A → Q is the functor F^A: Q → A defined on objects in the analogous way.

Proposition 2.24. Let A ∈ End(C^n ⊗ C^n) and A′ ∈ End(C^m ⊗ C^m) be idempotents. Then the following three statements are equivalent.
• n = m and there exists σ ∈ GL(n, C) such that A′ is left-equivalent to the idempotent Ã = (σ^{−1} ⊗ σ^{−1})A(σ ⊗ σ).

Proof. Let f: X_A(C) → X_{A′}(C) be an isomorphism and f^{−1}: X_{A′}(C) → X_A(C) be its inverse. They correspond to complex (A, A′)- and (A′, A)-Manin matrices σ ∈ Hom(C^m, C^n) and σ^{−1} ∈ Hom(C^n, C^m) respectively. Due to the invertibility of these matrices we must have n = m. The relations A(σ ⊗ σ)(1 − A′) = 0 and A′(σ^{−1} ⊗ σ^{−1})(1 − A) = 0 imply Ã(1 − A′) = 0 and A′(1 − Ã) = 0, where Ã = (σ^{−1} ⊗ σ^{−1})A(σ ⊗ σ). By virtue of Lemma 2.11 the idempotents Ã and A′ are left-equivalent. One can similarly prove that the second statement implies the third one.
Remark 2.25. In the same way one can prove that the quadratic algebras X_A(S) and X_{A′}(R) are isomorphic as graded algebras iff n = m, S ≅ R and there exists an invertible matrix σ ∈ GL(n, C) relating the idempotents A and A′ as in Proposition 2.24.

Products of Manin matrices and co-algebra structure
The main property of Manin matrices is that their product is also a Manin matrix, provided the entries of the first factor commute with the entries of the second one.
Proposition 2.26. Let R be an algebra. Let A ∈ End(C^n ⊗ C^n), B ∈ End(C^m ⊗ C^m) and C ∈ End(C^k ⊗ C^k) be idempotents. Let M and N be n × m and m × k matrices over R. Suppose that the entries of M commute with the entries of N: [M_ij, N_kl] = 0 for all indices. If M is an (A, B)-Manin matrix and N is a (B, C)-Manin matrix, then the product K = MN is an (A, C)-Manin matrix.
Proof. Let K = MN. Since the entries of M commute with the entries of N, we have K^{(1)}K^{(2)} = M^{(1)}M^{(2)}N^{(1)}N^{(2)}, where M^{(1)} = M ⊗ 1, M^{(2)} = 1 ⊗ M, etc. The relation A M^{(1)}M^{(2)}(1 − B) = 0 gives A K^{(1)}K^{(2)}(1 − C) = A M^{(1)}M^{(2)} B N^{(1)}N^{(2)}(1 − C). The right hand side vanishes since N is a (B, C)-Manin matrix.
Remark 2.27. Proposition 2.26 can be proved by using the functors X_A or Ξ_A. In the case of X_A one needs to consider the elements y_j = Σ_l N_jl x_l and z_i = Σ_j M_ij y_j of the algebra X_C(R), where x_l = x_l^C. By applying Proposition 2.18 twice we obtain the relations Σ_{k,l} A_{ij,kl} z_k z_l = 0. Since z_i = Σ_l K_il x_l, the matrix K is an (A, C)-Manin matrix by virtue of Proposition 2.16. Analogously, one can apply Propositions 2.19 and 2.16 to the corresponding elements of the algebra Ξ_A(R).

In some particular case the property claimed in Proposition 2.26 was deduced in Subsection 2.5 (see the formula (2.41) and the text after it). The role of R is played by the algebra T. The homomorphism β: R → T gives an R-algebra structure on T and maps entry-wise the (A, B)-Manin matrix M = (M_ij) over R to an (A, B)-Manin matrix β(M) = (β(M_ij)) over T. The entries of this matrix commute with the entries of N since [β(r), N_ij] = 0 for any r ∈ R.
The typical case of Proposition 2.26 is R = S ⊗ S ′ , M ij ∈ S, N jl ∈ S ′ . In this case it can be formulated in terms of comma categories as follows. Let (S, f M ) and (S ′ , f N ) be objects of the categories X A (C) ↓ X B and X B (C) ↓ X C corresponding to (A, B)-and (B, C)-Manin matrices M and N. Then (S ⊗ S ′ , f K ) is an object of X A (C) ↓ X C , where K = MN. In particular, for A = B = C we obtain a structure of tensor category on the comma category X A (C) ↓ X A with the unit object (C, id X A (C) ) corresponding to the unit matrix.
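This multiplicativity can be illustrated numerically in a faithful matrix representation. The sketch below is a hypothetical example, not from the source: it builds a 2 × 2 Manin matrix with entries in Mat_2(C) (using nilpotent matrices p, r with [p, r] ≠ 0), realises two mutually commuting copies of it inside Mat_2(C) ⊗ Mat_2(C), and checks that their product again satisfies the Manin relations [M_i1, M_j1] = [M_i2, M_j2] = 0 and [M_11, M_22] = [M_21, M_12].

```python
import numpy as np

# Two commuting copies of a noncommutative algebra, realised inside
# Mat_2(C) ⊗ Mat_2(C): x ⊗ 1 always commutes with 1 ⊗ y.
p = np.array([[0., 1.], [0., 0.]])   # [p, r] != 0
r = np.array([[0., 0.], [1., 0.]])
I2 = np.eye(2)

def left(x):  return np.kron(x, I2)   # x ⊗ 1
def right(y): return np.kron(I2, y)   # 1 ⊗ y

def comm(x, y): return x @ y - y @ x

# A 2x2 Manin matrix with genuinely noncommuting entries:
# a = c = p, b = r, d = r + 1 satisfy [a,c]=0, [b,d]=0, [a,d]=[c,b].
def manin_pair(embed):
    a, b, c, d = embed(p), embed(r), embed(p), embed(r + I2)
    return [[a, b], [c, d]]

def is_manin(M):
    (a, b), (c, d) = M
    return (np.allclose(comm(a, c), 0) and np.allclose(comm(b, d), 0)
            and np.allclose(comm(a, d), comm(c, b)))

M = manin_pair(left)    # entries in the first tensor factor
N = manin_pair(right)   # entries in the second factor, commuting with M's
assert is_manin(M) and is_manin(N)

# Matrix product K = MN; its entries live in the tensor product algebra.
K = [[sum(M[i][j] @ N[j][l] for j in range(2)) for l in range(2)]
     for i in range(2)]
assert is_manin(K)   # the product is again a Manin matrix
print("product of Manin matrices is Manin: OK")
```

The commuting-entries hypothesis is exactly what the tensor-product embedding provides, mirroring the typical case R = S ⊗ S′ described above.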
Proposition 2.26 can be formulated in terms of right quantum algebras. This formulation was described in the works [Man87,Man88,Man91].
Proposition 2.28. Let M, N and K be the universal (A, B)-, (B, C)- and (A, C)-Manin matrices respectively. They have entries M_ij ∈ U_{A,B}, N_ij ∈ U_{B,C} and K_ij ∈ U_{A,C}. Then the formula Δ_{A,B,C}(K_il) = Σ_j M_ij ⊗ N_jl defines an algebra homomorphism Δ_{A,B,C}: U_{A,C} → U_{A,B} ⊗ U_{B,C}.

Consider the algebra U_A = U_{A,A}. Proposition 2.28 implies that the map Δ_A = Δ_{A,A,A} is a homomorphism U_A → U_A ⊗ U_A. Moreover, it is easy to check that Δ_A is a comultiplication, that is (id ⊗ Δ_A)Δ_A = (Δ_A ⊗ id)Δ_A. More generally, the maps Δ_{A,B,C} fit into a commutative diagram which reflects the associativity of the matrix multiplication.
Proposition 2.29. The algebra U_A has a bialgebra structure defined by the following comultiplication Δ_A: U_A → U_A ⊗ U_A and counit ε_A: U_A → C:

Δ_A(M_ij) = Σ_k M_ik ⊗ M_kj,   ε_A(M_ij) = δ_ij.   (2.49)

The formula (2.34) for the universal A-Manin matrix M gives a coaction of the bialgebra U_A on the algebra X_A(C). This is the homomorphism

δ = f_M: X_A(C) → X_A(U_A),   δ(x_i) = Σ_j M_ij x_j.   (2.50)

It satisfies the coaction axiom (id ⊗ δ)δ = (Δ_A ⊗ id)δ. In terms of non-commutative geometry the algebra X_A(C) is interpreted as an algebra of functions on a non-commutative space and the coaction δ as an action on this space. Thus the bialgebra U_A (or its dual) plays the role of the algebra of endomorphisms of the non-commutative space corresponding to the algebra X_A(C).

More generally, for arbitrary
Remark 2.30. The bialgebra U_A is not a Hopf algebra: an antipode S for the bialgebra structure (2.49) should have the form S(M) = M^{−1}, but the matrix M is not invertible over U_A. One can extend the algebra U_A by adding new generators, the entries of the formal inverse matrix M̃ = M^{−1}. Then the matrix S(M)^⊤ should be inverse to M^⊤; however, its invertibility is not guaranteed in this extended algebra. The universal construction of a Hopf algebra extending the bialgebra U_A is the Hopf envelope [Man91]. This is the algebra H_A generated by the entries of an infinite series of matrices M_k, k ∈ N_0, with relations making each M_{k+1} inverse to an appropriate transpose of M_k; its Hopf structure is given by explicit formulae on these generators. The algebra U_A is mapped to H_A by the formula M ↦ M_0.

The complex square and rectangular matrices are often interpreted as homomorphisms in a category with objects n ∈ N_0 and homomorphisms Hom(m, n) = Mat_{n×m}(C) (it is equivalent to the category of finite-dimensional vector spaces). One can interpret the Manin matrices in a similar way. Let A′ ⊂ A be a small full tensor subcategory; this means that A′ is a full subcategory of A such that C ∈ A′, R ⊗ S ∈ A′ for all R, S ∈ A′ and Ob(A′) is a small set (in practice one needs only a small set of algebras, so we can take the full tensor subcategory generated by this set as A′). Define the following category M_{A′}: its objects are idempotents, and a morphism from an idempotent B to an idempotent A is an (A, B)-Manin matrix over an algebra from A′, with composition given by the matrix product as in Proposition 2.26. If two idempotents are left-equivalent, then they are isomorphic as objects of M_{A′}, so it is enough to take the quadratic algebras X_A(C) as objects of M_{A′} instead of the idempotents. Moreover, one can prove that X_A(C) and X_{A′}(C) are isomorphic as objects of M_{A′} iff they are isomorphic as objects of Q.
Remark 2.31. Due to Proposition 2.12 the formula X_A(C) ↦ Ξ_A(C) correctly defines an operation on the objects of M_{A′}. It was denoted in the works of Manin [Man87, Man88, Man91] by A ↦ A^!. The first equality (2.45) gives a contravariant fully faithful functor.

Remark 2.32. One can extend the category M_{A′} to a larger category by taking all the algebras X_A(S) ∈ Q as objects. The sets of homomorphisms in this extended category can be defined analogously, with some natural composition rule. In this setting X_A(S) and X_{A′}(S′) are isomorphic as objects of the extended category iff they are isomorphic as objects of Q (see Remark 2.25).

Infinite-dimensional case: Manin operators
Let V and W be vector spaces (possibly infinite-dimensional). Let A ∈ End(V ⊗ V) and B ∈ End(W ⊗ W) be idempotents and R be an algebra. Instead of a matrix over R we need to take an element M of the space R ⊗ Hom(W, V) or of some completion of this space. We consider M as an operator with entries in the algebra R: this means that λMw ∈ R, where λ ∈ V* and w ∈ W (in the case of a completion the covector λ runs over a subset of the dual space V* such that λMw is well defined).
Let the space V have a basis (v_i), so that any vector v ∈ V has the form v = Σ_i α_i v_i for unique coefficients α_i ∈ C, the sum being finite. Then the action of the operator A can be written as A(v_k ⊗ v_l) = Σ_{i,j} A_{ij,kl} v_i ⊗ v_j for unique coefficients A_{ij,kl} ∈ C. Since the sum over i, j should be finite, there are only finitely many non-zero coefficients A_{ij,kl} for any fixed k, l; we call this the left finiteness condition for the matrix (A_{ij,kl}). This condition allows us to define the quadratic algebra Ξ_A(C) generated by the ψ_i with the commutation relations ψ_k ψ_l = Σ_{i,j} A_{ij,kl} ψ_i ψ_j. Formally this is the quotient Ξ_A(C) = T/I, where T is the algebra of all the non-commutative polynomials in the formal variables ψ_i while I is the ideal of T generated by the elements ψ_k ψ_l − Σ_{i,j} A_{ij,kl} ψ_i ψ_j. Another choice of the basis (v_i) leads to an isomorphic quadratic algebra, so the algebra essentially depends on the operator A only.
To interpret an (A, B)-Manin operator M as an object of a comma category as in Subsection 2.5 we need to define also an algebra Ξ_A(R). Let V and W have bases (v_i) and (w_j). In these bases an (A, B)-Manin operator M has entries M_ij ∈ R defined by the formula Mw_j = Σ_i v_i M_ij. If M ∈ R ⊗ Hom(W, V), then there are only finitely many non-zero entries M_ij for any fixed j (left finiteness condition for the matrix (M_ij)) and hence the sum Σ_i v_i M_ij is finite. Then the set of (A, B)-Manin operators M ∈ R ⊗ Hom(W, V) bijectively corresponds to the set Hom(Ξ_B(C), Ξ_A(R)).
To consider the case of an arbitrary infinite matrix (M_ij) without any finiteness condition we define a completion of the space R ⊗ Hom(W, V) as the set of all the infinite formal sums Σ_{i,j} M_ij E_ij, where M_ij ∈ R and the E_ij ∈ Hom(W, V) are the matrix units for the bases (w_j) and (v_i). Note that the completion of R ⊗ V (and hence of R ⊗ Hom(W, V) = Hom(W, R ⊗ V)) does not depend on the choice of the basis (w_i) but it does depend on the choice of the basis (v_i). Namely, a basis (v_i) defines the following topology on the R-module R ⊗ V: the neighbourhoods of 0 are the R-submodules generated by all the v_i except finitely many of them. The completed module is the completion of R ⊗ V with respect to this topology. Two bases (v_i) and (ṽ_k) with transition coefficients ṽ_k = Σ_i α_ki v_i and v_k = Σ_j β_kj ṽ_j define the same topology on R ⊗ V iff for any k there are only finitely many non-zero α_ki and β_kj.
Suppose A satisfies the right finiteness condition: there are only finitely many non-zero A_{ij,kl} for fixed i, j. This condition means exactly that the operator A: V ⊗ V → V ⊗ V is continuous with respect to the topology corresponding to the basis (v_i ⊗ v_j). Define the completed algebra Ξ_A(R) as the quotient of the graded algebra whose k-th grading component consists of the infinite formal sums Σ_{i_1,…,i_k} r_{i_1…i_k} ψ_{i_1} ⋯ ψ_{i_k}, r_{i_1…i_k} ∈ R, by the ideal of the quadratic relations (due to the right finiteness condition the sums over k and l in the relations are correctly defined). The proof of Lemma 2.2 holds for the completed algebra Ξ_A(R), so that the system of equations Σ_{i,j} ψ_i ψ_j T_{ij,ab} = 0 for T_{ij,ab} ∈ R is equivalent to the system Σ_{k,l} A_{ij,kl} T_{kl,ab} = 0. Hence we have a bijection between the set of (A, B)-Manin operators M in the completion of R ⊗ Hom(W, V) and the set of homomorphisms to the completed algebra. Note that we do not need the left finiteness condition to define the completed algebra Ξ_A(R). Hence the (A, B)-Manin operators M in the completion of R ⊗ Hom(W, V) for arbitrary idempotents A ∈ End(V ⊗ V) and B ∈ End(W ⊗ W) can be identified with the elements of the set Hom(Ξ_B(C), Ξ_A(R)). Explicitly, the relations Σ_{k,l,a,b} A_{ij,kl} M_ka M_lb (δ_ar δ_bs − B_{ab,rs}) = 0 are correctly defined in this case since all the sums in these relations are finite.
Thus the (A, B)-Manin operators in the non-completed or completed case are objects of the comma category (Ξ_B(C) ↓ Ξ_A) for the corresponding version of the functor Ξ_A. One can also generalise the graded algebras X_A(C) to the case of infinite-dimensional matrices (A_{ij,kl}). To do it one needs to require the right finiteness condition, which means that this matrix defines a continuous operator A: V ⊗ V → V ⊗ V. In particular, the completions allow us to consider the universal (A, B)-Manin operator. It has the form M = Σ_{i,j} M_ij E_ij, where the M_ij are generators of the algebra U_{A,B} with the commutation relations Σ_{k,l,a,b} A_{ij,kl} M_ka M_lb (δ_ar δ_bs − B_{ab,rs}) = 0. These relations are correctly defined iff the matrices (A_{ij,kl}) and (B_{ij,kl}) satisfy the right and left finiteness conditions respectively. Proposition 2.26 can be generalised if we additionally suppose that the sums Σ_j M_ij N_jl are well defined: if neither M nor N satisfies the needed finiteness condition, then we need to complete R in order to include all these sums. In particular, the map Δ_{A,B,C} is a homomorphism from U_{A,C} to a completion of U_{A,B} ⊗ U_{B,C} which contains the sums Σ_j M_ij ⊗ N_jl (we need to suppose that A, B and C satisfy the right, both and left finiteness conditions respectively). In the case A = B = C the map Δ_A is a 'completed' comultiplication for U_A = U_{A,A}. In terms of representations this means that the tensor product of U_A-modules is not always defined: we need to impose some finiteness condition on the modules to guarantee the existence of their tensor product.
Finally, we consider some examples of completions used below. The simplest example of an infinite-dimensional space is the space of polynomials V = C[u]. It has the basis consisting of the monomials u^k, k ∈ N_0.

It consists of all the formal infinite sums Σ_k r_k u^k with coefficients r_k ∈ R; such a completion is an R-module, but it is not an algebra in the usual sense.

Particular cases
Here we consider the main examples corresponding to the polynomial and Grassmann algebras and their deformations. We also consider a generalisation of a deformed polynomial algebra with three variables. More examples will appear in Sections 4, 6, 7 and Appendix B.

Manin matrices for the polynomial algebras
Let P_n ∈ End(C^n ⊗ C^n) be the permutation operator acting as P_n(v ⊗ w) = w ⊗ v. Substituting the basis elements v = e_k and w = e_l we obtain the entries of this operator: (P_n)_{ij,kl} = δ_il δ_jk. Since P_n² = 1 the operators A_n = (1 − P_n)/2 and S_n = (1 + P_n)/2 are idempotents: A_n² = A_n, S_n² = S_n. These are the anti-symmetrizer and the symmetrizer for two tensor factors respectively. Note that the permutation operator satisfies the braid relation (P_n ⊗ 1)(1 ⊗ P_n)(P_n ⊗ 1) = (1 ⊗ P_n)(P_n ⊗ 1)(1 ⊗ P_n) (this is an equality of operators acting on the space C^n ⊗ C^n ⊗ C^n).
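These identities are easy to sanity-check numerically. The following sketch (assuming the standard normalisations A_n = (1 − P_n)/2 and S_n = (1 + P_n)/2 for the anti-symmetrizer and symmetrizer) verifies P_n² = 1, the idempotency, and the braid relation for n = 3:

```python
import numpy as np

n = 3
Id = np.eye(n * n)
# permutation operator: P(e_k ⊗ e_l) = e_l ⊗ e_k, entries (P)_{ij,kl} = δ_il δ_jk
P = np.zeros((n * n, n * n))
for k in range(n):
    for l in range(n):
        P[l * n + k, k * n + l] = 1.0

A = (Id - P) / 2   # anti-symmetrizer (assumed normalisation)
S = (Id + P) / 2   # symmetrizer

assert np.allclose(P @ P, Id)   # P_n^2 = 1
assert np.allclose(A @ A, A)    # A_n is an idempotent
assert np.allclose(S @ S, S)    # S_n is an idempotent
assert np.allclose(A + S, Id)   # they decompose the identity

# braid relation on C^n ⊗ C^n ⊗ C^n
In = np.eye(n)
P12, P23 = np.kron(P, In), np.kron(In, P)
assert np.allclose(P12 @ P23 @ P12, P23 @ P12 @ P23)
print("permutation operator checks: OK")
```

The index convention encodes e_i ⊗ e_j as the basis vector with index i·n + j, so P maps the slot k·n + l to l·n + k.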
The commutation relations (2.4) and (2.16) for A = A n have the form x i x j − x j x i = 0 and ψ i ψ j + ψ j ψ i = 0. Hence the algebra X An (C) is the polynomial algebra C[x 1 , . . . , x n ] and Ξ An (C) is the Grassmann algebra with Grassmann variables ψ 1 , . . . , ψ n .
The (A_n, A_m)-Manin matrices are n × m matrices M over an algebra R satisfying the relation A_n M^{(1)} M^{(2)} (1 − A_m) = 0, where M^{(1)} = M ⊗ 1 and M^{(2)} = 1 ⊗ M. (3.4) These matrices were called Manin matrices in [CF]. In this subsection we call them just Manin matrices if there is no confusion with the general notion of Manin matrix.
To write the matrix relation (3.4) in terms of entries one can substitute P_{ij,kl} = δ_il δ_jk into the formula (2.26) written in entries. We obtain the following system of commutation relations:

[M_ik, M_jk] = 0,   (3.5)
[M_ik, M_jl] = [M_jk, M_il],   (3.6)

for i, j = 1, …, n and k, l = 1, …, m (the relation (3.5) is in fact the relation (3.6) for k = l). The commutation relations (3.5) mean that any two entries of the same column commute. The formula (3.6) is the so-called cross-relation for the 2 × 2 submatrix with rows i, j and columns k, l.
For example, consider a 2 × 2 matrix

M = ( a  b
      c  d ).   (3.7)

It is a Manin matrix iff its entries a, b, c, d ∈ R satisfy

[a, c] = 0,   [b, d] = 0,   [a, d] = [c, b].   (3.8)

In the case n ≥ 2 an n × m matrix is a Manin matrix iff any 2 × 2 submatrix of this matrix is a Manin matrix. In particular, any matrix over a commutative algebra is a Manin matrix. The notion of (A_n, A_m)-Manin matrix is such a non-commutative generalisation of a matrix that most of the properties of the usual matrices are inherited (with some generalisation of the notion of determinant). These properties are described in detail in the works [CF] and [CFR].
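A concrete noncommutative instance of these conditions can be checked in a matrix representation. The example below is hypothetical (not from the source): it takes a = c = p, b = r, d = r + 1 for nilpotent 2 × 2 matrices p, r with [p, r] ≠ 0, so the entries satisfy [a, c] = [b, d] = 0 and [a, d] = [c, b] without all of them commuting:

```python
import numpy as np

p = np.array([[0., 1.], [0., 0.]])
r = np.array([[0., 0.], [1., 0.]])
I = np.eye(2)

def comm(x, y): return x @ y - y @ x

# M = [[a, b], [c, d]] with a = c = p, b = r, d = r + 1
a, b, c, d = p, r, p, r + I

assert np.allclose(comm(a, c), 0)            # [a, c] = 0
assert np.allclose(comm(b, d), 0)            # [b, d] = 0
assert np.allclose(comm(a, d), comm(c, b))   # [a, d] = [c, b]
assert not np.allclose(comm(a, b), 0)        # yet a and b do not commute

# column determinant: det M = ad - cb; by the relations it also equals da - bc
det1 = a @ d - c @ b
det2 = d @ a - b @ c
assert np.allclose(det1, det2)
assert np.allclose(det1, p)    # here det M = p, a non-zero nilpotent
print("2x2 Manin matrix example: OK")
```

The two expressions ad − cb and da − bc agree exactly because of the cross-relation, illustrating the order insensitivity discussed below.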
It is clear that a Manin matrix remains a Manin matrix under the following operations: taking a submatrix, permutation of rows or columns, doubling of a row or a column. In other words, if M = (M_ij) is an n × m Manin matrix and i_1, …, i_k ∈ {1, …, n}, j_1, …, j_l ∈ {1, …, m}, then the new k × l matrix N with the entries N_st = M_{i_s j_t} is also a Manin matrix. Note that in the case of a permutation this fact follows from Proposition 2.21. Let us recall one more important fact on the Manin matrices [CFR].
Proposition 3.1. All the entries of a matrix M pairwise commute (i.e. [M_ij, M_kl] = 0 for any i, j, k, l) iff M and its transposed matrix M^⊤ are both Manin matrices.
Proof. If all entries of M commute with each other then M as well as M^⊤ is a Manin matrix. In the converse direction it is enough to prove the statement for the case of a 2 × 2 matrix, since any two entries are contained in some 2 × 2 submatrix (if n = 1 or m = 1 there are no 2 × 2 submatrices; in this case the commutativity of the entries follows from the relations (3.5) for M^⊤ or M respectively).
The condition that the matrix M^⊤ is a Manin matrix gives the relations [a, b] = 0, [c, d] = 0, [a, d] = [b, c]. Together with the relations (3.8) they imply that all the entries a, b, c, d pairwise commute.

For a general Manin matrix the entries from different columns do not commute, so the notion of determinant should be generalised in a special way. It turns out that the natural generalisation is the so-called column determinant. For an n × n matrix M over an algebra (or a ring) R it is defined as

det M = Σ_{σ∈S_n} (−1)^σ M_{σ(1)1} M_{σ(2)2} ⋯ M_{σ(n)n},

where (−1)^σ is the sign of the permutation σ. This is the usual expression for the determinant, but with a specified order of the entries in each term: they are ordered in accordance with the order of the columns. If M is a Manin matrix then the order of the columns can be chosen in a different way and this leads to the same result (3.10) (see [CF], [CFR]). However, we cannot take a different order for different terms. For example, the determinant of the 2 × 2 matrix (3.7) is det(M) = ad − cb. If it is a Manin matrix then due to the last relation (3.8) we have det(M) = da − bc, but in general det(M) does not equal ad − bc or da − cb even for a Manin matrix.

An important property of the column determinant of Manin matrices is its behaviour under the permutation of rows and columns: the determinant of an n × n Manin matrix M changes sign under a transposition of two columns or rows. In the notations of Subsection 2.4 we have det(^τM) = (−1)^τ det M and det(M^τ) = (−1)^τ det M for any τ ∈ S_n, where ^τM and M^τ denote the matrices obtained from M by the permutation τ of rows and of columns respectively. The first formula is deduced by the substitution σ → τσ in the sum over permutations together with the identity (−1)^{τσ} = (−1)^τ (−1)^σ. The second formula follows from (3.10) in a similar way.
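The column determinant and its sign behaviour can be sketched in code. The helper below (an illustration, not the paper's construction) implements the column-ordered sum over permutations and checks the sign change under a row transposition on a hypothetical 2 × 2 Manin matrix with noncommuting entries built from nilpotent matrices p and r:

```python
import numpy as np
from itertools import permutations

def sign(perm):
    """Sign of a permutation given as a tuple, via its inversion count."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return (-1) ** inv

def column_det(M):
    """Column determinant: sum over sigma of sign(sigma) * M[s(1)][1]...M[s(n)][n],
    with the factors ordered by columns."""
    n = len(M)
    dim = M[0][0].shape[0]
    total = np.zeros((dim, dim))
    for perm in permutations(range(n)):
        term = np.eye(dim)
        for col in range(n):
            term = term @ M[perm[col]][col]
        total = total + sign(perm) * term
    return total

# a 2x2 Manin matrix with entries p, r, [p, r] != 0 (hypothetical example)
p = np.array([[0., 1.], [0., 0.]])
r = np.array([[0., 0.], [1., 0.]])
I = np.eye(2)
M = [[p, r], [p, r + I]]

d = column_det(M)
assert np.allclose(d, p)                       # det M = ad - cb = p
M_swapped = [M[1], M[0]]                       # transpose the two rows
assert np.allclose(column_det(M_swapped), -d)  # det changes sign
print("column determinant checks: OK")
```

Because the factors are multiplied in column order, the function is well defined over any (possibly noncommutative) matrix algebra.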
Since any submatrix of a Manin matrix M is also a Manin matrix, it is natural to define the k × k minors of M to be the column determinants of its k × k submatrices. We say that a Manin matrix has rank r if there is a non-zero r × r minor and all the k × k minors vanish for k > r. In fact, it is enough to check it for k = r + 1 (see [CF], [CFR]). Many important properties of the Manin matrices are formulated in terms of column determinants and minors. In particular, one can construct 'spectral invariants' of square Manin matrices [CF], [CFR].
The theory of Manin matrices applies to the Yangians Y(gl_n), the affine Lie algebras gl_n, the Heisenberg gl_n XXX-chain and the Gaudin gl_n model [CF], to elliptic versions of these models [RTS], etc.
Let us, for example, present the connection of the notion of A_n-Manin matrix with the Yangian Y(gl_n). Consider the rational R-matrix R(u) = u − P_n. The Yangian Y(gl_n) is defined as the algebra generated by t^{(r)}_ij, i, j = 1, …, n, r ∈ Z_{≥1}, with the commutation relation

R(u − v) T^{(1)}(u) T^{(2)}(v) = T^{(2)}(v) T^{(1)}(u) R(u − v),   (3.11)

where u and v are formal variables and T(u) is the n × n matrix with the entries t_ij(u) = δ_ij + Σ_{r≥1} t^{(r)}_ij u^{−r}. The coefficients of the series qdet T(u) ∈ Y(gl_n)[[u^{−1}]], the quantum determinant, generate the centre of the Yangian Y(gl_n) (see [MNO]).

Now let us consider (A_n, 0)-Manin matrices M = (M_ij), where 0 ∈ End(C^m ⊗ C^m). They are defined by the relation M_ik M_jl = M_jk M_il, where i, j = 1, …, n and k, l = 1, …, m. The 2 × 2 matrix of the form (3.7) is an (A_2, 0)-Manin matrix iff ad = cb, bc = da, ac = ca, bd = db. These are the relations (3.8) plus the condition det M = 0. Again one can see that an n × m matrix is an (A_n, 0)-Manin matrix iff any 2 × 2 submatrix of this matrix is an (A_2, 0)-Manin matrix. Thus the set of (A_n, 0)-Manin matrices over R coincides with the set of rank 1 Manin matrices over R of the size n × m.
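The rational R-matrix can be checked against the Yang–Baxter equation numerically; the sketch below (a consistency check, not part of the source) verifies R_12(u−v) R_13(u−w) R_23(v−w) = R_23(v−w) R_13(u−w) R_12(u−v) for R(u) = u − P_n with n = 2 at generic numerical points:

```python
import numpy as np

n = 2
In = np.eye(n)
# permutation operator on C^n ⊗ C^n
P = np.zeros((n * n, n * n))
for k in range(n):
    for l in range(n):
        P[l * n + k, k * n + l] = 1.0

# embeddings of P into End(C^n ⊗ C^n ⊗ C^n)
P12 = np.kron(P, In)
P23 = np.kron(In, P)
P13 = P12 @ P23 @ P12          # permutation of the 1st and 3rd factors
I3 = np.eye(n ** 3)

def R(u, Pab):                  # rational R-matrix R(u) = u - P
    return u * I3 - Pab

u, v, w = 1.3, 0.4, -0.7        # generic spectral parameters
lhs = R(u - v, P12) @ R(u - w, P13) @ R(v - w, P23)
rhs = R(v - w, P23) @ R(u - w, P13) @ R(u - v, P12)
assert np.allclose(lhs, rhs)    # Yang-Baxter equation holds
print("Yang-Baxter check: OK")
```

Since R(u) = u − P differs from Yang's R-matrix u + P only by an overall sign and a reversal of the spectral parameter, both conventions pass this check.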
• The matrix M is an (A n , A n )-Manin matrix iff it satisfies (3.6).
• The matrix M is an ( A n , A n )-Manin matrix iff it satisfies (3.6) and (3.14).
The notion of Manin matrix can be generalized to the infinite-dimensional case. For any vector space V denote by P_V ∈ End(V ⊗ V) the permutation operator P_V(v ⊗ w) = w ⊗ v. Consider the space of polynomials V = C[u]. Its tensor square can be identified with the space of polynomials in two variables: V ⊗ V = C[u_1, u_2]. In terms of this identification the operator P_{C[u]} can be interpreted as the operator permuting the variables u_1 and u_2; we denote it by P_{u_1 u_2}. To generalise the consideration to the case of an arbitrary infinite matrix (M_ij) without any finiteness condition we should suppose that the operator M belongs to the completion of R ⊗ Hom(W, V) considered above.

q-Manin matrices
Let q be a non-zero complex number. Consider the q-commuting variables x_1, …, x_n, that is x_j x_i = q x_i x_j for i < j. By means of the sign function one can write these relations as

x_j x_i = q^{sgn(j−i)} x_i x_j   (3.17)

for all i, j. In the matrix form they can be written by means of the vector x = Σ_i x_i e_i and the operator P^q_n ∈ End(C^n ⊗ C^n) acting as P^q_n(e_k ⊗ e_l) = q^{sgn(l−k)} e_l ⊗ e_k. This operator also satisfies the braid relation (P^q_n ⊗ 1)(1 ⊗ P^q_n)(P^q_n ⊗ 1) = (1 ⊗ P^q_n)(P^q_n ⊗ 1)(1 ⊗ P^q_n). Since (P^q_n)² = 1 the matrices A^q_n = (1 − P^q_n)/2 and S^q_n = (1 + P^q_n)/2 are idempotents. The corresponding algebra X_{A^q_n}(C) is generated by the x_i with the relations (3.17). It can be interpreted as an 'algebra of functions' on the n-dimensional quantum space C^n_q. The algebra Ξ_{A^q_n}(C) is the q-Grassmann algebra generated by ψ_1, …, ψ_n with the relations ψ_i² = 0 and ψ_j ψ_i = −q^{−1} ψ_i ψ_j for i < j. The (A^q_n, A^q_m)-Manin matrices are called q-Manin matrices; in terms of entries their defining relation takes the form of column relations and cross-relations (3.27). The Manin matrices considered in Subsection 3.1 are q-Manin matrices for q = 1. The properties of the Manin matrices described in [CF], [CFR] were generalised to the q-case in the work [CFRS].
A natural generalisation of the column determinant to the case of q-Manin matrices is the q-determinant

det_q M = Σ_{σ∈S_n} (−q)^{−inv(σ)} M_{σ(1)1} M_{σ(2)2} ⋯ M_{σ(n)n},   (3.28)

where inv(σ) is the number of inversions: it is equal to the number of pairs (i, j) such that 1 ≤ i < j ≤ n and σ(i) > σ(j). It coincides with the length of σ, defined as the minimal l such that σ can be presented as a product of l elementary transpositions. For the 2 × 2 matrix (3.7) the formula (3.28) gives det_q M = ad − q^{−1}cb; if M is a q-Manin matrix, the q-determinant can be rewritten as det_q M = da − qbc.
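The inversion count and the q-determinant can be sketched with noncommutative symbols. The code below assumes the column-ordered convention det_q M = Σ_σ (−q)^{−inv(σ)} M_{σ(1)1}⋯M_{σ(n)n}, which reproduces det_q = ad − q^{−1}cb in the 2 × 2 case; it is a minimal illustration, not the paper's implementation:

```python
import sympy as sp
from itertools import permutations

def inv_count(perm):
    """Number of inversions of a permutation given as a tuple."""
    return sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
               if perm[i] > perm[j])

def det_q(M, q):
    """Column-ordered q-determinant with weights (-q)^(-inv(sigma))."""
    n = len(M)
    total = sp.S.Zero
    for perm in permutations(range(n)):
        term = sp.S.One
        for col in range(n):
            term = term * M[perm[col]][col]   # factors in column order
        total += (-q) ** (-inv_count(perm)) * term
    return sp.expand(total)

q = sp.symbols('q')
a, b, c, d = sp.symbols('a b c d', commutative=False)
M = [[a, b], [c, d]]
# 2x2 case: det_q M = ad - q^{-1} cb
assert sp.expand(det_q(M, q) - (a * d - c * b / q)) == 0

# inv(sigma) = inv(sigma^{-1}) for every permutation
for perm in permutations(range(4)):
    inverse = tuple(sorted(range(4), key=lambda i: perm[i]))
    assert inv_count(perm) == inv_count(inverse)
print("q-determinant checks: OK")
```

For q = 1 the weights reduce to the signs (−1)^{inv(σ)} and det_q becomes the ordinary column determinant.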
A general n × m matrix is a q-Manin matrix iff any 2 × 2 submatrix of this matrix is a q-Manin matrix (n ≥ 2). For an n × n q-Manin matrix we can change the order of the columns in the expression of the q-determinant as in the formula (3.30) [CFRS]. By replacing τ by τ^{−1} in (3.30) and taking into account inv(τ^{−1}) = inv(τ) one can write this formula in the form det_q(M^τ) = (−q)^{−inv(τ)} det_q M, where M^τ is obtained from M by the permutation τ of columns. In contrast with the case of Subsection 3.1, the q-determinant of the matrix ^τM obtained from an n × n q-Manin matrix by a permutation of rows is not related to det_q M (the proof given for the q = 1 case does not work, since in general q^{inv(τσ)} ≠ q^{inv(τ)} q^{inv(σ)}). Moreover, neither ^τM nor M^τ are q-Manin matrices in general. However, they are Manin matrices for other idempotents: by virtue of Proposition 2.21 they are ((τ⊗τ)A^q_n(τ^{−1}⊗τ^{−1}), A^q_m)- and (A^q_n, (τ⊗τ)A^q_m(τ^{−1}⊗τ^{−1}))-Manin matrices respectively (see Section 3.3). As we will see in Subsection 6.1, the q-determinant is a natural operation for (A^q_n, B)-Manin matrices for any B, but not for (B, A^q_m)-Manin matrices (the symmetry of the q-determinant of an (A^q_n, B)-Manin matrix with respect to permutations of columns depends on the choice of the idempotent B).
Analogously to the case q = 1, the formula (3.30) is valid for any M ∈ R ⊗ End(C^n) satisfying the cross-relations (3.27).

Multi-parametric case: ( q, p)-Manin matrices
In Subsection 3.2 we introduced a q-deformation of the polynomial algebra C[x_1, …, x_n]. The q-commutation of the variables was defined by a single parameter q. However, one can consider a multi-parameter deformation [Man89] with n(n − 1)/2 deformation parameters: one for each pair of variables. We will say that an n × n matrix q = (q_ij) is a parameter matrix iff it has entries q_ij ∈ C∖{0} satisfying the conditions q_ij q_ji = 1 and q_ii = 1. A parameter matrix q defines the commutation relations x_j x_i = q_ij x_i x_j, where i, j = 1, …, n are arbitrary or subjected to i < j (both choices give the same relations). It gives the algebra X_{A^q}(C), where A^q = (1 − P^q)/2 and the operator P^q ∈ End(C^n ⊗ C^n) acts as P^q(e_k ⊗ e_l) = q_kl e_l ⊗ e_k. It is immediately checked that (P^q)² = 1 and that P^q satisfies the braid relation (3.34). The corresponding algebra Ξ_{A^q}(C) is defined by the relations ψ_j ψ_i = −q_ij^{−1} ψ_i ψ_j (3.35) and ψ_i² = 0; the independent relations are (3.35) for i < j and ψ_i² = 0. Let p = (p_ij) be an m × m parameter matrix. An (A^q, A^p)-Manin matrix M is an n × m matrix over an algebra R satisfying (3.36). In terms of entries this relation can be written as the system (3.37), (3.38). These conditions are empty for i = j and they do not change under i ↔ j or k ↔ l; hence it is enough to check (3.37) for i < j and (3.38) for i < j, k < l (the relation (3.37) is the relation (3.38) for k = l).
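The operator identities can be verified numerically. The sketch below assumes the entry convention P^q(e_k ⊗ e_l) = q_kl e_l ⊗ e_k (consistent with the single-parameter case) and checks (P^q)² = 1 together with the braid relation for n = 3 with randomly chosen parameters:

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)

# parameter matrix: q_ii = 1, q_ji = 1/q_ij
Q = np.ones((n, n))
for i in range(n):
    for j in range(i + 1, n):
        Q[i, j] = rng.uniform(0.5, 2.0)
        Q[j, i] = 1.0 / Q[i, j]

# assumed convention: P^q (e_k ⊗ e_l) = q_{kl} e_l ⊗ e_k
P = np.zeros((n * n, n * n))
for k in range(n):
    for l in range(n):
        P[l * n + k, k * n + l] = Q[k, l]

assert np.allclose(P @ P, np.eye(n * n))   # (P^q)^2 = 1, since q_kl q_lk = 1

In = np.eye(n)
P12, P23 = np.kron(P, In), np.kron(In, P)
# braid relation: both sides send e_i ⊗ e_j ⊗ e_k to q_ij q_ik q_jk e_k ⊗ e_j ⊗ e_i
assert np.allclose(P12 @ P23 @ P12, P23 @ P12 @ P23)
print("multiparameter permutation checks: OK")
```

The check works for any choice of the free parameters q_ij with i < j, since both sides of the braid relation produce the same product q_ij q_ik q_jk of parameters.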
Definition 3.2. A matrix M is called a (q, p)-Manin matrix if it satisfies the relations (3.37), (3.38). A square matrix M satisfying these relations with p = q is called a q-Manin matrix.

Now let us consider the permutation of rows and columns of such matrices.
Proposition 3.3. Let M be an n × m matrix over R, σ ∈ S n and τ ∈ S m . Then the following statements are equivalent.
The matrix σqσ^{−1} has entries (σqσ^{−1})_ij = q_{σ^{−1}(i)σ^{−1}(j)}.

Proof. It follows from Proposition 2.21 and the formula (σ ⊗ σ)P^q(σ^{−1} ⊗ σ^{−1}) = P^{σqσ^{−1}}, (3.40) which in turn is deduced by direct calculation.
Remark 3.4. The proposition does not hold for arbitrary operators σ ∈ GL(n, C) and τ ∈ GL(m, C), since the formula (3.40) is not valid for a general σ ∈ GL(n, C).
Let us consider a more general situation: one can apply the following operations to a matrix M: taking a submatrix, permutation of rows or columns, doubling of a row or a column. The result of a sequence of such operations is the new matrix N = M_IJ considered below.
Theorem 3.5. Let I = (i_1, …, i_k) and J = (j_1, …, j_l), where 1 ≤ i_s ≤ n and 1 ≤ j_t ≤ m for any s = 1, …, k and t = 1, …, l. Let M = (M_ij) be an n × m matrix over R and M_IJ be the k × l matrix with the entries (M_IJ)_st = M_{i_s j_t}. Let q and p be n × n and m × m parameter matrices and let q_II and p_JJ be the k × k and l × l matrices with the entries (q_II)_su = q_{i_s i_u}, s, u = 1, …, k, and (p_JJ)_tv = p_{j_t j_v}, t, v = 1, …, l. They are also parameter matrices. If M is a (q, p)-Manin matrix then M_IJ is a (q_II, p_JJ)-Manin matrix.
Proof. By substituting i → i_s, j → i_u, k → j_t, l → j_v into (3.38) we obtain the relations (3.38) for the matrix M_IJ with the coefficients defined by the parameter matrices q_II and p_JJ.
For a non-zero complex number q let us denote by q^[n] the n × n parameter matrix with the entries (q^[n])_ij = q^{sgn(j−i)}. Then P^{q^[n]} = P^q_n and the (q^[n], q^[m])-Manin matrices are exactly the n × m q-Manin matrices. A permutation of rows or columns of such a matrix M gives a (σq^[n]σ^{−1}, q^[m])- or (q^[n], τq^[m]τ^{−1})-Manin matrix respectively. In general these are not q-Manin matrices any more (see Subsection 3.2), but they are related to quadratic algebras isomorphic to X_{A^q_n}(C) and X_{A^q_m}(C), so they have the same properties up to the corresponding permutation. For instance, the properties of the q-determinant of ^σM are similar to those of the q-determinant of a q-Manin matrix.
Let x_1, …, x_n be the generators of X_{A^q_n}(C). Then the n × n diagonal matrix M = diag(x_1, …, x_n) is an (A^q_n, A_n)-Manin matrix, i.e. a (q^[n], 1^[n])-Manin matrix. More generally, this is a ((pq)^[n], p^[n])-Manin matrix for any p ∈ C∖{0}.
An analogue of the q-determinant for a (q, p)-Manin matrix depends on q, but not on p. We call it the q-determinant; it is defined for an n × n matrix M by the formula (3.42) [Man89]. The products in this formula run over all inversions of the permutations σ and σ^{−1} respectively. Hence for q = q^[n] the formula (3.42) gives the q-determinant (3.28). The q-determinant is a 'generalised' determinant for the square (q, p)-Manin matrices; in particular, the q-determinant is a 'generalised' determinant for the square (q^[n], p)-Manin matrices.
In particular, one can place the factors ψ_{i_k} and ψ_{i_l} in neighbouring positions. If i_k = i_l then ψ_{i_k} ψ_{i_l} = 0. Let us rewrite the formula (3.44) in terms of Appendix A. Consider the root system for the reflection group S_n. Denote q_α = q_ij for the root α = e_i − e_j. Then due to (A.2) the formula (3.44) takes the form (3.45). We prove the formula (3.45) by induction on the length ℓ = ℓ(σ).
Together with the induction assumption this implies the formula (3.45).
Lemma 3.6 implies that for any n × n matrix M such that [M_ij, ψ_k] = 0 the elements φ_j = Σ_i ψ_i M_ij satisfy φ_1 ⋯ φ_n = det_q(M) ψ_1 ⋯ ψ_n. (3.46) A permutation of rows or columns of M corresponds to a permutation of ψ_1, …, ψ_n or φ_1, …, φ_n respectively (3.47).

Theorem 3.7. Let q = (q_ij) and p = (p_ij) be n × n parameter matrices and τ ∈ S_n. Let M be a (q, p)-Manin matrix over an algebra R. Then the generalised determinants of the (τqτ^{−1}, p)-Manin matrix ^τM and of the (q, τpτ^{−1})-Manin matrix M^τ have the form (3.48) and (3.49) respectively. More generally, the formula (3.48) is valid for any n × n matrix M.
Proof. Let ψ_i be the generators of the algebra Ξ_{A^q}(C). Then Proposition 2.16 implies that the elements φ_j = Σ_i ψ_i M_ij satisfy the commutation relations (3.35) with the parameter matrix p. Due to the formula (3.39) the elements ψ′_i = ψ_{τ^{−1}(i)} and φ′_j = φ_{τ^{−1}(j)} satisfy the commutation relations (3.35) with the parameter matrices τqτ^{−1} and τpτ^{−1} respectively. From the formulae (3.46) and (3.47) we obtain the corresponding expressions for the permuted matrices; the formula (3.44) for the φ_j and ψ_i with σ = τ^{−1} gives the needed products, and substitution of these formulae and of the formula (3.46) into (3.50) gives (3.48) and (3.49). We did not use the commutation relations of the φ_j in the proof of the formula (3.48), so it is valid for any matrix M.

Let us consider the case when two rows or two columns coincide and write some conditions leading to the vanishing of the q-determinant.
Corollary 3.8. Let M be a k × k matrix over R. Let q and p be k × k parameter matrices.
• Let i ≠ j. If M il = M jl and q il = q jl ∀ l (in particular, q ij = 1) then det q (M) = 0.
• If two columns of a ( q, p)-Manin matrix M coincide (that is, M li = M lj ∀ l for some i ≠ j) then det q (M) = 0.
Proof. Let σ ij ∈ S k be the transposition of i and j. Suppose i < j (without loss of generality). The conditions q il = q jl imply σ ij qσ ij = q and that the factor in (3.48) given by the product over the pairs s < t with σ ij (s) > σ ij (t) equals −1, so the substitution τ = σ ij into (3.48) gives det q (M) = − det q (M), whence det q (M) = 0. For generic p one can similarly prove the second statement by using the formula (3.49) with τ = σ ij . For arbitrary p it follows from the relations (3.46) and φ i φ j = φ 2 i = 0.
Remark 3.10. The formula (3.44) follows from the relations ψ j ψ i = −q −1 ij ψ i ψ j , i < j. As a consequence, we did not need the relations φ 2 i = 0 to prove the formula (3.49). The relations Hence the formula (3.49) is valid for any (A q , A p )-Manin matrix M. As a consequence, the second statement of Corollary 3.8 is valid for these matrices if p is generic. However they are not valid for some p, so it is necessary to require M to be a ( q, p)-Manin matrix. Moreover, the third and fourth statements of Corollary 3.9 are not valid for (A q , A p )-Manin matrices even if p is generic, since we used Theorem 3.5. For example, let

Remark 3.11. Let n = k + l, q ij = −1 for i, j = k + 1, . . . , n, i ≠ j, and q ij = 1 for other i, j. By factorizing the algebra X A q (C) over the relations x 2 i = 0, i = k + 1, . . . , n, and introducing a Z 2 -grading we obtain the free super-commutative quadratic algebra with k even and l odd generators. However the approach of Section 2 applied to this algebra does not give the super-Manin matrices considered in [Man89,MR]. The reason is that we suppose commutativity of the x i with the entries of M, which should be replaced by super-commutativity in the super-case. We will consider Manin matrices for quadratic super-algebras in future works.

A 4-parametric quadratic algebra
Consider the algebra with generators x, y, z and relations Let x 1 = x, x 2 = y, x 3 = z, a 12 = a, a 23 = b, a 31 = c, a ij = a −1 ji , a ii = 1. Let ε ijk be the totally antisymmetric tensor such that ε 123 = 1. Then (3.51) is equivalent to the system These relations can be written as (3.53) The operator P ∈ End(C 3 ⊗ C 3 ) with the entries (3.53) satisfies P 2 = 1. Hence the operator is an idempotent and the relations (3.51) define the algebra X A a,b,c κ (C). By setting κ = 0 we obtain the quadratic algebra X A q (C) with the parameters q ij = a 2 ij , so the algebra X A a,b,c κ (C) is a generalisation of the 3-dimensional case of the algebra X A q (C) considered in Subsection 3.3 (this is not a κ-deformation in general, see Remark 6.4).

Let us consider some examples of Manin matrices by taking
for all cyclic permutations (i, j, k) of (1, 2, 3). The relation (3.54) is exactly the cross relation (3.38) for the parameters q ij = q sgn(j−i) and p ij = a 2 ij , while the relation (3.55) is a generalisation of the q-commutation (3.37).
are satisfied by the substitutions x i = α i and x i = β i for all cyclic permutations (i, j, k) of (1, 2, 3).

Lax operators
Lax operators are various square matrices and endomorphisms of vector spaces arising in the theory of integrable systems and quantum groups. We will consider Lax operators satisfying RLL-relations with some R-matrices (solutions of the Yang–Baxter equation). Different R-matrices give different types of Lax operators. Since many quantum groups can be defined by RLL-relations, the Lax operators of a certain type are related to the representation theory of the corresponding quantum group. Here we consider connections between Manin matrices associated with some quadratic algebras and the Lax operators associated with the quantum groups U q (gl n ) and Y (gl n ). Notice also that a connection between the q-Manin matrices and Lax operators associated with the affine quantum group U q ( gl n ) was described in [CFRS].

Lax operators of U q (gl n ) type and q-Manin matrices
A relationship between Lax operators and q-Manin matrices was first described by Manin, see [Man88]. We investigate this relationship by applying a decomposition of the corresponding R-matrix. Let us first write the relations for a transposed q-Manin matrix. Recall that the matrices P n defined by (3.1) permute the factors Hom(C m , C n ) ⊗ Hom(C m , C n ) in the following way: Note also that transposition gives the same: Lemma 4.1. Let M ∈ R ⊗ Hom(C m , C n ). The transposed matrix M ⊤ is a q-Manin matrix iff the matrix M satisfies one of the following equivalent relations: Proof. The relation (3.22) for the m × n matrix M ⊤ has the form A q m (M ⊤ ) (1) (M ⊤ ) (2) S q n = 0. If we transpose both sides and take into account (M (1) M (2) ) ⊤ = (M ⊤ ) (1) (M ⊤ ) (2) and (4.3), we obtain (4.4). Due to (4.2) the permutation of tensor factors yields (4.5).
Suppose that q 2 ≠ −1. Consider the R-matrix It satisfies the Yang–Baxter equation (4.7). A Lax operator of U q (gl n ) type is an n × n matrix L ∈ R ⊗ End(C n ) satisfying the RLL-relation More generally, consider an n × m matrix L ∈ R ⊗ Hom(C m , C n ) satisfying R q n L (1) L (2) = L (2) L (1) R q m . (4.8) Remark 4.2. The commutation relations for the quantum group U q (gl n ) can be written as three matrix relations R q n L (1) ± L (2) ± = L (2) ± L (1) ± R q n and R q n L (1) + L (2) − = L (2) − L (1) + R q n for some matrices L + , L − ∈ U q (gl n ) ⊗ End(C n ) [FRT, RTF].
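The Yang–Baxter equation (4.7) and the decomposition of R̂ into idempotents used below can be illustrated numerically. The convention for the U q (gl n )-type R-matrix in this sketch is an assumption (the paper's normalisation may differ by transposition or an overall scalar); it checks the Yang–Baxter equation, the braid relation for R̂ = P R, and the quadratic Hecke relation (R̂ − q)(R̂ + q −1 ) = 0 that underlies the splitting of R̂ into two idempotents.

```python
import numpy as np

def unit(n, i, j):
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

def perm_op(n):
    # P(x ⊗ y) = y ⊗ x on C^n ⊗ C^n
    return sum(np.kron(unit(n, i, j), unit(n, j, i))
               for i in range(n) for j in range(n))

def r_matrix(n, q):
    # One common U_q(gl_n)-type convention (an assumption):
    # R = q Σ_i E_ii⊗E_ii + Σ_{i≠j} E_ii⊗E_jj + (q − q⁻¹) Σ_{i<j} E_ij⊗E_ji
    R = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            R += (q if i == j else 1.0) * np.kron(unit(n, i, i), unit(n, j, j))
    for i in range(n):
        for j in range(i + 1, n):
            R += (q - 1.0 / q) * np.kron(unit(n, i, j), unit(n, j, i))
    return R

n, q = 3, 1.7
R = r_matrix(n, q)
I = np.eye(n)
P = perm_op(n)
R12 = np.kron(R, I)
R23 = np.kron(I, R)
R13 = np.kron(I, P) @ R12 @ np.kron(I, P)   # conjugate the factors 2 and 3
Rhat = P @ R                                 # satisfies the braid relation
```

For n = 2 this R is the familiar 4 × 4 matrix diag(q, 1, 1, q) with one extra off-diagonal entry q − q⁻¹.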
By multiplying the relation (4.8) by P n from the left and by taking into account (4.1) we obtain the equivalent relation where (4.10) The matrix (4.10) satisfies the braid relation which is obtained by multiplying left and right hand sides of (3.3) and (4.7).
We see that (4.17) This means that the idempotents A q n and R q n− are left equivalent. Thus an n × m q-Manin matrix is exactly the Manin matrix for the pair ( R q n− , R q m− ). Due to (4.2) we obtain so that (4.20) Theorem 4.4. Let L ∈ R ⊗ Hom(C m , C n ), then the following statements are equivalent.
• L satisfies the relation (4.21) • L satisfies the relation • L satisfies the relation (4.23) • L satisfies the relations (4.24) • The matrices L and L ⊤ are both q-Manin matrices.
Proof. By adding qL (1) L (2) to both sides of (4.9) and dividing by q + q −1 we obtain the equivalent relation (4.21). The equivalence of the relations (4.9) and (4.22) is proved similarly. Further, by using (4.17) one establishes the equivalence of (4.22) and (4.23). The relations (4.24) are obtained from (4.23) by multiplying by S q m from the right and by S q n from the left respectively. Conversely, suppose that L satisfies the relations (4.24). By virtue of the formulae (4.2) the second of the relations (4.24) can be written in the form S q −1 n L (1) L (2) A q −1 m = 0. Thus we have These relations imply By using (4.20) one obtains Multiplication by P m from the right gives (4.28). By taking into account (4.24) we obtain (4.23) in the following way: Finally, by virtue of Lemma 4.1 the relations (4.24) mean exactly that L and L ⊤ are q-Manin matrices. Theorem 4.4 implies that a Lax operator of U q (gl n ) type is a particular case of a q-Manin matrix. Some properties of these Lax operators can be generalised to the case of q-Manin matrices. The q-determinant (3.28) arose as a natural generalisation of the determinant for the Lax operators of U q (gl n ) type; its properties were generalised to the case of q-Manin matrices in [CFRS].
Note that the fact that the RLL-relation (4.8) is equivalent to the claim that L and L ⊤ are both q-Manin matrices can be proved in the same way as Proposition 3.1 (see [Man88,CFR]). The approach considered here explains this fact in terms of left equivalence of idempotents, which will be applied in Subsection 6.2. This allows us to explain why the Newton identities for the q-Manin matrices proved in [CFRS] differ from the Newton identities for L-operators deduced in [PS,IOP98,IOP99].
The decomposition of the operator R q into dual idempotents described in Lemma 4.3 gives a general idea of how to connect Lax operators with Manin matrices. It can be applied to a rather general class of R-matrices.

Lax operators of Yangian type as Manin operators
The decomposition method described in Subsection 4.1 is generalised here to the case of the rational R-matrix. This gives an interpretation of the corresponding Lax matrices as a class of Manin operators.
Let h : Y (gl n ) → R be a homomorphism from the Yangian to some algebra R. It is defined by the image of the matrix T (u). This image has the form L(u) = 1 + Σ ∞ r=1 Σ n i,j=1 ℓ (r) ij E ij u −r , where ℓ (r) ij = h(t (r) ij ), and satisfies the RLL-relation where R(u) = R n (u) = u − P n . Conversely, any n × n matrix over R which has this form and satisfies (4.30) defines a homomorphism Y (gl n ) → R. Such matrices are called Lax operators of Yangian type.
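The rational R-matrix R(u) = u·1 − P n can be checked numerically against the Yang–Baxter equation with spectral parameters, R 12 (u − v) R 13 (u) R 23 (v) = R 23 (v) R 13 (u) R 12 (u − v), and against the scalar identity R(u)R(−u) = (1 − u²)·1. A small sketch (the embedding helpers are illustrative):

```python
import numpy as np

def perm_op(n):
    # P(x ⊗ y) = y ⊗ x on C^n ⊗ C^n
    P = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            P[i * n + j, j * n + i] = 1.0
    return P

def R(u, n):
    # Rational (Yang-type) R-matrix: R(u) = u·1 − P
    return u * np.eye(n * n) - perm_op(n)

n, u, v = 3, 0.37, 1.24
I = np.eye(n)
P = perm_op(n)

def emb12(M):
    return np.kron(M, I)                       # act on tensor factors 1, 2

def emb23(M):
    return np.kron(I, M)                       # act on tensor factors 2, 3

def emb13(M):
    return emb23(P) @ emb12(M) @ emb23(P)      # conjugate factors 2 ↔ 3
```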
By taking into account (4.36) we obtain 4 A 2 n = 2 + 2u −1 12 + 2u −1 12 R = 4 A n . The basis of C n ⊗ C n [u 1 , u −1 1 , u 2 , u −1 2 ] is (e i ⊗ e j u k 1 u l 2 ), where k, l ∈ Z, i, j = 1, . . . , n, and the completion with respect to this basis is the space C n ⊗ C n [[u 1 , u −1 1 , u 2 , u −1 2 ]]. Since A n preserves the space C n ⊗ C n [u 1 , u −1 1 , u 2 , u −1 2 ], its matrix in this basis satisfies the left finiteness condition introduced in Subsection 2.7. However it does not satisfy the right finiteness condition, since it cannot be extended to the completed tensor product space. Explicitly this can be seen from the formulae (4.43). For instance, for any k ⩾ 1 the vector A n (e i ⊗ e j u k 1 u 1−k 2 ) has a non-zero coefficient at the term e i ⊗ e j u 0 1 u 0 2 . Consider the topology of the space V = C n [u, u −1 ] defined by the neighbourhoods of 0 of the form V r = { Σ N k=−r t k u k | N ⩾ −r, t k ∈ C n }. The completion of V with respect to this topology is the space C n ((u −1 )). The corresponding completion of the space R ⊗ V is R ⊗ V = R((u −1 )) ⊗ C n . The neighbourhoods V r ⊗ V s define a topology on V ⊗ V which gives the completion C n ⊗ C n ((u −1 1 , u −1 2 )). The operator A n ∈ End(V ⊗ V ) is continuous with respect to this topology. In particular, this means that A n extends to the completion C n ⊗ C n ((u −1 1 , u −1 2 )).

Thus an ( A n , A m )-Manin operator is an element
(4.44) Note also that the operators e a(∂u 1 +∂u 2 ) = e a∂u ⊗ e a∂u commute with A n . Hence if M is an ( A n , A m )-Manin operator then e a∂u Me b∂u is also an ( A n , A m )-Manin operator for any a, b ∈ C (this follows from Proposition 2.21 generalised to the infinite-dimensional case).
Proof. Recall first that the relation (4.31) is equivalent to (4.35). Let us multiply by 4u 12 from the right and substitute 2u 12 A n = 1 + u 12 + R n , 2 A m = 1 + u −1 12 + u −1 12 R m . This gives the equivalent relation (4.46). The left hand side of (4.46) does not change under the conjugation T → u 12 T u −1 12 , while the right hand side changes sign. This means that (4.46) is valid iff both sides vanish. Due to (4.36) the vanishing of each side of (4.46) is equivalent to (4.35).
Let L(u) be an ( A n , A m )-Manin operator. Then M = L(u + a)e b∂u is also an ( A n , A m )-Manin operator for any a, b ∈ C. In particular, the A n -Manin matrix M = L(u)e −∂u considered in Subsection 3.1 is an A n -Manin operator.
Remark 4.9. One can renormalise the matrix L(u) ∈ R((u −1 )) ⊗ Hom(C m , C n ) by multiplying it by a function in u. Such a renormalisation does not violate the RLL-relation (4.31). Hence one can suppose that . We see from the formulae (4.42), (4.43) that ( A n ) − res satisfies the right finiteness condition. In particular, we have the right quantum algebra U ( A n ) − res and Theorem 4.8 implies that the Yangian Y (gl n ) is a factor algebra of U ( A n ) − res . Remark 4.10. The facts described in this subsection also work for a matrix L(u) ∈ R((u)) ⊗ Hom(C m , C n ), since we can consider another completion of R[u, u −1 ]. Again, one can suppose that L res is the restriction of A n to C n ⊗ C n [u 1 , u 2 ].

Minors of Manin matrices
The notion of a minor (the determinant of a submatrix) is an important tool in classical matrix theory. It can be interpreted in terms of the Grassmann algebra as certain coefficients. This gives minors of (A n , A m )-Manin matrices, which we defined in Subsection 3.1 as column determinants of submatrices. The same can be done for q- and ( q, p)-Manin matrices by considering the quadratic algebras Ξ A (C) for A = A q n and A q respectively. There is a dual notion of minors corresponding to the quadratic algebras X A (C). In the case of the polynomial algebras these dual minors are written via permanents.
For a general (A, B)-Manin matrix M we define two types of minors corresponding to the homomorphisms f M : X A (C) → X B (R) and f M : Ξ B (C) → Ξ A (R). In fact these minors give the graded components of these homomorphisms. Like the usual minors they behave well under multiplication of Manin matrices and under permutations of rows and columns. In future works we hope to establish more properties of these minors by generalising the properties of the usual minors.

The q-minors and permanents
First we recall that the q-determinant (3.28) is the coefficient of proportionality for the product of the q-Grassmann variables [CFRS] (see also the formula (3.46) for the q-version). Namely, let ψ 1 , . . . , ψ n be the generators of Ξ A q n (C), let M be an n × n matrix over an algebra R and φ j = Σ n i=1 ψ i M ij , where j = 1, . . . , n; then φ 1 φ 2 · · · φ n = det q (M)ψ 1 ψ 2 · · · ψ n . (5.1) More generally, let M be an n × m matrix over R and φ j = Σ n i=1 ψ i M ij , where j = 1, . . . , m. For two k-tuples of indices I = (i 1 , . . . , i k ) and J = (j 1 , . . . , j k ) we denote by M IJ the k × k matrix over R with the entries (M IJ ) ab = M i a j b , where we suppose 1 ⩽ i a ⩽ n and 1 ⩽ j b ⩽ m. If i 1 < . . . < i k and j 1 < . . . < j k then M IJ is a k × k submatrix of M and det q (M IJ ) is a q-analogue of the minor defined in Subsection 3.1. The q-determinants det q (M IJ ) are the coefficients in the decomposition where the sum is taken over k-tuples I = (i 1 , . . . , i k ) such that 1 ⩽ i 1 < . . . < i k ⩽ n.
If M is a q-Manin matrix then φ 1 , . . . , φ m are also q-anticommuting and the elements φ j 1 φ j 2 · · · φ j k with 1 ⩽ j 1 < . . . < j k ⩽ m span the subalgebra of Ξ A q n (R) generated by φ 1 , . . . , φ m . This means that the q-determinants of submatrices of M are enough to describe the decompositions (5.3) in this case.
A dual notion to column determinant is row permanent. Recall that the permanent of an n × n matrix M is perm(M) = σ∈Sn M 1,σ(1) · · · M n,σ(n) .
(5.4) This is the same expression as for the determinant but without the factors (−1) σ . If M is over a non-commutative algebra, the factors in this expression should be ordered in a certain way. The formula (5.4) defines the row permanent of M (see [CF], [CFR]). It is invariant under a permutation of columns: perm(M τ ) = perm(M). Contrary to the determinant it does not change sign, so, in particular, the permanent of a matrix with coinciding rows can be non-zero. If M is an A n -Manin matrix, then the permanent is also invariant under a permutation of rows: perm( σ M) = perm(M). Now consider the analogous decompositions of products of y i = Σ m j=1 M ij x j , where x 1 , . . . , x m are generators of X A m (C) (for simplicity we consider the case q = 1). Define an action of the group S k on k-tuples by the formula σ(j 1 , . . . , j k ) = (j σ −1 (1) , . . . , j σ −1 (k) ).
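For commuting (numeric) entries the row permanent (5.4) and the invariances just discussed are easy to check by brute force:

```python
import itertools

def row_permanent(M):
    # perm(M) = sum over sigma of M[0][sigma(0)] * ... * M[n-1][sigma(n-1)],
    # with the factors ordered by row index (the ordering matters over a
    # non-commutative ring; over numbers it is irrelevant)
    n = len(M)
    total = 0
    for sigma in itertools.permutations(range(n)):
        term = 1
        for i in range(n):
            term *= M[i][sigma[i]]
        total += term
    return total
```

Unlike the determinant, the permanent of a matrix with two equal rows need not vanish.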
For a general idempotent A we will define minors of two types and they will be coefficients of decomposition of y i 1 · · · y i k and φ i 1 · · · φ i k into sums of x j 1 · · · x j k and ψ j 1 · · · ψ j k respectively (up to a factor).

Dual quadratic algebras and their pairings
Consider the question of decomposition coefficients in a general setting. Let V and W be vector spaces with a non-degenerate pairing ⟨·, ·⟩ : V × W → C. Let them have dual bases (v i ) and (w i ), ⟨v i , w j ⟩ = δ i j . Let R be an algebra; then R ⊗ V is a free R-module with the basis (v i ). Consider a decomposition α = Σ i α i v i of an element α ∈ R ⊗ V in the basis (v i ). The coefficients α i ∈ R can be calculated via the pairing: α i = ⟨α, w i ⟩. More generally, let A ∈ End(V ) and A * ∈ End(W ) be adjoint operators: ⟨Av, w⟩ = ⟨v, A * w⟩ ∀ v ∈ V, w ∈ W . They act on basis vectors as Av i = Σ j A i j v j , A * w j = Σ i A i j w i , A i j ∈ C (both sums are finite). Suppose they are idempotents: Σ j A i j A j k = A i k . Consider the subspaces V̄ = AV and W̄ = A * W . These subspaces are spanned by the vectors v̄ i = Av i and w̄ i = A * w i respectively. We have ⟨v̄ i , w̄ j ⟩ = A i j . A decomposition α = Σ i α i v̄ i of an element α ∈ R ⊗ V̄ is not unique, since the v̄ i are not linearly independent in general. However we can fix the coefficients α i by imposing some 'symmetry' conditions. Proposition 5.1. (1). The restriction of the non-degenerate pairing ⟨·, ·⟩ : V × W → C to V̄ × W̄ is also non-degenerate. (2). For any α ∈ R ⊗ V̄ there is a unique (finite) sequence (α i ) such that α = Σ i α i v̄ i and Σ i A i j α i = α j . The coefficients α i ∈ R fixed by these conditions can be found by the formula α i = ⟨α, w̄ i ⟩.

Proof.
(1). If w ∈ W̄ and ⟨v, w⟩ = 0 for all v ∈ V̄ , then for any v ∈ V we have ⟨v, w⟩ = ⟨v, A * w⟩ = ⟨Av, w⟩ = 0 and hence w = 0.
(2). Since α ∈ R ⊗ V̄ we have Aα = α, where the action of A is extended to R ⊗ V by the formula A(r ⊗ v) = r ⊗ Av. Let α i = ⟨α, w̄ i ⟩. To show the uniqueness suppose that α = Σ i α i v̄ i for some α i ∈ R such that Σ i A i j α i = α j ; then we have ⟨α, w̄ j ⟩ = Σ i α i A i j = α j .

Let A ∈ End(C n ⊗ C n ) be an idempotent. Consider the corresponding quadratic algebra X A (C). We need to define a vector space X * A (C) and a non-degenerate pairing with X A (C). This implies that X * A (C) is a graded vector space and the pairing respects the grading. Define the algebra X * A (C) as the graded algebra generated by the elements x 1 , . . . , x n with the 'dual' quadratic commutation relations Σ n i,j=1 x i x j A ij,kl = 0, that is We have the algebra isomorphisms X * A (C) = Ξ 1−A (C) = X A ⊤ (C). Let us introduce a pairing ⟨·, ·⟩ : X A (C) × X * A (C) → C respecting the grading, i.e. the product ⟨x i 1 · · · x i l , x j 1 · · · x j k ⟩ vanishes if k ≠ l. The pairings for elements of degree 0 and 1 are defined as follows: ⟨1, 1⟩ = 1 and ⟨x i , x j ⟩ = δ i j . For the higher degree elements the pairing has the form for some S i 1 ...i k j 1 ...j k ∈ C. The commutation relations (2.10) and (5.5) imply that for all a = 1, . . . , k − 1 and l, m = 1, . . . , n. Under these conditions the formula (5.6) correctly defines a pairing ⟨·, ·⟩ : X A (C) × X * A (C) → C that respects the grading.
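Returning to Proposition 5.1, here is a finite-dimensional numerical sketch with V = W = C³, the standard dot pairing (so A * = A ⊤ ) and a hypothetical non-symmetric idempotent A: the coefficients extracted via the pairing with w̄ i = A * w i recover α and satisfy the symmetry condition Σ i A i j α i = α j .

```python
import numpy as np

# V = W = C^3 with the standard pairing <v, w> = v·w, so A* = A^T.
# A hypothetical (non-symmetric) idempotent, A @ A = A:
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])

e = np.eye(3)                             # dual bases v_i = w^i = e_i
v_bar = [A @ e[i] for i in range(3)]      # spanning vectors of V̄ = AV
w_bar = [A.T @ e[i] for i in range(3)]    # spanning vectors of W̄ = A*W

rng = np.random.default_rng(1)
alpha = A @ rng.standard_normal(3)        # an element of V̄

# Coefficients fixed by Proposition 5.1: α^i = <α, w̄^i>
coeffs = np.array([alpha @ w_bar[i] for i in range(3)])
```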
Remark 5.2. In contrast to the situation described in Remark 2.31, one cannot correctly define an operation on quadratic algebras by mapping an algebra X A (C) to X * A (C). This happens because the equality X A (C) = X A ′ (C) does not imply an isomorphism of X * A (C) and X * A ′ (C) (see Subsection 5.6 for details).
To write the formulae (5.6), (5.7) in a matrix form we introduce some conventions. Let V and W be vector spaces and W * = Hom(W, C). The product πξ of elements π ∈ V and ξ ∈ W * is usually identified with the linear operator W → V acting as (πξ)(w) = ξ(w) · π, w ∈ W . For example, e i e j = E ij ∈ End(C n ). More generally, e i 1 ...i k e j 1 ...j k = E i 1 j 1 ⊗ · · · ⊗ E i k j k , where e j 1 ...j k = e j 1 ⊗ · · · ⊗ e j k and e i 1 ...i k = e i 1 ⊗ · · · ⊗ e i k . Let us introduce the notation ⟨α, β⟩ for α ∈ X A (C) ⊗ V and β ∈ X * A (C) ⊗ W * . The pairing acts on the first tensor factors, while in the second factors the elements are multiplied as above: ⟨uπ, vξ⟩ = ⟨u, v⟩ πξ ∈ Hom(W, V ), where u ∈ X A (C), v ∈ X * A (C), π ∈ V and ξ ∈ W * . In particular, By using this notation we can write the pairing of degree 1 elements in matrix form: Consider the operators S (k) = Σ i 1 ,...,i k ,j 1 ,...,j k S i 1 ...i k j 1 ...j k e i 1 ...i k e j 1 ...j k ∈ End (C n ) ⊗k . For k = 1 the operator S (1) is the n × n unit matrix. The formula (5.6) in matrix form reads ⟨X ⊗ · · · ⊗ X, X * ⊗ · · · ⊗ X * ⟩ = S (k) (5.9) (here and below the dots mean that we have k tensor factors). The condition (5.7) is written as Analogously, one can define the algebra Ξ * A (C) generated by the elements ψ 1 , . . . , ψ n over C with the commutation relations ψ i ψ j = Σ k,l A ij,kl ψ k ψ l , i.e.
In particular, these statements are valid for T, T ∈ End (C n ) ⊗k and for T ∈ (C n ) ⊗k * , T ∈ (C n ) ⊗k .
Proof. The statement (a) is obtained from (5.15) in the case W = (C n ) ⊗k , T = 1 ∈ End((C n ) ⊗k ). For V = (C n ) ⊗k , T = 1 ∈ End((C n ) ⊗k ) we obtain (b). The statements (c) and (d) are obtained from (5.16) in the same way. In the cases V = W = (C n ) ⊗k and V = W = C we obtain Hom((C n ) ⊗k , V ) = Hom(W, (C n ) ⊗k ) = End((C n ) ⊗k ) and Hom((C n ) ⊗k , V ) = ((C n ) ⊗k ) * , Hom(W, (C n ) ⊗k ) = (C n ) ⊗k respectively.

Pairing operators
Let A ∈ End(C n ⊗ C n ) be an idempotent and S = 1 − A. We formulate some conditions on the operators (5.9) and (5.13) that guarantee non-degeneracy of the corresponding pairings.
Definition 5.5. Operators S (k) , A (k) ∈ End (C n ) ⊗k are called pairing operators (for the idempotent A) if they satisfy the following conditions: We call them the k-th S-operator and the k-th A-operator.
The conditions (5.19) and (5.21) for k = 1 imply that S (1) = A (1) = 1. For k = 2 the equations (5.17)–(5.19) and (5.20)–(5.22) have the solutions S (2) = S and A (2) = A respectively. Let us prove the uniqueness of solutions of these equations for any k (we do not prove their existence for k > 2 in the general case).
Proof. Let S ′ (k) and S ″ (k) be k-th S-operators for the same idempotent A. By applying the part (a) of Corollary 5.4 for S (k) = S ′ (k) , T = 1 − S ″ (k) and the part (b) of Corollary 5.4 for S (k) = S ″ (k) , T = 1 − S ′ (k) , we obtain S ′ (k) = S ″ (k) . The uniqueness of the A-operators follows similarly from the parts (c) and (d) of Corollary 5.4.
Below we suppose that S (k) and A (k) are pairing operators for an idempotent A. Note that in general the sum S (k) + A (k) does not coincide with the identity operator.
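For the simplest idempotent A = (1 − P)/2 (the polynomial case) the pairing operators are expected to be the full symmetrizer and antisymmetrizer on (C n ) ⊗k . The sketch below is based on an assumed reading of the conditions (5.17)–(5.22): idempotency together with A (a,a+1) S (k) = S (k) A (a,a+1) = 0 and S (a,a+1) A (k) = A (k) S (a,a+1) = 0. It also illustrates the remark above that S (k) + A (k) is not the identity for k > 2.

```python
import numpy as np
from itertools import permutations, product
from math import factorial

n, k = 3, 3
dim = n ** k

def sign(sigma):
    inv = sum(1 for a in range(k) for b in range(a + 1, k) if sigma[a] > sigma[b])
    return (-1) ** inv

def perm_action(sigma):
    # Operator on (C^n)^{⊗k} permuting the tensor factors by sigma
    M = np.zeros((dim, dim))
    for idx in product(range(n), repeat=k):
        src = np.ravel_multi_index(idx, (n,) * k)
        tgt = np.ravel_multi_index(tuple(idx[sigma[a]] for a in range(k)), (n,) * k)
        M[tgt, src] = 1.0
    return M

S_k = sum(perm_action(s) for s in permutations(range(k))) / factorial(k)             # symmetrizer
A_k = sum(sign(s) * perm_action(s) for s in permutations(range(k))) / factorial(k)   # antisymmetrizer

P2 = np.zeros((n * n, n * n))     # flip operator on C^n ⊗ C^n
for i in range(n):
    for j in range(n):
        P2[i * n + j, j * n + i] = 1.0
A2 = (np.eye(n * n) - P2) / 2     # the idempotent A = (1 − P)/2
S2 = (np.eye(n * n) + P2) / 2     # S = 1 − A

def adjacent(a, op2):
    # op2 acting on the tensor factors a, a+1 (1-based) of (C^n)^{⊗k}
    return np.kron(np.eye(n ** (a - 1)), np.kron(op2, np.eye(n ** (k - a - 1))))
```

The ranks (traces) of these idempotents are the binomial numbers C(n+k−1, k) and C(n, k), matching the dimensions of the graded components of the polynomial and Grassmann algebras.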
The following formulae generalise the equalities (5.23). They are proved in the same way.

Since Ξ
Thus if S (k) and A (k) are pairing operators for A then A (k) and S (k) are pairing operators for S = 1 − A.
The following property of the pairing operators shows their role for the quadratic algebras. Recall that we have identifications X A (C) 1 = (C n ) * and Ξ A (C) 1 = C n . Let us identify the higher graded components X A (C) k and Ξ A (C) k with subspaces of (C n ) ⊗k * and (C n ) ⊗k by using the idempotents S (k) and A (k) respectively.
Let d k = rk S (k) and r k = rk A (k) . Then Proposition 5.9 implies Proposition 5.10. Let S (k) and A (k) be the pairing operators for the idempotent A. Then the pairings ·, · : X A (C) × X * A (C) → C and ·, · : Ξ * A (C) × Ξ A (C) → C defined by the formulae (5.9) and (5.13) are non-degenerate.
Proposition 5.11. The elements form the dual bases of the k-th graded components Proof. This is a consequence of Proposition 5.9 and the fact that In contrast to the uniqueness, the existence of the pairing operators is not guaranteed for an arbitrary idempotent A. In many interesting cases the pairing operators can be found explicitly. In the general situation we can only claim the existence of S (1) , A (1) , S (2) and A (2) . In Subsection 6.3 we consider cases when the third pairing operators do not exist. Now we give necessary and sufficient conditions for the existence of a pairing operator.
Theorem 5.12. Let A ∈ End(C n ⊗ C n ) be an idempotent and k ⩾ 2. Consider the subspaces • The k-th S-operator S (k) for the idempotent A exists iff the spaces V k and V k are dual via the natural pairing, that is, dim V k = dim V k and there are bases (v α ) and (v α ) of V k and V k such that v α v β = δ α β . This pairing operator has the form S (k) = Σ α v α v α .
• The k-th A-operator A (k) for the idempotent A exists iff the spaces W k and W k are dual via the natural pairing, that is, dim W k = dim W k and there are bases (w α ) and (w α ) of W k and W k such that w α w β = δ α β . This pairing operator is A (k) = Σ α w α w α .
Proof. Due to the symmetry (5.26) it is enough to prove the statements concerning V k and V k . By definition the algebra X * A (C) is the quotient of the tensor algebra T (C n ) = ⊕ k∈N 0 (C n ) ⊗k by its two-sided ideal I generated by the elements Σ n s,t=1 e st A st,ij . In the k-th graded component we have X * A (C) k = (C n ) ⊗k /I k , where I k is the subspace of (C n ) ⊗k spanned by the vectors Σ n s,t=1 e i 1 ...i a−1 ⊗ (e st A st,i a i a+1 ) ⊗ e i a+2 ...i k = A (a,a+1) e i 1 ...i k , a = 1, . . . , k − 1, i 1 , . . . , i k = 1, . . . , n.
given by the pairing (5.37). In the same way we obtain the isomorphism are dual bases of V k and V k respectively. Conversely, let d = dim V k = dim V k and let (v α ) d α=1 and (v α ) d α=1 be dual bases of V k and V k . Since the vectors v α are not orthogonal to V k they do not belong to I k , that is This implies that the restriction of the projection p k to the subspace V k is an isomorphism and hence the elements x (k) α := (X * ⊗ · · · ⊗ X * )v α form a basis of X * A (C) k . In particular, ..i k ∈ (C n ) ⊗k * . By multiplying this by A (a,a+1) from the right and taking into account (5.5) we obtain d α=1 x (k) α ξ α A (a,a+1) = 0. Since the elements x (k) α are linearly independent, we have ξ α A (a,a+1) = 0 for all a = 1, . . . , k −1, so that ξ α ∈ V k . Multiplication of the same relation by v β from the right gives α v α = (X * ⊗· · ·⊗X * ). Analogously we obtain S (k) (X ⊗· · ·⊗X) = (X ⊗· · ·⊗X). Since S (k) satisfies also A (a,a+1) S (k) = S (k) A (a,a+1) = 0 it is the k-th S-operator for the idempotent A.
Remark 5.13. Theorem 5.12 means in fact that the non-degeneracies of the pairings V k × V k → C and W k × W k → C given by ξ, π = ξπ imply that they induce non-degenerate pairings X * is non-degenerate but it may differ from the corresponding pairing defined by ξ, π = ξπ. In other words, the conditions dim X * A (C) k = dim X A (C) k and dim Ξ * A (C) k = dim Ξ A (C) k do not guarantee the existence of S (k) and A (k) respectively (e.g. see Subsection 6.3).
Remark 5.14. If some pairing S-or A-operators do not exist then one can consider the dual W k (without a structure of algebra) instead of the algebra X * A (C) or Ξ * A (C) respectively. The algebra structures on the spaces X * A (C) and Ξ * A (C) are auxiliary. They are not in agreement with the algebra structures of X A (C) and Ξ A (C), but they are used to define these spaces in a more convenient way.

Minor operators
Let S (k) , A (k) ∈ End (C n ) ⊗k and S (k) , A (k) ∈ End (C m ) ⊗k be pairing operators for idempotents A ∈ End(C n ⊗ C n ) and A ∈ End(C m ⊗ C m ) respectively. Let X, X * , Ψ, Ψ * denote the same as in the previous subsection. The corresponding column- and row-vectors for A we denote by X, X * , Ψ * , Ψ.
By virtue of Proposition 5.9 any graded linear operator X A (C) → X A (R) is given by the formula ξ(X ⊗ · · · ⊗ X) → ξT k ( X ⊗ · · · ⊗ X), ξ ∈ (C n ) ⊗k * (5.40) for some operators T k ∈ R ⊗ Hom (C m ) ⊗k , (C n ) ⊗k such that S (k) T k S (k) = T k S (k) . At the same time a graded linear operator X * A (C) → X * A (R) has the form ( X * ⊗ · · · ⊗ X * )π → (X * ⊗ · · · ⊗ X * )T k π, π ∈ (C m ) ⊗k (5.41) Analogously, a graded linear operator Ξ A (C) → Ξ A (R) can be written as Note that T k and R k can be replaced by S (k) T k S (k) and A (k) R k A (k) respectively and this does not change the maps (5.40), (5.41), (5.42), (5.43). Hence we can always suppose . The pairings are invariant in the following sense.
Proposition 5.16. Any (A, A)-Manin matrix M ∈ R ⊗ Hom(C m , C n ) satisfies the relations Proof. Note that the left hand sides of (5.46) and (5.47) are invariant under multiplication by S (k) from the left and by A (k) from the right respectively. As a consequence, we obtain S (k) M (1) · · · M (k) ( X ⊗ · · · ⊗ X) = M (1) · · · M (k) ( X ⊗ · · · ⊗ X), Then, (5.48) and (5.49) are derived by application of Corollary 5.4 for V = R ⊗ (C n ) ⊗k , For an arbitrary M ∈ R ⊗ Hom(C m , C n ) define the linear operators t M : Analogously, for an arbitrary matrix M ∈ R ⊗ Hom(C m , C n ) define the linear operators M = t M (X ⊗ · · · ⊗ X), X * ⊗ · · · ⊗ X * = X ⊗ · · · ⊗ X, t * M ( X * ⊗ · · · ⊗ X * ) , If M is an (A, A)-Manin matrix then due to Proposition 5.16 these operators take the form In this case these minor operators are defined by one operator only: S (k) and A (k) respectively, so we can denote them as Min S (k) M := M (1) · · · M (k) S (k) = f M (X ⊗ · · · ⊗ X), X * ⊗ · · · ⊗ X * , (5.57) Definition 5.18. Let M ∈ R ⊗ Hom(C m , C n ) be an (A, A)-Manin matrix. Then the minor operators (5.57) and (5.58) are called S-minor and A-minor operators respectively. We also call them minor operators for an (A, A)-Manin matrix. Their entries are called S-minors and A-minors of order k, or simply minors, for an (A, A)-Manin matrix (here x i and ψ i are the generators of X * A (C) and Ξ A (C) respectively).
In terms of They are coefficients in the decompositions In the matrix form these formulae are written as (Y ⊗ · · · ⊗ Y ) = (Min S (k) M)( X ⊗ · · · ⊗ X), where Y = Σ n i=1 y i e i and Φ = Σ m j=1 φ j e j . The formulae (5.63) and (5.64) are not decompositions with respect to bases. However, due to Proposition 5.9 they have the form of the decompositions considered in the part (2) of Proposition 5.1. For example, for the formula (5.63) we need to set V = (C m ) ⊗k * and W = (C m ) ⊗k , while the operator S (k) acting on ξ ∈ (C m ) ⊗k * from the right plays the role of the idempotent A. Thus the minors of an (A, A)-Manin matrix M are the coefficients of the decompositions (5.63) and (5.64) satisfying the conditions Σ m l 1 ,...,l k =1 (5.66) In operator form these symmetries can be written as The expressions for the S- and A-minors of an (A, A)-Manin matrix M depend on the pairing operators S (k) and A (k) only, hence they are defined whenever these pairing operators exist, even if S (k) and A (k) do not exist. The condition that M is an (A, A)-Manin matrix implies the symmetry of the minor S- and A-operators with respect to the upper and lower indices: If S (k) and A (k) do exist, these symmetries can be written in the form

Properties of the minor operators
The determinant of usual complex matrices is multiplicative: det(MN) = det(M) det(N). The generalisation of this property to the case of k × k minors is (a generalisation of) the Cauchy–Binet formula: The right hand side corresponds to the product of the A-minor operators. This property is generalised to Manin matrices.
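For commutative (numeric) entries the k × k Cauchy–Binet formula, det((MN) IJ ) = Σ K det(M IK ) det(N KJ ) with K running over increasing k-tuples, can be verified directly:

```python
import numpy as np
from itertools import combinations

def minor(M, I, J):
    # Determinant of the submatrix with rows I and columns J
    return np.linalg.det(M[np.ix_(list(I), list(J))])

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 5))
N = rng.standard_normal((5, 3))
k = 2
I, J = (0, 2), (1, 2)

lhs = minor(M @ N, I, J)
rhs = sum(minor(M, I, K) * minor(N, K, J) for K in combinations(range(5), k))
```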
Proposition 5.19. Let S (k) , A (k) , S (k) , A (k) and S (k) , A (k) be the pairing operators for idempotents A ∈ End(C n ⊗ C n ), A ∈ End(C m ⊗ C m ) and A ∈ End(C l ⊗ C l ) respectively. Let M and N be n × m and m × l matrices over an algebra R. Suppose that the entries of the first one commute with the entries of the second one: [M ij , N kl ] = 0.
Proof. The first and second statements follow from Proposition 5.16. For instance, the formula (5.68) is derived in the following way: where we used the commutativity in the form The last statement is implied by Proposition 2.26 and the formulae (5.55), (5.56).

Now we give formulae for permutations of rows and columns. For any n and σ ∈ GL(n, C) denote the conjugation by the element σ ⊗k = σ ⊗ · · · ⊗ σ as where T ∈ End (C n ) ⊗k and (σ ⊗k ) −1 = σ −1 ⊗ · · · ⊗ σ −1 . Note that ι σ S (k) and ι σ A (k) are pairing operators for the idempotent ι σ A = (σ ⊗ σ)A(σ −1 ⊗ σ −1 ).
In particular, Proposition 5.20 gives the minors of σ M = σM and τ M = Mτ −1 for M ∈ R ⊗ Hom(C m , C n ), σ ∈ S n and τ ∈ S m .
Let us consider the minor operators Min S (k) M and Min A (k) M for an (A, A)-Manin matrix M as n k × m k matrices over R with the entries (5.61) and (5.62). They are also Manin matrices for some pairs of idempotents.
Proposition 5.21. Let M be an (A, A)-Manin matrix. Let S (k) , A (k) and S (k) , A (k) be the pairing operators for A and A. For any k, ℓ ⩾ 1 we have In particular, Min S (k) M and Min A (k) M are (1 − S (2k) , 1 − S (2k) )- and (A (2k) , A (2k) )-Manin matrices respectively.
Proof. The formulae (5.76) and (5.77) follow from Proposition 5.8. To prove the second statement one needs to put $\ell = k$ in these formulae and to apply Proposition 5.16 for $2k$.
Let us finally write the minor operators in terms of bases of eigenvectors of the idempotents $S^{(k)}$ and $(\bar A^{(k)})^{\top}$. Let $d_k$ and $r_k$ be the ranks of $S^{(k)}$ and $\bar A^{(k)}$. Consider the matrix entries (5.78), (5.79) of the minor operators, where $\alpha, \beta \leqslant d_k$ and $\gamma, \delta \leqslant r_k$. Then the formulae (5.63), (5.64) are rewritten in the form (5.80), where $\alpha = 1, \ldots, d_k$ and $\beta = 1, \ldots, r_k$. These are decompositions with respect to bases; they generalise the formulae given in Subsection 5.1. Note that $y^{(k)}_\alpha = f_M(x^{(k)}_\alpha)$ and similarly for the elements $\varphi^{(k)}$. Thus the formulae (5.80) describe the homomorphisms $f_M \colon X_A(\mathbb{C}) \to X_{\bar A}(R)$ and $f_M \colon \Xi_{\bar A}(\mathbb{C}) \to \Xi_A(R)$ in terms of the bases (5.32)–(5.35).
The formulae (5.69) and (5.70) are written in terms of bases in the form (5.82)
Proposition 2.9 implies equalities of the corresponding subspaces, and Proposition 5.9 in turn gives the isomorphisms of vector spaces $X^*_A(\mathbb{C})_k \cong X^*_{A'}(\mathbb{C})_k$ and $\Xi^*_A(\mathbb{C})_k \cong \Xi^*_{A'}(\mathbb{C})_k$ (these are not homomorphisms of algebras). Let $\{v_\alpha\}_{\alpha=1}^{d_k}$ be a basis as above; then the matrices $\operatorname{Min}_{S^{(k)}} M$ and $\operatorname{Min}_{S'^{(k)}} M$ are related by the change of basis in the space $(\mathbb{C}^m)^{\otimes k}$ corresponding to the matrix $G^{-1}_{(k)}$ (see Subsection 2.4). In the same way the matrices $\operatorname{Min}_{A^{(k)}} M$ and $\operatorname{Min}_{A'^{(k)}} M$ are $(A^{(2k)}, \bar A^{(2k)})$-Manin matrices as well as $(A'^{(2k)}, \bar A'^{(2k)})$-Manin matrices, and they are related by the change of basis in the space $(\mathbb{C}^n)^{\otimes k}$ corresponding to the matrix $G^{[k]}$.
Proposition 5.24. Consider the entries of the minor operators (5.78) and (5.79) in the bases defined above. These entries coincide.

Proof. By using the expression for $\operatorname{Min}_{A^{(k)}} M$ one checks the equality of the entries of the A-minors directly. The equality of the entries of the S-minors is proved similarly.

Construction of pairing operators
Theorem 5.12 gives a formula for the pairing operators via dual bases. However, there is also a basis-free method of construction. It uses the representation theory of groups and algebras, for which appropriate idempotents have already been constructed.
The operators $A_{(1,2)}, A_{(2,3)}, \ldots, A_{(k-1,k)} \in \operatorname{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ generate a subalgebra $U_k$ of the algebra $\operatorname{End}\big((\mathbb{C}^n)^{\otimes k}\big)$. Equivalently, the algebra $U_k$ can be defined as the subalgebra of $\operatorname{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ generated by the operators $P_{(a,a+1)}$ or $S_{(a,a+1)}$. Let $I^+_k$ and $I^-_k$ be the ideals of $\operatorname{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ generated by $A_{(a,a+1)}$ and $S_{(a,a+1)}$ respectively. The subspaces $U^+_k := U_k \cap I^+_k$ and $U^-_k := U_k \cap I^-_k$ are non-unital subalgebras generated by $A_{(a,a+1)}$ and $S_{(a,a+1)}$; they are maximal ideals of the algebra $U_k$. In these terms the conditions (5.17) and (5.20) can be written in the form $1 - S^{(k)} \in U^+_k$ and $1 - A^{(k)} \in U^-_k$. The commutation relations for the algebras $X^*_A(\mathbb{C})$, $X_A(\mathbb{C})$, $\Xi_A(\mathbb{C})$ and $\Xi^*_A(\mathbb{C})$ imply $(X^* \otimes \cdots \otimes X^*)T = 0$ and $T(X \otimes \cdots \otimes X) = 0$ for the corresponding operators $T$. Thus, if $S^{(k)} \in U_k$ satisfies (5.89) and $1 - S^{(k)} \in U^+_k$, then $S^{(k)}$ is the $k$-th S-operator. Analogously, an operator $A^{(k)} \in U_k$ satisfying (5.90) and $1 - A^{(k)} \in U^-_k$ is the $k$-th A-operator. If the algebra $U_k$ admits the involution $\omega \colon P_{(a,a+1)} \mapsto -P_{(a,a+1)}$, then $A^{(k)} = \omega(S^{(k)})$, so by using this involution one can obtain the $k$-th A-operator from the $k$-th S-operator and vice versa.
Let us consider the case when the algebra $U_k$ is the group algebra of a finite group.
Proposition 5.25. Let $G^+_k$ and $G^-_k$ be the subgroups of $\operatorname{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ generated by the operators $P_{(1,2)}, P_{(2,3)}, \ldots, P_{(k-1,k)}$ and $-P_{(1,2)}, -P_{(2,3)}, \ldots, -P_{(k-1,k)}$ respectively. The group $G^+_k$ is finite iff the group $G^-_k$ is finite. In this case the pairing operators exist and have the form (5.93).
Proof. For brevity we denote $g_a = P_{(a,a+1)}$ and $g^*_a = -g_a = -P_{(a,a+1)}$. The finiteness of the group $G^+_k$ means that there exists $N \in \mathbb{Z}_{\geqslant 0}$ such that any element $g \in G^+_k$ can be written as $g_{a_1} g_{a_2} \cdots g_{a_m}$ with $m \leqslant N$. Then any product $g^*_{a_1} \cdots g^*_{a_{N+1}} = (-1)^{N+1} g_{a_1} \cdots g_{a_{N+1}}$ can be written as $(-1)^{N+1} g_{a'_1} \cdots g_{a'_m} = (-1)^{N+1+m} g^*_{a'_1} \cdots g^*_{a'_m}$ for some $m \leqslant N$. By using $(g^*_s)^2 = 1$ we obtain $(-1)^{N+1+m} = g^*_{a_1} \cdots g^*_{a_{N+1}} g^*_{a'_m} \cdots g^*_{a'_1} \in G^-_k$. Since $-1 \notin G^-_k$ we have $(-1)^{N+1+m} = 1$, so for any $a_1, \ldots, a_N, a_{N+1}$ there exist $m \leqslant N$ and $a'_1, \ldots, a'_m$ such that $g^*_{a_1} \cdots g^*_{a_{N+1}} = g^*_{a'_1} \cdots g^*_{a'_m}$. This means that by induction we can write any element of $G^-_k$ as a product $g^*_{a_1} \cdots g^*_{a_m}$ for some $m \leqslant N$, which implies the finiteness of the group $G^-_k$. The converse implication is obtained by changing the sign of $P$.
Let $U_k$ now denote an abstract algebra with an augmentation $\varepsilon \colon U_k \to \mathbb{C}$. We call an element $s^{(k)}$ left invariant or right invariant (with respect to $\varepsilon$) if $u s^{(k)} = \varepsilon(u) s^{(k)}$ $\forall\, u \in U_k$ or $s^{(k)} u = \varepsilon(u) s^{(k)}$ $\forall\, u \in U_k$ respectively. If an element $s^{(k)} \in U_k$ is left or right invariant and normalised as $\varepsilon(s^{(k)}) = 1$, then it is an idempotent.
Let $\rho$ be a representation of this abstract algebra on $(\mathbb{C}^n)^{\otimes k}$ whose image contains the operator algebra $U_k$ defined above, and suppose that $\rho(u) - \varepsilon(u) \in U^+_k$ for all $u \in \rho^{-1}(U_k)$. Then the left and right invariance of $s^{(k)}$ implies that $S^{(k)} = \rho(s^{(k)})$ satisfies (5.89). If the image of $\rho$ coincides with $U_k$, then $S^{(k)} \in U_k$, so due to the formulae (5.91) the operator $S^{(k)}$ is the $k$-th S-operator. In the more general case one needs to check the conditions (5.18), (5.19). Note that due to the condition $\varepsilon(s^{(k)}) = 1$ it is enough to show that the operators $\rho(u) - \varepsilon(u)$ annihilate $(X^* \otimes \cdots \otimes X^*)$ and $(X \otimes \cdots \otimes X)$ by acting from the right and from the left respectively, where $u$ runs over the abstract algebra or at least over a set of its generators.
The pairing operators $A^{(k)}$ are obtained in the same way. Usually one needs to consider the same algebra, augmentation and element $s^{(k)}$ with a different representation $\rho$, or the same representation with different $s^{(k)}$ and $\varepsilon$. Let us summarise.
Theorem 5.26. Let $\rho$ and $\varepsilon$ be algebra homomorphisms from an abstract algebra to $\operatorname{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ and to $\mathbb{C}$ respectively. Suppose that the image of $\rho$ contains the operator algebra $U_k$. Let $s^{(k)}$ be a normalised left and right invariant element: $\varepsilon(s^{(k)}) = 1$ and $u s^{(k)} = s^{(k)} u = \varepsilon(u) s^{(k)}$ for all $u$. If $\rho(u) - \varepsilon(u) \in U^+_k$ for all $u \in \rho^{-1}(U_k)$, then $S^{(k)} = \rho(s^{(k)}) \in \operatorname{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ is the $k$-th S-operator.
Remark 5.27. The augmentation $\varepsilon$ defines the ideal $I_k := \operatorname{Ker}(\varepsilon)$. It is a maximal ideal consisting of the elements $u - \varepsilon(u)$. The conditions $\rho(u) - \varepsilon(u) \in U^{\pm}_k$ $\forall\, u \in \rho^{-1}(U_k)$ are equivalent to $\rho(I_k) \cap U_k \subset U^{\pm}_k$; they can be written in the form $\rho(u) \in U^{\pm}_k$ $\forall\, u \in I_k \cap \rho^{-1}(U_k)$. In terms of this ideal the conditions (5.94) take the form $u s^{(k)} = s^{(k)} u = 0$ $\forall\, u \in I_k$ and $1 - s^{(k)} \in I_k$.
Conversely, for any maximal ideal $I_k$ of the algebra $U_k$ there is a unique algebra isomorphism $U_k/I_k \cong \mathbb{C}$, so the canonical projection $U_k \twoheadrightarrow U_k/I_k$ defines an augmentation $\varepsilon \colon U_k \to \mathbb{C}$. In particular, the ideals $U^{\pm}_k$ give the augmentations $\varepsilon^{\pm}$. If the algebra $U_k$ admits an anti-automorphism which does not change the generators, then it is enough to check only the left invariance (or only the right invariance) due to the following fact.
Proposition 5.28. Let ε : U k → C be a homomorphism.
• If a solution of (5.94) exists then it is unique.
Consider the case when $P$ satisfies the braid relation $P_{(12)} P_{(23)} P_{(12)} = P_{(23)} P_{(12)} P_{(23)}$. Since $P^2 = 1$ we have the homomorphisms $\rho_{\pm} \colon \mathbb{C}[S_k] \to \operatorname{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ defined by the formulae $\rho_{\pm}(\sigma_a) = \pm P_{(a,a+1)}$. The role of $U_k$ is played by $\mathbb{C}[S_k]$. Since $G^{\pm}_k = \rho_{\pm}(S_k)$, the groups $G^+_k$ and $G^-_k$ are finite. The augmentation $\varepsilon \colon \mathbb{C}[S_k] \to \mathbb{C}$ is the counit $\varepsilon(\sigma) = 1$ $\forall\, \sigma \in S_k$. The operators (5.93) coincide with the images of $s^{(k)} = \frac{1}{k!} \sum_{\sigma \in S_k} \sigma$ under the homomorphisms $\rho_+$ and $\rho_-$. Note that $A^{(k)}$ can also be obtained as the image of $a^{(k)} = \frac{1}{k!} \sum_{\sigma \in S_k} (-1)^{\sigma} \sigma$; in this case one needs to consider the augmentation $\varepsilon(\sigma_a) = -1$.
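For the standard action of $S_k$ on $(\mathbb{C}^n)^{\otimes k}$ the averages of $\rho_+(\sigma)$ and $(-1)^{\sigma}\rho_+(\sigma)$ are the symmetrizer and antisymmetrizer. The sketch below (with our own helper names `perm_op` and `parity`) builds them for small $n, k$ and checks that they are idempotents whose ranks equal the dimensions of the symmetric and exterior powers.

```python
import itertools, math
import numpy as np

n, k = 3, 2                       # the space (C^3)^{⊗2}
dim = n**k
basis = list(itertools.product(range(n), repeat=k))
index = {b: a for a, b in enumerate(basis)}

def perm_op(sigma):
    """Matrix of the operator permuting the tensor factors by sigma."""
    T = np.zeros((dim, dim))
    for b in basis:
        image = tuple(b[sigma[a]] for a in range(k))
        T[index[image], index[b]] = 1
    return T

def parity(sigma):
    inv = sum(1 for a in range(k) for b in range(a + 1, k)
              if sigma[a] > sigma[b])
    return (-1)**inv

perms = list(itertools.permutations(range(k)))
S = sum(perm_op(s) for s in perms) / math.factorial(k)              # s^(k)
A = sum(parity(s) * perm_op(s) for s in perms) / math.factorial(k)  # a^(k)

assert np.allclose(S @ S, S) and np.allclose(A @ A, A)
# ranks = dimensions of the symmetric and exterior powers
assert round(np.trace(S)) == math.comb(n + k - 1, k)
assert round(np.trace(A)) == math.comb(n, k)
```

The trace of a projector equals its rank, which is how the dimensions $\binom{n+k-1}{k}$ and $\binom{n}{k}$ appear.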

Examples of minor operators
Here we construct pairing operators for the examples given in Section 3 and consider the corresponding minors. Since the Manin matrices described in Subsections 3.1 and 3.2 are particular cases of the $(\bar q, \bar p)$-Manin matrices, it is sufficient to consider minors for the case of Subsection 3.3. The formulae for the S- and A-minors of the $(\bar q, \bar p)$-Manin matrices are valid in a more general setting: for $(B, A_{\bar p})$- and $(A_{\bar q}, B)$-Manin matrices respectively. By starting with the idempotents $\widehat R^q_{n-}$ introduced in Subsection 4.1 we obtain other pairing operators, which give related minor operators for q-Manin matrices. Finally we investigate the case of Subsection 3.4.
Since $\{\psi_{i_1} \cdots \psi_{i_k} \mid 1 \leqslant i_1 < \ldots < i_k \leqslant n\}$ is a basis of the space $\Xi_{A_{\bar q}}(\mathbb{C})_k$, its dimension is $r_k = \dim \Xi_{A_{\bar q}}(\mathbb{C})_k = \dim \Xi^*_{A_{\bar q}}(\mathbb{C})_k = \binom{n}{k}$. By using the formula (6.5) one can directly check that $\operatorname{tr} A^{(k)} = r_k$. Let us calculate the A-minors of an $(A_{\bar q}, \bar A)$-Manin matrix $M \in R \otimes \operatorname{Hom}(\mathbb{C}^m, \mathbb{C}^n)$, where $\bar A \in \operatorname{End}(\mathbb{C}^m \otimes \mathbb{C}^m)$ is an arbitrary idempotent. For any $I = (i_1 < \ldots < i_k \leqslant n)$ and $J = (j_1, \ldots, j_k)$, where $j_1, \ldots, j_k = 1, \ldots, m$, we obtain the entries (6.7) (the other entries vanish). Note that this is in agreement with the formulae (5.60) and (6.3).
Together with the properties of the q-determinant with respect to a change of rows formulated in Corollary 3.9, the relation (6.7) implies the formula (6.8) for any $I = (i_1, \ldots, i_k)$ and $J = (j_1, \ldots, j_k)$. Let $\bar A = A_{\bar p}$; then $M$ is a $(\bar q, \bar p)$-Manin matrix. The formulae (5.60) and (6.1) give us the symmetry with respect to the lower indices, where $J = (j_1 < \ldots < j_k \leqslant m)$ and $i_1, \ldots, i_k$ are arbitrary (the entries for other lower indices vanish). This also follows from the formula (6.8) and Corollary 3.9.
Since $A^{(k)} e_{\sigma I}$ is proportional to $A^{(k)} \rho_{\bar q}(\sigma) e_I = (-1)^{\sigma} A^{(k)} e_I$, the vectors $A^{(k)} e_I$ with increasing $I$ form a basis of $A^{(k)} (\mathbb{C}^n)^{\otimes k}$. The corresponding elements of the dual basis are given by (6.10). The A-minors in these bases are exactly the q-minors of the matrix $M$, see (6.11). To consider S-operators for $A = A_{\bar q}$ and the corresponding S-minors we first note the formula (6.12); it is proved in the same way as (3.44). Let $I = (i_1 \leqslant \ldots \leqslant i_k \leqslant n)$, i.e. $I = (i_1, \ldots, i_k)$ such that $1 \leqslant i_1 \leqslant \ldots \leqslant i_k \leqslant n$. By applying the homomorphism $X_{A^{II}_{\bar q}}(\mathbb{C}) \to X_{A_{\bar q}}(\mathbb{C})$, $x_s \mapsto x_{i_s}$, to (6.12) we obtain (6.14). Recall that for a tuple $I = (i_1, \ldots, i_k)$ we defined $\nu_i = |\{s \mid i_s = i\}|$ (see Subsection 5.1). As we mentioned, the stabiliser $(S_k)_I$ has order $\nu_1! \cdots \nu_n!$. This means that the group $(S_k)_I$ is generated by the transpositions $\sigma_s \in (S_k)_I$, since the subgroup of $S_k$ generated by these elements has exactly the order $\nu_1! \cdots \nu_n!$. In other words, we have $(S_k)_I \cong S_{\nu_1} \times \cdots \times S_{\nu_n}$. Denote $\nu_I := |(S_k)_I| = \nu_1! \cdots \nu_n!$.

Pairing operators for q-Manin matrices from Hecke algebras
The formulae of Subsection 6.1 at $\bar q = q^{[n]}$ and $\bar p = q^{[m]}$ give the corresponding formulae for the q-Manin case. Here we construct the minors for the q-Manin matrices by using the idempotent (4.14), supposing that $q \in \mathbb{C} \setminus \{0\}$ is not a root of unity. In particular, this condition implies that the q-numbers do not vanish for any $k \in \mathbb{Z}_{\geqslant 1}$. Recall that due to the formula (4.17) the idempotents $A = \widehat R^q_{n-} = (2_q)^{-1}(q^{-1} - \widehat R^q_n)$ and $A' = A^q_n$ are left-equivalent, so the quadratic algebras for these idempotents coincide: $X_{\widehat R^q_{n-}}(\mathbb{C}) = X_{A^q_n}(\mathbb{C})$ and $\Xi_{\widehat R^q_{n-}}(\mathbb{C}) = \Xi_{A^q_n}(\mathbb{C})$. The 'dual' quadratic algebras do not coincide: since the idempotent $\widehat R^q_{n-}$ is right-equivalent to $A^{q^{-1}}_n$ (the formula (4.18)), the dual algebras are identified accordingly. Let us construct pairing operators for the idempotent $A = \widehat R^q_{n-}$. Due to Lemma 4.3 the dual idempotent $S = 1 - A = \widehat R^q_{n+}$ has the form (4.13). The algebra $U_k$ in this case is the subalgebra of $\operatorname{End}\big((\mathbb{C}^n)^{\otimes k}\big)$ generated by the matrices $(\widehat R^q_n)_{(a,a+1)}$, while its maximal ideals $U^{\pm}_k$ are generated by $(\widehat R^q_{n\pm})_{(a,a+1)}$. Since $\widehat R^q_n$ satisfies the braid relation (4.11) and the Hecke relation (4.16), the role of the algebra $U_k$ is played by the Hecke algebra.
Recall that the Hecke algebra $H^q_k$ is the algebra generated by $h_1, \ldots, h_{k-1}$ with the relations (6.35)–(6.37). For $b \leqslant k$ we identify the subalgebra generated by $h_1, \ldots, h_{b-1}$ with $H^q_b$. Note that the relations (6.36) imply that the elements of $H^q_b$ commute with $h_{b+1}, \ldots, h_{k-1}$. The algebra $U_k$ is the image of the representation (6.38), $h_a \mapsto (\widehat R^q_n)_{(a,a+1)}$. The relations (6.35), (6.37) and the conditions on the parameter $q$ imply that there are exactly two augmentations (6.39), namely $\varepsilon^{\pm}(h_a) = \pm q^{\mp 1}$. Since $\widehat R^q_n - q^{-1} = -2_q \widehat R^q_{n-}$ and $\widehat R^q_n + q = 2_q \widehat R^q_{n+}$, we can check the formula $\rho(u) - \varepsilon^{\pm}(u) \in U^{\pm}_k$ for the generators $u = h_a$, so it is valid for all $u \in H^q_k$. We need idempotents $s^+_{(k)}, s^-_{(k)} \in H^q_k$ invariant with respect to the augmentations $\varepsilon^+$ and $\varepsilon^-$ respectively. Such idempotents were constructed by D. Gurevich at the level of the representation (6.38). We define the idempotents $s^{\pm}_{(k)}$ (as elements of the abstract Hecke algebra) following his work [Gur]. Consider the elements (6.40), (6.41). By applying the augmentations (6.39) we obtain (6.42). The elements (6.40), (6.41) can be defined iteratively by the formulae $t^{\pm}_1 = 1$ and (6.43). Define the elements $s^{\pm}_{(k)}$ iteratively as in (6.44). Note that $\rho(s^{\pm}_{(2)}) = \widehat R^q_{n\pm}$. It was proved in [Gur] that $\rho(s^{\pm}_{(k)})$ are left and right invariant with respect to the corresponding augmentations of $U_k = \rho(H^q_k)$. That proof can be adapted to establish the invariance of $s^{\pm}_{(k)}$ with respect to $\varepsilon^{\pm}$, but we give an alternative proof.
Proposition 6.1. The formulae (6.44) define idempotents $s^{\pm}_{(k)}$ satisfying $\varepsilon^{\pm}(s^{\pm}_{(k)}) = 1$ and the invariance conditions. Hence $S^{(k)} = \rho(s^+_{(k)})$ and $A^{(k)} = \rho(s^-_{(k)})$ are pairing operators for $A = \widehat R^q_{n-}$.
Proof. The first formula follows from (6.42) and the equality $\varepsilon^{\pm}(s^{\pm}_{(k-1)}) = 1$, which can be assumed by induction. Further we prove the left invariance. Suppose $u s^{\pm}_{(k-1)} = \varepsilon^{\pm}(u) s^{\pm}_{(k-1)}$ $\forall\, u \in H^q_{k-1}$. We need to prove $h_a s^{\pm}_{(k)} = \pm q^{\mp 1} s^{\pm}_{(k)}$ for $a = 1, \ldots, k-1$. Denote by $I^{\pm}_{b,k}$ the left ideal of $H^q_k$ generated by the elements $u - \varepsilon^{\pm}(u)$, $u \in H^q_b$. Due to the induction assumption we have $I^{\pm}_{b,k} s^{\pm}_{(k-1)} = 0$ for $b = 1, \ldots, k-1$. Then the relations we need to prove follow from the formulae $h_{k-l} t^{\pm}_k \in \pm q^{\mp 1} t^{\pm}_k + I^{\pm}_{k-l,k}$, which we prove by induction in $l \geqslant 1$. For $l = 1$, by using the relations (6.35)–(6.37) and by taking into account $q^{k-1} + q^{k-2}(q^{-1} - q) = q^{-1} q^{k-2}$ and $q^{1-k} - q^{2-k}(q^{-1} - q) = q \cdot q^{2-k}$ we derive $h_{k-1} t^{\pm}_k \in \pm q^{\mp 1} t^{\pm}_k + I^{\pm}_{k-1,k}$. Suppose that $h_{k-l} t^{\pm}_k \in \pm q^{\mp 1} t^{\pm}_k + I^{\pm}_{k-l,k}$ for some $l \geqslant 1$ and all $k > l$; then for any $k > l + 1$ we obtain the analogous inclusion. Since $u h_{k-1} = h_{k-1} u$ for any $u \in H^q_{k-l-1}$ we have the inclusion $I^{\pm}_{k-l-1,k-1} h_{k-1} \subset I^{\pm}_{k-l-1,k}$, so that $h_{k-l-1} t^{\pm}_k \in \pm q^{\mp 1} t^{\pm}_k + I^{\pm}_{k-l-1,k}$. The right invariance of $s^{\pm}_{(k)}$ now follows from Proposition 5.28, since the map $h_a \mapsto h_a$ extends to an anti-automorphism of $H^q_k$.
Corollary 6.2. We have the symmetric recurrent formulae (6.48).
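The identities $\widehat R - q^{-1} = -2_q \widehat R_-$ and $\widehat R + q = 2_q \widehat R_+$ can be checked numerically for an explicit Hecke R-matrix. The sketch below uses one common convention for $\widehat R^q_n$ (eigenvalues $q^{-1}$ and $-q$, matching $\varepsilon^{\pm}(h_a) = \pm q^{\mp 1}$); the placement of the triangular part is our assumption, and other conventions differ by $q \leftrightarrow q^{-1}$ or transposition.

```python
import numpy as np

q, n = 1.7, 3                    # a generic q (not a root of unity)
dim = n * n
R = np.zeros((dim, dim))
for i in range(n):
    R[i*n + i, i*n + i] = 1/q    # diagonal part
    for j in range(n):
        if i != j:
            R[j*n + i, i*n + j] = 1.0           # flip part
            if i < j:
                R[i*n + j, i*n + j] = 1/q - q   # triangular part

Id2 = np.eye(dim)
# Hecke relation (R - q^{-1})(R + q) = 0 and the braid relation
assert np.allclose((R - Id2/q) @ (R + q*Id2), 0)
Id = np.eye(n)
R12, R23 = np.kron(R, Id), np.kron(Id, R)
assert np.allclose(R12 @ R23 @ R12, R23 @ R12 @ R23)

# the idempotents R_± = s^±_(2) in the image of the representation
two_q = q + 1/q
Rp = (R + q*Id2) / two_q         # q-symmetrizer
Rm = (Id2/q - R) / two_q         # q-antisymmetrizer
assert np.allclose(Rp @ Rp, Rp) and np.allclose(Rm @ Rm, Rm)
assert np.allclose(Rp + Rm, Id2)
# the invariance at k = 2: h s^± = ±q^{∓1} s^±
assert np.allclose(R @ Rp, Rp / q) and np.allclose(R @ Rm, -q * Rm)
```

The last two assertions are exactly the $k = 2$ instances of the invariance conditions proved above.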

Pairing operators for the 4-parametric case
Here we consider the case described in Subsection 3.4. The idempotent $A = A^{a,b,c}_{\kappa}$ is parametrised by $a, b, c \in \mathbb{C} \setminus \{0\}$ and $\kappa \in \mathbb{C}$. The commutation relations for the algebra $\Xi_A(\mathbb{C})$ are $\psi_i \psi_j + \sum_{k,l=1}^3 \psi_k \psi_l P_{kl,ij} = 0$. They have the form $\psi_i \psi_j + a_{ij}^2 \psi_j \psi_i = 0$, $i \neq j$, and $2\psi_i^2 = \sum_{k,l=1}^3 \varepsilon_{ikl} a_{lk} \psi_k \psi_l$, or equivalently the form (6.63), where $C_3$ is the set of cyclic permutations $(1,2,3)$, $(2,3,1)$, $(3,1,2)$. The algebra $\Xi^*_A(\mathbb{C})$ is defined by the commutation relations $\psi^i \psi^j + \sum_{k,l=1}^3 P_{ij,kl} \psi^k \psi^l = 0$. This implies that the idempotents $A^{a,b,c}_{\kappa}$ and $A_{\bar q}$ are right-equivalent. From the relations (6.63) we see that $\dim \Xi_{A^{a,b,c}_{\kappa}}(\mathbb{C})_2 = 3$. However, the dimension of $\Xi_{A^{a,b,c}_{\kappa}}(\mathbb{C})_3$ is not always equal to 1; it depends on the parameters. The following theorem describes this dependence and gives a necessary and sufficient condition for the existence of the corresponding pairing operator. Note that it is enough to consider the case $\kappa \neq 0$, since the case of the idempotent $A^{a,b,c}_0 = A_{\bar q}$ was considered in Subsection 6.1.
Any two of these conditions imply the third one. We have: $\dim \Xi_{A^{a,b,c}_{\kappa}}(\mathbb{C})_3$ equals 3 iff all three conditions (6.64) hold; it equals 1 iff one and only one of the three conditions (6.64) holds; it equals 0 iff none of the conditions (6.64) holds. Moreover, $\dim \Xi_{A^{a,b,c}_{\kappa}}(\mathbb{C})_k = 0$ for $k \geqslant 4$. The third A-operator exists iff the condition (i) holds and (ii), (iii) do not, that is iff $a^2 = b^2 = c^2$ and ($a^6 \neq -1$ or $\kappa^3 \neq -abc$). It equals $A^{(3)} = w_1 w^1$, where
$w_1 = \frac{1}{6}(e_{123} + e_{231} + e_{312}) - \frac{a^2}{6}(e_{132} + e_{213} + e_{321})$, (6.65)
$w^1 = e^{123} + e^{231} + e^{312} - a^{-2}(e^{132} + e^{213} + e^{321}) - \kappa(b^{-1} e^{111} + c^{-1} e^{222} + a^{-1} e^{333})$. (6.66)
In this case the elements $\psi_i \psi_j \psi_k \in \Xi_{A^{a,b,c}_{\kappa}}(\mathbb{C})_3$ can be written explicitly.
Proof. Under the condition (i) both conditions (ii) and (iii) give $a^6 = -1$, which in turn implies $-a^3 b^{-1} c = a^{-3} b^{-1} c$. Hence (i) implies the equivalence of (ii) and (iii). Further, by comparing the conditions (ii) and (iii) we see that together they imply (i). Now let us use Theorem 5.12. Since the idempotents $S^{a,b,c}_{\kappa} = 1 - A^{a,b,c}_{\kappa}$ and $S_{\bar q} = 1 - A_{\bar q}$ are left-equivalent, the space $W_k$ for the case $A = A^{a,b,c}_{\kappa}$ coincides with the space $W_k$ for $A = A_{\bar q}$. In particular, $W_3 = \mathbb{C} w^1$. The space $W_3$ consists of the covectors $\xi = \sum_{i,j,k=1}^3 \xi_{ijk} e^{ijk}$ satisfying $\xi P_{(12)} = \xi P_{(23)} = -\xi$. This gives us the system of equations (6.70), (6.71). The coefficients $\xi_{ijk}$ can be divided into the three sets (6.72), (6.73), (6.74). The relations (6.70), (6.71) imply that any two coefficients from the same set are proportional to each other (with a non-zero coefficient of proportionality), so that $\dim W_3 \leqslant 3$. The isomorphism $\Xi_A(\mathbb{C})_3 \cong W_3^*$ implies that $\dim \Xi_A(\mathbb{C})_3 = \dim W_3$. Let us prove that the non-vanishing of the coefficients from the sets (6.72), (6.73), (6.74) corresponds exactly to the conditions (i), (ii), (iii) respectively.
Remark 6.4. The dimension of the space $X_A(\mathbb{C})_3$ also depends on the values of the parameters $a, b, c, \kappa$. By using the relations (3.51) one can relate $xyz$ with $zyx$ in two different ways; as a result we obtain two different expressions for $xyz - a^{-2} b^{-2} c^2 zyx$. Assume $\kappa \neq 0$. Then we see that the elements $x^3, y^3, z^3$ are linearly independent iff the condition (i) holds. One can deduce that the dimension of the subspace spanned by the elements $x_i^2 x_j$, $x_i x_j x_i$, $x_j x_i^2$, $(i,j,k) \in C_3$, equals 4 if the condition (ii) holds and equals 3 if this condition is false. The dimension of the subspace spanned by the elements $x_j^2 x_i$, $x_j x_i x_j$, $x_i x_j^2$, $(i,j,k) \in C_3$, depends on the condition (iii) in the same way. Thus the difference $\dim X_A(\mathbb{C})_3 - \dim \Xi_A(\mathbb{C})_3$ does not depend on the values of the parameters and equals 9. Moreover, the third S-operator $S^{(3)}$ for the idempotent $A^{a,b,c}_{\kappa}$ exists iff the A-operator $A^{(3)}$ exists.
Finally we write the A-minors for an $(A^{a,b,c}_{\kappa}, B)$-Manin matrix $M$, where $(i,j,k) \in C_3$. Let $a^2 = b^2 = c^2$ and suppose that the condition (ii) does not hold ($a^6 \neq -1$ or $\kappa^3 \neq -abc$). The components of the pairing operator $A^{(3)} = w_1 w^1$ are obtained from (6.65), (6.66); this gives explicit expressions for the minors, including the case when $i_k = i_l$ for some $k \neq l$.

Manin matrices of types B, C and D
Recall that the $A_n$-Manin matrices and $A^q_n$-Manin matrices are related to the Yangians $Y(\mathfrak{gl}_n)$ and the quantum affine algebras $U_q(\widehat{\mathfrak{gl}}_n)$ respectively. The Lie algebra $\mathfrak{gl}_n = \mathfrak{gl}(n, \mathbb{C})$ is usually regarded as a 'type A' case, since $\mathfrak{gl}_n \cong \mathbb{C} \oplus \mathfrak{sl}_n$, where $\mathfrak{sl}_n = \mathfrak{sl}(n, \mathbb{C})$ is the simple Lie algebra of type $A_{n-1}$. Hence these Manin matrices can be attributed to the type A. Moreover, the minor operators for the more general $(\bar q, \bar p)$-Manin matrices are described by means of the symmetric groups $S_k$, which are the Weyl groups of type $A_{k-1}$ and participate in the Schur–Weyl duality with the Lie algebras $\mathfrak{gl}_n$. A generalisation of the $A_n$-Manin matrices to the types B, C and D was introduced by A. Molev. Recall that $\mathfrak{so}_{2r+1} = \mathfrak{so}(2r+1, \mathbb{C})$, $\mathfrak{sp}_{2r} = \mathfrak{sp}(2r, \mathbb{C})$ and $\mathfrak{so}_{2r} = \mathfrak{so}(2r, \mathbb{C})$ are the simple Lie algebras of types $B_r$, $C_r$ and $D_r$ respectively (where $r \geqslant 2$ for $D_r$). In this section we assume that $n = 2r + 1$ in the $B_r$ case and $n = 2r$ in the $C_r$ and $D_r$ cases.

Molev's definition and corresponding quadratic algebras
By starting with the definition of Manin matrices of types B, C and D we interpret them as Manin matrices for quadratic algebras. To do this we use the notation $i' = n - i + 1$ and $\varepsilon_i = 1$ for $i = 1, \ldots, r$, $\varepsilon_i = -1$ for $i = r+1, \ldots, n$ (the notation $\varepsilon_i$ is used in the $C_r$ case only). Introduce the operators $Q_n \in \operatorname{End}(\mathbb{C}^n \otimes \mathbb{C}^n)$ for the B and D cases and $\widetilde Q_n \in \operatorname{End}(\mathbb{C}^n \otimes \mathbb{C}^n)$ for the C case. One can check that these operators satisfy the following relations:
$(Q_n)^2 = n Q_n$, \quad $P_n Q_n = Q_n P_n = Q_n$, (7.1)
$(\widetilde Q_n)^2 = n \widetilde Q_n$, \quad $P_n \widetilde Q_n = \widetilde Q_n P_n = -\widetilde Q_n$, (7.2)
where $P_n$ is the permutation operator defined in Subsection 3.1.
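The relations (7.1), (7.2) can be verified numerically. The sketch below uses the explicit matrix entries $(Q_n)_{ij,kl} = \delta_{i,j'}\,\delta_{k,l'}$ and $(\widetilde Q_n)_{ij,kl} = \varepsilon_i \varepsilon_k\, \delta_{i,j'}\,\delta_{k,l'}$; this explicit form is our assumption (cf. Molev's construction), the text above only states the relations.

```python
import numpy as np

n = 4                          # the D case: n = 2r with r = 2
r = n // 2
prime = lambda i: n - i + 1    # the involution i' = n - i + 1 (1-based)
eps = [1]*r + [-1]*r           # ε_i for the C case

Q = np.zeros((n*n, n*n))       # assumed: (Q)_{ij,kl} = δ_{i,j'} δ_{k,l'}
Qt = np.zeros((n*n, n*n))      # assumed: (Q̃)_{ij,kl} = ε_i ε_k δ_{i,j'} δ_{k,l'}
for i in range(1, n + 1):
    for k in range(1, n + 1):
        row = (i - 1)*n + (prime(i) - 1)
        col = (k - 1)*n + (prime(k) - 1)
        Q[row, col] = 1
        Qt[row, col] = eps[i - 1] * eps[k - 1]

P = np.zeros((n*n, n*n))       # the permutation operator P_n
for i in range(n):
    for j in range(n):
        P[j*n + i, i*n + j] = 1

# the relations (7.1) and (7.2)
assert np.allclose(Q @ Q, n * Q)
assert np.allclose(P @ Q, Q) and np.allclose(Q @ P, Q)
assert np.allclose(Qt @ Qt, n * Qt)
assert np.allclose(P @ Qt, -Qt) and np.allclose(Qt @ P, -Qt)
```

With these relations one checks directly that $B_n = A_n + Q_n/n$ and $\widetilde B_n = A_n - \widetilde Q_n/n$ below are idempotents.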
Definition 7.1 ([Molev]). A matrix $M \in R \otimes \operatorname{End}(\mathbb{C}^n)$ is a Manin matrix of type B (for odd $n$) or D (for even $n$) if it satisfies
$(1 - P_n)\, M^{(1)} M^{(2)} \Big( \frac{1 + P_n}{2} - \frac{Q_n}{n} \Big) = 0$. (7.3)
A matrix $M \in R \otimes \operatorname{End}(\mathbb{C}^n)$ for even $n$ is a Manin matrix of type C if it satisfies
$\Big( \frac{1 - P_n}{2} - \frac{\widetilde Q_n}{n} \Big) M^{(1)} M^{(2)} (1 + P_n) = 0$. (7.4)
Introduce the operators $B_n \in \operatorname{End}(\mathbb{C}^n \otimes \mathbb{C}^n)$ for the B and D cases and $\widetilde B_n \in \operatorname{End}(\mathbb{C}^n \otimes \mathbb{C}^n)$ for the C case as
$B_n = \frac{1 - P_n}{2} + \frac{Q_n}{n} = A_n + \frac{Q_n}{n}$, \quad $\widetilde B_n = \frac{1 - P_n}{2} - \frac{\widetilde Q_n}{n} = A_n - \frac{\widetilde Q_n}{n}$. (7.5)
The formulae (7.1), (7.2) imply that these operators are idempotents. We see that the relations (7.3) and (7.4) can be written in the form of the definition (2.31) by means of the idempotents $A_n = \frac{1 - P_n}{2}$, $B_n$ and $\widetilde B_n$.
Proposition 7.2. A matrix $M \in R \otimes \operatorname{End}(\mathbb{C}^n)$ is a Manin matrix of type B or D iff it is an $(A_n, B_n)$-Manin matrix. A matrix $M \in R \otimes \operatorname{End}(\mathbb{C}^n)$ is a Manin matrix of type C iff it is a $(\widetilde B_n, A_n)$-Manin matrix (for even $n$).
Now let us consider the quadratic algebras related to the idempotents $B_n$ and $\widetilde B_n$.
Proof. The relation $B_n (X \otimes X) = 0$ has the form
$\frac{1 - P_n}{2}(X \otimes X) + \frac{Q_n}{n}(X \otimes X) = 0$. (7.8)
Multiplication by $Q_n$ from the left gives $Q_n (X \otimes X) = 0$, so we derive (7.6). Analogously, (7.7) can be obtained from $(\Psi \otimes \Psi)(1 - B_n) = 0$ by multiplication by $Q_n$ from the right. The converse implications are obvious.
The algebra $X_{B_n}(\mathbb{C})$ is the quotient of the polynomial algebra $\mathbb{C}[x_1, \ldots, x_n] = X_{A_n}(\mathbb{C})$ by the relation
$\sum_{i=1}^n x_i x_{i'} = 0$. (7.9)
The group of matrices $G \in GL(n, \mathbb{C})$ preserving the symmetric bilinear form $g_n(x, y) = \sum_{i=1}^n x_i y_{i'}$ is isomorphic to $O(n, \mathbb{C})$. The algebra $\Xi_{B_n}(\mathbb{C})$ is generated by $\lambda, \psi_1, \ldots, \psi_n$ with the relations
$\psi_i \psi_j + \psi_j \psi_i = \lambda \delta_{i,j'}$. (7.10)
Note that $\lambda = \psi_1 \psi_n + \psi_n \psi_1 = \frac{2}{n} \sum_{i=1}^n \psi_i \psi_{i'}$ is a central element and the Grassmann algebra $\Xi_{A_n}(\mathbb{C})$ is the quotient of $\Xi_{B_n}(\mathbb{C})$ by the relation $\lambda = 0$ (by fixing a non-zero value of $\lambda$ we obtain the Clifford algebra $Cl_n(\mathbb{C})$).
The algebra $X_{\widetilde B_n}(\mathbb{C})$ is generated by $\lambda, x_1, \ldots, x_n$ with the relations (7.11), where $n = 2r$. If $n \geqslant 4$ then the element $\lambda = x_1 x_n - x_n x_1 = \frac{1}{r} \sum_{i=1}^r (x_i x_{i'} - x_{i'} x_i)$ is central and $X_{\widetilde B_n}(\mathbb{C})$ is the universal enveloping algebra of the $(n+1)$-dimensional Heisenberg Lie algebra (by fixing a non-zero value of $\lambda$ we obtain the Weyl algebra $\mathcal{A}_r(\mathbb{C})$).
A Manin matrix of type B, C or D does not always remain one under operations such as taking a submatrix, permuting rows or columns, doubling a row or column, or compositions of these, but in some cases it does.
Corollary 7.7. Let $I = (i_1, \ldots, i_k)$ and $J = (j_1, \ldots, j_l)$, where $1 \leqslant i_s \leqslant n$ and $1 \leqslant j_t \leqslant m$ for all $s = 1, \ldots, k$ and $t = 1, \ldots, l$. Let $M = (M_{ij})$ be an $n \times m$ matrix over $R$ and let $M_{IJ}$ be the $k \times l$ matrix with entries $(M_{IJ})_{st} = M_{i_s j_t}$.
• Let $M$ be an $(A_n, B_m)$-Manin matrix and $j_s + j_t \neq m + 1$ for all $s, t = 1, \ldots, l$; then $M_{IJ}$ is an $(A_k, A_l)$-Manin matrix.
• Let $M$ be an $(A_n, B_m)$-Manin matrix and $j_s + j_t = m + 1$ iff $s + t = l + 1$ (this condition implies that $j_1, \ldots, j_l$ are pairwise distinct and hence $l \leqslant m$); then the matrix $M_{IJ}$ is an $(A_k, B_l)$-Manin matrix.
• Let $M$ be a $(\widetilde B_n, A_m)$-Manin matrix and $i_s + i_t \neq n + 1$ for all $s, t = 1, \ldots, k$; then $M_{IJ}$ is an $(A_k, A_l)$-Manin matrix.
• Let $M$ be a $(\widetilde B_n, A_m)$-Manin matrix and $i_s + i_t = n + 1$ iff $s + t = k + 1$ (this condition implies that $i_1, \ldots, i_k$ are pairwise distinct and hence $k \leqslant n$); then $M_{IJ}$ is a $(\widetilde B_k, A_l)$-Manin matrix.
By multiplying (7.20) by $1 - \widetilde B_n$ from the right we see that $M = T(u) e^{-\frac{\partial}{\partial u}}$ is a $\widetilde B_n$-Manin matrix and hence a Manin matrix of type C.

Minor operators for B, C, D cases and Brauer algebras
Recall that the pairing operators $S^{(k)}$ and $A^{(k)}$ for the idempotent $A_n$ are the symmetrizers and antisymmetrizers of the $k$-th tensor power. We denote them by $S^{\mathfrak{gl}_n}_{(k)}$ and $A^{\mathfrak{gl}_n}_{(k)}$ to distinguish them from other pairing operators. The A-minors of a Manin matrix of type B or D (or, more generally, of an $(A_n, B_m)$-Manin matrix) are the column determinants of submatrices. The S-minors of a Manin matrix $M$ of type C (or, more generally, of a $(\widetilde B_n, A_m)$-Manin matrix $M$) are the normalised row permanents $\frac{1}{\nu_J} \operatorname{perm}(M_{IJ})$ (see the formulae (5.4) and (6.15)).
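The column determinant orders the non-commuting factors of each summand by column index, $\operatorname{cdet} M = \sum_{\sigma} (-1)^{\sigma} M_{\sigma(1)1} M_{\sigma(2)2} \cdots M_{\sigma(k)k}$. A minimal sketch (our own helper names) with SymPy's non-commutative symbols:

```python
from itertools import permutations
import sympy as sp

def parity(sigma):
    inv = sum(1 for a in range(len(sigma)) for b in range(a + 1, len(sigma))
              if sigma[a] > sigma[b])
    return (-1)**inv

def cdet(M):
    """Column determinant: in each summand the factors are ordered
    left to right by column index."""
    k = M.rows
    res = 0
    for sigma in permutations(range(k)):
        term = sp.Integer(parity(sigma))
        for col in range(k):
            term *= M[sigma[col], col]   # order of factors is preserved
        res += term
    return sp.expand(res)

# non-commuting entries: the order of the factors matters
a, b, c, d = sp.symbols('a b c d', commutative=False)
M = sp.Matrix([[a, b], [c, d]])
assert cdet(M) == sp.expand(a*d - c*b)

# for commuting entries cdet reduces to the ordinary determinant
N = sp.Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 10]])
assert sp.simplify(cdet(N) - N.det()) == 0
```

For a Manin matrix the value of $\operatorname{cdet}$ is well defined even though the entries do not commute, which is what makes these minors meaningful.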
The S-minors for the Manin matrices of types B, D and the A-minors for the Manin matrices of type C are given by the $k$-th S-operators for $B_n$ and by the $k$-th A-operators for $\widetilde B_n$ respectively. They can be constructed by the method described in Subsection 5.7. In this case the role of the algebra $U_k$ is played by the Brauer algebra [Br].
If $\omega$ is a positive integer or an even negative integer, then the Brauer algebra $\mathcal{B}_k(\omega)$ has the following representation on a tensor power of a finite-dimensional vector space, extending the representation $\rho_+$ or $\rho_-$ of the symmetric group (see e.g. [Molev]).
The element $\epsilon_{ab}$ is presented by the diagram in which the $a$-th and $b$-th points of the top row are connected to each other, the $a$-th and $b$-th points of the bottom row are connected to each other, and every other top point is connected to the bottom point directly below it. To multiply two elements one puts the corresponding diagrams one over the other and replaces each arising loop by a factor $\omega$. This simplifies calculations in the Brauer algebra.
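The diagrammatic multiplication just described can be programmed directly. The sketch below (data layout and helper names are our own) represents a Brauer diagram as a perfect matching of $2k$ points and computes the product together with the number of closed loops, so that $d_1 d_2 = \omega^{\text{loops}} \cdot (\text{product diagram})$.

```python
def brauer_mult(d1, d2, k):
    """Multiply two Brauer diagrams on k strands.  A diagram is a list of
    pairs matching the 2k points 0..2k-1 (0..k-1 = top row, k..2k-1 =
    bottom row).  d1 is drawn above d2; gluing d1's bottom row to d2's
    top row may close loops, each contributing a factor ω."""
    edges = []
    for a, b in d1:
        edges.append((('T', a) if a < k else ('M', a - k),
                      ('T', b) if b < k else ('M', b - k)))
    for a, b in d2:
        edges.append((('M', a) if a < k else ('B', a - k),
                      ('M', b) if b < k else ('B', b - k)))
    adj = {}
    for eid, (u, v) in enumerate(edges):
        adj.setdefault(u, []).append((eid, v))
        adj.setdefault(v, []).append((eid, u))
    used, product = set(), []
    lab = lambda node: node[1] if node[0] == 'T' else node[1] + k
    # trace the open paths starting from the 2k free endpoints
    for start in [('T', i) for i in range(k)] + [('B', i) for i in range(k)]:
        if adj[start][0][0] in used:
            continue
        cur = start
        while True:
            eid, nxt = next((e, w) for e, w in adj[cur] if e not in used)
            used.add(eid)
            cur = nxt
            if cur[0] != 'M':
                break
        product.append(tuple(sorted((lab(start), lab(cur)))))
    # the remaining edges form closed loops in the middle row
    loops = 0
    for eid, (u, v) in enumerate(edges):
        if eid in used:
            continue
        loops += 1
        used.add(eid)
        cur = v
        while cur != u:
            e2, w = next((e, w) for e, w in adj[cur] if e not in used)
            used.add(e2)
            cur = w
    return sorted(product), loops

k = 2
eps = [(0, 1), (2, 3)]      # the diagram of ǫ_12 in B_2(ω)
ident = [(0, 2), (1, 3)]    # the identity diagram
assert brauer_mult(eps, eps, k) == (sorted(eps), 1)    # ǫ² = ω ǫ
assert brauer_mult(eps, ident, k) == (sorted(eps), 0)  # ǫ · 1 = ǫ
```

The first assertion reproduces the defining relation $\epsilon_{12}^2 = \omega\, \epsilon_{12}$ of the Brauer algebra.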
Consider the elements $y_b$, which were introduced (up to a constant term) in [N] as analogues of the Jucys–Murphy elements for the Brauer algebra. One can check that each element $y_b$ commutes with the subalgebra $\mathcal{B}_{b-1}(\omega)$. This implies, in particular, the commutativity $[y_a, y_b] = 0$ (see [N] for details). The images of the elements $y_b \in \mathcal{B}_k(n)$ and $y_b \in \mathcal{B}_k(-n)$ under the corresponding homomorphisms are certain explicit matrices.
In particular, by substituting $n = 2r$ and $k = r + 1$ into the expression for $\operatorname{tr} A^{\mathfrak{sp}_n}_{(k)}$ we derive the rank of $A^{\mathfrak{sp}_n}_{(r+1)}$.
Remark 7.13. The existence of the pairing operators for the idempotents $A = B_n$ and $A = \widetilde B_n$ can be deduced from Theorem 5.12. Since $A^{\top} = A$ we have isomorphisms of the spaces $V_k$ and $W_k$ with their counterparts given by the restriction of the map $(\mathbb{C}^n)^{\otimes k} \to (\mathbb{C}^n)^{\otimes k\,*}$, $e_{i_1 \ldots i_k} \mapsto e^{i_1 \ldots i_k}$. Under these isomorphisms the natural pairings are identified with the restrictions of the standard bilinear form $\langle e_{i_1 \ldots i_k}, e_{j_1 \ldots j_k} \rangle = \delta_{i_1 j_1} \cdots \delta_{i_k j_k}$ to the spaces $V_k$ and $W_k$ respectively. Since the subspaces $V_k$ and $W_k$ are given by linear equations with real coefficients, the restrictions of this bilinear form are non-degenerate, so the pairing operators $S^{(k)}$ and $A^{(k)}$ exist for any $k$. In particular, the pairing A-operators for $B_n$ and the pairing S-operators for $\widetilde B_n$ also exist.

Conclusion
It was demonstrated in the works [CF, CFR, CM, RTS, MR, CFRS] as well as in the current paper that Manin matrices have many applications to integrable systems, representation theory and other fields of mathematics and physics. These results show the importance of the non-commutative geometry developed by Manin in [Man87, Man88, Man89, Man91, Man92] for many questions of mathematics and mathematical physics. By switching from A-Manin matrices to the more general (A, B)-Manin matrices we obtain a larger class of useful examples, such as the $(\bar q, \bar p)$-Manin matrices and the Manin matrices of types B, C and D. In particular, the theory of the $(\bar q, \bar p)$-Manin matrices gives a more complete picture for the q-Manin matrices.
It was shown that tensor notation and the use of idempotents give a convenient approach to Manin matrices. In particular, this approach allows one to generalise the notion of minors to the general case. The resulting general theory of minors relates Manin matrices with the representation theory of the symmetric groups and of their generalisations such as Hecke and Brauer algebras. This hints at a possible deeper relation between the theory of Manin matrices and Schur–Weyl duality.
The left equivalence of idempotents gives a relationship between different approaches to Manin matrices. For example, the minors of q-Manin matrices can be constructed by means of the q-antisymmetrizer arising from a representation of the symmetric group as well as by using higher idempotents of the Hecke algebra; the left equivalence implies a simple relation between the minor operators constructed in these two different ways. Moreover, right equivalence can be used to investigate the corresponding quadratic algebras, as was done in Subsection 6.3.
It was also discovered that certain Lax matrices with spectral parameter are Manin matrices generalised to the infinite-dimensional case. Namely, we defined an idempotent $A_n$ acting on a completed tensor power $\mathbb{C}^n[u, u^{-1}]^{\otimes 2}$ and proved that these Lax matrices are exactly the Manin operators that act on the tensor factor $\mathbb{C}[u, u^{-1}]$ by multiplication by functions.

Appendix B. Lie algebras as quadratic algebras
Here we present a quadratic algebra closely related to a finite-dimensional Lie algebra $\mathfrak{g}$.
Let $n = \dim \mathfrak{g} + 1$ and let $\{x_1, \ldots, x_{n-1}\}$ be a basis of $\mathfrak{g}$ with the commutation relations $[x_i, x_j] = \sum_{k=1}^{n-1} C^{ij}_k x_k$, $C^{ij}_k \in \mathbb{C}$. Consider the quadratic algebra with generators $x_1, \ldots, x_{n-1}, x_n$ and relations
$x_i x_j - x_j x_i = \sum_{k=1}^{n-1} C^{ij}_k x_k x_n$, \quad $i, j = 1, \ldots, n-1$, (B.1)
$x_i x_n - x_n x_i = 0$, \quad $i = 1, \ldots, n-1$. (B.2)
Let $C_{\mathfrak{g}} \in \operatorname{End}(\mathbb{C}^n \otimes \mathbb{C}^n)$ be the matrix with entries
$(C_{\mathfrak{g}})_{ij,kl} = C^{ij}_k \delta_{ln} + C^{ij}_l \delta_{kn}$, \quad $i, j, k, l = 1, \ldots, n$, (B.3)
where we set $C^{ij}_n = C^{in}_k = C^{nj}_k = 0$. Since (B.3) is antisymmetric in $i, j$ and symmetric in $k, l$, we have the formulae
$C_{\mathfrak{g}}^2 = 0$, \quad $C_{\mathfrak{g}} A_n = 0$, \quad $A_n C_{\mathfrak{g}} = C_{\mathfrak{g}}$. (B.4)
They imply that the operator $A_{\mathfrak{g}} = A_n - \frac{1}{4} C_{\mathfrak{g}}$ is an idempotent. The relations (B.1) and (B.2) define the algebra $X_{A_{\mathfrak{g}}}(\mathbb{C})$.
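The identities (B.4) and the idempotency of $A_{\mathfrak{g}}$ can be verified numerically for a concrete Lie algebra; the sketch below takes $\mathfrak{g} = \mathfrak{sl}_2$ with the basis $(e, f, h)$ as an assumed example.

```python
import numpy as np

# structure constants of sl2 in the basis (x1, x2, x3) = (e, f, h):
# [e, f] = h, [h, e] = 2e, [h, f] = -2f  (an assumed example)
d = 3          # dim g
n = d + 1      # one extra central generator x_n
C = np.zeros((d, d, d))
C[0, 1, 2], C[1, 0, 2] = 1, -1     # [e, f] = h
C[2, 0, 0], C[0, 2, 0] = 2, -2     # [h, e] = 2e
C[2, 1, 1], C[1, 2, 1] = -2, 2     # [h, f] = -2f

# permutation operator P on C^n ⊗ C^n and the antisymmetrizer A_n = (1-P)/2
P = np.zeros((n, n, n, n))
for i in range(n):
    for j in range(n):
        P[i, j, j, i] = 1
P = P.reshape(n*n, n*n)
A_n = (np.eye(n*n) - P) / 2

# the matrix C_g with entries (C_g)_{ij,kl} = C^{ij}_k δ_{ln} + C^{ij}_l δ_{kn},
# where C^{ij}_k vanishes whenever any index equals n
Cg = np.zeros((n, n, n, n))
for i in range(d):
    for j in range(d):
        for k in range(d):
            Cg[i, j, k, n-1] += C[i, j, k]
            Cg[i, j, n-1, k] += C[i, j, k]
Cg = Cg.reshape(n*n, n*n)

# verify (B.4) and the idempotency of A_g = A_n - C_g/4
assert np.allclose(Cg @ Cg, 0)
assert np.allclose(Cg @ A_n, 0)
assert np.allclose(A_n @ Cg, Cg)
A_g = A_n - Cg / 4
assert np.allclose(A_g @ A_g, A_g)
```

The asserts reproduce exactly the algebraic argument: $A_{\mathfrak{g}}^2 = A_n^2 - \frac{1}{4}(A_n C_{\mathfrak{g}} + C_{\mathfrak{g}} A_n) + \frac{1}{16} C_{\mathfrak{g}}^2 = A_n - \frac{1}{4} C_{\mathfrak{g}}$.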
Remark B.1. By fixing a non-zero value of the central element $x_n$ we obtain the universal enveloping algebra $U(\mathfrak{g})$. In other words, the algebra $X_{A_{\mathfrak{g}}}(\mathbb{C})$ is a quantisation of the algebra $S\mathfrak{g} = \mathbb{C}[\mathfrak{g}^*]$ with the Lie–Poisson bracket $\{x_i, x_j\} = \sum_{k=1}^{n-1} C^{ij}_k x_k$. The central element $x_n$ plays the role of the quantisation parameter.