Commutation Relations and Discrete Garnier Systems

We present four classes of nonlinear systems which may be considered discrete analogues of the Garnier system. These systems arise as discrete isomonodromic deformations of systems of linear difference equations in which the associated Lax matrices are presented in a factored form. A system of discrete isomonodromic deformations is completely determined by commutation relations between the factors. We also reparameterize these systems in terms of the image and kernel vectors at singular points to obtain a separate birational form. A distinguishing feature of this study is the presence of a symmetry condition on the associated linear problems that only appears as a necessary feature of the Lax pairs for the least degenerate discrete Painlevé equations.


Introduction
Associated with any system of linear differential equations is a linear representation of the fundamental group of a sphere punctured at the poles of the system, called the monodromy representation. An isomonodromic deformation is the way in which the system's coefficients change while preserving this monodromy representation [22]. It is known that all the Painlevé equations arise as isomonodromic deformations of second-order differential equations [23]. The Garnier system arises as an isomonodromic deformation of a second-order Fuchsian scalar differential equation with m apparent singularities and m + 3 poles [20]. When we fix three poles, we have m remaining poles that are considered time variables [23]. The simplest nontrivial case, where m = 1, corresponds to the sixth Painlevé equation [18,19].
The focus of this study is a collection of systems that may be regarded as discrete analogues of the Garnier system. We regard these to be nonlinear integrable systems arising as discrete isomonodromic deformations [40]. Our starting point is a regular system of difference equations of the form

σY(x) = A(x)Y(x), (1.1)

where A(x) is a 2 × 2 matrix polynomial whose determinant is of degree N in x, which is called a spectral variable, and where σ = σ_h : f(x) → f(x + h) or σ = σ_q : f(x) → f(qx). These operators are defined in terms of two constants h, q ∈ C subject to the constraints h > 0 and 0 < |q| < 1. The goal of this work is to specify a parameterization of these matrices by giving a factorization,

A(x) = L_1(x) · · · L_N(x), (1.2)

which will be conducive to finding the discrete isomonodromic deformations of (1.1). A discrete isomonodromic deformation is a transformation induced by an auxiliary system of difference equations, which may be written in matrix form as

Ỹ(x) = R(x)Y(x). (1.3)

The transformed matrix, Ỹ(x), satisfies a new equation of the form (1.1), given by

σỸ(x) = Ã(x)Ỹ(x), (1.4)

where consistency in the calculation of σỸ(x) imposes the relation

Ã(x)R(x) = σR(x)A(x), (1.5)

which is compatible with (1.1). Comparing the left- and right-hand sides of (1.5) defines a rational map between the entries of A(x) and Ã(x). The two operators appearing in (1.1) and (1.3) define a Lax pair for the resulting map. Compatibility conditions of the form (1.3) give rise to discrete isomonodromic deformations in the sense of Papageorgiou et al. [40]. It was shown later by Jimbo and Sakai that compatibility relations of the form (1.3), as maps between linear systems, also preserve a connection matrix [24]. This connection matrix, introduced by Birkhoff [6,7], is considered to be a discrete analogue of a monodromy matrix. It is known that various discrete Painlevé equations, QRT maps and general classes of integrable mappings that characterize reductions of partial difference equations arise in this way [38,39,40].
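For completeness, the consistency computation that forces (1.5) can be sketched in one line (assuming R(x) is invertible for generic x):

```latex
\sigma\tilde{Y}(x) = \sigma\bigl(R(x)\,Y(x)\bigr)
                   = \sigma R(x)\,\sigma Y(x)
                   = \sigma R(x)\,A(x)\,Y(x)
                   = \sigma R(x)\,A(x)\,R(x)^{-1}\,\tilde{Y}(x),
```

so that Ã(x) = σR(x) A(x) R(x)^{−1}; multiplying on the right by R(x) recovers (1.5).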
Discrete isomonodromic deformations have more in common with Schlesinger transformations than with continuous isomonodromic deformations. That is, given an A(x), we have a collection of transformations of the form (1.3). In a similar manner to Schlesinger transformations, the system of transformations governed by (1.3) and (1.5) has the structure of a finitely generated lattice [34,35]. Our discrete Garnier systems are systems of elementary transformations generating an action of Z^d for some dimension, d. One of the consequences of (1.2), for the particular choice of L_i we propose, is that the resulting analogues of elementary Schlesinger transformations are simply expressed in terms of commutation relations between the factors. This factorization, and the associated commutation relations, are also features of the work of Kajiwara et al. [26].
An additional novel feature of our work is the presence of symmetric Lax pairs, in which solutions of (1.1) satisfy an extra symmetry constraint. We may take this into consideration by letting Y(x) satisfy relations involving two operators, τ_1 and τ_2, given by (1.6). The composition of τ_1 and τ_2 recovers (1.1), while (1.6b) gives a constraint on the entries of A(x). The operators τ_1 and τ_2 generate a copy of the infinite dihedral group. This additional symmetry is a structure that plays an important role in hypergeometric and basic hypergeometric orthogonal polynomials, biorthogonal functions and related special functions. The constraint naturally manifests itself in the known Lax pairs for the elliptic Painlevé equation [43,55] via their parameterization in terms of theta functions, but it has not manifested itself in any obvious way in the known Lax pairs for more degenerate Painlevé equations.
There are a number of technical issues in presenting the discrete isomonodromic deformations of such systems. Firstly, the classical theory of Birkhoff (see [6,7]) is no longer sufficient to guarantee the existence of solutions; for this we appeal to the work of Praagman [42]. Secondly, the theorems that prescribe discrete isomonodromic deformations do not necessarily preserve the required symmetry. What makes finding the isomonodromic deformations of the symmetric cases tractable is that A(x) can be shown to admit a factorization in terms of a rational matrix B(x). By insisting that B(x) itself takes the same factored form, we are able to describe the discrete isomonodromic deformations of these systems in terms of the same commutation relations as in the non-symmetric case. Thirdly, since the classical fundamental solutions of Birkhoff do not necessarily exist, it is not clear that the analogue of monodromy involving Birkhoff's connection matrix (see [24]) is appropriate. To address this, we give a short account of how discrete isomonodromic deformations preserve the associated Galois group of (1.1). This gives us four classes of system: two difference Garnier systems whose associated linear problems are of the form (1.1), and two symmetric difference Garnier systems, whose associated linear problems are of the form (1.6). We reparameterize these systems in terms of the image and kernel vectors at the singular points. This provides a correspondence between one of our systems and Sakai's q-Garnier system [47]. We also consider specializations whose evolution coincides with discrete Painlevé equations of type q-P(A_k^(1)) for k = 0, 1, 2, 3 and d-P(A_k^(1)) for k = 0, 1, 2. The convention we use is to list the type of the affine root system associated with the surface of initial conditions [46]. This means that the systems we treat appear as the top cases of the discrete Painlevé equations and q-Painlevé equations.
It should be recognised that the phrases symmetric and asymmetric discrete Painlevé equations have been applied to equations arising as deautonomized symmetric and asymmetric QRT maps respectively [29]. In this article, the word symmetric instead means that the associated linear problem possesses an additional symmetry. The notion of a symmetric system of difference equations and that of a symmetric QRT mapping (or its deautonomization) are very different and should not be confused.
The product form for A(x) in (1.1) arises naturally in recent work on reductions of partial difference equations [36,38]. We present a way in which these systems characterize certain periodic and twisted reductions of the lattice Korteweg-de Vries (KdV) equation and the lattice Schwarzian KdV equation [38]. A corollary of this work is that Sakai's q-Garnier system arises as a twisted reduction of the lattice Schwarzian KdV equation, as do its specializations. This work also gives an explicit expression for the evolution in terms of known Yang-Baxter maps.
The plan of the paper is as follows. In Section 2 we give an overview of the theory of linear systems of difference equations, where we formalize the way in which we consider our systems to be isomonodromic. In Section 3 we provide evolution equations for the discrete Garnier systems in terms of variables naturally associated with (1.2) and (1.8), whereas Section 4 gives the same evolution equations in terms of variables associated with the image and kernel at each value of x at which A(x) is singular. Section 5 gives a number of cases in which the evolution of the discrete Garnier systems coincides with known cases of discrete Painlevé equations. Section 6 shows how both cases of the non-symmetric Garnier systems, and their special cases, arise as reductions of the discrete potential KdV equation and the discrete Schwarzian KdV equation.

Linear systems of difference equations
This section provides the relevant theorems concerning systems of linear difference equations. This includes a recapitulation of the classical results of Birkhoff on linear systems of difference and q-difference equations [6,7]. While the work of Birkhoff gives fundamental solutions to systems of difference equations of the form (1.1), it is not sufficient to ensure solutions of systems of the form (1.6).
Secondly, given a system of the form (1.1) or (1.6), we wish to specify the type of transformations we expect. The set of transformations has the structure of a finitely generated lattice. Characterizing these transformations follows the work of Borodin [11], who developed this theory in application to gap probabilities of random matrices [10]; this was extended to q-difference equations in [34].
A secondary issue concerns what structures are being preserved by discrete isomonodromic deformations. The celebrated work of Jimbo and Sakai [24] argues that (1.3) preserves the connection matrix, when it exists. A fundamental object that is preserved under transformations of the form of (1.3) is the structure of the difference module [52]. This provides a more robust definition of what it means to be a discrete isomonodromic deformation. In particular, this holds for discrete isomonodromic deformations of (1.6), or any system in which the existence of a connection matrix may not be assumed.

Systems of linear h-difference equations
We start with (1.1) where σ = σ_h, which we write as

Y(x + h) = A(x)Y(x), (2.1)

where A(x) is a rational M × M matrix that is invertible almost everywhere and h > 0 as above. We may reduce to the case in which A(x) is polynomial by multiplying Y(x) by gamma functions. This means that the form of A(x) can generally be taken to be

A(x) = A_0 + A_1 x + · · · + A_n x^n, (2.2)

where A_n ≠ 0. Furthermore, if A_n is invertible and semisimple then, by applying constant gauge transformations, we can assume that A_n is diagonal. By the same argument, if A_n = I and A_{n−1} is semisimple, we may assume that A_{n−1} is diagonal. Under these assumptions, it is useful to describe an asymptotic form of formal solutions, which is the subject of the following theorem due to Birkhoff [6].
Theorem 2.1 (Birkhoff [6]). If A_n = diag(ρ_1, . . . , ρ_M), where the ρ_i are pairwise distinct, or if A_n = I and A_{n−1} = diag(r_1, . . . , r_M) subject to a non-resonance constraint, then there exists a unique formal matrix solution of the form (2.3), where {d_i} is some set of constants.
Given a solution of (2.1) that is convergent when x ≫ 0 or x ≪ 0, and since h > 0, we may use (2.1) to extend the solution by

Ŷ(x + h) = A(x)Ŷ(x).

This extension introduces possible singularities at translates by integer multiples of h of the points where det A(x) = 0. The values of x at which det A(x) = 0 play an important role in the theory of discrete isomonodromy; hence, it is useful to parameterize the determinant by

det A(x) = ρ_1 · · · ρ_M (x − a_1) · · · (x − a_{Mn}).

Theorem 2.2 (see [11]). Assume that A_n = diag(ρ_1, . . . , ρ_M), with the ρ_i pairwise distinct; then there exist unique solutions of (2.1), Y_l(x) and Y_r(x), such that:

1. The functions Y_l(x) and Y_r(x) are analytic throughout the complex plane except at translates to the left and right by integer multiples of h of the poles of A(x) and A(x − h)^{−1} respectively.

2. In any left or right half-plane, Y_l(x) and Y_r(x) are asymptotically represented by (2.3).
Both Y_l(x) and Y_r(x) form a basis for the solutions of (2.1); both are non-degenerate in the sense that, in the limit as x → ±∞, they possess a non-zero determinant, hence are invertible almost everywhere. The notion that any two non-degenerate solutions of the same difference equation should be related leads us to the concept of a connection matrix. Since Y_l(x) and Y_r(x) are both solutions of (2.1), the connection matrix, defined by

P(x) = Y_r(x)^{−1} Y_l(x),

is periodic in x with period h. The connection matrix plays a role very similar to that of the monodromy matrices for systems of linear differential equations [11]. Given a Fuchsian system of linear differential equations, it is useful to talk about the Riemann-Hilbert or monodromy map, which sends the system to a set of monodromy matrices [9]. The monodromy matrices depend on the coefficients of the given Fuchsian system, and the collection of variables that specify the monodromy matrices are called the characteristic constants. The next theorem defines the characteristic constants for systems of difference equations.

Theorem 2.3. Under the general assumptions of Theorem 2.2, the entries of the connection matrix take a general form in which each p_{i,j}(x) is a polynomial of degree n − 1 with p_{i,i}(0) = 1 and λ_{i,j} denotes the least integer as great as the real part of (log(ρ_i) − log(ρ_j))/2πi.

Perhaps the simplest nontrivial example of such a connection matrix arises from solutions of the one-dimensional case, in which A(x) can be broken down into linear factors of the form x − a_i. In this way we associate a set of constants to each system of linear difference equations by giving a map from the coefficients to the characteristic constants. This gives us M(Mn + 1) constants in total, which is also the number of entries in the coefficient matrices.
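As an illustration of the scalar case (a toy example of our own, not taken from the text): the equation y(x + h) = (x − a)y(x) is solved by y(x) = h^{(x−a)/h} Γ((x − a)/h), via the functional equation Γ(z + 1) = zΓ(z), and the ratio of any two solutions is h-periodic, playing the role of the connection matrix.

```python
import math

h, a = 1.0, 0.25

def y(x):
    # y(x) = h**((x - a)/h) * Gamma((x - a)/h) solves y(x + h) = (x - a) y(x),
    # because Gamma(z + 1) = z * Gamma(z).
    z = (x - a) / h
    return h**z * math.gamma(z)

# Check the difference equation at a few points; any other solution is this
# one times an h-periodic function, the scalar analogue of a connection matrix.
for x in (1.3, 2.7, 5.1):
    assert abs(y(x + h) - (x - a) * y(x)) < 1e-9 * abs(y(x + h))
```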
Theorem 2.5. For any ε_1, . . . , ε_{Mn} ∈ Z and δ_1, . . . , δ_M ∈ Z subject to the natural compatibility constraint, there exists a non-empty Zariski open subset, A ⊂ M_h(a_1, . . . , a_{Mn}; d_1, . . . , d_M; ρ_1, . . . , ρ_M), such that for any (A_0, . . . , A_{n−1}) ∈ A there exist a unique rational matrix, R(x), and a matrix, Ã(x), related by (1.5), realizing the corresponding translation of the parameters.

Fixing some translation, we obtain that (2.1) and (1.3) are a Lax pair for a birational map of algebraic varieties, which we wish to identify as some integrable system. For simplicity, if A_n = I we will also assume that A_{n−1} is semisimple, in which case we may apply a constant gauge transformation so that A_{n−1} is also diagonal. Hence, if A_n = I, we will impose the condition that A_{n−1} is diagonal for any tuple in M_h(a_1, . . . , a_{Mn}; d_1, . . . , d_M; 1, . . . , 1).

Classical q-difference results
We write (1.1) where σ = σ_q as

Y(qx) = A(x)Y(x), (2.5)

where A(x) is a rational M × M matrix that is invertible almost everywhere and 0 < |q| < 1 as above.
The functions required to express the solution of any scalar linear first-order system of q-difference equations are not as commonly used as the gamma function. Hence, before discussing some of the particular existence theorems, let us introduce some standard functions, all of which may be found in [21]. We define the q-Pochhammer symbol by

(x; q)_∞ = ∏_{k=0}^{∞} (1 − xq^k).

The important property of (x; q)_∞ is that

(x; q)_∞ = (1 − x)(qx; q)_∞.

We also have the Jacobi theta function,

θ_q(x) = ∑_{n∈Z} q^{n(n−1)/2} x^n = (q; q)_∞ (−x; q)_∞ (−q/x; q)_∞,

which is analytic over C^* and satisfies

θ_q(qx) = x^{−1} θ_q(x).

The product expression above is known as the Jacobi triple product identity [21]. The function θ_q(x) has simple roots on −q^Z. We define the q-character, e_{q,c}(x), which satisfies e_{q,c}(qx) = c e_{q,c}(x) and has simple zeroes at x = q^Z and simple poles at x = cq^Z. In the special case in which c = q^n, e_{q,q^n}(x) is proportional to x^n. Lastly, we have the q-logarithm, l_q(x), which satisfies σ_q l_q(x) = l_q(x) + 1, and is meromorphic over C^* with simple poles on q^Z.

We may use the functions above to solve any scalar q-difference equation, hence transform (2.5), in which A(x) is rational, to a case in which A(x) is polynomial, given by (2.2). If A_0 and A_n are semisimple and invertible, then by using constant gauge transformations we can assume that one of them, say A_n, is diagonal. Under these circumstances, we may specify two formal solutions.

Lemma 2.6 (Birkhoff [7]). Suppose A_n = diag(κ_1, . . . , κ_M) and A_0 is semisimple with non-zero eigenvalues, θ_1, . . . , θ_M, such that the non-resonance conditions are satisfied; then there exist two formal matrix solutions of the form (2.6).

By using θ_q as our building block for the multiplicative factors appearing on the right in (2.6a), this formulation is slightly different from the original formulation of Birkhoff [7]. These functions have nicer properties with respect to the Galois theory of difference equations [48,52].
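These identities can be checked numerically. A minimal sketch, assuming the common conventions (x; q)_∞ = ∏_{k≥0}(1 − xq^k) and θ_q(x) = ∑_{n∈Z} q^{n(n−1)/2} x^n (our normalization; the paper's may differ by a change of variable):

```python
def qpoch(x, q, terms=200):
    # Truncation of the q-Pochhammer symbol (x; q)_infinity, valid for |q| < 1.
    p = 1.0
    for k in range(terms):
        p *= 1.0 - x * q**k
    return p

def theta_sum(x, q, terms=50):
    # Jacobi theta function as a bilateral sum over n in Z.
    return sum(q**(n * (n - 1) // 2) * x**n for n in range(-terms, terms + 1))

q, x = 0.4, 0.7

# (x; q)_inf = (1 - x)(qx; q)_inf.
assert abs(qpoch(x, q) - (1 - x) * qpoch(q * x, q)) < 1e-12

# Jacobi triple product: theta_q(x) = (q; q)_inf (-x; q)_inf (-q/x; q)_inf.
triple = qpoch(q, q) * qpoch(-x, q) * qpoch(-q / x, q)
assert abs(theta_sum(x, q) - triple) < 1e-10

# theta_q(qx) = theta_q(x)/x, consistent with simple roots on -q**Z.
assert abs(theta_sum(q * x, q) - theta_sum(x, q) / x) < 1e-10
```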
We should mention that the above form can be generalized to the case in which some of the eigenvalues are 0 by using the so-called Birkhoff-Guenther form [8]. Formal solutions defined in terms of the Birkhoff-Guenther form do not necessarily define convergent solutions. This issue of convergence gives rise to a q-analogue of the Stokes phenomenon for systems of linear differential equations [16]. Regardless of the convergence, these solutions may be used to derive deformations of the form (1.3), as shown in [34].
We are interested in solutions defined in open neighborhoods of x = 0 and x = ∞, which may be extended using (2.5). The resulting solutions are singular at q-power multiples of the values of x where det A(x) = 0. For this reason, it is once again convenient to fix where A(x) is not invertible. If A_n is semisimple with eigenvalues κ_1, . . . , κ_M (κ_i ≠ 0) then we parameterize the determinant as

det A(x) = κ_1 · · · κ_M (x − a_1) · · · (x − a_{Mn}). (2.7)

The series part of the solution around x = 0, which we denote Ŷ_0(x), satisfies a q-difference equation of the same form, whereas the series part of the solution around x = ∞, denoted Ŷ_∞(x), satisfies a similar equation. By a succinct argument featured in van der Put and Singer [52, Section 12.2.1], we have solutions, Ŷ_0(x) and Ŷ_∞(x), that are convergent in neighborhoods of x = 0 and x = ∞ respectively.
While we do not have an explicit presentation of the connection matrix, it is generally known to be expressible in terms of elliptic theta functions. In particular, the entries of the connection matrix lie in the field of meromorphic functions on an elliptic curve, i.e., C^*/q^Z. Let us specify the required lattice actions in a similar way to the h-difference case. We denote by M_q(a_1, . . . , a_{Mn}; κ_1, . . . , κ_M; θ_1, . . . , θ_M) the algebraic variety of all n-tuples of M × M matrices, (A_1, . . . , A_n), such that A_n has eigenvalues κ_1, . . . , κ_M and A_0 = diag(θ_1, . . . , θ_M), with determinant specified by (2.7). The natural constraint relating these parameters is obtained by evaluating (2.7) at x = 0.

Theorem 2.8. For any ε_1, . . . , ε_{Mn} ∈ Z and δ_1, . . . , δ_M ∈ Z subject to the natural compatibility constraint, there exists a rational matrix, R(x), such that the matrix Ã(x) = Ã_0 + Ã_1 x + · · · + Ã_{n−1} x^{n−1} + Ã_n x^n defined by (1.5) satisfies

(Ã_1, . . . , Ã_{n−1}) ∈ M_q(a_1 q^{ε_1}, . . . , a_{Mn} q^{ε_{Mn}}; κ_1 q^{δ_1}, . . . , κ_M q^{δ_M}; θ_1, . . . , θ_M).

Proof. It is sufficient to specify an atomic operation, e_{1,1}: a_1 → qa_1, κ_1 → κ_1/q, which, when composed with actions that permute a_1, . . . , a_{Mn} and κ_1, . . . , κ_M, gives all the transformations we require. A matrix that does this is found by using a constant gauge transformation to change basis so that the relevant image and kernel vectors are the new coordinate vectors. We then perform a gauge transformation of the form (1.5) whose effect is dividing the first column by (1 − x/a_1) and multiplying the first row by (1 − x/(a_1 q)).
Reverting to a basis in which A_0 is the constant coefficient matrix, using another constant matrix, gives the required matrix. It should be clear from the determinant that a_1 → qa_1, while looking at Ã(x) asymptotically around x = ∞ it is clear that κ_1 → κ_1/q. Since all these steps were invertible, the inverse atomic operation is also rational; hence, we obtain all possible transformations this way.
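The way a rational gauge of the form (1.5) moves a root of the determinant can be seen in a toy 2 × 2 example (our own illustration, using the convention Ã(x) = R(qx)A(x)R(x)^{−1} and a matrix whose kernel at x = a_1 is already spanned by the first coordinate vector; this particular diagonal gauge realizes the shift a_1 → a_1/q, the inverse direction of e_{1,1}):

```python
import sympy as sp

x, q, a1, a2 = sp.symbols('x q a1 a2', nonzero=True)

# Toy matrix with det A(x) = (x - a1)(x - a2), whose kernel at x = a1
# is spanned by the first coordinate vector.
A = sp.Matrix([[x - a1, 1],
               [0, x - a2]])

# Diagonal rational gauge acting on the first row/column only.
R = sp.Matrix([[1 - x / a1, 0],
               [0, 1]])

At = (R.subs(x, q * x) * A * R.inv()).applyfunc(sp.cancel)

# The transformed matrix is again polynomial in x ...
assert all(e.is_polynomial(x) for e in At)

# ... and the determinant root a1 has moved to a1/q while a2 is untouched.
assert sp.expand(At.det() - q * (x - a1 / q) * (x - a2)) == 0
```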
Systems of linear q-difference equations can also be treated as discrete connections, where the matrix presentations of these systems of linear q-difference equations arise as trivializations of linear maps between the fibres of a vector bundle. In this framework, the theorem above may also be deduced by purely geometric means, as was done in the h-difference case in [5]. The q-difference version of this framework was the subject of a recent paper by Kinzel [28].
Remark 2.9. The elementary translations are those that multiply any collection of up to m of the a i 's by q and multiply the same number of κ j 's by q −1 . For the applications that follow, this formulation will be sufficient, however, this is a slightly less general result than possible. One may generally find a rational matrix in which the θ i values are shifted by q-powers in a way that preserves (2.8).

Difference equations and vector bundles
The aim of this section is to present the theorems required for the existence of meromorphic solutions to (1.6), which we write as two cases, corresponding to the symmetries x → −x − h and x → 1/x. To prove the general existence of solutions with these symmetry properties, we turn to some general results concerning sheaves on compact Riemann surfaces (see [17] for example). For a connected Riemann surface, Σ, we denote the sheaves of holomorphic and meromorphic functions on Σ by O_Σ and M_Σ respectively. A holomorphic or meromorphic vector bundle of rank n is a sheaf of O_Σ-modules or M_Σ-modules which is locally isomorphic to O_Σ^n or M_Σ^n respectively.
Theorem 2.10 ([42, Theorem 3]). Let G be a group of automorphisms of P^1, let L be the limit set of G, and let U be a component of the complement of L; then the corresponding system of equations possesses a meromorphic solution.
The two important examples in this context pertain to the case in which G is a group of automorphisms of P^1 admitting the presentation

⟨τ_1, τ_2 | τ_1^2 = τ_2^2 = 1⟩,

which is often called the infinite dihedral group. In particular, we are interested in the cases in which the group of automorphisms is generated by x → −x and x → −x − h, whose composition is a translation by h, or by x → 1/x and x → q/x, whose composition is multiplication by q.
If we let A_{τ_2} = I in each case and A_{τ_1}(x) be some rational matrix, A(x), the commutation relations on τ_1 and τ_2 require

A(−x − h)A(x) = I and A(1/x)A(x) = I,

respectively.
Lemma 2.11. Let L/K be a quadratic field extension and let A ∈ GL_n(L) be a matrix such that Ā = A^{−1}, where Ā denotes the conjugate of A in L over K. Then there exists a matrix B ∈ GL_n(L) such that A = B̄B^{−1}, and B is unique up to right-multiplication by elements of GL_n(K).
Proof. Given a vector w ∈ L^n, it is easy to see that if v = w̄ + A^{−1}w then v̄ = w + Aw̄ = Av. Applying this to a basis for L^n over K gives at least n vectors satisfying v̄ = Av that are linearly independent over K; taking these as the columns of a matrix B gives B̄ = AB, so that A = B̄B^{−1} (2.9). For uniqueness, suppose two such matrices, B_1 and B_2, satisfy (2.9); then C = B_2^{−1}B_1 satisfies C̄ = C, in which case C ∈ GL_n(K).
Remark 2.12. This lemma is a special case of what is often called "Hilbert's theorem 90", which states that any 1-cocycle of a Galois group with values in GL n is trivial. Hilbert dealt with the case in which Gal(L/K) is cyclic, and n = 1.
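The proof of Lemma 2.11 is constructive, and can be tested numerically for the extension C/R, where conjugation is the entrywise complex conjugate. A sketch under that assumption (the matrix A is manufactured from a random C so that the hypothesis conj(A) = inv(A) holds):

```python
import numpy as np

rng = np.random.default_rng(0)

# Manufacture a matrix with conj(A) = inv(A): any A = conj(C) @ inv(C) works.
C = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = np.conj(C) @ np.linalg.inv(C)
assert np.allclose(np.conj(A), np.linalg.inv(A))

# Following the proof: for a real basis vector w (so conj(w) = w), the
# vector v = w + inv(A) @ w satisfies conj(v) = A @ v.  Over the standard
# basis these vectors assemble into B = I + inv(A).
B = np.eye(3) + np.linalg.inv(A)
assert np.allclose(np.conj(B), A @ B)

# Hence A = conj(B) @ inv(B), provided B is invertible (true generically).
assert np.allclose(np.conj(B) @ np.linalg.inv(B), A)
```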
Specializing to the function fields, where L = C(x) and K is the subfield of rational functions invariant under x → −x − h or x → 1/x, allows us to write A(x) in one of the two forms

A(x) = B(−x − h)B(x)^{−1} or A(x) = B(1/x)B(x)^{−1},

where B(x) is rational. This reduces the problem of determining the algebraic variety of all n-tuples of matrices with a symmetry condition to that of determining n-tuples of matrices with prescribed properties. In particular, it makes sense to parameterize such systems by the corresponding matrices B(x).

In discussing the isomonodromic deformations, we specify two different types of discrete isomonodromic deformations, those that act on the left and those that act on the right, given by (2.11), where λ(x) is some rational scalar factor. This scalar factor only swaps poles and roots of the determinant and should be considered trivial from the perspective of integrability. These two equations should be thought of as the symmetric equivalent of (1.5). We may rigidify the definitions of R_l(x) and R_r(x) by insisting that these matrices are proportional to identity matrices around x = ∞.
If we insist that R_l(x) is invariant under τ_2, i.e., that R_l(x) = τ_2 R_l(x), then it is clear that a transformation of the form (2.11a) coincides with a transformation of the form (1.5); hence, it will be considered a discrete isomonodromic deformation. Furthermore, if we may find such a matrix, then Theorem 2.5 or Theorem 2.8, depending on the case, tells us that this matrix and the resulting transformation are unique; hence, the discrete isomonodromic deformation does preserve the required symmetry.

Preserving the Galois group
The main reason for passing from connection preserving deformations to the Galois theory of difference equations is that we have not shown that systems of the form (1.6) possess connection matrices. While mechanically, we still have Lax pairs using (1.5) or (2.11), the implications of possessing a discrete Lax pair of any form are not generally known. We wish to show that (1.5) and (2.11) preserve the associated Galois group.
This is an issue that is not confined to symmetric Lax pairs. Various Painlevé equations are known to arise from relations of the form (1.5) where the series part of the formal solutions at x = ±∞ or x = 0 is not convergent [33,34]. From an integrable systems perspective, it is useful to know precisely what is preserved, and it turns out the associated difference module is always preserved under transformations of the form (1.3). We require some of the formalism described in [52] to demonstrate this.

Definition 2.13. A difference ring is a commutative ring (or field), R, with 1, together with an automorphism σ : R → R. The constants, denoted C_R, are the elements satisfying σ(f) = f. A difference ideal of a difference ring is an ideal, I, such that σ(I) ⊂ I. If the only difference ideals are 0 and R then the difference ring is called simple.

This is a natural discrete analogue of a differential field. In Picard-Vessiot theory, a Picard-Vessiot extension is formed by extending the field of constants by the solutions of a homogeneous linear ordinary differential equation [52]. The analogue of this for difference equations is the following construction.
Definition 2.14. Let K be a difference field and let (1.1) be a first-order system with A(x) ∈ GL_n(K). We call a K-algebra, R, a Picard-Vessiot ring for (1.1) if:

1) an extension of σ to R is given;

2) R is a simple difference ring;

3) there exists a solution of (1.1) with coefficients in R;

4) R is minimal in the sense that no proper subalgebra satisfies 1), 2) and 3).
We are treating C(x) as a difference field where σ_h and σ_q are the relevant automorphisms. The field of constants contains C extended by the σ-periodic functions (e.g., e^{2iπx/h} and φ_{c,d} = e_{q,c}e_{q,d}/e_{q,cd}). We may formally construct a Picard-Vessiot ring for (1.1) by considering a matrix of indeterminates, Y(x) = (y_{i,j}(x)). We extend σ to K(Y) via the entries of (1.1). If I is a maximal difference ideal, then we obtain a Picard-Vessiot ring for (1.1) by considering the quotient K(Y)/I. Quotienting by a maximal difference ideal ensures the resulting construction is a simple difference ring.
This formal construction may be replaced by a fundamental system of meromorphic solutions of either (1.1) or (1.6) specified by Theorem 2.10. For q-difference equations in general (see [51]), the entries of any solution are elements of the field M(C)(l_q, (e_{q,c})_{c∈C^*}).

Definition 2.15. If R is a Picard-Vessiot ring for (1.1), the Galois group, G = Gal(R/C_R), is the group of automorphisms of R commuting with σ.
Let us briefly describe the role of the connection matrix in this context. We have given conditions for there to exist two fundamental solutions, which we will call Y_1(x) and Y_2(x), distinguished by the regions of the complex plane in which they define meromorphic functions. If we adjoin the entries of Y_1(x) or Y_2(x) we obtain two Picard-Vessiot extensions, denoted R_1 and R_2. We expect R_1 and R_2 to be isomorphic to the formal construction above; in particular, there exists an isomorphism between R_1 and R_2. The connection matrix, P(x), relates the two solutions and thereby defines such an isomorphism between R_1 and R_2. For any generic value of x at which P(x) is defined, P(x) describes a connection map, which is an isomorphism of Picard-Vessiot extensions; hence, for generic values of u and v at which the connection matrix is defined, the matrix P(u)P(v)^{−1} defines an automorphism of R_1. In the case of regular systems of q-difference equations, it is a result of Etingof that the Galois group is a linear algebraic group over C that is generated by matrices of the form P(u)P(v)^{−1} for u, v ∈ C where defined [15]. This mirrors differential Galois theory, where it is known that the differential Galois group is generated by the monodromy matrices, the Stokes matrices and the exponential torus [45]. More generally, the relation between values of the connection matrix and the Galois group has been the subject of work by a number of authors [48,51].
We may generalize the definition of the Galois group from a category-theoretic perspective. Given a difference field, K (e.g., C(x)), with a difference operator σ, we can consider the ring of finite sums of difference operators in a new operator, φ, where φ is defined by the relation φ(λ) = σ(λ)φ for λ ∈ K. We can consider the category of left modules, M, over this ring. Under a suitable choice of basis, we may identify M with K^m. In this basis, the action of φ is identified with a matrix by

φY = AσY. (2.12)

Conversely, given a difference equation of the form σY = AY, we may endow K^m with the structure of a difference module via (2.12).
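The skew relation φλ = σ(λ)φ can be modelled concretely. A minimal sketch (our own illustration, not notation from the text), representing σ and φ by the shift f(x) → f(x + 1) and λ by a multiplication operator:

```python
def sigma(f):
    # The shift automorphism: (sigma f)(x) = f(x + 1).
    return lambda x: f(x + 1)

def phi(f):
    # In this toy model phi acts on functions the same way sigma does;
    # the point is how it interacts with multiplication operators below.
    return lambda x: f(x + 1)

def mult(lam):
    # Multiplication-by-lambda as an operator on functions.
    return lambda f: (lambda x: lam(x) * f(x))

lam = lambda x: x * x + 1.0
f = lambda x: 3.0 * x + 2.0

# The skew relation:  phi ∘ lam  ==  sigma(lam) ∘ phi  as operators.
lhs = phi(mult(lam)(f))
rhs = mult(sigma(lam))(phi(f))
for x in (0.0, 1.5, -2.0):
    assert abs(lhs(x) - rhs(x)) < 1e-12
```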
The object that is being preserved under these deformations is the local system/sheaf of solutions. We could also call these transformations isomodular since the difference module is preserved.
The advantage of this definition is that the category of difference modules over a difference field is a rigid abelian tensor category, and we may use the definitions of [12] to define the Galois group from a category-theoretic perspective. While it is difficult to see a priori that a transformation of the form (1.3) necessarily preserves the Galois group, from the perspective of category theory, isomorphic difference modules resulting from Theorem 2.16 yield isomorphic Galois groups. These structures can be defined without reference to a connection matrix; they require only the existence of the linearly independent set of solutions specified by Theorem 2.10. In particular, this implies that the birational maps of Theorems 2.5 and 2.8 are integrable in the sense of preserving a Galois group. What may be interesting from an integrable systems perspective is to consider the combinatorial data that specifies the difference module. Such data would be the analogue of the characteristic constants involved in isomonodromic deformations, and the map from a given difference module to this data would constitute a discrete analogue of the Riemann-Hilbert map [9].

Discrete Garnier systems
We now turn to the parameterization of our discrete Garnier systems, which draws inspiration from a series of results concerning the description of various integrable autonomous mappings and discrete Painlevé equations in terms of reductions of partial difference equations [38]. We have denoted the various cases of discrete Garnier systems by a value m, chosen so that the case m = 1 coincides with a discrete Garnier system that possesses the sixth Painlevé equation as a limit. In the Garnier systems, increasing m increases the number of poles of the matrix of the associated linear problem, whereas increasing m by one in what we are calling the discrete Garnier systems increases the number of roots of the determinant of the matrix of the associated linear problem by two.

The asymmetric h-difference Garnier system
The variable a parameterizes the value of the spectral parameter, x, at which L is singular. Some of the useful properties of these matrices are hence we think of u as the variable parameterizing the image and kernel vectors. The resulting matrix, A(x), takes the general form where each factor is given by (3.1) subject to the constraints where the values of d_1 and d_2 are which follows from (3.2a). The first two terms in the asymptotic expansion around x = ∞ are where d_1 and d_2 are given by (3.6) and r_{1,2} is given by the left-hand side of (3.4); hence, A_{m+2} = I under these constraints. The value of r_{2,1} is which may be used in a constant lower-triangular gauge transformation that diagonalizes A_{m+1}. This naturally preserves d_1 and d_2, hence defines an element of M_h(a_1, . . . , a_{2m+4}; d_1, d_2; 1, 1).
It should be noted that, as a consequence of (3.7), (3.4), (3.6a) and (3.6b), d_1 and d_2 satisfy a constraint that is necessarily satisfied by any element of M_h(a_1, . . . , a_{2m+4}; d_1, d_2; 1, 1). Suppose we are given A_{m+2} = I and an (m + 2)-tuple in which A_{m+1} has been diagonalized; we wish to know whether there is a corresponding matrix of the form (1.2). We claim that the subvariety of (m + 2)-tuples arising from (1.2) is of the same dimension. If we fix A_{m+2} = I and A_{m+1} = diag(d_1, d_2), then each of the 4(m + 1) entries of the A_i's, for i = 0, . . . , m, is considered free. We have 2m + 3 coefficients of the determinant that are not automatically satisfied. Conjugating by diagonal matrices may also be used to fix one additional off-diagonal entry, which also removes any gauge freedom, giving an algebraic variety of dimension 2m (or 2m + 1 with a gauge freedom). Similarly, a product of the form (1.2) is specified by 2m + 4 values, u_i for i = 1, . . . , 2m + 4, subject to two constraints (one of which is (3.4)), one gauge freedom and two constants related by (3.9), giving a total of 2m + 2 free variables. Fixing r_{2,1} removes another variable, as does conjugating by diagonal matrices, which gives an algebraic variety of dimension 2m (or 2m + 1 with a gauge freedom), as above.
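The two dimension counts in the paragraph above can be tabulated as follows (our own summary of the arithmetic; the labels under the braces are ours):

```latex
\underbrace{4(m+1)}_{\text{entries of }A_0,\dots,A_m}
-\underbrace{(2m+3)}_{\text{determinant conditions}}
-\underbrace{1}_{\text{diagonal conjugation}} = 2m,
\qquad
\underbrace{2m+4}_{u_1,\dots,u_{2m+4}}
-\underbrace{2}_{\text{constraints}}
-\underbrace{1}_{\text{fixing } r_{2,1}}
-\underbrace{1}_{\text{diagonal conjugation}} = 2m.
```

Both sides give 2m, which is the claimed agreement of dimensions.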
We may also describe maps between M_h(a_1, . . . , a_N; d_1, d_2; 1, 1) and matrices given by (1.2). To obtain an element of M_h(a_1, . . . , a_N; d_1, d_2; 1, 1), we expand the product and diagonalize. To obtain (1.2), we obtain the left (or right) factors of A(x) by observing the corresponding image (respectively kernel) vectors at the points x = a_1 (or x = a_N).
The property we will use to parameterize the system of discrete isomonodromic deformations is given by the following observation.
Lemma 3.2. The matrices of the form (3.1) satisfy the commutation relation This is a well-known relation for these matrices [1,26,49]. This map is related to the discrete potential Korteweg-de Vries equation [41]. If we let R_{i,j} be the map (3.11), then this map satisfies the relation which is known as the Yang-Baxter property for maps. This map appears as F_V in the classification of quadrirational Yang-Baxter maps [3]. A common pictorial representation of this property appears in Fig. 1. More generally, it has been remarked in [11] that the sets of commuting transformations obtained from discrete isomonodromic deformations define solutions of the set-theoretic Yang-Baxter equation [53]. We may use Lemma 3.2 to define an action of S_N on A(x). Given a permutation, σ ∈ S_N, we denote the corresponding rational transformations of the u_i and a_i by s_σ u_i and s_σ a_i respectively. The group S_N is generated by 2-cycles of the form (i, i+1), whose action we denote by s_i = s_{(i,i+1)} for i = 1, . . . , N − 1. Using Lemma 3.2, these are given by (3.13a) By construction, for any σ ∈ S_N, the effect of s_σ on A(x) is trivial. We may use the action of S_N to determine the image or kernel of A(x) at x = a_i by acting on A(x) with a permutation that sends the factor that is singular at x = a_i to either the first or the last position in (1.2) respectively.
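The parametric Yang-Baxter property can be checked numerically. Since the map (3.11) is not reproduced here, the following self-contained sketch (our own toy example, with our own names and conventions) instead verifies the property for the Adler map, the standard Yang-Baxter map attached to lattice potential KdV, which the quadrirational family F_V is commonly identified with up to point transformations:

```python
from fractions import Fraction as F

def adler(x, y, a, b):
    """Adler map: (x, y) -> (y - d, x + d) with d = (a - b)/(x + y).
    Note the invariant x + y, and that the map is an involution."""
    d = F(a - b) / (x + y)
    return y - d, x + d

def R(i, j, v, p):
    """Parametric Yang-Baxter operator R_ij: apply the map to the
    components i, j of the tuple v, with parameters p[i], p[j]."""
    v = list(v)
    v[i], v[j] = adler(v[i], v[j], p[i], p[j])
    return tuple(v)

# Exact check of R_12 R_13 R_23 = R_23 R_13 R_12 at a generic point,
# using rational arithmetic so the equality is verified exactly.
v = (F(1), F(2), F(5))        # components x_1, x_2, x_3
p = (F(3), F(7), F(11))       # parameters a_1, a_2, a_3
lhs = R(0, 1, R(0, 2, R(1, 2, v, p), p), p)
rhs = R(1, 2, R(0, 2, R(0, 1, v, p), p), p)
assert lhs == rhs
```

Both compositions yield the same triple, illustrating relation (3.12) for this map.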
We are now in a position to define an elementary collection of translations, T_i, whose effect on the parameters a_i is given by and whose action on the u_i variables is the subject of the following proposition.
(1.3) defines a birational map between linear algebraic varieties. The effect of T_1 on the u_i variables is given by Proof. To ascertain how this transformation acts on A(x), we observe that a rearrangement of (1.3) is where we have used (3.2b). It is convenient to leave it in this form and read off the transformed values of d_1 and d_2 in the expansion of (3.14), which determines that Ã(x) is an element of M_h(a_1 + h, a_2 + h, a_3, . . . , a_{2m+4}; d_2 − h, d_1; 1, 1). We may inductively determine T_1 u_k by observing the kernel of Ã(a_{2m+2}), giving us by applying s_{2m+3} and (3.2d). Any subsequent kernels may be found inductively by examining the kernel of L(a_k, u_2, a_2) · · · L(x, u_k, a_k) L(a_k, u_{1,k}, a_1 + h), for k > 1 and where u_{1,2m+2} = u_1.
Rather than computing compatibility relations explicitly, we have simply exploited the commutation relations between the L_i factors. All the other elementary transformations may be obtained by conjugating by elements of S_N. One issue with this type of transformation is that it is singular at x = ∞, which manifests itself in the way it swaps the roles of d_1 and d_2. If we conjugate by the matrix with 1's on the off-diagonal, we can also swap the roles of d_1 and d_2; however, the effect this has on the u_i variables is not so clear, as it requires a nontrivial refactorization into a product of the appropriate form. We may now present the generators of the discrete Garnier systems, which are compositions of the form (1.3); such a composition defines a birational map between linear algebraic varieties M_h(a_2, a_3, . . . , a_N; d_1, d_2; 1, 1). The effect of T_{1,2} on the u_i variables is given by for k = 2, . . . , N, with u_{1,N} = u_1 and u_{2,N} = u_2.
Proof. As was the case in the previous proposition, using the identification of Ã(x) with the action of T_{1,2}, we find that To compute the action on the u_i variables, we inductively compute the kernel of using the action of S_N, which gives (3.15) with an initial step where u_{1,N} = u_1 and u_{2,N} = u_2, as above.
We may construct a generic element T_{i,j}, whose action on the space of parameters is The system of transformations of the form T_{i,j} constitutes what we call the h-Garnier system. The simplest case, m = 1, is shown to coincide with the difference analogue of the sixth Painlevé equation in Section 5.2. As a consequence of Theorem 2.5, we have the following: T_{i,j} = T_{j,i}.

The symmetric h-difference Garnier system
Let us consider difference equations whose solutions satisfy Y(x) = Y(−x). The consistency of (1.6) requires that Under these conditions we express A(x) by (2.10a), in which where L_i(x) = L(x, u_i, a_i) is given by (3.1) and N = 2m + 4 as before. In this case, by using (3.2c) we may write This could be transformed via Γ-functions to a matrix of the form of (1.2) with N replaced by 2N, where the last N factors take a slightly different form. If we were to apply Theorem 2.5, it is not clear at this point that the solutions would preserve the symmetry. Due to (3.2c) and the invariance of (3.13) under changes of the spectral variable, it is easy to see that one may simultaneously act on B(x) and B(−h − x)^{−1} by S_N in the same way as (3.13). As discussed previously, we expect to find transformations induced by multiplication on the left and on the right. The left multiplication is expected to define a trivial transformation of A(x), but what is not expected is that the transformation is similar to that specified by Lemma 3.2.
Lemma 3.6. The rational matrix defines a birational transformation into M_h(−a_1, −a_2, a_3, . . . , a_N; d_1 + a_1 + a_2, d_2 + a_1 + a_2; 1, 1), via (2.11a) with λ = (x − a_1)^{−1}(x − a_2)^{−1}. The effect on the u_i variables is given by This is an elementary calculation that is easily verified. It is also seen that R_l(x) = R_l(−x), as required, and that This defines an involution on the parameter space.
Lemma 3.7. The rational matrix whose effect on the u_i variables is given by It is easy to see that R_r(x) = R_r(−x − h) and These matrices are not of the same form as the L_i(x), yet the resulting transformation takes the form specified in Lemma 3.2 with the roles of u_i and u_j swapped. It is fitting that we define the generators of the symmetric difference Garnier system to be the maps E_{i,j} and F_{i,j}, which may be expressed as The translations, T_{i,j}, are specified in terms of these generators as and they form the generators of the system of translations in the h-difference Garnier system. While this bears some similarity to (3.16), the difference is that, given an A(x) with the appropriate symmetry, the resulting action is inequivalent: the transformations of (3.16) do not necessarily preserve the symmetry, whereas by acting upon B(x), the resulting matrix A(x) necessarily possesses the required symmetry. The key difference is not the moduli space itself, but the actions being considered on it.

q-difference Garnier systems
As we did for the h-difference systems, we start with (1.1), where σ = σ_q and A(x) is specified by (1.2). Before defining L_i(x), we specify two matrices: L(x, u, a) and a diagonal matrix, which we call D, given by which satisfy the commutation relation Due to (3.19), rather than letting each factor take the form D L(x, u_i, a_i), it is sufficient to let only the first factor take the form D L(x, u_1, a_1) while all other factors are of the form L(x, u_i, a_i); i.e., we let A(x) take the general form (1.2) where As in the previous section, some of the desirable properties of L(x, u, a) are By expanding (1.2), we find that A(x) takes the general form As in the previous section, the properties of A(x) are given by the following proposition.
Proposition 3.8. Given A(x) specified by (1.2), where each factor is given by (3.20), with the constraints θ_1 ≠ θ_2 and Proof. The property (3.21a) is sufficient to tell us where the expansions around x = ∞ and x = 0 are where the values of κ_1 and κ_2 are as above and The constraints θ_1 ≠ θ_2 and (3.22) are sufficient (but not necessary) to ensure that A_0 and A_{m+1} are semisimple.
By a similar counting argument to the h-difference case, we may show that matrices of the form (1.2) and elements of M_q(a_1, . . . , a_N; κ_1, κ_2; θ_1, θ_2) both describe algebraic varieties of dimension 2m, with birational maps between the two. This justifies parameterizing our discrete isomonodromic deformations in terms of actions on matrices of the form (1.2).
As we mentioned above, the conditions that θ_1 ≠ θ_2 or that equality fails in (3.22) are not necessary for A_0 and A_{m+1} to be semisimple. If θ_1 = θ_2, semisimplicity requires that the matrix A_0 be diagonalizable, which amounts to requiring that A_0 be diagonal, and this imposes the constraint \sum_{i=0}^{N} u_i = 0.
On the other hand, if equality holds in (3.22), we require that r_{1,2} = 0. If θ_1 = θ_2, then the case m = 1 has too many constraints to be interesting; hence, it is more natural to consider m = 2 as the first interesting case. Similarly, if both θ_1 = θ_2 and equality holds in (3.22), then we have an additional constraint, making m = 3 the first interesting case for similar reasons.
Once again, this map satisfies the Yang-Baxter property: if we define R_{i,j} in the same manner as (3.11), then (3.12) holds. In the classification of quadrirational Yang-Baxter maps, (3.26) appears as F_III [3].
In the same manner as the previous section, it is useful to utilize Lemma 3.9 to define the action of S_N on A(x). Given a permutation, σ ∈ S_N, we denote the action of σ on the u_i and a_i by s_σ u_i and s_σ a_i. Following the notation of the previous section, the action of the generators is computed using (3.9) to be Once again, the effect of s_σ on A(x) is trivial. This action and (3.19) will be sufficient to express the discrete isomonodromic deformations. We wish to specify the transformation whose action on the parameters is and whose action on the u_i is to be specified. Once again, the matrices that define the elementary Schlesinger transformations are of the form L(x, u, a). The most basic transformation is specified in terms of the left-most factor. The effect of T_1 on the u_i variables is given by where
u_{1,k−1} = (a_k u_k θ_2 (u_k θ_2 + θ_1 u_{1,k})) / (θ_1 (a_k u_k θ_2 + q a_1 θ_1 u_{1,k})),
with u_{1,k} = u_1 for k = N + 1.

Proof. This proposition follows in a similar manner, in that we identify Ã(x) with
A(x) = L(x, u_2, a_2) · · · L(x, u_N, a_N) D L(x, u_1, qa_1),
using (1.5) and (3.21b), which allows us to compute the determinant. Secondly, we note that we may use (3.19) to show which is of the form in Proposition 3.8, which shows This shows that the image of T_1 is indeed in M_q(a_1, . . . , a_N; κ_2, κ_1/q; θ_1, θ_2). To determine the effect on the u_i variables, the only difference in the inductive step is that we need to use (3.26) in combination with Lemma 3.9. We compute the kernel of
L(x, u_2, a_2) · · · L(x, u_k, a_k) D L(x, u_{1,k}, qa_1),
using the action of S_N and (3.26), which inductively provides us with (3.27), with u_{1,N} = u_1.
The T_i transformations may be obtained through conjugation by the action of S_N. This transformation is also not of the form specified in Theorem 2.8, since it swaps the roles of κ_1 and κ_2. To define the generators of the q-Garnier system, we compute T_1 ∘ T_2, which may be used to compute T_{i,j}.
The following arises as a consequence of Theorem 2.8: T_{i,j} = T_{j,i}.

Symmetric q-Garnier system
We now impose the symmetry constraint that the solutions satisfy Y(x) = Y(1/x). The consistency of (1.6) requires that

A(x)A(1/(qx)) = I.
We assume that A(x) takes the form of (2.10b), where B(x) is given by the product of L-matrices as where L_i is given by (3.20), in which the diagonal entry cancels; hence, without loss of generality, we may choose D = I. Using (3.21c), we may write A(x) as which defines a matrix in terms of a product of L-matrices.
Proposition 3.14. The rational matrix defines a birational transformation F_{N,N−1} : M_q(a_1, . . . , a_{N−2}, a_{N−1}, a_N; κ_1, κ_2; 1, 1) → M_q(a_1, . . . , a_{N−2}, 1/(qa_{N−1}), 1/(qa_N); κ_1, κ_2; 1, 1), via (2.11b), where λ = x(1 − x/(qa_{N−1}))^{−1}(1 − x/(qa_N))^{−1}, whose effect on the u_i variables is given by This is also easy to verify, as is the property that R_r(x) = R_r(1/(qx)) and In the same way as the h-difference case, we define the q-difference Garnier system to be generated by the maps E_{i,j} and F_{i,j}, which may be expressed as The translations, T_{i,j}, also specified by generate the translational part of the symmetric q-Garnier system.

Reparameterization
The aim of this section is to express the above systems in terms of variables chosen to make a correspondence between our q-Garnier systems and the q-Garnier system specified in the work of Sakai [47]. This choice makes sense in both the h-difference and q-difference settings.

h-difference Garnier systems
Let us consider the h-difference Garnier system defined by (1.1), where A(x) is specified by (1.2) for N = 2m + 2 and L_i is given by (3.1) subject to the constraint (3.4). As (3.4) implies that the leading coefficient of A(x) is proportional to the identity matrix, provided d_1 ≠ d_2, where d_1 and d_2 are specified by (3.6a) and (3.6b), we may gauge by a constant lower-triangular matrix so that the next leading coefficient is diagonal. Under these conditions, we specify a new set of variables, y_i, z_i and w_i, related to A(x) by for i = 1, . . . , N. This choice is inspired by many works on the matter, such as the work on the q-Garnier systems [47] and various works on Lagrangian approaches to difference equations [13,14]. This defines 3N parameters, many of which are redundant. After diagonalizing, with A(x) = (a_{i,j}(x)), each a_{i,j}(x) is a polynomial with the following properties specifying its coefficients:
• a_{i,i}(x) = x^{m+2} + d_i x^{m+1} + O(x^m), with a_{1,1}(a_k) = y_k w_k and a_{2,2}(a_k) = y_k z_k for k = 1, . . . , m + 1;
• a_{1,2}(a_k) = w y_k and a_{2,1}(a_k) = w_k y_k z_k / w for k = 1, . . . , m + 1.
We use a form of Lagrange interpolation in the following way: if we let This allows us to express the entries of A(x) as It is convenient to write the expressions for each of the y_k, z_k and w_k as which is trivially true for k = 1, . . . , m + 1 and defines expressions for y_k, z_k and w_k in terms of the first m + 1 values for k = m + 2, . . . , N. This also produces expressions for each of the new variables in terms of the u_i. Naturally, this does not take into account any constants with respect to T_{i,j}. After diagonalizing the leading coefficient in the polynomial expansion in x, it is easy to see that the matrix inducing T_{i,j} takes the form hence, we may calculate the equivalent of (3.16) on the y_k, z_k and w_k variables.
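The interpolation mechanism can be sketched in a few lines: an entry a_{i,j}(x) is recovered from its values at the nodes a_1, . . . , a_{m+1}. The snippet below is our own toy example (a stand-in cubic replaces an actual matrix entry, and the node values are arbitrary); it reconstructs a polynomial exactly from its nodal values using exact rational arithmetic:

```python
from fractions import Fraction as F

def lagrange(nodes, values):
    """Return a function evaluating the unique polynomial of degree
    < len(nodes) passing through the points (nodes[k], values[k])."""
    def p(x):
        total = F(0)
        for k, (ak, yk) in enumerate(zip(nodes, values)):
            term = F(yk)
            for j, aj in enumerate(nodes):
                if j != k:
                    # basis polynomial ell_k(x), vanishing at all other nodes
                    term *= (F(x) - aj) / (ak - aj)
            total += term
        return total
    return p

def toy(x):
    """Stand-in for a matrix entry: a cubic polynomial."""
    return x**3 - 2*x + 5

nodes = [F(0), F(1), F(2), F(3)]
p = lagrange(nodes, [toy(a) for a in nodes])
# The degree-3 interpolant of a cubic is the cubic itself.
assert all(p(x) == toy(F(x)) for x in range(-5, 6))
```

In the text, the known leading behavior of the diagonal entries supplies the extra coefficients beyond the nodal data.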
Theorem 4.1. The system (3.16) is equivalent to the following action on the variables y_k, z_k and w_k, whereas for k = j we swap the roles of i and j above.
Proof. We temporarily use the notation ũ = T_{i,j} u. Given (4.3), we multiply the left- and right-hand sides of (1.5) by whereby evaluating the resulting expression at x = a_i gives us which specifies that the rows of ((a_i + h)I + R_0) are annihilated by the image of A(a_i). Imposing the same condition for x = a_j uniquely specifies R_0 as Using the values x = a_i + h and x = a_j + h gives us whose equivalence with (4.5) gives the first part of (4.4c). Using (4.5) with (1.5) at x = a_k gives us which is equivalent to (4.4a)-(4.4c). The remaining parts may be calculated by evaluating which is equivalent to (1.5) using (4.5) at x = a_i + h. The symmetry and uniqueness of R(x) determine that the corresponding formula for k = j may be obtained by swapping the roles of i and j.
While we have chosen to express the system in this way, this is not to be considered a 3(m+1)-dimensional map, since it has enough constants with respect to T_{i,j} to be considered an (N − 2)-dimensional system in terms of the u_i.
The symmetric version may be treated in the same way by considering transformations of B(x) instead of A(x). We take A(x) to be given by (1.7), where B(x) is given by (1.2), in which case we may parameterize B(x) in the same way by introducing the variables y_i, z_i and w_i by for i = 1, . . . , N. The Lagrange interpolation is the same as it was for A(x) above; hence the entries of B(x) = (b_{i,j}(x)) are also given by (4.2). We may calculate the effect of E_{i,j} and F_{i,j} on these new variables.
Proposition 4.2. The system (3.17a) is equivalent to the following action on the variables y_k, z_k and w_k for k ≠ i, j, and for k = i, whereas for k = j we swap the roles of i and j above.
Proposition 4.3. The system (3.17b) is equivalent to the following action on the variables y_k, z_k and w_k for k ≠ i, j, and for k = i, whereas for k = j we swap the roles of i and j above.
Proof of Propositions 4.2 and 4.3. We note that for B(x) to be of the same form, we require that R_l(x) and R_r(x) from (2.11a) and (2.11b) take the forms We may multiply the left- and right-hand sides of (2.11a) and (2.11b) by (x − a_i)(x − a_j) to see that R_0 and R_1 satisfy where we use the notation E_{i,j} u = ũ and F_{i,j} u = û for the parameters of A(x) and for A(x) itself.
Evaluating at x = a_i and x = a_j gives us the two matrices from which, using these values in (2.11a) and (2.11b) evaluated at x = a_k, (4.6a)-(4.6c) and (4.7a)-(4.7c) easily follow.

q-difference Garnier systems
Let us consider the q-difference Garnier system defined by (1.1), where σ = σ_q and A(x) is specified by (1.2) for N = 2m + 2, with L_i given by (3.18). We may diagonalize the leading coefficient matrices around x = 0 and x = ∞, provided θ_1 ≠ θ_2 and κ_1 ≠ κ_2, using a lower-triangular constant matrix. From this matrix, we define a new set of variables, y_i, z_i and w_i, for i = 1, . . . , N. This specification in terms of the image and kernel of A(a_i) means that we may use (3.21d) and/or (3.21e) and the action of S_N to determine the values of z_i/w and w_i/w. This defines 3N parameters, many of which are redundant. However, if we choose the first N (or any collection), we may reconstruct A(x) by Lagrange interpolation using any collection of m + 1 values with the following data:
• a_{i,i}(x) = κ_i x^{m+1} + O(x^m), with a_{1,1}(a_k) = y_k w_k and a_{2,2}(a_k) = y_k z_k for k = 1, . . . , m + 1;
• a_{1,2}(a_k) = w y_k and a_{2,1}(a_k) = w_k y_k z_k / w for k = 1, . . . , m + 1.
If we let this collection be the first m values and let D(x) satisfy (4.1), we may use this to express the entries of A(x) as (4.8b), (4.8c), (4.8d). After diagonalizing the leading coefficient in the polynomial expansion in x, it is easy to see that the matrix inducing T_{i,j} takes the form from which we may calculate the equivalent action on the variables y_i, z_i, w_i and w.
Swapping i and j gives the case for k = j.
Proof. For a parameter or matrix, u, we use the notation ũ = T_{i,j} u. After establishing (4.9), we may multiply (1.5) by (x − a_i)(x − a_j)(x − qa_i)(x − qa_j), whereby cancelling the denominators and evaluating at x = qa_i shows that which specifies that the columns are in the kernel of Ã(qa_i), whereas evaluating (1.5) at x = qa_j gives a similar equation, which is enough to uniquely specify R_0, written explicitly as Evaluating (1.5) at x = a_i gives which specifies that the rows are in the kernel, which means R_0 may be computed in terms of z_i and z_j, explicitly given by The comparison of these values specifies w T_{i,j} w_i = z_i T_{i,j} w, which implies (4.10d). The second part of (4.10d) is specified by looking at the leading-order expansion of (1.5) in the top right-hand entry. The remaining values are easily and uniquely determined by evaluating (1.5) at x = a_k. We need only determine the action on y_i and z_i, which can be achieved by evaluating at x = qa_i, whereby using the value of R_0 above gives (4.10e) and (4.10f). By Proposition 2.8, the uniqueness of R(x) shows T_{i,j} = T_{j,i}, and the symmetry of A(x) with respect to swapping i and j implies that the action on y_j and z_j is obtained by swapping i and j in (4.10e) and (4.10f).
The resulting form of the evolution was called the birational form of the q-Garnier system in [47].
Remark 4.5. The author of [47] also produces another parameterization in which every root of the polynomial a_{1,2}(x) is a parameter, say y_1, . . . , y_m, while the other parameters are the values z_i = a_{1,1}(y_i) for i = 1, . . . , m. This may be considered a natural extension of known parameterizations of Lax pairs for Painlevé equations and discrete Painlevé equations. The issue in defining a collection of variables in this way is that we can only formally distinguish the roots of a_{1,2}(x). A discrete isomonodromic deformation will produce ã_{1,2}(x), whose roots are ỹ_1, . . . , ỹ_m, yet there is no way of ordering the y_i and ỹ_i that makes the mapping y_i → ỹ_i well defined. The space formed by considering the set of roots of monic polynomials of degree n is a construction of the n-th symmetric power of C, which may be considered the correct setting for such a parameterization. In the continuous setting, this parameterization makes more sense, as the variables change continuously.
Let us now start with a matrix satisfying A(x)A(1/(qx)) = I; we take A(x) to be given by (1.7), where B(x) is given by (1.8). We define variables y_i, z_i and w_i by for i = 1, . . . , N. The Lagrange interpolation is equivalent to the formulation for A(x) above; hence the entries of B(x) = (b_{i,j}(x)) are also given by (4.8). We may calculate the effect of E_{i,j} and F_{i,j} on these new variables.
Proposition 4.6. The system (3.30a) is equivalent to the following action on the variables y_k, z_k and w_k (4.11c) for k ≠ i, j, and for k = i, with the equivalent form for k = j obtained by interchanging i and j.
Proposition 4.7. The system (3.30b) is equivalent to the following action on the variables y_k, z_k and w_k for k ≠ i, j, whereas for k = i we have with the equivalent form for k = j obtained by interchanging i and j.
Proof of Propositions 4.6 and 4.7. We take a different approach from the proofs of Propositions 4.2 and 4.3, deducing R_l(x) and R_r(x) in terms of Ã(x) and Â(x), where ũ = E_{i,j} u and û = F_{i,j} u respectively. Since we know that the determinant of R_l(x) must include a factor of (x − a_i)(x − a_j) and is symmetric with respect to the action x → 1/x, we have that R_l(x) takes the form with the same λ(x). Due to the involutive nature of the transformation, it is natural that R_0 and R_1 satisfy where R*_i is the cofactor matrix of R_i for i = 0, 1. Since applying each transformation twice returns u, these two equations are equivalent to (2.11a) and (2.11b) applied to the transformed values of A(x). This gives from which we may calculate an equivalent form of (4.11a)-(4.11c) and (4.12a)-(4.12c) in terms of the w̃_i's and ŵ_i's respectively. Comparing entries of (2.11a) and (2.11b) using these values at x = 1/a_i and x = 1/(qa_i) gives the remaining values and gives (4.11d)-(4.11e) and (4.12d)-(4.12e). The first parts of (4.11e) and (4.12e) bring (4.11a)-(4.11c) and (4.12a)-(4.12c) into their presented form; similarly with x = 1/a_j and x = 1/(qa_j).

Special cases
We wish to demonstrate that the simplest cases of the h-difference and q-difference Garnier systems coincide with known discrete versions of the sixth Painlevé equation. Specializations of the higher cases coincide with discrete Painlevé equations that appear higher in Sakai's hierarchy. We summarize the results in Table 1. To avoid confusion, since we have used the notation r_{1,2} and r_{2,1} in both sections, we state that the value of r_{2,1} in Table 1 is specified by (3.8) and the value of r_{1,2} is given by (3.25). Table 1. A summary of the special cases of discrete Garnier systems whose evolution coincides with discrete Painlevé equations.
We remark that scalar Lax pairs for the q-difference cases of the discrete Painlevé equations we present have also appeared in [56], and more recently scalar Lax pairs for the h-difference cases appeared in [27]. A correspondence between the scalar Lax pairs and the matrix Lax pairs for the q-P(A_2^{(1)}) case that appears here was constructed in [54]. Such correspondences are almost certain to exist for the other cases; however, we do not pursue these lengthy correspondences here. We do remark that the characteristic properties of the Lax pairs presented in [56] and [27] and scalar versions of the Lax pairs we present here seem to coincide up to some nontrivial transformations.

5.1 The twisted m = 1 asymmetric q-difference Garnier system

The first system we present as a special case is the q-analogue of the sixth Painlevé equation, which we write as which was first presented by Jimbo and Sakai [24]. We consider an associated linear problem of the form (2.5), where This matrix is of the form where A_0 is upper triangular with diagonal entries θ_1 and θ_2, while A_2 is lower triangular with diagonal entries The two natural consequences are that which means that, by diagonalizing the constant coefficient, we may let A_0 = diag(θ_1, θ_2) and obtain a pair We may diagonalize A_2 in order to bring this Lax pair into the form of Jimbo and Sakai [24]. We propose a slightly different form in which A_0 and A_2 are upper and lower triangular respectively. This gives us a simple alternative parameterization, which takes the general form We satisfy (5.3) when x = y by letting a_{1,1}(y) = κ_1 z_1, a_{2,2}(y) = κ_2 z_2. We may solve for α, β, γ and δ in terms of y, z and w to show a_1 a_2 a_3 + a_1 a_2 a_4 + a_1 a_3 a_4 + a_2 a_3 a_4. The only minor difference in the theory presented above is that the constant coefficient in the series part of the solution, Y_∞(x), is lower triangular rather than the identity, as is the leading term in the discrete isomonodromic deformation. As above, we have four variables, u_1, . . . , u_4, with one constant with respect to T_{1,2}, which we wish to identify with the variables y, z and w. Equating the various coefficients of (5.4) with the corresponding expressions in (5.2) gives the following expressions for y and z. Conversely, we notice that since the right-most factor of A(a_4), namely L(a_4, u_4, a_4), has a 0-eigenvector of the form (u_4, −1), we may iteratively define the u_i by determining the 0-eigenvector at x = a_i for i = 1, . . . , 4. For example, using x = a_4 we see which is given in terms of y, w and z above.
This gives a right factor, which we may remove to proceed iteratively for x = a_3 and so on. This gives a one-to-one correspondence between u_1, . . . , u_4 and y, z and w, with κ_1 and κ_2 specified, with one constraint, in terms of u_1, . . . , u_4.
This determines an R(x) inducing T_{1,2} in terms of y, w and z, which is to be used in (1.5). If we temporarily introduce the notation T_{1,2} f = f̃, then these calculations reveal which coincides with (5.1) when the b_i are specified by (5.6).
Alternatively, we may simply use (5.5) and (3.28) and the expressions for the u i in terms of y and z.

A special case of the m = 1 h-difference Garnier system
The second system we present is the difference analogue of the sixth Painlevé equation, which we present as We consider an associated linear problem of the form (2.5), where where we impose the constraint This product takes the general form and may be expressed in the general form where a_{2,1}(x) is a polynomial of degree 2 before we diagonalize A_2. After diagonalizing A_2, it becomes a linear function of x, which we write as The values of α, β, γ and δ are uniquely determined by (3.7). The values of z_1 and z_2 satisfy This relation is solved by introducing a variable, z, via We also have that the variables d_1 and d_2 are specified by which are known to be constant with respect to T_{i,j}. Using the determinantal relations and the correspondence between the r_i and d_i, by setting A_3 = I we have the 3-tuple in M_h(a_1, . . . , a_6; d_1, d_2; 1, 1).
Theorem 5.2. The action of the translation T_{1,2} is equivalent to (5.7), where a_7 and a_8 are given by Proof. This follows in much the same way as Proposition 5.1; however, there is an added difficulty in that the diagonalization of A_2 introduces a nontrivial correspondence between the matrix (5.8) and its corresponding R(x), denoted R'(x), to be used in (1.5). The resulting matrix, R'(x), can be shown to be of the form for some constant matrix R_0, which can be calculated using (1.5). The uniqueness of R(x) ensures this calculation coincides with T_{1,2}, as defined by (3.15). Using this same overdetermined relation, namely (1.5), we may determine that the mapping in terms of the variables y and z is specified by
(ỹ + z)(y + z) = (z + a_3)(z + a_4)(z + a_5)(z + a_6) (z + a_3 + a_4 + a_5 + a_6 + d_1),
where ỹ and z̃ are identified with T_{1,2} y and T_{1,2} z respectively. This coincides with (5.7) with a_7 and a_8 specified by (5.9).

An extra special case of the m = 3 asymmetric and symmetric h-difference Garnier system
We have one more special case to consider in the symmetric setting, when we allow A(x) to be given by the product where we impose the constraint that Under this constraint, the coefficient of x^3, denoted A_3, takes the form We introduce one more constraint, d_1 = d_2, where d_1 and d_2 are defined by (3.6). It is clear from above that T_{1,2} d_i = d_i + h; however, it is also easy to show that We impose the constraint that the expression for d_{1,2} is identically 0 for A(x) to define a regular system, so that where the equality implies that As d_1 and d_2 are defined in terms of the u_i by this is considered an extra constraint on the u_i. The map resulting from T_{1,2} is two-dimensional, with an additional difference equation satisfied by one additional gauge freedom. The result is a matrix of the general form where The determinants at x = a_1 and x = a_2 are automatically 0 by construction. This is also a polynomial of degree six with six nontrivial conditions to satisfy, which are sufficient to write down expressions for α_2, β_2 and γ.
Proposition 5.3. The map T 1,2 on the variables (z 1 , z 2 , w 1 , w 2 , y 1 , y 2 ) is given by and, under some rigidification, it becomes clear that the compactified moduli space of linear difference equations is indeed P 2 blown up at 9 points, which has been the subject of work by one of the authors [44].
Another two-dimensional mapping may be obtained by allowing A(x) to be symmetric with respect to the change x → −x, in which case we may allow A(x) to be given by (2.10a) and B(x) to be given by (5.10) subject to the same constraints. It is easy to show, under these conditions, that (E 1,2 − I)d 2,1 = 0, (F 1,2 − I)d 2,1 = 0, indicating that (5.11) is a valid parameterization of B(x) that is invariant under the actions E i,j and F i,j .
Proposition 5.4. The maps E 1,2 and F 1,2 on the variables (z 1 , z 2 , w 1 , w 2 , y 1 , y 2 ) are given by and The map E 1,2 • F 1,2 is once again two-dimensional and specializes to T 1,2 . This map also acts on a surface obtained by blowing up P 2 at 9 points; hence, this map coincides with d-P A

5.4 An extra special case of the m = 3 asymmetric and symmetric q-difference Garnier system
Let us consider the multiplicative version of the previous section, where A(x) is given by the product As discussed above, if the u i variables satisfy u 1 + u 2 + u 3 + u 4 + u 5 + u 6 + u 7 + u 8 , then A 0 = I. With κ 1 and κ 2 specified by (3.23), if κ 1 = κ 2 then we find that which means that if κ 2,1 = 0 then T 1,2 κ 2,1 = 0. For reasons similar to those of the previous section, this defines a two-dimensional mapping. Using a similar approach to the previous section, we may specify that the matrix A(x) takes the general form with κ 1 = κ 2 and (3.24) implying that α 2 = β 2 = a 1 a 2 a 3 a 4 a 5 a 6 a 7 , from which the mapping T 1,2 may be computed accordingly.
By construction, this is a map that sits above the case of q-P A

Reductions of partial difference equations
One of the consequences of this work is that we are able to show that the q-Garnier system of [47] arises as a reduction of the lattice Schwarzian Korteweg-de Vries equation. By specializing the h-Garnier systems, this also means that q-P A
The general setting for reductions of partial difference equations on the quad(rilateral) is that we take a function w : Z 2 → C, whose values are denoted w l,m , in which for every (l, m) ∈ Z 2 we impose the constraint Q(w l,m , w l+1,m , w l,m+1 , w l+1,m+1 ; α l , β m ) = 0, (6.1) where Q is linear in each of the variables. Given a staircase of initial conditions, we are able to determine each value on Z 2 ; hence, we require an infinite number of initial conditions to specify a solution [50]. For this reason, these systems are commonly referred to as infinite-dimensional systems, and are considered to be the discrete analogues of partial differential equations. The twisted (n 1 , n 2 )-reduction is the system of solutions that satisfy an additional relation of the form w l+n 1 ,m+n 2 = T (w l,m ), (6.2) where the function, T , is called the twist [37]. We require that (6.1) is invariant under T , i.e., we require that Q(w l,m , w l+1,m , w l,m+1 , w l+1,m+1 ; α l , β m ) = 0 ⇐⇒ Q(T w l,m , T w l+1,m , T w l,m+1 , T w l+1,m+1 ; α l , β m ) = 0.
The staircase of initial conditions for a twisted reduction consists of the n 1 + n 2 initial conditions in some finite staircase, extended infinitely in both directions using (6.2). Secondly, we require that the parameters change in such a way that if two points are related by (6.2), then the points calculated using those n 1 + n 2 initial conditions are also related by (6.2). This means we require Q(w l,m , w l+1,m , w l,m+1 , w l+1,m+1 ; α l , β m ) = 0 ⇐⇒ Q(w l,m , w l+1,m , w l,m+1 , w l+1,m+1 ; α l+n 1 , β m+n 2 ) = 0. (6.3) The resulting system may be described by an (n 1 + n 2 )-dimensional map, which we call a twisted reduction of (6.1) [38]. If α l = α l+n 1 and β m = β m+n 2 then the resulting ordinary difference equation is necessarily autonomous; otherwise, the system is non-autonomous. In the special case that T is the identity, we call the reduction a periodic reduction. One definition of integrability for systems of the form (6.1) is 3-dimensional consistency. If we impose a constraint of the form (6.1) on each of the faces of a cube, then 3-dimensional consistency requires that every way of determining the values on the vertices of the cube agrees [32]. The classification of 3-dimensionally consistent multilinear equations of the form (6.1) was the subject of the work of Adler et al. [2,4].
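As a concrete illustration of how a periodic reduction closes on finitely many variables, the following is a minimal Python sketch of a periodic (2,1)-reduction of the lattice potential Korteweg-de Vries equation (w l,m − w l+1,m+1 )(w l+1,m − w l,m+1 ) = α l − β m in the autonomous case α l+2 = α l , β m+1 = β m ; the numerical data is purely illustrative.

```python
from fractions import Fraction as F

# Lattice potential KdV: (w[l,m] - w[l+1,m+1]) * (w[l+1,m] - w[l,m+1]) = alpha_l - beta_m,
# solved here for w[l,m+1] given the other three corners of the quad.
def up(w_lm, w_l1m, w_l1m1, alpha, beta):
    return w_l1m - (alpha - beta) / (w_lm - w_l1m1)

# Autonomous parameters: alpha_{l+2} = alpha_l and beta_{m+1} = beta_m.
alpha = [F(5), F(2)]   # alpha_l = alpha[l % 2]
beta = F(1)            # beta_m = beta for every m

# Staircase initial data; the periodicity w[l+2, m+1] = w[l, m] extends it
# over the whole lattice via w[2k + j, k] = v[j] for j = 0, 1, 2.
v = [F(3), F(7), F(4)]

# New points w[2k+1, k+1] from the quad at (l, m) = (2k+1, k), whose other
# corners are w[2k+1, k] = v[1], w[2k+2, k] = v[2] and w[2k+2, k+1] = v[0].
new_points = [up(v[1], v[2], v[0], alpha[(2 * k + 1) % 2], beta) for k in range(4)]

# All computed points coincide, so the periodicity w[l+2, m+1] = w[l, m]
# propagates: the reduction is described by finitely many variables.
assert all(p == new_points[0] for p in new_points)
print(new_points[0])  # -> 15/4 for this data
```

Exact rational arithmetic is used so that the propagated periodicity can be checked by equality rather than within a floating-point tolerance.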
We consider two equations of the form (6.1): the lattice potential Korteweg-de Vries equation [30], (w l,m − w l+1,m+1 )(w l+1,m − w l,m+1 ) = α l − β m , (6.4) which is also known as H1 in [2,4], and the lattice Schwarzian Korteweg-de Vries equation [31], which is also known as Q1 δ=0 in [2,4]. It is easy to see that (6.4) is invariant under translational twists, i.e., those of the form T (u) = u + λ, whereas (6.5) is invariant under any twist in the full group of invertible Möbius transformations. Secondly, we see that the Lax matrices of (6.4) and (6.5) are also both of the form (3.18) for u = w l+1,m − w l,m and u = w l,m+1 − w l,m . This determines a well-known relation between integrable equations of the form (6.1) and Yang-Baxter maps [41]. The general framework for determining Lax pairs for ordinary difference equations arising as twisted reductions of partial difference equations was recently outlined in [37]. From the point of view of symmetries of reductions [37], it is slightly more conducive to regard a twisted (n 1 , n 2 )-reduction as a reduction on an (n 1 + n 2 )-dimensional hypercube [25]. The symmetries of the reductions arise from different paths on this hypercube between the points connected via (6.2).
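The 3-dimensional consistency of (6.4) referred to above can be verified directly. The following minimal sketch checks, by exact rational arithmetic at one generic point rather than symbolically, that the three ways of computing the triply-shifted corner of the cube agree; the data values are illustrative.

```python
from fractions import Fraction as F

# H1, equation (6.4), on a face of the cube, solved for the corner opposite
# the base vertex: (w - w_ij) * (w_i - w_j) = p_i - p_j.
def far(base, wi, wj, pi, pj):
    return base - (pi - pj) / (wi - wj)

# Generic rational data at the base vertex and its three neighbours, with
# lattice parameters a, b, c attached to the three directions of the cube.
w, w1, w2, w3 = F(0), F(1), F(2), F(5)
a, b, c = F(3), F(1), F(0)

# Values on the three bottom faces through the base vertex.
w12 = far(w, w1, w2, a, b)
w13 = far(w, w1, w3, a, c)
w23 = far(w, w2, w3, b, c)

# The triply-shifted value w123 computed on each of the three top faces.
w123_a = far(w3, w13, w23, a, b)
w123_b = far(w2, w12, w23, a, c)
w123_c = far(w1, w12, w13, b, c)

# 3-dimensional consistency: all three determinations agree.
assert w123_a == w123_b == w123_c
print(w123_a)  # -> 1/5 for this data
```

A symbolic computation over the field of rational functions in w, w1, w2, w3, a, b, c confirms the same identity for arbitrary data; the exact-rational check above is a lightweight stand-in.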
The key to constructing Lax pairs for periodic reductions is that we have two parameters, α l and β m , whereas the reductions of (6.4) and (6.5) depend upon a single variable t = α l − β m and t = α l /β m respectively, which is constant with respect to shifts (l, m) → (l + n 1 , m + n 2 ).
We simply need to choose a spectral variable that is not constant with respect to the shift (l, m) → (l + n 1 , m + n 2 ).
Let us first treat the h-difference case. The correspondence with the discrete Garnier systems is made simple by taking periodic reductions of (6.4) with x = α l + a l , t = α l − β m + b m , (6.9) where a l and b m are n 1 -periodic and n 2 -periodic functions of l and m respectively. Note that an operator that shifts (l, m) → (l + n 1 , m + n 2 ) by (6.6) has the effect of fixing t and shifting x → x + h. The operator that shifts (l, m) → (l, m − n 2 ) fixes x and shifts t → t + h. A matrix inducing the shift in x may be written as A(x, t) = M l+n 1 ,m+n 2 −1 · · · M l+n 1 ,m L l,n 1 −1 · · · L l,m , where L l,m and M l,m are given by (6.7) and we have assumed that the correspondence between α l and β m and x and t is given by (6.9). By writing A(x, t) in this way, we have chosen an L-shaped path of initial conditions. For the other operator, we have R(x, t) = M −1 l,m−n 2 · · · M −1 l,m−1 , which brings us to the following result.
Proof . We start by showing that T 1,2 arises as a periodic reduction. Up to relabeling, for some fixed l, m, we let b m+1 = a 1 , b m = a 2 , a l+N −3 = a 3 , . . . , a l = a N , which is extended periodically with periods 2 and N − 2. For that same fixed l, m we let u 1 = w l+N −2,m+2 − w l+N −2,m+1 , u 2 = w l+N −2,m+1 − w l+N −2,m , u 3 = w l+N −2,m − w l+N −3,m , . . . , u N = w l+1,m − w l,m , which, due to (6.2) in the case that T is the identity (i.e., the periodic case), are also extended periodically in the lattice. Furthermore, the constraint u 1 + · · · + u N = w l+N −2,m+2 − w l,m = 0 holds by (6.2). Under this labelling of the initial conditions, the operator A(x, t) is of the form (1.2), where each factor is of the form (3.1). Furthermore, due to the periodicity, we have that The compatibility, given by (1.5) where Ã(x, t) = A(x, t + h), coincides with the computations in Proposition 3.4. To complete the correspondence, one notices that the action of S n defined by (3.10) is equivalent to using (6.4) to define a different path of initial conditions.
Due to the equivalence between (3.16) and the birational form, namely (4.4), the latter also arises as a reduction, as do any of the special cases that have arisen in Section 5.
The correspondence with the q-Garnier systems is made simple by taking twisted reductions of (6.5) with x = α l , t = α l /β m . (6.10)