Modular group representations in combinatorial quantization with non-semisimple Hopf algebras

Let $\Sigma_{g,n}$ be a compact oriented surface of genus $g$ with $n$ open disks removed. The algebra $\mathcal{L}_{g,n}(H)$ was introduced by Alekseev--Grosse--Schomerus and Buffenoir--Roche and is a combinatorial quantization of the moduli space of flat connections on $\Sigma_{g,n}$. Here we focus on the two building blocks $\mathcal{L}_{0,1}(H)$ and $\mathcal{L}_{1,0}(H)$ under the assumption that the gauge Hopf algebra $H$ is finite-dimensional, factorizable and ribbon, but not necessarily semi-simple. We construct a projective representation of $\mathrm{SL}_2(\mathbb{Z})$, the mapping class group of the torus, using $\mathcal{L}_{1,0}(H)$ and we study it explicitly for $H = \overline{U}\!_q(\mathfrak{sl}(2))$. We also show that it is equivalent to the representation constructed by Lyubashenko and Majid.


Introduction
Let Σ g,n be a compact oriented surface of genus g with n boundary components, let Γ be a ribbon graph embedded on the surface, and let H be a ribbon Hopf algebra. The lattice algebra of (Σ g,n , Γ, H) is an associative algebra, which is a quantum analogue of the algebra of functions associated to lattice gauge theory on (Σ g,n , Γ, G), where the role of the gauge group G is now played by the Hopf algebra H. The lattice algebra is endowed with an O(H)-coaction (where O(H) is the restricted dual of H with its canonical Hopf algebra structure), which is the quantum analogue of gauge transformations, and which turns the lattice algebra into an O(H)-comodule-algebra.
We can choose a canonical graph on Σ g,n which has just one vertex and has one ribbon edge corresponding to each homotopy class of curves. The corresponding lattice algebra is called the graph algebra, which will be denoted here L g,n (H). The representation theory of L g,n (H) and of its algebra of H-invariants is investigated in [AS96a] when H is finite-dimensional and semisimple and in [Ale94] when H is the quantum group U q (g) for q generic. Moreover, in [Ale94], it is asserted in the case H = U q (g) (q generic) that there are isomorphisms: L g,n (H) ≅ L 0,1 (H) ⊗n ⊗ L 1,0 (H) ⊗g , L 0,1 (U q (g)) ≅ U q (g), L 1,0 (U q (g)) ≅ Fun(T * G q ), where Fun(T * G q ) is a quantized algebra of functions associated to the cotangent bundle T * G (G being the Lie group associated to g). Note that in our terminology (Definition 4.3), Fun(T * G q ) is the Heisenberg double of the Hopf algebra O(U q (g)) (restricted dual of U q (g)). These isomorphisms indicate that L 0,1 (H) and L 1,0 (H) are building blocks of L g,n (H) and thus deserve particular interest. Following the terminology of [AS96a], L 0,1 (H) is called the loop algebra and L 1,0 (H) is called the handle algebra. A summary about the representation theory of L g,n (H) and L inv g,n (H) and about these isomorphisms is also available in [BNR02].
An important feature of L g,n (H) is that it gives rise to a projective representation of the mapping class group of Σ g,n (see [AS96a], [AS96b]).
In the previously cited original papers on combinatorial quantization, it is always assumed that H is either U q (g) with q generic or a particular semi-simplified specialization of U q (g) with q a root of unity. This latter specialization is well-defined in the context of weak quasi-Hopf algebras, and the whole construction involves only the simple U q (g)-modules having non-zero quantum dimensions (see [AGS95], [AGS96], [AS96a]).
In our setting, we assume that H is a finite-dimensional, factorizable, ribbon Hopf algebra which is not necessarily semi-simple, the guiding example being H = U q (sl(2)). We do not semi-simplify H, and it turns out that the non-semi-simple modules (especially the principal indecomposable modules) play an important role. Moreover, the combinatorial quantization of [AGS95], [AGS96] for U q (sl(2)) (q being a root of unity) and the one provided here for U q (sl(2)) describe different topological field theories. The first is related to topological data associated to Wess-Zumino-Witten conformal field theory, whereas the second is related to logarithmic conformal field theory.
We now outline the work presented here. Let H be a finite-dimensional, factorizable, ribbon Hopf algebra. The aim of the paper is to make a complete and careful study of the two building blocks L 0,1 (H) and L 1,0 (H) in this setting, to show how L 1,0 (H) gives rise to a projective representation of the mapping class group of the torus on SLF(H) (the space of symmetric linear forms on H), and to explicitly describe the theory for the important example H = U q (sl(2)). In our constructions we use neither the Clebsch-Gordan maps nor the S-matrix, since these objects have nice properties in the semi-simple case only.
Some well-known facts about braided Hopf algebras and the matrix coefficients of their finite-dimensional representations are recalled in section 2. In sections 3 and 4, the definitions of the two building blocks L 0,1 (H) and L 1,0 (H) as well as the O(H)-coactions on them are provided. We carefully prove the isomorphisms of L 0,1 (H) with H (Theorem 3.6) and of L 1,0 (H) with the Heisenberg double of O(H) (Theorem 4.7), which implies that L 1,0 (H) is (isomorphic to) a matrix algebra. In section 4.3 we define the representation of the algebra of coinvariants, L inv 1,0 (H), on SLF(H) (Theorem 4.8), which is a key point for us, and we provide useful technical formulas about it.
In section 5, the Dehn twist presentation of the mapping class group is recalled. Then we explain how to define a projective representation ρ SLF of SL 2 (Z) on SLF(H) (Theorem 5.8). As in [AS96a], [Sch98], we will be led to associate to each Dehn twist a copy in L 1,0 (H) of the ribbon element. We then show that the relations involved in the presentation of the mapping class group of the torus hold. Note that we clearly distinguish the relations which hold in L 1,0 (H) itself (see Proposition 5.5) from those which hold only when they are represented on SLF(H). The computations rely on several technical results in which integrals on H play an important role. We finally recall the Lyubashenko-Majid projective representation ρ LM on Z(H) and show in Theorem 5.12 that ρ SLF and ρ LM are equivalent.
The last section is devoted to the example of U q = U q (sl(2)). All the preliminary facts about U q and the GTA basis, which is a suitable basis of SLF(U q ), are available in [Fai18]. We recall the braided extension of U q to which the R-matrix belongs and explain how to define L 0,1 (U q ) and L 1,0 (U q ). Then in Theorem 6.4 we give the explicit formulas for the action of SL 2 (Z) on the GTA basis of SLF(U q ). The multiplication formulas in this basis (see [Fai18, Section 5], [GT09]), are the crucial tool to obtain the result. The structure of SLF(U q ) under the action of SL 2 (Z) is determined.
Since ρ SLF ≅ ρ LM , we recover a result of [FGST06] about the structure of the Lyubashenko-Majid representation on Z(U q ). Finally, in subsection 6.4, we formulate a conjecture about the structure of SLF(U q ) as a L inv 1,0 (U q )-module. In a work in progress, we extend these results to surfaces of arbitrary genus, dealing with the algebras L g,n (H).

Acknowledgments. I am grateful to my advisors, Stéphane Baseilhac and Philippe Roche, for their regular support and their useful remarks.

Notations.
If A is an algebra, V is a finite-dimensional A-module and x ∈ A, we denote by V x ∈ End C (V ) the representation of x on the module V . More generally, if X ∈ A ⊗n and if V 1 , . . . , V n are A-modules, we denote by X the representation of X on V 1 ⊗ . . . ⊗ V n . As in [CR62], we will use the abbreviation PIM for Principal Indecomposable Module. Here we consider only finite-dimensional representations.
We use integer indices to describe embeddings of tensors: if X = x ⊗ y ∈ A ⊗2 , then X ij denotes the element of a higher tensor power of A having x on the i-th tensorand, y on the j-th tensorand and 1 elsewhere; see [Kas95, VIII.2] for the precise definition. Note that this notation does not take into account the number of tensorands of the target space. For instance, X 13 may denote an element of A ⊗3 or of A ⊗4 . However, the target space is always clear from the context. We generalize this notation in the obvious way to embeddings of higher tensor powers. We denote by Mat n (A) = Mat n (C) ⊗ A the algebra of matrices of size n with coefficients in the algebra A. Every M ∈ Mat n (A) is uniquely written as M = Σ i,j E i j ⊗ a ij , where E i j is the elementary matrix with 1 at the intersection of the i-th row and the j-th column and 0 elsewhere. Let M, N ∈ Mat n (A), and let as above M 1 (resp. N 2 ) be the embedding of M (resp. N ) in Mat n (C) ⊗ Mat n (C) ⊗ A = Mat n 2 (A). Then we see that M 1 N 2 (resp. N 2 M 1 ) contains all the possible products of coefficients of M (resp. of N ) by coefficients of N (resp. of M ). In particular, M 1 N 2 = N 2 M 1 if and only if the coefficients of M commute with those of N . More generally, if the coefficients of a tensor M commute with those of a tensor N and if they are embedded on different tensorands, then their embeddings commute. For instance M 135 N 24 = N 24 M 135 , M 145 N 23 = N 23 M 145 and so on.
In order to simplify notations, we will use implicit summations. First, we use Einstein's notation for the computations involving indices: when an index variable appears twice, once in upper position and once in lower position, it implicitly means summation over all the values of the index. For instance if L, M ∈ Mat n (C) ⊗2 ⊗ A and N ∈ Mat n (C) ⊗3 ⊗ A, then (L 32 M 13 N 312 ) ace bdf = L ec ij M ai kl N lkj f bd . Second, we use Sweedler's notation (see [Kas95, Not. III.1.6]) without summation sign for the coproducts, that is we write ∆(x) = x ′ ⊗ x ′′ . Finally, we will write the R-matrix as R = a i ⊗ b i with implicit summation on i.
For q ∈ C \ {−1, 0, 1}, we define the q-integer [n] (with n ∈ Z) by [n] = (q n − q −n )/(q − q −1 ). We will denote q̂ = q − q −1 to shorten formulas. Observe that if q is a 2p-th root of unity, then [p] = 0 and [p − n] = [n]. As usual I n will denote the identity matrix of size n and δ s,t is the Kronecker symbol.
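These identities are immediate from the definition; as an illustration (a numerical sanity check added here, not part of the original text), one can verify them for a primitive 2p-th root of unity q = e^{iπ/p}:

```python
import cmath

def q_int(n, q):
    """q-integer [n] = (q^n - q^(-n)) / (q - q^(-1))."""
    return (q**n - q**(-n)) / (q - q**(-1))

p = 5
q = cmath.exp(1j * cmath.pi / p)   # a primitive 2p-th root of unity

print(abs(q_int(p, q)))                    # [p] = 0, up to rounding
print(abs(q_int(p - 2, q) - q_int(2, q)))  # [p - n] = [n], here for n = 2
```

The check works because q^p = −1 when q is a primitive 2p-th root of unity, so the numerator of [p] vanishes and q^{p−n} − q^{n−p} = q^n − q^{−n}.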

Some basic facts
We refer to [CR62,Chap. IV and VIII] for background material about representation theory.

Dual of a finite-dimensional algebra
Let A be a finite-dimensional C-algebra. Denote by A * the dual of A, that is the vector space Hom C (A, C). Let V be an n-dimensional A-module. We define V T ∈ A * ⊗ End C (V ) by V T (x) = V x for all x ∈ A. If we choose a basis in V , then we can express V x ∈ End C (V ) in this basis, and hence V T becomes a matrix V T ∈ Mat n (A * ). An element V T i j with 1 ≤ i, j ≤ n is then called a matrix coefficient (associated to the representation V ). This is the point of view used throughout this paper: we think about V T as a matrix.
Since A * is finite-dimensional, it is generated as a vector space by the matrix coefficients of the PIMs. Indeed, let (x 1 , . . . , x n ) be a basis of A with x 1 = 1, let (x 1 , . . . , x n ) ⊂ A * be the dual basis and let A A be the regular representation. It is readily seen that x i = A A T i 1 , so that every element of A * is a linear combination of matrix coefficients of A A. The claim is proved since the PIMs are the direct summands of A A. Note however that the matrix coefficients of the PIMs do not form a basis of A * in general. Indeed, even if we fix a family (P α ) representing each isomorphism class of PIMs, it is possible for P α and P β to have a composition factor S in common. In this case, both P α T and P β T contain S T as a submatrix. This is what happens for H = U q (sl(2)), see e.g. [Fai18, Section 3]. In the semi-simple case this phenomenon does not occur.
An obvious but important relation is functoriality: if φ : V → W is an A-linear map, then we have W T φ = φ V T, (1) where φ is identified with its matrix in the chosen bases.

Braided Hopf algebras, factorizability, ribbon element
For all the definitions and basic results about braided Hopf algebras, we refer to [Kas95,Chap. VIII].
Let H be a braided Hopf algebra with universal R-matrix R. We will often write R = a i ⊗ b i . Let us recall the main properties of R:
R∆(x) = ∆ op (x)R for all x ∈ H, (2)
(∆ ⊗ id)(R) = R 13 R 23 , (id ⊗ ∆)(R) = R 13 R 12 , (3)
R 12 R 13 R 23 = R 23 R 13 R 12 . (4)
The relation (4) is called the (quantum) Yang-Baxter equation.
Consider the map Ψ : H * → H defined by Ψ(β) = (β ⊗ id)(RR ′ ), where R ′ = P (R), with P the flip map defined by P (x ⊗ y) = y ⊗ x. Ψ is often called the Reshetikhin-Semenov map, or the Drinfeld map. Here we will encounter several variants of the map Ψ, and we reserve the name Reshetikhin-Semenov-Drinfeld for the morphism introduced in Proposition 3.4. We say that H is factorizable if Ψ is an isomorphism of vector spaces. By the remarks above, we can restrict β to be a matrix coefficient of some finite-dimensional H-module; we will use the letters I, J, . . . for modules over Hopf algebras. Observe that Ψ sends matrix coefficients to products of coefficients of the matrices I L (±) . Hence, if H is factorizable, the coefficients of the matrices I L (±) generate H as an algebra. These matrices satisfy nice relations which are consequences of (2) and (4). Be aware that the expression of the coproduct is not the same as in a usual matrix product, since the order of indices is inverted (compare with (11) below). If the representations I and J are fixed and arbitrary, we will simply write these relations with the spaces 1 and 2 to alleviate notations (the space 1 (resp. 2) corresponds to the representation I (resp. J)).
Recall that the Drinfeld element u and its inverse are: u = S(b i )a i , u −1 = b i S 2 (a i ). (7) We say that v ∈ H is a ribbon element if it is central and it satisfies: v 2 = uS(u), S(v) = v, ∆(v) = (R ′ R) −1 (v ⊗ v). (8) We also have ε(v) = 1. A ribbon element is in general not unique. A ribbon Hopf algebra (H, R, v) is a braided Hopf algebra (H, R) together with a ribbon element v. We say that g ∈ H is a pivotal element if: g is grouplike and S 2 (x) = gxg −1 for all x ∈ H. (9) A pivotal element is in general not unique. But in a ribbon Hopf algebra (H, R, v) there is a canonical choice: g = uv −1 . (10) We will always take this canonical pivotal element g in the sequel.
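In particular, the identity g 2 = uS(u) −1 , which appears again in section 5, is a standard consequence of the above; the short derivation (using only the centrality of v and v 2 = uS(u)) is:

```latex
g^2 = (u v^{-1})^2 = u^2 v^{-2}     % v is central
    = u^2 \,(u\,S(u))^{-1}          % since v^2 = u S(u)
    = u\, S(u)^{-1}.
```

Here u and S(u) commute, so the cancellation in the last step is unambiguous.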

Dual Hopf algebra O(H)
Let (H, ·, η, ∆, ε, S) be a finite-dimensional Hopf algebra. There exists a canonical Hopf algebra structure on H * , defined for ψ, ϕ ∈ H * by: (ψϕ)(x) = ψ(x ′ )ϕ(x ′′ ), ∆(ψ)(x ⊗ y) = ψ(xy), ε(ψ) = ψ(1), S(ψ) = ψ ◦ S, with x, y ∈ H. When it is endowed with this structure, H * is called the dual Hopf algebra, and denoted O(H) in the sequel. In terms of matrix coefficients, these relations give the fusion relations. If the two representations I and J are fixed and arbitrary, we will simply write them with the spaces 1 and 2. Remark 1. By coassociativity, we have (I ⊗ J) ⊗ K = I ⊗ (J ⊗ K). Hence, there are two decompositions of I⊗J⊗K M , obtained using (2). Applying the Yang-Baxter equation twice, the reader may check that these two expressions are equal. Also recall that we have commutation relations between coefficients embedded on distinct tensorands. We have a useful analogue of the FRT relations: Proposition 3.2. The following exchange relations hold in L 0,1 (H): Such a relation is called a reflection equation. It can be written in a shortened way if the representations I and J are fixed and arbitrary. If H has a pivotal element g, then for every Φ ∈ End H (I), the element tr( I g Φ I M ) is a coinvariant. Before giving the proof, let us point out the identifications that we will always use in the sequel to shorten notations.
Proof: It is easy to check the axioms of a coaction. Let us check that Ω is compatible with the fusion relation, using the shortened notation explained before: (eq. (11) and (15)) = T 1 M 1 T 2 R 21 M 2 R −1 21 S(T ) 2 S(T ) 1 (commuting elements in the tensor product algebra). The proof of the last claim is just a matrix computation using (1), (14) and (11). ✷

Isomorphism
It is easy to check that the space of invariants is the space of coinvariants and that if moreover V is an O(H)-comodule-algebra, then this formula endows V with a structure of (right) H-module-algebra.
In the case of L 0,1 (H), Proposition 3.3 gives the corresponding right action of H. Also recall the right adjoint action of H on itself, defined by a · h = S(h ′ )ah ′′ with a, h ∈ H, whose invariants are the central elements of H.
Proposition 3.4. The following map is a morphism of algebras: Ψ 0,1 : L 0,1 (H) → H. If we endow H with the right adjoint action, then Ψ 0,1 is a morphism of (right) H-modules. Hence, Ψ 0,1 brings coinvariants to central elements.
We will call Ψ 0,1 the Reshetikhin-Semenov-Drinfeld morphism (RSD morphism for short). It differs from the morphism Ψ of section 2.2 only by its source space. Proof: Using the relations of (6), we check that Ψ 0,1 preserves the relation of Definition 3.1. For the H-linearity, we use the basic properties of S. Let T n (H * ) be the subspace generated by all the products ψ 1 · · · ψ n , with ψ i ∈ H * for each i.

Proof:
It suffices to show that the product of two elements of T 1 (H * ) is equivalent to a linear combination of elements of T 1 (H * ), and the result follows by induction. We can restrict to matrix coefficients since they linearly span H * . The idea is to invert the fusion relation. If we write R = a i ⊗ b i , then the fusion relation can be rewritten and inverted, and this gives the result. ✷ Theorem 3.6. Recall that we assume that H is a finite-dimensional factorizable Hopf algebra. Then the RSD morphism Ψ 0,1 gives an isomorphism of H-module-algebras L 0,1 (H) ≅ H. It follows that L inv 0,1 (H) ≅ Z(H). Let us point out obvious consequences. First, by comparing the dimensions, we see that the canonical map H * ↪ T(H * ) ↠ L 0,1 (H) is an isomorphism of vector spaces. Second, this shows that the matrices I M are invertible since RR ′ is invertible. More importantly, this theorem allows us to identify L 0,1 (H) with H, the matrices L (±) being defined in (5). We will always work with this identification in the sequel.
We denote by SLF(H) the space of symmetric linear forms on H: SLF(H) = {ψ ∈ O(H) | ∀ x, y ∈ H, ψ(xy) = ψ(yx)}. SLF(H) is obviously a subalgebra of O(H). Consider the following variant D of the map Ψ of section 2.2, obtained by inserting the pivotal element g of (10); it will be useful in what follows. Since H is factorizable, D is an isomorphism of vector spaces.
A computation similar to that of the proof of Proposition 3.4 shows that D brings symmetric linear forms to central elements. Moreover, it is not difficult to show that it induces an isomorphism of algebras between SLF(H) and Z(H). Every ψ ∈ O(H) can be expanded on matrix coefficients, with coefficients λ I ij . In order to avoid the indices, define for each I a matrix Λ I ∈ Mat dim(I) (C) by (Λ I ) i j = λ I ji . Then ψ can be expressed as: ψ = Σ I tr(Λ I I T ). We write the summation sign because the set of indices is unusual. By applying D, we get the following lemma. Although it is trivial, we will use it several times.
Lemma 3.7. Every x ∈ L 0,1 (H) can be expressed as: x = Σ I tr(Λ I I M ). Let us stress that, due to non-semi-simplicity, this way of writing elements of L 0,1 (H) and of SLF(H) is in general not unique, see the comments in section 2.1.
4 The handle algebra L 1,0 (H)

From now on, we assume that H is a finite-dimensional factorizable ribbon Hopf algebra. Note however that the ribbon assumption is not needed in sections 4.1 and 4.2.

Definition of L 1,0 (H) and O(H)-comodule-algebra structure
If A 1 and A 2 are two algebras, we denote by A 1 * A 2 their free product. A 1 and A 2 are subalgebras of A 1 * A 2 , hence there exist two canonical injections j 1 : A 1 → A 1 * A 2 and j 2 : A 2 → A 1 * A 2 . Consider the free product L 0,1 (H) * L 0,1 (H), and let j 1 (resp. j 2 ) be the injection in the first (resp. second) copy of L 0,1 (H). Definition 4.1. The handle algebra L 1,0 (H) is the quotient of L 0,1 (H) * L 0,1 (H) by the following exchange relations, imposed for all finite-dimensional H-modules I, J:
Like the other relations before, the L 1,0 -exchange relation can be written more simply with the spaces 1 and 2. An important feature of L 1,0 (H) is that it is endowed with an O(H)-comodule-algebra structure, defined in a similar way to that of L 0,1 (H). The following map defines a structure of (left) O(H)-comodule-algebra on L 1,0 (H): Ω : L 1,0 (H) → O(H) ⊗ L 1,0 (H). If H has a pivotal element g, the elements obtained from any Φ ∈ End H (I ⊗ J) by applying tr 12 = tr ⊗ tr are coinvariants.
Proof: To show that Ω is an algebra morphism, we just have to check that Ω is compatible with the exchange relation. The proof is similar to that of Proposition 3.3 and is left to the reader. For the claim about coinvariants, we have: For the first equality, use (13), (11) and (1), and for the second take back the computation of the proof of Proposition 3.3 with Let L inv 1,0 (H) be the subalgebra of coinvariants of L 1,0 (H). We now describe a wide family of maps L 0,1 (H) L 1,0 (H). For w ∈ L inv 0,1 (H) = Z(H) and m 1 , n 1 , . . . , m k , n k ∈ Z, define: It is clear that these maps are morphisms of O(H)-comodules, but not of algebras in general. Hence the restriction satisfies j wA m 1 B n 1 ...A m k B n k : L inv 0,1 (H) L inv 1,0 (H). This gives a particular type of coinvariants in L 1,0 (H). We will more shortly write: Remark 3. Recall from remark 2 that the matrix coefficients do not form a basis of L 0,1 (H). They just linearly span this space. Thus, it is not totally obvious that the maps j wA m 1 B n 1 ...A m k B n k are well-defined since they are defined using matrix coefficients. First, are well-defined. Let us show for instance that the map Applying the coproduct in O(H) twice and tensoring with id H , we get: We evaluate this on ( Finally, we apply the map j A ⊗ j B ⊗ j A and multiplication in L 1,0 (H): • Under this identification, we have the exchange relation: and the representation ⊲ is: Proof: Using (6) and (11), we have: Next, we have: The last equality is obvious. a ✷ Proposition 4.5. The following map is a morphism of algebras: Proof: We have to check that the fusion and exchange relations are compatible with Ψ 1,0 . Observe that the restriction of Ψ 1,0 to the first copy of L 0,1 (H) ⊂ L 1,0 (H) is just the RSD morphism Ψ 0,1 , thus Ψ 1,0 is compatible with the fusion relation over A. 
For the fusion relation over B, we compute using (6) and (11). The same kind of computation allows one to show that Ψ 1,0 is compatible with the L 1,0 -exchange relation. ✷ We wish to show that Ψ 1,0 is an isomorphism.
Lemma 4.6. Every element in L 1,0 (H) can be written as a sum of products (x i ) A (y i ) B . Proof: We use the same strategy as in Lemma 3.5. It suffices to show that an element like y B x A can be written as i (x i ) A (y i ) B , and the result will follow because we can reorder all the elements by induction. We can restrict to matrix coefficients since they linearly span L 0,1 (H). The idea is to invert the exchange relation. If we write R = a i ⊗ b i , then proceeding as in the proof of Lemma 3.5, we find the desired expression, and this gives the result. ✷ Using Lemma 4.4, it is easy to get the following formula, where as usual R = a i ⊗ b i and the last equality is obtained using (2).
As it was first pointed out in [Ale94] in the case of H = U q (g) (q generic), the representation O(H) has an important submodule when we restrict to L inv 1,0 (H). This is also the case with our assumptions.  Lemma 4.9. 1) Recall that L 1,0 (H) is endowed with a structure of (right) H-module-algebra given by: ∀h ∈ H, Then, for the matrices Proof: 1) We compute each side of the equality. First, for U = A or B, we get using (6): Second, we get using (6), (23) and the shortened notation: The details in our general setting will be given in [Fai].
We now need to determine explicit formulas for the representation of particular types of coinvariants that will appear in the proof of the modular identities in section 5. If ψ ∈ H * and a ∈ H, we define ψ a = ψ(a?), where ψ(a?) : x ↦ ψ(ax). This defines a right representation of H on H * . Obviously, if z ∈ Z(H) and ψ ∈ SLF(H) then ψ z ∈ SLF(H).
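In down-to-earth terms, ψ(a?) is just the precomposition of ψ with left multiplication by a. A toy illustration (added here, not part of the original text, with the matrix algebra Mat 2 (C) playing the role of the ambient algebra and the trace as a symmetric linear form):

```python
import numpy as np

def act(psi, a):
    """Right action of a on the linear form psi: (psi . a)(x) = psi(a x)."""
    return lambda x: psi(a @ x)

tr = lambda x: float(np.trace(x))   # a symmetric linear form: tr(xy) = tr(yx)
a = np.array([[1.0, 2.0], [0.0, 3.0]])
x = np.array([[0.0, 1.0], [1.0, 0.0]])

psi_a = act(tr, a)
print(psi_a(x))   # equals tr(a x) = 2.0

# If z is central (here a scalar matrix), tr(z?) is again symmetric:
z = 2.0 * np.eye(2)
y = np.array([[1.0, 1.0], [0.0, 1.0]])
print(act(tr, z)(x @ y) == act(tr, z)(y @ x))  # True
```

The last check mirrors the remark above: for central z the form ψ(z?) stays symmetric, since ψ(zxy) = ψ(xzy)... = ψ(zyx) by symmetry of ψ and centrality of z.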
Recall that z A = j A (z) (resp. z B = j B (z)) is the image of z ∈ L 0,1 (H) by the map j A (resp. j B ), and that D is the isomorphism defined in (17).
Proof: For the first relation, we show a more general formula. Let x ∈ L 0,1 (H) and let φ ∈ H * .
As usual, we identify L 0,1 (H) and H and we can restrict to x = I M a b . We can also restrict to φ = J T c d . Then by (22): This gives the first formula because ψ is symmetric.
The second is less easy. By Lemma 3.7, we write z B = I tr(Λ I with tr 12 = tr ⊗ tr and R = a i ⊗ b i . Note that we used that D −1 (z) is symmetric. Now, denoting m : H ⊗ H H the product in H and using the Yang-Baxter equation, we have: (7) and (10). Hence for x ∈ H: as desired. a ✷ Lemma 4.11. Let z ∈ L inv 0,1 (H) = Z(H) and let ψ ∈ SLF(H). Then: It follows that if S(ψ) = ψ for all ψ ∈ SLF(H), then ρ SLF (z B −1 ) = ρ SLF (z B ).

Proof:
This proof is quite similar to that of the previous proposition. Using the fact that S(g) = g −1 and (12), we get: Now, by (7) and (8), we have: Hence we conclude as in the previous proof. ✷

5 Projective representation of SL 2 (Z)

As previously, H is a finite-dimensional factorizable ribbon Hopf algebra.

Mapping class group of the torus
Let Σ g,n be a compact oriented two-dimensional surface of genus g with n punctures. Recall that the mapping class group MCG(Σ g,n ) is the group of all isotopy classes of orientation-preserving homeomorphisms which leave the set of punctures globally invariant, see [FM12] (also see the introductory survey [Mas09]). Let D ⊂ Σ g,n be an embedded open disk. Then MCG(Σ g,n \ D) is defined as MCG(Σ g,n ), except that we restrict to homeomorphisms which fix the boundary circle C = ∂(Σ g,n \ D) pointwise. Let us put the base point of π 1 (Σ g,n \ D) on the boundary circle C. Since C is pointwise fixed, we can consider the action of MCG(Σ g,n \ D) on π 1 (Σ g,n \ D), obviously defined by f · γ = f ◦ γ.

From now on, we identify f with its isotopy class [f ] and γ with its homotopy class [γ].
Here we focus on the torus Σ 1,0 = S 1 × S 1 . By the Dehn-Lickorish theorem, MCG(Σ 1,0 ) is generated by the Dehn twists τ a , τ b along the curves a = {1} × S 1 and b = S 1 × {1} respectively. It is well known (see e.g. [Mas09]) that MCG(Σ 1,0 ) admits the presentation ⟨ τ a , τ b | τ a τ b τ a = τ b τ a τ b , (τ a τ b ) 6 = 1 ⟩. This presentation is not the usual one of SL 2 (Z), which is: ⟨ s, t | (st) 3 = s 2 , s 4 = 1 ⟩. The link between the two presentations is s = τ a τ b τ a , t = τ −1 a . We now remove from Σ 1,0 an open disk D which intersects neither a nor b. The surface Σ 1,0 \ D and the curves a and b are represented in the figure below. We view these curves as elements of π 1 (Σ 1,0 \ D), that is we consider them up to homotopy and we provide them with an orientation.
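As an elementary cross-check (an illustration added here, not part of the original argument), these relations can be verified on the standard images of the Dehn twists in SL 2 (Z); the matrices below are one usual choice of convention:

```python
def mul(m, n):
    """Product of 2x2 integer matrices."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def power(m, e):
    r = [[1, 0], [0, 1]]
    for _ in range(e):
        r = mul(r, m)
    return r

I2 = [[1, 0], [0, 1]]
ta = [[1, -1], [0, 1]]   # image of the twist along a (a choice of convention)
tb = [[1, 0], [1, 1]]    # image of the twist along b

s = mul(mul(ta, tb), ta)             # s = ta tb ta
t = [[1, 1], [0, 1]]                 # t = ta^(-1)

assert mul(mul(ta, tb), ta) == mul(mul(tb, ta), tb)  # braid relation
assert power(mul(ta, tb), 6) == I2                   # (ta tb)^6 = 1
assert power(s, 4) == I2                             # s^4 = 1
assert power(mul(s, t), 3) == power(s, 2)            # (st)^3 = s^2
print("all relations hold")
```

With this convention one finds s = [[0, -1], [1, 0]], the standard order-4 rotation matrix.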

Automorphisms α and β
The fundamental idea, proposed in [AS96a] and [AS96b], is to mimic the action of the Dehn twists of Σ g,n (viewed as elements of MCG(Σ g,n \ D)) on π 1 (Σ g,n \ D) at the level of the algebra L g,n (H). Let us be more precise. We focus on the case (g, n) = (1, 0). In π 1 (Σ 1,0 \ D) we have the two canonical curves a and b, while in L 1,0 (H) we have the matrices In view of (24), let us try to define two morphisms τ a , τ b : L 1,0 (H) L 1,0 (H) by the same formulas: Let us see the behavior of these mappings under the fusion and exchange relations. For the exchange relation, no problem arises: and a similar computation holds for τ b . The fusion relation is almost satisfied: and we get similarly: From this we conclude that the elements Since v is central, we see by functoriality that the exchange relation still holds with these elements. We thus have found the morphisms which mimic τ a and τ b . We denote them by α and β respectively. Moreover, these automorphisms are inner: there exist α, β ∈ L 1,0 (H) unique up to scalar such that ∀ x ∈ L 1,0 (H), α(x) = αx α −1 , β(x) = βx β −1 .
Proof: It remains to show that α and β are invertible, which is obvious since their inverses are given by similar formulas. By Theorem 4.7, L 1,0 (H) is a matrix algebra. Hence, by the Skolem-Noether theorem, every automorphism of L 1,0 (H) is inner. ✷ A natural question is then to find explicitly the elements α, β. The answer is amazingly simple (it has been provided in [AS96a] and [Sch98] for the semi-simple case).
Theorem 5.2. Up to scalar, α = v −1 A and β = v −1 B . Recall that we have identified H with L 0,1 (H) under Ψ 0,1 , thus we regard the ribbon element v as an element of L 0,1 (H). Proof: By the definition of α and β, we have: Conversely, every invertible element satisfying the first (resp. the second) line of equations is necessarily a scalar multiple of α (resp. of β) by the Skolem-Noether theorem. Thus we will show that v −1 A and v −1 B satisfy these equations. It is obvious that v −1 A (resp. v −1 B ) commutes with the matrices I A (resp. I B) since it is central in j A (L 0,1 (H)) (resp. in j B (L 0,1 (H))). Let us show the other commutation relation for v −1 A . The idea is to make the computations in H(O(H)). Recall that when we restrict Ψ 1,0 to j A (L 0,1 (H)), we just get the morphism Ψ 0,1 : Hence we must show that: Using the exchange relation of Definition 4.3 together with (11) and (8), we have: as desired. We now apply the morphism α −1 • β −1 to the equality v −1 Using that v B and I B commute, we easily get the desired equality. a ✷ This theorem is important because it will allow us to use properties of the morphisms α and β in order to show relations between v A and v B .

Projective representation of SL 2 (Z) on SLF(H)
We now show that the elements v A , v B give rise to a projective representation of MCG(Σ 1,0 ) on SLF(H) via the assignment sending the Dehn twists τ a , τ b to the actions of v A , v B . We must then check that v A v B v A ∼ v B v A v B and (v A v B ) 6 ∼ 1, where ∼ means equality up to scalar. As we will see, it turns out that the braid relation holds in the algebra L 1,0 (H) itself (the scalar being 1), while the relation (v A v B ) 6 ∼ 1 only holds in the representation SLF(H). This is not surprising because the relation (τ a ) * (τ b ) * (τ a ) * = (τ b ) * (τ a ) * (τ b ) * holds on π 1 (Σ 1,0 \ D), while the relation ((τ a ) * (τ b ) * ) 6 = 1 only holds on π 1 (Σ 1,0 ). Thus we see that, intuitively, applying ρ SLF amounts to gluing back the disk D.
Integrals on H will play a prominent role. Recall that a left integral (resp. right integral) is a non-zero linear form µ l (resp. µ r ) on H which satisfies: ψµ l = ψ(1)µ l (resp. µ r ψ = ψ(1)µ r ) for all ψ ∈ H * . Since H is finite-dimensional, this is equivalent to: (id ⊗ µ l ) ◦ ∆(x) = µ l (x)1 (resp. (µ r ⊗ id) ◦ ∆(x) = µ r (x)1) for all x ∈ H. (25) It is well-known that left and right integrals always exist if H is finite-dimensional. Moreover, they are unique up to scalar. We fix µ l . Then µ l ◦ S −1 is a right integral, and we choose µ r = µ l ◦ S −1 . We can now state an important result for the sequel; its third item reads: 3. ∀ x, y ∈ H, µ l (xy) = µ l (yS 2 (x)) , µ r (xy) = µ r (S 2 (y)x).
Proof: Consider the following computation, where we use (8) and (25): Since H is factorizable, the map D is an isomorphism of vector spaces. The left integral µ l is nonzero, so µ l (g −1 v −1 ?) is non-zero either. Since D is an isomorphism, it follows that D µ l (g −1 v −1 ?) = µ l (v −1 )v = 0, and thus µ l (v −1 ) = 0. Hence the formula for ϕ v is well defined. Moreover, we have the restriction D : SLF(H) ∼ Z(H), so since v ∈ Z(H), we get that ϕ v ∈ SLF(H). This allows us to deduce the properties stated about µ l . Using (27), we obtain the properties 1, 2 and 3 for µ r . We can now proceed with the computation for ϕ v −1 : where we used (27), the property 3) previously shown and (8). We conclude as before. a ✷ Since D is an isomorphism of algebras, we have ϕ v −1 = ϕ −1 v , and By Proposition 4.10, the actions of v A and v B on SLF(H) are: We simply used (26). a ✷ This lemma has an important consequence.
Proposition 5.5. The following braid relation holds in L 1,0 (H): v A v B v A = v B v A v B . Proof: It is easy to check that the morphisms α and β satisfy the braid relation αβα = βαβ. Because L 1,0 (H) is a matrix algebra, there exists a scalar λ such that α β α = λ β α β. Hence, by Theorem 5.2, we get the corresponding relation between v −1 A and v −1 B up to the scalar λ. Let us see the action of both sides on the counit: we use ε(v?) = ε(v)ε = ε and Lemma 5.4. It follows that λ = 1. ✷ Observe that (αβ) 6 ≠ id, thus the other relation of MCG(Σ 1,0 ) does not hold in L 1,0 (H). In order to show it in the representation, we begin with a technical lemma, in which we use the notation of (19).
Proof: Using Lemma 3.7, write as usual z = I tr Λ I The idea is to make the computations in the Heisenberg double. We have Using the defining relation of H(O(H)) together with (11), (2) and (7): A similar computation shows that Ψ 1,0 (z vB −1 A ) = S(D −1 (z)). Hence z v −1 AB −1 = z vB −1 A . Applying the morphism α to this equality, we find: Then ω implements the automorphism ω = αβα: The key observation is the following lemma.
Lemma 5.7. For all ψ ∈ SLF(H): Proof: Firstly, we show the formula for ψ = ε: where we used (29) and (28). Secondly, note that: Using Lemma 5.6, we get for all invariants z ∈ L inv 0,1 (H): This shows in particular that Thirdly, observe that by Proposition 4.10: These three facts together with Lemma 4.11 yield: as desired. a ✷ Recall that PSL 2 (Z) = SL 2 (Z)/{±I 2 } admits the following presentations: These maps satisfy the following restrictions: This is due to the fact that they intertwine the adjoint and the coadjoint actions (for the first the computation is analogous to that of the proof of Proposition 3.4, while the second is immediate by Proposition 5.3). It follows that Z(H) is stable under S and T . But since S 2 is inner, we have S 4 (z) = S −2 (z) = z for each z ∈ Z(H). Thus there exists a projective representation ρ LM of SL 2 (Z) on Z(H), defined by: As a corollary of these remarks, we have the following lemma.
Lemma 5.9. 1) H is unimodular, which means that there exists c ∈ Z(H), called a two-sided cointegral, such that xc = ε(x)c = cx for all x ∈ H.
Then c ∈ Z(H), and since γ is invertible, it follows that xc = ε(x)c.
2) The terminology "unibalanced" is taken from [BBG18], where some facts about integrals and cointegrals are recalled. Let a ∈ H be the comodulus of µ r : ψµ r = ψ(a)µ r for all ψ ∈ O(H) (see e.g. [BBG18, eq. 4.9]). By a result of Drinfeld (see [Mon93, Prop. 10.1.14], but be aware that in this book the notations and conventions for a and g differ from ours), we know that: where α ∈ H ∗ is the modulus of the left cointegral c l of H. Here, since c = c l is two-sided, we have α = ε. Thus g 2 = u 2 v −2 = uS(u) −1 = a by (8) and (10). We deduce that where the second equality is [BBG18, Prop. 4.7]. ✷ The left q-characters are nothing more than shifted symmetric linear forms. More precisely, we have an isomorphism of algebras: Let us define shifted versions of χ and of γ: The equality S = χ g −1 ◦ γ g still holds, but now with SLF(H) instead of Ch l (H).
In order to show the equivalence of ρ SLF and ρ LM , we begin with two technical lemmas.
Proof: The first equality is easy to show with formulas (8) and (3). For the second one: where we simply used the defining property (25) of µ r . ✷ We will employ an immediate consequence of Lemma 5.9: Lemma 5.11. It holds: Proof: We compute each side of the equality. On the one hand: whereas on the other hand: as desired. We used (30) and Lemma 5.10. ✷ The link between the two presentations of SL 2 (Z) is s = τ a τ b τ a , t = τ −1 a . Hence we define two operators S ′ , T ′ : SLF(H) → SLF(H) by: Theorem 5.12. Recall that we assume that H is a finite-dimensional factorizable ribbon Hopf algebra. Then the projective representation ρ SLF of Theorem 5.8 is equivalent to ρ LM .
Proof: Consider the following isomorphism of vector spaces: By Lemma 5.11, this isomorphism intertwines ρ SLF and ρ LM . ✷

6 The example of H = U q (sl(2))

Let q be a primitive root of unity of order 2p, with p > 2. We now work out in some detail the case of H = U q (sl(2)), the restricted quantum group associated to sl(2), which will be denoted U q in the sequel. The definitions, notations, conventions and main properties of U q , Z(U q ), SLF(U q ), their canonical bases, and of the U q -modules used here are summarized in the first pages of [Fai18], to which we refer in order to keep this text compact.
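As a small numerical sanity check of this root-of-unity setup (nothing here is specific to U q ; we only assume the standard conventions q̂ = q − q −1 and [n] = (q n − q −n )/(q − q −1 ), and the helper `qint` is ours), one can verify that a primitive 2p-th root of unity satisfies q p = −1, so that the quantum integer [p] vanishes:

```python
# Numerical illustration of the root-of-unity setup: q is a primitive
# 2p-th root of unity, hence q^p = -1 and the quantum integer
# [n] = (q^n - q^{-n}) / (q - q^{-1}) vanishes at n = p.
import cmath

p = 5                               # any integer p >= 2 works here
q = cmath.exp(1j * cmath.pi / p)    # a primitive 2p-th root of unity

def qint(n):
    """Quantum integer [n] = (q^n - q^{-n}) / (q - q^{-1})."""
    return (q**n - q**(-n)) / (q - q**(-1))

assert abs(q**(2 * p) - 1) < 1e-12   # q has order 2p
assert abs(q**p + 1) < 1e-12         # q^p = -1
assert abs(qint(p)) < 1e-12          # [p] = 0
assert abs(qint(1) - 1) < 1e-12      # [1] = 1
```

The vanishing of [p] is the source of the non-semisimplicity of U q at such q.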
To explicitly describe the representation of SL 2 (Z), we use the GTA basis of SLF(U q ), which is studied in detail in [Fai18] and which was defined in [GT09] and [Ari10].
In principle, since U q is not braided (see below), it is not clear that the previous results still hold. In practice, the universal R-matrix belongs to an extension of U q by a square root of K, and although some computations take place in this extension, the final results always belong to U q . This is explained in what follows.

6.1 The braided extension of U q
Recall that U q is not braided itself. But its extension by a square root of K is braided, as shown in [FGST06]. Let U q 1/2 be this extension and R ∈ U q 1/2 ⊗ U q 1/2 be the universal R-matrix, given by where q 1/2 is a fixed square root of q. We use the notation q H⊗H/2 because q H⊗H/2 v ⊗ w = q ab/2 v ⊗ w if K 1/2 v = q a/2 v and K 1/2 w = q b/2 w. Recall that q̂ = q − q −1 . Then RR ′ ∈ U q ⊗ U q and v ∈ U q (we choose g = K p+1 as pivotal element, and by (10) this fixes the choice of v). Moreover, even if it is not braided, U q is factorizable, in the sense that the morphism Ψ of section 2.2 is an isomorphism of vector spaces. Let Ī be a U q 1/2 -module. Since U q ⊂ U q 1/2 , Ī determines a U q -module, which we denote Ī |Uq . We say that a U q -module J is liftable if there exists a U q 1/2 -module J̄ such that J̄ |Uq = J. Not every U q -module is liftable, see [KS11]. But the simple modules and the PIMs are liftable, which is enough for us. Indeed, it suffices to define the action of K 1/2 on these modules. We take up the notations of [Fai18] for the canonical bases of modules. For the simple module X ǫ (s) (ǫ ∈ {±}), there are two choices of ǫ 1/2 , and so the two possible liftings are defined by and the action of E and F is unchanged. The two possible liftings of the PIM P ǫ (s) are defined by K 1/2 b 0 = ǫ 1/2 q (s−1)/2 b 0 , K 1/2 x 0 = ǫ 1/2 q p/2 q (p−s−1)/2 x 0 , K 1/2 y 0 = −ǫ 1/2 q p/2 q (p−s−1)/2 y 0 , K 1/2 a 0 = ǫ 1/2 q (s−1)/2 a 0 and the action of E and F is unchanged.
Let X̄ − (1) be the 1-dimensional U q 1/2 -module with basis v defined by Ev = F v = 0, K 1/2 v = −v. If Ī is a lifting of a simple module or a PIM I, then we have seen that the only possible liftings of I are Ī and Ī ⊗ X̄ − (1). Moreover, using (2), we get equalities which will be used in the next section:

6.2 L 0,1 (U q ) and L 1,0 (U q )

We define L 0,1 (U q ) as the quotient of T(U q ∗ ) by the fusion relation where I, J are simple modules or PIMs and Ī, J̄ are liftings of I and J. From (31) and the fact that K p is central, we see that this does not depend on the choice of Ī and J̄. As we saw in section 2.1, the matrix coefficients of the PIMs linearly span L 0,1 (H), so we can restrict to them in the definition. However, the simple modules are included for convenience. All the results of section 3 remain true for L 0,1 (U q ). In particular, Ψ 0,1 is an isomorphism since U q is factorizable. We now describe L 0,1 (U q ) by generators and relations. Let where X̄ + (2) is the lifting of X + (2) defined by K 1/2 v 0 = q 1/2 v 0 . By the decomposition rules of tensor products (see [Sut94], and also [KS11], [Iba15]), every PIM (and every simple module) is a direct summand of some tensor power X + (2) ⊗n . Thus every matrix coefficient of a PIM is a matrix coefficient of some X + (2) ⊗n (with n ≥ p). It follows from the fusion relation that a, b, c, d generate L 0,1 (U q ). Let us now look for the relations. We emphasize that each relation corresponds to a particular morphism, as we explain now. First, we have seen in Proposition 3.2 that the braiding morphism P R : X + (2) ⊗2 → X + (2) ⊗2 provides the reflection equation. Proceeding as in the proofs of Lemmas 3.5 and 4.6, we invert the reflection equation: A calculation gives the following exchange relations: Second, since X + (2) ⊗2 ∼ = X + (1) ⊕ X + (3), there exists, up to scalar, a unique morphism Φ : X + (1) → X + (2) ⊗2 .
It is easily computed: By functoriality and fusion, we have This gives just one new relation, which is the analogue of the quantum determinant: ad − q 2 bc = 1.
Let us compute the RSD isomorphism on M: We deduce the relations b p = c p = 0 and d 2p = 1 from the defining relations of U q . Let us mention that it is possible to find two morphisms f 1 , f 2 defined on X + (2) ⊗2p−1 . One can show using functoriality that the relations b p = c p = 0 are consequences of the existence of f 1 , and that the relation d 2p = 1 is a consequence of the existence of f 2 . The proof relies on matrix computations; the details will be provided in [Fai].

A basis is given by the monomials b i c j d k with 0 ≤ i, j ≤ p − 1 and 0 ≤ k ≤ 2p − 1.
Proof: Let A be the algebra defined by this presentation. It is readily seen that a = d −1 + q 2 bcd −1 and that the monomials b i c j d k with 0 ≤ i, j ≤ p − 1, 0 ≤ k ≤ 2p − 1 linearly span A. Thus dim(A) ≤ 2p 3 . But we know that 2p 3 = dim(U q ) = dim L 0,1 (U q ) , since the monomials E i F j K ℓ with 0 ≤ i, j ≤ p − 1, 0 ≤ ℓ ≤ 2p − 1 form the PBW basis of U q . It follows that dim(A) ≤ dim L 0,1 (U q ) . Since these relations are satisfied in L 0,1 (U q ), there exists a surjection A → L 0,1 (U q ). Thus dim(A) ≥ dim L 0,1 (U q ) , and the theorem is proved. ✷ Remark 6. A consequence of this theorem is that L 0,1 (U q ) is a restricted version (i.e. a finite-dimensional quotient by monomial central elements) of L 0,1 (U q ) spe , the specialization at our root of unity q of the algebra L 0,1 (U q (sl(2))). A complete study of the algebra L 0,1 (U q ) spe will appear in [BaR]. Let us also mention that, specializing the RSD morphism of L 0,1 (U q (sl(2))), we get a new morphism: where π is the canonical projection. It is easy to see that ker(Ψ 0,1 ) is the ideal generated by b p , c p and d 2p − 1, and we obtain L 0,1 (U q ) ∼ = L 0,1 (U q ) spe / ker(Ψ 0,1 ).
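The dimension count in the proof above is elementary; the following sketch (purely illustrative, with the hypothetical helper `monomial_count`) simply confirms that the spanning monomials b i c j d k are 2p 3 in number, matching the PBW count E i F j K ℓ for U q :

```python
# Count the monomials b^i c^j d^k with 0 <= i, j <= p-1 and 0 <= k <= 2p-1,
# and compare with dim(U_q) = 2p^3 given by the PBW basis E^i F^j K^l.
def monomial_count(p):
    """Number of spanning monomials b^i c^j d^k of the presented algebra."""
    return sum(1 for i in range(p) for j in range(p) for k in range(2 * p))

for p in (2, 3, 5):
    # PBW count for U_q: 0 <= i, j <= p-1, 0 <= l <= 2p-1
    pbw = sum(1 for i in range(p) for j in range(p) for l in range(2 * p))
    assert monomial_count(p) == 2 * p**3 == pbw
```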
Applying the isomorphism of algebras D defined in (17) to the GTA basis of SLF(U q ), we get a new basis of Z(U q ). We introduce notations for these basis elements: with 1 ≤ s ≤ p, ǫ ∈ {±} and 1 ≤ s ′ ≤ p − 1. They satisfy the same multiplication rules as the elements of the GTA basis, see [Fai18, Section 5] or [GT09] (the elements χ(s) defined in [GT09] correspond to [s]H s here). Let us mention that, under the identification L 0,1 (U q ) = U q via Ψ 0,1 and since we choose K p+1 as pivotal element, it holds by definition: In particular, where C is the standard Casimir element of U q .
Similarly, we define L 1,0 (U q ) as the quotient of L 0,1 (U q ) ∗ L 0,1 (U q ) by the exchange relations: The coefficients of the matrices A and B generate L 1,0 (U q ). Using the commutation relations of the Heisenberg double, it is easy to show that Ψ 1,0 indeed takes values in H(O(U q )) (the square root of K does not appear). In order to obtain a presentation of L 1,0 (U q ), one can again restrict to I = J = X + (2) and write down the corresponding exchange relations. We do not give this presentation of L 1,0 (U q ) since we will not use it in this work.
6.3 Explicit description of the SL 2 (Z)-projective representation

Note that it can be shown directly that U q is unimodular and unibalanced, see for instance [Iba15, Cor. II.2.8] (also note that in [BBG18] it is shown that all the simply laced restricted quantum groups at roots of unity are unibalanced). In this way, we recover that γ (and thus γ g ) is invertible. Indeed, one can check that γ −1 (ψ) = ψ(c ′ )c ′′ , where c is the two-sided cointegral.
Proposition 6.2. For all z ∈ Z(U q ), S(z) = z and for all ψ ∈ SLF(U q ), S(ψ) = ψ. It follows that in the case of U q , ρ SLF is in fact a projective representation of PSL 2 (Z).
Proof: By [FGST06, Appendix D], the canonical central elements are expressed as e s = P s (C), w ± s = π ± s Q s (C), where P s and Q s are polynomials, C is the Casimir element and the π ± s are Fourier transforms of (K j ) 0≤j≤2p−1 . It is easy to check that S(C) = C and that S(π ± s ) = π ± s , thus S(e s ) = e s and S(w ± s ) = w ± s . Next, let ψ ∈ SLF(U q ). Since γ g is an isomorphism, we can write ψ = γ g (z) = µ r (gS(z) ?) with z ∈ Z(U q ). Then: ✷ We now compute the action of β by induction. The multiplication rules of the GTA basis (see [Fai18, Section 5]) will be used several times. Let us denote Relation (35) provides β On the one hand, we obtain by (32): On the other hand, we use Lemma 6.3 and the multiplication rules: This gives recurrence equations between the coefficients, which are easily solved: β ⊲ χ ǫ s = λ(ǫ, s) The coefficients λ(ǫ, s) = λ + p (ǫ, s) and δ(ǫ, s) = δ 1 (ǫ, s) are still unknown. In order to compute them by induction, we use the relation β X + (2) W B = X + (2) W B β, which is another consequence of (35). Beforehand, note that with 1 ≤ s ≤ p − 1 and the convention that χ ± 0 = 0. It follows that Due to (33), (34) and the multiplication rules, we have δ 2 (ǫ, s)G 1 + . . .
We now proceed with the proof of the formula for G s ′ . Relation (35) implies βH 1 B = H 1 B β. By (33), (34) and the multiplication rules, we have on the one hand: whereas on the other hand: Equating both sides and inserting the previously found values, we obtain the desired formula. ✷ Remark 7. The guiding principle of the previous computations was that the multiplication of two symmetric linear forms in the GTA basis is easy when one of them is χ + 2 , χ − 1 or G 1 (see [Fai18, Section 5]), and that all the formulas can be derived from β ⊲ χ + 1 using only such products. Recall that the standard representation C 2 of SL 2 (Z) = MCG(Σ 1,0 ) is defined by τ a ↦ ( 1 0 ; −1 1 ), τ b ↦ ( 1 1 ; 0 1 ), the matrices being written row by row.
Then there exists a (projective) representation W of SL 2 (Z) such that V ∼ = C 2 ⊗ W . More precisely, W admits a basis (w s ) such that τ a w s = Σ ℓ a ℓ (s)w ℓ , τ b w s = Σ ℓ b ℓ (s)w ℓ .
Proof: It is easy to check that the formulas for τ a w s and τ b w s indeed define an SL 2 (Z)-representation on W . Let (e 1 , e 2 ) be the canonical basis of C 2 . Then e 1 ⊗ w s ↦ y s , e 2 ⊗ w s ↦ x s defines an isomorphism which intertwines the SL 2 (Z)-actions. ✷ The structure of the Lyubashenko–Majid representation on Z(U q ) is described in [FGST06]. Here, we recover this result on the SLF(U q ) side (recall from Theorem 5.12 that these representations are equivalent).
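As a quick consistency check on the matrices for τ a and τ b of the standard representation recalled above, the following sketch (the helpers `mul` and `power` are ours) verifies the braid relation, the relation (τ a τ b ) 6 = 1, and that s = τ a τ b τ a , t = τ a −1 satisfy the relations (st) 3 = s 2 and s 4 = 1 of the other presentation of SL 2 (Z):

```python
# Check, in the standard 2x2 representation, the SL_2(Z) relations used
# in the text: the braid relation, (tau_a tau_b)^6 = 1, and the change
# of presentation s = tau_a tau_b tau_a, t = tau_a^{-1}.

def mul(m, n):
    """Product of 2x2 integer matrices given as lists of rows."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def power(m, e):
    """e-th power of a 2x2 matrix (e >= 0)."""
    r = [[1, 0], [0, 1]]
    for _ in range(e):
        r = mul(r, m)
    return r

ta = [[1, 0], [-1, 1]]   # tau_a
tb = [[1, 1], [0, 1]]    # tau_b
I2 = [[1, 0], [0, 1]]

# Braid relation: tau_a tau_b tau_a = tau_b tau_a tau_b
assert mul(mul(ta, tb), ta) == mul(mul(tb, ta), tb)
# (tau_a tau_b)^6 = 1 in SL_2(Z)
assert power(mul(ta, tb), 6) == I2
# s = tau_a tau_b tau_a and t = tau_a^{-1} satisfy (st)^3 = s^2, s^4 = 1
s = mul(mul(ta, tb), ta)
t = [[1, 0], [1, 1]]     # inverse of tau_a
assert mul(ta, t) == I2
assert power(mul(s, t), 3) == power(s, 2)
assert power(s, 4) == I2
```

Note that s 2 = −I 2 here, consistent with (st) 3 = s 2 becoming (st) 3 = s 2 = 1 in PSL 2 (Z).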
Proposition 6.7. Assume that the Conjecture holds. Then the L inv 1,0 (U q )-modules V and SLF(U q )/V are simple. It follows that SLF(U q ) has length 2 as an L inv 1,0 (U q )-module.
Proof: This is basically a consequence of (34) and of the multiplication rules in the GTA basis. To avoid particular cases, define χ ǫ 0 = 0, χ ǫ p+1 = χ −ǫ 1 , χ ǫ −1 = χ −ǫ p−1 and e −1 = e p+1 = 0. Let V ′ ⊆ V be a non-zero submodule, and let v = Σ p j=0 λ j (χ + j + χ − p−j ) ∈ V ′ be non-zero. Assume that λ s is non-zero. Then using Proposition 4.10, we get (e s ) A ⊲ v = λ s (χ + s + χ − p−s ), and thus χ + s + χ − p−s ∈ V ′ . Apply X + (2) W B : Hence, applying (e s−1 ) A and (e s+1 ) A , it follows that χ + s−1 + χ − p−s+1 , χ + s+1 + χ − p−s−1 ∈ V ′ . Continuing like this, one gets step by step that all the basis vectors belong to V ′ , hence V ′ = V . Next, let G s and χ + s be the classes of G s and χ + s modulo V (with χ + 0 = χ + p = 0). Let W ⊆ SLF(U q )/V be a non-zero submodule and let w = Σ p−1 j=1 (ν j G j + σ j χ + j ) ∈ W be non-zero. If all the ν j are 0, then there exists σ s ≠ 0 and (e s ) A ⊲ w = σ s χ + s ∈ W . If one of the ν j , say ν s , is non-zero, then (w + s ) A ⊲ w = ν s χ + s ∈ W . In both cases we get χ + s ∈ W . Now we proceed as previously, applying the (e s±1 ) A : we get step by step that χ + j ∈ W for all j. Apply H 1 B : It follows that G j ∈ W for all j, and thus W = SLF(U q )/V as desired. ✷ In order to determine the structure of SLF(U q ) if the Conjecture is true, it remains to determine whether the exact sequence 0 → V → SLF(U q ) → SLF(U q )/V → 0 is split or not, i.e. whether V is a direct summand of SLF(U q ) or not.