Piecewise Principal Coactions of Co-Commutative Hopf Algebras

Principal comodule algebras can be thought of as objects representing principal bundles in non-commutative geometry. A crucial component of a principal comodule algebra is a strong connection map. For some applications it suffices to prove that such a map exists, but for others, such as computing the associated bundle projectors or Chern-Galois characters, an explicit formula for a strong connection is necessary. It has been known for some time how to construct a strong connection map on a multi-pullback comodule algebra from strong connections on the multi-pullback components, but the known explicit general formula is unwieldy. In this paper we derive a much easier-to-use strong connection formula, which is not, however, completely general: it is applicable only when the Hopf algebra is co-commutative. Because certain linear splittings of projections in multi-pullback comodule algebras play a crucial role in our construction, we also devote a significant part of the paper to the problem of existence and explicit formulas for such splittings. Finally, we show an example application of our work.


Introduction
Let H be a Hopf algebra (with bijective antipode), interpreted as a Peter-Weyl algebra of functions on a quantum group. Principal H-comodule algebras can be loosely viewed as the algebras of appropriate classes of functions on (non-commutative) principal bundles ([2] makes the relationship explicit in the classical case). A crucial ingredient in the definition of a principal comodule algebra is a so-called strong connection map. For some applications it suffices to prove that a strong connection map exists, for instance when proving principality of a comodule algebra (see e.g. [18]). Other applications (see e.g. [3], [15], [16], [19]), such as computing the associated bundle projector or Chern-Galois character [5], call for an explicit formula for this map.
Piecewise principal comodule algebras [10], [13] form an interesting class of principal comodule algebras for which a fair number of examples have recently appeared in the literature (see e.g. [1], [4], [6], [8], [14], [15], [17], [18], [19]). They can be understood as being glued (constructed as a multi-pullback) from simpler parts which are principal. In [13] (cf. the generalization in [19]) it was proven that piecewise principal comodule algebras are, in fact, principal. That paper contains a derivation of an explicit formula for a strong connection on a pullback of two principal extensions from the "local" strong connections on the pullback components and an appropriate choice of splittings of the gluing maps. If the piecewise comodule algebra is a multi-pullback, one can present this multi-pullback as an iterated pullback and then iterate the formula. Unfortunately, in practice, already the second iteration of the formula from [13] becomes overly complicated.
In this paper we derive, under the assumption of co-commutativity of the Hopf algebra, a much simpler strong connection formula (which neither needs to be iterated, nor requires putting the multipullback in iterated form, the latter being complicated and error-prone by itself). While the assumption of co-commutativity severely limits the applicability of the formula, it is worth pointing out that many of the known piecewise principal comodule algebras, such as those considered in [17], [19], [14], [1], [15] and [18], are either C(Z_n)- or O(U(1))-comodule algebras, hence our result could have been used to compute strong connections for these examples. The strong connection formula presented in this paper was inspired (very loosely) by the proof of [22, Theorem 3.3.2].
The plan of the paper is as follows: Section 2 contains some preliminaries about principal comodule algebras and piecewise principality. In Section 3 we present the explicit formula for a strong connection and prove that it is indeed a strong connection, as long as the Hopf algebra is co-commutative. Because the strong connection formula uses colinear and unital splittings of the projections onto pieces, we devote Section 4 to the presentation of an explicit procedure for constructing such splittings from appropriate splittings of the gluing maps. Note that Theorem 7 can be viewed as a strengthening of [9, Proposition 9] (cf. [20, Theorem 7]): instead of merely showing that, for each element in a multipullback component, there exists an element of the multipullback projecting onto it, we explicitly construct the whole (co-)linear and unital splitting.
As some of the splittings of the gluing maps used in the construction of the splitting from Theorem 7 are required to have fairly non-obvious properties, Section 5 is devoted to showing when such splittings are guaranteed to exist, as well as to their semi-explicit construction. Lemma 8, which links the existence of certain partitions of a vector space generated by a collection of vector subspaces to the distributivity of the lattice generated by those subspaces, is crucial for the results in this section.
Finally, in Section 6, we derive a formula for a strong connection on the non-commutative sphere S^2_{RT} introduced in [18] as a quantum Z_2-principal bundle. To this end, and to provide a comparison, we use two methods: the one from [13] and the one introduced in this paper.

2.1. Hopf algebra and comodule-related notation. We work over a fixed ground field K and, unless stated otherwise, all vector spaces are understood to be K-vector spaces and the unadorned tensor product is understood to be the algebraic tensor product over K. The comultiplication, counit and antipode of a Hopf algebra H are denoted by ∆, ε and S, respectively. Let P be a right comodule algebra. We denote by ∆_P : P → P ⊗ H the right H-coaction on P, and by P^{coH} the subalgebra of coaction-invariant elements. Instead of writing ∆'s and ∆_P's we usually employ the Heynemann-Sweedler notation with the summation symbol suppressed, e.g., ∆(h) = h_(1) ⊗ h_(2) and ∆_P(p) = p_(0) ⊗ p_(1).

2.2. Principal comodule algebras. Let H be a Hopf algebra with bijective antipode, and let P be a right H-comodule algebra. Then P is a principal comodule algebra if and only if there exists a linear map ℓ : H → P ⊗ P, ℓ(h) =: ℓ(h)^⟨1⟩ ⊗ ℓ(h)^⟨2⟩ (note the Sweedler-like notation with the summation sign suppressed), satisfying the conditions (1a)-(1d) of unitality, the counit condition, and right and left colinearity. Such a map, if it exists, is called a strong connection on P [11], [7], [5]. Strong connections are usually non-unique.
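In Sweedler notation, these conditions take the following form, standard in the literature (cf. [5], [11]); we write S^{-1} in the left colinearity condition, which for the co-commutative Hopf algebras considered later coincides with S:

```latex
\begin{align}
&\ell(1) = 1 \otimes 1, \tag{1a}\\
&\ell(h)^{\langle 1\rangle}\,\ell(h)^{\langle 2\rangle} = \varepsilon(h)\,1, \tag{1b}\\
&\ell(h)^{\langle 1\rangle} \otimes \ell(h)^{\langle 2\rangle}{}_{(0)} \otimes \ell(h)^{\langle 2\rangle}{}_{(1)}
  = \ell(h_{(1)})^{\langle 1\rangle} \otimes \ell(h_{(1)})^{\langle 2\rangle} \otimes h_{(2)}, \tag{1c}\\
&S^{-1}\big(\ell(h)^{\langle 1\rangle}{}_{(1)}\big) \otimes \ell(h)^{\langle 1\rangle}{}_{(0)} \otimes \ell(h)^{\langle 2\rangle}
  = h_{(1)} \otimes \ell(h_{(2)})^{\langle 1\rangle} \otimes \ell(h_{(2)})^{\langle 2\rangle}. \tag{1d}
\end{align}
```

Here (1a) is unitality, (1b) is the counit condition, and (1c), (1d) are right and left colinearity, respectively; this is the numbering used in the proofs below.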

2.3. Multi-pullbacks of algebras. Let J be a finite set, and let
(2) (π^i_j : A_i → A_{ij})_{i,j ∈ J, i ≠ j}, A_{ij} = A_{ji},
be a family of algebra homomorphisms, to which we will occasionally refer as "gluing maps".
Definition 1 ([9], [23]). The multi-pullback algebra A^π of a family (2) of algebra homomorphisms is defined as the subalgebra of the product of the A_i's consisting of all families whose components agree under the gluing maps. Let (π^i_j : A_i → A_{ij})_{i,j} be a family of surjective algebra homomorphisms. For any distinct i, j, k we put A^i_{jk} := A_i/(ker π^i_j + ker π^i_k) and take [·]^i_{jk} : A_i → A^i_{jk} to be the canonical surjections. Next, we introduce the following family of maps; they are isomorphisms when the π^i_j's are epimorphisms.
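Explicitly, in the notation above, the multi-pullback algebra is commonly written as (cf. [9], [23]):

```latex
A^{\pi} \;:=\; \Big\{ (a_i)_{i \in J} \in \prod_{i \in J} A_i \;\Big|\; \pi^i_j(a_i) = \pi^j_i(a_j) \ \text{for all}\ i, j \in J,\ i \neq j \Big\}.
```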
Definition 3 (see [9, Proposition 9]). We say that a family (π^i_j : A_i → A_{ij})_{i,j} of algebra epimorphisms satisfies the cocycle condition if and only if the following holds for all distinct i, j, k ∈ J. Observe that, for all distinct i, j, k ∈ J and any a_i ∈ A_i, a_j ∈ A_j, the corresponding compatibility relation holds. One can prove ([9], cf. [20]; see also Theorem 7 in this paper) that the cocycle condition guarantees that all projections onto the components of a multipullback are surjective (in fact, all projections onto sub-multipullbacks are surjective, but we will not make use of that fact).
The family of ideals (ker π_i)_{i∈{1,...,N}} generates a distributive lattice with ∩ and + as meet and join, respectively.
Piecewise principal comodule algebras generalize the notion of (algebras of functions on) classical spaces which are locally principal, but with respect to closed instead of open coverings; hence the use of the term "piecewise" instead of "locally".
Definition 5 (see [13, Definition 3.8]). An H-comodule algebra P is called piecewise principal if there exists a family {π_i : P → P_i}_{i∈J} of surjective H-comodule algebra morphisms such that: (1) The restrictions π_i|_{P^{coH}} : P^{coH} → P_i^{coH} form a covering.
(2) The P i 's are principal H-comodule algebras.
By [13, Theorem 3.3], a piecewise principal comodule algebra is principal. Note that any piecewise principal comodule algebra can be presented as a multipullback comodule algebra with the gluing maps being comodule algebra morphisms [10].

Strong connection formula
In this section we present an explicit (and arguably simple) expression for a strong connection on a piecewise principal H-comodule algebra, where H is a co-commutative Hopf algebra. Regrettably, the co-commutativity assumption is used crucially in the proof of the correctness of the formula, and so we have little hope of further generalizing the method which led to the derivation of this strong connection formula.

Theorem 6. Let H be a co-commutative Hopf algebra, let P be a piecewise principal H-comodule algebra with the family of surjections {π_i : P → P_i}_{i∈{0,...,n}}, and let {ℓ_i : H → P_i ⊗ P_i}_{i∈{0,...,n}} denote a family of strong connections on the P_i's. For any i ∈ {0, . . . , n}, let V_i be an H-sub-comodule of P_i such that ℓ_i(H) ⊆ V_i ⊗ V_i, and let α_i : V_i → P be a unital, colinear splitting of π_i, i.e., π_i ∘ α_i = id_{V_i}. For brevity, for i ∈ {0, . . . , n} and h ∈ H, denote the auxiliary elements T_i(h) ∈ P as below. Then the linear map ℓ : H → P ⊗ P defined for all h ∈ H by the formula below is a strong connection on P.
Proof. Note that any co-commutative Hopf algebra has a bijective (in fact involutive) antipode. We need to prove that the map ℓ defined in the theorem satisfies all of the conditions (1).
First note that, by the colinearity of the α_j's, the colinearity properties (1d), (1c) of the ℓ_j's and the co-commutativity of H, the element α_j(ℓ_j(h)^⟨1⟩)α_j(ℓ_j(h)^⟨2⟩) is a coaction-invariant element of P for any j ∈ {0, . . . , n} and h ∈ H, and hence T_i(h) is also a coaction-invariant element of P for any i ∈ {0, . . . , n + 1} and h ∈ H. In the penultimate equality we used the co-commutativity of H to swap the Sweedler indices (1) and (2), in order to be able to use the antipode property. To prove that ℓ is left colinear (Equation (1d)), we use the left colinearity of the ℓ_i's, the colinearity of the α_i's and the coaction invariance of the T_i(h)'s. The right colinearity (Equation (1c)) of ℓ follows from the H-coaction invariance of the T_i(h)'s, the right colinearity of the ℓ_i's, the colinearity of the α_i's, and the co-commutativity of H; here, in the penultimate equality, we used the co-commutativity of H, exchanging the Sweedler indices (2) and (3).
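The invariance of α_j(ℓ_j(h)^⟨1⟩)α_j(ℓ_j(h)^⟨2⟩) can be sketched in Sweedler notation as follows (our reconstruction; the second equality uses (1c), (1d) and the colinearity of α_j, the third uses the co-commutativity of H to swap the indices (1) and (2), and the last one uses the antipode property):

```latex
\begin{aligned}
\Delta_P\big(\alpha_j(\ell_j(h)^{\langle 1\rangle})\,\alpha_j(\ell_j(h)^{\langle 2\rangle})\big)
&= \alpha_j\big(\ell_j(h)^{\langle 1\rangle}{}_{(0)}\big)\,\alpha_j\big(\ell_j(h)^{\langle 2\rangle}{}_{(0)}\big)
   \otimes \ell_j(h)^{\langle 1\rangle}{}_{(1)}\,\ell_j(h)^{\langle 2\rangle}{}_{(1)} \\
&= \alpha_j\big(\ell_j(h_{(2)})^{\langle 1\rangle}\big)\,\alpha_j\big(\ell_j(h_{(2)})^{\langle 2\rangle}\big)
   \otimes S(h_{(1)})\,h_{(3)} \\
&= \alpha_j\big(\ell_j(h_{(1)})^{\langle 1\rangle}\big)\,\alpha_j\big(\ell_j(h_{(1)})^{\langle 2\rangle}\big)
   \otimes S(h_{(2)})\,h_{(3)} \\
&= \alpha_j\big(\ell_j(h)^{\langle 1\rangle}\big)\,\alpha_j\big(\ell_j(h)^{\langle 2\rangle}\big) \otimes 1 .
\end{aligned}
```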
In order to prove that ℓ is unital (Equation (1a)), note first that, for any i ∈ {0, . . . , n}, because ε and all the ℓ_i's and α_i's are unital, it follows that T_i(1) = 0 for all i ∈ {0, . . . , n}, while T_{n+1} = ε by definition; hence, where we again used the unitality of α_n and ℓ_n. Note now that, for all i ∈ {0, . . . , n} and h ∈ H, the following holds. By applying this formula to T_0(h) and repeatedly expanding the leftmost summand of the resulting expression, we easily obtain: On the other hand, for all h ∈ H and i ∈ {0, . . . , n}, as α_i is a splitting of π_i, it follows that: i.e., ℓ satisfies Equation (1b), as needed.
The expression for a strong connection provided in the above theorem requires the unital and colinear splittings of the projections π_i to be given. The existence of such splittings is guaranteed by [13, Lemma 3.1] and [13, Theorem 3.3], but mere existence does not suffice for someone who needs an explicit formula. The proof of [13, Lemma 3.1] involves constructing a unital and colinear splitting of a surjective comodule algebra map π from a unital and linear splitting of the restriction of π to the subalgebra of coaction-invariant elements (which always exists), utilizing a strong connection on the domain of π, i.e., the very object we are trying to construct here. Hence, we cannot use even the slight simplification provided by the proof of [13, Lemma 3.1].
In practice, we expect that in many simpler cases the appropriate splittings will not be difficult to guess. However, for our result to be more widely applicable in practice, we will examine an explicit construction of colinear and unital splittings of the projections of a multipullback comodule algebra onto its components which does not assume the existence of a strong connection on the multipullback comodule algebra (recall that a piecewise principal comodule algebra can always be presented as a multipullback).

Colinear splittings of piecewise principal comodule algebras
The result presented in this section allows one to explicitly construct linear (colinear, when appropriate) and unital splittings of the projections onto the components of a multipullback (comodule) algebra.
Theorem 7. Suppose that a family (2) is distributive and satisfies the cocycle condition. Moreover, suppose that there exist two families of maps α^i_j, β^i_j : A_{ij} → A_i, where the β^i_j's are unital linear (colinear) splittings of the π^i_j's and the α^i_j's are splittings of the π^i_j's satisfying Equation (6). Fix a bijection κ : {0, . . . , n} → J with κ_0 = i, where n + 1 is the cardinality of J and we denote κ_j := κ(j) to ease the notation. Then a unital and linear (colinear) splitting α_i : A_i → A^π of π_i : A^π → A_i can be given explicitly, for any a ∈ A_i, as α_i(a) := (a_j)_{j∈J}, where a_i := a and a_{κ_{m+1}} := a^m_{κ_{m+1}} for any 0 ≤ m < n. The collections {a^k_{κ_{m+1}}}_{0≤k≤m} ⊆ A_{κ_{m+1}}, for 0 ≤ m < n, are defined by the following inductive formula:

Proof. It is clear that, because all the maps involved in the definition of α_i are unital and linear (colinear if need be), α_i is also linear (resp. colinear). The proof of unitality is slightly more subtle and requires a simple induction. Let a = 1; we need to show that a_j = 1 for all j ∈ J. Indeed, a_{κ_0} = a_i = 1 by definition. Suppose we have proven that a_{κ_j} = 1 for all 0 ≤ j ≤ m < n. Then, using Equation (7), we get a^0_{κ_{m+1}} = β^{κ_{m+1}}_{κ_0}(π^{κ_0}_{κ_{m+1}}(1)) = 1, as both π^{κ_0}_{κ_{m+1}} and β^{κ_{m+1}}_{κ_0} are unital. Suppose now that we have proven that a^k_{κ_{m+1}} = 1 for all 0 ≤ k < m. Then Equation (7) yields a_{κ_{m+1}} = 1 as well. Now it remains to show that α_i(a) ∈ A^π for all a ∈ A_i. The inductive proof essentially follows the steps of the proof of [9, Proposition 9]. We will show that, for any 0 ≤ m ≤ n, we have (8) π^{κ_j}_{κ_l}(a_{κ_j}) = π^{κ_l}_{κ_j}(a_{κ_l}) for all j, l ∈ {0, . . . , m}, j ≠ l. For m = 0 this condition is vacuously satisfied. Suppose we have proven the above condition for some m. In order to demonstrate it for m + 1, we prove by induction that Condition (9) holds for any 0 ≤ k ≤ m, where m < n. Suppose now that we have proven Condition (9) for some 0 ≤ k < m, and pick any 0 ≤ j ≤ k. Then, by the (inductively assumed) Condition (8) and Equation (4), we have: It then follows, by Condition (9) and Equation (4), and subsequently by the cocycle condition, that:
This equality, again by Equation (4), is equivalent to the following condition: Because the above "is an element of" relation holds for an arbitrary 0 ≤ j ≤ k, it immediately implies that: Then: The above equation immediately implies that, for all 0 ≤ l ≤ k, where, in the second equality, we used the inductive assumption. Moreover, using the fact that α^{κ_{m+1}}_{κ_{k+1}} is a splitting of π^{κ_{m+1}}_{κ_{k+1}}, we obtain the required equality with π^{κ_{k+1}}_{κ_{m+1}}(a_{κ_{k+1}}), which ends the proof.
At this point, the skeptical reader might be excused for doubting the applicability of Theorem 7. Indeed, while the existence of unital and linear splittings β^i_j of the π^i_j's follows immediately from the surjectivity of the π^i_j's, and the existence of colinear splittings is assured (and their explicit construction assisted) by [13, Lemma 3.1] if all the A_i's are principal comodule algebras, it is not clear how to find the linear splittings α^i_j satisfying Equation (6), nor whether they exist at all in the general case. Fortunately, the results of the next section, interesting in their own right, not only assure the existence of splittings α^i_j satisfying Equation (6) under no stronger assumptions than those of Theorem 7, but also provide a method for their (semi-)explicit construction.

5. Colinear splittings of principal comodule algebras

5.1. Partitions of sets. Let A be a set and let A_i, i ∈ J, be a fixed finite family of subsets of A. For any Γ ∈ 2^J we denote, for brevity, A_Γ := ∩_{i∈Γ} A_i. The family {A_Γ}_{Γ∈2^J} gives rise to a partition {B_Γ}_{Γ∈2^J} of A; indeed, the partition can be described explicitly, for all Γ ∈ 2^J, by the formula B_Γ := A_Γ \ ∪_{Γ′ ⊋ Γ} A_{Γ′}.

5.2. Partitions of vector spaces. Let now A be a vector space and let A_i, i ∈ J, be a fixed finite family of vector subspaces of A; A_Γ, for any Γ ∈ 2^J, is defined as in Equation (13). We want to define a linear counterpart of the associated partition {B_Γ}_Γ defined above for sets. Similarly to plain sets, vector subspaces can be ordered by set inclusion, and the resulting ordered set is a lattice, with subspace intersection (V_1 ∩ V_2) serving as infimum and subspace sum (V_1 + V_2) playing the role of supremum. The problem is that this lattice is not, in general, distributive. It turns out that the assumption that the subspaces A_i, i ∈ I, generate a distributive lattice is pivotal for proving our desired result, stated immediately below.

Lemma 8. Let A be a vector space and let A_i, i ∈ I, be a finite family of vector subspaces of A generating a distributive lattice. Then A has a linear basis B = ∪_{Γ∈2^I} B_Γ, where B_Γ ⊆ A_Γ for all Γ ∈ 2^I, such that the subsets B_Γ are pairwise disjoint and satisfy Property (15).

Proof. First fix a linear order ≤ on 2^I subject to Condition (16). It is immediate that the minimal element in this order is I and the maximal one is ∅. Note the following property of ≤, which will be used later. Indeed, assume Γ > Γ′. We always have Γ ∪ Γ′ ⊇ Γ, so we need only show that equality leads to a contradiction. Suppose that Γ ∪ Γ′ = Γ. This is equivalent to Γ ⊇ Γ′, which implies, by Equation (16), that Γ ≤ Γ′, contradicting the assumption Γ > Γ′.
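For concreteness, the order condition and the property derived from it in the proof can be written as follows (a reconstruction consistent with the argument above):

```latex
(16)\quad \Gamma \supseteq \Gamma' \;\Longrightarrow\; \Gamma \le \Gamma',
\qquad\text{and, consequently,}\qquad
\Gamma > \Gamma' \;\Longrightarrow\; \Gamma \cup \Gamma' \supsetneq \Gamma \;\Longrightarrow\; \Gamma \cup \Gamma' < \Gamma .
```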
The sets B_Γ, Γ ∈ 2^I, can be generated inductively (with respect to ≤) as follows: (1) B_I is some linear basis of A_I.
(2) B_Γ, for Γ > I, is chosen as a maximal subset of A_Γ such that ∪_{Γ′≤Γ} B_{Γ′} is linearly independent.
It is immediate from the construction of the B_Γ's that B := ∪_{Γ∈2^I} B_Γ is a linear basis of A and that all the B_Γ's are disjoint. Also by construction, B_{Γ′} ⊆ A_Γ whenever Γ ⊆ Γ′, which implies that one half of Property (15) is trivially satisfied. We prove the second half of Property (15) by induction on ≤: (1) I is minimal in 2^I with respect to ≤, and then the claim holds by the definition of B_I. (2) Suppose we have proven Eq. (15) for all Γ′ < Γ. For any a ∈ A, denote by {α_Γ(a)}_{Γ∈2^I} the unique family of vectors such that a = Σ_{Γ∈2^I} α_Γ(a) and α_Γ(a) ∈ Span(B_Γ) for all Γ ∈ 2^I (they are unique because B is a basis and the B_Γ's are disjoint). By (19), α_{Γ′}(a) = 0 whenever a ∈ A_Γ and Γ′ > Γ. Let a ∈ A_Γ and define v := a − α_Γ(a). By Equation (20), it follows that the claim holds. The following result is common knowledge:

Lemma 9. Let π : A → B be a linear map, and let {A_i}_{i∈I} be a finite family of vector subspaces of A. Assume that ker π ∩ Σ_{i∈I} A_i = Σ_{i∈I} (ker π ∩ A_i). Then π(∩_{i∈I} A_i) = ∩_{i∈I} π(A_i).
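The greedy construction from the proof of Lemma 8 can be illustrated on a toy example: subspaces of R^n spanned by standard basis vectors (such families always generate a distributive lattice). The sketch below (all names are ours, not the paper's) builds the sets B_Γ in a linear order refining reverse inclusion and checks that the resulting B is a basis satisfying Property (15), i.e., that A_Γ is spanned by the union of the B_Γ′ over Γ′ ⊇ Γ.

```python
import numpy as np
from itertools import combinations

# Toy model of Lemma 8: subspaces of R^n spanned by standard basis vectors,
# encoded as coordinate sets.
n = 3
subspace = {1: {0, 1}, 2: {1, 2}}            # A_1 = <e0, e1>,  A_2 = <e1, e2>
I = sorted(subspace)

def A_coords(gamma):
    """Coordinates spanning A_Gamma = intersection of the A_i over i in gamma
    (the whole space for gamma = emptyset)."""
    coords = set(range(n))
    for i in gamma:
        coords &= subspace[i]
    return coords

def e(j):
    v = np.zeros(n)
    v[j] = 1.0
    return v

def independent(vecs):
    return len(vecs) == 0 or np.linalg.matrix_rank(np.array(vecs)) == len(vecs)

# A linear order on 2^I refining reverse inclusion: larger subsets come first,
# so the minimal element is I and the maximal one is the empty set.
order = sorted((frozenset(g) for k in range(len(I) + 1)
                for g in combinations(I, k)), key=len, reverse=True)

# Greedy construction of the pairwise disjoint sets B_gamma.
B, chosen = {}, []
for gamma in order:
    B[gamma] = []
    for j in sorted(A_coords(gamma)):
        if independent(chosen + B[gamma] + [e(j)]):
            B[gamma].append(e(j))
    chosen += B[gamma]

# 'chosen' is now a basis of R^n made of the disjoint pieces B_gamma.
assert independent(chosen) and len(chosen) == n

# Property (15): A_gamma is spanned by the union of B_gamma' over gamma' >= gamma.
for gamma in order:
    span = [v for g2 in order if g2 >= gamma for v in B[g2]]
    assert np.linalg.matrix_rank(np.array(span)) == len(A_coords(gamma))
```

Here `g2 >= gamma` is the superset relation on frozensets; the rank check verifies that the selected vectors span exactly the intersection subspace.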
Lemma 10. Let π : A → B be a linear epimorphism, and let {A i } i∈I be a finite family of vector subspaces of A such that {A i } i∈I ∪ {ker π} generates a distributive lattice of vector subspaces. Then there exists a linear splitting α : B → A of π such that α(π(A i )) ⊆ A i for all i ∈ I.
Proof. Let B := ∪_{Γ∈2^I} B_Γ be a linear basis of B satisfying the conditions guaranteed by Lemma 8 with respect to the family {B_i}_{i∈I}, where B_i := π(A_i). Note that Lemma 9 implies that the B_i's generate a distributive lattice of subspaces because the A_i's do. For all Γ ∈ 2^I such that B_Γ is non-empty, we define α(b), for all b ∈ B_Γ, to be an arbitrary element of π^{-1}(b) ∩ A_Γ. Note that π^{-1}(b) ∩ A_Γ is non-empty (so that this choice is possible), as b ∈ B_Γ ≠ ∅ and B_Γ ⊆ π(A_Γ) by Lemma 9. The map α : B → A thus obtained is clearly a linear splitting of π. For any i ∈ I, consider any b ∈ B_i. Then, by Lemma 8, b ∈ Span(∪_{Γ∈2^I, i∈Γ} B_Γ), and hence α(b) ∈ A_i. Finally, we argue that we can generate a colinear splitting (with the appropriate properties) from a linear one on the coaction-invariant subalgebra:

Lemma 11. Let A be a principal H-comodule algebra, let π : A → B be an H-comodule algebra surjection, and let {A_i}_{i∈I} be a finite family of ideals in A which are subcomodules, such that {A_i}_{i∈I} ∪ {ker π} generates a distributive lattice. Define, for all i ∈ I, B_i := π(A_i). Suppose that there exists a linear map α^{coH} : B^{coH} → A^{coH} splitting the restriction of π and such that α^{coH}(B_i ∩ B^{coH}) ⊆ A_i for all i ∈ I, and let ℓ : H → A ⊗ A be a strong connection on A. Then the following formula:

α(b) := α^{coH}(b_(0) π(ℓ(b_(1))^⟨1⟩)) ℓ(b_(1))^⟨2⟩

defines a linear map α : B → A which is a colinear splitting of π and satisfies α(B_i) ⊆ A_i for all i ∈ I.
Proof. The fact that α defined above is a colinear splitting of π follows immediately from the proof of [13, Lemma 3.1]. It remains to show that α(B_i) ⊆ A_i for all i ∈ I. Indeed, let b ∈ B_i. Because of the left colinearity of ℓ (Equation (1d)), it follows easily that b_(0) π(ℓ(b_(1))^⟨1⟩) ∈ B_i ∩ B^{coH}; hence α^{coH}(b_(0) π(ℓ(b_(1))^⟨1⟩)) ∈ A_i and, as A_i is an ideal, α(b) ∈ A_i.

Example
In [18], a new non-commutative real projective space RP^2_T and a non-commutative sphere S^2_{RT} were introduced by defining C(RP^2_T) and C(S^2_{RT}) as particular triple pullbacks of, respectively, three copies of the Toeplitz algebra T and of the tensor product T ⊗ C(Z_2). The algebra C(S^2_{RT}) has a natural (component-wise) diagonal coaction of the Hopf algebra C(Z_2), and it was proven in [18] that the subspace of invariants of this coaction is isomorphic to C(RP^2_T). Moreover, it was demonstrated that C(S^2_{RT}) is a piecewise principal (hence principal) C(Z_2)-comodule algebra. However, the paper [18] does not present an explicit formula for a strong connection. Because C(Z_2) is co-commutative and C(S^2_{RT}) is defined as a triple pullback algebra, our main result is applicable. In this section we present a comparison of computations of a strong connection on C(S^2_{RT}) using two methods: the first uses the strong connection formula from [13], and the other uses Theorem 6. The reader will see that, while the application of the formula from [13] is trivial in the case of double pullbacks, already for triple pullbacks the computations become fairly unmanageable. Also note that, in many cases, the values of a strong connection on the generators of the Hopf algebra are easily guessable, and then the values on arbitrary Hopf algebra elements can be computed using a well-known recursive formula. Here the Hopf algebra C(Z_2) has a linear basis consisting of 1 and u, where u is the single generator satisfying u^2 = 1, so that it suffices to find the value of a strong connection on u, without any need for recursion. However, guessing the value of a strong connection on u is nigh impossible.
We start by recalling the definition of the comodule algebra C(S^2_{RT}). Our presentation will be very brief (mostly lifted from [18]), though sufficient to understand what follows, and will hardly include any geometric intuitions behind C(S^2_{RT}). Also, because the definition of C(RP^2_T) is irrelevant for the strong connection computation, we omit it entirely. Therefore, the reader is recommended to read the full account in [18].
6.1. A pullback quantum sphere. We consider the Toeplitz algebra T as the universal C*-algebra generated by an isometry s, with the symbol map given by the assignment σ : T ∋ s ↦ u ∈ C(S^1), where u is the unitary function generating C(S^1). The following two maps and their pullbacks feature prominently in the definition of C(S^2_{RT}). We denote for brevity σ_i := δ^*_i ∘ σ, i = 1, 2. The definitions of the δ_i's seem completely arbitrary; in fact, as shown in the corresponding picture in [18], each of these maps is meant as a parametrisation of two appropriate quarters of S^1. We view S^1 and I as Z_2-spaces via multiplication by ±1. Then Z_2 × I and I × Z_2 are Z_2-spaces with the diagonal action. Accordingly, C(I), C(S^1), C(Z_2) ⊗ C(I) and C(I) ⊗ C(Z_2) are right C(Z_2)-comodule algebras with the coactions given by the pullbacks of the respective Z_2-actions. Denote by u the generator of C(Z_2) given by u(±1) := ±1. Then the assignment s ↦ s ⊗ u makes T a C(Z_2)-comodule algebra. (This coaction corresponds to the Z_2-action given by α^T_{−1}(s) = −s.) It is easy to verify that the maps δ_i, i = 1, 2, are Z_2-equivariant, so that their pullbacks δ^*_i are right C(Z_2)-comodule maps. Also, since the symbol map σ is a right C(Z_2)-comodule map, so are the σ_i's.
The construction of C(S^2_{RT}) can be seen as a quantum version of constructing the topological 2-sphere by assembling three pairs of squares into the boundary of a cube. In the quantum version the algebra T ⊗ C(Z_2) replaces a pair of squares. Explicitly, the algebra C(S^2_{RT}) is defined in [18] to be the following triple pullback of three copies of T ⊗ C(Z_2), where the isomorphisms Φ_{ij} are defined by the following formulas, for all h, k ∈ C(Z_2) and p ∈ C(I): We view the algebras T ⊗ C(Z_2), C(I) ⊗ C(Z_2) ⊗ C(Z_2) and C(Z_2) ⊗ C(I) ⊗ C(Z_2) as right C(Z_2)-comodules with the diagonal C(Z_2)-coaction. The coaction of C(Z_2) on C(S^2_{RT}) is defined componentwise.
6.2. Construction of certain auxiliary elements. Both constructions of strong connections require the existence of elements φ_1 ∈ σ_1^{-1}(u ⊗ 1_{C(I)}) ⊆ T and φ_2 ∈ σ_2^{-1}(1_{C(I)} ⊗ u) ⊆ T with certain additional properties. These elements play a crucial role in the construction of the appropriate splittings required by both methods. More explicitly, we have the following:

Lemma 12. There exist elements φ_1, φ_2 ∈ T satisfying the conditions (25), where ı_I ∈ C(I) is the identity map ı_I(t) = t and ρ : T → T ⊗ C(Z_2) is the right coaction.
The last condition of the Lemma requires more work. Unfortunately, one cannot prove that (1 − φ_2^2)(1 − φ_1^2) ∈ T is nonzero by considering the properties of its image in C(S^1) under σ, and we must work directly in T. We will use the flexibility afforded by the fact that conditions (25a) and (25b) do not completely fix the elements φ_i ∈ T. We will show that, even if (1 − φ_2^2)(1 − φ_1^2) = 0 for our initial choice of the φ_i's, there exists a family {φ_{2;t,n}}_{t∈R, n∈N} of deformations of φ_2 such that the conditions (25a) and (25b) are still satisfied for all pairs (φ_1, φ_{2;t,n}), and there exist n ∈ N and t ∈ R such that (1 − φ_{2;t,n}^2)(1 − φ_1^2) ≠ 0. Let z be the isometry generating T, and let ρ : T → T ⊗ C(Z_2) be the right C(Z_2)-coaction. Define, for all n ∈ N and t ∈ R,
(29) φ_{2;t,n} := φ_2 + tE_n, where E_n := z(z^n(z^*)^n − z^{n+2}(z^*)^{n+2}).
Because ρ(z) = z ⊗ u, we have ρ(φ_{2;t,n}) = φ_{2;t,n} ⊗ u, and because σ(E_n) = 0, we have σ(φ_{2;t,n}) = σ(φ_2); hence all of the conditions (25) are satisfied, and for all t and n we can use φ_{2;t,n} instead of φ_2 in the formula (51) defining a strong connection on C(S^2_{RT}). Assume now that (1 − φ_{2;t,n}^2)(1 − φ_1^2) = 0 for all t ∈ R and n ∈ N. We will show that this assumption leads to a contradiction. Using Eq. (29), the elements (1 − φ_{2;t,n}^2)(1 − φ_1^2) can be written explicitly as polynomials in t. If they vanish for all t ∈ R and n ∈ N, then these polynomials in t are identically zero for all n ∈ N, which implies in particular that the coefficients at t^2 must vanish, i.e., that E_n^2(1 − φ_1^2) = 0 for all n ∈ N. Consider now the faithful representation R : T → B(H) of the Toeplitz algebra T on a Hilbert space H spanned by an orthonormal basis |n⟩, n ∈ N, where the isometry z is represented as the right shift, i.e., R(z)|n⟩ = |n + 1⟩ for all n ∈ N. One easily proves that
(32) R(E_n^2)|m⟩ = δ_{m,n}|n + 2⟩, for all m, n ∈ N.
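The identity (32) can be checked numerically with a truncated shift matrix as a finite-dimensional stand-in for R(z); the truncation is exact for indices well below the matrix size (the setup below is our illustration, not part of [18]):

```python
import numpy as np

N = 12                                  # truncation size
z = np.zeros((N, N))
for m in range(N - 1):
    z[m + 1, m] = 1.0                   # z|m> = |m+1>: truncated right shift
zs = z.T                                # its adjoint: the left shift

def E(n):
    # E_n = z (z^n z*^n - z^{n+2} z*^{n+2}); the bracket is the projection
    # onto span{|n>, |n+1>}, so E_n shifts |n>, |n+1> up by one, kills the rest
    p = np.linalg.matrix_power
    return z @ (p(z, n) @ p(zs, n) - p(z, n + 2) @ p(zs, n + 2))

def ket(m):
    v = np.zeros(N)
    v[m] = 1.0
    return v

# verify R(E_n^2)|m> = delta_{m,n} |n+2> for indices well below N
for n_ in range(4):
    En2 = E(n_) @ E(n_)
    for m in range(6):
        expected = ket(n_ + 2) if m == n_ else np.zeros(N)
        assert np.allclose(En2 @ ket(m), expected)
```

The check works because E_n maps |n⟩ ↦ |n+1⟩ and |n+1⟩ ↦ |n+2⟩ and annihilates all other basis vectors, so E_n^2 survives only on |n⟩.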
6.3. A strong connection. Method I. In this subsection we construct a strong connection on the C(Z_2)-comodule algebra C(S^2_{RT}) by repeated application of the formula stated in the proof of [13, Lemma 3.2]: for a fibre product P of two principal comodule algebras P_1 and P_2, the formula (33) defines a strong connection ℓ : H → P ⊗ P. Here f^i_j := µ^i_j ∘ π^i_j, and µ^i_j is any unital colinear splitting of π^j_i, i ≠ j; we use the analogous Sweedler-type leg-numbering convention for coproducts. Observe that for C(Z_2)-comodule algebras it is enough to compute the value of a strong connection at h = u, where u is the group-like generator of C(Z_2), because strong connections are unital and linear. Note also that it is sufficient to know the values f^i_j(x) only for a set of elements x ∈ P_j which actually appear in the formula, and which (because of the bi-colinearity of strong connections) can be assumed to be linearly independent and to satisfy ρ(x) = x ⊗ u, where ρ denotes the coaction; i.e., one needs only to solve the corresponding equations with unknowns f^i_j(x) ∈ P_j. As the formula (33) assumes the comodule algebra to be presented as an ordinary (double) pullback, we need to convert the triple pullback defining C(S^2_{RT}) into an iterated pullback and apply the formula recursively. Since all the maps C(S^2_{RT}) → T_i ⊗ C(Z_2) are surjective [18], we can apply [21, Lemma 0.2 and Proposition 1.3] to present C(S^2_{RT}) as the desired iterated pullback (36). We first compute a strong connection ℓ_{01} : C(Z_2) → P_1 ⊗ P_1 on P_1, the fibre product of T_0 ⊗ C(Z_2) and T_1 ⊗ C(Z_2) (see (36)). We use the particular choice of strong connections ℓ_0 and ℓ_1 on the trivial pieces T_0 ⊗ C(Z_2) and T_1 ⊗ C(Z_2) given by
(38) ℓ_0(u) = (1 ⊗ u) ⊗ (1 ⊗ u), ℓ_1(u) = (1 ⊗ u) ⊗ (1 ⊗ u).
6.4. A strong connection. Method II. Let us denote for brevity γ_i := α_i(1_T ⊗ u), and let us omit the subscripts indicating which algebras the unit elements belong to. Note that, because u^2 = 1, we have the simplifications below. Then a straightforward application of the formula from Theorem 6 yields the strong connection. Note the similarity of this formula to the formula (51) obtained using the other method in the previous subsection. This similarity is understandable because (not excluding the possibility of some general link between the two methods, as yet unexplored by the author) the common feature of both particular computations is that, by construction, both strong connection formulas are expressed using the limited set of elements φ_1, φ_2, 1_T ∈ T and 1_{C(Z_2)}, u ∈ C(Z_2).
We leave to the reader computations analogous to those at the end of the previous subsection, which prove that both the left and the right legs of the above strong connection are linearly independent (when taken separately).