Renormalization of the Hutchinson Operator

One of the easiest and most common ways of generating fractal sets in ${\mathbb R}^D$ is as attractors of affine iterated function systems (IFS). The classic theory of IFS's requires that they be made of contractive functions. In this paper, we relax this hypothesis by considering a new operator $H_\rho$ obtained by renormalizing the usual Hutchinson operator $H$. Namely, the $H_\rho$-orbit of a given compact set $K_0$ is built from the original sequence $\big(H^n(K_0)\big)_n$ by rescaling each set by its distance from $0$. We state several results for the convergence of these orbits and give a geometrical description of the corresponding limit sets. In particular, this provides a way to construct eigensets for $H$. Our strategy for tackling the problem is to link these new sequences to some classic ones, but the link will depend on whether the IFS is strictly linear or not. We illustrate the different results with various detailed examples. Finally, we discuss some possible generalizations.


Introduction and notation
The theory and the use of fractal objects, introduced and developed by Mandelbrot (see, e.g., [19]), still play an important role today in scientific areas as varied as physics, medicine or finance (see, e.g., [12] and references therein). Exhibiting theoretical models or solving practical problems requires producing various fractal sets. There is a long history of generating fractal sets using Iterated Function Systems. After the fundamental theoretical works by Hutchinson (see [17]), this method was popularized and developed by Barnsley in the 80s (see [1,2]). Since then, numerous developments and extensions have been made (see, e.g., [4]), making the literature related to these topics even more extensive. Indeed, the simplicity and the efficiency of this approach have contributed to its success in many domains, notably in image theory (see, e.g., [13]) and shape design (see, e.g., [15]).

Background
Let us recall the mathematical context and give the main notation used throughout the paper. Let $(M, d)$ be a metric space. For any map $f : M \to M$, we define the $f$-orbit of a point $x_0 \in M$ as the sequence $(x_n)_n$ given by $x_n = f^n(x_0)$, where $f^n$ is the $n$th iterate of $f$ with the convention that $f^0$ is the identity function $\mathrm{Id}$. In particular, one has $x_{n+1} = f(x_n)$; hence, if $f$ is continuous and if $(x_n)_n$ converges to $z \in M$, then $z$ is an invariant point for $f$, i.e., $f(z) = z$.
We denote by $\mathcal{K}_M$ the set of all non-empty compact subsets of $M$. We obtain a metric space by endowing it with the Hausdorff metric $d_H$ defined by

$$\forall\, K, K' \in \mathcal{K}_M, \qquad d_H(K, K') = \inf\big\{\varepsilon > 0 \;|\; K \subset K'(\varepsilon) \ \text{and} \ K' \subset K(\varepsilon)\big\},$$

where $K(\varepsilon)$ is the set of points at distance less than $\varepsilon$ from $K$.
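For finite point sets, the Hausdorff metric above reduces to maxima of point-to-set distances. The following sketch (plain Python, with illustrative point sets of our own choosing) computes it directly from the definition.

```python
# Hausdorff distance between two finite subsets of R^D, following the
# definition d_H(K, K') = inf{eps > 0 | K in K'(eps) and K' in K(eps)}.
# For finite sets this is max of the two one-sided "sup of point-to-set
# distance" quantities.
import math

def dist(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def hausdorff(K, Kp):
    # sup over x in K of d(x, K'), and the symmetric quantity
    d1 = max(min(dist(x, y) for y in Kp) for x in K)
    d2 = max(min(dist(x, y) for x in K) for y in Kp)
    return max(d1, d2)

K = [(0.0, 0.0), (1.0, 0.0)]
Kp = [(0.0, 0.0), (1.0, 1.0)]
print(hausdorff(K, Kp))  # 1.0: the point (1, 1) is at distance 1 from K
```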
For every $K \subset M$ we define the set $f(K) = \{f(x) : x \in K\}$, and we will assume in the sequel that $f(K) \in \mathcal{K}_M$ whenever $K \in \mathcal{K}_M$.
Let us consider $p \geq 1$ maps $f_1, \ldots, f_p$ with $f_i : M \to M$. Then we can define a new map $H : \mathcal{K}_M \to \mathcal{K}_M$ by

$$H(K) = f_1(K) \cup \cdots \cup f_p(K).$$

We say that $H$ is the Hutchinson operator associated with the iterated function system (IFS in short) $\{f_1, \ldots, f_p\}$ (see, e.g., [1,12,17]). Basic questions about an IFS are the following: Does the orbit $(H^n(K_0))_n$ converge for any compact set $K_0$? Does its limit depend on $K_0$? What are the geometrical properties of the limit sets?
The classic theory of IFS's is based on the contraction mapping principle (see, e.g., [1,12,17]). Let us recall that a map $f : M \to M$ is contractive if there exists $\lambda_f < 1$ such that

$$\forall\, x, y \in M, \qquad d\big(f(x), f(y)\big) \leq \lambda_f\, d(x, y).$$

Let us assume that $(M, d)$ is a complete metric space. Then any contractive map is continuous, has a unique invariant point $z \in M$, and the $f$-orbit of any $x_0 \in M$ converges to $z$ with the basic estimate

$$\forall\, n \geq 0, \qquad d\big(f^n(x_0), z\big) \leq \lambda_f^n\, d(x_0, z).$$
If $f_1, \ldots, f_p$ are contractive then the associated Hutchinson operator $H$ is also contractive, because

$$d_H\big(H(K), H(K')\big) \leq \lambda_H\, d_H(K, K') \qquad \text{with} \qquad \lambda_H = \max_{1 \leq i \leq p} \lambda_{f_i}.$$

Since $(\mathcal{K}_M, d_H)$ inherits the completeness of $(M, d)$, the map $H$ then has a unique invariant point $L \in \mathcal{K}_M$, called the attractor of $H$, and for all $K_0 \in \mathcal{K}_M$ the sequence $(H^n(K_0))_n$ converges to $L$. One of the interests is that such sets $L$ are generally fractal sets.
In the sequel, the space $M$ will essentially be $\mathbb{R}^D$, $D \geq 1$, endowed with the metric induced by the Euclidean norm $\|\cdot\|$. Writing simply $\mathcal{K}$ for $\mathcal{K}_M$, a subset $K \subset \mathbb{R}^D$ belongs to $\mathcal{K}$ if and only if it is closed and bounded. In particular, the closed ball with center $x \in \mathbb{R}^D$ and radius $r > 0$ will be denoted by $B(x, r)$.
In this paper, we are interested in affine IFS's, i.e., when $f_i$ is defined by $f_i(x) = A_i x + b_i$ with $A_i$ a $D \times D$ matrix and $b_i \in \mathbb{R}^D$ a vector. Such a map satisfies $\lambda_{f_i} = \|A_i\|$, where $\|A_i\|$ is the operator norm of $A_i$ given by $\|A_i\| = \max\{\|A_i x\| : \|x\| = 1\}$. In particular, classic IFS's consist of transformations involving rotations, symmetries, scalings and translations. In this case, if $H$ is contractive, the corresponding attractor $L$ is called a self-affine set. One obtains a nice subclass of such IFS's when the $f_i$'s are homotheties, i.e., when $f_i(x) = \alpha_i x + b_i$ with $\alpha_i \geq 0$. Indeed, contrarily to general affine maps, $f_i$ contracts distances with the same ratio $\alpha_i$ in all directions. This enables a precise description of $L$. For example, if the sets $f_i(K_0)$ are mutually disjoint then $L$ is a Cantor set whose fractal dimension is the solution of a very simple equation (see [12,21]). Cantor sets are fundamental and come up naturally when one studies IFS's. A simple family of Cantor sets in $\mathbb{R}$ is $\{\Gamma_a : 0 < a < \frac{1}{2}\}$, where $\Gamma_a$ is the attractor of the IFS $\{f_1, f_2\}$ with $f_1(x) = ax$ and $f_2(x) = ax + (1 - a)$. For example, $\Gamma_{1/3}$ is the usual triadic Cantor set (see [10,12,17]). When $\frac{1}{2} \leq a < 1$, the attractor of the previous IFS becomes the whole interval $[0, 1]$. These basic examples will be extensively used in the sequel.
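The family $\Gamma_a$ can be explored numerically by iterating the Hutchinson operator of the two maps $f_1(x) = ax$ and $f_2(x) = ax + (1-a)$ on the endpoint set $\{0, 1\}$. A minimal sketch (the rounding tolerance is an implementation choice, not from the text):

```python
# Iterate the Hutchinson operator H(K) = f1(K) U f2(K) for the Cantor
# family: f1(x) = a*x, f2(x) = a*x + (1 - a), starting from K0 = {0, 1}.
def hutchinson(K, a):
    # rounding only guards against floating-point near-duplicates
    return sorted({round(a * x, 12) for x in K} |
                  {round(a * x + (1 - a), 12) for x in K})

def orbit(a, n, K0=(0.0, 1.0)):
    K = list(K0)
    for _ in range(n):
        K = hutchinson(K, a)
    return K

# For a = 1/3 the points are the endpoints of the 2^n remaining
# intervals of length 3^(-n) of the triadic Cantor set construction.
K = orbit(1/3, 3)
print(len(K))   # 16 = 2 * 2^3 endpoints after 3 steps
```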

Motivation
Let us point out two specific situations:
- When $\lambda_H \geq 1$ the previous results become false: typical orbits fail to converge. Basically, the orbits of some points $x_0 \in K_0$ may then satisfy $\|f_i^n(x_0)\| \to \infty$ for some $i$, preventing the sequence $(H^n(K_0))_n$ from being bounded.
- When all the $f_i$'s are contractive linear maps, the attractor of $H$ is always $\{0\}$, so it does not depend on the fine structure of the $A_i$'s but only on their norms.
However, in these two degenerate situations we can observe an intriguing geometric structure in the sets $H^n(K_0)$. For example, let us consider the IFS $\{f_1, f_2\}$ where the $f_i : \mathbb{R}^2 \to \mathbb{R}^2$ are the linear maps given by their canonical matrices, which depend on a parameter $a > 0$. We focus on the $H$-orbit of the unit ball $B(0, 1)$. For all $a$ large enough we have $\|A_1\| = \|A_2\| > 1$ and the sequence $(H^n(B(0, 1)))_n$ is not bounded: the diameter $d_n$ of $H^n(B(0, 1))$ grows to infinity. On the contrary, for all $a$ small enough we have $\|A_1\| = \|A_2\| < 1$.
Thus $H$ is now contractive and $(H^n(B(0, 1)))_n$ converges to $\{0\}$: $d_n$ vanishes to $0$. Nevertheless, whatever the value of $a$, one can observe that the sets $H^n(B(0, 1))$ tend to the same limit shape, looking like a 'sea urchin' (see Fig. 1). So one can wonder whether there exists a critical value of $a$ for which $d_n$ does not degenerate, making it possible to observe this asymptotic set. Since the maps are linear, changing the parameter $a$ gives the same sets for each $n$ up to a scaling factor. Thus an adequate renormalization should reveal a common asymptotic limit shape.
In this paper, we aim to modify the original Hutchinson operator so as to neutralize these two degenerate behaviors. We wish to obtain a limit set even if the IFS is not contractive, and a non-zero limit set for contractive linear IFS's. Moreover, we would like this new operator to exhibit the typical 'limit shape' observed above.

Renormalization with the radius function
Our strategy is to rescale each set $H^n(K_0)$ by dividing it by its size. The idea of rescaling a sequence of sets $(K_n)_n$ to make it converge to a non-degenerate compact limit is not new; it is particularly used in stochastic modeling (see, e.g., [8,22] for famous examples of random growth models and, more recently, [18,20] in the context of random graphs and planar maps). Probabilists usually consider the a posteriori rescaled sets $\frac{1}{d_n} K_n$, where $d_n$ estimates the size of $K_n$, often its diameter.
Here we proceed differently. First, in order to keep dealing with the orbit of an operator, we will do an a priori renormalization. Secondly, we will measure the size of a compact set by its distance from $0$. Precisely, we consider the radius function $\rho$ defined on $\mathcal{K}$ by

$$\rho(K) = \max\big\{\|x\| : x \in K\big\},$$

and we denote by $H_\rho$ the operator defined by

$$H_\rho(K) = \frac{1}{\rho(H(K))}\, H(K).$$

The radius function $\rho$ satisfies the three following basic properties:
- continuity: $\rho$ is continuous with respect to $d_H$;
- monotonicity: if $K \subset K'$ then $\rho(K) \leq \rho(K')$;
- homogeneity: for all $\alpha \in \mathbb{R}$, $\rho(\alpha K) = |\alpha| \rho(K)$.
Actually, $\rho$ is a very nice function because it enjoys an additional stability property: for all $K, K' \in \mathcal{K}$, $\rho(K \cup K') = \max(\rho(K), \rho(K'))$. The subject of interest of the paper is then the $H_\rho$-orbit of sets $K_0 \in \mathcal{K}$. For simplicity, we will write in the sequel $K_n = H_\rho^n(K_0)$, so that

$$K_{n+1} = \frac{1}{d_n} H(K_n) \qquad \text{with} \qquad d_n = \rho(H(K_n)). \qquad (1.3)$$

We will assume that $d_n > 0$, i.e., $K_n \neq \{0\}$. Observe that $\rho(K_n) = 1$ for all $n \geq 1$, thus:
- $K_n \subset B(0, 1)$, so that the orbit of any set $K_0$ is bounded;
- there exists at least one $x_n \in K_n$ such that $\|x_n\| = 1$, so that $(K_n)_n$ cannot vanish to $\{0\}$.
In particular, if $(K_n)_n$ converges to a set $K$ then $\rho(K) = 1$ and $K \neq \{0\}$. This new operator $H_\rho$ is then a good candidate to solve the problems discussed in Section 1.2. It acts by freezing the geometrical structure of $H^n(K_0)$ at each step $n$ of the construction of the orbit.
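The renormalized orbit is easy to simulate when $K_0$ is a finite point set. The sketch below uses two illustrative contractive $2 \times 2$ matrices (chosen for the example, not taken from the text) and checks that $\rho(K_n) = 1$ after each step, so the orbit never collapses to $\{0\}$.

```python
# Renormalized orbit K_{n+1} = H(K_n) / rho(H(K_n)) for a linear IFS
# acting on a finite subset of R^2.  The matrices are illustrative.
import math

def rho(K):
    # radius function: max distance of the (finite) set K from the origin
    return max(math.hypot(x, y) for x, y in K)

def step(K, mats):
    HK = [(m[0][0] * x + m[0][1] * y, m[1][0] * x + m[1][1] * y)
          for m in mats for x, y in K]
    d = rho(HK)                       # d_n = rho(H(K_n))
    return [(x / d, y / d) for x, y in HK], d

mats = [((0.6, 0.2), (0.0, 0.5)),     # both matrices are contractive,
        ((0.5, 0.0), (-0.3, 0.4))]    # yet the orbit does not shrink to {0}
K = [(1.0, 0.0), (0.0, 1.0)]
for n in range(8):
    K, d = step(K, mats)
print(round(rho(K), 10))   # 1.0 by construction of the renormalization
```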

Eigen-equation problem
Let us point out a very strong connection with the 'eigen-equation problem' recently studied in [3] for affine IFS's. Indeed, if $(K_n)_n$ converges to a set $K$ then $(d_n)_n$ converges to some $d > 0$ and taking the limit in (1.3) leads to $H(K) = dK$. Hence $d$ is an eigenvalue of $H$ and $K$ a corresponding eigenset. Existence of solutions of this equation is discussed and proved in [3]. The values of $d$ are closely related to the joint spectral radius $\sigma_M$ of the $A_i$'s (see (2.1)). In particular, for linear IFS's, $\sigma_M$ was interpreted as a transition value for which there exists a corresponding eigenset $K$ whose structure is similar to the one described in Section 1.2. Unfortunately, these results do not hold for every IFS. In particular, they rule out simple IFS's made up only of homotheties, or more interesting ones made up of stochastic matrices. However, the results stated in [3] provide important clues to determine and study the possible limits of both sequences $(d_n)_n$ and $(K_n)_n$.
When studying the eigen-equation problem, an interesting question is how to approximate a couple $(d, K)$ of solutions of the equation $H(K) = dK$. Let us look at the special case when the IFS consists of only one linear map with matrix $A$, and set $K_0 = \{x_0\}$. Then $K_{n+1} = \{x_{n+1}\}$ with

$$x_{n+1} = \frac{A x_n}{\|A x_n\|}.$$

One recognizes the famous power iteration algorithm. Under suitable assumptions it gives a simple way to approximate the unit eigenvector associated with the dominant eigenvalue of $A$, this eigenvalue being the limit of $d_n = \|A x_n\|$. Therefore, iterating the operator $H_\rho$ from a set $K_0$ is nothing but a generalization of this algorithm, and thus provides a natural procedure to approximate both an eigenvalue of $H$ and one of its associated eigensets. From now on we are interested in the convergence of $(K_n)_n$ and the geometric properties of its limit. Typically, $H_\rho$ is not contractive and the classic theory may not be applied. In particular, $H_\rho$ may have several invariant points, so that the limit of $(K_n)_n$ may no longer be unique but may depend deeply on $K_0$. Furthermore, it is clear that the $H_\rho$-orbit of $K_0$ may diverge for some $K_0$ (for example when the $A_i$'s are rotations). We will expose different ways to state the convergence of $(K_n)_n$ depending on whether the IFS is affine (Section 2) or strictly linear (Section 3). Finally, some generalizations will be shown in the last section (Section 4).
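The one-map case can be checked directly. Below is a sketch of the power iteration with an illustrative symmetric matrix of our own choosing, where $d_n = \|A x_n\|$ approaches the dominant eigenvalue.

```python
# Power iteration as the one-map linear case of H_rho: with K_0 = {x_0},
# K_{n+1} = {A x_n / ||A x_n||} and d_n = ||A x_n|| tends to the dominant
# eigenvalue of A (here A is symmetric, so convergence is guaranteed for
# a generic starting vector).
import math

A = [[2.0, 1.0], [1.0, 3.0]]   # eigenvalues (5 +/- sqrt(5)) / 2
x = [1.0, 0.0]
for _ in range(60):
    y = [A[0][0] * x[0] + A[0][1] * x[1],
         A[1][0] * x[0] + A[1][1] * x[1]]
    d = math.hypot(*y)          # d_n = ||A x_n||
    x = [y[0] / d, y[1] / d]    # renormalized iterate
print(round(d, 6))              # 3.618034 = (5 + sqrt(5)) / 2
```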

Results for affine IFS's
We suppose in this section that the IFS consists of $p \geq 1$ affine maps $f_i(x) = A_i x + b_i$. We denote by $M = \{A_1, \ldots, A_p\}$ the set of their canonical matrices. Let us recall that the joint spectral radius of $M$ is defined by

$$\sigma_M = \limsup_{n \to \infty}\, \max\big\{\alpha(A_{i_n} \cdots A_{i_1})^{1/n} : i_1, \ldots, i_n \in \{1, \ldots, p\}\big\}, \qquad (2.1)$$

where $\alpha(M)$ denotes the usual spectral radius of the matrix $M$ (see [24]). Finally, we denote by $\operatorname{Spec}(M)$ the set of the eigenvalues of $M$, so that $\alpha(M) = \max\{|\alpha| : \alpha \in \operatorname{Spec}(M)\}$.
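For small families one can bracket $\sigma_M$ numerically: every product $P$ of $n$ matrices gives the lower bound $\alpha(P)^{1/n} \leq \sigma_M$, and $\sigma_M \leq \max_P \|P\|^{1/n}$ over all products of length $n$. A brute-force sketch with illustrative matrices (numpy is assumed available):

```python
# Two-sided estimate of the joint spectral radius sigma_M from all
# matrix words of a fixed length n (exponential cost: p^n products).
import itertools
from functools import reduce
import numpy as np

def jsr_bounds(mats, n):
    lo = hi = 0.0
    for word in itertools.product(mats, repeat=n):
        P = reduce(np.matmul, word)
        lo = max(lo, max(abs(np.linalg.eigvals(P))) ** (1.0 / n))  # spectral radius
        hi = max(hi, np.linalg.norm(P, 2) ** (1.0 / n))            # operator norm
    return lo, hi

# illustrative contractive matrices, not taken from the text
M = [np.array([[0.7, 0.3], [0.0, 0.6]]),
     np.array([[0.6, 0.0], [0.4, 0.5]])]
lo, hi = jsr_bounds(M, 6)
print(lo <= hi <= 1.0)   # True: both bounds squeeze sigma_M as n grows
```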

Strategy: a general result
Our strategy consists in linking the convergence of $(K_n)_n$ to the asymptotic behavior of the sequence of positive numbers $(d_n)_n$. If $(K_n)_n$ converges to a set $K$ then $(d_n)_n$ converges to some $d > 0$, and the eigen-equation $H(K) = dK$ shows that $K$ may be seen as an invariant set of the classical Hutchinson operator $H_d = \frac{1}{d} H$, i.e., the operator associated with the IFS $\{\frac{1}{d} f_1, \ldots, \frac{1}{d} f_p\}$. Conversely, if $(d_n)_n$ is a constant sequence, say $d_n = d$, one has $K_n = H_d^n(K_0)$, so that $(K_n)_n$ converges to the attractor $L_d$ of $H_d$ if $d > \lambda_H$. Actually, when $(d_n)_n$ is no longer constant but converges to a number $d > \lambda_H$, the convergence of $(K_n)_n$ to $L_d$ still holds.
Theorem 2.1. Let $K_0 \in \mathcal{K}$. Assume that the sequence $(d_n)_n$ converges to a number $d > \lambda_H$. Then the sequence $(K_n)_n$ converges to the attractor $L_d$ of the operator $H_d$.

Proof. Let us set $K'_n = H_d^n(K_0)$. We have to prove that $\varepsilon_n = d_H(K_n, K'_n)$ converges to $0$. Since $(K'_n)_n$ converges and $K_n \subset B(0,1)$ for $n \geq 1$, there exists $B \in \mathcal{K}$ such that $K_n, K'_n \subset B$ for all $n \geq 0$. Then let us fix $\eta > 0$ and $N \geq 0$ such that $0 < \lambda_H < d - \eta \leq d_n \leq d + \eta$ for all $n \geq N$. We obtain

$$0 \leq \varepsilon_{n+1} \leq \mu\, \varepsilon_n + m_n, \qquad \text{where} \quad \mu = \frac{\lambda_H}{d - \eta} \quad \text{and} \quad m_n = \left|\frac{1}{d_n} - \frac{1}{d}\right| \rho(H(B)).$$

Since $\mu \in [0, 1)$ and $m_n \to 0$, it follows that $\varepsilon_n \to 0$.
Let us emphasize that we used neither the definition of $(d_n)_n$ nor the fact that the $f_i$'s are affine. Hence the result is valid for any pair of sequences $(K_n)_n$ and $(d_n)_n$ satisfying (1.3).
Let us notice that the sequence $(d_n)_n$ depends on $K_0$, so that the two limits $d$ and $L_d$ may also depend on $K_0$. If $d \leq \lambda_H$, the asymptotic behavior of $(K_n)_n$ is more delicate to derive directly from that of $(d_n)_n$. Therefore, in view of Theorem 2.1, we ask the following questions: Does the sequence $(d_n)_n$ always converge? May its limit depend on $K_0$, or be smaller than $\lambda_H$?

Convergence of $(d_n)_n$

Except for very special cases it is impossible to obtain the exact expression of $d_n$. Therefore we rather seek bounds for $d_n$ and $d$. Let us begin with a basic result.
In particular, if (d n ) n converges to d, then d also satisfies (2.2).
The next result provides non-trivial bounds for the possible limit $d$.
In particular, if $d > \lambda_H$ then the stated bounds hold.

Proof. Let $i \in \{1, \ldots, p\}$ and consider the sequence $(x_n)_{n \geq 1}$ defined by $x_{n+1} = \frac{1}{d_n} f_i(x_n)$, with $x_1 \in K_1$. One has $x_n \in K_n$ and $b_i = d_n x_{n+1} - A_i x_n$. By summation we obtain, for all $n > 1$, an expression of $b_i$ as an average. The first term in this sum goes to $0$ when $n \to \infty$, and Cesàro's lemma implies that the term in brackets goes to $d \operatorname{Id} - A_i$. That gives (2.3). Now assume that $i$ is such that $d \notin \operatorname{Spec}(A_i)$. Then the matrix $M_i = d \operatorname{Id} - A_i$ is invertible, and (2.5) yields a similar identity. Thus we obtain the remaining bound in the same way, and we conclude as above. This finishes the proof.
We will now show that (2.4) is an equality when the $A_i$'s are homotheties. Actually, we will prove (2.4) again, but with a very different approach which can be generalized (see Theorem 4.1(i)). We need the following result. We denote by $\operatorname{ch}(K)$ the convex hull of a non-empty set $K$.

Proof. Let us write $C = \operatorname{ch}(\{z_1, \ldots, z_p\})$. Let $i \in \{1, \ldots, p\}$. Since $f_i(z_i) = z_i$, one has $z_i \in L \subset \operatorname{ch}(L)$, and then $C \subset \operatorname{ch}(L)$. To prove the reverse inclusion, we have to show that the inclusion (2.6) becomes an equality. Finally, if $b_i = 0$ for all $i \in \{1, \ldots, p\}$, then the left-hand side of (2.6) is zero, hence a contradiction.
Notice that, using the stability property of $\rho$, (2.6) gives (2.4). We conclude now by giving other non-trivial bounds for $d$, valid for a particular class of IFS's. The next result is only a rephrasing of Theorems 2 and 3 in [3].
The determination of $\sigma_M$ is delicate, but the basic estimates

$$\max_{1 \leq i \leq p} \alpha(A_i) \;\leq\; \sigma_M \;\leq\; \max_{1 \leq i \leq p} \|A_i\| = \lambda_H$$

always hold (see [24]). In particular for homotheties, i.e., when $A_i = \alpha_i \operatorname{Id}$ with $\alpha_i \geq 0$, one obtains $\sigma_M = \lambda_H = \max_{1 \leq i \leq p} \alpha_i$.

Case of homotheties
We can give a complete answer when all the $A_i$'s are homotheties: the sequence $(K_n)_n$ always converges and its limit can be made explicit. First, we show that $(d_n)_n$ converges and we give the possible values for its limit $d$.
Proof. For all $n \geq 1$ we can find $y_n \in K_n$ and $i_n \in \{1, \ldots, p\}$ such that $d_n = \|\alpha_{i_n} y_n + b_{i_n}\|$. Then $u_n = \frac{1}{d_n}(\alpha_{i_n} y_n + b_{i_n})$ satisfies $u_n \in K_{n+1}$ and $\|u_n\| = 1$. Since $\|y_n\| \leq 1$, we obtain $d_{n+1} \geq d_n$. Thus $(d_n)_{n \geq 1}$ is increasing and bounded (see (2.2)), so it converges. Let $d$ be its limit. For all $n \geq 1$, choosing $x_n \in K_n$ such that $\|x_n\| = 1$, we get a lower bound for $d_n$. If $d > \alpha_j$, it follows from Proposition 2.5 that $d$ is a solution of equation (2.7). We may consider only the indices $i$ with $b_i \neq 0$. Then, since $d > \lambda_H$ and the functions $t \mapsto \frac{\|b_i\|}{|t - \alpha_i|}$ are strictly decreasing on $(\lambda_H, +\infty)$, the solution of (2.7) is unique. We can now state the precise result. We denote by $\operatorname{cl}(K)$ the closure of a non-empty set $K$.
Then, for all $K_0 \in \mathcal{K}$, the sequence $(K_n)_n$ converges to a set $K \in \mathcal{K}$. Precisely, (a) either $\alpha_j \geq \alpha_k + \|b_k\|$, and then $K = \operatorname{cl}\big(\bigcup_{n \geq 1} K_n\big)$, (b) or else $\alpha_j < \alpha_k + \|b_k\|$, and then $K$ does not depend on $K_0$: it is the attractor $L_d$ with $d$ as in Lemma 2.7.

Proof. (i) Assume that $b_j \neq 0$. Hence by Lemma 2.7 we have $d \neq \alpha_j$. (a) Suppose first that $d < \alpha_j$. Then use of Lemma 2.7 again shows that $d = \alpha_j - \|b_j\|$. In particular $\alpha_j - \|b_j\| \neq 0$ and $\alpha_j \neq 0$. Moreover, it follows from (2.7) that $d_n = d$ for all $n \geq 1$. Let $x_n \in K_n$ and consider the sequence $(x_{n+k})_k$ obtained by iterating $f_j$. Since $d < \alpha_j$, we must have $x_n - u = 0$. It follows that $K_n = \{u\}$ for all $n \geq 1$. Thus $f_i(u) = du$ for all $i \in \{1, \ldots, p\}$ and $K = \{u\}$. Therefore the conditions of (a) are all fulfilled. Conversely, if they are satisfied, we obviously have $K_n = K_1$ for all $n \geq 0$, and the result follows.
(a) Suppose first that $\alpha_j \geq \alpha_k + \|b_k\|$. Then it follows from Lemma 2.2 that $d = \alpha_j$, and then from (2.7) that $d_n = d$. Therefore, for all $n \geq 1$, $K_n \subset K_{n+1}$. Thus $(K_n)_{n \geq 1}$ is increasing. Since it is bounded, it converges to $\operatorname{cl}\big(\bigcup_{n \geq 1} K_n\big)$.
The latter being the only possibility, we obtain $\alpha_k \leq \alpha_j \leq \alpha_k - \|b_k\|$. Thus $b_k = 0$ and $\alpha_j = \alpha_k$, which is a contradiction. Therefore $d > \alpha_j$ and we conclude along the same lines as for (i)(b).
Let us note that, when D = 1, the unit sphere being finite, we can prove that (d n ) n is always stationary.
Then, for all $K_0 \in \mathcal{K}$, the sequence $(K_n)_n$ converges to the attractor of the renormalized IFS: it is a classical Sierpinski gasket (see Fig. 2(a)).
Proof. We apply Theorem 2.8 with the corresponding value of the maximum.

Then, for all $K_0 \in \mathcal{K}$, the sequence $(K_n)_n$ is increasing and converges to the set $\operatorname{cl}\big(\bigcup_{n \geq 1} K_n\big)$ (see Fig. 2).

Proof. We apply Theorem 2.8 with the corresponding value of the maximum.

We can observe that in the previous theorem the asymptotics of $(d_n)_n$ was given by the points of $B(0, 1)$ whose images by the $f_i$'s have the largest norm. Exploiting this remark, we can obtain a more general result assuming that only one function $f_i$ has a homothety linear part, but that this function is responsible for the large norms. Then, for all $K_0 \in \mathcal{K}$, the sequence $(K_n)_n$ converges to a set $K \in \mathcal{K}$.

Proof. Actually, the proof is very similar to that of Theorem 2.8, so we will only detail the key points. The first point is to show that $d_n$ is always attained with the function $f_p$. Indeed, let $n \geq 1$ and $x_n \in K_n$ with $\|x_n\| = 1$. Then, for all $i \in \{1, \ldots, p - 1\}$, the norm $\|f_i(x_n)\|$ is dominated by the one given by $f_p$. It follows that $d_n = \|f_p(y_n)\|$ for some $y_n \in K_n$, and $d_n \geq \big|\alpha - \|b\|\big|$. Considering now $u_n = \frac{1}{d_n}(\alpha y_n + b)$, we get $u_n \in K_{n+1}$, $\|u_n\| = 1$ and, since $\|y_n\| \leq 1$, $d_{n+1} \geq d_n$. Thus $(d_n)_{n \geq 1}$ is increasing and bounded, so it converges. Let $d$ be its limit.
In particular, we have proved that (2.8) holds. This case is similar to case (i)(a) of Theorem 2.8.
It follows that $d > \lambda_H$ and we conclude as in case (i)(b) of Theorem 2.8. (ii) Assume that $b = 0$. It follows from (2.8) that $d_n = d = \alpha$ for all $n \geq 1$. This case is then similar to case (ii)(a) of Theorem 2.8.

Results for linear IFS's
We suppose in this section that the IFS consists of $p \geq 1$ linear maps $f_i(x) = A_i x$. We still denote by $M = \{A_1, \ldots, A_p\}$ the set of their canonical matrices.

New strategy
If $(d_n)_n$ converges to $d$, it follows from Lemma 2.2 that $d \leq \lambda_H$, so we cannot apply Theorem 2.1. Actually, the convergence of $(d_n)_n$ need not imply the convergence of $(K_n)_n$. Consider for example $D = 2$ and suitable linear functions $f_1, f_2$. If $K_0 = \{(1, 0)\}$ then $K_n = \{(2^{-k}, 0) : 0 \leq k \leq n\}$, hence $(K_n)_n$ converges to $\operatorname{cl}\big(\bigcup_{n \geq 0} K_n\big)$, and $(d_n)_n$ is constantly equal to $2 < 3 = \lambda_H$. If $K_0 = \{(0, 1)\}$ then $K_n = (-1)^n \{(0, 3^{-k}) : 0 \leq k \leq n\}$, hence $(K_n)_n$ diverges, but $(d_n)_n$ is constantly equal to $\lambda_H$. Therefore we adopt here a new strategy, taking advantage of both the linearity of the $f_i$'s and the homogeneity of $\rho$.

Proof. Let $n \geq 1$. We obtain by linearity and by homogeneity of $\rho$ that, since $\rho(K_n) = 1$, $\rho(H^n(K_0)) = d_0 \cdots d_{n-1}$. Using linearity again, we observe that $H_d^n(K_0) = \frac{1}{d^n} H^n(K_0)$. Thus $\rho(H_d^n(K_0)) = \frac{d_0 \cdots d_{n-1}}{d^n}$; if (i) did not hold, this would lead to a contradiction (see [3]). Thus we get (i). Finally, the hypotheses and the continuity of $\rho$ allow us to take the limit in the right-hand side above. That gives (ii).
As we saw in Proposition 2.6, if such a $d$ exists then $d = \sigma_M$ for a large class of IFS's. We can expect that this is true in general. However, the first example in this section shows that the strict inequality is possible even for very simple IFS's. Actually, the hypothesis on the common invariant subspaces of the $A_i$'s is essential. Notice that the $A_i$'s share a common non-trivial invariant subspace if and only if there exists an invertible matrix $P$ such that, for all $i \in \{1, \ldots, p\}$, the matrix $P^{-1} A_i P$ is block triangular, with square diagonal blocks $A'_i$ and $A''_i$ and some matrix $M_i$ (see [24]). This is in particular the case of diagonal matrices, whose numerous invariant subspaces provide very special behaviors for $(K_n)_n$. In the rest of this section, we will look at such IFS's, focusing on the convergence of $(H_d^n(K_0))_n$, especially for $d = 1$.

LCP sets of matrices
We say that $M$ is a left convergent product set of matrices (LCP set in short) if the infinite products $A_{i_n} \cdots A_{i_1}$ converge for all sequences $(i) = (i_1, i_2, \ldots) \in I = \{1, \ldots, p\}^{\infty}$. In this case, we set $A_{(i)} = \lim_{n \to \infty} A_{i_n} \cdots A_{i_1}$ (see [9,16]). The theory of LCP sets was popularized in the 90s (see [9]) and is still of interest nowadays (see [16]), for example in the study of inhomogeneous Markov chains (see, e.g., [23]). One can always associate a canonical IFS with an LCP set. The next result gives sufficient conditions to obtain its convergence.
There exists a sequence $(\varepsilon_n)_n$ of positive numbers such that $\varepsilon_n \to 0$ and (3.1) holds. Then $(H^n(K_0))_n$ converges for all $K_0 \in \mathcal{K}$ to the limit set

$$L = \operatorname{cl}\Big(\bigcup_{(i) \in I} A_{(i)}(K_0)\Big).$$

Proof. Let us write $K_n = H^n(K_0)$ and $L' = \bigcup_{(i) \in I} A_{(i)}(K_0)$. Hypothesis (i) implies that $M$ is product bounded (see [5]); then there exists $R > 0$ such that $\|A_{(i)}\| \leq R$ for all $(i) \in I$. Since $K_0$ is compact, it follows that $L'$ is bounded, hence $L$ is compact. We claim that $d_H(K_n, L') \leq C \varepsilon_n$ for all $n \geq 1$, for some $C > 0$. Let $n \geq 1$ be fixed. Let $x \in L'$. One has $x = A_{(i)}(x_0)$ with $x_0 \in K_0$ and $(i) = (i_1, i_2, \ldots, i_n, \ldots) \in I$. Let $x' = A_{i_n} \cdots A_{i_1}(x_0)$. One has $x' \in K_n$ and

$$\|x - x'\| \leq \big\|A_{(i)} - A_{i_n} \cdots A_{i_1}\big\| \, \|x_0\| \leq C \varepsilon_n, \qquad \text{where } C = \rho(K_0).$$

Thus $L' \subset K_n(C\varepsilon_n)$. We prove in a similar way that $K_n \subset L'(C\varepsilon_n)$, hence $d_H(K_n, L') \leq C\varepsilon_n$. It follows that $d_H(K_n, L') \to 0$ and, since $d_H(K_n, L') = d_H(K_n, L)$, that $K_n \to L$.
We illustrate this result with the family of positive stochastic matrices in $\mathbb{R}^2$. In this case, we can give a precise description of the limit set $L$. Let us recall that a positive stochastic matrix is a matrix whose entries are positive real numbers and whose rows each sum to $1$.
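The LCP property for positive stochastic matrices can be observed numerically: each left multiplication contracts the gap between the rows of the product (a Dobrushin-type contraction), so the product approaches a rank-one matrix with equal rows. The matrices below are illustrative choices, not taken from the text.

```python
# Backward products A_{i_n} ... A_{i_1} of positive row-stochastic 2x2
# matrices: the rows of the product become equal, i.e., the limit is a
# rank-one stochastic matrix (whose value depends on the index sequence).
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A1 = [[0.8, 0.2], [0.3, 0.7]]   # illustrative positive stochastic matrices
A2 = [[0.5, 0.5], [0.1, 0.9]]

random.seed(0)
P = [[1.0, 0.0], [0.0, 1.0]]    # start from the identity
for _ in range(50):
    P = matmul(random.choice([A1, A2]), P)   # multiply on the LEFT

row_gap = max(abs(P[0][j] - P[1][j]) for j in range(2))
print(row_gap < 1e-10)   # True: the two rows coincide in the limit
```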
Then, for all $K_0 \in \mathcal{K}$, the sequence $(H^n(K_0))_n$ converges to a set described in terms of a map $h_{v_0} : \mathbb{R} \to \mathbb{R}$, an affine map which depends on $v_0$ and the $f_i$'s, and of $\Gamma$, the attractor of an IFS $\{g_1, \ldots, g_p\}$ where $g_i : \mathbb{R} \to \mathbb{R}$ is an affine map which only depends on $f_i$.
Proof. We will apply Lemma 3.2. First, since each product $A_{i_n} \cdots A_{i_1}$ contains a positive stochastic matrix, it converges (see [7]) and $M$ is an LCP set. Next, there exists an invertible matrix $P$ such that, for all $i \in \{1, \ldots, p\}$, $A_i = P T_i P^{-1}$, where $T_i$ is a triangular matrix with diagonal $(1, b_i)$ and remaining entry $a_i$. Notice that it is also proved in [9] that $\{T_1, \ldots, T_p\}$ is an LCP set with a continuous limit function.
Here we want more and describe precisely the limit set of matrices. Let us define $g_i : \mathbb{R} \to \mathbb{R}$ by $g_i(x) = b_i x + a_i$ and set $b = \max_{1 \leq i \leq p} |b_i| < 1$. We obtain by induction that, for all sequences of indices $i_1, \ldots, i_n$, $n \geq 2$, the product $T_{i_n} \cdots T_{i_1}$ keeps the same triangular form, with off-diagonal entry $c_{i_1 \cdots i_n}$. Hence, considering the contractive Hutchinson operator $G$ associated with the IFS $\{g_1, \ldots, g_p\}$, its attractor $\Gamma$, and the orbit $(G^n(A))_n$ of the compact set $A = \{a_1, \ldots, a_p\}$, we obtain, for all $n \geq 2$, the corresponding estimates with $|b_{i_1} \cdots b_{i_n}| \leq b^n$ and $d(c_{i_1 \cdots i_n}, \Gamma) \leq C b^n$ for a constant $C > 0$ (see [17]). It follows that the required estimate holds for all $(i) \in I$ and all $n \geq 1$. Hence (3.1) and all the hypotheses of Lemma 3.2 are satisfied with $\varepsilon_n = C' b^n$ for a constant $C' > 0$.
Moreover, letting $n$ go to $\infty$, we obtain the corresponding set of limit matrices. Therefore, if $v_0 = (x_0, y_0) \in K_0$, we get the associated limit points. The result follows by using (3.2).
Notice that if $K_0 \subset \operatorname{Span}\{(1, 1)\}$ then $f_i(v_0) = v_0$ and $L = K_0$. Actually, $\operatorname{Span}\{(1, 1)\}$ is a common invariant subspace of the $A_i$'s, so that $K_n = K_0$ for all $n \geq 0$. In particular, one cannot apply Proposition 2.6, but it follows from the decomposition of the $A_i$'s that $d = 1 = \sigma_M$.
Thus $g_1(x) = ax$, $g_2(x) = ax + (1 - a)$, and $\Gamma$ is the Cantor set $\Gamma_a$ when $0 < a < \frac{1}{2}$ and the interval $[0, 1]$ when $\frac{1}{2} \leq a < 1$. The limit $L$ of the sequence $(H^n(K_0))_n$ depends on the starting set $K_0$ (see Fig. 3(a) and Fig. 3(b)). The more points $K_0$ contains, the more complicated the limit set $L$ is, with unions of overlapping Cantor sets.
Several necessary and sufficient conditions for a finite set of matrices to be an LCP set have been given (see [6,7,11] and [16] for a survey). Not surprisingly, they require evaluating the joint spectral radius of $M$ or determining the generalized eigenspaces of the $A_i$'s.

Identity-block matrices
Hypothesis (3.1) implies that the address function $A : (i) \mapsto A_{(i)}$ is continuous. Unfortunately, very simple LCP sets may not fulfill this condition, preventing us from applying Lemma 3.2. This situation happens for example when adding the matrix $\operatorname{Id}$ to an LCP set with a continuous function $A$ (see [9]). However, this simple case can be solved directly.

Proof. Since $f_p = \operatorname{Id}$, the sequence $(H^n(K_0))_n$ is clearly increasing. So it is enough to prove that it is bounded to get the convergence to $L$. Let $R > 0$ be large enough to ensure $K_0 \subset B(0, R)$. Then $f_i(B(0, R)) \subset B(0, R)$ for all $i \in \{1, \ldots, p\}$. Therefore $H^n(K_0) \subset B(0, R)$ for all $n \geq 0$. Now observe that, for all $K \in \mathcal{K}$, $H(K) = H'(K) \cup K$, where $H'$ is the Hutchinson operator associated with the IFS $\{f_1, \ldots, f_{p-1}\}$, with attractor $L'$. Assume that $K_0 \subset L'$. Then $H(K_0) \subset H(L') = H'(L') \cup L' = L' \cup L' = L'$. By induction it follows that $H^n(K_0) \subset L'$ for all $n \geq 0$. Taking the limit we get $L \subset L'$. Next, we have $L = H(L) = H'(L) \cup L \supset H'(L)$. By induction it follows that $L \supset (H')^n(L)$ for all $n \geq 0$. Taking the limit we get $L \supset L'$, and finally $L = L'$. These latter sequences are related to the so-called inhomogeneous IFS's (see, e.g., [14]). Indeed, one has $H^n(K_0) = H_0^n(K_0)$, where $H_0$ is the Hutchinson operator associated with the contractive IFS $\{f_0, f_1, \ldots, f_{p-1}\}$ with $f_0 : K \in \mathcal{K} \mapsto K_0$.
We can generalize the previous result to matrices $A_i$ which contain an identity block, that is, when the restriction of $f_i$ to a certain subspace of $\mathbb{R}^D$ is the identity function. We will state two results dealing with two special cases of such families. Then $(H^n(K_0))_n$ converges for all $K_0 \in \mathcal{K}$ to a set $L$; precisely, one distinguishes whether there is at least one $i \in \{1, \ldots, p\}$ such that $f_{i,V} = \operatorname{Id}$, where $p_{W,V}$ denotes the projection onto $W$ along $V$ and $\lambda_H$ the corresponding maximum.

Theorem 3.7. Assume that there exist two subspaces $V, W \subset \mathbb{R}^D$ satisfying $V \oplus W = \mathbb{R}^D$ and, for all $i \in \{1, \ldots, p\}$: (ii) $f_i(W) \subset W$ and the linear function $f_{i,W} : W \to W$ induced by $f_i$ is a contraction.
Then $(H^n(K_0))_n$ converges for all $K_0 \in \mathcal{K}$ to a set $L$. In particular, $p_{V,W}(L) = p_{V,W}(K_0)$.

Proof . Let us write
We then prove by induction a decomposition of $H^n(\{z_0\})$, where $H'$ is the Hutchinson operator associated with the contractive IFS $\{f_{1,W}, \ldots, f_{p,W}\}$. Therefore the sequence $((H')^n(w_0))_n$ converges to a set $L(v_0)$ which only depends on $v_0$. It follows that $(H^n(\{z_0\}))_n$ converges to $v_0 + L(v_0)$, with a uniform estimate.
We have then proved that the set $\bigcup_{v_0 \in V_0} \big(v_0 + L(v_0)\big)$ is bounded, i.e., $L \in \mathcal{K}$. Furthermore, since $L(v_0)$ does not depend on $w_0$, we can write the announced expression. (c) Finally, writing $H^n(K_0) = \bigcup_{v_0 + w_0 \in K_0} \big(v_0 + (H')^n(w_0)\big)$ and using (a) and (b), we obtain an upper bound involving a supremum. Since this supremum is finite, we obtain the result by letting $n$ go to $\infty$.
Notice that the hypotheses of Theorem 3.7 mean that, for all $i \in \{1, \ldots, p\}$, the map $f_i$ has a block matrix with respect to the sum $V \oplus W$ of triangular form, with an identity block, a contractive block $A_i$, and some matrix $M_i$. We deduce that $\sigma_M = 1$.
By uniqueness, we check that this attractor is the one announced. Assume that (a, b, c) = 1, 1,

Orbit of the unit ball
To avoid the various behaviors due to the different invariant subspaces of the $A_i$'s, we propose to take into account all the directions of $\mathbb{R}^D$ by focusing on the $H$-orbit of the unit ball $B(0, 1)$. The relevant assumptions are that $\|A_i\| \leq 1$ for all $i \in \{1, \ldots, p\}$, and that there exist $N \geq 1$ and $i_1, \ldots, i_N \in \{1, \ldots, p\}$ such that the matrix $A_{i_N} \cdots A_{i_1}$ has eigenvalue $1$.
Notice that the two hypotheses imply that $\alpha(A_{i_N} \cdots A_{i_1}) = \|A_{i_N} \cdots A_{i_1}\| = 1$, hence $\sigma_M = 1$. Notice also that $M$ need not be an LCP set. One can for example consider matrices of rotations or symmetries.
belongs to $L$ and satisfies $\|v\| = 1$. As an example, $L$ is shown in Fig. 5 when $a = 1$.

Proof. Since $\|A_1\|^2 = \|A_2\|^2 = a^2 + \frac{1}{2}\big(1 + \sqrt{1 + 4a^2}\,\big) \geq 1$, we cannot apply the previous proposition directly. Thus we have to consider the normalized matrices $\frac{1}{d} A_1$ and $\frac{1}{d} A_2$. Notice that we can show here that $d = \sigma_M$ (see [24]). Then hypothesis (i) is satisfied, and one checks that a suitable product of the normalized matrices has eigenvalue $1$. Hence, using now Proposition 3.9, the sequence $(H_d^n(B(0, 1)))_n$ converges to the set $L$ defined by (3.3), with $\rho(L) = 1$. Therefore $(H_\rho^n(B(0, 1)))_n$ converges to $L$ by Proposition 3.1. Finally, it follows from the proof of Proposition 3.9 that $v$, one of the unit eigenvectors associated with $1$, belongs to $L$.
Let us emphasize that the previous method may be applied in general when two matrices of $M$ are symmetric to each other, or when one of them is symmetric. In particular, we can use it to study the example of Section 1.2.
Angular structure of the limit set in $\mathbb{R}^2$

Under the hypotheses of Proposition 3.9, the set $L$ defined by (3.3) is an eigenset for $H$ associated with the eigenvalue $\sigma_M = 1$. It is straightforward to see that, like the ball $B(0, 1)$, the set $L$ is symmetric and star-shaped with respect to the origin, i.e., if $x \in L$ then $rx \in L$ for any $r \in [-1, 1]$. Such a property was also discussed in [3]. More generally, if $K_0$ is symmetric and star-shaped with respect to the origin, then the same holds for every set $H^n(K_0)$, and thus for its limit set $L$ in case of convergence. Therefore, such a limit set $L$ admits a polar representation, and it would be especially interesting to know its angular structure. In this section we state a result in this direction for IFS's in $\mathbb{R}^2$.
Let $\mathcal{P} = \{(x, y) \in \mathbb{R}^2 : x \geq 0\} \setminus \{0\}$. Every point $(x, y) \in \mathcal{P}$ may be written in polar coordinates as $(R\cos\theta, R\sin\theta)$ with $R > 0$ and $\theta \in [-\frac{\pi}{2}, \frac{\pi}{2}]$. Let $K \in \mathcal{K}$ with $K \neq \{0\}$. Assume moreover that $K$ is symmetric with respect to the origin. Then $K \cap \mathcal{P} \neq \varnothing$ and we can define its set of slopes $S_K \subset [-\infty, \infty]$ by

$$S_K = \big\{\tan\theta \;|\; \exists\, (R\cos\theta, R\sin\theta) \in K \cap \mathcal{P}\big\},$$

with the convention $\tan(\pm\frac{\pi}{2}) = \pm\infty$. The following result provides a description of the set of slopes of $L$ for particular IFS's.

Proposition 3.11. Let $p \geq 1$ linear maps $f_i : \mathbb{R}^2 \to \mathbb{R}^2$ be given by their canonical matrices $A_i = \begin{pmatrix} a_i & b_i \\ c_i & d_i \end{pmatrix}$ such that $\det(A_i) \neq 0$ and $f_i(\mathcal{P}) = \mathcal{P}$. Let $K_0 \subset \mathbb{R}^2$ be a set symmetric and star-shaped with respect to the origin. Assume that the sequence $(H^n(K_0))_n$ converges to a set $L \neq \{0\}$. Then $L$ is symmetric and star-shaped with respect to the origin, and its set of slopes $S_L$ is a non-empty invariant set of the Hutchinson operator $\widetilde{H}$ associated with the homographic maps

$$\widetilde{f}_i(s) = \frac{c_i + d_i s}{a_i + b_i s}, \qquad i \in \{1, \ldots, p\}.$$

Proof. Notice that $S_L$ is well-defined. Writing $L \cap \mathcal{P} = \{(R\cos\theta, R\sin\theta) : R > 0, \theta \in \Theta\}$, we get $S_L = \tan\Theta$. Observing that $L \cap \mathcal{P} = H(L) \cap \mathcal{P}$, we have $S_L = S_{H(L)}$. We deduce from the hypothesis $f_i(\mathcal{P}) = \mathcal{P}$ that $H(L) \cap \mathcal{P}$ is the set of the points $f_i(s)$ for all $s \in L \cap \mathcal{P}$ and $i \in \{1, \ldots, p\}$. Since $f_i$ maps a point of slope $s$ to a point of slope $\widetilde{f}_i(s)$, we get $S_{H(L)} = \bigcup_{i=1}^{p} \widetilde{f}_i(\tan\Theta)$. The result follows.
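The action on slopes can be checked numerically: a linear map with matrix entries $a, b, c, d$ sends a point of slope $s = y/x$ to a point of slope $(c + ds)/(a + bs)$. A minimal sketch with illustrative coefficients of our own choosing:

```python
# Check that a linear map [[a, b], [c, d]] transforms the slope s = y/x
# of a point into (c + d*s) / (a + b*s).
def image_slope(mat, s):
    (a, b), (c, d) = mat
    return (c + d * s) / (a + b * s)

mat = ((2.0, 1.0), (0.5, 3.0))          # illustrative invertible matrix
x, y = 1.0, 0.25                        # a point with slope s = 0.25
xi = mat[0][0] * x + mat[0][1] * y      # image coordinates under the map
yi = mat[1][0] * x + mat[1][1] * y
print(abs(yi / xi - image_slope(mat, y / x)) < 1e-12)   # True
```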
The angular structure of $L$ is then known as soon as we can describe the invariant sets of the Hutchinson operator $\widetilde{H}$. This operator may have several invariant sets $S$, not necessarily closed. However, there is a useful way to determine them. Indeed, if $\widetilde{H}$ is contractive then every bounded invariant set $S$ of $\widetilde{H}$ satisfies $\operatorname{cl}(S) = \widetilde{L}$, where $\widetilde{L}$ is the attractor of $\widetilde{H}$. It is possible to determine such attractors of IFS's made up of homographic functions (see for example [12, p. 136]). Notice that $[-\infty, +\infty]$ is always a compact invariant set. Thus, it will often be necessary to consider a restriction of $\widetilde{H}$ to obtain a contractive operator.

Example 3.12. Let us consider the IFS $\{f_1, f_2\}$ where the $f_i : \mathbb{R}^2 \to \mathbb{R}^2$ are the linear maps given by their canonical matrices, with $0 < a < 1$, $0 < b < 1$ and $a + b \leq 1$. Starting from the unit square $K_0 = [-1,1] \times [-1,1]$, the sequence $\big(H^n(K_0)\big)_n$ converges to the set $L = \bigcap_{n \geq 0} H^n(K_0)$. Its set of slopes satisfies $\operatorname{cl}(S_L) = \Gamma_a$ if $0 < a < \frac{1}{2}$ and $\operatorname{cl}(S_L) = [0,1]$ if $\frac{1}{2} \leq a < 1$ (see Fig. 6(a)). Moreover, if $b = 1 - a$ then $S_L = \operatorname{cl}(S_L)$ (see Fig. 6(b)).

Proof. One checks that $H(K_0) \subset K_0$ so that the sequence $\big(H^n(K_0)\big)_n$ converges to the given set $L$. Moreover all the hypotheses of Proposition 3.11 on the $A_i$'s are satisfied. For example, $(1,0) \in H^n(K_0)$ for all $n \geq 0$, so that $L \neq \{0\}$. Hence the set of slopes $S_L$ is an invariant set of the operator $\widetilde{H} = \widetilde{f}_1 \cup \widetilde{f}_2$ where $\widetilde{f}_1(z) = az$ and $\widetilde{f}_2(z) = az + (1-a)$. Since $\|f_1(0,y)\| < \|(0,y)\|$ and $\|f_2(0,y)\| < \|(0,y)\|$ for all $y \neq 0$, one has $\operatorname{Span}\{(0,1)\} \cap L = \{0\}$. It follows that $S_L \subset \mathbb{R}$. Since $\widetilde{H}$ is contractive on $\mathcal{K}_{\mathbb{R}}$, one obtains $\operatorname{cl}(S_L) = \Gamma_a$ if $0 < a < \frac{1}{2}$ and $\operatorname{cl}(S_L) = [0,1]$ if $\frac{1}{2} \leq a < 1$. Assume now that $b = 1 - a$. To obtain a more precise description of the limit set $L$ we rather apply Theorem 3.7 with $V = \operatorname{Span}\{(1,0)\}$ and $W = \operatorname{Span}\{(0,1)\}$. Let $z_0 = (x_0, y_0) \in K_0$. We have $p_{V,W}(z_0) = (x_0, 0)$.
It follows that $\big(H^n(K_0)\big)_n$ converges to the set $L$ given by the closure formula of Theorem 3.7. The fact that in Example 3.12 the set $S_L$ is not always the whole Cantor set $\Gamma_a$ comes from the function $f_2$. When $b < 1 - a$, $f_2$ is a contraction. Thus, all the orbits of points $z \in B(0,1)$ involving $f_2$ infinitely many times correspond to a 'slope' $s = \tan\theta$ for which $R = R(\theta) = 0$. These slopes are then not visible in the limit set.
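The Cantor structure of $\Gamma_a$ in Example 3.12 can be observed numerically. The following sketch (ours; only the induced slope maps $\widetilde{f}_1(s) = as$ and $\widetilde{f}_2(s) = as + (1-a)$ are taken from the text) iterates the Hutchinson operator of the slope IFS on the interval $[0,1]$:

```python
# Iterating the slope IFS {g1(s) = a*s, g2(s) = a*s + (1 - a)} on [0, 1].
# For 0 < a < 1/2 the images are disjoint and a Cantor-type set Γ_a of
# total length (2a)^n emerges; for a >= 1/2 the two images overlap and
# the union still covers [0, 1].

def iterate_intervals(a, n):
    """Apply the slope IFS n times to [0, 1], tracking a list of intervals."""
    intervals = [(0.0, 1.0)]
    for _ in range(n):
        images = []
        for lo, hi in intervals:
            images.append((a * lo, a * hi))                    # g1
            images.append((a * lo + 1 - a, a * hi + 1 - a))    # g2
        intervals = sorted(images)
    return intervals

def total_length(intervals):
    return sum(hi - lo for lo, hi in intervals)

cantor = iterate_intervals(0.3, 8)     # a < 1/2: 2^8 disjoint pieces
print(len(cantor), total_length(cantor))

full = iterate_intervals(0.6, 6)       # a >= 1/2: still spans [0, 1]
print(min(lo for lo, hi in full), max(hi for lo, hi in full))
```

Since $0$ and $1$ are the fixed points of $g_1$ and $g_2$ respectively, the iterates always span $[0,1]$; only the interior gaps distinguish the two regimes.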

4 Other renormalizations
We have renormalized the sets $H^n(K_0)$ by dividing them by their radius. As mentioned in the introduction, one usually rather uses the diameter to rescale a sequence of compact sets. Thus, we may wonder what such a renormalization would yield. More generally, we want to study in this section the iteration of more general operators $H_\varphi$, which will provide various ways to approximate the solutions of the eigen-equation $H(K) = d\,K$. We keep using the sequence $(K_n)_n$ but choose a well-adapted operator according to the form of the matrices $A_i$'s.

4.1 Renormalization with a size function
We are interested in functions $\varphi$ that describe the size of a compact set and the way it occupies the space. Following the example of the radius function $\rho$, we will say that a function $\varphi : \mathcal{K} \to [0, +\infty)$ is a size function if it is continuous with respect to $d_H$, monotonic and homogeneous (see Section 1.3). For example, the max-radius function $\rho_\infty$ and the diameter $\delta$, respectively defined on $\mathcal{K}$ by
$$\rho_\infty(K) = \max_{x \in K} \|x\|_\infty \qquad \text{and} \qquad \delta(K) = \max_{x, y \in K} \|x - y\|,$$
are size functions. Given a size function $\varphi$, we define the renormalized operator $H_\varphi$ by $H_\varphi(K) = \frac{1}{\varphi(H(K))} H(K)$ and consider the $H_\varphi$-orbit of some set $K_0 \in \mathcal{K}$. Let us keep all the notation of the previous sections, easily adapted by replacing $\rho$ with $\varphi$. In particular $K_n = H_\varphi^n(K_0)$ (see (1.3)) and $d_n = \varphi\big(\bigcup_{i=1}^p f_i(K_n)\big)$ (see the first equality in (1.4)). We will assume that $K_n$ is always well-defined, i.e., $\varphi(K_n) \neq 0$.
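The iteration $K_{n+1} = H(K_n)/\varphi(H(K_n))$ can be sketched on finite subsets of $\mathbb{R}$ (our own toy illustration; the expanding maps below are not from the paper):

```python
# Minimal sketch of the renormalized Hutchinson operator
#   H_phi(K) = H(K) / phi(H(K))
# on finite subsets of R, with the radius rho(K) = max |x| as the size
# function.  The IFS is expanding (non-contractive), illustration only.

def rho(K):
    """Radius size function: distance of the farthest point from 0."""
    return max(abs(x) for x in K)

def H(maps, K):
    """Hutchinson operator: union of the images of K under the maps."""
    return {f(x) for f in maps for x in K}

def H_phi(maps, K, phi):
    """One step of the renormalized operator H_phi."""
    image = H(maps, K)
    d = phi(image)
    assert d != 0, "renormalization requires phi(H(K)) != 0"
    return {x / d for x in image}

maps = [lambda x: 2.0 * x, lambda x: 3.0 * x + 1.0]  # expanding toy IFS
K = {-1.0, 1.0}
for _ in range(12):
    K = H_phi(maps, K, rho)
print(rho(K))  # every renormalized iterate has radius 1
```

By construction $\rho(K_{n+1}) = 1$ for every $n$, so the renormalized orbit stays bounded even though the IFS itself is expanding.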
We are still interested in the convergence of the sequence $(K_n)_n$ and the description of its limit. The key points are Theorem 2.1 and Lemma 3.1, which provide general conditions of convergence. We summarize here the main results which still hold for any size function.

4.2 Renormalization with the diameter function
We now consider the diameter function $\delta$. The situation is more complicated than the previous ones, even if the matrices $A_i$'s are homotheties. The stability property (1.2) of $\rho$ was a key point in the proof of Theorem 2.8, but unfortunately it is no longer satisfied by $\delta$. We will only deal with the one-dimensional case $D = 1$.
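The same kind of experiment as before can be run with the diameter in dimension $D = 1$ (again a sketch of ours with an arbitrary toy IFS, not an example from the paper):

```python
# Sketch of diameter renormalization in dimension D = 1:
#   K_{n+1} = H(K_n) / delta(H(K_n)),  delta(K) = max_{x,y in K} |x - y|.
# Every renormalized iterate has diameter 1, whatever the limit
# behaviour of the orbit is.

def delta(K):
    """Diameter of a finite subset of R."""
    return max(K) - min(K)

def step(maps, K):
    """One application of the diameter-renormalized Hutchinson operator."""
    image = {f(x) for f in maps for x in K}
    d = delta(image)
    assert d != 0, "renormalization requires delta(H(K)) != 0"
    return {x / d for x in image}

maps = [lambda x: 2.0 * x, lambda x: -x + 1.0]  # toy non-contractive IFS
K = {0.0, 1.0}
for _ in range(10):
    K = step(maps, K)
print(abs(delta(K) - 1.0) < 1e-9)  # prints True
```

Note that, unlike the radius, the diameter is invariant under translation, which is one reason the analysis of this renormalization differs from the previous ones.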