Non-Commutative Resistance Networks

In the setting of finite-dimensional $C^*$-algebras ${\mathcal A}$ we define what we call a Riemannian metric for ${\mathcal A}$, which when ${\mathcal A}$ is commutative is very closely related to a finite resistance network. We explore the relationship with Dirichlet forms and corresponding seminorms that are Markov and Leibniz, with corresponding matricial structure and metric on the state space. We also examine associated Laplace and Dirac operators, quotient energy seminorms, resistance distance, and the relationship with standard deviation.


Introduction
This paper has its origins in three questions that arose at different times during my research concerning quantum metric spaces. The first of these questions was my puzzlement about how the "resistance distance" that is defined on resistance networks fits in with the metrics that arise on the state spaces of quantum metric spaces. (See section 12 of [31].) The second question concerns what conditions ensure that quotients of Leibniz seminorms are again Leibniz, a property that is important in dealing with quantum metric spaces. (See [34].) More recently my research led me to examine the Leibniz property for standard deviation. (See [35].) This eventually led me to ask whether there was a relationship between that topic too and quantum metric spaces and resistance networks.
In this paper I seek to give a coherent account of how closely these questions are related, and of the answers to them that I have found. I generally carry out the discussion in the setting of non-commutative C*-algebras. In order not to be distracted by all the technicalities encountered when dealing with unbounded operators and their dense domains, I deal in this paper only with finite-dimensional C*-algebras, somewhat in the spirit of the seminal paper of Beurling and Deny [3]. There is plenty to be said just about the purely algebraic aspects.

2010 Mathematics Subject Classification. Primary 46L87; Secondary 46L57, 58B34.
The research reported here was supported in part by National Science Foundation grant DMS-1066368.
As a thread to tie things together I introduce a structure that I call a "non-commutative Riemannian metric". This structure lies just below the surface of some of the literature concerning quantum dynamical semigroups and Dirichlet forms [38,39,7], but I have not seen this structure explicitly mentioned there. We will see (Section 7) that when the underlying C*-algebra is commutative, a Riemannian metric for it leads naturally to a resistance network.
In order to provide a coherent narrative, I include much material that already appears in the literature. Thus many parts of this paper can be considered to be expository. But even in these parts many small novelties are included. And perhaps this paper can serve as a useful guide for those who are beginning to learn about quantum dynamical semigroups and Dirichlet forms.
I expect that most of the new results in this paper have suitable extensions to the setting of infinite-dimensional C*-algebras, for which one will need to work with unbounded operators (mostly derivations). Many new phenomena will then arise. There is a very large literature containing many techniques for dealing with that setting. (See [6] and the references it contains.) But I do not plan to carry out some of these extensions myself, unless I happen to find later that they are important for my study of quantum metric spaces.

1. Differential calculi with quasi-correspondences

In this section we develop the aspects of non-commutative Riemannian metrics that do not depend on the positivity of the A-valued inner product. So we assume here only that A is a finite-dimensional unital *-algebra over C. In Section 2 we will assume that A is a finite-dimensional C*-algebra, so that positivity has meaning. Finite-dimensionality is needed in only a few crucial places, and we will usually point out these places. We recall [13] that by a first-order differential calculus over A one means a pair (Ω, ∂) consisting of an A-bimodule Ω (thought of as an analog of a space of differential one-forms) and a derivation ∂ from A into Ω. Thus ∂ is a linear map that satisfies the Leibniz identity

∂(ab) = (∂a)b + a(∂b)

for all a, b ∈ A. Note that this implies that ∂(1_A) = 0, where 1_A denotes the identity element of A. It will be important for us to make the usual requirement that the sub-bimodule of Ω generated by the range of ∂ is all of Ω, unless the contrary is explicitly stated.
A Riemannian metric on a differentiable manifold is usually specified by giving an inner product on the tangent space at each point of the manifold, but one can equally well use the cotangent space instead of the tangent space. Then the Riemannian metric gives an inner product on the space of differential one-forms that has values in the algebra of smooth functions on the manifold. Thus, in generalization of Riemannian metrics, we want to consider A-valued sesquilinear forms on Ω that are compatible with the right A-module structure on Ω. (Since we work over C, we actually have an analog of the complexified cotangent bundle. One might well want to introduce a "real" structure, but we will not discuss that possibility.) We will require that the left action of A on Ω be a *-action with respect to the inner product. We will not assume any positivity for our inner products until the next section. When positivity is present, it is usual [4] to refer to such a bimodule with A-valued inner product (no derivation involved) as a "correspondence". In the present more general setting we will use the term "quasi-correspondence". We will usually denote such an inner product by ⟨·, ·⟩_A. Thus:

Definition 1.1. Let A be a unital *-algebra. By a (right) pre-quasi-correspondence over A we mean an A-bimodule Ω that is equipped with an A-valued sesquilinear form ⟨·, ·⟩_A that satisfies

⟨ω, ω′a⟩_A = ⟨ω, ω′⟩_A a,   (⟨ω, ω′⟩_A)* = ⟨ω′, ω⟩_A,   and   ⟨aω, ω′⟩_A = ⟨ω, a*ω′⟩_A

for all ω, ω′ ∈ Ω and a ∈ A. We will refer to the sesquilinear form as a pre-inner-product. The null-space, N, of the sesquilinear form is defined to be N = {ω : ⟨ω, ω′⟩_A = 0 for all ω′ ∈ Ω}. If N = {0} then we say that the sesquilinear form is non-degenerate, and we call it an inner product. We then call (Ω, ⟨·, ·⟩_A) a (right) quasi-correspondence over A. Left (pre-)quasi-correspondences are defined analogously, with the A-valued inner product linear in the first variable.
As is commonly done, we will usually work with right (pre-)quasicorrespondences, and will usually omit the word "right".
It is easily seen that the null-space, N, of the pre-inner-product for a pre-quasi-correspondence is a sub-bimodule, so that Ω/N is an A-bimodule, to which the pre-inner-product drops to give an inner product, for which Ω/N is then a quasi-correspondence.

Definition 1.2. Let A be a unital *-algebra. By a calculus with pre-quasi-correspondence for A we mean a triple (Ω, ∂, ⟨·, ·⟩_A) such that (Ω, ∂) is a first-order differential calculus for A and (Ω, ⟨·, ·⟩_A) is a pre-quasi-correspondence over A. If the pre-inner-product is non-degenerate we will call this a calculus with quasi-correspondence.
When in the next section we impose positivity we will call this structure a "Riemannian (pre-)metric" for A. Notice that we make no assumption about how ∂ is related to ⟨·, ·⟩_A. Later we will discuss some relations that one might want to require.
For a closely related definition of a Riemannian metric for a *-algebra, coming from quite different motivation, see [2], and see [24] for its application to graphs. For another interesting, and very new, definition of a Riemannian metric, in the context of non-commutative tori, see [36].
We remark that Definition 1.1 is very close to section 3 of [39]. Sauvageot [38] prefers to view Ω as an analog of the tangent bundle, and ∂ as the gradient, and he does not introduce an A-valued inner product.
If (Ω, ∂, ⟨·, ·⟩_A) is a calculus with pre-quasi-correspondence, and if N is the null-space of the pre-inner-product as above, then it is easily verified that (Ω/N, ∂, ⟨·, ·⟩_A) is a calculus with quasi-correspondence, where here we do not change the notation for the derivation and the inner product, but they are defined in the evident way.
We now give three simple but very pertinent examples.

Example 1.3. Let X be a finite set, and let A = C(X), the algebra of C-valued functions on X with pointwise multiplication and with complex-conjugation as involution. We define a first-order differential calculus for A in a familiar way. Let Z = {(x, y) ∈ X × X : x ≠ y}, and let Ω = C(Z). Then Ω is an A-bimodule for the operations (fω)(x, y) = f(x)ω(x, y) and (ωf)(x, y) = ω(x, y)f(y) for f ∈ A and ω ∈ Ω. We define a derivation ∂ from A into Ω by

(∂f)(x, y) = f(x) − f(y)

for all (x, y) ∈ Z. We find it helpful to view this in the following heuristic way. For a given point y ∈ X the directions in which a function f ∈ A can be "differentiated" are given by the points of X \ {y}. These points form a basis for the "tangent space" at y, and the "tangent space" at y can be considered to be C(X \ {y}). The differential of f at y is then given by the function x ↦ (∂f)(x, y). It is easily verified that the sub-bimodule generated by the range of ∂ is all of Ω.
To define an A-valued pre-inner-product on Ω we choose an R-valued function, c, on Z. Eventually c will provide the conductances for a resistance network, but at this stage we do not assume that c(x, y) = c(y, x), nor that c be non-negative. We write c_xy for c(x, y), and for ω, ω′ ∈ Ω we set

⟨ω, ω′⟩_A(y) = Σ_{x : x≠y} \overline{ω(x, y)} ω′(x, y) c_xy .
For fixed y this can be viewed as giving a pre-inner-product on the cotangent space at y. It is easily verified that with this pre-innerproduct Ω becomes a pre-quasi-correspondence over A, and in this way (Ω, ∂, ·, · A ) is a calculus with pre-quasi-correspondence for A.
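Since A = C(X) and Ω = C(Z) here are finite-dimensional, the structure of Example 1.3 can be checked numerically. The following sketch is not from the paper: the size of X, the random conductances c, and all variable names are illustrative assumptions. It represents f ∈ A as a vector and ω ∈ Ω as a matrix (with the diagonal unused), and verifies the Leibniz identity for ∂ and the right A-linearity of the pre-inner-product.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                    # number of points of the finite set X
c = rng.random((n, n))                   # conductances c_xy (diagonal unused)

def d(f):
    """(∂f)(x, y) = f(x) - f(y), an element of C(Z)."""
    return f[:, None] - f[None, :]

def left(f, w):
    """Left action: (f·ω)(x, y) = f(x) ω(x, y)."""
    return f[:, None] * w

def right(w, f):
    """Right action: (ω·f)(x, y) = ω(x, y) f(y)."""
    return w * f[None, :]

def ip(w1, w2):
    """⟨ω, ω'⟩_A(y) = Σ_{x≠y} conj(ω(x,y)) ω'(x,y) c_xy."""
    prod = np.conj(w1) * w2 * c
    np.fill_diagonal(prod, 0.0)          # sum only over x ≠ y
    return prod.sum(axis=0)

f = rng.random(n) + 1j * rng.random(n)
g = rng.random(n) + 1j * rng.random(n)

# Leibniz identity: ∂(fg) = (∂f)·g + f·(∂g)
leibniz_ok = np.allclose(d(f * g), right(d(f), g) + left(f, d(g)))
# right A-linearity: ⟨ω, ω'·f⟩_A = ⟨ω, ω'⟩_A f
linear_ok = np.allclose(ip(d(f), right(d(g), f)), ip(d(f), d(g)) * f)
```

Both checks hold exactly (up to floating-point error), for any choice of c, which illustrates that no symmetry or positivity of the conductances is needed at this stage.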
Example 1.4. Let A be any non-commutative unital finite-dimensional *-algebra. Let Ω̃ be A viewed as a bimodule over itself. Choose any element v of A that is not in the center of A, and define a derivation of A into Ω̃ by ∂(a) = [v, a] = va − av for all a ∈ A. Define an A-valued pre-inner-product on Ω̃ by ⟨a, b⟩_A = a*b for all a, b ∈ A. Let Ω be the sub-bimodule of Ω̃ generated by the range of ∂. It is easily verified that with the restriction of the above pre-inner-product, Ω becomes a pre-quasi-correspondence over A, and in this way (Ω, ∂, ⟨·, ·⟩_A) is a calculus with pre-quasi-correspondence for A.
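A quick numerical sanity check of Example 1.4 with A = M_3(C) and ∂a = [v, a]. The pre-inner-product is taken to be ⟨a, b⟩_A = a*b, an assumption consistent with the form Γ_v(a, b) = [v, a]*[v, b] used later; the random matrices are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
M = lambda: rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
dag = lambda x: x.conj().T

v = M()                                  # an element not in the center of M_3(C)
d = lambda a: v @ a - a @ v              # the derivation ∂a = [v, a]

a, b = M(), M()
# Leibniz identity: ∂(ab) = (∂a)b + a(∂b)
leibniz_ok = np.allclose(d(a @ b), d(a) @ b + a @ d(b))

# Γ(a, b) = ⟨∂a, ∂b⟩_A = [v, a]* [v, b]; in particular Γ(a, a) is positive
Gamma = lambda x, y: dag(d(x)) @ d(y)
gamma_pos = np.linalg.eigvalsh(Gamma(a, a)).min() > -1e-9
```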
We remark that if Ξ is any quasi-correspondence over a unital *-algebra A, then any fixed element ξ ∈ Ξ determines an inner derivation ∂_ξ, defined by ∂_ξ(a) = aξ − ξa for all a ∈ A. If we let Ω_ξ be the sub-bimodule of Ξ generated by the range of ∂_ξ, and restrict to Ω_ξ the A-valued inner-product on Ξ, then we obtain a calculus with quasi-correspondence.

Example 1.5. This next example is somewhat a combination of the two above. Let A be any possibly non-commutative unital finite-dimensional *-algebra. Let G be a finite group, and let α be an action of G on A by *-automorphisms. Let Ξ = C(G, A), the vector space of A-valued functions on G. Define a right action of A on Ξ by (ξa)(x) = ξ(x)α_x(a) for ξ ∈ Ξ, a ∈ A and x ∈ G. Let c be a fixed R-valued function on G, and define an A-valued pre-inner-product on Ξ by

⟨ξ, η⟩_A = Σ_{x∈G} α_x^{-1}(ξ(x)*η(x)) c_x .

We define a left action of A, denoted by a · ξ, by (a · ξ)(x) = aξ(x). With these definitions Ξ is an A-pre-quasi-correspondence.
Let ω_0 ∈ Ξ be defined by ω_0(x) = 1_A for all x ∈ G. The inner derivation, ∂, determined by ω_0 is then given by

(∂a)(x) = a − α_x(a)

for all x ∈ G. We let Ω be the sub-bimodule of Ξ generated by the range of ∂, and we restrict to Ω the A-valued pre-inner-product on Ξ. Then (Ω, ∂, ⟨·, ·⟩_A) is a calculus with pre-quasi-correspondence for A.
Notice that the structure of Ω depends strongly on the choice of α. If α is the trivial action, then Ω = {0}. Notice also that for any choice of α, if ω is in Ω then ω(e) = 0, where e is the identity element of G.
Thus we do not need a value for c_e. Then a natural choice for c is the inverse of a length function on G, or its square, left undefined at e. This is related to the seminorms prominently used in [30,32].

Example 1.6. This example is related to the previous two examples. Let B be any possibly non-commutative unital finite-dimensional *-algebra, and let A be a unital *-subalgebra of B. We can in the evident way view B as a bimodule over A, and of course A can be viewed as a bimodule over itself. Suppose that E is a conditional expectation from B onto A, that is, an A-bimodule projection from B onto A that preserves the involution. On B we define a (right) pre-inner-product by ⟨b, b′⟩_A = E(b*b′). It is easily verified that (B, ⟨·, ·⟩_A) is a pre-quasi-correspondence over A. Then, as commented just before the previous example, any element of B will define an inner derivation from A into B. If we let Ω be the sub-A-bimodule of B generated by the range of this derivation, and if we restrict to Ω the above pre-inner-product on B, we obtain a calculus with pre-quasi-correspondence.
Let (Ω, ∂, ⟨·, ·⟩_A) be a calculus with pre-quasi-correspondence over some unital *-algebra A. Notice that the Leibniz identity implies that the right sub-module of Ω generated by the range of ∂ is in fact a sub-bimodule, and so by our assumptions it is all of Ω. That is, every element of Ω can be expressed as a finite sum of terms of the form (∂a)b for a, b ∈ A. But

⟨(∂a)b, (∂c)d⟩_A = b*⟨∂a, ∂c⟩_A d .

Thus the pre-inner-product is entirely determined by the A-valued form Γ defined on A by

Γ(a, b) = ⟨∂a, ∂b⟩_A .

Notice that Γ(1_A, a) = 0 for all a ∈ A. The form Γ is C-sesquilinear, and A-symmetric in the sense that

(Γ(b, c))* = Γ(c, b)

for b, c ∈ A, but it has no properties with respect to the right A-module structure. However Γ does have an important property reflecting the *-representation condition of the correspondence. For a, b, c ∈ A we have a(∂b) = ∂(ab) − (∂a)b, and hence

⟨∂(ab) − (∂a)b, ∂c⟩_A = ⟨a(∂b), ∂c⟩_A = ⟨∂b, a*(∂c)⟩_A = ⟨∂b, ∂(a*c) − (∂a*)c⟩_A .

That is,

(1.8)   Γ(ab, c) − b*Γ(a, c) = Γ(b, a*c) − Γ(b, a*)c .

In the setting of Dirichlet forms and quantum semigroups [7] the corresponding form Γ is often called a "carré-du-champ" (or sometimes a "gradient form"). Once we require positivity, we will use this terminology. So at this point we set:

Definition 1.7. Let A be a unital *-algebra over C. By a (right) quasi-carré-du-champ (qCdC) for A we mean an A-symmetric A-valued C-sesquilinear form Γ, linear in the second variable, that satisfies both the condition Γ(1_A, a) = 0 for all a ∈ A, and also the *-representation condition

Γ(ab, c) − b*Γ(a, c) = Γ(b, a*c) − Γ(b, a*)c

for all a, b, c ∈ A. A left qCdC is defined similarly, but it is linear in the first variable, and its *-representation condition is given by

Γ(ab, c) − aΓ(b, c) = Γ(a, cb*) − Γ(a, b*)c* .

If Γ comes from a first-order differential calculus as above, then we will say that Γ is the qCdC for the first-order differential calculus with quasi-correspondence.
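The *-representation condition can be checked numerically in the commutator calculus of Example 1.4, where Γ(a, b) = [v, a]*[v, b]. The sketch below (random 3×3 matrices, purely illustrative) verifies the identity Γ(ab, c) − b*Γ(a, c) = Γ(b, a*c) − Γ(b, a*)c together with A-symmetry.

```python
import numpy as np

rng = np.random.default_rng(2)
M = lambda: rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
dag = lambda x: x.conj().T

v = M()
comm = lambda x, y: x @ y - y @ x
Gamma = lambda x, y: dag(comm(v, x)) @ comm(v, y)   # Γ(x, y) = [v, x]* [v, y]

a, b, c = M(), M(), M()
# *-representation condition: Γ(ab, c) − b*Γ(a, c) = Γ(b, a*c) − Γ(b, a*)c
lhs = Gamma(a @ b, c) - dag(b) @ Gamma(a, c)
rhs = Gamma(b, dag(a) @ c) - Gamma(b, dag(a)) @ c
star_rep_ok = np.allclose(lhs, rhs)
# A-symmetry: (Γ(b, c))* = Γ(c, b)
symm_ok = np.allclose(dag(Gamma(b, c)), Gamma(c, b))
```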
We remark that if Γ(a, a) is self-adjoint for all a ∈ A, then the usual argument shows that Γ is A-symmetric.

Example 1.9. The qCdC for Example 1.3 is given by

Γ(f, g)(y) = Σ_{x≠y} \overline{(f(x) − f(y))} (g(x) − g(y)) c_xy ,

while that for Example 1.4 is given by

Γ_v(a, b) = [v, a]*[v, b] .

Proposition 1.10. Let A be a unital *-algebra over C, and let Γ be a qCdC for A. Then there is a calculus with quasi-correspondence for A whose qCdC is Γ.
Proof. Let Ω̃ = A ⊗ A, with its usual A-bimodule structure, given on elementary tensors by c(a ⊗ b)d = ca ⊗ bd, and define ∂_u from A into Ω̃ by ∂_u(a) = a ⊗ 1_A − 1_A ⊗ a. (This is the negative of the usual convention, but seems to be more appropriate when using right quasi-correspondences, and fits well with Example 1.3.) Let Ω_u be the sub-bimodule of Ω̃ generated by the range of ∂_u. It is well-known and easily seen to be the kernel of the bimodule map m : A ⊗ A → A determined by m(a ⊗ b) = ab. Thus Ω_u consists of finite sums Σ a_j ⊗ b_j such that Σ a_j b_j = 0. (We remark that when A = C(X) for a finite set X then Ω_u is exactly the C(Z) of Example 1.3.) Then (Ω_u, ∂_u) is universal [13] in the sense that if (Ω, ∂) is any other first-order differential calculus for A, then the mapping Φ that sends (∂_u a)b to (∂a)b extends to a surjective bimodule map from Ω_u onto Ω that intertwines ∂_u and ∂.

Now let Γ be a qCdC for A, and define B_Γ on elementary tensors by B_Γ(a ⊗ b, c ⊗ d) = b*Γ(a, c)d. It extends to an A-valued linear form on A⊗4, which we can view as a bilinear form on A ⊗ A. From B_Γ we can then define an A-valued pre-inner-product on A ⊗ A, denoted by ⟨·, ·⟩^Γ_A. It is given on elementary tensors by

⟨a ⊗ b, c ⊗ d⟩^Γ_A = b*Γ(a, c)d .

We can then restrict this pre-inner-product to Ω_u. With this definition it is clear that we have the following properties:

⟨ω, ω′a⟩^Γ_A = ⟨ω, ω′⟩^Γ_A a,   (⟨ω, ω′⟩^Γ_A)* = ⟨ω′, ω⟩^Γ_A,   ⟨aω, ω′⟩^Γ_A = ⟨ω, a*ω′⟩^Γ_A

for all ω, ω′ ∈ Ω_u and a, b ∈ A. To obtain the last relation, notice that for a, b ∈ A we have a(∂_u b) = ∂_u(ab) − (∂_u a)b, to which we can apply the *-representation condition (equation 1.8) on Γ. Thus we see that (Ω_u, ∂_u, ⟨·, ·⟩^Γ_A) is a calculus with pre-quasi-correspondence. Let N be the null-space for the pre-inner-product, and let Ω_Γ = Ω_u/N, to which the pre-inner-product drops as an inner product. Let ∂_Γ be the composition of ∂_u with the quotient map to Ω_Γ. Then (Ω_Γ, ∂_Γ, ⟨·, ·⟩^Γ_A) is a calculus with quasi-correspondence whose qCdC is Γ, as desired.
There is an evident notion of isomorphism between any two calculi-with-quasi-correspondence over A. It is easy to verify that:

Theorem 1.11. The above construction gives a natural bijection between qCdC's over A and isomorphism classes of calculi-with-quasi-correspondence for A.

Example 1.12. We now describe an important construction of qCdC's which we will use later. We phrase this construction in terms of the beginnings of Hochschild cohomology (e.g. page 187 of [8]), but it is not clear to me whether it is useful to do this. Let N be any operator on A with the property that N(1_A) = 0. Let N̂ be the C-trilinear map (a, b, c) ↦ aN(b)c, extended to give an A-bimodule map from A⊗A⊗A to A. We can view N̂ as a Hochschild 2-cochain for A with coefficients in the bimodule A. (See equation 8.47 of [13].) Then its coboundary, δN̂, is the bimodule map from A⊗4 to A defined on elementary tensors by

(δN̂)(a ⊗ b ⊗ c ⊗ d) = a(N(b)c + bN(c) − N(bc))d .

We can turn this into an A-valued pre-inner-product on Ω̃ = A ⊗ A, denoted by ⟨·, ·⟩^N_A, defined on elementary tensors by

⟨a ⊗ b, c ⊗ d⟩^N_A = b*(N(a*)c + a*N(c) − N(a*c))d .

We can then restrict this inner product to Ω_u. We can hope that this gives a pre-quasi-correspondence for (Ω_u, ∂_u). It is clear that then its qCdC would be given by

Γ_N(a, b) = N(a*)b + a*N(b) − N(a*b) .

Notice that Γ_N measures the extent to which N fails to be a derivation on A. In particular, two N's that differ by a derivation will give the same Γ_N. (We will see later that it can be convenient to include a factor of 1/2 in the definition of Γ_N.) We now seek to determine when Γ_N is indeed a qCdC. It is clear that Γ_N(1_A, a) = 0 for all a ∈ A, and that Γ_N satisfies the *-representation condition. However, Γ_N will not in general be symmetric. Notice that for a, b ∈ A we have

(Γ_N(a, b))* = Γ_{N^♯}(b, a) ,

where we define N^♯ by N^♯(c) = (N(c*))* for c ∈ A. Thus Γ_N will be symmetric exactly if Γ_N = Γ_{N^♯}, and so exactly if N − N^♯ is a derivation of A. We have thus obtained:

Theorem 1.13. Let N be an operator on A such that N(1_A) = 0, and define Γ_N as above. Then Γ_N is a qCdC for A if and only if N − N^♯ is a derivation of A.

Example 1.14.
In anticipation of what will come in Example 6.9, let us consider the case, associated to Examples 1.4 and 1.9, in which we fix v ∈ A and define an operator N_v by

N_v(a) = [v*, [v, a]]

for all a ∈ A. We will show that Γ_{N_v} is a qCdC. Notice that N_v(1_A) = 0 and that N_v^♯ = N_{v*}. Then according to Theorem 1.13, in order for Γ_{N_v} to be symmetric we must show that N_v − N_v^♯ is a derivation of A. But by the Jacobi identity N_v − N_{v*} is the inner derivation a ↦ [[v*, v], a], and so Γ_{N_v} is a qCdC. A direct calculation shows that Γ_{N_v} = Γ_v + Γ_{v*}, with Γ_v as in Example 1.9. We see that Γ_{N_v} = 2Γ_v exactly if Γ_{v*} = Γ_v (which is one example of why a factor of 1/2 would be convenient in the definition of Γ_N). If A has the property that a*a = 0 implies that a = 0, as happens for C*-algebras, then Γ_{v*} = Γ_v implies that [v*, v] = 0, that is, v is "normal". Since non-normal elements are common in C*-algebras, the property Γ_{N_v} = 2Γ_v can easily fail.
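The passage from an operator N to the form Γ_N(a, b) = N(a*)b + a*N(b) − N(a*b) can be explored numerically. The sketch below is illustrative: it takes N(a) = [v*, [v, a]], one natural reading of the operator N_v discussed above, and checks that N(1) = 0, that Γ_N is A-symmetric, and that Γ_N agrees with Γ_v + Γ_{v*}.

```python
import numpy as np

rng = np.random.default_rng(3)
M = lambda: rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
dag = lambda x: x.conj().T
comm = lambda x, y: x @ y - y @ x

v = M()
N = lambda a: comm(dag(v), comm(v, a))            # assumed form N_v(a) = [v*, [v, a]]
GammaN = lambda a, b: N(dag(a)) @ b + dag(a) @ N(b) - N(dag(a) @ b)
Gamma = lambda w, a, b: dag(comm(w, a)) @ comm(w, b)   # Γ_w(a, b) = [w, a]*[w, b]

a, b = M(), M()
unit_ok = np.allclose(N(np.eye(3)), 0)            # N(1_A) = 0
symm_ok = np.allclose(dag(GammaN(a, b)), GammaN(b, a))
sum_ok = np.allclose(GammaN(a, b), Gamma(v, a, b) + Gamma(dag(v), a, b))
```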

2. Non-commutative Riemannian metrics
We now assume that A is a (finite-dimensional) C*-algebra, so that it is meaningful to consider positive elements of A. Accordingly, we will now require that the A-valued pre-inner-products that we consider on Ω are non-negative, that is, that ⟨ω, ω⟩_A ≥ 0 for all ω. Thus as right A-modules our Ω's will be right pre-Hilbert A-modules, as defined for example in section II.7.1 of [4]. (See also definition 2.8 in [29].) We remark that because positive elements are self-adjoint, this implies the symmetry of the A-valued pre-inner-products. By a pre-correspondence we will mean a pre-quasi-correspondence whose pre-inner-product is non-negative. A correspondence is then a pre-correspondence whose pre-inner-product is definite. (See II.7.4.4 of [4].)

Definition 2.1. Let A be a (finite-dimensional) C*-algebra. By a (right) Riemannian pre-metric for A we mean a calculus with pre-quasi-correspondence, (Ω, ∂, ⟨·, ·⟩_A), over A, whose pre-inner-product is non-negative. If the pre-inner-product is definite, then we will call (Ω, ∂, ⟨·, ·⟩_A) a (right) Riemannian metric for A.
We remark that it would be natural to require also that if ∂a = 0 then a ∈ C1 A , but it will be more convenient for us not to require this property, and to view the failure of this property to mean that (A, ∂) is not "metrically connected".
For positive A-valued pre-inner-products there is a corresponding Cauchy–Schwarz inequality. See proposition 2.9 of [29], or lemma 2.5 of [28], or proposition II.7.1.4 of [4]. It states that for any ω, ω′ ∈ Ω we have

(⟨ω, ω′⟩_A)* ⟨ω, ω′⟩_A ≤ ‖⟨ω, ω⟩_A‖ ⟨ω′, ω′⟩_A

with respect to the partial order on positive elements of A. From this inequality one sees by the usual argument that the null-space, N, of the pre-inner-product is a right A-submodule of Ω, and in fact is an A-sub-bimodule because the left action of A is a *-representation. Then the pre-inner-product drops to an A-valued inner product on Ω/N. This inner product determines a norm, ω ↦ ‖⟨ω, ω⟩_A‖^{1/2}, on Ω/N, and since in our finite-dimensional situation Ω/N is complete for this norm, Ω/N is a right Hilbert C*-module over A, as defined in section II.7.1 of [4]. The left action then makes Ω/N into a correspondence over A exactly as defined for C*-algebras. (See II.7.4.4 of [4].) We will denote the composition of ∂ with the quotient map from Ω to Ω/N again by ∂. Then (Ω/N, ∂, ⟨·, ·⟩_A) will be a Riemannian metric for A. In this way we can always pass from a Riemannian pre-metric to a Riemannian metric.

To make the examples of Section 1 into Riemannian (pre-)metrics, in Example 1.3 we must assume that the function c takes non-negative values, in Examples 1.4 and 1.5 we must assume that A is a unital C*-algebra, and in Example 1.6 we must assume that A and B are unital C*-algebras and that the conditional expectation E is non-negative, so that it is a conditional expectation in the sense used for C*-algebras [40].
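The module Cauchy–Schwarz inequality can be illustrated numerically. With Ω = M_3(C) viewed as a bimodule over A = M_3(C) and ⟨ω, ω′⟩_A = ω*ω′ (an illustrative choice of correspondence, not taken from the paper), the inequality says that the difference of the two sides is a positive matrix.

```python
import numpy as np

rng = np.random.default_rng(4)
M = lambda: rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
dag = lambda x: x.conj().T

ip = lambda x, y: dag(x) @ y                        # ⟨x, y⟩_A = x* y
w1, w2 = M(), M()

# ⟨ω, ω′⟩_A* ⟨ω, ω′⟩_A ≤ ‖⟨ω, ω⟩_A‖ ⟨ω′, ω′⟩_A
lhs = dag(ip(w1, w2)) @ ip(w1, w2)
rhs = np.linalg.norm(ip(w1, w1), 2) * ip(w2, w2)    # operator norm ‖⟨ω, ω⟩_A‖
gap_pos = np.linalg.eigvalsh(rhs - lhs).min() > -1e-8
```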
We now give a further example.
Example 2.4. Let (A, H, D) be a finite-dimensional spectral triple [13,8], that is, A is a finite-dimensional C*-algebra, H is a finite-dimensional Hilbert space on which A is represented, and D is a self-adjoint operator on H. For ease of discussion we assume that the representation of A is faithful, and so we just take A to be a *-subalgebra of B(H), equipped with a conditional expectation from B(H) onto A (see [40]). Then as in Example 1.6 we obtain a Riemannian metric for A whose bimodule is the A-sub-bimodule of B(H) generated by the range of the derivation a ↦ [D, a]. Thus in our finite-dimensional setting every spectral triple has a canonically associated Riemannian metric. Note that different D's on H can define the same derivation, and thus the same Riemannian metric. More generally, different spectral triples for a given A can determine isomorphic Riemannian metrics. (For a related infinite-dimensional version see theorem 2.9 of [12]. I thank D. Goswami for bringing this theorem to my attention.)

Suppose now that (Ω, ∂, ⟨·, ·⟩_A) is a Riemannian pre-metric for a finite-dimensional C*-algebra A. Then in particular, it is a calculus with pre-correspondence. Let Γ be its qCdC as discussed in the previous section. Now every element ω of Ω can be expressed as a finite sum ω = Σ_j (∂a_j)b_j, and then

0 ≤ ⟨ω, ω⟩_A = Σ_{j,k} b_j* Γ(a_j, a_k) b_k .

This implies exactly that the matrix {Γ(a_j, a_k)} is a positive element of the C*-algebra M_n(A). The fact that this holds for all n and all choices of the a_j's is exactly what is meant by saying that Γ is "completely positive".
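Complete positivity can be seen concretely in the commutator example Γ(a, b) = [v, a]*[v, b]: the block matrix {Γ(a_j, a_k)} is then a Gram-type matrix and hence positive in M_n(A). An illustrative numerical check (random matrices, assumed setup):

```python
import numpy as np

rng = np.random.default_rng(5)
M = lambda: rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
dag = lambda x: x.conj().T
comm = lambda x, y: x @ y - y @ x

v = M()
Gamma = lambda a, b: dag(comm(v, a)) @ comm(v, b)   # Γ(a, b) = [v, a]*[v, b]

n = 4
As = [M() for _ in range(n)]
# the matrix {Γ(a_j, a_k)} as an element of M_n(A) = M_{3n}(C)
block = np.block([[Gamma(As[j], As[k]) for k in range(n)] for j in range(n)])
block_pos = np.linalg.eigvalsh(block).min() > -1e-8
```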
Definition 2.5. Let A be a finite-dimensional C*-algebra. By a (right) carré-du-champ (CdC) for A we mean a qCdC for A that is completely positive. (No definiteness is required.) Notice that since positive elements of A are self-adjoint, a CdC will automatically be symmetric, as mentioned before Example 1.9. The sum of two CdC's is again a CdC, and a positive scalar multiple of a CdC is again a CdC, so the CdC's form a cone.
From this definition and Theorem 1.13 we immediately obtain:

Proposition 2.6. Let N be an operator on A such that N(1_A) = 0. As in Theorem 1.13 define Γ_N by

Γ_N(a, b) = N(a*)b + a*N(b) − N(a*b) .

Then Γ_N is a CdC if and only if it is completely positive and N − N^♯ is a derivation of A.
We remark that in our finite-dimensional situation all derivations of A are inner. A relatively simple proof of this can be extracted from exercise 8.7.53 of [19].
Example 2.7. (Following Lindblad [23].) Let {Φ_t} be a quantum dynamical semigroup on A, that is, for every t ∈ R≥0 the operator Φ_t on A is completely positive and of norm no greater than 1, and t ↦ Φ_t is a continuous semigroup homomorphism from R≥0 into the algebra of bounded operators on A (with Φ_0 the identity operator on A). Assume further that this semigroup is "conservative" in the sense that Φ_t(1_A) = 1_A for all t. Especially in section 6 of [18] (and references there) the attitude is taken that such semigroups are a good substitute for metrics in the non-commutative setting. It is not difficult to show that, because we assume that A is finite-dimensional, the function t ↦ Φ_t is differentiable. Let −∆ denote its derivative at 0, so that Φ_t = exp(−t∆) for all t ∈ R≥0. Because each Φ_t is unital and completely positive, we have the basic inequality (II.6.9.14 of [4])

Φ_t(a*a) − Φ_t(a)*Φ_t(a) ≥ 0

for all a ∈ A. When we differentiate this inequality at t = 0, noting that the left-hand side has value 0 for t = 0, we obtain −∆(a*a) + ∆(a*)a + a*∆(a) ≥ 0.
As in Example 1.12, set

Γ_∆(a, b) = ∆(a*)b + a*∆(b) − ∆(a*b)

for all a, b ∈ A. We see that Γ_∆ is a positive A-valued form. Because Φ_t is positive for all t we have that ∆(a*) = (∆(a))* for all a ∈ A, so that ∆^♯ = ∆. Because each Φ_t is completely positive, all of the above observations apply equally well to the semigroup I_n ⊗ Φ_t acting on M_n ⊗ A, whose generator is I_n ⊗ ∆. It follows that Γ_∆ is completely positive. It then follows from Proposition 2.6 that Γ_∆ is a CdC.
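For a generator of Lindblad form the differentiated inequality can be checked directly. The sketch below is illustrative; it takes ∆(a) = (1/2)(v*va + av*v) − v*av, a standard conservative Lindblad-form generator (an assumed example, not the general case treated above), and verifies ∆(1) = 0, ∆^♯ = ∆, and Γ_∆(a, a) ≥ 0.

```python
import numpy as np

rng = np.random.default_rng(6)
M = lambda: rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
dag = lambda x: x.conj().T

v = M()
def Delta(a):
    """Assumed Lindblad-form generator: ∆(a) = (v*v a + a v*v)/2 − v* a v."""
    k = dag(v) @ v
    return 0.5 * (k @ a + a @ k) - dag(v) @ a @ v

a = M()
unital_ok = np.allclose(Delta(np.eye(3)), 0)              # conservative: ∆(1_A) = 0
sharp_ok = np.allclose(dag(Delta(dag(a))), Delta(a))      # ∆^♯ = ∆
# Γ_∆(a, a) = ∆(a*)a + a*∆(a) − ∆(a*a) should be a positive matrix
GammaD = Delta(dag(a)) @ a + dag(a) @ Delta(a) - Delta(dag(a) @ a)
pos_ok = np.linalg.eigvalsh(GammaD).min() > -1e-9
```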
We remark that Lindblad shows in [23] that, conversely, under the conditions obtained just above on ∆ it will always generate a quantum dynamical semigroup. For a very interesting recent account of some uses of quantum dynamical semigroups in quantum physics see [42], especially the "master equations" in the chapter "Open Systems". I thank Eleanor Rieffel for bringing this reference to my attention.
Theorem 2.8. For every CdC Γ for A there exists a Riemannian metric for A whose CdC is Γ.
Proof. Let Γ be a CdC for A. As in the proof of Proposition 1.10 we define on Ω̃ = A ⊗ A an A-valued sesquilinear form determined on elementary tensors by

⟨a ⊗ b, c ⊗ d⟩^Γ_A = b*Γ(a, c)d ,

and then we restrict it to Ω_u. The positivity of the resulting form follows immediately from the complete positivity of Γ. (This is closely related to the Stinespring construction [4].) The other properties of a correspondence then follow from the fact that Γ is a qCdC. As before, we set ∂_u(a) = a ⊗ 1_A − 1_A ⊗ a, and then, as for a qCdC, we have

Γ(a, b) = ⟨∂_u a, ∂_u b⟩^Γ_A .

We thus find that (Ω_u, ∂_u, ⟨·, ·⟩^Γ_A) is a Riemannian pre-metric for A. From this one can then pass to a Riemannian metric in the way described above.
In particular, from Example 2.7 we see that every quantum dynamical semigroup on a finite-dimensional C*-algebra has a canonically associated Riemannian metric.
In terms of the definition of a conditional expectation [4], the following proposition is easily proved by direct calculation. We remark that in our finite-dimensional case (in which our C*-algebras are abstract von Neumann algebras) there will always exist a conditional expectation from A onto B.
An interesting relationship between quantum dynamical semigroups and conditionally completely negative operators was first presented by Evans [11]. For a more recent account see section 1 of [26]. We recall that an operator N on a C*-algebra A is said to be "conditionally completely negative" if whenever a_1, ..., a_n, b_1, ..., b_n ∈ A satisfy Σ_j a_j b_j = 0, then Σ_{j,k} b_j* N(a_j* a_k) b_k ≤ 0. Within our setting, the relationship to Γ_N as defined in Theorem 1.13 is given by:

Proposition 2.10. Let N be an operator on a unital C*-algebra A with the property that N(1_A) = 0. Then Γ_N is completely positive if and only if N is conditionally completely negative.
Proof. Suppose that Σ_j a_j b_j = 0 for elements of A. Then

Σ_{j,k} b_j* Γ_N(a_j, a_k) b_k = Σ_j b_j* N(a_j*) (Σ_k a_k b_k) + (Σ_j a_j b_j)* Σ_k N(a_k) b_k − Σ_{j,k} b_j* N(a_j* a_k) b_k = −Σ_{j,k} b_j* N(a_j* a_k) b_k .

From this calculation it is clear that if Γ_N is completely positive then N is conditionally completely negative. Conversely, suppose that N is conditionally completely negative, and suppose that we are given elements a_1, ..., a_n, b_1, ..., b_n of A. Set b_{n+1} = −Σ_j a_j b_j and a_{n+1} = 1_A. Then Σ_{j=1}^{n+1} a_j b_j = 0, and so, on using the above calculation towards the end, we have

Σ_{j,k=1}^{n} b_j* Γ_N(a_j, a_k) b_k = Σ_{j,k=1}^{n+1} b_j* Γ_N(a_j, a_k) b_k = −Σ_{j,k=1}^{n+1} b_j* N(a_j* a_k) b_k ≥ 0 ,

where the first equality holds because Γ_N(a, 1_A) = 0 = Γ_N(1_A, a) for all a ∈ A.

When we combine this with Proposition 2.6 we obtain:

Theorem 2.11. Let N be an operator on A such that N(1_A) = 0. Then Γ_N is a CdC if and only if N is conditionally completely negative and N − N^♯ is a derivation of A.
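The equivalence in Proposition 2.10 can be probed numerically. The sketch below is illustrative throughout: it takes N of the Lindblad form used earlier, produces a_j, b_j with Σ a_j b_j = 0 by the trick from the proof (a_{n+1} = 1_A, b_{n+1} = −Σ a_j b_j), and checks that Σ b_j* N(a_j* a_k) b_k ≤ 0.

```python
import numpy as np

rng = np.random.default_rng(7)
M = lambda: rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
dag = lambda x: x.conj().T

v = M()
def N(a):
    """Assumed conditionally completely negative operator (Lindblad form)."""
    k = dag(v) @ v
    return 0.5 * (k @ a + a @ k) - dag(v) @ a @ v

As = [M() for _ in range(3)]
Bs = [M() for _ in range(3)]
As.append(np.eye(3))
Bs.append(-sum(a @ b for a, b in zip(As[:3], Bs[:3])))
# now Σ_j a_j b_j = 0, so Σ_{j,k} b_j* N(a_j* a_k) b_k should be ≤ 0
S = sum(dag(Bs[j]) @ N(dag(As[j]) @ As[k]) @ Bs[k]
        for j in range(4) for k in range(4))
ccn_ok = np.linalg.eigvalsh(S).max() < 1e-6
```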
Example 2.12. For any v, h ∈ A it is easily verified that the maps a → −v * av and a → ha + ah * are conditionally completely negative.
Since the map N_v defined by N_v(a) = (1/2)(v*va + av*v) − v*av is the sum of maps of these two forms (take h = (1/2)v*v), we see that N_v is conditionally completely negative. But it is clear that N_v(1_A) = 0 and N_v^♯ = N_v. Then from Theorem 2.11 it follows that Γ_{N_v} is a CdC. It is easily calculated that Γ_{N_v}(a, b) = [v, a]*[v, b]. More generally, for any given v_1, ..., v_m ∈ A, if we set N = Σ_j N_{v_j}, then Γ_N will be a CdC. This should be compared with theorem 2 of [23], which implies that for the case in which A = M_n(C) the above N's are the most general form of generators of conservative dynamical semigroups on A, up to a Hamiltonian. For examples from physics see equations 3.29 and 3.81 of [42] and the text around them. See also equation 3.154 of [27].

3. Energy forms, and Markov and Leibniz seminorms
In this section we assume that (Ω, ∂, ⟨·, ·⟩_A) is a Riemannian pre-metric for A. In the present setting this structure does not seem to lead to a canonical integration procedure, in contrast to the case of ordinary Riemannian manifolds. We will just need to choose an integration procedure, perhaps satisfying some compatibility requirement. We begin by considering a somewhat more general procedure.
Let D be a unital central subalgebra of A, and let E be a conditional expectation [4] from A onto D that satisfies the extra "tracial" condition that E(ab) = E(ba) for all a, b ∈ A. This means that E is a "D-valued trace" as defined in definition V1.24 of [40]. The two classes of examples that are most immediately evident are, first, that in which D is one-dimensional, τ is a tracial state on A, and E(a) = τ (a)1 A ; and, second, the class in which A is commutative, D = A, and E is the identity map on A. We will not discuss other classes in this paper.
We then define a D-valued pre-inner-product on Ω by

⟨ω, ω′⟩_D = E(⟨ω, ω′⟩_A) .

It is easy to check that this makes Ω into a pre-Hilbert D-module for the right action of D on Ω coming from the right action of A. Furthermore, the left action of A on Ω will again be a *-representation with respect to ⟨·, ·⟩_D. The additional feature that we gain is that the right anti-representation of A on Ω will be a *-anti-representation for ⟨·, ·⟩_D, because

⟨ωa, ω′⟩_D = E(a*⟨ω, ω′⟩_A) = E(⟨ω, ω′⟩_A a*) = ⟨ω, ω′a*⟩_D ,

where we have used the tracial condition on E.
We are now in position to apply a line of reasoning from 3.3.3 of the paper [39] by Sauvageot. Let a ∈ A with a = a * , and let B be the unital C*-subalgebra of A generated by a. Then B is commutative, and so we can identify B with C(S) where S is the maximal ideal space of B, which we can identify with the spectrum of a, a subset of R. Because B is commutative, its right action on Ω coming from the right action of A is a * -representation for the D-valued inner product. Let us denote this right * -representation of B by ρ and denote the left * -representation by λ. Because these two representations commute, they combine to give a representation, ρ ⊗ λ, of B ⊗ B = C(S × S) on Ω. Because D is central, the representation ρ ⊗ λ commutes with the right action of D on Ω, that is, it is a * -representation into the C*-algebra of endomorphisms of the right Hilbert D-module Ω. Now let p be a polynomial of one variable with real coefficients. Let p be the corresponding polynomial of two variables defined bỹ If p is the monomial p(t) = t n for some n, theñ p(s, t) = s n−1 + s n−2 t + · · · + t n−1 .
Thus p̃ for a general polynomial p will be a linear combination of such expressions. (The map p ↦ p̃ is actually a nice coproduct with a Leibniz property. See proposition 3.11 of [14].) Since S is a subset of R we can view p̃ as an element of C(S × S), and thus we can form the operator (λ ⊗ ρ)(p̃). Notice that ∂(a^n) = a^{n−1}(∂a) + a^{n−2}(∂a)a + · · · + (∂a)a^{n−1}, which we then recognize as being ((λ ⊗ ρ)(p̃))(∂a) when p is the monomial p(t) = t^n. It follows that for any polynomial p we have
⟨∂(p(a)), ∂(p(a))⟩_D ≤ ‖p̃‖_S² ⟨∂a, ∂a⟩_D,
where ‖p̃‖_S denotes the supremum norm of p̃ as an element of C(S × S). But this supremum norm is exactly the Lipschitz constant, Lip(p), of p with respect to the restriction to S of the metric from R. When we set ‖ω‖_D = ‖⟨ω, ω⟩_D‖^{1/2}, we see that we have
‖∂(p(a))‖_D ≤ Lip(p) ‖∂a‖_D.
Now let F be any R-valued Lipschitz function on S. It has an extension [41] to a Lipschitz function, F̃, on any interval containing S, such that ‖F̃‖_∞ = ‖F‖_∞ and Lip(F̃) = Lip(F). By the usual smoothing argument (e.g. as in the proof of proposition 2.2 of [30]), F̃ can be uniformly approximated on a neighborhood of the interval by functions with continuous first derivative, with no increase in the Lipschitz constant, and these functions can in turn be approximated by polynomials uniformly and uniformly in the first derivative. Thus F can be uniformly approximated by such polynomials, whose Lipschitz constants are no bigger than Lip(F). From what we found above for polynomials we thus obtain:

Proposition 3.1. With notation as above, for any a ∈ A such that a* = a and for any R-valued Lipschitz function F on σ(a) we have
‖∂(F(a))‖_D ≤ Lip(F) ‖∂a‖_D.

Notice that for the case in which D = C1_A the above result holds for any trace, not just for tracial states, as seen by scaling.
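The telescoping identity behind this argument, and the resulting Lipschitz bound, are easy to check numerically. The sketch below (with a hypothetical finite set S standing in for the spectrum σ(a), and the monomial p(t) = t^n) verifies that (s − t)·p̃(s, t) = s^n − t^n and that |p(s) − p(t)| ≤ ‖p̃‖_S |s − t| on S:

```python
import numpy as np

# Divided difference p~(s, t) = (p(s) - p(t))/(s - t) for the monomial p(t) = t^n,
# expanded as the telescoping sum s^(n-1) + s^(n-2) t + ... + t^(n-1).
def ptilde_monomial(s, t, n):
    return sum(s**(n - 1 - k) * t**k for k in range(n))

# Hypothetical finite "spectrum" S, chosen only for illustration.
S = np.array([-1.0, 0.5, 2.0, 3.0])
n = 4

# Telescoping identity (s - t) * p~(s, t) = s^n - t^n (trivially true on the diagonal).
ok = all(abs((s - t) * ptilde_monomial(s, t, n) - (s**n - t**n)) < 1e-9
         for s in S for t in S)

# The Lipschitz bound |p(s) - p(t)| <= (sup |p~|) * |s - t| on S.
sup_pt = max(abs(ptilde_monomial(s, t, n)) for s in S for t in S)
lip_ok = all(abs(s**n - t**n) <= sup_pt * abs(s - t) + 1e-9
             for s in S for t in S)
```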
We now concentrate on the tracial case.
Definition 3.2. Let Γ be a CdC for A, and let τ be a faithful trace on A. By the corresponding energy form for (Γ, τ) we mean the C-valued pre-inner product E_Γ on A defined by
E_Γ(a, b) = τ(Γ(a, b))
for all a, b ∈ A. If Γ is the CdC for a Riemannian pre-metric (Ω, ∂, ⟨·, ·⟩_A), then we will also say that E_Γ is the energy form for Ω. When no confusion is likely we will often just write E.
By normalizing the trace and then applying Proposition 3.1 we immediately obtain:

Corollary 3.3. Let notation be as just above. Then for any a ∈ A such that a* = a, and for any R-valued Lipschitz function F on σ(a), we have
E(F(a), F(a)) ≤ (Lip(F))² E(a, a).

We remark that this means that E is a Dirichlet form according to definition 2.3 of [1]. The definition of a Dirichlet form given before theorem 3.3 of [10] has a further requirement, but we will see shortly that this further requirement is also satisfied. (Note that the Dirichlet forms we deal with here are all "conservative", i.e. take value 0 if one of the entries is 1_A.) In view of the terminology often used for Dirichlet forms, we set:

Definition 3.4. We will say that a form that satisfies the property obtained in Corollary 3.3 is "Markov", or satisfies the "Markov property".

Definition 3.5. Let notation be as above. We define a seminorm, L_E, on A by L_E(a) = (E(a, a))^{1/2}.
We will call L E the energy norm on A (even though it is a seminorm).
Theorem 3.6. Let E be the energy form for a given CdC and faithful trace, and define L_E as above. Then L_E is indeed a seminorm, and it satisfies the Markov property that for any a ∈ A such that a* = a and for any R-valued Lipschitz function F on the spectrum σ(a) we have
L_E(F(a)) ≤ Lip(F) L_E(a).
Furthermore, L_E satisfies the Leibniz property that for any a, b ∈ A we have
L_E(ab) ≤ ‖a‖ L_E(b) + L_E(a) ‖b‖.

Proof. L_E is a seminorm because it is the seminorm for the ordinary pre-inner-product E. The Lipschitz property follows immediately from Corollary 3.3.
For the Leibniz property, if we have started with a CdC Γ, we apply Theorem 2.8 to obtain the corresponding Riemannian metric (Ω, ∂, ⟨·, ·⟩^Γ_A) for A. We use τ to define an ordinary inner product on Ω by ⟨ω, ω′⟩_τ = τ(⟨ω, ω′⟩_A), with corresponding norm ‖·‖_τ. This norm is an A-bimodule norm. This means that ‖ωa‖_τ ≤ ‖ω‖_τ ‖a‖ and ‖aω‖_τ ≤ ‖a‖ ‖ω‖_τ for all ω ∈ Ω and a ∈ A. The first of these inequalities uses the tracial property of τ to calculate that
‖ωa‖²_τ = τ(a*⟨ω, ω⟩_A a) = τ(⟨ω, ω⟩_A aa*) ≤ ‖aa*‖ τ(⟨ω, ω⟩_A) = ‖a‖² ‖ω‖²_τ.
For the second of these inequalities, note that ‖a‖²1_A − a*a ≥ 0 in A, and so has a positive square-root, say c, in A. Then
‖a‖² ⟨ω, ω⟩_A − ⟨aω, aω⟩_A = ⟨cω, cω⟩_A ≥ 0.
We can now apply τ to this to obtain the desired inequality. Now, because ∂ is a derivation,
L_E(ab) = ‖(∂a)b + a(∂b)‖_τ ≤ ‖∂a‖_τ ‖b‖ + ‖a‖ ‖∂b‖_τ = L_E(a)‖b‖ + ‖a‖L_E(b),
and so the Leibniz inequality for L_E follows immediately. (Notice the importance of the complete positivity of Γ for this proof, because it leads to the inner product on Ω.) We will see at the end of Section 9 that basically the completely Markov property implies the completely Leibniz property. The Markov property of standard deviation given in theorem 3.9 of [35] is a special case of Theorem 3.6 above, as can be seen from the discussion in Section 12.
The Leibniz property is important for the considerations in [34, 33], which is one reason that I began to study the topic of this paper. But there are many other seminorms that satisfy both the Markov and Leibniz conditions. As examples when A is commutative, let (X, ρ) be a finite metric space, let Z be defined as in Example 1.3, and define c on Z by c_xy = 1/ρ(x, y). Define L on A = C(X) by
L(f) = sup{|f(x) − f(y)| c_xy : (x, y) ∈ Z}.
This is the usual Lipschitz constant for f for the metric ρ. The metric from L on the state space S(A) of A, when restricted to X identified with the extreme points of S(A), is the original metric ρ. More generally, for any p ≥ 1 one can define an ℓ^p-version of this seminorm. It is easily seen that these seminorms satisfy both the Markov and Leibniz conditions. A further example is given in Example 10.3.
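The commutative example just described can be checked concretely. The sketch below computes the usual Lipschitz seminorm on a hypothetical finite metric space (random points on a line) and verifies the Leibniz inequality on sample functions, together with the Markov inequality for a 1-Lipschitz cutoff:

```python
import numpy as np

rng = np.random.default_rng(0)
npts = 5
# Hypothetical finite metric space: points on a line, metric rho(x, y) = |x - y|.
X = np.sort(rng.uniform(0, 1, npts))
rho = np.abs(X[:, None] - X[None, :])

def L(f):
    # The usual Lipschitz constant of f for the metric rho.
    i, j = np.triu_indices(npts, 1)
    return np.max(np.abs(f[i] - f[j]) / rho[i, j])

f = rng.standard_normal(npts)
g = rng.standard_normal(npts)

# Leibniz: L(fg) <= ||f||_inf L(g) + L(f) ||g||_inf.
leibniz = L(f * g) <= np.max(np.abs(f)) * L(g) + L(f) * np.max(np.abs(g)) + 1e-12

# Markov: for the 1-Lipschitz cutoff F(t) = clip(t, -1/2, 1/2), L(F o f) <= L(f).
markov = L(np.clip(f, -0.5, 0.5)) <= L(f) + 1e-12
```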
Note that L_E, as defined in Definition 3.5, need not be a *-seminorm, that is, we need not have L_E(a*) = L_E(a); this happens already in the setting of Example 1.14. But we can always obtain a Markov and Leibniz *-seminorm by taking the maximum, L_E(a) ∨ L_E(a*).
So far we have put no conditions on ∂. But there is an important condition that is used in various papers concerning Dirichlet forms, which does put a condition on ∂, involving the chosen trace as well as the CdC.

Definition 3.7. Let Γ be a CdC for A and let τ be a faithful trace on A. We say that Γ is τ-real if τ(Γ(a, b)) = τ(Γ(b*, a*)) for all a, b ∈ A. If Γ comes from a first-order differential calculus with correspondence, (Ω, ∂, ⟨·, ·⟩_A), then we will say that ∂ is τ-real, and L_E is then a *-seminorm.

Following the terminology given in [10], we accordingly set:

Definition 3.8. We say that E is real if E(a, b) = E(b*, a*) for all a, b ∈ A.
In view of the considerations above it would be reasonable to define a Riemannian metric to be a pair (Γ, τ ) consisting of a CdC and a trace, and even require τ -reality. But we do not adopt this definition.
We now relate the above definitions to the setting of Example 1.12.
Proposition 3.9. Let N be a positive operator on the Hilbert space L²(A, τ) with the property that N(1_A) = 0, and define Γ_N as in Example 1.12. Then Γ_N is τ-real, in the sense that the corresponding form E_N is real (Definition 3.8); the proof is a direct computation using the traciality of τ and the fact that τ(N(c)) = ⟨N(1_A), c⟩_τ = 0 for any c ∈ A.

Matricial seminorms

For each natural number n let Ω_n = M_n ⊗ Ω, with the evident left and right actions of M_n(A), and with M_n(A)-valued (pre-)inner product determined by
⟨α ⊗ ω, β ⊗ ω′⟩_n = α*β ⟨ω, ω′⟩_A
for α, β ∈ M_n and ω, ω′ ∈ Ω, where we view M_n(A) as M_n ⊗ A. It is easily verified that the left and right actions of M_n(A) relate to this sesquilinear form in the way needed for a pre-correspondence.
Lemma 4.1. The above M_n(A)-valued sesquilinear form ⟨·, ·⟩_n is positive. Thus (Ω_n, ⟨·, ·⟩_n) is a pre-correspondence for M_n(A). If ⟨·, ·⟩_A is definite, then so is ⟨·, ·⟩_n.
Proof. Given t = Σ_{j=1}^m α_j ⊗ ω_j, we have ⟨t, t⟩_n = Σ_{j,k} α_j* α_k ⟨ω_j, ω_k⟩_A. Let C denote the matrix {⟨ω_j, ω_k⟩_A} ∈ M_m(A); C is positive, because for any {a_j} ∈ A^m we have Σ_{j,k} a_j* ⟨ω_j, ω_k⟩_A a_k = ⟨Σ_j ω_j a_j, Σ_k ω_k a_k⟩_A ≥ 0. Thus C can be expressed as C = D*D for some D ∈ M_m(A), and then, for D = {d_jk}, we have
⟨t, t⟩_n = Σ_{j,k,p} α_j* α_k ⊗ d_pj* d_pk = Σ_p (Σ_j α_j ⊗ d_pj)* (Σ_k α_k ⊗ d_pk) ≥ 0.
If ⟨t, t⟩_n = 0, then by the generalized Cauchy–Schwarz inequality of Equation 2.2 we have ⟨t, s⟩_n = 0 for all s ∈ M_n(Ω). Let {e_jk} be the usual matrix units for M_n. We can express t as t = Σ_{j,k} e_jk ⊗ ω_jk. For any fixed p, q and any ω′ ∈ Ω set s = e_pq ⊗ ω′. Then
0 = ⟨t, s⟩_n = Σ_{j,k} e_jk* e_pq ⟨ω_jk, ω′⟩_A = Σ_k e_kq ⟨ω_pk, ω′⟩_A.
By the linear independence of the e_kq's it follows that ⟨ω_pk, ω′⟩_A = 0 for all k and all ω′, so that if ⟨·, ·⟩_A is definite then ω_jk = 0 for all j, k.
Suppose now that (Ω, ∂, ⟨·, ·⟩_A) is a Riemannian pre-metric for A. Then we can define ∂_n on M_n(A) with values in Ω_n by setting it on elementary tensors to be ∂_n(α ⊗ a) = α ⊗ ∂a. It is easily seen that ∂_n is a derivation. Note further that the (∂_n A)B's for A, B ∈ M_n(A) span Ω_n, because (∂_n(α ⊗ a))(β ⊗ b) = αβ ⊗ (∂a)b for a, b ∈ A. Thus:

Proposition 4.2. For notation as above, (Ω_n, ∂_n, ⟨·, ·⟩_n) is a Riemannian pre-metric for M_n(A).

Proposition 4.3. Let notation be as above, let Γ be the CdC for (Ω, ∂, ⟨·, ·⟩_A), and let Γ_n be the CdC for (Ω_n, ∂_n, ⟨·, ·⟩_n). Then Γ_n is given in terms of Γ by (Γ_n(A, B))_jk = Σ_p Γ(a_pj, b_pk).

Corollary 4.4. Let τ be a faithful trace on A, and let τ_n be the faithful trace on M_n(A) obtained by composing τ with the un-normalized matrix trace. Then the energy form E_n for (Γ_n, τ_n) is given by E_n(A, B) = Σ_{j,k} E(a_jk, b_jk).

We remark that the right-hand side above is just the usual pre-inner product that one puts on the tensor product of pre-Hilbert spaces, applied to M_n ⊗ A with the inner product on M_n from its un-normalized trace and with the pre-inner-product on A being E, as can be seen by calculations similar to those in the proof of Proposition 4.3. We can make the same definition for general sesquilinear forms.
We can now apply the results of the previous section to obtain: Corollary 4.5. With notation as above, E n is a Markov form for all n. In other words, E is completely Markov.
The definition given in [10] for a sesquilinear form E to be a Dirichlet form is slightly stronger than that used by many authors, for in addition to the Markov condition (which in [10] is called the Lipschitz condition), which concerns only self-adjoint elements, it also requires that (in our context) for any a ∈ A one have E(|a|, |a|) ≤ E(a, a).
In proposition 3.4 of [10] it is shown that if E_2 is Markov, then E is Dirichlet in their sense. From this and what we have shown above it is not difficult to obtain:

Corollary 4.6. For E coming from a Riemannian metric and a trace on A as above, each E_n is Dirichlet. In other words, E is completely Dirichlet.
As in Definition 3.5, for each n we define a seminorm, L_{E_n}, on M_n(A) by L_{E_n}(A) = (E_n(A, A))^{1/2}. Let V be a vector space over C. We let M_n(V) denote the vector space of n × n matrices with entries in V. Then M_n(V) is in an evident way a bimodule over M_n. We adapt to seminorms in the obvious way the definition of Ruan [37] of an L²-matricial norm on V.
Definition 4.7. Let notation be as above. A sequence {σ_n}, in which σ_n is a seminorm on M_n(V) for each n, is said to be an L²-matricial seminorm on V if it satisfies the following two properties:
• the normed-bimodule condition σ_n(αV β) ≤ ‖α‖ σ_n(V) ‖β‖ for all α, β ∈ M_n and all V ∈ M_n(V);
• the L²-condition σ_{m+n}(V ⊕ W)² = σ_m(V)² + σ_n(W)² for all V ∈ M_m(V) and W ∈ M_n(V), where V ⊕ W denotes the corresponding block-diagonal matrix in M_{m+n}(V).
From the formula in Corollary 4.4 the L²-condition for the sequence {L_{E_n}} follows immediately.

The metric on the state space
With notation as in the previous sections, let L_E be the seminorm on A from (Ω, ∂, ⟨·, ·⟩_A, τ). Of course, because ∂1_A = 0 we have L_E(1_A) = 0. Such seminorms are exactly the kind used in defining quantum metric spaces [30, 31, 34], and they determine an ordinary metric, ρ_E, on the state space, S(A), of A, defined by ρ_E(µ, ν) = sup{|µ(a) − ν(a)| : L_E(a) ≤ 1}.
Definition 5.1. The metric ρ_E defined above on S(A) is called the energy metric associated with (Ω, ∂, ⟨·, ·⟩_A, τ). This metric will take value +∞ if there is an a ∈ A with a ∉ C1_A such that L_E(a) = 0. In this case we interpret this as meaning that our "quantum space" is not metrically connected. For instance, for the CdC of Example 1.14, for which L_E(a) = (τ([v, a]*[v, a]))^{1/2}, we obviously have L_E(v) = 0, so that ρ_E does take the value +∞. On the other hand, if ∂ is given as a sum of terms of the form [v, a] for different v's (as in Examples 1.4 and 6.9), it can easily happen that ρ_E takes only finite values.
Definition 5.2. With notation as above we say that A is metrically connected for (Ω, ∂, ⟨·, ·⟩_A) if (for Γ the corresponding CdC) we have Γ(a, a) = 0 only when a ∈ C1_A.
For the rest of this section we will assume that A is metrically connected unless the contrary is stated. Then because τ is faithful, ρ E will take only finite values (in our finite-dimensional situation).
In order to make clear at what point we need the various properties satisfied by E, let us now assume for a while that E is an arbitrary pre-inner-product on A that satisfies the property that E(a, a) = 0 exactly when a ∈ C1_A. We define L_E as in Definition 3.5, and we define the metric ρ_E on S(A) as in Definition 5.1. Let Ã = A/C1_A. Then E drops to a definite inner product on Ã, which we will again denote by E. Since A is finite-dimensional, Ã equipped with E is a Hilbert space. Each element of Ã has a unique representative in the null-space of τ (i.e. orthogonal to 1_A in L²(A, τ)), and so we can identify Ã with the null-space of τ when convenient.
Denote the dual vector space of A by A′. Then the dual vector space of Ã can be identified with the subspace A′ᵒ consisting of elements of A′ that take value 0 on 1_A. Note that if µ, ν ∈ S(A) then µ − ν ∈ A′ᵒ. Any λ ∈ A′ᵒ determines a linear functional on the finite-dimensional Hilbert space Ã, and thus is represented by an element, h_λ, of Ã, so that
⟨a, λ⟩ = E(h_λ, a)
for all a ∈ A, where here ⟨a, λ⟩ denotes the usual pairing between Ã and its dual space. (We let a also denote its image in Ã.) Thus λ ↦ h_λ is a conjugate-linear map from A′ᵒ into Ã. It is clearly injective. But A′ᵒ and Ã have the same dimension, and so this map is also surjective. When convenient we can view h_λ as an element (unique) of A such that τ(h_λ) = 0. Notice that L_E is just the Hilbert-space norm on Ã, and it determines a dual norm, L′_E, on A′ᵒ, defined by L′_E(λ) = sup{|⟨a, λ⟩| : L_E(a) ≤ 1}. For µ, ν ∈ S(A) we then see that ρ_E(µ, ν) = L′_E(µ − ν). Because (Ã, E) is a Hilbert space, we know that the supremum on the right side is attained at the unit vector pointing in the direction of h_λ, and consequently L′_E(λ) = (E(h_λ, h_λ))^{1/2}. That is, the surjective map λ ↦ h_λ is a (conjugate-linear) isometry from A′ᵒ onto Ã. From this we obtain:

Proposition 5.3. Let notation be as above. For µ, ν ∈ S(A) we have ρ_E(µ, ν) = (E(h_{µ−ν}, h_{µ−ν}))^{1/2}.

Fix now some µ_0 ∈ S(A), and for any µ ∈ S(A) set σ(µ) = h_{µ−µ_0}. Note that for µ, ν ∈ S(A) we have σ(µ) − σ(ν) = h_{µ−ν}. Let ‖·‖_E denote the norm on Ã from the inner product E on Ã. Then:

Theorem 5.4. With notation as above,
ρ_E(µ, ν) = ‖σ(µ) − σ(ν)‖_E
for all µ, ν ∈ S(A). Thus σ is an affine isometry from the convex metric space (S(A), ρ_E) into the Hilbert space (Ã, E).
Note that σ(µ 0 ) = 0, so that the choice of µ 0 determines which element of S(A) is sent to 0 by σ.
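Proposition 5.3 can be illustrated numerically in the commutative case. The sketch below assumes (hypothetically, for illustration) that E is given by a graph Laplacian N, so that E is definite modulo constants; it computes h_λ for λ = µ − ν via the pseudoinverse, checks that the dual-norm supremum is attained at the unit vector in the direction of h_λ, and that randomly sampled unit vectors never exceed it:

```python
import numpy as np

# E(f, g) = f @ N @ g for a graph Laplacian N: positive semidefinite with
# kernel exactly the constants, so E is definite on A-tilde = A / C1.
rng = np.random.default_rng(1)
n = 6
W = rng.uniform(0.1, 1.0, (n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
N = np.diag(W.sum(1)) - W

mu = rng.dirichlet(np.ones(n))          # two states = probability vectors on n points
nu = rng.dirichlet(np.ones(n))
lam = mu - nu                           # lam annihilates the constants

h = np.linalg.pinv(N) @ lam             # Riesz representative h_lam, with mean 0
rho = np.sqrt(lam @ h)                  # = E(h, h)^(1/2), since N @ h = lam

# The supremum sup{ |<f, lam>| : E(f, f) <= 1 } is attained at h / ||h||_E ...
fstar = h / np.sqrt(h @ N @ h)
attained = abs(fstar @ lam)

# ... and no randomly sampled mean-zero unit vector exceeds it (Cauchy-Schwarz).
best = 0.0
for _ in range(2000):
    f = rng.standard_normal(n); f -= f.mean()
    f /= np.sqrt(f @ N @ f)
    best = max(best, abs(f @ lam))
```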
Jorgensen and Pearse [17] were the first to discover that, for resistance networks, i.e. for A commutative, at least the set of extreme points of the state space equipped with the metric from the energy form, embeds isometrically into a Hilbert space. See also section 5.1 of [16], where the relationship with negative semidefinite forms is discussed.
In order to see the further consequences of requiring that E actually comes from a non-commutative Riemannian metric, we need to introduce the Laplace operator.

The Laplace operator
Because A is finite-dimensional and τ is a faithful trace on A, for any pre-inner-product E on A there will be a unique positive linear operator, N, on L²(A, τ) such that E(a, b) = ⟨a, Nb⟩_τ for all a, b ∈ A, where ⟨a, b⟩_τ = τ(a*b). (The word "positive" here refers to N as an operator on the Hilbert space L²(A, τ), and not to how it relates to the order structure on the C*-algebra A.) If E comes from a Riemannian metric (Ω, ∂, ⟨·, ·⟩_A), so that E(a, b) = τ(⟨∂a, ∂b⟩_A), then it is appropriate to view N as ∂*∂, and make:

Definition 6.1. If E comes from a Riemannian metric and faithful trace, then we denote the operator N as above by ∆ and we call it the Laplace operator corresponding to the Riemannian metric and faithful trace.
We remark that this is contrary to the conventions frequently made that lead to the Laplace operator being a negative operator.
We now investigate the resulting special properties that ∆ will have. We assume from now on that ∆ comes from a Riemannian metric and a faithful trace as above. Notice first that ∆(1_A) = 0 because E(a, 1_A) = 0 for all a ∈ A. Define Γ_∆ as in Theorem 1.13 with ∆ playing the role of N there (but here with an additional factor of 1/2, as is commonly done in the literature), so that
Γ_∆(a, b) = (1/2)(∆(a)* b + a* ∆(b) − ∆(a*b))
for all a, b ∈ A. It has the properties described in Theorem 1.13. Furthermore, Γ_∆ is τ-real according to Proposition 3.9.
Lemma 6.2. For all a, b, c ∈ A we have
⟨c, Γ_∆(a, b)⟩_τ = (1/2)(τ(⟨∂a, (∂b)c*⟩_A) + τ(⟨c ∂(b*), ∂(a*)⟩_A)),
where for the last equality in the computation one uses both the fact that ∂ is a derivation (so that several terms cancel) and the tracial property of τ.

Proposition 6.3. With notation as above, Γ_∆(a, a) ≥ 0 for all a ∈ A. Consequently Γ_∆ is also symmetric.
Proof. If we let c = dd* in the above Lemma, and then rearrange, we obtain
⟨d, Γ_∆(a, a)d⟩_τ = (1/2)(τ(⟨(∂a)d, (∂a)d⟩_A) + τ(⟨d*∂(a*), d*∂(a*)⟩_A)) ≥ 0
for all d ∈ A. Since the representation of A on L²(A, τ) is faithful, it follows that Γ_∆(a, a) ≥ 0 as desired. The symmetry of Γ_∆ follows by the usual arguments.
Define ∆♯ by ∆♯(a) = (∆(a*))* for a ∈ A, as done just before Theorem 1.13. Then the above expression is equal to 2Γ_{∆♯}(a, a), and so by polarization we have Γ_{∆♯} = Γ_∆. This is consistent with the relation found in Theorem 1.13 that ensures that Γ_∆ is symmetric. Let τ_n be defined on M_n(A) as in Corollary 4.4, and let E_n be the corresponding energy form. Then for any A, B ∈ M_n(A) we see from Corollary 4.4 that
E_n(A, B) = Σ_{j,k} E(a_jk, b_jk) = Σ_{j,k} ⟨a_jk, ∆(b_jk)⟩_τ = ⟨A, (I_n ⊗ ∆)(B)⟩_{τ_n}.
Thus we obtain:

Proposition 6.4. The Laplacian for E_n is the operator I_n ⊗ ∆ on L²(M_n(A), τ_n). In particular, I_n ⊗ ∆ comes from a Riemannian metric.
We denote I_n ⊗ ∆ by ∆_n. It will have the properties described above. In particular, Γ_{∆_n} is positive on M_n(A), and τ_n-real by Proposition 3.9. But straightforward calculations using Proposition 4.3 show that Γ_{∆_n} = (Γ_∆)_n. Thus Γ_∆ is completely positive in the sense defined just before Definition 2.5. Then from Proposition 2.6 we obtain:

Theorem 6.5. Let Γ be the CdC for a Riemannian metric on A, let τ be a faithful trace on A, and let ∆ be the Laplace operator for the corresponding energy form. Then Γ_∆ is a CdC.
Proposition 6.7. With notation as above, for all a, b ∈ A we have
τ(Γ_∆(a, b)) = (1/2)(E(a, b) + E(b*, a*)).
In particular, if ∂ is τ-real then the energy form determined by Γ_∆ is exactly E.

Proof. We obtain the first assertion when we set c = 1_A in the equation of Lemma 6.2. The second assertion then follows from the definition of ∂ being τ-real.
We remark that the formula of Proposition 6.7 shows the virtue of including the factor of 1/2 that we introduced in this section.
Returning to our case of CdC's, it follows that if Γ is not τ -real then it can not coincide with Γ ∆ .
Let us now define ∆♮ by ∆♮ = (1/2)(∆ + ∆♯). Then we see that again Γ_{∆♮} = Γ_∆. But ∆♮ has the further property that (∆♮(a))* = ∆♮(a*). This means that it satisfies the conditions given in [23] for −∆♮ to be the generator of a quantum semigroup. We can consider this semigroup to be the heat semigroup for our Riemannian metric, especially when ∂ is τ-real, in which case we also have E = E_{∆♮}.

Proposition 6.8. For notation as above, Γ is τ-real if and only if (∆(a))* = ∆(a*) for all a ∈ A, that is, ∆♯ = ∆.

Proof. Essentially by definition, Γ is τ-real exactly when E is real (Definition 3.8). Then for all a, b ∈ A we have E(a, b) = ⟨a, ∆(b)⟩_τ = ⟨∆(a), b⟩_τ = τ((∆(a))* b), while E(b*, a*) = τ(b ∆(a*)) = τ(∆(a*) b). Thus E(a, b) = E(b*, a*) for all a, b ∈ A if and only if (∆(a))* = ∆(a*) for all a ∈ A.

Example 6.9. We consider a continuation of Examples 1.14 and 2.12.
Let v_1, . . . , v_m be elements of A, and set
Γ(a, b) = Σ_j [v_j, a]* [v_j, b].
Each term is a CdC since it comes from a Riemannian metric as in Examples 2.3 and 1.9. Since sums of CdC's are again CdC's, it follows that Γ is a CdC. For any faithful trace τ we have E(a, b) = Σ_j τ([v_j, a]*[v_j, b]). It follows that the corresponding Laplace operator ∆ is defined by
∆(a) = Σ_j [v_j*, [v_j, a]].
(See [44] for a somewhat special case of this.) Let us see when Γ is τ-real. According to Proposition 6.8 it suffices to determine when ∆♯ = ∆. It is easily seen that ∆♯(a) = Σ_j [v_j, [v_j*, a]], so that ∆(a) − ∆♯(a) = Σ_j [[v_j*, v_j], a], where the last equality comes from the Jacobi identity as in Example 1.14. It follows that Γ is τ-real exactly if Σ_j [v_j*, v_j] is in the center of A. But it is clear that for every trace τ on A we have τ(Σ_j [v_j*, v_j]) = 0 and that Σ_j [v_j*, v_j] is self-adjoint. Since it is in the center, this is equivalent to its being 0. Thus Γ is τ-real exactly if Σ_j [v_j*, v_j] = 0. This last condition is exactly the "detailed balance" condition of proposition 6.9 of [10].
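The relation between the energy form and this Laplace operator can be verified numerically; the sketch below does so for hypothetical random complex matrices v_j, a, b, using the un-normalized matrix trace as τ:

```python
import numpy as np

# Check E(a, b) = tau(sum_j [v_j, a]* [v_j, b]) = <a, Delta(b)>_tau,
# where Delta(b) = sum_j [v_j*, [v_j, b]] and tau is the matrix trace.
rng = np.random.default_rng(3)
def comm(x, y): return x @ y - y @ x
def adj(x): return x.conj().T
def rand(n): return rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

n = 4
vs = [rand(n) for _ in range(2)]    # hypothetical choices of v_1, v_2
a, b = rand(n), rand(n)

E_form = np.trace(sum(adj(comm(v, a)) @ comm(v, b) for v in vs))
Delta_b = sum(comm(adj(v), comm(v, b)) for v in vs)
E_op = np.trace(adj(a) @ Delta_b)
```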
We remark that when this example is compared to Example 2.12 we see that there are generators N of quantum dynamical semigroups on some C*-algebras A for which there is no faithful trace τ on A such that the CdC Γ N is τ -real. This suggests that one should consider faithful states that are not tracial, as considered in [5,6]. But I have not investigated that direction.
The formula in Lemma 6.2 suggests a further condition that can be required of a Riemannian metric and trace. This condition essentially appears already in section 1.2 of [26], where Peterson calls it "real". Since we are already using "τ-real", I prefer to use the term "τ-balanced":

Definition 6.10. Let (Ω, ∂, ⟨·, ·⟩_A) be a Riemannian metric on A. For τ a faithful trace on A, we say that this Riemannian metric is τ-balanced if
τ(⟨∂a, (∂b)c⟩_A) = τ(⟨c* ∂(b*), ∂(a*)⟩_A)
for all a, b, c ∈ A.

Because τ(⟨∂a, (∂b)c*⟩_A) = ⟨c, Γ(a, b)⟩_τ for Γ the CdC of the Riemannian metric, it follows immediately from Lemma 6.2 that:

Theorem 6.11. Let (Ω, ∂, ⟨·, ·⟩_A) be a Riemannian metric on A, and let Γ be its CdC. Let τ be a faithful trace on A, and let ∆ be the corresponding Laplace operator. Then Γ = Γ_∆ if and only if Γ is τ-balanced.
It is clear that being τ -balanced is a stronger condition than being τ -real. The following example shows that it is in fact a strictly stronger condition.
Example 6.12. We consider CdC's of the form discussed in Example 1.14, that is, of the form Γ(a, b) = [v, a] * [v, b] for some v ∈ A. We want Γ to be τ -real, and so from Example 6.9 we see that v must commute with v * , that is, be normal.
By Definition 6.10, in order for Γ to be τ-balanced we must have
τ([v, a]*[v, b]c) = τ([v, b*]* c [v, a*])
for all a, b, c ∈ A. Because this is true for all c ∈ A, and τ is faithful and tracial, this is equivalent to the requirement that
(*) [v, a]*[v, b] = [v, a*][v, b*]*
for all a, b ∈ A. This is satisfied if v is self-adjoint, so we must choose v to be normal but not self-adjoint.
To continue, we now take A to be M_n(C) for some n, with its usual trace. Since v is to be normal, we can assume that it is a diagonal matrix. Since the requirement (*) involves only commutators, we can always change v by adding a scalar multiple of 1_A. Thus if v has only two eigenvalues, we can assume that one of those eigenvalues is 0. Then v is a scalar multiple of a self-adjoint matrix, and again the requirement (*) is satisfied. Thus we must assume that v has at least 3 eigenvalues, but we can assume that one of those eigenvalues is 0. We can also multiply v by a scalar, and so assume that another of the eigenvalues is 1. By conjugating v by a permutation matrix we can then assume that the first 3 diagonal entries of v are 1, α, 0, where α is some nonreal complex number. View A as acting on C^n in the usual way. Then let b be the element of A that takes the first standard basis vector to the second, takes the second to the third, and sends all other standard basis vectors to 0. A simple calculation shows that with a = b* we have
[v, b*]*[v, b] ≠ [v, b][v, b*]*,
so that requirement (*) fails. Thus Γ is not τ-balanced.
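This example can be replayed numerically, under the convention Γ_∆(a, b) = (1/2)(∆(a)*b + a*∆(b) − ∆(a*b)) with ∆(b) = [v*, [v, b]] (the Laplacian of Example 6.9 for a single v). For v = diag(1, i, 0) and the shift b described above, Γ and Γ_∆ induce the same energy form and Γ is τ-real, yet Γ ≠ Γ_∆, so Γ is not τ-balanced by Theorem 6.11:

```python
import numpy as np

def comm(x, y): return x @ y - y @ x
def adj(x): return x.conj().T

v = np.diag([1.0, 1j, 0.0])                          # normal but not self-adjoint
def Gamma(a, b): return adj(comm(v, a)) @ comm(v, b)
def Delta(b): return comm(adj(v), comm(v, b))        # Delta(b) = [v*, [v, b]]
def Gamma_D(a, b):                                   # carre du champ of Delta, with 1/2
    return 0.5 * (adj(Delta(a)) @ b + adj(a) @ Delta(b) - Delta(adj(a) @ b))

b = np.zeros((3, 3)); b[1, 0] = 1; b[2, 1] = 1       # e1 -> e2 -> e3, others -> 0
a = adj(b)

diff = np.abs(Gamma(a, b) - Gamma_D(a, b)).max()     # nonzero: Gamma != Gamma_Delta

# Sanity: both induce the same energy form (equal traces) ...
same_E = abs(np.trace(Gamma(a, b)) - np.trace(Gamma_D(a, b))) < 1e-12
# ... and Delta(a*)* = Delta(a), i.e. Gamma is tau-real, since v is normal.
real_ok = np.abs(adj(Delta(adj(a))) - Delta(a)).max() < 1e-12
```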
We now relate the Laplace operator ∆ to the metric ρ_E on the state space. For this we assume that A is metrically connected (Definition 5.2), so that the kernel of ∆ is exactly C1_A. Let A_0 = {a ∈ A : τ(a) = 0}, so that when viewed as a subspace of L²(A, τ) it is exactly the orthogonal complement of C1_A. Thus ∆ carries A_0 into itself and is invertible there. Later when we write ∆^{-1} it is to be interpreted as an operator on A_0. In the evident way A_0 can be identified with Ã = A/C1_A. Because A_0 is a Hilbert space when equipped with the inner product from L²(A, τ), we can identify A′ᵒ conjugate-linearly with A_0 itself. Accordingly we change our earlier conventions in the usual way, and for λ ∈ A_0 we define h_λ ∈ A_0 by requiring that ⟨λ, a⟩_τ = E(h_λ, a) for all a ∈ A, so that λ ↦ h_λ is linear, in contrast to our convention in Section 5. But E(h_λ, a) = ⟨∆h_λ, a⟩_τ for all a, and so λ = ∆h_λ, or h_λ = ∆^{-1}λ. In view of Proposition 5.3, we can express ρ_E in terms of ∆ by:

Proposition 6.13. With notation as above, for any µ, ν ∈ S(A) we have
ρ_E(µ, ν) = ⟨µ − ν, ∆^{-1}(µ − ν)⟩_τ^{1/2}.

The commutative case
We now examine the special case in which A is a commutative (and finite-dimensional) C*-algebra. Let X be its maximal ideal space. Then X is a finite set, and we can and will identify A with C(X). We begin by determining all possible CdC's on A.

Theorem 7.1. As in Example 1.9, for Z = {(x, y) ∈ X × X : x ≠ y}, any non-negative function c on Z defines a CdC for A, by
Γ(f, g)(y) = Σ_{x: x≠y} (f̄(x) − f̄(y))(g(x) − g(y)) c_xy
for y ∈ X. Conversely, every CdC for A is of this form, and thus there is a bijection between the set of CdC's and the set of c's. Furthermore, every CdC Γ for A satisfies the extra condition that Γ(f, g) = Γ(g*, f*) for all f, g ∈ A, so that Γ is τ-real for any trace τ on A. Furthermore, if c_yx = c_xy for all (x, y) ∈ Z then Γ is τ-balanced.
Proof. Note that in the above sum defining Γ we do not need values for c yy for any y. As suggested in Example 1.9, it is easily seen that, given c, the above formula gives a CdC (which clearly satisfies the extra condition), and a direct calculation verifies the statement about being τ -balanced.
Thus we must prove the converse. So let Γ be some given CdC for A. For each x ∈ X let δ_x be the usual "delta-function" at x. Since the δ_x's form a basis for A, the constants γ^y_{pq} = Γ(δ_p, δ_q)(y) for p, q, y ∈ X completely determine Γ. Because Γ is symmetric, we see that γ^y_{qp} = conj(γ^y_{pq}) for all p, q, y ∈ X. Because Γ is positive we see that γ^y_{pp} ≥ 0 for all p, y ∈ X. Because Γ(1, f) = 0 for all f ∈ A we see that Σ_p γ^y_{pq} = 0 for all q, y ∈ X. Finally, we must examine the consequences of the *-representation condition, Equation 1.8. Let y, w, p, q ∈ X, and suppose that p ≠ q. On setting w = p in the relation that Equation 1.8 gives, we see that: if y ≠ p and y ≠ q then γ^y_{pq} = 0, whereas if y = q then γ^q_{pq} = −γ^q_{pp}. In particular, since γ^q_{pp} ≥ 0 by the positivity of Γ, we see that γ^y_{pq} ∈ R for all p, q, y ∈ X.
Then for f, g ∈ A and y ∈ X we have
Γ(f, g)(y) = Σ_{p,q} f̄(p) g(q) γ^y_{pq}.
We can add to this 0 = f̄(y)g(y) Σ_p γ^y_{py} (using γ^y_{py} = −γ^y_{pp} for p ≠ y) to obtain:
Γ(f, g)(y) = Σ_{p: p≠y} (f̄(p) − f̄(y))(g(p) − g(y)) γ^y_{pp}.
Accordingly, if we set c_py = γ^y_{pp} for all p, y with p ≠ y we obtain the desired formula. Note that for each p, y ∈ X with p ≠ y we have c_py = γ^y_{pp} = Γ(δ_p, δ_p)(y), which is non-negative by assumption, as needed.
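The extraction step at the end of the proof is easy to test: build Γ from a hypothetical (not necessarily symmetric) matrix of conductances and recover c via c_py = Γ(δ_p, δ_p)(y):

```python
import numpy as np

# Gamma(f, g)(y) = sum_{x != y} conj(f(x) - f(y)) (g(x) - g(y)) c[x, y].
rng = np.random.default_rng(4)
n = 5
c = rng.uniform(0.1, 2.0, (n, n)); np.fill_diagonal(c, 0)   # need not be symmetric

def Gamma(f, g):
    out = np.zeros(n, dtype=complex)
    for y in range(n):
        for x in range(n):
            if x != y:
                out[y] += np.conj(f[x] - f[y]) * (g[x] - g[y]) * c[x, y]
    return out

# Recover the conductances: c_py = Gamma(delta_p, delta_p)(y) for p != y.
recovered = np.zeros((n, n))
for p in range(n):
    d = np.zeros(n); d[p] = 1
    g = Gamma(d, d).real
    for y in range(n):
        if y != p:
            recovered[p, y] = g[y]
```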
We remark that, as suggested by the remarks about commutative algebras near the beginning of Section 3 leading to the proof of Proposition 3.1, the above CdC will satisfy a Markov condition. This is easily seen directly. If f ∈ A, and if F is even a C-valued Lipschitz function defined on σ(f) (which is the range of f), then
Γ(F ◦ f, F ◦ f)(y) ≤ (Lip(F))² Γ(f, f)(y)
for all y ∈ X. Given c_xy's as above, we can view X as consisting of the nodes of a directed graph such that there is an edge from x to y exactly if c_xy ≠ 0; and we can consider the c_xy's to be weights on the edges, recognizing that the weights on the two edges joining two given nodes in opposite directions need not be equal.
For traditional reasons that will be discussed below, we now introduce a factor of 1/2 into the formula for Γ, much as we did just before Lemma 6.2. Thus from now on we assume that
Γ(f, g)(y) = (1/2) Σ_{x: x≠y} (f̄(x) − f̄(y))(g(x) − g(y)) c_xy.
We have not yet chosen a trace on A. But counting measure is implicit in the formula above for Γ, and it is anyway almost a canonical choice. Thus we will choose (integration against) counting measure as our trace τ. The corresponding energy form E is then given by
E(f, g) = (1/2) Σ_{x≠y} (f̄(x) − f̄(y))(g(x) − g(y)) c_xy.
(So we are in the "jump part" of example 4.20 of [7].) Notice that the part of the summand involving f and g is even in x and y (i.e. unchanged under exchanging x and y). Consequently its sum against the odd part of c will be 0. Thus we can replace c by c̃ defined by c̃_xy = (c_xy + c_yx)/2. Of course c̃ will define a different CdC. But the two CdC's will give the same energy form, and for the following considerations it is the energy form that we study. So we assume now that
c_xy = c_yx for all (x, y) ∈ Z.
With this condition, the graph whose nodes are the elements of X and whose edge-weights are given by c can exactly be interpreted as a resistance network, with c specifying the conductances between the various nodes [20, 16]. It is in this way that our Riemannian metrics together with trace correspond, when A is commutative, exactly to resistance networks.
We now sketch the usual development for resistance networks [20, 16], since we need it for our discussion of the metric on the state space. We first determine the corresponding Laplacian. For this purpose it is convenient to define (as, for example, in definition 1.9 of [16]) two operators on A, which when they are viewed as operators on L²(A, τ) are self-adjoint operators. In defining these operators, we will assume that c_xx = 0 for all x ∈ X. The first of these operators, the "transfer operator" T, is an "integral operator" defined by
(Tf)(y) = Σ_x c_xy f(x).
For the second of these operators, C, define first a function, ĉ, on X by
ĉ(x) = Σ_y c_xy.
We let C be the operator of pointwise multiplication by ĉ. Then
2Γ(f, g)(y) = Σ_p (f̄(p) − f̄(y))(g(p) − g(y)) c_py = (T(f̄g))(y) − f̄(y)(Tg)(y) − (Tf̄)(y)g(y) + (C(f̄g))(y) = −((C − T)(f̄g))(y) + ((C − T)f̄)(y)g(y) + f̄(y)((C − T)g)(y) = 2Γ_{C−T}(f, g)(y).
Consequently Γ = Γ_{C−T}, and from the above calculation we also see that E(f, g) = ⟨f, (C − T)g⟩_τ. Thus the Laplace operator for Γ is given by
∆ = C − T,
consistent with Theorem 6.11. The specific formula for the Laplace operator ∆ can be written as
(∆f)(y) = Σ_x (f(y) − f(x)) c_xy.
Note that in the literature the Laplace operator is often taken to be the negative of the above expression, so that it is a non-positive operator. We see from the above formulas why it is common to introduce a factor of 1/2 in the definition of E. There is another compelling reason for introducing a factor of 1/2, namely that when the system is interpreted as a resistance network, and f is interpreted as giving voltages that are applied to the various nodes, the rate of dissipation of energy caused by the resulting current is given by the earlier E divided by 2, basically because the earlier formula for E double-counts the edges.
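As a numerical sanity check of this development, the following sketch builds T, C and ∆ = C − T from a hypothetical symmetric conductance matrix on four nodes and verifies that the energy form agrees with ⟨f, ∆g⟩ for counting-measure τ, and that constants lie in the kernel of ∆:

```python
import numpy as np

# Hypothetical symmetric conductances on a 4-node resistance network.
c = np.array([[0, 2.0, 0, 1.0],
              [2.0, 0, 3.0, 0],
              [0, 3.0, 0, 0.5],
              [1.0, 0, 0.5, 0]])
T = c                         # transfer operator: (Tf)(y) = sum_x c_xy f(x)
C = np.diag(c.sum(1))         # multiplication by c_hat(x) = sum_y c_xy
Delta = C - T                 # the Laplace operator

rng = np.random.default_rng(2)
f = rng.standard_normal(4); g = rng.standard_normal(4)

# E(f, g) = (1/2) sum_{x != y} (f(x)-f(y))(g(x)-g(y)) c_xy  (real f, g)
E_sum = 0.5 * sum((f[x] - f[y]) * (g[x] - g[y]) * c[x, y]
                  for x in range(4) for y in range(4) if x != y)
E_op = f @ Delta @ g          # <f, Delta g> for counting-measure tau
```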
Let the δ_x's now be viewed as elements of L²(A, τ), so that they form an orthonormal basis for L²(A, τ). For any z ∈ X we have
(∆δ_z)(y) = Σ_x (δ_z(y) − δ_z(x)) c_xy.
Consequently, for x, y ∈ X with x ≠ y we have
E(δ_x, δ_y) = −c_xy.
Notice that this relation fails if x = y, but that instead we have
E(δ_x, δ_x) = ĉ(x).
It is easily seen that even when some of the c_xy are negative the corresponding form E defined as above can still be non-negative. But:

Proposition 7.2. The form E defined as above for a real symmetric function c on Z satisfies the Markov condition exactly if c_xy ≥ 0 for all (x, y) ∈ Z.

Proof. We recall here the simple argument (found, for example, in the proof of proposition 2.1.3 of [20]). Suppose that for some given x, y we have c_xy < 0. Set f = δ_x − rδ_y for some r ∈ R_{>0}. Define F on R by F(t) = t if t ≥ 0 and F(t) = 0 if t < 0. Then Lip(F) = 1, and F ◦ f = δ_x. Thus if E were to satisfy the Markov condition we should have E(δ_x, δ_x) ≤ E(f, f). But when we expand the sum for E(f, f) we obtain
E(f, f) = E(δ_x, δ_x) + 2r c_xy + r² E(δ_y, δ_y).
Since c_xy is strictly negative, it is clear that we can choose a positive r small enough that E(f, f) < E(δ_x, δ_x). The converse assertion follows from Corollary 3.3.
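The argument in the proof can be replayed numerically: with one strictly negative conductance, the cutoff F(t) = max(t, 0) applied to f = δ_x − rδ_y increases the energy for small r > 0, violating the Markov condition:

```python
import numpy as np

# A hypothetical 3-node network with one negative conductance c_01 = -0.3.
c = np.array([[0, -0.3, 1.0],
              [-0.3, 0, 1.0],
              [1.0, 1.0, 0]])
Delta = np.diag(c.sum(1)) - c
def E(f): return f @ Delta @ f          # the energy form E(f, f)

x, y, r = 0, 1, 0.1
f = np.zeros(3); f[x] = 1; f[y] = -r    # f = delta_x - r delta_y
Ff = np.maximum(f, 0)                   # F o f = delta_x for the cutoff F(t) = max(t, 0)

# Markov would demand E(F o f) <= E(f); here the inequality is reversed.
violates = E(f) < E(Ff)
```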

Resistance distance
As mentioned in the introduction, I have been puzzled about the nature of the "resistance distance" for a resistance network ever since I wrote section 12 of [31]. In this section we will arrive at an answer that I consider satisfactory. Let me mention that resistance distance has seen use in chemistry (e.g. [21,15,22,43] and their references), and even in evolution [25].
As is usual, we say that a graph is "connected" if its node set is not the disjoint union of two non-empty subsets, Y and W, such that there is no edge between any point of Y and any point of W. Maximal connected subsets of X are called its "connected components". These concepts are of importance to us because it is easily seen that for a resistance network we have L_E(f) = 0 for some f ∈ A exactly if f is constant on the connected components of X. Thus it is only when X itself is connected that we have the property that if L_E(f) = 0 then f ∈ C1_A, so that the corresponding metric on the state space takes only finite values. It is easily seen that the properties of E and related objects can be obtained by treating each connected component separately. Consequently, we will assume for the rest of this section that X is connected. Since this depends on the choice of c_xy's we will tend to say "metrically connected".
We will now develop the standard ideas about harmonic functions, which for our context go all the way back to the seminal paper [3]. (I have not noticed a useful way to define "harmonic functions" in the non-commutative setting.) The material in the next paragraphs, through Theorem 8.4 is well-known. See for example section 2.1 of [20].
Let ∆ be the Laplace operator for the given choice of c xy 's for X (metrically connected).
Definition 8.1. For given f ∈ C(X) and x ∈ X we say that f is harmonic at x if ∆(f )(x) = 0.
If f is interpreted as an application of voltages at the points of X, then being harmonic at x means that no current is being inserted (or extracted) at x.
If f is harmonic at x, then Notice that y → c xy /ĉ(x) is a probability distribution on the points of X that share an edge with x. Thus f (x) is a weighted average of the values of f on the neighbors of x. Here we make essential use of the fact that the c xy 's are non-negative. For any subset Y of X let Y = Y ∪{z ∈ X : c yz > 0 for some y ∈ Y }. Notice that this is not a true closure operation.
Theorem 8.2 (The maximum principle). Let $Y$ be a subset of $X$, and assume that $Y$ is connected (for the restriction of $c$ to $Y$). Let $f \in C(X)$, with $\bar{f} = f$, and suppose that $f$ is harmonic at all points of $Y$. Let $m = \max\{f(w) : w \in \bar{Y}\}$. If there is a $y \in Y$ such that $f(y) = m$, then $f$ is constant on $\bar{Y}$.
Proof. Let $W = \{y \in Y : f(y) = m\}$, and suppose that $W$ is not empty. Let $w \in W$. Because $f(w)$ is the weighted average of its values on $\overline{\{w\}}$, it must take value $m$ at all points of $\overline{\{w\}}$. It follows that $\overline{\{w\}} \cap Y \subseteq W$. Because $Y$ is connected, it follows easily that $W = Y$, and then that $f$ takes value $m$ on all of $\bar{Y}$.
Notice that by considering −f we obtain a similar statement about the minimum of f on Y .
As discussed before Proposition 6.13, because $X$ is metrically connected, the restriction of $\Delta$ to $A_0$ is an invertible operator on $A_0$, where here $A_0$ consists of the functions $f$ such that $\tau(f) = 0$. For any distinct $p, q \in X$ the function $\delta_p - \delta_q$ is in $A_0$. Let $h_{pq} = \Delta^{-1}(\delta_p - \delta_q)$. Then $\Delta(h_{pq}) = \delta_p - \delta_q$, and so $h_{pq}$ is harmonic on the complement of $\{p, q\}$. On applying the maximum principle to the different components of $X \setminus \{p, q\}$ we see that $h_{pq}$ must take its maximum and minimum values on $\{p, q\}$. Now
$$h_{pq}(p) - h_{pq}(q) = \langle h_{pq}, \delta_p - \delta_q \rangle_\tau = \langle h_{pq}, \Delta(h_{pq}) \rangle_\tau = E(h_{pq}, h_{pq}) > 0,$$
and from this it is clear that $h_{pq}$ must take its maximum value at $p$, and so its minimum value at $q$. That is:
$$(*) \qquad h_{pq}(q) \leq h_{pq}(z) \leq h_{pq}(p) \quad \text{for all } z \in X.$$
Note that according to Proposition 6.13 we then have
$$(**) \qquad h_{pq}(p) - h_{pq}(q) = \langle \delta_p - \delta_q, \Delta^{-1}(\delta_p - \delta_q) \rangle_\tau = (\rho_E(\delta_p, \delta_q))^2$$
(where here $\tau$ is counting measure).
Definition 8.3. Define $\rho_r$ on $X \times X$ by $\rho_r(p, p) = 0$ for all $p \in X$, and
$$\rho_r(p, q) = h_{pq}(p) - h_{pq}(q)$$
for $p, q \in X$ with $p \neq q$. We call $\rho_r$ the resistance metric on $X$ (for the given $c_{xy}$'s).
Theorem 8.4. The resistance metric ρ r is indeed a metric.
Proof. It is clear that $\rho_r$ is symmetric (because $h_{qp} = -h_{pq}$), and that $\rho_r(x, y) = 0$ exactly if $x = y$. We must show that it satisfies the triangle inequality. So let points $n, p, q$ of $X$ be given. By the linearity of $\Delta^{-1}$ we have $h_{pq} = h_{pn} + h_{nq}$, while $h_{nq}(p) \leq h_{nq}(n)$ and $h_{pn}(q) \geq h_{pn}(n)$ by the inequalities (*) from the maximum principle. Thus
$$\rho_r(p, q) = h_{pn}(p) + h_{nq}(p) - h_{pn}(q) - h_{nq}(q) \leq h_{pn}(p) - h_{pn}(n) + h_{nq}(n) - h_{nq}(q) = \rho_r(p, n) + \rho_r(n, q).$$
Now this is strange, because from equation (**) we see that
$$\rho_r(p, q) = (\rho_E(\delta_p, \delta_q))^2,$$
and usually the square of a metric is not a metric. Since $\rho_E$ is defined on the whole state space $S(A)$, not just on its extreme points, it is natural to ask whether $\rho_E^2$ is a metric on all of $S(A)$. We will now see that this is not the case, so the resistance metric is of a quite different nature than the energy metric.
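For a concrete sense of how $\rho_r$ behaves, $h_{pq}$ can be computed with a pseudoinverse of $\Delta$ (which inverts $\Delta$ on the mean-zero functions $A_0$). The sketch below is our own illustration; the helper `resistance_metric` is hypothetical, not from the paper. On a path it recovers the familiar series-resistance rule, with the triangle inequality holding with equality.

```python
import numpy as np

def resistance_metric(c):
    """rho_r(p,q) = h_pq(p) - h_pq(q) with h_pq = Delta^{-1}(delta_p - delta_q),
    computed via the pseudoinverse of the Laplace operator Delta."""
    chat = c.sum(axis=1)
    L = np.diag(chat) - c
    Lplus = np.linalg.pinv(L)
    n = len(c)
    rho = np.zeros((n, n))
    for p in range(n):
        for q in range(n):
            e = np.zeros(n); e[p] += 1.0; e[q] -= 1.0
            rho[p, q] = e @ Lplus @ e    # = h_pq(p) - h_pq(q)
    return rho

# Path 0 -- 1 -- 2 with unit conductances: resistances add in series.
c = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
rho = resistance_metric(c)
assert np.isclose(rho[0, 2], 2.0)
assert np.isclose(rho[0, 1] + rho[1, 2], rho[0, 2])   # equality on a path
```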
Recall that Theorem 5.4 tells us that S(A), equipped with the energy metric, is isometrically embedded in a Hilbert space. So let us determine what kinds of subsets of a Hilbert space have the property that when the square of the Hilbert space metric is restricted to them the result is again a metric.
Proposition 8.5. Let $X$ be a subset of a Hilbert space $H$, and let $d$ be the restriction to $X$ of the metric on $H$ that comes from its inner product. Then $d^2$ is a metric on $X$ if and only if $X$ has the property that for all $x, y, z \in X$ we have
$$0 \leq \mathrm{Re}(\langle x - y, z - y \rangle).$$

Proof. From the definition of a metric, $d^2$ is a metric on $X$ if and only if for all $x, y, z \in X$ we have
$$\|x - z\|^2 \leq \|x - y\|^2 + \|y - z\|^2.$$
On expanding these inner products and canceling some terms, we find that this inequality is equivalent to $0 \leq \mathrm{Re}(\langle x - y, z - y \rangle)$, as desired.
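The condition of Proposition 8.5 is easy to test numerically. In the sketch below (our own illustration, not from the paper) an orthonormal set of "vertices" satisfies it, while adjoining a midpoint, which is not an extreme point of the convex hull, destroys it; this is consistent with Proposition 8.6 below.

```python
import numpy as np

def squared_dist_is_metric(pts):
    """Check the Proposition 8.5 condition 0 <= <x - y, z - y> for all x, y, z."""
    return all(np.dot(x - y, z - y) >= -1e-12
               for x in pts for y in pts for z in pts)

e = np.eye(3)
assert squared_dist_is_metric([e[0], e[1], e[2]])     # orthonormal "vertices"
m = (e[0] + e[1]) / 2                                 # a non-extreme midpoint
assert not squared_dist_is_metric([e[0], e[1], m])    # the condition fails
```

Indeed $d^2(e_0, e_1) = 2$ exceeds $d^2(e_0, m) + d^2(m, e_1) = 1$, so the squared distance is no longer a metric once the midpoint is adjoined.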
The above proposition is related to von Neumann's embedding theorem (for which see appendix A.1 of [16]), but the condition of the above proposition is necessarily much stronger than the negative semidefiniteness of von Neumann's theorem.
The real part of an inner product is itself an inner product when the Hilbert space is considered to be a vector space over R. In view of the above proposition, we now assume that we have a Hilbert space H over R, and a subset X of it having the above property that for all x, y, z ∈ X we have 0 ≤ x − y, z − y .
Let $K$ denote the closed convex hull of $X$ in $H$. Given a $z \in X$, choose any $y \in X$ with $y \neq z$, and let $\varphi_{z;y}$ be the linear functional on $H$ defined by $\varphi_{z;y}(w) = \langle w, z - y \rangle$ for all $w \in H$. Then the above inequality implies that every $x \in X$ lies in the half-space of $H$ defined by $\varphi_{z;y}(w) \geq \langle y, z - y \rangle$. Thus $K$ must lie in this half-space. Notice in particular that $z$ itself lies strictly in the interior of this half-space. From all of this it is clear that each $x \in X$ is an extreme point of $K$. Furthermore, our assumed inequality says that the angles between the lines from one point of $X$ to any two other points of $X$ are acute or right angles.
In our original situation, in which $X$ is a finite subset of $S(A)$, it follows that $X$ consists of exactly all the extreme points of $S(A)$. Thus we obtain:

Proposition 8.6. Let $\Delta$ be the Laplace operator for a connected resistance network on a finite set $X$, let $A = C(X)$ as earlier, and let $\rho_r$ be defined as above on $S(A)$. Then no subset of $S(A)$ that properly contains $X$ has the property that the restriction of $\rho_r$ to it is a metric.
In view of this, I consider the resistance metric to be of a quite different nature than the metrics on state spaces (such as the energy metric) that I have been studying. To me this is a satisfactory resolution to my puzzlement about the resistance metric. In particular, I believe that the "free resistance" defined in equation 4.73 of section 4.9 of [16] does not satisfy the triangle inequality.
It is natural to ask whether the property given in Proposition 8.5 characterizes the operators that arise as Laplace operators for connected resistance networks. A counter-example is given in exercise 2.5 of [20]. The basic idea is quite simple. In Section 5 we discussed forms more general than the energy forms coming from resistance networks, and saw that they too embed $S(A)$ into Hilbert spaces. Now if $X$ has at least 4 points, one can find conductances on $X$ such that $X$ is connected but one of the conductances is 0, while $\mathrm{Re}(\langle x - y, z - y \rangle) > 0$ for all $x, y, z \in X$. Then we can make the 0 conductance slightly negative (so we no longer have a resistance network), in such a way that the angles are changed so little that we still have $\mathrm{Re}(\langle x - y, z - y \rangle) > 0$ for the new inner product.

From Dirichlet forms to CdC's
We have seen that from a CdC and a faithful trace $\tau$ on $A$ we obtain an energy form $E$. This energy form is completely Markov in the sense that each $E_n$ is Markov. One of the central theorems of the general theory of Dirichlet forms is that conversely each completely positive and completely Markov form on $L^2(A, \tau)$ comes from a CdC (and so in our setting comes from a Riemannian metric and trace, and has a corresponding quantum dynamical semigroup, etc.). In the infinite-dimensional case substantial technical assumptions are needed in order to prove this. Here we will just treat the finite-dimensional case. We will use this case later in Section 11. We assume that $E$ is real (Definition 3.8), as is usually required for Dirichlet forms. The main theorem of this section is thus:

Theorem 9.1. Let $E$ be a sesquilinear form on $L^2(A, \tau)$ which is real, completely positive and completely Markov, and has $1_A$ in its null-space. Let $N$ be the operator on $L^2(A, \tau)$ determined by
$$E(a, b) = \langle a, N(b) \rangle_\tau \quad \text{for all } a, b \in A.$$
Then $\Gamma_N$ is a CdC, and $E$ is the energy form for $\Gamma_N$ and $\tau$.

Proof. It is the Markov property that is the key to the proof, but it seems to be a bit tricky to extract useful information from it. We will follow the usual method, as given for example following theorem 2.7 of [1].
Since E is real, the argument given in the proof of Proposition 6.8 shows that (N(a)) * = N(a * ) for all a ∈ A, that is, N ♯ = N. Thus according to Proposition 2.6 we only need to show that Γ N is completely positive. A simple calculation using Corollary 4.4 shows that because E is real E n also is real. But at first we will not use the "completely" aspects of E.
Notice that N is a positive operator on L 2 (A, τ ) such that N(1 A ) = 0. Since N is positive, I + N is invertible, where I denotes the identity operator on the Hilbert space L 2 (A, τ ).
Key Lemma 9.2. Let $R = (I + N)^{-1}$. Then, for any $a \in A$ for which $a \geq 0$ we have $R(a) \geq 0$ as an element of $A$, and $\|R(a)\| \leq \|a\|$.
Proof. Define a positive sesquilinear form, $F$, on $L^2(A, \tau)$ by
$$F(b, c) = \langle b, c \rangle_\tau + E(b, c),$$
and note that $F$ is definite and that $F(b, Ra) = \langle b, a \rangle_\tau$. It is easily calculated that
$$F(b, b) - 2\mathrm{Re}(\langle b, a \rangle_\tau) = F(b - Ra, b - Ra) - F(Ra, Ra).$$
From this it is clear that for fixed $a$ the left-hand side has a unique minimum when $b = Ra$. Thus we find, for fixed $a$, that
$$(*) \qquad \langle b, b \rangle_\tau + E(b, b) - 2\mathrm{Re}(\langle b, a \rangle_\tau) \geq \langle Ra, Ra \rangle_\tau + E(Ra, Ra) - 2\mathrm{Re}(\langle Ra, a \rangle_\tau),$$
with equality only if $b = Ra$. Suppose now that $a = a^*$. Because $E$ is real (Definition 3.8), $N$ preserves the involution, and thus $R$ will also. Consequently $Ra$ is selfadjoint. Let $F$ be an $\mathbb{R}$-valued Lipschitz function on $\mathbb{R}$ with $\mathrm{Lip}(F) \leq 1$. Then on setting $b = F(Ra)$, which is well-defined because $Ra$ is selfadjoint, we obtain inequality (*) for this particular $b$. We apply this last result in the following way. Let $F$ be defined by $F(t) = \max(t, 0)$. Notice that $\mathrm{Lip}(F) = 1$. Observe that for any $a$ such that $a^* = a$ we have $F(a) = a_+$, the positive part of $a$. Suppose that $b \in A$ with $b^* = b$. We denote the negative part of $a$ by $a_-$ and similarly for $b$. Because $a_+$ and $a_-$ are orthogonal to each other in $L^2(A, \tau)$ since $a_+ a_- = 0$, and similarly for $b$, we have
$$\|b - a\|_\tau^2 = \|b_+ - a_+\|_\tau^2 + \|b_- - a_-\|_\tau^2 + 2\mathrm{Re}(\langle b_+, a_- \rangle_\tau + \langle b_-, a_+ \rangle_\tau).$$
Because $b_+$ and $a_-$ are positive and $\tau$ is tracial, $\langle b_+, a_- \rangle_\tau \geq 0$, and similarly for $\langle b_-, a_+ \rangle_\tau$, so that
$$(**) \qquad \|b_+ - a_+\|_\tau \leq \|b - a\|_\tau.$$
(This is a special case of the fact that for any real Lipschitz function $F$ we would have
$$\|F(b) - F(a)\|_\tau \leq \mathrm{Lip}(F)\|b - a\|_\tau,$$
but this general case seems not to have an easy proof. See lemma 2.2 of [1] or proposition 2.5 of [10].) Now assume that $a$ is positive, so that $F(a) = a$. From inequality (**) we then obtain
$$\langle F(b), F(b) \rangle_\tau - 2\mathrm{Re}(\langle F(b), a \rangle_\tau) \leq \langle b, b \rangle_\tau - 2\mathrm{Re}(\langle b, a \rangle_\tau).$$
Using this, with $b = Ra$, in the right side of inequality (*) and cancelling, we obtain for our $F$
$$E(Ra, Ra) < E(F(Ra), F(Ra)) \quad \text{if } F(Ra) \neq Ra.$$
But $\mathrm{Lip}(F) \leq 1$ and $E$ is assumed to be Markov, which means that $E(Ra, Ra) \geq E(F(Ra), F(Ra))$. This contradiction implies that $F(Ra) = Ra$, so that $Ra \geq 0$. This proves the first assertion of the lemma.
To prove the second assertion, for any $r \in \mathbb{R}$ define $F_r$ by $F_r(t) = \min(t, r)$, so again $\mathrm{Lip}(F_r) = 1$. Then $F_r(t) = -(t - r)_+ + r$, so that, using the fact that $A$ is unital and writing $r$ for $r1_A$, we have
$$\|F_r(b) - F_r(a)\|_\tau = \|(b - r)_+ - (a - r)_+\|_\tau \leq \|b - a\|_\tau.$$
When we use this, with $r = \|a\|$ (so that $F_r(a) = a$), in inequality (*) in the way done above, we see that $F_r(Ra) = Ra$, so that $Ra \leq \|a\| 1_A$ and hence $\|R(a)\| \leq \|a\|$, as desired.
Notice that we did not need the full force of the Markov property. We only needed it for the two functions $F(t) = \max(t, 0)$ and $F(t) = \min(t, \|a\|)$. But in the context of Theorem 9.1 the full Markov property will then be a consequence. Suppose now that $E$ is completely Markov, where $E_n$ is defined by the same formula as given just before Proposition 6.4. Then as in that proposition, the "Laplacian" for $E_n$ is just $I_n \otimes N$.
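In the commutative case the conclusions of the Key Lemma can be seen very concretely: for a graph Laplacian $N$, the matrix $R = (I + N)^{-1}$ is entrywise non-negative and fixes the constants. The sketch below is our own numerical illustration, not from the paper.

```python
# For N = diag(chat) - c (a graph Laplacian), I + N is an M-matrix, so its
# inverse R has non-negative entries; since N(1) = 0 we also get R(1) = 1.
# Together these give R(a) >= 0 for a >= 0 and ||R(a)||_sup <= ||a||_sup.
import numpy as np

rng = np.random.default_rng(1)
n = 5
c = rng.random((n, n)); c = (c + c.T) / 2; np.fill_diagonal(c, 0.0)
N = np.diag(c.sum(axis=1)) - c
R = np.linalg.inv(np.eye(n) + N)

assert np.all(R >= -1e-12)                        # positivity-preserving
assert np.allclose(R @ np.ones(n), np.ones(n))    # R(1_A) = 1_A
a = rng.random(n)                                 # a >= 0
assert np.max(np.abs(R @ a)) <= np.max(a) + 1e-12 # ||R(a)|| <= ||a||
```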
If we multiply $E$ by any $t \in \mathbb{R}_{>0}$ the various $E_n$'s will also be multiplied by $t$, and the resulting forms will still be Markov. The corresponding "Laplacians" will also be multiplied by $t$. We can apply the Key Lemma to all of them. For this purpose we set $R_t = (I + tN)^{-1}$, and $R_t^{(n)} = I_n \otimes R_t$. By the Key Lemma each $R_t^{(n)}$ is a positive operator on $M_n(A)$ that is contractive on positive elements. This says that each $R_t$ is a completely positive operator on $A$. Also each of these operators will carry the identity element to itself because $N(1_A) = 0$. The basic inequality for completely positive unital operators that we already used in Example 2.7 implies that for any $A \in M_n(A)$ we have
$$(*{*}*) \qquad R_t^{(n)}(A^*A) - R_t^{(n)}(A^*)\, R_t^{(n)}(A) \geq 0.$$
From this it easily follows that $\|R_t^{(n)}\| = 1$. Note also that $R_0$ is well-defined and $R_0 = I$, and that $R_t$ is actually well-defined also for negative $t$'s in a neighborhood of 0.
In our finite-dimensional situation the function $t \mapsto R_t^{(n)}$ is clearly differentiable for each $n$. Notice that the left-hand side of inequality (***) has value 0 at $t = 0$. It follows, much as in Example 2.7, that the derivative at $t = 0$ of the left-hand side is non-negative. But the derivative $R'_t$ is $-N(I + tN)^{-2}$, so that $R'_0 = -N$, and similarly for $R^{(n)}$. Thus the derivative at $t = 0$ of (***) gives
$$-N(a^*a) - (-N(a^*)a - a^*N(a)) \geq 0,$$
that is, for $\Gamma_N$ defined as in Example 1.12 we have $\Gamma_N(a, a) \geq 0$ for all $a \in A$. In the same way we find that $\Gamma_N^{(n)}(A, A) \geq 0$ for all $A \in M_n(A)$, so that $\Gamma_N$ is completely positive. From Proposition 2.6 it follows that $\Gamma_N$ is a CdC. This completes the proof of Theorem 9.1.
On combining the above result with Theorem 3.6 and Corollary 4.5 we find that the completely Markov property of E implies the completely Leibniz property of E.

Dirac operators
In this section we show how to construct a Hodge-Dirac operator for a Riemannian metric, once a trace has been chosen. We assume throughout that $A$ is a finite-dimensional C*-algebra, that $(\Omega, \partial, \langle \cdot, \cdot \rangle_A)$ is a Riemannian metric for $A$, and that $\tau$ is a faithful trace on $A$. We define the corresponding Hodge-Dirac operator in analogy with definition 9.24 of [13]. For the case in which $A$ is commutative our Dirac operator is essentially the operator used by Davies in theorem 4.6 of [9]. We begin by defining an ordinary inner product, $\langle \cdot, \cdot \rangle_\tau$, on $\Omega$ by $\langle \omega, \omega' \rangle_\tau = \tau(\langle \omega, \omega' \rangle_A)$. Because $\Omega$ is finite-dimensional, it is a Hilbert space for this inner product. We denote this Hilbert space by $L^2(\Omega, \tau)$. Then $\partial$ can be viewed as an operator from $L^2(A, \tau)$ to $L^2(\Omega, \tau)$. We denote the adjoint of this operator, going from $L^2(\Omega, \tau)$ to $L^2(A, \tau)$, by $\partial^*$. Let
$$H = L^2(A, \tau) \oplus L^2(\Omega, \tau).$$
We define the operator $D$ on $H$ by
$$D(b \oplus \omega) = \partial^*(\omega) \oplus \partial(b).$$
We view $A$ as acting on $H$ by means of its left actions on $A$ and $\Omega$. We now calculate much as in the proof of theorem 4.6 of [9]. For $a \in A$ and $b \oplus \omega$ in $H$ we have
$$(*) \qquad [D, a](b \oplus \omega) = (\partial^*(a\omega) - a\partial^*(\omega)) \oplus (\partial(a)b).$$
By the Leibniz rule $\partial(ab) - a\partial(b) = \partial(a)b$, and
$$\|\partial(a)b\|_\tau \leq \|\langle \partial a, \partial a \rangle_A\|_\infty^{1/2}\, \|b\|_\tau = \|\partial a\|_\infty \|b\|_\tau,$$
where for emphasis we here denote the C*-norm of $A$ by $\|\cdot\|_\infty$, and we set $\|\partial a\|_\infty = \|\langle \partial a, \partial a \rangle_A\|_\infty^{1/2}$. Furthermore, for any $c \in A$ we have
$$\langle c, \partial^*(a\omega) - a\partial^*\omega \rangle_\tau = \langle a^*\partial c, \omega \rangle_\tau - \langle \partial(a^*c), \omega \rangle_\tau = -\langle (\partial a^*)c, \omega \rangle_\tau,$$
and by the Cauchy-Schwarz inequality this is bounded in absolute value by $\|\partial a^*\|_\infty \|c\|_\tau \|\omega\|_\tau$. From this and the calculation (*) above we find that
$$\|[D, a]\| \leq \max(\|\partial a\|_\infty, \|\partial a^*\|_\infty).$$
Notice that $a \mapsto \|\partial a\|_\infty$ is not in general stable under the involution, whereas $a \mapsto [D, a]$ is stable because $D$ is self-adjoint. Thus the form of the right-hand side of the above inequality is reasonable. Notice further that because the representation of $A$ on $L^2(A, \tau)$ is faithful, we have in fact
$$\|[D, a]\| = \max(\|\partial a\|_\infty, \|\partial a^*\|_\infty).$$
When $A = C(X)$ and its Riemannian metric is determined by the function $c$ on $Z$, then Theorem 7.1 shows that for its Hodge-Dirac operator $D$ the corresponding seminorm, $L$, is given by
$$L(f) = \|[D, f]\| = \sup_{x \in X} \Big( \sum_y c_{xy}\, |f(x) - f(y)|^2 \Big)^{1/2}.$$
Aside from a traditional factor of 1/2 this is the same seminorm as the seminorms $d_3$ and $d_4$ defined after lemma 4.1 of [9].
It is easily seen to be Markov (and Leibniz) in slight generalization of the seminorms defined after the proof of Theorem 3.6.
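The Leibniz property of this seminorm comes from the splitting $f(x)g(x) - f(y)g(y) = f(x)(g(x) - g(y)) + (f(x) - f(y))g(y)$; the sketch below (our own illustration, not from the paper) tests the resulting inequality on random functions.

```python
# L(f) = sup_x ( sum_y c_xy |f(x) - f(y)|^2 )^{1/2}; we check numerically that
# L(fg) <= ||f||_inf L(g) + L(f) ||g||_inf  (the Leibniz inequality).
import numpy as np

rng = np.random.default_rng(2)
n = 5
c = rng.random((n, n)); c = (c + c.T) / 2; np.fill_diagonal(c, 0.0)

def L(f):
    diff2 = np.abs(f[:, None] - f[None, :]) ** 2
    return np.sqrt((c * diff2).sum(axis=1)).max()

for _ in range(100):
    f, g = rng.standard_normal(n), rng.standard_normal(n)
    assert L(f * g) <= np.max(np.abs(f)) * L(g) + L(f) * np.max(np.abs(g)) + 1e-9
```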
We now show that the above seminorm seldom is the energy seminorm for a Riemannian metric on A.
Theorem 10.4. Let $A = C(X)$ and let its Riemannian metric be determined by the function $c$ on $Z$ as in Theorem 7.1. Assume that $c_{xy} = c_{yx}$ for all $x, y \in X$ and that $X$ is metrically connected. Then the seminorm $L(f) = \|[D, f]\|$ for its Hodge-Dirac operator can be obtained as the energy seminorm from a Riemannian metric on $A$ if and only if there is a point $t \in X$ such that every other point in $X$ is linked only to $t$, that is, $c_{xy} \neq 0$ exactly when $x = t$ or $y = t$ (with $x \neq y$).
Proof. Suppose that the above $L$ for $D$ can be obtained from an energy form. Then $L$ must satisfy the parallelogram law. Let $p$ and $q$ be any two distinct points of $X$, and set $f = \delta_p$ and $g = \delta_q$, so that $f + g = \delta_p + \delta_q$ and $f - g = \delta_p - \delta_q$. Let us denote $(L(f))^2$ simply by $L^2(f)$, etc. Then it is easily calculated that $L^2(f) = \hat{c}(p)$, and similarly for $L^2(g)$, where $\hat{c}$ was defined before Theorem 7.2 by $\hat{c}(p) = \sum_x c_{xp}$. For any distinct $u, v \in X$ define $m(u, v)$ by
$$m(u, v) = \max\{c_{xu} + c_{xv} : x \in X,\ x \neq u, v\}.$$
In terms of $m$ one can calculate that
$$L^2(f + g) = ((\hat{c}(p) \vee \hat{c}(q)) - c_{pq}) \vee m(p, q), \qquad L^2(f - g) = ((\hat{c}(p) \vee \hat{c}(q)) + 3c_{pq}) \vee m(p, q),$$
where the $\vee$ means "maximum". Thus if $L$ satisfies the parallelogram law, then in particular we must have
$$(*) \qquad \big(((\hat{c}(p) \vee \hat{c}(q)) - c_{pq}) \vee m(p, q)\big) + \big(((\hat{c}(p) \vee \hat{c}(q)) + 3c_{pq}) \vee m(p, q)\big) = 2\hat{c}(p) + 2\hat{c}(q).$$
Now, choose $t \in X$ such that $\hat{c}(t) \geq \hat{c}(x)$ for all $x \in X$. We will show that every $x \in X$ is linked exactly to $t$.
As a first step, we show that every element of $X$ is linked to $t$, that is, $c_{tq} \neq 0$ for every $q \in X$ with $q \neq t$. So given $q$, suppose that $c_{tq} = 0$. Set $p = t$ in formula (*). Then, since $\hat{c}(t) \geq \hat{c}(q)$, that formula becomes $\hat{c}(t) \vee m(t, q) = \hat{c}(t) + \hat{c}(q)$. If $\hat{c}(t) \geq m(t, q)$ then we obtain $\hat{c}(q) = 0$, which contradicts connectedness. Otherwise, by the definition of $m(t, q)$ there is an $r \in X$ distinct from $t$ and $q$ such that $c_{rt} + c_{rq} = \hat{c}(t) + \hat{c}(q)$. But then $\hat{c}(r) \geq c_{rt} + c_{rq} = \hat{c}(t) + \hat{c}(q) > \hat{c}(t)$, contradicting the maximality of $\hat{c}(t)$.
As the final step, we show that the elements of $X$ are only linked to $t$, that is, for every $p \in X$ with $p \neq t$ we have $c_{xp} = 0$ for all $x \neq t$. So, let $p$ be given, with $p \neq t$. Choose $q \in X$ distinct from $p$ such that $c_{pq} \geq c_{px}$ for all $x \in X$ (so possibly $q = t$). Then for any $x$ distinct from $p$ and $q$ we have $\hat{c}(q) \geq c_{xq}$, and so $(\hat{c}(p) \vee \hat{c}(q)) + 3c_{pq} \geq c_{xp} + c_{xq}$. It follows that $(\hat{c}(p) \vee \hat{c}(q)) + 3c_{pq} \geq m(p, q)$.
Because $r$ is distinct from $p$ and $q$, we must have $\hat{c}(q) = c_{pq}$. So $q$ is linked only to $p$. But by step 1 above $q$ is linked to $t$. This contradicts the assumption that $p \neq t$. Thus we must have $\hat{c}(p) \leq \hat{c}(q)$, in which case we find that
$$c_{pr} + c_{qr} = 2(\hat{c}(p) - c_{pq}) + (\hat{c}(q) - c_{pq}).$$
Thus, much as above, we must have $\hat{c}(p) = c_{pq}$, so that $p$ is linked only to $q$. But by step 1 above $p$ is linked to $t$. Thus we must have $q = t$, and so $p$ is linked only to $t$, as desired.
The converse assertion of the theorem is easily verified, and is closely related to proposition 3.8 of [35] and the class of examples discussed in connection with standard deviation in section 2 of [35], as we will discuss again in Section 12.
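The failure and success of the parallelogram law in the proof above are easy to see numerically. In the sketch below (our own illustration, not from the paper), the triangle with unit conductances gives $7 \neq 8$ for the pair $\delta_p, \delta_q$, while for a star $L^2$ reduces to a single quadratic form (the sum at the hub dominates every leaf term), so the law holds for all $f, g$.

```python
import numpy as np

def L2(c, f):    # (L(f))^2 for L(f) = sup_x (sum_y c_xy |f(x)-f(y)|^2)^{1/2}
    diff2 = np.abs(f[:, None] - f[None, :]) ** 2
    return (c * diff2).sum(axis=1).max()

# Triangle (not a star): the parallelogram law fails for f = delta_0, g = delta_1.
tri = np.ones((3, 3)) - np.eye(3)
d = np.eye(3)
lhs = L2(tri, d[0] + d[1]) + L2(tri, d[0] - d[1])   # = 2 + 5 = 7
rhs = 2 * L2(tri, d[0]) + 2 * L2(tri, d[1])          # = 2*2 + 2*2 = 8
assert not np.isclose(lhs, rhs)

# Star with hub 0: L^2(f) = sum_y c_0y |f(0) - f(y)|^2, a quadratic form.
star = np.zeros((4, 4)); star[0, 1:] = star[1:, 0] = [1.0, 2.0, 3.0]
rng = np.random.default_rng(3)
for _ in range(50):
    f, g = rng.standard_normal(4), rng.standard_normal(4)
    assert np.isclose(L2(star, f + g) + L2(star, f - g),
                      2 * L2(star, f) + 2 * L2(star, g))
```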
Returning to the general situation, we caution that if we start with a spectral triple $(A, H, D)$ for $A$, and then form its corresponding Riemannian metric $(\Omega, \partial, \langle \cdot, \cdot \rangle_A)$, and then form the Hodge-Dirac operator $D_H$ for this Riemannian metric as above, then usually we will have $D \neq D_H$. The following commutative example is instructive. According to Example 2.4 we must apply to this latter the conditional expectation $E$ from $B(H)$ onto $A$ corresponding to the trace on $B(H)$. We see that it suffices to determine the restriction of this conditional expectation to $C(Z)$, where $C(Z)$ is viewed as an algebra of pointwise multiplication operators on $B(H)$. Since $A$ can be viewed as consisting of the functions in $C(Z)$ that depend only on the first coordinate, averaging over the second coordinate gives a conditional expectation from $C(Z)$ onto $A \subseteq C(Z)$. By examining the proof of proposition 2.36 of [40] it is not difficult to see that this is the restriction of the conditional expectation $E$ from $B(H)$ onto $A$. Thus for any $F \in C(Z)$ we have
$$E(F)(x) = (1/n) \sum_y F(x, y),$$
where $n$ is the number of elements of $X$. From this we find that the CdC is
$$\Gamma(f, f)(x) = (1/n) \sum_y c_{xy}^2\, |f(x) - f(y)|^2.$$
Notice that, up to the constant in front, this is the expression used to determine the CdC in Theorem 7.1, except using $c^2$ instead of $c$. From Example 10.3 we see that the seminorm $L$ corresponding to the Dirac operator for this CdC is given by
$$L(f) = \sup_x \Big( (1/n) \sum_y c_{xy}^2\, |f(x) - f(y)|^2 \Big)^{1/2}.$$
This is quite different from the seminorm $L$ that we started with near the beginning of this example, and this shows that the Dirac operators themselves are quite different.

Quotients of energy metrics
Quotients of energy forms are discussed in the literature for the case in which A is commutative, e.g. on page 44 of [20]. But I have not seen any discussion of quotients for non-commutative A. We give such a discussion here, since quotients are important for the theory of quantum metric spaces. See section 5 of [34].
Let $E$ be the energy form for a Riemannian metric and trace on $A$, and let $B$ be a quotient C*-algebra of $A$, with $\pi$ the quotient homomorphism from $A$ onto $B$. It is not so clear how we should define the quotient of $E$ on $B$. But we can consider the corresponding energy seminorm, $L_E$, on $A$, and we do know how to take quotients of (semi)norms. The quotient, $L_E^B$, of $L_E$ is, of course, defined by
$$L_E^B(b) = \inf\{L_E(a) : a \in A \text{ and } \pi(a) = b\}$$
for all $b \in B$. The following general observation is an important step in our discussion.

Proposition 11.1. Let $L$ be a seminorm on a C*-algebra $A$, and let $B$ be a quotient C*-algebra of $A$. Let $L^B$ denote the quotient seminorm on $B$. If $L$ is Markov then so is $L^B$.
Proof. Let $b \in B$ with $b^* = b$, and let $F$ be a Lipschitz function from $\sigma(b)$ to $\mathbb{R}$. Let $\tilde{F}$ be an extension of $F$ to all of $\mathbb{R}$ such that $\mathrm{Lip}(\tilde{F}) = \mathrm{Lip}(F)$. (See Theorem 1.5.6 of [41].) Let $\varepsilon > 0$ be given. Then there exists an $a \in A$ with $a^* = a$ such that $\pi(a) = b$ and $L(a) \leq L^B(b) + \varepsilon$. Note that $\pi(\tilde{F}(a)) = \tilde{F}(b) = F(b)$. Then
$$L^B(F(b)) \leq L(\tilde{F}(a)) \leq \mathrm{Lip}(\tilde{F})\, L(a) \leq \mathrm{Lip}(F)(L^B(b) + \varepsilon).$$
Since $\varepsilon$ is arbitrary, we obtain $L^B(F(b)) \leq \mathrm{Lip}(F)\, L^B(b)$, as desired.

Now in our finite-dimensional setting every 2-sided ideal of $A$ is generated by a central projection, and the quotient by that ideal can be identified with the sub-C*-algebra generated by the complementary central projection. Thus there is a (proper) central projection, $p$, in $A$ such that $B$ can be identified with $pA$. We write $B = pA$. The quotient map from $A$ onto $B$ is then simply given by $\pi(a) = pa$.
To understand $L_E^B$ more clearly we now express it in terms of the Laplace operator $\Delta$ for $E$. We are primarily interested in the corresponding metric on the state space, and so to avoid unimportant complications we will treat here only the case in which $A$ is metrically connected (Definition 5.2). Thus we assume that if $L_E(a) = 0$ then $a \in \mathbb{C}1_A$, so that the kernel of $\Delta$ is exactly $\mathbb{C}1_A$. We also require that $E$ is real, as defined in Definition 3.8.
We follow the argument that is given for the commutative case around lemma 2.1.5 of [20]. Many of the calculations below will work for any positive operator on $L^2(A, \tau)$ whose kernel is exactly $\mathbb{C}1_A$. Let $C = (1 - p)A$, so that $A = B \oplus C$ as C*-algebras. Let $\tau$ also denote its restrictions to $B$ and $C$, so that $L^2(A, \tau) = L^2(B, \tau) \oplus L^2(C, \tau)$, an orthogonal decomposition for the $\tau$-inner-product. (But note that if $\tau$ happens to be normalized, the traces obtained by restricting $\tau$ to $B$ or $C$ will not be normalized, but this is not a difficulty.) With respect to this decomposition $\Delta$ can be expressed as a matrix:
$$\Delta = \begin{pmatrix} R & J^* \\ J & S \end{pmatrix},$$
in which $R \geq 0$ and $S \geq 0$. Because $A$ is metrically connected, $S$ is invertible as an operator on $L^2(C, \tau)$. (To see this, note that if $c$ is in the kernel of $S$, and if we set $a = 0_B \oplus c$, then $E(a, a) = \langle a, \Delta a \rangle_\tau = 0$, so that $a \in \mathbb{C}1_A$, so that $c = 0$.) Then we can use $S$ to do "row and column operations" on the matrix for $\Delta$ to obtain its Schur complement. Specifically, we have
$$\begin{pmatrix} I & -J^*S^{-1} \\ 0 & I \end{pmatrix} \begin{pmatrix} R & J^* \\ J & S \end{pmatrix} \begin{pmatrix} I & 0 \\ -S^{-1}J & I \end{pmatrix} = \begin{pmatrix} R - J^*S^{-1}J & 0 \\ 0 & S \end{pmatrix},$$
and we set $\Delta_B = R - J^*S^{-1}J$, which is the Schur complement for $S$. From the above computations we see that $\Delta_B \geq 0$ as a Hilbert-space operator. It is thus natural to define the quotient, $E_B$, of $E$ by
$$E_B(b, b') = \langle b, \Delta_B(b') \rangle_\tau \quad \text{for all } b, b' \in B.$$
Notice that $1_B = p$. Because $1_A = 1_B \oplus 1_C$, it follows from the above computations that $\Delta_B(1_B) = 0$. Furthermore, if $\Delta_B(b) = 0$, then if we set $a = b \oplus (-S^{-1}Jb)$ we see that $E(a, a) = 0$, so that $a \in \mathbb{C}1_A$ and so $b \in \mathbb{C}1_B$. Thus $B$ is metrically connected.
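In the commutative case this Schur-complement quotient is the classical "Kron reduction" of a network. The sketch below is our own numerical illustration, not from the paper: it checks that $\Delta_B$ is positive, kills the constants, has kernel exactly the constants, and is again a graph Laplacian.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 6, 3               # |X| = 6; the quotient B keeps the first k points
c = rng.random((n, n)); c = (c + c.T) / 2; np.fill_diagonal(c, 0.0)
Delta = np.diag(c.sum(axis=1)) - c

R, Jstar = Delta[:k, :k], Delta[:k, k:]
J, S = Delta[k:, :k], Delta[k:, k:]
Delta_B = R - Jstar @ np.linalg.inv(S) @ J    # Schur complement for S

eig = np.linalg.eigvalsh(Delta_B)
assert np.all(eig >= -1e-10)                       # Delta_B >= 0
assert np.allclose(Delta_B @ np.ones(k), 0.0)      # Delta_B(1_B) = 0
assert np.sum(eig > 1e-10) == k - 1                # kernel is exactly C 1_B
offdiag = Delta_B - np.diag(np.diag(Delta_B))
assert np.all(offdiag <= 1e-10)                    # again a graph Laplacian
```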
Theorem 11.2. Let $E$ be the energy form for a metrically connected Riemannian metric on $A$ and a faithful trace on $A$. Assume further that $E$ is real (Definition 3.8). Let $B$ be a quotient C*-algebra of $A$, and define $E_B$ as above. Then $E_B$ is the energy form for a metrically connected Riemannian metric on $B$.
Proof. According to Theorem 9.1 it suffices to show that E B is a real completely positive and completely Markov form.
Because $E$ is assumed to be real, its energy seminorm $L_E$ is a *-seminorm. It is easily seen that then its quotient seminorm is a *-seminorm, so that
$$L_E^B(b^*) = L_E^B(b)$$
for all $b \in B$. By the usual polarization identity it follows that $E_B$ is real.
For each natural number $n$ we must show that the form $(E_B)_n$ is positive and Markov, where $(E_B)_n$ is defined on $M_n(B)$ by
$$(E_B)_n(B, B') = \sum_{j,k} E_B(B_{jk}, B'_{jk})$$
for all $B, B' \in M_n(B)$. But, much as in the proof of Proposition 6.4, the right-hand side is equal to
$$\langle B, (I_n \otimes \Delta_B)(B') \rangle_\tau.$$
Now by Proposition 6.4 the Laplacian for $E_n$ is $I_n \otimes \Delta$. Then it is easily seen that the matricial expression for this operator for the decomposition $M_n(A) = M_n(B) \oplus M_n(C)$ is given by
$$I_n \otimes \Delta = \begin{pmatrix} I_n \otimes R & I_n \otimes J^* \\ I_n \otimes J & I_n \otimes S \end{pmatrix}.$$
Notice that $I_n \otimes S$ is invertible. Then we see that the Schur complement for $I_n \otimes S$, which we denote by $(I_n \otimes \Delta)_B$, is
$$I_n \otimes R - (I_n \otimes J^*)(I_n \otimes S)^{-1}(I_n \otimes J) = I_n \otimes \Delta_B.$$
Arguing much as above, and comparing with the expression obtained above for (E B ) n , we thus find that the quotient, (E n ) B , of E n on M n (B) is given by (E B ) n . It follows that (E B ) n is positive, and from Proposition 11.1 it follows that (E B ) n is Markov. Thus E B is completely positive and completely Markov, and so comes from a CdC.
For me this theorem is striking because it means that the class of Leibniz seminorms coming from Riemannian metrics that are τ -real for a trace has the property that the quotient of any seminorm in this class is again Leibniz, as will be any quotient of the quotient, etc. This is quite a contrast with the difficulties I had with quotients of Leibniz seminorms not necessarily being again Leibniz, as discussed, for example, in section 5 of [34].

The relationship with standard deviation
In the first two paragraphs of section 2 of [35], for a finite set $X$, a Dirac operator is defined on $A = C(X)$ whose corresponding seminorm is easily seen to be of the form
$$L(f) = \Big( \sum_{x \neq x_*} \beta_x\, |f(x) - f(x_*)|^2 \Big)^{1/2},$$
where $x_*$ is a special point in $X$ and the $\beta_x$'s are strictly positive real numbers. (For the $\alpha_x$'s of those two paragraphs in [35] we have $\beta_x = |\alpha_x|^2$.) If we include a factor of 1/2, this seminorm clearly corresponds to a resistance network in which every point of $X$ is connected only to $x_*$, with the conductances given by $c_{xx_*} = \beta_x$ while $c_{xy} = 0$ if $x \neq x_* \neq y$. Notice that this is exactly the situation that was obtained in Theorem 10.4. The normalization $\sum |\alpha_x|^2 = 1$ used in [35] corresponds to $\hat{c}(x_*) = 1$ while $\hat{c}(x) = c_{xx_*}$ for $x \neq x_*$.
In [35] the quotient of the above seminorm $L$ by the minimal ideal of $C(X)$ corresponding to $x_*$ is shown to be given by standard deviations. This result is also extended there to non-commutative C*-algebras. We now show that standard deviations, and their versions for non-commutative C*-algebras, come from our Riemannian metrics.
For the set-up we have a finite-dimensional C*-algebra $A$ and a faithful trace $\tau$ on $A$. We also have a positive element, $p$, of $A$ such that $\tau(p) = 1$, corresponding to the function $\hat{c}$ restricted to the complement in $X$ of $\{x_*\}$. We assume that $p$ is strictly positive, corresponding to the connectedness of the resistance network. Then $p$ determines a faithful state $\mu$ on $A$ by $\mu(a) = \tau(pa)$. We make the further quite strong requirement on $p$ that it is in the center of $A$. Thus $\mu$ is a faithful tracial state.
From these formulas it is not difficult to guess the form of a Riemannian metric that leads to $\Delta$. To obtain it we proceed as follows. Let $\tilde{\Omega} = A \oplus A$, with $B$-valued inner product determined by
$$\langle (a, b), (c, d) \rangle_B = (1/2)(b^*dp,\ \tau(a^*cp)) = (1/2)(b^*dp,\ \mu(a^*c)).$$
It is easily checked that with this inner product $\tilde{\Omega}$ is a right Hilbert $B$-module, and in fact a correspondence over $B$. We define a derivation from $B$ into $\tilde{\Omega}$ by
$$\partial((a, \alpha)) = (a - \alpha,\ -a + \alpha),$$
where of course here $\alpha$ means $\alpha 1_A$. We note that if $\partial((a, \alpha)) = 0$, then $(a, \alpha) = \alpha(1_A, 1)$. Let $\Omega$ be the sub-bimodule of $\tilde{\Omega}$ generated by the range of $\partial$. Then we see that $(\Omega, \langle \cdot, \cdot \rangle_B, \partial)$ is a Riemannian metric for $B$, for which $B$ is metrically connected. Let $\Gamma$ be its CdC. A simple calculation shows that $\Gamma = \Gamma_\Delta$ for the $\Gamma_\Delta$ defined in the previous paragraph, as desired. It is easy to check that $\Gamma$ is $\tilde{\tau}$-real, and that $\Delta$ is the Laplace operator for $\Gamma$ and $\tilde{\tau}$. The energy form for $\Gamma$ and $\tilde{\tau}$ is clearly given by
$$E_\Gamma((a, \alpha), (b, \beta)) = \mu((a - \alpha)^*(b - \beta)).$$
We can now consider the quotient of the energy form when we factor B by its ideal C. The quotient can be identified in the evident way with A. We defined ∆ by its matrix for the decomposition B = A ⊕ C of B. Thus we are already in position to apply the discussion leading up to Theorem 11.2 to obtain the Laplace operator ∆ A for the quotient A.
(For the case in which $A$ is commutative, this is closely related to the second half of remark 4.40 of [16].) For our notation above, in which $M$ plays the role of $R$ in Section 11, we find that
$$\Delta_A(a) = M(a) - J^*J(a) = p(a - \mu(a)).$$
The corresponding CdC is $\Gamma_{\Delta_A}$ (defined as in Example 1.12), and the corresponding energy form is
$$E_A(a, b) = \langle a, \Delta_A(b) \rangle_\tau = \mu((a - \mu(a))^*(b - \mu(b))).$$
The corresponding energy seminorm is
$$L_A(a) = \|a - \mu(a)\|_\mu,$$
which, when $a^* = a$, is exactly the standard deviation of $a$ for the state $\mu$ as discussed in [35] and in the quantum physics literature.
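For $A = C(X)$ this can be checked numerically: eliminating the hub of a star network by the Schur complement of Section 11 produces exactly the operator $a \mapsto p(a - \mu(a))$, whose energy form is the variance. The sketch below is our own illustration, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
p = rng.random(n); p /= p.sum()   # conductances to the hub, normalized: tau(p) = 1

# Star-network Laplace operator, hub listed last.
Delta = np.block([[np.diag(p), -p[:, None]],
                  [-p[None, :], np.array([[p.sum()]])]])
S = Delta[n:, n:]                 # the hub block
Delta_A = Delta[:n, :n] - Delta[:n, n:] @ np.linalg.inv(S) @ Delta[n:, :n]

a = rng.standard_normal(n)
mu = p @ a                        # mu(a) = tau(p a)
assert np.allclose(Delta_A @ a, p * (a - mu))   # Delta_A(a) = p (a - mu(a))
var = p @ (a - mu) ** 2           # ||a - mu(a)||_mu^2
assert np.isclose(a @ Delta_A @ a, var)         # energy form = variance
```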
It is not hard to guess a specific description of the Riemannian metric for this situation. Define a "slice map" $E$ from $A \otimes A$ onto $1_A \otimes A = A$ by $E(a \otimes b) = \tau(a)b$. It is easy to see that, up to normalization of $\tau$, this is a faithful conditional expectation. Much as in Examples 1.6 and 2.3 we define a corresponding $A$-valued inner product on $A \otimes A$, determined on elementary tensors by
$$\langle a \otimes b, c \otimes d \rangle_A = \tau(a^*c)\, b^*d.$$
With this inner product $A \otimes A$ becomes a correspondence over $A$. We define $\partial$ somewhat as in Theorem 2.8. Because $p$ is invertible, the sub-bimodule generated by the range of $\partial$ is, by a simple well-known calculation, seen to be the kernel of the bimodule homomorphism from $A \otimes A$ onto $A$ sending $a \otimes b$ to $ab$. We denote this bimodule by $\Omega$. Then $(\Omega, \langle \cdot, \cdot \rangle_A, \partial)$ is a Riemannian metric for $A$. Its CdC agrees with the CdC obtained just above. All of this is related to the "independent copies trick" discussed before proposition 3.6 of [35].
We remark that it does not seem easy to guess this Riemannian metric directly from that preceding it for B, of which A is the quotient, without using the Laplace operators. This seems to be related to the fact that so far there does not seem to be known a useful general way to obtain from a spectral triple on a C*-algebra a spectral triple on a quotient C*-algebra of that C*-algebra.
We also remark that in [35] the case in which µ is not tracial is treated, but we do not pursue that aspect here.