Pfaffian point processes from free fermion algebras; perfectness and conditional measures

The analogy between determinantal point processes (DPPs) and free fermionic calculi is well-known. We point out that, from the perspective of free fermionic algebras, Pfaffian point processes (PPPs) naturally emerge, and show that a positive contraction acting on a "doubled" one-particle space with an additional structure defines a unique PPP. Recently, Olshanski inverted the direction from free fermions to DPPs, proposed a scheme to construct a fermionic state from a quasi-invariant probability measure, and introduced the notion of perfectness of a probability measure. We propose a method to check perfectness and show that Schur measures are perfect as long as they are quasi-invariant under the action of the symmetric group. We also study conditional measures for PPPs associated with projection operators. Consequently, we show that the conditional measures are again PPPs associated with projection operators onto explicitly described subspaces.

1. Introduction

1.1. Pfaffian point process. In this paper, we assume that X is a countable set. The collection of point configurations in X is identified with Ω = Ω(X) = { 0, 1 }^X, which is equipped with the product topology, making it a compact topological space. We regard each element ω ∈ Ω as a function ω : X → { 0, 1 } or as a collection of points ω = { x_i ∈ X }_i. We adopt the σ-algebra Σ of Borel sets. Then, for distinct points x_1, . . . , x_n ∈ X, the cylinder set Ω_{x_1,...,x_n} = { ω ∈ Ω | ω(x_1) = · · · = ω(x_n) = 1 } is measurable. Given a probability measure M on (Ω, Σ), the n-point correlation function ρ^M_n, n ∈ N, is an n-variable symmetric function defined as the probability weight of cylinder sets: ρ^M_n(x_1, . . . , x_n) = M(Ω_{x_1,...,x_n}), where x_1, . . . , x_n ∈ X are distinct. It is conventional to extend ρ^M_n to a function on X^n so that it vanishes whenever any two points coincide. Note that the system of correlation functions { ρ^M_n }_{n∈N} determines the probability measure M uniquely, since the cylinder sets generate Σ. A random variable X with values in (Ω, Σ) is called a point process in X. We also call a probability measure on (Ω, Σ) a point process, not distinguishing a random variable from its distribution.
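These definitions can be checked on a finite toy model: since correlation functions are weights of cylinder sets, they are computable by direct summation. The following sketch is only an illustration (the measure M is chosen ad hoc, not taken from the text):

```python
from itertools import product

# Ground set X = {0, 1, 2}; configurations are functions X -> {0, 1}.
X = [0, 1, 2]
configs = list(product([0, 1], repeat=len(X)))

# A generic probability measure M on Omega (hypothetical weights).
weights = [1, 2, 3, 4, 4, 3, 2, 1]
Z = sum(weights)
M = {w: v / Z for w, v in zip(configs, weights)}

def rho(M, *points):
    """n-point correlation: mass of the cylinder set {omega : omega(x_i) = 1}."""
    if len(set(points)) < len(points):
        return 0.0  # conventional extension: vanishes when two points coincide
    return sum(p for w, p in M.items() if all(w[x] == 1 for x in points))

print(rho(M, 0), rho(M, 0, 1), rho(M, 1, 0))
```

Symmetry of ρ_n and the vanishing-on-the-diagonal convention are visible directly in the output.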
To define a Pfaffian point process, we need to fix some notation. Suppose that a 2 × 2-matrix-valued function K(·, ·) : X × X → M(2; C) satisfies the anti-symmetry
(1.1) K(x, y)^T = −K(y, x), x, y ∈ X.
A probability measure M on (Ω, Σ) is called a Pfaffian point process (PPP) with correlation kernel K if each correlation function is given by ρ^M_n(x_1, . . . , x_n) = Pf [K(x_i, x_j)]_{1≤i,j≤n} for every n ∈ N and distinct x_1, . . . , x_n ∈ X.
1.2. From a positive contraction to a PPP. A well-known construction of a DPP on X [Sos00] starts from an operator K on ℓ^2(X) such that K = K* and 0 ≤ K ≤ 1, namely, a positive contraction on ℓ^2(X). Given a positive contraction K on ℓ^2(X), there exists a DPP M_K on X such that each correlation function is given by ρ^{M_K}_n(x_1, . . . , x_n) = det [K(x_i, x_j)]_{1≤i,j≤n}, where K(x, y) = (e_x, K e_y)_{ℓ^2(X)}.
Here e_x ∈ ℓ^2(X), x ∈ X, is the function defined by e_x(y) = δ_{x,y}, y ∈ X. The collection { e_x }_{x∈X} forms a complete orthonormal system of ℓ^2(X).
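For a finite ground set, one convenient way to produce such a positive contraction (not used in the text itself, but standard for numerics) is the L-ensemble construction K = L(1 + L)^{-1}; the sketch below verifies ρ^{M_K}_2 = det[K(x_i, x_j)] by brute-force enumeration of all configurations:

```python
import numpy as np
from itertools import chain, combinations

rng = np.random.default_rng(0)
m = 4
A = rng.standard_normal((m, m))
L = A @ A.T                               # positive semi-definite L-matrix
K = L @ np.linalg.inv(np.eye(m) + L)      # positive contraction, 0 <= K <= 1

# Exact configuration probabilities of the L-ensemble: P(S) = det(L_S)/det(I+L).
Z = np.linalg.det(np.eye(m) + L)
def P(S):
    S = list(S)
    return np.linalg.det(L[np.ix_(S, S)]) / Z if S else 1.0 / Z

subsets = list(chain.from_iterable(combinations(range(m), r) for r in range(m + 1)))
# Correlation of the pair {0, 2}: total mass of configurations containing it.
pair = (0, 2)
lhs = sum(P(S) for S in subsets if set(pair) <= set(S))
rhs = np.linalg.det(K[np.ix_(pair, pair)])
print(lhs, rhs)
```

The agreement of the two printed values is the determinantal identity defining a DPP for this particular K.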
Here, we give a direct generalization of this result to PPPs. We define the complex conjugation J on ℓ^2(X) as the anti-linear operator that fixes each function e_x, x ∈ X. We set K = ℓ^2(X) ⊕ ℓ^2(X) and take the anti-unitary involution Γ on K defined by
(1.2) Γ(f ⊕ g) := Jg ⊕ Jf, f, g ∈ ℓ^2(X).
We consider the collection of operators
(1.3) Q(K, Γ) := { S ∈ B(K) | S = S*, 0 ≤ S ≤ 1, S + S̄ = 1 },
where B(K) is the set of bounded operators on K, and for any operator A ∈ B(K), we write Ā := ΓAΓ. We will show that each operator S ∈ Q(K, Γ) uniquely determines a PPP.
Proposition 1.3. Let us take S ∈ Q(K, Γ) and write it as S = ( S_11 S_12 ; S_21 S_22 ) along the decomposition K = ℓ^2(X) ⊕ ℓ^2(X). Then there exists a PPP M_S on X such that each correlation function is given by ρ^{M_S}_n(x_1, . . . , x_n) = Pf [K_S(x_i, x_j)]_{1≤i,j≤n}, where
(1.4) K_S(x, y) = ( (e_x, S_21 e_y)_{ℓ^2(X)} (e_x, S_22 e_y)_{ℓ^2(X)} ; (e_x, (S_11 − 1)e_y)_{ℓ^2(X)} (e_x, S_12 e_y)_{ℓ^2(X)} ), x, y ∈ X.
It is standard to restate Proposition 1.3 in terms of the Fredholm Pfaffian. Let us assume that a matrix-valued function K : X × X → M(2; C) is finitely supported. We take another matrix-valued function J : X × X → M(2; C) defined by J(x, y) = δ_{x,y} ( 0 1 ; −1 0 ), x, y ∈ X.
Then the sum J + K still exhibits the anti-symmetry (1.1). For each Y = { x_1, . . . , x_n } ⊂ X, it can be verified that [Rai00]
Pf [(J + K)(x_i, x_j)]_{1≤i,j≤n} = 1 + Σ_{∅≠X⊆Y} Pf [K(x, y)]_{x,y∈X},
where the sum runs over non-empty subsets X ⊆ Y. Since K is now supposed to be finitely supported, this expression stabilizes in the limit Y → X, so that the following definition of the Fredholm Pfaffian makes sense:
Pf(J + K) := 1 + Σ_{∅≠X⊂X: finite} Pf [K(x, y)]_{x,y∈X},
where the sum over X in fact reduces to a finite sum. It is, of course, possible to extend the definition of the Fredholm Pfaffian to a not-necessarily finitely supported function K, but we will not need such generality. Let α : X → R be a function such that α(x) ≥ 1, x ∈ X, and α − 1 is finitely supported. Given an anti-symmetric matrix-valued function K : X × X → M(2; C), we define a new function √(α−1) K √(α−1) by
(√(α−1) K √(α−1))(x, y) := √(α(x) − 1) K(x, y) √(α(y) − 1), x, y ∈ X.
For a point configuration ω ∈ Ω, we set Ψ_α(ω) := ∏_{x∈ω} α(x). Notice that the infinite product reduces to a finite one since α − 1 is finitely supported. In terms of these notions, Proposition 1.3 is equivalent to the following one:
Proposition 1.5. Let S ∈ Q(K, Γ). There is a unique PPP M_S on X possessing the following property: for any function α on X such that α(x) ≥ 1, x ∈ X, and α − 1 is finitely supported,
∫_Ω Ψ_α(ω) M_S(dω) = Pf ( J + √(α−1) K_S √(α−1) ).
The equivalence between Proposition 1.3 and Proposition 1.5 essentially follows from [Rai00]. We will include a proof of this equivalence in Sect. 2 for the readers' convenience.
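The expansion of Pf(J + K) into sub-Pfaffians can be verified numerically on a small ground set. In the sketch below (ours, not from the text), pf is a hypothetical helper implementing first-row expansion, adequate for small matrices; the 2×2 blocks of a random anti-symmetric 2n×2n matrix play the role of the values K(x, y):

```python
import numpy as np
from itertools import chain, combinations

def pf(A):
    """Pfaffian of an anti-symmetric matrix via expansion along the first row."""
    m = A.shape[0]
    if m == 0:
        return 1.0
    total = 0.0
    for j in range(1, m):
        idx = [k for k in range(m) if k not in (0, j)]
        total += (-1) ** (j - 1) * A[0, j] * pf(A[np.ix_(idx, idx)])
    return total

rng = np.random.default_rng(1)
n = 3                                    # ground set Y = {0, 1, 2}
B = rng.standard_normal((2 * n, 2 * n))
B = B - B.T                              # blocks B[2x:2x+2, 2y:2y+2] act as K(x, y)
J = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))

blocks = lambda S: [2 * x + i for x in S for i in (0, 1)]
subsets = chain.from_iterable(combinations(range(n), r) for r in range(n + 1))
rhs = sum(pf(B[np.ix_(blocks(S), blocks(S))]) for S in subsets)  # empty set gives 1
print(pf(J + B), rhs)
```

The two printed numbers agree, which is exactly the finite-set expansion behind the Fredholm Pfaffian.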
An interesting subclass of PPPs obtained in this manner consists of those associated with projection operators. We write the collection of projection operators in Q(K, Γ) as Gr(K, Γ) = { P ∈ Q(K, Γ) | P^2 = P }.
This notation is, of course, motivated by the fact that a projection operator P ∈ Gr(K, Γ) determines a closed subspace PK ⊂ K and, therefore, the collection of projection operators can be regarded as an analogue of the Grassmann variety. Let P_0 ∈ Gr(K, Γ) be the projection operator onto the first component of the direct sum decomposition K = ℓ^2(X) ⊕ ℓ^2(X), which is expressed as P_0 = ( 1 0 ; 0 0 ) along this decomposition. For each n ∈ Z_{≥0}, we write Ω_n = { ω ∈ Ω | #ω = n } for the collection of n-point configurations, and set Ω_• = ⊔_{n=0}^∞ Ω_n, which consists of configurations with finitely many points.
Proposition 1.6. Let P ∈ Gr(K, Γ) be a projection operator such that P − P_0 is of Hilbert–Schmidt class. Then the associated PPP M_P is supported on Ω_•. Equivalently, a point process X in X distributed according to M_P satisfies #X < ∞ almost surely.
1.3. CAR algebra and quasi-free states. We introduce the algebra of canonical anticommutation relations (CAR algebra, for short) with a general one-particle Hilbert space, following [Ara71, Bin95]. Another, equivalent, definition of a CAR algebra can be found in e.g. [BR97]. Let K be a complex Hilbert space of infinite dimension and Γ an anti-unitary involution on K. The algebra C_0(K, Γ) is the *-algebra over C generated by B(f), f ∈ K, subject to the relations that f ↦ B(f) is linear, B(f)* = B(Γf), and { B(f)*, B(g) } = (f, g)_K 1, f, g ∈ K, where {·, ·} is the anti-commutator; { a, b } := ab + ba. It is known that the algebra C_0(K, Γ) admits a unique C*-norm ∥·∥. We denote the C*-completion by C(K, Γ) = C_0(K, Γ)^{∥·∥} and call it the (self-dual) CAR algebra with one-particle space (K, Γ). To those who are more familiar with regarding ℓ^2(X) itself as a one-particle Hilbert space, we emphasize that we adopt a "doubled" space as the one-particle Hilbert space.
For a general C*-algebra A, a state over it is, by definition, a linear functional ϕ : A → C satisfying the conditions that (1) for every A ∈ A, ϕ(A*A) ≥ 0 holds, and (2) it is normalized: ∥ϕ∥ = 1. Note that, from these properties, it can be deduced that ϕ(1) = 1 (see e.g. [Tak79, Chap. I, Sect. 9]). Since any positive element B ∈ A admits an expression B = A*A with some A ∈ A, the first condition is equivalently stated as ϕ(B) ≥ 0 for every positive element B ∈ A.
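As a concrete finite-dimensional illustration (ours, not from the text): on the matrix algebra M_2(C), every density matrix ρ defines a state ϕ(A) = Tr(ρA), and the two defining properties can be checked directly:

```python
import numpy as np

rho = np.array([[0.7, 0.1], [0.1, 0.3]])   # rho = rho*, rho >= 0, Tr(rho) = 1

def phi(A):
    """State on M_2(C) given by the density matrix rho."""
    return np.trace(rho @ A)

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
positivity = phi(A.conj().T @ A)           # phi(A*A) is real and non-negative
print(positivity.real, phi(np.eye(2)).real)  # second value: phi(1) = 1
```

This is the commutative-free analogue of the positivity and normalization used throughout: every state on a matrix algebra arises this way.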
A state ϕ over C(K, Γ) is called quasi-free if ϕ(B(f_1) · · · B(f_{2n+1})) = 0 and ϕ(B(f_1) · · · B(f_{2n})) = Pf A_ϕ(f_1, . . . , f_{2n}) for all n ∈ N, where A_ϕ(f_1, . . . , f_{2n}) is the unique anti-symmetric 2n × 2n matrix defined by A_ϕ(f_1, . . . , f_{2n})_{ij} = ϕ(B(f_i)B(f_j)), i < j. It is immediate from this definition that a 2n-point correlation function of a quasi-free state is expressed in terms of a Pfaffian, and that a quasi-free state over a CAR algebra is uniquely determined by its two-point function. In fact, we have the following.
Lemma 1.8. The collection of quasi-free states over C(K, Γ) is in one-to-one correspondence with the collection of operators Q(K, Γ) defined in (1.3); under this correspondence, an operator S ∈ Q(K, Γ) corresponds to the quasi-free state ϕ_S determined by ϕ_S(B(f)*B(g)) = (f, Sg)_K, f, g ∈ K. We will include a proof of Lemma 1.8 in Sect. 2 for the readers' convenience.
As we have announced, we work in the case K = ℓ^2(X) ⊕ ℓ^2(X) equipped with Γ given by (1.2), and will adopt this pair (K, Γ) in the sequel without further specification. The associated CAR algebra C(K, Γ) is generated by a_x = B((0, e_x)) and a*_x = B((e_x, 0)), x ∈ X. Notice that the notation is compatible with the *-involution since Γ(e_x, 0) = (0, e_x). Then, from the definition of a quasi-free state, we have
ϕ_S(a*_{x_1} · · · a*_{x_n} a_{x_n} · · · a_{x_1}) = Pf [K_S(x_i, x_j)]_{1≤i,j≤n},
where the matrix-valued function K_S(·, ·) : X × X → M(2; C) was defined in (1.4) associated with the operator S. In fact, the values of the matrix-valued function K_S(·, ·) are written in terms of the quasi-free state as
K_S(x, y) = ( ϕ_S(a*_x a*_y) ϕ_S(a*_x a_y) ; ϕ_S(a_x a*_y) − δ_{x,y} ϕ_S(a_x a_y) ).
At this stage, we can find an analogy between this expectation value and a correlation function of a PPP. Moreover, if S_12 = S_21 = 0, i.e., S preserves the decomposition of K, we have
ϕ_S(a*_{x_1} · · · a*_{x_n} a_{x_n} · · · a_{x_1}) = det [K_S(x_i, x_j)]_{1≤i,j≤n}, K_S(x, y) = (e_x, S_22 e_y)_{ℓ^2(X)},
which takes the form of a correlation function of a DPP.
Let us consider the commutative subalgebra topologically generated by a*_x a_x, x ∈ X, which is identified with the algebra C(Ω) of continuous functions on Ω by the correspondence ∏_{i=1}^n a*_{x_i} a_{x_i} ↦ χ_{Ω_{x_1,...,x_n}}, x_1, . . . , x_n ∈ X distinct.
In the sequel, we regard C(Ω) as a subalgebra of C(K, Γ) under this correspondence. Our strategy for proving Proposition 1.3 is to identify the above expectation value (1.9) with the correlation function of the desired PPP: we expect that, for a quasi-free state ϕ_S, there exists a probability measure M_S on (Ω, Σ) such that the restriction of ϕ_S to the subalgebra C(Ω) is identical to the integration with respect to M_S:
ϕ_S(f) = ∫_Ω f(ω) M_S(dω), f ∈ C(Ω).
Then the probability measure M_S is automatically a PPP. We will see in Sect. 2 that this is indeed the case, which proves Proposition 1.3.
1.4. Perfectness of a probability measure. As we have seen in Lemma 1.8, the set Q(K, Γ) labels quasi-free states over C(K, Γ). In this sense, Proposition 1.3 states that to each quasi-free state there is an associated PPP. Recently, Olshanski [Ols20] proposed a scheme to invert this correspondence, which is outlined here.
Let S be the group of finite permutations of X. The assumptions are (1) A probability measure M on (Ω, Σ) is S-quasi-invariant.
(2) The set X is equipped with a linear order ≤ so that the ordered set (X, ≤) is isomorphic to Z or N.
We consider the gauge-invariant subalgebra A(K, Γ) ⊂ C(K, Γ) topologically generated by a*_x a_y, x, y ∈ X. Under the above assumptions, we can associate to the probability measure M a representation T_M of the gauge-invariant subalgebra A(K, Γ) on the Hilbert space L^2(Ω, M). Then, we immediately obtain a state ϕ_M on A(K, Γ) by ϕ_M(A) := (I, T_M(A)I)_{L^2(Ω,M)}, A ∈ A(K, Γ), where I ∈ L^2(Ω, M) is the unit constant function on Ω. By construction of the representation T_M, the action of C(Ω) on L^2(Ω, M) is just multiplication. Therefore, we have ϕ_M(f) = ∫_Ω f(ω) M(dω), f ∈ C(Ω), which implies that the state ϕ_M restricted to the commutative subalgebra C(Ω) is just the expectation value with respect to the probability measure M; in particular, if M is a PPP, ϕ_M restricted to C(Ω) admits a Pfaffian expression.
Definition 1.9. Let M be an S-quasi-invariant probability measure on (Ω, Σ) and assume that X can be equipped with a linear order ≤ so that (X, ≤) is isomorphic to Z or N. The probability measure M is said to be perfect if there exists a quasi-free state ϕ on C(K, Γ) such that the resulting state ϕ_M on A(K, Γ) is realized as the restriction ϕ_M = ϕ|_{A(K,Γ)}.
1.5. Schur measures. Schur measures form a family of DPPs on X = Z + 1/2 introduced in [Oko01] that includes the Plancherel measure and the z-measure as special cases. Let Y be the collection of partitions, each element of which is a sequence of non-increasing integers λ = (λ_1 ≥ λ_2 ≥ · · · ≥ 0) such that λ_{ℓ+1} = 0 for some ℓ ∈ N. To each λ ∈ Y, we associate the subset M(λ) := { λ_i − i + 1/2 | i = 1, 2, . . . } ⊂ X, which defines an embedding M : Y ↪ Ω. Therefore, given a probability measure on Y, we obtain one on (Ω, Σ) by pushing it forward under M. Let T be the collection of data ρ = (α; β), where α = (α_1 ≥ α_2 ≥ · · · ≥ 0) and β = (β_1 ≥ β_2 ≥ · · · ≥ 0) satisfy Σ_{j≥1} α_j + Σ_{j≥1} β_j ≤ 1. It is known that T parametrizes Schur-positive specializations of the ring of symmetric functions.
Theorem 1.10. Let ρ ∈ T_• and let s(ρ) denote the associated Schur measure on (Ω, Σ). If s(ρ) is S-quasi-invariant, then it is perfect.
1.6. Conditional measures of PPPs. Let X and X′ be disjoint finite subsets of X and take the cylinder set C(X, X′) := { ω ∈ Ω | X ⊂ ω, X′ ∩ ω = ∅ }, which consists of configurations such that every point in X is occupied and every point in X′ is unoccupied. We identify the cylinder set C(X, X′) with Ω(X\(X ⊔ X′)). For a probability measure M on (Ω, Σ), assume that the cylinder set has strictly positive weight: M(C(X, X′)) > 0. We define the conditional measure of M on (X, X′) by M^{X,X′} := M( · | C(X, X′)), regarded as a probability measure on Ω(X\(X ⊔ X′)). We focus on PPPs associated with projection operators. For a subset A ⊂ X, we write K^+_A := ℓ^2(A) ⊕ { 0 } and K^−_A := { 0 } ⊕ ℓ^2(A), and set K_A := K^+_A ⊕ K^−_A, which is a direct sum of Hilbert spaces regarded as a subspace of K. It is obvious from the definition that Γ preserves the subspace K_A. Thus we can write Γ_A := Γ|_{K_A}.
Let X and X′ be finite disjoint subsets of X and take a projection operator P ∈ Gr(K, Γ). Then, PK ⊂ K is a closed subspace. We define a new closed subspace in K_{X\(X⊔X′)} by PK_{X,X′} := { v ∈ K_{X\(X⊔X′)} | v + a + a′ ∈ PK for some a ∈ K^+_X, a′ ∈ K^−_{X′} }, and a projection operator P_{X,X′} as the orthogonal projection onto PK_{X,X′} in K_{X\(X⊔X′)}.
Lemma 1.11. Let X and X′ be finite disjoint subsets of X. Then, we have P_{X,X′} ∈ Gr(K_{X\(X⊔X′)}, Γ_{X\(X⊔X′)}). Let us also introduce a notion of regularity.
Definition 1.12. Let X and X′ be finite disjoint subsets of X. A projection operator P ∈ Gr(K, Γ) is said to be (X, X′)-regular if M_P(C(X, X′)) > 0. The following result is an analogue of [Ols20, Proposition 6.13] for PPPs.
Theorem 1.13. Let X and X′ be finite disjoint subsets of X. Assume that a projection operator P ∈ Gr(K, Γ) is (X, X′)-regular. Then, we have (M_P)^{X,X′} = M_{P_{X,X′}}. In particular, the conditional measure is again a PPP associated with a projection operator.
Remark 1.14. A similar problem has been studied in [BCQ19]. Our result describes the reduction of a projection operator, which is new.
A natural application of Theorem 1.13 is a proof of the quasi-invariance of PPPs with respect to the symmetric group along the lines of [Ols11, BO19, Ols20], which is set aside for future work.
Organization. In Sect. 2, we recall the Fock representations of a CAR algebra and prove Propositions 1.3, 1.5, and 1.6. In Sect. 3, after recalling the procedure, proposed in [Ols20], of obtaining a state over the gauge-invariant subalgebra of a CAR algebra from a probability measure, we give a proof of Theorem 1.10. Sect. 4 is devoted to the proof of Theorem 1.13, which includes a proof of Lemma 1.11. In Appendix A, we illustrate that the shifted Schur measures can be understood as examples within our perspective from a CAR algebra.
Acknowledgements. The author is grateful to Makoto Katori and Tomoyuki Shirai for comments on the manuscript. This work was supported by the Grant-in-Aid for JSPS Fellows (No. 19J01279).

2.1. Fock representations. Here we review the Fock representations of C(K, Γ), which play a prominent role in the theory of CAR algebras.
2.1.1. General construction. Let H be a Hilbert space. For each n ∈ N, we denote by ⋀^n H the n-th wedge power of H, which is generated by vectors f_1 ∧ · · · ∧ f_n, f_1, . . . , f_n ∈ H, subject to the anti-symmetry f_{σ(1)} ∧ · · · ∧ f_{σ(n)} = sgn(σ) f_1 ∧ · · · ∧ f_n, σ ∈ S_n. When we take { e_i }_{i=1,2,...} to be a complete orthonormal system of H, the vectors e_{i_1} ∧ · · · ∧ e_{i_n}, i_1 > · · · > i_n, form a complete orthonormal system of ⋀^n H. The Fermi Fock space over H is defined by F(H) := ⊕_{n=0}^∞ ⋀^n H, where we set ⋀^0 H = C1. The Fermi Fock space admits a natural inner product induced from each component of the direct sum and becomes a Hilbert space, i.e., the direct sum is understood in the topological sense.
For each f ∈ H, the creation operator a*(f) is the operator on F(H) defined by a*(f) f_1 ∧ · · · ∧ f_n := f ∧ f_1 ∧ · · · ∧ f_n. The annihilation operator a(f) is defined as the adjoint of a*(f). By definition, the annihilation operator acts as a(f) f_1 ∧ · · · ∧ f_n = Σ_{i=1}^n (−1)^{i−1} (f, f_i)_H f_1 ∧ · · · ∧ f̂_i ∧ · · · ∧ f_n, where f̂_i means that f_i is omitted. We can also see that the assignment f ↦ a*(f) is linear and f ↦ a(f) is anti-linear.
2.1.2. Fock representation and a quasi-free state. Let us take a projection operator P ∈ Gr(K, Γ), from which we can construct a representation of C(K, Γ) on the Fock space F(PK). To describe the action, notice that, from the property P̄ = 1 − P, the projection onto the orthogonal complement of PK is P̄. Let us set π_P(B(f)) := a*(Pf) + a(PΓf), f ∈ K. Then it is known that (π_P, F(PK)) is a faithful and irreducible representation of C(K, Γ). When we set ϕ_P(A) = (1, π_P(A)1)_{F(PK)}, A ∈ C(K, Γ), we can verify that ϕ_P is just the quasi-free state corresponding to P in the sense that it possesses the property required in Lemma 1.8. In particular, when we take P_0 ∈ Gr(K, Γ), we have P_0K ≃ ℓ^2(X), and the representation π_{P_0} is described as π_{P_0}(a*_x) = a*(e_x), π_{P_0}(a_x) = a(e_x), x ∈ X. In the sequel, we adopt an abuse of notation and often write a*(e_x) = a*_x and a(e_x) = a_x, not distinguishing the elements of C(K, Γ) from their images under π_{P_0}.
Let us describe a standard complete orthonormal system of the Fock space F(ℓ 2 (X)). Since X is countable, it can be equipped with a linear order ≤. For each ω = { x 1 > · · · > x n } ∈ Ω • , we set e ω := e x 1 ∧ · · · ∧ e xn . Then the collection { e ω } ω∈Ω • forms a complete orthonormal system.
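On a finite ground set, the operators a_x, a*_x and the basis { e_ω } can be realized concretely by the Jordan–Wigner construction (a standard device, our illustration rather than the text's); the sketch checks the CAR and the diagonal action of the number operators a*_x a_x on the occupation basis, used later in the proof of Proposition 1.3:

```python
import numpy as np
from functools import reduce

n = 3                                    # |X| = 3; dim F(l^2(X)) = 2**3
I2 = np.eye(2)
Zg = np.diag([1.0, -1.0])                # Jordan-Wigner string factor
am = np.array([[0.0, 1.0], [0.0, 0.0]])  # one-mode annihilator

def a(x):
    """Annihilation operator: string of Zg's, then am, then identities."""
    return reduce(np.kron, [Zg] * x + [am] + [I2] * (n - x - 1))

def adag(x):
    return a(x).conj().T

anti = lambda A, B: A @ B + B @ A
print(np.allclose(anti(a(0), adag(0)), np.eye(2 ** n)))  # {a_x, a_x*} = 1
print(np.allclose(anti(a(0), adag(1)), 0))               # {a_x, a_y*} = 0, x != y
print(np.allclose(anti(a(0), a(1)), 0))                  # {a_x, a_y} = 0

# Each a_x* a_x is diagonal in the occupation basis {e_omega},
# with eigenvalue chi_[x in omega].
N0 = adag(0) @ a(0)
print(np.allclose(N0, np.diag(np.diag(N0))))
```

The diagonal of N0 consists of 0's and 1's, one for each configuration ω according to whether x ∈ ω.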
Proof of Lemma 1.8. Let ϕ be a quasi-free state over C(K, Γ). Then the assignment Q_ϕ(f, g) := ϕ(B(f)*B(g)), f, g ∈ K, defines a quadratic form on K. Since the norm of a state is unity, we have Q_ϕ(f, f) ≤ ∥B(f)*B(f)∥ ≤ ∥f∥^2 for any f ∈ K, which implies that Q_ϕ is a bounded quadratic form. Therefore, owing to the correspondence between bounded operators and bounded quadratic forms (see e.g. [Kos99, Chap. 1]), there exists a bounded operator S ∈ B(K) such that Q_ϕ(f, g) = (f, Sg)_K, f, g ∈ K. It also follows from the positivity of the state that the quadratic form Q_ϕ is positive, and therefore, S = S* ≥ 0. From the relation { B(f)*, B(g) } = (f, g)_K 1 in the CAR algebra, we obtain (f, Sg)_K + (S̄f, g)_K = (f, g)_K, which implies S + S̄ = 1, and hence S ∈ Q(K, Γ). Conversely, given an operator S ∈ Q(K, Γ), we define on K̂ := K ⊕ K the operator
P_S := ( S S^{1/2}(1 − S)^{1/2} ; S^{1/2}(1 − S)^{1/2} 1 − S ).
Notice that the assumption 0 ≤ S = S* ≤ 1 ensures that the square roots S^{1/2} and (1 − S)^{1/2} make sense. Let us equip this Hilbert space with the anti-unitary involution Γ̂ := Γ ⊕ (−Γ). Then, it can be checked that P_S ∈ Gr(K̂, Γ̂), which implies that the functional ϕ_{P_S} defined by ϕ_{P_S}(A) = (1, π_{P_S}(A)1)_{F(P_S K̂)}, A ∈ C(K̂, Γ̂), is a quasi-free state over C(K̂, Γ̂). Now, since Γ̂ acts diagonally along the direct sum decomposition, C(K, Γ) is regarded as a subalgebra of C(K̂, Γ̂), and the restriction ϕ_S = ϕ_{P_S}|_{C(K,Γ)} is the quasi-free state corresponding to the given S.

2.2. Proof of Proposition 1.3. Now we are in a position to prove Proposition 1.3. Our strategy is to check the criteria of Lenard [Len75a, Len75b].
2.2.1. Lenard's criteria. Suppose that a system of functions { ρ_n : X^n → R }_{n=1}^∞ is given. The question is whether there exists a probability measure M on (Ω, Σ) such that its n-point correlation function ρ^M_n coincides with the given function ρ_n for every n ∈ N. Lenard [Len75a, Len75b] clarified necessary and sufficient conditions for this to happen.
Theorem ([Len75a, Len75b]). A system { ρ_n }_{n=1}^∞ is the system of correlation functions of a probability measure on (Ω, Σ) if it possesses the following properties:
(1) Symmetry: each function ρ_n is symmetric, i.e., ρ_n(x_{σ(1)}, . . . , x_{σ(n)}) = ρ_n(x_1, . . . , x_n) for arbitrary σ ∈ S_n and x_1, . . . , x_n ∈ X.
(2) Positivity: for any system of finitely supported functions Φ = { Φ_n : X^n → R }_{n=0}^N satisfying
(2.1) Φ_0 + Σ_{n=1}^N Σ_{x_1,...,x_n∈ω: distinct} Φ_n(x_1, . . . , x_n) ≥ 0 for every ω ∈ Ω,
it holds that Φ_0 + Σ_{n=1}^N Σ_{x_1,...,x_n∈X} Φ_n(x_1, . . . , x_n) ρ_n(x_1, . . . , x_n) ≥ 0.
Therefore, Proposition 1.3 reduces to the assertion that the system ρ_n(x_1, . . . , x_n) = Pf [K_S(x_i, x_j)]_{1≤i,j≤n}, n ∈ N, fulfills these conditions; the symmetry is immediate from the anti-symmetry of the kernel K_S.
To show the positivity condition, let Φ = { Φ_n : X^n → R }_{n=0}^N be a system of functions satisfying (2.1). We can see that
Φ_0 + Σ_{n=1}^N Σ_{x_1,...,x_n∈X} Φ_n(x_1, . . . , x_n) ρ_n(x_1, . . . , x_n) = ϕ_S(A_Φ),
where we set
A_Φ := Φ_0 1 + Σ_{n=1}^N Σ_{x_1,...,x_n∈X} Φ_n(x_1, . . . , x_n) a*_{x_1} · · · a*_{x_n} a_{x_n} · · · a_{x_1} ∈ C(K, Γ).
Therefore, owing to the positivity of a state over a C*-algebra, it suffices to show that A_Φ ∈ C(K, Γ) is a positive element, which can be checked in any faithful representation.
In fact, when we take a faithful representation (π, H), we may regard C(K, Γ) as a C*-subalgebra of B(H) via the embedding π. Since the spectrum of an element of a C*-subalgebra coincides with that in the whole algebra (see e.g. [BR87, Proposition 2.2.7]), the element A_Φ is positive in C(K, Γ) if and only if π(A_Φ) is a positive operator on H. We can, in particular, take the Fock representation (π_{P_0}, F(ℓ^2(X))).
In this representation, the operator π_{P_0}(A_Φ) is diagonalized by the complete orthonormal system { e_ω }_{ω∈Ω_•}:
π_{P_0}(A_Φ) e_ω = ( Φ_0 + Σ_{n=1}^N Σ_{x_1,...,x_n∈ω: distinct} Φ_n(x_1, . . . , x_n) ) e_ω, ω ∈ Ω_•.
Proof. This follows from a direct computation. Let us notice that a*_{x_1} · · · a*_{x_n} a_{x_n} · · · a_{x_1} = 0 if any two of the points x_1, . . . , x_n coincide. We can also see that a*_x a_x e_ω = χ_{[x∈ω]} e_ω, x ∈ X, ω ∈ Ω_•. Therefore the desired result is obtained.
The eigenvalues of π_{P_0}(A_Φ) are non-negative by the assumption (2.1), implying that A_Φ is a positive element. Therefore, the system of functions { ρ_n }_{n=1}^∞ fulfills the positivity condition, and the proof is complete.

2.3. Restatement in terms of the Fredholm Pfaffian.
Here we show the equivalence between Propositions 1.3 and 1.5. First, notice that the multiplicative functional Ψ_α associated with a function α is identified with
(2.2) Ψ_α = ∏_{x∈X} ( 1 + (α(x) − 1) a*_x a_x )
in C(Ω) ⊂ C(K, Γ). Since α − 1 is finitely supported and { a_x, a*_x } = 1, all but finitely many factors in the product are unity. Therefore, it is immediate that, for the probability measure M_S associated with S ∈ Q(K, Γ) in the sense of Proposition 1.3, the expectation value of Ψ_α is
∫_Ω Ψ_α(ω) M_S(dω) = 1 + Σ_{∅≠X⊂X: finite} ∏_{x∈X} (α(x) − 1) Pf [K_S(x, y)]_{x,y∈X}.
When we write D_{√(α−1)} : X × X → M(2; C) for the matrix-valued function D_{√(α−1)}(x, y) = δ_{x,y} √(α(x) − 1) 1_2, then, due to the formula Pf(B^T A B) = (det B)(Pf A) for an anti-symmetric matrix A and an arbitrary matrix B of the same size, we have
∫_Ω Ψ_α(ω) M_S(dω) = Pf ( J + √(α−1) K_S √(α−1) ),
as has also been shown in [Rai00]. Conversely, let M_S be the probability measure associated with S ∈ Q(K, Γ) in the sense of Proposition 1.5. For a finite subset X ⊂ X, we set α_X = δ_X + 1, where δ_X is the indicator function of X. Then α_X − 1 is finitely supported and α_X(x) ≥ 1, x ∈ X. Owing to the expression (2.2), we have
∫_Ω Ψ_{α_X}(ω) M_S(dω) = Σ_{Y⊆X} ρ^{M_S}_{#Y}(Y).
On the other hand, from the characterization of M_S and the definition of the Fredholm Pfaffian, it follows that
∫_Ω Ψ_{α_X}(ω) M_S(dω) = Σ_{Y⊆X} Pf [K_S(x, y)]_{x,y∈Y}.
Writing w_S(Y) = ρ^{M_S}_{#Y}(Y) and v_S(Y) = Pf [K_S(x, y)]_{x,y∈Y} for finite subsets Y ⊂ X, and A for the summation matrix of the inclusion relation, these identities read Aw_S = Av_S. Since the matrix A is triangular with respect to the partial order induced from the inclusion relation, with unit diagonal, it is invertible. Therefore, v_S = w_S, implying that ρ^{M_S}_n(x_1, . . . , x_n) = Pf [K_S(x_i, x_j)]_{1≤i,j≤n}, which is the desired property.
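The identity Pf(B^T A B) = (det B)(Pf A) invoked above is easy to sanity-check numerically; pfaffian below is a hypothetical helper using first-row expansion, adequate for small sizes:

```python
import numpy as np

def pfaffian(A):
    """Pfaffian of an even-dimensional anti-symmetric matrix (row expansion)."""
    m = A.shape[0]
    if m == 0:
        return 1.0
    total = 0.0
    for j in range(1, m):
        idx = [k for k in range(m) if k not in (0, j)]
        total += (-1) ** (j - 1) * A[0, j] * pfaffian(A[np.ix_(idx, idx)])
    return total

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6)); A = A - A.T   # anti-symmetric
B = rng.standard_normal((6, 6))                # arbitrary square matrix
print(pfaffian(B.T @ A @ B), np.linalg.det(B) * pfaffian(A))
```

Note that B need not be invertible or orthogonal; the identity holds for any B of matching size, which is what allows the diagonal factor D_{√(α−1)} above.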
2.4.1. Bogoliubov automorphisms. Let us consider the following collection of operators: I(K, Γ) := { V ∈ U(K) | VΓ = ΓV }, where U(K) is the set of unitary operators on K. It obviously forms a group. Given an operator V ∈ I(K, Γ), we can define an automorphism α_V of C(K, Γ) by α_V(B(f)) := B(Vf), f ∈ K, which is called the Bogoliubov automorphism associated with V. When we have a state ϕ over C(K, Γ), we can twist it by a Bogoliubov automorphism to obtain a new state ϕ ∘ α_V, which defines a right action of the group I(K, Γ) on the collection of states. On the other hand, the group I(K, Γ) also acts on Q(K, Γ) from the right via S · V := V*SV. When the state is quasi-free, these two actions are compatible: for S ∈ Q(K, Γ) and V ∈ I(K, Γ), we have ϕ_S ∘ α_V = ϕ_{V*SV}.
Proof. It is easily checked that ϕ_S(α_V(B(f)*B(g))) = (Vf, SVg)_K = (f, V*SVg)_K for any f, g ∈ K, which implies the desired equality.
It is obvious that the group action by I(K, Γ) preserves the collection Gr(K, Γ) of projection operators. Moreover, we have the following: Lemma 2.5. The group I(K, Γ) acts on Gr(K, Γ) transitively.
Proof. For projection operators P, P′ ∈ Gr(K, Γ), let us take complete orthonormal systems { f_i }_{i∈I} and { g_i }_{i∈I} of PK and P′K, respectively. Then { f_i, Γf_i }_{i∈I} and { g_i, Γg_i }_{i∈I} are both complete orthonormal systems of K. If we define an operator V by Vf_i = g_i and VΓf_i = Γg_i, i ∈ I, then V is a unitary operator that commutes with Γ and satisfies V*P′V = P.
2.4.2. Unitary implementability. Let P ∈ Gr(K, Γ) be a projection and take an operator V ∈ I(K, Γ). We say that the Bogoliubov automorphism α_V is unitarily implementable on the Fock representation (π_P, F(PK)) if there exists a unitary operator U on F(PK) such that π_P(α_V(A)) = Uπ_P(A)U*, A ∈ C(K, Γ). Since α_V is an automorphism, π_P ∘ α_V is an irreducible representation and is identified with the Fock representation π_{V*PV}. It is known that the unitary implementability is equivalent to the quasi-equivalence of the two representations π_P ∘ α_V and π_P, which is therefore equivalent to the quasi-equivalence of the quasi-free states ϕ_{V*PV} and ϕ_P.
The following criterion is well-known.
Theorem 2.6 ([SS65, PS70, Ara71]). Let P, P′ ∈ Gr(K, Γ) be projection operators and take an operator V ∈ I(K, Γ) such that P′ = V*PV. Then α_V is unitarily implementable on the Fock representation (π_P, F(PK)) if and only if P − P′ is of Hilbert–Schmidt class.
2.4.3. Proof of Proposition 1.6. Let P ∈ Gr(K, Γ) be such that P − P_0 is of Hilbert–Schmidt class. By Lemma 2.5, there exists V ∈ I(K, Γ) such that P = V*P_0V, and, by Theorem 2.6, the Bogoliubov automorphism α_V is unitarily implemented on (π_{P_0}, F(ℓ^2(X))) by a unitary operator U. We may expand U1 ∈ F(ℓ^2(X)) in the complete orthonormal system { e_ω }_{ω∈Ω_•} as U1 = Σ_{ω∈Ω_•} c_P(ω) e_ω. It is obvious that M̃_P := |c_P(·)|^2 defines a probability measure supported on Ω_• such that the expectation of each f ∈ C(Ω) with respect to M̃_P coincides with ϕ_P(f). Therefore, we can conclude that M_P = M̃_P and, in particular, M_P is supported on Ω_•.
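In a finite toy model the expansion coefficients c_P(ω) of such a vector are N × N minors of an orthonormal frame, and |c_P(·)|^2 sums to 1 by the Cauchy–Binet formula. The sketch below is our illustration (Phi is an assumed frame for the range of a rank-N projection); it also recovers the one-point function K(x, x):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
m, N = 6, 3
Q, _ = np.linalg.qr(rng.standard_normal((m, N)))
Phi = Q.T                         # N orthonormal rows spanning the range
K = Phi.T @ Phi                   # rank-N projection kernel on l^2(X)

# |c(omega)|^2 for omega = S: squared N x N minor of Phi.
weights = {S: np.linalg.det(Phi[:, list(S)]) ** 2
           for S in combinations(range(m), N)}
total = sum(weights.values())     # = det(Phi Phi^T) = 1 by Cauchy-Binet

x = 2
rho1 = sum(w for S, w in weights.items() if x in S)
print(total, rho1, K[x, x])
```

The marginal rho1 agreeing with K[x, x] is the simplest instance of the determinantal correlation structure of the measure |c_P(·)|^2.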
2.4.4. A straightforward generalization of Proposition 1.6. The above proof suggests a straightforward generalization of Proposition 1.6. Let us take a subset X ⊂ X and write Ω^fin_X for the subset of Ω consisting of ω such that (X\X) ∩ ω and X\ω are both finite. Note that, if X is a finite set, then Ω^fin_X = Ω_•. Let P^fin_X be the orthogonal projection onto K^+_{X\X} ⊕ K^−_X. Then we have the following:
Proposition 2.7. If P ∈ Gr(K, Γ) is such that P − P^fin_X is of Hilbert–Schmidt class, then the associated PPP M_P is supported on Ω^fin_X.

3. From measures to states
3.1. Quasi-invariant measures and representations. This subsection and the next are devoted to an exposition of the construction, proposed in [Ols20], of a state over the gauge-invariant subalgebra of a CAR algebra from a probability measure.
Suppose that a group G acts on (Ω, Σ) by measurable transformations, and write gM for the pushforward of a measure M under the action of g ∈ G. Two measures M_1 and M_2 are said to be equivalent if they are absolutely continuous with respect to each other; in this case, we write M_1 ≃ M_2. By means of this notion, we say that a measure M is G-quasi-invariant if M ≃ gM for arbitrary g ∈ G.
Since the group G acts naturally on the commutative algebra C(Ω) of continuous functions on Ω, we can consider the semi-direct product of C*-algebras C(Ω) ⋊ G. Take a G-quasi-invariant measure M on (Ω, Σ). We define a representation T_M of C(Ω) ⋊ G on L^2(Ω, M) following the Koopman-type construction:
(T_M(f)ψ)(ω) = f(ω)ψ(ω), f ∈ C(Ω), (T_M(g)ψ)(ω) = φ(g, ω) ψ(g^{−1}ω), g ∈ G,
for ψ ∈ L^2(Ω, M), where φ is a 1-cocycle defined by φ(g, ω) = ( d(gM)/dM (ω) )^{1/2}. Notice that the G-quasi-invariance ensures the existence of the Radon–Nikodým derivative.
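On a finite configuration space the Koopman-type operators can be checked to be unitary directly. The following is a minimal sketch of our own (with the cocycle convention φ(g, ω)^2 = d(gM)/dM(ω); the convention in the text may differ by which of g, g^{−1} appears):

```python
import numpy as np
from itertools import product

X = [0, 1, 2]
Omega = list(product([0, 1], repeat=3))
rng = np.random.default_rng(5)
w = rng.random(len(Omega)) + 0.1
M = dict(zip(Omega, w / w.sum()))      # strictly positive, hence quasi-invariant

g = [1, 2, 0]                           # permutation x -> g[x]
def g_inv(om):
    """(g^{-1}.omega)(x) = omega(g(x))."""
    return tuple(om[g[x]] for x in X)

def T(psi):
    """Koopman operator with Radon-Nikodym cocycle; unitary on L^2(Omega, M)."""
    return {om: np.sqrt(M[g_inv(om)] / M[om]) * psi[g_inv(om)] for om in Omega}

psi = {om: rng.standard_normal() for om in Omega}
norm2 = lambda f: sum(M[om] * f[om] ** 2 for om in Omega)
print(norm2(psi), norm2(T(psi)))        # equal: the cocycle makes T unitary
```

Without the square-root cocycle factor, the permutation action would not preserve the L^2(Ω, M) norm unless M were exactly invariant.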
3.1.2. Wreath product S ≀ Z 2 . Hereafter, we assume that Ω = { 0, 1 } X and G = S = S(X) that consists of finite permutations of X. The wreath product S ≀ Z 2 is realized as a semi-direct product S ⋉ E, where E is a commutative algebra generated by ε x , x ∈ X with relations ε 2 x = 1, x ∈ X. The covariance structure reads gε x g −1 = ε g(x) , g ∈ S, x ∈ X.
Proposition 3.1. Let us write C*[S ≀ Z_2] for the C*-algebra completion of the group algebra C[S ≀ Z_2]. We have an isomorphism C(Ω) ⋊ S ≃ C*[S ≀ Z_2], which is characterized by the assignment d_x ↦ ε_x, x ∈ X, and the natural identification of S on both sides.
Let M be an S-quasi-invariant measure and T M be the associated representation of C(Ω) ⋊ S on L 2 (Ω, M ). Then, due to the above isomorphism, it is regarded as a representation of C * [S ≀ Z 2 ].
We introduce a two-sided ideal I ⊂ C*[S ≀ Z_2], where we write s_{x,y} for the transposition of x and y. Since it is generated by self-adjoint elements, the quotient C*[S ≀ Z_2]/I is a C*-algebra. To define a morphism C*[S ≀ Z_2] → A(K, Γ), we assume that X is equipped with a linear order ≤ so that, as an ordered set, (X, ≤) is isomorphic to Z or N. In particular, each interval is then a finite set.
Proposition 3.3. We set η_x := 1 − 2a*_x a_x, x ∈ X. Notice that η_x, x ∈ X, are commutative and the ordering of products of them does not matter. (1) The assignment extends to a morphism p : C*[S ≀ Z_2] → A(K, Γ).
(2) ker p = I. Therefore, C*[S ≀ Z_2]/I ≃ A(K, Γ).
Since the representation T_M annihilates the ideal I, it induces a representation of A(K, Γ) ≃ C*[S ≀ Z_2]/I on L^2(Ω, M), which we denote by the same symbol T_M. We then obtain a state ϕ_M on A(K, Γ) by ϕ_M(A) := (I, T_M(A)I)_{L^2(Ω,M)}, A ∈ A(K, Γ), where I ∈ L^2(Ω, M) is the unit constant function. When we restrict this state to the subalgebra C(Ω), we see that it coincides with the expectation with respect to M.

3.2. Construction of states. When M is an S-quasi-invariant probability measure, the state ϕ_M on A(K, Γ) constructed above satisfies ϕ_M(f) = ∫_Ω f(ω) M(dω), f ∈ C(Ω), which is just the expectation value of f under the probability measure M. Therefore, if M is a PPP with correlation kernel K, ϕ_M(a*_{x_1} · · · a*_{x_n} a_{x_n} · · · a_{x_1}) = M(Ω_{x_1,...,x_n}) = Pf [K(x_i, x_j)]_{1≤i,j≤n}. We are thus tempted to expect that the state ϕ_M is quasi-free, but this is not obvious.

3.3. Perfectness: warm-up.
To illustrate the idea of the proof of Theorem 1.10, let us start with a simple example. Let us take linearly independent vectors v = { v_n ∈ ℓ^2(X) | n = 1, . . . , N } and let K_v be the orthogonal projection onto the subspace spanned by v. Then the projection operator P_v ∈ Gr(K, Γ) corresponding to K_v determines a DPP M_{P_v}.
Proposition 3.4. Let us expand each vector v_n as v_n = Σ_{x∈X} v_n(x) e_x, n = 1, . . . , N. If det [v_n(x_m)]_{1≤n,m≤N} ≥ 0 for any x_1 > · · · > x_N, then the DPP M_{P_v} is perfect.
Proof. It is immediate that P_v − P_0 is of Hilbert–Schmidt class as far as N < ∞. Therefore, the Bogoliubov automorphism induced from a unitary V ∈ I(K, Γ) such that P_v = V*P_0V is unitarily implementable on (π_{P_0}, F(ℓ^2(X))). Let us denote an implementing unitary operator by V_v. Since (π_{P_0} ∘ α_V, F(ℓ^2(X))) is the GNS representation of ϕ_{P_v}, we have ϕ_{P_v}(A) = (V_v1, π_{P_0}(A)V_v1)_{F(ℓ^2(X))}, A ∈ C(K, Γ). Let us take an orthonormal basis { φ_n }_{n=1}^N of the space Span{ v_n }_{n=1}^N and expand each of them as φ_n = Σ_{x∈X} φ_n(x) e_x. In the particular case we are considering, the unitary operator V_v acts on the vacuum vector as [Rui78]
(3.1) V_v1 = φ_1 ∧ · · · ∧ φ_N = Σ_{x_1>···>x_N} det [φ_n(x_m)]_{1≤n,m≤N} e_{x_1} ∧ · · · ∧ e_{x_N}.
Now there exists a constant Z_N > 0 independent of x_1, . . . , x_N such that det [φ_n(x_m)]_{1≤n,m≤N} = Z_N^{−1/2} det [v_n(x_m)]_{1≤n,m≤N}. Note that the latter quantity is non-negative by the assumption. Therefore, we have M_{P_v}(ω)^{1/2} = Z_N^{−1/2} det [v_n(x_m)]_{1≤n,m≤N} for ω = { x_1 > · · · > x_N }. Next, we construct an injective homomorphism ι : L^2(Ω, M_{P_v}) ↪ F(ℓ^2(X)) of Hilbert spaces. Noting that L^2(Ω, M_{P_v}) is realized as ℓ^2(supp M_{P_v}, M_{P_v}), we define it by ι(δ_ω) := M_{P_v}(ω)^{1/2} e_ω, where δ_ω, ω ∈ Ω, is the unit function supported at ω. Then ι is obviously an isometry. In particular, by the non-negativity of the coefficients, the unit constant function I is mapped to V_v1. It remains to show that the homomorphism ι intertwines the representations T_{M_{P_v}} and π_{P_0}: ι T_{M_{P_v}}(A) = π_{P_0}(A) ι, A ∈ A(K, Γ). In fact, this implies that ϕ_{M_{P_v}} = ϕ_{P_v}|_{A(K,Γ)}.
(3) When x ∈ ω and y ∉ ω, we set ω′ = (ω \ { x }) ∪ { y }. Writing e_ω in terms of the basis vectors, where ê_y means that e_y is removed from the corresponding position, one verifies that π_{P_0}(p(s_{x,y})) e_ω = e_{ω′}.
(4) When x ∉ ω and y ∈ ω, the same property is verified by a similar argument.
For f ∈ C(Ω), it is obvious that For g ∈ S, we have while, on the other hand, Therefore, we can conclude that ι intertwines representations T M Pv and π P 0 , and the proof is complete.
Notice that, in the above arguments, it is essential that the coefficients in the expansion (3.1) are non-negative; in general, the squared absolute value of each coefficient gives a weight of the probability measure.
Example 3.5. Suppose that X ⊂ R with the induced order. For a weight function W(x) ≥ 0 such that Σ_{x∈X} x^{2(N−1)} W(x) < ∞, we take v_n(x) = x^{n−1} W(x)^{1/2}, n = 1, . . . , N. The corresponding DPP M_{P_v} is a discrete orthogonal polynomial ensemble [Kön05]. It is immediate that det [v_n(x_m)]_{1≤n,m≤N} = ∏_{m=1}^N W(x_m)^{1/2} ∏_{1≤n<m≤N} (x_m − x_n), whose sign is constant for x_1 > · · · > x_N, so that the non-negativity assumption can be arranged by changing the sign of one of the vectors if necessary. Therefore, M_{P_v} is perfect, as shown in [Ols20] by directly estimating the correlation kernel.
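A minimal numerical sketch of this construction (our choices of X, W, and N; QR factorization plays the role of Gram–Schmidt orthonormalization) builds the projection kernel of the ensemble:

```python
import numpy as np

Xpts = np.linspace(-1.0, 1.0, 8)          # finite X subset of R
W = np.exp(-Xpts ** 2)                     # a weight function on X
N = 3
V = np.stack([Xpts ** k * np.sqrt(W) for k in range(N)], axis=1)  # v_n = x^{n-1} W^{1/2}

Q, _ = np.linalg.qr(V)                     # orthonormalize the columns
K = Q @ Q.T                                # correlation kernel of the ensemble

print(np.trace(K))                         # rank-N projection: trace = N
print(np.allclose(K, K @ K), np.allclose(K, K.T))
```

The trace being N reflects that the ensemble has exactly N points almost surely, consistent with Proposition 1.6.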

3.4. Schur measures.
3.4.1. Schur functions and positive specializations. Let Λ_n = C[x_1, . . . , x_n]^{S_n} be the ring of symmetric polynomials in n variables. We write Λ = lim←_n Λ_n for the projective limit in the category of graded rings and call it the ring of symmetric functions. Note that an object like ∏_{i≥1}(1 + x_i) is, perhaps counter-intuitively, not a symmetric function. We set p_n = Σ_{i≥1} x^n_i and call it the n-th power-sum symmetric function. The power-sum symmetric functions freely generate Λ, so that Λ = C[p_1, p_2, . . . ]. The ring of symmetric functions Λ admits a distinguished basis { s_λ | λ ∈ Y } consisting of the Schur functions, which can be characterized in several ways (see [Mac99]).
An algebra homomorphism τ : Λ → C is said to be Schur-positive if τ(s_λ) ≥ 0 for all λ ∈ Y. It is a classical result [ASW52, Edr52, Tho64] that Schur-positive specializations are parametrized by the set T in such a way that ρ = (α; β) gives the Schur-positive specialization τ_ρ defined by τ_ρ(p_1) = 1 and τ_ρ(p_n) = Σ_{j≥1} α^n_j + (−1)^{n−1} Σ_{j≥1} β^n_j, n ≥ 2. For a symmetric function F ∈ Λ, we often write its image under τ_ρ as F(ρ) instead of τ_ρ(F).
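It is convenient to record how τ_ρ acts on the complete homogeneous symmetric functions h_n in generating-function form. The following classical identity (see [Mac99]) is recalled here for convenience; the Plancherel component γ := 1 − Σ_j α_j − Σ_j β_j is our notational convention, as the excerpt does not display it:

```latex
\sum_{n \ge 0} \tau_\rho(h_n)\, z^n
  \;=\; e^{\gamma z} \prod_{j \ge 1} \frac{1 + \beta_j z}{1 - \alpha_j z},
\qquad
\gamma \;:=\; 1 - \sum_{j \ge 1} \alpha_j - \sum_{j \ge 1} \beta_j .
```

Taking the logarithm and comparing coefficients of z^n recovers τ_ρ(p_1) = 1 and τ_ρ(p_n) = Σ_j α_j^n + (−1)^{n−1} Σ_j β_j^n for n ≥ 2.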

3.4.2. Free fermion description.
Here we consider the case X = Z + 1/2. Let us write P^S_0 for the orthogonal projection onto K^+_{Z_{≥0}+1/2} ⊕ K^−_{Z_{≤0}−1/2}. Then, it is obvious that P^S_0 ∈ Gr(K, Γ). It is standard to realize the corresponding Fock space F(P^S_0 K) as a space of infinite wedges. Let Ω_S be the collection of ω ∈ Ω such that ω_+ := ω ∩ (Z_{≥0} + 1/2) and ω_− := (Z_{≤0} − 1/2) \ ω are finite. Equivalently, each element ω ∈ Ω_S is a collection { x_1 > x_2 > · · · } such that x_1 < ∞ and x_{j+1} = x_j − 1 for all sufficiently large j. Then the Fock space F(P^S_0 K) admits a complete orthonormal system { e_ω | ω ∈ Ω_S }, where e_ω := e_{x_1} ∧ e_{x_2} ∧ · · · for ω = { x_1 > x_2 > · · · }. The action of the CAR algebra C(K, Γ) is natural: a*_x f := e_x ∧ f, x ∈ X, and a_x acts as its adjoint. The cyclic vector 1 is identified with e_{−1/2} ∧ e_{−3/2} ∧ e_{−5/2} ∧ · · ·. Under the embedding M : Y ↪ Ω, the image is included in Ω_S; strictly speaking, the image is the subset of Ω_S consisting of ω such that #ω_+ = #ω_−. Under the inclusion M, the empty partition is mapped to the cyclic vector 1.
We set $D_S = \mathrm{Span}\{ e_\omega \mid \omega \in \Omega_S \}$, a dense subspace of the Fock space $\mathcal{F}(P^S_0 K)$. Observe that, as operators on $\mathcal{F}(P^S_0 K)$, $h_n := \sum_{x \in X} a^*_{x-n} a_x$, $n \in \mathbb{Z}\setminus\{0\}$, make sense on the dense domain $D_S$ and exhibit the Heisenberg commutation relations $[h_m, h_n] = m\,\delta_{m+n,0}$, $m, n \in \mathbb{Z}\setminus\{0\}$. Notice that $h_n^* = h_{-n}$, $n \in \mathbb{Z}\setminus\{0\}$. For each $\rho \in T^\bullet$, we introduce operators on $\mathcal{F}(P^S_0 K)$ by $\Xi_\pm(\rho) = \exp\bigl( \sum_{n \ge 1} \tfrac{p_n(\rho)}{n} h_{\pm n} \bigr)$.

Proposition 3.6. For $\rho \in T^\bullet$, the operators $\Xi_\pm(\rho)$ are well-defined with the dense domain $D_S$.
Proof. First, let us verify that $\Xi_+(\rho)e_\omega \in D_S$, i.e., that it is at most a finite linear combination of the vectors $e_{\omega'}$, $\omega' \in \Omega_S$. To this aim, we introduce the energy operator $H := \sum_{x \in X,\, x > 0} x\, a^*_x a_x + \sum_{x \in X,\, x < 0} |x|\, a_x a^*_x$. Then the complete orthonormal system $\{ e_\omega \mid \omega \in \Omega_S \}$ diagonalizes $H$. In fact, for $\omega \in \Omega_S$, we have $H e_\omega = \bigl( \sum_{x \in \omega_+} x + \sum_{x \in \omega_-} |x| \bigr) e_\omega$. In particular, we see that the spectrum of $H$ coincides with $\frac{1}{2}\mathbb{Z}_{\ge 0}$. We can also show, by direct computation, that $[H, h_n] = -n h_n$, $n \in \mathbb{Z}\setminus\{0\}$, which implies that each operator $h_n$ lowers the eigenvalue of $H$ by $n$. Hence, for any $\omega \in \Omega_S$, $h_{n_1} \cdots h_{n_k} e_\omega = 0$ whenever $n_1 + \cdots + n_k$ is sufficiently large. Therefore, $\Xi_+(\rho)e_\omega \in D_S$.
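The Heisenberg relations used above can be tested symbolically in the standard bosonic realization on polynomials in the power sums, where $h_{-n}$ acts as multiplication by $p_n$ and $h_n$ as $n\,\partial/\partial p_n$ for $n > 0$; via the boson–fermion correspondence this is equivalent to the fermionic picture, but here it serves only as an independent illustration of $[h_m, h_n] = m\,\delta_{m+n,0}$. A minimal Python sketch (encodings are ad hoc):

```python
# polynomials in p_1,...,p_5 encoded as {exponent tuple: coefficient}

def add(f, g):
    out = dict(f)
    for k, v in g.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

def scale(f, c):
    return {k: c * v for k, v in f.items() if c * v != 0}

def h(n, f):
    # bosonic realization: h_n = n d/dp_n for n > 0, h_{-n} = multiplication by p_n
    out = {}
    if n > 0:
        for exp, c in f.items():
            if exp[n - 1] > 0:
                e = list(exp)
                e[n - 1] -= 1
                out[tuple(e)] = out.get(tuple(e), 0) + n * exp[n - 1] * c
    else:
        for exp, c in f.items():
            e = list(exp)
            e[-n - 1] += 1
            out[tuple(e)] = out.get(tuple(e), 0) + c
    return {k: v for k, v in out.items() if v != 0}

def commutator(m, n, f):
    return add(h(m, h(n, f)), scale(h(n, h(m, f)), -1))

# test polynomial f = p_1^3 p_2 + 7 p_3
f = {(3, 1, 0, 0, 0): 1, (0, 0, 1, 0, 0): 7}

checks = []
for m in (-3, -2, -1, 1, 2, 3):
    for n in (-3, -2, -1, 1, 2, 3):
        expected = scale(f, m) if m + n == 0 else {}
        checks.append(commutator(m, n, f) == expected)
```

Every commutator vanishes unless $m + n = 0$, in which case it acts as multiplication by $m$, matching the stated relation.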
This property allows us to obtain the desired identity, where the operators $\pi_{P^S(\rho)}(A)$ are defined accordingly. It is obvious that the assignment $A \mapsto \pi_{P^S(\rho)}(A)$ gives a representation. Therefore, the computation of $\varphi_{s(\rho)}$ admits Wick's formula, implying that it is a quasi-free state.
3.4.3. Proof of Theorem 1.10. The property (3.2) implies an expansion whose coefficients are all non-negative; recall that this was a prominent observation in the proof of Proposition 3.4. Now it suffices to show that the embedding defined by $\iota : L^2(\Omega, M_{s(\rho)}) \hookrightarrow \mathcal{F}(P^S_0 K)$; $\delta_\omega \mapsto M_{s(\rho)}(\omega)^{1/2} e_\omega$ intertwines the representations $T^{M_{s(\rho)}}$ and $\pi_{P^S_0}$, which can be checked in the same manner as in the proof of Proposition 3.4.

4. Conditional measures
4.1. Proof of Lemma 1.11. First, we notice that the subspace $PK_{X,X'}$ consists of vectors $v \in K_{X\setminus(X\sqcup X')}$ such that there exist $a \in K^+_X$ and $a' \in K^-_{X'}$ with $v + a + a' \in PK$. Due to the property $\Gamma K^\pm_X = K^\mp_X$, we can see that $\Gamma_{X\setminus(X\sqcup X')}\, PK_{X,X'} = ((1-P)K)_{X',X}$. We can also see that $(PK_{X,X'})^\perp = ((1-P)K)_{X',X}$. In fact, for $u \in PK_{X,X'}$ and $v \in ((1-P)K)_{X',X}$, we can take $a \in K^+_X$, $a' \in K^-_{X'}$ with $u + a + a' \in PK$ and $b \in K^+_{X'}$, $b' \in K^-_X$ with $v + b + b' \in (1-P)K$. Now, since $a, a', b, b'$ are mutually orthogonal and orthogonal to $u$ and $v$, we have $(u, v) = (u + a + a', v + b + b') = 0$, which implies $(PK_{X,X'})^\perp = ((1-P)K)_{X',X}$. Therefore, $(1-P)_{X',X} = 1 - P_{X,X'}$ holds.
According to the decomposition $X = X \sqcup X' \sqcup (X\setminus(X \sqcup X'))$, we have a decomposition of the CAR algebra, which enables us to regard $\mathcal{C}(K_{X\setminus(X\sqcup X')}, \Gamma_{X\setminus(X\sqcup X')})$ as a subalgebra of $\mathcal{C}(K, \Gamma)$ that coincides with the subalgebra realized as $\chi_{C(X,X')}\, \mathcal{C}(K, \Gamma)\, \chi_{C(X,X')}$. Therefore, the conditional state $\varphi(\,\cdot\,|\,\chi_{C(X,X')})$ is regarded as a state over $\mathcal{C}(K_{X\setminus(X\sqcup X')}, \Gamma_{X\setminus(X\sqcup X')})$ and computed accordingly. In particular, in the case when $\varphi = \varphi_S$ is a quasi-free state associated with $S \in \mathcal{Q}(K, \Gamma)$, we obtain the corresponding formula.

4.3. Some observations. We see that obtaining $P_{X,X'}$ from a projection operator $P \in \mathrm{Gr}(K, \Gamma)$ can be decomposed into fundamental steps.
Lemma 4.1. We have the following decomposition properties.
(3) When $X' = X'_1 \sqcup X'_2 \subset X$ is a disjoint union of finite subsets, then $P_{\emptyset,X'} = (P_{\emptyset,X'_1})_{\emptyset,X'_2}$.

Proof. We only prove (4.1) since the other two follow from arguments of the same type. We have noticed that $PK_{X,X'}$ consists of vectors $u \in K_{X\setminus(X\sqcup X')}$ such that there exist $a \in K^+_X$ and $a' \in K^-_{X'}$ with $u + a + a' \in PK$. This exactly means that $u + a \in \{ v \in K_{X\setminus X'} \mid \exists a' \in K^-_{X'},\ v + a' \in PK \}$, while the latter set is just $PK_{\emptyset,X'}$. Therefore, $PK_{X,X'} = (PK_{\emptyset,X'})_{X,\emptyset}$. The other relation $PK_{X,X'} = (PK_{X,\emptyset})_{\emptyset,X'}$ is shown in the same way. Consequently, we reach (4.1).
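The decomposition in Lemma 4.1 is, at its core, a statement of linear algebra: conditioning a subspace on coordinates can be done in stages. The following numerical sketch (an illustration with a randomly generated subspace, not the paper's operator-theoretic setting) models $K = \ell^2(X) \oplus \ell^2(X)$ for four sites, computes the conditioned subspace as $\mathrm{proj}_{\mathrm{rest}}\bigl(W \cap \mathrm{span}(\mathrm{rest} \cup \mathrm{free})\bigr)$, and verifies that two-step conditioning agrees with one-step conditioning.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4                               # sites 0..3; "+" copies are coords 0..3, "-" copies 4..7
DIM = 2 * N
W = rng.standard_normal((DIM, 3))   # basis of a random 3-dim subspace of R^8

def nullspace(M, tol=1e-10):
    u, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vt[rank:].T

def proj_matrix(V, tol=1e-10):
    # orthogonal projection onto the column space of V
    q, r = np.linalg.qr(V)
    keep = np.abs(np.diag(r)) > tol
    q = q[:, keep]
    return q @ q.T

def cond(B, free, rest):
    # vectors v supported on `rest` such that v + a lies in col(B)
    # for some a supported on `free`
    allowed = set(rest) | set(free)
    forbidden = [i for i in range(DIM) if i not in allowed]
    c = nullspace(B[forbidden, :]) if forbidden else np.eye(B.shape[1])
    V = B @ c
    mask = np.zeros(DIM, dtype=bool)
    mask[list(rest)] = True
    V[~mask, :] = 0.0               # project onto coordinates in `rest`
    return V

X, Xp = [0], [1]                    # free the "+" copy of site 0 and the "-" copy of site 1
rest = [2, 3, 6, 7]                 # both copies of the remaining sites
free_plus = [x for x in X]          # "+" copies of X
free_minus = [N + x for x in Xp]    # "-" copies of X'

one_step = cond(W, free_plus + free_minus, rest)
stage1 = cond(W, free_minus, [i for i in range(DIM) if i not in (Xp[0], N + Xp[0])])
two_step = cond(stage1, free_plus, rest)
```

The two resulting subspaces coincide, mirroring $PK_{X,X'} = (PK_{\emptyset,X'})_{X,\emptyset}$; the argument in the proof above is insensitive to which subspace $W$ one starts from.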
We also notice that the regularity is inherited along decomposition.
Lemma 4.2. We have the following properties regarding the regularity.
Proof. We only prove the equivalence to item (a) in (1) since the others are shown by similar arguments. Suppose that $P \in \mathrm{Gr}(K, \Gamma)$ is $(X, X')$-regular. Then it is obvious that $P$ is $(X, \emptyset)$-regular. Now $PK_{X,\emptyset}$ consists of vectors $u \in K_{X\setminus X}$ such that there exists $a \in K^+_X$ with $u + a \in PK$. Let us take a vector $u_0 \in PK_{X,\emptyset} \cap K^-_{X'}$. Then there exists $a_0 \in K^+_X$ such that $u_0 + a_0 \in PK$, while we also have $u_0 + a_0 \in K^+_X \oplus K^-_{X'}$, which implies that $u_0 + a_0 \in PK \cap (K^+_X \oplus K^-_{X'}) = \{0\}$ by assumption. Therefore, we must have $u_0 = 0$, and we see that $P_{X,\emptyset}$ is $(\emptyset, X')$-regular.
Lemmas 4.1 and 4.2 verify that it suffices to prove Theorem 1.13 in the case when $(X, X') = (\{x\}, \emptyset)$ or $(\emptyset, \{x\})$ for a single point $x \in X$. We set $R_\pm(x; PK) = PK \cap R_\pm(x; K)$. When we write an element $u \in PK$ as $u = (f, g)$ according to the direct sum decomposition $K = \ell^2(X) \oplus \ell^2(X)$, these subspaces are described explicitly. Then, we can see that $PK_{\{x\},\emptyset}$ is the image of $R_-(x; PK)$ under the orthogonal projection onto $K_{X\setminus\{x\}}$, and similarly, $PK_{\emptyset,\{x\}}$ is the image of $R_+(x; PK)$ under the orthogonal projection onto $K_{X\setminus\{x\}}$.

Lemma 4.3. Let $H = H_1 \oplus H_2$ be a direct sum decomposition of a Hilbert space and write an orthogonal projection $P$ on $H$ in the block form $P = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$ with $A \in B(H_1)$, $B \in B(H_2, H_1)$, $C \in B(H_1, H_2)$, and $D \in B(H_2)$.
(2) The operator $A + B(1 - D)^{-1} C$ is self-adjoint and idempotent.
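Item (2) is easy to confirm numerically: for a block decomposition of an orthogonal projection with $1 - D$ invertible, the Schur-complement-type compression $A + B(1-D)^{-1}C$ is again an orthogonal projection. A quick sketch with a randomly generated projection (an illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)

# random rank-2 orthogonal projection on R^5
q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
P = q[:, :2] @ q[:, :2].T

# block decomposition with respect to R^5 = R^4 (+) R^1
A, B = P[:4, :4], P[:4, 4:]
C, D = P[4:, :4], P[4:, 4:]

# the compression of item (2); requires 1 - D to be invertible,
# which holds generically since D = P[4, 4] lies in [0, 1)
Q = A + B @ np.linalg.inv(np.eye(1) - D) @ C
```

The relations $A^2 + BC = A$, $AB + BD = B$, $CA + DC = C$, and $CB + D^2 = D$ coming from $P^2 = P$ force $Q^2 = Q$, and self-adjointness of $P$ gives $C = B^{\mathrm T}$, hence $Q = Q^{\mathrm T}$.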
Proposition 4.4. Under the above notation, we have, if $P$ is $(\{x\}, \emptyset)$-regular, the following expression for $P_{\{x\},\emptyset}$.

Proof. As we saw in Remark 1.4, $P_{12}$ and $P_{21}$ are anti-symmetric; in particular, $d_{12} = d_{21} = 0$. We also saw that $P_{11}$ and $P_{22}$ are self-adjoint and $JP_{11}J = 1 - P_{22}$, which implies $d_{11} = 1 - d_{22}$.
Let us derive the expression of $P_{\{x\},\emptyset}$. By the assumption that $P$ is $(\{x\}, \emptyset)$-regular, we have $d_{11} \neq 1$ and $d_{22} \neq 0$. Therefore, the desired expression makes sense. It is convenient to write $P$ in block form, where we set $x_0 := x$ and $x_1, \dots, x_n \in X\setminus\{x\}$ are distinct. It is standard to express the above quantity in terms of a Pfaffian of a matrix of smaller size. In fact, by means of the formula $\mathrm{Pf}(B^{\mathrm T} A B) = (\det B)(\mathrm{Pf}\,A)$ for an anti-symmetric matrix $A$ and a matrix $B$, we can see that the resulting matrix, indexed by $1 \le i, j \le n$, yields the desired Pfaffian expression.

4.6. Proof of Theorem 1.13. It remains to show that $K(y, z) = K^{P_{\{x\},\emptyset}}(y, z)$ and $K(y, z) = K^{P_{\emptyset,\{x\}}}(y, z)$, $y, z \in X\setminus\{x\}$. It is straightforward that
$$K_{11}(y, z) = (e_y, P_{21} e_z) - \frac{(e_y, P_{22} e_x)(e_x, P_{21} e_z)}{(e_x, P_{22} e_x)} + \frac{(e_y, P_{21} e_x)(e_x, P_{11} e_z)}{(e_x, (1 - P_{11}) e_x)},$$
$$K_{12}(y, z) = (e_y, P_{22} e_z) - \frac{(e_y, P_{22} e_x)(e_x, P_{22} e_z)}{(e_x, P_{22} e_x)} + \frac{(e_y, P_{21} e_x)(e_x, P_{12} e_z)}{(e_x, (1 - P_{11}) e_x)},$$
$$K_{22}(y, z) = (e_y, P_{12} e_z) - \frac{(e_y, P_{12} e_x)(e_x, P_{22} e_z)}{(e_x, P_{22} e_x)} + \frac{(e_y, P_{11} e_x)(e_x, P_{12} e_z)}{(e_x, (1 - P_{11}) e_x)}.$$
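The Pfaffian identity $\mathrm{Pf}(B^{\mathrm T}AB) = (\det B)\,\mathrm{Pf}(A)$ invoked above, together with $\mathrm{Pf}(A)^2 = \det A$, can be confirmed numerically; the sketch below (an illustration only) implements the Pfaffian by expansion along the first row.

```python
import numpy as np

def pfaffian(A):
    # expansion along the first row; adequate for small antisymmetric matrices
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2 == 1:
        return 0.0
    total = 0.0
    for idx, j in enumerate(range(1, n)):
        keep = [k for k in range(n) if k not in (0, j)]
        sub = A[np.ix_(keep, keep)]
        total += (-1) ** idx * A[0, j] * pfaffian(sub)
    return total

rng = np.random.default_rng(2)
M = rng.standard_normal((6, 6))
A = M - M.T                         # random antisymmetric matrix
B = rng.standard_normal((6, 6))

pf_A = pfaffian(A)
lhs = pfaffian(B.T @ A @ B)
rhs = np.linalg.det(B) * pf_A
```

For the 4-by-4 case the expansion reduces to the familiar $\mathrm{Pf} = a_{12}a_{34} - a_{13}a_{24} + a_{14}a_{23}$.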

Appendix A. Shifted Schur measures
The shifted Schur measures were introduced in [TW04] in association with the Schur Q-functions. It has been shown in [Mat05] that they can be defined in terms of quasi-free states of a CAR algebra, as we overview in this appendix. Note that a free fermionic approach to shifted Schur measures has also been proposed in [WL19], relying on a different algebra from the one adopted in [Mat05] and here.
A.1. Definition of shifted Schur measures. The notations regarding symmetric functions are inherited from Subsect. 3.4. A partition λ = (λ 1 ≥ λ 2 ≥ · · · ) ∈ Y is said to be strict if λ 1 > λ 2 > · · · . We write D for the collection of strict partitions. We do not contain here a definition of the Schur Q-functions (see [Mac99,Chap. III,8]), but just say that, for each strict partition λ ∈ D, the Schur Q-function Q λ ∈ Λ is defined as the Macdonald symmetric function at the parameter (q, t) = (0, −1) and the Schur P -function is P λ = 2 −ℓ(λ) Q λ , where ℓ(λ) is the length of the partition. Another significant property of the Schur Q-functions is that, when we write Λ odd for the subring generated by power-sum symmetric functions p n , n = 1, 3, 5, . . . of odd degree, then Q λ , λ ∈ D form a basis of Λ odd .
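For concreteness (an illustration, not part of the appendix's argument): the one-row Schur Q-functions $q_n = Q_{(n)}$ have generating function $\sum_{n\ge 0} q_n t^n = \prod_i \frac{1+x_i t}{1-x_i t}$, which satisfies $q(t)q(-t) = 1$, i.e. $\sum_{r=0}^{n} (-1)^r q_r q_{n-r} = 0$ for $n \ge 1$, a defining relation for the subring generated by the $q_n$. A short exact check in three variables:

```python
from fractions import Fraction

N = 8                                   # truncation order for power series in t
xs = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 5)]

def mul_series(a, b):
    # product of two truncated power series given as coefficient lists
    out = [Fraction(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    out[i + j] += ai * bj
    return out

# q(t) = prod_i (1 + x_i t) / (1 - x_i t), expanded to order N
q = [Fraction(1)] + [Fraction(0)] * (N - 1)
for x in xs:
    q = mul_series(q, [Fraction(1), x] + [Fraction(0)] * (N - 2))  # (1 + x t)
    q = mul_series(q, [x ** k for k in range(N)])                  # 1 / (1 - x t)

# q(t) q(-t) = 1: the coefficient of t^n vanishes for every n >= 1
residues = [sum((-1) ** r * q[r] * q[n - r] for r in range(n + 1))
            for n in range(1, N)]
```

One also reads off $q_1 = 2p_1$ directly from the generating function, consistent with $q_n$ lying in $\Lambda_{\mathrm{odd}}$.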
Note that, in general, the specializations for Q λ and P λ can be different. In this sense, we focus on special cases of shifted Schur measures.
We write $S(-I)$ for the second quantization of $-I \in U(\ell^2(X))$. Then, it is obvious that $\Xi_\pm(\rho)^* = \Xi_\mp(\rho)$. The well-definedness of these operators is verified in a similar manner to Subsect. 3.4 for Schur measures.
Theorem A.2. Assume that a shifted Schur measure $M_{Q(\rho)}$ is S-quasi-invariant. Then it is perfect.