Multivariate Orthogonal Polynomials and Modified Moment Functionals

Multivariate orthogonal polynomials can be introduced by means of a moment functional defined on the linear space of polynomials in several variables with real coefficients. We study the so-called Uvarov and Christoffel modifications, obtained by adding to the moment functional a finite set of mass points, or by multiplying it by a polynomial of total degree 2, respectively. Orthogonal polynomials associated with the modified moment functionals are studied, as well as the impact of the modification on useful properties of the orthogonal polynomials. Finally, some illustrative examples are given.


Introduction
Using a moment functional approach as in [20,21,22], one interesting problem in the theory of orthogonal polynomials in one and several variables is the study of modifications of a quasi-definite moment functional u defined on Π, the linear space of polynomials with real coefficients. In fact, many works are devoted to this topic, since modifications of moment functionals underlie some well-known facts, such as quasi-orthogonality, relations between adjacent families, quadrature and cubature formulas, higher-order (partial) differential equations, etc.
In the univariate case, given a quasi-definite moment functional u defined on Π, the (basic) Uvarov modification is defined by means of the addition of a Dirac delta at a ∈ R, v = u + λ δ_a, acting as

⟨v, p(x)⟩ = ⟨u, p(x)⟩ + λ p(a), λ ≠ 0.
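This definition can be checked by direct computation in the simplest situation. The following Python sketch is an illustrative aside, not part of the original development: it takes the Legendre moments on [−1, 1] together with the hypothetical choices a = 1 and λ = 1/2, builds the v-orthogonal polynomials by Gram-Schmidt on the monomials, and verifies their orthogonality exactly with rational arithmetic.

```python
from fractions import Fraction as F

N = 4                                 # work with degrees up to N

def mu(k):                            # Legendre moments: <u, x^k> on [-1, 1]
    return F(2, k + 1) if k % 2 == 0 else F(0)

a, lam = F(1), F(1, 2)                # hypothetical mass point and mass

def nu(k):                            # moments of v = u + lam * delta_a
    return mu(k) + lam * a ** k

def ip(p, q):                         # <v, p*q>, coefficient lists of length N + 1
    return sum(p[i] * q[j] * nu(i + j)
               for i in range(N + 1) for j in range(N + 1))

# Gram-Schmidt on 1, x, ..., x^N with respect to v
Q = []
for k in range(N + 1):
    p = [F(1) if i == k else F(0) for i in range(N + 1)]
    for q in Q:
        c = ip(p, q) / ip(q, q)
        p = [pi - c * qi for pi, qi in zip(p, q)]
    Q.append(p)

# pairwise orthogonality with respect to v
assert all(ip(Q[m], Q[n]) == 0 for m in range(N + 1) for n in range(m))
# the mass at a = 1 breaks the symmetry: Q_1(x) = x - 1/5 instead of x
assert Q[1][:2] == [F(-1, 5), F(1)]
```

The last assertion shows a typical effect of the modification: the mass point destroys the symmetry of the original functional, so the modified polynomials no longer have the parity of the Legendre polynomials.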
Apparently, this modification was introduced by V.B. Uvarov in 1969 [27], who studied the case where a finite number of mass points is added to a measure, and proved connection formulas for orthogonal polynomials with respect to the modified measure in terms of those with respect to the original one.
In some special cases of classical Laguerre and Jacobi measures, if the perturbations are given at the end points of the support of the measure, then the new polynomials are eigenfunctions of higher-order differential operators with polynomial coefficients and they are called Krall polynomials (see, for instance, [32] and the references therein).
In the multivariate case, the addition of Dirac masses to a multivariate measure was studied in [11] and [13]. Moreover, the Uvarov modification for disk polynomials was analysed in [10].
Besides Uvarov modifications by means of Dirac masses at a finite discrete set of points, in the context of several variables it is possible to modify the moment functional by means of moment functionals defined on lower dimensional manifolds such as curves, surfaces, etc. Recently, a family of orthogonal polynomials with respect to such a Uvarov modification of the classical ball measure, by means of a mass uniformly distributed over the sphere, was introduced in [23]. The authors proved that, at least in the Legendre case, these multivariate orthogonal polynomials satisfy a fourth-order partial differential equation, which constitutes a natural extension of Krall orthogonal polynomials [16] to the multivariate case. In [5], a modification of a moment functional by adding another moment functional defined on a curve is presented, and a Christoffel formula built up in terms of a Fredholm integral equation is discussed. As far as we know, a general theory of Uvarov modifications by means of moment functionals defined on lower dimensional manifolds remains an open problem.
Following [31], the univariate (basic) Christoffel modification is given by the multiplication of a moment functional u by a polynomial of degree 1, usually x − a, a ∈ R, that is, v = (x − a)u, acting as

⟨v, p(x)⟩ = ⟨u, (x − a)p(x)⟩.
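As a numerical aside (not part of the original text), the classical univariate Christoffel formula q_n(x) = (p_{n+1}(x) − c_n p_n(x))/(x − a), with c_n = p_{n+1}(a)/p_n(a), produces the polynomials orthogonal with respect to v = (x − a)u. The following Python sketch implements it with illustrative choices (monic Legendre polynomials and a = 2, outside the support) and verifies the resulting orthogonality exactly.

```python
from fractions import Fraction as F

def mu(k):                                  # Legendre moments: <u, x^k>
    return F(2, k + 1) if k % 2 == 0 else F(0)

a = F(2)                                    # zero of the multiplier, outside [-1, 1]

def u_ip(p, q):                             # <u, p*q> for coefficient lists
    return sum(p[i] * q[j] * mu(i + j)
               for i in range(len(p)) for j in range(len(q)))

def evaluate(p, x):
    return sum(c * x**i for i, c in enumerate(p))

def divide(p):                              # synthetic division by (x - a), exact
    out, carry = [F(0)] * (len(p) - 1), F(0)
    for i in range(len(p) - 1, 0, -1):
        out[i - 1] = p[i] + carry
        carry = a * out[i - 1]
    assert p[0] + carry == 0                # remainder must vanish
    return out

# monic Legendre polynomials p_0, ..., p_3
P = [[F(1)],
     [F(0), F(1)],
     [F(-1, 3), F(0), F(1)],
     [F(0), F(-3, 5), F(0), F(1)]]

# Christoffel formula: q_n = (p_{n+1} - c_n p_n) / (x - a), c_n = p_{n+1}(a)/p_n(a)
Q = []
for n in range(3):
    c = evaluate(P[n + 1], a) / evaluate(P[n], a)
    pn = P[n] + [F(0)] * (len(P[n + 1]) - len(P[n]))
    Q.append(divide([ci - c * di for ci, di in zip(P[n + 1], pn)]))

# orthogonality with respect to v = (x - a)u: <v, q_n x^k> = 0 for k < n
for n in range(3):
    xa_qn = [-a * c for c in Q[n]] + [F(0)]     # (x - a) * q_n
    for i, c in enumerate(Q[n]):
        xa_qn[i + 1] += c
    for k in range(n):
        assert u_ip(xa_qn, [F(0)] * k + [F(1)]) == 0
```

The division is exact because the numerator vanishes at a by construction; this is precisely what makes the Christoffel formula work.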
This type of transformation was first considered by E.B. Christoffel in 1858 [9] within the framework of Gaussian quadrature theory. Nowadays, Christoffel formulas are classical results in the theory of orthogonal polynomials, and they are presented in many general references (e.g., [8,26]).
The Christoffel modification is characterized by the linear relations that both families, modified and non-modified orthogonal polynomials, satisfy. They are called connection formulas. It is well known that some families of classical orthogonal polynomials can be expressed as linear combinations of polynomials of the same family for different values of their parameters, the so-called relations between adjacent families (e.g., see the formulas in Chapter 22 of [1] for Jacobi polynomials, or (5.1.13) in [26] for Laguerre polynomials). The study of such linear combinations is also related to the concept of quasi-orthogonality, introduced by M. Riesz in 1921 (see [8, p. 64]) as the basis of his analysis of the moment problem, and to quadrature formulas based on the zeros of orthogonal polynomials.
The extension of this kind of result to the multivariate case is not always possible. Gaussian cubature formulas of degree 2n−1 were characterized by Mysovskikh [24] in terms of the number of common zeros of the multivariate orthogonal polynomials. However, these formulas only exist in very special cases, and so the case of degree 2n−2 becomes interesting. Here, linear combinations of multivariate orthogonal polynomials play an important role, as can be seen for instance in [6,25,28,29].
To the best of our knowledge, one of the first studies of Christoffel modifications in several variables, multiplying a moment functional by a polynomial of degree 1, was done in [2], where the modification naturally appears in the study of linear relations between two families of multivariate polynomials. Necessary and sufficient conditions for the existence of orthogonality properties for one of the families were given in terms of the three term relations, by using Favard's theorem in several variables [12].
In [7] the authors show that modifications of univariate moment functionals are related to the Darboux factorization of the associated Jacobi matrix. In this direction, extensive discussions of several aspects of the theory of multivariate orthogonal polynomials can be found in [3,4,5]. In particular, Darboux transformations for orthogonal polynomials in several variables were presented in [3], and in [4] we can find an extension of the univariate Christoffel determinantal formula to the multivariate context. Also, as in the univariate case, they proved a connection with the Darboux factorization of the block Jacobi matrix associated with the three term recurrence relations for multivariate orthogonal polynomials. Similar considerations for multivariate Geronimus and more general linear spectral transformations of moment functionals can be found, among other topics, in [5].
In this paper, we study Uvarov and Christoffel modifications of quasi-definite moment functionals. The study of orthogonal polynomials associated with moment functionals fits into a quite general framework that includes families of orthogonal polynomials associated either with positive-definite or non-positive-definite moment functionals, such as those generated using Bessel polynomials, among others. We give necessary and sufficient conditions for the quasi-definiteness of the modified moment functional in both cases, Uvarov and Christoffel modifications (see Theorems 3.1 and 4.3). We also investigate properties of the polynomials associated with the modified functional, relations between the original and modified orthogonal polynomials, as well as the impact of the modification on some useful properties of the orthogonal polynomials.
When dealing with the Christoffel modification, in the case where both moment functionals, the original and the modified one, are quasi-definite, some of the results are similar to those obtained in [3,4,5] using a different technique. In particular, the necessary condition in our Theorem 4.3 was proven there for a polynomial of arbitrary degree, but the sufficient condition was not discussed there.
The main results of this work can be divided into three parts, corresponding to Sections 3, 4 and 5. Section 2 is devoted to establishing the basic concepts and tools we will need throughout the paper. In that section, we recall the standard notation and basic results in the theory of multivariate orthogonal polynomials, following mainly [12].
The Uvarov modification of a quasi-definite moment functional is studied in Section 3. In this case, we modify a quasi-definite moment functional by adding several Dirac deltas at a finite discrete set of fixed points. First, we give a necessary and sufficient condition for the quasi-definiteness of the moment functional associated with this Uvarov modification and, in the affirmative case, we deduce the connection between both families of orthogonal polynomials. A similar study was done in [11] in the case when the original moment functional is defined from a measure, and the modification is defined by means of a positive semi-definite matrix. In that case, both moment functionals are positive-definite, and both orthogonal polynomial systems exist. As a consequence, since it is possible to work with orthonormal polynomials, in [11] the first family of orthogonal polynomials is taken to be orthonormal, and the formulas are simpler than in the general case considered here.
In Section 4 we study the Christoffel modification by means of a second degree polynomial. This is not a trivial extension of the degree 1 case studied in [2], since in several variables not every polynomial of degree 2 factorizes as a product of polynomials of degree 1. Again, we relate both families of orthogonal polynomials and deduce the orthogonality by using Favard's theorem in several variables. In fact, from a recursive expression for the modified polynomials, we give necessary and sufficient conditions for the existence of a three term relation, and we show that there exists a polynomial of second degree constructed in terms of the first connection coefficients. Since three term relations can be reformulated in terms of block Jacobi matrices, similar connection results can be found in [4,5] for polynomials of arbitrary degree by using a block matrix formalism. Moreover, we study this kind of Christoffel modification in the particular case when the original moment functional is centrally symmetric, the natural extension of the concept of symmetry for univariate moment functionals.
Finally, Section 5 is devoted to applying our results to two families of multivariate orthogonal polynomials: the classical orthogonal polynomials on the ball in R^d, and the Bessel-Laguerre polynomials, a family of bivariate polynomials orthogonal with respect to a non-positive-definite moment functional, which can be found in [19] as a solution of one of the classical Krall and Sheffer second-order partial differential equations [18].
First, we modify the classical moment functional on the ball by adding a Dirac mass at 0 ∈ R^d. This example was introduced in [13]; here, we complete that study by giving, as a new result, the asymptotics of the modification. Next, we use the Christoffel modification by means of a second degree polynomial to deduce a relation between adjacent families of classical ball orthogonal polynomials in several variables. As far as we know, no relation of this kind for ball polynomials appears in the literature. This relation can be seen as an extension of (22.7.23) in [1, p. 782] for Gegenbauer polynomials in one variable. The last example corresponds to a Uvarov modification for the non-positive-definite classical Bessel-Laguerre bivariate polynomials defined in [19].

Basic tools
In this section we collect the definitions and basic tools about orthogonal polynomials in several variables that will be used later. We follow mainly [12].
Throughout this paper, we denote by M_{h×k}(R) the linear space of matrices of size h × k with real entries; when h = k, the notation simplifies to M_h(R). As usual, we say that M ∈ M_h(R) is non-singular (or invertible) if det M ≠ 0, and symmetric if M^t = M, where M^t denotes its transpose. Moreover, I_h denotes the identity matrix of size h, and we omit the subscript when the size is clear from the context.
Let us consider the d-dimensional space R^d, with d ≥ 1. Given a multi-index ν = (n_1, n_2, ..., n_d) ∈ N_0^d, a monomial in d variables is an expression of the form

x^ν = x_1^{n_1} x_2^{n_2} ⋯ x_d^{n_d},

where x = (x_1, ..., x_d) ∈ R^d. The total degree of the monomial is denoted by |ν| = n_1 + n_2 + ⋯ + n_d. Then, a polynomial of total degree n in d variables with real coefficients is a finite linear combination of monomials of degree at most n,

p(x) = Σ_{|ν| ≤ n} a_ν x^ν, a_ν ∈ R.

Let us denote by Π_n^d the linear space of polynomials in d variables of total degree less than or equal to n, and by Π^d the linear space of all polynomials in d variables.
For d ≥ 2 and n ≥ 0, if we denote by r_n^d the number of monomials of total degree exactly n, then

r_n^d = C(n + d − 1, n),

and, unlike the univariate case, r_n^d > 1 for d ≥ 2. Moreover, dim Π_n^d = C(n + d, n). When we deal with more than one variable, the first problem we have to face is that there is no natural way to order the monomials. As in [12], we use the graded lexicographical order; that is, we order the monomials by their total degree, and then by reverse lexicographical order. For instance, if d = 2, the order of the monomials is 1; x, y; x², xy, y²; x³, x²y, xy², y³; ... A useful tool in the theory of orthogonal polynomials in several variables is the representation of a basis of polynomials as a polynomial system (PS).
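The count r_n^d and the graded ordering can be illustrated with a few lines of code. The following Python sketch is an aside: it enumerates the multi-indices of total degree n in reverse lexicographical order and checks the dimension formulas above.

```python
from math import comb

def monomials(n, d):
    """Multi-indices of total degree n in d variables, reverse lexicographical order."""
    if d == 1:
        return [(n,)]
    out = []
    for k in range(n, -1, -1):          # power of the first variable, decreasing
        out.extend((k,) + tail for tail in monomials(n - k, d - 1))
    return out

# r_n^d = number of monomials of total degree exactly n
for d in (1, 2, 3, 4):
    for n in range(6):
        assert len(monomials(n, d)) == comb(n + d - 1, n)

# dim Pi_n^d = sum of r_m^d for m <= n
for d in (2, 3):
    assert sum(len(monomials(m, d)) for m in range(5)) == comb(4 + d, 4)

# for d = 2 the degree-2 slice is x^2, xy, y^2
assert monomials(2, 2) == [(2, 0), (1, 1), (0, 2)]
```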
Definition 2.1. A polynomial system (PS) is a sequence of column vectors of increasing size r_n^d, {P_n}_{n≥0}, whose entries are independent polynomials of total degree n,

P_n = (P_{ν_1}(x), P_{ν_2}(x), ..., P_{ν_{r_n^d}}(x))^t,

where ν_1, ν_2, ..., ν_{r_n^d} ∈ {ν ∈ N_0^d : |ν| = n} are distinct multi-indexes arranged in the reverse lexicographical order.
Observe that, for n ≥ 0, the entries of {P_0, P_1, ..., P_n} form a basis of Π_n^d and, by extension, we say that the vector polynomials {P_m}_{m=0}^n are a basis of Π_n^d. Using this representation, we can define the canonical polynomial system {X_n}_{n≥0} as the sequence of vector polynomials whose entries are the basic monomials of degree n arranged in the reverse lexicographical order. Thus, each polynomial vector P_n, for n ≥ 0, can be expressed as a unique linear combination of the canonical polynomial system with matrix coefficients,

P_n = G_{n,n} X_n + G_{n,n−1} X_{n−1} + ⋯ + G_{n,0} X_0.

Since both vectors P_n and X_n contain a system of independent polynomials, the square matrix G_{n,n} is non-singular, and it is called the leading coefficient of the vector polynomial P_n; usually, it will be denoted by G_n. In the case when the leading coefficient is the identity matrix for every vector polynomial in a PS, we say that {P_n}_{n≥0} is a monic PS. With this notation, we say that two vector polynomials P_n and Q_n, for n ≥ 0, have the same leading coefficient if their leading coefficients coincide or, equivalently, if every entry of the vector P_n − Q_n is a polynomial of degree at most n − 1.
The shift operator in several variables, that is, the multiplication of a polynomial by a variable x_i for i = 1, 2, ..., d, can be expressed in terms of the canonical PS: there exist full rank matrices L_{n,i} ∈ M_{r_n^d × r_{n+1}^d}(R) such that (see [12])

x_i X_n = L_{n,i} X_{n+1}, i = 1, 2, ..., d.

In particular, L_{n,i} is a matrix containing the columns of the identity matrix of size r_n^d, possibly separated by some of the r_{n+1}^d − r_n^d zero columns. Moreover, for i, j = 1, 2, ..., d, the matrix products L_{n,i} L_{n+1,j} ∈ M_{r_n^d × r_{n+2}^d}(R) are also full rank matrices of the same type as L_{n,i}. In addition, since x_i x_j X_n = x_j x_i X_n for n ≥ 0, we get

L_{n,i} L_{n+1,j} = L_{n,j} L_{n+1,i}.

Let us now turn to moment functionals and orthogonality in several variables. Given a sequence of real numbers {μ_ν}_{|ν| ≥ 0}, the moment functional u is defined by means of its moments

⟨u, x^ν⟩ = μ_ν,

and extended to polynomials by linearity, i.e., if p(x) = Σ_{|ν| ≤ n} a_ν x^ν, then ⟨u, p⟩ = Σ_{|ν| ≤ n} a_ν μ_ν. Now, we recall some basic operations acting on a moment functional u. The action of u on a polynomial matrix is defined entrywise by

⟨u, M⟩ = (⟨u, m_{i,j}(x)⟩)_{i,j=1}^{h,k} ∈ M_{h×k}(R),

where M = (m_{i,j}(x))_{i,j=1}^{h,k} ∈ M_{h×k}(Π^d), and the left product of a polynomial p ∈ Π^d and u is defined by ⟨pu, q⟩ = ⟨u, pq⟩, for all q ∈ Π^d.
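The structure of the matrices L_{n,i} can be made concrete by matching multi-indices. The following Python sketch is an illustrative aside for d = 2: it builds L_{n,i} directly from the definition and checks both the one-entry-per-row structure and the commutation relation L_{n,i} L_{n+1,j} = L_{n,j} L_{n+1,i}.

```python
def monomials(n, d):
    """Multi-indices of total degree n, reverse lexicographical order."""
    if d == 1:
        return [(n,)]
    out = []
    for k in range(n, -1, -1):
        out.extend((k,) + tail for tail in monomials(n - k, d - 1))
    return out

def shift_matrix(n, d, i):
    """L_{n,i} with x_i X_n = L_{n,i} X_{n+1}: entry (r, c) is 1 iff nu_r + e_i = eta_c."""
    rows, cols = monomials(n, d), monomials(n + 1, d)
    col_of = {nu: c for c, nu in enumerate(cols)}
    L = [[0] * len(cols) for _ in rows]
    for r, nu in enumerate(rows):
        bumped = tuple(v + (1 if t == i else 0) for t, v in enumerate(nu))
        L[r][col_of[bumped]] = 1
    return L

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

d = 2
for n in range(4):
    for i in range(d):
        # each row of L_{n,i} is a row of the identity: full rank, one 1 per row
        assert all(sum(row) == 1 for row in shift_matrix(n, d, i))
    for i in range(d):
        for j in range(d):
            # commutation coming from x_i x_j X_n = x_j x_i X_n
            assert matmul(shift_matrix(n, d, i), shift_matrix(n + 1, d, j)) == \
                   matmul(shift_matrix(n, d, j), shift_matrix(n + 1, d, i))
```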
Using the canonical polynomial system, for h, k ≥ 0, we define the blocks of moments

m_{h,k} = ⟨u, X_h X_k^t⟩ ∈ M_{r_h^d × r_k^d}(R).

We must remark that m_{h,k} contains all the moments of order h + k. Then, we can see the moment matrix as a block matrix of the form

M_n = (m_{h,k})_{h,k=0}^n, n ≥ 0.

Definition 2.2. A moment functional u is called quasi-definite or regular if and only if det M_n ≠ 0 for all n ≥ 0.

Now we are ready to introduce orthogonality. Two polynomials p, q ∈ Π^d are said to be orthogonal with respect to u if ⟨u, pq⟩ = 0. A polynomial p ∈ Π_n^d of exact degree n is an orthogonal polynomial if it is orthogonal to every polynomial of lower degree.
We can also introduce orthogonality in terms of a polynomial system. We say that a PS {P_n}_{n≥0} is orthogonal (OPS) with respect to u if

⟨u, P_n X_k^t⟩ = 0, k < n, and ⟨u, P_n X_n^t⟩ = S_n,

where S_n is a non-singular matrix of size r_n^d × r_n^d. As a consequence, it is clear that

⟨u, P_n P_k^t⟩ = H_n δ_{n,k},

where the matrix H_n is symmetric and non-singular. At this point we have to notice that, with this definition, orthogonality between the polynomials of the same total degree may not hold. In the case when the matrix H_n is diagonal for all n ≥ 0, we say that the OPS is mutually orthogonal.
A moment functional u is quasi-definite or regular if and only if there exists an OPS. If u is quasi-definite, then there exists a unique monic OPS. As usual, u is said to be positive definite if ⟨u, p²⟩ > 0 for every polynomial p ≠ 0, and a positive definite moment functional is quasi-definite. In this case, there exists an orthonormal basis satisfying ⟨u, P_n P_n^t⟩ = I_{r_n^d}. For n ≥ 0, the kernel functions in several variables are the symmetric functions defined by

P_n(u; x, y) = P_n(x)^t H_n^{−1} P_n(y), K_n(u; x, y) = Σ_{m=0}^n P_m(u; x, y). (2.3)

Both kernels satisfy the usual reproducing property, and they are independent of the particular orthogonal polynomial system chosen. For n = 0, we assume P_{−1}(u; x, y) = K_{−1}(u; x, y) = 0.
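The reproducing property can be verified directly in the univariate case. The following Python sketch is an illustrative aside (monic Legendre polynomials; the evaluation point x_0 = 1/3 is a hypothetical choice): it assembles K_3(x_0, ·) and checks that ⟨u, K_n(x_0, ·) q(·)⟩ = q(x_0) for every q of degree at most n.

```python
from fractions import Fraction as F

def mu(k):                                   # Legendre moments on [-1, 1]
    return F(2, k + 1) if k % 2 == 0 else F(0)

def u_ip(p, q):                              # <u, p*q> for coefficient lists
    return sum(p[i] * q[j] * mu(i + j)
               for i in range(len(p)) for j in range(len(q)))

def evaluate(p, x):
    return sum(c * x**i for i, c in enumerate(p))

# monic Legendre polynomials p_0, ..., p_3
P = [[F(1)],
     [F(0), F(1)],
     [F(-1, 3), F(0), F(1)],
     [F(0), F(-3, 5), F(0), F(1)]]

x0, n = F(1, 3), 3
# K_n(x0, y) = sum_j p_j(x0) p_j(y) / h_j, as a polynomial in y
K = [F(0)] * (n + 1)
for j in range(n + 1):
    w = evaluate(P[j], x0) / u_ip(P[j], P[j])   # p_j(x0) / h_j
    for i, c in enumerate(P[j]):
        K[i] += w * c

# reproducing property: <u, K_n(x0, .) q(.)> = q(x0) for deg q <= n
for k in range(n + 1):
    q = [F(0)] * k + [F(1)]                     # q(y) = y^k
    assert u_ip(K, q) == x0 ** k
```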
To finish this section, we need to recall the three term relations satisfied by orthogonal polynomials in several variables. As in the univariate case, the orthogonality can be characterized by means of the three term relations [12, p. 74].
Theorem 2.3. Let {P_n}_{n≥0}, P_0 = 1, be an arbitrary sequence in Π^d. Then the following statements are equivalent.
(1) There exists a linear functional u which defines a quasi-definite moment functional on Π^d and which makes {P_n}_{n≥0} an orthogonal basis of Π^d.
(2) For n ≥ 0 and 1 ≤ i ≤ d, there exist matrices A_{n,i}, B_{n,i} and C_{n,i} of respective sizes r_n^d × r_{n+1}^d, r_n^d × r_n^d and r_n^d × r_{n−1}^d, such that

(a) the polynomials P_n satisfy the three term relations

x_i P_n = A_{n,i} P_{n+1} + B_{n,i} P_n + C_{n,i} P_{n−1}, 1 ≤ i ≤ d, (2.4)

with P_{−1} = 0 and C_{−1,i} = 0,

(b) for n ≥ 0 and 1 ≤ i ≤ d, the matrices A_{n,i} and C_{n+1,i} satisfy the rank conditions

rank A_{n,i} = rank C_{n+1,i} = r_n^d, (2.5)

and

rank A_n = rank C_{n+1}^t = r_{n+1}^d, (2.6)

where A_n is the joint matrix of A_{n,i}, defined as A_n = (A_{n,1}^t, A_{n,2}^t, ..., A_{n,d}^t)^t, and C_{n+1}^t is the joint matrix of C_{n+1,i}^t.

Moreover, in the case when the orthogonal polynomial system is monic, it follows that A_{n,i} = L_{n,i}, n ≥ 0, for 1 ≤ i ≤ d. In this case, the rank conditions for the matrices A_{n,i} = L_{n,i} and A_n = L_n obviously hold.
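A minimal concrete instance of the three term relations is given by the product Legendre functional on the square (d = 2), whose monic orthogonal polynomials are products of univariate monic Legendre polynomials. The following Python sketch is an aside; the coefficient matrices below were computed by hand for this example, and the code verifies x P_1 = A_{1,1} P_2 + B_{1,1} P_1 + C_{1,1} P_0 symbolically.

```python
from fractions import Fraction as F

# bivariate polynomials as {(i, j): coeff} dictionaries
def add(p, q):
    out = dict(p)
    for m, c in q.items():
        out[m] = out.get(m, F(0)) + c
    return {m: c for m, c in out.items() if c != 0}

def scale(c, p):
    return {m: c * v for m, v in p.items()} if c != 0 else {}

def mul_x(p):
    return {(i + 1, j): c for (i, j), c in p.items()}

def combo(M, vec):                    # matrix times vector of polynomials
    out = []
    for row in M:
        acc = {}
        for c, p in zip(row, vec):
            acc = add(acc, scale(c, p))
        out.append(acc)
    return out

# monic product-Legendre vectors P_0, P_1, P_2 (tensor products of
# univariate monic Legendre polynomials, reverse lexicographical order)
P0 = [{(0, 0): F(1)}]
P1 = [{(1, 0): F(1)}, {(0, 1): F(1)}]
P2 = [{(2, 0): F(1), (0, 0): F(-1, 3)},
      {(1, 1): F(1)},
      {(0, 2): F(1), (0, 0): F(-1, 3)}]

# hand-computed coefficients of x * P_1 = A P_2 + B P_1 + C P_0
A = [[F(1), F(0), F(0)],
     [F(0), F(1), F(0)]]              # A_{1,1} = L_{1,1} (monic case)
B = [[F(0), F(0)], [F(0), F(0)]]      # B_{1,1} = 0 (central symmetry)
C = [[F(1, 3)], [F(0)]]

lhs = [mul_x(p) for p in P1]
parts = [combo(A, P2), combo(B, P1), combo(C, P0)]
rhs = [add(add(a, b), c) for a, b, c in zip(*parts)]
assert lhs == rhs
```

Note that A_{1,1} equals the shift matrix L_{1,1}, as predicted by the monic case of the theorem.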

Uvarov modification
Let u be a quasi-definite moment functional defined on Π^d, let {ξ_1, ξ_2, ..., ξ_N} be a fixed set of distinct points in R^d, and let {λ_1, λ_2, ..., λ_N} be a finite set of non-zero real numbers. Then, for p ∈ Π^d, the expression

⟨v, p⟩ = ⟨u, p⟩ + Σ_{j=1}^N λ_j p(ξ_j) (3.1)

defines a moment functional on Π^d which is known as a Uvarov modification of u.
If the moment functional v is quasi-definite, our first result shows that orthogonal polynomials with respect to v can be derived in terms of those with respect to u. To simplify the proof of this result, we will make use of a vector-matrix notation, which we introduce next.
Throughout this section, we fix {P_n}_{n≥0} an orthogonal polynomial system associated with u. We denote by Λ = diag(λ_1, λ_2, ..., λ_N) the diagonal matrix of the masses, by P_n(ξ) the matrix whose columns are P_n(ξ_i),

P_n(ξ) = (P_n(ξ_1), P_n(ξ_2), ..., P_n(ξ_N)) ∈ M_{r_n^d × N}(R), (3.3)

by K_n the matrix whose entries are the kernels K_n(u; ξ_i, ξ_j) defined in (2.3),

K_n = (K_n(u; ξ_i, ξ_j))_{i,j=1}^N ∈ M_N(R), (3.4)

and, finally, by K_n(ξ, x) the vector of polynomials

K_n(ξ, x) = (K_n(u; ξ_1, x), K_n(u; ξ_2, x), ..., K_n(u; ξ_N, x))^t.

From the fact that K_n(u; x, y) − K_{n−1}(u; x, y) = P_n(u; x, y) = P_n(x)^t H_n^{−1} P_n(y), we immediately have the following relations, which will be used below:

K_n(ξ, x) = K_{n−1}(ξ, x) + P_n(ξ)^t H_n^{−1} P_n(x), (3.6)
K_n = K_{n−1} + P_n(ξ)^t H_n^{−1} P_n(ξ). (3.7)

Now we are ready to state and prove the main result of this section. In fact, we give a necessary and sufficient condition ensuring the quasi-definiteness of the modified moment functional in terms of the non-singularity of a matrix.

Theorem 3.1. Let u be a quasi-definite moment functional and assume that the moment functional v defined in (3.1) is quasi-definite. Then the matrices

I_N + Λ K_{n−1} (3.8)

are invertible for n = 1, 2, ..., and any polynomial system {Q_n}_{n≥0} orthogonal with respect to v can be written in the form

Q_n(x) = P_n(x) − P_n(ξ)(I_N + Λ K_{n−1})^{−1} Λ K_{n−1}(ξ, x), (3.9)

where {P_n}_{n≥0} is a polynomial system orthogonal with respect to u. Moreover, the matrices

Ĥ_n = H_n + P_n(ξ)(I_N + Λ K_{n−1})^{−1} Λ P_n(ξ)^t (3.10)

are invertible. Conversely, if the matrices defined in (3.8) and (3.10) are invertible, then the polynomial system {Q_n}_{n≥0} defined by (3.9) constitutes an orthogonal polynomial system with respect to v, and therefore v is quasi-definite.
Proof. Let us assume that v is a quasi-definite moment functional and let {Q_n}_{n≥0} be an OPS with respect to v. We can select an OPS {P_n}_{n≥0} with respect to u such that Q_n has the same leading coefficient as P_n for n ≥ 0; in particular, we have Q_0 = P_0.
From this assumption, the components of Q_n − P_n are polynomials in Π_{n−1}^d for n ≥ 1; then, we can express them as linear combinations of the orthogonal polynomials P_0, P_1, ..., P_{n−1}. In vector-matrix notation, this means that

Q_n(x) = P_n(x) + Σ_{j=0}^{n−1} M_j^n P_j(x),

where M_j^n are matrices of size r_n^d × r_j^d. These coefficient matrices can be determined from the orthogonality of P_n and Q_n. Indeed, ⟨v, Q_n P_j^t⟩ = 0 for 0 ≤ j ≤ n − 1, which shows, by definition of v and the fact that P_j is orthogonal,

M_j^n H_j = ⟨u, Q_n P_j^t⟩ = −Q_n(ξ) Λ P_j(ξ)^t,

where P_j(ξ) is defined as in (3.3) and Q_n(ξ) = (Q_n(ξ_1), ..., Q_n(ξ_N)) is the analogous matrix with Q_n(ξ_i) as its column vectors. Consequently, we obtain

Q_n(x) = P_n(x) − Q_n(ξ) Λ Σ_{j=0}^{n−1} P_j(ξ)^t H_j^{−1} P_j(x) = P_n(x) − Q_n(ξ) Λ K_{n−1}(ξ, x), (3.11)

where the second equality follows from relation (3.6), which leads to a telescopic sum that adds up to K_{n−1}(ξ, x). Setting x = ξ_i for i = 1, ..., N, by the definition of K_{n−1} at (3.4) we get Q_n(ξ) = P_n(ξ) − Q_n(ξ) Λ K_{n−1}, and therefore

Q_n(ξ)(I_N + Λ K_{n−1}) = P_n(ξ). (3.12)

Next, we show that the matrices I_N + Λ K_{n−1} are invertible. Assume that there exists an index k such that I_N + Λ K_{k−1} is singular; then there exists a non-zero vector C = (c_1, c_2, ..., c_N)^t satisfying (I_N + Λ K_{k−1}) C = 0. From (3.12) we deduce P_k(ξ) C = 0, which implies (I_N + Λ K_k) C = 0 using (3.7), and therefore we conclude that (I_N + Λ K_{n−1}) C = 0 for every n ≥ k. Let us consider the discrete moment functional

⟨L, p⟩ = Σ_{i=1}^N c_i λ_i p(ξ_i).

Then (I_N + Λ K_{n−1}) C = 0 implies that L vanishes on every orthogonal polynomial of total degree n ≥ k. Using duality, we deduce the existence of a polynomial q(x) of total degree k − 1 such that L = q(x) u. Let p(x) be a non-zero polynomial vanishing at ξ_1, ξ_2, ..., ξ_N; then we get p(x) q(x) u = p(x) L = 0, which contradicts the quasi-definite character of u. Now, solving for Q_n(ξ) in equation (3.12), we get

Q_n(ξ) = P_n(ξ)(I_N + Λ K_{n−1})^{−1}, (3.14)

and substituting this expression into (3.11) establishes (3.9). Finally, from (3.9) and (3.14) we obtain

Ĥ_n = ⟨v, Q_n P_n^t⟩ = ⟨u, P_n P_n^t⟩ + Q_n(ξ) Λ P_n(ξ)^t = H_n + P_n(ξ)(I_N + Λ K_{n−1})^{−1} Λ P_n(ξ)^t,

which proves (3.10).
Conversely, if we define the polynomials Q_n as in (3.9), then the above proof shows that Q_n is orthogonal with respect to v. Since Q_n and P_n have the same leading coefficient, it is evident that {Q_n}_{n≥0} is an OPS in Π^d.
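In the scalar case d = 1, N = 1, formula (3.9) reduces to the classical univariate Uvarov connection formula and can be tested directly. The following Python sketch is an illustrative aside (monic Legendre polynomials, with hypothetical choices ξ_1 = 1 and λ_1 = 1/2): it builds Q_n from the scalar version of (3.9) and verifies its orthogonality with respect to v.

```python
from fractions import Fraction as F

def mu(k):                                  # Legendre moments on [-1, 1]
    return F(2, k + 1) if k % 2 == 0 else F(0)

def u_ip(p, q):
    return sum(p[i] * q[j] * mu(i + j)
               for i in range(len(p)) for j in range(len(q)))

def evaluate(p, x):
    return sum(c * x**i for i, c in enumerate(p))

# monic Legendre polynomials p_0, ..., p_3
P = [[F(1)], [F(0), F(1)], [F(-1, 3), F(0), F(1)], [F(0), F(-3, 5), F(0), F(1)]]

xi, lam = F(1), F(1, 2)                     # hypothetical mass point and mass

def v_ip(p, q):                             # <v, p*q> = <u, p*q> + lam p(xi) q(xi)
    return u_ip(p, q) + lam * evaluate(p, xi) * evaluate(q, xi)

def kernel(n):                              # K_n(xi, y) as a polynomial in y
    K = [F(0)] * (n + 1)
    for j in range(n + 1):
        w = evaluate(P[j], xi) / u_ip(P[j], P[j])
        for i, c in enumerate(P[j]):
            K[i] += w * c
    return K

for n in (1, 2, 3):
    Km = kernel(n - 1) + [F(0)]             # pad K_{n-1} to length n + 1
    # scalar version of (3.9):
    # Q_n = P_n - P_n(xi) (1 + lam K_{n-1}(xi, xi))^{-1} lam K_{n-1}(xi, .)
    factor = evaluate(P[n], xi) * lam / (1 + lam * evaluate(kernel(n - 1), xi))
    Qn = [p - factor * k for p, k in zip(P[n], Km)]
    for k in range(n):
        assert v_ip(Qn, [F(0)] * k + [F(1)]) == 0
```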
From now on, let us assume that v is a quasi-definite moment functional and {Q_n}_{n≥0} is an OPS with respect to v as given in (3.9). Then, the invertible matrix Ĥ_n = ⟨v, Q_n Q_n^t⟩ can be expressed in terms of matrices involving only {P_n}_{n≥0}, as we have shown in Theorem 3.1. It turns out that this also happens for Ĥ_n^{−1}.
Proposition 3.2. In the conditions of Theorem 3.1, for n ≥ 0, the following identity holds:

Ĥ_n^{−1} = H_n^{−1} − H_n^{−1} P_n(ξ)(Λ^{−1} + K_n)^{−1} P_n(ξ)^t H_n^{−1}. (3.15)

Proof. Formula (3.15) is a direct consequence of the Sherman-Morrison-Woodbury identity for the inverse of a perturbation of a non-singular matrix (see [14, p. 51]).
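In the scalar case d = 1, N = 1, identity (3.15) can be checked by direct rational arithmetic. The following Python sketch is an aside: the quantities H_n, P_n(ξ), Λ and K_{n−1} become scalars h, p, λ and k_{n−1} (the triples below are arbitrary rational test values), and k_n = k_{n−1} + p²/h by (3.7).

```python
from fractions import Fraction as F

# scalar version of (3.15): with k_n = k_{n-1} + p^2 / h,
#   (h + p^2 lam / (1 + lam k_{n-1}))^{-1}
#       = 1/h - (p/h) (1/lam + k_n)^{-1} (p/h)
cases = [(F(2), F(3), F(1, 2), F(5)),
         (F(8, 45), F(-2, 3), F(7), F(1, 4)),
         (F(1, 3), F(4), F(-3, 2), F(2))]      # arbitrary rational test data
for h, p, lam, km in cases:
    hhat = h + p * p * lam / (1 + lam * km)    # scalar version of (3.10)
    kn = km + p * p / h
    assert 1 / hhat == 1 / h - (p / h) / (1 / lam + kn) * p / h
```

Note that the identity requires λ ≠ 0, which is guaranteed by the hypotheses on the masses.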
Corollary 3.3. In the conditions of Theorem 3.1, for m ≥ 0, the matrix (I_N + Λ K_m)^{−1} Λ is symmetric.

Proof. (I_N + Λ K_m)^{−1} Λ is a symmetric matrix, as it is the inverse of the symmetric matrix Λ^{−1}(I_N + Λ K_m) = Λ^{−1} + K_m.

The next theorem establishes a relation between the kernels of both families. Tools similar to those used in the proof of Theorem 2.5 in [11] can be applied to obtain this result.

Theorem 3.4. Suppose that we are in the conditions of Theorem 3.1. Then, for m ≥ 0,

K_m(v; x, y) = K_m(u; x, y) − K_m(ξ, x)^t (Λ^{−1} + K_m)^{−1} K_m(ξ, y),

where we assume K_{−1}(x, y) ≡ 0. Furthermore, an analogous relation holds for the kernels P_n(v; x, y), n ≥ 0.

Christoffel modification
The Christoffel modification of a quasi-definite moment functional u will be studied in this section. We define the Christoffel modification of u as the moment functional

⟨v, p⟩ = ⟨λ(x) u, p⟩ = ⟨u, λ(x) p⟩, p ∈ Π^d.

We will work with the particular case when the polynomial λ(x) has total degree 2. We must remark that in several variables there exist polynomials of second degree that cannot be factorized as a product of two polynomials of degree 1, so this case is not a trivial extension of the case considered in [2]. Using a block matrix formalism for the three term relations, this case has also been considered in [4] and [5] for polynomials of arbitrary degree.
Let u be a quasi-definite moment functional, and let λ(x) be a polynomial in d variables of total degree 2. In terms of the canonical basis, this polynomial can be written as

λ(x) = a_2 X_2 + a_1 X_1 + a_0 X_0, (4.1)

where a_k ∈ M_{1×r_k^d}(R), for k = 0, 1, 2, whose explicit expressions are

a_2 = (a_{11}, a_{12}, ..., a_{1d}, a_{22}, ..., a_{2d}, ..., a_{dd}), a_1 = (a_1, a_2, ..., a_d), a_0 = (a_0), (4.2)

with a_2 ≠ 0. Observe that the first moment of v is given by μ̃_0 = ⟨v, 1⟩ = ⟨u, λ(x)⟩, which must be different from zero whenever v is quasi-definite. First, we describe the relations between the moment matrices of the two functionals u and v. Taking into account that, for h ≥ 0,

λ(x) X_h = A_{h,2} X_{h+2} + A_{h,1} X_{h+1} + a_0 X_h, (4.3)

where A_{h,1} and A_{h,2} are matrices of orders r_h^d × r_{h+1}^d and r_h^d × r_{h+2}^d, respectively, and A_{h,2} has full rank, the r_h^d × r_k^d block of moments for the functional v, m̃_{h,k}, can be expressed in terms of the blocks of moments for the functional u, m_{h,k}, in the following way:

m̃_{h,k} = A_{h,2} m_{h+2,k} + A_{h,1} m_{h+1,k} + a_0 m_{h,k}.

Collecting these blocks, the moment matrix for the functional v can be written as a perturbation of the original moment matrix. If u and v are quasi-definite, we want to relate the orthogonal polynomial systems {P_n}_{n≥0} and {Q_n}_{n≥0} associated with u and v, respectively. As usual in this paper, for n ≥ 0, we denote H_n = ⟨u, P_n P_n^t⟩ and Ĥ_n = ⟨v, Q_n Q_n^t⟩, both symmetric and invertible matrices.

Theorem 4.1. Let u and v be two quasi-definite moment functionals, and let {P_n}_{n≥0} and {Q_n}_{n≥0} be monic OPS associated with u and v, respectively. The following statements are equivalent:

(1) There exists a polynomial λ(x) of exact degree two such that v = λ(x) u.
(2) For n ≥ 1, there exist matrices M_n and N_n of sizes r_n^d × r_{n−1}^d and r_n^d × r_{n−2}^d, with N_n of full rank, such that

P_n(x) = Q_n(x) + M_n Q_{n−1}(x) + N_n Q_{n−2}(x), (4.4)

with Q_{−1} = 0.

Proof. First, we prove (1) ⇒ (2). Let us assume that v = λ(x) u, where λ(x) = a_2 X_2 + a_1 X_1 + a_0 X_0 and a_2 ≠ 0. Since {Q_n}_{n≥0} is a basis of the space of polynomials, and P_n and Q_n are monic, then

P_n(x) = Q_n(x) + Σ_{j=0}^{n−1} M_j^n Q_j(x),

where M_j^n ∈ M_{r_n^d × r_j^d}(R), and M_j^n Ĥ_j = ⟨v, P_n Q_j^t⟩. Given that the degree of λ(x) is 2, from the orthogonality of P_n we get

⟨v, P_n Q_j^t⟩ = ⟨u, P_n (λ(x) Q_j)^t⟩ = 0, j < n − 2,

and then M_j^n = 0 for j < n − 2. Therefore, (4.4) holds with M_n = M_{n−1}^n and N_n = M_{n−2}^n. On the other hand,

N_n Ĥ_{n−2} = ⟨v, P_n Q_{n−2}^t⟩ = ⟨u, P_n (λ(x) Q_{n−2})^t⟩ = ⟨u, P_n X_n^t⟩ A_{n−2,2}^t = H_n A_{n−2,2}^t,

and therefore

N_n = H_n A_{n−2,2}^t Ĥ_{n−2}^{−1}. (4.5)

Conversely, we prove (2) ⇒ (1). Using the dual basis, and the same reasoning as in the proof of Lemma 1 in [2], we obtain an expansion of v in terms of u, where E_n^t = ⟨v, P_n^t⟩, n ≥ 0. By (4.4), we get E_n^t = 0 for n > 2. Then there exists a polynomial λ(x) of degree at most 2 such that v = λ(x) u. Since N_2 ≠ 0, λ(x) has exact degree 2. Moreover, we prove that N_n has full rank for n ≥ 2. In fact, using (4.4), we get N_n Ĥ_{n−2} = ⟨v, P_n Q_{n−2}^t⟩. On the other hand, ⟨v, P_n Q_{n−2}^t⟩ = ⟨u, P_n X_n^t⟩ A_{n−2,2}^t = H_n A_{n−2,2}^t, where A_{n−2,2} was defined in (4.3). Then, N_n = H_n A_{n−2,2}^t Ĥ_{n−2}^{−1}. Therefore N_n has full rank for n ≥ 2, since A_{n−2,2} has full rank, H_n and Ĥ_{n−2} are invertible matrices, and the rank of a matrix is invariant under multiplication by non-singular matrices [15, p. 13].
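The structure of relation (4.4) can be observed numerically in the univariate case with λ(x) = x² − 4, an illustrative choice whose zeros lie outside [−1, 1], so that v = λ(x)u remains quasi-definite for the Legendre functional u. The following Python sketch is an aside: it builds both monic families by Gram-Schmidt and checks that the expansion of P_n in the Q-basis only involves Q_n, Q_{n−1} and Q_{n−2}.

```python
from fractions import Fraction as F

N = 4

def mu(k):                                  # Legendre moments
    return F(2, k + 1) if k % 2 == 0 else F(0)

def nu(k):                                  # moments of v = (x^2 - 4) u
    return mu(k + 2) - 4 * mu(k)

def make_ip(mom):
    def ip(p, q):
        return sum(p[i] * q[j] * mom(i + j)
                   for i in range(N + 1) for j in range(N + 1))
    return ip

u_ip, v_ip = make_ip(mu), make_ip(nu)

def gram_schmidt(ip):
    basis = []
    for k in range(N + 1):
        p = [F(1) if i == k else F(0) for i in range(N + 1)]
        for q in basis:
            c = ip(p, q) / ip(q, q)
            p = [pi - c * qi for pi, qi in zip(p, q)]
        basis.append(p)
    return basis

P, Q = gram_schmidt(u_ip), gram_schmidt(v_ip)

# expand P_n in the Q basis; by Theorem 4.1 only Q_n, Q_{n-1}, Q_{n-2} appear
for n in range(N + 1):
    coeffs = [v_ip(P[n], Q[j]) / v_ip(Q[j], Q[j]) for j in range(N + 1)]
    assert coeffs[n] == 1                   # both families are monic
    assert all(coeffs[j] == 0 for j in range(max(0, n - 2)))
```

Since both u and v are centrally symmetric in this example, the coefficient of Q_{n−1} also vanishes, in agreement with Proposition 4.6 below.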
Remark 4.2. When both moment functionals are quasi-definite, that is, when both OPS {P_n}_{n≥0} and {Q_n}_{n≥0} exist, the orthogonality of the second family with respect to v = λ(x) u trivially implies that the polynomial entries of λ(x) Q_n are quasi-orthogonal with respect to the first moment functional u. In fact, there exist matrices A_k^n of adequate size such that

λ(x) Q_n(x) = Σ_{k=0}^{n+2} A_k^n P_k(x), where A_k^n H_k = ⟨u, λ(x) Q_n P_k^t⟩ = ⟨v, Q_n P_k^t⟩.

Then, A_k^n = 0 for 0 ≤ k ≤ n − 1, and therefore

λ(x) Q_n(x) = A_{n+2}^n P_{n+2}(x) + A_{n+1}^n P_{n+1}(x) + A_n^n P_n(x).

The matrix version of this relation is the first identity in Proposition 2.7 of [4]. Now, we assume that u is a quasi-definite moment functional and {P_n}_{n≥0} is the monic OPS associated with u. Then {P_n}_{n≥0} satisfies the three term relations (2.4) with the rank conditions (2.5), (2.6). Defining recursively the monic polynomial system {Q_n}_{n≥0} by means of (4.4), we want to deduce its relation with u, as well as conditions for its quasi-definiteness.
Theorem 4.3. Let {P_n}_{n≥0} be the monic OPS associated with a quasi-definite moment functional u, and let {M_n}_{n≥1} and {N_n}_{n≥2} be two sequences of matrices of orders r_n^d × r_{n−1}^d and r_n^d × r_{n−2}^d, respectively, such that N_2 ≠ 0. Define recursively the monic polynomial system {Q_n}_{n≥0} by means of (4.4), that is,

Q_n(x) = P_n(x) − M_n Q_{n−1}(x) − N_n Q_{n−2}(x), n ≥ 1,

with Q_0 = P_0 = 1. Then {Q_n}_{n≥0} is a monic OPS associated with a quasi-definite moment functional v, satisfying the three term relations

x_i Q_n = L_{n,i} Q_{n+1} + B̂_{n,i} Q_n + Ĉ_{n,i} Q_{n−1}, 1 ≤ i ≤ d, (4.6)

with initial conditions Q_{−1}(x) = 0, Ĉ_{−1,i} = 0, if and only if the matrix coefficients satisfy conditions (4.7)-(4.10). In such a case, there exists a polynomial λ(x) of exact degree two, constructed in terms of the first connection coefficients, such that v = λ(x) u.

Proof. Replacing (4.4) in (2.4), we obtain an expression for x_i P_n in terms of the polynomials Q_m. On the other hand, if {Q_n}_{n≥0} satisfies (4.6), then we obtain a second expression for the same quantity. Subtracting both expressions, we get (4.7)-(4.10).
Conversely, we prove the three term relation for {Q_n}_{n≥0} by induction. In fact, for n = 1, multiplying relation (4.4) by L_{0,i}, we get the relation for n = 1. Let us suppose that (4.6) is satisfied up to n − 1. Multiplying (4.4) for n + 1 by L_{n,i} and applying the three term relation for {P_n}_{n≥0}, replacing again (4.4) in the left-hand side, and using the induction hypothesis and relations (4.7)-(4.10), we get the announced three term relations for {Q_n}_{n≥0}. Now, we define the moment functional v by setting ⟨v, 1⟩ = 1 and ⟨v, Q_n⟩ = 0 for n ≥ 1. Since {Q_n}_{n≥0} is a basis of Π^d, v is well defined. Following [12, p. 74], since L_{n,i} and L_n have full rank, {Q_n}_{n≥0} is orthogonal with respect to v. Finally, we need to prove that Ĥ_n = ⟨v, Q_n Q_n^t⟩ is an invertible matrix for n ≥ 0. From the definition of v, Ĥ_0 = ⟨v, Q_0 Q_0^t⟩ = ⟨v, 1⟩ = 1. On the other hand, since expression (4.5) still holds in this case, using the properties of the rank of a product of matrices, we get

r_{n−2}^d = rank(H_n A_{n−2,2}^t) = rank(N_n Ĥ_{n−2}) ≤ rank Ĥ_{n−2} ≤ r_{n−2}^d,

and then rank Ĥ_{n−2} = r_{n−2}^d. In this way, the moment functional v is quasi-definite and {Q_n}_{n≥0} is a monic OPS associated with v. Using Theorem 4.1, both moment functionals are related by means of a Christoffel modification v = λ(x) u.

Remark 4.4. Observe that relation (4.10) always holds when {Q_n}_{n≥0} is an OPS. In fact, using (2.7), we get C_{n,i} H_{n−1} = H_n L_{n−1,i}^t and Ĉ_{n,i} Ĥ_{n−1} = Ĥ_n L_{n−1,i}^t, and, jointly with (4.5), relation (4.10) follows. On the other hand, the remaining identification follows from (4.3) and property (2.2). However, if {Q_n}_{n≥0} is not orthogonal, then we cannot assume a priori that Ĥ_{n−3} is non-singular, and so (4.10) does not necessarily hold.
Remark 4.5. In the case when both functionals u and v are quasi-definite, Theorem 4.3 can be rewritten using a matrix formalism, as is done in [4,5]. For 1 ≤ i ≤ d, we denote by J_i and Ĵ_i the respective block Jacobi matrices associated with the three term relations [12, p. 82]. Also, we define the lower triangular block matrix M, with identity matrices as diagonal blocks and with the matrices M_n, N_n of Theorem 4.1 in the first and second block subdiagonals. Then, formulas (4.7)-(4.10) can be expressed as the matrix product

M Ĵ_i = J_i M, 1 ≤ i ≤ d. (4.11)

The explicit expressions of the matrices M_n and N_n given in Theorem 4.1 lead to the matrix relations of Proposition 2.4 in [4].
A moment functional u is said to be centrally symmetric (see [12]) if ⟨u, x^ν⟩ = 0 whenever |ν| is an odd integer. This definition constitutes the multivariate extension of symmetry for a moment functional. Quasi-definite centrally symmetric moment functionals can be characterized in terms of the matrix coefficients of the three term relations (2.4). In fact, u is centrally symmetric if and only if B_{n,i} = 0 for all n ≥ 0 and 1 ≤ i ≤ d.
As a consequence, an orthogonal polynomial of degree n with respect to u is a sum of monomials of even degree if n is even and a sum of monomials of odd degree if n is odd.
Let us suppose that u is a quasi-definite centrally symmetric moment functional, and define its Christoffel modification v = λ(x) u, where λ(x) is a polynomial of second degree as in (4.1). Then we have the following result.

Proposition 4.6. v is centrally symmetric if and only if a_1 = 0, that is, λ(x) = a_2 X_2 + a_0 X_0.

If v is quasi-definite and centrally symmetric, then relation (4.4) reduces to

P_n(x) = Q_n(x) + N_n Q_{n−2}(x), n ≥ 2,

with N_n given by (4.5).

Examples

Two modifications on the ball
Let us denote by

B^d = {x ∈ R^d : ‖x‖ ≤ 1} and S^{d−1} = {x ∈ R^d : ‖x‖ = 1}

the unit ball and the unit sphere in R^d, respectively, where ‖x‖ denotes the usual Euclidean norm. Consider the weight function

W_μ(x) = (1 − ‖x‖²)^{μ−1/2}, μ > −1/2, x ∈ B^d.

Associated with W_μ(x), we define the usual inner product on the unit ball,

⟨p, q⟩_μ = ω_μ ∫_{B^d} p(x) q(x) W_μ(x) dx,

where the normalizing constant ω_μ is chosen so that ⟨1, 1⟩_μ = 1. The associated moment functional u_μ is defined by means of its moments μ_ν = ⟨u_μ, x^ν⟩ = ⟨x^ν, 1⟩_μ. Observe that u_μ is a centrally symmetric positive-definite moment functional, that is, ⟨u_μ, x^ν⟩ = 0 whenever |ν| is an odd integer.
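For d = 2 these moments can be computed in closed form via polar coordinates and Beta integrals. The following Python sketch is an aside (the closed form below is our own derivation, and μ = 3/2 is a hypothetical test value): it compares the formula against a crude midpoint quadrature on the disk and checks the central symmetry numerically.

```python
import math

MU = 1.5                                       # hypothetical value of the parameter mu

def moment(a, b):
    """<u_mu, x^(2a) y^(2b)> on the unit disk (d = 2), normalized so <u_mu, 1> = 1.
    Closed form obtained from polar coordinates and Beta integrals (our derivation)."""
    g = math.gamma
    raw = g(MU + 0.5) * g(a + 0.5) * g(b + 0.5) / g(a + b + MU + 1.5)
    norm = g(MU + 0.5) * math.pi / g(MU + 1.5)  # the case a = b = 0
    return raw / norm

def numeric(f, m=400):                          # crude midpoint quadrature on the disk
    total, h = 0.0, 2.0 / m
    for i in range(m):
        x = -1 + (i + 0.5) * h
        for j in range(m):
            y = -1 + (j + 0.5) * h
            r2 = x * x + y * y
            if r2 < 1.0:
                total += f(x, y) * (1.0 - r2) ** (MU - 0.5)
    return total * h * h

w = numeric(lambda x, y: 1.0)
for a, b in [(1, 0), (1, 1), (2, 0)]:
    approx = numeric(lambda x, y: x ** (2 * a) * y ** (2 * b)) / w
    assert abs(approx - moment(a, b)) < 2e-3
# central symmetry: moments of odd total degree vanish
assert abs(numeric(lambda x, y: x * y * y) / w) < 1e-9
```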

Let us denote by {P_n^{(μ)}}_{n≥0} a ball OPS. In this case, several explicit bases are known, and we will describe one of them: an orthogonal basis in terms of classical Jacobi polynomials and spherical harmonics, presented in [12].
Harmonic polynomials (see [12, p. 114]) are homogeneous polynomials in d variables Y(x) satisfying the Laplace equation ∆Y = 0. Let H_n^d denote the space of harmonic polynomials of degree n. It is well known that

σ_n := dim H_n^d = C(n + d − 1, d − 1) − C(n + d − 3, d − 1),

and the restriction of a harmonic polynomial to the unit sphere is called a spherical harmonic. We will use spherical polar coordinates x = r ξ, for x ∈ R^d, r ≥ 0, and ξ ∈ S^{d−1}. If Y ∈ H_n^d, we use the notation Y(x) to denote the harmonic polynomial, and Y(ξ) to denote the spherical harmonic. This notation is coherent with the fact that if x = r ξ, then Y(x) = r^n Y(ξ). Moreover, spherical harmonics of different degrees are orthogonal with respect to the surface measure on S^{d−1}, and we can choose an orthonormal basis.
Then, an orthonormal basis for the classical ball inner product (see [12, p. 142]) is given by the polynomials

P_{j,k}^n(x) = h_{j,n} P_j^{(μ−1/2, n−2j+(d−2)/2)}(2‖x‖² − 1) Y_k^{n−2j}(x), (5.1)

for 0 ≤ j ≤ n/2 and 1 ≤ k ≤ σ_{n−2j}. Here P_j^{(α,β)}(t) denotes the j-th classical Jacobi polynomial, {Y_k^{n−2j} : 1 ≤ k ≤ σ_{n−2j}} is an orthonormal basis of H_{n−2j}^d, and the h_{j,n} are normalizing constants expressed in terms of the Pochhammer symbol (a)_m = a(a+1) ⋯ (a+m−1). This basis can be written as an orthonormal polynomial system P_n whose entries P_{j,k}^n are ordered by reverse lexicographical order of their indexes. Observe that for n odd, n − 2[n/2] = 1 and σ_1 = d, while for n even, n − 2[n/2] = 0 and σ_0 = 1.

Connection properties for adjacent families of ball polynomials
The Christoffel modification allows us to relate two families of ball polynomials whose parameters differ by one unit. In fact,

(1 − ‖x‖²) W_μ(x) = W_{μ+1}(x).

Then, if we define λ(x) = 1 − ‖x‖², so that, up to a normalizing constant, u_{μ+1} = λ(x) u_μ, using the matrix formalism (4.11), we get the relation

P_n^{(μ)}(x) = F_n [P_n^{(μ+1)}(x) + N_n P_{n−2}^{(μ+1)}(x)],

where F_n is the non-singular matrix needed to change the leading coefficients, and N_n has full rank. Moreover, in this case we can give both matrices explicitly using the previously described basis.
Here we use the relation for Jacobi polynomials (formula (22.7.18) in [1]), valid for n ≥ 0, in (5.1), and we can relate ball polynomials corresponding to adjacent families,

P_{j,k}^{n,(μ)}(x) = a_j^n P_{j,k}^{n,(μ+1)}(x) + b_j^n P_{j−1,k}^{n−2,(μ+1)}(x),

for n ≥ 2 and 1 ≤ j ≤ n/2, where a_j^n and b_j^n are explicit constants obtained from the Jacobi coefficients and the normalizations h_{j,n}. Then, the matrix F_n is the diagonal and invertible matrix F_n = diag(a_0^n, ..., a_{[n/2]}^n), where each a_j^n appears repeated σ_{n−2j} times.

Ball-Uvarov polynomials
Let u_μ be the classical ball moment functional defined as above. We add a mass point at the origin and consider the moment functional

⟨v, p(x)⟩ = ⟨u_μ, p(x)⟩ + λ p(0).
Using Theorem 3.1, we deduce the quasi-definite character of the moment functional v except for a denumerable set of negative values of λ. Of course, v is positive definite for every positive value of λ. Let {P_{j,k}^n(x)} be the mutually orthogonal basis of ball polynomials given in (5.1). Since spherical harmonics are homogeneous polynomials, we get Y_k^{n−2j}(0) = 0 whenever n − 2j > 0. Hence, P_{j,k}^n(0) can be different from zero only if n is even, j = n/2 and k = 1, and it vanishes in any other case. (5.2)

As a consequence, only the polynomials P_{n/2,1}^n with n even contribute to the kernels at the origin. From (5.2), we can recognize the [n/2]-th kernel of the Jacobi polynomials with parameters (μ − 1/2, (d−2)/2); using P_j^{(α,β)}(t) = (−1)^j P_j^{(β,α)}(−t) and relation (4.5.3) in [26], we obtain a closed expression for K_n(u_μ; x, 0). In particular, setting x = 0, we obtain the explicit value of K_n(u_μ; 0, 0). Using Stirling's formula, we can easily deduce the convergence of the sequence {b_n} to a positive value. Therefore, we can find a constant C such that 0 < b_n < C, n = 0, 1, 2, ..., that is, the difference K_n(u_μ; x, x) − K_n(v; x, x) is positive and uniformly bounded, and Lemma 5.3 gives (5.5) and (5.6). Finally, (5.7) follows from the corresponding asymptotic result for Christoffel functions on the ball obtained by Y. Xu in [30].