Dunkl-Type Operators with Projection Terms Associated to Orthogonal Subsystems in Root System

In this paper, we introduce a new differential-difference operator $T_\xi$ $(\xi \in \mathbb{R}^N)$ by using projections associated to orthogonal subsystems in root systems. In analogy with Dunkl theory, we show that these operators commute, and we construct an intertwining operator between $T_\xi$ and the directional derivative $\partial_\xi$. In the one-variable case, we prove that the Kummer functions are eigenfunctions of this operator.

The main objective of this paper is to present a new class of differential-difference operators $T_\xi$, $\xi \in \mathbb{R}^N$, with the help of orthogonal projections related to orthogonal subsystems in root systems. In other words, our operators are obtained from the Dunkl operator by replacing the reflections that appear in its definition with the corresponding orthogonal projections. Several problems from Dunkl theory arise in the setting of our operators, in particular the commutativity of the family $\{T_\xi,\ \xi \in \mathbb{R}^N\}$ and the existence of an intertwining operator.
The outline of the paper is as follows. In Section 2, we collect some definitions and results related to root systems and Dunkl operators which will be relevant in the sequel. In Section 3, we introduce the new differential-difference operators $T_\xi$ and we prove the first main result. In Section 4, we give an explicit formula for the intertwining operator between $T_\xi$ and the directional derivative. In Section 5, we study the one-variable case. Finally, in Section 6 we study the cases of orthogonal subsets in root systems of type $A_{N-1}$ and $B_N$.

Dunkl operators
arXiv:1304.5866v3 [math.CA] 4 Nov 2013

Let us begin by recalling some results concerning root systems and Dunkl operators. A useful reference for this topic is the book by Humphreys [9]. For $\alpha \in \mathbb{R}^N \setminus \{0\}$, we denote by $s_\alpha$ the reflection in the hyperplane orthogonal to $\alpha$; that is,
$$s_\alpha(x) = x - 2\,\frac{\langle x, \alpha\rangle}{|\alpha|^2}\,\alpha,$$
where $\langle\cdot, \cdot\rangle$ denotes the Euclidean scalar product on $\mathbb{R}^N$ and $|x| = \sqrt{\langle x, x\rangle}$. A root system is a finite set $R$ of nonzero vectors in $\mathbb{R}^N$ such that for any $\alpha \in R$ one has $s_\alpha(R) = R$ and $R \cap \mathbb{R}\alpha = \{\pm\alpha\}$.
A positive subsystem $R_+$ is any subset of $R$ satisfying $R = R_+ \cup (-R_+)$. The Weyl group $W = W(R)$ (or real finite reflection group) generated by the root system $R \subset \mathbb{R}^N$ is the subgroup of the orthogonal group $O(N)$ generated by $\{s_\alpha,\ \alpha \in R\}$. A multiplicity function on $R$ is a complex-valued function $\kappa : R \to \mathbb{C}$ which is invariant under the Weyl group $W$, i.e.,
$$\kappa(w\alpha) = \kappa(\alpha) \qquad \text{for all } w \in W,\ \alpha \in R.$$
For $\xi \in \mathbb{R}^N$, the Dunkl operator $D_\xi$ associated with the Weyl group $W(R)$ and the multiplicity function $\kappa$ is the first-order differential-difference operator
$$D_\xi f(x) = \partial_\xi f(x) + \sum_{\alpha \in R_+} \kappa(\alpha)\,\langle\alpha, \xi\rangle\,\frac{f(x) - f(s_\alpha x)}{\langle\alpha, x\rangle}. \qquad (1)$$
Here $\partial_\xi$ is the directional derivative corresponding to $\xi$, and $s_\alpha$ is the orthogonal reflection in the hyperplane orthogonal to $\alpha$. The Dunkl operator $D_\xi$ is a homogeneous differential-difference operator of degree $-1$. By the $W$-invariance of the multiplicity function $\kappa$, we have the equivariance
$$w \circ D_\xi \circ w^{-1} = D_{w\xi}, \qquad w \in W.$$
The remarkable property of the Dunkl operators is that the family $\{D_\xi,\ \xi \in \mathbb{R}^N\}$ generates a commutative algebra of linear operators on the $\mathbb{C}$-algebra of polynomial functions.
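To make the rank-one instance of (1) concrete, the following Python sketch (illustration only, not part of the paper) applies the classical Dunkl operator for the root system $A_1$, $D_\kappa f(x) = f'(x) + \kappa\,\frac{f(x) - f(-x)}{x}$, to polynomials written as coefficient lists; on monomials it gives $D_\kappa x^n = n\,x^{n-1}$ for even $n$ and $(n + 2\kappa)\,x^{n-1}$ for odd $n$.

```python
# Rank-one Dunkl operator for the root system A_1 (Weyl group Z_2):
#   D_kappa f(x) = f'(x) + kappa * (f(x) - f(-x)) / x
# Polynomials are coefficient lists: p[n] is the coefficient of x^n.

def dunkl(p, kappa):
    """Apply the rank-one Dunkl operator to a polynomial p."""
    out = [0.0] * max(len(p) - 1, 1)
    # derivative part: d/dx x^n = n x^{n-1}
    for n in range(1, len(p)):
        out[n - 1] += n * p[n]
    # difference part: kappa * (x^n - (-x)^n)/x = 2*kappa*x^{n-1} for odd n
    for n in range(1, len(p), 2):
        out[n - 1] += 2 * kappa * p[n]
    return out
```

For instance, `dunkl([0, 0, 0, 1], 0.5)` applies $D_{1/2}$ to $x^3$ and returns the coefficients of $(3 + 2\cdot\tfrac12)\,x^2 = 4x^2$.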

Operators of Dunkl-type
Let $R$ be a root system. A subset $R'$ of $R$ is called a subsystem of $R$ if it satisfies the following conditions: i) if $\alpha \in R'$, then $-\alpha \in R'$; ii) if $\alpha, \beta \in R'$ and $\alpha + \beta \in R$, then $\alpha + \beta \in R'$.
A subsystem $R'$ of a root system $R$ in $\mathbb{R}^N$ consisting of pairwise orthogonal roots is called an orthogonal subsystem. In this case the related Weyl group $W(R')$ is a subgroup of $\mathbb{Z}_2^N$. For a vector $\alpha \in \mathbb{R}^N \setminus \{0\}$, we write
$$\tau_\alpha(x) = x - \frac{\langle x, \alpha\rangle}{|\alpha|^2}\,\alpha$$
for the orthogonal projection onto the hyperplane $(\mathbb{R}\alpha)^\perp = \{x,\ \langle x, \alpha\rangle = 0\}$, so that the reflection $s_\alpha$ with respect to the hyperplane orthogonal to $\alpha$ is related to $\tau_\alpha$ by
$$s_\alpha = 2\tau_\alpha - \mathrm{id}.$$
The hyperplane $(\mathbb{R}\alpha)^\perp$ is the invariant set of $\tau_\alpha$. If $\langle\alpha, \beta\rangle = 0$, then the orthogonal projections $\tau_\alpha$ and $\tau_\beta$ commute. The conjugate of an orthogonal projection onto a hyperplane is again an orthogonal projection onto a hyperplane: if $u \in O(N)$ and $\alpha \in \mathbb{R}^N \setminus \{0\}$, then
$$u \circ \tau_\alpha \circ u^{-1} = \tau_{u\alpha}.$$
Let $R$ be a root system and $R'_+$ a positive orthogonal subsystem of $R$. For $\xi \in \mathbb{R}^N$, we define the differential-difference operator $T_\xi$ by
$$T_\xi f(x) = \partial_\xi f(x) + \sum_{\alpha \in R'_+} \kappa(\alpha)\,\langle\alpha, \xi\rangle\,\frac{f(x) - f(\tau_\alpha x)}{\langle\alpha, x\rangle}, \qquad (2)$$
where $\kappa$ is a multiplicity function on $R'$. For $j = 1, \ldots, N$ we denote $T_{e_j}$ by $T_j$. The operator $T_\xi$ can be considered as a deformation of the usual directional derivative: when $\kappa = 0$, the operator $T_\xi$ reduces to $\partial_\xi$. Furthermore, there is an evident analogy between (2) and (1): the operator (2) is obtained from the Dunkl operator by replacing the reflection terms in (1) with orthogonal projection terms.
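The two linear-algebra facts used above, namely that the projection $\tau_\alpha(x) = x - \frac{\langle x,\alpha\rangle}{|\alpha|^2}\alpha$ satisfies $s_\alpha = 2\tau_\alpha - \mathrm{id}$ and that projections along orthogonal roots commute, can be checked numerically; the following sketch (our helper names, not from the paper) does so for the orthogonal pair $e_1 + e_2$, $e_1 - e_2$ in $\mathbb{R}^2$.

```python
# tau_a(x) = x - (<x,a>/|a|^2) a : orthogonal projection onto a^perp.
# s_a = 2*tau_a - id : reflection in the hyperplane orthogonal to a.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def tau(a, x):
    """Orthogonal projection of x onto the hyperplane orthogonal to a."""
    c = dot(x, a) / dot(a, a)
    return [xi - c * ai for xi, ai in zip(x, a)]

def s(a, x):
    """Reflection in a^perp, computed via s_a = 2*tau_a - id."""
    t = tau(a, x)
    return [2 * ti - xi for ti, xi in zip(t, x)]
```

With $a = (1,1)$, $b = (1,-1)$ and $x = (2,3)$, both orders $\tau_a\tau_b x$ and $\tau_b\tau_a x$ give the projection onto $\operatorname{span}\{a,b\}^\perp = \{0\}$, and `s(a, x)` agrees with the direct reflection formula.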

Example 1.
In the rank-one case, the root system is of type $A_1$ and the corresponding reflection $s$ and orthogonal projection $\tau$ are given by
$$s(x) = -x, \qquad \tau(x) = 0.$$
The Dunkl-type operator $T_\kappa$ associated with the projection $\tau$ and the multiplicity parameter $\kappa \in \mathbb{C}$ is given by
$$T_\kappa f(x) = f'(x) + \kappa\,\frac{f(x) - f(0)}{x}.$$

Example 2. Let $R = \{\pm(e_1 \pm e_2), \pm e_1, \pm e_2\}$ be a root system of type $B_2$ in $\mathbb{R}^2$ and let $R'_+ = \{e_1 + e_2,\ e_1 - e_2\}$ be a positive orthogonal subsystem of $R$. The Dunkl-type operators related to $R'_+$ and to the positive parameters $(\kappa_1, \kappa_2)$ are given by
$$T_j f(x) = \partial_j f(x) + \kappa_1\,\langle e_1 + e_2, e_j\rangle\,\frac{f(x) - f(\tau_{e_1+e_2} x)}{x_1 + x_2} + \kappa_2\,\langle e_1 - e_2, e_j\rangle\,\frac{f(x) - f(\tau_{e_1-e_2} x)}{x_1 - x_2}, \qquad j = 1, 2.$$

We denote by $\Pi^N$ the space of polynomials and by $\Pi^N_n$ the subspace of homogeneous polynomials of degree $n$.
Let $R'_+ = \{\alpha_1, \ldots, \alpha_n\}$ be a positive orthogonal subsystem of a root system $R$. Consider the operator $\rho_i$ defined on $\Pi^N$ by
$$\rho_i f(x) = \frac{f(x) - f(\tau_{\alpha_i} x)}{\langle\alpha_i, x\rangle}.$$

Proposition 1. The operators $\rho_i$ ($i = 1, \ldots, n$) have the following properties:

Since the family $\{\alpha_1, \ldots, \alpha_n\}$ is orthogonal, there exist scalars $\xi_1, \ldots, \xi_n$ and a vector $\xi' \in \mathbb{R}^N$ orthogonal to the subspace $\mathbb{R}\alpha_1 \oplus \cdots \oplus \mathbb{R}\alpha_n$ such that
$$\xi = \xi' + \sum_{i=1}^{n} \xi_i\,\alpha_i.$$
This allows us to decompose the operator $T_\xi$ (2) associated with $R'_+$ and the multiplicity parameters $(\kappa_1, \ldots, \kappa_n)$ in a unique way in the form
$$T_\xi = \partial_{\xi'} + \sum_{i=1}^{n} \xi_i\, T_{\alpha_i}.$$
We now have all the ingredients to state and prove the first main result of the paper.
On the other hand, from Proposition 1 we get the required identity. This proves the result.
One important consequence of Theorem 1 is that the operators $T_{\alpha_1}, \ldots, T_{\alpha_n}$ generate a commutative algebra.

Intertwining operator
In this section, we give an intertwining operator between $T_\xi$ and the directional derivative $\partial_\xi$. Consider a positive orthogonal subsystem $R'_+ = \{\alpha_1, \ldots, \alpha_n\}$ consisting of $n$ vectors of a root system $R$, and let $\kappa = (\kappa_1, \ldots, \kappa_n) \in \mathbb{C}^n$ and $\xi \in \mathbb{R}^N$. The Dunkl-type operator $T_\xi$ associated with $R'_+$ and $\kappa$ takes the form
$$T_\xi f(x) = \partial_\xi f(x) + \sum_{i=1}^{n} \kappa_i\,\langle\alpha_i, \xi\rangle\,\frac{f(x) - f(\tau_{\alpha_i} x)}{\langle\alpha_i, x\rangle}.$$
Put
$$h(t, x) = x - \sum_{i=1}^{n} t_i\,\frac{\langle\alpha_i, x\rangle}{|\alpha_i|^2}\,\alpha_i,$$
where $t = (t_1, \ldots, t_n) \in \mathbb{R}^n$ and $x \in \mathbb{R}^N$. We define
$$\chi_\kappa f(x) = \int_{[0,1]^n} f(h(t, x))\, d\mu_\kappa(t), \qquad (3)$$
where
$$d\mu_\kappa(t) = \prod_{i=1}^{n} \kappa_i\, t_i^{\kappa_i - 1}\, dt_i.$$

Proof. For $j = 1, \ldots, n$, we denote by $\theta_j$ the orthogonal projection in $\mathbb{R}^n$ onto the hyperplane $(\mathbb{R}e_j)^\perp$ orthogonal to the vector $e_j$ of the canonical basis $(e_1, \ldots, e_n)$ of $\mathbb{R}^n$. The orthogonal projection $\theta_j$ acts on $\mathbb{R}^n$ as
$$\theta_j(t) = (t_1, \ldots, t_{j-1}, 0, t_{j+1}, \ldots, t_n).$$
Since the system $R'_+$ is orthogonal, for $j = 1, \ldots, n$ we have
$$h(t, \tau_{\alpha_j} x) = \tau_{\alpha_j}\, h(\theta_j t, x).$$
Let $f \in C^\infty(\mathbb{R}^N)$ and $\xi \in \mathbb{R}^N$. The mapping $x \mapsto h(t, x)$ is linear on $\mathbb{R}^N$, so we can write
$$\partial_\xi(\chi_\kappa f)(x) = \int_{[0,1]^n} \partial_{h(t,\xi)} f(h(t, x))\, d\mu_\kappa(t).$$
Hence, using the above identities, we are led to an expression which, combined with the last expression for $\partial_\xi(\chi_\kappa f)(x)$, yields
$$T_\xi(\chi_\kappa f)(x) = \chi_\kappa(\partial_\xi f)(x).$$
Therefore, $\chi_\kappa$ intertwines $\partial_\xi$ and $T_\xi$.

The one variable case
The specialization of this theory to the one-variable case has independent interest, because everything can be done there in a much more explicit way and new results for special functions of one variable can be obtained. In this setting there is, up to scaling, only one Dunkl-type operator $T_\kappa$, and it equals
$$T_\kappa f(x) = f'(x) + \kappa\,\frac{f(x) - f(0)}{x}. \qquad (4)$$
This operator leaves the space of polynomials invariant and acts on the monomials as
$$T_\kappa 1 = 0, \qquad T_\kappa x^n = (n + \kappa)\,x^{n-1}, \quad n = 1, 2, \ldots.$$
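The monomial action can be verified mechanically; the short sketch below (illustration only) implements $T_\kappa$ on coefficient lists directly from the two terms of (4) and confirms $T_\kappa x^n = (n + \kappa)x^{n-1}$ and $T_\kappa 1 = 0$.

```python
# One-variable Dunkl-type operator
#   T_kappa f(x) = f'(x) + kappa * (f(x) - f(0)) / x
# on polynomials stored as coefficient lists (p[n] = coefficient of x^n).
# Note (f(x) - f(0))/x simply shifts every coefficient of degree >= 1 down by one.

def T(p, kappa):
    out = [0.0] * max(len(p) - 1, 1)
    for n in range(1, len(p)):
        out[n - 1] += n * p[n]        # derivative part: n x^{n-1}
        out[n - 1] += kappa * p[n]    # projection part: kappa x^{n-1}
    return out
```

Iterating the operator reproduces the expected second-order action, e.g. $T_2^2 x^3 = (3+2)(2+2)\,x = 20x$.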

Its square is given by
$$T_\kappa^2 f(x) = f''(x) + \frac{2\kappa}{x}\,f'(x) + \kappa(\kappa - 1)\,\frac{f(x) - f(0)}{x^2} - \kappa(\kappa + 1)\,\frac{f'(0)}{x}.$$
Consider the confluent hypergeometric function (see [15, § 7])
$${}_1F_1(a; c; z) = \sum_{n=0}^{\infty} \frac{(a)_n}{(c)_n}\,\frac{z^n}{n!},$$
where $(a)_n$ is the Pochhammer symbol defined by $(a)_0 = 1$ and $(a)_n = a(a+1)\cdots(a+n-1)$. This is a solution of the confluent hypergeometric differential equation
$$z\,u''(z) + (c - z)\,u'(z) - a\,u(z) = 0.$$
This function possesses the following Poisson integral representation (see [15, § 7]): for $\operatorname{Re} c > \operatorname{Re} a > 0$,
$${}_1F_1(a; c; z) = \frac{\Gamma(c)}{\Gamma(a)\,\Gamma(c - a)} \int_0^1 e^{zt}\, t^{a-1} (1 - t)^{c-a-1}\, dt.$$

Theorem 3. For $\lambda \in \mathbb{C}$ and $\kappa > -1$, the problem
$$T_\kappa u = i\lambda\, u, \qquad u(0) = 1, \qquad (6)$$
has a unique analytic solution $M_\kappa(i\lambda x)$ given by
$$M_\kappa(i\lambda x) = {}_1F_1(1; \kappa + 1; i\lambda x) = \sum_{n=0}^{\infty} \frac{(i\lambda x)^n}{(\kappa + 1)_n}.$$

Proof. We search for a solution of (6) in the form $f(x) = \sum_{n=0}^{\infty} a_n x^n$. Substituting into (6), we obtain
$$\sum_{n=0}^{\infty} (n + 1 + \kappa)\,a_{n+1}\,x^n = i\lambda \sum_{n=0}^{\infty} a_n x^n.$$
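The eigenvalue equation (6) can also be checked numerically. The sketch below (illustration only) evaluates the series $\sum_n z^n/(\kappa+1)_n$ and its derivative term by term and confirms that $T_\kappa u = i\lambda u$ holds to machine precision at a sample point.

```python
# Numerical check: u(x) = sum_n (i*lam*x)^n / (kappa+1)_n satisfies
#   T_kappa u(x) = u'(x) + kappa*(u(x) - u(0))/x = i*lam*u(x),  with u(0) = 1.

def poch(a, n):
    """Pochhammer symbol (a)_n = a(a+1)...(a+n-1)."""
    r = 1.0
    for k in range(n):
        r *= a + k
    return r

def M_and_derivative(kappa, z, terms=60):
    """Return (M(z), M'(z)) for M(z) = sum_n z^n/(kappa+1)_n via its Taylor series."""
    m = sum(z**n / poch(kappa + 1, n) for n in range(terms))
    dm = sum(n * z**(n - 1) / poch(kappa + 1, n) for n in range(1, terms))
    return m, dm

kappa, lam, x = 0.7, 1.3, 0.5
u, du_dz = M_and_derivative(kappa, 1j * lam * x)
du_dx = 1j * lam * du_dz                 # chain rule in x
lhs = du_dx + kappa * (u - 1.0) / x      # T_kappa u, using u(0) = 1
rhs = 1j * lam * u
```

With 60 series terms the truncation error is far below double precision for the chosen arguments, so `lhs` and `rhs` agree essentially exactly.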

Remark 1.
Multiplying equation (6) by $x$ and differentiating both sides, we see that a function $u$ of class $C^2$ on $\mathbb{R}$ is a solution of equation (6) if and only if it is a solution of the generalized eigenvalue problem
$$x\,u'' + (\kappa + 1)\,u' = i\lambda\,(x\,u' + u).$$
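On Taylor coefficients, the second-order equation $x u'' + (\kappa+1)u' = i\lambda(x u' + u)$ for $u = \sum_n a_n x^n$ reads $(n+1)(n+1+\kappa)\,a_{n+1} = i\lambda\,(n+1)\,a_n$, i.e. exactly the first-order recursion $(n+1+\kappa)a_{n+1} = i\lambda a_n$ coming from $T_\kappa u = i\lambda u$. The following check (illustration only) builds the coefficients from the recursion and verifies the second-order equation coefficient by coefficient.

```python
# Coefficient check: for u = sum a_n x^n,
#   x u'' + (kappa+1) u' = i*lam*(x u' + u)
# forces (n + 1 + kappa) a_{n+1} = i*lam * a_n, the same recursion as T_kappa u = i*lam*u.

def recursion_ok(kappa, lam, terms=30):
    a = [1.0 + 0j]
    for n in range(terms):
        a.append(1j * lam * a[n] / (n + 1 + kappa))   # first-order recursion
    for n in range(terms):
        # coefficient of x^n in x u'' + (kappa+1) u':
        lhs = (n + 1) * n * a[n + 1] + (kappa + 1) * (n + 1) * a[n + 1]
        # coefficient of x^n in i*lam*(x u' + u):
        rhs = 1j * lam * (n * a[n] + a[n])
        if abs(lhs - rhs) > 1e-12 * max(1.0, abs(rhs)):
            return False
    return True
```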
Definition 1. We define the Kummer transform on $L^1(\mathbb{R})$ by
$$F_\kappa(f)(\lambda) = \int_{\mathbb{R}} f(x)\, M_\kappa(-i\lambda x)\, dx, \qquad \lambda \in \mathbb{R}.$$
When $\kappa = 0$, the transform $F_0$ reduces to the usual Fourier transform $F$, which is given by
$$F(f)(\lambda) = \int_{\mathbb{R}} f(x)\, e^{-i\lambda x}\, dx.$$

Theorem 4. Let $f$ be a function in $L^1(\mathbb{R})$. Then $F_\kappa(f)$ belongs to $C_0(\mathbb{R})$, where $C_0(\mathbb{R})$ is the space of continuous functions having zero limit at infinity. Furthermore,
$$\|F_\kappa(f)\|_\infty \le \|f\|_{L^1(\mathbb{R})}.$$

Proof. It is clear that $F_\kappa(f)$ is a continuous function on $\mathbb{R}$. From Proposition 2, we get $|M_\kappa(-i\lambda x)| \le 1$ for all $\lambda, x \in \mathbb{R}$. Since $f$ is in $L^1(\mathbb{R})$, we conclude by the dominated convergence theorem that $F_\kappa(f)$ belongs to $C_0(\mathbb{R})$ and $\|F_\kappa(f)\|_\infty \le \|f\|_{L^1(\mathbb{R})}$.

We now turn to a relationship between the Kummer transform and the Fourier transform. The crucial idea is to use the intertwining operator $\chi_\kappa$. We denote by $C^\infty(\mathbb{R})$ the space of infinitely differentiable functions $f$ on $\mathbb{R}$, equipped with the topology defined by the seminorms
$$P_{a,l}(f) = \sup_{x \in [-a,a]} |f^{(l)}(x)|, \qquad a > 0,\ l \in \mathbb{N}.$$
In the rank-one case, the intertwining operator (3) becomes
$$\chi_\kappa f(x) = \kappa \int_0^1 (1 - t)^{\kappa - 1}\, f(tx)\, dt. \qquad (9)$$
This operator is a particular case of the so-called Erdélyi-Kober fractional integral $I^{\gamma,\delta}$, which is given by (see [10])
$$I^{\gamma,\delta} f(x) = \frac{1}{\Gamma(\delta)} \int_0^1 (1 - t)^{\delta - 1}\, t^{\gamma - 1}\, f(xt)\, dt, \qquad \delta > 0,$$
so that $\chi_\kappa = \Gamma(\kappa + 1)\, I^{1,\kappa}$. It was shown in [12, § 3] that the Erdélyi-Kober fractional integral has a left-inverse
$$D^{\gamma,\delta} = \prod_{k=1}^{n} \left(\gamma + k - 1 + x\,\frac{d}{dx}\right) I^{\gamma + \delta,\, n - \delta}, \qquad (10)$$
where $n = \lceil\delta\rceil$ ($\lceil\cdot\rceil$ denotes the ceiling function, the smallest integer $\ge \delta$).
As a consequence of Theorem 2, we deduce that the operator $\chi_\kappa$ (9) has the fundamental intertwining property
$$T_\kappa \circ \chi_\kappa = \chi_\kappa \circ \frac{d}{dx}.$$
We regard it as a second main result, since it allows us to pass from the complicated operator $T_\kappa$ defined in (4) to the simple derivative operator $\frac{d}{dx}$.
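This intertwining property can be tested on monomials. Assuming the rank-one intertwiner $\chi_\kappa f(x) = \kappa\int_0^1 (1-t)^{\kappa-1} f(tx)\,dt$, its action is diagonal: $\chi_\kappa x^n = \frac{n!}{(\kappa+1)_n}\,x^n$, since $\kappa\int_0^1(1-t)^{\kappa-1}t^n\,dt = \kappa B(n+1,\kappa) = n!/(\kappa+1)_n$. The sketch below (our helper names) then checks $T_\kappa(\chi_\kappa x^n) = \chi_\kappa\big(\frac{d}{dx}x^n\big)$ for a range of degrees.

```python
# Check T_kappa ∘ chi_kappa = chi_kappa ∘ d/dx on monomials, using
#   chi_kappa x^n = n!/(kappa+1)_n * x^n  and  T_kappa x^n = (n+kappa) x^{n-1}.
from math import factorial

def poch(a, n):
    r = 1.0
    for k in range(n):
        r *= a + k
    return r

def chi_coeff(kappa, n):
    """chi_kappa maps x^n to chi_coeff(kappa, n) * x^n."""
    return factorial(n) / poch(kappa + 1, n)

def T_on_monomial(kappa, n):
    """T_kappa x^n = (n + kappa) x^{n-1}; returns the scalar factor."""
    return n + kappa if n >= 1 else 0.0

kappa = 1.4
for n in range(1, 10):
    left = chi_coeff(kappa, n) * T_on_monomial(kappa, n)   # coeff of x^{n-1} in T(chi x^n)
    right = n * chi_coeff(kappa, n - 1)                    # coeff of x^{n-1} in chi(n x^{n-1})
    assert abs(left - right) < 1e-12
```

Both sides equal $n!/(\kappa+1)_{n-1}$, which is the algebraic content of the intertwining relation on polynomials.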
Theorem 5. Let $\kappa > 0$. The operator $\chi_\kappa$ is a topological isomorphism from $C^\infty(\mathbb{R})$ onto itself, and its inverse $\chi_\kappa^{-1}$ is given for all $f \in C^\infty(\mathbb{R})$ by
$$\chi_\kappa^{-1} f = \frac{1}{\Gamma(\kappa + 1)} \prod_{k=1}^{n} \left(k + x\,\frac{d}{dx}\right) I^{\kappa + 1,\, n - \kappa} f,$$
where $n = \lceil\kappa\rceil$.
Proof. Let $a > 0$ and $f \in C^\infty(\mathbb{R})$. For $x \in [0, a]$, $t \in [0, 1]$ and $l \in \mathbb{N}$, we have the estimate
$$\left|\partial_x^l\big[(1 - t)^{\kappa - 1} f(tx)\big]\right| \le (1 - t)^{\kappa - 1} \sup_{|y| \le a} |f^{(l)}(y)|.$$
By differentiation under the integral sign, one proves that
$$(\chi_\kappa f)^{(l)}(x) = \kappa \int_0^1 (1 - t)^{\kappa - 1}\, t^l\, f^{(l)}(tx)\, dt$$
and $P_{a,l}(\chi_\kappa f) \le P_{a,l}(f)$. Then $\chi_\kappa$ is a continuous linear mapping from $C^\infty(\mathbb{R})$ into itself. From formula (10), the operator
$$D_{0,\kappa} := \frac{1}{\Gamma(\kappa + 1)} \prod_{k=1}^{n} \left(k + x\,\frac{d}{dx}\right) I^{\kappa + 1,\, n - \kappa}$$
is a left-inverse of $\chi_\kappa$. This shows that $\chi_\kappa$ is injective and $D_{0,\kappa}$ is surjective. So it suffices to prove that $D_{0,\kappa}$ is injective. Let $f$ be a function in $C^\infty(\mathbb{R})$ such that $D_{0,\kappa} f = 0$. Then the function $g = I^{\kappa + 1,\, n - \kappa} f \in C^\infty(\mathbb{R})$ is a solution of the linear differential equation
$$\prod_{k=1}^{n} \left(k + x\,\frac{d}{dx}\right) g = 0.$$
Since this differential equation has a unique $C^\infty$-solution, namely $y = 0$, it follows that $g = 0$. By (10), the operator $I^{\kappa + 1,\, n - \kappa}$ has a left-inverse, hence $f = 0$. This shows that $\chi_\kappa$ is a bijective operator.
Let $\kappa > 0$. We define the dual intertwining operator ${}^t\chi_\kappa$ on $\mathcal{D}(\mathbb{R})$ ($\mathcal{D}(\mathbb{R})$ being the space of $C^\infty$-functions on $\mathbb{R}$ with compact support) by the duality relation
$$\int_{\mathbb{R}} {}^t\chi_\kappa f(x)\, g(x)\, dx = \int_{\mathbb{R}} f(x)\, \chi_\kappa g(x)\, dx, \qquad f \in \mathcal{D}(\mathbb{R}),\ g \in C^\infty(\mathbb{R}).$$

Proposition 3. The operator ${}^t\chi_\kappa$ is a topological automorphism of $\mathcal{D}(\mathbb{R})$ and satisfies the transmutation relation
$${}^t\chi_\kappa \circ {}^tT_\kappa = -\,\frac{d}{dx} \circ {}^t\chi_\kappa,$$
where ${}^tT_\kappa$ denotes the formal transpose of $T_\kappa$.

Proof. Let $f \in \mathcal{D}(\mathbb{R})$ and $g \in C^\infty(\mathbb{R})$. Using Fubini's theorem and a change of variable, we get the stated relations.

Proposition 4. Let $\kappa > 0$. The Kummer transform $F_\kappa$ satisfies the decomposition
$$F_\kappa = F \circ {}^t\chi_\kappa \qquad \text{on } \mathcal{D}(\mathbb{R}).$$

Proof. The result follows from Proposition 3.

Direct product setting
In this subsection, we consider the direct product of the one-dimensional models, which means that the Weyl group of the corresponding subsystem of the root system is a subgroup of $\mathbb{Z}_2^N$. We denote by $\tau_k$ ($k = 1, \ldots, N$) the orthogonal projection onto the hyperplane orthogonal to $e_k$, that is,
$$\tau_k(x) = x - \frac{\langle x, e_k\rangle}{|e_k|^2}\,e_k = (x_1, \ldots, x_{k-1}, 0, x_{k+1}, \ldots, x_N)$$
for every $x = (x_1, \ldots, x_N) \in \mathbb{R}^N$.
Let $\kappa = (\kappa_1, \kappa_2, \ldots, \kappa_N) \in \mathbb{C}^N$. The associated Dunkl-type operators $T_j$, $j = 1, \ldots, N$, are given for $x \in \mathbb{R}^N$ by
$$T_j f(x) = \partial_j f(x) + \kappa_j\,\frac{f(x) - f(\tau_j x)}{x_j}.$$
These operators form a commuting system. The generalized Laplacian associated with the $T_j$ is defined in the natural way as
$$\Delta_\kappa = \sum_{j=1}^{N} T_j^2.$$
A straightforward computation yields
$$\Delta_\kappa f(x) = \Delta f(x) + \sum_{j=1}^{N} \left[\frac{2\kappa_j}{x_j}\,\partial_j f(x) + \kappa_j(\kappa_j - 1)\,\frac{f(x) - f(\tau_j x)}{x_j^2} - \kappa_j(\kappa_j + 1)\,\frac{\partial_j f(\tau_j x)}{x_j}\right].$$
This operator will play in our context a role similar to that of the Euclidean Laplacian in classical harmonic analysis. Obviously, the trivial choice of multiplicity function $\kappa = 0$ reduces our situation to the analysis related to the classical Laplacian $\Delta$.
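The commutativity of the direct-product system can be verified on polynomials. Consistent with the one-variable action $T_\kappa x^n = (n+\kappa)x^{n-1}$, on a monomial $x_1^a x_2^b$ the operator $T_1$ multiplies by $(a + \kappa_1)$ and lowers the $x_1$-degree by one (giving $0$ when $a = 0$). The bivariate sketch below (illustration only) checks $T_1 T_2 = T_2 T_1$ on a sample polynomial.

```python
# Direct-product setting: T_j f = d_j f + kappa_j * (f - f∘tau_j)/x_j.
# Bivariate polynomials are dicts {(a, b): coeff} for monomials x1^a * x2^b.

def T_dp(poly, j, kappa):
    """Apply T_j (j = 0 or 1 for the two variables) to a bivariate polynomial."""
    out = {}
    for (a, b), c in poly.items():
        e = (a, b)[j]                       # exponent in direction j
        if e == 0:
            continue                        # T_j kills monomials constant in x_j
        new = (a - 1, b) if j == 0 else (a, b - 1)
        out[new] = out.get(new, 0.0) + (e + kappa) * c
    return out

p = {(2, 3): 1.0, (0, 1): 2.0, (4, 0): -1.0}
k1, k2 = 0.5, 1.5
lhs = T_dp(T_dp(p, 0, k1), 1, k2)
rhs = T_dp(T_dp(p, 1, k2), 0, k1)
assert lhs == rhs                           # T_1 T_2 = T_2 T_1 on p
```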