\documentclass[12pt,reqno]{article}
\usepackage[usenames]{color}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{amscd}
\usepackage[colorlinks=true,
linkcolor=webgreen,
filecolor=webbrown,
citecolor=webgreen]{hyperref}
\definecolor{webgreen}{rgb}{0,.5,0}
\definecolor{webbrown}{rgb}{.6,0,0}
\usepackage{fullpage}
\usepackage{float}
\usepackage{amsfonts}
\usepackage{latexsym}
\usepackage{epsf}
\setlength{\textwidth}{6.5in}
\setlength{\oddsidemargin}{.1in}
\setlength{\evensidemargin}{.1in}
\setlength{\topmargin}{-.5in}
\setlength{\textheight}{8.9in}
\newcommand{\seqnum}[1]{\href{http://www.research.att.com/cgi-bin/access.cgi/as/~njas/sequences/eisA.cgi?Anum=#1}{\underline{#1}}}
\begin{document}
\begin{center}
\epsfxsize=4in
\leavevmode\epsffile{logo129.eps}
\end{center}
\begin{center}
\vskip 1cm{\LARGE\bf Some Results on Summability of Random Variables}
\vskip 1cm
\large
Rohitha Goonatilake\\
Department of Mathematical and Physical Sciences\\
Texas A\&M International University\\
Laredo, Texas 78041-1900 \\
USA\\
\href{mailto:harag@tamiu.edu}{\tt harag@tamiu.edu} \\
\end{center}
\vskip .2 in
\begin{abstract}
A convolution summability method, introduced as an extension of the
random-walk method, generalizes the classical Euler, Borel, Taylor and
Meyer-K\"onig type matrix methods. Its entries are given by
the distribution of sums of independent and identically distributed
integer-valued random variables. In this paper, we discuss
the strong regularity concept of Lorentz
applied to the convolution method of summability. We then obtain
the summability functions and absolute summability functions of this
method.
\end{abstract}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{proposition}{Proposition}[section]
\newtheorem{corollary}{Corollary}[section]
\newtheorem{lemma}{Lemma}[section]
\newcommand{\Reals}{\rm I\kern-.19emR}
\newcommand{\Notin}{/\kern-.6em\hbox{$\in$}}
\newcommand{\MM}{\rm I\kern-.19emM}
\newcommand{\Notequiv}{/\kern-.6em\hbox{$\equiv$}}
\newcommand{\Ceals}{\rm I\kern-.5emC}
\newcommand{\nsubset}{/\kern-.6em\hbox{$\subset$}}
\newcommand{\dis}{\displaystyle}
\newcommand{\nin}{\backslash \kern-.5em\in}
\newtheorem{theo}{Theorem}
\newtheorem{coro}{Corollary}
\newtheorem{lema}{Lemma}
\newtheorem{prop}{Proposition}
\newtheorem{defn}{Definition}
\newtheorem{remk}{Remark}
\newtheorem{note}{Note}
\newtheorem{expl}{Example}
\section{Introduction}
The methods that sum all almost convergent sequences are called strongly regular.
We will show in Section 2 that the matrix transformation corresponding to the regular convolution
method generated by an independent and identically distributed sequence of aperiodic
nonnegative integer-valued random variables with finite third moment and positive
variance is strongly regular.
Summability functions \cite{Lore48} in some sense determine the strength of the regularity
of a method for bounded sequences. They may also be used to show that Tauberian conditions
of a certain kind cannot be improved. In Section 3, under the existence of the
first three moments, it is shown that the functions $\Omega(n) = o(\sqrt{n})$ are summability functions
for the convolution methods, thus extending previously known results for
other methods such as the Borel and Euler methods. The optimality of this class of summability
functions is also ascertained: all functions of the form
$\Omega(n) = o(\sqrt{n}),$ and only these functions, are summability functions for the
$C(p, q)$ methods under some moment conditions. We conclude this paper with a discussion
of absolute summability functions for this method.
The discussion will revolve around the following type of summability method \cite{Meye49},
\cite{ZeBe70}. This is a large class of summability methods that includes the random-walk method and many others.
\begin{defn}{\rm Let $\{p_k\}_{k \geq 0}$ and $\{q_k\}_{k \geq 0}$ be two sequences of nonnegative
numbers with $\sum_{k =0}^\infty p_k = 1$ and $\sum_{k =0}^\infty q_k = 1.$ Define a summability
matrix, $C = [C_{n, k}],$ whose entries are given by $C_{0, k} = q_k$ and $C_{n +1, k} := (C_{n, \cdot}*p)_k
= \sum_{j = 0}^kp_jC_{n, k-j}$ for $n, k \geq 0.$ The matrix $C$ is called a convolution summability matrix.}
\end{defn}
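The recursion in this definition is easy to compute with. The following standalone sketch (illustrative only, not part of the paper's argument; the truncation of the supports of $p$ and $q$ to finite lists is an assumption made for computation) builds the first rows of $C$:

```python
# Illustrative sketch only (not part of the paper): build a truncated
# convolution summability matrix C from weight sequences p and q using the
# recursion C_{0,k} = q_k and C_{n+1,k} = sum_{j<=k} p_j C_{n,k-j}.
def convolution_matrix(p, q, n_rows, n_cols):
    # p, q: truncated lists of nonnegative weights (each summing to 1).
    C = [[0.0] * n_cols for _ in range(n_rows)]
    for k in range(min(n_cols, len(q))):
        C[0][k] = q[k]
    for n in range(n_rows - 1):
        for k in range(n_cols):
            C[n + 1][k] = sum(p[j] * C[n][k - j]
                              for j in range(min(k + 1, len(p))))
    return C

# Example: q a point mass at 0 and p = Bernoulli(1/2) weights, so row n
# is the Binomial(n, 1/2) distribution.
C = convolution_matrix([0.5, 0.5], [1.0], 4, 4)
```

Each row of the resulting matrix is a probability distribution, in accordance with the probabilistic interpretation given next.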
A useful probabilistic interpretation of $C$ is the following. Let $Y, X_1, X_2,
\ldots$ be a sequence of independent nonnegative integer-valued random variables such that $Y$ has probability
function $q$ and the $X_i$'s are identically distributed with probability function $p.$
Let $S_0 = Y$ and $S_n = Y + X_1 + \cdots + X_n$ for $n \geq 1,$ and write $\{p_j\}_{j \geq 0}$ and $\{q_j\}_{j \geq 0}$
for the probability distributions of the $X_i$'s and $Y,$ respectively. The entry in the $n^{\hbox{\scriptsize th}}$ row and $k^{\hbox{\scriptsize th}}$
column of the convolution summability matrix $C$ is the probability $C_{n, k} = P(S_n = k).$ The method
$C$ is regular if and only if $P(X_1 = 0) < 1$ \cite{Khan91}. Some classical summability methods
are examples of the method $C.$ For instance, when $Y = 0$ and $X_1 \sim \hbox{Binomial}(1, r),$
then $C$ becomes the Euler method denoted by $E_r.$ When $Y \sim X_1 \sim \hbox{Poisson}(1)$
we get the Borel matrix method. When $Y \sim \hbox{Geometric}(1 - r)$ and $X_1 \sim Y + 1,$
then we get the Taylor method. And when $Y \sim X_1 \sim \hbox{Geometric}(1 - r)$ we get the
Meyer-K\"onig method. We shall call $C$ a convolution method; when $Y = 0$ with probability
$1,$ it is called the random-walk method. The method $C$ can be extended to non-identically
distributed random variables (for example, the Jakimovski family of summability methods \cite{ZeBe70});
however, the present formulation will serve our purpose adequately.
The regular convolution summability matrix $\{C_{n, k}\}_{n, k \geq 0},$ referred to throughout
this paper, has the above construction with appropriate moment conditions and, in Section 4,
with finite moment generating function of $\{X_i, i \geq 1\}.$
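As a quick standalone check of the Euler case above (illustrative only; the parameter value $r = 0.3$ and the row index $n = 5$ are arbitrary choices), repeated convolution of a point mass with the Bernoulli weights reproduces the Euler row $\binom{n}{k} r^k (1-r)^{n-k}$:

```python
# Standalone numerical check (illustrative only; the parameter value
# r = 0.3 is an arbitrary choice): with Y = 0 and X_1 ~ Binomial(1, r),
# repeated convolution reproduces the Euler weights C(n,k) r^k (1-r)^(n-k).
from math import comb

r = 0.3
p = [1 - r, r]                    # P(X_1 = 0), P(X_1 = 1)
row = [1.0]                       # row 0: point mass at 0, since Y = 0
for _ in range(5):                # C_{n+1, .} = C_{n, .} * p
    new = [0.0] * (len(row) + 1)
    for k, c in enumerate(row):
        new[k] += p[0] * c
        new[k + 1] += p[1] * c
    row = new

euler = [comb(5, k) * r**k * (1 - r)**(5 - k) for k in range(6)]
```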
\section{Strong Regularity}
Given below is the definition of almost convergence of a sequence, which is, as we shall see, a
generalization of ordinary convergence.
\begin{defn}
{\rm
A bounded sequence $\{x_i\}_{i \geq 0}$ is called almost convergent, if
there is a number $s$ such that
$$\lim_{\ell \to \infty}{{x_n + x_{n +1} + \ldots + x_{n + \ell -1}}\over \ell}
= s~\hbox{holds uniformly in}~n.$$
We denote $s$ by Lim $x_n.$
}
\end{defn}
\begin{expl}
{\rm
For complex $z$ on the boundary of the unit circle, $\hbox{Lim}\,z^n = 0$ holds
everywhere except for $z = 1,$ as follows from
$${1\over \ell}(z^n + z^{n +1} + \cdots + z^{n + \ell -1}) = {z^n\over \ell}
\Bigl({{1 -z^\ell}\over{1 -z}}\Bigr).$$
}
\end{expl}
We now use the following theorem of Lorentz \cite{Lore48}. For more details on these concepts,
see \cite{Lore48}.
\begin{theo}\label{theo31}(Lorentz \cite{Lore48})
In order that a regular matrix method (transformation) $A = \{a_{n, k}\}$
sum all almost convergent sequences, it is necessary and sufficient that
$$\lim_{n \to \infty} \sum_{k=0}^\infty |a_{n, k} - a_{n, k + 1}| = 0.$$
\end{theo}
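Before applying this criterion to convolution methods, it may help to see it in a case where everything is explicit. A small numerical sketch (assuming the Ces\`aro $(C, 1)$ matrix $a_{n, k} = 1/(n+1)$ for $k \leq n$ and $0$ otherwise, with rows truncated for computation):

```python
# Worked instance of the Lorentz criterion (a sketch, assuming the Cesaro
# (C,1) matrix a_{n,k} = 1/(n+1) for k <= n and 0 otherwise): the row
# variation collapses to the single boundary term 1/(n+1) -> 0.
def row_variation(n, n_cols):
    a = [1.0 / (n + 1) if k <= n else 0.0 for k in range(n_cols)]
    return sum(abs(a[k] - a[k + 1]) for k in range(n_cols - 1))
```

Since consecutive nonzero entries of a row are equal, the only surviving difference is $|a_{n, n} - a_{n, n+1}| = 1/(n+1),$ which tends to $0$; this is consistent with the known strong regularity of the Ces\`aro method.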
In view of the above result, we now give the definition of strong
regularity of a summability method.
\begin{defn}
{\rm A summability method $A$ is called strongly regular, if for any almost
convergent sequence $\{x_i\}_{i \geq 0}$ with Lim$x_n = l,$ we have
$\lim_{n \to \infty}(Ax)_n = l.$
}
\end{defn}
Lorentz \cite{Lore48} showed that the Ces\`aro method $C_\alpha$ of order $\alpha > 0$
and the Euler method $E_r,$ with parameter $r,$ are strongly regular. In an attempt to
generalize these results, we will prove that the random-walk method generated by an
aperiodic probability function with finite third moment and positive variance is strongly
regular. Then, using this result, we show that
the convolution summability method is also strongly regular.
\begin{theo}\label{theo32}
Let $\xi_1, \xi_2, \xi_3, \ldots$ be an i.i.d. sequence of aperiodic
nonnegative integer-valued random variables with finite third moment and
positive variance. Then the matrix transformation corresponding to the
random-walk method of the above sequence of random variables is strongly
regular.
\end{theo}
Prior to the proof of this theorem, we need an important theorem due to
Bikelis and Jasjunas \cite{Bike67}, which gives the rate of convergence for
the central limit theorem:
\begin{theo}\label{theo32a}(Bikelis \& Jasjunas \cite{Bike67})
For a sequence $\{\xi_i\}_{i \geq 1}$ of i.i.d. aperiodic nonnegative
integer-valued random variables with mean $\mu,$ positive variance
$\sigma^2,$ and finite third moment, the following holds:
$$\sum_{j = -\infty}^\infty (1 + |{{j - n\mu}\over{\sigma\sqrt{n}}}|^3)|
P(S_n = j) - {1\over{\sigma(2\pi n)^{1\over 2}}}
\exp \{-{1\over 2}(j - n\mu)^2/(n\sigma^2)\}| = O(n^{-1/2}),$$
where $S_n = \xi_1 + \cdots + \xi_n.$
\end{theo}
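The bound in this theorem can be observed numerically. The following sketch is purely illustrative (it assumes the special case $\xi_i \sim \hbox{Bernoulli}(1/2),$ so $\mu = 1/2,$ $\sigma = 1/2$ and $S_n \sim \hbox{Binomial}(n, 1/2)$):

```python
# Numerical illustration of the Bikelis-Jasjunas bound (assumption: the
# xi_i are Bernoulli(1/2), so mu = 1/2, sigma = 1/2 and S_n is
# Binomial(n, 1/2)): the weighted local-CLT error is small and shrinks
# as n grows.
from math import comb, exp, pi, sqrt

def weighted_lclt_error(n):
    mu, sig = 0.5, 0.5
    total = 0.0
    for j in range(n + 1):
        t = (j - n * mu) / (sig * sqrt(n))
        gauss = exp(-0.5 * t * t) / (sig * sqrt(2 * pi * n))
        total += (1 + abs(t) ** 3) * abs(comb(n, j) / 2 ** n - gauss)
    return total
```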
\noindent{\sl Proof of Theorem \ref{theo32}.}
As suggested by Theorem \ref{theo31}, we consider
$$\sum_{k=0}^\infty |a_{n, k+1} - a_{n, k}| = \sum_{k=0}^\infty |
P(S_n = k +1) - P(S_n = k)|,~\hbox{where}~S_n = \sum_{i =0}^n \xi_i
~\hbox{and}~\xi_0 =0.$$
If the mean of $\xi_i$ is $\mu$ and standard deviation of $\xi_i$ is $\sigma,$
we write
\begin{eqnarray*}
& & \sum_{k=0}^\infty |P(S_n = k +1) - P(S_n = k)|\\
& = & \sum_{k=0}^\infty |\{P(S_n = k +1) - {1\over{\sigma(2\pi n)^{1\over 2}}}
\exp \{-{1\over 2}(k + 1 - n\mu)^2/(n\sigma^2)\}\}\\
& & + \{ {1\over{\sigma(2\pi n)^{1\over 2}}}
\exp \{-{1\over 2}(k - n\mu)^2/(n\sigma^2)\} - P(S_n = k)\}\\
& & + \{ {1\over{\sigma(2\pi n)^{1\over 2}}}
\exp \{-{1\over 2}(k + 1 - n\mu)^2/(n\sigma^2)\} -
{1\over{\sigma(2\pi n)^{1\over 2}}}
\exp \{-{1\over 2}(k - n\mu)^2/(n\sigma^2)\}\}|\\
& \leq & \sum_{k=0}^\infty
|P(S_n = k +1) - {1\over{\sigma(2\pi n)^{1\over 2}}}
\exp \{-{1\over 2}(k + 1 - n\mu)^2/(n\sigma^2)\}|\\
& & + \sum_{k=0}^\infty|{1\over{\sigma(2\pi n)^{1\over 2}}}
\exp \{-{1\over 2}(k - n\mu)^2/(n\sigma^2)\} - P(S_n = k)|\\
& & + \sum_{k=0}^\infty|{1\over{\sigma(2\pi n)^{1\over 2}}}
\exp \{-{1\over 2}(k + 1 - n\mu)^2/(n\sigma^2)\} -
{1\over{\sigma(2\pi n)^{1\over 2}}}
\exp \{-{1\over 2}(k - n\mu)^2/(n\sigma^2)\}|.
\end{eqnarray*}
Subject to the finiteness of the third moment, considering the fact that
$(1 + |{{j - n\mu}\over{\sqrt{n \sigma^2}}}|^3) \geq 1$ and restricting
the values of $j$ to $j \geq 0,$ we obtain from the Bikelis--Jasjunas
Theorem \ref{theo32a} that
$$\sum_{j =0}^\infty |P(S_n = j) - {1\over{\sigma(2\pi n)^{1\over 2}}}
\exp \{-{1\over 2}(j - n\mu)^2/(n\sigma^2)\}| = O(n^{-1/2}).$$
This implies that the first two sums above are in fact
$O(n^{-1/2}).$
For the last sum, we write the telescoping series in the following form,
noting that, as $k$ increases, the terms increase until $k = n\mu$ and then
decrease to $0$ thereafter:
\begin{eqnarray*}
& & {1\over{\sigma(2\pi n)^{1\over 2}}}\sum_{i = 0}^{n\mu - 1}
\Bigl(\exp \{-{1\over 2}(i + 1 - n\mu)^2/(n\sigma^2)\} -
\exp \{-{1\over 2}(i - n\mu)^2/(n\sigma^2)\}\Bigr)\\
& & + {1\over{\sigma(2\pi n)^{1\over 2}}}\sum_{j = n\mu + 1}^{\infty}
\Bigl(\exp \{-{1\over 2}(j - n\mu)^2/(n\sigma^2)\} -
\exp \{-{1\over 2}(j + 1 - n\mu)^2/(n\sigma^2)\}\Bigr).
\end{eqnarray*}
Since $\lim_{j \to \infty} \exp \{-{1\over 2}(j - n\mu)^2/(n\sigma^2)\} = 0,$
the above series sums to
$${1\over{\sigma(2\pi n)^{1\over 2}}}\{- e^{-{1\over 2}n^2\mu^2/(n\sigma^2)} +
1 + e^{-1/(2n\sigma^2)}\}$$
$$= {1\over{\sigma(2\pi n)^{1\over 2}}}\{- e^{- n\mu^2/(2\sigma^2)} +
1 + e^{-1/(2n\sigma^2)}\} = O(1/\sqrt{n}).$$
This, together with the above, leads to the fact that
$$\sum_{k =0}^\infty |P(S_n = k+1) - P(S_n = k)| = O(1/\sqrt{n}).$$
Now, by the Lorentz criterion, we have the strong regularity of the
random-walk method.
\vrule height 8pt width 4pt
\vskip 1em
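The $O(1/\sqrt{n})$ rate just obtained can be observed directly. A numerical sketch (assuming, for illustration, the special case $X_i \sim \hbox{Bernoulli}(1/2),$ so $S_n \sim \hbox{Binomial}(n, 1/2)$):

```python
# Numerical check of the bound just proved (assumption: X_i ~ Bernoulli(1/2),
# so S_n is Binomial(n, 1/2)): the row-difference sum behaves like
# O(1/sqrt(n)), roughly halving when n is quadrupled.
from math import comb

def row_diff_sum(n):
    p = [comb(n, k) / 2 ** n for k in range(n + 1)] + [0.0]
    return sum(abs(p[k + 1] - p[k]) for k in range(n + 1))
```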
We will use the above result to prove the following generalization for the convolution
summability method $C$ defined in section 1.
\begin{theo}\label{theo33}
Let $Y, \{X_i, i\geq 1\}$ be independent and let $\{X_i, i\geq 1\}$ be
identically distributed aperiodic nonnegative integer-valued random
variables with finite third moment. Let $C$ be the convolution summability method.
Then the following are equivalent.
\begin{description}
\item (i)~ $\hbox{Var}(X_1) > 0,$
\item (ii)~$C$ is strongly regular.
\end{description}
\end{theo}
\noindent{\sl Proof.}
We will first show that (i) implies (ii).
Let $\{q_j\}$ and $\{p_j\}$ be the probability weights associated with
random variables $Y$ and $\{X_i, i\geq 1\}.$
The weight of the convolution summability method is
$$C_{n, k} = P(Y + S_n = k) = \sum_{j =0}^k q_j\{P(S_n = k -j)\},~
\hbox{where}~S_n = \sum_{i =1}^n X_i.$$
Now,
\begin{eqnarray*}
& & \sum_{k =0}^\infty|C_{n, k +1} - C_{n, k}|\\
& = & \sum_{k =0}^\infty|\sum_{j =0}^{k +1}q_j P(S_n = k +1 -j) -
\sum_{j =0}^k q_j P(S_n = k -j)|\\
& = & \sum_{k =0}^\infty|\sum_{j =0}^k q_j P(S_n = k +1 -j) -
\sum_{j =0}^k q_j P(S_n = k -j) + q_{k +1} P(S_n = 0)|\\
& \leq & \sum_{k =0}^\infty\sum_{j =0}^k q_j|P(S_n = k +1 -j) - P(S_n = k -j)|
+ \sum_{k =0}^\infty q_{k +1} P(S_n = 0)\\
& \leq & \sum_{j = 0}^\infty q_j\sum_{k = j}^\infty|P(S_n = k +1 -j) -
P(S_n = k -j)| + P(S_n = 0)
\end{eqnarray*}
as $\sum_{k =0}^\infty q_k = 1.$
A change of summation index $k - j \to k$ gives
\begin{eqnarray*}
& & \sum_{k =0}^\infty|C_{n, k +1} - C_{n, k}|\\
& \leq & \sum_{j = 0}^\infty q_j\sum_{k = 0}^\infty|P(S_n = k +1) -
P(S_n = k)| + (p_0)^n\\
&\leq & \sum_{k = 0}^\infty|P(S_n = k +1) - P(S_n = k)| + (p_0)^n
\end{eqnarray*}
as $\sum_{k =0}^\infty q_k = 1.$
As already seen in the previous proof, the first term is $O(n^{-1/2}),$
provided that the $X_i$'s have finite third moment and positive
variance, whereas the second term tends to $0.$
Since we assumed that $\hbox{Var}(X_1) > 0,$ it must be that $p_0 < 1.$
Hence
$$\sum_{k =0}^\infty |C_{n, k +1} - C_{n, k}| \to 0 \qquad (n \to \infty).$$
To prove that (ii) implies (i), assume that (ii) holds and (i) fails.
When $\hbox{Var}(X_1) =0$ there exists a nonnegative integer $m$
such that $P(X_1 = m) =1.$ Hence,
$$C_{n, j} = P(Y + S_n =j) = P(Y = j -nm)
= \left\{ \begin{array}{ll}
0, &~\hbox{if}~j < nm;\\
q_{j - nm}, &~\hbox{if}~j \geq nm.
\end{array}\right.$$
Therefore,
$$\sum_{j =0}^\infty |C_{n, j +1} - C_{n, j}| \geq \sum_{j = 0}^\infty
|q_{j +1} - q_j| > 0,$$
a positive constant independent of $n,$ since $\sum_{j =0}^\infty q_j =1$ and $q_i \geq 0$ for $i \geq 0.$
This contradiction gives the result.
\vrule height 8pt width 4pt
\vskip 1em
\begin{remk}
{\rm
It should be noted that with the condition $p_0 < 1,$ Khan \cite{Khan91}
proved the regularity of the convolution summability method. Our condition
$\hbox{Var}(X_1) > 0$ implies that $p_0 < 1.$
Furthermore, it follows as a result of Theorem \ref{theo33} that Taylor and
Meyer-K\"onig methods are strongly regular.
}
\end{remk}
\section{Summability Functions}
The concept of summability functions was introduced by Lorentz \cite{Lore48}.
Summability functions, in some sense, determine the strength of the
regularity of a method for bounded sequences; they may also be used to show
that Tauberian conditions of a certain kind cannot be improved.
\begin{defn}
{\rm The class $\cal U$ is the set of regular matrix methods
$A = \{a_{n,k}\}$ for which
$$\lim_{n \to \infty}\{\max_k |a_{n, k}|\} = 0$$ is fulfilled.
}
\end{defn}
Every regular convolution summability method satisfies this property, so
these methods form a subset of $\cal U.$
Indeed,
$$C_{n, k} = \sum_{j=0}^k q_j\{ P(\sum_{i=1}^n X_i = k-j)\},$$
where $\{C_{n, k}\}$ are the convolution summability weights under
consideration.
Now with $\mu = EX_1$ and $0 < \sigma^2 = \hbox{Var}(X_1) < \infty,$
$$C_{n, k} = \sum_{j=0}^k q_j [({1\over {2\>\pi n {\sigma}^2}})^{1/2}\exp
\{-{1\over 2}(k-j-n\mu)^2/n{\sigma}^2\} + o(1/{\sqrt{n}})],$$
uniformly in $k - j.$ Since $\sum_{j = 0}^\infty q_j = 1$ and $q_j \geq 0,~
\forall~j \geq 0,$ we have
$$\max_k |C_{n, k}| = O({1\over \sqrt{n}}) \to 0~\hbox{as}~n \to \infty.$$
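This decay of the maximal row entry can be checked numerically. A sketch under the illustrative assumption that row $n$ is the $\hbox{Binomial}(n, 1/2)$ distribution (the random-walk case with $X_1 \sim \hbox{Binomial}(1, 1/2)$):

```python
# Sketch of membership in the class U (assumption: row n is the
# Binomial(n, 1/2) distribution, as for the random-walk method with
# Bernoulli(1/2) steps): the largest entry of row n decays like
# sqrt(2/(pi n)).
from math import comb, pi, sqrt

def max_entry(n):
    return max(comb(n, k) / 2 ** n for k in range(n + 1))
```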
The methods of the class $\cal U$ are characterized by the fact that they all
possess summability functions. We now give the precise definition of the
summability function.
\begin{defn}
{\rm Given a matrix $A = \{a_{n, k}\},$ a nonnegative sequence $\Omega(n)$
that increases to $\infty$ is called a summability function for $A$ if
$s_n \to 0~(A)$ holds whenever $s_n = O(1)$ and $A(n, s) =
\sum_{\nu \leq n,~s_\nu \not= 0} 1 \leq \Omega(n).$ The sequence $A(n, s)$ is
sometimes called a counting function of the sequence $\{s_n\}.$
}
\end{defn}
\begin{theo}\label{theo34}(Lorentz \cite{Lore48})
The condition $\lim_{m \to \infty}\{\max_n |a_{m, n}|\} = 0$ is necessary as
well as sufficient for the existence of an integer-valued function
$\Omega(n)$ that increases to $\infty,$ such that every bounded sequence
$x = \{x_n\}$ for which the indices $n_\nu$ with $x_{n_\nu} \not= 0$ have a
counting function $A(n, x) \leq \Omega(n)$ is $A$-summable to zero.
\end{theo}
The following theorem gives sufficient conditions under which the
existence of summability functions can be determined.
\begin{theo}\label{theo35}(Lorentz \cite{Lore48})
Let $A = \{a_{n, k}\}$ be a regular matrix summability method. If the
integer-valued function of $k,~f(k),$ is such that $0 < f(k) \uparrow \infty$
and if
$$\sum_{k =0}^\infty f(k)|a_{n, k} - a_{n, k+1}| = O(1),$$
then every nonnegative sequence $\Omega(n)$ that increases to $\infty,$
$\Omega(n) = o(f(n))$ is a summability function for $A.$
\end{theo}
We make use of the above theorem to show that $\Omega(k) = o(\sqrt{k})$ is a
summability function for the convolution summability methods generated
by a sequence of aperiodic nonnegative integer-valued random variables with
finite third moment. The optimality of the summability function so obtained
is also ascertained. In this connection, we shall show that all functions
$\Omega(n) = o(\sqrt{n})$ and only these functions are summability functions
for the methods $\{C_{n, k}\}_{n, k \geq 1}.$ Lorentz \cite{Lore48} showed
that all functions of the form $\Omega(n) = o(\sqrt{n})$ and only these
functions are summability functions for the Euler method $E_\alpha$ with
$\alpha > 0.$ We use this fact in the proof of the following theorem.
There are cases where Theorem \ref{theo35} does not give all summability
functions. For example, for the N\"orlund method $N_p$ with $p_n = 1/(n + 1),$
Theorem \ref{theo35} gives that the functions $\Omega(n) = o(\log(n))$ are
summability functions. However, it is known that any function $\Omega(n) =
O(n^{\epsilon_n})$ with $0 < \epsilon_n \to 0$ is a summability function for
$N_p$ (\cite{Peye69}, p.~61).
The following theorem provides summability functions for a convolution
summability method.
\begin{theo}\label{theo36}
Let $Y, \{X_i, i\geq 1\}$ be independent with $E(Y) = \mu_Y < \infty,$ and
$\{X_i, i\geq 1\}$ be identically distributed aperiodic nonnegative
integer-valued random variables with finite third moment and positive
variance. Then for the matrix transformation corresponding to the regular
convolution summability method $C = \{C_{n, k}\},$ any function $0 < \Omega(n)
\uparrow \infty$ of the form $\Omega(n) = o(\sqrt{n})$ is a summability
function of $C.$
Furthermore, $0 < \Omega(n) = o(\sqrt{n})$ with $\Omega(n) \uparrow \infty$ are
the only functions which are summability functions over the class of regular
convolution methods under consideration.
\end{theo}
\noindent{\sl Proof.}
The weight of the convolution summability method is given by
$$C_{n, k} = P(Y + S_n = k) = \sum_{j =0}^k q_j\{P(S_n = k -j)\}~
\hbox{where}~S_n = \sum_{i =1}^n X_i.$$
Now consider,
\begin{eqnarray*}
& & \sum_{k =0}^\infty \sqrt{k}|C_{n, k +1} - C_{n, k}|\\
& = &
\sum_{k =0}^\infty \sqrt{k}|\sum_{j =0}^{k +1}q_j P(S_n = k +1 -j) -
\sum_{j =0}^k q_j P(S_n = k -j)|\\
& = & \sum_{k =0}^\infty \sqrt{k} |\sum_{j =0}^k q_j P(S_n = k +1 -j) -
\sum_{j =0}^k q_j P(S_n = k -j) + q_{k +1} P(S_n = 0)|\\
&\leq & \sum_{k =0}^\infty\sqrt{k}\sum_{j =0}^k q_j|P(S_n = k +1 -j) -
P(S_n = k -j)| + \sum_{k =0}^\infty \sqrt{k} q_{k +1} P(S_n = 0)\\
&\leq & \sum_{j =0}^\infty q_j \sum_{k = j}^\infty \sqrt{k}
|P(S_n = k +1 -j) - P(S_n = k -j)| + P(S_n = 0)\sum_{k =0}^\infty
\sqrt{k} q_{k +1}.
\end{eqnarray*}
Making the change of summation index $k - j \to k$ in the first sum and
$k + 1 \to k$ in the second sum, we obtain
$$= \sum_{j =0}^\infty q_j \sum_{k = 0}^\infty \sqrt{k + j}
|P(S_n = k +1) - P(S_n = k)| + P(S_n = 0)\sum_{k = 1}^\infty \sqrt{k -1}q_k.$$
Since the sequence $\{q_j\}$ has a finite first moment, say $\mu_Y,$ it follows
that
$$ \sum_{k =0}^\infty \sqrt{k}|C_{n, k +1} - C_{n, k}|
\leq \sum_{j =0}^\infty q_j \sum_{k = 0}^\infty (\sqrt{k} + \sqrt{j})
|P(S_n = k +1) - P(S_n = k)| + \mu_Y (p_0)^n.$$
Note that the last term tends to $0,$ since $\hbox{Var}(X_1) > 0,$ which
implies that $p_0 < 1.$ Since $\sqrt{j} \leq j$ for integers $j \geq 0,$ we obtain
$$\sum_{k =0}^\infty \sqrt{k}|C_{n, k +1} - C_{n, k}|$$
$$\leq \mu_Y\sum_{k = 0}^\infty |P(S_n = k +1) - P(S_n = k)| +
\sum_{k = 0}^\infty \sqrt{k}|P(S_n = k +1) - P(S_n = k)| + o(1).$$
The first sum also tends to 0, since $\hbox{Var}(X_1) > 0$ is a necessary and
sufficient condition for the strong regularity of the convolution summability
method.
Now what remains is to show that
$$\sum_{k = 0}^\infty \sqrt{k}|P(S_n = k +1) - P(S_n = k)| = O(1).$$
We begin with the following.
\begin{eqnarray*}
\sqrt{k} & = & \sqrt{(k - n\mu) + n\mu} \leq \sqrt{|k - n\mu| + n\mu}
\leq \sqrt{|k - n\mu|} + (n\mu)^{1/2}\\
&\leq & |{{k - n\mu}\over{\sigma\sqrt{n}}}|^{1/2} (\sigma\sqrt{n})^{1/2} +
(n\mu)^{1/2}.
\end{eqnarray*}
With the assumption of the finiteness of the third moment of the i.i.d.
random variables with distribution $\{p_j\},$ Theorem \ref{theo32a} of Bikelis and Jasjunas
\cite{Bike67} gives
$$\sum_{j = -\infty}^\infty (1 + |{{j - n\mu}\over{\sigma\sqrt{n}}}|^3)|
P(S_n = j) - {1\over{\sigma(2\pi n)^{1\over 2}}}
\exp \{-{1\over 2}(j - n\mu)^2/(n\sigma^2)\}| = O(n^{-1/2}),$$
where $S_n = X_1 + X_2 + \cdots + X_n.$ Then
\begin{eqnarray*}
& &\sum_{k = 0}^\infty \sqrt{k}|P(S_n = k +1) - P(S_n = k)|\\
& \leq & (\sigma\sqrt{n})^{1/2} \sum_{k = 0}^\infty
|{{k - n\mu}\over{\sigma\sqrt{n}}}|^{1/2} |P(S_n = k +1) - P(S_n = k)|\\
& & + (n\mu)^{1/2} \sum_{k = 0}^\infty |P(S_n = k +1) - P(S_n = k)| =
\sum_1 + \sum_2 ~\hbox{say.}
\end{eqnarray*}
We have already shown in the proof of Theorem \ref{theo32} that
$\sum_2 = O(1).$
For $\sum_1,$ we will proceed as follows:
\begin{eqnarray*}
\sum_1 & \leq & (\sigma\sqrt{n})^{1/2}\sum_{k = 0}^\infty
|{{k - n\mu}\over{\sigma\sqrt{n}}}|^{1/2}|P(S_n = k +1)\\
& & - {1\over{\sigma(2\pi n)^{1\over 2}}}
\exp\{-{1\over 2}(k + 1 - n\mu)^2/(n\sigma^2)\}|\\
& & + (\sigma\sqrt{n})^{1/2}\sum_{k = 0}^\infty
|{{k - n\mu}\over{\sigma\sqrt{n}}}|^{1/2}
|{1\over{\sigma(2\pi n)^{1\over 2}}}
\exp\{-{1\over 2}(k - n\mu)^2/(n\sigma^2)\} - P(S_n = k)|\\
& & + (\sigma\sqrt{n})^{1/2}\sum_{k = 0}^\infty
|{{k - n\mu}\over{\sigma\sqrt{n}}}|^{1/2}
{1\over{\sigma(2\pi n)^{1\over 2}}}
|\exp\{-{1\over 2}(k + 1 - n\mu)^2/(n\sigma^2)\}\\
& & - \exp\{-{1\over 2}(k - n\mu)^2/(n\sigma^2)\}|.
\end{eqnarray*}
Note that
$$\Bigl(1 + |{{k - n\mu}\over{\sigma\sqrt{n}}}|^3\Bigr) >
|{{k - n\mu}\over{\sigma\sqrt{n}}}|^{1/2}~\hbox{for all}~k \geq 0~
\hbox{and all}~n \geq 0.$$
This shows that the first two sums are of order $O(1)$ as we expected.
Now we consider the last sum:
$${{(\sigma\sqrt{n})^{1/2}}{{1\over{\sigma(2\pi n)^{1\over 2}}}}}
\sum_{k = 0}^\infty |{{k - n\mu}\over{\sigma\sqrt{n}}}|^{1/2}
|\exp\{-{1\over 2}(k + 1 - n\mu)^2/(n\sigma^2)\} $$
$$ - \exp\{-{1\over 2}(k - n\mu)^2/(n\sigma^2)\}|.$$
Let $t_{n, k} = {k - n\mu\over \sigma\sqrt{n}}$ and let
$\Delta t_{n, k} = t_{n, k+1} - t_{n, k} = {1\over \sigma\sqrt{n}}.$
The last sum is
\begin{eqnarray*}
& & {\sqrt{\sigma\sqrt{n}}\over\sqrt{2\pi}}\sum_{k =0}^\infty
{1\over\sigma\sqrt{n}}|t_{n, k}|^{1/2}
\Bigl|e^{-{1\over 2}t^2_{n, k+1}} - e^{-{1\over 2}t^2_{n, k}}\Bigr|\\
& = & {\sqrt{\sigma\sqrt{n}}\over\sqrt{2\pi}}\sum_{k =0}^\infty
\Bigl(t_{n, k+1} - t_{n, k}\Bigr)|t_{n, k}|^{1/2}
\Bigl|e^{-{1\over 2}\Bigl(t_{n, k} + {1\over \sigma\sqrt{n}}\Bigr)^2}
- e^{-{1\over 2}t^2_{n, k}}\Bigr|\\
& = & {\sqrt{\sigma\sqrt{n}}\over\sqrt{2\pi}}\sum_{k =0}^\infty
\Bigl(\Delta t_{n, k}\Bigr)|t_{n, k}|^{1/2}
\Bigl|e^{-{1\over 2}t^2_{n, k}}\Bigl(e^{-{1\over 2}\Bigl(2t_{n, k}\Delta
t_{n, k} + \Bigl(\Delta t_{n, k}\Bigr)^2\Bigr)} - 1\Bigr)\Bigr|\\
& = & \sqrt{\sigma\sqrt{n}}\sum_{k =0}^\infty \Bigl(\Delta t_{n, k}\Bigr)
|t_{n, k}|^{1/2}\phi\Bigl(t_{n, k}\Bigr)\Bigl|e^{-t_{n, k}\Delta t_{n, k}}
e^{-{1\over2}\Bigl(\Delta t_{n, k}\Bigr)^2} -1\Bigr|\\
& = & \sqrt{\sigma\sqrt{n}}\sum_{k =0}^\infty \Bigl(\Delta t_{n, k}\Bigr)
|t_{n, k}|^{1/2}\phi\Bigl(t_{n, k}\Bigr)\Bigl|\sum_{j =0}^\infty
{\Bigl(-t_{n, k}\Delta t_{n, k} - {\Bigl(\Delta t_{n, k}\Bigr)^2\over
2}\Bigr)^j\over j!} - 1\Bigr|\\
& \leq & \sqrt{\sigma\sqrt{n}}\sum_{k =0}^\infty \Bigl(\Delta t_{n, k}\Bigr)
|t_{n, k}|^{1/2}\phi\Bigl(t_{n, k}\Bigr)\sum_{j =0}^\infty
\Bigl(\Delta t_{n, k}\Bigr){\Bigl(|t_{n, k}| + 1\Bigr)^j\over j!}\\
&\leq & {\sqrt{\sigma\sqrt{n}}\over\sigma\sqrt{n}}\sum_{k =0}^\infty
\Bigl(\Delta t_{n, k}\Bigr)|t_{n, k}|^{1/2}\phi\Bigl(t_{n, k}\Bigr)
e^{1 + |t_{n, k}|}\\
& \sim & {1\over \sqrt{\sigma\sqrt{n}}}
\int_{-\infty}^\infty |t|^{1/2}e^{|t| +1}\phi(t)dt\\
& = & O\Bigl({1\over n^{1/4}}\Bigr).
\end{eqnarray*}
This concludes the proof of the first half of the theorem.
Since the methods $E_\alpha$ with $\alpha > 0$ are members of the class of
convolution methods and, as proved by Lorentz \cite{Lore48}, $\Omega(n) = o(\sqrt{n})$ are the only summability
functions of the method $E_\alpha,$ one cannot enlarge the class of
summability functions over the space of convolution methods under
consideration. This establishes the sharpness of the result.
\vrule height 8pt width 4pt
\vskip 1em
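The key estimate in the proof above, the boundedness of the $\sqrt{k}$-weighted variation, can also be checked numerically. A sketch under the illustrative assumption $X_i \sim \hbox{Bernoulli}(1/2)$:

```python
# Numerical sanity check of the key estimate in the preceding proof
# (assumption: X_i ~ Bernoulli(1/2), so S_n is Binomial(n, 1/2)): the
# weighted variation sum_k sqrt(k)|P(S_n = k+1) - P(S_n = k)| stays
# bounded in n, as required for Omega(n) = o(sqrt(n)) to be a
# summability function.
from math import comb, sqrt

def weighted_variation(n):
    p = [comb(n, k) / 2 ** n for k in range(n + 1)] + [0.0]
    return sum(sqrt(k) * abs(p[k + 1] - p[k]) for k in range(n + 1))
```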
\begin{remk}
{\rm
The summability functions for the following methods were obtained by Lorentz
\cite{Lore48}, of which $\Omega(n) = o(\sqrt{n})$ for the Euler $E_p$ method
agrees with the above theorem.
\begin{enumerate}
\item For the $(C, 1)$ method, $\Omega(n) = o(n).$
As the methods $(C, \alpha)~(\alpha > 0)$ and the Abel method $\cal A$ are
equivalent to the $(C, 1)$ method for bounded sequences, they also have the
same summability functions.
\item For the Euler $E_p$ method, $\Omega(n) = o(\sqrt{n}).$
\end{enumerate}
}
\end{remk}
Let $R_{n, j}$ be the weight of the random-walk method. As usual, by
writing $\mu,~\sigma^2$ for the mean and variance of the sequence of
i.i.d. random variables with finite third moment, we obtain
$$\max_j|R_{n, j}| = O\Bigl({1\over n^{1/2}}\Bigr)~\hbox{as}~n \to \infty$$
as follows from
$$R_{n, j} = P(X_1 + X_2 + \ldots + X_n = j) = {1\over{\sigma(2\pi n)^{1/2}}}
\exp\{{-1\over 2}(j - n\mu)^2/n\sigma^2\} + o(1/\sqrt{n})$$
uniformly in $j$ \cite{BiMa85}. Hence, the set of all random-walk methods
generated by random variables with finite third moment is contained in the class ${\cal U}$ of matrix methods.
The following corollary can be easily drawn from the above theorem.
\begin{coro}\label{coro31}
Let $\{X_i, i\geq 1\}$ be independent and identically distributed aperiodic
nonnegative integer-valued random variables with finite third moment and
positive variance. Then for the matrix transformation corresponding to the
regular random-walk method $\{R_{n, j}\},$ any function $0 < \Omega(n)
\uparrow \infty$ of the form $\Omega(n) = o(\sqrt{n})$ is a summability
function.
Furthermore, $0 < \Omega(n) = o(\sqrt{n})$ with $\Omega(n) \uparrow \infty$
are the only functions which are summability functions over the class of
regular random-walk methods.
\end{coro}
\section{Absolute Summability Functions}
The definition of summability functions has been refined by introducing
the concept of absolute summability functions.
\begin{defn}
{\rm
Let $\Omega(n)$ be a non-decreasing positive function which tends to
$+\infty$ with $n.$ We say that $\Omega(n)$ is an absolute summability
function of a summability matrix $A = \{a_{n, k}\}_{n, k \geq 0}$, if any
bounded sequence $\{f(k), ~k \geq 0\}$ for which $f(k) = 0$ except for a
subsequence $\{n_\nu\}$ with the counting function $A(n, f) \leq \Omega(n)$
is absolutely $A$-summable, that is, $\sum_{n = 0}^\infty
|\sigma_n - \sigma_{n -1}| < +\infty$ for any such sequence, where
$\sigma_n = \sum_{k =0}^\infty a_{n, k}f(k)$ for $n \geq 0.$
}
\end{defn}
Theorem 7 of Lorentz \cite{Lore51} addresses the question of the existence of
absolute summability functions.
\begin{theo}\label{theo12}(Lorentz \cite{Lore51}) The method of summation $A$
generated by the matrix $A = \{a_{n, k}\}_{n, k \geq 0}$ for which
$\sum_{k =0}^\infty|a_{0, k}| < + \infty$ has absolute summability
functions if and only if the variation of the $k$-th column,
$V_k = {\hbox{var}}_n~a_{n, k},$ defined by $\sum_{n =0}^\infty|a_{n +1, k} -
a_{n, k}|,$ converges to $0$ as $k \to \infty.$
\end{theo}
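The column-variation condition $V_k \to 0$ can be observed numerically. A sketch for the illustrative special case of the random-walk method with $X_i \sim \hbox{Bernoulli}(1/2),$ so that $C_{n, k} = \binom{n}{k} 2^{-n}$:

```python
# Sketch of Lorentz's column-variation condition V_k -> 0 (assumption: the
# random-walk method with X_i ~ Bernoulli(1/2), so C_{n,k} = C(n,k)/2^n;
# each column sum is truncated at n = 500, where the tail is negligible).
from math import comb

def column_variation(k, n_max=500):
    col = [comb(n, k) / 2 ** n for n in range(n_max)]
    return sum(abs(col[n + 1] - col[n]) for n in range(n_max - 1))
```

Each column rises to a single maximum of order $1/\sqrt{\pi k}$ near $n = 2k$ and then decays, so the variation telescopes to roughly twice that maximum and tends to $0$ as $k \to \infty.$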
As we will show below, a regular convolution summability method that
has been considered in the preceding sections has this structure. Hence,
according to Theorem 7 of Lorentz \cite{Lore51}, a regular convolution summability
method under consideration has absolute summability functions. Since the moment
generating function (mgf) may exist for some real arguments but not for all, we
simply require that the characteristic function be entire (analytic) in the
results to follow \cite{Luka70}. For the probabilistic relevance of the mgf condition,
see \cite{CsRe81}.
\begin{theo}\label{theo12a}
Let $Y, \{X_i, i\geq 1\}$ be independent with $E(Y) = \mu_Y < \infty,$ and
$\{X_i, i\geq 1\}$ be identically distributed aperiodic nonnegative
integer-valued random variables whose characteristic function is analytic. The matrix
transformation corresponding to the regular convolution summability method
$C = \{C_{n, k}\}_{n, k \geq 0}$ has absolute summability functions.
\end{theo}
\noindent{\sl Proof.}
First, we verify that
$$\sum_{k =0}^\infty|C_{0, k}| = \sum_{k =0}^\infty P(Y = k) = \sum_{k =0}^\infty q_k = 1 < \infty.$$
This means that the method already satisfies the first condition of the hypothesis.\\
Let $S_n = X_1 + X_2 + \ldots + X_n$ and $S_0 = 0.$
We now consider,
\begin{eqnarray*}
& & \sum_{n = 0}^\infty|C_{n + 1, k} - C_{n, k}|\\
& = & \sum_{n = 0}^\infty|P(Y + S_{n +1} = k) - P(Y + S_n = k)|\\
& = & \sum_{n = 0}^\infty|\sum_{j =0}^{k + 1} q_jP(S_n = k+ 1 -j) -\sum_{j = 0}^k q_jP(S_n = k - j)|\\
& \leq & \sum_{n = 0}^\infty\sum_{j =0}^k q_j|P(S_n = k + 1 -j) - P(S_n = k - j)| + \sum_{n = 0}^\infty q_{k + 1}P(S_n = 0)\\
& \leq & \sum_{j = 0}^k q_j \sum_{n = 0}^\infty|P(S_n = k + 1 -j) - P(S_n = k - j)| + q_{k +1}\sum_{n = 0}^\infty\bigl(p_0\bigr)^n\\
& = & \sum_{j = 0}^k q_j \sum_{n = 0}^\infty|P(S_n = k + 1 -j) - P(S_n = k - j)| + q_{k +1}\Bigl({1\over 1 - p_0}\Bigr).\\
\end{eqnarray*}
The last term on the right is $o_k(1),$ as the method is regular ($p_0 < 1$) and $\sum_{k = 0}^\infty q_k = 1.$
The convergence of the first sum is evident from Theorem 4 of Kesten \cite{Kest72}. Now we show that this sum is in fact
$o_k(1),$ where the subscript $k$ denotes that the order notation is taken as $k \to \infty.$
For,
$$I_k = \sum_{j = 0}^k q_j \sum_{n = 0}^\infty|P(S_n = k + 1 -j) - P(S_n = k - j)| + o_k(1),$$
using the Chung--Erd\H{o}s inequality cited on page 706 of Kesten \cite{Kest72}, which takes the following form:
if $P(S_k = a)P(S_{k + m} = a + j) > 0$ holds for some integer $k \geq 0$ and some $a,$ then for every $\epsilon > 0$ there exists a
$\delta > 0$ such that, for sufficiently large $n,$
\begin{eqnarray*}
P(S_n = i_n) & \leq &(1 + \epsilon)P(S_{n +m} = i_n + j) + e^{-\delta n}~\hbox{and}\\
P(S_{n + m} = i_n + j) & \leq & (1 + \epsilon)P(S_n = i_n) + e^{-\delta n}.\\
\end{eqnarray*}
Thus, for $\delta_1 \not= \delta_2,$ we have $I_k$
\begin{eqnarray*}
& \leq & \sum_{j =0}^k q_j\sum_{n =0}^\infty|(1 + \epsilon_1)P(S_{n + k -j} = 0) + e^{-\delta_1(n +k -j)} - (1 + \epsilon_2)
P(S_{n + k - j} = 0) - e^{-\delta_2(n + k -j)}| + o_k(1)\\
& \leq & \sum_{j =0}^k q_j\Bigl\{|\epsilon_1 - \epsilon_2|\sum_{n =0}^\infty\bigl(p_0\bigr)^{n + k -j} + \Bigl|\sum_{n = 0}^\infty
e^{-\delta_1(n + k -j)} - \sum_{n = 0}^\infty e^{-\delta_2(n + k -j)}\Bigr|\Bigr\} + o_k(1)\\
& \leq & |\epsilon_1 - \epsilon_2|\bigl(p_0\bigr)^k\sum_{j = 0}^k q_jp_0^{-j}\Bigl({1\over 1 - p_0}\Bigr)
+ \sum_{j = 0}^kq_je^{\max(\delta_1, \delta_2)j}\Bigl|\sum_{n =0}^\infty\Bigl\{e^{-\delta_1n} - e^{-\delta_2n}\Bigr\}\Bigr|
\Bigl(e^{-\min(\delta_1, \delta_2)}\Bigr)^k + o_k(1)\\
& \leq & |\epsilon_1 - \epsilon_2|\bigl(p_0\bigr)^k\sum_{j = 0}^k q_jp_0^{-j}\Bigl({1\over 1 - p_0}\Bigr)
+ \sum_{j = 0}^kq_je^{\max(\delta_1, \delta_2)j}\Bigl|{1\over 1 - e^{-\delta_1}} - {1\over 1 - e^{-\delta_2}}\Bigr|
\Bigl(e^{-\min(\delta_1, \delta_2)}\Bigr)^k + o_k(1).\\
\end{eqnarray*}
This gives $I_k = o_k(1),$ since the characteristic function of $\{X_i, i\geq 1\}$ is
analytic.
\vrule height 8pt width 4pt
\vskip 1em
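As an illustrative numerical sketch (under the simplifying assumption $Y = 0$ and $X_i \sim \hbox{Bernoulli}(t),$ for which the convolution method reduces to the Euler method $E_t$), the truncated column variations $\sum_{n}|C_{n + 1, k} - C_{n, k}|$ can be computed directly and seen to decrease toward $0$ as $k$ grows, consistent with the proof above:

```python
import math

def C(n, k, t):
    # Convolution matrix entry with Y = 0 and X_i ~ Bernoulli(t):
    # C_{n,k} = P(S_n = k) = binom(n,k) t^k (1-t)^(n-k), i.e., the Euler method E_t.
    if k > n:
        return 0.0
    return math.comb(n, k) * t**k * (1 - t)**(n - k)

def column_variation(k, t, n_max=400):
    # Truncated column variation V_k = sum_{n=0}^{n_max-1} |C_{n+1,k} - C_{n,k}|.
    return sum(abs(C(n + 1, k, t) - C(n, k, t)) for n in range(n_max))

# V_k shrinks as k grows, consistent with V_k = o_k(1) in the proof.
vals = [column_variation(k, 0.5) for k in (1, 5, 20, 50)]
```

The truncation at $n_{\max} = 400$ leaves a negligible tail for $t = 1/2$ and the values of $k$ shown.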
In analogy with Theorem 9 of Lorentz \cite{Lore51}, we prove the following
generalization for the convolution summability method, which now
includes the Taylor and Meyer-K\"onig summability methods, among many others. We will use
most of the preliminary facts from Lorentz \cite{Lore51} without further
discussion as we proceed.
\begin{theo}\label{theo13} A function $\Omega(n)$ is an absolute summability
function of the regular convolution summability method generated by
independent random variables $Y, \{X_i, i\geq 1\}$ with $E(Y) = \mu_Y
< \infty,$ where the $\{X_i, i\geq 1\}$ are identically distributed aperiodic
nonnegative integer-valued random variables whose characteristic function is analytic,
if and only if $$\sum_{n =1}^\infty n^{-3/2}\Omega(n) < +\infty.$$
\end{theo}
\noindent{\sl Proof.}
The sufficiency of the theorem is proved as follows:
Let $\{C_{n, k}\}_{n, k \geq 1}$ be the convolution summability matrix
corresponding to the given sequence of random variables.
From Theorem \ref{theo12a}, the variation of the $k$-th column is
$$V_k = {\hbox{var}}_n~C_{n, k} = \sum_{n =0}^\infty|C_{n + 1, k} -
C_{n, k}| = \alpha\bigl(p_0\bigr)^k + O\Bigl(e^{-\beta k}\Bigr) + \gamma q_{k +1}$$
for some positive constants $\alpha, \beta,$ and $\gamma$ as follows from the proof of
Theorem \ref{theo12a}. If $\{k_\nu\}$ is a sequence of integers with counting function
$\omega(n) \leq \Omega(n),$ then $\sum_{\nu =1}^\infty k_\nu^{-\beta} < \infty$ by
Lemma 2 (p.\ 247 of Lorentz \cite{Lore51}); noting that $e^{-\beta k_\nu} < {k_\nu}^{-\beta},$
we see that
$$\sum_{\nu =1}^\infty {\hbox{var}}_n C_{n, k_\nu} =
\sum_{\nu =1}^\infty\Bigl\{\alpha\bigl(p_0\bigr)^{k_\nu} + O\Bigl(e^{-\beta k_{\nu}}\Bigr) +
\gamma q_{k_\nu +1}\Bigr\} < \infty.$$
The latter is necessary and sufficient for $\Omega(n)$ to be an absolute summability
function of the given matrix method (Theorem 6 of Lorentz \cite{Lore51}).
For the necessity, using the fact that the Euler method $E_t,~0 < t < 1,$ and the Borel method $B$
are members of the family of convolution summability methods, it suffices to proceed as follows.
Suppose that the series $\sum_{n =1}^\infty n^{-{3\over 2}}\Omega(n)$ is
divergent. Then, taking the Euler method $E_t,~0 < t < 1,$ or the Borel method
$B,$ we conclude that the series converges for either of these methods. This
contradiction establishes the assertion.
\vrule height 8pt width 4pt
\vskip 1em
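By way of illustration (a routine application of the criterion, not from the original), the function
$\Omega(n) = n^{1/2 - \epsilon}$ with $0 < \epsilon < 1/2$ is an absolute summability function of any such
method, since
$$\sum_{n = 1}^\infty n^{-3/2}\,\Omega(n) = \sum_{n = 1}^\infty n^{-1 - \epsilon} < +\infty,$$
whereas $\Omega(n) = n^{1/2}$ fails, since the corresponding series $\sum_{n = 1}^\infty n^{-1}$ diverges.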
Most of the discussions and remarks appeared in \cite{Lore48}, \cite{Lore49}, \cite{Lore51},
and \cite{Peye69} now follow without further proofs and hold for the random-walk method and
all members of the convolution summability method.
\section{Acknowledgments}
This paper stems from the author's Ph.D. dissertation \cite{Goon97} at Kent
State University
under Professor M. Kazim Khan. The author wishes to thank him for proposing this topic
for further exploration and for his pertinent insights into the problem at hand. Other readers have
also helped to improve this paper considerably. Thanks are also due to the referee, whose
valuable comments, suggestions, and corrections of writing errors in earlier versions of this
manuscript were immensely helpful in improving the paper.
\vskip 1em
\begin{thebibliography}{10}
\bibitem{Bike67}
A. Bikelis and G. Jasjunas, Limits in the metric of the spaces $L_1,$ $l_1$ (in Russian),
{\it Litovsk.\ Mat.\ Sb.} {\bf 7} (1967), 195--218.
\bibitem{BiMa85}
N. H. Bingham and Makoto Maejima, Summability methods and almost sure convergence, {\it Z.\ Wahrsch.\
Verw.\ Gebiete} {\bf 68} (1985), 383--392.
\bibitem{CsRe81}
M. Cs\"org\H{o} and P. R\'ev\'esz, {\it Strong Approximations in Probability and Statistics,}
Academic Press, NY, 1981.
\bibitem{Goon97}
Rohitha Goonatilake, {\it On Probabilistic Aspects of Summability Theory,} Department of Mathematics
and Computer Sciences, Kent State University, Kent, Ohio 44242, December 1997.
\bibitem{Kest72}
Harry Kesten, Sums of independent random variables---without moment conditions, {\it Ann.\
Math.\ Statist.} {\bf 43} (3) (1972), 701--732.
\bibitem{Khan91}
M. K. Khan, Statistical methods in analysis I: Some Tauberian theorems for absolute summability,
{\it Pak. J. Statist.,} {\bf 7} (1)A (1991), 21--32.
\bibitem{Lore48}
G. G. Lorentz, A contribution to the theory of divergent sequences, {\it Acta Math.,} {\bf 80} (1948),
167--190.
\bibitem{Lore49}
G. G. Lorentz, Direct theorems on methods of summability, {\it Canad. J. Math.,} {\bf 1} (1949), 305--319.
\bibitem{Lore51}
G. G. Lorentz, Direct theorems on methods of summability II, {\it Canad. J. Math.,} {\bf 3} (1951), 236--256.
\bibitem{Luka70}
E. Lukacs, {\it Characteristic Functions,} 2nd edition, revised and enlarged, Hafner Publishing Co., NY, 1970.
\bibitem{Meye49}
W. Meyer-K\"onig, Untersuchungen ueber einige verwandte Limitierungsverfahren, {\it Math. Z.} {\bf 52} (1949),
257--304.
\bibitem{Peye69}
A. Peyerimhoff, {\it Lectures on Summability,} Springer-Verlag Lecture Notes, No. 107, 1969.
\bibitem{ZeBe70}
K. Zeller and W. Beekmann, {\it Theorie der Limitierungsverfahren,} Ergaenzungen, Section 70, Math. Grenzgeb, 15,
2nd edition, Springer, Berlin, 1970.
\end{thebibliography}
\bigskip
\hrule
\bigskip
\noindent 2000 {\it Mathematics Subject Classification}:
Primary 40A05;
Secondary 40C05, 42B08, 43A99.\\
\noindent \emph{Keywords: }
absolute summability functions,
almost convergence, convolution summability method,
random-walk method, regularity, summability functions, strong regularity.
\bigskip
\hrule
\bigskip
\vspace*{+.1in}
\noindent
Received September 11 2003;
revised version received August 18 2004.
Published in {\it Journal of Integer Sequences}, September 30 2004.
\bigskip
\hrule
\bigskip
\noindent
Return to
\htmladdnormallink{Journal of Integer Sequences home page}{http://www.math.uwaterloo.ca/JIS/}.
\vskip .1in
\end{document}