% This paper has been transcribed in Plain TeX by
% David R. Wilkins
% School of Mathematics, Trinity College, Dublin 2, Ireland
% (dwilkins@maths.tcd.ie)
%
% Trinity College, 1st June 1999.
\magnification=\magstep1
\vsize=227 true mm \hsize=170 true mm
\voffset=-0.4 true mm \hoffset=-5.4 true mm
\def\folio{\ifnum\pageno>0 \number\pageno \else
\ifnum\pageno<0 \romannumeral-\pageno \else\fi\fi}
\font\Largebf=cmbx10 scaled \magstep2
\font\largerm=cmr12
\font\largeit=cmti12
\font\tensc=cmcsc10
\font\sevensc=cmcsc10 scaled 700
\newfam\scfam \def\sc{\fam\scfam\tensc}
\textfont\scfam=\tensc \scriptfont\scfam=\sevensc
\font\largesc=cmcsc10 scaled \magstep1
\input amssym.def
\newsymbol\backprime 1038
\def\nlt{\mathrel{\vcenter{\rlap{\kern0.7em \vrule height2pt depth2pt}}<}}
\def\ngt{\mathrel{\vcenter{\rlap{\kern0.55em \vrule height2pt depth2pt}}>}}
\def\neq{\mathrel{\vcenter{\halign{\hfil$##$\hfil\cr
>\cr\noalign{\kern-8pt}<\cr}}}}
\pageno=0
\null\vskip72pt
\centerline{\Largebf ON FLUCTUATING FUNCTIONS}
\vskip24pt
\centerline{\Largebf By}
\vskip24pt
\centerline{\Largebf William Rowan Hamilton}
\vskip24pt
\centerline{\largerm (Transactions of the Royal Irish Academy, 19 (1843),
pp.\ 264--321.)}
\vskip36pt
\vfill
\centerline{\largerm Edited by David R. Wilkins}
\vskip 12pt
\centerline{\largerm 1999}
\vskip36pt\eject
\pageno=-1
\null\vskip36pt
\centerline{\Largebf NOTE ON THE TEXT}
\bigskip
The paper {\it On Fluctuating Functions}, by Sir William Rowan Hamilton,
appeared in volume 19 of the {\it Transactions of the Royal Irish
Academy}, published in 1843.
The following obvious typographical errors have been corrected:---
\smallbreak
\item{}
in article~8, an upper limit of integration of $\infty$ has been
added to the integral which in the original publication was printed as
$\displaystyle \int_0 d\alpha \,
{\sin \beta \alpha \over \alpha (1 + \alpha^2)}$;
\smallskip
\item{}
in article~13, the right hand side of equation (d${}'''$) was
printed in the original publication as
$(\alpha - x)^{-1} \psi_{k_{-1 (\alpha - x)}}$;
\smallskip
\item{}
a full stop (period) has been inserted after equation (d${}^{IX}$).
\bigbreak\bigskip
\line{\hfil David R. Wilkins}
\vskip3pt
\line{\hfil Dublin, June 1999}
\vfill\eject
\pageno=1
\null\vskip36pt
\noindent
{\largeit On Fluctuating Functions. %\hfil\break
By {\largesc Sir William Rowan Hamilton}, LL.~D., P.~R.~I.~A., F.~R.~A.~S.,
Fellow of the American Society of Arts and Sciences, and of the
Royal Northern Society of Antiquaries at Copenhagen; Honorary or
Corresponding Member of the Royal Societies of Edinburgh and
Dublin, of the Academies of St.\ Petersburgh, Berlin, and Turin,
and of other Scientific Societies at home and abroad; Andrews'
Professor of Astronomy in the University of Dublin, and Royal
Astronomer of Ireland.\par}
\vskip12pt
\centerline{Read June 22nd, 1840.}
\vskip12pt
\centerline{[{\it Transactions of the Royal Irish Academy},
vol.~xix (1843), pp.~264--321.]}
\bigskip
The paper now submitted to the Royal Irish Academy is designed
chiefly to invite attention to some consequences of a very
fertile principle, of which indications may be found in
{\sc Fourier's} Theory of Heat, but which appears to have
hitherto attracted little notice, and in particular seems to have
been overlooked by {\sc Poisson}. This principle, which may be
called the {\it Principle of Fluctuation}, asserts (when put
under its simplest form) the evanescence of the integral, taken
between any finite limits, of the product formed by multiplying
together any two finite functions, of which one, like the sine or
cosine of an infinite multiple of an arc, changes sign infinitely
often within a finite extent of the variable on which it depends,
and has for its mean value zero; from which it follows, that if
the other function, instead of being always finite, becomes
infinite for some particular values of its variable, the integral
of the product is to be found by attending only to the immediate
neighbourhood of those particular values. The writer is of
opinion that it is only requisite to develope the foregoing
principle, in order to give a new clearness, and even a new
extension, to the existing theory of the transformations of
arbitrary functions through functions of determined forms. Such
is, at least, the object aimed at in the following pages; to
which will be found appended a few general observations on this
interesting part of our knowledge.
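In modern numerical terms, the simplest form of this principle may be sketched as follows; the particular function $f$, the interval of integration, and the trapezoidal quadrature are arbitrary choices made only for illustration.

```python
import math

def trapezoid(f, a, b, n=200000):
    # composite trapezoidal rule for the integral of f over [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

# an arbitrarily chosen finite, gradually varying function
f = lambda t: 1.0 / (1.0 + t * t)

# the factor sin(beta*t) changes sign ever more often as beta grows,
# and the integral of the product shrinks toward zero
vals = {beta: trapezoid(lambda t: f(t) * math.sin(beta * t), 0.0, 1.0)
        for beta in (10.0, 100.0, 1000.0)}
```

As $\beta$ grows, the alternating strips cancel ever more completely, and the integral decays roughly in proportion to $\beta^{-1}$.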
\bigbreak
[1.]
The theorem, discovered by {\sc Fourier}, that between any finite
limits, $a$ and $b$, of any real variable~$x$, any arbitrary but
finite and determinate function of that variable, of which the
value varies gradually, may be represented thus,
$$fx = {1 \over \pi} \int_a^b d\alpha \int_0^\infty d\beta \,
\cos (\beta \alpha - \beta x) \, f\alpha,
\eqno {\rm (a)}$$
with many other analogous theorems, is included in the following
form:
$$fx = \int_a^b d\alpha \int_0^\infty d\beta \,
\phi (x, \alpha, \beta) \, f\alpha;
\eqno {\rm (b)}$$
the function~$\phi$ being, in each case, suitably chosen. We
propose to consider some of the conditions under which a
transformation of the kind (b) is valid.
\bigbreak
[2.]
If we make, for abridgment,
$$\psi(x,\alpha, \beta) = \int_0^\beta d\beta \, \phi(x, \alpha, \beta),
\eqno {\rm (c)}$$
the equation (b) may be thus written:
$$fx = \int_a^b d\alpha \, \psi(x, \alpha, \infty) \, f\alpha.
\eqno {\rm (d)}$$
This equation, if true, will hold good, after the change of
$f\alpha$, in the second member, to $f\alpha + {\sc f}\alpha$;
provided that, for the particular value $\alpha = x$, the
additional function ${\sc f}\alpha$ vanishes; being also, for
other values of $\alpha$, between the limits $a$ and $b$,
determined and finite, and gradually varying in value. Let then
this function ${\sc f}$ vanish, from $\alpha = a$ to $\alpha =
\lambda$, and from $\alpha = \mu$ to $\alpha = b$; $\lambda$ and
$\mu$ being included, either between $a$ and $x$, or between $x$
and $b$; so that $x$ is not included between $\lambda$ and $\mu$,
though it is included between $a$ and $b$. We shall have, under
these conditions,
$$0 = \int_\lambda^\mu d\alpha \, \psi(x, \alpha, \infty) \, {\sc f}\alpha;
\eqno {\rm (e)}$$
the function ${\sc f}$, and the limits $\lambda$ and $\mu$, being
arbitrary, except so far as has been above defined.
Consequently, unless the function of $\alpha$, denoted here by
$\psi(x, \alpha, \infty)$, be itself $= 0$, it must change sign
at least once between the limits $\alpha = \lambda$,
$\alpha = \mu$, however close these limits may be; and therefore
must change sign indefinitely often, between the limits $a$ and
$x$, or $x$ and $b$. A function which thus changes sign
indefinitely often, within a finite range of a variable on which
it depends, may be called a {\it fluctuating function}. We shall
consider now a class of cases, in which such a function may
present itself.
\bigbreak
[3.]
Let ${\sc n}_\alpha$ be a real function of $\alpha$, continuous or
discontinuous in value, but always comprised between some finite
limits, so as never to be numerically greater than $\pm {\rm c}$,
in which ${\rm c}$ is a finite constant; let
$${\sc m}_\alpha = \int_0^\alpha d\alpha \, {\sc n}_\alpha;
\eqno {\rm (f)}$$
and let the equation
$${\sc m}_\alpha = {\rm a},
\eqno {\rm (g)}$$
in which ${\rm a}$ is some finite constant, have infinitely many
real roots, extending from $-\infty$ to $+\infty$, and such that
the interval $\alpha_{n+1} - \alpha_n$, between any one root
$\alpha_n$ and the next succeeding $\alpha_{n+1}$, is never
greater than some finite constant, ${\rm b}$. Then,
$$0 = {\sc m}_{\alpha_{n+1}} - {\sc m}_{\alpha_n}
= \int_{\alpha_n}^{\alpha_{n+1}} d\alpha \, {\sc n}_\alpha;
\eqno {\rm (h)}$$
and consequently the function ${\sc n}_\alpha$ must change sign
at least once between the limits $\alpha = \alpha_n$ and $\alpha
= \alpha_{n+1}$; and therefore at least $m$ times between the
limits $\alpha = \alpha_n$ and $\alpha = \alpha_{n+m}$, this
latter limit being supposed, according to the analogy of this
notation, to be the $m^{\rm th}$ root of the equation (g), after
the root $\alpha_n$. Hence the function ${\sc n}_{\beta
\alpha}$, formed from ${\sc n}_\alpha$ by multiplying $\alpha$ by
$\beta$, changes sign at least $m$ times between the limits
$\alpha = \lambda$, $\alpha = \mu$, if\footnote*{These notations
$\ngt$ and $\nlt$ are designed to signify the contradictories of
$>$ and $<$; so that ``$a \ngt b$'' is equivalent to
``$a$ not $> b$,'' and ``$a \nlt b$'' is equivalent to
``$a$ not $< b$.''}
$$\lambda \ngt \beta^{-1} \alpha_n,\quad
\mu \nlt \beta^{-1} \alpha_{n+m};$$
the interval $\mu - \lambda$ between these limits being less than
$\beta^{-1} (m + 2) {\rm b}$, if
$$\lambda > \beta^{-1} \alpha_{n-1},\quad
\mu < \beta^{-1} \alpha_{n + m + 1};$$
so that, under these conditions, ($\beta$ being $> 0$,) we have
$$m > -2 + \beta {\rm b}^{-1} (\mu - \lambda).$$
However small, therefore, the interval $\mu - \lambda$ may be,
provided that it be greater than $0$, the number of changes of
sign of the function ${\sc n}_{\beta \alpha}$, within this range
of the variable~$\alpha$, will increase indefinitely with
$\beta$. Passing then to the extreme or limiting supposition,
$\beta = \infty$, we may say that the function
${\sc n}_{\infty \alpha}$ {\it changes sign infinitely often\/} within
a finite range of the variable~$\alpha$ on which it depends; and
consequently that it is, in the sense of the last article, a
{\sc fluctuating function}. We shall next consider the integral
of the product formed by multiplying together two functions of
$\alpha$, of which one is ${\sc n}_{\infty \alpha}$, and the
other is arbitrary, but finite, and shall see that this integral
vanishes.
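The growth in the number of sign changes asserted here, $m > -2 + \beta {\rm b}^{-1} (\mu - \lambda)$, can be observed numerically. The sketch below takes ${\sc n}_\alpha = \sin \alpha$ (for which the interval ${\rm b}$ between successive roots is $\pi$) and counts the sign changes of ${\sc n}_{\beta \alpha}$ on an arbitrarily chosen interval $(\lambda, \mu)$ by dense sampling.

```python
import math

def sign_changes(g, lo, hi, samples=200000):
    # count sign changes of g on [lo, hi] by dense sampling
    count, prev = 0, g(lo)
    for i in range(1, samples + 1):
        cur = g(lo + (hi - lo) * i / samples)
        if prev * cur < 0:
            count += 1
        prev = cur
    return count

lam, mu = 0.3, 0.5          # arbitrary limits, with x not between them
counts = {}
for beta in (100.0, 1000.0):
    counts[beta] = sign_changes(lambda a: math.sin(beta * a), lam, mu)
```

For $\beta = 100$ the bound $m > -2 + \beta \pi^{-1} (\mu - \lambda)$ requires at least 5 changes of sign on this interval; the observed count grows in proportion to $\beta$.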
\bigbreak
[4.]
It has been seen that the function ${\sc n}_\alpha$ changes sign
at least once between the limits $\alpha = \alpha_n$,
$\alpha = \alpha_{n+1}$. Let it then change sign $k$ times
between those limits, and let the $k$ corresponding values of
$\alpha$ be denoted by
$\alpha_{n,1}, \alpha_{n,2},\ldots \, \alpha_{n,k}$.
Since the function ${\sc n}_\alpha$ may be discontinuous in
value, it will not necessarily vanish for these $k$ values of
$\alpha$; but at least it will have one constant sign, being
throughout not $< 0$, or else throughout not $> 0$, in the
interval from $\alpha = \alpha_n$ to $\alpha = \alpha_{n,1}$; it
will be, on the contrary, throughout not $> 0$, or throughout not
$< 0$, from $\alpha_{n,1}$ to $\alpha_{n,2}$; again, not $< 0$,
or not $> 0$, from $\alpha_{n,2}$ to $\alpha_{n,3}$; and so on.
Let then ${\sc n}_\alpha$ be never $< 0$ throughout the whole of
the interval from $\alpha_{n,i}$ to $\alpha_{n,i+1}$; and let it
be $> 0$ for at least some finite part of that interval; $i$
being some integer number between the limits $0$ and $k$, or even
one of those limits themselves, provided that the symbols
$\alpha_{n,0}$, $\alpha_{n,k+1}$ are understood to denote the
same quantities as $\alpha_n$, $\alpha_{n+1}$. Let
${\sc f}_\alpha$ be a finite function of $\alpha$, which receives
no sudden change of value, at least for that extent of the
variable~$\alpha$, for which this function is to be employed; and
let us consider the integral
$$\int_{\alpha_{n,i}}^{\alpha_{n,i+1}} d\alpha \,
{\sc n}_\alpha {\sc f}_\alpha.
\eqno {\rm (i)}$$
Let ${\sc f}^\backprime$ be the algebraically least, and
${\sc f}^{\backprime\backprime}$ the algebraically greatest value of
the function ${\sc f}_\alpha$, between the limits of integration;
so that, for every value of $\alpha$ between these limits, we
shall have
$${\sc f}_\alpha - {\sc f}^\backprime \nlt 0,\quad
{\sc f}^{\backprime\backprime} - {\sc f}_\alpha \nlt 0;$$
these values ${\sc f}^\backprime$ and ${\sc
f}^{\backprime\backprime}$, of the function ${\sc f}_\alpha$,
corresponding to some values $\alpha_{n,i}^\backprime$ and
$\alpha_{n,i}^{\backprime\backprime}$ of the variable~$\alpha$,
which are not outside the limits $\alpha_{n,i}$ and
$\alpha_{n,i+1}$. Then, since, between these latter limits, we
have also
$${\sc n}_\alpha \nlt 0,$$
we shall have
$$\left. \eqalign{
\int_{\alpha_{n,i}}^{\alpha_{n,i+1}} d\alpha \,
{\sc n}_\alpha ({\sc f}_\alpha - {\sc f}^\backprime)
&\nlt 0;\cr
\int_{\alpha_{n,i}}^{\alpha_{n,i+1}} d\alpha \,
{\sc n}_\alpha ({\sc f}^{\backprime\backprime} - {\sc f}_\alpha)
&\nlt 0;\cr}
\right\}
\eqno {\rm (k)}$$
the integral~(i) will therefore be not
$< s_{n,i} {\sc f}^\backprime$,
and not
$> s_{n,i} {\sc f}^{\backprime\backprime}$,
if we put, for abridgment,
$$s_{n,i}
= \int_{\alpha_{n,i}}^{\alpha_{n,i+1}} d\alpha \, {\sc n}_\alpha;
\eqno {\rm (l)}$$
and consequently this integral (i) may be represented by
$s_{n,i} {\sc f}'$, in which
$${\sc f}' \nlt {\sc f}^\backprime,\quad
{\sc f}' \ngt {\sc f}^{\backprime\backprime},$$
because, with the suppositions already made, $s_{n,i} > 0$. We
may even write
$${\sc f}' > {\sc f}^\backprime,\quad
{\sc f}' < {\sc f}^{\backprime\backprime},$$
unless it happen that the function~${\sc f}_\alpha$ has a
constant value through the whole extent of the integration; or
else that it is equal to one of its extreme values,
${\sc f}^\backprime$ or ${\sc f}^{\backprime\backprime}$,
throughout a finite part of that extent, while, for the remaining
part of the same extent, that is, for all other values of
$\alpha$ between the same limits, the factor ${\sc n}_\alpha$
vanishes. In all these cases, ${\sc f}'$ may be considered as a
value of the function ${\sc f}_\alpha$, corresponding to a value
$\alpha_{n,i}'$ of the variable~$\alpha$ which is included
between the limits of integration; so that we may express the
integral (i) as follows:
$$\int_{\alpha_{n,i}}^{\alpha_{n,i+1}} d\alpha \,
{\sc n}_\alpha {\sc f}_\alpha
= s_{n,i} {\sc f}_{\alpha'_{n,i}};
\eqno {\rm (m)}$$
in which
$$\alpha'_{n,i} > \alpha_{n,i},\quad < \alpha_{n,i+1}.
\eqno {\rm (n)}$$
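The mean-value step (m) admits a direct numerical check: when ${\sc n}_\alpha$ keeps one sign within the interval, the quotient of the integral (i) by $s_{n,i}$ must fall between the least and greatest values of ${\sc f}_\alpha$ there. The functions below are arbitrary illustrative choices, with one arch of the sine standing for ${\sc n}_\alpha$.

```python
import math

def trapezoid(f, a, b, n=100000):
    # composite trapezoidal rule for the integral of f over [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

N = math.sin                       # N >= 0 on [0, pi]
F = lambda a: 2.0 + math.cos(a)    # finite, gradually varying

I = trapezoid(lambda a: N(a) * F(a), 0.0, math.pi)  # the integral (i)
s = trapezoid(N, 0.0, math.pi)                      # s_{n,i}; here = 2
Fprime = I / s                                      # the mean value F'
```

Here the integral is $4$, $s_{n,i} = 2$, and the mean value ${\sc f}' = 2$ lies between the extremes $1$ and $3$, being attained at $\alpha' = \pi/2$, inside the interval of integration.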
In like manner, the expression (m), with the inequalities (n),
may be proved to hold good, if ${\sc n}_\alpha$ be never $> 0$,
and sometimes $< 0$, within the extent of the integration, the
integral $s_{n,i}$ being in this case $< 0$; we have, therefore,
rigorously,
$$\int_{\alpha_n}^{\alpha_{n+1}} d\alpha \,
{\sc n}_\alpha {\sc f}_\alpha
= s_{n,0} {\sc f}_{\alpha'_{n,0}}
+ s_{n,1} {\sc f}_{\alpha'_{n,1}}
+ \cdots + s_{n,k} {\sc f}_{\alpha'_{n,k}}.
\eqno {\rm (o)}$$
But also, we have, by (h),
$$0 = s_{n,0} + s_{n,1} + \cdots + s_{n,k};
\eqno {\rm (p)}$$
the integral in (o) may therefore be thus expressed, without any
loss of rigour:
$$\int_{\alpha_n}^{\alpha_{n+1}} d\alpha \,
{\sc n}_\alpha {\sc f}_\alpha
= s_{n,0} \Delta_{n,0} + \cdots + s_{n,k} \Delta_{n,k},
\eqno {\rm (q)}$$
in which
$$\Delta_{n,i} = {\sc f}_{\alpha'_{n,i}} - {\sc f}_{\alpha_n};
\eqno {\rm (r)}$$
so that $\Delta_{n,i}$ is a finite difference of the function
${\sc f}_\alpha$, corresponding to the finite difference
$\alpha'_{n,i} - \alpha_n$ of the variable~$\alpha$, which latter
difference is less than $\alpha_{n+1} - \alpha_n$, and therefore
less than the finite constant~${\rm b}$ of the last article. The
theorem (q) conducts immediately to the following,
$$\int_{\beta^{-1} \alpha_n}^{\beta^{-1} \alpha_{n+1}} d\alpha \,
{\sc n}_{\beta \alpha} {\sc f}_\alpha
= \beta^{-1} ( s_{n,0} \delta_{n,0} + \cdots + s_{n,k} \delta_{n,k} ),
\eqno {\rm (s)}$$
in which
$$\delta_{n,i}
= {\sc f}_{\beta^{-1} \alpha'_{n,i}}
- {\sc f}_{\beta^{-1} \alpha_n};
\eqno {\rm (t)}$$
so that, if $\beta$ be large, $\delta_{n,i}$ is small, being the
difference of the function~${\sc f}_\alpha$ corresponding to a
difference of the variable~$\alpha$, which latter difference is
less than $\beta^{-1} {\rm b}$. Let $\pm \delta_n$ be the
greatest of the $k + 1$ differences
$\delta_{n,0},\ldots \, \delta_{n,k}$,
or let it be equal to one of those differences and not exceeded
by any other, abstraction being made of sign; then, since the
$k + 1$ factors $s_{n,0},\ldots \, s_{n,k}$ are alternately
positive and negative, or negative and positive, the numerical
value of the integral (s) cannot exceed that of the expression
$$\pm \beta^{-1} ( s_{n,0} - s_{n,1} + s_{n,2}
- \cdots + (-1)^k s_{n,k} ) \delta_n.
\eqno {\rm (u)}$$
But, by the definition (l) of $s_{n,i}$, and by the limits
$\pm {\rm c}$ of value of the finite function ${\sc n}_\alpha$,
we have
$$\pm s_{n,i} \ngt (\alpha_{n,i+1} - \alpha_{n,i}) {\rm c};
\eqno {\rm (v)}$$
therefore
$$\pm ( s_{n,0} - s_{n,1} + \cdots + (-1)^k s_{n,k} )
\ngt (\alpha_{n+1} - \alpha_n) {\rm c};
\eqno {\rm (w)}$$
and the following rigorous expression for the integral (s)
results:
$$\int_{\beta^{-1} \alpha_n}^{\beta^{-1} \alpha_{n+1}} d\alpha \,
{\sc n}_{\beta \alpha} {\sc f}_\alpha
= \theta_n \beta^{-1} (\alpha_{n+1} - \alpha_n) {\rm c} \delta_n;
\eqno {\rm (x)}$$
$\theta_n$ being a factor which cannot exceed the limits $\pm 1$.
Hence, if we change successively $n$ to
$n + 1$, $n + 2,\ldots$ $n + m - 1$,
and add together all the results, we obtain this other rigorous
expression, for the integral of the product
${\sc n}_{\beta \alpha} {\sc f}_\alpha$,
extended from $\alpha = \beta^{-1} \alpha_n$ to
$\alpha = \beta^{-1} \alpha_{n+m}$:
$$\int_{\beta^{-1} \alpha_n}^{\beta^{-1} \alpha_{n+m}} d\alpha \,
{\sc n}_{\beta \alpha} {\sc f}_\alpha
= \theta \beta^{-1} (\alpha_{n+m} - \alpha_n) {\rm c} \delta;
\eqno {\rm (y)}$$
in which $\delta$ is the greatest of the $m$ quantities
$\delta_n, \delta_{n+1},\ldots$, or is equal to one of those
quantities, and is not exceeded by any other; and $\theta$ cannot
exceed $\pm 1$. By taking $\beta$ sufficiently large, and
suitably choosing the indices $n$ and $n + m$, we may make the
limits of integration in the formula (y) approach as nearly as we
please to any given finite values, $a$ and $b$; while, in the
second member of that formula, the factor
$\beta^{-1} (\alpha_{n+m} - \alpha_n)$
will tend to become the finite quantity $b - a$, and
$\theta {\rm c}$ cannot exceed the finite limits $\pm {\rm c}$;
but the remaining factor~$\delta$ will tend indefinitely to $0$,
as $\beta$ increases without limit, because it is the difference
between two values of the function ${\sc f}_\alpha$,
corresponding to two values of the variable~$\alpha$ of which the
difference diminishes indefinitely. Passing then to the limit
$\beta = \infty$, we have, with the same rigour as before:
$$\int_a^b d\alpha \, {\sc n}_{\infty \alpha} {\sc f}_\alpha = 0;
\eqno {\rm (z)}$$
which is the theorem that was announced at the end of the
preceding article. And although it has been here supposed that
the function ${\sc f}_\alpha$ receives no sudden change of value,
between the limits of integration; yet we see that if this
function receive any finite number of such sudden changes between
those limits, but vary gradually in value between any two such
changes, the foregoing demonstration may be applied to each
interval of gradual variation of value separately; and the
theorem (z) will still hold good.
\bigbreak
[5.]
This theorem (z) may be thus written:
$$\lim_{\beta = \infty} \int_a^b d\alpha \,
{\sc n}_{\beta \alpha} {\sc f}_{\alpha}
= 0;
\eqno {\rm (a')}$$
and we may easily deduce from it the following:
$$\lim_{\beta = \infty} \int_a^b d\alpha \,
{\sc n}_{\beta (\alpha - x)} {\sc f}_{\alpha}
= 0;
\eqno {\rm (b')}$$
the function ${\sc f}_\alpha$ being here also finite, within the
extent of the integration, and $x$ being independent of $\alpha$
and $\beta$. For the reasonings of the last article may easily
be adapted to this case; or we may see, from the definitions in
article [3.], that if the function ${\sc n}_\alpha$ have the
properties there supposed, then ${\sc n}_{\alpha - x}$ will also
have those properties. In fact, if ${\sc n}_\alpha$ be always
comprised between given finite limits, then
${\sc n}_{\alpha - x}$ will be so too; and we shall have, by (f),
$$\int_0^\alpha d\alpha \, {\sc n}_{\alpha - x}
= \int_{-x}^{\alpha - x} d\alpha \, {\sc n}_\alpha
= {\sc m}_{\alpha - x} - {\sc m}_{-x};
\eqno {\rm (c')}$$
in which ${\sc m}_{-x}$ is finite, because the suppositions of
the third article oblige ${\sc m}_\alpha$ to be always comprised
between the limits ${\rm a} \pm {\rm b} {\rm c}$; so that the
equation
$$\int_0^\alpha d\alpha \, {\sc n}_{\alpha - x}
= {\rm a} - {\sc m}_{-x},
\eqno {\rm (d')}$$
which is of the form (g), has infinitely many real roots, of the
form
$$\alpha = x + \alpha_n,
\eqno {\rm (e')}$$
and therefore of the kind assumed in the two last articles. Let
us now examine what happens, when, in the first member of the
formula (b${}'$), we substitute, instead of the finite factor
${\sc f}_\alpha$, an expression such as
$(\alpha - x)^{-1} f_\alpha$,
which becomes infinite between the limits of integration, the
value of $x$ being supposed to be comprised between those limits,
and the function $f_\alpha$ being finite between them. That is,
let us inquire whether the integral
$$\int_a^b d\alpha \, {\sc n}_{\beta (\alpha - x)}
(\alpha - x)^{-1} f_\alpha,
\eqno {\rm (f')}$$
(in which $x > a$, $< b$), tends to any and to what finite and
determined limit, as $\beta$ tends to become infinite.
In this inquiry, the theorem (b${}'$) shows that we need only
attend to those values of $\alpha$ which are extremely near to
$x$, and are for example comprised between the limits
$x \mp \epsilon$, the quantity~$\epsilon$ being small. To
simplify the question, we shall suppose that for such values of
$\alpha$, the function $f_\alpha$ varies gradually in value; we
shall also suppose that ${\sc n}_0 = 0$, and that
${\sc n}_\alpha \alpha^{-1}$ tends to a finite limit as $\alpha$
tends to $0$, whether this be by decreasing or by increasing;
although the limit thus obtained, for the case of infinitely
small and positive values of $\alpha$, may possibly differ from
that which corresponds to the case of infinitely small and
negative values of that variable, on account of the discontinuity
which the function ${\sc n}_\alpha$ may have. We are then to
investigate, with the help of these suppositions, the value of
the double limit:
$$\lim_{\epsilon = 0} \mathbin{.} \lim_{\beta = \infty} \mathbin{.}
\int_{x - \epsilon}^{x + \epsilon} d\alpha \,
{\sc n}_{\beta (\alpha - x)} (\alpha - x)^{-1} f_\alpha;
\eqno {\rm (g')}$$
this notation being designed to suggest, that we are first to
assume a small but not evanescent value of $\epsilon$, and a
large but not infinite value of $\beta$, and to effect the
integration, or conceive it effected, with these assumptions;
then, retaining the same value of $\epsilon$, make $\beta$ larger
and larger without limit; and then at last suppose $\epsilon$ to
tend to $0$, unless the result corresponding to an infinite value
of $\beta$ shall be found to be independent of $\epsilon$. Or,
introducing two new quantities $y$ and $\eta$, determined by the
definitions
$$y = \beta (\alpha - x),\quad \eta = \beta \epsilon,
\eqno {\rm (h')}$$
and eliminating $\alpha$ and $\beta$ by means of these, we are
led to seek the value of the double limit following:
$$\lim_{\epsilon = 0} \mathbin{.} \lim_{\eta = \infty} \mathbin{.}
\int_{-\eta}^\eta dy \,
{\sc n}_y y^{-1} f_{x + \epsilon \eta^{-1} y};
\eqno {\rm (i')}$$
in which $\eta$ tends to $\infty$, before $\epsilon$ tends to
$0$. It is natural to conclude that since the sought limit
(g${}'$) can be expressed under the form (i${}'$), it must be
equivalent to the product
$$f_x \times \int_{-\infty}^\infty dy \, {\sc n}_y y^{-1};
\eqno {\rm (k')}$$
and in fact it will be found that this equivalence holds good;
but before finally adopting this conclusion, it is proper to
consider in detail some difficulties which may present
themselves.
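The passage from (g${}'$) to (i${}'$) by the substitution (h${}'$) can be verified at finite $\beta$ and $\epsilon$: since $y = \beta (\alpha - x)$ and $\eta = \beta \epsilon$, the two integrals are equal exactly, before any limit is taken. A numerical sketch, with ${\sc n} = \sin$ and an arbitrarily chosen $f$:

```python
import math

def trapezoid(f, a, b, n=200000):
    # composite trapezoidal rule for the integral of f over [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def sinc(t):                        # sin(t)/t, continuous at t = 0
    return 1.0 if t == 0.0 else math.sin(t) / t

f = lambda a: math.exp(-a * a)      # an arbitrarily chosen finite function
x, eps, beta = 0.5, 0.1, 50.0
eta = beta * eps

# (g'): the integral in alpha over [x - eps, x + eps]
lhs = trapezoid(lambda a: beta * sinc(beta * (a - x)) * f(a),
                x - eps, x + eps)
# (i'): the same integral after the substitution y = beta*(alpha - x)
rhs = trapezoid(lambda y: sinc(y) * f(x + eps * y / eta), -eta, eta)
```

The two quadratures agree to within their discretisation error, and both already lie near the product $\pi f_x$ that the double limit will be shown to give.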
\bigbreak
[6.]
Decomposing the function $f_{x + \epsilon \eta^{-1} y}$ into two
parts, of which one is independent of $y$, and is $= f_x$, while
the other part varies with $y$, although slowly, and vanishes
with that variable; it is clear that the formula (i${}'$) will be
decomposed into two corresponding parts, of which the first
conducts immediately to the expression (k${}'$); and we are now
to inquire whether the integral in this expression has a finite
and determinate value. Admitting the suppositions made in the
last article, the integral
$$\int_{-\zeta}^\zeta dy \, {\sc n}_y y^{-1}$$
will have a finite and determinate value, if $\zeta$ be finite
and determinate; we are therefore conducted to inquire whether
the integrals
$$\int_{-\infty}^{-\zeta} dy \, {\sc n}_y y^{-1},\quad
\int_\zeta^\infty dy \, {\sc n}_y y^{-1},$$
are also finite and determinate. The reasonings which we shall
employ for the second of these integrals, will also apply to the
first; and to generalize a little the question to which we are
thus conducted, we shall consider the integral
$$\int_a^\infty d\alpha \, {\sc n}_\alpha {\sc f}_\alpha;
\eqno {\rm (l')}$$
${\sc f}_\alpha$ being here supposed to denote any function of
$\alpha$ which remains always positive and finite, but decreases
continually and gradually in value, and tends indefinitely
towards $0$, while $\alpha$ increases indefinitely from some
given finite value which is not greater than $a$. Applying
to this integral (l${}'$) the principles of the fourth article,
and observing that we have now
${\sc f}_{\alpha'_{n,i}} < {\sc f}_{\alpha_n}$, $\alpha'_{n,i}$
being $> \alpha_n$, and $\alpha_n$ being assumed $\nlt a$; and
also that
$$\pm (s_{n,0} + s_{n,2} + \cdots )
= \mp (s_{n,1} + s_{n,3} + \cdots)
\ngt {\textstyle {1 \over 2}} {\rm b} {\rm c};
\eqno {\rm (m')}$$
we find
$$\pm \int_{\alpha_n}^{\alpha_{n+1}} d\alpha \,
{\sc n}_\alpha {\sc f}_\alpha
< {\textstyle {1 \over 2}} {\rm b} {\rm c}
({\sc f}_{\alpha_n} - {\sc f}_{\alpha_{n+1}});
\eqno {\rm (n')}$$
and consequently
$$\pm \int_{\alpha_n}^{\alpha_{n+m}} d\alpha \,
{\sc n}_\alpha {\sc f}_\alpha
< {\textstyle {1 \over 2}} {\rm b} {\rm c}
({\sc f}_{\alpha_n} - {\sc f}_{\alpha_{n+m}}).
\eqno {\rm (o')}$$
This latter integral is therefore finite and numerically less
than ${1 \over 2} {\rm b} {\rm c} \, {\sc f}_{\alpha_n}$,
however great the upper limit $\alpha_{n+m}$ may be; it tends
also to a determined value as $m$ increases indefinitely, because
the part which corresponds to values of $\alpha$ between any given
value of the form $\alpha_{n+m}$ and any other of the form
$\alpha_{n+m+p}$ is included between the limits
$\pm {1 \over 2} {\rm b} {\rm c} \, {\sc f}_{\alpha_{n+m}}$,
which limits approach indefinitely to each other and to $0$, as
$m$ increases indefinitely. And in the integral (l${}'$), if we
suppose the lower limit $a$ to lie between $\alpha_{n-1}$ and
$\alpha_n$, while the upper limit, instead of being infinite, is
at first assumed to be a large but finite quantity~$b$, lying
between $\alpha_{n+m}$ and $\alpha_{n+m+1}$, we shall only
thereby add to the integral (o${}'$) two parts, an initial and a
final, of which the first is evidently finite and determinate,
while the second is easily proved to tend indefinitely to $0$ as
$m$ increases without limit. The integral (l${}'$) is therefore
itself finite and determined, under the conditions above supposed,
which are satisfied, for example, by the function
${\sc f}_\alpha = \alpha^{-1}$, if $a$ be $> 0$. And since the
suppositions of the last article render also the integral
$$\int_0^a d\alpha \, {\sc n}_\alpha \alpha^{-1}$$
determined and finite, if the value of $a$ be such, we see that
with these suppositions we may write
$$\varpi^\backprime
= \int_0^\infty d\alpha \, {\sc n}_\alpha \alpha^{-1},
\eqno {\rm (p')}$$
$\varpi^\backprime$ being itself a finite and determined
quantity. By reasonings almost the same we are led to the
analogous formula
$$\varpi^{\backprime\backprime}
= \int_{-\infty}^0 d\alpha \, {\sc n}_\alpha \alpha^{-1};
\eqno {\rm (q')}$$
and finally to the result
$$\varpi = \varpi^\backprime + \varpi^{\backprime\backprime}
= \int_{-\infty}^\infty d\alpha \, {\sc n}_\alpha \alpha^{-1};
\eqno {\rm (r')}$$
in which
$\varpi^{\backprime\backprime}$ and $\varpi$ are also finite and
determined. The product (k${}'$) is therefore itself determinate
and finite, and may be represented by $\varpi f_x$.
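For the particular choice ${\sc n}_\alpha = \sin \alpha$, which satisfies all the suppositions above, the constant $\varpi$ of (r${}'$) can be estimated numerically. Since $\int_0^X d\alpha \, \sin \alpha \, \alpha^{-1}$ oscillates about its limit with amplitude of order $X^{-1}$, averaging the cutoffs $X$ and $X + \pi$ damps the leading oscillation; the cutoff and step size are arbitrary choices.

```python
import math

def trapezoid(f, a, b, n):
    # composite trapezoidal rule for the integral of f over [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def sinc(t):                        # N_alpha / alpha for N = sin
    return 1.0 if t == 0.0 else math.sin(t) / t

X = 200.0
si = lambda up: trapezoid(sinc, 0.0, up, 400000)
half = 0.5 * (si(X) + si(X + math.pi))   # estimate of varpi' = pi/2
varpi = 2.0 * half                       # varpi'' = varpi' by symmetry
```

The estimate reproduces $\varpi = \pi$ to within about $10^{-4}$, in agreement with the value obtained below from {\sc Fourier's} theorem.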
\bigbreak
[7.]
We are next to introduce, in (i${}'$), the variable part of the
function~$f$, namely
$$f_{x + \epsilon \eta^{-1} y} - f_x,$$
which varies from $f_{x - \epsilon}$ to $f_{x + \epsilon}$, while
$y$ varies from $-\eta$ to $+\eta$, and in which $\epsilon$ may
be any quantity $> 0$. And since it is clear, that under the
conditions assumed in the fifth article,
$$\lim_{\epsilon = 0} \mathbin{.} \lim_{\eta = \infty} \mathbin{.}
\int_{-\zeta}^\zeta dy \,
{\sc n}_y y^{-1} (f_{x + \epsilon \eta^{-1} y} - f_x)
= 0,
\eqno {\rm (s')}$$
if $\zeta$ be any finite and determined quantity, however large,
we are conducted to examine whether this double limit vanishes
when the integration is made to extend from $y = \zeta$ to
$y = \eta$. It is permitted to suppose that $f_\alpha$
continually increases, or continually decreases, from
$\alpha = x$ to $\alpha = x + \epsilon$; let us therefore
consider the integral
$$\int_\zeta^\eta d\alpha \,
{\sc n}_\alpha {\sc f}_\alpha {\sc g}_\alpha,
\eqno {\rm (t')}$$
in which the function ${\sc f}_\alpha$ decreases, while
${\sc g}_\alpha$ increases, but both are positive and finite,
within the extent of the integration.
By reasonings similar to those of the fourth article, we find
under these conditions,
$$\pm \int_{\alpha_n}^{\alpha_{n+1}} d\alpha \,
{\sc n}_\alpha {\sc f}_\alpha {\sc g}_\alpha
< {\rm b} {\rm c}
( {\sc f}_{\alpha_n} {\sc g}_{\alpha_{n+1}}
- {\sc f}_{\alpha_{n+1}} {\sc g}_{\alpha_n} );
\eqno {\rm (u')}$$
and therefore
$$\left. \eqalign{
\pm {1 \over {\rm b} {\rm c}}
\int_{\alpha_n}^{\alpha_{n+m}} d\alpha \,
{\sc n}_\alpha {\sc f}_\alpha {\sc g}_\alpha
&< {\sc f}_{\alpha_{n+m-1}} {\sc g}_{\alpha_{n+m}}
- {\sc f}_{\alpha_{n+1}} {\sc g}_{\alpha_n} \cr
&\mathrel{\phantom{<}}
+ ({\sc f}_{\alpha_n} - {\sc f}_{\alpha_{n+2}})
{\sc g}_{\alpha_{n+1}}
+ ({\sc f}_{\alpha_{n+2}} - {\sc f}_{\alpha_{n+4}})
{\sc g}_{\alpha_{n+3}}
+ \hbox{\&c.} \cr
&\mathrel{\phantom{<}}
+ ({\sc f}_{\alpha_{n+1}} - {\sc f}_{\alpha_{n+3}})
{\sc g}_{\alpha_{n+2}}
+ ({\sc f}_{\alpha_{n+3}} - {\sc f}_{\alpha_{n+5}})
{\sc g}_{\alpha_{n+4}}
+ \hbox{\&c.} \cr}
\right\}
\eqno {\rm (v')}$$
This inequality will still subsist, if we increase the second
member by changing, in the positive products on the second and
third lines, the factors ${\sc g}$ to their greatest value
${\sc g}_{\alpha_{n+m}}$; and, after adding the results, suppress
the three negative terms which remain in the three lines of this
expression, and change the functions ${\sc f}$, in the first and
third lines, to their greatest value ${\sc f}_{\alpha_n}$.
Hence,
$$\pm \int_{\alpha_n}^{\alpha_{n+m}} d\alpha \,
{\sc n}_\alpha {\sc f}_\alpha {\sc g}_\alpha
< 3 {\rm b} {\rm c} \, {\sc f}_{\alpha_n} {\sc g}_{\alpha_{n+m}};
\eqno {\rm (w')}$$
this integral will therefore ultimately vanish, if the product of
the greatest values of the functions ${\sc f}$ and ${\sc g}$ tend
to the limit~$0$. Thus, if we make
$${\sc f}_\alpha = \alpha^{-1},\quad
{\sc g}_\alpha = \pm (f_{x + \epsilon \eta^{-1} \alpha} - f_x),$$
the upper sign being taken when $f_\alpha$ increases from
$\alpha = x$ to $\alpha = x + \epsilon$; and if we suppose that
$\zeta$ and $\eta$ are of the forms $\alpha_n$ and
$\alpha_{n+m}$; we see that the integral (t${}'$) is numerically
less than
$3 {\rm b} {\rm c} \, \alpha_n^{-1} (f_{x + \epsilon} - f_x)$,
and therefore that it vanishes at the limit $\epsilon = 0$. It
is easy to see that the same conclusion holds good, when we
suppose that $\eta$ does not coincide with any quantity of the
form $\alpha_{n+m}$, and where the limits of integration are
changed to $-\eta$ and $-\zeta$. We have therefore, rigorously,
$$\lim_{\epsilon = 0} \mathbin{.} \lim_{\eta = \infty} \mathbin{.}
\int_{-\eta}^\eta dy \,
{\sc n}_y y^{-1} (f_{x + \epsilon \eta^{-1} y} - f_x)
= 0,
\eqno {\rm (x')}$$
notwithstanding the great and ultimately infinite extent over
which the integration is conducted. The variable part of the
function~$f$ may therefore be suppressed in the double limit
(i${}'$), without any loss of accuracy; and that limit is found
to be exactly equal to the expression (k${}'$); that is, by the
last article, to the determined product $\varpi f_x$. Such,
therefore, is the value of the limit (g${}'$), from which
(i${}'$) was derived by the transformation (h${}'$); and such
finally is the limit of the integral (f${}'$), proposed for
investigation in the fifth article. We have, then, proved that
under the conditions of that article,
$$\lim_{\beta = \infty} \mathbin{.} \int_a^b d\alpha \,
{\sc n}_{\beta (\alpha - x)} (\alpha - x)^{-1} f_\alpha
= \varpi f_x;
\eqno {\rm (y')}$$
and consequently that the arbitrary but finite and gradually
varying function $f_x$, between the limits $x = a$, $x = b$, may
be transformed as follows:
$$f_x = \varpi^{-1} \int_a^b d\alpha \,
{\sc n}_{\infty (\alpha - x)} (\alpha - x)^{-1} f_\alpha;
\eqno {\rm (z')}$$
which is a result of the kind denoted by (d) in the second
article, and includes the theorem (a) of {\sc Fourier}. For
all the suppositions made in the foregoing articles, respecting
the form of the function~${\sc n}$, are satisfied by assuming
this function to be the sine of the variable on which it depends;
and then the constant~$\varpi$, determined by the formula
(r${}'$), becomes coincident with $\pi$, that is, with the ratio
of the circumference to the diameter of a circle, or with the
least positive root of the equation
$${\sin x \over x} = 0.$$
\bigbreak
[8.]
The known theorem just alluded to, namely, that the definite
integral (r${}'$) becomes $= \pi$, when
${\sc n}_\alpha = \sin \alpha$, may be demonstrated in the
following manner. Let
$$\eqalign{
{\sc a}
&= \int_0^\infty d\alpha \, {\sin \beta \alpha \over \alpha};\cr
{\sc b}
&= \int_0^\infty d\alpha \, {\cos \beta \alpha \over 1 + \alpha^2};\cr}$$
then these two definite integrals are connected with each other
by the relation
$${\sc a} = \left( \int_0^\beta d\beta - {d \over d\beta} \right) {\sc b},$$
because
$$\int_0^\beta d\beta \, {\sc b}
= \int_0^\infty d\alpha \,
{\sin \beta \alpha \over \alpha (1 + \alpha^2)},$$
$$-{d \over d\beta} {\sc b}
= \int_0^\infty d\alpha \,
{\alpha \sin \beta \alpha \over 1 + \alpha^2};$$
and all these integrals, by the principles of the foregoing
articles, receive determined and finite (that is, not infinite)
values, whatever finite or infinite value may be assigned to
$\beta$. But for all values of $\beta > 0$, the value of
${\sc a}$ is constant; therefore, for all such values of $\beta$,
the relation between ${\sc a}$ and ${\sc b}$ gives, by
integration,
$$e^{-\beta}
\left\{
\left( \int_0^\beta \, d\beta + 1 \right) {\sc b} - {\sc a}
\right\}
= \hbox{const.};$$
and this constant must be $= 0$, because the factor of
$e^{-\beta}$ does not tend to become infinite with $\beta$. That
factor is therefore itself $= 0$, so that we have
$${\sc a}
= \left( \int_0^\beta \, d\beta + 1 \right) {\sc b},
\hbox{ if } \beta > 0.$$
Comparing the two expressions for ${\sc a}$, we find
$${\sc b} + {d \over d\beta} {\sc b} = 0,
\hbox{ if } \beta > 0;$$
and therefore, for all such values of $\beta$,
$${\sc b} e^\beta = \hbox{const.}$$
The constant in this last result is easily proved to be equal to
the quantity ${\sc a}$, by either of the two expressions already
established for that quantity; we have therefore
$${\sc b} = {\sc a} e^{-\beta},$$
however little the value of $\beta$ may exceed $0$; and because
${\sc b}$ tends to the limit
$\displaystyle {\pi \over 2}$ as $\beta$ tends to $0$, we find
finally, for all values of $\beta$ greater than $0$,
$${\sc a} = {\pi \over 2},\quad
{\sc b} = {\pi \over 2} e^{-\beta}.$$
These values, and the result
$$\int_{-\infty}^\infty d\alpha \, {\sin \alpha \over \alpha}
= \pi,$$
to which they immediately conduct, have long been known; and the
first relation, above mentioned, between the integrals ${\sc a}$
and ${\sc b}$, has been employed by {\sc Legendre} to deduce the
former integral from the latter; but it seemed worth while to
indicate a process by which that relation may be made to conduct
to the values of both those integrals, without the necessity of
expressly considering the second differential coefficient of
${\sc b}$ relative to $\beta$, which coefficient presents itself
at first under an indeterminate form.
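[The values ${\sc a} = \pi/2$ and ${\sc b} = (\pi/2) e^{-\beta}$ admit of a direct numerical check. The following Python sketch is an editorial addition, not part of Hamilton's text; the truncation points and the step sizes of Simpson's rule are choices of the editor. Each improper integral is replaced by a finite one, cut off where the remaining tail is insensible, and compared with the closed form.]

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule with n (even) subintervals of [a, b].
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

# A = integral of sin(beta*a)/a, a from 0 to infinity; the substitution
# t = beta*a shows A is independent of beta > 0, and equals pi/2.
sinc = lambda t: math.sin(t) / t if t else 1.0
A = simpson(sinc, 0.0, 2000 * math.pi, 80000)

# B = integral of cos(beta*a)/(1+a^2), a from 0 to infinity, at beta = 1;
# the neglected tail beyond a = 1000 is of the order 1e-6 or less.
beta = 1.0
B = simpson(lambda a: math.cos(beta * a) / (1 + a * a), 0.0, 1000.0, 200000)

print(abs(A - math.pi / 2))                      # small (tail truncation)
print(abs(B - (math.pi / 2) * math.exp(-beta)))  # very small
```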
\bigbreak
[9.]
The connexion of the formula (z${}'$) with {\sc Fourier's}
theorem (a), will be more distinctly seen, if we introduce a new
function ${\sc p}_\alpha$ defined by the condition
$${\sc n}_\alpha = \int_0^\alpha d\alpha \, {\sc p}_\alpha,
\eqno {\rm (a'')}$$
which is consistent with the suppositions already made respecting
the function ${\sc n}_\alpha$. According to those suppositions
the new function ${\sc p}_\alpha$ is not necessarily continuous,
nor even always finite, since its integral ${\sc n}_\alpha$ may
be discontinuous; but ${\sc p}_\alpha$ is supposed to be finite
for small values of $\alpha$, in order that ${\sc n}_\alpha$
may vary gradually for such values, and may bear a finite ratio
to $\alpha$. The value of the first integral of ${\sc p}_\alpha$
is supposed to be always comprised between given finite limits,
so as never to be numerically greater than $\pm {\rm c}$; and the
second integral,
$${\sc m}_\alpha
= \left( \int_0^\alpha d\alpha \right)^2 {\sc p}_\alpha,
\eqno {\rm (b'')}$$
becomes infinitely often equal to a given constant, ${\rm a}$,
for values of $\alpha$ which extend from negative to positive
infinity, and are such that the interval between any one and the
next following is never greater than a given finite constant,
${\rm b}$. With these suppositions respecting the otherwise
arbitrary function ${\sc p}_\alpha$, the theorems (z) and
(z${}'$) may be expressed as follows:
$$\lim_{\beta = \infty} \mathbin{.}
\int_a^b d\alpha \,
\left( \int_0^{\beta \alpha} d\gamma \, {\sc p}_\gamma \right)
f_\alpha
= 0;
\eqno {\sc (a)}$$
and
$$f_x = \varpi^{-1} \int_a^b d\alpha \, \int_0^\infty d\beta \,
{\sc p}_{\beta (\alpha - x)} f_\alpha;
\quad ( x > a, \enspace < b )
\eqno {\sc (b)}$$
$\varpi$ being determined by the equation
$$\varpi
= \int_{-\infty}^\infty d\alpha \,
\int_0^1 d\beta \, {\sc p}_{\beta \alpha}.
\eqno {\rm (c'')}$$
Now, by making
$${\sc p}_\alpha = \cos \alpha,$$
(a supposition which satisfies all the conditions above assumed),
we find, as before,
$$\varpi = \pi,$$
and the theorem ({\sc b}) reduces itself to the less general
formula (a), so that it includes the theorem of {\sc Fourier}.
\bigbreak
[10.]
If we suppose that $x$ coincides with one of the limits, $a$ or
$b$, instead of being included between them, we find easily, by
the foregoing analysis,
$$f_a = \varpi^{\backprime -1}
\int_a^b d\alpha \, \int_0^\infty d\beta \,
{\sc p}_{\beta (\alpha - a)} f_\alpha;
\eqno {\rm (d'')}$$
$$f_b = \varpi^{\backprime\backprime -1}
\int_a^b d\alpha \, \int_0^\infty d\beta \,
{\sc p}_{\beta (\alpha - b)} f_\alpha;
\eqno {\rm (e'')}$$
in which
$$\varpi^\backprime
= \int_0^\infty d\alpha \, \int_0^1 d\beta \, {\sc p}_{\beta \alpha};
\eqno {\rm (f'')}$$
$$\varpi^{\backprime\backprime}
= \int_{-\infty}^0 d\alpha \, \int_0^1 d\beta \, {\sc p}_{\beta \alpha};
\eqno {\rm (g'')}$$
so that, as before,
$$\varpi = \varpi^\backprime + \varpi^{\backprime\backprime}.$$
Finally, when $x$ is outside the limits $a$ and $b$, the double
integral in ({\sc b}) vanishes; so that
$$0 = \int_a^b d\alpha \, \int_0^\infty d\beta \,
{\sc p}_{\beta (\alpha - x)} f_\alpha,
\hbox{ if } x < a \hbox{ or } > b.
\eqno {\rm (h'')}$$
And the foregoing theorems will still hold good, if the
function~$f_\alpha$ receive any number of sudden changes of
value, between the limits of integration, provided that it remain
finite between them; except that for those very values
$\alpha^\backprime$ of the variable~$\alpha$, for which the
finite function $f_\alpha$ receives any such sudden variation, so
as to become $= f^\backprime$ for values of $\alpha$ infinitely
little greater than $\alpha^\backprime$, after having been
$= f^{\backprime\backprime}$ for values infinitely little less
than $\alpha^\backprime$, we shall have, instead of ({\sc b}),
the formula
$$\omega^\backprime f^\backprime
+ \omega^{\backprime\backprime} f^{\backprime\backprime}
= \int_a^b d\alpha \, \int_0^\infty d\beta \,
{\sc p}_{\beta(\alpha - \alpha^\backprime)} f_\alpha.
\eqno {\rm (i'')}$$
\bigbreak
[11.]
If ${\sc p}_\alpha$ be not only finite for small values of
$\alpha$, but also vary gradually for such values, then, whether
$\alpha$ be positive or negative, we shall have
$$\lim_{\alpha = 0} \mathbin{.} {\sc n}_\alpha \alpha^{-1} = {\sc p}_0;
\eqno {\rm (k'')}$$
and if the equation
$${\sc n}_{\alpha - x} = 0
\eqno {\rm (l'')}$$
have no real root~$\alpha$, except the root $\alpha = x$, between
the limits $a$ and $b$, nor any which coincides with either of
those limits, then we may change $f_\alpha$ to
$\displaystyle {(\alpha - x) {\sc p}_0 \over {\sc n}_{\alpha - x}} f_\alpha$,
in the formula (z${}'$), and we shall have the expression:
$$f_x = \varpi^{-1} {\sc p}_0 \int_a^b d\alpha \,
{\sc n}_{\infty (\alpha - x)}
{\sc n}^{-1}_{\alpha - x} f_\alpha.
\eqno {\rm (m'')}$$
Instead of the infinite factor in the index, we may substitute
any large number, for example, an uneven integer, and take the
limit with respect to it; we may, therefore, write
$$f_x = \varpi^{-1} {\sc p}_0 \lim_{n = \infty}
\int_a^b d\alpha \,
{\displaystyle
\int_0^{(2n+1)(\alpha - x)} d\alpha \, {\sc p}_\alpha
\over \displaystyle
\int_0^{\alpha - x} d\alpha \, {\sc p}_\alpha}
f_\alpha.
\eqno {\rm (n'')}$$
Let
$$\int_{(2n-1) \alpha}^{(2n+1) \alpha} d\alpha \, {\sc p}_\alpha
= {\sc q}_{\alpha,n} \int_0^\alpha d\alpha \, {\sc p}_\alpha;
\eqno {\rm (o'')}$$
then
$$1 + {\sc q}_{\alpha, 1} + {\sc q}_{\alpha, 2} + \cdots
+ {\sc q}_{\alpha, n}
= {\displaystyle
\int_0^{(2n+1) \alpha} d\alpha \, {\sc p}_\alpha
\over \displaystyle
\int_0^\alpha d\alpha \, {\sc p}_\alpha},
\eqno {\rm (p'')}$$
and the formula (n${}''$) becomes
$$f_x = \varpi^{-1} {\sc p}_0
\left(
\int_a^b d\alpha \, f_\alpha
+ \sum\nolimits_{(n) 1}^{\phantom{(n)} \infty}
\int_a^b d\alpha \, {\sc q}_{\alpha - x, n} f_\alpha
\right);
\eqno {\sc (c)}$$
in which development, the terms corresponding to large values of
$n$ are small. For example, when ${\sc p}_\alpha = \cos \alpha$,
then
$$\varpi = \pi,\quad
{\sc p}_0 = 1,\quad
{\sc q}_{\alpha,n} = 2 \cos 2 n \alpha,$$
and the theorem ({\sc c}) reduces itself to the following known
result:
$$f_x = \pi^{-1}
\left(
\int_a^b d\alpha \, f_\alpha
+ 2 \sum\nolimits_{(n) 1}^{\phantom{(n)} \infty}
\int_a^b d\alpha \, \cos (2n\alpha - 2nx) f_\alpha
\right);
\eqno {\rm (q'')}$$
in which it is supposed that $x > a$, $x < b$, and that
$b - a \ngt \pi$, in order that $\alpha - x$ may be comprised
between the limits $\pm \pi$, for the whole extent of the
integration; and the function $f_\alpha$ is supposed to remain
finite within the same extent, and to vary gradually in value, at
least for values of the variable~$\alpha$ which are extremely
near to $x$. The result (q${}''$) may also be thus written:
$$f_x = \pi^{-1} \sum\nolimits_{(n) -\infty}^{\phantom{(n) -} \infty}
\int_a^b d\alpha \, \cos (2n\alpha - 2nx) f_\alpha;
\eqno {\rm (r'')}$$
and if we write
$$\alpha = {\beta \over 2},\quad
x = {y \over 2},\quad
f_{y \over 2} = \phi_y,$$
it becomes
$$\phi_y = {1 \over 2\pi} \sum\nolimits_{(n) -\infty}^{\phantom{(n) -} \infty}
\int_{2a}^{2b} d\beta \, \cos (n\beta - ny) \phi_\beta,
\eqno {\rm (s'')}$$
the interval between the limits of integration relatively to
$\beta$ being now not greater than $2\pi$, and the value of $y$
being included between those limits. For example, we may assume
$$2a = -\pi,\quad 2b = \pi,$$
and then we shall have, by writing $\alpha$, $x$, and $f$,
instead of $\beta$, $y$, and $\phi$,
$$f_x = {1 \over 2\pi} \sum\nolimits_{(n) -\infty}^{\phantom{(n) -} \infty}
\int_{-\pi}^{\pi} d\alpha \, \cos (n\alpha - nx) f_\alpha,
\eqno {\rm (t'')}$$
in which $x > -\pi$, $x < \pi$. It is permitted to assume the
function $f_\alpha$ such as to vanish when $\alpha < 0$,
$> -\pi$; and then the formula (t${}''$) resolves itself into the
two following, which (with a slightly different notation) occur
often in the writings of {\sc Poisson}, as does also the formula
(t${}''$):
$${\textstyle {1 \over 2}} \int_0^\pi d\alpha \, f_\alpha
+ \sum\nolimits_{(n) 1}^{\phantom{(n)} \infty}
\int_0^\pi d\alpha \, \cos (n\alpha - nx) f_\alpha
= \pi f_x;
\eqno {\rm (u'')}$$
$${\textstyle {1 \over 2}} \int_0^\pi d\alpha \, f_\alpha
+ \sum\nolimits_{(n) 1}^{\phantom{(n)} \infty}
\int_0^\pi d\alpha \, \cos (n\alpha + nx) f_\alpha
= 0;
\eqno {\rm (v'')}$$
$x$ being here supposed $> 0$, but $< \pi$; and the
function~$f_\alpha$ being arbitrary, but finite, and varying
gradually, from $\alpha = 0$ to $\alpha = \pi$, or at least not
receiving any sudden change of value for any value~$x$ of the
variable~$\alpha$, to which the formula (u${}''$) is to be
applied. It is evident that the limits of integration in
(t${}''$) may be made to become $\mp l$, $l$ being any finite
quantity, by merely multiplying $n\alpha - nx$ under the sign
${\rm cos.}$, by
$\displaystyle {\pi \over l}$,
and changing the external factor
$\displaystyle {1 \over 2\pi}$ to
$\displaystyle {1 \over 2l}$;
and it is under this latter form that the theorem (t${}''$) is
usually presented by {\sc Poisson}: who has also remarked, that
the difference of the two series (u${}''$) and (v${}''$) conducts
to the expression first assigned by {\sc Lagrange}, for
developing an arbitrary function between finite limits, in a
series of sines of multiples of the variable on which it depends.
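[The theorem (t${}''$) lends itself to numerical illustration for a smooth function of period $2\pi$. The Python sketch below is an editorial addition; the choice $f_\alpha = e^{\cos \alpha}$ and the periodic trapezoidal quadrature are merely convenient assumptions. The sum is truncated at $|n| \le 25$, which suffices because the coefficients of so smooth a function diminish very rapidly.]

```python
import math

# f has period 2*pi and is smooth, so the series in (t'') converges rapidly.
f = lambda a: math.exp(math.cos(a))

M = 2000
h = 2 * math.pi / M
nodes = [-math.pi + i * h for i in range(M)]   # periodic trapezoidal rule

def term(n, x):
    # (1/(2*pi)) * integral over [-pi, pi] of cos(n*a - n*x) * f(a) da
    return sum(math.cos(n * a - n * x) * f(a) for a in nodes) * h / (2 * math.pi)

x = 0.7
S = sum(term(n, x) for n in range(-25, 26))
print(abs(S - f(x)))   # very small
```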
\bigbreak
[12.]
In general, in the formula (m${}''$), from which the theorem
({\sc c}) was derived, in order that $x$ may be susceptible of
receiving all values $> a$ and $< b$ (or at least all for which
the function $f_x$ receives no sudden change of value), it is
necessary, by the remark made at the beginning of the last
article, that the equation
$$\int_0^\alpha d\alpha \, {\sc p}_\alpha = 0,
\eqno {\rm (w'')}$$
should have no real root~$\alpha$ different from $0$, between the
limits $\mp (b - a)$. But it is permitted to suppose,
consistently with this restriction, that $a$ is $< 0$, and that
$b$ is $> 0$, while both are finite and determined; and then the
formula (m${}''$), or ({\sc c}) which is a consequence of it, may
be transformed so as to receive new limits of integration, which
shall approach as nearly as may be desired to negative and
positive infinity. In fact, by changing $\alpha$ to
$\lambda \alpha$, $x$ to $\lambda x$, and $f_{\lambda x}$ to
$f_x$, the formula ({\sc c}) becomes
$$f_x = \lambda \varpi^{-1} {\sc p}_0
\left(
\int_{\lambda^{-1} a}^{\lambda^{-1} b} d\alpha \, f_\alpha
+ \sum\nolimits_{(n) 1}^{\phantom{(n)} \infty}
\int_{\lambda^{-1} a}^{\lambda^{-1} b} d\alpha \,
{\sc q}_{\lambda \alpha - \lambda x, n} f_\alpha
\right);
\eqno {\rm (x'')}$$
in which $\lambda^{-1} a$ will be large and negative, while
$\lambda^{-1} b$ will be large and positive, if $\lambda$ be
small and positive, because we have supposed that $a$ is
negative, and $b$ positive; and the new variable~$x$ is only
obliged to be $> \lambda^{-1} a$ and $< \lambda^{-1} b$, if the
new function~$f_x$ be finite and vary gradually between these new
and enlarged limits. At the same time, the definition (o${}''$)
shows that
${\sc p}_0 {\sc q}_{\lambda \alpha - \lambda x, n}$
will tend indefinitely to become equal to
$2 {\sc p}_{2n\lambda (\alpha - x)}$;
in such a manner that
$$\lim_{\lambda = 0} \mathbin{.}
{{\sc p}_0 {\sc q}_{\lambda \alpha - \lambda x, n}
\over 2 {\sc p}_{2n\lambda (\alpha - x)}}
= 1,
\eqno {\rm (y'')}$$
at least if the function ${\sc p}$ be finite and vary gradually.
Admitting then that we may adopt the following ultimate
transformation of a sum into an integral, at least under the sign
$\displaystyle \int_{-\infty}^\infty d\alpha$,
$$\lim_{\lambda = 0} \mathbin{.} 2 \lambda
\left(
{\textstyle {1 \over 2}} {\sc p}_0
+ \sum\nolimits_{(n) 1}^{\phantom{(n)} \infty}
{\sc p}_{2n\lambda (\alpha - x)}
\right)
= \int_0^\infty d\beta \, {\sc p}_{\beta (\alpha - x)},
\eqno {\rm (z'')}$$
we shall have, as the limit of (x${}''$), this formula:
$$f_x = \varpi^{-1} \int_{-\infty}^\infty d\alpha \,
\int_0^\infty d\beta \, {\sc p}_{\beta (\alpha - x)} f_\alpha;
\eqno {\sc (d)}$$
which holds good for all real values of the variable~$x$, at
least under the conditions lately supposed, and may be regarded as
an extension of the theorem ({\sc b}), from finite to infinite
limits. For example, by making ${\sc p}$ a cosine, the theorem
({\sc d}) becomes
$$f_x = \pi^{-1} \int_{-\infty}^\infty d\alpha \,
\int_0^\infty d\beta \, \cos (\beta \alpha - \beta x) f_\alpha,
\eqno {\rm (a''')}$$
which is a more usual form than (a) for the theorem of
{\sc Fourier}. In general, the deduction in the present article,
of the theorem ({\sc d}) from ({\sc c}), may be regarded as a
verification of the analysis employed in this paper, because
({\sc d}) may also be obtained from ({\sc b}), by making the
limits of integration infinite; but the demonstration of the
theorem ({\sc b}) itself, in former articles, was perhaps more
completely satisfactory, besides that it involved fewer
suppositions; and it seems proper to regard the formula ({\sc d})
as only a limiting form of ({\sc b}).
\bigbreak
[13.]
This formula ({\sc d}) may also be considered as a limit in
another way, by introducing, under the sign of integration
relatively to $\beta$, a factor ${\sc f}_{k\beta}$ such that
$${\sc f}_0 = 1,\quad {\sc f}_\infty = 0,
\eqno {\rm (b''')}$$
in which $k$ is supposed positive but small, and the limit taken
with respect to it, as follows:
$$f_x = \lim_{k = 0} \mathbin{.} \varpi^{-1}
\int_{-\infty}^\infty d\alpha \,
\left(
\int_0^\infty d\beta \, {\sc p}_{\beta (\alpha - x)}
{\sc f}_{k\beta}
\right)
f_\alpha.
\eqno {\sc (e)}$$
It is permitted to suppose that the function ${\sc f}$ decreases
continually and gradually, at a finite and decreasing rate, from
$1$ to $0$, while the variable on which it depends increases from
$0$ to $\infty$; the first differential coefficient ${\sc f}'$
being thus constantly finite and negative, but constantly tending
to $0$, while the variable is positive and tends to $\infty$.
Then, by the suppositions already made respecting the function
${\sc p}$, if $\alpha - x$ and $k$ be each different from $0$, we
shall have
$$\int_0^\beta d\beta \, {\sc p}_{\beta (\alpha - x)} {\sc f}_{k\beta}
= {\sc f}_{k\beta} {\sc n}_{\beta (\alpha - x)} (\alpha - x)^{-1}
- k (\alpha - x)^{-1} \int_0^\beta d\beta \,
{\sc n}_{\beta (\alpha - x)} {\sc f}'_{k\beta};
\eqno {\rm (c''')}$$
and therefore, because ${\sc f}_\infty = 0$, while ${\sc n}$ is
always finite, the integral relative to $\beta$ in the formula
({\sc e}) may be thus expressed:
$$\int_0^\beta d\beta \, {\sc p}_{\beta (\alpha - x)} {\sc f}_{k\beta}
= (\alpha - x)^{-1} \psi_{k^{-1} (\alpha - x)},
\eqno {\rm (d''')}$$
the function~$\psi$ being assigned by the equation
$$\psi_\lambda = - \int_0^\infty d\gamma \,
{\sc n}_{\lambda \gamma} {\sc f}'_\gamma.
\eqno {\rm (e''')}$$
For any given value of $\lambda$, the value of this
function~$\psi$ is finite and determinate, by the principles of
the sixth article; and as $\lambda$ tends to $\infty$, the
function~$\psi$ tends to $0$, on account of the fluctuation of
${\sc n}$, and because ${\sc f}'$ tends to $0$, while $\gamma$
tends to $\infty$; the integral (d${}'''$) therefore tends to
vanish with $k$, if $\alpha$ be different from $x$; so that
$$\lim_{k = 0} \mathbin{.}
\int_0^\infty d\beta \, {\sc p}_{\beta (\alpha - x)} {\sc f}_{k\beta}
= 0, \hbox{ if } \alpha \neq x.
\eqno {\rm (f''')}$$
On the other hand, if $\alpha = x$, that integral tends to become
infinite, because we have, by (b${}'''$),
$$\lim_{k = 0} \mathbin{.}
{\sc p}_0 \int_0^\infty d\beta \, {\sc f}_{k\beta}
= \infty.
\eqno {\rm (g''')}$$
Thus, while the formula (d${}'''$) shows that the integral
relative to $\beta$ in ({\sc e}) is a homogeneous function of
$\alpha - x$ and $k$, of which the dimension is negative unity,
we see also, by (f${}'''$) and (g${}'''$), that this function is
such as to vanish or become infinite at the limit $k = 0$,
according as $\alpha - x$ is different from or equal to zero.
When the difference between $\alpha$ and $x$, whether positive or
negative, is very small and of the same order as $k$, the value
of the last mentioned integral (relative to $\beta$) varies very
rapidly with $\alpha$; and in this way of considering the
subject, the proof of the formula ({\sc e}) is made to depend on
the verification of the equation
$$\varpi^{-1} \int_{-\infty}^\infty d\lambda \,
\psi_\lambda \lambda^{-1} = 1.
\eqno {\rm (h''')}$$
But this last verification is easily effected; for when we
substitute the expression (e${}'''$) for $\psi_\lambda$, and
integrate first relatively to $\lambda$, we find, by (r${}'$),
$$\int_{-\infty}^\infty d\lambda \,
{\sc n}_{\lambda \gamma} \lambda^{-1} = \varpi;
\eqno {\rm (i''')}$$
it remains then to show that
$$- \int_0^\infty d\gamma \, {\sc f}'_\gamma = 1;
\eqno {\rm (k''')}$$
and this follows immediately from the conditions (b${}'''$). For
example, when ${\sc p}$ is a cosine, and ${\sc f}$ a negative
neperian exponential, so that
$${\sc p}_\alpha = \cos \alpha,\quad
{\sc f}_\alpha = e^{-\alpha},$$
then, making $\lambda = k^{-1} (\alpha - x)$, we have
$$\int_0^\infty d\beta \, e^{-k\beta} \cos (\beta \alpha - \beta x)
= (\alpha - x)^{-1} \psi_\lambda;$$
$$\psi_\lambda
= \int_0^\infty d\gamma \, e^{-\gamma} \sin \lambda \gamma
= {\lambda \over 1 + \lambda^2};$$
and
$$\varpi^{-1} \int_{-\infty}^\infty d\lambda \, \psi_\lambda \lambda^{-1}
= \pi^{-1} \int_{-\infty}^\infty {d\lambda \over 1 + \lambda^2}
= 1.$$
It is nearly thus that {\sc Poisson} has, in some of his
writings, demonstrated the theorem of {\sc Fourier}, after
putting it under a form which differs only slightly from the
following:
$$f_x = \pi^{-1} \lim_{k = 0} \int_{-\infty}^\infty d\alpha \,
\int_0^\infty \, d\beta \, e^{-k\beta}
\cos (\beta \alpha - \beta x) f_\alpha;
\eqno {\rm (l''')}$$
namely, by substituting for the integral relative to $\beta$ its
value
$${k \over k^2 + (\alpha - x)^2};$$
and then observing that, if $k$ be very small, this value is
itself very small, unless $\alpha$ be extremely near to $x$, so
that $f_\alpha$ may be changed to $f_x$; while, making
$\alpha = x + k \lambda$, and integrating relatively to $\lambda$
between limits indefinitely great, the factor by which this
function $f_x$ is multiplied in the second member of (l${}'''$),
is found to reduce itself to unity.
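[The two auxiliary evaluations employed here, namely $\psi_\lambda = \lambda/(1+\lambda^2)$ and the value $k/(k^2 + (\alpha - x)^2)$ of the integral relative to $\beta$, may be confirmed numerically. The Python sketch below is an editorial addition; the particular values of $\lambda$, $k$, $c$ and the truncation points are arbitrary, each integral being cut off where the exponential factor has become insensible.]

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule with n (even) subintervals of [a, b].
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

# psi(lam) = integral of exp(-g)*sin(lam*g), g from 0 to infinity,
#          = lam/(1 + lam^2); exp(-60) makes the neglected tail negligible.
lam = 2.0
psi = simpson(lambda g: math.exp(-g) * math.sin(lam * g), 0.0, 60.0, 60000)
print(abs(psi - lam / (1 + lam * lam)))     # very small

# integral of exp(-k*b)*cos(c*b), b from 0 to infinity, = k/(k^2 + c^2),
# the value quoted for the integral relative to beta in (l''').
k, c = 0.3, 0.7
kern = simpson(lambda b: math.exp(-k * b) * math.cos(c * b), 0.0, 250.0, 250000)
print(abs(kern - k / (k * k + c * c)))      # very small
```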
\bigbreak
[14.]
Again, the function ${\sc f}_\alpha$ retaining the same
properties as in the last article for positive values of
$\alpha$, and being further supposed to satisfy the condition
$${\sc f}_{-\alpha} = {\sc f}_\alpha,
\eqno {\rm (m''')}$$
while $k$ is still supposed to be positive and small, the formula
({\sc d}) may be presented in this other way, as the limit of the
result of two integrations, of which the first is to be effected
with respect to the variable~$\alpha$:
$$f_x = \lim_{k = 0} \mathbin{.} \varpi^{-1}
\int_0^\infty d\beta \, \int_{-\infty}^\infty d\alpha \,
{\sc f}_{k\alpha} {\sc p}_{\beta (\alpha - x)} f_\alpha.
\eqno {\sc (f)}$$
Now it often happens that if the function $f_\alpha$ be obliged
to satisfy conditions which determine all its values by means of
the arbitrary values which it may have for a given finite range,
from $\alpha = a$ to $\alpha = b$, the integral relative to
$\alpha$ in the formula ({\sc f}) can be shown to vanish at the
limit $k = 0$, for all real and positive values of $\beta$,
except those which are roots of a certain equation
$$\Omega_\rho = 0;
\eqno {\sc (g)}$$
while the same integral is, on the contrary, infinite, for these
particular values of $\beta$; and then the integration relatively
to $\beta$ will in general change itself into a summation
relatively to the real and positive roots $\rho$ of the equation
({\sc g}), which is to be combined with an integration relatively
to $\alpha$ between the given limits $a$ and $b$; the resulting
expression being of the form
$$f_x = \sum\nolimits_\rho \int_a^b d\alpha \,
\phi_{x,\alpha,\rho} f_\alpha.
\eqno {\sc (h)}$$
For example, in the case where ${\sc p}$ is a cosine, and
${\sc f}$ a negative exponential, if the conditions relative to
the function~$f$ be supposed such as to conduct to expressions of
the forms
$$\int_0^\infty d\alpha \, e^{-h\alpha} f_\alpha
= {\psi(h) \over \phi(h)},
\eqno {\rm (n''')}$$
$$\int_0^{-\infty} d\alpha \, e^{h\alpha} f_\alpha
= {\psi(-h) \over \phi(-h)},
\eqno {\rm (o''')}$$
in which $h$ is any real or imaginary quantity, independent of
$\alpha$, and having its real part positive; it will follow that
$$\int_{-\infty}^\infty d\alpha \, e^{-k\sqrt{\alpha^2}}
( \cos \beta \alpha - \sqrt{-1} \sin \beta \alpha ) f_\alpha
= {\psi(\beta \sqrt{-1} + k) \over \phi(\beta \sqrt{-1} + k)}
- {\psi(\beta \sqrt{-1} - k) \over \phi(\beta \sqrt{-1} - k)},
\eqno {\rm (p''')}$$
in which $\sqrt{\alpha^2}$ is $= \alpha$ or $- \alpha$, according
as $\alpha$ is $>$ or $< 0$, and the quantities $\beta$ and $k$
are real, and $k$ is positive. The integral in (p${}'''$), and
consequently also that relative to $\alpha$ in ({\sc f}), in
which, now
$${\sc p}_\alpha = \cos \alpha,\quad
{\sc f}_\alpha = e^{-k \sqrt{\alpha^2}},$$
will therefore, under these conditions, tend to vanish with $k$,
unless $\beta$ be a root $\rho$ of the equation
$$\phi(\rho \sqrt{-1}) = 0,
\eqno {\rm (q''')}$$
which here corresponds to ({\sc g}); but the same integral will
on the contrary tend to become infinite, as $k$ tends to $0$, if
$\beta$ be a root of the equation (q${}'''$). Making therefore
$\beta = \rho + k \lambda$, and supposing $k\lambda$ to be small,
while $\rho$ is a real and positive root of (q${}'''$), the
integral (p${}'''$) becomes
$${k^{-1} \over 1 + \lambda^2}
({\sc a}_\rho - \sqrt{-1} {\sc b}_\rho),
\eqno {\rm (r''')}$$
in which ${\sc a}_\rho$ and ${\sc b}_\rho$ are real, namely,
$$\left. \eqalign{
{\sc a}_\rho
&= {\psi(\rho \sqrt{-1}) \over \phi'(\rho \sqrt{-1})}
+ {\psi(- \rho \sqrt{-1}) \over \phi'(- \rho \sqrt{-1})},\cr
{\sc b}_\rho
&= \sqrt{-1}
\left(
{\psi(\rho \sqrt{-1}) \over \phi'(\rho \sqrt{-1})}
- {\psi(- \rho \sqrt{-1}) \over \phi'(- \rho \sqrt{-1})}
\right);\cr}
\right\}
\eqno {\rm (s''')}$$
$\phi'$ being the differential coefficient of the
function~$\phi$. Multiplying the expression (r${}'''$) by
$\pi^{-1} \, d\beta \, (\cos \beta x + \sqrt{-1} \sin \beta x)$,
which may be changed to
$\pi^{-1} k \, d\lambda \, (\cos \rho x + \sqrt{-1} \sin \rho x)$;
integrating relatively to $\lambda$ between indefinitely great
limits, negative and positive; taking the real part of the
result, and summing it relatively to $\rho$; there results,
$$f_x = \sum\nolimits_\rho
({\sc a}_\rho \cos \rho x + {\sc b}_\rho \sin \rho x);
\eqno {\rm (t''')}$$
a development which has been deduced nearly as above, by
{\sc Poisson} and {\sc Liouville}, from the suppositions
(n${}'''$), (o${}'''$), and from the theorem of {\sc Fourier}
presented under a form equivalent to the following:
$$f_x = \lim_{k = 0} \mathbin{.} \pi^{-1}
\int_0^\infty d\beta \, \int_{-\infty}^\infty d\alpha \,
e^{-k\sqrt{\alpha^2}}
\cos (\beta \alpha - \beta x) f_\alpha;
\eqno {\rm (u''')}$$
and in which it is to be remembered that if $0$ be a root of the
equation (q${}'''$), the corresponding terms in the development
of $f_x$ must in general be modified by the circumstance, that in
calculating these terms, the integration relatively to $\lambda$
extends only from $0$ to $\infty$.
For example, when the function~$f$ is obliged to satisfy the
conditions
$$f_{-\alpha} = f_\alpha,\quad
f_{l - \alpha} = - f_{l + \alpha},
\eqno {\rm (v''')}$$
the suppositions (n${}'''$), (o${}'''$) are satisfied; the
functions $\phi$ and $\psi$ being here such that
$$\eqalign{
\phi(h) &= e^{hl} + e^{-hl},\cr
\psi(h) &= \int_0^l d\alpha \,
(e^{h(l - \alpha)} - e^{h(\alpha - l)}) f_\alpha;\cr}$$
therefore the equation (q${}'''$) becomes in this case
$$\cos \rho l = 0,
\eqno {\rm (w''')}$$
and the expressions (s${}'''$) for the coefficients of the
development (t${}'''$) reduce themselves to the following:
$${\sc a}_\rho
= {2 \over l} \int_0^l d\alpha \, \cos \rho \alpha \, f_\alpha;\quad
{\sc b}_\rho = 0;
\eqno {\rm (x''')}$$
so that the method conducts to the following expression for the
function~$f$, which satisfies the conditions (v${}'''$),
$$f_x = {2 \over l} \sum\nolimits_{(n) 1}^{\phantom{(n)} \infty}
\cos {(2n - 1) \pi x \over 2l}
\int_0^l d\alpha \,
\cos {(2n - 1) \pi \alpha \over 2l} f_\alpha;
\eqno {\rm (y''')}$$
in which $f_\alpha$ is arbitrary from $\alpha = 0$ to $\alpha =
l$, except that $f_l$ must vanish. The same method has been
applied, by the authors already cited, to other and more
difficult questions; but it will harmonize better with the
principles of the present paper to treat the subject in another
way, to which we shall now proceed.
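[The development (y${}'''$) may also be tested numerically. In the Python sketch below, an editorial addition, we take $l = 1$ and the arbitrary function $f_\alpha = 1 - \alpha^2$, which vanishes at $\alpha = l$ as required; sixty terms of the series reproduce the function closely at an interior point.]

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule with n (even) subintervals of [a, b].
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

l = 1.0
f = lambda a: 1.0 - a * a      # arbitrary on [0, l], except that f(l) = 0
x = 0.3

# Partial sum of (y'''): (2/l) * sum over n of
#   cos((2n-1)*pi*x/(2l)) * integral_0^l cos((2n-1)*pi*a/(2l)) f(a) da
S = 0.0
for n in range(1, 61):
    w = (2 * n - 1) * math.pi / (2 * l)
    coeff = simpson(lambda a: math.cos(w * a) * f(a), 0.0, l, 10000)
    S += (2 / l) * math.cos(w * x) * coeff
print(abs(S - f(x)))   # small (series tail and quadrature error)
```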
\bigbreak
[15.]
Instead of introducing, as in ({\sc e}) and ({\sc f}), a factor
which has unity for its limit, we may often remove the apparent
indeterminateness of the formula ({\sc d}) in another way, by the
principles of fluctuating functions. For if we integrate first
relatively to $\alpha$ between indefinitely great limits,
negative and positive, then, under the conditions which conduct
to developments of the form ({\sc h}), we shall find that the
resulting function of $\beta$ is usually a fluctuating one, of
which the integral vanishes, except in the immediate
neighbourhood of certain particular values determined by an
equation such as ({\sc g}); and then, by integrating only in such
immediate neighbourhood, and afterwards summing the results, the
development ({\sc h}) is obtained. For example, when ${\sc p}$
is a cosine, and when the conditions (v${}'''$) are satisfied by
the function~$f$, it is not difficult to prove that
$$\int_{-2ml - l}^{2ml + l} d\alpha \,
\cos (\beta \alpha - \beta x) f_\alpha
= {2 \cos (2m\beta l + \beta l + m \pi) \over \cos \beta l}
\cos \beta x
\int_0^l d\alpha \, \cos \beta \alpha f_\alpha;
\eqno {\rm (z''')}$$
$m$ being here an integer number, which is to be supposed large,
and ultimately infinite. The equation ({\sc g}) becomes
therefore, in the present question and by the present method, as
well as by that of the last article,
$$\cos \rho l = 0;$$
and if we make $\beta = \rho + \gamma$, $\rho$ being a root of
this equation, we may neglect $\gamma$ in the second member of
(z${}'''$), except in the denominator
$$\cos \beta l = - \sin \rho l \, \sin \gamma l,$$
and in the fluctuating factor of the numerator
$$\cos (2m\beta l + \beta l + m\pi)
= - \sin \rho l \, \sin (2 m \gamma l + \gamma l);$$
consequently, multiplying by $\pi^{-1} \, d\gamma$, integrating
relatively to $\gamma$ between any two small limits of the forms
$\mp \epsilon$, and observing that
$$\lim_{m = \infty} \mathbin{.} {2 \over \pi}
\int_{-\epsilon}^\epsilon d\gamma \,
{\sin (2ml\gamma + l\gamma) \over \sin l\gamma}
= {2 \over l},$$
the development
$$f_x = {2 \over l} \sum\nolimits_\rho \cos \rho x
\int_0^l d\alpha \, \cos \rho \alpha \, f_\alpha,$$
which coincides with (y${}'''$), and is of the form ({\sc h}), is
obtained.
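[The limit invoked above, that $\displaystyle {2 \over \pi} \int_{-\epsilon}^\epsilon d\gamma \, {\sin (2ml\gamma + l\gamma) \over \sin l\gamma}$ tends to $2/l$ as $m$ increases, can be observed numerically. The Python sketch below is an editorial addition; the values of $l$, $\epsilon$ and $m$ are arbitrary, and the integral is evaluated for a large but finite $m$.]

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule with n (even) subintervals of [a, b].
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

l, eps, m = 1.0, 0.05, 5000
# The fluctuating (Dirichlet-kernel-like) integrand; at g = 0 its value is 2m+1.
D = lambda g: math.sin((2 * m + 1) * l * g) / math.sin(l * g) if g else 2.0 * m + 1
val = (2 / math.pi) * simpson(D, -eps, eps, 100000)
print(abs(val - 2 / l))   # of order 1/m; shrinks as m grows
```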
\bigbreak
[16.]
A more important application of the method of the last article is
suggested by the expression which {\sc Fourier} has given for the
arbitrary initial temperature of a solid sphere, on the
supposition that this temperature is the same for all points at
the same distance from the centre. Denoting the radius of the
sphere by $l$, and that of any layer or shell of it by $\alpha$,
while the initial temperature of the same layer is denoted by
$\alpha^{-1} f_\alpha$, we have the equations
$$f_0 = 0,\quad f'_l + \nu f_l = 0,
\eqno {\rm (a^{\mit IV})}$$
which permit us to suppose
$$f_\alpha + f_{-\alpha} = 0,\quad
f'_{l + \alpha} + f'_{l - \alpha}
+ \nu (f_{l + \alpha} + f_{l - \alpha} ) = 0;
\eqno {\rm (b^{\mit IV})}$$
$\nu$ being here a constant quantity not less than $-l^{-1}$, and
$f'$ being the first differential coefficient of the
function~$f$, which function remains arbitrary for all values of
$\alpha$ greater than $0$, but not greater than $l$. The
equations (b${}^{IV}$) give
$$\eqalignno{
& (\beta \cos \beta l + \nu \sin \beta l)
\int_{l - \alpha}^{l + \alpha} d\alpha \,
\sin \beta \alpha \, f_\alpha \cr
&\quad =
(\beta \sin \beta l - \nu \cos \beta l)
\int_{\alpha - l}^{\alpha + l} d\alpha \,
\cos \beta \alpha \, f_\alpha
- \cos \beta \alpha \, (f_{\alpha + l} + f_{\alpha - l});
& {\rm (c^{\mit IV})}\cr}$$
so that
$$(\rho \sin \rho l - \nu \cos \rho l)
\int_{\alpha - l}^{\alpha + l} d\alpha \,
\cos \rho \alpha \, f_\alpha
= \cos \rho \alpha \, (f_{\alpha + l} + f_{\alpha - l}),
\eqno {\rm (d^{\mit IV})}$$
if $\rho$ be a root of the equation
$$\rho \cos \rho l + \nu \sin \rho l = 0.
\eqno {\rm (e^{\mit IV})}$$
This latter equation is that which here corresponds to ({\sc g});
and when we change $\beta$ to $\rho + \gamma$, $\gamma$ being
very small, we may write, in the first member of (c${}^{IV}$),
$$\beta \cos \beta l + \nu \sin \beta l
= \gamma \{ (1 + \nu l) \cos \rho l + \rho l \sin \rho l \},
\eqno {\rm (f^{\mit IV})}$$
and change $\beta$ to $\rho$ in all the terms of the second
member, except in the fluctuating factor $\cos \beta \alpha$, in
which $\alpha$ is to be made extremely large. Also, after making
$\cos \beta \alpha = \cos \rho \alpha \, \cos \gamma \alpha
- \sin \rho \alpha \, \sin \gamma \alpha$,
we may suppress $\cos \gamma \alpha$ in the second member of
(c${}^{IV}$), before integrating with respect to $\gamma$,
because by (d${}^{IV}$) the terms involving
$\cos \gamma \alpha$ tend to vanish with $\gamma$, and because
$\gamma^{-1} \cos \gamma \alpha$ changes sign with $\gamma$. On
the other hand, the integral of
$\displaystyle {d\gamma \, \sin \gamma \alpha \over \gamma}$
is to be replaced by $\pi$, though it be taken only for very
small values, negative and positive, of $\gamma$, because
$\alpha$ is here indefinitely large and positive. Thus in the
present question, the formula
$$f_x = {1 \over \pi} \mathbin{.} \lim_{\alpha = \infty} \mathbin{.}
\int_0^\infty d\beta \, \int_{l - \alpha}^{l + \alpha} d\alpha \,
\sin \beta \alpha \, f_\alpha,
\eqno {\rm (g^{\mit IV})}$$
(which is obtained from (a${}'''$) by suppressing the terms which
involve $\cos \beta x$, on account of the first condition
(b${}^{IV}$),) may be replaced by a sum relative to the real
and positive roots of the equation (e${}^{IV}$); the term
corresponding to any one such root being
$${{\sc r}_\rho \sin \rho x
\over (1 + \nu l) \cos \rho l - \rho l \sin \rho l},
\eqno {\rm (h^{\mit IV})}$$
if we suppose $\rho > 0$, and make for abridgment
$${\sc r}_\rho
= (\nu \cos \rho l - \rho \sin \rho l)
\int_{\alpha - l}^{\alpha + l} d\alpha \,
\sin \rho \alpha \, f_\alpha
+ \sin \rho \alpha \, (f_{\alpha + l} + f_{\alpha - l}).
\eqno {\rm (i^{\mit IV})}$$
The equations (b${}^{IV}$) show that the quantity
${\sc r}_\rho$ does not vary with $\alpha$, and therefore that it
may be rigorously thus expressed:
$${\sc r}_\rho
= 2 (\nu \cos \rho l - \rho \sin \rho l)
\int_0^l d\alpha \,
\sin \rho \alpha \, f_\alpha;
\eqno {\rm (k^{\mit IV})}$$
we have also, by (e${}^{IV}$), $\rho$ being $> 0$,
$${2 (\nu \cos \rho l - \rho \sin \rho l)
\over \cos \rho l + l ( \nu \cos \rho l - \rho \sin \rho l)}
= {2\rho \over \rho l - \sin \rho l \, \cos \rho l}.
\eqno {\rm (l^{\mit IV})}$$
And if we set aside the particular case where
$$\nu l + 1 = 0,
\eqno {\rm (m^{\mit IV})}$$
the term corresponding to the root
$$\rho = 0,
\eqno {\rm (n^{\mit IV})}$$
of the equation (e${}^{IV}$), vanishes in the development of
$f_x$; because this term is, by (g${}^{IV}$),
$${x \over \pi} \int_0^\beta d\beta
\left(
\beta \int_{l - \alpha}^{l + \alpha} d\alpha \,
\sin \beta \alpha \, f_\alpha
\right),
\eqno {\rm (o^{\mit IV})}$$
$\alpha$ being very large, and $\beta$ small, but both being
positive; and unless the condition (m${}^{IV}$) be satisfied,
the equation (c${}^{IV}$) shows that the quantity to be
integrated in (o${}^{IV}$), with respect to $\beta$, is a
finite and fluctuating function of that variable, so that its
integral vanishes, at the limit $\alpha = \infty$. Setting aside
then the case (m${}^{IV}$) which corresponds physically to
the absence of exterior radiation, we see that the
function~$f_x$, which represents the initial temperature of any
layer of the sphere multiplied by the distance~$x$ of that layer
from the centre, and which is arbitrary between the limits
$x = 0$, $x = l$, that is, between the centre and the surface,
(though it is obliged to satisfy at those limits the conditions
(a${}^{IV}$)), may be developed in the following series,
which was discovered by {\sc Fourier}, and is of the form
({\sc h}):
$$f_x = \sum\nolimits_\rho
{\displaystyle 2 \rho \sin \rho x
\int_0^l d\alpha \, \sin \rho \alpha \, f_\alpha
\over \rho l - \sin \rho l \, \cos \rho l};
\eqno {\rm (p^{\mit IV})}$$
the sum extending only to those roots of the equation
(e${}^{IV}$) which are greater than $0$. In the particular
case (m${}^{IV}$), in which the root (n${}^{IV}$) of the
equation (e${}^{IV}$) must be employed, the term
(o${}^{IV}$) becomes, by (c${}^{IV}$) and
(d${}^{IV}$),
$${3x \over \pi l^3}
\left\{
\int_{\alpha - l}^{\alpha + l} d\alpha \,
f_\alpha \alpha {\sc c}
- l ( f_{\alpha + l} + f_{\alpha - l} ) \alpha {\sc c}
\right\},
\eqno {\rm (q^{\mit IV})}$$
in which, at the limit here considered,
$${\sc c}
= \int_0^\infty d\theta \,
{\mathop{\rm vers} \theta \over \theta^2}
= {\pi \over 2};
\eqno {\rm (r^{\mit IV})}$$
but also, by the equations (b${}^{IV}$), (m${}^{IV}$),
$$\int_{\alpha - l}^{\alpha + l} d\alpha \, f_\alpha \alpha
- l ( f_{\alpha + l} + f_{\alpha - l} ) \alpha
= 2 \int_0^l d\alpha \, f_\alpha \alpha;
\eqno {\rm (s^{\mit IV})}$$
the sought term of $f_x$ becomes, therefore, in the present case,
$${3x \over l^3} \int_0^l d\alpha \, f_\alpha \alpha,
\eqno {\rm (t^{\mit IV})}$$
and the corresponding term in the expression of the temperature
$x^{-1} f_x$ is equal to the mean initial temperature of the
sphere; a result which has been otherwise obtained by
{\sc Poisson}, for the case of no exterior radiation, and which
might have been anticipated from physical considerations. The
supposition
$$\nu l + 1 < 0,
\eqno {\rm (u^{\mit IV})}$$
which is inconsistent with the physical conditions of the
question, and in which {\sc Fourier's} development
(p${}^{IV}$) may fail, is excluded in the foregoing analysis.
\bigbreak
[17.]
When a converging series of the form ({\sc h}) is arrived at, in
which the coefficients~$\phi$ of the arbitrary function~$f$,
under the sign of integration, do not tend to vanish as they
correspond to larger and larger roots~$\rho$ of the equation
({\sc g}); then those coefficients $\phi_{x, \alpha, \rho}$ must
in general tend to become fluctuating functions of $\alpha$, as
$\rho$ becomes larger and larger. And the sum of those
coefficients, which may be thus denoted,
$$\sum\nolimits_\rho \phi_{x,\alpha,\rho}
= \psi_{x,\alpha,\rho},
\eqno {\sc (i)}$$
and which is here supposed to be extended to all real and
positive roots of the equation ({\sc g}), as far as some given
root~$\rho$, must tend to become a fluctuating function of
$\alpha$, and to have its mean value equal to zero, as $\rho$
tends to become infinite, for all values of $\alpha$ and $x$
which are different from each other, and are both comprised
between the limits of the integration relative to $\alpha$; in
such a manner as to satisfy the equation
$$\int_\lambda^\mu d\alpha \, \psi_{x,\alpha,\infty} f_\alpha
= 0,
\eqno {\sc (k)}$$
which is of the form (e), referred to in the second article;
provided that the arbitrary function~$f$ is finite, and that the
quantities $\lambda$, $\mu$, $x$, $\alpha$ are all comprised
between the limits $a$ and $b$, which enter into the formula
({\sc h}); while $\alpha$ is, but $x$ is not, comprised also
between the new limits $\lambda$ and $\mu$. But when
$\alpha = x$, the sum ({\rm i}) tends to become infinite with
$\rho$, so that we have
$$\psi_{x,x,\infty} =\infty,
\eqno {\sc (l)}$$
and
$$\int_{x - \epsilon}^{x + \epsilon} d\alpha \,
\psi_{x,\alpha,\infty} f_\alpha = f_x,
\eqno {\sc (m)}$$
$\epsilon$ being here a quantity indefinitely small. For
example, in the particular question which conducts to the
development (y${}'''$), we have
$$\phi_{x,\alpha,\rho} = {2 \over l} \cos \rho x \, \cos \rho \alpha,
\eqno {\rm (v^{\mit IV})}$$
and
$$\rho = {(2n - 1) \pi \over 2l};
\eqno {\rm (w^{\mit IV})}$$
therefore, summing relatively to $\rho$, or to $n$, from $n = 1$
to any given positive value of the integer number~$n$, we have,
by ({\sc i}),
$$\psi_{x,\alpha,\rho}
= {\displaystyle \sin {n \pi (\alpha - x) \over l}
\over \displaystyle 2l \sin {\pi (\alpha - x) \over 2l}}
+ {\displaystyle \sin {n \pi (\alpha + x) \over l}
\over \displaystyle 2l \sin {\pi (\alpha + x) \over 2l}};
\eqno {\rm (x^{\mit IV})}$$
and it is evident that this sum tends to become a fluctuating
function of $\alpha$, and to satisfy the equation ({\sc k}), as
$\rho$, or $n$, tends to become infinite, while $\alpha$ and $x$
are different from each other, and are both comprised between the
limits $0$ and $l$. On the other hand, when $\alpha$ becomes
equal to $x$, the first part of the expression (x${}^{IV}$)
becomes
$\displaystyle = {n \over l}$,
and therefore tends to become infinite with $n$, so that the
equation ({\sc l}) is true. And the equation ({\sc m}) is
verified by observing, that if $x > 0$, $< l$, we may omit the
second part of the sum (x${}^{IV}$), as disappearing in the
integral through fluctuation, while the first part gives, at the
limit,
$$\lim_{n = \infty} \int_{x - \epsilon}^{x + \epsilon} d\alpha \,
{\displaystyle \sin {n \pi (\alpha - x) \over l}
\over \displaystyle 2l \sin {\pi (\alpha - x) \over 2l}}
f_\alpha
= f_x.
\eqno {\rm (y^{\mit IV})}$$
If $x$ be equal to $0$, the integral is to be taken only from $0$
to $\epsilon$, and the result is only half as great, namely,
$$\lim_{n = \infty} \mathbin{.} \int_0^\epsilon d\alpha \,
{\displaystyle \sin {n \pi \alpha \over l}
\over \displaystyle 2l \sin {\pi \alpha \over 2l}}
f_\alpha
= {\textstyle {1 \over 2}} f_0;
\eqno {\rm (z^{\mit IV})}$$
but, in this case, the other part of the sum (x${}^{IV}$)
contributes an equal term, and the whole result is $f_0$. If
$x = l$, the integral is to be taken from $l - \epsilon$ to $l$,
and the two parts of the expression (x${}^{IV}$) contribute
the two terms ${1 \over 2} f_l$ and $-{1 \over 2} f_l$, which
neutralize each other. We may therefore in this way prove,
{\it \`{a} posteriori}, by consideration of fluctuating
functions, the truth of the development (y${}'''$) for any
arbitrary but finite function $f_x$, and for all values of the
real variable~$x$ from $x = 0$ to $x = l$, the function being
supposed to vanish at the latter limit; observing only that if
this function $f_x$ undergo any sudden change of value, for any
value $x^\backprime$ of the variable between the limits $0$ and
$l$, and if $x$ be made equal to $x^\backprime$ in the
development (y${}'''$), the process shows that this development
then represents the semisum of the two values which the
function~$f$ receives, immediately before and after it undergoes
this sudden change.
\bigbreak
[18.]
The same mode of {\it \`{a} posteriori\/} proof, through the
consideration of fluctuating functions, may be applied to a great
variety of other analogous developments, as has indeed been
indicated by {\sc Fourier}, in a passage of his Theory of Heat.
The spirit of {\sc Poisson's} method, when applied to the
establishment, {\it \`{a} posteriori}, of developments of the
form ({\sc h}) would lead us to multiply, before the summation,
each coefficient $\phi_{x,\alpha,\rho}$ by a factor
${\sc f}_{k,\rho}$ which tends to unity as $k$ tends to $0$, but
tends to vanish as $\rho$ tends to $\infty$; and then instead of
a {\it generally fluctuating sum\/} ({\sc i}), there results a
{\it generally evanescent sum\/} ($k$ being evanescent), namely,
$$\sum_\rho {\sc f}_{k,\rho} \phi_{x,\alpha,\rho}
= \chi_{x,\alpha,k,\rho},
\eqno {\sc (n)}$$
which conducts to equations analogous to ({\sc k}) ({\sc l})
({\sc m}), namely,
$$\lim_{k = 0} \int_\lambda^\mu d\alpha \,
\chi_{x,\alpha,k,\infty} f_\alpha
= 0;
\eqno {\sc (o)}$$
$$\lim_{k = 0} \chi_{x,x,k,\infty} = \infty;
\eqno {\sc (p)}$$
$$\lim_{k = 0} \int_{x - \epsilon}^{x + \epsilon} d\alpha \,
\chi_{x,\alpha,k,\infty} f_\alpha
= f_x.
\eqno {\sc (q)}$$
It would be interesting to inquire what form the generally
evanescent function~$\chi$ would take immediately before its
vanishing when
$${\sc f}_{k,\rho} = e^{-k\rho},$$
and
$$\phi_{x,\alpha,\rho}
= {2\rho \sin \rho x \, \sin \rho \alpha
\over \rho l - \sin \rho l \, \cos \rho l},$$
$\rho$ being a root of the equation
$$\rho l \mathop{\rm cotan} \rho l = \hbox{const.},$$
and the constant in the second member being supposed not greater
than unity.
\bigbreak
[19.]
The development ({\sc c}), which, like ({\sc h}), expresses an
arbitrary function, at least between given limits, by a
combination of summation and integration, was deduced from the
expression (m${}''$) of the eleventh article, which conducts also
to many other analogous developments, according to the various
ways in which the factor with the infinite index,
${\sc n}_{\infty (\alpha - x)}$, may be replaced by an infinite
sum, or other equivalent form. Thus, if, instead of (o${}''$),
we establish the following equation,
$$\int_{(2n-2)\alpha}^{2n\alpha} d\alpha \, {\sc p}_\alpha
= {\sc r}_{\alpha,n} \int_0^\alpha d\alpha \, {\sc p}_\alpha,
\eqno {\rm (a^{\mit V})}$$
we shall have, instead of ({\sc c}), the development:
$$f_x = \varpi^{-1} {\sc p}_0
\sum\nolimits_{(n) 1}^{\phantom{(n)} \infty}
\int_a^b d\alpha \, {\sc r}_{\alpha - x,n} f_\alpha;
\eqno {\sc (r)}$$
which, when ${\sc p}$ is a cosine, reduces itself to the form,
$$f_x = {2 \over \pi} \sum\nolimits_{(n) 1}^{\phantom{(n)} \infty}
\int_a^b d\alpha \,
\cos (\overline{2n - 1} \mathbin{.} \overline{\alpha - x})
f_\alpha,
\eqno {\rm (b^{\mit V})}$$
$x$ being $> a$, $** \pi$; and easily
conducts to the known expression
$$f_x = {1 \over l} \sum\nolimits_{(n) 1}^{\phantom{(n)} \infty}
\int_{-l}^l d\alpha \,
\cos {(2n - 1) \pi (\alpha - x) \over 2l} f_\alpha,
\eqno {\rm (c^{\mit V})}$$
which holds good for all values of $x$ between $-l$ and $+l$. By
supposing $f_\alpha = f_{-\alpha}$, we are conducted to the
expression (y${}'''$); and by supposing
$f_\alpha = - f_{-\alpha}$ we are conducted to this other known
expression,
$$f_x = {2 \over l} \sum\nolimits_{(n) 1}^{\phantom{(n)} \infty}
\sin {(2n - 1) \pi x \over 2l}
\int_0^l d\alpha \,
\sin {(2n - 1) \pi \alpha \over 2l} f_\alpha;
\eqno {\rm (d^{\mit V})}$$
which holds good even at the limit $x = l$, by the principles of
the seventeenth article, and therefore offers the following
transformation for the arbitrary function~$f_l$:
$$f_l = - {2 \over l} \sum\nolimits_{(n) 1}^{\phantom{(n)} \infty}
(-1)^n \int_0^l d\alpha \,
\sin {(2n - 1) \pi \alpha \over 2l} f_\alpha.
\eqno {\rm (e^{\mit V})}$$
For example, by making $f_\alpha = \alpha^i$, and supposing $i$
to be an uneven integer number; effecting the integration
indicated in (e${}^{V}$), and dividing both members by $l^i$, we
find the following relation between the sums of the reciprocals
of even powers of odd whole numbers:
$$1 = [i]^1 \omega_2 - [i]^3 \omega_4 + [i]^5 \omega_6 - \cdots;
\eqno {\rm (f^{\mit V})}$$
in which
$$[i]^k = i (i - 1) (i - 2) \cdots (i - k + 1);
\eqno {\rm (g^{\mit V})}$$
and
$$\omega_{2k}
= 2 \left( {2 \over \pi} \right)^{2k}
\sum\nolimits_{(n) 1}^{\phantom{(n)} \infty}
(2n - 1)^{-2k};
\eqno {\rm (h^{\mit V})}$$
thus
$$1 = \omega_2
= 3 \omega_2 - 3 \mathbin{.} 2 \mathbin{.} 1 \mathbin{.} \, \omega_4
= 5 \omega_2 - 5 \mathbin{.} 4 \mathbin{.} 3 \, \omega_4
+ 5 \mathbin{.} 4 \mathbin{.} 3 \mathbin{.} 2 \mathbin{.} 1 \, \omega_6,
\eqno {\rm (i^{\mit V})}$$
so that
$$\omega_2 = 1,\quad
\omega_4 = {\textstyle {1 \over 3}},\quad
\omega_6 = {\textstyle {2 \over 15}}.
\eqno {\rm (k^{\mit V})}$$
Again, by making $f_\alpha= \alpha^i$, but supposing $i =$ an
uneven number $2k$, we get the following additional term in the
second member of the equation (f${}^{V}$),
$$(-1)^k [2k]^{2k} \omega_{2k+1},
\eqno {\rm (l^{\mit V})}$$
in which
$$\omega_{2k+1}
= -2 \left( {2 \over \pi} \right)^{2k+1}
\sum\nolimits_{(n) 1}^{\phantom{(n)} \infty}
(-1)^n (2n - 1)^{-2k-1};
\eqno {\rm (m^{\mit V})}$$
thus
$$1 = \omega_1
= 2 \omega_2 - 2 \mathbin{.} 1 \, \omega_3
= 4 \omega_2 - 4 \mathbin{.} 3 \mathbin{.} 2 \, \omega_4
+ 4 \mathbin{.} 3 \mathbin{.} 2 \mathbin{.} 1 \, \omega_5,
\eqno {\rm (n^{\mit V})}$$
so that
$$\omega_1 = 1,\quad
\omega_3 = {\textstyle {1 \over 2}},\quad
\omega_5 = {\textstyle {5 \over 24}}.
\eqno {\rm (o^{\mit V})}$$
Accordingly, if we multiply the values (k${}^{V}$) by
$\displaystyle {\pi^2 \over 8}$,
$\displaystyle {\pi^4 \over 32}$,
$\displaystyle {\pi^6 \over 128}$,
we get the known values for the sums of the reciprocals of the
squares, fourth powers, and sixth powers of the odd whole
numbers; and if we multiply the values (o${}^{V}$) by
$\displaystyle {\pi \over 4}$,
$\displaystyle {\pi^3 \over 16}$,
$\displaystyle {\pi^5 \over 64}$,
we get the known values for the sums of the reciprocals of the
first, third, and fifth powers of the same odd numbers, taken
however with alternately positive and negative signs. Again, if
we make $f_\alpha = \sin \alpha$, in (e${}^{V}$), and divide both
members of the resulting equation by $\cos l$, we get this known
expression for a tangent,
$$\tan l
= \sum\nolimits_{(n) -\infty}^{\phantom{(n) -} \infty}
{2 \over (2n - 1) \pi - 2 l};
\eqno {\rm (p^{\mit V})}$$
which shows that, with the notation (h${}^{V}$),
$$\tan l = \omega_2 l^1 + \omega_4 l^3 + \omega_6 l^5 + \cdots;
\eqno {\rm (q^{\mit V})}$$
so that the coefficients of the ascending powers of the arc in
the development of its tangent are connected with each other by
the relations (f${}^{5}$), which may be briefly represented thus:
$$\sqrt{-1} = (1 + \sqrt{-1} {\sc d}_0)^{2k - 1} \tan 0;
\eqno {\rm (r^{\mit V})}$$
the second member of this symbolic equation being supposed to be
developed, and ${\sc d}_0^i \tan 0$ being understood to denote
the value which the $i^{\rm th}$ differential coefficient of the
tangent of $\alpha$, taken with respect to $\alpha$, acquires
when $\alpha = 0$; thus
$$1 = {\sc d}_0 \tan 0
= 3 {\sc d}_0 \tan 0 - {\sc d}_0^3 \tan 0
= 5 {\sc d}_0 \tan 0 - 10 {\sc d}_0^3 \tan 0
+ {\sc d}_0^5 \tan 0.
\eqno {\rm (s^{\mit V})}$$
Finally, if we make $f_\alpha = \cos \alpha$, and attend to the
expression (p${}^{V}$), we obtain, for the secant of an arc~$l$,
the known expression:
$$\sec l
= \sum\nolimits_{(n) -\infty}^{\phantom{(n) -} \infty}
{2 (-1)^{n+1} \over (2n - 1) \pi - 2l};
\eqno {\rm (t^{\mit V})}$$
which shows that, with the notation (m${}^{V}$),
$$\sec l = \omega_1 l^0 + \omega_3 l^2 + \omega_5 l^4 + \cdots,
\eqno {\rm (u^{\mit V})}$$
and therefore, by the relations of the form (n${}^{V}$),
$$\sqrt{-1} (1 - (\sqrt{-1} {\sc d}_0)^{2k} \sec 0)
= (1 + \sqrt{-1} {\sc d}_0)^{2k} \tan 0;
\eqno {\rm (v^{\mit V})}$$
thus
$$1 = \sec 0 = 2 {\sc d}_0 \tan 0 - {\sc d}_0^2 \sec 0
= 4 {\sc d}_0 \tan 0 - 4 {\sc d}_0^3 \tan 0 + {\sc d}_0^4 \sec 0.
\eqno {\rm (w^{\mit V})}$$
Though several of the results above deduced are known, the writer
does not remember to have elsewhere seen the symbolic equations
(r${}^{V}$), (v${}^{V}$), as expressions for the laws of the
coefficients of the developments of the tangent and secant,
according to ascending powers of the arc.
\bigbreak
[20.]
In the last article, the symbol ${\sc r}$ was such, that
$$\sum\nolimits_{(n) 1}^{\phantom{(n)} n} {\sc r}_{\alpha, n}
= {\sc n}_{2n\alpha} {\sc n}_\alpha^{-1};
\eqno {\rm (x^{\mit V})}$$
and in article [11.], we had
$$1 + \sum\nolimits_{(n) 1}^{\phantom{(n)} n} {\sc q}_{\alpha, n}
= {\sc n}_{2n\alpha + \alpha} {\sc n}_\alpha^{-1}.
\eqno {\rm (y^{\mit V})}$$
Assume, now, more generally,
$$\nabla_\beta {\sc s}_{\alpha, \beta}
= {\sc n}_{\beta \alpha} {\sc n}_\alpha^{-1};
\eqno {\rm (z^{\mit V})}$$
and let the operation $\nabla_\beta$ admit of being effected
after, instead of before, the integration relatively to $\alpha$;
the expression (m${}''$) will then acquire this very general
form:
$$f_x = \varpi^{-1} {\sc p}_0 \nabla_\infty
\int_a^b d\alpha \, {\sc s}_{\alpha - x, \beta} f_\alpha;
\eqno {\sc (s)}$$
which includes the transformations ({\sc c}) and ({\sc r}), and
in which the notation $\nabla_\infty$ is designed to indicate
that after performing the operation $\nabla_\beta$ we are to make
the variable $\beta$ infinite, according to some given law of
increase, connected with the form of the operation denoted by
$\nabla$.
\bigbreak
[21.]
In order to deduce the theorems ({\sc c}), ({\sc r}), ({\sc s}),
we have hitherto supposed (as was stated in the twelfth article),
that the equation ${\sc n}_\alpha = 0$ has no real root different
from $0$ between the limits $\mp (b - a)$, in which $a$ and $b$
are the limits of the integration relative to $a$, between which
latter limits it is also supposed that the variable~$x$ is
comprised. If these conditions be not satisfied, the factor
${\sc n}_{\alpha - x}^{-1}$, in the formula (m${}''$), may become
infinite within the proposed extent of integration, for values of
$\alpha$ and $x$ which are not equal to each other; and it will
then be necessary to change the first member of each of the
equations (m${}''$), ({\sc c}), ({\sc r}), ({\sc s}), to a
function different from $f_x$, but to be determined by similar
principles. To simplify the question, let it be supposed that
the function ${\sc n}_\alpha$ receives no sudden change of value,
and that the equation
$${\sc n}_\alpha = 0,
\eqno {\rm (a^{\mit VI})}$$
which coincides with (w${}''$) has all its real roots unequal.
These roots must here coincide with the quantities $\alpha_{n,i}$
of the fourth and other articles, for which the function
${\sc n}_\alpha$ changes sign; but as the double index is now
unnecessary, while the notation $\alpha_n$ has been appropriated
to the roots of the equation (g), we shall denote the roots of
the equation (a${}^{VI}$), in their order, by the symbols
$$\nu_{-\infty},\ldots \, \nu_{-1}, \nu_0, \nu_1,\ldots \, \nu_\infty;
\eqno {\rm (b^{\mit VI})}$$
and choosing $\nu_0$ for that root of (a${}^{VI}$) which has
already been supposed to vanish, we shall have
$$\nu_0 = 0,
\eqno {\rm (c^{\mit VI})}$$
while the other roots will be $>$ or $< 0$, according as their
indices are positive or negative. If the differential
coefficient ${\sc p}_\alpha$ be also supposed to remain always
finite, and to receive no sudden change of value in the immediate
neighbourhood of any root~$\nu$ of (a${}^{VI}$), we shall have,
for values of $\alpha$ in that neighbourhood, the limiting
equation:
$$\lim_{\alpha = \nu} \mathbin{.}
{\sc n}_\alpha (\alpha - \nu)^{-1}
= {\sc p}_\nu;
\eqno {\rm (d^{\mit VI})}$$
and ${\sc p}_\nu$ will be different from $0$, because the real
roots of the equation (a${}^{VI}$) have been supposed unequal.
Conceive also that the integral
$$\int_{-\infty}^\infty d\alpha \,
{\sc n}_{\alpha + \beta \nu} \alpha^{-1}
= \varpi_{\nu,\beta}
\eqno {\rm (e^{\mit VI})}$$
tends to some finite and determined limit, which may perhaps be
different for different roots~$\nu$, and therefore may be thus
denoted,
$$\varpi_{\nu,\infty} = \varpi_\nu,
\eqno {\rm (f^{\mit VI})}$$
as $\beta$ tends to $\infty$, after the given law referred to at
the end of the last article. Then, by writing
$$\alpha = x + \nu + \beta^{-1} y,
\eqno {\rm (g^{\mit VI})}$$
and supposing $\beta$ to be very large, we easily see, by
reasoning as in former articles, that the part of the integral
$$\int_a^b d\alpha \,
{\sc n}_{\beta (\alpha - x)}
{\sc n}_{\alpha - x}^{-1} f_\alpha,
\eqno {\rm (h^{\mit VI})}$$
which corresponds to values of $\alpha - x$ in the neighbourhood
of the root~$\nu$, is very nearly expressed by
$$\varpi_\nu {\sc p}_\nu^{-1} f_{x + \nu};
\eqno {\rm (i^{\mit VI})}$$
and that this expression is accurate at the limit. Instead of
the equation ({\sc s}), we have therefore now this other
equation:
$$\sum\nolimits_\nu \varpi_\nu {\sc p}_\nu^{-1} f_{x + \nu}
= \nabla_\infty \int_a^b d\alpha \,
{\sc s}_{\alpha - x, \beta} f_\alpha;
\eqno {\sc (t)}$$
the sum in the first member being extended to all those
roots~$\nu$ of the equation (a${}^{VI}$), which satisfy the
conditions
$$x + \nu > a,\quad < b.
\eqno {\rm (k^{\mit VI})}$$
If one of the roots~$\nu$ should happen to satisfy the condition
$$x + \nu = a,
\eqno {\rm (l^{\mit VI})}$$
the corresponding term in the first member of ({\sc t}) would be,
by the same principles,
$$\varpi_\nu^\backprime {\sc p}_\nu^{-1} f_a,
\eqno {\rm (m^{\mit VI})}$$
in which
$$\varpi_\nu^\backprime
= \lim_{\beta = \infty} \int_0^\infty d\alpha \,
{\sc n}_{\alpha + \beta \nu} \alpha^{-1}.
\eqno {\rm (n^{\mit VI})}$$
And if a root~$\nu$ of (a${}^{VI}$) should satisfy the condition
$$x + \nu = b,
\eqno {\rm (o^{\mit VI})}$$
the corresponding term in the first member of ({\sc t}) would
then be
$$\varpi_\nu^{\backprime\backprime} {\sc p}_\nu^{-1} f_b,
\eqno {\rm (p^{\mit VI})}$$
in which
$$\varpi_\nu^{\backprime\backprime}
= \lim_{\beta = \infty} \int_{-\infty}^0 d\alpha \,
{\sc n}_{\alpha + \beta \nu} \alpha^{-1}.
\eqno {\rm (q^{\mit VI})}$$
Finally, if a value of $x + \nu$ satisfy the conditions
(k${}^{VI}$), and if the function~$f$ undergo a sudden change of
value for this particular value of the variable on which that
function depends, so that $f = f^{\backprime\backprime}$
immediately before, and $f = f^\backprime$ immediately after the
change, then the corresponding part of the first member of the
formula ({\sc t}) is
$${\sc p}_\nu^{-1} (\varpi_\nu^\backprime f^\backprime
+ \varpi_\nu^{\backprime\backprime} f^{\backprime\backprime}).
\eqno {\rm (r^{\mit VI})}$$
And in the formul{\ae} for $\varpi_\nu$, $\varpi_\nu^\backprime$,
$\varpi_\nu^{\backprime\backprime}$, it is permitted to write
$${\sc n}_{\alpha + \beta \nu} \alpha^{-1}
= \int_0^1 dt \, {\sc p}_{t\alpha + \beta \nu}.
\eqno {\rm (s^{\mit VI})}$$
\bigbreak
[22.]
One of the simplest ways of rendering the integral (e${}^{VI}$)
determinate at its limit, is to suppose that the function
${\sc p}_\alpha$ is of the periodical form which satisfies the
two following equations,
$${\sc p}_{-\alpha} = {\sc p}_\alpha,\quad
{\sc p}_{\alpha + p} = - {\sc p}_\alpha;
\eqno {\rm (t^{\mit VI})}$$
$p$ being some given positive constant. Multiplying these
equations by $d\alpha$, and integrating from $\alpha = 0$, we
find, by (a${}''$),
$${\sc n}_{-\alpha} + {\sc n}_\alpha = 0,\quad
{\sc n}_{\alpha + p} + {\sc n}_\alpha = {\sc n}_p;
\eqno {\rm (u^{\mit VI})}$$
therefore
$${\sc n}_p = {\sc n}_{p \over 2} + {\sc n}_{-{p \over 2}}
= 0,
\eqno {\rm (v^{\mit VI})}$$
and
$${\sc n}_{\alpha + p} = - {\sc n}_\alpha,\quad
{\sc n}_{\alpha + 2p} = {\sc n}_\alpha,\quad \hbox{\&c.}
\eqno {\rm (w^{\mit VI})}$$
Consequently, if the equations (t${}^{VI}$) be satisfied, the
multiples (by whole numbers) of $p$ will all be roots of the
equation (a${}^{VI}$); and reciprocally that equation will have no
other real roots, if we suppose that the function
${\sc p}_\alpha$, which vanishes when $\alpha$ is any odd
multiple of
$\displaystyle {p \over 2}$,
preserves one constant sign between any one such multiple and the
next following, or simply between $\alpha = 0$ and
$\displaystyle \alpha = {p \over 2}$.
We may then, under these conditions, write
$$\nu_i = ip,
\eqno {\rm (x^{\mit VI})}$$
$i$ being any integer number, positive or negative, and $\nu_i$
denoting generally, as in (b${}^{VI}$), any root of the equation
(a${}^{VI}$). And we shall have
$$\int_{-\infty}^\infty d\alpha \,
{\sc n}_{\alpha + kp} \alpha^{-1}
= (-1)^k \varpi,
\eqno {\rm (y^{\mit VI})}$$
$k$ being any integer number, and $\varpi$ still retaining the
same meaning as in the former articles. Also, for any integer
value of $k$,
$${\sc p}_{kp} = (-1)^k {\sc p}_0.
\eqno {\rm (z^{\mit VI})}$$
These things being laid down, let us resume the integral
(e${}^{VI}$), and let us suppose that the law by which $\beta$
increases to $\infty$ is that of coinciding successively with the
several uneven integer numbers $1$, $3$, $5$, \&c., as was
supposed in deducing the formula ({\sc c}). Then $\beta \nu$ in
(e${}^{VI}$) will be an odd or even multiple of $p$, according as
$\nu$ is the one or the other, so that we shall have by
(x${}^{VI}$), (y${}^{VI}$), the following determined expression
for the sought limit (f${}^{VI}$):
$$\varpi_{\nu_i} = (-1)^i \varpi;
\eqno {\rm (a^{\mit VII})}$$
but also, by (x${}^{VI}$), (z${}^{VI}$),
$${\sc p}_{\nu_i} = (-1)^i {\sc p}_0;
\eqno {\rm (b^{\mit VII})}$$
therefore
$$\varpi_\nu {\sc p}_\nu^{-1} = \varpi {\sc p}_0^{-1},
\eqno {\rm (c^{\mit VII})}$$
the value of this expression being thus the same for all the
roots of (a${}^{VI}$). At the same time, in (i${}^{VI}$),
$$f_{x + \nu} = f_{x + ip};
\eqno {\rm (d^{\mit VII})}$$
the equation ({\sc t}) becomes therefore now
$$\sum\nolimits_i f_{x + ip}
= \varpi^{-1} {\sc p}_0 \nabla_\infty
\int_a^b d\alpha \, {\sc s}_{\alpha - x, \beta} f_\alpha,
\eqno {\sc (u)}$$
$\beta$ tending to infinity by passing through the successive
positive odd numbers, and $i$ receiving all integer values which
allow $x + ip$ to be comprised between the limits $a$ and $b$.
If any integer value of $i$ render $x + ip$ equal to either of
these limits, the corresponding term of the sum in the first
member of ({\sc u}) is to be ${1 \over 2} f_a$, or
${1 \over 2} f_b$; and if the function~$f$ receive any sudden
change of value between the same limits of integration,
corresponding to a value of the variable which is of the form
$x + ip$, the term introduced thereby will be of the form
${1 \over 2} f^\backprime + {1 \over 2} f^{\backprime\backprime}$.
For example, when
$${\sc p}_\alpha = \cos \alpha,\quad
\varpi = \pi,\quad
p = \pi,
\eqno {\rm (e^{\mit VII})}$$
we obtain the following known formula, instead of (r${}''$),
$$\sum\nolimits_i f_{x + i\pi}
= \pi^{-1} \sum\nolimits_{(n) -\infty}^{\phantom{(n) -} \infty}
\int_a^b d\alpha \,
\cos (2n\alpha - 2nx) \, f_\alpha;
\eqno {\rm (f^{\mit VII})}$$
which may be transformed in various ways, by changing the limits
of integration, and in which halves of functions are to be
introduced in extreme cases, as above.
On the other hand, if the law of increase of $\beta$ be, as in
({\sc r}), that of coinciding successively with large and larger
even numbers, then
$$\varpi_\nu = \varpi,\quad
{\sc p}_\nu = \mp {\sc p}_0,
\eqno {\rm (g^{\mit VII})}$$
and the equation ({\sc t}) becomes
$$\sum\nolimits_i (-1)^i f_{x + i\pi}
= \varpi^{-1} {\sc p}_0 \nabla_\infty
\int_a^b d\alpha \, {\sc s}_{\alpha - x, \beta} f_\alpha.
\eqno {\sc (v)}$$
For example, in the case (e${}^{VII}$), we obtain this extension
of formula (b${}^{V}$),
$$\sum\nolimits_i (-1)^i f_{x + i\pi}
= \pi^{-1} \sum\nolimits_{(n) -\infty}^{\phantom{(n) -} \infty}
\int_a^b d\alpha \,
\cos (\overline{2n - 1} \mathbin{.} \overline{\alpha - x}) \,
f_\alpha.
\eqno {\rm (h^{\mit VII})}$$
We may verify the equations (f${}^{VII}$) (h${}^{VII}$) by
remarking that both members of the former equation remain
unchanged, and that both members of the latter are changed in
sign, when $x$ is increased by $\pi$. A similar verification of
the equations ({\sc u}) and ({\sc v}) requires that in general
the expression
$$\nabla_\infty \int_a^b d\alpha \,
{\sc s}_{\alpha - x, \beta} f_\alpha
\eqno {\rm (i^{\mit VII})}$$
should either receive no change, or simply change its sign, when
$x$ is increased by $p$, according as $\beta$ tends to $\infty$
by coinciding with large and odd or with large and even numbers.
\bigbreak
[23.]
In all the examples hitherto given to illustrate the general
formul{\ae} of this paper, it has been supposed for the sake of
simplicity, that the function~${\sc p}$ is a cosine; and this
supposition has been sufficient to deduce, as we have seen, a
great variety of known results. But it is evident that this
function~${\sc p}$ may receive many other forms, consistently
with the suppositions made in deducing those general formul{\ae};
and many new results may thus be obtained by the method of the
foregoing articles.
For instance, it is permitted to suppose
$${\sc p}_\alpha = 1, \hbox{ if } \alpha^2 < 1;
\eqno {\rm (k^{\mit VII})}$$
$${\sc p}_1 = 0;
\eqno {\rm (l^{\mit VII})}$$
$${\sc p}_{\alpha + 2} = - {\sc p}_\alpha;
\eqno {\rm (m^{\mit VII})}$$
and then the equations (t${}^{VI}$) of the last article, with all
that were deduced from them, will still hold good. We shall now
have
$$p = 2;
\eqno {\rm (n^{\mit VII})}$$
and the definite integral denoted by $\varpi$, and defined by the
equation (r${}'$), may now be computed as follows. Because the
function ${\sc n}_\alpha$ changes sign with $\alpha$, we have
$$\varpi =2 \int_0^\infty d\alpha \, {\sc n}_\alpha \alpha^{-1};
\eqno {\rm (o^{\mit VII})}$$
but
$$\left. \vcenter{\halign{\hfil #&&\enspace \hfil #\cr
${\sc n}_\alpha = \alpha$,& from $\alpha = 0$& to $\alpha = 1$;\cr
$\ldots 2 - \alpha$,& $\ldots \, 1$& $\ldots \, 3$;\cr
$\ldots \alpha - 4$,& $\ldots \, 3$& $\ldots \, 4$;\cr}}
\quad
\right\}
\eqno {\rm (p^{\mit VII})}$$
and
$${\sc n}_{\alpha + 4} = {\sc n}_\alpha.
\eqno {\rm (q^{\mit VII})}$$
Hence
$$\int_0^4 d\alpha \, {\sc n}_\alpha \alpha^{-1}
= 6 \log 3 - 4 \log 4,
\eqno {\rm (r^{\mit VII})}$$
the logarithms being Napierian; and generally, if $m$ be any
positive integer number, or zero,
$$\eqalignno{
\int_{4m}^{4m+4} d\alpha \, {\sc n}_\alpha \alpha^{-1}
&= \int_0^4 d\alpha \, {\sc n}_\alpha (\alpha + 4m)^{-1} \cr
&= 4m \log (4m) - (8m + 2) \log (4m + 1) \cr
&\mathrel{\phantom{=}}
+ (8m + 6) \log (4m + 3) - (4m + 4) \log (4m + 4) \cr
&= \sum\nolimits_{(k) 1}^{\phantom{(k)} \infty}
{1 - 2^{-2k} \over k (k + {1 \over 2})}
(2m + 1)^{-2k}.
& {\rm (s^{\mit VII})} \cr}$$
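Since ${\sc n}_\alpha$ is given piecewise by (p${}^{VII}$), the values (r${}^{VII}$) and (s${}^{VII}$) admit a direct numerical check. A sketch (modern Python, not part of Hamilton's text; function names are illustrative):

```python
import math

# Check of (r^VII) and (s^VII), assuming the triangular wave (p^VII):
#   N(alpha) = alpha on [0,1],  2 - alpha on [1,3],  alpha - 4 on [3,4],
# extended with period 4 as in (q^VII).

def N(alpha):
    a = alpha % 4.0
    if a <= 1.0:
        return a
    if a <= 3.0:
        return 2.0 - a
    return a - 4.0

def block_integral(m, steps=100000):
    # midpoint rule for Int_{4m}^{4m+4} N(alpha)/alpha d(alpha)
    h = 4.0 / steps
    return sum(N(4*m + (j + 0.5)*h) / (4*m + (j + 0.5)*h)
               for j in range(steps)) * h

# (r^VII): the m = 0 block equals 6 log 3 - 4 log 4 (Napierian logarithms)
b0 = block_integral(0)
print(b0, 6*math.log(3) - 4*math.log(4))

# (s^VII) for m = 1: the same block as a series in (2m+1)^(-2k)
b1 = block_integral(1)
s7 = sum((1 - 4.0**(-k)) / (k*(k + 0.5)) * 3.0**(-2*k) for k in range(1, 60))
print(b1, s7)
```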
But, by (h${}^{V}$),
$$\sum\nolimits_{(m) 0}^{\phantom{(m)} \infty} (2m + 1)^{-2k}
= {\textstyle {1 \over 2}}
\left( {\pi \over 2} \right)^{2k} \omega_{2k},
\eqno {\rm (t^{\mit VII})}$$
if $k$ be any integer number $> 0$; therefore
$$\varpi
= \sum\nolimits_{(k) 1}^{\phantom{(k)} \infty}
{1 - 2^{-2k} \over k (k + {1 \over 2})}
\left( {\pi \over 2} \right)^{2k} \omega_{2k};
\eqno {\rm (u^{\mit VII})}$$
$\omega_{2k}$ being by (q${}^{V}$) the coefficient of $x^{2k-1}$
in the development of $\tan x$. From this last property, we have
$$\sum\nolimits_{(k) 1}^{\phantom{(k)} \infty}
{\omega_{2k} x^{2k} \over k (k + {1 \over 2})}
= {4 \over x} \left( \int_0^x dx \right)^2 \tan x
= {4 \over x} \int_0^x dx \, \log \sec x;
\eqno {\rm (v^{\mit VII})}$$
therefore, substituting successively the values
$\displaystyle x = {\pi \over 2}$
and
$\displaystyle x = {\pi \over 4}$,
and subtracting the result of the latter substitution from that
of the former, we find, by (u${}^{VII}$),
$$\eqalignno{\varpi
&= {8 \over \pi}
\left(
\int_{\pi \over 4}^{\pi \over 2}
- \int_0^{\pi \over 4}
\right)
dx \, \log \sec x \cr
&= {8 \over \pi} \int_{\pi \over 4}^{\pi \over 2}
dx \, \log \tan x \cr
&= {8 \over \pi} \int_0^{\pi \over 4}
dx \, \log \mathop{\rm cotan} x.
& {\rm (w^{\mit VII})} \cr}$$
Such, in the present question, is an expression for the constant
$\varpi$; its numerical value may be approximately calculated by
multiplying the Napierian logarithm of ten by the double of the
average of the ordinary logarithms of the cotangents of the
middles of any large number of equal parts into which the first
octant may be divided; thus, if we take the ninetieth part of the
sum of the logarithms of the cotangents of the ninety angles
$\displaystyle {1^\circ \over 4}$,
$\displaystyle {3^\circ \over 4}$,
$\displaystyle {5^\circ \over 4},\ldots$
$\displaystyle {177^\circ \over 4}$,
$\displaystyle {179^\circ \over 4}$,
as given by the ordinary tables, we obtain nearly, as the average
of these ninety logarithms, the number $0,5048$; of which the
double, being multiplied by the Napierian logarithm of ten,
gives, nearly, the number $2,325$, as an approximate value of the
constant~$\varpi$. But a much more accurate value may be
obtained with little more trouble, by computing separately the
doubles of the part (r${}^{VII}$), and of the sum of
(s${}^{VII}$) taken from $m = 1$ to $m = \infty$; for thus we
obtain the expression
$$\varpi = 12\log 3 - 8 \log 4
+ 2 \sum\nolimits_{(k) 1}^{\phantom{(k)} \infty}
{1 - 2^{-2k} \over k (k + {1 \over 2})}
\sum\nolimits_{(m) 1}^{\phantom{(m)} \infty}
(2m + 1)^{-2k},
\eqno {\rm (x^{\mit VII})}$$
in which each sum relative to $m$ can be obtained from known
results, and the sum relative to $k$ converges tolerably fast; so
that the second line of the expression (x${}^{VII}$) is thus
found to be nearly $= 0,239495$, while the first line is nearly
$= 2,092992$; and the whole value of the expression
(x${}^{VII}$) is nearly
$$\varpi = 2,332487.
\eqno {\rm (y^{\mit VII})}$$
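The rough estimate by averaged cotangent logarithms, described above, and the sharper value (y${}^{VII}$), are both easy to reproduce numerically. A sketch (modern Python, not part of the original text); the closing comparison uses the series of (h${}^{VIII}$) below, whose sum is now called Catalan's constant:

```python
import math

# Hamilton's rough procedure: average the common (base-10) logarithms of
# the cotangents of the midpoints of ninety equal parts of the first
# octant, double it, and multiply by the Napierian logarithm of ten.

angles_deg = [(2*k - 1) / 4.0 for k in range(1, 91)]   # 1/4, 3/4, ..., 179/4 deg
mean_log10_cot = sum(math.log10(1.0 / math.tan(math.radians(d)))
                     for d in angles_deg) / 90.0
rough = 2.0 * mean_log10_cot * math.log(10.0)
print(mean_log10_cot, rough)       # roughly 0.5048 and 2.325

# Sharper value: varpi = (8/pi) * sum (-1)^n (2n+1)^(-2), summed in pairs;
# compare with Hamilton's 2,332487.
catalan = sum(1.0/(4*n + 1)**2 - 1.0/(4*n + 3)**2 for n in range(500000))
print(8.0 * catalan / math.pi)
```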
There is even an advantage in summing the double of the
expression (s${}^{VII}$) only from $m = 2$ to $m = \infty$,
because the series relative to $k$ converges then more rapidly;
and having thus found
$\displaystyle 2 \int_8^\infty d\alpha \, {\sc n}_\alpha \alpha^{-1}$,
it is only necessary to add thereto the expression
$$2 \int_0^8 d\alpha \, {\sc n}_\alpha \alpha^{-1}
= 12 \log 3 - 20 \log 5 + 28 \log 7 - 16 \log 8.
\eqno {\rm (z^{\mit VII})}$$
The form of the function~${\sc p}$ and the value of the
constant~$\varpi$ being determined as in the present article, it
is permitted to substitute them in the general equations of this
paper; and thus to deduce new transformations for portions of
arbitrary functions, which might have been employed instead of
those given by {\sc Fourier} and {\sc Poisson}, if the
discontinuous function~${\sc p}$, which receives alternately the
values $1$, $0$, and $-1$, had been considered simpler in its
properties than the trigonometrical function cosine.
\bigbreak
[24.]
Indeed, when the conditions (t${}^{VI}$) are satisfied, the
function ${\sc p}_x$ can be developed according to cosines of the
odd multiples of
$\displaystyle {\pi x \over p}$,
by means of the formula (y${}'''$), which here becomes, by
changing $l$ to $\displaystyle {p \over 2}$, and $f$ to
${\sc p}$,
$${\sc p}_x
= \sum\nolimits_{(n) 1}^{\phantom{(n)} \infty}
{\sc a}_{2n-1} \cos {(2n - 1) \pi x \over p},
\eqno {\rm (a^{\mit VIII})}$$
in which
$${\sc a}_{2n-1}
= {4 \over p} \int_0^{p \over 2} d\alpha \,
\cos {(2n - 1) \pi \alpha \over p} {\sc p}_\alpha;
\eqno {\rm (b^{\mit VIII})}$$
the function ${\sc n}_x$ at the same time admitting a development
according to sines of the same odd multiples, namely
$${\sc n}_x
= {p \over \pi} \sum\nolimits_{(n) 1}^{\phantom{(n)} \infty}
{{\sc a}_{2n-1} \over 2n - 1} \sin {(2n - 1) \pi x \over p};
\eqno {\rm (c^{\mit VIII})}$$
and the constant~$\varpi$ being equal to the following series,
$$\varpi
= p \sum\nolimits_{(n) 1}^{\phantom{(n)} \infty}
{{\sc a}_{2n-1} \over 2n - 1}.
\eqno {\rm (d^{\mit VIII})}$$
Thus, in the case of the last article, where $p = 2$, and
${\sc p}_\alpha = 1$ from $\alpha = 0$ to $\alpha = 1$, we have
$${\sc a}_{2n-1} = {4 \over \pi} {(-1)^{n+1} \over 2n - 1};
\eqno {\rm (e^{\mit VIII})}$$
$${\sc p}_x
= {4 \over \pi}
\left(
\cos {\pi x \over 2}
- 3^{-1} \cos {3 \pi x \over 2}
+ 5^{-1} \cos {5 \pi x \over 2}
- \cdots
\right);
\eqno {\rm (f^{\mit VIII})}$$
$${\sc n}_x
= {8 \over \pi^2}
\left(
\sin {\pi x \over 2}
- 3^{-2} \sin {3 \pi x \over 2}
+ 5^{-2} \sin {5 \pi x \over 2}
- \cdots
\right);
\eqno {\rm (g^{\mit VIII})}$$
$$\varpi
= {8 \over \pi} ( 1^{-2} - 3^{-2} + 5^{-2} - 7^{-2} + \cdots );
\eqno {\rm (h^{\mit VIII})}$$
so that, from the comparison of (w${}^{VII}$) and
(h${}^{VIII}$), the following relation results:
$$\int_0^{\pi \over 4} dx \, \log \cot x
= \sum\nolimits_{(n) 0}^{\phantom{(n)} \infty}
(-1)^n (2n + 1)^{-2}.
\eqno {\rm (i^{\mit VIII})}$$
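The developments (f${}^{VIII}$) and (g${}^{VIII}$) can themselves be checked numerically at a sample point: for $x = {1 \over 2}$ the partial sums should approach ${\sc p}_{1/2} = 1$ and ${\sc n}_{1/2} = {1 \over 2}$. A sketch (modern Python, not part of the original text; the truncation limits are arbitrary):

```python
import math

# Partial Fourier sums (f^VIII) and (g^VIII) evaluated at x = 1/2, where
# the square wave P and the triangular wave N of article 23 take the
# values 1 and 1/2 respectively.

x = 0.5
terms = 100001
P_sum = (4/math.pi) * sum((-1)**(n+1) * math.cos((2*n - 1)*math.pi*x/2)
                          / (2*n - 1) for n in range(1, terms))
N_sum = (8/math.pi**2) * sum((-1)**(n+1) * math.sin((2*n - 1)*math.pi*x/2)
                             / (2*n - 1)**2 for n in range(1, terms))
print(P_sum, N_sum)   # expected: close to 1 and 0.5
```

The sine series converges much faster than the cosine series, as the exponents $-1$ and $-2$ in (f${}^{VIII}$) and (g${}^{VIII}$) suggest.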
But most of the suppositions made in former articles may be
satisfied, without assuming for the function~${\sc p}$ the
periodical form assigned by the conditions (t${}^{VI}$). For
example, we might assume
$${\sc p}_\alpha
= {4 \over \pi} \int_0^\pi d\theta \,
\sin \theta^2 \cos (2 \alpha \sin \theta);
\eqno {\rm (k^{\mit VIII})}$$
which would give, by (a${}''$) and (b${}''$),
$${\sc n}_\alpha
= {2 \over \pi} \int_0^\pi d\theta \,
\sin \theta \sin (2 \alpha \sin \theta);
\eqno {\rm (l^{\mit VIII})}$$
$${\sc m}_\alpha
= {1 \over \pi} \int_0^\pi d\theta \,
\mathop{\rm vers} (2 \alpha \sin \theta);
\eqno {\rm (m^{\mit VIII})}$$
and finally, by (r${}'$)
$$\varpi = 2 \int_0^\pi d\theta \, \sin \theta = 4.
\eqno {\rm (n^{\mit VIII})}$$
This expression (k${}^{VIII}$) for ${\sc p}_\alpha$ satisfies all
the conditions of the ninth article; for it is clear that it gives
a value to ${\sc n}_\alpha$, which is numerically less than
$\displaystyle {4 \over \pi}$; and the equation
$${\sc m}_\alpha = 1,
\eqno {\rm (o^{\mit VIII})}$$
which is of the form (g), is satisfied by all the infinitely many
real and unequal roots of the equation
$$\int_0^\pi d\theta \, \cos (2 \alpha \sin \theta) = 0,
\eqno {\rm (p^{\mit VIII})}$$
which extend from $\alpha = -\infty$ to $\alpha = \infty$, and of
which the interval between any one and the next following is
never greater than $\pi$, nor even so great; because (as it is
not difficult to prove) these several roots are contained in
alternate or even octants, in such a manner that we may write
$$\alpha_n > {n \pi \over 2} - {\pi \over 4},
\quad < {n \pi \over 2}.
\eqno {\rm (q^{\mit VIII})}$$
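The bracketing (q${}^{VIII}$) is readily tested numerically; in modern notation the roots in question are $\alpha_n = j_{0,n}/2$, where $j_{0,n}$ are the zeros of the Bessel function $J_0$. A sketch (modern Python, not part of the original text):

```python
import math

# Check of (q^VIII): the positive roots alpha_n of
#   Int_0^pi cos(2*alpha*sin(theta)) d(theta) = 0        (p^VIII)
# should lie between n*pi/2 - pi/4 and n*pi/2.

def f(alpha, steps=2000):
    # midpoint rule for (1/pi) * Int_0^pi cos(2*alpha*sin(theta)) d(theta)
    h = math.pi / steps
    return sum(math.cos(2*alpha*math.sin((j + 0.5)*h))
               for j in range(steps)) * h / math.pi

def bisect(lo, hi, tol=1e-9):
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fm = f(mid)
        if (fm > 0) == (flo > 0):
            lo, flo = mid, fm
        else:
            hi = mid
    return 0.5 * (lo + hi)

for n in range(1, 5):
    lo, hi = n*math.pi/2 - math.pi/4, n*math.pi/2
    root = bisect(lo, hi)
    print(n, root, lo < root < hi)   # each root falls inside its octant
```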
We may, therefore, substitute the expression (k${}^{VIII}$) for
${\sc p}$, in the formul{\ae} ({\sc a}), ({\sc b}), ({\sc c}),
\&c.; and we find, by ({\sc b}), if $x > a$, $< b$,
$$f_x
= \pi^{-1} \int_a^b d\alpha \, \int_0^\infty d\beta \,
\int_0^\pi d\theta \, \sin \theta^2 \,
\cos \{ 2\beta (\alpha - x) \sin \theta \} f_\alpha;
\eqno {\rm (r^{\mit VIII})}$$
that is,
$$f_x
= {1 \over 2\pi} \lim_{\beta = \infty}
\int_0^\pi d\theta \, \sin \theta
\int_a^b d\alpha \,
\sin \{ 2\beta (\alpha - x) \sin \theta \}
(\alpha - x)^{-1} f_\alpha;
\eqno {\rm (s^{\mit VIII})}$$
a theorem which may be easily proved {\it \`{a} posteriori}, by
the principles of fluctuating functions, because those principles
show, that (if $x$ be comprised between the limits of
integration) the limit relative to $\beta$ of the integral
relative to $\alpha$, in (s${}^{VIII}$), is equal to $\pi f_x$.
In like manner, the theorem ({\sc c}), when applied to the
present form of the function~${\sc p}$, gives the following other
expression for the arbitrary function~$f_x$:
$$f_x
= {\textstyle {1 \over 2}} \int_a^b d\alpha \, f_\alpha
+ \sum\nolimits_{(n) 1}^{\phantom{(n)} \infty}
\int_a^b d\alpha \, f_\alpha
{\displaystyle
\int_0^\pi d\theta \, \sin \theta \,
\sin ( 2(\alpha - x) \sin \theta)
\cos ( 4n (\alpha - x) \sin \theta)
\over \displaystyle
\int_0^\pi d\theta \, \sin \theta \,
\sin ( 2(\alpha - x) \sin \theta)};
\eqno {\rm (t^{\mit VIII})}$$
$x$ being between $a$ and $b$, and $b - a$ being not greater than
the least positive root~$\nu$ of the equation
$${1 \over \nu} \int_0^\pi d\theta \,
\sin \theta \sin (2 \nu \sin \theta)
= 0.
\eqno {\rm (u^{\mit VIII})}$$
And if we wish to prove, {\it \`{a} posteriori}, this theorem of
transformation (t${}^{VIII}$), by the same principles of
fluctuating functions, we have only to observe that
$$1 + 2 \sum\nolimits_{(i) 1}^{\phantom{(i)} n} \cos 2 i y
= {\sin (2ny + y) \over \sin y},
\eqno {\rm (v^{\mit VIII})}$$
and therefore that the second member of (t${}^{VIII}$) may be put
under the form
$$\lim_{n = \infty}
\int_a^b d\alpha \, f_\alpha
{\displaystyle
\int_0^\pi d\theta \, \sin \theta \,
\sin ( (4n + 2) (\alpha - x) \sin \theta)
\over \displaystyle
2 \int_0^\pi d\theta \, \sin \theta \,
\sin ( 2(\alpha - x) \sin \theta)};
\eqno {\rm (w^{\mit VIII})}$$
in which the presence of the fluctuating factor
$$\sin ( (4n + 2) (\alpha - x) \sin \theta),$$
combined with the condition that $\alpha - x$ is numerically less
than the least root of the equation (u${}^{VIII}$), shows that we
need only attend to values of $\alpha$ indefinitely near to $x$,
and may therefore write in the denominator,
$$\int_0^\pi d\theta \, \sin \theta \,
\sin ( 2(\alpha - x) \sin \theta)
= \pi (\alpha - x);
\eqno {\rm (x^{\mit VIII})}$$
for thus, by inverting the order of the two remaining
integrations, that is by writing
$$\int_a^b d\alpha \, \int_0^\pi d\theta \, \ldots
= \int_0^\pi d\theta \, \int_a^b d\alpha \, \ldots,
\eqno {\rm (y^{\mit VIII})}$$
we find first
$$\lim_{n = \infty}
\int_a^b d\alpha \, f_\alpha
{\displaystyle
\sin ( (4n + 2) (\alpha - x) \sin \theta)
\over 2 \pi (\alpha - x)}
= {\textstyle {1 \over 2}} f_x,
\eqno {\rm (z^{\mit VIII})}$$
for every value of $\theta$ between $0$ and $\pi$, and of $x$
between $a$ and $b$; and finally,
$${\textstyle {1 \over 2}} f_x \int_0^\pi d\theta \, \sin \theta
= f_x.$$
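The sifting behaviour asserted for (s${}^{VIII}$) can be exhibited numerically with a large but finite $\beta$. A sketch (modern Python, not part of the original text; the test function $f(\alpha) = e^\alpha$, the limits, and the discretization are illustrative choices):

```python
import math

# Numerical check of (s^VIII): for large beta,
#   (1/(2*pi)) Int_0^pi d(theta) sin(theta)
#     Int_a^b d(alpha) sin(2*beta*(alpha - x)*sin(theta))/(alpha - x) f(alpha)
# should approach f(x).

def f(t):
    return math.exp(t)

a, b, x, beta = 0.0, 1.0, 0.5, 300.0
n_theta, n_alpha = 100, 4000
h_t = math.pi / n_theta
h_a = (b - a) / n_alpha

total = 0.0
for i in range(n_theta):
    theta = (i + 0.5) * h_t
    c = 2.0 * beta * math.sin(theta)
    inner = 0.0
    for j in range(n_alpha):
        alpha = a + (j + 0.5) * h_a
        u = alpha - x
        kern = c if abs(u) < 1e-12 else math.sin(c * u) / u
        inner += kern * f(alpha)
    total += math.sin(theta) * inner * h_a
result = total * h_t / (2.0 * math.pi)
print(result, f(x))   # should nearly agree
```

For each $\theta$ the inner integral is close to $\pi f_x$, and the outer factor $\int_0^\pi \sin\theta \, d\theta = 2$ then reproduces $f_x$, as in the text.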
\bigbreak
[25.]
The results of the foregoing articles may be extended by
introducing, under the functional signs ${\sc n}$, ${\sc p}$, a
product such as $\beta \gamma$, instead of $\beta \alpha$,
$\gamma$ being an arbitrary function of $\alpha$; and by
considering the integral
$$\int_a^b d\alpha\, {\sc n}_{\beta \gamma} {\sc f}_\alpha,
\eqno {\rm (a^{\mit IX})}$$
in which ${\sc f}$ is any function which remains finite between
the limits of integration. Since $\gamma$ is a function of
$\alpha$, it may be denoted by $\gamma_\alpha$, and $\alpha$ will
be reciprocally a function of $\gamma$, which may be denoted
thus:
$$\alpha = \phi_{\gamma_\alpha}.
\eqno {\rm (b^{\mit IX})}$$
While $\alpha$ increases from $a$ to $b$, we shall suppose, at
first, that the function $\gamma_\alpha$ increases constantly and
continuously from $\gamma_a$ to $\gamma_b$, in such a manner as
to give always, within this extent of variation, a finite and
determined and positive value to the differential coefficient of
the function~$\phi$, namely,
$${d\alpha \over d\gamma} = \phi'_\gamma.
\eqno {\rm (c^{\mit IX})}$$
We shall also express, for abridgment, the product of this
coefficient and of the function ${\sc f}$ by another function of
$\gamma$, as follows,
$$\phi'_\gamma {\sc f}_\alpha = \psi_\gamma.
\eqno {\rm (d^{\mit IX})}$$
Then the integral (a${}^{IX}$) becomes
$$\int_{\gamma_a}^{\gamma_b} d\gamma \,
{\sc n}_{\beta \gamma} \psi_\gamma;
\eqno {\rm (e^{\mit IX})}$$
and a rigorous expression for it may be obtained by the process
of the fourth article, namely
$$ \left(
\int_{\gamma_a}^{\beta^{-1} \alpha_n}
+ \int_{\beta^{-1} \alpha_{n+m}}^{\gamma_b}
\right)
d\gamma \, {\sc n}_{\beta \gamma} \psi_\gamma
+ \theta \beta^{-1} (\alpha_{n+m} - \alpha_n) {\rm c} \delta;
\eqno {\rm (f^{\mit IX})}$$
in which, as before, $\alpha_n$, $\alpha_{n+m}$ are suitably
chosen roots of the equation (g); ${\rm c}$ is a finite constant;
$\theta$ is included between the limits $\pm 1$; and $\delta$ is
the difference between two values of the function $\psi_\gamma$,
corresponding to two values of the variable~$\gamma$ of which the
difference is less than $\beta^{-1} {\rm b}$, ${\rm b}$ being
another finite constant. The integral (a${}^{IX}$) therefore
diminishes indefinitely when $\beta$ increases indefinitely; and
thus, or simply by the theorem ({\sc z}) combined with the
expression (e${}^{IX}$), we have, rigorously, at the limit,
without supposing here that ${\sc n}_0$ vanishes,
$$\int_a^b d\alpha \, {\sc n}_{\infty \gamma} {\sc f}_\alpha
= 0.
\eqno {\sc (w)}$$
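The theorem ({\sc w}) can be illustrated numerically with ${\sc n} = \sin$ and a $\gamma$ that is stationary only at a single point. A sketch (modern Python, not part of the original text; the choice $\gamma_\alpha = \alpha^2$ is illustrative):

```python
import math

# Illustration of theorem (W): with the fluctuating function N = sin and
# gamma(alpha) = alpha**2 (stationary only at the single point alpha = 0),
# the integral Int_0^1 sin(beta * gamma) d(alpha) tends to 0 as beta grows.

def integral(beta, steps=200000):
    h = 1.0 / steps
    return sum(math.sin(beta * ((j + 0.5) * h)**2) for j in range(steps)) * h

vals = [abs(integral(b)) for b in (10.0, 100.0, 1000.0, 10000.0)]
print(vals)   # should decrease toward zero
```

The decay here is only of order $\beta^{-1/2}$, on account of the stationary point at $\alpha = 0$; where $\gamma'$ is bounded away from zero the decay is of order $\beta^{-1}$.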
The same conclusion is easily obtained, by reasonings almost the
same, for the case where $\gamma$ continually decreases from
$\gamma_a$ to $\gamma_b$, in such a manner as to give, within
this extent of variation, a finite and determined and negative
value to the differential coefficient (c${}^{IX}$). And with
respect to the case where the function~$\gamma$ is for a moment
stationary in value, so that its differential coefficient
vanishes between the limits of integration, it is sufficient to
observe that although $\psi$ in (e${}^{IX}$) becomes then
infinite, yet ${\sc f}$ in (a${}^{IX}$) remains finite, and the
integral of the finite product
$d\alpha \, {\sc n}_{\beta \gamma} {\sc f}_\alpha$,
taken between infinitely near limits, is zero. Thus, generally,
the theorem ({\sc w}), which is an extension of the theorem
({\sc z}), holds good between any finite limits $a$ and $b$, if
the function~${\sc f}$ be finite between those limits, and if,
between the same limits of integration, the function~$\gamma$
never remain unchanged throughout the whole extent of any finite
change of $\alpha$.
\bigbreak
[26.]
It may be noticed here, that if $\beta$ be only very large,
instead of being infinite, an approximate expression for the
integral (a${}^{IX}$) may be obtained, on the same principles, by
attending only to values of $\alpha$ which differ very little
from those which render the coefficient (c${}^{IX}$) infinite.
For example, if we wish to find an approximate expression for a
large root of the equation (p${}^{VIII}$), or to express
approximately the function
$$f_\beta
= {1 \over \pi} \int_0^\pi d\alpha \, \cos (2 \beta \sin \alpha),
\eqno {\rm (g^{\mit IX})}$$
when $\beta$ is a large positive quantity, we need only attend to
values of $\alpha$ which differ little from
$\displaystyle {\pi \over 2}$;
making then
$$\sin \alpha = 1 - y^2,\quad
d\alpha = {2 \, dy \over \sqrt{2 - y^2}},
\eqno {\rm (h^{\mit IX})}$$
and neglecting $y^2$ in the denominator of this last expression,
the integral (g${}^{IX}$) becomes
$$f_\beta
= {\sc a}_\beta \cos 2\beta + {\sc b}_\beta \sin 2\beta,
\eqno {\rm (i^{\mit IX})}$$
in which, nearly,
$$\left. \eqalign{
{\sc a}_\beta
&= {\surd 2 \over \pi} \int_{-\infty}^\infty dy \,
\cos (2 \beta y^2)
= {1 \over \sqrt{2 \pi \beta}};\cr
{\sc b}_\beta
&= {\surd 2 \over \pi} \int_{-\infty}^\infty dy \,
\sin (2 \beta y^2)
= {1 \over \sqrt{2 \pi \beta}};\cr}
\right\}
\eqno {\rm (k^{\mit IX})}$$
so that the large values of $\beta$ which make the function
(g${}^{IX}$) vanish are nearly of the form
$${n \pi \over 2} - {\pi \over 8},
\eqno {\rm (l^{\mit IX})}$$
$n$ being an integer number; and such is therefore the
approximate form of the large roots $\alpha_n$ of the equation
(p${}^{VIII}$): results which agree with the relations
(q${}^{VIII}$), and to which {\sc Poisson} has been conducted, in
connexion with another subject, and by an entirely different
analysis.
The theory of fluctuating functions may also be employed to
obtain a more close approximation; for instance, it may be shown,
by reasonings of the kind lately employed, that the definite
integral (g${}^{IX}$) admits of being expressed (more accurately
as $\beta$ is greater) by the following semiconvergent series, of
which the first terms have been assigned by {\sc Poisson}:
$$f_\beta
= {1 \over \sqrt{\pi \beta}}
\sum\nolimits_{(i) 0}^{\phantom{(i)} \infty}
[0]^{-i} ([-{\textstyle {1 \over 2}}]^i)^2 (4\beta)^{-i}
\cos \left( 2\beta - {\pi \over 4} - {i\pi \over 2} \right);
\eqno {\rm (m^{\mit IX})}$$
and in which, according to a known notation of factorials,
$$\left. \eqalign{
[0]^{-i}
&= 1^{-1} \mathbin{.} 2^{-1} \mathbin{.} 3^{-1} \mathbin{.}
\ldots \, i^{-1};\cr
[-{\textstyle {1 \over 2}}]^i
&= {-1 \over 2} \mathbin{.} {-3 \over 2} \mathbin{.}
{-5 \over 2} \, \cdots \, {1 - 2i \over 2}.\cr}
\right\}
\eqno {\rm (n^{\mit IX})}$$
For the value $\beta = 20$, the 3 first terms of the series
(m${}^{IX}$) give
$$\left. \eqalign{
f_{20}
&= \left( 1 - {9 \over 204800} \right)
{\cos 86^\circ \, 49' \, 52'' \over \sqrt{ 20 \pi } }
+ {1 \over 320}
{\sin 86^\circ \, 49' \, 52'' \over \sqrt{ 20 \pi } } \cr
&= 0,0069736 + 0,0003936 = + 0,0073672.\cr}
\right\}
\eqno {\rm (o^{\mit IX})}$$
For the same value of $\beta$, the sum of the first sixty terms
of the ultimately convergent series
$$f_\beta
= \sum\nolimits_{(i) 0}^{\phantom{(i)} \infty}
([0]^{-i})^2 (-\beta^2)^i
\eqno {\rm (p^{\mit IX})}$$
gives
$$\left. \eqalign{
f_{20} &= + 7 \, 447 \, 387 \, 396 \, 709 \, 949,9657957 \cr
&\mathrel{\phantom{=}} - 7 \, 447 \, 387 \, 396 \, 709 \, 949,9584289 \cr
&= + 0,0073668.\cr}
\right\}
\eqno {\rm (q^{\mit IX})}$$
The two expressions (m${}^{IX}$) (p${}^{IX}$) therefore agree,
and we may conclude that the following numerical value is very
nearly correct:
$${1 \over \pi} \int_0^\pi d\alpha \, \cos (40 \sin \alpha)
= + 0,007367.
\eqno {\rm (r^{\mit IX})}$$
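The value (r${}^{IX}$) — in modern notation, the Bessel function $J_0(40)$ — is easy to confirm by direct quadrature, and the first terms of the semiconvergent series (m${}^{IX}$) can be checked against it. A sketch (modern Python, not part of the original text):

```python
import math

# Check of (r^IX): (1/pi) * Int_0^pi cos(40 sin(alpha)) d(alpha) against
# Hamilton's value 0.007367, and against the three first terms of the
# semiconvergent series (m^IX), as in (o^IX).

steps = 20000
h = math.pi / steps
direct = sum(math.cos(40.0 * math.sin((j + 0.5) * h))
             for j in range(steps)) * h / math.pi

beta = 20.0
asym = (1.0 / math.sqrt(math.pi * beta)) * (
      (1 - 9.0/204800.0) * math.cos(2*beta - math.pi/4)    # terms i = 0, 2
    + (1.0/320.0)        * math.sin(2*beta - math.pi/4))   # term  i = 1
print(direct, asym)    # both near 0.007367
```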
\bigbreak
[27.]
Resuming the rigorous equation ({\sc w}), and observing that
$$\int_0^\infty d\beta \, {\sc p}_{\beta \gamma}
= \lim_{\beta = \infty} \mathbin{.}
{\sc n}_{\beta \gamma} \gamma_\alpha^{-1},
\eqno {\rm (s^{\mit IX})}$$
we easily see that in calculating the definite integral
$$\int_a^b d\alpha \, \int_0^\beta d\beta \,
{\sc p}_{\beta \gamma} f_\alpha,
\eqno {\rm (t^{\mit IX})}$$
in which the function~$f$ is finite, it is sufficient to attend
to those values of $\alpha$ which are not only between the limits
$a$ and $b$, but are also very nearly equal to real roots~$x$ of
the equation
$$\gamma_x = 0.
\eqno {\rm (u^{\mit IX})}$$
The part of the integral (t${}^{IX}$), corresponding to values of
$\alpha$ in the neighbourhood of any one such root~$x$, between
the above-mentioned limits, is equal to the product
$${f_x \over \gamma'_x} \times \int_{-\infty}^\infty d\alpha \,
{{\sc n}_{\beta \gamma'_x (\alpha - x)} \over \alpha - x},
\eqno {\rm (v^{\mit IX})}$$
in which $\beta$ is indefinitely large and positive, and the
differential coefficient $\gamma'_x$ of the function~$\gamma$ is
supposed to be finite, and different from $0$. A little
consideration shows that the integral in this last expression is
$= \pm \varpi$, $\varpi$ being the same constant as in former
articles, and the upper or lower sign being taken according as
$\gamma'_x$ is positive or negative. Denoting then by
$\sqrt{\gamma'^2_x}$ the positive quantity, which is
$= + \gamma'_x$ or $= - \gamma'_x$, according as $\gamma'_x$ is
$> 0$ or $< 0$, the part (v${}^{IX}$) of the integral
(t${}^{IX}$) is
$${\varpi f_x \over \sqrt{\gamma'^2_x}};
\eqno {\rm (w^{\mit IX})}$$
and we have the expression
$$\int_a^b d\alpha \, \int_0^\infty d\beta \,
{\sc p}_{\beta \gamma} f_\alpha
= \varpi \sum_x {f_x \over \sqrt{\gamma'^2_x}},
\eqno {\rm (x^{\mit IX})}$$
the sum being extended to all those roots~$x$ of the equation
(u${}^{IX}$) which are $> a$ but $< b$. If any root of that
equation should coincide with either of these limits $a$ or $b$,
the value of $\alpha$ in its neighbourhood would introduce, into
the second member of the expression (x${}^{IX}$), one or other of
the terms
$${ \varpi^\backprime f_a \over \gamma'_a},\quad
{- \varpi^{\backprime\backprime} f_a \over \gamma'_a},\quad
{ \varpi^{\backprime\backprime} f_b \over \gamma'_b},\quad
{- \varpi^\backprime f_b \over \gamma'_b};
\eqno {\rm (y^{\mit IX})}$$
the first to be taken when $\gamma_a = 0$, $\gamma'_a > 0$; the
second when $\gamma_a = 0$, $\gamma'_a < 0$; the third when
$\gamma_b = 0$, $\gamma'_b > 0$; and the fourth when
$\gamma_b = 0$, $\gamma'_b < 0$. If, then, we suppose for
simplicity, that neither $\gamma_a$ nor $\gamma_b$ vanishes, the
expression (x${}^{IX}$) conducts to the theorem
$$\sum\nolimits_x f_x
= \varpi^{-1} \int_a^b d\alpha \, \int_0^\infty d\beta \,
{\sc p}_{\beta \gamma} f_\alpha \sqrt{ \gamma'^2_\alpha };
\eqno {\sc (x)}$$
and the sign of summation may be omitted, if the equation
$\gamma_x = 0$ have only one real root between the limits $a$ and
$b$. For example, that one root itself may then be expressed as
follows:
$$x = \varpi^{-1} \int_a^b d\alpha \, \int_0^\infty d\beta \,
{\sc p}_{\beta \gamma} \alpha \sqrt{ \gamma'^2_\alpha }.
\eqno {\rm (z^{\mit IX})}$$
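The root-extracting formula (z${}^{IX}$) can be exhibited numerically in the cosine case, for which $\varpi = \pi$. A sketch (modern Python, not part of the original text; the function $\gamma_\alpha = \alpha^2 - 2$ on $(1, 2)$, with single root $\sqrt 2$, and the truncation $\beta \le B$ are illustrative choices):

```python
import math

# Illustration of (z^IX) with P = cosine (varpi = pi): the single root of
# gamma(alpha) = alpha**2 - 2 inside (1, 2), namely sqrt(2), is extracted
# by the double integral, truncated at a large beta = B.  The inner
# integral Int_0^B cos(beta*gamma) d(beta) = sin(B*gamma)/gamma is used
# in closed form.

def gamma(a):  return a*a - 2.0
def dgamma(a): return 2.0*a

a, b, B = 1.0, 2.0, 500.0
steps = 400000
h = (b - a) / steps

total = 0.0
for j in range(steps):
    al = a + (j + 0.5) * h
    g = gamma(al)
    kern = B if abs(g) < 1e-12 else math.sin(B * g) / g
    total += kern * al * abs(dgamma(al))
x = total * h / math.pi
print(x, math.sqrt(2.0))   # should nearly agree
```

The factor $\alpha \, \sqrt{\gamma'^2_\alpha}$ is the integrand of (z${}^{IX}$); the accuracy improves as $B$ increases, the neglected tail contributing terms of order $B^{-1}$ from the limits $a$ and $b$.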
The theorem ({\sc x}) includes some analogous results which have
been obtained by {\sc Cauchy}, for the case when ${\sc p}$ is a
cosine.
\bigbreak
[28.]
It is also possible to extend the foregoing theorem in other
ways; and especially by applying similar reasonings to functions
of several variables. Thus, if
$\gamma, \gamma^{(1)} \, \ldots$
be each a function of several real variables
$\alpha, \alpha^{(1)} \, \ldots$;
if ${\sc p}$ and ${\sc n}$ be still respectively functions of the
kinds supposed in former articles, while
${\sc p}^{(1)}$, ${\sc n}^{(1)},\ldots$
are other functions of the same kinds; then the theorem ({\sc w})
may be extended as follows:
$$\int_a^b d\alpha \,
\int_{a^{(1)}}^{b^{(1)}} d\alpha^{(1)} \, \ldots \,
{\sc n}_{\infty \gamma}
{\sc n}^{(1)}_{\infty \gamma^{(1)}}
\, \ldots \,
{\sc f}_{\alpha, \alpha^{(1)},\ldots}
= 0,
\eqno {\sc (y)}$$
the function~${\sc f}$ being finite for all values of the
variables $\alpha, \alpha^{(1)},\ldots$, within the extent of the
integrations; and the theorem ({\sc x}) may be thus extended:
$$\left. \eqalign{
\sum f_{x, x^{(1)},\ldots}
&= \varpi^{-1} \varpi^{(1)-1} \, \ldots
\int_a^b d\alpha \,
\int_{a^{(1)}}^{b^{(1)}} d\alpha^{(1)} \, \ldots
\int_0^\infty d\beta \,
\int_0^\infty d\beta^{(1)} \, \ldots \,
{\sc p}_{\beta \gamma}
{\sc p}^{(1)}_{\beta^{(1)} \gamma^{(1)}}
\, \ldots \cr
&\mathrel{\phantom{=}}
\ldots \,
f_{\alpha, \alpha^{(1)},\ldots}
\sqrt{ {\sc l}^2 };\cr}
\right\}
\eqno {\sc (z)}$$
in which, according to the analogy of the foregoing notation,
$$\varpi^{(i)}
= \int_{-\infty}^\infty d\alpha \,
\int_0^1 d\beta \, {\sc p}^{(i)}_{\beta \alpha};
\eqno {\rm (a^{\mit X})}$$
and ${\sc l}$ is the coefficient which enters into the expression,
supplied by the principles of the transformation of multiple
integrals,
$${\sc l} \, d\alpha \, d\alpha^{(1)} \, \ldots
= d\gamma \, d\gamma^{(1)} \, \ldots;
\eqno {\rm (b^{\mit X})}$$
while the summation in the first member is to be extended to all
those values of $x, x^{(1)},\ldots$ which, being respectively
between the respective limits of integration relatively to the
variables
$\alpha, \alpha^{(1)},\ldots$
are values of those variables satisfying the system of equations
$$\gamma_{x, x^{(1)},\ldots} = 0,\quad
\gamma^{(1)}_{x, x^{(1)},\ldots} = 0,\ldots \, .
\eqno {\rm (c^{\mit X})}$$
And thus may other remarkable results of {\sc Cauchy} be
presented under a generalized form. But the theory of such
extensions appears likely to suggest itself easily enough to any
one who may have considered with attention the remarks already
made; and it is time to conclude the present paper by submitting
a few general observations on the nature and history of this
important branch of analysis.
\nobreak\bigskip\bigskip
\centerline{\vbox{\hrule width 144pt}}
\bigbreak\bigskip
{\sc Lagrange} appears to have been the first who was led (in
connexion with the celebrated problem of vibrating cords) to
assign, as the result of a species of interpolation, an
expression for an arbitrary function, continuous or discontinuous
in form, between any finite limits, by a series of sines of
multiples, in which the coefficients are definite integrals.
Analogous expressions, for a particular class of rational and
integral functions, were derived by {\sc Daniel Bernouilli},
through successive integrations, from the results of certain
trigonometric summations, which he had characterized in a former
memoir as being {\it incongruously true}. No farther step of
importance towards the improvement of this theory seems to have
been made, till {\sc Fourier}, in his researches on Heat, was led
to the discovery of his well-known theorem, by which any
arbitrary function of any real variable is expressed, between
finite or infinite limits, by a double definite integral.
{\sc Poisson} and {\sc Cauchy} have treated the same subject
since, and enriched it with new views and applications; and
through the labours of these, and, perhaps, of other writers, the
theory of the development or transformation of arbitrary
functions, through functions of determined forms, has become one
of the most important and interesting departments of modern
algebra.
It must, however, be owned that some obscurity seems still to
hang over the subject, and that a farther examination of its
principles may not be useless or unnecessary. The very
existence of such transformations as in this theory are sought for
and obtained, appears at first sight paradoxical; it is
difficult at first to conceive the possibility of expressing a
perfectly arbitrary function through any series of sines or
cosines; the variable being thus made the subject of known and
determined operations, whereas it had offered itself originally
as the subject of operations unknown and undetermined. And even
after this first feeling of paradox is removed, or relieved, by
the consideration that the number of the operations of known form
is infinite, and that the operation of arbitrary form reappears
in another part of the expression, as performed on an auxiliary
variable; it still requires attentive consideration to see
clearly how it is possible that none of the values of this new
variable should have any influence on the final result, except
those which are extremely nearly equal to the variable originally
proposed. This latter difficulty has not, perhaps, been removed
to the complete satisfaction of those who desire to examine the
question with all the diligence its importance deserves, by any
of the published works upon the subject. A conviction,
doubtless, may be attained, that the results are true, but
something is, perhaps, felt to be still wanting for the full
rigour of mathematical demonstration. Such has, at least, been
the impression left on the mind of the present writer, after an
attentive study of the reasonings usually employed, respecting
the transformations of arbitrary functions.
{\sc Poisson}, for example, in treating this subject, sets out,
most commonly, with a series of cosines of multiple arcs; and
because the sum is generally indeterminate, when continued to
infinity, he alters the series by multiplying each term by the
corresponding power of an auxiliary quantity which he assumes to
be less than unity, in order that its powers may diminish, and at
last vanish; but in order that the new series may tend
indefinitely to coincide with the old one, he conceives, after
effecting its summation, that the auxiliary quantity tends to
become unity. The limit thus obtained is generally zero, but
becomes on the contrary infinite when the arc and its multiples
vanish; from which it is inferred by {\sc Poisson}, that if this
arc be the difference of two variables, an original and an
auxiliary, and if the series be multiplied by any arbitrary
function of the latter variable, and integrated with respect
thereto, the effect of all the values of that variable will
disappear from the result, except the effect on those which are
extremely nearly equal to the variable originally proposed.
{\sc Poisson} has made, with consummate skill, a great number of
applications of this method; yet it appears to present, on close
consideration, some difficulties of the kind above alluded to.
In fact, the introduction of the system of factors, which tend to
vanish before the integration, as their indices increase, but
tend to unity, after the integration, for all finite values of
those indices, seems somewhat to change the nature of the
question, by the introduction of a foreign element. Nor is it
perhaps manifest that the original series, of which the sum is
indeterminate, may be replaced by the convergent series with
determined sum, which results from multiplying its terms by the
powers of a factor infinitely little less than unity; while it is
held that to multiply by the powers of a factor infinitely
greater than unity would give an useless or even false result.
Besides there is something unsatisfactory in employing an
apparently arbitrary contrivance for annulling the effect of
those terms of the proposed series which are situated at a great
distance from the origin, but which do not themselves originally
tend to vanish as they become more distant therefrom. Nor is
this difficulty entirely removed, when integration by parts is
had recourse to, in order to show that the effect of these
distant terms is insensible in the ultimate result; because it
then becomes necessary to differentiate the arbitrary function;
but to treat its differential coefficient as always finite, is to
diminish the generality of the inquiry.

Many other processes and proofs are subject to similar or
different difficulties; but there is one method of demonstration
employed by {\sc Fourier}, in his separate Treatise on Heat, which
has, in the opinion of the present writer, received less notice
than it deserves, and of which it is proper here to speak. The
principle of the method here alluded to may be called the
{\it Principle of Fluctuation}, and is the same which was
enunciated under that title in the remarks prefixed to this
paper. In virtue of this principle (which may thus be considered
as having been indicated by {\sc Fourier}, although not expressly
stated by him), if any function, such as the sine or cosine of an
infinite multiple of an arc, changes sign infinitely often within
a finite extent of the variable on which it depends, and has for
its mean value zero; and if this, which may be called a
{\it fluctuating function}, be multiplied by any arbitrary but
finite function of the same variable, and afterwards integrated
between any finite limits; the integral of the product will be
zero, on account of the mutual destruction or neutralization of
all its elements.
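
In the language of limits, the principle so stated answers, for the trigonometrical case, to the assertion that if $f$ be finite and integrable between any finite limits $a$ and $b$, then
$$ \lim_{n \to \infty} \int_a^b f(x) \sin n x \, dx = 0; $$
and similarly for the cosine.
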

It follows immediately from this principle, that if the factor by
which the fluctuating function is multiplied, instead of
remaining always finite, becomes infinite between the limits of
integration, for one or more particular values of the variable on
which it depends; it is then only necessary to attend to values
in the immediate neighbourhood of these, in order to obtain the
value of the integral. And in this way {\sc Fourier} has given
what seems to be the most satisfactory published proof, and (so
to speak) the most natural explanation of the theorem called by
his name; since it exhibits the actual process, one might almost
say the interior mechanism, which, in the expression assigned by
him, destroys the effect of all those values of the auxiliary
variable which are not required for the result. So clear, indeed,
is this conception, that it admits of being easily translated into
geometrical constructions, which have accordingly been used by
{\sc Fourier} for that purpose.
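
The theorem here referred to may be written, in a form equivalent to that assigned by {\sc Fourier},
$$ f(x) = {1 \over \pi} \int_0^{\infty} d\beta
\int_{-\infty}^{\infty} f(\alpha) \cos \beta (\alpha - x) \, d\alpha, $$
in which the inner integration, for large values of $\beta$, destroys the effect of all values of the auxiliary variable $\alpha$ not extremely near to $x$.
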

There are, however, some remaining difficulties connected with
this mode of demonstration, which may perhaps account for the
circumstance that it seems never to be mentioned, nor alluded to,
in any of the historical notices which {\sc Poisson} has given on
the subject of these transformations. For example, although
{\sc Fourier}, in the proof just referred to, of the theorem
called by his name, shows clearly that in integrating the product
of an arbitrary but finite function, and the sine or cosine of an
infinite multiple, each successive positive portion of the
integral is destroyed by the negative portion which follows it,
if infinitely small quantities be neglected, yet he omits to show
that the infinitely small outstanding difference of values of the
positive and negative portions, corresponding to the single
period of the trigonometrical function introduced, is of the
second order; and, therefore, a doubt may arise whether the
infinite number of such infinitely small periods, contained in
any finite interval, may not produce, by their accumulation, a
finite result. It is also desirable to be able to state the
argument in the language of limits, rather than that of
infinitesimals, and to exhibit, by appropriate definitions and
notations, what was evidently foreseen by {\sc Fourier}, that the
result depends rather on the {\it fluctuating\/} than on the
{\it trigonometric\/} character of the auxiliary function
employed.
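
The estimate here desired may be sketched thus: if the derivative of $f$ be supposed finite, and not greater than $M$, then for the portion of the integral answering to a single period, of length $2\pi/n$, beginning at $x_0$,
$$ \left| \int_{x_0}^{x_0 + 2\pi/n} f(x) \sin n x \, dx \right|
= \left| \int_{x_0}^{x_0 + 2\pi/n} \{ f(x) - f(x_0) \} \sin n x \, dx \right|
\le M \left( {2\pi \over n} \right)^2, $$
because the sine, by itself, destroys its own integral over a complete period; and since the number of such periods contained in a finite interval increases only as $n$, the sum of all these outstanding differences is of the order $n^{-1}$, and vanishes in the limit.
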

The same view of the question had occurred to the present writer,
before he was aware that indications of it were to be found among
the published works of {\sc Fourier}; and he still conceives that
the details of the demonstration to which he was thus led may be
not devoid of interest and utility, as tending to give greater
rigour and clearness to the proof and the conception of a widely
applicable and highly remarkable theorem.

Yet, if he did not suppose that the present paper contains
something more than a mere expansion or improvement of a known
proof of a known result, the Author would scarcely have ventured
to offer it to the Transactions\footnote*{The Author is desirous
to acknowledge, that since the time of his first communicating
the present paper to the Royal Irish Academy, in June, 1840, he
has had an opportunity of entirely rewriting it, and that the
last sheet is only now passing through the press, in June, 1842.
Yet it may be proper to mention that the theorems (A) (B) (C),
which sufficiently express the character of the communication,
were printed (with some slight differences of notation) in the
year 1840, as part of the {\it Proceedings} of the Academy for
the date prefixed to this paper.}
of the Royal Irish Academy. It aims not merely to give a more
perfectly satisfactory demonstration of {\sc Fourier's}
celebrated theorem than any which the writer has elsewhere seen,
but also to present that theorem, and many others analogous
thereto, under a greatly generalized form, deduced from the
principle of fluctuation. Functions more general than sines or
cosines, yet having some correspondent properties, are introduced
throughout; and constants, distinct from the ratio of the
circumference to the diameter of a circle, present themselves in
connexion therewith. And thus, if the intention of the writer
have been in any degree accomplished, it will have been shown,
according to the opinion expressed in the remarks prefixed to
this paper, that the development of the important principle
above referred to gives not only a new clearness, but also (in
some respects) a new extension, to this department of science.
\bye