\documentclass[12pt,reqno]{article}
\usepackage[usenames]{color}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{amscd}
\usepackage[colorlinks=true,
linkcolor=webgreen,
filecolor=webbrown,
citecolor=webgreen]{hyperref}
\definecolor{webgreen}{rgb}{0,.5,0}
\definecolor{webbrown}{rgb}{.6,0,0}
\usepackage{fullpage}
\usepackage{float}
\usepackage{psfig}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{amsfonts}
\usepackage{latexsym}
\usepackage{epsf}
\setlength{\textwidth}{6.5in}
\setlength{\oddsidemargin}{.1in}
\setlength{\evensidemargin}{.1in}
\setlength{\topmargin}{-.5in}
\setlength{\textheight}{8.9in}
\newcommand{\seqnum}[1]{\href{http://oeis.org/#1}{\underline{#1}}}
\DeclareMathOperator{\des}{des}
\DeclareMathOperator{\sgn}{sgn}
\DeclareMathOperator{\perm}{perm}
\DeclareMathOperator{\Sym}{Sym}
\newcommand{\wh}{\widetilde{h}}
\newcommand{\wlm}{\overline{F}}
\begin{document}
\begin{center}
\epsfxsize=4in
\leavevmode\epsffile{logo129.eps}
\end{center}
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{conjecture}[theorem]{Conjecture}
\theoremstyle{remark}
\newtheorem{remark}[theorem]{Remark}
\begin{center}
\vskip 1cm{\LARGE\bf Permanents and Determinants, Weighted \\
\vskip .1in
Isobaric Polynomials, and Integer Sequences}
\vskip 1cm
\large
Huilan Li\\
Department of Mathematics\\
Drexel University\\
3141 Chestnut Street\\
Philadelphia, PA 19104 \\
USA \\
\href{mailto:huilan.li@gmail.com}{\tt huilan.li@gmail.com} \\
\vskip 1cm
Trueman MacHenry\\
Department of Mathematics and Statistics\\
York University\\
To\-ron\-to, ON M3J 1P3\\
Canada\\
\href{mailto:machenry@mathstat.yorku.ca}{\tt machenry@mathstat.yorku.ca}
\end{center}
\vskip .2 in
\begin{abstract}
In this paper we construct two types of Hessenberg matrices with the property that every weighted isobaric polynomial (WIP) appears as a determinant of one of them, and as the permanent of the other. Every integer sequence which is linearly recurrent is representable by (an evaluation of) some linearly recurrent sequence of WIPs. WIPs are symmetric polynomials written in the elementary symmetric polynomial basis. Among them are the generalized Fibonacci polynomials and the generalized Lucas polynomials, which already have these sweeping representation properties. Among the integer sequences discussed are the Chebyshev polynomials of the 2nd kind, the Stirling numbers of the 1st and 2nd kind, the Catalan numbers, and the triangular numbers, as well as all sequences which are either multiplicative arithmetic functions or additive arithmetic functions.
\end{abstract}
\section{Introduction}
In this paper, we give a general method for finding permanental and determinantal representations of many families of integer sequences. These include all of the integer sequences in a recent paper of Kaygisiz and Sahin \cite{KS}. The methods used here are general, however, and are part of an overarching theoretical structure. The tools that we use involve the ring of symmetric polynomials, and the methods should be congenial to workers in algebraic combinatorics.
The term \textit{permanent} dates from Cauchy in 1821, and the modern usage of the term, as the permanent of a matrix, goes back to Muir \cite{Mu} in 1882. A permanent is computed very much like a determinant, except that one ignores the signs of the permutations coming from the symmetric group. The interest in the permanent of a matrix is at least two-fold: on the one hand, it has applications in graph theory; on the other hand, it is used in quantum physics. Since its applications are computational, finding economical computing techniques is a desirable end. Clearly, the more zeros that occur in a matrix, the easier the computation of its permanent, as well as of its determinant.
\textit{Hessenberg matrices}, \textit{upper} and \textit{lower}, are matrices that are nearly triangular. Specifically, an \textit{upper Hessenberg matrix} has a not-necessarily-zero subdiagonal, with only zeros below it; a \textit{lower Hessenberg matrix} has a not-necessarily-zero superdiagonal, with only zeros above it.
The representation theorems in Kaygisiz and Sahin \cite{KS} are of this sort, as will be ours as well. After having described a very general class of polynomials, \textit{weighted isobaric polynomials} (WIPs), whose evaluations produce a large number of interesting integer sequences (in Section \ref{WIPICM}) simply by varying the parameters in these polynomials, we then construct (in Section \ref{HM}) two special Hessenberg matrices, one of which has these polynomials as permanents, the other has these polynomials as determinants.
The paper begins with a presentation of the theory of isobaric polynomials (in Section \ref{IP}) following the earlier papers of Li, MacHenry, Tudose and Wong \cite{LM,TM,TM1,MT,MT2,MW,MW2}. Isobaric polynomials are symmetric polynomials written in the elementary symmetric polynomial basis. MacHenry \cite{TM} introduced two especially important sequences of isobaric polynomials, the \textit{generalized Fibonacci polynomial} (GFP) sequence and the \textit{generalized Lucas polynomial} (GLP) sequence. Both of these terms, ``generalized Fibonacci'' and ``generalized Lucas'' have been used for many years, and continue to be used, but they refer to two-variable polynomials, while our polynomials have arbitrarily many variables and include the two-variable case. They appeared for the first time in MacHenry \cite{TM1,TM}. GFPs and GLPs are examples of WIPs, and their sequences correspond to complete homogeneous symmetric polynomials and power sum symmetric polynomials, respectively, both of which are bases for the ring of symmetric polynomials.
The theory of isobaric polynomials can be thought of as a collection of \textit{packages} of results following from the specification of a monic polynomial --- for the purposes of this paper, one with integer coefficients. We call this polynomial the \textit{core polynomial} (CP). We distinguish between a \textit{generic} monic polynomial, one with variable coefficients, and a \textit{numerical} polynomial, in which the coefficients have been evaluated over, say, the ring of integers. A package consists of a \textit{generic core polynomial} (GCP) and its companion matrix (GCM), and of an extension of the GCM, the \textit{infinite companion matrix} (ICM), whose right-hand column contains the positively and negatively indexed GFPs, and whose diagonal sums give the GLPs \cite{MT}. Each evaluation of the GCM induces the generic linear recursion of degree $n$ (GLR($n$)) \cite{MW}; it also induces a multiplicative arithmetic function (MF) and an additive arithmetic function (AddF) in the ring (unique factorization domain) of arithmetic functions \cite{CE,LM,MW2}, and the GCP gives families of MFs and AddFs \cite{MW2}. Moreover, we can also think of the rational extension ring (field, if CP is irreducible) obtained from the GCP or the CP as part of this package; in particular, when CP is irreducible, a class of number fields (NF($n$)) is induced \cite{MW}. The Frobenius character theorem can be represented in terms of WIPs, so the GCP of degree $n$ induces the character table of $\Sym(n)$ (ChT($n$)) \cite{LM}. Thus, given a GCP of degree $n$, we can think of the package as the collection (GCP, GCM, ICM, GLR($n$), GFP($n$), GLP($n$), MF($n$), AddF($n$), NF($n$), ChT($n$)).
This can be thought of as the theory arising from the classical result that the coefficients of a monic polynomial are the elementary symmetric polynomials of its roots; that is, an answer to the question: what follows from the way the roots determine the coefficients of a monic polynomial? This is somewhat analogous to the way in which Galois theory arises from the question: in what way do the coefficients determine the roots of the polynomial? For the history of the term \textit{isobaric} itself, see Read, Redfield and P\'olya \cite{Read,Red,P}.
\vspace{0.75 cm}
In what follows we give the outline of the paper.
\medskip
\noindent Section 1. Introduction.\\
Section 2. Isobaric polynomials.\\
Section 3. Weighted isobaric polynomials and the infinite companion matrix.\\
Section 4. The different matrix.\\
Section 5. The convolution product.\\
Section 6. The isobaric logarithm, the isobaric exponential, and isobaric trigonometry.\\
Section 7. Multiplicative arithmetic functions.\\
Section 8. Hessenberg matrices.\\
Section 9. Permanental and determinantal representations.\\
Section 10. Representability of integer sequences.\\
Section 11. Examples.\\
%%%%%%%%%%%%%%%%%%%%%%
\section{Isobaric polynomials}\label{IP}
Let
$$\mathcal{C}(X) = X^k-t_1X^{k-1}-\cdots-t_k$$
be a \textit{generic} degree-$k$ monic polynomial, that is, we consider the coefficients to be variables which can be evaluated over a suitable ring. In this paper, that ring will be the ring of integers, $\mathbb{Z}$. We call this polynomial the \textit{core polynomial}. It is with respect to this polynomial that we shall make the following definitions. For this polynomial we construct the \textit{companion matrix}
$$A = \left(\begin{array}{ccccc}0 & 1 & 0 & \cdots & 0 \\0 & 0 & 1 & \cdots & 0 \\\vdots & \vdots & \vdots & \ddots & \vdots \\0 & 0 & 0 & \cdots & 1 \\t_k & t_{k-1} & t_{k-2} & \cdots & t_1\end{array}\right).$$
The companion matrix will be used to generate some remarkable polynomials which we now define.
\begin{definition}\label{isoP} An \textit{isobaric polynomial} is a polynomial in the variables $t_1, t_2, \ldots , t_k$, for $k \in \{1,2,\ldots\}$, with coefficients (for the purposes of this paper) in $\mathbb{Z}$, of the form
$$P_{k,n}(t_1,t_2, \ldots , t_k)= \sum_{\alpha \vdash n} {C_\alpha }t_1^{\alpha_1} t_2^{\alpha_2} \cdots t_k^{\alpha_k}, $$
where $\alpha = (\alpha_1, \alpha_2, \ldots ,\alpha_k)$ and $\alpha \vdash n$ means that
$\sum_{j=1} ^k j \alpha_j = n$; that is, in a standard partition notation, $(1^{\alpha_1}, 2^{\alpha_2}, \ldots, k^{\alpha_k})$ is a partition of $n$. The number $n$ is called the \textit{isobaric degree} of the polynomial.
\end{definition}
Thus, isobaric polynomials are polynomials indexed by partitions of the natural numbers, or, equivalently, by Young diagrams.
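For example, when $n=4$ (and $k \geq 4$), the partitions of $4$ yield the monomials $t_4$, $t_1t_3$, $t_2^2$, $t_1^2t_2$, and $t_1^4$, so that, indexing the coefficients by the corresponding partitions, every isobaric polynomial of isobaric degree $4$ has the form
$$P_{k,4} = C_{(4)}\,t_4 + C_{(1,3)}\,t_1t_3 + C_{(2^2)}\,t_2^2 + C_{(1^2,2)}\,t_1^2t_2 + C_{(1^4)}\,t_1^4.$$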
\begin{theorem}\label{myisoring} The isobaric polynomials form a ring which is isomorphic to the ring of symmetric polynomials. \end{theorem}
In fact, the isobaric polynomials are simply the symmetric polynomials written in the elementary symmetric polynomial basis. The isomorphism is given by letting $t_j$ go to $(-1)^{j-1}$ times the $j$-th elementary symmetric polynomial, for each grading $k$ of the graded ring of symmetric polynomials \cite{MacD}.
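For example, under this isomorphism $t_1 \mapsto e_1$ and $t_2 \mapsto -e_2$, so the isobaric polynomial $t_1^2 + t_2$ corresponds to $e_1^2 - e_2 = h_2$, the complete homogeneous symmetric polynomial of degree $2$; this is an instance of the correspondence, mentioned in the introduction, between the GFPs and the complete homogeneous symmetric polynomials.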
%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Weighted isobaric polynomials and the infinite companion matrix}\label{WIPICM}
\begin{definition}\label{myWIP} A \textit{weighted isobaric polynomial} is an isobaric polynomial given by the following explicit expression:
$$P_{\omega,k,n}(t_1, t_2, \ldots, t_k)= \sum_{\alpha \vdash n} {
|\alpha| \choose \alpha_1,\ldots,\alpha_k } \frac{\sum_{j=1}^k \omega_j\alpha_j}{|\alpha|}t_1^{\alpha_1}\cdots t_k^{\alpha_k},$$
where $\omega$ is the weight vector $(\omega_1,\omega_2, \ldots, \omega_n, \ldots), \omega_j \in\mathbb{Z}$ and $|\alpha|= \alpha_1 + \cdots + \alpha_k.$ The coefficient ${C_\alpha }$ is then just $$ {
|\alpha| \choose \alpha_1,\ldots,\alpha_k } \frac{\sum_{j=1}^k \omega_j\alpha_j}{|\alpha|}.$$
\end{definition}
To make the notation simpler, we will write $P_{\omega,k,n}(t_1, t_2, \ldots, t_k)$ as $P_{\omega,k,n}$.
%%%%%%%%%%%%%%%%%%%
Let the collection of weighted isobaric polynomials be denoted by $\mathcal{W}$. Generating functions for the polynomials in $\mathcal{W}$ are given by:
$$\Omega(y) = 1 + \frac{\omega_1t_1y +\omega_2t_2y^2 + \omega_3t_3y^3 + \cdots+\omega_kt_ky^k}{1-p(y)},$$ where $p(y) = t_1y+t_2y^2+t_3y^3+\cdots+t_ky^k.$
An easy induction gives the following very useful proposition.
\begin{proposition}\label{myLR} (Linear Recursion Property) Let $P_{\omega,k,n}$ be a weighted isobaric polynomial. Then the following recursion holds: $$P_{\omega,k,n} =\displaystyle\sum_{j=1}^k t_j P_{\omega,k,n-j}.$$
\end{proposition}
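For example, when $k=2$ the recursion reads
$$P_{\omega,2,n} = t_1P_{\omega,2,n-1} + t_2P_{\omega,2,n-2},$$
so, upon the evaluation $t_1 = t_2 = 1$, every weighted isobaric sequence with $k=2$ satisfies the Fibonacci recursion.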
Two important sequences are the \textit{generalized Fibonacci polynomials} (GFP), and the \textit{generalized Lucas polynomials} (GLP). The weight vector for the GFPs is $(1,1,\ldots,1,\ldots)$ and for the GLPs, $(1,2,\ldots,n,\ldots).$ Thus,
\begin{corollary}\label{myGFPGLP} (Explicit formulae for the GFPs and GLPs)
\begin{eqnarray*}F_{k,n} &=& \sum_{\alpha \vdash n}{
|\alpha| \choose \alpha_1,\ldots,\alpha_k } t_1^{\alpha_1} t_2^{\alpha_2} \cdots t_k^{\alpha_k}.\\
G_{k,n} &=& \sum_{\alpha \vdash n} \frac{n}{|\alpha|} {
|\alpha| \choose \alpha_1,\ldots,\alpha_k } t_1^{\alpha_1} t_2^{\alpha_2} \cdots t_k^{\alpha_k}.
\end{eqnarray*}
\end{corollary}
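For instance, when $k=2$ and $n=3$, the admissible exponent vectors are $\alpha = (3,0)$ and $\alpha = (1,1)$, giving
$$F_{2,3} = t_1^3 + 2t_1t_2, \qquad G_{2,3} = t_1^3 + 3t_1t_2,$$
which evaluate at $t_1 = t_2 = 1$ to the Fibonacci number $3$ and the Lucas number $4$.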
The generating function given above specializes to
$$\Omega_F(y) = \frac{1}{1-p(y)} \quad \hbox{for GFP}, $$
and to
$$\Omega_G(y) = \frac{1+t_2y^2+\cdots+(k-1)t_ky^k}{1-p(y)} \quad \hbox{for GLP}.$$
Another way of producing these polynomials is through the \textit{infinite companion matrix}, which we now define.
\begin{definition}\label{my ICM } The \textit{infinite companion matrix} $A^\infty$ is the matrix defined by allowing the companion matrix $A$ to operate on the row vectors of $A$, that is, on itself. \end{definition}
We clearly reproduce $A$ itself this way, and if we sequentially adjoin the orbit vectors under this operation as new rows of $A$, we obtain infinitely many new rows southward. If we also agree that $ t_k\neq 0$, so that $A$ is non-singular, and operate on the first row vector of $A$ with $A^{-1}$, and adjoin this orbit northward to $A$, we obtain infinitely many rows northward, so that the resulting matrix is an $(\infty \times k)$-matrix. As we shall see, it is a rather remarkable matrix which contains a great deal of information. Below we give a ``picture'' of $A^\infty$.
$${\Large A_k^\infty} = \left( \begin{matrix}
\vdots & \ddots & \vdots & \vdots \\
(-1)^{k-1}S_{(-2,1^{(k-1)})} & \cdots & -S_{(-2,1)} & S_{(-2)} \\
(-1)^{k-1}S_{(-1,1^{(k-1)})} & \cdots & -S_{(-1,1)} & S_{(-1)} \\
(-1)^{k-1}S_{(0,1^{(k-1)})} &\cdots & -S_{(0,1)} & S_{(0)} \\
(-1)^{k-1}S_{(1,1^{(k-1)})} & \cdots & -S_{(1,1)} & S_{(1)} \\
(-1)^{k-1}S_{(2,1^{(k-1)})} & \cdots & -S_{(2,1)} & S_{(2)} \\
(-1)^{k-1}S_{(3,1^{(k-1)})} & \cdots & -S_{(3,1)} & S_{(3)} \\
(-1)^{k-1}S_{(4,1^{(k-1)})} & \cdots & -S_{(4,1)} & S_{(4)} \\
\vdots & \ddots & \vdots & \vdots
\end{matrix} \right) = {\Big((-1)^{k-j} S_{(n,1^{k-j})}\Big).} $$
\begin{lemma}[\cite{MW}]\label{my CMLR } Let $v$ be a row vector with $k$ components. The orbit of $v$ under the action of $A$ on the right, giving the ordered set $\{vA^n\}$, is a linearly recursive sequence with recursion parameters $\{ t_1,t_2,\ldots,t_k \} $. In fact, we get all $k$-th order linear recursions in this way. \qed
\end{lemma}
\begin{corollary}\label{mycolMF } Each column of the infinite companion matrix is a linearly recursive sequence of degree $k$.\end{corollary}
\begin{proof} It is easy to see that the orbit of any vector under the operation of $A$ is necessarily a linear recursion with the set $\{t_j\}$ as the recursion parameters. \end{proof}
\begin{theorem}[\cite{LM}]\label{my propsCM }
We assume that $t_k \neq 0$, so that $A^{-1}$ is defined.
\begin{enumerate}
\item The powers of $A$ constitute an infinite cyclic group.
\item The $k \times k$ contiguous blocks of $A_k^\infty$ are the powers $A_k^n$ of $A$. In particular, the $k \times k$ block whose lower right-hand corner is $S_{(0)}$ is the identity matrix, and the $k \times k$ block whose lower right-hand corner is $S_{(1)}$ is just $A$.
\item The entries in $A_k^\infty$ are Schur-hook polynomials $S_{(n,1^r)}$ with arm length $n$ and leg length $r$.
\item The right-hand column is the linearly recursive sequence of GFPs, the generalized Fibonacci polynomials. The traces of the infinite cyclic group generated by $A$, that is, the diagonal sums of $A_k^\infty$, are the linearly recursive sequence of GLPs, that is, the generalized Lucas polynomials.
\item All of the columns of $A_k^\infty$ are linear recursions with recursion parameters $\{t_1, t_2, \ldots, t_k \}$, and every linear recursion of degree $k$ can be obtained by a suitable choice of $k$ and evaluation of the parameters. In particular, when $k=2$ and $t_1 = t_2 = 1$, the GFP sequence becomes the Fibonacci sequence, and the GLP sequence becomes the Lucas sequence.
\end{enumerate}
\end{theorem}
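To illustrate the theorem, let $k=2$, so that the core polynomial is $X^2 - t_1X - t_2$. Then
$$A = \left(\begin{matrix} 0 & 1 \\ t_2 & t_1 \end{matrix}\right), \qquad A^2 = \left(\begin{matrix} t_2 & t_1 \\ t_1t_2 & t_1^2 + t_2 \end{matrix}\right),$$
so the right-hand column entries are the GFPs $F_1 = t_1$ and $F_2 = t_1^2 + t_2$, while the traces are the GLPs $G_1 = t_1$ and $G_2 = t_1^2 + 2t_2$. Evaluating at $t_1 = t_2 = 1$ produces the Fibonacci and Lucas numbers.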
We can define a group operation on $\mathcal{W}$ by applying the usual component-wise addition of vectors to the weight vectors. Thus, given weights $\omega$ and $\omega'$, we obtain a new weight vector $\omega + \omega'$. It is easy to see that this induces a group structure on $\mathcal{W}$; call this group $\mathcal{W(\omega)}$. Ring addition of two weighted isobaric polynomials, with possibly different weights, also gives a group structure on $\mathcal{W}$; call it $\mathcal{W(+)}$.
\begin{theorem}\label{myW+ } $\mathcal{W(\omega)} \cong \mathcal{W(+)}$. In fact they are identical. \qed
\end{theorem}
%%%%%%%%%%%%%%%%%%%%%
\section{The different matrix}\label{DM}
Consider the derivative of the core polynomial,
$$\mathcal{C}'(X) = kX^{k-1} - (k-1)t_1X^{k-2} - \cdots - t_{k-1}.$$
Define the vector $d_k = (-t_{k-1} , -2t_{k-2}, \ldots, -(k-1)t_{1}, k),$ and construct the \textit{different matrix}
$$D_k = \left( \begin{matrix}
d_kA^0\\ d_kA\\ \vdots\\ d_kA^{k-1}
\end{matrix} \right)$$
obtained by operating on $d_k$ with the companion matrix. For example, when $k=3$,
$$ D_3 = \left( \begin{matrix}
-t_2 & -2t_1 & 3 \\
3t_3 & 2t_2 & t_1 \\
t_1t_3 & 3t_3+t_1t_2 & t_1^2+2t_2
\end{matrix} \right).$$
\vspace{0.5cm}
Let $ \Delta_k=\det D_k$. We obtain the \textit{infinite different matrix} $D^\infty$ by continuing to operate on $d_k$ by the companion matrix, analogous to the way we obtained $A_k^\infty$.
It will follow from Proposition \ref{F=G} and Theorem \ref{logA=D} that the right-hand column of $D_k^\infty$ is the GLP sequence.
%%
\begin{definition}\label{mymodp} Let $p$ be a rational prime; then $p$ \textit{ramifies} if $p$ divides $\Delta_k$. \end{definition}
This definition is consistent with the usual definition of ramification when speaking about number fields.
Let $p$ be a rational prime, and let the variables $t_j$ take integer values. Then the sequences in $\mathcal{W}$ are now numerical sequences, and all of the matrices defined are numerical matrices. In particular, the sequences GFP and GLP become numerical sequences.
Let GFPmod($p$) and GLPmod($p$) be the numerical sequences obtained by evaluating the variables (the $t_j$'s) over the field $\mathbb{Z}_p$ for some rational prime $p$.
Let $c_p$ and $c'_p$ be the periods of GFPmod($p$) and GLPmod($p$), respectively; then we have the following theorem concerning ramification.
\begin{theorem}\label{myram} The rational prime $p$ ramifies if and only if $c_p = p \times c'_p$.
\end{theorem}
We shall prove this theorem in Section \ref{ILE}.
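In the meantime, an example may make the statement concrete. Take $k=2$ and $t_1 = t_2 = 1$, so that the core polynomial is $X^2 - X - 1$. A direct computation gives
$$\Delta_2 = \det\left(\begin{matrix} -t_1 & 2 \\ 2t_2 & t_1 \end{matrix}\right) = -(t_1^2 + 4t_2) = -5,$$
so $p=5$ ramifies. Correspondingly, the Fibonacci sequence has period $c_5 = 20$ modulo $5$, while the Lucas sequence has period $c'_5 = 4$, and indeed $20 = 5 \times 4$.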
%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The convolution product}
\label{CP}
In addition to the additive structure on $\mathcal{W}$, there is also a multiplicative structure, namely a convolution product. This product operates on two polynomials in $\mathcal{W}$ with the same isobaric degree to give a product of the same isobaric degree. (MacHenry and Tudose \cite{MT} called this product the \textit{level product}). It makes use of the fact that elements in $\mathcal{W}$ belong to linear recursive sequences. The convolution product on $\mathcal{W}$ can also be thought of as a product on sequences of weighted isobaric polynomials.
\begin{definition}\label{my*prod }Let $P_{\omega, k,n}$ and $P_{\omega', k,n}$ be two polynomials of isobaric degree $n$
in $\mathcal{W}$; their \textit{convolution product} is defined as
$$P_{\omega, k,n} \ast P_{\omega', k,n}= \sum_{j=0}^n P_{\omega, k,n-j}P_{\omega', k,j}.$$
\end{definition}
It is straightforward to show that this product is commutative, associative and distributes over addition.
For the purposes of this paper, this product will be used mostly to multiply a sequence by itself, or to multiply a sequence by the sequence $(1, -t_1,\ldots,-t_j, \ldots)$. This latter sequence acts as the convolution inverse of the GFP sequence. We have already seen it at work in the linear recursion of Proposition \ref{myLR}.
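The effect of convolving with this sequence can be computed directly: for $n \geq 1$,
$$\Big(F \ast (1,-t_1,-t_2,\ldots)\Big)_n = F_n - \sum_{j=1}^{n} t_jF_{n-j} = 0$$
by Proposition \ref{myLR} (here $t_j = 0$ for $j > k$, and the negatively indexed GFPs $F_{-1},\ldots,F_{-(k-1)}$ vanish, by the identity block of $A_k^\infty$), while the index-$0$ term is $F_0 = 1$; the result is the sequence $(1,0,0,\ldots)$.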
If we consider the evaluations of the GFPs over the integers, we get a family of integer sequences (one of which is, of course, the Fibonacci sequence), and the product induced on these sequences turns out to be Dirichlet convolution. Under this product the family of sequences is an abelian group, isomorphic to the group of multiplicative arithmetic functions (MF); for a proof, see Li and MacHenry \cite{LM} and MacHenry and Wong \cite{MW2}. MacHenry and Tudose \cite{MT} also implicitly proved the isomorphism, using the isobaric ring to construct $q$-th roots in the subgroup of rational multiplicative functions for all $q \in\mathbb{Q}$. This turns out to be one of many ways in which the isobaric polynomials, and in particular those in $\mathcal{W}$, act as ``representing'' structures. The Schur-hook polynomials are, of course, characters for the permutation characters of the symmetric group; more generally, we can represent all of the Schur polynomials in terms of the GLPs by using a version of the Jacobi-Trudi theorem, deriving a version of the Frobenius theorem giving the character table of $\Sym(n)$ \cite{LM}. Number fields derived from irreducible core polynomials can also be described completely in terms of elements of $\mathcal{W}$ \cite{MW}. MacHenry \cite{TM} gave the first completely general analogues of the Binet formulae, where again the representation is in terms of the isobaric theory.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The isobaric logarithm and isobaric exponential operators}\label{ILE}
Rearick \cite{Rear2} introduced the notions of \textit{logarithm} and \textit{exponential} with respect to the ring of arithmetic functions under the Dirichlet product. Li and MacHenry \cite{LM} transported these notions to
the submodule of weighted isobaric polynomials in the isobaric ring. We reproduce these notions here; the two operators are inverse to one another, and they have the usual properties of logarithms and exponential functions with respect to sums and products, in this case Dirichlet (convolution) products. All of the relevant proofs appear in Li and MacHenry \cite{LM}. Moreover, these theorems have the following important consequence.
\begin{proposition}\label{F=G} Let $F_{k,n}$ and $G_{k,n}$ be, respectively, a generalized Fibonacci polynomial and a generalized Lucas polynomial. Then $$\mathcal{L}_n (F_{k,n}) = G_{k,n} $$
and $$\mathcal{E}_n (G_{k,n}) = F_{k,n}.$$
\end{proposition}
\begin{remark}\label{myAddF}
Proposition \ref{F=G} gives the well-known result that the multiplicative group MF is isomorphic to the additive group of additive arithmetic functions. Li and MacHenry \cite{LM} called two arithmetic functions that are related by this isomorphism \textit{companion sequences} (see also Lehmer \cite{DJL}).
\end{remark}
Moreover, Li and MacHenry \cite{LM} proved the following theorem:
\begin{theorem}\label{logA=D} Given the infinite companion matrix $ A_k^\infty $ and the infinite different matrix $D_k^\infty$ we have $$\mathcal{L} (A_k^\infty) = (D_k^\infty).$$
\end{theorem}
We can now prove Theorem \ref{myram}: The rational prime $p$ ramifies if and only if $c_p = p \times c'_p.$
For, if we look at $A_k^\infty \bmod p$ and $D_k^\infty \bmod p$, then $A_k^\infty \bmod p \cong \mathbb{Z}_{c_p} $ and $D_k^\infty \bmod p \cong \mathbb{Z}_{c'_p}$. So, from the above theorem, we have that $\mathcal{L}$ induces the exact sequence $$1 \rightarrow K \rightarrow \mathbb{Z}_{c_p} \rightarrow \mathbb{Z}_{c'_p} \rightarrow 1,$$
where the kernel $K$ is nontrivial if and only if $p \mid \Delta_k$. \quad \quad \qed
Li and MacHenry \cite{LM} showed that the analogues to the trigonometric functions, sine, cosine, etc. can also be defined, using the isobaric exponential function. These isobaric trigonometric functions behave like hyperbolic trigonometric functions, and satisfy analogues to all of the usual hyperbolic trigonometric identities. We shall record these identities here, since we shall show in this paper that these identities can be applied to a large class of integer sequences.
We first define the isobaric sines and cosines:
\begin{definition}\label{mylsine}
The \textit{isosine} of $G$ is $$ S(G) = \frac{1}{2}(\mathcal{E}(G) - \overline{\mathcal{E}(G)});$$
and the \textit{isocosine} of $G$ is
$$C(G)= \frac{1}{2}(\mathcal{E}(G) +\overline{\mathcal{E}(G)}),$$
where $ \overline{\mathcal{E}(G)}$ means convolution inverse of $\mathcal{E}(G)$. \end{definition}
Note that $\mathcal{E}(G) = F$, and the convolution product induces a group structure on the evaluated GFPs, with $-{\bf t}_n$ being the convolution inverse of $F_n$, i.e., $ \overline{F_n}=-{\bf t}_n$, where $$-{\bf t}_n=\begin{cases}
1, & \text{if }n=0, \\
-t_n, & \text{if }n\geqslant1.
\end{cases}$$
\begin{proposition} \label{CGSG} The isosine and isocosine of $G$ are as follows:
\begin{eqnarray*}
C(G_n) &=& \frac{1}{2} (F_n - {\bf t}_n), \\
S(G_n) &=& \frac{1}{2} (F_n +{\bf t}_n).
\end{eqnarray*}
\end{proposition}\qed
Let $\delta$ be the function whose values are $(1,0,\ldots,0,\ldots)$.
\begin{theorem}\label{mytheoremCOSSIN} The isosine and the isocosine are related by the following ``hyperbolic'' formula:
$$ C(G)^{\ast 2} - S(G)^{\ast 2} = \delta.$$
\end{theorem}
\begin{proof} \begin{eqnarray*}
&& C(G)\ast C(G)-S(G)\ast S(G) \\
& =& \frac{1}{4}[\mathcal{E}(G)\ast \mathcal{E}(G)+\overline{\mathcal{E}(G)}\ast\overline{\mathcal{E}(G)}+2(\mathcal{E}(G)\ast\overline{\mathcal{E}(G)})]\\
&& \qquad-\frac{1}{4}[\mathcal{E}(G)\ast \mathcal{E}(G)+\overline{\mathcal{E}(G)}\ast\overline{\mathcal{E}(G)}-2(\mathcal{E}(G)\ast\overline{\mathcal{E}(G)})]\\
&=&\mathcal{E}(G)\ast\overline{\mathcal{E}(G)}\\
&=&\delta. \end{eqnarray*}
\end{proof}
\vspace{0.5cm}
\begin{corollary}\label{mycorollaryCOSSIN}In particular,
\begin{eqnarray*}
C(G_0)^{\ast 2} - S(G_0)^{\ast 2} &=& 1; \\
C(G_n)^{\ast 2}- S(G_n)^{\ast 2}&=& 0 , \quad n>0.
\end{eqnarray*}
\end{corollary}
\begin{theorem}\label{mytheoremCOS}
Let $F$ and $G$ be induced by the core $[t_1,\ldots,t_k]$ and $F'$ and $G'$ be induced by the core $[t'_1,\ldots,t'_k]$ with $\mathcal{L}(F)=G$ and $\mathcal{L}(F')=G'$. Then
\begin{eqnarray*}
C(G+G') &=& C(G) \ast C(G') + S(G) \ast S(G'), \\
S(G+G' ) &=& S(G) \ast C(G') + C(G) \ast S(G').
\end{eqnarray*}
\end{theorem}
\begin{proof} \begin{eqnarray*}
C(G+G') & = & \frac{1}{2} (\mathcal{E}(G+G') + \overline{\mathcal{E}(G+G')} ) \\
& = & \frac{1}{2}(\mathcal{E}(G)\ast \mathcal{E}(G') +\overline{\mathcal{E}(G)\ast \mathcal{E}(G') })
\end{eqnarray*}
while
\begin{eqnarray*}
& & C(G) \ast C(G') + S(G) \ast S(G') \\
& = & \frac{1}{4} [(\mathcal{E}(G)+\overline{\mathcal{E}(G)})\ast(\mathcal{E}(G')+\overline{\mathcal{E}(G')}) + (\mathcal{E}(G)-\overline{\mathcal{E}(G)})\ast (\mathcal{E}(G')-\overline{\mathcal{E}(G')})] \\
& = & \frac{1}{2}[\mathcal{E}(G) \ast \mathcal{E}(G') + \overline{\mathcal{E}(G)} \ast \overline{\mathcal{E}(G')}]\\
& = & \frac{1}{2}[\mathcal{E}(G) \ast \mathcal{E}(G') + \overline{\mathcal{E}(G)\ast \mathcal{E}(G')}].
\end{eqnarray*}
The proof for $S$ is analogous. \end{proof}
%% %% %% %%
\section{Multiplicative arithmetic functions}\label{MAF}
We remind the reader that an arithmetic function is a function $\xi : \mathbb{N} \rightarrow \mathbb{C}$.
Such functions form a unique factorization domain under addition and Dirichlet convolution \cite{CE, Sha}. An arithmetic function $\xi$ is \textit{multiplicative} if $\xi(mn)=\xi(m)\xi(n)$ whenever $(m,n) = 1$. If $\xi(mn)=\xi(m)\xi(n)$ holds for all $m$ and $n$, without the coprimality condition, then $\xi$ is said to be \textit{completely multiplicative} (CM). The Dirichlet product on arithmetic functions is given by $$\xi_1 \ast \xi_2 (n) = \sum_{d|n} \xi_1\Big(\frac{n}{d}\Big)\xi_2 (d).$$
%%%%%
See, e.g., McCarthy \cite{PJM}.
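For example, if $\xi_1 = \xi_2 = \mathbf{1}$, the arithmetic function that is constantly $1$, then $(\mathbf{1} \ast \mathbf{1})(n) = \sum_{d|n} 1 = d(n)$, the number-of-divisors function, which is multiplicative but not completely multiplicative (e.g., $d(4) = 3 \neq d(2)^2 = 4$).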
\begin{definition}\label{mydefinitionMFLR}
An arithmetic function $f$ is \textit{locally representable} if for each prime $p$ there is a core polynomial or power series $[t_1,t_2,\ldots,t_k,\ldots]$ such that $f(p^n) = F_n$ for all $n\in \mathbb{Z},$ where $\{F_n\}$ is the GFP sequence induced by this core polynomial.
\end{definition}
\begin{definition}\label{mydefinitionMFGR}
%%%%%%%%%%%%%%%%
An arithmetic function $f$ is \textit{globally representable} if there is a core polynomial or power series $[t_1,t_2,\ldots,t_k,\ldots]$ such that $f(n) = F_n$ for all $n\in \mathbb{Z},$ where $\{F_n\}$ is the GFP sequence induced by this core polynomial.
\end{definition}
\begin{definition}\label{mydefinitionLOCLINR}
An arithmetic function $f$ is \textit{locally linearly recursive} if for each prime $p$
$$f(p^n) = a_1f(p^{n-1})+ a_2f(p^{n-2} )+ \cdots + a_nf(p^0), $$
where the $a_i$'s are determined by the core polynomial for $p$.
An arithmetic function $f$ is \textit{globally linearly recursive} if
$$f(n) = a_1f(n-1)+ a_2f(n-2) + \cdots + a_nf(1). $$
\end{definition}
\begin{theorem}[\cite{LM}]\label{mytheoremLOCLINR}
Multiplicative functions are locally linearly recursive. \qed
\end{theorem}
\begin{remark}\label{myremarkGLR}
If an arithmetic function is globally representable, then it belongs to the group of units of the ring of arithmetic functions, but lies outside of the subgroup of multiplicative arithmetic functions \cite{TM1,LM}.
\end{remark}
\begin{proposition}\label{myMFcond.}
A necessary and sufficient condition that $f$ be representable, either globally or locally, is that it be, respectively, globally or locally linearly recursive.
\end{proposition}
\begin{proof} Since the GFP sequence is inherently a linear recursion, it is clear that the function $f$ must also be either a locally or globally linear recursion. On the other hand, if it is globally or locally linearly recursive, the parameters of the linear recursion determine a core polynomial (or power series) which in turn induces a suitable GFP sequence. \end{proof}
\begin{corollary}\label{myMFLR}
Every multiplicative function is locally linearly recursive and hence locally representable. \qed
\end{corollary}
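The simplest illustration is a completely multiplicative function $f$: at each prime $p$, take the degree-$1$ core polynomial $[t_1]$ with $t_1 = f(p)$. The induced GFP sequence is $F_n = t_1^n$, since $\Omega_F(y) = 1/(1-t_1y)$, and indeed $f(p^n) = f(p)^n = F_n$.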
\begin{proposition} [{\cite[Theorem 4]{Rear1}}] \label{mylemmaL=0}
Let $f $ be an arithmetic function, then $f \in \mathcal{M} $ if and only if $Lf(m) = 0$ whenever $m$ is not a power of a prime.
\end{proposition}
Here $L$ is the logarithm operator for the ring of arithmetic functions defined by Rearick in the paper cited above, and $\mathcal{M}$ is the convolution group of multiplicative arithmetic functions.
\begin{remark} Let $f$ be a globally and locally representable arithmetic function. Since $f$ is globally representable there is a global representation $f\xrightarrow{\rm gr}\,_fF$ with $\,_f\mathcal{L}(\,_fF)=\,_fG$. Since $f$ is locally representable, $f\in\mathcal{M}$ and for any prime $p$ there is a local representation $f\xrightarrow{{\rm lr}_p}\,_fF'$ with $\,_f\mathcal{L}(\,_fF')=\,_fG'$. Then $$\,_fG_n=\begin{cases}
_fG'_r , & \text{if }n=p^r,\\
0, & \text{otherwise.}
\end{cases}$$ Since $\,_fG$ and $\,_fG'$ uniquely determine $\,_fF$ and $\,_fF'$, and $\,_fF$ and $\,_fF'$ uniquely determine $f$, the function $f$ is well-defined and, in a certain sense, only trivially globally defined. So Rearick's theorem, Proposition \ref{mylemmaL=0}, tells us that an arithmetic function $f$ cannot be ``non-trivially'' both locally representable and globally representable.
\end{remark}
%%%%%%%%%%%%%%%%
\section{Hessenberg matrices}\label{HM}
\begin{definition}\label{myHess } A \textit{lower Hessenberg matrix} is a matrix whose entries above the first super-diagonal are zero. An \textit{upper Hessenberg matrix} is defined similarly with respect to entries below the first sub-diagonal. \end{definition}
They are especially important for computational purposes.
Our main theorem involves the following lower Hessenberg matrices:
$$H_{+(\omega,k,n)} = \left(\begin{matrix}
t_1 & 1 & 0 & \cdots & 0 \\
t_2 & t_1 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
t_{k-1} & t_{k-2} & t_{k-3} & \cdots & 1 \\
\omega_k t_k & \omega_{k-1} t_{k-1} & \omega_{k-2}t_{k-2} & \cdots & \omega_1t_1
\end{matrix} \right),$$
and
$$H_{-(\omega,k,n)} = \left(\begin{matrix}
t_1 & -1 & 0 & \cdots & 0 \\
t_2 & t_1 & -1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
t_{k-1} & t_{k-2} & t_{k-3} & \cdots & -1 \\
\omega_k t_k & \omega_{k-1} t_{k-1} & \omega_{k-2}t_{k-2} & \cdots & \omega_1t_1
\end{matrix} \right).$$
Notice that these two matrices differ only in that their super-diagonals have opposite signs.
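For illustration only (the helper name and indexing conventions below are ours, not the paper's), both matrices can be generated for any core $(t_1,\ldots,t_k)$ and weight vector $\omega$, taking $t_j = 0$ for $j > k$:

```python
def hessenberg(t, omega, n, sign):
    """n-by-n lower Hessenberg matrix H_{+/-(omega,k,n)}: entry (i, j) is
    t_{i-j+1} below and on the diagonal, `sign` (+1 or -1) on the
    superdiagonal, and the last row is reweighted entrywise by omega."""
    k = len(t)
    tt = lambda j: t[j - 1] if 1 <= j <= k else 0   # t_j = 0 for j > k
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if j == i + 1:
                M[i][j] = sign                       # the +/-1 superdiagonal
            elif j <= i:
                M[i][j] = tt(i - j + 1)              # constant along diagonals
    for j in range(n):                               # last row: omega_m * t_m
        M[n - 1][j] = omega[n - 1 - j] * tt(n - j)
    return M

# sample: the core (2, -1) with all weights 1, as a 4-by-4 "plus" matrix
assert hessenberg([2, -1], [1, 1, 1, 1], 4, +1)[3] == [0, 0, -1, 2]
```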
%%%%%%%%%%%%%%%%%
\section{Permanental and determinantal representations}\label{PDR}
\begin{definition}\label{myperm }The \textit{permanent} of a square matrix $M$, denoted by $\perm M$, is computed in the same way as the determinant, except that the sign (parity) of each permutation of the indices is ignored.
\end{definition}
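A direct, exponential-time sketch of this definition (the function `perm` is our illustrative helper, not from the paper):

```python
def perm(M):
    """Permanent via expansion along the first row: the determinant's
    cofactor expansion with every sign taken to be +1."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        if M[0][j] == 0:
            continue                                 # cheap pruning of zeros
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += M[0][j] * perm(minor)               # no (-1)**j factor here
    return total

# example: perm of [[1,2],[3,4]] is 1*4 + 2*3 = 10, whereas the det is -2
assert perm([[1, 2], [3, 4]]) == 10
```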
Our main theorem is as follows:
\begin{theorem}\label{mypermth} The permanent of the lower Hessenberg matrix $H_{+(\omega,k,n)}$ and the determinant of the lower Hessenberg matrix $H_{-(\omega,k,n)}$ are connected to the weighted isobaric polynomial $P_{\omega,k,n}$ as follows:
$$\perm H_{+(\omega,k,n)} = \perm \left(\begin{matrix}
t_1 & 1 & 0 & \cdots & 0 \\
t_2 & t_1 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
t_{k-1} & t_{k-2} & t_{k-3} & \cdots & 1 \\
\omega_k t_k & \omega_{k-1} t_{k-1} & \omega_{k-2}t_{k-2} & \cdots & \omega_1t_1
\end{matrix} \right) = P_{\omega,k,n}$$
and
$$\det H_{-(\omega,k,n)} =\det \left(\begin{matrix}
t_1 & -1 & 0 & \cdots & 0 \\
t_2 & t_1 & -1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
t_{k-1} & t_{k-2} & t_{k-3} & \cdots & -1 \\
\omega_k t_k & \omega_{k-1} t_{k-1} & \omega_{k-2}t_{k-2} & \cdots & \omega_1t_1
\end{matrix} \right) = P_{\omega,k,n}.$$
\end{theorem}
\begin{proof} It is convenient to follow the steps of the proof in the $4 \times 4$ case:
$$H_{+(\omega,4,4)} = \left( \begin{matrix}
t_1 & 1 & 0 & 0 \\ t_2 & t_1 & 1 & 0 \\ t_3 & t_2 & t_1 & 1 \\ \omega_4 t_4 & \omega_3 t_3 & \omega_2 t_2 & \omega_1 t_1
\end{matrix}\right)$$
whose permanent is easily seen to be $\omega_1t_1^4 + (2\omega_1 +\omega_2)t_1^2t_2 + \omega_2t_2^2 + (\omega_1 + \omega_3)t_1t_3 +\omega_4t_4 = P_{\omega,4,4}.$
Moreover, it is easy to see that there is a nesting of the Hessenberg matrices from lower right hand corner to upper left, which enables an inductive proof of the theorem, as the $4 \times 4$ example above suggests.
\begin{lemma}\label{myM_j }
Let $M_j = \perm H_{+(\omega,k,j)}$, $j=1,\ldots,n$.
Clearly $M_1 = \omega_1t_1 = P_{\omega,1}$. Assume that our result holds for $H_{+(\omega,k,j)}$, $j=1,\ldots, n-1$, that is, that $M_{n-j} = P_{\omega,n-j}$, $j=1, \ldots,n-1$.
Then $M_n = t_1M_{n-1} + t_2M_{n-2} + \cdots + t_{n-1}M_1 + \omega_nt_n.$
\end{lemma}
It is easy to see that this expression is just the computation for the permanent. But the right-hand side $$t_1P_{\omega,n-1} + t_2P_{\omega,n-2} + \cdots + t_{n-1}P_{\omega,1} + \omega_nt_n$$ is just $P_{\omega,n}$, by the linear recursion property of weighted isobaric polynomials. So
$$P_{\omega,n} = t_1P_{\omega,n-1} + t_2P_{\omega,n-2} + \cdots + t_{n-1}P_{\omega,1} + \omega_nt_n = t_1M_{n-1} + t_2M_{n-2} + \cdots + t_{n-1}M_1 + \omega_nt_n.$$
Thus $$M_n = t_1P_{\omega,n-1} + t_2P_{\omega,n-2} + \cdots + t_{n-1}P_{\omega,1} + \omega_nt_n = P_{\omega,n}.$$
The second part of the theorem is proved by a similar argument. \end{proof}
The following theorem is a special case of a theorem of Kaygisiz and Sahin \cite{KS}.
\begin{theorem}\label{myperm=det} The permanents and determinants of $H_{+(\omega,k,n)}$ and $H_{-(\omega,k,n)}$ are related as follows:
\begin{eqnarray*}
\perm H_{+(\omega,k,n)} &= &\det H_{-(\omega,k,n)},\\
\det H_{+(\omega,k,n)}&= &\perm H_{-(\omega,k,n)}.
\end{eqnarray*}
\end{theorem}
\begin{proof} The theorem follows from Theorem \ref{mypermth}. \end{proof}
An immediate consequence of this theorem is
\begin{corollary}\label{myiff}
An element of $\mathcal{W}$ has a determinantal representation if and only if it has a permanental representation. \qed
\end{corollary}
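As an independent numerical check (our sketch, with the sample values $t=(1,2,3,4)$ and $\omega=(1,2,3,4)$), both sides of Theorem \ref{myperm=det} agree on the $4 \times 4$ matrices of the earlier proof; both equal $P_{\omega,4,4} = 45$ here:

```python
from itertools import permutations

def perm_and_det(M):
    """Brute-force permanent and determinant via all n! permutations."""
    n = len(M)
    pm = dt = 0
    for p in permutations(range(n)):
        sign, q = 1, list(p)
        for i in range(n):              # parity by sorting with transpositions
            while q[i] != i:
                j = q[i]
                q[i], q[j] = q[j], q[i]
                sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][p[i]]
        pm += prod                      # permanent: ignore the sign
        dt += sign * prod               # determinant: keep it
    return pm, dt

# H_{+(omega,4,4)} and H_{-(omega,4,4)} with t = (1,2,3,4), omega = (1,2,3,4);
# the last row holds omega_{4-j+1} * t_{4-j+1} = 16, 9, 4, 1
Hplus  = [[1,  1, 0, 0], [2, 1,  1, 0], [3, 2, 1,  1], [16, 9, 4, 1]]
Hminus = [[1, -1, 0, 0], [2, 1, -1, 0], [3, 2, 1, -1], [16, 9, 4, 1]]
assert perm_and_det(Hplus)[0] == perm_and_det(Hminus)[1] == 45
```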
%%%%%%%%%%%%%%%%%%
\section{The representability of integer sequences}\label{RIS}
The power of the isobaric theory lies in the extent to which other parts of mathematics are modeled in the isobaric ring, that is, in the ring of symmetric polynomials written in the elementary symmetric polynomial basis: MacHenry and Tudose \cite{MT,MW2} showed that there are faithful copies of the convolution group of multiplicative arithmetic functions and of the additive group of additive arithmetic functions in the group $\mathcal{W}$ in the isobaric ring, in fact, using only the GFP in the first case, and the GLP in the second. Moreover, MacHenry and Tudose \cite{MT} showed how to embed either of these groups in their divisible closures, using only the tools in $\mathcal{W}$ \cite{CG, TM}. MacHenry and Wong \cite{MW} showed that all linear recursive sequences on the integers can be represented in $\mathcal{W}$ and that the local structure of number fields can also be described using the polynomials in $\mathcal{W}$. We have shown here that whenever an integer sequence is linearly recursive, or when it is the set of values of a multiplicative or an additive arithmetic function, then it is faithfully represented by sequences in $\mathcal{W}$.
Li and MacHenry \cite{LM} gave other examples of mathematical theories modeled in terms of $\mathcal{W}$.
In this section we want to show how these results can be used to represent, and find relations among, a vast collection of integer sequences; in fact, almost all of the well-known, workaday sequences will appear in such a list. This places integer sequences in a unified theory with inner structure that enables easy calculation and the discovery of new relations among the sequences.
It is useful at this point to give a summary of the different ways that we have of producing the elements of $\mathcal{W}$:
\noindent1. By using the Differential Lattice $\mathcal{L} (t_1^{\alpha_1} t_2^{\alpha_2} \cdots t_k^{\alpha_k})$ \cite{MT, MT2}. \\
2. By using the permanents of $H_{+(\omega,k,n)}$.\\
3. By using the determinants of $H_{-(\omega,k,n)}$.\\
4. By $\displaystyle P_{\omega,k,n} = \omega_nt_n \ast F_{k,n} = \sum_{j=1}^n \omega_jt_jF_{k,n-j}$ \cite{MT}.\\
5. By $$P_{\omega,k,n}= \sum_{\alpha \vdash n} {
|\alpha| \choose \alpha_1,\ldots,\alpha_k } \frac{\sum_{j=1}^k \omega_j\alpha_j}{|\alpha|}t_1^{\alpha_1}\cdots t_k^{\alpha_k}.$$
Methods 2 and 3 are described in this paper. The fifth is the explicit formula that results from using the methods of the differential lattice.
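These methods can be cross-checked numerically. The following sketch (the helper names \texttt{P\_recursive} and \texttt{P\_partition} are ours) computes $P_{\omega,k,n}$ both by the WIP linear recursion and by the partition sum of item 5, and confirms that they agree:

```python
from math import factorial
from fractions import Fraction

def P_recursive(t, omega, n):
    """P_{omega,n} via the linear recursion
    P_m = t_1 P_{m-1} + ... + t_{m-1} P_1 + omega_m t_m  (t_j = 0 for j > k).
    omega must supply at least n weights."""
    k = len(t)
    tt = lambda j: t[j - 1] if j <= k else 0
    P = {0: 1}
    for m in range(1, n + 1):
        P[m] = sum(tt(j) * P[m - j] for j in range(1, m)) + omega[m - 1] * tt(m)
    return P[n]

def P_partition(t, omega, n):
    """P_{omega,n} via the explicit partition sum (method 5): alpha_j is the
    number of parts of size j in a partition of n with parts of size <= k."""
    k = len(t)
    total = Fraction(0)
    def scan(j, rem, alpha):
        nonlocal total
        if j > k:
            if rem == 0:
                m = sum(alpha)                           # |alpha|
                coef = Fraction(factorial(m))
                for a in alpha:
                    coef /= factorial(a)                 # multinomial coefficient
                coef *= Fraction(sum(omega[i] * alpha[i] for i in range(k)), m)
                prod = 1
                for i in range(k):
                    prod *= t[i] ** alpha[i]
                total += coef * prod
            return
        for a in range(rem // j + 1):                    # multiplicity of part j
            scan(j + 1, rem - a * j, alpha + [a])
    scan(1, n, [])
    return total

# the two methods agree; with core [1,1] and all weights 1 we get Fibonacci
assert P_partition([2, -1], [1, 2, 3, 4, 5], 5) == P_recursive([2, -1], [1, 2, 3, 4, 5], 5)
assert P_recursive([1, 1], [1] * 6, 6) == 13
```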
%%%%%%%%%%%%%%%%%%%
\section{Examples}\label{eg}
In this section, we first give an example of calculating the statistics of an integer sequence using the isobaric theory, followed by examples of some interesting representable integer sequences. We start with two well-known multiplicative arithmetic functions.
\begin{example}\label{E1} The arithmetic functions $\tau$ and $\sigma$
give, respectively, the number of divisors of $n$
(\seqnum{A000005} in \cite{EIS}) and the sum of the divisors of $n$
(\seqnum{A000203} in \cite{EIS}), $n \in \mathbb{N}$. They are
well known to be multiplicative functions of degree 2, and hence are
defined locally, that is, at each rational prime $p$.
Let us see how this follows using isobaric theory: $\tau(p^n)= n+1.$ Suppose that $\tau$ is locally representable, that is, suppose $\tau(p^n) = F_{k,n}(t_1,t_2,\ldots,t_k)$ for some natural number $k$ and for some integer values of $t_1,t_2, \ldots, t_k$; we use the notation $\tau \rightarrow \,_{\tau}F_{k,n}$.
For simplicity in the proof we drop the initial subscript $\tau$. Since $p^0 = 1$, we have $F_{k,0}=1$, and $\tau(p) = 2 = F_{k,1} = t_1$. Moreover, since $\tau(p^2) = 3 = F_{k,2} = t_1^2 + t_2$, we infer that $t_2 = -1$. Again, noting that $\tau(p^3) = 4$, we compute, from the fact that $F_{k,3} = t_1^3 +2 t_1t_2 +t_3$, that $t_3 = 0$. We can then compute inductively, using the fact that multiplicative functions are locally linearly recursive, that $t_j =0$ for $j>2$. Thus $k=2$, and the core polynomial is given by $[2,-1]$.
If we apply the same line of reasoning to $\sigma(p^n) = 1+p+\cdots+p^n$, we again deduce that $k=2$ and that the core is given by $[p+1,-p]$. Note that letting $p=1$ gives us back the statistics for $\tau$.
The companion sequences (i.e., the isobaric logarithm), using Proposition \ref{myLR} and induction, are, respectively, $(2,2,\ldots,2,\ldots)$ and $(2, p+1, p^2+1, \ldots, p^n+1,\ldots)$. Note that the value of $G_{k,0}$ is consistent with the general term, letting $p = 1$, and, of course, with the value of the isobaric logarithm of $F_{k,0}$.
\end{example}
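The computation in Example \ref{E1} can be replayed mechanically; \texttt{F\_local} is an illustrative helper of ours that runs the GFP recursion $F_{k,0}=1$, $F_{k,n}=\sum_j t_jF_{k,n-j}$ on an integer core:

```python
def F_local(core, n):
    """Values F_{k,0}, ..., F_{k,n} of the GFP sequence for core [t_1..t_k]."""
    F = [1]
    for m in range(1, n + 1):
        F.append(sum(core[j] * F[m - 1 - j] for j in range(min(len(core), m))))
    return F

# tau(p^n) = n + 1 comes from the core [2, -1]
assert F_local([2, -1], 6) == [1, 2, 3, 4, 5, 6, 7]

# sigma(p^n) = 1 + p + ... + p^n comes from the core [p+1, -p]; try p = 5
p = 5
assert F_local([p + 1, -p], 3) == [1, 6, 31, 156]
```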
Using the techniques illustrated in Example \ref{E1} above, we have the following interesting examples:
\begin{example} The Euler totient function (\seqnum{A000010} in \cite{EIS}) is a multiplicative function of infinite degree; its core is a power series: $$\varphi (p^n) = p^n - p^{n-1}, \qquad [t_1,\ldots, t_k,\ldots] = [p-1, p-1,\ldots, p-1,\ldots].$$
It is locally $F$-representable. Its companion sequence, which is locally $G$-representable, is given by
$$G_0 = 2, \quad G_n = p^n-1, \quad n>0. $$ The trigonometric identities then apply. We leave these computations to the interested reader.\end{example}
\begin{example} Jordan's function (\seqnum{A007434} in \cite{EIS}),
$J_k(n)$, is a multiplicative function and a generalization of the Euler function: $$J_k(p^n)= p^{kn}-p^{k(n-1)}.$$ Its degree is infinite, with core $[p^k-1,\ldots, p^k-1,\ldots]$, and it is locally $F$-representable. Its companion sequence is given by $$G_0 = 2,\quad G_n = p^{kn}-1,\quad n>0.$$\end{example}
\begin{example} The Catalan sequence (\seqnum{A000108} in
\cite{EIS}), $C_n = \displaystyle\frac{1}{n+1}{2n \choose n}$, is
$F$-globally representable with core $[t_j = C_{j-1}]$. We have named
such a sequence \textit{incestuous}. Its companion sequence is
number (\seqnum{A001700} in \cite{EIS}),
which is given by
$$_CG_n ={2n-1 \choose n}.$$
It also has interpretations in terms of Dyck paths.
\end{example}
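The ``incestuous'' property and the companion formula can be verified for the first few terms (a sketch in our notation, not the authors' code):

```python
from math import comb

C = [comb(2 * n, n) // (n + 1) for n in range(10)]    # Catalan numbers C_0..C_9

# core t_j = C_{j-1}: the GFP recursion F_n = sum_j t_j F_{n-j} reproduces C_n
F = [1]
for n in range(1, 10):
    F.append(sum(C[j - 1] * F[n - j] for j in range(1, n + 1)))
assert F == C                                          # the sequence is its own core

# companion: G_n = n t_n + sum_{j<n} t_j G_{n-j} gives binomial(2n-1, n)
G = [None]
for n in range(1, 10):
    G.append(n * C[n - 1] + sum(C[j - 1] * G[n - j] for j in range(1, n)))
assert G[1:] == [comb(2 * n - 1, n) for n in range(1, 10)]
```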
\begin{example} The Chebyshev polynomials of the 2nd kind
(\seqnum{A011973} in \cite{EIS}), $U_n(x)$, are globally $F$-representable with core $[2x,-1]$. If, however, we let $x=1$, then we get the local $F$-representation of $\tau$. Multiplicative functions belong to the group of units of the ring of arithmetic functions. However, the group of units is much larger than the group of multiplicative functions \cite{TM1}. In particular, any $F$-globally representable function belongs to this group. Hence, for any value of $x$, the sequence $\{U_n(x)\}$ gives an element of this group.
\end{example}
\begin{example} The Stirling numbers of the 2nd kind
(\seqnum{A008277} in \cite{EIS}), usually denoted as
$$\left\{\begin{array}{c }
n \\
k
\end{array}\right\},$$
are defined by a kind of ``Pascal'' recursion relation; namely,
$$\left\{\begin{array}{c }
n+1 \\
k
\end{array}\right\}=k\left\{\begin{array}{c }
n \\
k
\end{array}\right\}+\left\{\begin{array}{c }
n \\
k-1
\end{array}\right\}.$$
\begin{table}
\centering
\begin{tabular}{ cccccccccccc}
{\color{red} $n/k$ } & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10\\
{\color{red} 0} & 1 \\
{\color{red} 1} & 0 & 1 \\
{\color{red} 2} & 0 & 1 & 1 \\
{\color{red} 3} & 0 & 1 & 3 & 1 \\
{\color{red} 4} & 0 & 1 & 7 & 6 & 1 \\
{\color{red} 5} & 0 & 1 & 15 & 25 & 10 & 1 \\
{\color{red} 6} & 0 & 1 & 31 & 90 & 65 & 15 & 1 \\
{\color{red} 7} & 0 & 1 & 63 & 301 & 350 & 140 & 21 & 1 \\
{\color{red} 8} & 0 & 1 & 127 & 966 & 1701 & 1050 & 266 & 28 & 1 \\
{\color{red} 9} & 0 & 1 & 255 & 3025 &7770 & 6951 & 2646 & 462 & 36 & 1 \\
{\color{red} 10} & 0 & 1 & 511 & 9330 & 34105 & 42525 & 22827 & 5880 & 750 & 45 & 1 \\
\end{tabular}
\caption{Stirling numbers of the second kind, $n=0,\ldots,10$}
\label{tablestirling}
\end{table}
%(Table of Stirling Numbers of the second kind, $n=0,\ldots,10$)
Note that the first non-zero element in each column of Table \ref{tablestirling} is a 1, and that the set of next-to-last numbers in each row gives the triangular numbers in the correct ascending sequence.
\begin{theorem}\label{myStirling2}
Each column is an arithmetic function in the group of units of the ring of arithmetic functions; that is, each column is a globally $F$-representable arithmetic function of degree $k$ (\seqnum{A071951}$(n+k,k)$ in \cite{EIS}).
\end{theorem}
For example, the first few core polynomials are given by
$$[1], [3,-2], [6,-11,6], [10,-35, 50, -24], [15,-85, 225,-274,120]. $$
One will recognize these numbers as the coefficients of the polynomials otherwise given as
\begin{eqnarray*}
&& (x-1); \\
&& (x-1)(x-2); \\
&& (x-1)(x-2)(x-3);\\
&& (x-1)(x-2)(x-3)(x-4);\\
&& (x-1)(x-2)(x-3)(x-4)(x-5),
\end{eqnarray*}
etc.
\vspace{0.3cm}
This is not a surprise: the Stirling numbers of the second kind are
given as the coefficients of the polynomials defined by the so-called
``falling'' factorial; that is,
$$\mathcal{C}(X) = \prod_{j=1}^k (X-j).$$
Moreover, $t_{k,1} = T_k$, where $t_{k,1}$ is just the recursion
parameter $t_1$ for the $k$-th column, and $T_k $ is the $k$-th element
in the sequence of triangular numbers. The last coefficient in the core
representation is $t_k = (-1)^{k-1}k!$.
As for Stirling numbers of the first kind
(\seqnum{A008275} in \cite{EIS}),
they are not representable in our sense. However, there is an unexpected and interesting relation between Stirling numbers of the 1st kind and Stirling numbers of the 2nd kind. Namely, the rows, in what might be called the Stirling triangle of the first kind, are just the absolute values of the core numbers $t_j$, in reverse order. For example, the 5th row of the table for Stirling numbers of the 1st kind is just $$(0, 24, 50, 35, 10, 1).$$ The apparently extra 1 is just $t_0$, the leading coefficient of the core polynomial, which we have suppressed in our notation.
As a result of these observations, we obtain
\begin{corollary}\label{myStirling1&2} The following relations between Stirling numbers of the first kind and Stirling numbers of the second kind hold:
$$\left\{\begin{array}{c }
n+k \\
k
\end{array}\right\} = \sum_{j=0}^{k-1} (-1)^{k-1-j} \left[\begin{array}{c }
k+1 \\
j+1
\end{array}\right] \left\{\begin{array}{c }
n+j \\
k
\end{array}\right\},$$
where we use $ \left[\begin{array}{c }
k+1 \\
k-j
\end{array}\right] $
to denote Stirling numbers of the first kind.
\end{corollary}
That is, the recursion parameters of the $k$-th column of the table of Stirling numbers of the 2nd kind are given by $$t_{j} = (-1)^{j-1} \left[\begin{array}{c }
k+1 \\
k+1-j
\end{array}\right], \quad j = 1, \ldots, k. $$
\begin{proof} The proof follows from the definitions of the two types of sequences in terms of rising and falling factorials. \end{proof}
The companion sequence for the $k$-th column is just the sequence $1^n+2^n+\cdots + k^n$. When $k=3$, for example, this is the sequence
\seqnum{A001550} in \cite{EIS}.
\end{example}
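The column recursion and the power-sum companion sequence claimed above can be verified for small $k$ (our sketch; \texttt{S1} and \texttt{S2} are the standard double recurrences, and the parameters $t_j = (-1)^{j-1}\left[{k+1 \atop k+1-j}\right]$ are read off the cores listed above):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def S2(n, k):
    """Stirling numbers of the second kind via the 'Pascal'-type recursion."""
    if n == k:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * S2(n - 1, k) + S2(n - 1, k - 1)

@lru_cache(maxsize=None)
def S1(n, k):
    """Unsigned Stirling numbers of the first kind."""
    if n == k:
        return 1
    if n == 0 or k == 0:
        return 0
    return (n - 1) * S1(n - 1, k) + S1(n - 1, k - 1)

for k in range(1, 7):
    # recursion parameters of the k-th column, read off the falling factorial
    t = [(-1) ** (j - 1) * S1(k + 1, k + 1 - j) for j in range(1, k + 1)]
    # each column satisfies the induced degree-k linear recursion ...
    for n in range(2 * k, 2 * k + 5):
        assert S2(n, k) == sum(t[j - 1] * S2(n - j, k) for j in range(1, k + 1))
    # ... and its companion sequence is the power sum 1^n + 2^n + ... + k^n
    G = [k]
    for n in range(1, 8):
        G.append(sum(t[j - 1] * G[n - j] for j in range(1, min(k, n - 1) + 1))
                 + (n * t[n - 1] if n <= k else 0))
    assert G == [sum(i ** n for i in range(1, k + 1)) for n in range(8)]
```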
\begin{example}The triangular numbers (\seqnum{A000217} in \cite{EIS})
$T_k$ are also $F$-globally representable: $\mathcal{C}(X) = [3,-3,1]$, so that $k=3$. The companion sequence is just $\{G_n = 3\}$ for all $n$.
\end{example}
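A quick check of this example (our sketch; the core $[3,-3,1]$ encodes the standard recursion $T_{m+1} = 3T_m - 3T_{m-1} + T_{m-2}$):

```python
t = [3, -3, 1]                      # core parameters for the triangular numbers

# F_0 = 1, F_n = t_1 F_{n-1} + t_2 F_{n-2} + t_3 F_{n-3} gives T_2, T_3, ...
F = [1]
for n in range(1, 10):
    F.append(sum(t[j - 1] * F[n - j] for j in range(1, min(3, n) + 1)))
assert F[1:] == [m * (m + 1) // 2 for m in range(2, 11)]

# companion: G_n = sum_{j<n} t_j G_{n-j} + n t_n is constantly 3
G = [3]
for n in range(1, 10):
    G.append(sum(t[j - 1] * G[n - j] for j in range(1, min(3, n - 1) + 1))
             + (n * t[n - 1] if n <= 3 else 0))
assert all(g == 3 for g in G)
```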
Kaygisiz and Sahin \cite{KS} pointed out that Pell numbers, Pell-Lucas
numbers, bivariate Fibonacci numbers, Perrin sequences, and Exponential
Perrin sequences have permanental and determinantal representations.
We give their statistics here.
\begin{table}
\centering
\begin{tabular}{rcl}
$F_0$& =& 1 \\
$F_1 $&=&$ t_1 $\\
$F_2 $&=&$ t_1^2 + t_2$ \\
$F_3$& = &$t_1^3 + 2t_1t_2 + t_3 $\\
$F_4$& =&$ t_1^4 + 3t_1^2t_2 + t_2^2 + 2t_1t_3 + t_4$ \\
$F_5$& =&$ t_1^5 + 4t_1^3t_2 + 3t_1t_2^2 +3t_1^2t_3+ 2t_2t_3 + 2t_1t_4 + t_5 $\\
$F_6$& = &$t_1^6 + 5t_1^4t_2 + 6t_1^2t_2^2 + t_2^3 + 4t_1^3t_3 + t_3^2 + 6t_1t_2t_3 + 3t_1^2t_4 + 2t_2t_4 + 2t_1t_5 + t_6 $ \\
&$\vdots$&\\
\end{tabular}
\caption{Generalized Fibonacci polynomials}
\label{GFP}
\end{table}
\begin{table}
\centering
\begin{tabular}{rcl}
$G_0$& =& k \\
$G_1$ &=&$ t_1$ \\
$G_2$ &=& $t_1^2 + 2t_2$ \\
$G_3$ &=& $t_1^3 + 3t_1t_2 + 3t_3$ \\
$G_4$ &=& $t_1^4 + 4t_1^2t_2 +2 t_2^2 + 4t_1t_3 + 4t_4$ \\
$G_5$ &=&$ t_1^5 + 5t_1^3t_2 + 5t_1t_2^2+5t_1^2t_3+ 5t_2t_3 + 5t_1t_4 + 5 t_5 $\\
$G_6$& =& $t_1^6 + 6t_1^4t_2 + 9t_1^2t_2^2 + 2t_2^3 + 6t_1^3t_3 +3 t_3^2 + 12t_1t_2t_3 + 6t_1^2t_4 + 6t_2t_4 + 6t_1t_5 + 6t_6 $ \\
&$\vdots$&\\
\end{tabular}
\caption{Generalized Lucas polynomials}
\label{GLP}
\end{table}
\begin{example} Pell numbers (\seqnum{A000129} in \cite{EIS}) are $F$-representable with a core $[2,1]$. \end{example}
\begin{example} Pell-Lucas numbers (\seqnum{A113501} in \cite{EIS}) are their companions, hence, $G$-representable. \end{example}
\begin{example} Bivariate Fibonacci numbers (\seqnum{A015441} in \cite{EIS}) are $F$-representable with $k=2$. \end{example}
\begin{example}\label{E11} Perrin sequences (\seqnum{A078512} in \cite{EIS}) are $G$-representable with a core of $[0,1,1]$. \end{example}
\begin{example}\label{E12} Exponential (Padovan) Perrin sequences
(\seqnum{A001608} in \cite{EIS}) are $F$-representable on the same core. Thus Example \ref{E11} is the companion sequence for Example \ref{E12}.\end{example}
%%%%%%%%
\vspace{0.5cm}
We include Table \ref{GFP} and Table \ref{GLP}, showing the first few generalized Fibonacci polynomials and the first few generalized Lucas polynomials. In those tables we take the point of view that $k$ goes to $\infty$, and we omit it from the indices of the polynomials. When the companion matrix is non-singular, all sequences can be extended ``northward''; this extension is also omitted from the tables.
\vspace{0.5cm}
\begin{thebibliography}{99}
\bibitem{CG} T. B. Carroll and A. A. Gioia, On a subgroup of the group of multiplicative arithmetic functions, \textit{J. Austral. Math. Soc.} Ser. A \textbf{20} (1975), 348--358.
\bibitem{CE} E. D. Cashwell and C. J. Everett, The ring of number-theoretic functions, \textit{Pacific J. Math.} \textbf{9} (1959), 975--985.
\bibitem{KS} K. Kaygisiz and A. Sahin, Determinants and permanents of Hessenberg matrices and generalized Lucas polynomials, preprint,
\url{http://arxiv.org/abs/1111.4071}.
\bibitem{DJL} D. J. Lehmer, An extended theory of Lucas functions, \textit{Ann. Math.}, 2nd Ser., \textbf{31} (1930), 419--448.
\bibitem{LM} H. Li and T. MacHenry, The convolution ring of arithmetic functions and symmetric polynomials, \textit{Rocky Mountain J. Math.}, to appear.
\bibitem{MacD} I. G. Macdonald, \textit{Symmetric Functions and Hall Polynomials}, Clarendon Press, Oxford, 1995.
\bibitem{TM1} T. MacHenry, A subgroup of the group of units in the ring of arithmetic functions, \textit{Rocky Mountain J. Math.} \textbf{29} (1999), 1055--1075.
\bibitem{TM} T. MacHenry, Generalized Fibonacci and Lucas polynomials and multiplicative arithmetic functions, \textit{Fibonacci Quarterly} \textbf{38} (2000), 17--24.
\bibitem{MT} T. MacHenry and G. Tudose, Reflections on isobaric polynomials and arithmetic functions, \textit{Rocky Mountain J. Math.} \textbf{35} (2005), 901--928.
\bibitem{MT2} T. MacHenry and G. Tudose, Differential operators and weighted isobaric polynomials, \textit{Rocky Mountain J. Math.} \textbf{36} (2006), 1957--1976.
\bibitem{MW} T. MacHenry and K. Wong, Degree-$k$ linear recursions mod($p$) and algebraic number fields, \textit{Rocky Mountain J. Math.} \textbf{41} (2011), 1303--1328.
\bibitem{MW2} T. MacHenry and K. Wong, A correspondence between isobaric rings and multiplicative arithmetic functions, \textit{Rocky Mountain J. Math.} \textbf{42} (2012), 1247--1290.
\bibitem{PJM} P. J. McCarthy, \textit{Introduction to Arithmetic Functions}, Springer, New York, 1986.
\bibitem{Mu} T. Muir, \textit{A Treatise on the Theory of Determinants}, Dover, 1960.
\bibitem{P} G. P\'olya, Anzahlbestimmungen f\"{u}r Gr\"{u}ppen, Graphen und chemische Verbindungen, \textit{Acta Math.} \textbf{68} (1937), 145--254.
\bibitem{Read} R. C. Read, The use of S-functions in combinatorial analysis, \textit{Can. J. Math.} \textbf{20} (1968), 808--841.
\bibitem{Rear1} D. Rearick, Operators on algebras of arithmetic functions, \textit{Duke Math. J.} \textbf{35} (1967), 761--766.
\bibitem{Rear2} D. Rearick, The trigonometry of numbers, \textit{Duke Math. J.} \textbf{35} (1968), 767--776.
\bibitem{Red} J. H. Redfield, The theory of group-reduced distributions, \textit{Amer. J. Math.} \textbf{49} (1927), 433--455.
\bibitem{Sha} H. N. Shapiro, On the convolution ring of arithmetic functions, \textit{Commun. Pur. Appl. Math.} \textbf{25} (1972), 287--336.
\bibitem{EIS} The On-Line Encyclopedia of Integer Sequences,
\url{http://www.oeis.org}.
\end{thebibliography}
\bigskip
\hrule
\bigskip
\noindent 2010 {\it Mathematics Subject Classification}:
Primary 11B39; Secondary 11B75, 11N99, 11P99, 05E05.
\noindent \emph{Keywords: } integer sequence, isobaric polynomial,
arithmetic function, multiplicative function, additive function,
generalized Fibonacci polynomial, generalized Lucas polynomial.
\bigskip
\hrule
\bigskip
\noindent (Concerned with sequences
\seqnum{A000005},
\seqnum{A000010},
\seqnum{A000108},
\seqnum{A000129},
\seqnum{A000203},
\seqnum{A000217},
\seqnum{A001550},
\seqnum{A001608},
\seqnum{A001700},
\seqnum{A001973},
\seqnum{A007434},
\seqnum{A008275},
\seqnum{A008277},
\seqnum{A011973},
\seqnum{A015441},
\seqnum{A071951},
\seqnum{A078512}, and
\seqnum{A113501}.)
\bigskip
\hrule
\bigskip
\vspace*{+.1in}
\noindent
Received July 1 2012;
revised versions received August 5 2012; October 27 2012; January 24 2013;
February 17 2013.
Published in {\it Journal of Integer Sequences}, March 2 2013.
\bigskip
\hrule
\bigskip
\noindent
Return to
\htmladdnormallink{Journal of Integer Sequences home page}{http://www.cs.uwaterloo.ca/journals/JIS/}.
\vskip .1in
\end{document}