\documentclass[12pt]{amsart}
%\usepackage[french] {babel}
\usepackage{amssymb,latexsym}
%=======================================================
%\usepackage{euscript} produces script-style letters
%the macro command is: {\EuScript}
%=======================================================
%\usepackage{euscript}
%\usepackage{graphics}
%\usepackage{stalike}
%\usepackage{theorem}
%\usepackage{epsfig}
%\input Apicmacs.tex
%\input option_keys
%=======================================================
%Blackboard bold ("double-struck") letters. The macro command is: {\Bbb}
%(defined by Rene. These two lines are equivalent to a
%usepackage)
%=======================================================
\DeclareSymbolFont{AMSb}{U}{msb}{m}{n}
\DeclareSymbolFontAlphabet{\Bbb}{AMSb}
%\textheight 7.75 in
%\textwidth 6.5in
%\headheight 0.5in
%\topskip 0.0in
%\topmargin 0.0in
%\oddsidemargin 0.3in
%\evensidemargin 0.3in
\numberwithin{equation}{section}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}{Lemma}[section]
\newtheorem{proposition}{Proposition}[section]
\newtheorem{corollary}{Corollary}[section]
%{\theorembodyfont{\rmfamily}\newtheorem{defin}{Definition}[section]}
%{\theorembodyfont{\rmfamily}\newtheorem{examp}{Example}[section]}
%{\theorembodyfont{\rmfamily}\newtheorem{remq}{Remark}[section]}
\theoremstyle{remark}
\newtheorem{definition}{Definition}[section]
\newtheorem{remark}{Remark}[section]
%\newenvironment{definition}
% {\begin{defin}}
% {\hfill {$\triangle$}\par\end{defin}}
%\newenvironment{example}
% {\begin{examp}}
% {\hfill {$\Box$}\par\end{examp}}
%\newenvironment{preuve}{{\bf Preuve} \par}{\hfill {\Bbb j}}
\newcommand{\ds}{\displaystyle}
\renewcommand{\thefootnote}{$\fnsymbol{footnote}$}
\font\ly=lasy10 scaled 1100
\catcode`\«=\active\def«{\ofd}
\catcode`\»=\active\def»{\ffg}
\chardef\lg='050
\chardef\rg='051
\def\ofd{\leavevmode\hbox{\ly\lg\kern-0.2em\lg}}
\def\ffg{\leavevmode\hbox{\ly\rg\kern-0.2em\rg}}
%$< \hspace{-9pt}(\
%\hspace{-2pt} >\hspace{-7pt})\,$
%
%=======================================================
%
%End of preamble
%
%=======================================================
\begin{document}
\title[Rank of linear matrices]{Commutative/noncommutative
rank of linear matrices and subspaces of
matrices of low rank}
\author[Marc Fortin and Christophe Reutenauer]{Marc
Fortin$^*$ and
Christophe Reutenauer$^\dagger$}
\thanks{$^*$Marc Fortin;
C\'egep R\'egional de Lanaudi\`ere \`a
l'Assomption, 180, rue Dorval, l'Assomption (Qu\'ebec) Canada, J5W
6C1}
\thanks{$^\dagger$Christophe Reutenauer;
Universit\'e du Qu\'ebec \`a
Montr\'eal; D\'epartement de math\'ematiques; Case postale 8888,
succursale Centre-Ville, Montr\'eal (Qu\'ebec) Canada, H3C 3P8
(mailing address); e-mail: christo@math.uqam.ca. Partially supported by
NSERC
and CRC (Canada).}
\dedicatory{Dedicated to our friend Alain Lascoux}
\date{}
\begin{abstract}
A space of matrices of low rank is a vector space of rectangular matrices
whose maximum rank is strictly smaller than the number of rows and the
number of columns. Among these are the compression spaces, where the
rank condition is guaranteed by a rectangular hole of 0's of appropriate
size. Spaces of matrices are naturally encoded by linear matrices. The
latter have a double existence: over the rational function field, and
over the free (noncommutative) field. We show that a linear matrix
corresponds to a compression space if and only if its ranks over both
fields are equal. We give a simple linear-algebraic algorithm to
decide whether a given space of matrices is a compression space. We give
inequalities relating the commutative rank and the noncommutative rank
of a linear matrix.
\end{abstract}
\maketitle
%First page headline in AmS-LaTeX for S\'eminaire Lotharingien de Combinatoire
%--first part
\thispagestyle{myheadings}
\font\rms=cmr8
\font\its=cmti8
\font\bfs=cmbx8
\markright{\its S\'eminaire Lotharingien de
Combinatoire \bfs 52 \rms (2004), Article~B52f\hfill}
\def\thepage{}
%
%
\setcounter{section}{0}
\section{Introduction}
We consider here {\it linear matrices}, that is, matrices whose entries are
of
the form $a_0+a_1x_1+\cdots +a_dx_d$, where the coefficients $a_i$ are taken
in
a (commutative) field $k$ and where the $x_i$ are indeterminates, which
may be commuting or noncommuting. Such a matrix is of the form
$M_0+\sum\limits_{i=1}^dx_i\,M_i$, where the $M_i$ are all over $k$ and of
the
same size.
Linear matrices appear in several areas of mathematics: Kronecker pencils
(\cite{G} chapter 2), determinantal varieties (\cite{H} chapter 9), spaces
of
matrices of low rank \cite{EH}, algebraic automata theory (\cite{E} VII. 6),
free algebras \cite{Ro}, free fields \cite{CR}, 3-tensors (\cite{K}
4.6.4).
A linear matrix $M$ has two lives: it may be considered as a matrix over the
rational function field $k(x_1,\ldots,x_d)$ in the commuting variables
$x_1,\ldots,x_d$. It may also be considered as a matrix over the {\it free
field}
\linebreak$k< \hspace{-9pt}(\ x_1,\ldots,x_d\hspace{-2pt}
>\hspace{-7pt})\,$
(which is the noncommutative analogue of the previous one, see \cite{C1},
\cite{C2}). In both cases, $M$ has a rank, which we denote by $crk(M)$
and
$ncrk(M)$, respectively. It is intuitively clear that $crk(M)\leq
ncrk(M)$ (see Corollary~2.2). One may have strict inequality. For
instance,
the matrix
$$\left({\begin{array}{ccc}
\hphantom{-}0&\hphantom{-}x&\hphantom{-}y\\
-x&\hphantom{-}0&\hphantom{-}1\\
-y&-1&\hphantom{-}0
\end{array}}\right)$$
has commutative rank 2, but is invertible over the free field, hence of
noncommutative rank 3: if one inverts this matrix, one is led to the
element
$(yx-xy)^{-1}$, which exists only noncommutatively.
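To make this explicit, call this matrix $N$ and try to solve
$N\,w={}^t(1,0,0)$: the second and third rows give $w_3=x\,w_1$ and
$w_2=-y\,w_1$, so that the first row reads
$$x\,w_2+y\,w_3=(yx-xy)\,w_1=1 .$$
Hence $w_1=(yx-xy)^{-1}$, which exists over the free field; commutatively
the left-hand side vanishes, so the system has no solution.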
Our main result gives a necessary and sufficient condition for equality of
both
ranks. To state it, we need some definitions. Observe first that if the
commutative rank of $M$ is maximal with respect to its size (that is, equal
to
$\min(n,p)$ where $M$ is of size $n\times p$), then both ranks coincide, as
seen
from the previous inequality. So we may disregard this case and may assume
that
$M$ has rank
$<\min(n,p)$. Associate to the linear matrix $M=M_0+\sum\limits_{i=1}^d
x_i\,M_i$
the subspace $H$ of $k^{n\times p}$ spanned by the matrices $M_i$.
%First page headline in AmS-LaTeX for S\'eminaire Lotharingien de Combinatoire
%--restoring the headers and pagenumbering
\pagenumbering{arabic}
\addtocounter{page}{1}
\markboth{\SMALL MARC FORTIN AND CHRISTOPHE REUTENAUER}{\SMALL
RANK OF LINEAR MATRICES}
%
%
Since the commutative rank of $M$ is $<\min(n,p),$ the rank of each element
of
$H$ is also $<\min(n,p)$. Moreover, if we assume that $k$ is {\it
infinite}, the
maximum rank in $H$ is equal to the commutative rank of $M$ (see Lemma~3.1).
A subspace
$H$ of $k^{n\times p}$ whose maximum rank is $<\min(n,p)$ is called a {\it
subspace of matrices of low rank}. Such a subspace always comes from some
linear matrix (for the $M_i$, take a spanning set of $H$).
These subspaces have been considered by several authors
\cite{F}, \cite{W}, \cite{AL},
\cite{A},
\cite{B}, \cite{EH}, \cite{Re}. They are not completely well-understood.
Among
them, the simplest ones are the so-called {\it compression spaces} (the
terminology is from Eisenbud and Harris; Westwick \cite{W} calls {\it
essentially decomposable} such a space, in the case where all nonzero
matrices in $H$ have the same rank $r$):
$H$ is a compression space if after some change of basis of rows and
columns over
$k$ (independently), the matrix
$M$ has the block form $\left({\begin{array}{cc}
A&0\\
B&C
\end{array}}\right)$, where the matrix $B$ is of size $i\times j$, and where
moreover the maximum rank in $H$ is $i+j$ (note that a matrix of this form,
with
$B$ of size $i\times j$, has necessarily rank $\leq i+j$).
Our characterization is the following (Theorem~3.1): $H$ {\it is a
compression
space if and only if the commutative and noncommutative rank of $M$ are
equal}.
Before proving this result, we need to generalize a result of Cohn that
characterizes linear square matrices that are invertible in the free field.
We
characterize the noncommutative rank of any linear matrix (see Theorem~2.1
for a
precise statement). As a corollary, we obtain that the noncommutative rank
of a
linear matrix $M=M_0+\sum\limits_{i=1}^d x_i\, M_i$ depends only on
the subspace $H$ spanned by the $M_i$.
In the last section, we give an efficient algorithm to solve the following
question: given a subspace $H$ of $k^{n\times p}$ of low rank, decide if $H$
is
a compression space (note that in order to know that $H$ is of low rank, it
is
enough to compute the commutative rank of $M$, which is easy), see Theorem~4.1.
Note that \cite{EH} gives an effective criterion for a subspace $H$ to
be
a compression space, but the underlying algorithm seems to be of high
complexity. Our algorithm uses only techniques of linear algebra.
From Theorem~3.1 and Theorem~4.1 we may deduce two proofs of the following
fact:
the property for $H$ to be a compression space or not is invariant under
extension of the field of scalars $k$. This is not so obvious because
translating the definition into equations leads to a system of nonlinear
algebraic equations.
%
%=======================================================
%Section 2
%=======================================================
%
\section{Rank over the free field of linear matrices}
Let $X$ be a set of noncommuting variables and $k$ a (commutative)
field. We denote by $k\langle X\rangle$ the $k$-algebra of
noncommutative polynomials over $k$ generated by the noncommuting
variables $x\in X$. Among all the fields containing $k\langle
X\rangle$, there is one, called the {\em free field}, which is
unique up to isomorphism and which is characterized by the
following property: each square matrix $M$ over $k\langle
X\rangle$, which is full, becomes invertible over the free field.
Recall that a square matrix $M$ over a ring $R$ is called {\em full}
(over $R$) if it is not possible to have a factorization $M=PQ$,
with $P$ of size $n\times p$, $Q$ of size $p\times n$ and $p<n$. We denote
the free field by $k< \hspace{-9pt}(\ X\hspace{-2pt}>\hspace{-7pt})\,$.
See \cite{C1}, \cite{C2}
for more on the free field.
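For instance, the linear matrix
$$\left({\begin{array}{cc}
x&y\\
x&y
\end{array}}\right)=
\left({\begin{array}{c}
1\\
1
\end{array}}\right)\left({\begin{array}{cc}
x&y
\end{array}}\right)$$
is not full, hence is not invertible over the free field.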
We consider now linear matrices, that is, matrices whose entries
are polynomials of degree $\leq 1$. These matrices play a special
role for the free field, since each element of the free field is
equal to some entry of the inverse in the free field of some square linear
matrix \cite{C2}, Theorem~6.3.7.
Recall from the Introduction that to each linear matrix over $k\langle
X\rangle^{n\times p}$, we associate a subspace of $k^{n\times p}$.
We say that two subspaces $H,K$ of $k^{n\times p}$ are {\it equivalent} if
$K=UHV$ for some invertible matrices $U,V$ over $k$; equivalently, $H$ is
obtained from $K$ by row and column operations over $k$. Now, Atkinson
and Lloyd \cite{AL} call {\it $r$-decomposable} a subspace $H$ of
$k^{n\times p}$
if
$H$ is equivalent to a subspace of matrices all of the form
$\left({\begin{array}{cc}
A&0\\
B&C
\end{array}}\right)$, where $B$ is of size $i\times j$ and $i+j=r$
(equivalently,
the 0 block is of size $(n-i)\times(p-j)$). Note that such a matrix is
necessarily
of rank
$\leq r$. Following them, we say that a linear matrix $M$ is
{\it $r$-decomposable} if its associated subspace is; equivalently, for some
invertible
matrices $U,V$ over $k$,
$U\,M\,V$ is of the above form.
The next result characterizes the rank of a linear matrix by a
linear-algebraic property. It extends Cohn's
characterization of full linear matrices (\cite{C2}, Cor.~6.3.6).
\begin{theorem}
Let $M$ be a linear matrix of size $n\times p$ over $k\langle X\rangle$
and $r<\min(n,p)$. Its rank in the free field (equivalently, its inner rank
in
$k\langle X\rangle$) is $\leq r$ if and only if $M$ is $r$-decomposable.
\end{theorem}
Note that if the rank of a linear matrix is not $<\min(n,p)$ then it is
equal to
$\min(n,p)$: thus the theorem completely characterizes the rank of a linear
matrix
in the free field.
\bigskip
\begin{proof}
\medskip
\noindent
1)\quad
Note that
$$\left(\begin{array}{cc}
A&0\\
B&C
\end{array}\right)=
\left(\begin{array}{cc}
A&0\\
B&I_i
\end{array}\right)\ \
\left(\begin{array}{cc}
I_j&0\\
0&C
\end{array}\right)$$
where $I_l$ denotes the identity matrix of size $l\times l$; the
product is well--defined, since $A$ has $j$ columns, and $C$ has $i$
rows. In particular, the number of columns of the first factor (=
number of rows of the second) is $j+i=r$, hence the product has
inner rank $\leq r$.
\medskip
\noindent
2)\quad
In order to prove the converse, we may suppose that the rank of $M$
in the free field is exactly $r$. Then we may write $M=FG$, where
$F,G$ are matrices over $k\langle X\rangle$ of size $n\times r$ and
$r\times p$, with $r<\min(n,p)$. The argument of Cohn's characterization
of full linear matrices (\cite{C2}, Cor.~6.3.6) extends to this
factorization and yields invertible matrices $U,V$ over $k$ such that
$U\,M\,V$ has the block form above; hence $M$ is $r$-decomposable.
\end{proof}
\begin{corollary}
The rank over the free field of a linear matrix
$M=M_0+\sum\limits_{i=1}^d x_i\,M_i$
depends
only on the subspace of $k^{n\times p}$ spanned by the matrices $M_i$.
\end{corollary}
Hence this rank gives an invariant of subspaces of matrices. It does not
seem easy
to compute. The algorithm of \cite{CR} (which decides if a linear matrix
is
full) may be easily adapted to compute the rank of a linear matrix over the
free
field; it is, however, of high complexity, since it uses Gr\"{o}bner bases. It
would
be interesting to find an algorithm which uses only linear algebra
techniques.
The following result compares the commutative and noncommutative ranks.
\begin{corollary}
Let $M$ be a linear matrix. Then $crk(M)\leq ncrk(M)\leq 2 crk(M)$.
\end{corollary}
\begin{proof}
The first inequality follows from the theorem.
For the second, we use a result of \cite{F} (Lemma~1): if a subspace $H$
of $k^{n\times p}$ has maximum rank $r$, then it is equivalent to a
subspace,
all matrices of which are of the form $\left({\begin{array}{cc}
A&0\\
B&C
\end{array}}\right)$, where $B$ is square of order $r$. Hence, we may
suppose that $M$ is of this form. Thus the theorem implies that
$ncrk(M)\leq 2r.$
\end{proof}
\begin{remark}
The first inequality in the corollary is evidently sharp: indeed, take a
linear matrix of size $n\times p$ and of commutative rank $\min(n,p)$.
However,
the second is not. Indeed, if
$M$ has commutative rank 1, then necessarily it has also noncommutative
rank 1;
this is because, classically, such a matrix is equi\-valent (after change of
bases over
$k$ in the spaces of rows and of columns) to a matrix of the form
$$\left({\begin{array}{cccc}
0&\cdots&0&\ast\\
0&\cdots&0&\ast\\
\vdots&&\vdots&\vdots\\
0&\cdots&0&\ast
\end{array}}\right)$$
or its transpose.
The case of rank 2 and 3 may also be handled by results of Atkinson \cite{A}
and
Eisenbud-Harris \cite{EH}. Suppose that $M$ is of commutative rank 2 and
let
$H$ be the span in $k^{n\times p}$ of the $M_i$'s. Then it follows
from
Theorem~1.1 in \cite{EH} that its noncommutative rank is 2 or 3 (since the
matrix
in the Introduction has noncommutative rank 3).
Similarly, suppose that $M$ has commutative rank 3. Then it follows from
Theorem~1.2 in \cite{EH} that $M$ has noncommutative rank 3 or 4.
Note that, by taking direct sums of copies of the matrix of the
Introduction, one may find a
square linear matrix of order $3n$, which is of noncommutative rank $3n$,
and of
commutative rank $2n$. So, one could expect an inequality of the form
$ncrk(M)\leq \frac{3}{2}crk(M)$, which would certainly be sharp.
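Explicitly, if $N$ denotes the matrix of the Introduction, then the
block-diagonal matrix $\mathrm{diag}(N,\ldots,N)$ with $n$ diagonal
blocks has commutative rank $2n$ and noncommutative rank $3n$, since
both ranks are additive over direct sums; thus the ratio
$ncrk/crk=\frac{3}{2}$ is attained for every $n$.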
\end{remark}
%
%=======================================================
%Section 3
%=======================================================
%
\section{Compression spaces}
We recall the definitions of the Introduction, and we choose the language
of linear mappings, instead of matrices.
We consider a subspace $H$ of $\mbox{Hom}\,(E,F)$, where $E,F$ are
vector spaces over $k$, of dimension $p,n$ respectively. We assume
that $k$ is infinite.
By definition, the {\em rank} of $H$ is the maximum rank of its
elements. We say that $H$ is of {\em low rank} if its rank is
smaller than $\min(n,p)$.
Among subspaces of low rank are the following ones: $H$ is a {\em
compression space} if for some subspaces $E^\prime$ of $E$ and
$F^\prime$ of $F$ one has:
\begin{enumerate}
\item[1)]
$\mbox{codim}\,E^\prime+\dim F^\prime=$ rank of $H$;
\item[2)]
any element of $H$ maps $E^\prime$ into $F^\prime$.
\end{enumerate}
Taking bases of $E$ and $F$, containing bases of $E^\prime$ and
$F^\prime$, we see that the matrices representing $H$ are all of
the form $\left(\begin{array}{cc}
\times&0\\
\times&\times
\end{array}\right)$, where the 0 matrix has
$\mbox{codim}\,F^\prime$ rows and $\dim E^\prime$ columns. In other words,
in
the terminology of \cite{AL}, $H$ is a compression space if $H$ is
$r$-decomposable and of rank $r$.
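As a simple illustration, take $E=F=k^3$, $E^\prime$ spanned by the
last two basis vectors of $E$, and $F^\prime$ spanned by the last basis
vector of $F$. The space $H$ of all maps sending $E^\prime$ into
$F^\prime$ is represented by the matrices
$$\left({\begin{array}{ccc}
\ast&0&0\\
\ast&0&0\\
\ast&\ast&\ast
\end{array}}\right),$$
whose maximum rank is $2=\mbox{codim}\,E^\prime+\dim F^\prime$, so that
$H$ is a compression space of rank 2.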
\begin{lemma}
Let $M=M_0+\sum\limits_{i=1}^d x_iM_i$ be a linear matrix, and $H$ the
subspace spanned by the matrices $M_0,M_1,\ldots,M_d$. Then the maximum
rank in $H$ is equal to the commutative rank of $M$ ($k$ is infinite).
\end{lemma}
\pagebreak
\begin{proof}
Note that, for square matrices $A_i$ over $k$, $\det
\left({x_0A_0+\sum\limits_{i=1}^d x_iA_i}\right)$ vanishes if and only if so
does $\det
\left({A_0+\sum\limits_{i=1}^d x_iA_i}\right)$, since the first is a
homogeneous polynomial whose degree is the order of the $A_i$'s. If we apply
this to the minors of $M$, we see that it is enough to prove the lemma for
the linear matrix $\sum\limits_{i=0}^d x_iM_i$. Now, since $k$ is
infinite, a polynomial vanishes if and only if it vanishes for all values
of the variables in $k$. This implies the lemma.
\end{proof}
\medskip
Given a subspace $H$ of $\mbox{Hom}\,(E,F)$, viewed in matrix form,
we associate to $H$ a linear matrix as follows: let
$M_1,\ldots,M_d$ be a spanning set of $H$, let $x_1,\ldots,x_d$ be
indeterminates; then the linear matrix is
$M=\sum\limits_{i=1}^dx_iM_i$.
\medskip
We know from Lemma~3.1 that
$crk(M)$ = rank$(H)$. Suppose that $H$ is not of low rank;
equivalently, its rank is $\min(n,p)$. Then, since an $n\times p$
matrix over any field has rank $\leq\min(n,p)$, we deduce that $crk(M)=$
$ncrk(M)$ if $H$ is not of low rank.
The striking fact is that, when $H$ is of low rank, then this
equality characterizes compression spaces.
\begin{theorem}
Let $H$ be a space of matrices of low rank and $M$ its
associated linear matrix. Then $H$ is a compression space if and
only if the commutative rank of $M$ and its noncommutative rank
coincide.
\end{theorem}
\begin{proof}
Suppose that $crk(M)=r=$ $ncrk(M)$. Note that $r<\min(n,p)$
since $H$ is of low rank. Then by Theorem~2.1, after suitable change of
bases in
$E,F$ we may assume that
$M$ is of the form
$\left(\begin{array}{cc }
A&0\\
B&C
\end{array}\right)$, where the zero matrix is of size
$(n-i)\times(p-j)$, with $i+j=r$. Let $E^\prime$ be the subspace
of $E$ spanned by the last $p-j$ basis vectors, and $F^\prime$ the
subspace of $F$ spanned by the last $i$ basis vectors. Then we see
that each element of $H$ maps $E^\prime$ into $F^\prime$.
Moreover, $\mbox{codim}\,E^\prime+\dim F^\prime=j+i=r$. Hence
$H$ is a compression space.
Conversely, if $H$ is a compression space, let $r$ be its rank.
Then, by definition, we may after change of basis in $E,F$ bring $H$
into the form $\left(\begin{array}{cc}
A&0\\
B&C\end{array}\right)$, where $A$ has $\mbox{codim}\,E^\prime$
columns, and $C$ has $\dim F^\prime$ rows, with
$r=\mbox{codim}\,E^\prime+\dim F^\prime$. Then $M$ is also of
this form, and its noncommutative rank is $\leq r$ by Theorem~2.1. Hence,
by
Corollary~2.2, it is exactly $r$.
\end{proof}
\begin{corollary}
Let $k\subset k^\prime$ be an extension of commutative fields. Let
$H$ be a vector space of matrices of low rank over $k$, and
$H^\prime$ its extension to $k^\prime$. Then $H$ is a compression
space if and only
$H^\prime$ is a compression space.
\end{corollary}
\begin{proof}
Let $M$ be the associated linear matrix; $M$ has coefficients in
$k$. Its commutative rank is unchanged under the extension
$k\subset k^\prime$. Furthermore, its noncommutative rank is also
unchanged, since the free field $k< \hspace{-9pt}(\
x_1,\ldots,x_d\hspace{-2pt}>\hspace{-7pt})$\,
embeds in the free field $k^\prime<\hspace{-10pt}(\
x_1,\ldots,x_d\hspace{-2pt}>\hspace{-7pt})$\, (see \cite{C2}
Theorem~6.4.6). Thus, the corollary follows from the theorem.
\end{proof}
\medskip
A more elementary proof of this corollary will be given in Section~4.
%
%=======================================================
%Section 4
%=======================================================
%
\section{An algorithm}
Let $E,F$ be vector spaces of dimension $n,p$ and
let $H\subset\mbox{Hom}\,(E,F)$ be a subspace of low rank $r<\min(n,p)$.
Select
an
$f\in H$ with rank $(f)=r$.
Define the sequence of subspaces $(E_i)$ of $E$ and $(F_i)$ of $F$
by:
$F_0=\{0\}$, and for
$i\geq 1$, recursively,
$$E_i=f^{-1}(F_{i-1}),\ F_i=\sum\limits_{g\in
H}g(E_i).\hspace{24pt}(\ast)$$
For effective computations, note that, if $H$ is given by a basis
$(g_j)$, for instance, one has $F_i=\sum\limits_j g_j(E_i)$,
so that the sequence may effectively be computed.
Note that both sequences $(F_i),(E_i)$ are increasing, since
$F_0=\{0\}\subseteq F_1$, and: $F_{i-1}\subseteq
F_i\Rightarrow E_i\subseteq E_{i+1}$ and $F_i\subseteq F_{i+1}$.
Thus, there is some $p$ such that $F_{p-1}=F_p$, and for this $p$
one has: $E_p=f^{-1}(F_{p-1})=f^{-1}(F_p)$, thus
$f^{-1}(F_p)=E_p$ and $\forall g\in H, g(E_p)\subseteq F_p$.
\begin{theorem}
With the previous notations, $H$ is a compression space if and only
if $\mbox{\em{codim}}\, E_p+\dim F_p=r$.
\end{theorem}
\begin{proof}
If this last equality holds, surely $H$ is a compression space,
since each $g$ in $H$ maps $E_p$ into $F_p$.
Conversely, suppose that $H$ is a compression space. Then for some
subspace $E^\prime,F^\prime$ of $E,F$, we have
$\mbox{codim}\,E^\prime+\dim F^\prime=r$, and each element of $H$
maps $E^\prime$ into $F^\prime$.
In particular $f(E^\prime)\subseteq F^\prime$.
We claim that
$f(E^\prime)=F^\prime$ and $f^{-1}(F^\prime)=E^\prime$.
Indeed let $\bar f$ be the composition
$E\buildrel{f}\over\longrightarrow F\longrightarrow
F/F^\prime$, where the second mapping is the canonical one.
Then
$\mbox{Ker}\,\bar f=f^{-1}(F^\prime)$. We have
$E^\prime\subseteq \mbox{Ker}\,\bar f$, since $f(E^\prime)
\subseteq F^\prime$, so that $\dim
E^\prime\leq\dim(\mbox{Ker}\,\bar f)$.
Now, rank $(f)=r$, hence rank $(\bar f)\geq r-\dim
F^\prime$ with equality if and only if $F^\prime\subseteq f(E)$. We
have by hypothesis
$r-\dim F^\prime=\mbox{codim}\, E^\prime=\dim E-\dim E^\prime=$ rank
$(\bar f)+\dim(\mbox{Ker}\, \bar f)-\dim E^\prime$; hence
the previous inequality implies $\dim(\mbox{Ker}\,\bar f)
\leq\dim E^\prime$,
which implies equality and
$E^\prime=\mbox{Ker}\,\bar f=f^{-1}(F^\prime)$. Using
$\dim(\mbox{Ker}\,\bar f)=\dim E^\prime$, the same
computation shows that rank $(\bar f)=r-\dim
F^\prime$. This implies by a previous remark that
$F^\prime \subseteq f(E^\prime)$ and finally, that
$f(E^\prime)=F^\prime$.
%End of block 1
%Beginning of block 2
Observe that $F_0\subseteq F^\prime$. If $F_{i-1}\subseteq
F^\prime$, then by the claim $E_i=f^{-1}(F_{i-1})\subseteq
f^{-1}(F^\prime)=E^\prime,$ and
$F_i=\sum\limits_{g\in H}g(E_i)\subseteq\sum\limits_{g\in
H}g(E^\prime)\subseteq F^\prime$. Thus, we obtain by induction that
$E_i\subseteq E^\prime$, and
$F_i\subseteq F^\prime$ for each $i$.
This is true in particular for $i=p$, so that, changing notations
$(A=E_p,B=F_p)$, we have a subspace $A$ of $E^\prime$ and a subspace
$B$ of
$F^\prime$ such that: $\forall g\in H, g(A)\subseteq B$ and
$f^{-1}(B)=A$.
We show that $\mbox{codim}\, A+\dim B=r$, which will prove the
theorem. Choose a subspace $A^\prime$ of $E^\prime$ such that
$A\oplus A^\prime=E^\prime$. We claim that the composition
$u:A^\prime \buildrel{f}\over\longrightarrow
F^\prime\longrightarrow F^\prime/B$ is an isomorphism.
Taking the claim for granted, we obtain $\dim A^\prime=\dim F^\prime-\dim
B$,
thus $r=\mbox{codim}\, E^\prime+\dim F^\prime=\dim E-\dim E^\prime +\dim
F^\prime=\dim E-\dim A-\dim A^\prime+\dim F^\prime
=\dim E-\dim A+\dim B
=\mbox{codim}\, A+\dim B$.
Let us prove the claim. We have $f(A)\subseteq B$. In fact,
$f(A)=B$; indeed, if $b\in B$, then, since $f(E^\prime)=F^\prime,
b=f(e^\prime)$ for some $e^\prime\in E^\prime$; then $e^\prime\in
f^{-1}(b)\subseteq f^{-1}(B)=A$, hence $e^\prime \in A$ and $b \in
f(A)$, which shows that $B\subseteq f(A)$.
Thus we have $f(A)=B\Rightarrow
F^\prime=f(E^\prime)=f(A)+f(A^\prime)=B+f(A^\prime)$ and this
implies that the restriction to $f(A^\prime)$ of the canonical
mapping
$F^\prime\to F^\prime/B$ is surjective; in other words, $u$ is
surjective.
Now, $u$ is also injective, since $\mbox{Ker}\, u=f^{-1}(B)\cap
A^\prime=A\cap A^\prime=\{0\}.$
\end{proof}
Now, the algorithm works as follows. By using the linear matrix
$M$ associated to $H$, compute the maximal rank $r$ of $H$; by
finding in the infinite field $k$ values of the variables appearing
in $M$ that do not annihilate some nonzero $r\times r$ minor of
$M$, one obtains an element $f$ in $H$ of rank $r$. Then one
computes the subspaces $E_i,F_i$ described at the beginning of the
section; note that this computation is of low complexity, since it
amounts to solving linear equations. Then one finds $p\leq\dim F$
such that $F_{p-1}=F_p$ as above. It is enough then to check if
$\mbox{codim}\, E_p+\dim F_p=r$ and apply the theorem.
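As an illustration, consider the matrix $N$ of the Introduction, for
which $H$ is spanned by the three skew-symmetric matrices occurring in
$N$ and $r=2$; one may take for $f$ the value of $N$ at $x=y=0$. Then
$E_1=f^{-1}(0)$ is spanned by $e_1$, $F_1$ is spanned by $e_2,e_3$, and
$E_2=f^{-1}(F_1)=k^3$, whence $F_2=F_3=k^3$ and $E_3=k^3$. Since
$\mbox{codim}\,E_3+\dim F_3=3\neq 2=r$, the space $H$ is not a
compression space, in accordance with Theorem~3.1 and the fact that
$crk(N)=2<3=ncrk(N)$.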
%End of block 3
Note that in \cite{EH}, an effective criterion is given for a
space
$H$ of matrices of low rank to be a compression space;
however, the computations use Gr\"{o}bner bases, so are of
high complexity (see \cite{EH} p. 150--151).
\bigskip
\noindent
\begin{proof}[Second proof of Corollary 3.1] The spaces $E_i, F_i$
are all defined over $k$, since so are $H$ and $f$. Moreover, $F_{p-1}=F_p$
if and only if
$F_{p-1}\otimes_k k^\prime=F_p\otimes_k k^\prime$, and
the condition of the theorem is invariant under extension, since
the rank of $H$ does not change under commutative field
extension.
This proves the
corollary.
\end{proof}
%
%=======================================================
%References
%=======================================================
%
%\bibliographystyle{plain}
\begin{thebibliography}{45}
\bibitem[A]{A}
M.D. Atkinson, Primitive spaces of matrices of bounded rank II,
\textit{J. Austral. Math. Soc. Ser. A} 34, 1983, 306--315.
\bibitem[AL]{AL}
M.D. Atkinson, S. Lloyd, Large spaces of matrices of bounded rank,
\textit{Quarterly Journal of Mathematics Oxford} 31, 1980, 253--262.
\bibitem[B]{B}
L.B. Beasley, Null spaces of matrices of bounded rank, in
\textit{Current trends in matrix theory}, North Holland, New York,
1987, 45--50.
\bibitem[C1]{C1}
P.M. Cohn, \textit{Free rings and their relations}, Academic
Press, 1985 (first edition 1971).
\bibitem[CR]{CR}
P.M. Cohn, C. Reutenauer, On the construction of the free field,
\textit{Intern. J. Alg. Computation} 9, 1999, 307--323.
\bibitem[C2]{C2}
P.M. Cohn, \textit{Skew fields, theory of general division
rings}, Cambridge University Press, 1995.
\bibitem[C3]{C3}
P.M. Cohn, The word problem for free fields, \textit{J. Symb.
Logic} 38, 1973, 309--314; correction and addendum 40, 1975,
69--74.
\bibitem[E]{E}
S. Eilenberg, \textit{Automata, languages and machines}, vol.~A, Academic
Press,
1974.
\bibitem[EH]{EH}
D. Eisenbud, J. Harris, Vector spaces of matrices of low rank,
\textit{Advances Math.} 70, 1988, 135--155.
\bibitem[F]{F}
H. Flanders, On spaces of linear transformations with bounded
rank, \textit{J. London Math. Soc.} 37, 1962, 10--16.
\bibitem[G]{G}
F.R. Gantmacher, \textit{Applications of the theory of matrices},
Interscience,
1959.
\bibitem[H]{H}
J. Harris, \textit{Algebraic geometry, a first course}, Springer, 1993.
\bibitem[K]{K}
D.E. Knuth, \textit{The art of computer programming}, vol.~2,
Seminumerical algorithms, Addison Wesley, 1981.
\bibitem[Re]{Re}
R. Re, A note on linear subspaces of determinantal varieties,
\textit{Le Mate\-matiche} 50, 1995, 173--178.
\bibitem[Ro]{Ro}
M.L. Roberts, Endomorphism rings of finitely presented modules over free
algebras, \textit{J. London Math. Soc.} 30, 1984, 197--209.
\bibitem[W]{W}
R. Westwick, Spaces of linear transformations of equal rank,
\textit{Linear Algebra and Its Applications} 5, 1972, 49--64.
\end{thebibliography}
\end{document}