\input amstex
%\font\bigsymbols=cmsy10 scaled \magstep5
%\input gothic.tex
%\documentstyle{bul}
\documentstyle{amsppt}
\Monograph
%\NoRunningHeads
\TagsOnRight
\catcode `\@=11
\def\logo@{}
\catcode `@=12
\hoffset=.05in
\voffset=0.02in
\pagewidth{5.3in}
\pageheight{7.25in}
%\pageno=-2
\topmatter
\define\sumprime{\sideset \and ^\prime \to\sum}
\title {The spacing distributions between zeros of zeta functions}
\endtitle
\author Nicholas M. Katz and Peter Sarnak\endauthor
\affil{Department of Mathematics, Princeton University, Princeton, NJ 08544-1000}\endaffil
\address{Princeton University\newline Mathematics Department}\endaddress
\address Preprint\endaddress
\date{June 1996}\enddate
%\thanks{}\endthanks
%\email{}
%\keywords{}
%\subclass{}
%\abstract{TO COME}\endabstract
\endtopmatter
\def\zee1{z_{1}}
\def\z2{z_{2}}
\def\zj1{z_{j+1}}
\def\zja{z_{j+a}}
\def\ja{j,a}
\def\zn{z_{N}}
\NoBlackBoxes
%\NoPageNumbers
\document
%\magnification=\magstephalf
\magnification=\magstep1
%\magnification=1200
\baselineskip=14pt
%\baselineskip=16pt
\head{0. \ Introduction}\endhead
In a remarkable numerical experiment, Odlyzko \cite{Od} has found
that the local spacing distribution between the zeros of the Riemann
Zeta function is modelled by the eigenvalue distributions coming from
random matrix theory, in particular by the ``GUE''
(Gaussian Unitary Ensemble) model \cite{Gau}. His experiment was inspired by the
paper of Montgomery \cite{Mon} who determined the pair correlation distribution
for the zeros (in a restricted range). We will refer to the above
phenomenon as the Montgomery--Odlyzko Law. Rudnick and
Sarnak \cite{Ru--Sa} have determined the $n \geq 2$ correlations
for the zeros of the zeta function, as well as for more
general automorphic $L$--functions (again only in
restricted ranges). These are in perfect agreement with GUE predictions.
It appears that the Montgomery--Odlyzko Law is a universal feature for
such $L$--functions. However, a complete proof of this law is well beyond
the range of existing techniques. If one believes that the above
phenomenon is a manifestation of the spectral nature of the zeros, then it
is natural to ask if there is such a law for the zeta and $L$--functions
associated to curves and exponential sums over finite fields. For, in
these cases, their zeros may be realized as eigenvalues of Frobenius on
cohomology groups (see Section 7). One of the goals of this paper is the
formulation and proof of an analogue of the Montgomery--Odlyzko Law for
these zeta and $L$--functions.
\vskip .5cm
A major part of this work is devoted to an analysis of the distribution
of the local spacings between the eigenvalues of matrices in the classical
groups. Let $G(N)$ denote one of the compact groups $U(N)$, $SU(N)$,
$O(N)$, $SO(N)$, $USp(N)$, i.e. the unitary, orthogonal and symplectic
groups (Weyl \cite{We}) realized in their standard representations as $N\times N$
unitary matrices ($N$ being even in the symplectic case). Denote by
$A$ a typical element of $G(N)$ and by $dA$ the unit normalized Haar
measure on $G(N)\,$. The eigenvalues of $A$ are all on the unit circle and
may be ordered cyclically counterclockwise
$$
\zee1 \rightarrow \z2 \rightarrow \cdots \rightarrow \zn \rightarrow \zee1 \; .
\tag0.1
$$
Set $z_j = z_{j \bmod N}$ for $j \in \Bbb Z\,$. We are interested in the arc length
spacings, $\overline{z_j z}_{j+1}\,$, between $z_j$ and $z_{j+1}\,$, or more generally
between $z_j$ and $z_{j+a}$ where $a\geq 1$ is an integer. Define for $j \in \Bbb Z$
$$
\Delta_{j,a} = \frac{N}{2\pi} \, \overline{z_j z}_{j+a} \;.
\tag0.2
$$
The mean value of $\Delta_{j,a}$ satisfies
$$
\frac{1}{N} \sum_{j=1}^{N} \Delta_{j,a} = a \; . \tag0.3
$$
The $a^{\text{th}}$ consecutive spacing distribution is
described by the probability measure on $\Bbb R_{\geq 0}$:
$$
\mu_a (A) = \frac{1}{N} \sum_{j=1}^{N} \; \delta_{\Delta_{j,a}} \;. \tag0.4
$$
Here $\delta_\zeta$ is the point mass at $\zeta \in \Bbb R$. In view of (0.3),
$\mu_a (A)$ has mean $a$. Note that $\mu_a (A) (I)$ gives the proportion of
$\Delta_{j,a} (A)$'s lying in $I$, for any interval $I$. For any function $f$ on $G(N)$
its expectation over $G(N)$, denoted $E(f(A), G(N))$ or simply $E(f)$, is by
definition
$$
E(f) = \int_{G(N)} f(A) dA \, .
$$
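Definitions (0.1)--(0.4) are elementary enough to compute directly. The following Python sketch (ours, purely illustrative: it feeds in i.i.d. uniform angles as a stand-in for genuine Haar-distributed eigenangles, which would require sampling from $G(N)$) checks the exact mean value identity (0.3):

```python
import math
import random

def spacing_measure(phases, a=1):
    """Normalized spacings Delta_{j,a} of (0.2): order the angles
    cyclically as in (0.1) and rescale each arc by N / (2*pi)."""
    N = len(phases)
    th = sorted(phases)
    deltas = []
    for j in range(N):
        gap = th[(j + a) % N] - th[j]   # arc from z_j to z_{j+a} ...
        if gap < 0:
            gap += 2 * math.pi          # ... wrapping past angle 0 if needed
        deltas.append(N * gap / (2 * math.pi))
    return deltas

random.seed(0)
deltas = spacing_measure([random.uniform(0, 2 * math.pi) for _ in range(1000)], a=2)
# (0.3): the mean of the Delta_{j,a} equals a exactly, because the arcs
# telescope around the circle a times.
print(round(sum(deltas) / len(deltas), 6))  # -> 2.0
```

The identity holds for any point configuration on the circle, not only for eigenvalues, which is why i.i.d. angles suffice for this check.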
Our first result asserts that the expectations of $\mu_a (A)$ have universal
limits, that is, limits independent of the particular family $G(N)$.
\proclaim{Theorem 0.1} For each $a \geq 1$ there is a measure $\mu_a$ on $\Bbb R_{ \geq 0}$
of mean $a$, such that
$$
\lim_{N\rightarrow \infty} \quad E \bigg( \mu_a (A) , \, G(N) \bigg) = \mu_a
\, .
$$
\endproclaim
\vskip .5cm
The proof of this theorem, which goes via $n$--level correlations, provides an explicit
formula for the density of $\mu_a$ (see Proposition 3, Section 5), which can be expressed in terms
of a Fredholm determinant:
$$
d\mu_a = \bigg( \frac{d^2}{ds^2} \;
\sum_{j=0}^{a-1} \; \frac{a-j}{j!} \left( \frac{\partial}{\partial T}\right)^j
\det(I+ TK_s )\bigg|_{T= -1} \bigg)ds
\tag0.5
$$
where $K_s$ is the trace class operator on
$L^2[-s/2, s/2]$ whose kernel is
$$
K(x,y) = \frac{\sin \pi(x-y)}{\pi(x-y)}
\tag0.6
$$
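As a concrete illustration (our own numerical sketch, not part of the paper's argument), $\det(I + TK_s)$ can be approximated by a quadrature (Nystr\"om-type) discretization of the kernel (0.6); at $T = -1$ it gives the large-$N$ probability, in the GUE/$U(N)$ model, that no normalized eigenangle falls in an interval of length $s$:

```python
import math

def sine_kernel(x, y):
    # K(x, y) = sin(pi(x - y)) / (pi(x - y)), the kernel of (0.6)
    if x == y:
        return 1.0
    return math.sin(math.pi * (x - y)) / (math.pi * (x - y))

def det_gauss(M):
    # determinant via Gaussian elimination with partial pivoting
    n = len(M)
    M = [row[:] for row in M]
    det = 1.0
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        if M[p][k] == 0.0:
            return 0.0
        if p != k:
            M[k], M[p] = M[p], M[k]
            det = -det
        det *= M[k][k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
    return det

def fredholm_det(s, T=-1.0, n=60):
    # midpoint-rule discretization of det(I + T K_s) on L^2[-s/2, s/2];
    # crude but adequate for a smooth kernel
    h = s / n
    xs = [-s / 2 + (i + 0.5) * h for i in range(n)]
    M = [[(1.0 if i == j else 0.0) + T * h * sine_kernel(xs[i], xs[j])
          for j in range(n)] for i in range(n)]
    return det_gauss(M)

# det(I - K_s) decreases from 1 as the interval grows:
print(round(fredholm_det(1.0), 4))
```

For $a = 1$, (0.5) says the density of $\mu_1$ is the second $s$-derivative of this determinant at $T = -1$.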
Formula (0.5) allows us to identify $\mu_a$ with the conditional
spacing probabilities computed by Gaudin for GUE \cite{Gau}
(the case of Theorem 0.1 with $G(N) = U(N)$ is essentially due to Gaudin).
Incidentally $\mu_1$ is quite well approximated by ``Wigner's
surmise'' \cite{Wi},
$$
\frac{32}{\pi^2} s^2 e^{-4s^2/ \pi} \qquad \text{(see figure 1)} \; .
\tag0.7
$$
The identification (0.5) is important to our analysis, for with it
we establish a key technical estimate for the tail of $\mu_1$:
$$
\mu_1 [s, \infty ) \leq \frac{4}{3} \; e^{-s^2/8} \, , \qquad \text{for} \; s \geq 0 \; .
\tag0.8
$$
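Both the unit mass and the unit mean of Wigner's surmise (0.7) can be checked by direct numerical integration; a stdlib-only sketch of ours:

```python
import math

def wigner_surmise(s):
    # the density (0.7): (32 / pi^2) s^2 exp(-4 s^2 / pi)
    return (32 / math.pi ** 2) * s ** 2 * math.exp(-4 * s ** 2 / math.pi)

def integrate(f, a, b, n=200000):
    # simple midpoint rule; the integrand decays fast, so [0, 20] suffices
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

mass = integrate(wigner_surmise, 0, 20)
mean = integrate(lambda s: s * wigner_surmise(s), 0, 20)
print(round(mass, 6), round(mean, 6))  # -> 1.0 1.0
```

Both integrals equal $1$ exactly, consistent with (0.7) being a probability density of mean $1$, like $\mu_1$.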
In passing, we note that a complete asymptotic expansion as $s \rightarrow \infty$
of the Fredholm determinant appearing in (0.5) has been recently developed
in \cite{T--W} and \cite{D--}.
As was noted by Gaudin \cite{Gau}, (0.5) may be used to compute $\mu_a$ numerically
(see also section 8).
In figures 1, 2, 4, 5, 6, and 7 the densities of $\mu_1$ and $\mu_2$ are drawn
together with the empirical spacing distribution for the zeros
of the Riemann Zeta function as computed by Odlyzko.
\vskip .5cm
For our applications, it is not only the average of $\mu_a (A)$ that is
of interest, but also the individual behavior of $\mu_a (A)$ for almost all $A$
($N$ large). That is, we seek a law of large numbers. To this end, let
$D(\nu , \mu)$ denote the Kolmogorov--Smirnov discrepancy between two measures
$\nu , \mu$ on $\Bbb R$, that is
$$
D(\nu , \mu ) = \sup_{-\infty < x < \infty} \quad \{ | \mu (-\infty, x ] -
\nu(- \infty , x] |\} \; .
\tag0.9
$$
We are interested in $D(\mu_a (A), \mu_a)$,
which measures how well $\mu_a (A)$ approximates $\mu_a$.
Clearly $0 \leq D \leq 1$, and $D$ tends to zero iff the $\Delta_{j ,a}(A)$'s become
equidistributed w.r.t. $\mu_a$ as $N \rightarrow \infty$. The following
yields the behavior of $\mu_a (A)$ for almost all $A$.
\proclaim{Theorem 0.2} \ For $ \epsilon > 0 $ and $a \geq 1$ there is an
explicit $N_0 = N_0 (\epsilon , a)$ such that for $N\geq N_0$,
$$
E(D(\mu_a (A), \mu_a ) , \; G(N)) \leq N^{-\frac{1}{6} + \epsilon} \; .
$$
\endproclaim
\vskip .5cm
An immediate consequence of this is that for $N$ large
$(\rightarrow \infty)$, almost all $A$'s have eigenvalue spacings
following the Gaudin GUE distributions.
For example, if $\alpha + \beta < 1/6$ and $N$ is sufficiently large,
then Haar $\{A\in G(N) | D(\mu_a (A), \mu_a) \geq N^{-\alpha} \} \leq N^{-\beta}$.
\remark{Remark}
The exponent $1/6$ in Theorem 0.2 is certainly not sharp.
In fact S. Miller has carried out some Monte--Carlo simulations
with random elements of $G(N)$ with $N$ of size about 500.
He finds that as $N$ gets large, $N^{1/2} D( \mu_1 (A), \mu_1 )$
$(A\in G(N))$ has a universal limiting law which is depicted in
Figure [10] (where it is compared with the usual Kolmogorov law
for the Kolmogorov statistic). The same statistic, computed with
consecutive blocks of about 200 zeros of the Riemann
zeta function at height $10^{20}$ (courtesy of Odlyzko \cite{Od}), gives the
same law -- see Figure [11]. Recently, Soshnikov \cite{Sos} has made
a first step towards establishing that $N^{1/2} D( \mu_1 (A) , \mu_1)$
has a limiting distribution as $N \rightarrow \infty$.
\endremark
\vskip .5cm
We turn to the applications mentioned at the outset.
Let $\Bbb F_q$ be the field with $q$ elements.
The extensions $\Bbb F_{q^m}$, $m \geq 1$, of $\Bbb F_q$
exhaust the algebraic extensions of $\Bbb F_q$.
Let $C$ be a smooth projective curve defined over $\Bbb F_q$.
For example $C$ may be a plane curve
$$
C: \qquad f(x_1 , x_2, x_3) = 0
\tag0.13
$$
where $f$ is a nonsingular homogeneous polynomial with coefficients
in $\Bbb F_q$. In analogy with the Riemann Zeta Function,
Artin \cite{Ar} introduced a zeta function $\zeta (C/\Bbb F _q, T)$,
which may be defined by
$$
\zeta (C/\Bbb F_q , T) =
\exp \bigg( \sum_{m=1}^{\infty} \; \frac{N_m T^{m}}{m} \bigg)
\tag0.14
$$
where $N_m = \# C ( \Bbb F_{q^{m}})$, the number of points
of $C$ with coordinates in $\Bbb F _{q^{m}}$ (see Section 6, (11),
for an equivalent definition which makes the analogy with the
Riemann zeta function more transparent).
It follows from the Riemann--Roch Theorem on $C$ (see \cite{Sch})
that this zeta function is rational and is
of the form
$$
\zeta (C/\Bbb F_q , T) = \frac{P (C/\Bbb F_q , T)}
{(1-qT)(1-T)}
\tag0.15
$$
where $P$ is an integral, self reciprocal\footnote"*"{$a_0 + a_1 T + \cdots+a_\nu T^\nu$
is self reciprocal if $a_j = a_{\nu - j}$ for $j = 0, 1, \ldots, \nu$.} polynomial of degree
$2N$, $N$ being the genus of $C$. The Riemann hypothesis for
$\zeta (C/ \Bbb F _q , T)$ lies deeper and asserts that the zeros
of $P (C/\Bbb F_q , T)$ all lie on the circle
$|T| = 1/\sqrt{q}$. This was proven by Weil \cite{WEI}.
So we may write the zeros $\xi_j$ of $P$ as
$$
\xi_j = q^{-1/2} z_j \quad , \quad j = 1,\dots , 2N
\tag0.16
$$
with $|z_j | = 1$.
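For a concrete instance (our illustrative choice of curve, not an example from the paper), one can count points on a small genus-one curve and watch (0.16) hold; for genus $1$ the numerator in (0.15) is $P(T) = 1 + aT + qT^2$ with $a = N_1 - q - 1$:

```python
import cmath

# Count points on the plane curve y^2 z = x^3 + x z^2 + z^3 over F_5
# (an arbitrary smooth genus-1 example of the shape (0.13)).
q = 5
N1 = 1  # the single point at infinity (0 : 1 : 0)
for x in range(q):
    for y in range(q):
        if (y * y - (x ** 3 + x + 1)) % q == 0:
            N1 += 1

# Genus 1: P(C/F_q, T) = 1 + a T + q T^2 with a = N1 - q - 1,
# so its two zeros come from the quadratic formula.
a = N1 - q - 1
disc = cmath.sqrt(a * a - 4 * q)
zeros = [(-a + disc) / (2 * q), (-a - disc) / (2 * q)]
# Riemann Hypothesis (Weil): both zeros lie on |T| = 1/sqrt(q), as in (0.16).
print(N1, [round(abs(z), 6) for z in zeros])  # -> 9 [0.447214, 0.447214]
```

Here $N_1 = 9$, so $a = 3$ and the zeros $(-3 \pm i\sqrt{11})/10$ indeed have absolute value $1/\sqrt{5}$.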
\vskip .5cm
Proceeding now as in (0.1)--(0.4), we get
spacing measures $\mu_a (C/\Bbb F_q )$
for any $a \in \Bbb N$.
In order to obtain a limiting behavior for the individual
spacing measures $\mu_a (C/\Bbb F_q )$,
we must let the genus $N$ of $C$ go to infinity. This alone
will not suffice to get any reasonable behavior because one can
easily show, using the equidistribution result (0.17) below, that we
can obtain any probability measure with mean at most equal to
$a$ as a limit of $\mu_a (C/ \Bbb F_q )$'s.
%In fact, it is found\footnote"**"{also S. Miller [ \ ]}
%on experimenting numerically with special families such as
%$$
%\Phi_n: x^n + y^n + z^n = 0 \qquad \text{genus} \; (\Phi_n) = N = \frac{(n-1)(n-2)}{2}
%\tag0.17
%$$
%and modular curves $X_0(n) \big/ \Bbb F_q$ [ \ ],
%that the local spacing distributions follow the model of spacings of
%random numbers on $|z| = 1$ (that is the model of the spacings between
%$(\theta _1 \, , \ldots , \; \theta_N) \in (\Bbb R/N \Bbb Z )^N$
%with the measure $\frac{d\theta_1 \cdots d\theta_N}{N^N}$)
%as $N \rightarrow \infty$.\footnote"***"{Actually one can prove this on average
%in $q=p$ a prime if one assumes some standard conjectures about the
%analytic properties of global automorphic $L$--functions [ \ ]. One can
%show that $D(\mu_a (X_0(N)/ \Bbb F_p ), \; \tilde{\mu}_a) \rightarrow 0$
%for almost all $p$ when $N \rightarrow \infty$.}
%It is well known that for the latter the corresponding measure $\tilde{\mu}_a$ is given by
%%$$
%d \tilde{\mu}_a = \frac{s_1^{a_1 -1}}{(a_1 -1)!} \cdots \frac{s_\tau^{a_\tau-1}}
% {(a_\tau-1)!} \;
% e^{\dsize{-\sum_{j=1}^{\tau} s_j}} ds_1 \cdots ds_\tau \; .
%\tag0.18
%$$
%
\vskip .5cm
Given this, a natural question is: what does $\mu_a (C/ \Bbb F_q )$ look like for
the typical curve of large genus? To answer this,
let $\Cal{M}_N ( \Bbb F_q)$ denote the set of isomorphism classes
of curves of genus $N$ defined over $\Bbb F_q$. This is a finite set whose
cardinality goes to infinity if either $N$ or $q$ goes to infinity.
Our main result is the following:
\proclaim{Theorem 0.3}
Given $a\in \Bbb N$ we have
$$
\lim_{N \rightarrow \infty} \; \lim_{q \rightarrow \infty}
\frac{1}{| \Cal M _N( \Bbb F_q)|} \sum_{C\in \Cal M_N ( \Bbb F_q)}
D(\mu_a (C/ \Bbb F _q ) , \mu_a ) = 0 \; .
$$
\endproclaim
\vskip .5cm
As a consequence, we see that for the typical curve over a large field and of large
genus, the local spacing distributions of the zeros of its zeta function
follow the universal GUE measures $\mu_a$. That is, the Montgomery--Odlyzko Law is valid
for the typical such Artin zeta function.
\vskip .5cm
The connection between Theorems 0.2 and 0.3 comes through monodromy (see Section 7).
The monodromy group for a family of curves related to $\Cal M_N (\Bbb F_q)$ is the full
symplectic group $Sp(2N)$ (see Section 7). It allows one to associate
to each curve $C/\Bbb F _q$ a Frobenius conjugacy class $\theta (C/\Bbb F _q)$ in
$Sp(2N)$: in this case, the
conjugacy class in $Sp(2N, \Bbb C )$
whose characteristic polynomial is ${P} (C/ \Bbb F _q , q^{-1/2}T)$.
That the latter is the characteristic polynomial of a symplectic matrix is a
consequence of ${P}$ being self reciprocal. The Riemann
Hypothesis for $\zeta(C / \Bbb F_q , T)$ ensures that this
conjugacy class meets $USp(2N)$.
In Section 7 we explain how to use Deligne's main Theorem of ``Weil II'' \cite{DE}
to establish the following Chebotarev equidistribution theorem:
\vskip .5cm
Let $f$ be a continuous class function on $USp(2N)$; then
$$
\lim\limits_{| \Bbb F_q|\rightarrow\infty}
\frac{\dsize{\sum_{C\in \Cal M_N ( \Bbb F_q)}
|\text{aut} \; (C/\Bbb F_q) | ^{-1} f(\theta (C / \Bbb F_q))}}
{\dsize{\sum\limits_{C\in \Cal M_N (\Bbb F_q)} \;
| \text{aut} \; (C/ \Bbb F_q )|^{-1}}} = \int\limits_{USp(2N)} f(A) dA
\tag0.17
$$
where $\text{aut} \; (C/ \Bbb F_q)$ is the number of automorphisms of the
curve $C$ over $\Bbb F_q$.
\vskip .5cm
Theorem 0.3 follows by combining (0.17) with Theorem 0.2, applied with
$$
f(A) = D(\mu_a(A) , \mu_a ) \; .
\tag0.18
$$
\vskip .5cm
The other application concerns similar questions for $L$--functions arising
from exponential sums. We consider the case of Kloosterman sums.
For $n\geq 2$, $q$ as above, $t \in \Bbb F_q^*$ and $\psi$ a nontrivial
additive character of $\Bbb F_q$, define
$$
K \ell_\psi ^{(n)} (t, q) =
\sum\Sb x_1x_2\dotsm x_n = t\\ x_j \in \Bbb F_{q}^* \endSb
\psi (x_1 + x_2 +\cdots+ x_n) \; .
\tag0.19
$$
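These sums are finite and computable by brute force; a small Python sketch of ours, where $\psi(x) = e^{2\pi i x/p}$ for $q = p$ prime is an assumed concrete choice of nontrivial additive character:

```python
import cmath
import math

def kloosterman(n, t, p):
    """Brute-force evaluation of (0.19) for q = p prime, with the
    character psi(x) = exp(2 pi i x / p)."""
    psi = lambda x: cmath.exp(2j * math.pi * (x % p) / p)
    total = 0 + 0j

    def rec(s, prod, k):
        nonlocal total
        if k == n - 1:
            # the last coordinate is forced by x_1 x_2 ... x_n = t
            xn = (t * pow(prod, p - 2, p)) % p
            total += psi(s + xn)
        else:
            for x in range(1, p):
                rec(s + x, (prod * x) % p, k + 1)

    rec(0, 1, 0)
    return total

# n = 2: the classical Kloosterman sum.  It is real-valued, and
# Weil's bound gives |Kl| <= 2 sqrt(p).
val = kloosterman(2, 1, 7)
print(round(val.real, 4), abs(val) <= 2 * math.sqrt(7))
```

The inner recursion enumerates $x_1, \dots, x_{n-1}$ freely and solves for $x_n$ via the mod-$p$ inverse, so the loop has $(p-1)^{n-1}$ terms.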
To this sum we associate the $L$--function
$$
L( \psi , t, q, T) = \exp \left( (- 1)^n \sum_{m=1}^{\infty}
\frac{K \ell^{(n)}_{\psi \circ \text{Tr}} (t, q^m) \, T^m} {m} \right)
\tag0.20
$$
where
$$
K \ell^{(n)}_{\psi \circ \text{Tr}} (t,q^m) = \sum\Sb x_1x_2 \cdots x_n =t\\ x_j \in \Bbb F_{q^m}^*\endSb
\psi (\text{Tr}_{\Bbb F_{q^m} / \Bbb F_q} (x_1 +\cdots+ x_n ))
\tag0.21
$$
$L(\psi , t, q, T)$ is a polynomial of degree $n$
(in the variable $T$) [ \ ], and Deligne \cite{Del} has proven that it
satisfies the Riemann Hypothesis: that is, all its zeros are of
absolute value $q^{-(n-1)/2}$.
After normalizing the zeros as in (0.1) to (0.4), we define the measures
$\mu_a (K\ell_\psi ^{(n)} (t, q))$.
These measure the local spacings between the zeros of such an
$L$--function $L(\psi, t, q, T)$.
\vskip .5cm
\proclaim{Theorem 0.4}
For any $a \in \Bbb N$,
$$
\lim_{n \rightarrow \infty} \lim_{q \rightarrow \infty} \frac{1}{q-1} \;
\sum_{t\in\Bbb F_q ^*} \; D(\mu_a (K \ell_\psi ^{(n)} (t,q)) , \mu_a) = 0 \; .
$$
\endproclaim
So again we see that for $q$ and $n$ large, the typical Kloosterman
$L$--function satisfies the Montgomery--Odlyzko Law.
This time the analysis leading to Theorem 0.4 involves
all the families of groups in Theorem 0.2. The $T$--polynomial
$L(\psi, t, q, Tq^{-(n-1) /2} )$ is naturally the characteristic
polynomial of the Frobenius conjugacy class
$\theta (K\ell_n (t,q))$ in the geometric monodromy group
$G_{\text{geom}}$, of the corresponding Kloosterman sheaf \cite{Ka}.
These have been determined in \cite{Ka}. For $n \geq 2$ even,
$G_{\text{geom}} = Sp (n)$ (so that the Frobenius is in
$G_n = USp(n)$). For $n \geq 3$ odd and $q = p$ an odd prime
$G_{\text {geom}} = SL (n)$ (so that the Frobenius
is in $G_n = SU(n)$). Finally if $q = 2$ and $n$ is odd
$(n > 7)$, $G_{\text{geom}} = SO(n)$.
The equidistribution Theorem established in Katz \cite{Ka} asserts
that if $f$ is a continuous class function on $G_n$ as above, then
$$
\lim_{q \rightarrow \infty} \frac{1}{q-1} \sum_{t \in \Bbb F _q ^*}
f(\theta (K\ell_n(t,q))) = \int_{G_n} f(A) dA \;.
\tag0.22
$$
Again applying (0.22)
with $f$ as in (0.18), together with Theorem 0.2, yields Theorem 0.4.
\vskip .5cm
We next outline the contents of this paper.
The first five sections are concerned with the proofs of
Theorems 0.1 and 0.2.
In more detail, Section 1 deals with some elementary reductions.
The spacing distributions $\mu_a (A)$ as defined in (0.4)
are not the most convenient to deal with analytically.
We introduce close relatives of these which we call the naive
spacings, $\mu_a (A, \; \text{naive})$.
The analogues of Theorems 0.1 and 0.2 for the latter are then
established in the later sections. With Section 2, the proofs
begin in earnest. To compute and estimate expectations and variances of
class functions on these classical groups, we make use of the explicit
Weyl integration formulae. These are recalled in Section 2.
Using the method of orthogonal polynomials,
which was introduced into calculations of this type by Gaudin \cite{Gau}
and Mehta \cite{Meh},
we give explicit formulae in
terms of determinants formed out of projection kernels (which turn
out to be simple trigonometric functions) for the Haar measure
when any number of the variables in the Weyl formula are integrated out.
These are summarized in (2.25) and serve as a basis for all further
calculations. Section 3 is devoted to the analysis of certain random
variables $Z$ on $G(N)$, called $n$--level correlation functions.
These measure the distribution in $\Bbb R ^n$ of subsets of size $n$
of the $N$ eigenvalues of an $A \in G(N)$. The key result of this
Section is Theorem 2, which describes the behavior as $N \rightarrow \infty$
of the expectation and variance of $Z$. Part 1 of Theorem 2 (Section 3)
shows that the limit of the expectations of the $Z$'s does not depend
on the particular family $G(N)$. This is responsible for the universality in
Theorem 0.1. Once we have control over the $Z$'s we attack the measures
$\mu_a (A, \, \text{naive})$ using combinatorial inclusion/exclusion.
This is carried out in Section 4. Corollary 3 of Section 4 gives the
precise relation between the $Z$'s and the $\mu_a$'s, while Lemma 4 gives
suitable inequalities for a truncation of this relation. The proofs of
Theorems 0.1 and 0.2 are then carried out in Section 4. However, along the
way sharp estimates on the size of the tails of the measures $\mu_a$ (univ)
are needed (see 4.41). The derivation of these estimates is carried out
in Section 5. This is achieved by expressing $\mu_a$ (univ) in terms of
Fredholm determinants, see (5.18). (This also allows us to identify
$\mu_a$ (univ) with Gaudin's ``GUE'' spacing measures \cite{Gau}.)
These Fredholm determinants are scaling limits of finite Fredholm determinants
associated with the families $G(N)$. In particular, there is a
remarkable relation between the determinants associated with $U(2N-1)$,
$SO(2N)$ and $USp(2N-2)$ (see Corollary 6 of Section 5).
With this relation and a direct estimation of the determinant,
which is possible for the $SO(2N)$ case (see Proposition 7 of section 5) we eventually
obtain the sharp bounds for the tails of the $\mu_a$ (univ),
completing the proof of Theorems 0.1 and 0.2.
In the supplement to Section 5, we also investigate some nonuniversal statistics.
For $j\geq 1$, we consider the distribution of the distance along the unit
circle to the point $z = 1$, of the $j^{th}$ eigenvalue of $A$, as $A$
varies over $G(N)$. Renormalizing (see 5.2) this distance and letting
$N$ go to infinity, we obtain certain limiting probability measures on
$\Bbb R _{\geq 0}$, denoted $\nu_j$ and $\nu_{\pm , j}$:
the $\nu_j$ are the $U(N)$ limits, while the $\nu_{\pm ,j}$ are the $SO(2N)$
and $USp(N)$ limits, respectively. These measures can be expressed in terms
of Fredholm determinants (see Corollary 2) which allow us to compute their
densities numerically -- see Figures 1, 2, and 3. Being sensitive to the
particular family $G(N)$, these allow for some interesting applications
which we describe below. Their means are not equal to $j$,
so we denote by $\widetilde{\nu_j}$ and $\widetilde{\nu_{\pm , j}}$
the rescalings of these which have mean $j$.
\vskip .5cm
Sections 6 and 7 are devoted to the proof of Theorem 0.3 and, in particular,
of (0.17). In Section 6, we discuss some Theorems concerning the equidistribution
of Frobenius under an $\ell$--adic representation $\rho$ of the fundamental
group of a scheme over a finite field $k$.
These results are essentially due to Deligne \cite{Del}.
The key ingredients in the proof are the
Lefschetz Trace Formula (section 6, (3)) and Deligne's main result
in ``Weil II'' (see section 6, after (3))
which concerns the size of the eigenvalues of Frobenius acting
on cohomology. A combination
of these, leads after some analysis, to an estimation of ``Weyl sums''
measuring the equidistribution of the Frobenius conjugacy classes
in the geometric monodromy group $G_{\text{geom}}$ (section 6, top of page 4)
associated with $\rho$. A uniform version of this (which allows us to
vary the field $k$ and, in particular, its characteristic) is given in Theorem 6.7.
In Section 7, we apply the above abstract theorem to specific geometric
examples. We begin with families of hyperelliptic curves over $k = \Bbb F _q$.
These give rise to a monodromy representation $\rho$ of $\pi_1$ of the parameter space
(whose fibres are the hyperelliptic curves of a given genus $g$). We describe
in some detail the computation of $G_{\text{geom}}$ (for such a $\rho$),
which turns out to be $Sp(2g)$. For certain one parameter families of
such hyperelliptics, we obtain entirely effective bounds for the
Weyl equidistribution sums (see section 7, (3$^\prime$)). Now (0.17) is concerned with
the universal family $\Cal M _g (\Bbb F _q )$ of curves of genus $g$ over $\Bbb F _q$.
Since this family is not representable, we make use of the tri--canonical
structure and the space $\Cal M _{g , 3K}$ to set up the appropriate
monodromy representation $\rho$. This leads naturally to counting
curves weighted by the inverse of the order of their automorphism groups,
as is done in (0.17). Since most curves of genus $g \geq 3$
over $\Bbb F _q$ ($q$ large) have no automorphisms, (0.17) leads to Theorem 0.3.
The precise connection between the zeta function $\zeta (C/ \Bbb F _q , T)$
and the Frobenius class under $\rho$ is described in the interlude on page 5 of Section 6.
We note that in our analysis we have not determined explicitly the bounds on the
equidistribution sums leading to (0.17).
\vskip .5cm
We end the introduction with some further results along these lines which
use Theorem 0.2, and the analysis in Section 5. We also put forth some
conjectures which suggest themselves from this work.
Theorem 0.3 tells us what happens to most curves of large genus and
over a large $\Bbb F _q$. One may ask what happens if we choose a curve $C$
of large genus $g$ defined over $Q$, and ask the same question about the
zeros of $\zeta (C / \Bbb F _ p, T)$ for almost all primes $p$.
\vskip .5cm
To answer this, we need to assume some standard conjectures. Let $C/Q$
be such a curve. For $\ell$ a prime we obtain via the action of $GAL(\overline{Q}/Q)$
on $H^1 (C, Q_ \ell )$ (or equivalently on the Tate module
$T_\ell (J)$ of the Jacobian $J$ of $C$) a $2g$ dimensional $\ell$--adic
representation $\rho$ of $GAL (\overline{Q}/Q)$.
The image $\rho (GAL (\overline{Q} /Q))$ lies in $G= GSp(2g, Q_\ell)$,
the group of symplectic similitudes with $\ell$--adic coefficients. If this image is open
in $G$, we will say $C$ is general. Explicit examples of such $C$'s
of large genus are given in Appendix \ \ \ . In this case, it is
conjectured (this being the generalization of the ``Sato--Tate'' conjectures
\cite{Sa}, \cite{Ta}) that the unitarized Frobenius conjugacy classes
$\theta (C / \Bbb F _p)$ become equidistributed in $USp(2g)$ as
$p \rightarrow \infty$. Precisely, this means that for any
continuous class function $f$ on $USp(2g)$
$$
\lim\limits_{x \rightarrow \infty} \; \frac{1}{\pi (x)} \sumprime\limits_{p < x}
%^{\quad \; \; \prime}
%\sumprime f(\theta (C/\Bbb F _p )) =
f(\theta (C/\Bbb F _p )) =
\int\limits_{USp(2g)} f(A) dA \; .
\tag0.23
$$
Here $\prime$ denotes omission of the primes $p$ for which $C$ has bad
reduction mod $p$, and $\pi(x)$ is the number of primes less than $x$.
Combining \thetag{0.23} and Theorem 0.2, we obtain
\proclaim{Theorem 0.5}
Assume the Sato--Tate conjectures described above. Let $C_N$ be a sequence of
curves of genus $g_N$ which are general in the above sense.
If $g_N \rightarrow \infty$ as $N \rightarrow \infty$, then for any
$a \in \Bbb N$
$$
\lim\limits_{N \rightarrow \infty} \; \lim\limits_{x \rightarrow \infty} \;
\frac{1}{\pi (x)} \sumprime\limits_{p \leq x}
%^{\quad \; \; \prime}
%\sumprime
D(\mu_a (C_N / \Bbb F_p ) , \mu_a (\text{univ})) = 0
$$
\endproclaim
Hence, for such a $C_N$ of large genus, $\zeta (C_N / \Bbb F_p , T)$
satisfies GUE statistics (for the spacings of its zeros)
for most primes $p$.
\vskip .5cm
In the above it is crucial that some assumption like $C_N$ being general
be made. To see this, consider the example of the sequence of modular curves
$C_N = X_0(N)$ (see [ \ \ \ ] for definitions).
We will assume for simplicity that $N$ is prime.
As a complex curve, $X_0 (N) = \Gamma _0 (N) \backslash \Bbb H$
carries a basis $f_1, f_2 , \ldots , f_g$ ($g = \text{genus}(C_N)$)
of Hecke eigenforms of weight 2. Since $N$ is prime, these are all new forms.
By Eichler--Shimura \cite{EI}, \cite{SH} and Igusa \cite{Ig} we have that
for $p \dagger N$ the eigenvalues of $\theta (X_0 (N) / \Bbb F_p )$ are
of the form
$(\theta _1 (p) , \theta_2 (p) \, , \ldots , \; \theta_g (p) ) \in [0, \pi ]^g$
where $\lambda_j (p) = 2 \sqrt{p} \cos \theta_j (p)$ is the eigenvalue of the
Hecke operator $T_p$ on $f_j$. The image of
the $\ell$--adic representation $\rho$ of $GAL(\bar{Q}/Q)$
on $H^1 (X_0 (N) , \bar{Q}_\ell)$ is contained in the algebraic group $G$,
where $G$ is the subgroup of $x \in (GL(2))^g$ with
$x = (x_1 \, , \ldots , \; x_g)$ and $\det (x_j) = \det (x_k)$, $j \neq k$.
Here $\ell$ is any prime different from $N$. It follows from results of
Ribet \cite{Ri} that the image of $\rho$ is Zariski dense in $G$.
Correspondingly, the ``Sato--Tate'' conjecture for $X_0(N)$, $N$ prime,
asserts that for any continuous $f$ on $[0, \pi ]^g$
$$
\lim_{x \rightarrow \infty} \frac{1}{\pi (x)}
\sum_{\Sb p \leq x \\ p \dagger N \endSb}
f(\theta_1 (p) \, , \ldots , \; \theta_g (p)) =
\int\limits_{[0,\pi]^g} f (\theta) \frac{2 \sin^2 \theta_1}{\pi} d \theta_1 \ldots
\frac{2\sin^2 \theta_g}{\pi} d \theta_g \; .
\tag0.24
$$
It is a straightforward matter to prove the analogue of Theorem 0.2 for the product measure
$\prod\limits_{j=1} ^{g} \frac{2}{\pi} \sin^2 \theta_j d \theta_j$,
when $g \rightarrow \infty$, and hence to develop analogous results about the
spacing distributions between the zeros of the zeta functions of
$X_0 (N)/\Bbb F _p$. Doing so will lead to a complicated answer
(i.e. for the spacing measures) simply because the asymptotic density
of the zeros of $X_0 (N) / \Bbb F_p$ is not uniform on $[0, \pi ]$.
Indeed according to (0.24)
$$
\lim_{x \rightarrow \infty} \frac{1}{\pi (x)} \sum\limits_{p \leq x}
\frac{\# \{j | \theta_j (p) \in [a,b]\}}{g} =
\int_a^b \frac{2}{\pi} \sin^2 \theta \, d \theta \; .
\tag0.25
$$
That is, the density is $\frac{2\sin^2 \theta}{\pi}$ which is not uniform.
[In the $U(N), USp(2N), SO(2N)$ and $SO(2N+1)$ cases the densities are
(see (2.55))
$\frac{d\theta}{2\pi}$ on $[0,2\pi)$,
$\left(\frac{1}{\pi}+\frac{1}{2N\pi}-\frac{\sin(2N+1)\theta}{2N\pi \sin \theta} \right) d \theta$
on
$[0, \pi]$, $\left(\frac{1}{\pi} - \frac{1}{2N\pi} + \frac{\sin (2N-1)\theta}{2\pi N \sin \theta} \right) d\theta$
on $[0, \pi]$ and
$\left( \frac{1}{\pi} - \frac{\sin N \theta}{2\pi N \sin \theta} \right) d \theta$
on $[0, \pi]$, respectively. For the purpose of determining the spacing distributions
as $N \rightarrow \infty$, it turns out that these approach the uniform density fast
enough, as Theorems 0.1 and 0.2 indicate. For this reason, this issue never came
up in, say, Theorem 0.3.]
\vskip .5cm
To get around this variable density problem in (0.25) one can proceed
in two ways. One, is to localize, i.e. to consider the spacings between the
zeros of $\zeta (X_0 (N) / \Bbb F_p , T)$ which are near a fixed
$0 < \theta_0 < \pi$ (exactly how near introduces a new parameter),
the second is to make a monotone change of parameter which renders the
density uniform. We choose the second approach, which we call
``straightening the angles''. Set
$$
t = \frac{\theta}{\pi} - \frac{\sin 2 \theta}{2 \pi} \quad
\qquad 0 < \theta \leq \pi \; .
\tag0.26
$$
Then $0 \leq t \leq 1$ and in the parameter $t$ the zeros
$0 \leq t_1 \leq t_2 \ldots \leq t_g \leq 1$ have a
constant density as we vary over all $p$. In this parameter
(0.24) asserts that, in the limit over $p$, $t_1, t_2 \, , \ldots ,\; t_g$
are {\it uniformly} distributed {\it independent} random variables on
$[0,1]$. The rescaled spacing distributions for these as
$g \rightarrow \infty$, that is the analogues of Theorems 0.1 and
0.2, are well known \cite{P}. Similar results hold here,
except that $\mu_a$ is replaced by $\mu_a$ (random), where
$$
\mu_a \; (\text{random}) = \frac{s^{a-1}}{(a-1)!} e^{-s} ds
\tag0.27
$$
(random here indicates the spacings between random numbers, i.e. independent,
identically distributed uniform variables). The shapes of the measures
$\mu_a$ (random) and $\mu_a$ are quite different. For example, the density
of $\mu_1$ (classical) vanishes to second order at $s = 0$ (see Figure 1)
reflecting the fact that consecutive spacings are rarely very small (this
is referred to as level repulsion), while for $\mu_1$ (random) $= e^{-s} ds$
small consecutive spacings are the mode.
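The contrast between $\mu_1$ (random) $= e^{-s} ds$ and the level repulsion of $\mu_1$ can be seen directly by simulation. The following sketch (our illustration, not part of the text's argument; the function name is ours) draws $g$ independent uniform points on $[0,1)$, forms their rescaled consecutive spacings, and compares the empirical CDF with the $\mu_1$ (random) prediction $1 - e^{-s}$:

```python
import math
import random

def rescaled_spacings(points, a=1):
    """Sort the points on [0, 1), wrap around the unit interval, and return
    the a-th spacings rescaled by g = len(points), so their mean is a."""
    g = len(points)
    xs = sorted(points)
    ext = xs + [x + 1.0 for x in xs[:a]]      # wrap around [0, 1)
    return [g * (ext[j + a] - ext[j]) for j in range(g)]

random.seed(0)
g = 200_000
spacings = sorted(rescaled_spacings([random.random() for _ in range(g)]))

# Empirical CDF of the consecutive spacings against the
# mu_1(random) prediction CDF(s) = 1 - exp(-s).
for s in (0.5, 1.0, 2.0):
    emp = sum(1 for d in spacings if d <= s) / g
    print(s, round(emp, 3), round(1 - math.exp(-s), 3))
```

The empirical CDF is near $1 - e^{-s}$ at each test point; in particular it is large already for small $s$, the "small spacings are the mode" behavior, in contrast to the second-order vanishing of $\mu_1$ at $s = 0$.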
\vskip .5cm
Applying the above discussion to $X_0 (N)$ we get the following:
Let $\tilde{\mu}_a (X_0(N)/ \Bbb F_p )$ be the straightened
spacing measure for the zeros of $\zeta (X_0 (N) / \Bbb F_p , T)$
(i.e. the measure defined as before, but using the parameter $t$ for the
zeros), then we have
\vskip .3cm
\proclaim{Theorem 0.6} Let $N$ be prime and assume the general
Sato--Tate conjectures. For any $a \in \Bbb N$, we have
$$
\lim\limits_{N \rightarrow \infty} \; \lim_{x \rightarrow \infty}
\frac{1}{\pi (x)} \sum\limits_{ \Sb p \leq x \\ p \dagger N \endSb } \;
D( \tilde{\mu}_a (X_0 (N) /\Bbb F_p ) , \mu_a
\; (\text{random})) = 0 \; .
$$
\endproclaim
\vskip .5cm
This says that the spacings between zeros of
$\zeta (X_0 (N) / \Bbb F _p , T )$ behave like random
numbers (or what is sometimes referred to as
``Poissonian'') as $N \rightarrow \infty$, for almost
all $p$. It does not seem unreasonable to expect that
this holds in the stronger form, without averaging over $p$.
To formulate this, we need to straighten a little differently.
The reason is that for fixed $p$, the density of zeros of
$\zeta (X_0(N) / \Bbb F_p , T)$ approaches the $p$--adic
Plancherel density $d \mu_p (\theta)$ on $[0, \pi ]$
as $N \rightarrow \infty$, see Serre \cite{SE \ \ } (note that
$d \mu_p \rightarrow \frac{2}{\pi} \sin^2 \theta d \theta$ as
$p \rightarrow \infty$). Set $s(\theta) = \int_0^\theta d \mu_p$
and let ${\mu^{\hskip -0.2cm ^\approx}}_a (X_0 (N) /\Bbb F_p )$ be the
%and let ${\approx\atop{\mu}}_a (X_0 (N) /\Bbb F_p )$ be the
corresponding spacing distribution between the zeros in the
$s$--parameter.
\demo{Conjecture 0.7}
Fix $p$ and $a$. Then
$$
\lim\limits_{\Sb N \rightarrow \infty \\ p\dagger N\endSb}
D( {\mu^{\hskip -0.2cm ^\approx}}_a
(X_0 (N) / \Bbb F _p ) , \mu_a \;(\text{random})) = 0 \; .
$$
\enddemo
In view of the interpretation of the zeros of $\zeta(X_0 (N) / \Bbb F _p , T )$
in terms of the eigenvalues of the adjacency matrix of a $p+1$ regular graph on $g(X_0 (N))$ vertices
(see for example Mestre \cite{Mes}) this conjecture can be tested numerically.
The conjecture asserts that the spacings between the eigenvalues of the
Hecke operator $T_p$ on weight 2 modular forms for $X_0 (N)$ are Poissonian in
the limit $N \rightarrow \infty$. One may conjecture that the same is true
for the eigenvalues of the Laplacian $\Delta$ (i.e. ``$T_p$'' for $p$ the
infinite place). Interestingly, this Poissonian phenomenon
for $X(1)$ and $\Delta$ was discovered by physicists in numerical
experiments \cite{SC}.
\vskip .5cm
An analogue of the individual type behavior in Conjecture 0.7, but for
Kloosterman sums, also seems quite plausible, though in this case
we know of no feasible way of testing the following conjecture:
\demo{Conjecture 0.8}
Fix $\Bbb F_q$ and $t \in \Bbb F _q ^\star$ and $\psi$ a
nontrivial additive character of $\Bbb F_q$.
Then for $a \in \Bbb N$
$$
\lim\limits_{n \rightarrow \infty} D( \mu_a (K \ell_\psi ^{(n)}
(t, q)) , \; \mu_a ( \text{univ})) = 0 \; .
$$
\enddemo
\vskip .5cm
With the exception of the discussion in the supplement to
Section 5, the above has been concerned with universal features
-- i.e. ones not sensitive to the particular family $G(N)$.
The distribution of the eigenvalue closest to 1, however, is sensitive to the
particular family. This can be pursued arithmetically in a number of ways,
including the following. For $\Bbb F_q$ as above, and $n \geq 1$,
let $H_n ( \Bbb F _q ) = \{ D \mid D$ is a square-free monic
polynomial of degree $n$ in $\Bbb F_q [t]\}$. The quadratic extensions
$\Bbb F _q (t) ( \sqrt{D})$ of $\Bbb F_q (t)$ are precisely the
function fields whose zeta functions were considered by Artin in his
thesis. For $j \geq 1$, let $\lambda_j (q, D)$ be $\frac{n}{2\pi}$
times the distance along the unit circle from the $j^{th}$ zero of
$\zeta (\Bbb F _q (t) (\sqrt{D}) , T)$ to $1$. This zeta function is
the same as $\zeta (C_D / \Bbb F_q , T)$ where $C_D$ is the curve
$y^2 = D(x) $ over $ \Bbb F_q$.
Using the equidistribution techniques of Chapter 3 (see also the preprint of
Yu \cite{Yu}, who investigates the distribution of the class numbers of
$\Bbb F _q (t) (\sqrt{D})$) together with the scaling limits
derived in Section 5, we obtain:
\proclaim{Theorem 0.9}
Fix $j \geq 1$ and let $f \in C_0 ( \Bbb R_{\geq 0})$.
Then
$$
\lim\limits_{n \rightarrow \infty} \lim\limits_{q \rightarrow \infty}
\frac{1}{|H_n ( \Bbb F _q ) |} \;
\sum\limits_{
D \in H_n ( \Bbb F_q )} f (\lambda _j (q, D)) =
\int_0^{\infty} f(t) d \nu_{-, j} (t)
$$
where $\nu_{-,j}$ is the measure in Proposition 4, Supplement to Section 5.
\endproclaim
\vskip .5cm
The point is that, as is shown in Section 7, the monodromy associated
to the family of curves $H_n ( \Bbb F_q )$ is the full symplectic group.
According to Section 5 (supplement) this leads to the measures
$\nu_{-, j}$ which are associated with ``$USp( \infty)$''
and characteristically so (i.e. the other classical groups give
entirely different measures $\nu_j$ and $\nu_{+,j}$).
\vskip .5cm
Again one can optimistically conjecture that Theorem 0.9 should hold
in the stronger form without the $q$--limit.
\demo{Conjecture 0.10}
Fix $q$ and $j$. For any $f \in C_0 (\Bbb R_{\geq 0} )$ we have
$$
\lim\limits_{n \rightarrow \infty} \frac{1}{|H_n (\Bbb F_q ) |}
\sum\limits_{D \in H_n ( \Bbb F _q )}
f( \lambda_j (q, D)) =
\int_0^{\infty} f(t) d \nu_{-,j} (t) \; .
$$
\enddemo
\vskip .5cm
In this form Conjecture 0.10 asserts that if we fix the
function field $K = \Bbb F _q (t)$, and consider the distribution of the
$j^{th}$ zero from 1 of the zeta functions of quadratic extensions of $K$,
then these follow the distributions associated with the underlying monodromy
$USp (\infty)$. While we don't know how to test Conjecture 0.10,
even numerically, one can test its analogue over the rational number field.
So we return to the Riemann zeta function and Dirichlet $L$--functions.
The universal ``GUE'' feature common to all of these when averaging
over all zeros of a given such $L$--function, was our starting point.
However, we have now identified in the function field setting a
nonuniversal feature which betrays further structure underlying
the zeros, such as the symplectic structure. The analogue of
Conjecture 0.10 for $K = Q$ is to consider the distribution of
the $j^{th}$ closest zero to $s = 1/2$ of $\zeta_L (s)$, where
$L/Q$ is a quadratic extension, as $D_L = \text{disc} \; (L) \rightarrow \infty$.
Rubinstein \cite{Rub} has recently developed a method to compute the
$j^{th}$ such zero for $D_L$ of order $10^9$. He has gathered data on
the (normalized) distribution of the lowest and second lowest
such zeros as $D_L$ varies (through primes).
His results are displayed in Figures 7, 8 and 9. The fit with $\tilde{\nu}_{-,1}$
and $\tilde{\nu}_{-,2}$ is excellent (to the extent that one expects
convergence at a rate of $O(1/ \log D_L )$ -- compare with
Odlyzko \cite{Od}, where $t$ plays the role of $D_L$).
Further evidence of the $Sp (\infty )$ influence can be given by
considering the density of zeros of $\zeta _L (s)$ near
$s = 1/2$. In the function field setting, this density
is dictated by the symplectic measure $\mu _{1,N} (x_1)$ in (2.46).
As $N \rightarrow \infty$ this density converges to
$$
\biggl( 1 - \frac{\sin 2 \pi x}{2 \pi x } \biggr) dx \; .
\tag0.28
$$
On the other hand, using the analytic methods for studying
correlations of the zeros of the Riemann zeta function
mentioned in the first paragraph, \"Ozl\"uk and Snyder \cite{Sh}
have shown that, if one assumes the Riemann Hypothesis
for $\zeta _L (s)$, then for a restricted class of test
functions, viz. $f \in \Cal S (\Bbb R )$ whose Fourier
transforms are supported in $(-2/3, 2/3)$, we have
$$
\spreadlines{.3\jot}
\align
\lim\limits_{x \rightarrow \infty} \frac{1}{\# \{L|D_L < x \}}
\sum\limits_{D_L < x} \biggl( \; \sum\limits_{\zeta_{L}(\frac{1}{2} + i \gamma)=0}
f\bigl(\frac{\log D_L}{2\pi} \gamma \bigr) \biggr)
\quad = \int_{-\infty} ^{\infty} \bigl( 1 - \frac{\sin 2 \pi x}{2 \pi x} \bigr)
f(x) dx \; .
\tag0.29
\endalign
$$
So (0.29) and (0.28) are in perfect agreement.
Note that the scaling of the imaginary parts by $(\log D_L ) /2 \pi$
is the natural one here.
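The second-order vanishing of the density (0.28) at the origin, which underlies the repulsion from $s = 1/2$ discussed after Conjecture 0.11 below, can be checked numerically. The following sketch is ours, not from the text; it uses only the Taylor expansion $\sin u / u = 1 - u^2/6 + O(u^4)$:

```python
import math

def usp_density(x):
    """The density (0.28): 1 - sin(2*pi*x)/(2*pi*x), extended by its limit 0 at x = 0."""
    if x == 0.0:
        return 0.0
    u = 2 * math.pi * x
    return 1 - math.sin(u) / u

# Near x = 0, since sin(u)/u = 1 - u^2/6 + O(u^4), the density behaves like
# (2*pi*x)^2 / 6: it vanishes to second order, so low zeros are repelled from s = 1/2.
for x in (1e-1, 1e-2, 1e-3):
    print(x, usp_density(x) / ((2 * math.pi * x) ** 2 / 6))
```

The printed ratios approach 1 as $x \rightarrow 0$, confirming the quadratic vanishing.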
\vskip .5cm
With this numerical and theoretical evidence, we are led to
\demo{Conjecture 0.11}
Let the $j^{th}$ lowest zero of $\zeta_L (s)$ be denoted by
$\rho = 1/2 + i \gamma_L^{(j)}$. Then for any
$f \in C_0 (\Bbb R_{\geq 0})$
$$
\lim\limits_{x \rightarrow \infty} \frac{1}{\#\{L|D_L < x \}}
\sum\limits_{D_L < x} f \left(\frac{ \gamma_L^{(j)} \log D_L}{2 \pi} \right)
= \int_0^\infty f(t) d \tilde{\nu}_{-,j} (t) \; .
$$
\enddemo
The density of the measure $\tilde{\nu}_{-,1}$ (unlike
$\tilde{\nu}_1$ or $\tilde{\nu}_{+,1}$) corresponding to
$USp (\infty)$ has the remarkable property that it vanishes
to second order at $s=0$. That is, the zeros are rarely
close to $1/2$ (or they repel the point $1/2$).
At a crude level this was already observed by Haselgrove
\cite{Haz}, who was the first to experiment numerically with
zeros of $\zeta_L (s)$. What Conjecture 0.11 offers is a precise
version of this phenomenon together with a theoretical explanation.
That $Sp (\infty)$ appears to be gluing these $\zeta _L (s)$
together as it does in the function field case, hints at the
existence in the global case of a remarkable structure yet to be discovered.
\head{1. \ Measures attached to spacings of eigenvalues}\endhead \
Let $N \geq 1$ be an integer. Given an element $A$ in the unitary group
$U(N)$, all of its eigenvalues lie on the unit circle,
so there is a unique increasing sequence of angles in $[0, 2\pi)$
$$
0 \leq \phi_1 \leq \phi_2 \leq \ldots \leq \phi _N < 2 \pi
\tag1.1
$$
such that
$$
\det \; (T-A) = \prod_{j=1}^{N} (T - e^{i \phi_j} ) \; .
\tag1.2
$$
The $N$ non-negative real numbers
$$
\left.
\aligned
&\qquad \phi_{j+1} -\phi_j , \quad j =1 ,\dots, N-1 \qquad\\
\text{and}\\
&\qquad2\pi + \phi_1 - \phi_N \qquad
\endaligned \right\} \tag1.3
$$
are called the ``literal'' spacings between the adjacent eigenvalues of $A$.
Their sum is $2\pi$, so their mean is $2\pi/N$. By the normalized spacings
between adjacent eigenvalues of $A$, we mean the $N$ non--negative numbers
$\Delta_1 , \dots , \Delta _N$ defined by
$$
\align
&\qquad \Delta_j =
\frac{N}{2\pi} (\phi_{j+1} - \phi_j ) \, , \quad j = 1, \dots , N-1 \\
\text{and}\\
&\qquad \Delta_N = \frac{N}{2\pi} (2 \pi + \phi_1 - \phi_N ) \; . \tag1.4
\endalign
$$
Clearly their mean is 1.
\vskip .5cm
More generally if $a \geq 1$ is an integer, we can define the
``$a$-th'' spacings. Firstly, we prolong the sequence $j \rightarrow \phi_j$
to all integers $j$ by requiring that $\phi_{j+N} = \phi_j + 2\pi$.
For $j = 1 , \dots , N$ define
$$
\Delta_{j,a} = \frac{N}{2\pi} (\phi_{j+a} - \phi_j ).
\tag1.5
$$
The mean of these nonnegative numbers is clearly equal to $a$.
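Definitions (1.4) and (1.5) can be sketched in a few lines of code (our illustration; the function name is ours). The prolongation $\phi_{j+N} = \phi_j + 2\pi$ supplies the wrap-around terms, and the telescoping sum shows the mean of the $\Delta_{j,a}$ is exactly $a$:

```python
import math
import random

def normalized_spacings(phis, a=1):
    """Normalized a-th spacings (1.5): Delta_{j,a} = (N/(2*pi)) * (phi_{j+a} - phi_j),
    with the angle sequence prolonged by phi_{j+N} = phi_j + 2*pi."""
    N = len(phis)
    phis = sorted(phis)
    prolonged = phis + [p + 2 * math.pi for p in phis[:a]]
    return [N / (2 * math.pi) * (prolonged[j + a] - prolonged[j]) for j in range(N)]

random.seed(1)
N = 10
angles = [random.uniform(0, 2 * math.pi) for _ in range(N)]
for a in (1, 2, 3):
    deltas = normalized_spacings(angles, a)
    print(a, round(sum(deltas) / N, 6))  # the mean is exactly a (telescoping sum)
```

For $a = 1$ this reduces to (1.4), the wrap-around term being $\Delta_N$.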
\vskip .5cm
Corresponding to these spacings, we have the measures defined in the
introduction:
$$
\mu_a(A) = \frac{1}{N} \sum_{j=1}^{N} \delta_{\Delta_{j,a}} \; .
\tag1.6
$$
That is to say, for any continuous $f$ on $\Bbb R$
$$
\int_{\Bbb R} f d \mu_a (A) =
\frac{1}{N} \sum_{j=1}^{N} f (\Delta_{j,a} ) \; .
\tag1.7
$$
Note that in the above discussion we chose angles $\phi_j, j = 1, \ldots, N$
in a particular fundamental domain, viz $[0,2\pi)$.
Suppose that instead we had fixed a real number $\alpha$ and had chosen
the angles in $[\alpha , \alpha + 2\pi )$. The corresponding spacing
vectors $\Delta_{j,a}$ for $A$ computed using $[\alpha , \alpha + 2\pi)$
would be some cyclic permutation of those computed using $[0, 2\pi)$.
Hence, the measure $\mu_a(A)$ would be the same.
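The invariance just described is easy to verify numerically. In this sketch (ours; the helper name is ours), the angles are reduced to the fundamental domain $[\alpha, \alpha + 2\pi)$ before the spacings are formed; for two choices of $\alpha$ the spacing lists are cyclic permutations of one another, so the measures $\mu_a(A)$ coincide:

```python
import math
import random

def spacings_in_window(phis, alpha, a=1):
    """Normalized a-th spacings (1.4)-(1.5) with the eigenvalue angles
    chosen in the fundamental domain [alpha, alpha + 2*pi)."""
    N = len(phis)
    shifted = sorted((p - alpha) % (2 * math.pi) for p in phis)
    ext = shifted + [p + 2 * math.pi for p in shifted[:a]]
    return [N / (2 * math.pi) * (ext[j + a] - ext[j]) for j in range(N)]

random.seed(2)
phis = [random.uniform(0, 2 * math.pi) for _ in range(8)]
d0 = spacings_in_window(phis, 0.0)
d1 = spacings_in_window(phis, 1.3)

# The two lists are cyclic permutations of each other (up to rounding),
# so as multisets -- hence as measures mu_a(A) -- they agree.
print(max(abs(x - y) for x, y in zip(sorted(d0), sorted(d1))) < 1e-9)
```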
\vskip .5cm
We begin with some easily proven elementary facts.
\proclaim{Lemma 1}
\roster
\item On $U(N)$, each of the angles $\phi_j$ (computed using
$[0,2\pi)$) is a (Borel) measurable function. Each is continuous on
the open set $U(N) [1 / \det (I-A)]$ of $U(N)$, where 1 is not an
eigenvalue. More generally, for each $\alpha$, the angles computed using
$[\alpha, \alpha + 2\pi )$ are measurable and continuous on the open set
$U(N) [1/ \det (e^{i \alpha}-A)]$.
\item For any $a \geq 1$ and $\alpha$, the $N$ normalized spacings
$\Delta_{j,a} (A)$ computed using $[\alpha , \alpha + 2 \pi )$
are measurable and are continuous on $U(N) [1 / \det(e^{i\alpha}-A)]$.
\item If $f$ is a continuous function on $\Bbb R$
$$
A \longrightarrow \int_{\Bbb R} f d \mu_a (A)
$$
\vskip -.3cm
is continuous on $U(N)$.
\endroster
\endproclaim
\demo{Proof}
(2) is immediate from (1). For (3) we observe that,
if we compute angles using $[\alpha, \alpha + 2\pi)$
each of $f(\Delta_{j,a} (A))$ is continuous on
$U(N)[1 / \det (e^{i \alpha} - A)]$ and so
$A \rightarrow \int f d\mu_a (A)$ is continuous on
$U(N) [1/ \det (e^{i\alpha} - A)]$. Since these open
sets cover $U(N)$ as we vary $\alpha$, the result follows.
The proof of (1) is given in Appendix 1.
\enddemo
\vskip .5cm
For $\nu$ a (positive) measure on $\Bbb R$ its cumulative
distribution function $CDF_{\nu}$ is the function on $\Bbb R$
defined by
$$
CDF_\nu (x) = \nu((-\infty, x]) \; .
\tag1.8
$$
Recall that we defined the discrepancy $D(\nu, \mu)$
between two such measures to be
$$
D(\nu, \mu) = \sup_{-\infty < x < \infty} | CDF_\nu (x) - CDF_\mu (x)| \;.
\tag1.9
$$
For probability measures (or measures of mass at most 1), $0 \leq D \leq 1$.
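For a point measure with masses $1/N$ (as in (1.6) above, or in Lemma 3 below), the supremum in (1.9) against a measure with continuous CDF is attained at the jump points, checking both one-sided values there. A sketch in code (ours; the midpoint-grid example is hypothetical):

```python
def discrepancy(points, cdf):
    """The discrepancy (1.9) between the empirical measure of `points`
    (mass 1/N at each point) and a measure with continuous CDF `cdf`.
    The sup is attained at the jump points, where the empirical CDF
    takes the one-sided values (j-1)/N and j/N."""
    xs = sorted(points)
    N = len(xs)
    return max(max(abs(j / N - cdf(x)), abs((j - 1) / N - cdf(x)))
               for j, x in enumerate(xs, start=1))

# Hypothetical example: the N-point midpoint grid against the uniform CDF
# on [0, 1]; the discrepancy is exactly 1/(2N).
N = 10
grid = [(2 * j - 1) / (2 * N) for j in range(1, N + 1)]
print(discrepancy(grid, lambda x: min(max(x, 0.0), 1.0)))  # -> 0.05
```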
The following Lemma will be very useful later on when we apply various
equidistribution theorems.
\proclaim{Lemma 2}
Let $\nu$ be a probability measure on $\Bbb R$ with a continuous $CDF$.
Then the function
$$
A \mapsto D (\mu_a(A), \nu )
$$
from $U(N)$ to $[0,1]$ is continuous.
\endproclaim
\demo{Proof}
As before, it suffices to show that this function is continuous on each open
set $U(N) [1 / \det (e^{i\alpha} -A)]$.
On this open set, the spacings $\Delta_{j,a} (A)$ are
continuous. So the Lemma follows from:
\enddemo
\proclaim{Lemma 3}
Let $\nu$ be a probability measure on $\Bbb R$ with a continuous $CDF$.
Fix $N$ and for each $P = (x_1 , \dots, x_N) \in \Bbb R^N$
let $\mu(P)$ be the measure (on $\Bbb R$)
$$
\mu (P) = \frac{1}{N} \sum_{j=1}^{N} \delta_{x_{j}} \; .
$$
The function $P \rightarrow D(\mu (P), \nu)$
is a continuous function from $\Bbb R^N$ to $[0,1]$.
\endproclaim
\demo{Proof} See Appendix 2.
\enddemo
\vskip .5cm
Theorems 0.1 and 0.2 are concerned with the measures $\mu_a(A)$ for $A$
in various subgroups of $U(N)$. Let $K$ be a compact group and
$$
\rho: K \rightarrow U(N)
\tag1.10
$$
a continuous representation. In this way, we have the measures
$\mu_a( \rho (A))$.
We will denote this measure by $\mu_a(A, K, \rho)$ if we wish
to emphasize $K$. The Haar measure (total mass 1) on $K$
will be denoted by Haar$_K$ or $d_K(A)$
(or simply $dA$) when it appears in an integral.
The average, or expected value, is defined in the obvious way:
$$
%\spreadlines{2\jot}
\align
E( \mu_a (A, K, \rho) , \text{Haar}_K (A))& = \mu_a (K, \rho) \\
&= \frac{1}{N} \sum_{j=1}^{N} ( \Delta_{j,a} \circ \rho )_* (\text{Haar}_K ) \;. \tag1.11
\endalign
$$
Put another way, for any bounded continuous function $f$ on $\Bbb R$
$$
\int_{\Bbb R} f d \mu _a (K , \rho ) = \int_K \bigg(\int_{\Bbb R}
f d \mu_a (A, K , \rho ) \bigg) d_K A \; .
\tag1.12
$$
We may also write
$$
\align
\mu_a (K, \rho )& = E ( \mu_a(A, K, \rho), \; \text{Haar}_K ) \\
&= \int_K \mu_a (A, K, \rho ) dA \; .
\tag1.13
\endalign
$$
It is clear that $\mu_a (K, \rho )$ is a probability measure.
If the representation $\rho$ is understood, we will omit it from the notation.
In what follows, we are primarily interested in the case when $K$ is one of the compact
classical groups $G(N)$ in their standard representations.
That is $G(N)$ is one of:
\roster
\item"{(i)}" $U(N)$ the group of $N \times N$ unitary matrices and
its subgroup $SU(N)$ consisting of those of determinant equal to 1.
\item"{(ii)}" $O(N)$, the group of $N\times N$ real orthogonal matrices and
its subgroup $SO(N)$.
\item"{(iii)}" $USP(N)$, ($N$ even ) the compact form of $SP(N)$.
That is, the set of $N\times N$ complex matrices $A$ which are
\endroster
$$
\xalignat2
\text{(a)} \quad &\text{unitary} \; i.e. \; ^*A A= I \\
\text{(b)} \quad & \text{symplectic} \; i.e. \; ^t AJA = J \\
&\quad \text{where} \qquad \quad J = \left[ \matrix
0 & I_{N/2} \\
-I_{N/2} &0
\endmatrix \right].
\endxalignat
$$
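Conditions (a) and (b) can be illustrated concretely. The following is our minimal sketch for the case $N = 2$, where $J$ is the $2\times 2$ matrix above and $A$ is taken real, so the conjugate transpose ${}^*A$ reduces to the transpose:

```python
import math

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

def close(X, Y, tol=1e-12):
    return all(abs(X[i][j] - Y[i][j]) < tol
               for i in range(len(X)) for j in range(len(X[0])))

theta = 0.7
A = [[math.cos(theta), math.sin(theta)],     # a real rotation matrix, the
     [-math.sin(theta), math.cos(theta)]]    # simplest element of USP(2)
J = [[0.0, 1.0], [-1.0, 0.0]]
I2 = [[1.0, 0.0], [0.0, 1.0]]

print(close(mat_mul(transpose(A), A), I2))             # (a) unitary: *A A = I
print(close(mat_mul(mat_mul(transpose(A), J), A), J))  # (b) symplectic: tA J A = J
```

Both checks succeed: for $2 \times 2$ matrices ${}^tAJA = \det(A)\,J$, so a rotation (determinant 1) is automatically symplectic.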
\vskip .5cm
Some cases of Theorems 0.1 and 0.2 for the various $G(N)$'s are simply
related to each other. To see this, we make the following observation.
\proclaim{Lemma 4}
Let $H$ and $K$ be compact groups, and $\Pi : H \rightarrow K$ a continuous
surjective group homomorphism. Then $\Pi_*$ Haar$_H = $ Haar$_K$.
\endproclaim
\vskip .2cm
\demo{Proof}
Since $\Pi$ is surjective and Haar$_H$ is translation invariant on $H$ its direct
image $\Pi_*$ Haar$_H$ is translation invariant on $K$. It is also of mass
1 and so the result follows from the uniqueness of Haar measure.
\enddemo
\vskip .2cm
\proclaim{Corollary 5}
Let $H$ be a compact group, $M$ a compact normal subgroup of $H$, and $K$ a
closed subgroup of $H$, such that $H = MK$.
Then for any bounded function\footnote"****"{Here and everywhere
else all functions will be Borel measurable.} $f$ on $H$ which is $M$
invariant, we have
$$
\int_H f(A) d_H (A) = \int_K f(A) d_K(A) \; .
$$
\endproclaim
\vskip .2cm
\demo{Proof}
Consider the quotient $H/M \simeq K/(M \cap K)$.
From Lemma 4 we have
$$
%\spreadlines\2jot
\split
\int_H f(A)d_H(A) &= \int_{H/M} f(A) d_{H/M} (A) \\
&= \int_{K/K\cap M} f(A) d_{K/K \cap M} (A) \\
&= \int_K f(A) d_K (A) \; .
\endsplit
$$
\enddemo
\vskip .5cm
We apply these remarks as follows. The various spacing measures attached
to an element $A$ of $U(N)$ are the same for $A$ and for $\lambda A$,
for any scalar $\lambda \in S^1$. So for each $a \geq 1$, the
expected values of $\mu_a (A, U(N))$ and of $\mu_a (A, SU(N))$
coincide (take $M$ to be the group of scalars in $H= U(N)$ and $K$
the subgroup $SU(N)$). Similarly, the expected values of these
measures for $O(2N+1)$ and $SO(2N+1)$ will coincide
(take $M = \{ 1, -1\}$, $H = O(2N+1)$ and $K =SO(2N+1)$).
Thus, the $SU(N)$ case of Theorem 0.1 is equivalent to the $U(N)$ case
of it and the $SO(2N+1)$ case to the $O(2N+1)$ case.
Similarly, the discrepancy integrals in Theorem 0.2 coincide for
$U(N)$ and $SU(N)$, and for $O(2N+1)$ and $SO(2N+1)$.
So Theorem 0.2 for $SU(N)$ and $SO(2N+1)$ follow from the cases
$U(N)$ and $O(2N+1)$, and vice versa.
\vskip .5cm
We may also apply the above to give slight extensions of
Theorems 0.1 and 0.2, that is to groups $H(N)$ which are close to $G(N)$.
\proclaim{Lemma 6}
\roster
\item"{(a)}" $G(N)=SU(N) \subset H(N) \subset\; \text{Normalizer of} \; G(N) \; \text{in} \;U(N)$
\item"{(b)}" $G(N)=SO(2N+1) \subset H(N) \subset\; \text{Normalizer of} \; G(N) \; \text{in} \;U(2N+1)$
\item"{(c)}" $G(N)=USP(2N) \subset H(N) \subset\; \text{Normalizer of} \; G(N) \; \text{in} \;U(2N)$
\item"{(d)}" $G(N)=SO(2N) \subset H(N) \subset\; \text{Normalizer of} \; G(N) \; \text{in} \;U(2N)$
\endroster
\endproclaim
Let $f$ be a bounded function on the ambient group (i.e. $U(N)$ in case (a), etc.)
which is invariant by the subgroup $S^1$ of unitary scalars.
In case (a), (b), and (c), we have
$$
\int_{H(N)} f(A) dA = \int_{G(N)} f(A) dA \; .
$$
In case (d), we have either
$$
\int_{H(N)} f(A) dA = \int_{SO(2N)} f(A) dA
$$
or
$$
\int_{H(N)} f(A) dA = \int_{O(2N)} f(A)dA
$$
depending on whether or not every element of $H(N)$,
acting by conjugation on $SO(2N)$, induces an inner automorphism of $SO(2N)$.
\demo{Proof}
In cases (a), (b), and (c) we claim that the normalizer of $G(N)$ in the
ambient unitary group is $S^1 G(N)$, while in case (d) we claim it is
$S^1 O(2N)$. It is clear that the named group normalizes $G(N)$;
what must be shown is that the normalizer is no bigger. In other words,
we must show that $H(N) \subset S^1 G(N)$ in cases (a), (b), (c), while
$H(N) \subset S^1 O(2N)$ in case (d).
In the case (a), $S^1 G(N) = U(N)$ so there is nothing to prove.
In cases (b) and (c), we use the fact that the Dynkin diagram
has no nontrivial automorphisms, so every automorphism of
$G(N)$ is inner. So any $h \in H(N)$, acting by conjugation on
$G(N)$ induces conjugation by some element $g \in G(N)$.
Then $h^{-1}g$ commutes with $G(N)$ in its standard representation.
As this representation is irreducible, $h^{-1}g$ must be scalar,
and this scalar being unitary, lies in $S^1$. Thus $H(N) \subset S^1 G(N)$.
\vskip .5cm
Case (d) requires some additional attention.
If $N=1$, one can argue by inspection; this is left to the reader.
If $N \geq 2$, $N \neq 4$ there is one nontrivial automorphism of the
Dynkin diagram and it is induced by conjugation by any element of
$O_- (2N)$. So any $h$ in $H(N)$ induces conjugation by some element
$g\in O(2N)$. Repeating the irreducibility argument given above in the
(b) and (c) cases, we get $H(N) \subset S^1 O(2N)$.
\vskip .5cm
It remains to examine case (d) with $N=4$. Here the Dynkin
diagram has three extreme points, and its automorphism group is
$\sum_3$ acting by permutations of these three points. Think of
these three points as (the highest weights of) the three 8--dimensional
irreducible representations of Lie$(SO(8))$ (namely the
``standard'' one $\rho_{std}$ and the two spin representations).
The action of any $h$ in $H(N)$ preserves the isomorphism class of
$\rho_{std}$; indeed, for $A$ in $SO(2N) \subset U(2N)$ and $h$ in
$U(2N)$ which normalizes $SO(2N)$,
$$
\rho_{std} (h A h^{-1} ) = h A h^{-1} = h \rho_{std} (A)h^{-1} \; .
$$
Therefore conjugation by $h$ is either inner, or interchanges the two spin
representations.
In the first case, we have $h$ in $S^1 G(N)$. In the latter
case, use the fact that any $\gamma_-$ in $O_-(2N)$ also interchanges
the two spin representations. Then $h\gamma_-$ induces an inner
automorphism and hence $h\gamma_-$ is in $S^1G(N)$, whence
$h \in S^1 O(2N)$, as required.
\vskip .5cm
In cases (a), (b), and (c) the inclusions
$$
G(N) \subset H(N) \subset S^1 G(N)
$$
make clear that we have
$$
S^1H(N) = S^1 G(N) \; .
$$
From this, Lemma 4 and its corollary, we infer that
$$
\int_{H(N)} f(A) dA = \int_{S^1 H(N)} f(A)dA =
\int_{S^1G(N)} f(A)dA = \int_{G(N)} f(A)dA
$$
as required.
\vskip .5cm
In case (d) we must distinguish two cases.
If $H(N)$ acts on $SO(2N)$ through inner automorphisms,
we have the inclusion
$$
G(N) \subset H(N) \subset S^1 G(N)
$$
and we argue as above. If some $h$ in $H(N)$ induces a non--inner
automorphism, then $H(N)$ contains an element of $S^1 O_-(2N)$,
in which case the inclusion $G(N) \subset H(N) \subset S^1 O(2N)$ forces
the equality
$$
S^1 H(N) = S^1 O(2N) \; .
$$
Applying the same arguments to
$H(N) \subset S^1 H(N)$ and $O(2N) \subset S^1 O(2N)$
yields the result in these cases.
\vskip .5cm
If we apply Lemma 6 to the discrepancy function we obtain
$$
\int_{H(N)} D(\mu_a (A), \mu_a ) dA =
\int_{G(N)} D(\mu _a (A), \mu _a ) dA
\tag1.14
$$
for cases (a), (b), and (c), while
$$
\int_{H(N)} D(\mu_a (A) , \mu _a ) dA =
\left\{
\aligned \int_{SO(2N)} & D( \mu_a (A), \mu_a ) dA \\
\int_{O(2N)} & D(\mu_a (A) , \mu_a ) dA\endaligned
\right. \tag1.15
$$
depending on whether or not $H(N)$ acts by inner automorphism on
$SO(2N)$ in case (d). In particular, we see that Theorem 0.2 for
$G(N)$ implies the same theorem with $G(N)$ replaced by $H(N)$.
\enddemo
\vskip .5cm
We end this section with some further technical remarks which will
facilitate the analysis in the proof of Theorems 0.1 and 0.2.
The eigenvalues of any $A \in G(N)$ are on the unit circle
and this makes it difficult to carry out explicit calculations.
For this reason we define, case by case, for each $N > a$, measures
$\mu_a (A, G(N) \, , \; \text{naive})$ which approximate $\mu_a (A)$ and which
are well adapted to calculation because they involve points on a line.
\vskip .5cm
\noindent (I) The $U(N)$ case: \ Attached to $A \in U(N)$
are its $N$ angles
$$
0 \leq \phi_1 \leq \phi_2 \leq \dots \leq \phi_N < 2 \pi \; .
$$
We use only the first $N-a$ of the normalized spacings
$\Delta_{j,a} (A)$, that is, precisely those
which don't require wrapping around the unit circle. Define
$$
\mu_a (A, U(N), \text{naive} ) = \frac{1}{N}
\sum_{j=1}^{N-a} \delta _{\Delta_{j,a}} \; .
\tag1.16
$$
This measure has total mass $\frac{N-a}{N}$\; .
\vskip .5cm
\noindent (II) The $USP(2N)$ case: \ The eigenvalues of $A$ in
$USP(2N)$ occur in $N$ complex conjugate pairs which we may
write uniquely as $e^{\pm i \phi_j}$ with $N$ angles
$$
0 \leq \phi _1 \leq \phi_2 \leq \ldots \leq \phi_N \leq \pi \; .
\tag1.17
$$
Each of these angles is a continuous function on $USP(2N)$.
For $1 \leq j \leq N-a$, we define the $a$--spacings as
before and set
$$
\mu_a (A, USP(2N) , \text{naive} ) = \frac{1}{N} \sum_{j=1}^{N-a} \delta_{\Delta_{j,a}}
\tag1.18
$$
which has mass $(N-a)/N$.
\vskip .5cm
\noindent (III) The $SO(2N+1)$ case: \ For this case, $1$ is an
eigenvalue of every $A$.
The remaining eigenvalues of $A$ occur in $N$ complex
conjugate pairs, which we may write uniquely as
$e^{\pm i \phi_{j}}$ with $N$ angles in $[0,\pi]$,
$$
0 \leq \phi_1 \leq \phi_2 \leq \ldots \leq \phi_N \leq \pi \; .
\tag1.19
$$
Again, each of these angles $\phi_j$ is continuous on
$SO(2N+1)$. With the corresponding spacings
$\Delta_{j,a}\, , \; 1 \leq j \leq N-a$, we set
$$
\mu_a (A, SO(2N+1) \, , \; \text{naive}) = \frac{1}{N} \sum_{j=1}^{N-a}
\delta_{\Delta_{j,a}}
\tag1.20
$$
which again has mass $(N-a)/N$.
\vskip .5cm
\noindent (IV) The $SO(2N)$ case: \ The eigenvalues of an $A$ in
$SO(2N)$ occur in $N$ complex conjugate pairs, which we may write
uniquely as $e^{\pm i \phi_{j}}$ with $N$ angles in $[0, \pi ]$ and the
$\phi_j$'s as in (1.19). The measures $\mu_a (A, SO(2N) \, , \; \text{naive})$
are then defined as in (1.20) with the corresponding $\Delta_{j,a}$'s.
\vskip .5cm
\noindent (V) The $O_- (2N+2)$ case: \ Here $O_- (2N+2)$ denotes the set of elements
in $O(2N+2)$ with determinant $-1$. $O_-(2N+2)$ is a principal
homogeneous space under $SO(2N+2)$ and $O(2N+2)$ is the disjoint union
$SO(2N+2) \cup O_- (2N+2)$. We will denote by Haar$_{O_-(2N+2)}$ the
restriction to $O_-(2N+2)$ of Haar measure on $O(2N+2)$, but
normalized so that $O_-(2N+2)$ has mass $1$.
\vskip .5cm
\noindent Any element $A$ in $O_- (2N+2)$ has both $1$ and $-1$ as eigenvalues.
The remaining eigenvalues of $A$ occur in $N$ complex conjugate pairs,
which we may write uniquely as $e^{\pm i \phi_j}$ with $N$ angles
in $[0,\pi]$ as in (1.19). We form the corresponding measures
$\mu_a (A, O_-(2N+2) \, , \; \text{naive})$ as in (1.20).
\vskip .5cm
This concludes our case by case definition of the naive spacing measures.
In view of the comments following Lemma 4, these cases will suffice
for the purpose of understanding the general $G(N)$.
From the definitions of the various spacing measures it\footnote"*****"{and
noting that one obtains the same spacing measures $\mu_a (A \, , \; \text{naive})$
on reversing the order of the points.} follows that for
$G(N) = U(N), \; SO(2N+1) , \; USP(2N), \; SO(2N)$ and
$O_-(2N+2)$ and $N > a$
$$
D( \mu _a (A, G(N) ), \; \mu_a (A, G(N) , \; \text{naive})) \leq
\frac{a+1}{N} \; .
\tag1.21
$$
Hence $E(\mu_a (A, G(N)))$ and $E(\mu_a (A, G(N) , \; \text{naive}))$
will have the same limits, if they have limits at all.
In particular, the above implies that Theorem 0.1 will follow from:
\vskip .5cm
\proclaim{Theorem 0.1 (bis)}
For $G(N)$ one of $U(N), \; SO(N), \; USP(2N)$ or $O_- (2N)$ and $a \geq 1$
$$
\lim_{N \rightarrow \infty} E( \mu_a (A, G(N) , \; \text{naive})) = \mu_a \; .
$$
\endproclaim
Similarly the bound (1.21), together with the triangle inequality, implies that
Theorem 0.2 follows from:
\vskip .5cm
\proclaim{Theorem 0.2 (bis)}
Given $\epsilon >0$ and $a \geq 1$, there is an effectively computable
$N = N (\epsilon, a)$ such that for $G(N)$ one of $U(N)$, $SO(N)$,
$USP(2N)$, $O_-(2N)$ and $N \geq N (\epsilon , a)$
$$
\int_{G(N)} D(\mu_a (A, G(N), \;
\text{naive}), \; \mu_a ) dA \leq N^{-\frac{1}{6}+ \epsilon} \; .
$$
\endproclaim
\vskip .5cm
\noindent Note that the case $O(2N)$ follows
from each of $SO(2N)$ and $O_-(2N)$.
In the next sections, we will establish Theorems 0.1 and 0.2 bis.
%\end{document}
\head{2. Haar Measure}\endhead
In each of the cases (I) to (V) at the end of section 1, we described the
basic angles $\phi_1 ,\ldots , \phi_N$ corresponding to $A \in G(N)$.
We denote by $X(A) \in \Bbb R^N$ this vector of eigenvalue angles
for $A\in G(N)$. Thus, for $A \in U(N)$,
$X(A) \in [0, 2\pi)^N$, while in all the other cases,
$X(A)$ lies in $[0, \pi ]^N$. The Weyl integration
formula gives the measure on these $\phi_j$'s induced by Haar measure on
$G(N)$. The version of the formula which we need, especially in the
non--$U(N)$ case, is precisely the one given in Weyl [``The classical
groups,'' pp. 197 (7.4B), 218 (7.8B), 224 (9.7), and 226 (9.15)].
In all but the $O_- (2N+2)$ case, this formula is, to the modern reader,
a straightforward deciphering of the ``intrinsic'' one given in
Bourbaki [Lie IX, \S 6 N 2, Cor.1]. The $O_- (2N+2)$ formula, which
seems to have been all but forgotten in modern treatments, is
discussed in the appendix to this chapter, cf.\ also [Deligne letter].
\vskip .5cm
\noindent (I) the $U(N)$ case: \ An element $A$ in $U(N)$ is
determined up to conjugacy by its vector of angles $X(A)$.
Bounded central functions on $U(N)$ are in one--one correspondence with
bounded functions $\tilde{g}$ on $[0, 2\pi )^N$ which
are $\sum_N$ invariant, via $g(A) = \tilde{g} (X(A))$.
We denote by $V_{U(N)}$ the measure on $[0,2\pi)^N$
(with coordinates $x_j , \; j=1, \ldots , N$) given by
$$
dV_{U(N)} = \frac{1}{N!} \; \underset {j<k}\to \prod \; |e^{ix_j} - e^{ix_k}|^2
\; \underset j\to \prod \; \frac{dx_j}{2\pi} \; .
$$
\item For $n > N$, $D_{n,N} \equiv 0$.
\item For $n \geq 1$, let $F$ be a symmetric function on $T^n$.
Define $F_{n,N}$ on $T^N$ by
$$
F_{n,N} (t_1, \ldots , t_N) = \sum_{1 \leq i_1 < i_2 < \cdots < i_n \leq N}
F(t_{i_1} , \dots , t_{i_n}) \; .
\tag2.17
$$
So $F_{n,N}$ is symmetric on $T^N$.
Let $\mu_{n,N}$ be the measure on $T^n$ defined by
$$
d\mu_{n,N} (t_1 , \dots, t_n) = \frac{1}{n!}
D_{n,N} (t_1, \dots , t_n ) d\mu (t_1) \cdots d\mu (t_n)
\tag2.18
$$
Then we have the integration formula
$$
\int_{T^N} F_{n,N} (t_1, \dots, t_N) d\mu_{N,N} (t_1, \dots, t_N ) =
\int_{T^n} F(t_1 , \dots , t_n ) d \mu_{n,N}
(t_1, \dots , t_n ) \; .
$$
\endroster
\endproclaim
\demo{Proof}
\roster
\item For $n \geq 1$, $\phi_n = P_n (f) = f^n + $ lower terms.
This means that we can pass from $\det \hskip -.05cm \scriptstyle{_{N\times N}} (\phi_{i-1} (t_j))$
to the $N\times N$ determinant whose $i^{th}$ row for $i \geq 2$ is
$f(t_j)^{i-1}$ and whose first row is
$\phi_0 (t_j) = \frac{1}{\sqrt{\mu(T)}} (1, \ldots , 1)$,
by elementary row operations.
Since the determinant does not change, we get the first assertion.
\item Taking the square absolute value of (1) we get
$$
\align
\frac{1}{\mu(T)} |\text{Van} (f(t_1) , \ldots , f(t_N) )|^2 &
= | \det \hskip -.05cm \scriptstyle{_{N\times N}} (\phi_{i-1} (t_j))|^2 \\
& = \det \hskip -.05cm \scriptstyle{_{N\times N}} ( \phi_{i-1} (t_j)) \times
\det \hskip -.05cm \scriptstyle{_{N\times N}} (\overline{\phi_{i-1} (t_j)})\\
& = \det \hskip -.05cm \scriptstyle{_{N\times N}} ( \phi_{j-1} (t_i)) \times
\det \hskip -.05cm \scriptstyle{_{N\times N}} (\overline{\phi_{i-1} (t_j)})\\
&= \det \hskip -.05cm \scriptstyle{_{N\times N}} ((i,j) \mapsto \sum_{k=1}^{N}
\phi_{k-1} (t_i) \overline{\phi_{k-1}} (t_j)) \\
&= D_{N,N} (t_1, \ldots , t_N ) \; .
\endalign
$$
\item From the orthonormality of the $\phi_n$'s and
the definition of $K_N$ we have immediately
$$
\int_T K_N (t,t) d \mu (t) = N
\tag2.19
$$
and
$$
\int_T K_N (x,t) K_N (t,y) d\mu (t) = K_N (x,y) \; .
\tag2.20
$$
The first of these is the $n=1$ case of the assertion. For
$n> 1$, we expand $D_{n,N}$ by its $n^{th}$
column:
$$
D_{n,N} (t_1 , \ldots , t_n ) =
\sum_{k=1}^n (-1)^{k+n} K_N
(t_k, t_n) \; \text{Cofactor}_{k,n} \; .
$$
The term with $k =n$ is $K_N (t_n , t_n ) D_{n-1 , N}$
which integrates to give $N \times D_{n-1,N} (t_1, \ldots , t_{n-1})$.
It remains to see that for each of the $n-1$ values of $k$ from 1 to $n-1$,
the term $(-1)^{k+n} K_N (t_k, t_n) \times$ Cofactor$_{k,n}$ integrates
to give $-D_{n-1,N} (t_1 , \ldots , t_{n-1})$. For each such $k$,
we expand Cofactor$_{k,n}$ by its $n^{th}$ row
$$
\text{Cofactor}_{k,n} = \sum_{\ell = 1}^{n-1}
(-1)^{n-1+\ell} K_N (t_n, t_\ell) \times \text{Cofactor}_{(n,k), (\ell,n)}
$$
where
$$
\text{Cofactor}_{(n,k), (\ell,n)} = \; \text{the} \;
(n,\ell) - \text{cofactor \; of \; Cofactor}_{k,n}
$$
is the determinant of the $(n-2) \times (n-2)$ matrix obtained by removing the
indicated rows and columns. So we obtain
$$
\align
(-1)^{k+n} K_N (t_k , t_n ) \times \text{Cofactor}_{k,n}
& = \sum_{\ell = 1}^{n-1} (-1)^{k-1+\ell} K_N (t_n, t_\ell)K_N(t_k, t_n) \cdot \\
& \qquad \qquad \qquad \text{Cofactor}_{(n,k), (\ell,n)} \; .
\endalign
$$
The term Cofactor$_{(n,k), (\ell,n)}$ is just the $(k, \ell)$
cofactor of $D_{n-1, N}$ (itself the $(n,n)$--cofactor of $D_{n,N}$),
and $K_N(t_n,t_\ell)K_N(t_k,t_n)$ integrates to give
$K_N(t_k,t_\ell)$. So after integration we get
$$
\sum_{\ell=1}^{n-1} (-1)^{k-1+\ell} K_N (t_k, t_\ell) \times
(\text{the} \; (k,\ell) \;\text{cofactor \; of} \; D_{n-1,N})
$$
which is precisely $(-1)$ times (the expansion by minors along
the $k^{th}$ row) of $D_{n-1, N}$.
\item In view of the integration formula (3), to treat the case
$n \leq N$ it suffices to show that $D_{N,N}$ is real, symmetric and
nonnegative in its $N$--variables. This is obvious from
(2) since the Vandermonde determinant transforms under $\sum_N$ by
the sign character, and hence its square absolute value is symmetric as well as
real and nonnegative.
\vskip .5cm
To show that $D_{n,N}$ vanishes for $n > N$, think of $D_{n,N}$
as the $n \times n$ determinant made from $K_N(x,y)$. Introduce
functions $\psi_i$ on $T$ for $i=0, \ldots, n-1$ by defining
$$ \psi_i = \phi_i \quad \text{for} \quad 0\leq i \leq N-1 $$
and
$$ \psi_i \equiv 0 \quad \text{for} \quad N \leq i \leq n-1 \; .$$
Then it is trivially the case that
$$
K_N(x,y) = \sum_{i=0}^{n-1} \psi_i (x) \overline{\psi_i (y)}
$$
and hence (as in the proof of (2)) that
$$
\det {\hskip -.05cm \scriptstyle{_{n\times n}}} (K_N (x_i , x_j)) =
| \det {\hskip -.05cm \scriptstyle{_{n\times n}}} (\psi_{i-1}(x_j))|^2 \; .
$$
Inasmuch as the last row $(n > N)$ is zero,
the determinant is zero.
\vskip .3cm
\item The $\sum_n$--invariance of $\mu_{n,N}$
is obvious from (4). If $n > N$, both $F_{n,N}$ and
$\mu_{n,N}$ vanish identically, so the integration formula is true,
but nugatory. For $n=N$, there is nothing to prove.
So we take $1 \leq n < N$. Consider the integral
$$
\split
\int_{T^N} F_{n,N} (t_1, \ldots , t_N )
D_{N,N} (t_1, \ldots , t_N) d\mu(t_1) \cdots d\mu (t_N) = \qquad \qquad \qquad \\
\qquad \quad \sum_{1\leq i_1 < i_2 \cdots < i_n \leq N}
\int_{T^{N}} F(t_{i_1}, \ldots , t_{i_n} )
D_{N,N} (t_1 , \ldots , t_N)
d\mu(t_1) \cdots d\mu (t_N) \; .
\endsplit
$$
By symmetry, each summand is equal to
$$
\int_{T^N} F(t_1 , \ldots , t_n )
D_{N,N} (t_1 , \ldots , t_N ) d\mu (t_1) \cdots d\mu(t_N) \; .
$$
Using (3) to successively integrate out the variables
$t_N , t_{N-1} , \ldots , t_{n+1}$, we get
$$
\split
\int_{T^N} F(t_1 , \ldots , t_n ) D_{N,N} (t_1 , \ldots , t_N )
d\mu (t_1) \cdots d \mu (t_N) = \qquad\qquad\qquad \qquad\\
\qquad \quad (N-n)! \int_{T^n} F(t_1 , \ldots , t_n )
D_{n,N} (t_1 , \ldots , t_n ) d \mu (t_1) \cdots d\mu (t_n) \; .
\endsplit
$$
Since there are $\binom Nn$ summands we get
$$
\split
\frac{1}{N!} \int_{T^{N}} F_{n,N} (t_1 , \ldots , t_N )
D_{N,N} (t_1 , \ldots , t_N) d \mu (t_1) \cdots d \mu(t_N) = \qquad \qquad\qquad\\
\qquad \quad \frac{1}{N!} \binom Nn \int_{T^n} F(t_1 , \ldots , t_n)
D_{n,N} (t_1 , \ldots , t_n ) d \mu (t_1) \cdots d \mu (t_n)
\endsplit
$$
as required. This completes the proof of Lemma 1.
\endroster
\enddemo
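The determinantal identity in part (2) of this proof can be sanity-checked numerically in the $U(N)$ case below, where $f(x) = e^{ix}$ and $K_N(x,y) = \sum_{n=0}^{N-1} e^{in(x-y)}$ as in (2.21); the following Python sketch (no part of the argument) compares $\det_{N\times N}(K_N(x_i,x_j))$ with $|\text{Van}(f(x_1),\ldots,f(x_N))|^2$ at random angles:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3
x = rng.uniform(0.0, 2.0 * np.pi, N)

# K_N(x, y) = sum_{n=0}^{N-1} e^{in(x-y)}, the U(N) kernel of (2.21);
# here mu(T) = 1, so the 1/mu(T) factor in part (2) is trivial
K = np.array([[np.sum(np.exp(1j * np.arange(N) * (xi - xj))) for xj in x]
              for xi in x])
det_K = np.linalg.det(K)

# Vandermonde determinant of the values f(x_j) = e^{ix_j}
f = np.exp(1j * x)
van = np.prod([f[k] - f[j] for j in range(N) for k in range(j + 1, N)])

# part (2): det(K_N(x_i, x_j)) = |Van|^2, a real nonnegative number
assert abs(det_K - abs(van) ** 2) < 1e-10
```

The same identity holds verbatim in the other cases, with $K_N$ built from the corresponding orthonormal family.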
\vskip .5cm
We now apply this Lemma to rewrite the Weyl measure
$V_{G(N)}$ as the measure $\mu_{N,N}$ on $T^N$ for a suitable
choice of data $(T, \mu, f)$ of the type considered in the Lemma.
We proceed case by case according to the cases at the
beginning of this section. Note that since $V_{O_-(2N+2)} = V_{USP(2N)}$
we do not discuss the $O_-(2N+2)$ case separately.
\vskip .5cm
\noindent (I) The $U(N)$ case: \ The group $G(1) = U(1)$
is abelian, so $U(1)$ is its own space of conjugacy classes.
We take for $T$ this space of conjugacy classes viewed as being
$[0,2\pi)$ endowed with Haar measure $d\mu = \frac{dx}{2\pi}$.
For $f$, we take the function $f(x) = e^{ix}$, the character of
the standard representation of $U(1)$. The powers $f^n$,
$n \in \Bbb Z$, are orthonormal, so we may take $P_n(X) = X^n$.
Since $\mu $ has total mass $1$, we have $\phi_n = f^n$
for $n \geq 0$. For $N \geq 1$, $K_N(x,y)$ is
$$
K_N(x,y) = \sum_{n=0}^{N-1} e^{in(x-y)} \; .
\tag2.21
$$
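As a numerical aside (no part of the argument), the relations (2.19) and (2.20) can be verified directly for this kernel by an equally spaced Riemann sum, which integrates trigonometric polynomials of degree less than the number of nodes exactly:

```python
import numpy as np

N = 4

def K(x, y):
    # K_N(x, y) = sum_{n=0}^{N-1} e^{in(x-y)} as in (2.21)
    return np.sum(np.exp(1j * np.arange(N) * (x - y)))

# (1/2pi) int_0^{2pi} K(x,t) K(t,y) dt via M equally spaced nodes;
# exact here since the integrand has frequencies of modulus < M
M = 64
t = 2.0 * np.pi * np.arange(M) / M
x, y = 0.7, 2.1
lhs = np.sum([K(x, s) * K(s, y) for s in t]) / M

assert abs(lhs - K(x, y)) < 1e-9                          # (2.20)
assert abs(np.sum([K(s, s) for s in t]) / M - N) < 1e-9   # (2.19)
```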
The measure $V_{U(N)}$ on $T^N = [0,2\pi)^N$
is equal to
$$
\aligned
dV_{U(N)}&= \frac{1}{N!}\; \underset {j<k}\to \prod \; |e^{ix_j} - e^{ix_k}|^2
\; \underset j\to \prod \; \frac{dx_j}{2\pi} \\
&= \frac{1}{N!} \det {\hskip -.05cm \scriptstyle{_{N\times N}}} (K_N (x_j, x_k))
\; \underset j\to \prod \; \frac{dx_j}{2\pi} = d\mu_{N,N} \; .
\endaligned
$$
From this it is clear that $\frac{\sin nx}{\sin x}$ is a
$\Bbb Z$--monic polynomial of degree $n-1$ in
$2 \cos x = e^{ix} + e^{-ix}$. By the Peter Weyl Theorem
(or elementarily) we know that the functions $\frac{\sin nx}{\sin x}$
are orthonormal on $[0,\pi]$ for the measure $d\mu$.
So we have
$$
\phi_n (x) = \frac{\sin (n+1) x}{\sin x} \;, \quad \text{for} \; n \geq 0 \; .
\tag2.26
$$
For $N \geq 1$, the function $K_N(x,y)$ is thus
$$
K_N(x,y) = \frac{1}{\sin x \sin y} \sum_{n=1}^{N}
\sin (nx) \sin (ny) \;.
\tag2.27
$$
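The orthonormality of the functions $\frac{\sin(n+1)x}{\sin x}$ can also be checked numerically. The sketch below (no part of the argument) assumes $d\mu = \frac{2}{\pi}\sin^2 x \, dx$ on $[0,\pi]$, the standard Weyl measure in this rank computation; that definition lies outside this excerpt, so the normalization is an assumption here:

```python
import numpy as np

# composite midpoint rule on [0, pi]
M = 200000
h = np.pi / M
x = (np.arange(M) + 0.5) * h

def inner(n, m):
    # <phi_n, phi_m> with phi_n(x) = sin((n+1)x)/sin(x) and
    # d mu = (2/pi) sin^2(x) dx  (assumed normalization)
    fn = np.sin((n + 1) * x) / np.sin(x)
    fm = np.sin((m + 1) * x) / np.sin(x)
    return np.sum(fn * fm * (2.0 / np.pi) * np.sin(x) ** 2) * h

for n in range(4):
    for m in range(4):
        assert abs(inner(n, m) - (1.0 if n == m else 0.0)) < 1e-6
```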
The measure $V_{USP(2N)}$ on $T^N = [0, \pi]^N$ is
equal to
$$
dV_{USP(2N)} = \frac{1}{N!} \; \underset {i<j}\to \prod \;
(2\cos x_i - 2 \cos x_j)^2 \; \underset j\to \prod \;
\frac{2}{\pi} \sin ^2 x_j \, dx_j = d\mu_{N,N} \; .
$$
$$
F(x) = 0 \quad \text{if} \quad \max_{i,j} |x_i - x_j | > \alpha \; .
\tag3.1
$$
(If $F$ satisfies this last condition with some $\alpha$,
we write $\text{supp} (F) \leq \alpha$).
Condition (3) ensures that $F$ measures the local
spacings between coordinates.
\noindent Now given $A \in G(N)$ we have its vector of eigenvalue
angles $X(A)$ lying in $[0, \sigma \pi ]^N$. For $F \in \Cal T_0(n)$
we define the random variable $Z$:
$$
\aligned
Z[n,F,G(N)] (A) & = \frac{1}{N+ \lambda} \sum_{\Sb S \subset X(A)\\ |S| = n \endSb}
F(\frac{(N+ \lambda)}{\sigma \pi} S) \\
&= \frac{1}{N+ \lambda } \sum_{1\leq j_1 < j_2 < \cdots < j_n \leq N}
F(\frac{(N+\lambda)}{\sigma \pi}
(x_{j_1} (A) , \ldots , x_{j_n} (A)))
\endaligned\tag3.2
$$
(The quantities $ \sigma , \lambda$, etc. are those defined in Table 1
of section 2.) Note that the scaling $\frac{N}{\sigma \pi} X(A)$
places these numbers on the line so that the mean spacing is 1.
We also note that $Z[n, F, G(N)](A)$ is actually continuous on
$G(N)$, if $G(N) \neq U(N)$.
When $G(N) = U(N)$, we have artificially chosen the origin to be the
point 1 on the unit circle. So in this case $Z$ is
bounded and corresponds to the naive spacing measures of section 1.
Lemma 1 of section 2, and in particular part (IV) thereof,
implies the following basic relation.
\vskip .5cm
\proclaim{Proposition 1}
$$
E (Z[n, F, G(N)]) = \frac{1}{N+ \lambda} \int_{[0, \sigma \pi]^n}
F((N+ \lambda ) x \big/ \sigma \pi ) d \mu_{n,N} (x)\;.
$$
\endproclaim
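For $U(N)$, Proposition 1 can be probed by Monte Carlo. The Python sketch below (no part of the argument) draws Haar-distributed unitaries by the standard QR-of-Ginibre recipe with phase correction, and checks the $n=1$ consequence that $d\mu_{1,N}$ is $N$ times normalized Haar measure on the circle, so that $E\big(\frac{1}{N}\sum_j \cos x_j(A)\big) = 0$:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n):
    # QR of a complex Ginibre matrix, columns rescaled by the phases
    # of diag(R), yields a Haar-distributed element of U(n)
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

N, samples = 6, 3000
acc = 0.0
for _ in range(samples):
    theta = np.angle(np.linalg.eigvals(haar_unitary(N)))
    acc += np.mean(np.cos(theta))
acc /= samples

# the exact expectation is 0; the empirical mean should be small
assert abs(acc) < 0.05
```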
This relation allows us to investigate $E(Z[n, F, G(N) ])$ as
$N\rightarrow\infty$. The renormalized
limit of the densities of $\mu_{n,N}$ is the
density on $\Bbb R ^n$
$$
W_n (x_1 , \ldots , x_n) =
\det {\hskip -.05cm \scriptstyle{_{n\times n}}}
\left( \frac{\sin \pi (x_j - x_k)}{\pi (x_j - x_k)} \right) \; .
\tag3.3
$$
For $F \in \Cal T _0(n)$ set
$$
\aligned
E(n, F, \text{univ} ) & = \int\limits_{[0, \alpha]^{n-1} (\text{order})}
F(0, z_2 , \ldots , z_n) W_n (0, z_2
, \ldots , z_n ) d z_2 \cdots dz_n \\
&= \frac{1}{(n-1)!} \int\limits_{[0, \alpha ]^{n-1}}
F(0, z_2 , \ldots , z_n )
W _n (0, z_{2} , \ldots , z_n ) dz_2 \cdots dz_n
\endaligned\tag3.4
$$
Here we are assuming that $\text{supp} (F) \leq \alpha$ and the notation
$\int_{\text{order}}$ means the integral is over the set
$z_2 \leq z_3 \leq \cdots \leq z_n$. Since $F$ and $W_n$
are $\sum_n$--invariant, it follows that the two integrals
in (3.4) must coincide.
\vskip .5cm
The main result of this section is
\proclaim{Theorem 2}
$$
|E(Z[n, F, G(N)]) - E(n, F, \text{univ}) |
\leq \Vert F \Vert_\infty ( 8 \alpha )^{n-1} ((\pi \alpha)^2
+ \alpha +1 + 10 \log N ) / N \leqno(1)
$$
$$
|E(Z[n,F, G(N)]) | \leq \Vert F \Vert_\infty
2(2\alpha)^{n-1} / (n-1)! \leqno(2)
$$
$$
\text{Var} (Z[n,F, G(N) ] ) \leq \frac{(3(8 \alpha )^{n-1} + 65(8 \alpha)^{2n-2} )
\Vert F \Vert_\infty}{N} \; .
\leqno(3)
$$
\endproclaim
\noindent The variance is the quantity
$E((Z-E(Z))^2) = E(Z^2) - (E(Z))^2$. Part (3) of the above
theorem shows that $Z$ concentrates about its mean with $O(1/\sqrt{N})$
deviation.
\vskip .5cm
The rest of this section is devoted to the proof of Theorem 2.
We begin with some estimates for $L_N(x,y)$ and the related determinants.
\proclaim{Lemma 3}
For $x,y \in \Bbb R$,
$$
|L_N (x,y) | \leq \frac{2N}{\sigma} , \;
|L_{N,-} (x,y) | \leq N+1, \; \text{and} \; |L_{N,+} (x,y)| \leq N+1
$$
\endproclaim
\demo{Proof}
Obvious from the series representations (42), (48) of section 2
and the definitions of $L_{N,-}(x,y)$ and $L_{N,+} (x,y)$.
\enddemo
\proclaim{Lemma 4}
$L_N(x,y)$ is periodic in each variable with period $4 \pi$ and
$$
\int_0^{4\pi} \int_0 ^{4\pi} |L_N (x,y) |^2 dx dy = 4 \pi N
$$
\endproclaim
\demo{Proof}
Clear from (42)--(48) and Plancherel.
\enddemo
\proclaim{Lemma 5}
For $1 \leq n \leq N$ and $x \in \Bbb R^n$
$$
|\det {\hskip -.05cm \scriptstyle{_{n\times n}}} (L_N (x_i, x_j))| \leq (\frac{2N}{\sigma} )^n
$$
and
$$
| \det {\hskip -.05cm \scriptstyle{_{n \times n}}}
(L_{N,-} (x_i , x_j))| \leq (N + \frac{ \tau}{2})^n
$$
\endproclaim
\demo{Proof}
Interpret $L_N (x_i, x_j )$ as the standard Hermitian inner product
$\langle v(i), v(j) \rangle$ of vectors in $\Bbb C ^N$,
where $v(j)$ in $\Bbb C ^N$ is the vector
$$
\aligned
(1, e^{ix_j}, e^{2ix_j} , \ldots , e^{i(N-1)x_j} ) & \qquad \text{for} \; U(N), \\
\sqrt{2} (\sin x_j, \sin 2x_j , \ldots , \sin Nx_j) & \qquad\text{for} \; USP(2N),\\
\sqrt{2} (\sin x_j/2, \sin 3x_j/2 , \ldots ,
\sin (2N-1) x_j/2)) & \qquad\text{for} \; SO(2N+1),\\
(1, \sqrt{2} \cos x_j , \sqrt{2} \cos (2x_j) , \ldots , \sqrt{2}
\cos ((N-1) x_j)) &\qquad \text{for} \; SO(2N) \; .
\endaligned
$$
Now each $v(j)$ satisfies
$$
\Vert v(j) \Vert^2 \leq \frac{2N}{\sigma}
$$
so our assertion follows from the following (presumably) well known inequality.
\enddemo
\proclaim{Lemma 6}
Given $n\geq 1$ vectors $v (1) , \ldots , v(n)$ in a Hilbert space, we
have
$$
| \det {\hskip -.05cm \scriptstyle{_{n\times n}}} ( \langle v(i) , v(j) \rangle )| \leq \;
\underset j\to \prod \; \Vert v(j) \Vert^2
$$
\endproclaim
\demo{Proof}
The truth of the asserted inequality is invariant under scaling the vectors
$v(i)$ by positive real numbers $\beta_i$.
This allows us to reduce to the case where all the
$v(i)$'s have length 1 (if any has $\Vert v(i) \Vert = 0$,
the result is trivially true). In this case we must prove that
$$
| \det {\hskip -.05cm \scriptstyle{_{n \times n}}} ( \langle v(i) , v(j) \rangle ) | \leq 1 \; .
$$
If the vectors are linearly dependent, then the determinant clearly vanishes
and the result is true. So we may assume that the vectors are linearly
independent. Then the matrix $A= (\langle v(i), v(j) \rangle )$
is the matrix of a positive definite Hermitian form (namely the
Hilbert space inner product) on the $n$--dimensional space spanned by the
$v(i)$, expressed in that basis. Therefore, the $n$--eigenvalues
$\lambda _1 , \ldots , \lambda_n$ of $A$ are real and positive.
So $| \det A | = \overset n\to{\underset j=1\to \prod} \; \lambda_j$.
By the arithmetic--geometric mean inequality, we have
$$
\aligned
| \det {\hskip -.05cm \scriptstyle{_{n \times n}}}
( \langle v (i) , v(j) \rangle ) |^{1/n} & = ( \det A)^{1/n} \\
& = \left( \overset n\to {\underset j=1\to \prod} \; \lambda _j \right)^{1/n} \leq
\frac{1}{n} \sum_{j=1}^n \lambda_j \\
& = \frac{1}{n} \; \text{trace}\; (A) = \frac{1}{n}
\sum_{j=1}^n \langle v(j) , v(j) \rangle = 1 \; .
\endaligned
$$
This completes the proof of Lemma 6, and hence of Lemma 5.
\enddemo
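Lemma 6 is a Hadamard-type inequality for Gram determinants; as a quick numerical check (no part of the argument):

```python
import numpy as np

rng = np.random.default_rng(2)
n, dim = 5, 8
# n random vectors v(1), ..., v(n) in the Hilbert space C^dim
v = rng.standard_normal((n, dim)) + 1j * rng.standard_normal((n, dim))

A = v @ v.conj().T                  # Gram matrix <v(i), v(j)>
norms_sq = np.real(np.diag(A))      # ||v(j)||^2

# Lemma 6: |det(<v(i), v(j)>)| <= prod_j ||v(j)||^2
assert abs(np.linalg.det(A)) <= np.prod(norms_sq) + 1e-8
```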
\vskip .5cm
With these Lemmas we can now give the proof of part (2)
of Theorem 2. Clearly
we may assume that $n \leq N$, and that $\Vert F \Vert_\infty = 1$.
From Proposition 1
$$
\aligned
|E(Z[n,F,G(N)])| & \leq \frac{1}{N+ \lambda} \int_{[0, \sigma \pi ]^n}
|F((N+\lambda) x/\sigma \pi)| d\mu_{n,N}(x) \\
& \leq \frac{1}{n!} \left(\frac{2N}{\sigma} \right) ^n \frac{1}{N+ \lambda}
\int_{[0, \sigma \pi ]^n} | F((N+ \lambda ) x / \sigma \pi ) | \;
\underset j\to \prod \; \frac{dx_j}{\sigma \pi}\\
&\hskip 1.75in \text{by Lemma 3} \\
& \leq \frac{1}{n!} \left(\frac{2N}{\sigma} \right)^n
\frac{1}{N+\lambda} \int_{[0,1]^n} |
F((N+ \lambda ) x ) | \; \underset j\to \prod \; dx_j \\
& \leq \frac{1}{n!} \left(\frac{2N}{\sigma} \right)^n \frac{1}{(N+ \lambda)}
\text{Vol}( \Delta (n , \frac{\alpha}{N+ \lambda} ) \cap [0,1]^n ) \, ,
\endaligned
$$
where
$$
\Delta (n,\alpha) = \{ x \in \Bbb R ^n \, | \,
\max_{i,j} |x_i - x_j | \leq \alpha \} \; .
$$
Clearly part (2) will follow from this and the following Lemma.
\proclaim{Lemma 7}
For $n \geq 2$, $\alpha \geq 0$, we have
$$
\frac{1}{n!} \text{Vol} ( \Delta (n, \alpha ) \cap [0,1]^n) \leq
\frac{\alpha^{n-1}}{(n-1)! }\; .
$$
\endproclaim
\demo{Proof}
\noindent The region $\Delta (n, \alpha) \cap [0,1]^n$ is
stable under the symmetric group, so that the
left--hand side of the inequality to be proven is the volume of the region
$$
\align
0 \leq x_1 \leq x_2 \leq \cdots \leq x_n &\leq 1 \\
x_n - x_1 & \leq \alpha
\endalign
$$
This region is contained in
$$
\xalignat2
0 \leq x_1 \leq 1 , & \qquad 0 \leq x_1 \leq x_2 \leq \cdots \leq x_n,
\qquad x_n - x_1 \leq \alpha \;.
\endxalignat
$$
By the unimodular change of coordinates
$$
\align
&y_0 = x_1 \\
& y_i = x_{i+1 }- x_1 \qquad \text{for $i =1 , \ldots , n-1$} \; ,
\endalign
$$
the last region is the product
$$
[0,1] \times \{ y \in \Bbb R ^{n-1} \, ;
\qquad 0 \leq y_1 \leq y_2 \leq \cdots
\leq y_{n-1} \leq \alpha \}
$$
whose volume in $\Bbb R ^n$ is that of
$$
\{ y \in \Bbb R ^{n-1} \, ; \qquad 0 \leq y_1 \leq
\cdots \leq y_{n-1} \leq \alpha \} \; .
$$
This in turn is a fundamental domain for the action of
$\sum_{n-1}$ on the $\alpha$--cube $[0, \alpha ]^{n-1}$.
Thus it has volume $\alpha^{n-1}/(n-1)!$.
This completes the proof of Lemma 7 and hence of
part (2) of Theorem 2.
\enddemo
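Lemma 7 admits a direct Monte Carlo check. The sketch below (no part of the argument) estimates $\text{Vol}(\Delta(n,\alpha)\cap[0,1]^n)$ by sampling, verifies the bound of the Lemma, and compares the estimate with the closed form $n\alpha^{n-1}-(n-1)\alpha^n$ for the range of $n$ independent uniform variables (a standard fact, quoted here as an aside):

```python
import math
import numpy as np

rng = np.random.default_rng(3)
n, alpha, samples = 3, 0.3, 200000
pts = rng.uniform(0.0, 1.0, (samples, n))

# Vol(Delta(n, alpha) ∩ [0,1]^n) = P(max_i x_i - min_i x_i <= alpha)
hit = np.mean(pts.max(axis=1) - pts.min(axis=1) <= alpha)

lhs = hit / math.factorial(n)                      # (1/n!) Vol(...)
bound = alpha ** (n - 1) / math.factorial(n - 1)   # Lemma 7
assert lhs <= bound

# closed form for the range of n uniforms (standard fact)
assert abs(hit - (n * alpha ** (n - 1) - (n - 1) * alpha ** n)) < 0.01
```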
\noindent {\it Proof of Part 1 of Theorem 2:}
What underlies this analysis is the elementary limit
$$
\lim_{N \rightarrow \infty} \frac{1}{N} S_N (2 \pi x/N) =
\frac{\sin \pi x}{\pi x} \; .
\tag3.5
$$
From this, Lemma 5 for the case of $U(N)$ and the definition (3.3), we have
\proclaim{Lemma 8}
$$
| W_n (x_1 , \ldots , x_n)| \leq 1
$$
\endproclaim
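Numerically, the limit (3.5) is visible already for moderate $N$; the sketch below (no part of the argument) takes $S_N(\theta) = \sin(N\theta/2)/\sin(\theta/2)$, the kernel of section 2 (that definition lies outside this excerpt, so the normalization is an assumption here), and observes the $O(x^2/N^2)$ rate promised by Lemma 9:

```python
import numpy as np

def S(N, theta):
    # S_N(theta) = sin(N theta / 2) / sin(theta / 2)  (assumed form)
    return np.sin(N * theta / 2.0) / np.sin(theta / 2.0)

x = np.array([0.3, 1.0, 1.7, 2.4])
sine_kernel = np.sin(np.pi * x) / (np.pi * x)

for N in (10, 100, 1000):
    approx = S(N, 2.0 * np.pi * x / N) / N
    # the error decays like x^2 / N^2, cf. Lemma 9
    assert np.max(np.abs(approx - sine_kernel)) < 10.0 / N ** 2
```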
\noindent The reader may verify for himself the following
elementary inequalities:
\vskip .4cm
\noindent For $-1 \leq x \leq 1$,
$$
\bigg| \frac{\sin x}{x} -1 \bigg| \leq \frac{x^2}{5} \; .
\tag3.6
$$
\noindent For $-M \leq x \leq M \quad , \; M > 0$
$$
\bigg|1 - \frac{x}{M \sin (x/M)} \bigg| \leq \frac{x^2}{4M^2} \; .
\tag3.7
$$
\noindent For $-M \leq x \leq M \quad , \; M > 0$
$$
\bigg| \frac{\sin x}{x} - \frac{\sin x}{M \sin (x/M)} \bigg| \leq
\frac{x^2}{4M^2} \; .
\tag3.8
$$
\noindent For $-M \leq x \leq M \, , \; M > 0 \, , \quad \delta \in \Bbb R $
$$
\bigg|\frac{\sin x}{x} - \frac{\sin (1+ \delta ) x}{M \sin (x/M)} \bigg|
\leq | \delta | + \frac{(1+ |\delta| ) x^2}{4M^2} \; .
\tag 3.9
$$
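The inequalities (3.6)--(3.9) can be spot-checked on a grid (a sanity check, not a proof):

```python
import numpy as np

x = np.linspace(1e-3, 1.0, 2000)                 # (3.6) on (0, 1]
assert np.all(np.abs(np.sin(x) / x - 1.0) <= x ** 2 / 5.0)

M = 5.0
x = np.linspace(1e-3, M, 2000)                   # (3.7), (3.8) on (0, M]
assert np.all(np.abs(1.0 - x / (M * np.sin(x / M))) <= x ** 2 / (4.0 * M ** 2))
assert np.all(np.abs(np.sin(x) / x - np.sin(x) / (M * np.sin(x / M)))
              <= x ** 2 / (4.0 * M ** 2))

delta = 0.05                                     # (3.9)
lhs = np.abs(np.sin(x) / x - np.sin((1.0 + delta) * x) / (M * np.sin(x / M)))
assert np.all(lhs <= delta + (1.0 + delta) * x ** 2 / (4.0 * M ** 2))
```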
The above allow us to deduce
\proclaim{Lemma 9}
For the $G(N)$'s in Theorem 2:
\roster
\item If $|\pi x | \leq N$
$$
\bigg| \frac{\sin \pi x}{\pi x} - \frac{\sigma}{2(N + \lambda)}
S_{\rho N + \tau} \bigg( \frac{\sigma \pi x }{N+ \lambda} \bigg) \bigg|
\leq \frac{1}{2N} + \big(1 + \frac{1}{2N} \big) \frac{\pi^2 x^2}{4N^2}
$$
\item For any $x$
$$
\bigg| \frac{\sin \pi x}{\pi x} - \frac{\sigma}{2(N+ \lambda)}
S_{\rho N+\tau} \bigg( \frac{\sigma \pi x}{N + \lambda} \bigg) \bigg|
\leq \frac{1}{2N} + \frac{2 \pi ^2 x^2}{N^2} \; .
$$
\endroster
\endproclaim
\demo{Proof}
According to the definition of $S$ we have
$$
\frac{\sigma}{2(N+ \lambda )} S_{\rho N + \tau}
\big( \frac{\sigma \pi x}{N+ \lambda} \big)
= \frac{\sin(( \rho N + \tau ) \pi x / \rho (N+ \lambda ))}
{\rho (N+ \lambda ) \sin ( \pi x / \rho (N+ \lambda ))} \; .
$$
So (1) follows from (3.9) with
$$
\align
\delta & = (\rho N + \tau ) / \rho(N+ \lambda ) -1
= \frac{\tau - \rho \lambda}{\rho (N + \lambda )} \\
M & = \rho N+ \tau
\endalign
$$
and noting that $| \delta | \leq 1/2N$.
\vskip .5cm
\noindent If $| \pi x /N | > 1$, the second assertion is immediate
since the terms being subtracted are of absolute value at most
$1 + (1/2N)$, because $(\rho N + \tau ) /( \rho N + \rho \lambda) \leq 1 + (1 / 2N )$.
For $|\pi x | \leq N$, the second assertion follows from (1), since
$\big(1 + \frac{1}{2N}\big) \frac{\pi^2 x^2}{4N^2} \leq \frac{2 \pi^2 x^2}{N^2}$.
$$
\aligned
\text{Set} \qquad W_{n,N} (x_1 \, , \cdots , \; x_n) & =
\det {\hskip -.05cm \scriptstyle{_{n \times n}}}
\bigg( \frac{1}{N+ \lambda}
L_{N,-} \bigg( \frac{\sigma \pi x_i}{N+ \lambda} , \;
\frac{\sigma \pi x_j}{N+ \lambda} \bigg) \bigg) \\
& \\
&= \det {\hskip -.05cm \scriptstyle{_{n \times n}}}
\bigg(\frac{\sigma}{2(N + \lambda)} S_{\rho N+ \tau}
\bigg(\frac{\sigma \pi (x_i - x_j)}{N+ \lambda}
\bigg) \bigg)
\endaligned \tag3.10
$$
According to Lemma 5, we have
$$
|W_{n,N} (x_1 , \ldots , x_n) | \leq
\bigg(1+ \frac{1}{2N} \bigg)^n \leq 2^n
\tag3.11
$$
\enddemo
\proclaim{Lemma 10}
Let $n \geq 2 , \; N \geq 1$ and $ \alpha \geq 0$ and
$x \in \Bbb R ^n$ with $\max_{i,j} |x_i - x_j | \leq \alpha$.
Then
$$
|W_{n,N} (x_1 \, , \cdots , \; x_n ) -
W_n (x_1 , \ldots , x_n ) | \leq
n! \times n \times 2 ^{n-1} \times
\bigg( \frac{1}{2N} + \frac{2\pi ^2 \alpha ^2}
{N^2} \bigg) \; .
$$
\endproclaim
\demo{Proof}
The difference in question is the difference of
determinants of $n \times n$ matrices,
say $(a_{ij})$ and $(b_{ij})$, which satisfy
$$
|a_{ij} - b_{ij} | \leq \frac{1}{2N} +
\frac{2 \pi^2 \alpha ^2}{N^2} = \Delta
\tag3.12
$$
by Lemma 9, and
$$
|a_{ij} | \quad \text{and} \quad |b_{ij} | \leq 2 = t
\tag3.13
$$
by Lemma 3.
\vskip .5cm
\noindent So, if we expand the determinants as a sum over permutations,
we see that the contribution of the permutation $\sigma$ is
at most
$$
%\aligned
\left| \underset i\to \prod \; b_{i, \sigma (i)} -
\underset i\to \prod \; a_{i, \sigma (i)} \right|
= \left| \sum_{j=1}^n \; \left( \underset i < j\to \prod \;
b_{i, \sigma (i)} \big(b_{j , \sigma (j) } - a_{j, \sigma(j)} \big)\right)
\underset j< i\to \prod \; a_{i, \sigma (i)} \right|
\leq nt^{n-1} \Delta
%\endaligned
\tag3.14
$$
from which Lemma 10 follows.
\enddemo
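Summed over the $n!$ permutations, the estimate (3.14) gives the determinant perturbation bound $|\det(b_{ij}) - \det(a_{ij})| \leq n!\, n\, t^{n-1} \Delta$ of Lemma 10; a numerical test (no part of the argument):

```python
import math
import numpy as np

rng = np.random.default_rng(4)
n, t, Delta = 4, 2.0, 1e-3

B = rng.uniform(-t, t, (n, n))
# perturb entrywise by at most Delta, clipping to keep |a_ij| <= t
A = np.clip(B + rng.uniform(-Delta, Delta, (n, n)), -t, t)

# |a_ij|, |b_ij| <= t and |a_ij - b_ij| <= Delta, as in (3.12), (3.13)
bound = math.factorial(n) * n * t ** (n - 1) * Delta
assert abs(np.linalg.det(B) - np.linalg.det(A)) <= bound
```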
\vskip .5cm
We are ready to investigate the terms in part 1 of Theorem 2.
We have
$$
E(Z[n,F,G(N)] ) = \frac{1}{N+ \lambda} \quad
\int_{[0, \sigma \pi]^n} F\big( \frac{(N+\lambda) x}{\sigma \pi} \big)
d\mu_{n,N}(x)
$$
(by Proposition 1)
$$
= \frac{1}{n!(N + \lambda )} \quad \int_{[0, \sigma \pi ] ^n }
F \bigg(\frac{(N+ \lambda ) x}{\sigma \pi} \bigg)
\det {\hskip -.05cm \scriptstyle{_{n\times n}}}
(L_N (x_i , x_j)) \times \underset m\to \prod \;
\frac{dx_m}{\sigma \pi}
\tag3.15
$$
(by (53) of section 2).
\vskip .5cm
To analyze (3.15), we introduce
$$
\aligned
E_- (Z[n, F, G(N)]) & = \frac{1}{(N+ \lambda)} \quad
\int_{[0, \sigma \pi ]^n }
F\bigg( \frac{(N+ \lambda ) x}{\sigma \pi} \bigg)
d\mu_{-,n,N}(x)\\
\vspace{.03cm}
&= \frac{1}{n!(N+ \lambda)} \quad \int_{[0,\sigma \pi ]^n}
\quad F \bigg( \frac{(N+ \lambda ) x}{\sigma \pi} \bigg)
%\vspace{.07cm}
\; \det (L_{N, -} (x_i, x_j)) \;
\underset m\to \prod \; \frac{dx_m}{\sigma \pi} \; .
\endaligned \tag3.16
$$
\vskip .5cm
We will establish the following three Lemmas:
\proclaim{Lemma 11}
$$
|E(Z[n, F, G(N) ]) - E_- (Z[n,F, G(N) ])|
\leq \Vert F \Vert _\infty (8 \alpha)^{n-1}
10 \log N/N \; .
$$
\endproclaim
\proclaim{Lemma 12}
$$
\split
\left|E_- (Z[n,F,G(N)]) \; - \;
\int_{[0,\alpha]\scriptstyle{^{n-1} \text{(order)}}} \;
F(0,z) W_{n,N} (0,z) \overset n\to{\underset j=2\to \prod} \;
dz_j \right| \\
\\
\qquad\qquad\qquad\qquad\qquad\qquad \leq 2 (2 \alpha )^n \Vert F \Vert _\infty / ((n-1) !N)
\endsplit
$$
\endproclaim
\proclaim{Lemma 13}
$$
\split
\left| \, \int\limits_{[0, \alpha]\scriptstyle{^{n-1} \text{(order)}}}
F(0,z) W_{n,N} (0, z) \;
\overset n\to{\underset j=2\to \prod} d z_j - E (n,F, \text{univ}) \right| \\
\\
\qquad \qquad \qquad \leq n^2 (2 \alpha)^{n-1}
\bigg(\frac{1}{2N} + \frac{2\pi^2 \alpha^2}{N^2} \bigg)
\Vert F \Vert_\infty \; .
\endsplit
$$
\endproclaim
One checks that these three Lemmas and the triangle inequality imply
part 1 of Theorem 2. Now we have essentially proven Lemma 13, since
according to the definition (4) and Lemma 10, the difference in question is
at most
$$
\split
\int\limits_{[0, \alpha ]^{n-1}(\text{order})} \; n! n 2^{n-1}
\left( \frac{1}{2N} + \frac{2 \pi^2 \alpha ^2}{N^2} \right) \Vert F \Vert_\infty
\; \underset j\to \prod \; dz_j \\
\\
\qquad \qquad = n^2 (2 \alpha)^{n-1} \left( \frac{1}{2N} + \frac{2\pi^2 \alpha ^2}{N^2}
\right) \Vert F \Vert_\infty
\endsplit
$$
as needed.
\vskip .5cm
\demo{Proof of Lemma 11}
If $G(N)$ is $U(N)$ then $\mu_{n,N} = \mu_{ -, n, N}$ and there is
nothing to prove. In the other cases we have $\sigma = 1, \rho =2$.
We expand the $n\times n$ determinant of the $L_N(x,y)$ into $n!$ terms of the
type $\text{sgn} (\phi) \; \underset i\to \prod \; L_N (x_i , x_{\phi (i)} )$ for
$\phi$ in $\sum_n$. Writing
$$
L_N (x,y) = L_{N, -} (x,y) + L_{N,+} (x,y)
$$
we expand the $n$--fold product $\underset i\to \prod \; L_N (x_i, x_{\phi (i)} )$
into $2^n$ terms, corresponding to which factors we replace by
$L_{N,-}( x,y)$ and which by $L_{N,+}(x,y)$. The choice of all ``$-$''
gives the $n \times n$ determinant of the $L_{N,-}(x,y)$: the remaining
$(2^n -1) n!$ terms are individually to be regarded as error terms.
Thus the difference of the two integrals is a sum, with signs of
$(2^n -1) n!$ integrals of the form
$$
\frac{1}{(N+\lambda ) n!} \; \int_{[0,\pi]^n} \;
F \left(\frac{(N+ \lambda) x}{\pi} \right)
\left( \underset i\to \prod \; L_{N,\pm} (x_i, x_{\phi (i)})\right) \;
\underset j\to \prod
\; \frac{dx_j}{\pi}
$$
where in the product $\underset j\to \prod \; L_{N,\pm}(x_i, x_{\phi(i)})$
at least one $\pm$ is $+$. We choose one particular term
$L_{N,+}(x_{i_o} , x_{\phi(i_o)})$ which has the $+$, and use the trivial
estimate $ \big| L_{N,\pm} (x,y) \big| \leq N+1$ (Lemma 3),
to deal with the $n-1$ other terms. Thus, each of the $(2^n-1)n!$ integrals is
bounded in absolute value by one of the form
$$
\frac{1}{n!N} (N+1)^{n-1} \; \int_{[0, \pi ]^n} \;
\bigg| F \left( \frac{(N+ \lambda ) x}{\pi} \right) \bigg| \;
\bigg|L_{N,+} (x_{i_o}, x_{j_o}) \bigg| \; \underset j\to \prod \;
\frac{dx_j}{\pi}
$$
Now $\text{supp} (F) \leq \alpha$, so
$\big| F (\frac{(N+ \lambda) x}{\pi})\big|$ is supported in the region
$$
\Delta \left(n, \frac{\pi \alpha}{N} \right): \;
\max _{i,j} |x_i - x_j | \leq \frac{\pi \alpha}{N + \lambda} \leq \frac{\pi \alpha}{N} \; .
$$
Hence the above integral is, at most,
$$
\frac{1}{n!N} (N+1)^{n-1} \Vert F \Vert_\infty \;
\int\limits _{\scriptstyle{[0, \pi]^n \cap
\Delta (n, \frac{\pi \alpha}{N} )}} \;
|L_{N,+} (x_{i_o} , x_{j_o})| \; \underset j\to \prod \;
\frac{dx_j}{\pi} \; .
$$
Renumbering we may assume $i_o = 1$ and $j_o = 1 $ or $2$.
We pass to coordinates
$$
\align
x_1 & \\
\xi_j& = x_j - x_1 \quad \text{for} \quad j=2 ,\ldots, n\;.
\endalign
$$
In these coordinates each $|\xi_j| \leq \frac{\pi \alpha}{N}$,
and so the domain of integration lies in the product region
$$
[0, \pi ] \times \left[ \frac{- \pi \alpha}{N} ,
\frac{\pi \alpha}{N} \right]^{n-1} \; .
$$
Thus
$$
\align
\int\limits_{[0, \pi]^n \cap \Delta (n, \frac{\pi \alpha}{N} )}
\; \left| L_{N,+} (x_1, x_{j_o} ) \right| \; \prod \; \frac{dx_j}{\pi}
&\leq \int\limits_{[0, \pi] \times [ \frac{- \pi \alpha}{N}\,, \frac{\pi \alpha}{N}]^{n-1}}
\left| L_{N,+} (x_1, x_{j_o} ) \right| \frac{dx_1}{\pi} \;
\underset j\geq 2\to \prod \; \frac{d \xi_j}{\pi} \\
\\
&= \int\limits_{[0,\pi] \times [\frac{- \pi \alpha}{N} , \; \frac{\pi \alpha}{N} ]^{n-1}}
\left| S_{2N+\tau} (2x_1 + \xi_{j_o}) \right| \;
\frac{dx_1}{\pi} \; \underset j\geq 2\to \prod \; \frac{d\xi_j}{\pi} \; .
\endalign
$$
If $j_o =1$, the integrand is a function of $x_1$ alone, and this is
$$
\align
&= \left( \frac{2 \alpha}{N} \right) ^{n-1} \; \int_{[0, \pi]}
\left| S_{2N+ \tau} (2x_1) \right| \frac{dx_1}{2 \pi} \\
\\
&= \left( \frac{2\alpha}{N} \right) ^{n-1}
\int_{[0,\pi]} \left| \frac{\sin (2N +\tau)x}{\sin x} \right| \frac{dx}{2\pi} \; .
\endalign
$$
If $j_o = 2$, the integrand is a function of $x_1$ and $\xi_2$ alone, and the integral is
$$
\align
&\left(\frac{2\alpha}{N}\right)^{n-2}
\int\limits_{[\frac{- \pi \alpha}{N} \, , \frac{\pi \alpha} {N}]}
\; \int_{[0, \pi]}
\left| S_{2N+\tau} (2x + \xi) \right| \; \frac{dx}{2 \pi} \, \frac{d\xi}{\pi}\\
\\
&\qquad \leq \left(\frac{2 \alpha}{N} \right)^{n-1}
\sup_{|\xi| \leq \frac{\alpha \pi}{N}}
\int_{[0, \pi]} \left| S_{2N+\tau} (2x+ \xi) \right| \, \frac{dx}{2\pi} \\
\\
&\qquad \leq \left(\frac{2 \alpha}{N} \right)^{n-1}
\sup_{|\xi| \leq \frac{\alpha \pi}{N}} \; \int_{[0, \pi ]}
\left| \frac{\sin(2N+\tau)(x+ \xi/2)}{\sin (x+ \xi /2)} \right| \; \frac{dx}{2 \pi} \; .
\endalign
$$
In either case $(j_o =1$ or $2$) we apply the well known $L^1$--norm bound for the
Dirichlet kernel [ \ \ \ ]
$$
\int_0^\pi \left| \frac{\sin N(x+y)}{\sin(x+y)} \right|
\; \frac{dx}{2\pi} \leq 2 + \log (N+1)
\tag3.17
$$
to conclude that
$$
\align
& \int_{[0,\pi]^n \cap \Delta (n, \frac{ \pi \alpha}{N} )}
\left| L_{N,+} (x_{i_o} , \, x_{j_o}) \right| \; \underset j\to \prod \; \frac{dx_j}{\pi} \\
\\
& \qquad\qquad \leq \left( \frac{2 \alpha}{N} \right) ^{n-1} \left( 2 + \log (2N+ \tau -1) \right) \\
\\
& \qquad\qquad\leq \left( \frac{2 \alpha}{N} \right) ^{n-1} \; 5 \log N \; .
\endalign
$$
\vskip .5cm
This, and the above discussion, concludes the proof of Lemma 11.
\enddemo
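The $L^1$ bound (3.17) on the Dirichlet kernel (taken with $y = 0$) can be checked numerically for a few values of $N$ (no part of the argument):

```python
import numpy as np

def l1_dirichlet(N, M=400000):
    # composite midpoint rule for (1/2pi) int_0^pi |sin(Nx)/sin(x)| dx;
    # midpoints avoid the endpoints, where sin(x) vanishes
    h = np.pi / M
    x = (np.arange(M) + 0.5) * h
    return np.sum(np.abs(np.sin(N * x) / np.sin(x))) * h / (2.0 * np.pi)

for N in (5, 20, 100):
    assert l1_dirichlet(N) <= 2.0 + np.log(N + 1.0)   # (3.17)
```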
\vskip .5cm
We turn to Lemma 12.
\demo{Proof of Lemma 12}
Recall the definition of $E_- (Z[n, F, G(N)])$ in (3.16). If we make the change of
variable
$$
y = \frac{(N+ \lambda ) x}{\sigma \pi}
\tag3.18
$$
we get
$$
E_{-} (Z[n,F,G(N)]) = \frac{1}{n!(N+\lambda)}
\; \int_{[0,N+\lambda]^{n}}
F(y) W_{n,N} (y) \; \underset j\to \prod \; dy_j \; .
\tag3.19
$$
\vskip .2cm
\noindent A key point is that $W_{n,N}(y)$ is itself in
$\Cal T (n)$, i.e. it is both $\sum_n$--invariant
(being of the form $\det {\hskip -.05cm \scriptstyle{_{n \times n}}} (f(y_i , y_j))$)
and invariant by additive translations by all $\Delta_n (t) = (t , \ldots , t)$
(because $f(x,y)$ is of the form $g (x-y)$). The
$\sum_n$--invariance allows us to write $E_- (Z[n, F, G(N)])$ as
$$
\frac{1}{N+\lambda} \; \int\limits_{[0,N+\lambda]^n (\text{order})}
\; F(y) W_{n,N} (y) \; \underset j\to \prod \; dy_j \;.
$$
\vskip .5cm
\noindent Since $F(y) W_{n,N}(y) \in \Cal T (n)$ we have
$$
F(y) W_{n,N} (y) = F(0,y_2-y_1 , \ldots , y_n-y_1)
W_{n,N} (0, y_2 - y_1 , \ldots , y_n - y_1 ) \;.
$$
\vskip .5cm
\noindent Make the further change of variables
$$
\align
&t = y_1 \\
&z_i = y_i - y_1 \qquad , \quad i = 2 , \ldots , n \; .
\endalign
$$
The integral for $E_-$ becomes
$$
\align
&\frac{1}{N+ \lambda} \; \int_0^{N+\lambda}
\int\limits_{[0,N+\lambda - t]^{n-1} \text{(order)}}
F(0,z) W_{n,N} (0, z) \; \overset n\to{\underset j=2\to \prod} \;
dz_j \, dt \\
\\
&\qquad \qquad \qquad \qquad = \frac{1}{N+ \lambda} \int_0^{N+ \lambda} g(t) dt
\endalign
$$
with
$$
g(t) = \int\limits_{[0,N + \lambda -t]^{n-1} (\text{order})}
F(0,z) W_{n,N} (0,z) \; \overset n\to{\underset j=2\to \prod} \; dz_j \; .
$$
Now $F$ has $\text{supp}\; (F) \leq \alpha $, so for
$z \in [0, N+ \lambda -t ]^{n-1} (\text{order})$ the function
$F(0, z)$ vanishes unless $z_n \leq \alpha$. So we may write the integral
for $g(t)$ as
$$
g(t) = \int\limits_{[0, \min (\alpha , N+\lambda - t)]^{n-1} (\text{order})}
F(0, z) W_{n,N} (0, z) dz_2 \cdots dz_n \; .
$$
Thus for $t \in [0, N+\lambda ]$
$$
\aligned
\left| g(t) \right| & \leq \int\limits_{[0, \alpha ]^{n-1} (\text{order})}
\left| F(0, z) W_{n,N} (0,z) \right| dz_2 \cdots dz_n \\
\\
&\leq \frac{\alpha^{n-1}}{(n-1)!} \left\Vert F \right\Vert _\infty
\left\Vert W_{n,N} \right\Vert _\infty \\
\\
& \leq \frac{2^n \alpha^{n-1}}{(n-1)!} \left\Vert F \right\Vert _\infty
\, , \quad \text{by} \; (3.11) \; .
\endaligned \tag3.20
$$
In particular note that
$$
\left| \int\limits_{[0,\alpha ]^{n-1} (\text{order})}
F(0,z) W_{n,N} (0,z) dz_2 \cdots dz_n \right|
\leq \frac{2^n \alpha^{n-1}}{(n-1)!} \left\Vert F \right\Vert _\infty \; .
\tag3.21
$$
\vskip .5cm
To make the estimate claimed in Lemma 12, assume first that $N+ \lambda > \alpha$:
then we have
\vskip .5cm
$$
\frac{1}{(N+\lambda)} \int_0^{N+\lambda} g(t) dt =
\frac{1}{N+ \lambda} \; \int_0^{N+\lambda - \alpha} g(t) dt +
\frac{1}{N+\lambda} \; \int_{N+\lambda - \alpha}^{N+ \lambda} g(t) dt \;
\tag3.22
$$
\vskip .5cm
For $t \in [0, N + \lambda - \alpha ]$,
$$
g(t) = \int\limits_{[0, \alpha]^{n-1} (\text{order})}
F(0,z) W_{n,N} (0,z) dz_2 \cdots dz_n
$$
\vskip .5cm
so that the first term of (3.22) is
$$
\aligned
&\frac{N+ \lambda - \alpha}{N+ \lambda} \; \int\limits_{[0,\alpha]^{n-1} (\text{order})}
F(0,z)W_{n,N}(0,z) dz_2 \cdots dz_n \\
\\
&\qquad \qquad \qquad = \left(1- \frac{\alpha}{N+\lambda} \right)
\; \int\limits_{[0,\alpha]^{n-1} (\text{order})}
F(0,z)W_{n,N} (0, z) dz_2 \cdots dz_n \; .
\endaligned \tag3.23
$$
\vskip .5cm
We may estimate the second term in (3.22), using (3.20), by
$$
\frac{1}{N+\lambda} \; \int_{N+\lambda - \alpha}^{N+ \lambda} \; \left| g(t) \right| dt
\leq \frac{\alpha}{N+ \lambda } 2^n
\frac{\alpha^{n-1}}{(n-1)!} \left\Vert F \right\Vert _\infty
\leq \frac{(2 \alpha )^n \left\Vert F \right\Vert_\infty}{N(n-1)!} \;.
$$
\vskip .5cm
\noindent Combining this with (3.23), we have
$$
\left| E_- (Z[n,F,G(N)]) - \int\limits_{[0,\alpha]^{n-1} (\text{order})}
F(0,z)W_{n,N} (0,z) dz_2 \cdots dz_n \right|
\leq \frac{2(2 \alpha )^n \Vert F \Vert _\infty}{(n-1)!N}
\tag3.24
$$
if $N + \lambda > \alpha$.
\vskip .5cm
\noindent If $N+ \lambda \leq \alpha$, then the above holds trivially, since
$$
\left| E_- (Z [n,F,G(N)])\right| \leq \left| \frac{1}{N+ \lambda} \;
\int_0^{N+\lambda} g(t) dt \right| \leq \frac{2^n \alpha^{n-1}}{(n-1)!}
\Vert F \Vert_\infty
$$
and we have already noted in (3.21) that the other term in Lemma 12 is
also this small. This concludes the proof of Lemma 12, and hence
the proof of part 1 of Theorem 2 as well.
\enddemo
\noindent {\it Proof of Part 3 of Theorem 2 (the variance):}
The variance of $Z[n,F,G(N)]$ may be expressed as
$$
\text{var} (Z) = E(Z^2[n,F,G(N)]) - E(Z[n,F,G(N)])^2 \; .
\tag3.25
$$
So we begin by analyzing the first term on the right hand side.
According to its definition (see equation (3.2)) we have
$$
(N+ \lambda )^2 Z^2 = \sum\Sb S\subset X(A) , \; T\subset X(A)\\
|S| = |T| = n\endSb
F\left( \frac{(N+ \lambda) S}{\sigma \pi} \right)
F \left( \frac{(N+ \lambda) T}{\sigma \pi}\right) \; .
\tag3.26
$$
We may write this as
$$
\sum\Sb S\subset \{ 1 , \ldots , N\} \\
T\subset \{ 1 , \ldots , N\} \\
|S| = |T| = n\endSb
F \left(\frac{(N+ \lambda) p r (S)(X)}{\sigma \pi} \right)
F \left( \frac{(N+ \lambda ) p r (T) (X)}{\sigma \pi} \right)
\tag3.27
$$
where $p r (S) : \Bbb {R}^N \rightarrow \Bbb {R}^n$ is projection onto
the $n$--coordinates defined by $S$.
We break the sum (3.26) according to the value of $C := T \cup S$.
For each subset $C$ of $\{1 , \ldots , N\}$ of cardinality $\ell$,
define
$$
h(C) = \sum\Sb T\cup S = C\\ |S| = |T| = n\endSb F \left(
\frac{(N+ \lambda) p r (S)(X)}{\sigma \pi} \right)
F\left( \frac{(N+\lambda) p r (T)(X)}{\sigma \pi} \right) \; .
\tag3.28
$$
So
$$
Z^2 = \frac{1}{N+\lambda} \sum_{n \leq \ell \leq 2n} \frac{1}{N+ \lambda}
\sum_{|C| = \ell} h(C) \; .
\tag3.29
$$
For each $\ell$ with $n \leq \ell \leq 2n$, it is tautological that the inner summand
$$
\frac{1}{N+\lambda} \sum_{|C| = \ell} h(C)
$$
is itself of the form $Z[\ell, H_\ell , G(N)](A)$
for $H_\ell$ the function on $\Bbb {R}^\ell$ defined by
$$
H_\ell (x_1 , \ldots , x_\ell ) = \sum_{\Sb C = T\cup S\\|T|=|S|=n \endSb}
F(p r (S)(X)) F(p r (T)(X)) \; .
\tag3.30
$$
Thus, we have
$$
Z[n,F,G(N)]^2 = \sum_{n\leq \ell \leq 2n}
\frac{1}{N+\lambda} Z[\ell, H_\ell, G(N)]
\tag3.31
$$
and hence
$$
E(Z[n,F,G(N)]^2 ) = \sum_{n \leq \ell \leq 2n} \frac{1}{N+\lambda}
E(Z[\ell, H_\ell , G(N)]) \; .
\tag3.32
$$
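The decomposition (3.29)--(3.32) silently uses the fact that grouping the pairs $(S,T)$ by the cardinality of $C = S\cup T$ accounts for every pair. As a quick numerical sanity check (an editorial aside in Python, not part of the original argument, anticipating the covering-pair count established in Lemma 14 below):

```python
from math import comb

# Every pair (S, T) of n-subsets of {1, ..., N} has |S ∪ T| = l for some
# n <= l <= 2n; for each of the comb(N, l) choices of C = S ∪ T there are
# comb(l, n) * comb(n, l - n) pairs with that union (the count of Lemma 14).
for N in range(1, 9):
    for n in range(1, N + 1):
        total = sum(comb(N, l) * comb(l, n) * comb(n, l - n)
                    for l in range(n, min(2 * n, N) + 1))
        assert total == comb(N, n) ** 2  # all pairs accounted for
```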
To make use of our analysis of expectations we need
\proclaim{Lemma 14}
$H_\ell$ lies in $\Cal T (\ell)$ and
$$
\Vert H_\ell \Vert_\infty \leq
\binom \ell n \binom n{\ell-n} \Vert F \Vert_\infty ^2 \; .
$$
Moreover, if $\ell < 2n$, $H_\ell$ lies in $\Cal T_o (\ell)$
and $\text{supp} \; (H_\ell) \leq 2 \alpha$.
\endproclaim
\demo{Proof}
That $H_\ell \in \Cal T (\ell)$ follows from $F$ being
in $\Cal T (n)$ and the definition of $H_\ell$ (eqn. (3.30)).
That is, the $\sum_\ell$-- and $\Delta (t)$--invariance are immediate consequences
of (3.30). Also, the bound for $\Vert H_\ell \Vert_\infty$ follows from
(3.30) by simply counting the number of possible $S$ and $T$'s
contributing to the sum defining $H_\ell$.
We first choose $T \subset \{1 , \ldots, \ell \}$ of cardinality $n$,
which can be done in $\binom \ell n$ ways. Having picked $T$,
$S$ may be any set of cardinality $n$ which contains
$C(T) = \{ 1 , \ldots , \ell \} -T$, a set of
cardinality $\ell -n$. So picking
$S$ amounts to picking $S-C(T)$ which is any subset of cardinality
$n-(\ell -n) = 2n - \ell$, of $T$.
That can be done in $\binom n {2n-\ell} = \binom n {\ell-n}$ ways.
So $\binom \ell n \binom n {\ell-n}$ gives the number of terms in the sum.
Suppose now that $\ell < 2n$; we must show that
$\text{supp} \; (H_\ell ) \leq 2 \alpha$.
In each of the terms $F(pr(T)(X)) F(pr(S)(X))$
the two sets $S$ and $T$ cannot be disjoint (since
$|S\cup T | < 2n$). So there is an index $i_o$ which lies in
both $S$ and $T$. Any index $j$ with $1 \leq j \leq \ell$
lies in either $S$ or $T$. If $j$ lies in $S$, then
$F(pr (S)(X))$ vanishes if $|x_{i_o} - x_j | > \alpha$
because $\text{supp} \; F \leq \alpha$. If $j$ lies in $T$,
then $F(pr (T)(X))$ vanishes if $|x_{i_o} - x_j | > \alpha$.
So the product vanishes if $|x_{i_o} - x_j | > \alpha$
for any $j$. By the triangle inequality,
$F(pr(T)(X)) F(pr(S)(X))$, and hence $H_\ell(X)$
itself, vanishes if there are any two indices $j$ and $k$ such
that $|x_k - x_j| > 2 \alpha$. That is, $\text{supp} \; H_\ell \leq 2 \alpha$.
\enddemo
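The counting argument in the proof of Lemma 14 is easy to confirm by brute force. The following Python snippet is a numerical sanity check, not part of the original text; the helper `count_covering_pairs` is ours:

```python
from itertools import combinations
from math import comb

def count_covering_pairs(l, n):
    # brute-force count of ordered pairs (S, T) of n-subsets of {1, ..., l}
    # whose union is all of {1, ..., l}
    ground = set(range(1, l + 1))
    subsets = list(combinations(sorted(ground), n))
    return sum(1 for S in subsets for T in subsets
               if set(S) | set(T) == ground)

# Lemma 14's count: choose T in comb(l, n) ways; S must then contain the
# (l - n)-element complement of T, leaving comb(n, 2n - l) = comb(n, l - n)
# further choices inside T.
for n in range(1, 5):
    for l in range(n, 2 * n + 1):
        assert count_covering_pairs(l, n) == comb(l, n) * comb(n, l - n)
```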
Combining Lemma 14 with part (2) of Theorem 2, we get
\proclaim{Proposition 15}
For $\ell < 2n$
$$
\left. \left| \frac{1}{N+\lambda} E(Z[\ell, H_\ell, G(N)])\right|
\leq \Vert F \Vert_\infty ^2 \binom \ell n \binom n {\ell- n}
2(2\alpha)^{\ell-1} \right/N(\ell-1)! \; .
$$
\endproclaim
From this, it is clear that the terms in (3.32) with $\ell < 2n$ will go to zero like
$1/N$, as $N \rightarrow \infty$. We turn to a detailed look at the
$\ell = 2n$ term,
$$
\frac{1}{N+ \lambda} E (Z[2n, H_{2n}, G(N)])\; .
\tag3.33
$$
\proclaim{Lemma 16}
$$
\left. \left| \frac{1}{N+\lambda } E (Z[2n,H_{2n}, G(N)]) - E (Z[n,F,G(N)])^2\right|
\leq 16 \, \Vert F \Vert_\infty ^2 \binom {2n} n (4 \alpha)^{2n-2} \right/ N \; .
$$
\endproclaim
\demo{Proof}
According to Proposition 1, we have
$$
\aligned
\frac{1}{N+\lambda} E( Z[2n, H_{2n} , G(N) ]) & =
\frac{1}{(N+\lambda)^2}
\int_{[0,\sigma \pi]^{2n}} H_{2n} \bigg(\frac{(N+ \lambda )x}{\sigma \pi} \bigg)
d\mu_{2n,N}(x)
\endaligned\tag3.34
$$
$$
= \frac{1}{(N+\lambda)^2} \sum_{ \Sb S\cup T = \{1,2 , \ldots , 2n\}\\
|S|= |T| = n\endSb} I(S,T)
\tag3.35
$$
where
$$
I(S,T) = \int_{[0,\sigma \pi]^{2n}} F \bigg(
\frac{(N+ \lambda ) pr(S)(x)}{\sigma \pi} \bigg)
F \bigg( \frac{(N+\lambda) p r(T)(x)}{\sigma \pi} \bigg) d \mu _{2n,N} (x) \; .
\tag3.36
$$
For brevity of notation let
$$
\aligned
x(T) & = pr(T) (x) \\
f(x) & \left. = F((N+ \lambda ) x \right/ \sigma \pi) \; .
\endaligned\tag3.37
$$
So that in this notation
$$
I(S,T) = \int_{[0,\sigma \pi ]^{2n}}
f(x(T)) f(x(S))d\mu_{2n,N} (x) \;.
\tag3.38
$$
Because the measure $\mu_{2n,N}$ is $\sum_{2n}$--invariant
and $\sum_{2n}$ acts transitively on the set of partitions
$(S,T)$ of $\{1 , \ldots , 2n \}$ into disjoint sets of
size $n$, we see that $I(S,T)$ is independent of $(S,T)$.
There are $\binom {2n} n$ such partitions.
Fixing one such partition, say $S= \{1,2 , \ldots , n\}$,
$T = \{n+1 , \ldots , 2n \}$, we have
$$
\frac{1}{N+\lambda} E(Z[2n, H_{2n} , G(N)]) =
\binom {2n} n \frac{1}{(N+ \lambda )^2} I(S,T) \; .
\tag3.39
$$
Recall that the measure $\mu_{2n,N}$ is of the form
(\S 2, eqn (53))
$$
\frac{1}{(2n)!} \det {\hskip -.05cm \scriptstyle{_{2n\times 2n}}}
(L_N (x_i , x_j )) \; \underset m\to \prod \; \frac{dx_m}{\sigma \pi} \; .
$$
Denote by
$$
D_{2n,N} (x) = \det {\hskip -.05cm \scriptstyle{_{2n\times 2n}}}
(L_N (x_i, x_j )) \; .
\tag3.40
$$
Then from (3.39)
$$
\multline
\shoveright{\frac{1}{N+\lambda} E(Z[2n, H_{2n} , G(N)])}\\
\qquad = \frac{1}{(n!)^2(N+ \lambda )^2} \;
\int_{[0, \sigma \pi]^{2n}} f(x(T))f(x(S)) D_{2n,N} (x)
\frac{dx_1}{\sigma \pi} \cdots \frac{dx_{2n}}{\sigma \pi}
\endmultline
\tag3.41
$$
\vskip .4cm
Now expand $D_{2n,N} (x)$ as a sum of $(2n)!$ terms of the form
$$
\text{sgn} \; (\phi) \; \underset i\to \prod \; L_N (x_i, x_{\phi(i)} )
$$
indexed by $\phi \in \sum_{2n}$. We distinguish two sorts of
$\phi$, those which respect the chosen partition $(S,T)$, and
those which do not. We group the corresponding terms
$$
D_{\text{resp},2n,N} (x) = \sum\Sb \phi \\ \text{respects} \; (S,T) \endSb
\text{sgn} (\phi) \; \underset i\to \prod \; L_N (x_i, x_{\phi (i) } )
$$
and
$$
D_{\text{noresp},2n,N} (x) = \sum\Sb \phi \\
\text{noresp} \; (S,T) \endSb
\text{sgn} (\phi) \; \underset i\to \prod \; L_N(x_i, x_{\phi(i)} ) .
$$
\vskip .3cm
\noindent Correspondingly the r.h.s. of (3.41) splits into a sum of two terms.
\enddemo
\vskip .2cm
It is tautological that we have the product decomposition
$$
D_{\text{resp},2n,N} (x) = D_{n,N} (x(S)) D_{n,N} (x(T))
$$
so that the first term in (3.41)
$$
\frac{1}{(n!(N+ \lambda ))^2} \; \int_{[0,\sigma \pi ]^{2n}}
f(x(T)) f(x(S)) D_{\text{resp}, 2n, N} (x) \;
\overset 2n\to {\underset m=1\to \prod} \; \frac{dx_m}{\sigma \pi}
$$
is the square of
$$
\frac{1}{n!(N+ \lambda)} \; \int_{[0, \sigma \pi ]^n}
f(x) D_{n,N} (x) \overset n\to {\underset m=1\to \prod} \; \frac{dx_m}{\sigma \pi}
= E(Z[n,F,G(N)]) \; .
$$
\vskip .2cm
\noindent It follows that
$$
\multline
\frac{1}{N+\lambda} E(Z[2n, H_{2n}, G(N)]) - E(Z[n,F,G(N)])^2\\
= \frac{1}{(n!(N+\lambda))^2} \; \int_{[0,\sigma \pi ]^{2n}}
f(x(T))f(x(S)) D_{\text{noresp}, 2n,N} (x) \;
\overset 2n\to{\underset i=1\to \prod } \;
\frac{dx_i}{\sigma \pi}
\endmultline
\tag3.42
$$
\vskip .2cm
\noindent which will serve as our basis for proving Lemma 16.
Clearly, we must show that the absolute value of the r.h.s. of (3.42)
is at most the r.h.s. of the inequality in the Lemma.
For this it suffices to show that for each $\phi \in \sum_{2n}$
which does not respect $(S,T)$, of which there are at most $(2n)!$, we have
\vskip .2cm
$$
\multline
\left| \frac{1}{(n!(N+\lambda ))^2} \;
\int_{[0,\sigma \pi ]^{2n}}
f(x(T))f(x(S)) \underset i\to \prod \;
L_N (x_i, x_{\phi (i)})\; \underset m\to \prod \;
\frac{dx_m}{\sigma \pi} \right| \\
\leq \Vert F \Vert _\infty ^2 \cdot 16\cdot \left. (4\alpha)^{2n-2}
\right/ N(n!)^2
\endmultline
\tag3.43
$$
\vskip .2cm
\noindent To establish (3.43), look at the cycle decomposition of $\phi$.
At least one of its cycles contains elements of both $S$ and of $T$.
Renumbering, we may suppose that $(S,T)$ is
$(\{1,2 , \ldots , n \} , \{ n+1 , \ldots, 2n\})$,
that $1$ and $n+1$ are in the same $\phi$--orbit,
and that $\phi(1) = n+1$.
Because the forward $\phi$--orbit of $n+1$ does
not stay in $T$, there is some $n+a >n$ and some $b\leq n$
for which $\phi(n+a) = b$. We pay special
attention to the two factors $L_N(x_1, x_{n+1})$, $L_N(x_{n+a} , x_b )$
and use the trivial bound $|L_N (x_i , x_{\phi(i)}) | \leq 2N$
(Lemma 3) for the other $2n-2$ factors. This yields that the l.h.s. of (3.43) satisfies
$$
\multline
|LHS| \leq \frac{(2N)^{2n-2}}{(n!(N+\lambda ))^2} \;
\int_{[0,\sigma \pi]^{2n}} |f(x(T)) f(x(S))| \\
\cdot \left|L_N( x_1, x_{n+1}) L_N (x_{n+a}, x_b )\right|
\; \underset i\to \prod \; \frac{dx_i}{\sigma \pi} \; .
\endmultline
\tag3.44
$$
Change variables to
$$
\xalignat2
s&= x_1 \\
\xi_j &= x_j - x_1 \qquad \qquad \; j=2 , \ldots , n \\
t & = x_{n+1} \\
\eta_{j} &= x_{n+j} - x_{n+1} \qquad j=2 , \ldots , n
\endxalignat
$$
and put $\xi_1 = \eta_1 = 0$.
\vskip .5cm
Inasmuch as $\text{supp}\; F \leq \alpha$, the function
$f(x(S))$ is supported in the region
$\max_{i} | \xi _i | \leq \frac{\sigma \pi \alpha}{N}$,
and similarly $f(x(T))$ is supported in
$\max_{i} |\eta _i| \leq \frac{\sigma \pi \alpha}{N}$.
Hence
$$
\multline
\int_{[0,\sigma \pi]^{2n}} \left|f(x(T)) f(x(S)) \right|
\left| L_N (x_1, x_{n+1}) L_N (x_{n+a} , x_b) \right| \;
\prod \;\frac{dx_i}{\sigma \pi} \\
\leq \Vert F \Vert_\infty ^2 \int_{[\frac{-\sigma \pi\alpha}{N} ,
\frac{\sigma\pi\alpha}{N}]^{2n-2}}
Int_2 (\xi , \eta) \; \prod \; \frac{d\xi_i}{\sigma \pi} \;
\prod \; \frac{d\eta_j}{\sigma \pi}
\endmultline
\tag3.45
$$
where
$$
Int_{2} (\xi, \eta) = \int_{[0, \pi \sigma]^2}
\left|L_N (s,t) L_N (t+\eta_a , s+ \xi _b) \right| \,\frac{ds dt}{(\sigma \pi)^2} \; .
\tag3.46
$$
It follows that
$$
\multline
\int_{[0, \sigma \pi]^{2n}} \left| f(x(T)) f(x(S)) \right|
\left| L_N(x_1, x_{n+1} ) L_N (x_{n+a}, x_b)\right | \;
\prod \; \frac{dx_i}{\sigma \pi} \\
\leq \Vert F \Vert_\infty ^2 \left( \frac{2\alpha}{N}\right)^{2n-2}
\max_{\xi , \eta} \left| Int_2 (\xi,\eta) \right| \; .
\endmultline
\tag3.47
$$
But
$$
\left| Int_2 (\xi , \eta ) \right| \leq 16 \int_{[0,4\pi]^2}
\left| L_N(s,t) L_N (t+ \eta_a , s+ \xi_b) \right|
\frac{ds}{4\pi} \frac{dt}{4\pi}
$$
which by Cauchy--Schwarz is
$$
\leq 16 \Vert L_N (s,t) \Vert_2 \Vert L_N(t+ \eta_a , s+ \xi _b) \Vert_2
$$
the $L_2$--norms being over $[0, 4\pi]^2$ w.r.t.
$\frac{dsdt}{16\pi^2}$.
By Parseval applied to the definitions of $L_N (x,y)$ in \S 2, we have
$$
\Vert L_N(s,t)\Vert_2 = \Vert L_N (t+ \eta_a , s + \xi_b) \Vert_2 = \sqrt{N} \; .
$$
Hence
$$
\left| Int_2 (\xi , \eta ) \right| \leq 16N \; .
$$
Applying this bound in (3.47), and the resulting estimate in (3.44),
yields (3.43), and hence the Lemma.
\vskip .5cm
We can now complete the proof of the variance bound in Theorem 2.
Using (3.32), Proposition 15, and Lemma 16, we have
$$
\left| \text{var} (Z[n, F,G(N)]) \right|
\leq \frac{\Vert F \Vert_\infty ^2}{N} \left( \sum_{\ell= n}^{2n-1}
\left. \binom {\ell} n \binom n {\ell - n} 2(2\alpha)^{\ell-1} \right/ (\ell -1) !
+ 16 \binom {2n} n (4 \alpha )^{2n-2} \right) \; .
$$
Now one checks that in the above ranges
$$
\frac{\binom \ell n \binom n {\ell -n} }{(\ell -1) !} \leq 3 \; ,
$$
also $ \binom {2n} n \leq 2^{2n} \, $, so
$$
\aligned
\left| \text{var} (Z[n, F, G(N)])\right| & \leq
\frac{\Vert F \Vert_\infty ^2}{N}
\left( 6 \sum_{\ell =n}^{2n-1} (2\alpha )^{\ell -1} +
2^{2n} 16(4 \alpha)^{2n-2} \right) \\
&\leq \frac{\Vert F \Vert_\infty ^2}{N} \left( 6n\left((2\alpha)^{n-1}
+(2\alpha)^{2n-2} \right) + 2^{2n} \cdot 16 (4\alpha)^{2n-2} \right)\\
& \leq \left(3 (8\alpha)^{n-1} + 65 (8\alpha)^{2n-2} \right)
\frac{\Vert F\Vert_\infty ^2}{N} \; .
\endaligned
$$
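The two elementary binomial bounds invoked in the last step can be spot-checked numerically; the short Python check below (an editorial aside, not part of the original text) verifies the ratio bound with $(\ell -1)!$ in the denominator, whose maximum value $3$ is attained at $n=2$, $\ell = 3$:

```python
from math import comb, factorial

# For n <= l <= 2n - 1, comb(l, n) * comb(n, l - n) <= 3 * (l - 1)!,
# and the central binomial coefficient satisfies comb(2n, n) <= 4 ** n.
for n in range(1, 30):
    assert comb(2 * n, n) <= 4 ** n
    for l in range(n, 2 * n):
        assert comb(l, n) * comb(n, l - n) <= 3 * factorial(l - 1)

# equality case of the first bound: n = 2, l = 3 gives 3 * 2 = 6 = 3 * 2!
assert comb(3, 2) * comb(2, 1) == 3 * factorial(2)
```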
This completes the proof of Theorem 2, and with it, this section.
%denote quantities below. \newline
%\centerline{Table 1}
%$$
%\Tabskip= .9 em plus .5em
%\vBox{\halign{&\hfil#\hfil\cr
%$G(n)$ & $\lambda$ & $\sigma$ & $\rho$ & $\tau$ & $\epsilon$\cr
%\noaLign{\smallskip}
%$U(N)$ & 0 & 2 & 1 & \hfil$\; \; \; 0 $ \hfil & \hfil $\; \; \; 0$ \hfil\cr
%$USP(2n)$ & 0 & 1 & 2 & \hfil $\; \; \; 1 $ \hfil & \hfil$-1$ \hfil\cr
%$SO(2N+1)$ & $1/2$ & 1 & 2 & \hfil$\; \; \; 0 $ \hfil & \hfil $-1$ \hfil\cr
%%%$SO(2N)$ & 0 & 1 & 2 \hfil & \hfil $-1$ \hfil \hfil \hfil & \hfil \hfil $ \;\; \;1$ \hfil\cr
%$O_-(2N+2)$ & 1 & 1 & 2 \hfil & \hfil$ \; \; \; 1$ \hfil & \hfil $-1$ \hfil \cr}}
%$$
%The measure $\mu_{n,N}$ on $[0, \sigma \pi]^n$ is
\head{4. Proofs of Theorems 0.1 and 0.2}\endhead
Armed with the results of Section 3 we will prove the above two
Theorems in this Section. Actually, in order to fully establish Theorem 0.2,
we will need some sharp estimates on the sizes of the tails of the distributions
$\mu_a$ (univ). The verification of these estimates is left for Section 5.
\vskip .5cm
We begin with some combinatorics.
Let $X = (x_1 , x_2 , \ldots , x_N ) \in \Bbb R ^N$
be ordered, that is $x_1 \leq x_2 \leq x_3 \cdots \leq x_N$. We view these
as $N$--points on a line (with possible multiplicities).
For $s \geq 0$ the number of adjacent, or $0$--separated, pairs of
$x_j$'s in $X$ which are at most distance $s$ apart will be
denoted by $S_0 (s, x_1 , \ldots , x_N)$ or $S_0 (s, X)$
or simply $S_0(s)$ if $X$ is understood. That is
$$
S_0 (s, X) = \vert \{1 \leq j \leq N-1 \colon x_{j+1} - x_j
\leq s \} \vert \; .
\tag4.1
$$
More generally, if $b \geq 0$ is an integer, the number of $b$--separated
pairs which are at most $s$ apart is denoted by
$$
S_b (s, X) = \vert \{ 1 \leq j \leq N-b-1 \colon x_{j+b+1} - x_j
\leq s \} \vert \; .
\tag4.2
$$
\vskip .5cm
The naive spacing measures $\mu_a (A, \; \text{naive})$
may be expressed in terms of the functions $S_b$.
For $A\in G(N)$ and $X(A)$ its vector (ordered) of
eigenvalue angles in $[0, \sigma \pi ]^N$ (see Section 2),
we have on interpreting the definitions
$$
\mu_a (A, \; \text{naive} , \; G(N) ) [0,s] =
\frac{1}{N} S_{a-1} \left(s, \frac{N}{\sigma \pi} X(A) \right) \; .
\tag4.3
$$
In order to study $S_b$ further, we express these functions in
terms of the correlation functions $Z$.
Let $C_b (s, X)$ denote the number of subsets of
$\{x_1 , \ldots , x_N \}$ of cardinality $b+2$
whose extreme points are a distance at most $s$ apart. That is
$$
\hskip -1.4in C_b (s,X) = \vert \{ 1 \leq j_1 < j_2 \cdots < j_{b+2} \leq N \vert
x_{j_{b+2}} - x_{j_1} \leq s \} \vert
\tag4.4
$$
$$
\aligned
\hskip 0.68in = \vert \{ B \subset \{ 1 , \ldots , N \} \vert
B= \{ j_1 , \ldots , j_{b+2} \} ,
\max\limits_{j_r , j_s \in B} \vert x_{j_r} - x_{j_s} \vert
\leq s \} \vert \; .
\endaligned\tag4.5
$$
Both $C_b$ and $S_b$ vanish for $b \gg 0$
(in fact $b> N-2$). There is a simple relation between the
$C$'s and $S$'s.
\proclaim{Lemma 1}
$$
C_b (s) = \sum_{t \geq b} \binom tb S_t(s) \; .
$$
\endproclaim
\demo{Proof}
For each $t \geq b$ consider all pairs $1 \leq j_1 < j_{b+2} \leq N$
with $j_{b+2} - j_1 - 1 = t$ and $x_{j_{b+2}} - x_{j_1} \leq s$.
The two end points $j_1 , j_{b+2}$ contribute one unit to
$S_t(s)$. On the other hand, for each subset $B$ of
$j_1 + 1, \; j_1 +2 \, , \cdots , \; j_1 + t(=j_{b+2} - 1)$, of size $b$
we get a tuple $(j_1 , B, j_{b+2})$ which contributes one unit to
$C_b (s)$. There are $\binom tb$ such subsets and as we vary over all
$t \geq b$, we get precisely the entire contribution to $C_b(s)$.
This establishes the Lemma. Note that the sum is finite $(t \leq N-2)$.
\enddemo
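Lemma 1 can be confirmed numerically on random configurations; in the Python snippet below (a sanity check, not part of the original proof) the helpers `S` and `C` transcribe the definitions (4.2) and (4.4):

```python
import random
from itertools import combinations
from math import comb

def S(b, s, X):
    # number of b-separated pairs of points of X at most s apart, as in (4.2)
    return sum(1 for j in range(len(X) - b - 1) if X[j + b + 1] - X[j] <= s)

def C(b, s, X):
    # number of (b+2)-subsets of X whose extreme points are at most s apart, (4.4)
    return sum(1 for idx in combinations(range(len(X)), b + 2)
               if X[idx[-1]] - X[idx[0]] <= s)

random.seed(1)
X = sorted(random.uniform(0, 10) for _ in range(12))
for s in (0.5, 1.5, 4.0, 100.0):
    for b in range(0, len(X) - 1):
        # Lemma 1: C_b(s) = sum over t >= b of comb(t, b) * S_t(s)
        assert C(b, s, X) == sum(comb(t, b) * S(t, s, X)
                                 for t in range(b, len(X) - 1))
```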
\proclaim{Corollary 2}
In the indeterminate $T$, we have the identities
$$
\sum_{b\geq 0} C_b (s) T^b = \sum_{b\geq 0} S_b (s) (1+T)^b
$$
and
$$
\sum_{b\geq 0} S_b (s) T^b = \sum_{b\geq 0} C_b (s) (T-1)^b \; .
$$
\endproclaim
\demo{Proof}
The two are of course trivially equivalent.
The first is equivalent to Lemma 1, as is clear if we expand
$(1+T)^b$ by the binomial theorem and collect terms.
\enddemo
If we equate like powers of $T$ in the second identity in Corollary 2, we get
\proclaim{Corollary 3}
$$
S_b(s) = \sum_{n \geq b} (-1)^{n-b} \binom nb C_n (s) \; .
$$
\endproclaim
On truncating the series above, we get inequalities for
$S_b(s)$ as the following Lemma shows.
\proclaim{Lemma 4}
For any integer $m \geq b$ we have
$$
S_b (s) \geq \sum_{n=b}^m (-1)^{n-b} \binom nb C_n (s) \qquad \text{if} \;m-b \; \; \text{is odd}
$$
and
$$
S_b (s) \leq \sum_{n=b}^m (-1)^{n-b} \binom nb C_n (s) \qquad \text{if} \;
\; m-b \; \text{is even} \; .
$$
\endproclaim
\demo{Proof}
Fix $m \geq b$. According to Corollary 3, these inequalities will
follow from
$$
(-1)^{m+1-b} \sum_{n\geq m+1} (-1)^{n-b} \binom nb C_n (s) \geq 0
$$
that is
$$
\sum_{n\geq m+1} (-1)^{n-m-1} \binom nb C_n(s) \geq 0 \; .
$$
Now according to Lemma 1, this is the same as
$$
\sum_{n\geq m+1} (-1)^{n-m-1} \binom nb \sum_{t \geq n} \binom tn
S_t(s) \geq 0
$$
that is
$$
\sum_{t\geq m+1} S_t(s) \sum_{n=m+1}^t (-1) ^{n-m-1} \binom tn \binom nb \geq 0 \; .
$$
Since $S_t(s) \geq 0$, it suffices to show that
$$
\sum_{n=m+1}^t (-1) ^{n-m-1} \binom tn \binom nb \geq 0 \; .
$$
For $0 \leq b \leq n \leq t$ we use the elementary identity
$$
\binom tn \binom nb = \binom tb \binom {t-b}{n-b}
$$
reducing us to showing that
$$
\binom tb \sum_{n=m+1}^t (-1)^{n-m-1} \binom {t-b}{n-b} \geq 0 \; .
$$
Setting $k = n-b$, $j= m+1-b$, $\ell = t-b$ we are left with showing that
$$
\sum_{k=j}^\ell (-1)^{k-j} \binom \ell k \geq 0 \; .
\tag4.6
$$
But
$$
\sum_{k=j}^{\ell} (-1)^{k-j} \binom \ell k = \binom {\ell-1}{j-1}
\tag4.7
$$
which establishes (4.6) and hence the Lemma.
\enddemo
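The alternating truncation inequalities of Lemma 4 (and, at the full truncation, the equality of Corollary 3) can likewise be tested on random point configurations. The following Python check is an editorial aside, with `S` and `C` again transcribing (4.2) and (4.4):

```python
import random
from itertools import combinations
from math import comb

def S(b, s, X):
    return sum(1 for j in range(len(X) - b - 1) if X[j + b + 1] - X[j] <= s)

def C(n, s, X):
    return sum(1 for idx in combinations(range(len(X)), n + 2)
               if X[idx[-1]] - X[idx[0]] <= s)

random.seed(0)
X = sorted(random.uniform(0, 10) for _ in range(10))
for s in (1.0, 2.5):
    for b in range(0, 4):
        for m in range(b, len(X) - 1):
            partial = sum((-1) ** (n - b) * comb(n, b) * C(n, s, X)
                          for n in range(b, m + 1))
            if (m - b) % 2 == 1:
                assert S(b, s, X) >= partial   # odd truncation: lower bound
            else:
                assert S(b, s, X) <= partial   # even truncation: upper bound
```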
To see (4.7) we use Pascal's triangle identity
$$
\binom \ell k = \binom {\ell -1} k + \binom {\ell -1}{k-1}
$$
to write the l.h.s. of (4.7) as
$$
\sum_{k=j}^{\ell} (-1)^{k-j} \binom {\ell-1} k + \sum_{k=j}^{\ell}
(-1)^{k-j} \binom {\ell-1}{k-1}
$$
which telescopes to $\binom {\ell-1}{j-1}$.
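The identity (4.7) is also easy to verify directly; a quick Python check (not part of the original text):

```python
from math import comb

# (4.7): the alternating partial sum of a binomial row telescopes to a
# single entry of the previous row.
for l in range(1, 13):
    for j in range(1, l + 1):
        lhs = sum((-1) ** (k - j) * comb(l, k) for k in range(j, l + 1))
        assert lhs == comb(l - 1, j - 1)
```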
\vskip .5cm
With this we are set to establish Theorem 0.1 (bis). From (4.3) and
Corollary 3, we have
$$
\mu_a (A, \; \text{naive}, \; G(N))[0,s] =
\sum_{n\geq a-1} (-1)^{n-a+1} \binom n {a-1}
\frac{C_n \left( s, \frac{NX(A)}{\sigma \pi} \right)}{N} \; .
\tag4.8
$$
According to the definitions of $C_n(s)$ in (4.5) and
of $Z [n,F,G(N)]$ in equation (3.2) of Section 3, we have
$$
\frac{1}{N} C_n (s, \frac{N}{\pi \sigma} X(A)) =
Z[n+2 , F_{n+2,s} , G(N)](A)
\tag4.9
$$
where
$$
\spreadlines{.3\jot}
F_{m,s} (x_1 , \ldots , x_m ) =
\left\{ \matrix
1 & \text{if} \; \max\limits_{j,k} |x_j - x_k | \leq s \\
0 & \text{otherwise}
\endmatrix \right. \; . \tag4.10
$$
Note that $F_{m,s} \in \Cal T_0 (m)$ and has
$$
\left.
\aligned
&\text{supp} \; F_{m, s} \leq s \qquad\\
& \Vert F_{m,s}\Vert_\infty = 1 \qquad
\endaligned \right\} \tag4.11
$$
So we may apply the results of Section 3 and, in particular,
Theorem 2 of that Section to $Z[n+2, F_{n+2,s} , G(N)](A)$.
Substituting (4.9) into (4.8) yields
$$
\mu_a (A, \; \text{naive},G(N))[0,s] = \sum_{n\geq a-1}
(-1)^{n-a+1} \binom n {a-1} Z [n+2, F_{n+2,s}, G(N)](A) \; .
\tag4.12
$$
\vskip .3cm
\noindent Taking expectations w.r.t. $A$ over $G(N)$ yields
$$
E(\mu_a (\text{naive}, G(N))[0,s]) =
\sum_{n \geq a-1} (-1)^{n-a+1} \binom n {a-1}
E(Z[n+2, F_{n+2, s} , G(N)]) \; .
\tag4.13
$$
\vskip .3cm
\noindent According to Theorem 2 of Section 3, we have
$$
|E(Z[m, F_{m,s} , G(N)])| \leq \frac{2(2s)^{m-1}}{(m-1)!}
\tag4.14
$$
and
$$
\lim_{N\rightarrow \infty}
E(Z[m,F_{m,s} , G(N)]) =
E(Z[m, F_{m,s}, \; \text{univ} ] ) \; .
\tag4.15
$$
\vskip .3cm
\noindent Hence for $s$ fixed we may apply the dominated convergence theorem
in (4.13) to conclude that
$$
\lim_{N\rightarrow \infty} E(\mu_a (\text{naive}, \; G(N))[0,s])
= \sum_{n\geq a-1} (-1)^{n-a+1} \binom n {a-1}
E(Z[n+2, F_{n+2,s}, \text{univ} ] )
\tag4.16
$$
which according to (3.4) of Section 3
$$
\split
\hskip -0.3in = \sum_{n\geq a-1} (-1)^{n-a+1} \binom n {a-1}
\frac{1}{(n+1)!} \quad \int\limits_{[0,s]^{n+1}}
F_{n+2,s} (0, x_2 , \ldots , x_{n+2} ) \cdot \\
\qquad\qquad\qquad\qquad\qquad\qquad W_{n+2} (0, x_2 ,\ldots , x_{n+2})
dx_2 \dots d x_{n+2}
\endsplit
\tag4.17
$$
$$
\align
&= \sum_{m\geq a} (-1)^{m-a} \binom {m-1}{a-1} \frac{1}{m!}
\quad \int\limits_{[0,s]^m} W_{m+1} (0, x_2 , \ldots ,
x_{m+1} ) dx_2 \cdots dx_{m+1} \tag4.18
\endalign
$$
$$
\align
\hskip -3.8in := H_a(s) \; . \tag4.19
\endalign
$$
\vskip .3cm
\noindent Now Lemma 8 of Section 3 asserts that
$\vert W_m (x_1 , \ldots , x_m ) \vert \leq 1$,
so we see (again) that the series (4.18) is absolutely and rapidly
convergent (for fixed $s$). In fact one easily concludes
that $H_a(s)$ is smooth in $s$. Moreover, since
$E(\mu_a (\text{naive}, G(N)))$
is a positive measure (of mass $\frac{N-a}{N}$), it follows that
$H_a(s)$ is nondecreasing as a function of $s \geq 0$. Hence
$$
d \mu_a(s) = d \mu_a (\text{universal} )(s)
= \frac{dH_a}{ds}(s) \cdot ds
\tag4.20
$$
defines an absolutely continuous measure on $\Bbb R_{\geq 0}$.
It is in fact a probability measure (i.e. has total mass equal to 1).
The reason is that, from the definition of $\mu_a (A, \text{naive}, G(N))$,
we see that its mean is at most $a$. Hence
$E(\mu_a (\text{naive} , G(N)))$ has mean at most $a$.
The total mass of $E(\mu_a ( \text{naive}, G(N)))$
is $\frac{N-a}{N}$. Thus, for any $T> 0$
$$
E(\mu_a (\text{naive}, G(N)))(T, \infty ) \leq \frac{a}{T} \; ,
\qquad \text{and hence} \qquad
\mu_a (T, \infty ) \leq \frac{a}{T} \; .
\tag4.21
$$
Letting $T \rightarrow \infty$ shows that $\mu_a$ has total mass 1.
Moreover, by (4.16)--(4.20), for each fixed $s \geq 0$,
$$
\lim_{N \rightarrow \infty} E(\mu_a (\text{naive}, G(N))[0,s])
= \mu_a [0,s] \; .
\tag4.22
$$
From this, (4.21) and an easy approximation argument it follows
that for any bounded continuous function $f$ on $\Bbb R _{\geq 0}$
$$
\lim_{N \rightarrow \infty} \int_0^\infty f(x) \, d(E(\mu_a
(\text{naive}, G(N))))(x) = \int_0^\infty f(x) d\mu_a (x) \; .
\tag4.23
$$
Either (4.22) or (4.23) expresses the content of Theorem 0.2 bis, and
in view of the comments at the end of Section 1,
we have established Theorem 0.1
(with $\mu_a$ given by (4.20)).
\vskip .5cm
We turn our attention now to Theorem 0.2, or rather its proof.
Fix $a \geq 1$ an integer. For $s\geq 0$ let
$$
\Delta (s,A) = \vert \mu_a (A, \text{naive} , G(N)) [0,s] - \mu_a [0,s] \vert
\tag4.24
$$
be the deviation of the measure given to $[0,s]$ by $\mu_a(A)$
from that given by $\mu_a$. By definition
$$
D(\mu_a (A , \text{naive}, G(N)) , \mu_a ) =
\sup_{0\leq s < \infty} \Delta (s, A) \; .
\tag4.25
$$
According to (4.3) and Lemma 4, we have for any $m \geq a-1$
$$
\mu_a (A, \text{naive} ) [0,s] \leq
\frac{1}{N} \sum_{n=a-1}^m (-1) ^{n-a+1} \binom n {a-1}
C_n \left(s, \frac{N X(A)}{\sigma \pi} \right)
\tag4.26
$$
if $m-a+1$ is even and
$$
\mu_a (A, \text{naive} )[0,s] \geq
\frac{1}{N} \sum\limits_{n=a-1}^m (-1) ^{n-a+1} \binom n {a-1}
C_n \left( s, \frac{NX(A)}{\sigma \pi} \right)
\tag4.27
$$
if $m-a+1$ is odd.
\noindent In terms of $Z$ we have
$$
\mu_a (A, \text{naive} )[0,s] \leq
\sum\limits_{n=a-1}^m (-1)^{n-a+1} \binom n {a-1}
Z[n+2, F_{n+2,s}, G(N)](A)
\tag4.28
$$
if $m-a+1$ is even, and
$$
\mu_a (A, \text{naive})[0,s] \geq
\sum\limits_{n=a-1}^m (-1)^{n-a+1} \binom n {a-1}
Z[n+2, F_{n+2,s}, G(N)](A)
\tag4.29
$$
if $m-a+1$ is odd.
\vskip .5cm
\noindent Taking expectations in (4.28) and (4.29)
and letting $N \rightarrow \infty$ we get (using Theorem 2 of
Section 3 and (4.22))
$$
\mu_a [0,s] \leq \sum_{n=a-1}^m (-1)^{n-a+1} \binom n {a-1}
Z[n+2 , F_{n+2,s}, \text{univ}]
\tag4.30
$$
if $m-a+1$ is even, while
$$
\mu_a [0,s] \geq \sum\limits_{n=a-1}^m (-1)^{n-a+1} \binom n {a-1}
Z[n+2, F_{n+2,s}, \text{univ}]
\tag4.31
$$
if $m-a+1$ is odd.
\vskip .5cm
\proclaim{Lemma 5}
For $s \geq 0$ and any integer $L \geq a-1$,
$$
\spreadlines{3\jot}
\split
\Delta (s,A) \leq \sum\limits_{k=a-1}^L
\binom k {a-1} \big| Z [ k+2, F_{k+2,s}, G(N)](A) -
Z[k+2, F_{k+2,s}, \text{univ} ] \big| \\
\qquad\qquad + \binom L {a-1} Z[L+2, F_{L+2, s} , \text{univ}]
+ \binom {L+1}{a-1} Z[L+3, F_{L+3,s} , \text{univ}] \; .
\endsplit
$$
\endproclaim
\vskip .5cm
\demo{Proof}
Let $m \geq a$ have the same parity as $a-1$.
Assume first that $\mu_a [0,s] \geq \mu_a (A, \text{naive}) [0,s]$,
then
$$
\align
0 &\leq \mu_a [0,s] - \mu_a (A, \text{naive} ) [0,s]
\leq \sum\limits_{k=a-1}^m (-1)^{k-a+1} \binom k {a-1}
Z[k+2, F_{k+2,s}, \text{univ}] \\
& \qquad - \sum\limits_{k=a-1}^{m-1} (-1)^{k-a+1} \binom k {a-1}
Z[k+2, F_{k+2,s}, G(N)](A)
\endalign
$$
according to (4.28),$\ldots$, (4.31)
$$
\spreadlines{2\jot}
\align
&\leq \binom {m} {a-1} Z[m+2, F_{m+2,s}, \text{univ} ]
+ \sum\limits_{k=a-1}^{m-1} \binom k {a-1} \big|
Z[k+2, F_{k+2,s}, \text{univ}] \\
&\qquad\qquad \qquad\qquad \qquad\qquad \qquad \qquad
\qquad \qquad - Z[k+2, F_{k+2,s}, G(N)](A) \big| \; .
\endalign
$$
\enddemo
If $m=L$, this implies the asserted inequality.
If $L= m+1$ is of the right parity, then the above
inequality with $m$ replaced by $m+2$ implies the claimed inequality.
\vskip .5cm
Now suppose that $\mu_a (A)[0,s] \geq \mu_a [0,s]$,
then
$$
\spreadlines{1\jot}
\align
0 & \leq \mu_a (A, \text{naive})[0,s] - \mu_a [0,s]
\leq \sum_{k=a-1}^m (-1)^{k-a+1} \binom k {a-1}
Z[k+2, F_{k+2,s}, G(N)](A) \\
&\qquad\qquad\qquad \qquad\qquad\qquad\qquad - \sum_{k=a-1}^{m+1} (-1)^{k-a+1}
\binom k {a-1} Z[k+2, F_{k+2,s}, \text{univ}]
\endalign
$$
according to (4.28), (4.29) $\cdots$ (4.31)
$$
\spreadlines{1\jot}
\align
&\leq \binom {m+1}{a-1} Z[m+3 , F_{m+3,s}, \text{univ}]
+\sum_{k=a-1}^m \binom k {a-1} \big| Z[k+2, F_{k+2,s}, G(N)](A)\\
&\hskip 2.35in - Z[k+2, F_{k+2,s}, \text{univ} ] \big| \; .
\endalign
$$
Now if $L$ is one of $m$ or $m+1$, the desired inequality
follows from the above.
\vskip .5cm
\noindent Invoking the bounds (3.1) and (3.2) of Theorem 2
of Section 3, the triangle inequality
$$
\multline
\big| Z[k+2, F_{k+2,s} , G(N)](A) - Z[k+2, F_{k+2,s}, \text{univ}]\big| \\
\leq \big| Z[k+2, F_{k+2,s}, G(N)](A) - E(Z[k+2, F_{k+2,s}, G(N)])\big| \\
+\big| E(Z[k+2, F_{k+2,s}, G(N)]) - Z[k+2, F_{k+2,s}, \text{univ}]\big|
\endmultline
$$
and the inequality, $\binom k {a-1} \leq 2^k$, we get
\vskip .5cm
\proclaim{Corollary 6}
For $L \geq a$,
$$
\multline
\Delta (s,A) \leq \sum\limits_{k=a-1}^L 2^L \big|
Z[k+2, F_{k+2,s}, G(N)](A) - E(Z[k+2, F_{k+2,s}, G(N)])\big| \\
+ \frac{(4s)^{L+1} (s+1)}{(L+1)!}
+ \frac{10(s^{L+1} +1) \cdot 16^{L+1} \cdot L(s^2 + \log N)}{N} \; .
\endmultline
$$
\endproclaim
\vskip .5cm
In order to estimate the supremum over $s \geq 0$ of $\Delta (s,A)$
as required in (4.25), we proceed in the most direct (but crude)
way. Let $M$ be a second parameter, which like $L$, will be chosen
to go to infinity as a function of $N$. $M$ will be the number of points
$s_j \, , \; j=1 , \ldots , M$ at which we invoke Corollary 6.
Choose $0 < s_1 < s_2 \cdots < s_M < \infty$ such that
$$
H_a (s_j ) = \mu_a [0,s_j] = \frac{j}{M+1}
\tag4.32
$$
Since $H_a(s)$ is continuous and increasing with $H_a(0) = 0$
and $H_a (\infty) = 1$, such points $s_j$ can be found (actually it is not
difficult to see from the expression (4.18) for $H_a(s)$ that $H_a$
is analytic in $s$\footnote"*"{See Section 5}, and hence $H_a$ is
strictly increasing so that the $s_j$'s are in fact unique!).
Also set $s_0 = 0$, $s_{M+1} = \infty$.
If $s_j \leq s \leq s_{j+1}$, then clearly
$$
\Delta (s,A) \leq \frac{1}{M+1} + \max \{ \Delta (s_j, A),
\Delta (s_{j+1}, A) \} \; .
\tag4.33
$$
Hence
$$
\sup_s \Delta (s,A) \leq \frac{1}{M+1} + \max _{0\leq j \leq M}
\{\Delta (s_j, A) \} \; .
\tag4.34
$$
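The discretization step (4.32)--(4.34) is a general device for monotone distribution functions: the sup over all $s$ is controlled by the values at the quantile points, up to the mesh $1/(M+1)$. A small Python illustration (an editorial sketch, with an empirical distribution function standing in for $\mu_a(A,\text{naive})$ and the uniform distribution standing in for $\mu_a$; all names below are ours):

```python
import random

# G: empirical distribution function of a random sample; H: its limit (the
# uniform distribution function on [0, 1]); the quantile points satisfy
# H(s_j) = j / (M + 1), as in (4.32).
random.seed(7)
M = 19
sample = sorted(random.random() for _ in range(500))

def G(x):
    return sum(1 for v in sample if v <= x) / len(sample)

def H(x):
    return min(max(x, 0.0), 1.0)

quantiles = [j / (M + 1) for j in range(0, M + 2)]
grid = [i / 2000 for i in range(2001)]

# (4.33)-(4.34): by monotonicity, the sup of |G - H| over the whole line is
# at most 1/(M+1) plus the maximum deviation at the quantile points.
sup_delta = max(abs(G(x) - H(x)) for x in grid)
bound = 1 / (M + 1) + max(abs(G(q) - H(q)) for q in quantiles)
assert sup_delta <= bound + 1e-12
```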
Applying Corollary 6 yields
$$
\multline
\sup_s \Delta (s,A) \leq \frac{1}{M+1} +
\frac{(4s_M)^{L+1} (s_M + 1)}{(L+1)!} +
\frac{10(s_M^{L+1} +1) \cdot 16^{L+1} \cdot L(s_M^2+ \log N)}{N} \\
+ \max_{0\leq j \leq M} 2^L \sum\limits_{k=a-1}^L
\big| Z[k+2, F_{k+2, s_j}, G(N)](A)
-E(Z[k+2, F_{k+2, s_j}, G(N)])\big|
\endmultline
\tag4.35
$$
$$
\multline
\leq \frac{1}{M+1} + \frac{(4s_M )^{L+1} (s_M + 1 )}{L!} +
\frac{10(s_M^{L+1} +1 ) \cdot 16^{L+1} \cdot L(s_M^2 + \log N)}{N} \\
+ 2^L \sum_{j=0}^M \sum_{k=a-1}^L \big| Z[k+2, F_{k+2, s_j}, G(N)](A)
-E(Z[k+2, F_{k+2, s_j}, G(N)]) \big|
\endmultline
\tag4.36
$$
We integrate this inequality over $G(N)$ and apply
Theorem 2 of Section 3 which asserts that
$$
\multline
\int\limits_{G(N)} \big| Z[k+2, F_{k+2, s_j}, G(N)](A) -
E(Z[k+2, F_{k+2, s_j}, G(N)])\big| dA \\
\leq \left( \int\limits_{G(N)} |Z-E(Z)|^2 dA \right)^{1/2} \\
\leq \big(3(8s_j)^{k+1} + 65 (8s_j)^{2k+2} \big)^{1/2} \biggl. \biggr/ \sqrt{N} \; .
\endmultline
\tag4.37
$$
This leads to the main inequality (assuming $s_M > 1$), valid for
$L > 1$, $M > 1$:
$$
\multline
\int\limits_{G(N)} \left( \sup_s \Delta (s,A) \right) dA
\leq \frac{1}{M+1} + \frac{(4s_M)^{L+1} (s_M +1)}{L!} \\
+ \frac{10(s_M^{L+1} + 1) \cdot 16^{L+1} \cdot L(s_M^2 + \log N)}{N}
+ \frac{10ML(16s_M)^{L+1}}{\sqrt{N}}
\endmultline
\tag4.38
$$
At this point we can directly establish the main Theorem 0.2
in non--quantitative form, viz.
$$
\lim_{N\rightarrow \infty} \int\limits_{G(N)} (\sup_s \Delta (s,A)) dA = 0 \; .
\tag4.39
$$
Indeed for $\epsilon > 0$, choose $M+1 > 3/ \epsilon$, so that the first
term in (4.38) is less than $\epsilon /3$; then $s_M$ is determined
(indeed according to (4.21) $s_M \leq a(M+1)$).
Next choose $L$ sufficiently large so that the second term in (4.38)
is less than $\epsilon /3$. Finally in terms of this $M$,
$s_M$ and $L$, we can clearly make the 3$^{rd}$ plus the 4$^{th}$ terms
in (4.38) at most $\epsilon /3$ for $N$ large enough, and hence the l.h.s.
is less than $\epsilon$ for $N$ large enough. This establishes (4.39).
\vskip .5cm
In order to estimate the right hand side of (4.38) quantitatively,
and in particular to get an estimate of the type $N^{- \alpha}$ for
some $\alpha > 0$, we need an upper bound on $s_M$. The upper bound
$s_M = O(M)$ in (4.21) and even an upper bound of $s_M = O(\log M)$
does not suffice for this purpose. In Section 5 we will establish
the estimate
$$
s_M \leq 3a \sqrt{ \log \frac{4Ma}{3}}
\tag4.40
$$
(see equation (5.56) Section 5).
\vskip .5cm
\noindent Substituting this for $s_M$ in (4.38), and choosing $L$
and $M$ so as to minimize the resulting quantity will lead to the
desired estimate. We choose $M=N^{1/6}$ (or the integer part
thereof to be more exact). For this choice
$$
s_M \leq 3a \sqrt{ \log \frac{4Ma}{3}} \leq 16a \sqrt{ \log N}
\tag4.41
$$
if $M \geq a$. With such a choice, the sum of the third and fourth terms
in (4.38) is at most
$$
\frac{20ML \left(16^2 a \sqrt{\log N} \right)^{L+3}}{\sqrt{N}} \; .
$$
The second term is at most
$$
\frac{\left(16^2 a \sqrt{\log N} \right)^{L+3}}{L!} \; .
$$
Hence the integral in (4.38) satisfies
$$
\int\limits_{G(N)} \left( \sup_s \Delta (s,A) \right) dA \leq
N^{-1/6} + \left( (16)^2 a \sqrt{\log N} \right)^{L+4}
\left( \frac{1}{L!} + \frac{M}{\sqrt{N}} \right)
\tag4.42
$$
for $20L \leq \left(16^2 a \sqrt{ \log M} \right) ^{L+3}$,
which is the case if $L \geq 20a$, for example.
\vskip .5cm
Next choose
$$
L! =\frac{N^{1/2}}{M} = N^{1/3} \; .
\tag 4.43
$$
According to Stirling's series [ \ ], we have
$$
\aligned
\log L! &= (L+1/2 ) \log L - L + \beta (L) \, , \\
\text{with} \quad |\beta(L) | &\leq 5 \qquad \text{if} \quad L \geq 2 \; .
\endaligned
\tag4.44
$$
Hence, for $L$ large enough,
$$
\left.
\aligned
\log N^{1/3} &\leq L \log L \, , \\
\log N^{1/3} &\geq L \, , \\
\text{and} \quad L &\geq \frac{\dsize{\log N ^{1/3}}}{\dsize{\log \log N^{1/3}}}
\endaligned \qquad \right\} \quad .
\tag4.45
$$
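As a quick numerical sanity sketch (ours, not part of the argument), the remainder bound in (4.44) and the lower bound on $L$ in (4.45) can be checked with Python's standard library; the sample values of $\log N^{1/3}$ below are hypothetical.

```python
import math

def beta(L):
    # beta(L) = log L! - ((L + 1/2) log L - L); math.lgamma(L + 1) = log L!
    return math.lgamma(L + 1) - ((L + 0.5) * math.log(L) - L)

# The remainder bound |beta(L)| <= 5 for L >= 2 in (4.44); in fact
# beta(L) decreases to (1/2) log(2 pi) ~ 0.9189.
assert all(abs(beta(L)) <= 5 for L in range(2, 5000))

# If L! = N^{1/3}, i.e. log L! = log N^{1/3}, then (4.45) gives
# L >= log N^{1/3} / log log N^{1/3}.
for target in (50.0, 200.0, 1000.0):    # hypothetical values of log N^{1/3}
    L = 2
    while math.lgamma(L + 1) < target:  # smallest L with log L! >= target
        L += 1
    assert L >= target / math.log(target)
```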
Let
$$
R = \left( 16^2 a (\log N)^{1/2} \right) ^{L+4}
\tag4.46
$$
then
$$
\aligned
\log R &= (L+4) \log \left( 16^2 a (\log N ) ^{1/2} \right) \\
& \quad \leq L \log \left( \sqrt{ \log N} \right) + L \log (16^2 a)
+ 4 \log \left( 16^2 a \sqrt{\log N} \right) \\
& \quad \leq L \log (3L \log L )^{1/2} + L \log (16^2 a) + 4 \log
\left( 16^2 a \sqrt{L \log L} \right) \\
& \quad \leq \frac{1}{2} L \log L + 10 \cdot L \log \log L \\
&\hskip -.8in \text{if} \hskip 1.3in L \geq 16^2a^2 \quad .
\endaligned \tag4.47
$$
\noindent Given $\epsilon > 0$ we therefore have
$$
\aligned
\log R &\leq \left( \frac{1}{2} + \epsilon \right) [(L+ 1/2) \log
L - L + \beta (L) ] + 10 L \log \log L + \frac{L}{2} - \frac{1}{4}
\log L + \frac{\beta (L)}{2} \\
& \qquad - \epsilon [ (L+ 1/2 ) \log L - L + \beta (L) ] \\
& \leq \left( \frac{1}{2} + \epsilon \right) \log N ^{1/3} \quad ,
\endaligned
\tag4.48
$$
if $\epsilon L \log L \geq 11 L \log \log L$, which will be the case if
$$
L \geq e^{22/ \epsilon}
\tag4.49
$$
That is to say, if the side inequalities are valid, then
$$
R \leq N^{1/6+ \epsilon} \quad .
$$
This implies that
$$
\int\limits_{G(N)} \left(\sup_s \Delta (s,A) \right) dA \leq
N^{-1/6} + N^{1/6+\epsilon} N^{-1/3} \leq 2N^{-1/6 + \epsilon} \; .
\tag4.50
$$
Checking the side conditions, one finds that (4.50) holds if
$$
N \geq \exp (\max (16^4 a^4 , \; \exp 22/ \epsilon )) \quad .
\tag4.51
$$
This completes the proof of Theorem 2 bis, and
hence of Theorem 2.
\head{5. Tail estimates and Fredholm determinants}\endhead
The primary goal of this section is to establish the estimates for the
tails of the measure $\mu_a$. This is done by expressing these
measures in terms of certain Fredholm determinants. In this way
the relation between the universal measures $\mu_a$ and Gaudin's
``GUE'' spacing measures is established. A crucial factorization
(see (5.41)) gives a relation between the finite (i.e. $N$ finite)
level versions of these determinants for the unitary, orthogonal
and symplectic cases. For the even orthogonal case, the determinant
can be estimated from above directly using its representation as a
multiple integral (see Proposition 7). This eventually leads us to
the tail estimates (see Proposition 10). In the supplement to this
section, we give a more conceptual proof of the relation between
$\mu_a$ and the Gaudin measures for the special case of $U(N)$.
\vskip .5cm
We begin with the series representation for $\mu_a$,
equation (4.17) of Section 4.
$$
\mu_a[0,s] = \sum\limits_{n=a-1}^\infty (-1)^{n-a+1}
\binom n {a-1} \frac{1}{(n+1)!} \int_0^s \dotsi \int_0^s
W_{n+2} (0, x_2 ,\dots ,x_{n+2} )
dx_2 \dotsm dx_{n+2} \quad .
\tag5.1
$$
Set
$$
G(T,s) = \sum\limits_{n=0}^\infty T^n
\int\limits_{0 \leq x_2 \leq x_3 \leq \cdots \leq x_{n+2} \leq s}
W_{n+2} (0, x_2 , \dots , x_{n+2} ) dx_2 \cdots dx_{n+2}
\tag5.2
$$
$$
= \sum_{n=0}^\infty \frac{T^n}{(n+1)!}
\int_0^s \dotsi \int_0^s W_{n+2} (0, x_2 , \dots , x_{n+2})
dx_2 \cdots dx_{n+2}
\tag5.2$^\prime$
$$
Since $|W| \leq 1$ (Lemma 8, Section 3),
the series defining $G$ converges absolutely for $s \geq 0$ and all $T$.
It is an entire function of $T$ for each such $s$. Differentiating
$(a-1)$ times w.r.t. $T$ and setting $T=-1$ yields
$$
\spreadlines{3\jot}
\aligned
\left( \frac{\partial}{\partial T}\right)^{a-1} G
\bigg|_{T=-1} &= (a-1)! \sum\limits_{n=a-1}^\infty
(-1)^{n-a+1} \binom n {a-1} \frac{1}{(n+1)!} \cdot \\
& \qquad \int_0^s \dotsi \int_0^s W_{n+2} (0, x_2 , \dots ,
x_{n+2}) dx_2 \cdots dx_{n+2} \; .
\endaligned
$$
Hence
$$
\mu_a [0,s] = \frac{1}{(a-1)!} \left( \frac{\partial} {\partial T} \right) ^{a-1}
G(T,s) \bigg|_{T=-1} \; .
\tag5.3
$$
Equivalently
$$
G(T,s) = \sum_{n=0}^\infty \mu_{n+1} [0,s] (1+T)^n \; .
\tag5.4
$$
That is, $\mu_{n+1} [0,s]$ is the $n^{th}$ Taylor coefficient
of $G(T,s)$ at $T=-1$.
\vskip .5cm
\proclaim{Lemma 1}
Let $n \geq 0$ be an integer and $s \geq 0$ and set
$$
e_n(s) = \int_0^s \dotsi \int_0^s W_n(x_1 , \dots , x_n )
dx_1 \cdots dx_n \; .
$$
Then
\roster
\item"{(i)}" $e_n$ is the restriction to $\Bbb R _{\geq 0}$ of
an entire function of $s$ which is divisible by $s^n$ and
satisfies
$$
|e_n (s) | \leq |s|^n \; e^{n|s| \pi} \; n^{n/2}
$$
\item"{(ii)}" $e_1 (s) = s$ and
$$
\frac{d}{ds} \; \frac{e_{n+2}(s)}{(n+2)!} =
\frac{1}{(n+1)!} \int_0^s \dotsi \int_0^s
W_{n+2} (0, z_2 , \dots , z_{n+2} )
dz_2 \cdots dz_{n+2}
$$
\endroster
\endproclaim
\demo{Proof}
\roster
\item"{(i)}" For $s>0$ make the substitution $sy_j = x_j$.
This yields
$$
e_n (s) = s^n \int_0^1 \dotsi \int_0^1 W_n (sy) dy_1 \cdots dy_n \; .
$$
From this it is clear that $e_n(s)$ is divisible by $s^n$
and entire, since
$W_n(sy)=\text{det}_{n \times n} \left( \frac{\sin \pi s (y_i - y_j)}{\pi s (y_i - y_j)} \right)$
is entire.
Moreover, for $0 \leq y_i \leq 1$ and $0 \leq y_j \leq 1$
and $s\in \Bbb C$
$$
\bigg|\frac{\sin \pi s (y_i - y_j)}{\pi s (y_i - y_j )} \bigg| \leq e^{\pi |s|} \; .
$$
The inequality in (i) then follows from this and Hadamard's inequality \cite{Ha}
$$
\big| \text{det}_{n\times n} (a_{ij}) \big| \leq M^n n^{n/2}
\tag5.5
$$
where $M= \max_{i,j} |a_{ij} |$.
\vskip .3cm
\item"{(ii)}" It suffices to verify the identity for $s > 0$.
For small $\epsilon > 0$, we have
$$
\aligned
e_n (s+ \epsilon ) - e_n (s) & = \int\limits_{[0,s+\epsilon]^n}
W_n (x_1 , \dots , x_n ) dx_1 \cdots dx_n -
\int\limits_{[0,s]^n } W_n (x_1 , \dots , x_n)
dx_1 \cdots dx_n \\
& = \epsilon \sum_{i=1} ^n \int_0^s \dotsi \int_0^s
W_n (x_1 , \dots , x_{i-1}, s, x_{i+1}, \dots , x_n )
\prod_{j \neq i} dx_j + O(\epsilon ^2)
\endaligned
$$
\endroster
Now use the $\sum_n$--invariance of $W_n$ to rewrite the last as
$$
\epsilon n \int_0^s \dotsi \int_0^s W_n (s, x_2 , \dots , x_n )
dx_2 \cdots dx_n + O(\epsilon ^2) \; .
$$
Since $W_n$ is invariant under $x \rightarrow -x$ and
$x \rightarrow x + (t, t , \dots , t)$, the substitution
$x \rightarrow x-s$ takes the integrand
to $W_n (0, s - x_2 , \dots , s - x_n)$. Letting $\epsilon \rightarrow 0$ we get
$$
\spreadlines{3\jot}
\aligned
\frac{d e_n}{ds}& =
n \int_0^s \dotsi \int_0^s W_n (0, s- x_2 , \dots , s-x_n) dx_2\cdots dx_n \\
& = n \int_0^s \dotsi \int_0^s
W_n (0, x_2^\prime , \dots , x_n^\prime) dx_2^\prime \cdots dx_n^\prime
\endaligned
$$
after setting $x_j^\prime = s - x_j$. This proves the claim.
\enddemo
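Hadamard's inequality (5.5), used in the proof above, can be checked numerically on random matrices; the sketch below (our illustration, not part of the text) uses NumPy and bounded random entries.

```python
import numpy as np

rng = np.random.default_rng(0)
for n in range(1, 8):
    for _ in range(50):
        a = rng.uniform(-1.0, 1.0, size=(n, n))
        M = np.abs(a).max()
        # Hadamard: |det a| <= product of row lengths <= (sqrt(n) M)^n = M^n n^{n/2}
        assert abs(np.linalg.det(a)) <= M**n * n**(n / 2) + 1e-12
```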
\vskip .5cm
\proclaim{Corollary 2}
Let $E(T,s)$ be defined by
$$
E(T,s) = 1 + \sum_{n=1}^\infty \frac{e_n(s)}{n!} T^n
\tag5.6
$$
then $E$ is entire in $T$ and $s$ and
$$
\frac{\partial E}{\partial s} = T + T^2 G
\tag5.7
$$
\endproclaim
\demo{Proof} \ \
Part (i) of Lemma 1, together with $n! \sim n^{n+1/2} e^{-n} \sqrt{2\pi}$, shows
that the series defining $E(T,s)$ converges absolutely and uniformly
on compacta in $(s,T)$. Since each term is analytic in $s$ and $T$,
the entirety of $E$ follows. To get the identity, we have
$$
\spreadlines{3\jot}
\aligned
\frac{\partial E}{\partial s} & = \sum_{n=1}^\infty
\frac{T^n}{n!} \; \frac{d}{ds} e_n (s) \\
& = \sum_{n=1}^\infty \frac{nT^n}{n!}
\int_0^s \dotsi \int_0^s W_n(0, z_2 , \dots , z_n )
dz_2 \cdots dz_n \\
& = T+T^2 G \, , \qquad \qquad \text{according to (5.2$^\prime$)} \; .
\endaligned
$$
\enddemo
\vskip .5cm
In particular it follows from (5.3) and (5.7) that
$H_a(s) = \mu_a [0,s]$ extends to an entire function
of $s$, a fact which we mentioned in Section 4.
Next we recognize $E(T,s)$ as a Fredholm determinant. Writing
out the definition of $E$ gives
$$
E(T,s) = 1+ \sum_{n=1}^ \infty \frac{T^n}{n!}
\int_0^s \dotsi \int_0^s \text{det}_{n\times n} (K(x_i, x_j ) ) dx_1 \cdots dx_n
\tag5.8
$$
with
$$
K(x,y) = \frac{\sin \pi (x-y)}{\pi (x-y)}
\tag5.9
$$
or
$$
E(T,s) = 1+ \sum_{n=1}^ \infty \frac{T^n}{n!}
\int_\alpha^{\alpha +s} \dotsi \int_\alpha^{\alpha +s}
\text{det}_{n\times n} (K(x_i, x_j ) ) dx_1 \cdots dx_n
\tag5.10
$$
for any $\alpha \in \Bbb R$ (since $K(x,y) = K(x+ \alpha , y+\alpha )$).
The series (5.8) and (5.10) are precisely the Fredholm expansions of
$\text{det} (I + T K_{s,\alpha} )$ where $K_{s,\alpha}$ is the integral
operator with kernel (5.9), acting on $L^2 ([\alpha , \alpha +s ] , d x )$,
see \cite{W--W}. That is, for any $\alpha$,
$$
E(T,s) = \text{det} (I + TK_{s, \alpha} ) \; .
\tag5.11
$$
Write the Taylor expansion of $E$ at $T = -1$ as
$$
E(T,s) = \sum\limits_{n=0}^{\infty} E_n (s) (1+T)^n
\tag5.12
$$
so that
$$
E_n(s) = \frac{1}{n!} \left( \frac{ \partial}{ \partial T} \right)^n
E(T,s) \bigg|_{T= -1} \; .
\tag5.13
$$
We express $\mu_a$ in terms of the Fredholm determinant.
\vskip .5cm
\proclaim{Proposition 3}
$$
\mu_a [0,s] = 1 + \frac{d}{ds} \sum\limits_{n=0}^{a-1} \frac{a-n}{n!}
\left( \biggl(\frac{\partial}{\partial T} \biggr) ^n
E(T,s) \right)_{T= -1}
$$
\endproclaim
\vskip .5cm
\demo{Proof}
From (5.7) we have
$$
\spreadlines{3\jot}
\aligned
G & = \frac{1}{T^2} \frac{\partial E}{\partial s} - \frac{1}{T} \\
&= \frac{1}{((T+1)-1)^2} \sum\limits_{n=0}^\infty \frac{dE_n(s)}{ds}
(1+T)^n + \frac{1}{1-(T+1)} \\
& = \sum_{\tau=0}^\infty \binom {-2} \tau (-1)^ \tau (T+1)^\tau
\sum_{n=0}^\infty \frac{dE_n}{ds} (1+T)^n + \sum_{n=0}^\infty
(T+1)^n \\
&= \sum_{m=0}^\infty \beta_m (1+T)^m
\endaligned \tag5.14
$$
where
$$
\beta_m = \sum_{n=0}^m (m-n+1) \frac{dE_n}{ds} + 1 \; .
\tag5.15
$$
However, according to (5.4) and (5.14)
$$
\beta_n = \mu_{n+1} [0,s] \; .
$$
Thus
$$
\hskip -.634in \mu_a[0,s] = 1 + \sum_{n=0}^{a-1} (a-n) \frac{dE_n}{ds}
\tag5.16
$$
$$
\hskip 2.0in = 1+ \frac{d}{ds} \sum_{n=0}^{a-1} \frac{(a-n)}{n!}
\left( \frac{\partial} {\partial T} \right) ^n E(T,s) \bigg|_{T=1}
\tag5.17
$$
as claimed
\enddemo
\vskip .5cm
If we differentiate (5.17) w.r.t. $s$, we have
$$
d\mu_a = \left( \frac{d^2}{ds^2} \; \sum_{n=0}^{a-1}
\frac{a-n}{n!} \biggl[ \left( \frac{\partial}{\partial T} \right)^n
\det (I+TK_s )\biggr]_{T=-1} \right) ds \; .
\tag5.18
$$
If we use the fact already established, that $\mu_a$ is a probability
measure, then another way of writing (5.16) is
$$
\text{tail}_{\mu_a} (s) = - \sum_{n=0}^{a-1} (a-n) \frac{dE_n}{ds}
\tag5.19
$$
where, in general,
$$
\text{tail}_{\nu} (s) = \int_s^\infty d\nu (t) \; .
\tag5.20
$$
Finally, we record the special case of which we will make
crucial use
$$
\text{tail}_{\mu_1} (s) = -\frac{d}{ds} E(-1, s) =
-\frac{d}{ds} \det (I -K_s) \; .
\tag5.21
$$
\vskip .5cm
\noindent{\bf Fredholm Determinants.} \ There are a number of applications
of the Fredholm determinant expression for $\mu_a$. One such application
is to write say $E_0(s) = E(-1,s)$ in terms of the eigenvalues of
$K_s$. That is
$$
E_0 (s) = \det (I-K_s) = \prod_{j=0}^\infty (1 - \lambda_j (s))
\tag5.22
$$
where $\lambda_j (s)$ are the eigenvalues of the integral equation
$$
\int_{-s/2}^{s/2} K(x,y) f(y) dy = \lambda f(x) \; .
\tag5.23
$$
The point is that the eigenfunctions of (5.23) are known special
functions (``Prolate-spheroidal functions'') and so for a given
$s$ the $\lambda_j (s)$'s can be efficiently computed
(numerically). In this way $E_0(s)$ may be computed numerically;
indeed, the graphs in Figure 3 were obtained in this way.
Secondly, formula (5.19) shows that the measures $\mu_a$
coincide with the spacing measures ``$p_2 (a-1,s)$''
obtained by Gaudin and Mehta \cite{GAU}, \cite{MEH}
for the Gaussian Unitary Ensemble.
In the Supplement to this section we will give a more conceptual
derivation of this equality. Thirdly, it has been shown by
Jimbo et al. \cite{J--M---M--S} that $E_n(s)$ may be
expressed in terms of Painlev\'e V transcendents, and this
allows one to bring in powerful
tools from completely integrable systems to analyze the asymptotics
of $E_n(s)$ as $s \rightarrow \infty$ (see \cite{DEI}).
We proceed more elementarily by exploiting another probabilistic
interpretation of $E_n(s)$.
Before turning to that we decompose $L^2([-s, s], dx)$ into
even and odd functions.
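To illustrate the remark that $E_0(s) = \det(I - K_s)$ is numerically computable, here is a minimal sketch (ours, not the authors' method; it uses a standard Gauss--Legendre Nystr\"om discretization rather than the prolate spheroidal expansion, and places the interval symmetrically as $[-s/2, s/2]$).

```python
import numpy as np

def E0(s, order=40):
    """Approximate E_0(s) = det(I - K_s), K_s the sine kernel on [-s/2, s/2]."""
    x, w = np.polynomial.legendre.leggauss(order)  # nodes/weights on [-1, 1]
    x = 0.5 * s * x                                # rescale to [-s/2, s/2]
    w = 0.5 * s * w
    K = np.sinc(x[:, None] - x[None, :])           # np.sinc(t) = sin(pi t)/(pi t), kernel (5.9)
    # Symmetrized Nystrom matrix: entries sqrt(w_i) K(x_i, x_j) sqrt(w_j)
    A = np.sqrt(w)[:, None] * K * np.sqrt(w)[None, :]
    return np.linalg.det(np.eye(order) - A)

# E_0(0) = 1, E_0 decreases, and E_0(2s) <= exp(-s^2), cf. (5.54) below
assert abs(E0(0.0) - 1.0) < 1e-12
vals = [E0(s) for s in (0.5, 1.0, 2.0, 3.0)]
assert all(v2 < v1 for v1, v2 in zip(vals, vals[1:]))
assert all(E0(2 * s) <= np.exp(-s**2) + 1e-8 for s in (0.5, 1.0, 2.0))
```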
\vskip .5cm
Let $K_+$ and $K_-$ be the even and odd parts of the kernel
$K(x,y)$. That is
$$
K_\pm (x,y) = K(x,y) \pm K(-x,y) \;.
\tag5.24
$$
These kernels may be used to define integral operators
$K_{2s}, \; K_{\pm ,s}$ as follows:
$$
K_{2s} = \; \text{the integral operator with kernel } \; K(x,y)
\; \text{on} \; L^2 ([-s,s], dx)
\tag5.25
$$
$$
K_{\pm,s} = \; \text{the integral operator with kernel} \;
K_{\pm} (x,y) \; \text{on} \; L^2([0,s],dx) \; .
\tag5.26
$$
Now we have an orthogonal decomposition
$$
\aligned
L^2([-s,s],dx) &= L^2([-s,s],dx)_{\text{even}} \; \oplus
L^2([-s,s],dx)_{\text{odd}} \\
&f= f_+ + f_- \, , \; f_{\pm} = \frac{1}{2} (f(x) \pm f(-x))
\endaligned\tag5.27
$$
Both of these subspaces are stable by the operator $K_{2s}$
because $K(x, -y) = K(-x,y)$. Via the isometric isomorphisms
$$
\align
\frac{1}{\sqrt{2}} \text{Rest} &\colon L^2 ([-s,s],dx)_{\text{even}}
\cong L^2 ([0,s],dx) \\
\frac{1}{\sqrt{2}} \text{Rest} &\colon L^2 ([-s,s],dx)_{\text{odd}}
\cong L^2 ([0,s],dx)
\endalign
$$
we get isomorphisms
$$
K_{2s} \bigg|_{L^2([-s,s],dx)_{\text{even}}} \cong K_{+,s} \bigg|_{L^2([0,s],dx)}
\tag5.28
$$
$$
K_{2s} \bigg|_{L^2([-s,s],dx)_{\text{odd}}} \cong K_{-,s} \bigg|_{L^2([0,s],dx)} \; .
\tag5.29
$$
Define $E_{\pm}(T,s)$ by
$$
\aligned
E_{\pm} (T,s) & = 1+ \sum_{n=1}^\infty \frac{T^n}{n!}
\int_0^s \dotsi \int_0^s \; \text{det}_{n \times n}
(K_\pm (x_i, x_j )) dx_1 \cdots dx_n \\
&= \det (I +TK_{\pm ,s}) = \det \left(I +TK_{2s} \bigg|_{L^2([-s,s],dx)_\pm} \right)
\endaligned\tag5.30
$$
As before $E_\pm (T,s)$ is entire in $(T,s)$, and in view of
(5.27)--(5.29) we have the identity
$$
E(T,2s) = E_+ (T,s) E_- (T,s) \; .
\tag5.31
$$
As in (5.12) denote the expansions of $E_\pm$ at $T= -1$ by
$$
E_\pm (T,s) = \sum_{n=0}^\infty (1+T)^n E_{n,\pm} (s) \; .
\tag5.32
$$
It turns out that $E(T,s)$, $E_\pm (T,s)$ are scaling limits of
similar determinants defined at the level of $G(N)$.
\vskip .5cm
Recall that for each integer $N \geq 1$, $S_N (x)$
is the function
$$
S_N(x) = \frac{\sin (Nx/2)}{\sin (x/2)} =
\sum_{j=0}^{N-1} e^{i(N-1-2j)x/2} \; .
\tag5.33
$$
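As a sanity sketch (ours), the Dirichlet-kernel identity (5.33), and the bound $|S_N(x)| \leq N e^{N|x|/2}$ used in Lemma 4 below, can be checked numerically at a few sample points.

```python
import cmath, math

def S_N_ratio(N, x):
    # closed form sin(Nx/2)/sin(x/2)
    return math.sin(N * x / 2) / math.sin(x / 2)

def S_N_sum(N, x):
    # geometric sum over j = 0, ..., N-1 in (5.33)
    return sum(cmath.exp(1j * (N - 1 - 2 * j) * x / 2) for j in range(N))

for N in (1, 2, 5, 8):
    for x in (0.3, 1.1, 2.5):
        s = S_N_sum(N, x)
        assert abs(s.imag) < 1e-12                 # the sum is real
        assert abs(s.real - S_N_ratio(N, x)) < 1e-12
        assert abs(s.real) <= N * math.exp(N * abs(x) / 2)
```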
We define kernels $S_N(x,y) $ and $S_{\pm,N} (x,y)$
by
$$
\left.
\aligned
S_N(x,y) &= S_N(x-y) \quad \\
S_{\pm,N} (x,y) & = S_N(x,y) \pm S_N (-x,y) \quad
\endaligned \right\} \tag5.34
$$
For $\alpha \in \Bbb R$ and $s \geq 0$ define the integral operators
$$
\aligned
K_{N,s,\alpha} &= \; \text{the integral operator with kernel} \\
S_N (x,y) \; & \text{on} \; L^2 \left([\alpha, \alpha +s ] , \; \frac{dx}{2 \pi} \right)
\endaligned \tag5.35
$$
$$
\spreadlines{1\jot}
\aligned
K_{\pm,N,s} &= \; \text{the integral operator with kernel} \\
S_{\pm,N}(x,y) &\; \text{on} \; L^2 \left([0,s], \; \frac{dx}{2\pi} \right) \\
&= \; \text{the integral operator with kernel} \\
& \qquad \frac{1}{2} \, S_{\pm,N} (x,y) \; \text{on} \; L^2 \left([0,s], \frac{dx}{\pi}\right)
\; .
\endaligned\tag5.36
$$
(For $s=0$, the spaces $L^2 ([\alpha , \alpha +s], dx/2\pi)$ and
$L^2 ([0,s], dx/\pi)$ are the zero spaces). For fixed $s$
and variable $\alpha$ all the operators $K_{N,s,\alpha}$ are
isometrically equivalent. From the explicit formulas for their
kernels, we see that the operators $K_{N,s,\alpha} \, , \; K_{\pm, N,s}$
are of finite rank $(\leq N)$. More precisely, their images lie
in the span of $\{e^{i(N-1-2j)x/2} | 0 \leq j \leq N-1 \}$.
Define the characteristic polynomials (finite rank Fredholm Determinants)
$$
E(N,T,s) = \det(I + TK_{N,s,\alpha} )
\tag5.37
$$
$$
E_\pm (N,T,s) = \det(I+TK_{\pm,N,s} ) \; .
\tag5.38
$$
Explicitly, we have the formulas
$$
E(N,T,s) = \sum_{k=0}^\infty \frac{T^k}{k!} \int_0^s \dotsi \int_0^s
\text{det}_{k\times k} (S_N(x_i , x_j)) \frac{dx_1}{2 \pi} \cdots
\frac{dx_k}{2\pi}
\tag5.39
$$
$$
E_\pm (N,T,s) = \sum_{k=0}^ \infty \frac{T^k}{k!}
\int_0^s \dotsi \int_0^s \text{det}_{k\times k}
(S_{\pm,N} (x_i, x_j)) \frac{dx_1}{2 \pi} \cdots \frac{dx_k}{2\pi}
\tag5.40
$$
with the convention that the $0 \times 0$ determinant is equal to $1$.
These are polynomials in $T$ of degree at most $N$.
Their coefficients are controlled by the following Lemma.
\vskip .5cm
\proclaim{Lemma 4}
For $N \geq 1 , \; k \geq 1$ the functions $A_{k,N}(s)$ and
$A_{\pm,k,N} (s)$ defined for $s \geq 0$ by
$$
\spreadlines{2\jot}
\aligned
A_{k,N}(s) &= \int_0^s \dotsi \int_0^s \text{det}_{k\times k}
(S_N (x_i, x_j)) \frac{dx_1}{2\pi} \cdots \frac{dx_k}{2\pi} \\
A_{\pm,k,N}(s) & = \int_0^s \dotsi \int_0^s \text{det}_{k\times k}
(S_{\pm,N} (x_i,x_j)) \frac{dx_1}{2\pi} \cdots \frac{dx_k}{2\pi} \\
\endaligned
$$
are restrictions to $\Bbb R _{\geq 0}$ of entire functions of $s$ which
satisfy
$$
|A_{k,N}(s) | \leq \bigg| \frac{s}{2\pi} \bigg|^k k^{k/2} (2N)^k
e^{kN|s|/2}
$$
and
$$
|A_{\pm, k,N}(s) | \leq \bigg|\frac{s}{2\pi} \bigg|^k k^{k/2} (2N)^k
e^{kN|s|/2}
$$
for $s \in \Bbb C$.
\endproclaim
\vskip .3cm
\demo{Proof} At $s=0$, all these vanish. For $s>0$ we change
variables $x_j = sy_j$, and the integrals become
$$
\aligned
\left(\frac{s}{2\pi}\right)^k \int_0^1 \dotsi \int_0^1 \text{det}_{k\times k}
(S_N(sy_i , sy_j)) dy_1 \cdots dy_k \\
\endaligned
$$
and
$$
\aligned
\left(\frac{s}{2\pi} \right)^k \int_0^1 \dotsi \int_0^1 \text{det}_{k\times k}
(S_{\pm,N} (sy_i , sy_j )) dy_1 \cdots dy_k \; .
\endaligned
$$
From this it is clear that $A_{k,N}$ and $A_{\pm ,k,N}$ are
entire in $s$. Also from (5.33) it is clear that
$$
|S_N (x) | \leq N \exp (N |x| /2) \; \text{for} \; x \in \Bbb C \; .
$$
So the claimed inequalities follow from Hadamard's inequality (5.5).
\enddemo
\vskip .2cm
\noindent Exactly as in the case of $E(T,s)$ we have an
orthogonal decomposition of $L^2([-s,s], \; dx/2\pi)$
into even and odd parts which gives
$$
E(N,T,2s) = E_+ (N,T,s) E_- (N,T,s) \; .
\tag5.41
$$
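The factorization (5.41) lends itself to a direct numerical check. The sketch below (an illustration under our own discretization choices, not part of the proof) approximates the three determinants by Nystr\"om matrices, taking $\alpha = -s$ so that the unitary determinant lives on $[-s,s]$.

```python
import numpy as np

def S(N, x):
    # Dirichlet kernel S_N(x) = sin(Nx/2)/sin(x/2), with S_N(0) = N
    x = np.asarray(x, dtype=float)
    out = np.full_like(x, float(N))
    m = np.abs(np.sin(x / 2)) > 1e-12
    out[m] = np.sin(N * x[m] / 2) / np.sin(x[m] / 2)
    return out

def fredholm_det(kernel, a, b, T, order=60):
    # det(I + T K) for the operator with the given (symmetric) kernel on
    # L^2([a, b], dx/(2 pi)), via Gauss-Legendre quadrature
    x, w = np.polynomial.legendre.leggauss(order)
    x = 0.5 * (b - a) * x + 0.5 * (b + a)
    w = 0.5 * (b - a) * w / (2 * np.pi)
    A = np.sqrt(w)[:, None] * kernel(x[:, None], x[None, :]) * np.sqrt(w)[None, :]
    return np.linalg.det(np.eye(order) + T * A)

N, s, T = 5, 1.2, -0.7
E  = fredholm_det(lambda x, y: S(N, x - y), -s, s, T)               # E(N,T,2s)
Ep = fredholm_det(lambda x, y: S(N, x - y) + S(N, x + y), 0, s, T)  # E_+(N,T,s)
Em = fredholm_det(lambda x, y: S(N, x - y) - S(N, x + y), 0, s, T)  # E_-(N,T,s)
assert abs(E - Ep * Em) < 1e-8   # the factorization (5.41)
```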
\vskip .2cm
Following a well-known calculation due to
Gaudin and Mehta \cite{GAU}, \cite{MEH}
we give a probabilistic interpretation of the Taylor coefficients
of $E(N, T, s)$, $E_+ (N,T,s)$, and $E_-(N,T,s)$ at $T = -1$.
\vskip .5cm
Recall (Section 2) that to each $A \in G(N)$ we have $N$ eigenvalue
angles, that is $X(A) \in [0, \sigma \pi ]^N$. We call these
$\phi_1 , \; \phi _2 , \dots , \phi_N$. For $n\geq 0$
an integer, and $0 \leq s < \sigma \pi$, we let $P_n (s,G(N))$
be the probability that an $A\in G(N)$ has exactly $n$ of its
eigenvalue--angles lying in $[0,s]$. That is
$$
P_n (s,G(N)) = \text{Haar}_{G(N)} \left\{ A \in G(N) \bigg|
\phi_j (A) \in [0,s], \; \text{for exactly $n$ indices}
\; j\in \{1, \cdots N \} \right\}
\tag5.42
$$
\proclaim{Proposition 5}
$P_n (s, G(N))$ is given in terms of $E(N,T,s)$ and $E_\pm (N,T,s)$ by the following table:
$$
\spreadlines{3\jot}
\aligned
\hskip -2.2in \underline{G(N)} & \qquad \qquad \underline{P_n (s,G(N))} \\
U(N) & \qquad \left(\bigg(\frac{d}{dT}\bigg)^n \frac{1}{n!}
E\big(N,T,s \big) \right)_{T= -1} \\
SO(2N+1) & \qquad \left( \bigg( \frac{d}{dT} \bigg) ^n \frac{1}{n!}
E_- \big(2N,T,s \big) \right)_{T= -1} \\
USP(2N)\; \text{or }\; O_-(2N+2) & \qquad \left( \bigg(\frac{d}{dT} \bigg)^n
\frac{1}{n!} E_- \big(2N+1, T,s \big) \right)_{T=-1} \\
SO(2N) & \qquad \left(\bigg(\frac{d}{dT} \bigg)^n
\frac{1}{n!} E_+ \big(2N-1, T,s \big) \right)_{T=-1} \quad .
\endaligned
$$
\endproclaim
\vskip .3cm
\demo{Proof}
Recall that the kernels $L_N(x,y)$ attached to $G(N)$ are given
by the following table (see Section 2):
$$
\spreadlines{3\jot}
\aligned
\hskip -2.50in \underline{G(N)} & \qquad \qquad \underline{L_N (x,y)} \\
U(N) & \qquad S_N(x,y) e^{i(N-1)(x-y)/2} \\
SO(2N+1) & \qquad 1/2 \big( S_{2N} (x-y) - S_{2N} (x+y) \big) \\
USP(2N) \; \text{or }\; O_- (2N+2) & \qquad 1/2 \big(S_{2N+1} (x-y) - S_{2N+1} (x+y) \big) \\
SO(2N) & \qquad 1/2 \big( S_{2N-1} (x-y) + S_{2N-1} (x+y) \big)
\endaligned
$$
\enddemo
In the $U(N)$ case, the integral operator with kernel
$L_N(x,y)$ is unitarily conjugate (by multiplication by
$e^{i(N-1) x/2}$) to the one with kernel $S_N (x,y)$
on $L^2 ([\alpha, \alpha +s], dx/2\pi )$. So Proposition 5 will
follow from:
\vskip .5cm
\proclaim{Proposition 5 bis}
Let $N \geq 2$ and let $G(N)$ be one of $U(N)$, $SO(2N+1)$, $USP(2N)$,
$SO(2N)$, $O_- (2N+2)$, and $s\in [0,\sigma \pi]$. Then
$$
P_n (s,G(N)) = \left( \bigg(\frac{d}{dT} \bigg) ^n \frac{1}{n!}
\det \left(I+T L_N (x,y) \bigg|_{L^2 ([0,s], dx/ \sigma \pi )} \right) \right)_{T=-1}
$$
\endproclaim
\demo{Proof}
We begin with the case $n=0$, to give the idea.
In this case, $P_0 (s, G(N))$ is the probability that
$A \in G(N)$ has all its eigenvalue angles $\phi_j \in (s,\sigma\pi]$.
By \ \ \ \ of Section 2, this probability is equal to
$$
\int\limits_{(s,\sigma \pi]^N} d\mu_{N,N}(x)
= \frac{1}{N!} \int\limits_{(s,\sigma\pi]^N} \text{det}_{N\times N}
\big(L_N (x_i , x_j ) \big) \prod_m \frac{dx_m}{\sigma \pi}
\tag5.43
$$
Let $I$ denote the characteristic function of $[0,s]$
($s$ is fixed). The $\ell.h.s.$ of (5.43) is
$$
\int\limits_{[0, \sigma \pi]^N} \prod_i \big(1-I (x_i )\big) d \mu_{N,N}(x) \; .
$$
\enddemo
\noindent Expanding out the product, this is equal to
$$
\sum_{k=0}^N (-1)^k \int\limits_{[0, \sigma \pi ]^N} \left(
\sum \Sb \text{subsets} \; J \\ |J| = k \endSb \;
\prod_{j\in J} I(x_j) \right) d\mu_{N,N} (x) \; .
$$
Applying \ \ \ \ of Section 3, this becomes
$$
\spreadlines{2\jot}
\aligned
&\sum_{k=0}^N (-1)^k \int\limits_{[0,\sigma \pi]^k} \prod_{i=1}^k
I(x_i) d \mu _{k,N} (x_1 , \dots, x_k ) \\
&\qquad = \sum_{k=0}^N
(-1)^k \int_0^s \dotsi \int_0^s d \mu_{k,N} (x_1 , \cdots x_k ) \\
&\qquad = \sum_{k=0}^N (-1)^k \frac{1}{k!} \int_0^s \dotsi \int_0^s
\text{det}_{k \times k} \big(L_N (x_i , x_j ) \big)
\prod_{m=1}^k \frac{dx_m}{\sigma \pi} \; .
\endaligned
$$
This is the value at $T= -1$ of
$$
\spreadlines{2\jot}
\aligned
&\sum_{k=0}^N \frac{T^k}{k!} \int_0^s \dotsi \int_0^s
\text{det}_{k\times k} \big( L_N (x_i , x_j) \big) \;
\prod_{m=1}^k \; \frac{dx_m}{\sigma \pi} \\
&\qquad = \sum_{k=0}^\infty \frac{T^k}{k!} \int_0^s \dotsi \int_0^s
\text{det}_{k \times k} \big( L_N (x_i, x_j) \big)
\; \prod_{m=1}^k \; \frac{dx_m}{\sigma \pi} \\
&\qquad = \det \left( I + T L_N (x,y)
\bigg|_{L^2 \big([0,s], \; \frac{dx}{\sigma \pi}\big)} \right) \quad .
\endaligned
$$
This is what was claimed in the case $n=0$.
\vskip .5cm
Turning to the general case $n>0$, we denote by $A_k$ the quantity
$$
A_k = \int_0^s \dotsi \int_0^s \text{det}_{k\times k}
\big(L_N (x_i, x_j )\big) \;
\prod_{m=1}^k \; \frac{dx_m}{\sigma\pi} \; .
$$
If $n > N$, then there is nothing to prove: $P_n (s,G(N)) = 0$,
since $A\in G(N)$ has only $N$ eigenvalue angles, while the r.h.s.
of the identity to be proven is $0$,
since the determinant is a polynomial in $T$ of degree at most $N$.
For $n=N$, $P_N (s,G(N))$ equals
$$
\spreadlines{3\jot}
\aligned
\int\limits_{[0, \sigma \pi ]^N} d \mu_{N,N}(x) &=
\frac{1}{N!} \int\limits_{[0, \sigma \pi]^N} \text{det}_{N \times N}
\big(L_N(x_i , x_j)\big) \; \prod_{m=1}^N \;
\frac{dx_m}{\sigma \pi} \\
& = \frac{A_N}{N!} \\
& = \bigg( \frac{d}{dT}\bigg)^ N \frac{1}{N!} \left( \sum_{0 \leq \ell \leq N}
T^\ell \frac{A_\ell}{\ell !} \right) \\
& = \bigg( \frac{d}{dT} \bigg) ^N \frac{1}{N!}
\det\left(I + T L_N (x,y) \bigg|_{L^2 \big([0,s], \frac{dx}{\sigma \pi} \big)}
\right) \\
& = \left(\bigg( \frac{d}{dT} \bigg) ^N \frac{1}{N!}
\det\left(I + T L_N (x,y) \bigg|_{L^2 \big([0,s], \frac{dx}{\sigma \pi} \big) }
\right)\right)_{T= -1} \\
\endaligned
$$
as claimed.
\vskip .5cm
We are left with $1 \leq n \leq N-1$. In this case,
the probability in question is
$$
\frac{1}{n!(N-n)!} \int\limits_{[0,s]^n \times (s,\sigma \pi]^{N-n}} \;
D_{N,N} (x) \; \prod_{m=1}^N \; \frac{dx_m}{\sigma \pi}
$$
where
$$
\spreadlines{3\jot}
\aligned
D_{n,N } (x_1 , \dots , x_n ) & = \text{det}_{n\times n}
\big( L_N (x_i , x_j) \big)\; . \quad \text{So the above} \\
& = \frac{1}{n!(N-n)!} \int\limits_{[0,s]^n \times [0, \sigma \pi ] ^{N-n}} \;
\prod_{i= n+1} ^N \; \big( 1-I (x_i ) \big)
D_{N,N} (x) \; \prod_{m} \; \frac{dx_m}{\sigma \pi} \\
& = \frac{1}{n!(N-n)!} \sum_{k=0}^{N-n} (-1)^k
\sum \Sb J \subset \{n+1 , \dots , N \} \\
|J|=k \endSb \times \\
& \qquad \quad \int\limits_{[0,s]^n \times [0, \sigma \pi ]^{N-n}}
\; \prod_{j \in J} \; I (x_j) D_{N,N} (x) \;
\prod_m \; \frac{dx_m}{\sigma \pi } \; .
\endaligned
$$
Using the $\sum_N$--invariance of $D_{N,N} (x_1 , \dots , x_N)$,
the integral above is independent of the particular choice of the subset $J$
of the given cardinality $k$, so it is equal to
$$
\int\limits_{[0,s]^{n+k} \times [0, \sigma \pi ]^{N-n-k}}
D_{N,N} (x) \; \prod_m \; \frac{dx_m}{\sigma \pi} \; .
$$
Now apply Lemma \ \ \ \ to integrate out the unrestricted variables
$x_i \, , i > n+k$ yielding
$$
(N-n-k)! \int\limits_{[0,s]^{n+k}} D_{n+k,N}
(x_1 , \dots , x_{n+k} ) \;
\prod_{m=1}^{n+k} \; \frac{dx_m}{\sigma \pi}
= (N-n-k)! A_{n+k} \; .
$$
Thus $P_n(s,G(N))$ is equal to
$$
\spreadlines{2\jot}
\aligned
\frac{1}{n!(N-n)!} \sum _{k=0}^{N-n} (-1)^k \binom {N-n} k
(N-n-k)! A_{n+k} & = \sum_{k=0}^{N-n} (-1)^k
\left. A_{n+k} \right/ n!k! \\
&= \sum_{k=0}^\infty (-1)^k \left. A_{n+k} \right/ n!k! \\
&= \sum_{k=0}^ \infty (-1)^k \binom {n+k} n \frac{A_{n+k}}{(n+k)!}
\endaligned
$$
which is the value at $T= -1$ of
$$
\spreadlines{1\jot}
\align
&= \sum_{k=0}^\infty T^k \binom {n+k} n \frac{A_{n+k}}{(n+k)!} \\
&= \left(\frac{d}{dT} \right)^n \frac{1}{n!}
\sum_{k=0}^\infty \frac{T^{n+k} A_{n+k}}{(n+k)!} \\
&= \left(\frac{d}{dT}\right)^n \frac{1}{n!}
\sum_{\ell =n}^\infty \frac{T^\ell A_\ell}{\ell !} \\
& = \left( \frac{d}{dT}\right)^n \frac{1}{n!}
\sum_{\ell = 0}^\infty \frac{T^\ell A_\ell}{\ell !} \\
& = \left(\left( \frac{d}{dT} \right)^n \frac{1}{n!}
\det \big(I+T L_N (x,y)\big) \bigg|_{L^2 \big([0,s] , \frac{dx}{\sigma \pi}\big)}
\right)_{T= -1}
\endalign
$$
This completes the proof of Proposition 5 bis.
\vskip .5cm
As a consequence of Proposition 5, we learn that the degrees in $T$ of the
polynomials
$E(N,T,s), \; E_- (2N+1, T, s), \; E_+ (2N -1 , T, s), \; E_-(2N, T,s)$
and $E_+(2N, T,s)$ are all equal to $N$. Indeed, this follows from the
interpretation of the $T$ derivatives of the first four as
$P_n(s,G(N))$. For $n > N$, $P_n(s, G(N)) \equiv 0$ while for
$n = N$ and $s = \sigma \pi$, $P_n (\sigma \pi , G(N)) = 1$.
The case of the degree of the fifth follows from the first and the fourth
and the identity (5.41). In fact, the identity (5.41), together with
Proposition 5, implies the following striking relation.
\vskip .5cm
\proclaim{Corollary 6}
For $N \geq 2$ and $s \in [0, \pi]$
$$
\align
P_0 (2s, U(2N-1)) & = P_0 (s, SO(2N)) \times P_0 (s, USP(2N-2)) \\
& = P_0 (s, SO(2N)) \times P_0 (s, O_-(2N)) \; .
\endalign
$$
More generally, for each $n \geq 0$,
$$
\align
P_n(2s, U(2N-1)) & = \sum_{a+b=n} P_a (s, SO(2N)) P_b (s, USP(2N-2)) \\
&= \sum_{a+b= n} P_a (s, SO(2N)) P_b (s, O_- (2N)) \; .
\endalign
$$
\endproclaim
\noindent The probabilistic interpretation shows that $0 \leq E(N,-1, s) \leq 1$
and also $0 \leq E_\pm (N, -1, s) \leq 1$. For $E_+ (2N-1, -1,s)$
one can give a much better upper bound by direct examination of the integral
defining $P_0(s, SO(2N))$. This is the analogue of the estimation by
Mehta \cite{MEH2} for the related probability for the Gaussian
orthogonal ensemble. The following is the crucial inequality which
allows us to suitably bound the tails of $\mu_a$.
\vskip .5cm
\proclaim{Proposition 7}
For $N \geq 2$ and $s \in [0, \pi]$
$$
P_0 (s, SO (2N)) \leq \bigg(1 - \bigg(\frac{s}{\pi}\bigg)^2 \bigg)^{N^2-N} \; .
$$
\endproclaim
\vskip .3cm
\demo{Proof}
To prove this estimate, we go back to the very first (and standard)
expression for the Haar measure on $SO(2N)$, viz \hskip .7in of Section 2.
We have
$$
P_0(s,SO(2N)) = \frac{2}{N!} \int_s^\pi \dotsi \int_s^\pi \prod_{i<j}
\big( 2\cos x_i - 2\cos x_j \big)^2 \; \prod_{m=1}^N \frac{dx_m}{2\pi} \; .
$$
\enddemo
\vskip .5cm
\proclaim{Proposition 8}
Fix $\alpha$ bounded. Then, uniformly on compacta in $(T,s)$,
$$
\lim_{N \rightarrow \infty} E \left(N,T, \frac{2\pi s}{N+\alpha} \right) = E(T,s)
\qquad \text{and} \qquad
\lim_{N \rightarrow \infty} E_{\pm} \left(N,T, \frac{2\pi s}{N+\alpha} \right)
= E_{\pm}(T,s) \; .
$$
\endproclaim
\demo{Proof}
By the coefficient bounds of Lemmas 1 and 4, given $m>0$ and $\epsilon >0$,
there exists $L$ such that in the region $(|T| \leq m, |s| \leq m)$,
each of the functions in question, namely $E(T,s)$ or $E_\pm (T,s)$
or one of its finite $N$ approximants with $N$ large enough, is
approximated within $\epsilon$ by the sum of its first $L$ terms as a
series in $T$ with coefficients functions of $s$. So we need only
prove that the coefficients of individual powers of $T$ converge
uniformly on compact subsets of the $s$--plane. Let us do this explicitly
for say $E_\pm (T,s)$, the $E(T,s)$ case being entirely similar.
Fix $k \geq 1$. The coefficient $e_{\pm,k}(s)$ of $T^k/k!$ in
$E_\pm ( T,s)$ is
$$
\int_0^1 \dotsi \int_0^1 \text{det}_{k\times k} \big(sK_\pm
(sy_i, sy_j) \big) dy_1 \cdots dy_k \; .
$$
The coefficient of $T^k/k!$ in $E_\pm \big(N,T, 2\pi s/(N+\alpha) \big)$ is
$$
\int_0^1 \dotsi \int_0^1 \text{det}_{k\times k} \left(
\frac{s}{N+ \alpha} S_{\pm,N} \left(
\frac{2\pi sy_i}{N+ \alpha} \, , \frac{2\pi sy_j}{N+ \alpha}
\right)\right) dy_1 \cdots dy_k \; .
$$
The determinant is integrated over a compact region,
so it suffices to check uniform convergence, for
$y \in [0,1]^k$ and $s$ in a compact set,
of the above determinant to
$$
\text{det}_{k \times k} \big( sK_\pm (sy_i, sy_j) \big) \; .
$$
For this it suffices for the individual entries
$$
\frac{s}{N+ \alpha} S_{\pm,N} \left(\frac{2\pi sy_i}{N+ \alpha} \, , \,
\frac{2\pi sy_j}{N+\alpha} \right)
$$
to converge to $sK_\pm (sy_i , sy_j)$. This reduces to the
elementary well known limit
$$
\lim_{N\rightarrow \infty} \frac{1}{N+\alpha} S_N \left(\frac{2\pi s}{N+\alpha} \right)
= \lim_{N \rightarrow \infty} \frac{\sin \left(\frac{\pi s N}{N+\alpha} \right)}
{(N+ \alpha ) \sin \big(\pi s /( N + \alpha)\big)} = \frac{\sin \pi s}{\pi s} \; ,
$$
which completes the proof of Proposition 8.
\enddemo
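The rate in this elementary limit can be sketched numerically (our illustration; the sample values of $s$ and $\alpha$ are arbitrary).

```python
import math

def scaled(N, s, alpha):
    # (1/(N+alpha)) S_N(2 pi s/(N+alpha))
    x = 2 * math.pi * s / (N + alpha)
    return math.sin(N * x / 2) / ((N + alpha) * math.sin(x / 2))

def sinc(s):
    return math.sin(math.pi * s) / (math.pi * s)

for alpha in (-1.0, 0.0, 1.0):
    for s in (0.5, 1.3, 2.7):
        errs = [abs(scaled(N, s, alpha) - sinc(s)) for N in (10, 100, 1000)]
        assert errs[0] > errs[-1]   # the error shrinks as N grows
        assert errs[-1] < 1e-3
```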
\vskip .5cm
From Propositions 5 and 8 we have for $s \geq 0$ and $n \geq 0$,
$$
\lim_{N\rightarrow \infty} P_n \left(
\frac{2\pi s}{N} \, , \, U(N) \right) = E_n (s)
\tag5.48
$$
$$
\lim_{N \rightarrow \infty} P_n \left( \frac{\pi s}{N+ 1/2} \, , \, SO(2N+1) \right)
= E_{-,n} (s)
\tag5.49
$$
$$
\lim_{N \rightarrow \infty} P_n \left( \frac{\pi s}{N} \, , \, USP(2N)\right)
= \lim_{N \rightarrow \infty} P_n \left(
\frac{\pi s}{N+1} , \, O_-(2N+2)\right) = E_{-,n}(s)
\tag5.50
$$
$$
\lim_{N\rightarrow \infty} P_n \left( \frac{\pi s}{N} \, , \, SO(2N) \right)
= E_{+,n}(s) \; .
\tag5.51
$$
In particular, since the $P_n$'s are probabilities, we learn from
(5.48), (5.49), or (5.50) that
$$
0 \leq E_0(s) \leq 1 \qquad \text{and} \qquad 0 \leq E_{-,0} (s) \leq 1 \; .
\tag5.52
$$
While from (5.51) and Proposition 7
$$
E_{+,0} (s) \leq \lim_{N \rightarrow \infty}
\big(1- (s/N)^2\big)^{N^2-N} = e^{-s^2}
\tag5.53
$$
Now (5.52), (5.53) and (5.31) imply that for $s \geq 0$
$$
E_0 (2s) \leq e^{-s^2} \; .
\tag5.54
$$
We can now establish the main tail estimate for $\mu_1$:
\vskip .5cm
\proclaim{Proposition 9}
For $s \geq 0$
$$
\text{tail}_{\mu_1} (s) \leq \frac{4}{3} e^{-s^2/8} \; .
$$
\endproclaim
\vskip .5cm
\demo{Proof}
From (5.21) we have
$$
\text{tail}_{\mu_1} (s) = - \frac{dE_0}{ds} \; .
$$
Thus
$$
\align
E_0(s) - E_0 (s+1) & = \int_s^{s+1} - \frac{dE_0}{dt}\, dt \\
& = \int_s^{s+1} \text{tail}_{\mu_1} (t)\, dt \\
& \geq \text{tail}_{\mu_1} (s+1)
\endalign
$$
($\text{tail}_{\mu_1} (t)$ is nonincreasing in $t$).
So
$$
E_0(s) \geq E_0 (s) - E_0 (s+1) \geq \text{tail}_{\mu_1} (s+1) \; ,
$$
hence applying (5.54) we get
$$
\text{tail}_{\mu_1} (s+1) \leq e^{-s^2/4} \; .
\tag5.55
$$
This bound for $s \geq 1$, combined with the
trivial bound $\text{tail}_{\mu_1} (s) \leq 1$ for $0 \leq s \leq 1$, easily
implies the bound asserted in Proposition 9.
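For completeness, here is the elementary verification (a sketch; the constants are not optimal). For $s \geq 1$, (5.55) gives $\text{tail}_{\mu_1}(s) \leq e^{-(s-1)^2/4}$, and
$$
\frac{(s-1)^2}{4} - \frac{s^2}{8} = \frac{(s-2)^2 - 2}{8} \geq - \frac{1}{4} \; ,
$$
so that $\text{tail}_{\mu_1}(s) \leq e^{1/4} e^{-s^2/8} \leq \frac{4}{3} e^{-s^2/8}$, since $e^{1/4} < \frac{4}{3}$. For $0 \leq s \leq 1$, the trivial bound suffices, as $\frac{4}{3} e^{-s^2/8} \geq \frac{4}{3} e^{-1/8} > 1$.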
\enddemo
\vskip .5cm
The above estimate for $\text{tail}_{\mu_1}$ implies a similar
estimate for $\text{tail}_{\mu_a} (s)$ as follows:
\vskip .5cm
\proclaim{Proposition 10}
For $a \geq 1$ and $s \geq 0$
$$
\text{tail}_{\mu_a}(s) \leq \frac{4a}{3} \exp
\left( - \frac{s^2}{8a^2} \right) \quad .
$$
\endproclaim
\vskip .3cm
\demo{Proof}
For any $A\in U(N)$ with eigenvalues
$0 \leq \phi_1 \leq \phi_2 \leq \cdots \leq \phi_N < 2 \pi$ we have
$$
\bigg|\left\{ 1 \leq j \leq N \bigg| \frac{N}{2\pi} \big(
\phi_{j+a} (A) -\phi_j (A) \big) \geq a \, s \right\} \bigg|
\leq a \bigg| \left\{ 1 \leq j \leq N \bigg|
\frac{N}{2\pi} \big(\phi_{j+1} (A) - \phi_j (A)\big) \geq s \right\} \bigg| \; .
$$
Dividing by $N$, integrating with respect to $A$, and letting $N\rightarrow \infty$
yields
$$
\align
\text{tail}_{\mu_a} (a \, s ) & \leq a \; \text{tail}_{\mu_1} (s) \\
& \leq \frac{4a}{3} e^{-s^2/8a^2} \; .
\endalign
$$
\enddemo
\vskip .5cm
With this we can finally establish the desired bound for
$s_M $ that was used in the proof in Section 4. That is,
if $s_M$ is such that $\mu_a[0, s_M ] = 1-1/M$, or
$\text{tail}_{\mu_a} (s_M) = \frac{1}{M}$, then according to
Proposition 10
$$
\frac{1}{M} \leq \frac{4a}{3} e^{-s_M^2/8a^2} \; .
$$
That is
$$
s_M \leq 3a \sqrt{\log \frac{4Ma}{3}} \; .
\tag5.56
$$
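In detail, (5.56) follows by exponentiating: the previous inequality gives $e^{s_M^2/8a^2} \leq \frac{4aM}{3}$, hence
$$
s_M \leq 2 \sqrt{2} \, a \sqrt{\log \frac{4Ma}{3}} \leq 3a \sqrt{\log \frac{4Ma}{3}} \; ,
$$
since $2\sqrt{2} < 3$.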
\vskip .5cm
\head{Supplement to Section 5}\endhead
In the supplement to this Section we give a more conceptual proof of the
relation between the universal measures $\mu_a$ and Fredholm determinants
(viz. Proposition 3). In fact, in the special case of $U(N)$, the measure
$E(\mu_a (A, U(N)))$ can be expressed as a (finite) Fredholm determinant
for each $N$. In the limit as $N \rightarrow \infty$ this relation reproduces
Proposition 3. We also pursue the probabilities $P_n(s,G(N))$
further and establish that the mean of $\mu_a$ is indeed $a$.
\vskip .5cm
%\head{Supplement to Section (cont'd)}\endhead
We begin by introducing some probability measures which are closely
related to $P_n(s,G(N))$ and their scaling limits.
As before, if $A\in G(N)$ we let
$$
0 \leq \phi_1 \leq \phi_2 \leq \cdots \leq \phi_N \leq \sigma \pi
\tag"S.1"
$$
denote its sequence of eigen--angles. We renormalize
these angles by setting
$$
\theta_n = \frac{(N+ \lambda) \phi_n}{\sigma \pi}
\tag"S.2"
$$
for $n=1 , \ldots , N$. So
$$
0 \leq \theta_1 \leq \theta_2 \leq \cdots \leq \theta_N \leq N + \lambda \; .
\tag"S.3"
$$
Note that the normalization of multiplying by $(N + \lambda)/\sigma \pi$
is not the only natural one. One could normalize so that the mean of
$\phi_n$ over $G(N)$ is equal to 1.
Asymptotically as $N \rightarrow \infty$, we will see that this
amounts to the normalization $\widetilde{\theta}_n = \alpha_n N$ for suitable
constants $\alpha _n$. The limiting distributions of the $\theta_n$'s are
not universal, which is an interesting and useful feature.
\vskip .5cm
It is worth pointing out that unlike the normalized spacings
$\Delta_{j,n}$, the above normalized angles of an $A\in G(N)$
depend on $A$ as an element of $G(N)$, not just as an
element of the ambient unitary group. For example, in
$SO(2N+1)$ every element $A$ has $1$ as an eigenvalue, and
so there is a shift in numbering:
$\theta_n (A \; \text{in} \; SO(2N+1)) = \theta_{n+1} (A \; \text{in} \; U(2N+1))$
for $1 \leq n \leq N$. Similarly in $O_-(2N+2)$ every element $A$
has both $\pm 1$ as eigenvalues, and again there is a shift of numbering,
$\theta_n(A \; \text{in} \; O_- (2N+2))= \theta_{n+1} (A \; \text{in} \; U(2N+2))$
for $1 \leq n \leq N$. In the case of $A$ in $USP(2N)$ or $SO(2N)$ the
eigenvalue $1$ occurs with an even multiplicity $2k$, and there is a shift
depending on this multiplicity:
$\theta_n (A \; \text{in} \; SO(2N) \; \text{or} \; USP(2N)) = \theta_{n+k} (A \; \text{in} \; U(2N))$ for $1 \leq n \leq N$.
A redeeming feature is that for $G(N)$ any one of $U(N)$, $USP(2N)$, or
$SO(2N)$, the set of $A$'s for which $1$ is not an eigenvalue is of
full measure, while for $SO(2N+1)$ or $O_-(2N+2)$, the set of $A$'s for
which $1$ is an eigenvalue is of full measure.
\vskip .5cm
Define the probability measures $\nu_n (G(N))$ ``the $n^{th}$ normalized eigenvalue
distribution'' by
$$
\nu_n(G(N)) = (\theta_n)_* \big(\text{Haar}_{G(N)}\big) \; .
\tag"S.4"
$$
That is, the cumulative distribution function, $CDF_{\nu_n} (G(N))$
is given by
$$
CDF_{\nu_n (G(N))} (s) = \text{Haar}_{G(N)} \big\{ A\in G(N) \big| \theta_n \leq s \big\} \; .
\tag"S.5"
$$
The measures $\nu_n(G(N))$ are closely related to the probabilities
$P_n(s,G(N))$.
\vskip .5cm
\proclaim{Lemma 1} For $N \geq 2$ and $G(N)$ any one of
$U(N)$, $SO(2N+1)$, $USP(2N)$, $SO(2N)$, $O_-(2N+2)$, and
$s\in (0, N+ \lambda)$ we have
\roster
\item $P_0 \big(\frac{\dsize{s \sigma \pi}}{N+ \lambda} , \; G(N)\big) = \text{tail}_{\nu_1 (G(N))} (s):= 1- CDF_{\nu_1 (G(N))}(s)$
\vskip .02cm
\item For $1 \leq n \leq N-1$
%\vskip .1cm
$$
P_n \big(\frac{\dsize{s \sigma \pi}}{N+\lambda} , \; G(N)\big) = \text{tail}_{\nu_{n+1}(G(N))}(s)
- \text{tail}_{\nu_n (G(N))} (s)
$$
%\vskip -.1cm
\item $P_N \big( \frac{\dsize{s \sigma \pi}}{N+ \lambda} , G(N) \big) = CDF_{\nu_N
(G(N))}(s)$
\item $\text{tail}_{\nu_n (G(N))} (s)= \dsize{\sum_{j=0}^{n-1}} P_j$
$\big(\frac{\dsize{s \sigma \pi}}{N+\lambda} , \; G(N)\big)$
\endroster
\endproclaim
\demo{Proof}
The set $\{A \in G(N) | \theta_n > s \}$ is equal to
$\left\{A \in G(N)\right.$ with at most $n-1$ normalized angles in
$\left .[0,s]\right\}$
$$
= \coprod_{j=0}^{n-1} \{ A \in G(N) \; \text{with exactly} \;
j \; \text{normalized angles in} \; [0,s] \} \;.
$$
Its complement $\{A \in G(N) | \theta_n \leq s \}$
$$
= \coprod_{j \geq n} \{ A \in G(N) \; \text{with exactly $j$ normalized angles in} \;
[0,s] \} \; .
$$
Taking Haar measure of these sets, and recalling the definition
(5.42) and that for $j > N$, $P_j \big(\frac{\dsize{s \sigma \pi}}{N+ \lambda} , G(N)\big) =0$
(the set being empty) gives our assertions.
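As a consistency check, note that (4) also follows by telescoping (1) and (2):
$$
\text{tail}_{\nu_n (G(N))} (s) = \text{tail}_{\nu_1 (G(N))} (s)
+ \sum_{m=1}^{n-1} \big( \text{tail}_{\nu_{m+1} (G(N))} (s) - \text{tail}_{\nu_m (G(N))} (s) \big)
= \sum_{j=0}^{n-1} P_j \left( \frac{s \sigma \pi}{N+\lambda} \, , \; G(N) \right) \; .
$$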
\enddemo
\vskip .5cm
In view of Proposition 5, we have
\proclaim{Corollary 2}
For $N \geq 2$, $G(N)$ and $s$ as above, we have
$$
\frac{1}{n!} \left(\frac{d}{dT}\right)^n E_{\text{sgn} (\epsilon)}
\left( \rho N + \tau , \; T , \; \frac{s \sigma \pi}{N+ \lambda} \right)
\bigg|_{T = -1}
$$
$$
\aligned
= \left\{
\aligned
\text{tail}_{\nu_1(G(N))} (s) &\quad \text{if} \; n=0 \\
\text{tail}_{\nu_{n+1} (G(N))} (s) - \text{tail}_{\nu_n (G(N))} (s) &
\quad \text{if} \; 1 \leq n \leq N-1 \\
CDF_{\nu_N (G(N))} (s) & \quad \text{if} \; n = N \\
0 &\quad \text{if} \; n > N \endaligned \right. \;.
\endaligned
$$
\endproclaim
\vskip .5cm
We turn now to the scaling limits as $N \rightarrow \infty$,
that is the limit of $\nu_n(G(N))$ as $N \rightarrow \infty$.
\vskip .5cm
\proclaim{Corollary 3}
For $s \geq 0$, $n \geq 1$ and $G(N)$ any one of $U(N)$,
$SO(2N+1)$, $USP(2N)$, $SO(2N)$, $O_-(2N+2)$ we have
$$
\aligned
&(1) \; \; \lim_{N\rightarrow \infty} \;
\text{tail}_{\nu_n (G(N))}(s) = \sum_{j=0}^{n-1}
E_{\text{sgn}(\epsilon),j} (s) \\
&(2) \; \; \lim_{N\rightarrow \infty} CDF_{\nu_n (G(N))}(s) =
\sum_{j\geq n} E_{\text{sgn} (\epsilon), j} (s) \; .
\endaligned
$$
\endproclaim
\demo{Proof}
(1) follows from Lemma 1, Corollary 2, and Propositions 6 and 8
(or rather their consequences (5.48)--(5.51)).
(2) follows from (1) and the relation
$$
1= \sum_{n=0}^{\infty} E_{\text{sgn} (\epsilon) ,n} (s)
\tag"S.6"
$$
The last is immediate from (5.12) and (5.32) evaluated at
$T=0$ (with $\det I$ equal to 1). Note in passing that the
pre--limit version of (S.6), viz
$$
1= \sum_{n=0}^{\infty} P_n (s, G(N)) , \;
0\leq s \leq \sigma \pi
\tag"S.7"
$$
comes from simply interpreting the statement that the sets
$\left\{A\in G(N) |A \right.$ has exactly $n$ eigenangles in $\left. [0,s] \right\}$
form a partition of $G(N)$.
\enddemo
\vskip .5cm
\proclaim{Proposition 4}
Let $\nu_n$ and $\nu_{\pm , n} ( n \geq 1)$
be given by
$$
\aligned
&CDF_{\nu_n}(s) = 1 - \sum_{j=0}^{n-1} E_j(s) \\
&CDF_{\nu_{\pm,n}}(s) = 1 - \sum_{j=0}^{n-1} E_{\pm,j}(s) \, ,
\endaligned
$$
$s \geq 0$. Then $\nu_n , \; \nu _{\pm,n}$ are probability measures
on $[0,\infty)$ and
$$
\aligned
&(1) \; \; \lim_{N\rightarrow \infty} \nu_n (U(N)) = \nu_n \\
&(2) \; \; \lim_{N\rightarrow \infty} \nu_n (SO(2N+1)) = \nu_{-,n} \\
&(3) \; \; \lim_{N\rightarrow \infty} \nu_n (USP(2N)) = \nu_{-,n} \\
&(4) \; \; \lim_{N\rightarrow \infty} \nu_n (SO(2N)) = \nu_{+,n} \\
&(5) \; \; \lim_{N\rightarrow \infty} \nu_n (O_-(2N+2)) = \nu_{-,n} \; .
\endaligned
$$
The convergence here is convergence in distribution, i.e.
convergence of the $CDF$'s for each $s$.
\endproclaim
\demo{Remark}
The probability measures $\nu_n$, $\nu_{\pm, n}$ are all different.
This is the nonuniversal feature mentioned earlier, which is
useful in distinguishing the $G(N)$--limits. Their means (which are not
equal to 1) are an artifact of our scaling in (S.2). As mentioned above, one can
scale in (S.2) by $\alpha_{n,N}^{-1}$, where $\alpha_{n,N}$ is the mean
of $\phi_n$ over $G(N)$. Doing so leads to the measures
$\tilde{\nu}_n$ and $\tilde{\nu}_{\pm, n}$, where $\tilde{\nu}$ is the
measure $\nu$ rescaled to have mean equal to $n$. Graphs of $\tilde{\nu}_n$
and $\tilde{\nu}_{\pm,n}$ are displayed in Figures 1, 2 and 3.
\enddemo
\demo{Proof of Proposition 4}
In fact we have established everything in the Proposition except that the
$\nu$'s are probability measures. A priori, being the limits of probability
measures on $\Bbb R _{\geq 0}$, the $\nu_{\epsilon ,n}$'s are nonnegative measures
of total mass at most equal to 1. What needs proof is that they do indeed
have total mass 1, or, in view of the formulae in their definition in
Proposition 4, that for any fixed $n \geq 0$:
$$
\aligned
\left.
\aligned
& \lim_{s \rightarrow \infty} E_{n}(s) =0 \\
& \lim_{s\rightarrow \infty} E_{\pm,n} (s) = 0 \endaligned
\qquad \right\}
\endaligned \tag"S.8"
$$
The next few pages will be concerned with a proof of (S.8).
We give a proof for $E_n (s)$ and this proof works equally
well for $E_{\pm,n}(s)$.
\enddemo
\vskip .5cm
Recall that
$$
E(T,s) = \det (I+TK_s)
\tag"S.9"
$$
where
$$
K_s = K \bigg|_{L^2 [-s/\ssize{2}, s/\ssize {2}]} \; .
\tag"S.10"
$$
From a general analysis of such self--adjoint integral operators
$K$ with entire kernels (see \cite{H--T}) we know that the eigenvalues of
$K_s$ are real and if they are denoted by $\lambda_j (s)$, $j=1, \; 2, \ldots$
with
$$
|\lambda_1| \geq |\lambda_2 | \geq \cdots
\tag"S.11"
$$
then
$$
|\lambda_j (s) | \leq C_s \exp (- \frac{j \log j}{8} ) \; .
\tag"S.12"
$$
Here $C_s$ is a constant depending on $s$. The product
$$
E(T,s) = \prod_{j=1}^\infty (1+ T \lambda_j (s) )
\tag"S.13"
$$
is therefore rapidly convergent. We claim that
\roster
\item"{(A)}" for any $s>0$ and any $j$, $0< \lambda_j(s) < 1$, and
\item"{(B)}" for any fixed $j$, $\lim_{s\rightarrow \infty} \lambda_j (s) =1$.
\endroster
\vskip .5cm
To see these, consider first the operator
$K: L^2 ( \Bbb R ) \rightarrow L^2 ( \Bbb R )$.
Taking the Fourier transform we see that $K$ is isometric to
$\widetilde{K}: L^2 ( \widehat{\Bbb {R}} ) \rightarrow L^2 ( \widehat{\Bbb {R}} )$
where
$$
\widetilde{K} \hat{f} (\xi) = I_{[-1,1]} (\xi) \hat{f} (\xi)
\tag"S.14"
$$
$I_A$ being the characteristic function of the set $A$.
Hence $K$ is isometric to the projection operator $\widetilde{K}$ which
projects $L^2 (\widehat{\Bbb {R}} ) \rightarrow L^2 [-1, 1]$ by
truncating the functions outside $[-1,1]$. Thus, $K$ is a projector
$(K^2 = K, \; K = K^* , \; \Vert K \Vert = 1 )$
and its spectrum is $\{0,1\}$ and in fact each of 0 and 1 is of
infinite multiplicity. Let $L_s: L^2 (\Bbb R ) \rightarrow L^2 (\Bbb R )$
be defined by
$$
L_s = P_s K P_s
\tag"S.15"
$$
where $P_s$ is the projector of $L^2 (\Bbb R )$ to
$L^2 [-\ssize{s}/\ssize{2}, \ssize{s}/\ssize{2}]$. So
$$
\aligned
\left.
\aligned
&L_s \bigg|_{L^2 [-s/2, s/2]} = K_s \\
&L_s \bigg|_{(L^2 [-s/2, s/2])^\perp} = 0
\endaligned \qquad \right\}
\endaligned \tag"S.16"
$$
Now it is clear that as $s \rightarrow \infty$
$$
L_s \rightarrow K \qquad \text{strongly}
\tag"S.17"
$$
(i.e. $L_s f \rightarrow Kf$ for any $f \in L^2 ( \Bbb R )$).
Let $\lambda$ be an eigenvalue of $K_s$ i.e.
$$
(K_s f) (x) =
\lambda f(x) \qquad \text{for} \quad
x\in [-\ssize{s} / \ssize{2}, \ssize{s} / \ssize{2}]
\quad \dsize{(f \neq 0)} \; .
$$
Extend $f$ to $\Bbb R$ by setting it equal to zero outside
$[- \ssize{s} / \ssize {2}, \ssize{s} / \ssize{2} ]$, then $L_s f = \lambda f$,
and hence
$$
\lambda \left< f, f \right> =
\left< P_s K P_s f, f \right> = \left< K(P_s f ) , (P_s f) \right> \; .
\tag"S.18"
$$
Since $K$ is a projector, we have that for any $g \not\equiv 0$
$$
0 \leq \frac{\left< Kg, g \right>}{\left< g,g \right>} \leq 1
\tag"S.19"
$$
Moreover, equality occurs on the left iff $g \in (\text{Range} \; K)^{\perp}$
and equality on the right iff $g \in \text{Range}\; K$.
Hence from (S.18) we see that $0 \leq \lambda \leq 1$.
If $\lambda = 1$, then in fact $P_s f \in \text{Range} \; (K)$
which implies that $\widehat{P_s f} (\xi)$ is supported in
$[-1, 1]$. This means that both $(P_s f) (x)$ and $\widehat{P_s f} (\xi)$ are
of compact support (and not $\equiv 0$) which is a contradiction.
Hence $\lambda =1$ is not possible. If $\lambda = 0$, then
$P_s f \in (\text{Range} \; K )^\perp$ which implies that
$\widehat{P_s f} (\xi) \equiv 0$ for $-1 \leq \xi \leq 1$.
But $\widehat{P_s f} (\xi)$ is the Fourier Transform of a
function of compact support, and so is entire (and is not $\equiv 0$).
This contradicts the above so that $\lambda = 0$ is also not allowed.
Hence $0 < \lambda < 1$, and we have established (A) above.
\vskip .5cm
To prove (B), we use (S.17). From general functional analysis it follows
that, since $L_s \rightarrow K$ strongly, every point of the spectrum of $K$
is a limit of points of the spectrum of $L_s$ as $s \rightarrow \infty$ (see \cite{R--S}).
Hence $\lambda_1 (s) \rightarrow 1$ as $s \rightarrow \infty$.
In fact, the above is true with multiplicities as well (precisely, the spectral
projections
$ E_{[1 -\epsilon, 1+ \epsilon]} (L_s) \rightarrow E_{[1 - \epsilon , 1+ \epsilon]}(K)$
strongly for any $\epsilon > 0$ (see \cite{R--S}); since the latter is infinite dimensional,
it follows that the former must grow unboundedly in dimension as
$s \rightarrow \infty$). Hence $\lambda _j (s) \rightarrow 1$
as $s \rightarrow \infty$ for any fixed $j$. This proves (B).
\vskip .5cm
We can now complete the proof of (S.8).
\vskip .5cm
For $|T +1 | \leq 1$ we have, using (A),
$$
|1+T \lambda _j (s) | \leq |1- \lambda_j (s) | + |\lambda_j (s) + T \lambda_j (s) |
\leq 1 - \lambda_j (s) + \lambda_j (s) | 1+ T| \leq 1 \; .
$$
Hence for $n\geq 0$ fixed, and $|1+T | \leq 1$
$$
|E(T,s) | = \prod_{j=1}^{\infty} | 1+ T \lambda _j (s) |
\leq \prod_{j=1}^{n+1} | 1+T \lambda _j (s) | \; .
$$
Let $\epsilon > 0$; then for $|1+T | = \epsilon$ we have, using (B),
$$
\lim_{s \rightarrow \infty} \quad \max_{|T+1|= \epsilon}
|E(T,s)| \leq \lim_{s \rightarrow \infty} \quad \max_{|T+1 | = \epsilon}
\quad \prod_{j=1}^{n+1} |1+T \lambda_j (s) | = \epsilon^{n+1} \; .
\tag"S.20"
$$
Applying Cauchy's inequalities to the analytic function (of $T$) $E(T,s)$,
expanded at $T = -1$ on the circle $|1+T | = \epsilon$, yields
$$
|E_n (s) | \leq \max_{|T+1| = \epsilon} \frac{|E(T,s)|}{\epsilon^n} \; .
$$
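Here Cauchy's inequality is the standard coefficient estimate for the Taylor expansion $E(T,s) = \sum_{n \geq 0} E_n (s) (1+T)^n$ at $T=-1$:
$$
E_n (s) = \frac{1}{2\pi i} \oint_{|1+T| = \epsilon} \frac{E(T,s)}{(1+T)^{n+1}} \, dT \; , \qquad
|E_n (s)| \leq \frac{1}{\epsilon^n} \max_{|1+T| = \epsilon} |E(T,s)| \; .
$$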
Hence from (S.20)
$$
\lim_{s \rightarrow \infty} | E_n (s) | \leq \epsilon \; .
$$
Since $\epsilon$ is arbitrary, we have proven (S.8), and with it also
Proposition 4.
\vskip .5cm
The measures $\nu_n$, that is the scaling limits of the $n^{th}$ eigenvalue of
the $U(N)$'s, are related in a simple way to the universal spacing measures
$\mu_a = \mu_a (\text{univ})$ as follows:
\proclaim{Proposition 5}
$$
\hskip -4.0cm (i)\hskip 1.7in \frac{d}{ds} CDF_{\nu_1} = 1 - CDF_{\mu_1} = \text{tail}_{\mu_1}
$$
(ii) For $n \geq 2$
$$
\frac{d}{ds} CDF_{\nu_n} = CDF_{\mu_{n-1}} - CDF_{\mu_n}
$$
(iii) For $n\geq 1$
$$
\text{tail}_{\mu_n} = \sum_{j=1}^{n} \frac{d}{ds} CDF_{\nu_j}
$$
(iv)
$$
\text{tail}_{\mu_n} (s) = - \sum_{j=0}^{n-1}
(n-j) \frac{dE_j}{ds} \; .
$$
\endproclaim
\demo{Proof}
The relation (iv) was already established in (5.19). Substituting the
expression for $CDF_{\nu_n} (s)$ from Proposition 4 into (iii), one sees
that (iii) and (iv) are equivalent. (i) and (ii) follow by
inverting the relation (iii).
\enddemo
\vskip .5cm
If we integrate (iv) of Proposition 5 from $s$ to $\infty$ and use (S.8),
we get
$$
\int_s^\infty \; \text{tail}_{\mu_n} (t) dt =
- \sum_{j=0}^{n-1} (n-j) (E_j ( \infty ) - E_j (s) )
= \sum_{j=0}^{n-1} (n-j) E_j (s) \; .
$$
Setting $s=0$ and using the relations $E_0(0) =1$ and $E_j (0) = 0$
for $j\geq 1$ which are clear from definitions (5.8) and (5.13), yields
$$
\int_0^\infty \; \text{tail}_{\mu_n} (t) dt = n \qquad \text{for} \quad n\geq 1
\tag"S.21"
$$
With this, we can establish that the mean of $\mu_n$ is indeed $n$ (as was
promised after (4.21)). In view of (S.21), there is a sequence $x_j \rightarrow \infty$
s.t.
$$
\lim_{j\rightarrow \infty} x_j \; \text{tail}_{\mu_n} (x_j) = 0 \; .
\tag"S.22"
$$
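The existence of such a sequence is the standard fact that a nonincreasing integrable function on $[0, \infty)$ has $\liminf_{x \rightarrow \infty} x f(x) = 0$: if instead $x \, \text{tail}_{\mu_n}(x) \geq \delta > 0$ for all $x \geq x_0$, then
$$
\int_{x_0}^{X} \text{tail}_{\mu_n} (t) \, dt \geq \delta \int_{x_0}^{X} \frac{dt}{t} = \delta \log (X/x_0) \rightarrow \infty \; ,
$$
contradicting (S.21).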
Hence, if
$$
M_n = \lim_{j\rightarrow \infty} \int_0^{x_j} x \; d \mu_n (x)
\left( = \int_0^{\infty} x \, d \mu_n (x) \right)
$$
then
$$
\aligned
M_n &= \lim_{j\rightarrow \infty} - \int_0^{x_j} x \frac{d}{dx}
\text{tail}_{\mu_n} (x) dx \\
& = \lim_{j\rightarrow \infty}
\left( -x \; \text{tail}_{\mu_n}(x) \bigg|_0^{x_j} +
\int_0^{x_j} \; \text{tail}_{\mu_n} (x) dx \right)\\
& = \int_0^\infty \; \text{tail}_{\mu_n} (x) dx \; .
\endaligned
$$
From (S.21) it follows that $M_n = n$, i.e. that $\mu_n$ has mean $n$.
\vskip .5cm
To end the supplement, we give a direct and more conceptual proof of the
relation between the $\mu_n$'s and the $\nu_n$'s (and hence between
the $\mu_n$'s and $E_n (s)$). The relation can be derived at the
finite level for the family $U(N)$ (all of this is special to this
particular family). Consider the product group $U(1) \times U(N)$.
For $1 \leq n \leq N$, let
$F_n: U(1) \times U(N) \rightarrow \Bbb R_{\geq 0}$ be defined as follows:
$F_n (e^{i \phi}, A)$ is the normalized distance from
$e^{i \phi}$ to the $n^{th}$ eigenvalue of $A$ which one encounters
starting from $e^{i \phi}$ and walking counterclockwise around the
unit circle, measuring distances so that the circumference is equal to $N$.
\proclaim{Lemma 6}
The direct image of Haar measure on $U(1) \times U(N)$ under
$F_n$ is $\nu_n (U(N))$.
\endproclaim
\demo{Proof}
In terms of the $n^{th}$ normalized angle $\theta_n(A)$
attached to $A\in U(N)$, we have
$$
F_n (e^{i\phi}, A) = \theta_n (e^{-i \phi} A) \, .
$$
So if we denote by $\pi: U(1) \times U(N) \rightarrow U(N)$
the surjective homomorphism $\pi(e^{i\phi} , A) = e^{-i \phi} A$,
then we have the diagram
$$
\CD
U(1) \times U(N) @>F_n>> \Bbb R \\
@V{\pi}VV @| \\
U(N) @>{\theta_n}>> \Bbb R
\endCD
$$
Since $\pi_ \star \; \text{Haar}_{U(1) \times U(N)} = \text{Haar}_{U(N)}$
the assertion follows from the transitivity
of direct image
$$
F_{n\,\star} \, \text{Haar}_{U(1) \times U(N)} =
\theta_{n\,\star} \, \pi_\star \, \text{Haar}_{U(1) \times U(N)}
= \theta_{n\,\star} \, \text{Haar}_{U(N)} = \nu_n (U(N)) \; .
$$
\enddemo
Recall that we defined $\mu_n (U(N))$ to be
$$
\mu_n (U(N)) = E (\mu_n (A, U(N)), U(N)) \; .
$$
\proclaim{Proposition 7}
For any $N \geq 2$, and $s \geq 0$
$$
\text{tail}_{\nu_1 (U(N))} (s) =
\int_s^\infty \; \text{tail}_{\mu_1 (U(N))} (t) dt
$$
\endproclaim
\demo{Proof}
First of all both measures $\mu_1(U(N))$ and
$\nu_1(U(N))$ are supported in $[0,N]$ so
both sides vanish if $s > N$. So suppose that
$0 \leq s \leq N$. We first express both sides as integrals over
$U(N)$ and then we show that these have the same integrand.
\vskip .5cm
We begin with $\text{tail}_{\nu _1 (U(N))} (s)$.
We view $\nu_1(U(N))$ as $F_1 \star (\text{Haar}_{U(1) \times U(N)} ) $, thus by
definition
$$
\aligned
\vspace{.3cm}
\text{tail}_{\nu_1 (U(N))} (s) = \nu_1(U(N)) (s, \infty)
& = \text{Haar}_{U(1)\times U(N)}\quad
\foldedtext\foldedwidth{2.4in}{$\left\{ (e^{i \phi}, A) | \right.$ first eigenvalue past
$e^{i\phi}$ is at normalized distance $> s$ past $\left. e^{i\phi} \right\}$} \\
\vspace{.3cm}
&= \text{Haar}_{U(1) \times U(N)} \quad
\foldedtext\foldedwidth{2.4in}{$\left\{ (e^{i \phi} , A) | \; A \right.$ has no eigenvalue in
$\left. [\phi, \phi + \frac{ 2 \pi s}{N}] \right\}$} \\
&= \int_{U(N)} \text{Haar}_{U(1)} \quad
\foldedtext\foldedwidth{2.4in}{$\left\{ \phi | A \right.$ has no
eigenvalue in $ \left. [\phi , \phi + \frac{2 \pi s}{N} ] \right\} dA$}
\endaligned \; . \tag"S.23"
$$
\enddemo
\vskip .5cm
On the other hand by definition
$$
\text{tail}_{\mu_1 (U(N))} (t) =
\int_{U(N)} \text{tail}_{\mu_1 (A, U(N))} (t) dA \; .
$$
Hence
$$
\int_s^\infty \text{tail}_{\mu_1(U(N))} (t) dt =
\int_{U(N)} \left( \int_s^\infty \text{tail}_{\mu_1 (A,U(N))}
(t) dt \right) dA \; .
\tag"S.24"
$$
So the putative identity will follow, if we show the integrands in
(S.23) and (S.24) coincide. The set $U(N)^{\text{reg}}$ of
regular elements in $U(N)$, i.e. those with $N$ distinct
eigenvalues, is of full measure. So it suffices to show,
that for each such regular $A$, we have
$$
\aligned
&\text{Haar}_{U(1)} \quad \foldedtext\foldedwidth{2.4in}{$\left\{ \phi |A \right.$
has no eigenvalue in $ \left. [\phi , \phi + \frac{2\pi s}{N} ] \right\}$}
\; = \int_s^\infty \text{tail}_{\mu_1 (A, U(N))}(t) dt \; .
\endaligned \tag"S.25"
$$
Now for a regular element $A$ with
$$
0 \leq \phi_1 < \phi_2 < \cdots < \phi_N < 2 \pi , \; \phi_{N+1} = 2\pi + \phi_1 \; ,
$$
let
$$
s_i = \frac{N}{2\pi} (\phi_{i+1} - \phi_i ), \quad i = 1 , \; \ldots , N
$$
be the normalized spacings of $A$. Consider the intervals
$$
S_i = (\phi_i , \phi_{i+1} ] \subset U(1) , \, i = 1, \; \ldots , N \; .
$$
Thus $s_i$ is the normalized length of $S_i$, and $U(1)$ is the disjoint union
of the $S_i$'s. Hence
$$
\aligned
&\left\{ \phi | A \;
\text{has no eigenvalue in} \;
[\phi , \phi + \frac{2\pi s}{N} ] \right\} \\
&\qquad = \coprod_i \left\{ \phi \in S_i |A \; \text{has no eigenvalue in} \;
[\phi , \phi + \frac{2\pi s}{N} ] \right\} \; .
\endaligned
$$
But for $\phi$ in $S_i$ the first eigenvalue after $\phi$ is $\phi_{i+1}$
so the condition that $A$ have no eigenvalue in $[\phi , \phi + \frac{2 \pi s}{N} ]$
is the condition that the entire interval $[\phi , \phi + \frac{2\pi s}{N} ]$
lie in the interior of $S_i$. Now $S_i$ has Haar length $s_i/N$, while
$[\phi, \phi + \frac{2\pi s}{N} ]$ has length $s/N$.
So unless $s_i > s$ there are no intervals $[\phi , \phi + \frac{2\pi s}{N}]$
of Haar length $s/N$ in the interior of $S_i$.
If $s_i >s$, then there are such intervals and their starting points
can be any $\phi $ in $(\phi_i , \phi_{i+1} - \frac{2\pi s}{N} )$,
an interval of Haar length $\frac{s_i -s}{N}$. Thus we see that
$$
\aligned
\left\{ \phi | A \; \text{has no eigenvalue in} \; [\phi, \phi + \frac{2\pi s}{N} ] \right\}
= \coprod_{i, s_i > s} \left( \phi_i , \phi_{i+1} - \frac{2\pi s}{N} \right) \; .
\endaligned
$$
Taking Haar measure, we get
$$
\text{Haar}_{U(1)} \left\{ \phi | A \; \text{has no eigenvalue in}
\; [\phi , \phi + \frac{2\pi s}{N} ] \right\}
= \frac{1}{N} \sum_{i, s_i > s} (s_i -s ) \; .
\tag"S.26"
$$
Finally we compute the r.h.s. of (S.25) in terms of the $s_i$'s
$$
\aligned
\text{tail}_{\mu_1 (A, U(N))} (t) &= \mu_1 (A, U(N)) (t, \infty )\\
&= \frac{1}{N} \big| \big\{ i \; \big| \; s_i > t \big\} \big| \\
& = \frac{1}{N} \sum_i I_{[0, s_i )} (t) \; .
\endaligned
$$
Hence
$$
\aligned
\int_s^\infty \text{tail}_{\mu_1 (A, U(N))} (t) dt &= \int_s ^\infty \frac{1}{N}
\sum_i I _{[0,s_i)} (t) dt \\
&= \frac{1}{N} \sum_{i, s_i > s} (s_i - s )
\endaligned \tag"S.27"
$$
Since (S.26) coincides with (S.27) we have established (S.25), and with it
Proposition 7.
\vskip .5cm
We now state and prove the extension of Proposition 7 to $n \geq 2$.
\proclaim{Proposition 8}
For $N > k \geq 2 , s \geq 0$
$$
\text{tail}_{\nu_k (U(N))} (s) = \int_s ^\infty \left(
\text{tail}_{\mu_k(U(N))} (t) - \text{tail}_{\mu_{k-1} (U(N))} (t) \right) dt \; .
$$
\endproclaim
\demo{Proof}
We proceed as in the proof of Proposition 7.
\vskip .5cm
The finite $N$ identities in Propositions 7 and 8 lead to similar
relations in the $N \rightarrow \infty$ limit.
Since the mean of $\mu_1 (U(N))$ is equal to 1, we can rewrite the
relation in Proposition 7 as
$$
CDF_{\nu_1(U(N))}(s) = \int_0^s \text{tail}_{\mu_1 (U(N))} (t) dt
\tag"S.28"
$$
As $N \rightarrow \infty$ the l.h.s. converges to
$CDF_{\nu_1}(s)$ according to Proposition 4.
On the other hand as $N \rightarrow \infty$, we know
from Theorem 1 that $CDF_{\mu_1(U(N))}(t)$ converges to a limit called
$CDF_{\mu_1}(t)$. By the dominated convergence theorem, we conclude
that the r.h.s. of (S.28) converges to
$\int_0^s \text{tail}_{\mu_1}(t) dt$. Hence,
we have
$$
CDF_{\nu_1}(s) = \int_0^s \text{tail}_{\mu_1} (t) dt \; .
$$
Similarly the rest of Proposition 5, and hence also (5.19),
can be established this way.
\enddemo
\frenchspacing
\def\sqr#1#2{{\vcenter{\vbox{\hrule height.#2pt
\hbox{\vrule width.#2pt height#1pt \kern#1pt
\vrule width.#2pt}
\hrule height.#2pt}}}}
\def\square{\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3}
\widestnumber\key{99999999999}
%\baselineskip=22pt
\Refs
%\magnification=\magstep1
\ref \key{A--K} \by Altman, A. and Kleiman, S. \pages
\paper Introduction to Grothendieck Duality Theory
\yr1970
\jour Springer Lecture Notes in Mathematics 146
\endref
\ref \key{Ar} \by Artin, E. \pages 153-296
\paper Quadratische K\"orper im Gebiete der h\"oheren Kongruenzen, I, II
\yr1924 \vol19
\jour Math. Zeit. \endref
\ref \key{B--T--W} \by Basor, E., Tracy, C., and Widom, H.
\paper Asymptotics of level--spacing distributions for random matrices
\jour Phys. Rev. Lett. \vol 69, No 1 \yr 1992 \pages 5--8
\moreref \paper Errata \jour Phys. Rev. Lett. \vol 69 No.19 \yr 1992 \page 2880
\endref
\ref \key{Bott} \by Bott, R. \pages 203--248
\paper Homogeneous vector bundles
\yr1957 \vol 66
\jour Ann. of Math. \endref
\ref \key{Bour--L9} \by Bourbaki, N. \pages
\book Groupes et Alg\`ebres de Lie
\publaddr Chapitre 9, Masson, Paris \yr1982 \endref
\ref \key{C--F} \by Chai, C. L. and Faltings, G.
\book Degeneration of Abelian Varieties
\yr1990
\publ Springer--Verlag \endref
\ref \key{Chav} \by Chavdarov, Nick
\book The generic irreducibility of the numerator of
the zeta function in a family of curves with large monodromy
\publ Princeton University Ph.D. Thesis \yr1995 \endref
\ref \key{D--??} \by Deift, P. A. \pages
\paper
\yr
\jour \endref
\ref \key{Del--AFT} \by Deligne, P. \pages 168--232
\inbook Application de la formule des traces aux sommes trigonom\'etriques
\publ Cohomologie Etale (SGA 4 1/2)
\publaddr Springer Lecture Notes in Mathematics 569 \yr 1977 \endref
\ref \key{Del--CCI} \bysame
\inbook Cohomologie des intersections completes, Exp. XI
\publ Groupes de Monodromie en G\'eom\'etrie Alg\'ebrique (SGA 7 Part II)
\publaddr Springer Lecture Notes in Mathematics 340 \yr1973 \endref
\ref \key{Del--CEF} \bysame
\inbook Courbes elliptiques: formulaire (d'apr\`es J. Tate)
\publ Modular Functions of One Variable IV
\publaddr Springer Lecture Notes in Mathematics 476 \yr1975 \pages 53--73
\endref
\ref \key{Del--Mum} \by Deligne, P. and Mumford, D. \pages 75--109
\paper Irreducibility of the space of curves of given genus
\yr1969
\jour Pub. Math. IHES 36 \endref
\ref \key{Del--Weil I} \by Deligne, P. \pages 273--308
\paper La Conjecture de Weil I
\yr1974 \vol 48
\jour Pub. Math. I.H.E.S. \endref
\ref \key{De--Weil II} \by Deligne, P. \pages 313--428
\paper La conjecture de Weil II
\yr 1981 \vol 52
\jour Pub. Math. I.H.E.S. \endref
\ref \key{Dw} \by Dwork, B. \pages 631--648
\paper On the rationality of the zeta function of an algebraic variety
\yr1960 \vol 82
\jour Amer. J. Math \endref
\ref \key{EI} \by Eichler \pages
\paper
\yr
\jour \endref
\ref \key{Fel} \by Feller, W.
\book An Introduction to Probability Theory and its Applications, Volume II
\publ John Wiley and Sons, Inc. \yr1966 \endref
\ref \key{FGA}
\paper Fondements de la G\'eom\'etrie Alg\'ebrique
\publ a collection Bourbaki talks of Grothendieck, Paris \yr1962 \endref
\ref \key{Fre} \by Fredholm \pages
\paper
\yr
\jour \endref
\ref \key{Gaudin} \by Gaudin, M. \pages 447--458
\paper Sur la loi limite de l'\'espacement des valeurs propres d'une matrice al\'eatoire
\jour Nucl. Phys. \vol 25 \yr1961 \endref
%\ref \key{Gau} \by Gaudin \pages
%\paper
%\yr
%\jour \endref
\ref \key{Gau--Meh} \by Gaudin and Mehta
\paper
\yr
\jour \endref
\ref \key{Gro--FL} \by Grothendieck, A.
\paper Formule de Lefschetz et rationalit\'e des fonctions L, S\'eminaire Bourbaki 1964--65, Expos\'e 279
\inbook Dix Expos\'es sur la cohomologie des sch\'emas
\publaddr North--Holland \yr1968 \endref
\ref \key{Ha} \by Hadamard \pages
\paper
\yr
\jour \endref
\ref \key{Hassel} \by Hasse, H.
\paper Zur Theorie der abstrakten elliptischen Funktionenk\"orper I, II, III
\jour J. Reine Angew. Math. \yr1936 \vol175 \endref
\ref \key{Haz} \by Hazelgrove \pages
\paper
\yr
\jour \endref
\ref \key{Hec} \by Hecke \pages
\paper
\yr
\jour \endref
\ref \key{Ig} \by Igusa, J. \pages 561--577
\paper Fibre systems of Jacobian varieties III
\yr1959 \vol81
\jour Amer. J. Math \endref
\ref \key{Ill--DFT} \by Illusie, L.
\paper\nofrills $\ell$--adic Fourier transform
\yr1987
\inbook Algebraic Geometry: Bowdoin 1985
\eds S. J. Bloch
\publaddr A.M.S., Providence \endref
\ref \key{Ill--Ord} \bysame \pages 375--405
\paper Ordinarit\'e
\yr 1990
\inbook The Grothendieck Festschrift Volume II
\eds Cartier, et. al.
\publaddr Birkhauser \endref
\ref \key{J--M---M--S} \by Jimbo, M., Miwa, T., Mori, Y., and Sato, M. \pages 80--158
\paper Density matrix of an impenetrable Bose gas and the fifth Painlev\'e transcendent
\jour Physica 1D \yr1980 \endref
\ref \key{Ka--ACT} \by Katz, N. \pages 149--222
\paper Affine cohomological transforms, perversity and monodromy
\yr1993 \vol 6, No. 1
\jour JAMS \endref
\ref \key{Ka--ESDE} \bysame \pages
\paper Exponential sums and differential equations
\yr1990
\publ Ann. of Math. Study 124
\publaddr Princeton University Press \endref
\ref \key{Ka--GKM} \bysame
\paper Gauss sums, Kloosterman sums, and monodromy
\inbook Ann. of Math. Study 116
\publaddr Princeton University Press \yr1988 \endref
\ref \key{Ka--Lang} \by Katz, N. and Lang, S. \pages 285--314
\paper Finiteness theorems in geometric classfield theory
\yr1981
\jour L'Enseignment Math\'ematique T. XXVII, fasc. 3--4 \endref
\ref \key{Ka--Maz} \by Katz, N. and Mazur, B.
\paper Arithmetic moduli of elliptic curves
\inbook Ann. of Math Study 108
\publaddr Princeton University Press \yr1985 \endref
\ref \key{Ka--MG} \by Katz, N. \pages 41-56
\paper On the monodromy groups attached to certain families of exponential sums
\yr1987 \vol 54 No. 1
\jour Duke Math. J. \endref
\ref \key{Ka--ODW23} \bysame \pages 537--557
\paper An overview of Deligne's work on Hilbert's twenty--first problem
\jour A.M.S. Proc. Symp. Pure Math. XXVIII \yr1976 \endref
\ref \key{Ka--RLS} \bysame
\paper Rigid local systems
\inbook Ann. of Math. Study 138
\publaddr Princeton University Press \yr1995 \endref
\ref \key{Ka--SE} \bysame
\paper Sommes Exponentielles, r\'edig\'e par G. Laumon
\inbook Ast\'erisque 79 \yr1980 \endref
\ref \key{Ka--TA} \bysame \pages 484--499
\paper On a theorem of Ax
\yr1971 \vol XCII, No. 2
\jour Amer. J. Math \endref
\ref \key{Ka--TL} \bysame
\paper Travaux de Laumon, S\'eminaire Bourbaki Expos\'e 691
\inbook S\'eminaire Bourbaki Volume 1987--88, Ast\'erisque \pages 161--162
\moreref \yr 1988 \pages 105--132 \endref
\ref \key{K--S} \by Kodaira, K., and Spencer, D. C. \pages 403--466
\paper On deformations of complex structures II
\yr1958 \vol 67
\jour Ann. of Math. \endref
\ref \key{Lang--LSer} \by Lang, S. \pages 385--407
\paper Sur les s\'eries L D'une vari\'et\'e alg\'ebrique
\yr 1956
\jour B.S.M.F. \endref
\ref \key{Lau--TF} \by Laumon, G. \pages 131--210
\paper Transformation de Fourier, constantes d'\'equations fonctionnelles et conjecture de Weil
\yr1987 \vol 65
\jour Pub. Math. I.H.E.S. \endref
\ref \key{Mat--Mon} \by Matsumura, H. and Monsky, P. \pages 347--361
\paper On the automorphisms of hypersurfaces
\yr 1964 \vol 3
\jour J. Math. Kyoto Univ. \endref
\ref \key{Mehta} \by Mehta, M. L.
\book Random Matrices
\publ Academic Press \yr1991 \endref
\ref \key{Mes} \by Mestre, J. F. \pages 217--242
\paper La m\'ethode des graphes
\yr1986
\publ Proc. Int. Conf. on Class Numbers and Fundamental Units of Algebraic Number Fields
\publaddr Katata, Japan \endref
\ref \key{Messing} \by Messing, W.
\paper The crystals associated to Barsotti-Tate groups; with applications to Abelian schemes
\publ Springer Lecture Notes in Mathematics 264 \yr 1972 \endref
\ref \key{Mil} \by Miller, S. \pages
\paper Experiments, Princeton
\yr 1995
\jour \endref
\ref \key{Mon} \by Montgomery, H. \pages 181--193
\paper The pair correlation of the zeros of the zeta function
\yr1973 \vol 24
\jour Proc. Sym. Pure Math. \endref
\ref \key{Mum--AV} \by Mumford, D.
\book Abelian Varieties
\publ Oxford University Press \yr 1970 \endref
\ref \key{Mum--GIT} \bysame
\book Geometric Invariant Theory
\publ Springer Verlag \yr 1965 \endref
\ref \key{Oda} \by Oda, T. \pages 63--135
\paper The first de Rham cohomology groups and Dieudonne modules
\yr 1969 \vol 2
\jour Ann. Sci. E.N.S. \endref
\ref \key{Od} \by Odlyzko, A. \pages 273--308
\paper On the distribution of spacings between zeros of zeta functions
\yr 1987 \vol 48
\jour Math. Comp. \endref
\ref \key{O--S} \by Ozluk, A. and Snyder, C. \pages 307--319
\paper Small zeros of quadratic $L$--functions
\yr1993 \vol 47
\jour Bull. Aust. Math. Soc. \endref
\ref \key{P--W} \by Pasiencier, S. and Wang, H. C. \pages 907--913
\paper Commutators in a complex semi--simple Lie group
\yr1962 \vol 13
\jour Proc. A.M.S. \endref
\ref \key{P--S} \by Pollak, H. O. and Slepian, D. \pages 43--64
\paper Prolate spheroidal wave functions, Fourier analysis and uncertainty -- I
\yr 1961 \vol 40
\jour Bell Syst. Tech. J. \endref
\ref \key{Po} \by Pop, F.
\finalinfo Private communication \endref
\ref \key{P} \by Pyke
\paper
\yr
\jour \endref
\ref \key{Ray} \by Raynaud, M.
\inbook Caract\'eristique d'Euler--Poincar\'e d'un Faisceau et cohomologie des vari\'et\'es ab\'eliennes
\publ Expos\'e 286 S\'eminaire Bourbaki 1964/65
\publaddr W. A. Benjamin, New York \yr 1966 \endref
\ref \key{Ree} \by Ree, R. \pages 457--460
\paper Commutators in semi--simple algebraic groups
\yr 1964 \vol 15
\jour Proc. A.M.S. \endref
\ref \key{Reed--Simon} \by Reed, M. and Simon, B.
\book Methods of Modern Mathematical Physics I: Functional Analysis
\publ Revised and Enlarged Edition
\publaddr Academic Press \yr 1990 \endref
\ref \key{Ri} \by Ribet, K.
\paper Images of semi--stable Galois representations
\yr 1996
\jour preprint \endref
\ref \key{Riesz--Sz--Nagy} \by Riesz, F. and Sz.--Nagy, B.
\book Functional Analysis
\publ Frederick Ungar Publishing Company
\publaddr New York \yr 1955 \endref
\ref \key{Rub} \by Rubinstein \pages
\paper In Preparation \endref
\ref \key{R--S} \by Rudnick, Z. and Sarnak, P. \pages 269--322
\paper Zeros of Principal $L$--functions and random matrix theory
\yr 1996 \vol 81, 2
\jour Duke Math. J. \endref
\ref \key{SC} \by Schmidt, F. K. \pages 1--32
\paper Analytische Zahlentheorie in K\"orpern der Charakteristik $p$
\yr1931 \vol 33
\jour Math. Zeit. \endref
\ref \key{Se} \by Serre J. P.
\paper R\'epartition asymptotique des valeurs propres de l'op\'erateur de Hecke $T_p$
\yr 1996
\jour J. Amer. Math. Soc. \finalinfo To appear \endref
\ref \key{Serre--GACC} \by Serre, J. P.
\inbook Groupes alg\'ebriques et corps de classes
\publ Hermann \yr 1959 \endref
\ref \key{Serre--Rig} \bysame
\inbook Rigidit\'e du foncteur de Jacobi d'\'echelon $n \geq 3$ appendice d'expos\'e 17
\publ S\'eminaire Henri Cartan 13e ann\'ee 1960/61 \endref
\ref \key{SGA} \by Grothendieck, A. et al.
\inbook S\'eminaire de G\'eom\'etrie Alg\'ebrique du Bois-Marie
SGA 1, SGA 4 Parts I, II, and III, SGA 4--1/2, SGA 5, SGA 7 Parts I and II
\publ Springer Lecture Notes in Math. 224, 269, 270, 305,
569, 589, 288, 340, 1971 to 1977 \endref
\ref \key{Shi} \by Shimura, G. \pages
\yr
\jour \endref
\ref \key{Shoda} \by Shoda, K. \pages 361--365
\paper Einige S\"atze \"uber Matrizen
\yr1937 \vol 13
\jour Japan J. Math. \endref
\ref \key{Sos} \by Soshnikov, A.
\paper Global level spacing distribution for large random matrices from classical compact groups
\yr 1996
\finalinfo preprint \endref
\ref \key{Sti} \by Stirlings \pages
\paper
\yr
\jour \endref
\ref \key{Sut} \by Sutor, R.
\paper The calculation of some geometric monodromy groups
\jour Princeton University Ph.D. Thesis, 1992 \endref
\ref \key{T--W} \by Tracy, C. and Widom, H.
\paper Introduction to random matrices in geometric and quantum aspects of integrable systems
(Scheveningen 1991)
\publ Springer Lecture Notes in Physics 424, 1993, 103--130 \endref
\ref \key{Weil--CA} \by Weil, A.
\paper Courbes alg\'ebriques et vari\'et\'es ab\'eliennes
\publ Hermann, Paris, 1971 \endref
\ref \key{Weil--NS} \bysame \pages 497--508
\paper Numbers of solutions of equations in finite fields
\yr 1949 \vol 55
\jour Bull. A.M.S. \endref
\ref \key{Wei} \bysame \pages 592--594
\paper Sur les fonctions alg\'ebriques \`a corps de constantes fini
\yr1940
\jour C. R. Acad. Sci. Paris 210 \endref
\ref \key{Weyl} \by Weyl, H.
\book The Classical Groups
\publ Princeton University Press \yr1946 \endref
\ref \key{W--W} \by Whittaker, E. T. and Watson, G. N.
\book A Course of Modern Analysis, Fourth edition reprinted
\publ Cambridge University Press \yr1962 \endref
\ref \key{Widom} \by Widom, H. \pages 51--64
\paper The asymptotics of a continuous analogue of orthogonal polynomials
\yr 1994 \vol 77, No. 1
\jour J. Approx. Theory \endref
\ref \key{Wi} \by Wigner, E. \pages
\paper
\yr
\jour \endref
\ref \key{Yu} \by Yu, J. K.
\paper Lectures at Princeton University, February 1995, unpublished \endref
\endRefs
\enddocument