%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% WHAT FOLLOWS IS A PLAIN TeX FILE
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\magnification=\magstep1
%\hfuzz=2pt
%%%%%%%%%%%%%%%%%%%%%%%%%%% structure de page %%%%%%%%%%%%%%%%%%%%%%%%%%%%
\headline={\ifnum\pageno=1 \hfil\else\hss{\tenrm\folio}\hss\fi}
\footline={\hfil}
\hoffset=-.2cm
%%%%%%%%%%%%%%%%%%%%%% polices %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\font\titlefont= cmbx10 scaled \magstep3
\font\bigfont= cmbx10 scaled \magstep2
%\font\titlefont= cmbx10 scaled \magstep1
%\font\titlefont= ambx10
%\font\titlefont= ambx10 scaled \magstep1
\font\bigcal= cmsy10 scaled \magstep2
\def\bigl{{\hbox{{\bigcal \char 76}}}}
%%%%%%%%%%%%%%%%%%% general math defs %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\def\integer{{\bf N}}
\def\real{{\bf R}}
\def\Real{\real}
\def\expectation{\mathchoice{\rm I\hskip-1.9pt E}{\rm I\hskip-1.9pt E}
{\rm I\hskip-.8pt E}{\rm I\hskip-1.9pt E}}
\def\zinteger{{\bf Z}}
\def\Oun{{\cal O}(1)}
\def\oun{{\hbox{\sevenrm o}(1)}}
\def\proof{\noindent{\bf Proof. }}
\def\proba{\mathchoice{\rm I\hskip-1.9pt P}{\rm I\hskip-1.9pt P}
{\rm I\hskip-.9pt P}{\rm I\hskip-1.9pt P}}
\def\bigun{\hbox{\bigfont 1}}
%%%%%%%%%%%%%%%%%%%%%%%%%%begin%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\centerline{\titlefont FLUCTUATIONS OF REPETITION}
\bigskip
\centerline{\titlefont TIMES FOR GIBBSIAN SOURCES}
\bigskip
\bigskip
\centerline{{\bf Pierre Collet}
\footnote{$^{\hskip -.1cm\circ}$}{\sevenrm C.N.R.S.,
Centre de Physique Th\'eorique, Ecole
Polytechnique, 91128 Palaiseau Cedex, France.
e-mail: collet@pth.polytechnique.fr},
{\bf Antonio Galves}\footnote{$^*$}{\sevenrm Instituto de Matem\'atica e
Estat\'\i stica, Universidade
de S\~ao Paulo, BP 66281, 05315-970, S\~ao Paulo, SP,
Brasil. e-mail: galves@ime.usp.br},
{\bf Bernard Schmitt}\footnote{$^\dagger$}{\sevenrm Universit\'e de
Bourgogne, D\'epartement de Math\'ematiques, Facult\'e des Sciences
Mirande, BP-400, 21011 Dijon Cedex, France.
e-mail: schmittb@satie.u-bourgogne.fr}}
\vskip 5cm
\noindent{\sl Abstract~:}
In this paper we consider the class of stochastic stationary sources
induced by one-dimensional Gibbs states, with H\"older continuous
potentials. We show that the time elapsed before the source repeats its
first $n$ symbols converges in law, when suitably renormalized, to a
lognormal distribution.
\vskip 2cm
\noindent{\sl Key words~: Gibbsian sources, entrance time,
convergence in law, lognormal, entropy.}
\bigskip
\noindent{\sl AMS (MOS) Subject Classification~:} 94A17, 60F05, 60F10.
%\bigskip
%\noindent
\footnote{}{{\sevenrm This work was partially supported by
USP-ProInter, USP-Cofecub (45-97), CNPq
(grant 301301/79), FAPESP (Projeto Tem\'atico 95/0790-1) and Pronex (grant
41.96.0923.00).}}
\vfill\supereject
\beginsection{1. Introduction.}
In this paper, we study the law of $R_n$ which is the first time a stochastic
stationary source repeats its first $n$ symbols of output.
We show that this
time suitably renormalized converges in law to a lognormal distribution.
The result holds for a large class of $\phi$-mixing sources, namely the
class of stationary processes induced by one-dimensional Gibbs states.

The original motivation for studying $R_n$ came from the Lempel-Ziv and
related compression algorithms.
A first result on the asymptotics of $R_n$ was obtained by
Wyner and Ziv (1989) who proved that for stochastic ergodic sources
$$
\lim_{n\to\infty}{1\over n}\log R_{n}=h \eqno(1.1)
$$
in probability, where $h$ denotes the entropy of the process. Later on,
Ornstein and Weiss (1993) improved the result by showing almost sure
convergence.

In the present paper we study the fluctuations of $\log R_{n}$ around its
limiting behavior for Gibbsian sources. We prove that the random variables
$(\log R_{n}-nh)/\sqrt n$ converge in law to a Gaussian random variable when
$n$ tends to infinity.
As will be clear from the proof, the lognormal asymptotics arises from the
fact that the law of $R_n$ is a mixture of asymptotically exponential laws
with fluctuating rates. The main tools of the proof are a theorem about
exponential approximations for entrance times in cylinder sets in
$\phi$-mixing systems proven in Galves and Schmitt (1997) and a central limit
theorem for the fluctuations in the Shannon-McMillan-Breiman theorem. For a
recent review on exponential and Poissonian approximations we refer to Aldous
(1989). Lognormal asymptotics has a long history starting with Galton (1879)
and passing through Kolmogorov (1941). For a survey of this field we refer to
Crow and Shimizu (1988).

This paper is organized as follows. In Section 2 we give the definitions
and state the main theorem. The proof is given in Section 3.
\bigskip
\noindent{\bf 2. Definitions and main result.}
\smallskip
\noindent
We consider a stationary ergodic discrete time stochastic process
$(X_{n})_{n\in\zinteger}$ defined on a probability space $(\Omega, {\cal F},
\proba)$ with values in a finite set $A$.
For any sequence $(a_{k})_{k\in\zinteger}$ with elements in $A$, we will
denote the partial sequence $(a_{i}, a_{i+1},\cdots,a_{j})$ by $a_{i}^{j}$,
for $i\le j$. The same notation will be used for sequences of random
variables. Accordingly, we shall use the notation $\{X_1^n=a_1^n\}$ to denote
the cylinder set
$$
\{X_{j}(\omega)=a_{j}
\;,\; j=1\,,\cdots,n\}\;.
$$
For a finite sequence $a_{1}^{n}$ the entrance time
$\tau_{a_{1}^{n}}$ is defined by
$$
\tau_{a_{1}^{n}}=\inf\;\{j\ge 2\,|\,X_{j}^{j+n-1}=a_1^n\}\;.
$$
We recall that the ergodic theorem implies that $\tau_{a_{1}^{n}}$ is almost
surely finite, provided that $\proba\big(\big\{X_1^n=a_1^n\big\}\big)>0$.
We now define the sequence of repetition times $(R_{n})$ by
$$
R_{n}=\tau_{X_{1}^{n}}\;.
$$
From now on we shall always assume that
$$
\proba\left\{X_1^n=a_1^n \right\} > 0\ ,\eqno(2.1)
$$
for any $n \ge 1$ and any finite sequence $a_1^n \in A^n$.
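These definitions are straightforward to experiment with on a finite sample path. The following Python sketch (ours, purely illustrative; the function names and the sample source are our own choices) implements $\tau_{a_1^n}$ and $R_n$ with the paper's $1$-indexed convention:

```python
import random

def entrance_time(xs, a):
    # tau_{a_1^n} = inf{ j >= 2 : X_j^{j+n-1} = a_1^n }, with X_1 = xs[0]
    n = len(a)
    for j in range(2, len(xs) - n + 2):
        if xs[j - 1:j - 1 + n] == a:
            return j
    return None  # not observed within this finite sample

def repetition_time(xs, n):
    # R_n = tau_{X_1^n}: the first time the source repeats its first n symbols
    return entrance_time(xs, xs[:n])

random.seed(0)
xs = [random.choice('ab') for _ in range(10000)]
print(repetition_time(xs, 5))
```

On an infinite path the `None` branch would almost surely never be taken, by the ergodic-theorem remark above.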
A stationary source is Gibbsian if there is a H\"older continuous function
$\varphi\,:\, A^{\integer}\to \real$ and two constants $P$ and $K>1$ such that
for any integer $n>0$, for any finite sequence $a_1^n\in A^n$, we have
$$ K^{-1}\le{\proba\big(\big\{X_1^n=a_1^n\big\}\big)\over
e^{-nP+\,\sum_{j=0}^{n-1}\varphi(a_j^{+\infty})}}\le K\;, \eqno(2.2)
$$
where $(a_j^{+\infty})$ is defined by completing the finite sequence $a_j^n$
in an arbitrary way.
We have assumed here for simplicity (through condition (2.1)) that the
subshift of finite type associated to the Gibbs measure is the shift on the
full alphabet $A$. We leave to the interested reader the trivial extension of
our result to general subshifts of finite type. We refer the reader to Bowen
(1975) for more details and properties of Gibbsian sources. We observe that
this condition holds for irreducible and aperiodic Markov chains of any finite
order, and more generally for chains with complete connections with
exponential decay of correlations (see Lalley (1986)).
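A concrete illustration (ours; it is not needed in what follows): an i.i.d. source with marginal $p$ on $A$ is Gibbsian with $K=1$, following the indexing of (2.2).

```latex
Take $h=-\sum_{a\in A}p(a)\log p(a)$ and
$\varphi(a_0^{+\infty})=\log p(a_0)+h$, so that
$\expectation(\varphi(X_0^{+\infty}))=0$ and (2.4) already holds.
With $P=h$,
$$
-nP+\sum_{j=0}^{n-1}\varphi(a_j^{+\infty})
=\sum_{j=0}^{n-1}\log p(a_j)
=\log\proba\big\{X_1^n=a_1^n\big\}\;,
$$
so the ratio in (2.2) is identically $1$; moreover $\varphi$ depends only on
the first coordinate and is therefore trivially H\"older continuous.
```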

For the convenience of the reader we recall some important facts about
Gibbsian sources which are used below.
Gibbsian processes are exponentially $\phi$-mixing, namely
there is a sequence $(\phi(l))$ of positive numbers, decreasing
exponentially fast to zero such that
for all integers $n$ and $l$ larger than or equal to one we have
$$
\sup_{A\in{\cal F}_0^n\,,\,B\in{\cal F}_{n+l}^\infty}
{\left|\proba(A\cap B)
-\proba(A)\proba(B)\right|\over \proba(A)\proba(B)}\le \phi(l)\;,\eqno(2.3)
$$
where ${\cal F}_r^s$ denotes the $\sigma$-algebra generated by $X_r^s$.
We observe that in formula (2.2) we can add a constant to $\varphi$,
provided we add the same constant to $P$, without changing the law of the
process. In particular we can always assume without loss of generality that
$$
\expectation(\varphi(X_0^{+\infty}))=0\;.\eqno(2.4)
$$
From now on we shall assume that (2.4) holds. In this case the constant
$P$ in formula (2.2) is equal to the entropy $h$, and is strictly positive.
In what follows we will have to deal with the sequence of mean-zero
random variables $S_n$, $n\ge1$, defined by
$$
S_n=\sum_{j=0}^{n-1}\varphi(X_j^{+\infty})\;.\eqno(2.5)
$$
The exponential $\phi$-mixing property of Gibbsian sources
implies that the variance of $S_n/\sqrt n$ converges to
$$
\sigma^2=
\expectation(\varphi(X_0^{+\infty})^2)+
2\sum_{j=1}^\infty \expectation(\varphi(X_0^{+\infty})
\varphi(X_j^{+\infty}))\;,\eqno(2.6)
$$
when $n$ tends to infinity.
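For intuition (our illustration, not part of the argument): in the i.i.d. case, with $\varphi(a_0^{+\infty})=\log p(a_0)+h$ as above, independence kills the cross terms in (2.6), so $\sigma^2$ reduces to the variance of $\log p(X_0)$. A small numerical sketch in Python:

```python
import math

# Hypothetical illustration: i.i.d. source on A = {0, 1} with P(X = 1) = p,
# taking phi(a_0^{+inf}) = log p(a_0) + h, which satisfies (2.4).
p = 0.3
marginal = {0: 1 - p, 1: p}
h = -sum(q * math.log(q) for q in marginal.values())   # entropy in nats

# (2.4): the centered potential has mean zero
mean_phi = sum(q * (math.log(q) + h) for q in marginal.values())

# (2.6): the cross terms E[phi(X_0^{+inf}) phi(X_j^{+inf})] vanish for
# j >= 1 by independence, so sigma^2 is just Var(log p(X_0))
sigma2 = sum(q * (math.log(q) + h) ** 2 for q in marginal.values())
print(h, mean_phi, sigma2)
```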

We now state our main result.
\proclaim{Theorem}. {Assume that the stationary
process $(X_{n})$ taking values in the finite set $A$ is Gibbsian. If
$\sigma^2>0$, then
$$
\bigl\left({\log R_{n}-nh\over \sigma\sqrt n}\right)
\Rightarrow\;N(0,1)\;,
$$
as $n$ goes to infinity.
}
In the statement of the theorem, ${\cal L}$ denotes the law of the
random variable, and $\Rightarrow$ means weak convergence.
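The theorem can be probed by simulation. The Python sketch below (ours; it uses an i.i.d. Bernoulli source, which is Gibbsian as noted in Section 2, and all numerical choices are our own) samples the normalized variable $(\log R_n-nh)/(\sigma\sqrt n)$ and reports its empirical mean and variance, to be compared with $0$ and $1$; since the convergence is slow and lower-order corrections are visible at moderate $n$, the agreement is only rough.

```python
import math, random

random.seed(1)
p, n = 0.3, 10
h = -p * math.log(p) - (1 - p) * math.log(1 - p)             # entropy (nats)
sigma = math.sqrt(p * (1 - p)) * abs(math.log(p / (1 - p)))  # sqrt Var(log p(X_0))

def normalized_log_repetition(length=30000):
    s = ''.join('1' if random.random() < p else '0' for _ in range(length))
    j = s.find(s[:n], 1)   # 0-indexed shift; j = 1 corresponds to R_n = 2
    if j < 0:
        return None        # repetition not seen within this finite sample
    return (math.log(j + 1) - n * h) / (sigma * math.sqrt(n))

vals = [v for v in (normalized_log_repetition() for _ in range(200)) if v is not None]
mean = sum(vals) / len(vals)
var = sum((v - mean) ** 2 for v in vals) / len(vals)
print(len(vals), round(mean, 2), round(var, 2))
```

At $n=10$ the exponential-mixture correction described in the introduction still shifts the mean noticeably below zero and inflates the variance.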
\beginsection{3. Proof of the theorem.}
The proof has three steps. In the first one we show that the law of the
repetition time can be approximated by a mixture of laws of entrance times
into fixed cylinder sets. In the second step we give an exponential
approximation for each of the laws of entrance times in the cylinder
sets. These exponential approximations have the probability of the
cylinder as respective parameters. The third step uses a central limit
theorem to deal with the lognormal fluctuations of the measures of the
cylinders around a typical value expressed in terms of the entropy
according to the Shannon-McMillan-Breiman theorem.
Observe that in the first two steps we only use the hypothesis that the
process is exponentially $\phi$-mixing ({\sl i.e.} satisfies condition
(2.3)). The stronger Gibbs condition (2.2) is only used in the third step.
The first step is handled in the following two lemmas.
\proclaim{Lemma 1}. There exists a constant $\gamma >0$ such that for all
$n \ge 1$ the following inequality holds
$$
\sup_{a_{1}^{n}\in A^n}\proba\{X_1^n=a_1^n\} \le e^{-{\gamma}n}\; .
$$
\proof This follows almost immediately from the exponential $\phi$-mixing
condition. We refer the reader to Lemma 1 in Galves and Schmitt (1997) for
the details.
\proclaim{Lemma 2}. {Assume that the source $(X_n)$ is an exponentially
$\phi$-mixing stationary stochastic process. Then for any sequence
$(t_n)$ such that
$$
\lim_{n\to\infty} {t_n\over n}=+\infty
$$ we have
$$ \lim_{n\to\infty}\left|\proba\{R_n>t_n\} -\sum_{a_{1}^{n}\in{A}^n}
\proba\{X_1^n=a_1^n\}\proba\{\tau_{a_{1}^{n}}>t_n\}\right|=0\;.
$$
}
\proof By definition for any integer $t>0$ we have
$$
\proba\{R_n>t \}=
\sum_{a_{1}^{n}\in{ A}^n}
\proba\{X_1^n=a_1^n,\tau_{a_{1}^{n}}>t\}\; .\eqno(3.1)
$$
We now observe that for any $r$ with $n < r< t$ we have
$$
\left|\proba\{X_1^n=a_1^n,\tau_{a_{1}^{n}}>t\}-
\proba\{X_1^n=a_1^n\}\proba\{\tau_{a_{1}^{n}}>t\}\right| \le
$$
$$
\left|\proba\{X_1^n=a_1^n,\tau_{a_{1}^{n}}>t\}-
\proba\{X_1^n=a_1^n,X_s^{s+n-1}\not=a_1^n \; ,\; r< s\le t
\}\right|+
$$
$$
\left|
\proba\{X_1^n=a_1^n,X_s^{s+n-1}\not=a_1^n \; ,\; r< s\le t\}-
\proba\{X_1^n=a_1^n\}\proba\{X_s^{s+n-1}\not=a_1^n \; ,\; r<
s\le t\}\right| +
$$
$$
\proba\{X_1^n=a_1^n\}
\left|\proba\{X_s^{s+n-1}\not=a_1^n \; ,\; r< s\le t\}-
\proba\{\tau_{a_{1}^{n}}>t\}\right|\; .\eqno(3.2)
$$
The third term in the right hand side of (3.2) is trivially
bounded above by
$$
\proba\{X_1^n=a_1^n\}\;
\proba\{X_s^{s+n-1}=a_1^n \; ,\; 1\le s\le r\}\le
r\;\proba\{X_1^n=a_1^n\}^2\; .\eqno(3.3)
$$
By the $\phi$-mixing property we obtain that the second term in the right hand
side of (3.2) is bounded above by
$$
\phi(r-n+1)\proba\{X_1^n=a_1^n\}\; .\eqno(3.4)
$$
To handle the first term in the right hand side of (3.2)
we first observe that it is bounded above by
$$
\sum_{u=2}^r\proba\{X_1^n=a_1^n,X_u^{u+n-1}=a_1^n\}\; .\eqno(3.5)
$$
Let us decompose the sum in (3.5) into three parts: $2 \le u \le n/3$,
$n/3 < u \le n$ and $n < u \le r$, where $n/3$ is shorthand
for its integer part.
An upper bound for the third piece is provided by the exponential
$\phi$-mixing property together with Lemma 1,
namely
$$
\sum_{u=n+1}^r\proba\{X_1^n=a_1^n,X_u^{u+n-1}=a_1^n\}\le
C_1\proba\{X_1^n=a_1^n\}e^{-\gamma n}
\; ,\eqno(3.6)
$$
where
$$
C_1=1+\sum_{l=1}^{+\infty}\phi(l)\; .
$$
The second piece is bounded above by
$$
\sum_{k=n/3}^n\proba\{X_1^n=a_1^n,X_{n+1}^{n+k-1}=a_{n-k+2}^n\}\; .\eqno(3.7)
$$
By the $\phi$-mixing property and Lemma 1, (3.7) is bounded above by
$$
{(1+\phi(1))e^{-\gamma (n/3-2)}\over{1-e^{-\gamma}}}\proba\{X_1^n=a_1^n\}\;
.\eqno(3.8)
$$
To obtain an upper bound for the first piece we need to look in more detail
at the set
$$
\{X_1^n=a_1^n, X_{n+1}^{n+k-1}=a_{n-k+2}^n\}
$$
for small $k$. This set is nonempty only when $a_1^n$ has a particular
structure, namely when it is obtained by repeating several times a smaller
string of length $k$. Therefore, for any $k$ with $2 \le k \le n/3$,
$$
\sum_{a_{1}^{n}\in{ A}^n}\proba\{X_1^n=a_1^n, X_{n+1}^{n+k-1}=a_{n-k+2}^n\}\le
$$
$$
\sum_{b_{1}^{k}\in{ A}^k}
\proba\{X_1^k=X_{k+1}^{2k}=\cdots=X_{(m-1)k+1}^{mk}= b_1^k\}\; ,\eqno(3.9)
$$
where $m$ is the integer part of $(n+k-1)/k$.
The $\phi$-mixing property implies the following
upper bound for (3.9)
$$
(1+\phi(1))\proba\{X_1^k=b_1^k\}
\proba\{X_{k+1}^{2k}=\cdots=X_{(m-1)k+1}^{mk}= b_1^k\}\; .\eqno(3.10)
$$
By Lemma 1, for $n$ large enough, (3.10) is bounded above by
$$
(1+\phi(1))\proba\{X_1^k=b_1^k\}
e^{-\gamma k(m-1)}\le
(1+\phi(1))\proba\{X_1^k=b_1^k\}e^{-\gamma n/2}\; ,
$$
where in the last inequality we use $k(m-1)>n/2$, which follows at once from
$k \le n/3$.
We now choose
$t=t_n$ and $r=\min\{n^2\,,\,\sqrt{n\,t_n}\}$ for $n$ large enough.
Summing up all the upper bounds, the result follows.
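The periodicity fact used for the first piece (a self-overlap of $a_1^n$ at a shift $k$ is possible only when $a_1^n$ repeats its first $k$ symbols) can be checked by brute force; the Python sketch below (ours) verifies the equivalence on all short binary words:

```python
from itertools import product

def is_k_periodic(a, k):
    # a repeats its first k symbols: a_i = a_{i+k} for all valid i
    return all(a[i] == a[i + k] for i in range(len(a) - k))

def self_overlaps_at(a, k):
    # the event {X_1^n = a, X_{1+k}^{k+n} = a} is nonempty iff the
    # length-(n-k) suffix of a equals its length-(n-k) prefix
    return a[k:] == a[:-k]

# brute force: a shifted self-overlap occurs exactly for k-periodic words
n = 6
for a in product('01', repeat=n):
    for k in (1, 2, 3):
        assert self_overlaps_at(a, k) == is_k_periodic(a, k)
print('checked all binary words of length', n)
```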
\bigskip
We now turn to the second step of the proof. It is based on the following
result which is proven in Galves and Schmitt (1997).
\proclaim{Theorem 3}.
Assume that the source $(X_n)$ is an exponentially
$\phi$-mixing stationary stochastic process.
There are four positive numbers $\Lambda_1$, $\Lambda_2$, $C$ and $c$ such that
for any finite sequence
$a_{1}^{n}\in A^{n}$ there is a number $\gamma_{a_{1}^{n}}$ satisfying
$$
\Lambda_1
\le{\gamma_{a_{1}^{n}}
\over\proba\big\{X_{1}^{n}=a_{1}^{n}\big\}}
\le \Lambda_2
$$
such that
$$
\sup_{t\ge 0}\bigg|\proba\big\{
\tau_{a_{1}^{n}} >t/\gamma_{a_{1}^{n}}\big\} -e^{-t}\bigg|\le
Ce^{-cn}\;.
$$
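Theorem 3 can be observed numerically (our sketch, under strong simplifying assumptions): for an i.i.d. source and a word without proper self-overlap, replacing $\gamma_{a_1^n}$ by the cylinder probability itself, which Theorem 3 only guarantees to be comparable to it, already gives a good exponential fit.

```python
import math, random

random.seed(2)
p, pattern = 0.3, '00111'   # an arbitrary word with no proper self-overlap
prob_a = math.prod(p if c == '1' else 1 - p for c in pattern)

def entrance_time(length=2000):
    s = ''.join('1' if random.random() < p else '0' for _ in range(length))
    j = s.find(pattern, 1)  # the paper's tau is this 0-indexed shift plus one
    return j + 1 if j >= 0 else length

# empirical check that P{tau_a > t / gamma} is close to exp(-t),
# with gamma replaced by prob_a itself
samples = [entrance_time() for _ in range(400)]
for t in (0.5, 1.0, 2.0):
    frac = sum(1 for x in samples if x > t / prob_a) / len(samples)
    print(t, round(frac, 2), round(math.exp(-t), 2))
```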
To proceed with the proof of the Theorem, we first observe that
it can be restated as
$$
\lim_{n\to\infty}
\proba\left\{R_{n}>e^{nh}e^{u\sigma \sqrt n }\right\}={1\over \sqrt{2\pi}}
\int_{-\infty}^{-u} e^{-x^2/2}dx\;,\eqno(3.11)
$$
for any real $u$. We now apply Lemma~2
with
$$
t_n=e^{nh}e^{u\sigma \sqrt n }\;,
$$
and Theorem 3 in the left hand side of (3.11) to obtain
$$
\proba\left\{R_{n}>e^{nh}e^{u\sigma \sqrt n}\right\}
=\sum_{a^{n}_{1}\in A^{n}}\proba\{X_{1}^{n}=a^{n}_{1}\}\;
e^{-\lambda_{a_{1}^{n}}\proba\{X_{1}^{n}=a^{n}_{1}\}\;
e^{nh}e^{u\sigma \sqrt n}} +\eta_{n}(u)\;,\eqno (3.12)
$$
where
$$
\lambda_{a_{1}^{n}}={\gamma_{a_{1}^{n}}\over
\proba\{X_{1}^{n}=a^{n}_{1}\}}\;,
$$
and $\eta_{n}(u)$ goes to zero when $n$ goes to infinity.
\bigskip
We now come to the third and final step of the proof.
It is based on the following central limit theorem.
\proclaim{Theorem 4}. Assume that the stationary
process $(X_{n})$ taking values in the finite set $A$ is Gibbsian. Let
$\varphi$, $S_n$ and $\sigma^2$ be defined as in (2.2), (2.5) and (2.6)
respectively. If $\sigma^2>0$ and (2.4) holds, then
$$
{\bigl}\left({S_n\over\sigma\sqrt n}\right)\Rightarrow N(0,1)\;,
$$
when $n$ goes to infinity.
For a proof of this theorem and related bibliography, we refer to Coelho and
Parry (1990).
Let $Y_{n}$ denote the random variable
$$
Y_{n}=\sum_{a^{n}_{1}\in A^{n}}
\bigun_{\{X_{1}^{n}=a^{n}_{1}\}}\;
e^{-\lambda_{a_{1}^{n}}\proba\{X_{1}^{n}=a^{n}_{1}\}
e^{nh}e^{u\sigma \sqrt n}}\;.
$$
We observe that the leading term in the right hand side of (3.12)
is equal to $\expectation\{Y_{n}\}$.
To complete the proof of the Theorem we will show that
$$
\liminf_{n\to\infty}\expectation\{Y_{n}\}=
\limsup_{n\to\infty}\expectation\{Y_{n}\}=
{1\over \sqrt{2\pi}}
\int_{-\infty}^{-u} e^{-x^2/2}dx\;.\eqno(3.13)
$$
We first derive a lower bound on the $\liminf$. For any fixed $\eta>0$,
Markov's inequality implies
$$
\expectation\{Y_{n}\}\ge e^{-e^{-\eta\sqrt n}}\;
\proba\big\{\log Y_{n}\ge -\,
e^{-\eta\sqrt n}\big\}\;.
$$
We recall that (2.4) implies that the constant $P$ in (2.2)
is equal to the entropy $h$. Therefore, using (2.2), we see that
$$
\log Y_n\ge -\lambda_{X_1^n} K e^{S_n+u\sigma\sqrt n}\;.\eqno(3.14)
$$
Since $\Lambda_{1}\le\lambda_{a_{1}^{n}}\le
\Lambda_{2}$, we use (3.14) for $n$ large enough to get
$$
\proba\big\{\log Y_{n}\ge -\,
e^{-\eta\sqrt n}\big\}\ge \proba\left\{{S_n\over
\sigma\sqrt n}<-u-2\eta\right\}\;.
$$
We now use Theorem 4 to conclude that
$$
\liminf_{n\to\infty}\expectation\{Y_{n}\}\ge
{1\over \sqrt{2\pi}}
\int_{-\infty}^{-u-2\eta} e^{-x^2/2}dx\;.\eqno(3.15)
$$
Since this is true for any $\eta>0$ we have the following lower bound.
$$
\liminf_{n\to\infty}\expectation\{Y_{n}\}\ge {1\over \sqrt{2\pi}}
\int_{-\infty}^{-u} e^{-x^2/2}dx\;.
$$
We now derive a similar upper bound for the $\limsup$.
For any fixed $\eta>0$ we obviously have
$$
\expectation\{Y_{n}\}\le e^{-e^{\eta\sqrt n}}\;
\proba\big\{\log Y_{n}<-e^{\eta\sqrt n}\big\}+
\proba\big\{\log Y_{n}\ge -e^{\eta\sqrt n}\big\}\;,
$$
and using Theorem 4 as above we conclude that
$$
\limsup_{n\to\infty}\expectation\{Y_{n}\}\le
{1\over \sqrt{2\pi}}
\int_{-\infty}^{-u+2\eta} e^{-x^2/2}dx\;.\eqno(3.16)
$$
Since (3.16) is true for any $\eta>0$ we have
$$
\limsup_{n\to\infty}\expectation\{Y_{n}\}\le
{1\over \sqrt{2\pi}}
\int_{-\infty}^{-u} e^{-x^2/2}dx\;.
$$
This concludes the proof of the Theorem.
\beginsection{References.}
\noindent Aldous, D. (1989). {\sl Probability approximations via the
Poisson clumping heuristic}. Applied Mathematical Sciences,
77. Springer-Verlag, New York-Berlin.
\medskip\noindent Bowen, R. (1975). {\sl Equilibrium states
and ergodic theory of Anosov systems}. Lecture Notes in Mathematics
{\bf470}. Springer Verlag, Berlin Heidelberg New York.
\medskip\noindent Coelho, Z. and Parry, W. (1990). Central limit
asymptotics for shifts of finite type. {\sl Israel J. Math.} {\bf69},
235-249.
\medskip\noindent Crow, E. and Shimizu, K. editors (1988).
{\sl Lognormal distributions. Theory and applications.} Marcel Dekker,
Statistics Textbooks and Monographs {\bf 88}, New York Basel.
\medskip\noindent Galton, F. (1879). The geometric mean in vital
and social statistics. {\sl Proc. Roy. Soc.} {\bf 29}, 365-367.
\medskip\noindent Galves, A. and Schmitt, B. (1997). Inequalities for hitting
times in mixing dynamical systems. {\sl Random Comput. Dyn.} to appear.
Preprint available as
\hbox{http://www.ma.utexas.edu/mp\_arc/c/97/97-128.ps.gz.\hfil}
\medskip\noindent Kolmogorov, A. N. (1941). \"Uber das logarithmisch normale
Verteilungsgesetz der Dimensionen der Teilchen bei Zerst\"uckelung. {\sl
Doklady URSS} {\bf 31}, 99-101.
\medskip\noindent Lalley, S. (1986). Regenerative representation
for one-dimensional Gibbs states. {\sl Ann. Probab.} {\bf 14}, 1262-1271.
\medskip\noindent Ornstein, D. and Weiss, B. (1993). Entropy and
data compression schemes. {\sl IEEE Trans. Inform. Theory} {\bf39}, 78-83.
\medskip\noindent Wyner, A. and Ziv, J. (1989). Some asymptotic properties
of the entropy of a stationary ergodic data source with applications to data
compression. {\sl IEEE Trans. Inform. Theory} {\bf 35}, 1250-1258.
\bye