Content-Type: multipart/mixed; boundary="-------------0109180820111"
This is a multi-part message in MIME format.
---------------0109180820111
Content-Type: text/plain; name="01-329.keywords"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="01-329.keywords"
de Branges space, Schr\"odinger operator, inverse spectral theory
---------------0109180820111
Content-Type: application/x-tex; name="dB.tex"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline; filename="dB.tex"
\documentclass{amsart}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{amsfonts}
\usepackage{amssymb}
\newtheorem{Theorem}{Theorem}[section]
\newtheorem{Proposition}[Theorem]{Proposition}
\newtheorem{Lemma}[Theorem]{Lemma}
\newtheorem{Definition}[Theorem]{Definition}
\newtheorem{Corollary}[Theorem]{Corollary}
\numberwithin{equation}{section}
\begin{document}
\title{Schr\"odinger operators and de~Branges spaces}
\author{Christian Remling}
\address{Universit\"at Osnabr\"uck\\
Fachbereich Mathematik/Informatik\\
49069 Osnabr\"uck\\
Germany}
\email{cremling@mathematik.uni-osnabrueck.de}
\urladdr{www.mathematik.uni-osnabrueck.de/staff/phpages/remlingc.rdf.html}
\date{September 18, 2001}
\thanks{2000 {\it Mathematics Subject Classification.} 34A55 34L40
46E22}
\keywords{de Branges space, Schr\"odinger operator,
spectral representation, inverse spectral theory}
\thanks{Remling's work was supported by the Heisenberg program
of the Deutsche Forschungs\-gemein\-schaft}
%\thanks{to appear in {\it Commun.\ Math.\ Phys.} }
\begin{abstract}
We present an approach to de~Branges's theory of Hilbert spaces
of entire functions that emphasizes the connections to the spectral
theory of differential operators. The theory is used to discuss the
spectral representation of one-dimensional Schr\"odinger operators and to
solve the inverse spectral problem.
\end{abstract}
\maketitle
\section{Introduction}
In this paper, I will discuss the general direct and inverse
spectral theory of one-dimensional Schr\"odinger operators
$H=-d^2/dx^2 + V(x)$ from the point of view of de~Branges's theory of
Hilbert spaces of entire functions. In particular, I will present
a new solution of the inverse spectral problem.
Basically, we will obtain a local version of the Gelfand-Levitan
characterization \cite{GL}
of the spectral data of one-dimensional Schr\"odinger
operators (for a modern presentation of the Gelfand-Levitan theory,
see Chapter 2 of either \cite{Lev} or \cite{Mar}).
However, our treatment is quite different from that
of Gelfand-Levitan. On top of that, we do not need continuity assumptions
on the potential $V$, but this is not the main point here because this
technical improvement should also be possible within the framework of
Gelfand-Levitan theory.
I have tried to pursue two goals in this paper. First of all,
I will emphasize the connections between de~Branges's theory of
Hilbert spaces of entire functions and the spectral theory
of differential operators from the very beginning, and I hope that
this leads to a concrete and
accessible introduction to de~Branges's results, at
least for people with a background similar to mine. My treatment of
de~Branges's theory is, of course, by no means intended to be a
replacement for the deeper and more general, but also more
abstract and demanding treatment of de~Branges himself
in \cite{dB1,dB2,dB3,dB4} and especially \cite{dB}.
The second and perhaps more important goal is to give a new view on the
(especially inverse) spectral theory of one-dimensional
Schr\"odinger operators by recognizing it
as a part of a larger picture.
More specifically, I believe that one of de~Branges's major results (namely,
Theorem \ref{T6.3} below) may be interpreted
as the mother of many inverse theorems. In this paper, we will use it
to discuss the inverse theory for Schr\"odinger operators, but I think
one can discuss along these lines the inverse theory of other
operators as well, provided there is a good characterization of the
spectral data that occur. In particular, it should be possible
to give such a treatment for the one-dimensional Dirac operator.
The treatment of the inverse spectral problem given in this paper
is neither short nor elementary, the major thrust really is the
new picture it provides. It is not short because there are computational
parts and technical issues (mainly in Sect.\ 13--15) that need to
be taken care of. However, I think that
the general strategy, which will be explained in
Sect.\ 9, is quite transparent. Our treatment is not elementary, either,
because it depends on the machinery of de~Branges spaces and at least
two major results from this theory (Theorems \ref{T6.3} and \ref{T6.4}),
which will not be proved here.
To place this paper into context,
let me mention some work on related topics. De~Branges's
results from \cite{dB1,dB2,dB3,dB4,dB}
are rather complete, so not much has been added since as far
as the general, abstract theory is concerned.
Dym and Dym-McKean \cite{Dym,DMcK} also use de~Branges spaces
to study certain differential operators,
and they give independent introductions
de~Branges's results. The theory of de~Branges spaces is intimately
connected with the theory of so-called canonical systems (also known
as Zakharov-Shabat systems), and there
exists a considerable literature on this subject. See, for instance,
\cite{HdSW,Sakh} and the references cited therein.
Sakhnovich's book \cite{Sakh} in fact
discusses more general systems, and a study of these systems in
the spirit of de~Branges spaces is carried out in \cite{ADym}. As for
the inverse spectral theory of one-dimensional Schr\"odinger operators,
there is the classical work of Gelfand-Levitan mentioned above
\cite{GL}. A different approach -- which so far has been used to
attack uniqueness questions, but in principle also gives a
procedure for reconstructing the potential from the spectral data --
was recently developed by Simon,
partly in collaboration with Gesztesy \cite{GeSi,Si}. This approach
emphasizes the role of large $z$ asymptotics and is quite
different from both \cite{GL} and the approach used here.
However, we will see some connections in Sect.\ 4 of this paper.
For still another recent
treatment of uniqueness questions, see \cite{Horv}.
This paper is organized as follows. We define de~Branges
spaces and establish some basic properties in the following
section. In Sect.\ 3, we then discuss classical material on
the spectral representation of Schr\"odinger operators from this
point of view. This gives an immediate intuitive understanding of
de~Branges spaces, and it also provides an aesthetically pleasing
picture of the spectral representation. Moreover, this material
is then used to derive conditions on the spectral data (which
are related to the Gelfand-Levitan conditions). The local approach
suggested by the theory of de~Branges spaces simplifies this
treatment considerably. Here, by ``local'' we roughly mean that instead
of studying the problem on the half line $(0,\infty)$ at one stroke,
we study the problems on $(0,N)$ for arbitrary $N>0$.
In Sect.\ 5, we state the inverse spectral theorem, which is the
converse of the results of Sect.\ 4. According to the general
philosophy of this paper, this inverse spectral theorem
will also be formulated in the language of de~Branges spaces.
The proof requires preparatory
material; this is presented in Sect.\ 6--8. In particular, in Sect.\ 7
we state, without proof, four theorems on de~Branges spaces on
which our treatment of the inverse problem will crucially depend.
In Sect.\ 9, we start the proof of the inverse spectral theorem,
and we explain the general strategy. This proof is then carried
out in Sect.\ 11--16. In Sect.\ 10, we prepare for the proof
by a discussion
of canonical systems in the style of the treatment of Sect.\ 3.
In Sect.\ 17, we discuss the implications of our results for
the spectral measures of Schr\"odinger operators on the half
line $(0,\infty)$. We do this mainly in order to clarify the
relations to the Gelfand-Levitan theory. We conclude this
paper with some remarks of a more general character.
\section{Elementary properties of de~Branges spaces}
One way to understand de~Branges spaces is to interpret them as weighted
versions of Paley-Wiener spaces. This point of view is put forward in
the introduction of \cite{dB}. So let us recall the Paley-Wiener
Theorem. Fix $a>0$, and define $PW_a$ as the space
of Fourier transforms $\widehat{f}$ of functions $f\in L_2(-a,a)$
(where $\widehat{f}(k) = (2\pi )^{-1/2} \int f(x) e^{-ikx}\, dx$). For
$f\in L_2(-a,a)$, the Fourier transform $\widehat{f}$, originally defined
as an element of $L_2(\mathbb R)$, uniquely extends to an entire function.
The Paley-Wiener Theorem says that
\begin{equation}
\label{PW}
PW_a = \{ F:\mathbb C\to \mathbb C : F\text{ entire, } \int_{\mathbb R}
\left| F(\lambda)\right|^2\, d\lambda < \infty, |F(z)| \le C_F e^{a|z|} \} .
\end{equation}
An entire function $E:\mathbb C \to \mathbb C$ is called a {\it de~Branges
function} if $|E(z)| > |E(\overline{z})|$ for all $z\in\mathbb C^+
=\{ z\in\mathbb C: \text{Im }z >0 \}$. Note that such an $E$
is root-free on $\mathbb C^+$. Now the {\it de~Branges space} $B(E)$
based on $E$ is defined in analogy to \eqref{PW}:
It consists of the entire functions $F$
which are square integrable on the real line with respect to the weight
function $|E|^{-2}$,
\begin{equation}
\label{L2}
\int_{\mathbb R} \left| \frac{F(\lambda)}{E(\lambda)} \right|^2\,
d\lambda < \infty,
\end{equation}
and satisfy a growth condition at infinity. In the presence of
\eqref{L2}, there are a number of ways to state this condition.
To formulate this result, we need
some notions from the theory of Hardy spaces.
However, this subject will not play
an important role in what follows. A good reference for further information
on this topic is \cite{Garn}.
We write $N_0$ for the set of those functions from
the Nevanlinna class $N$ for which the point mass at infinity in the
canonical factorization is non-negative. A more direct, equivalent
characterization goes as follows: $f\in N$ precisely if $f$
is holomorphic on $\mathbb C^+$ and can be written as
the quotient of two bounded holomorphic functions on
$\mathbb C^+$: $f=F_1/F_2$. Such an $f$ is in $N_0$ if in this
representation, $F_2$ can be chosen so that
\[
\lim_{y\to\infty} \frac{\ln |F_2(iy)|}{y} = 0 .
\]
We will also need the Hardy space $H_2$ (on the upper half plane),
which may be defined
as follows:
$f\in H_2$ precisely if $f$ is holomorphic on $\mathbb C^+$ and
\[
\sup_{y>0} \int_{-\infty}^{\infty} \left| f(x+iy) \right|^2 \, dx
< \infty.
\]
Equivalently, $H_2$ is the space of Fourier transforms of functions from
$L_2(-\infty,0)$.
\begin{Proposition}
\label{P2.1}
Suppose that $F$ is entire and \eqref{L2} holds. Then the following
are equivalent:\\
a) $|F(z)/E(z)|, |F^{\#}(z)/E(z)| \le C_F (\text{{\rm Im }}z)^{-1/2}$ for
all $z\in\mathbb C^+$.\\
b) $F/E, F^{\#}/E \in N_0$.\\
c) $F/E, F^{\#}/E \in H_2$.
\end{Proposition}
Here, we use the notation $F^{\#}(z)=\overline{F(\overline{z})}$.
By definition,
an entire function $F$ is in $B(E)$ precisely if, in addition to \eqref{L2},
one (and hence all) of these conditions holds.
In \cite{dB}, de~Branges uses condition b) to define $B(E)$
(functions from $N$ are called functions of bounded type in \cite{dB}).
Condition a) is used in \cite{DMcK}, while c) gives the most elegant
description of $B(E)$ as
\begin{equation}
\label{defdB}
B(E) = \{ F:\mathbb C \to \mathbb C : F \text{ entire, }
F/E, F^{\#}/E \in H_2 \} .
\end{equation}
Clearly, \eqref{L2} now follows automatically.
\begin{proof} As $H_2 \subset N_0$, c) implies b). Condition c) also
implies a) because $H_2$ functions admit a Cauchy type representation
\cite[Chapter II]{Garn}:
\[
\frac{F(z)}{E(z)} = \frac{1}{2\pi i} \int_{\mathbb R}
\frac{F(\lambda)}{E(\lambda)}\, \frac{d\lambda}{\lambda - z}
\quad\quad (z\in \mathbb C^+ ) ,
\]
and similarly for $F^{\#}/E$.
Taking \eqref{L2} into account, we now get a) by applying the
Cauchy-Schwarz inequality.
Now assume that a) holds. A standard application of the residue
theorem (see \cite[Section 6.1]{DMcK} for the details) shows that
\begin{equation}
\label{Cauchy}
\frac{1}{2\pi i} \int_{\mathbb R}
\frac{F(\lambda)}{E(\lambda)}\, \frac{d\lambda}{\lambda - z} =
\begin{cases}
F(z)/E(z) & z\in\mathbb C^+ \\
0 & z\in\mathbb C^-
\end{cases} .
\end{equation}
It is well known that \eqref{Cauchy} together with \eqref{L2} implies
that $F/E \in H_2$ \cite[Exercise II.2a)]{Garn}. Of course, an analogous
argument works for $F^{\#}/E$, so c) holds.
Finally, we show that b) implies c). The canonical factorization
(see again \cite{Garn}) of $F/E\in N_0$ reads
\begin{equation}
\label{factor}
F(z)/E(z) = e^{i\alpha}e^{ihz} B(z) g(z) S_1(z)/S_2(z),
\end{equation}
where $\alpha\in\mathbb R$,
$h\ge 0$, $B$ is a Blaschke product, $g$ is an outer function,
and $S_1$, $S_2$ are the singular factors. Now $F/E$ is meromorphic,
and \eqref{L2} prevents poles on the real line, so $F/E$ is
actually holomorphic not only on the upper half plane, but on a
neighborhood of the closure of $\mathbb C^+$. As a consequence,
$S_1=S_2\equiv 1$. To see this, just recall how the singular
factors were constructed \cite[Sect.\ II.5]{Garn}. Given this, \eqref{L2}
and \eqref{factor} together with Jensen's inequality
now imply that $F/E\in H_2$ (compare \cite[Sect.\ II.5]{Garn}).
By the same argument, $F^{\#}/E\in H_2$.
\end{proof}
\begin{Theorem}
\label{T2.2}
$B(E)$, endowed with the inner product
\[
[F,G] = \frac{1}{\pi} \int_{\mathbb R} \overline{F(\lambda)}
G(\lambda) \, \frac{d\lambda}
{|E(\lambda)|^2},
\]
is a Hilbert space. Moreover, for any $z\in\mathbb C$, point
evaluation is a bounded linear functional. More explicitly,
the entire function $J_{z}$ given by
\[
J_z(\zeta)= \frac{\overline{E(z)}E(\zeta)-
E(\overline{z})\overline{E(\overline{\zeta})}}
{2i(\overline{z}-\zeta)}
\]
belongs to $B(E)$ for every $z\in\mathbb C$, and
$[J_z,F]=F(z)$ for all $F\in B(E)$.
\end{Theorem}
\begin{proof}
$B(E)$ is obviously a linear space, and $[\cdot,\cdot]$ is a scalar
product on $B(E)$.
Also, using condition a) from Proposition \ref{P2.1},
it is not hard to see
that $J_z\in B(E)$ for every $z\in\mathbb C$.
Now fix $F\in B(E)$. Then, as noted above, $F/E$ obeys
the Cauchy type formula \eqref{Cauchy}. A similar computation
shows that
\[
\frac{1}{2\pi i} \int_{\mathbb R}
\frac{F(\lambda)}{E^{\#}(\lambda)}\, \frac{d\lambda}{\lambda - z} =
\begin{cases}
0 & z\in\mathbb C^+\\
-F(z)/E^{\#}(z) & z\in\mathbb C^-
\end{cases} .
\]
Combining these equations, we see that indeed
\begin{equation}
\label{2.1}
F(z)=\frac{1}{\pi} \int \overline{J_z(\lambda)}F(\lambda)\frac{d\lambda}
{|E(\lambda)|^2},
\end{equation}
at least if $z\notin\mathbb R$. But the right-hand side of
\eqref{2.1} is an entire function of $z$, so \eqref{2.1} must hold for
all $z\in\mathbb C$.
It remains to prove completeness of $B(E)$. Since entire functions
are already determined by their restrictions to $\mathbb R$, the space
$B(E)$ may be viewed as a subspace of
$L_2(\mathbb R, \pi^{-1}|E(\lambda)|^{-2}
d\lambda)$. So we only need to show that $B(E)$ is closed in this
larger space.
To this end, observe that
\[
\|J_z\|^2=J_z(z)=\frac{|E(z)|^2-|E(\overline{z})|^2}{4\,
\text{Im }z}
\]
remains bounded if $z$ varies over a compact set.
So if $F_n\in B(E)$ converges in norm to some
$F\in L_2(\mathbb R, \pi^{-1}|E(\lambda)|^{-2}
d\lambda)$,
then $F_n(z)=\langle J_z, F_n \rangle_{L_2}$ converges
uniformly on compact sets to $\langle J_z, F \rangle$, and thus
$F(z)=\langle J_z, F \rangle$ defines an entire extension of $F
\in L_2(\mathbb R, \pi^{-1}|E(\lambda)|^{-2}
d\lambda)$.
We can now use \eqref{defdB} and completeness
of $H_2$ to see that $F$ belongs to $B(E)$.
\end{proof}
$E_a(z)=e^{-iaz}$ is a de~Branges function. With this choice, we
recover the Paley-Wiener space from \eqref{PW}: $PW_a = B(E_a)$.
The general de~Branges space $B(E)$ shares many properties with
this simple example, as the full blown theory from \cite{dB} shows:
$B(E)$ {\it always} consists of transforms of $L_2$ functions with
bounded support. However, in the general case, one has to use
eigenfunctions of a differential operator instead of the exponentials
$e^{ikx}$ and spectral measures instead of Lebesgue measure.
These (rather vague) remarks will be made more precise later. Note
also that the reproducing kernel $J_z$ for $B(E_a)=PW_a$ is
the Dirichlet kernel,
\[
J_z(\zeta)=D_a(\overline{z}-\zeta)
=\frac{\sin a(\overline{z}-\zeta)}{\overline{z}-\zeta},
\]
as a brief computation shows. This is easy to understand: for
general $L_2$ functions, convolution
with $D_a$ projects onto the frequencies in $(-a,a)$, but for functions
in $PW_a$, these are the only frequencies that occur,
so $D_a$ acts as a reproducing kernel
on this space.
There is another simple choice for $E$. Every polynomial without zeros
in $\mathbb C^+\cup \mathbb R$ is a de~Branges function. It is clear
that in this case $B(E)$ contains precisely the polynomials whose
degree is smaller than that of $E$. Basically, the theory of these
(finite dimensional) de~Branges spaces is the theory of orthogonal
polynomials. Many results from \cite{dB} can be viewed as
generalizations of results about orthogonal polynomials.
\section{Spectral representation of 1D Schr\"odinger operators}
In this section, we show that the spaces used in the usual
spectral representation of Schr\"odinger operators on bounded
intervals are de~Branges spaces. So consider the equation
\begin{equation}
\label{se}
-y''(x) + V(x)y(x) = zy(x),
\end{equation}
with $V\in L_1(0,N)$. We will also be interested in the associated
self-adjoint operators on $L_2(0,N)$. For simplicity, we will always
use Neumann boundary conditions at $x=0$. Thus we consider the
operators $H_N^{\beta}=-d^2/dx^2 + V(x)$ on $L_2(0,N)$
with boundary conditions
\[
y'(0)=0,\quad y(N)\sin\beta + y'(N)\cos\beta=0 .
\]
We start by recalling some basic facts about the spectral representation
of $H_N^{\beta}$. General references for this material are
\cite{CL,WMLN}.
The spectrum of $H_N^{\beta}$ is simple and purely discrete.
Let $u(x,z)$ be the solution of \eqref{se} with the initial values
$u(0,z)=1$, $u'(0,z)=0$ (so $u$ satisfies the boundary condition
at $x=0$).
Define the Borel measure $\rho_N^{\beta}$ by
\begin{equation}
\label{rhoN}
\rho_N^{\beta} = \sum_{\frac{u'}{u}(N,E)=-\tan \beta}
\frac{\delta_E}{\|u(\cdot,E)\|_{L_2(0,N)}^2}.
\end{equation}
Here, $\delta_E$
denotes the Dirac measure (i.e.\ $\delta_E(\{ E\} )=1,
\delta_E(\mathbb R\setminus\{ E\} )=0$), and the sum ranges
over all eigenvalues of $H_N^{\beta}$,
and of course this interpretation also makes sense if $\beta=\pi/2$.
The operator $U:L_2(0,N)\to L_2(\mathbb R,d\rho_N^{\beta})$,
defined by
\begin{equation}
\label{U}
(Uf)(\lambda) = \int u(x,\lambda) f(x)\, dx,
\end{equation}
is unitary, and $UH_N^{\beta}U^*$ is multiplication by $\lambda$ in
$L_2(\mathbb R,d\rho_N^{\beta})$. It is a simple but noteworthy
fact that the action of $U$ depends neither on $N$ nor on
the boundary condition $\beta$.
The adjoint (or inverse) of
$U$ acts as
\begin{equation}
\label{U*}
(U^*F)(x) = \int u(x,\lambda)F(\lambda)\, d\rho_N^{\beta}(\lambda),
\end{equation}
for $F\in L_2(\mathbb R,d\rho_N^{\beta})$ with finite support.
Similar statements hold for half line problems (if a potential
$V\in L_{1,loc}([0,\infty))$ is given), except that the
construction of the spectral measure $\rho$ is slightly more
complicated. One can use, for instance, the limiting procedure
of Weyl (see \cite[Chapter 9]{CL}). Also, there is the distinction
between the limit point and limit circle cases. In the latter case,
one needs a boundary condition at infinity to get self-adjoint
operators (see again \cite{CL} or \cite{WMLN}). In either case,
$U$, defined by \eqref{U} for compactly supported $f\in L_2(0,\infty)$,
extends uniquely to a unitary map $U:L_2(0,\infty)\to
L_2(\mathbb R, d\rho)$, and we still have that $UHU^*$ is multiplication
by the variable in $L_2(\mathbb R, d\rho)$ (in the limit circle case,
$\rho$ and $H$ depend on the boundary condition at infinity).
Finally, for compactly supported $F\in L_2(\mathbb R, d\rho)$,
we also still have \eqref{U*}, with $\rho_N^{\beta}$ replaced by
$\rho$, of course.
In this paper, half line problems will sometimes be lurking in the
background, but we will mainly work with problems on bounded intervals.
We now identify $L_2(\mathbb R, d\rho_N^{\beta})$ as a de~Branges
space. Let
\[
E_N(z) = u(N,z)+iu'(N,z).
\]
Then, since $u(N,\overline{z})=\overline{u(N,z)}$ and similarly for $u'$,
\begin{equation}
\label{3.1}
\frac{\overline{E_N(z)}E_N(\zeta)-
E_N(\overline{z})\overline{E_N(\overline{\zeta})}}
{2i(\overline{z}-\zeta)}
= \frac{\overline{u(N,z)}u'(N,\zeta)-\overline{u'(N,z)}
u(N,\zeta)}{\overline{z}-\zeta}.
\end{equation}
Denote the left-hand side of \eqref{se} by $\tau y$.
We have Green's identity
\[
\int_0^N\left( \overline{(\tau f)} g - \overline{f} \tau g\right)
= \left. \left( \overline{f(x)}g'(x)-\overline{f'(x)}g(x)\right)
\right|_{x=0}^{x=N},
\]
and this allows us to write \eqref{3.1} in the form
\[
\frac{\overline{E_N(z)}E_N(\zeta)-
E_N(\overline{z})\overline{E_N(\overline{\zeta})}}
{2i(\overline{z}-\zeta)}
= \int_0^N \overline{u(x,z)}u(x,\zeta)\, dx.
\]
Taking $z=\zeta\in\mathbb C^+$ shows that $E_N$ is a
de~Branges function. The de~Branges space based on $E_N$
will be denoted by $S_N\equiv B(E_N)$ ($S$ for Schr\"odinger).
By Theorem \ref{T2.2} and the above calculation,
the reproducing kernel $J_z$ of $S_N$ is given by
\begin{equation}
\label{PE}
J_z(\zeta)=\int_0^N \overline{u(x,z)}u(x,\zeta)\, dx.
\end{equation}
\begin{Theorem}
\label{T3.1}
For any boundary condition $\beta$ at $x=N$, the Hilbert spaces $S_N$
and $L_2(\mathbb R, d\rho_N^{\beta})$ are identical. More precisely,
if $F(z)\in S_N$, then the restriction of $F$ to $\mathbb R$ belongs
to $L_2(\mathbb R, d\rho_N^{\beta})$, and $F\mapsto F\big|_{\mathbb R}$
is a unitary map from $S_N$ onto $L_2(\mathbb R, d\rho_N^{\beta})$.
\end{Theorem}
\begin{proof}
Basically, the theorem is true because
$J_z$, as given in \eqref{PE}, is the reproducing kernel for
both spaces. The formal proof proceeds as follows.
Fix $\beta\in [0,\pi)$. We will usually drop the reference to this parameter
(and also to $N$) in the notation in this proof. Let
$\{\lambda_n\}$ be the eigenvalues of $H_N^{\beta}$; note that
$\{\lambda_n\}$ supports the spectral measure $\rho=\rho_N^{\beta}$.
We first claim that $J_z\in L_2(\mathbb R, d\rho)$ for every
$z\in\mathbb C$. More precisely, by this we mean that the restriction
of $J_z$ to $\mathbb R$ (or $\{\lambda_n\}$) belongs to
$L_2(\mathbb R, d\rho)$. Indeed, using \eqref{rhoN} and \eqref{PE},
we obtain
\begin{align*}
\left\| J_z \right\|^2_{L_2(\mathbb R, d\rho)} & =
\sum_n \left| J_z(\lambda_n) \right|^2 \rho(\{\lambda_n\} ) \\ & =
\sum_n \left| \langle u(\cdot, z), u(\cdot, \lambda_n) \rangle_{L_2(0,N)}
\right|^2 \left\|u(\cdot, \lambda_n) \right\|^{-2}_{L_2(0,N)}\\
& = \left\|u(\cdot, z) \right\|^2_{L_2(0,N)}.
\end{align*}
The last equality is Parseval's formula, which applies because
the normed eigenfunctions $u(\cdot,\lambda_n)/\|u(\cdot,\lambda_n)\|$ form
an orthonormal basis of $L_2(0,N)$. A similar computation shows that
\[
\langle J_w, J_z \rangle_{L_2(\mathbb R, d\rho)} = \langle u(\cdot,z),
u(\cdot, w) \rangle_{L_2(0,N)}
= J_z(w) = [J_w, J_z ]_{S_N} .
\]
By extending linearly, we thus get an isometric restriction map
$V_0:L(\{J_z: z\in\mathbb C\} ) \to L_2(\mathbb R,d\rho)$,
$V_0J_z=J_z\big|_{\mathbb R}$.
$V_0$ extends uniquely to an isometry $V:
\overline{L(\{J_z: z\in\mathbb C\} )} \to L_2(\mathbb R,d\rho)$.
Now the finite linear combinations of the $J_z$ are dense both
in $L_2(\mathbb R, d\rho)$ and in $S_N$.
In fact, as $J_{\lambda_m}(\lambda_n) = \|u(\cdot,\lambda_n)
\|^2 \delta_{mn}$, the $J_z$ already span $L_2(\mathbb R, d\rho)$ if
$z$ runs through the eigenvalues $\lambda_n$. As for $S_N$, we just
note that since $[J_z,F]=F(z)$, an $F\in S_N$ that is orthogonal to
all $J_z$'s must vanish identically.
It follows that $V$ maps $S_N$ unitarily onto $L_2(\mathbb R, d\rho)$.
Finally, if $F\in S_N$, then
\[
(VF)(\lambda_n)= \langle V_0J_{\lambda_n}, VF \rangle_{L_2(\mathbb R, d\rho)}
=[ J_{\lambda_n}, F ] = F(\lambda_n),
\]
so $V$ (originally defined by a limiting procedure) indeed just is
the restriction map on the whole space.
\end{proof}
Recall that $U$ from \eqref{U} maps $L_2(0,N)$ unitarily onto
$L_2(\mathbb R, d\rho_N^{\beta})$. Hence, by using the identification
$L_2(\mathbb R, d\rho_N^{\beta}) \equiv S_N$ obtained in Theorem
\ref{T3.1}, we get an induced unitary map (which we still denote
by $U$) from $L_2(0,N)$ onto $S_N$. We claim that this map is still
given by \eqref{U}; more precisely, for $f\in L_2(0,N)$,
\begin{equation}
\label{UU}
(Uf)(z) = \int u(x,z)f(x)\, dx\quad\quad (z\in\mathbb C).
\end{equation}
To see this, note that \eqref{UU} is correct for $f=u(\cdot,\lambda_n)$,
where $\lambda_n$ is an eigenvalue of $H_N^{\beta}$. Indeed,
$u(\cdot,\lambda_n)$ is real valued, so in this
case the right-hand side of \eqref{UU} equals $J_{\lambda_n}(z)$, which
clearly is in $S_N$. It is of course automatic that $Uf$, computed with
formula \eqref{UU}, restricts to the right function on $\{ \lambda_n \}$.
Now \eqref{UU} follows in full generality by a standard approximation
argument.
As a consequence, we have the following alternate
description of $S_N$ as a set, in addition to the definition
\eqref{defdB}:
\begin{equation}
\label{BN}
S_N = \left\{ F(z)= \int u(x,z) f(x)\, dx : f\in L_2(0,N) \right\} .
\end{equation}
This may be interpreted as a statement of Paley-Wiener type.
Originally, $S_N$ was defined as a space of entire functions
which are square integrable on the real line with respect to
a weight function and satisfy a
growth condition; now \eqref{BN} says that these function precisely
arise by transforming $L_2$ functions with support in $(0,N)$,
using the eigenfunctions $u(\cdot, z)$.
In the case of
zero potential, one basically recovers the original Paley-Wiener
Theorem. A still much more general result along these
lines (namely, Theorem \ref{T6.3}) will be discussed later.
The material developed so far has some consequences.
We continue to denote the de~Branges space
associated with a Schr\"odinger equation on an interval $(0,N)$ by $S_N$.
\begin{Theorem}
\label{T3.2}
a) Suppose that $0 0,
\end{equation}
where $E_0$ is the de~Branges function for zero potential: $E_0(z)
=\cos\sqrt{z}N-i\sqrt{z}\sin\sqrt{z}N$.
To establish \eqref{4.2},
it clearly suffices to show that
\[
\liminf_{\lambda\to\pm\infty}\left| \frac{E(\lambda)}{E_0(\lambda)}
\right| > 0.
\]
Consider first the case $\lambda\to\infty$, and put again $\lambda=k^2$,
$k\to\infty$. Assume that, contrary to the assertion, there exists a
sequence $k_n\to\infty$ so that $E(k_n^2)/E_0(k_n^2)\to 0$.
We now use \eqref{estv} and the analogous estimate on $u'$ which reads
\begin{equation}
\label{estv'}
\left| u'(x,z) + \sqrt{z}\sin\sqrt{z}x \right| \le
\exp( \|V\|_{L_1(0,N)} ) \exp ( |\text{Im }z^{1/2}|x )
\quad (0\le x\le N) .
\end{equation}
Since $E(z)=u(N,z)+iu'(N,z)$, we obtain
\begin{equation}
\label{4.3}
\left| \frac{E(k_n^2)}{E_0(k_n^2)} \right|^2 =
\frac{\left(\cos k_n N + O(k_n^{-1}) \right)^2 + \left(
k_n\sin k_n N + O(1) \right)^2}{\cos^2 k_n N + k_n^2 \sin^2 k_n N} .
\end{equation}
If $k_n \sin k_n N$ remains bounded as $n\to\infty$, then
$|\cos k_n N| \to 1$, so \eqref{4.3} shows that $|E(k_n^2)/E_0(k_n^2)|^2$
is bounded away from zero. Thus, by passing to a subsequence if necessary,
we may assume that $k_n|\sin k_n N|\to\infty$. But then we see
from \eqref{4.3} that $|E(k_n^2)/E_0(k_n^2)|^2\to 1$, which is
a contradiction to our choice of $k_n$.
The argument for $\lambda\to -\infty$ is similar (in fact, easier).
Write $\lambda=-\kappa^2$ with $\kappa\to\infty$. One shows that both
$E$ and $E_0$ are of the asymptotic form
\begin{align*}
\left| E(-\kappa^2)\right|^2 & = \frac{\kappa^2}{4} e^{2\kappa N}
+O( \kappa e^{2\kappa N} ), \\
\left| E_0(-\kappa^2)\right|^2 & = \frac{\kappa^2}{4} e^{2\kappa N}
+O( \kappa e^{2\kappa N} ),
\end{align*}
so $|E(-\kappa^2)/E_0(-\kappa^2)| \to 1$. Thus \eqref{4.2} holds.
Now if $F(z)=\int f(x) \cos \sqrt{z}x \, dx$ with $f\in L_2(0,N)$,
then $F/E_0\in L_2(\mathbb R)$, hence also $F/E\in L_2(\mathbb R)$
by \eqref{4.2}. Moreover, $F$ is obviously entire.
It remains to establish one of the conditions
of Proposition \ref{P2.1}. To this end, we establish Cauchy type
representations for $F/E$, $F^{\#}/E$. As we have already seen in
the proof of Proposition \ref{P2.1}, such representations imply
condition a) from the Proposition.
Write $z=R^2e^{2i\varphi}$, $\sqrt{z}=Re^{i\varphi}$ with
$R>0$, $0\le\varphi\le\pi/2$. Then the asymptotic formulae
\eqref{estv}, \eqref{estv'} yield
\[
E(z) = \cos(NRe^{i\varphi}) - iRe^{i\varphi} \sin (NRe^{i\varphi})
+ O(e^{NR\sin\varphi}).
\]
The constant implicit in the error term is of course independent
of $R$ and $\varphi$. It follows that
\begin{equation}
\label{4.a}
|E(z)| \ge R | \sin(NRe^{i\varphi})|
- O(e^{NR\sin\varphi}).
\end{equation}
Hence there exist constants $C_0,R_0>0$ with the following
property: If $R\ge R_0$ and $\sin\varphi\ge C_0/R$, then
\begin{equation}
\label{4.4}
|E(z)|\ge \frac{1}{2} R e^{NR\sin\varphi}.
\end{equation}
In the opposite case of small $\varphi$, we
restrict our attention
to the radii $R_n=N^{-1}(2\pi n + \pi/2)$, with $n\in\mathbb N$, $n$ large.
The assumption $\sin\varphi < C_0/R$ ensures that the error term
from \eqref{4.a} is actually bounded, and
\begin{align*}
\sin(NRe^{i\varphi}) & =
\sin(NR\cos\varphi + iNR\sin\varphi) =
\sin(NR + iNR\sin\varphi) + O(R^{-1})\\
& = \sin(\pi/2 + iNR\sin\varphi) + O(R^{-1})
= \cosh (NR\sin\varphi) + O(R^{-1}).
\end{align*}
As $\cosh x\ge 1$, we thus get from \eqref{4.a}
that $|E(z)|\ge R_n/2$ for $z$ as
above and sufficiently large $n$. Obviously, if $\sin\varphi0$, the limit $\lim_{\epsilon\to 0+} m^{(N)}(k^2+i\epsilon)$
exists. We will denote this limit simply by $m^{(N)}(k^2)$; we then
have that $m^{(N)}(k^2)=M_N(k)$ for all $k>0$ (note, however, that
$M_N(-k)$ does {\it not} give the correct value but the complex conjugate
of $m^{(N)}(k^2)$ because $k^2$ is now approached from the lower half
plane).
From these facts, we immediately get the following description of
$\rho^{(N)}$. Denote the finitely many negative eigenvalues by
$-\kappa_n^2$, $\kappa_n > 0$. Then
\[
\rho^{(N)}= \sum_n \rho^{(N)}(\{ -\kappa_n^2 \} ) \delta_{-\kappa_n^2}
+ \frac{1}{\pi}\chi_{(0,\infty)}(\lambda) \text{Im }M_N(\sqrt{\lambda})
\, d\lambda .
\]
We will also need the $m$-function $m_0$ and the spectral measure
$\rho_0$ for zero potential. The following formulae hold:
\begin{equation}
\label{mrho0}
m_0(z) = (-z)^{-1/2},\quad \rho_0 =
\chi_{(0,\infty)}(\lambda)\, \frac{d\lambda}{\pi \sqrt{\lambda}}.
\end{equation}
In the first equation, which holds for $z\in\mathbb C^+$, the
square root must be chosen so that $\text{Im }m_0 > 0$. Clearly,
$m_0$ can then be holomorphically continued to $\mathbb C \setminus
[0,\infty)$. This continuation will also be denoted by $m_0$. Finally,
just as for $m^{(N)}$, we put $m_0(\lambda)\equiv \lim_{\epsilon\to
0+} m_0(\lambda+i\epsilon)$ for $\lambda>0$.
\begin{Lemma}
\label{L4.3}
a) The limit $\lim_{k\to 0} k M_N(k)$ exists.\\
b) For $\text{Im }k \ge 0$, $k\notin (-\infty,0]$, we have that
\[
m^{(N)}(k^2)-m_0(k^2) = \frac{1}{k^2} \int_0^N V(x) e^{2ikx}\, dx
+ O(|k|^{-3}) .
\]
\end{Lemma}
\begin{proof}
a) If $y_1$, $y_2$ both solve \eqref{se}, then the Wronskian
$y'_1y_2-y_1y'_2$ is constant. By computing the Wronskian of
$f(\cdot,k)$ and $f(\cdot, -k)$ at $x=0$ and $x=N$, we therefore see that
\[
f'(0,k)f(0,-k) - f(0,k) f'(0,-k) = 2ik .
\]
Take the derivative with respect to $k$ (writing $\dot{}\equiv
\frac{d}{dk}$) and then set $k=0$. We obtain
\[
f(0,0)\dot{f}'(0,0) - \dot{f}(0,0) f'(0,0) = i,
\]
and thus it is not possible that $f'(0,k)$ and $\dot{f}'(0,k)$ vanish
simultaneously at $k=0$. Therefore a possible pole of $M_N$ at $k=0$
must be of order one.
b) Put $g(x,k)=f(x,k)e^{-ikx}$. Then, basically by the variation of
constants formula, $g$ is the unique solution of the integral equation
\[
g(x,k)= 1 + \frac{1}{2ik} \int_x^N \left( e^{2ik(t-x)} - 1 \right)
V(t) g(t,k)\, dt .
\]
If $\text{Im }k\ge 0$ and $|k|\ge 2\|V\|_{L_1(0,N)}$, this implies the
a priori estimate $\|g\|_{\infty} \le 2$. So for these $k$,
we have $|g(x,k)-1|\le 2\|V\|_1/|k|$.
This in turn shows that
\[
g'(x,k) = -\int_x^N V(t)e^{2ik(t-x)}\, dt + O(|k|^{-1}).
\]
Hence for large $|k|$,
\begin{align*}
m^{(N)}(k^2)=M_N(k) & =
-\frac{f(0,k)}{f'(0,k)} = \frac{-g(0,k)}{ikg(0,k)+g'(0,k)}\\
& = \frac{i}{k}\, \frac{1}{1+\frac{g'(0,k)}{ikg(0,k)}}
= \frac{i}{k} \left( 1 - \frac{g'(0,k)}{ikg(0,k)} + O(|k|^{-2})
\right) \\
& = \frac{i}{k} + \frac{1}{k^2} \int_0^N V(t)e^{2ikt}\, dt
+ O(|k|^{-3}),
\end{align*}
as desired, since $m_0(k^2)=i/k$. For small $k$, there is nothing to prove.
\end{proof}
\begin{proof}[Proof of Theorem \ref{T4.2}]
Suppose that an $F\in S_N$ is given and write, according to
Theorem \ref{T4.1},
\[
F(z) = \int f(t) \cos\sqrt{z}t \, dt
\]
with $f\in L_2(0,N)$. Introduce the (signed) Borel measure
$\sigma_N$ by $\sigma_N=\rho^{(N)}-\rho_0$. Theorem \ref{T3.2}b)
allows us to compute the norm of $F$ as
\begin{equation}
\label{4.8}
\|F\|_{S_N}^2 = \int |F(\lambda)|^2\, d\rho^{(N)}(\lambda)
= \int|F(\lambda)|^2\, d\rho_0(\lambda) +
\int |F(\lambda)|^2\, d\sigma_N(\lambda) .
\end{equation}
The two integrals in this last expression converge absolutely
because the map $f\mapsto F$ is unitary from $L_2(0,\infty)$
onto $L_2(\mathbb R, d\rho_0)$ -- in fact, it is just the $U$
from \eqref{U} for zero potential. This observation also
says that
\[
\int|F(\lambda)|^2\, d\rho_0(\lambda) = \int_0^N |f(t)|^2 \, dt .
\]
It remains to analyze the last integral from \eqref{4.8}.
Using the identity
\[
\cos x \cos y = \frac{1}{2} \left( \cos(x-y) + \cos (x+y) \right),
\]
we can write it in the form
\begin{align}
\int_{-\infty}^{\infty} |F(\lambda)|^2\, & d\sigma_N(\lambda) =
\int_{-\infty}^{\infty} d\sigma_N(\lambda)
\int ds\int dt \overline{f(s)}f(t) \cos \sqrt{\lambda}s
\cos \sqrt{\lambda}t \nonumber \\
\label{4.9}
& = \frac{1}{2} \int_{-\infty}^{\infty} d\sigma_N(\lambda)
\int ds\int dt \overline{f(s)}f(t) \left(
\cos\sqrt{\lambda}(s-t) + \cos\sqrt{\lambda}(s+t) \right) .
\end{align}
Formally, this is of the desired form with
$\phi(x) = \int \cos\sqrt{\lambda}x \, d\sigma_N(\lambda)$,
but this needs to be interpreted carefully because the integral
``defining'' $\phi$
will not, in general, be absolutely convergent.
Our strategy will be to first define $\phi$ as a distribution
and then prove that it is actually an absolutely continuous
function. More precisely, the contribution coming from
$\lambda\in (0,\infty)$ will be treated in this way.
So we define a tempered distribution $\phi_+\in\mathcal{S}'$ as
follows. Let $g$ be a test function from the Schwartz space
$\mathcal{S}$. Recall that this means that $g$ is infinitely
differentiable and $\sup_{x\in\mathbb R} |x|^m |g^{(n)}(x)| <
\infty$ for all $m,n\in\mathbb N_0$. Then $\phi_+$ acts on $g$ by
\[
(\phi_+, g) = \int_0^{\infty} d\sigma_N(\lambda)
\int_{-\infty}^{\infty} dx\, g(x) \cos\sqrt{\lambda}x .
\]
This is well defined because $\int g(x) \cos\sqrt{\lambda}x\, dx$
is rapidly decreasing in $\lambda$ and from Lemma \ref{L4.3} and
the preceding material we have the a priori estimate
$|\sigma_N |([0,R]) \le C \sqrt{R}$. Thus the integral certainly
converges. It is also clear that $\phi_+$ is linear and continuous
in the topology of $\mathcal{S}$, so indeed $\phi_+\in\mathcal{S}'$.
Note that formally, $\phi_+$ is just $\phi_+(x)=\int_0^{\infty}
\cos\sqrt{\lambda}x \, d\sigma_N(\lambda)$.
The Fourier transform of $\phi_+$ is, by definition, the tempered
distribution $\widehat{\phi}_+$ acting on test functions $g$
by $(\widehat{\phi}_+,g)=
(\phi_+,\widehat{g} )$. We compute
\begin{align*}
(\phi_+,\widehat{g}) & = \frac{1}{2} \int_0^{\infty} d\sigma_N(\lambda)
\int_{-\infty}^{\infty} dx\, \widehat{g}(x) \left( e^{i\sqrt{\lambda}x}
+ e^{-i\sqrt{\lambda}x} \right) \\
& = \sqrt{\frac{\pi}{2}} \int_0^{\infty} d\sigma_N(\lambda)
\left( g(\sqrt{\lambda}) + g(-\sqrt{\lambda}) \right) \\
& = \sqrt{\frac{2}{\pi}} \int_0^{\infty} \text{Im }
( m^{(N)}- m_0 )(k^2) \left( g(k) + g(-k) \right) k\, dk \\
& = \sqrt{\frac{2}{\pi}} \int_{-\infty}^{\infty} g(k) |k| \text{ Im }
(m^{(N)}- m_0)(k^2) \, dk,
\end{align*}
and hence $\widehat{\phi}_+$ is a function and
\begin{equation}
\label{4.5}
\widehat{\phi}_+(k) = \sqrt{\frac{2}{\pi}}\, |k| \text{ Im }
(m^{(N)}- m_0)(k^2) .
\end{equation}
From Lemma \ref{L4.3} and the formula for $m_0$,
we see that $\widehat{\phi}_+$ is continuous
and $\widehat{\phi}_+(k)=O(|k|^{-1})$ for large $|k|$. In fact,
we get the more precise information that
\begin{align*}
\widehat{\phi}_+(k) & =
\sqrt{\frac{2}{\pi}} \, \frac{1}{|k|} \int_0^N V(x)\sin 2|k|x \, dx
+O(k^{-2}) \\
& = \frac{1}{\sqrt{2\pi}} \, \frac{1}{ik} \int_0^N V(x)
\left( e^{2ikx}- e^{-2ikx} \right) \, dx + O(k^{-2})\\
& =\frac{1}{ik} \widehat{W}_N(k) + O(k^{-2}),
\end{align*}
where
\[
W_N(x) = \begin{cases} -(1/2) V(x/2) & 0 < x < 2N \\
(1/2) V(-x/2) & -2N < x < 0 \\
0 & |x| > 2N \end{cases} .
\]
Therefore the (distributional) derivative $\phi'_+$ of $\phi_+$ has
a Fourier transform of the form
\[
(\phi'_+)\, \widehat{ }\,(k) = ik\widehat{\phi}_+(k) =
\widehat{W}_N(k) + \widehat{R}_N(k),
\]
where $\widehat{R}_N$ is a continuous function and
$\widehat{R}_N(k) = O(|k|^{-1})$.
It follows that
\[
\phi'_+(x) = W_N(x) + R_N(x),
\]
with $R_N\in L_2$.
In particular, $\phi'_+\in\mathcal{S}'$ is a locally integrable function,
and as a consequence, $\phi_+$ is an absolutely continuous function. We
define
\[
\phi(x) = \int_{-\infty}^0 \cos\sqrt{\lambda}x\, d\sigma_N(\lambda)
+ \phi_+(x)
= \sum \rho^{(N)}(\{ -\kappa_n^2 \} ) \cosh \kappa_n x + \phi_+(x),
\]
and we verify that this $\phi$ has the desired properties.
We know already that $\phi_+$ is absolutely continuous. Its Fourier transform,
$\widehat{\phi}_+$, is real valued and even (compare \eqref{4.5}), so
$\phi_+$ has these properties, too. The (finite) sum
$\sum \rho^{(N)}(\{ -\kappa_n^2 \} ) \cosh \kappa_n x$ manifestly is
a smooth, real valued, even function, so we have established that $\phi$
is absolutely continuous, real valued, and even.
To show that $\phi(0)=0$, we use the formula
\[
(m^{(N)} - m_0)(k^2) = \int_{-\infty}^{\infty}
\frac{d\sigma_N(\lambda)}{\lambda - k^2}\quad\quad
(k\in\mathbb C^+),
\]
which follows at once from the Herglotz representations of
$m^{(N)}$ and $m_0$ (see, e.g., \cite{Lev}). We can Fourier transform
the denominator,
\[
\frac{1}{\lambda - k^2} = \frac{i}{k} \int_0^{\infty}
\cos\sqrt{\lambda}t\, e^{ikt}\, dt\quad\quad
(k\in\mathbb C^+, \lambda>0),
\]
to write this in the form
\begin{equation}
\label{4.6}
(m^{(N)} - m_0)(k^2) = \sum \frac{\rho^{(N)}(\{
-\kappa_n^2 \} )}{-\kappa_n^2 - k^2} +
\frac{i}{k} \int_0^{\infty} d\sigma_N(\lambda)
\int_0^{\infty} dt\, e^{ikt} \cos\sqrt{\lambda}t .
\end{equation}
We now take a closer look at this last integral:
\begin{align*}
\int_0^{\infty} d\sigma_N(\lambda)
\int_0^{\infty} dt\, e^{ikt} \cos\sqrt{\lambda}t & =
\frac{2}{\pi} \int_0^{\infty} dl\, l \text{ Im }
(m^{(N)} - m_0)(l^2) \int_0^{\infty} dt\, e^{ikt} \cos lt\\
& = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} dl\,
\widehat{\phi}_+(l)\int_0^{\infty} dt\, e^{ikt} \cos lt\\
& = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} dl\,
\widehat{\phi}_+(l)\int_0^{\infty} dt\, e^{ikt}e^{-ilt}.
\end{align*}
This last expression equals
$\int \widehat{\phi}_+\,\widehat{h}$,
where $h(t) = \chi_{(0,\infty)}(t) e^{ikt}$.
Since we are assuming that $k\in\mathbb C^+$, this function is in $L_2$,
as is $\widehat{\phi}_+$, and thus we may use the Plancherel identity to
obtain the final result
\[
(m^{(N)} - m_0)(k^2) =
\sum \frac{\rho^{(N)}(\{
-\kappa_n^2 \} )}{-\kappa_n^2 - k^2} +
\frac{i}{k} \int_0^{\infty} \phi_+(t) e^{ikt}\, dt \quad\quad(k\in\mathbb C^+).
\]
Note that on a formal level, this may be derived very easily from
\eqref{4.6} because the last term of \eqref{4.6} looks like
$\phi_+$ applied to $(i/k)h$. However, $h$ is not a test function!
If we assume that in addition $\text{Im }k> \max \kappa_n$, then
$\int_0^{\infty} \phi(t) e^{ikt} \, dt$ exists and we get the
more compact formula
\[
(m^{(N)} - m_0)(k^2) =
\frac{i}{k} \int_0^{\infty} \phi(t) e^{ikt} \, dt.
\]
We now specialize to $k=iy$, $y\to\infty$, and integrate by parts.
This gives
\[
m^{(N)}(-y^2)-m_0(-y^2) = \frac{\phi(0)}{y^2} +
\frac{1}{y^2} \int_0^{\infty} \phi'(t) e^{-yt}\, dt .
\]
Since $\phi'_+\in L_1+ L_2$, the integral goes to zero by
dominated convergence, hence
\[
m^{(N)}(-y^2)-m_0(-y^2) = \frac{\phi(0)}{y^2} + o(y^{-2})
\quad\quad (y\to\infty) .
\]
On the other hand, Lemma \ref{L4.3}b) implies that
$m^{(N)}(-y^2)-m_0(-y^2) = o(y^{-2})$. Therefore, $\phi(0)=0$.
Let $\mathcal{K}_{\phi}: L_2(0,N)\to L_2(0,N)$ be the integral operator
defined in \eqref{Kphi}, with the $\phi$ constructed above. We
still have to establish the crucial property of $\phi$, namely,
the fact that the integral from \eqref{4.9} equals
$\langle f, \mathcal{K}_{\phi}f \rangle$ for all $f\in L_2(0,N)$.
We first consider the case when $f\in C_0^{\infty}(0,N)$, and we
treat explicitly only the first term from
\eqref{4.9}, which contains $\cos\sqrt{\lambda}
(s-t)$. Introduce the new variables $R=s+t$, $r=s-t$. Then we have
\begin{align*}
\int d\sigma_N(\lambda) \int\int ds\, dt\, & \overline{f(s)} f(t)
\cos\sqrt{\lambda}(s-t) \\
& = \frac{1}{2} \int d\sigma_N(\lambda) \int dr\, \cos\sqrt{\lambda} r
\int dR\, \overline{f\left( \frac{R+r}{2} \right)}
f\left( \frac{R-r}{2} \right) \\
& =\int d\sigma_N(\lambda) \int dr\, g(r) \cos\sqrt{\lambda} r .
\end{align*}
Here, we have put
\begin{equation}
\label{4.11}
g(r) \equiv \frac{1}{2} \int dR\, \overline{f\left( \frac{R+r}{2} \right)}
f\left( \frac{R-r}{2} \right) ,
\end{equation}
and all integrals are over $\mathbb R$. Note that $g\in C_0^{\infty}
(-N,N)$. In particular, $g$ is an admissable test
function, and thus the following manipulations are justified:
\begin{align*}
\int d\sigma_N(\lambda) & \int dr\, g(r) \cos\sqrt{\lambda} r\\
& = \sum \rho( \{ -\kappa_n^2 \} ) \int g(r) \cosh \kappa_n r\, dr
+\int_0^{\infty} d\sigma_N(\lambda) \int dr\, g(r) \cos\sqrt{\lambda} r\\
& = \int dr\, g(r) \sum \rho( \{ -\kappa_n^2 \} ) \cosh \kappa_n r
+ \int \phi_+(r) g(r)\, dr\\
& = \int \phi(r) g(r)\, dr .
\end{align*}
Finally, we can write out $g$ (see \eqref{4.11}) and transform back to
the original variables
$(s,t)$; we obtain the expression
\[
\int\int ds\,dt\, \overline{f(s)} f(t) \phi(s-t) .
\]
If we combine this with the result of the analogous computation
for the term involving $\cos\sqrt{\lambda}(s+t)$, then we get indeed that
\begin{align*}
\frac{1}{2} \int_{-\infty}^{\infty} d\sigma_N(\lambda)
\int ds\int dt\, \overline{f(s)} & f(t) \left(
\cos\sqrt{\lambda}(s-t) + \cos\sqrt{\lambda}(s+t) \right) \\
& = \frac{1}{2} \int\int ds\,dt\, \overline{f(s)} f(t)
\left( \phi(s-t) + \phi(s+t) \right) \\
& = \langle f, \mathcal{K}_{\phi} f \rangle .
\end{align*}
Using this in \eqref{4.8}, \eqref{4.9}, we see that
\begin{equation}
\label{4.10}
\|F\|_{S_N}^2 = \langle f, (1+\mathcal{K}_{\phi}) f \rangle_{L_2(0,N)},
\end{equation}
as desired.
So far, this has been proved for $f\in C_0^{\infty}(0,N)$. To establish
\eqref{4.10} in full generality, fix $f\in L_2(0,N)$ and pick
$f_n\in C_0^{\infty}(0,N)$ with $\|f_n-f\|_{L_2(0,N)} \to 0$.
From the proof of Theorem \ref{T4.1} (see, in particular, \eqref{4.2})
we know that there is a constant $C>0$ so that
for all $G\in S_N$, the inequality $\|G\|_{S_N}\le C\|G\|_{S_N^{(0)}}$
holds, where $S_N^{(0)}$ is the de~Branges space for zero potential.
Hence, writing $F_n(z)= \int f_n(t) \cos \sqrt{z}t\, dt$, we deduce that
\[
\|F_n-F\|_{S_N} \le C \|F_n-F\|_{S_N^{(0)}} = C \|f_n-f\|_{L_2(0,N)}
\to 0 .
\]
Therefore, we can use \eqref{4.10} with $f$ replaced
by $f_n$ and then pass to the limit to see that
\eqref{4.10} holds for all $f\in L_2(0,N)$.
\end{proof}
\section{The inverse spectral theorem}
Theorem \ref{T4.2} associates with each Schr\"odinger equation
a function $\phi$ that determines the scalar product on the
corresponding de~Branges spaces $S_N$. Recall also that by
Theorem \ref{T3.1}, these de~Branges spaces can be identified with the
spaces $L_2(\mathbb R, d\rho_N^{\beta})$ from the spectral
representation of the Schr\"odinger operators. So it makes sense
to think of $\phi$ (on $[-2N,2N]$) as representing the spectral
data of $-d^2/dx^2+V(x)$ (on $L_2(0,N)$, with suitable boundary
conditions at the endpoints). Our next result is the converse
of Theorem \ref{T4.2}. It says that every function $\phi$ that
has the properties stated in Theorem \ref{T4.2} comes from a
Schr\"odinger equation. To be able to formulate this concisely,
we denote this set of $\phi$'s by $\Phi_N$, so
\[
\Phi_N =\{ \phi:[-2N,2N] \to \mathbb R :
\phi \text{ absolutely continuous, even, }
\phi(0)=0, 1+\mathcal{K}_{\phi}>0 \} .
\]
The last condition of course refers to the integral operator
$\mathcal{K}_{\phi}$ on $L_2(0,N)$ that was introduced in \eqref{Kphi};
we require that the self-adjoint operator $1+\mathcal{K}_{\phi}$ be
positive definite. In the situation of Theorem \ref{T4.2},
this condition holds because $\langle f, (1+\mathcal{K}_{\phi}) f\rangle$
is a norm.
\begin{Theorem}
\label{T8.1}
For every $\phi\in \Phi_N$,
there exists a $V\in L_1(0,N)$ so that the norm on the
de~Branges space $S_N$ associated with \eqref{se} is given by
\[
\|F\|^2_{S_N} = \langle f, (1+\mathcal{K}_{\phi}) f\, \rangle_{L_2(0,N)}
\quad\quad (f\in L_2(0,N)) .
\]
Here, $F(z)=\int f(t)\cos\sqrt{z}t\, dt$, as in Theorem
\ref{T4.1} .
\end{Theorem}
We will take up the proof of Theorem \ref{T8.1} in Sect.\ 9. Let us
first point out that we also have uniqueness in both directions.
In fact, uniqueness is, as usual, much easier than existence.
\begin{Theorem}
\label{T8.2}
a) If $V\in L_1(0,N)$ is given, then the $\phi\in\Phi_N$ from Theorem \ref{T4.2}
is unique.\\
b) If $\phi\in\Phi_N$ is given, then the $V\in L_1(0,N)$ from Theorem \ref{T8.1}
is unique.
\end{Theorem}
This will also be proved in Sect.\ 9. We need some preparations; this
will occupy us for the following three sections.
\section{Canonical systems I}
A canonical system is a family of differential equations of the following
form:
\begin{equation}
\label{Can}
Ju'(x) = zH(x) u(x).
\end{equation}
Here, $J=\bigl( \begin{smallmatrix} 0 & -1 \\ 1 & 0 \end{smallmatrix}
\bigr)$, and $H(x)\in\mathbb R^{2\times 2}$, the entries of $H$ are
integrable functions on an interval $(0,N)$, and $H(x)\ge 0$ (i.e.,
$H(x)$ is a positive semidefinite matrix)
for almost every $x\in (0,N)$. We also assume that there is no
nonempty open interval $I\subset (0,N)$ so that $H= 0$ almost
everywhere on $I$.
Finally, $z\in\mathbb C$ is the spectral parameter.
As usual, $u:[0,N]\to \mathbb C^2$ is called a solution if $u$ is
absolutely continuous and satisfies \eqref{Can} almost everywhere.
Usually, one does not assume that $H(x)\not\equiv 0$ on
nonempty open sets, but
dropping this assumption does not add generality. Indeed, by letting
\[
S_0 = \left\{ x\in (0,N): \exists \epsilon >0: H(t)=0\text{ for
a.e.\ } t\in (x-\epsilon,x+\epsilon) \right\}
\]
and introducing the new independent variable
\[
\xi(x) = \int_0^x \left( 1 - \chi_{S_0}(t) \right) \, dt,
\]
one may pass
to an equivalent canonical system that satisfies our additional assumption.
A fundamental result (namely, Theorem \ref{T6.3}) associates
with every de~Branges space a canonical system \eqref{Can}. Therefore,
canonical systems are a central object in the theory of de~Branges spaces.
Let $u(x,z)$, $v(x,z)$ be the solutions of \eqref{Can} with the initial
values $u(0,z)= \bigl( \begin{smallmatrix} 1 \\ 0 \end{smallmatrix} \bigr)$,
$v(0,z)= \bigl( \begin{smallmatrix} 0 \\ 1 \end{smallmatrix} \bigr)$.
We will mainly work with $u(x,z)$. Just as in Sect.\ 3, we can build a
de~Branges function from $u$ by defining $E_N(z)=u_1(n,z)+iu_2(N,z)$.
Here, a pathological case can occur: if $H(x)= \bigl( \begin{smallmatrix}
0 & 0 \\ 0 & H_{22}(x) \end{smallmatrix} \bigr)$ on $(0,N)$, then
$u(N,z)=\bigl( \begin{smallmatrix} 1 \\ 0 \end{smallmatrix} \bigr)$ and
$E_N(z)\equiv 1$. According to our definition in Sect.\ 2, this is not
a de~Branges function. So it will convenient to slightly extend this
definition and to also admit non-zero constants as de~Branges functions.
The corresponding de~Branges space is simply defined to be the zero space.
\begin{Proposition}
\label{P5.1}
$E_N(z)=u_1(N,z)+iu_2(N,z)$ is a de~Branges function. The corresponding
reproducing kernel $J_z$ is given by
\[
J_z(\zeta) = \int_0^N u^*(x,z) H(x) u(x,\zeta) \, dx .
\]
\end{Proposition}
\begin{proof}
The formula for $J_z$ follows by a calculation, which is analogous to
the discussion preceding Theorem \ref{T3.1}. One uses the fact
that $u(x,\overline{z})=\overline{u(x,z)}$; we leave the details to
the reader. Also, just as in Sect.\ 3, by taking $z=\zeta\in\mathbb C^+$,
the formula for $J_z$ implies that $E_N$ is a de~Branges function. In
this context, observe the following fact: if
\[
\int_0^N u^*(x,z)H(x)u(x,z)\, dx = 0
\]
for some $z\in\mathbb C$, then $H(x)u(x,z)=0$ for almost every $x\in
(0,N)$, hence $u(x,z)=u(0,z)=
\bigl( \begin{smallmatrix} 1 \\ 0 \end{smallmatrix} \bigr)$. This in
turn implies that $H_{11}=0$ almost everywhere, and we are in the
trivial case $E_N(z)\equiv 1$. In the opposite case, $\int_0^N
u^*(x,z)H(x)u(x,z)\, dx > 0$ for all $z\in\mathbb C$, and $E_N$ is a
genuine de~Branges function.
\end{proof}
Eventually, we will again identify the corresponding de~Branges space
$B(E_N)$ with a space $L_2(\mathbb R, d\rho_N^{\beta})$,
where $\rho_N^{\beta}$ is
a spectral measure of \eqref{Can}, just as we did in the case of
Schr\"odinger equations in Theorem \ref{T3.1}. However, things are
more complicated now, basically for two reasons: First of all, if
\eqref{Can} is to be interpreted as an eigenvalue equation $Tu=zu$,
then, formally, the operator $T$ should be $Tu=H^{-1}Ju'$, but
$H(x)$ need not be invertible. Consequently, one has to work with
relations instead of operators. Second, on so-called singular intervals,
equation \eqref{Can} actually is a difference equation in disguise.
These points will be studied in some detail in Sect.\ 10.
Our discussion of canonical systems will be modelled on the (simpler)
analysis of Sect.~3. For a functional analytic treatment of canonical
systems, see \cite{HdSW}. Reference \cite{dB} also contains a lot
of material on canonical systems, though in somewhat implicit form.
\section{Four theorems of de~Branges}
In this section we state, without proof, four general results
of de~Branges on de~Branges spaces
which will play an important role in our treatment
of the inverse problem for Schr\"odinger operators. The first result
is a useful tool for recognizing de~Branges spaces. It is Theorem 23
of \cite{dB}. For an alternate proof, see \cite[Sect.\ 6.1]{DMcK}.
\begin{Theorem}
\label{T6.1}
Let $\mathcal{H}$ be a Hilbert space whose elements are entire functions.
Suppose that $\mathcal{H}$ has the following three properties:\\
a) For every $z\in\mathbb C$, point evaluation $F\mapsto F(z)$ is a
bounded linear functional.\\
b) If $F\in\mathcal{H}$ has a zero at $w\in\mathbb C$,
then $G(z)=\frac{z-\overline{w}}{z-w} F(z)$ belongs to $\mathcal{H}$
and $\|G\|=\|F\|$.\\
c) $F\mapsto F^{\#}$ is an isometry on $\mathcal{H}$.
Then $\mathcal{H}$ is a de~Branges space: There exists a de~Branges
function $E$, so that $\mathcal{H}=B(E)$ and $\|F\|_{\mathcal{H}}
=\|F\|_{B(E)}$ for all $F\in\mathcal{H}$.
\end{Theorem}
The converse of Theorem \ref{T6.1} is also true (and easily proved):
Every de~Branges space satisfies a), b), and c). In fact, in \cite{dB1},
the conditions of Theorem \ref{T6.1} are used to define de~Branges spaces.
The de~Branges function $E$ is not uniquely determined by the
Hilbert space $B(E)$. The situation is clarified by \cite[Theorem I]{dB1}.
Given $E$, we introduce the entire functions $A$, $B$ by
\[
A(z) = \frac{E(z)+E^{\#}(z)}{2},\quad\quad
B(z) = \frac{E(z)-E^{\#}(z)}{2i}.
\]
\begin{Theorem}
\label{T6.2}
Let $E_1$, $E_2$ be de~Branges functions. Then $B(E_1)=B(E_2)$
(as Hilbert spaces) if and only if there exists $T\in\mathbb R^{2\times 2}$,
$\det T=1$, so that
\[
\begin{pmatrix} A_2(z) \\ B_2(z) \end{pmatrix} = T
\begin{pmatrix} A_1(z) \\ B_1(z) \end{pmatrix}.
\]
\end{Theorem}
The next two results lie much deeper. They are central to the whole
theory of de~Branges spaces. We will not state the most general versions
here; for this, the reader should consult \cite{dB}. The following
definition will be useful to avoid (for us) irrelevant technical
problems. A de~Branges space $B(E)$ is called {\it regular} if
\begin{equation}
\label{regsp}
F(z) \in B(E) \Longrightarrow \frac{F(z)-F(z_0)}{z-z_0}\in B(E) .
\end{equation}
Here $z_0$ is any fixed complex number. The definition is reasonable
because it can be shown that if \eqref{regsp} holds for one $z_0\in
\mathbb C$, then it holds for all $z_0\in\mathbb C$ (compare
\cite[Theorem 25]{dB}).
The condition \eqref{regsp}
also plays an important role in \cite{dB}. According
to (a more general version of) Theorem \ref{T6.3} below,
every de~Branges space comes from a (possibly singular)
canonical system; the regular spaces
are precisely those that come from regular problems, that is, $x=0$ is
not a singular endpoint. Jumping ahead, we can also remark that condition
\eqref{regsp} ensures the existence of a conjugate mapping.
See also \cite{Wo} for other aspects of \eqref{regsp}.
\begin{Theorem}
\label{T6.3}
If $B(E)$ is a regular de~Branges space, $E(0)=1$, and $N>0$ is given,
then there exists a canonical system \eqref{Can} (that is, there exists
an integrable function $H:(0,N)\to \mathbb R^{2\times 2}$
with $H(x)\ge 0$
almost everywhere, $H\not\equiv 0$ on nonempty open sets),
such that $E(z)=E_N(z)$, where $E_N$ is determined from
\eqref{Can} as in Proposition \ref{P5.1}.
Moreover, $H(x)$ can be chosen so that $\text{tr }H(x)$ is a
(positive) constant.
\end{Theorem}
De~Branges proved various results of this type; see \cite[Theorems V, VII]{dB2}
and \cite[Theorems 37, 40]{dB}. The version given here follows by
combining \cite[Theorem VII]{dB2} with \cite[Theorem 27]{dB}. In fact,
this is not literally true because de~Branges uses the equation
\begin{equation}
\label{Canint}
y(b)J-y(a)J = z \int_a^b y(t) dm(t)
\end{equation}
instead of \eqref{Can}. Here, $m$ is a matrix valued measure. If
$m$ is absolutely continuous, $dm(t)=m'(t)\, dt$, then \eqref{Canint}
can be written as a differential equation $y'J=zyH$, with $H=m'$, and the
further change of variable $u(x,z)=y^*(x,-\overline{z})$ then gives
\eqref{Can}. In \cite{dB}, $m$ is only assumed to be continuous,
but then one can change the independent variable to
$\xi(t)=\text{tr }m((0,t))$ to get an absolutely continuous measure.
This transformation automatically leads to a system with
$\text{tr }H(x)\equiv 1$, and this is how one proves the last statement
of Theorem \ref{T6.3}. A further transformation of the type $\xi\to
a\xi$ with a suitable $a>0$ then yields a problem on $(0,N)$ again.
There is no apparent reason for preferring one of these equivalent
ways of writing canonical systems (see \eqref{Can} and \eqref{Canint}),
but it appears that the form we use
here (namely, \eqref{Can}) has become the most common.
The assumption that $E(0)=1$ is just a normalization; it does not
restrict the applicability of Theorem \ref{T6.3}. In fact, one can just
use Theorem \ref{T6.2} with $T = \bigl( \begin{smallmatrix}
A(0)|E(0)|^{-2} & B(0)|E(0)|^{-2} \\ -B(0) & A(0) \end{smallmatrix} \bigr)$
to pass to an equivalent $E$ with $E(0)=1$. This will always work
because de~Branges functions associated with regular spaces do not have
zeros on the real line.
Theorem \ref{T6.3}, combined with the material from Sect.\ 10 (especially
\eqref{9.b}), is the promised (extremely) general version of
the Paley-Wiener Theorem. One can also view Theorem \ref{T6.3} as
a basic result in inverse spectral theory: given ``spectral data'' in
the form of a de~Branges function $E$, the theorem asserts the existence
of a corresponding differential equation. In this paper, we will use
Theorem \ref{T6.3} in this second way.
As a final remark on Theorem \ref{T6.3}, we would like to point out
that $H(x)$ is uniquely determined by $E(z)$ and $N$ if one normalizes
appropriately. (One may require that $\text{tr }H(x)$ be constant, as
in the last part of Theorem \ref{T6.3}, and
$\int_0^{\epsilon} H_{11}(t)\, dt > 0$ for all $\epsilon>0$.)
To prove this, the
basic idea is to proceed as in the proof of Theorem \ref{T8.2}b)
(which will be discussed in Sect.\ 9), but things are more complicated
here and one needs material from Sect.\ 10. We do not need this
uniqueness statement in this paper.
\begin{Theorem}
\label{T6.4}
Let $B(E)$, $B(E_1)$, $B(E_2)$ be regular de~Branges spaces and assume that
$B(E_1)$ and $B(E_2)$ are isometrically contained in $B(E)$. Then either
$B(E_1)$ is isometrically contained in $B(E_2)$ or
$B(E_2)$ is isometrically contained in $B(E_1)$.
\end{Theorem}
This is a special case of \cite[Theorem 35]{dB}. See also
\cite[Sect.\ 6.5]{DMcK} for a proof.
Theorem \ref{T6.4}
clearly is a strong structural result. The de~Branges subspaces
of a given space are totally ordered by inclusion. As an illustration,
take $B(E)=S_N$, the de~Branges space coming from a Schr\"odinger equation
on the interval $(0,N)$. Then it can be deduced from Theorem \ref{T6.4} that
the chain of spaces $\{ S_x : 0\le x\le N \}$ is a complete list of the
de~Branges spaces that are subspaces of $S_N$.
\section{Canonical systems II}
Theorem \ref{T6.3} associates a canonical system to every (regular)
de~Branges space. Conversely, we have seen in Sections 3 and 6 how
Schr\"odinger equations and canonical systems generate
de~Branges spaces. This recipe works for other equations as well
(Dirac, Sturm-Liouville, Jacobi difference equation). So,
in a sense, canonical systems are the most general formally
symmetric, second order differential systems (here, by ``order'' we
mean order of differentiation times number of components). In particular,
every Schr\"odinger equation can be written as canonical system by
a simple transformation. Namely, given a Schr\"odinger equation
\eqref{se}, let $y_1$, $y_2$ be the solutions of \eqref{se} with
$z=0$ with the
initial values $y_1(0)=y'_2(0)=1$, $y'_1(0)=y_2(0)=0$, and put
$T(x) = \bigl( \begin{smallmatrix} y_1(x) & y_2(x) \\ y'_1(x) & y'_2(x)
\end{smallmatrix} \bigr)$. Now if $y(x,z)$ solves \eqref{se},
then the vector function $u$ defined by $u(x,z)= T^{-1}(x)\bigl(
\begin{smallmatrix} y(x,z) \\ y'(x,z) \end{smallmatrix} \bigr)$ solves
\eqref{Can} with
\[
H(x) = \begin{pmatrix} y_1^2(x) & y_1(x)y_2(x) \\
y_1(x)y_2(x) & y_2^2(x) \end{pmatrix} .
\]
This is shown by direct computation. Note that this $H$
has the required properties: its entries are integrable
(in fact, they have absolutely continuous derivatives), and $H(x)\ge 0$,
$H(x)\not= 0$ for every $x$.
Conversely, from a canonical system of this special form, one can go
back to a Schr\"odinger equation. We state this
separately for later use. By $AC^{(n)}[0,N]$ we denote the set
of (locally integrable) functions
whose $n$th derivative (in the sense of distributions)
is in $L_1(0,N)$. Equivalently, $f\in AC^{(n)}[0,N]$ precisely
if $f,f',\ldots, f^{(n-1)}$ are absolutely continuous on $[0,N]$.
\begin{Proposition}
\label{P7.1}
Let $h,k\in AC^{(2)}[0,N]$ be real valued functions, and suppose that
$h(0)=1$, $h'(0)=0$, and
\begin{equation}
\label{7.1}
h(x)k'(x) - h'(x)k(x)=1.
\end{equation}
Let
\[
H(x)= \begin{pmatrix}
h^2(x) & h(x)k(x) \\ h(x)k(x) & k^2(x) \end{pmatrix}.
\]
Then, if $u(x,z)$ solves \eqref{Can} with this $H$, then
\[
y(x,z) := h(x) u_1(x,z) + k(x) u_2(x,z)
\]
solves \eqref{se} with $V(x)= h''(x)k'(x)-h'(x)k''(x)$.
Moreover, the de~Branges spaces generated by \eqref{se} and \eqref{Can}
as in Sect.\ 3 and Proposition \ref{P5.1},
respectively, are identical.
\end{Proposition}
\begin{proof}
The fact that $y$ solves \eqref{se} with $V=h''k'-h'k''$ is checked by
direct computation. Note that $hk''=h''k$; this follows by
differentiating \eqref{7.1}. Also, $hu'_1+ku'_2=0$ and thus
\[
y'(x,z)= h'(x)u_1(x,z)+k'(x)u_2(x,z).
\]
In particular, this relation shows that $y(\cdot,z)\in AC^{(2)}[0,N]$.
To compare the de~Branges spaces, we must specialize to the solution
$u$ with the initial values $u_1(0,z)=1$, $u_2(0,z)=0$.
The corresponding $y$ satisfies $y(0,z)=1$, $y'(0,z)=0$, and hence
is the solution from which the de~Branges function
of the Schr\"odinger equation
is computed. The values at $x=N$ are related by
\[
\begin{pmatrix} y(N,z) \\ y'(N,z) \end{pmatrix} =
\begin{pmatrix} h(N) & k(N) \\ h'(N) & k'(N) \end{pmatrix} u(N,z).
\]
The final claim
now follows from Theorem \ref{T6.2}.
\end{proof}
\section{Starting the proofs}
\begin{proof}[Proof of Theorem \ref{T8.2}]
a) If we know $V$, we can solve the Schr\"odinger equation \eqref{se}
(in principle, that is) and find $E_N(z)$. This function in turn determines
the scalar products $[F,G]_{S_N}$, and we have that
\[
[F,G]_{S_N} = \langle f, (1+\mathcal{K}_{\phi})g\rangle_{L_2(0,N)},
\]
so we know the operator $\mathcal{K}_{\phi}$ on $L_2(0,N)$. Hence
we know the kernel $K(s,t)$ almost everywhere on $[0,N]\times [0,N]$
(with respect to two-dimensional Lebesgue measure), but $K$ is
continuous, so we actually know the kernel everywhere, and
$\phi(2t)=2K(t,t)$, so $\phi$ on $[0,2N]$ is uniquely determined by $V$
on $(0,N)$. As $\phi$ is even, we of course automatically know $\phi$ on
$[-2N,2N]$ then.
b) Suppose that we have two potentials $V_1,V_2\in L_1(0,N)$, for
which the scalar product on the corresponding de~Branges spaces
is determined by one and the same $\phi\in\Phi_N$. In other words,
$S_N^{(1)}=S_N^{(2)}$ (as de~Branges spaces). Now $\phi$
on $[-2N,2N]$ determines the de~Branges spaces $S_x^{(i)}$
($i=1,2$) for every $x\in (0,N]$, so we actually have that also
$S_x^{(1)}=S_x^{(2)}$ for these $x$. By Theorem \ref{T6.2},
\[
\begin{pmatrix} u_2(x,z) \\ u'_2(x,z) \end{pmatrix} =
T(x) \begin{pmatrix} u_1(x,z) \\ u'_1(x,z) \end{pmatrix}
\quad\quad (00$
(strictly speaking, we know this for
the operator on $L_2(0,N)$, but $\langle f,
(1+\mathcal{K}_{\phi}) g\rangle_{L_2(0,x)}$
for $f,g\in L_2(0,x)$ can of course also be evaluated
in the bigger space $L_2(0,N)$).
$\mathcal{K}_{\phi}$ is compact, so we actually
have that $1+\mathcal{K}_{\phi}\ge \delta >0$. Thus
\[
\delta \|f\|_{L_2(0,x)}^2 \le \|F\|_{H_x}^2 \le C \|f\|_{L_2(0,x)}^2,
\]
and now completeness of $H_x$ follows from the completeness of
$L_2(0,x)$.
We now verify conditions a), b), c) of Theorem \ref{T6.1}.
Condition a) is obvious from
\[
|F(z)| \le e^{|z|^{1/2}x} \int_0^x |f(t)|\, dt
\le x^{1/2} e^{|z|^{1/2}x} \|f\| \le
(x/\delta)^{1/2} e^{|z|^{1/2}x}\|F\|_{H_x}.
\]
It is also clear that c) holds since
$F^{\#}(z) = \int\overline{f(t)} \cos \sqrt{z}t\, dt$, so $F^{\#}\in
H_x$ and, as $K$ is real valued,
\[
\|F^{\#}\|^2 = \langle \overline{f}, (1+\mathcal{K}_{\phi})\overline{f}
\rangle = \langle f, (1+\mathcal{K}_{\phi}) f \rangle = \|F\|^2 .
\]
To prove b), fix $w\in\mathbb C$ and $F\in H_x$ with $F(w)=0$.
Extend the $f\in L_2(0,x)$ corresponding to $F$ to $(-x,x)$ by
letting $f(-t)=f(t)$ ($0 2x$.
We also rewrite $\langle g, (1+\mathcal{K}_{\phi}) g
\rangle_{L_2(0,x)}$. Namely, since $g$ is even and
the integral kernel $K$ of $\mathcal{K}_{\phi}$ satisfies
\[
K(s,t)=K(-s,t)=K(s,-t)=K(-s,-t),
\]
we have that
\[
\langle \chi_{(0,x)}g, \mathcal{K}_{\phi} \chi_{(0,x)}g
\rangle_{L_2(0,x)} =
\frac{1}{8} \int_{-x}^x ds \int_{-x}^x dt\,
\overline{g(s)}g(t) \left( \phi(s-t)+\phi(s+t) \right) .
\]
Furthermore, using the substitution $s\to -s$ in the second term,
we can write this in the form
\[
\langle \chi_{(0,x)}g, \mathcal{K}_{\phi} \chi_{(0,x)}g
\rangle_{L_2(0,x)} =
\frac{1}{4} \int_{-x}^x ds \int_{-x}^x dt\,
\overline{g(s)}g(t)\phi(s-t) = \frac{1}{4}
\langle g, \phi * g \rangle_{L_2(-x,x)},
\]
where, as usual, the star denotes convolution.
Having made these preliminary observations and using the fact
that $|\widehat{f}(k)|=|\widehat{g}(k)|$ for real $k$, we obtain
\begin{align*}
4 \|G\|_{H_x}^2 & = 4 \langle \chi_{(0,x)}g, (1+\mathcal{K}_{\phi})
\chi_{(0,x)}g \rangle_{L_2(0,x)}
= 2\|g\|^2_{L_2(-x,x)} + \langle g, \phi * g \rangle_{L_2(-x,x)}\\
& = 2 \|\widehat{g} \|_{L_2(\mathbb R)}^2 + \langle \widehat{g},
\widehat{\phi}\,\widehat{g} \rangle_{L_2(\mathbb R)}
= \int_{-\infty}^{\infty} \left| \widehat{g}(k)\right|^2
\left( 2 + \widehat{\phi}(k) \right)\, dk\\
& = \int_{-\infty}^{\infty} \left| \widehat{f}(k)\right|^2
\left( 2 + \widehat{\phi}(k) \right)\, dk\\
& = 2 \|\widehat{f} \|_{L_2(\mathbb R)}^2 + \langle \widehat{f},
\widehat{\phi}\,\widehat{f} \rangle_{L_2(\mathbb R)}
= 2\|f\|_{L_2(-x,x)}^2 + \langle f, \phi * f \rangle_{L_2(-x,x)}\\
& = 4 \langle \chi_{(0,x)}f, (1+\mathcal{K}_{\phi})
\chi_{(0,x)}f \rangle_{L_2(0,x)}
= 4 \|F\|_{H_x}^2,
\end{align*}
as desired. Theorem \ref{T6.1} now shows that $H_x$ is a de~Branges
space.
It is clear that for every $\lambda\in\mathbb R$, there exists an
$F\in H_x$ with $F(\lambda)\not=0$. Thus, by the definition of
de~Branges spaces, the corresponding de~Branges function cannot
have zeros on the real line. Using Theorem \ref{T6.2}, we can
therefore normalize so that $E_x(0)=1$ (exactly as in the remark
following Theorem \ref{T6.3}).
We still must show that the
de~Branges space $H_x$ is regular. We will check condition
\eqref{regsp} with $z_0=0$. So let $F\in H_x$,
$F(z)= \int_0^x f(t)\cos\sqrt{z}t\, dt$, with $f\in L_2(0,x)$. Then
\[
g(t):= \int_t^x f(s)(t-s)\, ds
\]
is in $AC^{(2)}[0,x]$ (so, in particular, $g\in L_2(0,x)$), and
$g'(t)=\int_t^x f(s)\, ds$, $g''=-f$. By integrating by parts twice,
we thus see that
\begin{align*}
\int_0^x g(t)\cos\sqrt{z}t\, dt & =
\left. \frac{\sin \sqrt{z}t}{\sqrt{z}}
\int_t^x f(s)(t-s)\, ds \right|_{t=0}^{t=x}
- \int_0^x dt\,\frac{\sin \sqrt{z}t}{\sqrt{z}}\int_t^x ds\, f(s)\\
& = \left. \frac{\cos\sqrt{z}t}{z} \int_t^x f(s)\, ds\right|_{t=0}^{t=x}
+ \int_0^x f(t) \frac{\cos\sqrt{z}t}{z} \, dt\\
& = \frac{F(z)-F(0)}{z},
\end{align*}
hence this latter combination is in $H_x$.
The final claim of the lemma is obvious from the construction
of the spaces $H_x$. We have made this statement explicit mainly
because of its importance.
\end{proof}
The next step in the proof of Theorem \ref{T8.1} is to apply
Theorem \ref{T6.3} to the regular de~Branges space $H_N=B(E_N)$.
We obtain $H:(0,N)\to \mathbb R^{2\times 2}$, with entries in
$L_1(0,N)$ and $H(x)\ge 0$
for almost every $x\in (0,N)$, $\text{tr }H(x)=\tau >0$, such that
$E_N(z)$ is exactly the de~Branges function associated with the
canonical system \eqref{Can} as in Proposition \ref{P5.1}.
Actually, we have obtained much more. We get a whole
scale of de~Branges spaces in both cases. On the one hand, we have
the spaces $H_x=B(E_x)$ from Lemma \ref{L8.3}. On the other hand,
we can consider the canonical system \eqref{Can} on $(0,x)$ only; by
Proposition \ref{P5.1}, we get again de~Branges spaces, which we
denote by $B_x$. Our next major goal is to show that, possibly after
a reparametrization of the independent variable, $H_x=B_x$ for all
$x\in [0,N]$ (at the moment, we know this only for $x=N$). One crucial
input will be Theorem \ref{T6.4}; however, we will also need additional
material on canonical systems. This topic will
be resumed in the next section.
The proof of Theorem \ref{T8.1} will then proceed as follows. The identity
$H_x=B_x$ says that we have two realizations of the same chain of
de~Branges spaces: one from Lemma \ref{L8.3} and a second one from the
canonical system \eqref{Can}. By comparing objects in these two worlds,
we will get information on the matrix elements $H_{ij}(x)$ of
the $H$ from \eqref{Can}. This will allow us
to verify the hypotheses of Proposition \ref{P7.1}; so the spaces $H_x$
we started with indeed come from a Schr\"odinger equation.
\section{Canonical systems III}
We now develop some material on the spectral representation of
canonical systems. We consider equation \eqref{Can} together
with the boundary conditions
\begin{equation}
\label{bc}
u_2(0)=0,\quad\quad u_1(N)\cos\beta + u_2(N)\sin\beta =0 .
\end{equation}
Here, $\beta\in [0,\pi )$. As usual, $z$ is called an eigenvalue
if there is a nontrivial solution to \eqref{Can}, \eqref{bc}.
We can considerably simplify the whole discussion by excluding
certain ``singular'' values of $\beta$.
In particular, it is convenient to assume
right away that $\beta\not= \pi/2$. Then zero is not an eigenvalue.
In particular, the following holds.
If $f\in L_1(0,N)$ is given, then the inhomogeneous problem
$Ju'=f$ together with the boundary conditions \eqref{bc} has a unique
solution $u$ which can be written in the form
\begin{gather*}
u(x) = \int_0^N G(x,t) f(t)\, dt, \\
G(x,t) = \begin{pmatrix} \tan\beta & -\chi_{(0,t)}(x) \\
-\chi_{(0,x)}(t) & 0 \end{pmatrix} = G(t,x)^* .
\end{gather*}
We can now write the eigenvalue problem \eqref{Can}, \eqref{bc}
as an integral equation, which is easier to handle. Of course,
this is a standard procedure; compare, for example,
\cite[Chapter VI]{GKr}. Let $L_2^H(0,N)$
be the space of measurable functions $y:(0,N)\to\mathbb C^2$
satisfying
\[
\|y\|_{L_2^H(0,N)}^2
\equiv \int_0^N y^*(x) H(x) y(x)\, dx < \infty.
\]
The quotient of $L_2^H(0,N)$ by $\mathcal{N}=\{ y: \|y\|=0 \}$ is a Hilbert
space. As usual, this space will again be denoted by $L_2^H(0,N)$,
and we will normally not distinguish between Hilbert space elements and their
representatives. In a moment, we will also use the similarly defined
space $L_2^I$, where $H$ is replaced by the $2\times 2$ identity
matrix. The space $L_2^I$ can be naturally identified with
$L_2 \oplus L_2$.
As a preliminary observation, notice that a nontrivial solution $y$
to \eqref{Can}, \eqref{bc} cannot be the zero element of $L_2^H(0,N)$.
Indeed, if $\|y\|_{L_2^H}=0$, then $H(x)y(x)=0$ almost everywhere,
so \eqref{Can} implies that $y(x)=y(0)$. But since $\beta\not= \pi/2$,
the boundary conditions \eqref{bc} then force $y\equiv 0$. A similar
argument shows that eigenfunctions associated with different
eigenvalues also represent different elements of $L_2^H(0,N)$.
We now claim that $\lambda$
is an eigenvalue of \eqref{Can}, \eqref{bc} with corresponding
eigenfunction $y$ if and only if $y\in L_2^H(0,N)$ and $y$ solves
\begin{equation}
\label{9.2}
y(x) = \lambda \int_0^N G(x,t) H(t) y(t)\, dt .
\end{equation}
Note that for $y\in L_2^H(0,N)$, \eqref{9.2} may be considered in
the pointwise sense or as an equation in $L_2^H(0,N)$. Fortunately,
the two interpretations are equivalent. More precisely, if \eqref{9.2}
holds in $L_2^H(0,N)$, then we can simply define a particular
representative $y(x)$ by the right-hand side of \eqref{9.2} (this right-hand
side does not depend on the choice of representative).
It is clear from the construction of $G$ and the fact that solutions of
\eqref{Can} are continuous that eigenfunctions lie in $L_2^H(0,N)$
and solve \eqref{9.2} pointwise.
Conversely, if $y\in L_2^H(0,N)$,
then $Hy\in L_1(0,N)$. So if $y$ in addition solves \eqref{9.2},
then it also solves \eqref{Can}, \eqref{bc} by
construction of $G$ again.
Now define a map
\[
V:L_2^H(0,N) \to L_2^I(0,N),\quad y(x) \mapsto H^{1/2}(x)y(x) .
\]
Here, $H^{1/2}(x)$ is the unique positive semidefinite square
root of $H(x)$. In the sequel, we will often use the fact that
$H(x)$ and $H^{1/2}(x)$ have the same kernel.
$V$ is an isometry and hence maps $L_2^H$ unitarily
onto its range $R(V)\subset L_2^I$. Define an integral operator
$\mathcal{L}$ on $L_2^I(0,N)$ by
\begin{align*}
L(x,t) & = H^{1/2}(x) G(x,t) H^{1/2}(t), \\
(\mathcal{L}f)(x) & = \int_0^N L(x,t)f(t)\, dt .
\end{align*}
The kernel $L$ is square integrable (by this we mean that
$\int_0^N \!\! \int_0^N \|L^*L\|\, dx\,dt < \infty$), so $\mathcal{L}$ is
a Hilbert-Schmidt operator and thus compact. Since $L(x,t)=L^*(t,x)$,
$\mathcal{L}$ is also self-adjoint. The following lemma says that
the eigenvalues of \eqref{Can}, \eqref{bc} are precisely the
reciprocal values of the non-zero eigenvalues of $\mathcal{L}$.
The corresponding eigenfunctions are mapped to one another by
applying $V$.
\begin{Lemma}
\label{L9.1}
Let $f\in L_2^I(0,N)$, $\lambda\not= 0$. Then the following statements
are equivalent:\\
a) $\mathcal{L}f = \lambda^{-1} f$;\\
b) $f\in R(V)$, and the unique $y\in L_2^H(0,N)$ with $Vy=f$ solves
\eqref{9.2}.
\end{Lemma}
\begin{proof}
Note that for all $g\in L_2^I$, we have that
$(\mathcal{L}g)(x)=H^{1/2}(x)w(x)$, where
\[
w(x) = \int_0^N G(x,t)H^{1/2}(t) g(t) \, dt
\]
lies in $L_2^H$, thus $R(\mathcal{L})\subset R(V)$.
Now if a) holds, then $f=\lambda\mathcal{L}f\in R(V)$, so $f=Vy$ for
a unique $y\in L_2^H$ and
\[
f(x) = H^{1/2}(x) y(x) = \lambda (\mathcal{L}Vy)(x)
= \lambda H^{1/2}(x) \int_0^N G(x,t) H(t) y(t)\, dt
\]
for almost every $x\in (0,N)$. In other words,
\[
H^{1/2}(x) \left( y(x) - \lambda \int_0^N G(x,t) H(t) y(t)\, dt
\right) = 0
\]
almost everywhere, and this says that the expression in parantheses
is the zero element
of $L_2^H$, that is, \eqref{9.2} holds.
Conversely, if b) holds, we only need to multiply \eqref{9.2} from
the left by $H^{1/2}(x)$ to obtain a).
\end{proof}
Let $P:L_2^I\to L_2^I$ be the projection onto the closed
subspace $R(V)$ of $L_2^I$. Since
\[
R(V)^{\perp} = \{ f\in L_2^I: H(x)f(x)=0 \text{ almost everywhere} \} ,
\]
we have that $\mathcal{L}(1-P)=0$. Also, we have already observed
that $R(\mathcal{L})\subset R(V)=R(P)$, so $\mathcal{L}= P\mathcal{L}$.
Hence $\mathcal{L}P=P\mathcal{L}$, and thus $R(P)=R(V)$ is a reducing
subspace for $\mathcal{L}$. Let
$\mathcal{L}_0:R(V)\to R(V)$ be the
restriction of $\mathcal{L}$ to $R(V)$. Then $\mathcal{L}_0$
is also compact (in fact, Hilbert-Schmidt) and self-adjoint, and
$\mathcal{L}=\mathcal{L}_0 \oplus 0$.
If we use this notation, then Lemma \ref{L9.1} says that the
eigenfunctions of $\mathcal{L}_0$ with non-zero eigenvalues precisely
correspond to the eigenfunctions of \eqref{Can}, \eqref{bc}. The
kernel of $\mathcal{L}_0$ will also play a central role. To develop
this, we now introduce two important subspaces of $L_2^H$. Namely, let
\begin{gather*}
\begin{split}
R_{(0,N)} = \{ y\in L_2^H(0,N): \exists f\in AC^{(1)}[0,N],
H(x)f(x)=0 \text{ for a.e.\ } x\in (0,N), \\
f_2(0)=0, f(N)=0, Jf'=Hy \} ,
\end{split}\\
\begin{split}
\widetilde{R}_{(0,N)} = \{ y\in L_2^H(0,N):
\exists f\in AC^{(1)}[0,N],
H(x)f(x)=0 \text{ for a.e.\ } x\in (0,N), \\
f_2(0)=0, f_1(N)\cos\beta + f_2(N)\sin\beta = 0, Jf'=Hy \} .
\end{split}
\end{gather*}
Recall that on a formal level, operators $T$ associated with \eqref{Can}
should act as $Tf=H^{-1}Jf'$, so (still formally) the relation $Jf'=Hy$ says
that $y$ is an image of $f$. Thus $\widetilde{R}_{(0,N)}$ should be
thought of as the space of images of zero; $R_{(0,N)}$ has a similar
interpretation. In the following lemma, we identify $\widetilde{R}_{(0,N)}$
as the kernel $N(\mathcal{L}_0)$ of $\mathcal{L}_0$.
\begin{Lemma}
\label{L9.2}
$N(\mathcal{L}_0) = V\widetilde{R}_{(0,N)}$.
\end{Lemma}
\begin{proof}
If $g\in R(V)$ with $\mathcal{L}_0 g=0$ is given, write
$g(x)=H^{1/2}(x)y(x)$ with $y\in L_2^H$. Then $y$ obeys
\begin{equation}
\label{9.a}
H^{1/2}(x) \int_0^N G(x,t)H(t)y(t)\, dt = 0
\end{equation}
in $L_2^I(0,N)$, that is, for almost every $x\in (0,N)$. Let
$f(x)= \int_0^N G(x,t)H(t)y(t)\, dt$. Then, by the construction
of $G$, $f\in AC^{(1)}[0,N]$,
$f$ satisfies the
boundary conditions \eqref{bc}, and $Jf'=Hy$; by \eqref{9.a},
$H(x)f(x)=0$ for almost every $x\in (0,N)$. So $y\in
\widetilde{R}_{(0,N)}$ and $g=Vy\in V\widetilde{R}_{(0,N)}$.
Conversely, suppose that $g=Vy$ for some $y\in \widetilde{R}_{(0,N)}$.
By definition of $\widetilde{R}_{(0,N)}$, there exists $f\in AC^{(1)}[0,N]$,
so that $H(x)f(x)=0$ almost everywhere, $f$ satisfies the boundary
conditions and $Jf'=Hy$. We have $\mathcal{L}_0 g=\mathcal{L}Vy=V\widetilde{f}$,
where $\widetilde{f}(x) = \int_0^N G(x,t)H(t)y(t)\, dt$. Again by
construction of $G$, the function $\widetilde{f}\in AC^{(1)}[0,N]$
thus solves the following problem:
$\widetilde{f}$ satisfies the boundary
conditions and $J\widetilde{f}'=Hy$. However, as noted at
the beginning of this section, there is
only one function with these properties, hence $\widetilde{f}=f$, and
therefore $(\mathcal{L}_0 g)(x)=H^{1/2}(x)f(x)=0$ almost everywhere.
\end{proof}
\begin{Theorem}
\label{T9.3}
Suppose that $\beta\not= \pi/2$. Then the normed eigenfunctions
of the boundary value problem \eqref{Can}, \eqref{bc},
\[
Jy'(x) = zH(x)y(x),\quad\quad
y_2(0)=0,\quad y_1(N)\cos\beta + y_2(N)\sin\beta =0,
\]
form an orthonormal basis of the Hilbert space
$L_2^H(0,N) \ominus \widetilde{R}_{(0,N)}$.
\end{Theorem}
\begin{proof}
As $\mathcal{L}_0$ is compact and self-adjoint, the normed eigenfunctions
of $\mathcal{L}_0$ (suitably chosen in the case of degeneracies)
form an orthonormal basis of $R(V)$. Also, the normed
eigenfunctions belonging to non-zero eigenvalues form an orthonormal
basis of $R(V)\ominus N(\mathcal{L}_0)$. Now go back to $L_2^H$, using
the unitary map $V^{-1}:R(V)\to L_2^H(0,N)$. By Lemma \ref{L9.2},
$N(\mathcal{L}_0)$ is mapped onto $\widetilde{R}_{(0,N)}$, and by
Lemma \ref{L9.1}, the preceding discussion and the fact that
$\mathcal{L}=\mathcal{L}_0\oplus 0$, the eigenfunctions
of $\mathcal{L}_0$ with non-zero eigenvalues precisely go to the
eigenfunctions of \eqref{Can}, \eqref{bc}.
\end{proof}
As in Sect.\ 3, we can introduce spectral measures
$\rho_N^{\beta}$. Define
\[
\rho_N^{\beta} = \sum_{\frac{u_1}{u_2}(N,\lambda) = -\tan\beta}
\frac{\delta_{\lambda}}{\|u(\cdot, \lambda) \|_{L_2^H(0,N)}^2} .
\]
The sum is over the eigenvalues $\{ \lambda_n \}$
of \eqref{Can}, \eqref{bc} (which also depend on $N$ and $\beta$).
Recall also that $u(\cdot,z)$ is the solution of \eqref{Can} with
$u(0,z)=\bigl( \begin{smallmatrix} 1 \\ 0 \end{smallmatrix} \bigr)$.
The map $U$ defined by
\begin{gather*}
U: L_2^H(0,N) \ominus \widetilde{R}_{(0,N)} \to
L_2(\mathbb R, d\rho_N^{\beta}),\\
(Uf)(\lambda) = \int u^*(x,\lambda) H(x) f(x) \, dx
\end{gather*}
is unitary. Indeed, this is just a reformulation of Theorem \ref{T9.3}
because $U$ computes the scalar products of $f$ with the elements of
the basis $\{ u(\cdot,\lambda_n)\}$. The $u(\cdot,\lambda_n)$'s are
not normalized here, but this has been taken into account by choosing
the correct weights in the definition of $\rho_N^{\beta}$.
For a further development of the theory of canonical systems,
we need the following definition. Following \cite{dB2,dB},
we call $x_0\in (0,N)$ a
{\it singular point} if there exists an $\epsilon >0$, so that
on $(x_0-\epsilon, x_0+\epsilon)$, $H$ has the form
\[
H(x) = h(x) P_{\varphi}, \quad P_{\varphi} =
\begin{pmatrix} \cos^2\varphi & \sin\varphi\cos\varphi \\
\sin\varphi\cos\varphi & \sin^2\varphi \end{pmatrix}
\]
for some ($x$-independent) $\varphi\in [0,\pi)$ and some
$h\in L_1(x_0-\epsilon,x_0+\epsilon)$, $h\ge 0$. Notice that
$P_{\varphi}$ is the projection onto $e_{\varphi}=\bigl(
\begin{smallmatrix} \cos\varphi \\ \sin\varphi \end{smallmatrix}
\bigr)$. Points that are not singular are called {\it regular points}.
Clearly, the set $S$ of singular points,
\[
S = \{ x\in (0,N): x \text{ is singular} \} ,
\]
is open, so it can be represented as a countable or finite union
of disjoint, open intervals:
\[
S = \bigcup (a_n, b_n).
\]
On such an interval $(a_n,b_n)$, the angle $\varphi=\varphi_n$
whose existence is (locally) guaranteed by the definition above
must actually have the same value on the whole interval for otherwise
there would be regular points on $(a_n,b_n)$. We call the
boundary condition $\beta$ at $x=N$ {\it regular} if $\beta\not=
\pi/2$ and, in case there should be an $n$ with $b_n=N$, then also $\beta\not=
\varphi_n$, where $\varphi_n$ is the angle
corresponding to the interval $(a_n,b_n)$.
To get a first intuitive understanding of the notion of singular
points, consider \eqref{Can} on an interval $(a,b)\subset S$.
After multiplying from the left by $J^{-1}=-J$, the equation reads
\[
u'(x)= -zh(x) JP_{\varphi}u(x).
\]
Since the matrices on the right-hand side commute with one another
for different values of $x$, the solution is given by
\[
u(x)=\exp\left( -z\int_a^x h(t)\, dt\: JP_{\varphi} \right) u(a) .
\]
However, $P_{\varphi}JP_{\varphi}=0$, as we see either from a direct
computation or alternatively from the fact that this matrix is singular,
anti-self-adjoint and has real entries. Thus the series for the
exponential terminates and
\[
u(x)= \left( 1 - z\int_a^x h(t)\, dt\: JP_{\varphi} \right) u(a).
\]
In particular, letting $u_+=u(b)$, $u_-=u(a)$, $H=\int_a^b H(x)\, dx$,
we obtain $J(u_+ - u_-) = zHu_-$, so on a singular interval,
\eqref{Can} actually is its difference equation analog in disguise.
\begin{Lemma}
\label{L9.4}
Suppose $y\in\widetilde{R}_{(0,N)}$, and let $f\in AC^{(1)}[0,N]$
be such that $H(x)f(x)=0$ almost everywhere, $f_2(0)=0$, $f_1(N)\cos\beta+
f_2(N)\sin\beta =0$, and $Jf'=Hy$ (the existence of such an $f$ follows
from the definition of $\widetilde{R}_{(0,N)}$). Then, if $x_0\in
(0,N)$ is regular, then $f(x_0)=0$. Similarly, if $\beta$ is a regular
boundary condition, then $f(N)=0$.
\end{Lemma}
\begin{proof}
Fix $y$, $f$, $x_0$ as above, and
write $f(x)=R(x)\bigl( \begin{smallmatrix} \sin\varphi(x) \\ -\cos\varphi(x)
\end{smallmatrix} \bigr)$. Since
$Hf=0$ almost everywhere, either $R(x_0)=0$
or else $H(x)$ must have the form
$H(x)=h(x)P_{\varphi(x)}$ in a neighborhood of $x_0$.
(Note that this does not say that $x_0$ is singular because $\varphi$
may depend on $x$.)
In the first case, we are done. If $R(x_0)\not= 0$, we can
solve for $R$, $\varphi$ in terms of $f_1$, $f_2$ in a neighborhood
of $x_0$, and we find that these functions are absolutely continuous,
too. Hence the condition that $Jf'=Hy$ gives
\[
R'(x)\begin{pmatrix} \cos\varphi(x) \\ \sin\varphi(x) \end{pmatrix} +
R(x)\varphi'(x) \begin{pmatrix} -\sin\varphi(x) \\ \cos\varphi(x)
\end{pmatrix} = h(x) P_{\varphi(x)}y(x) \equiv
\alpha(x) \begin{pmatrix} \cos\varphi(x) \\ \sin\varphi(x) \end{pmatrix} .
\]
We now take the scalar product with
$\bigl( \begin{smallmatrix} -\sin\varphi(x) \\ \cos\varphi(x)
\end{smallmatrix} \bigr)$ and find that $R(x)\varphi'(x)=0$. Hence
$R(x_0)\not=0$ implies that $\varphi'\equiv 0$ on a neighborhood
of $x_0$, that is, $x_0$ is singular. This contradiction shows that
$f(x_0)=0$.
This argument also works at $x_0=N$, provided that $(N-\epsilon,N)
\not\subset S$ for all $\epsilon>0$. On the other hand, if $(N-\epsilon,N)
\subset (a_n,b_n)\subset S$ for some $\epsilon>0$,
then near $N$, the function $f$
must have the form $f(x)=R(x)\bigl( \begin{smallmatrix} \sin\varphi_n
\\ -\cos\varphi_n \end{smallmatrix} \bigr)$. But now the boundary
condition at $N$ implies that $R(N)=0$ or
\[
\sin\varphi_n\cos\beta - \cos\varphi_n\sin\beta =
\sin(\varphi_n-\beta)=0 .
\]
This latter relation, however, cannot hold if $\beta$ is regular.
\end{proof}
Here is an immediate consequence of the second part of Lemma \ref{L9.4}.
\begin{Corollary}
\label{C9.5}
If $\beta$ is regular, then $\widetilde{R}_{(0,N)}=R_{(0,N)}$.
\end{Corollary}
We can now prove the promised analog of Theorem \ref{T3.1}.
\begin{Theorem}
\label{T9.6}
For regular boundary conditions $\beta$, the Hilbert spaces
$L_2(\mathbb R, d\rho_N^{\beta})$ and $B(E_N)$ (see Proposition \ref{P5.1})
are identical. More precisely,
if $F(z)\in B(E_N)$, then the restriction of $F$ to $\mathbb R$ belongs
to $L_2(\mathbb R, d\rho_N^{\beta})$, and $F\mapsto F\big|_{\mathbb R}$
is a unitary map from $B(E_N)$ onto $L_2(\mathbb R, d\rho_N^{\beta})$.
\end{Theorem}
\begin{proof}
Basically, we repeat the proof of Theorem \ref{T3.1}.
As $\beta$ and $N$ are fixed throughout, we will again usually drop
the reference to these parameters.
Let $\{\lambda_n\}$ be the eigenvalues of \eqref{Can}, \eqref{bc}.
We claim again that $J_z\in L_2(\mathbb R, d\rho)$ for every
$z\in\mathbb C$ and verify this by the following calculation:
\begin{align*}
\left\| J_z \right\|^2_{L_2(\mathbb R, d\rho)} & =
\sum_n \left| J_z(\lambda_n) \right|^2 \rho(\{\lambda_n\} ) \\ & =
\sum_n \left| \langle u(\cdot, z), u(\cdot, \lambda_n) \rangle_{L_2^H(0,N)}
\right|^2 \left\|u(\cdot, \lambda_n) \right\|^{-2}_{L_2^H(0,N)}\\
& \le \left\|u(\cdot, z) \right\|^2_{L_2^H(0,N)}.
\end{align*}
The estimate follows with the help of Bessel's inequality.
Similar reasoning shows that
\[
\langle J_w, J_z \rangle_{L_2(\mathbb R, d\rho)} = \langle u(\cdot,z),
Q u(\cdot, w) \rangle_{L_2^H(0,N)},
\]
where $Q$ is the projection onto $\overline{L(\{ u(\cdot,\lambda_n)
\} )}$. By Theorem \ref{T9.3} and Corollary \ref{C9.5},
$\overline{L(\{ u(\cdot,\lambda_n)\} )} = R_{(0,N)}^{\perp}$.
We now want to show that $u(\cdot, z)\in
R_{(0,N)}^{\perp}$ for all $z\in\mathbb C$. To this end, fix $y\in R_{(0,N)}$,
and pick $f\in AC^{(1)}[0,N]$ with $Hf=0$ almost everywhere, $f_2(0)=f(N)=0$,
and $Jf'=Hy$. An integration by parts shows that
\begin{align*}
\langle u(\cdot, z) , y \rangle_{L_2^H(0,N)} & =
\int_0^N u^*(x,z)H(x)y(x)\, dx = \int_0^N u^*(x,z)Jf'(x)\, dx\\
& = u^*(x,z)Jf(x)\big|_{x=0}^{x=N} - \int_0^N {u'}^*(x,z)
Jf(x)\, dx \\
& =\int_0^N \left( Ju'(x,z)\right)^* f(x)\, dx
=\overline{z} \int_0^N u^*(x,z) H(x) f(x) \, dx = 0,
\end{align*}
as desired. Thus $Qu(\cdot, w)=u(\cdot, w)$ and
\[
\langle J_w, J_z \rangle_{L_2(\mathbb R, d\rho)} =\langle u(\cdot,z),
u(\cdot, w) \rangle_{L_2^H(0,N)} = J_z(w) = [J_w, J_z]_{B(E_N)} .
\]
This discussion of $Qu$ and the use of Bessel's inequality (instead
of Parseval's identity) were
the only modifications that are necessary; the rest of the
argument now proceeds literally as in the proof of Theorem \ref{T3.1}.
\end{proof}
The observations that were made after the proof of Theorem \ref{T3.1}
also have direct analogs. By combining Theorem \ref{T9.6}
with the remarks following Theorem \ref{T9.3}, we get an induced unitary
map, which we still denote by $U$. It is given by
\begin{subequations}
\begin{gather}
\label{9.1a}
U: L_2^H(0,N) \ominus R_{(0,N)} \to B(E_N) \\
\label{9.1b}
(Uf)(z) = \int u^*(x,\overline{z})H(x)f(x)\, dx.
\end{gather}
\end{subequations}
The proof goes as in Sect.\ 3. One first checks that \eqref{9.1b} is correct
for $f=u(\cdot,\lambda_n)$. This follows from the following calculation:
\begin{align*}
\left( U u(\cdot, \lambda_n) \right)(z) & =
\int_0^N u^*(x,\overline{z})H(x) u(x,\lambda_n)\, dx\\
& = \int_0^N \overline{u^*(x,z)H(x) u(x,\lambda_n)}\, dx\\
& = \int_0^N \left( u^*(x,z)H(x) u(x,\lambda_n)\right)^* \, dx\\
& = \int_0^N u^*(x,\lambda_n)H(x)u(x,z)\, dx = J_{\lambda_n}(z).
\end{align*}
Then one extends to the whole space. In this
context, recall also that $u(\cdot,\lambda_n)\in R_{(0,N)}^{\perp}$, as
we saw in the proof of Theorem \ref{T9.3}.
It is remarkable that the technical complications we have had to
deal with in this section are, so to speak, automatically handled
correctly by the $U$ from \eqref{9.1a}, \eqref{9.1b}. Namely, first
of all, the boundary condition $\beta$ does not appear in \eqref{9.1a},
\eqref{9.1b}. Recall that above we needed a regular
$\beta$, but once Theorem \ref{T9.6}
has been proved, we can get a statement that
does not involve $\beta$ by passing from $L_2(\mathbb R, d\rho_N^{\beta})$
to the de~Branges space $B(E_N)$.
Next, \eqref{9.1b} also makes sense for general $f\in L_2^H(0,N)$,
not necessarily orthogonal to $R_{(0,N)}$. If interpreted in this
way, $U$ is partial isometry from $L_2^H(0,N)$ to $B(E_N)$ with
initial space $L_2^H(0,N) \ominus R_{(0,N)}$ and final space $B(E_N)$.
This follows again from the fact that $u(\cdot,z)\in R_{(0,N)}^{\perp}$
for all $z\in\mathbb C$.
We can immediately make good use of these observation to prove the following
important fact.
\begin{Theorem}
\label{T9.reg}
The de~Branges spaces $B(E_N)$ coming from the canonical system
\eqref{Can} (compare Proposition \ref{P5.1}) are regular.
\end{Theorem}
\begin{proof}
Again, we prove this by verifying \eqref{regsp} for $z_0=0$.
As a direct consequence of the discussion
above, we have that
\begin{equation}
\label{9.b}
B(E_N) = \left\{ F(z) = \int u^*(x,\overline{z})H(x)f(x)\, dx :
f\in L_2^H(0,N) \right\} .
\end{equation}
Thus integration by parts yields
\begin{align*}
& \frac{F(z)-F(0)}{z} = \int_0^N \frac{u^*(x,\overline{z})- (1,0)}{z}
H(x)f(x)\, dx \\
& \quad\quad = \left. -\frac{u^*(x,\overline{z})-(1,0)}{z}
\int_x^N H(t)f(t)\, dt \right|_{x=0}^{x=N} +
\frac{1}{z} \int_0^N dx\, {u'}^*(x,\overline{z}) \int_x^N dt\, H(t) f(t)\\
& \quad\quad = \int_0^N dx\, u^*(x,\overline{z}) H(x)J\int_x^N dt\, H(t) f(t)
\equiv \int_0^N u^*(x,\overline{z}) H(x) g(x) \, dx ,
\end{align*}
with $g(x)=J\int_x^N H(t)f(t)\, dt$. This $g$ is bounded, hence in
$L_2^H(0,N)$, so the proof is complete.
\end{proof}
Note that the relation \eqref{9.b} also makes it clear why it is reasonable
to interpret Theorem \ref{T6.3} as a Paley-Wiener Theorem.
We are now heading towards the analog of Theorem
\ref{T3.2}a). For $00$ is regular, but $(N_1,N_2)\subset S$, then
$B(E_{N_2})=B(E_{N_1})\oplus V$,
where $V$ is a one-dimensional space.
Similarly, if $(0,N)\subset S$, then either $B(E_N)= \{ 0 \}$ or
$B(E_N) \cong \mathbb C$. In the first case, $E_N(z)\equiv 1$.
\end{Corollary}
\begin{proof}[Sketch of proof]
The first part follows in the usual way from Lemma \ref{L9.9} by
applying $U$ from \eqref{9.1b}.
The second part is established by a
similar discussion; we leave the details to the reader. Let us just
point out the fact that the case $B(E_N)= \{ 0 \}$ occurs if
$H(x)=h(x)
\bigl( \begin{smallmatrix} 0 & 0 \\ 0 & 1 \end{smallmatrix} \bigr)$
on $(0,N)$.
Here, the boundary condition at zero is not regular, so to speak,
and we have that $R_{(0,N)}=\widetilde{R}_{(0,N)}=L_2^H(0,N)$.
\end{proof}
\section{Matching de~Branges spaces}
We resume the proof of Theorem \ref{T8.1}. Recall briefly what we
have done already: We have constructed two families of de~Branges
spaces, $H_x$ and $B_x$, $00$
for all $x\in (0,N)$. It will be convenient to define $H_0=B_0= \{ 0 \}$.
The canonical system \eqref{Can'} was constructed such that $B_N = H_N$;
in fact, the corresponding de~Branges functions are equal. We also know
that $H_t$ is isometrically contained in $H_x$ if $t\le x$, and
for the family $B_x$, we have Corollary \ref{C9.8}. In particular,
if $x\in [0,N]$ is arbitrary and if $t\in [0,N]$ is a regular value of
\eqref{Can'}, then $H_x$ and $B_t$ are both isometrically contained
in $H_N=B_N$.
Here, the points $t=0$ and $t=N$ are regular by definition;
the claim on $B_t$ is obvious for these $t$'s. (In a different
context, it would of course make perfect sense to call $t=0$ singular
if there is an interval $(0,s)\subset S$.)
Denote this (extended) set of regular values by $R$, that is,
$R= [0,N]\setminus S$.
By Lemma \ref{L8.3} and Theorem \ref{T9.reg}, all spaces are
regular, so Theorem \ref{T6.4} applies: either $H_x\subset B_t$
or $B_t\subset H_x$, the inclusion being isometric in each case.
Define, for $t\in R$, a function $x(t)$ by
\[
x(t) = \inf \{ x\in [0,N]: H_x \supset B_t \} .
\]
It is clear that $x(t)$ is increasing, $x(0)=0$, and, since
$H_x$ is a proper subspace of $H_N=B_N$ for $xx} H_t .
\]
\end{Lemma}
In the second expression, the closure is taken in $H_N$; recall that
this space contains $H_x$ as a subspace for every $x$.
\begin{proof}
We begin with the first equality.
We know already that $H_t$ is isometrically contained in $H_x$ for
$tx} H_t$. On the other hand, if
$F\in \bigcap_{t>x} H_t$, then for all large $n$, we have that
$F(z)=\int f_n(s)\cos\sqrt{z}s\, ds$ for some $f_n\in L_2(0,x+1/n)$.
But by the uniqueness of the Fourier transform, there can be at
most one function $f\in L_2(\mathbb R)$ so that $F(z)=\int f(s)
\cos\sqrt{z}s\, ds$, hence $f=f_n$ for all $n$. This $f$ is supported
by $(0,x+1/n)$ for all $n$, hence $f\in L_2(0,x)$ and $F\in H_x$.
\end{proof}
\begin{Proposition}
\label{P10.2}
The (modified) system \eqref{Can'} has no singular points, and
$B_t=H_{x(t)}$ for all $t\in [0,N]$.
\end{Proposition}
\begin{proof}
We first prove that the desired relation $B_t=H_{x(t)}$ holds
for all $t\in R$, the set of regular values. For these $t$, we
know that for all $x$, either $H_x\subset B_t$ or $B_t \subset H_x$.
Now the definition of $x(t)$ implies that the first case occurs
for $xx(t)$. Hence
\[
\overline{\bigcup_{xx(t)} H_x,
\]
and now Lemma \ref{L10.1} shows that $B_t=H_{x(t)}$. This argument
does not literally apply to the extreme values $t=0$, $t=N$, but
the claim is obvious in these cases.
If $(a,b)$ is a component of $S$, then the preceding may be applied
to the regular values $a$, $b$, and thus
\[
H_{x(b)}\ominus H_{x(a)} = B_b \ominus B_a .
\]
Corollary \ref{C9.10} shows that this latter difference
is one-dimensional. (For $a=0$, this statement holds because of our
modification of \eqref{Can'}.)
On the other hand, $H_{x(b)}\ominus H_{x(a)}$
is isomorphic to $L_2(x(a),x(b))$ and hence either the zero
space or infinite dimensional. We have reached a contradiction
which can only be avoided if $S=\emptyset$.
\end{proof}
It is, of course, much more convenient to have $B_t=H_t$; this can be
achieved by transforming \eqref{Can'}. More specifically,
we will use $x(t)$ as the independent variable.
We defer the discussion of the technical details to Sect.\ 15 because
we need additional tools which will be developed next.
\section{The conjugate function}
In regular de~Branges spaces, one can introduce a so-called
conjugate mapping, which is a substitute for the Hilbert transform
of ordinary Fourier analysis. In this paper, the conjugate
mapping will not play a major role. Thus, our treatment of this
topic will be very cursory and incomplete;
for the full picture, please consult \cite{dB}.
Consider a canonical system and the associated de~Branges spaces
$B_N\equiv B(E_N)$; for simplicity, we assume
that there are no singular points (as in Proposition \ref{P10.2}). Recall
that $v$ is the solution of \eqref{Can'} with $v(0,z)=\bigl( \begin{smallmatrix}
0 \\ 1 \end{smallmatrix} \bigr)$, and define
\begin{equation}
\label{Kz}
K_z(\zeta) = \frac{v^*(N,z)Ju(N,\zeta) - 1}{\zeta - \overline{z}} .
\end{equation}
Since $(v^*(x,z)Ju(x,\zeta))'=(\zeta-\overline{z})v^*(x,z)H(x)u(x,\zeta)$,
this may also be written in the form
\begin{equation}
\label{Kz'}
K_z(\zeta) = \int_0^N v^*(x,z)H(x)u(x,\zeta)\, dx
= \int_0^N u^*(x,\overline{\zeta})H(x) v(x,\overline{z})\, dx .
\end{equation}
In particular, when combined with \eqref{9.b},
the last expression shows that $K_z\in B_N$ for all $z\in\mathbb C$.
We can thus define, for $F\in B_N$, the conjugate function $\widetilde{F}$
by $\widetilde{F}(z)=[K_z,F]$. The material of Sect.\ 10 immediately
provides us with an interpretation of $\widetilde{F}$. Namely, if
$F(z)=\int u^*(x,\overline{z})H(x)f(x)\, dx$ with $f\in L_2^H(0,N)$, then
$\widetilde{F}(z) = \int v^*(x,\overline{z})H(x)f(x)\, dx$, that is, instead
of $u$, one uses the solution $v$
satisfying the ``conjugate'' boundary condition at $x=0$.
The next lemma says that $\widetilde{F}$ does not depend on the
space in which the conjugate function is computed. Notice that since
we are assuming that all points are regular, $B_{N_1}$ is isometrically
contained in $B_{N_2}$ for $N_10$ by
assumption, \eqref{ie} has the unique solution
\begin{equation}
\label{13.1}
p(x,\cdot) = ( 1 + \mathcal{K}_{\phi}^{(x)} )^{-1}g .
\end{equation}
Note that the roles of the variables $x$ and $t$ are quite different:
$t$ is the independent variable, while $x$ is an external parameter.
We now observe the important fact that \eqref{13.1} makes sense not only
on $L_2(0,x)$, but on each space of the following chain of Banach spaces:
\[
C[0,x] \subset L_2(0,x) \subset L_1(0,x) .
\]
Indeed, first of all, $\mathcal{K}_{\phi}^{(x)}$ is a well defined
operator on each of these three spaces; in fact, $\mathcal{K}_{\phi}^{(x)}$
maps $L_1(0,x)$ into $C[0,x]$. Moreover, $\mathcal{K}_{\phi}^{(x)}$
is compact in every case.
This follows from the Arzela-Ascoli Theorem: If $f_n\in L_1(0,x)$,
$\|f_n\|_1\le 1$, then, since the kernel $K$ is uniformly continuous
on $[0,x]\times [0,x]$, the sequence of functions $\mathcal{K}_{\phi}^{(x)}f_n$
is equicontinuous and uniformly bounded, hence there exists a uniformly
convergent subsequence. So $\mathcal{K}_{\phi}^{(x)}$ is compact even
as an operator from $L_1(0,x)$ to $C[0,x]$.
The inclusion $\mathcal{K}_{\phi}^{(x)}(L_1(0,x))\subset C[0,x]$
also shows that the spectrum of $\mathcal{K}_{\phi}^{(x)}$
is independent of the space:
the eigenfunctions with non-zero eigenvalues
are always contained in the smallest space
$C[0,x]$. In particular, we always have that $-1\notin
\sigma(\mathcal{K}_{\phi}^{(x)})$, so $1+\mathcal{K}_{\phi}^{(x)}$ is
boundedly invertible and \eqref{13.1} holds.
To investigate the derivatives of $p$, we again temporarily make the
additional assumption that $\phi$ and $g$
(and hence also $K$) are smooth.
So, let us suppose that $\phi,g\in C^{\infty}$. Then one can show
that the solution $p$ is also smooth: $p\in C^{\infty}(\Delta_N)$.
We leave this part of the proof to the reader. To investigate the
smoothness in $x$, it is useful to transform \eqref{ie} to get an
equivalent family of equations on a space that does not depend on
$x$. See also \cite[Sect.\ 2.3]{Lev} for a discussion of these issues.
Once we know that $p$ is smooth,
we can get integral equations for the derivates by differentiating
\eqref{ie}. Since, for the time being, all functions are $C^{\infty}$,
we may differentiate under the integral sign. Thus we obtain
\begin{subequations}
\begin{gather}
\label{13.1a}
p_t(x,t) = - \int_0^x K_t(t,s) p(x,s)\, ds + g'(t), \\
\label{13.1b}
p_x(x,t) + \int_0^x K(t,s) p_x(x,s)\, ds = - K(t,x)p(x,x), \\
\label{13.1c}
\begin{split}
p_{xx}(x,t) + \int_0^x K(t,s) p_{xx}(x,s)\, ds =
-K_x & (t,x)p(x,x)\\
& -K(t,x) \left( p_x(x,s) + 2p_s(x,s) \right) \bigr|_{s=x}, \\
\end{split}\\
\label{13.1d}
p_{xt}(x,t) = - K_t(t,x)p(x,x) - \int_0^x K_t(t,s)p_x(x,s)\, ds .
\end{gather}
\end{subequations}
For general $\phi$ and $g$, we approximate $\phi'$ in $L_1(-2N,2N)$
by odd functions $\phi'_n\in C_0^{\infty}(-2N,2N)$, and we put
$\phi_n(x)=\int_0^x \phi'_n(t) dt$. Then $\phi_n\to\phi$ in
$C[-2N,2N]$ and $K^{(n)}\to K$ in $C(\Delta_N)$ (we use superscripts
here because in a moment we will want to denote partial derivatives
by subscripts). Similary, we
pick $L_1(0,N)$ approximations $g''_n\in C_0^{\infty}(0,N)$
of $g''\in L_1(0,N)$ and put
\[
g_n(x)=g(0) + x g'(0) + \int_0^x g''_n(t) (x-t)\, dt .
\]
The integral operators $\mathcal{K}_{\phi_n}^{(x)}$ converge to
$\mathcal{K}_{\phi}^{(x)}$ in the operator norm of any of the spaces we have
considered above. In particular, $1+\mathcal{K}_{\phi_n}^{(x)}$ is
boundedly invertible for all sufficiently large $n$.
In fact, $\mathcal{K}_{\phi_n}^{(x)}$ converges to
$\mathcal{K}_{\phi}^{(x)}$ in the norm of
$B(L_1(0,x), C[0,x])$. Moreover, this convergence is uniform in $x\in (0,N]$.
Let $p^{(n)}$ be the solution of \eqref{ie} with $K$ and $g$ replaced
by $K^{(n)}$ and $g_n$, respectively. Then the above remarks together
with \eqref{13.1} imply that
\[
\|p^{(n)}(x,\cdot) - p(x,\cdot) \|_{C[0,x]} \to 0,
\]
uniformly in $x\in [0,N]$.
(Strictly speaking, one needs a separate argument for the
degenerate case $x=0$, but things are very easy here because
$p^{(n)}(0,0)=g_n(0)=g(0)=p(0,0)$.)
In other words, $p^{(n)}$ converges to the solution
$p$ of the original problem in $C(\Delta_N)$. In particular,
$p\in C(\Delta_N)$. Similar arguments work for the first order
partial derivatives. Eq.\ \eqref{13.1b} says that
\[
p_x^{(n)}(x,\cdot) = -p^{(n)}(x,x)( 1 + \mathcal{K}_{\phi_n}^{(x)} )^{-1}
K^{(n)}(\cdot,x),
\]
and the right-hand side converges in $C[0,x]$, uniformly with respect
to $x$. Again, the case $x=0$ needs to be discussed separately;
we leave this to the reader. It follows that
the partial derivative $p_x$ exists, is continuous and
is equal to this limit function. The argument for the existence and
continuity of $p_t$, which uses \eqref{13.1a}, is similar (perhaps easier,
because one does not need to invert an operator).
We have proved part a) now.
Eq.\ \eqref{13.1c} (for $K^{(n)}$ instead of $K$) again has the form
\[
( 1 + \mathcal{K}_{\phi_n}^{(x)} ) p_{xx}^{(n)}(x,\cdot) = h_n(x,\cdot);
\]
we do not write out the inhomogeneous term $h_n$ here.
Since $h_n$ converges in $L_1(0,x)$, but not necessarily in $C[0,x]$,
we now only obtain convergence of $p_{xx}^{(n)}$ in $L_1(0,x)$.
We denote the limit
function by $p_{xx}$, so $p_{xx}(x,\cdot)\in L_1(0,x)$ and
\[
\|p_{xx}^{(n)}(x,\cdot) - p_{xx}(x,\cdot)\|_{L_1(0,x)} \to 0,
\]
uniformly in $x$. Note, however, that $p_{xx}$ need not be a partial
derivative in the classical sense.
Using similar arguments, we deduce from \eqref{13.1d} that
$p_{xt}^{(n)}(x,\cdot)$ converges in $L_1(0,x)$ to a limit function, which
we denote by $p_{xt}(x,\cdot)$. As usual, the convergence is uniform in $x$.
We have that
\begin{multline}
\label{13.2}
p'(x,x) = (p_x+p_t)\bigr|_{t=x} =
-p(x,x)K(x,x) +g'(x) \\
-\int_0^x K(x,s)p_x(x,s)\, ds
-\int_0^x K_x(x,s)p(x,s)\, ds.
\end{multline}
We now show that the individual terms on
the right-hand side are in $AC^{(1)}[0,N]$. This is obvious for the
first two terms, so we only need to discuss the integrals.
If we replace $K$ and $p$ in these integrals
by $K^{(n)}$ and $p^{(n)}$, respectively,
and then let $n$ tend to infinity, we have convergence to the original
terms in $C[0,N]$. We can therefore prove absolute continuity of these
terms by showing that the derivatives converge in $L_1(0,N)$. So, consider
\begin{multline*}
\frac{d}{dx} \int_0^x K^{(n)}(x,s)p_x^{(n)}(x,s)\, ds =
K^{(n)}(x,x)p_x^{(n)}(x,x)\\
+ \int_0^x K_x^{(n)}(x,s) p_x^{(n)}(x,s)\, ds
+ \int_0^x K^{(n)}(x,s) p_{xx}^{(n)}(x,s)\, ds .
\end{multline*}
It is easy to see that the first two terms on the right-hand side
converge in $C[0,N]$. As for the last term, we note that
\begin{multline*}
\int_0^x K^{(n)}(x,s) p_{xx}^{(n)}(x,s)\, ds =
\int_0^x K(x,s) p_{xx}^{(n)}(x,s)\, ds\\
+ \int_0^x \left( K^{(n)}(x,s)-K(x,s)\right) p_{xx}^{(n)}(x,s)\, ds.
\end{multline*}
Now recall that $\mathcal{K}_{\phi_n}^{(x)}-\mathcal{K}_{\phi}^{(x)}\to 0$ in
$B(L_1,C)$ (uniformly in $x$) and $\|p_{xx}^{(n)}(x,\cdot)\|_{L_1(0,x)}$
is bounded as a function of $n$ and $x$. Therefore, the last term
goes to zero in $C[0,N]$. Similarly, the first term also converges in
$C[0,N]$, as we see from the following estimate:
\[
\left| \int_0^x K(x,s) \left( p_{xx}^{(n)}(x,s)- p_{xx}(x,s)
\right) \, ds \right| \le \|\mathcal{K}_{\phi}^{(x)}\|_{B(L_1,C)}
\| p_{xx}^{(n)}(x,\cdot) - p_{xx}(x,\cdot) \|_{L_1} .
\]
Finally, let us analyze the last term from
\eqref{13.2}. By definition of $K$ (see \eqref{Kphi}),
\[
K_x(x,s) = \frac{1}{2} \left( \phi'(x-s) + \phi'(x+s) \right) .
\]
Let us look at the term with $\phi'(x-s)$; the other term is
of course treated similarly. An integration by parts gives
\[
\int_0^x \phi'(x-s)p(x,s)\, ds =
\phi(x)p(x,0) + \int_0^x \phi(x-s)p_s(x,s)\, ds.
\]
The first term on the right-hand side manifestly is absolutely
continuous. To establish absolute continuity of the integral, we argue
exactly as above. Namely, we approximate by smooth functions
and compute the derivative:
\begin{multline*}
\frac{d}{dx} \int_0^x \phi_n(x-s) p_s^{(n)}(x,s)\, ds =\\
\int_0^x \phi'_n(x-s)p_s^{(n)}(x,s)\, ds
+ \int_0^x \phi_n(x-s) p_{xs}^{(n)}(x,s) \, ds .
\end{multline*}
Now arguments analogous to those used in the preceding paragraph show
that this derivative converges in $C[0,N]$, and, also as above,
convergence in $L_1(0,N)$ already would have been sufficient to
deduce the required absolute continuity.
\end{proof}
\section{Some identities}
First of all, we can now complete the work of Sect.\ 11.
\begin{Theorem}
\label{T10.3}
There exists $H(x)\in L_1(0,N)$, $H(x)\ge 0$ for almost every
$x\in (0,N)$, $H\not\equiv 0$ on nonempty open sets,
so that for all $x\in [0,N]$, we have that
$B_x=H_x$ (as de~Branges spaces). Here,
$B_x$ is the de~Branges space based on the de~Branges function
$E_x(z)=u_1(x,z)+iu_2(x,z)$, where
\[
Ju'(t)=zH(t)u(t),\quad\quad u(0)=\bigl( \begin{smallmatrix}
1 \\ 0 \end{smallmatrix} \bigr) ,
\]
as in Proposition \ref{P5.1}, and $H_x$ is the space from Lemma \ref{L8.3}.
Moreover, $H(x)$ can be chosen so that $\widehat{F}(0)=\widetilde{F}(0)$
for all $F\in B_N=H_N$.
\end{Theorem}
The last part justifies our definition of $K_0^{(x)}\in H_x$ from
Sect.\ 13. Recall also that $\widehat{F}(0)$ is computed in $H_N$,
while $\widetilde{F}(0)$ is computed as in Sect.\ 12
by using the realization $B_N$ of this space.
\begin{proof}
As explained in Sect.\ 11, we use the system from
Proposition \ref{P10.2}, but with $x(t)$ as the new independent variable.
As the first step of the proof, let us check how far we are
from satisfying the last part of Theorem \ref{T10.3}.
Propositions \ref{P11.2} and \ref{P12.1} show that
\[
\widetilde{F}(0)\overline{G(0)} - F(0)\overline{\widetilde{G}(0)} =
\widehat{F}(0)\overline{G(0)}
- F(0)\overline{\widehat{G}(0)}
\]
for all $F,G\in B_N=H_N$. In particular, if $F(0)=0$, then
$\widetilde{F}(0)=\widehat{F}(0)$. Since both $F\mapsto \widetilde{F}(0)$
and $F\mapsto \widehat{F}(0)$ are linear maps, it follows that there exists
a constant $c\in\mathbb C$, independent of $F$, so that
\begin{equation}
\label{15-1}
\widehat{F}(0)=\widetilde{F}(0)+cF(0).
\end{equation}
From the definitions, we see that $\widetilde{F^{\#}}(0)=
\overline{\widetilde{F}(0)}$ and $\widehat{F^{\#}}(0)=\overline{
\widehat{F}(0)}$, so the $c$ from \eqref{15-1} must actually be real.
To avoid confusion, let us temporarily denote the reproducing
and conjugate kernels (for
$z=0$) of the spaces $B_t$ by $j_0^{(t)}$ and
$k_0^{(t)}$ (lowercase letters!), respectively. Note also
that the conjugate and reproducing kernels depend only on the de~Branges
space, but not on the particular de~Branges function chosen. Therefore
\eqref{15-1} says that $K_0^{(N)}(z)=k_0^{(N)}(z)+cj_0^{(N)}(z)$.
Since $B_t=H_{x(t)}$ and since by Lemma \ref{L11.1}, $\widetilde{F}(0)$
does not depend on the space in which the conjugate function is computed,
we also have that $j_0^{(t)}(z)=J_0^{(x(t))}(z)$ and
\begin{equation}
\label{15-2}
K_0^{(x(t))}(z) = k_0^{(t)}(z) + c j_0^{(t)}(z).
\end{equation}
Next, we claim that the following analog of Lemma \ref{L10.1} holds:
\begin{equation}
\label{11-2}
B_t = \overline{\bigcup_{st} B_s.
\end{equation}
We will only prove (and use) this for canonical systems without
singular points, where the proof is very easy. However, a similar but --
due to the possible presence of singular points -- somewhat more complicated
result holds for general canonical systems. If $S=\emptyset$, then
Lemma \ref{L9.4} implies that $R_{(a,b)}=\{ 0 \}$
for arbitrary $a**t}
L_2^H(0,s).
\]
It also follows that $x(t)$ is strictly increasing. Indeed,
if $x(t_1)=x(t_2)$, then $B_{t_1}=B_{t_2}$, and these spaces are mapped
by $U^{-1}$ onto $L_2^H(0,t_1)$ and $L_2^H(0,t_2)$, respectively, so
$t_1=t_2$.
The relation \eqref{11-2} allows us to show that $x(t)$ is continuous.
Proposition \ref{P10.2} together with \eqref{11-2} imply that
\[
B_t = \bigcap_{s>t} B_s = \bigcap_{s>t} H_{x(s)}.
\]
On the other hand, $B_t=H_{x(t)}$, and now Lemma \ref{L10.1} and the
fact that $H_y\ominus H_x$ is not the zero space for $y>x$ show that
$\inf_{s>t} x(s) \le x(t)$. A similar argument gives $\sup_{s0$, so
$p(0,0)=g(0)$, $p'(0,0)=g'(0)$. Hence
\[
y(0,0)=1, \quad w(0,0)=0, \quad y'(0,0)=0, \quad w'(0,0)=1,
\]
as claimed, and it also follows that $yw'-y'w=1$.
\end{proof}
\begin{Proposition}
\label{P14.3}
Let $H$ and $y$, $w$ be as in Theorems \ref{T10.3}
and \ref{T12.3}, respectively. Then
\[
H_{11}(x)w(x,x) = H_{12}(x)y(x,x),\quad\quad
H_{12}(x)w(x,x) = H_{22}(x)y(x,x).
\]
\end{Proposition}
\begin{proof}
These identities follow at once from Proposition \ref{P14.1}
and \eqref{id}.
\end{proof}
\section{Conclusion of the proof of Theorem \ref{T8.1}}
We are now in a position to verify the hypotheses of Proposition
\ref{P7.1} for the $H$ constructed in Theorem \ref{T10.3}. More precisely,
we will transform the canonical system one more time to obtain a new
system satisfying the assumptions of Proposition \ref{P7.1}. At the end,
however, it will turn out that this transformation was actually unnecessary.
We know from Theorem \ref{T13.1}b)
that the functions $y(x,x)$, $w(x,x)$ belong to $AC^{(2)}[0,N]$,
and Proposition \ref{P14.2} implies that if $y(x_0,x_0)=0$, then
$y'(x_0,x_0)\not= 0$, $w(x_0,x_0)\not= 0$. Thus
$y$, $w$ have only finitely many zeros in $[0,N]$ and they do not
vanish simultaneously. Also, Proposition \ref{P14.3} implies that
$H_{11}w^2=H_{22}y^2$. So we may consistently define a function
$r \ge 0$ by
\[
r(x) = \begin{cases} \left| y(x,x)\right|^{-1}\sqrt{H_{11}(x)} &
\text{ if }y(x,x)\not= 0 \\
\left| w(x,x)\right|^{-1}\sqrt{H_{22}(x)}
& \text{ if }w(x,x)\not= 0 \end{cases} .
\]
We now need a certain regularity of $r$ (more precisely, we need
that $r\in AC^{(2)}$). This can be established directly by showing
that $H_{ij}\in AC^{(2)}[0,N]$. Note, however, that this statement
is not obvious at this point because, for example, the second derivative
$H''_{11}$, evaluated formally with the help of Proposition
\ref{P14.1}, contains the third order derivative
$y_{xxx}$, which need not exist. Thus it is again easier to first
carry out this final part of
the proof of Theorem \ref{T8.1} under the additional assumption
that $\phi\in C^{\infty}$ and then pass to the general case by a
limiting argument.
By Propositions \ref{P14.1}, \ref{P14.2}, $y(0,0)=H_{11}(0)=1$,
hence $r(0)=1$. Also, since we are assuming that $\phi\in C^{\infty}$,
the function $r$ is also smooth as long as $r>0$. Fix an interval
$[0,L]\subset [0,N]$, so that $r>0$ on $[0,L]$. On this interval
$[0,L]$, we tranform the canonical system as follows. Let
\[
t(x) = \int_0^x r(s)\, ds\quad\quad (0\le x\le L),
\]
let $x(t)$ be the inverse function,
and define the new matrix $\widetilde{H}(t)=H(x(t))/r(x(t))$
for $0\le t \le t(L)$.
Let $u(x,z)$ be the solution of the original system
\[
Ju'=zHu, \quad\quad u(0,z)=\bigl( \begin{smallmatrix} 1 \\ 0
\end{smallmatrix} \bigr),
\]
and put $\widetilde{u}(t,z)=u(x(t),z)$. Then $\widetilde{u}$ solves
the new equation
\begin{equation}
\label{Cantr}
J\widetilde{u}' = z \widetilde{H} \widetilde{u}, \quad\quad
\widetilde{u}(0,z) =\bigl( \begin{smallmatrix} 1 \\ 0
\end{smallmatrix} \bigr);
\end{equation}
the corresponding de~Branges spaces are related by
$\widetilde{B}_t=B_{x(t)}$.
Now the first line from the definition of $r$ shows that
\begin{equation}
\label{15.a}
\widetilde{H}_{11}(t) = r(x(t))\frac{H_{11}(x(t))}{r^2(x(t))}
= r(x(t)) y^2(x(t)),
\end{equation}
at least if $y(x(t))\not= 0$. Here, $y(x)$ is short-hand for
$y(x,x)$. However, as $y$ and $w$ do not vanish simultaneously,
Proposition \ref{P14.3} implies that $H_{11}(x)=0$
if $y(x)=0$, so \eqref{15.a} holds generally. Similar arguments apply
to the other matrix elements:
\[
\widetilde{H}(t) = r(x(t)) \begin{pmatrix} y^2(x(t)) &
(yw)(x(t)) \\ (yw)(x(t)) & w^2(x(t)) \end{pmatrix}
\equiv \begin{pmatrix} a^2(t) & (ab)(t) \\ (ab)(t) & b^2(t) \end{pmatrix},
\]
where $a(t)=r^{1/2}(x(t))y(x(t))$, $b(t)=r^{1/2}(x(t))w(x(t))$.
Now as $r>0$ on $[0,L]$, we have $a,b\in C^{\infty}[0,L]$ and
\begin{align*}
a&(t)b'(t)-a'(t)b(t)\\
& = r^{1/2}(x(t))\left( y(x(t)) \frac{d}{dt}
\left( r^{1/2}(x(t))w(x(t)) \right) -w(x(t)) \frac{d}{dt}
\left( r^{1/2}(x(t))y(x(t)) \right) \right)\\
& = r(x(t)) \left( y(x(t)) \frac{d}{dt}
w(x(t)) - w(x(t)) \frac{d}{dt}
y(x(t)) \right)\\
& = \left( y(x)w'(x)-w(x)y'(x)\right) \bigr|_{x=x(t)} = 1
\end{align*}
by Proposition \ref{P14.2}. Moreover, $a(0)=r^{1/2}(0)y(0)=1$ and
$a'(0)=r^{1/2}(0)y'(0)=0$ also by Proposition \ref{P14.2}.
Thus the canonical system \eqref{Cantr} satisfies the assumptions
of Proposition \ref{P7.1}. So \eqref{Cantr} comes from a Schr\"odinger
equation. In particular, we have the following description of
$\widetilde{B}_t$ as a set:
\[
\widetilde{B}_t = S_t = \left\{ F(z)=\int_0^t f(s)\cos\sqrt{z}s\, ds:
f\in L_2(0,t) \right\} .
\]
On the other hand, $\widetilde{B}_t=B_{x(t)}=H_{x(t)}$ by Theorem
\ref{T10.3}, and, again as sets, $H_{x(t)}=S_{x(t)}$ by the definition
of $H_{x(t)}$. We are forced to admit that $x(t)=t$ for all $t\in [0,t(L)]$.
In other words, we have shown that if $r>0$ on $[0,L]$,
then $r\equiv 1$ on $[0,L]$. Also, as
noted at the beginning of the argument, $r(0)=1$, so the set of
$L$'s such that $r\equiv 1$ on $[0,L]$ is nonempty and closed and
open in $[0,N]$, hence $r\equiv 1$ on all of $[0,N]$.
So in reality, there has been no transformation, and the system
\eqref{Cantr} is the system from Theorem \ref{T10.3}. This system
is equivalent to a Schr\"odinger equation, that is, there exists
$V\in L_1(0,N)$, so that $H_x=B_x=S_x$ (as de~Branges spaces).
In particular, we may specialize to $x=N$, and we have thus proved
Theorem \ref{T8.1} under the additional assumption that $\phi\in
C^{\infty}$.
The extension to the general case is routine. As usual, approximate
$\phi'$ in $L_1(-2N,2N)$ by odd functions $\phi'_n\in C_0^{\infty}
(-2N,2N)$ and put $\phi_n(x)=\int_0^x\phi'_n(t)\, dt$. Then $\phi_n
\in C^{\infty}\cap \Phi_N$ for all sufficiently large $n$.
As a by-product of the above argument, we have the formulae
\begin{equation}
\label{15.1}
H_{11}(x)=y^2(x,x),\quad H_{12}(x)=y(x,x)w(x,x),\quad
H_{22}(x)=w^2(x,x),
\end{equation}
which are valid for smooth $\phi$. So we may use \eqref{15.1} if we replace
$\phi$ by $\phi_n$. Now if $n\to\infty$,
all quantities converge pointwise to
the right limits; for the matrix elements $H_{ij}$, this follows
from Proposition \ref{P14.1}.
So \eqref{15.1} holds in the general case as well.
Now a glance at Theorem \ref{T13.1}b) and Proposition \ref{P14.2}
suffices to verify the hypotheses of Proposition \ref{P7.1} (for the
canonical system from Theorem \ref{T10.3}; no
transformation is needed this time). This completes the proof of
Theorem \ref{T8.1}.
\section{Half line problems}
In this section, we discuss half line problems, that is, operators
of the form $-d^2/dx^2 + V(x)$ on $L_2(0,\infty)$. We assume, as usual,
that $V\in L_{1,loc}([0,\infty))$.
Our presentation in this section will be less detailed.
Of course, in a sense, half line
problems are contained in our previous treatment because we may analyze the
problem on $(0,\infty)$ by analyzing it on $(0,N)$ for every $N$. More
precisely, Theorems \ref{T4.1}, \ref{T4.2}, \ref{T8.1}, and \ref{T8.2},
applied with variable $N>0$,
give a one-to-one correspondence between functions $\phi\in
\bigcap_{N>0} \Phi_N$ and locally integrable potentials $V:[0,\infty)
\to \mathbb R$. Here we say that $\phi\in \bigcap_{N>0} \Phi_N$ if
the restriction of $\phi$ to $[-2N,2N]$ belongs to $\Phi_N$ for
every $N>0$. The uniqueness assertions from Theorem \ref{T8.2} make
sure that there are no consistency problems. For example, the
following holds: If $N_10$. In other words, we demand that
\[
\|F\|_{S_N}^2 = \int |F(\lambda)|^2 \, d\rho(\lambda)
\quad\quad\forall F\in \bigcup_{N>0} S_N .
\]
Borrowing the terms commonly used for discrete problems,
we may also say that the spectral measures are precisely the
solutions of a (continuous version of a) certain moment problem.
By Theorem \ref{T3.2}b), the measures from Weyl theory
are indeed spectral measures in this sense. In particular,
given a potential $V\in L_{1,loc}([0,\infty))$, spectral measures always exist.
The spectral measure is unique precisely if $V$ is in the limit
point case at infinity. Indeed, if $V$ is in the limit circle
case, any choice of a boundary condition at infinity yields a
spectral measure, and there are many others. For instance, one
can form convex combinations or, more generally, averages of these
measures. Conversely, if $V$ is in the limit point case, then
uniqueness of the spectral measure follows from the Nevanlinna
type parametrization of the measures $\mu$ for which $L_2(\mathbb R,
d\mu)$ isometrically contains $S_N$ together with the fact that
the Weyl circles shrink to points.
The Gelfand-Levitan conditions characterize the spectral measures
of half line problems. We now want to demonstrate that such a
characterization also follows in a rather straightforward way from
our direct and inverse spectral theorems
(Theorems \ref{T4.1}, \ref{T4.2}, \ref{T8.1}, and \ref{T8.2})
and some standard material.
For a positive Borel measure $\rho$, introduce the signed measure
$\sigma=\rho-\rho_0$ (where $\rho_0$ is the measure for zero potential
from \eqref{mrho0}), and consider the following two conditions.
\begin{enumerate}
\item If $F\in \bigcup_{N>0} S_N$, $\int |F(\lambda)|^2 \, d\rho(\lambda)
=0$, then $F\equiv 0$.
\item For every $g\in C_0^{\infty}(\mathbb R)$, the integral
$\int d\sigma(\lambda)\int dx\, g(x) \cos\sqrt{\lambda}x$
converges absolutely:
\[
\int_{-\infty}^{\infty} d|\sigma|(\lambda) \left|
\int_{-\infty}^{\infty} dx\, g(x) \cos\sqrt{\lambda}x \right| < \infty.
\]
Moreover, there exists an even, real valued function
$\phi\in AC^{(1)}(\mathbb R)$ with $\phi(0)=0$, so that
\[
\int d\sigma(\lambda)\int dx\, g(x) \cos\sqrt{\lambda}x
= \int g(x) \phi(x)\, dx
\]
for all $g\in C_0^{\infty}(\mathbb R)$.
\end{enumerate}
The set of $\rho$'s satisfying these two conditions will be denoted
by $GL$, for Gelfand-Levitan. We do {\it not} require that $\bigcup
S_N \subset L_2(\mathbb R, d\rho)$, so at this point, we cannot
exclude the possibility that for fixed $\rho\in GL$, there exists $F\in
\bigcup S_N$ with $\int |F|^2\, d\rho = \infty$. However, we will
see in moment that actually there are no such $F$'s.
Our definition of $GL$ is inspired by
Marchenko's treatment of the Gelfand-Levitan theory
(see especially \cite[Theorem 2.3.1]{Mar}). Note, however,
that Marchenko does not regularize by subtracting $\rho_0$, but by
using the analog of the function $\psi$ from Sect.\ 13
instead of $\phi$. Moreover, he uses a space of
test functions tailor made for the discussion of Schr\"odinger operators,
and he assumes continuity of the potential.
\begin{Theorem}
\label{T17.1}
a) For every $\rho\in GL$, there exists a unique
$V\in L_{1,loc}([0,\infty))$ so that $\rho$ is a spectral measure
of $-d^2/dx^2+V(x)$.\\
b) If $\rho$ is a spectral measure of $-d^2/dx^2+V(x)$, then $\rho\in GL$.
\end{Theorem}
\begin{proof}
a) A computation using
condition 2.\ from the definition of $GL$ shows that for
every $f\in C_0^{\infty}(\mathbb R)$, the function
\[
F(\lambda)=\int f(x)\cos\sqrt{\lambda}x \, dx
\]
belongs to $L_2(\mathbb R, d\rho)$ and
\[
\|F\|_{L_2(\mathbb R,d\rho)}^2 =
\|f\|_{L_2(\mathbb R)}^2 + \int\!\!\!\int ds\, dt\,
\overline{f(s)}f(t) \frac{1}{2}\left( \phi(s-t) + \phi(s+t) \right) .
\]
In particular, it follows that the identity
\begin{equation}
\label{17-1}
\|F\|_{L_2(\mathbb R,d\rho)}^2 =
\langle f, (1+\mathcal{K}_{\phi}) f \rangle_{L_2(0,N)}
\end{equation}
holds if $f\in C_0^{\infty}(0,N)$. By a density argument
and the fact that norm convergent sequences have subsequences that
converge almost everywhere, condition 1.\ now implies that
$1+\mathcal{K}_{\phi}>0$ as an operator on $L_2(0,N)$.
So $\phi\in\Phi_N$, and from Theorem \ref{T8.1},
we thus get $V\in L_1(0,N)$, so that
\[
\|F\|_{S_N}^2 =
\langle f, (1+\mathcal{K}_{\phi}) f \rangle_{L_2(0,N)}
\]
for all $F\in S_N$. Hence $\|F\|_{S_N}=\|F\|_{L_2(\mathbb R,d\rho)}$
for all $F$ as above with $f\in C_0^{\infty}(0,N)$. Again by a density
argument, this relation actually holds on all of $S_N$.
The whole argument works for arbitrary $N$, and, as
observed above,
Theorem \ref{T8.2}b) implies that there are no consistency problems.
We obtain a locally integrable potential $V$ on $[0,\infty)$,
so that $\|F\|_{S_N}=\|F\|_{L_2(\mathbb R,d\rho)}$
for all $F\in \bigcup S_N$. In other words, $\rho$ is a spectral
measure of $-d^2/dx^2+V(x)$.
Uniqueness of $V$ is clear because \eqref{17-1} forces us
to take $V$ on $(0,N)$
so that the norm on $S_N$ is the one determined by $\mathcal{K}_{\phi}$;
so once $\phi$ is given,
there is no choice by Theorem \ref{T8.2}b) again. But clearly
$\phi$ is uniquely determined by the measure $\sigma$ and hence
also by $\rho$.
b) Property 1.\ is obvious from the equality
$\|F\|_{L_2(\mathbb R,d\rho)}=\|F\|_{S_N}$.
To establish property 2., we use the well known estimates
(\cite[Sect.\ 2.4]{Mar}; compare also \cite{GeSi})
\[
\lim_{L\to\infty} \rho((-\infty,-L)) e^{a\sqrt{L}}=0
\quad\forall a>0,\quad\quad
\int \frac{d\rho(\lambda)}{1+\lambda^2} < \infty.
\]
As $|\sigma|\le \rho
+ \rho_0$, the absolute
convergence of $\int d\sigma(\lambda)\int dx\, g(x) \cos\sqrt{\lambda}x$
for $g\in C_0^{\infty}$ follows. Moreover, this integral depends
continuously on $g\in \mathcal{D}=C_0^{\infty}(\mathbb R)$ and hence
defines a distribution.
Now let $f_1,f_2\in C_0^{\infty}(\mathbb R)$ be even functions. Then
\[
F_i(z)\equiv \int_{-\infty}^{\infty} f_i(x)\cos\sqrt{z}x\, dx
= 2 \int_0^{\infty} f_i(x)\cos\sqrt{z}x\, dx \in \bigcup S_N,
\]
and by a calculation,
\begin{align*}
[F_1,F_2]_{S_N} & = \langle F_1(\lambda), F_2(\lambda)
\rangle_{L_2(\mathbb R,d\rho)}\\
& = 4 \langle f_1, f_2 \rangle_{L_2(0,\infty)}
+ \int d\sigma(\lambda) \int_{-\infty}^{\infty} dx\, g(x) \cos\sqrt{\lambda}x,
\end{align*}
where $N$ must be chosen so large that $F_1,F_2\in S_N$ and
\begin{equation}
\label{17.1}
g(x) \equiv \frac{1}{2} \int_{-\infty}^{\infty}
\overline{f_1\left( \frac{x+y}{2} \right)}
f_2\left( \frac{x-y}{2} \right)\, dy.
\end{equation}
On the other hand, we have that
\[
[F_1,F_2]_{S_N} = 4\langle f_1, (1+\mathcal{K}_{\phi}) f_2\rangle
= 4 \langle f_1, f_2 \rangle_{L_2(0,\infty)}
+ \int_{-\infty}^{\infty} g(x)\phi(x)\, dx,
\]
where $\phi\in\bigcap\Phi_N$ is the function from Theorem \ref{T4.2}. Hence
\begin{equation}
\label{17.2}
\int d\sigma(\lambda) \int dx\, g(x) \cos\sqrt{\lambda}x
= \int g(x)\phi(x)\, dx
\end{equation}
for every $g$ that is of the form \eqref{17.1} with even $f_i\in C_0^{\infty}
(\mathbb R)$.
We claim that this set of $g$'s is rich
enough to guarantee the validity of \eqref{17.2} for arbitrary
$g\in C_0^{\infty}(\mathbb R)$. To see this, one can proceed as follows.
By a change of variables, \eqref{17.1} becomes
\[
g(x) = \int \overline{f_1(x-u)} f_2(u)\, du = \left(
\overline{f_1} * f_2 \right) (x).
\]
We can take $\overline{f_1}$ as an approximate identity, that is,
$\overline{f_1(x)}=n\varphi(nx)$, where $\int\varphi =1$ and let $n\to
\infty$. It follows that the set of $g$'s of the form \eqref{17.1}
is dense (in the topology of $\mathcal{D}=C_0^{\infty}(\mathbb R)$)
in the set of even test functions. Moreover,
for odd test functions $g$,
\[
\int d\sigma(\lambda) \int dx\, g(x)\cos\sqrt{\lambda}x =
\int g(x)\phi(x)\, dx = 0.
\]
By combining these facts, we deduce that
\eqref{17.2} holds for every $g\in C_0^{\infty}(\mathbb R)$, as claimed.
\end{proof}
We have no uniqueness statement in Theorem \ref{T17.1}b): for a given
$V$, there may be many $\rho$'s. However, this only comes from
the fact that we have insisted on working with spectral measures.
Clearly, in addition to the bijection $V\leftrightarrow\phi$ between
$L_{1,loc}$ and $\bigcap\Phi_N$ discussed at the beginning
of this section, we also have a one-to-one correspondence
between potentials $V$ and, let us say, distributions
\[
g \mapsto \int d\sigma(\lambda) \int dx\, g(x) \cos\sqrt{\lambda}x .
\]
However, this distribution determines the measure $\sigma$ (and thus $\rho$)
only if (in fact, precisely if) we have limit point case at infinity.
This remark again confirms our claim that in inverse spectral theory,
the function $\phi$ is the more natural object.
\section{Concluding remarks}
The proof of Theorem \ref{T8.1} has indicated at least two
methods of reconstructing the potential $V$ from the spectral
data $\phi$. One consists of solving the integral equation
for $y$ (say),
\[
y(x,t) + \int_0^x K(t,s)y(x,s)\, ds = 1 .
\]
By \eqref{15.1} and
Proposition \ref{P7.1}, $V=y''w'-w''y'$, and since $yw''=wy''$
and $yw'-y'w=1$,
we can compute the potential $V$ from this solution $y$ by
$V(x)=y''(x,x)/y(x,x)$. This way of finding $V$ is quite similar
to the Gelfand-Levitan procedure, where one solves the integral equation
\[
z(x,t) + \int_0^x K(t,s) z(x,s)\, ds = -K(x,t)
\]
for $z$ and computes the potential as $V(x)=z'(x,x)$ (see
\cite[Chapter 2]{Lev}). Loosely speaking, our function $y(x,t)$
is a two-point version of the solution $y(x)$ to $-y''+Vy=0$
with the initial values $y(0)=1$, $y'(0)=0$.
Our proof of Theorem \ref{T8.1}
also admits a second, completely different interpretation.
Namely, the integral equations for $y$, $w$ may be viewed as an
auxiliary tool needed to show that the canonical system that was
constructed with the aid of Theorem \ref{T6.3} is equivalent to
a Schr\"odinger equation. In other words, if one has a
constructive proof of Theorem \ref{T6.3}, one may apply the
corresponding reconstruction procedure
and one automatically obtains a canonical system that satisfies
the hypotheses of Proposition \ref{P7.1}, possibly after some
modifications: deletion of an initial singular interval, introduction
of a new independent variable to match the de~Branges spaces
and finally a
transformation of the type $H\to H_c$, as in the proof of
Theorem \ref{T10.3}. (Actually, this last transformation does not affect
$H_{11}(x)$ and, by the above, is thus not needed to compute $V(x)$.)
Put differently, this means that work on
constructive inverse spectral theory of canonical systems always has
implications in the inverse spectral theory of Schr\"odinger operators
as well.
In \cite{dB2}, Theorem \ref{T6.3}
is proved as follows. The first step
is to approximate the de~Branges function $E$ by polynomial de~Branges
functions $E_n$. The construction of (discrete) canonical systems
for $E_n$ can be carried out using elementary methods only (for
instance, orthogonalization of polynomials). Finally, one passes
to the limit $n\to\infty$. See also \cite{Sakh,Yud} for completely
different views on Theorem \ref{T6.3}.
As a final remark, we would like to point out that the transformation
from a Schr\"odinger equation to a canonical system regularizes the
coefficients. Indeed, $H\in AC^{(2)}$, while in general, one only
has $V\in L_1$. This effect will be particularly convenient if one
considers Schr\"odinger operators with, let us say,
measures or even more singular distributions as potentials.
The theory of canonical systems and de~Branges spaces seems to provide
us with a particularly appropriate approach to the direct and inverse
spectral theory of such operators.
\begin{thebibliography}{99}
\bibitem{ADym} D.Z.\ Arov and H.\ Dym, $J$-inner matrix functions,
interpolation and inverse problems for canonical systems, I: Foundations,
Int.\ Eq.\ Op.\ Theory {\bf 29} (1997), 373--454.
\bibitem{Atk} F.V.\ Atkinson, On the location of Weyl circles,
Proc.\ Roy.\ Soc.\ Edinburgh {\bf 88A} (1981), 345--356.
\bibitem{CL} E.A.\ Coddington and N.\ Levinson, Theory of
Ordinary Differential Equations, McGraw-Hill, New York, 1955.
\bibitem{dB1} L.\ de Branges, Some Hilbert spaces of entire
functions I, Trans.\ Amer.\ Math.\ Soc.\ {\bf 96} (1960),
259--295.
\bibitem{dB2} L.\ de Branges, Some Hilbert spaces of entire
functions II, Trans.\ Amer.\ Math.\ Soc.\
{\bf 99} (1961), 118--152.
\bibitem{dB3} L.\ de Branges, Some Hilbert spaces of entire
functions III, Trans.\ Amer.\ Math.\ Soc.\
{\bf 100} (1961), 73--115.
\bibitem{dB4} L.\ de Branges, Some Hilbert spaces of entire
functions IV, Trans.\ Amer.\ Math.\ Soc.\ {\bf 105} (1962), 43--83.
\bibitem{dB} L.\ de Branges, Hilbert Spaces of Entire Functions,
Prentice-Hall, Englewood Cliffs, 1968.
\bibitem{DT} P.\ Deift and E.\ Trubowitz, Inverse scattering
on the line, Commun.\ Pure Appl.\ Math.\ {\bf 32} (1979), 121--251.
\bibitem{Dym} H.\ Dym, An introduction to de~Branges spaces
of entire functions with applications to differential
equations of the Sturm-Liouville type, Adv.\ Math.\
{\bf 5} (1970), 395--471.
\bibitem{DMcK} H.\ Dym and H.P.\ McKean, Gaussian Processes,
Function Theory, and the Inverse Spectral Problem. Academic
Press, New York, 1976.
\bibitem{Garn} J.B.\ Garnett, Bounded Analytic Functions,
Academic Press, New York, 1981.
\bibitem{GL} I.M.\ Gelfand and B.M.\ Levitan, On the determination
of a differential equation from its spectral function,
Amer.\ Math.\ Soc.\ Transl.\ (2) {\bf 1} (1955), 253--304.
\bibitem{GeSi} F.\ Gesztesy and B.\ Simon, A new approach to
inverse spectral theory, II.\ General real potentials and the
connection to the spectral measure, Ann.\ Math.\ {\bf 152}
(2000), 593--643.
\bibitem{GKr} I.C.~Gohberg and M.G.~Krein, Theory and Applications
of Volterra Operators in Hilbert Space, Transl.\ of Math.\ Monographs,
Vol.~24, Amer.\ Math.\ Soc.\, Providence, 1970.
\bibitem{BJH} B.J.~Harris, The asymptotic form of the Titchmarsh-Weyl
$m$-function associated with a second order differential equation
with locally integrable coefficient, Proc.\ Roy.\ Soc.\ Edinburgh
{\bf 102A} (1986), 243--251.
\bibitem{HdSW} S.\ Hassi, H.\ de~Snoo, and H.\ Winkler,
Boundary-value problems for two-dimensional canonical systems,
Int.\ Eq.\ Op.\ Theory {\bf 36} (2000), 445--479.
\bibitem{HKS} D.~Hinton, M.~Klaus, and J.K.~Shaw, Series representation and
asymptotics for Titchmarsh-Weyl $m$-functions, Differential Int.\ Eq.\
{\bf 2} (1989), 419--429.
\bibitem{Horv} M.\ Horvath, On the inverse spectral theory
of Schr\"odinger and Dirac operators, Trans.\ Amer.\ Math.\ Soc.\
{\bf 353} (2001), 4155--4171.
\bibitem{Lev} B.M.\ Levitan, Inverse Sturm-Liouville Problems,
VNU Science Press, Utrecht, 1987.
\bibitem{Mar} V.A.\ Marchenko, Sturm-Liouville Operators and
Applications, Birkh\"auser, Basel, 1986.
\bibitem{PT} J.\ P\"oschel and E.\ Trubowitz, Inverse Spectral
Theory, Academic Press, Orlando, 1987.
\bibitem{Sakh} L.A.\ Sakhnovich, Spectral Theory of Canonical
Systems. Method of Operator Identities, Birkh\"auser, Basel, 1999.
\bibitem{Si} B.\ Simon, A new approach to inverse spectral
theory, I.\ Fundamental formalism, Ann.\ Math.\ {\bf 150} (1999),
1029--1057.
\bibitem{WMLN} J.\ Weidmann, Spectral Theory of Ordinary
Differential Operators, Lecture Notes in Mathematics, Vol.\ 1258,
Springer-Verlag, New York, 1987.
\bibitem{Wo} H.\ Woracek, De~Branges spaces of entire functions
closed under forming difference quotients, Int.\ Eq.\ Op.\
Theory {\bf 37} (2000), 238--249.
\bibitem{Yud} P.\ Yuditskii, A special case of de~Branges' theorem
on the inverse monodromy problem, Int.\ Eq.\ Op.\ Theory {\bf 39}
(2001), 229--252.
\end{thebibliography}
\end{document}
---------------0109180820111--
**