\documentclass{amsart}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{amsfonts}
\usepackage{amssymb}
\newtheorem{Theorem}{Theorem}[section]
\newtheorem{Proposition}[Theorem]{Proposition}
\newtheorem{Lemma}[Theorem]{Lemma}
\newtheorem{Corollary}[Theorem]{Corollary}
\theoremstyle{definition}
\newtheorem{Definition}{Definition}[section]
\numberwithin{equation}{section}
\begin{document}
\title[Inverse spectral theory]{Inverse spectral theory for one-dimensional Schr\"odinger operators: the $A$ function}
\author{Christian Remling}
\address{Universit\"at Osnabr\"uck\\ Fachbereich Mathematik/Informatik\\ 49069 Osnabr\"uck\\ Germany}
\email{cremling@mathematik.uni-osnabrueck.de}
\urladdr{www.mathematik.uni-osnabrueck.de/staff/phpages/remlingc.rdf.html}
\date{December 4, 2001}
\thanks{2000 {\it Mathematics Subject Classification.} Primary 34A55, 34L40; secondary 46E22}
\keywords{Schr\"odinger operator, inverse spectral theory, $A$ function}
\thanks{Remling's work was supported by the Heisenberg program of the Deutsche Forschungs\-gemein\-schaft}
%\thanks{to appear in {\it Commun.\ Math.\ Phys.} }
\begin{abstract} We link recently developed approaches to the inverse spectral problem (due to Simon and myself, respectively). We obtain a description of the set of Simon's $A$ functions in terms of a positivity condition. This condition also characterizes the solubility of Simon's fundamental equation.
\end{abstract} \maketitle \section{Introduction} Consider a one-dimensional Schr\"odinger equation, \begin{equation} \label{se} -y''(x)+V(x)y(x)=zy(x). \end{equation} In inverse spectral theory, we study the problem of recovering the potential $V$ from spectral data. Much has been done on this; we refer the reader to \cite{Lev,Mar} for expositions of classical work, especially the approach of Gelfand and Levitan and to \cite{PT,Sakh} for rather different views of the subject. In this paper, we are concerned with two recently developed methods: the approach of Simon \cite{Si} and the treatment given by myself in \cite{RemdB}. More specifically, we will apply the method of \cite{RemdB} to solve an open problem in Simon's theory. This supplements the results of \cite{Si} and shows that Simon's basic equation is a smoothly working machine for reconstructing the potential: not only can the equation be used for this purpose (this was shown in \cite{Si}), but it also automatically dismisses improper spectral data. We will now describe our results in more detail and, at the same time, give a brief introduction to the results of \cite{Si}. The basic object in Simon's approach is the so-called $A$ function. The original definition is based on a representation formula for the Titchmarsh-Weyl $m$ function of the Schr\"odinger operator on $(0,\infty)$ with Dirichlet boundary condition ($y(0)=0$) at the origin. Namely, it is shown in \cite{Si} that there exists a unique real valued function $A$ so that \begin{equation} \label{mA} m(k^2)=ik - \int_0^{\infty} A(t)e^{2ikt}\, dt. \end{equation} Some assumptions are necessary; for instance, \eqref{mA} holds if the potential $V$ is compactly supported and $\text{Im }k$ is sufficiently large. $A$ is in $L_1(0,N)$ for all $N>0$, and $A$ on $(0,N)$ is a function of $V$ on $(0,N)$ only. The converse is also true: $A$ on $(0,N)$ determines $V$ on $(0,N)$. Our first result characterizes the set of $A$ functions. 
Define \[ \mathcal{A}_N = \left\{ A\in L_1(0,N): 1+\mathcal{K}_A >0 \right\} . \] Here, $\mathcal{K}_A$ is the (self-adjoint, Hilbert-Schmidt) integral operator on $L_2(0,N)$ with kernel \begin{subequations} \label{KA} \begin{align} K(x,t)& =\frac{1}{2} \left( \phi(x-t) - \phi(x+t) \right) ,\\ \phi(x)& = \int_0^{|x|/2} A(t)\, dt. \end{align} \end{subequations} So $(\mathcal{K}_A f)(x) = \int_0^N K(x,t) f(t)\, dt$, and in the definition of $\mathcal{A}_N$, we require that the operator $1+\mathcal{K}_A$ be positive definite. \begin{Theorem} \label{T1.1} $\mathcal{A}_N$ is the set of $A$ functions on $(0,N)$. In other words, given $A_0\in L_1(0,N)$, there exists $V\in L_1(0,N)$ so that $A_0$ is the $A$ function of this $V$ precisely if $A_0\in\mathcal{A}_N$. \end{Theorem} Note that because of the local character of the problem, we do not properly distinguish between functions and their restrictions to $(0,N)$. Also, $A$ functions and potentials are always real valued; we do not make this explicit in the notation. It is not yet clear why $A$ is an interesting object in inverse spectral theory. This depends on two facts. First of all, if $V$ is continuous, then $V(0)=A(0)$ (for general $V$, this still holds, but needs interpretation). Second, if $A(\cdot, x)$ denotes the $A$ function of the problem on $[x,\infty)$, then this one-parameter family of $A$ functions satisfies \begin{equation} \label{Aeq} \frac{\partial A(t,x)}{\partial x} = \frac{\partial A(t,x)}{\partial t} + \int_0^t A(s,x)A(t-s,x)\, ds. \end{equation} Basically, this is a way of writing the Schr\"odinger equation \eqref{se}; the point is that the potential has disappeared, so that \eqref{Aeq} can be solved without knowing $V$. In general, \eqref{Aeq} needs to be interpreted appropriately, but if $V$ is smooth ($V\in C^1$ will do), \eqref{Aeq} holds pointwise. Eq.\ \eqref{Aeq} provides us with a method of recovering the potential from its $A$ function. Let $A_0$ be this $A$ function. 
Then, if one can solve \eqref{Aeq} with the initial condition $A(t,0)=A_0(t)$ on $\Delta_N=\{ (x,t): 0\le x\le N, 0\le t\le N-x \}$, the first fact lets one read off the potential as $V(x)=A(0,x)$ ($0\le x\le N$). It turns out that the positivity condition $1+\mathcal{K}_A>0$ also characterizes the solubility of this boundary value problem. In other words, if $A_0\in L_1(0,N)$ is given, \eqref{Aeq} has a solution $A(t,x)$ defined on $\Delta_N$ and satisfying $A(t,0)=A_0(t)$ precisely if $A_0\in\mathcal{A}_N$. This is the main result of this paper. Keel and Simon have independently established the necessity of the positivity condition (unpublished work). They use essentially the same method of proof. We will give the precise formulation of the result described in the preceding paragraph later (in Theorem \ref{T5.2}) because this requires an interpretation of \eqref{Aeq} for non-smooth $A$. Fortunately, there is a very natural and satisfactory way to do this, based on a regularity result from \cite{Si}. We conclude this introduction with a more detailed overview of the contents of this paper. In the following section, we review material from \cite{RemdB}. Unfortunately, in \cite{RemdB}, I chose Neumann boundary conditions, whereas here we need Dirichlet boundary conditions, so we cannot quote directly from \cite{RemdB}. On the other hand, the necessary modifications are in most cases rather obvious and straightforward, so we will, as a rule, not give the proofs. There are two exceptions: we will prove Theorem \ref{T2.2} because we need the material from this proof to establish the relations to the theory from \cite{Si} and to prove Theorem \ref{T1.1}. This material is presented in Sect.\ 3. Also, we will make some remarks on the proof of Theorem \ref{T4.1} in the appendix because here the necessary changes in the treatment of \cite{RemdB} are not entirely routine. In Sect.\ 4, we formulate and prove the result on the solubility of \eqref{Aeq}.
The discussion of this result is continued in Sect.\ 5. {\bf Acknowledgment:} I thank Barry Simon for informing me of his work with Keel and for useful correspondence on the subject. \section{Spectral representation and de~Branges spaces} We start by giving a very short review on de~Branges spaces and their use in the spectral representation of Schr\"odinger operators. This material was presented in detail in \cite{RemdB}, but for Neumann boundary conditions ($y'(0)=0$) instead of Dirichlet boundary conditions ($y(0)=0$). A {\it de~Branges function} is an entire function $E$ satisfying the inequality $|E(z)|>|E(\overline{z})|$ for all $z\in\mathbb C^+ =\{ z\in\mathbb C : \text{ Im }z>0 \}$. If a de~Branges function $E$ is given, one can form the {\it de~Branges space} $B(E)$. It consists of the entire functions $F$ for which $F/E, F^{\#}/E\in H_2(\mathbb C^+)$. Here, $F^{\#}(z)=\overline{F(\overline{z})}$, and $H_2(\mathbb C^+)$ is the usual Hardy space on the upper half plane. So, $g\in H_2(\mathbb C^+)$ precisely if $g$ is a holomorphic function on $\mathbb C^+$ and $\sup_{y>0} \int_{-\infty}^{\infty} |g(x+iy)|^2 \, dx < \infty$. An alternate description of $B(E)$ is given by \begin{multline*} B(E) = \biggl\{ F: F\text{ entire, }\int_{\mathbb R} \left| (F/E)(\lambda) \right|^2\, d\lambda < \infty, \\ |(F/E)(z)|, |(F^{\#}/E)(z)| \le C_F (\text{Im }z)^{-1/2}\quad \forall z\in\mathbb C^+ \biggr\} . \end{multline*} $B(E)$ with the scalar product \[ [F,G] = \frac{1}{\pi}\int_{\mathbb R} \overline{F(\lambda)} G(\lambda)\, \frac{d\lambda}{|E(\lambda)|^2} \] is a Hilbert space. Moreover, the {\it reproducing kernels} \[ J_z(\zeta) = \frac{\overline{E(z)}E(\zeta)- E(\overline{z})\overline{E(\overline{\zeta})}} {2i(\overline{z}-\zeta)} \] belong to $B(E)$ for every $z\in\mathbb C$ and $[J_z,F]=F(z)$ for all $F\in B(E)$. We now describe how de~Branges spaces occur in the spectral representation of Schr\"odinger operators. 
For (much) more on the general theory of de~Branges spaces, see the classic \cite{dB}. Consider the Schr\"odinger equation \eqref{se} on $0\le x\le N$, with potential $V\in L_1(0,N)$ and Dirichlet boundary condition at the origin and arbitrary (self-adjoint) boundary condition at $x=N$: \begin{equation} \label{bc} y(0)=0,\quad\quad y(N)\sin\beta + y'(N)\cos\beta = 0. \end{equation} The associated self-adjoint operators $H_N^{\beta}= -d^2/dx^2 + V(x)$ with domains \begin{multline*} D(H_N^{\beta}) = \bigl\{ y\in L_2(0,N): y,y' \text{ absolutely continuous,}\\ -y''+Vy \in L_2(0,N), y \text{ satisfies }\eqref{bc} \bigr\} \end{multline*} have simple, purely discrete spectrum. A spectral representation can thus be obtained by expanding in terms of eigenfunctions. Usually, one proceeds as follows. Let $u(x,z)$ be the solution of \eqref{se} with the initial values $u(0,z)=0$, $u'(0,z)=1$, and define a Borel measure $\rho_N^{\beta}$ by \[ \rho_N^{\beta} = \sum \frac{\delta_{\lambda}}{\|u(\cdot,\lambda)\|_{L_2(0,N)}^2} . \] The sum is over the eigenvalues $\lambda$ of $H_N^{\beta}$, and $\delta_{\lambda}$ is the Dirac measure at $\lambda$ (so $\delta_{\lambda}(\{ \lambda \} ) =1$, $\delta_{\lambda}(\mathbb R \setminus \{ \lambda \} ) =0$). Then the operator $U$ acting as \begin{equation} \label{defU} (Uf)(\lambda) = \int u(x,\lambda) f(x)\, dx \end{equation} maps $L_2(0,N)$ unitarily onto $L_2(\mathbb R, d\rho_N^{\beta})$. At this point, de~Branges spaces enter the game. Namely, this space $L_2(\mathbb R, d\rho_N^{\beta})$ can be identified as a de~Branges space. To do this, let $E_N(z)=u(N,z)+iu'(N,z)$. Then $E_N$ is obviously entire, and it also satisfies $|E_N(z)|>|E_N(\overline{z})|$ ($z\in\mathbb C^+$) -- in fact, this inequality is nothing but the well known fact that $-u'/u$ is a Herglotz function in disguise. So $E_N$ is a de~Branges function. We will denote the de~Branges space $B(E_N)$ based on this function by $S_N$. 
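The eigenfunction expansion just described is easy to explore numerically. The following sketch (our illustration, not part of the paper; grid sizes are ad hoc) discretizes $H_N^{\beta}$ by finite differences for $V=0$, $N=\pi$, and $\beta=\pi/2$, i.e.\ Dirichlet conditions at both endpoints. In this case the eigenvalues are $\lambda_k=k^2$ and, since $u(x,k^2)=\sin kx/k$, the weights are $\rho_N^{\beta}(\{k^2\})=\|u\|^{-2}=2k^2/\pi$.

```python
import numpy as np

# Finite-difference sketch of the spectral measure rho_N^beta for V = 0,
# N = pi, beta = pi/2 (Dirichlet conditions y(0) = y(N) = 0).
N, n = np.pi, 800
h = N / n
diag = np.full(n - 1, 2.0 / h**2)        # -y'' on the interior grid points
off = np.full(n - 2, -1.0 / h**2)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
lam, v = np.linalg.eigh(H)               # eigenvalues in increasing order

weights = []
for k in (1, 2, 3):
    u = v[:, k - 1]
    u = u * (h / u[0])                   # rescale so that u'(0) = 1
    weights.append(1.0 / (h * np.dot(u, u)))  # rho_N^beta({lambda_k}) = ||u||^{-2}
```

For $k=1,2,3$ the computed eigenvalues and weights match the exact values $k^2$ and $2k^2/\pi$ to roughly four digits.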
\begin{Theorem} \label{T2.0} The Hilbert spaces $S_N$ and $L_2(\mathbb R, d\rho_N^{\beta})$ are identical. More precisely, if $F(z)\in S_N$, then the restriction of $F$ to $\mathbb R$ belongs to $L_2(\mathbb R, d\rho_N^{\beta})$, and this restriction map is unitary from $S_N$ onto $L_2(\mathbb R, d\rho_N^{\beta})$. The map \begin{gather*} U: L_2(0,N)\to S_N\\ (Uf)(z) = \int u(x,z)f(x)\, dx \end{gather*} maps $L_2(0,N)$ unitarily onto $S_N$. \end{Theorem} This is the analog of \cite[Theorem 3.1]{RemdB} and the remark following it for Dirichlet boundary conditions, and it has the same proof. Note that the $U$ from Theorem \ref{T2.0} is just the composition of the $U$ from \eqref{defU} with the inverse of the restriction map from the first part of Theorem \ref{T2.0}. As a corollary of the last part of Theorem \ref{T2.0}, we have the following description of $S_N$ as a set: \begin{equation} \label{2.1} S_N = \left\{ F(z)=\int u(x,z)f(x)\, dx : f\in L_2(0,N) \right\} . \end{equation} It also follows that $S_N$ is isometrically contained in $S_{N'}$ if $N\le N'$. Indeed, these spaces are the images of $L_2(0,N)$ and $L_2(0,N')$, respectively, under the unitary map $U$ from Theorem \ref{T2.0}. Similarly, if $V$ is defined on $(0,\infty)$ and $\rho$ is a spectral measure of the half line problem, the $S_N$ may be identified (by restricting the functions from $S_N$ to $\mathbb R$) with a subspace of $L_2(\mathbb R, d\rho)$. This follows in the same way from the fact that the $U$ from \eqref{defU} also maps $L_2(0,\infty)$ unitarily onto $L_2(\mathbb R, d\rho)$. It is possible to characterize the de~Branges spaces that come in this way from a Schr\"odinger equation. This characterization may be viewed as the general direct and inverse spectral theory for one-dimensional Schr\"odinger operators, formulated in the language of de~Branges spaces. We begin with the ``direct'' theorems. They are the analogs of \cite[Theorems 4.1, 4.2]{RemdB}.
\begin{Theorem} \label{T2.1} As a set, $S_N$ is given by \[ S_N = \left\{ F(z)=\int f(x) \frac{\sin\sqrt{z}x}{\sqrt{z}}\, dx: f\in L_2(0,N) \right\} . \] \end{Theorem} This set is just what \eqref{2.1} gives for zero potential, so Theorem \ref{T2.1} says that $S_N$ as a vector space is independent of the potential. The next result shows that the scalar product on $S_N$ is a small perturbation of the scalar product for zero potential. \begin{Theorem} \label{T2.2} Let $V\in L_1(0,N)$. Then there exists a unique $A\in L_1(0,N)$ so that for all $F\in S_N$, \[ \|F\|_{S_N}^2 = \langle f, (1+\mathcal{K}_A) f \rangle_{L_2(0,N)}, \] where $\mathcal{K}_A$ is the integral operator defined in \eqref{KA} and $f$ is related to $F$ as in Theorem \ref{T2.1}. \end{Theorem} The proof of Theorem \ref{T2.1} is completely analogous to the proof of \cite[Theorem 4.1]{RemdB} and will be omitted. The proof of Theorem \ref{T2.2} is also more or less analogous to the proof of \cite[Theorem 4.2]{RemdB} (additional, if harmless, complications arise because, in contrast to the situation for Neumann boundary conditions, $\widehat{A}$ need not be a function). This proof, however, will be discussed in detail in the next section because it is precisely there that we make contact with Simon's theory \cite{Si}. The ``inverse'' theorem asserts that a de~Branges space that has the properties stated in Theorems \ref{T2.1}, \ref{T2.2} actually comes from a Schr\"odinger equation. Such a de~Branges space is determined by the function $A$ from Theorem \ref{T2.2}. Clearly, if $A$ is as in Theorem \ref{T2.2}, then $1+\mathcal{K}_A$ must be a positive operator, that is, $A\in\mathcal{A}_N$. In other words, Theorems \ref{T2.1}, \ref{T2.2} associate with each de~Branges space coming from a Schr\"odinger equation on $(0,N)$ a function $A\in\mathcal{A}_N$. Here is the converse. \begin{Theorem} \label{T4.1} Let $A\in\mathcal{A}_N$.
Then there exists a unique potential $V\in L_1(0,N)$ such that the norm on the de~Branges space $S_N$ associated with \eqref{se} with this $V$ is given by \[ \|F\|_{S_N}^2 = \langle f, (1+\mathcal{K}_A)f\rangle_{L_2(0,N)} . \] Here, $F(z)= \int f(t)\frac{\sin\sqrt{z}t}{\sqrt{z}}\, dt$, $f\in L_2(0,N)$. \end{Theorem} Theorem \ref{T4.1} is the analog of \cite[Theorem 5.1]{RemdB}. One can use the method of \cite{RemdB} to prove Theorem \ref{T4.1}, but a number of modifications are necessary. This will be discussed in the appendix. \section{The $A$ function} Let $V\in L_1(0,N)$. Our first goal in this section is to prove Theorem \ref{T2.2} by appropriately defining an $A$ function, $A\in L_1(0,N)$. Basically, we follow the method that was used in \cite[Sect.\ 4]{RemdB} to define the $\phi$ function. Later, it will turn out that the $A$ from Theorem \ref{T2.2} also is the $A$ function in the sense of \cite{Si}. It is also possible to do things the other way around: one can define $A$ as in \cite{Si} and then use the results from \cite{GeSi} to prove that this $A$ also is the $A$ from Theorem \ref{T2.2}. We proceed as follows. Let $\rho_N$ be the spectral measure of the problem on $[0,\infty)$ with potential \[ V_N(x)= \begin{cases} V(x) & x\le N \\ 0 & x>N \end{cases} \] and Dirichlet boundary condition at the origin. Let $d\rho_0(\lambda) =\pi^{-1}\chi_{(0,\infty)}(\lambda)\sqrt{\lambda}\, d\lambda$ be the spectral measure of the half line problem with zero potential, and introduce the signed measure $\sigma=\rho_N-\rho_0$. Formally, we want to define $A$ by \begin{equation} \label{3.1} A(t)= -2 \int \frac{\sin 2t\sqrt{\lambda}}{\sqrt{\lambda}}\, d\sigma(\lambda), \end{equation} but this integral does not, in general, exist, so \eqref{3.1} needs to be interpreted appropriately. Note also that if $A$ is defined via the representation formula for $m$ (see Sect.\ 1), then a variant of \eqref{3.1}, in various interpretations, is established in \cite{GeSi} (see also \cite{RaSi}).
We will not use these results here. As $V_N$ is a compactly supported potential, the spectral measure $\rho_N$ is purely absolutely continuous on $(0,\infty)$, and there are only finitely many negative eigenvalues. To obtain a more detailed description of $\rho_N$, we use the $m$ function $m_N$ of the problem on $[0,\infty)$ with potential $V_N$ and Dirichlet boundary condition. The following lemma collects some properties of $m_N$. It will be convenient to write the spectral parameter as $z=k^2$ and use $k$ as the variable. The $m$ function for zero potential will be denoted by $m_0$, so $m_0(z)=\sqrt{-z}$, where the square root must be chosen so that $m_0$ maps $\mathbb C^+$ to itself. \begin{Lemma} \label{L3.1} a) $m_N$ extends to a meromorphic function on $\mathbb C\setminus [0,\infty)$. $M_N(k)\equiv m_N(k^2)$, originally defined for $k\in\mathbb C^+$, then extends to a meromorphic function on $\mathbb C$. The poles of $M_N$ in $\overline{\mathbb C^+}$ are $i\kappa_1,\ldots,i\kappa_n$ ($n\in\mathbb N_0$) and possibly $0$; here, $\kappa_i>0$ and $-\kappa_1^2,\ldots,-\kappa_n^2$ are the negative eigenvalues. These poles are simple. b) The limit $m_N(\lambda)\equiv \lim_{\epsilon\to 0+} m_N(\lambda+i\epsilon)$ exists for all $\lambda>0$ and \[ \chi_{[0,\infty)}(\lambda)\, d\rho_N(\lambda) = \frac{1}{\pi} \chi_{(0,\infty)}(\lambda)\, \text{\rm Im }m_N(\lambda)\, d\lambda . \] c) For $k\in\mathbb R$, $|k|\to\infty$, the following asymptotic formula holds: \[ \text{\rm Im }(m_N(k^2)-m_0(k^2)) = -\int_0^N V(x)\sin 2|k|x \, dx + O(|k|^{-1}). \] \end{Lemma} Since this is completely standard, we will not give the proof. See, for example, \cite{DT}. One uses the fact that $m_N(k^2)=f_N'(0,k)/f_N(0,k)$, where $f_N$ is the solution of $-f''+V_Nf=k^2f$ with $f_N(x,k)=e^{ikx}$ for $x\ge N$. We now return to \eqref{3.1}. The integral over $(-\infty,0)$ is a finite sum and does not pose any difficulties. The integral over $[0,\infty)$ will be defined as a distribution. 
More precisely, if $g\in\mathcal{S}$ is a test function from the Schwartz space $\mathcal{S}$, we let \begin{equation} \label{defA} (A_+,g)= -2\int_0^{\infty} d\sigma(\lambda)\int_{-\infty}^{\infty} dx\, g(x) \frac{\sin 2\sqrt{\lambda}x}{\sqrt{\lambda}}. \end{equation} The $x$ integral defines a bounded, rapidly decaying function of $\lambda\in (0,\infty)$, so by Lemma \ref{L3.1}, the right-hand side of \eqref{defA} is well defined. Moreover, $(A_+,g)$ depends continuously on $g$ in the topology of $\mathcal{S}$. Hence \eqref{defA} defines a tempered distribution $A_+\in\mathcal{S}'$. We now compute the Fourier transform $\widehat{A_+}$ of $A_+$, which is defined by $(\widehat{A_+},g)=(A_+,\widehat{g})$. We use the convention \[ \widehat{g}(k) = \frac{1}{\sqrt{2\pi}} \int g(x) e^{-ikx}\, dx \] (so $g(x)=(2\pi)^{-1/2} \int \widehat{g}(k) e^{ikx}\, dk$). The measure $\sigma$ can be rewritten with the help of Lemma \ref{L3.1}b). \begin{align*} (\widehat{A_+},g) & = i \int_0^{\infty} \frac{d\sigma(\lambda)} {\sqrt{\lambda}} \int_{-\infty}^{\infty} dx\, \widehat{g}(x) \left( e^{2i\sqrt{\lambda}x} - e^{-2i\sqrt{\lambda}x} \right) \\ & = i \sqrt{2\pi} \int_0^{\infty} \frac{d\sigma(\lambda)} {\sqrt{\lambda}} \left( g(2\sqrt{\lambda})-g(-2\sqrt{\lambda}) \right)\\ & = i \sqrt{\frac{2}{\pi}} \int_0^{\infty} dk \text{ Im } (m_N-m_0)(k^2/4) \left( g(k)-g(-k) \right) \end{align*} We now want to perform the substitution $k\to -k$ in the term involving $g(-k)$, but one must be careful because due to the possible pole of $M_N$ at $k=0$, the two integrals need not exist separately. Therefore, we must first renormalize appropriately. By Lemma \ref{L3.1}a), the limit \[ a=\lim_{k\to 0+} k\text{ Im }m_N(k^2) \] exists. Moreover, $\text{Im }(m_N-m_0)(k^2) -a/k$ remains bounded as $k\to 0+$. 
Thus we can split the integral as follows: \begin{multline} \label{3.2} \int_0^{\infty} dk\, \text{Im } (m_N-m_0)(k^2/4) \left( g(k)-g(-k) \right) =\\ \int_0^{\infty} dk\, \left( \text{Im } (m_N-m_0)(k^2/4) - \frac{2a}{k} \right) \left( g(k)-g(-k) \right) + 2a \int_0^{\infty} \frac{g(k)-g(-k)}{k}\, dk. \end{multline} Both integrals on the right-hand side exist, and the last integral equals $(PV(1/k),g)$, where $PV(1/k)$ is the principal value distribution \[ (PV(1/k),g) = \lim_{\delta\to 0+} \int_{|k|>\delta} \frac{g(k)}{k}\, dk. \] Moreover, in the first term of the right-hand side of \eqref{3.2}, the integrals with $g(k)$ and $g(-k)$ exist separately, and thus we may now substitute $k'=-k$ in the second term to obtain \begin{multline*} \int_0^{\infty} \left( \text{Im } (m_N-m_0)(k^2/4)-\frac{2a}{k}\right) \left( g(k)-g(-k) \right)\, dk =\\ \int_{-\infty}^{\infty} \left( \text{sgn}(k)\, \text{Im }(m_N-m_0)(k^2/4) -\frac{2a}{k} \right) g(k)\, dk. \end{multline*} Here, $\text{sgn}(k)$ is the sign of $k$, that is, $\text{sgn}(k)=1$ if $k>0$ and $\text{sgn}(k)=-1$ if $k<0$. Putting things together, we thus see that $\widehat{A_+}$ is a function plus the principal value distribution. More precisely, \begin{equation} \label{Ahat} \widehat{A_+}(k) = i \sqrt{\frac{2}{\pi}} \left(\text{sgn}(k)\, \text{Im }(m_N-m_0)(k^2/4) -\frac{2a}{k} \right) + 2ia \sqrt{\frac{2}{\pi}} PV(1/k) . \end{equation} Lemma \ref{L3.1}c) now shows that \[ \widehat{A_+}(k) = -i \sqrt{\frac{2}{\pi}} \int_0^N V(t) \sin kt\, dt + 2ia \sqrt{\frac{2}{\pi}} PV(1/k) + \widehat{R}_N(k), \] where $\widehat{R}_N$ is a bounded function that is continuous away from $k=0$ and satisfies $|\widehat{R}_N(k)| \le C/|k|$. Since \begin{equation} \label{PV} PV(1/k) = i\sqrt{\frac{\pi}{2}}\left( \text{sgn}(t) \right)\,\widehat{ }\,\,(k), \end{equation} it follows that \[ A_+(t)= \text{sgn}(t) \left(V_N(|t|) - 2a \right) +R_N(t), \] where $R_N\in L_2(\mathbb R)$. In particular, $A_+$ is a function. 
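The identity \eqref{PV} and the Fourier transform convention above can be spot-checked numerically. In the sketch below (ours, not from the paper), we pair both sides of \eqref{PV} with the test function $g(k)=k\,e^{-k^2}$, chosen for illustration; with the convention above, both pairings equal $\sqrt{\pi}$.

```python
import numpy as np
from scipy.integrate import quad

# Spot-check of PV(1/k) = i*sqrt(pi/2)*(sgn)^(k), paired with the test
# function g(k) = k*exp(-k^2).  Left side: (PV(1/k), g); the integrand is
# smooth, since g vanishes at k = 0.
lhs, _ = quad(lambda k: np.exp(-k**2), -np.inf, np.inf)   # = sqrt(pi)

# Right side: i*sqrt(pi/2) * int sgn(t)*ghat(t) dt, with the convention
# ghat(t) = (2*pi)^(-1/2) * int g(k) e^{-ikt} dk.  Here ghat is purely
# imaginary, ghat(t) = i*ghat_im(t):
def ghat_im(t):
    val, _ = quad(lambda k: k * np.exp(-k**2) * np.sin(k * t), -6, 6)
    return -val / np.sqrt(2 * np.pi)

rhs, _ = quad(lambda t: -np.sqrt(np.pi / 2) * np.sign(t) * ghat_im(t), -15, 15)
```

Both `lhs` and `rhs` come out as $\sqrt{\pi}\approx 1.7725$, as the distributional identity predicts.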
We now put \begin{equation} \label{z1} A(t) = A_+(t) - 2 \int_{-\infty}^0 \frac{\sin 2t\sqrt{\lambda}}{\sqrt{\lambda}} \, d\sigma (\lambda) = A_+(t) - 2 \sum_n \rho_N(\{ -\kappa_n^2\} ) \frac{\sinh 2\kappa_n t} {\kappa_n} . \end{equation} Recall that the $-\kappa_n^2$'s are just the eigenvalues of $-d^2/dx^2+V_N(x)$. In particular, the sum is finite. This completes our definition of $A(t)\in L_1(0,N)$, following the recipe \eqref{3.1}. We have in fact obtained an odd, locally integrable function on $\mathbb R$. This function depends on $N$, but, as we will see shortly, its restriction to $(0,N)$ does not. The primitive of $A$ also is an important function in inverse spectral theory. So let \[ \phi(x) = \int_0^{|x|/2} A(t)\, dt = \int_0^{x/2} A(t)\, dt. \] \begin{Lemma} \label{L3.2} If $g\in C_0^{\infty}(\mathbb R)$, $\int g(x)\, dx=0$, then \[ \int_{-\infty}^{\infty} \phi(x)g(x)\, dx = \int_{-\infty}^{\infty} d\sigma(\lambda) \int_{-\infty}^{\infty} dx\, g(x) \frac{\cos \sqrt{\lambda}x}{\lambda} . \] \end{Lemma} Note that since $\int g=0$, the integral \[ \int_{-\infty}^{\infty} g(x)\frac{\cos\sqrt{\lambda}x} {\lambda}\, dx = \int_{-\infty}^{\infty} g(x)\frac{\cos\sqrt{\lambda}x-1} {\lambda}\, dx \] remains bounded as $\lambda\to 0$. \begin{proof} Let $\phi_+(x)=\int_0^{x/2} A_+(t)\, dt$. Since $\int g=0$, we can write $g=f'$ with $f\in C_0^{\infty}$. Hence, by the definition of $A_+$, \begin{align*} \int_{-\infty}^{\infty} \phi_+(x)g(x)\, dx & = - \int_{-\infty}^{\infty} \phi_+'(x)f(x)\, dx = - \int_{-\infty}^{\infty} A_+(x)f(2x)\, dx\\ &= 2 \int_0^{\infty} d\sigma(\lambda) \int_{-\infty}^{\infty} dx\, f(2x) \frac{\sin 2\sqrt{\lambda}x}{\sqrt{\lambda}}\\ &= \int_0^{\infty} d\sigma(\lambda) \int_{-\infty}^{\infty} dx\, f(x) \frac{\sin \sqrt{\lambda}x}{\sqrt{\lambda}}. 
\end{align*} An integration by parts shows that \[ \int_{-\infty}^{\infty} f(x) \frac{\sin \sqrt{\lambda}x}{\sqrt{\lambda}}\, dx = \int_{-\infty}^{\infty} g(x) \frac{\cos \sqrt{\lambda}x}{\lambda}\, dx. \] Finally, $\phi(x)=\phi_+(x)-\sum \rho_N(\{ -\kappa_n^2\} ) \frac{\cosh \kappa_n x -1}{\kappa_n^2}$, and since $-\frac{\cosh \kappa_n x}{\kappa_n^2}= \frac{\cos i\kappa_n x}{(i\kappa_n)^2}$, it is clear that \[ \int_{-\infty}^{\infty} \left( -\sum \rho_N(\{ -\kappa_n^2\} ) \frac{\cosh \kappa_n x}{\kappa_n^2} \right) g(x)\, dx = \int_{-\infty}^0 d\sigma(\lambda) \int_{-\infty}^{\infty}dx\, g(x) \frac{\cos\sqrt{\lambda}x}{\lambda} . \] On the left-hand side, we have dropped the $1$'s in the numerators. This is possible because $\int g=0$. \end{proof} We can now prove Theorem \ref{T2.2}. \begin{proof}[Proof of Theorem \ref{T2.2}] As expected, we claim that the sought function $A$ is the function constructed above. So we have to show that for all $f\in L_2(0,N)$, \[ \|F\|_{S_N}^2 = \langle f, (1+\mathcal{K}_A) f \rangle_{L_2(0,N)}, \] where $F(z)=\int f(x)\frac{\sin\sqrt{z}x}{\sqrt{z}}\, dx$ and $\mathcal{K}_A$ is constructed from $A$ as in \eqref{KA}. Let us first verify this for $f\in C_0^{\infty}(0,N)$. Extend $f$ by setting $f(-x)=-f(x)$ for $x>0$. By the discussion following Theorem \ref{T2.0}, we have that $\|F\|_{S_N}=\|F\|_{L_2(\mathbb R, d\rho_N)}$. 
Now \[ \int \left| F(\lambda) \right|^2\, d\rho_N(\lambda)= \|f\|_{L_2(0,N)}^2 + \int \left| F(\lambda) \right|^2\, d\sigma(\lambda) \] and \begin{align} \int \left| F(\lambda) \right|^2\, d\sigma(\lambda) & = \int d\sigma(\lambda) \int_0^N\!\int_0^N ds\, dt\, \overline{f(s)} f(t) \frac{\sin \sqrt{\lambda}s}{\sqrt{\lambda}}\, \frac{\sin \sqrt{\lambda}t}{\sqrt{\lambda}}\nonumber\\ & =\frac{1}{4} \int d\sigma(\lambda) \int_{-N}^N\!\int_{-N}^N ds\, dt\, \overline{f(s)} f(t) \frac{\sin \sqrt{\lambda}s}{\sqrt{\lambda}}\, \frac{\sin \sqrt{\lambda}t}{\sqrt{\lambda}}\nonumber\\ & = \frac{1}{8} \int d\sigma(\lambda) \int_{-N}^N\!\int_{-N}^N ds\, dt\, \overline{f(s)} f(t) \frac{\cos\sqrt{\lambda}(s-t) - \cos\sqrt{\lambda}(s+t)}{\lambda}\nonumber\\ & = \frac{1}{4} \int d\sigma(\lambda) \int_{-N}^N\!\int_{-N}^N ds\, dt\, \overline{f(s)} f(t) \frac{\cos\sqrt{\lambda}(s-t)}{\lambda}\nonumber\\ \label{z3} & = \frac{1}{4} \int d\sigma(\lambda) \int dr\, g(r) \frac{\cos\sqrt{\lambda}r}{\lambda}, \end{align} where \[ g(r) = \frac{1}{2} \int \overline{f\left(\frac{u+r}{2}\right)} f\left( \frac{u-r}{2}\right)\, du \quad\in C_0^{\infty}(\mathbb R) . \] Note that \[ \int g(r)\, dr = \int \overline{f(s)}\, ds \int f(t)\, dt =0. \] This shows, first of all, that the integral from \eqref{z3} exists; so the manipulations leading to \eqref{z3} were justified. Second, Lemma \ref{L3.2} applies, and we obtain \begin{align*} \int \left| F(\lambda) \right|^2\, d\sigma(\lambda) & = \frac{1}{4} \int_{-\infty}^{\infty} \phi(r) g(r)\, dr\\ & = \frac{1}{4}\int_{-N}^N\!\int_{-N}^N ds\, dt\, \overline{f(s)} f(t) \phi(s-t) . \end{align*} Since $\phi$ is even, the steps of the above computation can be reversed and we get the desired formula \begin{align*} \int \left| F(\lambda) \right|^2\, d\sigma(\lambda) & = \frac{1}{2} \int_0^N\! \int_0^N ds\, dt\, \overline{f(s)} f(t) \left( \phi(s-t)-\phi(s+t)\right)\\ & = \langle f, \mathcal{K}_A f \rangle_{L_2(0,N)} . 
\end{align*} So far, this has been proved for $f\in C_0^{\infty}(0,N)$. If $f\in L_2(0,N)$ is arbitrary, pick $f_n\in C_0^{\infty}(0,N)$ so that $f_n\to f$ in $L_2(0,N)$. Since then $F_n(z)=\int f_n(t)\frac{\sin\sqrt{z}t}{\sqrt{z}} \, dt$ is a norm bounded sequence in $S_N$, we can extract a weakly convergent subsequence (without loss of generality, the original sequence). The existence of reproducing kernels on $S_N$ implies that $F_n$ also converges pointwise to its weak limit. On the other hand, it is clear that \[ F_n(z)=\int f_n(t) \frac{\sin\sqrt{z}t}{\sqrt{z}}\, dt \to \int f(t) \frac{\sin\sqrt{z}t}{\sqrt{z}}\, dt = F(z), \] so the weak limit of $F_n$ is $F$. It follows that \[ \|F\|_{S_N}^2 \le \liminf_{n\to\infty} \|F_n\|_{S_N}^2 = \lim_{n\to\infty} \langle f_n, (1+\mathcal{K}_A)f_n \rangle =\langle f, (1+\mathcal{K}_A)f \rangle . \] This inequality, applied to $F-F_n$ in place of $F$, shows that $F_n\to F$ in the norm of $S_N$. So, by going to the limit a second time, we see that $\|F\|_{S_N}^2 =\langle f, (1+\mathcal{K}_A)f \rangle$, as desired. Uniqueness of $A$ is easy. If $\langle f,\mathcal{K}_A f\rangle = \langle f, \mathcal{K}_{\widetilde{A}} f\rangle$ for all $f\in L_2(0,N)$, then, since the kernels are continuous, they must be identical. It follows that $A=\widetilde{A}$ almost everywhere. \end{proof} We can now establish the fact that $A$ on $(0,N)$ is independent of $N$. More precisely, we consider the following situation. Let $0<N\le N'$, suppose that $V\in L_1(0,N')$, and let $A'$ denote the function constructed above, with $N$ replaced by $N'$ in the construction. Since $S_N$ is isometrically contained in $S_{N'}$, the norm identity of Theorem \ref{T2.2} holds on $(0,N)$ with both $A$ and $A'$, so the uniqueness part of Theorem \ref{T2.2} shows that $A=A'$ almost everywhere on $(0,N)$. The following theorem identifies the function $A$ constructed above as the $A$ function of \cite{Si}; compare the representation \eqref{mA}. \begin{Theorem} \label{T3.2} If $\text{\rm Im }k > \max \kappa_n$, then \[ m_N(k^2) = ik - \int_0^{\infty} A(t)e^{2ikt}\, dt. \] More generally, if $k\in\mathbb C^+$, $k\notin \{ i\kappa_n \}$, then \[ m_N(k^2) = ik - \int_0^{\infty}A_+(t)e^{2ikt}\, dt - \sum \frac{\rho_N(\{-\kappa_n^2\} )}{\kappa_n^2+k^2} .
\] \end{Theorem} \begin{proof} By the Herglotz representations of $m_N$ and $m_0(k^2)=ik$, we have that \begin{equation} \label{3.3} m_N(k^2) = ik + c + \int_{-\infty}^{\infty} \left( \frac{1}{\lambda - k^2} - \frac{\lambda}{\lambda^2+1} \right) \, d\sigma(\lambda), \end{equation} where $c$ is a constant. Meromorphic continuation allows us to use \eqref{3.3} for $k\in\mathbb C^+$, $k\notin \{ i\kappa_n \}$. The integral over $(-\infty,0)$ can be evaluated right away as \begin{equation} \label{z5} \int_{-\infty}^0 \left( \frac{1}{\lambda - k^2} - \frac{\lambda}{\lambda^2+1} \right) \, d\sigma(\lambda) = - \sum \frac{\rho_N(\{-\kappa_n^2\} )}{\kappa_n^2+k^2}+d. \end{equation} It remains to analyze the integral over $(0,\infty)$. Instead of regularizing by subtracting $\lambda/(\lambda^2+1)$, we use a cut-off. (Something of this sort is needed because it can happen that $\int\frac{d|\sigma|(\lambda)}{1+|\lambda|}=\infty$.) Let $\varphi\in C_0^{\infty}(\mathbb R)$ be even, with $0\le \varphi \le 1$ and $\varphi(0)=1$. We claim that for $k\in\mathbb C^+$, \begin{equation} \label{z4} \lim_{\epsilon\to 0+} \int_0^{\infty} \frac{\varphi(2\epsilon \sqrt{\lambda})}{\lambda - k^2}\, d\sigma(\lambda) = -\int_0^{\infty} A_+(t) e^{2ikt}\, dt . \end{equation} The assertions of Theorem \ref{T3.2} will follow from \eqref{z4}. Indeed, by dominated convergence, \[ \lim_{\epsilon\to 0+} \int_0^{\infty}\left( \frac{1}{\lambda - k^2} - \frac{\lambda}{\lambda^2+1} \right) \varphi(2\epsilon\sqrt{\lambda})\, d\sigma(\lambda) = \int_0^{\infty}\left( \frac{1}{\lambda - k^2} - \frac{\lambda}{\lambda^2+1} \right) \, d\sigma(\lambda), \] so plugging \eqref{z5}, \eqref{z4} into \eqref{3.3} gives the second formula from Theorem \ref{T3.2}, possibly with an additional constant on the right-hand side. But the asymptotic relation $(m_N-m_0)(iy)\to 0$ ($y\to\infty$) shows that this constant is zero. 
The first formula from Theorem \ref{T3.2} is an immediate consequence of the second one; we just have to recall \eqref{z1} and note that for $\text{Im }k > \kappa$, \[ 2 \int_0^{\infty} \frac{\sinh 2\kappa t}{\kappa}\, e^{2ikt}\, dt = \frac{1}{-\kappa^2 -k^2}. \] So it remains to prove \eqref{z4}. We use the formula \[ \frac{1}{\lambda-k^2} = \int_0^{\infty} e^{ikx}\, \frac{\sin\sqrt{\lambda}x} {\sqrt{\lambda}}\, dx, \] which is valid for $\lambda>0$ and $k\in\mathbb C^+$. We also use Lemma \ref{L3.1}b) and \eqref{Ahat}, \eqref{PV}. \begin{align*} \int_0^{\infty} \frac{\varphi(2\epsilon\sqrt{\lambda})} {\lambda - k^2}\, d\sigma(\lambda) & = \frac{2}{\pi} \int_0^{\infty} dl\, \text{Im }(m_N-m_0)(l^2) \varphi(2\epsilon l) \int_0^{\infty} dx\, e^{ikx} \sin lx \\ & = \frac{1}{\pi} \int_{-\infty}^{\infty} dl\, \text{sgn}(l) \text{Im }(m_N-m_0)(l^2/4) \varphi(\epsilon l) \int_0^{\infty} dx\, e^{2ikx} \sin lx\\ & = \frac{-i}{\sqrt{2\pi}} \int_{-\infty}^{\infty}dl\, \left( A_+ + 2a\,\text{sgn} \right)\, \widehat{ }\,\,(l) \varphi(\epsilon l) \int_0^{\infty} dx\, e^{2ikx} \sin lx \\ &\:\:\:\: + \frac{2a}{\pi} \int_{-\infty}^{\infty} \frac{dl}{l} \varphi(\epsilon l) \int_0^{\infty}dx\, e^{2ikx}\, \sin lx. \end{align*} It was legitimate to split the integral in the last step, because the two integrals exist separately. In fact, this last integral can be evaluated in the limit $\epsilon\to 0+$. By dominated convergence again, the limit is just the integral without the cut-off function $\varphi$, and \[ \frac{2a}{\pi} \int_{-\infty}^{\infty} \frac{dl}{l} \int_0^{\infty} dx\, e^{2ikx}\, \sin lx = \frac{ia}{k} . 
\] Furthermore, since $( A_+ + 2a\,\text{sgn} )\, \widehat{ }\,\,(l)$ is an odd function, we have that \begin{align*} \frac{-i}{\sqrt{2\pi}} \int_{-\infty}^{\infty}dl\, & \left( A_+ + 2a\, \text{sgn} \right)\, \widehat{ }\,\, (l) \varphi(\epsilon l)\int_0^{\infty} dx\, e^{2ikx} \sin lx \\ & = \frac{-1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} dl\, \left( A_+ + 2a\,\text{sgn} \right)\, \widehat{ }\,\,(l) \varphi(\epsilon l) \int_0^{\infty} dx\, e^{2ikx} e^{ilx}\\ & = - \int_{-\infty}^{\infty} \left( A_+ + 2a\,\text{sgn} \right)\, \widehat{ }\,\,(l) \varphi(\epsilon l) h(l)\, dl, \end{align*} where $h(l)=(2\pi)^{-1/2} \int_0^{\infty}e^{2ikx}e^{ilx}\, dx$. Thus $\widehat{h}(x)=\chi_{(0,\infty)}(x)e^{2ikx}$. Note also that we used the fact that $\left( A_+ + 2a\,\text{sgn} \right)\, \widehat{ }\,\,(l)$ remains bounded as $l\to 0$. Now $\varphi(\epsilon l) h(l)$ is a test function, so the definition of the Fourier transform of a distribution shows that \[ \int_{-\infty}^{\infty} \left( A_+ + 2a\,\text{sgn} \right)\, \widehat{ }\,\,(l) \varphi(\epsilon l) h(l)\, dl = \int_{-\infty}^{\infty} \left( A_+(t) + 2a\, \text{sgn}(t)\right) \left( \varphi(\epsilon\cdot) h \right)\, \widehat{ }\,\, (t)\, dt. \] Writing the Fourier transform of the product as a convolution of Fourier transforms, one can show without much difficulty that $\left( \varphi(\epsilon\cdot) h \right)\, \widehat{ } \to \widehat{h}$ as $\epsilon\to 0+$. Dominated convergence again applies and thus \begin{align*} \lim_{\epsilon\to 0+} \int_{-\infty}^{\infty} \left( A_+(t) + 2a\, \text{sgn}(t)\right) \left( \varphi(\epsilon\cdot) h \right)\, \widehat{ }\,\, (t)\, dt & = \int_0^{\infty} \left( A_+(t)+ 2a\, \text{sgn}(t)\right) e^{2ikt}\, dt\\ & = \int_0^{\infty} A_+(t) e^{2ikt}\, dt + \frac{ia}{k} . \end{align*} Now \eqref{z4} follows by collecting terms. \end{proof} The identification of $A$ as Simon's $A$ function provided by Theorem \ref{T3.2} allows us to deduce Theorem \ref{T1.1} from the results of Sect.\ 2. 
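For the reader's convenience, we record the elementary evaluations of the two explicit integrals that entered the preceding proof. For $\text{Im }k > \kappa$, writing $\sinh 2\kappa t$ as a combination of exponentials and integrating term by term gives \[ 2\int_0^{\infty} \frac{\sinh 2\kappa t}{\kappa}\, e^{2ikt}\, dt = \frac{1}{\kappa} \left( \frac{-1}{2\kappa + 2ik} + \frac{1}{-2\kappa + 2ik} \right) = \frac{1}{-\kappa^2-k^2} , \] and, in the same way, for $\lambda>0$ and $k\in\mathbb C^+$, \[ \int_0^{\infty} e^{ikx}\, \frac{\sin\sqrt{\lambda}x}{\sqrt{\lambda}}\, dx = \frac{1}{2i\sqrt{\lambda}} \left( \frac{-1}{i(k+\sqrt{\lambda})} + \frac{1}{i(k-\sqrt{\lambda})} \right) = \frac{1}{\lambda - k^2} . \]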
\begin{proof}[Proof of Theorem \ref{T1.1}] If a potential $V\in L_1(0,N)$ is given, then, by Theorem \ref{T3.2}, we can construct its unique $A$ function $A_0\in L_1(0,N)$ (in the sense of \cite{Si}) by the method above. This means that $A_0$ is also the $A$ from Theorem \ref{T2.2}, so $1+\mathcal{K}_{A_0}$ is a positive definite operator, hence $A_0\in\mathcal{A}_N$. Conversely, if $A_0\in\mathcal{A}_N$ is given, Theorem \ref{T4.1} shows that $A_0$ comes from a potential $V\in L_1(0,N)$, in the sense of Theorem \ref{T2.2}. But now uniqueness of $A_0$ and Theorem \ref{T3.2} show that $A_0$ is the $A$ function of this $V$ also in the sense of \cite{Si}. \end{proof} \section{The $A$ equation} We now consider the family $A(\cdot,x)$, where $A(\cdot,x)\in L_1(0,N-x)$ is the $A$ function of the problem on $[x,N]$ (or, equivalently, of the shifted potential $V_x(\cdot)=V(x+\cdot)$). Our first task is to give, for non-smooth $A$, an interpretation of the basic equation \eqref{Aeq} suitable for our purposes. The key to this is the following regularity result from \cite{Si}. \begin{Theorem}[Simon] \label{T5.1} Let $V\in L_1(0,N)$. Then $A(t,x)-V(x+t)$ is (jointly) continuous on $\Delta_N=\{ (t,x): 0\le x \le N, 0\le t\le N-x \}$. Moreover, $A(0,x)-V(x)=0$ for all $x\in [0,N]$. \end{Theorem} More precisely, the statement is that given a realization of $V\in L_1(0,N)$, there exist realizations of the functions $A(\cdot,x)\in L_1(0,N-x)$, such that the asserted continuity holds. (If one views $V$ and $A(\cdot,x)$ as functions on the open intervals $(0,N)$ and $(0,N-x)$, respectively, then it is also being asserted that $A(t,x)-V(x+t)$ has a continuous extension to the {\it closed} triangle $\Delta_N$.) Theorem \ref{T5.1} is part of what \cite[Theorem 2.1]{Si} states. 
(There is a typo in estimate (2.4) of \cite[Theorem 2.1]{Si}: $A(\alpha;q)$ and $A(\alpha;\widetilde{q})$ should be replaced by $A(\alpha;q)-q(\alpha)$ and $A(\alpha;\widetilde{q})-\widetilde{q}(\alpha)$, respectively.) If interpreted in the light of \eqref{Aeq}, Theorem \ref{T5.1} says that the singularities of $A$ propagate along the characteristics $x+t=\text{const.}$ of the linear part $A_x=A_t$ without changing their shape. Theorem \ref{T5.1} suggests regularizing by subtracting $V(x+t)$. Actually, Theorem \ref{T5.1}, specialized to $x=0$, also says that $A(t,0)-V(t)\in C[0,N]$, so if we impose the initial condition $A(t,0)=A_0(t)$, we may as well regularize by subtracting $A_0(x+t)$. In fact, this second method is more convenient because it incorporates the initial condition in an explicit way. So, formally, we proceed as follows. Let \begin{equation} \label{defB} B(t,x) = A(t-x,x)-A_0(t)\quad\quad (0\le x\le t\le N). \end{equation} Obviously, $B(t,0)=0$ if $A$ satisfies the initial condition. Moreover, still working on a formal level, we have that $\partial B/\partial x = \partial A/\partial x- \partial A/\partial t$, so if $A$ also solves \eqref{Aeq}, then, upon integrating with respect to $x$, \begin{equation} \label{Beq} B(t,x) = \int_0^x dy \int_0^{t-y} ds\, \left( B(y+s,y)+A_0(y+s)\right)\left( B(t-s,y)+A_0(t-s)\right). \end{equation} If $A$ actually is an $A$ function, then $B$ is continuous by the above remarks. Now the right-hand side of \eqref{Beq} makes sense for general $B\in C(\Delta_N')$, $A_0\in L_1(0,N)$, where $\Delta_N'$ is the triangle \[ \Delta_N'= \{ (t,x): 0\le x\le t\le N \} . \] Indeed, $\int_0^{t-y} A_0(y+s) A_0(t-s)\, ds$ is an integral of convolution type and defines an integrable function of $y$. The other terms from \eqref{Beq} are obviously well defined for $B\in C(\Delta_N')$, $A_0\in L_1(0,N)$. We take \eqref{Beq} as our interpretation of \eqref{Aeq} together with the initial condition $A(t,0)=A_0(t)$. \begin{Definition} \label{D1} Let $A_0\in L_1(0,N)$. 
We say that $B$ solves the problem $(A_0)$ if $B\in C(\Delta_N')$ and \eqref{Beq} holds. \end{Definition} It is quite clear and will be proved shortly (see Theorem \ref{TSi} below) that if $A(t,x)$ is an $A$ function for some potential and $A_0(t)=A(t,0)$, then $B$ from \eqref{defB} solves the problem $(A_0)$ -- in fact, this was the property that motivated Definition \ref{D1} in the first place. We can now give a precise version of the statement discussed in the introduction. \begin{Theorem} \label{T5.2} Suppose $A_0\in L_1(0,N)$. Then the problem $(A_0)$ has a solution if and only if $A_0\in\mathcal{A}_N$. \end{Theorem} This result and its consequences will be discussed in more detail in the next section. We now define a related problem associated with \eqref{Aeq}, and we establish uniqueness results. These tools will be needed in the proof of Theorem \ref{T5.2}. So, consider the boundary value problem \eqref{Aeq}, $A(0,x)=V(x)$, where the function $V\in L_1(0,N)$ is given. We follow the recipe that led to Definition \ref{D1} and introduce \begin{equation} \label{defC} C(t,x)= A(t-x,x)-V(t). \end{equation} Theorem \ref{T5.1} shows that if $A$ is an $A$ function, then $C\in C(\Delta_N')$ and $C(t,t)=0$. So the equation for $C$ should be \begin{equation} \label{Ceq} C(t,x) = -\int_x^t dy\int_0^{t-y} ds \left( C(y+s,y)+ V(y+s)\right) \left( C(t-s,y) + V(t-s) \right) . \end{equation} Again, we turn things around and take this as the definition of \eqref{Aeq} together with $A(0,x)=V(x)$. \begin{Definition} \label{D2} Let $V\in L_1(0,N)$. We say that $C$ solves the problem $(V)$ if $C\in C(\Delta_N')$ and \eqref{Ceq} holds. \end{Definition} Next, we present a lemma that must hold if our definitions are reasonable. 
It just reflects, on the level of Definitions \ref{D1}, \ref{D2}, the trivial fact that if $A$ solves \eqref{Aeq}, $A(t,0)=A_0(t)$, then it also solves \eqref{Aeq}, $A(0,x)=V(x)$, where we simply define $V$ as these boundary values; and, of course, this also works the other way around. \begin{Lemma} \label{L5.3} a) Suppose $A_0\in L_1(0,N)$, and let $B\in C(\Delta_N')$ be a solution of the problem $(A_0)$. Define $V(x)=B(x,x)+A_0(x)$, $C(t,x)=B(t,x)+A_0(t)-V(t)$. Then $V\in L_1(0,N)$, and $C$ solves $(V)$. b) Suppose $V\in L_1(0,N)$, and let $C\in C(\Delta_N')$ be a solution of the problem $(V)$. Define $A_0(t)=C(t,0)+V(t)$, $B(t,x)=C(t,x)+V(t)-A_0(t)$. Then $A_0\in L_1(0,N)$, and $B$ solves $(A_0)$. \end{Lemma} \begin{proof} This is checked by straightforward computations, which we leave to the reader. \end{proof} The next theorem also is something that had better be true if the above definitions make precise what we had in mind originally. It was already announced above. \begin{Theorem} \label{TSi} Let $V\in L_1(0,N)$, and let $A(t,x)$ be the $A$ functions of this potential for variable left endpoint $x$, as in Theorem \ref{T5.1}. Let $A_0(t)=A(t,0)$, and define $B$ and $C$ by \eqref{defB} and \eqref{defC}, respectively. Then $B$ solves $(A_0)$ and $C$ solves $(V)$. \end{Theorem} \begin{proof}[Sketch of proof] Theorem \ref{TSi} is a minor variation on the results of \cite[Sect.\ 6]{Si}, and we can use the same method of proof. For smooth $V$ ($V\in C^1$, say), $A$ is also smooth and satisfies \eqref{Aeq} pointwise \cite{Si}. But for smooth $A$, \eqref{Aeq} with the initial condition $A(t,0)=A_0(t)$ is equivalent to the problem $(A_0)$ from Definition \ref{D1}. Here, $A$ and $B$ are related as in \eqref{defB}. So, $B$ from \eqref{defB} solves $(A_0)$, and, by an analogous argument, $C$ from \eqref{defC} solves $(V)$. For general $V\in L_1(0,N)$, approximate $V$ in $L_1$ norm by smooth potentials $V_n$. 
Then, by the above, the corresponding functions $B_n$, $C_n$ solve the problems $(A_0^{(n)})$ and $(V_n)$, respectively. On the other hand, \cite[Theorem 2.1]{Si} implies that $A_0^{(n)}\to A_0$ in $L_1(0,N)$ and $B_n\to B$, $C_n\to C$ in $C(\Delta_N')$, so the assertions follow by going to the limits in \eqref{Beq}, \eqref{Ceq}. \end{proof} We now establish uniqueness of the solutions to the boundary value problems associated with \eqref{Aeq}. \begin{Theorem} \label{T5.4} a) Suppose $A_0\in L_1(0,N)$, and let $B_1, B_2\in C(\Delta_N')$ be solutions of $(A_0)$. Then $B_1\equiv B_2$. b) Suppose $V\in L_1(0,N)$, and let $C_1, C_2\in C(\Delta_N')$ be solutions of $(V)$. Then $C_1\equiv C_2$. \end{Theorem} \begin{proof} We will only prove part b). The proof of a) is similar (a trifle easier, in fact); also, part a) is just a version of \cite[Theorem 7.1]{Si}. Moreover, part a) is not needed in the proof of Theorem \ref{T5.2}. Let $C_1$, $C_2$ be as in the hypothesis, and define, for $0\le t\le x\le N$, $D_i(t,x)=C_i(x,x-t)$. Then, by a computation, the $D_i$ solve \begin{multline*} D_i(t,x)= \\ -\int_0^t du \int_0^u ds\, \left( D_i(s,x-u+s)+V(x-u+s)\right) \left( D_i(u-s,x-s) + V(x-s) \right) . \end{multline*} Let \[ M(T) = \max \left\{ \left| D_2(t,x)-D_1(t,x)\right| : 0\le t\le T, t\le x\le N \right\} , \] and put $a=\max |D_i(t,x)|$, where the maximum is taken over $0\le t\le x\le N$ and $i=1,2$. Since \begin{multline} \label{z2} D_2(t,x)-D_1(t,x) = \\ \int_0^t du\int_0^u ds \left( D_2(s,x-u+s)+V(x-u+s)\right) \left( D_1(u-s,x-s)-D_2(u-s,x-s) \right)\\ +\int_0^t du\int_0^u ds \left( D_1(s,x-u+s)-D_2(s,x-u+s)\right) \left( D_1(u-s,x-s)+V(x-s) \right), \end{multline} we have the estimate \[ M(T) \le \left( aT + 2\|V\|_{L_1(0,N)} \right) T M(T). \] This immediately implies that $M(t_0)=0$, provided that $t_0>0$ is taken so small that $(aN+2\|V\|)t_0 < 1$. But then the integrals from \eqref{z2} can be taken over a smaller region. 
We can let the $u$ integrals start at $u=t_0$, and similar truncations are possible in the $s$ integrals. Hence, by running through the same argument again, we see that $M(2t_0)=0$. We get the full claim in a finite number of steps. \end{proof} \begin{proof}[Proof of Theorem \ref{T5.2}] If $A_0\in\mathcal{A}_N$, then by Theorem \ref{T1.1}, $A_0$ is the $A$ function of some potential $V\in L_1(0,N)$. Construct the $A(\cdot,x)$ functions of this $V$. Then $A(t,0)=A_0(t)$ for almost every $t\in [0,N]$. Pick a new representative of $A_0\in L_1(0,N)$ by requiring this equality to hold everywhere, and define $B$ by \eqref{defB}. By Theorem \ref{TSi}, $B$ solves $(A_0)$. Conversely, if $A_0\in L_1(0,N)$ and $B\in C(\Delta_N')$ solves $(A_0)$, define $V\in L_1(0,N)$ and $C\in C(\Delta_N')$ as in Lemma \ref{L5.3}a). By this Lemma, $C$ solves $(V)$. On the other hand, if $A(t,x)$ is the family of $A$ functions of this potential $V$ (as in Theorem \ref{T5.1}) and if $\widetilde{C}(t,x)=A(t-x,x)-V(t)$, then, by Theorem \ref{TSi}, $\widetilde{C}$ also solves $(V)$. Hence, by Theorem \ref{T5.4}b), $C\equiv \widetilde{C}$. In particular, \[ A(t,0)=\widetilde{C}(t,0)+V(t)=C(t,0)+V(t)=B(t,0)+A_0(t)=A_0(t). \] (Note that $B(t,0)=0$ by \eqref{Beq} with $x=0$.) So $A_0=A(\cdot,0)$ is an $A$ function and hence must be in $\mathcal{A}_N$ by Theorem \ref{T1.1}. \end{proof} \section{Further remarks} We have proved that the problem $(A_0)$ has a solution if and only if $A_0\in\mathcal{A}_N$, and this holds precisely if $A_0$ is the $A$ function of some potential. In that case, the solution $B$ is unique, $A(t,x)=B(x+t,x)+A_0(x+t)$ is the family of $A$ functions, and, still more importantly, $V(x)=A(0,x)=B(x,x)+A_0(x)$ is the potential whose $A$ function is $A_0$. On the other hand, if $A_0$ does not come from a potential, $(A_0)$ has no global solution on $\Delta_N'$. So we can attack the problem of reconstructing $V$ from $A_0$ as follows. Try to solve $(A_0)$, for example by iterating \eqref{Beq}. 
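For definiteness, this iteration can be set up as the standard Picard scheme (the notation $B^{(n)}$ for the iterates is used only here): put $B^{(0)}=0$ and \[ B^{(n+1)}(t,x) = \int_0^x dy \int_0^{t-y} ds\, \left( B^{(n)}(y+s,y)+A_0(y+s)\right)\left( B^{(n)}(t-s,y)+A_0(t-s)\right) . \]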
This will always work if $A_0$ comes from a potential. If $A_0$ does not come from a potential, the equation will also detect this: there is no solution. We can say more on this latter scenario. Suppose that $A_0\in L_1(0,N)\setminus \mathcal{A}_N$. Define \[ s = \inf \left\{ N'>0 : A_0 \notin \mathcal{A}_{N'} \right\} . \] Since $\mathcal{A}_{N'}$, viewed as a subset of $L_1(0,N)$, is open, the infimum is actually a minimum. Also, $s>0$, because for any given function $A_0\in L_1(0,N)$, the operator norm of $\mathcal{K}_{A_0}$ in $L_2(0,N')$ tends to zero as $N'\to 0+$. By the definition of $s$, the problem $(A_0)$ can be solved on $\Delta_{N'}'$ for every $N'<s$, but not if $N'\ge s$. Solutions do exist on small strips, however; this follows from the general fact that one can always (that is, for arbitrary $A_0\in L_1(0,N)$) solve \eqref{Beq} on a small strip $0\le x \le \delta$. To see this, note that the right-hand side of \eqref{Beq} is locally Lipschitz continuous in $B$ in the norm of $C(\Delta_N')$, with small Lipschitz constant if $0\le x\le \delta$ and $\delta>0$ is small. One can thus prove the claim by a Picard type iteration, using the contraction mapping principle. It also follows that \eqref{Beq} has the continuation property that ODEs possess: if \eqref{Beq} can be solved for $0\le x \le \delta$, then one can extend to a larger strip $0\le x\le\delta'$ ($\delta' >\delta$). \begin{appendix} \section{} In this appendix, we make some remarks on the proof of Theorem \ref{T4.1}. This is not intended to be a self-contained sketch of the proof. Rather, we give some hints that should enable a reader who is familiar with \cite{RemdB} to carry out the proof. In particular, we will use the notation from \cite{RemdB} without further explanation. One starts out as in \cite[Sect.\ 9]{RemdB}. 
Given $A\in\mathcal{A}_N$, one defines the Hilbert spaces \begin{gather*} H_x = \left\{ F(z)=\int f(t)\frac{\sin\sqrt{z}t}{\sqrt{z}}\, dt: f\in L_2(0,x) \right\} ,\\ \|F\|_{H_x}^2 = \langle f, (1+\mathcal{K}_A ) f\rangle_{L_2(0,x)}, \end{gather*} and verifies that the $H_x$ are de~Branges spaces. The inverse theorem of de~Branges \cite[Theorem 7.1]{RemdB} yields a canonical system, and the aim is to check that this system is as in \cite[Proposition 8.1]{RemdB}, but now with $h(0)=0$, $h'(0)=1$. This will prove that $H_N$ comes from a Schr\"odinger equation with Dirichlet boundary conditions. The main modification occurs in Sect.\ 13 of \cite{RemdB}. One again defines a conjugate mapping $F\mapsto \widehat{F}(0)$ on $H_x$ by $\widehat{F}(0)=\int f(t)\psi(t)\, dt$, but this time with \[ \psi(t) = -1-\int_0^t \phi(s)\, ds . \] In this context, recall that $\phi(t)=\int_0^{t/2} A(s)\, ds$. Proposition 13.1 still holds, with an analogous proof. One can then define $y$, $w$ as in Theorem 13.2, and one obtains the new integral equations \begin{align*} y(x,t) + \int_0^x K(t,s) y(x,s)\, ds & = t, \\ w(x,t) + \int_0^x K(t,s) w(x,s)\, ds & = \psi(t), \end{align*} where $K$ is the kernel from \eqref{KA}. Similar changes occur in the formulae of Theorem 15.2; for instance, we now have \[ H_{11}(x) = xy(x,x) + \int_0^x ty_x(x,t)\, dt . \] The crucial Wronskian type identity of Proposition 15.3 continues to hold, but the initial values of $y$ are now given by $y(0,0)=0$, $y'(0,0)=1$. The proof can be completed as in Sect.\ 16 of \cite{RemdB}. \end{appendix} \begin{thebibliography}{99} \bibitem{dB} L.\ de Branges, Hilbert Spaces of Entire Functions, Prentice-Hall, Englewood Cliffs, 1968. \bibitem{DT} P.\ Deift and E.\ Trubowitz, Inverse scattering on the line, Commun.\ Pure Appl.\ Math.\ {\bf 32} (1979), 121--251. 
\bibitem{GeSi} F.\ Gesztesy and B.\ Simon, A new approach to inverse spectral theory, II.\ General real potentials and the connection to the spectral measure, Ann.\ Math.\ {\bf 152} (2000), 593--643. \bibitem{Lev} B.M.\ Levitan, Inverse Sturm-Liouville Problems, VNU Science Press, Utrecht, 1987. \bibitem{Mar} V.A.\ Marchenko, Sturm-Liouville Operators and Applications, Birkh\"auser, Basel, 1986. \bibitem{PT} J.\ P\"oschel and E.\ Trubowitz, Inverse Spectral Theory, Academic Press, Orlando, 1987. \bibitem{RaSi} A.\ Ramm and B.\ Simon, A new approach to inverse spectral theory, III.\ Short-range potentials, J.\ d'Analyse Math.\ {\bf 80} (2000), 319--334. \bibitem{RemdB} C.\ Remling, Schr\"odinger operators and de~Branges spaces, preprint (electronically available on my homepage). \bibitem{Sakh} L.A.\ Sakhnovich, Spectral Theory of Canonical Systems. Method of Operator Identities, Birkh\"auser, Basel, 1999. \bibitem{Si} B.\ Simon, A new approach to inverse spectral theory, I.\ Fundamental formalism, Ann.\ Math.\ {\bf 150} (1999), 1029--1057. \end{thebibliography} \end{document}