\documentclass{article}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\tolerance=2000
\newtheorem{theorem}{Theorem}[section]
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{corollary}[theorem]{Corollary}
\begin{document}
\title{Asymptotics of the Fourier transform of the spectral measure for Schr\"odinger operators with bounded and unbounded sparse potentials}
\author{Denis Krutikov}
\maketitle
\noindent Universit\"at Essen, Fachbereich Mathematik/Informatik, 45117 Essen, \noindent GERMANY\\ E-mail: denis.krutikov@uni-essen.de\\[0.3cm] 2000 AMS Subject Classification: primary 34L40, 81Q10, secondary 42A38 \\[0.3cm] Key words: Schr\"odinger operator, Fourier transform, sparse potentials \\[0.3cm]
\begin{abstract}
We study the pointwise behavior of the Fourier transform of the spectral measure for discrete one-dimensional Schr\"odinger operators with unbounded sparse potentials, in particular with potentials of a special type which give rise to spectra of Hausdorff dimensionality between $1/2$ and $1$. Operators with bounded sparse potentials are also considered.
\end{abstract}
\section{Introduction}
Let $H$ be a Schr\"odinger operator acting on a Hilbert space $\mathcal H$. We can think of $H$ as the energy Hamiltonian of a quantum mechanical system.
Denote by $\psi$ the initial state of such a system and by $\rho_{\psi}$ the spectral measure of $\psi$. The time evolution of the state is given by $\psi(t)=e^{-itH}\psi$. For the Fourier transform $\widehat{\rho}_{\psi}(t)$ of the spectral measure we have the representation $\widehat{\rho}_{\psi}(t) = \langle \psi, e^{-itH}\psi\rangle$. The analysis of the Fourier transform $\widehat{\rho}_{\psi}(t)$ therefore provides information about the probability $\left| \langle \psi, e^{-itH}\psi\rangle \right|^2$ of finding the system again in the state $\psi$ at time $t$. In this paper we study one-dimensional discrete Schr\"odinger operators on the ``half line'' (that is, on $ \ell_2(\mathbb{N})$), which are defined by \begin{equation} (H_{\varphi}y) ( n)=y(n-1)+y(n+1)+V(n)y(n) \nonumber \end{equation} along with a phase boundary condition \begin{equation} y(0) \sin \varphi+ y(1) \cos \varphi=0 \nonumber \end{equation} (where $0 < \varphi < \pi$). We are mainly interested in the asymptotic behavior of $\widehat{\rho}_{\psi}(t)$ at infinity; in particular, we would like to know whether the following relation holds: \begin{equation} \label{Rajchman} \lim_{t\to\pm\infty} \widehat{\rho}_{\psi}(t) =0. \end{equation} (Measures for which (\ref{Rajchman}) holds are called Rajchman measures. For more information about these measures see \cite{KrRe} and \cite{Ly}.) \\ This last question is especially easy to answer if the measure under consideration is absolutely continuous or a pure point measure. The answer is positive in the first case (by the Riemann--Lebesgue lemma) and negative in the second (by Wiener's theorem). So the only complicated, and therefore also interesting, case concerns the singular continuous part of a measure. In this paper we will study one specific model for which the pointwise behavior of $\widehat{\rho}(t)$ can be analyzed rather completely, although the arising spectrum is singular continuous.
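As a concrete illustration of this quantity, the survival amplitude of a finite truncation of such an operator can be computed directly from an eigendecomposition. The following Python sketch does this; the matrix size, the bump positions and heights, and the time value are arbitrary choices made only for illustration and are not the sequences studied below.

```python
import numpy as np

# Illustrative sketch: the survival amplitude <psi, exp(-itH) psi> for a finite
# truncation of a discrete half-line Schrodinger operator.  The size N, the
# bump positions/heights and the time t are arbitrary choices.
N = 400
V = np.zeros(N)
V[[10, 60, 250]] = [2.0, 3.0, 4.0]                 # a few sparse bumps
H = np.diag(V) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)

E, U = np.linalg.eigh(H)                           # H = U diag(E) U^T
psi = np.zeros(N)
psi[0] = 1.0                                       # the state (1, 0, 0, ...)
w2 = (U.T @ psi) ** 2                              # spectral weights of psi

def survival_amplitude(t):
    # <psi, exp(-itH) psi> = sum_k w2_k exp(-i t E_k): the Fourier transform
    # of the (here discrete) spectral measure of psi
    return np.sum(w2 * np.exp(-1j * t * E))

assert abs(survival_amplitude(0.0) - 1.0) < 1e-10  # spectral measure has mass 1
assert abs(survival_amplitude(5.0)) <= 1.0         # |rho-hat(t)| <= rho(R) = 1
```

For a truncation the spectral measure is purely atomic, so this amplitude is almost periodic; the singular continuous phenomena discussed in this paper appear only in the limit of the full half-line operator.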
Namely, we will consider discrete one-dimensional Schr\"odinger operators with sparse potentials, that is, potentials of the form \begin{equation} \label{sparse} V(l)= \left \{ \begin{array}{l} v_n, \; l=x_n \\ 0, \hspace{2mm} {\rm else} \end{array} \right \}, \end{equation} where $1 \le x_1 < x_2 < \dots$ is a (rapidly growing) sequence of integers and $(v_n)$ is a sequence of positive numbers. The spectral properties of the operators with the special sparse potentials we are interested in are described by the following theorem: \begin{theorem} \label{t1.1} Let $\delta \in (\frac{1}{2},1)$ and let the potential $V$ have the form (\ref{sparse}), where the sequences $(x_n)$ and $(v_n)$ obey \begin{equation} \label{xn} v_n= x_n^{\frac{1-\delta}{2\delta}} \hspace{3mm} \mbox{and} \hspace{3mm} \lim\limits_{n \rightarrow \infty} \frac{\prod\limits_{k=1}^{n-1} x_k } {x_n^{\varepsilon}} =0 \hspace{3mm} \mbox{for all } \hspace{3mm} \varepsilon >0. \end{equation} \noindent Then \noindent (i) For every boundary phase $\varphi$, the spectrum of $H_{\varphi}$ consists of the closed interval $[-2,2]$ (which is the essential spectrum $\sigma_{\mbox{ess}} (H_{\varphi})$) along with some discrete point spectrum outside this interval. \noindent (ii) For every $\varphi$, the Hausdorff dimensionality of the spectrum of $H_{\varphi}$ in $(-2,2)$ is bounded between the dimensions $\delta$ and $ \frac{2 \delta}{1+\delta}$. \noindent (iii) For Lebesgue a.e. $\varphi$, the spectrum in $[-2,2]$ is of exact dimension $\delta$. \end{theorem} In this paper we will mainly consider one-dimensional discrete Schr\"odinger operators on $ \ell_2(\mathbb{N})$ under the same assumptions as in this theorem. We are only interested in the part of the spectrum in $(-2,2)$, because of the relation $\sigma_{\mbox{ess}} (H_{\varphi})= [-2,2]$. So our aim is to investigate the asymptotics of the Fourier transform of the spectral measure for Schr\"odinger operators in a situation where it is not only known that their (essential) spectra are purely singular continuous, but where the exact Hausdorff dimensionality of these spectra is also known. (For other results on the relationship between the asymptotics of $\widehat{\rho}(t)$ and the continuity properties of $\rho$ with respect to Hausdorff measures see \cite{L}.) The actual value of $\varphi$ in the definition of the operator $H_{\varphi}$ will not be significant, and all results of this paper are valid for every value of $\varphi$. Therefore we will usually omit the index $\varphi$ and write $H$ instead of $H_{\varphi}$.
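Rapid-growth conditions of the kind imposed on sparse sequences are conveniently checked on the level of logarithms. The following Python sketch does this for the hypothetical choice $x_n = 2^{n!}$ with $\delta = 0.85$ (both our own illustrative choices, not prescribed by the model): it verifies that the bumps $v_n = x_n^{\frac{1-\delta}{2\delta}}$ increase monotonically and that $\prod_{k<n} x_k \, / \, x_n^{\varepsilon}$ eventually becomes small even for a small $\varepsilon$.

```python
import math

# Numerical sketch with the hypothetical choice x_n = 2^(n!); we work with
# log2(x_n), since the x_n themselves are astronomically large.
delta = 0.85
log2_x = [math.factorial(n) for n in range(1, 151)]        # log2 of x_1, x_2, ...

# log2 of v_n = x_n^((1 - delta) / (2 * delta))
log2_v = [lx * (1 - delta) / (2 * delta) for lx in log2_x]
assert all(b > a for a, b in zip(log2_v, log2_v[1:]))      # v_n increases to infinity

# Rapid-growth condition:  prod_{k<n} x_k / x_n^eps -> 0  for every eps > 0,
# i.e.  sum_{k<n} log2(x_k) - eps * log2(x_n) -> -infinity.
eps = 0.01
gap = [sum(log2_x[:n]) - eps * log2_x[n] for n in range(1, len(log2_x))]
assert gap[-1] < 0                                         # eventually negative
```

Note that a merely geometric growth such as $x_n = 2^{c^n}$ would fail this test for small $\varepsilon$; a super-geometric exponent like $n!$ is needed.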
Let $\rho$ be the spectral measure associated with the vector $\delta_1\in \ell_2$ ($\delta_1(1)=1$ and $\delta_1(n)=0$ if $n\not= 1$), that is, $\rho(M)=\|E(M)\delta_1\|^2$, where $E(\cdot)$ is the spectral resolution of $H$. Since $\delta_1$ is a cyclic vector for $H$, any other spectral measure $\rho_{\psi}$ is absolutely continuous with respect to $\rho$. We can therefore restrict our further considerations to this particular measure. We will prove below the following theorem: \begin{theorem} \label{t1.2} Let a potential $V$ have the form (\ref{sparse}) and let the sequences $(x_n)$ and $(v_n)$ obey (\ref{xn}). Then:\\ (i) For every \, $\delta \in (\frac{4}{5},1)$, $f\in C_0^{\infty}(-2,2)$ and every $\sigma>0$, there exists a constant $C$ such that \[ \left| (f\, d\rho)\, \widehat{ }\, (t) \right| \le C |t|^{- \frac{5}{4} + \frac{1}{\delta}+\sigma} \] for all $t$ with $|t|>1$.\\ (ii) For every \, $\delta \in (\frac{2}{3},1)$, $\sigma>0$ and every $f\in C_0^{\infty}(-2,2)$ with $0\notin\text{supp }f$, there exists a constant \, $C$ such that \[ \left| (f\, d\rho)\, \widehat{ }\, (t) \right| \le C |t|^{-\frac{3}{2}+\frac{1}{\delta}+\sigma } \] for all $t$ with $|t|>1$. \\ (iii) Fix arbitrary $\varepsilon>0$ and $\delta \in (\frac{1}{2},1)$ and define the resonant set ${\cal R}$ by \[ {\cal R}= \bigcup_{n\in\mathbb N} [\frac{1}{2}x_n, \, x_n^{\frac{\delta}{2\delta-1}+\varepsilon}]. \] Then for every $m\in\mathbb N$ and every $f\in C_0^{\infty}(-2,2)$, there exist a constant $C$ and $t_0>0$, such that \[ \left| (f\, d\rho)\, \widehat{ }\, (t) \right| \le C |t|^{-m} \] for all $t$ with $|t|\notin {\cal R}$ and $|t| > t_0$. \end{theorem} {\it Remark.} This result does not actually require the exact relation $v_n= x_n^{\frac{1-\delta}{2\delta}}$.
It will be clear from the proof that the precise condition on the sequence $(v_n)$ that we need is that $(|v_n|)$ is an unbounded monotonically increasing sequence, such that $\prod_{r=1}^{j-1} |v_r| \le |v_j|^N$ holds for all $j$'s with some $N>0$ and $v_j \le x_j^{\frac{1-\delta}{2\delta}}$ holds for all $j$'s. So our results cover a somewhat more general situation than might appear at first sight. \vspace{1mm} \noindent From Theorem \ref{t1.2} we see that $\widehat{\rho}$ has a resonance structure, although these resonances are not too sharp because of the strong spreading of the wave packets. (See \cite{KrRe} for some remarks on the agreement of the resonance structure of $\widehat{\rho}$ \,with the quasiclassical picture of a quantum motion under the influence of a sparse potential.) This paper is closely connected with the paper \cite{KrRe}, where discrete one-dimensional Schr\"odinger operators with bounded sparse potentials are considered. As one illustration of this connection we notice that in the limit $\delta \rightarrow 1$ the inequality from part (ii) of the last theorem turns into the inequality \[ \left| (f\, d\rho)\, \widehat{ }\, (t) \right| \le C |t|^{-\frac{1}{2}+\sigma }, \] which is the inequality proved in \cite{KrRe} in the case of a bounded potential for any $f$ with $0\notin\text{supp }f$. \\We will also briefly discuss the case of a bounded potential at the end of this paper, and we obtain in the last section, as a byproduct of the whole consideration, the following theorem: \begin{theorem} \label{t1.3} Let a potential $V$ have the form (\ref{sparse}) with a bounded sequence $(v_n)$ and let the sequence $(x_n)$ obey \begin{equation} \label{xn2} \lim\limits_{n \rightarrow \infty} \frac{\prod\limits_{k=1}^{n-1} x_k } {x_n^{\varepsilon}} =0 \hspace{3mm} \mbox{for all } \hspace{3mm} \varepsilon >0.
\end{equation} Then for every $f\in C_0^{\infty}(-2,2)$ and every $\sigma>0$, there exists a constant $C$, such that \[ \left| (f\, d\rho)\, \widehat{ }\, (t) \right| \le C |t|^{- \frac{1}{4} +\sigma} \] for all $t$ with $|t| >1$. \end{theorem} This theorem can be considered as a completion of the results from \cite{KrRe}, since it provides a general estimate for $(f\, d\rho)\, \widehat{ } \, (t)$ for arbitrary $f$, whereas in \cite{KrRe} only the corresponding formula for $f$'s with $0\notin\text{supp }f$ is proved (we have, however, to note here that our assumptions on the growth of the $x_n$'s are much stronger than those from \cite{KrRe}). Our approach to proving Theorems \ref{t1.2} and \ref{t1.3} is based on a representation of the Fourier transform of the spectral measure as a series of integrals (Theorem \ref{t2.2}). We describe our strategy for estimating the integrals from this series in Section 5. From now on and throughout the rest of this paper (with the exception of Section 9), we assume that the potential is given by (\ref{sparse}) and (\ref{xn}) (if nothing else is stated). \section{Preliminaries} We collect in this section some results that we will use in the sequel. First we use an EFGP transformation (also called a Pr\"ufer transformation) to rewrite the discrete Schr\"odinger equation \begin{equation} \label{scheq} y(n-1)+y(n+1)+V(n)y(n)=Ey(n)\quad\quad\quad (n\in\mathbb N). \end{equation} So, suppose that $E\in (-2,2)$ and let $u$ be some solution of (\ref{scheq}). Write $E=2\cos x$ with $x \in (0,\pi)$ and define $R(n)>0$, $\theta(n)$ by \[ \begin{pmatrix} u(n)-u(n-1) \cos x \\ u(n-1) \sin x \end{pmatrix} = R(n) \begin{pmatrix} \cos(\theta(n)) \\ \sin(\theta(n)) \end{pmatrix} .
\] Then $R$ and $\theta$ obey the equations (see \cite{KLS}) \begin{equation} \label{EFGP1} \frac{R(n+1)^2}{R(n)^2}= \left(1-\frac{V(n)}{\sin x} \sin(2\theta(n)+2x)+\frac{V(n)^2} {\sin^2 x} \sin^2(\theta(n)+x) \right), \end{equation} \begin{equation} \label{EFGP2} \cot (\theta(n+1))=\cot(\theta(n)+x)-\frac{V(n)}{\sin x}. \end{equation} It is evident that for all $l$ from $\{x_n+1,..., x_{n+1} \}$, $n \in \mathbb{N}$, we have $R(l)=R(x_n+1)$ and $\theta(l)=\theta(x_n+1)+x(l-x_n-1)$. We further denote $\theta(n)+x$ by $\bar \theta(n)$, $\theta(x_{r})$ by $ \theta_r$ and $\bar \theta(x_{r})$ by $\bar \theta_r$. As a second tool, we need a representation (from \cite{KrRe}) of the spectral measure as a weak star limit of absolutely continuous measures. (This result is related to the similar result for the continuous case from \cite{Pea2}.) Let $R(n,x)=R(n)$ correspond to the solution $u_{\varphi}$ of (\ref{scheq}) with the initial values \begin{equation} u_{\varphi}(0,z)= \cos \varphi, \hspace{3mm} u_{\varphi}(1,z)= \sin \varphi. \nonumber \end{equation} \begin{proposition} (due to \cite{KrRe}) \label{P2.1} Let $f$ be a continuous function with support contained in $(-2,2)$. Then \[ \int_{-2}^{2} f(E) \, d\rho(E) = \frac{2}{\pi} \lim_{n\to\infty} \int_0^{\pi} f(2\cos x) \frac{\sin^2 x}{R^2(n,x)}\, dx . \] \end{proposition} We can use Proposition \ref{P2.1} to derive a series representation for the Fourier transform of $\rho$. Since we are only interested in the part of the spectrum in $(-2,2)$, we will study \[ (f\, d\rho)\,\widehat{ }\,(t) = \int_{-\infty}^{\infty} f(E) e^{-itE}\, d\rho(E), \] with $f\in C_0^{\infty}(-2,2)$. \\ \begin{theorem} \label{t2.2} Let the coefficients $C^{\alpha}_j(x)$ be defined by \begin{equation} \label{coeff} C^{\alpha}_j(x) = \prod\limits^{j}_{r=1} \left( \frac{v_r}{v_r + 2i(sgn(\alpha_r)) \sin x} \right)^{|\alpha_r|}, \end{equation} with $\alpha= (\alpha_1,..., \alpha_j)$ from $ \mathbb{Z}^j$ (here sgn denotes the sign function).
Let $f$ be a continuous function with the support contained in $(-2,2)$. Let $a$ and $b$ be defined by \[a= \inf \{ x \in (0, \pi) : \, 2 \cos x \in \mbox{supp}(f) \}, \hspace{1mm} b= \sup \{ x \in (0, \pi) : \, 2 \cos x \in \mbox{supp}(f) \}. \] Then there exists a function $h(x)$ from $C_0^{\infty}(-2,2)$ with the support contained in [a,b], such that for any $m \ge 1$ holds \begin{multline} \label{series} (f\, d\rho)\, \widehat{ }\, (t) =\\ \sum\limits_{j=m+1}^{\infty} \sum_{ \{ \alpha \in \mathbb{Z}^j | \alpha_j \ne 0 \} } \int^b_a h(x) C_{j}^{\alpha}(x) e^{i2(-t \cos(x)+\sum^j_{r=1}(\alpha_r \bar \theta_r))} dx \\ + \sum_{ \alpha \in \mathbb{Z}^{m} } \int^b_a h(x) C_{m}^{\alpha}(x) e^{i2(-t \cos(x)+\sum^{m}_{r=1}(\alpha_r \bar \theta_r))} dx. \end{multline} \end{theorem} {\it Proof.} From (\ref{sparse}) (which ensures that $V(l) =0$, if $l \neq x_n$) together with (\ref{EFGP1}) we have the representation \begin{equation} \label{t2.2.-1} R^{-2}(x_{j},k) =R^{-2}(0,k) \prod_{r=1}^{j-1} \left( 1-\frac {v_r \sin (2 \bar \theta_r)}{ \sin (x)} + \frac {v_r^2 \sin^2(\bar \theta_r)} {\sin^2(x)} \right)^{-1}. \end{equation} For any real $u$ the function $\left( 1-u \sin x+ u^2 \sin^2\frac{x}{2} \right)^{-1}$ expands in a Fourier series \begin{equation} \label{t2.2.-2} \left( 1-u \sin x+ u^2 \sin^2\frac{x}{2} \right)^{-1}=\sum_{n=-\infty}^{\infty} \left(\frac{u}{u+2i (sgn(n))}\right)^{|n|} e^{inx}. \end{equation} (This relation can be easily proved by summing the series on the right side.) We use the formulas (\ref{t2.2.-1}) and (\ref{t2.2.-2}) and Proposition \ref{P2.1} (where we take the limit with respect to the subsequence $x_j$) to obtain \[(f\, d\rho)\, \widehat{ }\, (t) = \lim\limits_{j \rightarrow +\infty} \sum_{ \alpha \in \mathbb{Z}^{j} } \int^b_a h(x) C_{j}^{\alpha}(x) e^{i2(-t \cos(x)+\sum^{j}_{r=1}(\alpha_r \bar \theta_r))} dx, \] where the function $h$ is defined by $h(x)= \frac{2}{\pi} R^{-2}(0,x) f(2 \cos x) \sin^2 x$. 
Then (\ref{series}) follows from \begin{multline} \sum_{ \alpha \in \mathbb{Z}^j } \int^b_a h(x) C_{j}^{\alpha}(x) e^{i2(-t \cos(x)+\sum^j_{r=1}(\alpha_r \bar \theta_r))} dx =\\ \sum_{ \{ \alpha \in \mathbb{Z}^j | \alpha_j \ne 0 \} } \int^b_a h(x) C_{j}^{\alpha}(x) e^{i2(-t \cos(x)+\sum^j_{r=1}(\alpha_r \bar \theta_r))} dx \\ + \sum_{ \alpha \in \mathbb{Z}^{j-1} } \int^b_a h(x) C_{j-1}^{\alpha}(x) e^{i2(-t \cos(x)+\sum^{j-1}_{r=1}(\alpha_r \bar \theta_r))} dx, \nonumber \end{multline} which is an easy corollary of $C_j^{ ( \alpha_1, ... , \alpha_{j-1}, 0) }= C_{j-1}^{ ( \alpha_1, ... , \alpha_{j-1}) }. $ \hspace{1cm} $\square$ \vspace{1mm} We will use the formula (\ref{series}) in the case $m=m(t)$, where $m(t)$ is defined by the condition \begin{equation} \label{mt} \frac{x_{m(t)+1}}{2} \le |t| < \frac{x_{m(t)+ 2}}{2}. \end{equation} \par Before we go on, we make some general remarks on the notation: (i) The interval $[a,b]$ from Theorem \ref{t2.2} is from now on fixed. We denote $a_0= \inf_{x \in [a,b]} \sin x$ and always use $a_0$ in this meaning. (ii) The term ``constant'' will always refer to a positive number which is independent of $t$, $j$, $x$ and of the $\alpha_j$'s. It may depend, however, on the other parameters of the problem, which are the sequences $(v_n)$ and $(x_n)$ and the function $f \in C_0^{\infty}(-2,2)$. It may also depend on additional parameters we introduce, like the $\sigma$ from Theorem \ref{t1.2}. Some constants will have the same meaning throughout the whole paper (for example the $C_k$'s from Lemma \ref{l4.1} below) or throughout a certain part of the paper (for example the $G_l$'s from Lemma \ref{l5.1} below). In the latter case we will always announce the change of this meaning. All other constants, whose actual value may change from one formula to the next, are usually denoted by $C$. Also, we sometimes write $a\lesssim b$ instead of $a\le Cb$. (iii) We use the notation $\mathbb{Z}_+$ for the set $\{0,1,2,\dots \}$.
(iv) By $u^{(b)}$ we always denote the derivative of $u$ (with respect to $x$) of order $b$, that is, $u^{(b)} =\frac{d^b}{dx^b} u$. \section{Estimates on the derivatives of $C^{\alpha}_j(x)$} The aim of this section is to prove the following lemma: \begin{lemma} \label{l3.1} Let the sequence $(v_n)$ of positive numbers converge monotonically to $ +\infty$. Denote $\frac{\sqrt{v_r^2+4 a_0^2}}{v_r}$ by $p_{0,r}$. Then for each integer $k \ge 0$ there exists a constant $\tilde P_k$, such that for any $j$ \begin{equation} \label{est-of-coeff} \sum\limits_{\alpha \in \mathbb{Z}^j} \sup\limits_{ \begin{array}{c} \, \kappa =0,...,k\\ x \in (a,b) \end{array} } \left|(C_j^{\alpha}(x))^{(\kappa)} \right| \le \frac{\tilde P_k}{( \ln (p_{0,j}) )^k} \prod\limits_{r=1}^j \left( 1+\frac{2}{\sqrt{ p_{0,r}} -1}\right). \end{equation} \end{lemma} (We prove this lemma at the end of the section.) \vspace{1mm} \noindent We have for the coefficients $C^{\alpha}_j(x)$ the representation \begin{equation} \label{repr-of-coeff} C^{\alpha}_j= \prod\limits^{j}_{r=1} \left( \frac{v_r}{\sqrt {v_r^2+ 4 a^2_0}} \right)^{|\alpha_r|} \prod\limits_{r=1}^j \left( \frac{1}{ \frac{v_r}{\sqrt{v_r^2+ 4 a^2_0}} +\frac{2i (\mbox{sgn} (\alpha_r)) \sin x}{\sqrt{v_r^2+ 4 a^2_0}} } \right)^{|\alpha_r|}. \end{equation} We therefore only need to investigate the derivatives of \[ \prod\limits_{r=1}^j \left( \frac{1}{ \frac{v_r}{\sqrt{v_r^2+ 4 a^2_0}} +\frac{2i (\mbox{sgn} (\alpha_r)) \sin x}{\sqrt{v_r^2+ 4 a^2_0}} } \right)^{|\alpha_r|} . \] For the moment, we study a slightly more general situation. \begin{lemma} \label{l3.2} Let $w$ and $s$ be real numbers, such that $w^2 +4 s^2 a_0^2 =1$ holds. Let $m$ be a nonnegative integer. Then for any nonnegative integer $p$ there exists a constant $C_{2,p}$, independent of $m$, $w$ and $s$, such that for $k=0,...,p$ \[ \sup\limits_{x \in (a,b)} \left| \left( \left( \frac{1}{w + 2i s \sin x} \right)^m \right)^{(k)} \right| \le C_{2,p}^{k} m^{k}.
\] \end{lemma} \noindent {\it Proof.} It is easy to see that for each $k$ there exists a constant $C_{1,k}$, independent of $w$ and $s$, such that \begin{equation} \label{l3.2-1} \sup\limits_{x \in (a,b)} \left| \left( \frac{1}{w + 2i s \sin x} \right)^{(k)} \right| \le C_{1,k} \end{equation} holds, where for $k=0$ we can set $C_{1,0}=1$. Further, we use the well-known identity \begin{equation} \label{l3.2-2} \left( \prod\limits_{r=1}^n u_r \right)^{(k)} = \sum\limits_{l_1=0}^k ......\sum\limits_{l_{n-1}=0}^{l_{n-2}} {k \choose l_1}...{l_{n-2} \choose l_{n-1}} u_1^{(l_{n-1})} u_2^{(l_{n-2}-l_{n-1})}...u_n^{(k-l_1)} \end{equation} and the following two formulas, which are consequences of this identity: \begin{equation} \label{l3.2-3} \sum\limits_{l_1=0}^k \sum\limits_{l_2=0}^{l_1} ...\sum\limits_{l_{n-1}=0}^{l_{n-2}} {k \choose l_1}{l_1 \choose l_2} ......{l_{n-2} \choose l_{n-1}} = n^k \end{equation} (obtained with $u_r(x)=e^{x}$) and \begin{equation} \label{l3.2-4} \sum\limits_{l_1=0}^k ......\sum\limits_{l_{n-1}=0}^{l_{n-2}} {k \choose l_1} ......{l_{n-2} \choose l_{n-1}} m_1^{l_{n-1}}...m_n^{k-l_1} = \left( \sum\limits_{r=1}^n m_r \right)^k \end{equation} (obtained with $u_r(x)=e^{m_r x}$).
Then with $u_l(x)=\frac{1}{w+2i s \sin x}$ for $l=1,...,m$ and \[ C_{2,p}=\max\limits_{l=1,...,p} C_{1,l}\] the following estimates follow from (\ref{l3.2-1}), (\ref{l3.2-2}) and (\ref{l3.2-3}): \begin{multline*} \hspace{35mm} \left| \left( \left( \frac{1}{w + 2i s \sin x} \right)^{m} \right)^{(k)} \right| \\ \le \sum\limits_{l_1=0}^k ......\sum\limits_{l_{m-1}=0}^{l_{m-2}} {k \choose l_1} ......{l_{m-2} \choose l_{m-1}} \left|u_1^{(l_{m-1})} \right| \left| u_2^{(l_{m-2}-l_{m-1})} \right| ......\left|u_m^{(k-l_1)} \right| \\ \le \sum\limits_{l_1=0}^k ......\sum\limits_{l_{m-1}=0}^{l_{m-2}} {k \choose l_1} ......{l_{m-2} \choose l_{m-1}} C_{1,l_{m-1}}C_{1,l_{m-2}-l_{m-1}} ......C_{1,k-l_1}\\ \le C_{2,p}^k \sum\limits_{l_1=0}^k \sum\limits_{l_2=0}^{l_1} ...\sum\limits_{l_{m-1}=0}^{l_{m-2}} {k \choose l_1}{l_1 \choose l_2} ......{l_{m-2} \choose l_{m-1}}=C_{2,p}^{k} m^k, \end{multline*} where we have used the relations $k=k- l_1+l_{m-1} +\sum_{j=1}^{m-2} (l_{m-j-1}-l_{m-j}) $ and $C_{1,0}=1$. \hspace{1cm} $\square$ \begin{lemma} \label{l3.3} Let real numbers $w_l$ and $s_l$, $l=1,...,j$, obey $w_l^2 +4 s_l^2 a_0^2 =1$ for all $l$'s. Let $m_l$, $l=1,..., j$, be any nonnegative integers. Then for each nonnegative integer $p$ there exists a constant $C_{2,p}$, independent of the $w_l$, $s_l$ and $m_l$, such that for all $k=0,...,p$ \[ \sup\limits_{x \in (a,b)} \left| \left( \prod\limits_{l=1}^j \left( \frac{1}{w_l + 2i s_l \sin x} \right)^{m_l} \right)^{(k)} \right| \le C_{2,p}^k \left( \sum\limits_{l=1}^j m_l \right)^k.
\] \end{lemma} \noindent {\it Proof.} We use the formula (\ref{l3.2-2}) with $u_l= \left( \frac{1}{w_l+2i s_l \sin x} \right)^{m_l},$ $l=1,...,j$, to obtain, using the result of Lemma \ref{l3.2} and (\ref{l3.2-4}), the following estimates: \begin{multline*} \sup\limits_{x \in (a,b)} \left| \left( \prod\limits_{l=1}^j \left( \frac{1}{w_l + 2i s_l \sin x} \right)^{m_l} \right)^{(k)} \right| \\ \le \sum\limits_{l_1=0}^k ...\sum\limits_{l_{j-1}=0}^{l_{j-2}} {k \choose l_1} ......{l_{j-2} \choose l_{j-1}} \sup\limits_{x \in (a,b)} \left| u_1^{(l_{j-1})} \right| .... \sup\limits_{x \in (a,b)} \left| u_j^{(k-l_1)} \right| \\ \le \sum\limits_{l_1=0}^k ...\sum\limits_{l_{j-1}=0}^{l_{j-2}} {k \choose l_1} ......{l_{j-2} \choose l_{j-1}} C_{2,l_{j-1}}^{l_{j-1}}(m_1)^{l_{j-1}} ...... C_{2,k-l_1}^{k-l_1}(m_j)^{k-l_1} \\ \le C_{2,p}^k \sum\limits_{l_1=0}^k ...\sum\limits_{l_{j-1}=0}^{l_{j-2}} {k \choose l_1} ......{l_{j-2} \choose l_{j-1}} (m_1)^{l_{j-1}} ...... (m_j)^{k-l_1} = C_{2,p}^k \left( \sum\limits^j_{l=1} m_l \right)^k. \hspace{2mm} \square \end{multline*} \begin{corollary} For all $k=0,...,p$ the following inequality holds: \[ \sup\limits_{x \in (a,b)} \left| \left( \prod\limits_{r=1}^j \left( \frac{1}{ \frac{v_r}{\sqrt {v_r^2+ 4 a^2_0}} +\frac{2i (\mbox{sgn} (\alpha_r)) \sin x}{\sqrt {v_r^2+ 4 a^2_0}} } \right)^{|\alpha_r|} \right)^{(k)} \right| \le C_{2,p}^k \left( \sum\limits^j_{r=1}|\alpha_r| \right)^k. \hspace{5mm} \square \] \end{corollary} From this last inequality and the formula (\ref{repr-of-coeff}) we obtain the inequality \begin{equation} \label{est-of-coeff-2} \sup\limits_{ \begin{array}{c} \, \kappa =0,...,k\\ x \in (a,b) \end{array} } \left| (C_j^{\alpha}(x))^{(\kappa)} \right| \le \frac{C_{2,k}^k \left( \sum\limits^{j}_{r=1}|\alpha_r| \right)^k }{ \prod\limits^{j}_{r=1} p_{0,r}^{|\alpha_r|} } . \end{equation} We are now able to prove Lemma \ref{l3.1}. \vspace{1pt} \noindent {\it Proof of Lemma \ref{l3.1}}. Denote $ (\frac{2 k}{e})^k $ by $\tilde C_k$.
The function $\frac{x^k}{a^x}$ with $a>1$ obeys $\sup_{x \ge 0} \frac{x^k}{a^x} \le \frac{ k^k}{e^k (\ln a)^k }$. Therefore, for all positive integers $n_1$ and all $p>1$, \vspace{1mm} \noindent $n_1^k \le \tilde C_k \ln(p)^{-k} \left( \sqrt{p}\right)^{n_1}$. We then have for all positive integers $j$ the following estimates: \begin{multline*} \sum\limits_{\alpha \in \mathbb{Z}^j} \left( \left( \sum\limits_{r=1}^j |\alpha_r|\right)^k \prod\limits_{r=1}^j \left( p_{0,r}^{-|\alpha_r| } \right) \right) \\ \le \sum\limits_{\alpha \in \mathbb{Z}^j} \left( \left( \sum\limits_{r=1}^j |\alpha_r|\right)^k \left( \sqrt{p_{0,j}} \right)^{- \sum\limits_{r=1}^j |\alpha_r| } \prod\limits_{r=1}^j \left(\frac{1}{\sqrt{ p_{0,r}}} \right)^{|\alpha_r| } \right) \\ \le \frac{\tilde C_k}{(\ln (p_{0,j}))^k} \sum\limits_{\alpha \in \mathbb{Z}^j} \left( \prod\limits_{r=1}^j \left(\frac{1}{\sqrt{ p_{0,r}}} \right)^{ |\alpha_r| } \right) = \frac{\tilde C_k}{(\ln (p_{0,j}))^k} \prod\limits_{r=1}^j \left( 1+\frac{2}{\sqrt{p_{0,r}}-1} \right). \end{multline*} Then (\ref{est-of-coeff}) follows from (\ref{est-of-coeff-2}) and from the last inequality with $\tilde P_k=C_{2,k}^k \tilde C_k$. \vspace{1pt} \noindent {\it Remark.} In the case $k=0$ the inequality (\ref{est-of-coeff}) holds with $\tilde P_0=1$. \section{Estimates on the EFGP angles} In the next sections we will need some estimates on the derivatives of the EFGP angles $\bar \theta_n$. The corresponding estimates in the case of a bounded potential are contained in \cite{KrRe} and \cite{KLS} (in the latter paper only the estimates for the first two derivatives are proved). In the case under consideration we prove the following lemma: \begin{lemma} \label{l4.1} Let the $v_n$'s and $x_n$'s obey (\ref{sparse}) and (\ref{xn}).
Then there exist constants \, $C_l$, $l=0,1,2,...$, such that the following inequalities hold: \begin{align*} \sup\limits_{x \in (a,b)} | \bar \theta^{(l)}(x_{n+1}) | \le C_l x^l_n (v_n^2+4)^{2l} , \hspace{2mm} n, \, l=1,2,..., \\ \sup\limits_{x \in (a,b)} \left| \frac{d \bar \theta (x_n) }{d x} -x_n \right| \le C_0 x_{n-1} v_{n-1}^4, \hspace{4mm} n=2,3,... \end{align*} \end{lemma} {\it Proof.} Because of $\bar \theta' (x_n) =\theta' (x_n) +1$ we only need to prove the above estimates with $\bar \theta (x_n)$ replaced by $\theta (x_n)$. We use the simple inequality \begin{equation} \label{l4.1-1} \min\limits_{\varphi \in [0,2\pi]} ( 1-v\sin2\varphi+v^2 \sin^2 \varphi ) \ge (v^2+4)^{-1}, \end{equation} which can be proved by methods of elementary calculus. We differentiate the equation (\ref{EFGP2}) and solve for $\theta'(n+1)$ to obtain \begin{multline} \label{l4.1-2} \theta'(n+1)=\frac{\theta'(n)+1}{1-\frac{V(n)}{\sin x}\sin(2\bar \theta(n))+\frac{V(n)^2}{\sin^2 x}\sin^2(\bar \theta(n))}\\ - \frac{\cos x \sin^2(\bar \theta(n))V(n)}{\sin^2 x (1-\frac{V(n)}{\sin x}\sin(2\bar \theta(n))+\frac{V(n)^2}{\sin^2 x}\sin^2(\bar \theta(n)))}. \end{multline} From this and (\ref{sparse}) follows \begin{equation} \label{l4.1-3} \theta'(x_{n+1}) =\theta'(x_n +1) + x_{n+1}-x_n-1, \hspace{2mm} \theta^{(l)}(x_{n+1}) =\theta^{(l)} (x_n +1), \hspace{2mm} l \ge 2. \end{equation} Using (\ref{l4.1-1}) we then obtain from (\ref{l4.1-2}) the inequality \[ |\theta'(x_n+1)| \le |\theta'(x_n)+1| \left( \frac{v_n^2}{a_0^2}+4 \right) + \frac{v_n(\frac{v_n^2}{a_0^2}+4)}{a_0^2}. \] We can continue the last inequality using (\ref{l4.1-3}) as follows: \begin{multline*} |\theta'(x_n+1)| \le \frac{1}{a_0^2} ( |\theta'(x_{n-1}+1)|+x_{n}-x_{n-1}) (v_{n}^2+4) +\frac{ v_{n} (v_{n}^2+4)}{a_0^4} \\ \le \frac{x_{n}}{a_0^2} (v_{n}^2+4 )^2 \left( \frac{|\theta'(x_{n-1}+1)|}{x_{n}(v_{n}^2+4 )}+ \frac{1}{(v_{n}^2+4 )} + \frac{1}{x_{n}a_0^2 } \right).
\end{multline*} We can now choose a constant $C_1$ sufficiently large, so that by induction from the last estimate (with the help of (\ref{sparse}) and (\ref{xn})) the following inequality follows for all $n$: \begin{equation} \label{l4.1-4} \sup\limits_{x \in (a,b) } |\theta'(x_n+1)| \le C_1 x_{n} (v_{n}^2+4 )^2. \end{equation} From (\ref{l4.1-3}) and (\ref{l4.1-4}) it then follows that \begin{multline*} |\theta'(x_n)-x_n| \le |\theta'(x_{n-1})|+x_{n-1} \le C_1 x_{n-1} (v_{n-1}^2+4 )^2+ x_{n-1} \le 2C_1x_{n-1}v_{n-1}^4, \end{multline*} where we may have to enlarge $C_1$. Then the claim of the present lemma for the first derivatives follows with $C_0=2 C_1$. To prove the assertion for the higher derivatives, we have to differentiate (\ref{l4.1-2}) sufficiently many times. As a result we obtain for $l \ge 2$ the formula \begin{multline} \label{l4.1-5} \theta^{(l)}(n+1)= \frac{\theta^{(l)}(n)}{1-\frac{V(n)}{\sin x}\sin(2\bar \theta(n))+\frac{V(n)^2}{\sin^2 x}\sin^2(\bar \theta(n))} \\+ \frac{P_l (n)} {\sin^{4l} x (1-\frac{V(n)}{\sin x} \sin(2\bar \theta(n)) +\frac{V(n)^2}{\sin^2 x} \sin^2(\bar \theta(n)))^l}, \end{multline} where $P_l(n)$ is a real polynomial of the form \begin{multline*} P_l(n)= \\ \sum\limits_{\beta \in J} c_{\beta} (\sin x)^{a_{\beta}} (\cos x)^{b_{\beta}} (\sin(\bar \theta(n)))^{s_{\beta}} (\cos(\bar \theta(n)))^{r_{\beta}} V(n)^{w_{\beta}} \prod\limits_{j=1}^{l-1} \left( \theta^{(j)}(n) \right)^{u_{j,\beta}}, \end{multline*} where $J$ is a finite set of indices, $w_{\beta} \le 2l-1$, $0 \le u_{j,\beta} $ and $\sum_{j=1}^{l-1} j u_{j,\beta} \le l$ for all $\beta$ from $J$. (This is easy to show by induction on $l$.) We can estimate $P_l(n)$ as follows: \[|P_l(n)| \lesssim V(n)^{2l-1} \sup_{(u_1,...,u_{l-1}): \, \sum_{j=1}^{l-1} j u_j \le \, l} \, \, \, \prod\limits_{j=1}^{l-1} \left| \left( \theta^{(j)}(n) \right)^{u_j} \right|.
\] From the formula (\ref{l4.1-5}) we obtain the inequality \begin{equation} \label{l4.1-6} |\theta^{(l)}(x_{n}+1)| \le |\theta^{(l)}(x_{n})| \left( \frac{ v_{n}^2}{a_0^2}+4 \right) +\frac{|P_l(x_{n})| (\frac{ v_{n}^2}{a_0^2}+4)^l}{a_0^{4l} }. \end{equation} We will now use induction on $l$. So let us assume that \[\sup\limits_{x \in [a,b]} | \theta^{(j)}(x_n+1) | \le C_j x^j_n (v_n^2+4)^{2j} \] for each $j \le l-1$. Then we can modify the estimate for $P_l$ as follows: \begin{multline*} \sup\limits_{x \in [a,b]} |P_l(x_n)| \\ \lesssim v_n^{2l-1} \sup_{ \sum_{j=1}^{l-1} j u_j \le \, l} \, \, \, ( x_{n}+(C_1+1) x_{n-1})^{u_1} \left( \prod\limits_{j=2}^{l-1} C_j \right) x_{n-1}^{\sum\limits_{j=2}^{l-1} j u_j} \lesssim v_n^{2l-1} x_n^l \end{multline*} (here we have used (\ref{l4.1-3}), the definition of $P_l$ and (\ref{xn})). \noindent Thus, using the last estimate on $P_l$ and (\ref{l4.1-3}), we can obtain from (\ref{l4.1-6}) the estimate \begin{multline*} |\theta^{(l)}(x_{n}+1)| \le |\theta^{(l)}(x_{n-1}+1)| \left( \frac{ v_{n}^2}{a_0^2}+4 \right) +C x_n^l v_n^{2l-1} \frac{ (\frac{ v_{n}^2}{a_0^2}+4)^l}{a_0^{4l} } \\ =\frac{x_{n}^l (v_{n}^2+4 )^{2l}}{a_0^{2}} \left(\frac{|\theta^{(l)}(x_{n-1}+1)| }{ \left( v_{n}^2+4 \right)^{2l-1} x_n^l} +\frac{C}{a_0^{6l-2}} \frac{v_n^{2l-1} }{ (v_{n}^2+4 )^{l}} \right). \end{multline*} We can now choose a constant $C_l$ sufficiently large, so that by induction on $n$ from the last estimate the following inequality follows for all $n$: \begin{equation} \label{l4.1-7} \sup\limits_{x \in (a,b) } |\theta^{(l)}(x_n+1)| \le C_l x_{n}^l (v_{n}^2+4 )^{2l}. \end{equation} (We have used here again the condition (\ref{xn}).) Thus the induction on $l$ is also complete and the estimates on the higher derivatives are proved. \hspace{1cm} $\square$ \vspace{1pt} \noindent {\it Remark.} We can weaken the conditions of the previous lemma.
Actually it suffices to assume the relations \begin{equation} \label{xnvn} \lim\limits_{n \rightarrow \infty} \frac{1}{x_n}=\lim\limits_{n \rightarrow \infty} \frac{1}{v_n}=\lim\limits_{n \rightarrow \infty} \frac{x_n v_n^2}{x_{n+1}} =0. \end{equation} \begin{corollary} \label{c4.3} Under the same assumptions as in the previous lemma, \[ \lim\limits_{j \rightarrow \infty} \frac{\bar \theta' (x_j)} {x_j}=1. \] \end{corollary} {\it Remark.} Let $\varepsilon >0$ be chosen. Then we can assume without loss of generality that $(1-\varepsilon)x_j < \bar \theta_j' <(1+\varepsilon)x_j $ holds not only for sufficiently large $j$ (which is the claim of the last corollary), but for all $j$. For this we have to take in Theorem \ref{t2.2} \[ R^{-2}(x_{j+j_0},k) =R^{-2}(x_{j_0},k) \prod_{r=j_0}^{j+j_0-1} \left( 1-\frac {v_r \sin (2 \bar \theta_r)}{ \sin x} + \frac {v_r^2 \sin^2(\bar \theta_r)} {\sin^2 x} \right)^{-1} \] instead of (\ref{t2.2.-1}), define $h$ there by $h(x)= \frac{2}{\pi} R^{-2}(x_{j_0},x) f(2 \cos x) \sin^2 x$ and then \vspace{1mm} \noindent renumber the $x_j$'s, $v_j$'s and $\bar \theta_j$'s correspondingly. \begin{corollary} \label{c4.5} Using (\ref{xn}) we can rewrite the statement of Lemma \ref{l4.1} as follows: \begin{align*} \sup\limits_{x \in [a,b]} | \bar \theta^{(l)}(x_{n+1}) | \le C_l x_n^{\frac{(2-\delta)l }{\delta}} , \hspace{2mm} n, \, l=1,2,..., \\ \sup\limits_{x \in (a,b)} \left| \frac{d \bar \theta (x_n) }{d x} -x_n \right| \le C_0 x_{n-1}^{\frac{2-\delta}{\delta}}, \hspace{2mm} n=2,3,..., \end{align*} where we may have to enlarge the $C_l$'s, $l \ge 1$. \end{corollary} \section{Non-resonant terms} We now briefly describe our general strategy for estimating (\ref{series}). We consider the separate integrals from this sum, that is, the integrals of the form \begin{equation} \label{term} \int^b_a h(x) C_{j}^{\alpha}(x) e^{i2(-t \cos x+\sum^{j}_{r=1}(\alpha_r \bar \theta_r))} dx .
\end{equation} By Corollary \ref{c4.3}, the derivative of the phase is roughly equal to \[2 \sum_{r=1}^j \alpha_r x_r + 2t\sin x . \] Because of the condition (\ref{xn}) (which implies the rapid growth of the $x_j$'s) the last expression is in most cases of the order $2 \alpha_j x_j+2t \sin x$. So we can conclude that if $|t|$ is either much larger or much smaller than $x_j$ and if the $\alpha_r$'s with $r<j$ are not too large, then the derivative of the phase is bounded away from zero and the corresponding integral is small. To make this precise, we set \[ \gamma^j_r = \left\{ \begin{array}{l} 4^{-1} x_j x_{j-1}^{\frac{\delta-2}{\delta}} , \hspace{3mm} r <j, \hspace{2mm} j >m(t)+1, \vspace{1mm} \\ |t|^{\upsilon_0} , \hspace{9mm} r=j=m(t)+1, \vspace{1mm} \\ \left( 2^{-1} |t| x_{j-1}^{\frac{\delta-2}{\delta}} \right)^{\rho_0}, \hspace{3mm} r \le m(t), \hspace{1mm} j=m(t)+1,\\ \vspace{1mm} 4^{-1} |t| a_0 x_j^{\frac{\delta-2}{\delta}} , \hspace{3mm} r \le m(t) ,\hspace{1mm} j=m(t) . \end{array} \right. \] The values $\rho_0$ and $\upsilon_0$ will be specified later; for the moment we only assume $0<\rho_0 <1$ and $0<\upsilon_0 <1$. \par We use the notation ''non-resonant'' for the terms (\ref{term}) of three types: \\ a) the terms with $j>m(t)+1$ and with $|\alpha_r| \le \gamma^j_r$ for all $r<j$; \\ b) the terms with $j=m(t)$ and with $|\alpha_r| \le \gamma^j_r$ for all $r \le j$; \\ c) the terms with $j=m(t)+1$ and with $|\alpha_r| \le \gamma^j_r$ for all $r \le j$, for which in addition \begin{equation} \label{defres} \min_{x \in [a,b]} |t \sin(x)+\sum^j_{r=1} \alpha_r x_r| \ge \frac{ \max \{ |\alpha_j x_j|, |t| a_0 \}}{2} \end{equation} holds. We denote the sets of the corresponding $\alpha$'s by $A_{1,j}$. The terms with $j=m(t)+1$ and $|\alpha_r| \le \gamma^j_r$ for all $r \le j$, for which (\ref{defres}) fails, are called ''resonant''; the sets of the corresponding $\alpha$'s are denoted by $A_{2,j}$. Finally, there are the terms with $|\alpha_{r_0}| > \gamma^j_{r_0}$ for some $r_0$. Therefore we refer to these terms also as terms ''with large $\max |\alpha_r|$''. We denote the sets of the corresponding $\alpha$'s by $ A_{3,j}$. (Note that in the cases $j=m(t)$ and $j=m(t)+1$ the definition of the sets $A_{1,j}$, $A_{2,j}$ and $A_{3,j}$ depends on the value of $t$.) We devote the rest of this section to the study of the non-resonant terms. The discussion of the resonant terms follows in the next section, and the remaining terms are treated in Section 7. Let us start with the non-resonant case a), so we consider the terms (\ref{term}) with $j>m(t)+1$ and $|\alpha_r| \le \gamma^j_r$ for $r=1,...,j-1$.
Abbreviate \[K^{\alpha}_j =2(-t \cos x+\sum^{j}_{r=1} \alpha_r \bar \theta_r).\] Using Corollary \ref{c4.3}, we then see that, if $\varepsilon>0$ is chosen sufficiently small and $|t|$ is sufficiently large (note that this also ensures that $j$ is large), then \begin{multline} \label{estK'} \inf\limits_{x \in (a, b)} |(K_j^{\alpha})'| \ge |\alpha_j| (1- \varepsilon) x_j - |t| - (1+ \varepsilon) \sum\limits^{j-1}_{r=1} |\alpha_r| x_r \\ \ge (1- \varepsilon) |\alpha_j| x_j - \frac{x_j}{2} - \frac{1+ \varepsilon}{4} \frac{x_j}{ x_{j-1}} \sum\limits^{j-1}_{r=1} x_r \ge \frac{|\alpha_j| x_j }{4}. \end{multline} In order to obtain good estimates, we must now integrate by parts sufficiently many times. To do this, we introduce the differential expression \[ L= \frac{-i}{(K_j^{ \alpha})'}\, \frac{d}{dx}. \] Note that $ L(e^ {i K_j^{ \alpha}}) =e^{i K_j^{ \alpha}} $. Therefore, we can manipulate the integrals (\ref{term}) as follows: \[ \int_a^b h C_j^{\alpha} e^{i K_j^{ \alpha} }\, dx = \int_a^b h C_j^{\alpha} \left( L^m e^{i K_j^{ \alpha}} \right) \, dx = \int_a^b e^{i K_j^{ \alpha}} \left[ {(L^*)}^m \left( h C_j^{\alpha} \right) \right] \, dx. \] Here, $m\in\mathbb N$ is still to be chosen and \[ L^{*}= \frac{d}{dx} \, \frac{i}{(K_j^{ \alpha})'(x)} \] is the transpose of $L$. There are no boundary terms because the support of $h$ lies in $[a,b]$. Thus we obtain the estimate \begin{equation} \label{estpi} \left| \int^b_a h(x) C_j^{\alpha}(x) e^{iK_j^{ \alpha}}dx \right| \le \pi \max_{ x \in (a,b)} \left| (L^{*})^m \left( h(x) C_j^{\alpha}(x) \right) \right| . \end{equation} So, our next task is to control $(L^{*})^m \left( h C_j^{\alpha} \right)$.
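\noindent For orientation, we record the simplest instance $m=1$ of this bound explicitly: here \[ L^{*} \left( h C_j^{\alpha} \right) = \frac{i\, (h C_j^{\alpha})'}{(K_j^{\alpha})'} - \frac{i\, h C_j^{\alpha} (K_j^{\alpha})''}{((K_j^{\alpha})')^2}, \] so that (\ref{estpi}) yields \[ \left| \int^b_a h C_j^{\alpha} e^{iK_j^{ \alpha}}dx \right| \le \pi \left( \frac{ \sup\limits_{x \in (a,b)} |(h C_j^{\alpha})'| }{ \inf\limits_{x \in (a,b)} |(K_j^{\alpha})'| } + \frac{ \sup\limits_{x \in (a,b)} |h C_j^{\alpha}| \, \sup\limits_{x \in (a,b)} |(K_j^{\alpha})''| }{ \left( \inf\limits_{x \in (a,b)} |(K_j^{\alpha})'| \right)^2 } \right). \] Each integration by parts thus trades one power of the large phase derivative $(K_j^{\alpha})'$ against one derivative of the amplitude; the next lemma records the general structure of $(L^{*})^m \left( h C_j^{\alpha} \right)$.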
\begin{lemma} \label{l5.1} For each positive integer $m$ there exists a constant $\tilde B_{m}$ such that the following inequality holds: \begin{multline} \sup_{ x \in (a,b)} \left| (L^{*})^m \left( h(x) C_j^{\alpha}(x) \right) \right| \le \tilde B_{m} \\ \times \sup\limits_{ \begin{array}{c} \, m \le l \le 2m \\x \in (a,b) \end{array} } \sup\limits_{\begin{array}{c} \, (\kappa, \zeta_1,...,\zeta_{l-1}) \in (\mathbb{Z}_+)^{l}: \\ \, \kappa \le m, \\ \, \sum_{k=1}^{l-1} (k+1) \zeta_k \le l - \kappa \end{array} } \hspace{3mm}\left|(C_j^{\alpha}(x))^{(\kappa)} \frac{ \prod\limits^{l-1}_{k=1} \left( (K_j^{\alpha}(x))^{(k+1)} \right)^{\zeta_k}}{ ((K_j^{\alpha}(x))' )^l} \right| . \nonumber \end{multline} \end{lemma} {\it Proof.} For each $m$ there exist constants $C^{m, l}_{(\kappa, \zeta)} $, independent of $j$, $x$ and $ \alpha$ (some of them can be equal to zero), such that \begin{equation} \label{l5.1-1} (L^{*})^m (h C_j^{\alpha} ) =\sum\limits^{2m}_{l=m} \sum\limits_{(\kappa,\zeta) \in I_{l}} \frac{C^{m, l}_{(\kappa, \zeta)} (h C_j^{\alpha})^{(\kappa)} \prod\limits^{l-1}_{k=1} \left( (K_j^{\alpha})^{(k+1)} \right)^{\zeta_k}}{ ((K_j^{\alpha})' )^l}, \end{equation} where $I_{l}$ is defined by $I_{l}= \{ (\kappa, \zeta) \in \{0,..,l \} \times \mathbb{Z}_+^{l-1}: \, \kappa+ \sum^{l-1}_{k=1} (k+1) \zeta_k \le l \}$. In the case $m=1$ the formula (\ref{l5.1-1}) is obtained by direct differentiation. (We have in this case \[ I_1=\{(0), \, (1) \}, \, I_2= \{(0,0), \, (1,0), \, (2,0), \, (0,1) \}, \] $C^{1,1}_{(1)}=i=-C^{1,2}_{(0,1)}$, and all other $C^{1, l}_{(\kappa, \zeta)}$ are equal to zero.)
\\ For arbitrary $m>1$, (\ref{l5.1-1}) is easily proved by an induction argument, using \begin{multline*} L^{*}\left( \frac{\left(h C_j^{\alpha} \right)^{(\kappa)} \prod\limits^{l-1}_{k=1} \left( (K_j^{ \alpha})^{(k+1)} \right)^{\zeta_k}}{ ((K_j^{\alpha})')^{l}} \right) =\frac{ i (h C_j^{\alpha})^{(\kappa+1)} \prod\limits^{l-1}_{k=1} \left( (K_j^{\alpha}) ^{(k+1)} \right)^{\zeta_k}}{ ((K_j^{ \alpha})')^{l+1}} \\+ i (h C_j^{\alpha})^{(\kappa)} \sum\limits^{l-1}_{k_0 =1 } \frac {\zeta_{k_0} \left( (K_j^{\alpha})^{(k_0+1)} \right)^{\zeta_{k_0}-1} (K_j^{\alpha})^{(k_0+2)} \prod\limits_{k \ne k_0} \left( (K_j^{\alpha})^{(k+1)} \right)^{\zeta_k}} { ((K_j^{\alpha})')^{l+1}} \\- \frac{i (l+1)(h C_j^{\alpha} )^{(\kappa)} (K_j^{ \alpha})'' \prod\limits_{k=1}^{l-1} \left( (K_j^{ \alpha}) ^{(k+1)} \right)^{\zeta_k} }{ ((K_j^{\alpha})')^{l+2}}. \end{multline*} The sets $I_{l}$ are evidently finite and, moreover, $|I_{l}|$ (the cardinality of $ I_{l}$) depends only on $l$. Then, taking into account that $h$ lies in $C^{\infty}_0 (-2,2)$, the present lemma follows from (\ref{l5.1-1}). $\hspace{12mm} \square$ \vspace{1mm} \noindent To bound the expressions $ \prod^{l-1}_{k=1} \left( (K_j^{\alpha}(x))^{(k+1)} \right)^{\zeta_k} $, we use Corollary \ref{c4.5}, which implies that \begin{equation} \label{estK(k+1)} \left| \prod\limits^{l-1}_{k=1} \left( (K_j^{\alpha})^{(k+1)} \right)^{\zeta_k} \right| \le \prod\limits^{l-1}_{k=1} \left( |t|+ \sum\limits_{r=1}^j |\alpha_r| C_{k+1} x_{r-1}^{\frac{(k+1) (2-\delta) }{\delta}} \right)^{\zeta_k} .
\end{equation} From the last inequality and the inequality (\ref{estK'}) follows the estimate \begin{multline} \label{estK(k+1)-2} \left| \frac{\prod\limits^{l-1}_{k=1} ((K_j^{\alpha})^{(k+1)})^{\zeta_k}}{((K_j^{\alpha})')^l}\right| \le \frac{4^l}{(|\alpha_j| x_j)^{\nu}} \prod\limits^{l-1}_{k=1} \left( \frac{|t|+ \sum\limits_{r=1}^j |\alpha_r| C_{k+1} x_{r-1}^{\frac{(k+1) (2-\delta) }{\delta}}}{(|\alpha_j| x_j)^{k+1} } \right)^{\zeta_k}, \end{multline} with $\nu = l -\sum_{k=1}^{l-1} (k+1) \zeta_k$. We have now to consider the separate factors from this product. So let $\xi$ be any integer $ \ge 2$. Then we use $|t| \le x_j $ (which follows from $j>m(t)+1$) and (\ref{xn}) to obtain the estimates \begin{multline*} \frac{ |t|+ \sum\limits_{r=1}^j |\alpha_r| C_{\xi} x_{r-1}^{\frac{\xi(2-\delta)}{\delta}} }{(|\alpha_j| x_j)^{\xi} } \le \frac{|t|+ x_j x_{j-1}^{\frac{\delta-2}{\delta}} \sum\limits_{r=1}^{j-1} ( C_{\xi} x_{r-1}^{\frac{\xi(2-\delta)}{\delta}})+|\alpha_j| C_{\xi} x_{j-1}^{\frac{\xi(2-\delta)}{\delta}}}{(|\alpha_j| x_j)^{\xi}} \\ \le \frac{|t|}{x_j^{\xi}}+ C_{\xi} x_{j-1}^{\frac{(\delta-2)}{\delta}} x_j^{1-\xi} \sum\limits_{r=1}^{j-1} x_{r-1}^{\frac{\xi(2-\delta)}{\delta}} +C_{\xi} \frac{x_{j-1}^{\frac{\xi(2-\delta)}{\delta}}}{x_j^{\xi}} \lesssim \frac{x_{j-1}^{\frac{(\xi-1)(2-\delta)}{\delta}}}{x_j^{\xi-1}} . \end{multline*} From the last estimate and (\ref{estK(k+1)-2}) follows that there exists a constant $ G_l$, such that \[ \sup\limits_{x \in (a,b)} \left| \frac{\prod\limits^{l-1}_{k=1} ((K_j^{\alpha})^{(k+1)})^{\zeta_k}}{((K_j^{\alpha})')^l} \right| \le \frac{G_l}{x_j^{\nu}} \prod\limits^{l-1}_{k=1} \left( \frac{x_{j-1}^{\frac{k(2-\delta)}{\delta}}}{x_j^{k}} \right)^{\zeta_k} = \frac{G_l}{x_j^{\nu}} \left( \frac{x_{j-1}^{\frac{2-\delta}{\delta}}}{x_j} \right)^{ \sum\limits^{l-1}_{k=1} k\zeta_k}. 
\] We continue this with the help of the estimates \[ \nu+\sum\limits^{l-1}_{k=1} k \zeta_k =l- \sum\limits^{l-1}_{k=1} \zeta_k \ge l- \frac{1}{2}\sum\limits^{l-1}_{k=1} (k+1)\zeta_k \ge \frac{ l+ \kappa}{2} \] to obtain the inequality \begin{equation} \label{estGl} \sup\limits_{\begin{array}{c} \, x \in (a,b) \\ \sum_{k=1}^{l-1} (k+1) \zeta_k \le l - \kappa \end{array} } \left| \frac{\prod\limits^{l-1}_{k=1} ((K_j^{\alpha})^{(k+1)})^{\zeta_k}}{((K_j^{\alpha})')^l} \right| \le G_l \frac{ x_{j-1}^{\frac{(2-\delta)l}{\delta}} }{ x_j^{\frac{l+\kappa}{2}} } . \end{equation} \begin{proposition} \label{p5.2} Let the sequences $(v_n)$ and $(x_n)$ obey (\ref{xn}). Let \, $\delta$ be arbitrary from $(\frac{1}{2},1)$. Then there exist for any positive integer $n$ a constant $ B_n$ and $t_0$ from $\mathbb{R}$, such that for $|t|>t_0$ and $j>m(t)+1$ the following inequality holds: \[\sum\limits_{ \alpha \in A_{1,j}} \left| \int\limits^b_a h(x) C_{j}^{\alpha}(x) e^{i K^{\alpha}_{j}(x)} dx \right| \le B_n x_j^{-n} .\] \end{proposition} {\it Proof.} Let $n$ be fixed. Using (\ref{estpi}), (\ref{estGl}) and Lemma \ref{l5.1}, we obtain the estimate \begin{equation} \label{l5.2-1} \left| \int\limits_a^b h(x)C^{\alpha}_j e^{iK^{\alpha}_j}dx \right| \le \tilde G_m \sup\limits_{ \begin{array}{c} \, 0 \le \kappa \le m\\ m \le l \le 2m \\ x \in (a,b) \end{array} } \left|(C_j^{\alpha}(x))^{(\kappa)} \right| \frac{ x_{j-1}^{\frac{(2-\delta)l}{\delta}} }{ x_j^{\frac{l+\kappa}{2}} } \, \end{equation} with $\tilde G_m=\pi \tilde B_{m} \max_{m \le l \le 2m}G_l$. We now use Lemma \ref{l3.1}. By Taylor expansion we obtain \begin{align*} \sqrt{p_{0,r}} = \sqrt[4]{\frac{v_r^2+4 a_0^2}{v_r^2}}= \sqrt[4] {1+\frac{4 a_0^2}{v_r^2}} = 1+\frac{ a_0^2}{v_r^2}+o\left( \frac{1}{v_r^3} \right), \\ \ln (p_{0,r})= \ln \left( \sqrt{1+\frac{4 a_0^2}{v_r^2}} \right)= \frac{2 a_0^2}{v_r^2} + o\left( \frac{1}{v_r^3} \right).
\end{align*} From these expansions, from Lemma \ref{l3.1} (the formula (\ref{est-of-coeff})) and from the condition (\ref{xn}) it follows for each $\varepsilon >0$ that \begin{equation} \label{est-koeff-with-Taylor} \sum\limits_{\alpha \in \mathbb{Z}^j }\sup\limits_{ x \in (a,b) } \left|(C_j^{\alpha}(x))^{(\kappa)} \right| \lesssim v_j^{2 \kappa+2+2 \varepsilon} = x_j^{\frac{(1-\delta)(\kappa+1+\varepsilon)}{\delta}} . \end{equation} From this estimate and from (\ref{l5.2-1}) we conclude that there exists for each $m$ a constant $K_m$ such that \begin{equation} \label{est-Km} \sum\limits_{\alpha \in A_{1,j}} \left| \int\limits^b_a h(x) C_{j}^{\alpha}(x) e^{i K^{\alpha}_{j}(x)} dx \right| \le K_m \sup\limits_{ \begin{array}{c} \, 0 \le \kappa \le m\\ m \le l \le 2m \end{array} } \frac{x_{j-1}^{\frac{(2-\delta)l}{2 \delta}}}{ x_j^{ \frac{l+\kappa}{2} - \frac{(1-\delta)(\kappa+1+\varepsilon)}{\delta} } }. \end{equation} From $\delta >\frac{1}{2}$ it follows that $\frac{(1-\delta)}{\delta} <1-\tilde \varepsilon$ for some $\tilde \varepsilon>0$, and we can choose $m$ so that $ (1+2\frac{1+\varepsilon}{m}) \frac{1-\delta}{\delta} < 1-\tilde \varepsilon$ holds. Then we have for all $\kappa \ge \frac{l}{2}$ ($\ge \frac{m}{2}$) the relation \[ \frac{l+\kappa}{2} - \frac{(1-\delta)(\kappa+1+\varepsilon)}{\delta} =\frac{l}{2} +\frac{\kappa}{2}(1 -(1+\frac{1+\varepsilon}{\kappa}) \frac{2(1-\delta)}{\delta}) >\frac{l +2 \kappa \tilde \varepsilon-\kappa}{2} \ge \kappa \tilde \varepsilon. \] For all $\kappa$ with $\kappa < \frac{l}{2}$ we have \[ \frac{l+\kappa}{2} - \frac{(1-\delta)(\kappa+1+\varepsilon)}{\delta} \ge \frac{l+\kappa}{2} - \kappa-1-\varepsilon =\frac{l-\kappa}{2} - 1-\varepsilon > \frac{l}{4} - 1-\varepsilon.
\] Thus we can continue (\ref{est-Km}) as follows: \[\sum\limits_{\alpha \in A_{1,j}} \left| \int\limits^b_a h(x) C_{j}^{\alpha}(x) e^{i K^{\alpha}_{j}(x)} dx \right| \le K_m \frac{x_{j-1}^{\frac{(2-\delta)m}{ \delta}}}{ x_j^{ \min \{ \frac{m \tilde \varepsilon}{2} , \frac{m}{4} - 1-\varepsilon \} } }. \] We now have only to use the condition (\ref{xn}) to obtain for large $j$ the inequality $x_{j-1}^{\frac{(2-\delta)m}{ \delta}} \le x_j$. Then the present proposition follows with sufficiently large $t_0$ (which ensures that all $j$'s under consideration are also large) and with \\ $B_n=K_m$, where $m$ is chosen so that $\min \{ \frac{m \tilde \varepsilon}{2} , \frac{m}{4} - 1-\varepsilon \} > n+1$ holds. \hspace{5mm} $\square$ \vspace{2mm} The non-resonant cases b) and c) can be treated similarly. We will therefore keep the discussion of these cases brief. First we consider the case b). Here $j=m(t)$ and $|\alpha_r| \le \gamma^j_r$ for $r=1,...,j$. Instead of (\ref{estK'}) we have, with sufficiently small $\varepsilon>0$, \begin{multline} \label{estK'2} \inf\limits_{x \in (a, b)} |(K_j^{\alpha})'| \ge |t| \inf\limits_{x \in (a,b)} \sin(x) -(1+\varepsilon) \sum\limits^{j}_{r=1} |\alpha_r| x_r \ge |t| a_0 - (1+\varepsilon) \sum\limits^{j}_{r=1} \frac{|t| a_0}{4 x_j} x_r \\ = |t| a_0 -\frac{(1+\varepsilon) |t| a_0}{4 } \left(1+x_j^{-1} \sum\limits^{j-1}_{r=1} x_r \right) \ge \frac{|t| a_0}{2}. \end{multline} Lemma \ref{l5.1} also holds in this case without any change. We can also use the inequality (\ref{estK(k+1)}), which remains valid. Instead of (\ref{estK(k+1)-2}) we have \begin{equation} \label{estK(k+1)-3} \sup\limits_{x \in (a,b)} \left| \frac{\prod\limits^{l-1}_{k=1} ((K_j^{\alpha})^{(k+1)})^{\zeta_k}}{((K_j^{\alpha})')^l} \right| \le \frac{2^l}{(|t| a_0)^{\nu}} \prod\limits^{l-1}_{k=1} \left( \frac{ |t|+ \sum\limits_{r=1}^j |\alpha_r| C_{k+1} x_{r-1}^{\frac{(k+1) (2-\delta)}{\delta}}}{(|t| a_0)^{k+1}} \right)^{\zeta_k}.
\end{equation} So we need to consider the individual factors in this product. We obtain for any $\xi \ge 2$ the following sequence of estimates: \begin{multline} \frac{ |t|+ \sum\limits_{r=1}^j |\alpha_r| C_{\xi} x_{r-1}^{\frac{ \xi (2-\delta)}{\delta}}}{(|t| a_0)^{\xi}} \le \frac{|t|+ \sum\limits_{r=1}^{j} |t| a_0 x_j^{\frac{\delta-2}{\delta}} C_{\xi} x_{r-1}^{\frac{\xi (2-\delta)}{\delta}} }{|t|^{\xi}} \\ \le |t|^{1-\xi}+ C_{\xi}a_0 \left( \frac{x_j^{\frac{2-\delta}{\delta}}}{|t|} \right)^{\xi-1} \left( \frac{1}{x_j}\right)^{\xi \frac{2-\delta}{\delta} } \sum\limits_{r=1}^{j} x_{r-1}^{\frac{\xi (2-\delta)}{\delta} } \lesssim \left( \frac{x_j^{\frac{2-\delta}{\delta}}}{|t|} \right)^{\xi-1}. \end{multline} Thus we obtain instead of (\ref{estGl}) the following inequality: \begin{equation} \label{estGl-2} \sup\limits_{\begin{array}{c} \, x \in (a,b) \\ \sum_{k=1}^{l-1} (k+1) \zeta_k \le l-\kappa \end{array} } \left| \frac{\prod\limits^{l-1}_{k=1} ((K_j^{\alpha})^{(k+1)})^{\zeta_k}}{((K_j^{\alpha})')^l} \right| \le G_l \frac{x_{j}^{\frac{(2-\delta)l}{\delta}}}{|t|^{\frac{l+\kappa}{2}}} \le G_l \frac{x_{j}^{\frac{(2-\delta)l}{\delta}}}{|t|^{\frac{l}{2}}} . \end{equation} (These constants $G_l$ are possibly different from the $G_l$'s in (\ref{estGl}).) \begin{proposition} \label{p5.3} Let the sequences $(v_n)$ and $(x_n)$ obey (\ref{xn}). Let $\delta$ be arbitrary from $(0,1)$. Then there exist for any positive integer $n$ a constant $ B_n$ and $t_0>0$, such that for $|t|>t_0$ and for $j=m(t)$ the following inequality holds: \[\sum\limits_{ \alpha \in A_{1,j}} \left| \int\limits^b_a h(x) C_{j}^{\alpha}(x) e^{i K^{\alpha}_{j}(x)} dx \right| \le B_n |t|^{-n} .\] \end{proposition} {\it Proof.} Let $n$ be fixed.
Using (\ref{estpi}), (\ref{estGl-2}) and Lemma \ref{l5.1}, we obtain in this case the following estimate (compare with the proof of Proposition \ref{p5.2}): \begin{equation} \label{l5.3-1} \left| \int\limits_a^b h(x)C^{\alpha}_j e^{iK^{\alpha}_j}dx \right| \lesssim \sup\limits_{ \begin{array}{c} \, 0 \le \kappa \le m\\ x \in (a,b) \end{array} } \left|(C_j^{\alpha}(x))^{(\kappa)} \right| \left( \frac{x_{j}^{\frac{2(2-\delta)}{\delta}}} {|t|} \right)^{\frac{m}{2}} . \end{equation} The estimate (\ref{est-koeff-with-Taylor}) from the proof of Proposition \ref{p5.2} remains valid in this case as well. From this estimate and from (\ref{l5.3-1}) we conclude that there exists for each $m$ a constant $K_m$ such that \[\sum\limits_{\alpha \in A_{1,j}} \left| \int\limits^b_a h(x) C_{j}^{\alpha}(x) e^{i K^{\alpha}_{j}(x)} dx \right| \le K_m \frac{x_{j}^{\frac{(2-\delta)m+(1-\delta)(m+1+\varepsilon)}{ \delta}}}{ |t|^{ \frac{m}{2} } }. \] Then the present proposition follows from (\ref{xn}) with $B_n=K_{2n+1}$ and $t_0$ sufficiently large. \hspace{1cm} $\square$ Now we come to the case c). We denote $\min\limits_{x \in [a,b]} |t \sin(x)+\sum^j_{r=1}\alpha_r x_r|$ by $D^{\alpha}_t$. Let $x_0$ be the point where this minimum is achieved. Then we have $D^{\alpha}_t=|t \sin(x_0)+\sum^j_{r=1}\alpha_r x_r|$.
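\noindent Note that, by (\ref{defres}), in the non-resonant case c) we have the lower bound \[ D^{\alpha}_t \ge \frac{ \max \{ |\alpha_j x_j|, |t| a_0 \}}{2} , \] so $D^{\alpha}_t$ dominates both $\frac{|\alpha_j| x_j}{2}$ and $\frac{|t| a_0}{2}$; both of these lower bounds will be used below.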
We use (\ref{xn}), (\ref{defres}) and Corollary \ref{c4.5} to obtain for large $j$ ($=m(t)+1$) the following chain of inequalities: \begin{multline*} \inf\limits_{x \in (a,b)} |(K^{\alpha}_j(x))'| \ge |t \sin(x_0)+\sum^j_{r=1} \alpha_r x_r| - \sum_{r=1}^j | \alpha_r| | \bar \theta'_r - x_r | \\ \ge |t \sin(x_0)+\sum^j_{r=1} \alpha_r x_r| - C_0|\alpha_j|x_{j-1}^{\frac{2-\delta}{\delta}}- \sum\limits^{j-1}_{r=1}\left( x_j x_{j-1}^{\frac{\delta-2}{\delta}} \right)^{\rho_0} C_0 x_{r-1}^{\frac{2-\delta}{\delta}} \\ \ge D^{\alpha}_t \left(1- 2C_0 \left( \frac{ x_{j-1}^{\frac{2-\delta}{\delta}}}{x_j} + \left( \frac{x_{j-1}^{\frac{2-\delta}{\delta}} }{x_j} \right)^{1-\rho_0} \frac{ \sum\limits^{j-1}_{r=1}\left( x_{r-1}^{\frac{2-\delta}{\delta}} \right)} {x_{j-1}^{\frac{2-\delta}{\delta}}}\right) \right) \ge \frac{D^{\alpha}_t}{2} \ge \frac{|\alpha_j| x_j}{4}. \end{multline*} So, instead of (\ref{estK(k+1)-2}) from the case a) and (\ref{estK(k+1)-3}) from the case b) we have in the case c) the estimate \begin{equation} \label{estK(k+1)-4} \sup\limits_{x \in (a,b)} \left| \frac{\prod\limits^{l-1}_{k=1} ((K_j^{\alpha})^{(k+1)})^{\zeta_k}}{((K_j^{\alpha})')^l} \right| \le \frac{2^l}{(D^{\alpha}_t)^{\nu}} \prod\limits^{l-1}_{k=1} \left( \frac{ |t|+ \sum\limits_{r=1}^j |\alpha_r| C_{k+1} x_{r-1}^{\frac{(k+1) (2-\delta)}{\delta}} }{(D^{\alpha}_t)^{k+1}} \right)^{\zeta_k}. \end{equation} We consider again the individual factors.
We use the inequality \[ |t| =\frac{|t \sin x_0|}{\sin x_0} \le \frac{D^{\alpha}_t + \sum\limits_{r=1}^j |\alpha_r| x_r }{\sin x_0} \] to obtain for each integer $\xi \ge 2$ the following estimates: \begin{multline} \frac{ |t|+ \sum\limits_{r=1}^j |\alpha_r| C_{\xi} x_{r-1}^{\frac{\xi (2-\delta)}{\delta}} }{|t \sin(x_0)+\sum^j_{r=1} \alpha_r x_r|^{\xi} } \\ \le \frac{ D^{\alpha}_t + \sum\limits^j_{r=1}|\alpha_r| x_r + \sin(x_0) \left( |\alpha_j| C_{\xi} x_{j-1}^{\frac{\xi (2-\delta) }{\delta}}+C_{\xi} \frac{x_j}{x_{j-1}} \sum\limits_{r=1}^{j-1} x_{r-1}^{\frac{\xi (2-\delta)}{\delta}} \right) }{\sin(x_0) \left( D^{\alpha}_t \right)^{\xi}} \\ \lesssim \left( \frac{1}{D^{\alpha}_t} \right)^{\xi-1} +\frac{|\alpha_j| (x_{j-1}^{\frac{\xi (2-\delta) }{\delta}} +x_j)} {\left( D^{\alpha}_t \right)^{\xi-1} |\alpha_j| x_j} \lesssim \frac{ x_{j-1}^{\frac{(\xi-1) (2-\delta) }{\delta}}} {\left( D^{\alpha}_t \right)^{\xi-1}} \lesssim \left( \frac{ x_{j-1}^{\frac{ (2-\delta) }{\delta}}} { t }\right)^{\xi-1}. \end{multline} We obtain from this as in the previous case \begin{equation} \label{estGl-3} \sup\limits_{\begin{array}{c} \, x \in (a,b) \\ \sum_{k=1}^l (k+1) \zeta_k \le l \end{array} } \left| \frac{\prod\limits^{l-1}_{k=1} ((K_j^{\alpha})^{(k+1)})^{\zeta_k}}{((K_j^{\alpha})')^l} \right| \le G_l \left( \frac{x_{j-1}^{\frac{2(2-\delta)}{\delta}}}{|t|} \right)^{\frac{l}{2}} . \end{equation} (These constants $G_l$'s are possibly different from $G_l$'s from (\ref{estGl}) and (\ref{estGl-2}).) \begin{proposition} \label{p5.4} Let the sequences $(v_n)$ and $(x_n)$ obey (\ref{xn}). Let \, $\delta$ be arbitrary from $(0,1)$. 
Then there exist for any positive integer $n$ a constant $ B_n$ and $t_0>0$, such that for $|t|>t_0$ and for $j=m(t)+1$ the following inequality holds: \[\sum\limits_{ \alpha \in A_{1,j}} \left| \int\limits^b_a h(x) C_{j}^{\alpha}(x) e^{i K^{\alpha}_{j}(x)} dx \right| \le B_n |t|^{-n} .\] \end{proposition} {\it Proof.} We have only to repeat the proof of Proposition \ref{p5.3} with (\ref{estGl-2}) replaced by (\ref{estGl-3}). \hspace{15mm} $\square$ \section{Resonant terms} We consider in this section the resonant terms, that is, the terms (\ref{term}) with \\ $j=m(t)+1$ and $|\alpha_r| \le \gamma^j_r$ for $1\le r \le j$, for which \begin{equation} \label{res-condition} \min_{x \in [a,b]} |t \sin(x)+\sum^j_{r=1} \alpha_r x_r| < \frac{ \max \{ |\alpha_j x_j|, |t| a_0 \}}{2} \end{equation} holds. For the sake of brevity we write in this section $j$ for $m(t)+1$. Abbreviate $g(x)=2( t \sin(x)+\sum^j_{r=1} \alpha_r x_r)$. The point $x=\pi/2$ (which corresponds to the energy $E=0$) plays a special role now because the derivative of $g$ is zero there. Therefore, we assume first that $\pi/2 \notin [a,b] $ (the opposite case will be considered afterwards). The terms under consideration may have a small $\inf_{x \in (a,b)} |(K^{\alpha}_j)'| $, so we cannot proceed here as in the previous section. To avoid this difficulty we represent $K^{\alpha}_j$ as follows: $K^{\alpha}_j= \eta_1 + \eta_2$, where \[ \eta_1=2(-t \cos(x)+ x \sum^j_{r=1} \alpha_r x_r), \, \, \, \, \eta_2= 2\sum^j_{r=1} \alpha_r (\bar \theta_r- x x_r). \] Let $\varepsilon_1$ be any positive number. The function $g(x)$ (which is the derivative of $\eta_1$) is monotone on $(a,b)$; therefore there is exactly one minimum point of $|g(x)|$. We denote this point by $x_0$ and the interval $(x_0-\varepsilon_1, x_0+\varepsilon_1) \bigcap (a,b)$ by $I_1$.
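\noindent The integral over a small interval around $x_0$ can always be bounded trivially by the length of the interval: \[ \left| \int_{I_1} h C_j^{\alpha} e^{iK^{\alpha}_j} dx \right| \le 2 \varepsilon_1 \sup\limits_{x \in (a,b)} |h C_j^{\alpha}| . \] Outside of $I_1$, on the other hand, the derivative of $\eta_1$ is bounded away from zero.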
Then we have for all $x \notin I_1$ \begin{equation} \label{def-of-M} 2|t \sin(x)+\sum^j_{r=1} \alpha_r x_r| \ge | t| |\sin x - \sin x_0| \ge M |t| \varepsilon_1, \end{equation} with $M= \inf (|\cos x|, \, x \in (a,b)) >0$. We split the integral (\ref{term}) and then integrate by parts to obtain \begin{multline} \label{res-int-by-parts} \int\limits^b_a h C_j^{\alpha} e^{iK^{\alpha}_j} dx=\int\limits_{I_1} h C_j^{\alpha} e^{iK^{\alpha}_j} dx+\int\limits_{(a,b) \setminus I_1} \frac{h C_j^{\alpha} e^{i \eta_2}} {ig}de^{i\eta_1} \\ =\int\limits_{I_1} h C_j^{\alpha} e^{iK^{\alpha}_j} dx -\left( \frac{h C_j^{\alpha} e^{iK^{\alpha}_j}} { i g} \right)^{x_0+\varepsilon_1}_{x_0-\varepsilon_1} -\int\limits_{(a,b) \setminus {I_1}} \frac{h' C_j^{\alpha} e^{i K^{\alpha}_j} }{ i g} dx \\ -\int\limits_{(a,b) \setminus {I_1}} \frac{h \frac{d}{dx}\left( C_j^{\alpha}\right) e^{i K^{\alpha}_j} }{ i g} dx +\int\limits_{(a,b) \setminus {I_1}} \frac{2 h C_j^{\alpha} (t \cos(x)) e^{i K^{\alpha}_j} } {i g^2} dx \\ -\int\limits_{(a,b) \setminus I_1} \frac{2 h(x) C_j^{\alpha}(x) \sum^j_{r=1} (\alpha_r (\bar \theta'_r(x)-x_r)) e^{i K^{\alpha}_j} } { g} dx. \end{multline} We leave the first integral for the moment and estimate other summands. The only summand, which we cannot estimate immediately, is the last integral. So we consider the expressions $ \left| g^{-1} \sum^j_{r=1} \alpha_r (\bar \theta'_r(x)-x_r) \right| $. From Corollary \ref{c4.5} we have the inequality \begin{equation*} \left| \sum^j_{r=1} \alpha_r (\bar \theta'_r(x)-x_r) \right| \le C_0 \left( \left( x_j x_{j-1}^{\frac{ \delta-2}{\delta}} \right)^{\rho_0} \sum^{j-1}_{r=1} x_{r-1}^{\frac{2-\delta }{\delta}} +|t|^{\upsilon_0} x_{j-1}^{\frac{2-\delta }{\delta}} \right). 
\end{equation*} Then with (\ref{def-of-M}) and with the definition of $g(x)$ follows the estimate \begin{multline*} \sup_{x \in (a,b) \backslash I_1} \left| \frac{ \sum^j_{r=1} \alpha_r (\bar \theta'_r(x)-x_r) } {i g} \right| \\ \le \frac{C_0\left( |t|^{\upsilon_0} x_{j-1}^{\frac{2-\delta }{\delta}} +x_j^{\rho_0} x_{j-1}^{\frac{(\delta-2) \rho_0}{\delta}} \sum^{j-1}_{r=1} x_{r-1}^{\frac{2-\delta }{\delta}} \right) }{M \varepsilon_1 |t| } \lesssim \frac{ x_{j-1}^{\frac{2-\delta }{\delta}}}{M \varepsilon_1 |t|^{1-\upsilon_0}} +\frac{1}{M \varepsilon_1 |t|^{1-\rho_0}}. \end{multline*} (The ''hidden'' constant in the last estimate is independent of $M$ and $\varepsilon_1 $.) \\ So we use (\ref{res-int-by-parts}) to obtain the estimate \begin{multline*} \left|\int^b_a h C_j^{\alpha} e^{iK^{\alpha}_j}dx \right| \le \left| \int_{I_1} h C_j^{\alpha} e^{i K^{\alpha}_j}dx \right| + \frac{C (b-a) }{M \varepsilon_1|t| } \sup\limits_{\begin{array}{c} x \in (a,b) \\ \delta=0,1 \end{array}} \left| ( C_j^{\alpha})^{(\delta)} \right| \\ + \left( \frac{C}{M |t| \varepsilon_1} +\frac{C (b-a) }{M \varepsilon_1 } \left( |t|^{-1} + \frac{1}{M \varepsilon_1 |t| } + \frac{ x_{j-1}^{\frac{2-\delta }{\delta}}}{ |t|^{1-\upsilon_0}} +\frac{1}{ |t|^{1-\rho_0}} \right) \right) \sup\limits_{ x \in (a,b) } \left| C_j^{\alpha} \right|. \end{multline*} Let $(\varepsilon_k)_{k=1}^{\infty}$ be any monotonically decreasing sequence of positive numbers. (The first term of this sequence, that is $\varepsilon_1$, was already introduced). We define now inductively the sequence of intervals by $I_k=(x_0- \varepsilon_k, x_0 +\varepsilon_k ) \bigcap I_{k-1}$. The formula (\ref{res-int-by-parts}) holds with $I_1$ replaced by $I_k$, $(a,b)$ replaced by $I_{k-1}$ and with the additional boundary term \[\left( \frac{h C_j^{\alpha} e^{iK^{\alpha}_j}} { i g} \right)^{x_0+\varepsilon_{k-1} }_{x_0-\varepsilon_{k-1}} \] on the right side. 
We can then repeat the previous procedure to obtain the following estimate (the contribution corresponding to the additional boundary term is small in comparison with the other terms and can be therefore omitted): \begin{multline*} \left|\int_{I_{k-1}} h C_j^{\alpha} e^{iK^{\alpha}_j}dx \right| \le \left| \int_{I_k} h C_j^{\alpha} e^{i K^{\alpha}_j}dx \right| + \frac{C \varepsilon_{k-1} }{M \varepsilon_k |t| } \sup\limits_{\begin{array}{c} x \in (a,b) \\ \delta=0,1 \end{array}} \left| ( C_j^{\alpha})^{(\delta)} \right| \\+ \left( \frac{C }{M |t| \varepsilon_k } +\frac{C \varepsilon_{k-1} }{M \varepsilon_k } \left( |t|^{-1} + \frac{1}{M \varepsilon_k |t| } + \frac{ x_{j-1}^{\frac{2-\delta }{\delta}}}{ |t|^{1-\upsilon_0}} +\frac{1}{ |t|^{1-\rho_0}} \right) \right) \sup\limits_{ x \in (a,b) } \left| C_j^{\alpha} \right|. \end{multline*} For fixed $m$ (which we have to specify further) follows from the two last estimates the inequality \begin{multline} \label{res-long-est} \left|\int^b_a h C_j^{\alpha} e^{iK^{\alpha}_j}dx \right| \le C \left( \varepsilon_m \sup\limits_{ x \in (a,b) } \left| C_j^{\alpha} \right| + \sum\limits_{k=1}^{m-1} \frac{\varepsilon_{k-1}}{M |t| \varepsilon_k} \sup\limits_{\begin{array}{c} x \in (a,b) \\ \delta=0,1 \end{array}} \left| ( C_j^{\alpha})^{(\delta)} \right| \right) \\ + \sum\limits_{k=1}^{m-1} \frac{C}{M \varepsilon_k} \left( \frac{2 M \varepsilon_k+ \varepsilon_{k-1}}{M \varepsilon_k |t| } + \frac{ \varepsilon_{k-1} x_{j-1}^{\frac{2-\delta }{\delta}}}{ |t|^{1-\upsilon_0}} +\frac{\varepsilon_{k-1}}{ |t|^{1-\rho_0}} \right) \sup\limits_{x \in (a,b) } \left| C_j^{\alpha} \right| . \end{multline} (We notice again that $C$ is independent of $M$ and $ \varepsilon_k$'s.) \begin{proposition} \label{p6.1} Suppose $\pi/2 \notin [a,b]$. 
Then for each $\sigma>0$ there exist $t_0 >0$ and a constant $C$, such that for $|t|>t_0$ the following holds: \begin{equation*} \sum\limits_{\alpha \in A_{2,m(t)+1}} \left| \int\limits^b_a h C_{m(t)+1}^{\alpha} e^{iK^{\alpha}_{m(t)+1}} dx \right| \le C |t|^{-\min\{ \frac{1}{2},\frac{2 \delta -1}{\delta}, 1- \upsilon_0, 1-\rho_0 \} + \frac{(1-\delta)}{\delta}+\sigma}. \end{equation*} \end{proposition} {\it Proof}. We set $\varepsilon_k=|t|^{-k \mu}$ with $\mu>0$ to obtain from (\ref{res-long-est}), using (\ref{est-koeff-with-Taylor}), for sufficiently large $|t|$ the following inequality: \begin{multline*} \sum\limits_{\alpha \in A_{2,j}} \left|\int^b_a h C_j^{\alpha} e^{iK^{\alpha}_j}dx \right| \le C \left( |t|^{- m \mu} x_j^{\frac{(1-\delta)(1+\varepsilon)}{\delta}} + \frac{m}{M |t|^{1- \mu} } x_j^{\frac{(1-\delta)(2+\varepsilon)}{\delta}} \right) \\ + C \left( \sum\limits_{k=1}^{m-1} \left( \frac{2}{M^2 |t|^{1-(k+1) \mu} }\right) + \frac{m x_{j-1}^{\frac{2-\delta }{\delta}}}{ M |t|^{1-\upsilon_0- \mu }} +\frac{m |t|^{\rho_0}}{ M |t|^{1- \mu}} \right) x_j^{\frac{(1-\delta)(1+\varepsilon)}{\delta}} . \end{multline*} We can continue the last estimate as follows: \begin{multline*} \sum\limits_{\alpha \in A_{2,j}} \left|\int^b_a h C_j^{\alpha} e^{iK^{\alpha}_j}dx \right| \lesssim |t|^{- m \mu+\frac{(1-\delta)(1+\varepsilon)}{\delta}} + |t|^{\mu+\frac{(1-\delta)(2+\varepsilon)}{\delta}-1} \\+|t|^{m \mu -1+\frac{(1-\delta)(1+\varepsilon)}{\delta}} +|t|^{ \upsilon_0 + \mu + \varepsilon +\frac{(1-\delta)(1+\varepsilon)}{\delta}- 1 } +|t|^{\rho_0 + \mu +\frac{(1-\delta)(1+\varepsilon)}{\delta} -1 }. \end{multline*} Then the proposition follows with $\varepsilon < \frac{\delta \sigma}{2}$ and $\mu = \frac{1}{2m}$ with $m$ sufficiently large, so that $\mu< \frac{ \sigma}{2}$ holds. \hspace{1cm} $\square$ Now consider the case $\frac{\pi}{2} \in [a,b]$. We have to exclude this critical point $\frac{\pi}{2}$.
For this we denote the interval $[\frac{\pi}{2}-\gamma, \frac{\pi}{2}+\gamma]$ by $J_{\gamma}$ ($\gamma \in (0, \frac{\pi}{2})$ is still to be chosen) and use the easy estimate \[ \left| \int_a^b h C^{\alpha}_j e^{i K^{\alpha}_j } dx \right| \le C \gamma \sup_{x \in (a,b)} \left| C^{\alpha}_j \right| + \left| \int_{[a,b] \setminus J_{\gamma} } h C^{\alpha}_j e^{i K^{\alpha}_j } dx \right|. \] The set $[a,b] \setminus J_{\gamma}$ is the union of at most two intervals, which do not contain the point $\frac{\pi}{2}$. We can therefore use for these intervals the previous results. We have only to notice that the following inequality holds: \[ \inf_{x \in [a,b] \setminus J_{\gamma} } |\cos x | \ge \cos (\frac{\pi}{2}-\gamma) = \sin \gamma. \] So we obtain instead of (\ref{res-long-est}) the estimate \begin{multline} \label{res-long-est-2} \left|\int\limits^b_a h C_j^{\alpha} e^{iK^{\alpha}_j}dx \right| \le C (\gamma+ \varepsilon_m) \sup\limits_{ x \in (a,b) } \left| C_j^{\alpha} \right| \\ +C \left( \sum\limits_{k=1}^{m-1} \frac{\varepsilon_{k-1}}{ |t| \varepsilon_k \sin \gamma} \sup\limits_{\begin{array}{c} x \in (a,b) \\ \delta=0,1 \end{array}} \left| ( C_j^{\alpha})^{(\delta)} \right| \right) \\ + \sum\limits_{k=1}^{m-1} \frac{C}{ \varepsilon_k \sin \gamma} \left( \frac{2 \varepsilon_k \sin \gamma+ \varepsilon_{k-1}}{ \varepsilon_k |t| \sin \gamma} + \frac{ \varepsilon_{k-1} x_{j-1}^{\frac{2-\delta }{\delta}}}{ |t|^{1-\upsilon_0}} +\frac{\varepsilon_{k-1}}{ |t|^{1-\rho_0}} \right) \sup\limits_{x \in (a,b) } \left| C_j^{\alpha} \right| . \end{multline} \begin{proposition} \label{p6.2} Suppose $j=m(t)+1$. Let $[a,b]$ be an arbitrary closed subinterval of $(0, \pi)$.
Then for each $\varepsilon>0$, $ \varpi >0$ there exist $t_0 >0$ and a constant $C$, such that for $|t|>t_0$ the following holds: \begin{equation*} \sum\limits_{\alpha \in A_{2,j}} \left| \int\limits^b_a h C_j^{\alpha} e^{iK^{\alpha}_j} dx \right| \le C |t|^{-\min\{ \varpi, \frac{1- 2 \varpi }{2}, \frac{2 \delta -1}{\delta}-\varpi, 1- \upsilon_0-\varpi, 1-\rho_0 -\varpi\} + \frac{(1-\delta)}{\delta}+\varepsilon}. \end{equation*} \end{proposition} {\it Proof}. For small $\gamma >0$ we have the inequality $\sin \gamma \ge \frac{\gamma}{2}$. With $\gamma=|t|^{-\varpi}$ we then obtain from (\ref{res-long-est-2}) the estimate \begin{multline*} \sum\limits_{\alpha \in A_{2,j}} \left|\int^b_a h C_j^{\alpha} e^{iK^{\alpha}_j}dx \right| \lesssim \left( (|t|^{-\varpi}+ |t|^{- m \mu}) x_j^{\frac{(1-\delta)(1+\varepsilon)}{\delta}} + \frac{m}{|t|^{1- \mu-\varpi} } x_j^{\frac{(1-\delta)(2+\varepsilon)}{\delta}} \right) \\ + \left( \sum\limits_{k=1}^{m-1} \left( \frac{3}{ |t|^{1-(k+1) \mu- 2\varpi} }\right) + \frac{m x_{j-1}^{\frac{2-\delta }{\delta}}}{ |t|^{1-\upsilon_0- \mu - \varpi }} +\frac{m |t|^{\rho_0}}{ |t|^{1- \mu-\varpi}} \right) x_j^{\frac{(1-\delta)(1+\varepsilon)}{\delta}} . \end{multline*} Using $j=m(t)+1$ and (\ref{mt}), we continue this as follows: \begin{multline*} \sum\limits_{\alpha \in A_{2,j}} \left|\int^b_a h C_j^{\alpha} e^{iK^{\alpha}_j}dx \right| \lesssim \left( |t|^{-\varpi}+ |t|^{- m \mu} + |t|^{\mu+\frac{1-\delta}{\delta} + \varpi-1} \right) |t|^{\frac{(1-\delta)(1+\varepsilon)}{\delta}} \\ +\left( |t|^{m \mu -1+2 \varpi } +|t|^{ \upsilon_0 + \mu+ \varpi + \varepsilon - 1 } +|t|^{\rho_0 + \mu +\varpi -1 } \right) |t|^{\frac{(1-\delta)(1+\varepsilon)}{\delta}} . \end{multline*} Then the proposition follows with $\mu = \frac{1-2 \varpi}{2m}$ and $m$ sufficiently large.
\hspace{1cm} $\square$ \section{The terms with large $\max |\alpha_r|$} The heading of this section refers to the terms of (\ref{term}) that were not considered in the previous two sections, that is, to the terms corresponding to those $\alpha \in \mathbb{Z}^j$ for which there exists an $r_0$ such that $|\alpha_{r_0}| > \gamma^j_{r_0}$. In this section we use the easy estimate \begin{equation} \label{easyest} \left| \int^b_a h(x) C_j^{\alpha}(x) e^{iK^{\alpha}_j} dx \right| \le C \prod\limits_{r=1}^j p_{0,r}^{-|\alpha_r|}. \end{equation} \noindent (Here $p_{0,r}$ is defined as in Section 3, that is, $p_{0,r} = \frac{\sqrt{v_r^2+4 a_0^2}}{v_r}$.) In the case $j>m(t)+1$ we then have the estimates \begin{multline} \label{restmult1} \left|\sum_{ \alpha \in A_{3,j}} \int\limits^b_a h(x) C_{j}^{\alpha}(x) e^{iK^{\alpha}_j} dx \right| \le \left( \sum_{r_0=1}^{j-1} \sum_{ \{\alpha \in \mathbb{Z}^j \, | \, |\alpha_{r_0}| \ge \gamma^j_{r_0} \} } C \prod\limits_{r=1}^j p_{0,r}^{-|\alpha_r|} \right) \\ \le C (j-1) \left( p_{0,j}^{- \min\limits_{r=1,...,j-1}(\gamma^j_r)} \prod\limits_{r=1}^j \left( \sum^{+ \infty}_{k=- \infty } p_{0,r}^{-|k|} \right) \right) \\ \le j C\left( p_{0,j}^{- \min\limits_{r=1,...,j-1}(\gamma^j_r)} \prod\limits_{r=1}^j \left(\frac{p_{0,r}+1}{p_{0,r}-1}\right) \right), \end{multline} where we have used the two-sided geometric sum $\sum^{+\infty}_{k=-\infty} p^{-|k|} = 1+\frac{2}{p-1} = \frac{p+1}{p-1}$ for $p>1$. In the cases $j=m(t)$ and $j=m(t)+1$ the following estimate is obtained similarly: \begin{equation} \label{eqrest1} \left|\sum_{ \alpha \in A_{3,j}} \int\limits^b_a h(x) C_{j}^{\alpha}(x) e^{iK^{\alpha}_j} dx \right| \le j C \left( p_{0,j}^{- \min\limits_{r=1,...,j}(\gamma^j_r)} \prod\limits_{r=1}^j \left( \frac{p_{0,r}+1}{p_{0,r}-1} \right) \right). \end{equation} For the moment we consider a slightly more general situation. \begin{lemma} \label{restlem1} Let $(\mu_j)$ be an arbitrary sequence of positive real numbers.
Then there exists a constant $C$ such that the following estimate holds: \[ j \left( p_{0,j}^{-\mu_j } \prod\limits_{r=1}^j \left(\frac{p_{0,r}+1}{p_{0,r}-1} \right) \right) \le C e^{-\mu_j \frac{2 a_0^2}{ v_j^2}} v_j^3. \] \end{lemma} {\it Proof.} From $\lim_{j \rightarrow +\infty} v_j= +\infty$ (a consequence of (\ref{xn})) we have the relation $\lim_{j \rightarrow \infty} \left(1+\frac{4 a_0^2}{v_j^2} \right)^{\frac{v_j^2}{4 a_0^2}} = e$. Therefore there exists a constant $\tilde C_1$ such that \[ p_{0,j}^{-\mu_j } = \left( \frac{v_j}{\sqrt{v_j^2+4 a_0^2}} \right)^{\mu_j} = \left( \left( \frac{1}{1+\frac{4 a_0^2}{v_j^2}}\right)^{\frac{v_j^2}{4 a_0^2} } \right)^{ \frac{1}{2} \mu_j \frac{4 a_0^2}{v_j^2}} \le \tilde C_1e^{-\frac{2 \mu_j a_0^2}{v_j^2}}. \] From (\ref{xn}) it follows that $\lim\limits_{j \rightarrow \infty} j a_0^{-2j} v_j^{-1} \prod\limits_{r=1}^{j-1} (v_r^{2}+a_0^2) =0$. In particular, there exists a constant $\tilde C_2$ such that \begin{multline} j \left( \prod\limits_{r=1}^j \left(\frac{p_{0,r}+1}{p_{0,r}-1} \right) \right) =j \left( \prod\limits_{r=1}^j \frac{(p_{0,r}+1)^2}{p^2_{0,r}-1} \right) =j \left( \prod\limits_{r=1}^j \frac{(\sqrt{v_r^2 +4 a_0^2}+v_r)^2}{4 a_0^2} \right) \\ \le j \left( \prod\limits_{r=1}^j \frac{v_r^2 +4 a_0^2 }{a_0^2} \right) \le \tilde C_2 v_j (v_j^2 +4 a_0^2) \le 2 \tilde C_2 v_j^3. \nonumber \end{multline} With $ C=2 \tilde C_1 \tilde C_2 $ we obtain the desired inequality. \hspace{2cm} $\square$ \vspace{1mm} We are now able to prove the main result of the present section. \begin{proposition} \label{p7.2} Let $\varepsilon$ be an arbitrary positive number and let $\delta \in (\frac{1}{2},1)$. Denote by $ G(t, \delta, \rho_0, \upsilon_0, \varepsilon)$ the value $|t|^{\min \{ 1-3\varepsilon,1+\upsilon_0-\varepsilon- \frac{1}{\delta}, 1+\rho_0 -2 \varepsilon -\frac{1}{\delta} \}} $.
Then there exists a constant $C$ such that \[ \sum_{j=m(t)}^{\infty} \sum_{ \alpha \in A_{3,j}} \left| \int\limits^b_a h(x) C_{j}^{\alpha}(x) e^{i K_{j}^{\alpha}} dx \right| \le C e^{-2 a_0^2 G(t, \delta, \rho_0, \upsilon_0, \varepsilon)}. \] \end{proposition} {\it Proof.} For large $j$ and $|t|$ we have, by (\ref{xn}) and (\ref{mt}), the following bounds for $\min (\gamma^j_r)$: in the case $j>m(t)+1$, \begin{equation*} \min\limits_{r=1,...,j-1}(\gamma^j_r)=x_j \left(4x_{j-1}^{\frac{2-\delta}{\delta}} \right)^{-1} \ge x_j^{1- \varepsilon} , \end{equation*} in the case $j=m(t)$, \begin{equation*} \min\limits_{r=1,...,j}(\gamma^j_r)= |t| \left( 2x_j^{\frac{2-\delta}{\delta}}\right)^{-1} \ge |t|^{1- \varepsilon} \end{equation*} and in the case $j=m(t)+1$, \begin{equation*} \min\limits_{r=1,...,j}(\gamma^j_r)= \min \{ |t|^{\upsilon_0} , \left( 2^{-1} |t| x_{j-1}^{\frac{\delta-2}{\delta}} \right)^{\rho_0} \} \ge |t|^{\min \{ \upsilon_0, \, \rho_0-\varepsilon \}} . \end{equation*} Using Lemma \ref{restlem1} with $\mu_j=\min \gamma^j_r$, we then obtain from (\ref{restmult1}) and (\ref{eqrest1}) the estimates \begin{multline*} \sum_{ \alpha \in A_{3,j}} \left| \int\limits^b_a h C_{j}^{\alpha} e^{i K_{j}^{\alpha}} dx \right| \lesssim x_j^{\frac{3(1-\delta)}{2 \delta}} e^{-2 a_0^2 x_j^{2- \varepsilon- 1/ \delta }}, \hspace{5mm} j >m(t)+1, \\ \sum_{ \alpha \in A_{3,j}} \left| \int\limits^b_a h C_{j}^{\alpha} e^{i K_{j}^{\alpha}} dx \right| \lesssim x_j^{\frac{3(1-\delta)}{2 \delta}} e^{-2 a_0^2 |t|^{1- \varepsilon} x_j^{1- 1/ \delta }}, \hspace{5mm} j =m(t), \\ \sum_{ \alpha \in A_{3,j}} \left| \int\limits^b_a h C_{j}^{\alpha} e^{i K_{j}^{\alpha}} dx \right| \lesssim x_j^{\frac{3(1-\delta)}{2 \delta}} e^{-2 a_0^2 |t|^{\min \{ \upsilon_0, \, \rho_0-\varepsilon \}} x_j^{1- 1/ \delta }}, \hspace{5mm} j =m(t)+1. \end{multline*} From the condition $\delta \in (\frac{1}{2},1)$ it follows that $2 - 1/ \delta >0$.
Because of the rapid growth of the $x_j$'s (condition (\ref{xn})), we can then conclude that the series \[ \sum_{j=m(t)+2}^{\infty} \, \sum_{ \alpha \in A_{3,j}} \left| \int\limits^b_a h C_{j}^{\alpha} e^{i K_{j}^{\alpha}} dx \right| \] is negligibly small in comparison with \[\sum_{ \alpha \in A_{3,m(t)}} \left| \int\limits^b_a h C_{m(t)}^{\alpha} e^{i K_{m(t)}^{\alpha}} dx \right|+ \sum_{ \alpha \in A_{3,m(t)+1}} \left| \int\limits^b_a h C_{m(t)+1}^{\alpha} e^{i K_{m(t)+1}^{\alpha}} dx \right|. \] Using the estimates above and (\ref{mt}), we estimate the last expression as follows: \[\sum_{j=m(t)}^{m(t)+1} \sum_{ \alpha \in A_{3,j}} \left| \int\limits^b_a h C_{j}^{\alpha} e^{i K_{j}^{\alpha}} dx \right| \lesssim \frac{x_{m(t)}^{\frac{3(1-\delta)}{2 \delta}}} {e^{2 a_0^2 |t|^{1- 2 \varepsilon}}} + \frac{|t|^{\frac{3(1-\delta)}{2 \delta}} }{e^{2 a_0^2 |t|^{\min \{ \upsilon_0, \, \rho_0-\varepsilon \}+1- 1/ \delta }} } .\] For $|t|$ sufficiently large we have the inequality $ |t|^{\frac{3(1-\delta)}{2 \delta}} \le e^{2 a_0^2 |t|^{ \varepsilon }}$, from which the proposition then follows with (\ref{mt}). \hspace{15mm} $\square$ \section{Proof of Theorem \ref{t1.2}} (i) We now have to specify the values $\rho_0$ and $\upsilon_0$. The smallest values for which Proposition \ref{p7.2} gives us the desired estimate are $\rho_0=\upsilon_0= \frac{1}{\delta}-1+\sigma$ with $\sigma>0$ (for all smaller values we have $G(t,\delta, \rho_0, \upsilon_0, \varepsilon) \le 0$, which contradicts the relation (\ref{Rajchman})). So we set $\rho_0=\upsilon_0= \frac{1}{\delta}-1+\sigma $ with $\sigma>0$ still to be specified. Then we can conclude from Proposition \ref{p7.2} that the contribution of the ``terms with large $\max |\alpha_r|$'' to (\ref{series}) can be estimated by $e^{-|t|^{\sigma/2}}$ (if $\varepsilon>0$ is chosen sufficiently small).
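To see this, note that with $\rho_0=\upsilon_0=\frac{1}{\delta}-1+\sigma$ the exponent in the definition of $G$ simplifies:
% worked check of the exponent of G after substituting rho_0 = upsilon_0 = 1/delta - 1 + sigma
\[ \min \Big\{ 1-3\varepsilon, \; 1+\upsilon_0-\varepsilon-\frac{1}{\delta}, \; 1+\rho_0 -2 \varepsilon -\frac{1}{\delta} \Big\} = \min \{ 1-3\varepsilon, \; \sigma-\varepsilon, \; \sigma-2\varepsilon \} = \sigma - 2\varepsilon \]
for small $\varepsilon$; choosing $\varepsilon < \sigma/4$ gives $\sigma-2\varepsilon > \sigma/2$, so that $e^{-2 a_0^2 G(t, \delta, \rho_0, \upsilon_0, \varepsilon)}$ is indeed dominated by $e^{-|t|^{\sigma/2}}$ for large $|t|$.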
As for the non-resonant terms, we see from Propositions \ref{p5.2}, \ref{p5.3} and \ref{p5.4} that the sum of these terms can be estimated by $C |t|^{-m}$ with arbitrary $m$ (we use (\ref{xn}) to conclude that the contribution of the integrals $\int_a^b h C^{\alpha}_j e^{i K^{\alpha}_j} dx$ with $j>m(t)+1$ is smaller than the contribution of these integrals with $j=m(t)$ and $j=m(t)+1$). So the crucial contribution comes from the resonant terms. Proposition \ref{p6.1} implies that the sum of these terms can be estimated by $C|t|^{-\min\{ \frac{3}{2} -\frac{1}{\delta}-\sigma, \, \, 3-\frac{2}{\delta}-2 \sigma \} }$. In the case $ \delta > \frac{2}{3}$ the last expression takes the form $C|t|^{ \frac{1}{\delta}-\frac{3}{2}+ \sigma }$. Thus (i) is proved. \vspace{1mm} \noindent (ii) We again set $\rho_0=\upsilon_0= \frac{1}{\delta}-1+\sigma$. Everything said about the non-resonant terms and the ``terms with large $\max |\alpha_r|$'' remains valid in this case without change. The crucial contribution again comes from the resonant terms; we only have to specify the value $\varpi$. From Proposition \ref{p6.2} we now have the estimate \[\sum\limits_{\alpha \in A_{2,m(t)+1}} \left| \int\limits^b_a h C_{m(t)+1}^{\alpha} e^{iK^{\alpha}_{m(t)+1}} dx \right| \le C |t|^{-\min\{ \varpi, \frac{1- 2 \varpi }{2}, 2- \frac{1}{\delta}-\varpi-\sigma \} + \frac{(1-\delta)}{\delta}+\sigma}. \] So we have to choose $\varpi$ such that $\min\{ \varpi, \frac{1- 2 \varpi }{2}, 2- \frac{1}{\delta}-\varpi-\sigma \}$ takes its largest value. This is achieved (if $\delta >2/3$) by $\varpi = 1/4$. We then have the estimate \[\sum\limits_{\alpha \in A_{2,m(t)+1}} \left| \int\limits^b_a h C_{m(t)+1}^{\alpha} e^{iK^{\alpha}_{m(t)+1}} dx \right| \le C |t|^{- \frac{5}{4} + \frac{1}{\delta}+\sigma}, \] from which statement (ii) follows.
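The value $\varpi=1/4$ is found by balancing the first two terms of the minimum: $\varpi = \frac{1-2\varpi}{2}$ gives $\varpi=\frac{1}{4}$, and for $\delta > \frac{2}{3}$ and small $\sigma$ the third term satisfies $2-\frac{1}{\delta}-\frac{1}{4}-\sigma > \frac{1}{4}$, so that the minimum equals $\frac{1}{4}$. The resulting exponent is
% worked check of the final exponent at the balance point \varpi = 1/4
\[ -\frac{1}{4} + \frac{1-\delta}{\delta} + \sigma = \frac{1}{\delta} - \frac{5}{4} + \sigma . \]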
(iii) We consider the case $t \notin \mathcal{R}$ and prove that in this case the set $A_{2,m(t)+1}$ of resonant terms is empty, so that the better estimate (of order $C |t|^{-m}$ with arbitrary $m$) is possible. Indeed, $t \notin \mathcal{R}$ implies $x_j^{\frac{\delta}{2\delta-1}+\varepsilon} < |t| < x_{j+1}/2$ for some $j$. By the definition of $m(t)$, this $j$ must equal $m(t)+1$. We now have to show that (\ref{defres}) holds for all $\alpha$ with $|\alpha_r| \le \gamma^{m(t)+1}_r$. Similarly to (\ref{estK'2}) we obtain \begin{multline*} \min\limits_{x \in [a, b]} |t \sin(x)+\sum^j_{r=1} \alpha_r x_r| \ge |t| a_0 -|\alpha_{m(t)+1}|x_{m(t)+1}- (|t|x_{j-1}^{\frac{\delta-2}{\delta}})^{\rho_0} \sum\limits^{j-1}_{r=1} x_r \\ \ge |t| a_0 -|t|^{\upsilon_0}x_{m(t)+1}-|t|^{\rho_0} x_{m(t)+1}^{\sigma} \end{multline*} for large $|t|$ and small $\sigma>0$. As in (i) and (ii), we set $\rho_0=\upsilon_0= \frac{1}{\delta}-1+\sigma$. So we can continue the previous inequality as follows: \begin{multline} \label{last-est-8} \min\limits_{x \in [a, b]} |t \sin(x)+\sum^j_{r=1} \alpha_r x_r| \ge |t| a_0-2|t|^{\frac{1}{\delta}-1+\sigma} x_{m(t)+1}\\= |t|^{\frac{1}{\delta}-1+\sigma} \left( |t|^{\frac{2 \delta-1}{\delta}-\sigma} a_0- 2 x_{m(t)+1} \right). \end{multline} From $|t| > x_{m(t)+1}^{\frac{\delta}{2\delta-1}+\varepsilon}$ it follows for sufficiently small $\sigma$ that $|t|^{\frac{2 \delta-1}{\delta}-\sigma} >x_{m(t)+1}^{1+\frac{\varepsilon}{2}}$. The relation (\ref{defres}) now follows from (\ref{last-est-8}), using (\ref{xn}). Thus $A_{2, m(t)+1}$ is empty, and the estimate \[ \left| (f\, d\rho)\, \widehat{ }\, (t) \right| \le C |t|^{- m} \] holds with some $C$ for arbitrary $m$.
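We note in passing the exponent identity behind the factorization in (\ref{last-est-8}):
% worked check of the exponents used when factoring out |t|^{1/\delta - 1 + \sigma}
\[ 1-\Big(\frac{1}{\delta}-1+\sigma\Big) = 2-\frac{1}{\delta}-\sigma = \frac{2\delta-1}{\delta}-\sigma, \]
so that $|t|\, a_0 = |t|^{\frac{1}{\delta}-1+\sigma} \, |t|^{\frac{2\delta-1}{\delta}-\sigma} a_0$, as used there.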
\hspace{1cm} $\square$ \vspace{2mm} \section{Proof of Theorem \ref{t1.3}} If the sequence $(v_n)$ from (\ref{sparse}) is bounded, there exists a constant $A>1$ such that $p_{0,r} \ge A$ for all $r$, and hence, with $B=1+\frac{2}{\sqrt A -1}$, \[\frac{p_{0,r}+1}{p_{0,r}-1} < \frac{ \sqrt{p_{0,r}}+1}{\sqrt{p_{0,r}}-1} \le B.\] For the derivatives of the coefficients we then have, in place of (\ref{est-of-coeff}), the estimate \begin{equation} \label{better-est} \sum_{\alpha \in \mathbb{Z}^{m(t)+1}} \sup\limits_{ \begin{array}{c} \, \kappa =0,...,k \\ x \in (a,b) \end{array} } \left|(C_{m(t)+1}^{\alpha}(x))^{(\kappa)} \right| \le \tilde P_k B^{m(t)+1}, \end{equation} where the constants $\tilde P_k$ are possibly different from the $\tilde P_k$'s in (\ref{est-of-coeff}). In the case of a bounded potential we can use the inequalities from Corollary \ref{c4.5} with $\delta=1$ (for the proof see \cite{KrRe}); correspondingly, we set $\delta=1$ in the definition of the values $\gamma_r^j$. It is easy to see that all results of Section 5 remain valid for bounded potentials as well, because the estimate (\ref{better-est}) is even better than the estimate (\ref{est-koeff-with-Taylor}). Therefore, in the case of a bounded potential the contribution of the non-resonant terms is also of order $C |t| ^{-m}$. So the only changes we have to make concern the estimates of Sections 6 and 7. We start with the modifications that have to be carried out in Section 7. We replace the estimates (\ref{restmult1}) and (\ref{eqrest1}) by the estimate \begin{equation*} \left|\sum_{ \alpha \in A_{3,j}} \int\limits^b_a h(x) C_{j}^{\alpha}(x) e^{iK^{\alpha}_j} dx \right| \le j C A^{- \min(\gamma^j_r)} B^j .
\end{equation*} Then the final estimate for the terms with large $ \max{|\alpha_r|}$ reads \[ \sum_{j=m(t)}^{\infty} \, \sum_{ \alpha \in A_{3,j}} \left| \int\limits^b_a h C_{j}^{\alpha} e^{i K_{j}^{\alpha}} dx \right| \le C A^{-|t|^{\min\{\upsilon_0, \rho_0 \}- 2 \varepsilon } }, \] where we have used the condition (\ref{xn2}) to obtain \begin{equation} \label{last-lim} \lim\limits_{t \rightarrow \pm \infty} (m(t)+1) B^{m(t)+1} A^{-|t|^{\varepsilon}} =0 \, \mbox{ for all } \, \varepsilon>0. \end{equation} So we can conclude that for any choice of $\upsilon_0 \in (0,1)$ and $ \rho_0 \in (0,1)$ the contribution of these terms to (\ref{series}) is smaller than the contribution of the non-resonant terms (we only have to set $\varepsilon= \min\{\upsilon_0, \rho_0 \}/3$ in the last inequality). As in the case of an unbounded potential, the crucial contribution again comes from the resonant terms. We start here from the inequality (\ref{res-long-est-2}) and use the inequality (\ref{better-est}) to obtain \begin{multline*} \sum_{\alpha \in A_{2,j}} \left|\int\limits^b_a h C_j^{\alpha} e^{iK^{\alpha}_j}dx \right| \lesssim B^j \left(\gamma+ \varepsilon_m + \sum\limits_{k=1}^{m-1} \frac{\varepsilon_{k-1}}{ |t| \varepsilon_k \sin \gamma} \right) \\ + \sum\limits_{k=1}^{m-1} \frac{1}{ \varepsilon_k \sin \gamma} \left( \frac{2 \varepsilon_k \sin \gamma+ \varepsilon_{k-1}}{ \varepsilon_k |t| \sin \gamma} + \frac{ \varepsilon_{k-1} x_{j-1}^{\frac{2-\delta }{\delta}}}{ |t|^{1-\upsilon_0}} +\frac{\varepsilon_{k-1}}{ |t|^{1-\rho_0}} \right) B^j, \end{multline*} where $j$ stands for $m(t)+1$.
We set $\varepsilon_k = |t|^{- k \mu}$ with $\mu =\frac{1-2 \varpi}{2m}$, $\gamma=|t|^{-\varpi}$ and $\upsilon_0=\rho_0 = \tau >0$ to obtain from the last inequality the following estimate: \begin{multline*} \sum\limits_{\alpha \in A_{2,m(t)+1}} \left|\int^b_a h C_{m(t)+1}^{\alpha} e^{iK^{\alpha}_{m(t)+1}}dx \right| \\ \lesssim \left(|t|^{-\varpi}+ |t|^{ \frac{2 \varpi-1}{2}} + |t|^{ \tau+\varpi-1} \right) |t|^{\sigma} B^{m(t)+1} .\end{multline*} It now only remains to set $\varpi=1/4$ and then to choose any $\tau$ from $(0, 1/4)$, because for all $\varepsilon>0$ we have from the condition (\ref{xn2}) the relation \[ \sup_{t \in \mathbb{R} \backslash [-1,1] } B^{m(t)+1} |t|^{-\varepsilon} < \infty. \hspace{1cm} \square \] \vspace{1mm} \noindent {\it Acknowledgments:} I thank H. Behncke and C. Remling for constant encouragement and stimulating discussions. \begin{thebibliography}{99} \bibitem{JiLa} S.\ Jitomirskaya, Y.\ Last: Power-law subordinacy and singular spectra. I: Half-line operators, Acta\ Math.\ {\bf 183} (1999), No.\ 2, 171--189. \bibitem{JiLa2} S.\ Jitomirskaya, Y.\ Last: Dimensional Hausdorff properties of singular continuous spectra, Phys.\ Rev.\ Lett.\ {\bf 76} (1996), No.\ 11, 1765--1769. \bibitem{KLS} A.\ Kiselev, Y.\ Last, and B.\ Simon: Modified Pr\"ufer and EFGP transforms and the spectral analysis of one-dimensional Schr\"odinger operators, Commun.\ Math.\ Phys.\ {\bf 194} (1998), 1--45. \bibitem{KrRe} D.\ Krutikov, C.\ Remling: Schr\"odinger Operators with Sparse Potentials: Asymptotics of the Fourier Transform of the Spectral Measure, Commun.\ Math.\ Phys.\ {\bf 223} (2001), No.\ 3, 509--532. \bibitem{L} Y.\ Last: Quantum dynamics and decompositions of singular continuous spectra, J.\ Funct.\ Anal.\ {\bf 142} (1996), 406--445. \bibitem{Ly} R.\ Lyons: Fourier-Stieltjes coefficients and asymptotic distribution modulo $1$, Ann.\ Math.\ {\bf 122} (1985), 155--170.
\bibitem{Pea1} D.B.\ Pearson: Singular continuous measures in scattering theory, Commun.\ Math.\ Phys.\ {\bf 60} (1978), 13--36. \bibitem{Pea2} D.B.\ Pearson: Value distribution and spectral analysis of differential operators, J.\ Phys.\ A {\bf 26} (1993), 4067--4080. \bibitem{SiSt} B.\ Simon, G.\ Stolz: Operators with singular continuous spectrum, V. Sparse potentials, Proc.\ Amer.\ Math.\ Soc.\ {\bf 124} (1996), 2073--2080. \end{thebibliography} \end{document}