Lecture notes on nonlocal equations

From nonlocal pde
Revision as of 07:47, 8 May 2012

Lecture 1

Definitions: linear equations

The first lecture serves as an overview of the subject and familiarizes us with the type of equations under study.

The aim of the course is to see some regularity results for elliptic equations. Most of these results can be generalized to parabolic equations as well. However, this generalization presents extra difficulties that involve nontrivial ideas.

The prime example of an elliptic equation is the Laplace equation. \[ \Delta u(x) = 0 \text{ in } \Omega.\]

Elliptic equations are those which have properties similar to the Laplace equation. This is a vague definition.

The class of fully nonlinear elliptic equations of second order consists of those of the form \[ F(D^2u, Du, u, x)=0 \text{ in } \Omega,\] for a function $F$ such that \[ \frac{\partial F}{\partial M_{ij}} > 0 \text{ and } \frac{\partial F}{\partial u} \leq 0.\]

These are the minimal monotonicity conditions for which you can expect a comparison principle to hold. The appropriate notion of weak solution, viscosity solutions, is based on this monotonicity.

What is the Laplacian? The most natural (coordinate independent) definition may be \[ \Delta u(x) = \lim_{r \to 0} \frac c {r^{n+2}} \int_{B_r} u(x+y)-u(x) dy.\]
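This characterization can be checked numerically. The sketch below is our own illustration, not part of the notes: for a $C^2$ function, the average of $u$ over $B_r(x)$ exceeds $u(x)$ by $\frac{r^2}{2(n+2)}\Delta u(x) + o(r^2)$, so averaging symmetric differences over the ball recovers $\Delta u$ (the function name and the chosen test function are ours).

```python
import random

def laplacian_from_averages(u, x, r, n_samples=100_000, seed=0):
    """Estimate Delta u(x) from averages over the ball B_r(x): the average of
    u over B_r(x) exceeds u(x) by r^2/(2(n+2)) * Delta u(x) + o(r^2)."""
    rng = random.Random(seed)
    n = len(x)
    acc = 0.0
    count = 0
    while count < n_samples:
        h = [rng.uniform(-r, r) for _ in range(n)]
        if sum(c * c for c in h) > r * r:
            continue                      # rejection sampling: uniform in B_r
        xp = [a + b for a, b in zip(x, h)]
        xm = [a - b for a, b in zip(x, h)]
        acc += 0.5 * (u(xp) + u(xm) - 2.0 * u(x))   # symmetric difference
        count += 1
    return 2.0 * (n + 2) / (r * r) * (acc / n_samples)

u = lambda p: p[0] ** 2 + 3.0 * p[1] ** 2   # Delta u = 2 + 6 = 8
est = laplacian_from_averages(u, [1.0, 1.0], r=0.1)
print(est)
```

The symmetric difference $u(x+h)+u(x-h)-2u(x)$ cancels the gradient term, which keeps the Monte Carlo variance small.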

A simple (although rather uninteresting) example of a nonlocal equation would be the following non infinitesimal version of the Laplace equation \[ \frac c {r^{n+2}} \int_{B_r} u(x+y)-u(x) dy = 0 \text{ for all } x \in \Omega.\]

The equation tells us that the value $u(x)$ equals the average of $u$ in the ball $B_r(x)$. A more general integral equation is a weighted version of the above. \[ \int_{\R^n} (u(x+y)-u(x)) K(y) dy = 0 \text{ for all } x \in \Omega.\] where $K:\R^n \to \R$ is a non negative kernel.

The equation says that $u(x)$ is a weighted average of the values of $u$ in a neighborhood of $x$. This is true in some sense for all elliptic equations, but it is most apparent for integro-differential ones.

For the Dirichlet problem, the boundary values have to be prescribed in the whole complement of the domain. \begin{align*} \int_{\R^n} (u(x+y)-u(x)) K(y) dy &= 0 \text{ for all } x \in \Omega, \\ u(x) &= g(x) \text{ for all } x \notin \Omega. \end{align*}
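A discrete analogue of this Dirichlet problem can be solved by iterating the weighted-average property directly. The following sketch is our own illustration (the truncation of the kernel and all names are ours); it also exhibits the maximum principle: the solution stays between the exterior values.

```python
def solve_nonlocal_dirichlet(g, s=1.0, M=10, N=30, iters=800):
    """Discrete analogue of the nonlocal Dirichlet problem on a 1-D grid:
    unknowns at the interior nodes |i| <= M, exterior data g prescribed on
    M < |i| <= N, kernel K(d) = |d|^(-1-s), truncated at the grid size.
    The equation sum_{j != i} (u_j - u_i) K(i-j) = 0 says that u_i is a
    weighted average of all the other values, so we iterate that fixed point."""
    u = {i: (0.0 if abs(i) <= M else g(i)) for i in range(-N, N + 1)}
    K = {d: abs(d) ** (-1.0 - s) for d in range(-2 * N, 2 * N + 1) if d != 0}
    for _ in range(iters):
        new = dict(u)
        for i in range(-M, M + 1):
            num = den = 0.0
            for j in range(-N, N + 1):
                if j != i:
                    num += K[j - i] * u[j]
                    den += K[j - i]
            new[i] = num / den
        u = new
    return u

u = solve_nonlocal_dirichlet(lambda i: 1.0 if i > 0 else 0.0)
interior = [u[i] for i in range(-10, 11)]
print(min(interior), max(interior))  # stays within [0, 1]: maximum principle
```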

This type of equation has a natural motivation from probability, as we will see below.

Probabilistic derivation

Let us start with an overview of how to derive the Laplace equation from Brownian motion.

Let $B_t^x$ be Brownian motion starting at the point $x$ and $\tau$ be the first time it hits the boundary $\partial \Omega$. If we call $u(x) = \mathbb E[g(B_\tau^x)]$ for some prescribed function $g: \partial \Omega \to \R$, then $u$ will solve the classical Laplace equation \begin{align*} \Delta u(x) &= 0 \text{ in } \Omega,\\ u(x) &= g(x) \text{ on } \partial \Omega. \end{align*}
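This probabilistic formula can be illustrated with a simple simulation, which is our own aside rather than part of the notes. In one dimension, a symmetric random walk on a grid plays the role of Brownian motion, and averaging the boundary values at the exit point reproduces the harmonic function with the given boundary data.

```python
import random

def harmonic_value(k, N=20, g0=0.0, g1=1.0, n_walks=20_000, seed=1):
    """Monte Carlo for the 1-D Dirichlet problem u'' = 0 on (0,1) with
    u(0) = g0, u(1) = g1: run symmetric random walks (a discrete Brownian
    motion) on the grid {j/N} starting at k/N until they exit, then
    average the boundary values seen at the exit point."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        j = k
        while 0 < j < N:
            j += 1 if rng.random() < 0.5 else -1
        total += g1 if j == N else g0
    return total / n_walks

est = harmonic_value(6)   # starting point x = 0.3
print(est)                # the harmonic function with these data is u(x) = x
```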

A variation would be to consider diffusions other than Brownian motion. If $X^x_t$ is the stochastic process given by the SDE $X_0^x = x$ and $dX_t^x = \sigma(X_t^x) \, dB_t$, and we define as before $u(x) = \mathbb E[g(X_\tau^x)]$, then $u$ will solve \begin{align*} a_{ij}(x) \partial_{ij} u(x) &= 0 \text{ in } \Omega,\\ u(x) &= g(x) \text{ on } \partial \Omega, \end{align*} where $a_{ij}(x) = \sigma^*(x) \sigma(x)$ is a non negative definite matrix for each point $x$.

Nonlinear equations arise from stochastic control problems. Say that we can choose the coefficients $a_{ij}(x)$ from a family of possible matrices $\{a_{ij}^\alpha\}$ indexed by a parameter $\alpha \in A$. For every point $x$, we can choose a different $a_{ij}(x)$ and our objective is to make $u(x)$ as large as possible. The maximum possible value of $u(x)$ will satisfy the equation \begin{align*} \sup_{\alpha} a_{ij}^\alpha \partial_{ij} u &= 0 \text{ in } \Omega,\\ u(x) &= g(x) \text{ on } \partial \Omega. \end{align*}

Sketch of the proof. If $v$ is any solution to \begin{align*} a_{ij}(x) \partial_{ij} v(x) &= 0 \text{ in } \Omega,\\ v(x) &= g(x) \text{ on } \partial \Omega. \end{align*} with $a_{ij}(x) \in \{a_{ij}^\alpha : \alpha \in A\}$, then from the equation that $u$ solves, we have \[ a_{ij}(x) \partial_{ij} u(x) \leq 0 \text{ in } \Omega. \] Therefore $u \geq v$ in $\Omega$ by the comparison principle for linear elliptic PDE.

Integro-differential equations are derived from discontinuous stochastic processes: Levy processes with jumps.

Let $X_t^x$ be a pure jump Levy process starting at $x$. Now $\tau$ is the first exit time from $\Omega$. The point $X_\tau$ may be anywhere outside of $\Omega$ since $X_t$ jumps. The jumps take place at random times determined by a Poisson process. The jumps in the directions $y \in A$, for any set $A \subset \R^n$, follow a Poisson process with intensity \[ \int_A K(y) dy. \] The kernel $K$ thus represents the frequency of jumps in each direction. This type of process is well understood and well studied in the probability community.

The small jumps may happen more often than large ones. In fact, small jumps may happen infinitely often and the stochastic process may still be well defined. This means that the kernels $K$ may have a singularity at the origin. The exact assumption one has to make is \[ \int_{\R^n} K(y) (1 \wedge |y|^2) dy < +\infty.\] The generator of the Levy process is \[ Lu(x) = \int_{\R^n} (u(x+y) - u(x) - y \cdot Du(x) \chi_{B_1}(y)) K(y) dy. \]

We may assume that $K(y)=K(-y)$ in order to simplify the expression. This assumption is not essential, but it makes the computations more compact. This way we can write \begin{align*} Lu(x) &= PV \int_{\R^n} (u(x+y) - u(x)) K(y) dy \\ &= \frac 12 \int_{\R^n} (u(x+y) + u(x-y) - 2u(x)) K(y) dy. \end{align*}

An optimal control problem for jump processes leads to the integro-differential Bellman equation \[ Iu(x) := \sup_{\alpha} \int_{\R^n}(u(x+y)-u(x)) K^\alpha(y) dy = 0 \text{ in } \Omega.\]

Another possibility is to consider a problem with two parameters, which are controlled by two competitive players. This is the integro-differential Isaacs equation. \[ Iu(x) := \inf_\beta \ \sup_{\alpha} \int_{\R^n}(u(x+y)-u(x)) K^{\alpha\beta}(y) dy = 0 \text{ in } \Omega.\]

Integral equations arise in several other contexts as well.

Uniform ellipticity

Regularity results require stronger monotonicity assumptions. For fully nonlinear elliptic equations of second order $F(D^2u)=0$, uniform ellipticity means that there exist two constants $\Lambda \geq \lambda > 0$ such that \[ \lambda I \leq \frac{\partial F}{\partial M_{ij}}(M) \leq \Lambda I.\]

Big Theorems:

  • Krylov-Safonov (1981): Solutions to fully nonlinear uniformly elliptic equations are $C^{1,\alpha}$ for some $\alpha>0$.
  • Evans-Krylov (1983): Solutions to convex fully nonlinear uniformly elliptic equations are $C^{2,\alpha}$ for some $\alpha>0$.

At the end of this course, we should be able to understand the proof of these two theorems and their generalizations to nonlocal equations.

We first need to understand what ellipticity means in an integro-differential equation. The prime example will be the fractional Laplacian. For $s \in (0,2)$, define \[ -(-\Delta)^{s/2} u(x) = \int_{\R^n} (u(x+y)-u(x)) \frac{c_{n,s}}{|y|^{n+s}} dy.\]

This is an integro-differential operator with a kernel which is radially symmetric, homogeneous, and singular at the origin.
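For $s = 1$ in one dimension, the definition can be probed numerically: since $\cos(x+y)+\cos(x-y) = 2\cos x \cos y$, the function $\cos$ is an eigenfunction of the fractional Laplacian, and the kernel integral reduces to $\cos(x)$ times the constant $2\int_0^\infty (\cos y - 1)/y^2 \, dy = -\pi$. The quadrature below is our own rough sketch, with the normalizing constant $c_{1,1}$ dropped.

```python
import numpy as np

# For u = cos and s = 1:  u(x+y) + u(x-y) - 2u(x) = 2 cos(x)(cos(y) - 1),
# so the integral defining -(-Delta)^{1/2} cos (without c_{1,1}) equals
# cos(x) * 2 * int_0^inf (cos y - 1)/y^2 dy = -pi cos(x).
y = np.logspace(-8, 4, 400_001)                  # log-spaced quadrature nodes
f = 2.0 * (np.cos(y) - 1.0) / y ** 2
I = float(np.sum((f[1:] + f[:-1]) * 0.5 * np.diff(y)))   # trapezoid rule
print(I)                      # close to -pi
for x in (0.0, 0.7, 1.3):
    print(x, I * np.cos(x))   # the operator applied to cos, evaluated at x
```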

A natural ellipticity condition for linear integro-differential operators would be to impose that the kernel is comparable to that of the fractional Laplacian. The condition could be \[ c_{s,n} \frac \lambda {|y|^{n+s}} \leq K(y) \leq c_{s,n} \frac \Lambda {|y|^{n+s}}, \text{ plus } K(y)=K(-y).\] But other conditions are possible.

Uniform ellipticity is linked to extremal operators. The classical Pucci maximal operators are the extremal operators among all uniformly elliptic ones which vanish at zero. \begin{align*} M^+(D^2 u) &= \sup_{\lambda I \leq \{a_{ij}\} \leq \Lambda I} a_{ij} \partial_{ij} u(x) = \Lambda \, tr(D^2u)^+ - \lambda \, tr(D^2u)^-,\\ M^-(D^2 u) &= \inf_{\lambda I \leq \{a_{ij}\} \leq \Lambda I} a_{ij} \partial_{ij} u(x) = \lambda \, tr(D^2u)^+ - \Lambda \, tr(D^2u)^-. \end{align*} Here $tr(D^2u)^{\pm}$ denotes the sum of the positive (resp. negative) eigenvalues of $D^2u$. A fully nonlinear equation $F(D^2u)=0$ is uniformly elliptic if and only if for any two symmetric matrices $X$ and $Y$, \[M^-(X-Y) \leq F(X) - F(Y) \leq M^+(X-Y).\]
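The sandwich characterization is easy to test numerically. The sketch below is our own illustration: it implements the Pucci operators through the eigenvalues of the matrix, takes a Bellman-type $F$ (a maximum of two linear operators with coefficient matrices between $\lambda I$ and $\Lambda I$), and checks the inequality on random symmetric matrices.

```python
import numpy as np

lam, Lam = 0.5, 2.0

def pucci_plus(M):
    """M^+(M) = Lam * (positive eigenvalues) + lam * (negative eigenvalues)."""
    e = np.linalg.eigvalsh(M)
    return Lam * e[e > 0].sum() + lam * e[e < 0].sum()

def pucci_minus(M):
    e = np.linalg.eigvalsh(M)
    return lam * e[e > 0].sum() + Lam * e[e < 0].sum()

# a sample uniformly elliptic F: a Bellman-type maximum of two linear operators
A1 = np.diag([lam, Lam, 1.0])
A2 = np.diag([1.0, lam, Lam])
F = lambda M: max(np.trace(A1 @ M), np.trace(A2 @ M))

rng = np.random.default_rng(0)
viol = 0.0
for _ in range(200):
    X = rng.standard_normal((3, 3)); X = X + X.T
    Y = rng.standard_normal((3, 3)); Y = Y + Y.T
    d = F(X) - F(Y)
    viol = max(viol, pucci_minus(X - Y) - d, d - pucci_plus(X - Y))
print("largest violation of the sandwich inequality:", viol)
```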

Given any family of kernels $\mathcal L$, we define \begin{align*} M_{\mathcal L}^+ u(x) &= \sup_{K \in \mathcal L} \int (u(x+y)-u(x)) K(y) dy, \\ M_{\mathcal L}^- u(x) &= \inf_{K \in \mathcal L} \int (u(x+y)-u(x)) K(y) dy. \end{align*} Thus, for a nonlocal operator $I$ (which is a black box that maps $C^2$ functions into continuous functions), we can say it is uniformly elliptic if for any two $C^2$ functions $u$ and $v$, \[ M_{\mathcal L}^- (u-v)(x) \leq Iu(x) - Iv(x) \leq M_{\mathcal L}^+ (u-v)(x).\]

The first choice of $\mathcal L$ would be the one described above \[ \mathcal L = \left\{ K : c_{s,n} \frac \lambda {|y|^{n+s}} \leq K(y) \leq c_{s,n} \frac \Lambda {|y|^{n+s}}, \text{ plus } K(y)=K(-y) \right\}.\]

In this case, the maximal operators take a particularly simple form

\begin{align*} M_{\mathcal L}^+ u(x) &= \frac{c_{n,s}}2 \int_{\R^n} \frac{\Lambda (u(x+y)+u(x-y)-2u(x))^+ - \lambda (u(x+y)+u(x-y)-2u(x))^-}{|y|^{n+s}} dy, \\ M_{\mathcal L}^- u(x) &= \frac{c_{n,s}}2 \int_{\R^n} \frac{\lambda (u(x+y)+u(x-y)-2u(x))^+ - \Lambda (u(x+y)+u(x-y)-2u(x))^-}{|y|^{n+s}} dy. \end{align*}
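These explicit formulas can be checked against a linear operator with a kernel in the class. The following sketch (ours, in dimension one, with the constant $c_{n,s}$ dropped and an arbitrarily chosen admissible kernel) verifies numerically that $M^-_{\mathcal L} u \leq Lu \leq M^+_{\mathcal L} u$ at a point.

```python
import numpy as np

# Any linear operator with kernel K satisfying
# lam/|y|^{1+s} <= K(y) <= Lam/|y|^{1+s}, K even, is sandwiched between the
# explicit extremal operators (dimension 1, constant c_{n,s} dropped).
lam, Lam, s, x = 0.5, 2.0, 1.5, 0.3
u = lambda t: np.exp(-t ** 2)

yp = np.logspace(-8, 2, 200_001)                 # quadrature nodes on (0, inf)
mid = 0.5 * (yp[1:] + yp[:-1])
w = np.diff(yp)
delta = u(x + mid) + u(x - mid) - 2.0 * u(x)     # second difference of u
base = mid ** (-1.0 - s)

# a kernel in the class: its ratio to |y|^{-1-s} stays inside [lam, Lam]
K = (lam + (Lam - lam) * np.abs(np.sin(mid))) * base
Lu = float(np.sum(delta * K * w))                # symmetric form of L u(x)

Mplus = float(np.sum((Lam * np.clip(delta, 0, None)
                      + lam * np.clip(delta, None, 0)) * base * w))
Mminus = float(np.sum((lam * np.clip(delta, 0, None)
                       + Lam * np.clip(delta, None, 0)) * base * w))
print(Mminus, Lu, Mplus)   # Mminus <= Lu <= Mplus
```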

For other choices of $\mathcal L$, the operators $M^+_{\mathcal L}$ and $M^-_{\mathcal L}$ may not have an explicit expression.

Lecture 2

Viscosity solutions

Definition. We say that $Iu \leq 0$ in $\Omega$ in the viscosity sense if, whenever there exists a function $\varphi : \R^n \to \R$ such that for some point $x \in \Omega$,

  1. $\varphi$ is $C^2$ in a neighborhood of $x$,
  2. $\varphi(x) = u(x)$,
  3. $\varphi(y) \leq u(y)$ everywhere in $\R^n$,

then $I\varphi(x) \leq 0$.

The point of the definition is to translate the difficulty of evaluating the operator $I$ into a smooth test function $\varphi$. In this way, the function $u$ is only required to be continuous (lower semicontinuous for the inequality $Iu \leq 0$). The function $\varphi$ is a test function touching $u$ from below at $x$.

The inequality $Iu \geq 0$ is defined analogously using test functions touching $u$ from above. A viscosity solution is a function $u$ for which both $Iu \leq 0$ and $Iu \geq 0$ hold in $\Omega$.

Viscosity solutions have the following basic properties:

  • Stability under uniform limits.

For second order equations this means that if $F_n(D^2 u_n) = 0$ in $\Omega$ and we have both $F_n \to F$ and $u_n \to u$ locally uniformly, then $F(D^2 u)=0$ also holds in the viscosity sense.

This is available under several sets of assumptions. Some are rather difficult to prove, such as the case of second order equations with variable coefficients.

The method can be applied to find the viscosity solution of the Dirichlet problem whenever the comparison principle holds and some barrier construction is available to ensure the boundary condition.

Let us analyze the case of integral equations. Whenever a test function $\varphi$ exists, there exist a vector $b$ ($=\nabla \varphi(x)$) and a constant $c$ ($=|D^2 \varphi(x)|$) such that, for $y$ in a neighborhood of the origin, \[ u(x+y) \leq u(x) + b \cdot y + c|y|^2.\] Therefore, the positive part of the integral \[ \int_{\R^n} (u(x+y) + u(x-y) - 2u(x))^+ K(y) dy \] has an $L^1$ integrand. The negative part can a priori integrate to $-\infty$. In any case, we can assign a value in $[-\infty,\infty)$ to the integral, and also to any expression of the form \[Iu(x) = \inf_\beta \ \sup_{\alpha} \int_{\R^n}(u(x+y)-u(x)) K^{\alpha\beta}(y) dy.\] Thus, the value of $Iu(x)$ can be evaluated classically. In the case that $I$ is uniformly elliptic, one can even show that the negative part of the integral is also finite. This small observation makes it more comfortable to deal with viscosity solutions of integro-differential equations than in the classical PDE case, since the equation can be evaluated directly on the solution $u$ at every point $x$ where there is a test function $\varphi$ touching $u$ either from above or below.

An open problem

Uniqueness with variable coefficients

Prove that the comparison principle holds for equations of the form \[ \inf_\alpha \ \sup_\beta \int_{\R^n} (u(x+y)-u(x)) K^{\alpha \beta}(x,y) dy = 0,\] under appropriate ellipticity and continuity conditions on the kernel $K$.

The closest result available, due to Cyril Imbert and Guy Barles, is for equations of the form \[ \inf_{\alpha} \ \sup_\beta \int_{\R^n} (u(x+j(x,y))-u(x)) K^{\alpha \beta}(x,y) dy = 0.\] Here $j$ is assumed to be essentially Lipschitz continuous with respect to $x$, among other nondegeneracy conditions on $j$ and $K$.

Second order equations as limits of integro-differential equations

We can recover second order elliptic operators as limits of integral ones. Consider \[ \lim_{s \to 2} \int_{\R^n} (u(x+y)-u(x)) \frac{(2-s)a(y/|y|)}{|y|^{n+s}} dy.\]

For $u \in C^3$, we write the expansion \[ u(x+y) = u(x) + Du(x) \cdot y + \frac 12 \, y^t \ D^2u(x)\ y + O(|y|^3).\]

Let us split the integral above in the domains $B_R$ and $\R^n \setminus B_R$ for some small $R>0$.

For the first part, since the first order term $Du(x) \cdot y$ integrates to zero by the symmetry of the kernel (for even $a$; otherwise take the symmetric part), we have \begin{align*} \int_{B_R} (u(x+y)-u(x)) \frac{(2-s)a(y/|y|)}{|y|^{n+s}} dy &= \int_{B_R} \left( \frac 12 \, y^t \ D^2u(x) \ y + O(|y|^3) \right) \frac{(2-s)a(y/|y|)}{|y|^{n+s}} dy \\ &= \frac 12 \int_0^R (2-s) \frac{r^2}{r^{n+s}} r^{n-1} \int_{\partial B_1} (\theta^t \ D^2u(x) \ \theta) \, a(\theta) \, d\theta \, dr + (2-s) O(R^{3-s}) \\ &= \frac{R^{2-s}}2 \int_{\partial B_1} (\theta^t \ D^2u(x) \ \theta) \, a(\theta) \, d\theta + (2-s) O(R^{3-s}). \end{align*}

Therefore, when we take $s\to 2$, we obtain \[ \lim_{s \to 2} \int_{B_R} (u(x+y)-u(x)) \frac{(2-s)a(y/|y|)}{|y|^{n+s}} dy = \frac 12 \int_{\partial B_1} (\theta^t \ D^2u(x) \ \theta) \, a(\theta) \, d\theta,\] which is a linear operator in $D^2u$, hence it equals $a_{ij} \partial_{ij}u$ for some matrix $a_{ij}$.
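The limit can be observed numerically. The sketch below is our own illustration in dimension one with $a \equiv 1$, where the limit operator is $u''(x)$: for $u = \sin$ the symmetric difference equals $2\sin(x)(\cos(y)-1)$, so the integral is one-dimensional, and the singular part near the origin is handled with the Taylor approximation $\cos(y)-1 \approx -y^2/2$.

```python
import numpy as np

def limit_operator(x, s, eps=1e-6, Y=1e3):
    """(2-s) * int_0^inf (u(x+y)+u(x-y)-2u(x)) / y^(1+s) dy  for u = sin, 1-D.
    For u = sin the symmetric difference is 2 sin(x)(cos(y)-1).  The singular
    piece over (0, eps) is added analytically via cos(y)-1 ~ -y^2/2."""
    y = np.logspace(np.log10(eps), np.log10(Y), 300_001)
    f = (np.cos(y) - 1.0) / y ** (1.0 + s)
    quad = float(np.sum((f[1:] + f[:-1]) * 0.5 * np.diff(y)))
    quad += -eps ** (2.0 - s) / (2.0 * (2.0 - s))   # contribution of (0, eps)
    return (2.0 - s) * 2.0 * np.sin(x) * quad

x = 0.5
for s in (1.9, 1.99, 1.999):
    print(s, limit_operator(x, s))   # approaches u''(x) as s -> 2
print("u''(x) =", -np.sin(x))
```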

Smooth approximations to viscosity solutions to fully nonlinear elliptic equations

One of the common difficulties one encounters when dealing with viscosity solutions is that it is difficult to make density type arguments. More precisely, a viscosity solution cannot be approximated by classical $C^2$ solutions in any standard way. We can do it, however, if we use nonlocal equations.

Given the equation \begin{align*} 0 = F(D^2u) &= \inf_\alpha \ \sup_\beta a^{\alpha \beta}_{ij} \partial_{ij} u\\ &= \frac \lambda 2 \Delta u + \inf_\alpha \ \sup_\beta b^{\alpha \beta}_{ij} \partial_{ij} u. \end{align*}

We approximate each linear operator $b^{\alpha \beta}_{ij} \partial_{ij} u$ by an integro-differential one \[b^{\alpha \beta}_{ij} \partial_{ij} u = \lim_{r\to 0} \int_{\R^n} (u(x+y)-u(x)) K_r^{\alpha \beta} dy,\] where \[ K_r^{\alpha \beta}(y) = \frac 1 {r^{n+2}} K^{\alpha \beta} \left( \frac y r \right),\] and each $K^{\alpha \beta}$ is smooth and compactly supported. Then, we approximate the equation with \[ \frac \lambda 2 \Delta u_r + \inf_\alpha \ \sup_\beta \int_{\R^n} (u_r(x+y)-u_r(x)) K_r^{\alpha \beta} dy = 0. \] For each $r>0$, the solution $u_r$ will be $C^{2,1}$ (very smooth), and $u_r \to u$ as $r \to 0$, where $u$ is the solution to $F(D^2 u)=0$.
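The approximation of a second order operator by scaled kernels can be tested directly. The sketch below is our own illustration in dimension one; the notes take $K$ compactly supported, while we use the rapidly decaying Gaussian $K(y) = e^{-y^2}$ for convenience. With $K_r(y) = r^{-3}K(y/r)$, the integral converges to $\frac12 u''(x) \int y^2 K(y)\,dy$ as $r \to 0$.

```python
import numpy as np

# Sketch: int (u(x+y)-u(x)) K_r(y) dy -> (1/2) u''(x) int y^2 K(y) dy
# as r -> 0, for the (non compactly supported) stand-in kernel K(y) = exp(-y^2).
u, x, r = np.sin, 0.5, 0.01
t = np.linspace(-10.0, 10.0, 20_001)        # substituted variable t = y / r
vals = (u(x + r * t) - u(x)) * np.exp(-t ** 2)
dt = t[1] - t[0]
# after substituting y = r t, the integral equals r^{-2} int (u(x+rt)-u(x)) K(t) dt
integral = float(np.sum((vals[1:] + vals[:-1]) * 0.5 * dt)) / r ** 2
expected = -0.5 * np.sin(x) * np.sqrt(np.pi) / 2.0   # (1/2) u''(x) * int t^2 e^{-t^2} dt
print(integral, expected)
```

The odd (gradient) term drops out because the kernel is even, which is why the limit only sees second derivatives.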

Regularity results, such as Harnack or $C^{1,\alpha}$, can be proved uniformly in $r$ bypassing the technical difficulties of viscosity solutions if we are willing to deal with integral equations.

Regularity of nonlinear equations: how to start

In order to show that the solution to a fully nonlinear equation $F(D^2 u)=0$ is $C^{1,\alpha}$ for some $\alpha>0$, we differentiate the equation and study the equation that the derivative satisfies. Formally, if we differentiate in an arbitrary direction $e$, \[ \frac{\partial F}{\partial M_{ij}} (D^2u) \partial_{ij} (\partial_e u) = 0.\]

If we call $a_{ij}(x) = \frac{\partial F}{\partial M_{ij}} (D^2u(x))$, we do not know much about these coefficients a priori (they are technically not even well defined), but we know that for all $x$ \[ \lambda I \leq a_{ij}(x) \leq \Lambda I,\] because of the uniform ellipticity assumption on $F$.

What we need is to prove that a solution to an equation of the form \[ a_{ij}(x) \partial_{ij} v = 0\] is Hölder continuous, with an estimate which depends on the ellipticity constants of $a_{ij}$ but is independent of any other property of $a_{ij}$ (no smoothness assumption can be made). This is the fundamental result by Krylov and Safonov.

Differentiating the equation

When we try to make the argument above rigorous, we encounter some technical difficulties. The first obvious one is that $\partial_e u$ may not be a well defined function. We must take incremental quotients. \[ v(x) = \frac{u(x+h)-u(x)}{|h|}.\] The coefficients of the equation may not be well defined either, but what can be shown is that \[ M^+(D^2 v) \geq 0 \text{ and } M^-(D^2 v) \leq 0,\] for the classical Pucci operators of order 2.

For fully nonlinear integro-differential equations, one gets the same thing with the appropriate extremal operators corresponding to the uniform ellipticity assumption. If $Iu=0$ in $B_1$ and $v$ is defined as above, then \[ M_{\mathcal L}^+(v) \geq 0 \text{ and } M_{\mathcal L}^-(v) \leq 0,\] wherever $x \in B_1$ and $x+h \in B_1$.

The challenge is then to find a Hölder estimate based on these two inequalities. The result says that if $v$ satisfies in the viscosity sense both inequalities $M_{\mathcal L}^+(v) \geq 0$ and $M_{\mathcal L}^-(v) \leq 0$ in (say) $B_1$, then $v$ is $C^\alpha(B_{1/2})$ with the estimate \[ \|v\|_{C^\alpha(B_{1/2})} \leq C \|v\|_{L^\infty(\R^n)}.\]

The fact that the $L^\infty$ norm is taken in the full space $\R^n$ is an unavoidable consequence of the fact that the equation is nonlocal. This feature does make the proof of $C^{1,\alpha}$ regularity more involved and it even forces us to add extra assumptions.

It is good to keep in mind that for smooth functions $v$, the two inequalities above are equivalent to the existence of some kernel $K(x,y)$ such that \[ \int_{\R^n} (v(x+y)-v(x)) K(x,y) dy = 0, \] and that $K(x,\cdot) \in \mathcal L$ for all $x$. But no assumption can be made about the regularity of $K$ with respect to $x$.

Hölder estimates

The proof of the Hölder estimates is relatively simple if we do not care about how the constants $C$ and $\alpha$ depend on $s$. If we want a robust estimate that passes to the limit as $s \to 2$, the proof is much harder. We will start with the simple case.

Let $\mathcal L$ be the usual class of kernels \[ \mathcal L = \left\{ K : c_{s,n} \frac \lambda {|y|^{n+s}} \leq K(y) \leq c_{s,n} \frac \Lambda {|y|^{n+s}}, \text{ plus } K(y)=K(-y) \right\}.\]

Let $u$ be a continuous function, bounded in $\R^n$, such that \begin{align*} M^+_{\mathcal L} u &\geq 0 \text{ in } B_1, \\ M^-_{\mathcal L} u &\leq 0 \text{ in } B_1, \end{align*} where both inequalities are understood in the viscosity sense.

Then, there are constants $C$ and $\alpha>0$ (depending only on $\lambda$, $\Lambda$, $n$ and $s$) such that \[ |u(x) - u(0)| \leq C |x|^\alpha \|u\|_{L^\infty(\R^n)}.\] There is nothing special about the point $0$. Thus, the estimate can be made uniformly in any set of points compactly contained in $B_1$.

Proof. By considering $u/(2\|u\|_{L^\infty})$, we may assume that $\|u\|_{L^\infty}=1/2$, so that $osc_{\R^n} u \leq 1$. We will prove that there is a constant $\theta>0$ such that \[ osc_{B_{2^{-k}}} u \leq (1-\theta)^k.\] The result then follows taking $\alpha = \log(1-\theta)/\log(1/2)$ and $C = 2(1-\theta)^{-1}$.

We will prove the above estimate for dyadic balls inductively. It is certainly true for $k \leq 0$ since $osc_{\R^n} u \leq 1 \leq (1-\theta)^k$. Now we assume it holds up to some value of $k$ and want to prove it for $k+1$.

In order to prove the inductive step, we rescale the function so that $B_{2^{-k}}$ corresponds to $B_1$. Let \[ v(x) = (1-\theta)^{-k} u(2^{-k} x) - c_k,\] where the constant $c_k$ is chosen so that $-1/2 \leq v \leq 1/2$ in $B_1$ (this is possible by the inductive hypothesis).

The scale invariance of $M^+_{\mathcal L}$ and $M^-_{\mathcal L}$ plays a crucial role here in that $v$ satisfies the same extremal equations as the original function $u$.

From the inductive hypothesis, $osc_{B_{2^{-j}}} u \leq (1-\theta)^j$ for all $j \leq k$, so we have that $osc_{B_{2^{j}}} v \leq (1-\theta)^{-j}$ for all $j \geq 0$.

There are two obvious ways in which the oscillation of $v$ in the ball of radius $1/2$ can be smaller than its oscillation in $B_1$: either the supremum of $v$ is smaller in $B_{1/2}$ or the infimum is larger. We prove one or the other depending on which of the sets $\{v < 0\} \cap B_1$ or $\{v > 0\} \cap B_1$ has larger measure. Let us assume the former. The other case follows by exchanging $v$ with $-v$. We want to prove now that $v \leq (1/2-\theta)$ in $B_{1/2}$.

Note that since we know that $osc_{B_{2^{j}}} v \leq (1-\theta)^{-j}$ for all $j \geq 0$, then \[ v(x) \leq (2|x|)^\alpha-1/2 \text{ for } x \notin B_1.\]

The point is to choose $\theta$ and $\alpha$ appropriately so that the following three points

  • $v(x) \leq (2|x|)^\alpha-1/2 \ \text{ for all } x \notin B_1$,
  • $|\{v < 0\} \cap B_1| > 1/2 |B_1|$,
  • $M^+_{\mathcal L} v \geq 0$ in $B_1$,

imply that $v \leq (1/2-\theta)$ in $B_{1/2}$.

If the implication holds for a particular choice of $\alpha$ and $\theta$, it also holds for smaller values of either. Thus, a posteriori, we can make one of them smaller so that $\alpha = \log(1-\theta)/\log(1/2)$.

Let $\rho$ be a smooth radial function supported in $B_{3/4}$ such that $\rho \equiv 1$ in $B_{1/2}$.

If $v \geq (1/2-\theta)$ at any point in $B_{1/2}$, then $(v+\theta \rho)$ would have a maximum in $B_1$ at a point $x_0 \in B_{3/4}$ for which $(v+\theta \rho)(x_0) \geq 1/2$: \[ \max_{B_1} (v+\theta \rho) = (v+\theta \rho)(x_0) \geq 1/2.\] In order to obtain a contradiction, we evaluate $M^+ (v+\theta \rho)(x_0)$.

On one hand, using $M^+_{\mathcal L} v \geq 0$, \begin{align*} M^+ (v+\theta \rho)(x_0) &\geq M^+ v(x_0) + \theta M^- \rho(x_0) \\ &\geq \theta \ \min_{B_{3/4}} \ M^- \rho. \end{align*}

On the other hand, the upper bound is more delicate. Let $w = v+\theta \rho$. Using that $t \mapsto \Lambda t^+ - \lambda t^-$ is convex and positively homogeneous, hence subadditive, together with the symmetry $y \mapsto -y$, \begin{align*} M^+ w(x_0) &= \frac{c_{n,s}}2 \int_{\R^n} \frac{\Lambda (w(x_0+y)+w(x_0-y)-2w(x_0))^+ - \lambda (w(x_0+y)+w(x_0-y)-2w(x_0))^-}{|y|^{n+s}} dy \\ &\leq c_{n,s} \int_{\R^n} \frac{\Lambda (w(x_0+y)-w(x_0))^+ - \lambda (w(x_0+y)-w(x_0))^-}{|y|^{n+s}} dy \\ &= c_{n,s} \left( \int_{x_0+y \notin B_1} (\dots) \, dy + \int_{x_0+y \in B_1} (\dots) \, dy \right). \end{align*}

The first integral can be bounded using that $v(x) \leq (2|x|)^\alpha-1/2$ for all $x \notin B_1$. In fact, it is arbitrarily small if $\alpha$ is chosen close to $0$. \[ \int_{x_0+y \notin B_1} (\dots) \, dy \leq \int_{x_0+y \notin B_1} ((2|x_0+y|)^\alpha-1) \frac{\Lambda}{|y|^{n+s}} dy \ll 1.\]

The second integral has a non positive integrand just because $(v+\theta \rho)$ takes its maximum in $B_1$ at $x_0$. But we can say more using the set $G = \{v < 0\} \cap B_1$. \begin{align*} \int_{x_0+y \in B_1} (\dots) \, dy &\leq \int_{x_0+y \in G} (\dots) \, dy + \int_{x_0+y \in B_1 \setminus G} (\dots) \, dy \\ &\leq \int_{x_0+y \in G} (\dots) \, dy = \int_{x_0+y \in G} - \lambda \frac{(w(x_0+y)-w(x_0))^-}{|y|^{n+s}} dy \\ &\leq \int_{x_0+y \in G} - \lambda \frac{1/2-\theta}{|y|^{n+s}} dy \leq -C. \end{align*} In the last inequality we use that $|y|^{-n-s}$ is bounded below for $x_0+y \in B_1$, $\theta$ is chosen less than $1/2$, and $|G|>|B_1|/2$.

So, for $\theta$ and $\alpha$ small enough, the sum of the two terms will be less than $\theta \min_{B_{3/4}} M^- \rho$, arriving at a contradiction. This finishes the proof.
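As a small arithmetic aside (ours, not part of the notes), the relation between $\theta$ and $\alpha$ used at the beginning of the proof can be checked directly: the choice $\alpha = \log(1-\theta)/\log(1/2)$ makes the dyadic decay and the Hölder modulus match.

```python
import math

# alpha = log(1-theta)/log(1/2) gives (1-theta)^k = (2^{-k})^alpha, and for a
# radius x in the dyadic shell 2^{-k-1} < |x| <= 2^{-k} the oscillation bound
# (1-theta)^k is dominated by (2|x|)^alpha.
theta = 0.1
alpha = math.log(1 - theta) / math.log(0.5)
for k in range(1, 12):
    decay = (1 - theta) ** k
    assert abs(decay - (2.0 ** -k) ** alpha) < 1e-12   # exact identity
    xr = 1.5 * 2.0 ** -(k + 1)                         # a radius in the shell
    assert decay <= (2.0 * xr) ** alpha + 1e-12
print("alpha =", alpha)
```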

Inspecting the proof above we see that the argument is much more general than presented. The only assumptions used on $\mathcal L$ are that:

  1. The extremal operators are scale invariant.
  2. For the smooth bump function $\rho$, $M^- \rho$ is bounded.
  3. $M^+ w(x_0)$ can be bounded above by a negative constant at a point $x_0 \in B_{3/4}$ which achieves the maximum of $w$ in $B_1$, provided that
    • $w(x) \leq w(x_0) + (2|x|)^\alpha-1$ for $x \notin B_1$,
    • $|\{w(x) \leq w(x_0)-1\} \cap B_1| \geq |B_1|/2$.

There are very general families of non local operators which satisfy those conditions above.