Lecture 6
The obstacle problem
The obstacle problem consists of finding the smallest supersolution of a non local equation which is constrained to remain above a given obstacle $\varphi$. The equation reads \begin{align*} &Lu \leq 0 \ \text{ in } \Omega, \\ &u \geq \varphi \ \text{ in } \Omega, \\ &Lu = 0 \ \text{ wherever } u>\varphi. \end{align*}
In order to have a well-posed problem, it should be complemented with a boundary condition. For example, we can consider the Dirichlet condition $u=\varphi$ in $\R^n \setminus \Omega$.
The definition of the problem already tells us how to prove the existence of the solution: we use Perron's method. Under reasonable assumptions on the non local operator $L$, it is possible to prove the uniqueness of solutions as well.
The regularity of both the solution $u$ and the free boundary $\partial \{u=\varphi\}$ is a much more delicate subject. It is in fact not well understood for generic non local operators $L$, even linear.
Note that the equation can be written as a Bellman equation, \[ \max(Lu,\varphi-u) = 0 \ \text{ in } \Omega.\]
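To unpack this formulation: at each point the maximum of the two quantities vanishes exactly when both are nonpositive and at least one of them is zero. This recovers the three conditions above, which can also be written in the complementarity form \[ Lu \leq 0, \qquad u - \varphi \geq 0, \qquad (u-\varphi)\, Lu = 0 \ \text{ in } \Omega.\]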
The equation models a problem in stochastic control known as the optimal stopping problem. We follow a Lévy process $X(t)$ with generator $L$ (assume it is linear). We are allowed to stop at any time while $X(t) \in \Omega$ but we must stop if at any time $X(t) \notin \Omega$. Whenever we stop at a time $\tau$, we are given the payoff $\varphi(X(\tau))$. Our objective is to maximize the expected payoff by choosing the best stopping time \[ u(x) := \sup_{\text{stopping time }\tau} \mathbb E \big[ \varphi(X(\tau)) \vert X(0)=x \big].\] The stopping time $\tau$ must fulfill the probabilistic definition of a stopping time. That is, it has to be measurable with respect to the filtration associated with $X$. In plain words, our decision to stop or continue must be based only on the current position $X(t)$ (the past is irrelevant by the Markov property and the future cannot be predicted).
Let us explain heuristically why $u$ solves the obstacle problem. For every $x \notin \Omega$, we are forced to stop the process immediately, thus naturally $u(x) = \varphi(x)$. If $x \in \Omega$, we have the choice to either stop or follow the process. If we choose to stop, we get $u(x)=\varphi(x)$. If we choose to continue, we get $Lu(x)=0$. Moreover, we make those choices because the other choice would give us a worse expectation. That is, if we choose to stop at $x$ then $Lu(x) \leq 0$, and if we choose to continue at $x$ then $u(x) \geq \varphi(x)$. These are all the conditions that define the obstacle problem.
The regularity is quite delicate even in the simplest cases. We will start by only considering the case $L = -(-\Delta)^s$ for $s \in (0,1)$. Moreover, we will take $\Omega = \R^n$ to avoid issues concerning the boundary. The problem is well posed in the full space $\R^n$ if $\varphi$ is compactly supported and $n > 1$.
Lipschitz regularity and semiconvexity
The next two propositions hold for generic non local operators $L$. In fact, not even the linearity of $L$ is used; only the convexity of $L$ is needed for the second proposition (the one on semiconvexity).
The first proposition says in particular that if $\varphi$ is Lipschitz, then also $u$ is Lipschitz with the same seminorm.
Proposition. Assume $\varphi$ has a modulus of continuity $\omega$. Then $u$ also has the modulus of continuity $\omega$.
Proof. We know that for all $x$ and $y$ in $\R^n$, \[ \varphi(x+y) + \omega(|y|) \geq \varphi(x).\] Since $u \geq \varphi$, we also have \[ u(x+y) + \omega(|y|) \geq \varphi(x).\] Fixing $y$, we regard the left-hand side of the inequality as a function of $x$ and observe that it is a supersolution of the equation which lies above $\varphi$. Therefore, from the definition of the obstacle problem, it is also above $u$ (recall that $u$ was the minimum of such supersolutions). Then \[ u(x+y) + \omega(|y|) \geq u(x).\] The fact that this holds for any values of $x,y \in \R^n$ means that $u$ has the modulus of continuity $\omega$.
□
The following proposition implies that if $\varphi$ is smooth, then $u$ is semiconvex in the sense that $D^2 u \geq -C \, \mathrm{I}$. By this inequality we mean that the function $u(x) + C \frac{|x|^2}2$ is convex.
Proposition. Assume $D^2 \varphi \geq -C \, \mathrm I$. Then also $D^2 u \geq -C \, \mathrm I$.
Proof. For any $x,y \in \R^n$ we have \[ \varphi(x+y) + \varphi(x-y) - 2 \varphi(x) \geq -C |y|^2.\] This can be rewritten as \[ \frac{\varphi(x+y)+\varphi(x-y)+C|y|^2} 2 \geq \varphi(x).\] Since $u \geq \varphi$, \[ \frac{u(x+y)+u(x-y)+C|y|^2} 2 \geq \varphi(x).\] This is the point at which the convexity of the equation plays a role. Notice that the obstacle problem can be written as a Bellman equation. It is, itself, a convex problem, meaning that the average of two solutions is a supersolution. Therefore, the left-hand side of the inequality is a supersolution above $\varphi$, and thus must be larger than $u$. \[ \frac{u(x+y)+u(x-y)+C|y|^2} 2 \geq u(x).\] This is precisely the fact that $D^2 u \geq -C \, \mathrm I$.
□
$C^{2s}$ regularity
For the classical Laplacian, the optimal regularity is $C^{1,1}$, which was originally proved by Frehse. The proof goes like this:
- On one hand, we have the semiconvexity: $D^2 u \geq -C \, \mathrm I$.
- On the other hand, we have from the equation: $\Delta u \leq 0$.
These two things combined give an estimate $|D^2 u| \leq C$.
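Indeed, the two bounds combine pointwise in terms of the eigenvalues $\lambda_1,\dots,\lambda_n$ of $D^2 u$ (defined almost everywhere by semiconvexity): \[ \lambda_i \geq -C \ \text{ for every } i \quad \text{and} \quad \sum_{j=1}^n \lambda_j = \Delta u \leq 0 \quad \Longrightarrow \quad \lambda_i \leq \sum_{j \neq i} (-\lambda_j) \leq (n-1)C,\] so every eigenvalue of $D^2 u$ lies in $[-C,(n-1)C]$ and $|D^2 u| \leq (n-1)C$.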
A similar argument works for the fractional Laplacian to prove $u \in C^{2s}$. However, this regularity is not optimal as soon as $s<1$; the optimal regularity would instead be $u \in C^{1+s}$. The argument goes like this:
- On one hand, $u$ is bounded and semiconvex: $D^2 u \geq -C \, \mathrm I$ and $u \in L^\infty$. Therefore $(-\Delta)^s u \leq C$.
- On the other hand, we have from the equation: $(-\Delta)^s u \geq 0$.
These two things combined say that $|(-\Delta)^s u| \leq C$. This is almost like $u \in C^{2s}$.
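To justify the first point, one can split the integral defining $(-\Delta)^s u$ (written in its symmetrized form, and up to the normalizing constant) into a near part controlled by semiconvexity and a far part controlled by the $L^\infty$ bound: \begin{align*} (-\Delta)^s u(x) &= \frac 12 \int_{\R^n} \frac{2u(x) - u(x+y) - u(x-y)}{|y|^{n+2s}} \, dy \\ &\leq \frac 12 \int_{B_1} \frac{C |y|^2}{|y|^{n+2s}} \, dy + \frac 12 \int_{\R^n \setminus B_1} \frac{4 \|u\|_{L^\infty}}{|y|^{n+2s}} \, dy \leq C'. \end{align*} The first integral converges because $2-2s>0$ and the second because $2s>0$.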
The precise estimate $u \in C^{2s}$ follows with a little extra work, but we will not need it.
Exercise 13. Let $u$ be a function such that for some constant $C$, $|u| \leq C$, $D^2 u \geq -C \, \mathrm I$ and $(-\Delta)^s u \geq 0$ in $\R^n$. Prove that there is a constant $\tilde C$ (depending only on $C$) so that for all $x \in \R^n$, \[ \int_{\R^n} \frac{|u(x+y) - u(x)|}{|y|^{n+2s}} dy \leq \tilde C.\] Recall from Exercise 11 that the conclusion above implies that $u \in C^{2s}$.
$C^{2s+\alpha}$ regularity
The next step is to prove that $u$ is a little bit more regular than $C^{2s}$. We will now show that $w := (-\Delta)^s u$ is $C^\alpha$ for some $\alpha>0$ small (the optimal $\alpha$ will appear in the next section).
Note that the function $w$ solves the following Dirichlet problem for $(-\Delta)^{1-s}$, \begin{align*} (-\Delta)^{1-s} w &= -\Delta \varphi \ \text{ inside } \{u=\varphi\},\\ w &= 0 \ \text{ in } \{u > \varphi\}. \end{align*}
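The first equation comes from (formally) composing the two fractional powers, $(-\Delta)^{1-s} (-\Delta)^s = -\Delta$, so that inside the contact set \[ (-\Delta)^{1-s} w = (-\Delta)^{1-s} (-\Delta)^s u = -\Delta u = -\Delta \varphi \ \text{ in the interior of } \{u=\varphi\},\] while the second one is just the equation for $u$: $w = (-\Delta)^s u = 0$ wherever $u > \varphi$.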
Here is an important distinction between the cases $s=1$ and $s<1$. If $s=1$, the function $w = \Delta u$ does not satisfy any useful equation, whereas here we can obtain some extra regularity from the equation for $w$.
We can observe here a heuristic reason for the optimal regularity. If we expect that $w$ will be continuous and the contact set $\{u=\varphi\}$ will have a smooth boundary, then the regularity of $w$ on the boundary $\partial \{u=\varphi\}$ is determined by the equation and should be $w \in C^{1-s}$. This corresponds precisely to $u \in C^{1+s}$. The proof will be long though, because we do not know a priori either that $\{u=\varphi\}$ has a smooth boundary or that $w$ is continuous across $\partial \{u=\varphi\}$.
It is convenient to rewrite the problem in terms of $u-\varphi$ instead of $u$. We will abuse notation and still call it $u$, which now solves an obstacle problem with zero obstacle but with a nonzero right hand side $\phi$. \begin{align*} u &\geq 0 \ \text{ in } \R^n, \\ (-\Delta)^s u &\geq \phi \ \text{ in } \R^n, \\ (-\Delta)^s u &= \phi \ \text{ wherever } u>0. \end{align*} We also call $w = (-\Delta)^s u$.
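Concretely, the new right hand side is $\phi := -(-\Delta)^s \varphi$. Indeed, subtracting the obstacle from the original problem, \begin{align*} u - \varphi &\geq 0, \\ (-\Delta)^s (u-\varphi) &= (-\Delta)^s u - (-\Delta)^s \varphi \geq -(-\Delta)^s \varphi, \\ (-\Delta)^s (u-\varphi) &= -(-\Delta)^s \varphi \ \text{ wherever } u-\varphi>0, \end{align*} which is the system above once we rename $u-\varphi$ as $u$.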
Recall that $D^2 u \geq -C \, \mathrm I$ for some constant $C$. In particular, since $(-\Delta)^{1-s} w = -\Delta u$, we have $(-\Delta)^{1-s} w \leq C$ for some $C$.
In this section, we will show that $w \in C^\alpha$ for some small $\alpha>0$, which implies that $u \in C^{2s+\alpha}$. We will prove it with an iterative improvement of oscillation scheme. That is, we will show that for any point $x$, \[ \mathrm{osc}_{B_{2^{-k}}(x)} w \leq C (1-\theta)^k,\] for some $\theta>0$.
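Such a geometric decay of the oscillation is what gives a Hölder modulus: if $\mathrm{osc}_{B_{2^{-k}}(x)} w \leq C (1-\theta)^k$ for every integer $k \geq 0$, then for every $r \in (0,1)$, \[ \mathrm{osc}_{B_r(x)} w \leq C' r^{\beta} \qquad \text{with } \beta = \log_2 \frac 1{1-\theta},\] which follows by choosing $k$ so that $2^{-k-1} < r \leq 2^{-k}$. Thus any $\theta>0$ produces some positive Hölder exponent.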
The location of $x$ is irrelevant, so let us center these balls at the origin. Naturally, the most delicate position at which to achieve regularity is the free boundary $FB := \partial \{u > 0\}$, so let us concentrate on the case $0 \in FB$.
The following lemma plays an important role in the proof.
Lemma. Given $\mu>0$ and $u$ a semiconvex function (i.e. $D^2 u \geq -C \, \mathrm I$), if $u(x_0) \geq \mu r^2$ at some point $x_0 \in B_r$, then \[ |\{ x \in B_{2r} : u(x) > 0 \} | \geq \delta |B_r|.\] Here, the constant $\delta$ depends on $\mu$ and $C$ only.
We leave the proof as an exercise.
The right hand side $\phi$ is a smooth function. We will use in particular that it is Lipschitz. By an appropriate scaling of the solution (i.e. replacing $u(x)$ and $w(x)$ by $cr^{-2s}u(rx)$ and $cw(rx)$ with $r \ll 1$ and $c$ chosen so that $c \, \mathrm{osc}_{\R^n} w = 1$), we can assume that \begin{equation*} \label{allsmall} \begin{aligned} D^2 u &\geq - \varepsilon \, \mathrm I, \\ (-\Delta)^{1-s} w &\leq \varepsilon, \\ |\phi(x) - \phi(y)| &\leq \varepsilon |x-y|, \text{ for all } x,y, \\ \mathrm{osc}_{\R^n} w &\leq 1. \end{aligned} \end{equation*} Here $\varepsilon$ is arbitrarily small. All these estimates except the last one will remain valid for all scaled solutions in our iteration. Instead of the last one, we will keep along the iteration the following estimate, which is common for non local problems, \[ \mathrm{osc}_{B_{2^k}} w \leq 2^{\alpha k} \ \text{ for all integers } k \geq 1.\] Here $\alpha$ is a small positive number to be determined later. We will do an iterative improvement of oscillation scheme similar to the proof of the Hölder estimates which we saw in Lecture 2. In order to carry this out, we need to prove that under these assumptions \[ \mathrm{osc}_{B_{1/2}} w \leq (1-\theta) \ \text{ for some } \theta>0.\]
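One can check how each quantity transforms under this scaling. Writing $\tilde u(x) = c r^{-2s} u(rx)$ and $\tilde w(x) = c \, w(rx)$, a direct computation gives \begin{align*} (-\Delta)^s \tilde u(x) &= c \, \big( (-\Delta)^s u \big)(rx) = \tilde w(x), \\ D^2 \tilde u(x) &= c \, r^{2-2s} D^2 u(rx) \geq -c \, r^{2-2s} C \, \mathrm I, \\ (-\Delta)^{1-s} \tilde w(x) &= c \, r^{2-2s} \big( (-\Delta)^{1-s} w \big)(rx) \leq c \, r^{2-2s} C, \end{align*} and the corresponding right hand side is $\tilde \phi(x) = c\, \phi(rx)$, whose Lipschitz constant is $c \, r$ times that of $\phi$. Since $2-2s>0$, for a fixed $c$ all these factors go to zero as $r \to 0$.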
We know that $(-\Delta)^{1-s} w \leq \varepsilon$. So, $w$ is a subsolution to a non local equation of order $2-2s$. There is one easy case of improvement of oscillation. Assume that \begin{equation} \label{good_case} | \{ w - \min_{B_1} \, w < 1/2\} \cap B_1 | \geq \delta |B_1|, \end{equation} for some $\delta>0$. Then we can literally use the result of Lecture 2 to conclude that $w < (1-\theta) + \min_{B_1} w$ in $B_{1/2}$ and obtain the improvement of oscillation.
We will now prove that \eqref{good_case} is in fact always true provided that $\delta$ is chosen sufficiently small (and $0 \in FB$). If there is some point in $B_1$ where $u>0$ (this is certainly true if $0 \in FB$), then $w(x)=\phi(x)$ at some point $x \in B_1$. Note that since $w \geq \phi$ everywhere and $\phi$ has a very small oscillation in $B_1$ (bounded by the arbitrarily small constant $\varepsilon$), the minimum value of $w$ in $B_1$ is approximately the same as the value of $\phi(0)$, or any other value of $\phi$ in $B_1$.
Assuming that \eqref{good_case} is not true, then the opposite must hold, \[ | \{ w - \min_{B_1} \, w > 1/2\} \cap B_1 | > (1-\delta) |B_1|. \] In particular, since $w=\phi \approx \min_{B_1} w$ where $u>0$, we deduce \[ | \{ u=0 \} \cap B_1 | > (1-\delta) |B_1|. \] From the Lemma stated above, $u<\mu$ in $B_{1/2}$ for some $\mu$ (arbitrarily small) depending on $\varepsilon$ and $\delta$.
Let $z$ be a point where $u(z)>0$ and $|z| \ll 1$. Let $x_0$ be the point where the (positive) maximum value of $u(x) - 100 \mu |x-z|^2$ is achieved in $B_{1/2}$. Since $u < \mu$, we see that necessarily $x_0 \in B_{1/10}$. Since $u(x_0)>0$, we must have \begin{align*} \phi(x_0) &= w(x_0) = (-\Delta)^s u(x_0) \\ &= \int_{\R^n} \frac{u(x_0) - u(y)}{|x_0-y|^{n+2s}} dy \\ &= \int_{B_{1/2}} \frac{u(x_0) - u(y)}{|x_0-y|^{n+2s}} dy + \int_{\R^n \setminus B_{1/2}} \frac{u(x_0) - u(y)}{|x_0-y|^{n+2s}} dy. \end{align*} Note that the first integral is bounded by the fact that $u$ is touched from above by a parabola of opening $100 \mu$ in $B_{1/2}$. For the second integral, we use only that $u(x_0) \geq 0$, \[ \phi(x_0) \geq - C \mu + \int_{\R^n \setminus B_{1/2}} \frac{- u(y)}{|x_0-y|^{n+2s}} dy. \] Since we assume that \eqref{good_case} does not hold, we know that there will be a point $x_1$ so that $|x_1-x_0| < C \delta^{1/n}$ and $w(x_1) > \min_{B_1} w + 1/2$. In particular $u(x_1)=0$. Therefore \begin{align*} \phi(x_0) - \varepsilon + 1/2 &< w(x_1) = (-\Delta)^s u(x_1) \\ &= \int_{\R^n} \frac{- u(y)}{|x_1-y|^{n+2s}} dy \\ &\leq \int_{\R^n \setminus B_{1/2}} \frac{-u(y)}{|x_1-y|^{n+2s}} dy \\ &\leq \int_{\R^n \setminus B_{1/2}} \frac{-u(y)}{|x_0-y|^{n+2s}} dy + C |x_0-x_1|. \end{align*} The last term is somewhat delicate since we are not keeping track of the growth of $u$ at infinity. We need to make the estimate depend only on what we know about $w$. We omit the details of this computation.
We have obtained \begin{align*} \phi(x_0) &\geq - C \mu + \int_{\R^n \setminus B_{1/2}} \frac{- u(y)}{|x_0-y|^{n+2s}} dy,\\ \phi(x_0) - \varepsilon + 1/2 &< \int_{\R^n \setminus B_{1/2}} \frac{-u(y)}{|x_0-y|^{n+2s}} dy + C \delta^{1/n}. \end{align*} We get a contradiction if $\mu$, $\delta$ and $\varepsilon$ are chosen small enough.
This contradiction means that the inequality \eqref{good_case} must always hold, and we can always carry out the improvement of oscillation.
Almost optimal regularity
In the notation of the previous section, we have a continuous function $w$ such that \begin{align*} (-\Delta)^{1-s} w &= 0 \ \text{ inside the contact set } \{u=0\}, \\ w &= \phi \ \text{ in } \{u > 0\}. \end{align*}
The regularity of fractional harmonic functions with smooth Dirichlet conditions in smooth domains is well understood (of course we have not yet established, nor will we ever, that $\{u=0\}$ has a smooth boundary). This can be achieved by constructing appropriate barriers. The Poisson kernel of the fractional Laplacian is explicit in the upper half space, which allows us to make direct computations. That is, if we want a function $B$ solving \begin{align*} B &= f \ \text{ in the lower half space } \{x_n \leq 0\}, \\ (-\Delta)^\sigma B &= 0 \ \text{ in the upper half space } \{x_n > 0\}, \end{align*} then $B$ is given by the formula \[ B(x) = \int_{\{y_n \leq 0\}} f(y) P(x,y) dy.\] Here $P$ is the explicit Poisson kernel \[ P(x,y) = C_{n,\sigma} \left( \frac {x_n} {|y_n|} \right)^{\sigma} \frac 1 {|x-y|^n}.\]
We see that $B$ will be as smooth as $f$ (even locally) up to $C^\sigma$, which is the maximum regularity that can be expected at the boundary.
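A standard explicit example consistent with this $C^\sigma$ threshold, and a useful building block for barriers, is the function $(x_n)_+^\sigma$: \[ (-\Delta)^\sigma (x_n)_+^\sigma = 0 \ \text{ in } \{x_n > 0\},\] and $(x_n)_+^\sigma$ is $C^\sigma$ across $\{x_n=0\}$ but no better.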
Using this, if the contact set $\{u=0\}$ is convex, we can easily construct barriers to prove that $w \in C^{1-s}$, and therefore $u \in C^{1+s}$.
In general we do not know whether $\{u=0\}$ is convex. However, based on the semiconvexity of $u$, we can prove an almost convexity result for its level sets. This is the content of the following lemma.
Lemma. Let $w \in C^\alpha$ as before and $u(x_0)=0$. There is a small $\delta>0$ (depending on $\alpha$ and $s$) and a constant $C$ such that $x_0$ does not belong to the convex envelope of the set \[ \{x \in B_r(x_0) : w(x) > w(x_0) + C r^{\alpha+\delta} \}.\]
The previous lemma says in some sense that the level sets of $u$ are almost convex up to a correction. This estimate allows us to construct certain barrier functions (a tedious but elementary computation) and improve the initial $C^\alpha$ estimate on $w$ to a $C^{\tilde \alpha}$ estimate, where $\tilde \alpha > \alpha$ can be computed explicitly. Iterating this procedure, we can obtain a sequence of estimates which converge to (but never reach) $w \in C^{1-s}$. That is, we can prove that $w \in C^\alpha$ for all $\alpha < 1-s$. In other words, $u \in C^{1+\alpha}$ for all $\alpha < s$.
The optimal regularity $u \in C^{1+s}$ is also true. But we will need to develop completely different tools (related to the thin obstacle problem) in order to prove it.