Lecture 6
The obstacle problem
The obstacle problem consists in finding the smallest super-solution to a non local equation among those constrained to remain above a given obstacle $\varphi$. The equation reads \begin{align*} &Lu \leq 0 \ \text{ in } \Omega, \\ &u \geq \varphi \ \text{ in } \Omega, \\ &Lu = 0 \ \text{ wherever } u>\varphi. \end{align*}
In order to have a well posed problem, it should be complemented with a boundary condition. For example, we can consider the Dirichlet condition $u=\varphi$ in $\R^n \setminus \Omega$.
The definition of the problem already tells us how to prove the existence of the solution: we use Perron's method. Under reasonable assumptions on the non local operator $L$, it is possible to prove the uniqueness of solutions as well.
The regularity of both the solution $u$ and the free boundary $\partial \{u=\varphi\}$ is a much more delicate subject. It is in fact not well understood for generic non local operators $L$, even linear.
Note that the equation can be written as a Bellman equation, \[ \max(Lu,\varphi-u) = 0 \ \text{ in } \Omega.\]
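To see that this single equation encodes the three conditions above, note that the maximum of two quantities is nonpositive exactly when both are, and it vanishes exactly when at least one of them does. That is, \[ \max(Lu,\varphi-u) = 0 \quad \Longleftrightarrow \quad Lu \leq 0, \quad u \geq \varphi, \quad \text{and } \ Lu = 0 \text{ wherever } u > \varphi.\]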
The equation models a problem in stochastic control known as the optimal stopping problem. We follow a Lévy process $X(t)$ with generator $L$ (assume it is linear). We are allowed to stop at any time while $X(t) \in \Omega$, but we must stop as soon as $X(t) \notin \Omega$. When we stop at a time $\tau$, we receive the payoff $\varphi(X(\tau))$. Our objective is to maximize the expected payoff by choosing the best stopping time \[ u(x) := \sup_{\text{stopping time }\tau} \mathbb E \big[ \varphi(X(\tau)) \vert X(0)=x \big].\] The stopping time $\tau$ must fulfill the probabilistic definition of stopping time, that is, it has to be measurable with respect to the filtration associated with $X$. In plain words, our decision to stop or continue at time $t$ must be based only on the trajectory of the process up to time $t$; by the Markov property the past is irrelevant and only the current position $X(t)$ matters, while the future cannot be predicted.
Let us explain heuristically why $u$ solves the obstacle problem. For every $x \notin \Omega$, we are forced to stop the process immediately, thus naturally $u(x) = \varphi(x)$. If $x \in \Omega$, we have the choice to either stop or follow the process. If we choose to stop, we get $u(x)=\varphi(x)$. If we choose to continue, we get $Lu(x)=0$. Moreover, we make those choices because the other choice would not give us a better expectation. That is, if we choose to stop at $x$ then $Lu(x) \leq 0$, and if we choose to continue at $x$ then $u(x) \geq \varphi(x)$. These are all the conditions that define the obstacle problem.
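A heuristic way to condense the discussion above into formulas is the dynamic programming principle. For $x \in \Omega$ and a small time $h>0$, we may either stop immediately or wait for time $h$ and then proceed optimally, so \[ u(x) \geq \varphi(x) \qquad \text{and} \qquad u(x) \geq \mathbb E \big[ u(X(h)) \,\vert\, X(0)=x \big], \] with equality in at least one of the two depending on which choice is optimal (ignoring the possibility of exiting $\Omega$ before time $h$). Since $L$ is the generator of $X(t)$, we have $\mathbb E \big[ u(X(h)) \,\vert\, X(0)=x \big] = u(x) + h \, Lu(x) + o(h)$, and letting $h \to 0$ formally recovers $\max(Lu, \varphi-u)=0$ in $\Omega$.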
The regularity is quite delicate even in the simplest cases. We will start by only considering the case $L = -(-\Delta)^s$ for $s \in (0,1)$. Moreover, we will take $\Omega = \R^n$ to avoid issues concerning the boundary. The problem is well posed in the full space $\R^n$ if $\varphi$ is compactly supported and $n > 1$.
Lipschitz regularity and semiconvexity
The next two propositions hold for generic non local operators $L$. In fact, the linearity of $L$ is not even used; only the convexity of $L$ is needed for the second proposition (the one on semiconvexity).
The first proposition says in particular that if $\varphi$ is Lipschitz, then also $u$ is Lipschitz with the same seminorm.
Proposition. Assume $\varphi$ has a modulus of continuity $\omega$. Then $u$ has the same modulus of continuity $\omega$.
Proof. We know that for all $x$ and $y$ in $\R^n$, \[ \varphi(x+y) + \omega(|y|) \geq \varphi(x).\] Since $u \geq \varphi$, we also have \[ u(x+y) + \omega(|y|) \geq \varphi(x).\] Fixing $y$, we can regard the left hand side of the inequality as a function of $x$, and we realize it is a supersolution of the equation which stays above $\varphi$ (for a translation invariant $L$, translating $u$ and adding a constant does not affect the inequality $Lu \leq 0$). Therefore, from the definition of the obstacle problem, it is also above $u$ (recall that $u$ is the smallest such supersolution). Then \[ u(x+y) + \omega(|y|) \geq u(x).\] The fact that this holds for all $x,y \in \R^n$ means that $u$ has the modulus of continuity $\omega$. □
The following proposition implies that if $\varphi$ is smooth, then $u$ is semiconvex in the sense that $D^2 u \geq -C \, \mathrm{I}$. By this inequality we mean that the function $u(x) + C \frac{|x|^2}2$ is convex.
Proposition. Assume $D^2 \varphi \geq -C \, \mathrm I$. Then also $D^2 u \geq -C \, \mathrm I$.
Proof. For any $x,y \in \R^n$ we have \[ \varphi(x+y) + \varphi(x-y) - 2 \varphi(x) \geq -C |y|^2.\] This can be rewritten as \[ \frac{\varphi(x+y)+\varphi(x-y)+C|y|^2} 2 \geq \varphi(x).\] Since $u \geq \varphi$, \[ \frac{u(x+y)+u(x-y)+C|y|^2} 2 \geq \varphi(x).\] This is the point where the convexity of the equation plays a role. Notice that the obstacle problem can be written as a Bellman equation. It is, itself, a convex problem, meaning that the average of two supersolutions is again a supersolution. Therefore, the left hand side of the inequality is a supersolution above $\varphi$, and hence it must be larger than $u$: \[ \frac{u(x+y)+u(x-y)+C|y|^2} 2 \geq u(x).\] Since this holds for all $x,y \in \R^n$, it is precisely the statement that $D^2 u \geq -C \, \mathrm I$. □
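For concreteness, here is the supersolution check behind the step in the proof above, under the assumptions that $L$ is translation invariant, unaffected by adding constants, and convex in the sense that $L\big(\frac{u_1+u_2}{2}\big) \leq \frac{Lu_1+Lu_2}{2}$. Writing $v(x) := \frac{u(x+y)+u(x-y)+C|y|^2}{2}$ for a fixed $y$, \[ Lv(x) \leq \frac{Lu(x+y) + Lu(x-y)}{2} \leq 0, \] while, as shown in the proof, $v \geq \varphi$. Hence $v$ is a supersolution lying above the obstacle, and the minimality of $u$ gives $v \geq u$.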
$C^{2s}$ regularity
For the classical Laplacian, the optimal regularity is $C^{1,1}$, which was originally proved by Frehse. The proof goes like this:
- On one hand, we have the semiconvexity: $D^2 u \geq -C \mathrm I$.
- On the other hand, we have from the equation: $\Delta u \leq 0$.
These two things combined give an estimate $|D^2 u| \leq C$.
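The way these two facts combine is pointwise linear algebra. If every eigenvalue $\lambda_i$ of $D^2 u$ satisfies $\lambda_i \geq -C$ and their sum satisfies $\sum_i \lambda_i = \Delta u \leq 0$, then for each $i$, \[ \lambda_i \leq -\sum_{j \neq i} \lambda_j \leq (n-1) C, \] so all eigenvalues lie in $[-C,(n-1)C]$ and $|D^2 u| \leq (n-1)C$.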
A similar argument works for the fractional Laplacian to prove $u \in C^{2s}$. However, this regularity is not optimal as soon as $s<1$; the optimal regularity is instead $u \in C^{1+s}$. The argument goes like this:
- On one hand, $u$ is bounded and semiconvex: $D^2 u \geq -C \mathrm I$ and $u \in L^\infty$. Therefore $(-\Delta)^s u \leq C$.
- On the other hand, we have from the equation: $(-\Delta)^s u \geq 0$.
These two things combined say that $|(-\Delta)^s u| \leq C$. This is almost like $u \in C^{2s}$.
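To see the first point in the list above, one can write the fractional Laplacian in its symmetric form (with $c_{n,s}$ denoting its normalizing constant) and split the integral: semiconvexity controls the part near the origin and boundedness the part far away, \begin{align*} (-\Delta)^s u(x) &= \frac{c_{n,s}}2 \int_{\R^n} \frac{2u(x) - u(x+y) - u(x-y)}{|y|^{n+2s}} \, dy \\ &\leq \frac{c_{n,s}}2 \left( \int_{B_1} \frac{C|y|^2}{|y|^{n+2s}} \, dy + \int_{\R^n \setminus B_1} \frac{4\|u\|_{L^\infty}}{|y|^{n+2s}} \, dy \right) \leq \tilde C, \end{align*} where both integrals converge because $s \in (0,1)$.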
The precise estimate $u \in C^{2s}$ follows with a little extra work, but we will not need it.
Exercise 13. Let $u$ be a function such that for some constant $C$, $|u| \leq C$, $D^2 u \geq -C \, \mathrm I$ and $(-\Delta)^s u \geq 0$ in $\R^n$. Prove that there is a constant $\tilde C$ (depending only on $C$) so that for all $x \in \R^n$, \[ \int_{\R^n} \frac{|u(x+y) - u(x)|}{|y|^{n+2s}} dy \leq \tilde C.\] Recall from Exercise 11 that the conclusion above implies that $u \in C^{2s}$.
$C^{2s+\alpha}$ regularity
The next step is to prove that $u$ is a little bit more regular than $C^{2s}$. We will now show that $w := (-\Delta)^s u$ is $C^\alpha$ for some $\alpha>0$ small (the optimal $\alpha$ will appear in the next section).
Note that the function $w$ solves the following Dirichlet problem for $(-\Delta)^{1-s}$, \begin{align*} (-\Delta)^{1-s} w &= -\Delta \varphi \ \text{ inside } \{u=\varphi\},\\ w &= 0 \ \text{ in } \{u > \varphi\}. \end{align*}
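At least formally, this follows by composing the two fractional Laplacians. In the interior of the contact set we have $u = \varphi$, so \[ (-\Delta)^{1-s} w = (-\Delta)^{1-s} (-\Delta)^s u = -\Delta u = -\Delta \varphi \ \text{ inside } \{u=\varphi\},\] while in the open set $\{u > \varphi\}$ the equation gives $w = (-\Delta)^s u = 0$.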
Here is an important distinction between the cases $s=1$ and $s<1$. If $s=1$, the function $w = \Delta u$ does not satisfy any useful equation, whereas here we can obtain some extra regularity from the equation for $w$.
We can observe here a heuristic reason for the optimal regularity. If we expect that $w$ will be continuous and the contact set $\{u=\varphi\}$ will have a smooth boundary, then the regularity of $w$ on the boundary $\partial \{u=\varphi\}$ is determined by the equation and should be $w \in C^{1-s}$. This corresponds precisely to $u \in C^{1+s}$. The proof will be long though, because we do not know a priori either that $\{u=\varphi\}$ has a smooth boundary or that $w$ is continuous across $\partial \{u=\varphi\}$.
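Heuristically, solutions of the Dirichlet problem for $(-\Delta)^{1-s}$ vanish like $d^{1-s}$ near a smooth boundary, where $d$ is the distance to the boundary, so we should not expect $w$ to be better than $C^{1-s}$ across $\partial \{u=\varphi\}$. Combining $w \in C^{1-s}$ with $w = (-\Delta)^s u$ and the usual gain of $2s$ derivatives suggests \[ u \in C^{(1-s)+2s} = C^{1+s}. \]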
It is convenient to rewrite the problem in terms of $u-\varphi$ instead of $u$. We will abuse notation and still call it $u$. It now solves an obstacle problem with obstacle zero but with a non zero right hand side $\phi := -(-\Delta)^s \varphi$. \begin{align*} u &\geq 0 \ \text{ in } \R^n, \\ (-\Delta)^s u &\geq \phi \ \text{ in } \R^n, \\ (-\Delta)^s u &= \phi \ \text{ wherever } u>0. \end{align*} We also call $w = (-\Delta)^s u$.
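For concreteness, here is the computation behind this reduction. Writing $v := u - \varphi$ for the original solution $u$, we have $v \geq 0$, and \[ (-\Delta)^s v = (-\Delta)^s u - (-\Delta)^s \varphi \geq -(-\Delta)^s \varphi = \phi, \] with equality wherever $v > 0$. Renaming $v$ as $u$ gives the system above.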
Recall that $D^2 u \geq -C \, \mathrm I$ for some constant $C$. In particular, since $(-\Delta)^{1-s} w = -\Delta u = -\mathrm{tr}(D^2 u)$, we have $(-\Delta)^{1-s} w \leq C$ for some (possibly larger) constant $C$.
In this section, we will show that $w \in C^\alpha$ for some small $\alpha>0$, which implies that $u \in C^{2s+\alpha}$. We will prove it with an iterative improvement of oscillation scheme. That is, we will show that for any point $x$, \[ \mathrm{osc}_{B_{2^{-k}}(x)} w \leq C (1-\theta)^k,\] for some $\theta>0$.
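This decay is just a reformulation of Hölder continuity: if the estimate above holds at every point $x$, then for any $x,y$ with $2^{-k-1} \leq |x-y| \leq 2^{-k}$ (and $|x-y| \leq 1$, say), \[ |w(x)-w(y)| \leq \mathrm{osc}_{B_{2^{-k}}(x)} w \leq C(1-\theta)^k \leq C' |x-y|^\alpha, \qquad \text{where } \alpha = \log_2 \frac 1{1-\theta}. \]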
The location of $x$ is irrelevant, so let us center these balls at the origin. Naturally, the most delicate position to achieve regularity is the free boundary, so let us concentrate on the case where the origin lies on the free boundary $\partial \{u>0\}$.
The following lemma plays an important role in the proof.
Lemma. Given $\mu>0$ and $u$ a semiconvex function (i.e. $D^2 u \geq -C \, \mathrm I$), if $u(x_0) \geq \mu r^2$ at some point $x_0 \in B_r$, then \[ |\{ x \in B_{2r} : u(x) > 0 \} | \geq \delta |B_r|.\] Here, the constant $\delta$ depends on $\mu$ and $C$ only.
We leave the proof as an exercise.
The right hand side $\phi$ is a smooth function. We will use in particular that it is Lipschitz. By an appropriate scaling of the solution (i.e. replacing $u(x)$, $w(x)$ and $\phi(x)$ by $r^{-2s}u(rx)$, $w(rx)$ and $\phi(rx)$ with $r \ll 1$), we can assume that \begin{equation} \label{allsmall} \begin{aligned} D^2 u &\geq - \varepsilon \, \mathrm I, \\ (-\Delta)^{1-s} w &\leq \varepsilon, \\ |\phi(x) - \phi(y)| &\leq \varepsilon |x-y|, \text{ for all } x,y. \end{aligned} \end{equation} These estimates will remain valid for all scaled solutions in our iteration.
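For concreteness, here is how each quantity transforms under this scaling. If $u_r(x) := r^{-2s} u(rx)$, $w_r(x) := w(rx)$ and $\phi_r(x) := \phi(rx)$, then \begin{align*} (-\Delta)^s u_r(x) &= \big((-\Delta)^s u\big)(rx) = w_r(x), \\ D^2 u_r(x) &= r^{2-2s} \, D^2 u(rx) \geq -C r^{2-2s} \, \mathrm I, \\ (-\Delta)^{1-s} w_r(x) &= r^{2-2s} \, \big((-\Delta)^{1-s} w\big)(rx) \leq C r^{2-2s}, \end{align*} and $\phi_r$ has Lipschitz constant $r$ times that of $\phi$. Since $s<1$, all of these are smaller than $\varepsilon$ once $r$ is small enough.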