# Brownian Motion and Harmonic Functions

In what follows, I will discuss the deep connections between Brownian motion (to be defined below) and harmonic functions. In order to make the discussion as accessible as possible, I will first review some basic definitions and facts of probability theory.

## Basic Probability Theory

Probability theory is the study of probability spaces $(\Omega,\mathcal{F},P)$. Here $\Omega$ is the underlying space, $\mathcal{F}$ is a $\sigma$-field and $P$ is a probability measure. What does all of this mean? A $\sigma$-field is a collection of subsets of $\Omega$ with the following properties: $\emptyset \in \mathcal{F}$; if $A\in \mathcal{F}$ then $A^{c}\in \mathcal{F}$; and finally, if $A_i\in \mathcal{F}$ for all $i$ then $\cup_{i=1}^{\infty}A_i \in \mathcal{F}$. A probability measure $P$ is a real valued function $P:\mathcal{F}\rightarrow [0,1]$ such that $P(\Omega)=1$, $P(\emptyset)=0$ and, whenever the $A_{i}$ are pairwise disjoint, $P(\cup_{i=1}^{\infty}A_i)=\sum_{i=1}^{\infty}P(A_i)$.

A random variable $X$ is a measurable map $X:\Omega \rightarrow \mathbb{R}^d$; by measurable we mean that $X^{-1}(A)\in \mathcal{F}$ for every Borel set $A\subset \mathbb{R}^d$. Given a random variable, one can define a distribution $\mu$ on $\mathbb{R}^d$ by setting $\mu(A)=P(X^{-1}(A))$. An important example is given by normal random variables. We say that a real valued random variable $X$ is normally distributed with mean $x$ and variance $t$ if $P(X\in A)=\frac{1}{\sqrt{2\pi t}}\int_{A}e^{-(x-y)^{2}/2t}\,dy=\int_{A}p(t,x,y)\,dy$. Finally, we say that a family $X_1,\ldots, X_n$ of random variables is independent if and only if $P(X_1\in A_1,\ldots,X_n\in A_n)=\prod_{i=1}^{n}P(X_i\in A_i)$. An important example is given by random variables taking values in $\mathbb{R}^d$ whose coordinates are independent and normally distributed. More precisely, consider the random variable $X=(X_1,\ldots,X_d)$ with the $X_i$ a family of independent and normally distributed random variables with mean $x_i$ and variance $t$ respectively. We then easily have $P(X\in A)=\frac{1}{(2\pi t)^{d/2}}\int_A e^{-|x-y|^2/2t}\, dy$. These distributions will be important in the study of Brownian motion. The next topic to be discussed is conditional expectation.
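To make the product formula concrete, here is a quick numerical sanity check (a sketch of my own; the particular $x$, $t$ and the sample size are arbitrary choices). It also checks the identity $E\|X-x\|^2=d\cdot t$, which will reappear later in the discussion of stopping times.

```python
import math
import random

# Sample vectors whose coordinates are independent normals with means
# x_i and common variance t, and check two consequences of the product
# formula: the coordinate means, and E||X - x||^2 = d * t.
rng = random.Random(1)
x, t, n = [1.0, -2.0, 0.5], 0.25, 200_000

samples = [[rng.gauss(xi, math.sqrt(t)) for xi in x] for _ in range(n)]

# each empirical coordinate mean should be close to x_i
means = [sum(s[i] for s in samples) / n for i in range(len(x))]
# mean squared distance from x should be close to d * t = 3 * 0.25
msd = sum(sum((si - xi) ** 2 for si, xi in zip(s, x)) for s in samples) / n
print(means, msd)
```

With these sample sizes the empirical values agree with $x$ and $d\cdot t$ to two decimal places.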

## Conditional Expectation

Let $(\Omega, \mathcal{F}, P)$ be the underlying probability space. Let $X$ be a random variable and let $\mathcal{F}_1$ be another $\sigma$-field contained in $\mathcal{F}$. It is not necessarily true that $X$ will be measurable with respect to the smaller field $\mathcal{F}_1$, since we only know that $X$ is measurable with respect to the larger field $\mathcal{F}$. What we seek is a random variable $Z$ that is $\mathcal{F}_1$ measurable and has the following property: for every $A \in \mathcal{F}_1$, $\int_{A}X \,\textrm{dP}=\int_{A}Z \,\textrm{dP}$. In other words, we seek a random variable $Z$ that is $\mathcal{F}_1$ measurable and whose average value agrees with that of $X$ on every set $A$ in $\mathcal{F}_{1}$. The existence of $Z$ is guaranteed by the Radon-Nikodym Theorem, but I will not discuss this issue. To show uniqueness of $Z$ (up to null sets), consider two such random variables $Z_{1}$ and $Z_{2}$ satisfying the above conditions. For $\alpha>0$, consider the set $A=\{Z_1-Z_2>\alpha\}$, which belongs to $\mathcal{F}_1$, and integrate $Z_1$ and $Z_2$ over it. By definition the two integrals must be equal, so $\int_{A}(Z_1-Z_2)\,\textrm{dP}=0$; but $Z_1-Z_2>\alpha>0$ on $A$, which forces $P(A)=0$. Since $\alpha>0$ is arbitrary, $Z_1\leq Z_2$ except possibly on a set of probability zero, or as we say "almost surely"; by the symmetric argument $Z_2\leq Z_1$ almost surely, and thus $Z_1=Z_2$ almost surely. From now on, we will denote $Z$ by $E(X|\mathcal{F}_1)$ to highlight the dependence on both the $\sigma$-field $\mathcal{F}_1$ and the random variable $X$. Here is a basic example of conditional expectation. Consider the case where $\mathcal{F}_1=\{\Omega,\emptyset\}$. It is clear that any random variable which is measurable with respect to $\mathcal{F}_1$ is necessarily a constant function and that $E(X|\mathcal{F}_1)=E(X)=\int_{\Omega}X \,\textrm{dP}$.
Now consider two sets $A,B$ in $\Omega$, let $\mathcal{F}=\sigma(A,B)$ be the smallest $\sigma$-field containing $A$ and $B$, and let $\mathcal{F}_1=\{\Omega,\emptyset,A,A^c\}$. Then $E(1_{B}|\mathcal{F}_1)=P(A\cap B)/P(A)$ on $A$ and $E(1_{B}|\mathcal{F}_1)=P(A^c\cap B)/P(A^c)$ on $A^c$. This agrees with the usual definition of conditional probability given in high school. Now consider two random variables $X,Y$; if $X$ is measurable with respect to some $\sigma$-field $\mathcal{F}$ with $\sigma(Y)\subset \mathcal{F}$, we write $E(X|Y)=E(X|\sigma(Y))$. An interesting interpretation of conditional expectation can also be provided by the theory of Hilbert spaces. If we take our Hilbert space to be $L^2(\Omega)$, then the set of all $L^2$ functions which are $\mathcal{F}_{1}$ measurable is a closed subspace of $L^2(\Omega)$, and for $X\in L^2(\Omega)$, $E(X|\mathcal{F}_1)$ is the orthogonal projection of $X$ onto this subspace. In other words, $E(X|\mathcal{F}_1)$ is the unique minimizer of $E(|X-Z|^2)$ over all $\mathcal{F}_1$ measurable functions $Z$. We will need the following result, known as the double expectation formula: $E(E(X|\mathcal{F}_1))=E(X)$. The proof follows directly from the definition of conditional expectation given above by taking $A=\Omega$.
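The two-set example can be checked by simulation. In the following sketch (my own illustration; the choices of $\Omega$, $A$ and $B$ are arbitrary), $\Omega$ is the unit square with the uniform measure and we verify the value of $E(1_B|\mathcal{F}_1)$ on $A$:

```python
import random

# Omega: the unit square with uniform measure.
# A = {u < 1/2}, B = {v < u}, F_1 = {empty, Omega, A, A^c}.
rng = random.Random(0)
n = 100_000
pts = [(rng.random(), rng.random()) for _ in range(n)]
in_A = [u < 0.5 for u, v in pts]
in_B = [v < u for u, v in pts]

# On A, E(1_B | F_1) should equal P(A n B) / P(A); here the exact
# value is (1/8) / (1/2) = 1/4.
pA = sum(in_A) / n
pAB = sum(a and b for a, b in zip(in_A, in_B)) / n
print(pAB / pA)
```

The empirical ratio lands near $1/4$, the value the formula predicts.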

## Brownian Motion

### What is Brownian Motion?

We will now discuss Brownian motion. We let $\Omega$ denote the space of all continuous functions $\omega:[0,\infty)\rightarrow \mathbb{R}$ and $\mathcal{F}$ the smallest $\sigma$-field which makes the coordinate maps $X_t(\omega)=\omega (t)$ measurable for all $t\geq 0$. From now on, we will write $X_t=X(t)$ to denote this family of random variables. To complete the construction of Brownian motion, one needs to construct a family of measures $P_x$ on $\Omega$ with the following properties: if $0<t_1<\cdots <t_n$ then $X(t_1),X(t_2)-X(t_1),\ldots, X(t_n)-X(t_{n-1})$ are independent; $X(s+t)-X(s)$ is normally distributed with mean $0$ and variance $t$; and $P_x(X(0)=x)=1$. The continuous functions $\omega$ are called Brownian paths. To obtain $d$-dimensional Brownian motion starting at a point $x=(x_1,\ldots,x_d)\in \mathbb{R}^d$, we consider $X(t)=(X_1(t),\ldots,X_d(t))$ with the $X_i(t)$ independent one dimensional Brownian motions starting at $x_i$. We mention that Brownian motion is invariant under translations, scalings and orthogonal transformations. In other words, if $X(t)$ is Brownian motion starting at $x$ then the following is true:

1.1) $X(t)+b$ is Brownian motion starting at $x+b$.
1.2) $b^{-1/2} X(bt)$ is Brownian motion starting at $b^{-1/2}x$ for every $b>0$.
1.3) $Q(X(t))$ is Brownian motion starting at $Q(x)$ for every orthogonal transformation $Q$.
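Property 1.2, restricted to a single time point, is easy to verify numerically (a sketch of mine; under $P_0$, $X(bt)$ is normal with mean $0$ and variance $bt$, so only one Gaussian draw per sample is needed):

```python
import math
import random

# Under P_0, X(bt) ~ N(0, b*t), so b^{-1/2} X(bt) should be N(0, t):
# the single-time-point content of the scaling property 1.2.
rng = random.Random(2)
b, t, n = 4.0, 1.0, 100_000
scaled = [rng.gauss(0.0, math.sqrt(b * t)) / math.sqrt(b) for _ in range(n)]
var = sum(s * s for s in scaled) / n
print(var)  # should be close to t = 1
```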

### Markov Property

We will make use of the following two very important properties of Brownian motion. The first property is the Markov property: $$E_{x}(Y\circ \theta_{s}|\mathcal{F}_s)=E_{X_s}(Y).$$ What does this expression mean? Firstly, $\theta_s$ is the so-called "shift" transformation $\theta_s:\Omega \rightarrow \Omega$ defined by $\theta_s (\omega)(t)=\omega(t+s)$, $Y$ is a bounded $\mathcal{F}$ measurable function, and $\mathcal{F}_s$ denotes a "perfected" version of the $\sigma$-field $\mathcal{F}_s^{+}=\cap_{t>s}\mathcal{F}^{\circ}_t$, where $\mathcal{F}_s^{\circ}=\sigma(X(r),r\leq s)$. Finally, the expression $E_{X_s}(Y)$ is simply the expectation of $Y$ with respect to Brownian motion starting at $X(s)$.

### Stopping times and the Strong Markov Property

Before discussing the Strong Markov property, we need to consider the idea of a stopping time. We call $S$ a stopping time if $S$ is a random variable $S:\Omega \rightarrow [0,\infty]$ and the event $\{S\leq t\}$ belongs to $\mathcal{F}_t$ for every $t\geq 0$. In other words, whether $S\leq t$ has occurred depends only on what happened up to time $t$. Finally, we let $\mathcal{F}_S$ denote the collection of sets $A$ such that $A\cap \{S\leq t\}$ belongs to $\mathcal{F}_t$ for all $t\geq 0$. Brownian motion is said to obey the Strong Markov property in the sense that: $$E_{x}(Y_S\circ \theta_{S}|\mathcal{F}_S)=E_{X(S)}Y_S \quad \text{on } \{S<\infty\}.$$ Here $Y:[0,\infty]\times \Omega \rightarrow \mathbb{R}$ is bounded and measurable, $Y_s(\omega)=Y(s,\omega)$ and $(Y_S\circ\theta_S)(\omega)=Y(S(\omega),\theta_{S(\omega)}\omega)$. Now let $\tau$ be any stopping time with $E_x(\tau)<\infty$. We know that for every $t>0$, $E_{x}(X(t))=x$ and $E_{x}(\|X(t)-x\|^2)=d\cdot t$. This generalizes to the stopping time $\tau$ in the sense that $E_{x}(X(\tau))=x$ and $E_{x}(\|X(\tau)-x\|^2)=d\cdot E_{x}(\tau)$.
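The identity $E_{x}(\|X(\tau)-x\|^2)=d\cdot E_{x}(\tau)$ can be illustrated by simulation. The sketch below (my own; it uses a crude time discretization, so both averages carry a small upward bias) takes $d=1$, $x=0$ and $\tau$ the exit time of the interval $(-1,1)$, for which $E_0(\tau)=1$:

```python
import math
import random

def exit_stats(dt=2e-3, n_paths=2000, seed=3):
    # One-dimensional Brownian motion from 0, stopped on leaving
    # (-1, 1), simulated on a time grid of mesh dt. The grid is an
    # approximation: the discrete path overshoots the boundary slightly.
    rng = random.Random(seed)
    step = math.sqrt(dt)
    total_tau, total_sq = 0.0, 0.0
    for _ in range(n_paths):
        pos, steps = 0.0, 0
        while abs(pos) < 1.0:
            pos += rng.gauss(0.0, step)
            steps += 1
        total_tau += steps * dt
        total_sq += pos * pos
    return total_sq / n_paths, total_tau / n_paths

mean_sq, mean_tau = exit_stats()
print(mean_sq, mean_tau)  # both near 1: E_0|X(tau)|^2 = d * E_0(tau) with d = 1
```

The two averages agree to within sampling error, as the identity predicts.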

### Hitting Times

We will first consider the concept of a hitting time. Let $B$ be any $F_{\sigma}$ set (countable union of closed sets) in $\mathbb{R}^d$. Let $\tau_{B}= \inf\{t>0:X(t)\in B\}$. This is called the "hitting time" of $B$. In other words, $\tau_{B}(\omega)$ is the amount of time that it takes for the Brownian path $\omega$ to first hit $B$. We say that a set $B$ is polar if and only if $P_{x}(\tau_B<\infty)=0$ for all $x$, and nonpolar otherwise. We have the following important results:

2.1) If $B$ has positive Lebesgue measure then it is nonpolar.
2.2) If $d \geq 2$ then $\{x\}$ is polar for every $x\in \mathbb{R}^d$.
2.3) $P_{x}(\tau_{B}\leq t)$ as a function of $x$ is bounded away from zero on compacts.
2.4) If $B^{c}$ is bounded then $P_{x}(\tau_{B}< \infty)=1$ for every $x\in \mathbb{R}^d$.

The next topic to discuss is the concept of recurrence versus transience. We let $L_{B}=\sup\{t: X(t)\in B\}$ be the last exit time from $B$, with the convention $L_B=0$ if $B$ is polar. In other words, $L_{B}(\omega)$ tells you when the Brownian path $\omega$ leaves $B$ forever. We say that $B$ is recurrent for Brownian motion if and only if $P_{x}(L_{B}<\infty)=0$ for every $x$, and transient for Brownian motion if and only if $P_{x}(L_{B}<\infty)=1$ for every $x$. We have the following results:

2.5) B is recurrent for Brownian motion if and only if $P_{x}(\tau_{B}<\infty)=1$ for every $x$.
2.6) If $B$ is transient then $\inf \{P_{x}(\tau_{B}<\infty): x\in B^{c}\}=0.$
2.7) If $d\leq 2$ and $B$ is nonpolar then $B$ is recurrent.
2.8) If $d\geq 3$ and $B$ is bounded then $B$ is transient and $\lim_{\|x\|\rightarrow \infty}P_{x}(\tau_{B}<\infty)=0$.

We say that Brownian motion is transient if $P_{x}(\lim_{ t\rightarrow \infty} \|X(t)\|=\infty)=1$ for all $x$, and recurrent if $P_{x}(\lim_{ t\rightarrow \infty} \|X(t)\|=\infty)=0$ for all $x$. If $d\leq 2$ then Brownian motion is a recurrent process, since every ball $B_{r}(0)$ has positive Lebesgue measure, hence is nonpolar by $2.1$, and thus recurrent by $2.7$. If $d\geq 3$ then Brownian motion is transient, since by $2.8$ every ball $B_r(0)$ is transient.

## Regular Points and Weak Convergence of Radon Measures

We will now discuss "regular" points. A point $x$ is regular for $B$ if and only if $P_{x}(\tau_{B}=0)=1$. We write $B^{r}$ to denote the set of regular points for $B$. Clearly one has $B^{\circ}\subset B^{r}\subset \overline{B}$ since Brownian paths are continuous almost surely. We will need the following result, called Blumenthal's $0$-$1$ law. This says that if $A\in \cap_{t>0}\mathcal{F}_{t}$ then $P_{x}(A)\in \{0,1\}$ for every $x$, and in particular $P_{x}(\tau_{B}=0)\in \{0,1\}$ for every $x$. Hence if $x$ is not regular for $B$ then necessarily $P_{x}(\tau_{B}=0)=0$. It is not hard to see that every point $x$ is regular for $\{x\}$ if and only if $d=1$. Indeed, if $d=1$ and Brownian motion starts at $x$, then by symmetry of the Gaussian, $x$ is regular for the sets $\{y>x\}$ and $\{y<x\}$; by continuity of paths, $x$ must then be regular for $\{x\}$. If $d\geq 2$ then every point is polar (see $2.2$) and thus no point $x$ can be regular for $\{x\}$.

To each $x$ we can associate the following measure $h_{B}(x,A)=P_{x}(\tau_{B}<\infty,X(\tau_{B})\in A)$. From now on, we will call this measure the hitting distribution of $B$ for Brownian motion starting at $x$. The following results will be crucial for the study of harmonic functions:

3.1) $B\cap (B^r)^c$ is polar
3.2) $h_B(x,A)=0$ for every polar set $A$ and $h_B(x,\cdot)$ is concentrated on $\partial B \cap B^r$ whenever $x\in (B^r)^c$
3.3) $b\in B^{r}$ if and only if $h_{B}(x,\cdot)$ converges completely to $\delta_b$ as $x\rightarrow b$.

In general, given a sequence of Radon measures $\mu_{n}$, we say that $\mu_{n}$ converges weakly to a Radon measure $\mu$ if and only if for every continuous function $f$ of compact support, $\int f \,d\mu_n$ converges to $\int f \,d\mu$. Radon measures are just measures on some subset of $\mathbb{R}^d$ that assign finite measure to compacts. Now suppose we have a sequence of Radon measures $\mu_n$ defined on a subset $B$ of $\mathbb{R}^d$ with $\mu(B)<\infty$ and $\mu_n(B)\rightarrow \mu(B)$ as $n\rightarrow \infty$. If, in addition, $\mu_n$ converges weakly to $\mu$, then for every bounded and continuous function $f$, $\int f \,d\mu_n$ converges to $\int f \,d\mu$. In the latter case, we will say that $\mu_n$ converges completely to $\mu$. The following statements are each equivalent to complete convergence:

- For all closed sets $F$, $\limsup_{n\rightarrow \infty} \mu_{n}(F) \leq \mu(F)$.
- For all open sets $V$, $\mu(V)\leq \liminf_{n\rightarrow \infty}\mu_{n}(V)$.
- For all Borel sets $A$ with $\mu(\partial A)=0$, $\lim_{n\rightarrow \infty}\mu_{n}(A)=\mu(A)$.
- If $f$ is bounded and continuous at $\mu$-almost every point, then $\int f \,d\mu_{n}$ converges to $\int f \,d\mu$.
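A simple worked example (my own, not from the development above): take $\mu_n=\delta_{1/n}$ and $\mu=\delta_0$ on $[0,1]$. For every bounded continuous $f$ we have $\int f\,d\mu_n=f(1/n)\rightarrow f(0)=\int f\,d\mu$, so $\mu_n$ converges completely to $\mu$. The inequalities above can be strict: for the open set $V=(0,1)$,
$$\liminf_{n\rightarrow\infty}\mu_n(V)=1>0=\mu(V),$$
while for the closed set $F=\{0\}$,
$$\limsup_{n\rightarrow\infty}\mu_n(F)=0<1=\mu(F).$$
The third statement does not apply to $A=(0,1)$, since $\mu(\partial A)=\mu(\{0,1\})=1\neq 0$, which is consistent with $\mu_n(A)=1$ failing to converge to $\mu(A)=0$.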

## Tests for Regularity

How do we know if a point is regular for a set $B$? How do we know if $B$ is recurrent? In this section, we will present a few ways of answering these questions.

### Exterior Cone Condition

Let $B$ be any $F_{\sigma}$ set. We say that a point $x$ satisfies an exterior cone condition for $B$ if and only if there exist a cone $C$ with vertex $x$ and an $r>0$ such that $C\cap B_r(x)\subset B$. By a cone with vertex $x$, we mean a set of the form $\{y: \langle y-x,u\rangle \geq \delta \cdot \|y-x\|\}$ for some $\delta>0$ and some unit vector $u$. If $x$ satisfies an exterior cone condition for $B$ then $x$ is regular for $B$. Indeed, by translation invariance of Brownian motion, we may assume that $x=0$, so our cone $C$ has vertex at the origin. If we let $C_r=C\cap B_r(0)$ then $P_0(\tau_B\leq t)\geq P_0(X(t)\in C_r)$. By the scale invariance property of Brownian motion, we know that $P_0(X(t)\in C_r)=P_0(t^{1/2}X(1)\in C_r)=P_0(X(1)\in C_{t^{-1/2}r})$ for every $t>0$. Letting $t\rightarrow 0$ stretches out the truncated cones $C_{t^{-1/2}r}$ until they fill $C$, so $\lim_{t\rightarrow 0}P_0(\tau_B\leq t)\geq P_0(X(1)\in C)$. Since the Gaussian is radially symmetric, $P_0(X(1)\in C)>0$, and by the $0$-$1$ law the conclusion follows. As a corollary, we get a simple proof that many sets are regular, where by regular we mean that $B^r=B$, i.e. all points of $B$ are regular for $B$. For instance, any closed ball is regular: any point on the surface of the ball obviously satisfies the exterior cone condition for the ball. If we consider any sphere, then any point on the sphere satisfies the exterior cone condition both for the closed region inside the sphere and for the closed region outside; by continuity of paths, the sphere itself is regular. This argument also works for cylinders and other surfaces. Lastly, every bounded open subset of $\mathbb{R}^d$ with Lipschitz boundary has regular complement, since every point on the boundary satisfies the exterior cone condition for the complement. This result is particularly interesting when considering the Dirichlet and Poisson equations with boundary conditions (see below).
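The positivity of $P_0(X(1)\in C)$ is also easy to see numerically. In the sketch below (my own; the dimension, axis and aperture $\delta$ are arbitrary choices), the direction of a standard Gaussian vector is uniform on the sphere, so the probability equals the fraction of the sphere covered by the cap, namely $(1-\delta)/2$ in $\mathbb{R}^3$:

```python
import math
import random

# Estimate P_0(X(1) in C) for C = {y : <y, u> >= delta ||y||} in R^3,
# with u = e_3 and delta = 1/2. Since X(1) under P_0 is a standard
# Gaussian vector, its direction is uniform and the exact answer is
# the spherical-cap fraction (1 - delta)/2 = 1/4.
rng = random.Random(4)
delta, n = 0.5, 100_000
hits = 0
for _ in range(n):
    y = [rng.gauss(0.0, 1.0) for _ in range(3)]
    if y[2] >= delta * math.sqrt(sum(c * c for c in y)):
        hits += 1
print(hits / n)
```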

### Wiener's Test

We now present a test that can be used to determine whether or not a point is regular for a set $B\subset \mathbb{R}^d$ when $d\geq 3$; there is a similar test for $d=2$. The test is named after its discoverer, Norbert Wiener.

Let $\mu$ be a measure on $\mathbb{R}^d$ and let $g(x,y)=\int_{0}^{\infty}p(t,x,y)\,dt=\frac{1}{2} \pi^{-d/2}\,\Gamma(d/2 -1)\,\|x-y\|^{2-d}$, with $\Gamma$ denoting the Gamma function. We let $g\mu (x)=\int g(x,y)\,\mu(dy)$ denote the Newtonian potential of $\mu$. We will denote by $\mu_B$ the equilibrium measure of $B$: the (unique) measure concentrated on $B^r$ whose Newtonian potential is equal to $1$ on $B^r$. The uniqueness of such a measure will not be proved. We will also denote by $C(B)=\mu_B(\mathbb{R}^d)$ the Newtonian capacity of $B$. Let's compute the equilibrium measure of the closed ball $B_s(0)=B_s$. Clearly $B_s^r=B_s$, and if $\sigma_s$ denotes the uniform probability measure on the sphere $S_s(0)$ then $\mu_{B_s}=2 \pi^{d/2} \Gamma(d/2-1)^{-1} s^{d-2} \sigma_s$. This follows directly from the definition of the equilibrium measure and of $g(x,y)$, and it shows that equilibrium measures for balls exist. In fact, we will show that equilibrium measures exist for any bounded $B$. More precisely, if $B$ is bounded then $\mu_B(A)=\int h_B(y,A)\,\mu_{B_s}(dy)$ whenever $B\subset B_s$. To prove this, we first need the following result, whose proof will be omitted: $$\int g(x,y)h_B(z,dy)=\int g(z,y)h_B(x,dy)$$ for any $x,z$. We now compute: $$\begin{array}{rcl}\iint g(x,y)h_{B}(z,dy)\mu_{B_s}(dz) & = & \iint g(z,y)h_B(x,dy)\mu_{B_s}(dz) \\ & = & \iint g(y,z)\mu_{B_s}(dz)h_B(x,dy) \\ & = & \int h_{B}(x,dy) \\ & = & P_{x}(\tau_{B}<\infty), \end{array}$$ where we used the symmetry of $g$ and the fact that $g\mu_{B_s}=1$ on $B_s$.

Since $P_{x}(\tau_B<\infty)=1$ when $x\in B^r$ and $h_{B}(z,\cdot)$ is concentrated on $B^r$ for every $z$, we have found the equilibrium measure of $B$. Note that this shows $\mu_B$ is concentrated on $\partial B$, since $h_B(z,\cdot)$ is concentrated on $\partial B$ for every $z$. Moreover, $C(B)=0$ if and only if $B$ is polar. To prove the last assertion, take $s\rightarrow \infty$ and note that $P_{x}(\tau_B\leq t)$ is lower semicontinuous as a function of $x$ for every fixed $t>0$. We now list a few basic properties of Newtonian capacity. If $A$ and $B$ are bounded then:

- $C(b+B)=C(B)$ for all $b\in \mathbb{R}^d$.
- $C(aB)=a^{d-2}C(B)$ for all $a>0$.
- $C(A\cap B)+C(A\cup B) \leq C(A) + C(B)$.
- If $B_n\uparrow B$ then $C(B_n)\uparrow C(B)$.
- If $B_n$ is compact for all $n$ and $B_n\downarrow B$ then $C(B_n)\downarrow C(B)$.
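The formula for the equilibrium measure of a ball makes the first two properties easy to check concretely (a sketch of mine; it simply evaluates $C(B_s)=2\pi^{d/2}\,\Gamma(d/2-1)^{-1}s^{d-2}$, the total mass found above):

```python
import math

def ball_capacity(s, d):
    # Newtonian capacity of the closed ball of radius s in R^d, d >= 3,
    # read off from the equilibrium measure of the ball given earlier:
    # C(B_s) = 2 * pi^(d/2) * Gamma(d/2 - 1)^(-1) * s^(d-2).
    return 2.0 * math.pi ** (d / 2) / math.gamma(d / 2 - 1) * s ** (d - 2)

# With this normalisation of g, the unit ball in R^3 has capacity 2*pi.
print(ball_capacity(1.0, 3))
# The scaling property C(aB) = a^(d-2) C(B): in d = 5 the ratio is 3^3 = 27.
print(ball_capacity(3.0, 5) / ball_capacity(1.0, 5))
```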

Before proceeding to Wiener's test, we will need a result from probability theory which can be considered as a Borel-Cantelli lemma for dependent events. The result is the following: suppose $\{A_k\}$ is a sequence of measurable sets in the probability space $(\Omega, \mathcal{F},P)$ and suppose there exists some $M>0$ such that $P(A_m\cap A_n)\leq M\cdot P(A_m)P(A_n)$ whenever $|m-n|>1$. Then $P(\cap_{k}\cup_{n\geq k}A_n)>0$ if and only if $\sum_{k}P(A_k)=\infty$. In other words, the probability of the events $A_k$ occurring infinitely often is strictly positive if and only if the sum of their respective probabilities diverges. We are now ready to prove Wiener's result. Let $x\in \mathbb{R}^d$, $0<\lambda <1$ and let (for $k\geq 1$) $B_k=\{y:\lambda^{k}\leq \|y-x\|<\lambda^{k-1}\}\cap B$. Then $x$ is regular for $B$ if and only if $\sum_{k}\lambda^{k(2-d)}C(B_k)=\infty$. The first step in the proof is to show that $x$ is regular for $B$ if and only if $P_{x}(A)>0$, where $A=\cap_{k}\cup_{n\geq k}\{\tau_{B_n}<\infty\}$. If $\tau_B(\omega)=0$ then (by continuity of paths) $\omega \in A$, so if $x$ is regular for $B$ then $P_{x}(A)=1$. Conversely, let $\alpha=\liminf_{k}\tau_{B_k}(\omega)\leq \beta = \limsup_{k}\tau_{B_{k}}(\omega)$. Firstly, $P_{x}(\{\beta<\infty\} \cap A)=P_{x}(A)$ since Brownian motion is transient for $d\geq 3$. Secondly, $P_{x}(\{\alpha=0\} \cap A)=P_{x}(A)$, or else (by continuity of paths) this would contradict the fact that $\{x\}$ is polar for Brownian motion whenever $d\geq 3$. In other words, we have $\alpha=0$ for almost every element of $A$, and this implies that $P_{x}(\{\tau_{B}=0\})\geq P_{x}(A)>0$, which in turn implies that $x$ is regular for $B$. If we can show that there exists an $M>0$ such that $P_{x}(\tau_{B_m}<\infty,\tau_{B_n}<\infty)\leq M \cdot P_{x}(\tau_{B_m}<\infty)P_{x}(\tau_{B_n}<\infty)$ whenever $|m-n|>1$, then by the Borel-Cantelli lemma above, $x$ is regular for $B$ if and only if $\sum_{k}P_x(\tau_{B_{k}}<\infty)=\infty$.
Note that $P_{x}(\tau_{B_{k}}<\infty)=(g\mu_{B_{k}})(x)$, and since $\mu_{B_k}$ is concentrated on $B_{k}$, where $\lambda^{k}\leq \|x-y\|<\lambda^{k-1}$, the kernel $g(x,y)$ is comparable to $\lambda^{k(2-d)}$ for $\mu_{B_k}$-almost every $y$. This easily implies that $\sum_{k}P_x(\tau_{B_k}<\infty)=\infty$ if and only if $\sum_{k}\lambda^{k(2-d)}C(B_{k})=\infty$. To complete the proof, note that by the Strong Markov property of Brownian motion we have: $$\begin{array}{rcl}P_{x}(\tau_{B_m}<\infty,\tau_{B_n}<\infty)&=&P_{x}(\tau_{B_m}<\tau_{B_n}<\infty) + P_x(\tau_{B_n}<\tau_{B_m}<\infty) \\ & \leq & P_{x}(\tau_{B_m}<\infty)\cdot \sup_{z\in B_m}P_{z}(\tau_{B_n}<\infty)+P_{x}(\tau_{B_n}<\infty)\cdot\sup_{z\in B_n}P_{z}(\tau_{B_m}<\infty). \end{array}$$ Moreover, if $z\in B_m$ and $n>m+1$ then $$\begin{array}{rcl}P_{z}(\tau_{B_n}<\infty) & = & \int g(z,y)\mu_{B_n}(dy) \\ & = & \int \big(\|z-y\|/\|x-y\|\big)^{2-d}g(x,y)\mu_{B_n}(dy) \\ & \leq & \int \big(\frac{\lambda ^ {n-1}}{\lambda^{m}-\lambda^{n-1}}\big)^{d-2}g(x,y)\mu_{B_n}(dy) \\ & \leq & \big(\frac{\lambda}{1-\lambda}\big)^{d-2}\cdot P_{x}(\tau_{B_n}<\infty)\end{array}$$

and if $z\in B_n$ (again with $n>m+1$) then $$\begin{array}{rcl}P_{z}(\tau_{B_m}<\infty) & = & \int g(z,y)\mu_{B_m}(dy) \\ & = & \int \big(\|z-y\|/\|x-y\|\big)^{2-d}g(x,y)\mu_{B_m}(dy) \\ & \leq & \int \big(\frac{\lambda ^ {m-1}}{\lambda^{m}-\lambda^{n-1}}\big)^{d-2}g(x,y)\mu_{B_m}(dy) \\ & \leq & \big(\frac{1}{\lambda(1-\lambda)}\big)^{d-2}\cdot P_{x}(\tau_{B_m}<\infty).\end{array}$$

Hence if $M=(\lambda \cdot (1-\lambda))^{2-d}$ then $P_{x}(\tau_{B_m}<\infty,\tau_{B_n}<\infty)\leq M \cdot P_{x}(\tau_{B_m}<\infty)P_{x}(\tau_{B_n}<\infty)$ and this completes the proof.

### Thorns

Consider the following "thorns" $T_f=\{(x_1,\ldots,x_d):0\leq x_1,\ x_2^2+\ldots + x_d^2\leq f^2(x_1)\}$ with $f$ satisfying the following properties:

- $f(x)>f(0)=0$ for $x>0$.
- $f(x)/x$ is nondecreasing for $x$ sufficiently small.
- $f(x)/x$ is nonincreasing for $x$ sufficiently large.

The sets $T_f$ look like thorns with vertex at $0$, with sharpness depending on the rate of decay of $f(x)$ near zero. We will derive integral tests to determine whether $0$ is regular for $T_f$. The two main ingredients are estimates on the Newtonian capacity of cylinders, and Wiener's test combined with the integral test from calculus. We first state the estimates on the Newtonian capacity of cylinders. In what follows, $\mathcal{C}_L$ denotes the cylinder $\{(x_1,\ldots,x_d): 0\leq x_1 \leq L,\ x_2^2+\ldots+x_d^2\leq 1\}$.

For all $L\geq \delta>0$ there exist $0<M_{\delta}<N_{\delta}$ such that $M_{\delta}\cdot L \leq C(\mathcal{C}_L) \leq N_{\delta}\cdot L$ whenever $d>3$, and $M_{\delta}\cdot L/\log(L) \leq C(\mathcal{C}_L)\leq N_{\delta}\cdot L/\log(L)$ whenever $d=3$.

As in the statement of Wiener's test (with $\lambda=1/2$), we let $B_k=\{y\in T_f:2^{-k-1}<\|y\|\leq 2^{-k}\}$; then $0$ is regular for $T_f$ if and only if $\sum_{k}2^{k(d-2)}C(B_k)=\infty$. Note that if $f(t)/t$ does not approach zero as $t\rightarrow 0$ then $0$ satisfies the exterior cone condition for $T_f$. To see this, suppose $f(t)/t\geq \beta >0$ for small $t$; then $0$ satisfies the exterior cone condition for $T_f$ with parameters (see the discussion of exterior cone conditions above) $\delta=(1+\beta^2)^{-1/2}$, $u=(1,0,\ldots,0)$. Henceforth, we may assume that $f(t)/t \rightarrow 0$ as $t\rightarrow 0$. The idea will be to tightly squeeze each $B_k$ between two cylinders $A_k\subset D_k$ and use the estimates for the Newtonian capacity of cylinders to derive corresponding estimates for the Newtonian capacity of the $B_k$.

To this end, we let $$A_k=\{x:4/3 \cdot 2^{-k-1}<x_1 \leq 3/4 \cdot 2^{-k},\ x_2^2+\ldots+x_d^2\leq f^2(2^{-k-1})\}$$ and $$D_k=\{x:3/4 \cdot 2^{-k-1}<x_1\leq 4/3 \cdot 2^{-k},\ x_2^2+\ldots + x_d^2\leq f^2(2^{-k})\}.$$ One easily checks that $A_k\subset B_k \subset D_k$ for $k$ sufficiently large (remember that we are assuming $f(t)/t \rightarrow 0$ as $t\rightarrow 0$), and this implies that $C(A_k)\leq C(B_k)\leq C(D_k)$. By the scaling property of capacity, $C(A_k)=\big(f(2^{-k-1})\big)^{d-2}\cdot C\big(f(2^{-k-1})^{-1}\cdot A_k\big)$, and $f(2^{-k-1})^{-1}\cdot A_k$ is (up to translation) a cylinder of radius $1$ and length $f(2^{-k-1})^{-1}\cdot(3/4 \cdot 2^{-k}-4/3 \cdot 2^{-k-1})$. We then have, for $d>3$: $$\begin{array}{rcl}C(A_k) & \gg & \big(f(2^{-k-1})\big)^{d-2}\cdot \big(f(2^{-k-1})\big)^{-1}\cdot (3/4 \cdot 2^{-k}-4/3 \cdot 2^{-k-1}) \\ & \gg & \big(f(2^{-k-1})\big)^{d-3}\cdot 2^{-k-1} \end{array}$$

and consequently $$\begin{array}{rcl} \sum_{k}2^{k(d-2)}C(B_k) & \geq & \sum_{k}2^{k(d-2)}C(A_k)\\ &\gg & \sum_{k}2^{k(d-2)}\big(f(2^{-k-1})\big)^{d-3}\cdot 2^{-k-1}\\ &\gg & \sum_{k}2^{k(d-3)}\big(f(2^{-k-1})\big)^{d-3}.\end{array}$$ Note that $\sum_{k}2^{k(d-3)}\big(f(2^{-k-1})\big)^{d-3}=\infty$ if and only if $\sum_{k}\big(f(2^{-k})/2^{-k}\big)^{d-3}=\infty$. Using the upper bounds for the Newtonian capacity of cylinders and the same strategy of proof as above, one obtains $\sum_k 2^{k(d-2)}C(B_k)\ll \sum_{k}\big(f(2^{-k})/2^{-k}\big)^{d-3}$. In other words, we have $\sum_{k}2^{k(d-2)}C(B_k)=\infty$ if and only if $\sum_{k}\big(f(2^{-k})/2^{-k}\big)^{d-3}=\infty$. By the integral test (whose application is valid because of the assumptions on $f$), this is equivalent to the divergence of the improper integral $\int_{0}^{\infty}\big(f(2^{-t})/2^{-t}\big)^{d-3}\, dt$, and this in turn is equivalent (by making the substitution $u=2^{-t}$) to the divergence of the improper integral $\int_{0}^{1}\big(f(u)/u\big)^{d-3} \cdot u^{-1}\,du$. For $d=3$, we obtain (by copying the proof for $d>3$) that $0$ is regular for $T_f$ if and only if $\sum_k \big(\log (2^{-k}/f(2^{-k}))\big)^{-1}=\infty$, if and only if $\int_{0}^{1}|\log(f(u)/u)|^{-1}\cdot u^{-1}\, du=\infty$.
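As a concrete illustration of the integral test (a worked example of my own, for $d\geq 4$): for a power thorn $f(u)=u^p$ with $p>1$ (recall we may assume $f(u)/u\rightarrow 0$),
$$\int_{0}^{1}\big(f(u)/u\big)^{d-3}\,u^{-1}\,du=\int_{0}^{1}u^{(p-1)(d-3)-1}\,du<\infty,$$
so $0$ is not regular for any power thorn. On the other hand, for the slowly narrowing thorn $f(u)=u\,|\log u|^{-q}$, substituting $v=\log(1/u)$ gives
$$\int_{0}^{1/2}|\log u|^{-q(d-3)}\,\frac{du}{u}=\int_{\log 2}^{\infty}v^{-q(d-3)}\,dv,$$
which diverges exactly when $q(d-3)\leq 1$. So $0$ is regular for this thorn if and only if $q\leq 1/(d-3)$.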

## Dirichlet and Generalized Dirichlet Problem

In this lengthy section, we will consider existence and uniqueness for both the Dirichlet problem and the generalized Dirichlet problem. More precisely, suppose $g$ is a continuous function on $\partial D$, with $D$ an open subset of $\mathbb{R}^d$. Does there exist a function $f$ which is harmonic in $D$ with $\lim_{x\rightarrow b}f(x)=g(b)$ for every $b\in \partial D$? This problem will be referred to as the Dirichlet problem. Now suppose we only ask for the boundary values to be attained at the regular points: given a continuous function $g$ on $\partial D$, does there exist a function $f$, harmonic on $D$, such that $\lim_{x\rightarrow b} f(x)=g(b)$ for every $b\in\partial D$ that is regular for $D^c$? This problem will be referred to as the generalized Dirichlet problem.

### Existence

From now on, $D$ will denote an arbitrary open subset of $\mathbb{R}^d$. Furthermore, $H_D(x,\cdot)$ will refer to the hitting distribution for $D^c$ of Brownian motion starting at $x$, and $T_D=\tau_{D^c}$. Let $x$ be any point in $\mathbb{R}^d$; then the hitting distribution for the sphere centered at $x$ of radius $r$, for Brownian motion starting at $x$, is the uniform probability measure on the sphere.

By translation invariance of Brownian motion, we may assume $x=0$. It suffices to prove that $H_{S_r(0)}(0,\cdot)$ is invariant under any orthogonal transformation $Q$. We compute:

$$\begin{array}{rcl}P_{0}(X(\tau_{S_{r}(0)})\in Q(A)) & = & P_{0}(Q^{-1}(X(\tau_{S_r(0)}))\in A) \\ & = & P_{0}(X(\tau_{S_r(0)})\in A),\end{array}$$ since by $1.3$ the process $Q^{-1}(X(t))$ is Brownian motion starting at $Q^{-1}(0)=0$ and $Q^{-1}$ maps $S_r(0)$ onto itself, as desired.

Let $g$ be any bounded measurable function on $\partial D$; then $H_D g$ is harmonic on $D$, and it has boundary value $g(b)$ at every $b\in \partial D \cap (D^{c})^r$ at which $g$ is continuous. To prove this, let $x \in D$ and let $r>0$ be such that $\overline{B_r(x)}\subset D$. Let $S_r(x)$ be the sphere of radius $r$ centered at $x$. We compute:

$$\begin{array}{rcl}(H_D g)(x)& = & E_{x}(g(X(T_D));T_D<\infty) \\ &=& E_x\big(\big(g(X(T_D));T_D<\infty\big)\circ \theta_{T_{S_r(x)}}\big) \\& = & E_x\big(E_{X(T_{S_r(x)})}(g(X(T_D));T_D<\infty)\big)\\ & = & \big(H_{S_{r}(x)}(H_{D}g)\big)(x). \end{array}$$ In the second equality we used that $B_r(x)\subset D$ forces the path to hit $S_r(x)$ before $\partial D$, and in the third we used the Strong Markov property.

Since the hitting distribution on any sphere centered at $x$, for Brownian motion starting at $x$, was found to be the uniform probability measure, $H_{D}g$ satisfies the mean value property and is therefore harmonic on $D$. To complete the proof, let $b\in \partial D \cap( D^c)^r$; then by $3.3$ the Radon measures $H_D(x,\cdot)$ converge completely to $\delta_b$ as $x\rightarrow b$. Let $\epsilon>0$; by the continuity of $g$ at $b$ there exists $\delta(\epsilon)>0$ such that $|g(y)-g(b)|<\epsilon$ whenever $y\in U_{\delta}(b)$, the $\delta$-neighborhood of $b$ in $\partial D$. However, $H_D(x,U_{\delta}(b)) \rightarrow 1$ as $x\rightarrow b$, and since $g$ is bounded, we easily get $$\limsup_{x\rightarrow b}|g(b)-(H_Dg)(x)|\leq\epsilon.$$ Since $\epsilon$ is arbitrary, the proof is complete.

Now suppose $g$ is a bounded continuous function on $\partial D$. Then $H_{D}g$ is harmonic inside $D$ and has boundary value $g(b)$ at every point $b$ regular for $D^c$. In other words, if $\partial D$ consists entirely of regular points for $D^c$ then $H_Dg$ solves the Dirichlet problem. This motivates the following definition: we say that $D$ is regular for the Dirichlet problem if and only if $\partial D \subset (D^c)^r$.
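The representation $f=H_Dg$ suggests a simple sampling scheme, often called "walk on spheres": since the hitting distribution of a sphere centered at the current point is uniform, one can sample $X(T_D)$ by repeatedly jumping to a uniform point on the largest sphere that fits inside $D$. The sketch below (my own illustration; the domain, boundary data, starting point and tolerance are arbitrary choices) estimates the solution on the unit disc:

```python
import math
import random

def walk_on_spheres(x, y, g, eps, rng):
    # Sample g(X(T_D)) for Brownian motion started inside the unit disc:
    # the hitting distribution of a circle centred at the current point
    # is uniform, so jump to a uniform point on the largest circle that
    # fits inside the disc, and stop once within eps of the boundary.
    while True:
        r = 1.0 - math.hypot(x, y)  # distance to the boundary circle
        if r < eps:
            norm = math.hypot(x, y)
            return g(x / norm, y / norm)  # evaluate g at the nearest boundary point
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)

def solve_dirichlet(x, y, g, n_samples=20_000, eps=1e-3, seed=5):
    rng = random.Random(seed)
    return sum(walk_on_spheres(x, y, g, eps, rng) for _ in range(n_samples)) / n_samples

# Boundary data: restriction of x^2 - y^2 to the unit circle. Since
# x^2 - y^2 is itself harmonic, the solution at (0.3, 0.4) must be
# 0.3^2 - 0.4^2 = -0.07.
estimate = solve_dirichlet(0.3, 0.4, lambda u, v: u * u - v * v)
print(estimate)
```

Stopping within $\epsilon$ of the boundary introduces a bias of order $\epsilon$, which is negligible next to the Monte Carlo error here.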

In summary, the generalized Dirichlet problem can be solved using Brownian motion whenever the boundary data is bounded and continuous, and the Dirichlet problem itself can be solved whenever, in addition, $D$ is regular for the Dirichlet problem.

### Uniqueness

The first uniqueness result follows easily from the Maximum Principle for harmonic functions: if $D$ is both bounded and regular for the Dirichlet problem then $H_{D}g$ is the unique solution to the Dirichlet problem with boundary condition $g$. We wish to remove the condition that $D$ be bounded. Consider boundary condition $1$: $H_{D}1=P_{x}(T_D<\infty)$ solves the generalized Dirichlet problem with boundary condition $1$, and in turn the function $P_{x}(T_D=\infty)=1-P_{x}(T_D<\infty)$ solves the generalized Dirichlet problem with boundary condition $0$. Hence if $g$ is bounded and continuous then the generalized Dirichlet problem with boundary value $g$ has a family of solutions given by $H_{D}g + \alpha\cdot P_{x}(T_D=\infty)$ with $\alpha$ an arbitrary real number. In other words, if one has uniqueness for the generalized Dirichlet problem then necessarily $P_{x}(T_D=\infty)=0$ on $D$, and this is true if and only if $D^{c}$ is recurrent (see $2.5$ above). Now suppose that $D^c$ is recurrent and $f$ is bounded and solves the generalized Dirichlet problem with boundary data $g$; then in fact $f=H_{D}g$ inside $D$. In other words, if $D^c$ is recurrent then a bounded solution of the generalized Dirichlet problem is unique. To prove the claim, consider the sequence $D_n$ of relatively compact open subsets of $D$ consisting of the points of $D$ that are more than $1/n$ in distance from $D^c$ and less than $n$ in distance from the origin. This sequence is increasing, and $D_n$ is both bounded and regular for the Dirichlet problem for each $n$. Since $f$ is harmonic on $D$, the Maximum Principle gives $f=H_{D_n}f$ on $D_n$ for every $n$. To complete the proof, note that $T_{D_n}\uparrow T_D< \infty$ almost surely under $P_x$ for every $x\in D$, since $D^c$ is recurrent.
By $3.1$, $\partial D \setminus (D^c)^r$ is polar, and since $f$ is a solution to the generalized Dirichlet problem, $f(X(T_{D_n}))$ converges to $g(X(T_D))$ $P_x$-almost surely for every $x\in D$. Thus, by the bounded convergence theorem, $H_{D_n}f$ converges to $H_{D}g$ on $D$. This easily implies that $f=H_{D} g$ on $D$ and completes the proof. A natural question is: what do solutions of the generalized Dirichlet problem look like when $D^c$ is transient? Surprisingly, all possible bounded solutions must be of the form $(H_{D}g)(x)+ \alpha\cdot P_{x}(T_D=\infty)$ for some real number $\alpha$. Note that the proof above does not work when $D^c$ is transient, since we are no longer able to identify the limit of $(H_{D_n}f)(x)$ as $n\rightarrow \infty$ for every $x\in D$.

We will need two lemmas to prove the main claim. The first lemma says that if $f$ is a bounded function on $D$ and $Q^{t}_{D}f=f$ on $D$ for every $t>0$, then $f(x)=\alpha\cdot P_{x}(T_{D}=\infty)$ on $D$ for some real number $\alpha$. Here $(Q^{t}_{D}f)(x)=E_{x}(f(X(t));t<T_D)$ is the semigroup of Brownian motion killed upon exiting $D$. To prove this, we use the semigroup property $Q_{D}^{t}Q_{D}^s=Q_{D}^{t+s}$, which gives $Q_{D}^{nt}f=f$ for all $n$. If $|f|\leq M$ then: $$\begin{array}{rcl}|f(x)|&=&|(Q_{D}^{nt}f)(x)|\\ & = & |E_{x}(f(X_{nt});nt<T_D)| \\ & \leq & M \cdot P_{x}(nt<T_D).\end{array}$$ Sending $n\rightarrow \infty$, we obtain $|f(x)|\leq M\cdot P_{x}(T_D=\infty)$. In particular $f(x)+M\cdot P_{x}(T_D=\infty)\geq 0$, and since $P_{x}(T_D=\infty)$ is itself $Q_{D}^{t}$-invariant, one may assume that $f$ is nonnegative; we also set $f=0$ on $D^{c}$. Then $P^{t}f\geq Q_{D}^{t}f=f$, so $P^{nt}f$ is an increasing sequence which converges since $f$ is bounded; the limit is a bounded harmonic function on $\mathbb{R}^d$ and hence a constant, say $\alpha$. Observe that $f(x)\leq \alpha$, since $f=Q_{D}^{nt}f\leq P^{nt}f$ on $D$ for every $n$; this in turn gives $f(x)=(Q_{D}^{nt}f)(x)\leq \alpha\cdot P_{x}(nt<T_D)$, and letting $n\rightarrow\infty$ yields $f(x)\leq \alpha \cdot P_{x}(T_D=\infty)$. Now consider that $P^{nt}f(x)=E_{x}(f(X(nt)))=f(x) + E_{x}(f(X(nt));T_D\leq nt)$ for every $n$, and $E_{x}(f(X(nt));T_D\leq nt)\leq \alpha\cdot P_{x}(T_D<\infty)$. Letting $n\rightarrow \infty$ gives $\alpha\leq f(x)+\alpha\cdot P_{x}(T_D<\infty)$, that is, $f(x)\geq \alpha\cdot P_{x}(T_D=\infty)$, and consequently $f(x)=\alpha \cdot P_{x}(T_D=\infty)$ on $D$.
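The killed semigroup $Q_{D}^{t}$ and the semigroup property used in this proof can be checked numerically in a one-dimensional special case not treated in the text: for $D=(0,\infty)$ the reflection principle gives the killed density in closed form, $Q_D(t,x,y)=p(t,x,y)-p(t,x,-y)$. The sketch below (grid and test points are arbitrary) verifies $Q_D^tQ_D^s=Q_D^{t+s}$ by numerical integration over $D$.

```python
import numpy as np

def p(t, x, y):
    # free Brownian transition density p(t, x, y)
    return np.exp(-(x - y) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def q(t, x, y):
    # killed density for D = (0, infinity), by the reflection principle
    return p(t, x, y) - p(t, x, -y)

x, y, t, s = 0.7, 1.3, 0.5, 0.8
z = np.linspace(0.0, 30.0, 300001)          # integration grid over D (tail truncated)
dz = z[1] - z[0]
lhs = np.sum(q(t, x, z) * q(s, z, y)) * dz  # (Q_D^t Q_D^s)(x, y)
rhs = q(t + s, x, y)                        # Q_D^{t+s}(x, y)
print(lhs, rhs)
```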

The second lemma says that if $f$ is any bounded harmonic function on $D$ and $D_{n}$ are the relatively compact open sets defined earlier, then $f(x)=\alpha\cdot P_{x}(T_D=\infty) + \lim_{n\rightarrow \infty} E_{x}(f(X(T_{D_n}));T_D<\infty)$ on $D$ for some real number $\alpha$. As in the proof in the previous paragraph, we must have $H_{D_n}f=f$ on $D_{n}$ for all $n$. We compute:

$$\begin{array}{rcl} (Q^t_{D_n}f)(x)=(Q^t_{D_n}H_{D_n}f)(x)&=& E_x\left(E_{X(t)}\left(f(X(T_{D_n}))\right);\,t<T_{D_n}\right) \\ & = & E_{x}\left(f(X(T_{D_n}))\circ \theta_t;\,t<T_{D_n}\right)\\ &=& E_x\left(f(X(T_{D_n}));\,t<T_{D_n}\right) \\ &=& (H_{D_n}f)(x)-E_{x}\left(f(X(T_{D_n}));\,t\geq T_{D_n}\right) \\ & = & f(x)-E_{x}\left(f(X(T_{D_n}));\,t\geq T_{D_n}\right). \end{array}$$

We used the Markov Property for Brownian motion to get from line $1$ to line $2$, and to go from line $2$ to line $3$ we used the fact that $f(X(T_{D_n}))\circ \theta_t=f(X(T_{D_n}))$ on the set of paths $\{t<T_{D_n}\}$. We now consider $Q^t_{D_n}f$ as $n\rightarrow \infty$ and $t\rightarrow \infty$; the idea is to show that $Q_{D}^tf$ converges as $t\rightarrow \infty$. Fixing $t>0$, one has $\lim_{n\rightarrow \infty}Q^t_{D_n}f=Q^t_{D}f$ and consequently $$\begin{array}{rcl} (Q_{D}^tf)(x)&=&f(x)-\lim_{n\rightarrow \infty}E_x(f(X(T_{D_n}));T_{D_n}\leq t) \\ & = & f(x)-\lim_{n\rightarrow \infty}E_x(f(X(T_{D_n}));T_D\leq t).\end{array}$$ Now we send $t\rightarrow \infty$ to get: $$\begin{array}{rcl} \lim _{t\rightarrow \infty}(Q_{D}^tf)(x) & = & f(x) - \lim_{t\rightarrow \infty}\lim_{n\rightarrow \infty}E_x(f(X(T_{D_n}));T_D\leq t) \\ &=& f(x)-\lim_{n\rightarrow \infty}E_x(f(X(T_{D_n}));T_D<\infty). \end{array}$$ Hence the limit exists; denote it by $r(x)$. It follows easily from the semigroup property ($Q_D^s Q_D^t f=Q_D^{s+t}f$, letting $t\rightarrow\infty$) that $Q_D^s r=r$ for all $s>0$. By the previous lemma, $r(x)=\alpha \cdot P_x(T_D=\infty)$ for some $\alpha$, and this completes the proof.

Now suppose we have a bounded solution $f$ to the generalized Dirichlet problem with boundary condition $g$. By the second lemma, $f(x)=\alpha\cdot P_{x}(T_D=\infty)+\lim_{n\rightarrow \infty} E_{x}(f(X(T_{D_n}));T_D<\infty)$ on $D$. Again $T_{D_n}\uparrow T_D$ on $\{T_D<\infty\}$ and, since $\partial D \setminus (D^c)^r$ is polar by $3.1$ above, $X(T_{D_n})$ converges almost surely on $\{T_D<\infty\}$ to $X(T_D)$, which is a regular point for the Dirichlet problem; since $f$ solves the generalized Dirichlet problem with boundary data $g$, $f(X(T_{D_n}))$ converges to $g(X(T_D))$ as $n\rightarrow \infty$ on $\{T_D<\infty\}$. By assumption $f$ is bounded, and thus by the bounded convergence theorem: $$f(x)=\alpha \cdot P_{x}(T_D=\infty)+(H_{D}g)(x)$$ on $D$.

In summary, both the Dirichlet problem and the generalized Dirichlet problem have a unique bounded solution if and only if $D^c$ is recurrent, and when $D^c$ is transient the bounded solutions are exactly the functions of the specific form given above.

### Non-Existence of Solutions

Can we always solve the Dirichlet problem with continuous boundary conditions? It turns out that the answer is no. The first counterexample is quite simple: let $D$ be an open ball of radius $R$ centered at the origin with the origin removed. We let $\phi(x)=1$ if $\|x\|=R$ and $\phi(0)=0$. Since $D^c$ is recurrent, there exists at most one solution to this Dirichlet problem, and it must be $H_D\phi$; since the origin is polar, Brownian motion exits $D$ on the sphere almost surely, so $(H_D\phi)(x)=1$ on $D$, which is incompatible with the boundary value $0$ at the origin. Thus there is no solution to this Dirichlet problem. Note that for $d=2$ this $D$ is not simply connected. This raises the question: do all Dirichlet problems admit solutions whenever $D$ is a simply connected domain? The answer is again no for $d\geq 3$ and yes for $d=2$. The counterexample is provided by Lebesgue's thorn. By the Wiener test for thorns (see the section on thorns), there exists a sharp enough thorn with vertex at $0$ such that $0$ is not regular for that thorn. Let $D$ be an open ball centered at $0$ with the thorn removed, and let $\phi(x)=1$ for $x\in \partial D\setminus\{0\}$ and $\phi(x)=0$ for $x=0$. Then $D$ is a simply connected domain, but there is no solution to the Dirichlet problem with boundary data $\phi$ (the argument is essentially the same as above). For $d=2$, it can be shown that every simply connected domain is regular for the Dirichlet problem, so no such counterexample is possible.
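The arithmetic behind the punctured-ball example can be made explicit in $d=3$ (illustrative radii; the formula below is the standard hitting probability between concentric spheres): Brownian motion started at radius $r_0$ hits a small ball of radius $\varepsilon$ around the deleted origin before the sphere of radius $R$ with probability $(1/r_0-1/R)/(1/\varepsilon-1/R)$, which tends to $0$ as $\varepsilon\rightarrow 0$.

```python
# As eps -> 0 the probability of reaching the deleted origin before the outer
# sphere vanishes: the origin is polar, so the exit point lies on the outer
# sphere almost surely and (H_D phi)(x) = 1 throughout D.

def hit_origin_first(eps, r0=0.5, R=1.0):
    # probability of hitting the eps-ball around 0 before the sphere of radius R
    return (1 / r0 - 1 / R) / (1 / eps - 1 / R)

for eps in (0.1, 0.01, 0.001):
    print(eps, hit_origin_first(eps))
```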

## Green's Functions and Poisson's Equation

### Green's Functions

In what follows, $D$ will denote an open subset of $\mathbb{R}^d$ for $d\geq 3$. We wish to solve Poisson's equation: $\bigtriangleup f = \rho$. We first consider the Green's function of an arbitrary open subset $D$. We let $G_D(x,y)=\int_{0}^{\infty}Q_{D}(t,x,y)\,dt$, where $Q_D(t,x,y)$ is defined by $\int_{A}Q_D(t,x,y)\,dy=P_{x}(X(t)\in A;\, t<T_D)$. In other words, $Q_D(t,x,y)$ is the transition density of Brownian motion killed when it first exits $D$. In particular we have $G_D(x,y)\leq g(x,y)$, so the measures $G_D(x,dy)$ are Radon measures; these will be called the Green measures for $D$. It is straightforward to check that $G_D(x,A)$ is the expected amount of time that Brownian motion starting at $x$ spends in $A$ before first exiting $D$. Properties of $G_D(x,y)$ follow directly from properties of $Q_D(t,x,y)$. The following hold:

- $Q_D(t,x,y)=Q_D(t,y,x)$. In particular, $G_D(x,y)=G_D(y,x)$.
- $Q_D(t,x,y)=0$ whenever $x$ or $y$ belongs to $(D^c)^r$. In particular, $G_D(x,y)=0$ whenever $x$ or $y$ is regular for $D^c$.
- $Q_D(t,x,y)$ is upper semicontinuous in each variable separately. In particular (by Fatou's lemma), $G_D(x,y)$ is upper semicontinuous in each variable separately, and $G_D(x,y)$ approaches $0$ as $x$ approaches a regular point for $D^c$.
- $Q_D(t,x,y)$ is jointly continuous on $(0,\infty) \times (\mathbb{R}^d\setminus\partial D) \times (\mathbb{R}^d\setminus\partial D)$.

We list a few additional properties of the Green's functions:

- $G_D(x,y)=G_{D_0}(x,y)+ \int_{D}G_{D}(z,y)\,H_{D_0}(x,dz)$ on $D_0 \times D_0$ whenever $D_0\subset D$.
- $G_D(x,y)$ is locally integrable in each variable separately.
- $G_D(x,y)$ is continuous on $D\times D$ in the extended sense (there is blow up when $x=y$).
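The occupation-time interpretation of the Green measure can be checked by simulation in the simplest nontrivial case, $d=1$ and $D=(0,1)$ (a Monte Carlo sketch with illustrative parameters; the one-dimensional formula $G_D(x,y)=2\min(x,y)(1-\max(x,y))$ for standard Brownian motion is standard but not derived in the text). The expected time spent in $A=(1/4,3/4)$ before exiting, starting from $x=1/2$, is $\int_A G_D(1/2,y)\,dy = 3/16$.

```python
import numpy as np

rng = np.random.default_rng(1)

def occupation_time(x0=0.5, a=0.25, b=0.75, dt=2e-4, n_paths=1500, max_steps=30000):
    # Time spent in (a, b) by 1d Brownian motion from x0 before exiting (0, 1).
    x = np.full(n_paths, x0)
    alive = np.ones(n_paths, dtype=bool)
    occ = np.zeros(n_paths)
    for _ in range(max_steps):
        if not alive.any():
            break
        x[alive] += rng.normal(scale=np.sqrt(dt), size=int(alive.sum()))
        occ += dt * (alive & (x > a) & (x < b))   # accumulate time spent in A
        alive &= (x > 0.0) & (x < 1.0)            # kill paths that have exited D
    return occ.mean()

est = occupation_time()
exact = 3.0 / 16.0     # integral of G_D(1/2, y) over (1/4, 3/4)
print(est, exact)
```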

### Generalized Laplacian

Suppose $f$ is twice continuously differentiable in $D$ and suppose $s<\bigtriangleup f (x) < t$. Then there exists $r$ small enough that $s<\bigtriangleup f (y) < t$ for all $y\in B(x,r)$. Let $g(y)=\|y-x\|^{2}/2d$, so that $\bigtriangleup g \equiv 1$. Now let $h$ be the unique harmonic function on $B(x,r)$ with boundary values $f-tg$. We then have $\bigtriangleup (f-tg-h)(y)=\bigtriangleup (f-tg)(y)<0$ inside $B(x,r)$ and $f-tg-h=0$ on the boundary of $B(x,r)$. This implies that the minimum of $f-tg-h$ is achieved on the boundary of $B(x,r)$; hence $f-tg-h\geq 0$ inside $B(x,r)$, and since $g(x)=0$ we get $f(x)\geq h(x)$. By the mean value property, one has $$\begin{array}{rcl}h(x)&=& \int h(y)\, \sigma_r(x,dy) \\ & = & \int (f(y)-tg(y))\, \sigma_r(x,dy) \\ & = & \int f(y)\, \sigma_r(x,dy)-tr^2/2d.\end{array}$$ Hence we obtain the inequality $t\geq \frac{2d}{r^2}\big[\int f(y)\, \sigma_{r}(x,dy) -f(x)\big]$. Arguing similarly with the lower bound $s$, we obtain $s\leq \frac{2d}{r^2}\big[\int f(y)\, \sigma_{r}(x,dy) -f(x)\big]$. In other words, $\lim_{r\rightarrow 0}\frac{2d}{r^2}\big[\int f(y)\, \sigma_{r}(x,dy) -f(x)\big]=\bigtriangleup f(x)$. We say that a function $f$ has a generalized Laplacian at $x$ whenever the limit $\lim_{r\rightarrow 0}\frac{2d}{r^2}\big[\int f(y)\, \sigma_{r}(x,dy) -f(x)\big]$ exists. The above shows that whenever $f$ is twice continuously differentiable, the generalized Laplacian coincides with the usual Laplacian. This approach to the Laplacian will prove useful for solving Poisson's equation.
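The sphere-average characterization can be checked numerically. The sketch below works in $d=2$ with an arbitrarily chosen cubic test function; for a cubic the quotient happens to be exact for every $r$ (for general smooth $f$ the error is $O(r^2)$).

```python
import numpy as np

def f(x, y):
    return x ** 3 + 2 * x * y ** 2      # Laplacian: 6x + 4x = 10x

def sphere_avg_quotient(x0, y0, r, n=20000):
    # (2d/r^2) [ average of f over the circle of radius r about (x0, y0) - f(x0, y0) ]
    d = 2
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    avg = np.mean(f(x0 + r * np.cos(theta), y0 + r * np.sin(theta)))
    return (2 * d / r ** 2) * (avg - f(x0, y0))

x0, y0 = 0.3, -0.7
for r in (0.5, 0.1, 0.02):
    print(r, sphere_avg_quotient(x0, y0, r))   # Laplacian at (x0, y0) is 10 * x0 = 3.0
```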

### Poisson's Equation

We will now use the Green's functions to solve Poisson's equation. As usual, we let $D$ be an arbitrary open subset of $\mathbb{R}^d$. We first consider the function $(g\rho)(x)=\int g(x,y)\rho(y)\,dy$ on $D$. Assume for the moment that $\rho$ is chosen so that $g\rho$ is twice continuously differentiable, and let us compute its generalized Laplacian. Fix $x$ in $D$ and take $r$ sufficiently small that $B(x,r)\subset D$. Since $H_{B(x,r)}(x,dz)$ is $\sigma_r(x,dz)$, we have $(g\rho)(x)=(G_{B(x,r)}\rho)(x)+ \int (g\rho)(z)\, \sigma_{r}(x,dz)$, and consequently, writing $f=g\rho$, $$\frac{2d}{r^2}\Big[\int f(y)\, \sigma_{r}(x,dy) -f(x)\Big]=-\frac{2d}{r^2} \cdot (G_{B(x,r)}\rho)(x).$$ Recall from our discussion of stopping times in the first section that for suitable stopping times $\tau$ one has $E_x(X(\tau))=x$ and $E_x(\|X(\tau)-x\|^2)=d\cdot E_{x}(\tau)$; one easily deduces that $E_x(T_{B(x,r)})=r^2/d=G_{B(x,r)}(x,B(x,r))$. Since $\rho$ is continuous, $(G_{B(x,r)}\rho)(x)=\rho(x)\cdot G_{B(x,r)}(x,B(x,r))+o(r^2)$, and it follows that $\bigtriangleup (g\rho)(x)=-2\rho(x)$. It turns out that when $\rho$ is Hölder continuous and $\int_{D} (1+\|y\|)^{2-d}|\rho(y)|\,dy<\infty$, then in fact $(g\rho)(x)$ is twice continuously differentiable and $D^{\alpha}(g\rho)(x)=((D^{\alpha}g) \rho)(x)$ for $|\alpha|\leq 2$.
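The identity $E_x(T_{B(x,r)})=r^2/d$ lends itself to a quick Monte Carlo check (illustrative parameters; the discretization detects the exit slightly late, so the estimate overshoots a little).

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_exit_time(r=1.0, d=3, dt=1e-3, n_paths=2000, max_steps=20000):
    # Brownian motion started at the center of B(0, r); exact mean exit time r^2/d.
    x = np.zeros((n_paths, d))
    alive = np.ones(n_paths, dtype=bool)
    tau = np.zeros(n_paths)
    for _ in range(max_steps):
        if not alive.any():
            break
        x[alive] += rng.normal(scale=np.sqrt(dt), size=(int(alive.sum()), d))
        tau += dt * alive                        # accumulate time while inside the ball
        alive &= np.linalg.norm(x, axis=1) < r   # kill paths that have exited
    return tau.mean()

est = mean_exit_time()
print(est)   # exact value for d = 3, r = 1 is 1/3
```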

Since $(H_D(g\rho))(x)=\int (g\rho)(z)\, H_D(x,dz)$ is harmonic in $D$ and $(G_D\rho)(x)=(g\rho)(x)-(H_D(g\rho))(x)$, the identity $\bigtriangleup (g\rho)(x)=-2\rho(x)$ gives $\bigtriangleup (G_D\rho)(x)=-2\rho(x)$ as well. In order to study the Poisson equation with boundary conditions, we need to understand the behavior of $(G_D\rho)(x)$ as $x$ approaches $\partial D$. We claim that $(G_D\rho)(x)$ approaches zero as $x$ approaches $\partial D \cap (D^c)^r$ whenever $\rho$ is bounded and integrable on $D$. To see this, let $b$ be a regular point for $D^c$ on the boundary of $D$ and let $B_1\subset B_2$ be two concentric closed balls centered at $b$. We decompose $\rho=\rho_1 + \rho_2$ with $\rho_1$ vanishing on $B_2$ and $\rho_2$ vanishing outside $B_2$. Note that $G_D(x,y)\leq g(x,y)$ and that $g(x,y)$ is uniformly bounded for $y\in B_2^c$ and $x\in B_1$. Furthermore, since $G_D(x,y)$ is upper semicontinuous in $x$ and $G_D(b,y)=0$ for all $y\in D$, $G_D(x,y)$ approaches $0$ as $x$ approaches $b$, for every $y$ in $D$. It now follows from the dominated convergence theorem that $(G_D\rho_1)(x)\rightarrow 0$ as $x$ approaches $b$ (recall that $\rho_1$ is integrable on $D$). Since $\rho_2$ is supported on $B_2$ and bounded, it suffices to prove that $G_D(x,B_2)$ approaches zero as $x$ approaches $b$. To this end, we let $t>0$ and compute:

$$\begin{array}{rcl}G_D(x,B_2)& = & E_x\left(\int_{0}^{T_D}1_{B_2}(X(s))\, ds\right) \\ & = & E_x\left(\int_{0}^{T_D}1_{B_2}(X(s))\, ds ;\, T_D \leq t\right) + E_x\left(\int_{0}^{T_D}1_{B_2}(X(s))\, ds ;\, T_D > t\right) \\ &=& E_x\left(\int_{0}^{T_D}1_{B_2}(X(s))\, ds ;\, T_D \leq t\right) + E_x\left(E_x\left(\int_{0}^{T_D}1_{B_2}(X(s))\,ds \,\Big|\, \mathcal{F}_t\right);\,T_D>t\right) \\ & \leq & t + P_x(T_D > t)\left(t+\sup_{y\in D}G_D(y,B_2)\right) \end{array}$$

Since $G_D(y,B_2)\leq \int_{B_2} g(y,z)\,dz$, the quantity $G_D(y,B_2)$ is uniformly bounded in $y$, and since $b$ is regular for $D^c$, we have $P_x(T_D >t) \rightarrow 0$ as $x$ approaches $b$. Letting first $x\rightarrow b$ and then $t\rightarrow 0$ in the bound above gives $G_D(x,B_2) \rightarrow 0$ as $x$ approaches $b$. This completes the proof.

### Boundary Conditions and Uniqueness

If we consider the Poisson equation with boundary conditions, the problem is easily solved using our results for the Dirichlet problem. Let $g$ be any continuous and bounded function on $\partial D$. If $\rho$ is Hölder continuous, bounded, and integrable on $D$, then $(G_D\rho)(x) \rightarrow 0$ as $x$ approaches any regular point $b$ for $D^c$, and $\bigtriangleup (G_D\rho)(x)=-2\rho(x)$. On the other hand, we know that $(H_Dg)(x)$ is harmonic in $D$ with boundary value $g(b)$ as $x$ approaches $b$. Hence, adding both functions, we obtain a function $f$ such that $\bigtriangleup f(x)=-2\rho(x)$ with $\lim_{x\rightarrow b}f(x)=g(b)$ for every $b\in (D^c)^r$. In particular, if $D$ is regular for the Dirichlet problem and $\rho$ is Hölder continuous, bounded, and integrable on $D$, then we can completely solve the Poisson equation whenever the boundary data is bounded and continuous. As before, the solution is unique if and only if $D^c$ is recurrent; otherwise any two bounded solutions differ by $\alpha\cdot P_{x}(T_D=\infty)$ for some real number $\alpha$.
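As a sanity check of the recipe $f=G_D\rho+H_Dg$, here is a one-dimensional special case (illustrative; the text works in $d\geq 3$, but the same structure is visible on $D=(0,1)$, where $(G_D\rho)(x)=x(1-x)$ for $\rho\equiv 1$ and $H_Dg$ is the linear interpolation of the boundary data).

```python
import numpy as np

# Take rho = 1 and boundary data g(0) = 2, g(1) = 5 on D = (0, 1).  Then
# (G_D rho)(x) = x(1 - x) is the expected exit time, (H_D g)(x) = 2(1-x) + 5x
# is harmonic (linear), and their sum satisfies f'' = -2 rho with the
# prescribed boundary values.

xs = np.linspace(0.0, 1.0, 1001)
h = xs[1] - xs[0]
f = xs * (1 - xs) + 2 * (1 - xs) + 5 * xs
second_diff = (f[2:] - 2 * f[1:-1] + f[:-2]) / h ** 2   # finite-difference f''
err = np.max(np.abs(second_diff + 2.0))                 # should vanish
print(err, f[0], f[-1])                                 # boundary values 2 and 5
```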

## References

- Durrett, R. (2005). Probability: Theory and Examples. Thomson.
- Port, S. C.; Stone, C. J. (1978). Brownian Motion and Classical Potential Theory. Academic Press.
- Evans, L. C.; Gariepy, R. F. (1992). Measure Theory and Fine Properties of Functions. CRC Press.