# Homework 4 (due April 2)

1. The $n$th Hermite polynomial is $H_n(t,x) = {(-t)^n\over n!} e^{x^2\over 2t} {d^n\over dx^n} e^{-{x^2\over 2t}}$. Show that the $H_n$ play the role in the Ito calculus that the monomials ${x^n\over n!}$ play in ordinary calculus: $dH_{n+1} (t, B_t) = H_n(t, B_t)\, dB_t.$
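As a sanity check (not part of the solution), the first few polynomials can be written out by hand, e.g. $H_2(t,x)=(x^2-t)/2$ and $H_3(t,x)=(x^3-3tx)/6$, and one can verify numerically the two facts that combine via Ito's formula: $\partial_x H_{n+1}=H_n$ and the backward heat equation $\partial_t H_n + \frac12\partial_x^2 H_n = 0$. A minimal Python sketch using finite differences (test point chosen arbitrarily):

```python
# Hand-expanded low-order Hermite polynomials from the definition above:
#   H2(t,x) = (x^2 - t)/2,  H3(t,x) = (x^3 - 3 t x)/6

def H2(t, x):
    return (x * x - t) / 2.0

def H3(t, x):
    return (x ** 3 - 3.0 * t * x) / 6.0

def d_dx(f, t, x, h=1e-5):
    # central finite difference in x
    return (f(t, x + h) - f(t, x - h)) / (2.0 * h)

def heat_residual(f, t, x, h=1e-4):
    # df/dt + (1/2) d^2 f/dx^2, which should vanish for each H_n
    ft = (f(t + h, x) - f(t - h, x)) / (2.0 * h)
    fxx = (f(t, x + h) - 2.0 * f(t, x) + f(t, x - h)) / (h * h)
    return ft + 0.5 * fxx

t0, x0 = 0.7, 1.3                                  # arbitrary test point
dx_check = abs(d_dx(H3, t0, x0) - H2(t0, x0))      # expect ~0
heat_check = abs(heat_residual(H3, t0, x0))        # expect ~0
```

Together these two identities give $dH_{n+1}(t,B_t) = H_n(t,B_t)\,dB_t$ with no $dt$ term.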

2. The backward equation for the Ornstein-Uhlenbeck process is ${\partial u\over \partial t} = {1\over 2} {\partial^2 u\over \partial x^2} -\rho x {\partial u\over \partial x}.$ Show that $v(t,x) =u(t,xe^{\rho t})$ satisfies $e^{2\rho t} {\partial v\over \partial t} = {1\over 2} {\partial^2 v\over \partial x^2}$ and transform this to the heat equation by $\tau= {1-e^{-2\rho t}\over 2\rho}$. Use this to derive Mehler's formula for the transition probabilities, $p(t,x,dy)= {e^{ - { \rho(y-xe^{-\rho t})^2\over (1-e^{-2\rho t})}}\over \sqrt{ 2\pi \left( { (1-e^{-2\rho t})\over 2\rho}\right)}} dy.$
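As a numerical check on Mehler's formula (illustrative only; parameter values are arbitrary), the density on the right should integrate to 1 and, consistent with the reduction to the heat equation at time $\tau=(1-e^{-2\rho t})/(2\rho)$, it should have mean $xe^{-\rho t}$ and variance $(1-e^{-2\rho t})/(2\rho)$. A sketch using simple trapezoidal quadrature:

```python
import math

def mehler_density(t, x, y, rho):
    # transition density p(t, x, y) from Mehler's formula
    var = (1.0 - math.exp(-2.0 * rho * t)) / (2.0 * rho)
    mu = x * math.exp(-rho * t)
    return math.exp(-rho * (y - mu) ** 2 / (1.0 - math.exp(-2.0 * rho * t))) \
        / math.sqrt(2.0 * math.pi * var)

rho, t, x = 0.8, 1.5, 0.6               # arbitrary illustrative values
lo, hi, n = -10.0, 10.0, 4000           # integration grid, wide enough for the tails
h = (hi - lo) / n
ys = [lo + i * h for i in range(n + 1)]
w = [h if 0 < i < n else h / 2 for i in range(n + 1)]   # trapezoid weights

mass = sum(wi * mehler_density(t, x, y, rho) for wi, y in zip(w, ys))
mean = sum(wi * y * mehler_density(t, x, y, rho) for wi, y in zip(w, ys))
var = sum(wi * (y - mean) ** 2 * mehler_density(t, x, y, rho) for wi, y in zip(w, ys))

target_mean = x * math.exp(-rho * t)
target_var = (1.0 - math.exp(-2.0 * rho * t)) / (2.0 * rho)
```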

3. i. Show that $X(t)=(1-t)\int_0^t {1\over 1-s} dB(s)$ is the solution of $dX(t)= -{X(t)\over 1-t} dt + dB(t), \qquad 0\le t<1,\qquad X(0)=0.$ ii. Show that $X(t)$ is Gaussian and find the mean and covariance.

iii. Show that for $0=t_0< t_1<\cdots< t_n<1$ the variables $\frac{X(t_i)}{1-t_i} - \frac{X(t_{i-1}) }{ 1-t_{i-1}}$, $i=1,\ldots,n$, are independent.

iv. Show that the finite dimensional distributions are given by $P(X(t_1)\in dx_1,\ldots, X(t_n)\in dx_n) = \prod_{i=1}^n p(t_i-t_{i-1}, x_i-x_{i-1}) \frac{p(1-t_n, -x_n)}{p(1,0)} dx_1\cdots dx_n$ where $p(t,x)$ is the Gaussian kernel.

v. Show that $X(t)$ is equal in distribution to a Brownian motion conditioned to have $B(1)=0$. It is the Brownian Bridge.

vi. For fixed constants $a$ and $b$ solve the stochastic differential equation $dX(t) = {b-X(t)\over 1-t} dt + dB(t),\qquad 0\le t<1\qquad X(0) =a.$ This is the Brownian Bridge from $a$ to $b$.
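The SDE in vi. can be checked by simulation (an illustrative sketch with arbitrarily chosen $a$, $b$ and a fixed seed): an Euler scheme run up to a time just short of 1 should produce paths pinned near $b$, since the bridge from $a$ to $b$ has mean $a+(b-a)t$ and variance $t(1-t)$ at time $t$.

```python
import math, random

random.seed(0)

a, b = -1.0, 2.0            # bridge endpoints (arbitrary for this check)
n_steps, n_paths = 2000, 1000
dt = 1.0 / n_steps
t_stop = 1.0 - 5 * dt       # stop just short of t = 1, where the drift blows up

endpoints = []
for _ in range(n_paths):
    x, t = a, 0.0
    while t < t_stop:
        # Euler step for dX = (b - X)/(1 - t) dt + dB
        x += (b - x) / (1.0 - t) * dt + math.sqrt(dt) * random.gauss(0.0, 1.0)
        t += dt
    endpoints.append(x)

# near t = 1 the bridge mean a + (b - a) t is close to b
mean_end = sum(endpoints) / n_paths
```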

4. Consider the general linear stochastic differential equation $dX_t = [ A(t) X_t + a(t) ] dt + \sigma(t) dB_t,\qquad X_0=x,$ where $B_t$ is an $r$-dimensional Brownian motion independent of the initial vector $x\in {\bf R^d}$ and the $d\times d$, $d\times 1$ and $d\times r$ matrices $A(t)$, $a(t)$ and $\sigma(t)$ are non-random. Show that the solution is given by $X_t= \Phi(t) [ x+\int_0^t \Phi^{-1}(s) a(s) ds + \int_0^t\Phi^{-1}(s) \sigma(s) dB_s ]$ where $\Phi$ is the $d\times d$ matrix solution of $\dot \Phi(t) = A(t) \Phi(t), \qquad \Phi(0) = I.$
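With $\sigma\equiv 0$ the claimed formula reduces to the classical variation-of-constants solution of $\dot X = A(t)X + a(t)$, which is easy to check numerically. A scalar sketch ($d=r=1$, so $\Phi(t)=e^{\int_0^t A(s)\,ds}$; the choices $A(t)=\cos t$, $a(t)\equiv 1$ are arbitrary):

```python
import math

# scalar case: Phi(t) = exp(int_0^t A),  X(t) = Phi(t) [x0 + int_0^t Phi(s)^{-1} a(s) ds]
A = math.cos
a = lambda s: 1.0
x0 = 0.5
T, n = 2.0, 100000
dt = T / n

# variation-of-constants formula, integral by a Riemann sum
integral = 0.0
for i in range(n):
    s = i * dt
    integral += math.exp(-math.sin(s)) * a(s) * dt
x_formula = math.exp(math.sin(T)) * (x0 + integral)

# direct Euler integration of dX/dt = A(t) X + a(t) for comparison
x_euler = x0
for i in range(n):
    s = i * dt
    x_euler += (A(s) * x_euler + a(s)) * dt
```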

5. A common model for interest rates is the Vasicek model, $dr(t) = (\theta-\alpha r(t)) dt + \sigma dB(t)$. Relate it to the Ornstein-Uhlenbeck process. The discount function is $Z_{t, T}(\omega) = E[ e^{-\int_t^T r(s) ds} ~|~{\cal F}(t) ].$ i. Show that in fact $Z_{t,T}$ is a function of $r(t)$ alone (which we may as well call $Z_{t,T}(r(t))$). ii. Fix $t$ and show that $Z_{t,T}(r)$ is the solution of the equation ${\partial Z\over\partial T} =(\theta-\alpha r) {\partial Z\over \partial r} + {\sigma^2\over 2}{\partial^2 Z\over \partial r^2} - r Z$ with $Z_{t,t}=1$. iii. Show that the continuously compounded interest rate $R_{t,T}= -(T-t)^{-1}\ln Z_{t,T}$ is of the special form $R_{t,T} = a(T-t) + b(T-t) r(t)$ and find the functions $a(t)$ and $b(t)$. iv. Repeat i.-iii. for the CIR model $dr (t) = (\alpha -\beta r(t)) dt + \sigma\sqrt{r(t)} dB(t).$

v. Compute the mean $E[r(t)]$ and the variance ${\rm Var}(r(t))$ for the Vasicek and CIR models.
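For v., one route is to take expectations in the SDE: the stochastic integral has mean zero, so for Vasicek $m(t)=E[r(t)]$ satisfies the ODE $\dot m = \theta - \alpha m$, $m(0)=r_0$, with solution $m(t)=r_0e^{-\alpha t}+\frac{\theta}{\alpha}(1-e^{-\alpha t})$ (the second moment, and the CIR model, can be handled the same way). A numerical sketch of that one step, with arbitrary parameters:

```python
import math

theta, alpha, r0 = 0.05, 1.2, 0.03   # arbitrary illustrative parameters
T, n = 3.0, 100000
dt = T / n

# Euler integration of dm/dt = theta - alpha m (the expectation of the Vasicek SDE)
m = r0
for _ in range(n):
    m += (theta - alpha * m) * dt

# closed-form mean for comparison
m_closed = r0 * math.exp(-alpha * T) + (theta / alpha) * (1.0 - math.exp(-alpha * T))
```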

6. Let $X_1(\cdot)$ and $X_2(\cdot)$ solve the two constant coefficient SDEs $dX_1(t) = b\, dt + \sigma_1 dB(t)$ and $dX_2(t) = b\, dt + \sigma_2 dB(t)$ with $\sigma_1\neq\sigma_2$. How big is ${P(X_1(t_1)\in dx_1,\ldots,X_1(t_n)\in dx_n)\over P(X_2(t_1)\in dx_1,\ldots,X_2(t_n)\in dx_n)}$ as $n$ becomes large?
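A numerical illustration of what to expect (parameters arbitrary): on a grid $t_i=i\,\Delta t$ the two finite-dimensional densities are products of Gaussian increment densities with the same means $b\,\Delta t$ but variances $\sigma_1^2\Delta t$ and $\sigma_2^2\Delta t$, so one can evaluate the log of the ratio along a path sampled from the first model and watch it grow linearly in $n$:

```python
import math, random

random.seed(1)

b, s1, s2 = 0.3, 1.0, 2.0       # same drift, different diffusion coefficients
n, dt = 2000, 0.01

log_ratio = 0.0
for _ in range(n):
    # increment of X1 sampled from its own law
    dx = b * dt + s1 * math.sqrt(dt) * random.gauss(0.0, 1.0)
    # log of (Gaussian density of dx under sigma1) / (under sigma2)
    z = (dx - b * dt) ** 2
    log_ratio += math.log(s2 / s1) + z / (2.0 * s2 ** 2 * dt) - z / (2.0 * s1 ** 2 * dt)

# the per-increment expectation log(s2/s1) + (s1^2/s2^2 - 1)/2 is strictly
# positive when s1 != s2, so log_ratio grows like n: the two path measures
# are mutually singular in the limit
```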

7. If $P$ and $\tilde P$ are equivalent and $\frac{d\tilde P }{ dP }= Z$ show that $\frac{d P }{ d\tilde P }= \frac1{Z}$. Let $P^{a,b}_x$ denote the probability measure on $C ([0, T ])$ corresponding to the solution of the stochastic differential equation $dX (t) = \sigma(t, X (t))dB(t) + b(t, X (t))dt,\quad X (0) = x$ where $a = \sigma\sigma^T$. Let $b_1 \neq b_2$. Write expressions for $\frac{dP^{ a,b_1}_ x }{dP^{ a,b_2}_ x }$ and $\frac{dP^{ a,b_2}_ x }{dP^{ a,b_1}_ x }$ using the Cameron-Martin-Girsanov formula. Is the second the inverse of the first, or not? Find an explanation.

8. Let $\alpha(x) = (\alpha_1 (x), \ldots, \alpha_n(x))$ be a smooth function from ${\bf R}^n$ to ${\bf R}^n$. Consider the partial differential equation for $x \in {\bf R}^n$ and $t > 0$, $\frac{\partial u}{\partial t} = \frac12 \sum^n_{i=1} \frac{\partial^2 u}{\partial x^2_i} +\sum_{i=1}^n\alpha_i (x) \frac{\partial u}{\partial x_i}$, $u(0, x) = f (x).$ i. Use the Girsanov theorem to show that the solution is $u(t, x) =E_x[ e^{\int_0^t\alpha(B(s))\cdot dB(s)-\frac12\int_0^t |\alpha(B(s))|^2 ds}f(B(t))].$

ii. Suppose that $\alpha(x) = \nabla\gamma (x)$ for some function $\gamma : {\bf R}^n\to {\bf R}$. Use Ito's formula to show that in this case $u(t, x) = e^{-\gamma (x)} E_x[e^{\gamma (B(t))- \frac12\int_0^t (|\nabla\gamma (B(s))|^2+\Delta\gamma (B(s)))ds }f (B(t))].$

iii. Use the Feynman-Kac formula to show that $v(t, x) = e^{\gamma (x)} u(t, x)$ is the solution of $\frac{\partial v}{\partial t} = \frac12 \Delta v - \frac12 (|\nabla\gamma|^2 + \Delta\gamma )v.$
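For i., a sanity check in one dimension with a constant drift $\alpha(x)\equiv c$ (so $\gamma(x)=cx$): both sides become explicit Gaussian integrals, and the Girsanov representation reduces to the identity $E[e^{cB_t - c^2 t/2} f(x+B_t)] = E[f(x + B_t + ct)]$, i.e. the exponential factor tilts the Gaussian so as to shift its mean by $ct$. Verified by quadrature (values of $c$, $t$, $x$ and the test function are arbitrary):

```python
import math

def phi(z, var):
    # centered Gaussian density with variance var
    return math.exp(-z * z / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

c, t, x = 0.7, 1.3, 0.2
f = lambda y: math.cos(y)           # any bounded test function

lo, hi, n = -12.0, 12.0, 6000       # trapezoidal grid, wide enough for the tails
h = (hi - lo) / n
zs = [lo + i * h for i in range(n + 1)]
w = [h if 0 < i < n else h / 2 for i in range(n + 1)]

# lhs: E[ exp(c B_t - c^2 t / 2) f(x + B_t) ]
lhs = sum(wi * math.exp(c * z - c * c * t / 2.0) * f(x + z) * phi(z, t)
          for wi, z in zip(w, zs))
# rhs: E[ f(x + B_t + c t) ], Brownian motion with the drift added
rhs = sum(wi * f(x + z + c * t) * phi(z, t) for wi, z in zip(w, zs))
```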