# Almost Sure

## 31 August 10

### Zero-Hitting and Failure of the Martingale Property

For nonnegative local martingales, there is an interesting symmetry between the failure of the martingale property and the possibility of hitting zero, which I will describe now. I will also give a necessary and sufficient condition for solutions to a certain class of stochastic differential equations to hit zero in finite time and, using the aforementioned symmetry, infer a necessary and sufficient condition for the processes to be proper martingales. It is often the case that solutions to SDEs are clearly local martingales, but it is hard to tell whether they are proper martingales. So, the martingale condition, given in Theorem 4 below, is a useful result to know. The method described here is relatively new to me, only coming up while preparing the previous post, where it was noted that the failure of the martingale property for solutions to the SDE ${dX=X^c\,dB}$ with ${c>1}$ is related to the fact that, for ${c<1}$, the process hits zero. This idea extends to all continuous and nonnegative local martingales. The Girsanov transform method applied here is essentially the same as that used by Carlos A. Sin (Complications with stochastic volatility models, Adv. in Appl. Probab., Volume 30, Number 1, 1998, 256-268) and B. Jourdain (Loss of martingality in asset price models with lognormal stochastic volatility, Preprint CERMICS, 2004-267).

Consider nonnegative solutions to the stochastic differential equation

 $\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle dX=a(X)X\,dB,\smallskip\\ &\displaystyle X_0=x_0, \end{array}$ (1)

where ${a\colon{\mathbb R}_+\rightarrow{\mathbb R}}$, B is a Brownian motion and the fixed initial condition ${x_0}$ is strictly positive. The multiplier X in the coefficient of dB ensures that if X ever hits zero then it stays there. By time-change methods, uniqueness in law is guaranteed as long as a is nonzero and ${a^{-2}}$ is locally integrable on ${(0,\infty)}$. Consider also the following SDE,

 $\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle dY=\tilde a(Y)Y\,dB,\smallskip\\ &\displaystyle Y_0=y_0,\smallskip\\ &\displaystyle \tilde a(y) = a(y^{-1}),\ y_0=x_0^{-1} \end{array}$ (2)

Being integrals with respect to Brownian motion, solutions to (1) and (2) are local martingales. It is possible for them to fail to be proper martingales though, and they may or may not hit zero at some time. These possibilities are related by the following result.

Theorem 1 Suppose that (1) and (2) satisfy uniqueness in law. Then, X is a proper martingale if and only if Y never hits zero. Similarly, Y is a proper martingale if and only if X never hits zero.

As an example, consider ${a(X)=X^{-\gamma}}$ for some fixed exponent ${\gamma}$, so that ${\tilde a(Y)=Y^{\gamma}}$. In the previous post it was shown that X fails to be a proper martingale when ${\gamma < 0}$ and hits zero when ${\gamma > 0}$. Theorem 1 shows that these two statements are equivalent.

We can be a bit more precise than the statement in Theorem 1. Being nonnegative local martingales, the processes X, Y are automatically supermartingales. For times ${s\le t}$, ${X_s-{\mathbb E}[X_t\mid\mathcal{F}_s]}$ is nonnegative and, hence, will be almost surely zero if and only if it has zero expectation. So, we see that the martingale condition ${X_s={\mathbb E}[X_t\mid\mathcal{F}_s]}$ is satisfied whenever ${{\mathbb E}[X_s]={\mathbb E}[X_t]}$. Furthermore, the supermartingale condition gives ${x_0\ge{\mathbb E}[X_s]\ge{\mathbb E}[X_t]}$. It follows that X is a martingale over a time interval ${[0,t]}$ if and only if ${{\mathbb E}[X_t]=x_0}$. The following theorem shows that this is equivalent to ${Y_t}$ being strictly positive with probability one, giving a more precise statement than Theorem 1 above.

Theorem 2 Suppose that (1) and (2) satisfy uniqueness in law. Then,

 $\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle{\mathbb E}[X_t]=x_0{\mathbb P}(Y_t>0),\smallskip\\ &\displaystyle{\mathbb E}[Y_t]=y_0{\mathbb P}(X_t>0). \end{array}$ (3)
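The identities (3) can be checked numerically in a case where everything is explicit. The sketch below is my own illustration, not part of the argument: for ${a(x)=x}$, i.e. ${dX=X^2\,dB}$ (up to the sign of the driving Brownian motion), the solution can be realized as the inverse Bessel process ${X_t=1/\lvert w+W_t\rvert}$, with W a standard 3-dimensional Brownian motion and ${\lvert w\rvert=1/x_0}$. Here ${\tilde a(y)=1/y}$, so Y is a Brownian motion absorbed at zero and ${{\mathbb P}(Y_t>0)=2\Phi(y_0/\sqrt t)-1={\rm erf}(y_0/\sqrt{2t})}$.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
x0, t, n = 1.0, 1.0, 400_000
y0 = 1.0 / x0

# X_t = 1/|w + W_t| with |w| = 1/x_0: the inverse Bessel(3) process, a
# local martingale solving dX = X^2 dB up to the sign of the Brownian motion.
w = rng.normal(0.0, math.sqrt(t), size=(n, 3))
w[:, 0] += y0  # start the 3-d motion at (y_0, 0, 0)
x_t = 1.0 / np.linalg.norm(w, axis=1)

mc = x_t.mean()                               # Monte Carlo estimate of E[X_t]
exact = x0 * math.erf(y0 / math.sqrt(2 * t))  # x_0 * P(Y_t > 0), absorbed BM
print(mc, exact)  # both close to 0.68, strictly below x_0 = 1
```

The strict inequality ${{\mathbb E}[X_t]<x_0}$ exhibits the loss of the martingale property for this process.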

To apply these results, the following necessary and sufficient condition for solutions to the SDE (1) to hit zero after a finite time can be used. This is a special case of Feller’s test for explosions, and a proof is given further below.

Theorem 3 Suppose that a is nonzero and ${a^{-2}}$ is locally integrable on ${(0,\infty)}$. Then, solutions X to (1) hit zero with positive probability if and only if

 $\displaystyle \int_0^Kx^{-1}a(x)^{-2}\,dx<\infty$ (4)

for ${K>0}$. In this case, X hits zero almost surely.
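Both conclusions of the theorems above show up in a quick simulation. This is my own sketch, not from the post: taking the hypothetical choice ${a(x)=x^{-1/2}}$, i.e. ${dX=\sqrt X\,dB}$, the integral (4) is finite, so paths are absorbed at zero; at the same time X is a proper martingale (this is the case ${\gamma=1/2}$ of the earlier power-law example), so the sample mean stays near ${x_0}$.

```python
import numpy as np

def euler_maruyama(a, x0, t, n_steps, n_paths, rng):
    """Euler-Maruyama scheme for dX = a(X) X dB, with absorption at zero."""
    dt = t / n_steps
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        db = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = np.maximum(x + a(x) * x * db, 0.0)  # clamp: zero is absorbing
    return x

# a(x) = x^{-1/2}, so dX = sqrt(X) dB: paths hit zero, yet X is a martingale.
rng = np.random.default_rng(1)
a = lambda x: np.where(x > 0, x, 1.0) ** -0.5  # guard the x = 0 states
x_t = euler_maruyama(a, 1.0, 4.0, 2000, 2000, rng)
print(x_t.mean(), (x_t == 0.0).mean())  # mean near x_0 = 1; many paths absorbed
```

The clamping step is a crude way of encoding the absorbing boundary; it suffices for illustration but is not an exact scheme.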

Using Theorem 1, this can be transformed into a condition for the process X to be a martingale. In particular, X satisfying (1) will be a proper martingale if and only if the solution Y to (2) has zero probability of hitting zero. By Theorem 3 this is equivalent to

$\displaystyle \int_0^{1/K}x^{-1}\tilde a(x)^{-2}\,dx=\int_K^\infty x^{-1}a(x)^{-2}\,dx=\infty.$

So, we have arrived at a necessary and sufficient condition for X to be a martingale.

Theorem 4 Suppose that a is nonzero and ${a^{-2}}$ is locally integrable on ${(0,\infty)}$. Then, solutions X to (1) are proper martingales if and only if

 $\displaystyle \int_K^\infty x^{-1}a(x)^{-2}\,dx=\infty$ (5)

for ${K>0}$.
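Condition (5) is easy to probe numerically. The sketch below (my own illustration) applies a midpoint rule to ${\int_K^B x^{-1}a(x)^{-2}\,dx}$ for increasing cutoffs B, with the power-law coefficient ${a(x)=x^{c-1}}$ considered next: the partial integrals grow without bound for ${c=1/2}$ (a martingale, by Theorem 4) but converge for ${c=2}$ (a strict local martingale).

```python
def midpoint(f, lo, hi, n=100_000):
    """Midpoint-rule quadrature of f over [lo, hi]."""
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

# Integrand of (5) for a(x) = x^{c-1}: x^{-1} a(x)^{-2} = x^{1-2c}.
def tail(c, K, B):
    return midpoint(lambda x: x ** (1 - 2 * c), K, B)

K = 1.0
divergent = [tail(0.5, K, B) for B in (10.0, 100.0, 1000.0)]   # grows like B - K
convergent = [tail(2.0, K, B) for B in (10.0, 100.0, 1000.0)]  # tends to 1/2
print(divergent)
print(convergent)
```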

Looking again at the SDE ${dX=X^c\,dB}$, we take ${a(X)=X^{c-1}}$,

$\displaystyle \int_K^\infty x^{-1}a(x)^{-2}\,dx=\int_K^\infty x^{1-2c}\,dx.$

This is infinite if and only if ${c\le1}$, in which case X is a martingale and, for ${c>1}$, Theorem 4 shows that it fails to be a martingale. Consider, also, the following SDE, whose coefficient grows very slightly faster than linearly in X,

 $\displaystyle dX=X(\log(X+1))^c\,dB.$ (6)

Again, c is a fixed positive constant. In this case, we take ${a(x)=(\log(x+1))^c}$. Up to a finite scaling factor, the integral (5) gives

$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle\int_K^\infty x^{-1}a(x)^{-2}\,dx&\displaystyle\sim\int_K^\infty x^{-1}(\log x)^{-2c}\,dx\smallskip\\ &\displaystyle=\begin{cases} (1-2c)^{-1}\left[(\log x)^{1-2c}\right]_K^\infty,&\textrm{if }c\not=1/2,\\ \left[\log\log x\right]_K^\infty,&\textrm{if }c=1/2. \end{cases} \end{array}$

This is finite if and only if ${c>1/2}$. So, solutions to (6) are martingales whenever ${c\le1/2}$ and are local martingales, but not proper martingales, for all ${c>1/2}$.
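The two cases in the display above can be checked numerically (my own sketch). Substituting ${u=\log x}$ turns the tail integral into ${\int_{\log K}^{\log B}u^{-2c}\,du}$, which the code below evaluates for ${c=1}$ (convergent, with limit ${1/\log K}$) and ${c=1/2}$ (divergent like ${\log\log B}$).

```python
import math

def log_tail(c, K, B, n=100_000):
    """int_K^B x^{-1} (log x)^{-2c} dx, computed via the substitution
    u = log x as a midpoint rule for int_{log K}^{log B} u^{-2c} du."""
    lo, hi = math.log(K), math.log(B)
    h = (hi - lo) / n
    return h * sum((lo + (i + 0.5) * h) ** (-2 * c) for i in range(n))

K = math.e  # log K = 1, so for c = 1 the limit is 1/log K = 1
finite = [log_tail(1.0, K, B) for B in (1e2, 1e4, 1e6)]    # approaches 1
infinite = [log_tail(0.5, K, B) for B in (1e2, 1e4, 1e6)]  # grows like log log B
print(finite)
print(infinite)
```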

Proof of Theorem 2

I will now give a proof of Theorem 2 using local Girsanov transforms. As is standard, we work with respect to a filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\in{\mathbb R}_+},{\mathbb P})}$. However, in order for the Girsanov transform method to be successfully applied, we do not assume that the filtration is complete. We will also work in the more general setting of continuous local martingales, not necessarily defined by an SDE.

For the remainder of this section, let X be a continuous local martingale taking values in the extended nonnegative real numbers ${\bar{\mathbb R}_+=[0,\infty]}$, and with the fixed initial condition ${X_0=x_0>0}$. The local martingale property implies that X is a supermartingale and that it is almost surely finite. Let us also set ${y_0=1/x_0}$ and ${Y=1/X}$, which explodes when X hits zero and hits zero in the (zero probability) event that X explodes. Let us also define ${\tau_X}$ to be the first time at which X hits zero and ${\tau_Y}$ to be the first time at which Y hits zero or, equivalently, X hits infinity. From this setup, ${\tau_Y}$ is almost-surely infinite.

The idea is to use X to define a change of measure ${{\mathbb Q}=x_0^{-1}X_\infty\cdot{\mathbb P}}$. However, Girsanov transform theory would only be applicable when X is a uniformly integrable and positive martingale. To get around this restriction, we instead apply the change of measure locally. That is, if ${\tau}$ is a stopping time such that ${X^\tau}$ is a uniformly integrable martingale and ${X_\tau>0}$, then we define the restriction of the probability measure ${{\mathbb Q}}$ to ${\mathcal{F}_\tau}$ by

 $\displaystyle {\mathbb Q}\vert_{\mathcal{F}_\tau}=x_0^{-1}X_\tau\cdot{\mathbb P}\vert_{\mathcal{F}_\tau}.$ (7)

That is, ${{\mathbb E}_{\mathbb Q}[Z]=x_0^{-1}{\mathbb E}[X_\tau Z]}$ for any bounded ${\mathcal{F}_\tau}$-measurable random variable Z. There is a subtle issue here though, as the measure ${{\mathbb Q}}$ need not exist at all. This is an issue which was encountered previously in my stochastic calculus notes, in the application of measure changes to stochastic differential equations. This problem can arise because, although ${{\mathbb P}}$ and ${{\mathbb Q}}$ are defined to be equivalent on ${\mathcal{F}_\tau}$, they need not be equivalent on ${\mathcal{F}}$. For example, being a local martingale, X is almost surely finite under ${{\mathbb P}}$. However, if it is not a proper martingale, then we will see that Y=1/X hits zero in a finite time, at which point X explodes. If we did not include such events in the probability space to start with, then defining the transformed measure ${{\mathbb Q}}$ would be impossible. Similarly, problems would be caused by including such zero probability events in the initial sigma algebra ${\mathcal{F}_0}$. This is the reason for considering X to lie in the extended nonnegative real numbers in the setup above, and also for not assuming that the filtration satisfies the usual completeness properties. These are not real problems though, and just require a bit of care with the construction of the underlying filtered probability space. For now, we ignore these issues and assume that the measure ${{\mathbb Q}}$ exists, in which case it is essentially unique. We can prove the following much more general version of Theorem 2, applying to arbitrary continuous and nonnegative local martingales. Actually, Theorem 2 will follow as a corollary of Lemma 5 when applied to solutions of the SDE (1).

Lemma 5 If it exists, the measure ${{\mathbb Q}}$ defined by (7) is uniquely defined on ${\mathcal{F}_{\tau_Y-}}$. Furthermore, ${{\mathbb Q}(\tau_X\ge\tau_Y)=1}$, ${\tilde Y\equiv Y^{\tau_Y}}$ is a ${{\mathbb Q}}$-local martingale with quadratic variation

 $\displaystyle [\tilde Y]_t=\int_0^{t\wedge\tau_Y} X^{-4}\,d[X]$ (8)

(under ${{\mathbb Q}}$) and, for any time ${t>0}$,

 $\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle{\mathbb E}_{\mathbb P}[X_t]=x_0{\mathbb Q}(\tilde Y_t>0),\smallskip\\ &\displaystyle{\mathbb E}_{\mathbb Q}[\tilde Y_t]=y_0{\mathbb P}(X_t>0). \end{array}$ (9)

Proof: Let ${\tau_n}$ be a sequence of stopping times increasing to ${\tau_X\wedge\tau_Y}$ and such that ${X^{\tau_n}}$ are uniformly integrable martingales with ${X_{\tau_n}>0}$ (almost surely). For example, we could take ${\tau_n}$ to be the first time that X hits either n or 1/n. Then, (7) defines the restriction of ${{\mathbb Q}}$ to ${\mathcal{F}_{\tau_n}}$. Letting n go to infinity this uniquely defines ${{\mathbb Q}}$ on ${\mathcal{F}_{\tau_X\wedge\tau_Y-}}$. Once it is shown that ${\tau_X\ge\tau_Y}$ ${{\mathbb Q}}$-almost surely, then this will also uniquely determine ${{\mathbb Q}}$ on ${\mathcal{F}_{\tau_Y-}}$.

By Ito’s lemma,

 $\displaystyle dY=\frac{-1}{X^2}\,dX+\frac{1}{X^3}\,d[X],$ (10)

so that ${d[Y]=X^{-4}\,d[X]}$ and ${d[X,Y]=-X^{-2}\,d[X]}$. However, applying the Girsanov theorem to the local martingale X shows that

$\displaystyle M\equiv X-\int X^{-1}\,d[X]$

is a ${{\mathbb Q}}$-local martingale over the intervals ${[0,\tau_n]}$ and, therefore, so is ${Y=y_0-\int X^{-2}\,dM}$. Letting n go to infinity shows that ${Y^{\tau_X\wedge\tau_Y}}$ is a ${{\mathbb Q}}$-local martingale. In particular, by the supermartingale property, ${{\mathbb E}_{\mathbb Q}[Y_{\tau_X\wedge\tau_Y}]\le y_0<\infty}$ so that ${Y_{\tau_X\wedge\tau_Y}}$ is finite and ${\tau_X\ge\tau_Y}$ ${{\mathbb Q}}$-almost surely.

So, we have shown that ${{\mathbb Q}(\tau_X\ge\tau_Y)=1}$, ${{\mathbb Q}}$ is uniquely defined on ${\mathcal{F}_{\tau_Y-}}$ and that ${\tilde Y}$ is a ${{\mathbb Q}}$-local martingale. Using (10), the quadratic variation of Y is given by

$\displaystyle d[Y]=\frac{1}{X^4}\,d[X]$

on ${[0,\tau_X\wedge\tau_Y]}$, giving (8).

The first of the identities in (9) comes from the following sequence of equalities.

$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb Q}(\tilde Y_t>0)&\displaystyle=\lim_{n\rightarrow\infty}{\mathbb Q}(\tau_n>t)\smallskip\\ &\displaystyle=\lim_{n\rightarrow\infty}x_0^{-1}{\mathbb E}[X_{\tau_n}1_{\{\tau_n>t\}}]\smallskip\\ &\displaystyle=x_0^{-1}{\mathbb E}[X_t1_{\{\tau_X>t\}}]=x_0^{-1}{\mathbb E}[X_t]. \end{array}$

The first equality is using the fact that, ${{\mathbb Q}}$-almost surely, Y hits zero at time ${\tau_Y=\lim_n\tau_n}$ whenever this is finite. The second equality is just using the change of measure definition (7). The third equality is using the condition that ${X^{\tau_n}}$ are martingales and ${\lim_n\tau_n=\tau_X}$ ${{\mathbb P}}$-almost surely. In a similar way, the second of the identities in (9) comes from the following,

$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}_{\mathbb Q}[\tilde Y_t]&\displaystyle=\lim_{n\rightarrow\infty}{\mathbb E}_{\mathbb Q}[Y_t 1_{\{\tau_n>t\}}]\smallskip\\ &=\lim_{n\rightarrow\infty}x_0^{-1}{\mathbb E}[X_{\tau_n}Y_t1_{\{\tau_n>t\}}]\smallskip\\ &\displaystyle=x_0^{-1}{\mathbb E}[X_tY_t1_{\{\tau_X>t\}}]=x_0^{-1}{\mathbb P}(X_t>0). \end{array}$

Now, let’s move on to showing that with the underlying filtered probability space set up correctly, measures defined by local Girsanov transforms do indeed exist. The idea is to construct ${{\mathbb Q}}$ locally via equation (7), and apply the Kolmogorov extension theorem to extend to a measure on ${(\Omega,\mathcal{F})}$. Kolmogorov’s theorem states that if we have a consistent set of probability measures defined on the finite products of some underlying measurable spaces (in particular, Polish spaces), then they extend uniquely to a measure on the infinite product.

Lemma 6 Let ${\Omega}$ be the set of continuous functions ${\omega\colon{\mathbb R}_+\rightarrow\bar{\mathbb R}_+}$, X be the coordinate process

$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle X_t\colon\Omega\rightarrow\bar{\mathbb R}_+,\smallskip\\ \displaystyle X_t(\omega)=\omega(t), \end{array}$

${\{\mathcal{F}_t\}_{t\ge0}}$ be the natural filtration

$\displaystyle \mathcal{F}_t=\sigma\left(X_s\colon s\le t\right).$

and set ${\mathcal{F}=\mathcal{F}_\infty}$.

Then, for any probability measure ${{\mathbb P}}$ on ${(\Omega,\mathcal{F})}$ making X a local martingale with ${X_0=x_0}$ almost surely, a measure ${{\mathbb Q}}$ defined by (7) exists.

Proof: For each n, let ${\tau_n}$ be the stopping time

$\displaystyle \tau_n=\inf\left\{t\ge0\colon X_t\ge n{\rm\ or\ }X_t\le1/n\right\}.$

These increase to a limit ${\tau_\infty}$, which is the first time that X hits either zero or infinity. Define the measure ${{\mathbb Q}^n}$ on ${(\Omega,\mathcal{F}_{\tau_n})}$ according to (7),

$\displaystyle {\mathbb Q}^n\vert_{\mathcal{F}_{\tau_n}}=x_0^{-1}X_{\tau_n}\cdot{\mathbb P}\vert_{\mathcal{F}_{\tau_n}}.$

This can be extended to a measure on the sigma algebra ${\mathcal{F}}$ by supposing that X remains constant after time ${\tau_n}$. That is, the ${{\mathbb Q}^n}$-expectation of any ${\mathcal{F}}$-measurable and bounded function ${F\colon\Omega\rightarrow{\mathbb R}}$ is given by,

$\displaystyle {\mathbb E}_{{\mathbb Q}^n}[F(X)]=x_0^{-1}{\mathbb E}\left[X_{\tau_n}F(X^{\tau_n})\right].$

This defines a sequence of measures on ${(\Omega,\mathcal{F})}$ such that, for all ${m\le n}$, ${{\mathbb Q}^m}$ and ${{\mathbb Q}^n}$ agree when restricted to ${\mathcal{F}_{\tau_m}}$. Denote the infinite product space as ${\Omega^{\mathbb N}=\prod_{n=1}^\infty\Omega}$ with product sigma algebra ${\mathcal{F}^{\mathbb N}}$, and let ${X^n}$ be the projection onto the nth component. Then, applying the Kolmogorov extension theorem, there exists a measure ${\tilde{\mathbb Q}}$ on ${(\Omega^{\mathbb N},\mathcal{F}^{\mathbb N})}$ with respect to which ${X^n}$ has distribution ${{\mathbb Q}^n}$ and, for any ${m\le n}$, ${X^m}$ and ${X^n}$ are equal up until the first time that they exceed the level m or drop below 1/m, ${\tilde{\mathbb Q}}$-almost surely. With respect to ${\tilde{\mathbb Q}}$, then, the limit

$\displaystyle X^\infty_t=\lim_{n\rightarrow\infty}X^n_t$

exists and agrees with ${X^n}$ up until the first time at which it passes above the level n or drops below 1/n. It can also be seen that ${X^\infty}$ is almost surely continuous. So, we can define ${{\mathbb Q}}$ to be the measure on ${(\Omega,\mathcal{F})}$ with respect to which X has the same distribution as ${X^\infty}$ does under ${\tilde{\mathbb Q}}$. Finally, for a stopping time ${\tau}$ for which ${X^\tau}$ is a ${{\mathbb P}}$-uniformly integrable martingale and ${X_\tau>0}$, it needs to be shown that (7) holds. For any finite time t and bounded ${\mathcal{F}_{\tau\wedge t}}$-measurable random variable Z, ${Z1_{\{\tau_n>\tau\wedge t\}}}$ is ${\mathcal{F}_{\tau_n}}$-measurable. So, using the martingale property for X,

 $\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}_{{\mathbb Q}}[Z1_{\{\tau_\infty>\tau\wedge t\}}]&\displaystyle=\lim_{n\rightarrow\infty}{\mathbb E}_{\mathbb Q}[Z1_{\{\tau_n>\tau\wedge t\}}]\smallskip\\ &\displaystyle=\lim_{n\rightarrow\infty}x_0^{-1}{\mathbb E}[X_{\tau_n}Z1_{\{\tau_n>\tau\wedge t\}}]\smallskip\\ &\displaystyle=x_0^{-1}{\mathbb E}[X_{\tau}Z]. \end{array}$ (11)

The last equality here uses the martingale property to replace ${X_{\tau_n}}$ by ${X_\tau}$, and the fact that ${\tau_n>\tau\wedge t}$ for large n, almost surely, which follows from the property that ${X^\tau}$ is almost surely bounded and nonzero. Using Z=1 in (11) shows that ${\tau_\infty>\tau\wedge t}$ ${{\mathbb Q}}$-almost surely and, letting t increase to infinity,

$\displaystyle {\mathbb E}_{\mathbb Q}[Z]=x_0^{-1}{\mathbb E}[X_\tau Z]$

for all ${\mathcal{F}_\tau}$-measurable and bounded random variables Z. ⬜

Now that it has been shown that the measure change is well-defined, assuming the setup of Lemma 6, we can finally move on to the proof of Theorem 2. Rather than defining the process X via the SDE (1), however, it helps to rewrite it in a more intrinsic form without reference to a driving Brownian motion. Any such process is a local martingale with quadratic variation

 $\displaystyle [X]_t=\int_0^t a(X_s)^2X_s^2\,ds.$ (12)

Conversely, by enlarging the probability space to add a Brownian motion if required, any local martingale satisfying (12) and the initial condition ${X_0=x_0}$ solves the SDE (1) for some Brownian motion B. So, assuming uniqueness in law of (2), Theorem 2 is a consequence of equations (9) and the following result.
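The intrinsic description (12) is easy to verify on a simulated path. This is my own sketch, again with the hypothetical choice ${a(x)=x^{-1/2}}$, so that ${a(X)^2X^2=X}$: the realized quadratic variation ${\sum(\Delta X)^2}$ of an Euler path should track ${\int_0^t X_s\,ds}$.

```python
import math
import numpy as np

rng = np.random.default_rng(7)
t, n = 1.0, 200_000
dt = t / n

# One Euler path of dX = sqrt(X) dB, i.e. a(x) = x^{-1/2} in SDE (1).
x = np.empty(n + 1)
x[0] = 1.0
db = rng.normal(0.0, math.sqrt(dt), n)
for i in range(n):
    x[i + 1] = max(x[i] + math.sqrt(x[i]) * db[i], 0.0)

realized = np.sum(np.diff(x) ** 2)  # sum of squared increments: approximates [X]_t
predicted = dt * np.sum(x[:-1])     # int_0^t a(X_s)^2 X_s^2 ds = int_0^t X_s ds
print(realized, predicted)          # agree to within the discretization error
```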

Lemma 7 Assume the setup of Lemma 6 and let ${{\mathbb P}}$ be a probability measure on ${(\Omega,\mathcal{F})}$ with respect to which X is a local martingale satisfying ${X_0=x_0}$ almost surely, and with quadratic variation given by (12).

Then, letting ${{\mathbb Q}}$ be the measure defined by (7), which exists by Lemma 6, the process ${\tilde Y=Y^{\tau_X\wedge\tau_Y}}$ is a ${{\mathbb Q}}$-local martingale with quadratic variation

 $\displaystyle [\tilde Y]_t=\int_0^ta(\tilde Y^{-1}_s)^2\tilde Y_s^2\,ds.$ (13)

Proof: Lemma 5 states that ${\tilde Y}$ is a ${{\mathbb Q}}$-local martingale with quadratic variation

$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle[\tilde Y]_t&\displaystyle=\int_0^{t\wedge\tau_Y}X_s^{-4}\,d[X]_s\smallskip\\ &\displaystyle=\int_0^{t\wedge\tau_Y}X_s^{-4}a(X_s)^2X_s^2\,ds\smallskip\\ &\displaystyle=\int_0^{t\wedge\tau_Y}a(\tilde Y^{-1}_s)^2\tilde Y_s^2\,ds. \end{array}$

As ${\tilde Y=0}$ after time ${\tau_Y}$, this gives (13). ⬜

#### Proof of the zero-hitting condition

Theorem 3, providing a necessary and sufficient condition for solutions to the SDE (1) to hit zero, can be proven by applying a well-chosen transformation to the local martingale X. Then, martingale convergence will be used — with probability one, whenever a continuous local martingale is bounded above or below then it converges to a finite value as ${t\rightarrow\infty}$. The result also follows from Feller’s test for explosions (see Karatzas and Shreve, Brownian Motion and Stochastic Calculus, Chapter 5).

As ${a^{-2}}$ is assumed to be locally integrable, a convex function ${F\colon{\mathbb R}_+\rightarrow{\mathbb R}}$ can be defined by

$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle F(x)&\displaystyle=2\int_1^x\int_1^yz^{-2}a(z)^{-2}\,dz\,dy\smallskip\\ &\displaystyle=2\int_1^x(x-y)y^{-2}a(y)^{-2}\,dy. \end{array}$

This is continuously differentiable with second order derivative ${F^{\prime\prime}(x)=2x^{-2}a(x)^{-2}}$ defined in the sense of distributions. Ito’s lemma gives

 $\displaystyle F(X)=F(x_0)+\int F^\prime(X)\,dX+\int X^{-2}a(X)^{-2}\,d[X].$ (14)

Although Ito’s lemma only directly applies in the twice differentiable case, where a is continuous, (14) extends to all a with ${a^{-2}}$ locally integrable by taking limits (using the dominated convergence theorem to take the limits, and the monotone class theorem to extend to ${a^{-2}}$ locally integrable). Then, as X satisfies the SDE (1) and a is assumed to be nonzero, its quadratic variation is given by ${d[X]=a(X)^2X^2\,dt}$, with ${a(X)X}$ nonzero up until the time at which X hits zero. So

$\displaystyle F(X_t)= F(x_0)+\int_0^tF^\prime(X)\,dX+t\wedge\tau$

where ${\tau}$ is the first time at which X hits zero. In particular, ${M_t\equiv F(X_t)-t\wedge\tau}$ is a local martingale. The limiting value of F at zero is

$\displaystyle F(0)=2\int_0^1x^{-1}a(x)^{-2}\,dx,$

which is finite if and only if the integral (4) is finite. If it is infinite, X cannot hit zero in finite time, as this would imply that the local martingale M explodes. Alternatively, suppose that the integral (4) is finite, so that ${F(0)}$ is finite. As X is a nonnegative local martingale, it converges almost surely to a finite value at infinity. So, ${M\le F(X)}$ is bounded above and, by martingale convergence, tends to a finite limit. This shows that ${t\wedge\tau=F(X_t)-M_t}$ converges as ${t\rightarrow\infty}$, which can only be the case if ${\tau}$ is finite and X hits zero.
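To make the construction of F concrete, here is a numerical sketch (my own illustration) for the power-law case ${a(x)=x^{-\gamma}}$ with ${\gamma>0}$, where ${F(x)=2\int_1^x(x-y)y^{2\gamma-2}\,dy}$ and the limit at zero is ${F(0)=2\int_0^1y^{2\gamma-1}\,dy=1/\gamma}$. For ${\gamma=1}$ there is even a closed form, ${F(x)=(x-1)^2}$.

```python
def F(x, gamma, n=20_000):
    """F(x) = 2 int_1^x (x - y) y^(2 gamma - 2) dy, midpoint rule.
    The signed step h handles x < 1 as well as x > 1."""
    h = (x - 1.0) / n
    total = 0.0
    for i in range(n):
        y = 1.0 + (i + 0.5) * h
        total += (x - y) * y ** (2 * gamma - 2)
    return 2.0 * h * total

# gamma = 1: F(x) = (x - 1)^2 exactly, and F(0) = 1/gamma = 1.
print(F(2.0, 1.0), F(0.001, 1.0))
# gamma = 1/2: F(0+) = 1/gamma = 2, so (4) holds and X hits zero.
print(F(0.001, 0.5))
```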

1. This symmetry idea of linking the martingale property of one diffusion process to the explosion behavior of another diffusion process is very interesting. I think Theorem 4 here coincides with Theorem 1.1 in this paper:

Two follow-up questions: (1) Can we extend the result to one-dimensional time-inhomogeneous diffusions?
(2) How about Doleans-Dade stochastic exponentials? Is there a similar result that exhibits this symmetry?
btw, I find all your posts very helpful for understanding stochastic calculus~ 🙂

Comment by Zhenyu (Rocky) Cui — 26 March 11 @ 6:33 PM

2. Sorry, the link I refer to is: ” http://homepage.alice.de/murusov/papers/cev.pdf“.

Comment by Zhenyu (Rocky) Cui — 26 March 11 @ 6:34 PM

• Thanks, that’s an interesting reference and certainly very relevant to this post. Their Theorem 1.1 is exactly the same as my Theorem 4.

To answer (1), there is no problem in extending the ideas in this post to time-inhomogeneous diffusions. In fact, I was originally going to state Theorem 1 in this greater generality, but it didn’t seem to gain a lot and maybe would lose a bit of clarity. The proof I give (section “Proof of Theorem 2”) does not rely on it being a diffusion, so similar statements can be given for all continuous nonnegative local martingales (i.e., X is a proper martingale iff Y never hits zero).

The main statement for one-dimensional diffusions, Theorem 4, does rely on time-homogeneity because in that case we have a simple condition for zero-hitting, and therefore a simple condition for loss of the martingale property.

For (2), are you thinking about conditions for the Doleans exponential $X=\mathcal{E}(M)$ of a local martingale M to be a proper martingale? You can certainly apply the method used here to it, although I’m not sure how useful a condition you obtain. I’d have to think about that a bit more.

Comment by George Lowther — 27 March 11 @ 1:12 PM

3. Thank you for your prompt reply! Yes, I mean that form for the Q(2).

Also on the issue of linking “loss of martingale property” to the “zero hitting” of another auxiliary process Y_t , I find a different opinion in the following paper: http://www.warwick.ac.uk/~stsjai/PDFs/LocalMart.pdf . See the paragraph at end of page 2 and start of page 3. My understanding is that the loss of martingale property is linked to Y_t exiting at a so called “bad endpoint”, but NOT every endpoint. See the interesting example 3.1 on page 7 and the following remark(ii) on page 8.

Note that the above paper only considers $M=\int_0^t b(Y_u)dW_u$, a special case of local martingale. I just wonder how to characterize the “bad endpoint” for general local martingales M? A more recent paper seems to be working in this direction and I am still reading it: http://www.warwick.ac.uk/~stsjai/PDFs/mart07.pdf

Best regards 🙂

Comment by Zhenyu (Rocky) Cui — 27 March 11 @ 2:04 PM

• Hi, Zhenyu:

All the links are broken. Would you be so kind to repair them? Thank you.

Comment by Anonymous — 13 January 17 @ 9:06 AM

4. Typo: it was shown that X fails to be a local martingale -> it was shown that X fails to be a proper martingale

Comment by Ben — 21 October 13 @ 1:58 PM
