# Almost Sure

## 12 October 16

### Do Convex and Decreasing Functions Preserve the Semimartingale Property — A Possible Counterexample

Figure 1: The function f, convex in x and decreasing in t

Here, I attempt to construct a counterexample to the hypothesis of the earlier post, Do convex and decreasing functions preserve the semimartingale property? There, it was asked: for any semimartingale X and function ${f\colon{\mathbb R}_+\times{\mathbb R}\rightarrow{\mathbb R}}$ such that ${f(t,x)}$ is convex in x and right-continuous and decreasing in t, is ${f(t,X_t)}$ necessarily a semimartingale? It was explained there how this is equivalent to the following question: for any function ${f\colon[0,1]^2\rightarrow{\mathbb R}}$ such that ${f(t,x)}$ is convex and Lipschitz continuous in x and decreasing in t, does it decompose as ${f=g-h}$ where ${g(t,x)}$ and ${h(t,x)}$ are convex in x and increasing in t? This is the form of the hypothesis that this post is concerned with, so the example will involve only simple real analysis and no stochastic calculus. I will give some numerical calculations suggesting that the construction below is a counterexample, but I do not have a proof of this. So, the hypothesis is still open.

Although the construction given here will be self-contained, it is worth noting that it is connected to the example of a martingale which moves along a deterministic path. If ${\{M_t\}_{t\in[0,1]}}$ is the martingale constructed there, then

$\displaystyle C(t,x)={\mathbb E}[(M_t-x)_+]$

defines a function from ${[0,1]\times[-1,1]}$ to ${{\mathbb R}}$ which is convex in x and increasing in t. The question is then whether C can be expressed as the difference of functions which are convex in x and decreasing in t. The example constructed in this post will be the same as C with the time direction reversed, and with a linear function of x added so that it is zero at ${x=\pm1}$.

## 5 October 16

### A Martingale Which Moves Along a Deterministic Path

Figure 1: Sample paths

In this post I will construct a continuous and non-constant martingale M which only varies on the path of a deterministic function ${f\colon{\mathbb R}_+\rightarrow{\mathbb R}}$. That is, ${M_t=f(t)}$ at all times outside of the set of nontrivial intervals on which M is constant. Expressed in terms of the stochastic integral, ${dM_t=0}$ on the set ${\{t\colon M_t\not=f(t)\}}$ and,

 $\displaystyle M_t = \int_0^t 1_{\{M_s=f(s)\}}\,dM_s.$ (1)

In the example given here, f will be right-continuous. Examples with continuous f do exist, although the constructions I know of are considerably more complicated. At first sight, these properties appear to contradict what we know about continuous martingales. They vary unpredictably, behaving completely unlike any deterministic function. It is certainly the case that we cannot have ${M_t=f(t)}$ across any interval on which M is not constant.

By a stochastic time-change, any Brownian motion B can be transformed to have the same distribution as M. This means that there exists an increasing and right-continuous process A adapted to the same filtration as B and such that ${B_t=M_{A_t}}$ where M is a martingale as above. From this, we can infer that

$\displaystyle B_t=f(A_t),$

expressing Brownian motion as a function of an increasing process.

## 26 September 16

### Do Convex and Decreasing Functions Preserve the Semimartingale Property?

Some years ago, I spent considerable effort trying to prove the hypothesis below. After failing at this, I spent time trying to find a counterexample, but also with no success. I did post this as a question on mathoverflow, but it has so far received no conclusive answers. So, as far as I am aware, the following statement remains unproven either way.

Hypothesis H1 Let ${f\colon{\mathbb R}_+\times{\mathbb R}\rightarrow{\mathbb R}}$ be such that ${f(t,x)}$ is convex in x and right-continuous and decreasing in t. Then, for any semimartingale X, ${f(t,X_t)}$ is a semimartingale.

It is well known that convex functions of semimartingales are themselves semimartingales. See, for example, the Ito-Tanaka formula. More generally, if ${f(t,x)}$ is increasing in t rather than decreasing, then it can be shown without much difficulty that ${f(t,X_t)}$ is a semimartingale. Consider decomposing ${f(t,X_t)}$ as

 $\displaystyle f(t,X_t)=\int_0^tf_x(s,X_{s-})\,dX_s+V_t,$ (1)

for some process V. By convexity, the right-hand derivative of ${f(t,x)}$ with respect to x always exists, and I denote it by ${f_x}$. If f is twice continuously differentiable, then the process V is given by Ito's formula which, in particular, shows that it is a finite variation process. If ${f(t,x)}$ is convex in x and increasing in t, then the terms in Ito's formula for V are all increasing and, so, it is an increasing process. By taking limits of smooth functions, it follows that V is increasing even when the differentiability constraints are dropped, so ${f(t,X_t)}$ is a semimartingale. Returning to the case where ${f(t,x)}$ is decreasing in t, Ito's formula only says that V is of finite variation and, in general, not monotonic. As limits of finite variation processes need not themselves be of finite variation, this says nothing about the case where f is not assumed to be differentiable, and does not help us to determine whether or not ${f(t,X_t)}$ is a semimartingale.
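As a quick numerical illustration (my own sketch, not from the original post), the decomposition (1) can be checked on a single sample path. Taking X to be a Brownian motion and the smooth function ${f(t,x)=x^2+t}$, which is convex in x and increasing in t, the process ${V_t=f(t,X_t)-\int_0^tf_x(s,X_s)\,dX_s}$ should be increasing; by Ito's formula it equals ${2t}$ here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 100_000, 1.0
dt = T / n
t = np.linspace(0.0, T, n + 1)

# Sample path of a Brownian motion X
dX = rng.normal(0.0, np.sqrt(dt), n)
X = np.concatenate([[0.0], np.cumsum(dX)])

# f(t, x) = x^2 + t: convex in x, increasing in t; f_x = 2x
f = X**2 + t
# Left-point (Ito) approximation of the stochastic integral of f_x
integral = np.concatenate([[0.0], np.cumsum(2.0 * X[:-1] * dX)])

V = f - integral  # by Ito's formula, V_t = t + [X]_t, approximately 2t
assert np.all(np.diff(V) > 0)  # V is increasing along the path
print(V[-1])  # close to 2.0
```

At the discrete level, V telescopes exactly to ${t_k+\sum_{j<k}(\Delta X_j)^2}$, so it is increasing step by step, matching the claim that V is an increasing process when f is increasing in t.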

Hypothesis H1 can be weakened by restricting to continuous functions of continuous martingales.

Hypothesis H2 Let ${f\colon{\mathbb R}_+\times{\mathbb R}\rightarrow{\mathbb R}}$ be such that ${f(t,x)}$ is convex in x and continuous and decreasing in t. Then, for any continuous martingale X, ${f(t,X_t)}$ is a semimartingale.

As continuous martingales are special cases of semimartingales, hypothesis H1 implies H2. In fact, the reverse implication also holds so that hypotheses H1 and H2 are equivalent.

Hypotheses H1 and H2 can also be recast as a simple real analysis statement which makes no reference to stochastic processes.

Hypothesis H3 Let ${f\colon{\mathbb R}_+\times{\mathbb R}\rightarrow{\mathbb R}}$ be such that ${f(t,x)}$ is convex in x and decreasing in t. Then, ${f=g-h}$ where ${g(t,x)}$ and ${h(t,x)}$ are convex in x and increasing in t.

## 14 September 16

### Failure of the Martingale Property For Stochastic Integration

If X is a cadlag martingale and ${\xi}$ is a uniformly bounded predictable process, then is the integral

 $\displaystyle Y=\int\xi\,dX$ (1)

a martingale? If ${\xi}$ is elementary, this is one of the most basic properties of martingales. If X is a square integrable martingale, then so is Y. More generally, if X is an ${L^p}$-integrable martingale for any ${p > 1}$, then so is Y. Furthermore, integrability of the maximum ${\sup_{s\le t}\lvert X_s\rvert}$ is enough to guarantee that Y is a martingale. Also, it is a fundamental result of stochastic integration that Y is at least a local martingale and, for this to be true, it is only necessary for X to be a local martingale and ${\xi}$ to be locally bounded. In the general situation, for a cadlag martingale X and bounded predictable ${\xi}$, it need not be the case that Y is a martingale. In this post I will construct an example showing that Y can fail to be a martingale.

## 12 September 16

### Martingales with Non-Integrable Maximum

Filed under: Examples and Counterexamples — George Lowther @ 12:01 PM

It is a consequence of Doob's maximal inequality that any ${L^p}$-integrable martingale has a maximum, up to a finite time, which is also ${L^p}$-integrable for any ${p > 1}$. Writing ${X^*_t\equiv\sup_{s\le t}\lvert X_s\rvert}$ for the running absolute maximum of a cadlag martingale X, ${X^*}$ is ${L^p}$-integrable whenever ${X}$ is. It is natural to ask whether this also holds for ${p=1}$. As martingales are integrable by definition, this is just asking whether cadlag martingales necessarily have an integrable maximum. Integrability of the maximum process does have some important consequences in the theory of martingales. By the Burkholder-Davis-Gundy inequality, it is equivalent to integrability of the square root of the quadratic variation, ${[X]^{1/2}}$. Stochastic integration over bounded integrands preserves the martingale property, so long as the martingale has an integrable maximal process. The continuous and purely discontinuous parts of a martingale X are themselves local martingales, but are not guaranteed to be proper martingales unless X has an integrable maximum process.

The aim of this post is to show, by means of some examples, that a cadlag martingale need not have an integrable maximum.

## 11 September 16

### The Optimality of Doob’s Maximal Inequality

One of the most fundamental and useful results in the theory of martingales is Doob’s maximal inequality. Use ${X^*_t\equiv\sup_{s\le t}\lvert X_s\rvert}$ to denote the running (absolute) maximum of a process X. Then, Doob’s ${L^p}$ maximal inequality states that, for any cadlag martingale or nonnegative submartingale X and real ${p > 1}$,

 $\displaystyle \lVert X^*_t\rVert_p\le c_p \lVert X_t\rVert_p$ (1)

with ${c_p=p/(p-1)}$. Here, ${\lVert\cdot\rVert_p}$ denotes the standard ${L^p}$-norm, ${\lVert U\rVert_p\equiv{\mathbb E}[\lvert U\rvert^p]^{1/p}}$.

An obvious question to ask is whether it is possible to do any better. That is, can the constant ${c_p}$ in (1) be replaced by a smaller number? This is especially pertinent for small p, since ${c_p}$ diverges to infinity as p approaches 1. The purpose of this post is to show, by means of an example, that the answer is no: the constant ${c_p}$ in Doob's inequality is optimal. We will construct an example as follows.

Example 1 For any ${p > 1}$ and constant ${1 \le c < c_p}$ there exists a strictly positive cadlag ${L^p}$-integrable martingale ${\{X_t\}_{t\in[0,1]}}$ with ${X^*_1=cX_1}$.

For X as in the example, we have ${\lVert X^*_1\rVert_p=c\lVert X_1\rVert_p}$. So, supposing that (1) holds with any other constant ${\tilde c_p}$ in place of ${c_p}$, we must have ${\tilde c_p\ge c}$. By choosing ${c}$ as close to ${c_p}$ as we like, this means that ${\tilde c_p\ge c_p}$, so ${c_p}$ is indeed optimal in (1).
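Inequality (1) itself is easy to check by Monte Carlo (a sketch of my own, not part of the post), for instance with the exponential martingale ${X_t=\exp(B_t-t/2)}$ and ${p=2}$, where ${c_2=2}$:

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, T, p = 50_000, 200, 1.0, 2.0
dt = T / n_steps

# Exponential martingale X_t = exp(B_t - t/2), sampled on a time grid
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
B = np.cumsum(dB, axis=1)
t = dt * np.arange(1, n_steps + 1)
X = np.exp(B - t / 2.0)

X_T = X[:, -1]
X_max = np.maximum(1.0, X.max(axis=1))  # include X_0 = 1

lhs = np.mean(X_max**p) ** (1.0 / p)   # ||X*_T||_p
rhs = np.mean(X_T**p) ** (1.0 / p)     # ||X_T||_p
c_p = p / (p - 1.0)                    # Doob constant, = 2 for p = 2
print(lhs, c_p * rhs)
assert lhs <= c_p * rhs
```

This particular martingale does not come close to saturating the bound; Example 1 is needed precisely because generic martingales leave a gap between ${\lVert X^*_t\rVert_p}$ and ${c_p\lVert X_t\rVert_p}$.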

## 6 September 16

### The Maximum Maximum of Martingales with Known Terminal Distribution

In this post I will be concerned with the following problem — given a martingale X for which we know the distribution at a fixed time, and we are given nothing else, what is the best bound we can obtain for the maximum of X up until that time? This is a question with a long history, starting with Doob’s inequalities which bound the maximum in the ${L^p}$ norms and in probability. Later, Blackwell and Dubins (3), Dubins and Gilat (5) and Azema and Yor (1,2) showed that the maximum is bounded above, in stochastic order, by the Hardy-Littlewood transform of the terminal distribution. Furthermore, this bound is the best possible in the sense that there do exist martingales for which it is attained, for any permissible terminal distribution. Hobson (7,8) considered the case where the starting law is also known, and this was further generalized to the case with a specified distribution at an intermediate time by Brown, Hobson and Rogers (4). Finally, Henry-Labordère, Obłój, Spoida and Touzi (6) considered the case where the distribution of the martingale is specified at an arbitrary set of times. In this post, I will look at the case where only the terminal distribution is specified. This leads to interesting constructions of martingales and, in particular, of continuous martingales with specified terminal distributions, with close connections to the Skorokhod embedding problem.

I will be concerned with the maximum process of a cadlag martingale X,

$\displaystyle X^*_t=\sup_{s\le t}X_s,$

which is increasing and adapted. We can state and prove the bound on ${X^*}$ relatively easily, although showing that it is optimal is more difficult. As the result holds more generally for submartingales, I state it in this case, although I am more concerned with martingales here.

Theorem 1 If X is a cadlag submartingale then, for each ${t\ge0}$ and ${x\in{\mathbb R}}$,

 $\displaystyle {\mathbb P}\left(X^*_t\ge x\right)\le\inf_{y < x}\frac{{\mathbb E}\left[(X_t-y)_+\right]}{x-y}.$ (1)

Proof: We just need to show that the inequality holds for each ${y < x}$, and then it immediately follows for the infimum. Choosing ${y < x^\prime < x}$, consider the stopping time

$\displaystyle \tau=\inf\{s\ge0\colon X_s\ge x^\prime\}.$

Then, ${\tau \le t}$ and ${X_\tau\ge x^\prime}$ whenever ${X^*_t \ge x}$. As ${f(z)\equiv(z-y)_+}$ is nonnegative and increasing in z, this means that ${1_{\{X^*_t\ge x\}}}$ is bounded above by ${f(X_{\tau\wedge t})/f(x^\prime)}$. Taking expectations,

$\displaystyle {\mathbb P}\left(X^*_t\ge x\right)\le{\mathbb E}\left[f(X_{\tau\wedge t})\right]/f(x^\prime).$

Since f is convex and increasing, ${f(X)}$ is a submartingale so, using optional sampling,

$\displaystyle {\mathbb P}\left(X^*_t\ge x\right)\le{\mathbb E}\left[f(X_t)\right]/f(x^\prime).$

Letting ${x^\prime}$ increase to ${x}$ gives the result. ⬜
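The bound of Theorem 1 can be checked numerically (my own sketch, assuming X is a Brownian motion, which is a martingale and hence a submartingale): estimate both sides of (1) by Monte Carlo, taking the infimum over a grid of values ${y < x}$.

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, T, x = 40_000, 500, 1.0, 1.0
dt = T / n_steps

# Brownian motion paths (a martingale, hence a submartingale)
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)

B_T = B[:, -1]
B_max = np.maximum(0.0, B.max(axis=1))  # include B_0 = 0

lhs = np.mean(B_max >= x)  # P(B*_T >= x)

# inf over a grid of y < x of E[(B_T - y)_+] / (x - y)
ys = np.linspace(-2.0, x - 0.05, 100)
bounds = [np.mean(np.clip(B_T - y, 0.0, None)) / (x - y) for y in ys]
rhs = min(bounds)

print(lhs, rhs)
assert lhs <= rhs
```

For Brownian motion the left side is ${2(1-\Phi(1))\approx0.317}$ by the reflection principle, while the infimum on the right is attained at an interior value of y and is strictly larger, so the bound holds but is not tight for this process.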

The bound stated in Theorem 1 is also optimal, and can be achieved by a continuous martingale. In this post, all measures on ${{\mathbb R}}$ are defined with respect to the Borel sigma-algebra.

Theorem 2 If ${\mu}$ is a probability measure on ${{\mathbb R}}$ with ${\int\lvert x\rvert\,d\mu(x) < \infty}$ and ${t > 0}$ then there exists a continuous martingale X (defined on some filtered probability space) such that ${X_t}$ has distribution ${\mu}$ and (1) is an equality for all ${x\in{\mathbb R}}$.

## 31 August 10

### Zero-Hitting and Failure of the Martingale Property

For nonnegative local martingales, there is an interesting symmetry between the failure of the martingale property and the possibility of hitting zero, which I will describe now. I will also give a necessary and sufficient condition for solutions to a certain class of stochastic differential equations to hit zero in finite time and, using the aforementioned symmetry, infer a necessary and sufficient condition for the processes to be proper martingales. It is often the case that solutions to SDEs are clearly local martingales, but it is hard to tell whether they are proper martingales. So, the martingale condition, given in Theorem 4 below, is a useful result to know. The method described here is relatively new to me, only coming up while preparing the previous post. Applying a hedging argument, it was noted that the failure of the martingale property for solutions to the SDE ${dX=X^c\,dB}$ for ${c>1}$ is related to the fact that, for ${c<1}$, the process hits zero. This idea extends to all continuous and nonnegative local martingales. The Girsanov transform method applied here is essentially the same as that used by Carlos A. Sin (Complications with stochastic volatility models, Adv. in Appl. Probab. Volume 30, Number 1, 1998, 256-268) and B. Jourdain (Loss of martingality in asset price models with lognormal stochastic volatility, Preprint CERMICS, 2004-267).

Consider nonnegative solutions to the stochastic differential equation

 $\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle dX=a(X)X\,dB,\smallskip\\ &\displaystyle X_0=x_0, \end{array}$ (1)

where ${a\colon{\mathbb R}_+\rightarrow{\mathbb R}}$, B is a Brownian motion and the fixed initial condition ${x_0}$ is strictly positive. The multiplier X in the coefficient of dB ensures that if X ever hits zero then it stays there. By time-change methods, uniqueness in law is guaranteed as long as a is nonzero and ${a^{-2}}$ is locally integrable on ${(0,\infty)}$. Consider also the following SDE,

 $\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle dY=\tilde a(Y)Y\,dB,\smallskip\\ &\displaystyle Y_0=y_0,\smallskip\\ &\displaystyle \tilde a(y) = a(y^{-1}),\ y_0=x_0^{-1} \end{array}$ (2)

Being integrals with respect to Brownian motion, solutions to (1) and (2) are local martingales. It is possible for them to fail to be proper martingales though, and they may or may not hit zero at some time. These possibilities are related by the following result.

Theorem 1 Suppose that (1) and (2) satisfy uniqueness in law. Then, X is a proper martingale if and only if Y never hits zero. Similarly, Y is a proper martingale if and only if X never hits zero.

## 16 August 10

### Failure of the Martingale Property

In this post, I give an example of a class of processes which can be expressed as integrals with respect to Brownian motion, but are not themselves martingales. As stochastic integration preserves the local martingale property, such processes are guaranteed to be at least local martingales. However, this is not enough to conclude that they are proper martingales. Whereas constructing examples of local martingales which are not martingales is a relatively straightforward exercise, such examples are often slightly contrived and the martingale property fails for obvious reasons (e.g., double-loss betting strategies). The aim here is to show that the martingale property can fail for very simple stochastic differential equations which are likely to be met in practice, and it is not always obvious when this situation arises.

Consider the following stochastic differential equation

 $\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle dX = aX^c\,dB +b X dt,\smallskip\\ &\displaystyle X_0=x, \end{array}$ (1)

for a nonnegative process X. Here, B is a Brownian motion and a,b,c,x are positive constants. This is a common SDE, appearing, for example, in the constant elasticity of variance model for option pricing. Now consider the following question: what is the expected value of X at time t?

The obvious answer seems to be that ${{\mathbb E}[X_t]=xe^{bt}}$, based on the idea that X has growth rate b on average. A more detailed argument is to write out (1) in integral form

 $\displaystyle X_t=x+\int_0^t\,aX^c\,dB+ \int_0^t bX_s\,ds.$ (2)

The next step is to note that the first integral is with respect to Brownian motion, so has zero expectation. Therefore,

$\displaystyle {\mathbb E}[X_t]=x+\int_0^tb{\mathbb E}[X_s]\,ds.$

This can be differentiated to obtain the ordinary differential equation ${d{\mathbb E}[X_t]/dt=b{\mathbb E}[X_t]}$, which has the unique solution ${{\mathbb E}[X_t]={\mathbb E}[X_0]e^{bt}}$.

In fact this argument is false. For ${c\le1}$ there is no problem, and ${{\mathbb E}[X_t]=xe^{bt}}$ as expected. However, for all ${c>1}$ the conclusion is wrong, and the strict inequality ${{\mathbb E}[X_t] < xe^{bt}}$ holds.

The point where the argument above falls apart is the statement that the first integral in (2) has zero expectation. This would indeed follow if it was known that it is a martingale, as is often assumed to be true for stochastic integrals with respect to Brownian motion. However, stochastic integration preserves the local martingale property and not, in general, the martingale property itself. If ${c>1}$ then we have exactly this situation, where only the local martingale property holds. The first integral in (2) is not a proper martingale, and has strictly negative expectation at all positive times. The reason that the martingale property fails here for ${c>1}$ is that the coefficient ${aX^c}$ of dB grows too fast in X.

In this post, I will mainly be concerned with the special case of (1) with a=1 and zero drift.

 $\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle dX=X^c\,dB,\smallskip\\ &\displaystyle X_0=x. \end{array}$ (3)

The general form (1) can be reduced to this special case, as I describe below. SDEs (1) and (3) do have unique solutions, as I will prove later. Then, as X is a nonnegative local martingale, if it ever hits zero then it must remain there (0 is an absorbing boundary).

The solution X to (3) has the following properties, which will be proven later in this post.

• If ${c\le1}$ then X is a martingale and, for ${c<1}$, it eventually hits zero with probability one.
• If ${c>1}$ then X is a strictly positive local martingale but not a martingale. In fact, the inequality

 $\displaystyle {\mathbb E}[X_t\mid\mathcal{F}_s] < X_s$ (4)

holds almost surely for times ${s < t}$. Furthermore, for any positive constant ${p<2c-1}$, ${{\mathbb E}[X_t^p]}$ is bounded over ${t\ge0}$ and tends to zero as ${t\rightarrow\infty}$.
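The failure of the martingale property is easy to see numerically (a sketch of my own, not from the post). For ${c=2}$ and ${x=1}$, the solution of (3) can be realized as the reciprocal of a three-dimensional Bessel process — a standard fact about the inverse Bessel process, used here as an assumption — so ${{\mathbb E}[X_t]}$ can be estimated by simulating a 3-dimensional Brownian motion W started at ${(1,0,0)}$ and averaging ${1/\lvert W_t\rvert}$. If X were a martingale we would get ${{\mathbb E}[X_1]=1}$, but the estimate falls well short.

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, t = 400_000, 1.0

# For c = 2 and x = 1, the solution of (3) is X_t = 1/|W_t| for a
# 3-dimensional Brownian motion W started at (1, 0, 0).
W = np.array([1.0, 0.0, 0.0]) + rng.normal(0.0, np.sqrt(t), (n_paths, 3))
X_t = 1.0 / np.linalg.norm(W, axis=1)

print(X_t.mean())  # close to 2*Phi(1) - 1 = 0.683, strictly below X_0 = 1
```

The exact value is ${{\mathbb E}[X_1]=2\Phi(1)-1\approx0.6827}$, consistent with the strict inequality (4), whereas a proper martingale started at 1 would have mean exactly 1.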

## 2 June 10

### Failure of Pathwise Integration for FV Processes

Figure 1: A non-pathwise stochastic integral of an FV Process

The motivation for developing a theory of stochastic integration is that many important processes — such as standard Brownian motion — have sample paths which are extraordinarily badly behaved. With probability one, the path of a Brownian motion is nowhere differentiable and has infinite variation over all nonempty time intervals. This rules out the application of the techniques of ordinary calculus. In particular, the Stieltjes integral can be applied with respect to integrators of finite variation, but fails to give a well-defined integral with respect to Brownian motion. The Ito stochastic integral was developed to overcome this difficulty, at the cost both of restricting the integrand to be an adapted process, and the loss of pathwise convergence in the dominated convergence theorem (convergence in probability holds instead).

However, as I demonstrate in this post, the stochastic integral represents a strict generalization of the pathwise Lebesgue-Stieltjes integral even for processes of finite variation. That is, if V has finite variation, then there can still be predictable integrands ${\xi}$ such that the integral ${\int\xi\,dV}$ is undefined as a Lebesgue-Stieltjes integral on the sample paths, but is well-defined in the Ito sense.
