Almost Sure

21 October 16

The Projection Theorems

Back when I first started this series of posts on stochastic calculus, the aim was to write up the notes which I began writing while learning the subject myself. The idea behind these notes was to give a more intuitive and natural, yet fully rigorous, approach to stochastic integration and semimartingales than the traditional method. The stochastic integral and related concepts were developed without requiring advanced results such as optional and predictable projection or the Doob-Meyer decomposition, which are often used in traditional approaches. Then, the more advanced theory of semimartingales was developed after stochastic integration had already been established. This is now complete! The subjects listed in my original post have now all been covered. Of course, there are still many important areas of stochastic calculus which are not adequately covered in these notes, such as local times, stochastic differential equations, excursion theory, etc.

I will now focus on the projection theorems and related results. Although these are not required for the development of the stochastic integral and the theory of semimartingales, as demonstrated by these notes, they are still very important and powerful results, invaluable to much of the more advanced theory of continuous-time stochastic processes. Optional and predictable projection are often regarded as quite advanced topics, beyond the scope of many textbooks on stochastic calculus. This is because they require some descriptive set theory and, in particular, some understanding of analytic sets. The level of knowledge required for applications to stochastic calculus is not too great, though, and I aim to give complete proofs of the projection theorems in these notes. However, the proofs of these theorems do require ideas which are not particularly intuitive from the viewpoint of stochastic calculus, hence the desire to avoid them in the initial development of the stochastic integral.

The theory of semimartingales and stochastic integration will not be used at all in the series of posts on the projection theorems; all that will be required from these stochastic calculus notes are the initial posts on filtrations and processes. I will also mention quasimartingales, although only the definition and very basic properties will be required.

The subjects related to the projection theorems which I will cover are,

  • The Debut Theorem. I have already covered the debut theorem for right-continuous processes. This is a special case of the more general result which applies to arbitrary progressively measurable processes.
  • The Optional and Predictable Section Theorems. These very powerful results state that optional processes are determined, up to evanescence, by their values at stopping times and, similarly, predictable processes are determined by their values at predictable stopping times.
  • Optional and Predictable Projection. This forms the core of this sequence of posts, and follows in a straightforward way from the section theorems; the defining property of the projections is sketched just after this list. As the section theorems are required to prove them, the projection theorems are also regarded as an advanced topic. However, for right-continuous and left-continuous processes it is possible to construct respectively the optional and predictable projections in a more elementary and natural way, without involving the section theorems.
  • Dual Optional and Predictable Projection. The dual projections are, as the name suggests, dual to the optional and predictable projections mentioned above. These apply to increasing integrable processes or, more generally, to processes with integrable variation. For a process X, the dual projections can be thought of as the optional and predictable projections applied to the differential {dX}.
  • The Doléans Measure. The Doléans measure can be defined for class (D) submartingales and, applied to the square of a martingale, can be used to construct the stochastic integral for square integrable martingales. Although this does not involve the projection theorems, the Doléans measure in conjunction with dual predictable projection gives a slick proof of the Doob-Meyer decomposition. The Doléans measure also exists for quasimartingales and, similarly, the Doob-Meyer decomposition can be extended to such processes.
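
For reference, the defining property of these projections can be stated as follows (this is only a brief reminder, not part of the original list; the precise setup is left to the posts themselves). For a bounded measurable process X, the optional projection {{}^{\rm o}X} is the unique optional process, up to evanescence, satisfying

\displaystyle  1_{\{\tau < \infty\}}\,{}^{\rm o}X_\tau={\mathbb E}[1_{\{\tau < \infty\}}X_\tau\,\vert\,\mathcal{F}_\tau]

almost surely, for every stopping time {\tau}. The predictable projection {{}^{\rm p}X} is characterized by the analogous identity at predictable stopping times, with {\mathcal{F}_{\tau-}} in place of {\mathcal{F}_\tau}.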

12 October 16

Do Convex and Decreasing Functions Preserve the Semimartingale Property — A Possible Counterexample


Figure 1: The function f, convex in x and decreasing in t

Here, I attempt to construct a counterexample to the hypotheses of the earlier post, Do convex and decreasing functions preserve the semimartingale property? There, it was asked, for any semimartingale X and function {f\colon{\mathbb R}_+\times{\mathbb R}\rightarrow{\mathbb R}} such that {f(t,x)} is convex in x and right-continuous and decreasing in t, is {f(t,X_t)} necessarily a semimartingale? It was explained how this is equivalent to the following hypothesis: any function {f\colon[0,1]^2\rightarrow{\mathbb R}} such that {f(t,x)} is convex and Lipschitz continuous in x and decreasing in t decomposes as {f=g-h}, where {g(t,x)} and {h(t,x)} are convex in x and increasing in t. This is the form of the hypothesis which this post will be concerned with, so the example will only involve simple real analysis and no stochastic calculus. I will give some numerical calculations suggesting that the construction below is a counterexample, but I do not have any proof of this. So, the hypothesis is still open.

Although the construction given here will be self-contained, it is worth noting that it is connected to the example of a martingale which moves along a deterministic path. If {\{M_t\}_{t\in[0,1]}} is the martingale constructed there, then

\displaystyle  C(t,x)={\mathbb E}[(M_t-x)_+]

defines a function from {[0,1]\times[-1,1]} to {{\mathbb R}} which is convex in x and increasing in t. The question is then whether C can be expressed as the difference of functions which are convex in x and decreasing in t. The example constructed in this post will be the same as C with the time direction reversed, and with a linear function of x added so that it is zero at {x=\pm1}.
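
As a quick check of the stated properties of C (this verification is mine, not from the original post): convexity in x holds because {(m-x)_+} is convex in x for each fixed m, and monotonicity in t follows from the martingale property of M together with Jensen's inequality for conditional expectations. For {s\le t},

\displaystyle  C(t,x)={\mathbb E}\left[{\mathbb E}[(M_t-x)_+\,\vert\,\mathcal{F}_s]\right]\ge{\mathbb E}\left[({\mathbb E}[M_t\,\vert\,\mathcal{F}_s]-x)_+\right]={\mathbb E}\left[(M_s-x)_+\right]=C(s,x).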

5 October 16

A Martingale Which Moves Along a Deterministic Path


Figure 1: Sample paths

In this post I will construct a continuous and non-constant martingale M which only varies on the path of a deterministic function {f\colon{\mathbb R}_+\rightarrow{\mathbb R}}. That is, {M_t=f(t)} at all times outside of the set of nontrivial intervals on which M is constant. Expressed in terms of the stochastic integral, {dM_t=0} on the set {\{t\colon M_t\not=f(t)\}} and,

\displaystyle  M_t = \int_0^t 1_{\{M_s=f(s)\}}\,dM_s. (1)

In the example given here, f will be right-continuous. Examples with continuous f do exist, although the constructions I know of are considerably more complicated. At first sight, these properties appear to contradict what we know about continuous martingales. They vary unpredictably, behaving completely unlike any deterministic function. It is certainly the case that we cannot have {M_t=f(t)} across any interval on which M is not constant.
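
Unpacking (1) slightly (this is just a restatement of the two displayed properties, nothing new): splitting the integral of dM over the two events gives

\displaystyle  M_t-M_0=\int_0^t 1_{\{M_s=f(s)\}}\,dM_s+\int_0^t 1_{\{M_s\not=f(s)\}}\,dM_s,

and the second integral vanishes because {dM_t=0} on {\{M_t\not=f(t)\}}. Together with {M_0=0}, which (1) itself forces at {t=0}, this is exactly equation (1).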

By a stochastic time-change, any Brownian motion B can be transformed to have the same distribution as M. This means that there exists an increasing and right-continuous process A adapted to the same filtration as B and such that {B_t=M_{A_t}} where M is a martingale as above. From this, we can infer that

\displaystyle  B_t=f(A_t),

expressing Brownian motion as a function of an increasing process. (more…)

26 September 16

Do Convex and Decreasing Functions Preserve the Semimartingale Property?

Some years ago, I spent considerable effort trying to prove the hypothesis below. After failing at this, I spent time trying to find a counterexample, but also with no success. I did post this as a question on mathoverflow, but it has so far received no conclusive answers. So, as far as I am aware, the following statement remains unproven either way.

Hypothesis H1 Let {f\colon{\mathbb R}_+\times{\mathbb R}\rightarrow{\mathbb R}} be such that {f(t,x)} is convex in x and right-continuous and decreasing in t. Then, for any semimartingale X, {f(t,X_t)} is a semimartingale.

It is well known that convex functions of semimartingales are themselves semimartingales. See, for example, the Ito-Tanaka formula. More generally, if {f(t,x)} were increasing in t rather than decreasing, then it can be shown without much difficulty that {f(t,X_t)} is a semimartingale. Consider decomposing {f(t,X_t)} as

\displaystyle  f(t,X_t)=\int_0^tf_x(s,X_{s-})\,dX_s+V_t, (1)

for some process V. By convexity, the right hand derivative of {f(t,x)} with respect to x always exists, and I am denoting this by {f_x}. In the case where f is twice continuously differentiable, the process V is given by Ito’s formula which, in particular, shows that it is a finite variation process. If {f(t,x)} is convex in x and increasing in t, then the terms in Ito’s formula for V are all increasing and, so, it is an increasing process. By taking limits of smooth functions, it follows that V is increasing even when the differentiability constraints are dropped, so {f(t,X_t)} is a semimartingale. Now, returning to the case where {f(t,x)} is decreasing in t, Ito’s formula is only able to say that V is of finite variation, and is generally not monotonic. As limits of finite variation processes need not be of finite variation themselves, this does not say anything about the case when f is not assumed to be differentiable, and does not help us to determine whether or not {f(t,X_t)} is a semimartingale.
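
To make this concrete in the smooth case, restrict for simplicity to continuous X, and write {f_t} and {f_{xx}} for the partial derivative in t and the second derivative in x (the general case with jumps contributes further terms which are nonnegative when f is convex in x). Ito’s formula then gives

\displaystyle  V_t=f(0,X_0)+\int_0^t f_t(s,X_s)\,ds+\frac12\int_0^t f_{xx}(s,X_s)\,d[X]_s,

so V is increasing whenever {f_t\ge0} (f increasing in t) and {f_{xx}\ge0} (f convex in x), whereas {f_t\le0} only gives a finite variation process.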

Hypothesis H1 can be weakened by restricting to continuous functions of continuous martingales.

Hypothesis H2 Let {f\colon{\mathbb R}_+\times{\mathbb R}\rightarrow{\mathbb R}} be such that {f(t,x)} is convex in x and continuous and decreasing in t. Then, for any continuous martingale X, {f(t,X_t)} is a semimartingale.

As continuous martingales are special cases of semimartingales, hypothesis H1 implies H2. In fact, the reverse implication also holds so that hypotheses H1 and H2 are equivalent.

Hypotheses H1 and H2 can also be recast as a simple real analysis statement which makes no reference to stochastic processes.

Hypothesis H3 Let {f\colon{\mathbb R}_+\times{\mathbb R}\rightarrow{\mathbb R}} be such that {f(t,x)} is convex in x and decreasing in t. Then, {f=g-h} where {g(t,x)} and {h(t,x)} are convex in x and increasing in t.
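
One direction of this equivalence is immediate from the argument sketched above (this observation is mine, not part of the hypothesis): given a decomposition as in H3, we can write

\displaystyle  f(t,X_t)=g(t,X_t)-h(t,X_t),

and each term on the right is a semimartingale, since g and h are convex in x and increasing in t, so {f(t,X_t)} is also a semimartingale. The reverse implication is the less obvious part.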


14 September 16

Failure of the Martingale Property For Stochastic Integration

If X is a cadlag martingale and {\xi} is a uniformly bounded predictable process, then is the integral

\displaystyle  Y=\int\xi\,dX (1)

a martingale? If {\xi} is elementary, this is one of the most basic properties of martingales. If X is a square integrable martingale, then so is Y. More generally, if X is an {L^p}-integrable martingale, for any {p > 1}, then so is Y. Furthermore, integrability of the maximum {\sup_{s\le t}\lvert X_s\rvert} is enough to guarantee that Y is a martingale. Also, it is a fundamental result of stochastic integration that Y is at least a local martingale and, for this to be true, it is only necessary for X to be a local martingale and {\xi} to be locally bounded. In the general situation of cadlag martingales X and bounded predictable {\xi}, it need not be the case that Y is a martingale. In this post I will construct an example showing that Y can fail to be a martingale.
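
As an illustration, here is one way to see the square integrable case mentioned above (a sketch under the stated boundedness assumption, not necessarily the argument used in these notes). If X is a square integrable martingale and {\lvert\xi\rvert\le K}, then

\displaystyle  {\mathbb E}\left[[Y]_t\right]={\mathbb E}\left[\int_0^t\xi_s^2\,d[X]_s\right]\le K^2{\mathbb E}\left[[X]_t\right]=K^2{\mathbb E}\left[X_t^2-X_0^2\right] < \infty,

and a local martingale starting from zero whose quadratic variation is integrable at each time t is a square integrable martingale.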

12 September 16

Martingales with Non-Integrable Maximum


It is a consequence of Doob’s maximal inequality that any {L^p}-integrable martingale has a maximum, up to a finite time, which is also {L^p}-integrable for any {p > 1}. Using {X^*_t\equiv\sup_{s\le t}\lvert X_s\rvert} to denote the running absolute maximum of a cadlag martingale X, {X^*} is {L^p}-integrable whenever {X} is. It is natural to ask whether this also holds for {p=1}. As martingales are integrable by definition, this is just asking whether cadlag martingales necessarily have an integrable maximum. Integrability of the maximum process does have some important consequences in the theory of martingales. By the Burkholder-Davis-Gundy inequality, it is equivalent to the square-root of the quadratic variation, {[X]^{1/2}}, being integrable. Stochastic integration over bounded integrands preserves the martingale property, so long as the martingale has an integrable maximal process. The continuous and purely discontinuous parts of a martingale X are themselves local martingales, but are not guaranteed to be proper martingales unless X has an integrable maximum process.
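
For reference, the {p=1} case of the Burkholder-Davis-Gundy inequality being used here states that, for a cadlag local martingale X with {X_0=0}, there are universal constants {0 < c < C < \infty} with

\displaystyle  c\,{\mathbb E}\left[[X]_t^{1/2}\right]\le{\mathbb E}\left[X^*_t\right]\le C\,{\mathbb E}\left[[X]_t^{1/2}\right],

so that {X^*_t} is integrable if and only if {[X]_t^{1/2}} is. I state it with {X_0=0} for simplicity; the initial value is integrable anyway for a martingale.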

The aim of this post is to show, by means of some examples, that a cadlag martingale need not have an integrable maximum.

11 September 16

The Optimality of Doob’s Maximal Inequality

One of the most fundamental and useful results in the theory of martingales is Doob’s maximal inequality. Use {X^*_t\equiv\sup_{s\le t}\lvert X_s\rvert} to denote the running (absolute) maximum of a process X. Then, Doob’s {L^p} maximal inequality states that, for any cadlag martingale or nonnegative submartingale X and real {p > 1},

\displaystyle  \lVert X^*_t\rVert_p\le c_p \lVert X_t\rVert_p (1)

with {c_p=p/(p-1)}. Here, {\lVert\cdot\rVert_p} denotes the standard {L^p}-norm, {\lVert U\rVert_p\equiv{\mathbb E}[\lvert U\rvert^p]^{1/p}}.
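
For instance (a concrete instance of (1), not in the original text), taking {p=2} gives {c_2=2}, so that

\displaystyle  {\mathbb E}\left[(X^*_t)^2\right]\le4\,{\mathbb E}\left[X_t^2\right]

for any cadlag martingale or nonnegative submartingale X.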

An obvious question to ask is whether it is possible to do any better. That is, can the constant {c_p} in (1) be replaced by a smaller number? This is especially pertinent in the case of small p, since {c_p} diverges to infinity as p approaches 1. The purpose of this post is to show, by means of an example, that the answer is no. The constant {c_p} in Doob’s inequality is optimal. We will construct an example as follows.

Example 1 For any {p > 1} and constant {1 \le c < c_p} there exists a strictly positive cadlag {L^p}-integrable martingale {\{X_t\}_{t\in[0,1]}} with {X^*_1=cX_1}.

For X as in the example, we have {\lVert X^*_1\rVert_p=c\lVert X_1\rVert_p}. So, supposing that (1) holds with any other constant {\tilde c_p} in place of {c_p}, we must have {\tilde c_p\ge c}. By choosing {c} as close to {c_p} as we like, this means that {\tilde c_p\ge c_p} and {c_p} is indeed optimal in (1).

6 September 16

The Maximum Maximum of Martingales with Known Terminal Distribution

In this post I will be concerned with the following problem — given a martingale X for which we know the distribution at a fixed time, and we are given nothing else, what is the best bound we can obtain for the maximum of X up until that time? This is a question with a long history, starting with Doob’s inequalities which bound the maximum in the {L^p} norms and in probability. Later, Blackwell and Dubins (3), Dubins and Gilat (5) and Azema and Yor (1,2) showed that the maximum is bounded above, in stochastic order, by the Hardy-Littlewood transform of the terminal distribution. Furthermore, this bound is the best possible in the sense that there do exist martingales for which it can be attained, for any permissible terminal distribution. Hobson (7,8) considered the case where the starting law is also known, and this was further generalized to the case with a specified distribution at an intermediate time by Brown, Hobson and Rogers (4). Finally, Henry-Labordère, Obłój, Spoida and Touzi (6) considered the case where the distribution of the martingale is specified at an arbitrary set of times. In this post, I will look at the case where only the terminal distribution is specified. This leads to interesting constructions of martingales and, in particular, of continuous martingales with specified terminal distributions, with close connections to the Skorokhod embedding problem.

I will be concerned with the maximum process of a cadlag martingale X,

\displaystyle  X^*_t=\sup_{s\le t}X_s,

which is increasing and adapted. We can state and prove the bound on {X^*} relatively easily, although showing that it is optimal is more difficult. As the result holds more generally for submartingales, I state it in that generality, although I am more concerned with martingales here.

Theorem 1 If X is a cadlag submartingale then, for each {t\ge0} and {x\in{\mathbb R}},

\displaystyle  {\mathbb P}\left(X^*_t\ge x\right)\le\inf_{y < x}\frac{{\mathbb E}\left[(X_t-y)_+\right]}{x-y}. (1)

Proof: We just need to show that the inequality holds for each {y < x}, and then it immediately follows for the infimum. Choosing {y < x^\prime < x}, consider the stopping time

\displaystyle  \tau=\inf\{s\ge0\colon X_s\ge x^\prime\}.

Then, {\tau \le t} and {X_\tau\ge x^\prime} whenever {X^*_t \ge x}. As {f(z)\equiv(z-y)_+} is nonnegative and increasing in z, this means that {1_{\{X^*_t\ge x\}}} is bounded above by {f(X_{\tau\wedge t})/f(x^\prime)}. Taking expectations,

\displaystyle  {\mathbb P}\left(X^*_t\ge x\right)\le{\mathbb E}\left[f(X_{\tau\wedge t})\right]/f(x^\prime).

Since f is convex and increasing, {f(X)} is a submartingale so, using optional sampling,

\displaystyle  {\mathbb P}\left(X^*_t\ge x\right)\le{\mathbb E}\left[f(X_t)\right]/f(x^\prime).

Letting {x^\prime} increase to {x} gives the result. ⬜

The bound stated in Theorem 1 is also optimal, and can be achieved by a continuous martingale. In this post, all measures on {{\mathbb R}} are defined with respect to the Borel sigma-algebra.

Theorem 2 If {\mu} is a probability measure on {{\mathbb R}} with {\int\lvert x\rvert\,d\mu(x) < \infty} and {t > 0} then there exists a continuous martingale X (defined on some filtered probability space) such that {X_t} has distribution {\mu} and (1) is an equality for all {x\in{\mathbb R}}.


14 August 16

Purely Discontinuous Semimartingales

As stated by the Bichteler-Dellacherie theorem, all semimartingales can be decomposed as the sum of a local martingale and an FV process. However, as the terms are only determined up to the addition of an FV local martingale, this decomposition is not unique. In the case of continuous semimartingales, we do obtain uniqueness by requiring the terms in the decomposition to also be continuous. Furthermore, the decomposition into continuous terms is preserved by stochastic integration. Looking at non-continuous processes, there does exist a unique decomposition into a local martingale and a predictable FV process, so long as we impose the slight restriction that the semimartingale is locally integrable.

In this post, I look at another decomposition which holds for all semimartingales and, moreover, is uniquely determined. This is the decomposition into continuous local martingale and purely discontinuous terms which, as we will see, is preserved by the stochastic integral. This is distinct from each of the decompositions mentioned above, except for the case of continuous semimartingales, in which case it coincides with the sum of continuous local martingale and FV components. Before proving the decomposition, I will start by describing the class of purely discontinuous semimartingales which, although they need not have finite variation, do have many of the properties of FV processes. In fact, they are precisely the closure of the set of FV processes under the semimartingale topology. The terminology can be a bit confusing, and it should be noted that purely discontinuous processes need not actually have any discontinuities. For example, all continuous FV processes are purely discontinuous. For this reason, the term ‘quadratic pure jump semimartingale’ is sometimes used instead, referring to the fact that their quadratic variation is a pure jump process. Recall that quadratic variations and covariations can be written as the sum of continuous and pure jump parts,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle [X]_t&\displaystyle=[X]^c_t+\sum_{s\le t}(\Delta X_s)^2,\smallskip\\ \displaystyle [X,Y]_t&\displaystyle=[X,Y]^c_t+\sum_{s\le t}\Delta X_s\Delta Y_s. \end{array} (1)

The statement that the quadratic variation is a pure jump process is equivalent to saying that its continuous part, {[X]^c}, is zero. As the only difference between the generalized Ito formula for semimartingales and for FV processes is in the terms involving continuous parts of the quadratic variations and covariations, purely discontinuous semimartingales behave much like FV processes under changes of variables and integration by parts. Yet another characterisation of purely discontinuous semimartingales is as sums of purely discontinuous local martingales — which were studied in the previous post — and of FV processes.
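
As a quick check of the claim that continuous FV processes are purely discontinuous (the computation is mine): if V is a continuous FV process then its quadratic variation consists only of the sum of its squared jumps, so

\displaystyle  [V]_t=\sum_{s\le t}(\Delta V_s)^2=0,

and, in particular, {[V]^c=0}.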

Rather than starting by choosing one specific property to use as the definition, I prove the equivalence of various statements, any of which can be taken to define the purely discontinuous semimartingales.

Theorem 1 For a semimartingale X, the following are equivalent.

  1. {[X]^c=0}.
  2. {[X,Y]^c=0} for all semimartingales Y.
  3. {[X,Y]=0} for all continuous semimartingales Y.
  4. {[X,M]=0} for all continuous local martingales M.
  5. {X=M+V} for a purely discontinuous local martingale M and FV process V.
  6. there exists a sequence {\{X^n\}_{n=1,2,\ldots}} of FV processes such that {X^n\rightarrow X} in the semimartingale topology.
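
As a hint at how these conditions fit together (this remark is not part of the theorem): the implication from the first statement to the second follows from a Kunita-Watanabe type bound applied to the continuous parts,

\displaystyle  \left\lvert[X,Y]^c_t\right\rvert\le\left([X]^c_t\right)^{1/2}\left([Y]^c_t\right)^{1/2},

so that {[X]^c=0} forces {[X,Y]^c=0} for every semimartingale Y. The third and fourth statements then follow from decomposition (1), since continuous processes contribute no jump terms.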


8 August 16

Purely Discontinuous Local Martingales

The previous post introduced the idea of a purely discontinuous local martingale. In the context of that post, such processes were used to construct local martingales with prescribed jumps, and enabled us to obtain uniqueness in the constructions given there. However, purely discontinuous local martingales are a very useful concept more generally in martingale and semimartingale theory, so I will go into more detail about such processes now. To start, we restate the definition from the previous post.

Definition 1 A local martingale X is said to be purely discontinuous iff XM is a local martingale for all continuous local martingales M.

We can show that every local martingale decomposes uniquely into continuous and purely discontinuous parts. Continuous local martingales are well understood — for instance, they can always be realized as time-changed Brownian motions. On the other hand, as we will see in a moment, purely discontinuous local martingales can be realized as limits of FV processes, and arguments involving FV local martingales can often be extended to the purely discontinuous case. So, decomposition (1) below is useful as it allows arguments involving continuous-time local martingales to be broken down into separate arguments for their continuous and purely discontinuous parts. As always, two processes are considered to be equal if they are equivalent up to evanescence.

Theorem 2 Every local martingale X decomposes uniquely as

\displaystyle  X = X^{\rm c} + X^{\rm d} (1)

where {X^{\rm c}} is a continuous local martingale with {X^{\rm c}_0=0} and {X^{\rm d}} is a purely discontinuous local martingale.

Proof: As the process {H=\Delta X} is, by definition, equal to the jump process of a local martingale, it satisfies the hypothesis of Theorem 5 of the previous post. So, there exists a purely discontinuous local martingale {X^{\rm d}} with {\Delta X^{\rm d}=H=\Delta X}. We can take {X^{\rm d}_0=X_0} so that {X^{\rm c}=X-X^{\rm d}} is a continuous local martingale starting from 0.

If {X=\tilde X^{\rm c}+\tilde X^{\rm d}} is another such decomposition, then {\tilde X^{\rm d}} and {X^{\rm d}} have the same jumps and initial value so, by Lemma 3 of the previous post, {\tilde X^{\rm d}=X^{\rm d}}. ⬜

Throughout the remainder of this post, the notation {X^{\rm c}} and {X^{\rm d}} will be used to denote the continuous and purely discontinuous parts of a local martingale X, as given by decomposition (1). Using the notation {\mathcal{M}_{\rm loc}}, {\mathcal{M}_{{\rm loc},0}^{\rm c}} and {\mathcal{M}_{\rm loc}^{\rm d} } respectively for the spaces of local martingales, continuous local martingales starting from zero and the purely discontinuous local martingales, Theorem 2 can be expressed succinctly as

\displaystyle  \mathcal{M}_{\rm loc} = \mathcal{M}_{{\rm loc},0}^{\rm c} \oplus \mathcal{M}_{\rm loc}^{\rm d}. (2)

That is, {\mathcal{M}_{\rm loc}} is the direct sum of {\mathcal{M}_{{\rm loc},0}^{\rm c}} and {\mathcal{M}_{\rm loc}^{\rm d}}. Definition 1 identifies the purely discontinuous local martingales to be, in a sense, orthogonal to the continuous local martingales. Then, (2) can be understood as the decomposition of {\mathcal{M}_{\rm loc}} into the direct sum of the closed subspace {\mathcal{M}_{{\rm loc},0}^{\rm c}} and its orthogonal complement. This does in fact give an alternative, elementary, and commonly used method of proving decomposition (1). As we have already shown the rather strong result of Theorem 5 from the previous post, the quickest way of proving the decomposition was to simply apply this result. I’ll give more details on the more elementary approach further below.

Definition 1 used above for the class of purely discontinuous local martingales was very convenient for our purposes, as it leads immediately to the proof of Theorem 2. However, there are many alternative characterizations of such processes. For example, they are precisely the processes which are limits of FV local martingales in a strong enough sense. They can also be characterized in terms of their quadratic variations and covariations. Recall that the quadratic variation and covariation are FV processes with jumps {\Delta[X]=(\Delta X)^2} and {\Delta[X,Y]=\Delta X\Delta Y}, so that they can be decomposed into continuous and pure jump components,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle [X]_t &\displaystyle=[X]^c_t+\sum_{s\le t}(\Delta X_s)^2,\smallskip\\ \displaystyle [X,Y]_t &\displaystyle=[X,Y]^c_t+\sum_{s\le t}\Delta X_s\Delta Y_s. \end{array} (3)

The following theorem gives several alternative characterizations of the class of purely discontinuous local martingales.

Theorem 3 For a local martingale X, the following are equivalent.

  1. X is purely discontinuous.
  2. {[X,Y]=0} for all continuous local martingales Y.
  3. {[X,Y]^c=0} for all local martingales Y.
  4. {[X]^c=0}.
  5. there exists a sequence {\{X^n\}_{n=1,2,\ldots}} of FV local martingales such that

    \displaystyle  {\mathbb E}\left[\sup_{t\ge0}(X^n_t-X_t)^2\right]\rightarrow0.

