For stochastic processes in discrete time, the Doob decomposition uniquely decomposes any integrable process into the sum of a martingale and a *predictable* process. If $X=(X_n)_{n\ge0}$ is an integrable process adapted to a filtration $(\mathcal{F}_n)_{n\ge0}$ then we write

$\displaystyle X_n=X_0+M_n+A_n.$

Here, *M* is a martingale, so that $\mathbb{E}[M_n\mid\mathcal{F}_{n-1}]=M_{n-1}$, and *A* is predictable with $M_0=A_0=0$. By saying that *A* is predictable, we mean that $A_n$ is $\mathcal{F}_{n-1}$-measurable for each $n\ge1$. It can be seen that this implies that

$\displaystyle A_n-A_{n-1}=\mathbb{E}\left[X_n-X_{n-1}\mid\mathcal{F}_{n-1}\right].$

Then it is possible to write *A* and *M* as

$\displaystyle A_n=\sum_{k=1}^n\mathbb{E}\left[X_k-X_{k-1}\mid\mathcal{F}_{k-1}\right],\qquad M_n=X_n-X_0-A_n.$ (1)

So, the Doob decomposition is unique and, conversely, the processes *A* and *M* constructed according to equation (1) can be seen to be, respectively, a predictable process starting from zero and a martingale. For many purposes, this allows us to reduce problems concerning processes in discrete time to simpler statements about martingales and separately about predictable processes. In the case where *X* is a submartingale, things reduce further as, in this case, *A* will be an increasing process.
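Equation (1) is easy to check numerically. The following sketch (my own toy example, not from the post) computes the Doob decomposition of the submartingale $X_n=S_n^2$, where *S* is a simple symmetric random walk: since $\mathbb{E}[X_n-X_{n-1}\mid\mathcal{F}_{n-1}]=1$, the predictable part is $A_n=n$ and the martingale part is $M_n=S_n^2-n$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Doob decomposition of X_n = S_n^2 for a simple symmetric random walk S.
# E[X_n - X_{n-1} | F_{n-1}] = E[(S_{n-1} + e_n)^2 - S_{n-1}^2 | F_{n-1}] = 1,
# so equation (1) gives A_n = n and M_n = X_n - X_0 - A_n = S_n^2 - n.
n_paths, n_steps = 100_000, 30
eps = rng.choice([-1, 1], size=(n_paths, n_steps))
S = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(eps, axis=1)], axis=1)
X = S**2

A = np.arange(n_steps + 1, dtype=float)  # predictable, increasing: X is a submartingale
M = X - X[:, :1] - A                     # martingale part, one row per sample path

# Sanity checks: the decomposition holds pathwise, and M is (empirically) a
# martingale, i.e. the sample means of M_n stay near M_0 = 0 for every n.
assert np.allclose(X, X[:, :1] + M + A)
print(np.abs(M.mean(axis=0)).max() < 0.5)  # True
```

The conditional increment here is computed analytically; for a general Markov chain one would replace the constant `1` by the known one-step conditional expectation.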

The situation is considerably more complicated when looking at processes in continuous time. The extension of the Doob decomposition to continuous time processes, known as the *Doob-Meyer decomposition*, was an important result historically in the development of stochastic calculus. First, we would usually restrict attention to sufficiently nice modifications of the processes and, in particular, suppose that *X* is cadlag. When attempting an analogous decomposition to the one above, it is not immediately clear what should be meant by the predictable component. The continuous time predictable processes are defined to be the set of all processes which are measurable with respect to the predictable sigma algebra, which is the sigma algebra generated by the space of processes which are adapted and continuous (or, equivalently, left-continuous). In particular, all continuous and adapted processes are predictable but, due to the existence of continuous martingales such as Brownian motion, this means that decompositions as sums of martingales and predictable processes are not unique. It is therefore necessary to impose further conditions on the term *A* in the decomposition. It turns out that we obtain unique decompositions if, in addition to being predictable, *A* is required to be cadlag with locally finite variation (an FV process). The processes which can be decomposed into a local martingale and a predictable FV process are known as *special semimartingales*. This is precisely the space of locally integrable semimartingales. As usual, we work with respect to a complete filtered probability space and two stochastic processes are considered to be the same if they are equivalent up to evanescence.
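The non-uniqueness mentioned above can be made concrete (my own remark, in the notation of the post): a standard Brownian motion *B* is both a martingale and a continuous adapted, hence predictable, process, so it admits two distinct decompositions into a martingale plus a predictable term.

```latex
% Two distinct decompositions of Brownian motion into a
% martingale plus a predictable process:
B = \underbrace{B}_{\text{martingale}} + \underbrace{0}_{\text{predictable}}
  = \underbrace{0}_{\text{martingale}} + \underbrace{B}_{\text{predictable}}.
% Requiring the predictable term to be an FV process excludes the second
% decomposition, since Brownian motion has infinite variation.
```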

**Theorem 1** *For a process X, the following are equivalent.*

1. *X is a locally integrable semimartingale.*
2. *X decomposes as*

   $\displaystyle X=M+A$ (2)

   *for a local martingale M and predictable FV process A.*

*Furthermore, choosing $A_0=0$, decomposition (2) is unique.*

Theorem 1 is a general version of the Doob-Meyer decomposition. However, the name ‘Doob-Meyer decomposition’ is often used to specifically refer to the important special case where *X* is a submartingale. Historically, the theorem was first stated and proved for that case, and I will look at the decomposition for submartingales in more detail in a later post. Compare with the Bichteler-Dellacherie theorem, which says that a process is a semimartingale if and only if it decomposes as the sum of a local martingale and an FV process. Unfortunately, that decomposition is not unique. By Theorem 1, uniqueness is obtained by requiring the finite variation term to also be predictable, at the cost of restricting the result to apply only to locally integrable processes. Compare also with the unique decomposition of continuous semimartingales into a continuous local martingale and a continuous FV process. When looking at non-continuous processes, predictable FV processes are the correct generalisation of continuous FV processes.

The proof of Theorem 1 follows almost immediately from the proof of the Bichteler-Dellacherie theorem described in these notes, together with the classification of predictable FV processes. That is, we orthogonally project the process *X* onto a local martingale *M*, then show that $A=X-M$ is a predictable FV process. However, this only immediately applies to locally *square-integrable* processes. The proof for a general locally integrable process will require approximating by the square-integrable case, for which the following lemma will be used. Here, $\int_0^\tau|dX|$ denotes the variation of *X* across the interval $[0,\tau]$, so the inequality below is trivial unless *X* has integrable variation.

**Lemma 2** *If M is a local martingale, A is a predictable FV process and $X=M+A$, then*

$\displaystyle \mathbb{E}\left[\int_0^\tau|dA|\right]\le\mathbb{E}\left[\int_0^\tau|dX|\right]$

*for all stopping times $\tau$.*

*Proof:* As *A* is a predictable FV process, there exists a predictable process $\xi$ with $|\xi|=1$ such that $\int\xi\,dA$ is equal to the variation $\int|dA|$. Then,

$\displaystyle \int_0^\tau|dA|=\int_0^\tau\xi\,dA=\int_0^\tau\xi\,dX-\int_0^\tau\xi\,dM.$

As $\int\xi\,dM$ is a local martingale, there exist bounded stopping times $\tau_n$ increasing to infinity such that the stopped processes $\left(\int\xi\,dM\right)^{\tau_n}$ are martingales. For any stopping time $\tau$, $\int_0^{\tau_n\wedge\tau}\xi\,dM$ has zero expectation. So,

$\displaystyle \mathbb{E}\left[\int_0^{\tau_n\wedge\tau}|dA|\right]=\mathbb{E}\left[\int_0^{\tau_n\wedge\tau}\xi\,dX\right]\le\mathbb{E}\left[\int_0^\tau|dX|\right].$

Letting *n* go to infinity and using monotone convergence gives the result. ⬜
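An inequality of this type, dominating the expected variation of the predictable part by that of the whole process, is easy to observe numerically in discrete time. A sketch with my own toy example (not from the post): for $X=S^2$ with *S* a simple random walk, the compensator is $A_n=n$, and each increment of *X* is an odd integer, so the variation of *A* is dominated pathwise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete-time sanity check: for X = S^2 with Doob decomposition X = M + A,
# A_n = n, the variation of A is dominated by the variation of X, since each
# increment X_k - X_{k-1} = 2*S_{k-1}*e_k + 1 is an odd integer, hence >= 1
# in absolute value, while each increment of A is exactly 1.
n_paths, n_steps = 5000, 50
eps = rng.choice([-1, 1], size=(n_paths, n_steps))
S = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(eps, axis=1)], axis=1)
X = S**2

var_X = np.abs(np.diff(X, axis=1)).sum(axis=1)  # pathwise variation of X on [0, n]
var_A = float(n_steps)                          # A_n = n has variation n_steps

print(var_X.mean() >= var_A)  # True
```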

With this lemma out of the way, we can now give the proof of Theorem 1.

*Proof of Theorem 1:* If $X=M+A=M'+A'$ were two such decompositions with $A_0=A'_0=0$ then $A-A'=M'-M$ is a predictable FV local martingale starting from 0, hence is identically zero. So, subject to *A* starting from zero, decomposition (2) is unique.

Next, sums of local martingales and FV processes are semimartingales. Also, local martingales are locally integrable, as are cadlag predictable processes. So, the first statement follows from the second. In the case where *X* is a locally *square*-integrable semimartingale, the reverse implication follows quickly from previous results of these notes. As in the proof of the Bichteler-Dellacherie theorem, any such *X* decomposes as $X=M+A$ for a locally square-integrable martingale *M* and a cadlag adapted process *A* such that $[A,N]$ is a local martingale for all cadlag bounded martingales *N*. Then, *A* is a predictable FV process.

It just remains to show that decomposition (2) exists for all locally integrable semimartingales *X*. Choose a sequence of stopping times $\tau_n$ increasing to infinity such that the *pre-stopped* processes

$\displaystyle X^{\tau_n-}_t=X_t1_{\{t<\tau_n\}}+X_{\tau_n-}1_{\{t\ge\tau_n\}}$

are locally square integrable. For example, taking $\tau_n$ to be the first time at which $|X|$ exceeds *n*, then $X^{\tau_n-}$ will be uniformly bounded by *n*. As $X^{\tau_n-}$ differs from the semimartingale $X^{\tau_n}$ by a step process consisting of a single jump at $\tau_n$, it is a semimartingale. As the result has already been established for locally square-integrable semimartingales, we can decompose

$\displaystyle X^{\tau_n-}=M^n+A^n$

for locally square-integrable martingales $M^n$ and predictable FV processes $A^n$ with $A^n_0=0$. It needs to be shown that $M^n,A^n$ converge to the processes $M,A$ in decomposition (2) for *X*.

Choosing *m,n* and a stopping time $\tau$, Lemma 2 gives

$\displaystyle \mathbb{E}\left[\int_0^\tau\left|d(A^n-A^m)\right|\right]\le\mathbb{E}\left[\int_0^\tau\left|d(X^{\tau_n-}-X^{\tau_m-})\right|\right].$

As *X* is locally integrable, there exists a sequence of finite stopping times $\sigma_k$ increasing to infinity such that $\sup_{t\le\sigma_k}|X_t|$ is integrable. Then,

$\displaystyle \mathbb{E}\left[\int_0^{\tau_m\wedge\sigma_k}\left|d(A^n-A^m)\right|\right]\rightarrow0$

as *m,n* go to infinity. So $\int_0^t|d(A^n-A^m)|$ tends to zero in probability as *m,n* tend to infinity. This shows that $A^n$ is a Cauchy convergent sequence under the ucp topology. By completeness it has a limit *A* and, by passing to a subsequence if necessary, we can assume that $\sup_{t\le T}|A^n_t-A_t|$ tends to zero as $n\to\infty$ for all finite times *T* with probability one. So, *A* is predictable (up to a set of zero probability). Applying Fatou’s lemma,

$\displaystyle \mathbb{E}\left[\int_0^{\tau_n\wedge\sigma_k}\left|d(A-A^n)\right|\right]\le\liminf_{m\rightarrow\infty}\mathbb{E}\left[\int_0^{\tau_n\wedge\sigma_k}\left|d(A^m-A^n)\right|\right]\rightarrow0$

as *n* goes to infinity. In particular, *A* has finite variation on the ranges $[0,\tau_n\wedge\sigma_k]$, so it is an FV process. Finally, we have just shown that $\int|d(A-A^n)|$ locally tends to zero in $L^1$. Setting $M=X-A$, this shows that the local martingales $M^n$ locally converge to *M* in $L^1$. Also, $A-A^n$ has integrable variation and, hence, $M-M^n$ is a martingale over $[0,\tau_n\wedge\sigma_k]$. So, *M* is a local martingale as required. ⬜

#### Special Semimartingales as Integrators

Having established the canonical decomposition (2) for special semimartingales, we can now move on to stochastic integration. In fact, the decomposition is particularly well behaved under integration, inasmuch as it commutes with the integral. The integral of a predictable process with respect to a special semimartingale can be split up into separate integrals with respect to the martingale and predictable FV parts. This compatibility of stochastic integration with semimartingale decompositions was previously seen in the special case of continuous semimartingales. However, for noncontinuous semimartingales, there is one restriction for this to hold. We are only guaranteed that the integrals with respect to the separate components are well-defined if it is assumed that the integral is itself locally integrable.
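In discrete time, the commuting of integration with the decomposition is simply linearity of the discrete stochastic integral. A sketch with my own hypothetical example (not from the post): for the submartingale $X=S^2$ with Doob decomposition $X=X_0+M+A$ (compensator $A_n=n$) and a predictable integrand $\xi$, the integral against *X* splits into a martingale transform against *M* plus an FV integral against *A*.

```python
import numpy as np

rng = np.random.default_rng(2)

# Discrete analogue of decomposition (3): the discrete stochastic integral
# (xi . Y)_n = sum_{k<=n} xi_k (Y_k - Y_{k-1}) splits across X = X_0 + M + A.
n = 200
eps = rng.choice([-1, 1], size=n)
S = np.concatenate([[0], np.cumsum(eps)])
X, A = S**2, np.arange(n + 1, dtype=float)
M = X - X[0] - A

xi = np.sign(S[:-1])  # predictable: xi_k depends only on information at time k-1

def integral(H, Y):
    """Discrete stochastic integral: cumulative sum of H_k * (Y_k - Y_{k-1})."""
    return np.concatenate([[0.0], np.cumsum(H * np.diff(Y))])

lhs = integral(xi, X)
rhs = integral(xi, M) + integral(xi, A)
print(np.allclose(lhs, rhs))  # True: (xi . X) = (xi . M) + (xi . A)
```

In discrete time the split is automatic; the point of Theorem 3 below is that in continuous time the two integrals on the right need not separately exist without the local integrability assumption.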

In some approaches, the compatibility of integration with the canonical decomposition is used in the construction of the stochastic integral. This is the case, for example, in Protter (*Stochastic Integration and Differential Equations*). There, a predictable process $\xi$ is said to be *X*-integrable if there exists a sequence of stopping times $\tau_n$ increasing to infinity such that the pre-stopped processes $X^{\tau_n-}$ are locally integrable and $\xi$ satisfies property 3 of Theorem 3 with respect to the decomposition of $X^{\tau_n-}$.

Compare Theorem 3 with the more general decomposition of a semimartingale into local martingale and FV terms given by the Bichteler-Dellacherie theorem. In that case, it is possible for a predictable process to be *X*-integrable but not be integrable with respect to either of the terms in the decomposition. This can happen even when the integral is very well behaved (such as locally integrable, locally bounded, etc). It can be seen that, as a consequence of Theorem 3, this definition of *X*-integrable processes coincides with that used in these notes.

**Theorem 3** *Let X be a special semimartingale and $X=M+A$ be decomposition (2). Then, given a predictable process $\xi$, the following are equivalent.*

1. *$\xi$ is X-integrable and $\int\xi\,dX$ is locally integrable.*
2. *$\xi$ is both M-integrable and A-integrable, such that $\int\xi\,dM$ is a local martingale.*
3. *$\left(\int_0^\cdot\xi^2\,d[M]\right)^{1/2}+\int_0^\cdot|\xi|\,|dA|$ is locally integrable.*

*Then,*

$\displaystyle \int\xi\,dX=\int\xi\,dM+\int\xi\,dA$ (3)

*is the unique decomposition of $\int\xi\,dX$ into a local martingale and predictable FV process starting from 0.*

*Proof:* *1 ⇒ 2:* As $\int\xi\,dX$ is a locally integrable semimartingale, decomposition (2) can be applied to write $\int\xi\,dX=N+B$ for a local martingale *N* and predictable FV process *B* with $B_0=0$. Choose a bounded nonzero predictable process $\alpha$ such that $\alpha\xi$ is bounded. For example, $\alpha=1/(1+|\xi|)$. Then,

$\displaystyle \int\alpha\xi\,dX=\int\alpha\,dN+\int\alpha\,dB.$

However, $\int\alpha\xi\,dM$ and $\int\alpha\,dN$ are local martingales, and $\int\alpha\xi\,dA$ and $\int\alpha\,dB$ are predictable FV processes. So, by the uniqueness of decomposition (2) applied to $\int\alpha\xi\,dX$,

$\displaystyle \int\alpha\xi\,dM=\int\alpha\,dN,\qquad\int\alpha\xi\,dA=\int\alpha\,dB.$

Integrating $\alpha^{-1}$ with respect to both sides and applying associativity of integration, we see that $\xi$ is *M*-integrable with $\int\xi\,dM=N$ being a local martingale, and that $\xi$ is *A*-integrable with $\int\xi\,dA=B$ being a predictable FV process. This shows that the second statement of the theorem and decomposition (3) both hold.

*2 ⇒ 3:* By assumption, $\int\xi\,dM$ is a local martingale, so it is locally integrable. Therefore, its jumps $\xi\Delta M$ are locally integrable. As the increasing process $\left(\int_0^\cdot\xi^2\,d[M]\right)^{1/2}$ has jumps bounded by $|\xi\Delta M|$, it is locally integrable. Next, as $\xi$ is *A*-integrable, $\int_0^t|\xi|\,|dA|$ is almost surely finite at all times. Since this integral is also predictable, it is locally integrable. Adding these together,

$\displaystyle \left(\int_0^\cdot\xi^2\,d[M]\right)^{1/2}+\int_0^\cdot|\xi|\,|dA|$

is locally integrable.

*3 ⇒ 1:* It was previously shown that, as a consequence of the Burkholder-Davis-Gundy inequality, local integrability of $\left(\int\xi^2\,d[M]\right)^{1/2}$ is enough to guarantee that $\xi$ is *M*-integrable and that $\int\xi\,dM$ is a local martingale, hence locally integrable. Also, local integrability of $\int|\xi|\,|dA|$ is enough to infer that $\xi$ is *A*-integrable and, being bounded by $\int|\xi|\,|dA|$, the integral $\int\xi\,dA$ is also locally integrable. Adding these together shows that $\xi$ is *X*-integrable and

$\displaystyle \int\xi\,dX=\int\xi\,dM+\int\xi\,dA$

is locally integrable as required. ⬜

Hi,

There is a notation that I don’t fully understand in Lemma 2 and after: what do you mean exactly by $\int_0^\tau|dX|$?

For example, I don’t see what it means for a Brownian Motion.

Regards

Comment by TheBridge — 3 October 11 @ 4:21 PM

That’s just the variation of *X* on the range $[0,\tau]$. It is infinite for Brownian motion. Maybe I could have been a bit clearer. The inequality is trivial for processes with infinite variation.

Comment by George Lowther — 3 October 11 @ 4:37 PM

Ok, I got it, thanks.

Comment by TheBridge — 3 October 11 @ 4:46 PM

I added an extra sentence just before Lemma 2 explaining the notation.

Comment by George Lowther — 3 October 11 @ 11:44 PM

Hi,

In theorem 3, point 2, given the fact that we know that $\xi$ is *M*-integrable, doesn’t it automatically imply that $\int\xi\,dM$ is a local martingale?

If true, I find the idea of pointing that out a little bit superfluous; if not true, then I can’t see a counterexample of a predictable, *M*-integrable $\xi$ such that $\int\xi\,dM$ is not a local martingale.

Best regards, and thanks for this new post that sheds some light on places (of stochastic integration) where I couldn’t even realise there was darkness.

Comment by TheBridge — 4 October 11 @ 5:18 PM

Hi. Actually, just knowing that $\xi$ is *M*-integrable is *not* enough to be able to say that $\int\xi\,dM$ is a local martingale. If $\xi$ is locally bounded then it is enough. More generally, if $\int\xi\,dM$ is locally integrable then it is a local martingale (as local martingales are locally integrable, this is an ‘if and only if’ condition). See my earlier post Preservation of the Local Martingale Property.

There do exist local martingales *M* and *M*-integrable predictable processes $\xi$ such that $\int\xi\,dM$ is not locally integrable, and not a local martingale. I have an example in the same post linked above (Loss of the Local Martingale Property). So, it does have to be explicitly stated.

Hope that helps.

Comment by George Lowther — 4 October 11 @ 9:03 PM

Ok, thanks for those detailed explanations.

Best regards

Comment by TheBridge — 5 October 11 @ 8:17 AM

Hi,

I have a suggestion about this sentence in the proof of theorem 1,

“As X is locally integrable, there exists a sequence of finite stopping times increasing to infinity such that $\sup_{t\le\sigma_k}|X_t|$ is integrable.”

It might be worth hyperlinking this assertion, for the reader, to “lemma 8” from the post “localization” of 23 December 2009.

Best regards

Comment by TheBridge — 5 October 11 @ 9:43 AM

Just did. Thanks.

Comment by George Lowther — 5 October 11 @ 10:28 PM

Hi,

I have yet another question about the proof of Theorem 1. When you claim that $\sup_{t\le T}|A^n_t-A_t|\to0$ a.s. entails that *A* is predictable, I can’t really see why this is obvious from the context. Could you elaborate a little more on this?

Best regards

Comment by Anonymous — 9 October 11 @ 4:57 PM

The limit of a sequence of real-valued measurable functions is itself measurable (from any measurable space to the real numbers, with the Borel sigma algebra). More precisely, if $(E,\mathcal{E})$ is a measurable space and $f_n\colon E\to\mathbb{R}$ is a sequence of measurable functions then $S=\{x\in E\colon\lim_nf_n(x)\text{ exists}\}$ is in $\mathcal{E}$, and the limit $\lim_nf_n$ is a measurable function on *S*. Apply this to the processes $A^n$, thought of as measurable functions with respect to the predictable sigma algebra.

I’ll add an extra line to the post to try and make this a bit clearer.

Comment by George Lowther — 11 October 11 @ 1:53 AM

Hi George,

Using your line of argument, would it be correct to claim that if we have a sequence of measurable functions $f_n\colon E\to F$, where *F* is a metric space (or maybe Polish?) together with the borelian sigma algebra associated to the distance topology, then $S=\{x\in E\colon\lim_nf_n(x)\text{ exists}\}$ is a measurable set and $\lim_nf_n$ is measurable on *S*?

Using this with $f_n=A^n$, where the domain is endowed with the predictable sigma field, would yield the result, wouldn’t it?

Best Regards

Comment by TheBridge — 11 October 11 @ 1:03 PM

I think so, but that is more complicated than necessary. It’s more direct to consider the $A^n$ as functions from $\mathbb{R}_+\times\Omega$, with the predictable sigma algebra, to the real numbers with the Borel sigma algebra.

Btw, what space are you using in your comment? It sounds like you are taking it to be the underlying probability space, but the predictable sigma algebra is defined on $\mathbb{R}_+\times\Omega$.

Comment by George Lowther — 11 October 11 @ 11:58 PM

Ok I got it,

Reading your answer, I indeed suspected I was over-complexifying things.

That’s because (in my mind) processes take values in the càdlàg space, and for each $\omega$ the canonical application gives a càdlàg function (in this precise context).

Seeing the predictable sigma algebra over the space $\mathbb{R}_+\times\Omega$ really makes life much easier.

I think there must be some measurable application between the two viewpoints that makes things strictly equivalent here, but I’m not sure this really helps clarify things.

Best regards

Comment by TheBridge — 12 October 11 @ 7:36 AM