In the previous post, I looked at optional projection. Given a non-adapted process *X*, we construct a new, adapted, process *Y* by taking the expected value of $X_t$ conditional on the information available up until time *t*. I will now concentrate on *predictable* projection. This is a very similar concept, except that we now condition on the information available strictly before time *t*.

It will be assumed, throughout this post, that the underlying filtered probability space satisfies the usual conditions, meaning that it is complete and right-continuous. This is just for convenience, as most of the results stated here extend easily to non-right-continuous filtrations. The sigma-algebra

$$\mathcal{F}_{t-} = \sigma\left(\mathcal{F}_s \colon s < t\right)$$

represents the collection of events which are observable before time *t* and, by convention, we take $\mathcal{F}_{0-} = \mathcal{F}_0$. Then, the conditional expectation of *X* is written as,

$$Y_t = \mathbb{E}\left[X_t \mid \mathcal{F}_{t-}\right]\quad\text{(a.s.)}\tag{1}$$

By definition, *Y* is adapted. However, at each time, (1) only defines *Y* up to a zero probability set. It does not determine the paths of *Y*, which requires specifying its values simultaneously at the uncountable set of times in $\mathbb{R}_+$. So, (1) does not tell us the distribution of *Y* at random times, and it is necessary to specify an appropriate version for *Y*. Predictable projection gives a uniquely defined modification satisfying (1). The full theory of predictable projection for jointly measurable processes requires the predictable section theorem. However, as I demonstrate here, in the case where *X* is left-continuous, predictable projection can be done by more elementary methods. The statements and most of the proofs in this post will follow very closely those given previously for optional projection. The main differences are that left and right limits are exchanged, predictable stopping times are used in place of general stopping times, and the sigma-algebra $\mathcal{F}_{t-}$ is used in place of $\mathcal{F}_t$.

Stochastic processes will be defined up to evanescence, so two processes are considered to be the same if they are equal up to evanescence. In order to apply (1), some integrability requirements need to be imposed. I will use local integrability. Recall that, in these notes, a process *X* is locally integrable if there exists a sequence $\tau_n$ of stopping times increasing to infinity and such that

$$1_{\{\tau_n > 0\}}\sup_{t \le \tau_n}\lvert X_t\rvert\tag{2}$$

is integrable. This is a strong enough condition for the conditional expectation (1) to exist, not just at each fixed time, but also whenever *t* is a stopping time. The main result of this post can now be stated.
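As a quick illustration of this definition — my own addition, not part of the original argument — every continuous adapted process is locally integrable, using the hitting times of the levels *n* as the localizing sequence:

```latex
% Example: a continuous adapted process X is locally integrable.
% Take the stopping times
\tau_n = \inf\left\{t \ge 0 \colon \lvert X_t\rvert \ge n\right\},
% which increase to infinity, as the paths of X are bounded on compact
% time intervals. On the event \{\tau_n > 0\} we have |X_0| < n and, by
% continuity of the paths, |X| cannot exceed n before time \tau_n, so
1_{\{\tau_n > 0\}} \sup_{t \le \tau_n} \lvert X_t\rvert \le n,
% and the quantity in (2) is bounded, hence integrable.
```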

**Theorem 1 (Predictable Projection)** *Let X be a left-continuous and locally integrable process. Then, there exists a unique left-continuous process Y satisfying (1).*

As it is left-continuous, the fact that *Y* is specified, almost surely, at any time *t* by (1) means that it is uniquely determined up to evanescence. The main content of Theorem 1 is the existence of *Y*, and the proof of this is left until later in this post.
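The uniqueness argument can be spelled out in a couple of lines; the following sketch is my own paraphrase of the standard reasoning:

```latex
% If Y and Y' are left-continuous and both satisfy (1) then, for each
% fixed t, Y_t = Y'_t almost surely. Taking a single null set outside of
% which
Y_q = Y'_q \text{ for all rational } q \ge 0,
% left-continuity gives, for every t > 0,
Y_t = \lim_{q \uparrow t,\, q \in \mathbb{Q}} Y_q
    = \lim_{q \uparrow t,\, q \in \mathbb{Q}} Y'_q
    = Y'_t,
% and Y_0 = Y'_0 almost surely. So, Y and Y' agree up to evanescence.
```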

The process defined by Theorem 1 is called the *predictable projection* of *X*, and is denoted by ${}^{\mathrm{p}}X$. So, ${}^{\mathrm{p}}X$ is the unique left-continuous process satisfying

$${}^{\mathrm{p}}X_t = \mathbb{E}\left[X_t \mid \mathcal{F}_{t-}\right]\quad\text{(a.s.)}\tag{3}$$

for all times *t*. In practice, *X* will usually not just be left-continuous, but will also have right limits everywhere. That is, it is *caglad* (“continu à gauche, limites à droite”).

**Theorem 2** *Let X be a caglad and locally integrable process. Then, its predictable projection is caglad.*

The simplest non-trivial example of predictable projection is where $X_t = U$ is constant in *t*, for an integrable random variable *U*. Then, ${}^{\mathrm{p}}X$ is the process of left limits $M_{t-}$ of the cadlag martingale $M_t = \mathbb{E}[U \mid \mathcal{F}_t]$, so is easily seen to be a caglad process.
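For completeness, here is a short verification of this example — my own sketch, using martingale convergence:

```latex
% Let M be a cadlag version of the martingale
M_t = \mathbb{E}\left[U \mid \mathcal{F}_t\right],
% which exists as the filtration satisfies the usual conditions.
% Martingale convergence along times s increasing to t gives,
% almost surely,
M_{t-} = \lim_{s \uparrow t} \mathbb{E}\left[U \mid \mathcal{F}_s\right]
       = \mathbb{E}\left[U \mid \mathcal{F}_{t-}\right],
% so the left-limits process t \mapsto M_{t-} is left-continuous and
% satisfies (3), identifying it as the predictable projection of X_t = U.
```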