Almost Sure

24 February 19

Properties of the Dual Projections

In the previous post I introduced the definitions of the dual optional and predictable projections, first for processes of integrable variation and then generalised to processes which are only required to be locally (or prelocally) of integrable variation. We did not look at the properties of these dual projections beyond the fact that they exist and are uniquely defined, although this is itself a significant and important statement.

To recap, recall that an IV process, A, is right-continuous and such that its variation

\displaystyle  V_t\equiv \lvert A_0\rvert+\int_0^t\,\lvert dA\rvert (1)

is integrable at time {t=\infty}, so that {{\mathbb E}[V_\infty] < \infty}. The dual optional projection is defined for processes which are prelocally IV. That is, A has a dual optional projection {A^{\rm o}} if it is right-continuous and its variation process is prelocally integrable, so that there exists a sequence {\tau_n} of stopping times increasing to infinity with {1_{\{\tau_n > 0\}}V_{\tau_n-}} integrable. More generally, A is a raw FV process if it is right-continuous with almost-surely finite variation over finite time intervals, so {V_t < \infty} (a.s.) for all {t\in{\mathbb R}^+}. Then, if a jointly measurable process {\xi} is A-integrable on finite time intervals, we use

\displaystyle  \xi\cdot A_t\equiv\xi_0A_0+\int_0^t\xi\,dA

to denote the integral of {\xi} with respect to A over the interval {[0,t]}, which takes into account the value of {\xi} at time 0 (unlike the integral {\int_0^t\xi\,dA} which, implicitly, is defined on the interval {(0,t]}). In what follows, whenever we state that {\xi\cdot A} has any properties, such as being IV or prelocally IV, we are also including the statement that {\xi} is A-integrable so that {\xi\cdot A} is a well-defined process. Also, whenever we state that a process has a dual optional projection, then we are also implicitly stating that it is prelocally IV.
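
As a concrete illustration of this convention, the following is a minimal sketch of my own (not from the post) evaluating {\xi\cdot A_t} along a single pure-jump FV sample path, where the integral over {(0,t]} reduces to a sum over the jump times and the time-zero term {\xi_0A_0} is added separately.

```python
# Minimal sketch (not from the post): xi.A_t = xi_0*A_0 + integral over (0,t] of xi dA,
# evaluated along one pure-jump sample path of A with finitely many jumps, where the
# Lebesgue-Stieltjes integral over (0, t] reduces to a sum over the jump times.

def xi_dot_A(xi, A0, jump_times, jump_sizes, t):
    """Evaluate xi.A_t for a pure-jump path A_s = A0 + (sum of jumps in (0, s])."""
    total = xi(0.0) * A0                      # the time-zero term xi_0 * A_0
    for s, dA in zip(jump_times, jump_sizes):
        if 0.0 < s <= t:                      # the integral over (0, t] only sees jumps in (0, t]
            total += xi(s) * dA
    return total

# Example: A_0 = 1, jumps of +2 at time 0.5 and -1 at time 1.5, integrand xi_s = s + 1.
print(xi_dot_A(lambda s: s + 1.0, 1.0, [0.5, 1.5], [2.0, -1.0], t=2.0))  # 1 + 3 - 2.5 = 1.5
```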

From theorem 3 of the previous post, the dual optional projection {A^{\rm o}} is the unique prelocally IV process satisfying

\displaystyle  {\mathbb E}[\xi\cdot A^{\rm o}_\infty]={\mathbb E}[{}^{\rm o}\xi\cdot A_\infty]

for all measurable processes {\xi} with optional projection {{}^{\rm o}\xi} such that {\xi\cdot A^{\rm o}} and {{}^{\rm o}\xi\cdot A} are IV. Equivalently, {A^{\rm o}} is the unique optional FV process such that

\displaystyle  {\mathbb E}[\xi\cdot A^{\rm o}_\infty]={\mathbb E}[\xi\cdot A_\infty]

for all optional {\xi} such that {\xi\cdot A} is IV, in which case {\xi\cdot A^{\rm o}} is also IV so that the expectations in this identity are well-defined.

I now look at the elementary properties of dual optional projections, as well as the corresponding properties of dual predictable projections. The most important property is that, according to the definition just stated, the dual projection exists and is uniquely defined. By comparison, the properties considered in this post are elementary and relatively easy to prove. So, I will simply state a theorem consisting of a list of all the properties under consideration, and will then run through their proofs. Starting with the dual optional projection, the main properties are listed below as Theorem 1.

Note that the first three statements are saying that the dual projection is indeed a linear projection from the prelocally IV processes onto the linear subspace of optional FV processes. As explained in the previous post, by comparison with the discrete-time setting, the dual optional projection can be expressed, in a non-rigorous sense, as taking the optional projection of the infinitesimal increments,

\displaystyle  dA^{\rm o}={}^{\rm o}dA. (2)

As {dA} is interpreted via the Lebesgue-Stieltjes integral {\int\cdot\,dA}, it is a random measure rather than a real-valued process. So, the optional projection of {dA} appearing in (2) does not really make sense. However, Theorem 1 does allow us to make sense of (2) in certain restricted cases. For example, if A is differentiable so that {dA=\xi\,dt} for a process {\xi}, then (9) below gives {dA^{\rm o}={}^{\rm o}\xi\,dt}. This agrees with (2) so long as {{}^{\rm o}(\xi\,dt)} is interpreted to mean {{}^{\rm o}\xi\,dt}. Also, restricting to the jump component of the increments, {\Delta A=A-A_-}, (2) reduces to (11) below.
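
To spell out the differentiable case, here is a brief derivation added for clarity; it assumes that {\xi\cdot B} is prelocally IV, where {B_t\equiv t} is a deterministic, and hence optional, FV process with {B_0=0}. Applying (9), stated below, with B in place of A,

\displaystyle  A_t=\int_0^t\xi_s\,ds=\xi\cdot B_t\quad\Longrightarrow\quad A^{\rm o}_t=(\xi\cdot B)^{\rm o}_t={}^{\rm o}\xi\cdot B_t=\int_0^t{}^{\rm o}\xi_s\,ds,

so the dual optional projection of a differentiable process is obtained by optionally projecting its derivative.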

We defined the dual projection via expectations of integrals {\xi\cdot A} with the restriction that this is IV. An alternative approach is to first define the dual projections for IV processes, as was done in theorems 1 and 2 of the previous post, and then extend to (pre)locally IV processes by localisation of the projection. That this is consistent with our definitions follows from the fact that (pre)localisation commutes with the dual projection, as stated in (10) below.

Theorem 1

  1. A raw FV process A is optional if and only if {A^{\rm o}} exists and is equal to A.
  2. If the dual optional projection of A exists then,
    \displaystyle  (A^{\rm o})^{\rm o}=A^{\rm o}. (3)
  3. If the dual optional projections of A and B exist, and {\lambda}, {\mu} are {\mathcal F_0}-measurable random variables then,
    \displaystyle  (\lambda A+\mu B)^{\rm o}=\lambda A^{\rm o}+\mu B^{\rm o}. (4)
  4. If the dual optional projection {A^{\rm o}} exists then {{\mathbb E}[\lvert A_0\rvert\,\vert\mathcal F_0]} is almost-surely finite and
    \displaystyle  A^{\rm o}_0={\mathbb E}[A_0\,\vert\mathcal F_0]. (5)
  5. If U is a random variable and {\tau} is a stopping time, then {U1_{[\tau,\infty)}} is prelocally IV if and only if {{\mathbb E}[1_{\{\tau < \infty\}}\lvert U\rvert\,\vert\mathcal F_\tau]} is almost surely finite, in which case
    \displaystyle  \left(U1_{[\tau,\infty)}\right)^{\rm o}={\mathbb E}[1_{\{\tau < \infty\}}U\,\vert\mathcal F_\tau]1_{[\tau,\infty)}. (6)
  6. If the prelocally IV process A is nonnegative and increasing then so is {A^{\rm o}} and,
    \displaystyle  {\mathbb E}[\xi\cdot A^{\rm o}_\infty]={\mathbb E}[{}^{\rm o}\xi\cdot A_\infty] (7)

    for all nonnegative measurable {\xi} with optional projection {{}^{\rm o}\xi}. If A is merely increasing then so is {A^{\rm o}} and (7) holds for nonnegative measurable {\xi} with {\xi_0=0}.

  7. If A has dual optional projection {A^{\rm o}} and {\xi} is an optional process such that {\xi\cdot A} is prelocally IV then, {\xi} is {A^{\rm o}}-integrable and,
    \displaystyle  (\xi\cdot A)^{\rm o}=\xi\cdot A^{\rm o}. (8)
  8. If A is an optional FV process and {\xi} is a measurable process with optional projection {{}^{\rm o}\xi} such that {\xi\cdot A} is prelocally IV then, {{}^{\rm o}\xi} is A-integrable and,
    \displaystyle  (\xi\cdot A)^{\rm o}={}^{\rm o}\xi\cdot A. (9)
  9. If A has dual optional projection {A^{\rm o}} and {\tau} is a stopping time then,
    \displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle(A^{\tau})^{\rm o}=(A^{\rm o})^{\tau},\smallskip\\ &\displaystyle(A^{\tau-})^{\rm o}=(A^{\rm o})^{\tau-}. \end{array} (10)
  10. If the dual optional projection {A^{\rm o}} exists, then its jump process is the optional projection of the jump process of A,
    \displaystyle  \Delta A^{\rm o}={}^{\rm o}\!\Delta A. (11)
  11. If A has dual optional projection {A^{\rm o}} then
    \displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle{\mathbb E}\left[\xi_0\lvert A^{\rm o}_0\rvert + \int_0^\infty\xi\,\lvert dA^{\rm o}\rvert\right]\le{\mathbb E}\left[{}^{\rm o}\xi_0\lvert A_0\rvert + \int_0^\infty{}^{\rm o}\xi\,\lvert dA\rvert\right],\smallskip\\ &\displaystyle{\mathbb E}\left[\xi_0(A^{\rm o}_0)_+ + \int_0^\infty\xi\,(dA^{\rm o})_+\right]\le{\mathbb E}\left[{}^{\rm o}\xi_0(A_0)_+ + \int_0^\infty{}^{\rm o}\xi\,(dA)_+\right],\smallskip\\ &\displaystyle{\mathbb E}\left[\xi_0(A^{\rm o}_0)_- + \int_0^\infty\xi\,(dA^{\rm o})_-\right]\le{\mathbb E}\left[{}^{\rm o}\xi_0(A_0)_- + \int_0^\infty{}^{\rm o}\xi\,(dA)_-\right], \end{array} (12)

    for all nonnegative measurable {\xi} with optional projection {{}^{\rm o}\xi}.

  12. Let {\{A^n\}_{n=1,2,\ldots}} be a sequence of right-continuous processes with variation

    \displaystyle  V^n_t=\lvert A^n_0\rvert + \int_0^t\lvert dA^n\rvert.

    If {\sum_n V^n} is prelocally IV then,

    \displaystyle  \left(\sum\nolimits_n A^n\right)^{\rm o}=\sum\nolimits_n\left(A^n\right)^{\rm o}. (13)
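
To illustrate statement 5 of the theorem, here is a small Monte Carlo sketch; the specific setup is my own choice and not part of the post. Take {\tau} exponentially distributed, the filtration generated by the single-jump process {1_{[\tau,\infty)}}, and {U=f(\tau)+\varepsilon} with {\varepsilon} independent of {\tau}, so that {{\mathbb E}[U\,\vert\mathcal F_\tau]=f(\tau)}. For a bounded deterministic (hence optional) integrand {\xi_t=g(t)}, equation (6) says that {{\mathbb E}[g(\tau)U]} and {{\mathbb E}[g(\tau)f(\tau)]} should coincide, which the simulation below checks.

```python
# Monte Carlo sketch of equation (6): A = U 1_{[tau,oo)} has dual optional projection
# A^o = E[U | F_tau] 1_{[tau,oo)}.  The setup is an illustrative choice, not from the
# post: tau ~ Exp(1), the filtration is generated by the single jump at tau, and
# U = f(tau) + eps with eps independent of tau, so that E[U | F_tau] = f(tau).
# For a deterministic integrand xi_t = g(t),
#     xi.A_oo = g(tau) U     and     xi.A^o_oo = g(tau) f(tau),
# and their expectations should agree.
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
tau = rng.exponential(1.0, n)           # the stopping time
eps = rng.normal(0.0, 1.0, n)           # noise independent of tau
f = lambda t: np.sin(t)                 # so E[U | F_tau] = f(tau)
g = lambda t: np.exp(-t)                # bounded deterministic integrand xi_t = g(t)
U = f(tau) + eps

print(np.mean(g(tau) * U))              # estimate of E[xi . A_oo]
print(np.mean(g(tau) * f(tau)))         # estimate of E[xi . A^o_oo]; agrees up to Monte Carlo error
```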


8 February 19

Dual Projections

The optional and predictable projections of stochastic processes have corresponding dual projections, which are the subject of this post. I will be concerned with their initial construction here, and show that they are well-defined. The study of their properties will be left until later. In the discrete time setting, the dual projections are relatively straightforward, and can be constructed by applying the optional and predictable projection to the increments of the process. In continuous time, we no longer have discrete time increments along which we can define the dual projections. In some sense, they can still be thought of as projections of the infinitesimal increments so that, for a process A, the increments of the dual projections {A^{\rm o}} and {A^{\rm p}} are determined from the increments {dA} of A as

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle dA^{\rm o}={}^{\rm o}(dA),\smallskip\\ &\displaystyle dA^{\rm p}={}^{\rm p}(dA). \end{array} (1)

Unfortunately, these expressions are difficult to make sense of in general. In specific cases, (1) can be interpreted in a simple way. For example, when A is differentiable with derivative {\xi}, so that {dA=\xi dt}, then the dual projections are given by {dA^{\rm o}={}^{\rm o}\xi dt} and {dA^{\rm p}={}^{\rm p}\xi dt}. More generally, if A is right-continuous with finite variation, then the infinitesimal increments {dA} can be interpreted in terms of Lebesgue-Stieltjes integrals. However, as the optional and predictable projections are defined for real valued processes, and {dA} is viewed as a stochastic measure, the right-hand-side of (1) is still problematic. This can be rectified by multiplying by an arbitrary process {\xi}, and making use of the transitivity property {{\mathbb E}[\xi\,{}^{\rm o}(dA)]={\mathbb E}[({}^{\rm o}\xi)dA]}. Integrating over time gives the more meaningful expressions

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle {\mathbb E}\left[\int_0^\infty \xi\,dA^{\rm o}\right]={\mathbb E}\left[\int_0^\infty{}^{\rm o}\xi\,dA\right],\smallskip\\ &\displaystyle{\mathbb E}\left[\int_0^\infty \xi\,dA^{\rm p}\right]={\mathbb E}\left[\int_0^\infty{}^{\rm p}\xi\,dA\right]. \end{array}

In contrast to (1), these equalities can be used to give mathematically rigorous definitions of the dual projections. As usual, we work with respect to a complete filtered probability space {(\Omega,\mathcal F,\{\mathcal F_t\}_{t\ge0},{\mathbb P})}, and processes are identified whenever they are equal up to evanescence. The terminology 'raw IV process' will be used to refer to any right-continuous integrable process whose variation on the whole of {{\mathbb R}^+} has finite expectation. The use of the word 'raw' here is just to signify that we are not requiring the process to be adapted. Next, to simplify the expressions, I will use the notation {\xi\cdot A} for the integral of a process {\xi} with respect to another process A,

\displaystyle  \xi\cdot A_t\equiv\xi_0A_0+\int_0^t\xi\,dA.

Note that, whereas the integral {\int_0^t\xi\,dA} is implicitly taken over the range {(0,t]} and does not involve the time-zero value of {\xi}, I have included the time-zero values of the processes in the definition of {\xi\cdot A}. This is not essential, and could be excluded, so long as we were to restrict to processes starting from zero. The existence and uniqueness (up to evanescence) of the dual projections is given by the following result.

Theorem 1 (Dual Projections) Let A be a raw IV process. Then,

  • There exists a unique raw IV process {A^{\rm o}} satisfying
    \displaystyle  {\mathbb E}\left[\xi\cdot A^{\rm o}_\infty\right]={\mathbb E}\left[{}^{\rm o}\xi\cdot A_\infty\right] (2)

    for all bounded measurable processes {\xi}. We refer to {A^{\rm o}} as the dual optional projection of A.

  • There exists a unique raw IV process {A^{\rm p}} satisfying
    \displaystyle  {\mathbb E}\left[\xi\cdot A^{\rm p}_\infty\right]={\mathbb E}\left[{}^{\rm p}\xi\cdot A_\infty\right] (3)

    for all bounded measurable processes {\xi}. We refer to {A^{\rm p}} as the dual predictable projection of A.

Furthermore, if A is nonnegative and increasing then so are {A^{\rm o}} and {A^{\rm p}}.
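
A standard example, added here as an illustration: for a Poisson process N of rate {\lambda} stopped at a fixed time T, the stopped process {N^T} is IV and its dual predictable projection (compensator) is {\lambda(t\wedge T)}, so (3) gives {{\mathbb E}[\int_0^T\xi_t\,dN_t]=\lambda\int_0^T\xi_t\,dt} for bounded deterministic (hence predictable) integrands {\xi}. The Monte Carlo sketch below checks this identity; the rate, horizon and integrand are arbitrary choices of mine.

```python
# Monte Carlo sketch (illustrative choices of rate, horizon and integrand): the
# compensator of a Poisson process N of rate lam, stopped at T, is lam*min(t, T), so
# for a bounded deterministic integrand xi,
#     E[ integral over (0, T] of xi dN ]  =  lam * integral_0^T xi(t) dt.
import numpy as np

rng = np.random.default_rng(1)
lam, T, n = 2.0, 5.0, 20000
xi = lambda t: np.cos(t) ** 2                      # deterministic, hence predictable

lhs_samples = []
for _ in range(n):
    num_jumps = rng.poisson(lam * T)
    jump_times = rng.uniform(0.0, T, num_jumps)    # given the count, jump times are iid uniform on [0, T]
    lhs_samples.append(xi(jump_times).sum())       # integral of xi against dN along this path
lhs = np.mean(lhs_samples)

ts = np.linspace(0.0, T, 200001)
rhs = lam * np.mean(xi(ts)) * T                    # simple numerical approximation of lam * int_0^T xi dt

print(lhs, rhs)                                    # should agree up to Monte Carlo error
```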


21 January 19

Pathwise Properties of Optional and Predictable Projections

Recall that the optional and predictable projections of a process are defined, firstly, by a measurability property and, secondly, by their values at stopping times. Namely, the optional projection is measurable with respect to the optional sigma-algebra, and its value is defined at each stopping time by a conditional expectation of the original process. Similarly, the predictable projection is measurable with respect to the predictable sigma-algebra and its value at each predictable stopping time is given by a conditional expectation. While these definitions can be powerful, and many properties of the projections follow immediately, they say very little about the sample paths. Given a stochastic process X defined on a filtered probability space {(\Omega,\mathcal F,\{\mathcal F_t\}_{t\ge0},{\mathbb P})} with optional projection {{}^{\rm o}\!X} then, for each {\omega\in\Omega}, we may be interested in the sample path {t\mapsto{}^{\rm o}\!X_t(\omega)}. For example, is it continuous, right-continuous, cadlag, etc? Answering these questions requires looking at {{}^{\rm o}\!X_t(\omega)} simultaneously at the uncountable set of times {t\in{\mathbb R}^+}, so the definition of the projection given by specifying its values at each individual stopping time, up to almost-sure equivalence, is not easy to work with. I did establish some of the basic properties of the projections in the previous post, but these do not say much regarding sample paths.

I will now establish the basic properties of the sample paths of the projections. Although these results are quite advanced, most of the work has already been done in these notes when we established some pathwise properties of optional and predictable processes in terms of their behaviour along sequences of stopping times, and of predictable stopping times. So, the proofs in this post are relatively simple and will consist of applications of these earlier results.

Before proceeding, let us consider what kind of properties it is reasonable to expect of the projections. Firstly, it does not seem reasonable to expect the optional projection {{}^{\rm o}\!X} or the predictable projection {{}^{\rm p}\!X} to satisfy properties not held by the original process X. Therefore, in this post, we will be concerned with the sample path properties which are preserved by the projections. Consider a process with constant paths. That is, {X_t=U} at all times t, for some bounded random variable U. Its sample paths are about as simple as possible, so any properties preserved by the projections should hold for the optional and predictable projections of X. However, we know what the projections of this process are. Letting M be the martingale defined by {M_t={\mathbb E}[U\,\vert\mathcal F_t]} then, assuming that the underlying filtration is right-continuous, M has a cadlag modification and, furthermore, this modification is the optional projection of X. So, assuming that the filtration is right-continuous, the optional projection of X is cadlag, meaning that it is right-continuous and has left limits everywhere. So, we can hope that the optional projection preserves these properties. If the filtration is not right-continuous, then M need not have a cadlag modification, so we cannot expect optional projection to preserve right-continuity in this case. However, M does still have a version with left and right limits everywhere, which is the optional projection of X. So, without assuming right-continuity of the filtration, we may still hope that the optional projection preserves the existence of left and right limits of a process. Next, the predictable projection is equal to the left limits, {{}^{\rm p}\!X_t=M_{t-}}, which is left-continuous with left and right limits everywhere. Therefore, we can hope that predictable projections preserve left-continuity and the existence of left and right limits. The existence of cadlag martingales which are not continuous, such as the compensated Poisson process, implies that optional projections do not generally preserve left-continuity and the predictable projection does not preserve right-continuity.
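
To make the constant-path example concrete, here is a small simulation sketch of my own (not from the post). Taking the Brownian filtration and {U=B_1^2}, the martingale {M_t={\mathbb E}[U\,\vert\mathcal F_t]} has the explicit form {M_t=B_t^2+(1-t)} for {t\le1}, and this is the optional projection of the constant-path process {X_t=U}. The code checks the defining property of the conditional expectation at a fixed time against a test function of the time-t information.

```python
# Simulation sketch (my own illustration): with the Brownian filtration and U = B_1^2,
# the martingale M_t = E[U | F_t] = B_t^2 + (1 - t) for t <= 1 is the optional
# projection of the constant-path process X_t = U.  We check the defining property of
# the conditional expectation at a fixed time t,
#     E[ B_1^2 h(B_t) ] = E[ (B_t^2 + 1 - t) h(B_t) ],
# for a bounded test function h of the time-t information.  The choice of t and h is arbitrary.
import numpy as np

rng = np.random.default_rng(2)
n, t = 10**6, 0.3
B_t = rng.normal(0.0, np.sqrt(t), n)                 # B_t
B_1 = B_t + rng.normal(0.0, np.sqrt(1.0 - t), n)     # B_1 = B_t + independent increment
h = lambda x: np.tanh(x) + 1.0                       # bounded F_t-measurable test function

print(np.mean(B_1**2 * h(B_t)))                      # E[U h(B_t)]
print(np.mean((B_t**2 + 1.0 - t) * h(B_t)))          # E[M_t h(B_t)]; should agree
```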

Recall that I previously constructed a version of the optional projection and the predictable projection for processes which are, respectively, right-continuous and left-continuous. This was done by defining the projection at each deterministic time and, then, enforcing the respective properties of the sample paths. We can use the results in those posts to infer that the projections do indeed preserve these properties, although I will now give more direct proofs in greater generality, using the more general definition of the optional and predictable projections.

We work with respect to a complete filtered probability space {(\Omega,\mathcal F,\{\mathcal F_t\}_{t\ge0},{\mathbb P})}. As usual, we say that the sample paths of a process satisfy any stated property if they satisfy it up to evanescence. Since integrability conditions will be required, I mention those now. Recall that a process X is of class (D) if the set of random variables {X_\tau}, over stopping times {\tau}, is uniformly integrable. It will be said to be locally of class (D) if there is a sequence {\tau_n} of stopping times increasing to infinity and such that {1_{\{\tau_n > 0\}}1_{[0,\tau_n]}X} is of class (D) for each n. Similarly, it will be said to be prelocally of class (D) if there is a sequence {\tau_n} of stopping times increasing to infinity and such that {1_{[0,\tau_n)}X} is of class (D) for each n.

Theorem 1 Let X be prelocally of class (D), with optional projection {{}^{\rm o}\!X}. Then,

  • if X has left limits, so does {{}^{\rm o}\!X}.
  • if X has right limits, so does {{}^{\rm o}\!X}.

Furthermore, if the underlying filtration is right-continuous then,

  • if X is right-continuous, so is {{}^{\rm o}\!X}.
  • if X is cadlag, so is {{}^{\rm o}\!X}.


20 January 19

Properties of Optional and Predictable Projections

Having defined optional and predictable projections in an earlier post, I now look at their basic properties. The first nontrivial property is that they are well-defined in the first place. Recall that existence of the projections made use of the existence of cadlag modifications of martingales, and uniqueness relied on the section theorems. By contrast, once we accept that optional and predictable projections are well-defined, everything in this post follows easily. Nothing here requires any further advanced results of stochastic process theory.

Optional and predictable projections are similar in nature to conditional expectations. Given a probability space {(\Omega,\mathcal F,{\mathbb P})} and a sub-sigma-algebra {\mathcal G\subseteq\mathcal F}, the conditional expectation of an ({\mathcal F}-measurable) random variable X is a {\mathcal G}-measurable random variable {Y={\mathbb E}[X\,\vert\mathcal G]}. This is defined whenever the integrability condition {{\mathbb E}[\lvert X\rvert\,\vert\mathcal G] < \infty} (a.s.) is satisfied, only depends on X up to almost-sure equivalence, and Y is defined up to almost-sure equivalence. That is, a random variable {X^\prime} almost surely equal to X has the same conditional expectation as X. Similarly, a random variable {Y^\prime} almost-surely equal to Y is also a version of the conditional expectation {{\mathbb E}[X\,\vert\mathcal G]}.

The setup with projections of stochastic processes is similar. We start with a filtered probability space {(\Omega,\mathcal F,\{\mathcal F_t\}_{t\ge0},{\mathbb P})}, and a (real-valued) stochastic process is a map

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle X\colon{\mathbb R}^+\times\Omega\rightarrow{\mathbb R},\smallskip\\ &\displaystyle (t,\omega)\mapsto X_t(\omega) \end{array}

which we assume to be jointly measurable. That is, it is measurable with respect to the Borel sigma-algebra {\mathcal B({\mathbb R})} on the image, and the product sigma-algebra {\mathcal B({\mathbb R}^+)\otimes\mathcal F} on the domain. The optional and predictable sigma-algebras are contained in the product,

\displaystyle  \mathcal P\subseteq\mathcal O\subseteq \mathcal B({\mathbb R}^+)\otimes\mathcal F.

We do not have a reference measure on {({\mathbb R}^+\times\Omega,\mathcal B({\mathbb R}^+)\otimes\mathcal F)} in order to define conditional expectations with respect to {\mathcal O} and {\mathcal P}. However, the optional projection {{}^{\rm o}\!X} and predictable projection {{}^{\rm p}\!X} play similar roles. Assuming that the necessary integrability properties are satisfied, the projections exist. Furthermore, the projection only depends on the process X up to evanescence (i.e., up to a zero probability set), and {{}^{\rm o}\!X} and {{}^{\rm p}\!X} are uniquely defined up to evanescence.

In what follows, we work with respect to a complete filtered probability space. Processes are always only considered up to evanescence, so statements involving equalities, inequalities, and limits of processes are only required to hold outside of a zero probability set. When we say that the optional projection of a process exists, we mean that the integrability condition in the definition of the projection is satisfied. Specifically, that {{\mathbb E}[1_{\{\tau < \infty\}}\lvert X_\tau\rvert\,\vert\mathcal F_\tau]} is almost surely finite. Similarly for the predictable projection.

The following lemma gives a list of initial properties of the optional projection. Other than the statement involving stopping times, they all correspond to properties of conditional expectations.

Lemma 1

  1. X is optional if and only if {{}^{\rm o}\!X} exists and is equal to X.
  2. If the optional projection of X exists then,
    \displaystyle  {}^{\rm o}({}^{\rm o}\!X)={}^{\rm o}\!X. (1)
  3. If the optional projections of X and Y exist, and {\lambda,\mu} are {\mathcal{F}_0}-measurable random variables, then,
    \displaystyle  {}^{\rm o}(\lambda X+\mu Y) = \lambda\,^{\rm o}\!X + \mu\,^{\rm o}Y. (2)
  4. If the optional projection of X exists and U is an optional process then,
    \displaystyle  {}^{\rm o}(UX) = U\,^{\rm o}\!X (3)
  5. If the optional projection of X exists and {\tau} is a stopping time then, the optional projection of the stopped process {X^\tau} exists and,
    \displaystyle  1_{[0,\tau]}{}^{\rm o}(X^\tau)=1_{[0,\tau]}{}^{\rm o}\!X. (4)
  6. If {X\le Y} and the optional projections of X and Y exist then, {{}^{\rm o}\!X\le{}^{\rm o}Y}.


10 January 19

Proof of the Measurable Projection and Section Theorems

The aim of this post is to give a direct proof of the theorems of measurable projection and measurable section. These are generally regarded as rather difficult results, and proofs often use ideas from descriptive set theory such as analytic sets. I did previously post a proof along those lines on this blog. However, the results can be obtained in a more direct way, which is the purpose of this post. Here, I present relatively self-contained proofs which do not require knowledge of any advanced topics beyond basic probability theory.

The projection theorem states that if {(\Omega,\mathcal F,{\mathbb P})} is a complete probability space, then the projection of a measurable subset S of {{\mathbb R}\times\Omega} onto {\Omega} is measurable. To be precise, the condition is that S is in the product sigma-algebra {\mathcal B({\mathbb R})\otimes\mathcal F}, where {\mathcal B({\mathbb R})} denotes the Borel sets in {{\mathbb R}}, and the projection map is denoted

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle\pi_\Omega\colon{\mathbb R}\times\Omega\rightarrow\Omega,\smallskip\\ &\displaystyle\pi_\Omega(t,\omega)=\omega. \end{array}

Then, measurable projection states that {\pi_\Omega(S)\in\mathcal{F}}. Although it looks like a very basic property of measurable sets, maybe even obvious, measurable projection is a surprisingly difficult result to prove. In fact, the requirement that the probability space is complete is necessary and, if it is dropped, then {\pi_\Omega(S)} need not be measurable. Counterexamples exist for commonly used measurable spaces such as {\Omega= {\mathbb R}} and {\mathcal F=\mathcal B({\mathbb R})}. This suggests that there is something deeper going on here than basic manipulations of measurable sets.

By definition, if {S\subseteq{\mathbb R}\times\Omega} then, for every {\omega\in\pi_\Omega(S)}, there exists a {t\in{\mathbb R}} such that {(t,\omega)\in S}. The measurable section theorem — also known as measurable selection — says that this choice can be made in a measurable way. That is, if S is in {\mathcal B({\mathbb R})\otimes\mathcal F} then there is a measurable section,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle\tau\colon\pi_\Omega(S)\rightarrow{\mathbb R},\smallskip\\ &\displaystyle(\tau(\omega),\omega)\in S. \end{array}

It is convenient to extend {\tau} to the whole of {\Omega} by setting {\tau=\infty} outside of {\pi_\Omega(S)}.

Figure 1: A section of a measurable set

The graph of {\tau} is

\displaystyle  [\tau]=\left\{(t,\omega)\in{\mathbb R}\times\Omega\colon t=\tau(\omega)\right\}.

The condition that {(\tau(\omega),\omega)\in S} whenever {\tau < \infty} can alternatively be expressed by stating that {[\tau]\subseteq S}. This also ensures that {\{\tau < \infty\}} is a subset of {\pi_\Omega(S)}, and {\tau} is a section of S on the whole of {\pi_\Omega(S)} if and only if {\{\tau < \infty\}=\pi_\Omega(S)}.

The results described here can also be used to prove the optional and predictable section theorems which, at first appearances, also seem to be quite basic statements. The section theorems are fundamental to the powerful and interesting theory of optional and predictable projection which is, consequently, generally considered to be a hard part of stochastic calculus. In fact, the projection and section theorems are really not that hard to prove.

Let us consider how one might try and approach a proof of the projection theorem. As with many statements regarding measurable sets, we could try and prove the result first for certain simple sets, and then generalise to measurable sets by use of the monotone class theorem or similar. For example, let {\mathcal S} denote the collection of all {S\subseteq{\mathbb R}\times\Omega} for which {\pi_\Omega(S)\in\mathcal F}. It is straightforward to show that any finite union of sets of the form {A\times B} for {A\in\mathcal B({\mathbb R})} and {B\in\mathcal F} is in {\mathcal S}. If it could be shown that {\mathcal S} is closed under taking limits of increasing and decreasing sequences of sets, then the result would follow from the monotone class theorem. Increasing sequences are easily handled: if {S_n} is a sequence of subsets of {{\mathbb R}\times\Omega} then from the definition of the projection map,

\displaystyle  \pi_\Omega\left(\bigcup\nolimits_n S_n\right)=\bigcup\nolimits_n\pi_\Omega\left(S_n\right).

If {S_n\in\mathcal S} for each n, this shows that the union {\bigcup_nS_n} is again in {\mathcal S}. Unfortunately, decreasing sequences are much more problematic. If {S_n\subseteq S_m} for all {n\ge m} then we would like to use something like

\displaystyle  \pi_\Omega\left(\bigcap\nolimits_n S_n\right)=\bigcap\nolimits_n\pi_\Omega\left(S_n\right). (1)

However, this identity does not hold in general. For example, consider the decreasing sequence {S_n=(n,\infty)\times\Omega}. Then, {\pi_\Omega(S_n)=\Omega} for all n, but {\bigcap_nS_n} is empty, contradicting (1). There is some interesting history involved here. In a paper published in 1905, Henri Lebesgue claimed that the projection of a Borel subset of {{\mathbb R}^2} onto {{\mathbb R}} is itself measurable. This was based upon mistakenly applying (1). The error was spotted in around 1917 by Mikhail Suslin, who realised that the projection need not be Borel, which led him to develop the theory of analytic sets.

Actually, there is at least one situation where (1) can be shown to hold. Suppose that for each {\omega\in\Omega}, the slices

\displaystyle  S_n(\omega)\equiv\left\{t\in{\mathbb R}\colon(t,\omega)\in S_n\right\} (2)

are compact. For each {\omega\in\bigcap_n\pi_\Omega(S_n)}, the slices {S_n(\omega)} form a decreasing sequence of nonempty compact sets, which therefore has nonempty intersection. So, letting S be the intersection {\bigcap_nS_n}, the slice {S(\omega)=\bigcap_nS_n(\omega)} is nonempty. Hence, {\omega\in\pi_\Omega(S)}, and (1) follows.

The starting point for our proof of the projection and section theorems is to consider certain special subsets of {{\mathbb R}\times\Omega} where the compactness argument, as just described, can be used. The notation {\mathcal A_\delta} is used to represent the collection of countable intersections, {\bigcap_{n=1}^\infty A_n}, of sets {A_n} in {\mathcal A}.

Lemma 1 Let {(\Omega,\mathcal F)} be a measurable space, and {\mathcal A} be the collection of subsets of {{\mathbb R}\times\Omega} which are finite unions {\bigcup_kC_k\times E_k} over compact intervals {C_k\subseteq{\mathbb R}} and {E_k\in\mathcal F}. Then, for any {S\in\mathcal A_\delta}, we have {\pi_\Omega(S)\in\mathcal F}, and the debut

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle \tau\colon\Omega\rightarrow{\mathbb R}\cup\{\infty\},\smallskip\\ &\displaystyle \omega\mapsto\inf\left\{t\in{\mathbb R}\colon (t,\omega)\in S\right\}. \end{array}

is a measurable map with {[\tau]\subseteq S} and {\{\tau < \infty\}=\pi_\Omega(S)}.
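
As a toy illustration of the debut in Lemma 1 (my own sketch, covering only the simplest case of a set in {\mathcal A} rather than {\mathcal A_\delta}, and with a finite sample space standing in for {\Omega}): for {S=\bigcup_kC_k\times E_k} with compact intervals {C_k=[a_k,b_k]} and events {E_k}, the debut at {\omega} is the smallest {a_k} over those k with {\omega\in E_k}, it lies in the slice of S at {\omega} whenever it is finite, and {\{\tau < \infty\}} is exactly {\pi_\Omega(S)}.

```python
# Toy illustration (not from the post) of the debut map for the simplest case S in A:
# S = union_k (C_k x E_k) with compact intervals C_k = [a_k, b_k] and events E_k, over a
# finite sample space standing in for Omega.  The debut at omega is the smallest left
# endpoint a_k over those k with omega in E_k, it lies in the slice S(omega) whenever it
# is finite, and {debut < oo} is exactly the projection of S onto Omega.
import math

Omega = ["w1", "w2", "w3", "w4"]
S = [((1.0, 2.0), {"w1", "w2"}),      # C_1 x E_1
     ((0.5, 0.7), {"w2"}),            # C_2 x E_2
     ((3.0, 4.0), {"w3"})]            # C_3 x E_3

def debut(omega):
    """inf { t : (t, omega) in S }, with the infimum of the empty set taken to be infinity."""
    left_endpoints = [a for (a, b), E in S if omega in E]
    return min(left_endpoints) if left_endpoints else math.inf

for w in Omega:
    print(w, debut(w))
# w1 -> 1.0, w2 -> 0.5, w3 -> 3.0, w4 -> inf; the projection of S onto Omega is {w1, w2, w3}.
```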


7 January 19

Proof of Optional and Predictable Section

In this post I give a proof of the theorems of optional and predictable section. These are often considered among the more advanced results in stochastic calculus, and many texts on the subject skip their proofs entirely. The approach here makes use of the measurable section theorem but, other than that, is relatively self-contained and will not require any knowledge of advanced topics beyond basic properties of probability measures.

Given a probability space {(\Omega,\mathcal F,{\mathbb P})} we denote the projection map from {\Omega\times{\mathbb R}^+} to {\Omega} by

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle\pi_\Omega\colon \Omega\times{\mathbb R}^+\rightarrow\Omega,\smallskip\\ &\displaystyle\pi_\Omega(\omega,t)=\omega. \end{array}

For a set {S\subseteq\Omega\times{\mathbb R}^+}, by construction, for every {\omega\in\pi_\Omega(S)} there exists a {t\in{\mathbb R}^+} with {(\omega,t)\in S}. Measurable section states that this choice can be made in a measurable way. That is, assuming that the probability space is complete, {\pi_\Omega(S)} is measurable and there is a measurable section {\tau\colon\pi_\Omega(S)\rightarrow{\mathbb R}^+} satisfying {\tau\in S}. I use the shorthand {\tau\in S} to mean {(\omega,\tau(\omega))\in S}, and it is convenient to extend the domain of {\tau} to all of {\Omega} by setting {\tau=\infty} outside of {\pi_\Omega(S)}. So, we consider random times taking values in the extended nonnegative real numbers {\bar{\mathbb R}^+={\mathbb R}^+\cup\{\infty\}}. The property that {\tau\in S} whenever {\tau < \infty} can be expressed by stating that the graph of {\tau} is contained in S, where the graph is defined as

\displaystyle  [\tau]\equiv\left\{(\omega,t)\in\Omega\times{\mathbb R}^+\colon t=\tau(\omega)\right\}.

The optional section theorem is a significant extension of measurable section which is very important to the general theory of stochastic processes. It starts with the concept of stopping times and with the optional sigma-algebra on {\Omega\times{\mathbb R}^+}. Then, it says that if S is optional its section {\tau} can be chosen to be a stopping time. However, there is a slight restriction. It might not be possible to define such {\tau} everywhere on {\pi_\Omega(S)}, but only outside a set of probability at most {\epsilon}, where {\epsilon} can be made as small as we like. There is also a corresponding predictable section theorem, which says that if S is in the predictable sigma-algebra, its section {\tau} can be chosen to be a predictable stopping time.

I give precise statements and proofs of optional and predictable section further below, and also prove a much more general section theorem which applies to any collection of random times satisfying a small number of required properties. Optional and predictable section will follow as consequences of this generalised section theorem.

Both the optional and predictable sigma-algebras, as well as the sigma-algebra used in the generalised section theorem, can be generated by collections of stochastic intervals. Any pair of random times {\sigma,\tau\colon\Omega\rightarrow\bar{\mathbb R}^+} defines a stochastic interval,

\displaystyle  [\sigma,\tau)\equiv\left\{(\omega,t)\in\Omega\times{\mathbb R}^+\colon\sigma(\omega)\le t < \tau(\omega)\right\}.

The debut of a set {S\subseteq\Omega\times{\mathbb R}^+} is defined to be the random time

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle D(S)\colon\Omega\rightarrow\bar{\mathbb R}^+,\smallskip\\ &\displaystyle D(S)(\omega)=\inf\left\{t\in{\mathbb R}^+\colon(\omega,t)\in S\right\}. \end{array}

In general, even if S is measurable, its debut need not be, although it can be shown to be measurable in the case that the probability space is complete. For a random time {\tau} and a measurable set {A\subseteq\Omega}, we use {\tau_A} to denote the restriction of {\tau} to A defined by

\displaystyle  \tau_A(\omega)=\begin{cases} \tau(\omega),&{\rm\ if\ }\omega\in A,\\ \infty,&{\rm\ if\ }\omega\not\in A. \end{cases}

We start with the general situation of a collection of random times {\mathcal T} satisfying a few required properties and show that, for sufficiently simple subsets of {\Omega\times{\mathbb R}^+}, the section can be chosen to be almost surely equal to the debut. It is straightforward that the collection of all stopping times defined with respect to some filtration does indeed satisfy the required properties for {\mathcal T}, but I also give a proof of this further below. A nonempty collection {\mathcal A} of subsets of a set X is called an algebra, Boolean algebra or, alternatively, a ring, if it is closed under finite unions, finite intersections, and under taking the complement {A^c=X\setminus A} of sets {A\in\mathcal A}. Recall, also, that {\mathcal A_\delta} represents the countable intersections of sets in {\mathcal A}, that is, the collection of sets of the form {\bigcap_nA_n} for sequences {A_1,A_2,\ldots} in {\mathcal A}.

Lemma 1 Let {(\Omega,\mathcal F,{\mathbb P})} be a probability space and {\mathcal T} be a collection of measurable times {\tau\colon\Omega\rightarrow\bar{\mathbb R}^+} satisfying,

  • the constant function {\tau=0} is in {\mathcal T}.
  • {\sigma\wedge\tau} and {\sigma_{\{\sigma < \tau\}}} are in {\mathcal T}, for all {\sigma,\tau\in\mathcal T}.
  • {\sup_n\tau_n\in\mathcal T} for all sequences {\tau_1,\tau_2,\ldots} in {\mathcal T}.

Then, letting {\mathcal A} be the collection of finite unions of stochastic intervals {[\sigma,\tau)} over {\sigma,\tau\in\mathcal T}, we have the following,

  • {\mathcal A} is an algebra on {\Omega\times{\mathbb R}^+}.
  • for all {S\in\mathcal A_\delta}, its debut satisfies

    \displaystyle  [D(S)]\subseteq S,\ \{D(S) < \infty\}=\pi_\Omega(S),

    and there is a {\tau\in\mathcal T} with {[\tau]\subseteq[D(S)]} and {\tau = D(S)} almost surely.


23 December 18

Projection in Discrete Time

It has been some time since my last post, but I am continuing now with the stochastic calculus notes on optional and predictable projection. In this post, I will go through the ideas in the discrete-time situation. All of the main concepts involved in optional and predictable projection are still present in discrete time, but the theory is much simpler. It is only in continuous time that the projection theorems really show their power, so the aim of this post is to motivate the concepts in a simple setting before generalising to the full, continuous-time situation. Ideally, this would have been published before the posts on optional and predictable projection in continuous time, so it is a bit out of sequence.

We consider time running through the discrete index set {{\mathbb Z}^+=\{0,1,2,\ldots\}}, and work with respect to a filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_n\}_{n=0,1,\ldots},{\mathbb P})}. Then, {\mathcal{F}_n} is used to represent the collection of events observable up to and including time n. Stochastic processes will all be real-valued and defined up to almost-sure equivalence. That is, processes X and Y are considered to be the same if {X_n=Y_n} almost surely for each {n\in{\mathbb Z}^+}. The projections of a process X are defined as follows.

Definition 1 Let X be a measurable process. Then,

  1. the optional projection, {{}^{\rm o}\!X}, exists if and only if {{\mathbb E}[\lvert X_n\rvert\,\vert\mathcal{F}_n]} is almost surely finite for each n, in which case
    \displaystyle  {}^{\rm o}\!X_n={\mathbb E}[X_n\,\vert\mathcal{F}_n]. (1)
  2. the predictable projection, {{}^{\rm p}\!X}, exists if and only if {{\mathbb E}[\lvert X_n\rvert\,\vert\mathcal{F}_{n-1}]} is almost surely finite for each n, in which case
    \displaystyle  {}^{\rm p}\!X_n={\mathbb E}[X_n\,\vert\mathcal{F}_{n-1}]. (2)
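
As a minimal computational sketch of Definition 1 (my own illustration, not from the post): on the finite probability space of N fair coin flips, with {\mathcal{F}_n} generated by the first n flips, both projections are just conditional expectations computed by averaging over the atoms of the relevant sigma-algebra.

```python
# Minimal computational sketch of Definition 1.  Omega is the set of all sequences of
# N fair coin flips, F_n is generated by the first n flips, and both projections are
# conditional expectations obtained by averaging over the atoms of the relevant
# sigma-algebra.  The process X below is an arbitrary (non-adapted) choice.
from itertools import product

N = 3
Omega = list(product([0, 1], repeat=N))              # equally likely outcomes

def X(n, omega):
    return sum(omega) + n * omega[-1]                # depends on the whole path, so not adapted

def cond_exp(m, n, omega):
    """E[X_n | F_m](omega): average X(n, .) over outcomes agreeing with omega on the first m flips."""
    atom = [w for w in Omega if w[:m] == omega[:m]]
    return sum(X(n, w) for w in atom) / len(atom)

omega = (1, 0, 1)
for n in range(N):
    print("optional,    n =", n, ":", cond_exp(n, n, omega))      # oX_n = E[X_n | F_n]
for n in range(1, N):
    print("predictable, n =", n, ":", cond_exp(n - 1, n, omega))  # pX_n = E[X_n | F_{n-1}]
```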


6 March 17

The Projection Theorems

In this post, I introduce the concept of optional and predictable projections of jointly measurable processes. Optional projections of right-continuous processes and predictable projections of left-continuous processes were constructed in earlier posts, with the respective continuity conditions used to define the projection. These are, however, just special cases of the general theory. For arbitrary measurable processes, the projections cannot be expected to satisfy any such pathwise regularity conditions. Instead, we use the measurability criteria that the projections should be, respectively, optional and predictable.

The projection theorems are a relatively straightforward consequence of optional and predictable section. However, due to the difficulty of proving the section theorems, optional and predictable projection is generally considered to be an advanced or hard part of stochastic calculus. Here, I will make use of the section theorems as stated in an earlier post, but leave the proof of those until after developing the theory of projection.

As usual, we work with respect to a complete filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})}, and only consider real-valued processes. Any two processes are considered to be the same if they are equal up to evanescence. The optional projection is then defined (up to evanescence) by the following.

Theorem 1 (Optional Projection) Let X be a measurable process such that {{\mathbb E}[1_{\{\tau < \infty\}}\lvert X_\tau\rvert\;\vert\mathcal{F}_\tau]} is almost surely finite for each stopping time {\tau}. Then, there exists a unique optional process {{}^{\rm o}\!X}, referred to as the optional projection of X, satisfying

\displaystyle  1_{\{\tau < \infty\}}{}^{\rm o}\!X_\tau={\mathbb E}[1_{\{\tau < \infty\}}X_\tau\,\vert\mathcal{F}_\tau] (1)

almost surely, for each stopping time {\tau}.

Predictable projection is defined similarly.

Theorem 2 (Predictable Projection) Let X be a measurable process such that {{\mathbb E}[1_{\{\tau < \infty\}}\lvert X_\tau\rvert\;\vert\mathcal{F}_{\tau-}]} is almost surely finite for each predictable stopping time {\tau}. Then, there exists a unique predictable process {{}^{\rm p}\!X}, referred to as the predictable projection of X, satisfying

\displaystyle  1_{\{\tau < \infty\}}{}^{\rm p}\!X_\tau={\mathbb E}[1_{\{\tau < \infty\}}X_\tau\,\vert\mathcal{F}_{\tau-}] (2)

almost surely, for each predictable stopping time {\tau}.


28 February 17

Pathwise Regularity of Optional and Predictable Processes

As I have mentioned before in these notes, when working with processes in continuous time, it is important to select a good modification. Typically, this means that we work with processes which are left or right continuous. However, in general, it can be difficult to show that the paths of a process satisfy such pathwise regularity. In this post I show that for optional and predictable processes, the section theorems introduced in the previous post can be used to considerably simplify the situation. Although they are interesting results in their own right, the main application in these notes will be to optional and predictable projection. Once the projections are defined, the results from this post will imply that they preserve certain continuity properties of the process paths.

Suppose, for example, that we have a continuous-time process X which we want to show to be right-continuous. It is certainly necessary that, for any sequence of times {t_n\in{\mathbb R}_+} decreasing to a limit {t}, {X_{t_n}} almost-surely tends to {X_t}. However, even if we can prove this for every possible decreasing sequence {t_n}, it does not follow that X is right-continuous. As a counterexample, if {\tau\colon\Omega\rightarrow{\mathbb R}} is any continuously distributed random time, then the process {X_t=1_{\{t\le \tau\}}} is not right-continuous. However, so long as the distribution of {\tau} has no atoms, X is almost-surely continuous at each fixed time t. It is remarkable, then, that if we generalise to look at sequences of stopping times, then convergence in probability along decreasing sequences of stopping times is enough to guarantee everywhere right-continuity of the process. At least, it is enough so long as we restrict consideration to optional processes.

As usual, we work with respect to a complete filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})}. Two processes are considered to be the same if they are equal up to evanescence, and any pathwise property is said to hold if it holds up to evanescence. That is, a process is right-continuous if and only if it is everywhere right-continuous on a set of probability 1. All processes will be taken to be real-valued, and a process is said to have left (or right) limits if its left (or right) limits exist everywhere, up to evanescence, and are finite.

Theorem 1 Let X be an optional process. Then,

  1. X is right-continuous if and only if {X_{\tau_n}\rightarrow X_\tau} in probability, for each uniformly bounded sequence {\tau_n} of stopping times decreasing to a limit {\tau}.
  2. X has right limits if and only if {X_{\tau_n}} converges in probability, for each uniformly bounded decreasing sequence {\tau_n} of stopping times.
  3. X has left limits if and only if {X_{\tau_n}} converges in probability, for each uniformly bounded increasing sequence {\tau_n} of stopping times.

The 'only if' parts of these statements are immediate, since convergence everywhere trivially implies convergence in probability. The importance of this theorem is in the 'if' directions. That is, it gives sufficient conditions to guarantee that the sample paths satisfy the respective regularity properties.

Note that conditions for left-continuity are absent from the statements of Theorem 1. In fact, left-continuity does not follow from the corresponding property along sequences of stopping times. Consider, for example, a Poisson process, X. This is right-continuous but not left-continuous. However, its jumps occur at totally inaccessible times. This implies that, for any sequence {\tau_n} of stopping times increasing to a finite limit {\tau}, it is true that {X_{\tau_n}} converges almost surely to {X_\tau}. In light of such examples, it is even more remarkable that right-continuity and the existence of left and right limits can be determined by just looking at convergence in probability along monotonic sequences of stopping times. Theorem 1 will be proven below, using the optional section theorem.

For predictable processes, we can restrict attention to predictable stopping times. In this case, we obtain a condition for left-continuity as well as for right-continuity.

Theorem 2 Let X be a predictable process. Then,

  1. X is right-continuous if and only if {X_{\tau_n}\rightarrow X_\tau} in probability, for each uniformly bounded sequence {\tau_n} of predictable stopping times decreasing to a limit {\tau}.
  2. X is left-continuous if and only if {X_{\tau_n}\rightarrow X_\tau} in probability, for each uniformly bounded sequence {\tau_n} of predictable stopping times increasing to a limit {\tau}.
  3. X has right limits if and only if {X_{\tau_n}} converges in probability, for each uniformly bounded decreasing sequence {\tau_n} of predictable stopping times.
  4. X has left limits if and only if {X_{\tau_n}} converges in probability, for each uniformly bounded increasing sequence {\tau_n} of predictable stopping times.

Again, the proof is given below, and relies on the predictable section theorem.

29 November 16

The Section Theorems

Consider a probability space {(\Omega,\mathcal{F},{\mathbb P})} and a subset S of {{\mathbb R}_+\times\Omega}. The projection {\pi_\Omega(S)} is the set of {\omega\in\Omega} such that there exists a {t\in{\mathbb R}_+} with {(t,\omega)\in S}. We can ask whether there exists a map

\displaystyle  \tau\colon\pi_\Omega(S)\rightarrow{\mathbb R}_+

such that {(\tau(\omega),\omega)\in S}. From the definition of the projection, values of {\tau(\omega)} satisfying this exist for each individual {\omega}. By invoking the axiom of choice, then, we see that functions {\tau} with the required property do exist. However, to be of use for probability theory, it is important that {\tau} should be measurable. Whether or not there are measurable functions with the required properties is a much more difficult problem, and is answered affirmatively by the measurable selection theorem. For the question to have any hope of having a positive answer, we require S to be measurable, so that it lies in the product sigma-algebra {\mathcal{B}({\mathbb R}_+)\otimes\mathcal{F}}, with {\mathcal{B}({\mathbb R}_+)} denoting the Borel sigma-algebra on {{\mathbb R}_+}. Also, less obviously, the underlying probability space should be complete. Throughout this post, {(\Omega,\mathcal{F},{\mathbb P})} will be assumed to be a complete probability space.

It is convenient to extend {\tau} to the whole of {\Omega} by setting {\tau(\omega)=\infty} for {\omega} outside of {\pi_\Omega(S)}. Then, {\tau} is a map to the extended nonnegative reals {\bar{\mathbb R}_+={\mathbb R}_+\cup\{\infty\}} for which {\tau(\omega) < \infty} precisely when {\omega} is in {\pi_\Omega(S)}. Next, the graph of {\tau}, denoted by {[\tau]}, is defined to be the set of {(t,\omega)\in{\mathbb R}_+\times\Omega} with {t=\tau(\omega)}. The property that {(\tau(\omega),\omega)\in S} whenever {\tau(\omega) < \infty} is expressed succinctly by the inclusion {[\tau]\subseteq S}. With this notation, the measurable selection theorem is as follows.

Theorem 1 (Measurable Selection) For any {S\in\mathcal{B}({\mathbb R}_+)\otimes\mathcal{F}}, there exists a measurable {\tau\colon\Omega\rightarrow\bar{\mathbb R}_+} such that {[\tau]\subseteq S} and

\displaystyle  \left\{\tau < \infty\right\}=\pi_\Omega(S). (1)

As noted above, if it wasn’t for the measurability requirement then this theorem would just be a simple application of the axiom of choice. Requiring {\tau} to be measurable, on the other hand, makes the theorem much more difficult to prove. For instance, it would not hold if the underlying probability space was not required to be complete. Note also that, stated as above, measurable selection implies that the projection of S is equal to a measurable set {\{\tau < \infty\}}, so the measurable projection theorem is an immediate corollary. I will leave the proof of Theorem 1 for a later post, together with the proofs of the section theorems stated below.

A closely related problem is the following. Given a measurable space {(X,\mathcal{E})} and a measurable function, {f\colon X\rightarrow\Omega}, does there exist a measurable right-inverse on the image of {f}? This is asking for a measurable function, {g}, from {f(X)} to {X} such that {f(g(\omega))=\omega}. In the case where {(X,\mathcal{E})} is the Borel space {({\mathbb R}_+,\mathcal{B}({\mathbb R}_+))}, Theorem 1 says that it does exist. If S is the graph {\{(t,f(t))\colon t\in{\mathbb R}_+\}} then {\tau} will be the required right-inverse. In fact, as all uncountable Polish spaces are Borel-isomorphic to each other and, hence, to {{\mathbb R}_+}, this result applies whenever {(X,\mathcal{E})} is a Polish space together with its Borel sigma-algebra.

