# Almost Sure

## 29 November 16

### The Section Theorems

Consider a probability space ${(\Omega,\mathcal{F},{\mathbb P})}$ and a subset S of ${{\mathbb R}_+\times\Omega}$. The projection ${\pi_\Omega(S)}$ is the set of ${\omega\in\Omega}$ such that there exists a ${t\in{\mathbb R}_+}$ with ${(t,\omega)\in S}$. We can ask whether there exists a map

$\displaystyle \tau\colon\pi_\Omega(S)\rightarrow{\mathbb R}_+$

such that ${(\tau(\omega),\omega)\in S}$. From the definition of the projection, values of ${\tau(\omega)}$ satisfying this exist for each individual ${\omega}$. By invoking the axiom of choice, then, we see that functions ${\tau}$ with the required property do exist. However, to be of use for probability theory, it is important that ${\tau}$ should be measurable. Whether or not there are measurable functions with the required properties is a much more difficult problem, and is answered affirmatively by the measurable selection theorem. For the question to have any hope of having a positive answer, we require S to be measurable, so that it lies in the product sigma-algebra ${\mathcal{B}({\mathbb R}_+)\otimes\mathcal{F}}$, with ${\mathcal{B}({\mathbb R}_+)}$ denoting the Borel sigma-algebra on ${{\mathbb R}_+}$. Also, less obviously, the underlying probability space should be complete. Throughout this post, ${(\Omega,\mathcal{F},{\mathbb P})}$ will be assumed to be a complete probability space.

It is convenient to extend ${\tau}$ to the whole of ${\Omega}$ by setting ${\tau(\omega)=\infty}$ for ${\omega}$ outside of ${\pi_\Omega(S)}$. Then, ${\tau}$ is a map to the extended nonnegative reals ${\bar{\mathbb R}_+={\mathbb R}_+\cup\{\infty\}}$ for which ${\tau(\omega) < \infty}$ precisely when ${\omega}$ is in ${\pi_\Omega(S)}$. Next, the graph of ${\tau}$, denoted by ${[\tau]}$, is defined to be the set of ${(t,\omega)\in{\mathbb R}_+\times\Omega}$ with ${t=\tau(\omega)}$. The property that ${(\tau(\omega),\omega)\in S}$ whenever ${\tau(\omega) < \infty}$ is expressed succinctly by the inclusion ${[\tau]\subseteq S}$. With this notation, the measurable selection theorem is as follows.

Theorem 1 (Measurable Selection) For any ${S\in\mathcal{B}({\mathbb R}_+)\otimes\mathcal{F}}$, there exists a measurable ${\tau\colon\Omega\rightarrow\bar{\mathbb R}_+}$ such that ${[\tau]\subseteq S}$ and

 $\displaystyle \left\{\tau < \infty\right\}=\pi_\Omega(S).$ (1)

As noted above, if it wasn’t for the measurability requirement then this theorem would just be a simple application of the axiom of choice. Requiring ${\tau}$ to be measurable, on the other hand, makes the theorem much more difficult to prove. For instance, it would not hold if the underlying probability space was not required to be complete. Note also that, stated as above, measurable selection implies that the projection of S is equal to a measurable set ${\{\tau < \infty\}}$, so the measurable projection theorem is an immediate corollary. I will leave the proof of Theorem 1 for a later post, together with the proofs of the section theorems stated below.

A closely related problem is the following. Given a measurable space ${(X,\mathcal{E})}$ and a measurable function, ${f\colon X\rightarrow\Omega}$, does there exist a measurable right-inverse on the image of ${f}$? This is asking for a measurable function, ${g}$, from ${f(X)}$ to ${X}$ such that ${f(g(\omega))=\omega}$. In the case where ${(X,\mathcal{E})}$ is the Borel space ${({\mathbb R}_+,\mathcal{B}({\mathbb R}_+))}$, Theorem 1 says that it does exist. If S is the graph ${\{(t,f(t))\colon t\in{\mathbb R}_+\}}$ then ${\tau}$ will be the required right-inverse. In fact, as all uncountable Polish spaces are Borel-isomorphic to each other and, hence, to ${{\mathbb R}_+}$, this result applies whenever ${(X,\mathcal{E})}$ is a Polish space together with its Borel sigma-algebra.

## 22 November 16

### Predictable Processes

In contrast to optional processes, the class of predictable processes was used extensively in the development of stochastic integration in these notes. They appeared as integrands in stochastic integrals then, later on, as compensators and in the Doob-Meyer decomposition. Since they are also central to the theory of predictable section and projection, I will revisit the basic properties of predictable processes now. In particular, any of the collections of sets and processes in the following theorem can equivalently be used to define the predictable sigma-algebra. As usual, we work with respect to a complete filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\in{\mathbb R}_+},{\mathbb P})}$. However, completeness is not actually required for the following result. All processes are assumed to be real valued, or take values in the extended reals ${\bar{\mathbb R}={\mathbb R}\cup\{\pm\infty\}}$.

Theorem 1 The following collections of sets and processes each generate the same sigma-algebra on ${{\mathbb R}_+\times\Omega}$.

1. {${[\tau,\infty)}$: ${\tau}$ is a predictable stopping time}.
2. ${Z1_{[\tau,\infty)}}$ as ${\tau}$ ranges over the predictable stopping times and Z over the ${\mathcal{F}_{\tau-}}$-measurable random variables.
3. {${A\times(t,\infty)}$: ${t\in{\mathbb R}_+}$, ${A\in\mathcal{F}_t}$} ${\cup}$ {${A\times\{0\}}$: ${A\in\mathcal{F}_0}$}.
4. The elementary predictable processes.
5. {${(\tau,\infty)}$: ${\tau}$ is a stopping time} ${\cup}$ {${A\times\{0\}}$: ${A\in\mathcal{F}_0}$}.

Compare this with the analogous result for sets/processes generating the optional sigma-algebra given in the previous post. The proof of Theorem 1 is given further below. First, recall that the predictable sigma-algebra was previously defined to be generated by the left-continuous adapted processes. However, it can equivalently be defined by any of the collections stated in Theorem 1. To make this clear, I now restate the definition making use of this equivalence.

Definition 2 The predictable sigma-algebra, ${\mathcal{P}}$, is the sigma-algebra on ${{\mathbb R}_+\times\Omega}$ generated by any of the collections of sets/processes in Theorem 1.

A stochastic process is predictable iff it is ${\mathcal{P}}$-measurable.

## 15 November 16

### Optional Processes

The optional sigma-algebra, ${\mathcal{O}}$, was defined earlier in these notes as the sigma-algebra generated by the adapted and right-continuous processes. Then, a stochastic process is optional if it is ${\mathcal{O}}$-measurable. However, beyond the definition, very little use was made of this concept. While right-continuous adapted processes are optional by construction, and were used throughout the development of stochastic calculus, there was no need to make use of the general definition. On the other hand, optional processes are central to the theory of optional section and projection. So, I will now look at such processes in more detail, starting with the following alternative, but equivalent, ways of defining the optional sigma-algebra. Throughout this post we work with respect to a complete filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\in{\mathbb R}_+},{\mathbb P})}$, and all stochastic processes will be assumed to be either real-valued or to take values in the extended reals ${\bar{\mathbb R}={\mathbb R}\cup\{\pm\infty\}}$.

Theorem 1 The following collections of sets and processes each generate the same sigma-algebra on ${{\mathbb R}_+\times\Omega}$.

• {${[\tau,\infty)}$: ${\tau}$ is a stopping time}.
• ${Z1_{[\tau,\infty)}}$ as ${\tau}$ ranges over the stopping times and Z over the ${\mathcal{F}_\tau}$-measurable random variables.

The optional sigma-algebra was previously defined to be generated by the right-continuous adapted processes. However, any of the collections of sets and processes stated in Theorem 1 can equivalently be used, and the definitions given in the literature do vary. So, I will restate the definition making use of this equivalence.

Definition 2 The optional sigma-algebra, ${\mathcal{O}}$, is the sigma-algebra on ${{\mathbb R}_+\times\Omega}$ generated by any of the collections of sets/processes in Theorem 1.

A stochastic process is optional iff it is ${\mathcal{O}}$-measurable.

## 8 November 16

### Measurable Projection and the Debut Theorem

I will discuss some of the immediate consequences of the following deceptively simple looking result.

Theorem 1 (Measurable Projection) If ${(\Omega,\mathcal{F},{\mathbb P})}$ is a complete probability space and ${A\in\mathcal{B}({\mathbb R})\otimes\mathcal{F}}$ then ${\pi_\Omega(A)\in\mathcal{F}}$.

For sets X and Y, the notation ${\pi_Y}$ is used to denote the projection from the cartesian product ${X\times Y}$ onto Y. That is, ${\pi_Y((x,y)) = y}$. As is standard, ${\mathcal{B}({\mathbb R})}$ is the Borel sigma-algebra on the reals, and ${\mathcal{A}\otimes\mathcal{B}}$ denotes the product of sigma-algebras.

Theorem 1 seems almost obvious. Projection is a very simple map and we may well expect the projection of, say, a Borel subset of ${{\mathbb R}^2}$ onto ${{\mathbb R}}$ to be Borel. In order to formalise this, we could start by noting that sets of the form ${A\times B}$ for Borel A and B have an easily described, and measurable, projection, and that the Borel sigma-algebra is the closure of the collection of such sets under countable unions and under intersections of decreasing sequences of sets. Furthermore, the projection operator commutes with taking the union of sequences of sets. Unfortunately, this method of proof falls down when looking at the limits of decreasing sequences of sets, which do not commute with projection. For example, each set in the decreasing sequence ${S_n=(0,1/n)\times{\mathbb R}\subseteq{\mathbb R}^2}$ projects onto the whole of ${{\mathbb R}}$, but the limit ${\bigcap_n S_n}$ is empty and so has empty projection.
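The obstruction is already visible at the level of plain set operations. A minimal sketch in Python (finite sets of pairs standing in for subsets of a product space): projection always commutes with unions, but for intersections it only gives an inclusion, which can be strict.

```python
def project(S):
    """Projection of a set of (t, omega) pairs onto the second coordinate."""
    return {w for (_, w) in S}

A = {(0, 'w1'), (1, 'w2')}
B = {(1, 'w1'), (0, 'w2')}

# Projection commutes with unions:
assert project(A | B) == project(A) | project(B)

# ...but for intersections only an inclusion holds, and it can be strict:
assert project(A & B) <= project(A) & project(B)
print(project(A & B) == project(A) & project(B))  # False here: A & B is empty
```

For a nested decreasing sequence any finite intersection causes no problem, since it just equals the smallest set; the failure genuinely only appears in the limit, as in the ${S_n}$ example above.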

There is an interesting history behind Theorem 1, as mentioned by Gerald Edgar on MathOverflow (1) in answer to The most interesting mathematics mistake? In a 1905 paper, Henri Lebesgue asserted that the projection of a Borel subset of the plane onto the line is again a Borel set (Lebesgue, (3), pp 191–192). This was based on the erroneous assumption that projection commutes with the limit of a decreasing sequence of sets. The mistake was spotted, in 1916, by Mikhail Suslin, and led to his investigation of analytic sets and to the beginnings of what is now known as descriptive set theory. See Kanamori, (2), for more details. In fact, as was shown by Suslin, projections of Borel sets need not be Borel. So, by considering the case where ${\Omega={\mathbb R}}$ and ${\mathcal{F}=\mathcal{B}({\mathbb R})}$, Theorem 1 is false if the completeness assumption is dropped. I will give a proof of Theorem 1 but, as it is a bit involved, this is left for a later post.

For now, I will state some consequences of the measurable projection theorem which are important to the theory of continuous-time stochastic processes, starting with the following. Throughout this post, the underlying probability space ${(\Omega,\mathcal{F},{\mathbb P})}$ is assumed to be complete, and stochastic processes are taken to be real-valued, or take values in the extended reals ${\bar{\mathbb R}={\mathbb R}\cup\{\pm\infty\}}$, with time index ranging over ${{\mathbb R}_+}$. For a first application of measurable projection, it allows us to show that the supremum of a jointly measurable process is measurable.

Lemma 2 If X is a jointly measurable process and ${S\in\mathcal{B}(\mathbb{R}_+)}$ then ${\sup_{s\in S}X_s}$ is measurable.

Proof: Setting ${U=\sup_{s\in S}X_s}$ then, for each real K, ${U > K}$ if and only if ${X_s > K}$ for some ${s\in S}$. Hence,

$\displaystyle U^{-1}\left((K,\infty]\right)=\pi_\Omega\left((S\times\Omega)\cap X^{-1}\left((K,\infty]\right)\right).$

By the measurable projection theorem, this is in ${\mathcal{F}}$ and, as sets of the form ${(K,\infty]}$ generate the Borel sigma-algebra on ${\mathbb{\bar R}}$, U is ${\mathcal{F}}$-measurable. ⬜

Next, the running maximum of a jointly measurable process is again jointly measurable.

Lemma 3 If X is a jointly measurable process then ${X^*_t\equiv\sup_{s\le t}X_s}$ is also jointly measurable.
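On a discrete time grid, the running maximum of Lemma 3 is just a cumulative maximum along the path. A small numpy sketch (the path below is an illustrative random-walk approximation to Brownian motion, not part of the lemma):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1001)
# illustrative sample path: a random walk with Gaussian increments
X = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(np.diff(t))))])

# running maximum X*_t = sup_{s <= t} X_s, evaluated on the grid
X_star = np.maximum.accumulate(X)

assert np.all(X_star >= X)           # dominates the path
assert np.all(np.diff(X_star) >= 0)  # and is nondecreasing in t
```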

## 25 October 16

### Optional Projection For Right-Continuous Processes

In filtering theory, we have a filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}$ and a signal process ${\{X_t\}_{t\in{\mathbb R}_+}}$. The sigma-algebra ${\mathcal{F}_t}$ represents the collection of events which are observable up to and including time t. The process X is not assumed to be adapted, so need not be directly observable. For example, we may only be able to measure an observation process ${Z_t=X_t+\epsilon_t}$, which incorporates some noise ${\epsilon_t}$, and generates the filtration ${\mathcal{F}_t}$, so is adapted. The problem, then, is to compute an estimate for ${X_t}$ based on the observable data at time t. Looking at the expected value of X conditional on the observable data, we obtain the following estimate for X at each time ${t\in{\mathbb R}_+}$,

 $\displaystyle Y_t={\mathbb E}[X_t\;\vert\mathcal{F}_t]{\rm\ \ (a.s.)}$ (1)

The process Y is adapted. However, as (1) only defines Y up to a zero probability set, it does not give us the paths of Y, which requires specifying its values simultaneously at the uncountable set of times in ${{\mathbb R}_+}$. Consequently, (1) does not tell us the distribution of Y at random times. So, it is necessary to specify a good version for Y.

Optional projection gives a uniquely defined process which satisfies (1), not just at every time t in ${{\mathbb R}_+}$, but also at all stopping times. The full theory of optional projection for jointly measurable processes requires the optional section theorem. As I will demonstrate, in the case where X is right-continuous, optional projection can be done by more elementary methods.

Throughout this post, it will be assumed that the underlying filtered probability space satisfies the usual conditions, meaning that it is complete and right-continuous, ${\mathcal{F}_{t+}=\mathcal{F}_t}$. Stochastic processes are considered to be defined up to evanescence. That is, two processes are considered to be the same if they are equal up to evanescence. In order to apply (1), some integrability requirements need to be imposed on X. Often, to avoid such issues, optional projection is defined for uniformly bounded processes. For a bit more generality, I will relax this requirement and use prelocal integrability. Recall that, in these notes, a process X is prelocally integrable if there exists a sequence of stopping times ${\tau_n}$ increasing to infinity and such that

 $\displaystyle 1_{\{\tau_n > 0\}}\sup_{t < \tau_n}\lvert X_t\rvert$ (2)

is integrable. This is a strong enough condition for the conditional expectation (1) to exist, not just at each fixed time, but also whenever t is a stopping time. The main result of this post can now be stated.

Theorem 1 (Optional Projection) Let X be a right-continuous and prelocally integrable process. Then, there exists a unique right-continuous process Y satisfying (1).

Uniqueness is immediate, as (1) determines Y, almost-surely, at each fixed time, and this is enough to uniquely determine right-continuous processes up to evanescence. Existence of Y is the important part of the statement, and the proof will be left until further down in this post.

The process defined by Theorem 1 is called the optional projection of X, and is denoted by ${{}^{\rm o}\!X}$. That is, ${{}^{\rm o}\!X}$ is the unique right-continuous process satisfying

 $\displaystyle {}^{\rm o}\!X_t={\mathbb E}[X_t\;\vert\mathcal{F}_t]{\rm\ \ (a.s.)}$ (3)

for all times t. In practice, the process X will usually not just be right-continuous, but will also have left limits everywhere. That is, it is cadlag.

Theorem 2 Let X be a cadlag and prelocally integrable process. Then, its optional projection is cadlag.

A simple example of optional projection is where ${X_t}$ is constant in t and equal to an integrable random variable U. Then, ${{}^{\rm o}\!X_t}$ is the cadlag version of the martingale ${{\mathbb E}[U\;\vert\mathcal{F}_t]}$.
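This closing example can be checked by brute force in a toy discrete setting. A hedged sketch, assuming a filtration generated by three fair coin flips ${\epsilon_1,\epsilon_2,\epsilon_3}$ revealed one per time step, with ${U=\epsilon_1+\epsilon_2+\epsilon_3}$: the projection at time k is then the martingale of partial sums.

```python
from itertools import product

outcomes = list(product([-1, 1], repeat=3))  # 8 equally likely outcomes

def cond_exp_U(k, prefix):
    """E[U | first k flips equal prefix], averaging over consistent outcomes."""
    rest = [o for o in outcomes if o[:k] == prefix]
    return sum(sum(o) for o in rest) / len(rest)

# the projection E[U | F_k] equals the partial sum of the observed flips
for k in range(4):
    for prefix in product([-1, 1], repeat=k):
        assert cond_exp_U(k, prefix) == sum(prefix)
```

The point of the continuous-time theorem is precisely that such a version can be chosen consistently at all (stopping) times simultaneously, which enumeration cannot capture.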

## 27 December 11

### Compensators of Counting Processes

A counting process, X, is defined to be an adapted stochastic process starting from zero which is piecewise constant and right-continuous with jumps of size 1. That is, letting ${\tau_n}$ be the first time at which ${X_t=n}$, then

$\displaystyle X_t=\sum_{n=1}^\infty 1_{\{\tau_n\le t\}}.$

By the debut theorem, ${\tau_n}$ are stopping times. So, X is an increasing integer valued process counting the arrivals of the stopping times ${\tau_n}$. A basic example of a counting process is the Poisson process, for which ${X_t-X_s}$ has a Poisson distribution independently of ${\mathcal{F}_s}$, for all times ${t > s}$, and for which the gaps ${\tau_n-\tau_{n-1}}$ between the stopping times are independent exponentially distributed random variables. As we will see, although Poisson processes are just one specific example, every quasi-left-continuous counting process can actually be reduced to the case of a Poisson process by a time change. As always, we work with respect to a complete filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})}$.

Note that, as a counting process X has jumps bounded by 1, it is locally integrable and, hence, the compensator A of X exists. This is the unique right-continuous predictable and increasing process with ${A_0=0}$ such that ${X-A}$ is a local martingale. For example, if X is a Poisson process of rate ${\lambda}$, then the compensated Poisson process ${X_t-\lambda t}$ is a martingale. So, the compensator of X is the continuous process ${A_t=\lambda t}$. More generally, X is said to be quasi-left-continuous if ${{\mathbb P}(\Delta X_\tau=0)=1}$ for all predictable stopping times ${\tau}$, which is equivalent to the compensator of X being almost surely continuous. Another simple example of a counting process is ${X=1_{[\tau,\infty)}}$ for a stopping time ${\tau > 0}$, in which case the compensator of X is just the same thing as the compensator of ${\tau}$.
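A quick Monte Carlo sanity check of the Poisson case (a sketch assuming numpy; the rate, time horizon and sample size are arbitrary illustrative choices): the compensated process ${M_t=X_t-\lambda t}$ should have mean zero, and variance ${{\rm Var}(X_t)=\lambda t}$, at each fixed time.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n_paths = 2.0, 200_000
t = np.array([1.0, 2.5, 5.0])          # a few fixed times

# sample X_t at the fixed times; exact Poisson sampling suffices here
X = rng.poisson(lam * t, size=(n_paths, t.size))
M = X - lam * t                        # compensated Poisson process

# E[M_t] = 0 and Var(M_t) = lam * t, up to Monte Carlo error
assert np.all(np.abs(M.mean(axis=0)) < 0.05)
assert np.all(np.abs(M.var(axis=0) - lam * t) < 0.2)
```

This only probes the martingale property at fixed times, of course; the compensator characterizes the process through the full martingale property of ${X-A}$.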

As I will show in this post, compensators of quasi-left-continuous counting processes have many parallels with the quadratic variation of continuous local martingales. For example, Lévy’s characterization states that a local martingale X starting from zero is standard Brownian motion if and only if its quadratic variation is ${[X]_t=t}$. Similarly, as we show below, a counting process is a homogeneous Poisson process of rate ${\lambda}$ if and only if its compensator is ${A_t=\lambda t}$. It was also shown previously in these notes that a continuous local martingale X has a finite limit ${X_\infty=\lim_{t\rightarrow\infty}X_t}$ if and only if ${[X]_\infty}$ is finite. Similarly, a counting process X has finite value ${X_\infty}$ at infinity if and only if the same is true of its compensator. Another property of a continuous local martingale X is that it is constant over all intervals on which its quadratic variation is constant. Similarly, a counting process X is constant over any interval on which its compensator is constant. Finally, it is known that every continuous local martingale is simply a continuous time change of standard Brownian motion. In the main result of this post (Theorem 5), we show that a similar statement holds for counting processes. That is, every quasi-left-continuous counting process is a continuous time change of a Poisson process of rate 1.

## 20 December 11

### Compensators of Stopping Times

The previous post introduced the concept of the compensator of a process, which is known to exist for all locally integrable semimartingales. In this post, I’ll just look at the very special case of compensators of processes consisting of a single jump of unit size.

Definition 1 Let ${\tau}$ be a stopping time. The compensator of ${\tau}$ is defined to be the compensator of ${1_{[\tau,\infty)}}$.

So, the compensator A of ${\tau}$ is the unique predictable FV process such that ${A_0=0}$ and ${1_{[\tau,\infty)}-A}$ is a local martingale. Compensators of stopping times are sufficiently special that we can give an accurate description of how they behave. For example, if ${\tau}$ is predictable, then its compensator is just ${1_{\{\tau > 0\}}1_{[\tau,\infty)}}$. If, on the other hand, ${\tau}$ is totally inaccessible and almost surely finite then, as we will see below, its compensator, A, continuously increases to a value ${A_\infty}$ which has the exponential distribution.
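The exponential limit can be checked numerically. In the simplest case where the filtration is generated by ${\tau}$ itself and ${\tau}$ has a continuous distribution function F, the compensator is the integrated hazard stopped at ${\tau}$, so that ${A_\infty=-\log(1-F(\tau))}$, which is standard exponential for any such F. A Monte Carlo sketch with a Weibull choice for ${\tau}$ (an arbitrary illustrative distribution, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
tau = rng.weibull(1.5, size=n)   # F(t) = 1 - exp(-t**1.5), continuous

# integrated hazard evaluated at tau: A_inf = -log(1 - F(tau)) = tau**1.5
A_inf = tau ** 1.5

# A_inf should be standard exponential: mean 1 and median log 2
assert abs(A_inf.mean() - 1.0) < 0.01
assert abs(np.quantile(A_inf, 0.5) - np.log(2)) < 0.01
```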

However, compensators of stopping times are sufficiently general to be able to describe the compensator of any cadlag adapted process X with locally integrable variation. We can break X down into a continuous part plus a sum over its jumps,

 $\displaystyle X_t=X_0+X^c_t+\sum_{n=1}^\infty\Delta X_{\tau_n}1_{[\tau_n,\infty)}.$ (1)

Here, ${\tau_n > 0}$ are disjoint stopping times such that the union ${\bigcup_n[\tau_n]}$ of their graphs contains all the jump times of X. That they are disjoint just means that ${\tau_m\not=\tau_n}$ whenever ${\tau_n < \infty}$, for any ${m\not=n}$. As was shown in an earlier post, not only is such a sequence ${\tau_n}$ of stopping times guaranteed to exist, but each of the times can be chosen to be either predictable or totally inaccessible. As the first term, ${X^c_t}$, on the right hand side of (1) is a continuous FV process, it is by definition equal to its own compensator. So, the compensator of X is equal to ${X^c}$ plus the sum of the compensators of ${\Delta X_{\tau_n}1_{[\tau_n,\infty)}}$. This reduces compensators of locally integrable FV processes to those of processes consisting of a single jump at either a predictable or a totally inaccessible time.

## 26 May 11

### Predictable Stopping Times

Although this post is under the heading of ‘the general theory of semimartingales’, it is not, strictly speaking, about semimartingales at all. Instead, I will be concerned with a characterization of predictable stopping times. The reason for including this now is twofold. First, the results are too advanced to have been proven in the earlier post on predictable stopping times, and reasonably efficient self-contained proofs can only be given now that we have already built up a certain amount of stochastic calculus theory. Secondly, the results stated here are indispensable to the further study of semimartingales. In particular, standard semimartingale decompositions require some knowledge of predictable processes and predictable stopping times.

Recall that a stopping time ${\tau}$ is said to be predictable if there exists a sequence of stopping times ${\tau_n\le\tau}$ increasing to ${\tau}$ and such that ${\tau_n < \tau}$ whenever ${\tau > 0}$. Also, the predictable sigma-algebra ${\mathcal{P}}$ is defined as the sigma-algebra generated by the left-continuous and adapted processes. Stated like this, these two concepts can appear quite different. However, as was previously shown, stochastic intervals of the form ${[\tau,\infty)}$ for predictable times ${\tau}$ are all in ${\mathcal{P}}$ and, in fact, generate the predictable sigma-algebra.
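The announcing sequence is easy to visualize for hitting times of a continuous path: the first time ${\tau}$ at which the path reaches a level is announced by the hitting times ${\tau_n}$ of levels just below it. A discretized sketch on a time grid (the deterministic path ${X_t=t^2}$ is an arbitrary illustrative choice, assuming numpy):

```python
import numpy as np

t = np.linspace(0.0, 2.0, 200_001)   # time grid
X = t ** 2                           # continuous increasing path

def hit(level):
    """First grid time at which X reaches the given level."""
    return t[np.argmax(X >= level)]

tau = hit(1.0)                               # tau = 1, up to grid resolution
tau_n = np.array([hit(1.0 - 1.0 / n) for n in range(1, 101)])

assert np.all(np.diff(tau_n) >= 0)   # the tau_n increase,
assert np.all(tau_n < tau)           # stay strictly below tau,
assert abs(tau_n[-1] - tau) < 0.01   # and converge to it
```

For a right-continuous path jumping across the level, no such announcing levels exist, which is the intuition behind totally inaccessible jump times.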

The main result (Theorem 1) of this post is to show that a converse statement holds, so that ${[\tau,\infty)}$ is in ${\mathcal{P}}$ if and only if the stopping time ${\tau}$ is predictable. This rather simple sounding result does have many far-reaching consequences. We can use it to show that all cadlag predictable processes are locally bounded, that local martingales are predictable if and only if they are continuous, and also to give a characterization of cadlag predictable processes in terms of their jumps. Some very strong statements about stopping times also follow without much difficulty for certain special stochastic processes. For example, if the underlying filtration is generated by a Brownian motion then every stopping time is predictable. Actually, this is true whenever the filtration is generated by a continuous Feller process. It is also possible to give a surprisingly simple characterization of stopping times for filtrations generated by arbitrary non-continuous Feller processes. Precisely, a stopping time ${\tau}$ is predictable if the underlying Feller process is almost surely continuous at time ${\tau}$, and is totally inaccessible if the process is almost surely discontinuous at ${\tau}$.

As usual, we work with respect to a complete filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\in{\mathbb R}_+},{\mathbb P})}$. I now give a statement and proof of the main result of this post. Note that the equivalence of the four conditions below means that any of them can be used as alternative definitions of predictable stopping times. Often, the first condition below is used instead. Stopping times satisfying the definition used in these notes are sometimes called announceable, with the sequence ${\tau_n\uparrow\tau}$ said to announce ${\tau}$ (this terminology is used by, e.g., Rogers & Williams). Stopping times satisfying property 3 below, which is easily seen to be equivalent to 2, are sometimes called fair. Then, the following theorem says that the sets of predictable, fair and announceable stopping times all coincide.

Theorem 1 Let ${\tau}$ be a stopping time. Then, the following are equivalent.

1. ${[\tau]\in\mathcal{P}}$.
2. ${\Delta M_\tau1_{[\tau,\infty)}}$ is a local martingale for all local martingales M.
3. ${{\mathbb E}[1_{\{\tau < \infty\}}\Delta M_\tau]=0}$ for all cadlag bounded martingales M.
4. ${\tau}$ is predictable.

## 24 December 09

### Local Martingales

Recall from the previous post that a cadlag adapted process ${X}$ is a local martingale if there is a sequence ${\tau_n}$ of stopping times increasing to infinity such that the stopped processes ${1_{\{\tau_n>0\}}X^{\tau_n}}$ are martingales. Local submartingales and local supermartingales are defined similarly.

An example of a local martingale which is not a martingale is given by the ‘double-loss’ gambling strategy. Interestingly, in 18th century France, such strategies were known as martingales, which is the origin of the mathematical term. Suppose that a gambler is betting sums of money, with even odds, on a simple win/lose game. For example, betting that a coin toss comes up heads. He could bet one dollar on the first toss and, if he loses, double his stake to two dollars for the second toss. If he loses again, then he is down three dollars and doubles the stake again to four dollars. If he keeps on doubling the stake after each loss in this way, then he is always gambling one more dollar than the total losses so far. He only needs to continue in this way until the coin eventually does come up heads, and he walks away with net winnings of one dollar. This therefore describes a fair game where, eventually, the gambler is guaranteed to win.

Of course, this is not an effective strategy in practice. The losses grow exponentially and, if he doesn’t win quickly, the gambler must hit his credit limit in which case he loses everything. All that the strategy achieves is to trade a large probability of winning a dollar against a small chance of losing everything. It does, however, give a simple example of a local martingale which is not a martingale.

The gambler’s winnings can be defined by a stochastic process ${\{Z_n\}_{n=1,\ldots}}$ representing his net gain (or loss) just before the n’th toss. Let ${\epsilon_1,\epsilon_2,\ldots}$ be a sequence of independent random variables with ${{\mathbb P}(\epsilon_n=1)={\mathbb P}(\epsilon_n=-1)=1/2}$. Here, ${\epsilon_n}$ represents the outcome of the n’th toss, with 1 referring to a head and -1 referring to a tail. Set ${Z_1=0}$ and

$\displaystyle Z_{n}=\begin{cases} 1,&\text{if }Z_{n-1}=1,\\ Z_{n-1}+\epsilon_n(1-Z_{n-1}),&\text{otherwise}. \end{cases}$

This is a martingale with respect to its natural filtration, starting at zero and, eventually, ending up equal to one. It can be converted into a local martingale by speeding up the time scale to fit infinitely many tosses into a unit time interval

$\displaystyle X_t=\begin{cases} Z_n,&\text{if }1-1/n\le t<1-1/(n+1),\\ 1,&\text{if }t\ge 1. \end{cases}$

This is a martingale with respect to its natural filtration on the time interval ${[0,1)}$. Letting ${\tau_n=\inf\{t\colon\vert X_t\vert\ge n\}}$ then the optional stopping theorem shows that ${X^{\tau_n}_t}$ is a uniformly bounded martingale on ${t<1}$, continuous at ${t=1}$, and constant on ${t\ge 1}$. This is therefore a martingale, showing that ${X}$ is a local martingale. However, ${{\mathbb E}[X_1]=1\not={\mathbb E}[X_0]=0}$, so it is not a martingale.
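The doubling strategy is easy to simulate. A sketch of the discrete winnings ${Z_n}$ from the recursion above (assuming numpy; the sample size and toss count are arbitrary choices): at each fixed n the sample mean of ${Z_n}$ is close to zero, in line with the martingale property, while almost every path has been absorbed at the winning value 1.

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_tosses = 400_000, 20
eps = 2 * rng.integers(0, 2, size=(n_paths, n_tosses)) - 1  # fair +/-1 tosses

Z = np.zeros(n_paths)
early_means = []
for n in range(n_tosses):
    active = Z != 1.0                     # paths which have not yet won
    Z[active] += eps[active, n] * (1.0 - Z[active])
    if n < 10:
        early_means.append(Z.mean())

# E[Z_n] = 0 at each fixed n, up to Monte Carlo error...
assert max(abs(m) for m in early_means) < 0.3
# ...yet almost every path has been absorbed at the winning value 1
assert (Z == 1.0).mean() > 0.999
```

Note that at large n the sample mean becomes unreliable: the rare losing paths carry values near ${1-2^n}$, and this heavy tail is exactly what allows ${X}$ to be a local martingale without being a martingale.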

## 23 December 09

### Localization

Special classes of processes, such as martingales, are very important to the study of stochastic calculus. In many cases, however, processes under consideration ‘almost’ satisfy the martingale property, but are not actually martingales. This occurs, for example, when taking limits or stochastic integrals with respect to martingales. It is necessary to generalize the martingale concept to that of local martingales. More generally, localization is a method of extending a given property to a larger class of processes. In this post I mention a few definitions and simple results concerning localization, and look more closely at local martingales in the next post.

Definition 1 Let P be a class of stochastic processes. Then, a process X is locally in P if there exists a sequence of stopping times ${\tau_n\uparrow\infty}$ such that the stopped processes

$\displaystyle 1_{\{\tau_n>0\}}X^{\tau_n}$

are in P. The sequence ${\tau_n}$ is called a localizing sequence for X (w.r.t. P).

I write ${P_{\rm loc}}$ for the processes locally in P. Choosing the sequence ${\tau_n\equiv\infty}$ of stopping times shows that ${P\subseteq P_{\rm loc}}$. A class of processes is said to be stable if ${1_{\{\tau>0\}}X^\tau}$ is in P whenever X is, for all stopping times ${\tau}$. For example, the optional stopping theorem shows that the classes of cadlag martingales, cadlag submartingales and cadlag supermartingales are all stable.

Definition 2 A process is:

1. a local martingale if it is locally in the class of cadlag martingales;
2. a local submartingale if it is locally in the class of cadlag submartingales;
3. a local supermartingale if it is locally in the class of cadlag supermartingales.
