The number of times that a process passes upwards or downwards through an interval is referred to as the number of *upcrossings* and, respectively, the number of *downcrossings* of the process.

Consider a process $X$ whose time index $t$ runs through an index set $\mathbb{T}\subseteq\mathbb{R}$. For real numbers $a<b$, the number of upcrossings of $X$ across the interval $[a,b]$ is the supremum of the nonnegative integers $n$ such that there exist times $s_k,t_k\in\mathbb{T}$ satisfying

$$s_1<t_1<s_2<t_2<\cdots<s_n<t_n \qquad (1)$$

and for which $X_{s_k}<a<b<X_{t_k}$. The number of upcrossings is denoted by $U[a,b]$, which is either a nonnegative integer or is infinite. Similarly, the number of downcrossings, denoted by $D[a,b]$, is the supremum of the nonnegative integers $n$ such that there are times satisfying (1) and such that $X_{s_k}>b>a>X_{t_k}$.

Note that between any two upcrossings there is a downcrossing and, similarly, there is an upcrossing between any two downcrossings. It follows that $U[a,b]$ and $D[a,b]$ can differ by at most 1, and they are either both finite or both infinite.
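As a quick sanity check on these definitions, the following Python sketch (my own code, not from the post) counts completed upcrossings and downcrossings of a finite sequence and confirms that the two counts differ by at most 1.

```python
def upcrossings(x, a, b):
    """Completed upcrossings of [a, b]: a drop strictly below a,
    followed by a rise strictly above b."""
    count, below = 0, False
    for v in x:
        if not below and v < a:
            below = True
        elif below and v > b:
            below = False
            count += 1
    return count

def downcrossings(x, a, b):
    """Completed downcrossings of [a, b]: above b, then below a."""
    count, above = 0, False
    for v in x:
        if not above and v > b:
            above = True
        elif above and v < a:
            above = False
            count += 1
    return count

# An oscillating sequence crossing the interval [0, 1] repeatedly.
x = [2, -1, 2, -1, 2, -1]
u, d = upcrossings(x, 0, 1), downcrossings(x, 0, 1)
print(u, d)  # prints: 2 3
```

The greedy scan counts exactly the supremum in the definition, since any family of times witnessing $n$ upcrossings can be pushed to the earliest available crossing times.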

The significance of the upcrossings of a process to convergence results is due to the following criterion for convergence of a sequence.

**Theorem 1** A sequence $x_1,x_2,\ldots\in\mathbb{R}$ converges to a limit in the extended real numbers $\bar{\mathbb{R}}=\mathbb{R}\cup\{\pm\infty\}$ if and only if the number of upcrossings $U[a,b]$ is finite for all $a<b$.

*Proof:* Denote the infimum limit and supremum limit by

$$l=\liminf_{n\to\infty}x_n,\qquad u=\limsup_{n\to\infty}x_n.$$

Then $l\le u$, and the sequence converges to a limit in $\bar{\mathbb{R}}$ if and only if $l=u$.

First, suppose that the sequence converges to a limit $x\in\bar{\mathbb{R}}$, and pick any $a<b$. If $x>a$ then there is an $N$ such that $x_n>a$ for all $n\ge N$. So, all upcrossings of $[a,b]$ must start before time $N$, and we may conclude that $U[a,b]$ is finite. On the other hand, if $x\le a$ then $x<b$ and we can infer that $x_n<b$ for all $n\ge N$ and some $N$. Again, this gives $U[a,b]<\infty$.

Conversely, suppose that the sequence does not converge, so that $l<u$. Then choose $a<b$ in the interval $(l,u)$. For any integer $m$, there is then an $n\ge m$ such that $x_n<a$ and an $n\ge m$ with $x_n>b$. This allows us to define infinite sequences of times by $t_0=0$ and

$$s_k=\inf\left\{n>t_{k-1}\colon x_n<a\right\},\qquad t_k=\inf\left\{n>s_k\colon x_n>b\right\}$$

for $k\ge 1$. Clearly, $s_k<t_k<s_{k+1}$ and $x_{s_k}<a<b<x_{t_k}$ for all $k$, so $U[a,b]=\infty$.

This convergence criterion is particularly suited to applications in stochastic calculus and martingale theory because bounds for the number of upcrossings and downcrossings of a process can be obtained in terms of stochastic integrals.

In these stochastic calculus notes, I am mainly concerned with stochastic processes with time index running through the positive real numbers. However, for the purposes of this post, it is useful to consider arbitrary index sets $\mathbb{T}\subseteq\mathbb{R}$. We work under a probability space $(\Omega,\mathcal{F},\mathbb{P})$ and a filtration of sigma-algebras $\{\mathcal{F}_t\}_{t\in\mathbb{T}}$. Consider an elementary predictable process of the form

$$\xi=\sum_{k=1}^n Z_k 1_{(s_k,t_k]}$$

for times $s_k\le t_k$ in $\mathbb{T}$ and $\mathcal{F}_{s_k}$-measurable random variables $Z_k$. Its integral with respect to a process $X$ is given by

$$\int\xi\,dX=\sum_{k=1}^n Z_k\left(X_{t_k}-X_{s_k}\right).$$

In particular, this includes the processes $\xi=1_A$ for *elementary predictable sets* $A$ of the form

$$A=\bigcup_{k=1}^n\left((s_k,t_k]\times F_k\right) \qquad (2)$$

for times $s_k\le t_k$ in $\mathbb{T}$ and sets $F_k\in\mathcal{F}_{s_k}$.

The number of upcrossings of an interval $[a,b]$ by a process $X$ with time running over a finite index set $\mathbb{T}$ can be computed as follows. Set $t_0=\min\mathbb{T}$ and define $s_n$ and $t_n$ by

$$s_n=\min\left\{t\in\mathbb{T}\colon t\ge t_{n-1},\,X_t<a\right\},\qquad t_n=\min\left\{t\in\mathbb{T}\colon t\ge s_n,\,X_t>b\right\},$$

with the convention that the minimum of the empty set is $\infty$. It is easily seen that $X_{s_n}<a<b<X_{t_n}$ whenever $t_n<\infty$ and, if $X_s<a<b<X_t$ for times $s<t$ in $\mathbb{T}$, then $s_n\le s<t_n\le t$ for some $n$. The number of upcrossings of $[a,b]$ is therefore equal to the maximum $n$ such that $t_n<\infty$. So, $U[a,b]=\max\{n\colon t_n<\infty\}$ and, letting $\xi$ be the elementary process $\xi=\sum_n 1_{(s_n,t_n]}$ and writing $T=\max\mathbb{T}$, gives the following inequality (see also the figure above),

$$\int\xi\,dX\ge(b-a)U[a,b]-(X_T-a)^-.$$

The process $\xi$ takes only the values 0 and 1, so is equal to $1_A$ for an elementary predictable set $A$ of the form (2). This gives the following bound for the upcrossings of a process which, by the results of the previous post, is particularly suited to martingale convergence results.
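To see the pathwise bound $(b-a)U[a,b]\le\int 1_A\,dX+(X_T-a)^-$ in action, here is a small sketch of my own (the function and variable names are not from the post). It computes the alternating times $s_n,t_n$ for simulated random-walk paths and checks the inequality on each path, where the integral sums the increments over the upcrossing intervals, with an incomplete final upcrossing running to the terminal time.

```python
import random

def crossing_times(x, a, b):
    """Alternating times: s_n is the first index with x < a at or after
    t_{n-1}, and t_n is the first subsequent index with x > b."""
    s, t = [], []
    k, n = 0, len(x)
    while k < n:
        while k < n and x[k] >= a:   # search for a drop strictly below a
            k += 1
        if k == n:
            break
        s.append(k)
        while k < n and x[k] <= b:   # then a rise strictly above b
            k += 1
        if k == n:
            break
        t.append(k)
    return s, t

rng = random.Random(1)
a, b = -1, 2
for _ in range(200):
    path, pos = [], 0
    for _ in range(60):              # simple symmetric random walk path
        pos += rng.choice((-1, 1))
        path.append(pos)
    s, t = crossing_times(path, a, b)
    U = len(t)                       # completed upcrossings of [a, b]
    # integral of 1_A over the path: increments across the upcrossing
    # intervals; an incomplete final upcrossing runs to the last time
    integral = sum(path[t[i]] - path[s[i]] for i in range(U))
    if len(s) > U:
        integral += path[-1] - path[s[-1]]
    assert (b - a) * U <= integral + max(a - path[-1], 0)
print("pathwise upcrossing bound verified on 200 paths")
```

Each completed upcrossing contributes at least $b-a$ to the integral, and the incomplete final one contributes at least $-(X_T-a)^-$, which is exactly the content of the bound.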

**Lemma 2** Let $X$ be a stochastic process with time running over the finite index set $\mathbb{T}$, and set $T=\max\mathbb{T}$. Then, for any real numbers $a<b$, there exists an elementary predictable set $A$ of the form (2) such that

$$(b-a)U[a,b]\le\int 1_A\,dX+(X_T-a)^-.\qquad (3)$$

#### Martingale Convergence

The considerations above lead to the following bound on the expected number of upcrossings of a martingale.

**Lemma 3 (Doob’s upcrossing lemma)** Let $X$ be a supermartingale with time running through a countable index set $\mathbb{T}$. The number of upcrossings of any interval $[a,b]$ satisfies

$$(b-a)\mathbb{E}\left[U[a,b]\right]\le\sup_{t\in\mathbb{T}}\mathbb{E}\left[(X_t-a)^-\right].$$

*Proof:* It is enough to prove this for finite index sets $\mathbb{T}$. The result for countably infinite sets then follows by taking a sequence of finite sets increasing to $\mathbb{T}$ and applying monotone convergence.

So, suppose that $\mathbb{T}$ is finite and set $T=\max\mathbb{T}$. Equation (3) can be applied,

$$(b-a)\mathbb{E}\left[U[a,b]\right]\le\mathbb{E}\left[\int 1_A\,dX\right]+\mathbb{E}\left[(X_T-a)^-\right]\le\mathbb{E}\left[(X_T-a)^-\right].$$

The second inequality follows from the fact that $X$ is a supermartingale (equivalently, $-X$ is a submartingale) and $1_A$ is a bounded nonnegative elementary predictable process, so $\mathbb{E}\left[\int 1_A\,dX\right]\le 0$.

Martingale convergence is a consequence of the upcrossing lemma. The following says that any $L^1$-bounded martingale in discrete time converges almost surely.

**Theorem 4 (Doob’s Forward Convergence Theorem)** Let $X$ be a martingale (or submartingale, or supermartingale) with time running through the countable index set $\mathbb{T}$, such that $\mathbb{E}\left[|X_t|\right]$ is bounded over all $t\in\mathbb{T}$. Then, with probability one, the limit $X_\infty=\lim_{t\to\infty}X_t$ exists and is finite.

*Proof:* It is enough to consider the case where $X$ is a supermartingale, as the submartingale case then follows by applying the result to $-X$. In this case, Lemma 3 shows that the expected number of upcrossings of any interval $[a,b]$ is finite, so $U[a,b]<\infty$ almost surely. By countable additivity, $U[a,b]<\infty$ simultaneously for all rational $a<b$ (and therefore, for all real $a<b$) with probability one. Consequently, by Theorem 1, the limit $X_\infty=\lim_{t\to\infty}X_t$ exists in the extended real numbers, and Fatou’s lemma gives

$$\mathbb{E}\left[|X_\infty|\right]\le\liminf_{t\to\infty}\mathbb{E}\left[|X_t|\right]<\infty,$$

so $X_\infty$ is almost surely finite.

A process $X$ is said to be *uniformly integrable* if the set of random variables $\{X_t\colon t\in\mathbb{T}\}$ is uniformly integrable. This case is particularly nice.

**Corollary 5** Let $X$ be a uniformly integrable martingale with time running through the countable index set $\mathbb{T}$, and let $\mathcal{F}_\infty=\sigma\left(\mathcal{F}_t\colon t\in\mathbb{T}\right)$ be the sigma-algebra at infinity. Then, there is an almost-surely unique and integrable $\mathcal{F}_\infty$-measurable random variable $X_\infty$ such that $X_t=\mathbb{E}\left[X_\infty\mid\mathcal{F}_t\right]$ for each $t\in\mathbb{T}$. Furthermore, $X_t\to X_\infty$ as $t\to\infty$, with probability one.

*Proof:* First, uniform integrability implies $L^1$-boundedness so, by Doob’s forward convergence theorem, we can set $X_\infty=\lim_{t\to\infty}X_t$, which exists with probability one. By uniform integrability, this limit also holds in $L^1$, and it follows that $\mathbb{E}\left[X_s\mid\mathcal{F}_t\right]$, which equals $X_t$ for all $s\ge t$, converges to $\mathbb{E}\left[X_\infty\mid\mathcal{F}_t\right]$ as $s\to\infty$. So, $X_\infty$ satisfies the required properties.

Suppose $Y$ is any other random variable satisfying the required properties. Then, $\mathbb{E}\left[1_B X_\infty\right]=\mathbb{E}\left[1_B X_t\right]=\mathbb{E}\left[1_B Y\right]$ for any $t\in\mathbb{T}$ and $B\in\mathcal{F}_t$. By the monotone class theorem, this extends to all $B\in\mathcal{F}_\infty$. As $X_\infty$ and $Y$ are $\mathcal{F}_\infty$-measurable, it follows that they are almost surely equal.

The condition that $X$ is $L^1$-bounded in Doob’s forward convergence theorem is automatically satisfied in many cases. In particular, if $X$ is a non-negative supermartingale then $\mathbb{E}\left[|X_t|\right]=\mathbb{E}\left[X_t\right]\le\mathbb{E}\left[X_{t_0}\right]$ for all $t$, where $t_0=\min\mathbb{T}$, so $\mathbb{E}\left[|X_t|\right]$ is bounded, giving the following corollary.

**Corollary 6** Let $X$ be a non-negative martingale (or supermartingale). Then, with probability one, the limit $X_\infty=\lim_{t\to\infty}X_t$ exists and is finite.

As an example application of the martingale convergence theorem, it is easy to show that a standard random walk started at $0$ will visit every level with probability one.

**Corollary 7** Let $X_n$ be a standard random walk. That is, $X_0=0$ and the increments $X_n-X_{n-1}$ are independent with

$$\mathbb{P}\left(X_n-X_{n-1}=1\right)=\mathbb{P}\left(X_n-X_{n-1}=-1\right)=1/2.$$

Then, for every integer $a$, with probability one $X_n=a$ for some $n$.

*Proof:* Without loss of generality, suppose that $a\le 0$ (otherwise, replace $X$ by $-X$). Let $T$ be the first time $n$ for which $X_n=a$. It is easy to see that the stopped process $X^T$ defined by $X^T_n=X_{T\wedge n}$ is a martingale and $X^T-a$ is non-negative. Therefore, by the martingale convergence theorem, the limit $\lim_{n\to\infty}X^T_n$ exists and is finite (almost surely). In particular, $X^T_n-X^T_{n-1}$ converges to $0$ and must be less than $1$ for large $n$. However, $|X^T_n-X^T_{n-1}|=1$ whenever $n\le T$, so we have $T<\infty$ and therefore $X_n=a$ for some $n$.
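As a quick numerical companion to this corollary, the following sketch of my own (the function names are mine, not from the post) simulates simple symmetric random walks and records the first time each one hits a given level. In line with the corollary, essentially every simulated walk reaches the level well within the step budget, although hitting times have a heavy tail since their expectation is infinite.

```python
import random

def first_hitting_time(a, max_steps=10**5, rng=random):
    """First n with X_n == a for a simple symmetric random walk started
    at 0, or None if the level is not hit within max_steps."""
    x = 0
    for n in range(1, max_steps + 1):
        x += rng.choice((-1, 1))
        if x == a:
            return n
    return None

random.seed(2)
hits = [first_hitting_time(-3) for _ in range(100)]
print(sum(h is not None for h in hits), "of 100 walks hit level -3")
```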

#### Levy’s Upwards and Downwards Theorems

A very useful application of martingale convergence is to the following results concerning limits of conditional expectations for a limit of increasing or decreasing sigma-algebras.

**Theorem 8 (Levy’s upward theorem)** Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space and $\mathcal{F}_1\subseteq\mathcal{F}_2\subseteq\cdots$ be an increasing sequence of sub-sigma-algebras of $\mathcal{F}$. Set $\mathcal{F}_\infty=\sigma\left(\bigcup_n\mathcal{F}_n\right)$. For any integrable random variable $Z$, the following limit holds with probability one,

$$\mathbb{E}\left[Z\mid\mathcal{F}_n\right]\to\mathbb{E}\left[Z\mid\mathcal{F}_\infty\right]$$

as $n\to\infty$. This limit also holds under $L^1$-convergence.
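For a concrete illustration of the upward theorem (a sketch of my own; the setup and names are not from the post), take $Z=f(U)$ with $U$ uniform on $[0,1]$ and let $\mathcal{F}_n$ be generated by the first $n$ binary digits of $U$. Then $\mathbb{E}[Z\mid\mathcal{F}_n]$ is the average of $f$ over the dyadic interval of length $2^{-n}$ containing $U$, which converges to $f(U)=Z$ as $n\to\infty$.

```python
def cond_exp(f, u, n, grid=4096):
    """E[f(U) | first n binary digits of U], evaluated at U = u: the
    average of f over the dyadic interval of length 2**-n containing u,
    computed by a midpoint Riemann sum."""
    lo = int(u * 2**n) / 2**n      # left endpoint of the dyadic interval
    width = 2.0**-n
    return sum(f(lo + width * (k + 0.5) / grid) for k in range(grid)) / grid

# E[U^2 | F_n] at U = 0.3 for increasing n: approaches 0.3**2 = 0.09
f = lambda x: x * x
approx = [cond_exp(f, 0.3, n) for n in (1, 4, 8, 12)]
print(approx)
```

Here $\mathcal{F}_\infty$ is the whole Borel sigma-algebra on $[0,1]$, so the limiting conditional expectation is $Z$ itself.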

**Theorem 9 (Levy’s downward theorem)** Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space and $\mathcal{F}_1\supseteq\mathcal{F}_2\supseteq\cdots$ be a decreasing sequence of sub-sigma-algebras of $\mathcal{F}$. Set $\mathcal{F}_\infty=\bigcap_n\mathcal{F}_n$. For any integrable random variable $Z$, the following limit holds with probability one,

$$\mathbb{E}\left[Z\mid\mathcal{F}_n\right]\to\mathbb{E}\left[Z\mid\mathcal{F}_\infty\right]$$

as $n\to\infty$. This limit also holds under $L^1$-convergence.

The upwards theorem is nothing more than a direct application of the convergence theorem for uniformly integrable martingales. The downwards theorem follows in a similar way.

For each negative integer $n$, set $\mathcal{G}_n=\mathcal{F}_{-n}$ and $M_n=\mathbb{E}\left[Z\mid\mathcal{G}_n\right]$. This is a martingale with respect to the filtration $\{\mathcal{G}_n\}_{n=-1,-2,\ldots}$ and, by Doob’s upcrossing lemma, almost surely has finitely many upcrossings of any interval $[a,b]$. So the limit $M_{-\infty}=\lim_{n\to-\infty}M_n=\lim_{m\to\infty}\mathbb{E}\left[Z\mid\mathcal{F}_m\right]$ exists almost-surely. By uniform integrability of conditional expectations, this must also converge in the $L^1$-norm. In particular, $M_{-\infty}$ is integrable and, being a limit of $\mathcal{F}_m$-measurable variables for every $m$, is $\mathcal{F}_\infty$-measurable. Taking limits of the equality $\mathbb{E}\left[1_B M_n\right]=\mathbb{E}\left[1_B Z\right]$ for $B\in\mathcal{F}_\infty$ gives

$$\mathbb{E}\left[1_B M_{-\infty}\right]=\mathbb{E}\left[1_B Z\right],$$

proving Levy’s downwards theorem.

Note that, due to the uniform integrability of conditional expectations, $L^1$-convergence immediately follows from almost-sure convergence in Levy’s upwards and downwards theorems.

#### The Strong Law of Large Numbers

Roughly speaking, the *strong law of large numbers* says that the sample mean of a set of independent and identically distributed random variables tends to the mean, with probability one, as the sample size tends to infinity.

More precisely, this can be stated as follows. Suppose that $X_1,X_2,\ldots$ is a sequence of IID random variables. Set $S_n=X_1+\cdots+X_n$. If the random variables are integrable with mean $\mu=\mathbb{E}\left[X_1\right]$ then

$$\frac{S_n}{n}\to\mu$$

with probability one as $n\to\infty$. There is also a weak law of large numbers, which states that the limit above holds in probability. However, as convergence in probability always follows from almost sure convergence, the weak law is a direct consequence of the strong law.
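Numerically, the strong law is easy to observe. Here is a small sketch of my own (the exponential distribution and all names are my choice, not from the post), tracking the sample mean at a few sample sizes:

```python
import random

random.seed(3)
mu = 2.0                              # common mean of the i.i.d. variables
n = 200_000
s, means = 0.0, []
for k in range(1, n + 1):
    s += random.expovariate(1 / mu)   # X_k exponential with mean mu
    if k in (100, 10_000, n):
        means.append(s / k)           # sample mean S_k / k
print(means)  # drifts toward mu = 2.0 as k grows
```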

I mention the strong law as an example of a well-known result which is quite hard to prove from elementary ideas, but follows easily from martingale convergence. The strong law actually follows from Levy’s downwards theorem.

For each $n$, set $\mathcal{G}_n=\sigma\left(S_n,X_{n+1},X_{n+2},\ldots\right)$, which gives a decreasing sequence of sigma-algebras. Symmetry under permuting $X_1,\ldots,X_n$ gives

$$\mathbb{E}\left[X_1\mid\mathcal{G}_n\right]=\mathbb{E}\left[X_2\mid\mathcal{G}_n\right]=\cdots=\mathbb{E}\left[X_n\mid\mathcal{G}_n\right].$$

Taking the average of these terms,

$$\mathbb{E}\left[X_1\mid\mathcal{G}_n\right]=\frac{1}{n}\sum_{k=1}^n\mathbb{E}\left[X_k\mid\mathcal{G}_n\right]=\mathbb{E}\left[\frac{S_n}{n}\,\Big|\,\mathcal{G}_n\right]=\frac{S_n}{n}.$$

So, setting $\mathcal{G}_\infty=\bigcap_n\mathcal{G}_n$, Levy’s downwards theorem shows that $S_n/n\to\mathbb{E}\left[X_1\mid\mathcal{G}_\infty\right]$ with probability one, and this limit is a random variable with mean $\mathbb{E}\left[X_1\right]=\mu$. The final step is to apply Kolmogorov’s zero-one law: the limit of $S_n/n$ is measurable with respect to the tail sigma-algebra of the sequence, so is almost-surely constant, and must therefore equal its mean $\mu$.
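The identity $\mathbb{E}[X_1\mid\mathcal{G}_n]=S_n/n$ can also be checked empirically, since conditioning on $S_n$ alone suffices here (the later variables are independent of $X_1$ and $S_n$). A sketch of my own, using Bernoulli variables with $n=5$ (all names are mine, not from the post):

```python
import random
from collections import defaultdict

random.seed(4)
n, trials = 5, 100_000
totals = defaultdict(lambda: [0.0, 0])   # S_n value -> [sum of X_1, count]
for _ in range(trials):
    xs = [random.randint(0, 1) for _ in range(n)]
    acc = totals[sum(xs)]
    acc[0] += xs[0]                      # accumulate X_1 given this S_n
    acc[1] += 1
# the empirical E[X_1 | S_n = s] should be close to s/n for each s
for s in sorted(totals):
    tot, cnt = totals[s]
    print(s, round(tot / cnt, 3), s / n)
```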

Hi,

As I was reading (once more) your blog, I have a few remarks about this article,

I noticed that you mention a set $B$ in lemma 2 which does not appear (anymore?) in the equation (3) (or (4)?).

(Maybe related to the preceding remark) In the proof of lemma 3, you mention a second inequality which is missing in the claim.

After the proof of Theorem 4, you make the following statement that I don’t really understand :

“A process is uniformly integrable if each of the random variables are uniformly integrable.” As uniform integrability is usually about a family of random variables, I don’t really see what you mean by “each is uniformly integrable”, unless this is only the statement that an integrable random variable is uniformly integrable.

Best Regards

Comment by TheBridge — 17 October 11 @ 2:41 PM |

Yeah, this is rather sloppy. I’ll edit it when I have time. [Edit: This is fixed now.]

About the uniform integrability, I should have said “…if the set of random variables $\{X_t\colon t\in\mathbb{R}_+\}$ is uniformly integrable”. And, the set $B$ isn’t used, it should be deleted. Thanks for pointing these out.

Comment by George Lowther — 18 October 11 @ 1:15 AM

Dear George

thanks for your great blog, I learn a lot from it. I have a question:

you wrote

symmetry under permuting $X_1,\ldots,X_n$ gives

$$\mathbb{E}\left[X_1\mid\mathcal{G}_n\right]=\mathbb{E}\left[X_2\mid\mathcal{G}_n\right]=\cdots=\mathbb{E}\left[X_n\mid\mathcal{G}_n\right]$$

can you explain more on this? why do you have the above “equal signs”?

thanks

Comment by nguyen — 4 November 11 @ 5:14 PM |

Hi nguyen,

This follows from the following lemma :

If you are given a symmetric function $f(x_1,\ldots,x_n)$ and i.i.d. integrable (or positive) random variables $X_1,\ldots,X_n$ then $\mathbb{E}\left[X_i\mid f(X_1,\ldots,X_n)\right]=\mathbb{E}\left[X_1\mid f(X_1,\ldots,X_n)\right]$ almost surely for every $i$.

Then take $f(x_1,\ldots,x_n)=x_1+\cdots+x_n$ and you have almost what you are looking for. To finish the work neatly you have to notice that $\mathbb{E}\left[X_i\mid\mathcal{G}_n\right]=\mathbb{E}\left[X_i\mid S_n\right]$ almost surely for $i\le n$, by noting that $\mathcal{G}_n=\sigma\left(S_n,X_{n+1},X_{n+2},\ldots\right)$, and that $\sigma\left(X_{n+1},X_{n+2},\ldots\right)$ is independent of $\sigma\left(X_i,S_n\right)$ for every $i\le n$.

You should try to prove the lemma by going back to definition of conditional expectation, using the symmetry of and the fact that are i.i.d. and integrable together with the following hint :

when conditioning with respect to some real integrable random variable $Y$, for every bounded random variable $Z$ there exists a measurable function $h$ such that $\mathbb{E}\left[Z\mid Y\right]=h(Y)$ almost surely.

I hope it helps you and that this reply will save some time to George Lowther so he can write some new great posts ;-)

Best regards

Comment by TheBridge — 7 November 11 @ 2:45 PM |

TheBridge: Thanks! I’ll add that here we have $f(X_1,\ldots,X_n)=X_1+\cdots+X_n$. So, the equality comes down to showing that $\mathbb{E}\left[X_i\,g(X_1+\cdots+X_n)\right]=\mathbb{E}\left[X_j\,g(X_1+\cdots+X_n)\right]$ for bounded measurable functions $g$. This follows from symmetry of the distribution of $(X_1,\ldots,X_n)$.

Comment by George Lowther — 7 November 11 @ 9:52 PM

Dear Goerge,

I am curious as to whether the times at which an elementary process has jumps need to be stopping times in order for the process to be adapted.

For example, the times s_k and t_k defined before lemma 2 don’t seem to be necessarily stopping times, and so I am somewhat confused as to what the filtration F_{s_k} and F_{t_k} would be.

By the way thanks again for the reply about the joint measurability in a previous post. It was very useful.

Comment by Tigran — 7 December 11 @ 6:39 PM |

Actually the problem is resolved if the time domain of X_t is finite, which might be what you are assuming implicitly.

Comment by Tigran — 8 December 11 @ 9:39 AM |

Hi Tigran,

Here all the times $s_k,t_k$ are deterministic and bounded by $T$, so there is no stopping time issue (notice however that deterministic times are trivially stopping times).

George Lowther might confirm this, but in the context of these notes, elementary predictable processes do not use stopping times in their definition, only deterministic times (check the “Martingales and Elementary Integrals” post). But it is true that other authors use them to define elementary processes (for example, in Protter’s book), maybe this is where your question is coming from.

Best Regards

Comment by TheBridge — 8 December 11 @ 12:51 PM |

TheBridge: Thanks for that comment, it more or less covers what I was thinking too. I’ll add also that, in lemma 2 (and the bit just before its statement), the times $s_n,t_n$ are stopping times. This only works because the index set here is finite. Also, because the index set is finite, the set $A$ in (2) can be written as a union over a finite set of deterministic times, so is elementary (I could have stated this more clearly in the post).

As you mention, the definition of elementary sets and processes is not standardised and differs between texts. Sometimes “simple” is used instead. I tend to use elementary to refer to processes defined using a finite set of deterministic times and simple for the same thing with stopping times. There was no need to consider simple processes here, so I haven’t used them in these notes.

Comment by George Lowther — 8 December 11 @ 1:52 PM

Thanks for your feedback. The reason I asked is that this also comes up further in the notes (e.g. the proof of cadlag versions in the next post), where things seem to rely on the finiteness of the index set to make the hitting times of X_t stopping times, and then extend it to countable sets using the MCT.

Comment by Tigran — 10 December 11 @ 1:58 PM