Girsanov transformations describe how Brownian motion and, more generally, local martingales behave under changes of the underlying probability measure. Let us start with a much simpler identity applying to normal random variables. Suppose that $X$ and $Y=(Y^1,\ldots,Y^n)$ are jointly normal random variables defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$. Then $U=\exp\left(X-\mathbb{E}[X]-\tfrac12{\rm Var}(X)\right)$ is a positive random variable with expectation 1, and a new measure $\mathbb{Q}$ can be defined by $\mathbb{Q}(A)=\mathbb{E}[1_AU]$ for all sets $A\in\mathcal{F}$. Writing $\mathbb{E}_{\mathbb{Q}}$ for expectation under the new measure, then $\mathbb{E}_{\mathbb{Q}}[Z]=\mathbb{E}[UZ]$ for all bounded random variables $Z$. The expectation of a bounded measurable function $f$ of $Y$ under the new measure is

$$\mathbb{E}_{\mathbb{Q}}\left[f(Y)\right]=\mathbb{E}\left[f\left(Y+{\rm Cov}(X,Y)\right)\right]\qquad(1)$$

where ${\rm Cov}(X,Y)$ is the covariance. This is a vector whose $i$'th component is the covariance ${\rm Cov}(X,Y^i)$. So, $Y$ has the same distribution under $\mathbb{Q}$ as $Y+{\rm Cov}(X,Y)$ has under $\mathbb{P}$. That is, when changing to the new measure, $Y$ remains jointly normal with the same covariance matrix, but its mean increases by ${\rm Cov}(X,Y)$. Equation (1) follows from a straightforward calculation of the characteristic function of $Y$ with respect to both $\mathbb{P}$ and $\mathbb{Q}$.
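As a quick numerical illustration (not part of the original argument), equation (1) can be checked by Monte Carlo: reweighting by $U$ shifts the mean of $Y$ by ${\rm Cov}(X,Y)$ and leaves the variance unchanged. The covariance matrix and sample size below are arbitrary choices for the sketch.

```python
import numpy as np

# Monte Carlo sketch of equation (1): with X, Y jointly normal and
# U = exp(X - E[X] - Var(X)/2), the measure dQ = U dP shifts the mean
# of Y by Cov(X, Y) while leaving its variance unchanged.
rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.6],
                [0.6, 2.0]])            # Var(X)=1, Cov(X,Y)=0.6, Var(Y)=2
X, Y = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000).T

U = np.exp(X - 0.5 * cov[0, 0])         # positive with E[U] = 1
q_mean_Y = np.mean(U * Y)               # E_Q[Y] = E[U Y], approx Cov(X,Y)
q_var_Y = np.mean(U * Y**2) - q_mean_Y**2   # approx Var(Y) = 2
print(q_mean_Y, q_var_Y)
```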

Now consider a standard Brownian motion $B$ and fix a time $T>0$ and a constant $\lambda$. Then, for all times $t\le T$, the covariance of $\lambda B_T$ and $B_t$ is $\lambda t$. Applying (1) to the measure $d\mathbb{Q}=\exp\left(\lambda B_T-\tfrac12\lambda^2T\right)\,d\mathbb{P}$ shows that

$$B_t=\tilde B_t+\lambda(t\wedge T),$$

where $\tilde B$ is a standard Brownian motion under $\mathbb{Q}$. Under the new measure, $B$ has gained a constant drift of $\lambda$ over the interval $[0,T]$. Such transformations are widely applied in finance. For example, in the Black-Scholes model of option pricing it is common to work under a *risk-neutral* measure, which transforms the drift of a financial asset to be the risk-free rate of return. Girsanov transformations extend this idea to much more general changes of measure, and to arbitrary local martingales. However, as shown below, the strongest results are obtained for Brownian motion which, under a change of measure, just gains a stochastic drift term.
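The constant-drift example above can be verified by simulation; this is a sketch, with the horizon $T$, drift $\lambda$ and sample size being arbitrary assumptions.

```python
import numpy as np

# Reweighting Brownian paths by U = exp(lam*B_T - lam^2*T/2) gives B a
# drift of lam on [0, T]: E_Q[B_t] = lam*t for t <= T.
rng = np.random.default_rng(1)
T, lam, n = 1.0, 0.5, 1_000_000
B_half = rng.normal(0.0, np.sqrt(T / 2), n)        # B_{T/2}
B_T = B_half + rng.normal(0.0, np.sqrt(T / 2), n)  # add independent increment

U = np.exp(lam * B_T - 0.5 * lam**2 * T)           # density dQ/dP, E[U] = 1
q_mean_half = np.mean(U * B_half)                  # E_Q[B_{T/2}], approx lam*T/2
q_mean_T = np.mean(U * B_T)                        # E_Q[B_T],     approx lam*T
print(q_mean_half, q_mean_T)
```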

As always, we work under a complete filtered probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},\mathbb{P})$. Consider a new measure $d\mathbb{Q}=U\,d\mathbb{P}$, for some strictly positive random variable $U$ with expectation 1. Then $\mathbb{P}$ and $\mathbb{Q}$ are *equivalent*. That is, $\mathbb{P}(A)=0$ if and only if $\mathbb{Q}(A)=0$ for sets $A\in\mathcal{F}$. By the Radon-Nikodym theorem, all probability measures on $(\Omega,\mathcal{F})$ equivalent to $\mathbb{P}$ can be defined in this way, and $U$ is referred to as the Radon-Nikodym derivative of $\mathbb{Q}$ with respect to $\mathbb{P}$, denoted by $U=d\mathbb{Q}/d\mathbb{P}$. Conditional expectations with respect to the new measure are related to the original one as follows.

**Lemma 1** Let $\mathbb{Q}$ be an equivalent measure to $\mathbb{P}$. Then, for any bounded random variable $Z$ and sigma-algebra $\mathcal{G}\subseteq\mathcal{F}$, the conditional expectation is given by

$$\mathbb{E}_{\mathbb{Q}}\left[Z\mid\mathcal{G}\right]=\frac{\mathbb{E}\left[UZ\mid\mathcal{G}\right]}{\mathbb{E}\left[U\mid\mathcal{G}\right]}.\qquad(2)$$

*Proof:* Denote the right-hand side of (2) by $Y$, which is $\mathcal{G}$-measurable and satisfies $Y\,\mathbb{E}[U\mid\mathcal{G}]=\mathbb{E}[UZ\mid\mathcal{G}]$. So, for any $A\in\mathcal{G}$,

$$\mathbb{E}_{\mathbb{Q}}\left[1_AY\right]=\mathbb{E}\left[1_AUY\right]=\mathbb{E}\left[1_AY\,\mathbb{E}[U\mid\mathcal{G}]\right]=\mathbb{E}\left[1_A\mathbb{E}[UZ\mid\mathcal{G}]\right]=\mathbb{E}\left[1_AUZ\right]=\mathbb{E}_{\mathbb{Q}}\left[1_AZ\right]$$

which, by definition, means that $Y=\mathbb{E}_{\mathbb{Q}}[Z\mid\mathcal{G}]$.
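Equation (2) can be sanity-checked on a finite probability space, where conditional expectations are just block averages over a partition. The four-point space, density and partition below are made up for the illustration.

```python
import numpy as np

# Check of (2): E_Q[Z | G] = E_P[U Z | G] / E_P[U | G], with G generated
# by a partition of a four-point sample space.
p = np.array([0.1, 0.2, 0.3, 0.4])     # P on {0, 1, 2, 3}
U = np.array([2.0, 0.5, 1.5, 0.5])     # positive density candidate
U = U / np.dot(p, U)                   # normalise so that E_P[U] = 1
q = p * U                              # Q({i}) = p_i * U_i
Z = np.array([1.0, -2.0, 3.0, 0.5])

for blk in ([0, 1], [2, 3]):           # the partition generating G
    lhs = np.dot(q[blk], Z[blk]) / q[blk].sum()                  # E_Q[Z | blk]
    rhs = np.dot(p[blk], (U * Z)[blk]) / np.dot(p[blk], U[blk])  # RHS of (2)
    assert abs(lhs - rhs) < 1e-12
print("equation (2) holds on each block")
```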

Given a measure $\mathbb{Q}$ equivalent to $\mathbb{P}$, define the martingale

$$U_t=\mathbb{E}\left[\frac{d\mathbb{Q}}{d\mathbb{P}}\;\Big\vert\;\mathcal{F}_t\right].\qquad(3)$$

Note that there is symmetry here in exchanging the roles of $\mathbb{P}$ and $\mathbb{Q}$. Using Lemma 1 with the simple identity $(d\mathbb{P}/d\mathbb{Q})(d\mathbb{Q}/d\mathbb{P})=1$,

$$\mathbb{E}_{\mathbb{Q}}\left[\frac{d\mathbb{P}}{d\mathbb{Q}}\;\Big\vert\;\mathcal{F}_t\right]=\frac{\mathbb{E}\left[1\mid\mathcal{F}_t\right]}{\mathbb{E}\left[d\mathbb{Q}/d\mathbb{P}\mid\mathcal{F}_t\right]}=U_t^{-1}.$$

In particular, $U^{-1}$ is a uniformly integrable martingale with respect to $\mathbb{Q}$ so, if a cadlag version of $U$ is used, then $U^{-1}$ will be a cadlag $\mathbb{Q}$-martingale converging to the limit $U_\infty^{-1}=(d\mathbb{Q}/d\mathbb{P})^{-1}$ and, in particular, $\sup_tU_t^{-1}$ is almost surely finite.

We can now answer the following question: when is a process $X$ a martingale under the equivalent measure $\mathbb{Q}$?

**Lemma 2** Let $\mathbb{Q}$ be an equivalent measure to $\mathbb{P}$, and $U$ be as in (3). Then, a process $X$ is a $\mathbb{Q}$-martingale if and only if $UX$ is a $\mathbb{P}$-martingale.

*Proof:* Set $M=UX$. Then, $X$ is adapted if and only if $M$ is adapted. Also, $\mathbb{E}_{\mathbb{Q}}[\vert X_t\vert]=\mathbb{E}[U_t\vert X_t\vert]=\mathbb{E}[\vert M_t\vert]$, so $X_t$ is integrable under $\mathbb{Q}$ if and only if $M_t$ is integrable under $\mathbb{P}$. Using (2) for the conditional expectation,

$$\mathbb{E}_{\mathbb{Q}}\left[X_t\mid\mathcal{F}_s\right]=\frac{\mathbb{E}\left[U_tX_t\mid\mathcal{F}_s\right]}{\mathbb{E}\left[U_t\mid\mathcal{F}_s\right]}=\frac{\mathbb{E}\left[M_t\mid\mathcal{F}_s\right]}{U_s}$$

for any $s\le t$. So, $\mathbb{E}_{\mathbb{Q}}[X_t\mid\mathcal{F}_s]=X_s$ if and only if $\mathbb{E}[M_t\mid\mathcal{F}_s]=M_s$.
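Lemma 2 is easy to see in a one-step binomial model; the branch probabilities and payoffs below are arbitrary choices for the sketch.

```python
import numpy as np

# One-step check of Lemma 2: X is a Q-martingale exactly when UX is a
# P-martingale, where U_t = E_P[dQ/dP | F_t] (so here U_0 = 1).
p_up = 0.5                         # P(up)
dQdP = np.array([1.4, 0.6])        # density on {up, down}; E_P[dQdP] = 1
q_up = p_up * dQdP[0]              # Q(up) = 0.7
X1 = np.array([2.0, 0.0])          # time-1 values of X

X0 = q_up * X1[0] + (1 - q_up) * X1[1]       # makes X a Q-martingale
UX0 = p_up * dQdP[0] * X1[0] + (1 - p_up) * dQdP[1] * X1[1]  # E_P[U_1 X_1]
print(X0, UX0)                     # both equal 1.4: UX is a P-martingale too
```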

**Local Martingales**

Lemma 2 can be localized to obtain a condition for a cadlag adapted process to be a local martingale with respect to $\mathbb{Q}$. First, as the process $U$ defined by (3) is a martingale, it has a cadlag modification whenever it is right-continuous in probability. In particular, if the filtration is right-continuous then $U$ always has a cadlag modification.

**Lemma 3** Let $\mathbb{Q}$ be an equivalent measure to $\mathbb{P}$, and suppose that $U$ given by (3) is cadlag. Then, a cadlag adapted process $X$ is a $\mathbb{Q}$-local martingale if and only if $UX$ is a $\mathbb{P}$-local martingale.

*Proof:* Replacing $X$ by $X-X_0$ if necessary, suppose that $X_0=0$. Given a stopping time $\tau$, we first show that the stopped process $(UX)^\tau$ is a martingale if and only if $UX^\tau$ is a martingale which, by Lemma 2, is equivalent to $X^\tau$ being a $\mathbb{Q}$-martingale.

As $U$ is a nonnegative martingale, optional sampling gives $\mathbb{E}[U_t\mid\mathcal{F}_{t\wedge\tau}]=U_{t\wedge\tau}$ and,

$$\mathbb{E}\left[\vert U_tX^\tau_t\vert\right]=\mathbb{E}\left[\mathbb{E}[U_t\mid\mathcal{F}_{t\wedge\tau}]\,\vert X_{t\wedge\tau}\vert\right]=\mathbb{E}\left[\vert(UX)^\tau_t\vert\right].$$

So $U_tX^\tau_t$ is integrable if and only if $(UX)^\tau_t$ is. In this case, let $M$ be the difference $M_t=U_tX^\tau_t-(UX)^\tau_t=(U_t-U_{t\wedge\tau})X_{t\wedge\tau}$. For times $s\le t$,

$$\mathbb{E}\left[M_t\mid\mathcal{F}_{s\vee(t\wedge\tau)}\right]=X_{t\wedge\tau}\left(U_{s\vee(t\wedge\tau)}-U_{t\wedge\tau}\right)=1_{\{\tau\le s\}}(U_s-U_\tau)X_\tau=M_s.$$

Therefore, taking conditional expectations with respect to $\mathcal{F}_s$, $M$ is a martingale, and $UX^\tau$ is a martingale if and only if $(UX)^\tau$ is.

So, given stopping times $\tau_n\uparrow\infty$, the stopped processes $X^{\tau_n}$ are $\mathbb{Q}$-martingales if and only if $UX^{\tau_n}$ and, therefore, $(UX)^{\tau_n}$ are $\mathbb{P}$-martingales.

If $X$ is a local martingale, then Lemma 3 can be used to derive a decomposition of $X$ into the sum of a $\mathbb{Q}$-local martingale and an FV process defined in terms of the quadratic covariation $[U,X]$.

**Theorem 4** Let $\mathbb{Q}$ be an equivalent measure to $\mathbb{P}$, and suppose that $U$ given by (3) is cadlag. If $X$ is a local martingale then $X=Y+V$, where $Y$ is a $\mathbb{Q}$-local martingale and $V$ is the FV process

$$V_t=\int_0^tU_s^{-1}\,d[U,X]_s.$$

*Proof:* By Lemma 3, it is just necessary to show that $U(X-V)$ is a local martingale under $\mathbb{P}$. Applying integration by parts,

$$d(UV)=U_-\,dV+V_-\,dU+d[U,V]=V_-\,dU+\frac{U_-+\Delta U}{U}\,d[U,X]=V_-\,dU+d[U,X].$$

As $U$ is a martingale, this shows that $UV-[U,X]$ is a local martingale. Also, integration by parts gives $d(UX)=U_-\,dX+X_-\,dU+d[U,X]$, so $UX-[U,X]$ is a local martingale. Therefore,

$$U(X-V)=\left(UX-[U,X]\right)-\left(UV-[U,X]\right)$$

is a local martingale so, by Lemma 3, $X-V$ is a $\mathbb{Q}$-local martingale as required.

**Continuous Local Martingales**

A useful method of constructing measure changes is to use a Doléans exponential $U=\mathcal{E}(M)$. For a local martingale $M$ this is the solution to the stochastic differential equation $dU=U_-\,dM$ with initial condition $U_0=1$ so, by preservation of the local martingale property, $U$ is a local martingale. For continuous local martingales, the Doléans exponential is given by

$$\mathcal{E}(M)_t=\exp\left(M_t-\tfrac12[M]_t\right).$$

In particular, if the quadratic variation at infinity, $[M]_\infty$, is finite then the limit $U_\infty=\lim_{t\to\infty}U_t$ exists and will be strictly positive. If, furthermore, $U$ is a uniformly integrable martingale rather than just a local martingale then $U_t=\mathbb{E}[U_\infty\mid\mathcal{F}_t]$ for each time $t$. So $d\mathbb{Q}=U_\infty\,d\mathbb{P}$ defines an equivalent measure with $U$ satisfying equation (3). Also, for any $\mathbb{P}$-local martingale $X$, $U^{-1}\,d[U,X]=d[M,X]$, and Theorem 4 shows that $X-[M,X]$ is a $\mathbb{Q}$-local martingale. Applying this with $M=\int\xi\,dX$, so that $[M,X]=\int\xi\,d[X]$, gives the following.
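As a numerical aside (not from the text), the closed form $\mathcal{E}(M)_t=\exp(M_t-\frac12[M]_t)$ can be compared against an Euler scheme for $dU=U\,dM$ along a simulated Brownian path; the step count is an arbitrary choice.

```python
import numpy as np

# The Doleans exponential of M = B (Brownian motion) solves dU = U dM
# with U_0 = 1, and equals exp(B_t - t/2). Compare the closed form with
# the Euler scheme U_{k+1} = U_k * (1 + dM_k) on a single path.
rng = np.random.default_rng(3)
n_steps, T = 200_000, 1.0
dt = T / n_steps
dM = rng.normal(0.0, np.sqrt(dt), n_steps)   # Brownian increments

U_closed = np.exp(dM.sum() - 0.5 * T)        # exp(B_T - [B]_T / 2)
U_euler = np.prod(1.0 + dM)                  # Euler scheme for dU = U dM
print(U_closed, U_euler)   # agree up to a small discretisation error
```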

**Theorem 5 (Girsanov transformation)** Let $X$ be a continuous local martingale, and $\xi$ be a predictable process such that $\int_0^t\xi^2\,d[X]<\infty$ for all $t$. If $U=\mathcal{E}\left(\int\xi\,dX\right)$ is a uniformly integrable martingale then $U_\infty>0$ and the measure $d\mathbb{Q}=U_\infty\,d\mathbb{P}$ is equivalent to $\mathbb{P}$. Then, $X$ decomposes as

$$X_t=Y_t+\int_0^t\xi\,d[X]\qquad(4)$$

for a $\mathbb{Q}$-local martingale $Y$.

So, Girsanov transformations allow us to change to an equivalent measure where the local martingale *X* gains a drift term which is an integral with respect to its quadratic variation [*X*]. In fact, continuous local martingales *always* decompose as in (4) under any continuous change of measure.

In the following, it is required that we take a cadlag version of the martingale $U$, which is guaranteed to exist if the filtration is right-continuous. However, in these notes we are not assuming that filtrations are right-continuous. Still, it is always possible to pass to the right-continuous filtration $\mathcal{F}_{t+}=\bigcap_{s>t}\mathcal{F}_s$. Then, a continuous process starting at zero will be $\{\mathcal{F}_t\}$-adapted if and only if it is $\{\mathcal{F}_{t+}\}$-adapted, and the two filtrations define the same space of continuous local martingales starting from 0. So Theorem 6 can be applied to arbitrary equivalent changes of measure on all complete filtered probability spaces.

**Theorem 6** Let $\mathbb{Q}$ be an equivalent measure to $\mathbb{P}$, suppose that $U$ given by (3) has a cadlag version, and let $X$ be a continuous local martingale. Then, there is a predictable process $\xi$ satisfying $\int_0^t\xi^2\,d[X]<\infty$ for all $t\ge0$, in which case

- $X$ decomposes as $X=Y+\int\xi\,d[X]$ for a $\mathbb{Q}$-local martingale $Y$.
- $U$ decomposes as $U=\mathcal{E}\left(\int\xi\,dX\right)V$ for a nonnegative local martingale $V$ with $[V,X]=0$.

Note that if $\mathcal{E}\left(\int\xi\,dX\right)$ is a uniformly integrable martingale, then the decomposition given for $U$ implies that the change of measure can be decomposed into a Girsanov transformation, precisely as in Theorem 5, followed by a measure change given by the process $V$ satisfying $[V,X]=0$. In general, however, this will not be the case since $\mathcal{E}\left(\int\xi\,dX\right)$ need only be a local martingale.

*Proof:* By Theorem 4, $X=Y+V$ for a $\mathbb{Q}$-local martingale $Y$ and FV process $V=\int U^{-1}\,d[U,X]$. Next, by the Kunita-Watanabe inequality, if $\alpha$ is a nonnegative predictable process satisfying $\int\alpha\,d[X]=0$ then,

$$\left\vert\int\alpha\,dV\right\vert=\left\vert\int\alpha U^{-1}\,d[U,X]\right\vert\le\left(\int\alpha U^{-2}\,d[U]\right)^{1/2}\left(\int\alpha\,d[X]\right)^{1/2}=0.$$

That is, $V$ is *absolutely continuous* with respect to $[X]$. We would like to use a stochastic version of the Radon-Nikodym theorem to obtain a predictable process $\xi$ with $V=\int\xi\,d[X]$. In fact, this is possible as stated below in Lemma 7, so $X=Y+\int\xi\,d[X]$ as required. It still needs to be shown that $\int\xi^2\,d[X]$ is finite and that $U$ satisfies the required decomposition.

Next, we show that is finite. As , the process

is a martingale, under its natural filtration. As cadlag martingales are semimartingales, and have well defined quadratic variation, is finite. Then, applying the Kunita-Watanabe inequality again, for any positive constant *K*,

Letting *K* increase to infinity and squaring this inequality gives,

This is finite, since it has been shown that is finite and is a cadlag -martingale tending to the finite limit , so is bounded.

Finally, as is finite for all times *t*, is X-integrable. Define the local martingales and . The quadratic covariation is given by

So, [*M*–*N*,*X*]=0, , and integration by parts applied to the definition of Doléans exponentials gives

The decomposition of *U* follows by taking .

The following stochastic version of the Radon-Nikodym theorem was used in the proof of Theorem 6; we now prove it.

**Lemma 7** Let $A$ be a continuous FV process and $B$ be a continuous adapted increasing process such that $\int\alpha\,dA=0$ (almost surely) for all $t\ge0$ and bounded nonnegative predictable $\alpha$ satisfying $\int\alpha\,dB=0$.

Then, there is a predictable process $\xi$ satisfying $\int\vert\xi\vert\,dB<\infty$, and $A=\int\xi\,dB$.

*Proof:* Let us first suppose that $A$ and $B$ have integrable variation and, without loss of generality, assume that $A_0=B_0=0$. Then, we can define the following finite signed measures on the predictable measurable space $(\mathbb{R}_+\times\Omega,\mathcal{P})$,

$$\mu(\alpha)=\mathbb{E}\left[\int_0^\infty\alpha\,dA\right],\qquad\nu(\alpha)=\mathbb{E}\left[\int_0^\infty\alpha\,dB\right]$$

for bounded predictable $\alpha$. As $B$ is increasing, $\nu$ is a (nonnegative) measure. If $\nu(1_S)=0$ for a predictable set $S$, then $\int 1_S\,dB=0$ and, from the condition of the lemma, $\int 1_S\,dA=0$, giving $\mu(1_S)=0$. So, $\mu$ is absolutely continuous with respect to $\nu$ and the Radon-Nikodym derivative $\xi=d\mu/d\nu$ exists. This is a predictable process satisfying $\nu(\vert\xi\vert)<\infty$ and $\mu(\alpha)=\nu(\alpha\xi)$ for all bounded predictable $\alpha$.

Then, the following process has integrable variation

$$M=A-\int\xi\,dB$$

and, for any bounded predictable $\alpha$,

$$\mathbb{E}\left[\int\alpha\,dM\right]=\mu(\alpha)-\nu(\alpha\xi)=0.$$

This shows that $M$ is a martingale. As continuous FV local martingales are constant, $M$ is identically 0, giving $A=\int\xi\,dB$ as required.

Finally, let us drop the assumption that $A$ and $B$ have integrable variation, and define the stopping times

$$\tau_n=\inf\left\{t\ge0\colon\int_0^t\vert dA\vert+B_t\ge n\right\}.$$

By continuity, the stopped processes $A^{\tau_n}$ and $B^{\tau_n}$ have variation bounded by $n$ so, by the above argument, there are predictable processes $\xi^n$ such that $A^{\tau_n}=\int\xi^n\,dB^{\tau_n}$. The result now follows by taking $\xi=\xi^n$ on each of the intervals $(\tau_{n-1},\tau_n]$.

One difficulty in applying Theorem 5 to construct measure changes is that the Doléans exponential is only guaranteed to be a local martingale, whereas we need it to be a uniformly integrable martingale. The following gives a necessary and sufficient condition for a nonnegative local martingale to be a uniformly integrable martingale.

**Lemma 8** Let $U$ be a nonnegative local martingale with $U_0=1$. Then, $\mathbb{E}[U_\tau]\le1$ for all stopping times $\tau$, and $U$ is a uniformly integrable martingale if and only if $\mathbb{E}[U_\infty]=1$.

*Proof:* Choose stopping times $\tau_n\uparrow\infty$ such that $U^{\tau_n}$ are uniformly integrable martingales. By Fatou's lemma and optional sampling, for stopping times $\sigma\le\tau$,

$$\mathbb{E}\left[U_\tau\mid\mathcal{F}_\sigma\right]\le\liminf_{n\to\infty}\mathbb{E}\left[U_{\tau\wedge\tau_n}\mid\mathcal{F}_\sigma\right]=\liminf_{n\to\infty}U_{\sigma\wedge\tau_n}=U_\sigma.$$

So, $U$ is a supermartingale. Taking expectations with $\sigma=0$ gives $\mathbb{E}[U_\tau]\le U_0=1$. Conversely, if $\mathbb{E}[U_\infty]=1$ then, using $\mathbb{E}[U_\infty\mid\mathcal{F}_\tau]\le U_\tau$, $U_\tau-\mathbb{E}[U_\infty\mid\mathcal{F}_\tau]$ is a nonnegative random variable with expectation

$$\mathbb{E}[U_\tau]-\mathbb{E}[U_\infty]\le1-1=0.$$

So, $U_\tau=\mathbb{E}[U_\infty\mid\mathcal{F}_\tau]$ for all stopping times $\tau$, showing that $U$ is a uniformly integrable martingale.
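The standard example showing how $\mathbb{E}[U_\infty]=1$ can fail is $U_t=\exp(B_t-t/2)$, a positive local martingale with $U_t\to0$ almost surely. A quick simulation (sample sizes and thresholds are arbitrary choices) shows the mass of $U_t$ concentrating on a vanishing set of paths:

```python
import numpy as np

# U_t = exp(B_t - t/2): E[U_t] = 1 for every fixed t, but U_t -> 0 a.s.,
# so E[U_infinity] = 0 < 1 and, by Lemma 8, U is not uniformly integrable.
rng = np.random.default_rng(2)
n = 200_000
U_short = np.exp(rng.normal(0.0, 1.0, n) - 0.5)    # U_t at t = 1
U_long = np.exp(rng.normal(0.0, 5.0, n) - 12.5)    # U_t at t = 25

print(U_short.mean())            # approx 1: a fair reweighting at t = 1
print(np.mean(U_long < 1e-3))    # most of the mass of U_25 sits near zero
```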

This lemma is useful in theory but, in practice, the expectation of $U_\infty$ is often hard to calculate directly. Instead, the following sufficient conditions can be used to show that a Doléans exponential is a uniformly integrable martingale. Condition (5) is *Kazamaki's criterion* and (6) is *Novikov's criterion*.

**Lemma 9** Let $M$ be a continuous local martingale with $M_0=0$. The following is a sufficient condition for $\mathcal{E}(M)$ to be a uniformly integrable martingale,

$$\sup_\tau\mathbb{E}\left[e^{M_\tau/2}\right]<\infty,\qquad(5)$$

where the supremum is taken over all bounded stopping times $\tau$. In particular, this condition is satisfied, and $\mathcal{E}(M)$ is a uniformly integrable martingale, whenever

$$\mathbb{E}\left[e^{[M]_\infty/2}\right]<\infty.\qquad(6)$$

*Proof:* The following simple identity for a constant $r$ will be used

$$\mathcal{E}(rM)=\mathcal{E}(M)^{r^2}e^{r(1-r)M}.$$

Suppose that $\mathbb{E}[e^{M_\tau/2}]\le K$ for some constant $K$ and all bounded stopping times $\tau$. Then choose real numbers $0<r<1$ and $p>1$ with $pr^2<1$. Hölder's inequality gives

$$\mathbb{E}\left[\mathcal{E}(rM)_\tau^p\right]=\mathbb{E}\left[\mathcal{E}(M)_\tau^{pr^2}e^{pr(1-r)M_\tau}\right]\le\mathbb{E}\left[\mathcal{E}(M)_\tau\right]^{pr^2}\mathbb{E}\left[e^{\frac{pr(1-r)}{1-pr^2}M_\tau}\right]^{1-pr^2}\le\mathbb{E}\left[e^{\frac{pr(1-r)}{1-pr^2}M_\tau}\right]^{1-pr^2}.$$

Lemma 8 has been applied here to bound the expectation of $\mathcal{E}(M)_\tau$ by 1. Setting $p=(2r-r^2)^{-1}$, then $p>1$ and $pr(1-r)/(1-pr^2)=1/2$, giving,

$$\mathbb{E}\left[\mathcal{E}(rM)_\tau^p\right]\le K^{1-pr^2}.$$

So, $\mathcal{E}(rM)$ is an $L^p$-bounded martingale and hence is uniformly integrable. Therefore, $\mathbb{E}[\mathcal{E}(rM)_\infty]=1$ for all $0<r<1$.

Next, using Hölder's inequality,

$$1=\mathbb{E}\left[\mathcal{E}(rM)_\infty\right]=\mathbb{E}\left[\mathcal{E}(M)_\infty^{r^2}e^{r(1-r)M_\infty}\right]\le\mathbb{E}\left[\mathcal{E}(M)_\infty\right]^{r^2}\mathbb{E}\left[e^{\frac{r}{1+r}M_\infty}\right]^{1-r^2}\le\mathbb{E}\left[\mathcal{E}(M)_\infty\right]^{r^2}\mathbb{E}\left[e^{M_\infty/2}\right]^{2r(1-r)}.$$

The last inequality here is just Jensen's inequality, using the fact that $2r/(1+r)<1$. Letting $r$ increase to 1 gives $\mathbb{E}[\mathcal{E}(M)_\infty]\ge1$ so, by Lemma 8, $\mathcal{E}(M)$ is a uniformly integrable martingale as required.

Finally, suppose that (6) is satisfied. Then, for a bounded stopping time $\tau$, the Cauchy-Schwarz inequality gives

$$\mathbb{E}\left[e^{M_\tau/2}\right]=\mathbb{E}\left[\mathcal{E}(M)_\tau^{1/2}\,e^{[M]_\tau/4}\right]\le\mathbb{E}\left[\mathcal{E}(M)_\tau\right]^{1/2}\mathbb{E}\left[e^{[M]_\tau/2}\right]^{1/2}\le\mathbb{E}\left[e^{[M]_\infty/2}\right]^{1/2},$$

so (5) holds, as required.

**Brownian Motion**

With the help of Lévy's characterization, the results above can be strengthened significantly when the local martingale is a Brownian motion. If $B$ is a Brownian motion, then it is possible to construct an equivalent measure change under which it decomposes as the sum of a Brownian motion and the absolutely continuous process $\int_0^t\xi_s\,ds$, for a given predictable process $\xi$. This is stated, more generally for $d$-dimensional Brownian motion, in the following theorem.

**Theorem 10** Let $B=(B^1,\ldots,B^d)$ be a standard $d$-dimensional Brownian motion on the underlying filtered probability space and $\xi=(\xi^1,\ldots,\xi^d)$ be predictable processes satisfying $\int_0^\infty\Vert\xi_t\Vert^2\,dt<\infty$ (almost surely). If

$$U=\mathcal{E}\left(\sum_{i=1}^d\int\xi^i\,dB^i\right)$$

is a uniformly integrable martingale then $U_\infty>0$ and the measure $d\mathbb{Q}=U_\infty\,d\mathbb{P}$ is equivalent to $\mathbb{P}$. Then,

$B$ decomposes as

$$B_t=\tilde B_t+\int_0^t\xi_s\,ds\qquad(7)$$

for a $d$-dimensional Brownian motion $\tilde B$ with respect to $\mathbb{Q}$.

Here, $U$ is the Doléans exponential of the local martingale $\sum_i\int\xi^i\,dB^i$ and, by Novikov's criterion (6) above, $U$ will be a uniformly integrable martingale whenever $\exp\left(\frac12\int_0^\infty\Vert\xi_t\Vert^2\,dt\right)$ has finite expectation.

*Proof:* As Brownian motion has quadratic variation $[B^i]_t=t$, the condition on $\xi$ ensures that it is $B$-integrable, so we can define the continuous local martingale $X=\sum_i\int\xi^i\,dB^i$. Then $U=\mathcal{E}(X)$ is a local martingale and, if it is a uniformly integrable martingale then $U_\infty>0$ and $d\mathbb{Q}=U_\infty\,d\mathbb{P}$ defines an equivalent measure change.

By Theorem 4, the decomposition $B^i=\tilde B^i+V^i$ exists for $\mathbb{Q}$-local martingales $\tilde B^i$ and, as $d[U,B^i]=U\,d[X,B^i]=U\xi^i\,dt$,

$$V^i_t=\int_0^tU_s^{-1}\,d[U,B^i]_s=\int_0^t\xi^i_s\,ds.$$

Finally, as the $V^i$ are continuous FV processes they do not contribute to quadratic covariations, giving $[\tilde B^i,\tilde B^j]=[B^i,B^j]$. So, by Lévy's characterization, $\tilde B$ is a $d$-dimensional Brownian motion under $\mathbb{Q}$.

Finally, Brownian motion transforms according to (7) under *all* continuous measure changes. That is, it picks up a drift term $\int_0^t\xi_s\,ds$ for a predictable process $\xi$ satisfying $\int_0^t\Vert\xi_s\Vert^2\,ds<\infty$.

**Theorem 11** Let $\mathbb{Q}$ be an equivalent measure to $\mathbb{P}$, and suppose that $U$ given by (3) is cadlag. If $B=(B^1,\ldots,B^d)$ is a standard $d$-dimensional Brownian motion on the underlying filtered probability space, then there are predictable processes $\xi=(\xi^1,\ldots,\xi^d)$ satisfying $\int_0^t\Vert\xi_s\Vert^2\,ds<\infty$ for all $t$, in which case

- $B$ decomposes as $B=\tilde B+\int\xi\,ds$ for a standard $d$-dimensional Brownian motion $\tilde B$ with respect to $\mathbb{Q}$.
- $U$ decomposes as $U=\mathcal{E}(M)V$ where $M=\sum_i\int\xi^i\,dB^i$ and $V$ is a positive local martingale with $[V,B^i]=0$.

*Proof:* As Brownian motion has quadratic variation $[B^i,B^j]_t=\delta_{ij}t$, the existence of predictable processes $\xi^i$ satisfying $\int_0^t(\xi^i_s)^2\,ds<\infty$ and $B^i=\tilde B^i+\int\xi^i\,ds$ is given by Theorem 6, for $\mathbb{Q}$-local martingales $\tilde B^i$. As continuous finite variation processes do not contribute to quadratic covariations, $[\tilde B^i,\tilde B^j]_t=\delta_{ij}t$, and Lévy's characterization shows that $\tilde B$ is a $d$-dimensional Brownian motion under $\mathbb{Q}$.

Finally, defining the local martingales and ,

So, and giving,

and the decomposition for *U* follows by taking .

Your posts are always enlightening!

I was just wondering if there was an available pdf version of your blog entries, somewhere ?

Comment by Alekk — 5 May 10 @ 11:37 AM |

Hi. No, I don’t have a PDF version. This has been asked before though, so it’s probably worth creating some. Maybe I’ll have something ready in the next week or two (starting with the “filtrations and processes” section).

Comment by George Lowther — 5 May 10 @ 1:51 PM |

Hopefully I got the LaTeX right now….

Great Blog indeed,

I recently discussed with some friends from uni (physicists) a question that was loosely related to the Girsanov theorem.

Let $U$ denote the Girsanov density of a measure $\mathbb{Q}$ with respect to another measure $\mathbb{P}$, where the underlying process is any process such that Girsanov’s theorem is valid.

Then the information entropy between the two measures is defined through

which equates to

for some Brownian motion

We then discussed at great length whether the first implies the second.

What about if the Brownian motion is replaced by a general continuous martingale?

Opinions basically ranged from this is trivially true (as the finiteness is closely related to square integrability) to the complete opposite.

Do you have any hints or ideas on that?

Cheers

Roger

Comment by Roger — 13 August 10 @ 5:51 PM |

The answer to your question is yes, it is true! I don’t know if there is a `trivial’ argument, but I can give you a proof. It is also true for general continuous local martingales.

One thing though. It is not absolutely clear, when you say that it has expectation less than infinity, whether you mean its absolute value is integrable or just its positive part. (Taking the expectation of a nonintegrable random variable is not well defined in general, unless it is nonnegative.) I’ll assume you mean its positive part, as this is the weaker condition and is still enough to imply what you want.

For the proof: Let $W$ be a standard Brownian motion, $b>0$ be a constant, and set $X_t=W_t-bt$. Denote its maximum as $X^*_t=\sup_{s\le t}X_s$. I claim that $X^*_\infty$ is integrable. Letting $T_a$ be the first time at which $X$ hits a positive value $a$, this is the same as the first time $W$ hits the sloping line $a+bt$. The distribution of $T_a$ is given by

$$\mathbb{E}\left[e^{-\theta T_a}\right]=e^{-a\left(b+\sqrt{b^2+2\theta}\right)},$$

which is a standard result. Letting $\theta$ go to 0 gives $\mathbb{P}(T_a<\infty)=e^{-2ab}$.

Integrating with respect to $a$ gives $\mathbb{E}[X^*_\infty]=\int_0^\infty e^{-2ab}\,da=1/(2b)$.

Next, if $M$ is a continuous local martingale starting at zero and $X=M-b[M]$ then $\mathbb{E}[X^*_\infty]\le1/(2b)$. This follows because all such continuous local martingales are time changes of Brownian motion. In particular, $(M_T-b[M]_T)_+$ is integrable for any time $T$.

So, if $M$ is a continuous local martingale starting at 0 and such that $(-M_T+[M]_T/2)_+$ is integrable then, choosing $0<b<1/2$,

$$(1/2-b)[M]_T=(M_T-b[M]_T)+(-M_T+[M]_T/2)\le(M_T-b[M]_T)_++(-M_T+[M]_T/2)_+$$

is integrable. This gives what you want.

Regards,

George

[btw, I deleted your first post.]

Comment by George Lowther — 15 August 10 @ 10:47 PM |

I should add though, your question is indeed trivial in the case where $M$ is a martingale. However, for the Girsanov transformation to be defined, this need not be true. It is only guaranteed that $M$ is a *local* martingale.

Comment by George Lowther — 15 August 10 @ 11:54 PM |

Alternatively, there is the following much quicker argument. If $b>0$ then $\exp(2b(M-b[M]))$ is a positive local martingale and, hence, is a supermartingale. So its expectation is bounded (by 1) and, as exponentials grow faster than linearly, $(M-b[M])_+$ has finite expectation.

Comment by George Lowther — 16 August 10 @ 10:17 PM |

Great,

thanks for having a clear argument for that. And I meant indeed local martingale – that was actually the crucial point 🙂

Roger

Comment by Roger — 16 August 10 @ 7:07 PM |

Hello George,

Just wondering which properties of a diffusion process are preserved under an absolutely continuous measure change. For example, if X_t is an ergodic diffusion (which means it has a stationary distribution as time goes to infinity), then after the measure change its drift part is changed, and we denote the new process by Y_t. Is Y_t still ergodic? Can you give me a reference on this, or a counter-example?

Btw, I always wonder how the Girsanov theorem behaves when we push the time to infinity. I see some authors use Girsanov for optimal stopping problems, and that involves defining the Radon-Nikodym derivative process up to a random stopping time. I am uneasy with this and wonder whether there is some reference on using the Girsanov theorem up to a random time rather than the fixed time case.

Thank you very much~

Rocky 🙂

Comment by Zhenyu (Rocky) Cui — 12 April 11 @ 4:18 AM |

Hi Rocky,

Apologies for the slow response. I haven’t had much time to log on, and don’t have much time now, but I’ll try and quickly answer.

I’m not familiar with ergodic diffusions. But, I think that you maybe mean that the distribution of $X_t$ tends weakly to a limit $\mu$ as $t$ goes to infinity. Or, that $\frac1T\int_0^Tf(X_t)\,dt\to\mu(f)$ (in probability) as $T$ goes to infinity (where $f$ is a bounded continuous function and $\mu$ is the limiting distribution). I don’t think that either of these are going to be affected by an absolutely continuous change of measure. Assuming the limit is independent of $\mathcal{F}_s$, then you can approximate the Radon-Nikodym derivative in $L^1$ by $U_s=\mathbb{E}[d\mathbb{Q}/d\mathbb{P}\mid\mathcal{F}_s]$. Using $U_s$ will not change the limiting distribution. Then consider $s$ large.

For the second question. Over a finite horizon, you can transform a standard Brownian motion into one with constant nonzero drift with a Girsanov transform. Over an infinite time horizon, this is not possible with an absolutely continuous change of measure. This is because events such as $\{\lim_{t\to\infty}B_t/t=0\}$ have probability one for a Brownian motion, but zero for a BM with drift. You can do it by a local Girsanov transform though, by defining $\mathbb{Q}(A)=\mathbb{E}[1_AU_t]$ for $A\in\mathcal{F}_t$, where $U$ is the martingale defining the measure change. You have to be careful in the choice of underlying probability space, and not complete the filtration (as the measure change is not absolutely continuous, you need to be careful about null sets under the original measure which could have positive probability under the new measure). You can also apply Girsanov transforms up to a stopping time $T$, which is similar to just applying it to the process stopped at time $T$. You can even apply a Girsanov transform up to a sequence of stopping times $T_n$ increasing to a limit $T$, even though the change of measure might not be absolutely continuous on $\mathcal{F}_T$. For example, I did this in one of my posts here (Zero-Hitting and Failure of the Martingale Property). In fact, the stopping time can be almost-surely infinite under the original measure and yet almost surely finite in the transformed measure so, again, you have to be careful. I might come back to this and check out some references when I have time.

Comment by George Lowther — 14 April 11 @ 1:19 AM |

Hello George,

Here is what I understand:

Approximate by . Assume the limiting random variable of is under measure , then for any ,

The last equality is because of the ergodicity of under measure .

My question is what is the requirement on ? Do we need to restrict in order to transform the time process ? If so, then and can we apply Girsanov theorem in this case?

Thanks!

Comment by Zhenyu (Rocky) Cui — 14 April 11 @ 4:00 PM |

By the way, how do you display latex formulas on the webpage? I know it may be silly to ask, but I have never posted formulas on blogs before…

Thanks for your reply and I am not in a hurry to know the answer~ I know you are quite busy 🙂

Comment by Zhenyu (Rocky) Cui — 14 April 11 @ 4:05 PM

You start the latex expression with ‘$latex ‘ (the space is needed) and close with ‘$’. You can’t do displaymath – not directly anyway. I’ll edit the latex in your post when I log on later (and I might add a page about posting latex. It’s not obvious. Edit: Here it is!)

Comment by George Lowther — 14 April 11 @ 5:14 PM

Hi.

Sorry about the delay in answering. I did read your comment earlier, but was not really sure what you were asking (I’m still not sure). Here, $X$ is the process defining the Girsanov transform, but it is also the process which you want to transform the law of (in general, they will be different). Have I got that right? If Q is equivalent to P then any process which converges to a limit under P also converges to the same limit under Q. That much is true, and is a consequence of them having the same events of probability 1. However, I didn’t think that “ergodic diffusion” referred to convergence of the process, just convergence of the distribution. That can change but, if you assume that $X_t$ becomes independent of $X_s$ in the limit as $t$ goes to infinity (for fixed $s$), then my argument above was that the distribution of $X_t$ must also have the same limit under both measures. However, this does not hold if you consider “local” Girsanov transformations.

Comment by George Lowther — 2 May 11 @ 1:09 AM

Hello George,

Sorry about the confusion. Here the process X_t is a martingale. It is used in defining the Radon-Nikodym derivative governing the measure change. It is also the process to which we apply this measure change. This setup is a bit strange; it originates from the “change of numéraire” technique in finance.

Ergodic diffusion in my understanding indeed means convergence only in distribution sense, or converge to some unknown random variable with certain distribution. And this distribution is the “limiting distribution”.

Here s is fixed and perhaps by strong Markov property, I can assume that behavior of is independent of .

By the way, are you familiar with the Skorokhod embedding problem? A survey is at: http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdfview_1&handle=euclid.ps/1104335302

Maybe you will be interested in writing a blog post on that. The problem is that I am now addicted to reading your blog for self-study of probability theory rather than reading thick textbooks 🙂

Comment by Zhenyu (Rocky) Cui — 3 May 11 @ 8:30 PM |

Zhenyu (or should I call you Rocky?): Just time for a quick comment. I don’t think it is important that X is used for both the measure change and is the process to which the measure change is applied. You can’t change the limiting distribution by an equivalent change of measure (assuming that the limit is independent of $X_s$ for fixed $s$). However, you can change it by a local Girsanov transform. Suppose that $B$ is a Brownian motion and

$$X_t=\exp\left(B_t-t/2\right).$$

This is an exponential Brownian motion tending to zero. Under the transformed Q measure, $X_t=\exp(\tilde B_t+t/2)$ for a Q-Brownian motion $\tilde B$, so it diverges to infinity.

And, yes, I’m familiar with Skorohod embedding, but haven’t studied all the methods of solving it. I’ll think about that, but can’t promise anything now.

Glad you like the blog! Of course, I wouldn’t want to take you away from the textbooks, but hopefully getting a fresh perspective might help to understand them a bit better.

Comment by George Lowther — 6 May 11 @ 1:29 AM |

Hi George,

You can call me Rocky, because my Chinese Pinyin name is hard to pronounce. I now have a clear idea about the problem, and thanks for the illustration.

Keep on writing great illuminating blogs on probability theory~

Best regards!

Rocky

Comment by Zhenyu (Rocky) Cui — 9 May 11 @ 4:06 PM |

Hello George.

May I ask which sufficient condition a (cadlag, adapted) process should verify for there to exist an equivalent probability measure under which it is a local martingale? Being a semimartingale is necessary, but I have reasons (based on the financial mathematics literature) to think that it is not sufficient.

Thanks in advance

Comment by kebabroyal — 22 December 11 @ 2:55 PM |

Actually I answered my own question with Theorem 4. I believe therefore that the No Free Lunch with Vanishing Risk condition of Delbaen and Schachermayer is only a restatement of being bounded in probability.

Comment by kebabroyal — 22 December 11 @ 2:59 PM |

Yes, I think you’re right. I don’t have access to their papers right now but, from what I remember, Delbaen and Schachermayer define two kinds of no-arbitrage condition. No Free Lunch with Vanishing Risk is equivalent to what you state, and is equivalent to being a semimartingale. No Free Lunch with Bounded Risk is the stronger condition, and is equivalent to the existence of an equivalent local martingale measure, for continuous processes (if I remember these terms correctly).

Comment by George Lowther — 23 December 11 @ 12:26 AM |

Hello,

Is there a version of Girsanov’s theorem that can be applied to a stable Levy process, or more generally, a process with infinite expectation?

My apologies if I’ve missed a discussion elsewhere in this blog.

Comment by Christopher — 26 March 12 @ 6:42 PM |

Thanks for this very useful blog. I just have a little question about Lemma 3 in the above post. How do you go from the second last to the last line in the set of equalities? i.e. how do you get

(sorry I am not sure what the markup is to display mathematics in the post, but the code should be correct [GL: I pasted in the latex from your follow-up comment, and deleted that comment. Hope you don’t mind]).

Comment by Johan du Plessis (@johan_duplessis) — 10 April 12 @ 6:50 AM |

Well, you have the equality

Note: both sides are zero when so you can restrict to . As this is -measurable, the conditional expectation around the left hand side has no effect.

Also, as is zero whenever ,

Comment by George Lowther — 12 April 12 @ 12:55 AM |


Sorry, can you explain how you arrived at equation (1)? I am a noob, willing to understand…

Comment by Vittorio Apicella — 6 August 14 @ 12:15 PM |