Almost Sure

3 May 11

Continuous Semimartingales

A stochastic process is a semimartingale if and only if it can be decomposed as the sum of a local martingale and an FV process. This is stated by the Bichteler-Dellacherie theorem or, alternatively, is often taken as the definition of a semimartingale. For continuous semimartingales, which are the subject of this post, things simplify considerably. The terms in the decomposition can be taken to be continuous, in which case they are also unique. As usual, we work with respect to a complete filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})}, all processes are real-valued, and two processes are considered to be the same if they are indistinguishable.

Theorem 1 A continuous stochastic process X is a semimartingale if and only if it decomposes as

\displaystyle  X=M+A (1)

for a continuous local martingale M and continuous FV process A. Furthermore, assuming that {A_0=0}, decomposition (1) is unique.

Proof: As sums of local martingales and FV processes are semimartingales, X is a semimartingale whenever it satisfies the decomposition (1). Furthermore, if {X=M+A=M^\prime+A^\prime} were two such decompositions with {A_0=A^\prime_0=0} then {M-M^\prime=A^\prime-A} is both a local martingale and a continuous FV process. Therefore, {A^\prime-A} is constant, so {A=A^\prime} and {M=M^\prime}.

It just remains to prove the existence of decomposition (1). However, X is continuous and, hence, is locally square integrable. So, Lemmas 4 and 5 of the previous post say that we can decompose {X=M+A} where M is a local martingale, A is an FV process and the quadratic covariation {[M,A]} is a local martingale. As X is continuous we have {\Delta M=-\Delta A} so that, by the properties of covariations,

\displaystyle  -[M,A]_t=-\sum_{s\le t}\Delta M_s\Delta A_s=\sum_{s\le t}(\Delta A_s)^2. (2)

We have shown that {-[M,A]} is a nonnegative local martingale so, in particular, it is a supermartingale. This gives {\mathbb{E}[-[M,A]_t]\le\mathbb{E}[-[M,A]_0]=0}. Then (2) implies that {\Delta A} is zero and, hence, A and {M=X-A} are continuous. \Box

Using decomposition (1), it can be shown that a predictable process {\xi} is X-integrable if and only if it is both M-integrable and A-integrable. Then, the integral with respect to X breaks down into the sum of the integrals with respect to M and A. This greatly simplifies the construction of the stochastic integral for continuous semimartingales. The integral with respect to the continuous FV process A is equivalent to Lebesgue-Stieltjes integration along sample paths, and it is possible to construct the integral with respect to the continuous local martingale M for the full set of M-integrable integrands using the Ito isometry. Many introductions to stochastic calculus focus on integration with respect to continuous semimartingales, which is made much easier because of these results.

Theorem 2 Let {X=M+A} be the decomposition of the continuous semimartingale X into a continuous local martingale M and continuous FV process A. Then, a predictable process {\xi} is X-integrable if and only if

\displaystyle  \int_0^t\xi^2\,d[M]+\int_0^t\vert\xi\vert\,\vert dA\vert < \infty (3)

almost surely, for each time {t\ge0}. In that case, {\xi} is both M-integrable and A-integrable and,

\displaystyle  \int\xi\,dX=\int\xi\,dM+\int\xi\,dA (4)

gives the decomposition of {\int\xi\,dX} into its local martingale and FV terms.
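The linearity in (4) is easy to see in a discretized sketch (my own illustration, not from the post): take M a simulated Brownian motion, A_t = t as the FV part, and a deterministic integrand, then compare left-point Riemann sums for the three integrals.

```python
import numpy as np

# Numerical sketch of decomposition (4): X = M + A with M a discretized
# Brownian motion and A_t = t, integrated against xi_t = cos(t).
rng = np.random.default_rng(0)
n = 100_000
t = np.linspace(0.0, 1.0, n + 1)
dB = rng.normal(0.0, np.sqrt(1.0 / n), n)
B = np.concatenate([[0.0], np.cumsum(dB)])
A = t                       # FV part: A_t = t
X = B + A                   # semimartingale X = M + A
xi = np.cos(t[:-1])         # left-point (predictable) sampling of xi

int_X = np.sum(xi * np.diff(X))   # Riemann sum for int xi dX
int_M = np.sum(xi * dB)           # int xi dM
int_A = np.sum(xi * np.diff(A))   # int xi dA, approximating sin(1)

print(int_X, int_M + int_A)
```

Here the integral against A is just a Lebesgue-Stieltjes (ordinary) integral along the path, as the text notes.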

(more…)

23 November 10

Lévy Processes

Figure 1: A Cauchy process sample path

Continuous-time stochastic processes with stationary independent increments are known as Lévy processes. In the previous post, it was seen that processes with independent increments are described by three terms — the covariance structure of the Brownian motion component, a drift term, and a measure describing the rate at which jumps occur. Being a special case of independent increments processes, the situation with Lévy processes is similar. However, stationarity of the increments does simplify things a bit. We start with the definition.

Definition 1 (Lévy process) A d-dimensional Lévy process X is a stochastic process taking values in {{\mathbb R}^d} such that

  • independent increments: {X_t-X_s} is independent of {\{X_u\colon u\le s\}} for any {s<t}.
  • stationary increments: {X_{s+t}-X_s} has the same distribution as {X_t-X_0} for any {s,t>0}.
  • continuity in probability: {X_s\rightarrow X_t} in probability as s tends to t.

More generally, it is possible to define the notion of a Lévy process with respect to a given filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})}. In that case, we also require that X is adapted to the filtration and that {X_t-X_s} is independent of {\mathcal{F}_s} for all {s < t}. In particular, if X is a Lévy process according to definition 1 then it is also a Lévy process with respect to its natural filtration {\mathcal{F}_t=\sigma(X_s\colon s\le t)}. Note that slightly different definitions are sometimes used by different authors. It is often required that {X_0} is zero and that X has cadlag sample paths. These are minor points and, as will be shown, any process satisfying the definition above will admit a cadlag modification.

The most common example of a Lévy process is Brownian motion, where {X_t-X_s} is normally distributed with zero mean and variance {t-s} independently of {\mathcal{F}_s}. Other examples include Poisson processes, compound Poisson processes, the Cauchy process, gamma processes and the variance gamma process.

For example, the symmetric Cauchy distribution on the real numbers with scale parameter {\gamma > 0} has probability density function p and characteristic function {\phi} given by,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle p(x)=\frac{\gamma}{\pi(\gamma^2+x^2)},\smallskip\\ &\displaystyle\phi(a)\equiv{\mathbb E}\left[e^{iaX}\right]=e^{-\gamma\vert a\vert}. \end{array} (1)

From the characteristic function it can be seen that if X and Y are independent Cauchy random variables with scale parameters {\gamma_1} and {\gamma_2} respectively then {X+Y} is Cauchy with parameter {\gamma_1+\gamma_2}. We can therefore consistently define a stochastic process {X_t} such that {X_t-X_s} has the symmetric Cauchy distribution with parameter {t-s} independent of {\{X_u\colon u\le s\}}, for any {s < t}. This is called a Cauchy process, which is a purely discontinuous Lévy process. See Figure 1.
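The Cauchy process is easy to simulate, since numpy samples standard Cauchy variables directly and the scale parameter adds over increments. The following sketch (my own code, not from the post) draws a sample path and checks the characteristic function in (1) by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)

# One sample path on [0, 1]: an increment over a step of length dt is
# symmetric Cauchy with scale dt, i.e. dt times a standard Cauchy draw.
n_steps = 1000
dt = 1.0 / n_steps
path = np.cumsum(dt * rng.standard_cauchy(n_steps))

# Monte Carlo check of (1) at t = 1: X_1 - X_0 has scale parameter 1,
# so E[exp(i a (X_1 - X_0))] should be exp(-|a|).
a = 1.0
X1 = rng.standard_cauchy(200_000)
emp_cf = np.mean(np.exp(1j * a * X1)).real
print(emp_cf)   # close to exp(-1)
```

Note that averaging the raw samples would be useless here: the Cauchy distribution has no mean, which is why the check goes through the characteristic function.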

Lévy processes are determined by the triple {(\Sigma,b,\nu)}, where {\Sigma} describes the covariance structure of the Brownian motion component, b is the drift component, and {\nu} describes the rate at which jumps occur. The distribution of the process is given by the Lévy-Khintchine formula, equation (3) below.

Theorem 2 (Lévy-Khintchine) Let X be a d-dimensional Lévy process. Then, there is a unique function {\psi\colon{\mathbb R}^d\rightarrow{\mathbb C}} such that

\displaystyle  {\mathbb E}\left[e^{ia\cdot (X_t-X_0)}\right]=e^{t\psi(a)} (2)

for all {a\in{\mathbb R}^d} and {t\ge0}. Also, {\psi(a)} can be written as

\displaystyle  \psi(a)=ia\cdot b-\frac{1}{2}a^{\rm T}\Sigma a+\int _{{\mathbb R}^d}\left(e^{ia\cdot x}-1-\frac{ia\cdot x}{1+\Vert x\Vert}\right)\,d\nu(x) (3)

where {\Sigma}, b and {\nu} are uniquely determined and satisfy the following,

  1. {\Sigma\in{\mathbb R}^{d^2}} is a positive semidefinite matrix.
  2. {b\in{\mathbb R}^d}.
  3. {\nu} is a Borel measure on {{\mathbb R}^d} with {\nu(\{0\})=0} and,
    \displaystyle  \int_{{\mathbb R}^d}\Vert x\Vert^2\wedge 1\,d\nu(x)<\infty. (4)

Furthermore, {(\Sigma,b,\nu)} uniquely determine all finite distributions of the process {X-X_0}.

Conversely, if {(\Sigma,b,\nu)} is any triple satisfying the three conditions above, then there exists a Lévy process satisfying (2,3).
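To make the formula concrete, here is a small numerical sketch (my own construction) for a Poisson process of rate {\lambda} with unit jumps: the triple is {\Sigma=0}, {\nu=\lambda\delta_1} and, because of the compensating term {ia\cdot x/(1+\Vert x\Vert)} in (3), {b=\lambda/2}, which recovers the usual exponent {\psi(a)=\lambda(e^{ia}-1)}.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, t, a = 1.0, 1.0, 0.5

# Characteristic exponent of a rate-lam Poisson process, computed
# directly and via the Levy-Khintchine integrand (3) with
# nu = lam * delta_1, Sigma = 0 and b = lam / 2.
psi_direct = lam * (np.exp(1j * a) - 1.0)
psi_lk = 1j * a * (lam / 2) + lam * (np.exp(1j * a) - 1.0 - 1j * a / (1.0 + 1.0))

# Monte Carlo check of (2): E[exp(ia X_t)] = exp(t psi(a)) with
# X_t ~ Poisson(lam * t).
Xt = rng.poisson(lam * t, size=400_000)
emp = np.mean(np.exp(1j * a * Xt))
theo = np.exp(t * psi_direct)
print(abs(psi_direct - psi_lk), abs(emp - theo))
```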

(more…)

15 September 10

Processes with Independent Increments

In a previous post, it was seen that all continuous processes with independent increments are Gaussian. We move on now to look at a much more general class of independent increments processes which need not have continuous sample paths. Such processes can be completely described by their jump intensities, a Brownian term, and a deterministic drift component. However, this class of processes is large enough to capture the kinds of behaviour that occur for more general jump-diffusion processes. An important subclass is that of Lévy processes, which have independent and stationary increments. Lévy processes will be looked at in more detail in the following post; special cases include the Cauchy process, gamma processes, the variance gamma process, Poisson processes, compound Poisson processes and Brownian motion.

Recall that a process {\{X_t\}_{t\ge0}} has the independent increments property if {X_t-X_s} is independent of {\{X_u\colon u\le s\}} for all times {0\le s\le t}. More generally, we say that X has the independent increments property with respect to an underlying filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})} if it is adapted and {X_t-X_s} is independent of {\mathcal{F}_s} for all {s < t}. In particular, every process with independent increments also satisfies the independent increments property with respect to its natural filtration. Throughout this post, I will assume the existence of such a filtered probability space, and the independent increments property will be understood to be with regard to this space.

The process X is said to be continuous in probability if {X_s\rightarrow X_t} in probability as s tends to t. As we now state, a d-dimensional independent increments process X is uniquely specified by a triple {(\Sigma,b,\mu)} where {\mu} is a measure describing the jumps of X, {\Sigma} determines the covariance structure of the Brownian motion component of X, and b is an additional deterministic drift term.

Theorem 1 Let X be an {{\mathbb R}^d}-valued process with independent increments and continuous in probability. Then, there is a unique continuous function {{\mathbb R}^d\times{\mathbb R}_+\rightarrow{\mathbb C}}, {(a,t)\mapsto\psi_t(a)} such that {\psi_0(a)=0} and

\displaystyle  {\mathbb E}\left[e^{ia\cdot (X_t-X_0)}\right]=e^{\psi_t(a)} (1)

for all {a\in{\mathbb R}^d} and {t\ge0}. Also, {\psi_t(a)} can be written as

\displaystyle  \psi_t(a)=ia\cdot b_t-\frac{1}{2}a^{\rm T}\Sigma_t a+\int _{{\mathbb R}^d\times[0,t]}\left(e^{ia\cdot x}-1-\frac{ia\cdot x}{1+\Vert x\Vert}\right)\,d\mu(x,s) (2)

where {\Sigma_t}, {b_t} and {\mu} are uniquely determined and satisfy the following,

  1. {t\mapsto\Sigma_t} is a continuous function from {{\mathbb R}_+} to {{\mathbb R}^{d^2}} such that {\Sigma_0=0} and {\Sigma_t-\Sigma_s} is positive semidefinite for all {t\ge s}.
  2. {t\mapsto b_t} is a continuous function from {{\mathbb R}_+} to {{\mathbb R}^d}, with {b_0=0}.
  3. {\mu} is a Borel measure on {{\mathbb R}^d\times{\mathbb R}_+} with {\mu(\{0\}\times{\mathbb R}_+)=0}, {\mu({\mathbb R}^d\times\{t\})=0} for all {t\ge 0} and,
    \displaystyle  \int_{{\mathbb R}^d\times[0,t]}\Vert x\Vert^2\wedge 1\,d\mu(x,s)<\infty. (3)

Furthermore, {(\Sigma,b,\mu)} uniquely determine all finite distributions of the process {X-X_0}.

Conversely, if {(\Sigma,b,\mu)} is any triple satisfying the three conditions above, then there exists a process with independent increments satisfying (1,2).
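A minimal sketch of the converse direction for a purely Gaussian triple (my own example): take {\mu=0}, {b=0} and {\Sigma_t=t^2}, realized by the time-changed Brownian motion {X_t=B_{t^2}}, so that {X_t-X_0} is normal with mean zero and variance {t^2} and the characteristic function is {e^{-a^2t^2/2}}.

```python
import numpy as np

rng = np.random.default_rng(10)
t, a, n = 1.5, 0.8, 300_000

# X_t = B_{t^2} has independent (non-stationary) increments, with
# X_t - X_0 ~ N(0, t^2); check its characteristic function.
Xt = rng.normal(0.0, t, n)               # standard deviation t, variance t^2
emp = np.mean(np.exp(1j * a * Xt)).real
theo = np.exp(-0.5 * a ** 2 * t ** 2)
print(emp, theo)
```

The increments here are independent but not stationary, which is exactly the extra generality this post allows over Lévy processes.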

(more…)

16 June 10

Continuous Processes with Independent Increments

A stochastic process X is said to have independent increments if {X_t-X_s} is independent of {\{X_u\}_{u\le s}} for all {s\le t}. For example, standard Brownian motion is a continuous process with independent increments. Brownian motion also has stationary increments, meaning that the distribution of {X_{t+s}-X_t} does not depend on t. In fact, as I will show in this post, up to a scaling factor and linear drift term, Brownian motion is the only such process. That is, any continuous real-valued process X with stationary independent increments can be written as

\displaystyle  X_t = X_0 + b t + \sigma B_t (1)

for a Brownian motion B and constants {b,\sigma}. This is not so surprising in light of the central limit theorem. The increment of a process across an interval [s,t] can be viewed as the sum of its increments over a large number of small time intervals partitioning [s,t]. If these terms are independent with relatively small variance, then the central limit theorem does suggest that their sum should be normally distributed. Together with the previous posts on Lévy’s characterization and stochastic time changes, this provides yet more justification for the ubiquitous position of Brownian motion in the theory of continuous-time processes. Consider, for example, stochastic differential equations such as the Langevin equation. The natural requirement for the stochastic driving term in such equations is that it be continuous with stationary independent increments and, therefore, it can be written in terms of Brownian motion.

The definition of standard Brownian motion extends naturally to multidimensional processes and general covariance matrices. A standard d-dimensional Brownian motion {B=(B^1,\ldots,B^d)} is a continuous process with stationary independent increments such that {B_t} has the {N(0,tI)} distribution for all {t\ge 0}. That is, {B_t} is joint normal with zero mean and covariance matrix tI. From this definition, {B_t-B_s} has the {N(0,(t-s)I)} distribution independently of {\{B_u\colon u\le s\}} for all {s\le t}. This definition can be further generalized. Given any {b\in{\mathbb R}^d} and positive semidefinite {\Sigma\in{\mathbb R}^{d^2}}, we can consider a d-dimensional process X with continuous paths and stationary independent increments such that {X_t} has the {N(tb,t\Sigma)} distribution for all {t\ge 0}. Here, {b} is the drift of the process and {\Sigma} is the 'instantaneous covariance matrix'. Such processes are sometimes referred to as {(b,\Sigma)}-Brownian motions, and all continuous d-dimensional processes starting from zero and with stationary independent increments are of this form.

Theorem 1 Let X be a continuous {{\mathbb R}^d}-valued process with stationary independent increments.

Then, there exist unique {b\in{\mathbb R}^d} and {\Sigma\in{\mathbb R}^{d^2}} such that {X_t-X_0} is a {(b,\Sigma)}-Brownian motion.
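A {(b,\Sigma)}-Brownian motion is easy to construct from a standard one by correlating the components through a square root of {\Sigma}. The sketch below (my own code; the particular b and {\Sigma} are arbitrary choices) uses a Cholesky factor and checks the mean and covariance at time t by Monte Carlo.

```python
import numpy as np

# Simulate X_t = t*b + L B_t where L is a Cholesky factor of Sigma
# and B is a standard 2-dimensional Brownian motion, so that
# X_t ~ N(t*b, t*Sigma).
rng = np.random.default_rng(3)
b = np.array([1.0, -0.5])
Sigma = np.array([[2.0, 0.6], [0.6, 1.0]])
L = np.linalg.cholesky(Sigma)

t, n_paths = 1.0, 200_000
B_t = rng.normal(0.0, np.sqrt(t), size=(n_paths, 2))
Xt = t * b + B_t @ L.T

emp_mean = Xt.mean(axis=0)      # should be close to t*b
emp_cov = np.cov(Xt.T)          # should be close to t*Sigma
print(emp_mean, emp_cov)
```

Any matrix square root of {\Sigma} would do here; Cholesky is just the cheapest standard choice for a positive semidefinite matrix.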

(more…)

25 May 10

The Martingale Representation Theorem

The martingale representation theorem states that any martingale adapted to the filtration generated by a Brownian motion can be expressed as a stochastic integral with respect to the same Brownian motion.

Theorem 1 Let B be a standard Brownian motion defined on a probability space {(\Omega,\mathcal{F},{\mathbb P})} and {\{\mathcal{F}_t\}_{t\ge 0}} be its natural filtration.

Then, every {\{\mathcal{F}_t\}}-local martingale M can be written as

\displaystyle  M = M_0+\int\xi\,dB

for a predictable, B-integrable, process {\xi}.

As stochastic integration preserves the local martingale property for continuous processes, this result characterizes the space of all local martingales starting from 0 defined with respect to the filtration generated by a Brownian motion as being precisely the set of stochastic integrals with respect to that Brownian motion. Equivalently, Brownian motion has the predictable representation property. This result is often used in mathematical finance as the statement that the Black-Scholes model is complete. That is, any contingent claim can be exactly replicated by trading in the underlying stock. This does involve some rather large and somewhat unrealistic assumptions on the behaviour of financial markets and ability to trade continuously without incurring additional costs. However, in this post, I will be concerned only with the mathematical statement and proof of the representation theorem.

In more generality, the martingale representation theorem can be stated for a d-dimensional Brownian motion as follows.

Theorem 2 Let {B=(B^1,\ldots,B^d)} be a d-dimensional Brownian motion defined on the filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}, and suppose that {\{\mathcal{F}_t\}} is the natural filtration generated by B and {\mathcal{F}_0}.

\displaystyle  \mathcal{F}_t=\sigma\left(\{B_s\colon s\le t\}\cup\mathcal{F}_0\right)

Then, every {\{\mathcal{F}_t\}}-local martingale M can be expressed as

\displaystyle  M=M_0+\sum_{i=1}^d\int\xi^i\,dB^i (1)

for predictable processes {\xi^i} satisfying {\int_0^t(\xi^i_s)^2\,ds<\infty}, almost surely, for each {t\ge0}.
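For a concrete instance (my own illustration), Itô's formula gives the representing integrand explicitly for {M_t=B_t^2-t}, which is a martingale for the Brownian filtration: the integrand is {\xi=2B}. A discretized check of {B_t^2-t=\int_0^t 2B\,dB}:

```python
import numpy as np

rng = np.random.default_rng(4)
n, t = 100_000, 1.0
dB = rng.normal(0.0, np.sqrt(t / n), n)
B = np.concatenate([[0.0], np.cumsum(dB)])

M_t = B[-1] ** 2 - t                  # the martingale M_t = B_t^2 - t
ito_sum = np.sum(2 * B[:-1] * dB)     # left-point (Ito) sum of 2B dB
print(M_t, ito_sum)
```

The left-point sampling is essential: evaluating the integrand at the right endpoint would add the quadratic variation term and give a different limit.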

(more…)

17 May 10

SDEs Under Changes of Time and Measure

The previous two posts described the behaviour of standard Brownian motion under stochastic changes of time and equivalent changes of measure. I now demonstrate some applications of these ideas to the study of stochastic differential equations (SDEs). Surprisingly strong results can be obtained and, in many cases, it is possible to prove existence and uniqueness of solutions to SDEs without imposing any continuity constraints on the coefficients. This is in contrast to most standard existence and uniqueness results for both ordinary and stochastic differential equations, where conditions such as Lipschitz continuity are required. For example, consider the following SDE for measurable coefficients {a,b\colon{\mathbb R}\rightarrow{\mathbb R}} and a Brownian motion B

\displaystyle  dX_t=a(X_t)\,dB_t+b(X_t)\,dt. (1)

If a is nonzero, {a^{-2}} is locally integrable and b/a is bounded then we can show that this has weak solutions satisfying uniqueness in law for any specified initial distribution of X. The idea is to start with X being a standard Brownian motion and apply a change of time to obtain a solution to (1) in the case where the drift term b is zero. Then, a Girsanov transformation can be used to change to a measure under which X satisfies the SDE for nonzero drift b. As these steps are invertible, every solution can be obtained from a Brownian motion in this way, which uniquely determines the distribution of X.

A standard example demonstrating the concept of weak solutions and uniqueness in law is provided by Tanaka’s SDE

\displaystyle  dX_t={\rm sgn}(X_t)\,dB_t (2)
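The weak-solution construction for (2) can be sketched numerically (my own code): start from a Brownian motion X, define B by {dB={\rm sgn}(X)\,dX}, and check that B has quadratic variation {[B]_t=t}, so that by Lévy's characterization it is a Brownian motion; X then solves (2) for this B.

```python
import numpy as np

rng = np.random.default_rng(5)
n, t = 100_000, 1.0
dX = rng.normal(0.0, np.sqrt(t / n), n)
X = np.concatenate([[0.0], np.cumsum(dX)])

sgn = np.sign(X[:-1])
sgn[sgn == 0] = 1.0        # convention for sgn(0); the choice is irrelevant
dB = sgn * dX              # dB = sgn(X) dX, so dX = sgn(X) dB
qv_B = np.sum(dB ** 2)     # quadratic variation of B over [0, t]
print(qv_B)
```

Since {{\rm sgn}(X)^2=1}, the quadratic variation of B is exactly that of X, which is the whole point of the construction.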

(more…)

3 May 10

Girsanov Transformations

Girsanov transformations describe how Brownian motion and, more generally, local martingales behave under changes of the underlying probability measure. Let us start with a much simpler identity applying to normal random variables. Suppose that X and {Y=(Y^1,\ldots,Y^n)} are jointly normal random variables defined on a probability space {(\Omega,\mathcal{F},{\mathbb P})}. Then {U\equiv\exp(X-\frac{1}{2}{\rm Var}(X)-{\mathbb E}[X])} is a positive random variable with expectation 1, and a new measure {{\mathbb Q}=U\cdot{\mathbb P}} can be defined by {{\mathbb Q}(A)={\mathbb E}[1_AU]} for all sets {A\in\mathcal{F}}. Writing {{\mathbb E}_{\mathbb Q}} for expectation under the new measure, then {{\mathbb E}_{\mathbb Q}[Z]={\mathbb E}[UZ]} for all bounded random variables Z. The expectation of a bounded measurable function {f\colon{\mathbb R}^n\rightarrow{\mathbb R}} of Y under the new measure is

\displaystyle  {\mathbb E}_{\mathbb Q}\left[f(Y)\right]={\mathbb E}\left[f\left(Y+{\rm Cov}(X,Y)\right)\right], (1)

where {{\rm Cov}(X,Y)} is the covariance. This is a vector whose i’th component is the covariance {{\rm Cov}(X,Y^i)}. So, Y has the same distribution under {{\mathbb Q}} as {Y+{\rm Cov}(X,Y)} has under {{\mathbb P}}. That is, when changing to the new measure, Y remains jointly normal with the same covariance matrix, but its mean increases by {{\rm Cov}(X,Y)}. Equation (1) follows from a straightforward calculation of the characteristic function of Y with respect to both {{\mathbb P}} and {{\mathbb Q}}.

Now consider a standard Brownian motion B and fix a time {T>0} and a constant {\mu}. Then, for all times {t\ge 0}, the covariance of {B_t} and {B_T} is {{\rm Cov}(B_t,B_T)=t\wedge T}. Applying (1) to the measure {{\mathbb Q}=\exp(\mu B_T-\mu^2T/2)\cdot{\mathbb P}} shows that

\displaystyle  B_t=\tilde B_t + \mu (t\wedge T)

where {\tilde B} is a standard Brownian motion under {{\mathbb Q}}. Under the new measure, B has gained a constant drift of {\mu} over the interval {[0,T]}. Such transformations are widely applied in finance. For example, in the Black-Scholes model of option pricing it is common to work under a risk-neutral measure, which transforms the drift of a financial asset to be the risk-free rate of return. Girsanov transformations extend this idea to much more general changes of measure, and to arbitrary local martingales. However, as shown below, the strongest results are obtained for Brownian motion which, under a change of measure, just gains a stochastic drift term. (more…)
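The drift change above is easy to verify by Monte Carlo (a sketch of my own, with arbitrary {\mu} and T): weighting paths by {U=\exp(\mu B_T-\mu^2T/2)} shifts the mean of {B_T} to {\mu T}.

```python
import numpy as np

rng = np.random.default_rng(6)
mu, T, n = 0.5, 1.0, 1_000_000
B_T = rng.normal(0.0, np.sqrt(T), n)

# Radon-Nikodym weights U = dQ/dP; these have mean 1 under P.
U = np.exp(mu * B_T - 0.5 * mu ** 2 * T)

q_mean = np.mean(U * B_T)   # E_Q[B_T] = E_P[U B_T], should be mu * T
print(q_mean)
```

This weighting of samples is exactly the importance-sampling trick used in practice to price under a risk-neutral measure while simulating under the real-world one.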

20 April 10

Time-Changed Brownian Motion

From the definition of standard Brownian motion B, given any positive constant c, {B_{ct}-B_{cs}} will be normal with mean zero and variance {c(t-s)} for times {t>s\ge 0}. So, scaling the time axis of Brownian motion B to get the new process {B_{ct}} just results in another Brownian motion scaled by the factor {\sqrt{c}}.

This idea is easily generalized. Consider a measurable function {\xi\colon{\mathbb R}_+\rightarrow{\mathbb R}_+} and Brownian motion B on the filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}. So, {\xi} is a deterministic process, not depending on the underlying probability space {\Omega}. If {\theta(t)\equiv\int_0^t\xi^2_s\,ds} is finite for each {t>0} then the stochastic integral {X=\int\xi\,dB} exists. Furthermore, X will be a Gaussian process with independent increments. For piecewise constant integrands, this results from the fact that linear combinations of joint normal variables are themselves normal. The case for arbitrary deterministic integrands follows by taking limits. Also, the Ito isometry says that {X_t-X_s} has variance

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[\left(\int_s^t\xi\,dB\right)^2\right]&\displaystyle={\mathbb E}\left[\int_s^t\xi^2_u\,du\right]\smallskip\\ &\displaystyle=\theta(t)-\theta(s)\smallskip\\ &\displaystyle={\mathbb E}\left[(B_{\theta(t)}-B_{\theta(s)})^2\right]. \end{array}

So, {\int\xi\,dB=\int\sqrt{\theta^\prime(t)}\,dB_t} has the same distribution as the time-changed Brownian motion {B_{\theta(t)}}.
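A quick numerical check of this identity in law (my own sketch, taking {\xi_s=s} so that {\theta(1)=\int_0^1 s^2\,ds=1/3}): the simulated integral should have the variance of {B_{\theta(1)}}.

```python
import numpy as np

rng = np.random.default_rng(7)
n_steps, n_paths = 1_000, 100_000
s = np.linspace(0.0, 1.0, n_steps + 1)[:-1]       # left endpoints
dB = rng.normal(0.0, np.sqrt(1.0 / n_steps), size=(n_paths, n_steps))

# X_1 = int_0^1 xi dB with the deterministic integrand xi_s = s.
X1 = (s * dB).sum(axis=1)

emp_var = X1.var()    # should be close to theta(1) = 1/3
print(emp_var)
```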

With the help of Lévy’s characterization, these ideas can be extended to more general, non-deterministic, integrands and to stochastic time-changes. In fact, doing this leads to the startling result that all continuous local martingales are just time-changed Brownian motion. (more…)

13 April 10

Lévy’s Characterization of Brownian Motion

Standard Brownian motion, {\{B_t\}_{t\ge 0}}, is defined to be a real-valued process satisfying the following properties.

  1. {B_0=0}.
  2. {B_t-B_s} is normally distributed with mean 0 and variance {t-s} independently of {\{B_u\colon u\le s\}}, for any {t>s\ge 0}.
  3. B has continuous sample paths.

As always, all that really matters is that these properties hold almost surely. Now, to apply the techniques of stochastic calculus, it is assumed that there is an underlying filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}, which necessitates a further definition; a process B is a Brownian motion on a filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})} if in addition to the above properties it is also adapted, so that {B_t} is {\mathcal{F}_t}-measurable, and {B_t-B_s} is independent of {\mathcal{F}_s} for each {t>s\ge 0}. Note that the above condition that {B_t-B_s} is independent of {\{B_u\colon u\le s\}} is not explicitly required, as it also follows from the independence from {\mathcal{F}_s}. According to these definitions, a process is a Brownian motion if and only if it is a Brownian motion with respect to its natural filtration.

The property that {B_t-B_s} has zero mean independently of {\mathcal{F}_s} means that Brownian motion is a martingale. Furthermore, we previously calculated its quadratic variation as {[B]_t=t}. An incredibly useful result is that the converse statement holds. That is, Brownian motion is the only local martingale with this quadratic variation. This is known as Lévy’s characterization, and shows that Brownian motion is a particularly general stochastic process, justifying its ubiquitous influence on the study of continuous-time stochastic processes.

Theorem 1 (Lévy’s Characterization of Brownian Motion) Let X be a local martingale with {X_0=0}. Then, the following are equivalent.

  1. X is standard Brownian motion on the underlying filtered probability space.
  2. X is continuous and {X^2_t-t} is a local martingale.
  3. X has quadratic variation {[X]_t=t}.
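Condition 3 is easy to probe numerically (my own sketch): the sum of squared increments of a simulated Brownian path over a fine partition of [0,t] is close to t, while a scaled path such as 2B has quadratic variation 4t and so fails the characterization.

```python
import numpy as np

rng = np.random.default_rng(8)
n, t = 200_000, 2.0
dB = rng.normal(0.0, np.sqrt(t / n), n)

qv = np.sum(dB ** 2)               # [B]_t, close to t
qv_scaled = np.sum((2 * dB) ** 2)  # [2B]_t = 4t, so 2B is not standard BM
print(qv, qv_scaled)
```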

(more…)

18 January 10

Quadratic Variations and Integration by Parts

A major difference between standard integral calculus and stochastic calculus is the existence of quadratic variations and covariations. Such terms show up, for example, in the stochastic version of the integration by parts formula.

For motivation, let us start by considering a standard argument for differentiable processes. The increment of a process {X} over a time step {\delta t>0} can be written as {\delta X_t\equiv X_{t+\delta t}-X_t}. The following identity is easily verified,

\displaystyle  \delta XY = X\delta Y + Y\delta X + \delta X \delta Y. (1)

Now, divide the time interval {[0,t]} into {n} equal parts. That is, set {t_k=kt/n} for {k=0,1,\ldots,n}. Then, using {\delta t=t/n} and summing equation (1) over these times,

\displaystyle  X_tY_t -X_0Y_0=\sum_{k=0}^{n-1} X_{t_k}\delta Y_{t_k} +\sum_{k=0}^{n-1}Y_{t_k}\delta X_{t_k}+\sum_{k=0}^{n-1}\delta X_{t_k}\delta Y_{t_k}. (2)

If the processes are continuously differentiable, then the final term on the right hand side is a sum of {n} terms, each of order {1/n^2}, and therefore is of order {1/n}. This vanishes in the limit {n\rightarrow\infty}, leading to the integration by parts formula

\displaystyle  X_tY_t-X_0Y_0 = \int_0^t X\,dY + \int_0^t Y\,dX.

Now, suppose that {X,Y} are standard Brownian motions. Then, {\delta X,\delta Y} are normal random variables with standard deviation {\sqrt{\delta t}}. It follows that the final term on the right hand side of (2) is a sum of {n} terms each of which is, on average, of order {1/n}. So, even in the limit as {n} goes to infinity, it does not vanish. Consequently, in stochastic calculus, the integration by parts formula requires an additional term, which is called the quadratic covariation (or, just covariation) of {X} and {Y}. (more…)
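The non-vanishing of that final sum can be seen numerically (a sketch of my own): for X = Y a Brownian motion the sum of {\delta X\delta Y} terms converges to t, the quadratic variation, while for two independent Brownian motions it converges to zero.

```python
import numpy as np

rng = np.random.default_rng(9)
n, t = 100_000, 1.0
dX = rng.normal(0.0, np.sqrt(t / n), n)
dY = rng.normal(0.0, np.sqrt(t / n), n)

cross = np.sum(dX * dY)   # covariation [X, Y]_t of independent BMs: near 0
quad = np.sum(dX * dX)    # quadratic variation [X]_t: near t
print(cross, quad)
```

Each summand is of order 1/n on average, but there are n of them, so the quadratic variation survives the limit; only the independence of X and Y makes the cross term cancel.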
