Almost Sure

25 February 10

SDEs with Locally Lipschitz Coefficients

In the previous post it was shown how the existence and uniqueness of solutions to stochastic differential equations with Lipschitz continuous coefficients follows from the basic properties of stochastic integration. However, in many applications, it is necessary to weaken this condition a bit. For example, consider the following SDE for a process X

\displaystyle  dX_t =\sigma \vert X_{t-}\vert^{\alpha}\,dZ_t,

where Z is a given semimartingale and {\sigma,\alpha} are fixed real numbers. The function {f(x)\equiv\sigma\vert x\vert^\alpha} has derivative {f^\prime(x)=\sigma\alpha {\rm sgn}(x)|x|^{\alpha-1}} which, for {\alpha>1}, is bounded on bounded subsets of the reals. It follows that f is Lipschitz continuous on such bounded sets. However, the derivative of f diverges to infinity as x goes to infinity, so f is not globally Lipschitz continuous. Similarly, if {\alpha<1} then f is Lipschitz continuous on compact subsets of {{\mathbb R}\setminus\{0\}}, but not globally Lipschitz. To be more widely applicable, the results of the previous post need to be extended to include such locally Lipschitz continuous coefficients.
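As a purely numerical illustration of such an equation (not part of the post's argument), one can run a simple Euler-type scheme for {dX=\sigma\vert X\vert^\alpha\,dZ}, taking the driving semimartingale Z to be a standard Brownian motion for concreteness. The function name, step size, and parameter values below are arbitrary choices.

```python
import random

def euler_path(x0, sigma, alpha, dt=1e-3, n_steps=1000, seed=0):
    """Euler-type scheme for dX = sigma*|X|^alpha dZ, with Z taken to be
    a standard Brownian motion (an illustrative assumption; the post
    allows Z to be any semimartingale)."""
    rng = random.Random(seed)
    X = x0
    path = [X]
    for _ in range(n_steps):
        dZ = rng.gauss(0.0, dt ** 0.5)   # Brownian increment over [t, t+dt]
        X += sigma * abs(X) ** alpha * dZ
        path.append(X)
    return path

# Example run with alpha < 1, where the coefficient is only locally
# Lipschitz away from zero.
path = euler_path(x0=1.0, sigma=0.2, alpha=0.5)
```

Nothing here substitutes for the existence theory, of course; for {\alpha<1} the scheme happily steps across {x=0}, precisely the point where Lipschitz continuity fails.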

In fact, uniqueness of solutions to SDEs with locally Lipschitz continuous coefficients follows from the global Lipschitz case. However, solutions need only exist up to a possible explosion time. This is demonstrated by the following simple non-stochastic differential equation

\displaystyle  dX= X^2\,dt.

For initial value {X_0=x>0}, this has the solution {X_t=(x^{-1}-t)^{-1}}, which explodes at time {t=x^{-1}}.
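This blow-up is easy to see numerically. A minimal sketch (the step size and overflow cap below are arbitrary choices, not from the post): an explicit Euler scheme for {dX=X^2\,dt} with {X_0=1} diverges at approximately {t=1}, matching the exact blow-up time {x^{-1}=1}.

```python
def euler_blowup(x, dt=1e-4, t_max=2.0, cap=1e8):
    """Integrate dX = X^2 dt from X_0 = x by explicit Euler steps.

    Returns the approximate time at which X exceeds `cap`, or None if
    no blow-up occurs before t_max.  The exact solution (1/x - t)^{-1}
    blows up at t = 1/x.
    """
    X, t = x, 0.0
    while t < t_max:
        X += X * X * dt
        t += dt
        if X > cap:
            return t
    return None

t_star = euler_blowup(1.0)  # exact blow-up time is 1/x = 1.0
```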


10 February 10

Existence of Solutions to Stochastic Differential Equations

A stochastic differential equation, or SDE for short, is a differential equation driven by one or more stochastic processes. For example, in physics, a Langevin equation describing the motion of a point {X=(X^1,\ldots,X^n)} in n-dimensional phase space is of the form

\displaystyle  \frac{dX^i}{dt} = \sum_{j=1}^m a_{ij}(X)\eta^j(t) + b_i(X).

(1)

The dynamics are described by the functions {a_{ij},b_i\colon{\mathbb R}^n\rightarrow{\mathbb R}}, and the problem is to find a solution for X, given its value at an initial time. What distinguishes this from an ordinary differential equation are the random noise terms {\eta^j} and, consequently, solutions to the Langevin equation are stochastic processes. It is difficult to say exactly how {\eta^j} should be defined directly, but we can suppose that their integrals {B^j_t=\int_0^t\eta^j(s)\,ds} are continuous with independent and identically distributed increments. A candidate for such a process is standard Brownian motion and, up to a constant scaling factor and drift term, it can be shown that this is the only possibility. However, Brownian motion is nowhere differentiable, so the original noise terms {\eta^j=dB^j_t/dt} do not have well-defined values. Instead, we can rewrite equation (1) in terms of the Brownian motions. This gives the following SDE for an n-dimensional process {X=(X^1,\ldots,X^n)}

\displaystyle  dX^i_t = \sum_{j=1}^m a_{ij}(X_t)\,dB^j_t + b_i(X_t)\,dt

(2)

where {B^1,\ldots,B^m} are independent Brownian motions. This is to be understood in terms of the differential notation for stochastic integration. It is known that if the functions {a_{ij}, b_i} are Lipschitz continuous then, given any starting value for X, equation (2) has a unique solution. In this post, I give a proof of this using the basic properties of stochastic integration as introduced over the past few posts.
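Proofs of this kind typically rest on a Picard-type fixed-point iteration: the map taking a process X to the right-hand side of the equation is a contraction when the coefficients are Lipschitz, so its iterates converge to the unique solution. As a rough sketch of that idea in the simplest deterministic setting {X_t=x_0+\int_0^tb(X_s)\,ds} (the discretization and function names below are illustrative choices, not the post's proof):

```python
import math

def picard(b, x0, t_grid, n_iter=30):
    """Return the n_iter-th Picard iterate for X_t = x0 + int_0^t b(X_s) ds,
    computed on a time grid with the trapezoid rule.  X^0 is the constant
    path, and X^{n+1}_t = x0 + int_0^t b(X^n_s) ds."""
    X = [x0] * len(t_grid)
    for _ in range(n_iter):
        new = [x0]
        for k in range(1, len(t_grid)):
            h = t_grid[k] - t_grid[k - 1]
            new.append(new[-1] + 0.5 * h * (b(X[k - 1]) + b(X[k])))
        X = new
    return X

# b(x) = -x is globally Lipschitz (constant 1); the solution is x0*exp(-t).
grid = [k / 100 for k in range(101)]
approx = picard(lambda x: -x, 1.0, grid)
exact = [math.exp(-t) for t in grid]
err = max(abs(a - e) for a, e in zip(approx, exact))
```

In the stochastic case the same contraction argument runs with the Riemann integral replaced by stochastic integrals against the {B^j}, with convergence measured in an appropriate norm on processes.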

First, in keeping with these notes, equation (2) can be generalized by replacing the Brownian motions {B^j} and time t by arbitrary semimartingales. As always, we work with respect to a complete filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}. In integral form, the general SDE for a cadlag adapted process {X=(X^1,\ldots,X^n)} is as follows,

\displaystyle  X^i = N^i + \sum_{j=1}^m\int a_{ij}(X)\,dZ^j.

(3)


22 December 09

U.C.P. and Semimartingale Convergence

A mode of convergence on the space of processes which occurs often in the study of stochastic calculus is that of uniform convergence on compacts in probability, or ucp convergence for short.

First, a sequence of (non-random) functions {f_n\colon{\mathbb R}_+\rightarrow{\mathbb R}} converges uniformly on compacts to a limit {f} if it converges uniformly on each bounded interval {[0,t]}. That is,

\displaystyle  \sup_{s\le t}\vert f_n(s)-f(s)\vert\rightarrow 0

(1)

as {n\rightarrow\infty}.
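A standard example (added here for illustration, not from the post) is {f_n(s)=s/n}: on each compact {[0,t]} the supremum is {t/n}, which tends to zero, although {f_n} does not converge to zero uniformly on all of {{\mathbb R}_+}, where the supremum is infinite. A quick computational check, with an arbitrary grid resolution:

```python
def sup_error_on_compact(n, t, grid_points=1000):
    """sup_{s <= t} |f_n(s) - 0| for f_n(s) = s/n, evaluated on a finite
    grid of [0, t].  The sup is attained at s = t, giving exactly t/n."""
    return max((k * t / grid_points) / n for k in range(grid_points + 1))

# The sup over [0, 10] is 10/n, shrinking to 0 as n grows, even though
# sup over all of [0, infinity) would be infinite for every n.
errs = [sup_error_on_compact(n, t=10.0) for n in (1, 10, 100, 1000)]
```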

If stochastic processes are used rather than deterministic functions, then convergence in probability can be used to arrive at the following definition.

Definition 1 A sequence of jointly measurable stochastic processes {X^n} converges to the limit {X} uniformly on compacts in probability if

\displaystyle  {\mathbb P}\left(\sup_{s\le t}\vert X^n_s-X_s\vert>K\right)\rightarrow 0

as {n\rightarrow\infty} for each {t,K>0}.
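To see the definition in action, here is a small Monte Carlo sketch (an illustration added here, not from the post) for the sequence {X^n_s=Zs/n} with Z a standard normal random variable. The supremum over {[0,t]} is {\vert Z\vert t/n}, so {{\mathbb P}(\sup_{s\le t}\vert X^n_s\vert>K)={\mathbb P}(\vert Z\vert>nK/t)\rightarrow0} and the sequence converges to zero ucp. The trial count and parameter values are arbitrary.

```python
import random

def ucp_prob(n, t=1.0, K=0.1, trials=10000, seed=0):
    """Monte Carlo estimate of P(sup_{s<=t} |X^n_s| > K) for the
    sequence X^n_s = Z*s/n, Z standard normal.  The sup over [0, t]
    equals |Z|*t/n, so no path simulation is needed."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if abs(rng.gauss(0, 1)) * t / n > K)
    return hits / trials

# The estimated probabilities decrease toward 0 as n grows.
probs = [ucp_prob(n) for n in (1, 10, 100)]
```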

