Almost Sure

30 December 11

The Doob-Meyer Decomposition

The Doob-Meyer decomposition was a very important result, historically, in the development of stochastic calculus. This theorem states that every cadlag submartingale uniquely decomposes as the sum of a local martingale and an increasing predictable process. For one thing, if X is a square-integrable martingale then Jensen’s inequality implies that {X^2} is a submartingale, so the Doob-Meyer decomposition guarantees the existence of an increasing predictable process {\langle X\rangle} such that {X^2-\langle X\rangle} is a local martingale. The term {\langle X\rangle} is called the predictable quadratic variation of X and, by using a version of the Ito isometry, can be used to define stochastic integration with respect to square-integrable martingales. For another, semimartingales were historically defined as sums of local martingales and finite variation processes, so the Doob-Meyer decomposition ensures that all local submartingales are also semimartingales. Going further, the Doob-Meyer decomposition is used as an important ingredient in many proofs of the Bichteler-Dellacherie theorem.
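In discrete time, the predictable quadratic variation of a simple symmetric random walk X is just {\langle X\rangle_n=n}, and the martingale property of {X^2-\langle X\rangle} can be checked by direct enumeration. The following Python sketch (purely illustrative, not part of the development here) verifies {{\mathbb E}[X_m^2-m\,\vert\mathcal{F}_{m-1}]=X_{m-1}^2-(m-1)} exactly, by conditioning on every possible prefix of steps:

```python
from itertools import product

# Simple symmetric random walk X_m = s_1 + ... + s_m with steps s_k = +-1.
# X^2 is a submartingale with predictable quadratic variation <X>_m = m,
# so X_m^2 - m should be a martingale.  We check this exactly: condition on
# each of the 2^(m-1) equally likely prefixes and average over the next step.
n = 8
for m in range(1, n + 1):
    for prefix in product((-1, 1), repeat=m - 1):
        x_prev = sum(prefix)
        # E[X_m^2 - m | F_{m-1}]: average over the two possible next steps.
        cond_exp = sum((x_prev + s) ** 2 - m for s in (-1, 1)) / 2
        assert cond_exp == x_prev ** 2 - (m - 1)
```

The assertion holds identically because {(x+1)^2+(x-1)^2=2x^2+2}, which is exactly the computation behind the Doob decomposition of {X^2} in the next section.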

The approach taken in these notes is somewhat different from the historical development, however. We introduced stochastic integration and semimartingales early on, without requiring much prior knowledge of the general theory of stochastic processes. We have also developed the theory of semimartingales, such as proving the Bichteler-Dellacherie theorem, using a stochastic integration based method. So, the Doob-Meyer decomposition does not play such a pivotal role in these notes as in some other approaches to stochastic calculus. In fact, the special semimartingale decomposition already states a form of the Doob-Meyer decomposition in a more general setting. So, the main part of the proof given in this post will be to show that all local submartingales are semimartingales, allowing the decomposition for special semimartingales to be applied.

The Doob-Meyer decomposition is especially easy to understand in discrete time, where it reduces to the much simpler Doob decomposition. If {\{X_n\}_{n=0,1,2,\ldots}} is an integrable discrete-time process adapted to a filtration {\{\mathcal{F}_n\}_{n=0,1,2,\ldots}}, then the Doob decomposition expresses X as

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle X_n&\displaystyle=M_n+A_n,\smallskip\\ \displaystyle A_n&\displaystyle=\sum_{k=1}^n{\mathbb E}\left[X_k-X_{k-1}\;\vert\mathcal{F}_{k-1}\right]. \end{array} (1)

As previously discussed, M is then a martingale and A is an integrable process which is also predictable, in the sense that {A_n} is {\mathcal{F}_{n-1}}-measurable for each {n > 0}. Furthermore, X is a submartingale if and only if {{\mathbb E}[X_n-X_{n-1}\vert\mathcal{F}_{n-1}]\ge0} or, equivalently, if A is almost surely increasing.
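As a purely illustrative sketch of the Doob decomposition (1) in code (the helper name doob_decomposition is my own invention), take {X_n=S_n^2} for a simple symmetric random walk S. Since the steps are {\pm1}, we have {{\mathbb E}[X_k-X_{k-1}\vert\mathcal{F}_{k-1}]=1}, giving {A_n=n} and {M_n=S_n^2-n}:

```python
import random

def doob_decomposition(x, cond_mean_inc):
    # Doob decomposition along one path: A_n accumulates the conditional
    # mean increments E[X_k - X_{k-1} | F_{k-1}], and M = X - A.
    a = [0.0]
    for k in range(1, len(x)):
        a.append(a[-1] + cond_mean_inc(k, x[:k]))
    m = [xk - ak for xk, ak in zip(x, a)]
    return m, a

# Example: X_n = S_n^2 for a simple symmetric random walk S.  As the
# steps are +-1, E[X_k - X_{k-1} | F_{k-1}] = 1, so A_n = n exactly
# and M_n = S_n^2 - n is the martingale part.
random.seed(0)
s = [0]
for _ in range(10):
    s.append(s[-1] + random.choice((-1, 1)))
x = [sk ** 2 for sk in s]
m, a = doob_decomposition(x, lambda k, past: 1.0)
assert a == [float(k) for k in range(11)]
assert all(mk == xk - k for k, (mk, xk) in enumerate(zip(m, x)))
```

Note that A is predictable in the discrete-time sense: each increment of A is determined by the path up to the previous time.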

Moving to continuous time, we work with respect to a complete filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})} with time index t ranging over the nonnegative real numbers. Then, the continuous-time version of (1) takes A to be a right-continuous and increasing process which is predictable, in the sense that it is measurable with respect to the σ-algebra generated by the class of left-continuous and adapted processes. Often, the Doob-Meyer decomposition is stated under additional assumptions, such as X being of class (D) or satisfying some similar uniform integrability property. To be as general as possible, the statement I give here only requires X to be a local submartingale, and furthermore states how the decomposition is affected by various stronger hypotheses that X may satisfy.

Theorem 1 (Doob-Meyer) Any local submartingale X has a unique decomposition

\displaystyle  X=M+A, (2)

where M is a local martingale and A is a predictable increasing process starting from zero.

Furthermore,

  1. if X is a proper submartingale, then A is integrable and satisfies

    \displaystyle  {\mathbb E}[A_\tau]\le{\mathbb E}[X_\tau-X_0] (3)

    for all uniformly bounded stopping times {\tau}.

  2. X is of class (DL) if and only if M is a proper martingale and A is integrable, in which case

    \displaystyle  {\mathbb E}[A_\tau]={\mathbb E}[X_\tau-X_0] (4)

    for all uniformly bounded stopping times {\tau}.

  3. X is of class (D) if and only if M is a uniformly integrable martingale and {A_\infty} is integrable. Then, {X_\infty=\lim_{t\rightarrow\infty}X_t} and {M_\infty=\lim_{t\rightarrow\infty}M_t} exist almost surely, and (4) holds for all (not necessarily finite) stopping times {\tau}.

Note that, by definition, local submartingales and local martingales are always cadlag processes and, hence, in decomposition (2), A is automatically required to be cadlag. So, right-continuity of A did not have to be explicitly stated. Also, Theorem 1 implies that any local submartingale X decomposes as a local martingale plus a finite variation process. As mentioned above, this implies that X is a semimartingale. Here, we work in the opposite direction, proving first that every local submartingale is a semimartingale and, then, use this to give a proof of Theorem 1. Recall that, in these notes, a cadlag adapted process X is defined to be a semimartingale if and only if the stochastic integral {\int\xi\,dX} is defined for all bounded predictable integrands {\xi}.
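For a simple concrete example of Theorem 1, consider {X=B^2} for a standard Brownian motion B. By Jensen's inequality, this is a submartingale and, by Ito's formula,

\displaystyle  B_t^2=2\int_0^tB_s\,dB_s+t.

So, decomposition (2) holds with {M_t=2\int_0^tB_s\,dB_s}, which is a local martingale, and {A_t=t}, which is increasing, starts from zero and, being continuous and adapted, is predictable.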

Lemma 2 Every local submartingale is a semimartingale.

Proof: It was previously shown that X is a semimartingale if, for each fixed {t\in{\mathbb R}_+}, the set

\displaystyle  \left\{\int_0^t\xi\,dX\colon\vert\xi\vert\le1,\;\xi{\rm\ is\ elementary}\right\} (5)

is bounded in probability. This characterization was also stated as part of the Bichteler-Dellacherie theorem, and is what we will use to show that a local submartingale X is a semimartingale. The proof is similar to the earlier proof that local martingales are semimartingales. By localization, we can suppose that X is a proper submartingale.

For any elementary process {\xi} and fixed time {t > 0}, there exist times {0=t_0 \le t_1\le\cdots \le t_n=t} such that

\displaystyle  1_{(0,t]}\xi=\sum_{k=1}^n\xi_{t_k}1_{(t_{k-1},t_k]},

and {\xi_{t_k}} are {\mathcal{F}_{t_{k-1}}}-measurable random variables. Consider the discrete-time process {\tilde X_k=X_{t_k}-X_0}, which is a submartingale adapted to the discrete filtration {\mathcal{\tilde F}_k=\mathcal{F}_{t_k}}. Applying the Doob decomposition (1) to {\tilde X}, we can write {\tilde X_k=M_k+A_k}, where M is a discrete-time martingale and A is increasing with {{\mathbb E}[A_n]={\mathbb E}[\tilde X_n]}. So, if {\vert\xi\vert\le1},

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb P}\left(\left\vert\sum_{k=1}^n\xi_{t_k}(A_k-A_{k-1})\right\vert\ge K\right)&\displaystyle\le{\mathbb P}\left(A_n\ge K\right)\smallskip\\ &\displaystyle\le K^{-1}{\mathbb E}[A_n]\smallskip\\ &\displaystyle=K^{-1}{\mathbb E}[X_t-X_0] \end{array} (6)

for all {K > 0}. Also, as previously shown in the proof that martingales are semimartingales, there exists a constant {c > 0}, independent of choice of X, {\xi} and K, such that

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb P}\left(\left\vert\sum_{k=1}^n\xi_{t_k}(M_k-M_{k-1})\right\vert\ge K\right)&\displaystyle\le\frac{c}{K}{\mathbb E}[\vert M_n\vert]\smallskip\\ &\displaystyle\le\frac{c}{K}{\mathbb E}\left[\vert \tilde X_n\vert + A_n\right]\smallskip\\ &\displaystyle\le\frac{2c}{K}{\mathbb E}\left[\vert X_t-X_0\vert\right]. \end{array} (7)

Combining (6) and (7) gives

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rcl} \displaystyle{\mathbb P}\left(\left\vert\int_0^t\xi\,dX\right\vert > K\right)&\displaystyle\le&\displaystyle{\mathbb P}\left(\left\vert\sum_{k=1}^n\xi_{t_k}(A_k-A_{k-1})\right\vert\ge\frac{K}{2}\right)\smallskip\\ &&\displaystyle+{\mathbb P}\left(\left\vert\sum_{k=1}^n\xi_{t_k}(M_k-M_{k-1})\right\vert\ge\frac{K}{2}\right)\smallskip\\ &\displaystyle\le&\displaystyle\frac{4c+2}{K}{\mathbb E}\left[\vert X_t-X_0\vert\right]. \end{array}

So, choosing K large, the probability on the left hand side can be made as small as we like, independently of {\xi}. This shows that the set (5) is bounded in probability as required. ⬜

With Lemma 2 out of the way, all the ingredients are now in place to give a proof of the Doob-Meyer decomposition. This is just an application of the special semimartingale decomposition from an earlier post, although there is still a small amount of work required to show that the compensator A is indeed increasing. Let us start by proving the first part of Theorem 1 where it is only assumed that X is a local submartingale.

Lemma 3 Every local submartingale X uniquely decomposes as {X=M+A}, where M is a local martingale and A is an increasing predictable process starting from zero.

Proof: By Lemma 2, we know that X is a semimartingale. Also, as it is a local submartingale, X is locally integrable. So, it uniquely decomposes as {X=M+A}, where M is a local martingale and A is a predictable FV process with {A_0=0}. It only remains to be shown that A is increasing.

As {A=X-M} is a local submartingale, there exist stopping times {\tau_n} increasing to infinity such that {A^{\tau_n}} are submartingales. Also, as A is predictable, {\tau_n} can be chosen such that {A^{\tau_n}} has uniformly bounded and, hence, integrable total variation. Then, by the properties of submartingales,

\displaystyle  {\mathbb E}\left[\int_0^{\tau_n}\vert\xi\vert\,dA\right]\ge0 (8)

for any bounded elementary process {\xi}. Let us denote the set of uniformly bounded predictable processes for which (8) holds by S. Then, as just shown, S contains all bounded elementary processes. Next, consider a sequence {\{\xi^m\}_{m=1,2,\ldots}} of predictable processes in S, uniformly bounded by a constant K, and converging to the limit {\xi}. As {A^{\tau_n}} has integrable total variation, dominated convergence gives

\displaystyle  {\mathbb E}\left[\int_0^{\tau_n}\vert\xi\vert\,dA\right]=\lim_{m\rightarrow\infty}{\mathbb E}\left[\int_0^{\tau_n}\vert\xi^m\vert\,dA\right]\ge0.

So, {\xi\in S}. Therefore, by the monotone class theorem, S contains all uniformly bounded predictable processes.

Now, as A is a predictable FV process, it can be written as {A=A^+-A^-} where {A^+} and {A^-} are increasing processes, and {A^-=-\int\xi\,dA} for some predictable process {0\le\xi\le1}. By (8),

\displaystyle  {\mathbb E}\left[A^-_{\tau_n}\right]=-{\mathbb E}\left[\int_0^{\tau_n}\xi\,dA\right]\le0.

Letting n increase to infinity, monotone convergence gives {{\mathbb E}[A^-_\infty]=0}. Finally, as it is nonnegative and increasing, we almost surely have {A^-_t=0} for all t and, hence, {A=A^+} is increasing. ⬜

To complete the proof of Theorem 1, just the 'furthermore' part remains to be shown. That is, under the additional hypotheses on X we obtain stronger properties satisfied by M and A. We prove this now. Note that this part of the proof nowhere requires the fact that A is predictable, just that it is an increasing process starting from zero.

Proof of Theorem 1:
Suppose that X is a proper submartingale. Then, choose stopping times {\tau_n} increasing to infinity such that the stopped processes {M^{\tau_n}} are proper martingales. For any uniformly bounded stopping time {\tau}, optional sampling says that {X_{\tau_n\wedge\tau}} is integrable with expectation bounded by {{\mathbb E}[X_\tau]}. So,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}[A_{\tau_n\wedge\tau}]&\displaystyle={\mathbb E}[X_{\tau_n\wedge\tau}-M^{\tau_n}_\tau]\smallskip\\ &\displaystyle\le{\mathbb E}[X_\tau-M_0]\smallskip\\ &\displaystyle={\mathbb E}[X_\tau-X_0]. \end{array}

Letting n increase to infinity and using monotone convergence gives inequality (3) so, in particular, A is integrable.

Now, suppose that X is of class (DL). Then, it is a proper submartingale so, as shown above, A is integrable. As it is also nonnegative and increasing, A is dominated in {L^1} on each bounded interval. So, {M=X-A} is of class (DL) and, hence, is a proper martingale. Then, for any uniformly bounded stopping time {\tau}, optional sampling gives

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}[A_\tau]&\displaystyle={\mathbb E}[X_\tau-M_\tau]\smallskip\\ &\displaystyle={\mathbb E}[X_\tau-M_0]\smallskip\\ &\displaystyle={\mathbb E}[X_\tau-X_0]. \end{array}

This proves equality (4).

Conversely, suppose that M is a proper martingale and A is integrable. Then, M is of class (DL) and, as it is dominated in {L^1} on finite intervals, A is also of class (DL). Therefore, {X=M+A} is of class (DL).

Now, suppose that X is of class (D). In particular, it is of class (DL) and, as shown above, M is a proper martingale and A is integrable. By submartingale convergence, the limit {X_\infty=\lim_{t\rightarrow\infty}X_t} exists almost-surely. Applying (3) and monotone convergence,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}[A_\tau]&\displaystyle=\lim_{t\rightarrow\infty}{\mathbb E}[A_{\tau\wedge t}]\smallskip\\ &\displaystyle=\lim_{t\rightarrow\infty}{\mathbb E}[X_{\tau\wedge t}-X_0]\smallskip\\ &\displaystyle={\mathbb E}[X_\tau-X_0]. \end{array}

The last equality here uses uniform integrability of {X_{\tau\wedge t}} over {t\ge0}, since X is of class (D). So, (3) holds for all stopping times and, by taking {\tau=\infty}, we see that {A_\infty} is integrable. As it is dominated in {L^1}, A is in class (D). So, {M=X-A} is also in class (D) and, hence, is a uniformly integrable martingale.

Conversely, suppose that M is a uniformly integrable martingale and that {A_\infty} is integrable. Then, M is of class (D) and, as it is dominated in {L^1}, A is also of class (D). So, {X=M+A} is of class (D) as required. ⬜

Approximating the Compensator

The process A appearing in decomposition (2) is called the compensator of X. By definition, it is the unique predictable FV process, starting from zero, such that {X-A} is a local martingale. However, this is considerably different from the definition of the compensator in the discrete-time Doob decomposition (1). In this section, I will show how the continuous-time compensator does arise as the limit of discrete-time compensators along partitions. As previously discussed for approximating the compensator of integrable variation processes, we start by defining the notion of a stochastic partition, P, of {{\mathbb R}_+}. This is just a sequence of stopping times

\displaystyle  0=\tau_0\le\tau_1\le\tau_2\le\cdots\uparrow\infty.

The mesh of the partition is denoted by {\vert P\vert=\sup_n(\tau_n-\tau_{n-1})}. For a process X, we calculate the compensator along the partition P as the process {A^P_t} defined by

\displaystyle  A^P_t=\sum_{n=1}^\infty1_{\{\tau_{n-1} < t\}}{\mathbb E}\left[X_{\tau_n}-X_{\tau_{n-1}}\;\vert\mathcal{F}_{\tau_{n-1}}\right]. (9)

Here, we are only going to consider processes X which are submartingales of class (D). This ensures that the limit {X_\infty=\lim_{t\rightarrow\infty}X_t} exists, and that {X_\tau} is integrable for all stopping times {\tau}. So, the expectations in (9) are well defined.

If X is a class (D) submartingale, then Theorem 1 says that {X=M+A} for a uniformly integrable martingale M and predictable increasing process A starting from zero and such that {A_\infty} is integrable. Therefore, by optional sampling, we have {M_\tau={\mathbb E}[M_\infty\vert\mathcal{F}_\tau]} for each stopping time {\tau}. This implies that {{\mathbb E}[M_{\tau_n}-M_{\tau_{n-1}}\vert\mathcal{F}_{\tau_{n-1}}]=0}, so (9) can be rewritten as

\displaystyle  A^P_t=\sum_{n=1}^\infty1_{\{\tau_{n-1} < t\}}{\mathbb E}\left[A_{\tau_n}-A_{\tau_{n-1}}\;\vert\mathcal{F}_{\tau_{n-1}}\right]. (10)

This expresses {A^P} in terms of the integrable and increasing process A. In the case where X is quasi-left-continuous, so that A is continuous, the approximation to the compensator calculated along partitions P converges uniformly to A as the mesh goes to zero. Here, {\vert P\vert\xrightarrow{\rm P}0} denotes the limit as the mesh {\vert P\vert} goes to zero in probability.

Theorem 4 Let X be a cadlag and quasi-left-continuous submartingale of class (D), and A be as in decomposition (2). Then, {A^P\rightarrow A} uniformly in {L^1} as {\vert P\vert\rightarrow0} in probability. That is,

\displaystyle  \lim_{\vert P\vert\xrightarrow{\rm P}0}{\mathbb E}\left[\sup_{t\ge0}\vert A^P_t-A_t\vert\right]=0. (11)

Proof: As X is quasi-left-continuous, its compensator A is continuous. Then, (10) expresses {A^P} in terms of the continuous process A of integrable total variation. Theorem 10 of the post on compensators then states that the limit (11) holds. ⬜
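As a standard illustration of Theorem 4 (not taken from the development above), let N be a Poisson process of rate {\lambda}, which is quasi-left-continuous with continuous compensator {A_t=\lambda t}. Along a deterministic partition, the conditional expectations in (9) are just {\lambda(t_k-t_{k-1})}, so {A^P_t} can be computed in closed form. The following Python sketch (all names are my own) shows it converging to {\lambda t}, with error at most {\lambda\vert P\vert}:

```python
# For a Poisson process N of rate lam on [0, T], independent increments give
# E[N_{t_k} - N_{t_{k-1}} | F_{t_{k-1}}] = lam * (t_k - t_{k-1}), so the
# discrete compensator (9) along a deterministic partition is deterministic.

def partition_compensator(t, grid, lam):
    # A^P_t = sum over intervals (t_{k-1}, t_k] with t_{k-1} < t
    # of lam * (t_k - t_{k-1}).
    return sum(lam * (b - a) for a, b in zip(grid, grid[1:]) if a < t)

lam, T, t = 2.0, 1.0, 0.3
for m in (4, 16, 64, 256):
    grid = [i * T / m for i in range(m + 1)]
    approx = partition_compensator(t, grid, lam)
    # A^P_t overshoots lam*t by at most lam*|P|, consistent with Theorem 4.
    assert abs(approx - lam * t) <= lam * (T / m)
    print(m, approx)  # approaches lam * t = 0.6 as the mesh shrinks
```

Here the convergence is deterministic; the content of Theorem 4 is that the same uniform convergence holds, in {L^1}, for every quasi-left-continuous class (D) submartingale.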

In the case where X is not quasi-left-continuous, then convergence to the compensator is not guaranteed in such a strong sense as above. As was previously shown by an example, even in the very simple case where {X=1_{[\tau,\infty)}} for some stopping time {\tau}, it is not guaranteed that the limit of {A^P_t} exists in probability as the mesh goes to zero. Instead, we have to work with respect to weak convergence in {L^1}.

Theorem 5 Let X be a cadlag submartingale of class (D), and A be as in decomposition (2). Then, {A^P_\tau\rightarrow A_\tau} weakly in {L^1} as {\vert P\vert\rightarrow0} in probability, for any random time {\tau\colon\Omega\rightarrow{\mathbb R}_+\cup\{\infty\}}. That is,

\displaystyle  \lim_{\vert P\vert\xrightarrow{\rm P}0}{\mathbb E}\left[YA^P_\tau\right]={\mathbb E}\left[YA_\tau\right] (12)

for all uniformly bounded random variables Y.

Proof: As in the proof of Theorem 4, we note that (10) expresses {A^P} in terms of the integrable variation process A. Theorem 11 of the post on compensators states that the limit (12) holds. ⬜

Notes

As discussed above, the approach used to prove the Doob-Meyer decomposition in this post is quite different from the methods which are often used elsewhere. This is because we have already developed much of the theory of semimartingales independently of the Doob-Meyer decomposition, so it seems natural to simply apply this theory here rather than starting from scratch. I will, however, briefly outline some alternative approaches which are sometimes used.

One method of proving the existence of the Doob-Meyer decomposition is via the Doléans measure of a class (D) cadlag submartingale X. This is a finite measure on the predictable σ-algebra {\mathcal{P}} satisfying

\displaystyle  \mu(\xi)={\mathbb E}\left[\int_0^\infty\xi\,dX\right]

for all bounded elementary processes {\xi}. Proving that the function {\mu} does extend to a measure can be done by a relatively straightforward application of the Caratheodory extension theorem. Next, the Doléans measure is used to define the following finite measure on the underlying measurable space {(\Omega,\mathcal{F}_t)}. Given any bounded {\mathcal{F}_t}-measurable random variable Y, choose a cadlag modification of the {\mathcal{F}_{\cdot+}}-martingale {M_t={\mathbb E}[Y\vert\mathcal{F}_{t+}]}. Then, letting {M_{t-}=\lim_{s\uparrow\uparrow t}M_s}, define {\nu_t(Y)} by

\displaystyle  \nu_t(Y)=\mu(M_-).

It can be shown that {\nu_t} is absolutely continuous with respect to {{\mathbb P}}, so the Radon-Nikodym derivative

\displaystyle  A_t=\frac{d\nu_t}{d{\mathbb P}}

exists. Furthermore, A is adapted, increasing and right-continuous in probability. So, it has a right-continuous modification. Also, from the construction, it is straightforward to show that {X-A} is a martingale and

\displaystyle  {\mathbb E}\left[M_tA_t\right]={\mathbb E}\left[\int_0^tM_{s-}\,dA_s\right] (13)

for all cadlag and bounded martingales M. The fact that A is predictable follows from (13) together with the techniques developed in the earlier post on predictable FV processes. Filling in the details, this provides a proof of the Doob-Meyer decomposition for class (D) submartingales.

The method just described, making use of the Doléans measure of X, is close to the standard 'classical' proof of the Doob-Meyer decomposition. The method is closely related to the idea of dual predictable projection, which is sometimes employed in the proof. The most difficult part is showing that any process A satisfying (13) is indeed predictable.

An alternative method which is sometimes used is to construct the compensator A directly, by taking the limit of the discrete approximation along a sequence of partitions. That is, prove Theorem 5 without assuming a-priori that the compensator A exists. Again, showing that the process A is indeed predictable is often the most tricky part of this approach. Similarly, the decomposition could be constructed by first proving Theorem 4. This would prove the Doob-Meyer decomposition for quasi-left-continuous cadlag submartingales where, now, the compensator A is the unique continuous increasing process starting from zero such that {X-A} is a martingale. Extending to general cadlag submartingales is relatively straightforward. We would simply subtract out the jumps of X which occur at predictable times, and construct the compensators at these jump times explicitly.

Finally, as quite a lot of the theory of semimartingales and stochastic integration has already been developed in these notes, and was drawn upon in the proof given here, I think that it is worth briefly mentioning what was really needed for the proof above. The main point is that, for every submartingale X, the set (5) is bounded in probability. Furthermore, this implies that the quadratic variation {[X]} exists. Sometimes, semimartingales are defined as cadlag adapted processes such that (5) is bounded in probability. Then, quadratic variations and covariations of semimartingales always exist. This allows us to define the vector space V of semimartingales X such that {[X]_\infty} is integrable, with the inner product {\langle X,Y\rangle={\mathbb E}[[X,Y]_\infty]}. So, for any submartingale X such that {[X]_\infty} is integrable, we can define M to be the orthogonal projection in V of X onto the subspace of uniformly square integrable martingales. In these notes, this projection was done as part of the proof of the Bichteler-Dellacherie theorem. The process {A=X-M} obtained is characterized by the property that {[A,N]} is a local martingale for all local martingales N, which we showed is equivalent to A being a predictable FV process.

Comments

  1. For a recent paper on the Doob-Meyer decomposition theorem, see

    M. Beiglböck, W. Schachermayer, B. Veliyev. A short proof of the Doob-Meyer Theorem. Preprint, 2010.

    It uses Komlós' lemma as in previous works of W. Schachermayer.

    Comment by adrien — 30 December 11 @ 7:31 PM | Reply

    • Thanks, I see it is on the arXiv (A Short Proof Of The Doob-Meyer Theorem). I’ll check that out.

      Comment by George Lowther — 30 December 11 @ 8:06 PM | Reply

    • Ok, I’ve read it now. It seems like a very neat approach, and is something that I have thought about before. It constructs the compensator A by calculating the limit of discrete Doob decompositions along a sequence of partitions. This is (roughly) similar to proving Theorem 5 above, for a specific sequence of partitions (dyadic partitions) without a-priori assuming the existence of a compensator. One way is to use the fact that A^P_t is uniformly integrable as P runs through the partitions, which implies that this sequence is compact in the weak topology on L1 (uniformly integrable subsets of L1 are weakly relatively compact). You still need to show that any limit point A obtained is predictable. In the paper you mention, they use the fact that, by taking convex combinations, you can pass from a uniformly integrable sequence to one converging both in L1 and almost surely (and, once you know this fact, you can pass to convex combinations without having to explicitly mention weak convergence). Then, by showing that A can be obtained almost surely as a limit of convex combinations of the left-continuous and adapted (hence, predictable) processes A^P, you get that A is predictable.

      Constructing A using a limit (or limit point) of Doob-style approximations on a sequence of partitions is not particularly new: this is what Kallenberg does in Foundations of Modern Probability (iirc). The trick is in how you show that A is predictable. In the linked paper, Komlós' lemma allows you to show that it is a limit of convex combinations of the predictable processes A^P, which establishes predictability in a particularly simple way.

      Thanks for bringing this paper to my attention.

      Comment by George Lowther — 31 December 11 @ 12:05 AM | Reply

