Almost Sure

8 December 19


Filed under: Probability Theory — George Lowther @ 9:24 PM

After the previous posts motivating the idea of studying probability spaces by looking at states on algebras, I will now make a start on the theory. The idea is that an abstract algebra can represent the collection of bounded, complex-valued random variables, with a state on this algebra taking the place of the probability measure. By allowing the algebra to be noncommutative, we also incorporate quantum probability.

I will take very small first steps in this post, considering only the basic definition of a *-algebra and positive maps. To effectively emulate classical probability theory in this context will involve additional technical requirements. However, that is not the aim here. We take a bare-bones approach, to get a feeling for the underlying constructs, and start with the definition of a *-algebra. I use {\bar\lambda} to denote the complex conjugate of a complex number {\lambda}.

Definition 1 An algebra {\mathcal A} over field {K} is a {K}-vector space together with a binary product {(a,b)\mapsto ab} satisfying

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle a(bc)=(ab)c,\smallskip\\ &\displaystyle \lambda(ab)=(\lambda a)b=a(\lambda b),\smallskip\\ &\displaystyle a(b+c)=ab+ac,\smallskip\\ &\displaystyle (a+b)c=ac+bc, \end{array}

for all {a,b,c\in\mathcal A} and {\lambda\in K}.

A *-algebra {\mathcal A} is an algebra over {{\mathbb C}} with a unary involution, {a\mapsto a^*} satisfying

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle (\lambda a+\mu b)^*=\bar\lambda a^*+\bar\mu b^*,\smallskip\\ &\displaystyle (ab)^*=b^*a^*,\smallskip\\ &\displaystyle a^{**}=a, \end{array}

for all {a,b\in\mathcal A} and {\lambda,\mu\in{\mathbb C}}.

An algebra is called unital if there exists {1\in\mathcal A} such that

\displaystyle  1a=a1=a

for all {a\in\mathcal A}. Then, {1} is called the unit or identity of {\mathcal A}.

In contrast to my previous posts, I am not assuming that a *-algebra contains a unit by default, and will refer to it as `unital' whenever the existence of a unit is required. An {a\in\mathcal A} is called self-adjoint if {a^*=a}. It can be seen that the self-adjoint elements form a real-linear subspace of the algebra, which we denote by {\mathcal A_{\rm sa}}. For any {a\in\mathcal A}, both {a^*a} and {a^*+a} are self-adjoint. Furthermore, every {a\in\mathcal A} can be uniquely decomposed as

\displaystyle  a=u+iv

for {u,v\in\mathcal A_{\rm sa}}. Using {a^*=u-iv}, this is easily solved to obtain {u=(a+a^*)/2} and {v=(a-a^*)/2i}. If the algebra is unital, then the identity {1} must be self-adjoint, as can be seen from

\displaystyle  1^*=1^*1=(1^*1)^*=(1^*)^*=1.
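As a small numerical sanity check (not part of the original post), the self-adjoint decomposition {a=u+iv} can be verified in the *-algebra of complex matrices, where the involution is the conjugate transpose; numpy is assumed here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random element of the *-algebra of 3x3 complex matrices,
# with involution given by the conjugate transpose.
a = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

def star(m):
    """The involution m -> m^* (conjugate transpose for matrices)."""
    return m.conj().T

# The unique decomposition a = u + i v with u, v self-adjoint.
u = (a + star(a)) / 2
v = (a - star(a)) / 2j

assert np.allclose(u, star(u))      # u is self-adjoint
assert np.allclose(v, star(v))      # v is self-adjoint
assert np.allclose(a, u + 1j * v)   # a = u + i v
```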

A sub-*-algebra {\mathcal B} of {\mathcal A} is a subset which is closed under the algebra operations. That is, {\lambda a}, {a+b}, {ab} and {a^*} are all in {\mathcal B}, for any {a,b\in\mathcal B} and {\lambda\in{\mathbb C}}. Any sub-*-algebra is itself a *-algebra under these operations. An algebra is said to be commutative if {ab=ba} for all {a,b\in\mathcal A}.

Example 1 Let {X} be a set and {\mathcal A} be the collection of functions {f\colon X\rightarrow{\mathbb C}}. This is a commutative *-algebra under the pointwise operations

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle (f+g)(x)=f(x)+g(x),\smallskip\\ &\displaystyle (\lambda f)(x)=\lambda f(x),\smallskip\\ &\displaystyle (fg)(x)=f(x)g(x),\smallskip\\ &\displaystyle f^*(x)=\overline{f(x)}, \end{array}

for {f,g\in\mathcal A} and {\lambda\in{\mathbb C}}. The self-adjoint elements are the real-valued functions on {X}.
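The pointwise operations of example 1 can be sketched numerically by representing functions on a finite set as arrays (an illustrative choice, not from the original post; numpy assumed):

```python
import numpy as np

# Functions on the finite set X = {0,...,4}, stored as complex arrays;
# array index = point of X, entry = value of the function there.
X = 5
f = np.exp(2j * np.pi * np.arange(X) / X)
g = np.arange(X) + 0j

# Pointwise *-algebra operations of Example 1.
prod = f * g          # pointwise product
f_star = f.conj()     # involution: pointwise complex conjugate

assert np.allclose(f * g, g * f)                          # commutativity
assert np.allclose((f * g).conj(), g.conj() * f.conj())   # (fg)^* = g^* f^*
# f^* f = |f|^2 is real-valued, hence self-adjoint.
assert np.allclose(f.conj() * f, (f.conj() * f).conj())
```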

Commutative *-algebras can often be represented as sub-*-algebras of the collection of complex-valued functions on some set {X}, although this does impose additional requirements. For example, any such algebra also satisfies {a=0} whenever {a^*a=0}. In example 1, it can be seen that the positive elements of {\mathcal A} (the nonnegative real-valued functions) are precisely those that can be expressed in the form {a^*a}.

Noncommutative *-algebras arise as collections of linear operators on an inner product space.

Example 2 If {V} is a vector space over a field {K}, then the space of linear maps {V\rightarrow V} is a {K}-algebra, where the algebra operations are defined by combining the linear maps in the usual way,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle (a+b)(x)=a(x)+b(x),\smallskip\\ &\displaystyle (\lambda a)(x)=\lambda a(x),\smallskip\\ &\displaystyle (ab)(x)=a(b(x)), \end{array}

for all {x\in V} and {\lambda\in K}. If {V} is a complex vector space with inner product {\langle\cdot,\cdot\rangle}, let {\mathcal A} be the space of linear maps {a\colon V\rightarrow V} such that there exists an adjoint {a^*\colon V\rightarrow V} satisfying

\displaystyle  \langle x,ay\rangle=\langle a^*x,y\rangle (1)

for all {x,y\in V}. Then {\mathcal A} is a *-algebra. In particular, if {V} is a Hilbert space, then the collection {B(V)} of all bounded linear maps {V\rightarrow V} is a *-algebra, with involution given by the operator adjoint.

In this example, if {a\in\mathcal A} satisfies {a^*a=0} then

\displaystyle  \lVert ax\rVert^2=\langle ax,a x\rangle=\langle a^*ax,x\rangle=0

so that {a=0}, as in example 1. Generally, we expect the property that {a^*a=0} implies {a=0} for the cases that we will be interested in, although I will not impose this as a condition. Any element of {\mathcal A} of the form {a^*a} satisfies

\displaystyle  \langle x,a^*a x\rangle=\langle ax,ax\rangle=\lVert ax\rVert^2\ge0,

so represents a positive linear operator.
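Both of these computations can be checked numerically for the matrix case of example 2, where the adjoint is the conjugate transpose (a sketch not in the original post; note that numpy's `vdot` is conjugate-linear in its first argument, matching the convention for {\langle\cdot,\cdot\rangle} used here):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = rng.normal(size=n) + 1j * rng.normal(size=n)

# The adjoint of a matrix is its conjugate transpose.
a_star = a.conj().T

# The defining adjoint relation (1): <x, ay> = <a^* x, y>.
assert np.isclose(np.vdot(x, a @ y), np.vdot(a_star @ x, y))

# <x, a^* a x> = ||ax||^2 >= 0, so a^* a is a positive operator.
val = np.vdot(x, a_star @ a @ x)
assert np.isclose(val.imag, 0)
assert val.real >= 0
```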

Next, we define positive linear maps on a *-algebra.

Definition 2 Let {\mathcal A} be a *-algebra. Then, a linear map {p\colon\mathcal A\rightarrow{\mathbb C}} is,

  1. self-adjoint or real if {p(a^*)=\overline{p(a)}} for all {a\in\mathcal A}.
  2. positive if it is self-adjoint and {p(a^*a)\ge0} for all {a\in\mathcal A}.

Example 3 Let {(X,\mathcal E,\mu)} be a finite measure space, and {\mathcal A} be the bounded measurable maps {X\rightarrow{\mathbb C}}. Then, integration w.r.t. {\mu} defines a positive linear map on {\mathcal A},

\displaystyle  p(f)=\int f\,d\mu.
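For a quick illustration (not from the original post), take {\mu} to be a finite measure on a finite set, so that integration is a weighted sum; the self-adjointness and positivity of definition 2 are then immediate to check:

```python
import numpy as np

# A finite measure mu on X = {0,1,2,3}, given by its point masses.
# Integration against mu is then p(f) = sum_x f(x) mu({x}).
mu = np.array([0.5, 1.0, 0.25, 2.0])

def p(f):
    """Integration against mu: a positive linear map on functions."""
    return np.sum(f * mu)

f = np.array([1 + 2j, -1j, 3.0, 0.5])

# Self-adjointness: p(f^*) = conjugate of p(f), since mu is real.
assert np.isclose(p(f.conj()), np.conj(p(f)))
# Positivity: p(f^* f) is the integral of |f|^2, hence nonnegative.
assert p(f.conj() * f).real >= 0
```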

Example 4 Let {V} be an inner product space, and {\mathcal A} be a sub-*-algebra of the space of linear maps {a\colon V\rightarrow V} as in example 2. Then, any {\xi\in V} defines a positive linear map on {\mathcal A},

\displaystyle  p(a)=\langle\xi,a\xi\rangle.
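Example 4 can also be verified directly for matrices (an illustrative sketch, not part of the post; `np.vdot` conjugates its first argument, matching the inner-product convention here):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
xi = rng.normal(size=n) + 1j * rng.normal(size=n)
a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

def p(m):
    """The vector state p(m) = <xi, m xi>."""
    return np.vdot(xi, m @ xi)

a_star = a.conj().T

assert np.isclose(p(a_star), np.conj(p(a)))   # self-adjoint
q = p(a_star @ a)                             # p(a^* a) = ||a xi||^2
assert np.isclose(q.imag, 0) and q.real >= 0  # positive
```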

Given a *-algebra {\mathcal A} and a positive linear map {p\colon\mathcal A\rightarrow{\mathbb C}}, we can define a semi-inner product by,

\displaystyle  \langle x,y\rangle = p(x^*y), (2)

for all {x,y\in\mathcal A}. This is only a semi-inner product, as it need not be positive definite. That is, it is possible that {\langle x,x\rangle=0} for some nonzero {x\in\mathcal A}. The associated semi-norm is

\displaystyle  \lVert a\rVert_2=\langle a,a\rangle^{\frac12}=\sqrt{p(a^*a)}.

I will refer to this as the {L^2(p)} semi-norm on {\mathcal A}. If you prefer to work with a true inner product, rather than a semi-inner product, it is always possible to quotient out by the space of {x\in\mathcal A} for which {\lVert x\rVert_2=0}. As {\mathcal A} acts on itself by left-multiplication, taking {V=\mathcal A} considered as a complex vector space shows that the construction of *-algebras in example 2 is quite general.
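The degeneracy of the semi-inner product (2) is easy to exhibit concretely. The following sketch (my own illustration, not from the post) uses the vector state of example 4 on {2\times2} matrices, with a nonzero matrix that annihilates the vector {\xi}:

```python
import numpy as np

# The vector state p(m) = <xi, m xi> on 2x2 matrices, with xi = e_0.
xi = np.array([1.0, 0.0], dtype=complex)

def p(m):
    return np.vdot(xi, m @ xi)

def sip(x, y):
    """The semi-inner product <x, y> = p(x^* y) of equation (2)."""
    return p(x.conj().T @ y)

# x is nonzero but x @ xi = 0, so <x, x> = ||x xi||^2 = 0:
# the form (2) is only a SEMI-inner product.
x = np.array([[0.0, 1.0], [0.0, 1.0]], dtype=complex)
assert np.isclose(sip(x, x), 0)
assert np.linalg.norm(x) > 0
```

This is exactly the situation that lemma 3 handles by quotienting out the left-ideal {\mathcal A_0}.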

A left-ideal of a *-algebra is a subset, which is a subspace as a complex vector space, and which is closed under left-multiplication by elements of the algebra.

Lemma 3 Let {p\colon\mathcal A\rightarrow{\mathbb C}} be a positive linear map on *-algebra {\mathcal A}. Let {\mathcal A_0} denote the elements {x\in\mathcal A} such that {\lVert x\rVert_2=0} or, equivalently, {p(x^*x)=0}. This is a left-ideal of {\mathcal A}.

Using {\mathcal A/\mathcal A_0} to denote the quotient vector space, with quotient map {x\mapsto[x]=\mathcal A_0+x}, then the semi-inner product uniquely defines an inner product on {\mathcal A/\mathcal A_0} by

\displaystyle  \langle[x],[y]\rangle=\langle x,y\rangle (3)

and {\mathcal A} acts on {\mathcal A/\mathcal A_0} by left-multiplication, {a[x]=[ax]}.

Proof: If {x,y\in\mathcal A_0} and {\lambda,\mu\in{\mathbb C}} then, by the triangle inequality,

\displaystyle  \lVert \lambda x+\mu y\rVert_2\le\lvert\lambda\rvert\lVert x\rVert_2+\lvert\mu\rvert\lVert y\rVert_2=0.

So {\lambda x+\mu y\in\mathcal A_0}, showing that {\mathcal A_0} is a vector subspace. Also, for {a\in\mathcal A}, Cauchy–Schwarz gives

\displaystyle  \lVert ax\rVert_2^2=\langle ax,ax\rangle=\langle a^*ax,x\rangle\le\lVert a^*ax\rVert_2\lVert x\rVert_2=0,

so that {ax} is in {\mathcal A_0} which, therefore, is a left-ideal. Next, if {[x]=[x^\prime]} then

\displaystyle  \lvert\langle x,y\rangle-\langle x^\prime,y\rangle\rvert=\lvert\langle x-x^\prime,y\rangle\rvert\le\lVert x-x^\prime\rVert_2\lVert y\rVert_2=0

and, similarly, the value of {\langle x,y\rangle} is unchanged by replacing {y} with {y^\prime} where {[y]=[y^\prime]}. So, (3) is well-defined. Furthermore, if {\langle[x],[x]\rangle=0} then {x\in\mathcal A_0}, so that {\mathcal A/\mathcal A_0} is a true inner product space. Finally, if {[x]=0} then, as {\mathcal A_0} is a left-ideal, {[ax]=0}, so we can define the action of {\mathcal A} on {\mathcal A/\mathcal A_0} by {a[x]=[ax]}. ⬜

Now, for {a\in\mathcal A}, consider the linear map on {\mathcal A} given by left-multiplication, {x\mapsto ax} (or, you can look at the action on {\mathcal A/\mathcal A_0}, if preferred). We denote its operator norm by {\lVert a\rVert_\infty}, so that,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle\lVert a\rVert_\infty&\displaystyle=\inf\left\{K\in{\mathbb R}^+\colon \lVert ax\rVert_2\le K\lVert x\rVert_2{\rm\ for\ all\ }x\in\mathcal A\right\}\smallskip\\ &\displaystyle=\sup\left\{\lVert ax\rVert_2\colon x\in\mathcal A, \lVert x\rVert_2\le1\right\}. \end{array} (4)

Again, this may only be a semi-norm, as it is possible that {\lVert a\rVert_\infty=0} for nonzero {a}, and, alternatively, can be infinite. I will refer to {\lVert\cdot\rVert_\infty} as the {L^\infty(p)} seminorm and, sometimes, will drop the subscript and write simply {\lVert a\rVert}, where it is unlikely to cause confusion. I will say that {a} is bounded if it acts as a bounded operator, so that {\lVert a\rVert} is finite. We show that this is a C*-seminorm. Whenever I say that a *-algebra acts on a semi-inner product space, I am requiring that this is in the sense of example 2 so that, in particular, (1) holds.

Lemma 4 Let {\mathcal A} be a *-algebra acting on semi-inner product space {V}. Then, the operator norm on {\mathcal A} is an algebra semi-norm,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle\lVert\lambda a\rVert = \lvert\lambda\rvert\lVert a\rVert,\smallskip\\ &\displaystyle\lVert a+b\rVert\le\lVert a\rVert+\lVert b\rVert,\smallskip\\ &\displaystyle\lVert ab\rVert\le\lVert a\rVert\lVert b\rVert, \end{array}

for all {a,b\in\mathcal A} and {\lambda\in{\mathbb C}}. Furthermore, the C*-inequality,

\displaystyle  \lVert a\rVert^2\le\lVert a^*a\rVert (5)

holds for all {a\in\mathcal A}, and {\lVert a^*\rVert=\lVert a\rVert}.

Proof: The algebra seminorm properties are standard for any collection of operators on a normed space. For the C*-inequality, consider {a\in\mathcal A} and {x\in V}. Cauchy–Schwarz gives

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle\lVert a x\rVert^2&\displaystyle=\langle ax,ax\rangle=\langle a^*ax,x\rangle\smallskip\\ &\displaystyle\le\lVert a^*ax\rVert\lVert x\rVert\le\lVert a^*a\rVert\lVert x\rVert^2, \end{array}

and (5) follows. Next, cancelling {\lVert a^*x\rVert} from both sides of

\displaystyle  \lVert a^*x\rVert^2=\langle aa^*x,x\rangle\le\lVert aa^*x\rVert\lVert x\rVert\le\lVert a\rVert\lVert a^*x\rVert\lVert x\rVert

gives {\lVert a^*\rVert\le\lVert a\rVert}. Using {a^*} in place of {a} gives the reverse inequality. ⬜
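For matrices, the operator norm of lemma 4 is the largest singular value, and the C*-identity can be confirmed numerically (a sketch of my own, assuming numpy; `np.linalg.norm(m, 2)` computes the spectral norm):

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

def norm(m):
    """Operator 2-norm (largest singular value): a C*-norm on matrices."""
    return np.linalg.norm(m, 2)

a_star = a.conj().T

assert np.isclose(norm(a_star), norm(a))           # ||a^*|| = ||a||
assert np.isclose(norm(a_star @ a), norm(a) ** 2)  # ||a^* a|| = ||a||^2
```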

Once it is known that a semi-norm on a *-algebra satisfies the C*-inequality, then we get various identities for free. An element {a} of a *-algebra {\mathcal A} is called normal if it commutes with its adjoint, {a^*a=aa^*}. This includes, for example, all self-adjoint elements and all unitary elements which, by definition, satisfy {a^*a=aa^*=1}.

Lemma 5 If {\lVert\cdot\rVert} is a finite seminorm on *-algebra {\mathcal A} satisfying the C*-inequality (5) then,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle\lVert a^*\rVert=\lVert a\rVert,\smallskip\\ &\displaystyle\lVert a^*a\rVert=\lVert a\rVert^2, \end{array} (6)

for all {a\in\mathcal A}. Furthermore, for normal {a\in\mathcal A},

\displaystyle  \lVert a^n\rVert=\lVert a\rVert^n (7)

for all {n\ge1} and, more generally,

\displaystyle  \lVert(a^*)^ra^s\rVert=\lVert a\rVert^{r+s} (8)

for all {r,s\ge1}.

Proof: The C*-inequality gives

\displaystyle  \lVert a\rVert^2\le\lVert a^*a\rVert\le\lVert a^*\rVert\lVert a\rVert.

Cancelling {\lVert a\rVert} (the case {\lVert a\rVert=0} being trivial) gives {\lVert a\rVert\le\lVert a^*\rVert}. Replacing {a} by {a^*} gives the reverse inequality, so {\lVert a\rVert=\lVert a^*\rVert}. Then,

\displaystyle  \lVert a\rVert^2\le\lVert a^*a\rVert\le\lVert a^*\rVert\lVert a\rVert=\lVert a\rVert^2,

giving the second equality. We now prove (7), starting with the case where {a=a^*} is self-adjoint and {n} is a power of 2. By what we have just shown,

\displaystyle  \lVert a^{2^{m+1}}\rVert=\lVert (a^{2^m})^2\rVert=\lVert a^{2^m}\rVert^2.

Hence, by induction in {m},

\displaystyle  \lVert a^{2^m}\rVert=\lVert a\rVert^{2^m},

for all nonnegative integers {m}. This proves (7) when {n} is a power of 2, and we need to extend to the case where it is any positive integer. In that case, choose {m} such that {2^m\ge n}. Then,

\displaystyle  \lVert a\rVert^{2^m}=\lVert a^{2^m}\rVert\le\lVert a^n\rVert\lVert a\rVert^{2^m-n}.

Assuming that {\lVert a\rVert} is nonzero, cancelling {\lVert a\rVert^{2^m-n}},

\displaystyle  \lVert a^n\rVert\le\lVert a\rVert^n\le\lVert a^n\rVert,

gives the required equality. The case where {\lVert a\rVert=0} is easily handled, since {\lVert a^n\rVert\le\lVert a\rVert^n=0}.

Now, if {a} is normal then,

\displaystyle  \lVert a^n\rVert^2=\lVert (a^*)^na^n\rVert=\lVert(a^*a)^n\rVert=\lVert a^*a\rVert^n=\lVert a\rVert^{2n},

proving (7). Similarly, for {r,s\ge1},

\displaystyle  \lVert (a^*)^r a^s\rVert^2=\lVert (a^*)^sa^r(a^*)^ra^s\rVert=\lVert(a^*a)^{r+s}\rVert=\lVert a\rVert^{2(r+s)}

as required. ⬜
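The identity (7), and the role of normality in it, can be illustrated with matrices under the spectral norm (my own sketch, not from the post): for a normal matrix the norm of every power is the corresponding power of the norm, while a nilpotent (hence non-normal) matrix fails this badly.

```python
import numpy as np

def norm(m):
    """Operator 2-norm (largest singular value)."""
    return np.linalg.norm(m, 2)

# A normal matrix: unitarily diagonalizable, a = u diag(lam) u^*.
rng = np.random.default_rng(4)
lam = np.array([2.0 + 1j, -0.5, 1j])
u, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
a = u @ np.diag(lam) @ u.conj().T
assert np.allclose(a @ a.conj().T, a.conj().T @ a)  # a is normal

# ||a^n|| = ||a||^n for normal a, as in (7).
for n in (2, 3, 4):
    assert np.isclose(norm(np.linalg.matrix_power(a, n)), norm(a) ** n)

# Normality matters: for the nilpotent b, ||b^2|| = 0 but ||b||^2 = 1.
b = np.array([[0.0, 1.0], [0.0, 0.0]])
assert norm(b @ b) == 0 and np.isclose(norm(b) ** 2, 1.0)
```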

In particular, applying this to the action of {\mathcal A} on itself:

Corollary 6 If {p\colon\mathcal A\rightarrow{\mathbb C}} is a positive linear map on *-algebra {\mathcal A}, then the {L^\infty(p)} semi-norm is an algebra semi-norm satisfying the C*-identities (6), and satisfying (7,8) for bounded normal {a\in\mathcal A}.

Proof: Let {\mathcal B} be the subalgebra of bounded elements of {\mathcal A}. By lemma 4, {\lVert a^*\rVert=\lVert a\rVert} is finite for all {a\in\mathcal B}, so {\mathcal B} is a *-subalgebra. As, again by lemma 4, the C*-inequality (5) holds, lemma 5 shows that the C*-identities (6) hold for bounded {a} and (7,8) hold for bounded normal {a}. It only remains to show that (6) holds for unbounded {a}, but this is immediate from the C*-inequality (5). ⬜
