
Matrix elements: example

Required math: calculus, vectors

Required physics: none

References: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Section 3.6; Problem 3.22.

As a simple example of the bra-ket notation and the matrix representation of operators, suppose we have a three-dimensional vector space spanned by an orthonormal basis {|1\rangle,|2\rangle,|3\rangle}. We begin with a couple of vectors given by

\displaystyle \begin{aligned} |\alpha\rangle &= i|1\rangle-2|2\rangle-i|3\rangle \\ |\beta\rangle &= i|1\rangle+2|3\rangle \end{aligned}

The corresponding bras are found by taking the complex conjugates of the components:

\displaystyle \begin{aligned} \langle\alpha| &= -i\langle1|-2\langle2|+i\langle3| \\ \langle\beta| &= -i\langle1|+2\langle3| \end{aligned}

Remember that these bras are not vectors in their own right; rather, they are operators which acquire meaning only when they are applied to vectors to yield a complex number.

We can form the two possible inner products of these vectors:

\displaystyle \begin{aligned} \langle\alpha|\beta\rangle &= (-i)(i)+(-2)(0)+(i)(2) = 1+2i \\ \langle\beta|\alpha\rangle &= (-i)(i)+(0)(-2)+(2)(-i) = 1-2i \end{aligned}

Clearly {\langle\beta|\alpha\rangle=\langle\alpha|\beta\rangle^{*}}.
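As a quick sanity check, we can verify these inner products numerically. This is just a sketch in Python with numpy, where we choose to represent the basis kets {|1\rangle,|2\rangle,|3\rangle} by the standard unit column vectors (an assumption of the sketch, not part of the problem):

import numpy as np

# Represent |1>, |2>, |3> as the standard basis of C^3,
# so each ket is just its column of components.
alpha = np.array([1j, -2, -1j])   # |alpha> = i|1> - 2|2> - i|3>
beta = np.array([1j, 0, 2])       # |beta>  = i|1> + 2|3>

# np.vdot conjugates its first argument, so it computes <alpha|beta>
print(np.vdot(alpha, beta))   # (1+2j)
print(np.vdot(beta, alpha))   # (1-2j), the complex conjugate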

If we define an operator {\hat{A}=|\alpha\rangle\langle\beta|} we can write it in terms of the basis above:

\displaystyle \begin{aligned} \hat{A} &= |\alpha\rangle\langle\beta| \\ &= (i|1\rangle-2|2\rangle-i|3\rangle)(-i\langle1|+2\langle3|) \\ &= |1\rangle\langle1|+2i|1\rangle\langle3|+2i|2\rangle\langle1|-4|2\rangle\langle3|-|3\rangle\langle1|-2i|3\rangle\langle3| \end{aligned}

From here, we can obtain its matrix elements in this basis by using the orthonormality of the basis, {\langle i|j\rangle=\delta_{ij}}. For example,

\displaystyle \begin{aligned} \langle1|\hat{A}|1\rangle &= \langle1|1\rangle\langle1|1\rangle+2i\langle1|1\rangle\langle3|1\rangle+2i\langle1|2\rangle\langle1|1\rangle-4\langle1|2\rangle\langle3|1\rangle-\langle1|3\rangle\langle1|1\rangle-2i\langle1|3\rangle\langle3|1\rangle \\ &= 1 \end{aligned}

Doing similar calculations for the other elements, we get

\displaystyle  \mathbf{\mathsf{A}}=\left[\begin{array}{ccc} 1 & 0 & 2i\\ 2i & 0 & -4\\ -1 & 0 & -2i \end{array}\right]

The matrix is not Hermitian, since it is not equal to its conjugate transpose.
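We can also check the whole matrix at once: in a finite basis, the matrix of {|\alpha\rangle\langle\beta|} is just the outer product of the component vectors. Continuing the numpy sketch from above:

import numpy as np

alpha = np.array([1j, -2, -1j])
beta = np.array([1j, 0, 2])

# A_ij = <i|A|j> = alpha_i * conj(beta_j)
A = np.outer(alpha, beta.conj())
print(A)
# rows come out as [1, 0, 2i], [2i, 0, -4], [-1, 0, -2i]

# Not Hermitian: A differs from its conjugate transpose
print(np.allclose(A, A.conj().T))   # False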

Projection operators

Required math: calculus, vectors

Required physics: Schrödinger equation

References: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Section 3.6; Problem 3.21.

We’ve been writing vector inner products using the Dirac bra-ket notation, so the inner product of two vectors is {\left\langle f\right.\left|g\right\rangle }. Dirac’s idea was to break this notation into two pieces, the ‘bra’ and the ‘ket’. The meaning of the ket part is fairly obvious: it’s just the original vector. But what exactly is the ‘bra’ part? Essentially, it’s a linear operator whose operand is a vector and whose output is a complex number (a scalar). If the vector space is discrete (with any number of dimensions, finite or infinite), then applying a bra to a ket results in the ordinary scalar product (the ‘dot product’ familiar from linear algebra). If the vector space is continuous, as with position or momentum, then applying a bra to a ket results in an integral over the relevant domain.

It’s worth pointing out that some authors such as Griffiths call the bra a linear function of vectors rather than an operator, preferring to reserve the term ‘operator’ for something which operates on a vector and returns another vector. I don’t see any particular value in such a fine distinction, and since the bra certainly does ‘operate’ on a vector (even though it produces a scalar as the result), the term ‘operator’ seems appropriate.

Although the bra has no physical meaning on its own, it can still simplify the notation for some other operators. One such example is the projection operator. If you can remember your linear algebra, you might recall that, given two vectors {\mathbf{a}} and {\mathbf{b}}, you can find the projection of {\mathbf{a}} onto {\mathbf{b}} (that is, the component of {\mathbf{a}} along {\mathbf{b}}) from the formula

\displaystyle  \mathbf{a}_{\parallel}=\frac{\mathbf{b}\cdot\mathbf{a}}{\left|\mathbf{b}\right|^{2}}\mathbf{b}

If {\mathbf{b}} is normalized (that is, it’s a unit vector), then this formula reduces to

\displaystyle  \mathbf{a}_{\parallel}=\left(\mathbf{b}\cdot\mathbf{a}\right)\mathbf{b}

This amounts to taking the inner (dot) product of {\mathbf{b}} with {\mathbf{a}} and multiplying by the vector {\mathbf{b}}. That is, we take a bra of {\mathbf{b}} and have it operate on a ket of {\mathbf{a}}, then multiply the result into the ket of {\mathbf{b}}.
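Here's a tiny numerical illustration of this formula, again as a numpy sketch (the particular vectors are made up for the example):

import numpy as np

a = np.array([3.0, 1.0, 2.0])
b = np.array([1.0, 0.0, 1.0])

b_hat = b / np.linalg.norm(b)        # normalize b first
a_par = np.dot(b_hat, a) * b_hat     # (b_hat . a) b_hat
print(a_par)                         # [2.5  0.  2.5]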

In bra-ket notation, we can define the projection operator as

\displaystyle  \hat{P}\equiv|\alpha\rangle\langle\alpha|

where {|\alpha\rangle} is a normalized vector. Applying this to any other vector {|\beta\rangle} gives the projection of {|\beta\rangle} along {|\alpha\rangle}:

\displaystyle  |\beta\rangle_{\parallel}=|\alpha\rangle\langle\alpha|\beta\rangle

We’ll have a look at a few properties of the projection operator.

First, the projection operator is idempotent, which means that {\hat{P}^{2}=\hat{P}}. The consequence of this is that it doesn’t matter how many times you apply a given projection operator; it will have the same result as applying it just once. This makes sense from a geometric viewpoint, since once you’ve projected a vector onto another vector, projecting the projection just gives you the same projection back again.

The proof of the idempotent property is quite simple: {\hat{P}^{2}=|\alpha\rangle\langle\alpha|\alpha\rangle\langle\alpha|=|\alpha\rangle\langle\alpha|}, since {\langle\alpha|\alpha\rangle=1}.
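The same check works numerically. In the sketch below we build {\hat{P}} from an arbitrary vector (normalized by hand, since the definition requires {\langle\alpha|\alpha\rangle=1}) and confirm that {\hat{P}^{2}=\hat{P}}:

import numpy as np

alpha = np.array([1j, -2, -1j])
alpha = alpha / np.linalg.norm(alpha)   # normalize so <alpha|alpha> = 1

P = np.outer(alpha, alpha.conj())       # P = |alpha><alpha|
print(np.allclose(P @ P, P))            # True: P^2 = P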

Since it’s an operator that returns a vector, we can find its eigenvalues. Using the idempotent property we get

\displaystyle \begin{aligned} \hat{P}^{2}|a\rangle &= \hat{P}|a\rangle \\ (\hat{P}^{2}-\hat{P})|a\rangle &= 0 \\ (p^{2}-p)|a\rangle &= 0 \end{aligned}

where {p} is an eigenvalue of {\hat{P}} with eigenvector {|a\rangle}. Since {|a\rangle\neq0}, this requires {p^{2}-p=0}, so the only two eigenvalues possible are 0 and 1.

For an eigenvalue of 1, the corresponding eigenvector must satisfy {\hat{P}|a\rangle=|\alpha\rangle\langle\alpha|a\rangle=|a\rangle}, which means {|a\rangle} is a multiple of {|\alpha\rangle}. That is, any vector {A|\alpha\rangle} parallel to {|\alpha\rangle}, for some constant {A}, is an eigenvector with eigenvalue 1.

For an eigenvalue of 0, we have {\hat{P}|a\rangle=|\alpha\rangle\langle\alpha|a\rangle=0}, which requires {\langle\alpha|a\rangle=0}, so the eigenvectors are precisely the vectors orthogonal to {|\alpha\rangle}.
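Both conclusions are easy to confirm numerically as well. Continuing the sketch above, the three eigenvalues of this rank-one projection in a three-dimensional space come out as one 1 and two 0s:

import numpy as np

alpha = np.array([1j, -2, -1j])
alpha = alpha / np.linalg.norm(alpha)
P = np.outer(alpha, alpha.conj())

# eigvalsh applies since P is Hermitian; eigenvalues return in ascending order
print(np.linalg.eigvalsh(P).round(10))   # [0. 0. 1.]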
