
Temperature

References: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problems 1.1 – 1.6

Although we’re all familiar with temperature, it’s quite difficult to give a precise definition of it. To get started, we can look at the notion of thermal equilibrium. Two objects are in thermal equilibrium if, when they are in contact, there is no net energy transferred from one to the other. This allows us to define temperature in a relative sense: if there is a net spontaneous transfer of energy from object {A} to object {B}, then the temperature {T_{A}} of {A} is higher than the temperature {T_{B}} of {B}.

To attach units to temperature we can pick some common substance such as water and consider its freezing and boiling points at standard atmospheric pressure and assign some numbers to the temperature of water at these two points. The Celsius (centigrade) scale assigns 0 to the freezing point and 100 to the boiling point. We can then insert a thermometer, such as a mercury thermometer, into the water and mark the points 0 and 100 on the tube. We can then divide up the portion of the tube between these two points into 100 equally spaced intervals, thus defining the temperature of any other object into which we place the thermometer (extending the marks below 0 and above 100 as required). This technique has obvious limitations; mercury freezes at {-38.8^{\circ}\mbox{C}} so the thermometer won’t be much use below that point. At the other extreme, the glass tube will become soft and deform at some point, and mercury boils at {356.7^{\circ}\mbox{C}}. We’ve also made the assumption that the expansion rate of mercury is constant over the range of the thermometer, and so on…

A thermometer can also be made using the expansion and contraction of a gas with temperature. Experimentally, gases obey the ideal gas law over a wide range of pressures and temperatures. The law says

\displaystyle  PV=nRT \ \ \ \ \ (1)

where {P} is the pressure, {V} the volume, {n} is a measure of the number of gas molecules in the container, {T} is the temperature (in kelvin) and {R} is the gas constant. The kelvin scale is obtained from the Celsius scale by adding 273.15 to the latter:

\displaystyle  K=C+273.15 \ \ \ \ \ (2)

Thus absolute zero is {0\mbox{ K}=-273.15^{\circ}\mbox{C}}.

Example 1 The Fahrenheit scale defines the freezing point of water to be {32^{\circ}\mbox{F}} and the boiling point to be {212^{\circ}\mbox{F}}. (The origin of these rather bizarre values, or more to the point, the reason why {0^{\circ}\mbox{F}} is where it is, seems rather obscure; see the Wikipedia article if you’re interested.) This means that there are 180 Fahrenheit degrees between the freezing and boiling points, so there are {\frac{180}{100}=\frac{9}{5}} Fahrenheit degrees per Celsius degree. Thus to convert from F to C we first subtract 32, then multiply by {\frac{5}{9}}:

\displaystyle  C=\frac{5}{9}\left(F-32\right) \ \ \ \ \ (3)

or, the other way round:

\displaystyle  F=\frac{9}{5}C+32 \ \ \ \ \ (4)

Thus absolute zero is {\frac{9}{5}\left(-273.15\right)+32=-459.67^{\circ}\mbox{F}}.

An approximate formula that works fairly well for temperatures in the 0 to 30 C range is to double the C value then add 30 to get the F value (or, conversely, subtract 30 from F then divide by 2 to get C). The temperature of {-40} is the same on both scales (so 40 below is darned cold no matter how you measure it!).
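These conversions are easy to check numerically. Here's a short Python sketch (the function names are my own) comparing the exact formulas 3 and 4 with the doubling rule of thumb:

```python
def c_to_f(c):
    """Exact conversion: F = (9/5)C + 32."""
    return 9 * c / 5 + 32

def f_to_c(f):
    """Exact conversion: C = (5/9)(F - 32)."""
    return 5 * (f - 32) / 9

def c_to_f_approx(c):
    """Rule of thumb: double the Celsius value and add 30."""
    return 2 * c + 30

# The two scales agree at -40:
print(c_to_f(-40))                        # -40.0
# The approximation is close in the 0 to 30 C range:
print(c_to_f(20), c_to_f_approx(20))      # 68.0 70
```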

Example 2 The Rankine temperature scale uses Fahrenheit-sized degrees, but its zero is at absolute zero, so that

\displaystyle   R \displaystyle  = \displaystyle  F+459.67\ \ \ \ \ (5)
\displaystyle  \displaystyle  = \displaystyle  \frac{9}{5}C+491.67\ \ \ \ \ (6)
\displaystyle  \displaystyle  = \displaystyle  \frac{9}{5}K \ \ \ \ \ (7)

Room temperature (21 C, say) is thus 529.47 R.

Example 3 Some examples of kelvin temperatures are

                                       Celsius   kelvin
Human body temp                             37   310.15
Boiling point of water                     100   373.15
A very cold day (in Dundee, anyway)        -10   263.15
Boiling point of nitrogen                 -196    77.15
Melting point of lead                      327   600.15
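Each kelvin entry is just the Celsius value plus 273.15, which a few lines of Python confirm:

```python
# Verify the kelvin column of the table: K = C + 273.15 for each entry.
table = [
    ("Human body temp", 37, 310.15),
    ("Boiling point of water", 100, 373.15),
    ("A very cold day", -10, 263.15),
    ("Boiling point of nitrogen", -196, 77.15),
    ("Melting point of lead", 327, 600.15),
]
for name, c, k in table:
    print(f"{name}: {c} C = {c + 273.15} K (table says {k})")
```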

Example 4 It makes sense to say one object is “twice as hot” as another if we’re using the kelvin (or Rankine) scale, since the absolute zero of temperature is zero on these scales. Saying that 20 C is twice as hot as 10 C is wrong, of course. That’s like measuring a person’s height by defining the zero height to be at an ‘absolute’ height of 150 cm. Using that definition, we wouldn’t say that someone with an absolute height of 160 cm is twice as tall as someone with a height of 155 cm.

Example 5 The relaxation time is, roughly speaking, the time required for two objects initially at different temperatures to come to thermal equilibrium when placed in contact. Mathematically, the temperature difference declines as {\Delta T=\Delta T_{0}e^{-At}} for some constant {A}, so it’s more precise to define relaxation time as the time required for the temperature difference to decrease to a specified fraction (say {1/e}) of its initial value {\Delta T_{0}}. When you take your temperature by putting a fever thermometer under your tongue, you typically have to wait around 2 minutes before taking a reading, so that’s the relaxation time between objects {A} (your mouth) and {B} (the thermometer).
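Here's a small Python sketch of this definition; the decay constant {A} is invented purely for illustration. With {\Delta T=\Delta T_{0}e^{-At}}, the {1/e} time should come out as {1/A}:

```python
import numpy as np

A = 0.5                     # hypothetical decay constant, per minute
dT0 = 10.0                  # initial temperature difference, degrees

t = np.linspace(0, 10, 100001)
dT = dT0 * np.exp(-A * t)

# First time at which dT falls to dT0/e; should be close to 1/A = 2 minutes
tau = t[np.argmax(dT <= dT0 / np.e)]
print(tau)
```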

Example 6 The human sense of touch is notoriously bad at being able to judge temperature. A common experiment involves placing one of your hands in a bowl of cold water and the other in a bowl of hot (not too hot!) water. Leave your hands there for a minute or two and then touch an object at room temperature with each hand in turn. The cold hand will sense the object as warm, while the hot hand will sense it as cold.

Occupation number representation; delta function as a series

References: Tom Lancaster and Stephen J. Blundell, Quantum Field Theory for the Gifted Amateur, (Oxford University Press, 2014) – Problem 3.1.

We can write the hamiltonian for the harmonic oscillator in terms of the creation and annihilation operators as

\displaystyle  \hat{H}=\hbar\omega\left(a^{\dagger}a+\frac{1}{2}\right) \ \ \ \ \ (1)

Normalization requires

\displaystyle   a\left|n\right\rangle \displaystyle  = \displaystyle  \sqrt{n}\left|n-1\right\rangle \ \ \ \ \ (2)
\displaystyle  a^{\dagger}\left|n\right\rangle \displaystyle  = \displaystyle  \sqrt{n+1}\left|n+1\right\rangle \ \ \ \ \ (3)

so the combined operator {a^{\dagger}a} acts as a number operator, giving the number of quanta in a state:

\displaystyle   a^{\dagger}a\left|n\right\rangle \displaystyle  = \displaystyle  a^{\dagger}\sqrt{n}\left|n-1\right\rangle \ \ \ \ \ (4)
\displaystyle  \displaystyle  = \displaystyle  \sqrt{n}a^{\dagger}\left|n-1\right\rangle \ \ \ \ \ (5)
\displaystyle  \displaystyle  = \displaystyle  n\left|n\right\rangle \ \ \ \ \ (6)
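We can verify the number-operator property directly with truncated matrices (a finite-dimensional sketch; the truncation size is arbitrary):

```python
import numpy as np

# Truncated matrix for the annihilation operator: a|n> = sqrt(n)|n-1>, so in
# the basis |0>, ..., |N-1> the matrix has sqrt(1)..sqrt(N-1) above the diagonal.
N = 6
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.T                       # real matrix, so dagger = transpose

num = adag @ a                   # the number operator a^dagger a

ket3 = np.zeros(N); ket3[3] = 1.0   # the state |3>
print(num @ ket3)                    # 3 * |3>  ->  [0. 0. 0. 3. 0. 0.]
```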

We can generalize this to a collection of independent oscillators where oscillator {k} has frequency {\omega_{k}}. In that case

\displaystyle  \hat{H}=\hbar\sum_{k}\omega_{k}\left(a_{k}^{\dagger}a_{k}+\frac{1}{2}\right) \ \ \ \ \ (7)

where {a_{k}^{\dagger}} and {a_{k}} are the creation and annihilation operators for one quantum in oscillator {k}. For the harmonic oscillator, the energy levels are all equally spaced, with a spacing of {\hbar\omega_{k}} so if we redefine the zero point of energy to be {\frac{1}{2}\hbar\omega_{k}} for oscillator {k}, then the hamiltonian above can be rewritten as

\displaystyle  \hat{H}=\sum_{k}n_{k}\hbar\omega_{k} \ \ \ \ \ (8)

where {n_{k}} is the number of quanta in oscillator {k}. An eigenstate of this hamiltonian is a state containing {N} oscillators with oscillator {k} containing {n_{k}} quanta, which we can write as {\left|n_{1}n_{2}...n_{N}\right\rangle }. This is called the occupation number representation since rather than writing out a complex wave function describing all {N} oscillators, we just list the number of quanta contained within each oscillator.

The application of this to quantum field theory is that we can interpret each quantum in oscillator {k} as a particle with a momentum {p_{k}}. We’re not saying that a particle is an oscillator; rather we’re noting that we can use the same notation to refer to both particles and oscillators. So if we have a number of momentum states {p_{k}} available in our system, then we can define creation and annihilation operators {a_{p_{k}}^{\dagger}} and {a_{p_{k}}} for that momentum state and write the hamiltonian as

\displaystyle  \hat{H}=\sum_{k}E_{p_{k}}a_{p_{k}}^{\dagger}a_{p_{k}} \ \ \ \ \ (9)

In order for creation operators to work properly when creating elementary particles, we need to recall that there are two fundamental types of particles: fermions and bosons. The wave function for two bosons is, in position space:

\displaystyle  \psi\left(\mathbf{r}_{a},\mathbf{r}_{b}\right)=A\left[\psi_{1}\left(\mathbf{r}_{a}\right)\psi_{2}\left(\mathbf{r}_{b}\right)+\psi_{2}\left(\mathbf{r}_{a}\right)\psi_{1}\left(\mathbf{r}_{b}\right)\right] \ \ \ \ \ (10)

If we interchange the two particles by swapping {\mathbf{r}_{a}} and {\mathbf{r}_{b}}, the compound wave function {\psi\left(\mathbf{r}_{a},\mathbf{r}_{b}\right)} doesn’t change, so that {\psi\left(\mathbf{r}_{a},\mathbf{r}_{b}\right)=\psi\left(\mathbf{r}_{b},\mathbf{r}_{a}\right)}.

If we have two fermions, on the other hand, the wave function is

\displaystyle  \psi\left(\mathbf{r}_{a},\mathbf{r}_{b}\right)=A\left[\psi_{1}\left(\mathbf{r}_{a}\right)\psi_{2}\left(\mathbf{r}_{b}\right)-\psi_{2}\left(\mathbf{r}_{a}\right)\psi_{1}\left(\mathbf{r}_{b}\right)\right] \ \ \ \ \ (11)

and now if we swap the particles we get {\psi\left(\mathbf{r}_{a},\mathbf{r}_{b}\right)=-\psi\left(\mathbf{r}_{b},\mathbf{r}_{a}\right)}.

If we use two creation operators operating on the vacuum state {\left|0\right\rangle } to create a state containing two particles, the resulting state must behave properly under the exchange of the two particles. Another way of putting this is that if we swap the order in which the particles are created we must get exactly the same state if the particles are bosons, but the negative of the original state if the particles are fermions. That is, for bosons

\displaystyle  a_{p_{1}}^{\dagger}a_{p_{2}}^{\dagger}=a_{p_{2}}^{\dagger}a_{p_{1}}^{\dagger} \ \ \ \ \ (12)

or in terms of commutators

\displaystyle  \left[a_{p_{1}}^{\dagger},a_{p_{2}}^{\dagger}\right]=0 \ \ \ \ \ (13)

For fermions, we’ll use the symbols {c^{\dagger}} and {c} for creation and annihilation operators, and in this case we must have

\displaystyle  c_{p_{1}}^{\dagger}c_{p_{2}}^{\dagger}=-c_{p_{2}}^{\dagger}c_{p_{1}}^{\dagger} \ \ \ \ \ (14)

For fermions we define an anticommutator as

\displaystyle  \left\{ c_{p_{1}}^{\dagger},c_{p_{2}}^{\dagger}\right\} \equiv c_{p_{1}}^{\dagger}c_{p_{2}}^{\dagger}+c_{p_{2}}^{\dagger}c_{p_{1}}^{\dagger} \ \ \ \ \ (15)

so we have

\displaystyle  \left\{ c_{p_{1}}^{\dagger},c_{p_{2}}^{\dagger}\right\} =0 \ \ \ \ \ (16)

For the harmonic oscillator, the creation and annihilation operators satisfy the commutation relation

\displaystyle  \left[a_{p_{1}},a_{p_{2}}^{\dagger}\right]=\delta_{p_{1}p_{2}} \ \ \ \ \ (17)

That is, the annihilation operator commutes with the creation operator if they refer to different oscillators; otherwise the commutator is 1. To complete the analogy between particles and oscillators, we just define the commutation relations between creation and annihilation operators for particles as

\displaystyle   \left[a_{p_{1}},a_{p_{2}}^{\dagger}\right] \displaystyle  = \displaystyle  \delta_{p_{1}p_{2}}\ \ \ \ \ (18)
\displaystyle  \left\{ c_{p_{1}},c_{p_{2}}^{\dagger}\right\} \displaystyle  = \displaystyle  \delta_{p_{1}p_{2}} \ \ \ \ \ (19)
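As a concrete check of the fermionic relations, we can represent two fermionic modes by the standard Jordan-Wigner matrices (this construction is an illustration, not something derived in the text): {c_{1}=\sigma^{-}\otimes I}, {c_{2}=\sigma_{z}\otimes\sigma^{-}}.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
sm = np.array([[0.0, 0.0], [1.0, 0.0]])    # sigma-minus

# Jordan-Wigner operators for two fermionic modes:
c1 = np.kron(sm, I2)
c2 = np.kron(Z, sm)

def anti(A, B):
    """Anticommutator {A, B} = AB + BA."""
    return A @ B + B @ A

print(np.allclose(anti(c1.T, c2.T), 0))          # True: {c1+, c2+} = 0
print(np.allclose(anti(c1, c2.T), 0))            # True: {c1, c2+} = 0
print(np.allclose(anti(c1, c1.T), np.eye(4)))    # True: {c1, c1+} = 1
```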

Example The commutation relations can be inserted into a formula which gives a new form of the Dirac delta function. For two different momentum states {\mathbf{p}} and {\mathbf{q}} we have, for a pair of bosons

\displaystyle  \left[a_{p},a_{q}^{\dagger}\right]=\delta_{pq} \ \ \ \ \ (20)

Suppose that the system is enclosed in a cube of side length {L}. Then we can construct the sum

\displaystyle  \frac{1}{\mathcal{V}}\sum_{p,q}e^{i\left(\mathbf{p}\cdot\mathbf{x}-\mathbf{q}\cdot\mathbf{y}\right)}\left[a_{p},a_{q}^{\dagger}\right]=\frac{1}{\mathcal{V}}\sum_{p}e^{i\mathbf{p}\cdot\left(\mathbf{x}-\mathbf{y}\right)} \ \ \ \ \ (21)

What can we make of the sum on the RHS? To see what it is, suppose we have some function {f\left(x\right)} defined for {-\pi\le x\le\pi}. We can expand it in a Fourier series as follows:

\displaystyle  f\left(x\right)=\sum_{n=-\infty}^{\infty}c_{n}e^{inx} \ \ \ \ \ (22)

where the coefficients are

\displaystyle  c_{n}=\frac{1}{2\pi}\int_{-\pi}^{\pi}f\left(x\right)e^{-inx}dx \ \ \ \ \ (23)

We can write the Fourier series for the function at a particular point {x=a} as

\displaystyle   f\left(a\right) \displaystyle  = \displaystyle  \frac{1}{2\pi}\sum_{n}e^{ina}\int_{-\pi}^{\pi}f\left(x\right)e^{-inx}dx\ \ \ \ \ (24)
\displaystyle  \displaystyle  = \displaystyle  \int_{-\pi}^{\pi}f\left(x\right)\left[\frac{1}{2\pi}\sum_{n}e^{in\left(a-x\right)}\right]dx \ \ \ \ \ (25)

The term in brackets in the last line behaves exactly like {\delta\left(x-a\right)} so we can take it as another definition of the Dirac delta function

\displaystyle  \delta\left(x-a\right)=\frac{1}{2\pi}\sum_{n}e^{in\left(a-x\right)}=\frac{1}{2\pi}\sum_{n}e^{in\left(x-a\right)} \ \ \ \ \ (26)

where we can change the exponent in the last term because the sum over {n} extends from {-\infty} to {\infty} so we can replace {n} by {-n} and get the same sum.
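We can test this series representation of the delta function numerically: integrating a truncated version of the sum against a smooth periodic test function should return (approximately) the value of the function at {x=a}, and the approximation should improve as the cutoff grows. A Python sketch, with an arbitrary test function:

```python
import numpy as np

def f(x):
    return np.exp(np.cos(x))       # arbitrary smooth 2*pi-periodic test function

a = 0.7
x = np.linspace(-np.pi, np.pi, 20001)
dx = x[1] - x[0]

# Truncated delta series: (1/2pi) sum_{|n|<=nmax} exp(i n (x - a))
for nmax in (5, 50, 500):
    kernel = sum(np.exp(1j * n * (x - a)) for n in range(-nmax, nmax + 1)) / (2 * np.pi)
    approx = np.sum(f(x) * kernel).real * dx
    print(nmax, approx, f(a))
```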

Now if the function {f} is periodic with period {L} instead of {2\pi}, we can rescale the variable by defining {\xi\equiv Lx/2\pi} (and {\xi_{a}\equiv La/2\pi}) to get

\displaystyle   f\left(\xi_{a}\right) \displaystyle  = \displaystyle  \int_{0}^{L}f\left(\xi\right)\left[\frac{1}{2\pi}\frac{2\pi}{L}\sum_{n}e^{i2\pi n\left(\xi_{a}-\xi\right)/L}\right]d\xi\ \ \ \ \ (27)
\displaystyle  \displaystyle  = \displaystyle  \int_{0}^{L}f\left(\xi\right)\left[\frac{1}{L}\sum_{p}e^{ip\left(\xi-\xi_{a}\right)}\right]d\xi \ \ \ \ \ (28)

where

\displaystyle  p\equiv\frac{2\pi n}{L} \ \ \ \ \ (29)

Obviously, the same argument works for the {y} and {z} directions, so in 3-d

\displaystyle   f\left(\mathbf{a}\right) \displaystyle  = \displaystyle  \int_{\mathcal{V}}f\left(\mathbf{r}\right)\left[\frac{1}{L^{3}}\sum_{p}e^{i\mathbf{p}\cdot\left(\mathbf{r}-\mathbf{a}\right)}\right]d^{3}\mathbf{r}\ \ \ \ \ (30)
\displaystyle  \displaystyle  = \displaystyle  \int_{\mathcal{V}}f\left(\mathbf{r}\right)\left[\frac{1}{\mathcal{V}}\sum_{p}e^{i\mathbf{p}\cdot\left(\mathbf{r}-\mathbf{a}\right)}\right]d^{3}\mathbf{r} \ \ \ \ \ (31)

so the 3-d delta function is

\displaystyle  \delta^{\left(3\right)}\left(\mathbf{x}-\mathbf{y}\right)=\frac{1}{\mathcal{V}}\sum_{p}e^{i\mathbf{p}\cdot\left(\mathbf{x}-\mathbf{y}\right)} \ \ \ \ \ (32)

From 21 we get

\displaystyle  \frac{1}{\mathcal{V}}\sum_{p,q}e^{i\left(\mathbf{p}\cdot\mathbf{x}-\mathbf{q}\cdot\mathbf{y}\right)}\left[a_{p},a_{q}^{\dagger}\right]=\delta^{\left(3\right)}\left(\mathbf{x}-\mathbf{y}\right) \ \ \ \ \ (33)

Harmonic oscillator ground state from annihilation operator

References: Tom Lancaster and Stephen J. Blundell, Quantum Field Theory for the Gifted Amateur, (Oxford University Press, 2014) – Problem 2.4.

We can use the annihilation operator {\hat{a}} of the harmonic oscillator to recover the position space form of the ground state wave function. The operator is

\displaystyle   \hat{a} \displaystyle  = \displaystyle  \frac{1}{\sqrt{2\hbar m\omega}}\left[i\hat{p}+m\omega\hat{x}\right] \ \ \ \ \ (1)

Applying {\hat{a}} to the ground state {\left|0\right\rangle } gives zero (that is, the annihilation operator destroys the ground state, leaving the zero vector rather than any physical state), so

\displaystyle  \left[i\hat{p}+m\omega\hat{x}\right]\left|0\right\rangle =0 \ \ \ \ \ (2)

The eigenfunction of position is found from

\displaystyle  \hat{x}\left|x_{0}\right\rangle =x_{0}\left|x_{0}\right\rangle \ \ \ \ \ (3)

Since the operator {\hat{x}} multiplies any function by the position {x} and we want the eigenfunction {\left|x_{0}\right\rangle } to represent a particular position {x_{0}}, {\left|x_{0}\right\rangle } must pick out {x_{0}} from all possible values of {x}, that is, it must be zero everywhere except {x=x_{0}}. This condition is satisfied if we take

\displaystyle  \left|x_{0}\right\rangle =\delta\left(x-x_{0}\right) \ \ \ \ \ (4)

We then get

\displaystyle   \left\langle x\left|\hat{p}\right|\psi\right\rangle \displaystyle  = \displaystyle  \int\delta\left(x'-x\right)\hat{p}\psi\left(x'\right)\; dx'\ \ \ \ \ (5)
\displaystyle  \displaystyle  = \displaystyle  -i\hbar\int\delta\left(x'-x\right)\frac{d}{dx'}\psi\left(x'\right)\; dx'\ \ \ \ \ (6)
\displaystyle  \displaystyle  = \displaystyle  -i\hbar\frac{d}{dx}\int\delta\left(x'-x\right)\psi\left(x'\right)\; dx'\ \ \ \ \ (7)
\displaystyle  \displaystyle  = \displaystyle  -i\hbar\frac{d}{dx}\left\langle x\left|\psi\right.\right\rangle \ \ \ \ \ (8)

Also

\displaystyle   \left\langle x\left|\hat{x}\right|\psi\right\rangle \displaystyle  = \displaystyle  \int\delta\left(x'-x\right)\hat{x}\psi\left(x'\right)\; dx'\ \ \ \ \ (9)
\displaystyle  \displaystyle  = \displaystyle  \int\delta\left(x'-x\right)x'\psi\left(x'\right)\; dx'\ \ \ \ \ (10)
\displaystyle  \displaystyle  = \displaystyle  x\int\delta\left(x'-x\right)\psi\left(x'\right)\; dx'\ \ \ \ \ (11)
\displaystyle  \displaystyle  = \displaystyle  x\left\langle x\left|\psi\right.\right\rangle \ \ \ \ \ (12)

Therefore, from 2 we get

\displaystyle   \left\langle x\left|\left[i\hat{p}+m\omega\hat{x}\right]\right|0\right\rangle \displaystyle  = \displaystyle  \hbar\frac{d}{dx}\left\langle x\left|0\right.\right\rangle +m\omega x\left\langle x\left|0\right.\right\rangle =0\ \ \ \ \ (13)
\displaystyle  \hbar\frac{d}{dx}\left\langle x\left|0\right.\right\rangle \displaystyle  = \displaystyle  -m\omega x\left\langle x\left|0\right.\right\rangle \ \ \ \ \ (14)

This is a differential equation for {\left\langle x\left|0\right.\right\rangle } which has the solution

\displaystyle  \left\langle x\left|0\right.\right\rangle =Ae^{-m\omega x^{2}/2\hbar} \ \ \ \ \ (15)

where {A} is found from normalization:

\displaystyle   \int\left|\left\langle x\left|0\right.\right\rangle \right|^{2}dx \displaystyle  = \displaystyle  A^{2}\int_{-\infty}^{\infty}e^{-m\omega x^{2}/\hbar}dx=1\ \ \ \ \ (16)
\displaystyle  A \displaystyle  = \displaystyle  \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} \ \ \ \ \ (17)

This is the same function that we got earlier.
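As a quick numerical check (in units with {\hbar=m=\omega=1}), this wave function does satisfy 14 and has unit norm:

```python
import numpy as np

# psi(x) = A exp(-x^2/2) with A = (1/pi)^(1/4) in units hbar = m = omega = 1
A = (1 / np.pi) ** 0.25
x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]
psi = A * np.exp(-x ** 2 / 2)

# Normalization integral, eq. 16:
norm = np.sum(psi ** 2) * dx

# Residual of the differential equation psi' = -x psi, eq. 14:
residual = np.max(np.abs(np.gradient(psi, dx) + x * psi))

print(norm, residual)    # norm ~ 1, residual small
```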

Coupled oscillators in terms of creation and annihilation operators; phonons

References: Tom Lancaster and Stephen J. Blundell, Quantum Field Theory for the Gifted Amateur, (Oxford University Press, 2014) – Problem 2.3.

Suppose we have a one-dimensional chain of {N} equal masses {m} connected by identical springs of rest length {a} and spring constant {K}, so that mass {j} has a rest position of {ja}, {j=0...N-1}. The masses can move only in the {x} direction, and moving one mass extends the spring on one side while compressing the spring on the other side. The total energy of the system is the sum of the kinetic energies of the masses and the potential energies of the springs. The kinetic energy of mass {j} is just {\frac{1}{2}mv_{j}^{2}} or, in terms of the momentum operator

\displaystyle  T_{j}=\frac{\hat{p}_{j}^{2}}{2m} \ \ \ \ \ (1)

The potential energy of the spring connecting masses {j} and {j+1} is {\frac{1}{2}K\left(\Delta x\right)^{2}} where {\Delta x} is the amount the spring is stretched (or compressed). {\Delta x} is the difference between the amounts that the two masses on either end have moved from their equilibrium positions, so if we call {x_{j}} the amount by which mass {j} has moved from position {ja}, then

\displaystyle   \Delta x_{j,j+1} \displaystyle  = \displaystyle  x_{j+1}-x_{j}\ \ \ \ \ (2)
\displaystyle  V_{j} \displaystyle  = \displaystyle  \frac{1}{2}K\left(\hat{x}_{j+1}-\hat{x}_{j}\right)^{2} \ \ \ \ \ (3)

where we’ve added hats to show that {\hat{x}_{j}} is an operator. Therefore the hamiltonian of the system is

\displaystyle  \hat{H}=\sum_{j}\frac{\hat{p}_{j}^{2}}{2m}+\sum_{j}\frac{1}{2}K\left(\hat{x}_{j+1}-\hat{x}_{j}\right)^{2} \ \ \ \ \ (4)

As it stands, the hamiltonian contains terms such as {K\hat{x}_{j}\hat{x}_{j+1}} in which the spatial terms of two masses occur in a product, that is, it contains coupled terms. We can convert the hamiltonian to an uncoupled system in which it consists of a sum of terms where each term refers to only a single index. This is done by using discrete Fourier transforms. A discrete Fourier transform assumes that the raw data (the values of {x_{j}} and {p_{j}}) are samples at equally spaced intervals and that the behaviour outside the observed range (that is, for {j<0} and {j\ge N}) is periodic, so that it repeats the observed behaviour with a period of {Na}. This is equivalent to imposing periodic boundary conditions so that

\displaystyle   x_{j+N} \displaystyle  = \displaystyle  x_{j}\ \ \ \ \ (5)
\displaystyle  p_{j+N} \displaystyle  = \displaystyle  p_{j} \ \ \ \ \ (6)

The discrete Fourier transform is then

\displaystyle   x_{j} \displaystyle  = \displaystyle  \frac{1}{\sqrt{N}}\sum_{k}\tilde{x}_{k}e^{ikja}\ \ \ \ \ (7)
\displaystyle  p_{j} \displaystyle  = \displaystyle  \frac{1}{\sqrt{N}}\sum_{k}\tilde{p}_{k}e^{ikja} \ \ \ \ \ (8)

[This transform differs from the one in the earlier reference in that it has a factor of {1/\sqrt{N}} in front. All that matters is that the product of this factor with the corresponding factor in front of the inverse transform is {1/N}.] The index {k} is the frequency and because of the periodic boundary conditions, we must have

\displaystyle   e^{ikja} \displaystyle  = \displaystyle  e^{ik\left(j+N\right)a}\ \ \ \ \ (9)
\displaystyle  e^{ikNa} \displaystyle  = \displaystyle  1\ \ \ \ \ (10)
\displaystyle  k \displaystyle  = \displaystyle  \frac{2\pi m}{Na} \ \ \ \ \ (11)

for an integer {m} (not to be confused with the mass), chosen from a range such that {ka} varies over {2\pi}. Any such range of {m} would do, but it turns out to be most convenient to choose {-\frac{N}{2}<m\le\frac{N}{2}}, which gives {-\pi<ka\le\pi}.

We can now work out the hamiltonian 4 using the Fourier transformed variables {\tilde{x}_{k}} and {\tilde{p}_{k}}. First, the kinetic energy term:

\displaystyle   \sum_{j}p_{j}^{2} \displaystyle  = \displaystyle  \sum_{j}\left(\frac{1}{\sqrt{N}}\sum_{k}\tilde{p}_{k}e^{ikja}\right)\left(\frac{1}{\sqrt{N}}\sum_{k'}\tilde{p}_{k'}e^{ik'ja}\right)\ \ \ \ \ (12)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{N}\sum_{j}\sum_{k}\sum_{k'}\tilde{p}_{k}\tilde{p}_{k'}e^{i(k+k')ja} \ \ \ \ \ (13)

We can now make use of the sum (derived from a geometric series):

\displaystyle  \sum_{j}e^{i2\pi mj/N}=N\delta_{m,0} \ \ \ \ \ (14)

This means that

\displaystyle  \sum_{j}e^{i(k+k')ja}=N\delta_{k,-k'} \ \ \ \ \ (15)

so

\displaystyle  \sum_{j}p_{j}^{2}=\sum_{k}\tilde{p}_{k}\tilde{p}_{-k} \ \ \ \ \ (16)

For the potential energy term we have

\displaystyle   \sum_{j}\left(\hat{x}_{j+1}-\hat{x}_{j}\right)^{2} \displaystyle  = \displaystyle  \frac{1}{N}\sum_{j}\left(\sum_{k}\tilde{x}_{k}e^{ik\left(j+1\right)a}-\sum_{k}\tilde{x}_{k}e^{ikja}\right)\left(\sum_{k'}\tilde{x}_{k'}e^{ik'\left(j+1\right)a}-\sum_{k'}\tilde{x}_{k'}e^{ik'ja}\right)\ \ \ \ \ (17)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{N}\sum_{j}\left(\sum_{k}\tilde{x}_{k}e^{ikja}\left(e^{ika}-1\right)\right)\left(\sum_{k'}\tilde{x}_{k'}e^{ik'ja}\left(e^{ik'a}-1\right)\right)\ \ \ \ \ (18)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{N}\sum_{j}\sum_{k}\sum_{k'}\tilde{x}_{k}\tilde{x}_{k'}e^{i\left(k+k'\right)ja}\left(e^{ika}-1\right)\left(e^{ik'a}-1\right)\ \ \ \ \ (19)
\displaystyle  \displaystyle  = \displaystyle  \sum_{k}\tilde{x}_{k}\tilde{x}_{-k}\left(e^{ika}-1\right)\left(e^{-ika}-1\right)\ \ \ \ \ (20)
\displaystyle  \displaystyle  = \displaystyle  \sum_{k}\tilde{x}_{k}\tilde{x}_{-k}\left(2-\left(e^{ika}+e^{-ika}\right)\right)\ \ \ \ \ (21)
\displaystyle  \displaystyle  = \displaystyle  2\sum_{k}\tilde{x}_{k}\tilde{x}_{-k}\left(1-\cos\left(ka\right)\right)\ \ \ \ \ (22)
\displaystyle  \displaystyle  = \displaystyle  4\sum_{k}\tilde{x}_{k}\tilde{x}_{-k}\sin^{2}\frac{ka}{2} \ \ \ \ \ (23)
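The identity 16 is easy to verify numerically for arbitrary real values {p_{j}}, building the transform 25 by hand (note that numpy's built-in FFT uses a different normalization convention, so the sum is written out explicitly):

```python
import numpy as np

N, a = 8, 1.0
rng = np.random.default_rng(1)
p = rng.normal(size=N)          # arbitrary real values p_j

jj = np.arange(N)
ms = np.arange(N)               # k = 2*pi*m/(N*a), one full period of k values

# ptilde_k = (1/sqrt(N)) * sum_j p_j exp(-i k j a)
pt = np.array([np.sum(p * np.exp(-1j * 2 * np.pi * m * jj / N)) for m in ms]) / np.sqrt(N)

lhs = np.sum(p ** 2)
# the index of -k is (-m) mod N, since k is defined modulo 2*pi/a
rhs = np.sum(pt * pt[(-ms) % N])
print(lhs, rhs.real)            # the two agree; rhs has no imaginary part
```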

It might not seem that we’ve made much progress, since now both the kinetic and potential energy terms appear to be coupled, involving products of {+k} and {-k} modes. However, the inverses of 7 and 8 are

\displaystyle   \tilde{x}_{k} \displaystyle  = \displaystyle  \frac{1}{\sqrt{N}}\sum_{j}x_{j}e^{-ikja}\ \ \ \ \ (24)
\displaystyle  \tilde{p}_{k} \displaystyle  = \displaystyle  \frac{1}{\sqrt{N}}\sum_{j}p_{j}e^{-ikja} \ \ \ \ \ (25)

Since {x_{j}} and {p_{j}} are observables, they must be hermitian operators, with {x_{j}^{\dagger}=x_{j}} and {p_{j}^{\dagger}=p_{j}}, so

\displaystyle   \tilde{x}_{k}^{\dagger} \displaystyle  = \displaystyle  \frac{1}{\sqrt{N}}\sum_{j}x_{j}e^{ikja}=\tilde{x}_{-k}\ \ \ \ \ (26)
\displaystyle  \tilde{p}_{k}^{\dagger} \displaystyle  = \displaystyle  \frac{1}{\sqrt{N}}\sum_{j}p_{j}e^{ikja}=\tilde{p}_{-k} \ \ \ \ \ (27)

Therefore

\displaystyle   \sum_{j}p_{j}^{2} \displaystyle  = \displaystyle  \sum_{k}\tilde{p}_{k}\tilde{p}_{-k}\ \ \ \ \ (28)
\displaystyle  \displaystyle  = \displaystyle  \sum_{k}\tilde{p}_{k}\tilde{p}_{k}^{\dagger}\ \ \ \ \ (29)
\displaystyle  \sum_{j}\left(\hat{x}_{j+1}-\hat{x}_{j}\right)^{2} \displaystyle  = \displaystyle  4\sum_{k}\tilde{x}_{k}\tilde{x}_{-k}\sin^{2}\frac{ka}{2}\ \ \ \ \ (30)
\displaystyle  \displaystyle  = \displaystyle  4\sum_{k}\tilde{x}_{k}\tilde{x}_{k}^{\dagger}\sin^{2}\frac{ka}{2}\ \ \ \ \ (31)
\displaystyle  \hat{H} \displaystyle  = \displaystyle  \frac{1}{2m}\sum_{k}\tilde{p}_{k}\tilde{p}_{k}^{\dagger}+2K\sum_{k}\tilde{x}_{k}\tilde{x}_{k}^{\dagger}\sin^{2}\frac{ka}{2}\ \ \ \ \ (32)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{2m}\sum_{k}\tilde{p}_{k}\tilde{p}_{k}^{\dagger}+\frac{1}{2}m\sum_{k}\omega_{k}^{2}\tilde{x}_{k}\tilde{x}_{k}^{\dagger}\ \ \ \ \ (33)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{2m}\sum_{k}\tilde{p}_{k}\tilde{p}_{-k}+\frac{1}{2}m\sum_{k}\omega_{k}^{2}\tilde{x}_{k}\tilde{x}_{-k} \ \ \ \ \ (34)

where

\displaystyle  \omega_{k}^{2}\equiv4\frac{K}{m}\sin^{2}\frac{ka}{2} \ \ \ \ \ (35)

That is, we’ve managed to write the hamiltonian as the sum over uncoupled oscillators, where oscillator {k} has frequency {\omega_{k}}. The catch is that the operators {\tilde{p}_{k}} and {\tilde{x}_{k}} are in frequency space, not ‘normal’ space, so they are the ‘momentum’ and ‘position’ operators for the modes of oscillation of the coupled set of oscillators.
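As a check on the dispersion relation 35, we can diagonalize the coupling matrix of the potential energy directly; its (classical) normal-mode frequencies should match {\omega_{k}}. A Python sketch:

```python
import numpy as np

N, K, m, a = 8, 1.0, 1.0, 1.0

# Circulant coupling matrix of (K/2) sum_j (x_{j+1} - x_j)^2 with periodic
# boundaries: 2 on the diagonal, -1 on the (wrapped) off-diagonals.
D = 2 * np.eye(N) - np.roll(np.eye(N), 1, axis=0) - np.roll(np.eye(N), -1, axis=0)
evals = np.clip((K / m) * np.linalg.eigvalsh(D), 0, None)   # clip tiny negative round-off
omega_matrix = np.sort(np.sqrt(evals))

# Dispersion relation: omega_k = 2 sqrt(K/m) |sin(ka/2)| for k = 2*pi*n/(N*a)
k = 2 * np.pi * np.arange(N) / (N * a)
omega_k = np.sort(2 * np.sqrt(K / m) * np.abs(np.sin(k * a / 2)))

print(np.allclose(omega_matrix, omega_k))   # True
```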

The formal equivalence of the hamiltonian in mode space with the hamiltonian for a single oscillator in normal space means we can define creation and annihilation operators in the same way. That is (reverting to ‘hat’ notation to indicate operators):

\displaystyle   \hat{a}_{k}^{\dagger} \displaystyle  = \displaystyle  \frac{1}{\sqrt{2\hbar m\omega_{k}}}\left[-i\hat{p}_{k}^{\dagger}+m\omega_{k}\hat{x}_{k}^{\dagger}\right]\ \ \ \ \ (36)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{\sqrt{2\hbar m\omega_{k}}}\left[-i\hat{p}_{-k}+m\omega_{k}\hat{x}_{-k}\right]\ \ \ \ \ (37)
\displaystyle  \hat{a}_{k} \displaystyle  = \displaystyle  \frac{1}{\sqrt{2\hbar m\omega_{k}}}\left[i\hat{p}_{k}+m\omega_{k}\hat{x}_{k}\right] \ \ \ \ \ (38)

Inverting these equations, we get (note that {\omega_{k}} is always the positive square root of 35):

\displaystyle   \hat{a}_{-k}^{\dagger} \displaystyle  = \displaystyle  \frac{1}{\sqrt{2\hbar m\omega_{k}}}\left[-i\hat{p}_{k}+m\omega_{k}\hat{x}_{k}\right]\ \ \ \ \ (39)
\displaystyle  \hat{a}_{-k}^{\dagger}+\hat{a}_{k} \displaystyle  = \displaystyle  \frac{2}{\sqrt{2\hbar m\omega_{k}}}m\omega_{k}\hat{x}_{k}\ \ \ \ \ (40)
\displaystyle  \hat{x}_{k} \displaystyle  = \displaystyle  \sqrt{\frac{\hbar}{2m\omega_{k}}}\left(\hat{a}_{-k}^{\dagger}+\hat{a}_{k}\right)\ \ \ \ \ (41)
\displaystyle  \hat{p}_{k} \displaystyle  = \displaystyle  i\sqrt{\frac{\hbar m\omega_{k}}{2}}\left(\hat{a}_{-k}^{\dagger}-\hat{a}_{k}\right) \ \ \ \ \ (42)

Example We can express the original space coordinate {x_{j}} in terms of the creation and annihilation operators. From 7

\displaystyle   x_{j} \displaystyle  = \displaystyle  \frac{1}{\sqrt{N}}\sum_{k}\hat{x}_{k}e^{ikja}\ \ \ \ \ (43)
\displaystyle  \displaystyle  = \displaystyle  \sqrt{\frac{\hbar}{2mN}}\sum_{k}\frac{1}{\sqrt{\omega_{k}}}\left(\hat{a}_{-k}^{\dagger}+\hat{a}_{k}\right)e^{ikja}\ \ \ \ \ (44)
\displaystyle  \displaystyle  = \displaystyle  \sqrt{\frac{\hbar}{2mN}}\sum_{k}\frac{1}{\sqrt{\omega_{k}}}\left(\hat{a}_{k}^{\dagger}e^{-ikja}+\hat{a}_{k}e^{ikja}\right) \ \ \ \ \ (45)

where the last line follows from the fact that the sum runs over a range of {k} values symmetric about {k=0}, so we can replace {k} by {-k} and get the same sum.

Inserting 41 and 42 into 34 we get

\displaystyle   \hat{H} \displaystyle  = \displaystyle  -\frac{\hbar}{4}\sum_{k}\omega_{k}\left(\hat{a}_{-k}^{\dagger}-\hat{a}_{k}\right)\left(\hat{a}_{k}^{\dagger}-\hat{a}_{-k}\right)+\frac{\hbar}{4}\sum_{k}\omega_{k}\left(\hat{a}_{-k}^{\dagger}+\hat{a}_{k}\right)\left(\hat{a}_{k}^{\dagger}+\hat{a}_{-k}\right)\ \ \ \ \ (46)
\displaystyle  \displaystyle  = \displaystyle  \frac{\hbar}{2}\sum_{k}\omega_{k}\left(\hat{a}_{-k}^{\dagger}\hat{a}_{-k}+\hat{a}_{k}\hat{a}_{k}^{\dagger}\right) \ \ \ \ \ (47)

Since the commutator is

\displaystyle  \left[\hat{a}_{k},\hat{a}_{k}^{\dagger}\right]=1 \ \ \ \ \ (48)

we get, after relabelling the dummy index {k\rightarrow-k} in the {\hat{a}_{k}^{\dagger}\hat{a}_{k}} term (which leaves the sum unchanged, since the range of {k} is symmetric about zero):

\displaystyle   \hat{H} \displaystyle  = \displaystyle  \frac{\hbar}{2}\sum_{k}\omega_{k}\left(\hat{a}_{-k}^{\dagger}\hat{a}_{-k}+1+\hat{a}_{-k}^{\dagger}\hat{a}_{-k}\right)\ \ \ \ \ (49)
\displaystyle  \displaystyle  = \displaystyle  \sum_{k}\hbar\omega_{k}\left(\hat{a}_{-k}^{\dagger}\hat{a}_{-k}+\frac{1}{2}\right)\ \ \ \ \ (50)
\displaystyle  \displaystyle  = \displaystyle  \sum_{k}\hbar\omega_{k}\left(\hat{a}_{k}^{\dagger}\hat{a}_{k}+\frac{1}{2}\right) \ \ \ \ \ (51)

That is, the hamiltonian is the sum of the hamiltonians for individual oscillators in terms of creation and annihilation operators. The ‘particles’ that are created or annihilated are the modes of oscillation in the chain of ‘real’ oscillators; these modes are called phonons, since they are reminiscent of sound waves passing through a medium.

Berry’s phase: definition and value for a spin-1 particle in a magnetic field

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 10.6.

So far, when calculating phases in the adiabatic theorem we’ve assumed that there is only one parameter {R} in the hamiltonian that is time-dependent. In that case, we can write the geometric phase as

\displaystyle   \gamma_{n}\left(t\right) \displaystyle  = \displaystyle  i\int_{0}^{t}\left\langle \psi_{n}\left(t'\right)\left|\frac{\partial}{\partial t'}\psi_{n}\left(t'\right)\right.\right\rangle dt'\ \ \ \ \ (1)
\displaystyle  \displaystyle  = \displaystyle  i\int_{R_{i}}^{R_{f}}\left\langle \psi_{n}\left|\frac{\partial}{\partial R}\psi_{n}\right.\right\rangle dR \ \ \ \ \ (2)

and if we take the system through a complete loop where we start at {R=R_{i}} then take {R} out to {R_{f}} then back to {R_{i}} again, the net phase is always zero because the two limits on the integral are the same once we’ve travelled the complete loop.

However, if the hamiltonian has two or more time-dependent parameters, then the chain rule for derivatives says that

\displaystyle  \frac{\partial\psi_{n}}{\partial t}=\frac{\partial\psi_{n}}{\partial R_{1}}\frac{dR_{1}}{dt}+\frac{\partial\psi_{n}}{\partial R_{2}}\frac{dR_{2}}{dt}+...+\frac{\partial\psi_{n}}{\partial R_{N}}\frac{dR_{N}}{dt} \ \ \ \ \ (3)

If we treat the complete set of parameters {R_{j}} as the components of an {N}-dimensional vector, we can define a gradient in the {R_{j}} coordinate system as {\nabla_{R}} and rewrite this derivative as

\displaystyle  \frac{\partial\psi_{n}}{\partial t}=\left(\nabla_{R}\psi_{n}\right)\cdot\frac{d\mathbf{R}}{dt} \ \ \ \ \ (4)

giving the phase as

\displaystyle  \gamma_{n}\left(t\right)=i\int_{\mathbf{R}_{i}}^{\mathbf{R}_{f}}\left\langle \psi_{n}\left|\nabla_{R}\psi_{n}\right.\right\rangle \cdot d\mathbf{R} \ \ \ \ \ (5)

If we now take the system around a closed loop in {R}-space in time {T}, we can write the phase change over that loop as a line integral around the path:

\displaystyle  \gamma_{n}\left(t\right)=i\oint\left\langle \psi_{n}\left|\nabla_{R}\psi_{n}\right.\right\rangle \cdot d\mathbf{R} \ \ \ \ \ (6)

As this is the line integral of a vector around a closed path, if {\mathbf{R}} consists of three parameters, we can convert it to a surface integral over the area enclosed by the path by using Stokes’s theorem:

\displaystyle  \gamma_{n}\left(t\right)=i\int\left(\nabla\times\left\langle \psi_{n}\left|\nabla_{R}\psi_{n}\right.\right\rangle \right)\cdot d\mathbf{a} \ \ \ \ \ (7)

This phase is known as Berry’s phase and is not, in general, zero. Griffiths works out the classic example of calculating Berry’s phase for an electron in a precessing magnetic field and then, more generally, for an electron in a magnetic field of constant magnitude but varying in direction by sweeping out some closed path (of any shape). The results apply to any spin-1/2 particle, so here we’ll work out Berry’s phase for a particle of spin 1.

Ultimately, what we want is to work out 7 for some initial spin state of the particle. If we’re using spherical coordinates, then {\nabla_{R}} is the usual gradient in spherical coordinates, so to complete the calculation, we need to know {\psi_{n}}. Suppose we start with the particle in the {+1} spin state (for spin 1, the {z} component can have values {\pm\hbar} and 0, so a spin of {+1} corresponds to {+\hbar}). The spin matrices are

\displaystyle   S_{z} \displaystyle  = \displaystyle  \hbar\left(\begin{array}{ccc} 1 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & -1 \end{array}\right)\ \ \ \ \ (8)
\displaystyle  S_{x} \displaystyle  = \displaystyle  \frac{\hbar}{\sqrt{2}}\left(\begin{array}{ccc} 0 & 1 & 0\\ 1 & 0 & 1\\ 0 & 1 & 0 \end{array}\right)\ \ \ \ \ (9)
\displaystyle  S_{y} \displaystyle  = \displaystyle  \frac{\hbar}{\sqrt{2}}\left(\begin{array}{ccc} 0 & -i & 0\\ i & 0 & -i\\ 0 & i & 0 \end{array}\right) \ \ \ \ \ (10)

so the component of {\mathbf{S}} along a direction (which is taken to be the magnetic field’s instantaneous direction) given by

\displaystyle  \hat{\mathbf{r}}=\sin\theta\cos\phi\hat{\mathbf{x}}+\sin\theta\sin\phi\hat{\mathbf{y}}+\cos\theta\hat{\mathbf{z}} \ \ \ \ \ (11)

is

\displaystyle   \mathbf{S}\cdot\hat{\mathbf{r}} \displaystyle  = \displaystyle  \hbar\left(\begin{array}{ccc} 1 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & -1 \end{array}\right)\cos\theta+\frac{\hbar}{\sqrt{2}}\left(\begin{array}{ccc} 0 & 1 & 0\\ 1 & 0 & 1\\ 0 & 1 & 0 \end{array}\right)\sin\theta\cos\phi+\frac{\hbar}{\sqrt{2}}\left(\begin{array}{ccc} 0 & -i & 0\\ i & 0 & -i\\ 0 & i & 0 \end{array}\right)\sin\theta\sin\phi\ \ \ \ \ (12)
\displaystyle  \displaystyle  = \displaystyle  \frac{\hbar}{\sqrt{2}}\left(\begin{array}{ccc} \sqrt{2}\cos\theta & \sin\theta e^{-i\phi} & 0\\ \sin\theta e^{i\phi} & 0 & \sin\theta e^{-i\phi}\\ 0 & \sin\theta e^{i\phi} & -\sqrt{2}\cos\theta \end{array}\right) \ \ \ \ \ (13)

The eigenvalues of this matrix are {\pm\hbar,0} as required and the normalized eigenvector corresponding to {+\hbar} is

\displaystyle  \chi_{+1}=\frac{1}{2}\left[\begin{array}{c} e^{-2i\phi}\left(1+\cos\theta\right)\\ \sqrt{2}e^{-i\phi}\sin\theta\\ 1-\cos\theta \end{array}\right] \ \ \ \ \ (14)
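These results are easy to check numerically. Here's a quick sketch in Python with numpy (rather than Maple, which is used elsewhere on this blog); the angles are arbitrary test values and we work in units with {\hbar=1}. It builds {\mathbf{S}\cdot\hat{\mathbf{r}}} from 12 and confirms the eigenvalues and the eigenvector 14:

```python
import numpy as np

hbar = 1.0                 # units with hbar = 1
theta, phi = 0.7, 1.3      # arbitrary test direction

# spin-1 matrices from eqs 8-10
Sz = hbar * np.diag([1.0, 0.0, -1.0]).astype(complex)
Sx = (hbar / np.sqrt(2)) * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = (hbar / np.sqrt(2)) * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])

# component of S along r-hat, eq 12
Sr = (Sx * np.sin(theta) * np.cos(phi)
      + Sy * np.sin(theta) * np.sin(phi)
      + Sz * np.cos(theta))

# eigenvalues should be -hbar, 0, +hbar
print(np.round(np.linalg.eigvalsh(Sr), 10))

# eigenvector for +hbar, eq 14
chi = 0.5 * np.array([np.exp(-2j * phi) * (1 + np.cos(theta)),
                      np.sqrt(2) * np.exp(-1j * phi) * np.sin(theta),
                      1 - np.cos(theta)])
print(np.allclose(Sr @ chi, hbar * chi), np.isclose(np.vdot(chi, chi).real, 1.0))  # True True
```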

The gradient in spherical coordinates of {\chi_{+1}} has components only in the {\theta} and {\phi} directions and we have

\displaystyle   \nabla\chi_{+1} \displaystyle  = \displaystyle  \frac{1}{r}\frac{\partial\chi_{+1}}{\partial\theta}\hat{\boldsymbol{\theta}}+\frac{1}{r\sin\theta}\frac{\partial\chi_{+1}}{\partial\phi}\hat{\boldsymbol{\phi}}\ \ \ \ \ (15)
\displaystyle  \displaystyle  = \displaystyle  \frac{1}{2r}\left[\begin{array}{c} -e^{-2i\phi}\sin\theta\\ \sqrt{2}e^{-i\phi}\cos\theta\\ \sin\theta \end{array}\right]\hat{\boldsymbol{\theta}}-\frac{i}{2r}\left[\begin{array}{c} \frac{2e^{-2i\phi}\left(1+\cos\theta\right)}{\sin\theta}\\ \sqrt{2}e^{-i\phi}\\ 0 \end{array}\right]\hat{\boldsymbol{\phi}} \ \ \ \ \ (16)

We can now work out

\displaystyle   \left\langle \chi_{+1}\left|\nabla\chi_{+1}\right.\right\rangle \displaystyle  = \displaystyle  0\hat{\boldsymbol{\theta}}-i\frac{1+\cos\theta}{r\sin\theta}\hat{\boldsymbol{\phi}}\ \ \ \ \ (17)
\displaystyle  \displaystyle  = \displaystyle  -i\frac{1+\cos\theta}{r\sin\theta}\hat{\boldsymbol{\phi}} \ \ \ \ \ (18)

The curl of this has a component only in the {r} direction:

\displaystyle   \nabla\times\left\langle \chi_{+1}\left|\nabla\chi_{+1}\right.\right\rangle \displaystyle  = \displaystyle  \frac{1}{r\sin\theta}\frac{\partial}{\partial\theta}\left[\sin\theta\left(-i\frac{1+\cos\theta}{r\sin\theta}\right)\right]\hat{\mathbf{r}}\ \ \ \ \ (19)
\displaystyle  \displaystyle  = \displaystyle  \frac{i}{r^{2}}\hat{\mathbf{r}} \ \ \ \ \ (20)

To get the phase we need to integrate this over the surface enclosed by a complete loop traversed by the point of the {\mathbf{B}} field, that is

\displaystyle  \gamma=i\int\frac{i}{r^{2}}\hat{\mathbf{r}}\cdot d\mathbf{a} \ \ \ \ \ (21)

Since the magnetic field’s magnitude is constant, the traversed path is on the surface of a sphere with radius {r} and {d\mathbf{a}} subtends an element of solid angle on this sphere so that

\displaystyle  d\mathbf{a}=r^{2}\hat{\mathbf{r}}d\Omega \ \ \ \ \ (22)

The integral thus comes out to

\displaystyle  \gamma=-\int d\Omega=-\Omega \ \ \ \ \ (23)
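As a sanity check, we can compute Berry's phase for a loop at constant polar angle directly from the discrete overlap formula {\gamma=-\arg\prod_{k}\left\langle \chi\left(\phi_{k}\right)\left|\chi\left(\phi_{k+1}\right)\right.\right\rangle }, which returns the phase mod {2\pi} (all that is physically meaningful). The sketch below (Python/numpy; the cone angle and grid size are arbitrary choices, {\hbar=1}) uses the eigenvector 14 and compares with {-\Omega}, where {\Omega=2\pi\left(1-\cos\theta_{0}\right)} is the solid angle of the cone:

```python
import numpy as np

def chi_plus(theta, phi):
    """Eigenvector of S.r for eigenvalue +hbar, eq 14."""
    return 0.5 * np.array([np.exp(-2j * phi) * (1 + np.cos(theta)),
                           np.sqrt(2) * np.exp(-1j * phi) * np.sin(theta),
                           1 - np.cos(theta)])

# sweep the field around a cone of constant polar angle theta0
cos0 = 0.9
theta0 = np.arccos(cos0)
phis = np.linspace(0.0, 2 * np.pi, 4001)
states = [chi_plus(theta0, p) for p in phis]

# discrete Berry phase: minus the argument of the product of overlaps (mod 2*pi)
prod = 1.0 + 0.0j
for u, w in zip(states[:-1], states[1:]):
    prod *= np.vdot(u, w)
gamma = -np.angle(prod)

omega = 2 * np.pi * (1 - cos0)    # solid angle subtended by the cone
print(gamma, -omega)              # both ≈ -0.628
```

The phase picked up in the given gauge is actually {2\pi\left(1+\cos\theta_{0}\right)}, which differs from {-\Omega} by {4\pi}, so the two agree mod {2\pi} as they must.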

Geometric phase is always zero for real wave functions

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 10.5.

Here’s another result concerning phases in the adiabatic theorem, which says that if a system starts out in the {n}th state of a time-dependent hamiltonian, and the hamiltonian changes slowly compared to the internal period of the time-independent wave function (that is, the time scale over which the hamiltonian changes is much longer than {\hbar/E_{n}}), then after a time {t} the system will end up in state

\displaystyle \Psi_{n}\left(t\right)=e^{i\theta_{n}\left(t\right)}e^{i\gamma_{n}\left(t\right)}\psi_{n}\left(t\right) \ \ \ \ \ (1)

where

\displaystyle \theta_{n}\left(t\right) \displaystyle \equiv \displaystyle -\frac{1}{\hbar}\int_{0}^{t}E_{n}\left(t'\right)dt'\ \ \ \ \ (2)
\displaystyle \gamma_{n}\left(t\right) \displaystyle \equiv \displaystyle i\int_{0}^{t}\left\langle \psi_{n}\left(t'\right)\left|\frac{\partial}{\partial t'}\psi_{n}\left(t'\right)\right.\right\rangle dt' \ \ \ \ \ (3)

{\theta} is called the dynamic phase and {\gamma} is called the geometric phase. If {\psi_{n}} is real, then {\gamma_{n}} is always zero, as we can see by differentiating the normalization condition:

\displaystyle \left\langle \psi_{n}\left|\psi_{n}\right.\right\rangle \displaystyle = \displaystyle 1\ \ \ \ \ (4)
\displaystyle \frac{d}{dt}\left\langle \psi_{n}\left|\psi_{n}\right.\right\rangle \displaystyle = \displaystyle 0\ \ \ \ \ (5)
\displaystyle \displaystyle = \displaystyle \left\langle \dot{\psi}_{n}\left|\psi_{n}\right.\right\rangle +\left\langle \psi_{n}\left|\dot{\psi}_{n}\right.\right\rangle \ \ \ \ \ (6)
\displaystyle \displaystyle = \displaystyle \left\langle \psi_{n}\left|\dot{\psi}_{n}\right.\right\rangle ^*+\left\langle \psi_{n}\left|\dot{\psi}_{n}\right.\right\rangle \ \ \ \ \ (7)
\displaystyle \displaystyle = \displaystyle 2\Re\left(\left\langle \psi_{n}\left|\dot{\psi}_{n}\right.\right\rangle \right) \ \ \ \ \ (8)

That is, {\left\langle \psi_{n}\left(t'\right)\left|\frac{\partial}{\partial t'}\psi_{n}\left(t'\right)\right.\right\rangle } must be purely imaginary, so if {\psi_{n}} is real, the bracket must be zero. This also means that {\gamma} is always real.

We can multiply the real wave function {\psi_{n}} by a phase factor {e^{i\phi_{n}}} where {\phi_{n}} is a real function of whatever parameters are dependent on time in the hamiltonian (but {\phi_{n}} is not a function of {x}). In that case we have a new wave function (we’ll drop the subscript {n} to save time):

\displaystyle \psi' \displaystyle = \displaystyle e^{i\phi}\psi\ \ \ \ \ (9)
\displaystyle \dot{\psi}' \displaystyle = \displaystyle i\dot{\phi}e^{i\phi}\psi+e^{i\phi}\dot{\psi}\ \ \ \ \ (10)
\displaystyle \left\langle \psi'\left|\dot{\psi}'\right.\right\rangle \displaystyle = \displaystyle \left\langle \psi\left|i\dot{\phi}\psi+\dot{\psi}\right.\right\rangle \ \ \ \ \ (11)
\displaystyle \displaystyle = \displaystyle i\left\langle \psi\left|\dot{\phi}\psi\right.\right\rangle +\left\langle \psi\left|\dot{\psi}\right.\right\rangle \ \ \ \ \ (12)
\displaystyle \displaystyle = \displaystyle i\dot{\phi} \ \ \ \ \ (13)

where in the last line we took {\dot{\phi}} outside the bracket since it doesn’t depend on {x} and used {\left\langle \psi\left|\dot{\psi}\right.\right\rangle =0}. The geometric phase for the modified wave function is therefore

\displaystyle \gamma' \displaystyle = \displaystyle i\int_{0}^{t}i\dot{\phi}dt'\ \ \ \ \ (14)
\displaystyle \displaystyle = \displaystyle -\left(\phi\left(t\right)-\phi\left(0\right)\right) \ \ \ \ \ (15)

Putting this back into 1 we get

\displaystyle \Psi'\left(t\right) \displaystyle = \displaystyle e^{i\theta\left(t\right)}e^{-i\left(\phi\left(t\right)-\phi\left(0\right)\right)}\psi'\left(t\right)\ \ \ \ \ (16)
\displaystyle \displaystyle = \displaystyle e^{i\theta\left(t\right)}e^{-i\left(\phi\left(t\right)-\phi\left(0\right)\right)}e^{i\phi\left(t\right)}\psi\left(t\right)\ \ \ \ \ (17)
\displaystyle \displaystyle = \displaystyle e^{i\theta\left(t\right)}e^{i\phi\left(0\right)}\psi\left(t\right) \ \ \ \ \ (18)

Although the wave function picks up a constant phase {\phi\left(0\right)}, there is no time-dependent geometric phase.

Phases in the adiabatic theorem: delta function well

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 10.4.

Here’s another example of calculating phases using the adiabatic theorem, which says that if a system starts out in the {n}th state of a time-dependent hamiltonian, and the hamiltonian changes slowly compared to the internal period of the time-independent wave function (that is, the time scale over which the hamiltonian changes is much longer than {\hbar/E_{n}}), then after a time {t} the system will end up in state

\displaystyle  \Psi_{n}\left(t\right)=e^{i\theta_{n}\left(t\right)}e^{i\gamma_{n}\left(t\right)}\psi_{n}\left(t\right) \ \ \ \ \ (1)

where

\displaystyle   \theta_{n}\left(t\right) \displaystyle  \equiv \displaystyle  -\frac{1}{\hbar}\int_{0}^{t}E_{n}\left(t'\right)dt'\ \ \ \ \ (2)
\displaystyle  \gamma_{n}\left(t\right) \displaystyle  \equiv \displaystyle  i\int_{0}^{t}\left\langle \psi_{n}\left(t'\right)\left|\frac{\partial}{\partial t'}\psi_{n}\left(t'\right)\right.\right\rangle dt' \ \ \ \ \ (3)

{\theta} is called the dynamic phase and {\gamma} is called the geometric phase.

The wave functions {\psi_{n}\left(t\right)} are the solutions of the eigenvalue equation at a particular time {t}:

\displaystyle  H\left(t\right)\psi_{n}\left(t\right)=E_{n}\left(t\right)\psi_{n}\left(t\right) \ \ \ \ \ (4)

That is, they aren’t a full solution of the time dependent Schrödinger equation; rather they are the solutions of the time-independent Schrödinger equation with whatever parameters are now time-dependent in the hamiltonian replaced by their time-dependent forms.

With a delta function well the potential is

\displaystyle  V\left(x\right)=-\alpha\delta\left(x\right) \ \ \ \ \ (5)

and the time-independent wave function for the bound state is

\displaystyle  \psi\left(x\right)=\frac{\sqrt{m\alpha}}{\hbar}e^{-m\alpha\left|x\right|/\hbar^{2}} \ \ \ \ \ (6)

If the strength of the delta function {\alpha} varies with time, then

\displaystyle   \gamma_{n}\left(t\right) \displaystyle  = \displaystyle  i\int_{\alpha_{1}}^{\alpha_{2}}\left\langle \psi_{n}\left(\alpha\right)\left|\frac{\partial}{\partial\alpha}\psi_{n}\left(\alpha\right)\right.\right\rangle d\alpha\ \ \ \ \ (7)
\displaystyle  \frac{\partial}{\partial\alpha}\psi_{n}\left(\alpha\right) \displaystyle  = \displaystyle  e^{-m\alpha\left|x\right|/\hbar^{2}}\frac{m\left(\hbar^{2}-2m\alpha\left|x\right|\right)}{2\hbar^{3}\sqrt{m\alpha}}\ \ \ \ \ (8)
\displaystyle  \left\langle \psi_{n}\left(\alpha\right)\left|\frac{\partial}{\partial\alpha}\psi_{n}\left(\alpha\right)\right.\right\rangle \displaystyle  = \displaystyle  \frac{m}{2\hbar^{4}}\int_{-\infty}^{\infty}e^{-2m\alpha\left|x\right|/\hbar^{2}}\left(\hbar^{2}-2m\alpha\left|x\right|\right)dx\ \ \ \ \ (9)
\displaystyle  \displaystyle  = \displaystyle  \frac{m}{\hbar^{4}}\int_{0}^{\infty}e^{-2m\alpha x/\hbar^{2}}\left(\hbar^{2}-2m\alpha x\right)dx\ \ \ \ \ (10)
\displaystyle  \displaystyle  = \displaystyle  \left.\frac{m}{\hbar^{2}}xe^{-2m\alpha x/\hbar^{2}}\right|_{0}^{\infty}\ \ \ \ \ (11)
\displaystyle  \displaystyle  = \displaystyle  0 \ \ \ \ \ (12)

Therefore {\gamma_{n}=0}.
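This vanishing is really just the statement that {\psi} stays normalized for every {\alpha}, so {\left\langle \psi\left|\partial_{\alpha}\psi\right.\right\rangle =\frac{1}{2}\partial_{\alpha}\left\langle \psi\left|\psi\right.\right\rangle =0}. A quick numerical sketch (Python/numpy rather than Maple; units with {m=\hbar=1}, arbitrary value of {\alpha}) confirms it by differentiating the bound-state wave function 6 with respect to {\alpha} by central differences and integrating over {x}:

```python
import numpy as np

# bound state of the delta well, eq 6, in units with m = hbar = 1
def psi(x, a):
    return np.sqrt(a) * np.exp(-a * np.abs(x))

# <psi | d psi / d alpha> by a central difference in alpha and quadrature in x
a, da = 1.5, 1e-6
x = np.linspace(-40.0, 40.0, 400001)
dx = x[1] - x[0]
dpsi = (psi(x, a + da) - psi(x, a - da)) / (2 * da)
overlap = np.sum(psi(x, a) * dpsi) * dx
print(overlap)   # ≈ 0, so gamma_n vanishes
```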

The bound state energy is

\displaystyle  E=-\frac{m\alpha^{2}}{2\hbar^{2}} \ \ \ \ \ (13)

so the dynamic phase is

\displaystyle  \theta_{n}\left(t\right)=\frac{m}{2\hbar^{3}}\int_{0}^{t}\alpha^{2}\left(t'\right)dt' \ \ \ \ \ (14)

If {\alpha} changes at a constant rate, then {d\alpha/dt=c} and {\alpha\left(t\right)=\alpha_{1}+ct}, so

\displaystyle   \theta_{n}\left(t\right) \displaystyle  = \displaystyle  \frac{m}{2\hbar^{3}}\int_{0}^{\left(\alpha_{2}-\alpha_{1}\right)/c}\left(\alpha_{1}+ct\right)^{2}dt\ \ \ \ \ (15)
\displaystyle  \displaystyle  = \displaystyle  \frac{m\left(\alpha_{2}^{3}-\alpha_{1}^{3}\right)}{6c\hbar^{3}} \ \ \ \ \ (16)

Phases in the adiabatic approximation

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 10.3.

The adiabatic theorem (see Griffiths, section 10.1 for a proof) says that if a system starts out in the {n}th state of a time-dependent hamiltonian, and the hamiltonian changes slowly compared to the internal period of the time-independent wave function (that is, the time scale over which the hamiltonian changes is much longer than {\hbar/E_{n}}), then after a time {t} the system will end up in state

\displaystyle  \Psi_{n}\left(t\right)=e^{i\theta_{n}\left(t\right)}e^{i\gamma_{n}\left(t\right)}\psi_{n}\left(t\right) \ \ \ \ \ (1)

where

\displaystyle   \theta_{n}\left(t\right) \displaystyle  \equiv \displaystyle  -\frac{1}{\hbar}\int_{0}^{t}E_{n}\left(t'\right)dt'\ \ \ \ \ (2)
\displaystyle  \gamma_{n}\left(t\right) \displaystyle  \equiv \displaystyle  i\int_{0}^{t}\left\langle \psi_{n}\left(t'\right)\left|\frac{\partial}{\partial t'}\psi_{n}\left(t'\right)\right.\right\rangle dt' \ \ \ \ \ (3)

{\theta} is called the dynamic phase and {\gamma} is called the geometric phase.

The wave functions {\psi_{n}\left(t\right)} are the solutions of the eigenvalue equation at a particular time {t}:

\displaystyle  H\left(t\right)\psi_{n}\left(t\right)=E_{n}\left(t\right)\psi_{n}\left(t\right) \ \ \ \ \ (4)

That is, they aren’t a full solution of the time dependent Schrödinger equation; rather they are the solutions of the time-independent Schrödinger equation with whatever parameters are now time-dependent in the hamiltonian replaced by their time-dependent forms.

For example, with an infinite square well whose right wall moves so that its position {w} is a function of time {w\left(t\right)}, we have

\displaystyle   \psi_{n}\left(t\right) \displaystyle  = \displaystyle  \sqrt{\frac{2}{w\left(t\right)}}\sin\frac{n\pi}{w\left(t\right)}x\ \ \ \ \ (5)
\displaystyle  E_{n}\left(t\right) \displaystyle  = \displaystyle  \frac{\left(n\pi\hbar\right)^{2}}{2mw^{2}\left(t\right)} \ \ \ \ \ (6)

In this case, {\psi_{n}} depends on only one time-dependent parameter, so we can use the chain rule to write

\displaystyle   \gamma_{n}\left(t\right) \displaystyle  = \displaystyle  i\int_{0}^{t}\left\langle \psi_{n}\left|\frac{\partial}{\partial w}\psi_{n}\right.\right\rangle \frac{dw}{dt'}dt'\ \ \ \ \ (7)
\displaystyle  \displaystyle  = \displaystyle  i\int_{w_{1}}^{w_{2}}\left\langle \psi_{n}\left|\frac{\partial}{\partial w}\psi_{n}\right.\right\rangle dw \ \ \ \ \ (8)

where the wall moves from {w_{1}} to {w_{2}} between times 0 and {t}. We get

\displaystyle   \frac{\partial}{\partial w}\psi_{n} \displaystyle  = \displaystyle  -\frac{\sqrt{2}}{2w^{5/2}}\left[w\sin\frac{n\pi}{w}x+2n\pi x\cos\frac{n\pi}{w}x\right]\ \ \ \ \ (9)
\displaystyle  \left\langle \psi_{n}\left|\frac{\partial}{\partial w}\psi_{n}\right.\right\rangle \displaystyle  = \displaystyle  -\frac{1}{w^{3}}\int_{0}^{w}\sin\frac{n\pi}{w}x\left[w\sin\frac{n\pi}{w}x+2n\pi x\cos\frac{n\pi}{w}x\right]dx\ \ \ \ \ (10)
\displaystyle  \displaystyle  = \displaystyle  \frac{\sin^{2}n\pi}{w}\ \ \ \ \ (11)
\displaystyle  \displaystyle  = \displaystyle  0 \ \ \ \ \ (12)

In this case, there is no change in phase due to the geometric phase. In fact, we can see this is generally true for real wave functions {\psi_{n}} since

\displaystyle   \left\langle \psi_{n}\left|\psi_{n}\right.\right\rangle \displaystyle  = \displaystyle  1\ \ \ \ \ (13)
\displaystyle  \frac{d}{dt}\left\langle \psi_{n}\left|\psi_{n}\right.\right\rangle \displaystyle  = \displaystyle  0\ \ \ \ \ (14)
\displaystyle  \displaystyle  = \displaystyle  \left\langle \dot{\psi}_{n}\left|\psi_{n}\right.\right\rangle +\left\langle \psi_{n}\left|\dot{\psi}_{n}\right.\right\rangle \ \ \ \ \ (15)
\displaystyle  \displaystyle  = \displaystyle  2\Re\left(\left\langle \psi_{n}\left|\dot{\psi}_{n}\right.\right\rangle \right) \ \ \ \ \ (16)

That is, {\left\langle \psi_{n}\left(t'\right)\left|\frac{\partial}{\partial t'}\psi_{n}\left(t'\right)\right.\right\rangle } must be purely imaginary, so if {\psi_{n}} is real, the bracket must be zero. This also means that {\gamma} is always real.

Thus {\gamma} is zero as the wall moves from {w_{1}} to {w_{2}} and also as it moves back from {w_{2}} to {w_{1}}.

The dynamic phase for the same journey is

\displaystyle   \theta_{n}\left(t\right) \displaystyle  = \displaystyle  -\frac{1}{\hbar}\int_{0}^{t}E_{n}\left(t'\right)dt'\ \ \ \ \ (17)
\displaystyle  \displaystyle  = \displaystyle  -\frac{\hbar\left(n\pi\right)^{2}}{2m}\int_{0}^{t}\frac{1}{w^{2}\left(t'\right)}dt' \ \ \ \ \ (18)

If the speed of the wall is constant so that {w=w_{1}+vt} we have

\displaystyle   \theta_{n}\left(t\right) \displaystyle  = \displaystyle  -\frac{\hbar\left(n\pi\right)^{2}}{2m}\int_{0}^{\left(w_{2}-w_{1}\right)/v}\frac{dt'}{\left(w_{1}+vt'\right)^{2}}\ \ \ \ \ (19)
\displaystyle  \displaystyle  = \displaystyle  \frac{\hbar\left(n\pi\right)^{2}}{2mv}\frac{w_{1}-w_{2}}{w_{1}w_{2}} \ \ \ \ \ (20)
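The closed form 20 can be checked against direct quadrature of 19. Here's a sketch in Python/numpy (units with {m=\hbar=1}; the wall positions and speed are arbitrary choices, with {v} small to stay in the adiabatic regime):

```python
import numpy as np

hbar = m = 1.0                       # units with hbar = m = 1
n, w1, w2, v = 1, 1.0, 2.0, 1e-3     # level, wall positions, slow wall speed

# eq 19 by trapezoidal quadrature in t
t = np.linspace(0.0, (w2 - w1) / v, 200001)
f = 1.0 / (w1 + v * t) ** 2
dt = t[1] - t[0]
integral = (np.sum(f) - 0.5 * (f[0] + f[-1])) * dt
theta_num = -(hbar * (n * np.pi) ** 2 / (2 * m)) * integral

# closed form, eq 20
theta_closed = (hbar * (n * np.pi) ** 2 / (2 * m * v)) * (w1 - w2) / (w1 * w2)
print(theta_num, theta_closed)       # both ≈ -2467.4
```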

Electron in a precessing magnetic field

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 10.2.

For a spin 1/2 particle in a magnetic field {\mathbf{B}}, we’ve seen that the hamiltonian is

\displaystyle  \mathsf{H}=-\gamma\mathbf{B}\cdot\mathsf{S} \ \ \ \ \ (1)

where {\gamma} is the gyromagnetic ratio, which for an electron is {-e/m}. Now suppose that the magnetic field’s direction precesses around the {z} axis (sweeps out a cone) with angular speed {\omega}, so that {\mathbf{B}} makes an angle {\alpha} with the {z} axis. That is

\displaystyle  \mathbf{B}\left(t\right)=B_{0}\left[\sin\alpha\cos\left(\omega t\right)\hat{\mathbf{x}}+\sin\alpha\sin\left(\omega t\right)\hat{\mathbf{y}}+\cos\alpha\hat{\mathbf{z}}\right] \ \ \ \ \ (2)

At time {t}, the component of {\mathbf{S}} along {\mathbf{B}} is given by

\displaystyle  \textsf{S}_{B}=\frac{\hbar}{2}\left(\begin{array}{cc} \cos\alpha & \sin\alpha e^{-i\omega t}\\ \sin\alpha e^{i\omega t} & -\cos\alpha \end{array}\right) \ \ \ \ \ (3)

so the hamiltonian is

\displaystyle   \mathsf{H} \displaystyle  = \displaystyle  \frac{\hbar\omega_{1}}{2}\left(\begin{array}{cc} \cos\alpha & \sin\alpha e^{-i\omega t}\\ \sin\alpha e^{i\omega t} & -\cos\alpha \end{array}\right)\ \ \ \ \ (4)
\displaystyle  \omega_{1} \displaystyle  \equiv \displaystyle  \frac{eB_{0}}{m} \ \ \ \ \ (5)

If we freeze the system at time {t} and solve the time-independent Schrödinger equation to get the eigenvalues and eigenspinors we get

\displaystyle  \chi_{+}=\left(\begin{array}{c} \cos(\alpha/2)\\ e^{i\omega t}\sin(\alpha/2) \end{array}\right) \ \ \ \ \ (6)

\displaystyle  \chi_{-}=\left(\begin{array}{c} e^{-i\omega t}\sin(\alpha/2)\\ -\cos(\alpha/2) \end{array}\right) \ \ \ \ \ (7)


with energies

\displaystyle  E_{\pm}=\pm\frac{\hbar\omega_{1}}{2} \ \ \ \ \ (8)

Griffiths gives the exact solution to the time-dependent Schrödinger equation for this problem as

\displaystyle  \chi\left(t\right)=\left[\begin{array}{c} \left(\cos\frac{\lambda t}{2}-i\frac{\omega_{1}-\omega}{\lambda}\sin\frac{\lambda t}{2}\right)\cos\frac{\alpha}{2}e^{-i\omega t/2}\\ \left(\cos\frac{\lambda t}{2}-i\frac{\omega_{1}+\omega}{\lambda}\sin\frac{\lambda t}{2}\right)\sin\frac{\alpha}{2}e^{i\omega t/2} \end{array}\right] \ \ \ \ \ (9)


where

\displaystyle  \lambda\equiv\sqrt{\omega^{2}+\omega_{1}^{2}-2\omega\omega_{1}\cos\alpha} \ \ \ \ \ (10)

To prove this, we need to show that

\displaystyle  \mathsf{H}\chi=i\hbar\frac{\partial\chi}{\partial t} \ \ \ \ \ (11)

As usual, I’ll use Maple to help things along, although even Maple requires a bit of help here and there. We’ll start with {\mathsf{H}\chi} which is the matrix product of 4 and 9. After multiplying out the terms and using the trig identities {\cos\left(a\pm b\right)=\cos a\cos b\mp\sin a\sin b} we get

\displaystyle  \mathsf{H}\chi=\frac{\hbar\omega_{1}}{2\lambda}\left[\begin{array}{c} e^{-i\omega t/2}\cos\frac{\alpha}{2}\left[i\left(4\omega\cos^{2}\frac{\alpha}{2}-3\omega-\omega_{1}\right)\sin\frac{\lambda t}{2}+\lambda\cos\frac{\lambda t}{2}\right]\\ e^{i\omega t/2}\sin\frac{\alpha}{2}\left[i\left(4\omega\cos^{2}\frac{\alpha}{2}-\omega-\omega_{1}\right)\sin\frac{\lambda t}{2}+\lambda\cos\frac{\lambda t}{2}\right] \end{array}\right] \ \ \ \ \ (12)

Now for the RHS. We get after collecting terms

\displaystyle  i\hbar\frac{\partial\chi}{\partial t}=\frac{\hbar}{2\lambda}\left[\begin{array}{c} e^{-i\omega t/2}\cos\frac{\alpha}{2}\left[i\left(\omega^{2}-\omega\omega_{1}-\lambda^{2}\right)\sin\frac{\lambda t}{2}+\lambda\omega_{1}\cos\frac{\lambda t}{2}\right]\\ e^{i\omega t/2}\sin\frac{\alpha}{2}\left[i\left(\omega^{2}+\omega\omega_{1}-\lambda^{2}\right)\sin\frac{\lambda t}{2}+\lambda\omega_{1}\cos\frac{\lambda t}{2}\right] \end{array}\right] \ \ \ \ \ (13)

The two sides are equal if both the following are true:

\displaystyle   \left(4\omega\cos^{2}\frac{\alpha}{2}-3\omega-\omega_{1}\right)\omega_{1} \displaystyle  = \displaystyle  \omega^{2}-\omega\omega_{1}-\lambda^{2}\ \ \ \ \ (14)
\displaystyle  \left(4\omega\cos^{2}\frac{\alpha}{2}-\omega-\omega_{1}\right)\omega_{1} \displaystyle  = \displaystyle  \omega^{2}+\omega\omega_{1}-\lambda^{2} \ \ \ \ \ (15)

Substituting from 10 both equations give the same condition:

\displaystyle   4\omega\omega_{1}\cos^{2}\frac{\alpha}{2} \displaystyle  = \displaystyle  2\omega\omega_{1}\left(1+\cos\alpha\right)\ \ \ \ \ (16)
\displaystyle  \cos\alpha \displaystyle  = \displaystyle  2\cos^{2}\frac{\alpha}{2}-1 \ \ \ \ \ (17)

The last line is a trig identity, so the time-dependent Schrödinger equation is satisfied.
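Instead of grinding through the algebra with Maple, we can also verify 11 numerically at a single instant by comparing {\mathsf{H}\chi} with a finite-difference time derivative of {\chi}. Here's a sketch in Python/numpy ({\hbar=1}; the values of {\omega}, {\omega_{1}}, {\alpha} and {t} are arbitrary test choices):

```python
import numpy as np

hbar = 1.0
om, om1, alpha = 0.7, 1.9, 0.5       # arbitrary omega, omega_1 and alpha
lam = np.sqrt(om**2 + om1**2 - 2 * om * om1 * np.cos(alpha))   # eq 10

def H(t):
    """Hamiltonian, eq 4."""
    return (hbar * om1 / 2) * np.array(
        [[np.cos(alpha), np.sin(alpha) * np.exp(-1j * om * t)],
         [np.sin(alpha) * np.exp(1j * om * t), -np.cos(alpha)]])

def chi(t):
    """Proposed exact solution, eq 9."""
    c, s = np.cos(lam * t / 2), np.sin(lam * t / 2)
    return np.array(
        [(c - 1j * (om1 - om) / lam * s) * np.cos(alpha / 2) * np.exp(-1j * om * t / 2),
         (c - 1j * (om1 + om) / lam * s) * np.sin(alpha / 2) * np.exp(1j * om * t / 2)])

t0, dt = 1.234, 1e-6
lhs = H(t0) @ chi(t0)
rhs = 1j * hbar * (chi(t0 + dt) - chi(t0 - dt)) / (2 * dt)   # central difference
print(np.allclose(lhs, rhs, atol=1e-6))   # True
```

The same function can be used to check that {\chi} stays normalized, which also follows from 10.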

We can also express 9 as a linear combination of 6 and 7. Griffiths gives the answer as

\displaystyle   \chi\left(t\right) \displaystyle  = \displaystyle  \left[\cos\frac{\lambda t}{2}-i\frac{\left(\omega_{1}-\omega\cos\alpha\right)}{\lambda}\sin\frac{\lambda t}{2}\right]e^{-i\omega t/2}\chi_{+}\left(t\right)+\nonumber
\displaystyle  \displaystyle  \displaystyle  i\left[\frac{\omega}{\lambda}\sin\alpha\sin\frac{\lambda t}{2}\right]e^{i\omega t/2}\chi_{-}\left(t\right) \ \ \ \ \ (18)

This can be verified by direct calculation. Take the top element first and use the cosine of difference of angles formula:

\displaystyle   \chi_{1} \displaystyle  = \displaystyle  \left[\cos\frac{\lambda t}{2}-i\frac{\left(\omega_{1}-\omega\cos\alpha\right)}{\lambda}\sin\frac{\lambda t}{2}\right]e^{-i\omega t/2}\cos\frac{\alpha}{2}+\nonumber
\displaystyle  \displaystyle  \displaystyle  i\left[\frac{\omega}{\lambda}\sin\alpha\sin\frac{\lambda t}{2}\right]e^{i\omega t/2}e^{-i\omega t}\sin\frac{\alpha}{2}\ \ \ \ \ (19)
\displaystyle  \displaystyle  = \displaystyle  e^{-i\omega t/2}\left[\cos\frac{\lambda t}{2}\cos\frac{\alpha}{2}+\frac{i}{\lambda}\sin\frac{\lambda t}{2}\left(-\omega_{1}\cos\frac{\alpha}{2}+\omega\cos\frac{\alpha}{2}\right)\right]\ \ \ \ \ (20)
\displaystyle  \displaystyle  = \displaystyle  \left(\cos\frac{\lambda t}{2}-i\frac{\omega_{1}-\omega}{\lambda}\sin\frac{\lambda t}{2}\right)\cos\frac{\alpha}{2}e^{-i\omega t/2} \ \ \ \ \ (21)

The bottom element works much the same way, using the sine of difference of angles formula:

\displaystyle   \chi_{2} \displaystyle  = \displaystyle  \left[\cos\frac{\lambda t}{2}-i\frac{\left(\omega_{1}-\omega\cos\alpha\right)}{\lambda}\sin\frac{\lambda t}{2}\right]e^{-i\omega t/2}e^{i\omega t}\sin\frac{\alpha}{2}-\nonumber
\displaystyle  \displaystyle  \displaystyle  i\left[\frac{\omega}{\lambda}\sin\alpha\sin\frac{\lambda t}{2}\right]e^{i\omega t/2}\cos\frac{\alpha}{2}\ \ \ \ \ (22)
\displaystyle  \displaystyle  = \displaystyle  \left(\cos\frac{\lambda t}{2}-i\frac{\omega_{1}+\omega}{\lambda}\sin\frac{\lambda t}{2}\right)\sin\frac{\alpha}{2}e^{i\omega t/2} \ \ \ \ \ (23)

Finally, writing {\chi\left(t\right)=c_{+}\left(t\right)\chi_{+}+c_{-}\left(t\right)\chi_{-}} we can check that the coefficients are normalized.

\displaystyle   \left|c_{+}\right|^{2}+\left|c_{-}\right|^{2} \displaystyle  = \displaystyle  \cos^{2}\frac{\lambda t}{2}+\frac{\left(\omega_{1}-\omega\cos\alpha\right)^{2}}{\lambda^{2}}\sin^{2}\frac{\lambda t}{2}+\left[\frac{\omega}{\lambda}\sin\alpha\sin\frac{\lambda t}{2}\right]^{2}\ \ \ \ \ (24)
\displaystyle  \displaystyle  = \displaystyle  \cos^{2}\frac{\lambda t}{2}+\sin^{2}\frac{\lambda t}{2}\left[\frac{\left(\omega_{1}-\omega\cos\alpha\right)^{2}}{\lambda^{2}}+\left(\frac{\omega}{\lambda}\sin\alpha\right)^{2}\right]\ \ \ \ \ (25)
\displaystyle  \displaystyle  = \displaystyle  \cos^{2}\frac{\lambda t}{2}+\sin^{2}\frac{\lambda t}{2}\left[\frac{\omega^{2}+\omega_{1}^{2}-2\omega\omega_{1}\cos\alpha}{\lambda^{2}}\right]\ \ \ \ \ (26)
\displaystyle  \displaystyle  = \displaystyle  \cos^{2}\frac{\lambda t}{2}+\sin^{2}\frac{\lambda t}{2}\left[\frac{\omega^{2}+\omega_{1}^{2}-2\omega\omega_{1}\cos\alpha}{\omega^{2}+\omega_{1}^{2}-2\omega\omega_{1}\cos\alpha}\right]\ \ \ \ \ (27)
\displaystyle  \displaystyle  = \displaystyle  1 \ \ \ \ \ (28)


The adiabatic approximation in quantum mechanics

Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 10.1.

The adiabatic approximation in quantum mechanics is a method by which approximate solutions to the time dependent Schrödinger equation can be found. The method works in cases where the hamiltonian changes slowly by comparison with the natural, internal frequency of the wave function. For example, for a time-independent hamiltonian, the general solution to the time dependent Schrödinger equation can be written as

\displaystyle  \Psi\left(x,t\right)=\sum_{n}c_{n}\psi_{n}\left(x\right)e^{-iE_{n}t/\hbar} \ \ \ \ \ (1)

where the {\psi_{n}\left(x\right)} are the eigenfunctions of the hamiltonian with eigenvalues (energies) {E_{n}}. The internal frequency of eigenfunction {n} is {E_{n}/\hbar}, so if the variations in the hamiltonian have frequency components much lower than this, we can use the adiabatic approximation.

The adiabatic theorem states that in a system with non-degenerate energy levels, if we start with the system in level {n} of the original hamiltonian and then undergo an adiabatic process that takes us to some final hamiltonian, then the system will be in level {n} of the final hamiltonian, although the wave function may pick up a phase factor along the way.

A common example of an adiabatic process is an infinite square well that starts with width {a} that increases slowly over time with constant speed {v} so that the width is given by

\displaystyle  w\left(t\right)=a+vt \ \ \ \ \ (2)

for {t\ge0}. If we let the wall expand to twice its original width, then we can take as the ‘external’ time {T_{e}} the time it takes to complete its expansion:

\displaystyle  T_{e}=\frac{a}{v} \ \ \ \ \ (3)

The ‘internal’ time {T_{i}} can be the period of the phase factor {e^{-iE_{n}t/\hbar}} in the starting state, so we would have

\displaystyle   \frac{E_{n}T_{i}}{\hbar} \displaystyle  = \displaystyle  2\pi\ \ \ \ \ (4)
\displaystyle  T_{i} \displaystyle  = \displaystyle  \frac{2\pi\hbar}{E_{n}} \ \ \ \ \ (5)

However, it turns out that the time dependent Schrödinger equation can be solved exactly in this case, with the {n}th eigenfunction given by

\displaystyle  \Phi_{n}\left(x,t\right)=\sqrt{\frac{2}{w}}\sin\left(\frac{n\pi}{w}x\right)e^{i\left(mvx^{2}-2E_{n}^{i}at\right)/2\hbar w} \ \ \ \ \ (6)


where {E_{n}^{i}} is the energy of level {n} in the starting well, with width {a}:

\displaystyle  E_{n}^{i}=\frac{n^{2}\pi^{2}\hbar^{2}}{2ma^{2}} \ \ \ \ \ (7)

We can verify by direct differentiation that 6 satisfies the time dependent Schrödinger equation

\displaystyle  -\frac{\hbar^{2}}{2m}\frac{\partial^{2}\Phi_{n}}{\partial x^{2}}=i\hbar\frac{\partial\Phi_{n}}{\partial t} \ \ \ \ \ (8)

The calculation gets very messy (remember {w} depends on {t}) so it’s best to use Maple to do it, and we get

\displaystyle  i\hbar\frac{\partial\Phi_{n}}{\partial t}=e^{i\left(mvx^{2}-2E_{n}^{i}at\right)/2\hbar w}\frac{\sqrt{2}}{w^{5/2}}\left[\frac{1}{2}\sin\left(\frac{n\pi}{w}x\right)\left(\frac{\left(n\pi\hbar\right)^{2}}{m}+m\left(vx\right)^{2}-i\hbar vw\right)-i\cos\left(\frac{n\pi}{w}x\right)\pi n\hbar vx\right] \ \ \ \ \ (9)

Fortunately, we get the same expression for {-\frac{\hbar^{2}}{2m}\frac{\partial^{2}\Phi_{n}}{\partial x^{2}}} so the Schrödinger equation is satisfied.
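If Maple isn't to hand, the same check can be done numerically at a single point by finite differences. Here's a sketch in Python/numpy (units with {m=\hbar=1}; the well width, wall speed, level and sample point are arbitrary test choices):

```python
import numpy as np

hbar = m = 1.0
a, v, n = 1.0, 0.3, 2     # arbitrary initial width, wall speed and level

def Phi(x, t):
    """Proposed exact solution, eq 6, with E_n^i from eq 7."""
    w = a + v * t
    En = (n * np.pi * hbar) ** 2 / (2 * m * a ** 2)
    return (np.sqrt(2 / w) * np.sin(n * np.pi * x / w)
            * np.exp(1j * (m * v * x ** 2 - 2 * En * a * t) / (2 * hbar * w)))

# check the time dependent Schroedinger equation, eq 8, by central differences
x0, t0, h = 0.37, 0.9, 1e-4
lhs = -(hbar ** 2 / (2 * m)) * (Phi(x0 + h, t0) - 2 * Phi(x0, t0) + Phi(x0 - h, t0)) / h ** 2
rhs = 1j * hbar * (Phi(x0, t0 + h) - Phi(x0, t0 - h)) / (2 * h)
print(abs(lhs - rhs))     # ≈ 0 (limited only by the finite-difference step)
```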

6 is the wave function for a single energy level, so we can get a general solution that is a superposition of energy levels in the usual way:

\displaystyle  \Psi\left(x,t\right)=\sum_{n=1}^{\infty}c_{n}\Phi_{n}\left(x,t\right) \ \ \ \ \ (10)

In this case, all the time dependence is included in the {\Phi_{n}}, so the coefficients {c_{n}} are true constants, independent of both space and time. The {\Phi_{n}} are orthonormal at each instant of time, since

\displaystyle   \int_{0}^{w}\Phi_{j}^*\Phi_{n}dx \displaystyle  = \displaystyle  \frac{2}{w}e^{2iat\left(E_{j}^{i}-E_{n}^{i}\right)/2\hbar w}\int_{0}^{w}\sin\left(\frac{j\pi}{w}x\right)\sin\left(\frac{n\pi}{w}x\right)dx\ \ \ \ \ (11)
\displaystyle  \displaystyle  = \displaystyle  e^{2iat\left(E_{j}^{i}-E_{n}^{i}\right)/2\hbar w}\delta_{jn}\ \ \ \ \ (12)
\displaystyle  \displaystyle  = \displaystyle  \delta_{jn} \ \ \ \ \ (13)
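This instantaneous orthonormality can also be checked numerically. The sketch below (Python/numpy, units with {m=\hbar=1}, arbitrary parameter values) evaluates the overlap integral in 11 at a fixed time by trapezoidal quadrature:

```python
import numpy as np

hbar = m = 1.0
a, v, t = 1.0, 0.3, 2.0               # arbitrary width, wall speed and time
w = a + v * t                         # instantaneous well width

def Phi(x, n):
    """Eq 6 at the fixed time t, with E_n^i from eq 7."""
    En = (n * np.pi * hbar) ** 2 / (2 * m * a ** 2)
    return (np.sqrt(2 / w) * np.sin(n * np.pi * x / w)
            * np.exp(1j * (m * v * x ** 2 - 2 * En * a * t) / (2 * hbar * w)))

x = np.linspace(0.0, w, 100001)
dx = x[1] - x[0]

def inner(j, n):
    """Trapezoidal approximation to the overlap integral in eq 11."""
    f = np.conj(Phi(x, j)) * Phi(x, n)
    return (np.sum(f) - 0.5 * (f[0] + f[-1])) * dx

print(abs(inner(1, 1)), abs(inner(1, 2)), abs(inner(2, 3)))   # ≈ 1, 0, 0
```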

We can therefore use orthonormality to get an expression for {c_{n}} by multiplying both sides by {\Phi_{j}^*} and integrating. Since the {c_{n}} are independent of time, we can do this at {t=0} when {w=a}.

\displaystyle  c_{j}=\int_{0}^{a}\Phi_{j}^*\left(x,0\right)\Psi\left(x,0\right)dx \ \ \ \ \ (14)

If the particle starts out in the ground state, then

\displaystyle   \Psi\left(x,0\right) \displaystyle  = \displaystyle  \sqrt{\frac{2}{a}}\sin\left(\frac{\pi}{a}x\right)\ \ \ \ \ (15)
\displaystyle  c_{n} \displaystyle  = \displaystyle  \frac{2}{a}\int_{0}^{a}e^{-imvx^{2}/2\hbar a}\sin\left(\frac{n\pi}{a}x\right)\sin\left(\frac{\pi}{a}x\right)dx \ \ \ \ \ (16)

Substituting {z=\pi x/a} we get

\displaystyle   c_{n} \displaystyle  = \displaystyle  \frac{2}{\pi}\int_{0}^{\pi}e^{-i\alpha z^{2}}\sin\left(nz\right)\sin\left(z\right)dz\ \ \ \ \ (17)
\displaystyle  \alpha \displaystyle  \equiv \displaystyle  \frac{mva}{2\pi^{2}\hbar} \ \ \ \ \ (18)
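
As a check on the substitution, we can evaluate {c_{n}} both from the form in 16 and from the form in 17 on independent grids and confirm they agree. This is only an illustration, in units where {\hbar=m=1} with arbitrary values of {a}, {v} and {n}:

```python
import numpy as np

# Sketch (units hbar = m = 1, arbitrary a, v, n): evaluate c_n both before
# and after the substitution z = pi x / a, on independent grids.
hbar = m = 1.0
a, v, n = 1.0, 0.8, 2
alpha = m * v * a / (2 * np.pi**2 * hbar)

x = np.linspace(0, a, 200001)
z = np.linspace(0, np.pi, 300001)
f16 = (np.exp(-1j * m * v * x**2 / (2 * hbar * a))
       * np.sin(n * np.pi * x / a) * np.sin(np.pi * x / a))
f17 = np.exp(-1j * alpha * z**2) * np.sin(n * z) * np.sin(z)
# both integrands vanish at the endpoints, so plain Riemann sums are accurate
c16 = (2 / a) * np.sum(f16) * (x[1] - x[0])
c17 = (2 / np.pi) * np.sum(f17) * (z[1] - z[0])
print(abs(c16 - c17))  # ~0
```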

So far, all this is exact. To use the adiabatic approximation, we need estimates of {T_{e}} and {T_{i}}. We can get {T_{e}} from 3. For {T_{i}} we can look at 6 at {x=0} and find the value of {t} that makes the argument of the exponential advance by {2\pi}. Because of the signs, the phase actually goes backwards as {t} increases, but the principle is the same; we just need the value of {t} at which the magnitude of the argument changes by {2\pi}.

\displaystyle   \frac{2E_{1}^{i}aT_{i}}{2\hbar\left(a+vT_{i}\right)} \displaystyle  = \displaystyle  2\pi\ \ \ \ \ (19)
\displaystyle  T_{i} \displaystyle  = \displaystyle  \frac{2\pi\hbar a}{E_{1}^{i}a-2\pi\hbar v} \ \ \ \ \ (20)

Dividing top and bottom by {v} and using 3 we get

\displaystyle  T_{i}=\frac{2\pi\hbar T_{e}}{E_{1}^{i}T_{e}-2\pi\hbar} \ \ \ \ \ (21)
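
If we want to double-check this algebra, we can redo it symbolically; as a sketch using sympy, where `Ti` is the exact solution of 19 and `claimed` is 21 with {T_{e}=a/v} substituted in:

```python
import sympy as sp

# Sketch: redo the algebra from eq. 19 to eq. 21 symbolically. Ti is the
# exact solution of eq. 19; `claimed` is eq. 21 with T_e = a/v substituted.
a, v, t, hbar, E1 = sp.symbols('a v t hbar E1', positive=True)
Ti = sp.solve(sp.Eq(2 * E1 * a * t / (2 * hbar * (a + v * t)), 2 * sp.pi), t)[0]
Te = a / v
claimed = 2 * sp.pi * hbar * Te / (E1 * Te - 2 * sp.pi * hbar)
print(sp.simplify(Ti - claimed))  # 0
```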

To satisfy the adiabatic condition, we need {T_{e}\gg T_{i}} so, using 7

\displaystyle   \frac{2\pi\hbar}{E_{1}^{i}T_{e}-2\pi\hbar} \displaystyle  \ll \displaystyle  1\ \ \ \ \ (22)
\displaystyle  E_{1}^{i}T_{e} \displaystyle  \gg \displaystyle  4\pi\hbar\ \ \ \ \ (23)
\displaystyle  \frac{8mav}{\pi\hbar} \displaystyle  \ll \displaystyle  1 \ \ \ \ \ (24)
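
The last step can also be verified symbolically: with {E_{1}^{i}=\pi^{2}\hbar^{2}/2ma^{2}} and {T_{e}=a/v}, the ratio {4\pi\hbar/E_{1}^{i}T_{e}} is exactly {8mav/\pi\hbar}. A sketch with sympy:

```python
import sympy as sp

# Sketch: check that E_1^i T_e >> 4 pi hbar is the same condition as
# 8 m a v / (pi hbar) << 1, using E_1^i = pi^2 hbar^2 / (2 m a^2), T_e = a/v.
m, a, v, hbar = sp.symbols('m a v hbar', positive=True)
E1 = sp.pi**2 * hbar**2 / (2 * m * a**2)
Te = a / v
ratio = 4 * sp.pi * hbar / (E1 * Te)     # must be << 1 for adiabaticity
print(sp.simplify(ratio - 8 * m * a * v / (sp.pi * hbar)))  # 0
```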

Apart from the numerical factors, which don't differ too drastically, we see from 18 that this effectively requires {\alpha\ll1}. In this limit the exponential {e^{-i\alpha z^{2}}} in 17 is approximately 1, so we can evaluate the integral to get

\displaystyle   c_{n} \displaystyle  \approx \displaystyle  \frac{2}{\pi}\int_{0}^{\pi}\sin\left(nz\right)\sin\left(z\right)dz\ \ \ \ \ (25)
\displaystyle  \displaystyle  = \displaystyle  \delta_{1n} \ \ \ \ \ (26)
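
We can watch this limit emerge numerically by evaluating the exact integral 17 by quadrature for decreasing {\alpha}; the {\left|c_{n}\right|} approach {\delta_{1n}}. A sketch (the choice of scipy and the sample values of {\alpha} are purely illustrative):

```python
import numpy as np
from scipy.integrate import quad

# Sketch: evaluate c_n from eq. 17 by numerical quadrature (real and
# imaginary parts separately) and watch it approach delta_{1n} as alpha -> 0.
def c(n, alpha):
    re = quad(lambda z: np.cos(alpha * z**2) * np.sin(n * z) * np.sin(z),
              0, np.pi)[0]
    im = quad(lambda z: -np.sin(alpha * z**2) * np.sin(n * z) * np.sin(z),
              0, np.pi)[0]
    return (2 / np.pi) * (re + 1j * im)

for alpha in (0.5, 0.05, 0.005):
    print(alpha, [round(abs(c(n, alpha)), 4) for n in (1, 2, 3)])
```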

Thus the system remains in the ground state {n=1} as the wall expands, which is what the adiabatic theorem predicts. The wave function is therefore

\displaystyle  \Phi_{1}\left(x,t\right)\approx\sqrt{\frac{2}{w}}\sin\left(\frac{\pi}{w}x\right)e^{i\left(mvx^{2}-2E_{1}^{i}at\right)/2\hbar w} \ \ \ \ \ (27)

The phase of this wave function is the part of the exponential's argument that doesn't depend on {x}, that is

\displaystyle  \theta\left(t\right)=-\frac{2E_{1}^{i}at}{2\hbar w}=-\frac{\pi^{2}\hbar t}{2ma\left(a+vt\right)} \ \ \ \ \ (28)

The instantaneous eigenvalue in the ground state is the original eigenvalue with the well width {a} replaced by the dynamic width {w}:

\displaystyle  E_{1}\left(t\right)=\frac{\pi^{2}\hbar^{2}}{2mw^{2}} \ \ \ \ \ (29)

If we integrate this over the time during which the wall has been moving, we get

\displaystyle   \int_{0}^{t}E_{1}\left(t'\right)dt' \displaystyle  = \displaystyle  \frac{\pi^{2}\hbar^{2}}{2m}\int_{0}^{t}\frac{dt'}{\left(a+vt'\right)^{2}}\ \ \ \ \ (30)
\displaystyle  \displaystyle  = \displaystyle  \frac{\pi^{2}\hbar^{2}t}{2ma\left(a+vt\right)}\ \ \ \ \ (31)
\displaystyle  \displaystyle  = \displaystyle  -\hbar\theta\left(t\right) \ \ \ \ \ (32)
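
That is, {\theta\left(t\right)} is just the usual dynamical phase {-\frac{1}{\hbar}\int_{0}^{t}E_{1}\left(t'\right)dt'}. As a final sanity check we can confirm 31 and 32 numerically; again a sketch, in units where {\hbar=m=1} with arbitrary values of {a}, {v} and {t}:

```python
import numpy as np
from scipy.integrate import quad

# Sketch (units hbar = m = 1, arbitrary a, v, t): check that the integral of
# the instantaneous eigenvalue E_1(t') reproduces -hbar * theta(t), eq. 32.
hbar = m = 1.0
a, v, t = 1.0, 0.3, 2.0
E1 = lambda tp: np.pi**2 * hbar**2 / (2 * m * (a + v * tp)**2)
integral = quad(E1, 0, t)[0]                              # eq. 30
theta = -np.pi**2 * hbar * t / (2 * m * a * (a + v * t))  # eq. 28
print(integral, -hbar * theta)  # the two values agree
```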
