
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1282

Elastic and inelastic scattering effects in conductance measurements at the nanoscale

A theoretical treatise

PETER BERGGREN

ACTA UNIVERSITATIS UPSALIENSIS
ISSN 1651-6214
ISBN 978-91-554-9321-9
urn:nbn:se:uu:diva-261609
UPPSALA 2015

Dissertation presented at Uppsala University to be publicly examined in Häggsalen, Lägerhyddsvägen 1, Uppsala, Friday, 16 October 2015 at 09:00 for the degree of Doctor of Philosophy. The examination will be conducted in English. Faculty examiner: Wolfgang Belzig (Universität Konstanz).

Abstract Berggren, P. 2015. Elastic and inelastic scattering effects in conductance measurements at the nanoscale. A theoretical treatise. Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1282. 87 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-554-9321-9.

Elastic and inelastic interactions are studied in tunnel junctions of a superconducting nanoelectromechanical setup and in response to recent experimental superconducting scanning tunneling microscope findings on a paramagnetic molecule. In addition, the electron density of molecular graphene is modeled by a scattering theory approach in very good agreement with experiment. All studies were conducted through the use of model Hamiltonians and a Green function formalism. The nanoelectromechanical system comprises two fixed superconducting leads in between which a cantilever-suspended superconducting island oscillates in an asymmetric fashion with respect to both fixed leads. The Josephson current is found to modulate the island motion, which in turn affects the current, such that parameter regions of periodic, quasi-periodic and chaotic behavior arise. Our modeled STM setup reproduces the experimentally obtained spin excitations of the paramagnetic molecule, and we show a probable cause for the increased uniaxial anisotropy observed when closing the gap distance between tip and substrate. A wider parameter space is also investigated, including effects of external magnetic fields, temperature and transverse anisotropy. Molecular graphene turns out to be well described by our adopted scattering theory, producing results that are in good agreement with experiment. Several point-like scattering centers are therefore well suited to describe a continuously decaying potential, and effects of impurities are easily calculated.

Keywords: Scattering theory, Scanning tunneling microscopy, tunnel junctions, molecular graphene, paramagnetic molecules, spin interaction, nanoelectromechanical system, Josephson junction, chaos

Peter Berggren, Department of Physics and Astronomy, Box 516, Uppsala University, SE-751 20 Uppsala, Sweden.

© Peter Berggren 2015

ISSN 1651-6214
ISBN 978-91-554-9321-9
urn:nbn:se:uu:diva-261609 (http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-261609)

Dedicated to my family

List of papers

This thesis is based on the following papers, which are referred to in the text by their Roman numerals.

I Stability and chaos of a driven nano electromechanical Josephson junction P. Berggren and J. Fransson

II Spin inelastic electron tunneling spectroscopy on local magnetic moment embedded in Josephson junction P. Berggren and J. Fransson

III Theory of spin inelastic tunneling spectroscopy for superconductor-superconductor and superconductor-metal junctions P. Berggren and J. Fransson

IV Molecular graphene under the eye of scattering theory H. Hammar, P. Berggren and J. Fransson

Reprints were made with permission from the publishers.

Contents

Part I: Introduction

Part II: Theoretical framework

1 Short introduction to quantum mechanics

2 Model making in many body systems
2.1 Hamiltonian descriptions
2.2 Spin Hamiltonian

3 Green functions in many body physics
3.1 General background to Green functions
3.2 Green functions in many-body systems
3.2.1 Variants of the many-body Green function
3.2.2 Green functions at finite temperature
3.2.3 Green functions of free field excitations
3.2.4 The Heisenberg equation of motion and the perturbative expansion

4 Tunnel junctions and scanning tunnelling microscopy
4.1 Scanning tunnelling microscopy (STM)
4.2 Theoretical description of tunnel junctions

5 Scattering theory for surface electrons interacting with Dirac delta-function like potentials

6 Superconductivity
6.1 Key points of BCS theory

7 Notes on chaos

Part III: Accessible versions of the published papers

8 Stability and chaos of a driven nanoelectromechanical Josephson junction
8.1 Results for zero bias voltage
8.2 Results for finite bias voltage

9 Theory of spin inelastic tunneling spectroscopy for Josephson and superconductor-metal junctions
9.1 Results for a spin 1 magnetic molecule
9.2 Results for a spin 5/2 magnetic molecule
9.3 Anisotropy dependence on tip to sample distance
9.4 Concluding remarks

10 Molecular graphene under the eye of scattering theory

11 Outlook

12 Acknowledgments

13 Summary in Swedish

References

Part I: Introduction

Research and general interest in nanotechnology has exploded in recent years as applications are becoming feasible also for medical, mechanical and consumer purposes [1]. In electronics the nanometer has been the relevant length scale for many years, and processor chips have continued to develop to a point where they are now manufactured with 14 nm architectures [2]. However, conventional technology has recently started to close in on the physical limits of size. Take for example the storage capacity of modern hard drives. These are constructed with a thin layer of a magnetic material spread over the plane surface of a substrate. The magnetic film is in turn divided into small sections, called domains, that have two different preferred magnetization directions discernible by the read head. Each domain represents one bit, encoded as a 1 or a 0 by the magnetic direction. The read head can change the bit state of the domain by supplying energy, such that the magnetic state can overcome the potential barrier that separates the two energy minima. The problem is that this process can happen spontaneously if the magnetic domain is thermally excited, and the stability with respect to thermal influence is set by the volume of the domain, for a given material. By making smaller domains the bits become more sensitive to thermal information loss, which obviously limits the density of bits that can be packed on the magnetic thin film.

Research into alternative ways in which a bit may be stored, with higher density, is therefore important as demand continues to grow with the information society. An additional benefit is that the energy associated with information storage and processing often is lowered as a wanted byproduct of smaller architectures. A candidate for a next generation technology in data storage is the use of densely packed magnetic molecules. These are molecules where a local atomic magnetic moment is positioned within a surrounding molecular cage that provides an anisotropy field generating preferred magnetic directions. The advantage of such molecules over thin film domains is that they are orders of magnitude smaller yet still have high enough barriers, suppressing spontaneous transitions between bit states, to be used at room temperature [3]. In papers II and III a magnetic molecule of this kind is considered, where the spin energy levels are mapped out in the environment of a superconductor-superconductor and a superconductor-metal gap. This is a setup commonly referred to as a tunnel junction, and the system has been reported to show some promising features, such as long mean lifetimes for the excited spin states [4], something that could open up for use in the capacity of a computer working memory, possibly in a quantum computer context.

Tunnel junctions generally consist of two conductors that are separated by a thin insulating layer. The usefulness of such a device comes from the quantum mechanical property of tunneling, which allows for electron flow between the conductors that is classically forbidden. Devices of this kind lie outside the area of common knowledge even though they appear in many electronics applications, such as hard drives and solar cells [5, 6, 7]. In experimental physics tunnel junctions play a very important role. Scanning tunneling microscopes (STMs), an invention that earned G. Binnig and H. Rohrer a Nobel Prize [8], are tunnel junctions that can be swept over a material surface and make measurements so detailed that individual atoms are "seen".
Apart from taking measurements the STM can also be used to move atoms on a surface into patterns of almost any two-dimensional shape. For measurements of extreme sensitivity to magnetic fields, tunnel junctions known as superconducting quantum interference devices (SQUIDs) are used. While papers II and III are modeled as STM experiments, paper I is a study of an asymmetric double tunnel junction that incorporates nanomechanical motion, under superconducting conditions. This setup reveals some interesting characteristics as the tunneling current is modulated by the motion of an oscillator. The interplay between current and vibrations opens up for different regions of operation that are periodic, quasi-periodic and chaotic.

In the final paper, IV, a scattering theoretical approach is used to study molecular graphene with great experimental agreement. Graphene is a single layer of the carbon honeycomb structure in graphite, several of which are laid down on a piece of paper when writing with a normal pencil. The material has many record breaking properties, such as being the strongest ever tested, and some of these properties carry over to molecular graphene, e.g. Dirac fermions. Molecular graphene is constructed by placing atomistic or molecular scattering centers on a metallic surface in a triangular pattern that is the dual to the honeycomb. The surface electron density is then forced into the honeycomb structure, which simulates graphene. The adopted theory is very flexible and it is shown that a number of point-like scattering centers can be used to simulate

a continuous, spatially decaying potential. Impurity defects are hence easily added or taken away.


Part II: Theoretical framework

Of all the people who will attempt to read this thesis my dad is one. His interest in physics in many ways kindled my own, and since he lacks formal education in the subject, the first introductory section serves to give him and others sharing a similar background some basic insight into quantum mechanics. Others who may read this thesis are experts in the field and know the theory intimately. This group cannot expect any new insights from the theory part, as the intention of the sections following the first is to provide undergraduate to graduate reference material for understanding or reading up on the contents of the published papers.

1. Short introduction to quantum mechanics

In the advent of physics as the science known to us, classical mechanics was the first branch to evolve, as it describes interactions of bodies on our own length scale. Objects within such dimensions are simply the most easily accessible for controlled experiments. In the late seventeenth century precise measurements and a new mathematical framework led Newton to bring forth a paradigm shift of science as he formulated laws of nature that relate quantities of physics, which also serve to define them in terms of each other. He famously stated that
$$\vec{F} = m\vec{a}, \qquad (1.1)$$
where $\vec{F}$ is the force acting on a body of mass $m$ to cause an acceleration of direction and magnitude $\vec{a}$. A force is hence defined as the quantity that makes a body of mass $m$ accelerate by the amount $a$ in a given direction. This equation, known as Newton's second law, is an equation of motion since it determines the movement of a body subject to an external force. Within the speeds, masses and forces obtainable for experiments in Newton's days, the motion of any body precisely followed the second law, and it became the cornerstone of what is now referred to as classical mechanics.

A common example that is easily solvable with the application of Newton's second law is the harmonic oscillator. This example transfers well to quantum mechanics and connects on a fundamental level to the quantum mechanical description of the world that is quantum field theory. Picture an object with mass $m$ tied to the end of a spring that in turn is tied to an immensely massive object $M$, such that $M \gg m$. The more massive object may then, as an approximation, be considered fixed and serve as the point of reference relative to which the less massive body moves. In reality the most common way to realize the setup is to let a small weight hang on a spring tied to a rigid armature securely fastened to earth. The origin of motion is defined to be where $m$ is at rest. If $x$ denotes the distance that $m$ is moved from the origin, the force acting on $m$ from the spring is $F = -k_s x$, where $k_s$ is the spring constant, whose value is lower for a soft spring than for a harder one. This is known as a restoring force because it acts to restore $m$ to its origin whether it is being pulled down or lifted up. To obtain a mathematical expression that predicts the position of $m$ at any specific time, given the set of initial conditions that determine how far $m$ has been pulled down before release and at which speed $m$ is let go, Newton's second law can be applied directly,
$$-k_s x = m a. \qquad (1.2)$$

In the one dimension $x$, acceleration is the change of speed over one time unit, $a = dv/dt$, and speed is the change of position over one time unit, $v = dx/dt$. The equation of motion may in other words be rewritten as

$$\frac{d^2x}{dt^2} = -\frac{k_s}{m}x, \qquad (1.3)$$
which is a differential equation fulfilled by

$$x(t) = x_0 \cos(\omega t), \qquad (1.4)$$
where $x_0$ is the amplitude of the oscillatory motion and $\omega = \sqrt{k_s/m}$ is the angular frequency, if $m$ is initially pulled down by $x_0$ before release. A pen attached to $m$ will draw the curve illustrated in Figure 1.1 if it touches a paper scrolling past $m$ at constant speed. This wave form is known as a sinusoidal curve and it predicts the position of $m$ at any given time since it represents the paper drawn to infinite time.

Before comparing the above example to the quantum mechanical counterpart, it is helpful to realize that, by observing the actions of objects in our surroundings, we condition and familiarize our brains to anticipate movement to such a degree that we obtain an intuitive understanding. Our intuitive understanding is fragile, however, and prone to misconception; even when observing macroscopic objects we may be surprised, as the simple rotating bicycle wheel experiment shows us when we are first exposed to it. If a person sits still on a swivel chair and is given a rotating bicycle wheel with handles in the hub, oriented vertically, he or she will start to spin together with the chair, about its axis of rotation, if the wheel is turned upside down. The physics is clear about this, since angular momentum must be conserved, but it surprises the intuition because we rarely come across the phenomenon in everyday life. It is also on our length scale that the classical laws hold up the best. When applied to the very large and the very small, careful measurements reveal that the observations don't match the Newtonian theories perfectly.

Through the evolution of technology these worlds of extremes became accessible, and by a collective effort in the early twentieth century the laws of nature for things massive and tiny, beyond our imagination, were formulated. Our brains instinctively try to picture these worlds, obscured to our eyes, through the lens of a macroscopic conditioning, and care must be taken not to force macroscopic concepts blatantly. The theories of relativity and quantum mechanics are therefore not easily understood, and exposure over time is needed to get accustomed. The quantum mechanical counterpart to Newton's second law is called the Schrödinger equation,

$$i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{r},t) = H\Psi(\mathbf{r},t), \qquad (1.5)$$
and it is formulated in terms of the energy of the system rather than the forces of Newton's second law. $\Psi(\mathbf{r},t)$ is the wave function of the system, and for simplicity, let's say that it represents a particle. The wave function then gives the probability amplitude for the particle to be at position $\mathbf{r}$ at time $t$. In other words, when we calculate what the particle is up to, it doesn't really exist at any specific point in space or time. We only know how the probability, $|\Psi(\mathbf{r},t)|^2$, of the particle to be somewhere develops. This whole notion is in stark contrast to classical mechanics, where the solution to the equation of motion tells you the position, speed and acceleration of an object at any given time. A quantum mechanical particle can therefore never be said to be somewhere unless it is measured and forced to make an imprint at that instance in the measuring device. $H$ is called the Hamiltonian and it contains the total energy of the system. For the particle this adds up to its kinetic energy and its potential energy, if it is affected by the surroundings. Mathematically,

$$H = -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r},t), \qquad (1.6)$$
where the first term is the kinetic energy and the second term is the potential energy. The other symbols that appear in the equation are $i$, the imaginary unit, $\hbar$, the reduced Planck constant, which for example relates the uncertainty of a particle's position to its uncertainty in momentum through $\Delta x\,\Delta p \geq \hbar/2$, and $m$, which is the mass.

Figure 1.1. (a) Sinusoidal curve that represents the position, x, of a harmonically oscillating body over time t. (b) A quantum harmonic oscillator has quantized energy levels (spaced by ħω, with the lowest level at ħω/2), and the curves show the probability amplitude and probability for a particle to be at a given point in space.

The quantum mechanical version of the harmonic oscillator example given earlier is referred to as a quantum harmonic oscillator, and it is illustrative to compare the differences. To find a solution the Schrödinger equation is solved with the potential term
$$V(r) = \frac{1}{2}m\omega^2 r^2, \qquad (1.7)$$
where $\omega$ is the angular frequency of the oscillator. The first notable difference is that the solution looks much more complicated,

$$\Psi_n(r) = \frac{1}{\sqrt{2^n n!}}\left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-\frac{m\omega r^2}{2\hbar}}(-1)^n e^{\frac{m\omega r^2}{\hbar}}\frac{d^n}{dr^n}\left(e^{-\frac{m\omega r^2}{\hbar}}\right), \quad n = 0,1,2,\ldots \qquad (1.8)$$
which is generally the case. The really interesting thing, however, is to look at the energy of the solution,
$$E_n = \hbar\omega\left(n + \frac{1}{2}\right), \qquad (1.9)$$
which has become quantized, meaning that the oscillator only vibrates at certain energies that are equally spaced, much like the string of a guitar only vibrates at certain frequencies, from the fundamental harmonic and upwards through its multiples. Figure 1.1 (a) illustrates the motion of a classical harmonic oscillator and the curve indicates where the moving body is at a given time on the t-axis. In contrast, (b) illustrates a quantum harmonic oscillator; the moving body can no longer be said to be at a certain place at a given time, instead the curves indicate what probability amplitude, $\Psi(r)$, and what probability, $|\Psi(r)|^2$, the body has to be at a given point for the different energy levels available. These energy levels all correspond to a quantum state of the oscillator, and to switch between them the oscillator has to give or be given an amount of energy equal to the level spacing. Since the quantum mechanical body has to be somewhere, the area under a probability curve sums up to $\int|\Psi(r)|^2\,dr = 1$.
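As a hedged illustration (not part of the thesis), the quantized energies (1.9) and the normalization of the eigenfunctions can be checked numerically. The sketch below uses the Hermite-polynomial form equivalent to the Rodrigues expression in eq. (1.8), in units where ħ = m = ω = 1; that choice of units is made purely for this example.

```python
import numpy as np
from scipy.special import eval_hermite, factorial

# Illustration only: units where hbar = m = omega = 1 are chosen here, not in the thesis.
hbar = m = omega = 1.0
r = np.linspace(-10, 10, 4001)

def psi(n, r):
    """Harmonic-oscillator eigenfunction, written via Hermite polynomials
    (equivalent to the Rodrigues form of eq. (1.8))."""
    xi = np.sqrt(m * omega / hbar) * r
    norm = (m * omega / (np.pi * hbar)) ** 0.25 / np.sqrt(2.0 ** n * factorial(n))
    return norm * eval_hermite(n, xi) * np.exp(-xi ** 2 / 2)

for n in range(4):
    area = np.trapz(psi(n, r) ** 2, r)   # the body has to be somewhere: should equal 1
    E_n = hbar * omega * (n + 0.5)       # the quantized, equally spaced energies of eq. (1.9)
    print(f"n = {n}: normalization = {area:.6f}, E_n = {E_n:.1f}")
```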

2. Model making in many body systems

In the previous section the fundamental laws of physics - classical and quantum mechanics - were touched upon. These laws are precisely formulated and perfectly describe our micro- and macroscopic world up to the most extreme circumstances regarding energy and gravitational fields. Consequently, most physicists do not deal with the advancement of fundamental physics itself but with the complexity of interacting systems. In mathematical terms these systems are often defined by a Hamiltonian of the form

$$H = H_0 + H_I, \qquad (2.1)$$
where $H$ is the total Hamiltonian including interactions, $H_0$ is a Hamiltonian with known solutions and $H_I$ accounts for the interactions. Obviously, finding solutions to the full Hamiltonian is the aim for theoretical studies of a particular problem. When quantum mechanics was first formulated, most problems that could conceivably yield analytical solutions were worked out very rapidly, e.g. the electron wave function and energy levels of hydrogen, the quantum harmonic oscillator, etc. Today researchers face problems that almost without exception demand approximate solutions, often obtained with the aid of computers. This is not surprising since the description of a real physical system generally concerns more than two interacting particles. Even classically, the limit for obtaining a closed analytical solution that can predict the system at any given time is two interacting particles. Hence, the eigenstates of the Hamiltonian (2.1), which constitute the solutions, often need to be sought by means of some perturbative expansion. These are methods whose use is limited to situations where the interaction can be considered weak, such that a solution can be written as a series expansion where successive terms include higher orders of the interaction and can be neglected.

This thesis compiles work where the perturbative methods are given within a Green function formalism, detailed in section 3. The problems considered are all of a many-body character, composed of enough particles that fluctuations from the average are small, to which a quantum field theoretic, or second quantization, description is suited.

In quantum field theory, fields are quantized into operators that create or destroy particle excitations of the field. The electromagnetic field is an example that can be quantized. In the process the field becomes an operator that acts on the quantum state of the field to generate or destroy the field excitations that are photons. For the "matter" field given by the wave function defined in (1.5)

above, the quantization promotes it to an operator that creates or destroys the field excitations that, for example, are given by electrons. Hence, while the wave function can give the probability for a particle to be at a specific point, the field theoretic operator can give the density of electrons at a given point.

2.1 Hamiltonian descriptions

The basic methodology applied in the studies of this thesis can be described in two steps. First, an interesting problem is identified and a model Hamiltonian, including the interactions needed to replicate the important features of the problem, is constructed. Second, a Green function formalism is applied to that Hamiltonian in order to reach the solution. This process boils down to finding a balance between the complexity of the Hamiltonian and the number of interactions deemed necessary to find a solution that has the sought features. The process is not foolproof and sometimes the results fail to give the desired answers. It may then be difficult to see whether the model Hamiltonian is to blame or if the Green functions were truncated at too low an order.

The thesis includes tunnel junction and scattering studies where material specific properties are of secondary importance. Materials are therefore purposefully replicated as metals or superconductors with ideal dispersion and density of states (DOS). Electrons within all metallic conductors are hence governed by the free particle Hamiltonian
$$H_0 = \sum_{\mathbf{k}\sigma}\varepsilon_{\mathbf{k}}\, c^\dagger_{\mathbf{k}\sigma}c_{\mathbf{k}\sigma}, \qquad (2.2)$$
where $c^\dagger_{\mathbf{k}\sigma}$ is the creation operator and $c_{\mathbf{k}\sigma}$ is the annihilation operator. The Planck constant $\hbar$ is generally set equal to 1 to simplify notation, the subscript $\mathbf{k}$ is the wave vector or momentum vector and $\sigma$ is the spin index. The superconducting state is given by the Bardeen, Cooper and Schrieffer (BCS) Hamiltonian, detailed in section 6.

Electron or quasiparticle tunneling is treated as scattering off of a tunneling potential, through an interaction Hamiltonian detailed in section 4.2. Other interactions include energy exchange with magnetic moments, which are detailed below.

The full Hamiltonian is also the generator of time evolution for a quantum mechanical system. In the following text equations are written in either the Heisenberg picture or the interaction picture. In the Heisenberg picture the time dependence of an operator is given by
$$c_{\mathbf{k}\sigma}(t) = e^{iHt}c_{\mathbf{k}\sigma}e^{-iHt}, \qquad (2.3)$$
while the state vectors are constant in time. In the interaction picture operators evolve in time as
$$c_{\mathbf{k}\sigma}(t) = e^{iH_0 t}c_{\mathbf{k}\sigma}e^{-iH_0 t}, \qquad (2.4)$$

and state vectors as

$$|\mathbf{k}\sigma(t)\rangle = e^{iH_0 t}e^{-i(H_0+H_I)t}|\mathbf{k}\sigma(0)\rangle. \qquad (2.5)$$
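A minimal numerical sketch of eqs. (2.1) and (2.3), using arbitrary toy matrices chosen here for illustration (not a model from the thesis), shows that the Heisenberg and Schrödinger pictures give identical expectation values:

```python
import numpy as np
from scipy.linalg import expm

# Toy two-level illustration; matrices and parameters are demonstration choices only.
H0 = np.diag([0.0, 1.0])                 # "known" part
HI = 0.3 * np.array([[0, 1], [1, 0]])    # "interaction" part
H  = H0 + HI                             # full Hamiltonian, cf. eq. (2.1)

A = np.array([[1.0, 0.0], [0.0, -1.0]])  # some observable
psi0 = np.array([1.0, 0.0])              # initial state
t = 0.7

# Schrödinger picture: evolve the state, keep the operator fixed
psi_t = expm(-1j * H * t) @ psi0
schrodinger = psi_t.conj() @ A @ psi_t

# Heisenberg picture: evolve the operator as in eq. (2.3), keep the state fixed
A_H = expm(1j * H * t) @ A @ expm(-1j * H * t)
heisenberg = psi0.conj() @ A_H @ psi0

print(np.allclose(schrodinger, heisenberg))   # True: the pictures agree
```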

2.2 Spin Hamiltonian

The Hamiltonian for a generic spin situated in an external magnetic field, B, and an anisotropy field may be written

$$H_S = -g\mu_B\mathbf{B}\cdot\mathbf{S} + DS_z^2 + E\left(S_x^2 - S_y^2\right), \qquad (2.6)$$
where $g$ is the g-factor, $\mu_B$ is the Bohr magneton, $\mathbf{S} = (S_x,S_y,S_z)$ is the spin vector and $D$ and $E$ are the uniaxial and the transverse anisotropy energies, defining the anisotropy field [3]. The first term of the Hamiltonian Zeeman-splits the spin eigenstates in energy by an amount proportional to the field strength, and the spin will in general align itself with the direction of the magnetic field. The second term is a phenomenological representation of the uniaxial anisotropy field, whose energy $D$ splits the spin state energy levels in a twofold degenerate way,

$$
\begin{aligned}
D < 0: \quad & E_{\pm N} < E_{\pm(N-1)} < E_{\pm(N-2)} < \ldots, \\
& E_{\pm N/2} < E_{\pm(N/2-1)} < E_{\pm(N/2-2)} < \ldots, \quad N \text{ odd}, \\
D > 0: \quad & E_{\pm N} > E_{\pm(N-1)} > E_{\pm(N-2)} > \ldots, \\
& E_{\pm N/2} > E_{\pm(N/2-1)} > E_{\pm(N/2-2)} > \ldots, \quad N \text{ odd}.
\end{aligned} \qquad (2.7)
$$
For $D < 0$ the state with the largest spin projection onto the Z-axis clearly has the lowest energy, which sets a favored spin direction along the Z-axis, often referred to as the "easy axis". For $D > 0$, on the other hand, the spin state with the smallest spin projection onto the Z-axis is favored and the spin will prefer to lie in the XY-plane, often referred to as an "easy plane". The last term in the spin Hamiltonian represents the transverse anisotropy and it expresses the difference between the X and Y directions. However, since it fails to commute with $S_z$ it mixes the spin states into linear combinations. The mixed states prevent any axis from being uniquely easy or hard, and by convention the axis directions are assigned such that $|D|$ is maximized and $E > 0$.
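Diagonalizing eq. (2.6) for a small spin is straightforward, and a quick numerical sketch makes the easy-axis/easy-plane discussion concrete. The spin value and the values of D, E and B below, as well as the units (g·μB = 1), are arbitrary choices for illustration, not parameters from papers II and III.

```python
import numpy as np

def spin_matrices(S):
    """Construct Sx, Sy, Sz for arbitrary spin S (hbar = 1)."""
    m = np.arange(S, -S - 1, -1)                   # basis |S, m>, m = S, S-1, ..., -S
    dim = len(m)
    Sz = np.diag(m)
    Sp = np.zeros((dim, dim))
    for i in range(1, dim):                        # S+|S,m> = sqrt(S(S+1) - m(m+1)) |S,m+1>
        Sp[i - 1, i] = np.sqrt(S * (S + 1) - m[i] * (m[i] + 1))
    Sm = Sp.T
    return (Sp + Sm) / 2, (Sp - Sm) / (2j), Sz

# Spin-1 illustration with an easy axis (D < 0), a small transverse term and a field along Z.
S = 1
Sx, Sy, Sz = spin_matrices(S)
D, E = -1.0, 0.1
B = np.array([0.0, 0.0, 0.5])                      # g*muB set to 1 for this sketch

H_S = -(B[0] * Sx + B[1] * Sy + B[2] * Sz) + D * Sz @ Sz + E * (Sx @ Sx - Sy @ Sy)  # eq. (2.6)
levels = np.linalg.eigvalsh(H_S)
print("spin level energies:", np.round(levels, 3))
```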

3. Green functions in many body physics

In first quantization the wave function is the premier quantity sought, from which all measurable properties of a system can be found. In second quantization, where we deal with many-body systems, the Green functions serve this purpose to a large extent. Green functions are often called propagators because they give the probability amplitude for a particle appearing at position x at time t to be at position x′ at a different time t′. Through this ability Green functions probe the entire space for a considered test particle, accounting for interactions with other particles and external potentials, to map out how the density of particles distributes in space at given times. For thorough reviews of Green functions, see references [9, 10].

3.1 General background to Green functions

The term Green function originally comes from mathematics, where they are also referred to as impulse response functions, because they solve inhomogeneous differential equations by measuring momentary impulses that may then be integrated over. Mathematically, if the differential operator $L(r)$ acts on a function such that
$$L(r)f(r) = u(r), \qquad (3.1)$$
then the Green function solves the related problem
$$L(r)G(r,r') = \delta(r - r'), \qquad (3.2)$$
where $\delta(r - r')$ is the Dirac delta function. The solution for the sought function $f(r)$ is then found by solving the integral equation
$$f(r) = \int G(r,r')u(r')\,dr'. \qquad (3.3)$$
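Before moving on to the quantum mechanical case, here is a minimal discretized sketch of eqs. (3.1)-(3.3). The operator L = -d²/dx² with fixed boundaries and the source u(x) = sin(πx) are arbitrary choices made for this illustration; on a grid, the matrix inverse plays the role of the Green function.

```python
import numpy as np

# Discrete illustration of eqs. (3.1)-(3.3): L f = u solved via the Green function.
N  = 200
x  = np.linspace(0.0, 1.0, N + 2)[1:-1]          # interior grid points, f(0) = f(1) = 0
dx = x[1] - x[0]

main = 2.0 * np.ones(N) / dx**2
off  = -1.0 * np.ones(N - 1) / dx**2
L = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # finite-difference -d^2/dx^2

u = np.sin(np.pi * x)                            # source term u(x)

G = np.linalg.inv(L) / dx                        # discrete version of L G(x,x') = delta(x - x')
f_green  = (G * dx) @ u                          # f(x) = integral of G(x,x') u(x') dx', eq. (3.3)
f_direct = np.linalg.solve(L, u)                 # solving L f = u directly

print(np.allclose(f_green, f_direct))            # True: both routes give the same solution
print("max deviation from sin(pi x)/pi^2:",
      np.abs(f_green - np.sin(np.pi * x) / np.pi**2).max())
```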

In quantum mechanics we are often looking for the solution to the similar problem,
$$[H_0 + V(r)]\Psi(r,t) = i\partial_t\Psi(r,t), \qquad (3.4)$$
where the solutions $\psi(r,t)$ to $H_0$ are known, $V(r)$ accounts for additional external potentials, and a time dependence $t$ is included. Inspired by eq. (3.2) we may then define Green functions that satisfy
$$[i\partial_t - H_0]\,g(x,x') = \delta(x - x'), \qquad (3.5)$$
$$[i\partial_t - H_0 - V(r)]\,G(x,x') = \delta(x - x'), \qquad (3.6)$$

where $x = (r,t)$, the lower case $g(x,x')$ signifies that it solves the equation without $V(x)$ and the upper case $G(x,x')$ signifies that it solves the equation with full interactions. $g(x)$ and $G(x)$ will from now on be referred to as bare and dressed Green functions, respectively. The Schrödinger equation (3.4) may be used to connect the wave functions with the Green functions,
$$\Psi(x,t) = \psi(x,t) + \int dx'dt'\,g(x,x';t,t')V(x')\Psi(x',t'), \qquad (3.7)$$
$$\Psi(x,t) = \psi(x,t) + \int dx'dt'\,G(x,x';t,t')V(x')\psi(x',t'), \qquad (3.8)$$
through two equivalent integral equations that include either the bare or the dressed Green function. These equations are seemingly as difficult to solve as the original equation (3.4). By inspection it is, however, easy to see that an iterative process may be employed on eq. (3.7) to generate an infinite series,

$$
\begin{aligned}
\Psi &= \psi + gV\psi + gVgV\psi + gVgVgV\psi + \cdots \\
&= \psi + (g + gVg + gVgVg + \cdots)V\psi, \qquad (3.9)
\end{aligned}
$$
where the implied integrals and the variable dependence have been excluded for clarity. In the last step eq. (3.8) is regained if we equate

$$
\begin{aligned}
G &= g + gVg + gVgVg + gVgVgVg + \cdots \\
&= g + gV(g + gVg + gVgVg + gVgVgVg + \cdots), \qquad (3.10)
\end{aligned}
$$
where the expression in the parenthesis is once again nothing but $G$. This equation,
$$G = g + gVG, \qquad (3.11)$$
is known as the Dyson equation, and for all convergent series expansions we now have a powerful and foolproof way of obtaining the Green function, $G$, in a perturbative manner.

To see why the Green function is referred to as a propagator in mathematical terms, we observe that the probability amplitude for a particle located at $x'$ at time $t'$ to be at the position $x$ at time $t$ can be found by applying the time evolution operator in the Schrödinger picture, $U(t,t') = \exp[-iH(t-t')]$, to an initial quantum state at time $t'$,

$$|\alpha,t\rangle = e^{-iH(t-t')}|\alpha,t'\rangle, \qquad (3.12)$$
where $H$ is the total Hamiltonian $H = H_0 + V$. If a complete set of position states, $\int dx'\,|x'\rangle\langle x'|$, and eigenstates, $\sum_a|a\rangle\langle a|$, are introduced while we multiply the equation from the left with the position state $\langle x|$, we get
$$\langle x|\alpha,t\rangle = \sum_a\int dx'dt'\,\langle x|a\rangle\langle a|x'\rangle\langle x'|\alpha,t'\rangle e^{-iE_a(t-t')}, \qquad (3.13)$$

which equates to
$$\Psi(x,t) = \int dx'dt'\,G^r(x,x',t,t')\Psi(x',t'), \qquad (3.14)$$

if $G^r(x,x',t,t') = -i\theta(t-t')\langle x|e^{-iH(t-t')}|x'\rangle$. This function incidentally satisfies equation (3.6) and is, hence, nothing but the Green function, which clearly acts on the wave function $\Psi(x',t')$ and propagates it to $\Psi(x,t)$. In the last derivation step the superscript r is added to the Green function to signify that we explicitly consider causal interactions, ensured by the Heaviside step function $\theta(t-t')$, such that interactions occurring at $t'$ only affect the system at later times $t > t'$. Green functions that satisfy this condition are called retarded, while anticausal Green functions, which are nonzero only for $t' > t$, are called advanced.

Functions structured like the Green functions will also be referred to as correlation functions in general, because they give the correlation between quantum states that may differ in other parameters (and quantum numbers) than position and time.
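The Dyson equation (3.11) can be made concrete with finite matrices, where the bare and dressed Green functions are simply resolvents. The sketch below uses an arbitrary 3×3 H₀ and a small V, chosen here so that the series converges; it is not a system studied in the thesis. It iterates G → g + gVG and compares with the exact inverse.

```python
import numpy as np

# Matrix illustration of the Dyson equation (3.11), iterated as in the series (3.10).
H0 = np.diag([0.0, 1.0, 2.0])
V  = 0.1 * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
omega, eta = 0.4, 0.05                       # energy and a small positive broadening

I = np.eye(3)
g = np.linalg.inv((omega + 1j * eta) * I - H0)          # bare (retarded) Green function
G_exact = np.linalg.inv((omega + 1j * eta) * I - H0 - V)

G = g.copy()
for _ in range(200):                         # G -> g + g V G until self-consistent
    G = g + g @ V @ G

print(np.allclose(G, G_exact, atol=1e-10))   # True: the iterated series sums to the dressed G
```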

3.2 Green functions in many-body systems

In the previous section we defined the retarded Green function, $G^r$, that propagates a wave function through space from an earlier to a later time. In second quantization for many-body systems the equivalent Green function,

$$G^r(x,x',t,t') = -i\theta(t-t')\langle\{c_\lambda(x,t),\,c^\dagger_\lambda(x',t')\}\rangle, \qquad (3.15)$$
will be used mainly on its own merits, which demands a careful interpretation of its physical meaning. In order to help form a mental picture and connect to a physical system, $c$ is intentionally used to denote the creation and annihilation operators, which is commonly reserved for electrons. Note also that the position dependence makes $c_\lambda(x,t)$ and $c^\dagger_\lambda(x',t')$ field operators, strictly speaking. The use of "c" implies that we are dealing with fermions, confined to Fermi-Dirac statistics¹, which is why the anti-commutator, $\{A,B\} = AB + BA$, appears within the brackets. For bosons, confined to Bose-Einstein statistics², the usual commutator, $[A,B] = AB - BA$, needs to be used, and "a" will denote a generic boson destruction operator from this point on. The subscript $\lambda$ is the collection of quantum numbers that define the specific particle considered - for electrons generally $\lambda = \mathbf{p},\sigma$, where $\mathbf{p}$ is the momentum and $\sigma$ is the spin. At zero temperature the bras and kets, $\langle\ldots\rangle$, represent the ground state of the system Hamiltonian, $H$, or the ground state of the unperturbed Hamiltonian, $H_0$. Which of these the bras and kets refer to is implied by their use.

¹ See equation (3.36) for the finite temperature dependence.
² See equation (3.37) for the finite temperature dependence.

The Green functions of the unperturbed Hamiltonian are generally known and are used to find an approximation for the unknown Green function of the full Hamiltonian.

To get a sense of the physical process the Green function describes, we look at how the operators act on different occupied or unoccupied Fock states. When the creation operator acts on the state vector for the full Hamiltonian, in e.g. $\theta(t-t')\langle c_\lambda(x,t)c^\dagger_\lambda(x',t')\rangle$, at $t'$ the result is 0 if it tries to create a particle in a state that is already occupied, due to its fermionic properties. If the state is unoccupied, an electron is created at position $x'$ that will scatter until $t$, when the annihilation operator destroys the electron at position $x$. The scattering events modify the amount of probability amplitude that can be found at $x$ a time $t-t'$ later, which is what the Green function measures.

If we consider $\theta(t'-t)\langle c^\dagger_\lambda(x',t')c_\lambda(x,t)\rangle$ instead, the destruction operator is the first to act on the Fock state, at the earlier time $t$. In order to yield a nonzero result the state defined by $\lambda$ needs to be occupied in this case. This condition is fulfilled by all energy states below the Fermi energy, where the removal of a negatively charged particle creates a hole with positive charge. A Green function of this kind hence propagates a hole rather than an electron, or more strictly, gives the amplitude for a hole created at $x$ to be at $x'$ a time $t'-t$ later.

The order of the operators evidently plays an important role for what physical process the Green function describes, and since the operators act at different times, so does the ordering with respect to time. For electrons, causality dictates that operators of earlier events on the time line should be moved to the right of later events, regardless of the number of operators within the bracket. Since the operators follow commutation and anti-commutation relations these movements must be done with care. In order to emphasize where time ordering should take place we introduce the time ordering operator, $T\{\ldots\}$, that ensures a causal behavior. The generalized Green function for the above discussed physical events is for example defined

$$
\begin{aligned}
G^t(x,x',t,t') &= -i\langle T\{c_\lambda(x,t)c^\dagger_\lambda(x',t')\}\rangle \\
&= -i\theta(t-t')\langle c_\lambda(x,t)c^\dagger_\lambda(x',t')\rangle + i\theta(t'-t)\langle c^\dagger_\lambda(x',t')c_\lambda(x,t)\rangle, \qquad (3.16)
\end{aligned}
$$
where the time ordering operator arranges the field operators appropriately to the step functions. Note the different signs of the two terms, which are a consequence of the fermionic anti-commutation behavior, as opposed to the bosonic case where both terms share the same sign.

For a static system, only the time difference, $t-t'$, between the two operators is important. The time dependence therefore disappears if the two operators act on the state at equal times, and if the state vectors are of $H_0$ we get
$$\langle c^\dagger_\lambda(t)c_{\lambda'}(t)\rangle = \delta_{\lambda\lambda'}\theta(-\varepsilon_\lambda) \qquad (3.17)$$

at zero temperature. The number operator $n(\lambda) = c^\dagger_\lambda c_\lambda$ simply counts the occupation of the state $\lambda$, which is either 1 or 0 at zero temperature, where the Fermi-Dirac distribution function $f(\varepsilon) = \theta(-\varepsilon)$. At finite temperatures, however, where the Fermi-Dirac distribution governs the occupation of fermions, states may be occupied by fractions, as we shall see.

For a static system where the operators act on a state of the full Hamiltonian at different times, however, we need a way to connect these to the known states of the noninteracting Hamiltonian. By working in the interaction picture the time dependence is separated such that the interaction evolves the states while the unperturbed Hamiltonian evolves the operators. The interaction may then be turned on adiabatically in the infinite past to evolve the unperturbed state into the state of full interaction we seek. To regain the unperturbed state in the infinite future the interaction is turned off adiabatically. This procedure is captured in the Gell-Mann–Low theorem, which states that there exists an operator
$$S(t,t') = U(t)U^\dagger(t') \;\Rightarrow\; \partial_t S(t,t') = -iV(t)S(t,t') \;\Rightarrow\; S(t,t') = T\left\{e^{-i\int_{t'}^{t}dt_1\,V(t_1)}\right\}, \qquad (3.18)$$

where $V(t_1) = e^{iH_0 t_1}Ve^{-iH_0 t_1}$ and $U(t) = \exp(iH_0 t)\exp(-iHt)$ is the time-evolution operator in the interaction picture, that evolves the ground state of the unperturbed Hamiltonian to the ground state of the interacting Hamiltonian by the operation $|\Psi_0\rangle = S(0,-\infty)|\psi_0\rangle$ and analogously $\langle\Psi_0| = \langle\psi_0|S(\infty,0)$. The time ordered Green function can then be written as
$$
\begin{aligned}
G^t(t,t') &= -i\langle\Psi_0|T\{c_\lambda(t)c^\dagger_\lambda(t')\}|\Psi_0\rangle \\
&= -i\theta(t-t')\,\langle\psi_0|S(-\infty,0)S(0,t)c_\lambda(t)S(t,0)S(0,t')c^\dagger_\lambda(t')S(t',0)S(0,-\infty)|\psi_0\rangle \\
&\quad + i\theta(t'-t)\,\langle\psi_0|S(-\infty,0)S(0,t')c^\dagger_\lambda(t')S(t',0)S(0,t)c_\lambda(t)S(t,0)S(0,-\infty)|\psi_0\rangle. \qquad (3.19)
\end{aligned}
$$
Both the bra and ket state vectors are now defined as the ground state in the infinite past, but we ideally would like to have the bra run to the infinite future since it sets the upper integration limit of the S-matrix expansion in eq. (3.18). To fix this the bra state may be rewritten as

$$\langle\psi_0|S(-\infty,0) = \frac{\langle\psi_0|S(\infty,-\infty)S(-\infty,0)}{\langle\psi_0|S(\infty,-\infty)|\psi_0\rangle} = \frac{\langle\psi_0|S(\infty,0)}{\langle\psi_0|S(\infty,-\infty)|\psi_0\rangle}, \qquad (3.20)$$
which allows us to simplify the time ordered Green function (3.19) to
$$G^t(t,t') = \frac{-i\langle\psi_0|T\{c_\lambda(t)c^\dagger_\lambda(t')S(\infty,-\infty)\}|\psi_0\rangle}{\langle\psi_0|S(\infty,-\infty)|\psi_0\rangle}. \qquad (3.21)$$

The time ordered Green function is now expressed in terms of known ground state vectors, and the full Green function can be found in a perturbative fashion or exactly by expanding the S-matrix,

$$S(\infty,-\infty) = Te^{-i\int_{-\infty}^{\infty}V(t')dt'} = \sum_{n=0}^{\infty}\frac{(-i)^n}{n!}\int_{-\infty}^{\infty}dt_1\cdots dt_n\,T\{V(t_1)\cdots V(t_n)\}. \qquad (3.22)$$
It is important to remark here that the denominator in (3.21) in general produces an infinite sum of terms. These are referred to as vacuum polarization terms and show up as disconnected diagrams in the Feynman diagram representation. As luck would have it, however, these terms also show up in the numerator to completely cancel the denominator. A result of this cancellation is that the Green function may be calculated by discarding the denominator while only terms that correspond to connected Feynman diagrams are kept.

The higher orders in the S-matrix expansion will contain many creation and annihilation operators in even numbers. In order to separate these into pairs that we can interpret as Green functions there is a method that follows what is known as Wick's theorem. It states that an expression of several operators should be uncoupled into a sum of all possible pairings where each one is time ordered, e.g.

$$
\begin{aligned}
\langle T\{c_\alpha(t_1)c^\dagger_\beta(t_2)c_\lambda(t_3)c^\dagger_\delta(t_4)\}\rangle_0 &= \langle T\{c_\alpha(t_1)c^\dagger_\beta(t_2)\}\rangle_0\langle T\{c_\lambda(t_3)c^\dagger_\delta(t_4)\}\rangle_0 \\
&\quad - \langle T\{c_\alpha(t_1)c^\dagger_\delta(t_4)\}\rangle_0\langle T\{c_\lambda(t_3)c^\dagger_\beta(t_2)\}\rangle_0 \qquad (3.23)
\end{aligned}
$$
for fermions.

Under non-equilibrium conditions, or generally in cases where the studied system, initially known at $t \to -\infty$ as $|\psi\rangle$, fails to go back to that initial state as $t \to \infty$ in the future, the state $\langle\Psi(t\to\infty)|$ remains completely unknown. In order to avoid the reference to the infinite future and circumvent this problem, a method has been developed based on the idea that the integration path in the S-matrix expansion can be shifted slightly into the complex plane, just above or below the real time axis. The contour path of integration can then run from $\tau \to -\infty + i\delta$, where the initial state is known, in a loop that smoothly transitions to the lower complex plane at $\tau = a$, such that the contour follows the real time axis back to the initial state along $\tau = t - i\delta$ as $t \to -\infty$. When the limit $a \to \infty$ is taken, all possible events may transpire in time, without having to refer to a final state in the infinite future. This mathematical trick may seem questionable at a first glance but is actually well grounded and works for non-equilibrium as well as equilibrium conditions, at the expense of a somewhat raised mathematical complexity [11].
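The pairing structure behind eq. (3.23) can be verified by brute force for a small system. The sketch below checks the equal-time, thermal form of Wick's theorem, ⟨ABCD⟩ = ⟨AB⟩⟨CD⟩ − ⟨AC⟩⟨BD⟩ + ⟨AD⟩⟨BC⟩, for two fermionic modes governed by a quadratic Hamiltonian; this equal-time variant, the Hamiltonian parameters and the chosen operator string are assumptions made for this illustration, not a calculation from the thesis.

```python
import numpy as np
from scipy.linalg import expm

# Two fermionic modes built by a Jordan-Wigner construction; all parameters are arbitrary.
a  = np.array([[0.0, 1.0], [0.0, 0.0]])      # single-mode annihilation operator
Z  = np.diag([1.0, -1.0])                    # Jordan-Wigner string
I2 = np.eye(2)
c1, c2 = np.kron(a, I2), np.kron(Z, a)       # two anticommuting fermion operators
c1d, c2d = c1.T, c2.T

eps1, eps2, hop, beta = 0.3, 1.1, 0.4, 2.0
H0 = eps1 * c1d @ c1 + eps2 * c2d @ c2 + hop * (c1d @ c2 + c2d @ c1)   # quadratic Hamiltonian
rho = expm(-beta * H0)
rho /= np.trace(rho)                         # thermal density matrix
avg = lambda A: np.trace(rho @ A)

A, B, C, D = c1, c2d, c2, c1d                # an arbitrary four-operator string
lhs = avg(A @ B @ C @ D)
rhs = avg(A @ B) * avg(C @ D) - avg(A @ C) * avg(B @ D) + avg(A @ D) * avg(B @ C)
print(np.allclose(lhs, rhs))                 # True: the average decouples into signed pairings
```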

3.2.1 Variants of the many-body Green function

So far we have defined two different kinds of Green functions that are both important as computational tools and because they carry direct physical meaning. In the previous section it was hinted that non-equilibrium situations demand an even richer Green function toolbox. Though not all calculations behind the papers compiled in this thesis were done within the non-equilibrium framework, the same Green function definitions are used throughout. The most commonly appearing - the lesser, $G^<$, and greater, $G^>$ - Green functions also serve as building blocks for the remaining four and are, for fermions, defined as

$$
\begin{aligned}
G^>(x,x',t,t') &= -i\langle c_\lambda(x,t)c^\dagger_\lambda(x',t')\rangle, \\
G^<(x,x',t,t') &= i\langle c^\dagger_\lambda(x',t')c_\lambda(x,t)\rangle, \qquad (3.24)
\end{aligned}
$$
where the signs < and > indicate the time ordering of the operators. If the integration path is along the complex contour around the real time axis, $t > t'$ implies that $t'$ is a point to the left of the later time $t$ on the upper time contour. If the events indicated by $t$ and $t'$ occur on the lower time contour, $t > t'$ means that the later time $t$ marks a point to the left of $t'$. The other relevant Green functions are
$$
\begin{aligned}
G^t(x,x',t,t') &= \theta(t-t')G^>(x,x',t,t') + \theta(t'-t)G^<(x,x',t,t'), \\
G^{\bar t}(x,x',t,t') &= \theta(t'-t)G^>(x,x',t,t') + \theta(t-t')G^<(x,x',t,t'), \\
G^r(x,x',t,t') &= \theta(t-t')\left[G^>(x,x',t,t') - G^<(x,x',t,t')\right], \\
G^a(x,x',t,t') &= \theta(t'-t)\left[G^<(x,x',t,t') - G^>(x,x',t,t')\right], \qquad (3.25)
\end{aligned}
$$
of which the time ordered, $G^t$, and retarded, $G^r$, were already defined earlier. The anti-time-ordered Green function, $G^{\bar t}$, is for leftward bound complex time contours what the time-ordered Green function, $G^t$, is for rightward bound paths. $G^a$, called the advanced Green function, is the time order opposite of $G^r$. The corresponding Green function lineup for bosons, such as phonons, can be generated using the expressions (3.25) above if the lesser and greater Green functions are redefined,
$$
\begin{aligned}
D^>(x,x',t,t') &= -i\langle Q_\lambda(x,t)Q_\lambda(x',t')\rangle, \\
D^<(x,x',t,t') &= -i\langle Q_\lambda(x',t')Q_\lambda(x,t)\rangle, \qquad (3.26)
\end{aligned}
$$
to account for the ordinary commutation relation. $Q_q = a_q + a^\dagger_{-q}$ is the Hermitian displacement operator for the coupled atoms of a lattice.

3.2.2 Green functions at finite temperature

At zero temperature all fermions within a given system will occupy the lowest lying energy states one by one, while a similar set of bosons all share the single

lowest energy state. At finite temperature the system is often considered in the grand canonical ensemble, where contact with a large heat bath keeps the temperature of the system fixed at the same time as the particle number may fluctuate. Since the particle mean energy of the system is directly related to the temperature, several particles necessarily occupy higher energy states in some configuration, as opposed to the zero temperature case. For many-body systems the exact configuration of occupied states is never known, and because a large number of configurations share the same energy, no single known state closes the Green function. Instead we look at the average amplitude given by all possible states, weighted by the probability of finding the system in each one of them. Within the constraints of the canonical ensemble, where the studied system is in contact with a heat bath that may exchange thermal energy with the system but not particles, the probability of finding the system in a specific state is given by the Boltzmann distribution $P(E_i) = \exp(-\beta E_i)/Z$, where the partition function is $Z = \sum_i\exp(-\beta E_i)$ and $\beta = (k_B T)^{-1}$, where $k_B$ is the Boltzmann constant. In the Green function formalism we introduce this temperature dependence by defining the density matrix operator
$$\rho = e^{-\beta H} = \sum_\nu|\nu\rangle e^{-\beta E_\nu}\langle\nu| \qquad (3.27)$$
and changing the bracket definition in accordance with

$$\langle c_\lambda(t)c^\dagger_\lambda(t')\rangle = \frac{\sum_\nu\langle\nu|\rho\,c_\lambda(t)c^\dagger_\lambda(t')|\nu\rangle}{\sum_\nu\langle\nu|\rho|\nu\rangle}. \qquad (3.28)$$
This correlation function now expresses the amplitude for a particle, with quantum numbers $\lambda$, created at $t'$ to be destroyed at $t$, weighted by the probability that the acted upon Fock states are occupied. Note also that the correlation function references one temperature only, which makes it viable to use in situations restricted to thermal equilibrium, where the Green function is the same between any equally spaced times. If the perturbative potential lacks any explicit time dependence the Green function $G(t,t')$ can consequently be written $G(t-t')$, which permits us to go back and forth between time and frequency space through Fourier transform.

As it stands above, the expression is written in the Heisenberg picture, but $\lambda$ is not generally an eigenstate of the total Hamiltonian, and if we want to use the S-matrix expansion in the interaction picture to access the unperturbed eigenstates we see that the thermodynamic factor $\exp[-\beta(H_0+V)]$ also includes $V$ in addition to the time evolution operators $\exp[-it(H_0+V)]$. A convenient way to expand both factors, on equal footing, is to redefine time by shifting it into the complex plane through $it \to \tau$, so that $\exp(-\beta H)\exp(-itH) \to \exp[-(\beta+\tau)H]$. It is now possible to define a time ordering operator, $T_\tau$, for imaginary times $\tau$ that works just like the time ordering operator for real time, e.g. $T_\tau\{\hat c(\tau)\hat c^\dagger(\tau')\}$ equals $\hat c(\tau)\hat c^\dagger(\tau')$ if $\tau > \tau'$ and $\hat c^\dagger(\tau')\hat c(\tau)$ if $\tau < \tau'$.

The time evolution operator for $\tau$ is simply defined
$$\hat U(\tau) = e^{H_0\tau}e^{-H\tau} \qquad (3.29)$$
in direct correspondence to the real time case, and the S-matrix,

$$\hat S(\tau,\tau') = \hat U(\tau)\hat U^{-1}(\tau') \;\Rightarrow\; \partial_\tau\hat S(\tau,\tau') = -\hat V(\tau)\hat S(\tau,\tau') \;\Rightarrow\; \hat S(\tau,\tau') = T_\tau\left\{e^{-\int_{\tau'}^{\tau}d\tau_1\,\hat V(\tau_1)}\right\}, \qquad (3.30)$$
expands as expected. Note the absence of the imaginary unit in the expressions. The thermodynamic density matrix can now be expressed in terms of the $\hat S$-matrix if we consider $\beta$ to be an imaginary time, $\exp(-\beta H) = \exp(-\beta H_0)\hat S(\beta,0)$. Through the mathematical trick of viewing $\beta$ as an imaginary time, a general time-ordered correlation function can be written

$$
\begin{aligned}
\langle T_\tau\{\hat c_\lambda(\tau)\hat c^\dagger_\lambda(\tau')\}\rangle &= \frac{\sum_\nu\langle\nu|e^{-\beta H}T_\tau\{\hat c_\lambda(\tau)\hat c^\dagger_\lambda(\tau')\}|\nu\rangle}{\sum_\nu\langle\nu|e^{-\beta H}|\nu\rangle} \\
&= \frac{\sum_\nu\langle\nu|e^{-\beta H_0}\hat S(\beta,0)\,T_\tau\{\hat S(0,\tau)\hat c_\lambda(\tau)\hat S(\tau,\tau')\hat c^\dagger_\lambda(\tau')\hat S(\tau',0)\}|\nu\rangle}{\sum_\nu\langle\nu|e^{-\beta H_0}\hat S(\beta,0)|\nu\rangle} \\
&= \frac{\langle T_\tau\{\hat S(\beta,0)\hat c_\lambda(\tau)\hat c^\dagger_\lambda(\tau')\}\rangle_0}{\langle\hat S(\beta,0)\rangle_0}, \qquad (3.31)
\end{aligned}
$$
where the properties of the $\hat S$-matrix and the time ordering operator have been used, and the brackets in the last step equal $\langle\cdots\rangle_0 = \sum_\nu\langle\nu|\exp(-\beta H_0)\cdots|\nu\rangle$. The correlation function can now be calculated, just as in the zero temperature case, by expanding the $\hat S$-matrix,

$$\langle T_\tau\{\hat c_\lambda(\tau)\hat c^\dagger_\lambda(\tau')\}\rangle = \frac{\sum_n\frac{(-1)^n}{n!}\int_0^\beta d\tau_1\cdots d\tau_n\,\langle T_\tau\,\hat c_\lambda(\tau)\hat V(\tau_1)\cdots\hat V(\tau_n)\hat c^\dagger_\lambda(\tau')\rangle_0}{\langle\hat S(\beta,0)\rangle_0}, \qquad (3.32)$$
where the creation and annihilation operators now act on the unperturbed states. When defined as

$$G^M_\lambda(\tau,\tau') \equiv -\langle T_\tau\{\hat c_\lambda(\tau)\hat c^\dagger_\lambda(\tau')\}\rangle, \qquad (3.33)$$
this correlation function is referred to as the Matsubara Green function.

To decouple averages in the sum of (3.32) that contain more than two operators in the correct way, we once again use Wick's theorem. The integration limits of the $\hat S$-matrix expansion go from 0 to $\beta$ in the imaginary time formalism, and it is not as evident why the unperturbed states are recovered in these limits as it is when the temperature is 0. It can, however, be shown that

the imaginary time domain is limited to $-\beta \leq \tau \leq \beta$ for the Matsubara Green function. The cyclic properties of the trace over states, $\sum_\nu\langle\nu|\cdots|\nu\rangle$, in fact force the Matsubara Green function not only to depend solely on the time difference, such that $G^M(\tau,\tau') = G^M(\tau-\tau')$, but also to abide by the property

$$
\begin{aligned}
G^M(\tau) &= G^M(\tau+\beta) \quad\text{if } -\beta < \tau < 0 \quad\text{for bosons}, \\
G^M(\tau) &= -G^M(\tau+\beta) \quad\text{if } -\beta < \tau < 0 \quad\text{for fermions}, \qquad (3.34)
\end{aligned}
$$
and conversely for $0 < \tau < \beta$. According to Fourier analysis these properties allow for the definition of the Fourier transform
$$
\begin{aligned}
G^M(i\omega_n) &= \int_0^\beta d\tau\,G^M(\tau)e^{i\omega_n\tau}, \\
G^M(\tau) &= \frac{1}{\beta}\sum_n e^{-i\omega_n\tau}G^M(i\omega_n), \qquad (3.35) \\
\omega_n &= \frac{(2n+1)\pi}{\beta} \quad\text{for fermions}.
\end{aligned}
$$
For bosons the same relations hold with $\omega_n = 2n\pi/\beta$; see [9] for a thorough discussion. In frequency space, where most calculations are done, the $\hat S$-matrix expansion of the Matsubara Green function can be rewritten as a Dyson equation that may be used to find the full function in a perturbative fashion.

We have now found ways to obtain the Matsubara Green function at finite temperatures in both imaginary time and frequency space. The Matsubara Green function does not correspond to a physical quantity directly, however, and to be useful it needs to be translated into a Green function of real time. This process turns out to be very simple, and it is the reason why the Matsubara Green function is useful. Once the Matsubara Green function has been found in frequency space, the analytic continuation $i\omega_n \to \omega + i\delta$ immediately produces the Fourier transform of the retarded Green function defined in (3.25).

The Matsubara Green function method is not the only way to calculate the correct retarded Green function at finite temperatures. In real time the time ordered Green function may also be expanded into a Dyson type equation by application of the Heisenberg equation of motion, $i\partial_t a(t) = [a,H](t)$. The retarded Green function can then be found by means of the non-equilibrium theory that we will look closer at in section 3.2.4 and ??.

A special, and very useful, case where results can be found at finite temperatures without resorting to the Matsubara or non-equilibrium methods concerns unperturbed excitations. The Green function of free electrons is, for example, ultimately what the more complicated interacting electron expressions are expanded in. A free electron is defined by its quantum numbers $\lambda = \mathbf{k},\sigma$, where $\mathbf{k}$ is the wave vector, directly related to the electron momentum, and $\sigma$ is the spin quantum number, which is either up or down. If we now look at

the lesser Green function defined in (3.24) and consider a single point, such that $x = x'$, at one given time, such that $t = t'$, the operators within the electron states $|\mathbf{k},\sigma\rangle$, of energy $\varepsilon_k$, are just what is referred to as the occupation number operator, $c^\dagger_{\mathbf{k}\sigma}c_{\mathbf{k}\sigma}$. Using the definition of the thermal average with dropped spin indices, (3.28), we can see that

$$
\begin{aligned}
\langle c^\dagger_k c_k\rangle &= \frac{\sum_{k'}\langle k'|e^{-\beta H_0}c^\dagger_k c_k|k'\rangle}{\sum_{k'}\langle k'|e^{-\beta H_0}|k'\rangle} = \frac{\sum_{n_k=0}^{1}e^{-\beta\sum_{k'\neq k}\varepsilon_{k'}n_{k'}}e^{-\beta\varepsilon_k n_k}\,n_k}{\sum_{n_k=0}^{1}e^{-\beta\sum_{k'\neq k}\varepsilon_{k'}n_{k'}}e^{-\beta\varepsilon_k n_k}} \\
&= \frac{e^{-\beta\sum_{k'\neq k}\varepsilon_{k'}n_{k'}}\cdot 0 + e^{-\beta\sum_{k'\neq k}\varepsilon_{k'}n_{k'}}e^{-\beta\varepsilon_k}}{e^{-\beta\sum_{k'\neq k}\varepsilon_{k'}n_{k'}} + e^{-\beta\sum_{k'\neq k}\varepsilon_{k'}n_{k'}}e^{-\beta\varepsilon_k}} = \frac{e^{-\beta\varepsilon_k}}{1+e^{-\beta\varepsilon_k}} = \frac{1}{e^{\beta\varepsilon_k}+1} = f(\varepsilon_k), \qquad (3.36)
\end{aligned}
$$
where the first term in the numerator on the second row represents the case where the quantum state $|\mathbf{k},\sigma\rangle$ is empty, $n_k = \langle k|c^\dagger_k c_k|k\rangle = 0$, while the second term represents the case where said state is occupied. $f(\varepsilon)$ is the Fermi-Dirac distribution function, which gives a measure of the probability for a fermion energy state to be occupied thermally. Note that this number may be fractional. Bosons can occupy any quantum state with any positive integer value, $n_k = 1,2,3,\ldots$, and consequently distribute differently than fermions at finite temperatures. Using the average from (3.36), but with bosonic operators, it can be shown that
$$\langle a^\dagger_k a_k\rangle = \frac{1}{e^{\beta\varepsilon_k}-1} = n_B(\varepsilon_k), \qquad (3.37)$$
see [10] for proof, where $n_B(\varepsilon_k)$ is the Bose-Einstein distribution that gives the average occupation of a boson energy state by thermal excitation.

A common system to study, which relates to experimental measurements, is one where the electric potential of a lead is varied up or down, in relation to its Fermi energy, by an applied bias voltage. The measured quantity in such a setup is often the current flowing from the lead, to the part under examination, and onwards to an additional lead. Each lead consequently exchanges particles with the thermal reservoir in order to avoid building up or depleting charge, which means that they need to be treated within the grand canonical ensemble. In mathematical terms a chemical potential, $\mu$, is introduced to the Hamiltonian, $H^G = H - \mu c^\dagger_\lambda c_\lambda$, to account for the applied voltage, which is reflected in the distribution function as a change in the energy variable $\varepsilon_k \to \varepsilon_k - \mu$.
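The occupation factors (3.36) and (3.37) are simple enough to tabulate directly. The short sketch below, with energies and temperatures in arbitrary units chosen only for illustration, also shows how the fermionic occupation approaches the step function θ(−ε) of eq. (3.17) as the temperature is lowered.

```python
import numpy as np

# The occupation factors of eqs. (3.36) and (3.37); energies measured from the chemical potential.
def fermi(eps, T, kB=1.0):
    return 1.0 / (np.exp(eps / (kB * T)) + 1.0)

def bose(eps, T, kB=1.0):
    return 1.0 / (np.exp(eps / (kB * T)) - 1.0)   # valid for eps > 0

eps = np.array([-0.2, -0.05, 0.0, 0.05, 0.2])     # arbitrary energies (same units as kB*T)
for T in (0.1, 0.01):
    print(f"T = {T}: f(eps) =", np.round(fermi(eps, T), 3))
# As T -> 0 the fermion occupation approaches theta(-eps), cf. eq. (3.17).
print("bosons at T = 0.1:", np.round(bose(np.array([0.05, 0.2, 1.0]), 0.1), 3))
```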

3.2.3 Green functions of free field excitations

Six Green functions were defined in section 3.2.1 that could all be written in terms of the lesser and greater Green functions. In section 3.2.2 we also saw that the space and time independent correlation function of the unperturbed Hamiltonian gives the thermal occupation at a given energy at finite temperatures and equilibrium conditions. If the Fock states of the average

are eigenstates of the free electron Hamiltonian, $H_0 = \sum_k\varepsilon_k c^\dagger_k c_k$, the time dependence is easily extracted as well, and we can write the Green functions as mathematical functions on closed form. These are very useful since we ultimately use them to express the Green functions of interacting particles through some perturbative expansion. The greater and lesser Green functions for a free electron turn out to equal
$$
\begin{aligned}
G^{>(0)}_k(t,t') &= -i[1-f(\varepsilon_k)]e^{-i\varepsilon_k(t-t')}, \\
G^{<(0)}_k(t,t') &= i f(\varepsilon_k)e^{-i\varepsilon_k(t-t')}, \qquad (3.38)
\end{aligned}
$$
while the time ordered, anti-time-ordered, retarded and advanced ones are
$$
\begin{aligned}
G^{t(0)}_k(t,t') &= -i[\theta(t-t') - f(\varepsilon_k)]e^{-i\varepsilon_k(t-t')}, \\
G^{\bar t(0)}_k(t,t') &= -i[\theta(t'-t) - f(\varepsilon_k)]e^{-i\varepsilon_k(t-t')}, \\
G^{r(0)}_k(t,t') &= -i\theta(t-t')e^{-i\varepsilon_k(t-t')}, \\
G^{a(0)}_k(t,t') &= i\theta(t'-t)e^{-i\varepsilon_k(t-t')}. \qquad (3.39)
\end{aligned}
$$
The free electron Green functions obviously depend on the time difference, $t-t'$, only and can hence be Fourier transformed into functions of frequency,
$$
\begin{aligned}
G^{>(0)}_k(\omega) &= -2\pi i[1-f(\varepsilon_k)]\delta(\omega-\varepsilon_k), \\
G^{<(0)}_k(\omega) &= 2\pi i f(\varepsilon_k)\delta(\omega-\varepsilon_k), \\
G^{t(0)}_k(\omega) &= \frac{1}{\omega-\varepsilon_k+i\delta_k}, \\
G^{\bar t(0)}_k(\omega) &= \frac{-1}{\omega-\varepsilon_k-i\delta_k}, \\
G^{r(0)}_k(\omega) &= \frac{1}{\omega-\varepsilon_k+i\delta}, \\
G^{a(0)}_k(\omega) &= \frac{1}{\omega-\varepsilon_k-i\delta}, \qquad (3.40)
\end{aligned}
$$
where $\delta$ is infinitesimal and positive, while $\delta_k$ is infinitesimal and negative if $k < k_F$, but positive if $k > k_F$. For non-interacting phonons the equivalent Green functions in time are
$$
\begin{aligned}
D^>_q(t,t') &= -i\{[n_B(\varepsilon_q)+1]e^{-i\omega_q(t-t')} + n_B(\varepsilon_q)e^{i\omega_q(t-t')}\}, \\
D^<_q(t,t') &= -i\{[n_B(\varepsilon_q)+1]e^{i\omega_q(t-t')} + n_B(\varepsilon_q)e^{-i\omega_q(t-t')}\}, \\
D^r_q(t,t') &= -2\theta(t-t')\sin[\omega_q(t-t')], \\
D^a_q(t,t') &= -2\theta(t'-t)\sin[\omega_q(t-t')], \\
D^t_q(t,t') &= -i\{[n_B(\varepsilon_q)+\theta(t'-t)]e^{i\omega_q(t-t')} + [n_B(\varepsilon_q)+\theta(t-t')]e^{-i\omega_q(t-t')}\}, \\
D^{\bar t}_q(t,t') &= -i\{[n_B(\varepsilon_q)+\theta(t-t')]e^{i\omega_q(t-t')} + [n_B(\varepsilon_q)+\theta(t'-t)]e^{-i\omega_q(t-t')}\}. \qquad (3.41)
\end{aligned}
$$
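As a small numerical aside (with an arbitrary ε_k and a small but finite broadening δ, choices made here purely for illustration), the retarded function in eq. (3.40) gives a spectral function −2 Im G^r(ω) that is sharply peaked at ε_k and carries unit weight - the frequency-space statement of one free-particle state at energy ε_k.

```python
import numpy as np

# Free retarded Green function of eq. (3.40) with a finite delta, so the delta-function
# peak becomes a narrow Lorentzian that can be plotted/integrated numerically.
eps_k, delta = 0.5, 1e-3
omega = np.linspace(-5, 5, 400001)

Gr = 1.0 / (omega - eps_k + 1j * delta)
A  = -2.0 * Gr.imag                          # spectral function A(omega) = -2 Im G^r(omega)

print("peak position:", omega[np.argmax(A)])                        # ~ eps_k
print("integral of A / (2*pi):", np.trapz(A, omega) / (2 * np.pi))  # ~ 1: one state per k
```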

3.2.4 The Heisenberg equation of motion and the perturbative expansion

A closed expansion for the dressed Green function can often be found in the time domain by successively differentiating the Green function with respect to time and applying the Heisenberg equation of motion,

$$\partial_t c_\lambda(t) = -i[c_\lambda(t),H], \qquad (3.42)$$
where $H$ is the total Hamiltonian, which can be written $H = \sum_\lambda\varepsilon_\lambda c^\dagger_\lambda c_\lambda + V$ as before. The time derivative of e.g. a general fermion time-ordered Green function is
$$\partial_t G^t_\lambda(t,t') = \partial_t\left[-i\theta(t-t')\langle c_\lambda(t)c^\dagger_\lambda(t')\rangle + i\theta(t'-t)\langle c^\dagger_\lambda(t')c_\lambda(t)\rangle\right], \qquad (3.43)$$
which equates to
$$
\begin{aligned}
\partial_t G^t_\lambda(t,t') = &-i\delta(t-t')\langle c_\lambda(t)c^\dagger_\lambda(t')\rangle - \theta(t-t')\langle[c_\lambda,H](t)c^\dagger_\lambda(t')\rangle \\
&-i\delta(t-t')\langle c^\dagger_\lambda(t')c_\lambda(t)\rangle + \theta(t'-t)\langle c^\dagger_\lambda(t')[c_\lambda,H](t)\rangle, \qquad (3.44)
\end{aligned}
$$
and since
$$[c_\lambda,H] = \sum_{\lambda'}\varepsilon_{\lambda'}[c_\lambda,c^\dagger_{\lambda'}c_{\lambda'}] + [c_\lambda,V] = \varepsilon_\lambda c_\lambda + [c_\lambda,V], \qquad (3.45)$$
the equation may be written
$$(i\partial_t - \varepsilon_\lambda)G^t_\lambda(t,t') = \delta(t-t') - i\langle T\{[c_\lambda,V](t)c^\dagger_\lambda(t')\}\rangle. \qquad (3.46)$$
Now, if the bare time ordered Green function is differentiated with respect to time, $\partial_t g^t_\lambda(t,t')$, the same process as above yields the relation
$$g^t_\lambda(t,t') = \frac{\delta(t-t')}{(i\partial_t - \varepsilon_\lambda)}, \qquad (3.47)$$
which can be identified in equation (3.46) above. Finally, by using $f(x) = \int f(x')\delta(x-x')dx'$ and defining $F^t_\lambda(t,t') = -i\langle T\{[c_\lambda,V](t)c^\dagger_\lambda(t')\}\rangle$, the dressed time ordered Green function becomes
$$G^t_\lambda(t,t') = g^t_\lambda(t,t') + \int g^t_\lambda(t,t_1)F^t_\lambda(t_1,t')\,dt_1. \qquad (3.48)$$

t t Depending on the interaction, Fλ either includes Gλ as a factor, closing the t 0 expression to an equation that may be iteratively solved, or Fλ (t,t ) must be differentiated with respect to time in the same manner as above to hopefully t recover Gλ such that Z t 0 t 0 t t t 0 Gλ (t,t ) = gλ (t,t ) + gλ (t,t1)Σλ (t1,t2)Gλ (t2 −t )dt1dt2, (3.49)

where $\Sigma_\lambda^{t}$ is the self-energy for the fermion in the given environment.
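To make the structure of equation (3.49) concrete, here is a minimal frequency-space sketch (my own, with an arbitrarily chosen static self-energy rather than one derived from a specific interaction $V$): iterating $G=g+g\Sigma G$ converges to the closed Dyson form $G=(g^{-1}-\Sigma)^{-1}$.

```python
import numpy as np

# Bare retarded propagator g(w) = 1/(w - eps + i*eta) on a frequency grid; eta is
# kept fairly large so that the plain series iteration below converges.
w = np.linspace(-4.0, 4.0, 4001)
eps, eta = 0.0, 1.0
g = 1.0 / (w - eps + 1j * eta)

# Arbitrary static self-energy (a level shift plus extra damping) chosen only for
# illustration; in the text Sigma is generated by the interaction V.
sigma = 0.4 - 0.1j

# Iterate G_{n+1} = g + g*Sigma*G_n, the frequency-space analogue of eq. (3.49)
G = g.copy()
for _ in range(100):
    G = g + g * sigma * G

G_closed = 1.0 / (1.0 / g - sigma)        # closed Dyson form
print("max |G_iterated - G_closed| =", np.max(np.abs(G - G_closed)))
```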

4. Tunnel junctions and scanning tunnelling microscopy

Tunnel junctions come in a wide variety of shapes and forms, as parts of experimental setups or in nanoscale electronics, designed to serve many different purposes. In 1975 tunneling magnetoresistance (TMR) was discovered in a magnetic tunnel junction (MTJ) [12] at low temperatures, which led to research that eventually blossomed with the discovery of giant magnetoresistance (GMR) in the late 1980s [5, 6], an effect that is utilized in all modern high density hard disk drives. In multi-junction solar cells the tunnel junction provides a low resistance separation between the n and p doped subcells [7]. For superconducting tunnel junctions, the Josephson effect is utilized in e.g. superconducting quantum interference devices (SQUIDs), which are magnetic field sensors of extremely high sensitivity [13, 14]. For experimental applications, devices such as break junctions offer a means to measure electrical currents through single molecules or chains of atoms [15]. A tunnel junction is also an essential part of scanning tunneling microscopy (STM), one of the most important recent experimental tools, which has the ability to image surfaces of materials down to the atomic scale as well as to make very local spectroscopic measurements. Since STMs play an important role in the theoretical studies conducted for this thesis, a slightly more detailed description follows below. Across these different devices the basic functionality forms a common denominator. In essence, two conductors are positioned close enough, on either side of an insulator, that electrons (quasiparticles) may tunnel quantum mechanically from one conductor to the other through the potential barrier of the insulator. The resulting tunneling current, determined by a number of factors such as the DOS spectra of the two leads, the temperature of the leads, the barrier of the insulator, the applied bias voltage and the magnetic properties of the two leads, is typically measured. For the purpose of experimental measurements, or in order to achieve some wanted functionality, an object of study that the tunneling electrons may interact with is often placed between the leads within the insulating part. These objects may be e.g. quantum dots, molecules, magnetic or nonmagnetic atoms, etc.

4.1 Scanning tunnelling microscopy (STM)
In 1982 the first paper with experimental results from STM measurements was published [8], which later earned G. Binnig and H. Rohrer the Nobel prize in

1986. The apparatus they had built was revolutionary because it provided a means to view the surface of a conductive material with atomic resolution. Later, in 1990, D.M. Eigler and E.K. Schweizer used an STM to move randomly placed xenon atoms on a nickel surface into organised patterns, thereby proving that the manipulation of individual atoms is possible [16]. Ever since, STM has continued to be an invaluable asset to physicists for probing local quantum mechanical phenomena, and development has continued with the realisation of superconducting [4] and spin polarized STM [17]. STMs are fairly simple in principle. A very sharp tip of a conducting material, ideally ending with a single atom, is held in position a few Å over a conducting sample by piezoelectric actuators that can move the tip in all directions. A small voltage bias can be applied over the tip to sample gap, which stretches the exponentially decaying tip and sample electron wave functions to overlap so that a small chance of tunneling is achieved. For a given bias voltage the resulting tunneling current, I(r), will be proportional to the LDOS of the sample surface and the DOS of the tip [18]. When comparing calculations with experiments, however, the differential conductance, dI/dV, is often considered, and by choosing a tip that has a fairly flat DOS in the energy range of the bias voltage about the Fermi level, the differential conductance can be shown to equal the local DOS of the surface directly to a good approximation. Measurements can be performed in two different ways: either by keeping the tunneling current constant as the tip scans the sample surface, continually adjusting the tip height by feedback from the current, or by keeping the tip height constant in a fixed (x, y) position as the voltage is varied. The first method records the tip heights over a scanned portion of the sample surface, which is translated to a topographic picture of the LDOS. The STM tunneling current varies exponentially with the tip height, which makes the method sensitive to surface variations. The second method records the current variation in a fixed position for a range of tunneling electron energies. The current variation is then translated to give a spectral picture of the surface electron LDOS. The prevalent theory for basic interpretation of the images and spectra provided by an STM comes from Bardeen [19], and later from Tersoff and Hamann [18], who applied it specifically to STM. Bardeen presumes that the potential barrier can be treated as a perturbation and, by truncating at the lowest order correction, retrieves Fermi's golden rule for transitions, which translates to

$$
I = \frac{2\pi e}{\hbar}\sum_{t,\nu}|T_{t\nu}|^2 f(\varepsilon_t)[1-f(\varepsilon_\nu)]\delta(\varepsilon_t+eV-\varepsilon_\nu)
\tag{4.1}
$$
for the tunnelling current in the tip to surface direction. Here $e$ is the electron charge, $t$ and $\nu$ label the tip and surface states, $f(\varepsilon)$ is the Fermi function, and $V$ gives the gap voltage. The tunnelling matrix element is found to be

$$
T_{t\nu} = \frac{\hbar^2}{2m}\int d\mathbf{S}\cdot\left(\psi_t^{*}\nabla\psi_\nu - \psi_\nu\nabla\psi_t^{*}\right),
\tag{4.2}
$$
if the integral is taken over a surface within the potential barrier that separates the tip wave function from the surface wave function, such that they are described by their own Hamiltonians. Expression (4.1) is fairly intuitive: the delta function assures that only tip states, elevated by $eV$, with a matching surface state are accounted for. The Fermi function $f(\varepsilon_t)$ gives the probability for a tip state with energy $\varepsilon_t$ to be occupied by an electron, while $1-f(\varepsilon_\nu)$ gives the probability of a surface state being unoccupied. The matrix element $T_{t\nu}$ squared gives the quantum mechanical probability for a tip state $\psi_t$ to transition to a surface state $\psi_\nu$. Calculating $I(\mathbf{r})$ from expression (4.1) essentially comes down to evaluating the matrix element integrals $T_{t\nu}(\mathbf{r})$, but with a few approximations appropriate to real experiments a simpler and more attainable expression is reached. First off, if the tip of the STM is treated as a point source, confining the tip wave function, the transition matrix element will depend only on the surface wave function at the tip and $|T_{t\nu}|^2\propto|\psi_\nu|^2$. Then, assuming experiments are done at low temperatures with small gap voltages, the Fermi functions can be replaced by step functions, which becomes exact in this limit. The total expression will then be nonzero only in the energy window $0<\varepsilon<eV$. Furthermore, by taking the continuum limit of the discrete tip state summation one ends up with

$$
I(\mathbf{r}_0) \propto \int_0^{eV}\rho_t(\varepsilon)\sum_\nu|\psi_\nu(\mathbf{r}_0)|^2\delta(\varepsilon-\varepsilon_\nu)\,d\varepsilon,
\tag{4.3}
$$
where $\rho_t(\varepsilon)$ is the tip density of states and $\mathbf{r}_0$ represents the position of the tip in relation to the surface. The local density of states of the substrate surface can be identified within the integral and (4.3) takes the short form

$$
I(\mathbf{r}) \propto \int_0^{eV}\mathrm{LDOS}(\mathbf{r},\varepsilon)\,d\varepsilon,
\tag{4.4}
$$
if $\rho_t(\varepsilon)$ is considered constant over the energy window. By looking at the differential conductance it is clear that
$$
\frac{dI(\mathbf{r},\varepsilon)}{dV} \propto \mathrm{LDOS}(\mathbf{r},\varepsilon),
\tag{4.5}
$$
as stated above.
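A small numerical sketch of relations (4.4) and (4.5) (my own illustration; the model LDOS below, a flat background with two Lorentzian resonances, is invented purely for demonstration): integrating the LDOS up to $eV$ gives the current, and numerically differentiating that current with respect to the bias recovers the LDOS.

```python
import numpy as np

def ldos(e):
    """Model surface LDOS: flat background plus two Lorentzian resonances (arbitrary)."""
    lor = lambda e0, w: (w / np.pi) / ((e - e0) ** 2 + w ** 2)
    return 1.0 + 0.5 * lor(0.3, 0.05) + 0.8 * lor(0.7, 0.08)

# Bias grid, energies in arbitrary units with e = 1
V = np.linspace(0.0, 1.0, 2001)
dV = V[1] - V[0]

# I(V) ~ integral_0^{eV} LDOS(e) de, eq. (4.4), here as a cumulative sum
I = np.cumsum(ldos(V)) * dV

# dI/dV recovers the LDOS, eq. (4.5), up to discretisation error
dIdV = np.gradient(I, dV)
print("max |dI/dV - LDOS| =", np.max(np.abs(dIdV[10:-10] - ldos(V)[10:-10])))
```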

4.2 Theoretical description of tunnel junctions
The theoretical framework used to describe tunnel junctions within the papers published for this thesis rests on a Hamiltonian description, in the spirit of Cohen et al. [20], where the conducting leads are treated as mathematically separate and semi-infinite, each with its own heat bath. The lead Hamiltonians $H_1$ and $H_2$ consequently commute, $[H_1,H_2]=0$, and each lead may

donate or accept an arbitrary number of electrons without changing its internal state. The only connection between the leads is accounted for by a tunneling term of the kind
$$
\sum_{kq\sigma}T_{kq}c_{k\sigma}^\dagger c_{q\sigma} + \mathrm{H.c.},
\tag{4.6}
$$
which destroys an electron in one lead, represented by the momentum subscript $q$, after which an electron is created in the other lead, represented by the momentum subscript $k$. H.c. is the Hermitian conjugate of the first term and takes care of the opposite process, where electrons tunnel in the other direction. $T_{kq}$ is the tunneling matrix element, which sets the rate of tunneling. It is given by the overlap between the wave functions of the two leads and, as such, varies exponentially with the distance between the leads, apart from being dependent on the particle energy and the wave number. For bias voltages $eV$ small in relation to the Fermi energy $E_F$, however, $T_{kq}$ is often treated as constant, as variations in $T_{kq}$ are on the order of $E/E_F$ and $k/k_F$ [9]. The method of obtaining a closed mathematical expression for the tunneling current is most easily illustrated by an example. Following the steps of Mahan [9], suppose a system of two metallic leads, denoted by the subscripts L and R for left and right, that are separated by an insulating gap, is determined by the total Hamiltonian

$$
H = H_L + H_R + H_T
  = \sum_k\varepsilon_k c_k^\dagger c_k + \sum_q\varepsilon_q c_q^\dagger c_q + \sum_{kq}T_{kq}c_k^\dagger c_q + \sum_{kq}T_{kq}^{*}c_q^\dagger c_k,
\tag{4.7}
$$
where the conduction electrons in each lead are considered free and the spin indices are removed. The current of tunneling electrons flowing through the system in response to an applied bias voltage can then be expressed as the rate of change in electron number for either of the leads, i.e. the number of electrons removed from one lead must end up in the other lead, and the average rate at which this happens is the definition of current. This statement translated into mathematics reads

$$
I_L(t) = -e\langle\dot N_L(t)\rangle,
\tag{4.8}
$$
where the left lead has been chosen to define positive and negative current and $N_L=\sum_k c_k^\dagger c_k$ is its number operator. Using the Heisenberg equation of motion, $\dot N_L(t)=i[H,N_L](t)$, the current can be written as

$$
I_L(t) = -ie\sum_{kq}\Big(T_{kq}\langle c_k^\dagger(t)c_q(t)\rangle - T_{kq}^{*}\langle c_q^\dagger(t)c_k(t)\rangle\Big)
       = 2e\,\mathrm{Im}\sum_{kq}T_{kq}\langle c_k^\dagger(t)c_q(t)\rangle,
\tag{4.9}
$$

since the number operator commutes with all terms of the Hamiltonian except for the tunneling term. This step was taken to show that the equation of motion method of section 3.2.4 can now be applied to the correlation function above to yield an expression for $I_L(t)$. By working in the interaction picture another route may be taken, however, where the brackets of equation (4.8) are expanded in terms of S-matrices to give
$$
\begin{aligned}
\langle\dot N_L(t)\rangle &= \langle S^\dagger(t,-\infty)\dot N_L(t)S(t,-\infty)\rangle\\
&= \Big\langle\Big(1+i\int_{-\infty}^{t}dt'\,H_T(t')\Big)\dot N_L(t)\Big(1-i\int_{-\infty}^{t}dt'\,H_T(t')\Big)\Big\rangle\\
&= \langle\dot N_L(t)\rangle - i\int_{-\infty}^{t}\big\langle\big[\dot N_L(t),H_T(t')\big]\big\rangle\,dt',
\end{aligned}
\tag{4.10}
$$
in linear response, where the tunneling term of the Hamiltonian is considered the interaction. The first term equals zero, $\langle\dot N_L(t)\rangle=0$, since the brackets now represent states of the uncoupled system in the infinite past, when no tunneling occurred. The current is then given by
$$
I_L(t) = ie\int_{-\infty}^{t}\big\langle\big[\dot N_L(t),H_T(t')\big]\big\rangle\,dt',
\tag{4.11}
$$
which is an expression that will generate several correlation functions once the commutator is expanded. The goal is to collect these operators such that the ones acting on the same lead pair up and form Green functions that can be expressed in regular mathematical functions.
So far the derivation has excluded the bias voltage that will drive the tunneling process. To include the bias voltage a chemical potential, $\mu_x$, is introduced for each lead, which allows us to make an energy shift in the DOS spectra of the left and right leads equal to $eV=\mu_L-\mu_R$. The reason why the chemical potentials are introduced at this stage, rather than in the definition of the Hamiltonian (4.7), is that they now can differ on an absolute energy scale instead of only shifting relative to each other. Mathematically the chemical potentials are introduced by defining
$$
H_L^{\mu} = H_L - \mu_L N_L,\qquad
H_R^{\mu} = H_R - \mu_R N_R,\qquad
H_0^{\mu} = H_L^{\mu} + H_R^{\mu}.
\tag{4.12}
$$
Time evolution for each operator is still governed by the noninteracting Hamiltonians $H_L+H_R$, but in order to facilitate calculations it would be preferable if the newly defined Hamiltonians that include the chemical potentials were to govern the time evolution, and if an operator of a specific lead were governed in time by the Hamiltonian of that lead alone. Using the definitions (4.12), the original uncoupled Hamiltonians take the form
$$
H_L + H_R = H_L^{\mu} + \mu_L N_L + H_R^{\mu} + \mu_R N_R,
\tag{4.13}
$$

which implies the time dependence

$$
\begin{aligned}
\dot N_L(t) &= e^{iH_0^{\mu}t}e^{i(\mu_L N_L+\mu_R N_R)t}\,\dot N_L\,e^{-i(\mu_L N_L+\mu_R N_R)t}e^{-iH_0^{\mu}t}\\
&= e^{iH_0^{\mu}t}\sum_{kq}i\Big(T_{kq}e^{i(\mu_R-\mu_L)t}c_k^\dagger c_q - T_{kq}^{*}e^{i(\mu_L-\mu_R)t}c_q^\dagger c_k\Big)e^{-iH_0^{\mu}t}.
\end{aligned}
\tag{4.14}
$$

The first row is possible since $H_0^{\mu}$ commutes with $N_L$ and $N_R$, and the second equality comes from the Heisenberg equation of motion. The factors $e^{\pm i(\mu_L-\mu_R)t}$ are a result of the commutators between $N_x$ and $c_k^\dagger c_q$, $c_q^\dagger c_k$ in the expansion $e^{A}Be^{-A}=B+[A,B]+[A,[A,B]]/2!+\dots$. After redefining the operator time dependence in accordance with $c_k(t)=e^{iH_L^{\mu}t}c_k e^{-iH_L^{\mu}t}$, the commutator in the current equation (4.11) can be expanded into correlation functions of states that belong to the uncoupled Hamiltonian. To aid notation, let us introduce

$$
A(t) = \sum_{kq}T_{kq}c_k^\dagger(t)c_q(t),
\tag{4.15}
$$
which permits us to write the current as

$$
\begin{aligned}
I_L(t) = e\int_{-\infty}^{t}\Big\{ & e^{-ieV(t+t')}\big\langle\big[A(t),A(t')\big]\big\rangle + e^{-ieV(t-t')}\big\langle\big[A(t),A^\dagger(t')\big]\big\rangle\\
& - e^{ieV(t-t')}\big\langle\big[A^\dagger(t),A(t')\big]\big\rangle - e^{ieV(t+t')}\big\langle\big[A^\dagger(t),A^\dagger(t')\big]\big\rangle\Big\}\,dt'.
\end{aligned}
\tag{4.16}
$$

Terms with the commutator combinations $[A(t),A(t')]$ and $[A^\dagger(t),A^\dagger(t')]$ contain correlation functions where operators of the same kind pair up, and they will only be nonzero for tunneling between two superconductors. After discarding these and using $e^{ieV(t-t')}\langle[A^\dagger(t),A(t')]\rangle = -\big(e^{-ieV(t-t')}\langle[A(t),A^\dagger(t')]\rangle\big)^{*}$ we end up with

$$
I_L(t) = 2e\,\mathrm{Re}\int_{-\infty}^{t}\big\langle\big[A(t),A^\dagger(t')\big]\big\rangle e^{-ieV(t-t')}\,dt'.
\tag{4.17}
$$

Expanding the commutator average,

$$
\begin{aligned}
\big\langle\big[A(t),A^\dagger(t')\big]\big\rangle
&= \sum_{kk'qq'}T_{kq}T_{k'q'}^{*}\big\langle\big[c_k^\dagger(t)c_q(t),c_{q'}^\dagger(t')c_{k'}(t')\big]\big\rangle\\
&= \sum_{kk'qq'}T_{kq}T_{k'q'}^{*}\Big(\langle c_k^\dagger(t)c_{k'}(t')\rangle\langle c_q(t)c_{q'}^\dagger(t')\rangle - \langle c_{q'}^\dagger(t')c_q(t)\rangle\langle c_{k'}(t')c_k^\dagger(t)\rangle\Big)\\
&= \sum_{kq}|T_{kq}|^2\Big(G_k^{<(0)}(t',t)G_q^{>(0)}(t,t') - G_q^{<(0)}(t,t')G_k^{>(0)}(t',t)\Big)
\end{aligned}
\tag{4.18}
$$

reveals a symmetric set of bare Green functions that, written explicitly, gives
$$
\begin{aligned}
I_L(t) &= 2e\sum_{kq}|T_{kq}|^2\,\mathrm{Re}\int_{-\infty}^{t}\big\{f(\xi_k)[1-f(\xi_q)] - f(\xi_q)[1-f(\xi_k)]\big\}e^{-i(\xi_q-\xi_k+eV)(t-t')}\,dt'\\
&= 2e\sum_{kq}|T_{kq}|^2\,\mathrm{Re}\int_{-\infty}^{t}\big[f(\xi_k)-f(\xi_q)\big]e^{-i(\xi_q-\xi_k+eV)(t-t')}\,dt'.
\end{aligned}
\tag{4.19}
$$
By working out the real part of the time integral,

$$
\mathrm{Re}\int_{-\infty}^{t}e^{-i(\xi_q-\xi_k+eV)(t-t')}\,dt' = \delta(\xi_q-\xi_k+eV),
\tag{4.20}
$$
and taking the continuum limit of the momentum summations, $\sum_k\to n_k\int d\xi_k$ and $\sum_q\to n_q\int d\xi_q$, where the DOS of the left and right leads, $n_k$ and $n_q$, are assumed to be constant, we reach the final result
$$
I_L = 2e|T_{kq}|^2 n_k n_q\int d\xi_q\,\big[f(\xi_q+eV)-f(\xi_q)\big].
\tag{4.21}
$$
The tunneling current between two metallic conductors thus depends on the tunneling matrix element and the DOS of both conductors, and on the applied bias voltage in a linear fashion, like a resistor, since the Fermi functions cancel up to the difference $eV$. The method described through the example above is quite flexible in terms of more complicated lead structures, numbers of leads and interactions, as long as the basic premise that the system can be divided into isolated parts coupled only by a tunneling term can be motivated.
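As a quick numerical sanity check of equation (4.21) (my own illustration, with the prefactor set to one and an arbitrary temperature), the sketch below integrates the Fermi-function difference and confirms that its magnitude grows linearly with $eV$, i.e. that a junction between two normal metals behaves like a resistor; the overall sign simply reflects the chosen direction of positive current.

```python
import numpy as np

def fermi(x, kT):
    """Fermi function at temperature kT (energies in the same arbitrary units)."""
    return 1.0 / (np.exp(x / kT) + 1.0)

kT = 0.05
xi = np.linspace(-5.0, 5.0, 200001)   # integration window wide enough for the tails
dxi = xi[1] - xi[0]

for eV in (0.1, 0.5, 1.0):
    # The integrand of eq. (4.21); its integral on this symmetric window is -eV,
    # so the current magnitude is linear in the applied bias.
    integral = np.sum(fermi(xi + eV, kT) - fermi(xi, kT)) * dxi
    print(f"eV = {eV:4.2f}   integral = {integral:+.4f}")
```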

5. Scattering theory for surface electrons interacting with Dirac delta-function like potentials

In reference [21] Fiete and Heller develop a scattering theory for electrons, confined to the two-dimensional surface of a noble metal [22, 23], that scatter off adsorbed atoms simulated by Dirac delta-function potentials. The aim of the theory is to calculate the local DOS in an environment where the electrons are influenced by several scatterers, a situation that applies to a multitude of experiments where STMs are used to measure a surface on which atoms or molecules are positioned in an intentional pattern, e.g. quantum corrals [24, 25, 26, 27]. The differential conductance picked up by the STM tip is, as previously discussed, proportional to the local DOS of the surface, which in turn is given by the imaginary part of the retarded Green function,
$$
\mathrm{LDOS}(\mathbf{r},\varepsilon) = -\frac{1}{\pi}\mathrm{Im}\big[G^{r}(\mathbf{r},\mathbf{r},\varepsilon)\big],
\tag{5.1}
$$
in the static, zero temperature case. This Green function can, as we have seen, be calculated by use of the Dyson equation,
$$
G^{r} = g^{r} + g^{r}Tg^{r},
\tag{5.2}
$$
where $T = V + Vg^{r}V + \dots$. For a continuous potential with spatial extension the Green function is hence given by term-wise contributions that account for scattering once, twice and so on up to infinity. The surface states of noble metals do, however, have an energy band minimum that is very close to the Fermi energy. The dispersion relation is consequently quadratic and isotropic to a good approximation, and the typical wavelength of a surface state electron is on the order of $\lambda_F\approx 30$ Å, which is much greater than the size of an adsorbed atom. The argument can then be made that the electrons do not sense the shape and extension of the potential, and it can be approximated by a Dirac delta-function. Equation (5.2) then simplifies immensely,
$$
\begin{aligned}
G_k^{r}(\mathbf{r},\mathbf{r}) &= g_k^{r}(\mathbf{r},\mathbf{r}) + \int d^2r'\,d^2r''\,g_k^{r}(\mathbf{r},\mathbf{r}')s(k)\delta(\mathbf{r}_0-\mathbf{r}')\delta(\mathbf{r}_0-\mathbf{r}'')g_k^{r}(\mathbf{r}'',\mathbf{r})\\
&= g_k^{r}(\mathbf{r},\mathbf{r}) + s(k)g_k^{r}(\mathbf{r},\mathbf{r}_0)g_k^{r}(\mathbf{r}_0,\mathbf{r}),
\end{aligned}
\tag{5.3}
$$
as detailed in reference [28]. Here $\mathbf{r}_0$ is the position of the scatterer in relation to the measuring point $\mathbf{r}$ and $s(k)=(4i\hbar^2/m^{*})[e^{2i\delta(\varepsilon(k))}-1]$, where

$\varepsilon(k)=E_F+E_0+\hbar^2k^2/(2m^{*})$, $E_F$ is the Fermi energy, $m^{*}$ is the effective mass of the surface electron and $\delta(\varepsilon(k))$ is an energy dependent phase shift that can be given a value based on experiments.
The isotropic, quadratic nature of the dispersion relation, and the wavelength being long in comparison to the lattice parameter of the underlying noble metal surface as well, also suggest that an s-wave approximation is viable for the bare retarded Green function. In two dimensions the explicit expression is
$$
g_k^{r}(\mathbf{r},\mathbf{r}') = -i\frac{m^{*}}{2\hbar^2}\big[J_0(k|\mathbf{r}-\mathbf{r}'|) + iY_0(k|\mathbf{r}-\mathbf{r}'|)\big],
\tag{5.4}
$$
where $J_0$ and $Y_0$ are the Bessel functions of the first and second kind. Equation (5.3) gives the static probability amplitude for an electron to be at $\mathbf{r}$ when interacting with a single scattering potential at $\mathbf{r}_0$. In order to increase the number of scatterers that affect the electron, which is the goal of the theory, the equation has to be modified. A simplistic route would be to simply sum up the contributions from several scatterers, but an electron that scatters off a potential at $\mathbf{r}_1$ surely leaves some amplitude to be scattered off the potential at $\mathbf{r}_2$ as well, which will bring additional amplitude back to the starting position $\mathbf{r}$. For $N$ adsorbed atoms, or scattering potentials, on the considered surface,

$$
G_k^{r}(\mathbf{r},\mathbf{r}) = g_k^{r}(\mathbf{r},\mathbf{r}) + \sum_{i=1}^{N}s_i(k)g_k^{r}(\mathbf{r},\mathbf{r}_i)g_{k,i}^{r}(\mathbf{r}_i,\mathbf{r})
\tag{5.5}
$$
includes both the sum over all scatterers and, through the subtle subscript modification of $g_{k,i}^{r}$, also the secondary scattering events between each adsorbed atom and all the rest. $g_{k,i}^{r}$ is defined as

$$
g_{k,i}^{r}(\mathbf{r}_i,\mathbf{r}) = g_k^{r}(\mathbf{r}_i,\mathbf{r}) + \sum_{j\neq i}^{N}s_i(k)g_k^{r}(\mathbf{r}_i,\mathbf{r}_j)g_{k,j}^{r}(\mathbf{r}_j,\mathbf{r}),
\tag{5.6}
$$
which is a series that can iteratively be made infinitely long. By moving the summation term to the left-hand side and summing over $i$, however, $g_{k,i}^{r}(\mathbf{r}_i,\mathbf{r})$ can be factored out. If the sums over $i$ and $j$ are then written in matrix form, the equation reads
$$
\mathbf{G} = \mathbf{A}^{-1}\mathbf{G}_0,
\tag{5.7}
$$
where $A_{ij}=\delta_{ij}-s_i g_k^{r}(\mathbf{r}_i,\mathbf{r}_j)$, $(\mathbf{G}_0)_i=g_k^{r}(\mathbf{r}_i,\mathbf{r})$ and $\mathbf{G}_i=g_{k,i}^{r}(\mathbf{r}_i,\mathbf{r})$. Using a computer this equation is easily solved even for a large number of scatterers, and retrieving the local DOS for the surface affected by the interactions is just a matter of taking the imaginary part of $G_k^{r}(\mathbf{r},\mathbf{r})$ given by equation (5.5).
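The matrix equation (5.7) is straightforward to implement numerically. The following stand-alone sketch is my own minimal version: units with $\hbar=m^{*}=1$, a single common phase shift, and a toy ring of scatterers are assumptions made only for illustration, whereas in the thesis these quantities would be fixed by the noble-metal surface-state parameters and the adsorbate positions of the actual experiments.

```python
import numpy as np
from scipy.special import j0, y0

# Units with hbar = m_star = 1 and a single, energy independent phase shift are
# assumptions made purely for illustration.
HBAR, MSTAR = 1.0, 1.0

def g_free(k, r1, r2):
    """Bare 2D retarded surface Green function of eq. (5.4), for r1 != r2."""
    d = np.linalg.norm(np.asarray(r1, float) - np.asarray(r2, float))
    return -1j * MSTAR / (2 * HBAR**2) * (j0(k * d) + 1j * y0(k * d))

def s_strength(k, delta_phase):
    """Point-scatterer strength s(k) for a phase shift delta, as in eq. (5.3)."""
    return 4j * HBAR**2 / MSTAR * (np.exp(2j * delta_phase) - 1.0)

def ldos(k, r, scatterers, delta_phase=0.5 * np.pi):
    """LDOS(r, eps(k)) from eqs. (5.1), (5.5) and (5.7) for N point scatterers."""
    n = len(scatterers)
    s = s_strength(k, delta_phase)
    # A_ij = delta_ij - s * g(r_i, r_j), with the i = j term excluded as in eq. (5.6)
    A = np.eye(n, dtype=complex)
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] -= s * g_free(k, scatterers[i], scatterers[j])
    G0 = np.array([g_free(k, ri, r) for ri in scatterers])   # g(r_i, r)
    Gi = np.linalg.solve(A, G0)                              # eq. (5.7): G = A^-1 G0
    corr = sum(s * g_free(k, r, ri) * Gi[i] for i, ri in enumerate(scatterers))
    im_g_onsite = -MSTAR / (2 * HBAR**2)  # Im g(r,r) is finite although Re g(r,r) diverges
    return -(im_g_onsite + corr.imag) / np.pi

# Toy example: a ring of twelve scatterers, LDOS evaluated at the ring centre
ring = [(3.0 * np.cos(a), 3.0 * np.sin(a))
        for a in np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)]
for k in (0.5, 1.0, 1.5):
    print(f"k = {k:3.1f}   LDOS(centre) = {ldos(k, (0.0, 0.0), ring):.4f}")
```

Evaluating the same routine on a grid of measurement points r would give LDOS maps of the kind discussed in connection with paper IV.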

6. Superconductivity

Superconductivity was discovered over a hundred years ago, in 1911, by Kamerlingh Onnes [29]. He measured the electrical resistivity of mercury under ultra-cold conditions, made possible by liquid helium, and to his amazement saw that the resistivity fell to zero below 4.3 K. Kamerlingh Onnes and his team also managed to transition the liquid helium to its superfluid state during the experiments. The transition was mentioned in his notebook, but he failed to realize the significance of the condensation. After the initial discovery other materials were found to be superconducting, and in 1933 Walther Meissner and Robert Ochsenfeld notably discovered what is now referred to as the Meissner effect [30]. In short it can be illustrated by placing a magnet on a material that condenses into superconductivity below a certain temperature. When this point is reached and the material becomes superconducting, the magnetic field from the magnet that penetrates the material is expelled from the superconductor, which generates supercurrents in the surface to counter the magnetic field. This causes the magnet to levitate, as is popularly shown when demonstrating superconductivity.
In order to understand the superconducting state a lot of experimental and theoretical effort was put into the research field in the following years. Fritz London managed to come up with a classical theory that reproduced the Meissner effect, in 1937, based on the assumption that the electron wave function within the superconductor is rigid under the influence of a magnetic field, in contrast to wave functions for electrons in a normal metal [31]. The conclusion of the theory is that the magnetic field decays exponentially into the superconductor over a distance that is referred to as the London penetration depth. In the 1950s Lev Landau and Vitaly Ginzburg published an equation for the supercurrent that was derived by minimizing an expression for the free energy of the superconductor near the phase transition into superconductivity, expressed in terms of London's wave function interpreted as a complex order parameter.
During this time the first steps towards a microscopic theory of superconductivity were taken, as opposed to the phenomenological theories of London, Landau and Ginzburg. Clues about the underlying interactions responsible for superconductivity were offered in 1950 by Herbert Fröhlich [32], who devised a theory for electron-phonon interaction, where he showed that phonon mediated interactions between electrons give rise to an attractive potential. At the same time Emanuel Maxwell [33] and Bernard Serin et al. [34] came up with experimental evidence that pointed towards electron-phonon interaction playing a key role in the formation of the superconducting state. In the mid 1950s

John Bardeen set out to pursue a microscopic theory for superconductivity together with his postdoc Leon Cooper and graduate student J. Robert Schrieffer. Cooper soon found that an arbitrarily weak attractive potential between two electrons moving close to the Fermi surface of a metal, at opposite points in k-space and with opposite spin, gives rise to a bound state of the electrons [35]. These bound states are now referred to as Cooper pairs and they display very important bosonic properties, which allow them to condense collectively to form a macroscopic state that carries charge. Bardeen, Cooper and Schrieffer continued to find the appropriate wave function for the Cooper pairs as well as to formulate a Hamiltonian that captures the microscopic mechanism of superconductivity [36]. More recently, in 1986, high temperature superconductors were discovered, eventually reaching the superconducting state at temperatures as high as 130 K. The mechanism for superconductivity within these materials cannot be fully explained by BCS theory and there are currently no definitive answers to how they work.
The reason why superconductors are superconducting can be described in a hand-waving, semiclassical way by considering electrons within the lattice structure of a metal. The positively charged lattice is deformed by an electron moving through it. An accompanying electron is attracted by the higher positive charge density of the deformed lattice at some point, which causes attraction between the two electrons. At room temperature the thermally induced lattice vibrations are too violent and break this weak bond between the electrons. At low temperatures, however, the energy exchange from thermal vibrations is low enough that Cooper pairs can form for a sufficiently long time to condense into a collective state. The energy required to break a pair apart in this state is much higher than the energy needed to break a single pair above the critical temperature; thus the whole condensate becomes resilient towards any form of scattering as long as the energy exchange is lower than twice the pairing potential, and without scattering the resistivity goes to zero.

6.1 Key points of BCS theory
By the time Cooper set out to consider bound states of electrons it was known that the electron-phonon interaction gives rise to a frequency dependent term that can overcome the Coulomb repulsion at long range,
$$
V_{\mathrm{eff}} = |g_{\mathrm{eff}}|^2\frac{1}{\omega^2-\omega_D^2}.
\tag{6.1}
$$

For energies smaller than the corresponding Debye frequency relative to the Fermi energy, $|\varepsilon_k-\varepsilon_F|<\hbar\omega_D$, the potential is negative. By looking at a two-electron wave function describing electrons that energetically lie within this narrow energy window, $\psi(\mathbf{r}_1,\sigma_1,\mathbf{r}_2,\sigma_2)=e^{i\mathbf{k}_{cm}\cdot\mathbf{R}_{cm}}\phi(\mathbf{r}_1-\mathbf{r}_2)\chi_{\sigma_1\sigma_2}$, where $cm$ denotes the centre of mass for the system, $\phi$ is the two-particle wave function

and $\chi$ contains the spin dependence, it can be seen to satisfy the Schrödinger equation with the energy
$$
E = -2\hbar\omega_D\exp\!\big(-1/(|g_{\mathrm{eff}}|^2 N(\varepsilon_F))\big),
\tag{6.2}
$$
under the circumstances that $\mathbf{k}_{cm}=0$, $\phi$ is of s-wave symmetry, and $\chi$ is in a spin singlet state. This energy clearly describes a bound state, and Cooper noted that the spin-0 nature of the Cooper pair implied that they might condense into a macroscopic collective that would dramatically change the ground state of the material [35]. Schrieffer then came up with a candidate for the superconducting ground state,

$$
|\psi_{\mathrm{BCS}}\rangle = \prod_k e^{\psi_k c_{k\uparrow}^\dagger c_{-k\downarrow}^\dagger}|0\rangle = \prod_k\big[1+\psi_k c_{k\uparrow}^\dagger c_{-k\downarrow}^\dagger\big]|0\rangle,
\tag{6.3}
$$
where Cooper pairs are treated in a coherent fashion, with some special properties, e.g. $\langle\psi_{\mathrm{BCS}}|c_{k\uparrow}c_{-k\downarrow}|\psi_{\mathrm{BCS}}\rangle\propto\psi_k$. The ground state is not yet normalized, but this is accomplished by introducing the parameters $u$ and $v$ as follows,
$$
|\psi_{\mathrm{BCS}}\rangle = \prod_k\big[u_k+v_k c_{k\uparrow}^\dagger c_{-k\downarrow}^\dagger\big]|0\rangle,
\tag{6.4}
$$
such that $\langle\psi_{\mathrm{BCS}}|\psi_{\mathrm{BCS}}\rangle=1$ and $|u_k|^2+|v_k|^2=1$. By observing the equation it becomes clear that $v_k$ is a probabilistic weight for the pair state to be occupied, while $u_k$ sets the probability of the pair state not being occupied. The model Hamiltonian on which Bardeen, Cooper and Schrieffer founded their theory, known as the BCS Hamiltonian, incorporates a free electron part and a scattering interaction between the zero momentum Cooper pairs,
$$
H_{\mathrm{BCS}} = \sum_{k\sigma}\varepsilon_{k\sigma}c_{k\sigma}^\dagger c_{k\sigma} + \sum_{k,k'}V_{k,k'}c_{k\uparrow}^\dagger c_{-k\downarrow}^\dagger c_{-k'\downarrow}c_{k'\uparrow},
\tag{6.5}
$$

where $V_{k,k'}=|g_{\mathrm{eff}}|^2$ in the phonon mediated example above. In general, however, $V_{k,k'}$ can be any attractive potential between electrons. The ground state energy $E=\langle\psi_{\mathrm{BCS}}|H|\psi_{\mathrm{BCS}}\rangle$ can now be minimized by variational means, keeping the number operator constant. This procedure generates the following parameter expressions,
$$
\begin{aligned}
|u_k|^2 &= \frac{1}{2}\Big(1+\frac{\varepsilon_k}{\xi_k}\Big), \qquad
|v_k|^2 = \frac{1}{2}\Big(1-\frac{\varepsilon_k}{\xi_k}\Big),\\
\xi_k &= \pm\sqrt{\varepsilon_k^2+|\Delta|^2},\\
\Delta &= |\Delta|e^{i\phi} = |g_{\mathrm{eff}}|^2\sum_k u_k v_k^{*} = |g_{\mathrm{eff}}|^2\sum_k\langle c_{k\uparrow}c_{-k\downarrow}\rangle,
\end{aligned}
\tag{6.6}
$$
where $\Delta$ is the superconducting order parameter, which generally has a complex phase and a magnitude equal to the binding energy of a single Cooper pair. $|\Delta|$

also sets the gap around the Fermi energy in the characteristic DOS spectrum of a superconductor, as well as the minimum value of $\xi_k$, which will be shown to be the lowest excitation energy from a Cooper pair state to a quasiparticle state. The critical temperature at which a material becomes superconducting is given by $T_c\approx 2\Delta/(3.53\,k_B)$ for all BCS superconductors. Regarding the phase, it is easy to see that the BCS wave function is not invariant under the gauge transformation $c_{k\sigma}^\dagger\to e^{i\alpha}c_{k\sigma}^\dagger$ and that the order parameter picks up a phase under such transformations, $\Delta\to e^{2i\alpha}\Delta$. An interesting thing about the phase is that it is conjugate to the number operator and therefore has to comply with the uncertainty principle $\Delta\alpha\,\Delta N\geq\hbar/2$, which means that a condensate with a well defined phase has an ill-defined particle number.
The large number of particles involved in forming the superconducting state where the order parameter appears suggests that fluctuations of the pairing operator $\hat A=c_{k\uparrow}c_{-k\downarrow}$ about its average should be small, justifying a mean field treatment.¹ Applying the fluctuation operator $\delta\hat A=\hat A-\langle\hat A\rangle$ to the BCS Hamiltonian and neglecting the squared fluctuation term leads to the mean field BCS Hamiltonian

$$
H_{\mathrm{mBCS}} = \sum_{k\sigma}\varepsilon_{k\sigma}c_{k\sigma}^\dagger c_{k\sigma} + \sum_k\Big[\Delta^{*}c_{k\uparrow}c_{-k\downarrow} + \Delta c_{k\uparrow}^\dagger c_{-k\downarrow}^\dagger\Big],
\tag{6.7}
$$
which is used in the papers presented in Part III. Note that a term $V\Delta\Delta^{*}$ is discarded from the above expression since it merely provides a constant energy shift in our calculations. When using the BCS Hamiltonian to describe the state of charge carriers in tunnel junction leads within a Green function formalism, correlation functions of the pairing operator appear. In order to interpret these correlation functions, as in the usual case of simple fermions, a transformation can be applied that diagonalizes the BCS Hamiltonian into

$$
H = \sum_k\xi_k\Big(\gamma_{k\uparrow}^\dagger\gamma_{k\uparrow} + \gamma_{-k\downarrow}^\dagger\gamma_{-k\downarrow}\Big),
\tag{6.8}
$$

where $\xi_k$ is defined in (6.6) and $\gamma_{k\sigma}$ ($\gamma_{k\sigma}^\dagger$) destroys (creates) a fermionic quasiparticle defined by

$$
\begin{aligned}
\gamma_{k\uparrow} &= u_k^{*}c_{k\uparrow} - v_k^{*}c_{-k\downarrow}^\dagger,\\
\gamma_{-k\downarrow}^\dagger &= v_k c_{k\uparrow} + u_k c_{-k\downarrow}^\dagger.
\end{aligned}
\tag{6.9}
$$
The transformation is known as the Bogolyubov-Valatin transformation, and the quasiparticles resulting from it are linear combinations of electrons and holes, which explains their dispersion relation, a mix of electron and hole excitation spectra following the shape $\xi_k=\sqrt{\varepsilon_k^2+\Delta^2}$.
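As a short numerical illustration of the relations in (6.6) (my own sketch, with the gap set to an arbitrary value), the code below evaluates $|u_k|^2$, $|v_k|^2$ and $\xi_k$, tabulates the quasiparticle density of states with its characteristic gap of width $2\Delta$, and prints the critical-temperature estimate quoted above.

```python
import numpy as np

delta = 1.0                              # gap |Delta| in arbitrary energy units
eps = np.linspace(-5.0, 5.0, 11)         # normal-state energies measured from E_F

xi = np.sqrt(eps**2 + delta**2)          # quasiparticle dispersion from eq. (6.6)
u2 = 0.5 * (1.0 + eps / xi)              # probability of the pair state being empty
v2 = 0.5 * (1.0 - eps / xi)              # probability of the pair state being occupied
print("|u|^2 + |v|^2 == 1 :", np.allclose(u2 + v2, 1.0))
print("minimum excitation  :", xi.min(), " (equals |Delta|)")

# Quasiparticle DOS relative to the normal state, N_s(E)/N_0 = |E|/sqrt(E^2 - Delta^2)
# outside the gap and zero inside it: the characteristic gap of width 2*Delta
E = np.linspace(-3.0, 3.0, 13)
dos = np.where(np.abs(E) > delta,
               np.abs(E) / np.sqrt(np.abs(E**2 - delta**2) + 1e-12), 0.0)
for e_val, n_val in zip(E, dos):
    print(f"E = {e_val:+.1f}   N_s/N_0 = {n_val:.2f}")

# Critical temperature estimate quoted in the text, Tc ~ 2*Delta/(3.53*k_B)
print("Tc (units with k_B = 1):", 2.0 * delta / 3.53)
```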

¹ The justification is formally shown in [10].

7. Notes on chaos

In paper I some solutions to the driven oscillator are referred to as chaotic. The term chaos is widely used in many different contexts with different meanings in mind. In mathematics, on the other hand, the term is more precisely defined and refers to the study of dynamical systems that are highly sensitive to initial conditions. A chaotic system hence evolves in a completely deterministic way, and the same initial condition will always yield the same motion or development. If the initial conditions are changed in the slightest way, however, chances are that the outcome will be dramatically different. In other words, a chaotic system is deterministic but not predictable. Chaotic motion is not determined by any parameters that change in a random fashion, which is instead a property of what in mathematics is called a stochastic system. There are ways to mathematically classify a system as chaotic, e.g. if it satisfies criteria such as sensitivity to initial conditions, topological mixing and dense periodic orbits [37]. These classifications are nontrivial to formally prove for a system, and we did not attempt to show that our equation of motion,

$$
\ddot u + A(t)u - B(t)u^3 = F(t)/m_I,
\tag{7.1}
$$
where the coefficients $A(t)$, $B(t)$ and $F(t)$ vary harmonically under finite voltage conditions, fulfils them. The equation does, however, fulfil the necessary conditions for a differential equation to yield chaotic solutions, namely that there are at least three dynamical variables that determine the motion of the system and that the equation is nonlinear [37]. A more visual way to establish chaotic behavior is to plot a Poincaré map of the system trajectory in phase space. A Poincaré map is most easily described by considering an orbital motion that passes through an intersecting plane. The orbital trajectory will periodically cross the Poincaré surface, leaving a dot behind at the point of intersection. The nonlinear equation above describes motion in one dimension, and an intersecting plane crossing the path of motion would only get one dot drawn on it, no matter the complexity of the variations in speed and acceleration. It is therefore common to analyze the trajectory in phase space instead, in a way that corresponds to a revolving vector representing the motion. The projection on the xy plane gives the position, the projection on the z-axis gives the speed, and time evolves as the angle between the xy projection and the starting point grows. After a long time has passed, in relation to the period of the motion, the dots may form two-dimensional shapes that are telling for the motion itself. A single dot on the Poincaré map is

indicative of regular periodic motion, if the sampling period is set to the same value as the period of the motion. A closed curve means that the motion is quasi-periodic, depending on more than one frequency, while a pattern that grows unpredictably, often into a swirling shape with fractal properties at long times, signals chaotic motion.
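To make the Poincaré construction concrete, the following sketch (a generic textbook-style illustration, not the NEMS parameters of paper I; a small damping term is also added, which equation (7.1) does not contain) integrates a driven Duffing-type oscillator and samples position and velocity once per drive period. A few repeating points indicate periodic or quasi-periodic motion, while a cloud of points that never settles indicates chaos.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Driven Duffing oscillator u'' + c*u' + a*u - b*u^3 = f*cos(w*t); all numbers are
# generic demonstration choices, not the NEMS parameters of paper I.
a, b, c, f, w = -1.0, -1.0, 0.3, 0.5, 1.2

def rhs(t, y):
    u, v = y
    return [v, -c * v - a * u + b * u**3 + f * np.cos(w * t)]

T = 2.0 * np.pi / w                      # drive period
n_periods = 200
t_strobe = np.arange(n_periods) * T      # stroboscopic (Poincare) sampling times

sol = solve_ivp(rhs, (0.0, n_periods * T), [0.1, 0.0],
                t_eval=t_strobe, rtol=1e-9, atol=1e-12, max_step=T / 50)

# Discard the transient and print a handful of Poincare points (u, du/dt)
for u, v in zip(sol.y[0][50:60], sol.y[1][50:60]):
    print(f"u = {u:+.4f}   u' = {v:+.4f}")
```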


Part III: Accessible versions of the published papers

8. Stability and chaos of a driven nanoelectromechanical Josephson junction

With the advancement of microfabrication methods in recent years it has become possible to construct mechanical objects, such as oscillators, that are small enough to show measurable effects of quantum mechanics [38, 39, 40]. Setups on this scale are often referred to as mesoscopic, since they belong in-between the world of the atom and the world we humans can observe with our eyes. The currently proposed technological applications are mainly in high speed and high sensitivity detection devices [41] of mass [42, 43, 44, 45, 46, 47], charge [48], force [49], and displacement [50]. In the future even more novel uses may see fruition within the area of quantum information technology [51, 52, 53, 54]. Some of these experimental investigations were done at very low temperatures where the nanomechanical resonators were in contact with parts in a superconducting state, a path that has also been taken by coupling nanomechanical resonators to superconducting quantum interference devices (SQUIDs) [55] and to a superconducting Cooper pair box [56].
The first paper, I, presented in this thesis concerns a nanoelectromechanical system (NEMS) of this kind, in which mechanical movement couples to electron tunneling events. Specifically, we take a theoretical look at a mechanical system of three parts that are all superconducting (SC). Two of these parts are fixed electrode leads, with a length scale in the hundreds of nanometers, that geometrically aim at a common point at a ninety degree angle from each other. At the intersection point of their aim a third, superconducting island is suspended by a cantilever that allows for oscillatory motion in the direction of the left (L) fixed electrode, while its movement in the direction of the rightmost (R) electrode remains unvarying. For a schematic picture of the setup, see Figure 8.1. In the absence of electromechanical coupling the only force acting on the island is restoring and comes from the spring constant of the cantilever, $k_I$, which makes the island a harmonic oscillator. The gaps between the leads and the island form two Josephson junctions where electron tunneling occurs, either by the Josephson effect alone if there is a superconducting phase difference between the leads and the island, or additionally by bias voltage induced effective Cooper pair tunneling. Since the tunneling rates decrease exponentially with distance, direct tunneling between the L and R leads is neglected. The interesting thing about this setup is the fact that the island may oscillate to shuttle electrons between the leads in a dynamical way, set by the mechanical parameters of the island and the electronic parameters of the superconductors. A similar, geometrically symmetric system was studied in reference [57]


Figure 8.1. Schematic picture of the NEMS setup. The superconducting island (SC I) can move under the restoring force of the spring as indicated by the arrows. The island couples by proximity to the fixed left superconducting (SC L) and right superconducting (SC R) leads as two Josephson junctions are formed. A voltage bias V can be applied over the fixed leads. $k_I$ denotes the spring constant and $m_I$ is the mass of the island. The distance between the equilibrium position of the island, indicated by the dashed circle, and the leads is d, and the oscillation amplitude is denoted u.

with interesting results, and here we further develop the idea by considering an asymmetric version, where the island distance to the left lead, d, varies linearly with the oscillation amplitude, u, while the distance to the right lead varies as the hypotenuse of u and d.
In the theoretical modeling of the system each of the three superconductors is defined by a BCS Hamiltonian with pairing potential $\Delta_{L,R,I}$, detailed in section 6. These couple through two tunneling terms that destroy an electron in the island and create one in either fixed lead, or vice versa, with a rate given by the tunneling matrix element. In island equilibrium the tunneling rate is set to a base value, which changes with the oscillation distance of the island in a linear approximation. Consequently, the oscillation amplitude needs to be small compared to the equilibrium distance between island and leads. The tunneling current is derived from the time derivative of the average occupation number of the left and right fixed leads, which gives a perturbative expression through expansion of the S-matrix. The resulting current can be divided into two parts, one that accounts for single electron (quasiparticle) tunneling, and one that accounts for Josephson tunneling. In our study only the effects of the second contribution, the Josephson part, were considered. This part describes the effective tunneling of Cooper pairs, which means that the averages are composed of creation and annihilation operator pairs, rather than a combination of the two. In order to interpret these averages we utilized the Bogolyubov

transformation, which leads to anomalous Green functions that can be expressed as ordinary mathematical functions.
It is also important to notice that the time scale of the electronic tunneling process mathematically intertwines with the time scale of the mechanical oscillations to create a very complicated problem that is unmanageable analytically. To separate these time scales a Born-Oppenheimer type approximation is considered, and the derivation rests on the assumption that the electronic processes are orders of magnitude faster than the oscillation period of the island. This implies that the electron energies, of roughly 1 eV, need to be orders of magnitude larger than the mechanical energy of the island, a condition that is thankfully fulfilled for mechanical resonators of the considered size, as shown in reference [47], where the corresponding energies of $10^{-9}-10^{-6}$ eV are given for the typical eigenfrequencies. The position of the vibrating island is now viewed as slowly varying, which justifies Taylor expanding the position dependent tunneling rate into a series that is cut after the first time derivative.
After the time scale separation the Josephson tunneling current can be expressed in terms of island position and speed dependent factors, in addition to the usual harmonic functions dependent on the Josephson frequency $\omega_J$, the superconducting phase difference $\phi$ of the gaps, and time. For the SC left lead to SC island junction,

$$
I_L(t) = J_L[1-\alpha u(t)]^2\sin(\omega_{J,L}t+\phi_L) + \Gamma_L[1-\alpha u(t)]\,\alpha\dot u(t)\cos(\omega_{J,L}t+\phi_L),
\tag{8.1}
$$
where $\alpha$ is the tunneling coupling constant and the amplitudes $J_L$ and $\Gamma_L$ depend on the bias voltage, the pairing potentials, the equilibrium tunneling matrix element, and the lead and island DOS. For the SC right lead to the island,

$$
\begin{aligned}
I_R(t) =\ & J_R\Big[1-\alpha\big(\sqrt{R^2+u^2}-R\big)\Big]^2\sin(\omega_{J,R}t+\phi_R)\\
&+ \Gamma_R\Big[1-\alpha\big(\sqrt{R^2+u^2}-R\big)\Big]\frac{\alpha u\dot u}{\sqrt{R^2+u^2}}\cos(\omega_{J,R}t+\phi_R).
\end{aligned}
\tag{8.2}
$$
In order to find the function $u(t)$ that determines the movement of the SC island, a classical Hamiltonian, $H_{osc}$, is constructed for the mechanical motion. Uncoupled from the SC fixed leads, the island is a harmonic oscillator governed by the Hamiltonian $H_{osc}^{(0)}$. The energy provided to the SC island by the tunneling currents can be expressed through the relation

$$
2e\frac{\partial H_J}{\partial\phi} = I_J,
\tag{8.3}
$$
from which two Hamiltonians are obtained, one for each gap: $H_{J,L}$ from the left and $H_{J,R}$ from the right. Solving Hamilton's equation of motion for $H_{osc}=$

$H_{osc}^{(0)}+H_{J,L}+H_{J,R}$ then yields

$$
\ddot u + A(t)u - B(t)u^3 = F_L(t)/m_I,
\tag{8.4}
$$
which is a version of the well-known Duffing equation with time dependent coefficients and driving force. Depending on these coefficients and the driving force,
$$
\begin{aligned}
A(t) &\approx \frac{1}{m_I}\big[k_I + k_D(\cos(\omega_{J,R}t+\phi_R)-1)\big],\\
B(t) &\approx \frac{k_D}{2m_I R^2}\big(\cos(\omega_{J,R}t+\phi_R)-1\big),\\
F_L(t) &\approx -\frac{J_L\alpha}{e}\big(\cos(\omega_{J,L}t+\phi_L)-1\big),
\end{aligned}
\tag{8.5}
$$
equation (8.4) is fulfilled by a rich variety of solutions. $A(t)$, $B(t)$ and $F_L(t)$ are approximations valid in the weak coupling and low bias voltage limit, where $\alpha u\ll 1$ and $\Gamma_{L/R}/J_{L/R}\ll 1$, and $k_D=J_R\alpha/(eR)$ is an effective spring constant from the electromechanical coupling. By looking at the coefficients and the driving force it is clear that the coupling to the right lead provides an additional set of coefficients to the otherwise harmonic oscillator, while the coupling to the left lead creates an inhomogeneous force term. This is the case only if the angle between the motion of the island and the right fixed lead is 90 degrees; for all other angles the coefficients and the driving force would include contributions from both the left and the right coupling. For zero bias voltage, or effectively time independent coefficients, the equation of motion for the central island is solvable analytically by Jacobi's elliptic functions.
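For constant coefficients and zero driving the equation reduces to an undriven Duffing equation, and the Jacobi elliptic solution quoted in the next section, $u(t)=u_0\,\mathrm{cn}(\Omega t,k)$ with $\Omega=\sqrt{A-Bu_0^2}$ and $k=u_0\sqrt{-B/2}/\Omega$, can be checked directly. The sketch below (with arbitrary $A>0$ and $B<0$, not the physical NEMS values) compares it against straightforward numerical integration.

```python
import numpy as np
from scipy.special import ellipj
from scipy.integrate import solve_ivp

# Illustrative constant coefficients with A > 0, B < 0 (cf. the zero-bias regime);
# these are not the physical NEMS values.
A, B = 1.0, -0.5
u0 = 0.8                                   # initial conditions u(0) = u0, u'(0) = 0

Omega = np.sqrt(A - B * u0**2)
k = u0 * np.sqrt(-B / 2.0) / Omega         # elliptic modulus

t = np.linspace(0.0, 20.0, 2001)
# scipy's ellipj takes the parameter m = k**2; cn is the second returned array
_, cn, _, _ = ellipj(Omega * t, k**2)
u_elliptic = u0 * cn

# Direct numerical integration of u'' + A*u - B*u^3 = 0 for comparison
sol = solve_ivp(lambda tt, y: [y[1], -A * y[0] + B * y[0]**3],
                (t[0], t[-1]), [u0, 0.0], t_eval=t, rtol=1e-10, atol=1e-12)

print("max |u_cn - u_ode| =", np.max(np.abs(u_elliptic - sol.y[0])))
```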

8.1 Results for zero bias voltage
Figure 8.2 (a) shows phase diagrams of the possible solutions to equation (8.4) for negative and positive values of the time independent coefficients A and B under zero bias voltage and for $\phi_L=0$, such that $F_L=0$, while $\phi_R\neq 0$. $A>0$ and $B<0$ gives solutions of the kind $u(t)=u_0\,\mathrm{cn}(\Omega t,k)$, where $\mathrm{cn}(x,y)$ is the Jacobi elliptic cosine, $\Omega=\sqrt{A-Bu_0^2}$, and $k=u_0\sqrt{-B/2}/\Omega$, which are similar to the harmonic oscillator in that they have a stable singular point at the centre of oscillation. For $A<0$ and $B<0$ the island equation of motion is solved by two different kinds of Jacobi elliptic functions, depending on the values of the parameters A and B. These solution trajectories encircle either one of two stable singular points that are off-centre in the phase diagram, or all three singular points with a centre of oscillation at the saddle point in the origin ($u=0$, $\dot u=0$). The shaded part to the right in Figure 8.2 shows solutions to the equation that are unphysical, since B is always negative, with trajectories that go off to infinity. In panel (b) a schematic illustration depicts

Figure 8.2. The SC tunnel junction system under zero voltage bias, where $A(t),B(t),F(t)\to A,B,F$, and $\phi_L=0$ such that $F_L=0$. (a): Phase diagrams of the different solutions to the island equation of motion as they depend on the signs of A and B. The shaded region represents unphysical solutions. (b): Schematic illustration of the off-centre oscillatory solutions that correspond to trajectories encircling either one of the stable singular points in the bottom left panel of (a). (c): Fourier transform of the current $I_R$ as it depends on $\phi_R$ under realistic conditions. $\omega_0$ is the eigenfrequency of the uncoupled island oscillations. The gradient bars indicate amplitude in the region they cover, and the inset shows where $A>0$, $B<0$ solutions in the bottom transition into $A<0$, $B<0$ solutions at the discontinuous step.

SC island oscillations, for the $A<0$ and $B<0$ case, that are off-centre in relation to the equilibrium position of the uncoupled harmonic oscillator. In Figure 8.2 (c) the Fourier transform of the time dependent current $I_R(t)$ is presented relative to its DC component for varying values of the phase shift $\phi_R$. The image was created with parameters $J_L=J_R=0.1$ mS, $\alpha=0.01$ Å$^{-1}$, $R=10$ Å, $m_I=1$ fg, $k_I=0.01$ N/m, and the initial value was set to $u_0=0.1$ nm. The amplitude of the alternating current is given by the gradient bars, which also divide the image into the two regions where they apply. In the bottom part the SC island motion follows the $A>0$ and $B<0$ solutions depicted in the upper left panel of Figure 8.2. As the phase shift closes in on $\phi_R=\pi/2$, however, the trace takes a discontinuous step, seen in the inset, that coincides with $A\approx 0$, where the solutions are given by the lower left panel of (a) in the same figure. In the very beginning of the upper region, also seen in the inset, the trace fades away momentarily when the initial value $u_0$ coincides with the separatrix of the $A<0$ and $B<0$ solutions, where the island oscillations freeze asymptotically. For $\phi_L\neq 0$ the force term in the island equation of motion is finite, which causes a rightward shift of the oscillation centres in the phase portraits of its solutions. As the force term increases the island shies away from the left fixed lead more and more. For solutions in the $A>0$ and $B<0$ category these changes are not very dramatic, but for solutions that belong to the $A<0$ and $B<0$ case the leftmost stable singular point eventually annihilates with the

central saddle point, leaving the rightmost stable singular point behind, such that the two categories of solutions become topologically similar.

8.2 Results for finite bias voltage

When a bias voltage is applied over the double junction, $\omega_{J,L}=-\omega_{J,R}=\omega_J>0$, the island equation of motion becomes very complicated and analytical solutions are impossible to find. All results calculated under these circumstances are consequently obtained by numerical means. A sense of the outcome under different conditions can, however, be gained by looking at the current expressions (8.1) and (8.2) and the island equation of motion (8.4) with its accompanying coefficients. It is, for example, immediately clear that the amplitude of the alternating current is significantly larger when the harmonic functions of the currents become time dependent and vary between -1 and 1, in comparison to the zero bias case where all time variation of the current comes from the small oscillations of $u(t)$. The oscillation period set by the Josephson frequency, which is to say the bias voltage, enters not only the current expressions directly, but is also found in the coefficients and the force term of the island equation of motion, which implies that its solutions will behave quite differently depending on whether $\omega_J$ is of the same order of magnitude as $\omega_0$ or not.


Figure 8.3. Representations of the island motion and current IR(t) at bias voltages corresponding to ωJ/ω0 = 0.048 (bottom three images), ωJ/ω0 = 0.48 (middle three images), and ωJ/ω0 = 24 (top three images).

At low bias voltage, $\omega_J/\omega_0<0.24$, the island motion becomes quasi-periodic, rather than periodic, with a high frequency oscillation of small amplitude superimposed on a low frequency oscillation, given by $\omega_J$, of comparatively high amplitude. Physically, the island vibrates harmonically when the cosine functions of the coefficients and the force term suppress the influence of the electromechanical coupling, only to have its stable oscillation centre moved as the force term gains in strength. The bottom three panels in Figure 8.3 illustrate this process with a phase portrait of the island motion, its position over time, and a plot of the current $I_R$ as it depends on time.
When the bias voltage gives a Josephson frequency comparable to the eigenfrequency of the island, roughly in the range $0.24<\omega_J/\omega_0<6.1$, one might expect a resonance behavior with amplitude build-up. The strong nonlinearity of the equation of motion instead causes chaotic behavior, which is a well documented property of the ordinary driven Duffing equation [58]. Just as in the low bias voltage case, an example of an island motion phase portrait, and the island motion and right junction current over time, are graphed in the middle three panels of Figure 8.3. To illustrate the likely chaotic behavior of the island for intermediate bias voltages, a Poincaré map¹ from this range is compared with one obtained at high bias voltage in Figure 8.4.
High bias voltages, $\omega_J/\omega_0>6.1$, restore the island oscillations to a quasi-periodic state with a phase portrait that largely resembles one of the traces that encircle the rightmost stable singular point for $A<0$ and $B<0$ solutions of the zero bias case, as can be seen in the top left image of Figure 8.3. The remaining two images to the right show that the island position over time varies fairly harmonically and that the alternating current $I_R(t)$ is modulated by the slower vibrations of the island.

Figure 8.4. Poincaré maps of the island motion, indicating quasi-periodic and chaotic behavior, taken at the intersections $t=2\pi n/\omega_J$, where $n=0,1,2,\dots$, for high ($\omega_J/\omega_0=24$) and intermediate ($\omega_J/\omega_0=0.48$) bias voltage relative to $\omega_0$.
While difficult to manufacture experimentally, a setup of this kind is not outside the reach of current state-of-the-art fabrication and measurement techniques. The potential applications of nonlinear

¹ See section 7.

dependence and the Duffing equation in a NEMS context are plentiful. Areas of investigation include weak signal amplification, with low noise levels, based on high system sensitivity near bifurcation points [59, 60, 61, 62]. Other novel experiments utilize buckled nano-resonator beams that oscillate within the confines of a double well potential, typical of the Duffing equation, to produce mechanical quantized qubit states at low temperatures, or to construct mechanical memory bits that work at room temperature [63, 64].

9. Theory of spin inelastic tunneling spectroscopy for Josephson and superconductor-metal junctions

One of the holy grails sought in modern information technology is the ability to control the quantum states of single spins. In a time when the traditional means of information storage by charge, in solid state memory devices, is closing in on its physical limits, an orders of magnitude greater bit density in closely packed local spins that can be flipped at low energy cost is a welcome progression [65, 66]. In the more experimental field of quantum information technology, single spin manipulation could also pave the way to the realization of practical quantum computers [67]. Even though such applications are conceptually sound, a lot of practical problems remain to be solved before working versions of comparable functionality to the present state-of-the-art solid state devices can be found on the market.
Experimental setups using spin polarized scanning tunneling microscopy (SP-STM) can gauge the spin state of a magnetic atom or molecule adsorbed on, typically, a metal surface, as the current depends on the alignment of the local spin in relation to the magnetic moment of the tip. If the spin states of the local spin are non-degenerate in energy, a spectral approach can be used to identify spin states by regular STM without the need of a magnetized tip. In both cases, however, the local spin interacts with its surroundings of spin carriers in the substrate and tip and quickly changes state. From spectroscopic measurements, for example, the mean lifetime of excited states of a local spin has been shown to be on the order of picoseconds on top of a metal surface [68, 69, 70] - too short a time for the state to be usable as a temporary bit representation, since realistic clock frequencies of computer oscillators are orders of magnitude lower.
In order to increase the mean spin excitation lifetime of local spins in contact with a surface, the number of ways for the spin to exchange energy and spin angular momentum with itinerant electrons must be limited. While a number of novel ways to achieve prolonged lifetimes have been researched [71, 72], the immediate solution that comes to mind is to separate the metal substrate from the local magnetic atom or molecule with a thin insulating layer that creates a gap in the density of states around the Fermi level that is comparable to the spin excitation energies. Experimentally, layers of CuO, BN, and Cu2N all increase the mean lifetime up to hundreds of picoseconds [66, 73, 74, 75].

The idea of an insulating layer as a means to create a gapped substrate in the energy range of the spin excitation energy can be taken a step further by using a superconducting surface and tip instead. A superconductor exhibits the ideal properties of a perfect band gap, with a width equal to twice the superconducting pairing potential, yet still conducts the current needed for measurements. This translates to a physical picture where transfer of spin de-excitation energy and angular momentum to a single quasiparticle can take place only once enough energy is supplied to first break up a Cooper pair. Driven by these promising possibilities, Heinrich et al. devised and executed an experiment where they spectroscopically measured the current response of a superconducting STM whose tip floated above a local spin adsorbed on a superconducting substrate [4]. A complication with superconductors in a setup like this appears due to the formation of Shiba states within the superconducting gap, which can hinder the interpretation of peaks in the differential conductance as de-excitations of spin states. In order to minimize such effects Heinrich et al. used a paramagnetic organic molecule, e.g. M-octaethylporphyrin chloride (M-OEP-Cl), where M denotes a transition-metal element (Mn, Fe, Co, Ni, Cu), as the local spin. Here the magnetic atom is surrounded by a ligand cage that lifts the spin moment from the substrate enough to weaken the direct interaction with the substrate responsible for the spurious states, which results in a migration of the Shiba peaks to the vicinity of the large superconducting coherence peaks at the edge of the superconducting gap. An additional benefit of the ligand cage is that it provides an environment of magnetic anisotropy for the magnetic atom, which serves to effectively split the spin states in energy.
The experiments, conducted at 1.2 K using a lead covered tip and a lead substrate on which a spin 5/2 M-OEP-Cl molecule lay, clearly showed three significant results. First, any sign of inelastic interaction of the tunneling electrons with the local spin moment only gave peaks outside of the superconducting gap structure in the dI/dV spectra, at bias voltages $|eV|=\Delta_{sub}+\Delta_{tip}+\Delta_{mn}$, where $\Delta_{mn}$ is the energy difference between spin states of the magnetic molecule. That is to say, an additional channel for conduction opens up once a tunneling electron has enough energy to excite the spin ground state $|m_z=\pm 1/2\rangle$ to the first excited state $|\pm 3/2\rangle$, in addition to the energy it takes to break up a Cooper pair. Figure 9.1 (b) illustrates, in terms of DOS, how the tip and substrate match up at zero bias voltage, at current onset, and when spin excitations become possible, for SC to SC and NM to SC junctions. Second, the mean lifetime of the first excited spin state is on the order of $\tau\approx 10$ ns - long enough for a pumping action to be observed, where additional tunneling electrons interact with the excited spin state before de-excitation and assist transitions $|m_z=\pm 3/2\rangle\to|m_z=\pm 5/2\rangle$ through yet another tunneling channel. It is reasoned that these long mean lifetimes are a consequence of the energy relation between the larger superconducting gap, $\Delta_{tip}+\Delta_{sub}$ on the positive voltage side, and the smaller energy difference $E_{\pm 3/2}-E_{\pm 1/2}$. The

The excited local spin lacks the energy to break up a Cooper pair so that energy and angular momentum can be released to a substrate quasi-particle, which removes the most effective means of de-excitation for the local spin. The third observed experimental result shows that moving the STM tip closer to the surface increases the axial anisotropy of the magnetic molecule in a seemingly exponential fashion.
In response to the experimental results obtained by Heinrich et al. we attempt to model the setup using an exchange interaction Hamiltonian and a Green function representation in linear response. The Hamiltonian includes a BCS¹ description of the tip and substrate, to account for superconductivity, as well as a free particle description of the tip in order to simulate the case where one of the leads is composed of a normal metal. Electrons may tunnel between the tip and substrate, as a bias voltage is applied, through a tunneling term that encompasses both direct tunneling without interaction with the local spin and tunneling with an intermediate step where energy and angular momentum are exchanged between the electron and the local spin, see Figure 9.1 for a schematic illustration. The local spin, on the other hand, acquires a (2S + 1)-fold spectrum of eigenenergies and spin states {Eα, |α⟩}, set by the axial anisotropy field D and the transverse anisotropy field E, from the anisotropy Hamiltonian detailed in section 2.2, that can be Zeeman split by an external magnetic field in either direction. For a magnetic molecule of integer spin moment S, finite axial anisotropy, D ≠ 0, and zero transverse anisotropy, E = 0, the eigenstates are given by the basis states |Sz, mz⟩, mz = −Sz, −Sz + 1, ..., Sz, with twofold degeneracy in each energy level aside from the |mz = 0⟩ state, which is non-degenerate. A finite transverse anisotropy E ≠ 0 will transform the eigenstates into linear combinations of the basis states and split the degenerate energy levels. For half integer spin moments Sz the behavior is similar, with the distinction that the energy level degeneracy remains even though E ≠ 0, which only shifts the energy levels of the spin states slightly rather than splitting them up.
Direct interactions between the local spin moment and quasiparticles of the tip and substrate, occurring irrespectively of tunneling, are accounted for by a Kondo-like coupling Hamiltonian. These terms give rise to Shiba states within the superconducting gap that cause resonance peaks in the differential conductance, which can hinder the detection of peaks resulting from the spin exchange interaction. We do however show that it is plausible that the ligand cage of the paramagnetic molecule weakens this interaction enough to shift the energies of the Shiba states so close to the thermally broadened superconducting coherence peaks that their signatures are drowned.
The expression for the tunneling current is derived from the time derivative of the occupation number in the tip using the Heisenberg equation of motion. In linear response two current contributions arise: the first accounts for conventional single particle tunneling, while the second gives the Josephson current.

¹ Detailed in section 6.


Figure 9.1. (a) Schematic illustration of the modeled SC-STM setup. Electrons tunnel between the superconducting/normal metal (SC/NM) tip and the SC substrate either directly or through an intermediate exchange of energy and angular momentum with the local spin S of the paramagnetic molecule. A bias voltage V drives the process. (b) DOS match up of tip and substrate for progressively higher bias voltages. The leftmost image shows that no electrons tunnel at zero bias voltage. The middle image depicts how electrons start to tunnel once the Fermi level of the tip equals the lowest energy empty state of the substrate. The rightmost image shows the onset of the additional spin interaction tunneling channel.
The Josephson part will be negligible far from equilibrium, where many of our results are gathered, and will be omitted in the results that follow. The single particle tunneling term can in turn be divided into three separate terms I0, I1 and I2 that express different tunneling mechanisms. I0 accounts for direct tunneling without any interactions with the local spin; contributions from I0 are kept as a background in all following results. I1 does couple to the local spin but vanishes without spin polarized currents. I2 is the term that receives our main attention as it contains the spin-correlation functions. It allows for spin flip electron tunneling that may excite or de-excite the spin states of the paramagnetic molecule. The spin-correlation function appears on the form

\[
\sum_{\sigma\sigma'} \boldsymbol{\sigma}_{\sigma\sigma'} \cdot \langle \mathbf{S}(t)\mathbf{S}(t') \rangle \cdot \boldsymbol{\sigma}_{\sigma'\sigma}
  = \sum_{\alpha\beta} \bigl( 2\chi^{z}_{\alpha\beta} + \chi^{-+}_{\alpha\beta} + \chi^{+-}_{\alpha\beta} \bigr)\, e^{i(E_\alpha - E_\beta)(t-t')}
  \tag{9.1}
\]
in the current expression for I2, where σ_{σσ′} is the vector of Pauli matrices that belong to the spin of the tunnelling electrons, S(t) is the spin-vector of the local spin, and
\[
\chi^{z,-+,+-}_{\alpha\beta} = \langle\alpha|S^{z,-,+}|\beta\rangle \langle\beta|S^{z,+,-}|\alpha\rangle\, P(E_\alpha)\bigl[1 - P(E_\beta)\bigr].
  \tag{9.2}
\]

The sum is over all spin eigenstates |α(β)⟩ and eigenenergies Eα(β), and the state occupation P(Eα(β)) is given by the Gibbs or the Fermi-Dirac distribution functions.
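As an illustration of how the weights in equation (9.2) can be evaluated in practice, the following minimal Python sketch diagonalizes an anisotropy Hamiltonian of the standard form H = D Sz² + E(Sx² − Sy²) (assumed here to be the form referred to in section 2.2) and combines the resulting matrix elements with Gibbs occupations. All function names and parameter values are hypothetical and chosen only for illustration, not taken from papers II and III.

```python
import numpy as np

def spin_operators(S):
    """Spin matrices (Sz, S+, S-) in the basis |S, m>, m = S, S-1, ..., -S (hbar = 1)."""
    m = np.arange(S, -S - 1, -1)
    Sz = np.diag(m)
    Sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), k=1)   # <m+1|S+|m>
    return Sz, Sp, Sp.T.conj()

def chi_weights(S=1, D=1.0e-3, E=0.0, T=1.2, kB=8.617e-5):
    """Eigenenergies of H = D Sz^2 + E (Sx^2 - Sy^2) and the weights of Eq. (9.2).
    Energies in eV, T in kelvin; occupations P(E_alpha) follow a Gibbs distribution."""
    Sz, Sp, Sm = spin_operators(S)
    H = D * Sz @ Sz + E * (Sp @ Sp + Sm @ Sm) / 2.0    # Sx^2 - Sy^2 = (S+^2 + S-^2)/2
    En, U = np.linalg.eigh(H)
    P = np.exp(-En / (kB * T))
    P /= P.sum()
    Mz = U.conj().T @ Sz @ U                           # matrix elements in the eigenbasis
    Mp = U.conj().T @ Sp @ U
    Mm = U.conj().T @ Sm @ U
    occ = np.outer(P, 1.0 - P)                         # P(E_alpha) [1 - P(E_beta)]
    chi_z = np.abs(Mz) ** 2 * occ                      # <a|Sz|b><b|Sz|a> P_a (1 - P_b)
    chi_mp = Mm * Mp.T * occ                           # <a|S-|b><b|S+|a> P_a (1 - P_b)
    chi_pm = Mp * Mm.T * occ
    return En, chi_z, chi_mp, chi_pm

# Example: the spin-1 case of section 9.1, D = 1 meV, E = 0.2 meV, T = 1.2 K
print(chi_weights(S=1, D=1.0e-3, E=0.2e-3, T=1.2)[0])  # eigenenergies 0, D - E, D + E
```

The weights directly control which peaks survive at a given temperature, since a transition only contributes when the initial state carries thermal population and the final state is not fully occupied.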

Before the calculated results are presented, in the form of the differential conductance as a function of bias voltage, it is worth pointing out that this spectral image is a reflection of the underlying DOS of the tip and substrate. Superconductor to superconductor tunneling will therefore give a sharp peak structure at the onset of each new tunneling channel, which falls off to the usual stepped increase, since the filled and empty sides of the DOS on either side of the superconducting gap begin with the pronounced coherence peaks, see Figure 9.1 (b). For SC to NM tunneling the NM lead has a flatter DOS, which gives less distinct features in the differential conductance spectra, and for NM to NM tunneling, where both leads have flat DOS, new conduction channels give staircase-like steps in the differential conductance.
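To make the link between the lead DOS and the shape of the dI/dV curves concrete, here is a small Python sketch that evaluates the direct tunneling channel as a textbook convolution of a (BCS or flat) tip DOS with the substrate BCS DOS and Fermi functions, and differentiates the result numerically. The Dynes-style broadening, the energy grids and the gap values are numerical conveniences of this illustration and are not taken from papers II and III.

```python
import numpy as np

def bcs_dos(E, Delta, gamma=1e-3):
    """Normalized BCS density of states (energies in meV); the small Dynes
    parameter gamma only regularizes the square-root singularity numerically."""
    z = E + 1j * gamma
    return np.abs(np.real(z / np.sqrt(z * z - Delta * Delta)))

def fermi(E, T, kB=8.617e-2):   # kB in meV/K
    return 1.0 / (np.exp(E / (kB * T)) + 1.0)

def didv(V, Delta_tip, Delta_sub, T):
    """dI/dV of the direct channel, I(V) ∝ ∫ dE n_tip(E - eV) n_sub(E) [f(E - eV) - f(E)].
    Delta_tip = 0 gives a flat normal-metal tip, so the same routine covers SC-SC and NM-SC."""
    E = np.linspace(-8.0, 8.0, 4001)
    dE = E[1] - E[0]
    n_tip = bcs_dos(E[:, None] - V[None, :], Delta_tip)
    n_sub = bcs_dos(E, Delta_sub)[:, None]
    occ = fermi(E[:, None] - V[None, :], T) - fermi(E, T)[:, None]
    I = (n_tip * n_sub * occ).sum(axis=0) * dE          # current, arbitrary units
    return np.gradient(I, V)

V = np.linspace(-4.0, 4.0, 801)
g_sc_sc = didv(V, Delta_tip=0.5, Delta_sub=0.5, T=1.2)  # sharp coherence-peak onsets
g_nm_sc = didv(V, Delta_tip=0.0, Delta_sub=0.5, T=0.5)  # smoother, thermally smeared steps
```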

9.1 Results for a spin 1 magnetic molecule
The experiment conducted by Heinrich et al. involved a paramagnetic molecule with an effective spin 5/2 moment, but for illustrative purposes the initial results given here concern the case of a spin 1 local moment. For a positive axial anisotropy, D > 0, the three spin eigenstates, |mz = −1, 0, 1⟩, generate two different energy eigenvalues E0 and E± that are related by E± = E0 + D. The spin ground state is hence |mz = 0⟩ when D is positive. For negative D the situation is the opposite and |mz = ±1⟩ are the degenerate ground states. For nonzero transverse anisotropy E ≠ 0 the degenerate |mz = ±1⟩ states are split up and modified into the linear combinations |E±⟩ ≡ (|mz = −1⟩ ± |mz = 1⟩)/√2 on the energy levels E± = E0 + D ± E. A schematic illustration of the spin state energies under different anisotropy conditions for the paramagnetic molecule can be seen in Figure 9.2 (a), where arrows indicate the possible tunneling electron aided spin excitations. The corresponding calculated dI/dV spectra for SC to SC and NM to SC tunneling are given in panels (b) and (c), generated with the parameter values: temperature T = 1.2 K, ∆tip/sub = 0.5 meV, D = 1 meV, T1 = 0.3T0 and varying E. The same parameters are used for the NM to SC case except for the lower temperature of T = 0.5 K. Note that the dI/dV curves in all following figures are offset for clarity and comparability and that the vertical scale only applies to the bottom curve.
The SC to SC spectrum in the bottom of Figure 9.2 (b) clearly shows a peak at eV = ±(∆tip + ∆sub + D) that corresponds to additional electrons tunneling once the excitation energy for |E0⟩ → |E±⟩ transitions is available through biasing. For increased values of the transverse anisotropy the peak divides into two that separate linearly with increasing E, as expected (indicated by the dashed lines). In this process a third peak, emerging from the main superconducting coherence peak, can be seen moving towards higher energy values (indicated by the dotted lines). This peak results from spin preserving transitions between the higher energy states |E−⟩ → |E+⟩ that are forbidden when E = 0 due to conservation of angular momentum.
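As a quick check of these statements, the S = 1 case can be worked out by hand, assuming the anisotropy Hamiltonian has the standard form used above; in the basis {|1⟩, |0⟩, |−1⟩},
\[
\mathcal{H}_S = D S_z^2 + E\,(S_x^2 - S_y^2) =
\begin{pmatrix} D & 0 & E \\ 0 & 0 & 0 \\ E & 0 & D \end{pmatrix},
\]
so |0⟩ stays at E0 = 0 while the |mz = ±1⟩ doublet splits into |E±⟩ = (|−1⟩ ± |1⟩)/√2 at E± = E0 + D ± E, which is exactly the level scheme sketched in Figure 9.2 (a).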


Figure 9.2. (a) Schematic illustration of the spin state energy levels and possible excitations for D > 0, E = 0 and D > 0, E ≠ 0. (b) dI/dV spectrum for SC to SC tunneling for three different values of E at T = 1.2 K. (c) dI/dV spectrum for NM to SC tunneling for three different values of E at T = 0.5 K. The coloured curves are offset for clarity and comparability.

The amplitudes of the different peaks are strongly dependent on the thermal populations, P0, P−, P+, of the spin states involved in the transitions, which is readily seen in equation (9.2). At E = 0.1 meV and kBT ≈ 0.1 meV the excitations |E−⟩ → |E+⟩ are prevalent enough to produce a small peak at the intersection with the dotted line leaning towards the right. A faint trace is even visible (indicated by an arrow) from de-excitations |E+⟩ → |E−⟩ that lift a lead quasiparticle energy-wise over the edge to possible tunneling. At E = 0.2 meV the separation between |E−⟩ and |E+⟩ is greater, and the more/less thermally populated states |E−⟩/|E+⟩ favor excitations over de-excitations as the factor (1 − P+)P− gets bigger while (1 − P−)P+ gets smaller. A similar reasoning explains why the |E0⟩ → |E±⟩ transition peaks, which separate around eV = 2 meV, differ in amplitude. The |E−⟩ state gets more populated as it acquires a lower energy, which hinders transitions into it, while the opposite is true for the |E+⟩ state, which becomes less populated with increased energy. In Figure 9.2 (c), depicting NM to SC tunneling, the peaks are thermally broadened, which is correct but obfuscates some of the finer details. In the SC to SC case the peaks are too sharp, which indicates that the theory fails to account for thermal broadening in this case.

9.2 Results for a spin 5/2 magnetic molecule
Our spectrum for a spin 5/2 paramagnetic molecule matches the experimentally obtained counterpart from reference [4] very nicely. For uniaxial and transverse anisotropies D > 0 and E = 0 the spin can occupy one of the three doubly degenerate states |±1/2⟩, |±3/2⟩, and |±5/2⟩ at the corresponding energy levels E±m/2 = Dm²/4. In Figure 9.3 (a,1), where the parameter values given by Heinrich et al.² are used, the |E±1/2⟩ → |E±3/2⟩ excitation peaks are reproduced, appearing only at |eV| = 2∆ + 2D.

² See the caption of Figure 9.3.

In addition, we also reproduce peaks following the higher order transitions |E±3/2⟩ → |E±5/2⟩ at |eV| = 2∆ + 4D along the dotted line, which were observed as a result of electron pumping, by increasing the state population through a uniform energy shift in the spin state energy levels. Experimentally this peak is a sign that the mean lifetime of the excited spin state |E±3/2⟩ is long enough for interaction with a consecutive tunneling electron, which excites the spin to the even higher |E±5/2⟩ state before de-excitation.
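For orientation, the peak positions quoted above follow directly from the level scheme E±m/2 = Dm²/4 and the parameters listed in the caption of Figure 9.3 (D = 0.7 meV, ∆tip/sub = 1.35 meV):
\[
\varepsilon_{1/2\to 3/2} = \tfrac{9D}{4} - \tfrac{D}{4} = 2D, \qquad
\varepsilon_{3/2\to 5/2} = \tfrac{25D}{4} - \tfrac{9D}{4} = 4D,
\]
so that |eV| = 2∆ + 2D ≈ 4.1 meV for the first excitation channel and |eV| = 2∆ + 4D ≈ 5.5 meV for the pumped one.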

Figure 9.3. (a,1) dI/dV spectra for a spin 5/2 paramagnetic molecule under different population numbers of the spin states. The parameters D = 0.7 meV, E = 0, ∆tip/sub = 1.35 meV, T = 1.2 K, and T1 = 0.3T0 are used to match those given in [4]. (a,2) dI/dV spectra as in (a,1) for a fixed population and different values of E (note the difference in scale to (a,1)). (b,1) and (b,2) schematically depict the two de-excitation processes of thermally populated higher energy states responsible for the in-gap peak structures shown in (b,3) and (b,4). (b,3) and (b,4) dI/dV curves for a S = 1 and a S = 5/2 system, respectively, at different temperatures. Other parameters equal those of the bottom curve in (a,1).

While experimental data point towards a vanishing transverse anisotropy for the considered system under the measured conditions, we are free to explore a wider parameter space in theory. In Figure 9.3 (a,2) the dI/dV spectra are plotted for the positive values E = 0.1 meV and E = 0.2 meV as well as for E = 0 meV. The most striking feature to appear as E gets bigger is the formation of an additional peak along the dash-dotted line starting at |eV| = 2∆ + 6D. This peak is the manifestation of a new conduction channel made possible by the mixing of spin eigenstates that results from a finite E. The spin states form linear combinations on the form |E±m⟩ = ∑_{n=1,3,5} α^{(m)}_{±n/2} |mz = ±n/2⟩ under the modified Hamiltonian, with the upshot being that transitions between the lowest energy spin state, weighted on |mz = ±1/2⟩, and the highest, weighted on |mz = ±5/2⟩, no longer are forbidden by conservation of angular momentum.

Density is simply distributed among the spin basis states to facilitate tunneling electron spin flips, ∆mz = ±1, under |E1⟩ → |E3⟩ excitations with nonzero probability. Apart from paving the way for a new conduction channel, a varying transverse anisotropy E also shifts the spacing of the spin states in energy, as is readily seen in Figure 9.3 (a,2).
The lack of in-gap dI/dV peaks in the spectra presented so far is very much a result of the low temperatures at which the calculations are done. By increasing the temperature to T = 4 K and even T = 6 K it can be seen in Figure 9.3 (b,3) and (b,4) that spin state de-excitations assist tunneling electrons to produce in-gap peak structures as the higher energy spin states get thermally populated. The two processes responsible for these peak structures are schematically illustrated in (b,1) and (b,2). First, as the temperature rises the Fermi function, which governs the occupation of the spin states, stretches to give a finite value at energies corresponding to the higher spin states, depicted as a blue, red and green curve in Figure 9.3 (b,1). These partially occupied states de-excite to assist tunneling and give an in-gap peak at the mirror position of the corresponding excitation peak with respect to the main coherence peak. In (b,3) the de-excitation peak is located at eV = 2∆ − D and the corresponding excitation peak is located at eV = 2∆ + D for a S = 1 system. Second, as the Fermi function governing the occupation of the tip and substrate quasi-particles stretches with increased temperature, the above gap states that used to be empty eventually fill up to a slight extent, see Figure 9.3 (b,2). Tunneling is then possible without application of a bias voltage, which is reflected in the dI/dV curves as a central peak at eV = 0. Apart from tunneling directly, these electrons may also excite the local spin just as before once the bias voltage reaches the excitation energy of the second lowest spin state, effectively adding a conduction channel. In (b,4) the results can be seen clearly as one peak appears at eV = 0 and two peaks appear at |eV| = 2D.
The spin states for the spin 5/2 adsorbed molecule have so far been doubly degenerate. By applying an external magnetic field B, however, these states will Zeeman-split, giving access to a wide variety of possible transitions. When applying an external magnetic field, care has to be taken not to quench the superconductivity of the tip and substrate with magnetic fields that are too strong. One way to ensure a greater stability to magnetic fields is to choose tip and substrate materials, such as NbTi, Nb3(Sn,Ge,Al) and MgB2 [76, 77, 78], that maintain the superconducting state under magnetic fields of several tesla. Figure 9.4 (a) illustrates how the spin states split up and what transitions are possible under different values of the magnetic parameters D, E and B. For an external magnetic field positive in the z-direction and a transverse anisotropy E = 0 there are, for example, 5 possible transitions. Traces of these can be seen forking off in the dI/dV spectra of Figure 9.4 (b). Close to the main coherence peak the transitions |E−1/2⟩ ↔ |E1/2⟩ give clearly visible signatures on each side.

As |E1/2⟩ goes up in energy and becomes less populated, |E−1/2⟩ goes down in energy and gains occupation, which explains why excitations are favoured over de-excitations, resulting in markedly different amplitudes of the two peaks. Further up in bias voltage, starting at eV = 4 meV, the excitations |E−1/2⟩ → |E−3/2⟩ and |E1/2⟩ → |E3/2⟩ become responsible for two distinct peaks as Bz gets bigger. The difference in energy between the two peaks is a result of the ground states |E±1/2⟩ splitting up less than the states |E±3/2⟩ under a given Bz, causing the energy E−3/2 − E−1/2 to be smaller than E3/2 − E1/2, see Figure 9.4 (a). The difference in amplitude mainly follows the occupation of the lower energy spin states |E−1/2⟩ and |E1/2⟩, which strongly depends on energy.
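A simple way to enumerate which transitions become available once the levels are Zeeman-split is to add a field term to the same kind of model spin Hamiltonian and scan for eigenstate pairs connected by the ladder operators. The sketch below is self-contained but purely illustrative; the Zeeman form g μB Bz Sz, the g-factor and the tolerance are assumptions of this illustration.

```python
import numpy as np

def allowed_transitions(S=5/2, D=0.7e-3, E=0.0, Bz=2.6, g=2.0, muB=5.788e-5, tol=1e-9):
    """List eigenstate pairs (a, b) connected by S+ together with E_b - E_a (eV).
    Bz in tesla; the Zeeman term g*muB*Bz*Sz and the g-factor are assumptions."""
    m = np.arange(S, -S - 1, -1)                       # m = S, S-1, ..., -S
    Sz = np.diag(m)
    Sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), k=1)
    Sm = Sp.T.conj()
    H = D * Sz @ Sz + E * (Sp @ Sp + Sm @ Sm) / 2 + g * muB * Bz * Sz
    En, U = np.linalg.eigh(H)
    Mp = U.conj().T @ Sp @ U                           # <b|S+|a> in the eigenbasis
    return [(a, b, En[b] - En[a])
            for a in range(len(En)) for b in range(len(En))
            if b != a and abs(Mp[b, a]) ** 2 > tol]

# For D > 0, E = 0 and Bz > 0 this yields the five Delta_m = +1 transitions noted in the text.
print(allowed_transitions())
```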

Figure 9.4. (a) Schematic illustration of the spin state energy levels and possible transitions under different values of the magnetic parameters D, E and B. (b) dI/dV spectra of the STM setup for three different values of an applied external magnetic field. For E > 0 the number of peaks is increased, as the dashed lines indicate.

As for the case without an external magnetic field, a nonzero transverse anisotropy E ≠ 0 will change the spin eigenstates into linear combinations of the basis states, and transitions between any of the spin states can take place with nonzero probability also when B ≠ 0, see Figure 9.4 (a) for a schematic illustration. While most such transitions are too rare to leave a trace in the calculated dI/dV curves, it is apparent in Figure 9.4 (b) that additional peaks emerge, e.g. for all permutations of the |E±1/2⟩ → |E±3/2⟩ excitation around eV = 4 meV four peaks can be seen instead of two.

9.3 Anisotropy dependence on tip to sample distance
When varying the tip to sample distance during the experiments, Heinrich et al. discovered that the magnetic anisotropy acting on the local spin changed. By closing in on the sample the axial anisotropy D grew, and by moving away it got smaller, seemingly in an exponential manner. In order to investigate the phenomenon through our microscopical model, the methods applied in references [79, 80, 81, 82] were used. An effective spin Hamiltonian

\[
\mathcal{H}_S^{\mathrm{eff}} = \mathcal{H}_S + \mathcal{D}\,\mathbf{S}\cdot\mathbf{S} + 2\mathcal{F}\,S_z^2,
  \tag{9.3}
\]

is constructed out of the Hamiltonian for the whole STM setup. The fields 𝒟 and ℱ are given by interactions with the tunnelling electrons and the tip/substrate quasiparticles and Cooper pairs. These can be written in the separate terms 𝒟c/ℱc, 𝒟t/ℱt and 𝒟s/ℱs for interactions with the current, tip and substrate respectively. Mathematically all contributions share the same structure, where ℱ is expressed in terms of anomalous Green functions that correlate Cooper pairs, while 𝒟 includes both ℱ and a similar term of regular Green functions that correlate electrons. The tip and substrate terms 𝒟t/s/ℱt/s are the zero bias voltage limit of 𝒟c/ℱc in structure.
In order to make analytical progress with the expressions for the energies 𝒟 and ℱ, the tip and substrate are considered to be in local equilibrium, such that the fluctuation-dissipation theorem holds and the relation A^{</>}(ω) = ±i f(±ω)[−2 Im A^r(ω)] can be used. A few conclusions can then be drawn for the current induced energies 𝒟c and ℱc. First of all, ℱc is proportional to the harmonically oscillating factor e^{−i2eVt} that will cause a temporal fluctuation in the spin spectrum. Since the spectral STM measurements are done in the long time limit, however, ℱc will be discarded under finite bias voltage. Secondly, at zero bias voltage 𝒟c is purely real, which implies that it only contributes a uniform energy shift of the whole spin spectrum, since 𝒟 S · S = 𝒟 S², and will not affect the spin state level spacing. The effects of the current induced magnetic anisotropy can hence be discarded for our purposes when the bias voltage is zero, and we can instead focus on the tip and substrate induced anisotropies in the last term of the effective spin Hamiltonian, 2ℱ Sz².
By introducing the explicit form of the bare retarded anomalous Green function, ℱt/s can be shown to give a finite value that falls off logarithmically with a high energy cut-off. Given any such high energy cut-off, ℱt/s ∝ (Ts ntip/sub |∆tip/sub|)², which carries the crucial information that the magnetic anisotropy induced by Cooper pair correlations in the tip and substrate is proportional to the coupling strength, Ts, of direct energy exchange with the local spin. Combined with the notion that the energy eigenvalues of the effective spin Hamiltonian for a spin 5/2 local moment, E_{n/2} = (n/2)²(D + 2ℱ), provide the transition energies ε1 = E_{3/2} − E_{1/2} = 2(D + 2ℱ) and ε2 = E_{5/2} − E_{3/2} = 4(D + 2ℱ), giving the constant ratio ε1/ε2 = 1/2, it is clear that the overall uniaxial anisotropy produces spin excitation energies that vary with the exponentially distance dependent parameter Ts.
It becomes more difficult to draw general conclusions under finite bias voltage, due to the fluctuating nature of the induced anisotropy fields, even though 𝒟c simply shifts the whole spectrum. We therefore leave a more thorough investigation to the future and propose that our findings account for at least some of the distance dependent effects on the axial anisotropy observed experimentally.
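The zero-bias conclusion can be condensed into a two-line sketch: if the direct coupling decays exponentially with tip height, Ts(z) ∝ e^{−z/λ}, then ℱ ∝ Ts² inherits a decay rate 2/λ and the excitation energies grow exponentially as the tip approaches. The numbers D, F0 and lam below are illustrative assumptions only.

```python
import numpy as np

def excitation_energies(z, D=0.7e-3, F0=0.1e-3, lam=0.5):
    """Tip-height dependence of the spin excitation energies, assuming
    T_s(z) ∝ exp(-z / lam), so that F(z) = F0 * exp(-2 z / lam) since F ∝ T_s^2.
    D and F0 in eV; z and lam in the same (arbitrary) length unit."""
    F = F0 * np.exp(-2.0 * z / lam)
    eps1 = 2.0 * (D + 2.0 * F)   # E_{3/2} - E_{1/2}
    eps2 = 4.0 * (D + 2.0 * F)   # E_{5/2} - E_{3/2}; eps1/eps2 = 1/2 for every z
    return eps1, eps2
```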

9.4 Concluding remarks
In conclusion, our exchange interaction model manages to reproduce the experimentally obtained differential conductance spectra gathered in measurements of, e.g., Fe-OEP-Cl and Mn-phthalocyanine [4, 83]. These show peak structures from additional conduction channels, due to tunneling electron induced spin excitations of the paramagnetic molecule, at bias voltages that correspond to the sum of the superconducting pairing potentials of the tip and substrate and the spin excitation energy. The observed effects of electron pumping, which facilitates spin transitions between states above the spin ground state, are also successfully mimicked. We demonstrate that direct interactions between the local spin and the tip/substrate under weak coupling conditions, assumed by ligand cage separation, cause Shiba states that lie close to the main superconducting coherence peaks in the dI/dV spectra. In addition, direct interactions between the local spin and tip/substrate Cooper pairs are shown to induce a finite contribution to the uniaxial anisotropy that acts on the magnetic centre of the studied molecule. We argue that this effect at least partially explains the variation in uniaxial anisotropy found experimentally when changing the tip to sample distance.
Apart from reproducing experimental findings, a wider parameter space was explored in terms of uniaxial and transverse anisotropy fields, external magnetic fields, and different system temperatures. For illustrative purposes we also considered a local spin with magnetic moment S = 1, as well as the situation where one of the leads is made up of a normal metal rather than a superconducting material. Basically all features of the SC to SC case are reproduced in the NM to SC case, with the key difference that the conduction onset is shifted to eV = ∆sub + ε1/2→3/2 rather than eV = 2∆sub + ε1/2→3/2 for the first spin interaction channel. The thermal broadening of all dI/dV peaks does however obscure details to a higher degree. It is also probable that the prolonged de-excitation lifetimes of the occupied |mz = 3/2⟩ state will diminish, as electrons in the NM side are available for exchange of energy and angular momentum.
While the uniaxial anisotropy sets a level spacing for the S = 5/2 states, the transverse anisotropy links the states in linear combinations that enable a richer variety of possible transitions. In combination with an external magnetic field that Zeeman-splits all degenerate states, the number of such transitions is large, even though most will be undetectable since only the lowest energy spin states are significantly populated. Increased temperatures will, however, thermally occupy higher energy states to a larger degree, which makes excitations from states above the ground state more numerous. Higher temperatures also make de-excitations from higher spin states occur more often, and an outstretched Fermi function can populate the empty electron states above the superconducting gap. Both of these effects produce in-gap peaks in the dI/dV curves that possibly explain some of the signatures found in Mn-phthalocyanine measurements [83].

We propose that an extended experimental study on adsorbed paramagnetic molecules could benefit from the introduction of an external magnetic field. Transitions between the Zeeman-split ground states could be of particular interest, as the separation is tunable, with de-excitation energies below the breakup point of tip/substrate Cooper pairs.

10. Molecular graphene under the eye of scattering theory

Few discoveries have made headlines in the scientific and general media on a scale comparable to that of graphene, after it was first synthesized and evidence of massless Dirac fermions was gathered [84, 85, 86, 87, 88, 89]. Graphene is truly remarkable as one of the first 2-dimensional materials ever constructed, with a host of record breaking properties [90]. Some of these properties, especially those concerning the electronic benefits of massless Dirac fermions, can possibly be replicated in an engineered fashion by trapping particles in a 2-dimensional hexagonal pattern [91]. Examples include hexagonally confined photons [92, 93], ultrahigh-mobility electron gases [94], ultracold atoms trapped in optical lattices [95, 96, 97] and metallic surface electrons bound by the potentials of adsorbed molecules placed by STM probes [98]. Through these kinds of experimental setups, phenomena like topological [99] and quantum spin Hall insulators [100, 101] could be studied, as well as nontrivial strongly correlated phases [102].
This summary covers our published paper IV, where a scattering theory developed for the modeling of STM measurements [21], detailed in section 5, is used to investigate the electronic structure of molecular graphene. The theory works with finite boundary conditions of any shape, in contrast to tight binding models that demand an infinite lattice [103, 104, 105]. Molecular graphene is engineered by letting adsorbed molecules adhere to the triangular lattice structure of a metallic surface. The metallic surface electrons are essentially trapped in 2 dimensions, where the molecules are seen as potential hills between which a hexagonal pattern of electron highways forms, see Figure 10.1 (a). The theoretical approach simulates an STM device measuring the current response to a bias voltage between tip and substrate. This response gives the (differential) conductance, dI/dV, which is proportional to the DOS of the tip and the local DOS of the substrate. In order to minimize the coloring of the data by the tip, it is purposefully made of a material with a flat DOS and therefore treated as constant, ntip(εF − eV) ≈ ntip, throughout our calculations. By averaging over space around the scattering centers, the surface electron DOS is extracted to values that are in excellent agreement with experiment [98]. Apart from matching experimental data, the flexibility of the theory allows us to study impurity scattering, of both vacancy and impurity defects, as well as to simulate electron and hole doping through variation of the lattice parameter. The effects of magnetic fields are also considered by imposing strain on the substrate lattice.

In mathematical terms, the surface electrons of the substrate are modeled by the Hamiltonian for a 2-dimensional electron gas of free particles. Adsorbed molecules are modeled as scattering centers by Dirac delta-function potentials at positions Rn given by the regular triangular lattice of the substrate. The triangular nature of the lattice ensures that the scattering centers migrate electron density to the voids between them, creating a hexagonal pattern of electron highways. The local DOS is given by the imaginary part of the retarded Green function of the surface electrons under the influence of the scattering environment. These dressed Green functions, defined at a given point on the metallic surface, are calculated by a T-matrix expansion that covers direct interactions from the amount of electron amplitude that returns after one scattering event, as well as indirect scattering from electron amplitude that takes the second order route via two scatterers before returning a contribution to the initial point. To achieve spatial extension of the molecular scatterers - rather than the infinitesimal width of the delta-functions - they are simulated by several scattering centers positioned in a hexagonal pattern a few Ångström wide. The hexagonal shape gives a good compromise between DOS results and calculation effort, as can be seen in Figure 10.1 (b), where DOS spectra for single, triangle and hexagon scatterers are plotted over the energy range −0.2 eV to 0.2 eV around the Fermi level. The DOS curve changes significantly towards a more linear spectrum once the scatterers go from a single scattering centre up to the hexagonal shape, after which the changes are less dramatic.
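To indicate how such a calculation can be organized, here is a minimal Python sketch of a point-scatterer T-matrix construction: a free 2D retarded Green function, a matrix inversion over the scatterer positions, and the local DOS from the imaginary part of the dressed Green function. The material parameters (HB2M, E0), the scatterer strength v0, the broadening eta, the on-site regularization and the toy geometry are all assumptions chosen for illustration; the actual implementation of paper IV may differ in these details.

```python
import numpy as np
from scipy.special import hankel1

HB2M = 10.0                 # hbar^2 / (2 m*) in eV*Angstrom^2 (roughly Cu(111)-like)
E0 = -0.44                  # assumed surface-state band bottom in eV
PREF = 1.0 / (4.0 * HB2M)   # m* / (2 hbar^2), prefactor of the 2D Green function

def g0(d, E, eta=1e-3):
    """Free 2D retarded Green function G0(|r - r'| = d, E) = -i (m*/2hbar^2) H0^(1)(k d)."""
    k = np.sqrt((E - E0 + 1j * eta) / HB2M)
    return -1j * PREF * hankel1(0, k * np.maximum(d, 1e-9))

def ldos(r, R, E, v0=50.0):
    """Local DOS at point r for delta scatterers of strength v0 (eV*A^2) at the rows of R.
    The on-site G0 keeps only its finite imaginary part; the divergent real part is
    assumed to be absorbed into the effective v0 (a common regularization choice)."""
    N = len(R)
    D = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1)
    G = g0(D, E)
    G[np.arange(N), np.arange(N)] = -1j * PREF           # regularized on-site value
    T = v0 * np.linalg.inv(np.eye(N) - v0 * G)           # T = v0 (1 - G v0)^(-1)
    g_r = g0(np.linalg.norm(R - r, axis=-1), E)          # G0(r, R_n)
    G_full = -1j * PREF + g_r @ T @ g_r                  # only Im G is needed for the DOS
    return -np.imag(G_full) / np.pi

# Example: LDOS at the centre of a ring of six scatterers (toy geometry, not the full lattice)
a = 19.2
angles = np.linspace(0, 2 * np.pi, 6, endpoint=False)
R = a * np.column_stack([np.cos(angles), np.sin(angles)])
print(ldos(np.zeros(2), R, E=0.0))
```

The same routine, evaluated on a grid of observation points and averaged over the hexagonal voids, is what produces DOS curves of the kind shown in Figure 10.1.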

Figure 10.1. (a) Topograph of the electron local DOS for a sample of 45 hexagonal molecular scatterers of 7 Dirac delta-function potentials each (indicated by dots). The overlaid triangular and hexagonal patterns show the molecular lattice structure and electron density high points respectively. (b) Electron DOS in the hexagonal spaces in-between scatterers for different numbers of scattering centers and shapes of the scatterers. (c) DOS corresponding to that of (b) for different diameters of hexagonal scatterers. (d) DOS for the different sample sizes 5 × 6, 9 × 10, 13 × 14 and 17 × 18 scatterers. Realistic parameter values for a Cu(111) surface were used to generate all images. A hexagonal scattering diameter of 4.8 Å was used to create (a), (b) and (d).

The DOS spectrum is also sensitive to the size of the molecular scatterers, and for a lattice parameter of a = 19.2 Å a hexagonal diameter between 2 and 4.8 Å is found to give a linear spectrum that does not show the point-like behavior of smaller diameters or the double-well features of larger diameters, see Figure 10.1 (c).

Apart from scatterer specific effects, the DOS will change depending on the total sample size. Figure 10.1 (d) indicates that 13 × 14 hexagonal scatterers of diameter 4.8 Å in a square shape are sufficient to produce the characteristic linear DOS spectrum of graphene about the Dirac point, as verified in reference [98].
The scattering theory approach is very versatile and easy to adapt to different conditions, which we illustrate in a few examples. The effects of electron and hole doping, for example, can be achieved by varying the lattice parameter a. From a given outset, a stretched parameter weakens the electron confinement, which results in a shift of the DOS spectrum downwards in energy, corresponding to hole doping. A shortened lattice parameter will increase the confinement and shift the DOS spectrum upwards in energy, corresponding to electron doping. Both processes conserve the overall shape of the DOS spectrum to a large extent, see Figure 10.2 (a) for a calculated example. More elaborate lattice geometries, such as the Kekulé pattern, give DOS results that match experiment excellently [98]; as can be seen in Figure 10.2 (b), the DOS is gapped at the Dirac point in between strong peaks. By displacing the scatterers of the regular lattice about a point using the polar coordinate displacements (ur, uθ) = (q r² sin 3θ, q r² cos 3θ), the effect of strain, or a pseudo magnetic field, on the molecular graphene can be studied, where q sets the strength of the strain field; a short sketch of this displacement is given after this paragraph. In Figure 10.2 (c) one can read from the local DOS over the strained lattice at the Fermi level, taken at q = 10⁻³ Å⁻¹ which corresponds to a magnetic field of 60 T, that the pseudo-spin symmetry is broken. Areas that belong to the A sublattice light up as bright spots of high density, while areas that belong to the B sublattice are dark and depleted of density, just as for real graphene. The local DOS spectra in Figure 10.2 (d), taken for the respective sublattices, clearly show a Dirac point mismatch between the two sublattices A and B as well.
The scattering theory also lends itself to modeling the effects of impurity scattering off defects or off vacancies, as scattering points can be added to or removed from any position. For real graphene it has been shown, by nearest-neighbor interaction models [106, 107, 108], that scattering off a single vacancy potential gives rise to a resonance in the local DOS below the Dirac point within the linear spectrum. The theory is based on a Hamiltonian that accounts for nearest-neighbor hopping, which offers a connection between the sublattices A and B, and scattering within the sublattice hosting the impurity. Furthermore, a T-matrix approach to calculating the local DOS from this Hamiltonian shows that the divergent factor, responsible for the DOS resonance, only appears in the expression for the local DOS of the sublattice not hosting the vacancy impurity. A pattern of increased local DOS, at the resonance energy, can therefore be expected to spread from the vacancy site following a threefold spatial symmetry.
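Since the strain field introduced above only moves the scattering centres, it is straightforward to apply in a calculation of this kind; the sketch referred to in the text displaces a set of lattice positions according to (ur, uθ) = (q r² sin 3θ, q r² cos 3θ). The helper name and the default value of q are placeholders.

```python
import numpy as np

def strain_displace(R, q=1e-3):
    """Displace lattice sites R (N x 2, Angstrom) by the triaxial strain field
    (u_r, u_theta) = (q r^2 sin 3θ, q r^2 cos 3θ); q in 1/Angstrom."""
    x, y = R[:, 0], R[:, 1]
    r, th = np.hypot(x, y), np.arctan2(y, x)
    ur, ut = q * r**2 * np.sin(3 * th), q * r**2 * np.cos(3 * th)
    # Convert the (radial, azimuthal) displacement back to Cartesian components
    dx = ur * np.cos(th) - ut * np.sin(th)
    dy = ur * np.sin(th) + ut * np.cos(th)
    return R + np.column_stack([dx, dy])
```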

Figure 10.2. (a) Electron DOS for different values of the lattice parameter a. The rightmost dotted curve represents hole-doped molecular graphene, the solid middle curve represents a nearly neutral case and the leftmost dashed curve is the density spectrum when electron doped. (b) Surface electron DOS for scatterers placed in the Kekulé pattern. (c) Local DOS for molecular graphene under an applied pseudo magnetic field corresponding to 60 T. (d) Energy spectra of the electron DOS for the A (solid) and B (dashed) sublattices.

In the scattering theory we have adapted to molecular graphene, a vacancy site is modeled by adding scattering points where the hexagonal high electron density paths intersect, see the inset of Figure 10.3 (b). Electron density then depletes from the intersection, effectively simulating the absence of a carbon atom in regular graphene. In Figure 10.3 (a) the local DOS spectra at sites adjacent to the vacancy are plotted for different magnitudes of the scattering potential. The varied strength of the potential is accomplished by changing the number of scattering points within the scatterer from 7 to 13, 25 and 37, as indicated in the figure. The spectral images show a clear build-up of a resonance in the linear part of the curves below the Dirac point for an increased number of scattering points. To give a more apparent picture of the progressive change, Figure 10.3 (b) depicts the same data as (a) but divided by the unperturbed spectra. The arising structure can be seen to take the shape of a well defined peak for potentials with several scattering points. Topographical representations of the calculated spectra are shown in Figure 10.3 (c) and (d) for 7 and 13 scattering points respectively. The expected pattern of increased density follows the predicted shape of a threefold symmetric signature in the B sublattice. The intensity also shows that stronger scattering potentials lead to a signature with more dramatic changes in the local DOS and higher peaks. Calculations done with the discussed nearest-neighbor hopping model on real graphene suggest an even stronger mid-gap peak in the B sublattice in response to a vacancy in the A sublattice. This discrepancy is reasonable since the hexagonal confinement in engineered molecular graphene is weaker than in actual graphene and because the effective potential is still within the weak limit, even for 37 scattering points.
If we, instead of adding scattering points, which depletes the local electron density, remove a scatterer from the triangular lattice, the effect will be that electron density fills the void.

In Figure 10.3 (e) this situation is depicted, and a spot of strong intensity can be seen in the middle of the image, where the removed scatterer creates a defect in the lattice. The defect sits between both sublattices and hence couples to both on an equal footing, which explains the sixfold symmetric pattern that emerges.
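In the same spirit, the two kinds of defects discussed here amount to nothing more than editing the list of scatterer positions that enters the T-matrix. The following sketch shows both operations; the function names and the packing radius are hypothetical.

```python
import numpy as np

def add_vacancy(R, site, n_extra=7, radius=1.5):
    """Model a vacancy by packing n_extra scattering points around 'site' (length-2 array,
    Angstrom), i.e. where the high-density paths intersect, so that density depletes there."""
    ang = np.linspace(0, 2 * np.pi, n_extra - 1, endpoint=False)
    extra = site + radius * np.column_stack([np.cos(ang), np.sin(ang)])
    return np.vstack([R, site, extra])

def remove_scatterer(R, index):
    """Model the opposite defect by removing one molecular scatterer from the lattice."""
    return np.delete(R, index, axis=0)
```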

Figure 10.3. (a) Electron DOS spectra for impurity scattering off of a single vacancy site, modeled by 7, 13, 25 and 37 added scattering points. The solid black line shows the unperturbed DOS, for reference. (b) DOS as in (a) subtracted by the unperturbed DOS. (c) and (d) give the local DOS over the lattice for the same calculation as in (a) and (b). The high density pattern of threefold symmetry grows in intensity as more scattering points are packed in the vacancy site, compare (c) with (d). (e) The local DOS for impurity scattering modeled by removing a scatterer. Electron density builds up in the void and couples equally to both sublattices, which explains the shape of sixfold symmetry.

To conclude, the use of scattering theory for modeling molecular graphene works very well. Our approach shows that hexagonally shaped scatterers with a diameter between 3 and 6 Å, made up of at least 7 Dirac delta-function potentials and distributed according to a triangular lattice (dual to the hexagonal one), give realistic results. For a moderate lattice of roughly 13 × 14 scattering sites, the calculated local DOS matches the experimental findings of reference [98] with very good agreement.

The theory also offers flexibility, as we show by application to, e.g., electron and hole doping situations achieved by varying the lattice parameter, and strain effects on the lattice that break the pseudo-spin symmetry between the sublattices A and B. Impurity scattering is realized, in good agreement with previous theoretical predictions [106, 107, 108], by adding or removing scatterers in order to suppress or promote electron density. We do, for example, reproduce the resonance peak within the linear part of the DOS spectrum below the Dirac point that is a result of vacancy scattering, and argue that the lack of experimental evidence for this peak is due to the fact that vacancies realized by adding the same species as the surrounding lattice generate a scattering potential that is too weak to give a measurable signature.

11. Outlook

The success of research on nanoscale devices is ultimately measured by the level of eventual adoption in technological applications. In this sense, the fields of nano-electromechanics and single magnetic molecules are in their infancy. Current applications and those of the near future are still limited to research labs or highly specialized measuring tools.
While nano electromechanical systems are bridging the gap between the mechanical and the quantum mechanical, which opens up a range of novel uses [109], they still show great promise as detection devices in the classical regime. A currently widespread and shining example is the atomic force microscope (AFM), which was developed around the same time as the STM [110] and can scan surfaces with a sub-nanometer vertical resolution. More recent applications that have been demonstrated include functioning arrays of nano resonators that can sense chemical vapor at concentrations of one part per billion under a 2 second exposure time [111]. The related application, mass spectrometry, has also been shown to work successfully using NEMS, even measuring neutral particles, which lies beyond the reach of traditional magnet based mass spectrometers [112]. Magnetic resonance force microscopy is a development of AFM and was in 2004 reported to have observed a single electron spin [49], which takes us to the field of single spin magnets.
Single magnetic molecules are a subject of intense basic research, which means that practical applications are many years away. The internal structures of single magnetic molecules and their responses to outer stimuli are still being mapped out by measurements in, e.g., tunnel junctions. These measurements include variations in electron transport, mechanical strain, magnetic fields, thermal gradients, optical influence and quantum interference [115].
As far as my own research goes, one of the most recent projects that occupied my time sadly fell apart as the equations crumbled to dust. The project, concerning electron spin resonance in a Josephson junction, looked promising and would have made a nice contribution to this thesis. You win some and lose some, however, and currently we are compiling the results of a study into thermally induced, electron mediated spin torque, where torque effects on local spins in a metal subject to a thermal gradient between two heat baths are studied as phonons indirectly affect the spins through interactions with the conduction electrons.

12. Acknowledgments

I would like to acknowledge the help, support and patience of my supervisor Jonas Fransson. I am also forever grateful for the foundation on which I stand - my family, including Nora, Olle, Hugo, mother Lena, father Stefan, sister Sara and brother Christoffer. Many thanks also go out to my colleagues and lunch time philosophers Henning, Kristoffer, Henrik, Oscar, Lisa, Anders, Annica,... and naturally the golden five.

13. Summary in Swedish

In recent years nanoscience has taken on such a prominent role that nearly everyone is familiar with the field. Products that exploit nanostructures are beginning to reach medical and mechanical applications industrially, and the technology also appears on the consumer market [1]. In semiconductor electronics, work has been carried out at the nanometre scale for many years, and continued research has led hardware manufacturers to launch processors with an architecture of only 14 nm [2]. Conventional technology is, however, approaching its physical limits, which is made clear by the fact that development no longer follows Moore's law [113].
Take the storage capacity of modern hard drives as an example. They work by dividing a thin magnetic film on a flat surface into small domains, each of which has two favoured magnetization directions that the read head can distinguish between. Each domain stores one bit in that one direction represents a 1 while the other direction represents a 0. By deliberately generating a magnetic field, the read head can supply energy so that a domain switches magnetization direction. The problem is that thermal energy can also spontaneously change the direction of a domain if the energy barrier between the different directions is too small. This barrier is related to the domain volume, and if the volume becomes too small the sensitivity to information loss becomes large. This mechanism sets a physical limit on the possible information density for a given material.
Research that can lead to an entirely new technology for storing information is therefore important, as the information society constantly generates enormous amounts of data. One possible route lies in the use of molecular magnets [3]. These consist of a magnetic atom embedded in a shell of other atoms that influence the magnetic atom with an anisotropic field. This makes the molecule energetically prefer certain magnetization directions, which can be used to represent a 1 or a 0. The advantage of these molecules, compared to the use of magnetic domains in thin films, is that they occupy an area that is orders of magnitude smaller without being particularly sensitive to thermal transitions.
Papers II and III compile theoretical studies of such a magnetic molecule placed on a superconducting surface within a gap across which a voltage can be applied to a fine superconducting tip. A setup of this kind is usually called a scanning tunneling microscope (STM). In 1982, G. Binnig and H. Rohrer published the first images generated by such a microscope, in which individual atoms were made "visible" for the first time [8]. Experimental measurements made with such a setup showed hopeful

results, where the mean lifetime of excited magnetic states of the molecule turned out to be unusually long, in this case about 10 nanoseconds [4]. The measurements also showed that access to these excited states had been shifted unusually high in energy because of the adjacent superconductors. This process explains the long lifetimes, since the energy of the excited molecule cannot immediately be handed over to electrons in the substrate or the tip, as the electrons in these superconductors largely occur in Cooper pairs with a binding energy higher than the excitation energy of the molecule. With our theoretical model these energy levels could be reproduced in good agreement, and the freedom of the model let us explore a larger parameter space that included external magnetic fields, varied temperature, and other values of the anisotropy fields. Our theoretical model also showed that interaction with the Cooper pairs in the substrate at least partly explains the variation in uniaxial anisotropy measured when the tip to molecule distance is varied.
Tunnel junctions also play a central role in paper I, where a nanoelectromechanical oscillator swings between two fixed leads in an asymmetric geometry. The system thus consists of two tunnel junctions whose widths vary. The fixed leads and the oscillating "island" are all superconducting, which leads to contributions from Josephson currents in the gaps. These currents modulate the motion of the island, while the motion of the island in turn affects the current. Because of this complexity, parameter regions of widely different character appear, from periodic motion around one or two energy minima, to quasiperiodic orbits, and on to chaotic behavior.
In the last paper, IV, scattering theory is used to study molecular graphene, in good agreement with experiment. Ordinary graphene is one atomic layer thick and is built up of carbon atoms that bind to each other in a hexagonal structure. Several such layers on top of each other build up the material graphite, found in ordinary pencils. Graphene has a range of unique properties which have made research on this material virtually explode since it was first isolated [84]. Molecular graphene is built by placing molecules in a triangular pattern on a metallic surface. The spaces around the molecules then form a hexagonal pattern that is filled with electron density by the potential from the molecules. These molecules are also bound to the metal surface, and the result resembles graphene, since the electrons prefer to stay in the hexagonal pattern. Properties such as the electrons becoming Dirac fermions also appear in molecular graphene. The scattering theory adapted to describe this state is flexible, and we show that several point-like scattering centers can simulate the effects of a continuously decaying potential. This method is therefore well suited to describe scattering from irregular defects.

References

[1] D. Hornig. The state of nanotechnology. Casey Research, 2012. [2] R. Smith. Intel's 14nm technology in detail. Anandtech, 2014. [3] C. F. Hirjibehedin, C. Y. Lin, A. F. Otte, M. Ternes, C. P. Lutz, B. A. Jones, and A. J. Heinrich. Science, 317:1199, 2007. [4] B. W. Heinrich, L. Braun, J. I. Pascual, and K. J. Franke. Protection of excited spin states by a superconducting energy gap. Nat. Phys., 9:765–768, 2013. [5] M. N. Baibich, J. M. Broto, A. Fert, F. Nguyen Van Dau, F. Petroff, P. Etienne, G. Creuzet, A. Friederich, and J. Chazelas. Phys. Rev. Lett., 61:2472–2475, 1988. [6] G. Binasch, P. Grünberg, F. Saurenbach, and W. Zinn. Phys. Rev. B, 39:4828–4830, 1989. [7] M. Yamaguchi, T. Takamoto, and K. Araki. Solar Energy Materials and Solar Cells, 90:3068–3077, 2006. [8] G. Binnig, H. Rohrer, Ch. Gerber, and E. Weibel. Phys. Rev. Lett., 49:57–61, 1982. [9] G. D. Mahan. Many-Particle Physics. Kluwer Academic/Plenum Publishers, 3 edition, 2000. [10] H. Bruus and K. Flensberg. Many-Body Quantum Theory in Condensed Matter Physics. Oxford University Press, 1 edition, 2004. [11] J. Fransson. Non-Equilibrium Nano-Physics. Springer, 1 edition, 2010. [12] M. Julliere. Phys. Lett., 54A(225), 1975. [13] K. Sternickel and A. Braginski. Supercond. Sci. Technol., 19:S160, 2006. [14] S. N. Erné, H. D. Hahlbohm, and H. Lübbig. J. of Appl. Phys., 47(12):5440–5442, 1976. [15] H. Ohnishi, Y. Kondo, and K. Takayanagi. Nature, 395:780–783, 1998. [16] D. M. Eigler and E. K. Schweizer. Nature, 344:524–526, 1990. [17] S. Heinze, M. Bode, A. Kubetzka, O. Pietzsch, X. Nie, S. Blügel, and R. Wiesendanger. Science, 288(5472):1805–1808, 2000. [18] J. Tersoff and D. R. Hamann. Phys. Rev. B, 31(2):805–813, 1985. [19] J. Bardeen. Phys. Rev. Lett., 6(2):57–59, 1961. [20] M. H. Cohen, L. M. Falicov, and J. C. Phillips. Phys. Rev. Lett., 8(8):316–318, 1962. [21] G. A. Fiete and E. J. Heller. Rev. Mod. Phys., 75(933), 2003. [22] P. Heimann, H. Neddermeyer, and H. F. Roloff. J. Phys. C, 10(L17), 1977. [23] W. Shockley. Phys. Rev., 56(4):317–323, 1939. [24] M. F. Crommie, C. P. Lutz, and D. M. Eigler. Science, 262(5131):218–220, 1993. [25] E. J. Heller, M. F. Crommie, C. P. Lutz, and D. M. Eigler. Nature, 369:464–466, 1994. [26] H. C. Manoharan, C. P. Lutz, and D. M. Eigler. Nature, 403:512–515, 2000.

83 [27] C. R. Moon, C. P. Lutz, and H. C. Manoharan. Nat. Phys., 4:454–458, 2008. [28] L. S. Rodberg and R. M. Thaler. Introduction to the Quantum Theory of Scattering. Academic Press, 1967. [29] D. van Delft and P. Kes. Physics Today, page 38, 2010. [30] W. Meissner and R. Ochsenfeld. Naturwissenschaften, 21(44):787788, 1933. [31] F. London. Nature, 140:793–796, 1937. [32] H. Fröhlich. Phys. Rev., 79:845, 1950. [33] E. Maxwell. Phys. Rev., 78(4):477, 1950. [34] C. A. Reynolds, B. Serin, W. H. Wright, and L. B. Nesbitt. Phys. Rev., 78(4):487, 1950. [35] L. N. Cooper. Phys. Rev., 104(4):1189–1190, 1956. [36] J. Bardeen, L. N. Cooper, and J. R. Schrieffer. Phys. Rev,, 106(1):162–164, 1957. [37] H. Broer and F. Takens. Dynamical systems and chaos. Springer, 1 edition, 2011. [38] M.P. Blencowe. Contemp. Phys., 46(249), (2005). [39] G. A. Steele, A. K. Hüttel, B. Witkamp, M. Poot, H. B. Meerwaldt, L. P. Kouwenhoven, and H. S. J. van der Zant. Science, 325(5944):1103–1107, (2009). [40] B. Lassagne, Y. Tarakanov, J. Kinaret, D. Garcia-Sanchez, and A. Bachtold. Science, 325(5944):1107–1110, (2009). [41] R.G. Knobel and A.N. Cleland. Nature, 424(6946):291–293, (2003). [42] M.D. Dai, K. Eom, and C-W. Kim. Appl. Phys. Lett., 95(203104), (2009). [43] A.K. Naik, M.S. Hanay, W.K. Hiebert, X.L. Feng, and M.L. Roukes. Nature Nanotechnology, 4:445–450, (2009). [44] K.L. Ekinci, X.M.H. Huang, and M.L. Roukes. Appl. Phys. Lett., 84(4469), (2004). [45] Y. T. Yang, C. Callegari, X. L. Feng, K. L. Ekinci, and M. L. Roukes. Nano Lett., 6:583–586, (2006). [46] H. B. Peng, C. W. Chang, S. Aloni, T. D. Yuzvinsky, and A. Zettl. Phys. Rev. Lett., 97(087203), (2006). [47] M. Li, H.X. Tang, and M.L. Roukes. Nature Nanotechnology, 2:114–120, (2007). [48] A.N. Cleland and M.L. Roukes. Nature, 392:160–162, (1997). [49] D. Rugar, R. Budakian, H.J. Mamin, and B.W. Chui. Nature, pages 329–332, (2004). [50] M.D. LaHaye, O. Buu, B. Camarota, and K.C. Schwab. Science, 304(5667), (2004). [51] A.N. Cleland and M.R. Geller. Phys. Rev. Lett., 93(070501), (2004). [52] P. Rabl, S.J. Kolkowitz, F.H.L Koppens, J.G.E. Harris, P. Zoller, and M.D. Lukin. Nature Physics, 6:602–608, (2010). [53] K.C. Schwab and M.L. Roukes. Physics Today, 58:36–42, (2005). [54] M. Blencowe. Physics Reports, 395:159–222, (2004). [55] S. Etaki, M. Poot, I. Mahboob, K. Onomitsu, H. Yamaguchi, and H.S.J. Van der Zant. Nature Physics, 4:785–788, (2008). [56] A. Zazunov, R. Egger, C. Mora, and T. Martin. Phys. Rev. B, 73(214501), (2006).

84 [57] J. Fransson, J.-X. Zhu, and A.V. Balatsky. Phys. Rev. Lett., 101(067202), (2008). [58] P.J. Holmes and D.A. Rand, (1976). [59] B. Yurke, D.S. Greywall, A.N. Pargellis, and P.A. Busch. Phys. Rev. A, 51(421119), (1995). [60] I. Siddiqi, R. Vijay, F. Pierre, C.M. Wilson, M. Metcalfe, C. Rigetti, L. Frunzio, and M.H. Devoret. Phys. Rev. Lett., 93(207002), (2004). [61] R. Vijay, M.H. Devoret, and I. Siddiqi. Rev. Sci. Inst., 80(111101), (2009). [62] R.B. Karabalin, R. Lifshitz, M.C. Cross, M.H. Matheny, S.C. Masmanidis, and M.L. Roukes. Phys. Rev. Lett., 106(094102), (2011). [63] S. Savel’ev, A.L. Rakhmanov, X. Hu, A. Kasumov, and F. Nori. Phys. Rev. B, 75(165417), (2007). [64] M. Bagheri, M. Poot, M. Li, W.P.H. Pernice, and H.X. Tang. Nature Nanotech., 6:726–732, (2011). [65] L. Bogani and W. Wernsdorfer. Nat. Mater., 7(179), 2008. [66] S. Kahle, Z. Deng, N. Malinowski, C. Tonnoir, A. Forment-Aliaga, N. Thontasen, G. Rinke, D. Le, V. Turkowski, and T.S. Rahman. Nano Lett., 12(518), 2012. [67] L. C. Bassett and D. D. Awschalom. Nature, 489:505–507, 2012. [68] T. Balashov, T. Schuh, A. F. Takács, A. Ernst, S. Ostanin, J. Henk, I. Mertig, P. Bruno, T. Miyamachi, S. Suga, and W. Wulfhekel. Phys. Rev. Lett., 102:257203, Jun 2009. [69] A. A. Khajetoorians, S. Lounis, B. Chilian, A. T. Costa, L. Zhou, D. L. Mills, J. Wiebe, and R. Wiesendanger. Itinerant nature of atom-magnetization excitation by tunneling electrons. Phys. Rev. Lett., 106:037205, Jan 2011. [70] A. J. Heinrich, J. A. Gupta, C. P. Lutz, and D. M. Eigler. Science, 306(5695):466–469, 2004. [71] Toshio Miyamachi, Tobias Schuh, Tobias Markl, Christopher Bresch, Timofey Balashov, Alexander Stohr, Christian Karlewski, Stephan Andre, Michael Marthaler, Martin Hoffmann, Matthias Geilhufe, Sergey Ostanin, Wolfram Hergert, Ingrid Mertig, Gerd Schon, Arthur Ernst, and Wulf Wulfhekel. Nature (London), 503(7475):242–246, 11 2013. [72] David J. Christle, Abram L. Falk, Paolo Andrich, Paul V. Klimov, Jawad Ul Hassan, Nguyen T. Son, Erik Janzén, Takeshi Ohshima, and David D. Awschalom. Nat. Mater., 14(160), 12 2015. [73] Sebastian Loth, Kirsten von Bergmann, Markus Ternes, Alexander F. Otte, Christopher P. Lutz, and Andreas J. Heinrich. Nat Phys, 6:340– 344, 2010. [74] Noriyuki Tsukahara, Ken-Ichi Noto, Michiaki Ohara, Susumu Shiraki, Noriaki Takagi, Yasutaka Takata, Jun Miyawaki, Munetaka Taguchi, Ashish Chainani, Shik Shin, and Maki Kawai. Adsorption-induced switching of magnetic anisotropy in a single iron(ii) phthalocyanine molecule on an oxidized cu(110) surface. Phys. Rev. Lett., 102:167203, Apr 2009. [75] S. Loth, S. Baumann, C.P. Lutz, D.M. Eigler, and A.J Heinrich. Science, 335:196, 2012. [76] D. Larbalestier, A. Gurevich, D. M. Feldmann, and A. Polyanskii. Nature, 414:368, 2001. [77] A. Gurevich. Nat. Mater., 10:255–259, 2011.

[78] C. Buzea and T. Yamashita. Supercond. Sci. and Technol., 14:R115, 2001. [79] J.-X. Zhu, Z. Nussinov, A. Shnirman, and A. V. Balatsky. Phys. Rev. Lett., 92:107001, 2004. [80] J. Fransson and J.-X. Zhu. New J. Phys., 10:013017, 2008. [81] J. Fransson, J. Ren, and J.-X. Zhu. Phys. Rev. Lett., 113:257201, 2014. [82] S. Bhattacharjee, L. Nordström, and J. Fransson. Phys. Rev. Lett., 108:057204, 2012. [83] B. W. Heinrich, L. Braun, J. I. Pascual, and K. J. Franke. Strain-induced tuning of the magnetocrystalline anisotropy of single molecules. Nano Lett., 2015. [84] K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, and A. A. Firsov. Science, 306(666), (2004). [85] A. K. Geim and K. S. Novoselov. Nat. Mater., 6(183), (2007). [86] M. I. Katsnelson. Mater. Today, 10(20), (2007). [87] A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim. Rev. Mod. Phys., 81(109), (2009). [88] M. A. H. Vozmediano, M. I. Katsnelson, and F. Guinea. Phys. Rep., 496(109), (2010). [89] D. C. Elias, R. R. Nair, T. M. G. Mohiuddin, S. V. Morozov, P. Blake, M. P. Halsall, A. C. Ferrari, D. W. Boukhvalov, M. I. Katsnelson, A. K. Geim, and K. S. Novoselov. Science, 323(610), (2009). [90] A. K. Geim. Science, 324(5934):1530–1534, (2009). [91] M. Polini, F. Guinea, M. Lewenstein, H. C. Manoharan, and V. Pellegrini. Nat. Nanotechnol., 8(625), (2013). [92] O. Peleg, G. Bartal, B. Freedman, O. Manela, M. Segev, and D. N. Christodoulides. Phys. Rev. Lett., 98(103901), (2007). [93] U. Kuhl, S. Barkhofen, T. Tudorovskiy, H.-J. Stöckmann, T. Hossain, L. de Forges de Parny, and F. Mortessagne. Phys. Rev. B, 82(094308), (2010). [94] A. Singha, M. Gibertini, B. Karmakar, S. Yuan, M. Polini, G. Vignale, M. I. Katsnelson, A. Pinczuk, L. N. Pfeiffer, K. W. West, and V. Pellegrini. Science, 332(6034):1176–1179, 2011. [95] P. Soltan-Panahi, J. Struck, P. Hauke, A. Bick, W. Plenkers, G. Meineke, C. Becker, P. Windpassinger, M. Lewenstein, and K. Sengstock. Nat. Phys., 7(5):434–440, 2011. [96] L. Tarruell, D. Greif, T. Uehlinger, G. Jotzu, and T. Esslinger. Nature, 483(7389):302–305, 2012. [97] Thomas Uehlinger, Gregor Jotzu, Michael Messer, Daniel Greif, Walter Hofstetter, Ulf Bissbort, and Tilman Esslinger. Phys. Rev. Lett., 111:185307, 2013. [98] K. K. Gomes, W. Mar, W. Ko, F. Guinea, and H. C. Manoharan. Nature, 483(7389):306–310, 2012. [99] F. D. M. Haldane. Phys. Rev. Lett., 61:2015–2018, 1988. [100] C. L. Kane and E. J. Mele. Phys. Rev. Lett., 95:226801, 2005. [101] F. Guinea, M. I. Katsnelson, and A. K. Geim. Nat. Phys., 6(1):30–33, 2010. [102] Z. Y. Meng, T. C. Lang, S. Wessel, F. F. Assaad, and A. Muramatsu. Nature, 464(7290):847–851, 2010. [103] B. Wunsch, F. Guinea, and F. Sols. New J. Phys., 10(103027), 2008. [104] C.-H. Park and S. G. Louie. Nano Lett., 9(1793), 2009.

[105] M. Gibertini, A. Singha, V. Pellegrini, M. Polini, G. Vignale, A. Pinczuk, L. N. Pfeiffer, and K. W. West. Phys. Rev. B, 79(241406), 2009. [106] S. H. M. Jafri, K. Carva, E. Widenkvist, T. Blom, B. Sanyal, J. Fransson, O. Eriksson, U. Jansson, H. Grennberg, O. Karis, R. A. Quinlan, B. C. Holloway, and K. Leifer. J. Phys. D, 43(045404), 2010. [107] K. Carva, B. Sanyal, J. Fransson, and O. Eriksson. Phys. Rev. B, 81(245405), 2010. [108] T. O. Wehling, A. V. Balatsky, M. I. Katsnelson, A. I. Lichtenstein, K. Scharnberg, and R. Wiesendanger. Phys. Rev. B, 75(125425), 2007. [109] K. C. Schwab and M. L. Roukes. Physics Today, page 36, July 1986. [110] G. Binnig, C. F. Quate, and C. Gerber. Phys. Rev. Lett., 56:930–933, 1986. [111] I. Bargatin, E. B. Myers, J. S. Aldridge, C. Marcoux, P. Brianceau, L. Duraffourg, E. Colinet, S. Hentz, P. Andreucci, and M. L. Roukes. Nano Lett., 12:1269–1274, 2012. [112] E. Sage, A. Brenac, T. Alava, R. Morel, C. Dupré, M. S. Hanay, M. L. Roukes, L. Duraffourg, C. Masselon, and S. Hentz. Nat. Commun., 6(6482), 2015. [113] L. Latif. AMD claims 20nm transition signals the end of Moore's law. The Inquirer, 2013. [114] Dante Gatteschi, Roberta Sessoli, and Jacques Villain. Magnetic Interactions in Molecular Systems. Oxford University Press, 2006. [115] S. V. Aradhya and L. Venkataraman. Nat. Nanotech., 8:399, 2013.

Acta Universitatis Upsaliensis Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1282 Editor: The Dean of the Faculty of Science and Technology

A doctoral dissertation from the Faculty of Science and Technology, Uppsala University, is usually a summary of a number of papers. A few copies of the complete dissertation are kept at major Swedish research libraries, while the summary alone is distributed internationally through the series Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology. (Prior to January, 2005, the series was published under the title “Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology”.)

ACTA UNIVERSITATIS UPSALIENSIS Distribution: publications.uu.se UPPSALA urn:nbn:se:uu:diva-261609 2015