
Louisiana State University LSU Digital Commons

LSU Historical Dissertations and Theses Graduate School

1992

Monopoles and Confinement in Lattice-Gauge Theory

Vandana Singh, Louisiana State University and Agricultural & Mechanical College


Recommended Citation Singh, Vandana, "Monopoles and Confinement in Lattice-Gauge Theory." (1992). LSU Historical Dissertations and Theses. 5467. https://digitalcommons.lsu.edu/gradschool_disstheses/5467


University Microfilms International, Ann Arbor, MI. Order Number 9316999

Monopoles and confinement in lattice gauge theory

Singh, Vandana, Ph.D.

The Louisiana State University and Agricultural and Mechanical College, 1992

MONOPOLES AND CONFINEMENT IN LATTICE GAUGE THEORY

A Dissertation

Submitted to the Graduate Faculty of the Louisiana State University and Agricultural and Mechanical College in partial fulfillment of the requirements for the degree of Doctor of Philosophy in The Department of Physics and Astronomy

by
Vandana Singh
B.Sc., Delhi University, 1983
M.Sc., Delhi University, 1985
M.S., Louisiana State University, 1988

TABLE OF CONTENTS

Page

ACKNOWLEDGEMENTS ...... iii

LIST OF TABLES ...... iv

LIST OF FIGURES ...... v

ABSTRACT ...... vi

CHAPTER 1. INTRODUCTION ...... 1

CHAPTER 2. LATTICE GAUGE THEORY BASICS ...... 18
  2.1. Overview of Path Integrals ...... 18
  2.2. Lattice Gauge Theory ...... 21
  2.3. Monte Carlo Methods ...... 33

CHAPTER 3. A DUAL LONDON EQUATION FOR U(1) LGT ...... 42
  3.1. London Equation and Fluxoid Quantization ...... 42
  3.2. Application of the Dual London Theory to U(1) LGT ...... 45
  3.3. Results ...... 53

CHAPTER 4. METHOD AND RESULTS FOR SU(2) LGT ...... 64
  4.1. Abelian Projection: The Maximal Abelian Gauge ...... 64
  4.2. Lattice Implementation ...... 66
  4.3. Results and Their Interpretation ...... 69

SUMMARY ...... 84

REFERENCES ...... 87

VITA ...... 90

ACKNOWLEDGEMENTS

I am indebted to my advisor, Richard Haymaker, for his guidance, patience and encouragement during this project. I am happy to thank him for many hours of fruitful discussions, for much help with programming, and for taking a lot of trouble to help me gain an appreciation of particle physics in general. I am grateful for his support in encouraging me to attend various conferences and summer schools, and in giving me the opportunity to present talks. I am also happy to thank our collaborator, Dana Browne, for encouragement and advice, for being very generous with his time, for invaluable help with this project, and also for helping me with programming, writing papers, and TeX. I am also glad to acknowledge useful discussions and help from Chen-Han Lee.

I would also like to thank my friends Christopher Chapman, Shyamoli Chaudhuri, Chitra Guruswamy, Jai Won Kim, Evan Mauceli, Ying Cai Peng, Vijay Poduri, Hossein Sadeghpour, Zahra Sadeghpour, Jyotsna Vijapurkar, Bing Yu, and Lijun Zhang, for their help and support.

Most of all, I am indebted to my family, without whose support and encouragement this work would not have been possible.

This work was done on the Physics SUN cluster (LEQSF(90-92)-ENH-12) and on the IBM 3090.

LIST OF TABLES

Table 1: The Four Fundamental Forces ...... 2

Table 2: Some Stable Particles: the Leptons and Selected Hadrons ...... 3

Table 3: Quark Flavors and Their Properties ...... 8

Table 4: a(β) for SU(2) Lattice Gauge Theory ...... 33

Table 5: Bulk Properties of the U(1) Vacuum ...... 54

Table 6: Comparison of Calculated and Expected Total Electric Flux ...... 56

Table 7: λ from a Fit to the Dual London Equation ...... 58

Table 8: Bulk Properties of the SU(2) Vacuum ...... 71

Table 9: Bulk Properties after Abelian Projection ...... 71

Table 10: Parameters for the (∇ × J_M) Fit ...... 81

Table 11: Properties of the SU(2) Dual Superconductor ...... 81

LIST OF FIGURES

Figure 1.1: The Conjectured Flux Tube Configuration for a qq̄ Pair ...... 10

Figure 2.1: The Links Constituting a Plaquette ...... 24

Figure 2.2: The Links Constituting a 3 x 3 Wilson Loop ...... 27

Figure 2.3: Demonstrating the Basics of the Monte Carlo Method ...... 35

Figure 3.1: Correspondence Between Original and Dual Lattice ...... 52

Figure 3.2: The Measurement Plane Relative to the qq̄ Pair ...... 55

Figure 3.3: Electric Flux Distribution for β = 1.1 ...... 57

Figure 3.4: Electric Flux Distribution for β = 0.95 ...... 59

Figure 3.5: Electric Flux versus Transverse Distance for β = 0.95 ...... 61

Figure 3.6: −∇ × J_M versus Transverse Distance for β = 0.95 ...... 62

Figure 3.7: The Fluxoid versus Transverse Distance for β = 0.95 ...... 63

Figure 4.1: −(∇ × J_M) Versus Transverse Distance for β = 2.4 ...... 75

Figure 4.2: −(∇ × J_M) Versus Transverse Distance for β = 2.5 ...... 76

Figure 4.3: J_M Versus Transverse Distance for β = 2.4 ...... 77

Figure 4.4: Electric Flux Versus Transverse Distance for β = 2.4 ...... 78

Figure 4.5: Electric Flux Versus Transverse Distance for β = 2.5 ...... 79

ABSTRACT

The mechanism by which quarks, believed to be the fundamental constituents of matter, are prevented from existing in the free state is still unknown. The phenomenon of quark confinement is one of the fundamental problems in physics. One of the most viable candidates for a hypothesis of confinement is the dual superconductor mechanism, which likens quark confinement to the Meissner effect in superconductors. The peculiarities of quark interactions make a numerical approach to the subject a necessity, and therefore much of the work in this area has been done through the methods of lattice gauge theory, with the simplicities afforded by putting spacetime on a four-dimensional grid. Over the years a large amount of indirect evidence has accumulated that the dual superconductor hypothesis does indeed lead to quark confinement, but unambiguous evidence has eluded research efforts until recently. This work presents the first direct proof of a Meissner-like effect that leads to confinement, using the numerical techniques of lattice gauge theory. It is shown that for a U(1) lattice gauge theory, which serves as a toy model for the real world of quarks, a dual London relation and an electric fluxoid quantization condition are satisfied, allowing us to conclude that the vacuum in this case acts like an extreme type-II superconductor, and that quarks are confined. We also show that SU(2) lattice gauge theory, which is qualitatively different and another step closer to reality, shows a Meissner-like effect. In contrast to the U(1) case, our results are found consistent with a dual version of the Ginzburg-Landau theory of superconductivity. We find reason to believe that the SU(2) vacuum behaves like a superconductor on the borderline between type-I and type-II. Our approach paves the way for a study of the more complicated theory, quantum chromodynamics, that is believed to describe quarks.

CHAPTER 1

INTRODUCTION

The quark is believed to be an ultimate constituent of matter. However, it has never been isolated in an experiment, in spite of strong indirect evidence that it exists [1]. This is not, we believe, because of any fundamental limitations of our instruments, but because of the nature of the strong force that exists between one quark and another. The hypothesis that quarks cannot exist in the free state is known as quark confinement. In this thesis we will try to understand how quarks are confined using a simple physical picture borrowed from the theory of superconductivity. To motivate this we first review the four fundamental forces in nature, the classification of elementary particles and the development of the quark model. We will then introduce quantum chromodynamics, the theory that describes quarks. Next we will build up an analogy between superconductivity and quark confinement, and explain why we need a numerical approach to this problem. The method we use is that of lattice gauge theory, the basics of which will be outlined. Lastly we will summarise our main results and explain the organization of this thesis.

The four basic interactions are listed in Table 1 below. Particles interact by exchanging certain other particles, called exchange or mediating particles. For example an electron and a positron can interact electromagnetically by exchanging photons.

Table 1: The Four Fundamental Forces

Interaction       Relative Strength   Exchange Particle   Range
Strong            1                   Gluon               ~ 10^-13 cm
Electromagnetic   10^-2               Photon              ∞
Weak              10^-12              W±, Z0 bosons       ~ 10^-16 cm
Gravitational     ~ 10^-38            Graviton            ∞

Particles are classified roughly according to their interactions. A selective list of particles is shown in Table 2. Of the leptons, the most familiar is the electron, e−. Leptons interact electromagnetically. Hadrons are particles that can have weak, electromagnetic and strong interactions. However we will show later that strong interactions are responsible for the internal dynamics of the hadrons, and also dominate their interactions with each other. The time scale for strong interactions is about 10^-23 seconds. The much slower electromagnetic and weak interactions are ignorable to a good approximation on this time scale. Hadrons can be grouped in two broad categories, the relatively light mesons, which are bosons, and the heavier baryons, which are fermions.

The proton and neutron, p and n, constitute atomic nuclei. Particle masses are shown in MeV. One MeV is 1.78 x 10^-30 kg. The quantities in brackets are experimental uncertainties. For example, the notation 0.5110034(14) should be read as 0.5110034 ± 0.0000014. These numbers are taken from Ref. [2].

We now trace briefly the development of the quark model. It arose from attempts to explain the profusion of apparently 'elementary' particles like those in Table 2, most of which had been discovered by the 1950's and 1960's.

Table 2: Some Stable Particles: the Leptons and Selected Hadrons

Particle category   Particle Symbol   Mass (MeV)
Leptons             e±                0.5110034 (14)
                    μ±                105.65932 (29)
                    τ±                1784.2 (3.2)
                    ν_e               < 0.000046
                    ν_μ               < 0.5
                    ν_τ               < 0.164
Hadrons: Mesons     π±                139.5673 (7)
                    π0                134.9630 (38)
                    η                 548.8 (0.6)
                    K±                493.667 (0.015)
                    K0                497.67 (0.013)
Hadrons: Baryons    p                 938.2796 (27)
                    n                 939.5731 (27)
                    Λ                 1115.60 (0.05)
                    Σ+                1189.36 (0.06)
                    Σ−                1197.34 (0.05)
                    Σ0                1192.46 (0.08)
                    Ξ0                1314.9 (0.6)
                    Ξ−                1321.32 (0.6)
                    Ω−                1672.45 (0.32)

The situation was alleviated somewhat by the discovery that there existed certain symmetries which, by enabling different particles to be put into one class, reduced the number of independent particles. The first of these was isospin symmetry, according to which the proton and neutron could be treated as two states of a single particle, the nucleon. The justification for this is that the two have nearly identical masses and interactions, which are different only because the proton is charged. By the same argument, there is only one pion π, which has three charge states. However the discovery of particles possessing a new attribute, that of strangeness, multiplied the number of isospin multiplets, and the idea that perhaps there existed a higher symmetry that could incorporate strange particles began to be explored. Also, the very profusion of particles, and the fact that they could be organized at least in a crude way, began to suggest that they all had a common underlying substructure, in a manner analogous to the atomic theory explanation of the profusion of elements in the periodic table.

Early attempts to explain these phenomena included a proposal by Fermi and Yang (1949) [3] that neutrons and protons were the elementary constituents of hadrons. This was followed by Sakata's theory in 1956 [3], suggesting that all hadrons were composed of three basic states, the proton, neutron and the lambda particle, (p, n, Λ). This was an attempt to incorporate strangeness on the same footing as isospin. Its importance lies in the fact that, although incorrect, it marks the birth of SU(3) as a symmetry of elementary particles, since Sakata put (p, n, Λ) into the lowest non-trivial representation of SU(3).

A higher symmetry that would include strange particles was proposed by Gell-Mann in the 1960's [4]. This is also an SU(3) symmetry but does not consider the neutron, proton and Λ particle to be elementary constituents of the other hadrons. The three are treated as composites just like the other hadrons. The basic feature of this scheme is that if we plot the hypercharge Y = S + B, where S is the strangeness quantum number and B the baryon number, versus the third component I3 of isospin, we can arrange the vast number of hadrons into groups that are either octets, singlets or decuplets. For instance, the neutron, proton, the three Σ particles and the two cascade particles Ξ, as well as the Λ, all can be grouped into one octet. This is an approximate symmetry, as the particle masses in this octet range from 939 MeV to 1314 MeV. Thus isospin multiplets can be grouped into supermultiplets. In terms of group theory, hadrons must belong to 'representations' of SU(3). This is powerful, because if we know only a few particles belonging to a particular supermultiplet, we can predict the rest.

In 1964, Gell-Mann [4], and independently Zweig [5], proposed that hadrons were composed of fractionally charged sub-particles called quarks [4] or aces [5], of three kinds or flavors. This hypothesis enables us to resolve the mystery of why only octets, singlets and decuplets occur in nature. If these three quarks ('up', 'down', and 'strange') constitute the lowest non-trivial representation of SU(3), we can see that the direct product of a quark triplet with an antiquark triplet would yield an octet and a singlet. A direct product of three quark triplets yields a decuplet, two octets and a singlet. The fact that higher supermultiplets do not occur then means that hadrons are either composed of quark-antiquark pairs, or of combinations of three quarks. We now identify the former with mesons and the latter with baryons. Thus a proton is composed of two 'up' quarks and a 'down' quark, (uud), and a π+ meson is (ud̄).

Strong evidence for the quark model came from experiments carried out in the 1960's [6]. High energy electrons (15-200 GeV) were made to collide with protons, and 'deep inelastic' events were selected, i.e. those which involved a large amount of momentum and energy transfer to the proton. The incident particles were scattered at much larger angles than expected if the proton was a continuous charge distribution. This was interpreted as evidence that there were much smaller particles inside the proton, of spin 1/2 and size less than 10^-16 cm.

The success of the quark model spurred a number of experiments to isolate the quark [7]. Essentially they attempted to find fractionally charged particles. The experiments looked for quarks in high energy accelerator experiments, in cosmic rays and in samples of terrestrial matter. They were all singularly unsuccessful, the collision experiments yielding only an upper bound on the cross-section for free quark production, while the flux of quarks in cosmic rays was less than 10^-10 quarks per cm^2 per steradian per second. Analysis of graphite samples yielded less than 1 quark per 2 x 10^8 nucleons. The only positive result came from LaRue, Fairbank and Hebard [8], who searched for fractionally charged particles and found that five of their 39 samples had mean charges of −0.343 ± 0.011. However, other groups were not able to reproduce their result. All this may be considered evidence that quarks cannot exist in the free state, which is the idea of quark confinement.

One of the hypotheses that tried to account for quark confinement was the color conjecture. The idea was that free quarks were prevented from existing due to a new, exact symmetry of the strong interactions which gave rise to a color conservation law. There were already indications that a new quark quantum number was needed. For example, the Δ++ particle, which is composed of three u quarks and has spin 3/2 with zero orbital angular momentum, has a totally symmetric wavefunction that would violate the Pauli exclusion principle unless we introduce an additional quantum number, that of color, which renders the wavefunction antisymmetric. Thus quarks must possess a further label which can have three values, say R (red), B (blue) and G (green). All existing quark composites are then seen to be color singlets (or 'colorless'), so that quark confinement becomes identical to color confinement. Transformations in color space belong to a new group of SU(3) transformations.

There exists some compelling experimental evidence for the color hypothesis. In π0 decay, for instance, where the final products are two photons, expected to be produced through a virtual quark-antiquark state, the theoretical cross-section is three times smaller than the experimental value. However, after we consider that each quark can have any of three colors (which means multiplying the cross-section by 3), we obtain the correct value. A similar situation arises when we consider the ratio of the cross-sections for e+e− annihilation into qq̄ and into μ+μ−. This ratio has peaks at certain values of the center of mass energy, which can only be predicted correctly [9] if quarks are assumed to be color-charged.

The correct theory of strong interactions is believed to be quantum chromodynamics (QCD) [9]. Like quantum electrodynamics, it is also a gauge theory. This means that the theory is invariant with respect to a certain class of local transformations that constitute the symmetry or gauge group of the theory. The preservation of this local symmetry necessitates the existence of gauge fields, whose quanta are the exchange particles of the theory. Thus quantum electrodynamics is invariant with respect to transformations under the gauge group U(1), with the gauge field (the photon field A_μ) corresponding to the Lie algebra of the group. Similarly QCD is an SU(3)-color gauge theory, in which color is an exact, local symmetry. The interaction between quarks is mediated by massless gluons, which themselves carry color (e.g. RḠ). There are eight varieties of these. Since gluons can interact with each other, quantum chromodynamics is non-linear in the field variables, unlike quantum electrodynamics, which is simpler because the photon has no charge.

The five known flavors of quarks and their quantum numbers are summarized in Table 3. The quantum numbers are, respectively, isospin, the third component of isospin, strangeness, charm and the bottom quantum number. (Note that while QCD says nothing about the flavor spectrum of quarks, there is no other theory that does.)

Table 3: Quark Flavors and Their Properties

Quark     Symbol   I     I3     S    C    B    Q
up        u        1/2   1/2    0    0    0    +2/3
down      d        1/2   -1/2   0    0    0    -1/3
strange   s        0     0      -1   0    0    -1/3
charmed   c        0     0      0    1    0    +2/3
bottom    b        0     0      0    0    -1   -1/3

Quantum chromodynamics incorporates most of the experimental results in quark physics. It makes predictions about the high energy behavior of quark bound states which are borne out by experiments. One example is asymptotic freedom, which refers to the fact that quarks within hadrons behave like nearly free particles, as seen in deep-inelastic scattering experiments [6]. However, what we are interested in here is the low energy or long distance (about half a fermi at sufficiently low temperatures) behavior of hadrons. This is the confined regime, which we now discuss.

There is a simple way of understanding how quarks can be permanently confined. Consider the quark-antiquark (qq̄) bound state, the meson. It will be possible to separate the quarks only if the potential between them goes to a constant as a function of their separation. Since quarks do not seem to exist in the free state, it is reasonable to conjecture that the interquark potential increases with separation. The simplest case is a potential that depends linearly on the separation, V(r) ~ kr, where k is a constant, equal to the force. This implies that it would require an infinite amount of energy to separate the qq̄ pair to an infinite distance. A constant force means that the color field lines between the qq̄ pair form a tube with a constant energy per unit length, instead of spreading out in space like those of an electric dipole. This conjectured flux tube configuration is shown in Fig. 1.1.

Figure 1.1: The Conjectured Flux Tube Configuration for a qq̄ Pair

In searching for a mechanism for permanently confining quarks, it is natural to look for a similar phenomenon in nature. The other example of flux tube formation in physics is the Abrikosov vortex, formed when a type-II superconductor is subjected to an external magnetic field above H_c1 [10]. This has led 't Hooft, Mandelstam and others [11] to suggest that perhaps quark confinement is analogous to the Meissner effect in superconductors. This has come to be known as the dual superconductor hypothesis. Briefly, the dual superconductor hypothesis postulates that quark confinement is like a Meissner effect with a color-electric field replacing the magnetic field, and a color-magnetic field in place of a (negative) electric field (this is the dual transformation, E → B, B → −E). Thus the color-electrically charged quarks play the role of magnetic sources in the superconductor. Just as electrically charged supercurrents in superconductors are responsible for 'squeezing' the magnetic field lines into a tubular configuration, color-magnetically charged supercurrents (generated by color-magnetic monopoles) circulate about the color-electric field lines between the quarks, squeezing them into a flux tube.

Unfortunately the confined regime cannot be studied using analytical methods. The usual analytical approach is perturbation theory, which involves an expansion in the coupling constant (for example the dimensionless coupling α = e²/(4πħc) in electrodynamics). This is feasible when the coupling constant is less than one, as in the high energy region of QCD. In the low energy regime, however, the coupling is of order unity, so that perturbative expansions are impossible and nonperturbative methods must be applied.
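The size of the electromagnetic coupling quoted above is easy to check numerically. The sketch below evaluates α in SI units; note that the expression e²/(4πħc) in the text is the Gaussian-units form, so a factor of ε₀ appears here.

```python
import math

# Fine-structure constant alpha = e^2 / (4*pi*eps0*hbar*c), SI units
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)  # ~0.007297, ~137.04 -- well below one
```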

The most productive nonperturbative approach is lattice gauge theory (LGT), proposed by Wilson in 1974 [12]. This is a formulation of quantum field theory on a Euclidean spacetime grid, which in our study is a four-dimensional hypercube. Gauge fields, which are the fields associated with the exchange particle of the theory, are defined on the links (strictly speaking, gauge group elements exist on links), while matter fields like quarks exist on the sites of the lattice. Why do we wish to 'discretize' a continuum theory in this manner? To begin with, there is the question of infinities that arise in a continuum field theory for processes at high momenta. These are usually taken care of via a procedure known as regularization, which isolates the infinite terms, followed by renormalization, which absorbs these infinities into a redefinition of the parameters of the theory. However most regularization schemes are based on perturbative expansions. The lattice, on the other hand, represents a non-perturbative regulator. There is an automatic high-momentum cut-off because the finite lattice spacing a permits a momentum no higher than π/a. Like all regulators, the lattice must be removed at the end of the calculation by taking the continuum limit a → 0. Another motivation for lattice gauge theory is that it takes advantage of the deep connections between quantum field theories and statistical mechanics. At zero temperature, quantum field theory in D space dimensions and 1 time dimension is equivalent to a classical Euclidean statistical system in D + 1 space dimensions. At non-zero temperatures there is a similar equivalence with a quantum statistical system in D space dimensions. Thus well known methods in statistical mechanics, such as Monte Carlo techniques, high temperature expansions and the renormalization group, can be applied to study field theory on the lattice. Probably the most compelling reason why lattice gauge theory is useful for studying quark confinement is that, like a statistical system, it can have different phases, and in some phase the theory may be naturally confining while in another phase it can be deconfining (as discussed in chapters 3 and 4).
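To illustrate how such Monte Carlo methods apply to a lattice gauge theory, the sketch below runs a Metropolis simulation of compact U(1) gauge theory with the Wilson action, reduced to two dimensions and a small lattice so it runs in seconds. This is only a toy version of the method: the thesis itself works in four dimensions, and the lattice size, β value, and update width below are illustrative choices, not those used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta, n_sweeps = 8, 1.0, 200
# Link angles theta[x, y, mu]; mu = 0 (x-direction) or 1 (y-direction).
theta = np.zeros((L, L, 2))

def plaquette_angle(th, x, y):
    # Oriented sum of link angles around the plaquette at (x, y).
    return (th[x, y, 0] + th[(x + 1) % L, y, 1]
            - th[x, (y + 1) % L, 0] - th[x, y, 1])

def local_action(th, x, y, mu):
    # Wilson action (-beta * cos) restricted to the plaquettes
    # containing the link (x, y, mu).
    if mu == 0:
        plaqs = [plaquette_angle(th, x, y), plaquette_angle(th, x, (y - 1) % L)]
    else:
        plaqs = [plaquette_angle(th, x, y), plaquette_angle(th, (x - 1) % L, y)]
    return -beta * sum(np.cos(p) for p in plaqs)

for sweep in range(n_sweeps):
    for x in range(L):
        for y in range(L):
            for mu in range(2):
                old = theta[x, y, mu]
                s_old = local_action(theta, x, y, mu)
                theta[x, y, mu] = old + rng.uniform(-0.5, 0.5)
                # Metropolis test: accept with probability min(1, e^{-dS}).
                if rng.random() >= np.exp(s_old - local_action(theta, x, y, mu)):
                    theta[x, y, mu] = old  # reject

avg_plaq = np.mean([np.cos(plaquette_angle(theta, x, y))
                    for x in range(L) for y in range(L)])
# In 2D the exact result is I1(beta)/I0(beta), roughly 0.45 at beta = 1.
print(f"average plaquette at beta={beta}: {avg_plaq:.3f}")
```

The same update pattern, with U(1) phases replaced by SU(2) matrices and two extra directions, underlies the four-dimensional simulations described later.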

What we have learned about confinement in general from lattice studies will be discussed in Chapter 2. We now summarize results specific to flux tube formation and the dual superconductor hypothesis. Lattice studies [13,14] have shown clear evidence for a flux tube-like configuration between a quark-antiquark pair. There is evidence that the energy to separate them goes solely into lengthening the tube, which is what we expect from a linear potential. These studies have even gleaned some details of the flux tube. For instance they have shown that the parallel component of the color-electric field is the dominant contribution to the energy density. Also there is a cancellation between the transverse electric and magnetic contributions, so that the flux tube is narrower than expected from studying individual components. These are some of the features that must be incorporated in a mechanism of confinement.

Since the dual superconductor model appeared, lattice studies have been performed on both U(1) and non-Abelian pure gauge theories. The motivation for studying U(1) gauge theory on a lattice is that, unlike its continuum counterpart, it has a confined phase as well as the deconfined phase corresponding to quantum electrodynamics. As will be discussed in more detail later, the vacuum of this theory naturally contains magnetic monopoles in the confined phase. Thus, confinement can be studied using U(1) lattice gauge theory as a prototype before going on to the true non-Abelian theory.

Much evidence favoring this hypothesis has been accumulated to date, starting from the work of Polyakov [15] and Banks, Myerson and Kogut [16], who showed that U(1) lattice gauge theory with a pair of static quarks could be approximately transformed into a model describing the interaction between an electric current loop from the quarks and a gas of magnetic current loops. DeGrand and Toussaint [17] demonstrated via a numerical simulation that the vacuum of U(1) lattice gauge theory is populated by monopole currents, copious in the confined phase and rare in the deconfined phase. Also working in U(1), Barber [18], Cea and Cosmai [19], and Wensley and Stack [20] found evidence that monopoles were relevant to confinement. In the work of Ref. [20], which studied a lattice function that serves as an order parameter for confinement, the monopole contribution was found to account for nearly all of the total value.

The next breakthrough was 't Hooft's extension of this picture to non-Abelian theories, by choosing a special gauge where the non-Abelian theory resembles the Abelian case (which will be explained in detail in Chapter 4). This was implemented on the lattice in a pioneering study by Kronfeld et al. for SU(2) [21], and recently also for SU(3) [22] gauge theories. They confirmed that the monopole density was high in the confined phase, and fell dramatically to nearly zero in the deconfined regime. Other hints that the dual superconductor hypothesis was on the right track came from Bornyakov et al. [23], as well as the relatively recent work of Suzuki et al. [24], who showed that there was an Abelian dominance in SU(2) lattice gauge theory. This means that the order parameter for confinement is dominated by an Abelian contribution, indicating that confinement arises from the Abelian subgroup of the non-Abelian theory. This lends credence to 't Hooft's gauge-fixing procedure mentioned above. Another piece of evidence came from Barber et al. [25], who, after selectively removing lattice configurations containing magnetic monopoles, found that the resulting theory was non-confining.

Note that past studies of the dual superconductor hypothesis have been mainly performed in the vacuum, without quarks, and it is encouraging that the results hint strongly that monopoles are relevant to the confinement process. However, a real test of the dual superconductor hypothesis should measure the response of these monopoles to external sources, such as a quark-antiquark pair. In the absence of sources, the monopoles form current loops distributed randomly in spacetime. The dual superconductor hypothesis is validated if the introduction of sources results in a reorganization of these currents so as to squeeze the flux lines between the quarks into a tube. The demonstration of this for U(1) lattice gauge theory, and its extension to SU(2), is the crux of this thesis.

To test the dual superconductor mechanism on the lattice, we must devise a function that measures the response of the magnetic monopoles to the presence of a quark-antiquark pair. In this we are guided by the London equation for superconductors [10,26], which relates the line integral of the supercurrent around a closed loop to the magnetic flux through the surface enclosed by the loop. We devise two correlation functions which are the lattice equivalents of the terms in the dual version of the London equation. If this equation is valid on the lattice, this is proof of the Meissner effect, and hence of the dual superconductor hypothesis.
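For orientation, the London relation and the fluxoid condition referred to above can be written schematically as follows. This uses one standard sign convention; the precise lattice definitions of E, J_m and λ are deferred to Chapter 3.

```latex
% Ordinary London equation: the supercurrent J_s screens the magnetic field B,
\mathbf{B} + \lambda^{2}\,\nabla\times\mathbf{J}_{s} = 0 ,
% and its dual, with the electric field E and the magnetic monopole current J_m:
\mathbf{E} + \lambda^{2}\,\nabla\times\mathbf{J}_{m} = 0 .
% Integrating the dual relation over a surface S pierced by the flux tube
% gives the electric fluxoid, which should be quantized:
\int_{S}\mathbf{E}\cdot d\mathbf{S}
  + \lambda^{2}\oint_{\partial S}\mathbf{J}_{m}\cdot d\boldsymbol{\ell}
  = \Phi_{E} .
```

The two lattice correlation functions mentioned in the text correspond to the surface (flux) term and the line-integral (curl) term in these relations.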

Our results indicate unambiguously that a dual London equation is obeyed on the lattice for pure four-dimensional U(1) gauge theory. It is all-important, then, to see whether the same holds for non-Abelian lattice gauge theories. This is more difficult to demonstrate because the necessary gauge fixing step is costly in terms of computer time. However we have performed this calculation (albeit on a relatively small scale) and find that the Meissner effect does occur, but the data is consistent with a Ginzburg-Landau type theory [10] rather than a London equation. For both theories, we also calculate the London penetration depth and demonstrate fluxoid quantization.

This thesis is organized as follows. Chapter 2 begins with a discussion of the connection between the path integral formalism of quantum field theory and lattice gauge theory. An overview of lattice gauge theory, its implementation and interpretation, and a historical perspective is also presented, finishing with a detailed exposition of the numerical techniques employed on the lattice. In chapter 3 we review the London theory of superconductivity briefly and write down its dual version. We then explain how our correlation functions are the lattice equivalents of the terms in the London equation and how the calculation is performed. We present our results and their interpretation. Chapter 4 deals with our study of SU(2) lattice gauge theory, in which we discuss 't Hooft's gauge fixing mechanism and our implementation of this on the lattice, and display our results and their interpretation in terms of a dual Ginzburg-Landau theory. Finally, in a summary chapter, we discuss our results, their consequences and their limitations, as well as the directions indicated by our project for further research.

CHAPTER 2

LATTICE GAUGE THEORY BASICS

2.1 Overview of Path Integrals

We review the path integral formulation of quantum field theory [27] and its implementation on the lattice, and discuss in general terms how a lattice calculation is performed and interpreted.

To understand how lattice gauge theory relates to the path integral formulation of quantum field theory, we start by recalling the path integral description of quantum mechanics, in which the Green function of the theory can be expressed as a weighted sum over all possible paths between the initial and final states. If the coordinate degrees of freedom of the quantum mechanical state are denoted collectively by q, then as the system evolves from the initial state (q,t) to the final state (q',t'), the wavefunction at the later time may be written:

\psi(q',t') = \int dq \, G(q',t';\,q,t) \, \psi(q,t),

where G(q',t';\,q,t) is the Green function. Since lattice gauge theory is formulated in Euclidean space, let us analytically continue this Green function to imaginary time, i.e., t \to -i\tau, t' \to -i\tau'. Further, assuming that the Hamiltonian has the form

H = \sum_\alpha \frac{p_\alpha^2}{2m} + V(q),


where \alpha is the index for the coordinate degrees of freedom of the system, we divide the time interval \tau' - \tau into N small intervals of length \epsilon = (\tau' - \tau)/N. Each path starting from (q,\tau) and ending on (q',\tau') is approximated by straight line segments over each time interval \tau_i - \tau_{i-1} and is weighted by e^{-S_E[q]}. Then, summing over all possible paths, we obtain, for the Green function,

\langle q' | e^{-H(\tau'-\tau)} | q \rangle = \int [Dq] \, e^{-S_E[q]}. \qquad (2.1.1)

Here

S_E[q] = \int_\tau^{\tau'} d\tau'' \, L_E\big(q(\tau''), \dot q(\tau'')\big)

is the Euclidean action and L_E is the Euclidean Lagrangian. In discretized form,

S_E[q] = \sum_{i=0}^{N-1} \epsilon \left[ \frac{m}{2} \left( \frac{q(\tau_{i+1}) - q(\tau_i)}{\epsilon} \right)^2 + V(q(\tau_i)) \right].

We have now represented the matrix element as a sum over all possible paths, each path weighted by a Boltzmann-like factor \exp(-S_E[q]). The dominant contributions to this integral will come from those paths for which S_E is stationary, i.e., which satisfy the condition \delta S_E[q] = 0. This is the principle of least action, which leads to the classical Euclidean equations of motion. Thus contributions from paths other than the classical ones represent quantum fluctuations.
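The discretized Euclidean action can be made concrete with a small numerical sketch. The example below (a hypothetical free-particle case, not taken from the thesis) evaluates S_E for the straight-line classical path between fixed endpoints and for a path with a "quantum fluctuation" at its midpoint, confirming that the classical path has the smaller action.

```python
def euclidean_action(path, eps=0.1, m=1.0, V=lambda q: 0.0):
    """Discretized S_E = sum_i eps * [ (m/2) * ((q_{i+1}-q_i)/eps)^2 + V(q_i) ]."""
    s = 0.0
    for i in range(len(path) - 1):
        qdot = (path[i + 1] - path[i]) / eps
        s += eps * (0.5 * m * qdot**2 + V(path[i]))
    return s

N = 10
# Classical (straight-line) path from q = 0 to q = 1 for a free particle (V = 0).
classical = [i / N for i in range(N + 1)]
# Same endpoints, but with the midpoint displaced: a non-classical path.
perturbed = list(classical)
perturbed[N // 2] += 0.3

S_cl = euclidean_action(classical)
S_pert = euclidean_action(perturbed)
print(S_cl, S_pert)
assert S_pert > S_cl  # the classical path minimizes the Euclidean action
```

In the path integral the perturbed path is not forbidden; it is merely suppressed by the smaller Boltzmann-like weight exp(-S_E).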

We now extend this formulation to quantum field theory, where we are mainly interested in scattering processes. The initial state of the system, at t = -\infty, is the ground state or vacuum of the theory. The interaction takes place at t = 0, and the final state of the system, at t = +\infty, is also the ground state. In field theory all physical information about the system is encapsulated in the set of Green functions of the theory. The relevant Green function is the vacuum-to-vacuum time-ordered product of field operators. An n-point Green function is such a product of n field operators in the Heisenberg representation. For instance, for the real scalar field continued to imaginary time t \to -i\tau,

\Phi_\alpha(x,\tau) = e^{H\tau} \, \Phi_\alpha(x,0) \, e^{-H\tau},

the n-point Green function is the time-ordered product,

G_{\alpha_1 \alpha_2 \ldots \alpha_n}(\tau_1, \tau_2, \ldots, \tau_n) = \langle E_0 | T\big( \Phi_{\alpha_1}(\tau_1) \Phi_{\alpha_2}(\tau_2) \cdots \Phi_{\alpha_n}(\tau_n) \big) | E_0 \rangle.

Here |E_0\rangle is the vacuum or ground state of the system. In analogy with quantum mechanics, this path integral is evaluated by dividing the time interval \tau' - \tau into N infinitesimal intervals. Each path between the initial and final states is weighted by e^{-S_E[\Phi]} and by the products of the field values at the corresponding times. After summing contributions from all possible paths, we multiply the result by (1/\sqrt{2\pi\epsilon})^{nN}, where n is the number of degrees of freedom, and take the limit \epsilon \to 0, N \to \infty, keeping the product N\epsilon finite.

\langle E_0 | T\big( \Phi_{\alpha_1}(\tau_1) \cdots \Phi_{\alpha_n}(\tau_n) \big) | E_0 \rangle = \frac{ \int [D\Phi] \, \Phi_{\alpha_1}(\tau_1) \cdots \Phi_{\alpha_n}(\tau_n) \, e^{-S_E[\Phi]} }{ \int [D\Phi] \, e^{-S_E[\Phi]} }, \qquad (2.1.2)

where

S_E[\Phi] = \int_{-\infty}^{+\infty} d\tau \, L_E\big(\Phi(\tau), \dot\Phi(\tau)\big).

Since it is not always possible to calculate the path integral analytically (unless it is Gaussian), we use numerical methods instead, which are necessarily limited to finite lattices. We shall discuss these in Section 2.3.

A great simplification is afforded by the resemblance of equation (2.1.2) to a statistical mechanical ensemble average with a Boltzmann distribution. This enables us to use the techniques of statistical mechanics to calculate Green functions for systems with a large number of degrees of freedom. This justifies the fact that Euclidean Green functions are often referred to as correlation functions,

\langle \Phi_{\alpha_1}(\tau_1) \cdots \Phi_{\alpha_n}(\tau_n) \rangle = \frac{1}{Z} \int [D\Phi] \, \Phi_{\alpha_1}(\tau_1) \cdots \Phi_{\alpha_n}(\tau_n) \, e^{-S_E[\Phi]}, \qquad (2.1.3)

where Z = \int [D\Phi] \, e^{-S_E[\Phi]} resembles the partition function of the statistical mechanics analog.

For completeness we note that the analogous expression for Green functions in Minkowski space is

\langle \Phi_{\alpha_1}(t_1) \cdots \Phi_{\alpha_n}(t_n) \rangle = \frac{ \int [D\Phi] \, \Phi_{\alpha_1}(t_1) \cdots \Phi_{\alpha_n}(t_n) \, e^{iS[\Phi]} }{ \int [D\Phi] \, e^{iS[\Phi]} },

where the Lagrangian is L = T - V(\Phi), so that the paths are weighted by the oscillatory factor e^{iS} rather than the real, positive factor e^{-S_E}.

2.2 Lattice Gauge Theory

Let us now turn to the lattice formulation of quantum field theory [12,27-29]. We begin by putting spacetime on a discrete, hypercubical grid. This is intended to be a temporary measure, a mathematical trick to enable us to do numerical calculations, after which the lattice spacing is supposed to be taken to zero. On the lattice we are in Euclidean space, which has the useful property of making our integrals converge smoothly.

Each lattice link is associated with an element of the symmetry group of the field theory being studied. For U(1) gauge theory, these are U(1) group elements denoted U(i,j) = \exp(i\theta) = \exp(i a g_0 A_\mu). Here g_0 is the coupling, a is the lattice spacing, and A_\mu is the gauge field. The i and j indices on the group element represent the two adjacent sites that define the link on which the element U(i,j) is located. From each site there emanate 8 such links, two in each of the 4 possible directions. Those pointing in the positive directions are labelled by the site. A link variable pointing along the negative direction is the inverse of the link along the positive direction, U(j,i) = U(i,j)^{-1} = \exp(-i\theta).

Having defined the link variables U(i,j) on every link of the lattice, we now have a Euclidean statistical mechanical system in the canonical ensemble with partition function

Z = \int dU \, e^{-S[U]}, \qquad (2.2.1)

where we take S to be the Wilson action:

S[U] = \beta \sum_P \Big[ 1 - \tfrac{1}{2}\big( U_P + U_P^\dagger \big) \Big]. \qquad (2.2.2)

Here \beta is analogous to 1/kT in statistical mechanics. U_P is the plaquette variable, defined as the oriented product of directed links around an elementary square. Thus for a plaquette in the \mu\nu plane at site n, this is

U_\mu(n) \, U_\nu(n+\hat\mu) \, U_\mu^\dagger(n+\hat\nu) \, U_\nu^\dagger(n)

(see Fig. 2.1). S reduces to the usual (Euclidean) action for a pure gauge field,

\int \tfrac{1}{4} \, \mathrm{Tr}(F_{\mu\nu} F_{\mu\nu}) \, d^4x,

in the naive continuum limit of the lattice spacing a \to 0, where F_{\mu\nu} is the gauge field tensor. This is the usual gauge action if we identify \beta = 2/g_0^2. The plaquette itself reduces to \exp(i a^2 g_0 F_{\mu\nu}) in this limit.
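For compact U(1) the construction above can be sketched in a few lines of code. In the toy example below (illustrative, with hypothetical array names; not the thesis code), each link carries an angle \theta, the plaquette angle is the oriented sum of link angles around an elementary square, and the action per plaquette is \beta[1 - \cos\theta_P], which is exactly the real expression 1 - \tfrac{1}{2}(U_P + U_P^\dagger) for U_P = e^{i\theta_P}.

```python
import math
import random

L, beta = 4, 1.0
random.seed(0)
# theta[t][x][y][z][mu]: U(1) link angle at site n = (t,x,y,z) in direction mu.
theta = [[[[[random.uniform(-math.pi, math.pi) for _ in range(4)]
            for _ in range(L)] for _ in range(L)] for _ in range(L)] for _ in range(L)]

def link(n, mu):
    t, x, y, z = (c % L for c in n)   # periodic boundary conditions
    return theta[t][x][y][z][mu]

def step(n, mu):
    return tuple(c + (1 if i == mu else 0) for i, c in enumerate(n))

def plaquette_angle(n, mu, nu):
    """Oriented sum: theta_mu(n) + theta_nu(n+mu) - theta_mu(n+nu) - theta_nu(n)."""
    return link(n, mu) + link(step(n, mu), nu) - link(step(n, nu), mu) - link(n, nu)

def wilson_action():
    """S = beta * sum over all plaquettes of [1 - cos(theta_P)]."""
    sites = [(t, x, y, z) for t in range(L) for x in range(L)
             for y in range(L) for z in range(L)]
    return beta * sum(1.0 - math.cos(plaquette_angle(n, mu, nu))
                      for n in sites for mu in range(4) for nu in range(mu + 1, 4))

print(wilson_action())  # non-negative; zero only for a flat (pure-gauge) configuration
```

Note that the action depends on the links only through the gauge-invariant plaquette angles, and it vanishes identically when all links are set to the identity.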

So far what we have is a vacuum consisting of the gauge particles of our theory, which are randomly pair-produced and annihilated. We have not introduced quarks yet. Introducing dynamical quarks on the lattice gives rise to what is known as the doubling problem [27,28], which arises as follows. To introduce fermions on the lattice, consider the action for a free Dirac field, continued to imaginary time,

S_F = \int d^4x \, \bar\psi(x) \big( \gamma_\mu \partial_\mu + M \big) \psi(x),

where the \gamma_\mu are the Euclidean counterparts of the Dirac matrices. The problem arises when we write down a correlation function such as the two-point function for the field operators, using the partition function with the above action. It is most easily seen if we try to take the naive continuum limit of this function. It turns out that there are then 16 fermionic species, fifteen too many. In D spacetime dimensions there are 2^D fermions. There are ways of overcoming this problem, but they are computationally intensive and not very physically transparent.

Fortunately, the study of the fundamental problems of quantum chromodynamics such as quark confinement can be done without the introduction of fermion degrees of freedom on the lattice.

Figure 2.1: The Links Constituting a Plaquette

Static quarks suffice, allowing us to study the broad features of the phenomenon in some detail. It can also be argued that using static quarks on the lattice is a good approximation to real-life heavy quarks such as the b-quark.

Static quark sources were first introduced by Wilson [12] in his original formulation of lattice gauge theory, via the Wilson loop. To understand this [28], consider separating a quark-antiquark pair to a relative distance R. The pair is kept at this separation for a time T and then brought together and annihilated. The world line of such a quark-antiquark pair would form a closed loop in spacetime. The Euclidean amplitude for this process is the matrix element

\langle i | e^{-HT} | f \rangle,

where |i\rangle, |f\rangle represent the initial and final states, i.e. the q\bar q pair a distance R apart, and H is the Hamiltonian. This equation can be written as a path integral,

\langle i | e^{-HT} | f \rangle = \frac{ \int [DA_\mu] \, \exp\big[ -S + i \int J_\mu^a A_\mu^a \, d^4x \big] }{ \int [DA_\mu] \, e^{-S} },

where J_\mu^a is an external current describing the world lines of the quarks, and the group index runs over a = 1,\ldots,8 for SU(3), a = 1,2,3 for SU(2), and a = 1 for U(1). For a simple planar loop the J_\mu^a A_\mu^a term becomes (1/2)\lambda^a A_\mu^a, where the \lambda^a are the 3 \times 3 matrices that are the generators of SU(3). Since |i\rangle, |f\rangle are identical and because this is a static process, the above equation reduces to

\exp(-V(R)T) \, \langle i | f \rangle, where V(R) is the interquark potential. The quantity

W(C) = \Big\langle \mathrm{Tr} \, P \exp\Big( i g \oint_C A_\mu \, dx_\mu \Big) \Big\rangle

is called the Wilson loop correlation function. Here P stands for path ordering.

On the lattice, the Wilson loop is defined as the product of links forming a closed loop C lying on a plane, such as the x-t plane of the lattice, as shown in Fig. 2.2:

W(C) = \Big\langle \mathrm{Tr} \prod_{(i,j) \in C} U(i,j) \Big\rangle. \qquad (2.2.3)

The Wilson loop is an order parameter for confinement, as we will see shortly.

At a fixed time t between 0 and T, we can study the static properties of the quark-antiquark pair, if the Wilson loop is long enough in the time direction to avoid contamination by excited quark states that are created by the creation and annihilation process.

The behavior of the Wilson loop function was the first indication that the lattice was the natural place to study strongly interacting theories. In the limit of strong coupling, when g is large and \beta = 1/g^2 is small, the partition function Z can be expanded in \beta, in a manner analogous to the high temperature expansion for a statistical mechanical system. In this regime, the Wilson loop acquires an area law behavior, W(R,T) = \exp(-kA), where A = RT is the area of the loop. Since we have just seen that W(R,T) \sim \exp(-V(R)T), it follows that V = kR. We see that k represents the string tension. It serves as an order parameter since it is identically zero in the deconfined phase and non-zero in the confined sector of the theory. The area law behavior is independent of the shape of the closed loop C.

Figure 2.2: The Links Constituting a 3 x 3 Wilson Loop
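The logic of extracting V(R) and the string tension from Wilson loops can be mimicked with synthetic data. The sketch below (purely illustrative numbers, not measured ones) builds loop values obeying an exact area law W(R,T) = exp(-kRT), recovers the potential from V(R) = -(1/T) ln W(R,T), and reads off the string tension as the slope of the linear potential.

```python
import math

k_true = 0.25  # hypothetical string tension in lattice units

def wilson_loop(R, T, k=k_true):
    """Synthetic area-law Wilson loop: W(R,T) = exp(-k * R * T)."""
    return math.exp(-k * R * T)

T = 8  # time extent, taken large to project onto the static potential
V = {R: -math.log(wilson_loop(R, T)) / T for R in range(1, 6)}

# For a pure area law the potential is linear, V(R) = k R, so the
# finite difference V(R+1) - V(R) recovers the string tension.
k_est = V[2] - V[1]
print(V)      # V(R) grows linearly with R
print(k_est)  # equals k_true for exact area-law data
```

In a real simulation W(R,T) also carries a perimeter (self-energy) piece, which is why ratios or fits over several loop sizes are used rather than a single loop.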

Unfortunately it also means that the signal dies down exponentially with area, so we are limited to studying loops that must not be too large, and yet cannot be too small if we are interested in states where the kinetic energy of the pair is zero. For nearly all gauge groups of interest, the area law behavior is seen in the strong coupling limit. This includes U(1) and SU(N).

In a theory without confinement, the quark pair energy is twice the self-energy E_s of a single quark. Then the Wilson loop function will obey a perimeter law, W(C) = \exp(-E_s \, p(C)), where p(C) is the perimeter of the loop. This behavior actually persists even in the strong-coupling or confining phase of the theory, but there it is dominated by the area law dependence.

In pure U(1) lattice gauge theory the value of the parameter \beta in the action determines which phase the theory is in [17,30]. At zero temperature there are two phases, confined and deconfined, which are separated by a weak first order phase transition at \beta \approx 1 [32]. The pure SU(2) theory [13,31] has only a confined phase at zero temperature, whereas both phases exist at non-zero temperature, separated by a second order phase transition at a \beta value depending on the system temperature. We mention in passing that simulating a lattice gauge theory at a finite temperature is done by taking the time extent of the lattice to be much smaller than the space extent. The lattice temperature is then 1/(a N_t), where N_t is the time extent of the lattice.

If dynamical fermions are introduced on the lattice, the Wilson loop function is no longer a reliable indicator of confinement or deconfinement. This is because, beyond a certain interquark separation, there is enough energy stored in the flux tube to create another quark-antiquark pair from the vacuum, so that the Wilson loop now represents two mesons instead of two quarks.

We now discuss briefly the continuum limit of lattice gauge theories [27,28,34]. At the end of the calculation we must take the lattice spacing a to zero to recover continuum physics. The naive continuum limit, which consists of simply taking the limit a \to 0, will not suffice. For one thing, there are many choices of lattice action which have the same naive continuum limit, but need not lead to the same continuum physics. The correct way to take the continuum limit is to ensure that physical observables remain finite in the limit of zero lattice spacing. However, not all theories possess a continuum limit. Suppose our physical observable is a mass, m. This is proportional to the reciprocal of the correlation length, so that the smallest mass gives the largest correlation length. Now the mass as measured on the lattice will be in lattice units. The physical mass is m_{phys} = m_{latt}/a. So as a \to 0, so must m_{latt}, if m_{phys} is to be finite. Thus the correlation length measured in lattice units must diverge. So lattice functions that diverge as a \to 0 are likely to represent physical quantities in the continuum limit. Hence, a continuum limit exists for the theory in question only if it has a critical region in its parameter space where the correlation length diverges. Renormalization group techniques can be employed, as in statistical mechanics, to study the critical behavior of the lattice gauge theory.

The imposition of the condition that physical observables must be finite in the limit a \to 0 introduces a scale, in terms of which physical quantities can be measured. The only parameter in the lattice gauge theory is the dimensionless bare coupling g_0. The correlation length will depend on this coupling. At some value g_0 = g_0^* the correlation length diverges. Let there be a physical observable A with dimension d in units of mass, and let its corresponding lattice variable be \hat A_{latt}, which can depend only on g_0. If a continuum limit exists, then the correctly dimensioned lattice variable A_{latt}(g_0, a) = (1/a)^d \hat A_{latt}(g_0) must have a finite limit as a \to 0. This is only possible if g_0 depends on a: A_{latt}(g_0(a), a) \to A_{phys} as a \to 0. As A_{latt} approaches A_{phys}, g_0(a) approaches g_0^*, the value corresponding to the critical point.

How do we extract the dependence of g_0 on a? We go to a lattice where a is as small as is practically feasible and, knowing A_{phys}, we use our measurement of \hat A_{latt} and the equation A_{phys} = (1/a)^d \hat A_{latt}(g_0) to determine g_0(a).

In the case of QCD, the form of this function can be derived if we ignore dynamical fermion effects. We outline this derivation below. Since the form of g_0(a) is independent of the particular physical observable being considered, let us use the interquark potential V. At a finite lattice spacing a, the correctly dimensioned potential on the lattice is

V(R, g_0, a) = \frac{1}{a} \hat V(R/a, g_0),

and it must satisfy the renormalization group equation

\left[ a \frac{\partial}{\partial a} - \beta(g_0) \frac{\partial}{\partial g_0} \right] V(R, g_0, a) = 0, \qquad (2.2.5)

where \beta(g_0) = -a \, \partial g_0/\partial a is the beta function. Knowing the beta function, we can determine g_0(a). The question, then, is to find the form of the beta function. This can be done approximately by perturbation theory where, to every order, the RG equation must hold.

By summing the relevant perturbative diagrams that contribute to the potential to order g_0^4, one obtains

V(R, g_0, a) = -C \, \frac{g_0^2}{4\pi R} \left( 1 + \frac{11 \, g_0^2}{8\pi^2} \ln\frac{R}{a} + O(g_0^6) \right). \qquad (2.2.6)

Inserting this into the RG equation, we obtain the perturbative beta function to lowest order:

\beta(g_0) = -\beta_0 \, g_0^3, \qquad \beta_0 = \frac{11}{16\pi^2}.

This should be applicable if the coupling is sufficiently small. The negative sign means (from the definition of the beta function) that as the lattice spacing is decreased, so is g_0; in fact, g_0 is driven toward a fixed point g_0^* = 0, corresponding to a zero of the beta function. Thus the continuum limit corresponds to vanishing bare coupling, or asymptotic freedom.

Integrating equation (2.2.5), we obtain a relation between a and g_0:

a = \frac{1}{\Lambda_L} R(g_0), \qquad (2.2.7)

where R(g_0) = e^{-1/(2\beta_0 g_0^2)} in the lowest order, \beta_0 = 11/16\pi^2, and \Lambda_L is an integration constant with the dimensions of mass. So far we have only considered the leading term in the beta function expansion. This term describes the behavior of the theory near the fixed point.

For the region near the fixed point, g_0 close to g_0^* (close to the continuum limit, since as g_0 decreases so does a), we have:

A_{latt} = A_{phys} \, a^d = C \, (R(g_0))^d. \qquad (2.2.8)

Here C is a dimensionless constant. This behavior is referred to as asymptotic scaling. On a finite size lattice there will exist a limited region - a scaling window in coupling constant space - where physical quantities will exhibit this behavior. If g_0 (and consequently a) becomes too small, finite size effects come into play because one cannot make the lattice arbitrarily large. If g_0 becomes too large, the lattice will become insensitive to fluctuations smaller than a. Table 4 shows a as a function of \beta for SU(2) lattice gauge theory.

Thus, when performing a calculation on the lattice, we should ensure that the physical length scale is much smaller than the linear extent or size of 33

Table 4: a(/3) for SU(2) Lattice Gauge Theory

$ a( 3) fm 2.22 0.19S1 (61) T30 0.1616 (47) 2.40 0.1210 (05 2.50 0.0S43 (OS) the lattice, while keeping a small enough so that we are reasonably close to the continuum limit. However, C'reutz [35] successfully showed (via a numer­ ical calculation) that the string tension in SU(2) lattice gauge theory obeys asymptotic scaling, even for lattice sizes as small as 104.
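The lowest-order scaling relation (2.2.7) can be evaluated directly. The small sketch below (in units where \Lambda_L = 1; the sample couplings are illustrative) shows how the lattice spacing shrinks toward zero as the bare coupling is tuned down, which is the content of asymptotic freedom.

```python
import math

beta0 = 11.0 / (16.0 * math.pi**2)  # lowest-order beta-function coefficient (no fermions)

def lattice_spacing(g0, lam=1.0):
    """a(g0) = (1/Lambda_L) * exp(-1/(2*beta0*g0^2)), lowest-order scaling relation."""
    return (1.0 / lam) * math.exp(-1.0 / (2.0 * beta0 * g0**2))

couplings = (1.2, 1.0, 0.9, 0.8)
spacings = [lattice_spacing(g) for g in couplings]
for g0, a in zip(couplings, spacings):
    print(g0, a)

# a(g0) decreases monotonically as g0 decreases: the continuum limit a -> 0
# corresponds to the fixed point g0* = 0.
assert all(s1 > s2 for s1, s2 in zip(spacings, spacings[1:]))
```

Note how rapidly a varies with g_0: the essential singularity exp(-1/(2 beta0 g0^2)) means a modest change in the bare coupling changes the lattice spacing by a large factor, which is why the usable scaling window on a finite lattice is narrow.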

2.3 Monte Carlo Methods

We present here the motivation for a computational approach to this problem and outline the numerical methods used. Consider a four-dimensional lattice consisting of 10^4 sites. There are four times as many link variables, and in computing the average of any physical quantity we must integrate over all of them. Each link variable is a matrix, defined by three real parameters if the theory is SU(2), or eight if we are simulating SU(3). Thus, depending on the group, we are performing 120,000 or 320,000 integrations.

Performing these integrations through standard numerical integration techniques is obviously out of the question. Statistical methods are required.

Also, we should take advantage of the fact that not all configurations of the link variables U are likely to contribute significantly to the integral. This is because configurations are Boltzmann-weighted in the partition function.

Thus we need an importance sampling technique which will not waste time on unimportant configurations.

We will begin by describing, in the most basic terms, the essence of the Monte Carlo method [36]. It is a numerical technique for doing multidimensional integrals using random numbers. As a simple illustration, consider the one-dimensional integral of a function f(x) shown in Fig. 2.3. According to a theorem of calculus, this integral is determined by the average value of the integrand in the range a < x < b (0 < x < 1 in our example). To determine this average, we choose n random numbers as the x values, x_i, uniformly distributed in the interval [a,b], and then sample the value of f(x). Then the Monte Carlo estimate of this integral is

F_n = (b-a) \langle f \rangle = (b-a) \frac{1}{n} \sum_{i=1}^{n} f(x_i),

where n is the number of trials. It can be shown that the error associated with a Monte Carlo result is

\sigma_m = \frac{\sigma}{\sqrt{n}},

where \sigma is the standard deviation, \sigma^2 = \langle f^2 \rangle - \langle f \rangle^2. The advantage of this method over other numerical integration techniques is that the error is independent of the dimension of the integral.
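As a concrete version of this estimate, the sketch below integrates a sample function f(x) = x^2 on [0,1] by uniform sampling; the function, sample size and seed are illustrative choices, not from the thesis.

```python
import math
import random

random.seed(12345)

def mc_integrate(f, a, b, n):
    """Monte Carlo estimate F_n = (b-a) * <f>, with error sigma / sqrt(n)."""
    samples = [f(random.uniform(a, b)) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n  # sigma^2 = <f^2> - <f>^2
    return (b - a) * mean, (b - a) * math.sqrt(var / n)

estimate, error = mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
print(estimate, "+/-", error)  # close to the exact value 1/3
assert abs(estimate - 1.0 / 3.0) < 5 * error
```

Doubling the number of samples reduces the error only by a factor of sqrt(2), but, as noted above, the 1/sqrt(n) scaling holds in any number of dimensions.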

The above naive integration technique can be improved by using the notion of importance sampling. As can be seen from the graph of the function f(x) in our example, the dominant contributions to the integral are concentrated in a particular range of the abscissa. It would be much more efficient if our random variables x_i were also concentrated in that range. This means that

Figure 2.3: Demonstrating the Basics of the Monte Carlo Method

instead of being uniformly distributed, the x_i should be distributed according to a suitable non-uniform probability distribution p(x), normalized so that

\int_a^b p(x) \, dx = 1.

Then our integral may be rewritten

F = \int_a^b dx \, p(x) \, \frac{f(x)}{p(x)},

which can be evaluated by sampling according to the probability distribution p(x), so that the Monte Carlo average is

F_n = \frac{1}{n} \sum_{i=1}^{n} \frac{f(x_i)}{p(x_i)}.

We choose a form of p(x) which mimics f(x) when f(x) is large. This ensures that the sampled integrand f(x)/p(x) is slowly varying and thus the variance \sigma^2 is reduced.
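A sketch of the same integral with importance sampling (function and distribution are illustrative choices): for f(x) = x^2, the perfectly matched density p(x) = 3x^2 would give f/p constant and hence zero variance; here we use p(x) = 2x, which already concentrates the points where f is large, and draw from it by inverting its cumulative distribution.

```python
import math
import random

random.seed(2024)

def importance_sample(f, p, draw, n):
    """F_n = (1/n) * sum of f(x_i)/p(x_i), with the x_i drawn from the density p."""
    total = total_sq = 0.0
    for _ in range(n):
        x = draw()
        w = f(x) / p(x)
        total += w
        total_sq += w * w
    mean = total / n
    var = total_sq / n - mean * mean
    return mean, math.sqrt(var / n)

f = lambda x: x * x
p = lambda x: 2.0 * x                      # density on [0,1], peaked where f is large
draw = lambda: math.sqrt(random.random())  # inverse-CDF sampling from p(x) = 2x

estimate, error = importance_sample(f, p, draw, 100_000)
print(estimate, "+/-", error)  # close to 1/3, smaller error than uniform sampling
assert abs(estimate - 1.0 / 3.0) < 5 * error
```

With this choice the sampled quantity f(x)/p(x) = x/2 varies far less over [0,1] than f itself, so the variance, and hence the error at fixed n, is reduced.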

On the lattice we are interested in computing averages of physical observables,

\langle O \rangle = \frac{ \int [D\Phi] \, O[\Phi] \, e^{-S[\Phi]} }{ \int [D\Phi] \, e^{-S[\Phi]} } \approx \frac{ \sum_{i=1}^{N} O_i \, e^{-S_i} }{ \sum_{i=1}^{N} e^{-S_i} }, \qquad (2.3.1)

where N is the number of configurations. We use an importance sampling procedure by choosing configurations according to a probability distribution function \Pi_i. We will average over N configurations of a biased sample, so we must weight each configuration by 1/\Pi_i to eliminate the bias. Then the Monte Carlo estimate of the observable is

\langle O \rangle = \frac{ \sum_{i=1}^{N} (O_i/\Pi_i) \, e^{-S_i} }{ \sum_{i=1}^{N} (1/\Pi_i) \, e^{-S_i} }. \qquad (2.3.2)

One possible choice is the Boltzmann factor itself,

\Pi_i = \frac{ e^{-S_i[\Phi]} }{ \sum_{k=1}^{N} e^{-S_k[\Phi]} },

so that

\langle O \rangle \approx \frac{1}{N} \sum_{i=1}^{N} O_i. \qquad (2.3.3)

This choice of \Pi_i is due to Metropolis et al. [37].

The sequence of steps in a Monte Carlo simulation for lattice gauge theory is as follows:

• Start with some initial configuration. For example, one could start 'cold', with all link matrices set to the identity.

• Make a random change in that configuration. Accept or reject the change by using one or more of the Metropolis, heat bath or overrelaxation algorithms, which will be discussed below. These ensure that the sequence of generated configurations obeys the desired probability distribution.

• Once the change has been accepted in the i-th configuration, compute O_i. At the end of the calculation, compute the average \langle O \rangle = (\sum_i O_i)/N.

The configurations in this sum constitute a Markov chain, which is a sequence of states such that the transition probability for going from one state to another is independent of all states except the immediately preceding one. The properties of Markov chains ensure that the (computer) time average \langle O \rangle approaches the ensemble average, with a statistical uncertainty of order 1/\sqrt{N}.

We now discuss briefly each of the three algorithms for generating configurations distributed according to the Boltzmann weight.

The Metropolis Algorithm [28,37]:

Having started with some initial configuration of link matrices, say [U], we make a change to a trial configuration [U]', which is selected with an arbitrary probability distribution P_{T,U}(U'). This change is accepted with the conditional probability

P_A = \min\left[ 1, \; \frac{ P_{T,U'}(U) \, \exp(-S(U')) }{ P_{T,U}(U') \, \exp(-S(U)) } \right]. \qquad (2.3.4)

In practice this amounts to selecting a random number r uniformly distributed between 0 and 1. The new configuration is accepted if r < P_A. If this condition is not satisfied, then the trial configuration is rejected and the old [U] retained.

The manner in which we choose a trial configuration [U]' is by multiplying each member U of the old configuration [U] with a group element h which has a probability distribution peaked around the identity, with equal probabilities for h and h^{-1}. The advantage of this method is that, by symmetry,

P_{T,U}(U') = P_{T,U'}(U), \qquad (2.3.5)

simplifying the above acceptance condition to

P_A = \min\left[ 1, \; e^{-\Delta S} \right], \qquad \Delta S = S(U') - S(U).

In effect, then, we compute the change in the action \Delta S and compare e^{-\Delta S} with the random number r. This procedure is carried out at every site, and

considering all sites of the lattice in this way is referred to as one sweep. We now show that, by following this procedure, the probability to find a configuration [U] after N sweeps approaches

P_N(U) \propto e^{-S(U)} \quad \text{as } N \to \infty.

Let W(U_a \to U_b) be the transition probability for going from configuration U_a to configuration U_b. Then the probability of finding a configuration [U] after N+1 sweeps is

P_{N+1}(U) = \sum_{[U']} W(U' \to U) \, P_N(U') + \Big[ 1 - \sum_{[U']} W(U \to U') \Big] P_N(U)

= P_N(U) + \sum_{[U']} \big[ P_N(U') \, W(U' \to U) - P_N(U) \, W(U \to U') \big].

If the probability distribution is stationary, then the condition of detailed balance is satisfied,

P_N(U') \, W(U' \to U) = P_N(U) \, W(U \to U'), \qquad (2.3.6)

and P_{N+1}(U) = P_N(U). The transition probability is chosen as

W(U \to U') = \begin{cases} 1, & S(U) > S(U') \\ e^{S(U)-S(U')}, & S(U) < S(U') \end{cases}

so that

\frac{W(U' \to U)}{W(U \to U')} = e^{S(U')-S(U)} \;\Rightarrow\; P_N(U) \propto e^{-S(U)}.
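The accept/reject step described above can be sketched for a single compact U(1) variable, with S(\theta) = \beta(1 - \cos\theta) standing in for the change in the Wilson action and a symmetric proposal peaked around the identity. This is a toy model (not the full lattice code); for \beta = 2 the equilibrium distribution e^{-S(\theta)} is peaked at \theta = 0, so the chain's average of \cos\theta settles well above zero.

```python
import math
import random

random.seed(7)
beta = 2.0

def action(theta):
    """Single-variable stand-in for the Wilson action: S = beta * (1 - cos(theta))."""
    return beta * (1.0 - math.cos(theta))

def metropolis_update(theta, delta=0.5):
    """Propose theta' = theta + h, h symmetric about 0; accept with min(1, exp(-dS))."""
    trial = theta + random.uniform(-delta, delta)
    dS = action(trial) - action(theta)
    if dS <= 0 or random.random() < math.exp(-dS):
        return trial   # accepted
    return theta       # rejected: keep the old value

theta, history = 0.0, []
for sweep in range(20_000):
    theta = metropolis_update(theta)
    if sweep >= 2_000:              # discard thermalization sweeps
        history.append(math.cos(theta))

avg_cos = sum(history) / len(history)
print(avg_cos)  # well above zero for beta = 2
assert 0.5 < avg_cos < 0.9
```

Because the proposal is symmetric in h and h^{-1}, equation (2.3.5) holds and only the change in the action enters the acceptance test, exactly as in the lattice algorithm.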

Care has to be taken to implement this algorithm correctly so as to avoid correlations between configurations, because the configurations in the Monte Carlo average must be statistically independent. To ensure that this is so, we discard a number of intervening configurations between measurements. The number of configurations to discard is determined by calculating the autocorrelation between configurations separated by intervals of 0, 1, 2, 3, etc. The autocorrelation between the i-th and j-th configurations U_i and U_j is defined as

D(j) = \frac{ \langle U_i U_j \rangle - \langle U_i \rangle^2 }{ \langle U_i U_i \rangle - \langle U_i \rangle^2 }, \qquad (2.3.8)

which is 1 when i = j. When U_i and U_j are independent, D(j) = 0. In our simulation, U_i is the lattice link variable defined in Section 2.2. However, generating independent configurations is particularly difficult near a second order phase transition, where the correlation length becomes very large. This is called critical slowing down. If the system contains metastable states, the algorithm may find one such state instead of the equilibrium.
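The autocorrelation D(j) can be estimated from the stored time series of any observable measured sweep by sweep. A sketch with synthetic data (an AR(1) process stands in for successive sweep measurements; the decay constant is an illustrative choice):

```python
import random

random.seed(42)

def autocorrelation(series, j):
    """D(j) = (<x_i x_{i+j}> - <x>^2) / (<x_i x_i> - <x>^2), estimated over the series."""
    n = len(series) - j
    mean = sum(series) / len(series)
    num = sum(series[i] * series[i + j] for i in range(n)) / n - mean * mean
    den = sum(x * x for x in series) / len(series) - mean * mean
    return num / den

# Synthetic "sweep history": successive values correlated as x_{i+1} = rho*x_i + noise.
rho, x, series = 0.8, 0.0, []
for _ in range(200_000):
    x = rho * x + random.gauss(0.0, 1.0)
    series.append(x)

for j in (0, 1, 5, 10):
    print(j, autocorrelation(series, j))
# D(0) = 1 by construction, and for this process D(j) decays roughly like rho^j;
# the separation at which D(j) becomes negligible tells us how many intervening
# configurations to discard between measurements.
```

In practice one measures D(j) for the actual lattice observable and keeps only configurations separated by enough sweeps that D(j) is consistent with zero.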

The Heat Bath Algorithm [38]:

We discuss this only briefly, as our simulation does not use this method. This is a special case of the general Metropolis algorithm. The trial configuration is chosen randomly from the group manifold, but with a weighting proportional to the Boltzmann factor. Then the terms in the acceptance criterion (equation 2.3.4) reduce to 1 and the change is always accepted. The name of this algorithm refers to the fact that it is equivalent to successively taking each link variable and putting it in contact with a thermal bath.

The Overrelaxation Algorithm [38]:

Again, this involves a different method of choosing a trial configuration [U]'. Let U' be an element of this configuration, corresponding to a particular link. Assume we have some group element U_0 which approximately minimizes the action S[U], and which is also not directly dependent on the current link variable U. Then construct the trial element U' thus:

U' = U_0 U^{-1} U_0. \qquad (2.3.9)

This also satisfies the symmetry relation, equation (2.3.5). This element is accepted or rejected as usual by comparing the change in the action with a random number uniformly distributed between 0 and 1, as described in the section on the Metropolis method. One advantage of the overrelaxation method is that the trial element is rather far from the old element in phase space, without much energy penalty. This decreases the correlation between successive lattice configurations and 'shakes up' the system so that the chances of ending up in a local extremum are less. In the case of SU(2) and U(1) lattice gauge theories this overrelaxation method preserves the value of the action, and hence must be alternated with Metropolis or other action-changing algorithms.

CHAPTER 3

A DUAL LONDON EQUATION FOR U(1) LGT

3.1 London Equation and Fluxoid Quantization

We will first set the stage for a discussion of the dual superconductor model for U(1) lattice gauge theory by studying flux tube formation in superconductors, as a preamble to constructing the analogy. Before we discuss this, let us establish our units. We will use Lorentz-Heaviside units throughout Chapters 3 and 4 for continuum equations of electrodynamics and its dual version. Although the speed of light, c, appears in these expressions, in actual calculations we will set c = 1. Quantities measured on the lattice are dimensionless. Physical quantities measured on the lattice must be given the correct dimensions by using appropriate factors of the lattice constant a. Thus a quantity with the dimensions of length in the continuum must be multiplied by a, while a quantity with the dimensions of mass in the continuum must be multiplied by 1/a in order to have the correct physical units.

The basic properties of a superconductor are its perfect diamagnetism and its perfect conductivity. These cause a superconductor to exclude from its interior any external magnetic field, a phenomenon known as the Meissner effect. The first attempt to describe these properties was made by F. and H. London [26] in 1935. They wrote down two relations between the microscopic electric and magnetic fields:

\vec E = \frac{\partial}{\partial t}\big( \Lambda \vec J_s \big) \qquad (3.1.1)

and

\vec B = -c \, \nabla \times \big( \Lambda \vec J_s \big), \qquad (3.1.2)

where \Lambda = \lambda_L^2/c^2 = m/(n_s e^2). Here m, e and n_s are respectively the mass, charge and number density of the superconducting charge carriers. \lambda_L is the London penetration depth, whose significance can be understood by combining equation (3.1.2) with the Maxwell equation

\nabla \times \vec B = \frac{\vec J_s}{c}, \qquad (3.1.3)

which gives us

\nabla^2 \vec B = \frac{\vec B}{\lambda_L^2}.

This implies that the magnetic field penetrates into the superconductor and decays exponentially over the characteristic length \lambda_L.

The justification for the London theory was given by F. London as follows. According to a theorem due to Bloch, the ground state wavefunction for the superconducting charge carriers will have zero net canonical momentum p in the absence of an applied field. Since p = mv + eA/c (where v is the velocity of the superconducting particles and A is the vector potential for the external magnetic field), this leads to an expression for the average velocity of the superconducting charge carriers in the presence of the field,

\vec v = -\frac{e \vec A}{mc},

if we assume that Bloch's theorem also applies when the external field is present. So in a sense we are implying that the wavefunction is 'rigid', maintaining its ground state form in spite of the field. The supercurrent density is then

\vec J_s = n_s e \vec v = -\frac{n_s e^2}{mc} \vec A.

This contains both the London equations (3.1.1) and (3.1.2), as can be seen by taking its time derivative and its curl. The value of n_s is not determined from this theory. If n is the total number of charge carriers in the superconductor, then at temperature T = 0 it is expected that n_s = n, and as T approaches the temperature T_c for the superconducting-to-normal phase transition, n_s is expected to drop continuously to zero. At T = 0, then, we expect that the London penetration depth is

\lambda_L(0) = \left( \frac{m c^2}{n e^2} \right)^{1/2}.

However, experimental results show that the actual penetration depth is always larger than \lambda_L(0), even after extrapolating to T = 0. This means that even at T = 0, n_s < n. Another assumption of the London theory is that the penetration depth (and therefore n_s) is independent of the applied magnetic field. However, experiment indicates that the penetration depth increases with magnetic field, so the London theory is essentially a weak field theory.

London also introduced the concept of the fluxoid, which is defined as

\phi' = \phi + \frac{\lambda_L^2}{c} \oint \vec J_s \cdot d\vec l,

where \phi = \int \vec B \cdot d\vec s = \oint \vec A \cdot d\vec l is the ordinary magnetic flux through the surface enclosed by the integration path. The fluxoid is zero for a path that encloses only superconducting material, by virtue of equation (3.1.2). However, if the path encloses a hole, it is easily seen that the fluxoid must be a constant. In fact it is quantized, which cannot be shown from a classical theory like the London theory.

When the superconductor contains a vortex, the London equations can be modified to take the presence of the core into account, thus:

\frac{\lambda^2}{c}\, \nabla \times J_s + B = \phi_0\, \delta_2(\vec{r}), \qquad (3.1.6)

where the vortex is assumed to lie along the z axis and \delta_2(\vec{r}) is a two-dimensional delta function in the x-y plane. With the Maxwell equation

(3.1.3), this becomes

\nabla^2 B - \frac{B}{\lambda^2} = -\frac{\phi_0}{\lambda^2}\, \delta_2(\vec{r}),

which has the exact solution

B(r) = \frac{\phi_0}{2\pi\lambda^2}\, K_0(r/\lambda). \qquad (3.1.7)

Here K_0 is the zero-order Hankel function of imaginary argument (the modified Bessel function of the second kind). This solution behaves as e^{-r/\lambda}/\sqrt{r} for r \to \infty.
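The vortex profile (3.1.7) and its quoted asymptotic behaviour are easy to check numerically. The sketch below uses SciPy's modified Bessel function `k0`, with illustrative values \lambda = 1 and \phi_0 = 1 (these numbers are assumptions of the sketch, not physical parameters from the text):

```python
import numpy as np
from scipy.special import k0

def vortex_field(r, lam=1.0, phi0=1.0):
    """Magnetic field of a vortex, eq. (3.1.7): B(r) = phi0/(2*pi*lam^2) K0(r/lam)."""
    return phi0 / (2.0 * np.pi * lam**2) * k0(r / lam)

r = np.linspace(0.5, 8.0, 200)
B = vortex_field(r)

# Large-r behaviour: K0(x) ~ sqrt(pi/(2x)) e^{-x}, so B falls off as e^{-r/lam}/sqrt(r).
asym = np.sqrt(np.pi / (2.0 * r)) * np.exp(-r) / (2.0 * np.pi)
```

The field decreases monotonically and, already at r = 8\lambda, agrees with the asymptotic form to better than two percent.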

3.2 Application of the Dual London Theory to U(1) L.G.T.

Let us now attempt to apply the ideas from the theory of superconductivity just discussed to U(1) lattice gauge theory and quark confinement. Since the theory of strong interactions is quantum chromodynamics, why do we even consider a U(1) gauge theory, which describes electrodynamics, in connection with quark confinement? The reason is that, unlike its continuum counterpart,

U(1) lattice gauge theory has two phases. At low values of \beta (see Chapter 2, section 1) or, equivalently, at strong coupling, there is a confined phase in which the potential between 'quarks' is a linear function of their separation, qualitatively similar to the confined phase of the SU(3) theory. A weak first-order phase transition occurs at \beta \approx 1.0, beyond which there is a deconfined phase, in which the 'quarks' interact through a Coulomb-like potential. (This phase reduces to electrodynamics in the continuum limit.) Thus we can study

U(1) lattice gauge theory as a prototype of confinement. It is also simpler and computationally less time consuming to simulate a U(1) gauge theory on a lattice than an SU(N) gauge theory. However, there is also a deeper reason, namely that in the SU(N) theories the confinement mechanism is expected to become transparent in a particular gauge, in which the non-Abelian theory resembles a U(1)^{N-1}-fold Abelian theory, as we will discuss in detail in Chapter 4. Thus techniques used in the study of U(1) lattice gauge theory can be easily extended to the SU(N) case.

We have mentioned earlier that the real test of the dual superconductor hypothesis is to measure the response of monopoles to the presence of a static quark-antiquark pair. So far, attempts to test this response have been unsuccessful, mainly because it was not known how to devise an appropriate correlation function that would measure it. We will show below that the correct indicators are the terms in the dual version of the London relations for a superconductor, i.e. the lattice equivalent of the electric field and the curl of the monopole current. If they obey a dual London relation then we have a dual Meissner effect and 'quarks' are confined. Just as a moving electrically charged particle produces a magnetic field, so a moving magnetically charged particle will produce an electric field. This results from

the invariance of Maxwell's equations under the duality transformations. The electric field E produced by magnetic monopole currents J_M will obey a dual version of Ampere's law:

-c\, \nabla \times E = J_M. \qquad (3.2.1)

We show below that the Meissner effect appears because a dual version of the London equation holds between the field and the monopole current, of the form

E = \frac{\lambda^2}{c}\, \nabla \times J_M. \qquad (3.2.2)

The above two equations result in the electric flux being confined to a region of size \lambda, the "London penetration depth" for the electric field. This arises from the equation

\nabla^2 E = \frac{E}{\lambda^2}.
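The screening equation follows in two lines from the dual Ampere law and the dual London relation; a sketch of the derivation, using \nabla \cdot E = 0 away from sources (equation numbers as in the text, sign conventions as adopted here):

```latex
\begin{aligned}
-c\,\nabla\times E &= J_M
  && \text{dual Amp\`ere law (3.2.1)}\\
\nabla\times J_M &= -c\,\nabla\times(\nabla\times E)
  = -c\left[\nabla(\nabla\cdot E)-\nabla^2 E\right] = c\,\nabla^2 E
  && (\nabla\cdot E = 0)\\
E &= \frac{\lambda^2}{c}\,\nabla\times J_M = \lambda^2\,\nabla^2 E
  && \text{dual London relation (3.2.2)}\\
\Rightarrow\qquad \nabla^2 E &= \frac{E}{\lambda^2}.
\end{aligned}
```

The only scale that enters is \lambda, which therefore sets the transverse size of the electric flux tube.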

Our system is a four-dimensional hypercube in Euclidean space, with skew-periodic (or helical) boundary conditions. U(1) gauge group elements U_\mu(n) are defined on the links, where n labels the site from which the link points in the positive direction \mu. We use the fact that U_\mu(n) = \exp(i\theta_\mu(n)), with \theta_\mu(n) = aeA_\mu(n), in expressing the standard Wilson action (introduced in Chapter 2) in terms of these angular variables:

S = \beta \sum_{n,\, \mu > \nu} \left[ 1 - \cos\theta_{\mu\nu}(n) \right], \qquad (3.2.3)

where \exp[i\theta_{\mu\nu}(r)] = U_\mu(r)\, U_\nu(r+\hat\mu)\, U_\mu^\dagger(r+\hat\nu)\, U_\nu^\dagger(r) is an oriented product of gauge variables around an elementary plaquette. The static quark-antiquark pair is represented by a Wilson loop chosen to lie in the z-t plane.

We measure the component of the electric field parallel to the q\bar{q} axis at a distance r from the axis via the correlation

E(r) = \frac{\langle \sin(\theta_W)\, \sin(\theta_P(r)) \rangle}{a^2 e\, \langle W \rangle}. \qquad (3.2.4)

Here \theta_W is the Wilson loop angle, the argument of the product of link variables taken around the loop; \theta_P is similarly defined for the plaquette and is short-hand notation for the \theta_{\mu\nu} mentioned in the previous paragraph. It can be shown that in the naive continuum limit this corresponds to the average value of the field tensor F_{\mu\nu}. To understand this, consider the average

\frac{\int [dU]\, e^{i\theta_P}\, e^{i\theta_W}\, e^{-S}}{\int [dU]\, e^{i\theta_W}\, e^{-S}}.

Recalling from Chapter 2 that in the naive continuum limit the plaquette variable e^{i\theta_P} becomes \approx e^{iea^2 F_{\mu\nu}}, we see that the imaginary part of this average measures a^2 e \langle F_{\mu\nu} \rangle in the presence of the Wilson loop source; this is the content of equation (3.2.4).

The Wilson loop is fixed in the lattice while the plaquette moves around it like a test charge, sampling the field due to the q\bar{q} pair at various distances from the axis. If the plaquette orientation is in the z-t plane, it represents the parallel component of the electric field (because it corresponds to the (3,4) element of the Euclidean field tensor F_{\mu\nu}). We are for the moment focusing on this component alone, as earlier studies [13,14] show that it is the dominant contribution to the energy density.
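To illustrate how an estimator of the form (3.2.4) behaves, the toy Monte Carlo below (not the thesis simulation; the Gaussian "angles", their widths, and the identification \langle W \rangle = \langle \cos\theta_W \rangle are all assumptions of the sketch) shows that correlated Wilson-loop and plaquette fluctuations produce a non-zero signal, while independent fluctuations of the same size average to zero:

```python
import numpy as np

def electric_field_estimator(theta_w, theta_p, a=1.0, e=1.0):
    """Estimator modeled on eq. (3.2.4):
    E = <sin(theta_W) sin(theta_P)> / (a^2 e <W>), with <W> = <cos(theta_W)>."""
    return (np.mean(np.sin(theta_w) * np.sin(theta_p))
            / (a**2 * e * np.mean(np.cos(theta_w))))

rng = np.random.default_rng(0)
n = 50000

# Correlated samples: a shared fluctuation mimics the field sourced by the pair.
common = 0.3 * rng.standard_normal(n)
theta_w = common + 0.1 * rng.standard_normal(n)
theta_p = common + 0.1 * rng.standard_normal(n)
signal = electric_field_estimator(theta_w, theta_p)

# Independent fluctuations of the same magnitude give no signal.
noise = electric_field_estimator(0.3 * rng.standard_normal(n),
                                 0.3 * rng.standard_normal(n))
```

The division by \langle W \rangle removes the trivial normalization of the Wilson-loop ensemble, so only genuinely correlated fluctuations survive.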

Before we discuss the correlation function that corresponds to \nabla \times J_M, we discuss how monopole currents are detected on the lattice. We use the DeGrand-Toussaint prescription [17], which uses Gauss's law to locate the magnetic flux, if any, through an elementary cube of the lattice. In lattice units this is

\oint B \cdot ds = \frac{1}{2} \sum ds_a\, \epsilon_{abc} \left[ \nabla_b \theta_c - \nabla_c \theta_b \right] = \sum_{p\, \in\, \mathrm{cube}} \theta_p.

The factor e_m = 2\pi/e is the magnetic charge, and \nabla_b and \nabla_c are lattice finite differences, defined at a site n, for instance, as \nabla_b \theta_c(n) = \theta_c(n+\hat{b}) - \theta_c(n), where \theta_c is the link angle, the argument of the group element. Also, p denotes a plaquette belonging to the elementary cube, and

\theta_p is the angle corresponding to such a plaquette. Imposing the requirement that \oint B \cdot ds be periodic in 2\pi, we must adjust any plaquette angle exceeding the range -\pi < \theta \le \pi by a multiple of 2\pi to bring it back within the range. We may write such a plaquette angle as \theta_p = \bar\theta_p + 2\pi n_p. Here \bar\theta_p represents physical fluctuations in the range -\pi to \pi, and the second term represents Dirac strings carrying 2\pi units of flux, with n_p the number of Dirac strings passing through the plaquette. After this adjustment, if \oint B \cdot ds is not zero, we have detected a monopole current segment in the direction orthogonal to the 3-space of the elementary cube. Thus the \mu-th component

of the current at x is given by:

J_\mu(x) = \epsilon_{\mu\nu\alpha\beta}\, \nabla_\nu\, \bar\theta_{\alpha\beta}(x), \qquad (3.2.5)

where we have redefined \bar\theta_{\alpha\beta} to be the physical plaquette angle in the range -\pi to \pi. This is again in lattice units. If the net cube angle is not zero,

there is a monopole current at x. A topological conservation law holds for the

monopole currents,

\nabla_\mu J_\mu(x) = 0, \qquad (3.2.6)

so the currents form closed loops.
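The DeGrand-Toussaint bookkeeping just described can be sketched in a few lines. The helper below is a minimal illustration with hand-made face angles (not production code): each oriented face angle is reduced to the physical range (-\pi, \pi], and the enclosed charge is read off from the 2\pi mismatch left behind by Dirac strings:

```python
import numpy as np

def reduce_angle(theta):
    """Map a plaquette angle into the physical range (-pi, pi],
    splitting off the 2*pi*n_p Dirac-string part (DeGrand-Toussaint)."""
    return theta - 2.0 * np.pi * np.round(theta / (2.0 * np.pi))

def monopole_charge(face_angles):
    """Net magnetic charge (in units of e_m) inside an elementary cube,
    given the six oriented plaquette angles on its faces.
    The unreduced oriented angles sum to zero identically (each link appears
    twice with opposite signs), so the sum of the reduced angles is 2*pi
    times an integer: the number of Dirac strings piercing the cube."""
    reduced = reduce_angle(np.asarray(face_angles, dtype=float))
    return int(np.round(reduced.sum() / (2.0 * np.pi)))

# Six oriented face angles summing to zero, all inside (-pi, pi]: no monopole.
empty_cube = [2.0, -1.0, -1.0, 0.5, -0.3, -0.2]

# Push one face past pi (a Dirac string through that face): a unit charge appears.
pierced_cube = [4.0, -1.0, -1.0, -1.0, -0.5, -0.5]
```

Scanning every elementary cube of every 3-space of the lattice with `monopole_charge` yields the current components J_\mu(x) of equation (3.2.5).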

Monopole current loops can best be visualized on the dual lattice [29], formed by associating a cube in the original lattice with a link, a site with a hypercube, a plaquette with another plaquette oriented at right angles to the original, and also vice versa. This is a mapping onto a lattice with integer variables (multiples of the magnetic charge e_m). Thus a cube on the original lattice which has a net flux of e_m is associated with a link on the dual lattice with a value of e_m, representing a monopole current density. A dual link pointing in the opposite direction carries a current of -e_m, and a dual link with value zero corresponds to a cube in the original lattice enclosing zero flux. Figure 3.1 shows the correspondence between a cube in the original lattice and a link in the dual lattice, and also the correspondence between a set of four adjacent cubes on the original lattice and a 1 x 1 monopole current loop on the dual lattice. Not all faces of the four cubes are shown. The directed monopole current loop, which is the dual plaquette, represents the line integral of the current density J_M around the elementary square. It can have a value between -4 and 4 (in units of e_m), depending on how many of the four cubes in the original lattice are found to enclose magnetic flux.

Further, we evaluate the curl of the monopole current density J_M. The a-th component of the curl of the current is defined as \nabla_b J_c - \nabla_c J_b. We identify this with the dual plaquette just discussed. Strictly speaking, since the dual plaquette is the line integral \oint J \cdot dl of the current density, it is equal to \int (\nabla \times J) \cdot ds. By identifying the line integral with the curl, we are making the approximation that \nabla \times J is constant within the area a^2 enclosed by the plaquette. We associate the center of the plaquette with the location of

\nabla \times J. We measure the lattice average of the absolute value of this curl as a possibly useful bulk property of the vacuum:

|\nabla \times J_M| = \frac{1}{N_P} \sum_{\mathrm{dual\ plaquettes}} \left| \oint J_M \cdot dl \right|, \qquad (3.2.7)

where N_P is the number of dual plaquettes in the lattice.

Another bulk property of the vacuum that we also measure is the monopole perimeter density, defined through the total length of all the monopole current loops in the lattice:

\rho_m = \frac{1}{4V} \sum_{x,\mu} |J_\mu(x)|, \qquad (3.2.8)

where V is the lattice volume. \rho_m was first measured by DeGrand and Toussaint, who found that in the confined phase \rho_m is large and \beta-dependent, and falls rapidly at the onset of the weak first-order phase transition at \beta \approx 1.0.

This was the first hint that monopoles may be relevant to confinement.

So far we have talked about properties of the monopole current in the vacuum. In order to test the dual London theory on the lattice, we measure the curl of the monopole current density in the presence of the Wilson loop.

Figure 3.1: Correspondence Between Original and Dual Lattice

In lattice units this is \langle \sin(\theta_W)\, (\nabla \times J_M) \rangle.

To give this the correct dimensions for the curl of a current we divide it by a^4. Thus the curl of the current in the presence of sources is

(\nabla \times J_M)_W = \frac{\langle \sin(\theta_W)\, (\nabla \times J_M) \rangle}{a^4\, \langle W \rangle}. \qquad (3.2.9)

Since the penetration depth \lambda is to be multiplied by a to give it the correct units, we see that the lattice version of the London equation, in terms of the correlations (3.2.4) and (3.2.9), reads the same as the continuum equation:

\frac{\langle \sin(\theta_W)\, \sin(\theta_P(r)) \rangle}{a^2 e\, \langle W \rangle} = \frac{(\lambda a)^2}{c}\, \frac{\langle \sin(\theta_W)\, (\nabla \times J_M) \rangle}{a^4\, \langle W \rangle}.

We need not substitute for a anywhere in this analysis; all measured quantities are in units of powers of a. As in the case of (3.2.4), the Wilson loop is fixed in the lattice while a dual plaquette samples the region around it. We restrict the orientation of the dual plaquette to the dual x-y plane, orthogonal to the plane of the Wilson loop (z-t), since our early calculations showed that the dominant signals appear in this plane, as we expect from the geometry of the problem.

3.3 Results

Our simulations are performed on a 9^3 x 10 lattice using skew-periodic (or helical) boundary conditions. Less extensive work on a 7^3 x 8 lattice yields similar results, except for the expected increase in statistical fluctuations arising from the smaller lattice size. The Wilson loop is of size 3 x 3 and lies in the z-t plane. We fix the time slice to be that at the center of the loop. We measure the electric flux and the curl of the monopole current density in the

transverse (x-y) plane midway between the quark-antiquark pair, as shown in Figure 3.2. This plane has dimensions 9 x 9 in lattice units.

A standard Metropolis algorithm, alternated with overrelaxation, is used to generate U(1) configurations. In the confined phase we thermalize for 10,000 sweeps, after which we sample the data every 10 sweeps for a total of 7000 measurements, which are then binned in groups of 5. In the deconfined phase only half as many measurements are necessary, since the fluctuations are much smaller. Because of the geometrical symmetry of the measurements, only the z-components of \langle E \rangle and \langle \nabla \times J_M \rangle are non-zero. If the Wilson loop is removed, even the z-components average to zero, so the response we observe is clearly induced by the presence of the q\bar{q} pair.

We first summarize our measurements of the bulk properties of the vacuum in Table 5. We measure the 3 x 3 Wilson loop W(3,3), the plaquette P, the monopole perimeter density \rho_m, and the absolute value of the curl of the monopole current, |\nabla \times J_M|.

Table 5: Bulk Properties of the U(1) Vacuum

  beta    W(3,3)        P             rho_m         |curl J_M|
  1.1     0.20383(32)   0.71665(13)   0.01312(6)    0.05164(25)
  0.99    0.01701(24)   0.56661(35)   0.1179(3)     0.4399(10)
  0.97    0.00848(8)    0.53659(14)   0.1422(2)     0.52396(42)
  0.95    0.00488(6)    0.51383(9)    0.16022(8)    0.58466(28)
  0.90    0.00181(8)    0.46954(13)   0.19449(11)   0.69466(37)

Now consider the correlation (3.2.4). Figure 3.3 shows the electric flux distribution for \beta = 1.1, where the vacuum is in the deconfined phase. The broad flux distribution seen is similar to the dipole field for two classical charges of opposite sign.

Figure 3.2: The Measurement Plane Relative to the q\bar{q} Pair

We also measure the total electric flux in four ways, which are compared in Table 6. Flux 1 is the flux calculated by computing the divergence of the electric field at the position of the quark. Flux 2 is obtained by summing the total flux through the measurement plane, including the flux that flows through the boundary due to the periodic boundary conditions. Flux 3 is obtained via a fit to the relation

E(r) - \frac{\lambda^2}{c}\, \nabla \times J_M(r) = \phi_e\, \delta_2(\vec{r}),

which is a generalization of the dual London relation to take into account the presence of the flux tube. Here \phi_e is the fluxoid (or the total electric flux) and \delta_2(\vec{r}) is a two-dimensional \delta-function. We will discuss this relation in more detail shortly. Theoretically the total flux should be the charge enclosed, and since in natural units \beta = 1/e^2 (which converts to \hbar c/e^2 in Lorentz-Heaviside units), we have for the total electric flux

\phi_e = e = \frac{1}{\sqrt{\beta}}.

This is the last entry in Table 6. We find good agreement between the theoretical value and the measured fluxes.

Table 6: Comparison of Calculated and Expected Total Electric Flux

  beta    Flux 1      Flux 2     Flux 3   phi_e
  0.90    1.006(81)   0.98(28)   0.986    1.054
  0.95    1.042(25)   0.81(9)    1.016    1.026
  0.97    1.015(19)   1.14(7)    1.001    1.015
  0.99    1.004(9)    0.98(3)    0.994    1.005
  1.10    0.953(1)    0.945(5)   0.156    0.953


Figure 3.3: Electric Flux Distribution for \beta = 1.1

Figure 3.4 shows the electric flux in the confined phase, at \beta = 0.95.

Here the flux is confined almost entirely within one lattice spacing of the q\bar{q} axis, and almost none flows through the boundary. This is the tube-like configuration that indicates confinement. The net flux is again equal to 1/\sqrt{\beta} within statistical error. The data are consistent with results from an earlier study of the flux tube for U(1) [14].

We show \langle E \rangle as a function of distance from the q\bar{q} axis in Figure 3.5. In Figure 3.6 we show \langle -\nabla \times J_M \rangle as a function of distance from the axis. We fit our data to \langle E \rangle = (\lambda^2/c) \langle \nabla \times J_M \rangle to extract \lambda, as shown in Table 7.

Table 7: \lambda from a Fit to the Dual London Equation

  beta    lambda
  0.90    0.32(2)
  0.95    0.482(8)
  0.97    0.567(9)
  0.99    0.755(6)
  1.1     > lattice size
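Extracting \lambda in this way amounts to a one-parameter least-squares problem. The sketch below uses synthetic exponential profiles, not the thesis data, and lattice units with c = 1 (all of these are assumptions of the sketch); it recovers an input penetration depth from noisy "measurements":

```python
import numpy as np

def fit_lambda(E, curl_J, c=1.0):
    """Least-squares estimate of lambda from <E> = (lambda^2/c) <curl J_M>:
    minimizing sum_r (E - (lam^2/c) curl)^2 gives
    lam^2 = c * sum(E * curl) / sum(curl^2)."""
    lam_sq = c * np.dot(E, curl_J) / np.dot(curl_J, curl_J)
    return np.sqrt(lam_sq)

# Synthetic profiles with an input penetration depth lambda = 0.5.
r = np.arange(1, 7, dtype=float)
curl = np.exp(-r / 0.5)                 # assumed curl profile
E_exact = 0.25 * curl                   # lambda^2 = 0.25
rng = np.random.default_rng(2)
E_noisy = E_exact * (1.0 + 0.02 * rng.standard_normal(r.size))
lam = float(fit_lambda(E_noisy, curl))  # should be close to 0.5
```

In practice MINUIT-style minimization gives the same answer for this linear problem; the closed-form expression just makes the fit transparent.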

The dashed curve in Fig. 3.5 is the result of using the solution of the dual London equation near a vortex, which yields a flux distribution of the form

E(r) = \frac{\phi_e}{2\pi\lambda^2}\, K_0(r/\lambda), \qquad (3.3.2)

the dual version of equation (3.1.7). We can see that there is very good agreement between the continuum version and the flux distribution from the lattice simulations.

The value of \lambda is consistent with the range of penetration of the electric flux in Fig. 3.5 and the thickness of the current sheet in Fig. 3.6. In accordance with the London theory, we should see a divergence of the penetration depth as we approach the transition to the deconfined phase. This is supported by


Figure 3.4: Electric Flux Distribution for \beta = 0.95

our data, which show a decrease in \lambda as we move away from the phase transition.

Also, in the deconfined phase, where \langle \nabla \times J_M \rangle is almost zero, fitted values of \lambda are larger than our lattice size.

The anomalous behavior of the point on the q\bar{q} axis can be understood by recalling that a superconductor penetrated by an Abrikosov flux tube becomes multiply connected, and the London relation is replaced by the more general fluxoid quantization condition. Its dual version is

\int E \cdot ds - \frac{\lambda^2}{c} \oint J_M \cdot dl = n\, \phi_e, \qquad (3.3.3)

where n is an integer. The data in Fig. 3.7 represent a lattice version of a delta function whose strength is very close to \phi_e.

This study has correctly identified the correlation functions that measure the response of the monopoles to the presence of quarks. Because the dual London equation and the electric fluxoid quantization condition are satisfied, we conclude that quarks in U(1) lattice gauge theory are confined via a Meissner effect.


Figure 3.5: Electric Flux Versus Transverse Distance for \beta = 0.95


Figure 3.6: -\nabla \times J_M Versus Transverse Distance for \beta = 0.95


Figure 3.7: The Fluxoid Versus Transverse Distance for \beta = 0.95

CHAPTER 4

METHOD AND RESULTS FOR SU(2) LGT

4.1 Abelian Projection: The Maximal Abelian Gauge

Here we discuss the theoretical basis of 't Hooft's gauge-fixing technique in the continuum. The need for a new approach in the non-Abelian case arises because the Higgs mechanism, which succeeds in confining quarks in the case of U(1) (because U(1) gauge theory with Higgs fields is formally identical to the Ginzburg-Landau theory of superconductivity), fails to do so in the non-Abelian gauge theory. To see this, consider a case where the unbroken symmetry group after spontaneous symmetry breaking is a continuous Abelian group. For instance, in the Georgi-Glashow model, O(3) symmetry is broken spontaneously to U(1). The Higgs field configuration produces natural magnetic monopoles of the 't Hooft-Polyakov kind. The monopole mass is predicted by this theory to be \sim 137\, M_W. Such monopoles are too heavy (on the mass scale of the quarks) to participate in screening external fields. Thus we must find an alternative to the Higgs mechanism.

To find an analogous picture of confinement for non-Abelian theories, 't Hooft [11] considered fixing the non-Abelian part of the gauge freedom such that the maximal Abelian (or Cartan) subgroup remains. He showed, in the continuum theory, that in this special gauge there exist singularities that can be identified with magnetic monopoles. If these monopoles mimic the U(1) monopoles to produce confinement in an analogous way, then the dual superconductor hypothesis might be validated as the correct mechanism for confinement.


Below we go through the formulation of this procedure, first in the continuum and then on the lattice. As 't Hooft points out, the idea behind gauge fixing is to isolate the degrees of freedom relevant to our problem. We fix the non-Abelian part of our SU(N) gauge theory so that the remaining gauge freedom is that of the Cartan subgroup, U(1)^{N-1}. This is done by choosing a tensor X that transforms covariantly under a gauge transformation \Omega:

X \to X' = \Omega X \Omega^{-1}. \qquad (4.1.1)

We now look for the gauge in which X is diagonal,

X' = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_N).

Let V be the gauge transformation that diagonalizes X. The gauge is still undetermined, since any diagonal rotation

d = \mathrm{diag}\big(e^{i\phi_1}, \ldots, e^{i\phi_N}\big), \qquad \sum_i \phi_i = 0,

will leave X' invariant. This is the U(1)^{N-1} subgroup. We are left with an (N-1)-fold Abelian gauge invariant theory.

We decide on some ordering prescription for the \lambda's: if X belongs to the Lie algebra of SU(N), \lambda_1 > \lambda_2 > \ldots > \lambda_N. If it is an element of SU(N), then \lambda_i = e^{i\phi_i} with \sum_i \phi_i = 0, so that \phi_1 > \phi_2 > \phi_3, etc.

Under the gauge transformation V, the gauge field A_\mu becomes

A_\mu \to A'_\mu = V \left( A_\mu + \frac{i}{g}\, \partial_\mu \right) V^{-1}, \qquad (4.1.2)

and the matter fields transform as \psi \to V\psi.

It can be shown that the diagonal components of A_\mu transform like N Abelian potentials,

a^i_\mu \to a^i_\mu + \frac{1}{g}\, \partial_\mu \phi_i, \qquad (4.1.3)

with the constraint \sum_i a^i_\mu = 0. Thus for SU(2) lattice gauge theory after Abelian projection we will be left with one photon-like field. The off-diagonal components transform as N(N-1) charged vector fields:

c^{ij}_\mu \to e^{i(\phi_i - \phi_j)}\, c^{ij}_\mu. \qquad (4.1.4)

For SU(2), a_1 = -a_2, so we have one complex matter field. The above transformations can be demonstrated quite easily for SU(2) by writing A_\mu as a 2 x 2 matrix and performing a U(1) gauge transformation.

Thus, by the process of Abelian projection, we have rearranged the degrees of freedom so that, for SU(2) for example, there is one photon-like field, one complex vector matter field, and one species of magnetic monopole.

4.2 Lattice Implementation

Abelian projection was first implemented on the lattice by Kronfeld et al. [21]. Although more than one kind of gauge has been studied in the literature, we confine ourselves to the maximal Abelian gauge, which seems to be the most promising. It is implemented in the following manner.

We choose, as our tensor X to be diagonalized, the matrix

X(s) = \sum_\mu \left[ U(s,\mu)\, \sigma_3\, U^\dagger(s,\mu) + U^\dagger(s-\hat\mu,\mu)\, \sigma_3\, U(s-\hat\mu,\mu) \right], \qquad (4.2.1)

where s is a particular site, U(s,\mu) is the link variable at s in the \mu-th direction, and \sigma_3 is the third Pauli matrix. This is the lattice equivalent of the gauge condition D_\mu A^{\pm\mu} = (\partial_\mu \pm i g a_\mu) A^{\pm\mu} = 0, where a_\mu (A^{\pm\mu}) are the diagonal (off-diagonal) elements of the potential A_\mu. Diagonalizing X at every site is equivalent to performing a local gauge transformation V(s) such that

R = \sum_{s,\mu} \mathrm{Tr}\left[ \sigma_3\, \tilde U(s,\mu)\, \sigma_3\, \tilde U^\dagger(s,\mu) \right] \qquad (4.2.2)

is maximized. Here \tilde U(s,\mu) = V(s)\, U(s,\mu)\, V^\dagger(s+\hat\mu). After R is maximized, the gauge is fixed and a coset decomposition of the link variables is performed with respect to the Cartan subgroup U^{N-1}(1) of the SU(N) gauge theory:

\tilde U(s,\mu) = c(s,\mu)\, u(s,\mu), \qquad (4.2.3)

where u(s,\mu) \in U^{N-1}(1) and the c(s,\mu) are matter fields. Explicitly, for SU(2),

\tilde U(s,\mu) = \begin{pmatrix} (1 - |c(s,\mu)|^2)^{1/2} & -c^*(s,\mu) \\ c(s,\mu) & (1 - |c(s,\mu)|^2)^{1/2} \end{pmatrix} \begin{pmatrix} u(s,\mu) & 0 \\ 0 & u^*(s,\mu) \end{pmatrix}. \qquad (4.2.4)

The matrices u(s,\mu) and c(s,\mu) have the appropriate gauge transformation properties under U(1)^{N-1}:

u'(s,\mu) = d(s)\, u(s,\mu)\, d^\dagger(s+\hat\mu),

c'(s,\mu) = d(s)\, c(s,\mu)\, d^{-1}(s).

For the Abelian link variables u_i(s,\mu) we choose the parametrization

u_i(s,\mu) = \exp\big(i\, \mathrm{Arg}[U_{ii}(s,\mu)]\big), \qquad i = 1, \ldots, N-1. \qquad (4.2.5)

For SU(2), a group element U may be written U = U_4 I + i\, \vec\sigma \cdot \vec U, so that U_{11} = U_4 + i U_3 = |U_{11}|\, e^{i\theta}. The Abelian link variable then becomes simply u = e^{i\theta}. In the naive continuum limit,

u_i(s,\mu) \to \exp\left( i \int dx_\mu\, a^i_\mu \right),

where the line integral is along the link and a^i_\mu is the continuum Abelian potential. It can easily be shown that their transformation properties under U(1)^{N-1}

are:

u_i'(s,\mu) = \exp(i\alpha_i(s))\, u_i(s,\mu)\, \exp(-i\alpha_i(s+\hat\mu)),

c_{ij}'(s,\mu) = \exp(i\alpha_i(s) - i\alpha_j(s))\, c_{ij}(s,\mu).
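The coset decomposition (4.2.3)-(4.2.5) is mechanical for SU(2). The sketch below (an illustration, not the thesis code) splits a random SU(2) matrix into its residual-U(1) phase and the charged factor, and one can check that the product reconstructs the original link and that the charged factor has the diagonal-real form of (4.2.4):

```python
import numpy as np

def coset_decompose(U):
    """Split an SU(2) link U into an Abelian phase times a 'matter' part,
    following u = exp(i Arg U_11), cf. eq. (4.2.5)."""
    phase = np.exp(1j * np.angle(U[0, 0]))
    u_mat = np.diag([phase, np.conj(phase)])   # element of the residual U(1)
    c_mat = U @ np.conj(u_mat).T               # charged ('matter') factor
    return u_mat, c_mat

def random_su2(rng):
    """SU(2) element U = a4*I + i a.sigma with |a| = 1."""
    a = rng.standard_normal(4)
    a /= np.linalg.norm(a)
    return np.array([[a[3] + 1j * a[2], a[1] + 1j * a[0]],
                     [-a[1] + 1j * a[0], a[3] - 1j * a[2]]])

rng = np.random.default_rng(3)
U = random_su2(rng)
u_mat, c_mat = coset_decompose(U)
```

By construction the diagonal entries of the charged factor are real and equal, matching the (1 - |c|^2)^{1/2} entries of equation (4.2.4).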

In this gauge, monopole currents are located in a manner exactly analogous to the case of U(1) lattice gauge theory discussed in Chapter 3.

We briefly discuss our gauge fixing techniques before going on to the measurements and results. We use three algorithms: a modified Metropolis procedure that searches for the maximum of R, an analytic method that maximizes R locally at alternate sites, and an appropriately modified overrelaxation procedure.

The first method is a pure acceptance/rejection version of the Metropolis algorithm, which accepts a new configuration if it raises the value of R and rejects it otherwise. Its disadvantages are that, due to the lack of randomness, this procedure can get stuck at a local extremum, and also that it may slow down and appear to saturate well before the actual extremum is reached. We overcome these problems by interspersing calls to this routine with calls to the other two.

The second method calculates analytically the form of a gauge transformation g that maximizes R. This is done by maximizing, with respect to each component of g, the local gauge-fixing functional supplemented by a Lagrange-multiplier term \lambda\, (g_4^2 + \vec g \cdot \vec g - 1) enforcing the SU(2) normalization of g; here \lambda is the Lagrange multiplier. We diagonalize the resulting 4 x 4 quadratic form, which has two eigenvalues +1 and two eigenvalues -1. This determines two of the three independent components of g, and the third is chosen randomly. This algorithm approaches R_{max} more steeply than the first, provided R is already high enough.

The third method is an overrelaxation procedure in which the existing lattice configuration is transformed using the square of the element g determined from the second algorithm. This does not change the value of R, but moves the configuration around in phase space, averting the danger that it may get stuck in a local extremum.

After trying out various combinations, it was found that the fastest way to proceed was to use the first method until R appeared to saturate, and then to call the second routine repeatedly, with occasional calls to the overrelaxation procedure. The extent of gauge fixing is measured, as in the work of Ref. [24], by |Z|^2, which is the lattice version of \langle |D_\mu A^{+\mu}|^2 \rangle and is defined thus:

|Z|^2 = \frac{1}{4V} \sum_{s,\mu} \left[ Z_1^2(s,\mu) + Z_2^2(s,\mu) \right], \qquad (4.2.6)

where V is the lattice volume and

Z_1(s,\mu)\, \sigma_1 + Z_2(s,\mu)\, \sigma_2 = U(s,\mu)\, \sigma_3\, U^\dagger(s,\mu)\, \sigma_3 - \sigma_3\, U(s,\mu)\, \sigma_3\, U^\dagger(s,\mu)
  + U^\dagger(s-\hat\mu,\mu)\, \sigma_3\, U(s-\hat\mu,\mu)\, \sigma_3 - \sigma_3\, U^\dagger(s-\hat\mu,\mu)\, \sigma_3\, U(s-\hat\mu,\mu).

|Z|^2 is simply the sum of the squares of the off-diagonal elements of the matrix X.
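The acceptance-only strategy of the first algorithm can be illustrated on a single link, where the local functional Tr[\sigma_3 U \sigma_3 U^\dagger] has maximum 2 at diagonal U and minimum -2. The toy hill-climb below is a one-link caricature (the annealed proposal size and the starting matrix are ad hoc choices of the sketch, not the thesis procedure):

```python
import numpy as np

sigma3 = np.diag([1.0, -1.0]).astype(complex)

def R_local(U):
    """Local gauge-fixing functional Tr[sigma3 U sigma3 U^dagger] for one link."""
    return float(np.real(np.trace(sigma3 @ U @ sigma3 @ np.conj(U).T)))

def small_su2(rng, eps):
    """SU(2) element close to the identity: g = a4 I + i a.sigma with |a| = 1."""
    a = np.concatenate([eps * rng.standard_normal(3), [1.0]])
    a /= np.linalg.norm(a)
    return np.array([[a[3] + 1j * a[2], a[1] + 1j * a[0]],
                     [-a[1] + 1j * a[0], a[3] - 1j * a[2]]])

rng = np.random.default_rng(5)
U = np.array([[0.0, 1.0], [-1.0, 0.0]], dtype=complex)   # worst case: R = -2
R_history = [R_local(U)]
for step in range(2000):
    eps = 0.3 if step < 1000 else 0.05   # crude annealing of the proposal size
    g = small_su2(rng, eps)
    trial = g @ U                        # gauge rotation acting at one site
    if R_local(trial) > R_local(U):      # pure acceptance/rejection: keep only improvements
        U = trial
    R_history.append(R_local(U))
```

Because only improvements are kept, R is monotone non-decreasing; the slowdown near the maximum is exactly the saturation behaviour described in the text, which the analytic and overrelaxation steps are designed to overcome.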

4.3 Results and their Interpretation

Our simulations are performed on a 13^3 x 14 lattice with skew-periodic boundary conditions, for \beta = 2.4 and \beta = 2.5. The link variables U(s,\mu) are SU(2) group elements that are updated in every sweep by one call to a Metropolis algorithm and one call to an overrelaxation algorithm. Before any measurements are taken, the system is thermalized by 1000 such sweeps. In each measuring sweep, 654 gauge fixing sweeps of the kind described in the previous section are performed. The typical value of R after this process is \approx 0.74, and |Z|^2 \approx 10^{-5}. After the measurements are taken, 25 regular SU(2) updates are performed, which undo the gauge fixing; in the next measuring sweep the process is repeated. In this way we accumulate 210 measurements for both \beta = 2.4 and \beta = 2.5. We have some additional measurements for \beta = 2.4, bringing that total to 481, and we use this full set for one part of our analysis. The greatest limiting factor in this simulation is the CPU time for a gauge fixing sweep, which is about 25 minutes, compared to about 15 seconds for a regular SU(2) updating sweep.

After gauge fixing, the measurement procedure is identical to that for the U(1) case described in Chapter 3, section 2, so we directly present our results below. Our measurements of the bulk properties of the vacuum are summarized in Tables 8 and 9. We have measured the plaquette and the 3 x 3 Wilson loop, both for the full SU(2) theory and after gauge fixing, in terms of the residual Abelian link variables. Our results for the latter agree well with those of Suzuki [24]. We also measure the monopole perimeter density \rho_m and the average absolute value of the curl of the monopole current, which are defined in Chapter 3.

Table 8: Bulk Properties of the SU(2) Vacuum

  beta   Plaquette     W(3,3)
  2.4    0.63011(24)   0.05952(17)
  2.5    0.65195(17)   0.08357(28)

Table 9: Bulk Properties after Abelian Projection

  beta   Abelian Plaquette   Abelian W(3,3)   rho_m         |curl J_M|
  2.4    0.75071(32)         0.17505(49)      0.02817(5)    0.11078(30)
  2.5    0.79186(25)         0.25940(93)      0.01416(11)   0.05603(42)

Next, we measure the correlations \langle E(r) \rangle defined in equation (3.2.4) and \langle \nabla \times J_M \rangle defined in (3.2.9). Again, we restrict our measurements to the central time slice of the 3 x 3 Wilson loop, which lies, as before, in the z-t plane. We are further restricted to the transverse (x-y) plane midway along the q\bar{q} axis, with area 9 x 9 in lattice units. Our \langle \nabla \times J_M \rangle data are shown in Figure 4.1 for \beta = 2.4 and in Figure 4.2 for \beta = 2.5. Figure 4.3 shows the current J_M calculated from \langle \nabla \times J_M \rangle in a manner we shall describe shortly. Figures 4.4 and 4.5 show our \langle E(r) \rangle data for \beta = 2.4 and 2.5 respectively.

As we can see from Figures 4.1 and 4.2, in contrast to our U(1) study, our SU(2) data show a substantial signal at r = 1.0. That is, instead of the signal crossing zero within one lattice spacing of the q\bar{q} axis (as in U(1)), it crosses zero after one lattice spacing. This seems to indicate that the core region, instead of being within a lattice spacing (or point-like on our scale), is now perceptibly larger. The implication is that the coherence length is no longer zero but a number greater than one, and that, except for regions far from the core, the London equation will no longer apply. Since the Ginzburg-Landau theory of superconductivity allows for the existence of a non-zero coherence length (as we explain below), we attempt to apply its dual version to fit our data.

Recall that the Ginzburg-Landau theory postulates a complex order parameter \psi. The Ginzburg-Landau parameter \kappa = \lambda/\xi distinguishes between the two kinds of superconductors: type-I superconductors correspond to \kappa < 1/\sqrt{2}, and type-II to \kappa > 1/\sqrt{2}. The Ginzburg-Landau equations are

\alpha\psi + \gamma |\psi|^2 \psi + \frac{1}{2m^*} \left( \frac{\hbar}{i}\, \nabla - \frac{e^*}{c}\, A \right)^2 \psi = 0.

Here \alpha and \gamma are temperature-dependent parameters with \gamma > 0, and m^* and e^* are constants. The current density J is

J = \frac{e^* \hbar}{2 m^* i} \left( \psi^* \nabla\psi - \psi \nabla\psi^* \right) - \frac{e^{*2}}{m^* c}\, |\psi|^2 A.

Writing the wavefunction as \psi = \psi_0\, f(r)\, e^{i\theta}, which is appropriate in the presence of an axially symmetric vortex or flux tube, this can be rewritten as

\xi^2 \left[ \frac{1}{r} \frac{d}{dr}\!\left( r \frac{df}{dr} \right) - f \left( \frac{1}{r} - \frac{2\pi A}{\Phi_0} \right)^2 \right] + f - f^3 = 0. \qquad (4.3.1)

Here \Phi_0 is the flux quantum, equal to the total flux enclosed, \Phi_0 = \oint A \cdot dl = 2\pi r A_\infty, where the line integral is over a circle large enough to enclose all the flux. The current density, which has only a \phi-component, is

J = \frac{e^* \hbar}{m^*}\, \psi_0^2\, f^2 \left( \frac{1}{r} - \frac{2\pi A}{\Phi_0} \right). \qquad (4.3.2)

An approximate solution for f over the entire range of r is

f \approx \tanh(r/\xi). \qquad (4.3.3)

We will attempt to fit our data to the dual version of this theory. We use the CERN fitting package MINUIT for all our fits. We have 481 measurements for \beta = 2.4 and 210 measurements for \beta = 2.5.

Take the generalized definition of the fluxoid,

\phi' = \int E \cdot ds - \frac{\lambda^2}{c} \oint \frac{J_M}{f^2} \cdot dl. \qquad (4.3.4)

Again using f \approx \tanh(r/\xi), we obtain a differential form of this equation which, for distances well outside the core, is

E(r) - \frac{\lambda^2}{c}\, \nabla \times \frac{J_M(r)}{f^2(r)} = 0. \qquad (4.3.5)

After working out the curl, this can be rearranged to give an equation that we use as a fitting function for our E(r) data:

2 A.-1 kosech2(k2r)J.M{r) IcftX? x Jm (v)) fr-'ir) = ------o------• (4 .3 .0 ) tanli' ( k’2 r) tanh"( kir) 74

Here k_1 and k_2 are fitting parameters, with k_1 = 1/\lambda and k_2 = 1/\xi. To evaluate J_M(r), we first fit our \nabla \times J data to a simple fitting function f_c:

f_c = 2\pi a_1 \left( e^{-a_2 r} - \frac{c^2}{a_2^2}\, e^{-c r} \right). \qquad (4.3.7)

Here a_1, a_2, and a_3 are fitting parameters, and c = a_2 (1 + a_3^2). (Fitting functions involving the sum of two exponentials are unstable, and c is used to prevent this by making sure that c > a_2 always.) Our data and this fit are shown in Fig. 4.1 for \beta = 2.4 and Fig. 4.2 for \beta = 2.5.

J_M(r) is then evaluated from \nabla \times J_M(r) thus:

J_M(r) = \frac{1}{r} \int_0^r dr'\, r'\, \nabla \times J(r'). \qquad (4.3.8)

This works out to be

J_M(r) = \frac{2\pi a_1}{a_2^2\, r} \left[ (1 + c r)\, e^{-c r} - (1 + a_2 r)\, e^{-a_2 r} \right].

The parameters in equation (4.3.7) are chosen so that J_M(r) is zero at r = 0 and vanishes faster than 1/r as r \to \infty, as seen in Fig. 4.3. (J_M(r) is expected to vanish faster than 1/r because that allows \nabla \times J_M(r) to become negative, as it does in our data.) In the expression for E(r), equation (4.3.6), the functional forms of J_M(r) and \nabla \times J_M(r) are used on the right-hand side. While the fit to \nabla \times J_M(r) uses all our data points, the fit to E(r) excludes the point at r = 0, where the fitting function for E(r) blows up. The fit to E(r) and the data are shown in Fig. 4.4 and Fig. 4.5.
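The integral (4.3.8) applied to a two-exponential curl profile can be checked numerically. In the sketch below, the parameter values are purely illustrative (not fitted thesis parameters), and the relative weight c^2/a_2^2 in the second exponential is an assumption chosen so that the integrated current vanishes at r = 0 and falls off faster than 1/r, as the text requires:

```python
import numpy as np
from scipy.integrate import quad

def f_c(r, a1, a2, c):
    """Two-exponential form for the fitted curl profile, cf. eq. (4.3.7).
    The weight c**2/a2**2 makes int_0^inf r f_c(r) dr vanish, so the
    integrated current falls off faster than 1/r."""
    return 2.0 * np.pi * a1 * (np.exp(-a2 * r) - (c**2 / a2**2) * np.exp(-c * r))

def J_M(r, a1, a2, c):
    """Azimuthal current from eq. (4.3.8): J_M(r) = (1/r) int_0^r r' (curl J)(r') dr'."""
    val, _ = quad(lambda rp: rp * f_c(rp, a1, a2, c), 0.0, r)
    return val / r

a1, a2, a3 = 0.01, 1.5, 0.8      # illustrative values only
c = a2 * (1.0 + a3**2)           # the stabilizing constraint c > a2 from the text
radii = [0.5, 1.0, 2.0, 4.0, 8.0]
current = [J_M(r, a1, a2, c) for r in radii]
```

The numerical integral agrees with the closed form quoted above, and r * J_M(r) indeed tends to zero at large r, so the curl can change sign as observed in the data.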

Before presenting the results we note that one more piece of information can be obtained from this analysis. We can check for fluxoid quantization as follows. The flux quantum is given by

[Figure 4.1: −(∇ × J_M) Versus Transverse Distance for β = 2.4]

[Figure 4.2: −(∇ × J_M) Versus Transverse Distance for β = 2.5]

[Figure 4.3: J_M Versus Transverse Distance for β = 2.4]

[Figure 4.4: Electric Flux Versus Transverse Distance for β = 2.4]

[Figure 4.5: Electric Flux Versus Transverse Distance for β = 2.5]

Now what we measure to be ∇ × J_M(r) is actually a line integral ∮ J · dl around an elementary plaquette, as discussed in Chapter 3, section 2. This corresponds to a circular path C whose radius in lattice units is given by πr² = 1. Note also that in our measurement of E(r), we are really measuring the flux of E(r) through a plaquette, and we somewhat arbitrarily assign the center of the plaquette as the argument of E(r). In other words, E(0) does not necessarily measure the electric field at the point r = 0. Rather, it can be taken to be the value of the field smeared out over the area of the plaquette.

The above equation becomes

\phi_0 \approx E(0) + 2\pi r\, \lambda^2\, \frac{J_M(r)}{f^2(r)},

which, for r = 1/\sqrt{\pi}, reduces to

\phi_0 \approx E(0) + 2\sqrt{\pi}\, \lambda^2 \left. \frac{J_M(r)}{f^2(r)} \right|_{r = 1/\sqrt{\pi}}. \quad (4.3.10)

This value can then be compared with the fluxoid value obtained directly from the data by summing all E(r) values in our 9 × 9 data array. This is because at sufficiently large distances from the core, the second term in the equation for the fluxoid, λ² ∇ × (J_M/f²), will vanish, so that the fluxoid is nothing but the total electric flux over the entire surface area.
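This direct fluxoid estimate is then nothing more than an array sum over the transverse plane. A sketch, with a uniform 9 × 9 array of illustrative values (not our data) chosen so the total is 0.176:

```python
import numpy as np

def fluxoid_from_flux(e_plaq):
    """Total electric flux through the transverse plane.

    Far from the core the curl term of the fluxoid vanishes, so summing
    the per-plaquette flux over the whole 9 x 9 array approximates the
    fluxoid directly (each plaquette has unit area in lattice units).
    """
    return float(np.sum(e_plaq))

e = np.full((9, 9), 0.176 / 81.0)   # illustrative stand-in values only
```

The result can be compared directly with the fitted flux quantum from equation (4.3.10).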

In a preliminary analysis we find that the uncertainties in the parameters from MINUIT are very small (in some cases less than 1%), and it is unlikely that they give a realistic idea of the actual spread in the parameters. Also, the values of χ² are about 250–3000 for the fit to ⟨E⟩ and 30–40 for the ⟨∇ × J_M⟩ fit. At first sight these may seem unacceptable, but there are two reasons why χ² fails as a good measure of the quality of the fit. Firstly, the statistical errors, particularly in the ⟨E⟩ data, are very small, pushing up χ². Secondly, there are lattice artifacts, seen in the scatter of data points, particularly in the ⟨∇ × J_M⟩ data, which we shall discuss shortly. This is a consequence of the fact that rotational symmetry is broken on the lattice, and yet we are attempting to fit our data to a function which does possess rotational symmetry. Visually, the fits are reasonable, and it would be unrealistic to expect better, considering the relatively low number of measurements and the fact that the fitting function itself is an approximation to the true solution.
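For reference, the χ² quoted here is the standard sum of squared residuals over statistical errors, which is why tiny errors inflate it even for a visually reasonable fit. A one-function sketch (the arrays in the check are hypothetical):

```python
import numpy as np

def chi_squared(data, model, sigma):
    """chi^2 = sum over points of ((data - model) / statistical error)^2.

    Very small statistical errors drive chi^2 up even when the residuals
    themselves are small, which is the situation described above.
    """
    return float(np.sum(((data - model) / sigma) ** 2))
```

Dividing by the number of degrees of freedom gives the reduced χ² if a per-point figure is wanted instead.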

To get a realistic estimate of the errors, we divide our data set into 4 equal parts of 120 measurements each, and analyse each set independently.

We then take the averages of the physical quantities from each data set, and calculate the standard deviation, which is more indicative of the true error.
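This subset procedure amounts to taking the mean and the sample standard deviation over the four independent fits. For example, applied to the λ column of Table 11 (the function name is ours):

```python
import numpy as np

def subset_error(values):
    """Average the results of independently analysed subsets.

    The sample standard deviation (ddof=1) over the subsets is taken as
    the error estimate, which is more realistic than the fitter's own
    parameter uncertainties.
    """
    v = np.asarray(values, dtype=float)
    return v.mean(), v.std(ddof=1)

# Penetration depth lambda from the four subsets of Table 11
lam_mean, lam_err = subset_error([1.1101, 1.0696, 0.8782, 1.1505])
```

This reproduces the quoted λ = 1.05 ± 0.12.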

Below are the results from such an analysis.

Table 10: Parameters for ⟨∇ × J_M⟩ Fit

Set   a_1        a_2     a_3
1     -0.00285   1.019   0.8240
2     -0.00349   1.061   0.7527
3     -0.01239   1.193   0.4210
4     -0.00178   0.936   0.9799
all   -0.00275   1.009   0.8251

Table 11: Properties of the SU(2) Dual Superconductor

Set   λ        ξ        φ_0      E_sum
1     1.1101   1.4945   0.1745   0.1765
2     1.0696   1.4799   0.1599   0.1719
3     0.8782   1.3390   0.1025   0.1788
4     1.1505   1.6232   0.2027   0.1776
all   1.1529   1.3959   0.1644   0.1763

We now take the data from the first four rows of these tables and calculate averages and standard deviations. We obtain

φ_0 = 0.16 ± 0.04

E_sum = 0.176 ± 0.003

λ = 1.05 ± 0.12

ξ = 1.48 ± 0.12

The Ginsburg-Landau parameter is κ = λ/ξ = 0.7 ± 0.1, where the uncertainty is obtained by propagating the errors in λ and ξ.
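The quoted uncertainty on κ follows from the standard propagation of uncorrelated errors, σ_κ/κ = sqrt((σ_λ/λ)² + (σ_ξ/ξ)²); a minimal sketch (function name ours):

```python
import math

def gl_parameter(lam, dlam, xi, dxi):
    """kappa = lambda / xi, with uncorrelated relative errors
    added in quadrature."""
    kappa = lam / xi
    dkappa = kappa * math.sqrt((dlam / lam) ** 2 + (dxi / xi) ** 2)
    return kappa, dkappa

# Values from the subset analysis above
kappa, dkappa = gl_parameter(1.05, 0.12, 1.48, 0.12)
```

This reproduces κ = 0.7 ± 0.1.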

The last row of Tables 10 and 11 is the result of analysis on the 481 measurements as a whole. It is encouraging that the values of λ, ξ, φ_0 and κ are equal within uncertainty to the averages calculated above.

For β = 2.5, the limitations of computer time allow us only 210 measurements, which are analysed as a whole and give us the following results. MINUIT errors are not quoted.

a_1 = -0.00105

a_3 = 0.8334

a_2 = 1.0066

λ = 1.8958

ξ = 1.4078

φ_0 = 0.282

E_sum = 0.252

As we attempt to go to the continuum limit, we expect that these parameters will change. In that case, whether the corresponding superconductor is type-I or type-II remains to be seen. Also, these parameters will change as we go to larger Wilson loop sizes. 3 × 3 loops represent quarks that are too close for the flux tube to be really well defined. Since our work is more in the nature of a search for the correct correlation functions than a study of the details, these relatively small Wilson loops suffice. Within these limitations we see that our attempt to fit our data to the dual Ginsburg-Landau theory succeeds reasonably well, surprisingly so, considering the approximate nature of our approach.

In spite of the preliminary nature of this analysis, we have demonstrated the Meissner effect (and hence validated the dual superconductor hypothesis) for SU(2) lattice gauge theory, and also succeeded in gleaning some details of the mechanism.

SUMMARY

The aim of this investigation has been to test the dual superconductor hypothesis of confinement within the framework of lattice gauge theory. The dual superconductor hypothesis is, to date, the most viable mechanism for quark confinement, but it cannot be easily investigated analytically because the low-energy regime of the strong interactions is highly non-perturbative. This is where lattice gauge theory becomes useful as a naturally non-perturbative calculational tool. In the two decades since 't Hooft and Mandelstam first proposed the dual superconductor mechanism, much lattice work has been done, accumulating an impressive body of mainly circumstantial evidence in its favor. Our demonstration of the Meissner effect in U(1), which could have, from a technical standpoint, been done soon after the work of DeGrand and Toussaint, is an unambiguous validation of the dual superconductor hypothesis within the limitations of our lattice investigation. We have shown that U(1) lattice gauge theory has a vacuum identical to a dual, extreme type-II superconductor in which the magnetic monopoles screen external electric flux, and both the dual London relation and the electric fluxoid quantization condition are satisfied. We have implemented 't Hooft's Abelian projection for SU(2) lattice gauge theory and performed a similar investigation that, while confirming a Meissner effect, yielded results qualitatively different from the U(1) case. It seems that the vacuum of the SU(2) lattice gauge theory is like a superconductor on the borderline between type-I and type-II, having a coherence length detectable on the scale of our lattice. The coherence length is also comparable to the penetration depth. We have demonstrated that the fluxoid is quantised within the uncertainties of our investigation. One remarkable aspect of this study is that we obtain reasonably clear results in spite of relatively few measurements.

The results of this investigation are something of a breakthrough in the lattice study of the confinement mechanism. Also, it is now clear how to proceed to the full non-Abelian theory, SU(3), that describes quarks. This is the next important step in the lattice study of confinement. But even for U(1) and SU(2) lattice gauge theory, the picture is far from complete. Aside from the problems inherent in any lattice simulation, one of the drawbacks of this investigation is that we have considered rather small Wilson loops of size 3 × 3. There is the likelihood that our results are contaminated by excited quark states that exist during the creation and annihilation process. Ideally a larger Wilson loop, of size at least 5 × 5, should be used. This is difficult to do in U(1) gauge theory, since the signal to noise ratio is very small for Wilson loops larger than 3 × 3. In the non-Abelian theories large Wilson loops give a clear signal, but this is offset by the time-consuming gauge-fixing process.

There are also questions that arise in the interpretation of any lattice result. First, it is not necessary that a result obtained at a finite lattice spacing will still be valid in the continuum limit, assuming a continuum limit exists. Taking the continuum limit in the correct fashion involves a study of scaling (calculating with different values of β) and attempting to extrapolate to the continuum. This is a highly non-trivial computational project. We also need to resolve some conflicting evidence from studies like those of Ref. [39], which appear to indicate that the unit cube monopoles may not be the correct physical quantities. Also, it is important to investigate whether the quantities we have measured are gauge independent, which they ought to be if they are physical quantities [40]. The dual superconductor hypothesis should also be able to account for the features of the flux tube gleaned from various lattice studies [13,14]. Such questions need to be resolved before we can say that the confinement mechanism on the lattice is fully understood.

Our method paves the way for further studies of confinement on the lattice. Our results, in particular estimates of the penetration depth and coherence length, will serve as constraints in other models of confinement.

REFERENCES

[1] D. H. Perkins, Introduction to High Energy Physics, Addison-Wesley, 1982; Lewis H. Ryder, Elementary Particles and Symmetries, Gordon and Breach Science Publishers, 1986.

[2] ‘Review of Particle Properties’, Rev. Mod. Phys. 56, April 1984.

[3] K. Gottfried and V. Weisskopf, Concepts of Particle Physics , Oxford Uni­ versity Press, 1984.

[4] M. Gell-Mann and Y. Ne'eman, The Eightfold Way, Benjamin, 1964.

[5] G. Zweig, CERN Report 8419/Th 412, 1964.

[6] E. D. Bloom et al., Phys. Rev. Lett. 23, 930 (1969); M. Breidenbach et al., Phys. Rev. Lett. 23, 935 (1969).

[7] L. W. Jones, Rev. Mod. Phys. 49, 717 (1977).

[8] G. LaRue et al., Phys. Rev. Lett. 38, 1011 (1977); 46, 967 (1981).

[9] F. Close, An Introduction to Quarks and Partons , Academic Press, 1979.

[10] M. Tinkham, Introduction to Superconductivity, Robert E. Krieger Pub­ lishers, 1975.

[11] S. Mandelstam, Phys. Rep. 23C, 245 (1976); G. 't Hooft, in High Energy Physics, ed. A. Zichichi (Editrice Compositori, Bologna, 1976); Nucl. Phys. B190[FS3], 455 (1981).

[12] K. Wilson, Phys. Rev. D10, 2445 (1974).

[13] R. W. Haymaker and J. Wosiek, Phys. Rev. D43, 2676 (1991); R. Sommer, Nucl. Phys. B306, 180 (1988).

[14] W. Burger, M. Faber, W. Feilmar, H. Markum and M. Muller, Nucl. Phys. B20(Proc. Suppl.), 203 (1991).

[15] A. M. Polyakov, Phys. Lett. 59B , 82 (1975).

[16] T. Banks, R. J. Myerson and J. Kogut, Nucl. Phys. B129, 493 (1977).

[17] T. A. DeGrand and D. Toussaint, Phys. Rev. D22, 2478 (1980).

[18] J. S. Barber, Phys. Lett. 147B, 330 (1984).

[19] P. Cea and L. Cosmai, Phys. Lett. 249B, 114 (1990).


[20] R. J. Wensley and J. D. Stack, Phys. Rev. Lett. 63, 1764 (1989).

[21] A. S. Kronfeld, M. L. Laursen, G. Schierholz and U.-J. Wiese, Phys. Lett. 198B, 516 (1987); Nucl. Phys. B293, 461 (1987).

[22] F. Brandstaeter, G. Schierholz, U.-J. Wiese, Phys. Lett. 272B, 319 (1991).

[23] V. G. Bornyakov et al., Amsterdam University preprint ITFA-90-22.

[24] T. Suzuki and I. Yotsuyanagi, Phys. Rev. D42, 4257 (1990).

[25] J. S. Barber, R. E. Shrock and R. Schrader, Phys. Lett. 152B, 221 (1985); M. Teper, Phys. Lett. 171B, 86 (1986).

[26] F. London and H. London, Physica 2, 341 (1935).

[27] H. Rothe, Lattice Gauge Theories: An Introduction, World Scientific, 1992.

[28] J. Kogut, Rev. Mod. Phys. 55, 775 (1983).

[29] M. Creutz, Quarks, Gluons and Lattices , Cambridge University Press, 1983.

[30] B. Lautrup and M. Nauenburg, Phys. Lett. 95B, 64 (1980).

[31] F. Gutbrod, Z. Phys. 30C, 585 (1986).

[32] V. Azcoiti, A. Cruz, E. Dagotto, A. Moreo, A. Lugo, Phys. Lett. 175B, 202 (1986); J. Kogut and E. Dagotto, Phys. Rev. Lett. 59, 617 (1987).

[33] J. Kapusta, Finite Temperature Field Theory, Cambridge University Press, 1989.

[34] K. G. Wilson and J. Kogut, Phys. Rep. 12C, 75 (1974).

[35] M. Creutz, Phys. Rev. Lett. 45, 313 (1980).

[36] H. Gould and J. Tobochnik, Computer Simulation Methods, Addison-Wesley Publishing Company, 1988.

[37] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller and E. Teller, J. Chem. Phys. 21, 1087 (1953).

[38] M. Creutz, 'Monte Carlo Simulations for Lattice Gauge Theory', talk given at the Symposium on Lattice Gauge Theory Using Parallel Processors, Beijing, May 1987; 'Overrelaxation and Monte Carlo Simulations', Brookhaven National Lab. preprint 39445.

[39] T. L. Ivanenko, A. V. Pochinsky and M. I. Polykarpov, Phys. Lett. 252B, 633 (1990).

[40] S. Hioki et al., Phys. Lett. 271B, 201 (1991).

VITA

The author was born in December, 1962 in New Delhi, India. She attended Mater Dei School and then did undergraduate work in Physics at Delhi University. She got her Master's degree in Physics from Delhi University and then came to Louisiana State University in Fall, 1985, for her doctorate in Physics.

DOCTORAL EXAMINATION AND DISSERTATION REPORT

Candidate: Vandana Singh

Major Field: Physics

Title of Dissertation: Monopoles and Confinement in Lattice Gauge Theory

Approved

Major Professor and Chairman

Date of Examination:

November 2, 1992