
Permutations and Determinants

(Dated: September 16, 2021)

I. SYMMETRIES OF MANY-PARTICLE FUNCTIONS

Since electrons are fermions, the electronic wave functions have to be antisymmetric. This chapter will show how to achieve this goal. The notion of antisymmetry is related to permutations of electrons' coordinates. Therefore we will start with a discussion of the permutation group and then introduce the permutation-group-based definition of the determinant, the zeroth-order approximation to the wave function in the theory of many fermions. This definition, in contrast to the one based on the Laplace expansion, relates clearly to the properties of fermionic wave functions. The determinant gives an $N$-particle wave function built from a product of $N$ one-particle wave functions and is called the Slater determinant.

II. PERMUTATION (SYMMETRIC) GROUP

Definition of permutation group: The permutation group, known also under the name of the symmetric group, is the group of all operations on a set of $N$ distinct objects that order the objects in all possible ways. The group is denoted as $S_N$ (we will show below that it is indeed a group). We will call these operations permutations and denote them by symbols $\sigma_i$.

For a set consisting of the numbers $1, 2, \dots, N$, the permutation $\sigma_i$ orders these numbers in such a way that the number $k$ ends up at the $j$th position. Often a better way of looking at permutations is to say that permutations are all mappings of the set $\{1, 2, \dots, N\}$ onto itself: $\sigma_i(k) = j$, where $j$ has to go over all elements.

Number of permutations: The number of permutations is $N!$. Indeed, we can first place any of the objects at position 1, so there are $N$ possible placements. For each case, we can place one of the remaining $N-1$ objects at the second position, so that the number of possible arrangements is now $N(N-1)$. Continuing in this way, we prove the theorem.

Example: For three numbers 1, 2, 3, there are the following $3! = 6$ arrangements: 123, 132, 213, 231, 312, 321.

Notation: One can use the following "matrix notation" to denote permutations:
$$\sigma = \begin{pmatrix} 1 & 2 & \dots & k & \dots & N \\ \sigma(1) & \sigma(2) & \dots & \sigma(k) & \dots & \sigma(N) \end{pmatrix}$$

The order of columns in the matrix above is convenient, but note that if the columns were ordered differently, this would still be the same permutation. An example of a permutation in this notation is
$$\sigma = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 3 & 4 & 1 & 2 \end{pmatrix}$$

Another way of writing a permutation is to include only the second row. An example of a permutation in this notation is $\sigma = (3412)$.
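As an aside (not part of the original notes), the one-line notation translates directly into code. Below is a minimal Python sketch, assuming permutations are represented as 0-based tuples, that enumerates $S_N$ and confirms that its size is $N!$:

```python
# Minimal sketch: permutations in one-line notation as Python tuples,
# using 0-based indices instead of the 1-based indices of the text.
from itertools import permutations
from math import factorial

N = 4
S_N = list(permutations(range(N)))   # all orderings of (0, 1, ..., N-1)

assert len(S_N) == factorial(N)      # the number of permutations is N!

sigma = (2, 3, 0, 1)                 # the example (3412), shifted to 0-based
print(sigma in S_N)                  # True
```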

Multiplication: We define the operation of multiplication within the set of permutations

as $(\sigma \circ \sigma')(k) = \sigma(\sigma'(k))$. For example, if
$$\sigma' = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 3 & 4 & 1 & 2 \end{pmatrix} \qquad \sigma = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 2 & 4 & 3 & 1 \end{pmatrix}$$
then
$$\sigma \circ \sigma' = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 3 & 1 & 2 & 4 \end{pmatrix}.$$
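Composition is a one-line function; the following sketch (same 0-based tuple convention as above, not from the original notes) reproduces this example:

```python
# sigma o sigma' means: apply sigma' first, then sigma.
def compose(sigma, sigma_p):
    """Return the permutation k -> sigma(sigma_p(k))."""
    return tuple(sigma[sigma_p[k]] for k in range(len(sigma)))

sigma_p = (2, 3, 0, 1)  # (3 4 1 2) in 1-based one-line notation
sigma = (1, 3, 2, 0)    # (2 4 3 1)

print(compose(sigma, sigma_p))  # (2, 0, 1, 3), i.e. (3 1 2 4) in 1-based form
```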

Symmetric group: We can now check that these operations satisfy the group postulates (a numerical check is sketched after this list):

• Closure: $\sigma \circ \sigma' \in S_N$. The proof is obvious since the product of permutations maps each number to a number from the set, and therefore is a permutation.

• Existence of unity $I$: this is the permutation $\sigma_I(k) = k$.

• Existence of inverse, i.e., for each $\sigma$ there exists $\sigma^{-1}$ such that $\sigma \circ \sigma^{-1} = I$. Clearly, the inverse can be defined such that if $\sigma(k) = j$, then $\sigma^{-1}(j) = k$.

• Multiplications are associative:
$$\sigma_3 \circ (\sigma_2 \circ \sigma_1) = (\sigma_3 \circ \sigma_2) \circ \sigma_1.$$

Proof is in a homework problem.
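All four postulates can be verified by brute force for small $N$. A sketch (not from the notes), reusing the tuple representation and compose() from above:

```python
# Brute-force check of the group postulates for S_3.
from itertools import permutations

def compose(s, t):
    return tuple(s[t[k]] for k in range(len(s)))

def inverse(s):
    inv = [0] * len(s)
    for k, j in enumerate(s):
        inv[j] = k               # if s(k) = j then s^{-1}(j) = k
    return tuple(inv)

N = 3
S = list(permutations(range(N)))
identity = tuple(range(N))

for s in S:
    assert compose(s, identity) == s                # unity
    assert compose(s, inverse(s)) == identity       # inverse
    for t in S:
        assert compose(s, t) in S                   # closure
        for u in S:                                  # associativity
            assert compose(u, compose(t, s)) == compose(compose(u, t), s)
print("group postulates hold for S_3")
```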

$\sigma \circ S_N = S_N$: One important theorem resulting from these definitions is that the set of products of a single permutation with all elements of $S_N$ is equal to $S_N$:

$$\sigma \circ S_N = S_N.$$

Proof: Due to closure, the only possibility of not reproducing the whole group is that two different elements of SN are mapped by σ onto the same element:

$$\sigma \circ \sigma' = \sigma''' = \sigma \circ \sigma''.$$

Multiplying this equation by $\sigma^{-1}$, we get $\sigma' = \sigma''$, which contradicts our assumption.

$\{\sigma^{-1}\} = S_N$: Another theorem states that $\{\sigma^{-1}\} = S_N$. This is equivalent to saying that $\sigma$ and $\sigma^{-1}$ are in one-to-one correspondence. Indeed, assume that there are two permutations that are inverse to $\sigma$: $\sigma_1 \circ \sigma = I = \sigma_2 \circ \sigma$. Multiplying this by $\sigma^{-1}$ from the right, we get that $\sigma_1 = \sigma_2$.
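Both theorems are easy to confirm numerically; a sketch under the same tuple conventions (compose() and inverse() as above):

```python
# Check sigma o S_N = S_N and {sigma^{-1}} = S_N for S_4.
from itertools import permutations

def compose(s, t):
    return tuple(s[t[k]] for k in range(len(s)))

def inverse(s):
    inv = [0] * len(s)
    for k, j in enumerate(s):
        inv[j] = k
    return tuple(inv)

S = set(permutations(range(4)))

for sigma in S:
    assert {compose(sigma, t) for t in S} == S   # sigma o S_N = S_N
assert {inverse(s) for s in S} == S              # {sigma^{-1}} = S_N
print("both theorems hold for S_4")
```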

Transpositions: A transposition is the simplest possible permutation other than $\sigma_I$, i.e., a permutation involving only two elements:
$$\tau = \tau_{ij} = (ij) = \begin{cases} \sigma(i) = j \\ \sigma(j) = i \\ \sigma(k) = k \ \ \text{for } k \neq i,j \end{cases} = \begin{pmatrix} 1 & 2 & \dots & i & \dots & j & \dots & N \\ 1 & 2 & \dots & j & \dots & i & \dots & N \end{pmatrix}.$$

Permutation as product of transpositions: One important property of permutations is that each permutation can be written as a product of transpositions. To prove that any permutation can be written as a product of transpositions, we just construct such a product. For a permutation $\sigma$ written as
$$\sigma = \begin{pmatrix} 1 & 2 & \dots & k & \dots & N \\ i_1 & i_2 & \dots & i_k & \dots & i_N \end{pmatrix} \qquad (1)$$
first find $i_1$ in the set $\{1, 2, \dots, N\}$ and then transpose it with 1 (unless $i_1 = 1$, in which case do nothing). This maps $i_1 \leftrightarrow 1$. Then consider the set with $i_1$ removed, find $i_2$, and transpose it with the number in the second position (it will be 2 if the first transposition did not affect this place). Continuing in this way, we get the mapping of expression (1), which proves the theorem. As an example, consider
$$\sigma = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 2 & 4 & 3 & 1 \end{pmatrix}.$$

We first look for $i_1 = 2$ and then transpose it with 1:
$$1234 \ \xrightarrow{\tau_{12}} \ 2134 \ \xrightarrow{\tau_{14}} \ 2431$$

where in the second step we looked for $i_2 = 4$ and transposed it with the element in the second position, i.e., with 1. Then we looked for $i_3 = 3$ and did nothing, and similarly for $i_4 = 1$. Therefore, $\sigma$ can be written as
$$\sigma = \tau_{14} \circ \tau_{12}.$$
Let us check explicitly that the right-hand side indeed gives $\sigma$:

$$\sigma(1) = \tau_{14}(\tau_{12}(1)) = \tau_{14}(2) = 2; \qquad \sigma(2) = \tau_{14}(\tau_{12}(2)) = \tau_{14}(1) = 4;$$
$$\sigma(3) = \tau_{14}(\tau_{12}(3)) = \tau_{14}(3) = 3; \qquad \sigma(4) = \tau_{14}(\tau_{12}(4)) = \tau_{14}(4) = 1.$$

The decomposition of a permutation into transpositions is not unique, as we can always insert $\tau_{ij} \circ \tau_{ij} = I$.
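The constructive procedure above is essentially a selection sort. A Python sketch (0-based tuples as before; decompose() is my naming, not the notes'):

```python
# Decompose sigma into transpositions by putting the correct value at each
# position in turn; sigma = tau_r o ... o tau_1, with tau_1 applied first.
def decompose(sigma):
    arr = list(range(len(sigma)))          # start from the identity arrangement
    taus = []
    for k, target in enumerate(sigma):     # target = i_k must end up at position k
        if arr[k] != target:
            taus.append((arr[k], target))  # transpose the two *values*
            i = arr.index(target)
            arr[k], arr[i] = arr[i], arr[k]
    return taus

def tau(i, j, n):                          # the transposition tau_ij as a tuple
    t = list(range(n))
    t[i], t[j] = j, i
    return tuple(t)

def compose(s, t):
    return tuple(s[t[k]] for k in range(len(s)))

sigma = (1, 3, 2, 0)                       # the example (2 4 3 1), 0-based
print(decompose(sigma))                    # [(0, 1), (0, 3)]: tau_12 then tau_14

prod = tuple(range(4))
for i, j in decompose(sigma):
    prod = compose(tau(i, j, 4), prod)     # later transpositions act on the left
assert prod == sigma                       # sigma = tau_14 o tau_12, as in the text
```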

Parity of permutation: Although the number of transpositions in a decomposition is not unique, this number is, for a given permutation, always even or always odd. The proof of this important theorem is given as a homework. Thus, $(-1)^{\pi_\sigma}$, where $\pi_\sigma$ is the number of transpositions in an arbitrary decomposition, is always 1 or $-1$ for a given permutation, and we can classify each permutation as either odd or even. We say that each permutation has a definite parity.

Parity of inverse permutation: One theorem concerning the parity of permutations is that $(-1)^{\pi_\sigma} = (-1)^{\pi_{\sigma^{-1}}}$, i.e., that a permutation and its inverse have the same parity. To prove it, first note that
$$(\sigma_1 \circ \sigma_2)^{-1} = \sigma_2^{-1} \circ \sigma_1^{-1},$$
which is obvious if we multiply on the left with $\sigma_1 \circ \sigma_2$. We can write $\sigma$ as
$$\sigma = \tau_1 \circ \tau_2 \circ \cdots \circ \tau_r.$$

Then
$$\sigma^{-1} = \tau_r^{-1} \circ \tau_{r-1}^{-1} \circ \cdots \circ \tau_1^{-1}.$$
Thus, $\pi_\sigma = \pi_{\sigma^{-1}}$. Note that $\tau_{ij}^{-1} = \tau_{ij} = \tau_{ji}$, but this is not needed for the proof.
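A quick numerical confirmation of the equal-parity theorem (a sketch reusing decompose() and inverse() from the earlier snippets):

```python
# Check that parity(sigma) == parity(sigma^{-1}) for every element of S_4.
from itertools import permutations

def decompose(sigma):
    arr, taus = list(range(len(sigma))), []
    for k, target in enumerate(sigma):
        if arr[k] != target:
            taus.append((arr[k], target))
            i = arr.index(target)
            arr[k], arr[i] = arr[i], arr[k]
    return taus

def inverse(sigma):
    inv = [0] * len(sigma)
    for k, j in enumerate(sigma):
        inv[j] = k
    return tuple(inv)

def parity(sigma):                       # (-1)**pi_sigma
    return (-1) ** len(decompose(sigma))

assert all(parity(s) == parity(inverse(s)) for s in permutations(range(4)))
print("a permutation and its inverse have the same parity (S_4)")
```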

III. DETERMINANT

Definition of determinant: For a general $N \times N$ matrix $A$ with elements $a_{ij}$, the determinant is defined as

$$\det A = |A| \equiv \begin{vmatrix} a_{11} & a_{12} & \dots & a_{1N} \\ a_{21} & a_{22} & \dots & a_{2N} \\ \vdots & \vdots & & \vdots \\ a_{N1} & a_{N2} & \dots & a_{NN} \end{vmatrix} = \sum_{\sigma} (-1)^{\pi_\sigma}\, a_{\sigma(1)1}\, a_{\sigma(2)2} \dots a_{\sigma(N)N} \qquad (2)$$

where the sum is over all permutations of the numbers 1 to $N$ and $\pi_\sigma$ is the parity of the permutation. There are several important theorems involving determinants that we will now prove.
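Definition (2) can be coded directly; a sketch (not from the notes) that sums over all of $S_N$ and compares the result with numpy's determinant:

```python
# Determinant via the sum over permutations, Eq. (2).
from itertools import permutations
import numpy as np

def parity(sigma):
    """(-1)**pi_sigma, counting the transpositions that sort sigma."""
    arr, swaps = list(sigma), 0
    for k in range(len(arr)):
        if arr[k] != k:
            i = arr.index(k)
            arr[k], arr[i] = arr[i], arr[k]
            swaps += 1
    return -1 if swaps % 2 else 1

def det_leibniz(a):
    n = a.shape[0]
    return sum(parity(s) * np.prod([a[s[k], k] for k in range(n)])
               for s in permutations(range(n)))

a = np.random.rand(4, 4)
print(np.isclose(det_leibniz(a), np.linalg.det(a)))  # True
```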

Determinant is invariant under matrix transposition: We will show that

$$|A| = |A^T|,$$

which also means that the definition (2) can be written as
$$|A| = \sum_{\sigma} (-1)^{\pi_\sigma}\, a_{1\sigma(1)}\, a_{2\sigma(2)} \dots a_{N\sigma(N)}. \qquad (3)$$

Proof. Using the notation $A^T \equiv \{a'_{ij}\}$, we can write the definition (2) as
$$|A^T| = \sum_{\sigma} (-1)^{\pi_\sigma}\, a'_{\sigma(1)1}\, a'_{\sigma(2)2} \dots a'_{\sigma(N)N} = \sum_{\sigma} (-1)^{\pi_\sigma}\, a_{1\sigma(1)}\, a_{2\sigma(2)} \dots a_{i\sigma(i)} \dots a_{N\sigma(N)}$$
(since $a'_{ij} = a_{ji}$). Note that the term with a given $\sigma$ in the equation written above is not the same as the term with the same $\sigma$ in Eq. (2), since $a_{ij} \neq a_{ji}$. Let us reshuffle the factors in each term of the last equation in such a way that the factor with $\sigma(i) = 1$ is in the first position (we can do it since the product does not depend on the order of factors). There must be exactly one such factor $a_{i\sigma(i)}$, since $\sigma(i)$ runs over the numbers $1, 2, \dots, N$. Denote this value of $i$ in a given term by $i_1$, so that $a_{i\sigma(i)} \equiv a_{i_1 1}$, and move this factor to the first position in the product to get
$$a_{1\sigma(1)}\, a_{2\sigma(2)} \dots a_{i_1 1} \dots a_{N\sigma(N)} = a_{i_1 1}\, a_{1\sigma(1)}\, a_{2\sigma(2)} \dots a_{i_1-1\,\sigma(i_1-1)}\, a_{i_1+1\,\sigma(i_1+1)} \dots a_{N\sigma(N)}.$$

Next, look for $\sigma(i) = 2$, denote this $i$ by $i_2$, so that $\sigma(i_2) = 2$, and move $a_{i_2 2}$ to the second position in the product. Going forward, we move the factor with $\sigma(i) = k = \sigma(i_k)$, i.e., $a_{i\sigma(i)} \equiv a_{i_k k}$. Continuing, one eventually gets
$$a_{1\sigma(1)}\, a_{2\sigma(2)} \dots a_{N\sigma(N)} = a_{i_1 1}\, a_{i_2 2} \dots a_{i_k k} \dots a_{i_N N}. \qquad (4)$$

Since $k = \sigma(i_k)$, we have $i_k = \sigma^{-1}(k)$. Therefore, we can write
$$|A^T| = \sum_{\sigma} (-1)^{\pi_\sigma}\, a_{\sigma^{-1}(1)1}\, a_{\sigma^{-1}(2)2} \dots a_{\sigma^{-1}(i)i} \dots a_{\sigma^{-1}(N)N}.$$
Therefore, if we sum all possible terms on the right-hand side of Eq. (4), we sum over all permutations of $S_N$ (as shown earlier, $\{\sigma^{-1}\} = S_N$). The only remaining issue is the sign. The sign is right since we have proved that the parity of $\sigma$ and $\sigma^{-1}$ is the same. This completes the proof.
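A numerical sanity check of the theorem and of the equivalence of the column form (2) and the row form (3) (a sketch; parity() as in the previous snippet):

```python
# Check that the column form (2), the row form (3), and |A^T| all agree.
from itertools import permutations
import numpy as np

def parity(sigma):
    arr, swaps = list(sigma), 0
    for k in range(len(arr)):
        if arr[k] != k:
            i = arr.index(k)
            arr[k], arr[i] = arr[i], arr[k]
            swaps += 1
    return -1 if swaps % 2 else 1

def det_cols(a):  # Eq. (2): a[sigma(k), k]
    n = a.shape[0]
    return sum(parity(s) * np.prod([a[s[k], k] for k in range(n)])
               for s in permutations(range(n)))

def det_rows(a):  # Eq. (3): a[k, sigma(k)]
    n = a.shape[0]
    return sum(parity(s) * np.prod([a[k, s[k]] for k in range(n)])
               for s in permutations(range(n)))

a = np.random.rand(4, 4)
print(np.isclose(det_cols(a), det_rows(a)),      # (2) == (3)
      np.isclose(det_cols(a), det_cols(a.T)))    # |A| == |A^T|
```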

Interchange of columns: The next important theorem says that if one interchanges two columns (or rows) in a determinant, the value of the determinant changes sign

$$|A_{i \leftrightarrow j}| = -|A|,$$
where $A_{i \leftrightarrow j}$ denotes the matrix with such an interchange.

Proof. We can assume without loss of generality that $i < j$. Denote:
$$A = \{a_{kl}\}, \qquad A_{i \leftrightarrow j} = \{a'_{kl}\},$$

$$a'_{kl} = a_{kl} \quad \text{if } l \neq i,j, \qquad k = 1, \dots, N \qquad (5)$$

$$a'_{ki} = a_{kj}, \quad a'_{kj} = a_{ki}, \qquad k = 1, \dots, N. \qquad (6)$$

We will expand $|A|$ and $|A_{i \leftrightarrow j}|$ and try to identify identical terms in both expansions, ignoring the sign for now. The expansion of $|A|$ is
$$|A| = \sum_{\sigma} (-1)^{\pi_\sigma}\, a_{\sigma(1)1}\, a_{\sigma(2)2} \dots a_{\sigma(i)i} \cdots a_{\sigma(j)j} \cdots a_{\sigma(N)N}. \qquad (7)$$

The expansion of $|A_{i \leftrightarrow j}|$ is
$$|A_{i \leftrightarrow j}| = \sum_{\tilde\sigma} (-1)^{\pi_{\tilde\sigma}}\, a'_{\tilde\sigma(1)1}\, a'_{\tilde\sigma(2)2} \cdots a'_{\tilde\sigma(i)i} \cdots a'_{\tilde\sigma(j)j} \cdots a'_{\tilde\sigma(N)N},$$
where we use the tilde to distinguish the two summations, but both $\sigma$ and $\tilde\sigma$ run over the same $S_N$. Using Eqs. (5) and (6), we can rewrite this equation in terms of the matrix elements of $A$:
$$|A_{i \leftrightarrow j}| = \sum_{\tilde\sigma} (-1)^{\pi_{\tilde\sigma}}\, a_{\tilde\sigma(1)1}\, a_{\tilde\sigma(2)2} \cdots a_{\tilde\sigma(i)j} \cdots a_{\tilde\sigma(j)i} \cdots a_{\tilde\sigma(N)N},$$
where the order of elements is unchanged, but it is convenient to have $a_{\tilde\sigma(i)j}$ and $a_{\tilde\sigma(j)i}$ switch places:
$$|A_{i \leftrightarrow j}| = \sum_{\tilde\sigma} (-1)^{\pi_{\tilde\sigma}}\, a_{\tilde\sigma(1)1}\, a_{\tilde\sigma(2)2} \cdots a_{\tilde\sigma(j)i} \cdots a_{\tilde\sigma(i)j} \cdots a_{\tilde\sigma(N)N}. \qquad (8)$$
Now the order in the column index is the same as in Eq. (7). We want to show that the terms in Eq. (8) are in one-to-one correspondence with those in Eq. (7). First we see that we can choose $\tilde\sigma(k)$ to be the same as $\sigma(k)$ for $k$ other than $i$ or $j$: $\tilde\sigma(k) = \sigma(k)$ for $k \neq i,j$. For $k = i$ and $j$, we can choose

$$\tilde\sigma(j) = \sigma(i) \quad \text{and} \quad \tilde\sigma(i) = \sigma(j).$$

Note that $\tilde\sigma \neq \sigma$, but the mapping is unique, so that there is a one-to-one correspondence between the terms in the two expansions, modulo sign. To find the sign, notice that the relation between $\tilde\sigma$ and $\sigma$ can be written as
$$\tilde\sigma = \sigma \circ \tau_{ij}.$$

Let us check it:
$$\tilde\sigma(k) = \begin{cases} \sigma(\tau_{ij}(k)) = \sigma(k) & k \neq i,j \\ \sigma(\tau_{ij}(i)) = \sigma(j) & k = i \\ \sigma(\tau_{ij}(j)) = \sigma(i) & k = j \end{cases}$$
Thus, the permutations $\tilde\sigma$ and $\sigma$ differ by one transposition, and therefore $(-1)^{\pi_\sigma} = -(-1)^{\pi_{\tilde\sigma}}$, which proves the theorem.
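Both the theorem and the key relation $\tilde\sigma = \sigma \circ \tau_{ij}$ are easy to test numerically (a sketch; compose() and the 0-based tuple convention as before):

```python
# Sign change under column interchange, plus the relation sigma~ = sigma o tau_ij.
import numpy as np

a = np.random.rand(4, 4)
a_sw = a.copy()
a_sw[:, [1, 3]] = a_sw[:, [3, 1]]      # interchange columns i=2, j=4 (0-based 1, 3)
print(np.isclose(np.linalg.det(a_sw), -np.linalg.det(a)))  # True

def compose(s, t):
    return tuple(s[t[k]] for k in range(len(s)))

i, j = 1, 3
tau_ij = list(range(4)); tau_ij[i], tau_ij[j] = j, i
sigma = (2, 0, 3, 1)
sigma_t = compose(sigma, tuple(tau_ij))
# sigma~ agrees with sigma except that the values at i and j are exchanged:
print(sigma_t[i] == sigma[j] and sigma_t[j] == sigma[i])   # True
```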

Linear combinations of column vectors: Another theorem states that if a column of a matrix is a linear combination of two (or more) column matrices (vectors), the determinant of this matrix is equal to the linear combination of determinants, each containing one of these column matrices:
$$|A(\mathbf{a}_j = \beta\mathbf{b} + \gamma\mathbf{c})| = \beta\,|A(\mathbf{a}_j = \mathbf{b})| + \gamma\,|A(\mathbf{a}_j = \mathbf{c})|. \qquad (9)$$

The proof follows from the fact that the definition of the determinant implies that each term in the expansion (2) contains exactly one element from each column and each row. Thus, each term contains the factor $\beta b_i + \gamma c_i$ and can be written as a sum of two terms. Pulling the coefficients in front of the determinants proves the theorem.

Determinant of a product of matrices: One more theorem, which is the subject of a homework problem, is that the determinant of a product of two matrices is the product of determinants:
$$|AB| = |A|\,|B|.$$
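Both statements can be checked with numpy in a few lines (a sketch; the column index and coefficients are arbitrary choices):

```python
# Check Eq. (9) and |AB| = |A||B| numerically.
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((4, 4))
b_col, c_col = rng.random(4), rng.random(4)
beta, gamma = 2.0, -0.5

a_sum, a_b, a_c = a.copy(), a.copy(), a.copy()
a_sum[:, 2] = beta * b_col + gamma * c_col   # column a_j as a linear combination
a_b[:, 2], a_c[:, 2] = b_col, c_col

print(np.isclose(np.linalg.det(a_sum),
                 beta * np.linalg.det(a_b) + gamma * np.linalg.det(a_c)))  # Eq. (9)

m = rng.random((4, 4))
print(np.isclose(np.linalg.det(a @ m),
                 np.linalg.det(a) * np.linalg.det(m)))                     # |AB|=|A||B|
```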

Determinant of unitary matrix: The product theorem can be used to prove that the determinant of a unitary matrix, i.e., a matrix with the property $UU^\dagger = I$, where the dagger denotes a matrix which is transposed and complex conjugated, is a complex number of modulus 1. Indeed,

$$1 = |UU^\dagger| = |U|\,|U^\dagger| = |U|\,\left|(U^*)^T\right| = |U|\,|U^*| = |U|\,|U|^* = |z|^2,$$
where $z = |U|$ and we used the theorem about the determinant of a transposed matrix.

Laplace expansion of determinant: Finally, a homework problem shows that the determinant of $A$ can be computed using the so-called Laplace expansion
$$|A| = \sum_i (-1)^{i+j}\, a_{ij}\, |M_{ij}| = \sum_j (-1)^{i+j}\, a_{ij}\, |M_{ij}|,$$
where the matrix $M_{ij}$ is obtained from the matrix $A$ by removing the $i$th row and $j$th column.
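A recursive implementation of the Laplace expansion (expanding along the first row) makes a convenient cross-check against the permutation-sum definition; a sketch:

```python
# Laplace expansion along the first row, checked against numpy.
import numpy as np

def det_laplace(a):
    n = a.shape[0]
    if n == 1:
        return a[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(a, 0, axis=0), j, axis=1)  # M_{1j}
        total += (-1) ** j * a[0, j] * det_laplace(minor)      # (-1)^{1+j} a_{1j} |M_{1j}|
    return total

a = np.random.rand(5, 5)
print(np.isclose(det_laplace(a), np.linalg.det(a)))  # True
```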
