
Quantum Mechanics

Robert C. Roleda, Physics Department

Completeness Expansion

We have seen in [bra-ket] that state kets may be expressed as an expansion in terms of eigenkets

$|\psi\rangle = \sum_n c_n |n\rangle$

and that the expansion coefficients may be obtained from

$c_n = \langle n|\psi\rangle$

Inserting the last equation into the first,

$|\psi\rangle = \sum_n \langle n|\psi\rangle\, |n\rangle$

Completeness Relation

Since the inner product $\langle n|\psi\rangle$ is just a number, we may move it to the right:

$|\psi\rangle = \sum_n |n\rangle\langle n|\psi\rangle$

Consistency requires that

$\sum_n |n\rangle\langle n| = 1$

That this is true implies that the eigenkets form a complete set for the expansion, thus forming a basis set. The unit summation is called the completeness relation.
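The completeness relation can be checked numerically. The following sketch (an illustration, not part of the lecture; it takes, as an assumption, the eigenkets of an arbitrary Hermitian matrix as the orthonormal basis) sums the ket-bras $|n\rangle\langle n|$ and confirms that the result is the unit matrix.

```python
import numpy as np

# For any orthonormal basis {|n>}, the sum of the ket-bras |n><n| is the
# identity. Here the basis kets are taken to be the eigenvectors of a
# randomly chosen Hermitian matrix (an arbitrary illustrative choice).
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = M + M.conj().T                 # Hermitian, so its eigenkets are orthonormal
_, kets = np.linalg.eigh(H)        # columns of `kets` are the |n>

# Sum of the outer products sum_n |n><n|
completeness = sum(np.outer(kets[:, n], kets[:, n].conj()) for n in range(4))

print(np.allclose(completeness, np.eye(4)))   # True: the basis is complete
```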

Inner Product

We note that the bra-ket $\langle\phi|\psi\rangle = \int \phi^*(x)\,\psi(x)\,dx$ yields a scalar (a number). The reason this operation is called an inner product is that it is a product of two vectors that gives a lower-ordered entity (a scalar).

You may recall that the dot product of Euclidean vectors is also called a scalar product. An inner product is a bra-ket.

The completeness relation

$\sum_n |n\rangle\langle n| = 1$

is a summation over the terms $|n\rangle\langle n|$, which are basically ket-bras. More generally, if we consider a ket-bra $A = |\phi\rangle\langle\chi|$, then

$A|\psi\rangle = |\phi\rangle\langle\chi|\psi\rangle = \langle\chi|\psi\rangle\, |\phi\rangle$

A ket-bra acting on a ket therefore yields another ket. Mathematical entities such as these are called operators. Because a ket-bra is an operation on two vectors that gives rise to a mathematical object of higher order than a vector, it is called an outer product.
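As a numerical illustration (with made-up component values, not from the text), a ket-bra stored as a matrix indeed maps a ket to another ket, scaled by the number $\langle\chi|\psi\rangle$:

```python
import numpy as np

# Sketch of a ket-bra acting as an operator:
# (|phi><chi|) |psi> = <chi|psi> |phi>  -- the result is again a ket.
# The three kets below are arbitrary real vectors chosen for illustration.
phi = np.array([1.0, 2.0, 0.0])
chi = np.array([0.0, 1.0, 1.0])
psi = np.array([3.0, 1.0, 2.0])

A = np.outer(phi, chi)          # the ket-bra |phi><chi| is a matrix (operator)
lhs = A @ psi                   # the operator acting on the ket |psi>
rhs = (chi @ psi) * phi         # the number <chi|psi> scaling the ket |phi>

print(np.allclose(lhs, rhs))    # True
```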

Matrices

Outer products can be better understood if we refer back to Euclidean vectors. These vectors may be expressed as matrices, whereby the basis vectors in

$\vec{A} = A_x\hat{x} + A_y\hat{y} + A_z\hat{z}$

are expressed as

$\hat{x} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad \hat{y} = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad \hat{z} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$

Since the components are just numbers, a vector is then expressed in terms of a column matrix

$\vec{A} = \begin{pmatrix} A_x \\ A_y \\ A_z \end{pmatrix}$

Matrix Multiplication

Matrix multiplication basically involves multiplying a row of the first matrix with a column of the second matrix:

$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} e & f \\ g & h \end{pmatrix} = \begin{pmatrix} ae+bg & af+bh \\ ce+dg & cf+dh \end{pmatrix}$

Such operations can be expressed as

$C^j_i = \sum_k A^k_i B^j_k$

where the upper indices are the column numbers and the lower indices are the row numbers of a matrix. The summation is thus over the column number of the first matrix and the row number of the second matrix:

$C^1_1 = A^1_1 B^1_1 + A^2_1 B^1_2$
$C^2_1 = A^1_1 B^2_1 + A^2_1 B^2_2$
$C^1_2 = A^1_2 B^1_1 + A^2_2 B^1_2$
$C^2_2 = A^1_2 B^2_1 + A^2_2 B^2_2$

Demonstration

If

$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \quad B = \begin{pmatrix} e & f \\ g & h \end{pmatrix}$

then

$A^1_1 = a, \quad A^2_1 = b, \quad A^1_2 = c, \quad A^2_2 = d$
$B^1_1 = e, \quad B^2_1 = f, \quad B^1_2 = g, \quad B^2_2 = h$

and

$C^1_1 = A^1_1 B^1_1 + A^2_1 B^1_2 = ae + bg$
$C^2_1 = A^1_1 B^2_1 + A^2_1 B^2_2 = af + bh$
$C^1_2 = A^1_2 B^1_1 + A^2_2 B^1_2 = ce + dg$
$C^2_2 = A^1_2 B^2_1 + A^2_2 B^2_2 = cf + dh$

Thus,

$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} e & f \\ g & h \end{pmatrix} = \begin{pmatrix} ae+bg & af+bh \\ ce+dg & cf+dh \end{pmatrix}$
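The index formula can also be verified with a short script (the numerical entries assigned to $a$ through $h$ are arbitrary choices for illustration):

```python
import numpy as np

# Check the index formula C^j_i = sum_k A^k_i B^j_k (upper index = column,
# lower index = row) against numpy's built-in matrix product.
a, b, c, d = 1, 2, 3, 4       # arbitrary entries of A
e, f, g, h = 5, 6, 7, 8       # arbitrary entries of B
A = np.array([[a, b], [c, d]])
B = np.array([[e, f], [g, h]])

C = np.zeros((2, 2))
for i in range(2):            # row number of C (lower index)
    for j in range(2):        # column number of C (upper index)
        C[i, j] = sum(A[i, k] * B[k, j] for k in range(2))

print(np.allclose(C, A @ B))    # True
print(C[0, 0] == a * e + b * g) # True: C^1_1 = ae + bg
```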

Covectors

The dual of a vector is that object which will yield a scalar $\langle\psi|\psi\rangle$ when multiplied with the vector. If the vector is presented as a column vector,

$|\psi\rangle := \begin{pmatrix} \psi_1 \\ \psi_2 \\ \psi_3 \end{pmatrix}$

then its elements will only have row numbers. The object that will yield a scalar when multiplied with $|\psi\rangle$ must then be one whose elements only have a column number. Such an object is a row matrix,

$\langle\psi| := \begin{pmatrix} \psi_1 & \psi_2 & \psi_3 \end{pmatrix}$

These objects are called covectors.

Inner and Outer Products

In matrix form, $\langle A| = \begin{pmatrix} A_x & A_y & A_z \end{pmatrix}$ and $|B\rangle = \begin{pmatrix} B_x \\ B_y \\ B_z \end{pmatrix}$. Thus, the inner product

$\langle A|B\rangle = \begin{pmatrix} A_x & A_y & A_z \end{pmatrix} \begin{pmatrix} B_x \\ B_y \\ B_z \end{pmatrix} = A_x B_x + A_y B_y + A_z B_z$

gives a scalar. The outer product

$|A\rangle\langle B| = \begin{pmatrix} A_x \\ A_y \\ A_z \end{pmatrix} \begin{pmatrix} B_x & B_y & B_z \end{pmatrix} = \begin{pmatrix} A_x B_x & A_x B_y & A_x B_z \\ A_y B_x & A_y B_y & A_y B_z \\ A_z B_x & A_z B_y & A_z B_z \end{pmatrix}$

on the other hand yields a matrix.
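A quick sketch (with arbitrary component values) of the two products in matrix form: row times column yields a scalar, column times row yields a matrix.

```python
import numpy as np

# Inner product (row times column -> scalar) and outer product
# (column times row -> matrix) for two illustrative Euclidean vectors.
A = np.array([1.0, 2.0, 3.0])   # components A_x, A_y, A_z
B = np.array([4.0, 5.0, 6.0])   # components B_x, B_y, B_z

inner = A @ B                   # A_x B_x + A_y B_y + A_z B_z
outer = np.outer(A, B)          # 3x3 matrix with entries A_i B_j

print(inner)                    # 32.0: a scalar, lower-order than a vector
print(outer.shape)              # (3, 3): a matrix, higher-order than a vector
```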

Transpose

The operation of interchanging row and column numbers is called a transpose. Thus, if

$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$

its transpose is

$A^T = \begin{pmatrix} a & c \\ b & d \end{pmatrix}$

or

$(A^T)^j_i = A^i_j$

For a vector,

$\begin{pmatrix} A_x \\ A_y \\ A_z \end{pmatrix}^T = \begin{pmatrix} A_x & A_y & A_z \end{pmatrix}$

The transpose of a column matrix is a row matrix.

Adjoint

We however deal with complex numbers in quantum physics, and physicality requires that inner products produce real numbers. We have seen in [bra-ket] that the dual of a function is its complex conjugate. Duality of Euclidean vectors, on the other hand, involves a transposition. Taking these two together, we define the adjoint as the complex conjugate of the transpose. Hence, if

$|\psi\rangle = \begin{pmatrix} 1+i \\ 3i \end{pmatrix}$

then its adjoint is

$|\psi\rangle^\dagger = \begin{pmatrix} 1-i & -3i \end{pmatrix}$

The adjoint of the vector

$|\psi\rangle = \begin{pmatrix} 1 \\ 0 \\ -i \end{pmatrix}$

is

$\langle\psi| = |\psi\rangle^\dagger = \begin{pmatrix} 1 & 0 & i \end{pmatrix}$

Note that complex conjugation ensures that the inner product

$\langle\psi|\psi\rangle = \begin{pmatrix} 1 & 0 & i \end{pmatrix} \begin{pmatrix} 1 \\ 0 \\ -i \end{pmatrix} = 1 + 0 + 1 = 2$

is real. The dual of a ket $|\psi\rangle$ in quantum physics is therefore its adjoint.
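A short check of the adjoint, implemented as complex conjugation followed by transposition, using the example ket $|\psi\rangle = (1,\ 0,\ -i)^T$ from the text:

```python
import numpy as np

# The adjoint (dagger) of a ket: complex-conjugate, then transpose.
psi = np.array([[1.0], [0.0], [-1j]])   # column matrix (ket)
bra = psi.conj().T                      # adjoint: row matrix (bra)

print(np.allclose(bra, np.array([[1.0, 0.0, 1j]])))  # True

# Conjugation makes the inner product real (and positive):
# <psi|psi> = 1*1 + 0*0 + (i)*(-i) = 1 + 0 + 1 = 2
print((bra @ psi).item().real)          # 2.0
```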