
Math Methods for Polymer Physics Lecture 1: Representations of Functions

Series analysis is an essential tool in polymer physics and the physical sciences in general. Broadly speaking, a series expansion allows one to decompose an arbitrarily complicated function into the sum of a simpler set of functions. Though other series expansions exist, two are especially useful: the Taylor and Fourier series. In a crude way, we may think of both series as two different ways of approximating, or "fitting", a given function to a simpler form. For further reading on Taylor and Fourier series, see chapters 5 and 14, respectively, of Arfken and Weber's text, Mathematical Methods for Physicists.

1 Taylor Series

Let's start with Taylor series expansions. The Taylor expansion is a representation of a function, say f(x), as an infinite series in the polynomials $(x - x_0)^n$, where $x_0$ is some reference point for the independent variable, x. Why are Taylor series useful? Well, let's say you have a complicated function:

$$ f(x) = \ln\left(\cos x^2 + 2\right) + \frac{x^3}{3}. \qquad (1) $$

This function is plotted in Fig. 1. Often it is sufficient and useful to have a simpler description of the function in the neighborhood of some point, say $x = x_0$. For many functions, you may replace f(x) with a power series expansion in powers of $\Delta x = x - x_0$, the distance from the reference point.

$$ f(x) = \sum_{n=0}^{\infty} a_n (\Delta x)^n = a_0 + a_1 \Delta x + a_2 (\Delta x)^2 + a_3 (\Delta x)^3 + \ldots \qquad (2) $$

What are these coefficients $a_n$? The first one can be deduced by setting $x = x_0$, so that $\Delta x = 0$ and

$$ f(x_0) = a_0 + \underbrace{a_1 \Delta x + a_2 (\Delta x)^2 + a_3 (\Delta x)^3 + \ldots}_{= 0 \text{ at } \Delta x = 0} = a_0 \qquad (3) $$

Figure 1: Plot of f(x) in eq. (1), dark solid line. For many applications, it is often necessary to know the behavior of the function f(x) near some point, say $x_0 = 2$. The Taylor series expansions for f(x) around $x = x_0$ including only 1, 2, 3, and 4 terms are shown as labelled.

To find the higher-order (larger n) coefficients, take a derivative of both sides. Note that after this operation the right-hand side is still a Taylor series.

$$ f'(x_0) = a_1 + \underbrace{2 a_2 \Delta x + 3 a_3 (\Delta x)^2 + \ldots}_{= 0 \text{ at } \Delta x = 0} = a_1 \qquad (4) $$

In general, we may show

$$ a_n = \frac{1}{n!} \left. \frac{d^n f}{dx^n} \right|_{x = x_0}. \qquad (5) $$

To find the Taylor series expansion we need only take derivatives of f(x) evaluated at the point of reference, $x = x_0$. Eqs. (2) and (5) define the Taylor series expansion of a function of a single variable. Functions which can be represented by a Taylor series are known as analytic functions. Notice from eq. (2) that as $x \to x_0$ and $\Delta x \to 0$, the higher-order (large n) terms in the power series go to zero very quickly. Hence, if one is interested in f(x) sufficiently close to $x_0$, a Taylor series truncated to include only a few leading terms may often be sufficient to approximate the function. Geometrically, we can think of this in terms of a "local" description of a function near $x = x_0$:

$$ f(x) = \underbrace{f(x_0)}_{\text{constant}} + \underbrace{\Delta x\, f'(x_0)}_{\text{linear}} + \underbrace{\frac{(\Delta x)^2}{2!} f''(x_0)}_{\text{parabolic}} + \underbrace{\frac{(\Delta x)^3}{3!} f'''(x_0)}_{\text{cubic}} + \ldots \qquad (6) $$

This shows that sufficiently close to a point of interest, analytic functions are well approximated by a constant plus a sloped, linear correction plus a parabolic correction plus .... The further away from a given reference point $x = x_0$, the less a function looks like a straight line. In order to get a better approximation, you need functions with more wiggles (i.e., higher-order polynomials). Let's try some examples.

Example 1: Expand ln(x) in a Taylor series around $x_0 = 1$.

$$ a_0 = \ln 1 = 0 $$

$$ a_1 = \left. \frac{d}{dx} \ln x \right|_{x=1} = \left. \frac{1}{x} \right|_{x=1} = 1 $$

$$ \left. \frac{d^2}{dx^2} \ln x \right|_{x=1} = \left. \frac{d}{dx} \frac{1}{x} \right|_{x=1} = \left. -\frac{1}{x^2} \right|_{x=1} = -1, \quad\text{so } a_2 = -\frac{1}{2!} = -\frac{1}{2} $$

$$ \left. \frac{d^3}{dx^3} \ln x \right|_{x=1} = \left. \frac{2}{x^3} \right|_{x=1} = 2, \quad\text{so } a_3 = \frac{2}{3!} = \frac{1}{3} $$

In general, $\left. \frac{d^n}{dx^n} \ln x \right|_{x=1} = (-1)^{n-1} (n-1)!$ for $n \geq 1$, so $a_n = (-1)^{n-1}/n$.

$$ \ln x = (x-1) - \frac{(x-1)^2}{2} + \frac{(x-1)^3}{3} - \frac{(x-1)^4}{4} + \ldots = \sum_{n=1}^{\infty} \frac{(-1)^{n+1} (x-1)^n}{n} \qquad (7) $$
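As a quick numerical check of this result, a minimal sketch (plain Python, standard library only) compares truncated partial sums of eq. (7) against math.log near $x_0 = 1$:

```python
import math

def ln_taylor(x, n_terms):
    """Partial sum of the Taylor series of ln(x) about x0 = 1:
    sum_{n=1}^{n_terms} (-1)^(n+1) (x - 1)^n / n."""
    dx = x - 1.0
    return sum((-1) ** (n + 1) * dx ** n / n for n in range(1, n_terms + 1))

# Close to x0 = 1, a few terms already do well; more terms do better.
for terms in (2, 4, 8):
    approx = ln_taylor(1.3, terms)
    print(terms, approx, abs(approx - math.log(1.3)))
```

The error of the truncated series shrinks rapidly as terms are added, as long as we stay within the radius of convergence discussed below.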

Example 2: Expand $\frac{1}{1-x}$ in a Taylor series around $x_0 = 0$.

$$ a_0 = \left. \frac{1}{1-x} \right|_{x=0} = 1 $$

$$ \left. \frac{d}{dx} \frac{1}{1-x} \right|_{x=0} = \left. \frac{1}{(1-x)^2} \right|_{x=0} = 1! $$

$$ \left. \frac{d^2}{dx^2} \frac{1}{1-x} \right|_{x=0} = \left. \frac{2}{(1-x)^3} \right|_{x=0} = 2! $$

$$ \left. \frac{d^3}{dx^3} \frac{1}{1-x} \right|_{x=0} = \left. \frac{1 \times 2 \times 3}{(1-x)^4} \right|_{x=0} = 3! $$

In general, the nth derivative at $x = 0$ is $n!$, so $a_n = n!/n! = 1$ and

$$ \frac{1}{1-x} = \sum_{n=0}^{\infty} x^n \qquad (8) $$

which is the well-known geometric series.
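A short numerical sketch of eq. (8) (plain Python; the function name is ours): partial sums of the geometric series approach $1/(1-x)$ when $|x| < 1$ and run away when $|x| > 1$:

```python
def geom_partial(x, n_terms):
    """Partial sum sum_{k=0}^{n_terms-1} x^k of the geometric series."""
    return sum(x ** k for k in range(n_terms))

print(geom_partial(0.5, 60))   # converges toward 1/(1 - 0.5) = 2
print(geom_partial(2.0, 21))   # diverges: nowhere near 1/(1 - 2) = -1
```

The second call previews the convergence issue discussed next: outside the radius of convergence, adding terms makes the partial sums worse, not better.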

These particular series, eqs. (7) and (8), do not converge for all values of x. When the series does not converge, for some large enough $\Delta x$ the successive terms $a_n (\Delta x)^n$ become larger than the sum of the previous terms, meaning that adding more terms in the series expansion does not provide a better approximation, and the Taylor series fails to represent the function. For $\ln x$ around $x_0 = 1$ and $\frac{1}{1-x}$ around $x_0 = 0$, these series only converge for $|\Delta x| < 1$.

In general we may define $R_c$ as the radius of convergence of the Taylor series of f(x) around $x = x_0$. If $|\Delta x| < R_c$, then $\sum_{n=0}^{\infty} a_n (\Delta x)^n = f(x)$. Otherwise the series does not provide a good approximation of f(x) (adding more terms makes things worse). There are some functions for which $R_c \to \infty$ and the Taylor series always converges. Important examples include $e^x$, $\sin x$, and $\cos x$. These functions arise in many contexts, so it is useful to commit their series to memory.

Example 3: Expand $e^x$ around $x = 0$. Well, first notice

$$ \left. \frac{d^n}{dx^n} e^x \right|_{x=0} = \left. e^x \right|_{x=0} = 1 $$

From eq. (5) this gives right away the Taylor series coefficients of $e^x$:

$$ e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \ldots = \sum_{n=0}^{\infty} \frac{x^n}{n!}. \qquad (9) $$

The Taylor series representation of $e^x$ is a particularly useful way to see that $\frac{d}{dx} e^x = e^x$. Indeed, it is reasonable to view $\sum_{n=0}^{\infty} x^n/n!$ as the definition of $e^x$. You should also commit the expansions of $\sin x$ and $\cos x$ to memory. These converge for all x:

$$ \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \ldots \qquad (10) $$

$$ \cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \ldots \qquad (11) $$

Notice that these expansions allow you to derive the following important identity,

$$ e^{ix} = \cos x + i \sin x, \qquad (12) $$

which is used heavily in Fourier analysis. It is reasonably straightforward to generalize the Taylor series expansion for a function of a single variable to a multi-variable function, say f(x, y),

expanded around the point $x = x_0$ and $y = y_0$:

$$ f(x,y) = f(x_0, y_0) + \frac{\partial f}{\partial x} \Delta x + \frac{\partial f}{\partial y} \Delta y + \frac{1}{2!}\left[ \frac{\partial^2 f}{\partial x^2} (\Delta x)^2 + 2 \frac{\partial^2 f}{\partial x \partial y} \Delta x \Delta y + \frac{\partial^2 f}{\partial y^2} (\Delta y)^2 \right] + \frac{1}{3!}\left[ \frac{\partial^3 f}{\partial x^3} (\Delta x)^3 + 3 \frac{\partial^3 f}{\partial x^2 \partial y} (\Delta x)^2 \Delta y + 3 \frac{\partial^3 f}{\partial x \partial y^2} \Delta x (\Delta y)^2 + \frac{\partial^3 f}{\partial y^3} (\Delta y)^3 \right] + \ldots \qquad (13) $$

where $\Delta x = x - x_0$, $\Delta y = y - y_0$, and all partial derivatives are evaluated at $(x_0, y_0)$. This expansion can be confirmed by taking first, second, third (etc.) derivatives of both sides of the equation above.

Why is the Taylor expansion a useful description? In many physical systems, the full expression for a function may be impossible to write down (e.g. the potential energy of strongly interacting mixtures of charged particles). But often, equilibrium and dynamic behavior depends only on local properties of the function. By "local", we mean sufficiently close to some set of values of the independent variables. As a concrete example, consider a colloidal bead in a laser trap (Fig. 2), an experimental tool which has been exploited to measure the forces generated by single macromolecules. If the bead has a polarizability, $\alpha$, then when it is subject to an electric field, $\mathbf{E}$, it acquires a dipole moment, $\mathbf{p} = \alpha \mathbf{E}$. The potential energy of a polarized object in an electric field is simply $U = -\frac{1}{2} \mathbf{p} \cdot \mathbf{E}$, while the energy required to polarize the bead is $U_{\text{polarization}} = |\mathbf{p}|^2/(2\alpha)$. Therefore, if the polarizable bead is subject to an electric field $\mathbf{E}(x)$ that varies in space (as near the focal point of a laser beam), the net electrostatic interaction between the bead and the field is described by the potential energy

$$ U(x) = -\frac{\alpha}{2} |\mathbf{E}(x)|^2. \qquad (14) $$

Hence, the potential energy is lowest in regions where the electric-field intensity, $|\mathbf{E}(x)|^2$, is highest. This explains why small polarizable objects, like colloidal beads, are drawn into the focal point of a high-intensity laser (shown schematically in Fig. 2). In general, the pattern of electric field intensity, $|\mathbf{E}(x)|^2$, may be rather complicated.
But if we are interested only in the behavior very close to the center of the trap, the potential always has the same simple form,

$$ U(\Delta x) = \underbrace{-U_0}_{\text{constant}} + \underbrace{U_1 \Delta x}_{= 0} + \underbrace{\frac{U_2}{2} (\Delta x)^2}_{\text{quadratic}} + \ldots \qquad (15) $$

Figure 2: Top: a schematic depiction of a polarizable bead near the high-intensity focal point of a laser beam. Bottom: a sketch of U, the potential energy of an optically-trapped colloidal particle, in terms of $\Delta x$, the deviation from the center of the trap.

By definition, U is a minimum at $\Delta x = 0$, so we know that

$$ \left. \frac{dU}{dx} \right|_{\Delta x = 0} = U_1 = 0. $$

This means that the force on the bead at the center of the trap is zero, because the electric field intensity is maximal there. Local equilibrium (mechanical, dynamic, etc.) always looks like this: constant + quadratic (the first non-trivial term in an expansion about equilibrium). What is the force if the bead is displaced?

$$ F_x = -\frac{dU}{dx} = -U_2 \Delta x = -k \Delta x \qquad (16) $$

The linear force response is identical to a "Hooke's Law" elastic spring, and k is the spring constant. For most purposes (near equilibrium or steady state), we are interested in the expansion only up to quadratic order. Therefore, if one "calibrates" the strength of the optical trap (the value of k) and carefully measures $\Delta x$, one can measure the magnitude of external forces that pull the bead from the center of the trap, generated, say, by a strand of DNA chemically tethered to the bead.
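To make the constant + quadratic picture concrete, here is a minimal sketch with a made-up Gaussian model potential (the form of U and the values of U0 and w are illustrative assumptions, not the actual field profile of any laser trap). The quadratic Taylor term gives an effective spring constant $k = 2U_0/w^2$, and the numerical force $-dU/dx$ is well approximated by Hooke's law $-k\Delta x$ near the center:

```python
import math

U0, w = 1.0, 0.5          # hypothetical trap depth and width (arbitrary units)

def U(x):
    """Model trap potential: deepest at the focal point x = 0."""
    return -U0 * math.exp(-(x / w) ** 2)

# Quadratic Taylor term: U(x) ~ -U0 + U0 x^2 / w^2, so k = U''(0) = 2 U0 / w^2.
k = 2 * U0 / w ** 2

def force(x, h=1e-6):
    """F = -dU/dx by central finite difference."""
    return -(U(x + h) - U(x - h)) / (2 * h)

dx = 0.01
print(force(dx), -k * dx)   # nearly equal close to the trap center
```

Further from the center the two curves separate, which is exactly the statement that the quadratic truncation is a local description.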

2 Fourier Series

The second important series representation of functions is the Fourier series. A simple way to describe this series is to contrast it with the Taylor series

described in the previous section:

Taylor Series - decompose f(x) into an infinite series of polynomials $(\Delta x)^n$

Fourier Series - decompose f(x) into an infinite series of sines and cosines

Why are Fourier series (and transforms) useful?

1. Fourier analysis is necessary to understand the interaction between matter and radiation/waves (i.e. scattering) and spectral analysis

2. Sines and cosines are "harmonic functions", which means they form a complete set of solutions to certain PDEs common to the study of physical systems

Indeed, properties 1 and 2 are intimately related, as the wave equation is harmonic, and therefore radiation (light, x-rays, etc.) is sinusoidal in nature. In addition, you'll likely see how property 2 can be used to solve problems in continuum elasticity and polymer dynamics. For example, in the study of polymer dynamics, we come across equations like,

$$ \frac{d^2}{dn^2} R(n) + k R(n) = 0, \qquad (17) $$

where $k > 0$ describes a relaxation rate of chain motion, and R(n) specifies the position of bead n along a polymer chain. Since $\frac{d^2}{dn^2} \sin(\sqrt{k}\, n) = -k \sin(\sqrt{k}\, n)$, sines and cosines form a natural set of solutions to this equation. For the purposes of this review, a Fourier series is the unique decomposition of an arbitrary function (on some domain) into an infinite series of sines and cosines. Let's say we are interested in a function f(x) on the domain $x \in [0, L]$ (see Fig. 3). On this domain we can write the Fourier series as:

$$ f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left(\frac{2\pi n}{L} x\right) + \sum_{n=1}^{\infty} b_n \sin\left(\frac{2\pi n}{L} x\right) \qquad (18) $$

Here $a_n$ and $b_n$ are coefficients. Just as the coefficients of the Taylor series are related uniquely to the given function, $a_n$ and $b_n$ are uniquely determined by the properties of f(x) on this domain. How are $a_n$ and $b_n$ related to f(x)? This relationship derives from an important property of sines and cosines: in particular, $\sin\left(\frac{2\pi n}{L} x\right)$ and $\cos\left(\frac{2\pi n}{L} x\right)$ are orthogonal on this domain. This means that if we

Figure 3: Plot of f(x) on the domain [0, L].

multiply any two of these elementary functions and integrate over the domain $x \in [0, L]$, the resulting integral is zero unless these functions are identical. Consider the product of two such functions, $\sin\left(\frac{2\pi n}{L} x\right) \sin\left(\frac{2\pi m}{L} x\right)$:

$$ \int_0^L dx\, \sin\left(\frac{2\pi n}{L} x\right) \sin\left(\frac{2\pi m}{L} x\right) = \frac{1}{2} \int_0^L dx \left[ \cos\left(\frac{2\pi x}{L}(n-m)\right) - \cos\left(\frac{2\pi x}{L}(n+m)\right) \right]. \qquad (19) $$

This integral is only non-zero if $n = m$, for which the first term in the integrand becomes $\cos\left(\frac{2\pi x}{L}(n-m)\right) = 1$. From this we can show the following orthogonality relation between sines,

$$ \int_0^L dx\, \sin\left(\frac{2\pi n}{L} x\right) \sin\left(\frac{2\pi m}{L} x\right) = \begin{cases} L/2 & \text{if } n = m \\ 0 & \text{if } n \neq m \end{cases} \qquad (20) $$

Similarly, for the cosines,

$$ \int_0^L dx\, \cos\left(\frac{2\pi n}{L} x\right) \cos\left(\frac{2\pi m}{L} x\right) = \begin{cases} L/2 & \text{if } n = m \neq 0 \\ 0 & \text{if } n \neq m \end{cases} \qquad (21) $$
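These two orthogonality relations are easy to confirm numerically. A minimal sketch (plain Python, midpoint-rule integration; the helper names are ours) checks eqs. (20) and (21) for a few mode pairs:

```python
import math

L = 2.0      # arbitrary domain length
N = 4000     # midpoint-rule panels

def integrate(f):
    """Midpoint-rule approximation of the integral of f over [0, L]."""
    h = L / N
    return h * sum(f((i + 0.5) * h) for i in range(N))

def sin_mode(n):
    return lambda x: math.sin(2 * math.pi * n * x / L)

def cos_mode(n):
    return lambda x: math.cos(2 * math.pi * n * x / L)

sin_same = integrate(lambda x: sin_mode(3)(x) * sin_mode(3)(x))  # expect L/2
sin_diff = integrate(lambda x: sin_mode(3)(x) * sin_mode(5)(x))  # expect 0
cos_same = integrate(lambda x: cos_mode(4)(x) * cos_mode(4)(x))  # expect L/2
print(sin_same, sin_diff, cos_same)
```

Any pair of distinct modes integrates to zero, while a mode against itself gives L/2, just as the relations state.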

Sines and cosines are always orthogonal,

$$ \int_0^L dx\, \sin\left(\frac{2\pi n}{L} x\right) \cos\left(\frac{2\pi m}{L} x\right) = 0 \quad \text{for all } m, n. \qquad (22) $$

The orthogonality relations, eqs. (20) - (22), are important because they allow one to invert the Fourier series, to determine the unique set of

coefficients, $a_n$ and $b_n$, that correspond to the function f(x). Operationally, the coefficients of the Fourier series are determined by "projecting out" the term in the series proportional to, say, $\sin\left(\frac{2\pi m}{L} x\right)$, by multiplying both sides of eq. (18) by $\sin\left(\frac{2\pi m}{L} x\right)$ and integrating the product over the domain $x \in [0, L]$:

$$ \int_0^L dx\, f(x) \sin\left(\frac{2\pi m}{L} x\right) = \int_0^L dx\, \sin\left(\frac{2\pi m}{L} x\right) \left[ \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left(\frac{2\pi n}{L} x\right) + \sum_{n=1}^{\infty} b_n \sin\left(\frac{2\pi n}{L} x\right) \right] $$

Carrying out the integration, the $a_0$ term and all cosine terms in the sum vanish due to the orthogonality conditions, eqs. (21) and (22). Likewise, all sine terms in the sum except the $n = m$ term are zero too. Thus, the only term from the right-hand side of eq. (18) that survives this "projection" operation is the $n = m$ term:

$$ \int_0^L dx\, f(x) \sin\left(\frac{2\pi m}{L} x\right) = \frac{L}{2} b_m $$

and,

$$ b_n = \frac{2}{L} \int_0^L dx\, \sin\left(\frac{2\pi n}{L} x\right) f(x) \qquad (23) $$

By performing the same operation with the cosine functions we can also derive,

$$ a_n = \frac{2}{L} \int_0^L dx\, \cos\left(\frac{2\pi n}{L} x\right) f(x) \qquad (24) $$

Example 4: Consider the function $f(x) = A + Bx$ (see Fig. 4). Compute the coefficients $a_n$ and $b_n$ for a Fourier series on the domain $x \in [0, L]$. From eq. (24) we compute the coefficients of the cosine terms by multiplying f(x) by $\cos\left(\frac{2\pi n}{L} x\right)$ and integrating over the domain. For $n = 0$, this is easy,

$$ a_0 = \frac{2}{L} \int_0^L dx\, (A + Bx) = \frac{2}{L} \left( AL + \frac{BL^2}{2} \right) = 2A + BL \qquad (25) $$
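As a quick sanity check on eq. (25) (plain Python; A, B, and L are arbitrary sample values), numerical integration reproduces $a_0 = 2A + BL$:

```python
A, B, L = 1.5, 2.0, 1.0   # arbitrary sample values
N = 1000                  # midpoint-rule panels

h = L / N
a0 = (2 / L) * h * sum(A + B * (i + 0.5) * h for i in range(N))
print(a0, 2 * A + B * L)   # the midpoint rule is exact for linear integrands
```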

Now consider $b_n$,

$$ b_n = \frac{2}{L} \int_0^L dx\, \sin\left(\frac{2\pi n}{L} x\right) (A + Bx) = \frac{2B}{L} \int_0^L dx\, x \sin\left(\frac{2\pi n}{L} x\right) \qquad (26) $$

(the A term drops out because the sine integrates to zero over whole periods). How do you do this integral? Let's review a useful trick for evaluating integrals of this type.

Figure 4: Plot of f(x) in the domain [0, L].

Aside: Integration by parts

Let's say you want to compute $\int_0^L dx\, u(x) v'(x)$, and you don't know the anti-derivative of $v'(x)$. The product rule of differentiation gives you,

$$ \frac{d}{dx}\left( u(x) v(x) \right) = u'(x) v(x) + u(x) v'(x) \qquad (27) $$

or

$$ u(x) v'(x) = \frac{d}{dx}\left( u(x) v(x) \right) - u'(x) v(x). $$

Substituting this expression for the integrand,

$$ \int_0^L dx\, u(x) v'(x) = \int_0^L dx \left[ \frac{d}{dx}\left( u(x) v(x) \right) - u'(x) v(x) \right] = \left. u(x) v(x) \right|_0^L - \int_0^L dx\, u'(x) v(x). \qquad (28) $$

Colloquially, we say that this operation "flips" the derivative from v(x) to u(x). (Hopefully, the remaining integrand is known!) Applying this to our case in eq. (26):

$$ u = x, \qquad v' = \sin\left(\frac{2\pi n}{L} x\right) $$

$$ u' = 1, \qquad v = -\frac{L}{2\pi n} \cos\left(\frac{2\pi n}{L} x\right) $$

and,

$$ \int_0^L dx\, x \sin\left(\frac{2\pi n}{L} x\right) = \left. -\frac{xL}{2\pi n} \cos\left(\frac{2\pi n}{L} x\right) \right|_0^L + \frac{L}{2\pi n} \int_0^L dx\, \cos\left(\frac{2\pi n}{L} x\right) = -\frac{L^2}{2\pi n}, $$

thus

$$ b_n = -\frac{BL}{\pi n}. \qquad (29) $$

Applying integration by parts, we can also show $a_n = 0$ for $n \neq 0$. All together, we have

$$ f(x) = \left( A + \frac{BL}{2} \right) - \sum_{n=1}^{\infty} \frac{BL}{\pi n} \sin\left(\frac{2\pi n}{L} x\right). \qquad (30) $$

This result is plotted in Fig. 5, where the series has been truncated after a finite number of terms. It is quite clear that additional terms improve the quality of the Fourier expansion, and the series ultimately converges to f(x). It is common to refer to the individual terms contributing to the Fourier sum as "Fourier modes". From the result $b_n = -\frac{BL}{\pi n}$ and from Fig. 5, it is clear that the contribution, or amplitude, of the higher-order modes decreases as the "mode number" n increases, explaining why this sum converges to a reasonable approximation of the function f(x) after a finite number of terms.

Three final notes on Fourier series. First, the domain of a Fourier series can be chosen arbitrarily. It is commonly convenient to shift the domain to be symmetric about $x = 0$: $x \in \left[ -\frac{L}{2}, \frac{L}{2} \right]$. In this case, the form of the Fourier series looks the same; only the formulas for the coefficients change:

$$ a_n = \frac{2}{L} \int_{-L/2}^{L/2} dx\, \cos\left(\frac{2\pi n}{L} x\right) f(x), \qquad (31) $$

and similarly for the $b_n$. Second, notice that all terms in the Fourier series are periodic under $x \to x + nL$ (shifts by the length of the domain). For this reason the Fourier series is especially useful as a general representation of any periodic function. For example, one may calculate the Fourier coefficients for a given function based on the "projection operation" within a single domain, say from $x = 0$ to $x = L$ in Fig. 6. In crystalline materials, for example, the electron density is a periodic function that is naturally described as a Fourier spectrum, and the modes of non-zero amplitude represent regions of strong scattering by diffraction.
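The whole example can be checked end to end with a minimal numerical sketch (plain Python; A, B, and L are arbitrary sample values, and the helper names are ours): the projection integral, eq. (23), reproduces $b_n = -BL/(\pi n)$, and truncated partial sums of eq. (30) converge toward $f(x) = A + Bx$ inside the domain:

```python
import math

A, B, L = 1.0, 2.0, 1.0   # arbitrary sample values
N = 20000                 # midpoint-rule panels for the projection integral

def f(x):
    return A + B * x

def b_numeric(n):
    """b_n from the projection integral, eq. (23), by the midpoint rule."""
    h = L / N
    return (2 / L) * h * sum(
        math.sin(2 * math.pi * n * (i + 0.5) * h / L) * f((i + 0.5) * h)
        for i in range(N)
    )

def fourier_partial(x, n_max):
    """Truncated Fourier series of f, eq. (30)."""
    s = A + B * L / 2
    for n in range(1, n_max + 1):
        s -= (B * L / (math.pi * n)) * math.sin(2 * math.pi * n * x / L)
    return s

print(b_numeric(3), -B * L / (math.pi * 3))        # should nearly agree
print(fourier_partial(0.25, 2000), f(0.25))        # partial sum vs. exact
```

Because the mode amplitudes fall off only as 1/n, convergence here is slow but steady, consistent with the discussion of Fig. 5 above.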

Figure 5: Plot of f(x) in the domain [0, L], with the Fourier series expansion truncated after including different numbers of terms. Clearly, the inclusion of higher-order terms improves the overall approximation.

Figure 6: Plot of an infinitely periodic function. The Fourier series for a single domain describes an infinite array of periodic copies of the same function, translated by one domain length, L.

Finally, recall that for a discontinuous (non-analytic) function, the Taylor series near the point of discontinuity does not converge, providing a poor "fit" to a discontinuous function. However, the convergence of the Fourier series does not require the function to be analytic. Any function, even a discontinuous one, can be decomposed into a Fourier series that converges as the number of terms included in the series goes to $\infty$.
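To illustrate this last point, consider a square wave, a standard example of a discontinuous function. A minimal sketch (plain Python) uses the well-known series for a wave that is +1 on the first half of the domain and -1 on the second, $(4/\pi) \sum_{n \text{ odd}} \sin\left(\frac{2\pi n}{L} x\right)/n$:

```python
import math

L = 1.0

def square(x):
    """Square wave: +1 on (0, L/2), -1 on (L/2, L), extended periodically."""
    return 1.0 if (x % L) < L / 2 else -1.0

def square_partial(x, n_max):
    """Truncated Fourier series of the square wave:
    (4/pi) * sum over odd n of sin(2 pi n x / L) / n."""
    return sum(
        (4 / math.pi) * math.sin(2 * math.pi * n * x / L) / n
        for n in range(1, n_max + 1, 2)
    )

# Away from the jumps, the partial sums converge to the function value.
print(square_partial(0.25, 2001), square(0.25))
```

Even though no Taylor series could represent this function across its jump, the Fourier partial sums approach it at every point away from the discontinuities as more modes are included.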
