Florida State University Libraries

Electronic Theses, Treatises and Dissertations
The Graduate School

2014

Inverse Problems in Polymer Characterization

Arsia Takeh

Follow this and additional works at the FSU Digital Library. For more information, please contact [email protected]

FLORIDA STATE UNIVERSITY

COLLEGE OF ARTS AND SCIENCES

INVERSE PROBLEMS IN POLYMER CHARACTERIZATION

By

ARSIA TAKEH

A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy

Degree Awarded: Summer Semester, 2014

Copyright © 2014 Arsia Takeh. All Rights Reserved.

Arsia Takeh defended this dissertation on May 29, 2014. The members of the supervisory committee were:

Sachin Shanbhag Professor Directing Dissertation

William Oates University Representative

Anke Meyer-Baese Committee Member

Peter Beerli Committee Member

Jim Wilgenbusch Committee Member

The Graduate School has verified and approved the above-named committee members, and certifies that the dissertation has been approved in accordance with university requirements.

Dedicated to:

My mother, father and sister

ACKNOWLEDGMENTS

This work would not have been possible without the endless support, guidance and efforts of many people. Foremost, I would like to express my profound gratitude to my advisor, Prof. Sachin Shanbhag, who has been a constant source of guidance, patience and support over the last five years. I am deeply grateful for his patience during my years of research, and for the trust that carried me through a PhD program in computational science despite my undergraduate degree being in chemical engineering. For everything you have done for me, Prof. Shanbhag, I thank you and I am indebted.

I am also extremely thankful to my committee members, Prof. Anke Meyer-Baese, Prof. William Oates, Prof. Peter Beerli and Dr. Jim Wilgenbusch, for their continued faith in and encouragement of my efforts.

I would like to thank all the Scientific Computing staff at Florida State University and the Center for Global Engagement staff for all their administrative help through these years. I thank the Research Computing Center staff at Florida State University for making high-performance and high-throughput computing resources available. I gratefully acknowledge the funding sources that made my Ph.D. work possible; part of this work was funded by the National Science Foundation.

I am indebted to many friends and family for their unconditional love and support. I am sincerely thankful to Dr. Shahriar Rouhani, Haleh Ashki, Yassi Ashki, Mehdi Shayan, Hoda Mirafzal, and Asal Mohamadi, who supported me during these years and encouraged me to strive towards my goal. Thank you to my parents and my lovely sister, Sepideh, for standing by me all these years and supporting me through every single challenge of my life; words alone cannot express the thanks I owe them. I would also like to thank my uncles, Dr. Alireza Sedghi and Dr. Ramin Berenji, and my aunt, Dr. Golnar Berenji, for their constant support, love, encouragement and guidance.

TABLE OF CONTENTS

List of Tables
List of Figures
List of Symbols
List of Abbreviations

1 Introduction
  1.1 Polymers
  1.2 Inverse Problems
    1.2.1 Polyolefins
  1.3 Rheology
  1.4 Branching
  1.5 Polymer Characterization
    1.5.1 Chemical Composition Distribution of Random Copolymers
    1.5.2 Long-chain Branching Characterization
    1.5.3 Rheological Characterization
  1.6 Motivation and Scope

2 Inferring Comonomer Content using Crystaf
  2.1 Crystaf Model
  2.2 Dependence of Parameters on the Structure and Operating Conditions
    2.2.1 Classic and Expanded Parameter Sets
    2.2.2 Training and Testing Datasets
  2.3 Experimental Data
  2.4 Inferring Average Comonomer Content
  2.5 Results
    2.5.1 Effect of Selection of Training Datasets

3 LCB Detection and Measurement
  3.1 Method
    3.1.1 Model Data
    3.1.2 Rheological Probes
  3.2 Results
    3.2.1 LCB Detection
    3.2.2 LCB Measurement

4 Determination of the Continuous and Discrete Relaxation Time Spectrum
  4.1 Relaxation Spectra
  4.2 Method
    4.2.1 Continuous Relaxation Spectrum
    4.2.2 Discrete Relaxation Spectrum
    4.2.3 Error Estimation
  4.3 Program
    4.3.1 Input Data
    4.3.2 Interfaces
    4.3.3 Functions

5 Conclusion
  5.1 Chemical Composition Distribution
    5.1.1 Future Work
  5.2 LCB Detection and Measurement
    5.2.1 Future Work
  5.3 Continuous and Discrete Relaxation Time Spectrum
    5.3.1 Future Work

Bibliography
Biographical Sketch

LIST OF TABLES

2.1 Molecular characteristics of experimental data considered in this paper.

3.1 Rheological probes of Stadler[101] that are adopted in this work.

LIST OF FIGURES

1.1 (a) A linear homopolymer, (b) a diblock copolymer, and (c) a random copolymer.

1.2 Architecture of a polymer chain: (a) a linear chain, (b) a branched chain, and (c) a cross-linked polymer.

1.3 The distribution of bm for HDB1.[114] Experimentally determined values are indicated by red lines, the marginal distribution of bm from simulations in the absence of molecular weight information by the unfilled bars, and values from the simulations with prior molecular weight information by the blue filled bars.

1.4 Classification of polyethylenes according to density (and branching).[96]

1.5 Time profile of (a) strain, (b) stress response in an elastic solid, (c) stress response in a viscous fluid, (d) stress response in a viscoelastic material.

1.6 CCD of a typical LLDPE.[96]

1.7 Schematic diagram of a Crystaf vessel.[6]

1.8 Effect of comonomer content on Crystaf profiles of ethylene/1-hexene copolymers.[6]

1.9 Relationship between cooling rate and Crystaf peak temperature for ethylene/1-hexene copolymers.[6]

2.1 Integral Crystaf profile of an ethylene/1-hexene sample with φ = 1.50 mol % of 1-hexene, and Mn = 35,000, at a cooling rate of 0.1 °C/min.

2.2 Differential Crystaf profile of an ethylene/1-hexene sample with φ = 1.50 mol % of 1-hexene, and Mn = 35,000, at a cooling rate of 0.1 °C/min.

2.3 Illustration of ethylene sequences (ES) and longest ethylene sequences (λ) in a copolymer sample.

2.4 Weight distribution of the longest ethylene sequence of a sample with φ = 1.50 mol % of 1-hexene, and Mn = 35,000.

2.5 X(λ, T) for three fractions of λ = 100, 300, 1500 of a sample with φ = 1.50 mol % of 1-hexene, and Mn = 35,000, and model parameters n = 4.44, k = 1.0×10⁻⁵, A = 85, and B = 630.

2.6 Estimated Gibbs-Thomson parameters (A and B) for the Crystaf model as a function of comonomer content.[3]

2.7 Estimated Avrami parameters for the Crystaf model. The parameter k is an average value. (The solid lines are only an aid to the eye; the dashed line is an average value for n.) Comonomer content of the samples EH06, EH15, and EH31 is 0.68%, 1.51%, and 3.14%, respectively.[3]

2.8 The function ǫ(φ) for dataset 17 from table 2.3 (φexp = 2.31), derived using the model parameters described in the text. The φ which minimizes ǫ is denoted φmodel.

2.9 Distribution of the difference between comonomer content derived from the model and from experimental data for (a) 5 TDS with 9 parameters, (b) 10 TDS with 9 parameters, and (c) 5 TDS with 4 parameters. The main text describes how to interpret these box plots.

2.10 Fraction of samples with |∆φ| < 1.0.

2.11 Cost function Φ corresponding to the five different strategies for constructing training datasets. The blue symbols and lines denote the mean and standard deviation of the data, while the red lines extend from the minimum to the maximum values sampled.

2.12 |∆φ| corresponding to the five different strategies for constructing training datasets. The blue symbols and lines denote the mean and standard deviation of the data, while the red lines extend from the minimum to the maximum value sampled.

3.1 The storage and loss moduli of a linear sample with MW = 149.4 kg/mol and a branched sample with the same molecular weight and average number of branches on a single molecule, bm = 0.74.

3.2 Complex viscosity of a linear sample with MW = 149.4 kg/mol and a branched sample with the same molecular weight and average number of branches on a single molecule, bm = 0.74.

3.3 Loss tangent δ(|G∗|) of a linear sample with MW = 149.4 kg/mol and a branched sample with the same molecular weight and average number of branches on a single molecule, bm = 0.74.

3.4 LCB detection sensitivity responses; the probes shown in each area have the largest error between the linear and branched regimes. White lines show specific configurations whose cross sections at the shown dimension are investigated.

3.5 ǫ vs. bm plots of (a) MW = 80,400 g/mol and (b) MW = 446,500 g/mol for different rheological probes.

3.6 ǫ vs. MW plots of (a) bm = 0.63 and (b) bm = 1.74 for different rheological probes.

3.7 LCB measurement sensitivity responses; the probes shown in each area have the largest dǫ(θ)/dbm, and therefore the largest sensitivity. White lines show specific configurations for which the cross sections at the shown MW are investigated.

3.8 Derivative of the error function with respect to branching, dǫ(θ)/dbm, of (a) MW = 179,000 g/mol and (b) MW = 432,500 g/mol for different rheological probes.

4.1 A typical ρ vs. η, or L-curve. As λ is increased, the monotonically decreasing curve shows a sharp initial decline in which η falls steeply while ρ barely increases. λc (red circle) is determined using a heuristic and lies in the corner region where the two mentioned regions meet.[113]

4.2 (a) Continuous relaxation spectrum, H(τ); (b) experimental and inferred G′(ω) and G′′(ω): symbols represent the experimental data (blue circles and green squares, respectively), and the solid lines represent the corresponding dynamic modulus obtained from the inferred spectrum, H(τ).

4.3 (a) Dependence of the common regularization term, η²α, on different values of α; (b) relaxation spectra, h(τ).[81]

4.4 Relaxation time spectra of two high molecular weight monodisperse samples: (a) common regularization method, (b) edge-preserving regularization method (α = 2).[81] The dashed lines mark the range of relaxation times [τmin, τmax] in which the data characterize the spectrum; the reconstructed spectrum is therefore reliable only inside this range. The error bars indicate the estimated spectrum averaged over 1000 realizations.

4.5 Inferred continuous relaxation spectra using the maximum curvature method to determine the regularization parameter λ.

4.6 Plot to determine the optimum number of discrete modes (Nopt). The relative error ǫ(N) between the input data and the G∗(ω) inferred from the DRS is plotted on the left y-axis, while the condition number of the resulting linear least-squares problem is plotted on a logarithmic scale on the right y-axis.[113]

4.7 The blue line is the continuous relaxation spectrum, h(τ), and the black squares represent the discrete relaxation spectrum with N = 9 modes.

4.8 Estimated continuous relaxation spectra.

4.9 Estimated continuous relaxation spectra with different regularization parameters and levels of noise: (a) λ = 10⁻⁷, σ = 0.02; (b) λ = 10⁻², σ = 0.02; (c) λ = 10⁻⁷, σ = 0.05; and (d) λ = 10⁻², σ = 0.05.

4.10 ReSpect structure; the relation between the two regimes and their sub-category functions is shown.

4.11 ReSpect GUI panel in Matlab.

4.12 Plotting options (shown in the red box) for CRS and DRS.

4.13 The left plot represents the dynamic modulus data: green symbols (lines) represent the experimental dynamic modulus (the dynamic modulus from the corresponding CRS), respectively. The right plot represents the extracted continuous relaxation spectrum.

4.14 The left plot represents the dynamic modulus data: green symbols (lines) represent the experimental dynamic modulus (the dynamic modulus from the corresponding DRS), respectively. The right plot represents the extracted discrete relaxation spectrum with N = 9 modes.

LIST OF SYMBOLS

The following symbols are used throughout the document; they represent quantities that I have tried to use consistently.

K(t, s)   Kernel function
bm        Average number of branches per molecule
σ(t)      Stress
γ         Strain
h(t)      Relaxation modulus
ω         Frequency
γ0        Maximum amplitude of the strain
δ         Phase shift
G′        Dynamic storage modulus
G′′       Dynamic loss modulus
η0        Zero-shear viscosity
δ(|G∗|)   Rheological probe
N         Number of modes in the DRS spectrum
φ         Average comonomer content
Ti        Initial temperature
Tf        Final temperature
Mn        Number-averaged molecular weight
n         Avrami parameter
k         Avrami parameter
A         Crystaf model parameter
B         Crystaf model parameter
W(λ)      LES distribution
Pa        LES distribution modeling parameter
rN        Number-averaged chain length
Mco       1-olefin molecular weight
Mmo       Ethylene molecular weight
Td        Dissolution temperature
Td°       Equilibrium dissolution temperature
λ         LES
Ts        Supercooling temperature
α         Model constant proportional to the enthalpy of fusion
Tl        Lag temperature
rc        Cooling rate
X         Crystallinity
Cmodel    Model polymer solution concentration
Cexp      Experimental polymer solution concentration
Φ         Cost function
k01, k02, k1   Expanded model parameters for parameter k

A0, A1    Expanded model parameters for parameter A
B0, B1, B2   Expanded model parameters for parameter B
ǫ(φ)      Error function
∆φ        Difference between model and experimental φ
|η∗(ω)|   Complex viscosity
|η∗(ω × η0^lin)|/η0^lin   Complex viscosity normalized by the η0 expected from the molar mass, η0^lin
|η∗(ω × η0)|/η0   Complex viscosity normalized by η0
G′(ω × η0^lin)   Storage modulus normalized by the η0 expected from the molar mass, η0^lin
G′′(ω × η0^lin)  Loss modulus normalized by the η0 expected from the molar mass, η0^lin
G′(ω × η0)   Storage modulus normalized by η0
G′′(ω × η0)  Loss modulus normalized by η0
δ(|G∗|)   Loss tangent
τe        Equilibrium time
N0        Number of monomers per entanglement segment
η′        Storage viscosity
η′′       Loss viscosity
θlin      Rheological probe of a linear sample
θbr       Rheological probe of a branched sample
θmin      Minimum of a specific rheological probe
θmax      Maximum of a specific rheological probe
δlin      Loss tangent of a linear sample
δbr       Loss tangent of a branched sample
δmin      Minimum loss tangent of a specific sample
δmax      Maximum loss tangent of a branched sample
dǫ(θ)/dbm   Derivative of the error function with respect to branching, bm
gi        Modes of the DRS
V(λ)      Cost function for relaxation spectrum determination
η²        Norm of the curvature
ρ²        Relative squared error between the experimental and inferred complex moduli
λc        Optimal regularization parameter
η²α       Norm of the curvature for the edge-preserving regularization method, where α is the edge-preserving parameter
t         Curvature indicator
ns        Number of points at which the CRS is calculated
N         Optimum number of points for the DRS
µ         Mean value
σ         Standard deviation

LIST OF ABBREVIATIONS

EOR      Enhanced oil recovery
PVC      Polyvinyl chloride
SCB      Short chain branch
LCB      Long chain branch
LDPE     Low density polyethylene
HDPE     High density polyethylene
LLDPE    Linear low density polyethylene
MCMC     Markov chain Monte Carlo
CCD      Chemical composition distribution
MWD      Molecular weight distribution
LALLS    Low-angle laser light scattering
TREF     Temperature rising elution fractionation
Crystaf  Crystallization analysis fractionation
DSC      Differential scanning calorimetry
LES      Longest-ethylene sequence
NMR      13C nuclear magnetic resonance
SEC      Size exclusion chromatography
MALLS    Multi-angle laser light scattering
CRS      Continuous relaxation spectra
DRS      Discrete relaxation spectra
pp       Overall propagation probability
cp       Ethylene/1-olefin choice probability
pm       Probability of ethylene propagation
cpp      1-olefin propagation probability
EH       Ethylene/1-hexene samples
TDS      Training datasets
BoB      Branch-on-branch model
GUI      Graphical user interface

CHAPTER 1

INTRODUCTION

Different eras throughout human history have been tremendously influenced, or even defined, by the use of newly discovered materials. From this standpoint, historians have named eras after the major materials used during them: the Stone Age, the Bronze Age, and the Iron Age. Following this trend, one may assert that the twentieth century was the Polymer Age.[82] The development and introduction of polymeric materials is the result of extensive effort and research in the early years of the 20th century. We started using polymers a long time ago, although without realizing that we were dealing with macromolecules, or polymers (polymer means "many units" in Greek). Bakelite was the first synthetic resin, and it motivated further research into the structure and properties of macromolecules and their mass production. Polyethylene, the most popular polymer by production volume, was produced with the advent of a new type of catalyst that made it possible to control chain properties.

1.1 Polymers

Today, polymers are present in many aspects of our lives, and their applications are broad. Examples of familiar polymers used in daily life are:

• Many sealants and adhesives are polymers, such as cyanoacrylate adhesives (which polymerize spontaneously in the presence of moisture), also known as instant glues. They are also water resistant and widely used in aquarium seals.

• Most polymers are non-conductive; polyethylene, polypropylene and polytetrafluoroethylene (Teflon®) are widely used electrical insulators. On the other hand, oxidized doped polyacetylenes have shown high conductivity.

• Nylon fibers are used as a replacement for silk. Nylon fibers are also used in vehicle tires.

• Polymers have many medical applications. Acrylic-resin denture bases, dental fillings, oxygen-permeable polymeric contact lenses, replacement joints, and polymeric wound closures are just a few examples of their broad usage in this field.

• Drug delivery is one of the cutting-edge applications of polymeric materials in the health industry.

• Alkaline-surfactant-polymer flooding is one of the enhanced oil recovery (EOR) methods, and its use is growing very rapidly in the oil industry. EOR is used to compensate for the reduction in production caused by the pressure drop in oil reservoirs.

Polymeric materials, or macromolecules, are built up from many similar repeating molecular units (monomers) that are linked together by covalent bonds. Polymers are represented as chain-like structures; the main chain is called the backbone of the polymer. Different types of polymers are constructed by modifying the atoms on the backbone. A homopolymer consists of monomers of a single type, whereas a copolymer results from monomers of different chemistry. An A-B copolymer has two constituent monomers, A and B, and these can be arranged in different patterns: randomly, in blocks, etc. Figure 1.1 depicts a linear homopolymer, a diblock copolymer, and a random copolymer. a) A-A-A-...-A-A-A b) A-...-A-B-...-B c) A-B-A-A-B-A-B-B

Figure 1.1: a) A linear homopolymer, b) A diblock copolymer, and c) A random copolymer

Figure 1.2 shows three architectures of a polymeric molecule: (a) a linear chain, (b) a branched chain, and (c) a cross-linked chain. Linear polymers are polymers in which monomeric units are linked together to form a single linear chain; some examples are high-density polyethylene, nylon, and PVC. If side groups attach to the backbone, the result is a branched structure. Branches may be short (short-chain branching, SCB) or long (long-chain branching, LCB); this will be discussed later in this chapter. Branched structures can give polymers different characteristics depending on branch length, density and frequency. LDPE (low-density polyethylene) and starch are two examples of branched polymers. Separate backbones can connect to each other through chemical bonds, resulting in cross-linked structures (figure 1.2(c)).[31] Natural rubber is a cross-linked polymer.


Figure 1.2: Architecture of a polymer chain: a) A linear chain, b) A branched chain, and c) A cross-linked polymer

1.2 Inverse Problems

Let us consider a general example to introduce the idea of "inverse problems" and some of their features. There are situations where we are interested in calculating quantities that are not directly measurable. If a relation between these unmeasurable quantities and the measurable data is available, and this relation is linear, we can solve the problem by a least-squares method. Many physical phenomena, however, are described through functions that are not directly measurable. In these cases, finding the function is a very difficult problem, even if there is a linear relation between the function and the data. We can consider the Fredholm integral of the first kind as an example:

g(t) = ∫ K(t, s) f(s) ds    (1.1)

where g(t) is the measurable data, f(s) is the desired function, and K(t, s) is the kernel function. This is an ill-posed problem, and a simple least-squares approach is not capable of solving it. Several methods have been suggested for solving first-kind Fredholm integral problems.[16,28] Among them, Tikhonov regularization is the best-known and most successful method for calculating various physical and chemical

functions. Another well-known method for solving Fredholm integral problems is the maximum entropy method.[47] In several physical experiments, a nonlinear Fredholm equation must be solved in order to determine the non-measurable function. Although the nonlinear integral equation can take several forms, for the relaxation time spectrum, which is of interest here, it takes only the following form:

g(t) = ∫ K(t, s) exp(f(s)) ds    (1.2)

Generally, inverse problems are ill-posed, while forward problems are well-posed. A well-posed problem is characterized by[10]:

• Existence: the problem is internally consistent, guaranteeing that a non-trivial, realistic solution exists.

• Uniqueness: for all admissible data, the solution is unique.

• Stability: small changes in the output (the measured data) correspond to only small changes in the input.

An ill-posed problem fails to satisfy these conditions. Many practically important problems in mathematical physics, geology, materials, and medicine lead to ill-posed problems; in many cases, the stability condition is violated. For the solution of such problems, one needs special methods, because an arbitrarily small data error might cause an arbitrarily large error in the solution. Partial restoration of stability can be obtained by regularization methods such as Tikhonov regularization for the Fredholm integral problem. The basic philosophy of regularization is to approximate the original ill-posed problem by a sequence of closely related well-posed problems whose solutions converge to the solution of the original problem as the noise level decreases to zero.

Structure-property relationships are a good example of an inverse problem. In chemistry-related fields, if we have the structure of a material, we can use the proper formulations to determine the desired property, while the inverse route is not as direct as the forward one. There may be methods and experiments that can estimate the desired structures, but there are many instances where such approaches fail. As an example, long-chain branching (LCB) is a structural characteristic that strongly affects the rheological properties of polyethylenes. The branching structure (the density of branch points, the branch lengths, and the locations of branches) is complicated in branched polymers, and it is very critical for the study of the effect of long-chain branching

on the rheological properties. This becomes more complicated when we are dealing with very low levels of long-chain branching, where analytical methods[27,49,92,130] cannot accurately detect and quantify trace amounts of LCB. In such cases, one has to devise an indirect method to obtain the required information.[86,87,114] The recovery of parameters characterizing a phenomenon (such as branching in the example above) from indirect measurements (viscoelastic measurements) can be a tedious and cumbersome process. Any class of indirect measurements can only recover certain information about the phenomenon. In order to develop practical mathematical models that relate the indirect measurements to the specific information that is to be recovered, we often need to rely on simplifying assumptions; this process introduces its own set of errors.

Assuming that a mathematical model describing a phenomenon is available, we define the forward problem as the prediction of observations given the parameter values. In the example above, the calculation of the rheological properties given the structural parameters constitutes the forward problem; such forward approaches may be based on empirical correlations or on microscopic theories, which have become very intricate.[26,57,58,74,89,119] The inverse problem consists of using rheological properties to infer the LCB. Rheology is very sensitive to structure, which makes this an attractive feature to exploit. Shanbhag[86] proposed a data analysis method using a Bayesian formulation and a Markov-chain Monte Carlo (MCMC) algorithm.[86,87,114] This method is based on the idea of seeking not the "optimal solution" but rather the distribution of possible answers that are consistent with the experimental data. This choice arises because of multiple solutions: different structures can produce approximately similar rheological responses.
This method allows one to (i) identify the number of components in an unknown mixture, due to the intrinsic Occam's razor in Bayesian analysis[37,64,71], (ii) accurately predict the composition of mixtures in the absence of degenerate solutions, and (iii) describe multiple solutions when more than one combination of structural parameters is consistent with the rheology. Figure 1.3 presents the branching distribution of the sample HDB1 when prior molecular weight information is available, so that the Markov chain samples only one parameter, bm. The experimental value is depicted as a thin red line. The histograms, with or without prior information, are in good agreement with the experimental data. The relaxation time spectrum is another well-known example of an inverse problem. We discuss this

5 0 0.2 0.4 0.6 0.8 1 b m

Figure 1.3: The distribution of bm for HDB1.[114] Experimentally determined values are indicated by red lines, the marginal distribution of bm from simulations in the absence of molecular weight information by the unfilled bars, and values from the simulations with prior molecular weight information by the blue filled bars.

problem in detail in section 1.5.3 and chapter 4.
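The idea of sampling a distribution of answers rather than a single optimum can be illustrated with a toy Metropolis sampler. The forward model, noise level, and parameter values below are all hypothetical, and this is not the forward model used in the cited work; it is only a minimal sketch of the MCMC machinery:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy forward model: maps a "branching" parameter b to two
# observables (a stand-in for a real rheology prediction, not the actual model).
def forward(b):
    return np.array([b, b ** 2])

b_true = 0.74
noise = 0.05
data = forward(b_true) + noise * rng.standard_normal(2)  # synthetic measurement

def log_post(b):
    # Flat prior on [0, 2]; Gaussian likelihood with known noise level
    if not 0.0 <= b <= 2.0:
        return -np.inf
    r = forward(b) - data
    return -0.5 * np.sum(r ** 2) / noise ** 2

# Metropolis sampling: collect the *distribution* of b consistent with the data
b, lp = 1.0, log_post(1.0)
samples = []
for _ in range(20000):
    prop = b + 0.1 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
        b, lp = prop, lp_prop
    samples.append(b)
samples = np.array(samples[5000:])            # discard burn-in
```

The collected samples approximate the posterior over b; their spread quantifies how tightly the (toy) data constrain the parameter, which is exactly the information a single point estimate discards.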

1.2.1 Polyolefins

Polyolefins are present in many aspects of our lives, and they are the largest class of synthetic polymers produced and used.[23] Their applications are broad, from grocery bags to prosthetic implants and airplane parts. Despite this variety, they are made of monomers composed of only two elements, carbon and hydrogen. As discussed earlier, by synthesizing different structures, we can obtain a wide range of properties and applications. Polyolefins are mostly made of ethylene and propylene, and polyethylene is one of the major types of polyolefins. Polyethylene resins are copolymers of ethylene with varying fractions of an α-olefin comonomer (1-butene, 1-hexene, etc.); copolymerization is used to decrease the crystallinity of the polyolefins. The global production of polyethylene was estimated to be 70 million tons (154.32 billion pounds) in 2009, which is about half of the annual production of plastics and resins in the world. Polyethylene in its current form was discovered by Gibson and Fawcett, two research chemists who were studying high-pressure, high-temperature reactions in 1933. Polyethylenes have since been synthesized and characterized over a range of branch levels and densities. In terms of annual production volume, HDPE (High Density Polyethylene), LDPE

(Low Density Polyethylene) and LLDPE (Linear Low Density Polyethylene) are the most important polyethylene classes; they are represented in figure 1.4. The introduction of metallocene catalysts was a breakthrough in polymer research and industry: their single-site structure gave more control over the synthesis of polymers, a remarkable advantage over conventional Ziegler-Natta catalysts. Metallocene catalysts produce almost uniform polymer chains (compared to Ziegler-Natta catalysts), with a narrow molecular weight distribution and controlled branching. A branched polymer can be comb-like in structure, with either short-chain branching (SCB) or long-chain branching (LCB). SCBs usually have fewer than 30 carbons and interfere with polymer crystallization. LCBs have a tremendous effect on melt rheological behavior; even very small quantities of LCB change the polymer processing properties significantly. Figure 1.4 shows three important industrial polymers with these branching architectures: LDPE (SCBs and LCBs), LLDPE (SCBs), and HDPE (a scarce amount of SCBs).

Figure 1.4: Classification of polyethylenes according to density (and branching)[96]

1.3 Rheology

Rheology seeks to describe the flow behavior of materials using constitutive equations between stress and strain. Polymers lie between two extreme states, the purely viscous and the purely elastic; they exhibit properties of both extremes, which is why we call them viscoelastic materials. Linear rheology (linear viscoelasticity) refers to processes that are in

the vicinity of small deformations. In the linear viscoelastic regime, constitutive equations (relationships between the generated stress and the time derivatives of the applied deformation) are linear. Stress, σ, is defined as the force per unit cross-sectional area, and strain, γ, is defined as the relative change in dimensions and angles of the deformed sample. An elastic solid and a viscous fluid show different responses to a step strain: figure 1.5(a) shows the strain imposed on the material within an infinitesimal period of time; figure 1.5(b) shows the resulting stress response of an elastic solid, which is proportional to the strain; figure 1.5(c) shows the stress response of a viscous fluid, which is proportional to the strain rate (a sharp step function); and figure 1.5(d) shows the stress response of a viscoelastic material, which has both of the aforementioned characteristics and not solely one of them.


Figure 1.5: Time profile of (a) strain, (b) stress response in elastic solid, (c) stress response in viscous fluid, (d) stress response in viscoelastic material.

Now that we have defined the concepts of strain and stress, the physical meaning of the relaxation modulus, h(t), becomes clearer. Generally, the ratio of the stress to the strain is called a modulus, and in this time-dependent measurement we can define it as:

h(t) = σ(t) / γ    (1.3)

Many of the real-life experiments in the polymer industry are not transient; they are categorized as periodic or dynamic experiments. In these types of experiments, the strain and stress vary periodically with a frequency ω. A constitutive equation can be written as follows for the periodic strain:

γ(t) = γ0 sin(ωt) (1.4)

where γ0 is the maximum amplitude of the strain. The resulting stress is also sinusoidal and can be written as:

σ(t) = σ0 sin(ωt + δ) (1.5)

where σ0 is the stress amplitude and δ is the phase shift, which is also called the loss angle. Now that we have defined all the required parameters, the storage and loss moduli (G′ and G′′, respectively) can be calculated using the following formulas:

σ(t) = γ0 [G′(ω) sin(ωt) + G′′(ω) cos(ωt)]

G′ = (σ0/γ0) cos(δ) (1.6)

G′′ = (σ0/γ0) sin(δ)

Parallel-plate rheometry is one of the most powerful methods for determining rheological properties of a polymer, including the storage and loss moduli. In this method, a sample is subjected to a periodic sinusoidal shear strain of frequency ω (equation 1.4). The resulting shear stress (equation 1.5) is a sinusoid of the same frequency, generally out of phase with the applied strain. Linear viscoelasticity is important in the characterization of polymers, as it is very sensitive to their structural properties; it can be used to extract molecular weight and branching information. Analytical rheology refers to the use of linear viscoelasticity data to infer the structural information of polymers.
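As a numerical illustration of equation 1.6 (the readings below are invented, not from any experiment):

```python
import math

# Hypothetical oscillatory-shear readings: strain amplitude gamma0,
# stress amplitude sigma0, and phase shift delta (equations 1.4-1.5).
gamma0 = 0.01
sigma0 = 250.0                # Pa
delta = math.radians(30.0)    # loss angle

# Storage and loss moduli from equation 1.6.
G_storage = (sigma0 / gamma0) * math.cos(delta)   # G'
G_loss = (sigma0 / gamma0) * math.sin(delta)      # G''

# tan(delta) = G''/G' measures the viscous-to-elastic balance.
print(G_storage, G_loss, G_loss / G_storage)
```

For δ = 0 the response is purely elastic (G′′ = 0); for δ = 90° it is purely viscous (G′ = 0).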

1.4 Branching

In order to convert a polymer into a useful product, it needs to be processed. The presence of branches in polymer melts strongly affects their flow behavior; the effect depends on the length, amount, and location of the branches. Based on length, two distinct classes of branches are defined: short-chain branches (SCB) and long-chain branches (LCB). SCBs have fewer than approximately 30 carbons and interfere with polymer crystallization; they result from copolymerization of α-olefin comonomers with polyolefins (like polyethylenes). As the branch length increases, we move from SCB to LCB, whose length is roughly twice the critical entanglement length of a linear polymer (critical entanglement length ≈ 90 carbons). LCB polymers can form lamellar crystals, so they can be crystallized. LCBs have a tremendous effect on the rheological behavior of a polymer melt;[15] even very small quantities of LCB change the processing properties significantly. Unlike LCBs, SCBs do not have an important effect on rheological properties. One of the important effects of LCBs is control of the flow behavior of the polymer: adding LCB while keeping the molecular weight constant decreases the size of the molecules, which reduces the viscosity at low molecular weights and increases it at high molecular weights. The importance of this process lies in the fact that it does not affect the crystallinity of the sample.

1.5 Polymer Characterization

Polymer characterization is the analytical branch of polymer science, and seeks to measure properties of polymers such as chemical, structural, thermal, mechanical, and morphological properties. In this work, we restrict ourselves to chemical and mechanical characterization.

1.5.1 Chemical Composition Distribution of Random Copolymers

Branched polyolefins are technologically important materials. When side branches are synthetically attached to a linear polymer backbone, the processability and mechanical properties of the polymer are significantly altered.[14,18] If these branches contain more than 50-100 carbon atoms, they are referred to as long-chain branches. Long-chain branching has a pronounced effect on the rheological properties of polymer solutions and melts, and can be manipulated to enhance shear

thinning and strain hardening.[36,88,131] In this work, we consider random short-chain branched (SCB) linear α-olefins, specifically ethylene-1-hexene. Here, short side arms with only a few carbon atoms (four in this case) dangle from the backbone as a consequence of copolymerization of ethylene with a comonomer (hexene). As a class, linear α-olefins are important precursors of many petrochemical products such as lubricants, plasticizers, detergents, etc.[56] It is well-established that SCB affects kinetic properties of the sample, such as the rate of crystallization; structural properties, such as morphology and blend miscibility; and mechanical properties, such as impact strength and resistance to crack growth.[1,21,35,38,53,54,127] These properties can be engineered to target specific applications by controlling the length and sequence of SCB, which, for linear α-olefins, is often called the chemical composition distribution (CCD).[60] The main structural distributions for polymers are the molecular weight distribution (MWD) and the CCD. The chemical structure of many polymers is rather complex because polymerization, whether carried out in a laboratory or an industrial plant, does not necessarily produce identical molecules. A polymeric material typically consists of a distribution of molecular sizes, and sometimes also of shapes. Chromatographic methods like size exclusion chromatography, often in combination with low-angle laser light scattering (LALLS) and/or viscometry, can be used to determine the molecular weight distribution and, to a limited degree, the extent of LCB of a polymer. Copolymers with SCB, such as LLDPE, require a different approach. The CCD of a copolymer describes the distribution of comonomer fraction in its chains.
Figure 1.6 shows that polymer chains with the highest crystallizability (lower SCB) precipitate first, at higher temperatures; this is followed by precipitation of chains with lower crystallizability (higher SCB) at lower temperatures. The physical properties of copolymers are influenced by how comonomer units are distributed on the copolymer chains. One of the important distributions used to describe them, as mentioned earlier, is the CCD; it describes the distribution of the average comonomer content of the copolymer chains, thus reflecting intermolecular heterogeneity. The average comonomer content can be determined experimentally, but complete composition distributions can only be inferred by simulation studies. Theories for various copolymerization models, for both binary and multicomponent copolymers, are well developed. On the other hand, a general theory for the CCD of multicomponent copolymers is not yet available. For linear binary random copolymers, the analytical expression

Figure 1.6: CCD of a typical LLDPE[96]

for describing the weight distribution of kinetic chain length and chemical composition developed by Stockmayer is useful for understanding the microstructure. A similar equation to Stockmayer's distribution was developed by Costeux et al. using a statistical approach. They investigated the CCD of random binary copolymers using Monte Carlo simulations and developed an analytical expression describing the distribution function. The results from their analytical expression agree well with those from Monte Carlo simulations and from Stockmayer's distribution. The relationship between the CCD of copolymers and their mechanical and thermal properties is one of the important structure-property relationships that have to be properly quantified for polyolefins. In order to establish these structure-property relationships, one needs an analytical technique that can accurately determine the CCD of copolymers. In the case of semi-crystalline copolymers,1 three analytical techniques are used: (i) temperature rising elution fractionation (TREF),[95,126] (ii) crystallization analysis fractionation (Crystaf),[68-70] and (iii) solution differential scanning calorimetry (DSC).[2,51,61,90,105-109] All of these methods rely on the systematic

1 The reason those materials are called “semi-crystalline” is that some fraction of the polymer remains uncrystallized, or amorphous, when the polymer is cooled to room temperature. The amorphous polymer becomes trapped between the growing crystals. As a result of the highly entangled nature of the polymer chains, the movement of the amorphous polymer becomes restricted.

variation in the crystallizability of SCB polymers with the CCD. For random copolymers, the crystallization temperature is inversely related to the density of branching: molecules with a low density of SCB crystallize at higher temperatures, while those with a high density of SCB crystallize at lower temperatures. If the amount of polymer crystallizing at any given temperature can be continuously monitored in a sweep from high to low temperatures, the CCD can be indirectly inferred using the relationship between comonomer content and crystallization temperature. Although TREF and Crystaf produce very similar results, the operation of TREF is more complex and time-consuming than that of Crystaf. A schematic diagram of one crystallization vessel of Crystaf is shown in figure 1.7. This method will be described in more detail in chapter 2.

Figure 1.7: Schematic diagram of a Crystaf vessel[6]

In Crystaf, a very dilute solution of a polymer is heated to a temperature sufficiently above the melting temperature. Dilute solutions are recommended to avoid inter-chain interactions and co-crystallization; their drawback is a low signal-to-noise ratio during sampling. As the sample is cooled slowly, the concentration of the polymer remaining in solution is sampled as a function of temperature. This yields the cumulative, or integral, Crystaf profile. The first derivative of this curve with respect to temperature gives the weight fraction of polymer that crystallizes at each temperature; this is the differential Crystaf curve (see figure 2.2).[68] The Crystaf profile, thus obtained, cannot be directly interpreted quantitatively. It has to be

deconvoluted, or translated, to indirectly infer the CCD. Calibration curves are often used to map Crystaf profiles to the CCD. These curves typically relate the peak crystallization temperature to the average comonomer content. However, Crystaf is sensitive to operating conditions, and hence a calibration curve obtained at, say, a particular cooling rate cannot be used at a different cooling rate. Several calibration curves have been reported in the literature on the basis of time-consuming experiments.[20,25,68,70,73,84,97] All structural parameters and operating conditions that affect crystallization can affect the Crystaf process as well; therefore, we need to consider the effect of each separately. These parameters include: (i) number-averaged molecular weight, (ii) average comonomer content, (iii) comonomer type, and (iv) cooling rate.
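The cumulative-to-differential conversion described earlier is a simple numerical derivative; a minimal sketch, with a smooth sigmoid standing in for real Crystaf data (all numbers are hypothetical):

```python
import numpy as np

# Hypothetical integral Crystaf profile: fraction of polymer still in
# solution during a cooling sweep from 90 to 30 C.
T = np.linspace(90.0, 30.0, 121)          # temperature grid, high -> low
C = 1.0 / (1.0 + np.exp(-(T - 70.0)))     # sigmoid stand-in for data

# Differential Crystaf profile: dC/dT is the weight fraction of polymer
# crystallizing at each temperature; np.gradient handles the spacing.
w = np.gradient(C, T)

# The maximum of the differential curve is the peak crystallization
# temperature, the quantity calibration curves are usually built on.
print(T[np.argmax(w)])
```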

1.5.1.1 Effect of structural and operational parameters.

1.5.1.1.1 Number-averaged molecular weight. The effect of chain length on Crystaf profiles was investigated by Nieto et al.[72], who used a series of ethylene homopolymers with different molecular weights. Soares et al.[97] investigated the effect by analyzing copolymers with the same comonomer composition but different molecular weights. Both investigations indicate that only samples with low molecular weight influence Crystaf profiles; they also observed that the Crystaf profiles of low-molecular-weight samples broaden toward the low-crystallinity (lower-temperature) end of the CCD.

1.5.1.1.2 Comonomer content. As mentioned before, the fraction of comonomer in the copolymer chains is the most important parameter affecting chain crystallizability; comonomers lower the crystallizability considerably. Sarzotti et al.[84] studied the effect of comonomer content on Crystaf profiles using a series of ethylene/1-hexene copolymers with different comonomer contents and approximately similar molecular weights; since these molecular weights are far beyond the threshold where molecular weight matters, even dissimilar values would have had a negligible effect. Figure 1.8 shows the effect of comonomer content on the examined samples. As expected, increasing the comonomer content moves the peak temperatures toward lower temperatures, a result of the hindering effect of comonomers on crystallization. It is also noticeable that increasing the comonomer content broadens the Crystaf profiles, following Stockmayer's bivariate distribution.

14 Figure 1.8: Effect of comonomer content on Crystaf profiles of ethylene/1-hexene copolymers.[6]

1.5.1.1.3 Comonomer type. da Silva Filho et al.[25] investigated the effect of comonomer type and found that Crystaf profiles for ethylene/1-butene and ethylene/1-octene copolymers are considerably different. These investigations lead to the conclusion that Crystaf peak temperatures depend on comonomer type for comonomers shorter than 1-octene. The rationale is that longer comonomers (1-decene, 1-tetradecene, etc.) are always excluded from the crystallite and therefore have no effect on Crystaf profiles, which depend strongly on the crystallites, whereas shorter comonomers are included in the crystallite and lower the crystallization temperature.

1.5.1.1.4 Cooling rate. Anantawaraskul et al.[5] investigated the effect of cooling rate on several ethylene/1-hexene copolymers with different comonomer contents. Figure 1.9 shows the result of their study: Crystaf profiles shift toward higher temperatures when lower (slower) cooling rates are applied. The typical cooling rate for Crystaf experiments is 0.1 ◦C/min.

1.5.1.2 Mathematical modeling of Crystaf. Crystaf has been the subject of a fair amount of mathematical modeling because of the convenience of the method. These studies have investigated the effects of different operational and structural parameters.[7,13,22,84,97]

Figure 1.9: Relationship between cooling rate and Crystaf peak temperature for ethylene/1-hexene copolymers[6]

Anantawaraskul et al.[3,4] proposed a semi-empirical mathematical model for Crystaf based on the distribution of the longest ethylene sequence (LES) in the copolymer, which is in good agreement with experimental data. They validated their model by testing it on homopolymers[4] and ethylene/1-olefin random copolymers[3]. Prior to their model, the effect of crystallization kinetics was usually neglected, and it was assumed that fractionation takes place at or near thermodynamic equilibrium. However, Crystaf is far from thermodynamic equilibrium, even at typical operating conditions such as a cooling rate of 0.1 ◦C/min.[4,5,12,13,22] Anantawaraskul et al.[8] leveraged their mathematical model to obtain calibration curves. The parameters of the model were estimated by fitting the model to well-characterized experimental data over a range of operating conditions. These parameters were then used to model Crystaf profiles at evenly spaced comonomer contents, and the peak temperatures from these simulated curves were used to produce calibration curves. While this represents an improvement over prior calibration efforts, it has a few drawbacks. By focusing solely on the peak temperature, it ignores most of

the features of the experimental and simulated curves. The shape of the profile, for instance, may contain useful information that would be worthwhile to interpret, given the uncertainties in the mathematical model and the experiments. A less significant drawback, which can be fixed easily, is the need to construct a new calibration curve at each operating condition.

1.5.2 Long-chain Branching Characterization

Long-chain branching greatly complicates the characterization of the molecular structure of polymers. This is partly a result of the elaborate procedures required to prepare samples with moderately uniform branching structures such as combs and stars.[27,46,75] The characterization of long-chain branches (LCBs) is crucial for understanding the viscoelastic behavior of branched polymers in the melt and, as a consequence, for the prediction of their processing behavior. The detection and the measurement of long-chain branches are two important issues in LCB characterization. When trace amounts of long-chain branching (LCB) are introduced into the backbone of a linear polyethylene molecule, dramatic changes in the linear and nonlinear rheology are observed.[14,36,63,121,131] These well-documented effects include a departure from the “3.4 power law” relating the zero-shear viscosity η0 to the weight-averaged molecular weight MW, unusually large sensitivity of η0 to temperature (higher flow activation energies),[76] enhanced shear thinning and strain hardening that lead to improved processability, etc. Branching in one form or another has existed in polyethylenes for a long time. However, recent developments in single-site metallocene catalyst technology have given us an extraordinary degree of control over molecular structure at industrial scale. Notwithstanding these advances in synthesis, the characterization of lightly branched polyethylenes has lagged behind. It has thus become imperative to develop analytical methods that enable us to accurately detect and quantify these trace levels of LCB.[49] A historically popular approach for diagnosing LCB in synthetic polymers is g-ratio analysis, where g is the ratio of the sizes of branched and linear polymers of the same molecular weight.
As Janzen and Colby[49] describe it in their widely cited paper, the “idea is to relate, quantitatively, experimental measurements sensitive to the sizes of polymer molecules in dilute solution, such as light scattering or intrinsic viscosities, to sizes predicted theoretically for structures having specific (regular or random) branching arrangements.” These methods rely on theoretical estimates of g for different branched structures provided by Zimm-Stockmayer equations[17,94,133]. Although using only g-ratio analysis to detect LCB is fraught with uncertainties[49], it continues to serve

as an important ingredient in more sophisticated multi-detector methods[110,131,132], as will be discussed later. A spectroscopic method for directly quantifying the number of long-chain branches is 13C nuclear magnetic resonance (NMR)[78]. This method is widely used as a benchmark against which other techniques are compared, despite several practical limitations. For example, NMR is unable to differentiate between short and long branches: it typically cannot distinguish between branches with more than about 10 carbon atoms in the side chain, which is far below the “long-chain” threshold of around 100-200 carbon atoms found to affect flow properties so dramatically[62]. Both NMR and g-ratio analysis are tedious to perform, require expert participation, and are expensive. In addition to these practical difficulties, both techniques measure fundamentally weak signals. Since the actual number of branch points is often a very small fraction of the total number of backbone carbons, the signal-to-noise ratio becomes a serious issue for 13C NMR[49,93]. Similarly, LCB causes only modest changes in size, which becomes an issue for g-ratio analysis. In stark contrast, the dynamics of polymer molecules are extremely sensitive to molecular structure, and small changes in LCB levels can cause order-of-magnitude changes in rheological properties. In addition, the experimental determination of the linear viscoelastic spectrum is more straightforward than the other techniques mentioned above. Thus, the use of rheology as an analytical technique (“analytical rheology”) to infer details of molecular structure is a compelling idea[57]. Numerous studies have sought to exploit the sensitivity of rheology to probe LCB in polyolefins.

Some use the departure of the melt viscosity η0 of a branched polymer from that expected for a linear chain of comparable molecular weight to quantify LCB via indices. Examples of such methods include the viscosity index of long-chain branching (v.b.i.)[85], the Dow rheology index (DRI)[55], the long-chain branching index (LCBI), etc.[92] A number of more recent studies seek to combine measurements from multiple detectors such as SEC-viscometry, light scattering, melt rheology, etc.[49,92,115,118,122,125] Wood-Adams and Dealy[130] used an empirical expression (later linked to molecular theory)[42] to relate LCB frequency to differences between the molecular weight distribution (MWD) curves obtained via chromatography and those implied by the complex viscosity through Shaw and Tuminello's MWD-viscosity transform for linear polymers[91]. Crosby et al.[24] suggested a dilution rheology method, in which the concentration-dependent variation of η0, in conjunction with molecular theory and rudimentary molecular weight information, was able to quantify the LCB in different families of branched polyethylenes. Similarly, van Ruymbeke et al.[120] developed a criterion that combines the molecular weight distribution and the linear viscoelastic response, using molecular theory, to detect LCB in sparsely branched polymers with relatively broad molar mass distributions. Robertson et al.[80] measured the LCB of polyethylenes with sparse to intermediate levels of LCB and relatively narrow molecular weight distributions from linear rheology alone, within certain temperature-frequency windows for the polymers. While the use of linear viscoelastic measurements for diagnosing LCB is widespread[131], nonlinear rheological measurements have also been employed. Doerpinghaus and Baird[29] parameterized the pom-pom model, which describes the nonlinear rheology of branched polymers semi-quantitatively[66], against shear and extensional rheology for different levels of LCB. Stadler and Munstedt[103,104] used size-exclusion chromatography with multi-angle laser light scattering (SEC-MALLS) together with shear rheological measurements to analyze the viscosity functions of LCB polyethylenes, separating the influences of long-chain branching and molecular weight on rheology, which are often convoluted[30,59]. Vittorias et al.[124] employed a combination of Fourier-transform rheology[34] and the pom-pom model[48,66] to determine the topology of branched polymers. Another way to obtain an indication of LCB is a plot of the phase angle as a function of the magnitude of the complex modulus, the δ(|G∗|) curve[116], which is sensitive to even small thermorheological complexity and deviations from the behavior of a linear material. Obtaining structural information from rheological measurements usually requires the inversion of a forward model that is able to predict rheological information from structural measurements.
As reviewed above, such forward models may simply be empirical correlations, or based on microscopic theories. The latter have become increasingly sophisticated[26,57,58,74,89,119], although they are still quite far from being perfect. It is therefore hoped that the extraordinary sensitivity of rheology is able to mask shortcomings in the forward model, to produce sufficiently intelligible structural information. Unlike models for linear chains, hierarchical tube models for branched polymers lack the simple mathematical structure that can be exploited to invert the viscoelastic spectra. Thus, until recently[86,87], there was no general algorithm which could exploit the flexibility and robustness of these hierarchical models. Shanbhag recently proposed a data analysis method which translated the inverse problem into a

sampling problem using a Bayesian formulation, which was then explored using a Markov-chain Monte Carlo (MCMC) algorithm[86,87]. By intentionally refraining from seeking “the” optimal solution, this method explores the distribution of structures and compositions consistent with the experimental data, thereby characterizing all possible solutions and associating each with a likelihood. When applied specifically to linears, stars, and blends thereof, this method was able to (i) identify the number of components in the unknown mixture, due to the intrinsic Occam's razor in Bayesian analysis[37,64,71], (ii) accurately predict the composition of the mixtures in the absence of degenerate solutions, and (iii) describe multiple solutions when more than one combination of constituents was consistent with the rheology. Takeh et al.[114] used linear rheological data generated by a tube-theory-based model to infer the molar mass distribution and the LCB distribution, which can be determined experimentally only to a limited accuracy; the LCB distribution, in particular, is very hard to obtain.
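The sampling idea can be illustrated with a toy Metropolis random walk (this is not the actual method of refs. [86,87]; the data and proposal width below are invented for illustration):

```python
import math
import random

random.seed(0)

# Toy inverse problem: infer a location parameter theta from noisy data.
data = [1.9, 2.2, 2.1, 1.8, 2.0]

def log_posterior(theta):
    # Flat prior; Gaussian likelihood with unit variance.
    return -0.5 * sum((x - theta) ** 2 for x in data)

# Metropolis step: propose a move, accept with probability
# min(1, posterior ratio). The chain samples the posterior rather
# than converging to a single "optimal" answer.
theta, chain = 0.0, []
for _ in range(20000):
    proposal = theta + random.gauss(0.0, 0.5)
    if math.log(random.random()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    chain.append(theta)

# Discard burn-in; the retained samples characterize the full posterior,
# not just its mode.
burned = chain[5000:]
posterior_mean = sum(burned) / len(burned)
print(posterior_mean)   # close to the sample mean of the data, 2.0
```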

1.5.2.1 Branch-on-branch algorithm. Our LCB samples were generated based on the algorithm proposed by Larson[57], which quantitatively predicts the linear viscoelasticity of mixed systems that are polydisperse not only in molecular weight but also in branch length and branch location. The original algorithm was restricted to melts containing only linear, star, and comb molecules. The branch-on-branch (BoB) algorithm proposed by Das et al.[26] has essentially the same basis as the Larson algorithm, but differs in a number of aspects; their program, BoB, calculates the rheological response of a polymer sample of arbitrary, user-defined architecture.

1.5.3 Rheological Characterization

Modeling of polymer processing and analysis of processing experiments often require the relaxation spectrum, instead of the readily available dynamic moduli G′ and G′′. If these two rheological functions are known over a practical window of frequencies, we have enough data to characterize the viscoelastic behavior of the material. The relaxation modulus, defined in equation 1.3, is a very important characteristic that needs to be determined. From equations 1.3 and 1.6, it can be shown that the storage and loss moduli are connected to the relaxation modulus through the following integral transforms:

G′(ω) = ω ∫₀^∞ h(s) sin(ωs) ds

G′′(ω) = ω ∫₀^∞ h(s) cos(ωs) ds (1.7)

The above equations are the continuous representation of the relaxation spectrum, the continuous relaxation spectrum (CRS). In practice, however, we do not have the continuous range of frequencies needed to carry out the inversion and obtain the relaxation spectrum. Therefore, we seek a discrete relaxation spectrum (DRS) using the generalized Maxwell model, expressed as pairs of relaxation times and Maxwell mode strengths (τi, gi):

G′(ω) = Σ_{i=1}^{N} g_i ω²τ_i² / (1 + ω²τ_i²)

G′′(ω) = Σ_{i=1}^{N} g_i ωτ_i / (1 + ω²τ_i²) (1.8)

where N is the number of modes in the spectrum. Inverting these summations is not a trivial task. We are confronted with several problems when dealing with dynamic mechanical data: (i) the data are collected at a finite number of discrete frequencies ω, (ii) the instruments we use are limited to a certain window of frequencies, and (iii) the experimental data are noisy. As a result, the inference of relaxation spectra is an ill-posed problem. To infer continuous spectrum functions from dynamic moduli data, we must overcome this ill-posedness by providing extra information in addition to the experimental data. Honerkamp and Weese[44] proposed the smoothness of the relaxation spectrum as this additional information, and employed Tikhonov regularization to mitigate the effect of noise in the experimental data. The storage and loss moduli vary immensely over the range of frequencies, which causes problems; Honerkamp and Weese[45] addressed this by working with the logarithm of the relaxation spectrum function and applying nonlinear regression. Other approaches[77] have used the sensitivity of the data to infer the spectrum.
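A minimal sketch that evaluates equation 1.8 and checks it numerically against the transforms of equation 1.7 for a single-exponential spectrum h(s) = g·exp(−s/τ); all parameter values are hypothetical:

```python
import numpy as np

def maxwell_moduli(omega, modes):
    """Storage and loss moduli (equation 1.8) for (tau_i, g_i) pairs."""
    Gp = sum(g * (omega * t) ** 2 / (1 + (omega * t) ** 2) for t, g in modes)
    Gpp = sum(g * omega * t / (1 + (omega * t) ** 2) for t, g in modes)
    return Gp, Gpp

# Single-exponential spectrum h(s) = g*exp(-s/tau): the transforms of
# equation 1.7 should reproduce the one-mode Maxwell result.
g, tau, omega = 1.0, 1.0, 2.0
s = np.linspace(0.0, 50.0 * tau, 200001)   # truncate the infinite integral
h = g * np.exp(-s / tau)

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

Gp_num = omega * trapezoid(h * np.sin(omega * s), s)
Gpp_num = omega * trapezoid(h * np.cos(omega * s), s)

print(maxwell_moduli(omega, [(tau, g)]))    # (0.8, 0.4)
print(round(Gp_num, 4), round(Gpp_num, 4))  # matches: 0.8 0.4
```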

There are several programs based on published algorithms that are not freely available to the public.[9,11,41,67,100] IRIS is the most popular program used in industry1. The underlying algorithm and implementation of this program are not available to the public; it is therefore a black-box program, and it is not feasible for researchers to change the code to suit their needs. Nevertheless, it is a good option for experimentalists.

1.6 Motivation and Scope

In the second chapter, we look at the characterization of random copolymers. We infer the comonomer content of ethylene/1-olefin random copolymers from the Crystaf profile by using a semi-empirical mathematical model, and characterize the uncertainty in the estimates. To accomplish this, we extend the mathematical model of Anantawaraskul et al.[3] by expanding the number of model parameters from 4 to 9. We consider all the available literature data on ethylene/1-hexene, randomly select a subset to train the model and regress the model parameters, and test the predictive ability of the extended model by applying the newly found parameters to the rest of the data. We also quantify predictability by studying the distribution of the difference between the inferred and true values of the comonomer content for the test subset, and consider the effect of the quantity and quality of the training data on our predictions. In the third chapter, we determine how sensitive different rheological methods are for detecting LCB over a wide range of LCB levels and molecular weights. We use a tube-theory-based model to generate the rheological samples. Several rheological methods, and several interpretations of each, are investigated and compared to find the most suitable and sensitive method for measuring LCB. In the fourth chapter, we consider the problem of inferring the continuous and discrete relaxation spectra from the dynamic mechanical moduli obtained in small-amplitude oscillatory shear experiments. This inference is known to be an ill-posed inverse problem; therefore, additional information on the spectrum is needed for solving it.
Tikhonov regularization uses the assumption, or prior knowledge, that spectra are smooth. Employing the nonlinear Tikhonov regularization method, we extract the continuous relaxation spectrum; the discrete relaxation spectrum is then obtained from the inferred continuous spectrum. Using the algorithms discussed later in that chapter to infer the continuous and discrete relaxation spectra, we implement an open-source computer program that extracts the desired spectra from dynamic moduli data.

1http://rheology.tripod.com
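A minimal numerical sketch of linear Tikhonov regularization, the simpler cousin of the nonlinear method used later; the mode values, grids, and regularization parameter below are invented for illustration:

```python
import numpy as np

# Synthetic "experimental" loss modulus from a two-mode Maxwell model
# (equation 1.8); parameters are hypothetical.
w = np.logspace(-2, 2, 40)                       # frequencies, rad/s
def Gpp_model(w, g, tau):
    return sum(gi * w * ti / (1 + (w * ti) ** 2) for gi, ti in zip(g, tau))
Gpp = Gpp_model(w, [1.0e5, 2.0e4], [0.1, 5.0])

# Candidate relaxation times and the kernel K, so that Gpp ~ K @ g for
# unknown mode strengths g: a discrete, ill-conditioned linear problem.
tau = np.logspace(-3, 3, 25)
wt = w[:, None] * tau[None, :]
K = wt / (1 + wt ** 2)

# Tikhonov regularization: minimize ||K g - Gpp||^2 + lam*||g||^2 by
# solving the equivalent augmented least-squares system.
lam = 1e-3
A = np.vstack([K, np.sqrt(lam) * np.eye(tau.size)])
b = np.concatenate([Gpp, np.zeros(tau.size)])
g, *_ = np.linalg.lstsq(A, b, rcond=None)

# The regularized spectrum reproduces the data closely.
print(np.max(np.abs(K @ g - Gpp) / Gpp))
```

Increasing lam smooths and damps the recovered spectrum at the cost of fit quality; choosing it well is the crux of any regularization method.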

CHAPTER 2

INFERRING COMONOMER CONTENT USING CRYSTAF

As described in chapter 1, the characterization of polymers is crucial for understanding their structure; it allows one to make proper modifications and choose better synthesis strategies. The comonomer content has a profound influence on the structural, mechanical, and kinetic properties of copolymers. The mathematical model that describes Crystaf profiles can be considered our forward model; it provides an opportunity to infer the comonomer content from a Crystaf experiment by solving the inverse problem, conventionally via a calibration-curve approach. Some of the principal drawbacks of inference using a calibration curve are addressed in the current work: the entire Crystaf profile is used instead of just the peak, and operating conditions are incorporated into the mathematical model by expanding the parameter set. The tasks of data assimilation (tuning the model parameters) and uncertainty quantification of the calibrated model are also separated. The resulting insights could be an important step in bridging the gap between academic work and industrial practice.

2.1 Crystaf Model

The mathematical model developed for random copolymers by Anantawaraskul et al.[3,4] requires as input the average comonomer content, φ, operating conditions such as the initial and final temperatures, Ti and Tf, and the cooling rate rc. It yields the differential Crystaf profile as its output. Results of Crystaf are reported in a temperature-concentration format, which is called an integral Crystaf curve. Figure 2.1 shows the integral Crystaf profile of an ethylene/1-hexene sample (φ = 1.50 mol % of 1-hexene, and Mn = 35,000), where the model parameters are n = 4.44, k = 1.0 × 10^-5, A = 85, and B = 630. The first derivative of this cumulative curve results in a short chain branching distribution that is called a differential Crystaf profile. Figure 2.2 shows the differential Crystaf profile of the same sample with the same model parameters.


Figure 2.1: Integral Crystaf profile of an ethylene/1-hexene sample with φ = 1.50 mol % of 1-hexene, and Mn = 35,000, at a cooling rate of 0.1 °C/min.

For random copolymers, it is assumed that the distribution of the LES (figure 2.3), represented by λ, governs the fractionation[13]. The LES is influenced by both the molecular weight distribution and the CCD. Using a statistical approach to polymerization kinetics, Costeux et al.[22] derived an explicit formula for calculating the weight distribution of LES, W (λ; φ, Mn) from the number averaged molecular weight of the polymer, Mn, and φ. Figure 2.4 depicts the LES distribution,

W (λ) for a sample with φ = 1.50 mol % of 1-Hexene, and Mn = 35, 000.

W(λ) = [(1 − Pa)(1 − pp)/Pa] [F(1 − pm^λ) − F(1 − pm^(λ−1))],    (2.1)

The function F (x) and the parameter Pa are defined as:

F(x) = [Pa x/(1 − Pa x)^2] [λ(1 − Pa x/(1 − pm)) + Pa x/(1 − pm)],    (2.2)

Pa = pp(1 − cp)/(1 − cp · pp),    (2.3)

where pp is the overall propagation probability; cp is the ethylene/1-olefin choice probability (cp = 1 − cpp); pm is the probability of ethylene propagation (pm = pp · cp); and cpp is the 1-olefin propagation probability (which is equal to the average 1-olefin molar fraction in the copolymer


Figure 2.2: Differential Crystaf profile of an ethylene/1-hexene sample with φ = 1.50 mol % of 1-hexene, and Mn = 35,000, at a cooling rate of 0.1 °C/min.

for random copolymerization); the overall propagation probability (pp) can be calculated from the number-averaged chain length (rN):

pp = (rN − 1)/rN,    (2.4)

rN = Mn/(Mco × cpp + Mmo × (1 − cpp)),    (2.5)

where Mco is the 1-olefin molecular weight, and Mmo is the molecular weight of ethylene (28 g/mol). A modified Gibbs-Thomson equation[13] is used to determine the dissolution temperature of a polymer with a given LES:

Td(λ) = A − B/λ,    (2.6)

A = Td° − Ts,    (2.7)

B = Td° × α,    (2.8)

Figure 2.3: Illustration of ethylene sequences (ES) and the longest ethylene sequence (λ) in a copolymer sample.

where A and B are model parameters, Td° is the equilibrium dissolution temperature of chains of infinite length, Ts is a super-cooling temperature for Crystaf, and α is a constant that is inversely proportional to the enthalpy of fusion. Thus, a chain with a given λ begins falling out of solution once the temperature of the sample equals Td(λ). Due to finite heat-transfer rates, there is usually a non-negligible temperature lag between the oven and the sample, which depends on the particular

Crystaf device. For Crystaf 200, the empirical relationship for the lag temperature, Tl, was found to be,[5]

Tl = 5.02 rc − 0.05,    (2.9)

where rc = −dT/dt is the cooling rate.

If rc is assumed to be infinitesimally slow, we can invoke the assumption of thermodynamic equilibrium and assert that all the chains with a given λ crystallize completely when the temperature of the sample reaches Td(λ). However, this is not a good assumption at cooling rates that are typically used. In order to relax the assumption of thermodynamic equilibrium, crystallization kinetics were folded into the model semi-empirically via the Avrami equation. The relationship between crystallinity X and time t is described via:

X(t) = 1 − exp(−k t^n),    (2.10)

where k and n are empirical constants. For a given LES, the time "starts" when the sample temperature equals the dissolution temperature, and X(t = 0) = 0. After a sufficiently long time


Figure 2.4: Weight distribution of the longest ethylene sequence of a sample with φ = 1.50 mol % of 1-hexene, and Mn = 35,000.

determined by k and n, X = 1 and the entire fraction crystallizes out. Alternatively, at any given temperature T, a mixture of chains with different LES (Td(λ) > T) are continuously crystallizing out. The relationship between t in equation 2.10 and the temperature T can be easily established via rc and equations 2.6 and 2.9. Combining equation 2.10 with rc, we get the final expression for crystallinity:

X(λ, T) = 0,                                       T ≥ Td(λ) − Tl
X(λ, T) = 1 − exp[−k ((Td(λ) − Tl − T)/rc)^n],     T < Td(λ) − Tl    (2.11)

Therefore, for each fraction of LES, we can compute the fractional amount crystallized at a particular temperature, X(λ, T). The rest, 1 − X(λ, T), remains in solution. Figure 2.5 shows X(λ, T) for three fractions, λ = 100, 300, 1500. The total uncrystallized polymer solution concentration, which can be experimentally monitored, is given by a weighted average over all LES,

Cmodel(T) = Σ_{λ=1}^{∞} W(λ) (1 − X(λ, T)),    (2.12)

which corresponds to the integral Crystaf profile. These equations and assumptions describe the mathematical and physical model. Thus, the algorithm for computing the Crystaf profile may be summarized as follows:


Figure 2.5: X(λ, T) for three fractions, λ = 100, 300, 1500, of a sample with φ = 1.50 mol % of 1-hexene, and Mn = 35,000, and model parameters n = 4.44, k = 1.0 × 10^-5, A = 85, and B = 630.

(i) Given φ and Mn, compute W (λ)

(ii) Calculate X(λ, T ) for each λ

(iii) Calculate Cmodel(T ) from equation 2.12
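The three steps above can be sketched in Python (a hypothetical illustration, not the Matlab code used in this work). Because the Costeux expression for W(λ) is lengthy, the sketch takes a pre-computed, normalized W(λ) as input, i.e. it implements only steps (ii) and (iii) via equations 2.6, 2.9, 2.10 and 2.12; the placeholder distribution at the bottom is purely illustrative, and the parameter values are those quoted above for the sample in figure 2.1.

```python
import numpy as np

def crystaf_profile(W, lam, T, rc, n=4.44, k=1.0e-5, A=85.0, B=630.0):
    """Integral Crystaf profile C_model(T): steps (ii) and (iii).

    W   : normalized LES weight distribution (step (i), supplied by caller)
    lam : LES lengths corresponding to W
    T   : temperatures at which the profile is evaluated (deg C)
    rc  : cooling rate (deg C/min)
    """
    Td = A - B / lam                     # dissolution temperature, eq 2.6
    Tl = 5.02 * rc - 0.05                # lag temperature, eq 2.9
    # supercooling below the crystallization onset Td(lam) - Tl (eq 2.11)
    dT = np.clip(Td[None, :] - Tl - T[:, None], 0.0, None)
    X = 1.0 - np.exp(-k * (dT / rc) ** n)   # Avrami crystallinity
    return (1.0 - X) @ W                 # weighted average over LES, eq 2.12

# Placeholder LES distribution (hypothetical, NOT the Costeux W(lam)):
lam = np.arange(1.0, 3001.0)
W = lam * 0.999 ** lam
W /= W.sum()
T = np.linspace(30.0, 90.0, 121)
C = crystaf_profile(W, lam, T, rc=0.1)   # rises from ~0 to 1 with T
```

Note that above the highest onset temperature the profile equals the total (normalized) concentration, and it decreases monotonically as the sample is cooled, as in figure 2.1.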

Recall that the model has four parameters: A, B, k and n. Given an experimental dataset

Cexp(T) with known φ, these parameters may be determined by minimizing a cost function Φ of the form,

Φ = Σ_{T=Ti}^{Tf} (Cmodel(T; model parameters) − Cexp(T))^2,    (2.13)

where the summation extends over all the discrete values of T at which measurements are available. Crystaf is a polymer analytical technique based on the continuous crystallization of polymer chains from a dilute solution.[68,69] In Crystaf, the analysis is carried out by monitoring the polymer solution concentration during crystallization by temperature reduction. Aliquots of the solution are filtered and analyzed by a concentration detector.

2.2 Dependence of Parameters on the Structure and Operating Conditions

Anantawaraskul et al.[3,4] estimated the four model parameters for a number of datasets with different φ and rc and found that the parameter n was essentially independent of both structural and operational conditions (φ and rc). Parameters A and B decreased with increasing φ, while their dependence on rc was less pronounced. Figure 2.6 shows the estimated values of the A and B parameters from equations 2.7 and 2.8. It can be seen that these two parameters are strongly dependent on φ and independent of rc. This can be understood by considering equations 2.7 and 2.8, which show that the equilibrium dissolution temperature decreases as φ increases. The Avrami parameter k, on the other hand, increased as rc was increased or φ was decreased. Estimated Avrami parameters for the Crystaf model of ethylene/1-hexene samples (EH06, EH15, EH31) are shown in figure 2.7. As expected, the values of n are essentially the same across samples, whereas k increases with increasing rc and decreases with increasing comonomer content.

2.2.1 Classic and Expanded Parameter Sets

In their study, Anantawaraskul et al.[3] extracted the model parameters (n, k, A, and B) for each dataset (particular φ and rc) separately, which was consistent with their goal of building a descriptive model for Crystaf. In our approach, the mathematical model is meant to be used in a predictive mode; thus, we would prefer a single set of (potentially expanded) parameters that functionally assimilates the variation with φ and rc. The idea, therefore, is to estimate these parameters using a few training datasets, which could then be used to infer the φ of a dataset that is not used in the training process, i.e. the testing dataset. The following expanded parameter set is therefore proposed:

log k = log(k01 + k02 × φ) + k1 × rc (2.14)

A = A0 + A1 × φ (2.15)

B = B0 + B1 × φ + B2 × φ^2    (2.16)

Unlike the classic four parameter set (A, B, k and n), this expanded parameter set uses nine parameters (A0,A1,B0,B1,B2, k01, k02, k1 and n). While the form of the expanded parameters is

Figure 2.6: Estimated Gibbs-Thomson parameters (A and B) for the Crystaf model as a function of comonomer content.[3]

motivated by numerical experiments on the behavior of these parameters with changes in

φ and rc, it is by no means unique, or even necessarily optimal. However, as shown later, it appears to be adequate.
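As a sketch, the mapping from the nine expanded parameters to the classic set in equations 2.14–2.16 can be written as follows (Python; the parameter values in the example are illustrative placeholders rather than fitted values, and the use of a base-10 logarithm in equation 2.14 is an assumption):

```python
import numpy as np

def classic_from_expanded(phi, rc, p):
    """Equations 2.14-2.16: map the nine expanded parameters to (k, A, B)."""
    k = 10.0 ** (np.log10(p["k01"] + p["k02"] * phi) + p["k1"] * rc)
    A = p["A0"] + p["A1"] * phi
    B = p["B0"] + p["B1"] * phi + p["B2"] * phi ** 2
    return k, A, B

# Illustrative (hypothetical) parameter values, not the regressed ones:
p = {"k01": 1.0e-5, "k02": -1.0e-6, "k1": 2.0,
     "A0": 89.6, "A1": -8.4, "B0": 629.1, "B1": -92.2, "B2": 0.84}
k, A, B = classic_from_expanded(phi=1.5, rc=0.1, p=p)
```

Once this mapping is fixed, the forward model is evaluated exactly as before, with (k, A, B, n) now varying smoothly with φ and rc.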

2.2.2 Training and Testing Datasets

In section 2.3, the 27 available ethylene/1-hexene datasets that meet our requirements are described (they must have been measured on the same type of instrument, because of some empirical equations in the model). Either 5 or 10 "training" datasets (TDS) are randomly selected without replacement. They are used to estimate the classic and expanded model parameters by minimizing the least

Figure 2.7: Estimated Avrami parameters for the Crystaf model. The parameter k is an average value. (The solid lines are only an aid to the eye. The dashed line is an average value for n.) Comonomer content of the samples EH06, EH15, and EH31 is 0.68%, 1.51%, and 3.14%, respectively.[3]

squared error, via simulated annealing, over all the TDS instead of just one, as in equation 2.13:

Φ = Σ_i Σ_{T=Ti}^{Tf} (C_model^(i)(T; model parameters) − C_exp^(i)(T))^2,    (2.17)

where the index i runs over the 5 or 10 TDS selected. We then use the model with these parameters to infer the comonomer content of the remaining "test" datasets (the datasets not used in the training process, i.e. 27 − TDS), and evaluate the generalizability and uncertainty of the procedure. Note that we do not expect random selection of training datasets to be the optimal strategy for constructing datasets. Nevertheless, it serves as an "unbiased" benchmark against which we can compare alternative strategies. We thoroughly discuss

32 Table 2.1: Molecular characteristics of experimental data considered in this paper.

set φ rc Mn Reference
1 0.135 0.1 38,800 [83]
2 0.359 0.1 37,100 [83]
3 0.675 0.1 37,200 [83]
4 0.68 0.02 36,100 [3]
5 0.68 0.05 36,100 [3]
6 0.68 0.1 36,100 [3]
7 0.68 0.2 36,100 [3]
8 0.68 0.5 36,100 [3]
9 1.23 0.1 36,100 [83]
10 1.51 0.02 35,200 [3]
11 1.51 0.05 35,200 [3]
12 1.51 0.1 35,200 [3]
13 1.51 0.2 35,200 [3]
14 1.51 0.5 35,200 [3]
15 1.51 0.1 36,300 [83]
16 1.90 0.1 59,200 [117]
17 2.31 0.1 35,200 [83]
18 3.09 0.1 62,000 [117]
19 3.14 0.02 34,300 [3]
20 3.14 0.05 34,300 [3]
21 3.14 0.1 34,300 [3]
22 3.14 0.2 34,300 [3]
23 3.14 0.5 34,300 [3]
24 3.94 0.1 78,800 [117]
25 4.20 0.1 34,500 [83]
26 4.31 0.1 68,000 [117]
27 4.80 0.1 77,500 [117]

the effect of the selection of training datasets later, in section 2.5.1. Clearly, the estimated parameters depend on the particular datasets chosen in the TDS. Given 27 datasets, there are C(27,5) ≈ 80,000 and C(27,10) ≈ 8.5 million distinct possibilities for the 5 and 10 TDS scenarios, respectively. Instead of exhaustively exploring all of these cases, we use Monte Carlo sampling to randomly select 1000 samples for both the 5 and 10 TDS cases.
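The Monte Carlo construction of training/test splits can be sketched as follows (Python; the fixed seed is only for reproducibility of the illustration):

```python
import random

N_DATASETS, N_TDS, N_MC = 27, 5, 1000
rng = random.Random(42)

splits = []
for _ in range(N_MC):
    # draw a training set without replacement; the remainder is the test set
    tds = sorted(rng.sample(range(N_DATASETS), N_TDS))
    test = [i for i in range(N_DATASETS) if i not in tds]
    splits.append((tds, test))
```

Each of the 1000 splits yields one set of regressed parameters and 22 (or 17, for 10 TDS) test inferences, which together build up the distributions of Δφ discussed in section 2.5.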

2.3 Experimental Data

Experimental data were collected from twenty-seven Crystaf profiles of random ethylene/1-hexene copolymers that were reported in various sources.[3,83,117] All these experiments were performed on a Crystaf 200 by PolyChar, for which the empirical relation for the lag temperature, given by equation 2.9, is valid. Table 2.1 summarizes these datasets, in which φ ranges between 0.135 and 4.8 mole percent, and rc varies between 0.02 and 0.5 °C/min. Most of the samples have a cooling rate of 0.1 °C/min, which is common in practice. The 1-hexene fractions of the copolymer samples (φ) were measured using 13C NMR.[3,83,117] Crystaf data were digitized from the published Crystaf profiles, and cubic splines were used to interpolate the digitized data at regular temperature intervals.

2.4 Inferring Average Comonomer Content

Once model parameters are estimated from the 5 or 10 TDS using simulated annealing, we are in a position to infer the φ of the test datasets. That is, given an experimental Cexp(T ), we seek to determine the φ, which according to the calibrated Crystaf model, results in the best overall match. To this end, we introduce an error function ǫ(φ) which measures the distance between the modeled and experimental Crystaf profile:

ǫ(φ) = Σ_{T=Ti}^{Tf} (Cmodel(T; φ) − Cexp(T))^2.    (2.18)

Note that equations 2.13 and 2.18 are similar in form, but are used for different purposes. The cost function Φ (equation 2.13 or 2.17) is minimized by varying the model parameters and selecting the set of 4 or 9 parameters which best fits the experimental training data. A small value of Φ implies that the estimated model parameters describe the training data well. The error function (equation 2.18) is minimized by varying φ over a practical and meaningful range and selecting the value that leads to the best fit with the test data (i.e. minimum distance between the model and experimental data), after the model parameters have already been regressed. Φ gives us an idea of the ability of the parameterized model to assimilate the training data, while ǫ shows the predictive ability of the model for a particular test dataset.

We varied φ ∈ [0, 6], computed ǫ(φ), and found the φmodel which minimized the error. Figure 2.8 shows the error function for dataset #17, obtained using n = 4.45, k01 = −3.05, k02 = −1.73 × 10^-9, k1 = 20.67, A0 = 89.64, A1 = −8.41, B0 = 629.09, B1 = −92.21, and B2 = 0.84. The minimum, φmodel = 2.33, compares favorably with the empirical φexp = 2.31.
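A minimal sketch of this inference step is shown below; the sigmoid "forward model" is a purely hypothetical stand-in for the calibrated Crystaf model, chosen only to exercise the grid search over equation 2.18.

```python
import numpy as np

def infer_phi(C_exp, T, model, phi_grid=None):
    """Minimize eps(phi) (equation 2.18) over a grid of candidate phi."""
    if phi_grid is None:
        phi_grid = np.linspace(0.0, 6.0, 601)     # phi in [0, 6] mol %
    eps = np.array([np.sum((model(phi, T) - C_exp) ** 2) for phi in phi_grid])
    return phi_grid[np.argmin(eps)]

# Toy forward model (hypothetical): a sigmoid whose midpoint shifts with phi.
T = np.linspace(30.0, 90.0, 121)
toy = lambda phi, T: 1.0 / (1.0 + np.exp(-(T - (80.0 - 5.0 * phi)) / 2.0))
C_exp = toy(2.31, T)                              # synthetic "experiment"
phi_model = infer_phi(C_exp, T, toy)
```

In practice the grid search could be replaced by any one-dimensional minimizer, since ǫ(φ) is cheap to evaluate once the model is calibrated.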

2.5 Results

For the 5 TDS case with the expanded parameter set, 5 datasets were selected randomly, and the model parameters were estimated using them. The remaining 27 − 5 = 22 datasets were considered as test datasets. For each of these test datasets, the ǫ(φ) curve was computed, as shown in figure 2.8, and the φmodel that minimized the error was determined. Since the experimental φexp is known, the difference ∆φ = φmodel − φexp can be computed. This process was repeated 1000 times, by randomly selecting a different set of 5 TDS, resulting in approximately 22/27 × 1000 ≈ 815 values of ∆φ for


Figure 2.8: The function ǫ(φ) for dataset 17 from table 2.1 (φexp = 2.31), derived using the model parameters described in the text. The φ which minimizes ǫ is denoted φmodel.

each dataset (for 10 TDS this value is ≈ 630). These results are summarized in figure 2.9(a) using a box plot, which attempts to depict the distribution of ∆φ concisely. The box plot for a particular dataset may be interpreted as follows: the blue boxes encapsulate the range of the middle 50% of the data (∆φ), from the 75th (q3) to the 25th (q1) percentile. The red line inside the box represents the median (the 50th percentile). A median close to zero represents accuracy in the inference, and the size of the box is a reasonable proxy for precision. The whiskers that extend from the box denote the maximum and minimum values that are not considered outliers. Outliers are indicated individually by a red '+' sign. Points are considered to be outliers if they are larger than q3 + w(q3 − q1), or smaller than q1 − w(q3 − q1), where w = 1.5 was chosen. This choice of w = 1.5 corresponds to approximately ±2.7 times the standard deviation, or 99.3% coverage, if the data were normally distributed. Thus, one way to interpret these graphs is to compare the median with zero; the bounds of the blue box represent the uncertainty in the median. The range spanned by the whiskers provides a higher confidence interval. Figure 2.9(b) presents the corresponding results for 10 TDS. A simple comparison between figures 2.9(a) and 2.9(b) shows that increasing the number of training datasets decreases the number of outliers and shrinks the box width. This is an important empirical result. When the complexity

35 4

2

φ 0 ∆

−2

−4 1 3 5 7 9 11 13 15 17 19 21 23 25 27 dataset (a) 5 TDS - expanded parameters

4

2

φ 0 ∆

−2

−4 1 3 5 7 9 11 13 15 17 19 21 23 25 27 dataset (b) 10 TDS - expanded parameters

6

4

2

φ 0 ∆

−2

−4

−6 1 3 5 7 9 11 13 15 17 19 21 23 25 27 dataset (c) 5 TDS - classic parameters

Figure 2.9: Distribution of difference between comonomer content derived from model and experimental data for (a) 5 TDS with 9 parameters, (b) 10 TDS with 9 parameters, and (c) 5 TDS with 4 parameters. The main text describes how to interpret these box plots.

of the model is increased (moving from the classic 4-parameter set to the expanded 9-parameter set, in this case), for a given size of TDS, there is a tendency to "over-fit" or "over-train" the parameters. An over-trained model may have the benefit of assimilating the training data extremely well (low Φ), but by capturing noise and other artifacts, its predictive capabilities on test data may suffer (large ∆φ). Increasing the number of TDS mitigates some of the risk of over-training, which must be carefully considered when the number of TDS is selected; a balance must be struck between model complexity and the amount of training data. For the sake of completeness, in figure 2.9(c) we show the results for the classic parameter set with 5 TDS. Note that the classic parameter set was never intended to be used in this manner, and its lack of accuracy and precision is not unexpected, since the natural dependence of some of the model parameters on φ and rc is suppressed. However, the contrast between the classic and expanded parameter sets suggests that the form of equations 2.14–2.16 may be adequate. These calculations were implemented in Matlab on a relatively modern laptop without any code optimization or parallelization. For the 5 TDS case, it took about two minutes to calibrate the model. The computational cost is linear in the number of TDS; thus, the 10 TDS case took twice as long for model calibration. The cost of inferring φ using a calibrated model is of the order of two seconds (independent of the TDS). Thus, the training phase is significantly more expensive than the test phase. In a practical use scenario, the training phase would be carried out only once, front-loading the computational effort. Once a well-calibrated model is available, the inference of φ, given a Crystaf profile, is essentially very cheap.
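The box-plot outlier rule used in figure 2.9 (fences at q1 − w(q3 − q1) and q3 + w(q3 − q1), with w = 1.5) can be sketched as follows; the synthetic normal data here merely stand in for the ≈ 815 values of ∆φ per dataset.

```python
import numpy as np

def box_stats(dphi, w=1.5):
    """Median, quartiles, whisker fences, and outliers for a box plot."""
    q1, med, q3 = np.percentile(dphi, [25, 50, 75])
    lo = q1 - w * (q3 - q1)
    hi = q3 + w * (q3 - q1)
    outliers = dphi[(dphi < lo) | (dphi > hi)]
    return med, (q1, q3), (lo, hi), outliers

rng = np.random.default_rng(0)
dphi = rng.normal(0.0, 0.3, size=815)   # synthetic stand-in for Delta-phi
med, (q1, q3), (lo, hi), out = box_stats(dphi)
# For normal data, only ~0.7% of points fall outside the w = 1.5 fences.
```

This makes the interpretation quantitative: the fences correspond to roughly ±2.7 standard deviations for normally distributed data, so red '+' points are genuinely rare events rather than typical scatter.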
If the library of available training data were large, it would be relatively easy to reduce the computational time by exploiting the underlying parallel structure in computing Φ. From a pragmatic standpoint, we would like to interpret figure 2.9 and address questions such as: "how confident can I be that the true comonomer content is within a certain threshold of the inferred value?" To investigate, we considered a "reasonable" threshold level for |∆φ| of 1.0 mol % (see figure 2.10). With 5 TDS, |∆φ| was within this threshold 95% of the time, except for datasets 23 and 27, for which |∆φ| < 1.0 mol % held 87.7% and 93.3% of the time, respectively. It may be noted that these datasets represent extremes of one form or another (dataset 23 has a high cooling rate, and 27 has a high φ), which makes them difficult to model with limited training. When 10 TDS are used, |∆φ| < 1.0 holds 99% of the time, except for datasets 23 and 27, for which |∆φ| was within the threshold 97.3% and 98.9% of the time, respectively. When we have a large number


Figure 2.10: Fraction of samples with |∆φ| < 1.0.

of TDS, the chances of including datasets near the "extremes" during the training phase increase. This leads to better parameterization, which directly improves the results, as observed here. This reinforces the idea that the inference of comonomer content can be improved by increasing the number of representative training datasets.
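The statistic plotted in figure 2.10 is simply the empirical fraction of inferences whose error lies below a cutoff; a (hypothetical) helper makes this explicit:

```python
import numpy as np

def fraction_within(dphi, threshold=1.0):
    """Fraction of |dphi| values strictly below the threshold (in mol %)."""
    return float(np.mean(np.abs(np.asarray(dphi)) < threshold))

# fraction_within([0.2, -0.5, 1.4, 0.9]) -> 0.75
```

Applying this to the ≈ 815 (or ≈ 630) values of ∆φ per dataset yields the per-dataset fractions quoted above.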

2.5.1 Effect of Selection of Training Datasets

In the results presented above, training datasets were chosen randomly from among the available datasets. In this section, we explore the effect of this choice, and examine whether some choices are better than others. We considered only the 5 TDS scenario, and divided the 27 datasets into low, medium, and high comonomer content regions as follows:

(i) low comonomer content (φ ≤ 0.68, datasets: 1 - 8)

(ii) medium comonomer content (1.23 ≤ φ ≤ 2.31, datasets: 9 - 17)

(iii) high comonomer content (φ ≥ 3.14, datasets: 18 - 27)

The aim was to construct training data from one of these regions to parameterize the model, and to use the remaining datasets to test the predictive ability of the parameterization "outside of the training zone". We stipulated an additional constraint that three different cooling rates had to be

represented in each TDS. This requirement was imposed to ensure that sufficient data were available to reliably regress the parameters that capture the variation with cooling rate. Under these constraints, there is only a relatively small number of ways to construct training data with 5 datasets: 24, 60, and 120, for the low, medium, and high φ buckets, respectively. All of these cases were considered exhaustively. In figure 2.11, Φ is reported, which reflects the quality of the parameter fitting. The values of Φ are normalized by the mean Φ of the low φ bucket, for clarity and a better scale of comparison. The figure shows the mean and standard deviation (blue) as well as the range encompassed by the maximum and minimum values (red). Besides the three cases mentioned above (low, medium, and high), two additional strategies were considered for constructing training datasets, labeled "equispaced" and "random" in figure 2.11. The "random" strategy has been discussed before; 5 TDS were picked randomly to constitute the training dataset. For the results reported in this section, 1000 random samples were used, which obeyed the additional constraint of three different rc. For the "equispaced" case, the following constraints were imposed on the construction of the TDS: (i) 2 datasets from each of the low and high φ buckets, and 1 dataset from the medium φ bucket, and (ii) three different cooling rates. Of all the possibilities, only 48 samples satisfied these two criteria, and they were used in the comparisons that follow. From figure 2.11, it was noted that the cost function Φ is smallest for the low and medium φ training datasets, largely due to the homogeneity of these datasets. However, this observation does not hold for the high φ case, because the underlying Crystaf model, regardless of the parametrization, is least accurate in this regime.
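Interpreting the cooling-rate constraint as exactly three distinct rates per TDS (an interpretation, but one that reproduces the stated counts of 24, 60, and 120), the enumeration can be sketched as follows, with the rc values transcribed from table 2.1:

```python
from itertools import combinations

# rc (cooling rate, deg C/min) for datasets 1..27, from table 2.1
rc = {1: 0.1, 2: 0.1, 3: 0.1, 4: 0.02, 5: 0.05, 6: 0.1, 7: 0.2, 8: 0.5,
      9: 0.1, 10: 0.02, 11: 0.05, 12: 0.1, 13: 0.2, 14: 0.5, 15: 0.1,
      16: 0.1, 17: 0.1, 18: 0.1, 19: 0.02, 20: 0.05, 21: 0.1, 22: 0.2,
      23: 0.5, 24: 0.1, 25: 0.1, 26: 0.1, 27: 0.1}

buckets = {"low": range(1, 9), "medium": range(9, 18), "high": range(18, 28)}

def constrained_tds(bucket, n_tds=5, n_rates=3):
    """All n_tds-subsets of a bucket spanning exactly n_rates cooling rates."""
    return [c for c in combinations(bucket, n_tds)
            if len({rc[i] for i in c}) == n_rates]

counts = {name: len(constrained_tds(b)) for name, b in buckets.items()}
# counts == {'low': 24, 'medium': 60, 'high': 120}
```

Each admissible subset is then used as a TDS, the model is calibrated on it, and Φ and |∆φ| are evaluated as before.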
The random and equispaced cases have a larger Φ, compared to the low and medium φ cases, because they have to assimilate more heterogeneous training data, which may include some high φ data. The quality of the inference is more crucial; it is partly captured by |∆φ|, and is depicted in figure 2.12. From this standpoint, the low and medium φ TDS are nearly equivalent. Naively, it appears that the high φ TDS leads to better predictive performance, despite poor data assimilation (figure 2.11). However, it must be recalled that the training data are excluded from the test data in this analysis. Thus, the improved apparent performance may largely be attributed to the exclusion of datasets with high φ from the test data, which, as mentioned before,

39 25

20

15

Φ 10

5

0

−5 low medium high equispaced random

Figure 2.11: Cost function Φ corresponding to the five different strategies for constructing training datasets. The blue symbols and lines denote the mean and standard deviation of the data, while the red lines extend from the minimum to the maximum values sampled.

cause the most trouble for the Crystaf model. Finally, it was noted that in terms of mean |∆φ|, the equispaced (0.26 ± 0.20) and random (0.29 ± 0.28) cases outperform the rest, although there are more outliers for the latter, as evidenced by a larger range of outcomes. For equispaced TDS, even the worst outcome is still within 1 mol % of the true value. To summarize this section, choosing training data that are homogeneous leads to excellent assimilation but mediocre predictions, and vice versa. Between random and equispaced TDS, the latter outperforms the former in terms of both assimilation and prediction. For most of this study, training datasets were selected randomly; in principle, if we choose "equispaced" datasets, we should expect to perform much better.


Figure 2.12: |∆φ| corresponding to the five different strategies for constructing training datasets. The blue symbols and lines denote the mean and standard deviation of the data, while the red lines extend from the minimum to the maximum value sampled.

CHAPTER 3

LCB DETECTION AND MEASUREMENT

In the previous chapter, the inference of comonomer content, or short-chain branching (SCB), was thoroughly considered. As discussed earlier in section 1.5.2, LCB has a tremendous effect on several polymer properties. Due to its importance, it has been an important subject of research for both the research community and industry. The detection and measurement of LCB are two important topics in this area, and have gained much attention from researchers. Several probes of LCB measurement were introduced and discussed in section 1.5.2. Stadler[101] utilized different probes for the detection of very low amounts of LCB in metallocene-catalyzed polyethylenes. LCB detection was investigated in his work, and the sensitivity of different probes was compared in a qualitative manner. The only direct method for measuring and detecting LCB is NMR.[78] NMR works by detecting tertiary carbon atoms (the carbon atom that connects the backbone and the LCB, or the α-carbon). NMR has two major drawbacks. The first is its inability to differentiate between LCB and SCB when the SCB is longer than 6 carbon atoms. The second is the long processing time, especially for samples with very low amounts of LCB. Size exclusion chromatography coupled with multi-angle laser light scattering (SEC-MALLS)[65,104,111] is another method that is frequently used for LCB detection; however, this method is not optimal for low amounts of LCB. Rheology is another approach, which is very popular because of its sensitivity to LCB. Stadler used a combination of SEC-MALLS and rheological (linear and non-linear) probes in his study. The dynamic mechanical data were used to detect LCB in his work: the dynamic mechanical data of branched samples were compared with those of linear samples of similar composition. This comparison provides a good possibility to detect LCB in lightly branched samples.
Based on these comparisons, the sensitivity of the different rheological probes for detecting LCB in a wide range of lightly branched samples was also determined. Despite the advantages of this method, it lacks a quantitative comparison of the rheological probes; all the sensitivity results and conclusions were based on the expected visual behavior of the corresponding rheological probes. This approach has two

42 major drawbacks:

• The comparison of the sensitivity of the rheological probes is qualitative. Without quantitative sensitivity results, no measurable framework is set up for comparing the different rheological probes, and the boundaries within which each probe is most sensitive are not exactly determined. In addition, it should be taken into consideration that each probe has a different scaling: some of these probes are reported on a logarithmic scale and some on a linear one, which makes the comparison complex. Neglecting these factors might lead to unrealistic conclusions.

• The linear and branched samples in this method are chosen in a qualitative manner: low, medium, high, and ultra-high molecular weight samples are chosen. This suffices for a qualitative investigation, but for a quantitative approach a more robust, systematic sampling route should be taken.

As a result, the molecular weight effect is not explicitly considered in this comparison. In this new approach, both drawbacks are addressed and attempts have been made to resolve them:

• A quantitative framework is proposed to compare the rheological probes and yield a quantitative conclusion. The general approach is to find the distance between the responses of the linear and branched samples. Figure 3.1 represents the storage and loss moduli of a linear sample with MW = 149.4 kg/mol and a branched sample with bm = 0.74. In the proposed method, for each probe, we want to quantify the agreement between the two curves. The other probes that are used are listed in table 3.1. This will be discussed in detail in section 3.1.2. This approach gives us the necessary tool to construct a quantitative framework to compare the different probes and associate a corresponding sensitivity with each of them.

• A computational method is used to generate the sample data. Rheological data for the samples were generated by the branch-on-branch (BoB) model of Das et al.[26] It is a general algorithm based on the tube model and hierarchical relaxation, which is capable of predicting the linear rheology of mixtures of polymers with arbitrary architectures. It produces an ensemble of linear, star, comb, and branch-on-branch molecules. For metallocene-catalyzed polymers (here polyethylenes), BoB accepts two parameters, MW and bm. BoB gives us the opportunity to generate the desired linear and branched samples with pre-defined configurations. The wide range of molecular weights considered in this work covers both classes of samples: those typically used in industry and those used in scientific research. For branched samples, the average number of branches per molecule, bm = 0 − 2.5, is considered.


Figure 3.1: The storage and loss moduli of a linear sample with MW = 149.4 kg/mol and a branched sample with the same molecular weight and an average number of branches per molecule of bm = 0.74.

In this chapter we want to extend the studies and consider LCB measurement along with LCB detection.

LCB Detection The initial intent of this work was to construct a quantitative framework for LCB detection, as discussed in the paragraphs above. Using the rheological probes that Stadler[101] used in his work, we try to achieve this goal. A thorough discussion of this problem is provided in the later sections.

LCB Measurement There are different probes for measuring LCB in the literature. Takeh[112] has proposed a method to infer the molecular weight and LCB in metallocene-catalyzed polyethylenes using an analytical method proposed by Shanbhag.[86] In the current study, inferring LCB is not discussed; instead, different probes are compared to find the proper one for LCB measurement. The level of sensitivity of each probe compared to the others is also presented. The results provide a good tool to compare the probes and choose the best available probes for LCB measurement.

In the following sections, we will discuss the method and present the results.

3.1 Method

3.1.1 Model Data

We considered linear and branched samples with a wide range of molecular weights and LCB.

The molecular weight (MW) range spanned MW = 10,000 − 700,000 g/mol, and the average number of branches per molecule (bm) spanned bm = 0 − 2.5. As mentioned earlier, the rheological data for the samples were generated by the branch-on-branch model (BoB) of Das et al.[26] In the BoB simulations, we assume that the chemistry (polyethylene), the polymerization kinetics (single-site metallocene catalyzed), and the temperature (150°C) at which the rheological measurement is carried out are known accurately. We chose the same parameters as Das et al.[26], viz. a density of 785 kg/m³, an equilibrium time of τe = 1.05 × 10⁻⁸ s, and an entanglement molecular weight of Me = 1120 g/mol. The dynamic dilation exponent was set to unity, the monomer mass to M0 = 28 g/mol, and the number of monomers per entangled segment to Ne = 40. To describe the mixture of linear and branched polymers, we considered 50,000 molecules. We obtain the storage and loss moduli from BoB, and calculate the other rheological probes from them using the appropriate relations. These rheological data cover an extremely wide range of frequencies, ω = 10⁻¹¹ − 10¹⁴, which is experimentally inaccessible (such measurements would take so long that the samples would degrade); we nevertheless treat this full range as available in our theoretical analysis. In practice, only certain decades of this range are used in our calculations.

3.1.2 Rheological Probes

In this section, we introduce the rheological probes used in this study. These probes were adopted from Stadler's[101] work and are listed in table 3.1. For the frequency window used in this research (discussed in section 3.1.2.1), the low-frequency limiting value η0 is not attained, and it is therefore not included in the probe list. As mentioned earlier in this chapter, we obtain the storage and loss moduli defined in section 1.5.3 from the BoB program and use these two rheological probes to derive the rest of the probes through the relationships between them. The loss tangent is the ratio of the loss modulus to the storage modulus at each frequency:

tan δ(ω) = G′′(ω)/G′(ω)    (3.1)

Table 3.1: Stadler's[101] rheological probes adopted in this work.

  Storage modulus: G′(ω)
  Loss modulus: G′′(ω)
  Complex viscosity: |η∗(ω)|
  Complex viscosity normalized by the η0 expected from the molar mass (η0^lin): |η∗(ω × η0^lin)|/η0^lin
  Complex viscosity normalized by η0: |η∗(ω × η0)|/η0
  Storage modulus normalized by the η0 expected from the molar mass (η0^lin): G′(ω × η0^lin)
  Loss modulus normalized by the η0 expected from the molar mass (η0^lin): G′′(ω × η0^lin)
  Storage modulus normalized by η0: G′(ω × η0)
  Loss modulus normalized by η0: G′′(ω × η0)
  Loss tangent: δ(|G∗|)

If the storage and loss moduli are considered as the real and imaginary components of a complex number, the complex modulus can be defined as follows:

G∗(ω) = G′(ω) + iG′′(ω),  |G∗(ω)| = √(G′² + G′′²)    (3.2)

The storage and loss viscosities are the components of the complex viscosity, which is one of the rheological probes that we consider in this work. The storage viscosity η′, the loss viscosity η′′, and the complex viscosity η∗ are defined as follows:

η′ = G′′/ω,  η′′ = G′/ω,  η∗ = η′ − iη′′    (3.3)

The zero-shear viscosity is obtained from the low-frequency limit of the loss modulus, and is the low shear-rate limiting value of the viscosity:

lim(ω→0) G′′/ω = lim(ω→0) η′ = η0    (3.4)

The rheological responses of a linear sample and its corresponding LCB sample are plotted here.
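To make the relationships above concrete, the following minimal Python sketch (our own illustration, not part of BoB; the function name and array-based interface are assumptions) computes the derived probes of equations 3.1 to 3.4 from tabulated storage and loss moduli:

```python
import numpy as np

def derived_probes(omega, G1, G2):
    """Compute derived rheological probes from the storage (G1 = G')
    and loss (G2 = G'') moduli sampled at frequencies omega.
    All three inputs are equal-length arrays, omega ascending."""
    tan_delta = G2 / G1                      # loss tangent, eq. 3.1
    G_star = np.sqrt(G1**2 + G2**2)          # |G*|, eq. 3.2
    eta1 = G2 / omega                        # storage viscosity eta'
    eta2 = G1 / omega                        # loss viscosity eta''
    eta_star = np.sqrt(eta1**2 + eta2**2)    # |eta*| = |G*| / omega
    eta0 = eta1[0]                           # eq. 3.4, approximated at the lowest omega
    return tan_delta, G_star, eta_star, eta0
```

A quick check with a single-mode Maxwell model, G′ = gω²τ²/(1 + ω²τ²) and G′′ = gωτ/(1 + ω²τ²), recovers η0 ≈ gτ when the lowest frequency lies deep in the terminal regime.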

The following plots show the results for a linear sample with MW = 149.4 kg/mol and an LCB sample with the same molecular weight and average branching of bm = 0.74 (λ = 0.24 branch points


per 10³ carbon atoms).

Figure 3.2: Complex viscosity of a linear sample with MW = 149.4 kg/mol and a branched sample with the same molecular weight and average number of branches on a single molecule, bm = 0.74.

Figure 3.1 shows the storage and loss moduli, G′(ω) and G′′(ω), respectively; linear and branched samples are indicated by red circles and green squares. These responses vary for samples with different structures, molecular weights, and branching. The storage and loss moduli normalized by the zero-shear viscosity expected from the η0−MW correlation, G′(ω × η0^lin) and G′′(ω × η0^lin), and by the zero-shear viscosity itself, G′(ω × η0) and G′′(ω × η0), are not presented here: they are normalized versions of the functions already shown, and the overall shapes of the responses are the same. The complex viscosity, |η∗(ω)|, of the same configuration is shown in figure 3.2. The complex viscosity normalized by the zero-shear viscosity expected from the η0−MW correlation, |η∗(ω × η0^lin)|/η0^lin, and by the zero-shear viscosity, |η∗(ω × η0)|/η0, are not shown because they follow the same trends with different values. Figure 3.3 shows δ, which relates the two independent rheological quantities G′ and G′′, as stated in equation 3.1.

3.1.2.1 LCB Detection Method. Calculating the agreement between the linear and branched samples is the key to determining the sensitivity of long-chain branching detection in this work. To calculate this agreement, a new parameter is defined that measures the distance between the desired rheological probes of a linear and a branched sample with the same molecular weight. The error function ǫ is used to compute the distance between the linear and branched sample probes and to quantify the agreement between the probe responses. Equation 3.5 expresses this parameter as a logarithmic distance between the linear and branched structures. The rheological probes have different scales, which could be misleading if we considered the distance function without normalization; we therefore normalize the error function with respect to the maximum window of a probe's response in each pair of a linear sample and its corresponding branched sample of the same molecular weight.

Figure 3.3: Loss tangent δ(|G∗|) of a linear sample with MW = 149.4 kg/mol and a branched sample with the same molecular weight and average number of branches on a single molecule, bm = 0.74.

ǫ(θ) = (1/n) Σ_{i=1}^{n} [log(θ_{i,br}/θ_{i,lin})]² / [log(θ_max/θ_min)]²    (3.5)

where θ is the desired rheological probe listed in table 3.1, except for δ(|G∗|), for reasons discussed shortly. The summation runs over all discrete values of ω; the exception is η0, the viscosity in the linear viscoelastic regime, which is calculated at the lowest ω. δ(|G∗|) is the only rheological function that does not depend directly on the frequency; since it takes values in [0°, 90°], we use a least-squares form for the corresponding error function:

ǫ(θ) = (1/n) Σ_{i=1}^{n} (δ_{i,lin} − δ_{i,br})² / (δ_max − δ_min)²    (3.6)

The rheological probes are calculated by BoB over a very wide range of frequencies, 10⁻¹² − 10¹⁰; not all of the calculated frequencies are used, and only certain decades of frequency are considered

here. Frequencies of 10⁻² − 10³ are used, consistent with the frequency window Stadler[101] employed in his work.
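The error function of equation 3.5 can be sketched as follows (a minimal illustration of the normalization described above; the function name and interface are ours, not taken from the original code):

```python
import numpy as np

def epsilon(theta_lin, theta_br):
    """Normalized logarithmic distance (eq. 3.5) between the responses
    of a linear and a branched sample for one probe, sampled at the
    same frequencies (restricted to the chosen window)."""
    theta_lin = np.asarray(theta_lin, dtype=float)
    theta_br = np.asarray(theta_br, dtype=float)
    n = len(theta_lin)
    # maximum response window of this linear/branched pair (assumed non-degenerate)
    theta_all = np.concatenate([theta_lin, theta_br])
    norm = np.log10(theta_all.max() / theta_all.min()) ** 2
    return np.sum(np.log10(theta_br / theta_lin) ** 2) / (n * norm)
```

Identical responses give ǫ = 0, and larger systematic deviations between the linear and branched curves give larger ǫ, independent of the probe's absolute scale.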

3.1.2.2 LCB Measurement Method. All of the above formulations are used to determine the sensitivity of the different probes in detecting the existence of LCB; this sensitivity follows from comparing the agreement between linear and branched samples across probes. Apart from studying the detection sensitivity of the probes, the other branch of our investigation is finding a probe that can quantify LCB. To quantify LCB measurement, the change of the distance between the linear and branched samples must be studied; that is, we calculate and compare the change in the distance between the two regimes with respect to the branching level, dǫ(θ)/dbm. In this derivative, the molecular weight is held constant; consequently, the results depend on the molecular weight.
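The derivative dǫ(θ)/dbm can be estimated numerically once ǫ has been evaluated on a grid of branching levels at fixed MW. The sketch below (our own illustration; the names are hypothetical) uses a simple finite-difference estimate:

```python
import numpy as np

def measurement_sensitivity(bm_grid, eps_values):
    """Finite-difference estimate of d eps(theta)/d bm at fixed MW,
    given the error function eps evaluated on an ascending grid of
    branching levels bm (equal-length arrays)."""
    return np.gradient(np.asarray(eps_values, dtype=float),
                       np.asarray(bm_grid, dtype=float))
```

The returned array has one sensitivity value per grid point; the probe with the largest value at a given (MW, bm) is the most sensitive one for LCB measurement there.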

3.2 Results

In this section, the results of LCB detection and measurement on our sample datasets are presented, and the sensitivity of the rheological probes is discussed. Sensitivity in this context measures the detection and measurement ability of each probe compared to the other probes. In section 3.2.1, LCB detection and the sensitivity of the different probes are discussed; based on those responses, the most sensitive probe and the corresponding region of sensitive coverage (MW, bm) for LCB detection are identified. In section 3.2.2, the sensitivity of the different probes for LCB measurement is discussed and the corresponding sensitive probes are determined.

3.2.1 LCB Detection

In this section, we discuss the sensitivity of the rheological probes for LCB detection.

Figure 3.4 shows the most sensitive probe at each point of the investigated (bm, MW) area. The figure is derived by comparing the ǫ of the different probes at each configuration (each point of the plot); the probe with the maximum ǫ at a configuration is taken as the most sensitive probe there. As the figure shows, three different probes emerge as the most sensitive:

– For MW ≤ 85,000 g/mol at all bm values, and MW ≤ 130,000 g/mol for bm ≥ 2, the complex viscosity, |η∗(ω)|, is the most sensitive probe.

– For a molecular weight range that starts at 220,000 − 425,000 g/mol at the lowest branching, tilts toward lower molecular weights (with a minimum bound at MW = 100,000 g/mol), and moves back to the original molecular weight range at higher values of bm, the complex viscosity normalized by η0, |η∗(ω × η0)|/η0, is the most sensitive probe for LCB detection.

– For the remaining regions between the previously mentioned configurations, for MW ≥ 440,000 g/mol (at all bm values), and for 280,000 ≤ MW ≤ 440,000 g/mol with 0.2 ≤ bm ≤ 1.7, the loss tangent, δ(|G∗|), is the most sensitive probe.

Determining these exceptions and showing the precise boundaries of each sensitive probe are two of the major achievements of the current work; they make it easier to investigate a sample and choose the proper probe for LCB detection.

Figures 3.5 and 3.6 show cross sections of figure 3.4 at discrete values of MW and bm, plotted against the error ǫ. For this purpose, MW = 80,400 and 418,300 g/mol and bm = 0.63 and 1.74 are investigated. These are represented as white lines in figure 3.4. These discrete values of MW and bm were selected because they show different behaviors (different sensitivities) in figure 3.4. Figure 3.5(b) is chosen to investigate the behavior of the different probes in a high molecular weight region.

Figure 3.5(a) captures ǫ at a molecular weight of MW = 80,400 g/mol. As expected from figure 3.4, the loss tangent, δ(|G∗|), is the most sensitive probe for bm ≤ 0.35 and 0.45 ≤ bm ≤ 1.9; for 0.35 ≤ bm ≤ 0.45, the complex viscosity normalized by η0, |η∗(ω × η0)|/η0, is the most sensitive probe; and for bm ≥ 1.9, the complex viscosity, |η∗(ω)|, is the most sensitive. These results agree with figure 3.4. Figure 3.5(b) shows that the loss tangent is the most sensitive probe at all branching levels; it also shows that the complex viscosity normalized by η0 competes with the loss tangent, especially at lower branching levels. These cross-sectional figures provide a good judgment of the alternative probes for specific branching or molecular weight ranges: for some configurations there is a distinct difference between the sensitivities of the various methods, while for others the distinction is less obvious and can be exploited to choose a simpler and cheaper method.

Figures 3.6(a) and (b) show ǫ for two levels of branching, bm = 0.63 and 1.74, respectively. Figure 3.6(a) corresponds to a low-branching sample; investigating ǫ in this plot leads us to


Figure 3.4: LCB detection sensitivity map; the probe shown in each area has the largest error ǫ between the linear and branched regimes. White lines mark the specific configurations whose cross sections along the indicated dimension are investigated.



Figure 3.5: ǫ vs. bm plots of a) MW = 80,400 g/mol and b) MW = 446,500 g/mol for different rheological probes.

identify three distinct regions. In the first region, the low molecular weight region, MW ≤ 80,000 g/mol, the complex viscosity |η∗(ω)| is the most sensitive method. If we trace this probe through the succeeding molecular weight regions, its sensitivity decreases steadily, and at very high molecular weights it has the lowest sensitivity; this clearly shows the importance of investigating the sensitivity of the different probes in a quantitative and discrete manner.

In the second region, the middle molecular weight region, the complex viscosity normalized by η0 has the highest sensitivity. For larger molecular weights, the loss tangent is the most sensitive probe.

Figure 3.6(b) corresponds to a branching level of bm = 1.74. In these configurations we observe the same trend as in figure 3.6(a), except that in the middle molecular weight region the loss tangent also plays a role alongside the complex viscosity normalized by η0. In the high molecular weight region, the loss tangent is again the dominant, most sensitive probe for LCB detection. In conclusion, although figure 3.4 gives a good indication of the most sensitive probes across the MW and bm dimensions, it is more precise and conclusive to also investigate the cross sections of this plot (figures 3.5 and 3.6) when analyzing a sample. Doing so reveals whether other sensitive probes can be used for the configuration under investigation. Often there are restrictions, such as the limited availability of measurement devices; considering the cross sections is then useful for choosing alternative methods that yield relatively close responses.

3.2.2 LCB Measurement

In the previous section, a sensitivity analysis of LCB detection was carried out, and the most sensitive probes for LCB detection were determined in different regions. The sensitivity of LCB measurement is another important topic that should be investigated. LCB measurement sensitivity is determined by investigating the derivative of ǫ with respect to LCB, dǫ(θ)/dbm, at quantized values of the molecular weight. Figure 3.7 shows the results for this derivative for the rheological probes; the pattern is very complex. In general, the loss tangent has the highest sensitivity at large molecular weights, and the complex viscosity normalized by η0 is the most sensitive probe in the middle range of molecular weights. The complex viscosity and the complex viscosity normalized by the η0 expected from the molar mass, η0^lin, are the two most sensitive probes in the



Figure 3.6: ǫ vs. MW plots of a) bm = 0.63 and b) bm = 1.74 for different rheological probes.

low molecular weight region. As shown in the figure, the complex viscosity normalized by η0 is also the sensitive probe in some configurations of the high molecular weight region. We should therefore investigate the sensitivity involving the zero-shear viscosity in more depth, which requires studying the cross sections of figure 3.7; the white lines on this figure show the investigated cross sections, which are presented in figure 3.8.


Figure 3.7: LCB measurement sensitivity map; the probe shown in each area has the largest dǫ(θ)/dbm, and therefore the largest sensitivity. White lines mark the specific configurations whose cross sections at the indicated MW are investigated.

Figures 3.8(a) and (b) show the derivatives of the error functions, dǫ(θ)/dbm, at MW = 179,000 and 432,500 g/mol, respectively. Figure 3.8(a), for the lower molecular weight sample (MW = 179,000 g/mol), indicates that the derivative for the complex viscosity normalized by η0 has the largest value among all probes. In figure 3.8(b), for the higher molecular weight sample (MW = 432,500 g/mol), the loss tangent and the complex viscosity normalized by η0 alternate as the most



sensitive probe for LCB measurement.

Figure 3.8: Derivative of the error function with respect to branching, dǫ(θ)/dbm, at a) MW = 179,000 g/mol and b) MW = 432,500 g/mol for different rheological probes.

Figures 3.7 and 3.8 lead us to conclude that we cannot easily segregate regions and identify one most sensitive probe for each region; the regions are entangled, and a narrow range of configurations is needed to identify the most sensitive probe for LCB measurement.

CHAPTER 4

DETERMINATION OF THE CONTINUOUS AND DISCRETE RELAXATION TIME SPECTRUM

In the previous chapters we discussed inferring the SCB and the sensitivity analysis of detecting and measuring LCB. These two structural parameters are important in studying polymer behavior, since their presence can cause dramatic changes in the crystallization and viscoelastic responses of the material. The relaxation time spectrum is another characteristic quantity that describes the viscoelastic properties of polymer solutions and melts. In this chapter, a computer program, ReSpect, is introduced that infers the continuous and discrete relaxation spectra from dynamic moduli measurements.

4.1 Relaxation Spectra

The relaxation time spectrum is a characteristic quantity describing the viscoelastic properties of polymer solutions and polymer melts. Two different uses of spectra can be distinguished. The most widespread is the conversion of one material function into another; the other, more fundamental, use is to determine spectra in order to gain deeper insight into the relaxation mechanisms of the material. Because of the ill-posedness issue, this approach has not been pursued systematically so far.[100] A spectrum reflects the molecular processes occurring at a certain time scale. Among the features reflected in the spectrum are the molecular structure in a broad sense, characterized by the molar mass distribution and long-chain branching architecture; molecular motions (including reptation, Rouse and glassy modes); and superstructures such as a crystalline lamellar structure[27,33]. A rheological measurement can be understood as the response of the structure, represented by the spectrum, to the mechanical stimulation. There are two different approaches to spectrum determination. The physically correct spectrum is a continuous function from vanishing times to a certain terminal relaxation time, and there is only one physically correct continuous spectrum. Because of the ill-posedness of the spectrum calculations,

the continuous spectrum can only be approximated. The more practical approach is to use a discrete spectrum, i.e., the real continuous spectrum approximated by a number of modes. A discrete spectrum contains only part of the information in the continuous spectrum. A discrete mode consists of a relaxation time and the corresponding relaxation strength, and a given discrete spectrum is only one of many that lead to the same rheological response. The common method for calculating a spectrum can be described as follows: a benchmark spectrum is created, or varied if it already exists; data are calculated from this spectrum in an attempt to describe the input dataset; the deviation between the experimental and calculated data is computed; if the new spectrum fits the data better than the old one, it is adopted as the new benchmark, otherwise it is discarded. This loop is repeated until a given stopping criterion is reached. The programs published so far differ mainly in the spacing of the discrete modes, the calculation of the error, the spectrum variation routine, and the stopping criteria.[27] The complex modulus (G∗) is linked to the storage modulus G′ and loss modulus G′′ by:

G∗(ω) = G′(ω) + iG′′(ω)    (4.1)

The relaxation spectrum and the complex modulus are linked mathematically by:

G′(ω) = ∫_{−∞}^{∞} h(τ) [ω²τ²/(1 + ω²τ²)] d ln τ = Σ_{i=1}^{N} g_i ω²τ_i²/(1 + ω²τ_i²)    (4.2)

G′′(ω) = ∫_{−∞}^{∞} h(τ) [ωτ/(1 + ω²τ²)] d ln τ = Σ_{i=1}^{N} g_i ωτ_i/(1 + ω²τ_i²)    (4.3)

where the g_i are the strengths of the modes of the discrete spectrum and N is the number of modes. The distribution of relaxation times is an important factor in discrete spectrum calculations: a discrete relaxation spectrum requires about 1 to 1.5 relaxation times per decade to describe the data with sufficient accuracy. Fewer modes lead to waviness of the material functions, while a greater density of modes enables the algorithm to account for data artifacts and thus does not enhance the physical meaning of the spectrum.[11,128]
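The discrete-spectrum sums in equations 4.2 and 4.3 can be sketched as follows (a minimal illustration; the function name and interface are assumptions, not the ReSpect implementation):

```python
import numpy as np

def moduli_from_discrete_spectrum(omega, g, tau):
    """Storage and loss moduli (eqs. 4.2 and 4.3) for a discrete
    spectrum with mode strengths g[i] and relaxation times tau[i],
    evaluated at the frequencies in omega."""
    omega = np.asarray(omega, dtype=float)[:, None]   # shape (n_freq, 1)
    g, tau = np.asarray(g, dtype=float), np.asarray(tau, dtype=float)
    wt = omega * tau                                  # shape (n_freq, n_modes)
    G1 = np.sum(g * wt**2 / (1 + wt**2), axis=1)      # G'(omega)
    G2 = np.sum(g * wt / (1 + wt**2), axis=1)         # G''(omega)
    return G1, G2
```

For a single mode, G2/ω approaches gτ at low frequency, consistent with the zero-shear viscosity limit discussed in chapter 3.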

4.2 Method

The current work is based on the model proposed by Takeh[113], who implemented an open-source, multi-platform computer program, ReSpect, to infer continuous and discrete relaxation spectra from dynamic moduli measurements. Nonlinear Tikhonov regularization and the Levenberg-Marquardt method are employed to extract the continuous relaxation spectrum, and a novel algorithm is proposed to obtain the discrete relaxation spectrum. The experimental modulus G∗(ω) is usually reported at a set of discrete frequencies {ωi, G′e(ωi), G′′e(ωi)}, where the subscript "e" denotes "experiment". The ultimate purpose of this work is to find continuous or discrete relaxation spectra such that the storage and loss moduli defined in equations 4.2 and 4.3 are approximately equal to the respective experimental values.

4.2.1 Continuous Relaxation Spectrum

As mentioned earlier, the nonlinear Tikhonov regularization strategy of Honerkamp and Weese[45] is utilized in this work, with the nonlinear substitution h(τ) = e^{H(τ)}. Although this substitution makes the problem nonlinear and harder to solve, it allows us to handle data spanning a wider frequency range and automatically ensures that h(τ) is positive. The cost function is defined as:

V(λ) = Σ_{i=1}^{n} [(G′e(ωi) − G′(ωi; H(τ)))/G′e(ωi)]² + Σ_{i=1}^{n} [(G′′e(ωi) − G′′(ωi; H(τ)))/G′′e(ωi)]² + λ ∫_{−∞}^{∞} [d²H(τ)/dτ²]² d ln τ    (4.4)

where the first two terms on the right-hand side represent the relative squared error between the experimental and inferred complex moduli, written concisely as ρ² = ‖(G∗(H(τ)) − G∗e)/G∗e‖². The last term in equation 4.4 is the regularization term, which consists of the regularization parameter λ and the norm of the curvature, η² = ‖d²H(τ)/dτ²‖²; thus, in shorthand, V(λ) = ρ² + λη². The regularization term imposes a constraint from a priori knowledge, i.e., information about the spectrum that is independent of the data, and the regularization parameter λ weights this constraint against the constraints from the data. As λ increases, the smoothness condition dominates the cost function, yielding a smooth solution that fits the experimental data poorly; as λ decreases, the smoothness condition is ignored and we recover the original non-regularized problem, which is ill-conditioned and leads to a rough H(τ) that is sensitive to noise in the data. Therefore,

Figure 4.1: A typical plot of ρ vs. η, or L-curve. As λ is increased, the monotonically decreasing curve shows a sharp initial decline in which η falls steeply while ρ barely increases. λc (red circle) is determined using a heuristic and lies in the corner region where the two regimes meet.[113]

we need to seek an optimal λ (λc) that avoids the two extremes. There are several different methods to choose λc; here the L-curve method is used in conjunction with a heuristic[39,40]. For a given λ, we can find the function H(τ) that minimizes V(λ) using the Levenberg-Marquardt method[43]. We then compute the ρ(λ) and η(λ) that correspond to Hλ(τ) in equation 4.4, and repeat this for a range of λ values to find λc on the L-curve. Figure 4.1 shows a typical plot of ρ against η.
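The cost function of equation 4.4 can be sketched for a spectrum sampled on a uniform ln τ grid (a simplified illustration, not the ReSpect implementation: the integrals are approximated by sums, and the curvature penalty is taken with respect to ln τ on the discrete grid):

```python
import numpy as np

def cost_V(lam, H, log_tau, omega, G1e, G2e):
    """Cost function in the spirit of equation 4.4 for a spectrum H
    sampled on a uniform log(tau) grid: relative squared moduli errors
    plus lam times the squared norm of the curvature of H."""
    tau = np.exp(log_tau)
    h = np.exp(H)                                        # h(tau) = e^H > 0
    dlt = log_tau[1] - log_tau[0]
    wt = omega[:, None] * tau[None, :]
    G1 = np.sum(h * wt**2 / (1 + wt**2), axis=1) * dlt   # eq. 4.2
    G2 = np.sum(h * wt / (1 + wt**2), axis=1) * dlt      # eq. 4.3
    rho2 = np.sum(((G1e - G1) / G1e) ** 2) + np.sum(((G2e - G2) / G2e) ** 2)
    d2H = np.gradient(np.gradient(H, dlt), dlt)          # curvature of H
    eta2 = np.sum(d2H**2) * dlt
    return rho2 + lam * eta2
```

A spectrum that reproduces the data exactly and has no curvature gives V = 0; increasing λ penalizes rough H(τ) more strongly, which is the trade-off the L-curve is used to balance.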

In this program, the heuristic ρ(λc) = 1.05 ρ(λmin) is used to determine λc. Once λc is found, the CRS is computed as h(τ) = exp(Hλc(τ)) from the experimental complex moduli. The algorithm for finding the CRS can be summarized as follows:

• Select a range of small to large λ values.

• For each λ, find the H(τ) that minimizes V(λ), and store the corresponding ρ(λ) and η(λ).

• Find the λc using L-curve.



Figure 4.2: (a) Continuous relaxation spectrum, H(τ). (b) Experimental and inferred G′(ω) and G′′(ω); symbols (blue circles and green squares, respectively) represent the experimental data, and solid lines the dynamic moduli obtained from the inferred spectrum, H(τ).

• Compute the CRS as h(τ) = exp(Hλc(τ)).
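The λ-selection heuristic in the steps above can be sketched as follows (our own minimal illustration, assuming ρ has already been tabulated over the candidate λ values):

```python
import numpy as np

def select_lambda_c(lambdas, rhos):
    """Heuristic of the text: among the candidate regularization
    parameters, pick the largest lambda whose residual satisfies
    rho(lambda) <= 1.05 * rho(lambda_min), i.e. give up at most 5%
    in fit quality in exchange for a smoother spectrum."""
    lambdas = np.asarray(lambdas, dtype=float)
    rhos = np.asarray(rhos, dtype=float)
    ok = rhos <= 1.05 * rhos.min()
    return lambdas[ok].max()
```

Since ρ(λ) grows with λ, this selects the most strongly regularized solution that still fits the data almost as well as the best fit.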

Figure 4.2(a) shows the continuous relaxation spectrum inferred using the algorithm described above; figure 4.2(b) shows the experimental and inferred dynamic moduli, where the inferred moduli were calculated from the inferred continuous relaxation spectrum.

4.2.1.1 Edge-preserving regularization method. Regularization methods are used for the numerical determination of a continuous relaxation spectrum. Tikhonov regularization, the most popular method, assumes a smooth relaxation time spectrum; however, the common Tikhonov method is not able to infer sharp edges at the transitions between linear regimes. Roths et al.[81] illustrated this phenomenon in figure 4.3 and proposed the following form of the regularization term:

ηα² = ∫_{−∞}^{∞} [H′′(τ)]² / √(1 + [α·H′′(τ)]²) d ln τ    (4.5)

where H′′(τ) denotes the second derivative of the relaxation spectrum. The regularization term favors smooth spectra that have a small second derivative (small curvature on a logarithmic scale), as figure 4.3(a) shows clearly. The solid line in this figure represents the common Tikhonov regularization



term's dependence on the relaxation spectra presented in figure 4.3(b).

Figure 4.3: (a) Dependence of the common regularization term ηα² on the relaxation spectrum for different values of α. (b) Artificially constructed relaxation spectra, h(τ).[81]

Figure 4.3(b) shows artificially constructed relaxation spectra with different curvatures. The parameter t indicates the width of the transition area between the linear regimes, i.e., the sharpness of the spectrum. The curvature of the spectra is proportional to 1/t; the spectrum with t = 1 is therefore very smooth, while the spectrum with t = 0 shows an edge. For t = 1 the spectrum is smooth and, as figure 4.3(a) shows, ηα² is small for this type of spectrum regardless of the value of α. As t decreases, ηα² increases to an extent that depends on α; for α = 0 it diverges to ∞ at t = 0 because of the quadratic penalty in the α = 0 term. For α = 0, smooth spectra are thus favored over edged spectra, which is the basis of the common Tikhonov regularization method; it is therefore very unlikely that an edge is recovered, and even a sharply bent spectrum is not inferred. The common regularization method consequently cannot construct these spectrum types, and instead estimates artificial peaks in the surroundings of the edge rather than inferring the real edge. These restrictions motivated Roths et al.[81] to propose a new method based on the nonlinear Tikhonov regularization method that assumes a piecewise smooth spectrum instead of a continuously smooth one. In this way, the spectrum is still smooth while it can have edges where necessary. The α in equation 4.5 is the edge-preserving parameter and is responsible for the shift from a common regularization method to an edge-preserving regularization method. Small values of H′′(τ) (i.e., smooth spectra) are still penalized quadratically by the new regularization term, similar to the common Tikhonov term; on the other hand, as figure 4.3(a) indicates, edges may occur where they are required, i.e., for t → 0, ηα² takes a finite value instead of diverging as it does for α = 0. The main use of this method is for high MW, nearly monodisperse samples; it decreases the chance of constructing artificial peaks instead of edges and leads to more accurately inferred spectra. These advantages come with no adverse effects relative to the previous method, since the new term converges to the common Tikhonov regularization term in the appropriate limit. Figure 4.4 shows the continuous relaxation spectra of a high molecular weight monodisperse polystyrene with MW = 275,000 g/mol and a polydispersity of 1.07 (polydispersity = MW/Mn, where MW is the weight-averaged and Mn the number-averaged molecular weight). In both cases the estimated spectrum decreases linearly for small relaxation times and shows a minimum at τ ≈ 10⁻¹ s, which is the relaxation time of the entanglement strands. Figure 4.4(a) shows the results of the common Tikhonov regularization method and figure 4.4(b) those of the edge-preserving regularization method. The error bars in this figure mark the averaged estimated spectrum over 1000 different realizations; an edge-preserving parameter of α = 2 is used for the edge-preserving results.
Figure 4.4(a), in contrast to figure 4.4(b), shows additional peaks and a terminal peak instead of an edge. The estimated edge-preserving spectrum corresponds very well to the expectations for a monodisperse, high molecular weight sample. Furthermore, the terminal relaxation time is in good agreement with the observation of Honerkamp and Weese[45].

Figure 4.4: Relaxation time spectra of two high molecular weight monodisperse polystyrene samples, (a) common regularization method, (b) edge-preserving regularization method (α = 2).[81] The dashed lines mark the range of relaxation times [τmin, τmax] in which the data characterize the spectrum; the reconstructed spectrum is reliable only inside this range. The error bars indicate the averaged estimated spectrum over 1000 realizations.

Figure 4.5: Inferred continuous relaxation spectrum using the maximum curvature method to determine the regularization parameter λ.

4.2.1.2 Maximum curvature of the L-curve. The L-curve consists of two distinct parts: the vertical part corresponds to solutions in which the smoothness function η(λ) dominates V(λ), and the horizontal part corresponds to solutions that are dominated by the regularization errors. As mentioned earlier, the main reason for constructing the L-curve is to choose a corner point that maintains a fair balance between the overly smoothed and the non-regularized solutions. A heuristic method to determine the regularization parameter was introduced above; now we use the maximum-curvature point of the L-curve to find the regularization parameter, as originally proposed by Hansen.[39] In practice, we are limited to a discrete and finite set of (ρ, η) points on the L-curve. The following algorithm can be used to determine the maximum-curvature point on the L-curve:

1. Smooth the available L-curve points, treating them as control points for a cubic spline curve with knots 1 … N+4, to find an approximate set of (ρi, ηi, λi) points, where λ is the corresponding regularization parameter and N is the number of L-curve points.

2. Compute the maximum curvature of the L-curve and find the corresponding regularization parameter, λ0.

3. Solve the regularization problem with the new regularization parameter, and add the corresponding point (ρ0, η0, λ0) to the L-curve.

4. Repeat until convergence.
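The first two steps of this search can be sketched with a parametric spline (an illustrative Python version under the reconstruction above, not ReSpect's actual lcurve routine; it performs a single pass rather than the full iteration of steps 3 and 4):

```python
import numpy as np
from scipy.interpolate import splprep, splev

def max_curvature_point(rho, eta, lam, s=0.0):
    """Locate the regularization parameter at the point of maximum
    curvature on a discrete L-curve (hypothetical helper; `s` is the
    spline smoothing factor, 0 = interpolation)."""
    # Work in log-log coordinates, where the L shape is apparent.
    x, y = np.log10(np.asarray(rho)), np.log10(np.asarray(eta))
    # Step 1: fit a parametric cubic spline through the points.
    tck, u = splprep([x, y], k=3, s=s)
    uf = np.linspace(0.0, 1.0, 500)
    dx, dy = splev(uf, tck, der=1)
    ddx, ddy = splev(uf, tck, der=2)
    # Step 2: curvature kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2);
    # the sharpest bend (largest |kappa|) marks the corner.
    kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    # Map the corner back to the nearest original point, and return
    # its regularization parameter lambda_0.
    i = np.argmin(np.abs(u - uf[np.argmax(np.abs(kappa))]))
    return lam[i]
```

For a synthetic L-curve built from a near-vertical and a near-horizontal branch, the routine returns the λ of the point closest to the corner.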

Figure 4.5 shows the inferred continuous relaxation spectrum using the maximum curvature method.

4.2.2 Discrete Relaxation Spectrum

The CRS, h(τ), is used to determine the DRS, {gi, τi}, 1 ≤ i ≤ N, where N is the number of modes in the DRS. As mentioned before, the continuous function h(τ) is numerically represented on a grid as {hi, τi}, 1 ≤ i ≤ nτ. To avoid notational confusion between the discrete modes of the DRS and the discrete representation of the CRS, we use the symbol s instead of τ when referring to the CRS in this section. Thus, the CRS is denoted by h(s) = e^H(s), and discretized as {hi, si}, 1 ≤ i ≤ ns.

There are several strategies to determine {gi, τi}. The simplest strategy involves setting N equal to the number of decades of frequency spanned by the experimental G∗(ω), and spacing the τi evenly on a logarithmic scale.[32,50] Once the τi are determined, a linear least-squares problem can be set up using the right-hand sides of equations 4.2 and 4.3. The IRIS program[11] uses a more sophisticated method, in which both the number and the placement of the modes τi are treated as adjustable parameters, which results in a parsimonious solution. In this work, an intermediate approach is chosen, which allows a potentially parsimonious choice for the number and location of the {τi}. The overall idea is to determine the weight of each node of the CRS (si), in order to figure out which nodes to eliminate or merge, so as to end up with N ≪ ns discrete modes. To find the importance of sj, its contribution to G∗(ω) = G′(ω) + iG′′(ω), relative to the contribution of the other nodes sk, must be measured. The following quantity represents the contribution of the mode at sj to G′(ωi) and G′′(ωi):

ωij = [ h(sj) ωi²sj²/(1 + ωi²sj²) Δlog s ] / G′(ωi) + [ h(sj) ωisj/(1 + ωi²sj²) Δlog s ] / G′′(ωi)    (4.6)

since

G′(ωj) = Δlog s Σ_{i=1}^{ns} h(si) ωj²si²/(1 + ωj²si²)
G′′(ωj) = Δlog s Σ_{i=1}^{ns} h(si) ωjsi/(1 + ωj²si²)    (4.7)

The quantity ωij is the contribution of the mode sj to G∗(ωi). The weight of mode sj, ωj, may be calculated by summing up its contributions over all frequencies, i.e., ωj = Σ_{i=1}^{n} ωij. In ReSpect, the user is allowed to blend this distribution of weights with a flat profile (constant

ωj, implying uniformly spaced modes) using a parameter αf via:

ωj = (1 − αf) Σ_{i=1}^{n} ωij + (αf/ns) Σ_{j=1}^{ns} Σ_{i=1}^{n} ωij    (4.8)

The first term assigns weights to individual nodes sj in accordance with their contribution to G∗(ω).

The second term corresponds to the flat profile. By controlling αf, we can switch between these two terms: when αf = 0, the first term sets the weights ωj. On the other extreme, by choosing

αf = 1, we have a flat profile which leads to equispaced discrete modes τi.
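As a concrete illustration of equations 4.6 to 4.8, the node weights could be computed as below (a Python sketch based on the equations as reconstructed here, not ReSpect's actual GetWeights routine; the natural-log grid spacing for Δlog s is an assumption):

```python
import numpy as np

def mode_weights(omega, s, h, Gp, Gpp, alpha_f=0.0):
    """Weight of each CRS node s_j: its normalized contribution to
    G'(omega_i) and G''(omega_i), summed over all frequencies, and
    blended with a flat profile via alpha_f (illustrative sketch)."""
    dlogs = np.log(s[1] / s[0])                  # uniform log grid assumed
    W, S = np.meshgrid(omega, s, indexing="ij")  # shape (n, ns)
    x = (W * S) ** 2 / (1.0 + (W * S) ** 2)      # storage-modulus kernel
    y = (W * S) / (1.0 + (W * S) ** 2)           # loss-modulus kernel
    # omega_ij: relative contribution of node s_j to G*(omega_i), eq. 4.6
    contrib = h * dlogs * (x / Gp[:, None] + y / Gpp[:, None])
    w = contrib.sum(axis=0)                      # sum over frequencies
    # flat profile with the same total weight, blended per eq. 4.8
    flat = np.full_like(w, w.sum() / len(s))
    return (1.0 - alpha_f) * w + alpha_f * flat
```

With alpha_f = 0 the weights follow the contributions to G∗(ω); with alpha_f = 1 they are uniform, which leads to equispaced modes.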

Once the discrete modes τi are fixed, the gi can be found by solving a linear least-squares problem.

e = [1, 1, …, 1]ᵀ ≈ [ K′ ; K′′ ] g = Kg    (4.9)

where e is the 2n × 1 vector of ones, g = (g1, …, gN)ᵀ is the N × 1 vector of mode strengths, and K is the 2n × N matrix obtained by stacking the n × N blocks K′ and K′′, whose elements are

K′ij = (1/G′e(ωi)) ωi²τj²/(1 + ωi²τj²)
K′′ij = (1/G′′e(ωi)) ωiτj/(1 + ωi²τj²)    (4.10)

Thus, g = (KᵀK)⁻¹Kᵀe. The condition number of K and the relative error ǫ(N) = ‖(G∗d − G∗e)/G∗e‖² can then be measured, where G∗d is obtained by evaluating equations 4.2 and 4.3 on the regressed DRS. We repeat this exercise for a range of N. As N increases, the error decreases and the condition number increases (this is shown in figure 4.6). The optimal number of discrete modes represents a trade-off between these two trends.

Nopt is found by minimizing the following cost function:

C(N) = (1 − αopt)(ǫ(N) − min ǫ(N))² + αopt (cond(N)/min cond(N))²    (4.11)

where min ǫ(N) is the minimum value of ǫ(N) over the range of N explored. When αopt = 1 (0), the minimum condition number (error) determines Nopt.
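Putting the pieces together, the least-squares fit of equations 4.9 and 4.10 and the cost of equation 4.11 can be sketched as follows (illustrative Python; the function names are hypothetical and this is a reconstruction, not ReSpect's implementation):

```python
import numpy as np

def fit_drs(omega, Gp, Gpp, tau):
    """Solve the normalized least-squares problem e = K g of equations
    4.9-4.10 for the mode strengths g; also return cond(K)."""
    W, T = np.meshgrid(omega, tau, indexing="ij")
    Kp = (W * T) ** 2 / (1.0 + (W * T) ** 2) / Gp[:, None]   # K' block
    Kpp = (W * T) / (1.0 + (W * T) ** 2) / Gpp[:, None]      # K'' block
    K = np.vstack([Kp, Kpp])                                 # (2n x N)
    e = np.ones(2 * len(omega))                              # vector of ones
    g, *_ = np.linalg.lstsq(K, e, rcond=None)
    return g, np.linalg.cond(K)

def cost(err, cond, alpha_opt=0.5):
    """Equation 4.11: trade off the fitting error against conditioning;
    N_opt is the argmin of this cost over the values of N tried."""
    return ((1.0 - alpha_opt) * (err - err.min()) ** 2
            + alpha_opt * (cond / cond.min()) ** 2)
```

With alpha_opt = 0 the cost is minimized by the smallest error (favoring large N); with alpha_opt = 1, by the smallest condition number (favoring small N).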

Figure 4.6: Plot to determine the optimum number of discrete modes (Nopt). The relative error ǫ(N) between the input data and the G∗(ω) inferred from the DRS is plotted on the left y-axis, while the condition number of the resulting linear least-squares problem is plotted on a logarithmic scale on the right y-axis.[113]


Figure 4.7: The blue line is the continuous relaxation spectrum, h(τ), and the black squares represent the discrete relaxation spectrum with N = 9 modes.

4.2.3 Error Estimation

Measuring dynamic mechanical data, or any experimental data in general, is associated with different kinds of errors. These errors should either be approximated using proper statistical approaches, or a noise should be added to the data to observe its effect on the inferred spectrum. We take the second route in this program, and provide an opportunity for the user to add the desired level of noise. The added noise is normal random noise with mean (µ) equal to the experimental value and an arbitrary standard deviation (σ) determined by the user. The estimated spectrum is obtained from a user-defined number of independent realizations. Error bars show the minimum, maximum and average at each point. Figure 4.8 represents the estimated results over 100 realizations. The regularization parameter is calculated using the L-curve method. In figure 4.8(a) the added errors have σ = 2, while in figure 4.8(b) σ = 5. This confirms what we expected: by increasing the noise level in the data, the inferred relaxation spectrum shows more oscillations and has a broader range of errors at each point. We need to investigate the effect of added noise from different aspects. One important aspect is the relationship between the added noise and different values of the regularization parameter, and its effect on the estimated relaxation spectra. Figure 4.9 represents scenarios of estimated relaxation spectra with different regularization parameters and noise levels. Figures 4.9(a) and (c) show the results for λ = 10⁻⁷; figure 4.9(a) corresponds to σ = 0.02, while 4.9(c) corresponds to σ = 0.05. Figures 4.9(b) and (d) show the results for λ = 10⁻²; figure 4.9(b) corresponds to σ = 0.02, while 4.9(d) corresponds to σ = 0.05. As mentioned before, increasing the regularization parameter increases the smoothness of the inferred relaxation spectra, and, as we can see in these figures, increasing the smoothness decreases the fluctuations of the estimated relaxation spectra.
This conclusion underlines the importance of finding a proper regularization parameter; therefore, extra care should be devoted to choosing a proper method for calculating the regularization parameter.
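The realization loop described above can be sketched generically (illustrative Python; `estimator` is a placeholder for the regularized spectrum inference, and the function name is hypothetical):

```python
import numpy as np

def noise_study(data, estimator, sigma, n_real=100, seed=0):
    """Monte-Carlo error bars: perturb the data with Gaussian noise of
    standard deviation sigma (mean equal to the measured value, as in
    the text), re-run the inference, and report the minimum, mean and
    maximum of the estimate at each point."""
    rng = np.random.default_rng(seed)
    runs = np.array([estimator(data + rng.normal(0.0, sigma, data.shape))
                     for _ in range(n_real)])
    return runs.min(axis=0), runs.mean(axis=0), runs.max(axis=0)
```

In ReSpect the estimator would be the regularized spectrum solve itself, so the min/max curves become the error bars shown in figure 4.8.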

4.3 Program

The program ReSpect has a graphical user interface (GUI) for Matlab users. It can also be used directly from the command line, both in Matlab and in GNU Octave. To use ReSpect from the command line, the user specifies the desired parameters by modifying the default parameters in


Figure 4.8: Estimated continuous relaxation spectra over 100 noise realizations, (a) σ = 2, (b) σ = 5.


Figure 4.9: Estimated continuous relaxation spectra with different regularization parameters and levels of noise: (a) λ = 10⁻⁷, σ = 0.02, (b) λ = 10⁻², σ = 0.02, (c) λ = 10⁻⁷, σ = 0.05 and (d) λ = 10⁻², σ = 0.05.

SetParameters. After setting the values of the parameters, the user calls the function contSpec() or discSpec() to obtain the CRS or DRS, respectively. Since the DRS is an approximation built on the CRS, the latter has to be computed before the computation of the DRS can be attempted. The ReSpect software is available for free download1.

4.3.1 Input Data

The experimental data G∗(ω) is reported at a set of discrete frequencies {ωi, G′(ωi), G′′(ωi)}. It is saved as an ASCII text file with 3 columns. The first, second and third columns contain ω, G′(ω) and G′′(ω), respectively. This text file is labeled Gst.dat by default, although this can be changed if required through the SetParameters function. Thus, an example of Gst.dat looks like:

0.0370   12.4048   226.2025
0.0406   14.6341   246.8058
0.0455   17.2642   269.2831
0.0489   20.3672   293.8066
0.0536   24.0280   320.5642
0.0588   28.3462   349.7587
0.0645   33.4399   381.6140
0.0708   39.4517   416.3851
0.0777   46.5402   454.2931
0.0852   54.8993   495.6178
...
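A minimal reader for this three-column format might look like the following (illustrative Python; ReSpect itself reads the file in Matlab, presumably via its GetExpData routine):

```python
import io
import numpy as np

def read_gst(source):
    """Read dynamic-modulus data in the Gst.dat format: one row per
    frequency, with whitespace-separated columns omega, G', G''."""
    data = np.loadtxt(source)
    return data[:, 0], data[:, 1], data[:, 2]

# The first rows of the sample file above would parse as:
sample = io.StringIO("""0.0370 12.4048 226.2025
0.0406 14.6341 246.8058
0.0455 17.2642 269.2831""")
omega, Gp, Gpp = read_gst(sample)
```

In place of the StringIO object, a filename such as "Gst.dat" can be passed directly.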

4.3.2 Interfaces

• Command line interface: By calling contSpec and discSpec, we get the CRS and DRS, respectively. Note that the CRS has to be computed before the DRS. Default settings can be adjusted by modifying the SetParameters function.

• Graphical user interface (GUI): This interface is accessible only in Matlab. The GUI can be opened by calling the function specgui. The advantage of the GUI is that all the adjustable parameters can be modified through the GUI, instead of through the function SetParameters.

1 http://www.mathworks.com/matlabcentral/fileexchange/40458-respect

4.3.3 Functions

The functions in the folder are listed and discussed below. Two functions (SetParameters and kernel) are shared between the CRS and DRS calculations. The rest are used by either the CRS or the DRS calculation.

• User functions: A user who does not care about the internal workings of ReSpect should only be concerned with three functions.

1. contSpec: Computes the CRS (H(τ)). Note: H(τ) is a modified form of the spectrum (e^H(τ) = h(τ)). This is done to deal with data spanning a large frequency range, and to ensure that the spectrum h(τ) is positive. 2. discSpec: Uses the CRS to determine an approximate DRS. 3. specgui: Fires up the GUI, from which the parameters can be adjusted and the CRS and DRS can be computed.

• Internal functions: These internal functions are called by the user functions. A brief descrip- tion is provided for each code in the program files for someone interested in modifying the source code.

– GetExpData – GetWeights – GridDensity – InitializeH – kernel – kernelD – lcurve – LevenMarq – MaxwellModes – PlotMaxwellModes – updateplot

• The parameter file: SetParameters sets the parameters for the CRS or DRS calculation. A detailed description of the parameters is provided in the README file associated with the program.

– Overall parameters
– Continuous spectrum parameters
– Discrete spectrum parameters

Figure 4.10 shows the overall structure of ReSpect. Figure 4.11 shows the GUI panel; several parameters that previously could only be adjusted through the SetParameters function can be adjusted on this panel. The first decision is whether to extract the CRS or the DRS, which can be selected at the top of the panel. The number of grid points and λc for the CRS, and the mode calculation for the DRS, can also be adjusted on this panel. Several plotting options (red box in figure 4.12) are included, and there are two panels for plotting. Figures 4.13 and 4.14 show the continuous (discrete) spectra and the dynamic modulus calculated and plotted by the GUI, respectively.

Figure 4.10: ReSpect structure; the relation between the two regimes and their sub-category functions is shown.

Figure 4.11: ReSpect GUI panel in Matlab.

Figure 4.12: Plotting options (shown in the red box) for CRS and DRS.

Figure 4.13: The left plot shows the dynamic modulus data: green symbols represent the experimental dynamic modulus, and lines represent the dynamic modulus from the corresponding CRS. The right plot shows the extracted continuous relaxation spectrum.

Figure 4.14: The left plot shows the dynamic modulus data: green symbols represent the experimental dynamic modulus, and lines represent the dynamic modulus from the corresponding DRS. The right plot shows the extracted discrete relaxation spectrum with N = 9 modes.

CHAPTER 5

CONCLUSION

5.1 Chemical Composition Distribution

The semi-empirical mathematical model for Crystaf proposed by Anantawaraskul et al.[3] uses the comonomer content of a random copolymer, together with the operating conditions, to determine the Crystaf profile. In practice, the goal is to solve the corresponding inverse problem: given the operating conditions and the Crystaf profile, we seek to determine the comonomer content. The standard solution to this problem is to use calibration curves. However, this method suffers from a few drawbacks, such as the use of only the "peak" temperature rather than the entire Crystaf profile, the need to build a calibration curve for each operating protocol, and the inability to characterize the uncertainty associated with the estimated φ. Many of these shortcomings can be fixed. In this work, the parameter set in the Crystaf model was expanded from four to nine parameters. This enhanced the ability of the model to capture dependencies on operating conditions, and liberated us from having to build a new calibration curve at each operating condition. Furthermore, we compared the entire experimental and simulated Crystaf profiles, rather than a single point on the curve, which extracts more detail from the Crystaf profile. We trained the parameters of the model on a subset of the available data on ethylene/1-hexene, and tested the predictive ability on the rest. In general, increasing the number of parameters does not necessarily improve the predictive ability, since the additional parameters may simply fit the noise in the training data. Here, however, we found this not to be the case. The expanded parameter set was able to adequately capture the variability with the operating conditions. Based on numerical experiments, two important results were derived for constructing good training datasets.
As the number of TDS was increased from 5 to 10, both the accuracy and the precision of the inference improved; this can be inferred from figures 2.9(a) and (b) in chapter 2. Thus, the more TDS, the better the predictive ability. This is expected, because by increasing the number of TDS we incorporate more detail, or a priori information, into the problem, which results in

increasing the precision and resolution of the parameter estimation, and therefore the predictability of the model. The other important conclusion concerns the selection of the TDS. We investigated low, medium, high, and equispaced regions, and found that it is better to use heterogeneous training datasets, in which a diversity of φ and rc are represented, than non-representative, homogeneous TDS drawn from a limited subspace of φ or rc. This supports the previous result: as we broaden our information, we incorporate more prior information into the training process, which results in better parameter estimation. Furthermore, as the quality of the TDS was changed from random to equispaced, measures of accuracy and precision of the inference also improved. This implies that if we already have a large library of training datasets available, their contribution to the cost function may have to be re-weighted to prevent oversampling from particular regions.

5.1.1 Future Work

• One of the main topics that should be considered for this model is incorporating samples with different comonomer types, such as 1-butene and 1-pentene, and investigating their effects. Adding different comonomer types would make the model more complex, and as a first step a good strategy for TDS configuration should be considered. As discussed earlier in chapter 1, the comonomer type affects Crystaf profiles; therefore we may also want to reformulate our model parameters in order to incorporate these effects.

• Although information about the molecular weight distribution is accessible through size-exclusion chromatography, it would be desirable to add it as a parameter to our model and transform our problem into a sampling problem. Using the data analysis proposed by Shanbhag[86], based on a Bayesian formulation and a Markov-chain Monte Carlo algorithm, this problem can be translated into a sampling problem. This method explores the

distribution of the desired parameters, MW and φ here, to characterize all possible solutions.

5.2 LCB Detection and Measurement

LCB detection and measurement have been long-standing issues in the polymer industry. Several different methods are available to detect and measure LCB in polymer samples. Each of these methods has its own drawbacks, which are discussed in chapter 1. Nevertheless, there has been no quantitative comparison of these methods against each other. Stadler[101] investigated different methods for the detection of sparse LCBs in metallocene-catalyzed polyethylenes in a qualitative manner. In the present work, a quantitative measure is proposed to compare

the different rheological methods for the detection of LCBs. To determine how well a rheological method works compared to the others, the error between the linear and the branched sample with the same molecular weight is defined. These normalized error measurements provide a handy tool to compare the methods, and to choose the most sensitive one in each region of the 2-D (MW, bm) plot. Apart from LCB detection, the measurement of LCB is a crucial topic in both research and industry. There is an extensive literature on the different methods for measuring LCB, which was discussed earlier. Shanbhag and Takeh[114] also proposed a computational method to infer LCB in metallocene-catalyzed polyethylenes. Instead of measuring the LCB, the sensitivity of different rheological parameters to LCB is investigated in this work.

5.2.1 Future Work

• Shanbhag[86] proposed an algorithm that was later used by Takeh[114] to infer the structural parameters of a sample. An MCMC sampling method was used to sample the structural parameters, with the storage and loss moduli serving as the rheological inputs for the Bayesian inference. Based on the sampling area, and on which method is the most sensitive in each area, the storage and loss moduli in that algorithm can be replaced with the most sensitive method to obtain the best results. Since a large area of structural configurations is covered in this work, it provides very valuable information for picking the best rheological method.

• As shown above, rheological methods do not have a consistent response over the entire region, and they show fluctuations. The underlying layers can be analyzed and prioritized to introduce a new scheme that offers prioritized sensitive probes for each region.

• Thermorheological behavior is very sensitive to LCB and SCB.[52,79,102,123,129] Kessner et al.[52] claim that the effects of SCBs and LCBs are additive, and that they can be distinguished if the comonomer type and comonomer content are known. Master curves can unveil the thermorheological complexities, and provide a proper route to LCB detection and measurement.

5.3 Continuous and Discrete Relaxation Time Spectrum

The relaxation spectrum is a characteristic quantity that describes the viscoelastic properties of polymer samples. Given the spectrum, one material function can be converted into another. Considering this need, an efficient, open-source, multi-platform computer program has been implemented to infer the continuous and discrete relaxation spectra from dynamic moduli data.

It has a GUI for ease of use by experimentalists. The nonlinear Tikhonov regularization method and the Levenberg-Marquardt method are employed to extract the spectra from the dynamic moduli measurements. Some heuristic methods have been employed in this program; there are default choices that can be modified to obtain the desired behavior. The SC-method and the maximum curvature method are implemented as extra options for calculating the regularization parameter. Other adjustments are also included for special cases. Relaxation spectra of nearly monodisperse, high-molecular-weight samples have sharp edges in the terminal regime, which cannot be retrieved using the common Tikhonov regularization method. Therefore, an edge-preserving method is also implemented in this program to handle these special cases and prevent artifacts. ReSpect is hosted on Matlab Central and is freely available.

5.3.1 Future Work

• Brabec et al.[19] proposed a modified technique for the recursive method, originally proposed by Emri and Tschoegl[32], for calculating a discrete spectrum. The algorithm is especially useful for samples with monodisperse and bimodal molecular weight distributions, and may be implemented in this program for such special cases.

• Anderssen and Davies[9] proposed a moving-average formula for the spacing of experimental data that optimizes the quality of the inferred discrete spectrum; adding this would be a useful improvement, as it increases the available spacing options.

• Cho et al.[98,99] proposed an algorithm for the continuous spectrum that is free of possibly negative values. The algorithm is simpler than the nonlinear regularization proposed by Honerkamp and Weese,[45] yet it retains the accuracy of the nonlinear regularization. The core idea of the algorithm is to use a fixed-point iteration that transforms the initial spectrum into a new spectrum closer to the least-squares solution.

BIBLIOGRAPHY

[1] S. Acierno and P. Van Puyvelde. Effect of short chain branching upon the crystallization of model polyamides-11. Polymer, 46(23):10331 – 10338, 2005.

[2] E. Adisson, M. Ribeiro, A. Deffieux, and M. Fontanille. Evaluation of the heterogeneity in lin- ear low-density polyethylene comonomer unit distribution by differential scanning calorimetry characterization of thermally treated samples. Polymer, 33(20):4337–4342, 1992.

[3] S. Anantawaraskul, P. Jirachaithorn, J. B. P. Soares, and J. Limtrakul. Mathematical model- ing of crystallization analysis fractionation of ethylene/1-hexene copolymers. J. Polym. Sci. B: Polym. Phys., 45(9):1010–1017, 2007.

[4] S. Anantawaraskul, J. B. P. Soares, P. Jirachaithorn, and J. Limtrakul. Mathematical mod- eling of crystallization analysis fractionation (Crystaf) of polyethylene. J. Polym. Sci. B: Polym. Phys., 44(19):2749–2759, 2006.

[5] S. Anantawaraskul, J. B. P. Soares, and P. M. Wood-Adams. Effect of operation parameters on temperature rising elution fractionation and crystallization analysis fractionation. J. Polym. Sci. B: Polym. Phys., 41(14):1762–1778, 2003.

[6] S. Anantawaraskul, J. B. P. Soares, and P. M. Wood-Adams. Fractionation of semicrystalline polymers by crystallization analysis fractionation and temperature rising elution fractiona- tion. In Polymer Analysis Polymer Theory, pages 1–54. Springer, 2005.

[7] S. Anantawaraskul, J. B. P. Soares, P. M. Wood-Adams, and B. Monrabal. Effect of molecular weight and average comonomer content on the crystallization analysis fractionation (crystaf) of ethylene α-olefin copolymers. Polymer, 44(8):2393–2401, 2003.

[8] S. Anantawaraskul, P. Somnukguandee, J. B. P. Soares, and J. Limtrakul. Application of a crystallization kinetics model to simulate the effect of operation conditions on crystaf profiles and calibration curves. J. Polym. Sci. Part B, 47(9):866–876, 2009.

[9] R. S. Anderssen and A. R. Davies. Simple moving-average formulae for the direct recovery of the relaxation spectrum. J. Rheol., 45(1):1–27, 2001.

[10] R. S. B. Anderssen. Inverse problems: A pragmatist’s approach to the recovery of information from indirect measurements. ANZIAM J., 46:C588–C622, 2005.

[11] M. Baumgaertel and H. H. Winter. Determination of discrete relaxation and retardation time spectra from dynamic mechanical data. Rheol. Acta, 28:511–519, 1989.

[12] D. Beigzadeh. Long-chain branching in ethylene polymerization using combined metallocene

84 catalyst systems. PhD thesis, University of Waterloo, 2000.

[13] D. Beigzadeh, J. B. P. Soares, and T. A. Duever. Modeling of fractionation in CRYSTAF using Monte Carlo simulation of crystallizable sequence lengths: Ethylene/1-octene copolymers synthesized with single-site-type catalysts. J. Appl. Polym. Sci., 80(12):2200–2206, 2001.

[14] G. M. Benedikt and B. L. Goodall. Metallocene-catalyzed polymers: Materials, Properties, Processing & Markets. Plastics Design Library, Norwich, NY, 1998.

[15] B. H. Bersted. Effects of long chain branching on polymer rheology. In N. P. Cheremisinoff, editor, Encylop. of Fl. Mech., volume 7, chapter 22. Gulf Publishing Co., 1998.

[16] M. Bertero and P. Boccacci. Introduction to inverse problems in imaging. CRC Press, 1998.

[17] F. W. Billmeyer Jr. The Molecular Structure of Polyethylene. III. Determination of Long Chain Branching. J. Am. Chem. Soc., 75(24):6118–6122, 1953.

[18] F. A. Bovey, F. C. Schilling, F. L. McCrackin, and H. L. Wagner. Short-chain and long-chain branching in low-density polyethylene. Macromolecules, 9(1):76–80, 1976.

[19] D. C. J. Brabec and A. Schausberger. An improved algorithm for calculating relaxation time spectra from material functions of polymers with monodisperse and bimodal molar mass distributions. Rheol. Acta, 34(4):397–405, 1995.

[20] R. Brüll, H. Pasch, H. G. Raubenheimer, R. Sanderson, A. J. van Reenen, and U. M. Wahner. Investigation of the melting and crystallization behavior of random propene/α-olefin copolymers by DSC and Crystaf. Macromol. Chem. and Phys., 202(8):1281–1288, 2001.

[21] A. D. Channell and E. Q. Clutton. The effects of short chain branching and molecular weight on the impact fracture toughness of polyethylene. Polymer, 33(19):4108–4112, 1992.

[22] S. Costeux, S. Anantawaraskul, P. M. Wood-Adams, and J. B. P. Soares. Distribution of the longest ethylene sequence in ethylene/1-olefin copolymers synthesized with single-site-type catalysts. Macromol. Theor. Simul., 11(3):326–341, 2002.

[23] C. Craver and C. Carraher. Applied polymer science: 21st century. Elsevier, 2000.

[24] B. J. Crosby, M. Mangnus, W. de Groot, R. Daniels, and T. C. B. McLeish. Characterization of long chain branching: Dilution rheology of industrial polyethylenes. J. Rheol., 46:401, 2002.

[25] A. A. da Silva Filho, J. B. P. Soares, and G. B de Galland. Measurement and mathe- matical modeling of molecular weight and chemical composition distributions of ethylene/α- olefin copolymers synthesized with a heterogeneous ziegler-natta catalyst. Macromol. Chem. Physic., 201(12):1226–1234, 2000.

[26] C. Das, N. J. Inkson, D. J. Read, M. A. Kelmanson, and T. C. B. McLeish. Computational linear rheology of general branch-on-branch polymers. J. Rheol., 50(2):207–234, 2006.

[27] J. M. Dealy and R. G. Larson. Molecular Structure and Rheology of Molten Polymers. Hanser Publications, 1st edition, 2006.

[28] L. M. Delves and J. Walsh. Numerical solution of integral equations. Clarendon Press, Oxford, 1974.

[29] P. J. Doerpinghaus and D. G. Baird. Assessing the branching architecture of sparsely branched metallocene-catalyzed polyethylenes using the pompom constitutive model. Macromolecules, 35(27):10087–10095, 2002.

[30] P. J. Doerpinghaus and D. G. Baird. Separating the effects of sparse long-chain branching on rheology from those due to molecular weight in polyethylenes. J. Rheol., 47(3):717–736, 2003.

[31] G. W. Ehrenstein. Polymeric Materials. Carl Hanser Verlag GmbH & Co. KG, 2001.

[32] I. Emri and N. W. Tschoegl. Generating line spectra from experimental responses. part i: Relaxation modulus and creep compliance. Rheol. Acta, 32(3):311–322, 1993.

[33] J. D. Ferry. Viscoelastic Properties of Polymers. Wiley: New York, 3rd edition, 1980.

[34] G. Fleury, G. Schlatter, and R. Muller. Non linear rheology for long chain branching char- acterization, comparison of two methodologies: Fourier transform rheology and relaxation. Rheol. Acta, 44(2):174–187, 2004.

[35] K. F. Freed and J. Dudowicz. Influence of short chain branching on the miscibility of binary polymer blends: Application to polyolefin mixtures. Macromolecules, 29(2):625–636, 1996.

[36] W. W. Graessley. Effect of long branches on the flow properties of polymers. Accounts Chem. Res., 10(9):332–339, 1977.

[37] P. J. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82(4):711–732, 1995.

[38] P. Gupta, G. L. Wilkes, A. M. Sukhadia, R. K. Krishnaswamy, M. J. Lamborn, S. M. Wharry, C. C. Tso, P. J. DesLauriers, T. Mansfield, and F. L. Beyer. Does the length of the short chain branch affect the mechanical properties of linear low density polyethylenes? An investigation based on films of copolymers of ethylene/1-butene, ethylene/1-hexene and ethylene/1-octene synthesized by a single site metallocene catalyst. Polymer, 46(20):8819–8837, 2005.

[39] P. Hansen. Analysis of discrete ill-posed problems by means of the l-curve. SIAM Rev., 34(4):561–580, 1992.

[40] P. C. Hansen and D. P. O'Leary. The use of the l-curve in the regularization of discrete ill-posed problems. SIAM J. Sci. Comput., 14(6):1487–1503, November 1993.

[41] S. Hansen. Estimation of the relaxation spectrum from dynamic experiments using bayesian analysis and a new regularization constraint. Rheol. Acta, 47(2):169–178, 2008.

[42] C. He, S. Costeux, and P. Wood-Adams. A technique to infer structural information for low level long chain branched polyethylenes. polymer, 45(11):3747 – 3754, 2004.

[43] M. Heath. Scientific Computing: An Introductory Survey. McGraw-Hill, 1998.

[44] J. Honerkamp and J. Weese. Determination of the relaxation spectrum by a regularization method. Macromolecules, 22(11):4372–4377, November 1989.

[45] J. Honerkamp and J. Weese. A nonlinear regularization method for the calculation of relax- ation spectra. Rheol. Acta, 32:65–73, 1993.

[46] H. Hsieh and R. P. Quirk. Anionic polymerization: principles and practical applications, volume 34. CRC Press, 1996.

[47] T. S. Huang, editor. Picture processing and digital filtering, volume 6 of Topics in Applied Physics. Springer-Verlag, Berlin, 1975.

[48] N. J. Inkson, T. C. B. McLeish, O. G. Harlen, and D. J. Groves. Predicting low density polyethylene melt rheology in elongational and shear flows with pom-pom constitutive equations. J. Rheol., 43:873, 1999.

[49] J. Janzen and R. H. Colby. Diagnosing long-chain branching in polyethylenes. J. Mol. Struct., 486:569–584, 1999.

[50] J. Kaschta and F. Stadler. Avoiding waviness of relaxation spectra. Rheol. Acta, 48:709–713, 2009.

[51] M. Keating, I-H. Lee, and C. S. Wong. Thermal fractionation of ethylene polymers in packaging applications. Thermochim. Acta, 284(1):47–56, 1996.

[52] U. Kessner, J. Kaschta, F. J. Stadler, C. S. Le Duff, X. Drooghaag, and H. Münstedt. Thermorheological behavior of various short- and long-chain branched polyethylenes and their correlations with the molecular structure. Macromolecules, 43(17):7341–7350, 2010.

[53] Y. M. Kim and J. K. Park. Effect of short chain branching on the blown film properties of linear low density polyethylene. J. Appl. Polym. Sci., 61(13):2315–2324, 1996.

[54] R. K. Krishnaswamy, Q. Yang, L. Fernandez-Ballester, and J. A. Kornfield. Effect of the distribution of short-chain branches on crystallization kinetics and mechanical properties of high-density polyethylene. Macromolecules, 41(5):1693–1704, 2008.

[55] S. Lai, T. A. Plumley, T. I. Butler, G. W. Knight, and C. I. Kao. Dow rheology index (DRI) for insite technology polyolefins (ITP): unique structure-processing relationships. In SPE ANTEC Technical Papers, volume 40, pages 1814–1814. Society of Plastics Engineers, 1994.

[56] G. Lappin. Alpha olefins applications handbook, volume 37. CRC Press, 1989.

[57] R. G. Larson. Combinatorial rheology of branched polymer melts. Macromolecules, 34:4556–4571, 2001.

[58] R. G. Larson, Q. Zhou, S. Shanbhag, and S. J. Park. Advances in modeling of polymer melt rheology. AIChE J., 53(3):542–548, 2007.

[59] H. M. Laun and H. Schuch. Transient elongational viscosities and drawability of polymer melts. J. Rheol., 33:119, 1989.

[60] C. Li Pi Shan, J. B. P. Soares, and A. Penlidis. Mechanical properties of ethylene/1-hexene copolymers with tailored short chain branching distributions. Polymer, 43(3):767–773, 2002.

[61] W. Liu, S. Kim, J. Lopez, B. Hsiao, M. Y. Keating, I-H. Lee, B. Landes, and R. S. Stein. Structural development during thermal fractionation of polyethylenes. J. Therm. Anal. Calorim., 59(1):245–255, 2000.

[62] W. Liu, D. G. Ray III, and P. L. Rinaldi. Resolution of signals from long-chain branching in polyethylene by 13C NMR at 188.6 MHz. Macromolecules, 32(11):3817–3819, 1999.

[63] D. J. Lohse, S. T. Milner, L. J. Fetters, M. Xenidou, N. Hadjichristidis, R. A. Mendelson, C. A. Garcia-Franco, and M. K. Lyon. Well-defined, model long chain branched polyethylene. 2. Melt rheological behavior. Macromolecules, 35(8):3066–3075, 2002.

[64] D. J. C. MacKay. Bayesian methods for adaptive models. PhD thesis, California Institute of Technology, 1991.

[65] A. Malmberg, C. Gabriel, T. Steffl, H. Münstedt, and B. Löfgren. Long-chain branching in metallocene-catalyzed polyethylenes investigated by low oscillatory shear and uniaxial extensional rheometry. Macromolecules, 35(3):1038–1048, 2002.

[66] T. C. B. McLeish and R. G. Larson. Molecular constitutive equations for a class of branched polymers: The pom-pom polymer. J. Rheol., 42(1):81–110, 1998.

[67] D. W. Mead. Determination of molecular-weight distributions of linear flexible polymers from linear viscoelastic material functions. J. Rheol., 38:1797–1827, 1994.

[68] B. Monrabal. Crystallization analysis fractionation: A new technique for the analysis of branching distribution in polyolefins. J. Appl. Polym. Sci., 52(4):491–499, 1994.

[69] B. Monrabal. Crystaf: Crystallization analysis fractionation. A new approach to the composition analysis of semicrystalline polymers. Macromol. Symp., 110(1):81–86, 1996.

[70] B. Monrabal, J. Blanco, J. Nieto, and J. B. P. Soares. Characterization of homogeneous ethylene/1-octene copolymers made with a single-site catalyst. Crystaf analysis and calibration. J. Polym. Sci. A1, 37(1):89–93, 1999.

[71] I. Murray and Z. Ghahramani. A note on the evidence and Bayesian Occam's razor. Gatsby Unit Tech. Rep., pages 1–4, 2005.

[72] J. Nieto, T. Oswald, F. Blanco, J. B. P. Soares, and B. Monrabal. Crystallizability of ethylene homopolymers by crystallization analysis fractionation. J. Polym. Sci. Part B, 39(14):1616–1628, 2001.

[73] B. Paredes, J. B. P. Soares, R. van Grieken, A. Carrero, and I. Suarez. Characterization of ethylene-1-hexene copolymers made with supported metallocene catalysts: Influence of support type. Macromol. Symp., 257(1):103–111, 2007.

[74] S. J. Park, S. Shanbhag, and R. G. Larson. A hierarchical algorithm for predicting the linear viscoelastic properties of polymer melts with long-chain branching. Rheol. Acta, 44:318–330, 2005.

[75] M. Pitsikalis, S. Pispas, J. W. Mays, and N. Hadjichristidis. Nonlinear block copolymer architectures. In Blockcopolymers-Polyelectrolytes-Biodegradation, pages 1–137. Springer, 1998.

[76] R. S. Porter, J. P. Knox, and J. F. Johnson. On the flow and activation energy of branched polyethylene melts. J. Rheol., 12:409, 1968.

[77] D. H. S. Ramkumar, J. M. Caruthers, H. Mavridis, and R. Shroff. Computation of the linear viscoelastic relaxation spectrum from experimental data. J. Appl. Polym. Sci., 64(11):2177–2189, 1997.

[78] J. C. Randall. A review of high resolution liquid 13C nuclear magnetic resonance characterizations of ethylene-based polymers. Polym. Rev., 29(2):201–317, 1989.

[79] J. A. Resch, U. Keßner, and F. J. Stadler. Thermorheological behavior of polyethylene: a sensitive probe to molecular structure. Rheol. Acta, 50(5-6):559–575, 2011.

[80] C. G. Robertson, C. A. García-Franco, and S. Srinivas. Extent of branching from linear viscoelasticity of long-chain-branched polymers. J. Polym. Sci. Pol. Phys., 42(9):1671–1684, 2004.

[81] T. Roths, D. Maier, C. Friedrich, M. Marth, and J. Honerkamp. Determination of the relaxation time spectrum from dynamic moduli using an edge preserving regularization method. Rheol. Acta, 39(2):163–173, 2000.

[82] M. Rubinstein and R. H. Colby. Polymer Physics. Oxford University Press, New York, 1st edition, 2003.

[83] D. M. Sarzotti, J. B. Soares, L. C. Simon, and L. J. Britto. Analysis of the chemical composition distribution of ethylene/α-olefin copolymers by solution differential scanning calorimetry: An alternative technique to Crystaf. Polymer, 45(14):4787–4799, 2004.

[84] D. M. Sarzotti, J. B. P. Soares, and A. Penlidis. Ethylene/1-hexene copolymers synthesized with a single-site catalyst: Crystallization analysis fractionation, modeling, and reactivity ratio estimation. J. Polym. Sci. Pol. Phys., 40(23):2595–2611, 2002.

[85] H. P. Schreiber and E. B. Bagley. The Newtonian melt viscosity of polyethylene: An index of long-chain branching. J. Polym. Sci., 58(166):29–48, 1962.

[86] S. Shanbhag. Analytical rheology of blends of linear and star polymers using a Bayesian formulation. Rheol. Acta, 49(4):411–422, 2010.

[87] S. Shanbhag. Analytical rheology of branched polymer melts: Identifying and resolving degenerate structures. J. Rheol., 55(1):177–194, 2011.

[88] S. Shanbhag. Analytical rheology of polymer melts: State of the art. ISRN Materials Science, 2012:732176, 2012.

[89] S. Shanbhag, S. J. Park, Q. Zhou, and R. G. Larson. Implications of microscopic simulations of polymer melts for mean-field tube theories. Mol. Phys., 105(2):249–260, 2007.

[90] R. A. Shanks and G. Amarasinghe. Comonomer distribution in polyethylenes analysed by DSC after thermal fractionation. J. Therm. Anal. Calorim., 59(1):471–482, 2000.

[91] M. T. Shaw and W. H. Tuminello. A closer look at the MWD-viscosity transform. Polym. Eng. Sci., 34:159–165, 1994.

[92] R. N. Shroff and H. Mavridis. Long-chain-branching index for essentially linear polyethylenes. Macromolecules, 32(25):8454–8464, 1999.

[93] R. N. Shroff and H. Mavridis. Assessment of NMR and rheology for the characterization of LCB in essentially linear polyethylenes. Macromolecules, 34(21):7362–7367, 2001.

[94] P. A. Small. Long-chain branching in polymers. In Macroconformation of Polymers, volume 18 of Advances in Polymer Science, pages 1–64. Springer Berlin Heidelberg, 1975.

[95] J. B. P. Soares and A. E. Hamielec. Temperature rising elution fractionation of linear polyolefins. Polymer, 36(8):1639–1654, 1995.

[96] J. B. P. Soares and T. F. L. McKenna. Polyolefin Reaction Engineering. John Wiley & Sons, 2013.

[97] J. B. P. Soares, B. Monrabal, J. Nieto, and J. Blanco. Crystallization analysis fractionation (CRYSTAF) of poly(ethylene-co-1-octene) made with single-site-type catalysts: A mathematical model for the dependence of composition distribution on molecular weight. Macromol. Chem. Phys., 199(9):1917–1926, 1998.

[98] K. S. Cho. Power series approximations of dynamic moduli and relaxation spectrum. J. Rheol., 57(2):679–697, 2013.

[99] K. S. Cho and G. W. Park. Fixed-point iteration for relaxation spectrum from dynamic mechanical data. J. Rheol., 57(2):647–678, 2013.

[100] F. Stadler and C. Bailly. A new method for the calculation of continuous relaxation spectra from dynamic-mechanical data. Rheol. Acta, 48:33–49, 2009.

[101] F. J. Stadler. Detecting very low levels of long-chain branching in metallocene-catalyzed polyethylenes. Rheol. Acta, 51(9):821–840, 2012.

[102] F. J. Stadler, J. Kaschta, and H. Münstedt. Thermorheological behavior of various long-chain branched polyethylenes. Macromolecules, 41(4):1328–1333, 2008.

[103] F. J. Stadler and H. Münstedt. Correlations between the shape of viscosity functions and the molecular structure of long-chain branched polyethylenes. Macromol. Mater. Eng., 294(1):25–34, 2009.

[104] F. J. Stadler, C. Piel, W. Kaminsky, and H. Münstedt. Rheological characterization of long-chain branched polyethylenes and comparison with classical analytical methods. Macromol. Symp., 236(1):209–218, 2006.

[105] P. Starck. Studies of the comonomer distributions in low density polyethylenes using temperature rising elution fractionation and stepwise crystallization by DSC. Polym. Int., 40(2):111–122, 1996.

[106] P. Starck, P. Lehmus, and J. V. Seppälä. Thermal characterization of ethylene polymers prepared with metallocene catalysts. Polym. Eng. Sci., 39(8):1444–1455, 1999.

[107] P. Starck and B. Löfgren. Thermal properties of ethylene/long chain α-olefin copolymers produced by metallocenes. Eur. Polym. J., 38(1):97–107, 2002.

[108] P. Starck, A. Malmberg, and B. Löfgren. Thermal and rheological studies on the molecular composition and structure of metallocene- and Ziegler–Natta-catalyzed ethylene–α-olefin copolymers. J. Appl. Polym. Sci., 83(5):1140–1156, 2002.

[109] P. Starck, K. Rajanen, and B. Löfgren. Comparative studies of ethylene–α-olefin copolymers by thermal fractionations and temperature-dependent crystallinity measurements. Thermochim. Acta, 395(1):169–181, 2002.

[110] A. M. Striegel. Long-Chain Branching Macromolecules: SEC Analysis. In J. Cazes, editor, Encyclopedia of Chromatography, pages 1417–1420. Marcel Dekker, New York, 3rd edition, 2010.

[111] P. Tackx and J. C. J. F. Tacx. Chain architecture of LDPE as a function of molar mass using size exclusion chromatography and multi-angle laser light scattering (SEC-MALLS). Polymer, 39(14):3109–3113, 1998.

[112] A. Takeh. Characterization of metallocene-catalyzed polyethylenes from rheological measurements using a Bayesian formulation. Master's thesis, Florida State University, May 2011.

[113] A. Takeh and S. Shanbhag. A computer program to extract the continuous and discrete relaxation spectra from dynamic viscoelastic measurements. Appl. Rheol., 23(2):95–104, 2013.

[114] A. Takeh, J. Worch, and S. Shanbhag. Analytical rheology of metallocene-catalyzed polyethylenes. Macromolecules, 44(9):3656–3665, 2011.

[115] W. Thimm, C. Friedrich, T. Roths, S. Trinkle, and J. Honerkamp. Characterization of long-chain branching effects in linear rheology. arXiv preprint cond-mat/0009169, 2000.

[116] S. Trinkle, P. Walter, and C. Friedrich. Van Gurp-Palmen plot II – classification of long chain branched polymers by their topology. Rheol. Acta, 41(1–2):103–113, 2002.

[117] R. van Grieken, A. Carrero, I. Suarez, and B. Paredes. Effect of 1-hexene comonomer on polyethylene particle growth and kinetic profiles. Macromol. Sy., 259(1):243–252, 2007.

[118] E. van Ruymbeke, S. Coppola, L. Balacca, S. Righi, and D. Vlassopoulos. Decoding the viscoelastic response of polydisperse star/linear polymer blends. J. Rheol., 54:507, 2010.

[119] E. van Ruymbeke, R. Keunings, and C. Bailly. Prediction of linear viscoelastic properties for polydisperse mixtures of entangled star and linear polymers: Modified tube-based model and comparison with experimental results. J. Non-Newton. Fluid, 128(1):7–22, 2005.

[120] E. van Ruymbeke, V. Stéphenne, D. Daoust, P. Godard, R. Keunings, and C. Bailly. A sensitive method to detect very low levels of long chain branching from the molar mass distribution and linear viscoelastic response. J. Rheol., 49(6):1503–1520, 2005.

[121] J. F. Vega, M. Aguilar, J. Peon, D. Pastor, and J. Martinez-Salazar. Effect of long chain branching on linear-viscoelastic melt properties of polyolefins. e-Polymers, 46:1–35, 2002.

[122] J. F. Vega, M. Fernandez, A. Santamaria, A. Munoz-Escalona, and P. Lafuente. Rheological criteria to characterize metallocene catalyzed polyethylenes. Macromol. Chem. Phys., 200(10):2257–2268, 1999.

[123] J. F. Vega, A. Munoz-Escalona, A. Santamaria, M. E. Munoz, and P. Lafuente. Comparison of the rheological properties of metallocene-catalyzed and conventional high-density polyethylenes. Macromolecules, 29(3):960–965, 1996.

[124] I. Vittorias, M. Parkinson, K. Klimke, B. Debbaut, and M. Wilhelm. Detection and quantification of industrial polyethylene branching topologies via Fourier-transform rheology, NMR and simulation using the Pom-pom model. Rheol. Acta, 46(3):321–340, 2007.

[125] W. J. Wang, S. Kharchenko, K. Migler, and S. Zhu. Triple-detector GPC characterization and processing behavior of long-chain-branched polyethylene prepared by solution polymerization with constrained geometry catalyst. Polymer, 45(19):6495–6505, 2004.

[126] L. Wild and G. Glöckner. Temperature rising elution fractionation. In Separation Techniques, Thermodynamics, Liquid Crystal Polymers, volume 98 of Adv. Polym. Sci., pages 1–47. Springer Berlin Heidelberg, 1991.

[127] A. H. Willbourn. Polymethylene and the structure of polyethylene: Study of short-chain branching, its nature and effects. J. Polym. Sci., 34(127):569–597, 1959.

[128] H. H. Winter and M. Mours. IRIS-handbook. IRIS Development, 2005.

[129] P. M. Wood-Adams and S. Costeux. Thermorheological behavior of polyethylene: effects of microstructure and long chain branching. Macromolecules, 34(18):6281–6290, 2001.

[130] P. M. Wood-Adams and J. M. Dealy. Using rheological data to determine the branching level in metallocene polyethylenes. Macromolecules, 33(20):7481–7488, 2000.

[131] P. M. Wood-Adams, J. M. Dealy, A. W. deGroot, and O. D. Redwine. Effect of molecular structure on the linear viscoelastic behavior of polyethylene. Macromolecules, 33:7489–7499, 2000.

[132] Y. Yu, P. J. DesLauriers, and D. C. Rohlfing. SEC-MALLS method for the determination of long-chain branching and long-chain branching distribution in polyethylene. Polymer, 46(14):5165–5182, 2005.

[133] B. H. Zimm and W. H. Stockmayer. The dimensions of chain molecules containing branches and rings. J. Chem. Phys., 17(12):1301–1314, 1949.

BIOGRAPHICAL SKETCH

Arsia Takeh was born and raised in Tehran, Iran. He earned his B.Sc. in chemical engineering from Sharif University of Technology, Tehran, Iran, in September 2007. He worked for several consulting companies in the oil and gas industry between 2007 and 2009. In 2009, he moved to the U.S.A. to begin his graduate studies at Florida State University, where he obtained his M.Sc. in Computational Science in 2011.
