Chapter 2. Errors & Experimental Data

EVALUATION OF EXPERIMENTAL DATA (THE CURSE OF ERROR ANALYSIS)

All experimental measurements are subject to some uncertainty, or error. At the most fundamental level, the uncertainty principle of physics tells us there are some things that we can never know exactly. But most measurements we are likely to make will be limited by ourselves or by our apparatus, and it is useful to learn to deal with these inherent shortcomings in our experiments.

Accuracy is a measure of the difference between our measurement and the true value of the quantity we are measuring; high accuracy is what is ultimately desired. In order to obtain accuracy, we often make multiple measurements of the same quantity, and precision is a measure of the spread of these measurements. The meanings of these two terms, accuracy and precision, are thus quite different, and a set of measurements may have high precision but low accuracy. A student might make three determinations of the chloride content of an unknown and get 43.44%, 43.51%, and 43.47%. These results are fairly precise, but the true value might be 41.70%, in which case the determination would not be very accurate. (This is the most common case: just because a measurement is reproducible does not mean it's right!) Note that these terms are used here in a qualitative, descriptive sense.

ACCURACY AND SYSTEMATIC ERRORS

How can we evaluate the accuracy of our result? The true result is normally not known; otherwise we would not be making the measurements in the first place. (Present lab excluded, of course.) The error in our measurement is thus unknown. We must make estimates of both the accuracy and the precision of the various parts of our measurements, hoping that we have included everything, and then combine these to get an estimate of how the final answer is affected.

In our discussion of errors, it is useful to distinguish between systematic, personal, and random errors because they have different sources and affect the measurements differently. All three kinds of errors affect the accuracy of the measurement, but in different ways and for different reasons.

Personal errors frequently plague beginners. This is a kind way of saying that the procedure wasn't carried out properly. Examples of such errors are:

Weighing a sample which isn't dry.
Not thoroughly mixing a solution.
Using the wrong chemical.
Measuring voltage with a meter set to "resistance".
etc.

Now imagine that the person carrying out the experiment is a tireless automaton capable of repeating the experiment flawlessly many times. Such an automaton is incapable of making a personal error. Nevertheless, the results will differ somewhat from run to run because of the inherent limitations of the equipment and procedures employed. The run-to-run differences thus encountered are examples of random errors, which ultimately limit the precision of the results. As we shall see, the inherent error resulting from this sort of experimental scatter can be reduced by repeating the experiment many times and averaging the results. But even after the precision has been improved by such averaging, there is no guarantee that the final answer will be correct, even in the limit of an infinite number of repetitions giving perfect precision. The remaining error is called systematic error; it can have more than one source, so we speak of systematic errors. Such errors are important to consider, but there are no mathematical formulae to predict them. As a consequence, their discovery can be inordinately difficult. They are inherent in the system, and can often be traced to some fundamental flaw in the experimental apparatus. Systematic errors do not average out on multiple observations. They always influence the measurements in the same direction: a given systematic error might make the result low, and repeated measurements would always yield a low result. If we are so lucky as to discover the source of a systematic error, we can usually eliminate it, and sometimes we can completely correct old measurements for it, provided that we have recorded the necessary data. The kinds of systematic error vary enormously with the experiment under consideration; we list a few below:

Miscalibration of the apparatus. Weights, scales, and volumetric equipment all need to be calibrated.

Procedural errors. These can be very subtle, such as using a platinum resistance thermometer to measure temperature and finding, after a lot of detective work, that the wire in the thermometer was not platinum but probably something like osmium!

Inappropriate theoretical analysis. Experimental results are usually interpreted using well-established theory, but sometimes the theory may be extended to regions where it does not apply. For example, the perfect-gas law is often used to describe gases, and this normally works quite well; but if a gas were undergoing either liquefaction or reaction, PV would not equal nRT, and deviations from the "ideal" law must be considered. As another example, Stokes' Law describes the fall of a sphere in a viscous medium, but Millikan was forced to apply corrections to this law when he measured the charge on the electron in the famous oil-drop experiment.

The dividing line between the three types of errors is fuzzy. Thus we classify reading a buret to 0.1 ml rather than 0.01 ml as a personal error rather than a random one, even though the effect might appear random, because we know that the buret can be read more accurately. Some of the examples of systematic errors given above could be thought of as personal errors because the person devising the procedure should have known better.

We can find out about random errors by examining the scatter of repeated measurements. We can avoid personal errors by being extremely careful in following the procedure. How can we identify and eliminate the systematic errors in our experiments? This is not so easy. The best method is usually to make the measurement using a completely different technique; for example, we could use a volumetric method of chloride analysis to check the results of a gravimetric analysis. But oftentimes we are called upon to assess the accuracy of our results without recourse to alternative experiments. About the best we can do is to consider how a miscalibration or mistake might affect each step of our measurement and then evaluate how that affects the final result. To do that we must consider how errors are propagated in experiments.

PROPAGATION OF SYSTEMATIC ERRORS

Suppose that we measure a quantity X, but there is a systematic error δX in the measurement. Remember that δX will not average out, and that it has a definite polarity (which we can guess in our estimation of errors). We now ask how this error appears in the final result R.

a) Addition and subtraction. If two measurements are to be added, we have

R = (X1 + δX1) + (X2 + δX2) = (X1 + X2) + (δX1 + δX2) (1)

so

δR = δX1 + δX2    (2)

and the error in R is just the sum (or difference) of the errors of the individual measurements. Note that it is possible in principle for the errors to cancel completely. For example, if a section of a buret were mis-numbered, the volume might be read as 1 ml too high. If the result is the volume delivered, the difference between two readings, this mis-numbering would cancel out completely if the second buret reading were also in the mis-numbered section. Note here that the absolute errors add (or subtract).

b) Multiplication and Division.

Suppose the result is dependent on several measurements, X, Y, Z as a product or ratio

R = X^n Y / Z    (3)

Errors could be introduced in the measurement of X, Y, or Z. If, for example, there were no error in Y or Z, we would expect an error δX in X to affect the result according to the rules of differential calculus as

δR = (n X^(n-1) Y / Z) δX    (4)

where Y and Z in the differentiation are held constant and the finite δX is assumed to behave as an infinitesimal dX. Analogous expressions could be written for the way in which an error in Y or an error in Z would affect R. In general it is possible that errors would be made in each variable, and the error in R would then be the sum of the individual contributions:

δR = (n X^(n-1) Y / Z) δX + (X^n / Z) δY − (X^n Y / Z²) δZ    (5)

If we now divide by R we have:

δR/R = n (δX/X) + (δY/Y) − (δZ/Z)    (6)

Notice that the fractional errors (weighted by the appropriate exponent) add to give the fractional error in the result.

EXAMPLE

As an example, consider a very elementary experiment performed to measure the density of water. Here 10 ml of water was pipetted into a weighing bottle and weighed. How do we evaluate how the uncertainties in weight and volume affect the uncertainty in density? We measure the volume with a buret, which requires two readings. If we assume that the error in the first buret reading is δV1 = 0.03 ml, and that the second error is zero, we have from above

V = (V1 + δV1) − (V2 + 0) = V1 − V2 + δV    (7)

The absolute error in volume, δV, is thus 0.03 ml. The fractional error is δV/V = 0.003, or 0.3%. If we further assume that we have spilled a little water, so that the error in weight is δw = −0.001 gm, then we have

ρ = weight/volume;  δρ/ρ = δw/w − δV/V    (8)

Since V ≈ 10 ml and w ≈ 10 gm, we have

δρ/ρ ≈ −0.001/10 − 0.03/10 = −0.0001 − 0.003 ≈ −0.003

We conclude that the error in weight is very small in comparison to the error in volume, and that the accuracy expected (provided that everything is calibrated) is about 3 parts per thousand, or 0.3%. Note also that since we have assumed that these are systematic errors, we are able to conclude that the density will be low (since the volume is too large and it enters in the denominator).
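The arithmetic of eq (8) is easily scripted. The following is a minimal MATLAB sketch of the density example; the variable names and the error values are simply the assumptions made above, not measured data:

% Systematic-error propagation for the density example, eq (8).
w  = 10;       % weight of water, gm (assumed)
V  = 10;       % volume of water, ml (assumed)
dw = -0.001;   % assumed systematic error in weight, gm (spilled water)
dV =  0.03;    % assumed systematic error in volume, ml (buret reading)
rho      = w/V;            % density, gm/ml
drho_rel = dw/w - dV/V;    % fractional error in density, eq (8)
fprintf('density = %.4f gm/ml, fractional error = %.4f\n', rho, drho_rel)
% prints: density = 1.0000 gm/ml, fractional error = -0.0031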

ARITHMETIC OPERATIONS

SIGNIFICANT FIGURES

Modern calculators tempt one to report a calculation using many digits. However, most practical calculations apply to some sort of experimental measurement, and, as discussed above, all experimental measurements are subject to error. In reporting the results of an experiment it is important to include enough digits to express the experimental precision, but no more. In the determination of the density of water, for example, the accepted value (at 25 °C) is 0.99707. If you were to perform the experiment discussed above, your accuracy would be limited to about 0.003. You should report this as, say, 0.994 to indicate that you are uncertain in the last significant figure. If fewer digits are used, information is lost; if more, the extra digits have no experimental significance. It would be silly to report the result as 0.99403879 just because your calculator produces that number. The number of digits, or significant figures, should therefore match the precision of the experimental measurement.

ROUNDING

To reduce an expression to a smaller number of digits, use the systematic process of rounding. Discard the unwanted digits. Increase the last retained digit by one unit if the left-most discarded digit is greater than 5, or if it is equal to 5 followed by other nonzero digits; otherwise leave the last digit unchanged. If the left-most discarded digit is 5, or 5 followed only by zeros, round to the nearest even number. Regardless of the number of digits being rounded off, rounding must be performed in one step: keep all digits in your calculations and carry out the rounding only on the final answer.
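MATLAB's built-in round function rounds halves away from zero, so the round-half-to-even rule above takes a few extra lines. The helper below is a minimal sketch (the name round_even is ours, not a built-in), subject to the usual floating-point caveat that a stored decimal may not be exactly ...5:

function y = round_even(x, n)
% Round x to n decimal places using the round-half-to-even rule.
% A small tolerance is used to detect a discarded part of exactly 5.
s  = 10^n;
xs = x*s;
lo = floor(xs);
if abs(xs - lo - 0.5) < 1e-9     % discarded digits are 5(00...)
   if mod(lo,2)==0
      y = lo/s;                  % last retained digit already even
   else
      y = (lo+1)/s;              % round up to reach the even digit
   end
else
   y = round(xs)/s;              % ordinary rounding otherwise
end
end

For example, round_even(0.9945, 3) gives 0.994, while round_even(0.9955, 3) gives 0.996.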
PRECISION

When performing additions or subtractions, be aware of the absolute precision of your data. To preserve the precision of your measurement, other terms that will be added to or subtracted from it should be expressed to a higher level of absolute precision. The uncertainty of the sum or difference will then be limited by the uncertainty of your measured data. As an example, consider a temperature measured in Centigrade that must be converted to its Kelvin equivalent. If the Centigrade temperature was measured to a precision of 0.1 degree, then to convert properly you should add 273.15. It would be incorrect to add just 273, because the absolute precision of this term (1 degree) is coarser than that of the data, and it would therefore hide your experimental precision in the final result.

Multiplication and division follow a different pattern. Here you must compare the relative precision of the factors, as suggested by the previous discussion of the propagation of systematic errors. The outcome will have a relative precision no better than the poorest relative precision among the factors. For example, if you have measured a round object's diameter as 5.011 cm and wish to obtain its circumference by multiplying by π, it is important to express π to better relative precision than the measurement. Thus 3.1416 is adequate but 3.14 is not; if you use 3.14, some of your measurement's significance will be lost in the multiplication. Similarly, if you multiply or divide data by other measured quantities, the relative precision of the outcome will be no better than that of the least precise piece of data.

PRECISION AND RANDOM ERRORS

Precision, the reproducibility of a measurement, should not be confused with accuracy, which describes how close the result comes to the true value. As previously discussed, flawed measurement methods can lead to systematic errors, which cause the results to differ consistently from the true value. Systematic errors therefore affect accuracy but not precision. Another class of errors is called random errors. As their name implies, these cause unpredictable deviations from the true value. Random errors may arise from problems in reading instruments or from fluctuating interferences that are not taken into account. When a measurement is repeated many times, the effect of random errors is expected to average out. If the measurements are plotted, they can be seen as a distribution of outcomes that is centered about some value. This distribution is called a Gaussian (or Normal) distribution and is shown and discussed below. If there are no systematic errors, the central value is the true value; if there are systematic errors, the central value is displaced from the true value.

ESTIMATES OF THE TRUE VALUE AND UNCERTAINTIES

A) THE NORMAL DISTRIBUTION

Given a set of measurements of a single quantity x, the best estimate of the true value will be the mean, x̄:

x̄ = (1/N) Σ xi    (9)

where the sum runs over all N measurements; the mean is simply the sum of the outcomes divided by the number of measurements. The size of random errors may be estimated from the range, which is the difference between the largest and the smallest outcomes. A more significant measure of random errors is the sample variance, s², defined as:

s² = (1/(N − 1)) Σ (xi − x̄)²    (10)

where xi is an individual result, x̄ is the mean result, and the sum includes all N measurements. When a measurement is repeated many times (i.e., N → ∞), the probability P(x) of obtaining an outcome between x and x+dx approaches

P(x) = f(x) dx    (11)

where f(x) is called the normal distribution function and is given by

f(x) = (1/(σ√(2π))) exp[−(1/2)((x − m)/σ)²]    (12)

where m is the central value (m = x̄ for a large number of measurements) and σ is a measure of the spread of values. This function is plotted in Fig 1. This distribution function predicts that, as N → ∞,

x̄ → m  and  s → σ    (13)

The probability defined in eq (11) depends not only on the value of x, but also on the interval dx. If the interval is finite, then f(x) must be integrated between limits. Thus to find the probability that x lies in the interval −y ≤ x ≤ y we evaluate the integral

P = ∫ f(x) dx  (from −y to y)    (14)

which must be done numerically or by reference to tables of the Normal probability distribution. For this distribution, 68% of the outcomes will differ from the mean value by less than one standard deviation σ, 95% will fall less than two standard deviations away from the mean, and 99.7% will fall within three standard deviations of the mean. Knowledge of the mean value and standard deviation therefore helps in identifying individual measurements that are unreliable because of personal errors.
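The integral in eq (14) has no closed form, but it is available through the built-in error function; the short MATLAB check below reproduces the 68/95/99.7% figures (taking m = 0 and σ = 1, so the limits are measured in standard deviations):

% Probability that x lies within t standard deviations of the mean:
% from eq (14) with the normal f(x), P = erf(t/sqrt(2)).
t = [1 2 3];
P = erf(t/sqrt(2));
fprintf('%d sigma: P = %.3f\n', [t; P])
% prints: 1 sigma: P = 0.683
%         2 sigma: P = 0.954
%         3 sigma: P = 0.997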

[Fig 1. Plot of the normal probability distribution function f(t) versus the dimensionless parameter t = (x − m)/σ.]

As more measurements of a quantity are made, the standard deviation of the set of results remains the same (it is a measure of the width of the distribution), but the reliability of the set's mean value improves. The mean's uncertainty is described by the "standard deviation of the mean",

σx̄ = σ/√N    (15)

which obviously decreases as the number of measurements, N, goes up. Note that in order to know σ exactly you would need an infinite number of measurements; from a finite set of measurements one calculates s from eq (10) and uses it as an estimate of σ. Systematic errors cannot be estimated by comparing outcomes measured in the same way and are not reduced by repeated measurements. Thus the precision of a measurement is increased by repeated measurements, but this repetition has no effect on the accuracy.

Example: suppose a very small sample is weighed several times, with the determinations in mg tabulated below. What is the average value of the weight, the standard deviation, and the standard deviation of the mean?

trial     1      2      3      4      5      6      7
w (mg)  29.8   30.2   28.6   29.7   29.2   30.3   28.1

Solution: The average weight is calculated from Eq (9). The sum of all the weights is Σw = 205.90, and the average weight is w̄ = Σw/N = 205.90/7 = 29.4 mg. With this value of w̄, go back and calculate the deviation of each value from the mean:

trial   w (mg)   (w̄ − wi)   (w̄ − wi)²
  1      29.8      −0.4        0.16
  2      30.2      −0.8        0.64
  3      28.6       0.8        0.64
  4      29.7      −0.3        0.09
  5      29.2       0.2        0.04
  6      30.3      −0.9        0.81
  7      28.1       1.3        1.69
  Σ     205.9      −0.1        4.07

The average deviation (−0.1/7) should be zero, and fails to be zero here only because of round-off error in the mean. (Repeat this calculation for yourself.) The standard deviation s is given by Eq (10):

s² = Σ (w̄ − wi)² / (N − 1) = 4.07/6 = 0.678,  so  s = √0.678 = 0.82

Referring now to Fig 1, we have an estimate of the breadth of that curve: from the data we estimate that σ is 0.82 mg, which means that the probability of a single measurement lying within 3σ of the mean is 99.7% (i.e., the chance that one measurement lies within ±2.46 mg of the mean, 29.4, is 99.7%).

The average value of this set of measurements is much more precisely defined than any one value, and the standard deviation of the mean is

σx̄ = s/√N = 0.82/√7 = 0.31

This means that we estimate that the experimental average has a 99.7% chance of lying within ±3σx̄ (±0.93 mg) of the true value. (Experimental results are frequently given as w̄ = 29.4 ± 0.3, but this is not helpful unless the experimentalist also states that 0.3 = 1σ.) This estimate of the accuracy should be regarded with caution unless an assessment of systematic errors is also made.
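The whole worked example can be verified in a few lines of MATLAB; note that the built-in std already uses the N−1 denominator of eq (10):

% Check of the weighing example: mean, std deviation, and std dev of mean.
w    = [29.8 30.2 28.6 29.7 29.2 30.3 28.1];   % determinations, mg
wbar = mean(w);                 % eq (9)
s    = std(w);                  % eq (10), (N-1) denominator
sm   = s/sqrt(length(w));       % eq (15), with s estimating sigma
fprintf('mean = %.1f, s = %.2f, s(mean) = %.2f mg\n', wbar, s, sm)
% prints: mean = 29.4, s = 0.82, s(mean) = 0.31 mg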

B) THE POISSON DISTRIBUTION

Fig 1 shows a symmetric distribution, but there are instances where the distribution function cannot be symmetric. The most important case to consider is that of counting, which arises when detecting individual photons with a photomultiplier or individual particles such as electrons, nuclei, protons, ions, or molecules. If c is the number of counts in a given interval (one second, for example), the average value of c would be obtained by making a series of measurements, {ci}, and taking the average as given in eq (9). But the distribution of values of ci cannot be symmetric, because c can never be negative. Counting follows Poisson's law,

P(x, x̄) = (x̄^x / x!) e^(−x̄)

which approaches the Normal distribution when x̄ is large. It is particularly simple because σ = √x̄.

As an example, if 100 counts are recorded in a time interval of 1 sec, then x̄ = 100, σ = √100 = 10, and σ/x̄ = 10%. If the counting interval were 10 sec, 1000 counts would be recorded; x̄ (counts per second) is still 100, but now σ = √1000 = 31.6 and σ/x̄ = 3.16%. Thus the "signal-to-noise ratio" improves as the square root of the number of counting intervals, as suggested by eq (15). (Strictly speaking, x̄ and σ refer to a distribution of measurements, but the distribution could be built from single measurements. In the preceding example one could also count for ten one-second periods; the average count rate would be the sum divided by 10, and would be expected to give the same result as counting for one 10-second interval.)
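The counting arithmetic above is a two-line check in MATLAB (the counts are just the example values):

% Relative uncertainty of a count, sigma = sqrt(n):
n = [100 1000];                       % counts in 1-s and 10-s intervals
fprintf('n = %4d: sigma/n = %.2f%%\n', [n; 100*sqrt(n)./n])
% prints: n =  100: sigma/n = 10.00%
%         n = 1000: sigma/n = 3.16%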

PROPAGATION OF RANDOM ERRORS

If several measurements combine to give a final result, and each of these measurements is subject to a random error, the random error in the final result can be calculated. We briefly outline this here. The difference between the random error, which we take as the standard deviation σ (whose square, the variance, is estimated by s² as defined above in Eq (10)), and the systematic error δx, is that sign information is contained in δx but not in σ. Systematic errors can cancel out, but random errors do not, because a truly random error is as likely to make a positive contribution as a negative one. For this reason random errors cannot be propagated in the same way as systematic errors.

Suppose we wish to determine a physical quantity Q which depends on measured quantities x and y. Q is thus a function of x and y, denoted as

Q = f(x, y, …)    (16)

We assume that the true value of Q is given by Q̄ = f(x̄, ȳ, …). For individual measurements xi, yi, the value of Q is given by Qi = f(xi, yi, …), and the variance of a large number of such measurements is given by

(σQ)² = lim (1/N) Σ (Qi − Q̄)²    (17)

(the limit being taken as N → ∞). The deviation from the average is similar to eq (6) and is given by

Qi − Q̄ ≈ (x − x̄)(∂Q/∂x) + (y − ȳ)(∂Q/∂y) + …    (18)

Thus the variance of Q is obtained by combining Eqs (17) and (18):

(σQ)² ≈ lim (1/N) Σ [(x − x̄)(∂Q/∂x) + (y − ȳ)(∂Q/∂y) + …]²
      ≈ lim (1/N) Σ [(x − x̄)²(∂Q/∂x)² + (y − ȳ)²(∂Q/∂y)² + 2(x − x̄)(y − ȳ)(∂Q/∂x)(∂Q/∂y) + …]
      ≈ σx²(∂Q/∂x)² + σy²(∂Q/∂y)² + 2σ²xy(∂Q/∂x)(∂Q/∂y) + …    (19)

where the variances are given by

(σξ)² = lim (1/N) Σ (ξi − ξ̄)²

and the covariance by

σ²xy = lim (1/N) Σ (xi − x̄)(yi − ȳ)

In eq (19) the first two terms are averages of squares of deviations, but the third term is an average of cross terms, which can be expected to average to zero unless the fluctuations in x and y are correlated. If the fluctuations in x and y are uncorrelated, which is frequently the case, then

(σQ)² ≈ σx²(∂Q/∂x)² + σy²(∂Q/∂y)² + …    (20)
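Eq (20) is easy to test numerically. The sketch below draws uncorrelated normal errors for x and y, forms Q = x·y (an arbitrary product chosen for illustration), and compares the observed scatter in Q with the prediction of eq (20); all numerical values here are invented for the demonstration:

% Monte Carlo check of eq (20) for Q = x*y with uncorrelated errors.
N  = 1e6;               % number of simulated measurements
x0 = 10; sx = 0.05;     % true value of x and its random error
y0 = 5;  sy = 0.02;     % true value of y and its random error
x  = x0 + sx*randn(N,1);
y  = y0 + sy*randn(N,1);
Q  = x.*y;
sQ_mc   = std(Q);                        % observed scatter in Q
sQ_prop = sqrt(sx^2*y0^2 + sy^2*x0^2);   % eq (20): dQ/dx = y, dQ/dy = x
fprintf('Monte Carlo: %.4f   eq (20): %.4f\n', sQ_mc, sQ_prop)
% the two numbers agree to within sampling error (about 0.1%)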

Thus, if Q = x ± y,

(σQ)² = σx² + σy²

if the two quantities come from independent normal distributions, which is to say they are uncorrelated. Note that there is no possibility of the errors cancelling, and if σx and σy are about the same, σQ ≈ √2 σx regardless of whether the sum or the difference of the two values forms the final result. By the same token, if the generalized product occurs,

Q = X^n Y / Z

then

σQ² = n² σx² (Q/x)² + σy² (Q/y)² + σz² (Q/z)²    (21)

GRAPHS

The most useful way to present quantitative data in science and engineering is generally with a graph. For a simple situation, an experimenter directly varies an independent variable in the laboratory and observes the corresponding changes of a dependent variable. These results are graphed by plotting the independent variable on the horizontal (x) axis and corresponding values of the dependent variable on the vertical (y) axis.

Linear Least Square Fitting

Frequently data appear to be linearly related, and it is desirable to fit the best line through the data. This can be done very effectively by "eyeballing", or just drawing what appears to be the best straight line through the data. A more exact method is to fit a line y = mx + b through the data which minimizes the sum of squares of the deviations of the points from the line. Routines for such data fitting are available on virtually every hand calculator. They are quick and easy to use, but do not completely fit our needs: we want not only the values of m and b that are determined, but also the accuracy to which they are determined. For this reason a MATLAB routine (EXPERLSQ.m) has been written and is available on the Chem381 website, http://python.rice.edu/~brooks/Chem381. EXPERLSQ.m calculates the least-square values of m and b, and moreover returns σm and σb. As can be seen in the printout below, allowance is made for three different types of data: a) constant uncertainties, where σ for each data point is the same; b) statistical uncertainties from counting, where σ = √n; and c) instrumental uncertainties, where σ for each point is different and must be read in as a separate vector. (If only relative values of σi are known, the weighted slope and intercept are calculated, and σm and σb refer only to the uncertainty of the fit.)
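A minimal usage sketch, assuming EXPERLSQ.m (which defines the function experlsqA listed at the end of this chapter) is on the MATLAB path; the data are synthetic, generated with a known slope and intercept so the fit can be sanity-checked:

% Unweighted fit of synthetic data with intercept a = 1 and slope b = 2.
x = 0:0.5:10;
Y = 1 + 2*x + 0.3*randn(size(x));    % invented scatter, sigma = 0.3
[a,sa,b,sb] = experlsqA(x,Y);        % unweighted mode (m, w omitted)
fprintf('a = %.2f +/- %.2f, b = %.3f +/- %.3f\n', a, sa, b, sb)
% a and b should agree with 1 and 2 to within a few std deviations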

A Final Note Error analysis should make sense. If the most imprecise measurement is ≈ 10%, then the final result will probably be in error on the order of 10%. (If not, you should be able to justify it to yourself.) Fitting routines such as EXPERLSQ.m should give a reasonable fit to the data, and the uncertainties in slope and intercept should encompass the set of lines which can be reasonably drawn through the data. If not, something is wrong!

EXPERLSQ.m

function [a,sa,b,sb]=experlsqA(x,Y,m,w,eb,p)
%
% [a,sa,b,sb]=experlsqA(x,Y)
%
% This routine calculates an UNWEIGHTED least-square fit to a set of
% experimental points which are input as vectors x and Y, giving the
% intercept, slope, and their std deviations.
% It plots the points and the best-fit line; the slope, etc. is written
% on the plot.
% a = intercept, sa = std dev of intercept
% b = slope,     sb = std dev of slope
% x,Y experimental points
%
% [a,sa,b,sb]=experlsqA(x,Y,m,w)
%
% This routine calculates a WEIGHTED least-square fit to a set of
% experimental points which are input as vectors x and Y, giving the
% intercept, slope, and their std deviations.
% It plots the points and the best-fit line; the slope, etc. is written
% on the plot.
% a = intercept, sa = std dev of intercept
% b = slope,     sb = std dev of slope

% x,Y experimental points
% m = mode:
%    -1 for individual weights (instrumental), read in as w
%     0 for no weighting
%     1 for statistical [counting]
%
% w must contain the relative weight of each y!!!!!!
% (make w a vector of ones for m=0, OR omit m, w)
%
% FOR APPEARANCE: [a,sa,b,sb]=experlsqA(x,Y,m,w,eb,p)
%
% as above, except draws error bars and does not plot points as asterisks
% eb=1 to draw error bars (slows execution & confuses plot)
% p=1 to plot points as "." (otherwise "*", for ≈ 50 pts or less)
% modelled after Bevington, p93

if nargin<2
   error('Requires at least two inputs');
end
if nargin==2
   w=ones(size(x)); m=0; eb=1; p=0;
elseif nargin==3
   w=ones(size(x)); eb=1; p=0;
elseif nargin==4
   eb=1; p=0;
elseif nargin==5
   p=0;
end

zz=size(Y);
z=ones(zz);
if zz(2)>101
   eb=0;     % too many points; no error bars
end
if zz(2)>500
   p=1;      % too many points; no asterisks
end
wa=0;        % becomes 1 below if absolute instrumental weights are known
if m==-1

   disp('instrumental weighting')
   wa=input(' enter 1 if absolute weights are known: ');
elseif m==0
   disp('no weighting')
elseif m==1
   disp('Poisson statistics--counting only!')
end
if eb==1
   disp('error bars will be drawn, slowing plot')
else
   if zz(2)>101
      disp('too many points: no error bars')
   end
   disp('no error bars')
end
if p==1
   disp('points displayed as "."')
else
   disp('asterisks used for points')
end
disp(' ')
disp('*** cntrl "." to abort ****')

if m==1
   w=z./Y;             % counting: sigma^2 = n, so weight = 1/n
elseif m==0
   w=z;                % no weighting
end
N=length(Y);
N1=sum(w);
wmax=max(w);
wnorm=w./wmax;
if wa~=1
   w=wnorm;            % we only have relative values
   N1=sum(w);          % have to redo this
end
x1=sum(x.*w);
x2=sum((x.*x).*w);
y1=sum(Y.*w);
xy=sum(x.*Y.*w);
B=[y1;xy];
A=[N1 x1;x1 x2];
Q=A\B;                 % solve the weighted normal equations
a=Q(1);
b=Q(2);
cal=(a+b*x);           % fitted line at the data points
s2=(1/(N-2))*sum(((Y-cal).^2).*w);   % expt'l std variance
del=N1*x2-x1^2;
sa=sqrt(s2*x2/del);
sb=sqrt(s2*N1/del);
if p==1
   plot(x,Y,'.r')      % points as dots
else
   plot(x,Y,'*r')      % points as asterisks
end
hold on;
plot(x,cal,'g-')       % best-fit line
e=0.67*(ones(size(x)).*sqrt(s2)./w);
if m==1
   e=0.67*ones(size(x)).*sqrt(Y);    % counting
end
if eb==1
   errorbar(x,Y,e,e,'.')
end
if b>0
   text(nscale(x,.7), nscale(Y,.33),sprintf('%s','y = a + bx'))
   text(nscale(x,.7), nscale(Y,.27),sprintf('a=%g',a))
   text(nscale(x,.75),nscale(Y,.20),sprintf('sa=%g',sa))
   text(nscale(x,.7), nscale(Y,.12),sprintf('b=%g',b))
   text(nscale(x,.75),nscale(Y,.05),sprintf('sb=%g',sb))
else
   text(nscale(x,.7), nscale(Y,.97),sprintf('%s','y = a + bx'))
   text(nscale(x,.7), nscale(Y,.9), sprintf('a=%g',a))
   text(nscale(x,.75),nscale(Y,.83),sprintf('sa=%g',sa))
   text(nscale(x,.7), nscale(Y,.77),sprintf('b=%g',b))
   text(nscale(x,.75),nscale(Y,.70),sprintf('sb=%g',sb))
end

hold off
if wa==1
   disp(sprintf('chisquare = %g',s2))
   if s2>1.5
      disp('you may wish to question validity of linear fit')
      disp('(BE SURE the weights are correct!)')
   end
end