Chapter 2. Errors & Experimental Data 6

EVALUATION OF EXPERIMENTAL DATA 2 (THE CURSE OF ERROR ANALYSIS)

All experimental measurements are subject to some uncertainty, or error. At the most fundamental level, the uncertainty principle of physics tells us there are some things that we can never know exactly. But most measurements we are likely to make will be limited by ourselves or by our apparatus, and it is useful to learn to deal with these inherent shortcomings in our experiments.

Accuracy is a measure of the difference between our measurement and the true value of the quantity we are measuring. High accuracy is what is ultimately desired. To improve accuracy, we often make multiple measurements of the same quantity, and precision is a measure of the spread of these measurements. The meanings of these two terms, accuracy and precision, are thus quite different: a set of measurements may have high precision but low accuracy. A student might make three determinations of the chloride content of an unknown and get 43.44%, 43.51%, and 43.47%. These results are fairly precise, but if the true value is 41.70%, the determination is not very accurate. (This is the most common case. Just because a measurement is reproducible does not mean it is right!) Note that these terms are used here in a qualitative, descriptive sense.

ACCURACY AND SYSTEMATIC ERRORS

How can we evaluate the accuracy of our result? The true value is normally not known; otherwise we would not be making the measurements in the first place. (Present lab excluded, of course.) The error in our measurement is thus unknown. We must estimate both the accuracy and the precision of the various parts of our measurement, hoping that we have included everything, and then combine these estimates to see how the final answer is affected.
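The distinction between precision and accuracy in the chloride example above can be made concrete with a short calculation. The figures are those quoted in the text; the code itself is an illustrative sketch in Python, not part of the original experiment.

```python
# Three determinations of chloride content (%), from the text
measurements = [43.44, 43.51, 43.47]
true_value = 41.70  # the (normally unknown) true chloride content

mean = sum(measurements) / len(measurements)

# Precision: the spread of the measurements (sample standard deviation)
spread = (sum((x - mean) ** 2 for x in measurements)
          / (len(measurements) - 1)) ** 0.5

# Accuracy: the difference between our result and the true value
error = mean - true_value

print(f"mean = {mean:.2f}%   spread = {spread:.3f}%   error = {error:+.2f}%")
```

The spread is only about 0.035%, while the error is nearly 2%: high precision, low accuracy.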
In our discussion of errors, it is useful to distinguish between systematic, personal, and random errors, because they have different sources and affect the measurements differently. All three kinds of errors affect the accuracy of the measurement, but in different ways and for different reasons.

Personal errors frequently plague beginners. This is a kind way of saying that the procedure wasn't carried out properly. Examples of such errors are:

- Weighing a sample which isn't dry.
- Not thoroughly mixing a solution.
- Using the wrong chemical.
- Measuring voltage with a meter set to "resistance".

Now imagine that the person carrying out the experiment is a tireless automaton capable of repeating the experiment flawlessly many times. Such an automaton is incapable of making a personal error. Nevertheless, the results will differ somewhat from run to run because of the inherent limitations of the equipment and procedures employed. The run-to-run differences thus encountered are examples of random errors, which ultimately limit the precision of the results. As we shall see, the error arising from this sort of experimental scatter can be reduced by repeating the experiment many times and averaging the results.

Even with the improved precision gained by averaging repeated measurements, however, there is no guarantee that the final answer will be correct, even in the limit of an infinite number of repetitions giving perfect precision. The remaining error is called systematic error; it can have more than one source, so we speak of systematic errors. Such errors are important to consider, but there are no mathematical formulae to predict them, and as a consequence their discovery can be inordinately difficult. They are inherent in the system, and can often be traced to some fundamental flaw in the experimental apparatus. Systematic errors do not average out over multiple observations.
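The claim that averaging reduces random scatter but leaves a systematic offset untouched is easy to demonstrate numerically. The following Python sketch uses invented error magnitudes purely for illustration:

```python
import random

random.seed(1)  # fixed seed so the runs are reproducible

true_value = 41.70          # hypothetical true value of the quantity
systematic_offset = 1.75    # a miscalibration shifting every run the same way
random_scatter = 0.25       # run-to-run random scatter (standard deviation)

def one_run():
    # Each run carries the same systematic offset plus fresh random scatter
    return true_value + systematic_offset + random.gauss(0, random_scatter)

for n in (1, 10, 10000):
    mean = sum(one_run() for _ in range(n)) / n
    print(f"n = {n:6d}: mean = {mean:.3f}  (true value {true_value})")
```

As n grows, the mean settles ever more precisely on true_value + systematic_offset, never on the true value itself.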
Systematic errors always influence the measurements in the same direction: a given systematic error might make the result low, and repeated measurements would then always yield a low result. If we are so lucky as to discover the source of a systematic error, we can usually eliminate it, and sometimes we can even correct old measurements for it, provided that we have recorded the necessary data. The kinds of systematic error vary enormously with the experiment under consideration; we list a few below:

- Miscalibration of the apparatus. Weights, scales, and volumetric equipment all need to be calibrated.
- Procedural errors. These can be very subtle, such as using a platinum resistance thermometer to measure temperature and finding, after a lot of detective work, that the wire in the thermometer was not platinum but probably something like osmium!
- Inappropriate theoretical analysis. Experimental results are usually interpreted using well-established theory, but sometimes the theory may be extended to regions where it is inappropriate. For example, the perfect-gas law is often used to describe gases, and this normally works quite well. But if a gas were undergoing either liquefaction or reaction, PV would not equal nRT, and deviations from the "ideal" law must be considered. As another example, Stokes' law describes the fall of a sphere in a viscous medium, but Millikan was forced to apply corrections to this law when he measured the charge on the electron in the famous oil-drop experiment.

The dividing line between the three types of errors is fuzzy. Thus we classify the reading of burets to 0.1 rather than 0.01 ml as a personal rather than a random error, even though the effect might appear random, because we know that the buret can be read more accurately. Some of the examples of systematic errors given above could be thought of as personal errors, because the person devising the procedure should have known better.
We can find out about random errors by examining the precision of several workers, and we can avoid personal errors by being extremely careful in following the procedure. But how can we identify and eliminate the systematic errors in our experiments? This is not so easy. The best approach is usually to make the measurement by a completely different method; for example, we could use a volumetric method of chloride analysis to check the results of a gravimetric analysis. Often, however, we are called upon to assess the accuracy of our results without recourse to alternative experiments. About the best we can do then is to consider how a miscalibration or mistake might affect each step of our measurement, and then evaluate how that affects the final result. To do that, we must consider how errors are propagated in experiments.

PROPAGATION OF SYSTEMATIC ERRORS

Suppose that we measure a quantity X, but there is a systematic error δX in the measurement. Remember that δX will not average out, and that it has a definite sign (which we can guess when estimating errors). We now ask how this error appears in the final result R.

a) Addition and subtraction. If two measurements are to be added, we have

    R = (X1 + δX1) + (X2 + δX2) = (X1 + X2) + (δX1 + δX2)    (1)

so

    δR = δX1 + δX2    (2)

and the error in R is just the sum (or difference) of the errors in the individual measurements. Note that it is possible in principle for the errors to cancel completely. For example, if a section of a buret were mis-numbered, the volume might be read as 1 ml too high. If the result is the volume delivered, i.e. the difference between two readings, this mis-numbering would cancel out completely if the second buret reading were also in the mis-numbered section. Note here that the absolute errors add (or subtract).
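The addition/subtraction rule and the buret cancellation can be illustrated numerically. The readings below are invented for illustration; the sketch is in Python:

```python
# Two buret readings (ml), each shifted by a systematic error (invented values)
X1, dX1 = 25.40, 0.02   # final reading and its systematic error
X2, dX2 = 5.15, 0.02    # initial reading and its systematic error

# The volume delivered is the DIFFERENCE of the readings, so the
# systematic errors subtract: dR = dX1 - dX2.
R_true = X1 - X2
R_meas = (X1 + dX1) - (X2 + dX2)
print(f"error in delivered volume: {R_meas - R_true:.4f} ml")  # cancels (to rounding)

# If only ONE reading lies in a mis-numbered section (+1 ml), no cancellation:
R_bad = (X1 + 1.0) - X2
print(f"error in delivered volume: {R_bad - R_true:.4f} ml")
```

Identical errors in both readings cancel completely; an error in only one reading passes straight through to the result.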
b) Multiplication and division. Suppose the result depends on several measurements X, Y, and Z as a product or ratio:

    R = X^n Y / Z    (3)

Errors could be introduced in the measurement of X, Y, or Z. If, for example, there were no error in Y or Z, we would expect an error δX in X to affect the result according to the rules of differential calculus:

    δR = (n X^(n-1) Y / Z) δX    (4)

where Y and Z are held constant in the differentiation, and the finite δX is assumed to behave as an infinitesimal dX. Analogous expressions could be written for the way an error in Y or an error in Z would affect R. In general, errors may be made in each variable, and the error in R is expected to be the sum of the individual contributions:

    δR = (n X^(n-1) Y / Z) δX + (X^n / Z) δY - (X^n Y / Z^2) δZ    (5)

If we now divide by R, we have

    δR/R = n (δX/X) + (δY/Y) - (δZ/Z)    (6)

Notice that the fractional errors (weighted by the appropriate exponents) add to give the fractional error in the result.

EXAMPLE

As an example, we cite a very elementary experiment performed to measure the density of water. Here 10 ml of water was pipetted into a weighing bottle and weighed. How do we evaluate how the uncertainties in weight and volume affect the uncertainty in density? We measure the volume with a buret, which requires two readings. If we assume that the error in the first buret reading is δV1 = 0.03 ml, and that the error in the second reading is zero, we have from above

    V = (V1 + δV1) - (V2 + 0) = (V1 - V2) + δV1    (7)

The absolute error in volume is thus δV = 0.03 ml. The fractional error is δV/V = 0.003, or 0.3%.
If we further assume that we have spilled a little water, so that the error in weight is δw = -0.001 gm, then with ρ = weight/volume we have

    δρ/ρ = δw/w - δV/V    (8)

Since V ≈ 10 ml and w ≈ 10 gm, we have

    δρ/ρ ≈ (-0.001/10) - (0.03/10) = -0.0001 - 0.003 ≈ -0.003

We conclude that the error in weight is very small in comparison to the error in volume, and that the accuracy expected (provided that everything is calibrated) is about 3 parts per thousand, or 0.3%.
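The propagation formula (8) can be checked by computing the density directly with and without the assumed errors. This Python sketch uses the numbers from the example above:

```python
# Numbers from the density example in the text
w_true, V_true = 10.0, 10.0   # true weight (gm) and volume (ml)
dw, dV = -0.001, 0.03         # assumed systematic errors in weight and volume

rho_true = w_true / V_true
# The measured density carries both errors
rho_meas = (w_true + dw) / (V_true + dV)

# Fractional error in density, computed directly...
direct = (rho_meas - rho_true) / rho_true
# ...and from the propagation formula (8): d(rho)/rho = dw/w - dV/V
formula = dw / w_true - dV / V_true

print(f"direct:  {direct:+.5f}")
print(f"formula: {formula:+.5f}")
```

The two agree to about one part in 10^5; the tiny discrepancy is the second-order term that the differential approximation discards.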