
Section 4.2 Fitting Curves and Surfaces by Least Squares

4.2.1 Curve Fitting

In many cases the relationship of y to x is not a straight line. To fit a curve to the data one can
• Fit a nonlinear function directly to the data.
• Rescale (transform) x or y to make the relationship linear.
• Fit a polynomial function to the data.

For a uniform fluid, light should decay as an exponential function of depth. Given data on depth, x, and light intensity, y,

the data should look like an exponential function: $y \approx a e^{b_1 x}$.

1. Direct Fitting

We could fit the equation directly by minimizing $\sum (y_i - a e^{b_1 x_i})^2$. If the residuals are normal with equal variance, then this is an optimal way to go.

We could use a program to accomplish this. This can also be accomplished with the "Solver" add-in in Excel. We won't be fitting curves this way in Stat 3411.
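Purely as an illustration (not something we will do in this course), a direct nonlinear least squares fit of $y \approx a e^{b_1 x}$ could be carried out in Python with scipy's curve_fit; the data values, starting values, and variable names below are made up for illustration.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical depth (x) and light intensity (y) data -- illustrative only.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([10.2, 7.9, 6.1, 4.8, 3.6, 2.9])

# The exponential model y = a * exp(b1 * x).
def model(x, a, b1):
    return a * np.exp(b1 * x)

# curve_fit minimizes the sum of squared residuals sum((y_i - a*exp(b1*x_i))^2).
(a_hat, b1_hat), cov = curve_fit(model, x, y, p0=(10.0, -0.3))
print(a_hat, b1_hat)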

But very often, in situations such as exponentially curved data, the variances are not constant.

• One way to handle nonconstant variance is "weighted regression" (a small sketch follows this list).
– We give larger weight to residuals where the variance is small and we are more sure of where the fitted line should go.

• We won't be doing this in Stat 3411.
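Purely as an illustration of the idea (weighted regression is outside the scope of Stat 3411), a minimal weighted least squares sketch in Python might look like the following; the data values and assumed variances are hypothetical.

import numpy as np

# Hypothetical data and weights -- illustrative only.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.3, 5.8, 8.4, 9.7])
w = 1.0 / np.array([0.2, 0.4, 0.6, 0.9, 1.3])  # weights = 1 / (assumed variance)

# Weighted least squares for y = b0 + b1*x:
# minimize sum( w_i * (y_i - b0 - b1*x_i)^2 )
X = np.column_stack([np.ones_like(x), x])
W = np.diag(w)
b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # (b0, b1)
print(b)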

2. Transforming

Often with an exponential relationship
– the larger values have more variance and
– the smaller values have smaller variances.

In this case it can work better to rescale the data

$y \approx a e^{b_1 x}$

$\ln(y) = \ln(a) + b_1 x$

new $y = y' = b_0 + b_1 x$

Often values in the log scale have fairly constant variances.

[Figure: Original Scale — Y versus X (X from 0 to 20, Y from 0 to 30); the data curve upward like an exponential function.]

A residual plot gives us a magnified view of the increasing variance and the curvature.

[Figure: Original Scale — residuals versus predicted values, with residuals ranging from about −6 to 10.]

This residual plot indicates 2 problems with this fit:
– The relationship is not linear, indicated by the curvature in the residual plot.
– The variance is not constant, so least squares isn't the best approach even if we handle the nonlinearity.

Let us try to transform the data (a log transformation).

[Figure: Ln Scale — Ln(Y) versus X (X from 0 to 20, Ln(Y) from 0 to 3.5); the log-transformed data fall close to a straight line.]

Fit a line to these data. Here it makes sense for all residuals to count equally. Then back-transform to the original scale.
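A minimal sketch of this transform, fit, and back-transform sequence in Python; the data values and variable names here are made up for illustration.

import numpy as np

# Hypothetical data -- illustrative only.
x = np.array([1.0, 3.0, 5.0, 8.0, 12.0, 16.0, 19.0])
y = np.array([1.8, 2.6, 3.9, 6.2, 10.5, 17.0, 26.0])

# Fit a straight line to ln(y) versus x:  ln(y) = b0 + b1*x
b1, b0 = np.polyfit(x, np.log(y), deg=1)

# Back-transform to the original scale:  y_hat = exp(b0) * exp(b1*x)
x_grid = np.linspace(0, 20, 101)
y_hat = np.exp(b0 + b1 * x_grid)
print(np.exp(b0), b1)  # estimates of a and b1 in y = a*exp(b1*x)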

[Figure: Original Scale — the back-transformed fitted curve plotted with the Y versus X data (X from 0 to 20, Y from 0 to 30).]

• Predicted values are less reliable if you are extrapolating outside the range of x values in the data set.
• Predicted values are more reliable if one has an equation from a physical model.
• For example, physical models predict that in a uniform fluid light will decay exponentially with depth.
• CE 4505 Surface Water Quality Engineering
• http://www.cee.mtu.edu/~mtauer/classes/ce4505/lecture3.htm

3. When we have no theory to guide us, we can often fit the curve in the range of observed x values with a polynomial function.

For example, a cubic polynomial would be $y \approx b_0 + b_1 x + b_2 x^2 + b_3 x^3$.

• This is a linear function of the three variables $x_1 = x$, $x_2 = x^2$, $x_3 = x^3$:

$y \approx b_0 + b_1 x_1 + b_2 x_2 + b_3 x_3$

• Excel and other programs fit these sorts of polynomial models.

In Excel
• Add new columns for x² and x³.
• Run a regression using 3 columns for the x variables: x, x², and x³.
– All three columns need to be next to each other in Excel.

Given the fitted function, we want to check for an adequate fit by plotting the data along with the fitted function.
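The same "add columns for the powers of x" idea can be sketched outside Excel. Here is a minimal Python version; the data values are hypothetical and used only to show the mechanics.

import numpy as np

# Hypothetical data -- illustrative only.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.0, 4.2, 9.5, 18.7, 33.0])

# Build columns for x, x^2, x^3 (plus an intercept column), just as extra
# columns would be added in Excel, then solve the least squares problem.
X = np.column_stack([np.ones_like(x), x, x**2, x**3])
b, *_ = np.linalg.lstsq(X, y, rcond=None)  # b = (b0, b1, b2, b3)

# Evaluate the fitted cubic on a grid so it can be plotted with the data.
x_grid = np.linspace(x.min(), x.max(), 101)
y_fit = b[0] + b[1]*x_grid + b[2]*x_grid**2 + b[3]*x_grid**3
print(b)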

Notice that replication is useful for assessing how well the functional form fits the data.

With replication here we can tell that the quadratic polynomial is under-fitting the y values at x = 2.

Residual Plots

We would plot residuals in the same ways suggested in Section 4.1:

– Versus predicted values
– Versus x
– In run order
– Versus other potentially influential variables, e.g. technician
– Normal plot of residuals

Section 4.2.2 Surface Fitting by Least Squares

In many situations the response variable, y, is affected by more than one x variable.

For example we could have (see problem 21 in the Exercises) y = armor strength

x1 = thickness
x2 = Brinell hardness

The simplest model to fit in this case is a linear model in both variables

$y \approx b_0 + b_1 x_1 + b_2 x_2$

• In this case we are fitting a plane to the 3-D points.

This sort of linear model with more than one x variable is called "multiple regression." For this function, fitted y values for a fixed value of x1 follow parallel lines when plotted against x2.

$y_i \approx b_0 + b_1 x_{1i} + b_2 x_{2i}$

Again, we find b0, b1, b2 to minimize
• the sum of squared deviations of the fitted values,
• the residual or error sum of squares:

$\min \sum \left( y_i - (b_0 + b_1 x_{1i} + b_2 x_{2i}) \right)^2$

This minimization can be solved with explicit matrix formulas. Many programs including Excel have the capability. There can be any number of x variables, for example

$y \approx b_0 + b_1 x_1 + b_2 x_2 + b_3 x_3$

It is also possible to introduce curvature into one or more of the variables. For example,

$y \approx b_0 + b_1 x_1 + b_2 x_2 + b_3 x_3 + b_4 x_2^2 + b_5 x_3^2 + b_6 x_2 x_3$

• Product terms (called interactions) in the model give nonparallel contours.
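As a sketch of how such a multiple regression could be fit outside Excel, the following Python fragment builds a design matrix with the squared and product columns and solves the least squares problem; the data values and variable names are hypothetical.

import numpy as np

# Hypothetical x1, x2, x3 and y values -- illustrative only.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
x2 = np.array([0.5, 1.5, 1.0, 2.5, 2.0, 3.5, 3.0, 4.0])
x3 = np.array([2.0, 1.0, 3.0, 2.5, 4.0, 3.0, 5.0, 4.5])
y  = np.array([5.2, 6.8, 9.1, 12.3, 14.0, 18.2, 20.5, 24.1])

# Columns: intercept, x1, x2, x3, x2^2, x3^2, x2*x3
X = np.column_stack([np.ones_like(x1), x1, x2, x3, x2**2, x3**2, x2 * x3])

# Least squares estimates b0, ..., b6 minimizing the sum of squared residuals.
b, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ b          # fitted values
e = y - y_hat          # residuals
print(b)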

Residuals

Residuals are, as always, $e_i = y_i - \hat{y}_i$.

$R^2$ is still the percent of variation explained by the regression:
• How much better do we do
– using the regression equation versus
– using the mean to predict all values?

$R^2 = \dfrac{\sum (y_i - \bar{y})^2 - \sum (y_i - \hat{y}_i)^2}{\sum (y_i - \bar{y})^2} \times 100$
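Continuing the hypothetical fragment above (it reuses y, y_hat, and numpy from that sketch), this formula can be computed directly:

# Percent of variation explained by the regression.
ss_total = np.sum((y - y.mean())**2)   # sum((y_i - y_bar)^2)
ss_resid = np.sum((y - y_hat)**2)      # sum((y_i - y_hat_i)^2)
r_squared = (ss_total - ss_resid) / ss_total * 100
print(r_squared)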

$R^2$ is also the square of the correlation between $y_i$ and $\hat{y}_i$.

To check how well the model fits we
• plot residuals in the same ways as earlier
– Versus predicted values
– Versus x
– In run order
– Versus other potentially influential variables, e.g. technician
– Normal plot of residuals

• In addition we would plot the residuals – Versus each x variable

• Versus x1
• Versus x2
• Versus x3
• And so on (a small plotting sketch follows this list)
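A minimal matplotlib sketch of some of these diagnostic plots; it assumes y_hat, e, x1, x2, and x3 come from a fit like the hypothetical fragment above.

import matplotlib.pyplot as plt

# Residuals versus predicted values and versus each x variable.
# (y_hat, e, x1, x2, x3 are assumed to exist, e.g. from the fragment above.)
fig, axes = plt.subplots(2, 2, figsize=(8, 6))
axes[0, 0].scatter(y_hat, e); axes[0, 0].set_xlabel("Predicted"); axes[0, 0].set_ylabel("Residual")
axes[0, 1].scatter(x1, e);    axes[0, 1].set_xlabel("x1");        axes[0, 1].set_ylabel("Residual")
axes[1, 0].scatter(x2, e);    axes[1, 0].set_xlabel("x2");        axes[1, 0].set_ylabel("Residual")
axes[1, 1].scatter(x3, e);    axes[1, 1].set_xlabel("x3");        axes[1, 1].set_ylabel("Residual")
plt.tight_layout()
plt.show()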

Final Choice of Model

We have many choices for our final model including
• Transformations of many kinds
• Polynomials

Our final choice is guided by 3 considerations:
• Sensible engineering models and experience
• How well the model fits the data
• The simplicity of the model

1. Sensible engineering models

– Whenever possible use models that make intuitive physical sense.
– Good mathematical models based on physics make extrapolation much less dangerous.
– Having physically interpretable estimated parameters

2. Obviously, the model needs to fit the data reasonably well.

– Always plot data and residuals in informative ways.

3. The simplicity of the model.

A very complex model relative to the amount of data
– Can fit the data well
– But not predict new points well, particularly extrapolated points.
– This is "overfitting" the model. Learn more in Stat 5511.

[Figure: Overfitting Polynomial — Y versus X (X from 0 to 6, Y from 0 to 120) with a fourth-order polynomial fit in pink and a linear fit in blue through the same data points.]

The fourth order polynomial in pink
• fits the data exactly,
• but likely would not work well for predicting for x = 0.5 or x = 1.5.

The linear fit in blue • likely predicts new points better.
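To make the contrast concrete, here is a small Python sketch with made-up data points comparing a fourth-order polynomial that passes exactly through five points against a straight-line fit:

import numpy as np

# Five made-up data points -- illustrative only.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([10.0, 22.0, 28.0, 45.0, 52.0])

# Degree-4 polynomial: 5 coefficients for 5 points, so it interpolates exactly.
p4 = np.polyfit(x, y, deg=4)
# Straight line: a much simpler model.
p1 = np.polyfit(x, y, deg=1)

# Compare predictions just outside the data pattern, e.g. at x = 0.5.
x_new = 0.5
print(np.polyval(p4, x_new))  # the exact-fit polynomial gives an implausible negative value here
print(np.polyval(p1, x_new))  # the straight line stays near the overall trend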

Often transforming to a log scale allows simpler models to fit the data reasonably well.

Extrapolating

With multiple x variables we need to be careful about extrapolating beyond the region of x variables for the observed data. We can't necessarily tell that a combination of X1 and X2 is unusual just by seeing, separately,

– where the new value of X1 falls amongst the measured X1 values
– where the new value of X2 falls amongst the measured X2 values

Some Study Questions:

• You run a regression model fitting y with x1, x2, and x3. What five residual plots would you need to check in order to see if the data are fit well by the model?

• What is extrapolation? Why is extrapolation potentially risky?

• What can we do to reduce potential risks in extrapolating?

• We run a regression predicting y = armor strength from x1 = thickness and x2 = Brinell hardness and obtain R² = 80%. Explain in words, for a manager who has not seen R² values, what it means for R² to be 80%.

• What is the problem with fitting a complex model to a small data set in order to fit the data points very exactly?

The following residual plot is for predicting product lifetimes using x1 and x2.

[Figure: residuals versus predicted values; predicted values range from about −40 to 120 and residuals from about −40 to 100.]

What 3 problems with our fitting method and results are indicated by this residual plot? What might you do to try finding a better fit to the data?