From: John Fowler
Subject: CoKurtosis Actually Found Useful
Date: October 11, 2011 2:58:50 PM PDT
To: Frank Masci


Frank,

Believe it or not, the cokurtosis turns out to play a role in the variance of the "erroneous" chi-square. Actually, it's just the fourth moments x^3y and xy^3. But as a related concept, it raises a question we didn't address before: how do you normalize a cokurtosis? The usual kurtosis is the fourth central moment divided by sigma^4; for x^3y, I assume you would divide by sigma(x)^3*sigma(y). In that case, this particular cokurtosis (of the five possible) is 3*rho, so for uncorrelated variables it is zero. But I'm concerned with correlated variables, the 2-D Gaussians.
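
In case a sanity check is useful, here is a quick Monte Carlo sketch of that normalization (numpy, with made-up parameter values; not part of the derivation itself):

```python
import numpy as np

# Made-up parameters for a zero-mean correlated 2-D Gaussian.
sx, sy, rho = 2.0, 0.5, 0.7
n = 2_000_000

rng = np.random.default_rng(42)
cov = [[sx**2, rho*sx*sy], [rho*sx*sy, sy**2]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Normalized cokurtosis <x^3 y> / (sigma(x)^3 * sigma(y)); should approach 3*rho.
cokurt = np.mean(x**3 * y) / (sx**3 * sy)
print(cokurt, 3*rho)
```

With these numbers the sample value lands close to 3*rho = 2.1; set rho = 0 and it drops to zero.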

For correlated zero-mean Gaussian x and y, the correct and erroneous chi-squares are:

    chi2_correct   = [x^2/sigma(x)^2 - 2*rho*x*y/(sigma(x)*sigma(y)) + y^2/sigma(y)^2] / (1 - rho^2)
    chi2_erroneous = x^2/sigma(x)^2 + y^2/sigma(y)^2

The difference is

    chi2_correct - chi2_erroneous = rho*[rho*x^2/sigma(x)^2 - 2*x*y/(sigma(x)*sigma(y)) + rho*y^2/sigma(y)^2] / (1 - rho^2)

The expectation value of this difference is zero, as we have seen previously. The expected squared value of the difference is:

    <(chi2_correct - chi2_erroneous)^2> = [rho^2/(1 - rho^2)^2] * <(rho*x^2/sigma(x)^2 - 2*x*y/(sigma(x)*sigma(y)) + rho*y^2/sigma(y)^2)^2>

You can see that when you expand the square, you're going to pick up terms in x^3y and xy^3, so the expectation values of these will be needed. You can't just factor these and use the expectation values of the factors, e.g., <x^3y> is not equal to <x^3>*<y>, etc., so the moments have to be computed from the joint density function (again, written for zero-mean variables):

    p(x,y) = exp(-[x^2/sigma(x)^2 - 2*rho*x*y/(sigma(x)*sigma(y)) + y^2/sigma(y)^2] / (2*(1 - rho^2))) / (2*pi*sigma(x)*sigma(y)*sqrt(1 - rho^2))

    <x^3y> = double integral of x^3*y*p(x,y) over the plane = 3*rho*sigma(x)^3*sigma(y)

There you can see that if you divide the last result by sigma(x)^3*sigma(y), you get 3*rho for the cokurtosis, zero if rho is zero.
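
Both expectation values are easy to spot-check numerically as well; a throwaway Monte Carlo sketch (numpy, arbitrary parameters), with the two chi-squares written out explicitly:

```python
import numpy as np

# Arbitrary parameters for a zero-mean correlated 2-D Gaussian.
sx, sy, rho = 1.5, 0.8, 0.6
n = 2_000_000

rng = np.random.default_rng(0)
cov = [[sx**2, rho*sx*sy], [rho*sx*sy, sy**2]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Correct chi-square for correlated variables, and the "erroneous" one
# that ignores the correlation.
chi2_correct = (x**2/sx**2 - 2*rho*x*y/(sx*sy) + y**2/sy**2) / (1 - rho**2)
chi2_err = x**2/sx**2 + y**2/sy**2

d = chi2_correct - chi2_err
print(d.mean())       # hovers near zero, as expected
print((d**2).mean())  # if I did the algebra right, this approaches 4*rho^2
```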

I had to coax Maple a bit to get that definite double integral out of it; the key is declaring all variables as real, the sigmas > 0, and -1 < rho < 1. Until I did that, it was spewing all kinds of conditional results, including divergences. So while it's fresh in my mind, I plan to compute all the other cokurtoses.
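
Incidentally, the same moment falls out with no conditional results at all if you differentiate the moment generating function instead of doing the integral; here is a sketch of that alternative route (SymPy standing in for Maple):

```python
import sympy as sp

t1, t2, rho = sp.symbols('t1 t2 rho', real=True)
sx, sy = sp.symbols('sigma_x sigma_y', positive=True)

# MGF of a zero-mean correlated bivariate Gaussian:
# M(t1, t2) = exp((sx^2 t1^2 + 2 rho sx sy t1 t2 + sy^2 t2^2) / 2)
M = sp.exp((sx**2*t1**2 + 2*rho*sx*sy*t1*t2 + sy**2*t2**2) / 2)

# <x^3 y> = d^4 M / (dt1^3 dt2) evaluated at t1 = t2 = 0
m31 = sp.simplify(sp.diff(M, t1, 3, t2, 1).subs({t1: 0, t2: 0}))
print(m31)  # 3*rho*sigma_x**3*sigma_y
```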

I thought you'd enjoy finding out that an actual use for a cokurtosis-related concept arose in practice!

Regards, John