Yesterday Roy Spencer posted on his blog, with the headline "Global Satellites: 2016 not Statistically Warmer than 1998."
According to UAH's model, the average temperature of the lower troposphere for 2016 was 0.50°C, and for 1998 it was 0.48°C. UAH puts the uncertainty of the annual value at σ=0.05°C, so the 2σ error bars (which give, approximately, the 95% confidence limits) are 0.10°C.
So Roy concluded the two years are tied:
...they are basically tied, statistically. So to say 2016 is the warmest would be dishonest, since it ignores uncertainty in the measurements: a 0.02 deg. C change over 18 years cannot be reliably measured with any of our temperature monitoring systems.

John Christy apparently said the same.
Of course, 0.50°C is larger than 0.48°C, so I think there's some sleight of hand here, spun, I suspect, for the sake of headlines at Breitbart, Climate Depot, the Daily Caller and deniers of that kind.
I asked Roy what is the probability 2016 was the warmest year of the two years, but got no response from him. So I tried to calculate it myself; see what you think.
I assumed the two annual temperatures were each normally distributed, with the mean (best estimate) at the published numbers, 0.48°C and 0.50°C, and a standard deviation for each of σ=0.05°C.
So the picture is two Gaussian curves, side-by-side, with 2016's curve 0.02°C to the right of 1998's curve, both having a standard deviation of σ.
To calculate the probability that 1998 is the warmest year, I took it to be the area under its Gaussian curve from 2016's best estimate out to infinity.
0.02°C is a small difference between the two years' best estimates, but it's also 0.4 standard deviations (0.02°C/σ), and that isn't so small.
Normalizing the coordinates to unitless numbers, we want the area under the 1998 curve from x = 0.4 to infinity.
The area of the normal distribution to the right of 0.4 is 0.3446, from this handy table. (It's straightforward to calculate, too, using the error function (erf(x)), but I got too tied up in getting the factors of 1/2 and √2π and the like correct, especially between the Wikipedia function and the Excel functions, so I just looked it up.) So
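For anyone who'd rather not chase the factors of 1/2 and √2π by hand, here's a minimal sketch of the tail-area calculation using Python's built-in error function (a hypothetical helper name; not from the original post):

```python
from math import erf, sqrt

# Area of the standard normal distribution to the right of z,
# i.e. 1 - Phi(z), written in terms of the error function:
#   Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
def upper_tail(z):
    return 0.5 * (1 - erf(z / sqrt(2)))

print(round(upper_tail(0.4), 4))  # 0.3446, matching the table
```

The sqrt(2) inside erf is exactly the factor that's easy to drop; with it in place, the result agrees with the table's 0.3446.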
probability 1998 is the warmest year = 34%.
The probability 2016 was the warmest year is the complement of this, since we're only considering two possible years (the third-highest annual average, 2010, is 0.33°C, so not even close):
probability 2016 is the warmest year = 66%.
The chance 2016 was the warmest year in UAH's records is about twice that of 1998.
This may not be the most mathematically rigorous way to do this, and I don't know if it's how Gavin Schmidt does it. But 66% isn't a surprising result; it "seems" believable (remember, it's 0.4 standard deviations higher).
So, statistical tie, or 2-1 odds?
Update, 1/6: Based on Nate's calculation on Roy Spencer's blog, I now think the right answer is his 61%. He used a 1-sigma margin of error of σ·sqrt(2) for the difference. For the difference of two numbers, D = Y − X, where X and Y each have a 1-sigma margin of error of σ, the 1-sigma MOE of D is sqrt(σ² + σ²) = σ·sqrt(2) = 0.07°C in this case. Then we're looking for the probability that we're 0.02/(0.05·sqrt(2)) ≈ 0.28 standard deviations above zero. The area to the right of that is, by the table linked above, 38.97%, so its complement, the probability that 2016 was the warmest year, is 61%.
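Nate's difference-of-two-Gaussians version can be written out the same way (a sketch using Python's error function; variable names are mine):

```python
from math import erf, sqrt

sigma = 0.05
sigma_diff = sigma * sqrt(2)        # 1-sigma MOE of the difference D = Y - X, about 0.07
z = 0.02 / sigma_diff               # about 0.28 standard deviations above zero

# P(D > 0) = Phi(z), the area of the standard normal below z
p_2016 = 0.5 * (1 + erf(z / sqrt(2)))
print(round(p_2016, 3))  # 0.611
```

So the difference-based answer is about 61% for 2016 and 39% for 1998, matching the table lookup above.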
This is worth reading; I'll write about it in my next post:
“The Myth Of The Statistical Tie,” David Drumm, jonathanturley.org, 10/6/2012