Tuesday, March 04, 2008

Rejected! Conspiracy? No! (Pt. 1)

A recent paper submitted to EOS and Nature Precedings by Fergus Brown, James Annan and Roger Pielke Sr. was “summarily rejected” (Roger’s emphasis, not mine). The paper, published on Pielke Sr.’s site, concerned a poll, carried out by the authors via email, that demonstrated a range of opinion among mainstream climate scientists concerning the ‘IPCC consensus’. Unsurprisingly, most thought that the IPCC WG1 report was pretty much correct, some thought it was too conservative and some thought it went too far. No one denied global warming.

So, what’s the problem?

All of the authors are switched-on people who know their climate science. But when it comes to polling, their expertise leaves a little to be desired, and it shows in their paper. It shouldn’t be published in its current form. Not because of a conspiracy. Not because its conclusions are necessarily wrong. It’s just that the potential statistical errors, which the paper mostly ignores, make the conclusions as likely as not to be invalid. Simple as that.

Now, I’m not an expert in polling myself. Far from it. However, avidly reading the excellent Possum’s Pollytics during the run-up to the recent Australian Federal election, I got a bit of a feel for the subject (if I ever do get around to doing the best blogs of ’07, Possum will win it by a country mile). When you do a poll, there are some things you have to do to have it taken seriously, and some things you really shouldn’t do. In my always humble opinion, the Brown et al. survey falls well short.

Unfortunately, one consequence of the rejection is that Roger is alluding to a cover-up, and the denialist blogosphere has latched on and started screaming. As always, it would have been far better if the reviewer(s) had given some reasons for the rejection, along with a chance to fix the errors.

So what’s the problem?

Part 1 of this post will look at the most obvious problem.

Don’t publish a poll without ‘teh MOE’. It’s the good ol’ Margin Of Error and it needs to be there. Otherwise, the poll looks amateur. Using a given confidence level (the most common is 95%), you can easily calculate a range around your sample result that is likely to contain the true percentage of the entire population who would give the affirmative to a particular answer (though you’ll be wrong 5% of the time, hence the 95% level!). The formula for a 95% margin of error is MOE = 1.96 * sqrt(p(1 - p)/n), where p is the proportion of the sample that gave the affirmative to that particular answer and n is the sample size; the confidence interval is then p ± MOE.
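Here’s that calculation as a quick Python sketch (my illustration, not anything from the paper itself):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a sample proportion.

    p -- proportion of the sample giving the affirmative (0 to 1)
    n -- sample size
    z -- z-score for the confidence level (1.96 for 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical political poll: a 50/50 split among 1,000 respondents.
print(margin_of_error(0.5, 1000))  # ~0.031, i.e. a MOE of about 3%
```

Note that p(1 - p) peaks at p = 0.5, so a 50/50 split is the worst case for a given sample size.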

So let’s look at the Brown et al. sample. To simplify, I’ve taken the number of people who supported a particular answer and added half the number of people who split their response between the two answers on either side (the total still = 100%), as sketched below.
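In code, that reallocation looks something like this (the counts here are made up purely to show the method; the real numbers are in the paper):

```python
# Hypothetical counts for answers 1-7, plus responses split between
# adjacent answers (splits[i] sits between answers i+1 and i+2).
answers = [1, 3, 10, 20, 80, 22, 4]   # made-up, NOT the survey's data
splits  = [0, 1, 2, 6, 4, 1]          # made-up, NOT the survey's data

adjusted = [float(a) for a in answers]
for i, s in enumerate(splits):
    adjusted[i] += s / 2      # half of each split goes to the lower answer
    adjusted[i + 1] += s / 2  # and half to the upper answer

total = sum(adjusted)
percentages = [100 * a / total for a in adjusted]  # still sums to 100%
print(percentages)
```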

57% of people ticked answer no. 5. With n = 140, the MOE = 8.2%. So the percentage of the population of climate scientists, GIVEN A PERFECTLY RANDOM SAMPLE, who would answer yes to no. 5 is likely to be between 48.8% and 65.2%, with a 5% chance that the true percentage lies outside these values. I’ve plotted the MOEs for all the answers below:

[Plot: percentage support for each answer, with 95% margin-of-error bars]

As you can see, if the sample taken was perfectly random and no other errors were introduced (a proposition that is highly unlikely, which I’ll cover in Part 2), it’s likely that there isn’t a ‘100%’ consensus of scientists supporting the IPCC position. However, a poll with a simple MOE of around 8% just isn’t precise enough to be taken very seriously. Most political polls in Oz come in with a MOE of around 3%, and I suspect a reputable journal would want something similar before they’d publish it.
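For anyone who wants to check the answer no. 5 arithmetic, here it is as a couple of lines of Python:

```python
import math

# Answer no. 5: 57% of a sample of 140.
p, n = 0.57, 140
moe = 1.96 * math.sqrt(p * (1 - p) / n)   # ~0.082, i.e. about 8.2%
print(f"{p - moe:.1%} to {p + moe:.1%}")  # prints "48.8% to 65.2%"
```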

It’s important to note that this is a separate, though related, issue to the ‘low’ response rate. Put simply, the sample size is too small. Because the MOE depends on the sample size rather than the population size, 1,000 people can give a fairly accurate representation of 100 million voters, but 140 people can’t give an accurate representation of even a few tens of thousands of climate scientists (or fewer).
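Turning that around: inverting the MOE formula tells you how big a sample you need for a target margin (another sketch of mine, using the worst case p = 0.5), which is where the classic ~1,000-person poll comes from:

```python
import math

def sample_size_needed(target_moe, p=0.5, z=1.96):
    """Smallest n whose MOE is at most the target (worst case p = 0.5)."""
    return math.ceil(z**2 * p * (1 - p) / target_moe**2)

print(sample_size_needed(0.03))   # 1068 -- roughly the standard 1,000-person poll
print(sample_size_needed(0.082))  # 143 -- about the sample Brown et al. had
```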

Yes, there probably isn't a perfect consensus. Can that be quantified by the survey? No.