How reliable is the global temperature record? Part two

Let’s introduce the characters in our story.

First, the satellite measurements.  Back in part 1, we found that satellites infer atmospheric temperatures by measuring microwave emissions from oxygen molecules.  There are two primary groups working with satellite data: one at the University of Alabama in Huntsville (UAH) and the other at Remote Sensing Systems (RSS); both process the data from satellites to produce a temperature record.  Other groups work with this data as well, but RSS and UAH are the major players.  There is a lot of interesting physics and climate work here, but the data only go back to late 1978, and a substantial amount of processing is necessary to create the satellite temperature record.  The MSU post from Tamino’s Open Mind blog has a nice summary of the satellite info, albeit from 2007.  I plan to get back to this data later in our series.

The ground record of temperature comes from a variety of sources and is turned into monthly global-average readings by three independent research groups:

  1. The Met Office, in collaboration with the Climate Research Unit (CRU) at the University of East Anglia in the UK.
  2. The Goddard Institute for Space Studies (GISS), part of the National Aeronautics and Space Administration in the US.
  3. The National Climatic Data Center (NCDC), part of the National Oceanic and Atmospheric Administration in the US.

These three groups use different methods to collect and process the temperature readings when calculating the global-average numbers.  Their results are similar, and they are in close agreement on the decade-to-decade trends.
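The details differ from group to group, but the final step in each case amounts to area-weighted averaging over a latitude-longitude grid, so that a dense cluster of stations in, say, Europe doesn’t dominate the global number.  Here’s a minimal sketch of that idea in Python; the 5-degree cells and anomaly values are invented for illustration, not taken from any of the real datasets.

```python
import math

def global_mean(grid_cells):
    """Area-weighted mean over a dict mapping (lat, lon) cell centers,
    in degrees, to a monthly temperature anomaly in degrees C.
    Cells with no data are simply absent and contribute no weight."""
    weighted_sum = total_weight = 0.0
    for (lat, _lon), anomaly in grid_cells.items():
        weight = math.cos(math.radians(lat))  # cell area shrinks toward the poles
        weighted_sum += weight * anomaly
        total_weight += weight
    return weighted_sum / total_weight

# Three hypothetical 5-degree cells for one month (values invented):
cells = {(2.5, 12.5): 0.2, (42.5, -87.5): 0.4, (77.5, 102.5): 1.1}
print(f"global mean anomaly: {global_mean(cells):+.2f} C")
```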

The Big Three draw on different data sources, each with its own characteristics.  It is generally agreed that reasonably reliable surface temperatures with wide geographic coverage begin around 1850.  Some records begin earlier, but with restricted geographic coverage.

The longest temperature record in use is the Central England Temperature record, dating from 1659 to the present.  It consists of monthly averages from 1659 to 1772, and daily averages from November 1772 onward.  The Met Office maintains the official CET record at the Hadley Centre.  In addition, the Hadley Centre maintains HadCRUT3, a global temperature dataset dating from 1850.

NASA maintains GISTEMP, a record which provides monthly averages with wide geographic coverage from about 1880, although some records go back to the 1770s.  NOAA is responsible for the Global Historical Climatology Network (GHCN), which contains records from as early as 1697; widespread gridded monthly average temperatures are available from January 1880.  GHCN also has precipitation and pressure readings.  The US portion of the GHCN is the United States Historical Climatology Network (USHCN) dataset, maintained by the NCDC and adjusted independently of the worldwide data.

Finally, the important Antarctic records come from the Scientific Committee on Antarctic Research (SCAR) via the Reference Antarctic Data for Environmental Research (READER) program.  There are few fixed surface stations in Antarctica, and most stations have operated for short periods, appearing and disappearing, which makes for a very interesting data analysis problem.

Next we’ll look at the problems facing the keepers of the datasets.

Politics and Science of AGW

Ran across Walter Russell Mead’s level-headed assessment of the interaction between Science and Politics in the AGW arena.  You may have seen it already, but if not, take a moment to look over Why Climate Science is on Trial.  Read the whole thing.

However absurd the skepticism in a particular case, in a general way a certain level of skepticism about the work of scientists is justified.  The ‘scientific consensus’ has often been wrong in the past — and scientists are just as arrogant, dogmatic and condescending when they are wrong as when they are right.  Look at the many conflicting ideas that economists have brought forward over the last two hundred years.  Look at how medical ideas and treatments change over time.  Look at the science of ‘eugenics’ in the light of whose findings judges once condemned people to involuntary sterilization.  Look at the persisting fad for Malthusian catastrophe scenarios.  Homosexuality was once scientifically defined as a form of mental illness.  Trans-fats were made into margarine and promoted on scientific grounds as healthier than butter.  Skepticism about self-confident scientists with reams of data and arrogant attitudes is a very sensible attitude for laypeople to take.

Exactly so.  Remember, extraordinary claims require extraordinary proof.

Is the Scientific method being subverted by scientists?

While working on the next global temperature post, I ran across this distressing article at the BBC.  I’m thinking one of the side effects of ClimateGate will be far healthier skepticism and more pointed inquiry into the inside game of scientific research.  From the article:

Stem cell experts say they believe a small group of scientists is effectively vetoing high quality science from publication in journals.

In some cases they say it might be done to deliberately stifle research that is in competition with their own.

A small clique of researchers abusing peer-review and peer-reviewed publications for personal benefit?  Hummm, now where have I heard that before?

These kinds of allegations are not new and not confined to stem cell research. But professors Smith and Lovell-Badge believe that the problem has become particularly acute in their field of research recently for two reasons.

Firstly, research grants and career progression are now determined almost entirely by whether a scientist gets published in a major research journal. Secondly, in stem cell science, hundreds of millions of pounds are available for research – and so there is a greater temptation for those that want the money to behave unscrupulously.

Human beings act like human beings, even if they have Ph.D. after their names.  Healthy skepticism, open debate, and full disclosure of research results are important, but policing the entire chain of research, review, and publication is surely necessary when job security, not to mention large sums of money, is on the line.  In fact, I’m a bit suspicious that large sums of money might be the corrosive element in the current environment of Big Science.  Big Science requires lots of money, and most of the interesting questions require Big Science, so this problem isn’t going away.  Somehow, we need to come to grips with this problem and find a way to ensure the integrity of Science and the Scientific Method in a world of billion-dollar research budgets.  Clearly, relying on the personal integrity of scientists is not getting the job done, and that’s quite depressing.

One more quote from the article on the dangers of just looking at numbers without context:

“We are seeing the publication of a lot of papers in high-profile journals with minimal scientific content or advance, and this is partly because of these high-profile journals needing to keep their so-called ‘impact factors’ as high as possible.  That’s determined by the number of citations that the papers have and they know that some of this trendy work is going to get cited and they seem not to care about whether it’s a real scientific advance or not,” [Professor Lovell-Badge] said.

I remember from my grad student days stories of young researchers making small errors in papers just to juice the citation index.  A citation is a citation, even if it corrects an error, and few tenure committee members tracked down every citation to see why a paper was mentioned.  Juicing your numbers early in a career is one thing; abusing the system for your entire career is quite another.  I’m not sure of the extent of the problem today, but I do get the feeling Science may need saving from the scientists in the future.

How reliable is the global temperature record? Part 1

What exactly is the global temperature?  How is it calculated?  And most importantly, from the climate change perspective, how can we know the global temperature is changing?  First, let’s look at how we measure heat.

Thermometers measure the heat energy of objects.  A thermometer uses a sensor, for example the venerable glass tube filled with mercury, that responds to the thermal energy of its surroundings.  As the surroundings of the glass tube warm or cool, the mercury expands or contracts within the tube.  Place a scale alongside the tube, and the reading on the scale tells you how much heat is in the surroundings.  Calibrate the scale against standard temperatures, such as water boiling at 100 degrees Celsius, or against a known accurate thermometer, and the scale readings can be converted into temperature.

The accuracy of the reading tells how closely the measurement corresponds to the real temperature.  If your thermometer reads 51 degrees when a very accurate lab thermometer reads 50.0, your thermometer is accurate to at best one degree.  The precision of the reading tells how much information the scale gives; for example, if the marks on the scale are every two degrees, it is usually possible to read the scale to the nearest degree (on a mark or halfway between two marks).  Finally, the reproducibility of the thermometer tells whether the same temperature always results in the same reading.  If the reproducibility is poor, comparisons between different readings become problematic.
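To make the accuracy and reproducibility ideas concrete, here’s a toy calculation with made-up readings (precision isn’t computed here; it’s set by the scale markings themselves):

```python
# Made-up repeated readings of the same 50.0-degree reference bath.
readings = [51.0, 51.5, 50.5, 51.0, 51.0]
reference = 50.0

mean_reading = sum(readings) / len(readings)
bias = mean_reading - reference          # accuracy: systematic offset from truth
spread = max(readings) - min(readings)   # reproducibility: scatter between repeats

print(f"bias:   {bias:+.1f} degrees")    # +1.0, accurate to about a degree at best
print(f"spread: {spread:.1f} degrees")   # 1.0 between highest and lowest repeats
```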

If we are measuring air temperature, we must make sure the thermometer is in a neutral setting.  The thermometer can’t be in direct sun, or near buildings which are heated and cooled.  It can’t be shielded by a canopy of trees, and so on.  Every site has some environmental influences, such as that tree canopy, or the nearby city and airport, that have measurable effects on the temperature readings.  Ideally, the thermometer stays in the same place and nothing changes, but in reality, things change.  Trees grow, buildings go up and down, the population of the nearby area grows, the weather station is moved to make way for progress.

If a weather station has been measuring temperatures for a long time, without question a careful scientist would want to adjust the record a bit to account for these local environmental changes.  If one knows the station moved, one can look at the measurements in the time surrounding the move and see whether, on average, the readings are just a bit higher or just a bit lower than before.  Similarly, when a new instrument is installed, one can compare before-and-after readings to see whether the old instrument was reading a little lower or a little higher than the new instrument.  There are lots of reasons the long-term temperature record from a station might need some tweaking to give a consistent set of measurements.  The important point from a scientific perspective is that the adjustments are objectively calculated, and everyone knows why and by how much the raw readings were changed.
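As a minimal sketch of the station-move case: compare average readings in windows just before and just after the move, and treat the difference as the offset to remove.  Real homogenization also compares against neighboring stations and tests for statistical significance; the numbers below are hypothetical.

```python
def move_offset(before, after):
    """Estimated step change from a station move: mean of readings just
    after the move minus the mean just before it."""
    return sum(after) / len(after) - sum(before) / len(before)

# Hypothetical monthly means around a move; the new site reads ~0.3 C cooler.
before = [15.2, 15.4, 15.1, 15.3]
after = [14.9, 15.0, 14.8, 15.1]

offset = move_offset(before, after)           # about -0.3 C
adjusted_after = [t - offset for t in after]  # shift post-move readings back in line
```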

So now we have a reasonably consistent set of readings from a single station.  To compute a global temperature, ideally we’d place weather stations all over the globe, take measurements, average the daily minimum and maximum temperature for each station, then take the daily averages of all stations and compute a global average.  In turn, each day’s global average could be compared against the same time period in prior years, and after collecting decades of readings, scientists would be able to predict the expected global average temperature for a time period.  The difference between the measured temperature and that long-term average, or reference value, is the anomaly.  A warming climate gives a positive anomaly, and a cooling climate gives a negative anomaly.  Over time, the rate of change in the anomaly tells us whether the worldwide climate is tending toward increasing or decreasing temperatures.  Note that the anomaly can be positive but decreasing, and vice versa.
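Here’s a minimal sketch of the anomaly computation for one station and one calendar month.  The readings are invented, and the three-year base period is compressed for brevity; real datasets use a 30-year reference period such as 1961–1990.

```python
def monthly_anomalies(temps_by_year, base_years):
    """Anomaly = each year's reading minus the mean over the base period.
    temps_by_year maps year -> mean temperature (C) for one calendar month."""
    baseline = sum(temps_by_year[y] for y in base_years) / len(base_years)
    return {year: round(t - baseline, 2) for year, t in temps_by_year.items()}

# Hypothetical June means for one station:
junes = {1961: 14.1, 1962: 13.9, 1963: 14.0, 2009: 14.6}
print(monthly_anomalies(junes, [1961, 1962, 1963]))
# {1961: 0.1, 1962: -0.1, 1963: 0.0, 2009: 0.6} -> 2009 runs 0.6 C above baseline
```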

Satellites add an interesting variation to this.  Remote-reading thermometers calculate the thermal energy in objects by measuring photon emission from the object in one or more wavelengths.  In the case of satellites, microwave emissions from atmospheric oxygen are measured.  Lots of work is necessary to accurately measure atmospheric temperature from a satellite, as Roy Spencer details.  Satellite measurements have several advantages over ground station measurements: a single satellite can see large portions of the earth, the same instrument is used to measure temperatures in many places, and there are fewer tricky environmental changes to contend with.  But satellite data only go back to the late 1970s.  However, they do provide a good check on the consistency and reproducibility of ground measurements.
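At its core, the retrieval combines measurements that sample different altitude slices of the atmosphere so the weighted result targets a particular layer.  The toy below uses two hypothetical brightness temperatures and invented weights; the real UAH and RSS processing also deals with inter-satellite calibration, orbital drift, and more, as the Spencer piece describes.

```python
# Two hypothetical microwave brightness temperatures (kelvin), each sampling
# a different altitude slice of the atmosphere.
brightness = {"view_a": 252.1, "view_b": 231.7}

# Invented weights that emphasize the lower slice; they sum to 1 so the
# result stays a physically plausible temperature.
weights = {"view_a": 1.4, "view_b": -0.4}

layer_temp = sum(weights[v] * brightness[v] for v in brightness)
print(f"estimated layer temperature: {layer_temp:.2f} K")  # 260.26 K here
```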

Now we are ready to look at the various global temperature records in the next post.

Credibility is what’s really melting

Mark Steyn traces the arc of the pseudo-fact that Himalayan glaciers might melt by 2035.

That magnificent landform is melting before his eyes like the illustration of the dripping ice cream cone that accompanied his eulogy for the fast vanishing glaciers. Everyone knows they’re gonna be gone in a generation. “The glaciers on the Himalayas are retreating,” said Lord Stern, former chief economist of the World Bank and author of the single most influential document on global warming. “We’re facing the risk of extreme runoff, with water running straight into the Bay of Bengal and taking a lot of topsoil with it. A few hundred square miles of the Himalayas are the source for all the major rivers of Asia—the Ganges, the Yellow River, the Yangtze—where three billion people live. That’s almost half the world’s population.” And NASA agrees, and so does the UN Environment Programme, the Intergovernmental Panel on Climate Change, and the World Wildlife Fund, and the respected magazine the New Scientist. The evidence is, like, way disproportionate.

But where did all these experts get the data from? Well, NASA’s assertion that Himalayan glaciers “may disappear altogether” by 2030 rests on one footnote, citing the IPCC’s Fourth Assessment Report from 2007.

Don’t forget, the IPCC said the probability of the glacier meltdown was “very high”, which would be 90% probable.  And it all rested on speculation by a single scientist in a telephone call.

What else is in the IPCC report that can’t be backed up?

The Undermedia has arrived

A three-part series over at BigJournalism traces the story of the University of East Anglia CRU email and data leak, better known as ClimateGate.  Interesting reading on the timing and politics of the ClimateGate story.