WunderBlog Archive » Dr. Ricky Rood's Climate Change Blog

Temperature (2) – Satellites, personally

By: Dr. Ricky Rood, 11:36 PM GMT on August 06, 2010

In the last entry I started down the path of looking at climate change and temperature from a personal perspective. I was motivated in part by Paul Edwards’s historical point of view in his book “A Vast Machine,” as well as by a desire to set experience, intuition, and comprehensive scientific investigation side by side. A thread in the last entry was the need, the requirement, to look at multiple sources of information rather than isolated sources of information. I specifically argued that relying on isolated sources of information is a type of argument – often a political tactic to promote doubt in order to advance a political position. In this article I want to extend my point-of-view approach, with the advantage of having more of an inside position than most. Ultimately, I plan this to be relevant to the quality of satellite temperature observations. We’ll see.

The time that I was in college, in the early to mid 1970s, was the time when semiconductors and scientific calculators started to take off. When I was, say, a sophomore, the physics department kept a few Wang calculators chained down in small rooms for student use. By the end of college, Texas Instruments and Hewlett Packard (HP) were making scientific calculators. As I recall, one of the great advances of the HP45 was the memory to store the results of up to, maybe, 10 calculations. The HP calculators used an entry and calculation method called Reverse Polish Notation (RPN), which some felt was counterintuitive. I had a professor who had spent many years in industry with Texas Instruments. In his office one day, I was using my HP calculator to make some simple calculations. He said that he did not trust the RPN in the HP calculators. He would not accept the numbers and redid the calculations on his Texas Instruments calculator. Taking this rejection of the HP technology at face value, here was a man who simply did not believe in the calculation despite evidence of many calculations giving the same result. There seemed, to him, to be the possibility that there existed exceptions to the rule. Or perhaps the counterintuitive entry and calculation method of the HP calculator was perceived as more error prone.
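For readers who never met an RPN calculator: it takes operands first and the operator last, using a stack instead of parentheses. A minimal sketch of the idea (an illustration, not the HP firmware) looks like this:

```python
# Minimal sketch of RPN (postfix) evaluation: operands are pushed onto a
# stack; each operator pops its arguments and pushes the result.

def rpn_eval(tokens):
    """Evaluate a list of RPN tokens, e.g. ['3', '4', '+'] -> 7.0."""
    ops = {
        '+': lambda a, b: a + b,
        '-': lambda a, b: a - b,
        '*': lambda a, b: a * b,
        '/': lambda a, b: a / b,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # second operand entered
            a = stack.pop()   # first operand entered
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# Infix (5 + 3) * 2 becomes "5 3 + 2 *" in RPN -- no parentheses needed.
print(rpn_eval("5 3 + 2 *".split()))  # 16.0
```

The same arithmetic, just a different order of entry – which is exactly why the results agreed and why distrusting them was a matter of habit, not evidence.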

I have seen this type of “trusted-method” focus throughout my career. I posit that there are personal, experiential, and even emotional reasons to develop such a focus – such a belief. In my little corner of atmospheric modeling, researchers develop single-minded passions for the numerical algorithms used to propagate wind and temperature information in the models. I have heard scientists state that they would not trust the results of a model that has this or that algorithm in it, despite a preponderance of quantitative evidence that different approaches provide credible results. There is, to some perhaps, an element of doubt that there exists some unaccounted-for exception to the rules.

More curiously, perhaps, here is an example from when I was at NASA. This was a time when there was a rapid expansion of the number of instruments that measured, for example, ozone. The new instruments used multiple methods from multiple satellites to measure ozone.

One of the basic elements of the scientific method is to validate that instruments are actually measuring what they are supposed to be measuring. It’s not so easy to make measurements from a spacecraft traveling at a few miles per second, looking at a small number of photons from a trace number of molecules. But if you slog through the thousands of pages of evidence and arguments, most scientists become convinced that we can do it. After validation, researchers try to conduct scientific investigations of, say, what causes ozone depletion in the Arctic, or to make predictions. I used to marvel at the researchers who would only use the information from a particular instrument or a particular measurement type (for example, microwave versus thermal spectroscopy). They would do comical back flips in order to not use information from another source that they did not, apparently, “believe in.” Or perhaps they thought that other methods were counterintuitive, arcane, or could not be trusted.

I have seen satellite people not trust balloon people, balloon people not trust people who use airplanes, airplane people not trust people who use surface-based methods, and of course, none of them trust models. It goes on. I have, here, introduced the idea that there are reasons other than political arguments that people decide to anchor themselves on singular information. This focus on reduced problems, on focused information, is a natural result of that part of the scientific method that follows from the development of focused problems that can be used to study cause and effect.

Standing in contrast to these focused, “small” problems is the requirement of scientific investigation to combine alternative sources of information, to identify and reconcile conflicts, and to develop quantitative descriptions of complex systems. This unifying path is the subject of a different story – one that has been hinted at in other blogs. (Link1, Link2, Link3)

Trying to stick to a point – some might recall that last fall I wrote an entry motivated by questions from a school in San Diego. Subsequently, they asked me some questions about an interview with S. Fred Singer, one of the most outspoken critics of the scientific investigation behind anthropogenic global warming. I read the piece they sent me, and I was struck by a number of points in Singer’s story. The ultimate form of argument is to line up, in isolation, a collection of information to support his story. One of the pillars of the story is to state the preference for satellite observations of temperature to stand above all others. One of the forms of argument used in the criticisms of the body of science associated with global warming is to create a false tension between surface observations, balloon observations, and satellite observations.

As someone who sat inside of NASA for, gasp, 20 years, the evolution of the superiority of one measurement over the other feels like revisionist history. Routine satellite observations of temperature began in 1979. As can be documented through many papers from investigators in the U.S. and Europe, it took many years of effort from hundreds of scientists to extract useful information from these satellite observations. The standard of quality was set by a relatively sparse network of balloon observations, and the information from the balloon observations was used to inform the satellite observations. Satellite observations of temperature close to the Earth’s surface were, and still are, notoriously difficult to make. Think: clouds, microclimates, hills, trees, lakes, the increasing “thickness” of the atmosphere, and an instrument moving several miles per second. In the past ten years, our ability to use and to extract information from satellite observations has gotten much better. This has motivated some people to start to untangle the knots that developed over the years, when the surface and balloon measurements were at the very heart of making the satellite measurements meaningful – often providing the “first guess” in estimating satellite temperatures.

To choose one method and one source of information, to isolate that source of information, and to imbue that source of information with some exaggerated fundamental value is not part of a scientific argument. It is, in fact, an argument to ignore information. The practice of setting up tensions between satellite and non-satellite observations in the discussion of climate change is, with or without intent, incomplete, dishonest – or at least a sin of omission. At the risk of tedium – it is a form of political argument, or perhaps a preference based on personal, experiential, and even emotional reasons.

A scientific and factual point from above is that satellite temperature data is not independent of temperature data taken at the surface or in the atmosphere. We have improved our ability to derive temperature from satellites without the use of ancillary observations, but in general, remotely sensed observations require some sort of information for calibration.
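To make the calibration point concrete, here is a hypothetical sketch of the simplest form of such a tie: fitting satellite-derived temperatures to collocated balloon (radiosonde) temperatures with a linear correction. The numbers are invented for illustration; real retrieval calibration is far more elaborate, but the dependence on ancillary data is the same in kind.

```python
# Hypothetical illustration: calibrate satellite-derived temperatures
# against collocated balloon (radiosonde) temperatures with an
# ordinary least-squares linear fit. Data values are invented.

def linear_fit(x, y):
    """Least-squares slope a and intercept b for y ~ a*x + b."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    b = my - a * mx
    return a, b

# Collocated pairs: satellite-derived vs. balloon-measured temperature (K).
sat     = [250.1, 255.3, 260.2, 265.0, 270.4]
balloon = [251.0, 256.0, 261.0, 266.0, 271.0]

a, b = linear_fit(sat, balloon)
calibrated = [a * t + b for t in sat]  # bias-corrected satellite values
```

The calibrated satellite record inherits whatever is in the balloon record – which is precisely why the two cannot honestly be played against each other as independent witnesses.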

There are other attributes of the satellite observations that offer challenges to climate scientists. Much has been made about instrument and siting problems of surface temperature observations. Satellites are not without their problems. There are intrinsic difficulties in making accurate calibrations, which are amplified by the vibrations of launch and by operating instruments in a space environment full of protons and electrons, requiring extraordinary cooling when exposed to the sun and stable heating when not; that is, it is difficult to control the environment in which the instrument operates. Satellite observations change from one instrument to the next, and they change over the course of the lifetime of the instrument. Therefore, to maintain a belief that satellites intrinsically define a more accurate and stable measurement method is naïve.

This leads to my final point, which is true for almost all environmental measurements. Only in the most recent years have observations been collected specifically for climate studies. Most often observations are collected for weather, without special regard to long-term instrument stability or consistent calibration from one instrument to the next, from one country to the next, from one vendor to the next. Hence, there have been extraordinary efforts to create climate data records after the fact. This is true for the famed Microwave Sounding Unit observations, and there are several ways this has been done. (NASA and NOAA funded “Pathfinder” efforts to create climate data records in the 1990s.)

There is no magic. There is no single observational record that rises to the level of absolute. If, in the field of climate science, an argument is made based on a single data set in the absence of information from other data sets, then you can assume that this “brand-specific” argument is trying to market a particular message. If you are respectful of the scientific method and see such an isolated argument – especially if the argument is maintained by investigators who are responsible for the generation of the data – then your skeptic’s alert should be raised. More than likely it is a political or advocacy argument, rather than a scientific argument.


Figure 1: From The Use of TOVS/ATOVS Data in ERA-40 (European Centre for Medium-Range Weather Forecasts). This document describes the treatment of satellite data in the formation of “consistent” data for use in climate studies. Jargon and abbreviations are defined in the document. The figure shows the data records for specific satellites, which carry different suites of instruments. Figure caption from the cited document:

Fig. 1 illustrates the availability of data for each type of satellite. The data have been acquired from several sources. The sources of TOVS data are NCAR, the Laboratoire de Meteorologie Dynamique (LMD), NASA, and the ECMWF operational archive, and the sources of ATOVS data are NASA and the ECMWF operational archive. TOVS data are available from October 1978 to date, from satellites TIROS-N, NOAA-6, NOAA-7, NOAA-8, NOAA-9, NOAA-10, NOAA-11, NOAA-12 and NOAA-14. ATOVS data are available from June 1998 to date, from NOAA-15.

The views of the author are his/her own and do not necessarily represent the position of The Weather Company or its parent, IBM.