I'm a professor at U Michigan and lead a course on climate change problem solving. These articles often come from and contribute to the course.
By: RickyRood, 7:10 AM GMT on February 21, 2012
This past week I had a short letter published in Scientific American. The letter concerned a statement made in an article that climate models do not include clouds. This incorrect statement has been around for many years, and in my experience it shows up most often in science-focused publications. I remember an exchange of letters in Physics Today in 2005. As best as I can tell, the statement traces back to a historical document noting that the first climate models, written in the late 1960s, contained specified clouds – meaning that the clouds did not change as the climate changed. By the end of the 1970s, cloud parameterizations were becoming standard in climate models, and in the 1980s the interplay between clouds and solar radiation emerged as one of the most important metrics of model performance.
My letter goes on to state that the uncertainty in climate projections associated with the physical climate model is smaller than the uncertainty associated with the models of emission scenarios that are used to project carbon dioxide emissions. This statement is worthy of more discussion. Let me start with a couple of reminders. In all of these endeavors looking to the future we use models. Models are constructed based on observed behavior and are tools for projecting future outcomes. By “physical climate model” I mean a mathematical representation based on the laws of physics. Most simply, in this case, how is solar energy absorbed by the Earth, redistributed, and then emitted back to space? More generally, laws that govern physics, chemistry and biology are incorporated into climate models.
Another important ingredient in making climate projections is an estimate of our future emissions of carbon dioxide and other greenhouse gases. “Emission scenario” models are based on assumptions about population growth, economic development, and the sources of energy that drive the economy. Historically, one type of scenario is called “business as usual” and simply extrapolates curves of past energy use into the future. If we take emission curves that, for example, stop in 2005 and project them forward, we find that emissions in the last couple of years have run ahead of those projections. Generally, business as usual is assumed to be the worst case. We have several emission models based on various assumptions about the development and deployment of technology. Current efforts in climate science are striving to make emission models and physical climate models talk to each other – to interact.
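The idea of a “business as usual” extrapolation can be sketched in a few lines of code. This is a toy illustration only: the emission numbers below are made up for the example, not an actual dataset, and real scenario models are far more elaborate than a curve fit.

```python
# Toy sketch of a "business as usual" scenario: fit exponential growth
# to a short historical emissions record and extrapolate it forward.
# The numbers are illustrative placeholders, not real emissions data.
import numpy as np

years = np.array([1990, 1995, 2000, 2005])        # historical years
emissions = np.array([22.7, 23.4, 25.2, 29.5])    # Gt CO2/yr (made up)

# Fit exponential growth: log(E) = a + b * year
b, a = np.polyfit(years, np.log(emissions), 1)

def business_as_usual(year):
    """Extrapolate the fitted historical curve to a future year."""
    return float(np.exp(a + b * year))

for y in (2010, 2030, 2050):
    print(y, round(business_as_usual(y), 1))
```

Note that this is exactly the kind of projection the paragraph above describes as overtaken by events: actual emissions in recent years ran ahead of curves fitted through 2005.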
Physical climate models are based on the laws of physics, and that provides strategies for determining cause and effect. If cause and effect can be determined to a high degree of certainty, then we can be quite certain about predictions. The economic models that I know are based on observations of economic systems that are then represented through a set of mathematical relationships. These relationships are often statistical, strive to represent human behavior, and include measures of worth that depend on how much humans value something. In atmospheric science, for example, there is a set of “primitive equations” that all agree describes the motion of the atmosphere. No comparable set of physically derived equations sits at the basis of economic projections. I hope I have stayed out of trouble here. As in a number of previous entries, I draw your attention to Daniel Farber’s Climate Models: A User’s Guide. Farber is neither a climate scientist nor an economist, a fact that I always view as providing a measure of objective evaluation. He evaluates model robustness.
I want to discuss this uncertainty issue a little bit more, and will rely on an old standard figure from the 2001 IPCC Report. This figure has a lot of information about uncertainty.
Figure 1: From 2001 IPCC Third Assessment Report Variations of the Earth’s surface temperature: year 1000 to year 2100
The figure shows the temperature since the year 1000 forward to year 2100. The temperatures from the past are from observations of different types. The temperatures in the future are from model projections. There are a set of different physical climate models all using a standard set of emission scenarios. I have marked three types of uncertainty on the figure.
In light blue I point to a measure of observational uncertainty. This is the gray spread around the bold red temperature line; it gets smaller as more and more observations become available over time. Going into the future there are the individual colored lines of the different models, and on the right of the figure are the ranges associated with those models for the set of emission scenarios. The envelope of all of the models with all of the emission scenarios is pointed out by the green arrows. A simple estimate of uncertainty is the spread of the models. This uncertainty grows with time, and the spread when all of the scenarios are included is larger than the spread of any individual model. If you were to look at the individual models, you would see much the same thing. In the absence of different scenarios the models would have a significantly narrower spread.
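The “spread of the models” estimate of uncertainty is simple enough to sketch directly. In the toy example below, the warming values are invented for illustration; the point is only the mechanics of taking the envelope across an ensemble of model/scenario runs.

```python
# Toy sketch: uncertainty as the spread across an ensemble of
# model/scenario runs. Warming values (deg C) are made up.
import numpy as np

# rows = (model, scenario) combinations; columns = decades into the future
runs = np.array([
    [0.2, 0.5, 0.9, 1.4, 2.0],   # model A, low scenario
    [0.3, 0.6, 1.1, 1.9, 2.9],   # model A, high scenario
    [0.2, 0.4, 0.8, 1.2, 1.7],   # model B, low scenario
    [0.3, 0.7, 1.3, 2.2, 3.4],   # model B, high scenario
])

# The envelope of all runs at each time: max minus min.
spread = runs.max(axis=0) - runs.min(axis=0)
print(spread)  # the spread grows with lead time
```

Restricting the rows to a single scenario gives a visibly narrower spread, which is the behavior described above for the individual models.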
There are a number of important points in this simple approach to thinking about uncertainty. Looking at the spread of all models with all scenarios, the spread at, say, 30 years in the future is quite well defined by the lines of the individual models. It takes 30 or 40 years before the difference in the scenarios makes a difference. As a rule of thumb, a simple description of uncertainty is that in the next couple of decades “internal variability” – the spread due mostly to things like El Nino and La Nina – is most important. Then there is a length of time where the spread is due mostly to model differences. And as time approaches a century or longer, the spread due to emission scenarios begins to dominate. I note that model differences are always important, and that this difference is strongly related to details of the treatment of clouds. This uncertainty is expressed in how fast the climate warms.
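The rule of thumb above can be pictured as a crude variance decomposition: at each lead time, how much of the total spread comes from internal variability, model differences, and emission scenarios? The growth rates below are illustrative guesses chosen only to reproduce the qualitative ordering in the text, not fitted to any real analysis.

```python
# Toy sketch of the rule of thumb: partition total variance into
# internal variability, model differences, and scenario differences.
# All numbers are illustrative placeholders.
import numpy as np

decades = np.array([1, 3, 5, 7, 9])                  # decades ahead

# Illustrative variance contributions (deg C squared):
internal = np.full_like(decades, 0.04, dtype=float)  # roughly constant
model    = 0.01 * decades**1.5                       # grows with lead time
scenario = 0.002 * decades**2.5                      # grows fastest

total = internal + model + scenario
fraction_scenario = scenario / total

for d, f in zip(decades, fraction_scenario):
    print(f"{d} decades ahead: scenario share = {f:.0%}")
```

With these made-up curves, the scenario share is small in the first decade and dominates toward the end of the century, matching the qualitative description above.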
The physical climate model is like a telescope into the future; it provides actionable knowledge: the Earth will warm, ice will melt, sea level will rise, and the weather will change. As the models improve, that future comes into sharper and sharper focus. There are physical relationships that allow a high degree of confidence to be attributed to some aspects of climate projections. For example, the surface of the globe will warm in any carbon dioxide emission scenario. On this global scale, both model uncertainty and emission scenario uncertainty address the issue of how fast the surface will warm. Neither suggests any plausible scenario in which the Earth does not warm. And simply to make the point, this plot does not suggest that the warming stops at 2100; that's just as far as the information is plotted. At local spatial scales, scales for which the models were not designed, the uncertainty analysis follows a much different logic than presented here.
Old Entry on Uncertainty and Definition of Model Types
Updated: 12:31 AM GMT on February 22, 2012
By: RickyRood, 5:47 AM GMT on February 10, 2012
Using Predictions to Plan: Case Study – La Nina and the Missouri River (2)
Earlier articles in this series:
Extreme Weather: Can we use predictions to plan?
La Nina and the Missouri River (1)
Link to NCPP to Missouri River Basin Pilot
The purpose of this series of articles is to explore how we might use model predictions and projections to plan better for extreme events. It is a mix of seasonal climate prediction and decadal-to-centennial climate projections. What I want to do is to translate information from observational studies and model predictions and make that information usable by someone. From my teaching of climate-change problem solving, I have concluded that it is this translation of information that is the most essential missing ingredient in the usability of climate knowledge. There is a LOT of information and knowledge, but it is not easy to use. An interest of mine is to develop templates on how to use that knowledge – and of course, by doing so in these blogs to provide some transparency into the use of climate information.
The previous entry made a start on the problem, but as in many starts it was naïve. It did provide a sanity check that tells us that there is documented variability of precipitation in the Missouri River basin, correlated with La Nina. But, at first blush, the La Nina variability in this region is towards drier conditions. We also know that what determines a flood is far more complex than “it rains a lot.” So that start motivates me to step back and think about all of the pieces – or mechanisms – that might work in concert to produce a flood. I will start with a map and a few pictures.
Figure 1 is a map of the Missouri River Basin. The headwaters of the Missouri River are in the Rocky Mountains in a span from central Colorado to Montana. For the upper Missouri River, the ranges in Wyoming and Montana are the most important.
Figure 1: Map of the Missouri River Basin
I have marked up this figure a bit in Figure 2. I put in some triangles to represent the mountains. Based on the paper I discussed in the first entry, that naïve start, Item 1 points to the region where there is a late spring and early summer deficit of rain associated with La Nina. Up in the mountains of Montana I have marked Item 2, that La Nina is associated with more snow in the winter.
Figure 2: Missouri River Basin with mountains symbolically marked by little hats along with the locality of precipitation variability that is linked to the La Nina cycles.
So I want to do two things here. First, where did I get that information about La Nina and snow in Montana? The Climate Prediction Center keeps a remarkable amount of information. That’s the good news. The bad news is that it is not always easy to find the information, and when you do, sometimes it needs translation. Here is their page ENSO Temperature and Precipitation Composites. Figure 3 contains my markups of a couple of figures for the composite anomalies and the composite frequency.
Figure 3: From the Climate Prediction Center. These are composite pictures, meaning that a set of La Nina years are averaged together to show what La Nina looks like. The figure plots the anomaly, which is the difference from an average calculated for the years 1981-2010. Hence, the composite is the average difference of a La Nina year from the average of all of the years in 1981-2010. The frequency is the percentage of years in which you see this pattern of average differences. These are for January, February, and March.
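The arithmetic in that caption is worth making concrete. The sketch below uses randomly generated precipitation values and a hypothetical list of La Nina years, so the numbers mean nothing; it only shows how a composite anomaly and its frequency are computed from a 1981-2010 record.

```python
# Toy sketch of composite anomaly and frequency, in the sense the
# Climate Prediction Center caption describes. The precipitation
# record and the list of La Nina years are both made up.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1981, 2011)
precip = rng.normal(loc=100.0, scale=15.0, size=years.size)  # mm, fake

# Climatology: the 1981-2010 average.
climatology = precip.mean()

# Suppose these (hypothetical) years were La Nina years.
la_nina = np.isin(years, [1988, 1995, 1998, 1999, 2000, 2007, 2010])

# Composite anomaly: average difference of La Nina years from climatology.
anomalies = precip[la_nina] - climatology
composite = anomalies.mean()

# Frequency: fraction of La Nina years sharing the composite's sign.
frequency = np.mean(np.sign(anomalies) == np.sign(composite))
print(round(float(composite), 1), f"{frequency:.0%}")
```

A frequency near 80%, as in the Wyoming snow pattern discussed below, would mean roughly four out of five La Nina years show an anomaly of the same sign as the composite.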
If you compare carefully with the maps in Figures 1 or 2, especially in northwestern Wyoming, La Nina suggests larger amounts of snow. The frequency map says that this pattern of difference occurs about 80% of the time. There are also positive snow cover anomalies in northwestern Colorado, but the rivers there flow into the Missouri relatively far downstream. The strong positive snow cover anomaly in the mountains of Idaho is not in the Missouri River Basin.
The second point that I want to emphasize here is that a flood in a large river basin, like the Missouri, is strongly related to the accumulation of water in the basin. Therefore, variables like snow cover and soil moisture are more directly important to evaluating flood risk than, say, instantaneous rain amounts. This has consequences for the type of information that is needed from climate models: more than temperature and precipitation, we need estimates of, in this case, the storage of water in the environment. It also points out that what happens in one region in an earlier season is an important part of the information that is needed; that is, we need to determine connections.
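Why stored water matters more than instantaneous rain can be shown with a toy monthly water balance: snow accumulates as storage while temperatures stay below freezing, then is released in the melt season on top of any rain. Everything here – the numbers, the melt rate, the bucket model itself – is an illustrative simplification, not Missouri Basin data or an actual hydrologic model.

```python
# Toy bucket model: snowpack stores winter precipitation and releases
# it during the melt season. All values are illustrative placeholders.

def runoff_series(precip_mm, temp_c, melt_rate_mm_per_deg=5.0):
    """Return monthly runoff given monthly precipitation and temperature.

    Below freezing, precipitation accumulates as snowpack (storage);
    above freezing, snow melts and adds to runoff along with rain.
    """
    snowpack = 0.0
    runoff = []
    for p, t in zip(precip_mm, temp_c):
        if t <= 0.0:
            snowpack += p            # store precipitation as snow
            runoff.append(0.0)
        else:
            melt = min(snowpack, melt_rate_mm_per_deg * t)
            snowpack -= melt
            runoff.append(p + melt)  # rain plus released storage
    return runoff

# Nov-Jun: a snowy winter followed by a warming spring (made-up values).
precip = [40, 50, 60, 50, 40, 30, 30, 30]
temp   = [-5, -8, -10, -6, -2, 4, 10, 15]
print(runoff_series(precip, temp))
```

The peak melt-season runoff in this toy case exceeds any single month's precipitation, which is the point: flood risk depends on what the basin has stored, not just on what is falling now.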
My goal in this series is to try to write down the process and a template to make it easier for me to think about this problem the “next time.” So what do I have so far – and this will be subject to revision:
Plausibility: Do I have a plausible, observational or experiential, foundation to expect a relationship between a mode of variability (here, La Nina) and an impact (here, Upper Missouri River Flood)?
Geography: What happens to a place is strongly influenced by the geography. What are the characteristics of the geography that influence behavior? In this case, for example, mountains influence the storage of water that ultimately ends up in the Missouri River.
Knowledge: We need to identify the type of knowledge that is needed, and the location of sources of that knowledge. We need to know if there are existing, trusted sources that synthesize existing knowledge. We need to know if we can find pieces of usable knowledge from trusted sources. We need to know if we need to generate knowledge to fill in the gaps to complete the knowledge base.
Connections: What pieces are connected together?
I will complete and refine this in future entries in the series.
Link to NCPP to Missouri River Basin Pilot
Updated: 4:37 PM GMT on February 10, 2012
The views of the author are his/her own and do not necessarily represent the position of The Weather Company or its parent, IBM.