

Something New in the Past Decade? Organizing U.S. Climate Modeling (1)

By: Dr. Ricky Rood, 5:05 AM GMT on February 05, 2011


Next week in Washington a panel is convening to write about “A National Strategy for Advancing Climate Modeling.” (link) I am a member of this panel, and I have been asked to review an older report on which I was a lead author. The report was published in 2000, and it is still available online at the website of the U.S. Global Change Research Program (USGCRP).

When my co-authors and I wrote this report, we presented the results to several panels of distinguished people. Over the years, people have continued to send me comments about the report. I contend that this report was different from a lot of other reports. I think it is safe to say that the authors were chosen for their willingness to look beyond their home agencies. We also included as an author a sociologist who is an expert in organizations and in how to make organizations function.

The report was motivated by what I might call discontent among some of those responsible for oversight of Federal climate expenditures. In the late 1990s there was a (highly politicized) national assessment of climate change, and much of the information for its model predictions came from Canadian and British models. This occurred despite the fact that not only were there several U.S. modeling efforts, but the U.S. spent (far) more money on modeling than these other countries. A natural question followed: what was wrong with the U.S. efforts?

In the report, we concluded some things that some of our colleagues considered radical. We focused much of our discussion on issues of management of scientific programs and organizations, and concluded that the culture and practice of science in the U.S. was, fundamentally, fragmenting. We even went as far as to state that “Without addressing these management issues, providing additional funds to the existing programs will not be effective in the development of the Climate Service.” (I am not sure that statement helped my career, and reminding people of it might take me right through retirement.)

For my presentation next week, I need to return to the report and perhaps think about what is different in the past 10 years.

In the spirit of being conversational: there was press coverage of the report at the time, and most of it appeared in publications focused on computing and supercomputing. We authors quickly came to regret this emphasis on computing and the document being cast as a “computing report.” True, we did say that U.S. policy on supercomputing, and our ability or inability to import supercomputers, negatively impacted the competitiveness of U.S. climate and weather modeling. But we did not feel that our primary message was about computing.

Our primary message was meant to be about the fragmentation and distribution of resources that could be brought together to address integrated problems such as climate assessments. The U.S. scientific culture places a high value on innovative, curiosity-driven research. This is often best achieved through the efforts of individual scientists and small groups. This individuality is exciting, and it is how scientists get promoted. It develops a culture of expertise. Our point in the document was that there needed to be another path of scientific practice, one that valued the integration of all of the pieces and the production of validated, science-based products. We called this “product-driven” research. We could just as easily have called it applied research.

So the question comes forward: how do we value product-driven research? It’s hard. In the U.S. we have the idea that if we generate products from our research, then that is in some way damaging to innovation and to the generation of the “best science.” The “science” gets compromised. The word “operational” is invoked, and there is a prejudice that operational systems, ones that produce products on a schedule, must be less than they can and should be scientifically. Hence, any time there is a push toward product-driven research, both individual and institutional resistance rises to defeat the push. This makes sense, because it is asking people to change, and it is asking them to do something for which they can cite plenty of evidence that it will lead to less successful careers.

We have institutions where people are expected to work on community models but, at least historically, their performance plans make no mention of community activities. I have worked on documents for U.S. agencies as recently as 2010 where I tried to write that we were building climate models that could be used in energy planning, in policy decisions, and by society to anticipate and plan for climate change. This, however, was deemed contrary to the true agency mission of fundamental research for the benefit of the nation. People are hired to do multi-disciplinary research, but they are promoted or given tenure for their individual accomplishments in specific disciplines. Individuals are recognized for novel breakthroughs, programs are recognized for funding novel breakthroughs, and agencies are recognized for having programs that fund novel breakthroughs.

So in the final presentations we made of the 2000 report, we drew pictures like the one below. We put in arrows and money signs, suggested lines of management, and argued that there needed to be internalized incentive structures. (For those with energy, the article continues below the figure!)



Figure 1. An organization designed to deliver product-driven research (maybe what we should do).

What I have stated above is that the fragmented way we approach the practice of science is valued because it encourages innovation and fundamental discovery. On the other hand, it stands in the way of the cross-disciplinary unifying branch of science. As climate scientists we have a need to perform assessments, and assessments are, by definition, cross-disciplinary unifying science. Therefore, aligning our assets and efforts to perform assessments comes into basic conflict not only with our fragmented scientists and science organizations, but with the underlying culture of our practice of science.

The fragmentation extends beyond the practice of research. There are separate organizations responsible for high-performance computing, and they have their needs to demonstrate breakthroughs. Such a goal might be the greatest number of calculations in a second. Goals like that are achieved with special problems and computer codes, not with messy real problems like weather prediction and climate modeling. Computers are often provided for a set of grand challenge problems. Another point in the report was that the climate models and computational platforms needed to co-evolve; they needed to be managed together.

And if computers and models need to co-evolve, then there needs to be balanced development of software, data systems, and analysis capabilities. In fact, in the 2000 report, we identified the greatest deficiency in federal investment as being in software infrastructure. Since 2000, there has been significant development of software, data systems, and analysis capabilities.

Perhaps, then, there has been some impact from the report, with more balance in the funding of all of the pieces that are needed in a robust climate program. The expenditures, however, are still fragmented, and the developments have a tendency to be independent. Even given the recognition that these expenditures are essential for a robust climate program, there is always a fight to maintain them, because they are viewed as taking resources away from “the science,” from research, from discovery. The program managers, software engineers, and data system professionals have to compete with the high-profile breakthroughs of research and high-performance computing.

I paint here a fundamental characteristic of our practice of science. It is deeply ingrained, and in many ways it is highly successful. Therefore, approaches to provide assessments, to address cross-disciplinary unifying science, and to develop climate services need to build from this practice and from these successes. This is a challenge to agencies that like to think in terms of reorganizations, institutions, and programmatic collocation of needed assets. Reorganization does not address the basic fact that the underlying structure is fundamentally fragmenting, that there is perceived value in that fragmentation, and that there is investment in that fragmentation.

In the 2000 report we described the type of organization that we thought was needed to address the issues of climate modeling, high-performance computing, and climate services. Today, I would nuance or refine that recommendation, based on the emergence of community-based approaches to complex problem solving. A new type of organization is needed, one with stable, balanced, coordinated, product-focused investments in all of the elements necessary for science-based climate products. Essential in this organization is giving value to those who perform cross-disciplinary unifying scientific research to address complex problems. This is not reorganization or restructuring; this is not merging agencies and programs; this is the focused, mindful development of a capability to achieve a specific, needed goal.

r

Categories: Climate Change, Climate Models

The views of the author are his/her own and do not necessarily represent the position of The Weather Company or its parent, IBM.