Sustainable Energy
What the hell is a climate model—and why does it matter?
Better technology, techniques, and data sharing have allowed scientists to try novel experiments—or simply run many more of them.
Just a few years ago, the conventional wisdom held that you couldn’t attribute any single extreme weather event to climate change. But now scientists increasingly can and do state the odds that human actions caused or exacerbated specific droughts and hurricanes.
One big reason for the change is that the science of climate modeling is becoming increasingly powerful as improvements in technology, techniques, and data sharing allow researchers to set up novel experiments or simply run many more of them.
(Read the accompanying story: “How nuclear weapons research revealed new climate threats.”)
Climate models are sophisticated computer simulations that approximate how the planet responds to various forces, like surges in carbon dioxide. They break down the oceans, surface, and atmosphere into 3-D boxes and calculate how shifting conditions track across time and space.
Basic gains in computing power have driven many of the improvements. Those boxes were about 500 kilometers on a side in 1990. For some of today’s highest-resolution models, including the Department of Energy’s E3SM, Japan’s MRI, and China’s FGOALS, they are under 25 kilometers across. The resolution gets higher still for specific applications, such as modeling hurricanes.
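To make the “boxes” idea concrete, here is a minimal sketch in Python, assuming only NumPy; the grid dimensions, “mixing” step, and forcing term are invented for illustration and bear no relation to the equations a real climate model solves.

```python
# Toy illustration of the grid idea (not a real climate model): the planet is
# carved into 3-D boxes, each holding a state such as temperature, and the
# simulation repeatedly updates every box from its neighbors plus a forcing.
# Every number below is a made-up placeholder, not a physical constant.
import numpy as np

n_lat, n_lon, n_lev = 36, 72, 10              # a very coarse 3-D grid
temp = np.full((n_lat, n_lon, n_lev), 288.0)  # start near 15 °C everywhere
temp[: n_lat // 6] -= 30.0                    # crude "polar" band, just for variety
FORCING = 0.01                                # invented per-step warming nudge

def step(field, mixing=0.1):
    """Pull each box toward the average of its six neighbors (toy mixing)."""
    neighbors = (
        np.roll(field, 1, 0) + np.roll(field, -1, 0) +
        np.roll(field, 1, 1) + np.roll(field, -1, 1) +
        np.roll(field, 1, 2) + np.roll(field, -1, 2)
    ) / 6.0
    return field + mixing * (neighbors - field) + FORCING

for _ in range(100):                          # march the state forward in time
    temp = step(temp)

print(f"global mean temperature after 100 steps: {temp.mean():.2f} K")
```

Real models do the same kind of box-by-box bookkeeping, but with the fluid-dynamics and radiation equations of the atmosphere and ocean, running over millions of boxes on supercomputers.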
In addition, the earliest climate models in the 1960s were focused on the atmosphere, but now they take into account land surfaces, sea ice, aerosols, the carbon cycle, vegetation, and atmospheric chemistry. More recently, models have started to incorporate the ways that human behavior shifts in response to climate change, including migration and deforestation.
Other modeling advances have occurred as a result of a three-decade effort under the World Climate Research Programme, known as the Coupled Model Intercomparison Project (CMIP). Under this program, research institutions are asked to conduct a common set of experiments with a common set of inputs, and publicly share the results.
The petabytes of resulting data have enabled researchers around the world to carry out studies that dive into specific areas of interest without having to secure their own time on supercomputers.
The abundance of data has also allowed scientists to compare the results of various models against each other, and against climate changes observed in the real world to date. That’s provided crucial insight into which models work best, and occasionally it has even revealed problems with our observations. It’s also offered feedback that’s allowed institutions to test new hypotheses, further refine models, and improve their understanding of natural processes, says Noah Diffenbaugh, a professor of earth system science at Stanford.
Notably, MIT hurricane modeler Kerry Emanuel used public data from seven models to drive a focused hurricane simulation tens of thousands of times, in an effort to calculate the odds that a storm on the scale of Hurricane Harvey will strike Texas again. By examining two 20-year periods across a range of greenhouse-gas emissions scenarios, he found that what was a 1-in-100-year event at the end of the 20th century will be a 1-in-5.5-year event by the end of this one.
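For a rough sense of what those return periods mean year to year (a back-of-the-envelope sketch, not the statistics behind Emanuel’s study), a 1-in-100-year event has about a 1 percent chance of occurring in any given year, while a 1-in-5.5-year event has about an 18 percent chance:

```python
# Back-of-the-envelope return-period arithmetic (illustrative only, not the
# method used in Emanuel's study): a 1-in-N-year event is taken to have an
# annual occurrence probability of roughly 1/N, with years treated as independent.
def annual_probability(return_period_years: float) -> float:
    return 1.0 / return_period_years

def chance_within(span_years: int, return_period_years: float) -> float:
    """Probability of at least one occurrence over a span of years."""
    p = annual_probability(return_period_years)
    return 1.0 - (1.0 - p) ** span_years

print(f"annual odds at 1-in-100:      {annual_probability(100):.1%}")   # ~1.0%
print(f"annual odds at 1-in-5.5:      {annual_probability(5.5):.1%}")   # ~18.2%
print(f"odds over 20 years, 1-in-100: {chance_within(20, 100):.0%}")    # ~18%
print(f"odds over 20 years, 1-in-5.5: {chance_within(20, 5.5):.0%}")    # ~98%
```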
But for all these vast improvements, even a box 25 kilometers on a side is still far too large to capture small-scale processes like the behavior of individual clouds. And scientists are well aware that the models don’t perfectly represent complex natural processes.
That’s why they generally speak in terms of ranges of outcomes under different climate-change scenarios, and why events in the real world can still occasionally fall outside those bounds.
“There’s still a lot of uncertainty in the projections, and we’re all bothered by that,” Emanuel says.