Manhattan Institute for Policy Research.

National Review Online


Garbage In, Gospel Out

September 23, 2009

By Max Schulz

The global-warming crowd has too much faith in computer modeling.

There are two kinds of models that mean anything to most Americans. There are the models who pose for the Sports Illustrated Swimsuit Issue or the cover of Vogue, date movie stars, host talk shows, and have their antics chronicled on Page Six of the New York Post. Then there are computer models, those nerdy systems driven by thousands of points of data input. This latter type of model serves as the fundamental basis for the belief that unchecked anthropogenic global warming imperils the planet.

These two types of models don’t overlap much — but they do have one critical feature in common: Each suffers from severe limitations. Runway models’ limitations keep the vast majority of them from crossing over into areas like acting or singing — professions that require savvy and talent along with beauty. The limitations of the computer models on which the entire global-warming edifice is built are even more severe, and they could have a profound effect on every single American.

In considering these limitations, it’s worth reviewing the history of using computers to predict the future. In the 1950s, an MIT researcher named Jay Forrester helped develop the first large digital computers. These were designed to track and defend against Soviet bombers, and they worked very well. Soon, Forrester began using the newly arriving generations of IBM mainframes for more general modeling applications, such as for industries and cities. “From a computer’s perspective, the problems were not all that different from tracking bombers,” wrote Peter Huber in his 2000 book Hard Green, which provides a wonderful account of computer models and their limitations. It was a matter of moving otherwise-unsolvable equations into the powerful new computers, and the results were decidedly positive.

At this point, there was every reason to think that running other problems through these increasingly powerful machines would yield useful results. That was the thinking that led Forrester to collaborate with the Club of Rome in the early 1970s. They devised a model of planetary resources that considered a variety of interconnected dynamic systems and global scenarios — death rates, birth rates, natural-resource depletion, population density, capital investment, crowding, pollution, etc. They fed the model into a large MIT mainframe and flipped the switch.

Forrester’s partners published the results in the 1972 bestseller Limits to Growth. They predicted a rapidly growing global population combining with rapid resource depletion to spark violent social upheaval. Limits to Growth suggested that disasters and die-offs were imminent, and that the survivors would live in a world of misery and scarcity.

The model turned out to be wrong — spectacularly and embarrassingly wrong. Despite a large increase in world population, people all over the globe are richer and healthier than they were when Limits to Growth made its predictions. Not only did the Club of Rome fail to predict the future with any accuracy, it failed to account accurately for the past; its model showed that quality of life had peaked three decades earlier.

How could this happen?

Turns out the model Forrester had dreamed up could only do so much. Obsessing about population growth, Forrester plugged into his model the mouths that would need to be fed. But he didn’t consider that each mouth was accompanied by a brain, and that brains create wealth and solutions that tend to offset the needs of the mouths. Neither could Forrester successfully model the workings of markets. “If markets could be reliably modeled, as the Soviets thought they could, we wouldn’t need markets at all,” wrote Huber.
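The failure mode Huber describes can be seen in miniature. The following toy simulation is my own illustration, not Forrester's actual World3 model, and its parameters (consumption rate, growth rates) are invented for the sketch: population consumes a fixed resource stock and crashes when it runs out, but adding a single innovation term — the "brains" the model left out — changes the century-long outcome entirely.

```python
# Toy "Limits to Growth"-style simulation (illustrative only; not
# Forrester's World3 model, and all parameters are made up).
# Population consumes resources; when they are exhausted, a die-off
# begins. A small annual innovation rate models brains offsetting mouths.

def simulate(years, innovation_rate):
    population, resources = 1.0, 100.0
    for _ in range(years):
        consumption = 0.5 * population
        # Innovation effectively stretches the resource base each year.
        resources += resources * innovation_rate - consumption
        if resources <= 0:
            resources = 0.0
            population *= 0.7   # die-off once resources are exhausted
        else:
            population *= 1.02  # steady growth while resources last
    return population, resources

print("mouths only:     pop=%.3f res=%.1f" % simulate(100, innovation_rate=0.0))
print("mouths + brains: pop=%.3f res=%.1f" % simulate(100, innovation_rate=0.01))
```

With innovation set to zero, the run collapses around year 80, much as Limits to Growth forecast; with a mere one percent annual innovation rate, population grows sevenfold and resources remain. The prediction hinges entirely on one assumption the modeler chose to include or omit.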

Today, computer models predict everything from the stock market to virus outbreaks. And the sole basis for calls to pass cap-and-trade legislation, to drastically curb greenhouse-gas emissions, and to fundamentally reorient the world’s energy economy is the set of projections from the computer models employed by the Intergovernmental Panel on Climate Change (IPCC).

Unlike the Club of Rome’s calculations, which tried to divine the relatively near future (2000), today’s considerably more complicated climate models purport to tell us how the world will look nearly a century out.

The models, data, and methodology have certainly improved since the Club of Rome took its crack 37 years ago. But they still leave much to be desired. Consider the comments of the eminent physicist and mathematician Freeman Dyson, who said recently, “I have studied their climate models and know what they can do. . . . The models solve the equations of fluid dynamics and do a very good job of describing the fluid motions of the atmosphere and the oceans. They do a very poor job of describing the clouds, the dust, the chemistry and biology of fields, farms, and forests. They do not begin to describe the real world that we live in.”

Unable to accurately describe and account for these factors, modelers “parameterize” them. Dyson says the problems with that approach are obvious: “They are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behavior in a world with different chemistry, for example in a world with increased CO2 in the atmosphere.”
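Dyson’s point is a general one about curve fitting, and it can be sketched in a few lines. The example below is my own illustration, not a climate model: the “true” saturating response and the calibration points are invented. A model tuned to match the observed range exactly — the ultimate fitted fudge factor — can still go badly wrong the moment it is asked to extrapolate.

```python
# A minimal sketch (invented example, not an actual climate model) of
# fitting "fudge factors" to observed data. An exact polynomial fit
# matches every calibration point, yet diverges wildly out of sample.

def true_response(x):
    # The "real world": a saturating response the modeler cannot fully observe.
    return x / (1.0 + 0.1 * x)

def lagrange_fit(xs, ys):
    # Exact polynomial interpolation through the calibration points.
    def model(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return model

xs = [0.0, 1.0, 2.0, 3.0, 4.0]          # the observed "climate" range
model = lagrange_fit(xs, [true_response(x) for x in xs])

print("in-sample     x=2:  model=%.3f truth=%.3f" % (model(2.0), true_response(2.0)))
print("out-of-sample x=20: model=%.3f truth=%.3f" % (model(20.0), true_response(20.0)))
```

Inside the fitted range the model reproduces the data perfectly; at five times that range it predicts a large negative value where the true response is still rising. Agreement with the observed record, in other words, is no guarantee of skill in a changed world.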

Some fear that it may not be only the models’ variables that are being fudged. Pat Michaels details elsewhere on NRO today the disconcerting facts about the U.K.’s Climate Research Unit, which has repeatedly refused to release its global surface-temperature data so that its climate modeling can be verified by outside experts — and now claims that the raw data have been lost.

What does that say about the trust the public is supposed to place in those who tell us that the world is heading for disaster? Ultimately, what you put into your model determines what you get out of it. If you assume that even the least plausible catastrophe is a genuine possibility, you can make your model predict it.

None of these flaws necessarily mean that the IPCC’s predictions for temperature and sea-level rise by 2100 will not materialize. On the other hand, they should inspire some skepticism.

If there is one factor that should make us all sit up and take notice about the deficiencies inherent in modeling, it was supplied in a recent article by Vinod Khosla — the Sun Microsystems co-founder and green venture capitalist who in 2006 helped bankroll California’s failed Proposition 87. Had it passed, Prop 87 would have socked Golden State oil companies with extra taxes and given that money to renewable-energy ventures. Among the cap-and-trade crowd, Khosla is a prophet on the level of Al Gore, and he’s a prized speaker on the clean-energy lecture circuit.

In a recent piece for Grist on the future of electric cars, Khosla wrote:

So what kinds of technology are we investing in? I think the traditional approach to lithium ion battery making, such as A123, is going to be competing in an overheated, nearly-commoditized market and will probably not (I guess never say never!) get down the cost curve in the next 5 years. (Longer-term forecasts are futile because so-called experts can make anything they want up — we all know long term we will be on fusion power.)

That second parenthetical could hardly be more telling, coming as it does from an established leader of the green brigade, a billionaire activist who counsels President Obama’s White House on climate-change policy. In essence, what Khosla is saying is that when you are investing your own money, you shouldn’t trust a forecast looking more than five years out because “experts can make anything they want up.”

Fair point, and good investment advice. But shouldn’t it apply to climate policy as well? The IPCC’s “so-called experts” aren’t offering five-year forecasts, but 100-year ones. Do the rules of forecasting somehow change when the subject is justifying higher energy taxes, or justifying the punitive taxation of oil companies to give their profits to uneconomic green-energy projects? Probably not, which suggests at the end of the day that the green regime is more interested in political payoff than in applying sound science and analysis to questions about climate change. One would hope that even a supermodel could figure that out.
