On a recent science podcast, climate models were being discussed. One conclusion was that droughts like the 2011 one in Texas are 20 times more likely to occur today than they were 50 years ago, and that this is due to climate change. The floods that recently devastated Bangkok, however, were felt not to be due to climate change, but rather to a cyclical phenomenon worsened by the way Bangkok had been built up since the last such flood.
Then came a call from a listener, and as soon as I heard his tone of voice, I said to myself, "Uh oh." Some listeners call in with questions; some give speeches. This one did both, in an angry, challenging tone. He wanted to know, if the models were so good, what the temperature would be in a certain Midwestern city next July 4, and on the three July 4ths after that.
Climate modeling is not the same as weather prediction. We cannot predict the weather accurately more than a few days into the future. Does that make the GFS, ECMWF, NAM, AVN, and other models wrong? Yes... and no. As a statistician, I learned George Box's maxim: "All models are wrong; some are useful." Weather forecasts are based on atmospheric models, which differ in their initial conditions and in the relative weights given to the known variables. I remember 50 years ago, when television weather forecasting was done by non-meteorologists and the forecasts were not very good. We have gotten much better: short-term forecasts, in the 24-hour range, are exceptionally good. I use the nine-day GFS forecast as a rough idea of what to expect in the coming days, knowing that matters will change.
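Why does weather forecast skill collapse after a few days while the model itself is still "good"? The standard illustration is sensitivity to initial conditions. Here is a minimal sketch using the classic Lorenz (1963) toy system with its textbook parameter values; this is a pedagogical stand-in, not anything from an operational forecast model. Two runs that start a millionth of a unit apart stay together for a while and then diverge completely:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

# Two runs whose initial conditions differ by one part in a million.
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])

for step in range(1, 3001):
    a = lorenz_step(a)
    b = lorenz_step(b)
    if step % 500 == 0:
        print(f"t = {step * 0.01:5.1f}  separation = {np.linalg.norm(a - b):.6f}")
```

The separation grows roughly exponentially until it saturates at the size of the attractor, which is the toy-model version of why a 24-hour forecast can be excellent while a three-week forecast is worthless.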
Climate forecasting is another science altogether, taking into account different, long-range variables. From 40 years ago, when some climate scientists, unaware of key variables, thought there might be cooling, to now, when virtually all conclude with high confidence (95% is generally considered high confidence) that the planet is warming, there has been a great deal of research, along with a much greater ability to recover information about the past. The fact that a confidence interval is used means statistical techniques have been brought to bear; the conclusion may be wrong, but it is highly unlikely to be.
Let me explain a confidence interval: it is NOT a probability, or it would be called one. It is a range, based on the evidence, within which the parameter (the true value) is expected to lie. The parameter is unknown and unknowable, so a given interval either contains the parameter or it does not; that statement makes no sense as a probability, so we call it confidence. If we could repeat the experiment 100 times, we would expect about 95 of the resulting intervals to contain the parameter, though we would not know which 95. The fact that models are not perfect leads far too many people to the other extreme: that the models are simply wrong. One may, of course, choose that view, but it behooves those who disagree to come up with their own margin of error, p-value, and confidence interval, so that the data can be discussed properly. To say "models are wrong" is inappropriate in a scientific discussion. Of course they are. Any statistical prediction is wrong; good statistics state the likelihood that an error of a defined size will occur.
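That "repeat the experiment 100 times" idea is easy to check by simulation. The sketch below draws many samples from a population whose true mean we chose ourselves (10.0, an arbitrary illustrative value), builds a 95% interval from each, and counts coverage; the sample size and number of trials are likewise just demonstration choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_mean, sd, n, trials = 10.0, 2.0, 30, 1000
t_crit = stats.t.ppf(0.975, df=n - 1)  # two-sided 95% critical value

covered = 0
for _ in range(trials):
    sample = rng.normal(true_mean, sd, n)
    half_width = t_crit * sample.std(ddof=1) / np.sqrt(n)
    if abs(sample.mean() - true_mean) <= half_width:
        covered += 1

print(f"{covered} of {trials} intervals contained the true mean "
      f"({covered / trials:.1%}).")
# Any single interval either contains the true mean or it does not --
# we just never know which ones do.
```

Run it and roughly 95% of the intervals cover the truth, while each individual interval simply does or does not. That is exactly the distinction between confidence and probability.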
Probability is about forecasting the future, and there is almost nothing complex in our world that we can forecast perfectly. There must be some error. Every responsible scientist quantifies that error in some manner; to do otherwise is to claim one can predict the future with absolute certainty. We do not do that for the temperature in Iowa on July 4th, for where the next hurricane will form, or even for its ten-day path. Nor do we say, with absolute certainty, that the Earth is warming. But the interval the models give us for the warming trend excludes zero and negative values at high confidence, and that means the conclusion, based on the current data, is that the Earth is warming.
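"The interval excludes zero" is a concrete, checkable statement. As a sketch, here is what that looks like for a linear trend fit to synthetic temperature anomalies; the trend (0.017 degrees C per year) and noise level are numbers I made up for illustration, not real climate data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1970, 2012)
# Synthetic anomalies: an assumed 0.017 C/yr trend plus noise -- illustrative only.
anoms = 0.017 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

fit = stats.linregress(years, anoms)
t_crit = stats.t.ppf(0.975, df=years.size - 2)
lo, hi = fit.slope - t_crit * fit.stderr, fit.slope + t_crit * fit.stderr
print(f"estimated trend: {fit.slope:.4f} C/yr, 95% CI: ({lo:.4f}, {hi:.4f})")
# An interval lying entirely above zero means "no warming" is inconsistent
# with this data at the 95% confidence level.
```

The estimate is never exactly the truth, but the interval tells us which alternatives, including "no trend at all," the data can and cannot support.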
While correlation does not equal causation, there may be factors that support a causal link. We know that one greenhouse gas, carbon dioxide, has increased; that another, water vapor, increases as the atmosphere warms; and that a third and fourth, nitrous oxide and methane, are increasing as well. So we have reason to believe the conclusions are not in error and that there is a causal factor.
Anybody who follows hurricane forecasts is familiar with the cone of uncertainty and with the fact that the cone changes with time. We saw this with Irene last year, and we saw the gradual westward shift of Isaac this year: the initial forecasts showed Isaac striking Tampa, but with time the models showed a westerly drift in the expected path, ultimately toward New Orleans. The models were constantly updated, and the gradual change was noted. What did not happen: the hurricane did not dissipate, curve out into the western Atlantic, or go south into the Caribbean. The models were not perfect, but they were very good, and three days before Isaac made landfall it was predicted to hit very close to where it eventually did.
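The cone itself is just quantified uncertainty growing with lead time. Here is a toy Monte Carlo sketch of the idea: an ensemble of tracks that each accumulate small random cross-track errors, with the cone taken as the middle 95% of the ensemble. The ensemble size and error magnitudes are invented for illustration; the real NHC cone is drawn from historical forecast-error statistics, not from a little simulation like this.

```python
import numpy as np

rng = np.random.default_rng(7)
hours = np.arange(0, 121, 24)        # lead times: 0 h out to 5 days
n_members = 200                      # toy ensemble size (made up)

# Each member accumulates a small random cross-track error every 24 h.
steps = rng.normal(0.0, 0.6, (n_members, hours.size))
steps[:, 0] = 0.0                    # every member starts at the observed position
errors = np.cumsum(steps, axis=1)    # errors grow with lead time

for t, errs in zip(hours, errors.T):
    lo, hi = np.percentile(errs, [2.5, 97.5])
    print(f"+{t:3d} h: 95% cross-track spread ~ {hi - lo:4.1f} deg of longitude")
```

The spread is zero where the storm is observed and widens with every forecast step, which is exactly the cone shape we all see on the news, and why the forecast sharpens as landfall approaches.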
Nate Silver says that the cone of uncertainty for hurricanes was 700 miles in diameter 25 years ago and is 200 miles in diameter today. He studies models to understand why some work and others do not, and the poster child for good models is weather and climate science.
Models have become, to some, the new "bad boy" of climate science. Yet every responsible scientist develops models if it is at all possible. Indeed, simulation only became widely accessible about 15 years ago; I remember running simulations in graduate school in 1998, and we can now do this far better than we once could. The debate should be over which models are used, their initial conditions, their variables, and their conclusions, not over whether we should use them at all.
Tags: Philosophy