“Would you like to see the Senator’s paper on the projection for Social Security for 2035?” a student asked a statistics professor friend of mine, who taught in DC.

“Is there a confidence interval mentioned?” replied my friend.

“No.”

“Then I don’t want to see it.”

Nearly every prediction we make about a parameter or phenomenon has some uncertainty.  We make the prediction based upon evidence we have gathered, and that evidence may not be accurate, especially when predicting the future status of Social Security, which is unknown.  SSI may not even exist in 2035: if a significant number of powerful people have their way, and enough people decide not to vote because “it never matters,” SSI will be dumped to save money and to make wealthier people wealthier, because these powerful people want no debt and believe recipients are freeloaders.  I don’t agree, but I am one person.

Even if SSI is unchanged, the funding model may change; the economy will, world conditions will, and the number of people receiving benefits will.  Laws may modify it.  All of these conditions are unknown.  They are estimated, using a variety of techniques, but these estimates have error: a word, like “theory,” that has a different meaning in science than it does in regular speech.

A scientific theory is a system of ideas intended to explain something.  “Your theory (thought, guess, idea) is wrong” is the general vernacular.  Neither usage is wrong, except when the vernacular is used to denigrate a scientific theory.  We have a theory of gravitation, but I doubt anybody would jump off a building to test it.

Errors in estimation occur, and they don’t mean that scientists are careless, obtain false data, or don’t understand their data.  Those are uses of “error” in the vernacular.  Errors in science occur because we use samples to estimate quantities we cannot count in full, like the percentage of people who plan to vote for a certain candidate, or any quantity that we may have modeled.  Let’s look at the former.  We don’t talk to every voter.  We choose people at random, and there are a variety of ways to do that.  A perfect sample is difficult to obtain; online samples, the ubiquitous surveys that B-school grads have inflicted on the country, are examples of bad sampling techniques.

If we take a second sample, we get a different result; the same holds for a third and for every sample we might take.  Typically, we sample only once and use the result as our point estimate, the best value we know.  Are we completely certain?  No, we aren’t, unless we take a census and measure every individual, which is not feasible.  We can quantify the error, however, depending upon the sampling approach and the sample size.  The error decreases as the sample size increases, roughly in inverse proportion to the square root of the sample size.  That’s powerful stuff, done right.  A random sample of 100 people in the nation, on a yes-no question, has an error of about plus or minus 10%.
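As a rough illustration of that square-root relationship (a sketch of the conservative rule of thumb, not any pollster’s exact formula), the worst-case margin of error for a yes-no question is about 1/√n:

```python
import math

def margin_of_error(n):
    """Conservative 95% margin of error for a yes/no question:
    roughly 1 / sqrt(n), the worst case at 50/50 support."""
    return 1 / math.sqrt(n)

# Quadrupling the sample size halves the error.
for n in (100, 400, 1600):
    print(n, round(margin_of_error(n), 3))
# → 100 0.1
# → 400 0.05
# → 1600 0.025
```

Note that 100 people already get you to ±10%, but shrinking the error to ±1% requires 10,000 people, which is why polls rarely go that far.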

From a sample, we may construct a confidence interval, using the sample result and size.  We believe this interval contains the exact value of what we are trying to measure.  Does it?  We don’t know, because the true value is unknown.  Can any value be possible?  It depends upon what one considers possible.  If one considers anything possible, like winning the lottery every week for a year, then yes.  If one caps the likelihood of the sample’s being wrong at 5%, as is typical in science, then no, everything is not possible.

The concept of a confidence interval allows us to state a range where we think the true value lies.  The true value is either in the interval or it is not, which is not a useful probability statement (0 or 100%).  Therefore, we call it confidence, and typically, high confidence is 95%.  It isn’t perfect, but it is considered significant evidence.

This explanation is why I was upset when the Republican candidate for the Congressional District where I live, a scientist, published the names of nearly 32,000 scientists who did not believe in global climate change.  First, the scientists who matter are climate scientists (133), not people like him or me.  Second, science is not done by voting, but by gathering evidence.  Science is not shouting at somebody, threatening them, or vilifying them in print.  One cannot discount the near unanimity of articles stating that man-made climate change is occurring and then cite a count of disbelieving scientists as a rebuttal.  The database is not updated for deaths or changes of opinion, which, frankly, speaks to sloppiness, too.  If the database is important, it should be correct.

The publication that this candidate used to publish his list was one for which I was once a volunteer reviewer in statistics.  It is a right-wing publication, purportedly scientific, but its articles are replete with non-scientific terms and name-calling.  That is fact: I have read it, and I was asked to review the statistics for its articles.  If one has read this blog, one knows that I am a liberal, yet I volunteered to help a right-wing journal, for free.  I would be interested in examples where right-wing people volunteer to help a left-wing journal.  Truth matters.

On the basis of statistical evidence, I recommended against publishing an article claiming that low-dose radiation was healthy (skewness was used incorrectly), which the Congressional candidate believed in; one claiming that vaccines caused autism (the regression analysis failed to check its assumptions); and one claiming that the FDA’s not approving drugs caused 10,000 deaths a year in the US (correlation was used to conclude causation).  I stopped receiving articles for review.  I didn’t know why, suspecting that the journal didn’t like my opinions.  I cannot prove that, and I have no confidence interval.  However, I am 100% certain I stopped receiving articles to review; I later resigned.

Putting “scientist” after a name is easy; look at “creation science”.  Good science, however, is difficult.  I apply science to my life, but that is not being a scientist.  A scientist, and I will grant the candidate was once one, researches or studies something and draws conclusions, even if the conclusions are not what he hoped or expected.

Rule 2 of my approach to climate change is to look at the confidence intervals of both sides:  one side states with high confidence that their interval for global temperature rise does not contain zero or a negative number.  The other side does not give any such interval.  This is not scientific, and I am being polite.  If one is completely certain that global climate change can’t be occurring, given the complexity of the atmosphere and oceans, the basis for such certainty would be sought after by every climate scientist in the world.

What disturbs me is that the leadership of Congress admits they are not scientists, yet they quote both sides as though they were equal, which they are not.  They use such reasoning as a basis not to act, because acting might cost jobs, an unproven assumption.  In other words, non-scientists are deciding scientific issues in this country, and I am highly confident they threaten our future.

