“You have been selected at random….” I do not enjoy hearing those six words. A survey. Two more words that push my buttons here are “team” and “professional.”
Customer satisfaction has been taught at B-schools for a long time, although businesses have done a lot to hurt it: consider the average experience of calling a business, or just-in-time inventory, which isn’t (“that will be here in 3 business days”). Surveys are now frequent. I don’t like the questions or the choices; I may buy the product, but I don’t buy the inferences they might draw from a survey.
Recently, I got three. Two were from Comcast, following as many calls during an e-mail outage. If I agree to complete a survey, I get through to a human being faster; try it some time. I answered the second survey but not the first, and by not responding to the first I hurt the assumption of randomness required for a decent survey.
Comcast told me the survey would take 2 minutes. There were questions about my satisfaction, about having my question answered, about professionalism (undefined, and desperately in need of definition today), about offering the Website as a source of information (incredibly dumb during an Internet outage), and others. I hung up at 2 minutes: they said two minutes, and I gave them two. The other survey was from Peace Health, which I almost tossed but decided to fill out. There were about 35 questions, far too many, so a 3 or a 4 on a Likert scale didn’t matter a whit to me. I don’t like averaging Likert scales, either: two “5”s and two “1”s average to “average,” hiding huge differences in customer satisfaction.
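A minimal sketch in Python, with invented ratings, of how identical Likert averages can hide opposite experiences:

```python
# Invented ratings: same mean, very different customers.
from statistics import mean, stdev

polarized = [5, 5, 1, 1]   # two delighted respondents, two furious ones
lukewarm = [3, 3, 3, 3]    # four indifferent respondents

print(mean(polarized), mean(lukewarm))    # both average 3: "average"
print(stdev(polarized), stdev(lukewarm))  # ~2.31 vs 0.0: the real story
```

The average alone cannot distinguish the two groups; only the spread can.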
Twenty years ago, as medical director of a hospital, I learned we spent $100,000 annually on quarterly surveys that arrived on glossy paper in nice colors, like a dressed-up pig: pretty, but still a pig. Only I read them. I know that, because I went to the hospital’s Executive Meetings and asked a good question: “How many here have read this?” And a second: “Has anything changed as a result?” The answers: none, and nothing, respectively. The survey asked patients whether their food was hot. If a patient had 10 meals and 7 were hot, what should they answer? It asked whether the physician or nurse was professional, whatever that means, especially if the patient had seen several of each. The return rate was about 5%, and even before I got my stats Master’s, I knew that return rate made the results meaningless.
I proposed a different approach: hire one person (for far less than $100K, benefits included), ditch the survey company, and call 100 discharged patients every month, picked at random, pursuing every one until we had an answer; any non-reply would be scored as the worst possible response. That makes the estimate worthwhile and conservative: it can only understate how well one is doing. We asked three questions:
- Did you like the care? Yes/No.
- Would you recommend us to a friend? Yes/No.
- What suggestions do you have?
We didn’t learn whether the food was hot, but the results could now be inferred to all discharged patients, and we received good suggestions, too. People will toss most 6-page surveys; three questions from a human being might actually get answered.
I tried this at a hospital in Las Cruces, where I was told that timing mattered: surveying on the day of discharge is not the same as surveying 6 weeks later. I countered that if people didn’t respond to a random sample at all, or responded only to a call 6 weeks later, the results were worthless either way. I lost.
I tried it with the medical society, where we had success. We randomly surveyed primary care physicians about colon cancer screening with two dichotomous questions, only two. We used a 90% confidence level and a 10% margin of error. This wasn’t Bush v. Gore; this had to do with recommending screening for colon cancer. The generous margin of error and the modest confidence level cut the needed sample size to about 70, manageable, and the finite population correction factor helped further: when the sample is a large enough fraction of the population, the sampling error shrinks, so fewer subjects are needed.
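The arithmetic behind that sample size is standard. A sketch, assuming the usual formula for a proportion; the population size N below is hypothetical, not the society’s actual roster:

```python
import math

z = 1.645   # z-score for a 90% confidence level
p = 0.5     # worst-case proportion: maximizes the required sample size
E = 0.10    # margin of error

# Sample size ignoring population size: about 68
n0 = (z**2 * p * (1 - p)) / E**2

# Finite population correction for a hypothetical society of 250 physicians
N = 250
n = n0 / (1 + (n0 - 1) / N)

print(math.ceil(n0), math.ceil(n))  # 68 uncorrected, 54 corrected
```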
Confidence intervals are given in percent, and people say a 90% confidence interval means one is 90% confident the true value is contained in the interval. Strictly, that is wrong. The true value (the parameter) is unknown and unknowable; a particular interval either does or does not contain it, so probability doesn’t apply. The correct reading: 100 similar samples generate 100 confidence intervals, and about 90 of them contain the parameter. Which 90? We don’t know.
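That claim is easy to simulate. A minimal sketch with an invented true proportion: draw 100 samples, build a 90% interval from each, and count how many cover the truth; the count lands near 90.

```python
import math
import random

random.seed(1)
true_p, n, z = 0.60, 100, 1.645  # invented parameter, sample size, 90% z-score

hits = 0
for _ in range(100):
    # Draw n yes/no answers and form a Wald interval around the sample proportion
    phat = sum(random.random() < true_p for _ in range(n)) / n
    half = z * math.sqrt(phat * (1 - phat) / n)
    if phat - half <= true_p <= phat + half:
        hits += 1

print(hits, "of 100 intervals contain the parameter")
```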
We sent the questionnaires by mail and called the offices or physicians who didn’t reply. It worked: we got all but one response. We made inferences to all primary care physicians in the Medical Society with high confidence and reasonable error. The cost? Small.
A decade later, I was asked to help with a survey about insurance companies. Unfortunately, too many questions were asked, because “all were important.” They weren’t. The response rate was poor, and the physicians who were supposed to call their colleagues didn’t. I was asked to call; I replied that as the statistician, I was not carrying the flag for what I considered a suboptimal survey, one that should have taken a quarter of a year to complete but instead took a quarter of a decade. Really. When I performed two-sample proportion tests, a physician asked me whether it was the right test. I resisted asking him whether he performed the right tests on his patients.
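The test itself is routine. A minimal sketch of the pooled two-sample proportion z-test, with counts invented for illustration:

```python
import math

def two_prop_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """Z statistic for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 45 of 80 in one group vs 30 of 75 in the other
print(round(two_prop_z(45, 80, 30, 75), 2))  # about 2.02
```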
If you want a good survey, randomize, ask 1 or 2 questions, and use a 90% confidence level with a generous margin of error. Whether the population is a thousand, a hundred thousand, or a million, sample 100 at random, obtain all responses, and you will have 90% confidence that the true result for the whole population is within 8-9% of your point estimate, your sample result. I can prove it mathematically. Do you need 80% +/- 2%? Or can you live with 60% +/- 9%? I submit the latter is useful. Want more information? Ask two more questions and survey another 100 at random; in a population of a million, the chance that any given respondent is called twice is about 1 in 10,000.
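A sketch of that claim, assuming the worst case p = 0.5: with 100 responses, the 90% margin of error sits near 8% and barely moves as the population grows.

```python
import math

z, n = 1.645, 100  # 90% confidence level, sample size

for N in (1_000, 100_000, 1_000_000):
    fpc = math.sqrt((N - n) / (N - 1))   # finite population correction
    moe = z * math.sqrt(0.25 / n) * fpc  # worst-case margin of error
    print(f"N={N:>9,}: margin of error = {moe:.1%}")
```

For a thousand people the correction trims it to about 7.8%; for a hundred thousand or a million it is 8.2%. The population size hardly matters, which is the whole magic of sampling.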
Sampling is an incredibly powerful technique, but it has to be used carefully. Read a newspaper article sometime and note how “percentage of respondents” quietly becomes “percentage of people.” That is incorrect; it assumes the non-responders are just like the responders, and they seldom are.
Please act on the results. If a survey sits in an office unread, it wastes time and money. Asking for suggestions is how you generate good ideas. If you do call everybody afterwards, don’t ask how professional your people are. Ask only how you can do better. Trust me, you will hear a lot; people like answering that question. Then it is your turn.
Act on the suggestions.