pacted into "yes" and "no" boxes. As in the selection of jurors, the process is biased towards the unsophisticated at leisure. A republic of not-too-busy TV watchers.

Clearly, even the most ardent defenders of polling acknowledge a degree of untruth in their product. Surely people lie, especially on sensitive matters, and surely surveys occasionally employ ill-defined terms that elicit nonsense. Questions omit relevant alternatives, dramatic events distort reactions, and bumbling interviewers garble responses. Furthermore, results are sometimes distorted by unrepresentative samples, clerical errors, or even outright falsification. This is all openly confessed, though dismissed as nonfatal. Let us not dwell on these familiar weaknesses; our concerns run deeper.

In the real world, misjudgments and misperceptions are costly. While market research polls had better get it right or heads will roll, this is not true when assessing what citizens want from government. Even more important, there are few incentives to get it right. If concocted results were placed next to the genuine article, how could we distinguish the gold from the base metal? There is no standard "truth bar" in some Bureau of Opinion, and here lies the problem. When a New York Times survey reports that 75 percent of the public want less defense spending, and the Platonically correct figure is 49 percent, does anyone lose his job? Are pollsters panicked that the public will lose confidence in the pay-per-view Oracles? Of course not—nobody (fortunately?) has the Platonic figure. We have assertions of accuracy buttressed by scientific paraphernalia, not public tests.

In general, tests of poll accuracy on matters of public preference are exceedingly rare. Perhaps the only example is the preelection poll. This test, however, is somewhat limited.
Voting, unlike policy, has a concrete meaning to respondents, while the threat of accountability permits pollsters to take precautions such as larger samples and multiple questions. Even when predictions miss, escaping responsibility is not difficult. There are always last-minute disturbances, "understandable" limitations of technique, and other excuses. In especially difficult situations, pollsters announce "too close to call."

The very nature of polling offers few incentives for getting closer. All survey organizations, whether small campaign shops or university-connected colossi, are constrained by costs. Improving accuracy is very expensive—intensive interviewer training, numerous pretests, multiple questions, and other quality control steps inflate costs. And one can still never be sure. Why should the market—the mass media, politicians, or academics—pay handsomely for this enhanced quality? A better poll question is not the same thing as buying a Lexus versus a Chevrolet. In short, there is no demand for the better product.

Perhaps the most serious and enduring impediment to accuracy is that nobody can tell the difference, at least for questions about political preference. We are shooting at an invisible target. There are no full-page ads in the Wall Street Journal proclaiming that, for the fifth year in a row, Gallup has been certified to have the most accurate polls. Accuracy will come only when we have a clear standard of true sentiment, and this is likely to be never.

The modern poll assumes that the people have something intelligent to say. Otherwise, why ask? Among both commercial pollsters and academics, this must be judged The Supreme Article of Faith. To challenge it attacks the very foundations of the enterprise. Occasional stupidities will be dutifully acknowledged, commented upon, picked apart in print, and everybody can then return to work as if nothing happened.
The problem is deeper than acknowledged, however.

The typical survey's very design conspires to maintain the faith. Basically, it is a highly structured arrangement that could transform even noises from the brain dead into valid data. Merely by answering "yes" or "no," citizens "resolve" the most complex, troublesome problems. It is assumed that by simply reacting, respondents know of what they speak. For example, one common type of question is, "Would you increase, decrease, or keep the same spending on policy X (defense, social security, environmental protection, etc.)?" Despite studies demonstrating vast ignorance of government spending, the respondent's knowledge is never assessed. Hence, Mr. X, unknown to the pollster, believes we currently spend $800 billion on defense and advocates a "decrease" to bring it down to a mere $500 billion. Meanwhile, Mrs. Y is certain that the present figure is $200 billion and wants an "increase" to $300 billion. Finally, Mr. Z is well-informed of the facts, including the difficulty of precisely labeling "military spending," and favors a marvelous plan of both cuts and increases. He reluctantly answers, "Don't Know." The headline declares "Public Divided on Defense Cuts."

Simpleminded questionnaires mask the public's inability to navigate difficult choices, especially when large numbers and rare events are involved. Only occasionally does collective silliness expose itself. For example, respondents in one survey expressed a willingness to impose a $10 tax on Westerners to save 50,000 birds in an oil spill. This is $2,000 per bird! And that is peanuts compared to the $32 billion a year wanted to preserve the whooping crane and $244 million to prevent a single offshore oil spill. But public generosity does have its limits. Polls on willingness to pay for government-sponsored health care show most Americans supportive only if it were an incredibly cheap bargain, well below $2,000 per year.
Lucky birds.

Such examples do not reflect a stupid citizenry. They merely reveal that polls can easily overreach the public's capacity to resolve complex problems. When the issue is familiar and close at hand—for example, the use of racial quotas in schools, fear of crime, public morality—people can respond with a degree of thoughtfulness. But when the poll ventures into far more complicated, distant, and unusual matters, responses become less meaningful. What is most unfortunate is that polls, by their very design, do not permit a differentiation of the two. By their format, wording, and vagueness, questionnaires disguise irresponsible, thoughtless responses. Discovering reality is replaced by shadowmetrics. People can be irresponsible and indulge their fantasies. The same citizen who refuses a beggar a dollar gladly confesses to a pollster a willingness to spend millions abolishing poverty. The latter response is the "real" one. Unfortunately, since polling is a commercial enterprise, there are no industry incentives to doubt the marketed product's value.

In the end, perhaps some deep psychological need governs our obsession with polling: our compulsive craving for self-understanding whets our appetites for the equivalent of scientific junk food. But when confronted with junk food, the prudent will either consume in moderation or exercise restraint. When people possess firsthand familiarity with issues, polls may reveal something of value; they will render unto the public what is within the proper realm of public competence. Otherwise, be careful.

FEBRUARY 1996/21