distrust the process of reducing individual men and women to equally weighted "cases," forcing considered opinions and the variety of human characteristics into Procrustean "response categories."

Whatever the objectors might think, though, these are not objections to survey research in principle. Without exception, in my experience, they can be translated into technical terms, and they turn out to be valid criticisms only of bad survey research, thoughtless and incompetent work. Every teacher who has ever given and graded an examination has done something similar to survey research. Measuring attitudes is not a great deal more difficult than measuring knowledge, and does no more violence to what is being measured. Measuring behavior or demographic characteristics is usually easier.

This is not to say that measurement in survey research (or in the classroom) is always well done—only that it can be. There are legitimate objections to much of what passes for survey research. (But some of the most common complaints are not among them: for instance, "How can they talk about all Americans when they only interviewed 1,600?" Believe me, they can; the sketch below shows why.)
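As a back-of-the-envelope sketch, assume a simple random sample of n = 1,600 (real national polls use more elaborate designs, but the logic is the same). The sampling error of an estimated proportion depends on the number of interviews, not on the size of the population, and it is at its largest when the true proportion is one-half:

\[
SE(\hat{p}) \;=\; \sqrt{\frac{p(1-p)}{n}} \;\le\; \sqrt{\frac{(0.5)(0.5)}{1600}} \;=\; 0.0125 .
\]

The conventional 95 percent margin of error is about twice that, $1.96 \times 0.0125 \approx 0.025$, or two and a half percentage points either way. Since the population size never enters the formula, 1,600 interviews can speak for the whole country as readily as for a single state.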
There are many fields where the gap between professional and amateur work is enormous, but probably few others where the difference is harder for consumers to detect. And any well-trained, unscrupulous survey jockey can fudge his research to produce—well, not any result, but close enough. The kind of work that gives survey research a bad name is usually conducted either by incompetents or by parties with an interest in, say, a large percentage rather than an accurate one. The very active committee on professional standards of the American Association for Public Opinion Research tries hard to cope with this sort of thing, with only limited success.

Just as it is foolish to believe survey research uncritically, however, it is a mistake to dismiss it out of hand. Reputable, established survey organizations, academic or commercial, can usually be assumed to be technically competent, and smart enough not to load their results. After all, their continued success depends on their usually being pretty close to right, and sometimes (as in election polls and sales projections) a criterion will come along sooner or later to show whether they are.

Like classroom teachers giving exams—maybe more so—professional survey researchers are not flying blind. They have an armory of field-tested techniques, backed up by an ever-growing body of research on research. If they are well-trained, they also have an acute understanding of the limitations of their techniques, and the conscientious ones try to make sure that others understand those limitations, too. (Obviously, I'm not talking about the occasional broadcasts by National Public Radio of the meditations of Lou Harris on his latest poll.)

In the interests of greater survey literacy, here is a mini-lesson on the subject.* The first step on the road to wisdom in these matters is keeping very straight the distinction between fact and inference. Survey results are often presented in greatly abbreviated, summary form. A useful exercise is to translate them back into what has actually been observed. Suppose you hear that "65 percent of Americans say the President is doing a good job." What this actually means is something like this:

    1,635 adult, non-institutionalized, telephone-owning residents of the U.S. were chosen by a complicated scheme that makes it very likely that they bear an acceptable resemblance to all such adults. Our interviewers reported that 1,503 were located and were willing to answer questions. Of these, according to our interviewers, 977 said "a good job," when asked: "Do you think the President is doing a good job, only a fair job, or a poor job?" This question followed a series of questions on foreign relations.

The translation, read carefully, mentions the most important of the things that you'd have to know to make sense of the figure "65 percent." (The figure itself is simply 977 divided by 1,503, the people actually interviewed, not the 1,635 originally chosen.) Each clause, each phrase, could be the subject of a lecture in a class on survey methods. How many American adults "really" approve of the President's performance is only an inference—some would even say it is a meaningless one and that attitudes only exist as they are expressed in behavior in given situations (like saying "a good job" to a survey interviewer). The fact is what a sample of respondents said to interviewers in a particular, somewhat artificial setting, in response to a particular question embedded in a context defined by previous questions. (Actually, it's not even what respondents said—it's what interviewers have said respondents said, as filtered through a mechanical process of recording and coding responses.)

But that is a fact. And if 65 percent is higher than last month, or if a higher proportion of young people than of old said "a good job," those are facts, too. Oddly enough, survey research is generally more reliable when it is asking what kinds of people do something, or whether the proportion is increasing or decreasing. Just estimating how many people do something would seem to be a relatively simple task, but it is actually one of the toughest. Question wording, interviewer characteristics, the nature of previous questions, sampling biases—all of these factors can mess up an estimate. But so long as they are constant, they matter less for comparisons between groups, or over time.

Here's a pop quiz, to see if you've been paying attention. In 1942, only 2 percent of Southern whites said that Black and white children ought to attend the same schools; in 1980, only 5 percent of white Southern parents said that they did not want their kids in school with even "a few" Black children. Have attitudes really changed from virtually unanimous support to virtual unanimity in the other direction? What do you need to know to answer that question?

I hope it is obvious that we would have to know much more about the studies that produced those numbers before we could even begin to take them seriously. (In fact, the two studies used different sampling techniques, the questions they asked were somewhat different, the later one asked its question only of parents, and the two studies even defined "the South" differently.) Moreover, it is impossible to say how the responses at either time were related to what people would have said in private to close friends. But one thing is certain: what Southern whites tell strangers on their doorsteps has changed. Start with that.

I have only touched on a few of the issues involved in