“Things and actions are what they are, and the consequences of them will be what they will be: why then should we desire to be deceived?”
—Joseph Butler, Fifteen Sermons

No doubt many of us could think of an answer or two to His Grace’s rhetorical question, but the case for social science—any science, for that matter—rests on what it can contribute to understanding things and actions and their consequences. For the last 40 years, an increasingly common method of research in the social sciences has been survey research. Demonstrably, it generates a great many “facts.” Is there any good reason to ignore them?

“Don’t confuse me with the facts” is an understandable and thoroughly human response. Often we just know what we know: if facts support us—well, of course they do; if they don’t, we don’t want to hear about it. Perhaps we can argue that we hold our beliefs in disregard of the available evidence because there are other sorts of truth, higher ones, beyond the merely empirical. (John Crowe Ransom’s unorthodox defense of fundamentalism comes to mind.) But in the real world, so-called, where social and political controversy takes place, many will find that argument unconvincing.

There’s no reason for conservatives to be fainthearted about the kinds of facts that survey research generates these days. Particularly as it is used in public-opinion polling, it is telling us some heartening things, things that you’d never learn from the op-ed pages. Without survey research, would we ever have known that young voters liked Ronald Reagan better than older voters did, or learned that Reagan’s support has been increasing among Black voters? Whether it is the Gallup Poll that repeatedly demonstrates that Episcopalians want their old prayer book back, or the recent surveys that show the common sense of common Black folk on the subject of racial quotas, survey research again and again reveals the gulf between ordinary citizens and the professionals who pretend to speak for them. If nothing else, these results make clear exactly who is guilty of “elitism,” and they have made some very deserving people squirm.

Facts very much like these persuaded a good many of the most intelligent and honest liberals to abandon liberalism: we know them now, of course, as neoconservatives, whose complaint about liberalism is not that it’s wrong, but that it doesn’t work. Things and actions have consequences, all right, and often not the ones we had in mind. Survey research is one of the routine social-science methods for finding out just what’s going on. Not incidentally, one of the best and most effective neoconservative publications is the American Enterprise Institute’s Public Opinion magazine, largely a compendium of survey results. (The facts never entirely speak for themselves, but Public Opinion is a fine ventriloquist.)

Some of us who were in places like Columbia University in the 60’s loved survey research precisely because it could be used to introduce a much-needed note of realism to the interminable political discussions of those days. It is no accident that the student radicals in the social sciences detested empirical social research in general and survey research in particular. It constantly told them things they didn’t want to hear.

But conservative humanists, like radical social scientists, have also often been suspicious of the frankly empirical varieties of social science that, like survey research, do not at all resemble social philosophy. There are good reasons for this suspicion, even aside from the temperamental aversion to mathematics and statistics common in the softer of C.P. Snow’s “two cultures.” There is indeed something presumptuous, and perhaps even corrosive, about weighing and counting and averaging happiness and loyalties, affections and prejudices. Once upon a time, each survey respondent wound up, quite literally, as a punch card, subject to counting and sorting, if not to folding, spindling, and mutilating. (Now technology has replaced the cards with magnetic “images” on a tape or disk—not much better, aesthetically.) It is right that somebody view with distrust the process of reducing individual men and women to equally weighted “cases,” forcing considered opinions and the variety of human characteristics into Procrustean “response categories.”

Whatever the objectors might think, though, these are not objections to survey research in principle. Without exception, in my experience, they can be translated into technical terms, and they turn out to be valid criticisms only of bad survey research, thoughtless and incompetent work. Every teacher who has ever given and graded an examination has done something similar to survey research. Measuring attitudes is not a great deal more difficult than measuring knowledge, and does no more violence to what is being measured. Measuring behavior or demographic characteristics is usually easier.

This is not to say that measurement in survey research (or in the classroom) is always well done—only that it can be. There are legitimate objections to much of what passes for survey research. (But some of the most common complaints are not among them: for instance, “How can they talk about all Americans when they only interviewed 1,600?” Believe me, they can.) There are many fields where the gap between professional and amateur work is enormous, but probably few others where the difference is harder for consumers to detect. And any well-trained, unscrupulous survey jockey can fudge his research to produce—well, not any result, but close enough. The kind of work that gives survey research a bad name is usually conducted either by incompetents or by parties with an interest in, say, a large percentage rather than an accurate one. The very active committee on professional standards of the American Association for Public Opinion Research tries hard to cope with this sort of thing, with only limited success.
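As for that figure of 1,600: a back-of-the-envelope calculation (my own illustration, not anything the pollsters put in their fine print) suggests why a sample that size can speak for the whole country. For a simple random sample, the standard error of an estimated proportion is at most

\[
\mathrm{SE}(\hat{p}) = \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \le \sqrt{\frac{0.25}{1600}} = 0.0125,
\]

so the familiar 95-percent “margin of error” is roughly plus or minus 2.5 percentage points, and, perhaps surprisingly, it is the size of the sample, not the size of the population, that matters. Real national samples are not simple random samples, so the true margin is somewhat larger, but the order of magnitude is right.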

Just as it is foolish to believe survey research uncritically, however, it is a mistake to dismiss it out of hand. Reputable, established survey organizations, academic or commercial, can usually be assumed to be technically competent, and smart enough not to load their results. After all, their continued success depends on their usually being pretty close to right, and sometimes (as in election polls and sales projections) a criterion will come along sooner or later to show whether they are.

Like classroom teachers giving exams—maybe more so—professional survey researchers are not flying blind. They have an armory of field-tested techniques, backed up by an ever-growing body of research on research. If they are well-trained, they also have an acute understanding of the limitations of their techniques, and the conscientious ones try to make sure that others understand those limitations, too. (Obviously, I’m not talking about the occasional broadcasts by National Public Radio of the meditations of Lou Harris on his latest poll.)

In the interests of greater survey literacy, here is a mini-lesson on the subject.* The first step on the road to wisdom in these matters is keeping very straight the distinction between fact and inference. Survey results are often presented in greatly abbreviated, summary form. A useful exercise is to translate them back into what has actually been observed. Suppose you hear that “65 percent of Americans say the President is doing a good job.” What this actually means is something like this:

1,635 adult, non-institutionalized, telephone-owning residents of the U.S. were chosen by a complicated scheme that makes it very likely that they bear an acceptable resemblance to all such adults. Our interviewers reported that 1,503 were located and were willing to answer questions. Of these, according to our interviewers, 977 said “a good job,” when asked: “Do you think the President is doing a good job, only a fair job, or a poor job?” This question followed a series of questions on foreign relations.

The translation, read carefully, mentions the most important of the things that you’d have to know to make sense of the figure “65 percent.” Each clause, each phrase, could be the subject of a lecture in a class on survey methods. How many American adults “really” approve of the President’s performance is only an inference—some would even say it is a meaningless one and that attitudes only exist as they are expressed in behavior in given situations (like saying “a good job” to a survey interviewer). The fact is what a sample of respondents said to interviewers in a particular, somewhat artificial setting, in response to a particular question embedded in a context defined by previous questions. (Actually, it’s not even what respondents said—it’s what interviewers have said respondents said, as filtered through a mechanical process of recording and coding responses.)
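The arithmetic is worth making explicit, too (the calculation is mine, simply to drive the point home):

\[
\frac{977}{1503} \approx 0.65,
\]

that is, 65 percent of the 1,503 people who actually answered, not of the 1,635 originally chosen (which would give about 60 percent), and certainly not of the entire adult population the sample is meant to represent.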

But that is a fact. And if 65 percent is higher than last month, or if a higher proportion of young people than of old said “a good job,” those are facts, too. Oddly enough, survey research is generally more reliable when it is asking what kinds of people do something, or whether the proportion is increasing or decreasing. Just estimating how many people do something would seem to be a relatively simple task, but it is actually one of the toughest. Question-wording, interviewer characteristics, the nature of previous questions, sampling biases—all of these factors can mess up an estimate. But so long as they are constant, they matter less for comparisons between groups, or over time.
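To see why constant distortions matter less for comparisons, suppose, purely for illustration, that a particular question wording inflates the measured approval figure by some fixed amount b in every survey that uses it. Then

\[
(\hat{p}_{\text{this month}} + b) - (\hat{p}_{\text{last month}} + b) = \hat{p}_{\text{this month}} - \hat{p}_{\text{last month}},
\]

and the bias drops out of the change, just as it drops out of a comparison between young and old respondents asked the identical question, even though it contaminates each figure taken by itself.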

Here’s a pop quiz, to see if you’ve been paying attention. In 1942, only 2 percent of Southern whites said that Black and white children ought to attend the same schools; in 1980, only 5 percent of white Southern parents said that they did not want their kids in school with even “a few” Black children. Have attitudes really changed from virtually unanimous opposition to school integration to virtual unanimity in the other direction? What do you need to know to answer that question?

I hope it is obvious that we would have to know much more about the studies that produced those numbers before we could even begin to take them seriously. (In fact, the two studies used different sampling techniques, the questions they asked were somewhat different, the later one asked its question only of parents, and the two studies even defined “the South” differently.) Moreover, it is impossible to say how the responses at either time were related to what people would have said in private to close friends. But one thing is certain: what Southern whites tell strangers on their doorsteps has changed. Start with that.

I have only touched on a few of the issues involved in evaluating survey research, but, to repeat, it is important to recognize that no one is more aware of these issues, or has done more to deal with them, than professional survey researchers. After all, usually no one has a greater interest in getting it right. Caution is appropriate in dealing with survey data, but it should be an informed caution, not knee-jerk obscurantism. The results of survey research, interpreted with that informed caution, can tell us at least one kind of truth—a truth, moreover, as often comforting as alarming these days. Whether for Bishop Butler’s reason or for motives less serenely disinterested, why then should we desire to be deceived?

* Anyone who wants to be better informed ought to look at a very readable textbook by Earle Babbie, called The Practice of Social Research. (Ignore the dedication to Werner Erhard.) An exception to the rule that college textbooks are watered-down pabulum, Practice has been so successful that it is almost certainly used at a college near you. Since Babbie has now retired on his royalties and spends full time bringing out new editions to kill the sales of older ones, cheap used copies are widely available. Babbie’s Social Research for Consumers is equally good, and his Survey Research Methods is even better. All are published by Wadsworth.