How much is actually known and not just supposed or imagined? A lot more, surely, than it is fashionable to think, at least in the world that moral and literary theorists seem to inhabit. So much more that it is easy to forget how much of the knowledge by which we interpret the world and its texts is nothing less than certain.

To take a simple instance, there is a notice outside my college that reads “Bill Posters Will Be Prosecuted.” Some sort of a joke, no doubt, could be made here: Bill Posters might be somebody’s name, and the authors of the notice might just be making a prediction about him. But I doubt it. In fact, that is to understate the matter. I am utterly certain the notice does not mean that, even though I neither know nor care who wrote it; to raise the matter at all is to do no more than confirm a total certainty about what it evidently means. You understand, at first sight and without reflection, that it forbids people to stick things on the wall.

What is more, it is seen to mean that regardless of race, gender, or class, which is a thought that should give pause to ardent multiculturalists, gender-crusaders, and critics of Eurocentricity. If all uses of language were conditioned by race or gender or class, as is sometimes asserted, then this would be, too; but that hardly seems likely. In fact it is about as likely as a remark made in the 1930’s by a Nazi apologist called Jakob Hommes, whose name, though now forgotten, ought to be celebrated among multiculturalists, because when urged to accept that arithmetic, at least, was free of race, he memorably replied that even twice-two-makes-four is “somehow differently tinged” in the mind of a Negro. There speaks a man faithful to his principles. Hermann Goering, a more famous Nazi, similarly remarked in the spring of 1933, shortly after Hitler’s seizure of power: “I am subjective—I submit myself to my people,” meaning to Germans of pure stock; he added that he thanked God for having created him without a sense of objectivity. When people complain about linear logic and the like, it is sometimes easy to forget that we have passed this way before. The Nazis were against it long before the New Left or Deconstructionists were heard of. Dogmatic irrationality is not new.

Indeterminacy, then, or the theoretical notion that no text has one correct interpretation, has a longer, and a much shadier, history than is widely supposed, and brutal dictators have believed in it and advocated it. But to return to posting bills. It would not occur to me or, I imagine, to anyone else on reading that college notice to ask how one knows what it means—still less to worry that my sense of its one correct interpretation, which is absolute and unhesitating, depends on being able to answer any such question. The familiar seminar challenge “How do you know?”—unanswerable as it often is—hardly seems to apply. One might indeed be stumped for an answer; but answer or no answer, there can be no doubt that the usual interpretation is the right one. So in interpreting texts far more is known, and certainly known, than is commonly noticed; one can be certain, and rightly certain, without justification, without explaining where the knowledge came from or how it is grounded.

That seems to be a hard point, nowadays, to accept in schools and colleges. Suppose you were to move from outside the college, where no posters are allowed, to the inside, where they sometimes are; suppose, too, that Bill Posters is a real person there, and that he teaches in a department of humanities. He will probably be scandalized to hear that there are cases where there is only one correct interpretation of a text and that one can be certain what it is without being able to say why. He may even glory in the dogma of critical indeterminacy. In April 1991, for example, Jacques Derrida ended a lecture at the University of Chicago, after two and three-quarter hours, with a soaring peroration in which he urged his audience, to their evident joy, to “draw all possible deductions from all possible hypotheses.” One enthusiastic member of his audience, Richard Stern (London Review of Books, August 15, 1991), modestly called his all-but-three-hour ordeal a Daedalian act—with the disciple Icarus, whom we may presume here to signify Richard Stern, “soaring towards self-exaltation on wings which melted because of his stupidity” while his master elaborately, hour upon hour, took the starch out of “received texts, elevated positions, fixed hierarchies of value.”

So things are happening in Chicago, which has long been known as the Windy City, much as they did in Paris 20 and more years ago. Much the same things, to tell the truth, but then universities exist (among other reasons) to preserve and to mummify. Bill Posters lives. He may even think it rash to be certain that a college notice is only about sticking things on walls. There are people who like this sort of Icarus-trip, on wings of wax that melt in the heat of the Paris or Chicago sun, and who like it for two and three-quarter hours at a time. In fact, Stern’s report of Derrida’s Chicago lecture speaks of rapt attention and prolonged applause: “The Deconstructor had built only too well; only that ever-surprising Surprise which deconstructs us all would silence the applause.” That sounds like the birth—or is it the middle age?—of a religion, and one fully equipped with a Messiah, a scripture, and a cult. It is the language of mystical ecstasy.

Denying what you know is a faith with acolytes in some unexpected places, and prophets of the indeterminate can easily be embarrassed by their allies. The Nazis are among them. “There is no such thing as truth, either in the moral or in the scientific sense,” Adolf Hitler told Hermann Rauschning in 1934, as the latter reported in Hitler Speaks (1939):

The idea of a free and unfettered science—unfettered by hypotheses—could only occur in the age of Liberalism. It is absurd. Science is a social phenomenon, and like every other social phenomenon it is limited by the benefit, or injury, it confers on the community. The slogan of objective science has been coined by the professorate simply in order to escape from a very necessary supervision by the power of the state.

So Adolf Hitler, in his fashion, was an early multiculturalist; and though defeated, his dogma lives on. Even scientists, after a good dinner, can occasionally be heard to say they would believe nothing without proof, though they look puzzled when asked how they would prove that. But then to put that question at all has at least one good effect: it helps to deconstruct Deconstruction. In any case, the skeptical scientist can be entirely contented, in practice, with his unnoticed contradiction. He simply forgets all about it in the laboratory, where he rightly believes in all sorts of things he could not prove. In the lab it would not occur to him, for example, to doubt that what looks red is red; but how in the world would he prove it?

Skepticism in the deconstructive style, whether in Paris or in Chicago, unlike Hitler’s or Goering’s, is a game. Like the flight of Icarus, it belongs to fantasy. It is for dinner-table conversation, for papers read at international conferences, for critical journals. It has nothing to do with what the reader does when he reads. It is a game, above all, for those who are afraid that, without it, they might have nothing interesting to say—a substitute, sometimes two and three-quarter hours long, for having nothing to say. As a witty German philosopher recently remarked, “theory is what is done when there is nothing more to be done” (Odo Marquard, Abschied vom Prinzipiellen, 1981). Nature abhors a vacuum, and silences have to be filled.

The skeptic in the humanities, however, is perhaps less likely than the scientist to be robustly inconsistent about such things. Bill Posters can be a worried man. There is no argument, George Orwell once said, by which you can defend a poem, and the skeptic-critic is inclined to throw up his hands at that, imagining it to be a damaging, even an unanswerable charge against everything beyond the severely factual that literary departments are supposed to be about. “Emily Brontë isn’t better than Jackie Collins,” a teacher of literature told an education conference recently, “just different.” It can be hard to make Bill Posters forget his dud theory and simply get on with it. He is a despairing being. He believes that real and lasting damage has been done to literary culture, that radical skepticism of that sort cannot be answered, that it is time to give up.

This is because he has not noticed that his skeptical view of critical values is not fresh and innovative at all, as Derrida’s audiences imagine, but old-fashioned in a classic 19th-century positivistic way; a touch of modern philosophy could help. Knowledge is not the same as account-giving, for a start, and the fact that a question cannot be answered is not in itself a reason to doubt that the answer is known. It does not follow from the fact that we cannot say why Emily Brontë wrote a better novel than Jackie Collins that we do not know Wuthering Heights to be better. Knowing how familiar foods and drinks taste abundantly illustrates that. Nobody, after all, can define the difference in taste between tea and coffee. Nobody, I imagine, supposes that it follows that he does not know the difference.

Knowledge, again, does not depend on justification—”How do you know?”—and silent, shared assumptions are not something to be ashamed of, as the instant lucidity of a notice prohibiting bill-posting illustrates, but rather natural and innocent instances of how language commonly works. A Cambridge philosopher, Elizabeth Anscombe, once raised her papers high over her head, in a dramatic gesture, and said to her audience: “If I ask ‘How many is this?’ you would have to ask a question or make an assumption”—an assumption, that is, about whether she meant lectures, or sheets, or words. To answer a question, to understand anything—whether a notice, a remark in conversation, or a novel—is to make all sorts of beyond-the-dictionary assumptions and to make them with unspoken certainty. Those who put up notices against bill posting know perfectly well they will be understood, though not necessarily obeyed. They do not need to explain what they do or do not mean.

The academic demand for justification goes on nonetheless, in spite of the self-evident case against it; it may not be pure coincidence that Justification is a term in theology, usually concerning faith and works. The skeptic who gives up religion, like the sad clergyman in Evelyn Waugh’s Decline and Fall who starts worrying about the foundations of his faith, may easily fail to find them in literary studies too. Waugh’s unfortunate clergyman, in the first and most farcical of his novels, loses his religion because he cannot see why God created the world at all, which is an extreme instance of what Wittgenstein used to call “starting too far back”; there may be a lot of spilt religion, or spilt atheism, in the desperate foundation-seeking that agonizes the humanities in the present age.

“How do you know Emily Brontë is better than Jackie Collins?” The challenge is of course meant to be unanswerable. I was once attacked in a lively seminar at a Scottish university, and in a building aptly named after the famous 18th-century skeptic David Hume, for suggesting that one can have knowledge—even certain knowledge—without being able to justify it, whether by proof, verbal definition, or expert agreement, and the storm the suggestion aroused was almost equal to the theological debates about justification by faith or works that shook Scotland four centuries ago. Plainly there is a big demand for justification in departments of literature and, in a secular age, a fretful sense that if divine authority can no longer be appealed to then another authority must quickly be found; that without such an authority—a new theoretical basis—no certitude can be had. Hence the chaotic, anything-goes Derridean world, once applauded in Paris and now shifted to Chicago and points west, of accepting defeat and drawing all possible deductions from all possible hypotheses.

The modern skeptic is commonly a disappointed foundationist. He has noticed that there are no stated and agreed foundations to critical values, but he still believes, in his old-fashioned way, that there should be: hence the Derridean formula of endlessly drawing all possible interpretations. But even that is problematical. For if possible interpretations are to be admitted, and only such, then impossible interpretations are presumably excluded. But what, in that case, are the criteria for judging whether an interpretation is possible or impossible? In a recent Cornell collection, Arden Reed’s Romanticism and Language (1984), for example, it is suggested without irony that the crucial and hitherto unnoticed point about the boat-stealing scene in Wordsworth’s Prelude is the near-homophony of pinnace—“elfin pinnace”—and penis:

Once this homophony is recognized, the logic . . . is clear. . . . The associations of “heaving through the water like a swan” alleged by psychoanalysis—that penetrating the water is the penetration of a female element, that the swan represents a particularly male or phallic symbol . . .

After all, as the critic Timothy Bahti points out in a mood of mounting self-excitement, the poem reads “lustily I dipped my oars”; so one probable hypothesis of this boyish escapade is lust. If that is a possible interpretation, however, one begins to wonder what an impossible interpretation would look like.

An Irishman, when asked the way, once replied, “I wouldn’t start from here at all.” Nobody who wanted to read would start from here. He would behave, rather, like the scientist who knows perfectly well that what looks red is red. He would get on with it. In the topsy-turvy world of critical thought, however, that view is dismissed as pretending that the problems do not exist. But surely just the reverse is true. The near-homophony of “pinnace” and “penis” in Wordsworth’s poem isn’t a problem at all, and anybody who wastes time on that must be secretly convinced that there are no real Wordsworthian problems worth tackling. It is the critical theorist who is pretending problems do not exist, not the critical historian. The theorist in that style presumably imagines there is nothing serious left for the critical mind to do.

It would be interesting to ask why arguments that practicing scientists know to be a waste of time—like “How do we know that what looks red is red?”—appear enormously taxing to many who work across the fence in the humanities. Disagreement of experts, for example, which is commonplace in the physical sciences, leads few involved in science to doubt that they are engaged in an objective inquiry. They know and take for granted that experts disagree about objective questions. In moral and literary controversy, by contrast, disagreement is seen as fatal to the case for objectivity. An argument that counts for nothing in the laboratory can mean everything, apparently, in the library. We live in an age when the humanities are skeptic-credulous. Everyone knows that experts disagree about the best solution to inflation, the distances between heavenly bodies, and the present population of China, and that those disagreements represent no sufficient reason to doubt that economics, astronomy, and demography are objective inquiries. So why do we have so much moral and critical skepticism?

The catastrophe recently suffered by humanism is seen, by some, in sweeping and apocalyptic terms. In After Virtue: A Study in Moral Theory (1981), for example, Alasdair MacIntyre, a British philosopher teaching in the United States, sets out from a double hypothesis which, being largely unargued, one is plainly meant to accept as self-evident: that there are no moral certainties now, but that there once were. “The language of morality is in the same state of grave disorder” as the natural sciences would be, he argues, after a catastrophe in which laboratories had been burned down, scientists lynched, science teaching abolished, and anyone’s view of the physical universe imagined to be as good as anyone else’s. Today, he proclaims, there is total moral chaos. The reader is given no chance to demur at this powerful image of modern anarchy, and MacIntyre returns to it at the end of the book: with the two great rival systems of moral knowledge, Aristotelian and Christian, in terminal decline, as he believes, anyone can now be excused for being morally uncertain about anything. “There are no longer any clear criteria,” and those who would justify virtue must begin to “look for another basis for moral belief.” It is not just that there is no agreement. There is not even the prospect of agreement. “Our society cannot hope to achieve moral consensus.” We are waiting not for Godot, as the book strikingly concludes, but for a new St. Benedict.

This is a powerful vision. It is also, as the author implies, a widely accepted one. Some years ago Hollywood made a film called Network based, though less thoughtfully, on the same premise, since it showed a clever television producer exploiting the moral despair of America by urging the inhabitants of one city after another, exhausted by a long diet of news flashes about serial murder, hijacks, hostages, and drug addiction, to hang out of their windows by the thousand and scream. Professor MacIntyre does not want his readers to open their windows and scream, and those who read a book called After Virtue, one may safely assume, are not much given to mass hysteria. Nor can any academic philosopher wish that they were. But his vision of the world is not much different from Network’s. Moral anarchy, both believe, is endemic to the world we inhabit; certainly nobody knows the difference between right and wrong any more, as they did in the days of Jane Austen or the medieval Church; and short of another spiritual revival as intense as Benedictine monasticism, though “doubtless very different,” the world is forever doomed to ethical incertitude and unknowing despair.

There are several assumptions readily made here that need to be drawn out and examined. The first is that the modern world is an exceptionally wicked place. That is not a view unique to the present age; doubtless St. Benedict, patriarch of Western monasticism, felt something similar when he withdrew from the licentious life of Rome around 500 A.D. to a cave in Subiaco to write his Rule. The spirit of Subiaco has a certain recurrence in Western thought. Some 60 years ago P.G. Wodehouse wrote a song for Anything Goes—“The world’s gone mad today / And good’s bad today, / And black’s white today / And day’s night today”—which is a sort of musical anticipation of After Virtue. I also recall a lapel button at the University of California in Berkeley, in the late 1960’s, that read “God is not dead: He just doesn’t want to get involved,” implying that matters were by then past the care of even divine omnipotence. That is going a point beyond Benedict and P.G. Wodehouse. One senses a throwing-up of hands here, a desperate or whimsical turning-away from rational problem solving, and an indifference, in general, to the study of cases, considerations, and evidence.

In the face of that despairing mood—whether St. Benedict’s, or Wodehouse’s, or Berkeley’s, or MacIntyre’s—it is hard to be sure what answer would serve. If you believe that no interpretation is fixed and no argument uniquely persuasive, then you do not pay much serious attention to arguments when they are offered. If you are a moral pessimist in the God-is-dead Nietzschean style, playful or committed, then any suggestion that things have once been worse, or that they are getting better or may soon be getting better, sounds laughably like Pollyanna. Pessimism has intellectual prestige, after all; those who take that view or adopt that pose are unlikely to be talked out of it by merely rational considerations. I speak of posing here, since it was always plain that the hermits of Berkeley, at least, were not much like those of Subiaco. They went on living their agreeable lives, cheerfully enough, and are now, it is rumored, entered into a comfortable and prosperous middle age. They knew how seriously to take their own pessimism, unlike Benedict, as they applied for good jobs, bought homes, and made due and ample provision for retirement.

But in any case, it is of little use to talk of such things. Tell the Benedicts of today that the standard of living of the human race is probably higher than ever before in history, and they will charge you with indifference to famine in Africa or draw attention to the moral dangers of materialism—or, with a fine disregard for consistency, both. Tell them there has been no major war, in the style of 1914 or 1939, for nearly half a century, and they will accuse you of callous unconcern about Vietnam or the Gulf. Tell them that there are more students in higher education than ever before, and they will speak of the deficiencies of the inner-city schools. Hostage-taking, after all, did not begin in the 20th century; nor, as anyone who has studied the opium wars in China will know, did drug abuse. The screamers of Network were probably well-housed and well-fed. The film was a satire on television, and it is mass communication, surely, not the world itself, that persuades people by the thousand that they are living in a uniquely wicked age. They might not think so if they could watch television coverage of the Thirty Years War or of the fall of ancient cities where, by common practice, the adult males were slaughtered, the women and children enslaved, and the city razed to the ground.

The second assumption is that the modern world, unlike many ages before it, lacks moral agreement. That is hard to fathom. The media often report child murders, and yet no one ever seems to speak up for the murderers. If there were no moral consensus, one would expect someone to take the part of those who kill children. But nobody ever does, and the only issues are whether policing should be more thorough and penalties strengthened. Even pro-Nazis do not exactly defend Auschwitz: they deny that it happened and hint at Zionist propaganda, which surely implies that they too are unready to endorse, at least in public, what happened there. There is a vast consensus in economic and environmental matters—that the Western world is using too much fuel, for example, and that inflation is a bad thing—even if there is less agreement about what to do about them. Ninety percent of the people, as Winston Churchill used to say, are agreed about ninety percent of the things that need to be done. Western near-unanimity in rejecting the violent Iraqi occupation of Kuwait in August 1990 was impressive; so, in August 1991, was the heartfelt popular welcome, east and west, for the failure of the Moscow coup. The notion that there is no agreement on moral issues is manifestly absurd. Why, then, is it so confidently made?

One answer concerns the recurrent myth of a consensual Golden Age. Among modern centuries that age commonly is, and has to be, the 18th century, since there were religious wars in the 16th and 17th, as everybody knows, and the 19th spawned some highly anti-consensual thinkers like Emerson, Marx, and Nietzsche. The grand instance here is usually supposed to be the moral order of Samuel Johnson, with the novels of Jane Austen, which echo that world, as a late fictional pendant. So let us look at that alleged instance of moral consensus.

Boswell reports that Johnson was once taken by a friend to hear a sermon at the Temple in London, where the preacher “ranted about liberty.” As they left the church, Johnson bitterly remarked that “our liberty was in no sort of danger: he would have done much better to pray against our licentiousness.” That is a loyal member of the Church of England in 1770 disagreeing with the pulpit pronouncements of an Anglican clergyman; and the point at issue, which sounds rather like an anti-Whig point, is devastatingly fundamental. Some 18th century Englishmen, notably Whigs, wanted more liberty; some, notably Tories, wanted less. The American Revolution of 1776 similarly divided British public opinion—and American too, as the Loyalists illustrate; so did the French Revolution 13 years later. Again, there was an embittered debate about the morality of slavery in the late 18th century and after, on both sides of the Atlantic. Where is the moral consensus there? The Golden Age of moral agreement does not exist, I suggest, except in the minds of moralists who wish that it did.

The third assumption is that a moral consensus could only be explicit, like St. Benedict’s Rule. That is an instance of a wider fallacy: that knowledge is the same as account-giving. Moral rules like the Ten Commandments claim to give precise accounts of a moral system. When MacIntyre complains “there are no longer any clear criteria” in moral matters, he implies, in all probability, that one could only have a moral consensus if there were such criteria—rules which, like St. Benedict’s, were stated and agreed, much as a monk’s are. He is mistaken on both counts. Nobody needs stated criteria in order to make sound judgments, whether in the arts, in morality, or elsewhere; agreement has nothing to do with it. There was widespread agreement before the 18th century, after all, that slavery was acceptable, which suggests that in moral matters, as in the natural sciences, it is possible to be agreed and wrong. The view that the sun goes around the Earth, after all, was once agreed to.

Odd for an academic, of all people, whether philosophical or not, to argue that stated criteria are needed in making judgments. Daily experience suggests otherwise. To mark a student paper is to make a judgment; if all judgments require such criteria, then that one would too. So what are they? I do not suppose any college department on earth, philosophical or other, would waste two minutes together debating such an issue. They would know perfectly well that any answer to that question could only be trite if it were true: “A good paper is one that is well-written and says true things in the right order.” But answering in that fashion, as anyone can see, is not answering at all: it is merely restating the question in another form. What does well-written mean here, it will reasonably be asked, or true, or in the right order? It is notable that no one in practice judges a paper, or a book, by appealing to agreed criteria. In fact, good books can be distinguished from bad, and are—by children, for example—before the word “criterion” is understood at all.

A moral consensus would be much like that. It would not be like agreeing about the Ten Commandments or the Rule of St. Benedict, and those who condemn child-murder know that they do not need to turn to a rule-book before they condemn it. You know child-murder is wrong before learning any rules at all, often enough; and you know that the rule “Thou shalt do no murder” is right, on hearing it, because you already know it to be so. So if there are no clear criteria for moral judgment, as Professor MacIntyre complains, there may still be plenty of moral certitudes, and the world may still be a long, long way from moral anarchy or the law of the jungle.

A good deal of moral despair, I suspect, arises because, though the question “How do you know?” is often regarded seriously, it is not usually regarded literally. If someone asks “How do you know that murder (or slavery) is wrong?” he is not usually asking how you know it but whether you know it; he implies that only if you can produce grounds, at once stated and agreed, can the matter be judged to be beyond all doubt. The question is not a question, in fact, but a challenge, and to answer it literally would be to misunderstand it. If asked how you know that two plus two equals four, for example, you might reply, “Because I was taught it at school and have since seen it work admirably in practice”; but that is not what the questioner wants to hear. How one knows the difference between right and wrong might be even harder to answer. One did not learn morality, on the whole, as a set of propositions, as the two-times table is a set of propositions. When you were told “It is wrong to murder somebody,” the remark did not exactly represent the beginning of knowledge; it summarized a truth you already in some sense knew.

So there are truths one knows already and with certitude, and before language itself. In estimating that underrated truth it can be instructive to watch a prelinguistic child. An infant a few weeks old, on seeing an outstretched finger, will hesitate to clasp it, and when it does so it looks surprised, as if it had learnt something—something about the relation between how things look and how they feel. Even that is something one once had to learn; seeing that a person is a person is a further step: that hands, eyes, nose, and ears add up to an individual being. All this was once less than obvious, though it all seems obvious now—much as it is now obvious that murder and slavery are wrong. No wonder the moral consensus is so easily denied. It is denied not because it is not there, but because most of it has been there for so long that it is easily forgotten. Imagine asking Professor MacIntyre to explain how he learned English, for example, or how he knows the name of an old acquaintance called Smith. Everybody knows the man is called Smith, or at least all his friends do, and that sounds like a factual consensus—though not, it will be said, a very interesting one. But it only seems obvious. Once upon a time you did not know it at all, and it had to be learned.

Wittgenstein once remarked to his American friend Norman Malcolm that a philosophical confusion is rather like a man who does not know how to get out of a room. Malcolm was eventually to tell the story in his Ludwig Wittgenstein: A Memoir (1958): “He tries the window but it is too high. He tries the chimney but it is too narrow. And if he would only turn around, he would see that the door is open all the time.”

The door that leads out of moral and critical skepticism, too, has been open all the time. Nobody needs grounds for knowledge—least of all stated and agreed grounds—in order to know and to be certain that he knows. It does not matter how we know. As the philosopher said, turn around—the door is open.