Rauch: The Rise of Liberal Science

From Kindly Inquisitors: The New Attacks on Free Thought, Expanded Edition, by Jonathan Rauch. Pages 31 – 56.

2

The Rise of Liberal Science

In the beginning was Plato, the greatest of all the masters who have advocated centralized control of knowledge. His unresting spirit continues to haunt us; it probably always will. No one has ever made the Fundamentalist Principle for knowledge-making so dazzling, so compelling, so beautiful. No one has more sublimely argued that opinion must be regulated for the good of society.

There are many reasons to read Plato, among them the beauty and plasticity of his thought and the delightful character of Socrates, but surely one of the best reasons to read him is to be horrified. Read The Republic, putative wellspring of Western values, and you find that once you look past the glittering facade of Plato’s rhetoric you are face to face with the ethic of the totalitarian regime. It was that Republic of Plato’s which John Locke, David Hume, and the other founding fathers of the liberal epistemological regime rebelled against and, eventually, overthrew. But though they put the Platonic Republic on the defensive, they did not extinguish the life in it. Plato’s shining vision is immediately appealing, and you have to think hard about it to see why it is bad. It holds out the promise of governance by the enlightened and humane, of relief from the foolish and unreasonable, of shelter from uncertainty and change. Today, as ever, it is a magnet drawing millions of people, including many American intellectuals, toward the political regulation of inquiry.

In fairness, the dark Plato was not the only one. Like all of us, he had more than one side, and there isn’t a soul who doesn’t find much to love in him. Every liberal loves the figure of Socrates, who has taught so many people the method of skeptical inquiry and the importance of intellectual humility—of trying to keep in mind the difference between what you know and what you just think you know. Plato loved Socrates too; and it is, for me, as a philosopher of love that Plato achieves his most moving art. He was a rhetorician capable of Mozartean fertility and lyricism; he was capable, too, of irony, of doubling back to undermine himself, of leading himself astray as though to sow discontent with his own glibness. To academics and others who have spent their working lives with him, I ought to say that I have no intention of encapsulating Plato in the few pages that follow. I mean only to follow one of the particularly important strands in the tapestry, a political strand woven primarily into The Republic.

Plato’s ideal Republic, his vision of the good political regime, is built on the following principles.

The founding principle is that of absolute individual devotion to, and submission to, the good of the state. The state should control procreation and marriage, for eugenic and population-control purposes, so as to prevent the racial debasement of the ruling (guardian) class. To keep the genetic stock strong, “the offspring of the inferior and any of those of the other sort who are born defective” should be “properly dispose[d] of in secret, so that no one will know what has become of them” (460c). 1 The private family should be abolished among the ruling class and children raised collectively, so that “these women shall all be common to all these men, and that none shall cohabit with any privately, and that the children shall be common, and that no parent shall know its own offspring nor any child its parent” (457d). Similarly, private property should be abolished among the ruling class as a way to get rid of the very notion of a private or individual interest, “so that we can count on their being free from the dissensions that arise among men from the possession of property, children, and kin” (464e).

The state cannot inculcate the necessary beliefs and virtues in its administrative class unless it exercises strict and vigilant control of speech, including poetry. “We must begin, then, it seems, by a censorship over our story-makers” (377c). Music, too, is corrupting if unsupervised. Not even Homer himself is to be spared. “We will beg Homer and the other poets not to be angry if we cancel those and all similar passages,” says Plato, after citing verses of the kind which “we will expunge” (387b and 386c). Still he is not finished: all artisans, all artists, all craftsmen are to be carefully watched by the overseers, “on penalty, if unable to obey, of being forbidden to practice their art among us, that our guardians may not be bred among symbols of evil” (401b). No form of expression, in short, is to be untouched by the state’s tendrils. Lest corruption creep in, the state “must throughout be watchful against innovations in music and gymnastics counter to the established order” (424b). Just so did Stalin and his minions supervise the innovations of Shostakovich.

Supporting the whole regime, and giving it legitimacy, is “one noble lie” told among the ruling elite. The rulers, in their turn, will administer a regimen of propaganda lies to keep the social structure stable: “our rulers will have to make considerable use of falsehood and deception for the benefit of their subjects” (459c). A ruler may lie as necessary, but, just as you might expect, if he “catches anybody else in the city lying . . . he will chastise him for introducing a practice as subversive and destructive of a state as it is of a ship” (389d). At the top, ruling over all, is the wise philosopher, whose love of wisdom “is impossible for the multitude” (494a). The philosopher-rulers are very great indeed, virtually gods, and when they die “the state shall establish public memorials and sacrifices for them” (540c). There might be posters staring down from high places to remind ordinary people of their former leaders’ omniscience; there might be a grand mausoleum in the Republic’s main square, and monumental idealized statues in all provinces.

What makes the whole massive totalitarian machine possible is the view of knowledge which undergirds it. Plato believed what so many of us instinctively believe: that the way to produce knowledge is to sit down in a quiet spot and think clearly. The best knowledge comes to him who thinks best. Liberalism holds that knowledge comes only from a public process of critical exchange, in which the wise and unwise alike participate. But Plato believed that knowledge comes from wisdom, and so knowledge belongs especially to the especially wise—to the true philosophers, who are rare indeed. The real philosophers are the people “who are capable of apprehending that which is eternal and unchanging, while those who are incapable of this, but lose themselves and wander amid the multiplicities of multifarious things, are not philosophers” (484b). People with that nearly divine capability may approach an understanding of reality as it is, whereas others walk as though in a “dream state.” Most of us live in a dim cave below ground, apprehending images only, but the philosopher can aspire to the brilliant sunshine of genuine knowledge.

Once you grant Plato his premises about knowledge, then it is clear who should rule the state and sort true opinions from false ones: the philosophers. “To them by their very nature belong the study of philosophy and political leadership, while it befits the other sort to let philosophy alone and follow their leadership” (474c). Only to those who are capable of right knowledge should truth and power be entrusted. Few people are endowed with such a capability, though many might aspire. Philosophy “is impossible for the multitude,” and “the perfect philosopher is a rare growth among men and is found in only a few” (494a and 491b). Such a person, who everywhere is “seeking the true nature of everything as a whole, never sinking to what lies close at hand,” is bound to be aloof and sometimes to be laughed at by “the whole rabble.” 2 Never mind: here is the man whose spirit and mind are fit to rule.

And what about the “motley horde” of people who want to rule but lack the philosopher’s access to knowledge? Such persons are bound to be a problem. Unless, says Plato, they are “compulsorily excluded [from power], there can be no cessation of troubles” (473d). There must be no Salman Rushdie in Plato’s Republic. If such a person were somehow to survive the state-controlled education with his ambitions intact, he would have to be eliminated.

Epistemology—one’s view of who can have knowledge and when—is politics, and it has the profoundest practical consequences. No better illustration exists than Plato’s ghastly state, with its central control of everything, founded on central control of truth. And Plato’s ideas are neither farfetched nor archaic. They are similar to the ideas undergirding, for example, the intellectual regime of Khomeini’s Iran, a dictatorship of the wise.

In this chapter I want to describe the skeptical revolution which uprooted and inverted Plato’s Republic and its Fundamentalist Principle. (Why I call it “fundamentalist” is left for chapter 4.) I also want to sketch the liberal—and breathtakingly radical—truth-finding regime which replaced Plato’s authoritarianism, the better to understand how to defend the liberal regime against today’s authoritarian revival. But let me first take up a basic question which makes a good starting point: Why did Plato—why would anyone—feel it necessary to regulate and control knowledge so radically and overbearingly? Why bother? What was the threat which so alarmed Plato?

. . .

If you want to clear the room at a cocktail party, say “epistemology.” People will think you are either stuttering or crazy. In college philosophy courses many of us do learn something of epistemology—usually beginning with Plato’s discussions of knowledge and ranging through Descartes, Berkeley, and the others. But the subject is usually taught as though the philosophy of knowledge were a game played by intellectuals over purely abstract counters like “insensible qualities” and “material extension” and “sense data.” It’s safe to say that most of us think of epistemology, if we ever think of it at all, as the intellectual playground of the odd.

Plato knew better. The subject of epistemology is the nature and limits of human knowledge. Or, as Senator Howard Baker did not quite ask during the Watergate hearings, “What can one know and when can one know it?” The problem of what knowledge is and how to find it is, of course, a serious question for philosophers, who for centuries have been debating just what we are entitled to claim we know—though unfortunately most people have trouble understanding what the philosophers are saying. However, the problem of knowledge is not, or at least should not be, a problem for philosophers only. It enfolds one of the few really fundamental problems that any human society must cope with if it is to survive.

Within the question of what knowledge is and how it can be had, another question is coiled: Who, if anyone, can claim to have knowledge, and under what circumstances? When is it legitimate for me to say “I’m right and you’re wrong!” and to act accordingly? This is the problem which Plato was grappling with in The Republic and elsewhere: in a world of differing opinions, how do you sort truth from error? In other words, how do you decide who is right?

It is not just that our opinions differ and that we draw different conclusions about what we see. We see differently. Our points of view are different, literally and figuratively. By definition, no two of us share exactly the same experiences. We all believe, taking this on faith, that there must be, “out there,” an objective reality of the world as it really is, independent of the vagaries of human perception and misperception, a world that we would all see identically if we could all see perfectly. Philosophers have referred to that believed-in world as the external world, the objective world, the world-in-itself. I think of it as the World Out There. Plato referred to it as the world of “forms,” of pure and perfect things as they eternally and really are, “the very things themselves” (479e). Plato’s philosopher was the one who could aspire to know things as they really were, and so would attain definitive knowledge rather than flighty opinion. And indeed, though the World Out There may or may not actually exist, it is what we all aspire to know.

Yet the social fact is that we live in a world not of agreement but of discord, of perceived realities as multitudinous as people. Some people – George Darden’s constituents – see creatures apparently from outer space. Some people believe they have seen Elvis since he died in 1977, and not all of those people are easy to dismiss as wackos. The experience of Hilda Weaver, a professional clinical psychologist, is so striking as to be worth quoting at length:

Now, you have to realize, I have no interest in things like ghosts or ESP. I had always been very narrow-minded about the human mind. I thought that stuff was all imagination or suggestibility.

This is what happened. I was in my office one evening, writing an article for a professional journal, and I looked up and Elvis Presley was sitting across from me, in the comfortable tan chair where my clients usually sit. As I realized who he was and sensed the overwhelming kindness that just lingered in the atmosphere around the man, I could tell that he thought that all was not well with me. . . .

He began to talk with me, to communicate. He said, “Are you satisfied with your life, Missy?” That question seemed to go to the very center of me, and I said, “You’re a better psychologist than I am, and you’ve never been to school.” As soon as I made that remark I felt embarrassed and ashamed of myself, as though I had been condescending. But he smiled, and instead of being awkward, he was completely warm and in tune. He said, “I’ve been to the best school.” And from the way he said it, I knew immediately he was right. After all, he had died, for heaven’s sake! What was I doing feeling so smart just because there was a piece of paper on my wall? I suddenly realized that I was in a realm where my Ph.D. was no longer a very good credential. In the past when someone was in that chair, I was the Ph.D. and could use that as a way to hide, to keep from facing myself. . . .

I began to cry, from deep within myself, and he understood immediately and he said, “Hilda, you must open up your perspective on what you are doing with your life.” Then we conversed for a while. Much of it was very personal, stuff that I’m not yet comfortable sharing with anyone else. And by the time it was over, I understood that there is much more to the mind and the human spirit than I had previously allowed, and that if I was going to be a full human being and be helpful to others, I had to realize this and let it affect me fully. I instinctively bowed my head and put my hands together, as in prayer. When I looked up again, he was gone. 3

What did she conclude? That Elvis was alive? That he was dead but that death isn’t really final? That she was dreaming? That she was mad? That is her problem, of course, but it is not only hers. What are the rest of us supposed to say? Most of us would probably say that she was deluded or hallucinating or dreaming. But how do we know? Wouldn’t it be fairer to say that Elvis might or might not be dead? Should the Internal Revenue Service go looking for Elvis in order to collect back taxes? One way or another, every society must have some prevailing standard for distinguishing between reality and illusion, between objective knowledge and personal belief. Just what standard is used matters enormously to somebody like Hilda Weaver, a clinical psychologist whose reputation and, indeed, sanity are on the line.

Moreover, the standard chosen has profound political implications. Any attempt to decide who is right will inevitably make some people winners and others losers. So we arrive back at the creationist question: what gives the pro-science people the right to declare themselves the winners?

And we arrive, also, at the problem that Plato was up against. Diversity of belief, thought, opinion, experience is a fact, like it or not. Harness it, and you have the engine that generates knowledge. But diversity of belief is also a dangerous social problem, because it makes conflict inevitable. How do you stop people from breaking up into little tribes, each with its own opinion, or from fighting each other to decide who is right? Differences of opinion can bring people to blows and push countries toward hostilities. (“More Japanese Deny Nation Was Aggressor During World War II; Spread of Revisionist View Irks Many Other Asians”) 4 Moreover, Plato believed that a good and stable society must base its decisions upon correct information and truthful principles; but, in a world abuzz with conflicting opinion, most people will necessarily not know or believe the truth. In fact, most of the time most people will be wrong, some of them dangerously so. What if they come to power? How do you ensure that truth prevails, and what do you do about the people who are not inclined to believe it? How do you bring countless millions of subjective realities to some kind of convergence? And if you cannot, whom do you believe?

How to manage conflict of belief is, I submit, a problem that every society must somehow solve. Upon the solution chosen depends, not only social peace and cohesion, but also the structure of our most important industry: the reality industry.

That industry is charged with producing true statements about the external world. Its mission is to tell us how things “really” are. Its millions of professionals work in laboratories and schools and think tanks and newsrooms all over the world. As a journalist, I happen to be a worker in the knowledge industry; the same can be said of most intellectuals, whether they are chemists or literary critics.

It always amazes me to see how little attention we pay to the knowledge-making business. I spend a lot of my time writing on economic policy issues, and I spend a lot of that time simply trying not to drown under the tide of papers and studies and newsletters and articles and books and speeches and programs and policies which wash across my desk and then on into oblivion. From Wall Street, from universities, from the government, from journalists—on and on they come. Capitalism, socialism, free trade, managed trade, planning, laissez faire—we are obsessed with debating how society ought to organize itself to create material product. We talk ceaselessly about questions like, Who (government, private sector, other) should decide what is the right level of investment or the ideal balance of trade? But we ignore questions like, Who should decide what kind of questions to ask, what kind of research to do? The imbalance is bizarre. True, we build fine buildings and invent prodigious machines and pile up dazzling wealth. But the greatest of all human products is our knowledge.

And knowledge is a product, like the metals we mine and the cars we build. To be more specific, our knowledge is a set of statements which we are satisfied are true—which have been validated, truth tested, in some satisfactory way. “The moon revolves around the earth.” “In 1492 Columbus sailed the ocean blue.” And so on. This is the product of the knowledge industry: a set of statements which have been found worthy and which we rely on.

So far, so good. But now we come to the difficult question, the question fudged by the passive-voice construction “which have been found worthy.” Found worthy by whom? Obviously, creationists and evolutionists will have different ideas about who should test beliefs about human creation, and how. Given that our experiences and conclusions will be different, what will be the test of truth? And who will administer it? There are countless decision-making strategies you could use. You could have everyone refer to a book or text of some kind. (“Just look it up.”) You could have an every-man-for-himself sort of arrangement. (“Truth is in the eye of the beholder.”) You could, in principle, settle disputed questions by having some specially appointed official flip a coin. (Although that sounds strange, such systems, suitably refined, have in fact been commonly used. Suppose you and your neighbor are arguing over whether or not next year’s crop will be good. Imagine going to a special temple in the center of town where a priest utters ritual words, casts a golden coin onto a decorated pavement, and then announces what the coin has said. That is known as an oracle.) You could simply have a taboo against discussing questions on which people disagree. (To some extent, every society uses that strategy: “I never discuss politics or religion.”) You could even have people vote on what is true. (Odd as that may sound, it is what a jury does, and it turns out not to be a bad arrangement, as far as it goes. For all legal and public purposes, if the jury votes that the accused did not commit the crime, the accused did not commit the crime.) You could do some or all of those things, and many others besides. But which?

Plato, living in the intellectual hurly-burly of ancient Athens, was hardly blind to this problem. He was well aware that the world was a riot of often conflicting opinions, most of which were wrong. Of the nonphilosophers, and also of most of the philosophers, he remarked, “Such men have opinions about all things, but know nothing of the things they opine” (479e). How could a nation full of false and conflicting opinions be held together?

That was not all: he was offended and alarmed by the cacophony around him, the din made by people who spoke loudly but wrongly. He has Socrates ask, “Do you think it right to speak as having knowledge about things one does not know?”— and the answer is emphatically no. For “opinions divorced from knowledge,” says Plato with disgust, “are ugly things” (506c). Woe unto the country where the truth is drowned out by the racket of false opinions, where the citizens “lose themselves and wander amid the multiplicities of multifarious things,” the jungles and quicksands of errors and misperceptions.

No surprise, then, that Plato set out to show how a just society would sort truth from error. The answer he hit upon is the one which, to many people then and now, seems the most obvious and righteous: he who best knows the truth will choose. The philosopher could best tell truth from falsehood and accordingly should administer the state. By definition, his decisions would be wisest and most truthful. Conflict would be settled fairly and effectively.

Plato was no crude fundamentalist: he did not, I think, believe that truth was obvious or immediately accessible even to the wise. He understood that the search for knowledge is never easy and often fails, and he took pains to warn his readers that such was the case by ending whole dialogues inconclusively, by having Socrates say that he has broken his head in argument “times without number,” and by stating flat-out that (for instance) there is always “plenty of room for doubt, when we even doubt whether we are asleep or awake.” 5 Why, then, did Plato follow the garden-variety true believers and fundamentalists into the morass of intellectual authoritarianism? One answer, and a fairly compelling one, is that he was a self-serving reactionary who wanted the job of philosopher-dictator for himself. Another interpretation, for those who are inclined to give him the benefit of the doubt, is that he saw no other way to resolve the difficult political problem of intellectual conflict. Plato the epistemologist understood that truth is elusive for all of us, but Plato the realist understood that some of us can come closer to it than others. In a conflict of opinion between Einstein and a fool, one wishes for Einstein to prevail. And in a conflict of opinion between Einstein and a thousand fools or a million, one wishes all the more for Einstein to prevail. Our Einsteins or Lincolns are one in a million, yet somehow our frail societies must unravel one tangled problem after another. If we are to succeed and prosper, then we are obliged to put truth-identifying power only in the hands of our very wisest citizens. And we must take extreme precautions to defend their power against less wise usurpers.

“One man is wiser than another and . . . the wiser man is the measure,” Plato says. 6 To each, then, according to his wisdom: appoint the extraordinary thinker as arbiter of truth. Plato’s logic stood dominant for two thousand years. At last it was upended by an innovation in social thinking which audaciously replaced extraordinary philosophers with ordinary critics— an innovation whose radicalism and brilliance were unsurpassed even by the inversions which replaced monarchs with electorates and feudal lords with entrepreneurs.

. . .

In the first half of the seventeenth century, when René Descartes set sail on the weird seas of philosophical skepticism, his project was radical in the truest sense of the word. “I shall proceed by setting aside all that admits of even the very slightest doubt, just as if I had convicted it of being absolutely false,” he wrote in his Meditations of 1641. He would peel away layers of possible deception until he arrived at one indubitable truth, which he would use as the base for a vaccine with which to kill a skeptical virus that was rampant in Europe. 7

One way to react to widespread disagreement is the way Plato reacted: by calling for the establishment of a rightful authority to settle conflicts. But that is not the only way. Another is to throw up your hands and say: “I don’t know and neither does anyone else. They’re all a bunch of arrogant jabberers.” In Washington, D.C., for instance, it’s common to meet people who take a they’re-all-full-of-it attitude in response to the endless inconclusive bickering of economists and public-policy experts. That is a skeptical reaction.

Skeptical doubters have been around since at least the days of Socrates himself and of Pyrrho of Elis (fourth century B.C.), who is supposed to have made it his aim to withhold judgment on all matters on which there were conflicting views, including the matter of whether anything was known. Skepticism typically flourishes in response to divisive and sometimes violent differences of opinion, as a way to short-circuit dangerous conflict. Ancient skepticism thrived in the medical community of Alexandria in reaction to the stubborn dogmatism of rival camps of doctors. In periods of consensus, skepticism simmers down, as it does also in periods when debate is quashed or circumscribed by political controls. Thus the skeptical schools of thought more or less disappeared behind the walls of the Church. But the walls were eventually broken, and intellectual crisis ensued. In the early sixteenth century Martin Luther declared that all Christians, not just the ones in authority, had the power of seeing and judging what is right or wrong in matters of faith. Well, if the Church did not have the sole authority to identify truth, and if people disagreed in their conclusions (as of course they did), just how was anyone supposed to know which beliefs were the right ones? What was the rule for separating reality from illusion? Who should be believed? As Plato had understood almost two millennia earlier, the problem of knowledge could tear society to shreds, and indeed, as Catholics and Protestants bloodied each other in battles across Europe, it did so.

No surprise, then, that at about that time the ancient skeptics were rediscovered. Amid the bickering and fighting they exerted a strong appeal. 8 Skepticism cropped up in the academies and reached a new pinnacle with Michel de Montaigne. “For this is a very true presupposition,” he remarked with unconcealed exasperation, “that men are in agreement about nothing, I mean even the most gifted and ablest scholars, not even that the sky is over our head.” Perhaps more brilliantly and ruthlessly than anyone before or since, Montaigne argued in 1577 that for man to attain knowledge was hopeless. Our judgment may lead us astray. “The slightest things in the world whirl it around.” And further, “As for the error and uncertainty of the operations of the senses, each man can furnish himself with as many examples as he pleases, so ordinary are the mistakes and deceptions that they offer us.” As for belief, in the past we have been wrong while believing we were right, and so sureness is no guarantee of anything. “Not that it is impossible that some true knowledge may dwell in us; but if it does, it does so by accident. And since by the same road, the same manner and process, errors are received into our soul, it has no way to distinguish them or to pick out truth from falsehood.” 9 Montaigne’s arguments were impeccable, but his escape from them was not. The only answer, he concluded, is for man to give up any hope of finding truth on his own and to rely upon God to reveal it to him. That, of course, was no answer at all, because the whole problem in the first place was how to be sure who spoke truly for God. With Montaigne’s having destroyed certainty without providing anything to replace it, the condition of knowledge seemed desperate.

Here Descartes intervened. He searched until he found one proposition which was clearly beyond doubt: that he thought and thus knew he existed. He was certain of that because it was clear and distinct to him. Equally clear and distinct, and so equally certain, was his knowledge of God and of God’s benevolence. A benevolent God would not deceive us. And so, we may conclude, that which is clear and distinct is not deceptive but certain. Therefore, we can and do, after all, have certain knowledge of the world.

The reasoning was ingenious, but it failed— one of the most fertile intellectual failures in all history. Descartes made a leap to which he was not entitled: his awareness of his own thinking did not give him the right to claim any certainty about God’s objective existence, or about anything else apart from himself. Moreover, if the senses and the process of thinking could both, on occasion, be deceptive, then to say that a proposition is clear and distinct is no guarantee of anything. Subsequent thinkers were quick to see those problems and others. Descartes nonetheless achieved an important advance, not with his conclusion, but with his method. Systematic criticism was the key.

Thus began the skeptical revolution. Skeptical reasoners marched straight down the road opened by Descartes. At last in 1739 David Hume, the brilliant twenty-eight-year-old enfant terrible of modern philosophy, came along with his bulldozer and made a ruin of the last pillars of certainty about the external world. Induction— generalizing from past to future, from known to unknown— is nothing more than an act of faith, Hume said. He made a devastating argument: How can I know that the past is of any value at all as a guide to the future? I can answer, “Heretofore it always has been.” But I cannot use that fact as a guide to the future without assuming what I set out to establish, namely that the past is a good guide to the future. Hume said, “We have no reason to draw any inference concerning any object beyond those of which we have had experience.” Not only that: no experience which we do have can tell us anything directly about the world as it exists, or may exist, independent of ourselves. “Let us fix our attention out of ourselves as much as possible: Let us chase our imagination to the heavens, or to the utmost limits of the universe; we never really advance a step beyond ourselves.” 10

Knowledge has not been the same since. Hume demolished the logical underpinnings of all naive claims, and most sophisticated claims, that we can have any certain knowledge whatever of the objective world— the world as it “really” is, independent of human perception or misperception. And if we cannot have certainty, then what is to distinguish one man’s belief as better than another’s? As Montaigne himself had put the problem: “Either we judge absolutely, or we absolutely cannot.” The skeptical crisis of Montaigne’s time now seemed to have given way to an abyss still deeper than before, with two dreadful consequences: first, the replacing of old dogmas with a new and utterly negative one, namely that we cannot have any knowledge whatever; second, a consequent paralysis of all intellectual industry.

“Seemed,” however, is the operative word. What was really going on was more subtle and interesting. A new social ethic was being born.

In its most peculiar and extreme philosophical form, skepticism refers to the doctrine that we have no reason to believe anything, and so should believe nothing. That, however, is on its face an unsustainable argument. Believing nothing is impossible. Even the belief that you are justified in believing nothing is a belief. And even when we refuse to conclude, we do so only against the background of other conclusions. No one could possibly be a genuinely beliefless skeptic, even in principle.

The “skepticism” upon which liberal science is based is something quite different. (To distinguish it from the kind which says that we should never conclude anything, philosophers often call it “fallibilism.”) This kind of skepticism says cheerfully that we have to draw conclusions, but that we may regard none of our conclusions as being beyond any further scrutiny or change. “Go ahead and conclude whatever you want; just remember that all of your conclusions, every single one of them, may need to be corrected.” This attitude does not require you to renounce knowledge. It requires you only to renounce certainty, which is not the same thing. In other words, your knowledge is always tentative and subject to correction. At the bottom of this kind of skepticism is a simple proposition: we must all take seriously the idea that any and all of us might, at any time, be wrong.

Taking seriously the idea that we might be wrong is not exactly a dogma. It is, rather, an intellectual style, an attitude or ethic. What is to be said for this ethic? Not much, on its face. It is not provable. There is nothing especially rational (or irrational) about it. It is not an intellectually neutral view of the world or a view that rises above faith, since it is a kind of faith— faith in the belief that we are all fallible. “Why, doubt itself is a decision of the widest practical reach,” William James rightly said. “The coil is about us, struggle as we may. The only escape from faith is mental nullity.” 11 One cannot overstress this point, although often no amount of emphasis seems to drive it home: to adopt the attitude that you can never be completely sure you are right is a decision, a positive step— not a void where commitment should be, but a kind of commitment (to taking seriously that one might be wrong). If you are not inclined to doubt, you never even reach skepticism— it is simply not an issue; you simply believe without asking questions.

What, then, is so important about the emergence, eventually the triumph, of the skeptical ethic? The answer is this: Hidden in the pages of the skeptical philosophers’ tomes is a radical social principle. It is the principle of public criticism.

When people accept the notion that none of us is completely immune from error, they also implicitly accept that no person, no matter who he is or how strongly he believes, is above possible correction. If at any moment I can be wrong and you can be wrong and so can everybody else, all without being aware of it, then none of us can claim to have finally settled any dispute about the state of the external world. No one, therefore, is above critical scrutiny, nor is any belief.

The result is this: A society which has accepted skeptical principles will accept that sincere criticism is always legitimate. In other words, if any belief may be wrong, then no one can legitimately claim to have ended any discussion— ever.

In other words: No one gets the final say.

Another conclusion also follows. If any person may be in error, then no one can legitimately claim to be above being checked by others— ever. Moreover, if anyone may be in error, no one can legitimately claim to have any unique or personal powers to decide who is right and who is wrong.

In other words: No one has personal authority.

Here is a result which Socrates would have relished and which Plato— the Plato of The Republic— fought with every resource of his genius. Here is error enthroned as inevitable and inescapable, sitting in state above the philosopher-king no less than above the ignorant laborer or the cynical sophist. In most human societies for most of history, the search for knowledge had always been anchored by some propositions or some authorities— a Bible or other texts, priests or philosopher-kings or other persons— which were believed to be reliable and beyond error, and which therefore were not open to serious questioning. With the skeptical revolution, the anchor was sawed off. Nothing would be out of bounds for critical scrutiny. No one would be entitled to declare what was true knowledge and what was false opinion.

And here is where one might naturally think we are in trouble. If we may all be wrong, how are we ever to decide who is right? Why did the skeptical fires not leave society in disarray, unable to believe anything, as seemed to happen during the skeptical crisis of Montaigne’s day? The answer is: because the fires cleared the ground for a new and extraordinarily powerful game— the game of liberal science.

. . .

We turn, then, to the revolution proper: a political revolution of the first importance. Now, when I talk about the skeptical revolution, the philosophical one of Descartes and Hume and the others is not mainly the one I mean. What Hume and the philosophers— the theorists of knowledge— were doing was radical and important. But their adventure was an outgrowth of broader changes in the intellectual climate of the day. Even as the theorists were busy showing that certain knowledge is impossible, the scientists and scholars of the Enlightenment were showing that uncertain knowledge is possible.

That process was already under way ten years after Descartes died. The physicist Freeman Dyson wrote:

The Royal Society of London in 1660 proudly took as its motto the phrase Nullius in Verba, meaning “No man’s word shall be final.” The assertion of papal infallibility, even in questions of faith and morals having nothing to do with science, grates harshly upon a scientist’s ear. We scientists are by training and temperament jealous of our freedom. We do not in principle allow any statement whatever to be immune from doubt. 12

Liberal science is a big and complicated thing. No one could begin to describe it fully. However, with nullius in verba we have reached one of the two great foundation stones of the liberal intellectual system.

I contend that these peculiar rules are two of the most successful social conventions which the human species has ever evolved. Put them into effect, and you have laid the groundwork for a knowledge-producing and dispute-resolving system that beats all competitors hands down. They are the basis of liberal inquiry and of science. Everything that follows in this essay is ultimately an attempt to defend them, and the attacks of the creationists and humanitarians and others are ultimately attempts to undermine them.

First, the skeptical rule. If people follow it, then no idea, however wise and insightful its proponent, can ever have any claim to be exempt from criticism by anyone, no matter how stupid and grubby-minded the critic. The skeptical rule is,

No one gets the final say: you may claim that a statement is established as knowledge only if it can be debunked, in principle, and only insofar as it withstands attempts to debunk it.

This is, more or less, what the great twentieth-century philosopher of science Karl R. Popper and his followers have called the principle of falsifiability. Science is distinctive, not because it proves true statements, but because it seeks systematically to disprove (falsify) false ones. In practice, of course, it is sometimes hard, if not impossible, to say whether a given statement is falsifiable or not. But what counts is the way the rule directs us to try to act. In principle, if you do not try to check ideas by trying to debunk them, then you are not practicing science. You are entitled to claim that a statement is objectively true only insofar as it is both checkable and has stood up to checking, and not otherwise. Decisions about what is and is not true are always provisional, standing only until debunked.

Second, the empirical rule. If people follow it in deciding who is right and who is wrong, then no one gets special say simply on the basis of who he happens to be. The empirical rule is,

No one has personal authority: you may claim that a statement has been established as knowledge only insofar as the method used to check it gives the same result regardless of the identity of the checker, and regardless of the source of the statement.


In other words, whatever you do to check a proposition must be something that anyone can do, at least in principle, and get the same result. Who you are doesn’t count; the rules apply to everybody, regardless of identity. A test is valid only insofar as it works for anyone who tries it. Where different checkers (debunkers) get different results, no one’s result supersedes anyone else’s, and no result can be declared. The test remains inconclusive. (It is important to note that “no personal authority” says nothing against expertise. It only says that no one, expert or amateur, gets to claim special authority simply because of who he happens to be or what he is saying. Whatever you do to become an expert must thus be something that others also could do. You may have a Ph.D., but I could get one. The views of experts, no less than those of laymen, are expected to withstand checking.)

Those two rules define a decision-making system which people can agree to use to figure out whose opinions are worth believing. Under this system, you can do anything you wish to test a statement, as long as you follow the rules, which effectively say:

• The system may not fix the outcome in advance or for good (no final say).

• The system may not distinguish between participants (no personal authority).

The rules establish, if you will, a game— like chess or baseball. And this particular game has the two distinctive characteristics that define a liberal game: if you play it, you can’t set the outcome in advance, and you can’t exempt any player from the rules, no matter who he happens to be.

Game-playing is a good way to make touchy social decisions systematically. Suppose a group needs a leader. It could use a game with the following rules. Rule 1: each member of the group gets one vote in each round of vote-casting. Rule 2: whoever gets the least number of votes in each round of vote-casting is out of the game. Rule 3: the last remaining vote-getter is the group’s legitimate leader till the next vote. Thus the liberal game of voting.
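
For readers who like to see a procedure written out, here is a minimal sketch in Python of the voting game just described. It is illustrative only: the candidate names, the assumption that each member votes for the highest-ranked surviving name on a private preference list, and the arbitrary tie-breaking are mine, not Rauch’s.

```python
from collections import Counter

def elect_leader(preferences):
    """Elimination voting: each member casts one vote per round (Rule 1),
    a lowest vote-getter drops out each round (Rule 2), and the last
    survivor is the group's legitimate leader till the next vote (Rule 3)."""
    candidates = {name for ranking in preferences for name in ranking}
    while len(candidates) > 1:
        # Each member votes for their highest-ranked surviving candidate
        # (an assumption; Rauch's rules do not say how members choose).
        tally = Counter(next(name for name in ranking if name in candidates)
                        for ranking in preferences)
        fewest = min(tally.get(c, 0) for c in candidates)
        # Drop one candidate with the fewest votes; ties broken arbitrarily.
        loser = next(c for c in candidates if tally.get(c, 0) == fewest)
        candidates.discard(loser)
    return candidates.pop()

# Five members, three hypothetical candidates:
print(elect_leader([["Ann", "Bo"], ["Ann", "Cy"], ["Bo", "Cy"],
                    ["Cy", "Bo"], ["Bo", "Ann"]]))  # -> Bo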

Suppose a group needs to decide which of several conflicting ideas is right. Again, a game. First, each school of thought places its opinion before the group. Second, friends and enemies of the ideas begin testing and criticizing, poking and prodding, checking and cross-checking. To check, players can do all kinds of things. Their tests can include real experiment, thought experiment, plausibility, simplicity, generality, utility, logical consistency, beauty— always understanding, however, that whatever test they use has to be a test that I or anyone else also can use, at least in principle (no personal authority). If, for you, a theory passes the test of experiment or beauty, then it must do the same for me and for others, or else the theory has not checked out conclusively. Third, everyone is entitled to modify one of the original ideas or to suggest a new one. Fourth, the opinion which emerges as the survivor is the winner— only, however, for as long as it continues to survive (no final say). Thus the liberal game of science.

Whenever you and others agree to follow those rules, there are a million things you might do to investigate reality— but whatever you do will look a lot like science. One way or another, you will wind up with a system in which anyone is entitled to check (criticize) anyone, and no one is immune from being checked by anyone else; in which people argue from the basis of statements that have checked out so far; in which people look for tests that anyone can perform, and claim a strong result only where there is strong independent agreement; and in which no one’s experience or conclusion is supposed to get special weight by dint of who he happens to be.

The game of science is not just for “scientists.” It encompasses the defining ethic of the whole vast culture of critical, liberal inquiry. A while ago I went to hear a foreign-relations expert in Washington talk about Soviet behavior under Gorbachev. He gave his view and then announced, “That’s a hypothesis, and I would be willing to test it in discussion.” He was playing the science game. Even journalists are trained to respect the liberal rules and obey them as far as possible. We are supposed to reject anybody’s claims (including our own) to having the ultimate truth, and we try to learn how to write as though Everyman, a reasonable anybody, were standing in our shoes. An old newsroom dictum goes, “If someone says your mother loves you, check it.” We journalists are not scientists, exactly, but we certainly try to play by the rules of the science game. When, that is, we are doing our job.

. . .

The skeptical revolution was gradual and nonviolent; it was fomented not by a few noisy activists but through the evolving everyday practices of thousands of intellectuals, moving as best they could from one decision about the world to the next. Its radicalism is thus easy to miss. Besides, science has a genius for looking sober and conservative; and in many ways, especially in the face it presents to the public (and the way it usually sees itself), it is sober and conservative. But in a deeper sense it is quite probably the most radical endeavor ever embarked on by mankind— radical in two ways.

First, it has completely abolished inerrancy. “There is nothing like absolute certainty in the whole field of our knowledge,” writes Popper. 13 Before the revolution Montaigne could declare, “Either we judge absolutely, or we absolutely cannot.” Afterwards, his formula stood on its head: if we judge absolutely, we absolutely do not. Knowledge must be debunkable and stands only until it is debunked. In a liberal scientific society, to claim that you are above error is the height of irresponsibility. Always we must hunt for error. Many of the best thinkers take that injunction— a moral duty, really— quite seriously. Stephen W. Hawking, the physicist, tells a charming story of how he bet that his own theory of black holes was wrong. “This is a form of insurance policy for me,” he says. “I have done a lot of work on black holes, and it would all be wasted if it turned out that black holes do not exist. But in that case, I would have the consolation of winning my bet, which would bring me four years of the magazine Private Eye.” 14 To the law of fallibility, the law of no final say, there are no exceptions. That is why liberals, whether religious or not, cringe in revulsion at the self-inflation of preachers and priest-dictators who claim certainty for their every whim. Those people have the gall to exempt themselves from the duty of fallibility. They have the gall to claim the last word and proclaim that criticism is unnecessary, as Plato did when he claimed for his philosopher special access to knowledge of the immutable truths laid up in heaven.

Radical, too, in another way— breathtakingly so. Today we take empiricism almost completely for granted. We forget that a philosopher like Plato, who held that only the wise philosopher could hope for knowledge of things as they really are, would have been horrified by our widespread acceptance of the empirical rule (no personal authority). For that rule has opened up the entirety of human knowledge to scrutiny by anyone and everyone. In principle, a beggar, a dockworker, an obscure patent examiner (Einstein, by name) could overturn the laws of Newton himself. The empirical rule has made knowledge public property and thrown the philosopher-king out the window as a fraud and a shaman.

The point bears explaining.

A lot of people think that what is unique about science is its empiricism: it relies on experience to confirm or throw out statements about reality. Of course, it does do that. But empiricism in that sense is hardly unique to science. All human beings make up their minds by referring to experience. It was the experience of seeing a light from heaven and hearing a voice ask, “Saul, Saul, why persecutest thou me?” that won Paul over to Christianity. The question which matters is not “Do you rely on experience to make up your mind about objective statements?” It is “Whose experience do you rely on?” This is where the empirical rule produces its unique answer: only the experience of no one in particular.

The empirical rule says, “You may claim that a statement has been established as knowledge only insofar as the method used to check it gives the same result regardless of the identity of the checker, and regardless of the source of the statement.” In other words, in checking— deciding what is worth believing— particular persons are interchangeable.

Interchangeability of persons (we all play by the same rules) is a hallmark of liberal social philosophy. Kant declared that an action can be right for one person only if it is right for any and all, and so codified the liberal standard of justice. The empiricists declared that a statement can be true for one person only if it is true for any and all, and so codified the liberal standard for knowledge.

This is a point which has been missed again and again: scientific empiricism is a social philosophy. If the empiricists had said, “We must make our judgments by relying on experience— specifically, the experience of the pope,” then they would have contributed nothing original. Pharaoh saw seven famished cows devour seven fat ones; it was by reference to that experience that he set his agricultural policy. But the experience was strictly private; it was dream experience (which, while it is happening, is of course sometimes just as real seeming as the waking kind). If Joseph had been an empiricist, he would have told Pharaoh that the dream experience, however real to Pharaoh, could not be shown to others and therefore did not count. Only public (or potentially public) experience counts.

For example, suppose Smith, Jones, and Brown all want to know what the temperature is outside. They refer to the thermometer. But suppose the thermometer they refer to is strange. Smith looks at it and gets a reading of 76 degrees; Jones gets a reading of 31 degrees; Brown gets 103. One approach would be for each to claim he had the right answer and regard the other two as fools— an eminently human solution. But if they follow the empirical rule (the test must give the same result regardless of the identity of the tester), they will scratch their heads and go off in search of a better test. In fact, they will probably conclude that the “thermometer” was actually telling them something about themselves, much as a blood-pressure gauge might, rather than about the world outside themselves.
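
The empirical rule can even be written down as a procedure. Below is a minimal sketch using the readings from the passage; the one-degree tolerance for what counts as “the same result” is an assumption added for illustration.

```python
def check(readings, tolerance=1.0):
    """Declare a result only if the test gives (nearly) the same answer
    regardless of who performs it; otherwise declare nothing."""
    values = list(readings.values())
    if max(values) - min(values) <= tolerance:
        return sum(values) / len(values)  # checkers agree: a result stands
    return None  # checkers disagree: no reading supersedes any other

strange_thermometer = {"Smith": 76.0, "Jones": 31.0, "Brown": 103.0}
result = check(strange_thermometer)
print(result if result is not None else "inconclusive; find a better test")
```

Whatever Smith, Jones, and Brown actually saw, the procedure refuses to promote any one reading to “the temperature”; it simply reports the test as inconclusive.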

This is not just an academic exercise. Think about the woman who met Elvis. A friend of mine says his mother saw Jesus descend amid a shower of golden light in the Dome of the Rock in Jerusalem. Another of my friends saw an enormous, cigar-shaped UFO hover low above the trees one night and then zip away at hyperspeed. On August 16, 1988, the Houston Chronicle reported:

LUBBOCK— Worshipers screamed and lifted their hands toward the sky as a ray of light burst through the clouds Monday night during an outdoor Mass where thousands came expecting a miracle.

Shortly after the 6 p.m. Mass began at St. John Neumann Catholic Church, throngs of pilgrims stood and applauded as many spectators pointed skyward, crying that they saw Jesus and the Virgin Mary, and calling it a miracle.

“I saw the sun pulsating a lot and saw Jesus about 10 times,” said Mamie Fertitta. “Then I saw Jesus above and the doves below.”

A dozen priests standing on a rooftop altar and 600 Eucharist ministers turned their backs to the crowd to look at the sky and wave. After minutes of silence, St. John Neumann pastor Monsignor Joseph James began to sing Amazing Grace.

As the clouds moved across the sky, members of the audience screamed, “See her! See her!”

People in the audience whipped out cameras to photograph the clouds and light.

“I saw baby Jesus for an instant in the sky,” said Koreth Vargahese of Houston.

One woman was treated by paramedics after she began screaming.

Those people saw something, and no doubt their conclusions about their experience were sincere. But the gatekeepers of establishment objectivity did not admit the appearance of Jesus and doves into objective reality, because the rules require that everybody (or nearly everybody) in the crowd see the same phenomenon, not just some phenomenon. Likewise for any explanation: it has to work for anybody, never mind who. “Fact” is not anybody’s experience; it states the experience of no one in particular. When the police detective says, “Just the facts please, ma’am,” he is asking, What would I have seen— what would anyone have seen, what would no one in particular have seen— at the scene of the crime?

By definition, then, if we take the empirical rule (no personal authority) seriously, revelation cannot be the basis for fact, because it is not publicly available. Similarly, attempts to claim a special kind of experience or checking for any particular person or kind of person— male or female, black or white, tall or short— are strictly illicit. After a woman was raped by a gang of teenagers in New York City, the Reverend Al Sharpton said that there was no proof that a rape had occurred, because the victim was being attended by white doctors. In other words, white checkers’ findings do not count. That is illicit; if you make different rules for black and white checkers, you are not doing science. Paranormalists who claim to have verified psychic phenomena often rely upon single experiments; later, when some other investigator fails to find the claimed effect, they reply (for instance) that the necessary psychic energy was blocked by the presence of a skeptic. That also is illicit; if the way you are checking works only for people with a sympathetic attitude, or if your results are not replicable by others in a reasonably regular fashion, you are not doing science. The same applies to Christian Scientists and others who believe in faith healing but say that attempts to check it work only for the faithful. Believers in miracles argue that miraculous events can be witnessed and understood properly only by those to whom God chooses to reveal himself. That also is illicit. If the way you are seeing and explaining works only for the religious, you are breaking the rules. And Plato’s regime of philosopher-rulers who have special access to truth? Prohibited, renounced, condemned.

. . .

Well, so what?

“One man’s experience is nothing if it stands alone,” the great American philosopher Charles Sanders Peirce wrote a century ago. “If he sees what others cannot, we call it hallucination. It is not ‘my’ experience but ‘our’ experience that has to be thought of; and this ‘us’ has indefinite possibilities.” 15

Outside a small circle of cognoscenti, Peirce’s lot has been a tragic and undeserved obscurity. Yet no one better understood the social implications of science’s liberal ideal of objectivity.

Unless truth be recognized as public— as that of which any person would come to be convinced if he carried his inquiry, his sincere search for immovable belief, far enough— then there will be nothing to prevent each one of us from adopting an utterly futile belief of his own which all the rest will disbelieve. Each one will set himself up as a little prophet; that is, a little “crank,” a half-witted victim of his own narrowness. 16

In that crystalline statement Peirce penetrated to the heart of the issue as no one else has. And he hints at the honest answer to the next question— the crucial political question.

It is impossible to show that either the skeptical rule or the empirical rule is “true” in any grand or final sense. There is no way I could “prove” that the pope, say, has no final say; I can only say that, in a liberal intellectual regime, papal infallibility is strictly illicit, even immoral. There is similarly no way I could “disprove” feminist empiricism, which argues that, because men’s vision of reality has been distorted by their dominant position in society, “women (or feminists, whether men or women) as a group are more likely to produce unbiased and objective results than are men (or nonfeminists) as a group.” 17 I can only say that the rules should deny respectability to anyone’s claim that some particular kind of person is favored with especially undistorted insight. That being the case, the creationist or UFO-watcher or minority separatist or whoever can go off and play his own game. As he walks away he leaves his challenge behind: “Who gave you the right to set the rules? Why is your ‘science game,’ with its rules built by comfortable, secular, European males, the only game in town— especially if it hurts and excludes people?”
