Radical Pedagogy (2006)
Collective Self-Examination: Thinking Critically about Critical Thinking
Patrick S. O’Donnell
Department of Philosophy
Santa Barbara City College
Critical thinking pedagogy was designed for adoption across the social sciences and humanities and was not intended to be a proto-logic or introductory logic class for non-philosophy majors. Indeed, its original rationale was in part motivated by the failure of formal logic to provide appropriate means of argument analysis that students could readily apply to the myriad fora of reasoning both inside and outside the academy. An argument can be made from within philosophy itself as to the limitations of this formalist approach to critical thinking, an approach bereft of any acquaintance with psychology and lacking the very real virtues of a rhetorical tradition that goes back to both Plato and Aristotle. Moreover, this formalist orientation, while leaning towards scientism, fails to give sufficient weight to the significance of informal logic, the emotions, presumptive or pragmatic epistemology, and analogical and metaphorical reasoning. Typical textbooks in the field may pay lip service to some of these topics, but generally speaking the stress is on the formal side of the equation. Those outside philosophy proper should resist any attempt by some professional philosophers to colonize the curriculum in a manner that condescendingly conceptualizes desiderata of rationality along the lines of “logic made simple for the masses” (the implication: if one really wants to think critically, then one should be taking courses in logic). Introductory logic is just that: introductory logic, and for philosophy students. Critical thinking has its own transdisciplinary curriculum, a pedagogical approach with philosophical integrity that nevertheless does not fawn over formal logic. In fact, this curriculum is capable of demonstrating a fidelity to the deepest means and ends of the philosophical enterprise.
Key words: critical thinking; rationality; formal logic; informal logic; rhetoric; scientism; emotions; presumptive reasoning; psychology; analogical and metaphorical reasoning.
A rough draft of this essay was circulated at my college to offer relatively “cool” reflections on what had become an acrimonious turf battle between the departments of Philosophy and English. The debate was a product, in the first instance, of budgetary constraints that left both departments competing for funds with which to teach courses on critical thinking. I am not at all bemoaning budgetary constraints, for they can have salutary consequences for knowledge production and distribution (or social epistemology: cf. Fuller, 1993, pp. 51-52). In our case, philosophers tended to put their colleagues in English on the defensive by citing, for example, the absence of formal logical material in the latter’s curriculum. Alas, some philosophers used this instance of disciplinary border engagement to fortify disciplinary boundaries (despite both departments being housed in the ‘Interdisciplinary Center’ building).
The original essay had three goals: 1) to defend the ability of English instructors to teach critical thinking, 2) to view this as an opportunity for constructive disciplinary engagement and (figuratively if not literally) cognitive trading, and 3) to inform my fellow philosophy instructors that not only was our professional arrogance—if not hubris—ill-mannered, but that our students would be better served by collective self-examination, i.e., by looking within at our curriculum, at our pedagogical assumptions and methods, rather than worrying about whether or not English instructors can recite Aristotelian syllogisms or construct truth tables. The English curriculum is well disposed, when not intentionally crafted, to highlight the importance of such topics as rhetoric, analogy and metaphor, the emotions, and questions of human psychology, all of which deserve deeper attention from critical thinking theorists. Everything is fair game for philosophy, and the above topics have been found on the philosopher’s agenda at one time or another, however much they have suffered in comparison with other subjects, fads or fixations.
This paper puts aside the first goal of the rough draft while remaining motivated by the second and third goals. Let me explain: I came to notice a wide disparity among the critical thinking texts when it came to the inclusion of formal logical materials: the rare text accorded very little space to deductive logic, while others looked like pale imitations of books used in introductory logic courses in philosophy. My department, perhaps attempting to further distance itself from the approach to critical thinking found in the English department, used not only the latter sort of text, but in several classes relied on a standard and well-known introductory logic text by Copi and Cohen (2002), despite the fact that Copi (1995) is also the co-author of an introductory text on informal logic. The critical thinking course text used by our counterparts in philosophy at a nearby university is likewise an introductory logic book (by Hurley, 2002).
While I can hardly generalize from evidence that is at best anecdotal, I think the clear preference for one sort of text over another arises from pedagogical theory and assumptions about critical thinking prompted in some measure by the professional training that takes place within philosophy proper, a disciplinary socialization in which formal logic plays a basic if not central part. One by-product or default effect of this professional training in philosophy is the misplaced canonical significance accorded formal logic in the critical thinking curriculum by those clinging to what has been called the “classical model of rationality” (Brown, 1988) but what is, at least historically speaking, more modern than classical (see Toulmin, 1990, 2001).
Meta-philosophical reflection upon the nature of philosophy reveals the fact that philosophy can be (and has been) understood and practiced in a variety of ways, as evidenced by or discussed in the work of Wittgenstein (1958), Nussbaum (1994), Rorty (1989a, 1989b), Warner (1989), Gonzalez (1998), Stroll (1998), and Hadot (2002), for instance, and embodied in often quite disparate traditions: existentialism (Cooper, 1999), the six orthodox Indian darśanas (‘viewpoints’ or systems: King, 1999; Matilal, 1986; and Smart, 1992), Islamic philosophy (Leaman, 2002; Nasr & Leaman, 1996), Buddhism (Collins, 1982; Hopkins, 1996; and Kapstein, 2001), and classical Chinese worldviews (Graham, 1989; Hall & Ames, 1987; and Hansen, 1992). Such meta-philosophical reflection can lead to an appreciation of the proper work of reason, including its possible circumscription or limitations. It might help us as well appreciate the various forms of reasoning central to critical thinking.
Put differently, global and historical meta-philosophical reflection helps us appreciate the manner in which reason is “embedded, articulated and manifested in culturally specific ways” (Ganeri, 2001), the manner in which the “forms of rationality” are “interculturally available even if they are not always interculturally instantiated” (pp. 1-3). As Ganeri (2001) notes, some paradigms of rationality do not respect the oft-cited geo-historical division between East and West (moreover, this division was not always as hard and fast as scholarly consensus would have it: see McEvilley, 2002), notably the instrumentalist and epistemic conceptions, while “others, for instance the Jaina notion of a rationality of reconciliation, or the modeling of reason by game-theory, are found in one but not the other [culture]” (p. 3). Critical thinking pedagogues might benefit from a more intimate acquaintance with the global variety and conceptions or forms of philosophy and rationality that transcend prevailing parochial or provincial (and often neo-colonial) formulations of philosophy and rationality untroubled by ignorance of the world history of philosophy (cf. Phillips, 1995b; Kupperman, 1999; Leaman, 1998). One consequence of such acquaintance may be an examination of the central role accorded formal logic and algorithmic reasoning by some critical thinking pedagogues enamored of a certain picture of what it means to philosophize. Perhaps there are good reasons that nothing quite like formal logic has pride of place in these non-Western philosophical traditions. The potential benefits are not confined to critical thinking pedagogues: for it is “high time that those of us whose primary philosophical orientation is to the Anglo-American or continental traditions learned a lot more about the character and riches of other traditions” (Leaman, 1999, p. 15). 
And of course there is nothing intrinsic to an analytic approach to philosophy that precludes learning about other philosophical traditions.
The positive part of my critique offers subject matter: rhetoric, informal logic (specifically, the so-called informal fallacies and presumptive reasoning in dialogue contexts), philosophical reflection on the emotions, psychology, analogy and metaphor that, I think, should displace (a ‘crowding out’ effect) much of the formal logical material in not a few critical thinking texts. I will not discuss the material (e.g., on language, definitions, categorization, etc.) in the standard texts—assuming such a thing exists—I find essential or useful and, therefore, should not be displaced. Nor do I have any concrete suggestions as to precisely how this new material might take textbook form. It will be enough if some members of the critical thinking community are persuaded by the importance of what follows.
I – Formal Logic, Rationality & Scientism
The critical thinking curriculum needs to distance itself further from the preoccupations of logicians, thereby giving less attention, say, to propositional or predicate logic or simply deductive reasoning, or the “reasoning by rules” Harold I. Brown (1988) finds essential to what he calls (a bit misleadingly) the “classical model of rationality.” My immediate concern is not with the question of how one (presuming one can) goes about attempting to justify rules of formal logic, an important question best skirted when one is learning the rules. In fact, as Brown says, “we cannot accept the validity of a rule of logic without justification, but it is not at all clear where that justification is to be found” (p. 73). What does concern me is what John Dewey aptly dubbed the “quest for certainty” in the history of philosophy, a quest canalized within rules of formal logic that are, in turn, understood as the quintessence—necessary and sufficient condition—of rationality. Brown cites Leibniz to exemplify this quest for certainty. As he explains, the quest for a set of rules that enables us to decide how to act or what to believe was perhaps most ambitiously expressed by Leibniz,
who envisioned a ‘universal characteristic,’ i.e., a symbolism analogous to arithmetic which ‘will reduce all questions to numbers, and thus present a sort of statics by virtue of which rational evidence may be weighed.’ If such characteristic were available, it would allow us to settle philosophical and religious disputes, questions of war and peace, and all other human difficulties ‘with the clarity and certainty which was hitherto possible only in arithmetic.’ Such a technique is not now at hand, and perhaps Leibniz was a bit too optimistic about its prospects, but we can see why he would consider it desirable (p. 37).
Brown proffers an alternative to the classical model of rationality based on an Aristotelian-inspired notion of judgment (p. 139) that I will not directly address. Suffice it to say, a model of rationality can slough off the quest for certainty while retaining rules or (especially) principles, and still make room for the role of Aristotelian-inspired judgment (see Siegel, 2004). We might see the classical model of rationality and the quest for certainty as decisively undermined in the narrative of modernity by the cumulative impact of the later Wittgenstein, the Pragmatists, Gödel’s incompleteness theorem(s) in mathematics, and the quantum revolution in physics. But this does not at the same time entail anything like the “end of philosophy,” absolute skepticism, epistemology as (solely) hermeneutics, subjectivism, relativism or any of the catalogue of sins associated with the more puerile forms of postmodernism.
Let’s say it loudly and clearly: formal logic and deductive reasoning are not the heart of critical thinking. To be sure, philosophers and logicians will acknowledge other forms of reasoning: inductive, abductive and analogical, but these forms are usually measured against the standard of deduction, a standard that is irrelevant to the practical assessment of the latter in general, and decision making and judgment in particular. In a recent survey article on critical thinking, Bailin and Siegel (2003) remark that “Deductive reasoning represents only a narrow subset of rational thinking; the latter also encompasses (at least) inductive, probabilistic, analogical and abductive (‘inference to the best explanation’)” (p. 191). Even this may not be altogether accurate if, with Harman and Kulkarni (2005), we believe that “deductive logic is a theory of what follows from what, not a theory of reasoning:”
It is a category mistake to treat deduction and induction as two versions of the same category, because deduction and induction are of very different categories. Deductive arguments are abstract structures of propositions. Inductive reasoning is a process of change in view. There are deductive arguments in the sense of reasoning about deductions. There is inductive reasoning, but it is a category mistake to speak of inductive logic (pp. 5-6).
The contention that a critical thinking course should not be construed on the order of an introductory logic class for non-philosophy majors does not depend for its merits on the latest intellectual import from France. There’s no small irony in the fact that a fair number of critical thinking texts devote a large amount of space to formal logic, especially when one considers that much of the original inspiration for the critical thinking curriculum followed in the pedagogical wake of teaching formal logic as the primary means to propagate rational thinking and as the favored tool with which to evaluate the motley arguments that appear in myriad practical reasoning contexts (i.e., in which it seems silly to speak of sci-fi thought experiments like brains in vats or violinists plugged into our vital organs). So my concerns are hardly new, having proper pedigree in Stephen Toulmin’s seminal work, The Uses of Argument (2003 ed.). Nor are my concerns idiosyncratic, as we learn from Alec Fisher’s helpful book, The Logic of Real Arguments (1988):
Like many others I hoped that teaching logic would help my students to argue better and more logically. Like many others, I was disappointed. Students who were well able to master the techniques of logic seemed to find that these were of very little help in handling real arguments. The tools of classical logic—formalisation, truth-tables, Venn diagrams, semantic tableaux, etc.—just didn’t seem to apply in any straightforward way to the reasoning which students had to read in courses other than logic (p. vii).
By analogy, one imagines students enrolled in a creative writing program being taught rules of grammar or the science of linguistics so as to better appreciate the work of Proust or Steinbeck, so as to closer emulate the creativity of a Graham Greene or Doris Lessing!
Toulmin asked a question, the provisional answer to which caused something of a stir among philosophers of his time and place: how far can logic hope to be a formal science yet retain the possibility of being applied in the critical assessment of actual arguments? Not very far, as it turns out. But to answer this question, or perhaps so as to avoid a direct answer, Toulmin put the question in more tractable form: “What things about the forms and merit of our arguments are field-invariant and what things about them are field-dependent” (Toulmin, 2003, pp. 14-15)? As is well-known, force, strength, and persuasiveness are by degrees field-invariant, while the canons for assessing the actual reasons or premises of the argument need to be field-dependent. The difference here is not the same as the debate between the “generalists” and “specifists” in critical thinking theory (Bailin & Siegel, 2003), for Toulmin is a generalist insofar as he would endorse the proposition that “critical thinking is rightly conceived as a set of skills, abilities, and dispositions, in the sense that these can be utilized or applied across a broad range of contexts and circumstances” (p. 184). But what interests Toulmin, and maybe some of the specifists as well, is that one acquires and learns these skills, abilities, and dispositions within varying contexts and circumstances, much like the finish carpenter has skills and abilities that can be applied in a broad range of contexts and circumstances in construction, having acquired those abilities and learned those skills (of a finish carpenter) on the job, that is, in specific contexts and circumstances.
Yet Toulmin’s structural format for argument analysis is also field-invariant, for he employs such assessment-guiding terms as claims, grounds, warrants, rules, and backing across argument fields (e.g., law, medicine, science, anthropology, what have you) (Toulmin, Rieke, and Janik, 1984). While some fora of practical reasoning are more susceptible to precision and exactitude or formalization than others, this has no bearing whatsoever when it comes to assessing the specific rational merits of a particular argument or chain of reasoning in an argument field or dialogue context. By contrast, as Paul Thagard (2000) has claimed, “most critical thinking textbooks assume, in line with philosophical orthodoxy, that human inference is and should be based on arguments, with deduction providing the gold standard of what an argument should look like” (p. 278). What is worse, if not downright scandalous, “Introductory philosophy textbooks and encyclopedia articles still proclaim as a universally accepted norm that a good argument is a sound argument, one with true premises and a deductively valid inference” (Hitchcock, 2003, p. 6).
The observations by Thagard and Hitchcock are in keeping with the history and tenor of Western epistemology, which has been “decisively influenced by the Greek discovery of the axiomatic method” (Williams, 2001, p. 38). Indeed, going back to Plato, this discovery encouraged the view that all genuine knowledge is demonstrative, deductive logic (Aristotle) and mathematics (Plato) being the regnant forms of cognitive demonstration. Yet the stages of cognition in Plato’s Divided Line locate noesis above dianoia (hence the dialectician’s knowledge of the good is nonpropositional), mathematics and deduction illustrating the capacity for abstract thought, while noesis suggests both intuitive insight into the good and the idealization that makes possible all knowledge. In Francisco Gonzalez’s words, “the good remains outside of any deductive system of knowledge,” accounting for the “sharp distinction between dianoia and noesis explicit in the text” (1998, p. 242). However much associated with the name of Plato, this conception of demonstrative knowledge as exemplified in mathematics is not equivalent to genuine knowledge or knowledge of the good (indeed, the central axioms of mathematics are conventional, their assumptions lacking bedrock or foundational justification). In any case, as Williams reminds us, the very conception of demonstrative knowledge that grew out of ancient Greece and crystallized in Cartesian epistemology is “no longer plausible, even as an ideal” (p. 42).
Toulmin’s (1990) historical narrative places the operative stress on modernity, rather than classical Greek philosophy, for it was only after the 1630s in Europe that the enshrinement of formal logic in philosophy displaced the tradition of rhetoric: “The research program of modern philosophy…set aside all questions about argumentation—among particular people in specific situations, dealing with concrete cases, where varied things were at stake—in favor of proofs that could be set down in writing and judged as written” (p. 31). Toulmin’s identification of the excessive enthusiasm of some Enlightenment philosophers for a particular conception of reason should not be used to jettison the underlying principles and values that fueled the Enlightenment itself (cf. Bronner, 2004). The Enlightenment’s struggle against “throne and altar” remains relevant to today’s battles against poverty, injustice, anti-egalitarianism, fanaticism, anti-democratic political power, environmental degradation and despoliation, cultural relativism and subjectivist ideologies. While rhetoric has to some extent been rehabilitated in several disciplinary domains, contemporary philosophers, particularly those identifying with the analytic tradition, have yet to appreciate the significance of rhetoric for philosophy and, by implication, for critical thinking. Indeed, in The Mind’s Provisions: A Critique of Cognitivism (2001), Vincent Descombes makes an intriguing case for the ethos, pathos and logos of rhetorical art as “the only real philosophical theory of mental causation to have been put forward” (p. 91), a remark crafted to disturb the philosophical slumber of cognitive naturalists, evolutionary psychologists, eliminative materialists, and post-positivists.
Still under the assumption that critical thinking theory aspires to be philosophically robust, scientism needs to be brought to the table, for it is “the doctrine that only the methods of the natural sciences give rise to knowledge [and] is today widely espoused in epistemology, metaphysics, philosophy of language, and philosophy of mind” (Stroll, 2000, p. 1). In more emphatic terms, the practice of science does not exemplify the quintessence of rationality or reason. Rather, it is but one embodiment or expression of such in one significant realm of human affairs.
In a cultural climate in which the notion of “Intelligent Design” is seriously and adamantly proffered as an intellectual challenge to evolutionary theory despite the appalling absence of epistemic virtues intrinsic to scientific theory, counsel against scientism risks being assimilated to the popular and debilitating anti-scientific ethos that today feeds on failures and gaps in educational processes. But a critique of scientism in no way impedes the quest for scientific knowledge nor impinges upon the desire for progress or growth in the natural and social sciences.
In the modern and postmodern period physics has proven bewitching to not a few philosophers, and of late cognitive science has been equally spellbinding: the early Jerry Fodor, Paul Thagard, Mark Johnson, Daniel Dennett, Patricia and Paul Churchland, Owen Flanagan, and the early Hilary Putnam leap to mind, together with a grab bag of naturalists, materialists, and dwindling acolytes of Quine. The core conviction here is that “philosophy, when correctly done, is an extension of science” (p. 2). In defining the latest variation on the scientistic theme, Avrum Stroll shares a quote from Patricia Churchland’s Neurophilosophy (1986) that would make even a few of the original logical positivist members of the Vienna Circle blush: “In the idealized long run, the completed science is a true description of reality, there is no other Truth and no other Reality” (p. 1). This axiomatic dogma of the scientistic creed of monistic materialism and metaphysical absolutism is thankfully missing from the crème de la crème in philosophy of science (cf.: Dupré, 2001; Kitcher, 2001; Longino, 2002; Rescher, 1999, 2000; Ziman, 2000).
The critical thinking theorist inclined toward scientism needs to repeat the following mantra with Putnam (1990): “‘scientific’ is not coextensive with ‘rational’” (p. 143). In general, contemporary philosophers of science are no longer hypnotized by the hypothetico-deductive method, but the belief persists that scientific theories, or at least inductive reasoning, can and should be formalized, hence the proclivity for probabilistic calculations founded on mathematical axioms. In Fact and Method: Explanation, Confirmation and Reality in the Natural and Social Sciences (1987), Richard Miller explains Bayesian reasoning as the latest incarnation of positivist fantasy, “an excess of formalism in which truisms about likelihood (plausibility, simplicity, and so forth) are given one-sided readings and abstract results are developed at too far a remove from the problems to be solved” (p. 279). Miller avers this latest round of falling head over heels for formalism is caused by “the triumph and prestige of the physical sciences, or ingrained ways of thinking in a highly monetary society, or both…” (p. 278). Critical thinking pedagogues need to be on their guard against the trickle-down or spillover effects from a naturalized epistemology or post-positivist philosophy of science infected with Bayesian formalism, what Rescher (1997) calls a “penchant for quantities,” a “fetish for measurement:”
People incline to think that if something significant is to be said, then you can say it with numbers and thereby transmute it into a meaningful measurement. They endorse Lord Kelvin’s dictum that ‘When you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind.’ But when one looks at the issue more clearly and critically, one finds there is no convincing reason to think this is so on any universal and pervasive basis (p. 79).
Rescher reminds us that “…the things you cannot quantify in the context of an inquiry may well turn out to be the most important” (p. 236). Bayesianism may have been elected the post-positivist prom queen of induction, but there are other princesses and their retinue deserving jeweled pieces of the crown: inductive generalization (e.g. enumerative induction) and hypothetical induction (e.g. hypothetico-induction), for two—making a sum of three paradigmatic inductive principles, three archetypes, and the respective families they engender in response to their respective weaknesses (see Norton, 2003a).
While applicable to stochastic systems (as probabilistic analysis of games of chance), Bayesianism has been stretched in application to belief, “based on the principle that belief comes in degrees, usually numerical, and is governed by a calculus modeled more or less closely on the probability calculus” (Norton, 2003a, p. 9). Bayesianism has all the formalist pretensions of deductive reasoning, for “if there is one account of induction that does aspire to be the universal and final account, it is the Bayesian account” (p. 13). Richard Miller, John D. Norton (2003a, 2003b), John Earman (1992) and the late L. Jonathan Cohen (1989) provide reason enough to be skeptical of this latest adventure in “scientific imperialism,” that is, “the tendency for a successful scientific idea to be applied far beyond its original home [cf. the fate of ‘rational-choice theory’], and generally with decreasing success, the more its application is extended” (Dupré, 2001, p. 16). Putnam (1990) credits Nelson Goodman with a knockdown argument that conclusively demonstrates “inductive logic is not formal in the sense that deductive logic is” (p. 304). And Putnam (1994) himself argues that “a purely formal method cannot be hoped for in inductive logic” (p. 464), while Norton (2003b) makes a compelling case for what he calls a “material theory of induction,” wherein “all inductions ultimately derive their license from facts pertinent to the matter of the induction” (p. 650). In the letter if not the spirit of Toulmin’s several briefs on behalf of practical reasoning over the past fifty years, Norton proclaims, “It is high time for us to recognize that our failure to agree on a single systematization of inductive inference is not merely a temporary lacuna. It is here to stay” (2003b, p. 648). Norton seems to have benefited from the study of scientific imperialism: “we can see that the more universal the scope of an inductive inference schema, the less its strength” (p. 662).
II – Rhetoric
Critical thinking theory is, rightly, parasitic on philosophy: if philosophers do not see the relevance of rhetoric, it is unlikely that critical thinking theorists will grant it a hearing. Fortunately, Nicholas Rescher (2001) stands apart from his peers in attending to the role rhetoric plays in philosophy itself. Rescher in fact argues that rhetoric makes for one of the two primary forms of philosophical exposition:
There are two very different modes of writing philosophy. The one pivots on inferential expressions such as ‘because,’ ‘since,’ ‘therefore,’ ‘has the consequence that,’ ‘and so cannot,’ ‘must accordingly,’ and the like. The other bristles with adjectives of approbation or derogation—‘evident,’ ‘sensible,’ ‘untenable,’ ‘absurd,’ ‘inappropriate,’ ‘unscientific,’ and comparable adverbs like ‘evidently,’ ‘obviously,’ ‘foolishly,’ etc. The former relies primarily on inference and argumentation to substantiate its claims, the latter primarily on the rhetoric of persuasion. The one seeks to secure the reader’s (or auditor’s) assent by reasons, the other by an appeal to values and appraisals—and above all by an appeal to fittingness and consonance within the overall scheme of things. The one looks foundationally toward secure certainties, the other coherentially towards systemic fit with infirm but nevertheless respectable plausibilities. Like inferential reasoning, rhetoric too is a venture of justificatory systematization, albeit one of a rather different kind (p. 21).
Let us christen the first mode of exposition “agonistic,” the agonistic and rhetorical modes of philosophical exposition being ideal types. Critical thinking texts are biased toward analyzing arguments modeled on the agonistic form of philosophical exposition, as we teach our students to look for premise and indicator terms (‘inferential expressions’), not the adjectives and adverbs of rhetorical exposition. A vast majority of modern philosophers use the agonistic form of exposition, while Schopenhauer, Kierkegaard, Nietzsche, Unamuno, Ortega y Gasset, and (sometimes) Sartre stand out for their reliance on the rhetorical (or ‘axiological’) mode of exposition. Of course, as Rescher makes clear, no philosopher fully exemplifies an ideal type of exposition, and some, like Wittgenstein, used first one and then the other form of exposition over the course of their philosophical careers (Sartre’s Being and Nothingness irritated philosophers simply because he shifted abruptly back and forth between agonistic and rhetorical exposition). Furthermore, Rescher yokes these two types of exposition to distinctly different objectives: “The demonstrative/argumentative (inferential) mode is efficient for securing a reader’s assent to certain claims, to influencing one’s beliefs. The rhetorical (evocative) mode is optimal for inducing a reader to adopt certain preferences, to shaping or influencing one’s priorities and evaluations” (p. 23). Finally, Rescher concludes that no philosopher can avoid the method and mode of the type she “affect[s] to reject and despise” (p. 25).
Not only might we notice, with Rescher, the rhetorical form of exposition, we might also consider the rhetorical modes of expression found in our textbooks and classrooms. Do we take the time to consider the ethos, pathos, and logos of rhetorical art? Rhetoric should not be reduced to the identification of a “rhetorical device” (e.g. a simile or metaphor) or the raising of a “rhetorical question.” The collaborative art of rhetoric was developed to guide axiological judgments and decision making in conjunction with the “competencies crafted by the arts of practical reason—reading situations creatively, setting out positions clearly, appraising alternatives with prudence and practical judgment…” (Farrell, p. 3). As Toulmin reminds us in Cosmopolis: The Hidden Agenda of Modernity (1990), it was only with the triumph of the “Cartesian program for philosophy,” that “for the first time since Aristotle, logical analysis was separated from, and elevated far above, the study of rhetoric, discourse and argumentation” (p. 75). While developments within philosophy have sought to recover the proper place for rhetoric, discourse and argumentation, we have a long way to go, particularly if we need to wait for these developments to trickle down from philosophy to critical thinking theory.
Informal logic, which grew in response to a serious consideration of Toulmin’s thesis, is hardly on par with formal logic within the profession of philosophy and is only now, by fits and starts, coming into its own within critical thinking pedagogy. The development of “pragmatics” within philosophy of language (see Travis, 1997) has gone some distance in bringing “theoretical philosophy” down to earth. Philosophers as diverse as J.L. Austin, H.P. Grice, Wittgenstein, Habermas and Putnam have likewise facilitated the recovery of “practical philosophy.” But the comparative insignificance of both rhetoric and informal logic within philosophy remains: there’s much work to be done if critical thinking theorists are going to carve out curriculum space for rhetoric.
The economist Deirdre (formerly Donald) McCloskey (1985) introduces the rhetorical tradition of Aristotle, Cicero, and Quintilian:
The word ‘rhetoric’ here does not mean a verbal shell game, as in ‘empty rhetoric’ or ‘mere rhetoric’ (although form is not trivial either, and even empty rhetoric is full). Rhetoric is the art of speaking. More broadly, it is the study of how people persuade. In Modern Dogma and the Rhetoric of Assent Wayne Booth gives many useful definitions. Rhetoric ‘is the art of probing what men believe they ought to believe, rather than proving what is true according to abstract methods;’ it is ‘the art of discovering good reasons, finding what really warrants assent, because any reasonable person ought to be persuaded;’ it is ‘careful weighing of more-or-less good reasons to arrive at more or less probable or plausible conclusions—none too secure but better than what would be arrived at by chance or unthinking impulse;’ it is ‘the art of discovering warrantable beliefs and improving those beliefs in shared discourse;’ its purpose must not be ‘to talk someone else into a preconceived view; rather, it must be to engage in mutual inquiry.’ The standards of ‘good’ reasons and ‘warrantable’ belief and ‘plausible’ conclusions are to come…from the conversations of practitioners themselves, in their laboratories or seminar rooms or conference halls. [….] To reinstate rhetoric properly understood is to reinstate wider and wiser reasoning (pp. 29-30).
Plato is often viewed as a vociferous opponent of rhetoric, yet it’s safer to conclude that it was not rhetoric per se that he opposed but rather a particular genre of it exploited by the Sophists for eristic purposes, for “Plato can also be seen as the keeper of the flame of true rhetoric, the source of the nobler model of rhetoric as a tool of philosophical enlightenment that motivates the more idealistic promoters of rhetoric in later eras” (Binder & Weisberg, 2000, p. 300). Binder and Weisberg cite Martha Nussbaum’s work on the Platonic dialogue (The Fragility of Goodness, 1986, pp. 122-135; cf. Gonzalez, 1998) to buttress the claim that dialectical dialogue is “an enduringly influential model of rhetorical form” (p. 299), its rhetorical power evidenced in its ability to attract “the perhaps resistant reader into philosophizing by exploiting the context of ordinary conversation between ordinary people” (p. 301). Plato indirectly conveys the significance of rhetoric in his choice of the dialogue form as ideally suited to the means and ends of philosophizing.
In a discussion of dialectic and eristic in Plato’s Euthydemus, Gonzalez (1998) explains why the attempt to force Socrates’ dialogic argument into deductive form to “prove something, that is, to force the universal acceptance of a conclusion which otherwise need not be accepted,” is wrongheaded. Rhetoric, sensitivity to dialogue form and context, and practical reasoning, in short, the various foci of informal logic, enable us to better appreciate the nature of Socratic argument:
This intent [i.e. to prove something] is usually accompanied by the assumption that there is no way of knowing this conclusion to be true except through such a proof. In this case, the presence of a fallacy would destroy the argument’s claim to provide knowledge and would allow one to refrain from accepting the conclusion. However, the charge of fallacy touches neither the eristic nor Socrates, since neither argues with this intent. The eristic’s goal is the refutation of a particular respondent; therefore, the use of fallacy, rather than being an objection to what he is doing, is simply a sign of his skill. The best way to win an argument is to commit fallacies not spotted as such by the respondent. Socrates’ goal, on the other hand, is equally removed from wanting to force universal acceptance of a conclusion. Socrates’ method is protreptic [i.e. to turn one toward philosophy and the pursuit of virtue] precisely in that it aims to convert the interlocutor to a certain course of action. In getting Cleinias to see the importance of wisdom in determining the goodness of things such as health, wealth, and even the virtues, Socrates’ goal is to encourage the boy to pursue wisdom. Therefore, the premises of a Socratic argument, rather than being understood as a means of logically necessitating a certain conclusion, are meant to be aids in turning a particular individual in a certain direction. The argument serves the goal of conversion rather than proof. But then it is pointless to object that it commits a logical fallacy. The question is not whether the argument succeeds in universally proving the conclusion, since this was never the intention; the question is instead whether or not the argument points us in the right direction, inspires us with the right goal. The measure of success is practical and not purely theoretical (p. 104).
As Gonzalez further elaborates in a note to this passage, Socratic arguments are indissolubly bound to the “specific circumstances of the conversation,” implying rhetoric is an indispensable tool in the philosopher’s (or critical thinking pedagogue’s) toolbox (p. 315, n. 16). Rhetoric is attuned to the practical situation of the interlocutor(s) or auditor(s) in a dialogue or pedagogical context. Plato and Aristotle agreed that “to understand a discourse, the auditor must first have had some experience with what the discourse is about, and some degree of familiarity with its object” (Hadot, 2002, pp. 82-89), hence their theoretical (Aristotle) and practical (Plato) accounts of the nature and proper use of rhetoric.
Practical situations and goals, contexts and circumstances, trade in contingency, not the forms of logic, whereas logical theories trade in forms. The problem, therefore, is not the relation between philosophy and rhetoric, as is often supposed, but rather the tension between formal logic and rhetoric:
One might say that insofar as logic cannot be contingently about what it is about, and insofar as what logic says cannot be contingent, logic is about nothing other than the forms of which it treats (none of which identifies a thought), and about the relations between those forms. If what it says about those forms is noncontingent, that may be all the noncontingency logic needs. For some of the forms it treats, it may be part of being that form that whatever is so formed, and each of what, in the structure, are its constituents, have a truth value. But logic need make no claims as to what things in fact are of the forms of which it treats. If the laws of logic are about forms, then, in speaking of such forms, logic may be informative about whatever is in fact either true or false, whether or not that is something that might have been neither, and, equally, whether or not that is something there would not have been to say (or to think) at all, had the world been different in such-and-such ways. [….] Logic may insist that what it says applies exclusively to things that are either true or false. But there is no reason to think it any concern of logic what things, or what sorts of things, these are—whether, for example, all statements, or all thoughts, or only some (Travis, 2000, pp. 84-85).
Charles Travis makes the selfsame point again in his Wittgenstein-inspired book, Unshadowed Thought: Representation in Thought and Language (2000), but this time in a way that enables one to see just how difficult it might be to wean philosophers from the mother’s milk of formal logic, a necessity arising from the realization that the “ways there are for things to be” is an “occasion-sensitive matter:”
Bringing one’s picture of the world to bear where it does bear, and seeing how it does, is not plausibly just a matter of calculating over some one set of forms for representing things. Inference no doubt plays some role in our maintaining that system in our treatment of things by virtue of which we count as thinking this and that. But not all inference need be conceived as calculation over syntactic features of items which represent the way things are. Nor are our inferential abilities so limited that we can draw inferences from a given fact only when it is represented in some one particular way. So there is nothing in our inferential abilities, or the role they play, that requires thoughts, or the objects of our attitudes, to be essentially structured. The system we maintain in having an attitude need not be achieved by structuring its object in some one way; or, correlatively, by assigning that object to some one system, structured in some one way. Perhaps that is part of what Wittgenstein meant in saying, ‘In philosophy we often compare the use of words with games and calculi which have fixed rules, but we cannot say that someone who is using language must be playing such a game.’ A given calculus treats a restricted sort of inference calculated over a restricted set of forms (pp. 186-187).
Using formal models of logic as “yardsticks or templates for the evaluation of informally expressed arguments” (Fuller, 1993, p. 23) does not give sufficient weight to the fact that, as both Plato and Aristotle well knew, and Steve Fuller reminds us in Philosophy, Rhetoric and the End of Knowledge (1993), “people in search of guidance already come with certain concerns, habits of mind, and situations in which they are prepared to act. Any normative proposal [for thinking and behaving rationally] must therefore take the form of advice that complements this state of affairs” (pp. 20-21). That is to say, any normative proposal for prescriptive rationality must be fashioned with the tools of rhetoric found in the philosopher’s workshop. That is easier said than done.
In their book, Understanding Arguments: An Introduction to Informal Logic (2001), Fogelin and Sinnott-Armstrong unintentionally reveal that there is no compelling or persuasive reason for those who are not logicians, mathematicians or would-be computer scientists to immerse themselves in formal logic: “Understanding the theory of the syllogism deepens our understanding of validity [which has nothing whatsoever to do with conversational or pragmatic rules], even if this theory is, in some cases, difficult to apply directly to arguments in daily life” (p. 181). That’s a bit mild if not disingenuous, as we later learn when our esteemed authors strike a note made earlier by Fisher:
After mastering the techniques for evaluating syllogisms, students naturally [notice one possible assumption here: it’s unnatural for the instructor] turn to arguments that arise in daily life and attempt to use these newly acquired skills. They are often disappointed with the results. The formal theory of the syllogism seems to bear little relationship to everyday arguments, and there does not seem to be an easy way to bridge the gap (p. 204).
Several reasons are offered by way of tolerating, justifying or rationalizing “the gap:” “the study of formal logic is important because it deepens our insight into a central notion of logic: validity” (p. 205). Of course that’s true enough, but while that answer may comfort a logician or philosopher-to-be, how does it apply to our students? Why should they care for the notion of validity in the first place? That question is never really answered in a way that shows the logician is motivated to close the gap. Perhaps sensing this reason is not up to snuff, we’re given a second one, the argument or evidence for which is not forthcoming: “the argument forms we have studied do underlie much of our everyday reasoning” (p. 205). That is doubtful and perchance hyperbolic. Fogelin and Sinnott-Armstrong are refreshing if only because they admit the existence of the gap between the study of deductive validity in idealized argument forms (i.e., in ‘isolation from all the other factors at work in a rich conversational setting’) and the fora of everyday reasoning: “In general, the more rigor and precision you insist on, the less you can talk about [i.e., the smaller the range of application]” (p. 205). Indeed, and again: formal logic “does not concern itself with criteria for the evaluation of arguments” (Hitchcock, 2003, p. 6).
Rigor and precision are relative rationalist desiderata among others, all of which admit degrees of dependence on rhetorical means and ends that are cut, chiseled and polished—shaped—according to the purposes of empirical, moral, and practical forms of reasoning. This bears repeating if only because it remains the case that for some philosophers, “the language of negotiation, compromise, bargaining, and debate seems epistemically inferior to the language of certainty, precision, and logic” (Willard, 1996, p. 84). There may be genuinely logical rules of inference that are (transcendentally) necessary as conditions for the possibility of judgment, that is, logical rules “whose justification commits us to no more than is necessary in order to make thought and talk possible” (Luntley, 1999, p. 170), but those are not the rules of classical logic, nor are those the rules our students must learn in order to enhance their ability to “think critically.”
III – Informal Logic: Practical Reasoning and Argumentation
In addition to Toulmin’s work on practical reasoning and argumentation, Douglas Walton’s oeuvre is essential to analyzing and assessing practical reasoning in everyday social contexts, those socio-cultural fora germane to all critical thinking students. Although Walton (1992) gives pride of place to presumptive reasoning as part of plausible argumentation schemes in particular dialogue settings, deductive and inductive arguments are not altogether absent from within the larger argument chains. Plausible argumentation is assessed with weaker standards than those used to evaluate deductive and inductive arguments, as it is “based on a kind of reasoning that goes forward tentatively and provisionally in argumentation, subject to exceptions, qualifications, and rebuttals” (p. 3). Rationally or epistemologically speaking, there is no a priori or a posteriori reason why one should be embarrassed by presumptive reasoning in plausible argumentation.
Walton understands presumption as a kind of speech act betwixt and between an assumption and an assertion. The argumentative—rational—weight of the presumption is determined by the type of discussion or dialogue in which it occurs, “and in particular by the burden of proof that is appropriate for arguments in that type of discussion” (p. 42). Walton finds three kinds or “weights” of presumption “relative to a particular point or stage” in an argument: required, reasonable and permissible presumption(s). The taxonomy of presumptions is a bit more complicated than these stage-specific weights owing to the particular dialogue type. The exact enumeration of dialogue types or forms varies throughout Walton’s work, but the following are representative: persuasion, enquiry, negotiation, information-seeking, deliberation, eristic, and “mixed.” This summary must suffice as an introduction to an important part of Walton’s contribution to informal logic.
What warrants further exploration is the notion that dialogue types are crucial for discerning what has traditionally been termed an “informal fallacy.” For example, the traditional emotional fallacies: argument to the people (argumentum ad populum), argument to pity (argumentum ad misericordiam), argument to the stick (argumentum ad baculum), and argument against the man (argumentum ad hominem), are analyzed within specific dialogue types and contexts, what Walton (1989) refers to as “dialectical relevance in the given situation” (p. 23). The salient point here is the recognition that these four arguments are not even prima facie fallacious! The making of threats—argument to the stick—for instance, may be perfectly proper—non-fallacious—in negotiation dialogues like judicial plea-bargaining or collective bargaining between labor and management. As Walton (1989) says, “there is nothing inherently fallacious about making threats” (p. 23). Walton thus ignores or effaces a distinction Jon Elster (1991) draws at the level of speech acts between “arguing” and “bargaining.” Elster prefers a smaller linguistic unit of argument analysis than the dialogue type (the speech act), thereby maintaining a distinction, conflated in Walton’s negotiation dialogue, between an argument and a bargaining process, while conceding that it may be difficult to determine “whether a given communication is part of an argument or a move in a bargaining process.” While this complicates matters, I do think Walton (1989) has successfully demonstrated that the specific type and context of dialogue is nevertheless fundamental in determining the presence of informal fallacies.
Two dialogue contexts used by Walton (1992) to illustrate instances in which an ad hominem argument is legitimate and presumptive (hence defeasible) are cross-examination of a witness in trial court and a debate between candidates in an electoral campaign (pp. 251-254). His book-length treatments of several traditionally classified informal fallacies have yet to filter down into most critical thinking texts. To be sure, we’re not pedagogically limited to textual illustrations. In my class, for example, we’ve viewed Lumet’s classic 1957 film, “Twelve Angry Men,” to learn about the nature and possible consequences of reasoning with informal fallacies, in this case, in a deliberative dialogue setting in which jurors are deciding the fate of a young man accused of murdering his father.
Rhetorical considerations are important in the choice of pedagogical examples used to illustrate informal fallacies. Let us consider the following example: The fallacy of composition is defined as “arguing (a) that what is true of each part of a whole is also (necessarily) true of the whole itself, or (b) that what is true of some parts of a whole is also (necessarily) true of the whole itself” (Angeles, 1992, p. 107). Angeles illustrates the fallacy with the following example: “Each member (or some members) of the team is (or are) married, therefore, the team also has (must have) a wife.” While philosophers, as philosophers, would not likely be troubled by this illustration, I think in our capacity as critical thinking pedagogues we can do better.
My own attempt to illustrate this same informal fallacy begins by telling the students I’m selecting an example from a philosopher who masquerades as a social scientist at Columbia University, namely, Jon Elster. Not only are they introduced to Elster, the philosophically sophisticated social scientist, they also get a taste of his analytic prowess at play in his brilliant book, Making Sense of Marx (1985). This has proven rhetorically effective if only because Marx, not unlike Freud, has precipitously fallen out of academic and cultural fashion (even on the Left, which prefers the conspicuous consumption of French intellectual imports). Thus mere mention of Marx is an act of the transgressive sort so appealing to our young charges. Marx, in their mind’s eye, is equivalent to Communism, which is equivalent to the Evil Empire, which is equivalent to the Dark Side (and thus the slippery slide down the black hole of black and white thinking). Dark things being by turns tempting and tantalizing, the students then read the following passage from Elster’s book:
Both the freedom to change employer and the freedom to become an employer oneself give rise to ideological illusions that embody the fallacy of composition. The first is the inference from the fact that a given worker is independent of any specific employer to the conclusion that he is free from all employers, that is, independent of capital as such; [the second is the inference from the fact that any given worker can become an employer] to the conclusion that all workers can achieve such independence. It might look as if the conclusion of the first inference follows validly from the premise of the second, but this is due merely to the word ‘can’ being employed in two different senses. The freedom of the worker to change employer depends, for its realization, mainly on his decision to do so. He ‘can’ do it, having the real ability to do so should he want to. The freedom to move into the capitalist class, by contrast, only can be realized by the worker who is [to quote Marx] an ‘exceedingly clever and shrewd fellow.’ Any worker ‘can’ do it, in the sense of having the formal freedom to do so, but only a few are really able to. Hence the worker possesses the least important of the two freedoms—namely the freedom to change employer—in the strongest sense of these two senses of freedom. He can actually use it should he decide to. Conversely, the more important freedom to move into the capitalist class obtains only in the weaker, more conditional sense: ‘every workman, if he is an exceedingly clever fellow…can possibly be converted into an exploiteur du travail d’autrui.’ Correlatively, the ideological implications of the two freedoms differ. With respect to the first, the ideologically attractive aspect is that the worker is free in the strong sense, while the second has the attraction of making him free with respect to an important freedom. If the two are confused, as they might easily be, the idea could emerge that the worker remains in the working class by choice rather than necessity (p. 211).
This analytically rich illustration of the fallacy of composition strikes me as rhetorically and hence pedagogically preferable to the lexical example used by Angeles.
IV – Emotions and Rationality
In their contribution to a recent volume on the philosophy of education, Sharon Bailin and Harvey Siegel (2003) discuss the normative character of critical thinking, debates among critical thinking pedagogues, and “outsider,” skeptically motivated refrains from the postmodernist lexicon. Their piece is particularly pellucid, offering a judicious and fair introduction to the rational nature of the discipline. It is therefore with some reluctance that I find their response to “the charge that critical thinking neglects or downplays emotion” inadequate. The gist of their reply is as follows:
Contrary to this complaint, many critical thinking theorists explicitly acknowledge a role for emotions in critical thinking, enjoining us, for example, to be sensitive to the feelings of others and to understand the perspectives of others. Indeed, emotional aspects are central to Siegel’s notion of the critical spirit and to Scheffler’s account of critical thinking. [….] What most critical thinking theorists would caution against, however, is reliance on emotion without critical assessment. What is advocated is an appropriate role for emotion, one which enhances rather than detracts from one’s assessing and acting upon reasons (p. 190).
The reference to critical thinking theorists is telling, for with one exception (and even that is weak: the emotions (empathy?) simply being implicated in the imperative to ‘understand other perspectives’), the references provided are not to actual critical thinking texts, but rather to the philosophical fruits of several critical thinking theorists. Now I hardly want to belittle or impugn such labors, but this work has yet to make a discernible impression on the critical thinking curriculum. I am not sure why this is so, but I suspect part of the explanation is that the role accorded the emotions is a rather minor one, the emotions not being seen as somehow integral to our conceptions of rationality itself. Moreover, the fact that a significant number of professional philosophers and psychologists have only recently immersed themselves in the study of the emotions suggests this work has yet to trickle down to the critical thinking literature.
To be sure, “most of the great classical philosophers—Plato, Aristotle, Spinoza, Descartes, Hobbes, Hume—had recognizable theories of emotion,” but “what is surprising is that in much of the twentieth century philosophers of mind and psychologists tended to neglect them” (de Sousa, 2003, p. 1). Therefore one can hardly fault critical thinking theorists and pedagogues for their neglect, assuming the discipline’s theorists and pedagogues are deeply influenced by what is salient and fashionable among philosophers (‘out of sight, out of mind,’ as the proverb goes, or the psychologist’s ‘availability heuristic’). A “typical” critical thinking text contains a thing or two about “emotionally loaded” language (almost exclusively pejorative, and often coupled with references, no less pejorative, to rhetoric), and a section on “emotional fallacies.” In a text I used for my class (Jones, 2001) one reads, in bold print on the second page, “In thinking critically, we do not struggle to become unfeeling or emotionless persons, but rather to make judgments in which our feelings and emotions are directed to their proper objects and are consequently aids rather than impediments to our judgment.” So far so good, and later Abraham Lincoln’s Gettysburg Address is used to illustrate how “emotion directed to the proper object is entirely appropriate and desirable” (pp. 42-43). But that’s it! Students can hardly be expected to understand the importance of the emotions in critical thinking and rationality given this rather impoverished hearing.
English departments are at a disciplinary advantage here, if only because “some novelists and playwrights…display a superb understanding of human emotions” (Elster, 1999a, p. 14) although, as with moral psychology in general, modern moralists like Montaigne, Pascal, La Rochefoucauld, and La Bruyère, despite their “extreme psychological acuity and powers of formulation” (Elster, 1999b, p. 51), often fall between disciplinary cracks or crevices. Elster homes in on the precise source of this disciplinary advantage vis-à-vis philosophy: “Whereas many of the fictional examples used by philosophers to illustrate this or that theory of the emotions fail to convince because they are too obviously made up for that purpose, the words and actions of characters in a novel or play have an independent authority that allows us to use them as examples and counterexamples” (p. 51). In Robin Hogarth’s (2001) words, “the content of a problem and the format in which it is presented can affect both our ability to see the problem and thus the solution reached” (p. 118). Or, as Steve Fuller puts it, “half of a philosopher’s problem is always that her interlocutor doesn’t already see the problem” (Fuller, 1993, p. 277).
As noted above by Bailin and Siegel (2003), the emotions are central to the “critical spirit” of critical thinking, for it is the emotions that, as it were, give it life, critical spirit defined here as “that complex of dispositions, attitudes, habits of mind, and character traits characteristic of critical thinkers” (p. 185). English instructors, rather than philosophers who teach ethics or analytic moral philosophy, are better situated, pedagogically speaking, to communicate the role of character, of ethical and non-ethical virtues, of the emotions, and of the myriad values that together are individuated in processes of moral and psychological growth and the emergence of character. It is this growth which bears fruit as the critical spirit that animates—is a necessary condition of—critical thinking. In Ethics, Evil, and Fiction (1997) Colin McGinn thus writes, “literature is where moral thinking lives and breathes on every page” (p. vi):
Stories can sharpen and clarify moral questions, encouraging a dialectic between the reader’s own experience and the trials of the character he or she is reading about. A tremendous amount of moral thinking and feeling is done when reading novels (or watching plays and films, or reading poetry and short stories). In fact, it is not an exaggeration to say that for most people this is the primary way in which they acquire ethical attributes, especially in contemporary culture (pp. 174-175).
In helping to nurture critical spirit, “the terms of the novelist’s art are alert winged creatures, perceiving where the blunt terms of ordinary speech or of abstract theoretical discourse are blind, acute where they are obtuse, winged where they are dull and heavy” (Nussbaum, 1990, p. 7). Trained in political science, a Nobel Prize winner in economics, a pioneer in the field of the psychology of judgment and decision making as well as in the computational modeling of human reasoning, Herbert A. Simon came to a similar conclusion: “most human beings are able to attend to issues longer, to think harder about them, to receive deeper impressions that last longer, if information is presented in a context of emotion—a sort of hot dressing—than if it is presented wholly without affect” (Simon in Arkes and Hammond, eds., 1986, p. 111). The corollary obligation should not be taken lightly: “If we are to learn our social science from [or imbibe the critical spirit of] novelists, then novelists have to get it right” (p. 112):
Perhaps some of you are familiar with Arthur Koestler’s Darkness at Noon. It is a novel that describes what happens to a particular person at the time of the Russian purge trials of the 1930s. Now suppose you wish to understand the history of the Western world between the two world wars, and the events that led up to our contemporary world. You will then certainly need to understand the purge trials. Are you more likely to gain such an understanding by reading Darkness at Noon, or by reading a history book that deals with the trials, or by searching out the published transcripts of the trial testimony in the library? I would vote for Koestler’s book as the best route, precisely because of the intense emotions it evokes in most readers.
The critical thinking Gradgrinds among us could do worse than listen to another Nobel Laureate in economics, Amartya Sen (1982): “Fiction is a general method of coming to grips with facts. There is nothing illegitimate in being helped by War and Peace to an understanding of the Napoleonic Wars in Russia, or by Grapes of Wrath to digesting aspects of the Depression” (pp. 436-437).
As found in Putnam (1995), recalcitrant Gradgrinds would therefore do well to keep in mind four principles from E.A. Singer, Jr.:
1. Knowledge of facts presupposes knowledge of theories.
2. Knowledge of theories presupposes knowledge of facts.
3. Knowledge of facts presupposes knowledge of values.
4. Knowledge of values presupposes knowledge of facts. (p. 14)
These four principles are discussed in the best contemporary philosophy of science and, perhaps more to the point, are essential to the literary and philosophical writing of Iris Murdoch, the philosophy of Stanley Cavell (on both Murdoch and Cavell, see Mulhall in O’Hear, ed., 2000), the economics of Amartya Sen, and the recent philosophical work of Hilary Putnam and Martha Nussbaum. Assuming, as we should, that ethical living—a moral life—is the very marrow of critical spirit, notice how Murdoch critiques a hard and fast categorical distinction found in ethics between facts and values (which has nothing whatsoever to do with committing the ‘naturalistic fallacy’):
The moral life is not intermittent or specialized, it is not a peculiar separate area of our existence. [….] Life is made up of
details. We compartmentalize it for reasons of convenience, dividing the aesthetic from the moral, the public from the private,
work from pleasure. [….] Yet we are all always deploying and directing our energy, refining or blunting it, purifying or
corrupting it, and it is always easier to do a thing a second time. ‘Sensibility’ is a word which may be in place here. Aesthetic
insight connects with moral insight, respect for things connects with respect for persons. (Education.) Happenings in the
consciousness so vague as to be almost non-existent can have moral ‘colour.’ All sorts of momentary sensibilities to other
people, too shadowy to come under the heading of communication, are still parts of moral activity. (‘But are you saying that
every single second has a moral tag?’ Yes, roughly.) [….] This is not to advocate constant self-observation or some mad
return to solipsism. We instinctively watch and check ourselves as moral beings in our use of many various skills as we direct
our modes of attention (Murdoch, 1993, p. 495).
Critical spirit is like “all our states of consciousness and action” inasmuch as it “presuppose[s] discrimination, and any such discrimination is subject to moral evaluation” (Mulhall in O’Hear, ed., 2000, p. 257):
The moral point is that ‘facts’ are set up as such by human (that is moral) agents. Much of our life is taken up by truth-seeking,
imagining, questioning. We relate to facts through truth and truthfulness, and come to recognise and discover that there are
different modes and levels of insight and understanding. In many familiar ways various values pervade and colour what we
take to be the reality of our world; wherein we constantly evaluate our own values and those of others, and judge and
determine forms of consciousness and modes of being (Murdoch, 1993, p. 26).
Critical thinking curricula can do a better job of awakening our students to the well-known ways in which the emotions distort or subvert information acquisition, belief formation and rational action. Emotions are part and parcel of such debilitating psychological phenomena as weakness of will (Elster, 1984, 2000; Ainslie, 2001; chapters 3 and 4 in Schelling, 1984); wishful thinking (Elster, 1983; Pears, 1984); self-deception (Fingarette, 2000; Giannetti, 2000; Mele, 2001; and pp. 172-179 in Elster, 1984); and states of denial (Cohen, 2001). With ample reason, Aristotle stressed that “mastery of one’s passions is a prerequisite for a virtuous life,” while Jane Austen drew fictional characters in which “an excess of uncontrolled sensibility leads to a deficiency of good sense” (Bennett and Hacker, 2003, p. 220). Hume’s dictum that reason is or ought to be the slave of the passions, while perhaps open to a more charitable interpretation than is standard, is not one we want to teach our students. More controversially, we might explore how emotions contribute to our being rational or reasonable, how they might serve the critical spirit.
Aristotelian mastery of the passions includes learning how to feel the right emotion(s) in the right circumstance(s). The Aristotelian outlook has some historical affinity with and thus family resemblance to the neo-Stoic theory of emotions formulated by Martha Nussbaum. For Nussbaum, emotional experience involves, among other things, judgments of value and related appraisals or evaluations that reveal our abiding concern with human well-being if not flourishing (eudaimonia): “Emotions view the world from the point of my own scheme of goals and projects, the things to which I attach value in a conception of what it is for me to live well” (Nussbaum, 2001, p. 49). Peter Goldie (2000) has accordingly written of the “intelligibility, appropriateness, and proportionality” of emotions.
Talk of the rationality of emotions should not obscure the frequently involuntary character of our emotional experience: “in many standard cases, emotional reactions are triggered almost instantaneously, by cognitive or perceptual cues” (Elster, 1999b, p. 29); thus emotions can seem to happen to us, as events passively undergone rather than actions of an intentional or voluntary kind. And yet there is plenty of evidence to be gleaned from the philosophical literature that we can learn to discipline or tame the more troublesome emotions like anger, hatred, or lust on behalf of the means and ends of rationality, and toward the living of an ethical life. Given that the emotions can help or hinder the goals of rational living, as well as the idea that “it makes sense to hold people responsible both for their characters and for actions that flow from their characters” (Kupperman, 1991, p. 63)—the notion that we are directly responsible for our character traits (Goldie, 2004, pp. 78-103)—I find Robert Solomon’s Sartrean-like conclusion apropos: “It would be nonsense to insist that regarding our emotional lives we are ‘the captains of our fate,’ but nevertheless, we are the oarsmen and that is enough to hold that we are responsible for our emotions” (Solomon in Hatzimoysis, ed., 2003, p. 18).
V – Some Psychological Obstacles to Rationality
Differences between psychology and philosophy come to the fore in any discussion of how people in fact think and our rational ideal of how they should think, there being a gap between these respective descriptive (psychological) and prescriptive (philosophical) pictures. In keeping with the entanglement of facts and values, and of facts and theories, noted above, it is implausible to imagine that descriptive psychological portraits are painted on a canvas free of background assumptions about what it means to think rationally. Unfortunately, but not without some methodological justification, the models of rationality common to psychologists and others in the social sciences are rather thin if not exclusively instrumentalist, models wherein rationality is, in Simon’s memorable phrase, “a gun for hire…in the service of whatever goals we have, good or bad” (Simon in Arkes and Hammond, eds., 1986, p. 97). I am not going to comment on these putatively descriptive models of rationality (for uncommonly lucid treatments, see Hausman and McPherson, 1996; and Sen, 2002). Rather, and more simply, I’m going to assume with Elster that irrationality is “neither marginal nor omnipresent,” and that psychologists have shown us various ways in which our attempts at thinking and acting rationally can be thwarted, break down, or run up against limitations. I am also assuming that informing our students of the psychological obstacles to critical thinking, or rationality more broadly, will do some good, that is, provide reasons (reasons here as causes) for them to struggle to avoid or overcome such obstacles, even if that struggle is something of a Sisyphean task.
Psychologists have discovered that we rely on heuristics, mental shortcuts or rules of thumb useful for thinking within specific situational or time constraints. Heuristics as such are neither intrinsically good nor bad, having proven themselves in the crucible of experience. However, heuristics can, on occasion, thwart or deflect the rational attainment of our goals, goals that act, in turn, as means that enable us to approach, instantiate, disseminate, sustain, or cherish our rationally chosen ends, commitments, and values. For example, the availability heuristic is used when we depend primarily on “ease of retrieval” from memory or a recent evidential search to make judgments:
People’s estimates of the frequency of causes of death are correlated with the frequency with which such causes appear in newspapers
independent of their actual frequency of occurrence. Thus deaths due to plane crashes, shark attacks, tornadoes, terrorism, and
other vivid, much reported causes are overestimated, whereas deaths due to strokes, stomach cancer, household accidents, and
lead paint poisoning are underestimated (p. 78).
Individuals distort estimates of the homeless who are mentally ill because their picture of “the homeless” is derived from their perceptions of or encounters with the memorable homeless, that is, those suffering from severe psychological and/or physical disabilities, rather than the unobtrusive homeless person who is rarely seen or easily forgotten. The cause of homelessness is thus misunderstood, attributed in the first place to the debilitation or disability of the homeless themselves, not to the fact that they are poor. This may entail a further belief that homelessness is a product of the deinstitutionalization of mental patients. Public estimates of the homeless who are mentally ill are therefore too high, and poverty becomes an incidental variable in the causal explanation of homelessness (pp. 86-87). The political ramifications of this sort of—metaphorically speaking—cognitive processing are obvious.
Story construction, like memory recognition and similarity judgments, appears to come naturally to us, acting as an “automatic cognitive capacity” that inclines us toward placing events in causal relations within an overarching determinative narrative framework. For instance, it is thought that jurors’ deliberative decision making is typically “driven primarily by the stories they construct to comprehend and remember the evidence presented to the court” (Hastie and Dawes, 2002, p. 135). The legal party—plaintiff or defendant—providing the preponderance of materials to aid or abet “a narrative summary of the events under dispute” therefore has a leg up on its opponent.
Cognitive psychologists provide us with compelling evidence that, in Elster’s words, “our minds play all sorts of tricks on us, and we on them” (Elster, 1983, p. 22). Elster has discussed how our desires and preferences can be shaped in irrational (i.e., non-autonomous) ways, for example, “by the adaptation of preferences to what is seen as possible” or, in short, by “sour grapes.” Adaptive preference formation takes place behind our backs, as the adjustment of wants to possibilities results from a non-conscious “drive to reduce the tension or frustration one feels in having wants that one cannot possibly satisfy” (p. 25). Elster wisely distinguishes this phenomenon from the intentional shaping of preferences or desires with “meta-preferences,” as in Stoic or Buddhist philosophy and moral psychology, or in contemporary psychological theories of self-control in which such deliberate efforts are part of strategic character planning. G.A. Cohen (1995) explains the mechanisms of adaptive preference formation such that the sour grapes syndrome is a specific form of a more general pattern:
In all adaptive preference formation, A is preferred to B because A is (readily) available and B is not, but the comparative
preference can be the upshot either of judging A better than it would otherwise (that is, but for the unavailability of B) be
judged or of judging B worse than it would otherwise be judged (or, of course, both). The fox is in the second position. He
downgrades the grapes he does not have: he does not upgrade the condition of grapelessness (pp. 254-255, n. 17).
Both Elster and Cohen have spoken to the individual and collective consequences of adaptive preference formation.
The concept of preferences being central to economics, wherein choices arise from constraints, preferences, and expectations, the notion of adaptive preference formation has wide and important application, especially to the extent that economics has typically taken preferences as “given,” thus not themselves in need of explanation or subject to rational appraisal. The economist’s commitment to utility theory leads to identifying human well-being with preference satisfaction. Apart from the phenomena of second- (or even third-) order preferences mentioned earlier in conjunction with character planning, Cohen mentions Not Crying Over Spilt Milk as an instance of rational adaptive preference, reiterated here by Martha Nussbaum (2000): “We have failed to reach the grapes, and we have shifted our preferences in keeping with that failure, judging that such likes are not for us. But clearly this is often a good thing, and we probably shouldn’t encourage people to persist in unrealistic expectations” (p. 138). Although we cannot view all adaptive preferences as irrational, those that are should capture the attention of economists and social choice theorists, as Amartya Sen has convincingly demonstrated in the case of women “who do not desire some basic human good because they have long been habituated to its absence, or told that it is not for such as them” (Nussbaum, 2000, p. 139). In other words,
Women who have been systematically oppressed may not have strong preferences for individual liberties, the same wages that
men earn, or even for protection from domestic violence. But liberties, high wages, and protection from domestic violence
may make them better off than giving them what they prefer. Satisfying preferences that result from coercion, manipulation, or
‘perverse’ preference formation mechanisms may not make people better off (Hausman and McPherson, 1996, p. 79).
An appreciation of irrational adaptive preferences calls into question basic assumptions of welfare economics and utilitarianism. Be that as it may, it seems there’s sufficient reason to listen to the findings of cognitive psychologists who have empirically demonstrated psychological mechanisms that interfere with rationality, findings that should be incorporated into our pedagogic endeavors to instill the rationalist norms and prescriptions of critical thinking. And incorporation of such findings need not entail subscribing to questionable assumptions and perspectives in the philosophy of mind.
VI – Analogical and Metaphorical Reasoning
- Critical thinking takes place in a mental environment consisting of our experiences, thoughts, and feelings. Some elements in this inner environment can sabotage our efforts to think critically or at least make critical thinking more difficult. Fortunately, we can exert some control over these elements. With practice, we can detect errors in our thinking, restrain attitudes and feelings that can disrupt our reasoning, and achieve enough objectivity to make critical thinking possible.
- The most common of these hindrances to critical thinking fall into two main categories: (1) those obstacles that crop up because of how we think and (2) those that occur because of what we think. The first category comprises psychological factors such as our fears, attitudes, motivations, and desires. The second category is made up of certain philosophical beliefs.
- None of us is immune to the psychological obstacles. Among them are the products of egocentric thinking. We may accept a claim solely because it advances our interests or just because it helps us save face. To overcome these pressures, we must (1) be aware of strong emotions that can warp our thinking, (2) be alert to ways that critical thinking can be undermined, and (3) ensure that we take into account all relevant factors when we evaluate a claim.
- The first category of hindrances also includes those that arise because of group pressure. These obstacles include conformist pressures from groups that we belong to and ethnocentric urges to think that our group is superior to others. The best defense against group pressure is to proportion our beliefs according to the strength of reasons.
- We may also have certain core beliefs that can undermine critical thinking (the second category of hindrances). Subjective relativism is the view that truth depends solely on what someone believes, a notion that may make critical thinking look superfluous. But subjective relativism leads to some strange consequences. For example, if the doctrine were true, each of us would be infallible. Also, subjective relativism has a logical problem: it’s self-defeating. Its truth implies its falsity. There are no good reasons to accept this form of relativism.
- Social relativism is the view that truth is relative to societies, a claim that would also seem to make critical thinking unnecessary. But this notion is undermined by the same kinds of problems that plague subjective relativism.
- Philosophical skepticism is the doctrine that we know much less than we think we do. One form of philosophical skepticism says that we cannot know anything unless the belief is beyond all possible doubt. But this is not a plausible criterion for knowledge. To be knowledge, claims need not be beyond all possible doubt, but beyond all reasonable doubt.