Introduction
Tasks involving logic and reasoning are typically classified as either deductive or inductive. In deductive reasoning, if the premises are true and a valid rule of inference is used, the conclusion must be true. In inductive reasoning, by contrast, the conclusion can be false even if the premises are true. In many cases, deductive reasoning also involves moving from general principles to specific conclusions, while inductive reasoning involves moving from specific examples to general conclusions.
Cognitive psychologists study deductive reasoning by examining how people reason using syllogisms, logical arguments comprising a major and a minor premise that lead to a conclusion. The premises are assumed to be true; the validity of the conclusion depends on whether a proper rule of inference is used. The classic example of deduction is as follows:
All men are mortal.
Socrates is a man.
Socrates is a mortal.
A more modern (and more controversial) example of deduction might be:
Abortion is murder.
Murder should be illegal.
Abortion should be illegal.
The second example prompts a distinction between “truth” and “validity.” Even though the second syllogism is logically valid, it may or may not be true. Broadly speaking, truth refers to content (that is, applicability of the conclusion to the real world), and validity refers to form (that is, whether the conclusion is drawn logically). It is thus possible to have a valid argument that is nevertheless untrue. For a clearer example, consider this syllogism:
All dinosaurs are animals.
All animals are in zoos.
All dinosaurs are in zoos.
The conclusion is validly drawn but is not true, because one of the premises (that all animals are in zoos) is false. Even though a proper rule of inference was applied, the conclusion does not hold in the real world. When a valid conclusion is drawn from true premises, however, the argument is called “sound.”
With inductive reasoning, the truth of the conclusion is less certain: even when the premises are true, the conclusion may turn out to be false. The classic example of induction is as follows:
Every crow I have seen in my life up to this time has been black.
All crows are black.
Other examples of induction include a child who begins to say “goed” (from “go”) instead of “went,” a detective piecing together evidence at the scene of a crime, and a stock analyst who, after observing that prices have fallen during the past two Septembers, urges clients to sell in August. In all these cases, a general conclusion is drawn from specific instances observed beforehand. There remains the possibility, however, that additional evidence may render the conclusion incorrect. It does not matter how many positive instances (for example, black crows, September stock declines) have been observed; if a single counterexample can be found (a white crow, a September stock rise), the conclusion is incorrect.
Heuristics
The study of induction spans a variety of methods and topics. In this article, most of the consideration of induction involves cases in which people rely on heuristics in their reasoning. Heuristics are rules of thumb that yield approximate, “ballpark” solutions and that can be applied across a wide range of problems.
One common heuristic is representativeness, which is invoked in answering questions such as the following: What is the probability that object A belongs to class B, that event A originates from process B, or that process B will generate event A? The representativeness heuristic suggests that such probabilities are evaluated by the degree to which A is representative of B, that is, by the degree to which A resembles B. If A is representative of B, the probability that A originates from B is judged to be high; if A does not resemble B, the probability that A originates from B is judged to be low.
A second heuristic is availability, which is invoked in judgments of frequency; specifically, people assess the frequency of a class by the ease with which instances of that class can be brought to mind. Factors that influence the ability to think of instances of a class, such as recency, salience, number of associations, and so forth, influence availability in such a way that certain types of events (such as recent and salient) are more available. For example, if several people one knows have had car accidents recently, one’s subjective probability of being in a car accident is increased.
Rules of Inference
Before examining how people reason deductively, two rules of inference must be considered: modus ponens (the “method of putting,” which involves affirming the antecedent) and modus tollens (the “method of taking,” which involves denying the consequent). Considering P and Q as content-free abstract variables (much like algebraic variables), modus ponens states that given “P implies Q” and P, one can infer Q. In the following example, applying modus ponens to 1 and 2 (in which P is “it rained last night” and Q is “the game was canceled”), one can infer 3.
1. If it rained last night, then the game was canceled.
2. It rained last night.
3. The game was canceled.
Modus tollens states that given “P implies Q” and ~Q (read “not Q”; “~” is a symbol for negation), one can infer “~P.” Applying modus tollens to 1 and 4, one can infer 5.
4. The game was not canceled.
5. It did not rain last night.
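Stated compactly in standard propositional notation (with an arrow for “implies” and ¬ for the “~” of negation), the two rules are:

```latex
\[
\frac{P \rightarrow Q \qquad P}{\therefore\; Q}\ \text{(modus ponens)}
\qquad\qquad
\frac{P \rightarrow Q \qquad \neg Q}{\therefore\; \neg P}\ \text{(modus tollens)}
\]
```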
In general, people apply modus ponens properly but do not apply modus tollens properly. In one experiment, four cards showing the following letters or numbers were placed in front of subjects:
E K 4 7
Subjects saw only one side of each card but were told that a letter appeared on one side and a number on the other side. Subjects judged the validity of the following rule by turning over only those cards that provided a valid test: If a card has a vowel on one side, then it has an even number on the other side. Turning over E is a correct application of modus ponens, and turning over 7 is a correct application of modus tollens (consider P as “vowel on one side” and Q as “even number on the other side”). Almost 80 percent of subjects turned over E only or E and 4, while only 4 percent of subjects chose the correct answer, turning over E and 7. While many subjects correctly applied modus ponens, far fewer correctly applied modus tollens. Additionally, many subjects turned over 4, an error called affirmation of the consequent.
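The selection logic can be made explicit in a short sketch (hypothetical code, not part of the original experiment): a card is worth turning over only if its hidden side could combine with the visible side to violate the rule “vowel implies even number.”

```python
# Hypothetical sketch of the card-selection logic for the rule
# "If a card has a vowel on one side, then it has an even number on the other."

def worth_turning(visible: str) -> bool:
    """Return True if the hidden side of this card could violate the rule."""
    if visible.isalpha():
        # A letter card can violate the rule only if it shows a vowel
        # and hides an odd number (the modus ponens test).
        return visible.upper() in "AEIOU"
    # A number card can violate the rule only if it shows an odd number
    # and hides a vowel (the modus tollens test).
    return int(visible) % 2 == 1

cards = ["E", "K", "4", "7"]
print([card for card in cards if worth_turning(card)])  # -> ['E', '7']
```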
When stimuli are concrete, reasoning improves. In an analogous experiment, four cards with the following information were placed before subjects:
beer Coke 16 22
One side of each card showed a person’s drink; the other side showed a person’s age. Subjects evaluated this rule: If a person is drinking beer, that person must be at least nineteen. In this experiment, nearly 75 percent of the subjects made the correct selections, showing that in some contexts people are more likely to apply modus tollens properly.
When quantifiers such as “all,” “some,” and “none” are used within syllogisms, additional errors in reasoning occur. People are more likely to accept positive conclusions to positive premises and negative conclusions to negative premises, negative conclusions if premises are mixed, a universal conclusion if premises are universal (all or none), a particular conclusion if premises are particular (some), and a particular conclusion if one premise is general and the other is particular. These observations led to the atmosphere hypothesis, which suggests that the quantifiers within the premises create an “atmosphere” predisposing subjects to accept as valid conclusions that use the same quantifiers.
Influence of Knowledge and Beliefs
Prior knowledge or beliefs can influence reasoning if people neglect the form of the argument and concentrate on the content; this is referred to as the belief-bias effect. If a valid conclusion appears unbelievable, people reject it, while a conclusion that is invalid but appears believable is accepted as valid. Many people accept this syllogism as valid:
All oak trees have acorns.
This tree has acorns.
This tree is an oak tree.
Consider, however, this logically equivalent syllogism:
All oak trees have leaves.
This tree has leaves.
This tree is an oak tree.
In the first syllogism, people’s knowledge that only oak trees have acorns leads them to accept the conclusion as valid. In the second syllogism, people’s knowledge that many types of trees have leaves leads them to reject the conclusion as invalid.
Biases in Reasoning
A common bias in inductive reasoning is the confirmation bias, the tendency to seek confirming evidence and not to seek disconfirming evidence. In one study, subjects were presented with the number series (2, 4, 6) and asked to discover the rule (concept) that had generated it by proposing additional series of their own. In testing their hypotheses, many subjects produced series to confirm their hypothesis of “even numbers ascending by 2”—for example, (20, 22, 24) or (100, 102, 104)—but few produced series to disconfirm it—for example, (1, 3, 5) or (20, 50, 187). In fact, any ascending series (such as 32, 69, 100,005) would have satisfied the general rule, but because subjects did not seek to disconfirm their more specific rules, they did not discover the more general rule.
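The value of a disconfirming test can be illustrated with a short sketch (hypothetical code; the function names and triples are illustrative, not from the original study): a confirming triple fits both the subject’s narrow hypothesis and the experimenter’s broader rule, so it cannot tell them apart, whereas a triple chosen to violate the hypothesis can.

```python
# Hypothetical sketch of the 2-4-6 task: a narrow hypothesis versus the
# experimenter's broader rule, and why only disconfirming tests separate them.

def evens_ascending_by_two(t):      # the subject's specific hypothesis
    a, b, c = t
    return all(x % 2 == 0 for x in t) and b - a == 2 and c - b == 2

def any_ascending(t):               # the experimenter's actual rule
    a, b, c = t
    return a < b < c

tests = [(20, 22, 24),   # confirming test: fits both rules, uninformative
         (1, 3, 5)]      # disconfirming test: violates the hypothesis, fits the rule

for triple in tests:
    print(triple, evens_ascending_by_two(triple), any_ascending(triple))
# (20, 22, 24) True True   -- consistent with both, so it settles nothing
# (1, 3, 5) False True     -- shows the hypothesis is too narrow
```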
Heuristics also lead to biases in reasoning. In one study, subjects were told that bag A contained ten blue and twenty red chips, while bag B contained twenty blue and ten red chips. On each trial, the experimenter selected one bag; subjects knew that bag A would be selected on 80 percent of the trials. The subject then drew three chips from the selected bag and had to judge whether it was A or B. When subjects drew two blues and one red, all were confident that B had been selected. If the probability for that sample is actually calculated, however, the odds are 2:1 that it came from A. People chose B because the sample of chips resembles (represents) B more than A, and they ignored the prior probability of 80 percent that the bag was A.
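The 2:1 figure follows from Bayes’s theorem: under bag A the chance of drawing a blue chip is one-third, under bag B it is two-thirds, and the 80 percent prior favors A. A worked version of the calculation (the factor counting the possible orders of the three draws is the same for both bags and cancels):

```latex
\[
\frac{P(A \mid 2\ \text{blue},\ 1\ \text{red})}{P(B \mid 2\ \text{blue},\ 1\ \text{red})}
= \frac{P(A)}{P(B)} \cdot
  \frac{\left(\tfrac{1}{3}\right)^{2} \tfrac{2}{3}}
       {\left(\tfrac{2}{3}\right)^{2} \tfrac{1}{3}}
= \frac{0.8}{0.2} \cdot \frac{2/27}{4/27}
= 4 \cdot \tfrac{1}{2}
= 2
\]
```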
In another experiment, subjects were shown descriptions of “Linda” that made her appear to be a feminist. Subjects rated the probability that Linda was a bank teller and a feminist higher than the probability that Linda was a bank teller. The probability of a conjunction of events, however, can never exceed the probability of either event alone, so the probability that Linda was a bank teller and a feminist could not be higher than the probability that she was a bank teller. Reliance on representativeness leads to overestimation of the probability of a conjunction of events.
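Formally, the conjunction rule follows from the definition of conditional probability: because no conditional probability can exceed 1,

```latex
\[
P(\text{teller} \wedge \text{feminist})
= P(\text{teller}) \, P(\text{feminist} \mid \text{teller})
\le P(\text{teller})
\]
```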
Reliance on representativeness also leads to the gambler’s fallacy. This fallacy rests on the belief that even a small sample drawn from an infinite, randomly distributed population must itself appear randomly distributed.
Consider a chance event such as flipping a coin (H represents “heads,” T represents “tails”). Which sequence is more probable: HTHTTH or HHHHHH? Subjects judge that the first sequence is more probable, but both are equally probable. The second sequence, HHHHHH, does not appear to be random, however, and so is believed to be less probable. After a long run of H, people judge T as more probable than H because the coin is “due” for T. A problem with the idea of “due,” though, is that the coin itself has no memory of a run of H or T. As far as the coin is concerned, on the next toss there is 0.5 probability of H and 0.5 probability of T. The fallacy arises because subjects expect a small sample from an infinitely large random distribution to appear random. The same misconceptions are often extended beyond coin-flipping to all games of chance.
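Because the coin is fair and the tosses are independent, every particular sequence of six tosses, patterned-looking or not, is equally likely:

```latex
\[
P(\text{HTHTTH}) = P(\text{HHHHHH}) = \left(\tfrac{1}{2}\right)^{6} = \tfrac{1}{64}
\]
```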
In fallacies of reasoning resulting from availability, subjects misestimate frequencies. When subjects estimated whether more English words begin with R or have R as their third letter, they judged that more words begin with R; in fact, more than three times as many words have R as their third letter. For another example, consider the following problem. Ten people are available and need to be organized into committees. Can more distinct committees of two be formed, or more committees of eight? Subjects claimed that more committees of two could be formed, probably because it is easier to visualize a larger number of small committees, but equal numbers of committees can be formed in both cases (see the calculation following this paragraph). In both examples, the class for which it is easier to generate examples is judged to be the more frequent or numerous. An additional aspect of availability involves causal scenarios (sometimes referred to as the simulation heuristic), stories or narratives in which one event causes another and which lead from an original situation to an outcome. If a causal scenario linking an original situation and an outcome is easily available, that outcome is judged to be more likely.
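The committee result reflects a basic symmetry of combinations: selecting the two people who serve is the same as selecting the eight who are left off, so the two counts are identical:

```latex
\[
\binom{10}{2} = \binom{10}{8} = \frac{10!}{2!\,8!} = 45
\]
```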
Evolution of Study
Until the twentieth century, deductive logic and the psychology of human thought were considered to be the same topic. The mathematician George Boole entitled his 1854 book on logical calculus An Investigation of the Laws of Thought; the book was designed “to investigate the fundamental laws of those operations of the mind by which reasoning is performed.” Humans did not always seem to operate according to the prescriptions of logic, but such lapses were seen as the malfunctioning of the mental machinery. When the mental machinery functioned properly, humans were logical. Indeed, it is human rationality, the ability to think logically, that for many thinkers throughout time has separated humans from other animals (for example, Aristotle’s man as rational animal) and defined the human essence (for example, René Descartes’s “I think, therefore I am”).
Reasoning is a quintessential mental process, and its study is an integral part of modern cognitive psychology. In the mid-twentieth century, however, when psychology was in the grip of the behaviorist movement, little attention was given to such mentalistic conceptions, with the exception of isolated works such as Frederic C. Bartlett’s studies of memory and Jerome Bruner, Jacqueline J. Goodnow, and George A. Austin’s landmark publication A Study of Thinking (1956), which dealt with, among other topics, induction and concept formation. The development of the digital computer, and its subsequent application as a metaphor for the human mind, suggested new methods and vocabularies for investigating mental processes such as reasoning. With the ascendancy of the cognitive approach within experimental psychology and the emergence of cognitive science, research on human reasoning has become central to attempts both to understand the human mind and to build machines capable of independent, intelligent action.
Involvement of Computers
In the latter part of the twentieth century, there were attempts to simulate human reasoning with computers and to develop computers capable of humanlike reasoning. One notable attempt involved the work of Allen Newell and Herbert Simon, who provided human subjects with various sorts of problems to solve. Their human subjects would “think out loud,” and transcripts of what they said became the basis of computer programs designed to mimic human problem solving and reasoning. Thus, the study of human logic and reasoning not only furthered the understanding of human cognitive processes but also gave guidance to those working in artificial intelligence. One caveat, however, is that even though such transcripts may serve as a model for computer intelligence, there remain important differences between human and machine “reasoning.” For example, in humans, the correct application of some inference rules (for example, modus tollens) depends on the context (for example, the atmosphere hypothesis or the belief-bias effect). Furthermore, not all human reasoning may be strictly verbalizable, and to the extent that human reasoning relies on nonlinguistic processes (such as imagery), it might not be possible to mimic or re-create it on a computer.
After long being either assumed to be logical or ignored by science altogether, human reasoning is finally being studied for what it is. In solving logical problems, humans do not always comply with the dictates of logical theory; the solutions reached may be influenced by the context of the problem, by prior knowledge or belief, and by the particular heuristics used in reaching a solution. Discovery of the structures, processes, and strategies involved in reasoning promises to increase the understanding not only of how the human mind works but also of how to develop artificially intelligent machines.
Bibliography
Halpern, Diane F. Thought and Knowledge: An Introduction to Critical Thinking. 4th ed. Hillsdale: Erlbaum, 2003. Print.
Holland, John H., et al. Induction: Processes of Inference, Learning, and Discovery. Reprint. Cambridge: MIT Press, 1989. Print.
Holyoak, Keith James, and Robert G. Morrison. The Oxford Handbook of Thinking and Reasoning. Oxford: Oxford UP, 2012. Print.
Johnson, Robert M. A Logic Book: Fundamentals of Reasoning. 5th ed. Belmont: Wadsworth, 2007. Print.
Johnson-Laird, Philip Nicholas. Mental Models. Cambridge: Harvard UP, 1983. Print.
Kahneman, Daniel, Paul Slovic, and Amos Tversky, eds. Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge UP, 2007. Print.
Kelley, David. The Art of Reasoning. 3rd ed. New York: Norton, 1998. Print.
Manktelow, Kenneth Ian. Thinking and Reasoning: An Introduction to the Psychology of Reason, Judgment and Decision Making. Hove: Psychology Press, 2012. Print.
Ribeiro, Henrique Jales. Inside Arguments: Logic and the Study of Argumentation. Newcastle upon Tyne: Cambridge Scholars, 2012. Print.
Sternberg, Robert J., and Talia Ben-Zeev. Complex Cognition: The Psychology of Human Thought. New York: Oxford UP, 2001. Print.
Weizenbaum, Joseph. Computer Power and Human Reason II. New York: Freeman, 1997. Print.