Introduction
Psychology is typically defined as the science of behavior and cognition and is considered a research-oriented discipline, not unlike biology, chemistry, and physics. To appreciate the role of experimentation in psychology, it is useful to view it in the context of the general scientific method employed by psychologists in conducting their research. This scientific method may be described as a four-step sequence starting with identifying a problem and forming a hypothesis. The problem must be one suitable for scientific inquiry—that is, questions concerning values, such as whether rural life is “better” than city life, are more appropriate for philosophical debate than scientific investigation. Questions better suited to the scientific method are those that can be answered through the objective collection of facts—for example, “Are children who are neglected by their parents more likely to do poorly in school than children who are well treated?” The hypothesis is the tentative guess, or the prediction regarding the question’s answer, and is based on other relevant research and existing theory. The second step, and the one with which this article is primarily concerned, is the collection of data (facts) to test the accuracy of the hypothesis. Any one of a number of methods might be employed, including simple observation, survey, or experimentation. The third step is to make sense of the facts that have been accumulated by subjecting them to careful analysis; the fourth step is to share any significant findings with the scientific community.
Research Approaches
In considering step two, the collection of data, it seems that people often mistakenly use the words research and experiment interchangeably. A student might ask whether an experiment has been done on a particular topic when, in fact, the student really wants to know if any kind of research has been conducted in that area. All experiments are examples of research, but not all research is experimental. Research that is nonexperimental in nature might be either descriptive or correlational.
Descriptive research is nearly self-explanatory; it occurs when the researcher wants merely to characterize the behaviors of an individual or, more likely, a group. For example, one might want to survey the students of a high school to ascertain the level of alcohol use (alcohol use might be described in terms of average ounces consumed per student per week). One might also spend considerable time observing individuals who have a particular condition, such as infantile autism. A thorough description of their typical behaviors could be useful for someone investigating the cause of this disorder. Descriptive research can be extremely valuable, but it is not useful when researchers want to investigate the relationship between two or more variables (things that vary or quantities that may have different values).
In a correlational study, the researcher measures how strongly the variables are related or the degree to which one variable predicts another variable. A researcher who is interested in the relationship between exposure to violence on television (variable one) and aggressive behavior (variable two) in a group of elementary school children could administer a survey asking the children how much violent television they view and then rank the subjects from high to low levels of this variable. The researcher could similarly interview the school staff and rank the children according to their aggressive behavior. A statistic called a correlation coefficient might then be computed, revealing how the two variables are related and the strength of that relationship.
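To make the computation concrete, a correlation coefficient such as Pearson's r can be calculated directly from two lists of scores. The sketch below is a minimal illustration; the viewing hours and aggressiveness ratings are invented, not data from any actual study.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples.

    Returns a value between -1 and +1: values near +1 indicate a strong
    positive relationship, values near 0 indicate little or no linear
    relationship, and values near -1 indicate a strong inverse relationship.
    """
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data for eight children (numbers invented for illustration):
# weekly hours of violent television viewed, and a staff rating of
# playground aggressiveness on a 0-10 scale.
hours = [2, 5, 1, 8, 3, 7, 4, 6]
rating = [3, 6, 2, 9, 4, 8, 5, 6]
r = pearson_r(hours, rating)
```

With these invented numbers r comes out close to +1, meaning that a child's viewing hours strongly predict the aggressiveness rating. Note that nothing in the calculation says which variable, if either, causes the other.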
Cause and Effect
Correlational studies are not uncommon in psychological research. Often, however, a researcher wants even more specific information about the relationships among variables—in particular, about whether one variable causes a change in another variable. In such a situation, experimental research is warranted. The chief drawback of the correlational approach—its inability to establish causal relationships—is worth considering for a moment. In the hypothetical study described above, the researcher may find that viewing considerable television violence predicts high levels of aggressive behavior, yet she cannot conclude that these viewing habits cause the aggressiveness. After all, it is entirely possible that aggressiveness, caused by some unknown factor, prompts a preference for violent television. That is, the causal direction is unknown: viewing television violence may cause aggressiveness, but the reverse (that aggressiveness causes the watching of violent television programs) is also plausible.
As this is a crucial point, one final illustration is warranted. What if, at a certain Rocky Mountain university, a correlational study has established that high levels of snowfall predict low examination scores? One should not conclude that something about the chemical composition of snow impairs the learning process. The correlation may be real and highly predictive, but the causal culprit may be some other factor. Perhaps, as snowfall increases, so does the incidence of illness, and it is this variable that is causally related to exam scores. Maybe, as snowfall increases, the likelihood of students using their study time for skiing also increases.
Experimentation is a powerful research method because it alone can reveal cause-effect relationships. In an experiment, the researcher does not merely measure the naturally occurring relationships between variables for the purpose of predicting one from the other; rather, he or she systematically manipulates the values of one variable and measures the effect, if any, that is produced in a second variable. The variable that is manipulated is known as the independent variable; the other variable, the behavior in question, is called the dependent variable (any change in it depends on the manipulation of the independent variable). Experimental research is characterized by a desire for control on the part of the researcher: control of the independent variable and control over extraneous variables. That is, the researcher seeks to eliminate or hold constant the factors other than the independent variable, known as control variables, that might influence the dependent variable. If adequate control is achieved, the researcher can be confident that it was, in fact, the manipulation of the independent variable that produced the change in the dependent variable.
Control Groups
Returning to the relationship between television viewing habits and aggressive behavior in children, suppose that correlational evidence indicates that high levels of the former variable predict high levels of the latter. Now the researcher wants to test the hypothesis that there is a cause-effect relationship between the two variables. She decides to manipulate exposure to television violence (the independent variable) to see what effect might be produced in the aggressiveness of her subjects (the dependent variable). She might choose two levels of the independent variable, having twenty children watch fifteen minutes of a violent detective show while another twenty children watch thirty minutes of the same show.
If an objective rating of playground aggressiveness later reveals more hostility in the thirty-minute group than in the fifteen-minute group, she still cannot be confident that higher levels of television violence cause higher levels of aggressive behavior. More information is needed, especially with regard to issues of control. To begin with, how does the researcher know that it is the violent content of the program that is promoting aggressiveness? Perhaps the more time children spend watching television, regardless of its content, the more aggressive they become.
This study needs a control group: a group of subjects identical to the experimental subjects with the exception that they do not experience the independent variable. In fact, two control groups might be employed, one that watches fifteen minutes and another that watches thirty minutes of nonviolent programming. The control groups serve as a basis against which the behavior of the experimental groups can be compared. If it is found that the two control groups aggress to the same extent, and to a lesser extent than the experimental groups, the researcher can be more confident that violent programming promotes relatively higher levels of aggressiveness.
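The comparison logic can be sketched numerically. Every score below is invented purely for illustration; the point is the pattern the researcher would look for, not the particular values.

```python
from statistics import mean

# Hypothetical playground aggressiveness scores (0-10 scale, invented).
# Two experimental groups (violent content) and two control groups
# (nonviolent content), at fifteen and thirty minutes of viewing.
groups = {
    "violent_15min":    [4, 5, 6, 5, 4],
    "violent_30min":    [7, 8, 6, 7, 8],
    "nonviolent_15min": [3, 4, 3, 4, 3],
    "nonviolent_30min": [3, 3, 4, 4, 3],
}
means = {name: mean(scores) for name, scores in groups.items()}
# If the two control groups score about equally, while both violent-content
# groups score higher, viewing time alone cannot explain the difference;
# the violent content becomes the more plausible cause.
```

In a real study the researcher would go on to test whether such differences in group means are statistically significant rather than eyeballing them, but the role of the control groups as a baseline is the same.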
The experimenter also needs to be sure that the children in the thirty-minute experimental group were not simply more aggressive to begin with. This possibility is of little concern if subjects are randomly assigned to the experimental and control groups. There are certainly individual differences among subjects in factors such as personality and intelligence, but random assignment is a technique for creating groups across which those individual differences are, on average, evenly dispersed.
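The procedure itself is simple to express: shuffle the subject pool, then deal subjects into groups. The sketch below assumes a pool of forty children split into two groups, as in the hypothetical study above; the `child_*` names are placeholders.

```python
import random

def random_assignment(subjects, n_groups):
    """Shuffle the subject pool, then deal subjects into groups round-robin.

    Because every subject is equally likely to land in any group,
    individual differences (personality, intelligence, and so on)
    are evenly dispersed across groups on average.
    """
    pool = list(subjects)
    random.shuffle(pool)
    return [pool[i::n_groups] for i in range(n_groups)]

children = [f"child_{i}" for i in range(40)]
experimental, control = random_assignment(children, 2)
# Each group receives twenty children, with no systematic difference
# between the groups other than chance variation.
```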
Subject Variables
The experimenter might also want to control, or hold constant, other variables. Perhaps she suspects that age, social class, ethnicity, and gender could also influence the children's aggressiveness. She might control these subject variables either by choosing subjects who are alike in these respects or by balancing the groups for these factors (for example, equal numbers of boys and girls in each group). Numerous other extraneous variables might concern the researcher, including the time of day when the children participate, the length of time between television viewing and the assessment of aggressiveness, the children's diets, the children's family structures (single or dual parents, siblings or only child), and the disciplinary styles used in their homes. Resource limitations prevent every extraneous variable from being controlled, yet the more control, the more confident the experimenter can be of the cause-effect relationship between the independent and dependent variables.
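Balancing a subject variable can be combined with randomization by shuffling within each subgroup before dealing subjects into groups, a technique commonly called stratified random assignment. The sketch below balances on gender; the subject tuples and counts are invented for illustration.

```python
import random
from collections import defaultdict

def balanced_assignment(subjects, key, n_groups):
    """Stratified random assignment.

    Subjects are first sorted into strata by the given key function
    (for example, gender), each stratum is shuffled, and its members
    are then dealt round-robin so every group receives a balanced
    share of each stratum.
    """
    strata = defaultdict(list)
    for s in subjects:
        strata[key(s)].append(s)
    groups = [[] for _ in range(n_groups)]
    for members in strata.values():
        random.shuffle(members)
        for i, m in enumerate(members):
            groups[i % n_groups].append(m)
    return groups

# Forty hypothetical children, half boys and half girls.
children = [(f"child_{i}", "girl" if i % 2 else "boy") for i in range(40)]
g1, g2 = balanced_assignment(children, key=lambda c: c[1], n_groups=2)
# Each group ends up with exactly ten boys and ten girls, while assignment
# within each gender remains random.
```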
Influence of Rewards
One more example of experimental research, this one nonhypothetical, will further illustrate the application of this methodology. In 1973, Mark Lepper, David Greene, and Richard Nisbett tested the hypothesis that when people are offered external rewards for performing activities that are naturally enjoyable, their interest in these activities declines. The participants in the study were nursery school children who had already demonstrated a fondness for coloring with marking pens; this was their preferred activity when given an opportunity for free play. The children were randomly assigned to one of three groups. The first group was told in advance that they would receive a “good player award” if they would play with the pens when later given the opportunity. Group two received the same reward but without advance notice; they were surprised by the reward. The last group of children was the control group; they were neither rewarded nor told to expect a reward.
The researchers reasoned that the first group of children, having played with the pens to receive a reward, would now perceive their natural interest in this activity as lower than before the study. Indeed, when all groups were later allowed a free play opportunity, it was observed that the “expected reward” group spent significantly less time than the other groups in this previously enjoyable activity. Lepper and his colleagues, then, experimentally supported their hypothesis and reported evidence that an expected reward causes interest in a previously pleasurable behavior to decline. This research has implications for instructors; they should carefully consider the kinds of behavior they reward (with gold stars, lavish praise, high grades, and so on), as they may, ironically, be producing less of the desired behavior. An academic activity that is enjoyable play for a child may become tedious work when a reward system is attached to it.
Criticisms
Although most would agree that the birth of psychology as a science took place in Leipzig, Germany, in 1879, when Wilhelm Wundt established the first laboratory for studying psychological phenomena, there is no clear record of the first use of experimentation. Regardless, there is no disputing the attraction that this method of research has had for many psychologists, who clearly recognize the usefulness of the experiment in investigating potential causal relationships between variables. Hence, experimentation is employed widely across the subfields of psychology, including developmental, cognitive, physiological, clinical, industrial, and social psychology.
This is not to say that all psychologists are completely satisfied with experimental research. It has been argued that an insidious catch-22 exists in some experimental research that limits its usefulness. The argument goes like this: experimenters are motivated to control rigorously the conditions of their studies and the relevant extraneous variables. To gain such control, they often conduct experiments in a laboratory setting. Therefore, subjects are often observed in an artificial environment, engaged in behaviors that are so controlled as to be unnatural, and they clearly know they are being observed—which may further alter their behavior. Such research is said to be lacking in ecological validity or applicability to “real-life” behavior. It may show how subjects behave in a unique laboratory procedure, but it tells little about psychological phenomena as displayed in everyday life. The catch-22, then, is that experimenters desire control to establish that the independent variable is producing a change in the dependent variable, and the more such control, the better; however, the more control, the more risk that the research may be ecologically invalid.
Field Experiments
Most psychologists are sensitive to issues of ecological validity and take pains to make their laboratory procedures as naturalistic as possible. Additionally, much research is conducted outside the laboratory in what are known as field experiments. In such studies, the subjects are unobtrusively observed (perhaps by a confederate of the researcher who would not attract their notice) in natural settings such as classrooms, playgrounds, or workplaces. Field experiments thus represent a compromise: there is bound to be less control than is obtainable in a laboratory, yet the behaviors observed are likely to be natural. Such naturalistic experimentation is likely to continue to increase in the future.
Although experimentation is only one of many methods available to psychologists, it fills a particular need, and that need is not likely to decline in the foreseeable future. In trying to understand the complex relationships among the many variables that affect the way people think and act, experimentation makes a valuable contribution: It is the one methodology available that can reveal unambiguous cause-effect relationships.