Though John B. Watson was the originator of behaviorism, it is through the efforts of Burrhus Frederic (B. F.) Skinner that behaviorist principles have found widespread practical application, through the therapeutic approach known as behavior modification, or applied behavior analysis. Like Watson, Skinner envisioned psychology free of mentalistic principles, focused only on observable, overt behavior. As mental phenomena cannot be directly observed, the behaviorists did not see them as a proper subject for scientific study. Skinner eventually came to embrace the study of thought processes, though he regarded the object of study as thinking behavior, subject to the same learning principles as other behaviors.
Skinner’s greatest contribution is probably the distinction he drew between classical (respondent) conditioning (see Pavlov, Ivan), which was Watson’s primary mechanism for explaining human behavior, and operant conditioning. Classical conditioning involves the study of how behaviors, such as a dog’s reflexive salivation, are elicited by stimulus conditions such as the presence of dog food. Operant conditioning instead involves the role of the consequences of a behavior in determining the likelihood of that behavior occurring again. Two different categories of consequences determine behavior: reinforcement and punishment.
Reinforcement is any consequence that increases the frequency of the response that precedes it, whereas punishment is any consequence that decreases the frequency of the behavior. For example, praising a young child for saying a particular word (e.g., “Daddy”) will make the child more likely to say it again, whereas yelling at the child after he says a particular word will make him less likely to say it again. Although most people think of reinforcers as rewards, anything that serves to strengthen a behavior is a reinforcer.
There are actually two different kinds of reinforcement: positive and negative. Positive reinforcement involves presenting a pleasant stimulus (such as food, attention, approval, or money) after a behavior, whereas negative reinforcement involves removing an aversive stimulus after the behavior. For example, the warning buzzer in a car is turned off in response to putting on a seatbelt; pain goes away in response to taking a pill. In common usage, negative reinforcement is often confused with punishment, but the difference between them is quite straightforward: punishment reduces a behavior by applying an aversive stimulus, whereas negative reinforcement increases a behavior by removing an aversive stimulus. If a parent yells at a child in response to a bad behavior, and the behavior stops, punishment of the child has occurred. If a child throws a tantrum in a store because he wants candy, and the parent stops the tantrum by giving the child candy, the parent's candy-giving behavior has been negatively reinforced: an aversive stimulus (the tantrum) stopped as a consequence of the behavior, so that behavior is now more likely to occur again under similar circumstances.
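The distinctions above reduce to two questions: did the behavior become more or less frequent, and was a stimulus presented or removed? As a purely illustrative sketch (the function name and argument values are my own, not standard terminology), the logic can be written out as:

```python
def classify_consequence(behavior_change, stimulus_change):
    """Name an operant consequence from its two defining features.

    behavior_change: 'increases' or 'decreases' (effect on the behavior)
    stimulus_change: 'presented' or 'removed' (what happened afterward)
    """
    if behavior_change == "increases":
        # Anything that strengthens a behavior is reinforcement...
        if stimulus_change == "presented":
            return "positive reinforcement"  # e.g., praise after "Daddy"
        return "negative reinforcement"      # e.g., buzzer stops after buckling up
    # ...and anything that weakens it is punishment.
    return "punishment"                      # e.g., yelling after a bad behavior
```

The seatbelt case, for instance, is `classify_consequence("increases", "removed")`: buckling up becomes more frequent because an aversive stimulus (the buzzer) goes away.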
Skinner further distinguished between primary and secondary reinforcers. Primary reinforcers, such as food, water, or relief from pain, are innately reinforcing, as they satisfy a biological need and do not have to be learned. Secondary reinforcers, also called conditioned reinforcers, are learned, as they only acquire power through their association with primary reinforcers. Money is a powerful reinforcer for humans, despite the fact that it is not edible and possesses no healing power. It does allow us to obtain primary reinforcers, however—though we cannot eat money, we can certainly use it to buy food.
Skinner explored the principles of operant conditioning through the use of a specialized cage called an operant chamber, popularly known as a Skinner Box (a name Skinner himself disliked). The glass-and-metal chamber is typically large enough for a rat or pigeon, Skinner’s preferred experimental subjects, to walk around in comfortably, and is equipped with a bar that a rat can press or a key that a pigeon can peck, a small chute near the bar through which edible reinforcers can be dropped, and a device that records the animal's responses.
Using the chamber, Skinner discovered that he could produce remarkably complicated behavior patterns through a process he called shaping. In shaping, a behavior that is not already occurring, such as a rat's pressing a bar, can be produced by reinforcing successive approximations of the desired behavior until the target behavior occurs on its own, at which point it becomes the only behavior reinforced. Upon entering the chamber, a rat will typically explore his surroundings, walking around the entire chamber and sniffing all surfaces. In this exploration, he will eventually come into contact with the wall into which the bar is set. The moment this contact occurs, a food pellet is dropped down the chute. If the wall is touched again, reinforcement will immediately occur again, and the rat will soon spend more time against that wall. In his movements, the rat will occasionally raise his body up and reach up the wall with his front paws. The first time this occurs (a closer approximation of the desired bar press than simply touching the wall), immediate reinforcement follows, while reinforcement for other behaviors stops. Soon the rat will be reaching up frequently, and then only reaching that occurs near the bar is reinforced. Eventually, the rat will press the bar, and from that point on only bar pressing is reinforced. Through shaping, the rat is now engaging in a behavior that was not previously in his repertoire. The shaping procedure can be remarkably powerful, as shown by a famous film Skinner shot of two pigeons playing a lively game of table tennis.
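The ratcheting logic of shaping can be sketched as a toy simulation. This is only an illustration of the idea of reinforcing successive approximations, not a model of any actual experiment; every name and number here is invented. The simulated rat emits behaviors that vary around its current habit; a reinforced behavior becomes the new habit, and the criterion then rises so that cruder approximations stop being reinforced:

```python
import random

def shape(target=10.0, trials=500, seed=1):
    """Toy shaping: reinforce successive approximations of `target`.

    `habit` is where the rat's behavior currently centers;
    `criterion` is the approximation currently being reinforced.
    """
    rng = random.Random(seed)
    habit = 0.0
    criterion = 1.0
    for _ in range(trials):
        behavior = habit + rng.uniform(-1.0, 2.0)    # exploratory variation
        if behavior >= criterion:                    # close enough: food pellet
            habit = behavior                         # reinforced behavior recurs
            criterion = min(target, behavior + 0.5)  # raise the bar
        # unreinforced behavior simply leaves the habit unchanged
    return min(habit, target)
```

Run repeatedly, the criterion climbs until only the target behavior itself earns a pellet, mirroring how the rat's wall-touching gives way to rearing, reaching, and finally bar pressing.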
All the examples above assume continuous reinforcement—reinforcement occurs every time the desired response occurs. This pattern of reinforcement has a built-in weakness, however—when reinforcement stops, extinction (the response dies out) occurs rapidly. If the experimenter stops providing food pellets, the rat stops pressing the bar, just as when the soda machine fails to provide a drink, we immediately stop putting money into it. This is interesting, because real life usually does not provide continuous reinforcement—the sales associate does not make a sale to every customer, nor does the fisherman always bring home a catch. Skinner observed that, in the lab as in real life, intermittent reinforcement, in which some responses are reinforced and some are not, produces behaviors that are far more resistant to extinction. How else to explain the behavior of gamblers in casinos, who despite rarely winning, will continue to pump money into slot machines? The soda machine, which reinforces continuously, loses the customer the first time no drink comes out, but the slot machine only reinforces occasionally, so the behavior of putting in money and pushing a button is far more persistent. The most common intermittent schedules of reinforcement are ratio schedules and interval schedules.
In a fixed-ratio schedule, behavior is reinforced after a set number of responses, as in a clothing factory where a worker is paid a set amount for every ten shirts produced. A variable-ratio schedule provides reinforcement after an unpredictable number of responses, and it produces behaviors that are difficult to extinguish. This is the schedule followed by slot machines: because an unknown number of responses will be required before reinforcement occurs, the schedule produces high rates of responding, since that is the only way to increase the frequency of reinforcement.
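The two ratio schedules can be expressed as small reinforcement "dispensers". This is a hedged sketch under my own naming, not any standard implementation; each call to `respond()` represents one response and returns whether a reinforcer is delivered:

```python
import random

def fixed_ratio(n):
    """Reinforce every n-th response, like pay per ten shirts."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True   # reinforcer delivered
        return False
    return respond

def variable_ratio(mean, rng=random):
    """Reinforce after an unpredictable number of responses
    (averaging `mean`), like a slot machine."""
    count, required = 0, rng.randint(1, 2 * mean - 1)
    def respond():
        nonlocal count, required
        count += 1
        if count >= required:
            count, required = 0, rng.randint(1, 2 * mean - 1)
            return True
        return False
    return respond

# A worker paid per ten shirts: 100 responses yield exactly 10 payments.
fr10 = fixed_ratio(10)
payments = sum(fr10() for _ in range(100))
# payments == 10
```

The variable-ratio dispenser pays off the same amount on average, but because the gambler never knows which response will be the winning one, rapid responding is the only way to reach the next reinforcer sooner.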
Interval schedules are based on elapsed time rather than number of responses. In a fixed-interval schedule, the first response after a fixed time period is reinforced, but responses that occur prior to the end of the interval are not reinforced. This leads to an increased frequency of responses as the end of the interval approaches, with very low responding at the beginning of the interval. An example would be checking more frequently for the mail as the delivery time approaches, but not checking at all when the usual time is still a long way off. If a consistently high rate of response is desired, this is clearly not an ideal schedule. A solution to the problem of inconsistent responding is the variable-interval schedule, in which the time interval prior to reinforcement is varied unpredictably. This results in slow, steady responding, which is resistant to extinction.
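Interval schedules can be sketched the same way, again as an invented illustration rather than a standard implementation. Here each response carries a timestamp, and only the first response after the interval has elapsed is reinforced:

```python
import random

def fixed_interval(period):
    """Reinforce the first response after `period` time units; responses
    before the interval elapses earn nothing (like checking the mail early)."""
    next_available = period
    def respond(t):   # t = the time at which the response occurs
        nonlocal next_available
        if t >= next_available:
            next_available = t + period   # the clock restarts after reinforcement
            return True
        return False
    return respond

def variable_interval(mean_period, rng=random):
    """Same idea, but the interval varies unpredictably around `mean_period`."""
    next_available = rng.uniform(0.0, 2.0 * mean_period)
    def respond(t):
        nonlocal next_available
        if t >= next_available:
            next_available = t + rng.uniform(0.0, 2.0 * mean_period)
            return True
        return False
    return respond

# Mail delivered every 60 minutes: checks at 10 and 59 earn nothing, the
# first check after the interval (61) is reinforced, and the clock restarts.
check = fixed_interval(60)
results = [check(t) for t in (10, 59, 61, 100, 130)]
# results == [False, False, True, False, True]
```

Because early responses are never reinforced on the fixed schedule, responding rationally clusters near the end of each interval; the variable schedule removes that predictability, which is why it yields the slow, steady responding described above.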
Much of the controversy over Skinner stems from his willingness to explore the philosophical implications of his ideas in books such as Beyond Freedom and Dignity and the novel Walden Two. Since his theory ignores mental phenomena and proposes that all behavior is under the control of external contingencies, it leaves no room for such notions as free will and personal freedom. His critics see him as dehumanizing people, both by denying free will and by suggesting that our behavior can be explained by the same mechanisms as that of animals. All controversy aside, however, Skinner’s legacy is a set of principles which have found much broader, and more effective, application than the theories of any other psychologist. Behaviorist principles are now a major influence on educational practice; childrearing, where his terminology is now as ubiquitous as Freud’s used to be; and highly effective therapeutic approaches for many psychological disorders.