Sex Love & Porn Addiction: THE PAVLOVIAN THERAPIES
Two Pavlovian therapies involving extinction have been applied to sex, love, and porn addiction (and to other disorders). The first, systematic desensitization, was developed by Joseph Wolpe, then a South African psychiatrist. Wolpe's (1969) therapy consists of having the phobic patient imagine a series of gradually more frightening scenes involving the phobic object while engaged in deep muscular relaxation.
For example, the cat phobic will think about the fear-evoking object, the cat (CS), at the same time as he is making a response incompatible with fear. Pavlovian extinction occurs with this exposure to the CS (thoughts about, and eventually the actual, phobic object) without the US (the original trauma) and the UR (terror). This is one of the critical aspects of the therapy. Another is the substitution of a pleasant response (relaxation) for the unpleasant response of fear. Systematic desensitization has been shown to be highly effective in curing phobias in a brief time, without the recurrence of other symptoms.
The second behavior therapy, also used with phobias, is flooding. In flooding, the phobic patient is immersed in the phobic situation (either real or imagined) for several consecutive hours. For example, a claustrophobic (who is terrified of being in small, enclosed places) would be placed in a closet. After a while, the person would no longer be terrified of being enclosed (Marks, 1969). Flooding is a Pavlovian extinction procedure; the CS (the phobic situation) is presented without the US (the original trauma), and fear of the CS diminishes (Stampfl and Levis, 1967).
Pavlovian conditioning, then, provides a theory of how we normally learn to feel a given emotion toward a given object. By applying its basic phenomena to emotional disorders, we can arrive at a theory of how emotional disorders come about, and we can deduce a set of therapies that should undo abnormal emotional responses.
OPERANT CONDITIONING
At about the same time as Pavlov discovered an objective way of studying how we learn "what goes with what," Edward L. Thorndike (1874-1949) founded the objective study of how we learn "what to do to get what we want." Thorndike was studying animal intelligence. In one series of experiments he put hungry cats in puzzle boxes and observed how they learned to escape confinement and get food. He designed various boxes (some had levers to push, others strings to pull, some shelves to jump on) and left food, often fish, outside the box. The cat had to make the correct response to escape from the puzzle box.
Thorndike's first major discovery was that learning what to do was gradual, not insightful. That is, the cat proceeded by trial and error. On the first few trials, the time to escape was very long; but with repeated success, the time gradually shortened to a few seconds. To explain his findings, Thorndike formulated the "law of effect." Still a major principle, it holds that when, in a given stimulus situation, a response is made and followed by positive consequences, the response will tend to be repeated; when followed by negative consequences, it will tend not to be repeated. Thorndike's work, like Pavlov's, provided an objective way of studying the properties of learning. This tradition was refined, popularized, and applied to a range of real-life settings by B. F. Skinner, who worked largely with rats pressing levers for food and with pigeons pecking lighted discs for grain. It was Skinner who formulated the basic concepts of operant conditioning.
THE CONCEPTS OF OPERANT CONDITIONING
Through his basic concepts, Skinner defined the elements of the law of effect rigorously. His three basic concepts are the reinforcer (both positive and negative), the operant, and the discriminative stimulus. A positive reinforcer is an event that increases the probability that the response preceding it will occur again. In effect, a positive reinforcer rewards behavior. A negative reinforcer is an event that decreases the probability of recurrence of a response that precedes it. We also call this punishment, or an aversive event. The omission of a negative reinforcer increases the probability of a response that precedes such an omission. An operant is a response whose probability can be either increased by positive reinforcement or decreased by negative reinforcement. If a mother reinforces her twelve-month-old child with a hug every time he says "Daddy," the probability that he will say it again is increased. In this case, the operant is saying "Daddy." If the mother hugs the child for saying "Daddy" only when the child's father is in sight, and does not hug him for saying "Daddy" when the father is not around, she is teaching the child to respond to a discriminative stimulus. In this case, the father being in sight is the discriminative stimulus, a signal that reinforcement is available if the operant is made.
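These contingencies can be sketched as a small toy simulation. To be clear, everything below is an illustrative assumption rather than anything from the text: the class name, the 0.1 starting probability, the 0.05 learning step, and the probability-update rule are all hypothetical, chosen only to show how a positive reinforcer raises, and an aversive event lowers, the probability of an operant, and how a discriminative stimulus gates reinforcement.

```python
import random

class Operant:
    """Toy law-of-effect model: the probability of a response rises after
    positive reinforcement and falls after an aversive event."""

    def __init__(self, p=0.1, step=0.05):
        self.p = p        # current probability of emitting the response
        self.step = step  # hypothetical learning increment

    def emit(self):
        """Emit the response with the current probability."""
        return random.random() < self.p

    def positive_reinforcer(self):
        """E.g. the mother's hug: raise the response probability."""
        self.p = min(1.0, self.p + self.step)

    def aversive_event(self):
        """A negative reinforcer in the text's usage: lower the probability."""
        self.p = max(0.0, self.p - self.step)

random.seed(0)  # reproducible toy run
say_daddy = Operant()
for trial in range(200):
    father_in_sight = trial % 2 == 0  # toy discriminative stimulus
    # Hug only when the response occurs in the father's presence:
    if say_daddy.emit() and father_in_sight:
        say_daddy.positive_reinforcer()

print(round(say_daddy.p, 2))  # learned probability of saying "Daddy"
```

In this sketch the response is reinforced only in the presence of the discriminative stimulus, so its probability grows with training; replacing the hug with `aversive_event()` would drive it back toward zero.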
THE OPERANT PHENOMENA
Acquisition and Extinction
The phenomena of acquisition and extinction in the operant conditioning of voluntary responses parallel those in the Pavlovian conditioning of involuntary responses. Consider a typical operant paradigm. A hungry rat is placed inside an operant chamber. The desired operant is the pressing of a lever. Each time the rat presses the lever, food is delivered down a chute. During this acquisition procedure, learning to lever press proceeds gradually, as shown in the figure; it takes about ten sessions for the rat to learn to press at a high and constant rate. Extinction is then begun (in session 22): the reinforcer (food) is no longer delivered when the rat presses the lever. As a result, responding gradually diminishes back to zero.
Partial Reinforcement and Schedules of Reinforcement
An operant experimenter can arrange a rich variety of relationships between the responses his subjects make and the reinforcers they receive. In the simplest relationship, a reinforcer is delivered each and every time the subject makes the response. This is called continuous reinforcement (CRF). For example, every time the rat presses the lever, a food pellet arrives. In the real world, however, reinforcements do not usually come with such consistency. More often, reinforcement occurs for only some of the responses that are made, and many responses are in vain. To capture this, the experimenter arranges matters so that reinforcement is delivered for only some of the responses the subject makes. This is called a partial, or intermittent, reinforcement schedule. For example, the rat might receive one food pellet only after it has pressed the lever fifty times, rather than for each press.
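The difference between continuous and fixed-ratio partial reinforcement comes down to a one-line delivery rule. The sketch below is illustrative only; the function name and the 100-press tallies are assumptions, with `ratio=1` standing for CRF and `ratio=50` for the fifty-presses-per-pellet schedule described above.

```python
def reinforced(press_count, ratio):
    """Fixed-ratio delivery rule: a pellet arrives on every `ratio`-th press.
    ratio=1 is continuous reinforcement (CRF); ratio=50 is the partial
    schedule in which fifty presses earn one pellet."""
    return press_count % ratio == 0

# Pellets earned over the same 100 presses under each schedule:
crf_pellets = sum(reinforced(n, 1) for n in range(1, 101))
fr50_pellets = sum(reinforced(n, 50) for n in range(1, 101))
print(crf_pellets, fr50_pellets)  # 100 pellets versus 2
```

The same hundred responses yield a hundred pellets under CRF but only two under the partial schedule, which is why partial schedules extract so much work for so little payoff.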
Partial reinforcement schedules make initial learning slower, but they have two other properties that are important for engineering human behavior. First, a great deal of work can be produced for very little payoff: for one small food pellet, a rat or a person can be made to emit hundreds of responses. The second property has to do with extinction and is called the partial reinforcement extinction effect. After a subject has been partially reinforced for a response and extinction (no reinforcement at all) has begun, a surprisingly large number of responses will occur before the subject gives up. A rat that has responded on a partial reinforcement schedule in which it pressed the lever fifty times to get one reinforcement will respond hundreds of times during extinction before it quits. In contrast, a rat that has had continuous reinforcement and whose behavior is then extinguished will stop pressing after only five to ten attempts.
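One intuition for the partial reinforcement extinction effect is that the subject quits only once the dry spell far exceeds the gap between reinforcers it became accustomed to in training. The sketch below encodes that intuition as a deliberately crude rule of thumb; the `tolerance` multiplier is a hypothetical parameter chosen for illustration, not a figure from the text.

```python
def extinction_responses(ratio, tolerance=5):
    """Crude model of the partial reinforcement extinction effect: the
    subject persists through roughly `tolerance` times the gap between
    reinforcers it experienced in training before giving up.

    ratio=1 models CRF training; ratio=50 models one pellet per fifty presses.
    """
    accustomed_gap = ratio               # presses per pellet during training
    return tolerance * accustomed_gap    # unreinforced presses before quitting

print(extinction_responses(1))   # CRF training: quits after a handful (5)
print(extinction_responses(50))  # FR-50 training: persists for hundreds (250)
```

Under this rule the CRF-trained subject, used to a payoff on every press, abandons the lever within a few presses, while the FR-50 subject, used to long unrewarded runs, keeps pressing for hundreds, matching the contrast described above.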
Maladaptive human behavior (like sex, love, and porn addiction) in the real world is often highly resistant to extinction, in the same way that partially reinforced operant behavior is in the laboratory. For example, a compulsive "checker" who fears that she left the gas stove on may check the stove hundreds of times a day. She is reinforced very little; that is, she almost never checks and finds that the gas is on. Most of her responding is in vain. The operant explanation of her behavior is partial reinforcement: because once every several hundred times she was reinforced by finding the gas on, she will now check thousands of times in order to get one reinforcer (Rachman and Hodgson, 1980).