Blocking resembles overshadowing in that one stimulus interferes with another's ability to become a CS. In overshadowing, however, the interference arises from differences between the stimuli (e.g., intensity), whereas in blocking it arises from prior experience with one part of the compound stimulus (p. 77).
a stimulus will become a CS more rapidly if it was previously paired with another stimulus that has since been paired with a US and become a CS
Distinguish between discrete-trial procedures and free-operant procedures in terms of definitions, examples, and common dependent variables.
an unwanted increase in behavior: defined by an increase in response rate (the DV) during the early period of extinction, before the response is eliminated
reappearance of a previously extinguished behavior when the reinforcer occurs after some time without it
extinction-induced response variability
“If at first you don’t succeed, try something else”
•When extinction begins, subjects can exhibit variations in response topography (the movements involved in the response). Extinction can increase these variations in topography as the subject attempts to obtain the reinforcement that previous behaviors produced.
•If a person tries to open a door by turning the knob and is unsuccessful, they may next try jiggling the knob, pushing on the frame, knocking on the door, etc.
reappearance of a previously reinforced behavior DURING extinction
•e.g., a pigeon's pecking is extinguished and it switches to wing flapping; when wing flapping is then placed on extinction, the pigeon goes back to pecking
describe why extinction-induced response variability is important for subsequent learning (i.e., explain why behavior is so variable during extinction).
Extinction-induced response variability is important for subsequent learning because the subject emits variations of the originally reinforced behavior; some of those variations can then contact reinforcement, allowing new responses to be learned.
describe how resurgence is linked to the study of relapse
·Resurgence is when a subject returns to a previously reinforced behavior
oE.g., a pigeon used to peck a disk for food; the experimenter switched reinforcement to wing flapping, so pecking was extinguished. But when wing flapping stops producing food (goes on extinction), the old pecking behavior may come back.
oE.g., an alcoholic relapses after being sober for 6 months
the greater the degree of contingency between a behavior and a punishing event, the faster the behavior changes
• if a rat receives a shock every time it presses a lever but not otherwise, there is a clear contingency between receiving shocks and lever pressing
•inside the lab, things are controlled this way (if you always get shocked when you press the lever, then you learn not to press the lever) *the shock is a positive, primary punisher
•outside the lab, you can do things and get away with it (no punishment) such as abusing children or spouses, doing things illegally, etc.
the interval between a behavior and a punishing consequence has a powerful effect on learning; *the longer the delay, the slower the learning
• punish a behavior as soon as it happens; if you wait until later (“wait until your father gets home!” OR punishing a kid for something they did in the morning by making them stay for after-school detention) the behavior may not stop -- or you may be punishing a different behavior
very mild punishers typically have little effect; the greater the intensity of the punishing stimulus, the greater is the reduction of the punished responses
• in rats receiving shocks, the mildest shock had little effect, the no-shock group showed no suppression, and the strongest shock essentially brought lever pressing to a halt
using an effective level of punishment from the very beginning is extremely important
•if you start off with a weak punishment, the behavior will tend to persist during the increases; in the end, a far greater level of punisher may be required to suppress the behavior
the effectiveness of a punishment procedure depends on the frequency, amount, and quality of reinforcers the behavior produces
•if pecking doesn’t give food, the pigeons will probably stop pecking
•an employee will leave work early because there are more rewarding things to do than to stay at work (the reinforcer doesn’t “pay” very well)
the more starved you are for food, the more reinforcing the reinforcer will be...
effectiveness of punishment is also determined by whether an alternative means of obtaining reinforcement is available
•for example, suppose a food-deprived rat presses a lever and gets food. Now add a shock to every lever press. If the rat is hungry and the lever now produces both shock AND food, it will still be likely to press the lever. If the rat had another way of obtaining food, however, it probably would not keep pressing the lever (and getting shocked).
trying to escape or avoid source of punishment
•freeing yourself from your parent who is spanking you; playing hooky from school because you’re failing; rat lying on its back using its fur to avoid the shock; can also escape/avoid punishment by cheating, lying, & making excuses;
an alternative to escaping punishment is to attack those who punish; like escape, aggression is often an effective way to exert control over those who punish...
•if 2 rats are in the same cage and one is shocked, it may attack its neighbor (or an inanimate object if no other animal is around); the same is seen in people - a husband may strike his wife, the wife strikes a child, the child strikes a younger sibling, the younger sibling strikes the dog...
particularly when escape and aggression are not possible, a common response is a general suppression of behavior; not only suppression of the one behavior, but suppression of all behavior in general (apathy); this is common when punishments are common
•when rats were punished for entering one of 2 passageways, they eventually stopped trying to enter either passageway altogether and instead stayed in the release chamber; also seen in humans: if a teacher often ridicules a student for giving a "stupid" answer, the student may stop participating altogether
sometimes punishment can get out of hand and become abusive
•child abuse in homes is sometimes punishment that got “out of hand” (parent slaps child and breaks his jaw, shakes a baby and causes brain damage, etc.)
those who are punished tend to imitate those who punish them
•when parents rely on punishment to deal with their children, the children tend to use punishment to deal with siblings or peers
•punishment, when properly used, can have very beneficial effects
•punishment is powerful
•punishment is fast
•can reduce frequency of punished behavior & have positive side effects
•for example, people with autism or intellectual disabilities who injure themselves become more outgoing and seem to be happier after the self-injurious behavior has been suppressed with punishment
prevent the response altogether so that it cannot occur; if the response doesn't occur at all, then you don't need to use punishment
•limitations of response prevention - often when our freedom is restricted we are unhappy and use some type of counter-control
•examples of response prevention: chicken poxinator (mittens on hands so you can’t scratch), hand-picking, the “sober” car, texting while driving eliminator
limitations of extinction: emotional behavior; generally slower than punishment; difficult to implement for some parents (e.g., feeding, attention); sometimes cannot withhold reinforcer (natural (sensory) reinforcement - consequences that occur immediately; class clown)
Differential Reinforcement of Low-rate behavior (DRL): responses are reinforced only when they occur at or below a specified rate (e.g., only after a minimum time has passed since the last response); by definition, DRL encourages low-rate behavior
Differential Reinforcement of Other behavior (DRO): reinforcement is delivered for any behavior other than the target behavior (i.e., for the absence of the target behavior)
•e.g., the “quiet” game
Differential Reinforcement of Incompatible Behavior -- give/reinforce a behavior that is incompatible with the one you are trying to prevent
•e.g. - make children sit on their hands so they cannot bother those around them; give your husband something to do (cut an onion) so that he is not hovering over you while you are cooking, etc.
•if you’re smiling, you can’t frown; if you are standing, you cannot sit; if you are sitting in your seat, you are not wandering around
Differential Reinforcement of Alternative behavior (DRA): an alternative behavior is reinforced, whether or not it is incompatible with the target behavior
•E.g., teaching a child to say “juice please” instead of “juice now”
Functional Communication Training (form of DRA)
•e.g., signing and not biting
•Desirable forms of communication
•E.g., children using cards that are signs for certain things; PECS
Describe the behavioral approach to animal training by including the role of this fundamental concept: reinforcement (positive primary & positive conditioned)
oPositive primary reinforcement: For every good behavior an animal does, the trainer reinforces with food or something the animal wants/likes. When the animal does something that the trainer does not want, they simply ignore it, so that the animal does not do it again (because there was no reaction/reinforcement)
oResults in lasting behavior modification. Primary reinforcers include food, water, sexual stimulation, etc.
oPositive conditioned reinforcement: Dog does trick, hears a click, gets a treat. Conditioned reinforcers include money, praise, recognition, tokens.
extinction
•like a combination of reinforcement and extinction
•when dog training, reinforce a new behavior while ignoring the unwanted behavior
Animal trainers stay away from punishment
oPunishment can result in dangerous situations and/or actually reinforce the undesirable behavior because it can make the animal angry.
shaping desired behavior (usually works best with immediate reinforcement) -- animal does good behavior, you click, and give them a reward; pretty soon the click alone is a reward
•Elephants at the zoo need to have their calluses cut off every now and then. One particular elephant is very aggressive, so the trainer had a large steel gate built in the park with a hole large enough for an elephant's foot, and shaped the desired behavior (placing the foot through the hole).
•A --> SR
the completion of a sequence of tasks or behaviors (simple responses making complex sequence) with a primary reinforcer only being delivered at the end
oSteps to training an animal (dog)
oA B C SR
oA = sit, B = roll over, C = shake
oMust reinforce right after the behavior so that the animal does not get confused about which behavior you are reinforcing
fading
loosening the environmental control for the reinforcement (less reinforcement)
oFading reinforcement rates
oFor the animal to do the behavior without as much reinforcer
•impulsivity - immediate consequences (smaller, sooner - SS)
•A --> B --> Sr
•reinforcer is only delivered at the end
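The A --> B --> Sr structure of a chained schedule can be sketched as a rule that delivers the terminal (primary) reinforcer only after the links are completed in order. A minimal sketch; the link names and the reset-on-error rule are illustrative choices, not a standard lab procedure:

```python
def chained_schedule(links):
    """Return a function that receives one response at a time and delivers
    the terminal reinforcer (True) only when every link in `links` has been
    completed in order. An out-of-order response resets progress to the
    start of the chain (an illustrative simplification)."""
    state = {"i": 0}  # index of the link currently required

    def respond(behavior):
        if behavior == links[state["i"]]:
            state["i"] += 1
            if state["i"] == len(links):   # whole chain completed
                state["i"] = 0
                return True                # primary reinforcer at the end
        else:
            state["i"] = 0                 # wrong link: no credit
        return False

    return respond

chain = chained_schedule(["sit", "roll over", "shake"])
```

Note that only the final response of the sequence returns True, matching the rule above that the reinforcer is delivered only at the end.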
•a concurrent-chains schedule means that more than one chain schedule is operating at the same time
•initial links - choice phase (initial link choice = DV)
•terminal links - mutually exclusive outcomes
•IV = terminal-link value
for example, if you are at a dinner party and the person on your left is really annoying you, to prevent yourself from having an outburst and yelling at them, you could change the topic of conversation or start a conversation with the person sitting to the right of you
•manipulate an establishing operation so that it modulates the efficacy of a reinforcer & increases its value
•ex. - using food to train an animal is way more reinforcing when the animal has been a little bit food deprived
•going to the grocery store full so that you pick out better foods, like veggies, instead of foods like Funyuns
to analyze the function, or purpose, of a behavior and why somebody is doing it
describe the purpose specifically of a functional analysis in the context of problem behavior in intellectual and developmental disabilities
Analogue functional analysis has the specific purpose of setting up conditions (escape, attention, alone, control, tangible) that may be analogous to what goes on outside the laboratory/clinical setting, so that you can determine specifically when/where/why problem behavior occurs. Most importantly, describe in detail the procedures in effect in each of the conditions of this analogue functional analysis, as well as the results from these published cases.
no tasks or demands are presented (so that there is nothing to escape from), free access to tangibles (so they don't have to engage in any problem behavior to get a toy they want), non-contingent attention (therapist pretends to read; every 2 minutes they speak to the child independent of the child's behavior)
•if you have problem behavior in the control “utopia” condition it’s because it is a natural/sensory/automatic reinforcement
•if the problem behavior is maintained by sensory/automatic/natural reinforcement then it should occur in any of the sessions/all the time
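The conditions above can be summarized as a mapping from each test condition to the contingency it places on problem behavior. This is a paraphrase of the notes for study purposes, not a clinical protocol:

```python
# Analogue functional-analysis conditions and the consequence each one
# arranges for problem behavior (paraphrased from the notes above).
fa_conditions = {
    "attention": "therapist attends to the child contingent on problem behavior",
    "escape":    "task demands are removed contingent on problem behavior",
    "tangible":  "a preferred item is given contingent on problem behavior",
    "alone":     "no programmed consequences (tests automatic/sensory reinforcement)",
    "control":   "free attention, no demands, free tangibles (the 'utopia' condition)",
}

def likely_function(condition):
    """If problem behavior is elevated in one test condition relative to the
    control, the matching contingency is the likely maintaining reinforcer."""
    return fa_conditions[condition]
```

As the notes say, behavior maintained by automatic reinforcement should appear across all conditions, including the control condition.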
•is the tendency for behavior to occur in situations that closely resemble the one in which the behavior was learned but not in situations that differ from it; it is the tendency to respond to stimuli that were present during training but not to stimuli that were absent (308)
•behavior is different depending on environmental cues that signal appropriate actions. Example -- when you're at the library, you engage in library-appropriate behavior; same for when you are at a bar, in a conference, in the presence of your family...
The more similar a novel stimulus is to the training stimulus, the more likely the participant is to behave as though it were the training stimulus. When these results are plotted on a curve, they yield a figure called the [term].
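The relation above (responding falls off as a novel stimulus becomes less similar to the training stimulus) can be sketched numerically. The Gaussian fall-off, the 550-unit training value, and the width parameter are all illustrative assumptions, not values from the notes:

```python
import math

def generalization(test_value, trained_value, width=1.0):
    """Response strength to a test stimulus as a function of its distance
    from the training stimulus (Gaussian fall-off is an illustrative choice)."""
    return math.exp(-((test_value - trained_value) ** 2) / (2 * width ** 2))

# Responding is maximal at the training stimulus and declines with distance;
# plotting these values traces out the curve named in the text above.
gradient = [generalization(v, trained_value=550, width=20)
            for v in range(500, 601, 10)]
```

The peak of the plotted curve sits at the training stimulus, and response strength declines symmetrically on either side.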
a phenomenon in which learned behavior sometimes generalizes on the basis of an abstract concept.
•Example of semantic generalization - After WWII, the US often paired the Japanese with negative words such as 'dirty, sneaky, cruel, and enemy'.
Describe the procedures and results related to transposition. Discuss transposition in the context of generalization.
•Initial training involves simultaneous presentation of S+ and S- (stimuli are presented at the same time/concurrently)
•The S+ is “correct” (associated with reinforcement) and the S- is “wrong” (no reinforcement); both stimuli presented at once
•Generalization may involve the spread of a learned relation
responding is under the control of the absolute value of the S+ (in the PowerPoint example, 5) on a probe trial; in this type of stimulus control, behavior is exclusively controlled by the absolute value of the S+, such that the 5-item stimulus makes them select it
•2nd form of absolute stimulus control - reject stimulus control: control is being exerted by the S-
•a good example: you hate root beer. You are at someone's house and they say "I have root beer and the other..." and you immediately say "the other" because you hate root beer; anything but root beer. You are trying to avoid it; it is "controlling" your behavior.
behavior is under the control of some relationship between the stimuli (a stimulus relation); behavior is controlled not by the value of a single stimulus, but by the relation of the stimuli to one another
•relation depends on what the stimuli are ... “lighter than,” “louder than,” “fewer than”
•in the example on the powerpoint slide, it is picking “fewer than”
you change the stimuli; for example, the boxes will now be presented with 2 objects and 5 objects (where previously 5 was the box with the least amount)
•if they were under the control of the S+, they ought to select the box with 5 (since they were being controlled by the absolute value of S+)
•if they had a relational-stimulus control, then they should select the new box with 2 objects, because that is “fewer than” the other
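The probe-trial logic above can be written out directly: given a new pair of stimuli, the two accounts of stimulus control predict different choices. A sketch using the slide's values (S+ = 5 items, probe pair of 2 vs 5):

```python
def predict_choice(stimuli, control, trained_s_plus=5):
    """Predicted pick on a transposition probe trial.
    'absolute': behavior is controlled by the trained S+ value itself,
    so the subject picks that value whenever it is present.
    'relational': behavior is controlled by the trained relation
    ("fewer than"), so the smaller stimulus wins whatever the values."""
    if control == "absolute" and trained_s_plus in stimuli:
        return trained_s_plus
    if control == "relational":
        return min(stimuli)   # the trained relation was "fewer than"
    return None
```

On the 2-vs-5 probe, absolute control predicts picking 5 (the old S+), while relational control predicts picking 2 ("fewer than" the other), exactly as the bullets above describe.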
was used to teach pigeons to discriminate between Picasso & Monet paintings, and to discriminate whether or not they were "high" (presence or absence of drugs).
• [term definition] = different consequences for responding in the presence of different discriminative stimuli (for example, how do you tell a child they can approach these strangers but not those strangers)
•Picasso/Monet pigeon pecking
•when one painter (e.g., Picasso) was shown: pecking the RED key was CORRECT and pecking the GREEN key was INCORRECT
•when the other painter (Monet) was shown: pecking the GREEN key was CORRECT and pecking the RED key was INCORRECT
•S+ and S- alternate randomly (i.e. never presented together)
•teaching a kid the color “red” (you show him diff. colors at diff. times individually)
•when the stimulus is present and a response is given, there is a reward (anytime the color shown is not red, the kid is in "extinction")
S+ and S- always presented together
•e.g. to teach a kid what a circle is (have another shape present so that they can compare)
•response to S+ produces reinforcer (if S-, then nothing)
per given trial: one sample stimulus, at least 2 comparison stimuli, and matching is reinforced
MTS but sample absent when comparison stimuli are presented
•one sample stimulus, at least 2 comparison stimuli, and mismatching is reinforced
•exact opposite of MTS
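The trial structure of matching-to-sample and oddity matching reduces to one rule, with "reinforce matching" vs "reinforce mismatching" as the only difference. A sketch; the stimulus names are illustrative:

```python
def trial_correct(sample, comparisons, choice, oddity=False):
    """One trial: a sample stimulus, at least two comparison stimuli,
    and a choice among the comparisons. Matching-to-sample (MTS)
    reinforces picking the comparison that matches the sample; oddity
    matching reinforces picking one that does not (the exact opposite)."""
    if choice not in comparisons:
        raise ValueError("choice must be one of the comparison stimuli")
    matches = (choice == sample)
    return matches != oddity   # MTS: match is correct; oddity: mismatch is

# MTS trial: sample 'circle', comparisons ('circle', 'square')
mts_correct = trial_correct("circle", ("circle", "square"), "circle")
```

A delayed-MTS trial uses the same rule; the only procedural change is that the sample is removed before the comparisons appear.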
•goal is discrimination without errors
•an error is operationally defined as a response to S-
•stimulus encouraging a response to S+
•Slow and systematic removal of prompts (example: the kid who raises the wrong hand; the kid trying to learn what bathroom to go into)
•Transfer of stimulus control
reinforcer delivered after a fixed number of responses;
•“break and run”... a pause occurs immediately after reinforcement is given; performance is generally reliable
reinforcer delivered after a variable number of responses; required responses vary around a mean;
•the faster you respond, the higher your response rate, and the higher your reinforcer rate
•the more responses you engage in, the more reinforcers you get out of it
•performance is generally on task, consistent, reliable, and has minimum pausing
reinforcer delivered after first response following a fixed period of time; there are time-based (interval) schedules... so now behavior and time are being integrated into the “rule”...
•fixed = rule does not change from reinforcer to reinforcer
•interval = not purely response-based or behavior-based schedule; it takes into account the rule (time) [responding really fast doesn’t make reinforcer come faster; a certain amount of time must pass]
•“FI scallop” - pausing then gradual increase (think of the slope on the graph); Responding often starts at 1/3 of the interval (Ex. FI60, they would start responding around 20 seconds)
reinforcer delivered after first response following some variable period of time; intervals vary around some mean;
•Performance: minimal pausing - on task continuously, higher response rate (steep slope)
•the absence of breaks occurs because of the scheduling of reinforcers (one could be delivered at any time)
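The four basic schedules above are just rules mapping responses (and elapsed time) to reinforcer deliveries. A minimal Python sketch; the VR requirement being drawn uniformly from 1..2*mean-1 is an illustrative choice, not a standard lab parameter:

```python
import random

def fixed_ratio(n):
    """FR n: every n-th response produces the reinforcer."""
    count = 0
    def respond(t):
        nonlocal count
        count += 1
        if count == n:
            count = 0          # counter resets after each reinforcer
            return True
        return False
    return respond

def variable_ratio(mean, rng=random):
    """VR mean: the required number of responses varies around `mean`."""
    count, target = 0, rng.randint(1, 2 * mean - 1)
    def respond(t):
        nonlocal count, target
        count += 1
        if count >= target:
            count, target = 0, rng.randint(1, 2 * mean - 1)
            return True
        return False
    return respond

def fixed_interval(i):
    """FI i: the first response at least i seconds after the last
    reinforcer is reinforced; responding faster changes nothing."""
    last = 0.0
    def respond(t):
        nonlocal last
        if t - last >= i:
            last = t
            return True
        return False
    return respond
# VI is analogous to FI, with the interval re-drawn around a mean
# after each reinforcer.
```

On the ratio functions, faster responding really does produce faster reinforcement; on the interval functions, responding early in the interval is never reinforced, which is why the FI scallop's post-reinforcement pause costs the subject nothing.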
reinforcer (usually primary) follows completion of a series of schedules (or links), and each link has its own discriminative stimuli
reinforcer follows completion of a series of schedules, and a single antecedent stimulus is always present.
getting paid after a certain number of inventions; press 100 times for food on FR100
slot-machine (or lottery ticket) payouts (reinforcement variable, you don’t know when you will win next); getting food after a ‘random’ (variable) number of lever presses;
checking your email a lot (checking continuously doesn't make an email arrive faster; some time has to pass and it will arrive regardless. But the more often you check, the sooner you will see the email, i.e., the 'reinforcer', once it arrives, compared to not checking at all)
following a recipe (only after completing all the steps do you get your reinforcer) or traveling a long distance ***SPECIES PREFER CHAINED SCHEDULES OVER TANDEM IF GIVEN A CHOICE
a surprise drive -- you don’t know where your friend is taking you so nothing signals that you are getting closer to the destination
•1. Pause (more useful than run rate, because it is a more sensitive dependent measure)
•2. Run rate
The amount of time spent "pausing" changes as the FR value (the independent variable) changes; e.g., if the FR value is increased, then the pause is longer
In comparing FR and VR responding, what variable determines the duration of the FR pause?