- StudyBlue
- Australia
- University of Melbourne
- Psychology
- Psychology 1
- Haslam
- PMC: Phillip Smith proper

Jason Z.

How does the psychophysical account differ from the SDT account in how we make decisions about a signal?

-psychophysical: a rigid, hardwired, fixed threshold divides the yes/no responses

-SDT: a response criterion 'c' that is under conscious control and movable. On any trial we get a sensory effect; if it is above the criterion we answer 'yes', otherwise 'no'. Where 'c' sits depends on instructions and payoffs

-this sensory effect may fall within either the noise OR the signal distribution, and the response therefore counts as a hit, miss, false alarm, or correct rejection

-false alarm/correct rejection are the outcomes when the stimulus is absent and you answer yes/no

-These data can then be transformed into conditional probabilities, which can be plotted on an SDT graph to work out 'c'

-this then gives us d' (d prime), the separation between the noise distribution and the signal distribution


What is d' (d prime)?

-A measure of psychological distance between the means of the signal and noise distributions.

-formula is d' = (μs - μn)/σ

-The unit of measurement is the SD

-provides a criterion-free measure of sensitivity

-the proportion of FAs locates 'c' relative to μn, and the proportion of hits locates it relative to μs on our graph

-convert the proportions of hits/FAs into the corresponding standardized z-scores (a proportion below .5 always gives a negative z)

-i.e. hit = .80 gives z = .84; FA = .30 gives z = -.52

-d' = z(S|s) - z(S|n); the subtraction handles the negative z-score (here .84 - (-.52) = 1.36), giving a d' value that can then be analysed however we want
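The d' arithmetic can be sketched in a few lines of Python, using the worked figures here (hit = .80, FA = .30); `NormalDist.inv_cdf` supplies the z transform.

```python
# Minimal sketch of the d' calculation: d' = z(hit) - z(FA).
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    z = NormalDist().inv_cdf  # inverse standard-normal CDF (the z transform)
    return z(hit_rate) - z(fa_rate)

# hit = .80 -> z = .84, FA = .30 -> z = -.52
print(round(d_prime(0.80, 0.30), 2))  # 1.37 (the rounded z's give 1.36)
```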

-β is the relative likelihood of getting the sensory effect given signal versus given noise. These figures correspond to the ordinates (heights) of the normal density at the z-scores, i.e. Ys = .280 / Yn = .348 = .80

-If β=1 then there was no bias

-If β>1 then there was a bias away from signal

-It's very systematic and intuitive
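A matching sketch for β, again using the same hit and FA figures; `NormalDist.pdf` gives the ordinate (height) of the normal density at each z-score.

```python
# Minimal sketch of beta: the ratio of normal-density ordinates at the
# hit (signal) and FA (noise) z-scores.
from statistics import NormalDist

def beta(hit_rate, fa_rate):
    nd = NormalDist()
    y_signal = nd.pdf(nd.inv_cdf(hit_rate))  # ordinate at z(hit): .280
    y_noise = nd.pdf(nd.inv_cdf(fa_rate))    # ordinate at z(FA): .348
    return y_signal / y_noise

print(round(beta(0.80, 0.30), 3))  # 0.805, i.e. .280/.348 ≈ .80
```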

What is ideal observer theory?

-IOT states that the optimal place for the criterion is given by the ratio of noise probability to signal probability

-That is, βopt. = p(n)/p(s)

What is the optimal criterion for equal and unequal probabilities of signal and noise?

-When noise and signal are equally probable, βopt. = 1 (the optimal criterion is unbiased): p(n)/p(s) = .5/.5 = 1

-If the probability of signal/noise is unequal, e.g. P(n) = .75, P(s) = .25, then βopt. = P(n)/P(s) = .75/.25 = 3

-This makes sense: if noise is more likely than signal, we should be careful before claiming a stimulus is a signal.

-Both claims can be shown by shifting the criterion and observing the increased net error rate
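The ideal-observer placement follows directly from βopt. = p(n)/p(s), as a short sketch shows:

```python
# Sketch of the ideal-observer criterion: beta_opt = p(noise) / p(signal).
def beta_opt(p_signal):
    p_noise = 1 - p_signal
    return p_noise / p_signal

print(round(beta_opt(0.50), 2))  # 1.0: equal probabilities, unbiased criterion
print(round(beta_opt(0.25), 2))  # 3.0: noise 3x as likely, so be conservative
```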

What is the conservatism phenomenon concerning human decision making in SDT?

-If people know the frequency of noise to signal, they will shift their criterion levels accordingly, i.e. if βopt.= 3 they will be more cautious about claiming a signal.

-However they are less extreme than optimal (if βopt. = 3, they shift only to around β = 2).

-The payoff matrix identifies the costs and values associated with hits, correct rejections, false alarms, and misses.

-Makes conceptual sense: in βopt. = (p(n)/p(s)) × (V(CR) + C(FA)) / (V(hit) + C(miss)), a larger V(hit) means a larger denominator, hence a smaller βopt. and a response bias towards 'signal'

-Found in tasks like quality control, radar management

-SDT states that this apparent fall in vigilance can be attributed to a shift in β relative to P(s). If P(s) = .05 then, all else being equal, people will shift their response criterion 'c' towards βopt. = .95/.05 = 19, which makes them very biased against saying 'signal'


-One way to do this is to use a rating-scale task that lets people rate how confident they are about stimuli rather than answering yes/no.

-People are given stimuli and asked to rate how confident they are that each one is signal or noise

-When we have multiple response criteria we can plot the resulting hit/FA proportions as an ROC curve.

-each confidence category defines its own criterion, so each category contributes a (FA proportion, hit proportion) point on the curve.
-the area under the curve is a sensitivity measure, P(A), which varies from .5 to 1.

-.5 = zero (chance-level) detectability, 1 = perfect detectability

What is the benefit of changing the ROC curve into a z-ROC curve?

-It is illuminating because it has 2 simple interpretations

1 z-ROC will be linear if the distributions are normal

2 Slope = σn/σs and thus if we have homoscedasticity, slope = 1

What is Piéron's law?

-Mean Reaction Time = R0 + k·I^(-β)

-R0 = the irreducible minimum (the point beyond which increases in stimulus intensity won't reduce RT)

-k and β are scaling constants

-I is stimulus intensity

-Describes a relationship between simple RT and stimulus properties, showing that we can sensibly study this relationship and develop laws for it.
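Piéron's law can be illustrated with a short sketch; the parameter values (R0 = 180 ms, k = 200, β = 0.5) are invented for illustration, not fitted to any data.

```python
# Hedged illustration of Pieron's law: MRT = R0 + k * I**(-beta).
def mean_rt(intensity, r0=180.0, k=200.0, beta=0.5):
    return r0 + k * intensity ** (-beta)

for i in (1, 4, 16, 64):
    print(i, round(mean_rt(i), 1))
# MRT falls steeply at low intensities, then flattens towards the
# irreducible minimum R0 as intensity grows.
```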

-The RT distribution is unimodal: a one-humped camel; there is only one region where the majority of values cluster

-it is positively skewed

What do we want to understand from a model of decision making?

-Relationship between MRT and task difficulty

-Why RT is variable and why it has the type of distribution that it does

-The basis of the speed-accuracy trade-off

-Relationship between MRT for errors and correct responses: in easy, speed-stressed tasks errors have faster MRT than correct responses; in difficult, accuracy-stressed tasks errors are slower than correct responses

-Decision making is only 1 part of RT

-the full process has 3 stages: stimulus encoding, stimulus identification, response selection.

-These stages are a set of mental operations, performed in sequence, that transform stimulus into response.

What was Donders' subtraction method?

-If you have two RT tasks that differ by one stage, the MRT of the extra stage is just the difference between RT1 and RT2

How was Donders' subtraction method investigated and what were the results?

-simple task (1 stimulus - 1 response): you only need to detect

-choice task (2 stimuli - 2 responses): here you need to detect the stimulus, identify it, and choose a response

-Go/no-go task (2 stimuli - 1 response): detect the stimulus and identify whether it's the one to respond to

-identification = T(go/no-go) - T(simple) = 36 ms

-response selection = T(choice) - T(go/no-go) = 47 ms
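The subtraction logic is simple arithmetic; the three task MRTs below are invented values chosen only to reproduce the 36 ms and 47 ms differences quoted here.

```python
# Donders' subtraction method on illustrative (invented) task MRTs, in ms.
t_simple, t_go_no_go, t_choice = 220, 256, 303

identification = t_go_no_go - t_simple      # stage added by go/no-go
response_selection = t_choice - t_go_no_go  # stage added by choice
print(identification, response_selection)   # 36 47
```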

-Problem: does simple task really only involve detection? random noise?

-Tested by changing one variable (e.g. bright vs dim) that should affect only a specific stage (encoding) and another (dissimilar vs similar) that should affect a different stage (identification).

-The logic: if the stages are separate, you should only get additive effects, each contained within its own stage ONLY

-Thus if we see no interaction effect, the stages are separate

-no interaction effect was seen.

-evidence for 3 distinct stages, originally outlined by Donders (encoding, identification, response sel.)

What is the sequential-sampling model of decision-making?

-A model that assumes we take multiple samples of evidence over time until we have enough evidence to make a decision

-RT here reflects the time taken to accumulate a criterion amount of evidence (if you want a faster RT you won't have time to gather as much evidence and will hence be less accurate, and vice versa)

-RT variability reflects variability in accumulation process

-We accumulate a running total of random samples of evidence for one stimulus or the other: this is the random walk

-random walk varies depending on number of observations: this walk is our evidence accumulation up to a certain time

-If the stimulus is positive then the walk will go up, otherwise down

What are the properties of the random walk?

-The value of our walk is Xn = Z1 + Z2 + ... + Zn, the sum of our random observations; it will be normally distributed

-Its mean and variance grow in proportion to the number of observations

-The S/N ratio is the basis of d': the longer we observe, the better we discriminate, since sensitivity grows as √n(μ/σ), i.e. with the square root of n

-RT is time needed (or number of samples) to accumulate criterion amount of evidence, hence variability in both RT and pathway

-sometimes paths will terminate at wrong boundary, giving us a mechanism of describing why we make a wrong decision

-in this way we map decision making process as a function of random observations of evidence.

-allow it to occur continuously (make the steps miniature): the continuous process is a diffusion process

-It has the same kind of properties as the random walk
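A random-walk trial can be simulated in a few lines; the drift, noise, and boundary values here are illustrative assumptions, not fitted parameters. The simulation shows both RT variability and occasional terminations at the wrong boundary.

```python
# Minimal sketch of a random-walk decision process: accumulate noisy
# evidence until an upper (correct) or lower (error) boundary is crossed.
import random

def random_walk_trial(drift=0.1, noise=1.0, boundary=10.0):
    """Return (correct, n_samples); correct is True at the upper boundary."""
    x, n = 0.0, 0
    while abs(x) < boundary:
        x += random.gauss(drift, noise)  # one noisy evidence sample
        n += 1
    return x > 0, n

random.seed(1)
trials = [random_walk_trial() for _ in range(2000)]
accuracy = sum(correct for correct, _ in trials) / len(trials)
mean_rt = sum(n for _, n in trials) / len(trials)
print(round(accuracy, 2), round(mean_rt))
# Most walks hit the correct boundary, but some end at the wrong one
# (errors), and finishing times vary from trial to trial (RT variability).
```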

2. Task difficulty effects: better discriminative information gives a faster rate of accumulation, i.e. a higher drift rate towards the criterion; faster drift = easier task

3. Speed-accuracy trade-off: the criterion is under conscious control; speed stress = narrow criterion, accuracy stress = wide criterion

-It can also explain the order of correct responses and errors in speed stressed vs accuracy stressed tasks

What are the features of real-world decision-making?

-Usually made under risk, where we do not know the outcome

-they involve complex, multi-attribute alternatives

What is subjective expected utility theory by Von Neumann and Morgenstern (1944), Savage (1954)?

-It's a normative theory: prescribes how a decision should be made by a rational decision maker

-states: we calculate value of each alternative and we pick alternative with highest expected utility

-the theory assumes we can estimate the probability of each outcome; otherwise the decision is made under uncertainty.

-Picnic example: (0.3 × 0) + (0.5 × 40) + (0.2 × 100) = 40
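The picnic sum generalises to SEU = Σ p(outcome) × u(outcome), which a minimal sketch makes explicit:

```python
# Subjective expected utility as a probability-weighted sum of utilities.
def expected_utility(prospect):
    return sum(p * u for p, u in prospect)

# The picnic example as (probability, utility) pairs.
picnic = [(0.3, 0), (0.5, 40), (0.2, 100)]
print(round(expected_utility(picnic), 1))  # 40.0
```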

What is the phenomenon of risk aversion identified by Bernoulli (1738)?

-people tend to be risk averse even when the objective utility is greater for option A than B

-i.e. gamble 1: win $50 with certainty; gamble 2: win $102 with 50% probability

-EU(gamble 1) = 50 < EU(gamble 2) = 51, yet people still choose gamble 1

-The perceived utility of ΔU2 is less than that of ΔU1 even though both represent a $10 increase in value. Equal objective changes in value produce diminishing changes in perceived utility.

-Hence: because utility is not weighted equally with objective value, when we already have more (v2) the same $10 increment gives less utility.
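The diminishing-utility argument can be made concrete with a concave utility function; u(x) = √x is a standard textbook choice, not taken from these notes.

```python
# Risk aversion from a concave utility function (u = sqrt, an assumption).
from math import sqrt

def expected_utility(prospect, u=sqrt):
    return sum(p * u(x) for p, x in prospect)

certain_50 = [(1.0, 50)]
gamble_102 = [(0.5, 102), (0.5, 0)]
print(round(expected_utility(certain_50), 2))  # 7.07
print(round(expected_utility(gamble_102), 2))  # 5.05
# The gamble has the higher expected value (51 vs 50) but the lower
# expected utility, so the concave-utility chooser takes the sure $50.
```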

What are the completeness, transitivity, and dominance axioms that must hold for decisions to be optimal (normative), so that expected utility theory is applicable?

1. Completeness: you must be able to compare outcomes (A<B, A>B, A~B)

2. Transitivity: you must have logical transitivity (if A>B and B>C then A>C)

3. Dominance: in choosing between two options, A dominates B IF it is better in at least 1 respect and at least as good in all others

What are the independence and invariance axioms that must hold for decisions in EUT to be optimal?

4: Independence: if an outcome is unaffected by your choice, then your choice shouldn't be affected by that outcome (i.e. if my results are unaffected by my choice of studying or partying, then my choice shouldn't be affected by the results)

5: Invariance: different ways of representing the same choice problem should result in the same choice

-Hence people rated applicants as A>B>C>D>E; however, when told to rate intelligence more highly, most said E>A

-transitivity breaks down.

-However, when the value of the 12th-100th tickets is changed to $0, most people now choose B, the risk-seeking option, over A

-The outcome of the 12th-100th tickets is unaffected by the choice of A or B, yet the change in this outcome has affected one's choice: independence is violated

How did Tversky and Kahneman (1981) find the violation of invariance with framing effects?

-Presented two scenarios in which an epidemic is expected to kill 600 people

-1: program A: 200 people will be saved with certainty; B: 1/3 chance to save all 600, 2/3 chance no one will be saved (72% choose A, 28% choose B)

-2: program A: 400 people will die with certainty; B: 1/3 chance no one will die, 2/3 chance all 600 will die (22% choose A, 78% choose B)

-There was a reversal from risk aversion to risk seeking depending on whether the question was framed as a positive gain or a negative loss

-i.e. when it's a gain ($100, p = .05) the certainty equivalent is $14. We must be paid $14 to give up the gamble, far more than the expected return of $5, hence we are risk seeking

-i.e. when it's a loss (-$100, p = .05) the CE is -$8: we will forgo $8 to avoid a 5% chance of losing $100; we prefer the certain cost to the risk.

What is certainty equivalent?

-the amount of certain value that is subjectively equal to a prospect.

-So $78 is the CE of winning $100 with 95% chance. Here the true expected return is $95, which maps to a CE of only $78

-nonlinear weighting function w(p): it over-weights small probabilities (w(p) lies above the identity line there) and under-weights big probabilities

-i.e. if we were to risk losing $50 for a 50% chance to win $x, we'd want at least a $100 gain in return: the $100 gain and the $50 loss are perceived as equivalent in value

-Losses: the negatively accelerated, convex value function shows we prefer a 2/3 chance of losing 600 to a certain loss of 400; the subjective difference between losing 400 and losing 600 is smaller than the objective increment. We become risk seeking.

-asymmetrical function represents change from risk aversion to seeking

-Insurance: (large cost, low probability) people buy insurance because they overweight the low probability that they'll encounter a large cost, it's risk averse

-Lottery: (large gain, low probability) people buy lottery tickets because they overweight the low probability that they'll win money, it's risk seeking

What is the representativeness heuristic?

-Both prospect theory and subjective expected utility theory require computations over complex information that may not be available

-instead the brain uses heuristics (simple, quick rules of thumb) to arrive at a solution, which can lead to bias

-representativeness heuristic: basing probability judgments on how well something matches a stereotype (i.e. Sam = NBA player, so Sam = black)

What is the base rate neglect bias?

-We are given a base rate (70 basketballers and 30 netballers) and asked what kind of player Crystal is.

-Even though the base rate favours basketballers, we are prone to follow a description that matches a stereotypical netballer and say Crystal is a netballer.
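What proper use of the base rate looks like can be sketched with Bayes' rule; the likelihoods below (how typical the description is of each group) are invented for illustration.

```python
# Bayes' rule with the 70/30 base rate. Likelihood values are assumptions.
def p_netballer(p_net=0.30, p_desc_net=0.8, p_desc_bball=0.4):
    p_bball = 1 - p_net
    evidence = p_desc_net * p_net + p_desc_bball * p_bball
    return p_desc_net * p_net / evidence

print(round(p_netballer(), 2))  # 0.46
# Even a description twice as typical of a netballer leaves "basketballer"
# the more probable answer: the base rate should not be neglected.
```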

What is the availability heuristic?

-Sometimes judgments are based on availability of recall of information

-I.e. are there more murders or suicides in US

-Because the media report murders far more, and we are exposed to far more murder information than suicide information, we tend to say murders

-In fact, suicides vastly outnumber murders.

What is the conjunction fallacy?

-the representativeness heuristic leads to inferences that violate basic laws of probability

-i.e. a description suggests Crystal is a basketballer, and there are 3 options; 1: Crystal is a baller; 2: Crystal is an academic; 3: Crystal is an academic baller

-People rank 1 above 2, but they also rank 3 above 2, even though basic probability states that the conjunction of two events can never be more likely than either event occurring separately
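The probability law being violated is easy to check numerically; the trait probabilities below are illustrative assumptions.

```python
# The conjunction rule: P(A and B) can never exceed P(A) or P(B).
p_baller = 0.6    # assumed P(Crystal is a baller)
p_academic = 0.3  # assumed P(Crystal is an academic)

# Under independence (a further assumption), the conjunction is the product:
p_academic_baller = p_baller * p_academic
print(round(p_academic_baller, 2))  # 0.18
assert p_academic_baller <= min(p_baller, p_academic)
```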

What are Fast and Frugal heuristics (Gigerenzer)?

-based on the notion that because our rationality is bounded, we satisfice rather than optimise in our decision making

-FaF heuristics don't use multiple pieces of information; rather, a single piece of information decides between alternatives. They are often nearly as good as complex strategies and quicker to compute

What is an example of a Fast and Frugal heuristic (Gigerenzer 1996)?

-predicting which of two German cities is bigger

-You have cues that correlate with size (is it a capital? does it have a university? etc.)

-Take-the-best heuristic: work through the cues in order of predictive validity until you find one that discriminates between the cities, and make the judgment based on it

-They found that this method performs as well as complex procedures like regression, but it's faster, easier, and exploits the structure of information in the environment
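The take-the-best procedure can be sketched directly; the cue names, their validity ordering, and the city cue values are all invented for illustration.

```python
# Minimal sketch of take-the-best: use the first cue, in validity order,
# that discriminates between the two alternatives; otherwise guess.
def take_the_best(a, b, cues):
    for cue in cues:  # cues ordered by predictive validity
        if a[cue] != b[cue]:
            return "A" if a[cue] > b[cue] else "B"
    return "guess"  # no cue discriminates

cues = ["is_capital", "has_university", "has_major_airport"]  # assumed order
city_a = {"is_capital": 0, "has_university": 1, "has_major_airport": 1}
city_b = {"is_capital": 0, "has_university": 1, "has_major_airport": 0}
print(take_the_best(city_a, city_b, cues))  # A
# The first two cues tie, so the decision rests entirely on the third.
```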

StudyBlue is not sponsored or endorsed by any college, university, or instructor.

© 2015 StudyBlue Inc. All rights reserved.

© 2015 StudyBlue Inc. All rights reserved.