x: independent variable
y: dependent variable
z: confounding variable
w: intervening variable (issue: causal mechanisms)
p: moderating variable (issue: causal heterogeneity)

What mechanisms create causal effects? When exploring these mechanisms, we look at intervening variables. Intervening variables are simply additional links in the causal chain.

Ex: x = education, y = tolerance, w = more sophisticated beliefs. Here w is the mechanism that leads from x to y.

How do you adjudicate the claim that w links x and y? Look for a causal effect of x on w, and then another of w on y, and evaluate each causal effect in the usual way. The logic is that x has a causal effect on y, but only because the intervening variable creates the causal relationship.

Causal heterogeneity: the causal effect of x on y is variable and not always present. What kind of variation might there be? A moderating variable p divides the cases of interest into subgroups, and we examine the causal relationship between x and y within each subgroup separately.

Static group comparison: a type of observational study in which we simply "observe" x and y at one point in time. Groups are compared: we compare a treatment group with a control group, with no variation over time ("static"). The researcher goes out and collects data on the subjects of interest; different rows are groups (two rows = two groups). This is one of the worst designs for helping us reach conclusions about causal effects, because the problem that observed differences could be spurious is always severe.

Is a design weak or strong in terms of our ability to reach valid conclusions about the effect of x on y? This is its degree of internal validity. A design that is weak in internal validity is weak in helping us determine the effect of x on y.
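The chain logic above (estimate x → w, then w → y, each in the usual way) can be sketched with a small simulation. The numbers and variable construction here are hypothetical illustrations, not from the lecture: we build data where x affects y only through the intervening variable w, then recover each link with a bivariate slope.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data following the lecture's example:
# x = education, w = sophistication of beliefs, y = tolerance.
x = rng.normal(size=n)               # independent variable
w = 0.8 * x + rng.normal(size=n)     # x causes w (assumed coefficient 0.8)
y = 0.6 * w + rng.normal(size=n)     # w causes y; x affects y only through w

def slope(a, b):
    """Bivariate OLS slope of b regressed on a."""
    return np.cov(a, b, ddof=1)[0, 1] / np.var(a, ddof=1)

# Adjudicate the chain: estimate x -> w and w -> y separately.
print(slope(x, w))   # close to 0.8 by construction
print(slope(w, y))   # close to 0.6 by construction
# The overall x -> y association (~0.8 * 0.6 = 0.48) exists only
# because of the intervening variable w.
print(slope(x, y))
```

The design choice mirrors the notes: rather than looking only at the total x–y association, each link in the causal chain is evaluated on its own.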
A weak design leaves the study open to objections based on spuriousness; this design is vulnerable to the selection threat to internal validity. The idea is that people have selected themselves into the x and y groups, and once they select themselves, the comparison is vulnerable to spuriousness. We therefore seek designs that are strong in terms of internal validity.

Experimental research design: assume there are two groups (treatment and control).

Controlled experiment, post-test only:
(j) x o (new way of doing things)
(j)   o (old way)
j = judgment (the researcher controls who goes into which group, using judgment guided by the objective of creating groups that are as similar as possible on key confounding variables)
o = outcome

If you just let subjects pick which program they enter, they might bring confounding variables along with them. What is crucial is that the groups are matched with respect to the attributes in question. If the two groups are matched and you observe a difference of interest, you can rule out confounding variables within groups (there should be none): control through matching/judgment, based on the potential confounds, rules them out. Classic example: the Milgram experiment.

Randomized controlled experiments, post-test only. A controlled experiment's fundamental feature is that the researcher has control over determining the groups, and tries to match them to rule out any charge of spuriousness.
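The contrast between self-selection and researcher-controlled assignment can be illustrated with a minimal simulation. Everything here is a hypothetical construction, not from the lecture: "motivation" stands in for a confound that drives both program choice and the outcome, and the "true effect" of the new program is set to 2.0 so we can see which design recovers it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical confound: motivation affects both program choice and outcome.
motivation = rng.normal(size=n)
true_effect = 2.0  # assumed real effect of the new program

# Static group comparison with self-selection: motivated people opt in,
# so the groups differ on the confound before treatment even starts.
chose_new = (motivation + rng.normal(size=n)) > 0
outcome_sel = true_effect * chose_new + 3.0 * motivation + rng.normal(size=n)
naive_diff = outcome_sel[chose_new].mean() - outcome_sel[~chose_new].mean()

# Randomized controlled experiment: assignment is independent of motivation,
# so the two groups are matched (in expectation) on the confound.
assigned_new = rng.random(n) < 0.5
outcome_rct = true_effect * assigned_new + 3.0 * motivation + rng.normal(size=n)
rct_diff = outcome_rct[assigned_new].mean() - outcome_rct[~assigned_new].mean()

print(naive_diff)  # inflated well above 2.0 by the selection confound
print(rct_diff)    # close to the true effect of 2.0
```

The point is the one the notes make: once subjects select themselves into groups, the observed difference mixes the treatment effect with the confound; controlled (or randomized) assignment breaks that link.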