- StudyBlue
- Illinois
- Illinois State University
- Special Education
- Special Education 203
- Cuenca
- SED 203 Midterm

Samantha V.

The term assessment involves

Assessment is a global term for the process of observing, recording, and interpreting information in order to answer questions and make legal and instructional decisions about students.

When a student experiences difficulty in the general education classroom, what happens?

The student receives increasing levels of research-based intervention.

Response to intervention (RTI) is both an intervention process when a student experiences difficulty and a prereferral process for special education.

How can professionals ensure that assessment procedures are fair to all students?

Using a variety of assessment approaches to gather functional, developmental, and academic information

What are accommodations?

changes to the educational program and assessment procedures and materials that do not alter the instructional level

ex: test is administered individually rather than in a group

What are modifications?

changes or adaptations made to the educational program or assessment that alter the level, content, and/or assessment criteria

ex: student answers only the odd numbered questions

The assessment framework: 6 steps

screening

referral

eligibility

program planning

program monitoring

program evaluation

Screening

Is there a possibility of a disability?

Universal screening/observation

Referral

parent or teacher documents their concerns in a written referral

new student: screening process / older student: RTI

Eligibility

Does the student have a disability?

Does the student meet the criteria for services?

Program Planning

IEP is written

Program Monitoring

Where should instruction begin?

Program Evaluation

Have IEP goals been met?

Was instructional program successful?

Has the student made progress?

What is reliability?

Consistency of an assessment

The degree to which an instrument/assessment measures the same way each time it is used under the same condition with the same subjects

What is Validity?

Does this test measure what it is supposed to measure?

Does the measure actually reflect the concept?

Do the findings reflect the opinions, attitudes, and behaviors of the target population?

Reliability sources of error?

Testing Environment

Student

Errors from the test

Test administration

What are the three approaches to determining reliability?

Approach 1: Using correlation coefficients

Approach 2: Variances or standard deviations of measurement errors

Approach 3: Item Response Theory

True or False: The closer the correlation coefficient is to 1.0, either + or -, the weaker the relationship

False

True or False: Positive relationship=the scores either increase together or decrease together

True

true score

is the score an individual would obtain on a test if there were no measurement errors

if there were no error, the true score would equal the obtained score

SEM formula

relates a student's true score to the obtained score

X = T + E

X = student's obtained (observed) score

T = true score

E = error
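
The X = T + E relationship above can be sketched numerically. The scores below are hypothetical values chosen only for illustration:

```python
# Classical test theory: obtained score X = true score T + error E.
# All values here are hypothetical, for illustration only.
def obtained_score(true_score: float, error: float) -> float:
    """Return the observed score X = T + E."""
    return true_score + error

# Measurement error pushes the obtained score away from the true score.
print(obtained_score(85, 3))   # prints 88: error inflates X above T

# With no error, the obtained score equals the true score.
print(obtained_score(85, 0))   # prints 85: X == T
```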

Technical information about tests is known as?

Item Response Theory (IRT). IRT involves a statistical calculation that determines how well the instrument differentiates between individuals at various levels of measured abilities or characteristics.

Types of Validity? (4.5)

Content Validity (norm-referenced test)

Criterion-related validity (concurrent validity and predictive validity)

Construct validity

Consequential validity

Content Validity

Examining the test items to determine the extent to which they reflect or do not reflect the content domain (e.g. reading, math, writing, social studies)

most important for achievement tests, because achievement tests typically measure content knowledge

applied to norm-referenced tests

Concurrent Validity (Criterion-related validity)

Indicates the extent to which the test scores accurately estimate an individual's current state with regard to the criterion.

Predictive Validity (Criterion-related validity)

estimate of the extent to which one test accurately predicts future performance or behavior

(Concurrent validity, by contrast, is the extent to which the results of two different tests administered at about the same time correlate with each other.)

Construct Validity

Is the extent to which a test measures a particular trait, construct, or psychological characteristic.

achievement and cognitive ability

Consequential Validity

Describes the extent to which an assessment instrument promotes the intended consequences

performance-based assessment (e.g., state assessment)

measures how students can apply information to real life

Norm-Referenced Tests

Compares an individual's performance to the performance of his or her peers; emphasis is on relative standing, not mastery

Criterion-Referenced Tests

Measure skill development in terms of mastery; used to help write IEP goals and objectives and where to begin instruction

curriculum-based measurement

provides a description of a student's knowledge, skills, and behavior

Scales of Measurement

Nominal

Ordinal

Interval

Ratio

Nominal scale of measurement

Names or labels categories--the lowest level of measurement

Ordinal scale of measurement

Orders items in a scale or continuum--used when rank or order matters

Interval scale of measurement

Orders items in a continuum where the distance between intervals is equal. Does not have to have a true (absolute) zero; zero is arbitrary--Fahrenheit and Celsius temperature scales

Ratio Scales of Measurement

A measurement scale with equal intervals and a true (absolute) zero, so a given distance along the scale means the same thing no matter where on the scale you are--weight and height

Frequency Distribution

way of organizing test scores based on how often they occur
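
A frequency distribution can be sketched with Python's standard library; the class scores below are made up for illustration:

```python
from collections import Counter

# Hypothetical test scores for a small class.
scores = [85, 90, 85, 70, 90, 85, 100]

# A frequency distribution counts how often each score occurs.
distribution = Counter(scores)

print(distribution[85])  # prints 3: the score 85 occurred three times
print(distribution[70])  # prints 1: the score 70 occurred once
```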

Measures of Central Tendency

Mean: average score

Median: the score in which 50% score above and 50% below score

Mode: number that occurs most frequently

Standard Deviation

Degree to which a score deviates from the mean
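
The central tendency measures and the standard deviation can both be computed with Python's `statistics` module; the scores are hypothetical:

```python
import statistics

# Hypothetical class scores, for illustration only.
scores = [70, 80, 80, 90, 100]

mean = statistics.mean(scores)      # average score
median = statistics.median(scores)  # middle score: 50% above, 50% below
mode = statistics.mode(scores)      # most frequently occurring score
sd = statistics.pstdev(scores)      # how far scores deviate from the mean

print(median, mode)  # prints 80 80
```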

Raw Scores

Number of items student answers correctly

Developmental Scores

Transformed from raw scores; reflect average performance at age and grade levels

Percentile ranks

the point in a distribution at or below which the scores of a given percentage of students fall
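
The definition above can be sketched as a small helper; the function name and scores are hypothetical, for illustration only:

```python
# Percentile rank: the percentage of scores in a distribution that fall
# at or below a given score. Hypothetical helper and data.
def percentile_rank(score: float, scores: list) -> float:
    at_or_below = sum(1 for s in scores if s <= score)
    return 100 * at_or_below / len(scores)

scores = [55, 60, 70, 80, 85, 90, 95, 100]
print(percentile_rank(80, scores))   # prints 50.0: half the scores fall at or below 80
print(percentile_rank(100, scores))  # prints 100.0: all scores fall at or below 100
```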

Standard Scores

Indicates how many standard deviations an observation is above or below the mean
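
A standard (z) score follows directly from the definition above; the mean of 100 and SD of 15 below are hypothetical example values:

```python
# Standard (z) score: how many standard deviations an observation
# lies above or below the mean. Hypothetical values for illustration.
def z_score(raw: float, mean: float, sd: float) -> float:
    return (raw - mean) / sd

# A score of 115 on a scale with mean 100 and SD 15:
print(z_score(115, 100, 15))  # prints 1.0: one SD above the mean
print(z_score(85, 100, 15))   # prints -1.0: one SD below the mean
```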

Normal Curve Equivalent

Scores that have been scaled so that they have a normal distribution, with a mean of 50 and a standard deviation of 21.06 in the normative sample for a specific grade

The normal curve is a symmetrical, bell-shaped curve
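
Given the mean of 50 and SD of 21.06 from the card above, an NCE can be sketched from a z-score; this assumes the z-score is already known:

```python
# Normal curve equivalent: a normalized score with mean 50 and SD 21.06.
# Sketch assuming the student's z-score has already been computed.
def nce(z: float) -> float:
    return 50 + 21.06 * z

print(nce(0))                # prints 50.0: an exactly average performance
print(round(nce(1.0), 2))    # prints 71.06: one SD above the mean
```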

Stanines

standard score bands that divide a distribution of scores into nine parts; mean of 5 and SD of 2

Basal Level

the item set where you begin the test; it matches the student's age

Ceiling Level

The set where you stop the test because the person has made more than 8 errors in a set

Correlation coefficients

measure the correlation, or relationship, between tests, test items, scoring procedures, observations, or behavior ratings
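
A correlation coefficient (Pearson's r) can be computed from two sets of scores, e.g. a test and a retest. The data below are hypothetical:

```python
import math

# Pearson correlation coefficient between two sets of scores.
# Hypothetical test/retest data for illustration only.
def pearson(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Scores that rise together show a strong positive relationship (near +1.0);
# scores that move in opposite directions are negative (near -1.0).
test1 = [70, 80, 90, 100]
test2 = [72, 81, 92, 99]
print(round(pearson(test1, test2), 3))  # close to +1.0
```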

5 types of reliability that involve correlation coefficients

test-retest

alternate form

split-half: compare the consistency of the test items

internal consistency: analyzes how consistent the items within a single measure are with one another

interscorer/interobserver/interrater reliability

What is SEM?

standard error of measurement
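
The standard error of measurement is commonly computed as SD × √(1 − r), where r is the test's reliability coefficient. This formula is not given in the cards above, and the SD and reliability values below are hypothetical:

```python
import math

# Standard error of measurement (SEM), using the common formula
# SEM = SD * sqrt(1 - r), where r is the reliability coefficient.
# The values below are hypothetical, for illustration only.
def sem(sd: float, reliability: float) -> float:
    return sd * math.sqrt(1 - reliability)

# A test with SD 15 and reliability .91:
print(round(sem(15, 0.91), 2))  # prints 4.5: about 4.5 points of expected error

# A perfectly reliable test would have no measurement error at all.
print(sem(15, 1.0))  # prints 0.0
```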

other factors that could affect reliability and scores

test length: the longer the test, the more reliable it is

test speed: when a test is timed, reliability can be problematic

group homogeneity: the more heterogeneous the group of students who take the test, the more reliable the measure will be

item difficulty: if items are so hard or so easy that there is little variability, reliability suffers

objectively scored tests show higher reliability than subjectively scored tests

test-retest interval: the shorter the interval, the higher the reliability

What makes a test standardized?

tests in which a test manual prescribes administration, scoring, and interpretation procedures that must be strictly followed by the test examiner...

norm-referenced tests that can be administered individually or to a group

Why is standardization of test important?

Failure to follow these procedures compromises the reliability, validity, and interpretation of the test results

How are criterion-referenced and norm-referenced tests different?

Performance on a criterion-referenced test provides information on whether the student has attained a predetermined achievement, behavior, or social criterion

Typical norm-referenced tests survey a broad domain, while criterion-referenced tests usually have fewer domains but more items in each domain

Criterion-referenced tests help in instructional planning and provide information about a student's level of performance

Are criterion-referenced or norm-referenced tests a better method to use to immediately impact instruction?

Criterion-referenced tests, because they provide information about the student's current level of performance.

"The semester I found StudyBlue, I went from a 2.8 to a 3.8, and graduated with honors!"

Jennifer Colorado School of Mines
StudyBlue is not sponsored or endorsed by any college, university, or instructor.

© 2014 StudyBlue Inc. All rights reserved.
