
Test for Reliability


Running head: CASE ANALYSIS 3

Case Analysis 3

Linda Maclean

National University

December 10, 2014


Case Analysis 3

There are numerous ways to test for reliability. I will discuss several of these, including inter-rater reliability, test-retest reliability, and internal consistency. I will also explain the difference between reliability and validity. Finally, I will review Case Analysis 2, "I want to get into grad school real bad" (Gliner, Morgan, & Leech, 2009, p. 163), to determine whether Gliner's test was reliable.

Reliability is extremely important in the construction of a good test. If a test does not measure consistently (reliably), then we cannot count on its scores being an accurate assessment of a student's knowledge. Just as we could not trust a bathroom scale whose readings fluctuate five pounds up or down in a given day, we cannot trust the scores on a test unless we know how consistently it measures. Only when we can determine the extent to which test scores are reliable can those scores be useful and fair to the people taking the test. A test or measure cannot be valid if it is not reliable (Gliner et al., 2009, p. 368).

Reliability reflects the extent to which test scores are free from errors of measurement. No test is perfectly reliable, because random errors cause scores to vary from time to time and from situation to situation. The goal is to minimize these inevitable errors of measurement and thereby increase reliability.
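Classical test theory offers one common way to formalize this point. As a brief sketch (the notation below is standard in the measurement literature, not drawn from Gliner et al.), an observed score X is modeled as a true score T plus random error E, and reliability is the share of observed-score variance attributable to true scores:

    X = T + E, \qquad
    \rho_{XX'} = \frac{\sigma_T^2}{\sigma_X^2}
               = 1 - \frac{\sigma_E^2}{\sigma_X^2}

The larger the error variance relative to the total score variance, the lower the reliability coefficient.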

Test validity is the extent to which a test accurately measures what it purports to measure. In the fields of psychological and educational testing, validity is the degree to which evidence and theory support the interpretations of test scores entailed by the proposed uses of tests (Wainer & Braun, 1988).

Inter-observer reliability becomes a concern whenever humans are part of the measurement procedure. When humans serve as raters, there is reason to question whether the results are reliable, or consistent, because people can be inconsistent and can misinterpret questions.

There are several ways to estimate inter-rater reliability. If the measurement consists of categories, each rater can check off the category into which each observation falls, and the percentage of agreement between the raters can then be calculated, as in the sketch below. The Navy uses this method, to a degree, during promotion boards.
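To make the percent-agreement calculation concrete, here is a minimal sketch in Python; the two raters' category assignments are hypothetical values invented for illustration, not data from the case.

    # Percent agreement between two raters who assign each observation
    # to a category; the ratings are hypothetical, for illustration only.
    rater_a = ["pass", "fail", "pass", "pass", "fail", "pass"]
    rater_b = ["pass", "fail", "fail", "pass", "fail", "pass"]

    # Count observations on which the raters chose the same category.
    agreements = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    percent_agreement = 100 * agreements / len(rater_a)

    print(f"Percent agreement: {percent_agreement:.1f}%")  # 5 of 6 -> 83.3%

Percent agreement is simple to compute, although more refined indices, such as Cohen's kappa, also correct for the level of agreement expected by chance.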

...
