Archive for the ‘Item Quality Metrics’ Category

IACAT Day #1

Today, I had the distinct privilege of attending and presenting at the first-ever meeting of the International Association of Computerized Adaptive Testing.  Wow!  The diversity of theoreticians, practitioners, and supporters of CAT around the globe is fantastic.  We had folks from 30 different countries in attendance.  This is the logical extension of the conferences put together by GMAC […]


Spring Conferences

OK, so it’s not April or May anymore.  Yes, it’s a little late to be providing a review of the Spring conferences put on by the International Objective Measurement Workshop and the National Council on Measurement in Education. Better late than never. IOMW is the biennial Rasch modeling conference.  I attended a few times in the ’90s but […]


Why would a long test have a low reliability?

Introduction: Recently, we ran a reliability analysis for a client that is worth sharing. The certification exam had about 400 items, and its reliability came in under .80 as measured by Cronbach’s Alpha (Cronbach, 1951), an undesirable if not unacceptable reliability for any test, let alone a test containing 400 items. (NOTE: Mountain Measurement didn’t […]
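For readers who want to reproduce this kind of check, here is a minimal sketch of Cronbach's alpha computed from an examinees-by-items score matrix. The function name and the tiny 0/1 response matrix are hypothetical, not the client data from the post; the formula is the standard one, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an examinees-by-items score matrix (rows = people)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item score variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of examinee total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 0/1 responses: 5 examinees x 4 items
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
]
print(round(cronbach_alpha(responses), 3))  # -> 0.696
```

Note that alpha depends on both test length and inter-item consistency, which is why a very long test with a low alpha (as in the post) is a red flag worth investigating.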


The Point-Biserial Correlation Coefficient

One common metric used to assess item quality is the point-biserial correlation coefficient (rpb). The “pt bis,” as it is sometimes called, is the correlation between an item score (1/0) and the total score on a test. Positive values are desirable and indicate that the item is good at differentiating between high ability and […]
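Since the point-biserial is just the Pearson correlation between a dichotomous item score and the total score, it can be computed in a couple of lines. This is a minimal sketch with hypothetical data; the function name is mine, and many operational programs report a "corrected" variant that first subtracts the item score from the total (noted in the comment) rather than the raw form shown here.

```python
import numpy as np

def point_biserial(item_scores, total_scores):
    """Pearson correlation between a 1/0 item score and the total test score.

    For a corrected point-biserial, pass total_scores with this item's
    score removed (total_scores - item_scores) to avoid inflating r.
    """
    return np.corrcoef(item_scores, total_scores)[0, 1]

# Hypothetical data: 6 examinees' scores on one item and on the whole test
item = [1, 1, 0, 1, 0, 0]
totals = [9, 8, 4, 7, 3, 5]
print(round(point_biserial(item, totals), 3))  # -> 0.926
```

A strongly positive value like this indicates the item separates high scorers from low scorers; values near zero or negative flag items worth reviewing.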


© 2009-2017 Mountain Measurement, Inc. All Rights Reserved