Classical Item Statistics
ConstructMap produces traditional item-analysis statistics, as well as item response modeling fit statistics and proficiency data. The Classical Item Statistics report includes all active items in the user-selected Item Set.
- Select Reports - Item Reports - Classical Item Statistics. Enter a title for the report and a filename for storing the text file output. Select an Item Set using the Browse... button if multiple item sets exist. Select the Calculation Method for computing fit statistics. "By Item" computes expected and observed scores for mean square fit statistics for each item, while "By Parameter" computes values for each parameter. Normally, you will not need to Re-Calculate Item Fit Statistics, but if you have manually changed any item parameters, then you will want to set this option to Yes.
- Since this report produces text file output as well as an output screen, you can print this report using Word or Notepad. The file will be located in the folder you specified (note the filename in the upper left-hand corner of the heading area).
- Close the report display by clicking on the close box in the upper right-hand corner.
As shown in Figure 2, the heading of the report indicates the type of proficiency estimation (in this case it was MLE), the number of items included in the report, the number of cases in the project, and any active filters. Then, each item is listed, along with the name of the Item Set it belongs to and the variable it provides evidence of. Fit statistics are also displayed for each item followed by the category statistics.
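The manual does not spell out the fit formulas ConstructMap uses, but weighted (infit) and unweighted (outfit) mean squares are conventionally built from squared standardized residuals of observed against expected scores. A minimal sketch for a dichotomous Rasch item (the function names and the Rasch simplification are illustrative assumptions, not ConstructMap's actual code):

```python
import numpy as np

def rasch_prob(theta, delta):
    """Probability of a correct response under the dichotomous Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - delta)))

def item_fit(responses, thetas, delta):
    """Outfit and infit mean squares for one item.

    responses: 0/1 array of scored responses
    thetas:    proficiency estimates, same length as responses
    delta:     item difficulty
    """
    p = rasch_prob(thetas, delta)
    var = p * (1.0 - p)                       # model variance of each response
    z2 = (responses - p) ** 2 / var           # squared standardized residuals
    outfit = z2.mean()                        # unweighted mean square
    infit = ((responses - p) ** 2).sum() / var.sum()  # information-weighted
    return outfit, infit
```

Values near 1.0 indicate responses about as variable as the model predicts; this is the kind of per-item summary the report's fit columns convey.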
For each category, the report displays the number of students who responded in that category and the percentage of students that represents. In computing percentages, only students with valid responses on the item are included. The point biserial is also displayed: the correlation between responding in that particular category and overall test performance. The category with the largest point biserial value is the one chosen by students with the highest test scores. We expect the largest point biserial value for the most correct response, because students who did well on the instrument overall should also do well on the item; similarly, we expect the most negative point biserial value for the least correct response. Item “i1” follows this pattern, with the point biserial increasing from the lowest to the highest response category.
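The per-category point biserial can be sketched as the correlation between a 0/1 indicator of responding in a category and the total test score. A minimal version (the function name and missing-data convention are assumptions for illustration, not ConstructMap's API):

```python
import numpy as np

def category_point_biserials(item_responses, total_scores):
    """Point biserial for each response category of one item.

    item_responses: array of category codes (NaN marks a missing response)
    total_scores:   array of total test scores, same length
    Returns {category: correlation of 'chose this category' with total score}
    """
    valid = ~np.isnan(item_responses)          # drop students with no valid response
    resp, totals = item_responses[valid], total_scores[valid]
    out = {}
    for cat in np.unique(resp):
        indicator = (resp == cat).astype(float)  # 1 if in this category, else 0
        out[cat] = np.corrcoef(indicator, totals)[0, 1]
    return out
```

For a well-behaved item, the correlations should rise from negative in the lowest category to positive in the highest, as described above.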
Mean abilities should also increase from low scores to high scores. While point biserials are computed from the raw scores, mean abilities are computed from the estimated proficiencies, in this case using maximum likelihood. In this example, the mean abilities behave as we would expect, increasing from the lowest score to the highest. The standard deviation of the ability estimates is also displayed.
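The per-category mean and standard deviation of ability can be sketched the same way, assuming proficiencies have already been estimated (here by whatever method the project uses, e.g. maximum likelihood; the helper name is illustrative):

```python
import numpy as np

def category_ability_stats(item_responses, abilities):
    """Mean and SD of proficiency estimates within each response category.

    item_responses: array of category codes (NaN marks a missing response)
    abilities:      estimated proficiencies, same length
    Returns {category: (mean ability, SD of ability)}
    """
    valid = ~np.isnan(item_responses)
    resp, theta = item_responses[valid], abilities[valid]
    stats = {}
    for cat in np.unique(resp):
        group = theta[resp == cat]
        stats[cat] = (group.mean(), group.std())
    return stats
```

As with the point biserials, the means should increase monotonically from the lowest to the highest category when the item is working as intended.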
Finally, estimated step difficulties (delta(i,j) values) are shown for the item, and Thurstonian thresholds and their standard errors are also displayed for polytomous items (the Thurstonian thresholds for dichotomous items are the step 1 difficulties).
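The Thurstonian threshold for score k is the proficiency at which the probability of scoring k or higher reaches 0.5, and it can be found numerically from the step difficulties. A sketch assuming the partial credit model parameterization (this is an illustrative routine, not ConstructMap's own):

```python
import numpy as np

def pcm_category_probs(theta, deltas):
    """Category probabilities under the partial credit model.

    deltas: step difficulties delta_1..delta_m; categories are 0..m.
    """
    cum = np.concatenate(([0.0], np.cumsum(theta - np.asarray(deltas))))
    e = np.exp(cum - cum.max())               # subtract max for stability
    return e / e.sum()

def thurstonian_thresholds(deltas, lo=-10.0, hi=10.0, tol=1e-8):
    """Theta at which P(score >= k) = 0.5, for each k = 1..m, via bisection."""
    thresholds = []
    for k in range(1, len(deltas) + 1):
        f = lambda t: pcm_category_probs(t, deltas)[k:].sum() - 0.5
        a, b = lo, hi
        while b - a > tol:
            mid = 0.5 * (a + b)
            if f(mid) > 0:                    # P(X >= k) too high: go lower
                b = mid
            else:
                a = mid
        thresholds.append(0.5 * (a + b))
    return thresholds
```

For a dichotomous item this reduces to the step 1 difficulty, consistent with the note above; for polytomous items the thresholds are ordered even when the delta(i,j) values themselves are not.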
Summary statistics are also produced for this report, including the mean test score as both a raw score and percent correct, the standard deviation of the total test scores as both a raw score and percent correct, and Cronbach’s Alpha in two forms, one that includes all cases, and a second that excludes cases that contain any missing data. The percent of missing data is also displayed.
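Cronbach's Alpha can be sketched directly from the person-by-item score matrix. The "all cases" form below assumes missing responses are scored 0; that is an assumption made for illustration, since the manual does not state how ConstructMap scores missing data in that form:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's Alpha for an (n_persons, n_items) score matrix with no missing data."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

def alpha_both_forms(scores):
    """Alpha for all cases (missing scored 0 -- an assumption) and for
    listwise-complete cases (rows with any missing data excluded)."""
    complete = scores[~np.isnan(scores).any(axis=1)]
    all_cases = np.nan_to_num(scores, nan=0.0)
    return cronbach_alpha(all_cases), cronbach_alpha(complete)
```

Comparing the two forms, as the report does, shows how much the reliability estimate depends on how missing responses are handled.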