After the test-level statistics, a detailed performance analysis for each item is provided, one item to a page. This includes a Quantile Plot, Item Information, Item Statistics, and Option Statistics.
Quantile Plot
The quantile plot, as seen below, is arguably the best way to graphically depict the performance of an item under classical test theory. It is constructed by dividing the sample into a specified number of groups based on overall number-correct score, and then calculating the proportion of each group that selected each option. For a four-option multiple-choice item with five score groups, as in the example, there are 20 data points. The 5 points for a given option are connected by a colored line. A good item will typically have a positive slope on the line for the correct/keyed answer, while the lines for the incorrect options should have negative slopes. You can see this in the example below: D is the correct answer, and is selected less frequently by the lower two groups (bottom 40% of students) and more frequently by the top three groups.
Note that the numbers for the quantile plot are provided at the bottom of the page for reference.
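To make the construction concrete, here is a minimal sketch of how the quantile-plot proportions could be computed. It assumes a matrix of raw option responses and a vector of keyed answers; the function and variable names are illustrative, not the report's actual algorithm.

```python
import numpy as np

def quantile_plot_data(responses, key, n_groups=5):
    """Sketch: split examinees into score groups by number-correct total,
    then find the proportion of each group selecting each option.

    responses : 2D array of option letters, examinees x items (e.g. 'A'-'D')
    key       : 1D array of keyed answers, one per item
    n_groups  : number of score groups (quantiles)
    Returns a dict mapping item index -> {option: proportions per group}.
    """
    responses = np.asarray(responses)
    key = np.asarray(key)
    total = (responses == key).sum(axis=1)          # number-correct score

    # Assign each examinee to a score group (0 = lowest, n_groups-1 = highest)
    cuts = np.percentile(total, np.linspace(0, 100, n_groups + 1)[1:-1])
    groups = np.searchsorted(cuts, total, side="right")

    plot_data = {}
    for j in range(responses.shape[1]):
        plot_data[j] = {
            opt: [
                np.mean(responses[groups == g, j] == opt)
                if np.any(groups == g) else 0.0
                for g in range(n_groups)
            ]
            for opt in np.unique(responses[:, j])
        }
    return plot_data
```

Each option's list of proportions corresponds to one colored line in the plot, from the lowest score group to the highest.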
Item Information
The item information table provides the item sequence number, item ID, keyed response, inclusion code, number of options, and the domain the item is in. This information is provided just for reference, and does not include any calculated statistics.
Item Statistics
The item statistics table provides item-level statistics and is described separately for multiple-choice and polytomous items.
Multiple-Choice Items
Label | Explanation
N | Number of examinees that responded to the item
P | Proportion correct
Domain Rpbis* | Point-biserial correlation of keyed response with domain score
Domain Rbis* | Biserial correlation of keyed response with domain score
Total Rpbis | Point-biserial correlation of keyed response with total score
Total Rbis | Biserial correlation of keyed response with total score
Alpha w/o | The coefficient alpha of the test if the item was removed
Flags | Any flags, given the bounds provided; LP = Low P, HP = High P, LR = Low rpbis, HR = High rpbis, K = Key error (the rpbis for a distractor is higher than the rpbis for the key), DIF for any item with a significant DIF test result
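As a rough illustration of the dichotomous statistics above, the sketch below computes P, the point-biserial and biserial correlations with the total score, and alpha if the item is removed. The 0/1 scored matrix and the use of the uncorrected total score are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import norm

def mc_item_stats(scored, item_index):
    """Sketch of classical statistics for one dichotomously scored item.

    scored     : 2D 0/1 array, examinees x items (1 = correct)
    item_index : column of the item of interest
    """
    scored = np.asarray(scored, dtype=float)
    item = scored[:, item_index]
    total = scored.sum(axis=1)

    p = item.mean()                            # proportion correct (P)
    rpbis = np.corrcoef(item, total)[0, 1]     # point-biserial with total score

    # Biserial correlation: rescale the point-biserial by the normal ordinate
    # at the point that splits the distribution into p and 1 - p.
    y = norm.pdf(norm.ppf(p))
    rbis = rpbis * np.sqrt(p * (1 - p)) / y

    # Coefficient alpha of the test with the item removed
    rest = np.delete(scored, item_index, axis=1)
    k = rest.shape[1]
    alpha_wo = (k / (k - 1)) * (1 - rest.var(axis=0, ddof=1).sum()
                                / rest.sum(axis=1).var(ddof=1))
    return p, rpbis, rbis, alpha_wo
```

The domain-level Rpbis and Rbis follow the same formulas with the domain score substituted for the total score.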
Polytomous Items
Label | Explanation
N | Number of examinees that responded to the item
Mean | Average score for the item
Domain r* | Correlation of item (Pearson’s r) with domain score
Domain Eta*+ | Coefficient eta from an ANOVA using item and domain scores
Total r | Correlation of item (Pearson’s r) with total score
Total Eta+ | Coefficient eta from an ANOVA using item and total scores
Alpha w/o | The coefficient alpha of the test if the item was removed
Flags | Any flags, given the bounds provided; same as for dichotomous items, except that the mean item score is used instead of P
*Output provided if there are 2+ domains.
+Eta is reported if the item has 3+ categories, otherwise the biserial correlation will be reported.
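For the polytomous statistics, the sketch below shows one plausible computation of Pearson's r and of coefficient eta, taking eta from a one-way ANOVA with the item's score categories as groups and the total (or domain) score as the dependent variable. The function name and inputs are illustrative assumptions.

```python
import numpy as np

def polytomous_item_stats(item_scores, total_scores):
    """Sketch of Pearson r and coefficient eta for a polytomous item.

    item_scores  : 1D array of item category scores (e.g. 0-4)
    total_scores : 1D array of total (or domain) scores
    """
    item_scores = np.asarray(item_scores, dtype=float)
    total_scores = np.asarray(total_scores, dtype=float)

    r = np.corrcoef(item_scores, total_scores)[0, 1]   # Pearson r with total score

    # Eta (correlation ratio): between-group sum of squares over total sum of
    # squares, with item categories as the ANOVA groups.
    grand_mean = total_scores.mean()
    ss_total = ((total_scores - grand_mean) ** 2).sum()
    ss_between = sum(
        (item_scores == c).sum()
        * (total_scores[item_scores == c].mean() - grand_mean) ** 2
        for c in np.unique(item_scores)
    )
    eta = np.sqrt(ss_between / ss_total)
    return r, eta
```

Unlike Pearson's r, eta does not assume a linear relationship between the item score and the total score, which is why it is preferred once an item has three or more categories.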
If requested, the DIF test results also appear in the classical statistics table and are defined below.
Label | Explanation
M-H | The Mantel-Haenszel DIF statistic
p | p-value associated with the M-H test statistic
Bias Against | If p is less than 0.05, the group the item is biased against
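The Mantel-Haenszel DIF statistic compares the two groups' odds of answering the item correctly within strata matched on total score. The sketch below computes the standard M-H chi-square with continuity correction; the function name, stratification on raw total score, and group coding are assumptions for illustration.

```python
import numpy as np
from scipy.stats import chi2

def mantel_haenszel_dif(item, total, group):
    """Sketch of the Mantel-Haenszel DIF chi-square for one dichotomous item.

    item  : 1D 0/1 array of item scores
    total : 1D array of total scores (matching/stratifying variable)
    group : 1D array, 0 = reference group, 1 = focal group
    """
    item, total, group = map(np.asarray, (item, total, group))
    a_sum = e_sum = v_sum = 0.0

    for s in np.unique(total):                  # one stratum per total score
        mask = total == s
        ref, foc = mask & (group == 0), mask & (group == 1)
        n_r, n_f = ref.sum(), foc.sum()
        t = n_r + n_f
        if n_r == 0 or n_f == 0 or t < 2:
            continue                            # stratum carries no information
        m1 = item[mask].sum()                   # total correct in the stratum
        m0 = t - m1
        a_sum += item[ref].sum()                # correct in reference group
        e_sum += n_r * m1 / t                   # expected count under no DIF
        v_sum += n_r * n_f * m1 * m0 / (t ** 2 * (t - 1))

    mh_chi2 = (abs(a_sum - e_sum) - 0.5) ** 2 / v_sum
    p_value = chi2.sf(mh_chi2, df=1)
    return mh_chi2, p_value
```

A p-value below 0.05 flags the item, and the direction of the difference between observed and expected counts indicates which group the item appears to be biased against.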
Option Statistics
The following table provides explanations for option-level information in the third table seen above.
Label | Explanation
Option | Letter/number of the option
Weight | Scoring weight for polytomous items
N | Number of examinees that selected the option
Prop. | Proportion of examinees that selected the option
Rpbis | Point-biserial correlation (rpbis) of the option with total score
Rbis | Biserial correlation of the option with total score
Mean | Average score of examinees that selected the option
SD | Standard deviation of scores for examinees that selected the option
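These option-level values can be computed in the same way as the item-level correlations, treating "selected this option" as a 0/1 variable. The sketch below is illustrative; the function name and return structure are assumptions.

```python
import numpy as np
from scipy.stats import norm

def option_stats(responses_item, total_scores):
    """Sketch of option-level statistics for one item.

    responses_item : 1D array of selected options (e.g. 'A'-'D') for one item
    total_scores   : 1D array of total scores
    Returns a dict keyed by option with N, proportion, rpbis, rbis, mean, SD.
    """
    responses_item = np.asarray(responses_item)
    total_scores = np.asarray(total_scores, dtype=float)

    stats = {}
    for opt in np.unique(responses_item):
        chose = (responses_item == opt).astype(float)   # 1 if option selected
        p = chose.mean()
        rpbis = np.corrcoef(chose, total_scores)[0, 1]
        rbis = rpbis * np.sqrt(p * (1 - p)) / norm.pdf(norm.ppf(p))
        stats[opt] = {
            "N": int(chose.sum()),
            "Prop": p,
            "Rpbis": rpbis,
            "Rbis": rbis,
            "Mean": total_scores[chose == 1].mean(),
            "SD": total_scores[chose == 1].std(ddof=1),
        }
    return stats
```

A well-behaved item shows a positive Rpbis (and higher mean score) for the keyed option and negative Rpbis values for the distractors, mirroring the key-error flag described above.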