by Lauren Riehm, Keean Nanji, Moiz Lakhani, Evelina Pankiv, Dean Hasanee, Wesla Pfeifer
Purpose
Large language models (LLMs) have the potential to transform medical education. Whether LLMs can generate multiple-choice questions (MCQs) of similar quality to those created by humans is unclear. This investigation assessed the quality of LLM-generated MCQs compared with human-generated MCQs.
Methods
This review was registered with PROSPERO (CRD42025608775). A systematic review with frequentist random-effects network meta-analysis (NMA) or pairwise meta-analysis was performed. Ovid MEDLINE, Ovid EMBASE, and Scopus were searched from inception to November 1, 2024. MCQ quality was assessed with seven pre-defined outcomes: question relevance, question clarity, accuracy/correctness, distractor quality, item difficulty analysis, and item discrimination analysis (point biserial correlation and item discrimination index). Continuous data were transformed to a 10-point scale to facilitate statistical analysis and were reported as mean differences (MD). The MERSQI and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) NMA guidelines were used for risk of bias and certainty of evidence assessments.
Results
Five LLMs were included. NMA demonstrated that ChatGPT 4 generated MCQs of similar quality to humans with regards to question relevance (MD −0.13; 95% CI: −0.44, 0.18; GRADE: VERY LOW), question clarity (MD −0.03; 95% CI: −0.15, 0.10; GRADE: VERY LOW), and distractor quality (MD −0.10; 95% CI: −0.24, 0.04; GRADE: VERY LOW); however, MCQs generated by Llama 2 performed worse than human-generated MCQs with regards to question clarity (MD −1.21; 95% CI: −1.60, −0.82; GRADE: VERY LOW) and distractor quality (MD −1.50; 95% CI: −2.03, −0.97; GRADE: VERY LOW). Exploratory post-hoc t-tests demonstrated that ChatGPT 3.5 performed worse than Llama 2 and ChatGPT 4 with regards to question clarity and distractor quality (p …).

Conclusion
ChatGPT 4 may create MCQs of similar quality to humans, whereas ChatGPT 3.5 and Llama 2 may generate MCQs of worse quality. Further studies that directly compare these LLMs to human-generated questions and administer the resulting MCQs to students are required.