Comparing the performance of French orthopaedic surgery residents with the artificial intelligence ChatGPT-4/4o in the French diploma exams of orthopaedic and trauma surgery - 07/12/24
Abstract
Introduction
This study evaluates the performance of ChatGPT, particularly its versions 4 and 4o, in answering questions from the French orthopedic and trauma surgery exam (Diplôme d’Études Spécialisées, DES), compared to the results of French orthopedic surgery residents. Previous research has examined ChatGPT's capabilities across various medical specialties and exams, with mixed results, especially in the interpretation of complex radiological images.
Hypothesis
ChatGPT version 4o is capable of achieving a score at least equal to that of residents on the DES exam.
Methods
The response capabilities of the ChatGPT model, versions 4 and 4o, were evaluated and compared to the results of residents for 250 questions taken from the DES exams from 2020 to 2024. A secondary analysis focused on the differences in the AI's performance based on the type of data being analyzed (text or images) and the topic of the questions.
Results
The score achieved by ChatGPT-4o was equivalent to that of residents over the past five years: 74.8% for ChatGPT-4o vs. 70.8% for residents (p = 0.32). The accuracy rate was significantly higher for the latest version, 4o, than for version 4 (74.8% vs. 58.8%; p = 0.0001). Secondary subgroup analysis revealed a deficiency of the AI in analyzing graphical images (success rates of 48% and 65% for ChatGPT-4 and 4o, respectively). ChatGPT-4o outperformed version 4 on questions involving the spine, pediatrics, and the lower limb.
Conclusion
The performance of ChatGPT-4o is equivalent to that of French residents in answering questions from the DES in orthopedic and trauma surgery. Significant progress was observed between versions 4 and 4o. Analyzing questions involving iconography remains a notable challenge for the current versions of ChatGPT, which tend to perform worse on these than on questions requiring only text analysis.
Level of evidence
IV; Retrospective Observational Study.
Keywords: Artificial intelligence, ChatGPT-4, ChatGPT-4o, Diploma of specialized studies, Orthopedic and trauma surgery