Research Article

Evaluation of Accuracy, Information Quality, and Readability of Artificial Intelligence Based Chatbots in Pediatric Oral Surgery: A Comparative Analysis Based on the AAPD Clinical Guideline

Volume: 28 Number: 4 December 29, 2025
Abstract

Objective: Chatbots powered by artificial intelligence (AI) are increasingly used as tools for obtaining medical and dental knowledge. This study aimed to assess and compare the performance of four AI chatbots in providing evidence-based information on pediatric oral surgery topics, with reference to the American Academy of Pediatric Dentistry (AAPD) clinical guideline.

Materials and Methods: This descriptive observational study evaluated four AI chatbots (ChatGPT-5, Gemini, Copilot, and DeepSeek) by posing 20 questions derived from the AAPD Guideline on Management Considerations for Pediatric Oral Surgery. Responses were assessed for accuracy using a grading system, for quality using the 16-item DISCERN instrument, and for readability using the Flesch–Kincaid Grade Level (FKGL) formula. Non-parametric Kruskal-Wallis and Mann-Whitney U tests with Holm-Bonferroni adjustment were employed for statistical comparisons (p<0.05).

Results: Significant differences were observed among the chatbots on all outcome measures. Gemini and ChatGPT-5 achieved the highest accuracy scores (1.30±0.47 and 1.40±0.60, respectively; p=0.001), whereas DeepSeek and Copilot showed lower accuracy. In terms of information quality, DeepSeek produced the highest DISCERN scores (52.90±3.73; p<0.001), followed by Copilot. ChatGPT-5 and Gemini yielded more readable outputs (10.73±1.98 and 11.68±1.91, respectively), though readability differences were not statistically significant (p>0.05).

Conclusions: Of the models evaluated, Gemini and ChatGPT-5 produced the most accurate responses, while DeepSeek generated the highest-quality content. Although AI chatbots show promise as supplementary tools for patient education and clinical learning in pediatric oral surgery, their reliability varies considerably across platforms. Continuous validation and guideline-based evaluation are essential prior to clinical integration.
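For readers unfamiliar with the readability metric used above, the standard Flesch–Kincaid Grade Level formula is 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. The sketch below is a minimal, illustrative implementation; the `count_syllables` heuristic is a naive vowel-group approximation (real readability tools use pronunciation dictionaries), so its scores may differ slightly from those produced by the software used in the study.

```python
import re

def count_syllables(word):
    # Naive heuristic: count vowel groups, subtract a silent trailing 'e'.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fkgl(text):
    # Split sentences on terminal punctuation; extract alphabetic words.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Flesch-Kincaid Grade Level formula.
    return 0.39 * (len(words) / len(sentences)) \
        + 11.8 * (syllables / len(words)) - 15.59
```

A score around 10–12, as reported for ChatGPT-5 and Gemini, corresponds roughly to a 10th–12th grade (high-school) reading level.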

Keywords

Supporting Institution

Not applicable

Project Number

Not applicable

Ethical Statement

This study does not contain any biological material or demographic data from humans or animals. Therefore, ethics committee approval is not required for this study.

Thanks

Not applicable


Details

Primary Language

English

Subjects

Oral and Maxillofacial Surgery, Paedodontics

Journal Section

Research Article

Publication Date

December 29, 2025

Submission Date

October 9, 2025

Acceptance Date

October 25, 2025

Published in Issue

Year 2025 Volume: 28 Number: 4

EndNote
Kaya İ, Demirel A (December 1, 2025) Evaluation of Accuracy, Information Quality, and Readability of Artificial Intelligence Based Chatbots in Pediatric Oral Surgery: A Comparative Analysis Based on the AAPD Clinical Guideline. Cumhuriyet Dental Journal 28 4 586–593.

Cumhuriyet Dental Journal (Cumhuriyet Dent J, CDJ) is the official publication of the Cumhuriyet University Faculty of Dentistry. CDJ is an international journal dedicated to the latest advancements in dentistry. The aim of the journal is to provide a platform for scientists and academicians all over the world to promote, share, and discuss new issues and developments in different areas of dentistry. The first issue of the Journal of Cumhuriyet University Faculty of Dentistry was published in 1998. In 2010, the journal's name was changed to Cumhuriyet Dental Journal. The journal's publication language is English.


CDJ accepts articles in English. Submitting a paper to CDJ is free of charge, and CDJ does not have article processing charges.

Frequency: Four times a year (March, June, September, and December)

IMPORTANT NOTICE

All users of the Cumhuriyet Dental Journal should visit their user home page via the "https://dergipark.org.tr/tr/user" or "https://dergipark.org.tr/en/user" links to complete any information flagged with blue or yellow warnings and to update their e-mail addresses in the DergiPark system. Otherwise, e-mails from the journal may not be seen or may fall into the SPAM folder. Please fill in all missing fields in the relevant sections.

Please visit the journal's AUTHOR GUIDELINE to see the revised policies and submission rules in effect since 2020.