Research Article


Evaluation of the Performance of Different Chatbots’ Responses to Restorative Dentistry-Related Questions

Year 2025, Volume: 28 Issue: 2, 237 - 245, 30.06.2025

Abstract

Objectives: To evaluate and compare the performance of three different chatbots' responses to restorative dentistry questions concerning undergraduate education, specialty education, and topics currently considered controversial.
Materials and Methods: A total of thirty-five questions were created by two dentists. These questions addressed many different topics, such as terminology, treatment procedures, technical details, materials and application procedures, post-procedure care, indications, contraindications, and the approach in the presence of medical problems. Three different chatbots (Copilot, Gemini, and ChatGPT) were used in the study. Responses were evaluated using a 5-point Likert scale. The statistical significance level was set at 0.05.
Results: When the correlations among the sub-dimensions were evaluated, a very strong, statistically significant positive correlation was found between the questions about undergraduate education and the questions about specialty education (p < 0.001). Out of a total of 105 responses, Copilot produced 48 valid and 57 invalid responses; Gemini produced 54 valid and 51 invalid responses; and ChatGPT produced 58 valid and 47 invalid responses.
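The Methods and Results above describe scoring each response on a 5-point Likert scale, classifying responses as valid or invalid, and correlating sub-dimension scores. The sketch below illustrates one way such an analysis could look; the Likert scores, the use of a Spearman rank correlation, and the "score >= 4 counts as valid" threshold are all illustrative assumptions, not the paper's actual data or exact procedure.

```python
# Illustrative sketch only: made-up Likert scores, an ASSUMED validity
# threshold (>= 4), and a Spearman rank correlation as one plausible
# choice for ordinal data.
from statistics import mean

def to_ranks(xs):
    """Assign 1-based average ranks (ties share the mean rank)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = to_ranks(x), to_ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical 5-point Likert scores for one chatbot's answers to
# undergraduate- and specialty-education questions.
undergrad = [5, 4, 3, 5, 2, 4, 4]
specialty = [5, 4, 3, 4, 2, 5, 4]

scores = undergrad + specialty
valid = sum(1 for s in scores if s >= 4)   # assumed threshold
invalid = len(scores) - valid
rho = spearman(undergrad, specialty)
print(valid, invalid, round(rho, 2))
```

With these example scores the script reports the valid/invalid split and a strong positive rank correlation between the two sub-dimensions, mirroring the kind of result summarized above.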
Conclusions: We think that this study may inform further studies evaluating responses to various questions regarding different areas of dentistry, especially including controversial questions.

Ethical Statement

Not applicable.

References

  • 1. Mohammad‐Rahimi H, Ourang SA, Pourhoseingholi MA, Dianat O, Dummer PMH, Nosrat A. Validity and reliability of artificial intelligence chatbots as public sources of information on endodontics. Int Endod J 2024;57:305-314.
  • 2. Makrygiannakis MA, Giannakopoulos K, Kaklamanos EG. Evidence-based potential of generative artificial intelligence large language models in orthodontics: a comparative study of ChatGPT, Google Bard, and Microsoft Bing. Eur J Orthod 2024;cjae017.
  • 3. Alhaidry HM, Fatani B, Alrayes JO, Almana AM, Alfhaed NK. ChatGPT in dentistry: a comprehensive review. Cureus 2023;15:e38317.
  • 4. Kaftan AN, Hussain MK, Naser FH. Response accuracy of ChatGPT 3.5, Copilot and Gemini in interpreting biochemical laboratory data: a pilot study. Sci Rep 2024;14:8233.
  • 5. Tiwari A, Kumar A, Jain S, Dhull KS, Sajjanar A, Puthenkandathil R, et al. Implications of ChatGPT in public health dentistry: A systematic review. Cureus 2023;15:e40367.
  • 6. Lee Y, Shin T, Tessier L, Javidan A, Jung J, Hong D, et al. Harnessing artificial intelligence in bariatric surgery: comparative analysis of ChatGPT-4, Bing, and Bard in generating clinician-level bariatric surgery recommendations. Surg Obes Relat Dis 2024;20:603-608.
  • 7. Mohammad-Rahimi H, Khoury ZH, Alamdari MI, Rokhshad R, Motie P, Parsa A, et al. Performance of AI chatbots on controversial topics in oral medicine, pathology, and radiology. Oral Surg Oral Med Oral Pathol Oral Radiol 2024;137:508-514.
  • 8. Ali K, Barhom N, Tamimi F, Duggal M. ChatGPT-A double-edged sword for healthcare education? Implications for assessments of dental students. Eur J Dent Educ 2024;28:206-211.
  • 9. Buldur B, Teke F, Kurt MA, Sagtas K. Perceptions of dentists towards artificial intelligence: validation of a new scale. Cumhuriyet Dent J 2024;27:109-117.
  • 10. Fang Q, Reynaldi R, Araminta AS, Kamal I, Saini P, Afshari FS, et al. Artificial Intelligence (AI)-driven dental education: Exploring the role of chatbots in a clinical learning environment. J Prosthet Dent 2024.
  • 11. Su N-Y, Yu C-H. Survey of dental students’ perception of chatbots as learning companions. J Dent Sci 2024;19:1222-1223.
  • 12. Elnagar MH, Yadav S, Venugopalan SR, Lee MK, Oubaidin M, Rampa S, et al., editors. ChatGPT and dental education: Opportunities and challenges. Seminars in Orthodontics; 2024: Elsevier.
  • 13. Aminoshariae A, Nosrat A, Nagendrababu V, Dianat O, Mohammad-Rahimi H, O'Keefe AW, et al. Artificial intelligence in endodontic education. J Endod 2024;50:562-578.
  • 14. Oguz FE, Ekersular MN, Sunnetci KM, Alkan A. Can Chat GPT be Utilized in Scientific and Undergraduate Studies? Ann Biomed Eng 2024;52:1128-1130.
  • 15. ChatGPT. https://chat.openai.com Accessed March 22, 2024
  • 16. Gemini. https://gemini.google.com/app Accessed March 22, 2024
  • 17. Copilot. https://copilot.microsoft.com Accessed March 22, 2024
  • 18. Tepe M, Emekli E. Assessing the Responses of Large Language Models (ChatGPT-4, Gemini, and Microsoft Copilot) to Frequently Asked Questions in Breast Imaging: A Study on Readability and Accuracy. Cureus 2024;16:e59960.
  • 19. Rokhshad R, Zhang P, Mohammad-Rahimi H, Pitchika V, Entezari N, Schwendicke F. Accuracy and consistency of chatbots versus clinicians for answering pediatric dentistry questions: A pilot study. J Dent 2024;144:104938.

Details

Primary Language English
Subjects Restorative Dentistry
Journal Section Original Research Articles
Authors

Cansu Yıkıcı Çöl 0000-0001-8855-7417

Merve Nezir 0000-0001-8902-5471

Suat Özcan 0000-0001-8782-2899

Publication Date June 30, 2025
Submission Date February 6, 2025
Acceptance Date March 6, 2025
Published in Issue Year 2025, Volume: 28 Issue: 2

Cite

EndNote Yıkıcı Çöl C, Nezir M, Özcan S (June 1, 2025) Evaluation of the Performance of Different Chatbots’ Responses to Restorative Dentistry-Related Questions. Cumhuriyet Dental Journal 28 2 237–245.

Cumhuriyet Dental Journal (Cumhuriyet Dent J, CDJ) is the official publication of Cumhuriyet University Faculty of Dentistry. CDJ is an international journal dedicated to the latest advancements in dentistry. The aim of the journal is to provide a platform for scientists and academicians all over the world to promote, share, and discuss new issues and developments in different areas of dentistry. The first issue of the Journal of Cumhuriyet University Faculty of Dentistry was published in 1998. In 2010, the journal's name was changed to Cumhuriyet Dental Journal. The journal's publication language is English.


CDJ accepts articles in English. Submitting a paper to CDJ is free of charge, and CDJ does not have article processing charges.

Frequency: Four times a year (March, June, September, and December)

IMPORTANT NOTICE

All users of Cumhuriyet Dental Journal should visit their user home page via the "https://dergipark.org.tr/tr/user" or "https://dergipark.org.tr/en/user" links to complete the information shown in blue or yellow warnings and to update their e-mail addresses and other details in the DergiPark system. Otherwise, e-mails from the journal may not be seen or may fall into the SPAM folder. Please fill in all missing parts in the relevant fields.

Please visit the journal's AUTHOR GUIDELINE to see the revised policy and submission rules in effect since 2020.