Deep Learning in Dental Radiographic Imaging

Article information

J Korean Acad Pediatr Dent. 2024;51(1):1-10
Publication date (electronic) : 2024 February 26
doi: https://doi.org/10.5933/JKAPD.2024.51.1.1
Department of Pediatric Dentistry, Seoul National University Dental Hospital, Seoul, Republic of Korea
Corresponding author: Hyuntae Kim Department of Pediatric Dentistry, Seoul National University Dental Hospital, 101 Daehak-ro, Jongno-gu, Seoul 03080, Republic of Korea Tel: +82-2-6256-3265 / Fax: +82-2-6256-3266 / E-mail: kht131@snu.ac.kr
Received 2024 February 11; Revised 2024 February 17; Accepted 2024 February 17.

Abstract

Deep learning algorithms are becoming increasingly prevalent in dental research, as the technologies behind them now permeate everyday life. However, dental researchers and clinicians often find deep learning studies difficult to interpret. This review provides an overview of the general concepts of deep learning and of current deep learning research in dental radiographic image analysis, and describes the process of implementing a deep learning study. Deep-learning-based models perform well in classification, object detection, and segmentation tasks, making it possible to automatically identify oral lesions and anatomical structures. Such models can enhance the decision-making of researchers and clinicians. This review may be useful to dental researchers who evaluate and assess deep learning studies in the field of dentistry.

Introduction

Since the start of the 20th century, extraordinary advances in medical technologies have extended the human lifespan, leading to improvements in human wellbeing. As life expectancy has increased, dental care has become more important, and it is widely recognized that maintaining healthy natural teeth improves the quality of life [1-3]. Teeth play a critical role in food consumption by breaking down food into smaller fragments to facilitate its digestion in the digestive system. The initial stage of the digestive system occurs in the oral cavity, which includes teeth and adjacent tissues [4].

However, because the human oral cavity harbors a diverse array of microorganisms [5-7], dental caries [8,9] and periodontal disease [10,11] have persisted despite rapid advances in modern dentistry. Consequently, dental caries, periodontal disease, and the resulting tooth loss remain primary oral health challenges. Because it is difficult to visually inspect one's own oral cavity, many people visit a dentist only when they experience pain or discomfort.

When a patient with dental caries or another oral disease arrives at a clinic, the dentist diagnoses the severity of the oral disease and formulates a treatment plan. Early intervention is important to minimize patient discomfort and damage to teeth and oral tissues. Dental radiographic imaging techniques provide information imperceptible to the visual and tactile senses, allowing dentists to diagnose oral diseases with a high degree of sensitivity.

Computer-aided diagnosis (CAD) has evolved significantly with the digitization of radiographic information. Recently, with the active advancement of artificial intelligence (AI) research, computer-based diagnostic procedures have become increasingly powerful and effective [12].

Deep learning, a subset of machine learning, is the computational technology that has driven the recent development of AI applications; its aim is to create machines with human-like intelligence based on deep neural networks. In this review, I briefly outline the basic concepts of deep learning and describe its current applications in dental radiographic imaging.

Overview of deep learning

Deep learning is a subset of machine learning that consists of deep layers of artificial neural networks [13]. The word "deep" refers to the architecture of the model, which has multiple layers (Fig. 1). Unlike traditional machine learning, deep learning does not require manual feature extraction; features are instead learned automatically from large amounts of training data (Table 1) [14]. Different deep learning networks suit different tasks and data types. For example, recurrent neural networks are often used for natural language processing and speech recognition, whereas convolutional neural networks (CNNs) are more common in classification and computer vision tasks. CNNs are composed of three main types of layers: convolutional, pooling, and fully connected layers. Layer by layer, a CNN extracts image features such as edges and colors, then begins to recognize larger elements or shapes of objects, until it finally identifies the intended object [15]. CNNs are distinguished from other neural networks by their superior performance on pixel data and have shown promising results in computer vision and medical diagnosis [16].
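To make this layered structure concrete, below is a minimal, hedged CNN sketch in PyTorch; the layer sizes, 224 × 224 grayscale input, and two-class output are illustrative assumptions rather than details from any cited study.

```python
import torch
import torch.nn as nn

class MiniCNN(nn.Module):
    """A minimal CNN: convolutions extract local features (edges, textures),
    pooling reduces spatial resolution, and a fully connected layer classifies."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale radiograph in
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 112 -> 56
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, num_classes),        # fully connected layer
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One 224x224 grayscale image in -> two class scores out (e.g., lesion / no lesion).
logits = MiniCNN()(torch.randn(1, 1, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```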

Fig 1.

The relationship between artificial intelligence (AI), machine learning (ML), and deep learning (DL).

Comparison of machine learning and deep learning

As computing power has increased dramatically over the past 10 years, neural networks have become more sophisticated, with multiple layers and connections. This is called “deep learning” (DL).

What can deep learning do?

1. Classification

Classification is the task of determining whether a structure belongs to a particular category, which in dentistry usually means categorizing whether a lesion is present. For supervised learning, human experts must provide accurate class labels and define the reference standard (Fig. 2).

Fig 2.

Example of the classification task. The impacted mesiodens classification model denotes (A) no mesiodens, and (B) mesiodens.

ResNet and GoogLeNet are the main deep learning architectures used in classification tasks. ResNet is a CNN whose residual (skip) connections allow extremely deep networks to be trained effectively; because such deep backbones are broadly useful, it is also used for object detection and segmentation [17]. GoogLeNet, built on a 22-layer deep architecture, uses the Inception module, which incorporates multiple filter sizes within a single layer to make efficient use of computational resources [18].
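For illustration, the hedged sketch below (assuming a recent torchvision) fine-tunes an ImageNet-pretrained ResNet-18 for a two-class task such as mesiodens versus no mesiodens; the batch contents, image size, and learning rate are arbitrary placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-18 and replace its final fully
# connected layer with a two-class head (e.g., mesiodens / no mesiodens).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch; real input would be
# preprocessed radiographs replicated to three channels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(loss.item())
```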

Deep learning-based classification models can be used to diagnose various intraoral lesions and anatomical structures, including impacted mesiodens [19], dental caries [20], dental implant systems [21], and temporomandibular joint osteoarthritis [22].

2. Object detection

This task involves identifying an object at a particular location, indicated by a bounding box within the image. The deep learning model is trained on a substantial dataset of regions of interest (ROIs) manually annotated by expert dentists and then autonomously identifies and delimits ROIs in new images. An object detection model can simultaneously determine the class to which each detected object belongs; for example, it can locate each tooth separately on a periapical radiograph and determine its category (Fig. 3) [23].

Fig 3.

Example of object detection task. The model automatically detects teeth in the panoramic radiograph.

The R-CNN object detection architecture extracts approximately 2,000 region proposals from an image and feeds each of them to a convolutional neural network; the extracted features are then passed to a support vector machine that classifies whether an object is present in each candidate region [24]. Because R-CNN runs the CNN separately on each of the 2,000 candidate regions, it is computationally expensive and slow to train. Fast R-CNN and Faster R-CNN are modifications that compute a single CNN feature map shared across the proposed regions and, in Faster R-CNN, generate the proposals with a region proposal network [25,26]. The You Only Look Once (YOLO) model, in contrast, predicts objects in a single pass over the image, focusing on regions with a high probability of containing an object, and therefore runs faster; however, it has difficulty detecting small objects because of the spatial constraints of the algorithm [27].
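As a hedged illustration (assuming a recent torchvision), the sketch below runs a COCO-pretrained Faster R-CNN on a dummy image; in a dental study the prediction head would be replaced and the model fine-tuned on annotated radiographs, and the image size and score threshold here are arbitrary.

```python
import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

# COCO-pretrained two-stage detector (region proposal network + heads).
model = fasterrcnn_resnet50_fpn(
    weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT)
model.eval()

# Inference expects a list of 3xHxW tensors scaled to [0, 1]; this random
# tensor stands in for a panoramic radiograph.
image = torch.rand(3, 512, 1024)
with torch.no_grad():
    prediction = model([image])[0]

# Each detection is a bounding box plus a class label and confidence score.
for box, label, score in zip(
        prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score > 0.8:
        print(label.item(), round(score.item(), 3), box.tolist())
```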

Object detection models can be applied to various oral lesions and anatomical structures. On panoramic radiographs, deep learning models can detect and number teeth [28], detect impacted mesiodens [29], and detect cleft alveolus [30].

3. Segmentation

Segmentation classifies each pixel in an image, so the boundaries of an object can be delineated and the class to which the object belongs can be determined. Annotating a segmentation task requires tracing the outline of the target, whereas object detection only encloses the target in a rectangular bounding box. The segmentation task is therefore generally more informative, as it provides detailed information about anatomical structures (Fig. 4).

Fig 4.

Example segmentation task of dental caries. (A) An intraoral digital X-ray, (B) The segmentation model delineates dental caries in the digital X-ray.

For pixelwise segmentation, Mask R-CNN and U-Net are mainly used. Mask R-CNN is a two-stage framework that builds on the Faster R-CNN object detection model by adding a mask prediction branch in parallel with the existing detection branch [31,32]. U-Net was initially developed to segment biomedical images but can be applied to other images as well; it is an encoder-decoder architecture that uses skip connections to transfer low-level features from the encoder to the decoder [33].
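To make the encoder-decoder idea concrete, here is a deliberately tiny, hedged U-Net-style sketch in PyTorch with a single downsampling level and one skip connection; the channel counts and input size are illustrative and far smaller than the original U-Net [33].

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    """One-level U-Net: the encoder downsamples, the decoder upsamples, and a
    skip connection concatenates encoder features into the decoder."""
    def __init__(self):
        super().__init__()
        self.enc = conv_block(1, 16)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec = conv_block(32, 16)          # 16 skip + 16 upsampled channels
        self.head = nn.Conv2d(16, 1, 1)        # per-pixel lesion logit

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.down(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))  # skip connection
        return self.head(d)                    # same H, W as the input

mask_logits = TinyUNet()(torch.randn(1, 1, 256, 256))
print(mask_logits.shape)  # torch.Size([1, 1, 256, 256])
```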

On panoramic and intraoral radiographs, segmentation has been used to extract periapical lesions [34], the mandibular canal and maxillary sinus [35], and dental caries [36,37]. Image segmentation can also be performed on computed tomographic images [38,39]. Nozawa et al. [40] segmented the temporomandibular joint disc on magnetic resonance images.

Deep learning procedure

1. Data processing

In supervised learning, data annotation refers to the process of labeling structures in an image and creating specifications for a reference standard. Mohammad-Rahimi et al. [41] categorized reference standards into five subgroups.

• Gold standard: Histological assessment of dental hard tissues (caries) or soft tissues (oral mucosal lesions) is a reliable reference.

• Consensus: The diagnosis of a condition is agreed upon by a group of clinicians.

• Majority voting: An expert panel votes on the diagnosis, and the majority decision serves as the reference standard.

• Intersection: Pixels affected by a condition are labeled independently by two or more clinicians. The intersection of the segmentation masks serves as the reference standard.

• Union: The union of all clinicians' segmentations (pixels labeled by at least one clinician) serves as the reference standard (both strategies are illustrated in the sketch after this list).
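As a small illustration of the intersection and union strategies, here is a hedged NumPy sketch that combines two clinicians' binary masks; the masks and their sizes are toy examples, not data from any cited study.

```python
import numpy as np

# Two clinicians' independent pixelwise annotations of the same image
# (toy 64 x 64 boolean masks standing in for real segmentations).
annotator_a = np.zeros((64, 64), dtype=bool)
annotator_a[10:30, 10:30] = True
annotator_b = np.zeros((64, 64), dtype=bool)
annotator_b[12:32, 12:32] = True

# Intersection: only pixels both clinicians labeled enter the reference.
reference_intersection = np.logical_and(annotator_a, annotator_b)

# Union: pixels labeled by at least one clinician enter the reference.
reference_union = np.logical_or(annotator_a, annotator_b)

print(reference_intersection.sum(), reference_union.sum())  # 324 476
```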

The total dataset must be split into training, validation, and test sets. The training set is used for model training and parameter optimization, and during training the model is tuned against the validation set. To test the trained model, a separate holdout set (data not included in the training process) should be used to evaluate performance; this is the only way to assess the generalizability of the trained model.
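Below is a minimal sketch of such a split, assuming scikit-learn and an illustrative stratified 70/15/15 partition; the file names and labels are dummy placeholders.

```python
from sklearn.model_selection import train_test_split

# Dummy stand-ins for radiographs and their reference-standard labels.
images = [f"radiograph_{i}.png" for i in range(100)]
labels = [i % 2 for i in range(100)]

# First carve off 30%, then split that half-and-half into validation
# and test sets; stratification keeps class proportions comparable.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    images, labels, test_size=0.30, stratify=labels, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, stratify=y_tmp, random_state=42)

# X_test / y_test stay untouched until the final, one-time evaluation.
print(len(X_train), len(X_val), len(X_test))  # 70 15 15
```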

2. Deep learning workflow

The deep learning workflow of Mohammad-Rahimi et al. [41] is presented below. It is a modification of the work of Montagnon et al. [42], adapted to the requirements of medical and dental deep learning projects (Table 2).

Deep learning project checklists

First, the clinical application that the model is to address should be defined. Second, each project should have a project manager with basic knowledge of both the technical and clinical aspects. Third, ethical approval must be obtained and, where required, informed consent from patients. Fourth, funding for human and hardware resources must be secured. Next, data collection, model development, and model assessment are carried out by the team. Finally, the model can be deployed for clinical adoption, which requires regular monitoring; it can be retrained on additional data to update its performance.

3. Model evaluation metrics

Model performance is measured with different metrics depending on the type of AI task. For classification, accuracy, sensitivity, specificity, precision, and the F1-score are evaluated. For segmentation and object detection tasks, the Jaccard index and Dice score are used (Table 3).

Model evaluation metrics
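As a concrete rendering of the formulas in Table 3, here is a short NumPy sketch; the confusion-matrix counts and the toy masks are made-up examples, not results from any cited study.

```python
import numpy as np

def classification_metrics(tp, tn, fp, fn):
    """Classification metrics from a binary confusion matrix (Table 3)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)            # recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

def overlap_metrics(gt, ms):
    """Jaccard index and Dice score between two boolean masks
    (GT: ground truth, MS: machine segment)."""
    intersection = np.logical_and(gt, ms).sum()
    union = np.logical_or(gt, ms).sum()
    return intersection / union, 2 * intersection / (gt.sum() + ms.sum())

print(classification_metrics(tp=80, tn=90, fp=10, fn=20))

gt = np.zeros((64, 64), dtype=bool); gt[10:30, 10:30] = True  # toy masks
ms = np.zeros((64, 64), dtype=bool); ms[15:35, 15:35] = True
print(overlap_metrics(gt, ms))  # Jaccard ~ 0.391, Dice ~ 0.563
```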

Application in the field of dentistry

1. Dental caries diagnosis

Dental caries is a chronic oral disease that poses a significant threat to oral and general health [43]. Lee et al. [20] introduced a CNN algorithm based on GoogLeNet Inception v3 for caries detection on periapical radiographs; the diagnostic accuracies for premolars and molars were 89.0% and 88.0%, respectively. Lee et al. [44] used a U-Net segmentation model for caries detection, achieving an F1-score of 65.02%. Although the algorithm did not outperform the dentist group, the overall diagnostic performance of all clinicians improved significantly with the help of the trained model's output. Cantu et al. [45] used a U-Net model to segment dental caries on bitewing radiographs; their model exhibited an accuracy of 80%. As a rule of thumb in deep learning, an accuracy above 70% is considered good, and a model with an accuracy between 60% and 70% can be considered acceptable [46].

2. Orthodontics

For a successful orthodontic treatment, it is important to establish a treatment plan based on an accurate diagnosis. Cephalometric landmarks play an important role in orthodontic diagnosis.

Park et al. [47] evaluated the performance of a deep learning model that automatically locates landmarks on lateral cephalograms. The YOLOv3 model exhibited an accuracy of 80.4 - 96.2%, approximately 5% higher than that of the other method evaluated.

Yu et al. [48] constructed a multimodal CNN model based on 5,890 lateral cephalograms to test skeletal classification models. The accuracy, sensitivity, and specificity were greater than 90%.

Deep learning models can assist dentists in deciding whether to perform extraction or non-extraction treatments. These models use numerical data from cephalometric analysis and demonstrate relatively higher accuracy [49,50].

3. Endodontics

Accurate identification of the complex anatomical structures of the root canal system has a significant impact on the success of endodontic treatment. Diagnosing the presence or absence of a periapical lesion at an appropriate time and initiating treatment are important for the success of root canal treatment.

Fukuda et al. [51] used a CNN model to detect vertical root fractures on panoramic radiographs. The model correctly detected 267 of 330 vertical root fractures, with a precision of 0.93 and an F-score of 0.83, indicating that it is clinically applicable.

Hiraiwa et al. [52] used a deep-learning algorithm to detect extra roots in mandibular first molars on panoramic radiographs. CBCT images served as the reference standard, and the diagnostic accuracy of the deep learning model was 86.9%.

4. Periodontology and Implantology

Periodontitis is an inflammatory disease caused by bacteria; it leads to the formation of periodontal pockets, alveolar bone loss, and tooth loosening. Periodontal disease is among the most common oral diseases across the human lifespan and is considered the leading cause of tooth loss.

Krois et al. [53] applied deep CNN models to detect periodontal bone loss in panoramic dental radiographs using 2001 image segments. The trained CNN model demonstrated diagnostic performance comparable to that of dentists.

Chen et al. [54] devised a new deep-learning ensemble model based on CNN algorithms to predict tooth position, shape, remaining interproximal bone level, and radiographic bone loss using periapical and bitewing radiographs. The new ensemble model is based on YOLOv5, VGG16, and U-Net. The model accuracy was approximately 90% for periapical radiographs, and its performance was superior to that of dentists.

Chang et al. [55] developed a deep learning model that automatically stages periodontal disease on panoramic radiographs. The correlation between the model's results and the radiologists' diagnoses was 0.73, and the intraclass correlation coefficient was 0.91, indicating high accuracy and reliability in automatically grading the degree of alveolar bone loss and staging periodontal disease.

Kong et al. [56] trained and tested the object detection models YOLOv5 and YOLOv7 on 14,037 implant images and evaluated the mean average precision (mAP). YOLOv7 achieved a higher mAP than YOLOv5, with a best value of 0.984.

In a multicenter study on the classification of dental implant systems, the deep learning model outperformed most participating dental professionals, with an AUC of 0.954 [57].

5. Oral and maxillofacial surgery

Deep learning algorithms are used to diagnose lesions in the oral and maxillofacial region and to support surgical planning. Studies have segmented and evaluated the important anatomical structures that must be considered during surgical procedures.

Poedjiastoeti and Suebnukarn [58] built a convolutional neural network on VGG-16 and trained it to distinguish ameloblastomas from keratocystic odontogenic tumors using 500 digital panoramic images. The trained model's accuracy was 83.0%, and its diagnosis was much faster than that of oral and maxillofacial specialists.

Jung et al. [59] segmented the maxillary sinus on CBCT images into maxillary bone, air, and lesion and evaluated accuracy against human experts. They adopted a 3D U-Net model, with the Dice coefficient as the performance measure, and concluded that a deep active learning approach could reduce annotation effort and cost when training on CBCT datasets.

Conclusion

Deep learning has been widely applied in dentistry and is expected to have a significant impact on dental and oral healthcare in the future. This review provides an overview of deep learning and the recent research in dentistry. It also provides practical guidance on deep learning for dental researchers and clinicians.

Deep learning models have shown great potential for improving diagnosis in dentistry. However, because of their inherent complexity, it is difficult for humans to trace how a model arrives at a particular decision; as the complexity of deep learning models increases, their interpretability decreases. This is why deep learning models are called "black boxes." In addition, a large amount of data is required to obtain a highly accurate model, which demands considerable time and effort.

This review mainly focused on the deep learning-based analysis of dental radiographs. From a broader perspective, deep learning is not limited to image analysis. Natural language processing (medical interviews and electronic medical record data) and predictive analysis (epidemics of infectious diseases, number of patients, and prognosis of diseases) can be another focus of deep learning. There is still ample scope for deep learning in dentistry, and a concerted effort is needed to follow the latest technological advances and develop new clinical and research applications.

Notes

Conflict of Interest

The author has no potential conflicts of interest to disclose.

References

1. Tan H, Peres KG, Peres MA. Retention of Teeth and Oral Health-Related Quality of Life. J Dent Res 95:1350–1357. 2016;
2. Park HE, Song HY, Han K, Cho KH, Kim YH. Number of remaining teeth and health-related quality of life: the Korean National Health and Nutrition Examination Survey 2010-2012. Health Qual Life Outcomes 17:5. 2019;
3. Gerritsen AE, Allen PF, Witter DJ, Bronkhorst EM, Creugers NH. Tooth loss and oral health-related quality of life: a systematic review and meta-analysis. Health Qual Life Outcomes 8:126. 2010;
4. Gao L, Xu T, Huang G, Jiang S, Gu Y, Chen F. Oral microbiomes: more and more importance in oral cavity and whole body. Protein Cell 9:488–500. 2018;
5. Arweiler NB, Netuschil L. The Oral Microbiota. In : Schwiertz A, ed. Microbiota of the Human Body: Implications in Health and Disease Springer International Publishing. Cham: p. 45–60. 2016.
6. Simón-Soro A, Tomás I, Cabrera-Rubio R, Catalan MD, Nyvad B, Mira A. Microbial geography of the oral cavity. J Dent Res 92:616–621. 2013;
7. An SQ, Hull R, Metris A, Barrett P, Webb JS, Stoodley P. An in vitro biofilm model system to facilitate study of microbial communities of the human oral cavity. Lett Appl Microbiol 74:302–310. 2022;
8. Nath S, Sethi S, Bastos JL, Constante HM, Mejia G, Haag D, Kapellas K, Jamieson L. The Global Prevalence and Severity of Dental Caries among Racially Minoritized Children: A Systematic Review and Meta-Analysis. Caries Res 57:485–508. 2023;
9. Wen PYF, Chen MX, Zhong YJ, Dong QQ, Wong HM. Global Burden and Inequality of Dental Caries, 1990 to 2019. J Dent Res 101:392–399. 2022;
10. Janakiram C, Mehta A, Venkitachalam R. Prevalence of periodontal disease among adults in India: A systematic review and meta-analysis. J Oral Biol Craniofac Res 10:800–806. 2020;
11. Alawaji YN, Alshammari A, Mostafa N, Carvalho RM, Aleksejuniene J. Periodontal disease prevalence, extent, and risk associations in untreated individuals. Clin Exp Dent Res 8:380–394. 2022;
12. Chan HP, Hadjiiski LM, Samala RK. Computer-aided diagnosis in the era of deep learning. Med Phys 47:E218–E227. 2022;
13. Schwendicke F, Samek W, Krois J. Artificial Intelligence in Dentistry: Chances and Challenges. J Dent Res 99:769–774. 2020;
14. Huang CX, Wang JJ, Wang SH, Zhang YD. A review of deep learning in dentistry. Neurocomputing 554:126629. 2023;
15. Yamashita R, Nishio M, Do RKG, Togashi K. Convolutional neural networks: an overview and application in radiology. Insights Imaging 9:611–629. 2018;
16. Schwendicke F, Golla T, Dreher M, Krois J. Convolutional neural networks for dental image diagnostics: A scoping review. J Dent 91:103226. 2019;
17. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition :770–778. 2016;
18. Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A. Going deeper with convolutions. Proceedings of the IEEE conference on computer vision and pattern recognition :1–9. 2015;
19. Ahn Y, Hwang JJ, Jung YH, Jeong T, Shin J. Automated Mesiodens Classification System Using Deep Learning on Panoramic Radiographs of Children. Diagnostics (Basel) 11:1477. 2021;
20. Lee JH, Kim DH, Jeong SN, Choi SH. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. J Dent 77:106–111. 2018;
21. Sukegawa S, Yoshii K, Hara T, Yamashita K, Nakano K, Yamamoto N, Nagatsuka H, Furuki Y. Deep Neural Networks for Dental Implant System Classification. Biomolecules 10:984. 2020;
22. Jung W, Lee KE, Suh BJ, Seok H, Lee DW. Deep learning for osteoarthritis classification in temporomandibular joint. Oral Dis 29:1050–1059. 2023;
23. Chen H, Zhang K, Lyu P, Li H, Zhang L, Wu J, Lee CH. A deep learning approach to automatic teeth detection and numbering based on object detection in dental periapical films. Sci Rep 9:3840. 2019;
24. Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. Proceedings of the IEEE conference on computer vision and pattern recognition :580–587. 2014;
25. Girshick R. Fast R-CNN. Proceedings of the IEEE international conference on computer vision p. 1440–1448. 2015.
26. Ren SQ, He KM, Girshick R, Sun J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. Advances in Neural Information Processing Systems 28 (Nips 2015) :28. 2015;
27. Du J. Understanding of Object Detection Based on CNN Family and YOLO. J Phys Conf Ser 1004:012029. 2018;
28. Kim C, Kim D, Jeong H, Yoon SJ, Youm S. Automatic Tooth Detection and Numbering Using a Combination of a CNN and Heuristic Algorithm. Appl Sci 10:5624. 2020;
29. Ha EG, Jeon KJ, Kim YH, Kim JY, Han SS. Automatic detection of mesiodens on panoramic radiographs using artificial intelligence. Sci Rep 11:23061. 2021;
30. Kuwada C, Ariji Y, Kise Y, Fukuda M, Ota J, Ohara H, Kojima N, Ariji E. Detection of unilateral and bilateral cleft alveolus on panoramic radiographs using a deep-learning system. Dentomaxillofac Radiol 52:20210436. 2023;
31. Bharati P, Pramanik A. Deep Learning Techniques - R-CNN to Mask R-CNN: A Survey Springer Singapore. Singapore: p. 657–668. 2020.
32. Anantharaman R, Velazquez M, Lee Y. Utilizing Mask R-CNN for Detection and Segmentation of Oral Diseases. 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) p. 2197–2204. 2018.
33. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation Springer International Publishing. Cham: p. 234–241. 2015.
34. Song IS, Shin HK, Kang JH, Kim JE, Huh KH, Yi WJ, Lee SS, Heo MS. Deep learning-based apical lesion segmentation from panoramic radiographs. Imaging Sci Dent 52:351–357. 2022;
35. Cha JY, Yoon HI, Yeo IS, Huh KH, Han JS. Panoptic Segmentation on Panoramic Radiographs: Deep Learning-Based Segmentation of Various Structures Including Maxillary Sinus and Mandibular Canal. J Clin Med 10:2577. 2021;
36. Ying S, Wang B, Zhu H, Liu W, Huang F. Caries segmentation on tooth X-ray images with a deep network. J Dent 119:104076. 2022;
37. Bayrakdar IS, Orhan K, Akarsu S, Çelik Ö, Atasoy S, Pekince A, Yasa Y, Bilgir E, Sağlam H, Aslan AF, Odabaş A. Deep-learning approach for caries detection and segmentation on dental bitewing radiographs. Oral Radiol 38:468–479. 2022;
38. Setzer FC, Shi KJ, Zhang Z, Yan H, Yoon H, Mupparapu M, Li J. Artificial Intelligence for the Computer-aided Detection of Periapical Lesions in Cone-beam Computed Tomographic Images. J Endod 46:987–993. 2020;
39. Lahoud P, Diels S, Niclaes L, Van Aelst S, Willems H, Van Gerven A, Quirynen M, Jacobs R. Development and validation of a novel artificial intelligence driven tool for accurate mandibular canal segmentation on CBCT. J Dent 116:103891. 2022;
40. Nozawa M, Ito H, Ariji Y, Fukuda M, Igarashi C, Nishiyama M, Ogi N, Katsumata A, Kobayashi K, Ariji E. Automatic segmentation of the temporomandibular joint disc on magnetic resonance images using a deep learning technique. Dentomaxillofac Radiol 51:20210185. 2022;
41. Mohammad-Rahimi H, Rokhshad R, Bencharit S, Krois J, Schwendicke F. Deep learning: A primer for dentists and dental researchers. J Dent 130:104430. 2023;
42. Montagnon E, Cerny M, Cadrin-Chênevert A, Hamilton V, Derennes T, Ilinca A, Vandenbroucke-Menu F, Turcotte S, Kadoury S, Tang A. Deep learning workflow in radiology: a primer. Insights Imaging 11:22. 2020;
43. Cheng L, Zhang L, Yue L, Ling J, Fan M, Yang D, Huang Z, Niu Y, Liu J, Zhao J, Li Y, Guo B, Chen Z, Zhou X. Expert consensus on dental caries management. Int J Oral Sci 14:17. 2022;
44. Lee S, Oh SI, Jo J, Kang S, Shin Y, Park JW. Deep learning for early dental caries detection in bitewing radiographs. Sci Rep 11:16807. 2021;
45. Cantu AG, Gehrung S, Krois J, Chaurasia A, Rossi JG, Gaudin R, Elhennawy K, Schwendicke F. Detecting caries lesions of different radiographic extension on bitewings using deep learning. J Dent 100:103425. 2020;
46. Gao ZK, Yuan T, Zhou XJ, Ma C, Ma K, Hui P. A Deep Learning Method for Improving the Classification Accuracy of SSMVEP-Based BCI. IEEE Transactions on Circuits and Systems II: Express Briefs 67:3447–3451. 2020;
47. Park JH, Hwang HW, Moon JH, Yu Y, Kim H, Her SB, Srinivasan G, Aljanabi MNA, Donatelli RE, Lee SJ. Automated identification of cephalometric landmarks: Part 1-Comparisons between the latest deep-learning methods YOLOV3 and SSD. Angle Orthod 89:903–909. 2019;
48. Yu HJ, Cho SR, Kim MJ, Kim WH, Kim JW, Choi J. Automated Skeletal Classification with Lateral Cephalometry Based on Artificial Intelligence. J Dent Res 99:249–256. 2020;
49. Xie X, Wang L, Wang A. Artificial neural network modeling for deciding if extractions are necessary prior to orthodontic treatment. Angle Orthod 80:262–266. 2010;
50. Jung SK, Kim TW. New approach for the diagnosis of extractions with neural network machine learning. Am J Orthod Dentofacial Orthop 149:127–133. 2016;
51. Fukuda M, Inamoto K, Shibata N, Ariji Y, Yanashita Y, Kutsuna S, Nakata K, Katsumata A, Fujita H, Ariji E. Evaluation of an artificial intelligence system for detecting vertical root fracture on panoramic radiography. Oral Radiol 36:337–343. 2020;
52. Hiraiwa T, Ariji Y, Fukuda M, Kise Y, Nakata K, Katsumata A, Fujita H, Ariji E. A deep-learning artificial intelligence system for assessment of root morphology of the mandibular first molar on panoramic radiography. Dentomaxillofac Radiol 48:20180218. 2019;
53. Krois J, Ekert T, Meinhold L, Golla T, Kharbot B, Wittemeier A, Dörfer C, Schwendicke F. Deep Learning for the Radiographic Detection of Periodontal Bone Loss. Sci Rep 9:8495. 2019;
54. Chen CC, Wu YF, Aung LM, Lin JC, Ngo ST, Su JN, Lin YM, Chang WJ. Automatic recognition of teeth and periodontal bone loss measurement in digital radiographs using deep-learning artificial intelligence. J Dent Sci 18:1301–1309. 2023;
55. Chang HJ, Lee SJ, Yong TH, Shin NY, Jang BG, Kim JE, Huh KH, Lee SS, Heo MS, Choi SC, Kim TI, Yi WJ. Deep Learning Hybrid Method to Automatically Diagnose Periodontal Bone Loss and Stage Periodontitis. Sci Rep 10:7531. 2020;
56. Kong HJ, Yoo JY, Lee JH, Eom SH, Kim JH. Performance evaluation of deep learning models for the classification and identification of dental implants. J Dent Sci 18:1301–1309. 2023;
57. Lee JH, Kim YT, Lee JB, Jeong SN. A Performance Comparison between Automated Deep Learning and Dental Professionals in Classification of Dental Implant Systems from Dental Imaging: A Multi-Center Study. Diagnostics 10:910. 2020;
58. Poedjiastoeti W, Suebnukarn S. Application of Convolutional Neural Network in the Diagnosis of Jaw Tumors. Healthc Inform Res 24:236–241. 2018;
59. Jung SK, Lim HK, Lee S, Cho Y, Song IS. Deep Active Learning for Automatic Segmentation of Maxillary Sinus Lesions Using a Convolutional Neural Network. Diagnostics (Basel) 11:688. 2021;


Table 1.

Comparison of machine learning and deep learning

                       Machine learning    Deep learning
Application            Fewer               Wider
Data volume            Smaller             Larger
Data dependency        Lower               Higher
Computing resource     Lower               Higher
Execution time         Shorter             Longer

Table 2.

Deep learning project checklists

Step title Consideration
Defining project scope · Defining clinical aim
· Defining deep learning task
· Study design and data collection
Team building · Assign project manager role
· Assign clinical, data, technical roles
Ethics · Ethical approval if needed
Hardware and software · Defining hardware and software resources
Funding · Defining the source of funding
Data collection · Data source
· Data collection and curation
· Data quality control
· Defining reference standard
· Defining annotation tool and strategy
· Data annotation
· Data splitting
Model development · Data preprocessing
· Model selection
· Model training
· Hyperparameter tuning
Model assessment · Defining metrics
· Model assessment
· Reporting model outcome
Regulatory · Quality management system/auditing
· Compliance with clinical regulator
Clinical adoption · Deploy the model in production
· Clinical application validation
· Clinical practice application and monitoring

Table 3.

Model evaluation metrics

Metric                 Definition                                                        Formula
Accuracy               Ratio of correct predictions to total predictions                 (TP + TN) / (TP + TN + FP + FN)
Sensitivity (Recall)   Proportion of actual positives predicted as positive              TP / (TP + FN)
Specificity            Proportion of actual negatives predicted as negative              TN / (TN + FP)
Precision              Proportion of predicted positives that are actually positive      TP / (TP + FP)
F1-score               Harmonic mean of precision and recall                             2 × (Precision × Recall) / (Precision + Recall)
Jaccard index          Intersection of two segmentation masks divided by their union     |GT ∩ MS| / |GT ∪ MS|
Dice score             Twice the intersection of two masks divided by their total area   2 × |GT ∩ MS| / (|GT| + |MS|)

TP: true positive; TN: true negative; FP: false positive; FN: false negative; GT: ground truth; MS: machine segment.