口腔医学小站 (Oral Medicine Station)

Dental literature reading and resource sharing

Detection of the separated root canal instrument on panoramic radiograph: A comparison of LSTM and CNN deep learning methods



Abstract
Objectives:
A separated endodontic instrument is one of the challenging complications of root canal treatment. The purpose of this study was to compare two deep learning methods, convolutional neural network (CNN) and long short-term memory (LSTM), for detecting separated endodontic instruments on dental radiographs.
Methods:
Panoramic radiographs from the hospital archive were retrospectively evaluated by two dentists. A total of 915 teeth were included, of which 417 were labeled as “separated instrument” and 498 as “healthy root canal treatment”. Six deep learning models were trained: four varieties of CNN (Raw-CNN, Augmented-CNN, Gabor filtered-CNN, Gabor-filtered-augmented-CNN) and two varieties of LSTM (Raw-LSTM, Augmented-LSTM), based on different feature extraction methods and with or without a data augmentation procedure. The diagnostic performances of the models were compared in terms of accuracy, sensitivity, specificity, and positive and negative predictive value using ten-fold cross-validation. McNemar’s test was employed to determine whether the models’ performances differed to a statistically significant degree. Receiver operating characteristic (ROC) curves were developed to assess the performance of the most promising model (the Gabor filtered-CNN model) by exploring different cut-off levels in its last decision layer.
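The abstract does not give the Gabor parameters the authors used; as a rough sketch of what "Gabor filtered" preprocessing can look like, the snippet below builds a small bank of real-valued Gabor kernels (all parameter values here are illustrative assumptions, not the study's settings) and stacks the filtered responses, which could then serve as extra input channels for a CNN:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lambd=8.0, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel: a Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lambd + psi)
    return envelope * carrier

def filter2d_valid(image, kernel):
    """'Valid'-mode 2-D filtering via a sliding-window view (no SciPy needed)."""
    windows = sliding_window_view(image, kernel.shape)
    return np.einsum('ijkl,kl->ij', windows, kernel)

def gabor_bank_features(image, n_orientations=4):
    """Stack Gabor responses at evenly spaced orientations in [0, pi)."""
    thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
    return np.stack([filter2d_valid(image, gabor_kernel(theta=t)) for t in thetas])
```

For a 64 × 64 input and 15 × 15 kernels this yields a (4, 50, 50) feature stack; in practice one would pad or use same-mode filtering to keep the spatial size.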
Results:
The Gabor filtered-CNN model showed the highest accuracy (84.37 ± 2.79), sensitivity (81.26 ± 4.79), positive predictive value (84.16 ± 3.35) and negative predictive value (84.62 ± 4.56). McNemar’s test showed that the performance of the Gabor filtered-CNN model differed significantly from both LSTM models (p < 0.01).
Conclusions:
Both CNN and LSTM models achieved a high predictive performance in distinguishing separated endodontic instruments on radiographs. The Gabor filtered-CNN model without data augmentation gave the best predictive performance.
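The pairwise model comparison relies on McNemar's test over paired predictions. A minimal sketch of the continuity-corrected version, assuming both models are evaluated on the same labeled teeth (the helper name and data layout are illustrative, not from the paper):

```python
import math

def mcnemar_test(preds_a, preds_b, labels):
    """Continuity-corrected McNemar's test on paired classifier decisions.

    b = cases model A got right and model B got wrong; c = the reverse.
    Returns the chi-square statistic and its p-value (1 degree of freedom).
    """
    b = sum(1 for pa, pb, y in zip(preds_a, preds_b, labels) if pa == y and pb != y)
    c = sum(1 for pa, pb, y in zip(preds_a, preds_b, labels) if pa != y and pb == y)
    if b + c == 0:
        return 0.0, 1.0                              # the models never disagree
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    p_value = math.erfc(math.sqrt(stat / 2))         # chi-square(1 df) tail prob.
    return stat, p_value
```

Only the discordant pairs (b and c) enter the statistic, which is why McNemar's test is the standard choice for comparing two classifiers on the same test cases.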
Dentomaxillofacial Radiology (2023) 0, 20220209. © 2023 The Authors. Published by the British Institute of Radiology. birpublications.org/dmfr

RESEARCH ARTICLE
Detection of the separated root canal instrument on panoramic radiograph: a comparison of LSTM and CNN deep learning methods

1Cansu Buyuk, 2Burcin Arican Alpay and 3Fusun Er
1Department of Dentomaxillofacial Radiology, Istanbul Okan University, Faculty of Dentistry, İstanbul, Turkey; 2Department of Endodontics, Bahcesehir University, Faculty of Dentistry, İstanbul, Turkey; 3Information Systems Engineering, Piri Reis University, Faculty of Engineering, İstanbul, Turkey

Cite this article as: Buyuk C, Arican Alpay B, Er F. Detection of the separated root canal instrument on panoramic radiograph: a comparison of LSTM and CNN deep learning methods. Dentomaxillofac Radiol (2023). doi: 10.1259/dmfr.20220209

Keywords: Artificial intelligence; CNN; deep learning; LSTM; panoramic radiograph; separated endodontic instruments

Introduction
The common causes of failure in endodontic treatments are the persistence of bacteria, inadequate root canal filling, poor obturation quality, microleakage, and complications of instrumentation such as separated endodontic instruments (SEI).1 The incidence of SEI in the root canal was reported to range from 0.4 to 7.4%.2 In teeth with a SEI, the success of retreatment …

Correspondence to: Cansu Buyuk, E-mail: cansubuyuk@yahoo.com
Received 14 June 2022; revised 08 December 2022; accepted 05 January 2023; published online 24 January 2023
Similar research
Objectives: Dentists rely on radiographs in making treatment decisions every day. However, if inattentively read, radiographs can lead to over- or under-treatment. In this poster we present the results of a pilot study on automated detection of apical radiolucencies using deep learning software. Our main objectives are: assessing the diagnostic performance of a computer-aided deep learning detection system at detecting apical radiolucencies (AR) on periapical radiographs; evaluating the diagnostic performance of three board-certified experts in OMF Radiology and Endodontics at the same task; and comparing the performance metrics of the deep learning software and the expert observers.
Methods: The UNC Adams School of Dentistry OMFR CBCT referral database was searched for all cone beam volumes acquired for endodontic purposes from 08/25/2014 to 3/24/2019. The search criteria included endodontically treated and untreated teeth that exhibited apical radiolucencies on CBCT and had a diagnostically acceptable intraoral image of the same site, acquired no more than 6 months apart. All patients were above 18 years of age. The search yielded 184 positive intraoral images, which were de-identified and uploaded to the diagnostic software. Positive images were used to create a training set (54 images) and a testing set (130 images); the two sets did not overlap. Both sets were annotated for AR using the ground-truth cone beam volume. An additional 132 intraoral images without apical radiolucencies were uploaded to serve as controls. Seventy images were randomly selected from the negative and positive testing sets for the purpose of this pilot. Three expert observers were calibrated and asked to view and annotate the pilot testing set independently, using a 1–5 Likert scale to indicate level of confidence.
Results: The standalone software performance (by tooth) was as follows: sensitivity 93%, specificity 88%, ROC-AUC 94% (95% CI: 89–98%). The experts' combined performance was: sensitivity 87%, specificity 97%, ROC-AUC 93% (95% CI: 88–98%). Notably, sensitivity varied by location of the radiolucency in the arch; the software's performance metrics were lowest in the maxillary posterior region.
Conclusions: Using a limited testing dataset, AI provided performance comparable to expert observers for this task. Further AI training is necessary to increase the sensitivity and specificity of AR detection in the posterior maxillary region. In dentistry, multiple AI diagnostic software tools have emerged recently for the assessment of dental caries. Additionally, neural networks and deep learning algorithms have been utilized in predicting dental pain, numbering and classifying teeth, deciding whether extractions are necessary prior to orthodontic treatment, and evaluating factors affecting the diagnosis and final treatment outcomes of impacted maxillary canines. The need for AI and automation is largely linked to the anticipated increase in accuracy and speed, leading to improved workflow and efficiency of resources. (1,2)
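Sensitivity, specificity, PPV and NPV as reported in these studies all derive from the same four confusion-matrix counts. A minimal reference implementation of the standard definitions (the counts in the usage note below are made up for illustration, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-accuracy metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # recall on diseased cases
        "specificity": tn / (tn + fp),   # recall on healthy cases
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

For example, `diagnostic_metrics(tp=93, fp=12, tn=88, fn=7)` yields a sensitivity of 0.93 and a specificity of 0.88. Unlike sensitivity and specificity, PPV and NPV also depend on disease prevalence in the test set, which is worth keeping in mind when comparing values across studies.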
While the large number of archived digital images makes it easy for radiology to provide data for artificial intelligence (AI) evaluation, AI algorithms are increasingly applied to disease detection. The aim of this study was to perform a diagnostic evaluation of periapical radiographs with an AI model based on convolutional neural networks (CNNs). The dataset comprised 1169 adult periapical radiographs, which were labelled in CranioCatch annotation software. Deep learning was performed using the U-Net model implemented with the PyTorch library. The deep learning-based AI models improved the success rate of segmenting carious lesions, crowns, dental pulp, dental fillings, periapical lesions, and root canal fillings in periapical images. Sensitivity, precision, and F1 score were, respectively, 0.82, 0.82, and 0.82 for carious lesions; 1, 1, and 1 for crowns; 0.97, 0.87, and 0.92 for dental pulp; 0.95, 0.95, and 0.95 for fillings; 0.92, 0.85, and 0.88 for periapical lesions; and 1, 0.96, and 0.98 for root canal fillings. The success of AI algorithms in evaluating periapical radiographs is encouraging and promising for their use in routine clinical processes as a clinical decision support system.
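The per-category sensitivity, precision and F1 scores above come from comparing predicted segmentation masks against ground-truth annotations. A minimal sketch of how such scores can be computed pixel-wise for one binary mask pair (the study's exact evaluation protocol is not specified here, so treat this as a generic illustration):

```python
import numpy as np

def segmentation_scores(pred_mask, true_mask):
    """Pixel-wise sensitivity (recall), precision and F1 for binary masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    tp = np.logical_and(pred, true).sum()      # pixels both call positive
    fp = np.logical_and(pred, ~true).sum()     # predicted but not annotated
    fn = np.logical_and(~pred, true).sum()     # annotated but missed
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return sensitivity, precision, f1
```

F1 is the harmonic mean of precision and sensitivity, so a model cannot score well on it by over- or under-segmenting alone.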
  • January 2023
  • Computational and Mathematical Methods in Medicine
Aim:
This comprehensive review is aimed at evaluating the diagnostic and prognostic accuracy of artificial intelligence in endodontic dentistry.

Introduction:
Artificial intelligence (AI) is a relatively new technology with widespread use in dentistry. AI technologies have primarily been used in dentistry to diagnose dental diseases, plan treatment, make clinical decisions, and predict prognosis. AI models like convolutional neural networks (CNN) and artificial neural networks (ANN) have been used in endodontics to study root canal system anatomy, determine working length measurements, detect periapical lesions and root fractures, predict the success of retreatment procedures, and predict the viability of dental pulp stem cells.

Methodology:
The literature was searched in electronic databases such as Google Scholar, Medline, PubMed, Embase, Web of Science, and Scopus, covering publications from the last four decades (January 1980 to September 15, 2021), using keywords such as artificial intelligence, machine learning, deep learning, application, endodontics, and dentistry.

Results:
The preliminary search yielded 2560 articles relevant to the paper’s purpose. A total of 88 articles met the eligibility criteria. The majority of research on AI application in endodontics has concentrated on tracing the apical foramen, verifying the working length, projection of periapical pathologies, root morphologies, retreatment predictions, and detecting vertical root fractures.

Conclusion:
In endodontics, AI displayed accuracy in diagnostic and prognostic evaluations. The use of AI can help enhance treatment planning, which in turn can increase the success rate of endodontic treatment outcomes. AI is used extensively in endodontics and could help in clinical applications such as detecting root fractures and periapical pathologies, determining working length, tracing the apical foramen, assessing root morphology, and predicting disease.

Technological advancements in health sciences have led to enormous developments in artificial intelligence (AI) models designed for application in health sectors. This article aimed to report on the application and performance of AI models designed for use in endodontics. Renowned online databases, primarily PubMed, Scopus, Web of Science, Embase, and Cochrane, and secondarily Google Scholar and the Saudi Digital Library, were accessed for articles relevant to the research question published from 1 January 2000 to 30 November 2022. In the last 5 years, there has been a significant increase in the number of articles reporting on AI models applied in endodontics. AI models have been developed for determining working length, vertical root fractures, root canal failures, root morphology, and thrust force and torque in canal preparation; detecting pulpal diseases; detecting and diagnosing periapical lesions; predicting postoperative pain, curative effect after treatment, and case difficulty; and segmenting pulp cavities. Most of the included studies (n = 21) were developed using convolutional neural networks. Among the included studies, the datasets used were mostly cone-beam computed tomography images, followed by periapical radiographs and panoramic radiographs. Thirty-seven original research articles that fulfilled the eligibility criteria were critically assessed in accordance with QUADAS-2 guidelines, which revealed a low risk of bias in the patient selection domain in most of the studies (risk of bias: 90%; applicability: 70%). The certainty of the evidence was assessed using the GRADE approach. These models can be used as supplementary tools in clinical practice to expedite clinical decision-making and enhance the treatment modality and clinical operation.
The aim of this study was to develop a deep learning model to automatically detect and segment unobturated mesial buccal 2 (MB2) canals in endodontically obturated maxillary molars depicted in CBCT studies. Fifty-seven deidentified CBCT studies of maxillary molars with clinically confirmed unobturated MB2 canals were retrieved from a dental institution radiology database. One hundred and two maxillary molar roots with and without unobturated MB2 canals were segmented using ITK-SNAP. The data were split into training and testing samples designated to train and evaluate, respectively, the performance of a convolutional neural network (CNN), U-Net. The detection performance on the testing set revealed a sensitivity of 0.8, a specificity of 1, a high PPV of 1, and an NPV of 0.83, along with an accuracy of 0.9. The segmentation performance for unobturated MB2 canals, assessed using the custom metric, rendered a mean value of 0.3018 for the testing set. The current AI algorithm has the potential to identify obturated and unobturated canals in endodontically treated teeth. However, the algorithm is still somewhat affected by metallic artifacts, variations in canal calcifications, and the applied configuration. Thus, further development is needed to improve the algorithm and to validate its accuracy using external validation datasets.
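The abstract does not describe the study's custom segmentation metric; a commonly used standard for judging mask overlap is the Dice coefficient, sketched below (this is a stand-in for illustration, not the paper's metric):

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|).

    eps guards against division by zero when both masks are empty.
    """
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)
```

Dice ranges from 0 (no overlap) to 1 (identical masks) and is closely related to F1: for binary masks the two coincide, which makes it a convenient single number for segmentation quality.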
