Show simple item record

dc.contributor.author	Cai, Y.
dc.contributor.author	Yu, J.G.
dc.contributor.author	Chen, Y.
dc.contributor.author	Liu, C.
dc.contributor.author	Xiao, L.
dc.contributor.author	Grais, E.M.
dc.contributor.author	Zhao, Fei
dc.contributor.author	Lan, L.
dc.contributor.author	Zeng, S.
dc.contributor.author	Zeng, J.
dc.contributor.author	Wu, M.
dc.date.accessioned	2021-01-28T11:33:44Z
dc.date.available	2021-01-28T11:33:44Z
dc.date.issued	2021-01-21
dc.identifier.citation	Cai, Y., Yu, J.G., Chen, Y., Liu, C., Xiao, L., Grais, E.M., Zhao, F., Lan, L., Zeng, S., Zeng, J. and Wu, M. (2021) 'Investigating the use of a two-stage attention-aware convolutional neural network for the automated diagnosis of otitis media from tympanic membrane images: a prediction model development and validation study', BMJ Open, 11(1), p. e041139.	en_US
dc.identifier.issn	2044-6055
dc.identifier.uri	http://hdl.handle.net/10369/11273
dc.description	Article published in BMJ Open (Ear, nose and throat/otolaryngology), available open access at http://dx.doi.org/10.1136/bmjopen-2020-041139	en_US
dc.description.abstract	Objectives: This study investigated the usefulness and performance of a two-stage attention-aware convolutional neural network (CNN) for the automated diagnosis of otitis media from tympanic membrane (TM) images. Design: A classification model development and validation study in ears with otitis media based on otoscopic TM images. Two commonly used CNNs were trained and evaluated on the dataset. On the basis of a Class Activation Map (CAM), a two-stage classification pipeline was developed to improve accuracy and reliability, and to simulate an expert reading the TM images. Setting and participants: This is a retrospective study using otoendoscopic images obtained from the Department of Otorhinolaryngology in China. A dataset was generated with 6066 otoscopic images from 2022 participants comprising four kinds of TM images: normal eardrum, otitis media with effusion (OME) and two stages of chronic suppurative otitis media (CSOM). Results: The proposed method achieved an overall accuracy of 93.4% using ResNet50 as the backbone network in a threefold cross-validation. The F1 score was 94.3% for normal images and 96.8% for OME. There was a small difference between the active and inactive stages of CSOM, which achieved F1 scores of 91.7% and 82.4%, respectively. The results demonstrate a classification performance equivalent to the diagnostic level of an associate professor in otolaryngology. Conclusions: CNNs provide a useful and effective tool for the automated classification of TM images. In addition, a weakly supervised method such as CAM can help the network focus on discriminative parts of the image and improve performance with a relatively small database. This two-stage method is beneficial for improving the accuracy of otitis media diagnosis by junior otolaryngologists and physicians in other disciplines.	en_US
dc.description.sponsorship	This work was supported by the Key R&D Programme of Guangdong Province, China (Grant No. 2018B030339001), the medical artificial intelligence project of Sun Yat-Sen Memorial Hospital (YXYGZN201904) and the National Natural Science Foundation of China (Grant No. 81570935).	en_US
dc.language.iso	en	en_US
dc.publisher	BMJ	en_US
dc.relation.ispartofseries	BMJ Open;
dc.title	Investigating the use of a two-stage attention-aware convolutional neural network for the automated diagnosis of otitis media from tympanic membrane images: a prediction model development and validation study	en_US
dc.type	Article	en_US
dc.identifier.doi	http://dx.doi.org/10.1136/bmjopen-2020-041139
dcterms.dateAccepted	2020-12-28
rioxxterms.version	VoR	en_US
rioxxterms.licenseref.uri	http://creativecommons.org/licenses/by-nc/4.0/	en_US
rioxxterms.licenseref.startdate	2021-01-28
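The two-stage pipeline described in the abstract uses a Class Activation Map (CAM) to locate the discriminative region of the tympanic membrane before a second-stage classification of that region. A minimal sketch of this idea, assuming a (C, H, W) feature tensor from the last convolutional layer and per-class classifier weights; the function names, shapes and threshold below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def class_activation_map(features, fc_weights):
    """Compute a Class Activation Map for one class.

    features:   (C, H, W) feature maps from the last conv layer
    fc_weights: (C,) classifier weights for the target class
    Returns an (H, W) map, min-max normalised to [0, 1].
    """
    # Weighted sum of feature maps over the channel axis -> (H, W)
    cam = np.tensordot(fc_weights, features, axes=([0], [0]))
    cam -= cam.min()
    peak = cam.max()
    if peak > 0:
        cam /= peak
    return cam

def attention_crop(image, cam, threshold=0.5):
    """Crop the image to the bounding box of high-activation CAM pixels,
    mimicking the second stage's 'zoom in' on the discriminative region.
    Assumes the CAM has already been upsampled to the image resolution."""
    ys, xs = np.where(cam >= threshold)
    if ys.size == 0:  # nothing above threshold: keep the full image
        return image
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

In the setting the abstract describes, stage one would classify the full otoscopic image and yield the CAM, and the cropped high-activation region would then be passed to the stage-two classifier.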

