Dhar, T., Dey, N., Borra, S. and Sherratt, R. S. (ORCID: https://orcid.org/0000-0001-7899-4445) (2023) Challenges of deep learning in medical image analysis – improving explainability and trust. IEEE Transactions on Technology and Society, 4 (1), pp. 68-75. ISSN 2637-6415. doi: 10.1109/TTS.2023.3234203
Abstract/Summary
Deep learning has revolutionized disease detection and is helping the healthcare sector break barriers in accuracy and robustness, enabling efficient and reliable computer-aided diagnostic systems. Deep learning techniques power automated, AI-based tools that require minimal human supervision to perform tasks across medical diagnosis, including the detection of fractures, tumors, and internal hemorrhage, as well as preoperative planning and intra-operative guidance. However, deep learning also faces major challenges in this flourishing healthcare domain. This paper surveys the principal challenges that deep learning researchers and engineers face, particularly in medical image diagnosis: the unavailability of balanced, annotated medical image data; adversarial attacks on deep neural networks and architectures arising from noisy medical image data; a lack of trust among users and patients; and ethical and privacy issues related to medical data. The study explores the prospects for AI autonomy in healthcare by addressing society's concerns about trust in autonomous intelligent systems.
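The imbalanced-data challenge named in the abstract is commonly mitigated by weighting the training loss inversely to class frequency, so that rare pathological cases are not drowned out by abundant healthy scans. The sketch below is a minimal illustration of inverse-frequency class weighting; the label counts (950 healthy vs. 50 tumor scans) are hypothetical and not drawn from the paper.

```python
import numpy as np

# Hypothetical label distribution for a medical image dataset:
# 950 "healthy" scans (class 0) vs. 50 "tumor" scans (class 1),
# i.e. a 19:1 class imbalance.
labels = np.array([0] * 950 + [1] * 50)

# Inverse-frequency class weights: rarer classes receive larger
# weights, so a weighted loss penalizes misclassified minority
# cases more heavily than majority cases.
classes, counts = np.unique(labels, return_counts=True)
weights = len(labels) / (len(classes) * counts)

print(dict(zip(classes.tolist(), weights.round(3).tolist())))
# → {0: 0.526, 1: 10.0}
```

These weights would typically be passed to a framework's loss function (for example, the `weight` argument of a cross-entropy loss) rather than used directly; resampling and data augmentation are complementary remedies for the same imbalance problem.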
| Item Type | Article |
| URI | https://reading-clone.eprints-hosting.org/id/eprint/109789 |
| Identification Number/DOI | 10.1109/TTS.2023.3234203 |
| Refereed | Yes |
| Divisions | Life Sciences > School of Biological Sciences > Biomedical Sciences; Life Sciences > School of Biological Sciences > Department of Bio-Engineering |
| Publisher | IEEE |