Ambritta P., N. (ORCID: https://orcid.org/0000-0001-6310-9378), Mahalle, P. N. (ORCID: https://orcid.org/0000-0001-5474-6826), Patil, R. V. (ORCID: https://orcid.org/0000-0003-1073-4297), Dey, N. (ORCID: https://orcid.org/0000-0001-8437-498X), Crespo, R. G. (ORCID: https://orcid.org/0000-0001-5541-6319) and Sherratt, R. S. (ORCID: https://orcid.org/0000-0001-7899-4445) (2023) Explainable AI for human-centric ethical IoT systems. IEEE Transactions on Computational Social Systems. ISSN 2329-924X. doi: 10.1109/tcss.2023.3330738
Abstract/Summary
The current era witnesses a notable transition from an information-centric society to a human-centric one, aiming to strike a balance between economic advancement and upholding the societal and fundamental needs of humanity. The Internet of Things (IoT) and artificial intelligence (AI) are undeniably key players in realizing a human-centric society. However, for society and individuals to benefit from advanced technology, it is important to gain the trust of human users by guaranteeing ethical aspects of the system such as safety, privacy, nondiscrimination, and legality. Incorporating explainable AI (XAI) to establish explainability and transparency supports the development of trust among stakeholders, including the system's developers. This article presents the general classes of vulnerabilities that affect IoT systems and directs the reader's attention toward intrusion detection systems (IDSs). Existing state-of-the-art IDSs are discussed, and an attack model capturing the possible attacks on the system is presented. Since our focus is on providing explanations for IDS predictions, we first present a consolidated study of commonly used explanation methods along with their advantages and disadvantages. We then present a high-level, human-inclusive XAI framework for the IoT that describes the participating components and their roles. We also outline a few approaches to upholding safety and privacy using XAI that we will take up in future work. Finally, the article presents guidelines for choosing a suitable XAI method and a taxonomy of explanation evaluation mechanisms, an important yet less-visited aspect of explainable AI.
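For context, a minimal sketch of one commonly used post hoc explanation approach of the kind the article surveys, applied to an IDS-style classifier: permutation feature importance over synthetic traffic features. The feature names, data, and model choice here are illustrative assumptions, not the paper's own method or dataset.

```python
# Illustrative sketch (not from the article): explaining an IDS-style
# classifier with permutation feature importance. All features and data
# are synthetic placeholders for typical network-traffic statistics.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["pkt_rate", "bytes_per_flow", "dst_port_entropy", "syn_ratio"]

# Synthetic flows; label 1 ("attack") correlates with a high SYN ratio.
X = rng.normal(size=(1000, 4))
y = (X[:, 3] + 0.1 * rng.normal(size=1000) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Global post hoc explanation: how much shuffling each feature degrades
# detection accuracy on held-out traffic.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean in sorted(zip(feature_names, result.importances_mean),
                         key=lambda t: -t[1]):
    print(f"{name}: {mean:.3f}")
```

On this synthetic data the ranking should single out syn_ratio, mirroring how a global explanation method lets stakeholders check that an IDS attends to plausible attack indicators rather than spurious features.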
| Item Type | Article |
| URI | https://reading-clone.eprints-hosting.org/id/eprint/114306 |
| Identification Number/DOI | 10.1109/tcss.2023.3330738 |
| Refereed | Yes |
| Divisions | Life Sciences > School of Biological Sciences > Biomedical Sciences; Life Sciences > School of Biological Sciences > Department of Bio-Engineering |
| Uncontrolled Keywords | Human-Computer Interaction, Social Sciences (miscellaneous), Modeling and Simulation |
| Publisher | Institute of Electrical and Electronics Engineers (IEEE) |