Applied Ethical and Explainable AI in Adversarial Deepfake Detection: From Theory to Real-World Systems

Sumit Lad

Abstract

Deepfake technology is advancing rapidly, giving rise to growing privacy, trust, and security risks. It can be exploited for malicious purposes such as manipulating public opinion and spreading misinformation through social media. Adversarial machine learning techniques offer a strong defense for detecting and flagging deepfake content, but a key obstacle to their practical use is that many deepfake detection models operate as black boxes, with little transparency or accountability in their decisions. This paper proposes a framework and guidelines for integrating ethical AI and explainable AI (XAI), specifically techniques such as SHAP and LIME, into deepfake detection systems to make them more transparent and trustworthy. The proposed guidelines aim to make detection systems accountable and explainable so that they can be deployed seamlessly in the real world.
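To illustrate the kind of integration the abstract describes, the following is a minimal sketch (not the paper's actual framework) of applying LIME to a single prediction from a deepfake face classifier. The `DeepfakeDetector` class, the file path, and the two-column probability output are assumptions standing in for a real trained model and its preprocessing.

```python
# Illustrative sketch only: explaining one deepfake-detector prediction with LIME.
import numpy as np
from lime import lime_image
from skimage.io import imread


class DeepfakeDetector:
    """Placeholder for a trained detector exposing a batch probability interface."""

    def predict_proba(self, images: np.ndarray) -> np.ndarray:
        # Assumed output: one row per image with columns [P(real), P(fake)].
        raise NotImplementedError("Load a trained deepfake detection model here.")


detector = DeepfakeDetector()
explainer = lime_image.LimeImageExplainer()

face = imread("suspect_frame.png")  # hypothetical H x W x 3 face crop
explanation = explainer.explain_instance(
    face,
    detector.predict_proba,  # LIME perturbs superpixels and queries the model
    top_labels=1,            # explain only the most likely class
    num_samples=1000,        # perturbed samples used to fit the local surrogate
)

# Superpixels that most pushed the prediction toward the predicted class,
# suitable for overlaying on the frame shown to a human reviewer.
image, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
```

A SHAP-based explanation could be substituted in the same position; the point is that the explanation artifact (here, the highlighted superpixels) accompanies the detection verdict rather than replacing it.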

Article Details

How to Cite
Lad, S. (2024). Applied Ethical and Explainable AI in Adversarial Deepfake Detection: From Theory to Real-World Systems. Journal of Artificial Intelligence General Science (JAIGS), ISSN: 3006-4023, 6(1), 126–137. https://doi.org/10.60087/jaigs.v6i1.236