Detecting the Abuse of Generative AI in Cybersecurity Contexts: Challenges, Frameworks, and Solutions

Gopalakrishna Karamchand

Abstract

The rapid rise of generative artificial intelligence (AI) models such as large language models (LLMs), generative adversarial networks (GANs), and AI-based code generators has introduced unprecedented capabilities across many fields. However, the same models are increasingly being put to malicious use, such as generating highly personalized phishing emails, deepfake content, social engineering scripts, and AI-generated malware. Legacy cybersecurity solutions are poorly equipped to identify or prevent the threats posed by generative AI tools, leaving an urgent and significant gap in security infrastructure. This paper proposes a conceptual framework for detecting generative AI abuse in cybersecurity contexts. The framework combines behavioral analysis, AI-based content fingerprinting, and adversarial prompt detection to identify misuse patterns. We also investigate the challenge of distinguishing human-authored from AI-generated malicious artifacts, and we assess the implications for detection effectiveness and ethical monitoring. Our results highlight the need for adaptive, AI-aware cybersecurity defenses that can stay ahead of evolving threats in the era of generative AI.
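To make the content-fingerprinting component concrete, the sketch below illustrates one commonly cited stylometric signal, sentence-length "burstiness": human prose tends to mix short and long sentences, while LLM output is often more uniform. This is a minimal, purely illustrative heuristic and not the framework's actual implementation; the function names (`burstiness_score`, `looks_ai_generated`) and the threshold value are hypothetical.

```python
import re
import statistics


def burstiness_score(text: str) -> float:
    """Coefficient of variation of sentence lengths (in words).

    Low variation (low burstiness) is a weak indicator of machine-generated
    text. Illustrative only; a real detector would calibrate on labeled data.
    """
    # Naive sentence split on terminal punctuation.
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # Not enough sentences to measure variation.
    mean = statistics.mean(lengths)
    return statistics.stdev(lengths) / mean if mean else 0.0


def looks_ai_generated(text: str, threshold: float = 0.35) -> bool:
    # Hypothetical threshold chosen for illustration; in practice it would be
    # tuned on labeled human vs. AI-generated samples.
    return burstiness_score(text) < threshold
```

In practice, a single stylometric feature like this would serve only as one signal among many; the framework's other components, behavioral analysis and adversarial prompt detection, would supply complementary evidence before an artifact is flagged as AI-generated abuse.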

How to Cite

Karamchand, G. (2025). Detecting the Abuse of Generative AI in Cybersecurity Contexts: Challenges, Frameworks, and Solutions. Journal of Data Analysis and Critical Management, 1(03), 1–12. https://jdacm.com/index.php/jdacm/article/view/34