Journal of Computer Science and Information Technology  
ISSN: 3080-3586  
DOI: 10.61424/jcsit  
JCSIT  
BLUEMARK PUBLISHERS  
Journal Homepage: www.bluemarkpublishers.com/index.php/JCSIT  
| RESEARCH ARTICLE  
Cybersecurity Threat Detection Using AI: A Systematic Review of Approaches  
Vasavi Yeka  
AT&T Network Systems, New Jersey, United States  
Corresponding Author: Vasavi Yeka, E-mail: ama25@yahoo.com  
| ABSTRACT  
The rapid growth of digital technologies and interconnected systems has significantly increased the frequency and  
complexity of cybersecurity threats. Traditional threat detection methods, which often rely on predefined rules and  
signature-based techniques, have become insufficient in addressing sophisticated and evolving cyberattacks. As a  
result, Artificial Intelligence (AI) has emerged as a powerful tool for enhancing cybersecurity threat detection  
through automated, adaptive, and intelligent analysis of malicious activities. This systematic review explores recent  
AI-driven approaches used in cybersecurity threat detection, focusing on machine learning, deep learning, and  
hybrid models applied across diverse threat environments. Following established systematic review protocols,  
relevant studies were identified, screened, and analyzed to determine prevailing methodologies, application  
domains, datasets, and evaluation metrics. The findings reveal that supervised and unsupervised machine learning  
algorithms, such as support vector machines, random forests, and clustering techniques, are widely employed for  
intrusion detection, malware classification, and anomaly detection. Deep learning architectures, including  
convolutional and recurrent neural networks, demonstrate improved performance in detecting complex attack  
patterns in large-scale network traffic. However, challenges such as data imbalance, model interpretability,  
adversarial attacks, and real-time deployment constraints remain significant barriers to practical implementation.  
The review highlights emerging trends such as explainable AI, federated learning, and reinforcement learning as  
promising directions for future research. Overall, this study provides a comprehensive overview of AI-based  
cybersecurity threat detection strategies and offers insights to guide researchers and practitioners in developing  
more robust, scalable, and intelligent defense systems.  
| KEYWORDS  
Cybersecurity, Artificial Intelligence, Threat Detection, Machine Learning, Deep Learning, Intrusion Detection,  
Systematic Review  
| ARTICLE INFORMATION  
ACCEPTED: 21 December 2025  
PUBLISHED: 07 February 2026  
DOI: 10.61424/jcsit.v3.i1.701  
1. Introduction  
The rapid expansion of digital technologies and the growing dependence on interconnected systems have  
significantly increased the complexity of cybersecurity challenges worldwide. Modern organizations rely heavily on  
cloud computing, Internet of Things (IoT) devices, mobile networks, and digital infrastructures to support critical  
operations. While these advancements have improved efficiency and connectivity, they have also created new  
opportunities for cybercriminals to exploit vulnerabilities. As a result, cyberattacks such as malware infections,  
ransomware, phishing, insider threats, and advanced persistent threats (APTs) have become more frequent,  
sophisticated, and damaging (Dash et al., 2021).  
Copyright: © 2026 the Author(s). This article is an open access article distributed under the terms and conditions of the Creative Commons  
Traditional cybersecurity threat detection methods, including rule-based intrusion detection systems and signature-  
based antivirus tools, have been widely used for decades. However, these conventional approaches often struggle  
to keep pace with evolving attack patterns. Signature-based techniques, for example, are effective only against  
known threats and are limited in detecting zero-day attacks or novel malicious behaviors. Similarly, rule-based  
systems require continuous manual updates and may generate high false-positive rates, reducing their effectiveness  
in real-time security environments (Markevych, 2023). These limitations highlight the urgent need for more adaptive  
and intelligent threat detection mechanisms.  
Artificial intelligence (AI) has emerged as a transformative solution in addressing modern cybersecurity threats. AI-  
driven techniques, particularly machine learning (ML) and deep learning (DL), enable systems to automatically learn  
patterns from vast volumes of network traffic, system logs, and user behavior data. Unlike traditional methods, AI-  
based threat detection models can identify anomalies, detect previously unseen attacks, and improve performance  
over time through continuous learning (Yaseen, 2023). Applications of AI in cybersecurity include intrusion  
detection, malware classification, phishing detection, botnet identification, and predictive risk analysis.  
In recent years, research into AI-enabled cybersecurity has expanded rapidly, producing a wide range of  
approaches, algorithms, and frameworks. Studies have explored supervised learning models such as support vector  
machines and random forests, unsupervised anomaly detection techniques, and deep learning architectures  
including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformer-based models.  
Despite these advancements, challenges remain in areas such as model interpretability, data imbalance, adversarial  
attacks against AI systems, privacy concerns, and deployment in real-world environments (Rizvi, 2023).  
Given the growing volume of research and the diversity of AI-based threat detection strategies, there is a need for a  
systematic synthesis of existing approaches. A systematic review provides an evidence-based overview of the state  
of the art, identifies key trends, evaluates strengths and weaknesses, and highlights gaps for future research. Such a  
review is essential for guiding cybersecurity practitioners, researchers, and policymakers in understanding how AI  
can be effectively leveraged to enhance threat detection capabilities (Kumar et al., 2025).  
Therefore, this study presents a systematic review of AI-driven approaches for cybersecurity threat detection. The  
review examines current methodologies, datasets, evaluation metrics, and emerging innovations in the field. The  
primary aim is to provide a comprehensive understanding of how AI is being applied to detect cyber threats and to  
identify challenges and opportunities for advancing future research and practical implementation.  
2. Methodology  
This systematic review adopted a structured and transparent methodology to examine the current state of research  
on cybersecurity threat detection using artificial intelligence (AI). Given the rapid evolution of cyber threats and the  
increasing reliance on intelligent detection mechanisms, a systematic approach was necessary to identify, evaluate,  
and synthesize relevant studies. The review followed established systematic review guidelines to ensure rigor,  
reproducibility, and comprehensive coverage of AI-based threat detection approaches.  
2.1 Research Design  
The study was designed as a systematic review of peer-reviewed literature focusing on AI applications in  
cybersecurity threat detection. Systematic reviews are particularly valuable in emerging interdisciplinary fields such  
as cybersecurity and machine learning because they consolidate fragmented research findings, highlight  
methodological trends, and identify gaps for future exploration. This review aimed to provide an evidence-based  
overview of AI techniques applied in detecting cyberattacks, intrusions, malware, and anomalous network behaviors.  
2.2 Search Strategy  
A comprehensive search was conducted across major academic databases to identify relevant studies published in  
the field. The primary sources included IEEE Xplore, ACM Digital Library, SpringerLink, ScienceDirect, and Google  
JCSIT 3(1): 33-42  
Scholar. These databases were selected due to their strong coverage of cybersecurity, computer science, and  
artificial intelligence research.  
The search employed a combination of keywords and Boolean operators, including: “cybersecurity threat detection”, “artificial intelligence” OR “machine learning” OR “deep learning”, “intrusion detection systems”, “malware detection”, “anomaly detection”, and “AI-based security”.
The search terms were applied to titles, abstracts, and keywords to maximize the retrieval of relevant literature.  
2.3 Inclusion and Exclusion Criteria  
To ensure the relevance and quality of selected studies, clear inclusion and exclusion criteria were established.  
Studies were included if they: focused on AI, machine learning, or deep learning techniques for threat detection; addressed cybersecurity applications such as intrusion detection, malware analysis, phishing detection, or anomaly monitoring; were published in peer-reviewed journals or reputable conference proceedings; were written in English; and were published within the last decade, so as to reflect contemporary AI advancements.
Studies were excluded if they: discussed cybersecurity without AI-based detection methods; focused only on cryptography or authentication without threat detection; were non-peer-reviewed articles, editorials, or short abstracts; or lacked sufficient methodological detail or experimental evaluation.
2.4 Study Selection Process  
The study selection followed a multi-stage screening process. First, all retrieved records were imported into a  
reference management system, and duplicates were removed. Next, titles and abstracts were screened to eliminate  
irrelevant studies. Full-text reviews were then conducted for the remaining articles to confirm eligibility based on  
the inclusion criteria.  
This step-by-step filtering ensured that only studies directly addressing AI-driven threat detection were retained for  
synthesis. The final dataset represented a diverse body of literature across multiple threat domains and AI  
methodologies.  
2.5 Data Extraction and Analysis  
A standardized data extraction framework was developed to collect key information from each selected study.  
Extracted data included: the type of AI model used (e.g., SVM, Random Forest, CNN, LSTM, Transformer); the target threat category (e.g., intrusion, malware, phishing, ransomware); the dataset employed (e.g., NSL-KDD, CICIDS, UNSW-NB15); performance metrics (accuracy, precision, recall, F1-score, AUC); and strengths, limitations, and deployment considerations.
The extracted information was analyzed qualitatively through thematic synthesis. Studies were grouped into major  
methodological themes such as supervised learning, unsupervised anomaly detection, deep learning-based  
architectures, and hybrid AI systems.  
2.6 Quality Assessment  
To enhance the reliability of the review, a quality assessment was conducted for each included study. Articles were  
evaluated based on criteria such as: clarity of research objectives; appropriateness of the AI techniques applied; dataset validity and representativeness; robustness of evaluation metrics; and discussion of limitations and real-world applicability.
Only studies demonstrating adequate methodological rigor and empirical validation were included in the final  
synthesis.  
2.7 Synthesis Approach  
The findings were synthesized using a narrative and thematic approach rather than statistical meta-analysis, due to  
the heterogeneity of AI models, datasets, and threat environments. This approach enabled a broader comparison of  
techniques and allowed the review to highlight emerging trends, practical challenges, and research opportunities in  
AI-driven cybersecurity.  
2.8 Ethical Considerations  
As this study was based entirely on secondary data from published literature, no direct human participants or  
sensitive data were involved. Ethical integrity was maintained through proper citation, transparency in study  
selection, and objective reporting of findings without bias toward specific AI models or frameworks.  
3. Findings and Discussion  
3.1 AI-Based Approaches for Cybersecurity Threat Detection  
The systematic review indicates a clear shift from traditional signature-based detection systems to AI-driven  
approaches in cybersecurity. Across the analyzed studies, AI methodologies demonstrated significant potential for  
detecting both known and unknown cyber threats, offering proactive threat identification and real-time monitoring  
capabilities. Unlike traditional systems that rely on predefined rules, AI approaches can learn patterns, identify  
anomalies, and adapt to novel attack strategies. This trend aligns with previous findings by Chirra (2024), who  
highlighted that AI-driven threat detection enhances the speed and accuracy of intrusion detection while reducing  
reliance on manual updates. Overall, AI approaches are becoming essential for modern cybersecurity infrastructures  
due to the increasing sophistication of cyber-attacks.  
3.1.1 Machine Learning Techniques in Threat Detection  
Classical machine learning techniques are widely represented in the literature. Algorithms such as Support Vector  
Machines (SVM), Decision Trees, Random Forests, and k-Nearest Neighbors (k-NN) are frequently applied for tasks  
including intrusion detection, malware classification, and network anomaly monitoring. The review finds that  
supervised learning models perform effectively when large, labeled datasets are available, providing high accuracy  
in identifying known threats. However, their performance tends to decline when encountering novel attack patterns,  
highlighting a limitation in adaptability. Studies such as those by Kothamali et al. (2020) and Madupati (2024)  
emphasize that while ML algorithms are computationally efficient and interpretable, they often require extensive  
feature engineering and struggle with high-dimensional or unstructured data. Consequently, ML remains useful for  
structured threat detection but may need augmentation with more adaptive methods for evolving cyber threats.  
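As a concrete illustration of the supervised workflow these studies describe, the sketch below trains a Random Forest on synthetic flow-level features. The feature regimes and values are invented for illustration and do not come from any reviewed study or benchmark dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

# Synthetic "flow" features: duration, packet count, bytes, distinct ports.
# Benign traffic clusters around one regime, attack traffic around another.
n = 2000
benign = rng.normal(loc=[5.0, 50, 4000, 3], scale=[2.0, 15, 1200, 1], size=(n, 4))
attack = rng.normal(loc=[0.5, 400, 900, 40], scale=[0.3, 80, 300, 10], size=(n // 4, 4))

X = np.vstack([benign, attack])
y = np.concatenate([np.zeros(n), np.ones(n // 4)])  # 1 = attack

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test), target_names=["benign", "attack"]))
```

On clearly separated synthetic regimes like these the classifier performs near-perfectly; the adaptability limitation discussed above appears precisely when new attacks fall outside the training distribution.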
3.1.2 Deep Learning Models for Advanced Threat Recognition  
Deep learning approaches are increasingly favored for detecting complex and high-dimensional cyber threats.  
Neural network architectures, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Long Short-Term Memory (LSTM) networks, have been widely applied to tasks such as network traffic
analysis, malware behavior prediction, and anomaly detection. The review highlights that these models achieve  
higher detection accuracy and better generalization than traditional machine learning, particularly for complex  
patterns that involve temporal or spatial correlations. For example, CNNs excel in feature extraction from raw  
network traffic, while LSTMs effectively capture sequential dependencies in attack behaviors. However, the findings  
also indicate significant challenges: deep learning models require large volumes of high-quality data, substantial  
computational resources, and careful tuning to prevent overfitting, as supported by Katiyar et al. (2024). Despite  
these demands, deep learning’s superior performance in identifying sophisticated attacks makes it an increasingly  
critical tool in cybersecurity threat detection.  
3.1.3 Hybrid and Ensemble AI Detection Frameworks  
The review identifies a growing interest in hybrid and ensemble frameworks that combine machine learning and  
deep learning methods or integrate multiple classifiers. These approaches aim to improve detection performance  
and robustness across various attack types. Hybrid models often pair the feature extraction strengths of deep  
learning with the interpretability and efficiency of classical ML, providing more adaptive and accurate detection.  
Ensemble techniques, such as stacking or voting classifiers, are particularly effective in mitigating biases and  
reducing false positives by aggregating predictions from diverse models. For instance, studies by Vaddadi et al.  
(2023) and Manoharan et al. (2023) report that hybrid frameworks achieve higher accuracy and generalization  
compared to standalone models. The evidence suggests that such integrative approaches hold promise for real-  
world deployment, as they can accommodate evolving cyber threats, manage heterogeneous data sources, and  
improve overall system resilience.  
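A minimal soft-voting ensemble in the spirit of the voting approaches discussed above can be sketched with scikit-learn. The data is synthetic and the choice of base learners is illustrative, not drawn from any particular reviewed framework:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for labeled, imbalanced traffic features.
X, y = make_classification(n_samples=1500, n_features=20, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)

estimators = [
    ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
]

# Soft voting averages predicted probabilities across the three models.
ensemble = VotingClassifier(estimators=estimators, voting="soft")

for name, model in estimators + [("ensemble", ensemble)]:
    score = cross_val_score(model, X, y, cv=5, scoring="f1").mean()
    print(f"{name:8s} F1 = {score:.3f}")
```

Aggregating probability estimates rather than hard votes lets a confident model outweigh uncertain ones, which is one mechanism by which ensembles reduce false positives.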
3.2 Types of Cyber Threats Addressed in Reviewed Studies  
The systematic review reveals that AI-based cybersecurity research predominantly targets three categories of  
threats: network intrusions, malware/ransomware attacks, and sophisticated threats such as advanced persistent  
threats (APTs) and zero-day exploits. The distribution of research emphasis reflects both the prevalence of these  
threats in real-world cyber environments and the potential of AI to enhance detection capabilities beyond  
traditional signature-based approaches. Across the reviewed literature, AI models, ranging from classical machine learning techniques like Random Forests and Support Vector Machines to advanced deep learning architectures,
consistently demonstrate superior performance in identifying anomalies, predicting attacks, and adapting to  
evolving threat landscapes.  
3.2.1 Intrusion Detection and Network Attacks  
Intrusion detection systems (IDS) are the most commonly addressed application in AI-driven cybersecurity research.  
Studies consistently focus on network-based attacks such as Denial-of-Service (DoS), Distributed Denial-of-Service  
(DDoS), port scanning, and probing attacks. AI models are employed to detect patterns in network traffic that  
deviate from normal behavior, allowing for early identification of attacks. For instance, research by Madhavram et al.  
(2022) shows that deep learning-based IDS models can accurately detect DDoS attacks in high-volume network  
traffic with minimal false positives, outperforming traditional rule-based IDS. Similarly, ensemble learning methods,  
combining multiple AI classifiers, have shown robustness in identifying low-frequency probing attacks, which often  
evade signature-based detection. A common finding across studies is that AI techniques excel in encrypted traffic  
environments, where conventional IDS struggle to analyze packet contents, highlighting the adaptability of AI in  
modern network security contexts.  
3.2.2 Malware and Ransomware Detection  
Malware and ransomware detection has emerged as a key research focus due to the increasing sophistication and  
financial impact of these threats. AI models are applied to classify malware families and predict ransomware  
behavior using both static features (e.g., binary signatures) and dynamic features (e.g., system call sequences,  
behavioral patterns). Deep neural networks (DNNs) and convolutional neural networks (CNNs) are particularly  
effective at identifying subtle patterns in malware behavior, achieving higher detection rates than traditional  
heuristic or signature-based approaches. For example, Sunkara et al. (2021) demonstrated that a CNN model  
trained on opcode sequences could classify ransomware variants with over 95% accuracy. However, studies also  
highlight the challenge posed by adversarial malware, where attackers intentionally modify malware characteristics  
to evade AI detection. This emphasizes the need for continuous retraining of AI models and the integration of  
adversarial resilience techniques.  
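The CNN-on-opcodes approach reported by Sunkara et al. is not reproduced here, but a much-simplified classical analogue conveys the idea: treat opcode traces as token sequences and classify on n-gram counts, where bigrams over opcode tokens loosely approximate the local patterns a 1-D convolution would learn. The opcode traces below are entirely fabricated toy data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy opcode traces (entirely synthetic; real work uses disassembled binaries).
benign_traces = [
    "push mov call ret push mov add ret",
    "mov mov cmp jne call ret push pop",
] * 20
ransom_traces = [
    "xor loop xor loop call crypt call crypt",
    "rdtsc xor crypt loop jmp crypt xor loop",
] * 20

X_text = benign_traces + ransom_traces
y = [0] * len(benign_traces) + [1] * len(ransom_traces)

# Unigram + bigram counts over opcode tokens feed a linear classifier.
model = make_pipeline(
    CountVectorizer(analyzer="word", ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_text, y)

print(model.predict(["push mov call ret"]))     # benign-style trace
print(model.predict(["xor crypt loop crypt"]))  # ransomware-style trace
```

The same pipeline structure also exposes the adversarial weakness noted above: an attacker who pads malicious code with benign-looking opcodes shifts the n-gram counts toward the benign class.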
3.2.3 Advanced Persistent Threats and Zero-Day Attacks  
Advanced persistent threats (APTs) and zero-day attacks represent a growing area of concern in cybersecurity  
research. These threats are stealthy, often targeted, and lack predefined signatures, making them difficult to detect  
with conventional methods. AI-based anomaly detection models, including recurrent neural networks (RNNs) and  
autoencoders, are increasingly used to identify deviations in system behavior indicative of such attacks. Findings  
indicate that AI can detect early-stage APT activities, such as lateral movement and unusual file access patterns,  
which would otherwise remain undetected for extended periods (Gopalsamy, 2022). Nevertheless, a persistent  
challenge is the scarcity of labeled, real-world datasets for APTs and zero-day exploits, which limits model  
generalizability. Several studies recommend the use of synthetic datasets and simulation-based training to partially  
overcome this limitation, though real-world validation remains crucial for deployment.  
3.3 Datasets, Features, and Evaluation Practices  
The analysis of reviewed studies reveals a strong dependence on established datasets, deliberate feature  
engineering, and a diverse set of evaluation metrics to validate AI-based cybersecurity models. These components  
collectively shape the performance and generalizability of threat detection systems. However, several gaps remain  
in dataset relevance, feature representation, and practical evaluation approaches, which are crucial for real-world  
applicability.  
3.3.1 Commonly Used Cybersecurity Datasets  
The review indicates that researchers predominantly utilize benchmark datasets such as NSL-KDD, CICIDS, UNSW-  
NB15, and Bot-IoT to train and evaluate AI models (Maddireddy et al., 2020; Polamarasetti et al., 2023). NSL-KDD,  
derived from the original KDD’99 dataset, is widely adopted due to its well-defined attack categories and balanced  
class distribution. Similarly, CICIDS and UNSW-NB15 provide more contemporary traffic with labeled attack  
patterns, supporting reproducibility across studies. Bot-IoT, focusing on IoT-specific attacks, addresses the  
emerging threat landscape in connected devices.  
Despite their popularity, the literature consistently notes limitations in these datasets. For instance, NSL-KDD and  
UNSW-NB15 may not fully capture evolving malware tactics or the heterogeneity of modern network traffic. As  
highlighted by Dhanushkodi et al. (2014), models trained on outdated datasets often underperform when deployed  
in real-world environments due to novel attack signatures and high-volume traffic variability. Consequently,  
researchers increasingly advocate for continuously updated and realistic datasets to improve model robustness and  
operational relevance.  
3.3.2 Feature Engineering and Data Representation  
Feature selection and representation remain central to effective threat detection. Traditional machine learning  
models, such as Random Forests and Support Vector Machines (SVMs), typically rely on manually engineered  
features including packet counts, connection durations, protocol types, and statistical measures of traffic behavior  
(Maddireddy et al., 2020). These engineered features facilitate interpretability and reduce computational complexity  
but can limit adaptability to novel attack patterns.  
In contrast, deep learning approaches, particularly Convolutional Neural Networks (CNNs) and Recurrent Neural  
Networks (RNNs), increasingly leverage raw traffic flows or minimally processed features (Mahmud et al., 2025;  
Salem et al., 2024). For example, autoencoder-based models can learn high-dimensional feature embeddings from  
raw packet sequences, improving detection of zero-day attacks. The findings suggest that combining domain  
knowledge-driven feature engineering with representation learning can optimize detection accuracy and reduce  
false positives, a balance underscored in hybrid approaches (Onih et al., 2024).  
Moreover, studies demonstrate that feature selection techniques, such as Principal Component Analysis (PCA) or  
Mutual Information-based selection, significantly enhance model performance by removing irrelevant or redundant  
attributes. These strategies improve computational efficiency and mitigate overfitting, especially in datasets with  
high-dimensional feature spaces.  
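A minimal sketch of the mutual-information-based selection described above, on synthetic data (the choice of k and the dataset dimensions are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# 30 features, of which only the first 10 carry signal (5 informative,
# 5 redundant); the rest simulate irrelevant traffic attributes.
X, y = make_classification(n_samples=1000, n_features=30, n_informative=5,
                           n_redundant=5, shuffle=False, random_state=0)

# Keep the 10 features with highest mutual information with the label.
# (Fixing random_state makes the kNN-based MI estimate reproducible.)
score_func = lambda X, y: mutual_info_classif(X, y, random_state=0)
selector = SelectKBest(score_func, k=10).fit(X, y)
X_reduced = selector.transform(X)

print("kept feature indices:", np.flatnonzero(selector.get_support()))
print("shape before/after:", X.shape, X_reduced.shape)
```

The selector overwhelmingly retains the signal-bearing columns, discarding the noise features that would otherwise inflate model complexity and overfitting risk.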
3.3.3 Evaluation Metrics and Comparative Performance  
Performance evaluation predominantly employs metrics such as accuracy, precision, recall, F1-score, and ROC-AUC,  
reflecting the model’s ability to correctly classify benign and malicious traffic (Sharma et al., 2024). High accuracy  
rates, often exceeding 95% in controlled experiments, are frequently reported. However, the discussion highlights  
that these metrics alone may not capture practical deployment challenges, including real-time processing, model  
interpretability, and scalability.  
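The caution about headline accuracy can be made concrete: on skewed traffic, a detector that never fires still scores high accuracy while detecting nothing, which is why precision, recall, and F1 are reported alongside it.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# 990 benign flows, 10 attacks; a "detector" that never raises an alert.
y_true = np.array([0] * 990 + [1] * 10)
y_pred = np.zeros(1000, dtype=int)

print("accuracy :", accuracy_score(y_true, y_pred))   # 0.99, looks great
print("precision:", precision_score(y_true, y_pred, zero_division=0))
print("recall   :", recall_score(y_true, y_pred))     # 0.0, misses every attack
print("f1       :", f1_score(y_true, y_pred))
```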
Few studies systematically address runtime efficiency, resource utilization, or adaptive performance in live network  
environments. For instance, while deep learning models demonstrate superior detection performance on  
benchmark datasets, they often require significant computational resources, which may limit deployment in  
resource-constrained environments (Sivakumar et al., 2025). Similarly, model interpretability remains an  
underexplored area, yet it is essential for cybersecurity operators to understand the reasoning behind flagged  
anomalies.  
The comparative analysis further indicates that hybrid models, which combine traditional machine learning with  
deep learning representations, often achieve a balance between high detection rates and manageable complexity  
(Tanikonda et al., 2022). These models tend to outperform single-approach systems, particularly in scenarios with  
class imbalance or diverse attack types.  
3.4 Challenges and Limitations in AI-Based Threat Detection  
Despite the substantial progress of artificial intelligence in cybersecurity threat detection, the systematic review  
highlights several persistent technical and operational challenges that constrain the full potential of these systems.  
These challenges not only affect model accuracy but also impact the deployment of AI solutions in real-world  
security operations. Our analysis of the literature reveals three key areas of concern: data-related limitations,  
vulnerability to adversarial attacks, and issues with explainability and trust.  
3.4.1 Data Imbalance and Labeling Issues  
A recurring theme in the literature is the problem of imbalanced datasets and limited labeled samples for training AI  
models. Many cybersecurity datasets are heavily skewed toward normal traffic, with rare but critical attack instances  
underrepresented. This imbalance often leads to biased models that perform well on common scenarios but fail to  
detect rare or sophisticated attacks. For example, studies by Prince et al. (2024) and Sankaram et al. (2024) report  
that models trained on imbalanced intrusion datasets achieved high overall accuracy but missed low-frequency  
attack types, such as zero-day exploits. To address these limitations, researchers have explored semi-supervised  
learning and data augmentation techniques. Methods such as Generative Adversarial Networks (GANs) for synthetic  
attack generation have shown promise in improving the detection of rare events by enriching the dataset (Biswas,  
2020). Nonetheless, generating realistic synthetic data remains a challenge, as it requires preserving the complex  
correlations inherent in network traffic.  
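Alongside augmentation, a lightweight mitigation is cost-sensitive reweighting, which penalizes misclassified minority samples more heavily during training. A sketch under an assumed 2% attack rate (the specific skew and model are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# 2% attack traffic, mimicking the skew reported in intrusion datasets.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=6,
                           weights=[0.98, 0.02], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
balanced = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_tr, y_tr)

# Reweighting trades some precision for better recall on the rare class.
print("attack recall, plain   :", recall_score(y_te, plain.predict(X_te)))
print("attack recall, balanced:", recall_score(y_te, balanced.predict(X_te)))
```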
3.4.2 Adversarial Attacks Against AI Models  
Another significant challenge is the susceptibility of AI-based detection systems to adversarial attacks. The reviewed  
studies demonstrate that attackers can deliberately manipulate input features to evade detection. For instance,  
Khalaf et al. (2025) and Lee et al. (2019) highlight cases where small perturbations in malware samples or network  
traffic patterns caused deep learning models to misclassify malicious behavior as benign. These findings underscore  
a fundamental limitation of current AI models: their reliance on statistical patterns makes them vulnerable to  
carefully crafted manipulations. In response, adversarially robust learning frameworks have been proposed,  
including adversarial training, defensive distillation, and ensemble methods, which aim to improve model resilience  
against such manipulations. However, implementing these defenses in operational environments remains  
computationally intensive and is still an active area of research.  
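For a linear model, the gradient-sign family of attacks (of which FGSM is the best-known instance) reduces to stepping each feature against the sign of its weight. The sketch below, on a synthetic stand-in model, shows how a small perturbation moves a confidently "malicious" sample toward the benign side of the decision boundary:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, n_informative=6,
                           random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Pick the sample the model most confidently calls malicious (class 1).
idx = int(np.argmax(clf.decision_function(X)))
x = X[idx].copy()

# FGSM-style evasion for a linear model: step each feature opposite to the
# sign of its weight, pushing the score toward the benign side.
w = clf.coef_[0]
eps = 0.5
x_adv = x - eps * np.sign(w)

print("score before:", clf.decision_function([x])[0])
print("score after :", clf.decision_function([x_adv])[0])
```

The score drop is guaranteed by linearity (it equals eps times the L1 norm of the weights), which is why purely statistical detectors without adversarial training are so easy to steer.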
3.4.3 Explainability and Trust in AI Systems  
The final major limitation identified concerns the “black box” nature of many AI models, particularly deep neural  
networks. While these models achieve high detection accuracy, their decision-making processes are often opaque,  
creating challenges for cybersecurity analysts who must justify and act on alerts. Studies by Jain (2025) and Biswas  
(2025) emphasize the growing importance of explainable AI (XAI) techniques that provide interpretable outputs  
without sacrificing performance. Techniques such as feature attribution, rule extraction, and attention visualization  
allow analysts to understand why a model flagged a specific activity, thereby increasing trust and facilitating  
incident response. The literature suggests that integrating XAI not only improves human-AI collaboration but also  
enhances compliance with regulatory and accountability requirements in critical sectors.  
Overall, while AI shows strong potential in threat detection, these challenges highlight the need for balanced  
approaches that combine technical robustness, interpretability, and realistic deployment strategies. Future research  
should prioritize methods that address data scarcity, enhance model robustness against adversarial manipulation,  
and improve explainability, ensuring that AI systems can be trusted and effectively integrated into cybersecurity  
operations.  
3.5 Emerging Trends and Future Research Directions  
This theme explores the latest innovative developments shaping the future of AI-driven cybersecurity threat  
detection. As cyber threats evolve in scale, sophistication, and diversity, emerging AI approaches are redefining  
detection, response, and overall network resilience. The studies reviewed indicate that future research will likely  
focus on real-time, privacy-conscious, and autonomous cybersecurity solutions that balance efficiency with ethical  
and operational considerations.  
3.5.1 AI for Real-Time and Edge-Based Security Monitoring  
Recent research increasingly emphasizes the deployment of lightweight AI models directly at the network edge,  
including IoT devices, routers, and cloud gateways. Edge-based AI enables rapid threat identification and response,  
reducing latency and dependency on centralized servers. Findings highlight the critical role of edge AI in securing  
distributed infrastructures, particularly in environments with limited bandwidth or high-volume traffic. Studies also  
suggest that integrating AI at the edge improves resilience against localized attacks and enhances the scalability of  
threat detection systems across heterogeneous networks (Tanikonda et al., 2022).  
3.5.2 Federated and Privacy-Preserving Threat Detection  
Federated learning has emerged as a promising strategy for training AI models across decentralized environments  
while preserving data privacy. This approach allows organizations to collaboratively improve threat detection  
models without exchanging sensitive information, making it especially relevant for healthcare, finance, and other  
privacy-critical sectors. The reviewed literature indicates that privacy-preserving AI can maintain high detection  
accuracy while complying with regulatory requirements, representing a significant step toward ethical and secure  
cybersecurity practices (Onih et al., 2024).  
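A minimal sketch of the federated pattern described above, assuming a toy linear threat-scoring model (the function names, learning rate, and datasets are illustrative, not taken from the reviewed literature): each client computes a gradient step on its private data, and only the resulting model weights, never the raw telemetry, reach the server for averaging.

```python
def local_update(weights, data, lr=0.1):
    """Client-side step: one gradient-descent pass (squared-error loss)
    for a linear threat-scoring model, computed on private local data."""
    grad = [0.0] * len(weights)
    for features, label in data:
        pred = sum(w * x for w, x in zip(weights, features))
        for j, x in enumerate(features):
            grad[j] += (pred - label) * x
    n = max(len(data), 1)
    return [w - lr * g / n for w, g in zip(weights, grad)]

def federated_average(client_models):
    """Server-side step: average the clients' weight vectors.
    Raw security telemetry never leaves the client organizations."""
    n = len(client_models)
    return [sum(ws) / n for ws in zip(*client_models)]

# Two organizations train collaboratively without sharing their logs.
client_a = [([1.0], 2.0)]   # private dataset of organization A
client_b = [([2.0], 4.0)]   # private dataset of organization B
weights = [0.0]
for _ in range(50):
    updates = [local_update(weights, d) for d in (client_a, client_b)]
    weights = federated_average(updates)
```

After a few dozen rounds the shared model converges to the weight both datasets agree on, even though neither party ever saw the other's records, which is exactly the property that makes this approach attractive for healthcare and finance.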
3.5.3 Integration of Generative AI and Autonomous Defense Systems  
A growing body of work points to the integration of generative AI with autonomous defense mechanisms. Beyond  
detection, these systems are capable of predicting potential attack vectors and initiating automated responses to  
mitigate threats. While this innovation promises faster and more adaptive cybersecurity strategies, studies caution  
that such autonomous systems require robust ethical governance, transparency, and secure AI design principles  
(Maddireddy et al., 2020). Future research is therefore expected to focus not only on improving performance but  
also on addressing accountability, interpretability, and the prevention of adversarial manipulation in autonomous AI  
defenses.  
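To make the governance concern concrete, the following sketch shows one way an autonomous defense could gate its own actions with a human-in-the-loop check. The action names, alert fields, and confidence cutoff are entirely hypothetical: only a single low-impact action runs autonomously at high confidence, and every other response is deferred to an analyst for approval.

```python
def plan_response(alert):
    """Map an alert type to a candidate mitigation (hypothetical names)."""
    actions = {
        "malware": "quarantine_host",
        "bruteforce": "block_source_ip",
        "exfiltration": "isolate_segment",
    }
    return actions.get(alert["type"], "escalate_to_analyst")

def execute_response(alert, approve):
    """Run only low-impact, high-confidence responses autonomously;
    everything else requires the `approve(alert, action)` callback,
    i.e. an explicit analyst decision, before execution."""
    action = plan_response(alert)
    autonomous_ok = (alert.get("confidence", 0.0) >= 0.95
                     and action == "block_source_ip")
    if autonomous_ok or approve(alert, action):
        return ("executed", action)
    return ("deferred", action)
```

Keeping the approval callback outside the model makes the accountability boundary explicit and auditable, which is one practical answer to the transparency requirements the reviewed studies raise.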
4. Conclusion  
This systematic review examined the diverse approaches to cybersecurity threat detection using artificial intelligence  
(AI), highlighting both the potential and the challenges of integrating AI into modern cybersecurity frameworks. The  
analysis revealed that AI techniques, including machine learning, deep learning, and hybrid models, have
significantly improved the detection of complex threats such as malware, phishing, intrusion attempts, and  
advanced persistent threats. These methods provide enhanced accuracy, real-time responsiveness, and the ability to  
identify previously unseen attack patterns, addressing some limitations of traditional rule-based systems.  
Despite these advances, the review identified persistent challenges that affect the practical deployment of AI-based  
cybersecurity systems. Key issues include the need for large, high-quality datasets, the interpretability of AI models,  
susceptibility to adversarial attacks, and the computational costs associated with real-time threat detection.  
Furthermore, integration into existing IT infrastructures and compliance with regulatory requirements remain critical  
barriers to widespread adoption.  
JCSIT 3(1): 33-42  
The findings suggest that future research should focus on developing more robust, explainable, and adaptive AI  
models capable of functioning under dynamic threat environments. Collaborative frameworks combining AI with  
human expertise, continual learning mechanisms, and standardized evaluation protocols could enhance the  
reliability and scalability of threat detection systems. Overall, AI holds considerable promise for transforming  
cybersecurity, but careful attention to ethical, technical, and operational considerations is essential for its effective  
implementation.  
References  
[1] Biswas, S. (2025). Artificial Intelligence-Enhanced Cybersecurity Frameworks for Real-Time Threat Detection in Cloud and
Enterprise. ASRC Procedia: Global Perspectives in Science and Scholarship, 1(01), 737-770.
[2] Chirra, B. R. (2024). Revolutionizing Cybersecurity: The Role of AI in Advanced Threat Detection Systems. International  
Journal of Advanced Engineering Technologies and Innovations, 4(1), 480-504.  
[3] Dash, B., Ansari, M. F., Sharma, P., & Ali, A. (2022). Threats and opportunities with AI-based cyber security intrusion  
detection: a review. International Journal of Software Engineering & Applications (IJSEA), 13(5).  
[4] Dhanushkodi, K., & Thejas, S. (2024). AI-enabled threat detection: Leveraging artificial intelligence for advanced security and
cyber threat mitigation. IEEE Access, 12, 173127-173136.
[5] Gopalsamy, M. (2022). An Optimal Artificial Intelligence (AI) technique for cybersecurity threat detection in IoT Networks. Int.  
J. Sci. Res. Arch, 7(2), 661-671.  
[6] Jain, S. (2025). Advancing cybersecurity with artificial intelligence and machine learning: Architectures, algorithms, and future  
directions in threat detection and mitigation. World Journal of Advanced Engineering Technology and Sciences, 14(01), 273-  
290.  
[7] Katiyar, N., Tripathi, M. S., Kumar, M. P., Verma, M. S., Sahu, A. K., & Saxena, S. (2024). AI and Cyber-Security: Enhancing  
threat detection and response with machine learning. Educational Administration: Theory and Practice, 30(4), 6273-6282.  
[8] Khalaf, N. Z., Al Barazanchi, I. I., Radhi, A. D., Parihar, S., Shah, P., & Sekhar, R. (2025). Development of real-time threat  
detection systems with AI-driven cybersecurity in critical infrastructure. Mesopotamian Journal of CyberSecurity, 5(2), 501-  
513.  
[9] Kothamali, P. R., Banik, S., & Nadimpalli, S. V. (2020). Introduction to Threat Detection in Cybersecurity. International Journal  
of Advanced Engineering Technologies and Innovations, 1(2), 113-132.  
[10] Kumar, B. H., Nuka, S. T., Malempati, M., Sriram, H. K., Mashetty, S., & Kannan, S. (2025). Big Data in Cybersecurity: Enhancing  
Threat Detection with AI and ML. Metallurgical and Materials Engineering, 31(3), 12-20.  
[11] Lee, J., Kim, J., Kim, I., & Han, K. (2019). Cyber threat detection based on artificial neural networks using event profiles. IEEE
Access, 7, 165607-165626.
[12] Maddireddy, B. R., & Maddireddy, B. R. (2020). Proactive cyber defense: utilizing AI for early threat detection and risk
assessment. International Journal of Advanced Engineering Technologies and Innovations, 1(2), 64-83.
[14] Madhavram, C., Galla, E. P., Rajaram, S. K., & Patra, G. K. (2022). AI-Driven Threat Detection: Leveraging Big Data For  
Advanced Cybersecurity Compliance. Available at SSRN 5029406.  
[15] Madupati, B. (2024). AI-Driven Threat Detection in Cybersecurity. Journal of Artificial Intelligence, Machine Learning and Data  
Science, 2(2), 10-51219.  
[16] Mahmud, F., Barikdar, C. R., Hassan, J., Goffer, M. A., Das, N., Orthi, S. M., ... & Hasan, R. (2025). AI-Driven Cybersecurity in IT  
Project Management: Enhancing Threat Detection and Risk Mitigation. Journal of Posthumanism, 5(4), 23-44.  
[17] Manoharan, A., & Sarker, M. (2023). Revolutionizing cybersecurity: Unleashing the power of artificial intelligence and
machine learning for next-generation threat detection. https://doi.org/10.56726/IRJMETS32644
[18] Markevych, M., & Dawson, M. (2023, June). A review of enhancing intrusion detection systems for cybersecurity using
artificial intelligence (AI). In International Conference Knowledge-Based Organization (Vol. 29, No. 3, pp. 30-37).
[19] Onih, V. A., Sevidzem, Y. S., & Adeniji, S. (2024). The role of ai in enhancing threat detection and response in cybersecurity  
infrastructures. International Journal of Scientific and Management Research, 7(04), 64-96.  
[20] Polamarasetti, A., Vadisetty, R., Velaga, V., Routhu, K., Sadaram, G., Boppana, S. B., & Vangala, S. R. (2023). Enhancing  
Cybersecurity Architectures with Artificial Intelligence (AI): A Framework for Automated Threat Intelligence Detection  
System. Universal Library of Engineering Technology, (Issue).  
[21] Prince, N. U., Faheem, M. A., Khan, O. U., Hossain, K., Alkhayyat, A., Hamdache, A., & Elmouki, I. (2024). AI-powered data-  
driven cybersecurity techniques: Boosting threat identification and reaction. Nanotechnology Perceptions, 20(S10).  
[22] Rizvi, M. (2023). Enhancing cybersecurity: The power of artificial intelligence in threat detection and prevention. International  
Journal of Advanced Engineering Research and Science, 10(5), 055-060.  
[23] Salem, A. H., Azzam, S. M., Emam, O. E., & Abohany, A. A. (2024). Advancing cybersecurity: a comprehensive review of AI-  
driven detection techniques. Journal of Big Data, 11(1), 105.  
[24] Sankaram, M., Roopesh, M., Rasetti, S., & Nishat, N. (2024). A comprehensive review of artificial intelligence applications in  
enhancing cybersecurity threat detection and response mechanisms. Management, 3(5).  
[25] Sharma, T., & Sharma, P. (2024). AI-based cybersecurity threat detection and prevention. In Perspectives on Artificial  
Intelligence in Times of Turbulence: Theoretical Background to Applications (pp. 81-98). IGI Global.  
[26] Sivakumar, J., Salman, N. R., Salman, F. R., Salimova, H. R., & Ghimire, E. (2025). AI-driven cyber threat detection: enhancing  
security through intelligent engineering systems. Journal of Information Systems Engineering and Management, 10(19), 790-  
798.  
[27] Sunkara, G. (2021). AI Powered Threat Detection in Cybersecurity. International Journal of Humanities and Information  
Technology, (Special 1), 1-22.  
[28] Tanikonda, A., Pandey, B. K., Peddinti, S. R., & Katragadda, S. R. (2022). Advanced AI-driven cybersecurity solutions for  
proactive threat detection and response in complex ecosystems. Journal of Science & Technology, 3(1).  
[29] Vaddadi, S. A., Vallabhaneni, R., & Whig, P. (2023). Utilizing AI and machine learning in cybersecurity for sustainable  
development through enhanced threat detection and mitigation. International Journal of Sustainable Development Through  
AI, ML and IoT, 2(2), 1-8.  
[30] Yaseen, A. (2023). AI-driven threat detection and response: A paradigm shift in cybersecurity. International Journal of  
Information and Cybersecurity, 7(12), 25-43.  