Saudi Arabia Enhances Cybersecurity Using Artificial Intelligence Amid Rising Cybercrime Threats

Saudi Arabia is actively leveraging artificial intelligence (AI) to bolster its cybersecurity efforts as generative AI gives rise to a new generation of cybercrime that is more sophisticated and challenging to detect. The National Cybersecurity Authority (NCA) in Saudi Arabia plays a pivotal role in this endeavor, utilizing its Cybersecurity Toolkit to protect critical infrastructure and public services from digital threats.

Credit: Arab News

Zainab Alamin, vice president of national digital transformation at Microsoft Arabia, stated that the Cybersecurity Toolkit provides a comprehensive suite of tools aimed at improving cyber readiness and mitigating risks for both public and private sector organizations. This toolkit, available in Arabic and English, is part of the NCA’s broader mission to instill cyber resilience throughout the Kingdom.

Beyond providing templates, the NCA has launched the national cybersecurity portal, HASEEN, which supports the management and development of cyber services. Additionally, the CyberIC Program is designed to cultivate local expertise to defend Saudi systems against evolving threats. Alamin emphasized the importance of public awareness, highlighting the National Cybersecurity Awareness Campaign aimed at increasing cybersecurity knowledge across all segments of society.

Saudi Arabia’s investment in cybersecurity is significant, with spending projected to reach SR13.3 billion (approximately $3.5 billion) in 2023, demonstrating the Kingdom’s commitment to protecting its digital infrastructure. However, as the Kingdom ramps up its efforts, cybercriminals are also evolving. Generative AI enables fraudsters to create highly realistic emails, clone voices, and produce deepfake videos, providing them with an alarming new advantage.

Alamin remarked that AI models are increasingly capable of producing outputs that are authentic, contextually accurate, and emotionally persuasive, making it more difficult for traditional detection systems and even informed users to differentiate between real and fake communications.

In the face of these threats, Microsoft has implemented its own AI-driven defenses. Between April 2024 and April 2025, its systems blocked fraud attempts worth $4 billion and prevented over 49,000 fraudulent partnership enrollments. Furthermore, Microsoft collaborates closely with law enforcement and industry partners to share threat intelligence and combat the misuse of AI by criminals.

Alamin pointed out that scams have transformed from poorly constructed phishing emails to more polished and personalized attacks. The latest Cyber Signals report from Microsoft Arabia indicates that criminals are using AI to create everything from hyper-realistic images to entire fraudulent websites.

In response to the growing sophistication of scams, Microsoft’s platforms, including Defender for Cloud and Entra, use AI to identify and neutralize threats across various digital channels. The Edge browser now includes protections against typosquatting and domain impersonation, as well as a “Scareware Blocker” to counter deceptive pop-up scams.

Education on recognizing scams is crucial, as Alamin noted that scammers often exploit fear or urgency to manipulate victims. Signs of AI-generated deception may include overly formal language, generic greetings, or unusual phrasing. Individuals are advised to be vigilant and verify any unusual communication, especially if it involves sensitive information such as financial transactions.
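The red flags described above can be illustrated with a deliberately simple keyword heuristic. This is a toy sketch for explanatory purposes only, not how Microsoft's or the NCA's AI-based detectors actually work (those rely on machine-learned models, not keyword lists); the phrase lists and function name are illustrative assumptions.

```python
# Toy heuristic flagging the social-engineering signals mentioned in the
# article: urgency/fear pressure, generic greetings, and requests for
# sensitive information. Illustrative only; real detectors use ML models.

URGENCY_PHRASES = ["act now", "immediately", "account suspended", "within 24 hours"]
GENERIC_GREETINGS = ["dear customer", "dear user", "dear sir/madam"]
SENSITIVE_REQUESTS = ["password", "bank details", "wire transfer", "verification code"]

def scam_signals(message: str) -> list[str]:
    """Return a list of heuristic red flags found in a message."""
    text = message.lower()
    flags = []
    if any(p in text for p in URGENCY_PHRASES):
        flags.append("urgency or fear pressure")
    if any(g in text for g in GENERIC_GREETINGS):
        flags.append("generic greeting")
    if any(s in text for s in SENSITIVE_REQUESTS):
        flags.append("request for sensitive information")
    return flags

example = "Dear customer, your account suspended. Send your password immediately."
print(scam_signals(example))
```

A message triggering several flags at once warrants the verification step the article recommends: contacting the purported sender through a known, independent channel before acting.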

As Saudi Arabia enhances its cybersecurity measures, other Gulf nations are similarly developing their frameworks to align with global standards. For example, the UAE has recently launched its Green Bond and Sukuk Framework, integrating cybersecurity into its digital finance architecture.


Alamin concluded by highlighting the dual-use nature of AI technologies, which can drive innovation when used ethically but can also be weaponized for scams and misinformation. Understanding the intent behind the use of AI is a critical aspect of the ongoing cybersecurity challenge.
