Lev Matveev, Founder of SearchInform, discusses the advancement of artificial intelligence in Gulf countries, the significant security risks associated with this technology and explores strategies to mitigate these risks.
It’s no secret that the countries of the Gulf region are investing heavily in innovation and information technology. AI has swiftly become one of the most successful and fruitful components of national programs such as Vision 2030 and We Are the UAE 2031, and it presents a significant opportunity to diversify the economy in a post-oil world. At the same time, GenAI has raised several cybersecurity issues. Let’s take a brief look at the potential benefits and the main risks associated with this technology.
AI Oasis Ahead
The Gulf Arab countries have developed strategies for utilizing AI. Oman, for example, intends to release a new AI policy in Q1 2025; the Sultanate aims to increase the digital economy’s contribution to GDP to 5% by 2030 and 10% by 2040. The UAE is pursuing a similar course: it plans to double its GDP to AED 3 trillion by 2031 through economic diversification and growth in the technology sector, and it aims to have AI account for 20% of its non-oil GDP by 2031.
The Kingdom of Saudi Arabia has its own transformative program, Vision 2030, under which it seeks to reduce its economic reliance on oil and drive growth through innovation and AI. AI is expected to contribute $135 billion, or 12% of GDP, to the Saudi economy by 2030, and the Kingdom has set the ambitious goal of raising non-oil exports to 50% of non-oil GDP by 2030.
More broadly, Gulf countries are developing national AI engines. In December 2024, at the World Summit AI, Qatar launched Fanar, its first sovereign AI model. The UAE is developing the Falcon model, and Saudi Arabia is working on the Allam model. In 2025, the government of Oman plans to develop its own AI engine. To facilitate the integration of AI, Dubai has ordered all government agencies to appoint a chief artificial intelligence officer.
Such rapid development of AI raises questions about security. Is the Gulf region ready to implement AI solutions safely? To answer this question, it is necessary to clearly identify the main risks related to AI.
Security Concerns Behind AI
The main AI-related risk is, at the same time, AI’s killer feature: the capability to learn. Every time a user enters data into a GenAI service, that information is added to the service’s data bank as source material for further training. As a result, user prompts can expose sensitive data to third parties.
According to research by Harmonic, almost 8.5% of the analyzed ChatGPT prompts included sensitive data. The shared data falls into the following groups: customer data (45%), employee data (26%), legal and finance (14%), security (6%), and sensitive code (5%). Information entered by employees can thus swiftly become available to other users.
Secondly, although these issues have been identified, AI companies have not been quick to address them. Developers are not removing confidential data, even when requested.
In 2022, a user discovered that LAION, a major supplier of training data, had processed medical records. In response to a request to remove the confidential data from its databases, a LAION representative said there was no capability to do so. Sensitive data in a training pool is thus difficult to remove even upon legal request, and it may be exposed at any time.
Thirdly, AI engines can hallucinate: they present imaginary information in response to user requests. In February 2025, the law firm Morgan & Morgan warned its own lawyers that GenAI can invent fake legal references and lawsuits. The warning followed a case in which a judge discovered that information presented by the firm’s lawyers was false, generated by AI, and fined them for it.
Security Steps to Take
A closer look at the Gulf countries’ readiness to address AI-related risks shows that they are trying to strike a balance between the speed of innovation and the need for safety standards. The Gulf Arab countries have clearly identified the issues emerging from AI technologies: 49% of cybersecurity specialists in the UAE consider GenAI a major risk to their companies.
What measures can be taken to enhance the benefits of artificial intelligence and reduce the risks?
Firstly, safety guidelines are the cornerstone of a robust security posture. A useful step is to develop guidelines that spell out both recommendations for the use of AI and restrictive measures. For example, it is practical to prohibit the transfer of confidential information of any type, be it personal data, financial records, or blueprints.
Secondly, business and government entities should regularly conduct cybersecurity awareness training. The share of CISOs in the region who prioritize employee data security training increased from 41% in 2023 to 55% in 2024, which is logical: security culture is one of the foundations of safety.
Thirdly, several protective solutions can help secure AI use and minimize the risks. Next-generation DLP solutions stand out among them: 51% of CISOs in the UAE are implementing DLP. Such systems can prevent the exposure of sensitive information by blocking the attempt when a user tries to input confidential data into a chatbot via a browser.
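To make the idea concrete, here is a minimal sketch of how such a control might inspect a prompt before it leaves the user’s machine. The pattern names and regular expressions are illustrative assumptions; commercial DLP products rely on far richer detection (content fingerprinting, ML classifiers, OCR) than simple pattern matching.

```python
import re

# Hypothetical categories a DLP policy might flag; real products use
# far more sophisticated detection than regular expressions.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def inspect_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data categories found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Allow the prompt only if no sensitive category is detected."""
    return not inspect_prompt(prompt)
```

In a real deployment this check would run in a browser extension or network proxy, so the prompt is blocked before it ever reaches the chatbot’s servers.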
Without a doubt, AI will become part of the daily routine in the coming years. The Gulf countries have a perfect opportunity to avoid the mistakes made by Western states and put AI guardrails in place in advance, securing GenAI as an advantage rather than a problem.