Enmaeya News

Abu Dhabi, UAE (Enmaeya News) — A new report from the international think tank Hedayah warns that generative artificial intelligence (AI) poses growing threats but also offers new opportunities in the effort to counter violent extremism and terrorism worldwide.

The report, titled “Artificial Intelligence for Counter Extremism,” is based on a review of 52 studies and five expert discussions that brought together professionals from security, technology, academia, and civil society. It offers one of the most detailed public assessments to date of how generative AI is reshaping the threat landscape—and how it might also help reduce those threats.

While the report notes that extremist use of generative AI remains largely experimental, concern is growing. The main risks include the ability to produce high-quality propaganda in many languages, synthetic media and deepfakes, and AI-powered chatbots used for recruitment.

In one case in the UK, an attacker formed a one-sided emotional attachment to an AI chatbot that appeared to encourage his violent intentions.

The report also raises concerns about disinformation, voice masking, and identity manipulation—particularly because AI can erode public trust in media and institutions.

Experts warned that AI could be used to flood social media with extremist messaging, poison the data used to train other AI systems, or even digitally recreate known terrorists to aid recruitment.

AI is also being tested for purposes such as cyberattacks, drone tactics, and the spread of bomb-making instructions. Although most extremists are not skilled programmers, easy-to-use AI tools may lower the barrier to causing harm.

On the other hand, the report highlights several promising ways AI could help counter extremism: detecting early signs of radicalization by analyzing online behavior, using chatbots to engage and educate at-risk individuals, and improving content moderation by deploying AI to identify hate speech and extremist messaging.

AI could also support smaller, under-resourced civil society groups working to prevent violence, and help redirect at-risk individuals toward constructive content or support services. Still, experts stress that human oversight remains essential: without clear ethical rules, AI models can be manipulated or misused.

The report also discusses major challenges. One is that AI systems often reproduce the social biases embedded in their training data, which can lead to unfair outcomes, particularly for minority groups.

Non-Western languages and cultures are often overlooked or poorly supported by these systems. Another problem is the lack of transparency: many AI tools disclose little about how they are built or used, and companies frequently rely on outsourced workers for moderation, making responsible use harder to guarantee.

Other challenges include privacy risks from surveillance-based AI tools, environmental concerns over the high energy demands of running AI models, and a widening gap between those who understand AI and those who do not—especially in rural or less developed areas. Even so, some experts noted that AI could also help close information gaps, for example by providing better translation tools for under-served languages.

To move forward, Hedayah recommends developing AI inclusively and ethically, strengthening cooperation across sectors, and expanding research. It calls for keeping humans involved in decision-making, teaching digital literacy that covers AI, and building the capacity of civil society groups to use these tools.

The report concludes that while generative AI holds real promise for preventing violence, it must be deployed carefully—with transparency, ethics, and respect for human rights.