Artificial intelligence technology could be exploited by terrorist groups around the globe in a number of ways, a new study has found, while governmental regulatory agencies and tech companies appear to be woefully unprepared to deal with the growing threat.
The “Generative AI and Terrorism” study, conducted by the University of Haifa’s Prof. Gabriel Weimann, will be published in his forthcoming book, AI in Society.
Weimann unmasks the real and pertinent threats posed by terrorists’ and extremists’ growing interest in AI-based tools: online manuals on using generative AI to bolster propaganda and disinformation campaigns, an al-Qaeda-affiliated group announcing that it would begin holding AI workshops online, and the Islamic State’s tech-support guide on how to securely use generative AI chatbots such as ChatGPT.
“We are in the midst of a rapid technological revolution, no less significant than the Industrial Revolution of the eighteenth and nineteenth centuries – the artificial intelligence revolution,” Weimann writes.