New Turing Institute Report Calls for Establishment of AI Crime Task Force

A recent report from The Alan Turing Institute warns that British law enforcement agencies lack the capabilities to combat cybercrime facilitated by artificial intelligence (AI). The report identifies a significant gap between existing police resources and the growing sophistication of cybercriminals, concluding that UK law enforcement remains substantially underprepared for these threats.
The report draws on input from 22 experts across government, academia and law enforcement, who say the emergence of advanced large language models such as OpenAI's ChatGPT and Google's Gemini has fueled a rise in AI-driven cybercrime. Criminals are increasingly using synthetic video and audio to enhance their fraud schemes; in one recent incident, deepfake content was reportedly used to swindle 20 million pounds from a multinational corporation based in Hong Kong.
Researchers warn that the ease of access to AI technologies is widening the gap between the technical capabilities of UK police and the threats these technologies pose. Experts raised concerns both about law enforcement's ability to grasp the full range of challenges AI poses in cybercrime and about its capacity to deploy AI tools effectively in its own operations.
While most AI-enabled attacks remain at a preliminary stage, the potential proliferation of non-Western open-source large language models, such as DeepSeek's R1 and V3, could widen the gap in defenses against cybercrime, according to Ardi Janjeva, a senior research associate at the Turing Institute's Centre for Emerging Technology and Security. The lack of cooperation with emerging Chinese open-source frameworks further complicates matters for Western governments, making rapid responses to identified vulnerabilities more difficult.
To counter the growing threat of AI-driven cybercrime, the report advocates forming an AI crime task force within the UK National Crime Agency's cybercrime unit. This specialized unit would gather data from UK agencies to identify the tools criminals use, enabling swift responses to AI-facilitated offenses. The report also recommends closer collaboration with European and other international law enforcement agencies to curb the criminal exploitation of these technologies.
The proposed task force would establish a centralized database for tracking and countering AI-enabled criminal activity, which the report considers vital to maintaining a proactive stance against emerging threats. Janjeva also argued that bureaucratic hurdles preventing law enforcement from using AI tools to track serious cybercriminals must be addressed.
The findings have been shared with the National Crime Agency and several police departments, with the aim of strengthening law enforcement's responsiveness and technological capabilities against AI crime. At the time of writing, the National Crime Agency had not responded to the report's recommendations or outlined an implementation strategy.