Europol Sets Out ‘Grim’ Prospects For Law Enforcement In The Era Of ChatGPT

Europol has issued a stark warning about the dangers of large language models (LLMs) such as ChatGPT.

In a report, ‘ChatGPT – the impact of large language models on law enforcement’, it says that LLMs can easily be misused for fraud and social engineering, cyber crime and disinformation. It describes the prospects for law enforcement as ‘grim’.

Tools such as ChatGPT – which hit 100 million users in its first two months – have already been associated with everything from cheating in exams to more serious crimes.

ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing, says Europol, and because LLMs can reproduce language patterns, they can impersonate the style of speech of specific individuals or groups.

“This ability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors,” the authors say.

The same ability allows LLMs to be used for propaganda and disinformation, generating persuasive messages with relatively little effort. And because LLMs can generate code in much the same way, they allow criminals with little technical knowledge to produce malicious code.

While ChatGPT creator OpenAI says it has included more safeguards in the latest version, GPT-4, Europol says they don’t go far enough.

“The release of GPT-4 was meant not only to improve the functionality of ChatGPT, but also to make the model less likely to produce potentially harmful output,” the report reads.

“A subsequent check of GPT-4, however, showed that all of them [the potentially harmful use cases identified in the report] still worked. In some cases, the potentially harmful responses from GPT-4 were even more advanced.”

The report makes a number of recommendations, suggesting that law enforcement agencies should raise awareness of the issues and engage with the tech sector to introduce controls. They should recognize that LLMs can be used for many types of offense beyond online crime alone, and work to improve their own internal expertise – perhaps even developing their own LLMs.

“When criminals use ChatGPT, there are no language or culture barriers. They can prompt the application to gather information about organizations, the events they take part in, the companies they work with, at phenomenal speed. They can then prompt ChatGPT to use this information to write highly credible scam emails,” says Julia O’Toole, CEO of MyCena Security Solutions.

“When the target receives an email from their apparent bank, CEO or supplier, there are no language tell-tale signs the email is bogus. The tone, context and reason to carry out the bank transfer give no evidence to suggest the email is a scam. This makes ChatGPT-generated phishing emails very difficult to spot and dangerous.”