
Why ‘Good AI’ Is Likely The Antidote To The New Era Of AI Cybercrime


When it comes to AI, we’re living in the calm before the storm, an inflection point before everything changes.

Consider the last two months in academia. OpenAI’s ChatGPT—the newest, most advanced chatbot that is readily available to the public—has threatened to revolutionize plagiarism with its ability to generate content on just about any subject based on simple prompts. Just over a month after its release, a survey of over 1,000 students found that 89% were already using it to help with their homework. Though these reported cases were aboveboard, the same software can easily aid students in writing school papers in minutes or passing even rigorous written exams across fields.

AI can benefit society in incredible ways. At the same time, it can enable misdeeds, malicious actors and outright crime. Just as students will use it to cheat in class, politicians could use it to proliferate propaganda and businesses could use it to mine our data.

And hackers, as much as or maybe more than any other group, will likely use AI to design cyberattacks that are more dangerous, and that operate at a greater scale, than anything conceivable before.

How AI Will Transform Hacking

In some ways, future AI cyberattacks will confirm our deepest worries.

For example, look at what happened on a Friday afternoon in March 2019. A managing director at a British energy company received a call from his superior, who ordered him to urgently wire over $240,000 to a supplier in Hungary to avoid late-payment fees. It was a strange request, granted, but the CEO on the phone sounded perfectly normal, if unusually alarmed. And so the director did as he was told.


The money went straight to the hackers who had cloned that CEO’s voice using easy-to-obtain software.

The story was ominous, but what we really need to worry about isn’t so ready-made for Hollywood. The greatest danger to our modern internet—and perhaps society at large—may come from ordinary, almost banal ways that criminals can use AI to scale their attacks.

In December, researchers at Check Point Software watched everyone around them using ChatGPT to write poems and school papers, and they wondered: Could it write a phishing email? What about an entire attack chain?

They experimented by first asking ChatGPT to write a lure, encouraging victims to download an Excel file. After some iterations, the email achieved every trick hackers love: false legitimacy, a sense of urgency and an effective call to action.

Next, the researchers had ChatGPT write a macro, which would run custom code as soon as a victim opened the Excel file. Then, they used another OpenAI program—Codex—to automatically generate a working reverse shell, through which they could run commands, gather data and send more malware to the target machine. They also had Codex develop a port scanner and a tool for identifying virtual machines.
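None of those components is exotic, which is the point. Take the port scanner: it is exactly the kind of boilerplate a code model can produce on demand. A minimal TCP connect scanner in Python might look like the sketch below; the localhost target and port range are placeholders, and this is an illustration, not the researchers' actual output.

```python
import socket

def scan_ports(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds.
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Placeholder target: only scan machines you own or are authorized to test.
    print(scan_ports("127.0.0.1", range(1, 1025)))
```

Security teams run the same primitive every day; what changes with AI is that an attacker no longer needs to know how to write it.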

The result was a relatively paint-by-numbers attack, but not bad for an afternoon's work. And in theory, any number of non-OpenAI tools could have enhanced it further.

There’s PassGAN, for instance, a neural network-based program that trains on past data leaks to efficiently brute-force passwords. In a 2019 study, researchers used it to blow past even state-of-the-art, machine learning-based password-guessing tools.

Even more insidious than PassGAN is DeepLocker, a deep learning-powered proof-of-concept malware that, according to its creators at IBM, can “infect millions of systems without being detected.” Only after identifying its intended target, through factors such as geolocation and voice and facial recognition, does it reveal itself, striking with laser precision.

What The Tech Industry Can Do

We will not be able to prevent AI-driven cybercrime, but we can work to prevent the worst and make our internet even more secure than it is today.

Hardly a month after ChatGPT was released to the public, as teachers cried foul over its potential for enabling cheating, one student took it upon himself to fix the problem. Edward Tian, a 22-year-old Princeton senior, developed an app called GPTZero that can quickly and effectively detect whether ChatGPT or a human wrote an essay.
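GPTZero's exact internals aren't public, but detectors in this family typically score text on perplexity (how predictable the text is to a language model) and burstiness (how much that predictability varies across sentences). A rough sketch of the perplexity half, using the open GPT-2 model through Hugging Face's transformers library, might look like the following; the model choice and the cutoff value are illustrative assumptions, not GPTZero's actual method.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

# Heuristic only: model-generated text tends to score lower (more predictable)
# than human prose. The cutoff of 50 is an arbitrary placeholder, not a
# calibrated threshold.
score = perplexity("The essay text to be checked goes here.")
print("likely AI-generated" if score < 50 else "likely human", round(score, 1))
```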

Put simply, AI can be weaponized just as easily by the good guys as by the bad.

To conclude their experiment, the Check Point researchers turned the same tools to defense, using Codex to scrape important data from large YARA and VirusTotal datasets. And that was just a limited use case. AI tools have long been central to detecting malicious emails and to flagging anomalous behavior across our inboxes and IT networks far faster than humans ever could. And that's to say nothing of AI's role in collecting and managing data, automating tasks and informing risk assessments.
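The anomaly-detection piece is less glamorous than it sounds. A common pattern is to fit an unsupervised model to routine activity and flag whatever it can't explain. The sketch below uses scikit-learn's isolation forest on made-up login features; the feature set, the synthetic data and the contamination rate are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy training data: one row per login event.
# Hypothetical columns: hour of day, MB transferred, failed attempts.
rng = np.random.default_rng(0)
normal_logins = np.column_stack([
    rng.normal(10, 2, 500),   # most logins happen mid-morning
    rng.normal(20, 5, 500),   # modest data transfer
    rng.poisson(0.2, 500),    # almost no failed attempts
])

# contamination is the assumed share of anomalies; tune it per environment.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_logins)

# A 3 a.m. login moving 900 MB after six failed attempts should stand out.
suspicious = np.array([[3, 900, 6]])
print(model.predict(suspicious))  # -1 means flagged as anomalous
```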

Like Godzilla versus King Kong, we can only fight AI-enabled cybercrime with AI-enhanced cybersecurity. This is the next frontier. It’s already unfolding before our eyes.
