
Four Ways ChatGPT Is Changing Cybersecurity Innovation



Artificial intelligence (AI) has long been a part of the cybersecurity industry. However, the newest generation of AI tools, such as ChatGPT, has rapidly broken new ground and is already having a profound impact on the industry's future.

Here are four ways the rise of ChatGPT has already changed the game.

AI-Directed Searches

For decades, search engines have been a key feature of the internet and a core area of expertise for cybersecurity operators and attackers alike. But as ubiquitous as they have become, search engines are still just an index of places to go to find information—a very asynchronous interaction.

AI like ChatGPT is fundamentally game-changing in the way it uses natural language processing (NLP) to comprehend language and provide direct responses to user questions. Provide a snippet of code, and it can give you a line-by-line walkthrough tailored to a 12-year-old or a Ph.D. candidate.
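This kind of walkthrough request can also be scripted rather than typed into a chat window. The sketch below is illustrative only: it assumes the OpenAI Python SDK's chat-completions interface, and the helper function and model name are hypothetical choices, not part of any official workflow.

```python
# Hypothetical sketch: building an "explain this code line by line" request.
# Assumes the OpenAI Python SDK (openai>=1.0); the helper function and the
# model name are illustrative assumptions, not an official API.

def build_walkthrough_prompt(code_snippet: str, audience: str) -> list:
    """Return a chat message list asking for a line-by-line explanation
    pitched at the given audience (e.g., 'a 12-year-old')."""
    return [
        {"role": "system",
         "content": "You are a patient programming tutor."},
        {"role": "user",
         "content": ("Explain this code line by line for "
                     f"{audience}:\n```\n{code_snippet}\n```")},
    ]

messages = build_walkthrough_prompt("print('hello')", "a 12-year-old")

# Actually sending the request would then look something like this
# (requires an API key and network access, so it is left commented out):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4o", messages=messages)
# print(reply.choices[0].message.content)
```

The same prompt structure works for either audience; only the `audience` string changes, which is what makes the interaction feel synchronous rather than like a static search result.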

Instead of simply watching a video or reading a recommended article, you can interact with the lesson, ask refining questions or dig deeper into a particular topic. As an engaged participant, you have a more direct influence on where the conversation goes. That’s why many early adopters, like my friend and security startup CEO Snehal Antani, have already abandoned old-school Google searches in favor of ChatGPT.

AI-Assisted Research

Security researchers have been toying around with the capabilities of ChatGPT for some time now. Overall, their opinions have been mixed; in fact, many seem to be simultaneously threatened and unimpressed by the tool—and by AI in general.

Some of this pushback likely has to do with their methods of inquiry. Many appear to be asking one basic question and are not providing more specific details or follow-up directions. This obscures the true power of ChatGPT: synchronous engagement, i.e., the ability to alter the conversation or outcome based on new stimuli.

When used properly, ChatGPT has already shown the ability to rapidly understand and locate obfuscated malware code. Once we perfect our methods of engagement, these tools will no doubt help to improve solutions in the marketplace at an extremely rapid pace.

AI-Augmented Operations

Through NLP, ChatGPT not only understands commands and reads code, but it can also provide actual insights and remediation advice. Properly harnessed, this capability can greatly enhance the efficiency and sophistication of a human operator at the wheel.

For an industry that continues to suffer staffing and talent issues, AI and machine learning are already being leveraged to create efficiency, improve speed and ensure operational accuracy.

As they continue to evolve, these tools even have the potential to help human operators better cope with “context switching,” i.e., the efficiency the brain naturally loses when forced to multitask rapidly.

AI-Powered Attacks

While security researchers and operators are utilizing AI for more efficient threat detection and incident response, hackers are likely doing the same thing. In fact, attackers have probably gained the most of anyone in the early days of NLP-powered AI programs like ChatGPT.

We know threat actors are already using ChatGPT to develop malware, including polymorphic malware that mutates regularly to evade detection. Currently, the quality of ChatGPT’s code-writing ability is mediocre, but these programs evolve rapidly. Future forms of specialized “coding AI” could rapidly accelerate the development of malware and enhance its performance.

Many hackers already specialize in specific attack types such as social engineering or phishing campaigns. In time, AI will help them automate large portions of their workflow and exploit vulnerabilities within a few hours of their discovery instead of a few days.

Like all major leaps forward, the early days of ChatGPT have been filled with users figuring out new ways to harness the technology for good and for bad. Often these forces tend to balance one another.

That said, AI-based tools will only grow in complexity and power. There are still unanswered questions about the best ways to teach AI how to deal with inaccurate information or mitigate bias in the way it processes raw data. These concerns, and others, have prompted the National Institute of Standards and Technology (NIST) to begin developing an AI Risk Management Framework to provide guidance and processes for minimizing the risks of working with AI.

In the spirit of the headline, I decided to give the final word on this topic to ChatGPT itself. To my question, “what are some of the more interesting things people have done with ChatGPT in the cybersecurity area,” it replied with a list of its own.

There are a few different ways that people have used GPT in the cybersecurity field. Some examples include:

Realistic phishing emails: GPT can be trained on a dataset of real phishing emails and then used to generate new, convincing phishing emails. This can be useful for testing the effectiveness of anti-phishing tools and training employees to recognize phishing attempts.

Payloads for penetration testing: GPT can be trained on a dataset of payloads used in penetration testing and then used to generate new payloads. This can be useful for testing the security of systems and identifying vulnerabilities.

Documentation for compliance and security standards: GPT can be trained on a dataset of documentation for compliance and security standards, such as SOC 2 and PCI-DSS, and then used to generate new documentation. This can be useful for ensuring that documentation is accurate and up-to-date.

Reports for incident response: GPT can be trained on a dataset of incident response reports and then used to generate new incident response reports. This can be useful for quickly and accurately documenting the details of an incident and the steps taken to contain it.

Natural language summaries of security alerts: GPT can be trained on a dataset of security alerts and then used to generate natural language summaries of the alerts, making it easier for security analysts to understand and triage the alerts.

These are just a few examples of how GPT has been used in cybersecurity, but the possibilities are endless.