David Tutin
|
January 24, 2023

Cybercriminals using ChatGPT to write malware

Never far from the headlines, AI is front and center at the moment thanks to the huge impact being made by ChatGPT, which, in its own words, is a "powerful AI-based language generation model."

While this sounds simple, the range of potential ChatGPT applications is huge, from writing essays, code, poetry, movie scripts, and even novels to offering ideas and insight into just about any conceivable subject on demand – the list of possibilities is almost endless. It has been touted as a major threat to Google and is attracting huge levels of investor interest.

It’s also controversial. As the tool itself reveals when queried, “ChatGPT and other similar models like it have become controversial because of the potential for misuse.”

Among the main concerns is its use as a way to develop malware quickly and effectively. In many ways, this is nothing new – bad actors have been finding ways to create new and evasive malware for years. From evasion techniques such as packing, encryption, and polymorphism to anti-analysis tricks such as virtual machine and sandbox detection, malware authors constantly devise new methods to beat AVs, sandbox systems, and security researchers in order to deliver malicious payloads.

The problem is that the development of ChatGPT and a myriad of other AI tools means we are not far away from an explosion in AI-produced malware. And while ChatGPT is programmed to deny requests to create malware, recent media headlines suggest it's already being used to develop code for malicious purposes.

As reported by The Register: "A thread titled 'ChatGPT – Benefits of Malware' popped up December 29 on a widely used underground hacking forum written by a person who said they were experimenting with the interface to recreate common malware strains and techniques. The writer showed the code of a Python-based information stealer that searches for and copies file types and uploads them to a hardcoded FTP server."

An explosion in AI-produced malware

These risks are very real, and as Paul Farrington, Chief Product Officer at Glasswall, explains, "If you had a compliant AI service that was willing to produce example malicious files, and also to introduce some randomness into the file hashes, you've got a way to keep generating novel malware that evades AV. You could train the model with example malware snippets to improve the variations that were created."
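The mechanics are simple enough to illustrate without going anywhere near real malware. The short Python sketch below (a hypothetical illustration, not code from any observed sample) shows why the hash randomness Farrington describes defeats signature matching: appending a handful of random bytes to a file's contents produces a completely different SHA-256 digest, so a signature keyed to the original hash no longer matches.

```python
import hashlib
import os

def sha256(data: bytes) -> str:
    """Return the SHA-256 hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# A stand-in for any file's contents -- harmless placeholder bytes.
original = b"example payload bytes"

# Appending a few random bytes leaves the meaningful content untouched
# in many file formats, but yields a completely new hash every time.
mutated = original + os.urandom(8)

print(sha256(original))  # one hash...
print(sha256(mutated))   # ...an entirely different hash
```

Any detection approach that relies on recognizing known-bad hashes can be sidestepped by trivially mutating each copy of a file – which is precisely what an AI service generating endless variations would automate.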

In this context, it will become trivial to create malware that evades AV protection in a very short space of time, and this capability will be accessible to non-developers as well as known bad actors. The risks posed by zero-day threats, for example, could grow significantly as AI is used to create malware on an industrial scale.

So how can organizations stay ahead of the risks? In contrast to reactive AV and sandboxing solutions, Glasswall's CDR engine protects its users from file-based zero-day malware an average of 18 days before conventional AV and detection systems catch up. Instead of looking for malicious content, our advanced CDR (Content Disarm and Reconstruction) process treats all files as untrusted, validating, rebuilding, and cleaning each one against its manufacturer's 'known-good' specification.
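Glasswall's engine itself is proprietary, but the core idea behind CDR can be sketched in a few lines. The hypothetical Python example below (an illustration of the general technique, not Glasswall's implementation) rebuilds a PNG file from scratch, copying across only chunk types named in the format's specification and recomputing each checksum, so that nothing from the untrusted original survives unless it is known-good.

```python
import struct
import zlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

# Chunk types this toy rebuild permits; everything else
# (ancillary chunks, appended junk) is simply discarded.
ALLOWED_CHUNKS = {b"IHDR", b"PLTE", b"IDAT", b"IEND"}

def rebuild_png(data: bytes) -> bytes:
    """Rebuild a PNG from scratch, keeping only known-good chunks."""
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")

    out = bytearray(PNG_SIGNATURE)
    offset = len(PNG_SIGNATURE)

    while offset < len(data):
        # Each chunk: 4-byte length, 4-byte type, payload, 4-byte CRC.
        (length,) = struct.unpack(">I", data[offset:offset + 4])
        ctype = data[offset + 4:offset + 8]
        payload = data[offset + 8:offset + 8 + length]
        offset += 12 + length  # skip past payload and the original CRC

        if ctype in ALLOWED_CHUNKS:
            # Re-serialize the chunk ourselves, recomputing the CRC,
            # rather than trusting any bytes from the original file.
            crc = zlib.crc32(ctype + payload) & 0xFFFFFFFF
            out += struct.pack(">I", length) + ctype + payload
            out += struct.pack(">I", crc)

        if ctype == b"IEND":
            break  # anything appended after IEND is ignored by design

    return bytes(out)
```

A production CDR engine validates far more deeply against each file format's specification, but the principle is the same: rather than hunting for bad content, rebuild the file so that only conformant, known-good structures remain.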

Only safe, clean, and fully functioning files enter and leave an organization, allowing users to access files with full confidence. In a rapidly approaching future where AI is a go-to tool for cybercriminals and nation-state attackers, proactive protection against file-based threats has become more crucial than ever.

We’ll be exploring the potential impact of ChatGPT and similar AI tools on the cybersecurity threat landscape more closely in future blogs, but to read more about how Glasswall protects against file-based threats, click here.

Book a demo

Talk to us about our industry-leading CDR solutions
