Malafide Breakthrough

Humanity has been ushered further into the realm of artificial intelligence, which now competes with human intelligence and at times surpasses it. Among the finest breakthroughs in the digiverse is ChatGPT, an AI chatbot that can write poetry, technical papers, novels, and essays, plan parties, and explain new subjects. The creation of malware and the pursuit of other forms of cybercrime have now been added to that list.

The misuse of ChatGPT, an artificial intelligence (AI) chatbot from OpenAI, is a new tactic among cybercriminals. Since its release at the end of November 2022, ChatGPT has grown rapidly in popularity, so it only makes sense that fraudsters are looking to take advantage of it.

A recent report from the Israeli security company Check Point reveals that hackers are exploiting ChatGPT to build new chatbots that imitate young women in order to entice targets, and to generate effective hacking tools. According to Hold Security's founder Alex Holden, scammers are adopting feminine identities and impersonating girls to gain the trust of their targets and draw them into longer conversations. ChatGPT can also be used to produce malware, including software that spies on users' keystrokes. For context, OpenAI created ChatGPT as a user interface for its large language model (LLM). The AI chatbot is already being used to create several kinds of cyber-threat scripts, outlined below.

 

Potential Threats

 

  • The Infostealer

A well-known underground hacking forum saw the appearance of a thread titled “ChatGPT – Benefits of Malware” on December 29, 2022. The thread’s creator said he was using ChatGPT to test out malware strains and techniques documented in research papers and articles about common malware. As an illustration, he shared the source code for a Python-based stealer that searches for popular file types, copies them to a random folder inside the Temp directory, zips them, and uploads them to a hard-coded FTP server. The threat is a rudimentary file-stealing program that scans the system for 12 common file types, including MS Office documents, PDFs, and images. When a file of interest is found, the malware copies it to a temporary directory, zips it, and sends it out to the operator. Because the actor neglected to encrypt the files or transfer them securely, the data could just as easily end up in the hands of third parties.

The second example this actor produced with ChatGPT is a straightforward Java snippet. It downloads PuTTY, a well-known SSH and Telnet client, and then runs it covertly on the machine using PowerShell. Naturally, the script could be altered to download and run any software, including popular malware families.

 

  • Facilitating Fraud

On New Year’s Eve 2022, a further example of ChatGPT being used for fraudulent activity surfaced, showing a different kind of cybercriminal conduct. In contrast to the earlier examples, which focused on using ChatGPT to create malware, this thread is titled “Abusing ChatGPT to construct Dark Web Marketplaces scripts.” In it, the cybercriminal demonstrates how easy it is to use ChatGPT to build a script for a Dark Web marketplace. Such marketplaces serve as platforms in the underground economy for the automated trade of illegal or stolen goods, including malware, narcotics, and weapons, with all transactions made in cryptocurrency.

To demonstrate how ChatGPT can be used for these purposes, the cybercriminal shared a code fragment that leverages a third-party API to fetch up-to-date prices for cryptocurrencies (Monero, Bitcoin, and Ethereum) as part of the Dark Web market’s payment system.
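As a rough illustration of what such a price-lookup fragment typically looks like, the sketch below queries the public CoinGecko price endpoint for the same three currencies. The endpoint, coin identifiers, and error handling here are assumptions chosen for demonstration; they are not taken from the actor's post.

import requests

# Illustrative only: fetch current USD prices for Bitcoin, Ethereum and Monero
# from the public CoinGecko "simple price" endpoint. This mirrors the kind of
# third-party price lookup described in the report; it is not the actor's code.
API_URL = "https://api.coingecko.com/api/v3/simple/price"
COINS = ["bitcoin", "ethereum", "monero"]

def fetch_prices(vs_currency="usd"):
    """Return a mapping such as {'bitcoin': {'usd': 43000.0}, ...}."""
    response = requests.get(
        API_URL,
        params={"ids": ",".join(COINS), "vs_currencies": vs_currency},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for coin, quote in fetch_prices().items():
        print(f"{coin}: {quote['usd']} USD")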

From the beginning of 2023, several threat actors started threads on other dark web forums discussing how to use ChatGPT for nefarious purposes. Most of these centered on generating random art with another OpenAI technology, DALL·E 2, and selling it online through legitimate marketplaces such as Etsy. In another instance, a threat actor described how to use ChatGPT to produce a short chapter or e-book on a given topic and then sell that content online.
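For readers unfamiliar with how DALL·E 2 is driven programmatically, the hedged sketch below shows one way to request an image through OpenAI's official Python SDK. The model name, prompt, and image size are illustrative placeholders rather than anything taken from the forum posts.

from openai import OpenAI

# Minimal sketch, assuming the openai Python SDK (v1.x) and an OPENAI_API_KEY
# set in the environment. It requests a single DALL·E 2 image and prints the
# temporary URL the API returns for the generated picture.
client = OpenAI()

result = client.images.generate(
    model="dall-e-2",
    prompt="an abstract watercolour landscape",  # illustrative prompt
    n=1,
    size="512x512",
)

print(result.data[0].url)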

The use of OpenAI's tools by cybercriminals, including those with no coding expertise, to create dangerous tools has been documented on several underground hacking forums. In one of the posts Check Point examined, a hacker shared Android malware code generated with ChatGPT that could grab files of interest, compress them, and leak them online.

One user described how they took advantage of ChatGPT to replicate features from Dark Web marketplaces such as Silk Road and AlphaBay.

Another piece of software shared on the forum could be used to set up a backdoor on a device and download further malware onto the infected machine. Likewise, a user shared a Python script, produced with OpenAI's tool, that could encrypt files.

 

Conclusion

It is still too early to say whether ChatGPT will replace the tools currently preferred by Dark Web users. However, the cybercriminal community has already shown significant interest and is embracing this latest technology to produce malicious programs; Check Point Research (CPR) will continue to monitor the activity through 2023. Last but not least, perhaps the best way to learn about ChatGPT abuse is to ask ChatGPT itself. These incidents highlight the sobering reality of our dependence on technology, which is making the internet more hostile than ever. Tackling this new problem requires understanding AI from an entirely different angle and learning to work with the technology.

Author: Mr. Shrey Madaan, Junior Consultant, CyberPeace Foundation
