100,000 ChatGPT Accounts Stolen and Traded


Criminals are targeting users of the artificial intelligence (AI) chatbot ChatGPT, stealing their accounts and trading them on illegal online criminal marketplaces—with the threat having already affected more than 100,000 individuals worldwide.

Group-IB, a Singapore-based cybersecurity firm, has identified 101,134 devices infected with information-stealing malware that contained saved ChatGPT credentials, according to a June 20 press release.

“These compromised credentials were found within the logs of info-stealing malware traded on illicit dark web marketplaces over the past year,” the release reads.

“The Asia-Pacific region has experienced the highest concentration of ChatGPT credentials being offered for sale.”

When unsuspecting users interact with the AI, the hidden malware captures their data and transfers it to third parties. Hackers can use the collected information to build personas and manipulate data for various fraudulent activities.

Sensitive information, including personal and financial details, should never be disclosed to the chatbot, no matter how conversational the exchange becomes.

Moreover, this issue isn’t necessarily a flaw in the AI service itself; the infection may already reside on the device or within other applications.

Out of the more than 100,000 victims between June 2022 and May 2023, India accounted for 12,632 ChatGPT accounts, followed by Pakistan with 9,217, Brazil with 6,531, Vietnam with 4,771, and Egypt with 4,588. The United States ranked sixth with 2,995 compromised ChatGPT credentials.

“Many enterprises are integrating ChatGPT into their operational flow. Employees enter classified correspondences or use the bot to optimize proprietary code,” said Dmitry Shestakov, head of threat intelligence at Group-IB.

“Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials.”

The cybersecurity firm’s analysis of criminal underground marketplaces revealed that a majority of ChatGPT accounts were accessed using the malware Raccoon info stealer, which alone was responsible for more than 78,000 of the compromised credentials.

“Info stealers are a type of malware that collects credentials saved in browsers, bank card details, crypto wallet information, cookies, browsing history, and other information from browsers installed on infected computers,” Group-IB said. It then sends all this information to the malware operator.

Protection From Getting Hacked

To minimize the risk of ChatGPT account compromise, Group-IB advised users of the chatbot to update their passwords regularly and implement two-factor authentication (2FA). With 2FA activated, ChatGPT users must enter an additional verification code, usually sent to their mobile device, before accessing the chatbot’s services.

Users can enable 2FA on their ChatGPT accounts in the “data controls” section of the settings.

An engineering student takes part in a hacking challenge near Paris on March 16, 2013. (Thomas Samson/AFP via Getty Images)

But even though 2FA is an excellent security measure, it isn’t foolproof. As such, if users converse with ChatGPT about sensitive topics such as intimate personal details, financial information, or anything related to work, they should consider clearing all saved conversations.

To do so, users should go to the “clear conversations” section in their accounts and click “confirm clear conversations.”

Group-IB pointed out that there has been a rise in the number of compromised ChatGPT accounts, mirroring the growing popularity of the chatbot.

In June 2022, there were 74 compromised accounts, per Group-IB. This jumped to 1,134 in November 2022, 11,909 in January 2023, and 22,597 in March 2023.

ChatGPT for Hacking

While ChatGPT opens up a new avenue for hackers to access sensitive information, the chatbot can also help such individuals improve and scale their criminal activities.

In a Dec. 19, 2022, blog post, cyber threat intelligence firm Check Point Research (CPR) detailed how ChatGPT and similar AI models can create more hacking threats.

For instance, since ChatGPT aids in generating code, the application lowers the bar for writing malicious programs, allowing even less-skilled individuals to carry out sophisticated cyberattacks.

“Multiple scripts can be generated easily, with slight variations using different wordings. Complicated attack processes can also be automated as well,” it stated in the post.

A Jan. 13 post by CPR warned that Russian cybercriminals were attempting to bypass ChatGPT’s restrictions in order to use the chatbot for criminal purposes.

“We are seeing Russian hackers already discussing and checking how to get past the geofencing to use ChatGPT for their malicious purposes,” the post reads.

“We believe these hackers are most likely trying to implement and test ChatGPT into their day-to-day criminal operations. Cybercriminals are growing more and more interested in ChatGPT because the AI technology behind it can make a hacker more cost-efficient.”


