Meta's security team has recently revealed a worrying trend where malware pretends to be ChatGPT in an attempt to gain unauthorized access to user accounts.
April 24, 2025
According to Meta's security analysts, malware is disguising itself as generative AI applications, like ChatGPT, to infiltrate user accounts.

Security experts at Meta have found that malware distributors have adopted generative AI themes as their latest lure for spreading harmful software.
As excitement around generative AI grows, threat actors have begun exploiting public interest in OpenAI’s ChatGPT to trick users into downloading malware, according to Meta security engineers Duc H. Nguyen and Ryan Victory. These malicious campaigns seek to compromise businesses by hijacking their online advertising accounts.
Malware operators are casting a wide net, abusing a range of hosting and collaboration services, including Dropbox, Google Drive, Mega, MediaFire, Discord, Trello by Atlassian, Microsoft OneDrive, and iCloud, to distribute malware under the guise of offering AI features.
Since March 2023, researchers have spotted multiple malware variants that exploit themes surrounding ChatGPT to illicitly access online accounts. For example, cybercriminals have created rogue browser extensions that falsely claim to deliver ChatGPT-related features and have uploaded these extensions to official web stores.
By utilizing social media and paid search ads, malware operators have promoted these deceptive browser add-ons, tricking users into unwittingly installing harmful software. To evade detection by official marketplaces, some of these extensions even include legitimate ChatGPT functionalities.
Meta's security engineers say they have blocked more than 1,000 ChatGPT-themed malicious links from being shared on the company's platforms and have collaborated with industry partners to bolster defenses.
Drawing parallels to previous malware campaigns like Ducktail, the criminals behind these recent schemes have rapidly adapted their tactics in response to enforcement actions and public awareness. They now employ cloaking techniques to bypass automated ad verification and borrow popular marketing tools, such as link-shortening services, to obscure the true destinations of their links.
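Cloaking, in this context, means showing automated ad reviewers a harmless page while routing real visitors to the malicious one. The following is a minimal, hypothetical sketch of that logic for defenders; the function name and bot signatures are invented for illustration, and Meta has not published the actual code used in these campaigns:

```python
# Hypothetical illustration of ad cloaking: a server decides which page to
# serve based on the visitor's User-Agent string. Names and signatures are
# invented for this sketch, not taken from any real campaign.

# User-Agent substrings a cloaking script might associate with review crawlers.
REVIEW_BOT_SIGNATURES = ("facebookexternalhit", "googlebot", "adsbot")

def select_landing_page(user_agent: str) -> str:
    """Return which page a cloaking server would serve for this visitor."""
    ua = user_agent.lower()
    if any(sig in ua for sig in REVIEW_BOT_SIGNATURES):
        # The reviewer sees an innocuous page, so the ad passes verification.
        return "benign.html"
    # A real user is sent to the actual (malicious) download page.
    return "payload.html"
```

Ad-verification systems counter this kind of branching by crawling ads with realistic, rotating user agents and IP addresses, which is one reason the operators described here keep shifting tactics.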
Their strategies are evolving to incorporate other trending subjects, such as TikTok marketing. Some have even begun targeting smaller platforms, like Buy Me a Coffee, to push harmful content, especially after larger platforms took stronger action against them. In light of the current buzz around generative AI, users should be cautious about clicking unexpected links or downloading applications that claim to be ChatGPT-related, especially those found in browser web stores or sidebar advertisements.
News Report Technology

by Alisa Davidson

Cindy is a journalist for Metaverse Post, concentrating on web3, NFTs, the metaverse, and AI topics, with an emphasis on interviewing figures in the Web3 space. She has engaged with over 30 top-level executives and continues to gather insights for her audience. Hailing from Singapore, she now resides in Tbilisi, Georgia, holds a Bachelor's degree in Communications & Media Studies from the University of South Australia, and has a decade of experience in journalism and writing.