OmniGPT Inc., an artificial intelligence aggregation service, has reportedly suffered a severe data breach. The incident exposed more than 34 million lines of user conversations, along with roughly 30,000 user email addresses and phone numbers, which were posted publicly on a hacking forum.
Operating as a middleman, OmniGPT gives users easy access to multiple AI models, including OpenAI's ChatGPT and Google's Gemini. The service has grown increasingly popular among users seeking diverse AI experiences without the burden of multiple subscriptions.
The hacker, known simply as "Gloomer," announced the leak on the notorious hacking site Breach Forums. The site has drawn attention from U.S. authorities, including an attempted shutdown by the FBI in May 2024, though it has since re-emerged. "This leak contains all messages between the users and the chatbot of this site, as well as all links to the files uploaded by users and also 30k user emails," Gloomer shared. "You can find a lot of useful information in the messages, such as API keys and credentials. Many of the files uploaded to this site are very interesting because sometimes they contain credentials/billing information."
"Gloomer,"

The specifics regarding how the breach unfolded remain unclear. Research from Hackread.com highlights the gravity of the situation, indicating that leaked data not only included extensive user conversations with AI chatbots but also links to uploaded files. Some of these files may carry sensitive information, such as credentials and billing details, pointing to a serious violation of user privacy.
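The risk Gloomer describes, credentials and API keys sitting in plaintext inside stored chat logs, is exactly what secret-scanning tools are built to mitigate. As a minimal sketch (not OmniGPT's actual pipeline), a platform could redact credential-like substrings before persisting messages; the patterns below are illustrative assumptions, and real scanners such as gitleaks use far larger rule sets:

```python
import re

# Hypothetical patterns for a few common credential formats.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),        # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),           # AWS access key IDs
    re.compile(r"(?i)password\s*[:=]\s*\S+"),  # inline password assignments
]

def redact_secrets(text: str) -> str:
    """Replace credential-like substrings with a placeholder before storage."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

message = "My key is sk-abcdefghijklmnopqrstuvwx and password: hunter2"
print(redact_secrets(message))
```

Redaction at write time limits what a breach of the message store can expose, though it is no substitute for encrypting the store itself.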
Impact and Expert Response
Security experts weighed in on the breach’s implications, with Andrew Bolster, senior research and development manager at Black Duck Software Inc., emphasizing its potential impact. "If confirmed, this OmniGPT hack demonstrates that even practitioners experimenting with bleeding edge technology like generative AI can still get penetrated and that industry best practices around application security assessment, attestation and verification should be followed," Bolster stated. He further cautioned about the deeply personal nature of conversations held with chatbots, which individuals often use for sensitive matters related to psychological or financial issues.
"If confirmed, this OmniGPT hack demonstrates that even practitioners experimenting with bleeding edge technology like generative AI can still get penetrated and that industry best practices around application security assessment, attestation and verification should be followed,"
Eric Schwake, director of cybersecurity strategy at Salt Security Inc., echoed these warnings. He pointed out that while the data leak is pending official confirmation, the possible exposure of user information, particularly sensitive items such as API keys and credentials, underscores the dire need for robust security measures. "Should this be verified, the incident would bring to light the risks tied to the storage and processing of user data in AI interactions," he remarked.
"Should this be verified, the incident would bring to light the risks tied to the storage and processing of user data in AI interactions,"
Schwake further elaborated on the necessity for organizations developing AI solutions to prioritize data protection. He stated, "Organizations creating and deploying AI chatbots must prioritize data protection throughout the entire lifecycle, ensuring secure storage, implementing access controls, utilizing strong encryption and conducting regular security evaluations."

As details continue to emerge, OmniGPT has yet to issue a public statement regarding the breach. The incident highlights a critical gap in the security practices surrounding AI applications and the broader challenge of data privacy in an increasingly digital world. Given the volume of sensitive information AI platforms handle, prioritizing user security will be integral to maintaining trust and safety in digital interactions, particularly as similar breaches may become more frequent while the technology continues to evolve rapidly.


