Major OmniGPT Data Breach Reveals 30,000 Users' Sensitive Data


13 Feb 2025 · secureworld.io

OmniGPT has allegedly suffered a significant data breach, exposing the information of 30,000 users and millions of chat messages. Experts weigh in on the implications of this security incident.

Key Takeaways

1. A hacker operating under the pseudonym "Gloomer" claims responsibility for the leak, which reportedly affects 30,000 users through the release of personal email addresses, phone numbers, and 34 million lines of chat messages.
2. Eric Schwake of Salt Security warned that "the possible exposure of user information and conversation logs—including sensitive items like API keys and credentials—highlights the urgent need for strong security measures in AI-powered platforms."
3. The breach was brought to light by KrakenLabs, a cybersecurity research group, which reported that Gloomer has shared samples of the stolen data on BreachForums, a well-known marketplace for illicitly traded information.

A serious cybersecurity incident has allegedly affected OmniGPT, an AI aggregator that enables access to popular models like ChatGPT-4, Claude 3.5, and others. A hacker, operating under the pseudonym "Gloomer," claims responsibility for leaking sensitive information, reportedly affecting 30,000 users with the release of personal email addresses, phone numbers, and 34 million lines of chat messages.


The breach was brought to light by KrakenLabs, a cybersecurity research group, which reported that Gloomer has shared samples of the stolen data on BreachForums, a well-known marketplace for illicitly traded information. According to their findings, the leaked data includes:

- Complete chat logs between users and AI models via OmniGPT
- Links to files uploaded by users
- Email addresses and phone numbers of roughly 30,000 users
- Details on API requests and authentication payloads, raising concerns about the platform's security posture


"If confirmed, this OmniGPT hack demonstrates that even practitioners experimenting with bleeding edge technology like Generative AI can still get penetrated," said Andrew Bolster, Senior R&D Manager at Black Duck. He highlighted the need for stringent application security measures, noting that the nature of interactions between users and AI can be deeply personal.


This sentiment echoes the ethical considerations outlined in the newly published IEEE Standard 7014 for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems, which emphasizes the duty of developers to prioritize user privacy.

Furthermore, Eric Schwake, Director of Cybersecurity Strategy at Salt Security, pointed out the potential risks that come with exposed user credentials. "The possible exposure of user information and conversation logs—including sensitive items like API keys and credentials—highlights the urgent need for strong security measures in AI-powered platforms," he warned. Schwake further advocated for rigorous data protection measures, emphasizing the necessity of secure storage, access controls, and strong encryption.
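Schwake's recommendations describe general practice rather than anything known about OmniGPT's actual stack. As one illustration of "secure storage" for credentials, a service can persist only a salted, slow, one-way hash of each API key, so that a leaked database does not yield usable secrets. A minimal sketch using only Python's standard library (the key value and function names are hypothetical):

```python
# Sketch: store only a salted PBKDF2 hash of an API key, never the key itself,
# so a leaked credentials table cannot be replayed against the service.
import hashlib
import hmac
import os

def hash_api_key(api_key: str, salt: bytes) -> bytes:
    # Many PBKDF2 iterations slow brute-force attempts on leaked hashes.
    return hashlib.pbkdf2_hmac("sha256", api_key.encode(), salt, 200_000)

def verify_api_key(candidate: str, salt: bytes, stored: bytes) -> bool:
    # compare_digest avoids leaking information through comparison timing.
    return hmac.compare_digest(hash_api_key(candidate, salt), stored)

salt = os.urandom(16)
stored = hash_api_key("sk-test-123", salt)  # persist (salt, stored) only

assert verify_api_key("sk-test-123", salt, stored)
assert not verify_api_key("sk-wrong", salt, stored)
```

This hardens storage of the platform's own credentials; it does not, of course, protect secrets that users paste into chat messages, which must be handled separately.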


Adding to the concern, Jason Soroko, Senior Fellow at Sectigo, noted the growing gap between rapid AI advancements and essential security practices. "The reported OmniGPT breach highlights the risk that rapid AI innovation is outpacing basic security, neglecting privacy measures in favor of convenience," Soroko explained. He cautioned that unchecked progress could lead to vulnerabilities that diminish user trust and the overall promise of AI technology.



The implications of this breach, if verified, could be far-reaching for individuals who use OmniGPT. Users may find themselves vulnerable to:


- **Phishing and Identity Theft**: With exposed email addresses and phone numbers, phishing attacks may increase as cybercriminals leverage this information for targeted scams.
- **Compromised Credentials**: Users who have shared API keys or login details through their interactions with AI could face unauthorized access to services and accounts.
- **Corporate Espionage**: Businesses utilizing AI for sensitive tasks may suffer significant consequences if confidential data is in the leaked files, risking financial fraud and possible legal ramifications.
- **Privacy Violations**: Conversations containing personal data may also come under scrutiny, raising further ethical and legal concerns.
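The credential risk above arises precisely because users paste live secrets into prompts. One common mitigation, independent of any particular platform, is to scrub secret-shaped strings client-side before a prompt is sent or stored. A minimal sketch (the regex patterns are illustrative; production secret scanners use far larger rule sets):

```python
# Sketch: redact API-key-like strings from a prompt before it leaves the
# client. Patterns are illustrative examples, not an exhaustive rule set.
import re

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                  # OpenAI-style keys
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key IDs
    re.compile(r"(?i)(api[_-]?key|token)\s*[:=]\s*\S+"), # generic key=value
]

def redact(text: str) -> str:
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

prompt = "Debug this: api_key=abc123 fails against sk-aaaaaaaaaaaaaaaaaaaaaaaa"
print(redact(prompt))  # both secrets replaced with [REDACTED]
```

Redaction of this kind limits blast radius: even if a provider's chat logs are later breached, the scrubbed transcripts contain no replayable credentials.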

As more details emerge, the cybersecurity community is closely monitoring the situation while users of OmniGPT are urged to review their security practices and remain vigilant against possible phishing and cyber threats. The broader ramifications of this incident could reshape security standards within the fast-evolving AI landscape, emphasizing the importance of prioritizing robust security infrastructure from the ground up.
