
A recent data breach involving India-based AI startup WotNot exposed more than 346,000 personal files online, putting sensitive customer data at risk. Cybersecurity researchers at Cybernews discovered the leak in August during a “routine investigation using OSINT techniques”: a misconfigured Google Cloud Storage bucket containing the files was accessible to anyone online, with no authentication required.
The leaked data included passports and national IDs; detailed medical records, including diagnoses and test results; resumes with work history and contact information; and other files such as travel itineraries and train tickets. Because the data originated from WotNot’s base of more than 3,000 customers, the exposure carries serious risks, including identity theft, fraud, and phishing.
WotNot’s response
WotNot, which provides chatbot development services to the healthcare, finance, and education industries, attributes the breach to a flawed cloud storage policy. The exposed bucket reportedly served customers on the company’s free tier plan.
“The cause of the breach was that cloud storage bucket policies were modified to accommodate specific use cases,” WotNot told Cybernews. “The data was inadvertently left exposed.”
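To make the failure mode concrete: a Google Cloud Storage bucket becomes world-readable when an IAM binding grants a read role such as roles/storage.objectViewer to the special principals allUsers or allAuthenticatedUsers. The sketch below, written against Google’s google-cloud-storage Python client, audits a bucket for such bindings. The bucket name is hypothetical, and the code illustrates this class of misconfiguration generally rather than WotNot’s actual setup.

```python
# pip install google-cloud-storage
from google.cloud import storage

# Principals that make a binding public when granted a read role.
PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}


def find_public_bindings(bucket_name: str) -> list[dict]:
    """Return the bucket's IAM bindings that grant access to the public."""
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    # Policy version 3 is needed to see conditional role bindings too.
    policy = bucket.get_iam_policy(requested_policy_version=3)
    return [
        binding
        for binding in policy.bindings
        if PUBLIC_MEMBERS & set(binding["members"])
    ]


if __name__ == "__main__":
    # "example-chatbot-uploads" is a hypothetical bucket name.
    for binding in find_public_bindings("example-chatbot-uploads"):
        print(f"PUBLIC: {binding['role']} -> {sorted(binding['members'])}")
```

Run with credentials that can read the bucket’s IAM policy, this prints any role the public holds. Google also offers a public access prevention setting at the bucket and organization level, which blocks such bindings from being created in the first place.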
Third parties and shadow IT
The company noted that its enterprise customers operate on private instances with stricter security protocols. It also said it encourages clients to delete sensitive files once they have been transferred to its systems, though this practice is not enforced. The incident highlights the risks of incorporating third-party vendors into the AI ecosystem: when chatbots collect sensitive user data, any weak link in the supply chain can lead to a catastrophic breach.
According to Cybernews, AI services introduce new shadow IT resources that sit outside an organization’s direct control. “In the WotNot case, sensitive information originating from a business client was ultimately leaked,” Cybernews researchers wrote. “It shows how data from companies and thousands of individuals can be at risk.”
Experts advise users to think twice before sharing personal information with AI chatbots, especially on platforms where multiple vendors may be involved. Companies, for their part, should thoroughly vet partners’ security practices before doing business with them.
Learn how hackers and cybersecurity teams alike can leverage AI on both sides of the cybersecurity equation.