
Microsoft AI researchers unintentionally exposed corporate passwords and 30,000 internal Teams messages

Microsoft AI researchers inadvertently leaked a significant amount of data while attempting to share their work on an open-source platform, the company confirmed.

The researchers accidentally exposed 38 terabytes of data through a misconfigured sharing link in their GitHub repository; instead of granting access only to the files they meant to publish, the link granted permissions across the entire storage account.
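The article does not name the exact mechanism, but the description matches an Azure Storage shared access signature (SAS) URL scoped to the whole account. The sketch below is only an illustration of that class of misconfiguration, using the azure-storage-blob Python SDK with a hypothetical account name, placeholder key, and container; it contrasts an account-wide, full-permission SAS with a narrowly scoped, read-only one.

```python
# Illustrative sketch only: hypothetical account/container names and a
# placeholder key, not taken from the incident itself.
from datetime import datetime, timedelta, timezone

from azure.storage.blob import (
    AccountSasPermissions,
    ContainerSasPermissions,
    ResourceTypes,
    generate_account_sas,
    generate_container_sas,
)

ACCOUNT_NAME = "exampleaccount"        # hypothetical storage account
ACCOUNT_KEY = "ZmFrZWtleQ=="           # placeholder; a real key comes from Azure and should never be committed
expiry = datetime.now(timezone.utc) + timedelta(hours=1)

# Overly broad: read/write/delete/list over every container in the storage
# account -- the kind of account-wide scope described in the incident.
broad_sas = generate_account_sas(
    account_name=ACCOUNT_NAME,
    account_key=ACCOUNT_KEY,
    resource_types=ResourceTypes(service=True, container=True, object=True),
    permission=AccountSasPermissions(read=True, write=True, delete=True, list=True),
    expiry=expiry,
)

# Narrow: read-only access to a single container holding only the files
# that were meant to be shared.
scoped_sas = generate_container_sas(
    account_name=ACCOUNT_NAME,
    container_name="public-models",    # hypothetical container
    account_key=ACCOUNT_KEY,
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=expiry,
)

# The URL published in a repository should carry the scoped token,
# not the account-wide one.
share_url = f"https://{ACCOUNT_NAME}.blob.core.windows.net/public-models?{scoped_sas}"
print(share_url)
```

The difference is only in how the token is generated, which is why a link that looks like an ordinary download URL can silently expose everything else stored under the same account.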

The exposed data included personal computer backups, Microsoft service passwords, secret keys, and over 30,000 internal Teams messages.

Microsoft stated that no customer data was compromised, and the issue has been addressed.

The incident occurred shortly after Microsoft disclosed another security lapse related to China-based hackers compromising a Microsoft engineer’s account.
