The Microsoft AI research division accidentally leaked 38 terabytes of sensitive data starting in July 2020 while contributing open-source AI models to a public GitHub repository.
The exposed data included backups of personal information belonging to Microsoft employees, including passwords for Microsoft services, secret keys, and an archive of more than 30,000 internal Teams messages from 359 employees.
In a Monday advisory, the Microsoft Security Response Center (MSRC) said that no customer data was exposed and that no other internal services were put at risk by the incident.
Wait, they stored passwords in plain text?
Possibly, or as a weak hash.
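In practice the difference is small: an unsalted fast hash such as MD5 can be reversed for most real-world passwords using precomputed tables, so a weak hash is barely better than plain text. Here is a minimal Python sketch of the contrast, using only the standard library; the password and parameters are illustrative, and nothing here reflects how the leaked credentials were actually stored:

```python
import hashlib
import hmac
import os

password = b"hunter2"  # toy example, not a real credential

# Weak: unsalted MD5. Identical passwords always produce identical
# digests, and rainbow tables reverse common passwords instantly.
weak = hashlib.md5(password).hexdigest()
print("weak md5:", weak)

# Stronger: a salted, deliberately slow key-derivation function
# (PBKDF2 here; argon2 or bcrypt via third-party packages would be
# preferable in production).
salt = os.urandom(16)  # unique random salt per password
derived = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

def verify(attempt: bytes, salt: bytes, stored: bytes) -> bool:
    """Re-derive with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", attempt, salt, 600_000)
    return hmac.compare_digest(candidate, stored)

print("correct password:", verify(b"hunter2", salt, derived))  # True
print("wrong password:  ", verify(b"letmein", salt, derived))  # False
```

The salt ensures that two users with the same password get different stored records, and the high iteration count makes large-scale brute forcing expensive; an unsalted fast hash offers neither protection.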