Microsoft’s AI Team Accidentally Leaks Terabytes of Company Data

Source: Futurism


Microsoft AI researchers accidentally leaked 38 terabytes of confidential company data — all because of one misconfigured permission token.

According to Wiz, the mistake was made when Microsoft AI researchers were attempting to publish a "bucket of open-source training material" and "AI models for image recognition" to the developer platform GitHub.

The researchers misconfigured the files' accompanying SAS (Shared Access Signature) token, the signed storage URL that defines access permissions. Instead of granting GitHub users access to the downloadable AI material specifically, the botched token allowed access to the entire storage account. And we're not just talking read-only permissions: the mistake actually granted "full control" access, meaning that anyone who wanted to tamper with the many terabytes of data, including the AI training material and AI models in the pile, would have been able to.
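For context, a SAS token is a signed set of query parameters appended to a storage URL; the signature covers both the resource scope and the permission string, so a properly minted token for one container with read-only rights cannot be widened by whoever receives it. Here is a minimal sketch of the idea in Python. This is not Azure's real signing scheme; the key, field names, and container name are invented for illustration:

```python
import base64
import hashlib
import hmac
from datetime import datetime, timedelta, timezone

# Hypothetical account key, standing in for the Azure storage account key
# that actually signs SAS tokens.
ACCOUNT_KEY = b"secret-storage-account-key"

def make_sas(resource: str, permissions: str, valid_hours: int = 1) -> str:
    """Sign a token tied to one resource path and one permission set.

    Because the signature covers resource, permissions, and expiry,
    changing any of them client-side invalidates the token.
    """
    expiry = (datetime.now(timezone.utc)
              + timedelta(hours=valid_hours)).strftime("%Y-%m-%dT%H:%MZ")
    string_to_sign = f"{resource}\n{permissions}\n{expiry}"
    sig = base64.urlsafe_b64encode(
        hmac.new(ACCOUNT_KEY, string_to_sign.encode(), hashlib.sha256).digest()
    ).decode()
    return f"sr={resource}&sp={permissions}&se={expiry}&sig={sig}"

# Safely scoped: read-only ("r") access to a single container of AI models.
scoped = make_sas("/ai-models", "r")

# The kind of over-broad grant described above: full control
# ("rwdl" = read/write/delete/list) over the entire storage account.
over_broad = make_sas("/", "rwdl")

print(scoped)
print(over_broad)
```

The difference between the two calls is the whole incident in miniature: one token exposes a single read-only container, the other hands out write access to everything under the account.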

An "attacker could have injected malicious code into all the AI models in this storage account," Wiz's researchers write, "and every user who trusts Microsoft's GitHub repository would've been infected by it," meaning that this sensitive material has basically been open season for several years.
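A common defense against exactly this kind of tampering is to verify a cryptographic hash of downloaded model files against a digest published separately from the storage bucket (for example, in the repository README). A minimal sketch; the file name and digest here are hypothetical:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large model weights fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: compare against a digest published out-of-band,
# not one stored next to the file in the same (possibly writable) bucket.
expected = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
# if sha256_of("model.ckpt") != expected:
#     raise ValueError("model file does not match published checksum")
```

If an attacker with "full control" access rewrote the model files, the digests would no longer match, so downstream users would detect the tampering before loading the weights.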

 


Similar News: You can also read similar coverage collected from other news sources.

Microsoft accidentally leaked 38TB of data, but the company says no customer data was exposed. Cloud security researchers at Wiz found the leak and reported it to Microsoft.
Source: The Verge