According to Wiz, the mistake was made when Microsoft AI researchers were attempting to publish a "bucket of open-source training material" and "AI models for image recognition" to the developer platform.
The researchers misconfigured the files' accompanying SAS (Shared Access Signature) token, the signed portion of a storage URL that establishes file permissions. Basically, instead of granting GitHub users access to the downloadable AI material specifically, the botched token allowed general access to the entire storage account. And we're not just talking read-only permissions. The mistake actually granted "full control" access, meaning that anyone who wanted to tinker with the many terabytes of data — including the AI training material and AI models in the pile — would have been able to.
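To see how one query parameter can make that difference: an Azure SAS token encodes its granted permissions in the `sp` field of the URL's query string (`r` for read, `w` for write, `d` for delete, and so on). The sketch below, using hypothetical example URLs, shows how a quick check of that field distinguishes a read-only share link from one granting write access. This is an illustrative check, not Wiz's or Microsoft's actual tooling.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical SAS URLs for illustration only; real tokens also carry a
# cryptographic signature (sig=...) and other parameters.
# The "sp" query parameter lists the permission flags the token grants:
#   r = read, l = list, w = write, d = delete, c = create, a = add
READ_ONLY = "https://example.blob.core.windows.net/models?sv=2021-08-06&sp=rl&sig=REDACTED"
FULL_CONTROL = "https://example.blob.core.windows.net/models?sv=2021-08-06&sp=racwdl&sig=REDACTED"

def sas_permissions(url: str) -> set:
    """Return the set of permission flags granted by a SAS URL."""
    query = parse_qs(urlparse(url).query)
    return set(query.get("sp", [""])[0])

def is_read_only(url: str) -> bool:
    """True if the token grants no write/delete/create/add permissions."""
    return sas_permissions(url).isdisjoint({"w", "d", "c", "a"})
```

A properly scoped link like `READ_ONLY` passes this check; a token like `FULL_CONTROL`, which is the kind of over-broad grant described above, fails it.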
An "attacker could have injected malicious code into all the AI models in this storage account," Wiz's researchers write, "and every user who trusts Microsoft’s GitHub repository would've been infected by it." In other words, this sensitive material has effectively been open season for several years.
Source: The Verge