38TB Microsoft data leak highlights risks of oversharing
An accidentally disclosed SAS token with excessive privileges enabled researchers to access nearly 40TB of Microsoft’s data, highlighting the risks of privilege mismanagement and oversharing
Microsoft has learned an important lesson after having to clean up a major data leak resulting from an “overly permissive” shared access signature (SAS) token accidentally disclosed by one of its employees.
The incident took place in June 2023, when a Microsoft researcher shared a URL for an Azure Blob store in a public GitHub repository while contributing to an open source artificial intelligence (AI) learning model.
However, the URL included the SAS token for an internal storage account, which was found by analysts at cloud security specialist Wiz.io.
Using the token, they were able to access the storage account where, thanks to the compromised token’s excessive privileges, they could find much more than just the open source data.
It turned out the token had been configured to grant permissions across the entire storage account, which held 38TB of data, including backups of Microsoft employee workstations containing sensitive personal information, credentials, secret keys and 30,000 internal Teams messages.
The Wiz team and Microsoft worked together to prevent the issue from escalating any further, and have now jointly published information about the incident in a coordinated vulnerability disclosure (CVD) report.
The Microsoft Security Response Centre (MSRC) team said the scope of the breach was thankfully limited. “No customer data was exposed, and no other internal services were put at risk because of this issue,” they said. “No customer action is required in response to this issue. We are sharing … learnings and best practices … to inform our customers and help them avoid similar incidents in the future.”
The problem with SAS tokens
As Hillai Ben-Sasson and Ronny Greenberg of Wiz.io explained, SAS tokens are somewhat problematic. They are designed to restrict access and let certain clients connect to a specified Azure Storage resource, but they contain inherent insecurities that mean they must be carefully managed, which was not the case in this instance.
Among other things, their access levels can be easily customised by the user, as can the expiry time, which in theory allows a token to be created that never expires (the compromised one was in fact valid through to 2051, 28 years from now).
Additionally, because of the power users have over SAS tokens, it can be very hard for admins to know that a highly permissive, never-expiring token is in circulation. Nor is such a token easy to revoke: to do so, the admin must rotate the account key that signed it, which also invalidates every other token signed by the same key.
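To make the mechanics concrete, below is a minimal Python sketch, using the azure-storage-blob SDK, of how a client holding a storage account key can mint exactly this kind of token: full permissions across the account, with an expiry decades away. The account name and key are hypothetical placeholders, and the sketch is illustrative rather than a reconstruction of the token involved in the incident.

```python
# Minting an over-privileged, long-lived account SAS -- illustrative only.
# Requires the azure-storage-blob package; account details are hypothetical.
from datetime import datetime, timezone

from azure.storage.blob import (
    AccountSasPermissions,
    ResourceTypes,
    generate_account_sas,
)

sas_token = generate_account_sas(
    account_name="contosostorage",        # hypothetical account name
    account_key="<storage-account-key>",  # the key that signs the token
    resource_types=ResourceTypes(service=True, container=True, object=True),
    # Broad permissions across the whole account -- the kind of
    # over-provisioning at the heart of this incident.
    permission=AccountSasPermissions(read=True, write=True, delete=True, list=True),
    # Nothing prevents an expiry decades in the future.
    expiry=datetime(2051, 1, 1, tzinfo=timezone.utc),
)
print(sas_token)
```

Notably, the token is computed entirely client-side as a signed query string: no call is made to Azure when it is created, which is why its issuance leaves no server-side record for admins to audit.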
All this, plus the risk of accidental exposure as happened here, adds up to make SAS tokens an “effective tool for attackers seeking to maintain persistency on compromised storage accounts”, said Ben-Sasson and Greenberg.
“Due to the lack of security and governance over account SAS tokens, they should be considered as sensitive as the account key itself. Therefore, it is highly recommended to avoid using account SAS for external sharing. Token creation mistakes can easily go unnoticed and expose sensitive data.
“A recent Microsoft report indicates that attackers are taking advantage of the service’s lack of monitoring capabilities in order to issue privileged SAS tokens as a backdoor,” they said. “Since the issuance of the token is not documented anywhere, there is no way to know that it was issued and act against it.”
Read more about security for MS Azure
- Securing Azure Functions is paramount to maintaining the integrity and reliability of your applications. Read over the methods, tools and best practices.
- Before adopting Microsoft Azure, it’s important to consider how to secure the cloud network. That’s where network security groups and Azure Firewall come in.
- Restricting users’ permissions in Microsoft Azure AD to only what they need to complete their job helps secure and reduce the cloud attack surface.
Andrew Whaley, senior technical director at Promon, said the incident showed that even the best-intentioned projects can inadvertently come a cropper.
“Shared access signatures are a significant cyber security risk if not managed with the utmost care,” he said. “Although they’re undeniably a valuable tool for collaboration and sharing data, they can also become a double-edged sword when misconfigured or mishandled. When overly permissive SAS tokens are issued or when they are exposed unintentionally, it’s like willingly handing over the keys to your front door to a burglar.
“Microsoft may well have been able to prevent this breach if they implemented stricter access controls, regularly audited and revoked unused tokens, and thoroughly educated their employees on the importance of safeguarding these credentials,” said Whaley. “Additionally, continuous monitoring and automated tools to detect overly permissive SAS tokens could have also averted this blunder.”
Microsoft said there was no security issue or vulnerability in Azure Storage or the SAS token feature per se, but noted that such tokens must always be created and managed properly. Since the incident, Microsoft has been hardening the SAS token feature and continuing to evaluate the service to improve it further.
“Like any secret, SAS tokens need to be created and handled appropriately,” said the MSRC team. “As always, we highly encourage customers to follow our best practices when using SAS tokens to minimise the risk of unintended access or abuse.
“Microsoft is also making ongoing improvements to our detections and scanning toolset to proactively identify such cases of over-provisioned SAS URLs and bolster our secure-by-default posture.
“We appreciate the opportunity to investigate the findings reported by Wiz.io. We encourage all researchers to work with vendors under Coordinated Vulnerability Disclosure and abide by the rules of engagement for penetration testing to avoid impacting customer data while conducting security research.”
Best practice for SAS tokens
- Apply the principle of least privilege, with SAS URLs scoped to the smallest set of resources needed and permissions limited to only those the application requires, such as read-only;
- Use short-lived SAS tokens with a near-term expiration date, and have clients request new ones if they need them – one hour or less is recommended, certainly not 28 years (see the sketch after this list);
- Handle with care – SAS URLs grant access to data and should be treated like any other secret; they must only ever be exposed to clients who need access to a storage account;
- Have a plan to revoke SAS tokens if needed, associate them with a stored access policy for fine-grained revocation within a container, and be ready to remove this policy or rotate the storage account keys should a token leak;
- Monitor and audit your application, being sure to track requests to your storage account through Azure Monitor and Azure Storage logs, and using a SAS expiration policy to catch anybody using a decade-spanning SAS URL.
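As a counterpoint to the incident, here is a minimal Python sketch of the least-privilege pattern the first two recommendations describe: a read-only SAS scoped to a single blob and expiring after one hour. The account, container and blob names are hypothetical placeholders.

```python
# Generating a narrowly scoped, short-lived blob SAS -- illustrative only.
# Requires the azure-storage-blob package; names are hypothetical.
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

sas_token = generate_blob_sas(
    account_name="contosostorage",     # hypothetical account name
    container_name="shared-datasets",  # hypothetical container
    blob_name="dataset.csv",           # hypothetical blob
    account_key="<storage-account-key>",
    permission=BlobSasPermissions(read=True),  # read-only, nothing more
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),  # one hour
    # policy_id="shared-datasets-policy",  # optionally bind the token to a
    # stored access policy so it can be revoked without rotating account keys
)

url = (
    "https://contosostorage.blob.core.windows.net/"
    f"shared-datasets/dataset.csv?{sas_token}"
)
```

Binding the token to a stored access policy, as in the commented-out parameter, supports the revocation plan above: deleting or modifying the policy invalidates the token without forcing a rotation of the account keys.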
Highlighted issues
The Wiz.io team said the incident highlighted two major risks.
First, the oversharing of data – researchers collect and share large volumes of data, particularly when, as in this case, they are working on an AI model, which puts them at elevated risk of accidentally causing a breach. It is therefore critical for security teams to define clear guidelines for the external sharing of AI datasets, and for security and research and development teams to collaborate on them.
Second, there is a risk of a supply chain attack. In this instance, the token granted write access to the storage account containing the AI models the researcher was working on. Had a malicious actor been quick on the draw, they could easily have injected malicious code into the model files, leading to attacks on other researchers accessing the model via GitHub and, further down the line, untold damage if and when the code entered widespread use. As such, the team said, security teams should take steps to review and sanitise AI models from external sources, lest they be used to achieve remote code execution.
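The report does not prescribe a specific sanitisation workflow, but one simple, illustrative control is to verify downloaded model files against a manifest of known-good digests before they are ever loaded. The sketch below is a hypothetical example in Python; the file names and digests are placeholders.

```python
# Verifying model files against trusted SHA-256 digests -- illustrative only.
# The manifest would be distributed out of band (e.g. alongside the release).
import hashlib
from pathlib import Path

# Hypothetical manifest of trusted digests; values here are placeholders.
TRUSTED_DIGESTS = {
    "model-weights.bin": "<known-good-sha256-hex-digest>",
}

def sha256_of(path: Path) -> str:
    """Hash the file in 1MB chunks so large model files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_trusted(path: Path) -> bool:
    """Return True only if the file matches a digest in the manifest."""
    expected = TRUSTED_DIGESTS.get(path.name)
    return expected is not None and sha256_of(path) == expected
```

A check like this does not prove a model is safe, but it does ensure the files loaded are the ones the publisher intended, closing off the tampering scenario described above.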