Easy Does It — Understanding Object Storage Public Data Exposure
One thing I’d like to avoid in narrating this journey through common Cloud Attack Killchains is the implication that cloud platform providers are doing an inherently bad job. The main providers are incredibly secure, and tend to release all their services with decent secure defaults, but all this starts to fall apart as services interact, and continues as they add complexity to meet the requirements of a diverse customer base.
We’ve touched heavily on my mantra of “simple doesn’t scale” in previous posts, and clearly this is the reality of cloud computing for both providers and customers. Everything is moving so fast — from cloud adoption to apps development and threat emergence — it’s no wonder that we are playing catch-up.
Remember when we said that this time it wouldn’t happen? [Insert laughs here.]
Unfortunately, the hope that most of these challenges can be addressed outside production remains elusive, even with DevSecOps taking off. The reality is that our staggering cloud computing footprint means we will likely always be chasing exposures. It’s the nature of the beast.
Image 1 — Attack Parameters
Installment four in our “Top 10” series is really no different. Even as Azure, GCP, and AWS become more security-friendly, it remains incredibly easy to make mistakes that leave data unprotected, and Object Storage Public Data Exposure falls right into this profile.
It’s worth noting that the issue is far more prevalent on Amazon S3, since it’s one of the oldest public cloud services and in much wider use than the Azure or GCP equivalents, but we see it across all these environments.
Image 2 — AWS and Azure Killchains
The heart of the challenge for S3 is that it has long been considered reasonable to use publicly readable buckets as repositories to share data and downloads, or even to host static websites, but that hasn’t worked out, for a number of reasons.
S3, Azure Storage accounts, and nearly every other platform start with secure, non-public defaults. However, each of these platforms includes options to share data with partners or the public, and over the years providers have kept adding warnings to reduce the chances of exposure. But it’s still far too easy to make this mistake, despite all the warnings, if you don’t implement preventative or reactive controls.
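One place to start with preventative controls is catching bucket policies that open access to everyone. Here is a minimal sketch of that check, written as a pure function over a policy document so it runs without credentials; a real evaluation would also need to weigh Conditions, Deny statements, ACLs, and Block Public Access settings, which is exactly why the full problem is so hard.

```python
import json

# Principal values that make an Allow statement reach everyone.
PUBLIC_PRINCIPALS = ("*", {"AWS": "*"})

def public_statements(policy_json: str) -> list:
    """Return the Sids of Allow statements granting access to all
    principals. A rough heuristic, not a full policy evaluator."""
    policy = json.loads(policy_json)
    flagged = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        if stmt.get("Effect") != "Allow":
            continue
        if stmt.get("Principal") in PUBLIC_PRINCIPALS:
            flagged.append(stmt.get("Sid", f"statement-{i}"))
    return flagged

# A classic "public read" policy (bucket name is a placeholder).
policy = """{
  "Version": "2012-10-17",
  "Statement": [
    {"Sid": "PublicRead", "Effect": "Allow", "Principal": "*",
     "Action": "s3:GetObject", "Resource": "arn:aws:s3:::my-bucket/*"}
  ]
}"""
print(public_statements(policy))  # ['PublicRead']
```

In practice you would feed this the output of boto3’s `get_bucket_policy` for each bucket, but the parsing logic is the same.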
How does this happen? The most common issue, and the hardest to solve, is that object-level permissions can be made public with a couple of clicks — either directly or when you change bucket-level permissions. This is often a shortcut people take when they really mean to share privately, because it is much easier to make something public than to write a policy that shares only to specific destinations (such as when S3 is storing files for a web server). Even if you change permissions at the bucket level, object-level permissions don’t always carry over — the change isn’t recursive — so existing public objects often aren’t fixed.
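Because the bucket-level change isn’t recursive, finding the stragglers means inspecting each object’s ACL. A sketch of the per-object check, operating on the grant structure that boto3’s `get_object_acl` returns so it can run standalone:

```python
# The group URIs S3 uses in ACL grants to mean "everyone" and
# "any authenticated AWS account" (both are effectively public).
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"
AUTH_USERS = "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"

def is_public_acl(acl: dict) -> bool:
    """True if any grant in an S3 ACL goes to the AllUsers or
    AuthenticatedUsers group."""
    for grant in acl.get("Grants", []):
        if grant.get("Grantee", {}).get("URI") in (ALL_USERS, AUTH_USERS):
            return True
    return False

# Example ACLs in the get_object_acl shape (IDs are placeholders):
private_acl = {"Grants": [{"Grantee": {"Type": "CanonicalUser", "ID": "abc"},
                           "Permission": "FULL_CONTROL"}]}
public_acl = {"Grants": [{"Grantee": {"Type": "Group", "URI": ALL_USERS},
                          "Permission": "READ"}]}
print(is_public_acl(private_acl), is_public_acl(public_acl))  # False True
```

To sweep a whole bucket you would page through `list_objects_v2` and call `get_object_acl` per key — one API call per object, which is exactly why this hurts at the scale of millions of objects.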
This is very hard to deal with — some organizations have many thousands or even millions of objects. I could detail another handful of potential scenarios in AWS as well — it’s such a widespread error.
I’ve often imagined what it must have been like at Amazon when these issues started hitting. I’m guessing they looked at the initial S3 data breaches and thought, “Hey, we don’t default to public — it’s just customers making poor choices.” But then a lot of customers kept ending up in the headlines, so they started sending warnings, and it just kept going. Eventually they added even more warnings and more security features to prevent the problem… but each addition brings complexity, and it’s still all too easy for someone to hit the “public” button, or to make something public through more arcane methods: signed URLs, putting a public content delivery network in front of a private bucket, or weird interactions among all the policies — interactions confusing enough that Amazon itself built an entire machine learning product just to figure out whether something is public or not.
Talk about an “oh sh*t” moment.
Since this problem first appeared, Amazon has added multiple defenses which are highly effective, but they add their own complexity — especially at scale.
To reiterate, cloud providers keep giving us tools to identify and reduce this problem, but it’s still a pain, so… continuous assessment and automated remediation.
Azure doesn’t have the same layers of complexity as AWS, but even there we see exposures because all these object storage services legitimately need ways to make things public. Weirdly, at the time of this writing, Azure Security Center doesn’t even find public objects for you by default.
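Since Azure Security Center won’t surface these for you by default, you have to go look. Azure blob containers expose a public-access level (“blob” for anonymous object reads, “container” for anonymous listing too). A minimal sketch that flags exposed containers, assuming the JSON shape returned by `az storage container list` (container names are placeholders):

```python
def exposed_containers(listing: list) -> list:
    """Given container records with a properties.publicAccess field,
    return (name, level) pairs for anonymously readable containers."""
    exposed = []
    for c in listing:
        level = (c.get("properties") or {}).get("publicAccess")
        if level in ("blob", "container"):
            exposed.append((c["name"], level))
    return exposed

listing = [
    {"name": "logs",   "properties": {"publicAccess": None}},
    {"name": "assets", "properties": {"publicAccess": "blob"}},
]
print(exposed_containers(listing))  # [('assets', 'blob')]
```

You would run this per storage account, across every subscription — the same continuous-assessment loop as on AWS.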
Image 3 — Azure Mitigation
In AWS, the best thing you can do is start by assessing what you have, then use the Block Public Access feature as much as possible going forward. This lets you block all public access at the account level, but it also offers options to revert all existing buckets to private, or to “lock in” the current settings and block anything new from becoming public. It’s a great way to stem the flow; combine it with assessment to clean up problems without breaking existing applications.
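Block Public Access boils down to four switches. A sketch of the “block everything” configuration, with the boto3 calls shown as comments since applying them needs credentials (and the bucket name is a placeholder):

```python
# The four Block Public Access switches. All four set to True both
# rejects new public ACLs/policies and neutralizes existing ones.
BLOCK_ALL = {
    "BlockPublicAcls": True,        # reject new public ACLs
    "IgnorePublicAcls": True,       # treat existing public ACLs as private
    "BlockPublicPolicy": True,      # reject new public bucket policies
    "RestrictPublicBuckets": True,  # cut off access granted by public policies
}

# Bucket level (boto3, not run here):
#   import boto3
#   boto3.client("s3").put_public_access_block(
#       Bucket="my-bucket",
#       PublicAccessBlockConfiguration=BLOCK_ALL)
#
# Account level, to cover every bucket at once:
#   boto3.client("s3control").put_public_access_block(
#       AccountId="123456789012",
#       PublicAccessBlockConfiguration=BLOCK_ALL)
print(sorted(BLOCK_ALL))
```

Setting only the two `Block*` flags gives the “lock in current settings” behavior described above: nothing new goes public, but existing public objects keep working until you clean them up.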
Image 4 — AWS and Azure Mitigation
One last warning we hinted at earlier: you also need to keep an eye out for exposures emanating from your content delivery network. If you’re using Amazon CloudFront, for example, a bucket can be set to private while the distribution in front of it still serves its contents to the world.
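So a bucket-by-bucket audit isn’t enough; you also want to know which buckets are reachable through a distribution. A sketch that pulls the S3 origins out of a CloudFront distribution config, assuming the shape boto3’s `get_distribution_config` returns (the domain name and OAI below are placeholders):

```python
def buckets_served_publicly(distribution_config: dict) -> list:
    """Return the S3 origin domain names in a CloudFront distribution
    config. Any bucket listed here is reachable through the CDN even
    if its own ACLs and bucket policy are private."""
    buckets = []
    for origin in distribution_config.get("Origins", {}).get("Items", []):
        if "S3OriginConfig" in origin:  # S3 origins only, not custom ones
            buckets.append(origin["DomainName"])
    return buckets

config = {"Origins": {"Quantity": 1, "Items": [
    {"Id": "s3-origin",
     "DomainName": "my-private-bucket.s3.amazonaws.com",
     "S3OriginConfig": {
         "OriginAccessIdentity": "origin-access-identity/cloudfront/E123"}},
]}}
print(buckets_served_publicly(config))
# ['my-private-bucket.s3.amazonaws.com']
```

Cross-reference this list against your “private” buckets: anything that appears in both is effectively public through the CDN, whatever its ACLs say.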
To repeat: this issue is not a huge technical snafu — most breaches occur simply because someone made an object public. There’s really no magic to any of these leading attack killchains — it’s just a matter of keeping up, and trying to keep your environment consistent.