What Security Managers Need to Know About Amazon S3 Exposures (2/2)

September 14th, 2018

Continuing from "What Security Managers Need to Know About Amazon S3 Exposures (1/2)"…

In our first post we discussed why the exposure of S3 data is such an issue, and how buckets become public in the first place. In this post we go a little deeper before laying the foundation for how to manage S3 and avoid making these mistakes yourselves.

You mentioned buckets, but what about objects?

Objects also have their own policies and means of becoming public, which can be different from the rules of the bucket. This allows you to share a single file from within a secure bucket. The main issue with objects is that if you change the rules of a bucket to make it more secure, that won’t necessarily cascade down to the objects within the bucket.
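For example, here is a minimal sketch (using boto3, with a hypothetical bucket and key) of checking whether an individual object has been opened up through its own ACL, independent of the bucket's settings:

```python
import boto3

s3 = boto3.client("s3")

# Grantee URIs that indicate an object is readable by anyone (or by any AWS account).
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def object_acl_is_public(bucket, key):
    """Return True if the object's own ACL grants access to a public group."""
    acl = s3.get_object_acl(Bucket=bucket, Key=key)
    for grant in acl["Grants"]:
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
            return True
    return False

# Hypothetical names: the bucket itself can be locked down while this one object stays public.
print(object_acl_is_public("example-secure-bucket", "reports/q3.pdf"))
```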

So we get to blame Amazon for making it too complex?

Nope. Most exposures really are easy to spot, and Amazon has long warned customers (at least in the console) when they make something public. The odds are far greater that whoever opened up the bucket knew what they were doing than that the exposure resulted from some obscure interplay of seven overlapping security controls.

Practically speaking, buckets and objects get opened up for convenience and someone forgets to lock them down again or doesn’t even realize it’s a problem.

Are all these exposures actual breaches?

Not necessarily. A fair number of the publicized exposures come from vendors and consultants scanning S3 and feeding the results to their marketing teams and the press, typically after notifying the affected company. However, we know that bad guys also scan S3 for public buckets, so we unfortunately have to assume the worst.

If I have logging turned on can I use that to figure out if a public bucket was misused?

Sort of. There are two types of relevant logging, assuming you have them turned on. The first, CloudTrail, monitors API calls to S3 and can reveal internal misuse or direct API calls from things like other AWS accounts accidentally given permission to your buckets. The second, S3 server access logs, technically shows all access to the content. The problem is that AWS doesn't guarantee those logs are complete… so if you see something bad you know something bad happened, but if you don't, it could just be a missing log entry.
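If access logging isn't already on, a minimal sketch (boto3, with hypothetical bucket names) of enabling server access logging for a bucket, so you at least have this evidence going forward, might look like this:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical names: the bucket to watch and a separate bucket that receives the logs.
SOURCE_BUCKET = "example-data-bucket"
LOG_BUCKET = "example-access-logs"

# Turn on S3 server access logging. The log bucket must separately allow the
# S3 log delivery group to write to it.
s3.put_bucket_logging(
    Bucket=SOURCE_BUCKET,
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": LOG_BUCKET,
            "TargetPrefix": f"{SOURCE_BUCKET}/",
        }
    },
)
```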

Isn’t it easy to find public buckets and lock them down?

Yes and no.

The best way to tell if a bucket is public is to look in the AWS console and see if it has the big "public" label. That used to indicate only that the ACL was public, but Amazon has enhanced it with a special back-end tool called Zelkova that uses all sorts of math to evaluate all the intersecting edge cases and determine, absolutely, whether the bucket is public.

That's great, but it means you have to look in the console to use it, which isn't ideal if you use Amazon at scale.

While some private clients also have access to Zelkova directly, the only other way for the rest of us to access it is to enable a pre-configured Config rule. This continuously monitors your environment, and you can have it send you notifications instead of looking through the console. It still needs to be turned on in each AWS account.
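As one sketch of what that can look like in practice (assuming AWS Config is already recording resources in the account, and using one of the AWS-managed rules for flagging publicly readable buckets), you could enable the rule through the API rather than the console:

```python
import boto3

config = boto3.client("config")

# Enable an AWS-managed Config rule that flags S3 buckets allowing public reads.
# Assumes AWS Config is already recording resources in this account and region.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-public-read-prohibited",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }
)
```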

We fully expect Zelkova to become more accessible by early next year.

For most organizations that use AWS at scale, this all needs to be automated, which means using the AWS API. The downside is that it takes multiple API calls to determine whether even a single bucket is public, plus analysis of everything in the rules themselves. We'll go into more detail on how to handle this technically in the next posts.
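To make the "multiple API calls" point concrete, here is a rough sketch (boto3, hypothetical bucket name) of the kind of checks involved. The policy check is a crude heuristic, not the full edge-case analysis a tool like Zelkova performs:

```python
import json
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def bucket_looks_public(bucket):
    """Rough heuristic: check both the bucket ACL and the bucket policy."""
    # Call 1: the bucket ACL.
    acl = s3.get_bucket_acl(Bucket=bucket)
    for grant in acl["Grants"]:
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
            return True

    # Call 2: the bucket policy (there may not be one at all).
    try:
        policy = json.loads(s3.get_bucket_policy(Bucket=bucket)["Policy"])
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchBucketPolicy":
            return False
        raise

    # Crude check: any Allow statement with a wildcard principal.
    for statement in policy.get("Statement", []):
        if statement.get("Effect") != "Allow":
            continue
        principal = statement.get("Principal")
        if principal == "*" or principal == {"AWS": "*"}:
            return True
    return False

print(bucket_looks_public("example-data-bucket"))  # hypothetical bucket name
```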

Things get ugly when we look at objects. There is no easy way to evaluate the public state of objects other than iterating through all of them, which can be extremely time consuming and can run into limits on the number of API calls you can make to AWS within a given time period.
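As a very rough sketch of why this gets painful, checking object ACLs means one listing call per page of keys plus one ACL call per object (bucket name hypothetical):

```python
import boto3

s3 = boto3.client("s3")

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def find_public_objects(bucket):
    """Yield keys whose own ACL grants access to a public group.

    One get_object_acl call per object -- on large buckets this is slow and
    can bump into API rate limits, so real tooling needs throttling and retries.
    """
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            acl = s3.get_object_acl(Bucket=bucket, Key=obj["Key"])
            for grant in acl["Grants"]:
                grantee = grant.get("Grantee", {})
                if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
                    yield obj["Key"]
                    break

for key in find_public_objects("example-data-bucket"):  # hypothetical bucket name
    print(key)
```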

How can I find or prevent S3 data from becoming public?

I like to say that cloud security starts with architecture and ends with automation.

The first step is to make sure you have formal policies for how you use S3, and require evaluation and validation in your project planning and architectures.

The rest comes down to automating S3 security: automating both assessment and response through guardrails. Since that is more technical, we cover the details in our next two posts, including tools like Amazon Macie, which is a form of DLP for AWS. One post delves deeper into the technical side of how buckets become public and how you can manage them, and the next walks you through building your own guardrails.

One key is to ensure your team understands that this is an ongoing issue that will never go away. By design, S3 needs the capability to be public, and you will have many legitimately public buckets. S3 security (and really, all of AWS security) must become part of your core security program; you can't treat it as a one-off technical problem that will go away with a quick fix.

About the Author:

With twenty years of experience in information security, physical security, and risk management, Rich is one of the foremost experts on cloud security, having driven development of the Cloud Security Alliance’s V4 Guidance and the associated CCSK training curriculum. In addition to his role at D-OPS, Rich currently serves as Analyst & CEO of Securosis.