S3 Buckets And Security – How Should You Secure Them?

If you’ve been using AWS, you’ve undoubtedly heard of S3 buckets. S3 buckets provide object-level storage for almost unlimited amounts of data – S3 also supports multiple data tiers to help you lower costs if you’re storing backup data, for example. However, how to properly secure an S3 bucket can be confusing. In this guide, we will walk through some security tips to apply to your S3 buckets.

Disclaimer

This article will not focus on public-based buckets (such as those hosting static websites). This article will focus on private buckets only.

Encryption

Encryption is supported in S3 buckets in multiple ways, and which one works best for you will come down to the specific requirements set by your organisation.

The first and simplest encryption type is S3-managed encryption (SSE-S3). This option lets S3 manage everything, so you don’t need to worry about any form of key management or rotation.

This option is a good baseline for encryption as it requires the least amount of overhead.

However, if your organisation requires the use of custom keys, then S3 fully integrates with KMS (SSE-KMS).

This option allows you to use AWS-managed KMS keys or a KMS key that you manage. Also note the “bucket key” option: S3 uses your KMS key to encrypt a bucket-level key so that each encrypt/decrypt call does not need to go to KMS, thereby lowering your KMS bill.

Recommendation: Encryption should always be enabled in some capacity. Enabling it makes no real operational difference, so it’s better just to turn it on. If you are worried about cost, use S3-managed keys (SSE-S3). If your organisation has strict requirements for control of its data, then KMS keys are the better option (SSE-KMS).
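If you manage your buckets with Terraform (as in the example later in this article), a minimal sketch of enabling SSE-KMS with a bucket key might look like the following. The KMS key here is a placeholder, and the bucket resource name (aws_s3_bucket.example) is assumed; for SSE-S3 instead, set sse_algorithm to "AES256" and omit the key.

resource "aws_kms_key" "s3" {
  description = "Key used for S3 bucket encryption"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.s3.arn
    }

    # Use a bucket key so that every encrypt/decrypt call does not hit KMS
    bucket_key_enabled = true
  }
}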

Preventing Public Access

Wait a second, right at the start of this article, you said we were not worried about public S3 buckets, right? That’s true, but we want to ensure that no one accidentally turns a private bucket public.

S3 supports two ways to stop public access. The first is the “Block Public Access” setting that is present on each individual S3 bucket.

Turning this on for every bucket is a good idea. However, if you’ve got 300 buckets in the account, you can see why this quickly becomes cumbersome. As a fail-safe, AWS also supports blocking public S3 buckets at the account level. To access this setting, go to the S3 console and, in the left panel, you will notice “Block Public Access settings for this account”. By default, this will be disabled.

If you know you will never have public buckets in this account, it’s a good idea to enable this.

Recommendation: Unless you have a particular use case, you probably won’t need S3 buckets to be public. Therefore, some form of blocking on public buckets should be enabled. This ensures that if a team member makes a mistake and accidentally makes a bucket public via its policy, the fail-safe will prevent the bucket from actually going public.

Note that if you create your S3 buckets via Terraform, the Block Public Access settings will not be enabled automatically (as they are in the console). You must declare them explicitly in your Terraform configuration.

resource "aws_s3_bucket" "example" {
  bucket = "example"
}

resource "aws_s3_bucket_public_access_block" "example" {
  bucket = aws_s3_bucket.example.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
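The account-level fail-safe described above can also be managed in Terraform. This is a sketch assuming the AWS provider is configured for the account you want to protect:

# Blocks public access for every S3 bucket in the account
resource "aws_s3_account_public_access_block" "account" {
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}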

Access Control Lists (ACLs)

Access control lists (ACLs) control who owns objects within an S3 bucket, for example when you allow an AWS service or another AWS account to upload files to a bucket you control.

Using them is not recommended, and only a few specific use cases require them. Therefore, for any new buckets you create, ensure ACLs are disabled.
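In Terraform, ACLs can be disabled by enforcing bucket-owner ownership. This is a sketch assuming the same aws_s3_bucket.example resource used earlier:

resource "aws_s3_bucket_ownership_controls" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    # BucketOwnerEnforced disables ACLs entirely; the bucket owner owns every object
    object_ownership = "BucketOwnerEnforced"
  }
}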

Versioning

Versioning is a feature within S3 that allows you to take a “copy” of a file whenever it is changed. This also applies to delete actions, so if a file is accidentally deleted, it can quickly be recovered. Take, for example, a text file that is named “hello.txt” that contains some text.

Let’s upload the file to S3. Then we will change the file and re-upload it.

Now when we look back in the S3 console, we will note that there are multiple versions of our original file (to see this, click on the file and then on the versions tab).

Now we can see both versions of the file and have easy options to restore if needed. But what happens if we delete the file? If we refresh the console after deleting, it appears as if the file has been entirely removed. But note that in the middle of the screen there is a “show versions” button. Pressing this will reveal the “delete marker” of our file.

From here, we can restore our file or proceed to remove it entirely. If you want to restore the file, the delete marker simply needs to be deleted.

But before we get into the recommendations, there are some things to note about versioning. Every object that gets versioned is essentially replicated, meaning you pay to store each version of the file in S3. The cost of this can mount significantly if care is not taken. Additionally, when a file is deleted, it is not actually removed from the S3 bucket; a delete marker is simply added so it no longer displays in the console. In turn, this makes every delete a two-stage process, which can add significant overhead.

Recommendations: Whether you need versioning or not will very much depend on what you are storing in the bucket. Do you have files that change often and are very important? If so, versioning may be a good idea. However, before you enable versioning, double-check that it’s actually required. Costs will no doubt increase when enabling it – also, consider that deleting a file is effectively a two-stage process now (specifically, deleting every version of the file).

Versioning is not something that needs to be turned on in your buckets by default. Assess and decide if it’s needed or not.
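If you do decide versioning is needed and the bucket is managed with Terraform, a minimal sketch (again assuming the aws_s3_bucket.example resource from earlier) would be:

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id

  versioning_configuration {
    # Every change to an object now creates a new, billable version
    status = "Enabled"
  }
}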

Object Locks

Object Lock should be reasonably self-explanatory: once an object is locked, it cannot be deleted or overwritten for the duration of the lock. This is referred to as the write-once-read-many (WORM) approach.

This could be useful if you have very important files stored in your S3 buckets, like the big contract you just signed with a new client. You’d want to make sure no one accidentally deletes that.

One of the biggest things to note about Object Lock is that it can only be enabled when creating a bucket. Buckets that have already been created cannot have Object Lock enabled (the console does advise that it’s possible to contact AWS Support to enable Object Lock on existing buckets, but I’ve never been through this process).

To enable Object Lock, create a new bucket and expand the advanced section. You should now see the Object Lock option – click enable and accept the warnings.

By default, items that are added to the bucket are not immediately locked. However, this can be changed by going into the bucket properties and setting a default retention period.
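Since Object Lock must be enabled at creation time, the Terraform sketch below creates a new bucket with it turned on and adds a default retention rule. The bucket name and the 30-day retention period are placeholders:

resource "aws_s3_bucket" "locked" {
  bucket = "example-locked-bucket"

  # Object Lock can only be enabled when the bucket is first created;
  # enabling it also turns on versioning for the bucket
  object_lock_enabled = true
}

resource "aws_s3_bucket_object_lock_configuration" "locked" {
  bucket = aws_s3_bucket.locked.id

  rule {
    default_retention {
      # COMPLIANCE mode means locked versions cannot be deleted until the retention period expires
      mode = "COMPLIANCE"
      days = 30
    }
  }
}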

MFA Delete

One of the most common questions I get asked is about MFA delete on S3 buckets and whether it’s really needed. Does it make items stored within the bucket more secure? Maybe, but it really does depend on your use case.

In my opinion, if you’re storing something within an S3 bucket that needs protecting with MFA delete, then maybe S3 is not the correct storage medium for you. I say this because enabling MFA delete has a reasonably significant downside: you must use the root account to enable it in the first place. Using the root account is not recommended, as it increases the risk of the root account being compromised.

Therefore, if you want to store something sensitive that needs protection from deletion, consider using something like a Glacier vault lock or another storage medium altogether.

Further Reading

Check out the SecWiki.cloud page on S3 buckets here

AWS documentation for S3 can be found here
