The Worst Pieces of AWS Advice That You Should Ignore

Disasters occur daily. You must have heard about examples of spectacular outages, failures or data breaches in AWS or other clouds. Undoubtedly, all of them resulted in substantial financial losses. Most likely, every single one of them could have been avoided.

The road to a cloud disaster is paved with bad advice (or bad habits). Sometimes you may fall victim to someone’s quick – and well-meant! – tip on how to save time by applying a “no-brainer” solution, or to “golden advice” on speeding up a process or performing an action more easily and quickly. Usually, though, cloud disasters arise from a failure to follow best practices.

In this article, we’d like to warn you against some of the most common pieces of bad advice and poor habits in the AWS world. We will also show you what can happen if they lead you astray.

Buckle up!

“Make that data public. I will just download the files, and then you can secure it back.”

There it goes: you can make your S3 bucket or other files public if you want to share them with someone. No unauthorized party will find them and, anyway, they won’t stay public forever.

Sure! No one will find that bucket. Until someone does. What can happen then? 4 million customer records, including phone numbers and PINs, left unprotected, with ANYONE able to access them. This is what happened to Verizon when a contractor exposed their data to the world.

It’s extremely risky to grant public access, even for a while, to any records that should be kept private. There are plenty of crawlers and scanners actively searching for such data. Try not to repeat Verizon’s mistake.

Keep in mind that S3 buckets are private by default, and they will not become public unless you explicitly make them so. Sometimes it’s better not to change anything than to change too much, and that rule is worth applying here.
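If you really do need to hand a file to someone, a time-limited presigned URL is a safer route than opening the bucket. Here is a minimal boto3 sketch of that idea; the bucket and object names are placeholders, not real resources.

```python
import boto3

s3 = boto3.client("s3")

# Keep the bucket locked down: block every form of public access.
# (Bucket name is a placeholder.)
s3.put_public_access_block(
    Bucket="my-private-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Share a single file for one hour instead of making anything public.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-private-bucket", "Key": "reports/export.csv"},
    ExpiresIn=3600,
)
print(url)
```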

“It’s enough to use a single region only if you keep your data in two or more availability zones.”

You designed a highly available and fully redundant cloud infrastructure in a single region. You leverage two or three availability zones with the auto-scaling feature to be sure that in the case of an availability zone failure, your applications won’t be affected.

You’re playing by the book, in accordance with the AWS Service Level Agreement requirements, so you think you’re safe. But have you thought about what would happen to your application if a whole region failed? Would it still survive?

In fact, a region-wide failure has already happened: S3 in US-East-1 was unavailable for 5 hours on February 28th, 2017. Numerous well-known services, including Imgur, Slack, Twitch.tv, Flipboard and Salesforce, were affected because they trusted that multiple Availability Zones in a single AWS region were enough. Learn a lesson from their failure.
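One way to hedge against a regional S3 outage is cross-region replication of critical data to a bucket in another region. The boto3 sketch below illustrates the idea only; it assumes versioning is already enabled on both buckets, and the bucket names and IAM role ARN are made-up placeholders you would replace with your own.

```python
import boto3

s3 = boto3.client("s3")

# Replicate new objects from a source bucket (e.g. in us-east-1)
# to a destination bucket in another region (e.g. eu-west-1).
# Both buckets must already have versioning enabled.
s3.put_bucket_replication(
    Bucket="my-app-data-us-east-1",  # placeholder source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",  # placeholder role
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Prefix": "",  # empty prefix = all objects
                "Destination": {"Bucket": "arn:aws:s3:::my-app-data-eu-west-1"},
            }
        ],
    },
)
```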

“I’m not sure what privileges you need to develop that application. Let me grant you full access rights.”

In information security, there is a rule known as the Principle of Least Privilege. It states that any user, process, system, application, etc., may access only the information and resources that are indispensable for performing its work.

Sometimes you may face a situation where you’re not sure what privileges your co-worker, contractor or developer needs, so you grant them full access to your AWS account. You trust that nothing bad will happen (fingers crossed) because it’s just a development account with no production environment.

What if your contractor spins up costly services, gets access to private data or, even accidentally, exposes it externally?

Better safe than sorry, so always restrict access according to the Principle of Least Privilege.
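Instead of attaching broad administrator rights, scope the account to exactly the resources the person needs. Below is a hedged boto3 sketch; the user name, policy name and bucket are invented for illustration.

```python
import json

import boto3

iam = boto3.client("iam")

# A narrow inline policy: read/write access to a single project bucket, nothing else.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::project-dev-bucket",
                "arn:aws:s3:::project-dev-bucket/*",
            ],
        }
    ],
}

iam.put_user_policy(
    UserName="contractor-dev",          # placeholder user
    PolicyName="project-dev-s3-only",   # placeholder policy name
    PolicyDocument=json.dumps(least_privilege_policy),
)
```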

“Your instance, function, etc., can use a role with more access rights than it needs. (Just for testing purposes, of course.)”

This advice is related to the previous one. Sure, it’s easier to develop an application or environment where you don’t have to define a specific role for each purpose, and you can grant full access at once. Nevertheless, it’s also dangerous. And not only during the development process.

Will you remove the full access rights when the testing finishes? Chances are either you will forget about it, or it will become too difficult by then.

So the best – and only – piece of advice here is, once again, to restrict privileges as much as possible at every stage.
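The same applies to execution roles: give the function or instance a role limited to what it actually touches. The snippet below is a sketch only; the role name, table and account ID are hypothetical.

```python
import json

import boto3

iam = boto3.client("iam")

# Inline policy for a Lambda execution role: read/write one DynamoDB table,
# rather than dynamodb:* on every table in the account.
iam.put_role_policy(
    RoleName="orders-lambda-role",      # placeholder role
    PolicyName="orders-table-access",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/orders",
        }],
    }),
)
```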

“For testing and development, you can use a ‘permit any’ security group for this EC2 instance.”

Another similar piece of advice that’s supposed to ease development. Following it may have widespread implications.

It is much easier to develop an application when you don’t have to worry about whether your EC2 instance can reach the database, whether you’ve opened the right ports, or whether you’ll be able to administer the instance from your IP range. But when you use a “permit any” rule, remember that you’re leaving your instance open to the whole world.

However, keep in mind that if your systems have any vulnerabilities, unrestricted access makes them far easier to compromise. And again, chances are you’ll forget to remove that “permit any” rule once development is finished.
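Rather than permitting everything, open only the ports you need and only to the addresses that need them. A minimal boto3 sketch follows; the security group ID and the office CIDR range are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow SSH only from the office network and HTTPS from anywhere,
# instead of a blanket "all traffic from 0.0.0.0/0" rule.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "office VPN"}],
        },
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "public HTTPS"}],
        },
    ],
)
```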

“The easiest way to migrate your VMs to AWS is to create the same EC2 instances as your existing ones.”

This is the kind of bad advice that can cost you a lot of money. Why? Because the cloud usually isn’t cheaper when you do a classic lift and shift. On-premises environments are often over-provisioned to absorb utilization peaks, while typical utilization is much lower.

In the cloud, you don’t have to reserve large instances just in case you need extra CPU power during a future peak. When you need more power, you simply scale out your environment and add more instances. If you provision oversized instances in advance, you pay for them even when they’re not needed, and keeping more powerful machines than required adds unnecessary cost.

A better approach is to estimate your real demand, right-size to smaller instances that cover your typical load, and create an Auto Scaling group with a policy that adds more instances when needed.
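As a rough illustration of that approach, here is a boto3 sketch that creates a small Auto Scaling group with a target-tracking policy that adds instances when average CPU crosses 60%. The launch template name, subnet IDs and threshold are assumptions for the example, not recommendations.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Start small and let the group grow with demand.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-small-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",  # placeholder subnets in two AZs
)

# Scale out automatically when average CPU exceeds 60%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)
```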

“Amazon takes care of upgrades and updates, so you don’t have to.”

When you were deciding to move to the cloud, you probably heard that you didn’t have to worry about upgrades or updates to your systems as the cloud vendor would take care of it. It is true, but not in every case.

Amazon does not patch, update or upgrade AMIs that you build yourself or assign to your Auto Scaling launch configurations. You have to maintain them on your own.
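A simple habit that helps is to periodically review how old your own AMIs are. The sketch below only lists self-owned AMIs with their creation dates so stale images stand out; how you patch or rebuild them from there is up to you.

```python
import boto3

ec2 = boto3.client("ec2")

# List the AMIs you own, oldest first.
images = ec2.describe_images(Owners=["self"])["Images"]
for image in sorted(images, key=lambda i: i["CreationDate"]):
    print(image["CreationDate"], image["ImageId"], image.get("Name", "<no name>"))
```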

“You don’t need to control your files in the S3 bucket. They are secure.”

Yes, they are secure, provided that you actually secure them and don’t grant public access to everything.

But there are cases when your data has to be publicly accessible. Then it’s good to know which files you’re sharing, whether they are the only information you intend to share, and whether there is any security risk involved.

Keep your S3 buckets in check. You can use the Amazon Macie service to dynamically discover, classify and protect sensitive data in AWS. It can continuously monitor data access activity for anomalies and alert you in the case of unauthorized access or data leaks.
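For a quick self-audit without any extra service, you can at least check which of your buckets are flagged as public by their bucket policy. A hedged boto3 sketch:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# Flag buckets whose policy makes them public.
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        status = s3.get_bucket_policy_status(Bucket=name)["PolicyStatus"]
        if status.get("IsPublic"):
            print(f"PUBLIC via policy: {name}")
    except ClientError:
        # Bucket has no policy at all (or we lack permission to read it).
        pass
```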

“Take a lot of snapshots of everything. The storage is unlimited, so you don’t have to worry about it.”

Snapshots are important, and there’s no denying it. If you don’t have enough snapshots, you may face problems in case of a crash or other data loss event. However, if you take too many snapshots, it will be difficult to maintain backups. You need to find the right balance.

Remember that even though EBS snapshots are incremental, you still pay for the storage they consume, and snapshot storage charges can add up surprisingly quickly.
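To keep snapshot sprawl in check, it helps to list the snapshots that have aged past your retention window before deciding what to prune. A sketch follows; the 90-day window is an arbitrary example, not a recommendation.

```python
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)  # example retention window

# List your own snapshots older than the cutoff; review before deleting anything.
snapshots = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]
for snap in snapshots:
    if snap["StartTime"] < cutoff:
        print(snap["SnapshotId"], snap["StartTime"].date(), snap.get("Description", ""))
```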

“You don’t pay for unused resources. You can create whatever resources you need, and pay only for what’s being used.”

The big advantage of using cloud services is that you pay as you go: you’re charged only for what you use, nothing more. But the devil is in the details, so be careful about what “unused” actually means.

For example, when you stop an EC2 instance, you no longer pay for the instance itself, but its EBS volume is billed for the provisioned storage, and you keep paying for it even while it sits idle. If you keep obsolete volumes around, you may end up with an unexpectedly high bill!

Remember to clean up the resources you don’t need, as they can cost you money. Keep track of your resources: even when you think you aren’t wasting money, there may be forgotten resources still running in another region.
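A periodic sweep for unattached EBS volumes across all regions is an easy win. A minimal sketch of that check:

```python
import boto3

# Look for EBS volumes in the "available" state (not attached to anything)
# in every region; these are still billed for their provisioned storage.
regions = [r["RegionName"] for r in boto3.client("ec2").describe_regions()["Regions"]]
for region in regions:
    ec2 = boto3.client("ec2", region_name=region)
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "status", "Values": ["available"]}]
    )["Volumes"]
    for vol in volumes:
        print(region, vol["VolumeId"], f'{vol["Size"]} GiB')
```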
