May 17, 2021 by Jatheon

Cloud Security: Everything You Need to Know

We recently presented an overview of the top five cloud security trends you should be following. In that article, we covered important security methodologies and technical concepts such as DevSecOps, advanced cloud networking features like WAF and managed DDoS protection, and the idea of microservices powered by Docker containers and Kubernetes orchestration.

If your company already has some cloud presence, these concepts can come in handy in the upcoming months and years, but what if you’re just preparing to embrace the public cloud?

What if you’ve just started to draw your cloud blueprints and have no clue about what cloud security really is?

Then the trending terms and buzzwords won't help you, and you'll need to start from the basics. This mini-series of articles will prepare you for that scary jump into the unknown.

What Does “Traditional IT” Security Look Like?

Before the cloud, all companies were running something that’s typically known as the “traditional IT” infrastructure. By this term, we refer to any environment used for hosting applications/services or storing data. This environment is managed and owned completely by one organization (business entity, enterprise, etc).

The environment can be a data center operated by the organization itself or by a specialized third-party company, which is usually the case with big enterprises.

However, smaller companies can also run their own modest server farm at an on-premises location (e.g. a server room inside the corporate building) or just a bunch of servers on the IT team's desks. The main difference between this model and the cloud is that a "traditional" IT environment offers no elasticity or scalability on the fly. Simply put, you cannot expand your resources with a few clicks or API calls.

Also, running a traditional IT environment requires considerable capital expenses up front – preparing the environment (creating a separate server room or leasing a data center rack), purchasing equipment (network, servers, storage) and provisioning services (licensing, labor hours to deploy the environment, etc.).

This is all vastly different in the cloud. You can scale in a single click (or in an automated way, based on your consumption), and you pay as you go for the resources you consume, when you consume them.

To run a successful IT operation in a traditional environment, IT teams are in charge of "everything". This means they also have end-to-end ownership of IT security.

It’s up to internal IT teams to:

  • ensure proper physical access to on-premises server rooms or data centers,
  • secure the entire network infrastructure both externally and internally from unauthorized access or hacker attacks, and
  • provide adequate application security for the services they run on their servers, whether physical or virtual.

This seems like a lot to take on, right? That's why the cloud is so popular these days: besides the flexibility it gives you in terms of resource scaling and costs, it can also significantly reduce the IT workload involved in securing your cloud services.


The Shared Responsibility Model – The Pillar of Cloud Security

Migration to the public cloud requires a fundamental shift in how organizations design and perceive their IT infrastructure.

You will lose direct access to your servers.

You won’t even know the exact geographic location where your data is at.

On top of that, most public cloud services are offered as an API endpoint – a combination of a URL and a port, which you access over HTTPS or another protocol.


Just as you don’t have complete access to your resources, you’re also not fully responsible for your security. Organizations share this responsibility with the cloud provider, and the concept of shared responsibility is the most important one when it comes to cloud security.

Public cloud providers are your biggest partners when it comes to security, and organizations which are moving to the cloud or have already migrated should explore the public cloud provider’s best practices for security and compliance.


How Shared Responsibility Works on AWS

As an example of shared responsibility in the public cloud, we’ll take a look at how Amazon defines this concept in its cloud:

AWS shared responsibility


As we can see from the picture, AWS manages the entire physical layer – data centers, which are divided into regions with availability zones and edge locations.

Customers pick the region and availability zone (AZ) where the data or service will reside, but they don’t have physical access to it nor is the exact location ever disclosed to the public (customers cannot request this information from Amazon). AWS also has full control of servers/virtualization hosts, as well as different managed services that provide storage, database or networking to end users.

The client’s responsibility is to provision and maintain a desired operating system (in case you use EC2), to configure internal networking and firewall rules, and to ensure client-side data encryption.

It’s also up to the customer to organize and maintain identity management i.e. to configure access for internal users to AWS resources, with desired authentication and authorization rules. All of this is achieved by leveraging the AWS IAM service.

Further customer’s security responsibilities are determined based on the AWS services a customer uses. These services can be divided into three separate groups:

Amazon Services

Infrastructure-as-a-Service (IaaS) – a typical representative of this group on AWS is EC2. In this case, AWS provides everything up to the OS layer, and it's up to the customer to provision an OS, configure it and deploy the desired application. This category also includes AWS ECS, AWS VPC, AWS EBS, AWS ELB, etc.

Platform-as-a-Service (PaaS) – for this type of service, the OS is completely configured for you, but you need to deploy your application, maintain it and keep it secure. The most popular PaaS services on AWS are AWS Elastic Beanstalk and AWS EKS (managed Kubernetes). When it comes to EKS, there's a debate about which group this service belongs to (it has similarities to IaaS, PaaS and SaaS), but since you deploy apps onto EKS, we'll consider it a PaaS service.

Software-as-a-Service (SaaS) – these services are fully managed by your public cloud provider, and the only thing the customer needs to take care of is how to import/export data to and from the service. Most AWS services are in this group, like AWS RDS (relational databases), S3 (object storage) or DynamoDB (NoSQL datastore).

Although we focused on AWS, the other big players in cloud computing, such as Google or Microsoft, follow pretty much the same principles.

Identity and Access Management: Your First Stop on the AWS Journey

During the initial setup of your AWS account, you will need to provide personal or company details as well as credit card info. The credit card is mandatory, although Amazon offers a Free Tier to get you started at no cost. You will also be asked to set the root account password.

The root account is your first account on AWS, with which you perform the initial login. By analogy with Linux systems, the root account has all the rights and can do "everything" in your environment. Hence, the UNIX best practice applies to AWS as well – using the root account for day-to-day work is highly discouraged.

AWS recommends that the root account be used only for the initial login and for setting up other users with access. Securing the root account will be your first measure in protecting your AWS account, and the entire process is managed by the Identity and Access Management (IAM) service.

AWS Security Status

IAM Setup

On IAM's home page, AWS displays an IAM Security Status section, which presents initial IAM guidelines and a checklist for securing the environment. Multi-factor authentication (MFA) is encouraged not just on the root account, but on all accounts, regardless of their level of privileges. In addition, each of your employees should have a unique, dedicated account, as credential sharing is not advised.

Also, if you have more than one user who needs a certain level of access (like the ability to read data from S3 buckets), you should organize those users into groups and enforce specific IAM password policies for them (similar to Microsoft Active Directory password policies, which are enabled by default). For programmatic/API access, you should regularly rotate the access keys issued to users.
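Key rotation is easy to automate once you know each key's creation date. The Python sketch below works on made-up key metadata shaped loosely like IAM's access-key listing, and the 90-day limit is an assumption – substitute your own policy.

```python
from datetime import datetime, timezone

ROTATION_LIMIT_DAYS = 90  # assumed rotation policy; adjust to your own


def keys_needing_rotation(access_keys, now=None):
    """Return the IDs of access keys older than the rotation limit.

    `access_keys` is a list of dicts with "AccessKeyId" and "CreateDate"
    entries (the key IDs below are fabricated for illustration).
    """
    now = now or datetime.now(timezone.utc)
    stale = []
    for key in access_keys:
        age_days = (now - key["CreateDate"]).days
        if age_days > ROTATION_LIMIT_DAYS:
            stale.append(key["AccessKeyId"])
    return stale


# Hypothetical key metadata: one old key, one recent key.
keys = [
    {"AccessKeyId": "AKIAOLD", "CreateDate": datetime(2021, 1, 1, tzinfo=timezone.utc)},
    {"AccessKeyId": "AKIANEW", "CreateDate": datetime(2021, 5, 1, tzinfo=timezone.utc)},
]

# As of May 17, 2021, only the January key exceeds 90 days.
print(keys_needing_rotation(keys, now=datetime(2021, 5, 17, tzinfo=timezone.utc)))
```

Running a check like this on a schedule (and alerting on its output) turns "rotate your keys regularly" from a guideline into an enforced rule.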

This may sound daunting, especially with all this information presented in a single place. In practice, however, implementing these IAM rules is simple.

TL;DR – You need to secure your root account, create as many users as required, add them to groups, rotate their passwords and keys, and assign least-privilege permissions to every entity (user or AWS service).

IAM Roles and Responsibilities

You may run into situations where a user or an AWS entity such as an EC2 instance or an ECS/EKS service needs several policies attached. Even then, resist the temptation to assign broader privileges just to get access working straight away. Instead, spend some time reading about IAM roles and policies and how they work before granting any write permissions to AWS entities.

This may be tiring in the beginning, but as your environment grows bigger, you will be amazed by the number of issues that can arise simply due to misconfigured IAM rights. For instance, the recent Capital One breach happened largely because of misconfigured IAM rights.

Besides creating and managing users, groups, policies and roles, IAM allows you to connect your identity provider to AWS and use the existing user base from your on-premises environment, such as Microsoft Active Directory or Linux LDAP domains. This is particularly useful in hybrid cloud scenarios, where companies already have an established, secure way of managing users and their privileges and simply want to replicate those configurations in the cloud.

Another useful IAM feature that customers often forget is the Credentials Report, which gives you a summary of all your users, their IAM settings and their credentials. Be sure to make auditing this report part of your security checklist, and carry it out every couple of months.
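To give an idea of what such an audit might look like, here's a small Python sketch that scans a credential-report-style CSV and flags console users without MFA. The excerpt is simplified (the real report has many more columns) and the user names are made up.

```python
import csv
import io

# A tiny, simplified excerpt in the shape of an IAM credential report;
# the real report has many more columns and these users are fictional.
REPORT = """\
user,mfa_active,password_enabled
root_account,true,true
alice,true,true
bob,false,true
"""


def users_without_mfa(report_csv):
    """Return console-enabled users from the report that lack MFA."""
    reader = csv.DictReader(io.StringIO(report_csv))
    return [
        row["user"]
        for row in reader
        if row["password_enabled"] == "true" and row["mfa_active"] != "true"
    ]


# Accounts listed here should be fixed before the next audit.
print(users_without_mfa(REPORT))
```

A few lines like these, run against the real downloaded report every couple of months, make the audit repeatable instead of a manual spreadsheet chore.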

Virtual Private Cloud = Your Network on AWS

Once you have defined a hierarchy for your IAM users and groups, and assigned adequate rights to them through IAM roles and policies, it’s time to start designing network blueprints. You should do this before actually deploying your workloads into AWS compute, storage or database services.

Importance of a Custom Virtual Private Cloud

Many AWS services don't require a configured Virtual Private Cloud (VPC) (e.g. S3 or the DynamoDB NoSQL database). Plus, AWS itself provides a default VPC as part of your account to help speed up the creation of your environment. However, it's still highly recommended to define a custom VPC in the region where you'll be using AWS services.

The reason it's important to create your own VPC is that the default one automatically places all of your services in public subnets with Internet access, which is definitely not a good practice.

So, how should you design your VPC, and what should you keep in mind?

Designing Your Own VPC: Tips and Tricks

  1. Choose an appropriate subnet mask and addressing scheme – VPCs can use a netmask from /16 to /28. Be sure the address range doesn't overlap with your current on-premises network, because you might need to connect the two in the future with a site-to-site VPN connection or Direct Connect.
  2. Always deploy public and private subnets in pairs. This means that for every private subnet where you deploy your workloads (e.g. EC2 instances or ECS/EKS clusters), you will need an appropriate public subnet to accompany it, since the public subnet is where you will deploy Elastic Load Balancers (ELBs) to route traffic to your instances or clusters.

    The difference between public and private subnets is that public ones have an Internet Gateway (IGW) attached and can route Internet traffic to and from the subnet, while private ones can only route local VPC traffic.

  3. Besides pairing public and private subnets, be sure to deploy the same subnets in at least two availability zones for high availability. This means that if 10.0.0.0/16 is your VPC, with 10.0.1.0/24 and 10.0.2.0/24 as the public and private subnets in AZ1 of your region, you should create two more subnets in AZ2, one public and one private, for example 10.0.3.0/24 and 10.0.4.0/24. With this setup, the ELB can route traffic to the two public subnets, 10.0.1.0/24 and 10.0.3.0/24, which then pass traffic to their private counterparts.
  4. By default, there are no firewall restrictions on local traffic between subnets inside a VPC, and subnets have no restrictions on outbound traffic to the Internet either. If this goes against your internal policies, you can use Network ACLs (NACLs) to further lock down routing in your VPC.
  5. Enable VPC flow logs!
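To illustrate tips 1-3, here's a short Python sketch using the standard ipaddress module to carve the example 10.0.0.0/16 VPC into per-AZ public/private subnet pairs and to verify it doesn't overlap a (hypothetical) on-premises range:

```python
import ipaddress

# The VPC CIDR matches the example in the text; the on-premises range
# is a hypothetical stand-in for your real corporate network.
vpc = ipaddress.ip_network("10.0.0.0/16")
on_prem = ipaddress.ip_network("192.168.0.0/16")

# Tip 1: the two ranges must not overlap if you ever want a VPN/Direct Connect.
assert not vpc.overlaps(on_prem), "VPC range collides with on-premises network"

# Carve the /16 into /24 subnets, skipping 10.0.0.0/24 to match the text.
subnets = vpc.subnets(new_prefix=24)
next(subnets)

# Tips 2 and 3: one public/private pair per availability zone.
plan = {}
for az in ("AZ1", "AZ2"):
    plan[az] = {"public": next(subnets), "private": next(subnets)}

for az, pair in plan.items():
    print(az, "public:", pair["public"], "private:", pair["private"])
```

The generated plan reproduces the addressing from tip 3 exactly (10.0.1.0/24 and 10.0.2.0/24 in AZ1, 10.0.3.0/24 and 10.0.4.0/24 in AZ2), and the overlap check is a cheap way to catch an addressing clash before any resource is created.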

Designing a VPC is not as easy and straightforward as structuring IAM users, groups and roles. Amazon has a reference page covering the most common VPC scenarios, and we highly recommend you check it out before implementing the tips and tricks from this section.

Summary

IAM is used to give employees secure, granular access to your AWS resources, while VPC is the main service for managing your private networking in Amazon's public cloud.

All of this means that there should be no compromise or negligence when it comes to using these two services, and that’s why your first concern when migrating to the cloud is the proper configuration of IAM and VPC.

Once you've overcome all the caveats of IAM and VPC, you can continue to the next pillar of cloud security: compliance and auditing.

Jatheon is a tech company specializing in large-scale, secure archiving of business communications like email, social media and chat apps for compliance and legal discovery. See how our fully AWS-based cloud archiving software can help you implement the highest standards of cloud security.

See how data archiving can simplify compliance and ediscovery for your organization

Book a short demo to see all the key features in action and get more information.

Get a Demo
