
Common mistakes with AWS IAM...

Are you struggling to manage access to your AWS resources effectively? You're not alone. We've all been there, and let's face it, most of us are still there.

...that I am sure we have all seen

Managing access to AWS can be a real challenge, especially in large, complex environments spanning multiple accounts. AWS Identity and Access Management (IAM) is a powerful tool that allows you to control access to your AWS resources. However, it's easy to adopt bad practices that can compromise the security of your AWS infrastructure and of your data. In this article, I'll explore some common IAM mistakes, sprinkled with real-life examples I have seen throughout my career working with data.

Using built-in policies as-is

AWS provides a set of built-in policies that you can use as a starting point for defining permissions. Haven't we all used an 'AWS test account' where everyone has AdministratorAccess? If you're lucky, with multi-factor authentication enabled. I've been there; fortunately, it hasn't led to disaster. Yet.

The built-in policies are great as templates, but they should really be refined to fit your use case. Take a policy that seems more benign than AdministratorAccess: AmazonS3ReadOnlyAccess, which gives you read access to all S3 objects in your account. What can go wrong? Well, there can be PII in those S3 buckets that shouldn't be readable by everyone. In the on-premises Hadoop era, I was working in an internal data team. My access to operational data was limited. However, that same data was replicated to our Hadoop cluster by a service account with elevated permissions. On the Hadoop cluster we didn't have access controls, so everyone in my team had full access to data they wouldn't be able to see in the operational systems, some of which was PII. I can see the same happening with external data synced to S3 and AmazonS3ReadOnlyAccess…

This is the policy document for AmazonS3ReadOnlyAccess:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*",
        "s3-object-lambda:Get*",
        "s3-object-lambda:List*"
      ],
      "Resource": "*"
    }
  ]
}

Doesn't look that complicated, right? It's pretty easy to customize if you know a little bit about policies and S3. A good first step would be to create copies of this policy for different teams and restrict the resources to the ones the team actually needs, instead of "*".
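
To sketch what that could look like: below is a scoped-down variant of AmazonS3ReadOnlyAccess for a hypothetical analytics team. The bucket name analytics-team-data is made up; substitute your own. Note that S3 needs the bucket ARN for listing and the object ARN for reading:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AnalyticsTeamReadOnly",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::analytics-team-data",
        "arn:aws:s3:::analytics-team-data/*"
      ]
    }
  ]
}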

The situation gets completely out of hand if you want to use Athena, for instance. Sadly, there's no AthenaFullReadAccess policy; you'll have to settle for AmazonAthenaFullAccess. I won't post that policy here, but have a look in your own AWS account: it deals with permissions for Athena, Glue, S3, SNS, CloudWatch, Lake Formation and pricing. If you're new to policies and you try to customize it, there's a high chance you'll fall back to the built-in policy out of sheer frustration because things just don't work otherwise.

Not using granular access controls

As I've explained, overly broad access to data is a consequence of using the built-in policies. To avoid this mistake, you should follow the principle of least privilege. This means that you should grant users only the access they need to perform their job functions, and nothing more. But even when you start setting up your own set of policies, it takes real dedication and effort to make sure that access is granular enough, yet not so restrictive that it limits productivity and innovation.
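
A pattern that helps strike this balance is pairing a scoped Allow with an explicit Deny as a guardrail. A minimal sketch, again with made-up bucket and prefix names: analysts can read their team bucket, except anything under the pii/ prefix:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadTeamBucket",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::analytics-team-data",
        "arn:aws:s3:::analytics-team-data/*"
      ]
    },
    {
      "Sid": "DenyPiiPrefix",
      "Effect": "Deny",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::analytics-team-data/pii/*"
    }
  ]
}

Because an explicit Deny always overrides an Allow, the sensitive prefix stays off-limits even if a broader policy is attached to the same user elsewhere.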

In practice, only mature organizations are up to this task. What I often see is that access is managed at the level of functional roles (e.g. a role for marketing, marketing analysis, engineering, etc.) or at the level of purposes (e.g. a sales report, an engineering report). Defining access at the individual user level is too labor-intensive, opening the door for other mistakes.

But then you hit another barrier. 

Creating bloated policies

Managing granular access is tedious, and you can easily lose the overview of who has access to what. This is a problem in itself. Luckily AWS is there to help you by enforcing IAM policy limitations: a managed policy document is capped at 6,144 characters, only so many managed policies can be attached to a single IAM entity, and there's a ceiling on how many customer managed policies an account can have in total. Thanks AWS, for making sure we don't overextend ourselves ;-).

Usually the policy size limitation is hit first. We've seen this at a bank with mature access controls: granular controls were set up per project, combined with the role a user has within that project (e.g. consumer, producer, admin). But unfortunately, they started hitting the policy size limits. What to do? Break up policies, so that you end up with a set like 'PROJECT_X_CONSUMER_1', 'PROJECT_X_CONSUMER_2', 'PROJECT_X_CONSUMER_3', … all to give access to one role for one project. Extrapolate this to 5 roles and hundreds of projects, and the 'maximum number of policies' limit starts looming on the horizon. Other companies are struggling with this as well, as adidas recently outlined in a blog post. Keeping a good overview becomes unmanageable with just the AWS console or CLI. One way to overcome this challenge is by using third-party tools like Raito that offer more advanced policy management capabilities.
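
One way to fight policy bloat within AWS itself is attribute-based access control (ABAC): instead of one policy per project and role, a single policy compares tags on the principal with tags on the resource. A sketch under the assumption that your roles and resources carry a 'project' tag; be aware that tag-based conditions are not supported uniformly across all AWS services and S3 actions, so treat this as a starting point rather than a drop-in solution:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ConsumersReadTheirProjectsResources",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/project": "${aws:PrincipalTag/project}"
        }
      }
    }
  ]
}

One policy like this replaces the whole 'PROJECT_X_CONSUMER_N' family; onboarding a new project becomes a matter of tagging rather than writing yet another policy.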

Not revoking unused access

If you've managed to granularly define what access controls should be in place, you can look back with satisfaction at the difficult technical and organizational problem you've solved. Or can you? Do you know who is actually using their access? If someone is only using 10% of all resources they can access, should they still keep that access? 

It's not easy to see which access is not being used and can be revoked. Unused access can accumulate over time, increasing the attack surface of your AWS environment.

To avoid this mistake, you should periodically review the permissions granted to users and revoke any access that is no longer necessary. You can use AWS Config and AWS CloudTrail to monitor changes to IAM policies, and IAM's service last accessed data to spot permissions that are never actually exercised.
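
That service last accessed data gives you a concrete handle for such reviews. As a sketch, the policy below would let a dedicated reviewer role generate and read those reports for your policies and the entities they're attached to; whether you wire this into a periodic job or a manual review is up to you:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessReviews",
      "Effect": "Allow",
      "Action": [
        "iam:GenerateServiceLastAccessedDetails",
        "iam:GetServiceLastAccessedDetails",
        "iam:ListPolicies",
        "iam:ListEntitiesForPolicy"
      ],
      "Resource": "*"
    }
  ]
}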

The most obvious violation of this principle is finding long-gone former colleagues amongst the users in your systems. This problem is definitely not limited to AWS; it happens everywhere. In the best-case scenario the user still exists in your system, but authentication is managed through SSO, so in theory that access is useless. In theory. I think everyone has seen this. More recently, I was explaining to someone what benefits Raito offers for access control. They didn't have a technical background, so I used the example of managing access in Google Drive: do you really know which files you've shared with whom? And indeed, they quickly offered an example from their own life: it turns out their former employer was skimping on Google Drive costs, so access was granted through the employees' personal email addresses. You can see where this is going, right?

The point being, revoking access can be more important than granting access.

Using API keys

Finally, no blog post about IAM mistakes is complete without the grandfather of them all: personal API keys. They are useful for testing and prototyping, or if you want to quickly set something up to run remotely on a non-AWS system. However, if this is done systematically, these keys will invariably end up in the wrong place. An example of a wrong place is a public GitHub repository.

MEME - They committed their AWS keys

Instead of using API keys, it's much better to use SSO or SAML federation with role-based access. This ensures that users are granted the appropriate permissions based on their role or function, and it reduces the risk of unauthorized access.
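
For the curious, this is roughly what the trust side of that looks like: a minimal sketch of a role trust policy for SAML federation. The account ID and provider name are placeholders for your own identity provider setup:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSamlFederation",
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:saml-provider/YourIdP"
      },
      "Action": "sts:AssumeRoleWithSAML",
      "Condition": {
        "StringEquals": {
          "SAML:aud": "https://signin.aws.amazon.com/saml"
        }
      }
    }
  ]
}

Users assuming such a role get short-lived credentials instead of long-lived keys lying around in dotfiles and repositories.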

How can Raito help?

If this all sounds too familiar, and you don't have Pickles the horse to manage your AWS access, check out our website or reach out to [email protected] to see how Raito can help. On our website, you can take a virtual product tour, book a demo, or request a free trial.

Alongside our Snowflake, BigQuery, Okta, and Azure AD connectors, we now have a first preview version of an AWS/S3 connector that lets you manage your S3 policies in a more intelligent way, while keeping track of your access controls and usage.

AWS/S3 - Raito dashboard

Photo by Amy Elting on Unsplash
