DevOps

Best way to pass AWS credentials to a Docker container.


The best way to pass AWS credentials to a Docker container depends on your use case. There are two common approaches:

1. Mount `$HOME/.aws/credentials` into the container

 

Advantages:

 

- Very simple and easy to configure, especially for local development: the container reuses the credentials that are already present on your local machine.

 

Disadvantages:

 

- Not secure for production environments. The credentials stored on your machine are exposed to the container through the mount, so a compromised container can read them.

- Does not work well in distributed environments (ECS, EKS, etc.) or in CI/CD pipelines, where there is no local credentials file to mount.

 

Example:

 

version: '3'
services:
  app:
    image: your_image
    volumes:
      - $HOME/.aws/credentials:/home/app/.aws/credentials
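
If you are not using Compose, a roughly equivalent plain `docker run` command would look like this (the image name and in-container path are placeholders; mounting read-only is an extra precaution worth taking):

docker run -v $HOME/.aws/credentials:/home/app/.aws/credentials:ro your_image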

 

 

2. Use IAM Role

 

Advantages:

 

  • Most secure option, especially for production environments.
  • Credentials aren’t hard-coded or mounted from a file; the container assumes an IAM role and receives temporary credentials from AWS to access resources securely.
  • Works well in the cloud (ECS, EKS, EC2, etc.), where the container can assume the task, service account, or instance role; see the example after the disadvantages below.

 

Disadvantages:

 

There is some initial complexity: IAM roles and policies must be set up first.
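
Example:

As a rough sketch, on Amazon ECS the role is attached through the task definition rather than through any credentials inside the container. The family, role name, and account ID below are placeholders, and a real task definition needs additional fields (CPU, memory, networking, and so on):

{
  "family": "app-task",
  "taskRoleArn": "arn:aws:iam::<your_account_id>:role/app-task-role",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "your_image",
      "essential": true
    }
  ]
}

On plain EC2, the container typically picks up the instance's IAM role automatically through the instance metadata service, as long as the AWS SDK inside the container is left to use its default credential chain.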

 

Conclusion:

For production and security-conscious environments, use an IAM role.

For local development or a quick test, mounting the credentials file works, but it is not acceptable in production.

Permission Denied (Public Key) when Trying to SSH into an Amazon EC2 Instance

This simply means authentication failed. The most common reasons are:

 

1. Wrong Key Pair: Make sure the private key you are passing to SSH matches the key pair assigned to this instance.

 

2. Wrong Username: On Ubuntu-based AMIs the username is `ubuntu`. On other AMIs it might be `ec2-user`, `admin`, `root`, `fedora`, or something else. See the documentation for your specific AMI.

 

3. Wrong Host: Make sure you’re connecting to the correct EC2 instance.

In addition, if you have accidentally edited the `/home/<username>/.ssh/authorized_keys` file on the EC2 instance, authentication might also fail.

It is often unclear from the AMI description what the correct username (point 2) is, but the AWS EC2 documentation will normally help.

 

 

Pro Tip: Use the `-v` option with your SSH command to enable verbose mode and get more detailed information on why the authentication is failing.
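
For example (the key file name and hostname are placeholders):

ssh -v -i my-key.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com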

Can't push image to Amazon ECR - fails with "no basic auth credentials"

This error means your Docker client is not authenticated with Amazon ECR. Note that `get-login` is deprecated as of July 2021; if you're using AWS CLI version 2, use `get-login-password` instead of `get-login`:

Authenticate Docker with AWS ECR for AWS CLI v1

If you are using AWS CLI version 1, use the command below to authenticate your Docker client:

 

$(aws ecr get-login --region <your_region>)

 

Authenticate Docker with AWS ECR for AWS CLI v2

If you are using AWS CLI version 2, use the command below to authenticate your Docker client:

 

aws ecr get-login-password --region <your_region> | docker login --username AWS --password-stdin <your_account_id>.dkr.ecr.<your_region>.amazonaws.com

 

Push Your Docker Image

Once authenticated, you're all set. Now you can push your Docker image:

docker push <your_account_id>.dkr.ecr.<your_region>.amazonaws.com/repository-name:tag
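
If the image was built locally under a different name, you may first need to tag it with the full registry path before pushing (the local image name and tag are placeholders):

docker tag repository-name:tag <your_account_id>.dkr.ecr.<your_region>.amazonaws.com/repository-name:tag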

Copy files from a Docker container to the host, and from the host to a Docker container.

To copy files from a Docker container to the host, use the command below:

 

docker cp <containerId>:/file/path/within/container /host/path/target

 

Example:

Using Container Name

 

$ sudo docker cp test-container:/test.jpg .

 

Using Container ID

 

$ sudo docker cp h8fsd45fsd8fs:/out_read.jpg .

 

To copy files from the host to a Docker container, use the command below:

 

docker cp test.txt <containerId>:/test.txt

 

Example:

Using Container Name

 

$ sudo docker cp test.txt test-container:/test.txt

 

Using Container ID

 

$ sudo docker cp test.txt h8fsd45fsd8fs:/test.txt 

Setting Up Permissions Properly for S3 Bucket Security

Security is the most important component of any infrastructure. A single weak spot can have severe consequences: damage to reputation, loss of funds, and even legal action. Many organizations store large volumes of critical data in AWS S3 buckets, so every bucket containing such data must be properly protected and must not be publicly accessible.

 

Use the following steps to limit public exposure and take appropriate measures to safeguard your S3 buckets:

 

Step 1: Block public access at the account level

 

This ensures that new buckets cannot be made public by default, enforcing public access prevention at the account level.

In the S3 console, on the left-side menu, click ‘Block Public Access (Account Settings)’.

 

Enable the following settings (or simply choose ‘Block all public access’, which covers all four):

  • Block public access to buckets and objects granted through new access control lists (ACLs).
  • Block public access to buckets and objects granted through any access control lists (ACLs).
  • Block public access to buckets and objects granted through new public bucket or access point policies.
  • Block public and cross-account access to buckets and objects through any public bucket or access point policies.

 

Click Save to apply these settings across all S3 buckets in your account.

This ensures that buckets and objects in the account are not publicly exposed.
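
If you prefer the CLI, the same account-level block can be applied roughly as follows (the account ID is a placeholder):

aws s3control put-public-access-block \
  --account-id 123456789012 \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true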

 

Step 2: Secure access with a bucket policy

 

The bucket policy specifies what access should be given to the bucket (and to the objects within it).

 

1. Open the S3 console and select the bucket you want to secure.

2. On the Permissions tab, scroll down to Bucket policy and click Edit.

3. Review the existing policy and make sure it does not grant access to all users (for example, an Allow statement with "Principal": "*").

 

Policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*",
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}

 

 

This policy denies access to objects in the bucket whenever the connection is not made over HTTPS.
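
If you prefer to apply the policy from the CLI instead of the console, something like the following should work, assuming the policy above is saved as policy.json (the bucket name is a placeholder):

aws s3api put-bucket-policy --bucket your-bucket-name --policy file://policy.json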

 

Step 3: Review S3 Access Control Lists (ACLs)

 

Make sure that the ACLs are set in the correct way so that the contents of the bucket are not exposed unnecessarily.

1. In the S3 console, open the Permissions tab for your bucket.

2. Scroll to the Access control list (ACL) section.

3. Check that the following is in order:

    - No permissions are granted under the Everyone (public access) group.

    - Access is granted only to the specific AWS accounts and services that actually need it.
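
As a quick check from the CLI, you can list the current ACL grants for a bucket (the bucket name is a placeholder); any grant to the AllUsers or AuthenticatedUsers group is a red flag:

aws s3api get-bucket-acl --bucket your-bucket-name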

 


 

