
Push your Docker containers from GitLab to Amazon ECR


If you are looking for a secure way to store your Docker images online, give Amazon ECR (Elastic Container Registry) a try.

The process is not complicated, but there are a few pitfalls to avoid. In this post you will find a simple but functional example of publishing Docker images from GitLab to AWS ECR.

The first step is to create an ECR repository. To do this, go to the ECR service panel in the AWS Management Console and create a repository.

Set it to private and leave the other settings at their defaults.

Copy the repository URI; you will need it later.
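If you prefer the command line, the same repository can be created with the AWS CLI. A sketch (the repository name ecr_demo and the region match the values used later in this post; adapt them to your setup):

```shell
# Create a private ECR repository and print its URI
aws ecr create-repository --repository-name ecr_demo \
  --region eu-west-1 \
  --query 'repository.repositoryUri' --output text
# The URI has the form <account_id>.dkr.ecr.<region>.amazonaws.com/ecr_demo
```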

The next step is to create an IAM user in AWS and grant it the rights to access the Amazon container registry in your account.

Go to the IAM dashboard in the AWS Management Console, then add a new user.

Give it a meaningful username and select Programmatic access.

To read, write, and modify images in ECR, attach the existing AWS managed policy AmazonEC2ContainerRegistryPowerUser to the user.

After the user account is created you will receive an Access Key ID and a Secret Access Key. Copy both to a safe place; you will need them later.
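The same IAM setup can also be scripted. A hedged sketch (the user name gitlab-ecr-push is illustrative, not from the original post):

```shell
# Create an IAM user for the GitLab pipeline (name is illustrative)
aws iam create-user --user-name gitlab-ecr-push

# Attach the AWS managed ECR power-user policy
aws iam attach-user-policy \
  --user-name gitlab-ecr-push \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser

# Generate the Access Key ID / Secret Access Key pair for GitLab
aws iam create-access-key --user-name gitlab-ecr-push
```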

The AWS setup is done. Now we can go to our GitLab project and create the variables and files that will allow us to build the container and push it to ECR.

We create two project variables so that Docker can log in to AWS without hard-coding the IAM credentials in the CI code. Create AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with the values we obtained previously. These are the standard variable names that the AWS CLI picks up automatically when the project makes an AWS access request. Just follow the steps below.

Create the AWS_ACCESS_KEY_ID

Create the AWS_SECRET_ACCESS_KEY

Both variables are now ready to be used in the project.

In the repo we have the .gitlab-ci.yml, the Dockerfile, and a folder with the demo web application.

In the Dockerfile we simply copy the content of our website folder into the nginx image's web root.

FROM nginx:alpine
COPY ./website /usr/share/nginx/html
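Before wiring this into CI, the image can be sanity-checked locally. A minimal sketch (the tag, container name, and port are arbitrary, not part of the original setup):

```shell
# Build the image locally and serve it on port 8080
docker build -t ecr_demo:local .
docker run --rm -d -p 8080:80 --name ecr_demo_test ecr_demo:local

# nginx should now serve the content of the website folder
curl -s http://localhost:8080/

# Clean up
docker rm -f ecr_demo_test
```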

In the website folder we have an index.html file with this content:

Gitlab to ECR demo

Create the .gitlab-ci.yml file in your repository with the content below. You will have to set your own:

  • DOCKER_REGISTRY: the repository URI without the trailing /ecr_demo
  • AWS_DEFAULT_REGION: the region, which you can read in the DOCKER_REGISTRY, e.g. eu-west-1
  • APP_NAME: the name of your ECR repository, e.g. ecr_demo
# Simple example of CI to build a Docker container and push it to Amazon ECR
variables:
  DOCKER_REGISTRY: 000000000000.dkr.ecr.eu-west-1.amazonaws.com
  AWS_DEFAULT_REGION: eu-west-1
  APP_NAME: ecr_demo
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""

publish:
  stage: build
  image: 
    name: docker:latest
  services:
    - docker:19-dind
  before_script:
    - apk add --no-cache curl jq python3 py3-pip
    - pip install awscli
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - aws --version
    - docker info
    - docker --version
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:$CI_PIPELINE_IID .
    - docker push $DOCKER_REGISTRY/$APP_NAME:$CI_PIPELINE_IID

Let's take a deeper look at this .gitlab-ci.yml.

  • DOCKER_HOST: allows us to use the docker:19-dind service.
  • DOCKER_DRIVER and DOCKER_TLS_CERTDIR: work around some known issues; you can find the related links at the end of the post.
  • docker:19-dind: means that we use Docker-in-Docker to log in to AWS in the before_script part.

In the before_script section we install awscli in the docker:19-dind environment and create a login session to our AWS ECR. The credentials come from the project variables.

  • apk add --no-cache curl jq python3 py3-pip and pip install awscli: install the prerequisites for awscli and then awscli itself
  • aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY: creates an authenticated session to the ECR registry
  • aws --version, docker info and docker --version: print some version information
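As a sanity check, the region expected in AWS_DEFAULT_REGION can be read straight out of the registry host name, since it is the fourth dot-separated label:

```shell
# Extract the region from an ECR registry URI (example value from this post)
DOCKER_REGISTRY=000000000000.dkr.ecr.eu-west-1.amazonaws.com
echo "$DOCKER_REGISTRY" | cut -d. -f4
# prints: eu-west-1
```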

And finally, the script section builds the Docker image, tags it with the pipeline IID, and pushes it to ECR.

  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:$CI_PIPELINE_IID .
    - docker push $DOCKER_REGISTRY/$APP_NAME:$CI_PIPELINE_IID

If everything goes well in the CI/CD job output, you will find your newly created image in the ECR repository.
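You can also confirm the push from the CLI. A sketch, assuming the variable values used above:

```shell
# List the tags present in the repository; each tag is a pipeline IID (1, 2, ...)
aws ecr describe-images --repository-name ecr_demo --region eu-west-1 \
  --query 'imageDetails[].imageTags' --output text

# Or log in and pull the image locally
aws ecr get-login-password --region eu-west-1 | \
  docker login --username AWS --password-stdin 000000000000.dkr.ecr.eu-west-1.amazonaws.com
docker pull 000000000000.dkr.ecr.eu-west-1.amazonaws.com/ecr_demo:1
```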

Enjoy!

Feel free to comment on this article if you have questions.

www.cisel.ch

References

https://www.youtube.com/watch?v=jg9sUceyGaQ

https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#kubernetes

https://gitlab.com/gitlab-org/gitlab-foss/-/issues/64959#note_194620798

https://dev.to/gustavorglima/how-to-solve-error-unsatisfiable-constraints-python-48k7

https://gitlab.com/gitlab-org/gitlab-runner/-/issues/1986

Kavitha

Hi, I get the error ERROR: Cannot connect to the Docker daemon at tcp://docker:2376. Is the docker daemon running?

Also tried the same with port 2375 with no luck. For this to work, should Gitlab runner be running in privileged = true mode?

Any suggestions? Thanks.

DINA · 4y ago

Hi Kavitha, do you have Docker in the image of the stage where you're trying to build your container? Here we use a dind service (Docker-in-Docker):

publish:
  stage: build
  image: 
    name: docker:latest
  services:
    - docker:19-dind

Hope it helps ! Best regards

RAMJOS · 4y ago

Great tutorial on GitLab and ECR.

May I know what comes next after putting the image in the image repository? Perhaps you can send me a link on how the app container can be deployed to the target environment. Thank you. More power!

DINA · 4y ago

Hi RAMJOS! Thanks for the comment! Yes, of course: after pushing the image to the repo, you can deploy it on Kubernetes. We have another article for that: https://devops.cisel.ch/deploy-argocd-and-a-first-app-on-kubernetes

Hope it helps ! Best regards

Dan Stein · 4y ago

Thanks for this!

I'm getting the error: Could not connect to the endpoint URL: "https://api.ecr.eu-east-2.amazonaws.com/" Error: Cannot perform an interactive login from a non TTY device

Do you have any idea what's going wrong? All the env variables are populated correctly; I have tested that they're coming through.

DINA · 4y ago

Hi, are you sure that your env variables are populated correctly on the right repo (each repo needs to have its own environment variables set up)? Can you try locally with the command aws ecr get-login-password? It would help you find the problem. Maybe you need to upgrade your awscli, or maybe there is a problem with the --region.

Hope it helps ! Best regards

Lucas · 5y ago

Hi, I tried with this yml and I'm getting this error:

$ aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY Unable to locate credentials. You can configure credentials by running "aws configure". Error: Cannot perform an interactive login from a non TTY device

I already set up the AWS key and secret variables in the CI/CD settings on GitLab.

Does someone know how to resolve it?

I'm using an on-premise server with a GitLab runner in Docker, with the Docker executor.

Thanks.

DINA · 5y ago

Hi Lucas, thanks for your comment. Did you try to run aws configure and a login from a terminal to be sure that your access keys are OK?

Can you check that your variables are configured in the right GitLab project? If so, can you try renaming your variable in GitLab to another name? For example, rename AWS_ACCESS_KEY_ID to AMAZON_ACCESS_KEY_ID and try again to see if there is a difference in your error message.

Hope it helps ! Best regards

ub1k · 4y ago

For me it was that AWS_ACCESS_KEY_ID was created as "protected". If you're not running the pipeline from a protected branch (which I suppose you aren't), these variables are not exposed. Just disable the "protected" checkbox.

DINA · 4y ago

ub1k, thanks for sharing!

H

Nice work! I suggest creating your own image that contains aws-cli, get rid of variables on top and put it into GitLab as secrets to make the pipeline more simple and readable.

DINA · 5y ago

Thank you for your comments and suggestions. You are absolutely right! We thought that showing the various variables and the use of awscli in the code might help to better understand the process.

