Dockerfile for Sample Container
# location containers/sample_container/dockerfile
# Base Image
FROM python:3.7-slim
# Image metadata
LABEL description="Environment for test_task"
LABEL maintainer="Ensemble Energy"
# Install necessary build requirements
RUN apt-get update && apt-get install -y build-essential
# Install Python dependencies for the model environment (this includes awscli)
COPY containers/sample_container/requirements.txt ./requirements.txt
RUN pip install --no-cache-dir -r requirements.txt \
    && rm ./requirements.txt
# Make pip's user-level install location available on PATH for every process;
# appending to ~/.bashrc only affects interactive shells, not the container CMD
ENV PATH="/root/.local/bin:${PATH}"
# Set AWS credentials (passed in as build args by .travis/build.sh)
ARG AWS_KEY
ARG AWS_SECRET_KEY
# build.sh also passes env_type as a build arg; declare it here so it is consumed
ARG env_type
ENV AWS_ACCESS_KEY_ID=${AWS_KEY}
ENV AWS_SECRET_ACCESS_KEY=${AWS_SECRET_KEY}
# Copy the folder containing the code and model
COPY containers/sample_container/code /usr/code
# Set the working directory
WORKDIR /usr/code
# Run the model scoring code
CMD ["python3", "sample.py"]
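# Not part of the gist: a minimal sketch of building and running this image locally,
# assuming AWS credentials are already exported in the shell. The bucket and key
# values are hypothetical placeholders for the environment variables that sample.py
# reads at runtime.
docker build \
    --build-arg AWS_KEY=$AWS_ACCESS_KEY_ID \
    --build-arg AWS_SECRET_KEY=$AWS_SECRET_ACCESS_KEY \
    -t sample_container_sandbox -f containers/sample_container/dockerfile .
docker run \
    -e bucket=my-example-bucket \
    -e key=model_data_input/sample.csv \
    sample_container_sandbox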
# location .travis/build.sh
#!/bin/bash
if [[ $TRAVIS_BRANCH == 'master' ]]
then
    echo "Branch is master"
elif [[ $TRAVIS_BRANCH == 'dev' ]]
then
    echo "Branch is dev"
else
    echo "Branch is sandbox"
    # Build and tag the sandbox image
    sudo docker build \
        --build-arg AWS_KEY=$AWS_ACCESS_KEY_ID \
        --build-arg AWS_SECRET_KEY=$AWS_SECRET_ACCESS_KEY \
        --build-arg env_type=sandbox \
        -t sample_container_sandbox -f containers/sample_container/dockerfile .
fi
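# Not part of the gist: the .travis.yml that drives these scripts is not included here.
# A conventional setup would define AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY,
# AWS_ACCOUNT_ID and EB_REGION as encrypted Travis environment variables and then run
# the two helpers in order from the build phases, roughly:
bash .travis/build.sh        # build the image for the current branch
bash .travis/docker_push.sh  # tag the image and push it to ECR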
# location .travis/docker_push.sh
#!/bin/bash -e
REGISTRY_URL=${AWS_ACCOUNT_ID}.dkr.ecr.${EB_REGION}.amazonaws.com
TIMESTAMP=$(date '+%Y%m%d%H%M%S')
# Use the build timestamp plus the commit hash as the version tag; an immutable,
# specific version is useful when later commands (e.g. deploying a new
# Elastic Beanstalk application version) need to reference this exact image
VERSION="${TIMESTAMP}-${TRAVIS_COMMIT}"
# making sure correct region is set
aws configure set default.region ${EB_REGION}
# Login to ECR
$(aws ecr get-login --no-include-email)
# Tag and Push an image
#######################
if [[ $TRAVIS_BRANCH == 'master' ]]
then
    SOURCE_IMAGE="sample_container_prod"
elif [[ $TRAVIS_BRANCH == 'dev' ]]
then
    SOURCE_IMAGE="sample_container_dev"
else
    SOURCE_IMAGE="sample_container_sandbox"
fi
TARGET_IMAGE_LATEST="${REGISTRY_URL}/${SOURCE_IMAGE}:latest"
TARGET_IMAGE_VERSIONED="${REGISTRY_URL}/${SOURCE_IMAGE}:${VERSION}"
# update and push latest version
docker tag ${SOURCE_IMAGE} ${TARGET_IMAGE_LATEST}
docker push ${TARGET_IMAGE_LATEST}
# update and push new version with timestamp
docker tag ${SOURCE_IMAGE} ${TARGET_IMAGE_VERSIONED}
docker push ${TARGET_IMAGE_VERSIONED}
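# Not part of the gist: after a successful push the tags can be verified, and the image
# pulled from any host that has run the same `aws ecr get-login` step, assuming
# AWS_ACCOUNT_ID and EB_REGION are set as in the script above (repository name shown
# is the one used for the master branch).
aws ecr list-images --repository-name sample_container_prod --region ${EB_REGION}
docker pull ${AWS_ACCOUNT_ID}.dkr.ecr.${EB_REGION}.amazonaws.com/sample_container_prod:latest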
# location containers/sample_container/requirements.txt
awscli==1.16.65
boto3==1.9.193
# location containers/sample_container/code/sample.py
import os

import boto3
from boto3.s3.transfer import S3Transfer


def main():
    # The bucket name and object key are supplied as environment variables
    # (set via `docker run -e bucket=... -e key=...` or exported in the shell)
    bucket = os.environ['bucket']
    key = os.environ['key']
    if bucket is not None and bucket != "None":
        filename = key.split("/")[-1]
        # Download the object from S3 into the working directory
        s3 = boto3.resource('s3')
        s3.Bucket(bucket).download_file(key, filename)
        # Re-upload it under the model_data_output/ prefix
        client = boto3.client('s3')
        transfer = S3Transfer(client)
        transfer.upload_file(filename, bucket,
                             key.replace(key.split("/")[0], 'model_data_output'))
        # Remove the local copy
        os.remove(filename)
    return True


if __name__ == '__main__':
    main()
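# Not part of the gist: sample.py can also be exercised outside the container by
# exporting the same environment variables the image's CMD relies on, assuming AWS
# credentials are available locally. With the hypothetical values below, the script
# downloads sample.csv from s3://my-example-bucket/model_data_input/sample.csv and
# re-uploads it to s3://my-example-bucket/model_data_output/sample.csv (the first
# path segment of the key is swapped for 'model_data_output').
export bucket=my-example-bucket
export key=model_data_input/sample.csv
python3 containers/sample_container/code/sample.py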