@nerdalertdk
Forked from GabLeRoux/.env.example
Created September 13, 2022 09:15

Revisions

  1. @GabLeRoux revised this gist Oct 5, 2021. 2 changed files with 5 additions and 0 deletions.

    3 changes: 3 additions & 0 deletions .env.example
    @@ -0,0 +1,3 @@
    + AWS_S3_BUCKET=
    + AWS_S3_ACCESS_KEY_ID=
    + AWS_S3_SECRET_ACCESS_KEY=

    2 changes: 2 additions & 0 deletions docker-compose.yml
    @@ -1,3 +1,5 @@
    + version: '3.8'
    +
      services:
        s3fs:
          privileged: true
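
    For reference, a filled-in .env matching the template added in this revision might look like the sketch below. Every value here is a placeholder, not a real credential:

        # Example .env; the compose file loads it via env_file: .env
        AWS_S3_BUCKET=my-example-bucket
        AWS_S3_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
        AWS_S3_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
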
  2. @GabLeRoux revised this gist Oct 5, 2021. 1 changed file with 6 additions and 15 deletions.

    21 changes: 6 additions & 15 deletions docker-compose.yml
    @@ -1,25 +1,16 @@
    - # Tip: You can just define all environment variables used here in a
    - # .env file in the same directory so as not to expose secrets
    - # docker-compose will load it automatically
      services:
        s3fs:
          privileged: true
    -     image: efrecon/s3fs:1.86
    -     restart: always
    -     environment:
    -       - AWS_S3_BUCKET=${AWS_S3_BUCKET}
    -       - AWS_S3_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
    -       - AWS_S3_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
    -       # A workaround for bucket names containing '.' until the related s3fs-fuse issue is resolved
    -       # Keep in mind this is a security risk (default is https)
    -       # - AWS_S3_URL=http://s3.amazonaws.com
    +     image: efrecon/s3fs:1.90
    +     restart: unless-stopped
    +     env_file: .env
          volumes:
    -       # This also mounts the S3 bucket to `/mnt/s3data` on the host machine
    +       # This also mounts the S3 bucket to `/mnt/s3data` on the host machine
            - /mnt/s3data:/opt/s3fs/bucket:shared

        test:
    -     build: .
    -     restart: always
    +     image: bash:latest
    +     restart: unless-stopped
          depends_on:
            - s3fs
          # Just so this container won't die and you can test the bucket from within
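
    Piecing this diff together with the version line added in the newer revision above, the full docker-compose.yml as of the latest revision should read roughly as follows (a reconstruction from the diffs, not copied verbatim from the gist):

        version: '3.8'

        services:
          s3fs:
            # privileged lets the container access /dev/fuse to mount the bucket
            privileged: true
            image: efrecon/s3fs:1.90
            restart: unless-stopped
            # Credentials come from the .env file shown in the first revision
            env_file: .env
            volumes:
              # This also mounts the S3 bucket to `/mnt/s3data` on the host machine
              - /mnt/s3data:/opt/s3fs/bucket:shared

          test:
            image: bash:latest
            restart: unless-stopped
            depends_on:
              - s3fs
            # Just so this container won't die and you can test the bucket from within
            command: sleep infinity
            volumes:
              - /mnt/s3data:/data:shared
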
  3. @GabLeRoux renamed this gist Oct 5, 2021. 1 changed file with 0 additions and 0 deletions.
    File renamed without changes.
  4. @HeshamMeneisi revised this gist Jun 19, 2020. 1 changed file with 4 additions and 2 deletions.

    6 changes: 4 additions & 2 deletions docker-compose
    @@ -14,13 +14,15 @@ services:
            # Keep in mind this is a security risk (default is https)
            # - AWS_S3_URL=http://s3.amazonaws.com
          volumes:
    +       # This also mounts the S3 bucket to `/mnt/s3data` on the host machine
            - /mnt/s3data:/opt/s3fs/bucket:shared

    -   scraper:
    +   test:
          build: .
          restart: always
          depends_on:
            - s3fs
    +     # Just so this container won't die and you can test the bucket from within
          command: sleep infinity
          volumes:
    -       - /mnt/s3data:/project/data:shared
    +       - /mnt/s3data:/data:shared
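
    With the service renamed to test, a quick smoke test of the mount could look like the following sketch (assuming the docker-compose v1 CLI that was current when this gist was written, and the post-rename /data mount path):

        # Start the s3fs sidecar and the test container
        docker-compose up -d

        # List the bucket from inside the test container; /data is the shared mount
        docker-compose exec test ls -la /data

        # Write a file and confirm it shows up in the S3 bucket
        docker-compose exec test sh -c 'echo hello > /data/hello.txt'
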
  5. @HeshamMeneisi created this gist Jun 19, 2020.

    26 changes: 26 additions & 0 deletions docker-compose
    @@ -0,0 +1,26 @@
    + # Tip: You can just define all environment variables used here in a
    + # .env file in the same directory so as not to expose secrets
    + # docker-compose will load it automatically
    + services:
    +   s3fs:
    +     privileged: true
    +     image: efrecon/s3fs:1.86
    +     restart: always
    +     environment:
    +       - AWS_S3_BUCKET=${AWS_S3_BUCKET}
    +       - AWS_S3_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
    +       - AWS_S3_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
    +       # A workaround for bucket names containing '.' until the related s3fs-fuse issue is resolved
    +       # Keep in mind this is a security risk (default is https)
    +       # - AWS_S3_URL=http://s3.amazonaws.com
    +     volumes:
    +       - /mnt/s3data:/opt/s3fs/bucket:shared
    +
    +   scraper:
    +     build: .
    +     restart: always
    +     depends_on:
    +       - s3fs
    +     command: sleep infinity
    +     volumes:
    +       - /mnt/s3data:/project/data:shared
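
    Because the bucket volume uses the :shared propagation flag, the FUSE mount created inside the s3fs container also surfaces on the host, exactly as the comment in the later revisions says. A rough way to verify from the host, assuming the stack is up:

        # Docker creates /mnt/s3data on the host if it does not already exist
        # After docker-compose up -d, the bucket contents should appear here too
        ls /mnt/s3data
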