# Source: https://gist.github.com/da48772c54dcf09a45bef5d4d778a6e9
###################################
# Using Docker Multi-Stage Builds #
# https://youtu.be/zpkqNPwEzac #
###################################
# Referenced videos:
# - Kaniko - Building Container Images In Kubernetes Without Docker: https://youtu.be/EgwVQN6GNJg
# - Skaffold - How to Build and Deploy In Kubernetes: https://youtu.be/qS_4Qf8owc0
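A minimal multi-stage Dockerfile sketch of the technique the video covers: build in a full toolchain image, then copy only the artifact into a small runtime image. Image tags, paths, and the Go build command are illustrative assumptions, not taken from the video.

```dockerfile
# Stage 1: build the binary with the full Go toolchain
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: copy only the compiled binary into a minimal runtime image
FROM alpine:3.19
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image contains only the second stage, so the Go toolchain and source tree never ship to production.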
Shubhamhirau18 / UpOrDown.sh
Created September 1, 2021 19:02 — forked from davetromp/UpOrDown.sh
Bash script that checks if a site is up or down and emails notifications
#!/bin/bash
################################################################################
# This script will check whether a website is up or down by pinging the URL.
# If there is no response, an email will be sent via an external SMTP mail server.
# If the site status is down, another email will be sent when the site is up again.
# Set your check interval here :-) #############################################
interval=3600 # check once per hour
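The script's key behavior is notifying only on status *transitions* (up to down, down to up), not on every failed check. A minimal sketch of that logic, with `notify` as a stub standing in for the gist's SMTP mailer (names are illustrative):

```shell
#!/bin/bash
# Stub for the external SMTP notification in the full script
notify() { echo "NOTIFY: $1"; }

last_status="up"

# Emit a notification only when the status changes from the last check
handle_check() {
  local status="$1"
  if [ "$status" != "$last_status" ]; then
    notify "site is $status"
    last_status="$status"
  fi
}
```

Called in a loop with `sleep $interval`, this sends one email when the site goes down and one when it recovers, instead of one per check.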
Shubhamhirau18 / s3-upload.py
Created May 2, 2021 05:52 — forked from dwdraju/s3-upload.py
Python S3 File Upload
import boto3

# Create an S3 client (the key values are placeholders; prefer environment
# variables or an IAM role over hard-coded credentials)
s3 = boto3.client(
    's3',
    aws_access_key_id='KEY',
    aws_secret_access_key='SECRET'
)

filename = '/path/to/file'
bucket_name = 'my-bucket'

# Upload the file; the third argument is the object key in the bucket
s3.upload_file(filename, bucket_name, 'file')
Shubhamhirau18 / post-receive.sh
Created April 11, 2021 17:26 — forked from benfrain/post-receive.sh
post-receive hook for multiple branches
#!/bin/bash
# Deploy to the live directory whenever the master branch is pushed
while read oldrev newrev ref
do
    branch=$(echo "$ref" | cut -d/ -f3)
    if [ "master" = "$branch" ]; then
        git --work-tree=./path/under/root/dir/live-site/ checkout -f "$branch"
        echo 'Changes pushed live.'
    fi
done
Shubhamhirau18 / nginx-tuning.md
Created January 12, 2021 13:04 — forked from denji/nginx-tuning.md
NGINX tuning for best performance

Moved to git repository: https://github.com/denji/nginx-tuning

NGINX Tuning For Best Performance

For this configuration you can use any web server you like; I decided to use NGINX because it is the one I work with most.

Generally, a properly configured NGINX can handle up to 400K to 500K requests per second (clustered); the most I have seen is 50K to 80K requests per second (non-clustered) at around 30% CPU load. That was on 2 x Intel Xeon CPUs with Hyper-Threading enabled, but it can also work without problems on slower machines.

You must understand that this configuration was used in a testing environment, not in production, so you will need to find the best way to apply these features on your own servers.
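As a starting point, a few of the directives such a tuning pass usually touches look like this. The values are illustrative examples, not recommendations; they must be sized against your hardware and ulimits:

```nginx
worker_processes auto;          # one worker per CPU core
worker_rlimit_nofile 100000;    # raise the open-file limit for workers

events {
    worker_connections 4000;    # max simultaneous connections per worker
    use epoll;                  # efficient connection-processing method on Linux
    multi_accept on;            # accept as many new connections as possible at once
}
```

Effective capacity is roughly `worker_processes * worker_connections`, bounded by the file-descriptor limits of the worker processes and the OS.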