Pardeep Singh (pardeep-tm)
kyle-eshares / models.py
Created October 15, 2016 18:01
Strict ForeignKeys
from __future__ import unicode_literals

from django.db import models
from django.db.models.fields.related_descriptors import ForwardManyToOneDescriptor  # noqa


class RelationNotLoaded(Exception):
    pass
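The preview cuts off here. Judging from the imports, the gist subclasses ForwardManyToOneDescriptor so that accessing a foreign key that was not loaded via select_related() raises instead of silently issuing a query. A minimal sketch of that idea (assuming Django >= 2.0's field-cache API; the original's details will differ):

class StrictForwardManyToOne(ForwardManyToOneDescriptor):
    def __get__(self, instance, cls=None):
        if instance is None:
            return self
        # Refuse to lazy-load: the related object must already be cached,
        # e.g. by select_related() or an explicit assignment.
        if not self.field.is_cached(instance):
            raise RelationNotLoaded(
                'Relation %r not loaded. Use select_related().' % self.field.name
            )
        return super(StrictForwardManyToOne, self).__get__(instance, cls)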
karpathy / pg-pong.py
Created May 30, 2016 22:50
Training a Neural Network ATARI Pong agent with Policy Gradients from raw pixels
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """
import numpy as np
import cPickle as pickle
import gym
# hyperparameters
H = 200 # number of hidden layer neurons
batch_size = 10 # every how many episodes to do a param update?
learning_rate = 1e-4
gamma = 0.99 # discount factor for reward
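The preview stops at the hyperparameters. For context, gamma feeds the standard discounted-return computation at the heart of any policy-gradient agent; a sketch consistent with the snippet above (the reset on nonzero reward is Pong-specific, since every point ends a game):

def discount_rewards(r):
    # Turn a 1D float array of per-step rewards into discounted returns.
    discounted_r = np.zeros_like(r)
    running_add = 0
    for t in reversed(range(r.size)):
        if r[t] != 0:
            running_add = 0  # Pong-specific: a nonzero reward marks a game boundary
        running_add = running_add * gamma + r[t]
        discounted_r[t] = running_add
    return discounted_r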
nepsilon / how-to-app-siege-load-test.md
Last active February 19, 2017 12:45
How to test your webapp for heavy traffic? — First published in fullweb.io issue #24

How to test your web app for heavy traffic?

Load testing is a good way to understand how your website or web app behaves under high traffic. Here is how to use siege, a simple CLI tool.

Note: use siege only on sites you own, as its traffic could be interpreted as a DDoS attack.

Using 100 concurrent requests and up to 3 seconds between requests:
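The command itself is cut off in this preview; with siege's standard flags it would look like this (example.com is a placeholder for your own site):

siege -c 100 -d 3 http://example.com/

Here -c sets the number of concurrent users and -d the maximum random delay, in seconds, each simulated user waits between requests.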

ericelliott / essential-javascript-links.md
Last active June 14, 2025 18:43
Essential JavaScript Links
kelvinn / cmd.sh
Created July 24, 2014 02:55
Example of using Apache Bench (ab) to POST JSON to an API
# post_loc.txt contains the json you want to post
# -p means to POST it
# -H adds an Auth header (could be Basic or Token)
# -T sets the Content-Type
# -c is concurrent clients
# -n is the number of requests to run in the test
ab -p post_loc.txt -T application/json -H 'Authorization: Token abcd1234' -c 10 -n 2000 http://example.com/api/v1/locations/
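For completeness, post_loc.txt just holds the raw JSON body to send; a hypothetical payload (field names invented for illustration):

cat > post_loc.txt <<'EOF'
{"latitude": 52.52, "longitude": 13.405}
EOF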
reclosedev / celery_sentinel.py
Last active April 24, 2019 00:01
Temporary hack. Redis Sentinel support for Celery.
"""
This module adds Redis Sentinel transport support to Celery.
Current version of celery doesn't support Redis sentinel client, which is must have for automatic failover.
To use it::
import register_celery_alias
register_celery_alias("redis-sentinel")
celery = Celery(..., broker="redis-sentinel://...", backend="redis-sentinel://...")
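Worth noting: Celery 4.0 later gained native Sentinel support, so on modern versions this hack is unnecessary. A minimal sketch of the built-in transport (host names and master name are placeholders):

from celery import Celery

app = Celery('tasks', broker='sentinel://localhost:26379;sentinel://localhost:26380')
app.conf.broker_transport_options = {'master_name': 'mymaster'}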
denji / nginx-tuning.md
Last active October 30, 2025 20:38
NGINX tuning for best performance

Moved to git repository: https://github.com/denji/nginx-tuning

NGINX Tuning For Best Performance

For this configuration you can use whatever web server you like; I decided to use NGINX because I mostly work with it.

Generally, a properly configured NGINX can handle up to 400K to 500K requests per second clustered; the most I have seen is 50K to 80K requests per second non-clustered, at around 30% CPU load. That was on 2x Intel Xeon with HyperThreading enabled, but it can work without problems on slower machines too.

Keep in mind that this config was used in a testing environment, not in production, so you will need to adapt most of these settings to what works best on your own servers.
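As a flavor of what the repository covers, this kind of tuning usually starts with the worker and connection limits; illustrative values only, not the repository's recommendations:

worker_processes auto;          # one worker per CPU core
worker_rlimit_nofile 100000;    # raise the open-file limit for busy servers

events {
    worker_connections 4096;    # simultaneous connections per worker
    multi_accept on;            # accept as many new connections as possible at once
}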

tonylukasavage / script.sh
Created May 29, 2013 18:18
Move all uncommitted changes to a new branch and revert the existing branch to HEAD. Scenario: "master" has uncommitted changes that you'd rather make in "dev_branch". Here's how to move them to "dev_branch" and then revert "master" to its last commit.
# get into the master branch
git checkout master
# create a new branch for the changes and check it out
git checkout -b dev_branch
# stash the changes until we revert master
git stash
# go back to master; its working tree is now clean at the last commit
git checkout master
# return to the new branch and re-apply the stashed changes
git checkout dev_branch
git stash pop
rca / tasks.py
Created March 27, 2012 08:14
break a big celery job into smaller, batched, chunks
"""
Celery tasks that batch a job with many tasks into smaller work sets.
The problem I'm attempting to solve is one where a job comprised of many
tasks (say 100) will starve out a job comprised of only a few tasks (say 5). It
appears as though, by default, Celery will queue the second job's 5 tasks
behind the first job's 100, and the second job will have to wait for the
first job's completion before it even begins.
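The module preview ends here. For comparison, Celery's built-in chunks primitive addresses the same starvation problem by splitting one large job into smaller batched tasks, letting other jobs' tasks interleave between batches. A minimal sketch (not this gist's code; the broker URL is a placeholder):

from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')

@app.task
def process(item):
    return item * 2

# 100 calls become 10 queued tasks of 10 calls each,
# so a small job's tasks can slot in between the batches.
process.chunks(((i,) for i in range(100)), 10).apply_async()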