

@migbash
Forked from HacKanCuBa/gunicorn.py
Created December 17, 2022 08:36

Revisions

  1. @HacKanCuBa revised this gist Sep 4, 2020. 1 changed file with 8 additions and 6 deletions.
    14 changes: 8 additions & 6 deletions gunicorn.py
    @@ -9,6 +9,7 @@
    See revisions to access other versions of this file.
    2020-09-04 preload app by default and remove derogatory terms
    2020-01-13 updated for v20.0
    2019-10-01 clarified that these settings were for v19.9
    2019-09-30 several fixes and missing settings addition
    @@ -153,7 +154,7 @@
    # speed up server boot times. Although, if you defer application loading to
    # each worker process, you can reload your application code easily by
    # restarting workers.
    preload_app = False
    preload_app = True

    # sendfile - Enables or disables the use of sendfile()
    sendfile = False
    @@ -178,7 +179,7 @@

    # worker_tmp_dir - A directory to use for the worker heartbeat temporary file
    # If not set, the default temporary directory will be used.
    worker_tmp_dir = mkdtemp()
    worker_tmp_dir = mkdtemp(prefix='gunicorn_')

    # user - Switch worker processes to run as this user
    # A valid user id (as an integer) or the name of a user that can be retrieved
    @@ -336,7 +337,7 @@
    # {header}o -> response header
    # {variable}e -> environment variable
    # ---------------------------------------------------------------
    #
    #
    # Use lowercase for header and environment variable names, and put {...}x names
    # inside %(...)s. For example:
    #
    @@ -432,9 +433,10 @@
    # Server Hooks
    # ===============================================


    def on_starting(server):
    """
    Execute code just before the master process is initialized.
    Execute code just before the main process is initialized.
    The callable needs to accept a single instance variable for the Arbiter.
    """
    @@ -505,7 +507,7 @@ def worker_abort(worker):

    def pre_exec(server):
    """
    Execute code just before a new master process is forked.
    Execute code just before a new main process is forked.
    The callable needs to accept a single instance variable for the Arbiter.
    """
    @@ -532,7 +534,7 @@ def post_request(worker, req, environ, resp):

    def child_exit(server, worker):
    """
    Execute code just after a worker has been exited, in the master process.
    Execute code just after a worker has been exited, in the main process.
    The callable needs to accept two instance variables for the Arbiter and the
    just-exited Worker.
  2. @HacKanCuBa revised this gist Jan 14, 2020. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion gunicorn.py
    @@ -12,7 +12,7 @@
    2020-01-13 updated for v20.0
    2019-10-01 clarified that these settings were for v19.9
    2019-09-30 several fixes and missing settings addition
    2019-09-26 forked, minor changes (mostly aesthetical, some of significance)
    2019-09-26 forked, minor changes (mostly aesthetic, some of significance)
    """

  3. @HacKanCuBa revised this gist Jan 14, 2020. 1 changed file with 62 additions and 38 deletions.
    100 changes: 62 additions & 38 deletions gunicorn.py
    @@ -3,9 +3,21 @@
    by HacKan (https://hackan.net)
    Find it at: https://gist.github.com/HacKanCuBa/275bfca09d614ee9370727f5f40dab9e
    Based on: https://gist.github.com/KodeKracker/6bc6a3a35dcfbc36e2b7
    Changelog
    =========
    See revisions to access other versions of this file.
    2020-01-13 updated for v20.0
    2019-10-01 clarified that these settings were for v19.9
    2019-09-30 several fixes and missing settings addition
    2019-09-26 forked, minor changes (mostly aesthetical, some of significance)
    """
    # Gunicorn (v19.9) Configuration File
    # Reference - https://docs.gunicorn.org/en/19.9.0/settings.html

    # Gunicorn (v20.0) Configuration File
    # Reference - https://docs.gunicorn.org/en/20.0.4/settings.html
    #
    # To run gunicorn by using this config, run gunicorn by passing
    # config file path, ex:
    @@ -56,7 +68,7 @@
    # A positive integer generally in the 2-4 x $(NUM_CORES) range
    threads = 1

    # worker_connections - The maximum number of simultaneous clients
    # worker_connections - The maximum number of simultaneous clients.
    # This setting only affects the Eventlet and Gevent worker types.
    worker_connections = 1000

    @@ -247,6 +259,13 @@
    # --paste-global BAR=2
    raw_paste_global_conf = []

    # strip_header_spaces - Strip spaces present between the header name and
    # the `:`. This is known to induce vulnerabilities and is not compliant with
    # the HTTP/1.1 standard. See
    # https://portswigger.net/research/http-desync-attacks-request-smuggling-reborn
    # Use with care and only if necessary.
    strip_header_spaces = False

    # ===============================================
    # SSL
    # ===============================================
    @@ -258,7 +277,14 @@
    certfile = None

    # ssl_version - SSL Version to use (see stdlib ssl module’s)
    ssl_version = 3
    # TLS Negotiate highest possible version between client/server. Can yield SSL.
    # (Python 3.6+)
    # TLSv1 TLS 1.0
    # TLSv1_1 TLS 1.1 (Python 3.4+)
    # TLSv1_2 TLS 1.2 (Python 3.4+)
    # TLS_SERVER Auto-negotiate the highest protocol version like TLS, but only
    # support server-side SSLSocket connections. (Python 3.6+)
    ssl_version = 'TLSv1_2'

    # cert_reqs - Whether client certificate is required (see stdlib ssl module’s)
    cert_reqs = 0
    @@ -273,8 +299,8 @@
    # (see stdlib ssl module’s)
    do_handshake_on_connect = False

    # ciphers - Ciphers to use (see stdlib ssl module’s)
    ciphers = 'TLSv1'
    # ciphers - SSL Cipher suite to use, in the format of an OpenSSL cipher list.
    ciphers = None

    # ===============================================
    # Logging
    @@ -288,25 +314,34 @@
    #
    # Identifier | Description
    # ------------------------------------------------------------
    # h -> remote address
    # l -> ‘-‘
    # u -> currently ‘-‘, may be user name in future releases
    # t -> date of the request
    # r -> status line (e.g. GET / HTTP/1.1)
    # s -> status
    # b -> response length or ‘-‘
    # f -> referer
    # a -> user agent
    # T -> request time in seconds
    # D -> request time in microseconds
    # L -> request time in decimal seconds
    # p -> process ID
    # {Header}i -> request header
    # {Header}o -> response header
    # h -> remote address
    # l -> ‘-‘
    # u -> user name
    # t -> date of the request
    # r -> status line (e.g. GET / HTTP/1.1)
    # m -> request method
    # U -> URL path without query string
    # q -> query string
    # H -> protocol
    # s -> status
    # B -> response length
    # b -> response length or ‘-‘ (CLF format)
    # f -> referer
    # a -> user agent
    # T -> request time in seconds
    # D -> request time in microseconds
    # L -> request time in decimal seconds
    # p -> process ID
    # {header}i -> request header
    # {header}o -> response header
    # {variable}e -> environment variable
    # ---------------------------------------------------------------
    access_log_format = (
    '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"'
    )
    #
    # Use lowercase for header and environment variable names, and put {...}x names
    # inside %(...)s. For example:
    #
    # %({x-forwarded-for}i)s
    access_log_format = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"'
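    #
    # As a sketch (an illustrative assumption, not a line from this gist), the same
    # default format extended with the X-Forwarded-For request header, following the
    # lowercase-{...}i rule described above:
    #
    #   access_log_format = (
    #       '%({x-forwarded-for}i)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"'
    #   )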

    # disable_redirect_access_to_syslog - Disable redirect access logs to syslog.
    disable_redirect_access_to_syslog = False
    @@ -379,6 +414,9 @@
    # added, if not provided)
    statsd_prefix = ''

    # dogstatsd_tags - A comma-delimited list of datadog statsd (dogstatsd) tags to
    # append to statsd metrics.
    dogstatsd_tags = ''

    # ===============================================
    # Process Naming
    @@ -400,7 +438,6 @@ def on_starting(server):
    The callable needs to accept a single instance variable for the Arbiter.
    """
    pass


    def on_reload(server):
    @@ -409,7 +446,6 @@ def on_reload(server):
    The callable needs to accept a single instance variable for the Arbiter.
    """
    pass


    def when_ready(server):
    @@ -418,7 +454,6 @@ def when_ready(server):
    The callable needs to accept a single instance variable for the Arbiter.
    """
    pass


    def pre_fork(server, worker):
    @@ -428,7 +463,6 @@ def pre_fork(server, worker):
    The callable needs to accept two instance variables for the Arbiter and
    new Worker.
    """
    pass


    def post_fork(server, worker):
    @@ -438,7 +472,6 @@ def post_fork(server, worker):
    The callable needs to accept two instance variables for the Arbiter and
    new Worker.
    """
    pass


    def post_worker_init(worker):
    @@ -448,7 +481,6 @@ def post_worker_init(worker):
    The callable needs to accept one instance variable for the initialized
    Worker.
    """
    pass


    def worker_int(worker):
    @@ -458,7 +490,6 @@ def worker_int(worker):
    The callable needs to accept one instance variable for the initialized
    Worker.
    """
    pass


    def worker_abort(worker):
    @@ -470,7 +501,6 @@ def worker_abort(worker):
    The callable needs to accept one instance variable for the initialized
    Worker.
    """
    pass


    def pre_exec(server):
    @@ -479,7 +509,6 @@ def pre_exec(server):
    The callable needs to accept a single instance variable for the Arbiter.
    """
    pass


    def pre_request(worker, req):
    @@ -499,7 +528,6 @@ def post_request(worker, req, environ, resp):
    The callable needs to accept two instance variables for the Worker and
    the Request.
    """
    pass


    def child_exit(server, worker):
    @@ -509,7 +537,6 @@ def child_exit(server, worker):
    The callable needs to accept two instance variables for the Arbiter and the
    just-exited Worker.
    """
    pass


    def worker_exit(server, worker):
    @@ -519,7 +546,6 @@ def worker_exit(server, worker):
    The callable needs to accept two instance variables for the Arbiter and
    the just-exited Worker.
    """
    pass


    def nworkers_changed(server, new_value, old_value):
    @@ -532,7 +558,6 @@ def nworkers_changed(server, new_value, old_value):
    If the number of workers is set for the first time, old_value would be
    None.
    """
    pass


    def on_exit(server):
    @@ -541,4 +566,3 @@ def on_exit(server):
    The callable needs to accept a single instance variable for the Arbiter.
    """
    pass
  4. @HacKanCuBa revised this gist Oct 1, 2019. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions gunicorn.py
    @@ -4,8 +4,8 @@
    Find it at: https://gist.github.com/HacKanCuBa/275bfca09d614ee9370727f5f40dab9e
    Based on: https://gist.github.com/KodeKracker/6bc6a3a35dcfbc36e2b7
    """
    # Gunicorn(v19.3) Configuration File
    # Reference - http://docs.gunicorn.org/en/19.3/settings.html
    # Gunicorn (v19.9) Configuration File
    # Reference - https://docs.gunicorn.org/en/19.9.0/settings.html
    #
    # To run gunicorn by using this config, run gunicorn by passing
    # config file path, ex:
  5. @HacKanCuBa revised this gist Sep 30, 2019. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions gunicorn.py
    @@ -166,7 +166,7 @@

    # worker_tmp_dir - A directory to use for the worker heartbeat temporary file
    # If not set, the default temporary directory will be used.
    worker_tmp_dir = mkdtemp(prefix='yog_')
    worker_tmp_dir = mkdtemp()

    # user - Switch worker processes to run as this user
    # A valid user id (as an integer) or the name of a user that can be retrieved
    @@ -387,7 +387,7 @@
    # proc_name - A base to use with setproctitle for process naming.
    # This affects things like `ps` and `top`.
    # It defaults to ‘gunicorn’.
    proc_name = 'yog_sothoth'
    proc_name = 'gunicorn'


    # ===============================================
  6. @HacKanCuBa revised this gist Sep 30, 2019. 1 changed file with 85 additions and 21 deletions.
    106 changes: 85 additions & 21 deletions gunicorn.py
    @@ -21,7 +21,7 @@
    # ===============================================

    # bind - The server socket to bind
    bind = '0.0.0.0:8000'
    bind = '127.0.0.1:8000'

    # backlog - The maximum number of pending connections
    # Generally in range 64-2048
    @@ -41,7 +41,9 @@
    # 2. eventlet - Requires eventlet >= 0.9.7
    # 3. gevent - Requires gevent >= 0.13
    # 4. tornado - Requires tornado >= 0.2
    # 5. uvicorn - uvicorn.workers.UvicornWorker'
    # 5. gthread - Python 2 requires the futures package to be installed (or
    # install it via pip install gunicorn[gthread])
    # 6. uvicorn - uvicorn.workers.UvicornWorker
    #
    # You’ll want to read http://docs.gunicorn.org/en/latest/design.html
    # for information on when you might want to choose one of the other
    @@ -63,17 +65,17 @@
    # Any value greater than zero will limit the number of requests a worker
    # will process before automatically restarting. This is a simple method
    # to help limit the damage of memory leaks.
    max_requests = 1000
    max_requests = 10000

    # max_requests_jitter - The maximum jitter to add to the max-requests setting
    # The jitter causes the restart per worker to be randomized by
    # randint(0, max_requests_jitter). This is intended to stagger worker
    # restarts to avoid all workers restarting at the same time.
    max_requests_jitter = 100
    max_requests_jitter = 1000

    # timeout - Workers silent for more than this many seconds are killed
    # and restarted
    timeout = 600
    timeout = 30

    # graceful_timeout - Timeout for graceful workers restart
    # The maximum time a worker can take to finish a request after receiving a restart signal.
    @@ -113,6 +115,17 @@
    # reload - Restart workers when code changes
    reload = False

    # reload_engine - The implementation that should be used to power reload.
    # Valid engines are:
    # ‘auto’ (default)
    # ‘poll’
    # ‘inotify’ (requires inotify)
    reload_engine = 'auto'

    # reload_extra_files - Extends reload option to also watch and reload on
    # additional files (e.g., templates, configurations, specifications, etc.).
    reload_extra_files = []

    # spew - Install a trace function that spews every line executed by the server
    spew = False

    @@ -128,11 +141,14 @@
    # speed up server boot times. Although, if you defer application loading to
    # each worker process, you can reload your application code easily by
    # restarting workers.
    preload = True
    preload_app = False

    # sendfile - Enables or disables the use of sendfile()
    sendfile = False

    # reuse_port - Set the SO_REUSEPORT flag on the listening socket.
    reuse_port = False

    # chdir - Chdir to specified directory before apps loading
    chdir = ''

    @@ -150,7 +166,7 @@

    # worker_tmp_dir - A directory to use for the worker heartbeat temporary file
    # If not set, the default temporary directory will be used.
    worker_tmp_dir = mkdtemp()
    worker_tmp_dir = mkdtemp(prefix='yog_')

    # user - Switch worker processes to run as this user
    # A valid user id (as an integer) or the name of a user that can be retrieved
    @@ -171,6 +187,11 @@
    # “0022” are valid for decimal, hex, and octal representations)
    umask = 0

    # initgroups - If true, set the worker process’s group access list with all of
    # the groups of which the specified username is a member, plus the specified
    # group id.
    initgroups = False

    # tmp_upload_dir - Directory to store temporary request data as they are read
    # This path should be writable by the process permissions set for Gunicorn
    # workers. If not specified, Gunicorn will choose a system generated temporary
    @@ -194,6 +215,17 @@
    # the environment)
    forwarded_allow_ips = '127.0.0.1'

    # pythonpath - A comma-separated list of directories to add to the Python path.
    # e.g. '/home/djangoprojects/myproject,/home/python/mylibrary'.
    pythonpath = None

    # paste - Load a PasteDeploy config file. The argument may contain a # symbol
    # followed by the name of an app section from the config file,
    # e.g. production.ini#admin.
    # At this time, using alternate server blocks is not supported. Use the command
    # line arguments to control server configuration instead.
    paste = None

    # proxy_protocol - Enable detect PROXY protocol (PROXY mode).
    # Allow using Http and Proxy together. It may be useful for work with stunnel
    # as https frontend and gunicorn as http server.
    @@ -205,7 +237,15 @@
    # Set to “*” to disable checking of Front-end IPs (useful for setups where you
    # don’t know in advance the IP address of Front-end, but you still trust the
    # environment)
    proxy_allow_from = '127.0.0.1'
    proxy_allow_ips = '127.0.0.1'

    # raw_paste_global_conf - Set a PasteDeploy global config variable in key=value
    # form.
    # The option can be specified multiple times.
    # The variables are passed to the PasteDeploy entrypoint. Example:
    # $ gunicorn -b 127.0.0.1:8000 --paste development.ini --paste-global FOO=1
    # --paste-global BAR=2
    raw_paste_global_conf = []

    # ===============================================
    # SSL
    @@ -241,8 +281,8 @@
    # ===============================================

    # accesslog - The Access log file to write to.
    # “-” means log to stderr.
    access_logfile = '-'
    # “-” means log to stdout.
    accesslog = '-'

    # access_log_format - The access log format
    #
    @@ -264,13 +304,16 @@
    # {Header}i -> request header
    # {Header}o -> response header
    # ---------------------------------------------------------------
    access_logformat = (
    access_log_format = (
    '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"'
    )

    # disable_redirect_access_to_syslog - Disable redirect access logs to syslog.
    disable_redirect_access_to_syslog = False

    # errorlog - The Error log file to write to.
    # “-” means log to stderr.
    error_logfile = '-'
    errorlog = '-'

    # loglevel - The granularity of Error log outputs.
    # Valid level names are:
    @@ -279,7 +322,10 @@
    # 3. warning
    # 4. error
    # 5. critical
    log_level = 'info'
    loglevel = 'info'

    # capture_output - Redirect stdout/stderr to specified file in errorlog.
    capture_output = False

    # logger_class - The logger you want to use to log events in gunicorn.
    # The default class (gunicorn.glogging.Logger) handle most of normal usages
    @@ -288,7 +334,15 @@

    # logconfig - The log config file to use. Gunicorn uses the standard Python
    # logging module’s Configuration file format.
    log_config = None
    logconfig = None

    # logconfig_dict - The log config dictionary to use, using the standard
    # Python logging module’s dictionary configuration format. This option
    # takes precedence over the logconfig option, which uses the older file
    # configuration format.
    # Format:
    # https://docs.python.org/3/library/logging.config.html#logging.config.dictConfig
    logconfig_dict = {}
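    #
    # A minimal dictConfig-style sketch (an illustration, not part of this gist):
    # a single console handler at INFO level, in the dictionary format linked above.
    #
    #   logconfig_dict = {
    #       'version': 1,
    #       'disable_existing_loggers': False,
    #       'formatters': {
    #           'simple': {'format': '%(asctime)s %(levelname)s %(message)s'},
    #       },
    #       'handlers': {
    #           'console': {'class': 'logging.StreamHandler', 'formatter': 'simple'},
    #       },
    #       'root': {'level': 'INFO', 'handlers': ['console']},
    #   }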

    # syslog_addr - Address to send syslog messages.
    #
    @@ -298,19 +352,19 @@
    # ‘stream’ is the default.
    # ‘udp://HOST:PORT’ : for UDP sockets
    # ‘tcp://HOST:PORT‘ : for TCP sockets
    log_syslog_to = 'udp://localhost:514'
    syslog_addr = 'udp://localhost:514'

    # syslog - Send Gunicorn logs to syslog
    log_syslog = False
    syslog = False

    # syslog_prefix - Makes gunicorn use the parameter as program-name in the
    # syslog entries.
    # All entries will be prefixed by gunicorn.<prefix>. By default the program
    # name is the name of the process.
    log_syslog_prefix = None
    syslog_prefix = None

    # syslog_facility - Syslog facility name
    log_syslog_facility = 'user'
    syslog_facility = 'user'

    # enable_stdio_inheritance - Enable stdio inheritance
    # Enable inheritance for stdio file descriptors in daemon mode.
    @@ -323,7 +377,7 @@

    # statsd_prefix - Prefix to use when emitting statsd metrics (a trailing . is
    # added, if not provided)
    # statsd-prefix = "."
    statsd_prefix = ''


    # ===============================================
    @@ -333,7 +387,7 @@
    # proc_name - A base to use with setproctitle for process naming.
    # This affects things like `ps` and `top`.
    # It defaults to ‘gunicorn’.
    name = 'gunicorn'
    proc_name = 'yog_sothoth'


    # ===============================================
    @@ -397,7 +451,7 @@ def post_worker_init(worker):
    pass


    def worker_init(worker):
    def worker_int(worker):
    """
    Execute code just after a worker exited on SIGINT or SIGQUIT.
    @@ -448,6 +502,16 @@ def post_request(worker, req, environ, resp):
    pass


    def child_exit(server, worker):
    """
    Execute code just after a worker has been exited, in the master process.
    The callable needs to accept two instance variables for the Arbiter and the
    just-exited Worker.
    """
    pass


    def worker_exit(server, worker):
    """
    Execute code just after a worker has been exited.
  7. @HacKanCuBa revised this gist Sep 26, 2019. 1 changed file with 62 additions and 46 deletions.
    108 changes: 62 additions & 46 deletions gunicorn.py
    @@ -1,5 +1,9 @@
    # -*- coding: utf-8 -*-
    """Gunicorn config file.
    by HacKan (https://hackan.net)
    Find it at: https://gist.github.com/HacKanCuBa/275bfca09d614ee9370727f5f40dab9e
    Based on: https://gist.github.com/KodeKracker/6bc6a3a35dcfbc36e2b7
    """
    # Gunicorn(v19.3) Configuration File
    # Reference - http://docs.gunicorn.org/en/19.3/settings.html
    #
    @@ -10,19 +14,19 @@
    #

    import multiprocessing
    from tempfile import mkdtemp

    # ===============================================
    # Server Socket
    # ===============================================

    # bind - The server socket to bind
    bind = "127.0.0.1:8000"
    bind = '0.0.0.0:8000'

    # backlog - The maximum number of pending connections
    # Generally in range 64-2048
    backlog = 2048


    # ===============================================
    # Worker Processes
    # ===============================================
    @@ -37,10 +41,12 @@
    # 2. eventlet - Requires eventlet >= 0.9.7
    # 3. gevent - Requires gevent >= 0.13
    # 4. tornado - Requires tornado >= 0.2
    # 5. uvicorn - uvicorn.workers.UvicornWorker'
    #
    # You’ll want to read http://docs.gunicorn.org/en/latest/design.html
    # for information on when you might want to choose one of the other
    # worker classes
    # worker classes.
    # See also: https://www.uvicorn.org/deployment/
    worker_class = 'sync'

    # threads - The number of worker threads for handling requests. This will
    @@ -57,17 +63,17 @@
    # Any value greater than zero will limit the number of requests a worker
    # will process before automatically restarting. This is a simple method
    # to help limit the damage of memory leaks.
    max_requests = 0
    max_requests = 1000

    # max_requests_jitter - The maximum jitter to add to the max-requests setting
    # The jitter causes the restart per worker to be randomized by
    # randint(0, max_requests_jitter). This is intended to stagger worker
    # restarts to avoid all workers restarting at the same time.
    max_requests_jitter = 0
    max_requests_jitter = 100

    # timeout - Workers silent for more than this many seconds are killed
    # and restarted
    timeout = 30
    timeout = 600

    # graceful_timeout - Timeout for graceful workers restart
    # The maximum time a worker can take to finish a request after receiving a restart signal.
    @@ -79,15 +85,14 @@
    # Generally set in the 1-5 seconds range.
    keep_alive = 2


    # ===============================================
    # Security
    # ===============================================

    # limit_request_line - The maximum size of HTTP request line in bytes
    # Value is a number from 0 (unlimited) to 8190.
    # This parameter can be used to prevent any DDOS attack.
    limit_request_line = 4094
    limit_request_line = 1024

    # limit_request_fields - Limit the number of HTTP headers fields in a request
    # This parameter is used to limit the number of headers in a request to
    @@ -99,8 +104,7 @@
    # limit_request_field_size - Limit the allowed size of an HTTP request
    # header field.
    # Value is a number from 0 (unlimited) to 8190.
    limit_request_field_size = 8190

    limit_request_field_size = 1024

    # ===============================================
    # Debugging
    @@ -115,7 +119,6 @@
    # check_config - Check the configuration
    check_config = False


    # ===============================================
    # Server Mechanics
    # ===============================================
    @@ -125,13 +128,13 @@
    # speed up server boot times. Although, if you defer application loading to
    # each worker process, you can reload your application code easily by
    # restarting workers.
    preload = False
    preload = True

    # sendfile - Enables or disables the use of sendfile()
    sendfile = True
    sendfile = False

    # chdir - Chdir to specified directory before apps loading
    chdir = ""
    chdir = ''

    # daemon - Daemonize the Gunicorn process.
    # Detaches the server from the controlling terminal and enters the background.
    @@ -147,7 +150,7 @@

    # worker_tmp_dir - A directory to use for the worker heartbeat temporary file
    # If not set, the default temporary directory will be used.
    worker_tmp_dir = None
    worker_tmp_dir = mkdtemp()

    # user - Switch worker processes to run as this user
    # A valid user id (as an integer) or the name of a user that can be retrieved
    @@ -181,15 +184,15 @@
    secure_scheme_headers = {
    'X-FORWARDED-PROTOCOL': 'ssl',
    'X-FORWARDED-PROTO': 'https',
    'X-FORWARDED-SSL': 'on'
    'X-FORWARDED-SSL': 'on',
    }

    # forwarded_allow_ips - Front-end’s IPs from which allowed to handle set
    # secure headers (comma separate)
    # Set to “*” to disable checking of Front-end IPs (useful for setups where
    # you don’t know in advance the IP address of Front-end, but you still trust
    # the environment)
    forwarded_allow_ips = "127.0.0.1"
    forwarded_allow_ips = '127.0.0.1'

    # proxy_protocol - Enable detect PROXY protocol (PROXY mode).
    # Allow using Http and Proxy together. It may be useful for work with stunnel
    @@ -202,8 +205,7 @@
    # Set to “*” to disable checking of Front-end IPs (useful for setups where you
    # don’t know in advance the IP address of Front-end, but you still trust the
    # environment)
    proxy_allow_from = "127.0.0.1"

    proxy_allow_from = '127.0.0.1'

    # ===============================================
    # SSL
    @@ -232,16 +234,15 @@
    do_handshake_on_connect = False

    # ciphers - Ciphers to use (see stdlib ssl module’s)
    ciphers = "TLSv1"

    ciphers = 'TLSv1'

    # ===============================================
    # Logging
    # ===============================================

    # accesslog - The Access log file to write to.
    # “-” means log to stderr.
    access_logfile = None
    access_logfile = '-'

    # access_log_format - The access log format
    #
    @@ -263,12 +264,13 @@
    # {Header}i -> request header
    # {Header}o -> response header
    # ---------------------------------------------------------------
    access_logformat = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" \
    "%(a)s"'
    access_logformat = (
    '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"'
    )

    # errorlog - The Error log file to write to.
    # “-” means log to stderr.
    error_logfile = "-"
    error_logfile = '-'

    # loglevel - The granularity of Error log outputs.
    # Valid level names are:
    @@ -277,12 +279,12 @@
    # 3. warning
    # 4. error
    # 5. critical
    log_level = "info"
    log_level = 'info'

    # logger_class - The logger you want to use to log events in gunicorn.
    # The default class (gunicorn.glogging.Logger) handle most of normal usages
    # in logging. It provides error and access logging.
    logger_class = "gunicorn.glogging.Logger"
    logger_class = 'gunicorn.glogging.Logger'

    # logconfig - The log config file to use. Gunicorn uses the standard Python
    # logging module’s Configuration file format.
    @@ -296,7 +298,7 @@
    # ‘stream’ is the default.
    # ‘udp://HOST:PORT’ : for UDP sockets
    # ‘tcp://HOST:PORT‘ : for TCP sockets
    log_syslog_to = "udp://localhost:514"
    log_syslog_to = 'udp://localhost:514'

    # syslog - Send Gunicorn logs to syslog
    log_syslog = False
    @@ -308,7 +310,7 @@
    log_syslog_prefix = None

    # syslog_facility - Syslog facility name
    log_syslog_facility = "user"
    log_syslog_facility = 'user'

    # enable_stdio_inheritance - Enable stdio inheritance
    # Enable inheritance for stdio file descriptors in daemon mode.
    @@ -331,7 +333,7 @@
    # proc_name - A base to use with setproctitle for process naming.
    # This affects things like `ps` and `top`.
    # It defaults to ‘gunicorn’.
    name = None
    name = 'gunicorn'


    # ===============================================
    @@ -340,112 +342,125 @@

    def on_starting(server):
    """
    Called just before the master process is initialized.
    Execute code just before the master process is initialized.
    The callable needs to accept a single instance variable for the Arbiter.
    """
    pass


    def on_reload(server):
    """
    Called to recycle workers during a reload via SIGHUP.
    Execute code to recycle workers during a reload via SIGHUP.
    The callable needs to accept a single instance variable for the Arbiter.
    """
    pass


    def when_ready(server):
    """
    Called just after the server is started.
    Execute code just after the server is started.
    The callable needs to accept a single instance variable for the Arbiter.
    """
    pass


    def pre_fork(server, worker):
    """
    Called just before a worker is forked.
    Execute code just before a worker is forked.
    The callable needs to accept two instance variables for the Arbiter and
    new Worker.
    """
    pass


    def post_fork(server, worker):
    """
    Called just after a worker has been forked.
    Execute code just after a worker has been forked.
    The callable needs to accept two instance variables for the Arbiter and
    new Worker.
    """
    pass


    def post_worker_init(worker):
    """
    Called just after a worker has initialized the application.
    Execute code just after a worker has initialized the application.
    The callable needs to accept one instance variable for the initialized
    Worker.
    """
    pass


    def worker_init(worker):
    """
    Called just after a worker exited on SIGINT or SIGQUIT.
    Execute code just after a worker exited on SIGINT or SIGQUIT.
    The callable needs to accept one instance variable for the initialized
    Worker.
    """
    pass


    def worker_abort(worker):
    """
    Called when a worker received the SIGABRT signal.
    Execute code when a worker received the SIGABRT signal.
    This call generally happens on timeout.
    The callable needs to accept one instance variable for the initialized
    Worker.
    """
    pass


    def pre_exec(server):
    """
    Called just before a new master process is forked.
    Execute code just before a new master process is forked.
    The callable needs to accept a single instance variable for the Arbiter.
    """
    pass


    def pre_request(worker, req):
    """
    Called just before a worker processes the request.
    Execute code just before a worker processes the request.
    The callable needs to accept two instance variables for the Worker and
    the Request.
    """
    worker.log.debug("%s %s" % (req.method, req.path))
    worker.log.debug('%s %s', req.method, req.path)


    def post_request(worker, req, environ, resp):
    """
    Called after a worker processes the request.
    Execute code after a worker processes the request.
    The callable needs to accept two instance variables for the Worker and
    the Request.
    """
    pass


    def worker_exit(server, worker):
    """
    Called just after a worker has been exited.
    Execute code just after a worker has been exited.
    The callable needs to accept two instance variables for the Arbiter and
    the just-exited Worker.
    """
    pass


    def nworkers_changed(server, new_value, old_value):
    """
    Called just after num_workers has been changed.
    Execute code just after num_workers has been changed.
    The callable needs to accept an instance variable of the Arbiter and two
    integers of number of workers after and before change.
    @@ -455,9 +470,10 @@ def nworkers_changed(server, new_value, old_value):
    """
    pass


    def on_exit(server):
    """
    Called just before exiting gunicorn.
    Execute code just before exiting gunicorn.
    The callable needs to accept a single instance variable for the Arbiter.
    """
  8. @kodekracker created this gist Sep 10, 2015.
    464 changes: 464 additions & 0 deletions gunicorn.py
    @@ -0,0 +1,464 @@
    # -*- coding: utf-8 -*-

    # Gunicorn(v19.3) Configuration File
    # Reference - http://docs.gunicorn.org/en/19.3/settings.html
    #
    # To run gunicorn by using this config, run gunicorn by passing
    # config file path, ex:
    #
    # $ gunicorn --config=gunicorn.py MODULE_NAME:VARIABLE_NAME
    #
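    # For a concrete (hypothetical) example, a WSGI callable named `application`
    # in a module `myapp.wsgi` would be served with:
    #
    # $ gunicorn --config=gunicorn.py myapp.wsgi:application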

    import multiprocessing

    # ===============================================
    # Server Socket
    # ===============================================

    # bind - The server socket to bind
    bind = "127.0.0.1:8000"

    # backlog - The maximum number of pending connections
    # Generally in range 64-2048
    backlog = 2048


    # ===============================================
    # Worker Processes
    # ===============================================

    # workers - The number of worker processes for handling requests.
    # A positive integer generally in the 2-4 x $(NUM_CORES) range
    workers = multiprocessing.cpu_count() * 2 + 1

    # worker_class - The type of workers to use
    # A string referring to one of the following bundled classes:
    # 1. sync
    # 2. eventlet - Requires eventlet >= 0.9.7
    # 3. gevent - Requires gevent >= 0.13
    # 4. tornado - Requires tornado >= 0.2
    #
    # You’ll want to read http://docs.gunicorn.org/en/latest/design.html
    # for information on when you might want to choose one of the other
    # worker classes
    worker_class = 'sync'

    # threads - The number of worker threads for handling requests. This will
    # run each worker with the specified number of threads.
    # A positive integer generally in the 2-4 x $(NUM_CORES) range
    threads = 1

    # worker_connections - The maximum number of simultaneous clients
    # This setting only affects the Eventlet and Gevent worker types.
    worker_connections = 1000

    # max_requests - The maximum number of requests a worker will process
    # before restarting
    # Any value greater than zero will limit the number of requests a worker
    # will process before automatically restarting. This is a simple method
    # to help limit the damage of memory leaks.
    max_requests = 0

    # max_requests_jitter - The maximum jitter to add to the max-requests setting
    # The jitter causes the restart per worker to be randomized by
    # randint(0, max_requests_jitter). This is intended to stagger worker
    # restarts to avoid all workers restarting at the same time.
    max_requests_jitter = 0

    # timeout - Workers silent for more than this many seconds are killed
    # and restarted
    timeout = 30

    # graceful_timeout - Timeout for graceful workers restart
    # The maximum time a worker can take to finish a request after receiving a
    # restart signal. If the time is up, the worker is forcefully killed.
    graceful_timeout = 30

    # keep_alive - The number of seconds to wait for requests on a
    # Keep-Alive connection
    # Generally set in the 1-5 seconds range.
    keep_alive = 2


    # ===============================================
    # Security
    # ===============================================

    # limit_request_line - The maximum size of HTTP request line in bytes
    # Value is a number from 0 (unlimited) to 8190.
    # This parameter can be used to prevent any DDOS attack.
    limit_request_line = 4094

    # limit_request_fields - Limit the number of HTTP headers fields in a request
    # This parameter is used to limit the number of headers in a request to
    # prevent DDOS attack. Used with the limit_request_field_size it allows
    # more safety.
    # By default this value is 100 and can’t be larger than 32768.
    limit_request_fields = 100

    # limit_request_field_size - Limit the allowed size of an HTTP request
    # header field.
    # Value is a number from 0 (unlimited) to 8190.
    limit_request_field_size = 8190


    # ===============================================
    # Debugging
    # ===============================================

    # reload - Restart workers when code changes
    reload = False

    # spew - Install a trace function that spews every line executed by the server
    spew = False

    # check_config - Check the configuration
    check_config = False


    # ===============================================
    # Server Mechanics
    # ===============================================

    # preload_app - Load application code before the worker processes are forked
    # By preloading an application you can save some RAM resources as well as
    # speed up server boot times. Although, if you defer application loading to
    # each worker process, you can reload your application code easily by
    # restarting workers.
    preload = False

    # sendfile - Enables or disables the use of sendfile()
    sendfile = True

    # chdir - Chdir to specified directory before apps loading
    chdir = ""

    # daemon - Daemonize the Gunicorn process.
    # Detaches the server from the controlling terminal and enters the background.
    daemon = False

    # raw_env - Set environment variable (key=value)
    # Pass variables to the execution environment.
    raw_env = []
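    # For illustration only (the variable name is hypothetical, not part of this
    # gist), entries are plain 'KEY=value' strings:
    #
    #   raw_env = ['DJANGO_SETTINGS_MODULE=myproject.settings']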

    # pidfile - A filename to use for the PID file
    # If not set, no PID file will be written.
    pidfile = None

    # worker_tmp_dir - A directory to use for the worker heartbeat temporary file
    # If not set, the default temporary directory will be used.
    worker_tmp_dir = None

    # user - Switch worker processes to run as this user
    # A valid user id (as an integer) or the name of a user that can be retrieved
    # with a call to pwd.getpwnam(value) or None to not change the worker process
    # user
    user = None

    # group - Switch worker process to run as this group.
    # A valid group id (as an integer) or the name of a group that can be retrieved
    # with a call to grp.getgrnam(value) or None to not change the worker
    # processes group.
    group = None

    # umask - A bit mask for the file mode on files written by Gunicorn
    # Note that this affects unix socket permissions.
    # A valid value for the os.umask(mode) call or a string compatible with
    # int(value, 0) (0 means Python guesses the base, so values like “0”, “0xFF”,
    # “0022” are valid for decimal, hex, and octal representations)
    umask = 0

    # tmp_upload_dir - Directory to store temporary request data as they are read
    # This path should be writable by the process permissions set for Gunicorn
    # workers. If not specified, Gunicorn will choose a system generated temporary
    # directory.
    tmp_upload_dir = None

    # secure_scheme_headers - A dictionary containing headers and values that the
    # front-end proxy uses to indicate HTTPS requests. These tell gunicorn to set
    # wsgi.url_scheme to “https”, so your application can tell that the request is
    # secure.
    secure_scheme_headers = {
    'X-FORWARDED-PROTOCOL': 'ssl',
    'X-FORWARDED-PROTO': 'https',
    'X-FORWARDED-SSL': 'on'
    }

    # forwarded_allow_ips - Front-end’s IPs from which allowed to handle set
    # secure headers (comma separate)
    # Set to “*” to disable checking of Front-end IPs (useful for setups where
    # you don’t know in advance the IP address of Front-end, but you still trust
    # the environment)
    forwarded_allow_ips = "127.0.0.1"

    # proxy_protocol - Enable detect PROXY protocol (PROXY mode).
    # Allow using Http and Proxy together. It may be useful for work with stunnel
    # as https frontend and gunicorn as http server.
    # PROXY protocol: http://haproxy.1wt.eu/download/1.5/doc/proxy-protocol.txt
    proxy_protocol = False

    # proxy_allow_ips - Front-end’s IPs from which allowed accept proxy requests
    # (comma separate)
    # Set to “*” to disable checking of Front-end IPs (useful for setups where you
    # don’t know in advance the IP address of Front-end, but you still trust the
    # environment)
    proxy_allow_from = "127.0.0.1"


    # ===============================================
    # SSL
    # ===============================================

    # keyfile - SSL Key file
    keyfile = None

    # certfile - SSL Certificate file
    certfile = None

    # ssl_version - SSL Version to use (see stdlib ssl module’s)
    ssl_version = 3

    # cert_reqs - Whether client certificate is required (see stdlib ssl module’s)
    cert_reqs = 0

    # ca_certs - CA certificates file
    ca_certs = None

    # suppress_ragged_eofs - Suppress ragged EOFs (see stdlib ssl module’s)
    suppress_ragged_eofs = True

    # do_handshake_on_connect - Whether to perform SSL handshake on socket connect
    # (see stdlib ssl module’s)
    do_handshake_on_connect = False

    # ciphers - Ciphers to use (see stdlib ssl module’s)
    ciphers = "TLSv1"


    # ===============================================
    # Logging
    # ===============================================

    # accesslog - The Access log file to write to.
    # “-” means log to stderr.
    access_logfile = None

    # access_log_format - The access log format
    #
    # Identifier | Description
    # ------------------------------------------------------------
    # h -> remote address
    # l -> ‘-‘
    # u -> currently ‘-‘, may be user name in future releases
    # t -> date of the request
    # r -> status line (e.g. GET / HTTP/1.1)
    # s -> status
    # b -> response length or ‘-‘
    # f -> referer
    # a -> user agent
    # T -> request time in seconds
    # D -> request time in microseconds
    # L -> request time in decimal seconds
    # p -> process ID
    # {Header}i -> request header
    # {Header}o -> response header
    # ---------------------------------------------------------------
    access_logformat = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" \
    "%(a)s"'

    # errorlog - The Error log file to write to.
    # “-” means log to stderr.
    error_logfile = "-"

    # loglevel - The granularity of Error log outputs.
    # Valid level names are:
    # 1. debug
    # 2. info
    # 3. warning
    # 4. error
    # 5. critical
    log_level = "info"

    # logger_class - The logger you want to use to log events in gunicorn.
    # The default class (gunicorn.glogging.Logger) handle most of normal usages
    # in logging. It provides error and access logging.
    logger_class = "gunicorn.glogging.Logger"

    # logconfig - The log config file to use. Gunicorn uses the standard Python
    # logging module’s Configuration file format.
    log_config = None

    # syslog_addr - Address to send syslog messages.
    #
    # Address is a string of the form:
    # ‘unix://PATH#TYPE’ : for unix domain socket. TYPE can be ‘stream’ for the
    # stream driver or ‘dgram’ for the dgram driver.
    # ‘stream’ is the default.
    # ‘udp://HOST:PORT’ : for UDP sockets
    # ‘tcp://HOST:PORT‘ : for TCP sockets
    log_syslog_to = "udp://localhost:514"

    # syslog - Send Gunicorn logs to syslog
    log_syslog = False

    # syslog_prefix - Makes gunicorn use the parameter as program-name in the
    # syslog entries.
    # All entries will be prefixed by gunicorn.<prefix>. By default the program
    # name is the name of the process.
    log_syslog_prefix = None

    # syslog_facility - Syslog facility name
    log_syslog_facility = "user"

    # enable_stdio_inheritance - Enable stdio inheritance
    # Enable inheritance for stdio file descriptors in daemon mode.
    # Note: To disable the python stdout buffering, you can set the
    # environment variable PYTHONUNBUFFERED.
    enable_stdio_inheritance = False

    # statsd_host - host:port of the statsd server to log to
    statsd_host = None

    # statsd_prefix - Prefix to use when emitting statsd metrics (a trailing . is
    # added, if not provided)
    # statsd-prefix = "."


    # ===============================================
    # Process Naming
    # ===============================================

    # proc_name - A base to use with setproctitle for process naming.
    # This affects things like `ps` and `top`.
    # It defaults to ‘gunicorn’.
    name = None


    # ===============================================
    # Server Hooks
    # ===============================================

    def on_starting(server):
    """
    Called just before the master process is initialized.
    The callable needs to accept a single instance variable for the Arbiter.
    """
    pass

    def on_reload(server):
    """
    Called to recycle workers during a reload via SIGHUP.
    The callable needs to accept a single instance variable for the Arbiter.
    """
    pass

    def when_ready(server):
    """
    Called just after the server is started.
    The callable needs to accept a single instance variable for the Arbiter.
    """
    pass

    def pre_fork(server, worker):
    """
    Called just before a worker is forked.
    The callable needs to accept two instance variables for the Arbiter and
    new Worker.
    """
    pass

    def post_fork(server, worker):
    """
    Called just after a worker has been forked.
    The callable needs to accept two instance variables for the Arbiter and
    new Worker.
    """
    pass

    def post_worker_init(worker):
    """
    Called just after a worker has initialized the application.
    The callable needs to accept one instance variable for the initialized
    Worker.
    """
    pass

    def worker_init(worker):
    """
    Called just after a worker exited on SIGINT or SIGQUIT.
    The callable needs to accept one instance variable for the initialized
    Worker.
    """
    pass

    def worker_abort(worker):
    """
    Called when a worker received the SIGABRT signal.
    This call generally happens on timeout.
    The callable needs to accept one instance variable for the initialized
    Worker.
    """
    pass

    def pre_exec(server):
    """
    Called just before a new master process is forked.
    The callable needs to accept a single instance variable for the Arbiter.
    """
    pass

    def pre_request(worker, req):
    """
    Called just before a worker processes the request.
    The callable needs to accept two instance variables for the Worker and
    the Request.
    """
    worker.log.debug("%s %s" % (req.method, req.path))

    def post_request(worker, req, environ, resp):
    """
    Called after a worker processes the request.
    The callable needs to accept two instance variables for the Worker and
    the Request.
    """
    pass

    def worker_exit(server, worker):
    """
    Called just after a worker has been exited.
    The callable needs to accept two instance variables for the Arbiter and
    the just-exited Worker.
    """
    pass

    def nworkers_changed(server, new_value, old_value):
    """
    Called just after num_workers has been changed.
    The callable needs to accept an instance variable of the Arbiter and two
    integers of number of workers after and before change.
    If the number of workers is set for the first time, old_value would be
    None.
    """
    pass

    def on_exit(server):
    """
    Called just before exiting gunicorn.
    The callable needs to accept a single instance variable for the Arbiter.
    """
    pass
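
    # A minimal sketch of how one of the hooks above could be filled in (an
    # illustration, not part of this gist): log each worker's PID right after it
    # has been forked.
    #
    #   def post_fork(server, worker):
    #       """Log the PID of the worker that was just forked."""
    #       server.log.info('Worker spawned (pid: %s)', worker.pid)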