Forked from fschiettecatte/HighTrafficServerSettings.md (fork created April 6, 2022)
Revisions

- fschiettecatte revised this gist Dec 5, 2020 (no changes)
- fschiettecatte revised this gist Nov 25, 2020 (1 changed file, 2 additions and 2 deletions)
- fschiettecatte created this gist Aug 29, 2020
# High Traffic Server Settings on CentOS 8

I recently did some work to optimize the network configuration of a CentOS 8 based web server that receives a lot of traffic. There is a lot of information on the web for this and it distills down to a minimum recommended configuration, and a recommended configuration. The minimum recommended configuration should be sufficient for servers with less than 10Gb, and the recommended configuration should be sufficient for servers with 10Gb or more.

Minimum recommended configuration:

```
# Google's BBR congestion control algorithm (default: cubic)
# https://research.google/pubs/pub45646/
sysctl -w net.ipv4.tcp_congestion_control=bbr

# BBR requires fq for queue management (default: fq_codel)
sysctl -w net.core.default_qdisc=fq

# Time to hold a socket in FIN-WAIT-2 before closing it (default: 60)
sysctl -w net.ipv4.tcp_fin_timeout=30

# Disable the gradual speed increase, useful on variable-speed WANs but not for us (default: 1)
sysctl -w net.ipv4.tcp_slow_start_after_idle=0

# Avoid MTU black holes (default: 0)
sysctl -w net.ipv4.tcp_mtu_probing=1
```

You can check the current/default values before making any changes:

```
sysctl net.ipv4.tcp_congestion_control
sysctl net.core.default_qdisc
sysctl net.ipv4.tcp_fin_timeout
sysctl net.ipv4.tcp_slow_start_after_idle
sysctl net.ipv4.tcp_mtu_probing
```

Defaults on CentOS 8 are as follows:

```
sysctl -w net.ipv4.tcp_congestion_control=cubic
sysctl -w net.core.default_qdisc=fq_codel
sysctl -w net.ipv4.tcp_fin_timeout=60
sysctl -w net.ipv4.tcp_slow_start_after_idle=1
sysctl -w net.ipv4.tcp_mtu_probing=0
```

For the changes to survive between reboots, add the following to **/etc/sysctl.d/99-linux.internal.conf**:

```
# Google's BBR congestion control algorithm (default: cubic)
# https://research.google/pubs/pub45646/
net.ipv4.tcp_congestion_control = bbr

# BBR requires fq for queue management (default: fq_codel)
net.core.default_qdisc = fq

# Time to hold a socket in FIN-WAIT-2 before closing it (default: 60)
net.ipv4.tcp_fin_timeout = 30

# Disable the gradual speed increase, useful on variable-speed WANs but not for us (default: 1)
net.ipv4.tcp_slow_start_after_idle = 0

# Avoid MTU black holes (default: 0)
net.ipv4.tcp_mtu_probing = 1
```

Once added to **/etc/sysctl.d/99-linux.internal.conf**, you can reload the settings as follows:

```
sysctl --load=/etc/sysctl.d/99-linux.internal.conf
```

Recommended configuration:

```
# Maximum number of packets queued on the input side (default: 1000)
sysctl -w net.core.netdev_max_backlog=30000

# Maximum receive socket buffer size (default: 212992)
sysctl -w net.core.rmem_max=134217728

# Maximum send socket buffer size (default: 212992)
sysctl -w net.core.wmem_max=134217728

# Minimum, initial and max TCP receive buffer size in bytes (default: 4096 87380 6291456)
sysctl -w net.ipv4.tcp_rmem="4096 87380 134217728"

# Minimum, initial and max TCP send buffer size in bytes (default: 4096 20480 4194304)
sysctl -w net.ipv4.tcp_wmem="4096 87380 134217728"

# Allowed local port range (default: 32768 60999)
sysctl -w net.ipv4.ip_local_port_range="1024 65535"

# Google's BBR congestion control algorithm (default: cubic)
# https://research.google/pubs/pub45646/
sysctl -w net.ipv4.tcp_congestion_control=bbr

# BBR requires fq for queue management (default: fq_codel)
sysctl -w net.core.default_qdisc=fq

# Time to hold a socket in FIN-WAIT-2 before closing it (default: 60)
sysctl -w net.ipv4.tcp_fin_timeout=30

# Maximum SYN backlog (default: 1024)
sysctl -w net.ipv4.tcp_max_syn_backlog=8096

# Disable the gradual speed increase, useful on variable-speed WANs but not for us (default: 1)
sysctl -w net.ipv4.tcp_slow_start_after_idle=0

# Don't cache ssthresh from previous connection (default: 0)
sysctl -w net.ipv4.tcp_no_metrics_save=1

# Avoid MTU black holes (default: 0)
sysctl -w net.ipv4.tcp_mtu_probing=1
```

You can check the current/default values before making any changes:

```
sysctl net.core.netdev_max_backlog
sysctl net.core.rmem_max
sysctl net.core.wmem_max
sysctl net.ipv4.tcp_rmem
sysctl net.ipv4.tcp_wmem
sysctl net.ipv4.ip_local_port_range
sysctl net.ipv4.tcp_congestion_control
sysctl net.core.default_qdisc
sysctl net.ipv4.tcp_fin_timeout
sysctl net.ipv4.tcp_max_syn_backlog
sysctl net.ipv4.tcp_slow_start_after_idle
sysctl net.ipv4.tcp_no_metrics_save
sysctl net.ipv4.tcp_mtu_probing
```

Defaults on CentOS 8 are as follows:

```
sysctl -w net.core.netdev_max_backlog=1000
sysctl -w net.core.rmem_max=212992
sysctl -w net.core.wmem_max=212992
sysctl -w net.ipv4.tcp_rmem="4096 87380 6291456"
sysctl -w net.ipv4.tcp_wmem="4096 87380 6291456"
sysctl -w net.ipv4.ip_local_port_range="32768 60999"
sysctl -w net.ipv4.tcp_congestion_control=cubic
sysctl -w net.core.default_qdisc=fq_codel
sysctl -w net.ipv4.tcp_fin_timeout=60
sysctl -w net.ipv4.tcp_max_syn_backlog=1024
sysctl -w net.ipv4.tcp_slow_start_after_idle=1
sysctl -w net.ipv4.tcp_no_metrics_save=0
sysctl -w net.ipv4.tcp_mtu_probing=0
```

For the changes to survive between reboots, add the following to **/etc/sysctl.d/99-linux.internal.conf**:

```
# Maximum number of packets queued on the input side (default: 1000)
net.core.netdev_max_backlog = 30000

# Maximum receive socket buffer size (default: 212992)
net.core.rmem_max = 134217728

# Maximum send socket buffer size (default: 212992)
net.core.wmem_max = 134217728

# Minimum, initial and max TCP receive buffer size in bytes (default: 4096 87380 6291456)
net.ipv4.tcp_rmem = 4096 87380 134217728

# Minimum, initial and max TCP send buffer size in bytes (default: 4096 20480 4194304)
net.ipv4.tcp_wmem = 4096 87380 134217728

# Allowed local port range (default: 32768 60999)
net.ipv4.ip_local_port_range = 1024 65535

# Google's BBR congestion control algorithm (default: cubic)
# https://research.google/pubs/pub45646/
net.ipv4.tcp_congestion_control = bbr

# BBR requires fq for queue management (default: fq_codel)
net.core.default_qdisc = fq

# Time to hold a socket in FIN-WAIT-2 before closing it (default: 60)
net.ipv4.tcp_fin_timeout = 30

# Maximum SYN backlog (default: 1024)
net.ipv4.tcp_max_syn_backlog = 8096

# Disable the gradual speed increase, useful on variable-speed WANs but not for us (default: 1)
net.ipv4.tcp_slow_start_after_idle = 0

# Don't cache ssthresh from previous connection (default: 0)
net.ipv4.tcp_no_metrics_save = 1

# Avoid MTU black holes (default: 0)
net.ipv4.tcp_mtu_probing = 1
```

Once added to **/etc/sysctl.d/99-linux.internal.conf**, you can reload the settings as follows:

```
sysctl --load=/etc/sysctl.d/99-linux.internal.conf
```

### Adding BBR to the available congestion control algorithms

By default BBR is not enabled on CentOS 8; you can check what is currently enabled as follows:

```
[root@linux]# sysctl net.ipv4.tcp_available_congestion_control
net.ipv4.tcp_available_congestion_control = reno cubic
```

You can list all the modules present on your system as follows:

```
[root@linux]# ls -la /lib/modules/$(uname -r)/kernel/net/ipv4/
...
tcp_bbr.ko.xz
...
```

The module you are looking for is **tcp_bbr.ko.xz**. Enable it with **modprobe**:

```
[root@linux]# modprobe -v -a tcp_bbr
insmod /lib/modules/4.18.0-193.14.2.el8_2.x86_64/kernel/net/ipv4/tcp_bbr.ko.xz
```

You can check for it as follows:

```
[root@linux]# sysctl net.ipv4.tcp_available_congestion_control
net.ipv4.tcp_available_congestion_control = reno cubic bbr
```
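Note that `modprobe` only loads **tcp_bbr** until the next reboot. On systemd-based systems such as CentOS 8, a drop-in under **/etc/modules-load.d/** loads it at boot; the file name here is my own choice, any `.conf` name works:

```
# /etc/modules-load.d/tcp_bbr.conf
# Load the BBR congestion control module at boot, before
# /etc/sysctl.d/99-linux.internal.conf sets it as the default.
tcp_bbr
```

Without this, `net.ipv4.tcp_congestion_control = bbr` in the sysctl drop-in can fail at boot because the algorithm is not yet registered.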
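As a sanity check on the 134217728 byte (128MiB) `rmem_max`/`wmem_max` ceiling in the recommended configuration, it helps to compare it against the bandwidth-delay product, the amount of data that must be in flight to keep a link busy. The 10Gb/s bandwidth and 100ms round-trip time below are illustrative assumptions, not figures from this gist:

```
# Bandwidth-delay product = bandwidth (bytes/s) * RTT (s).
# 10 Gb/s = 10000000000 bits/s; dividing by 8 gives bytes/s,
# and dividing by 10 applies the 0.1 s (100 ms) RTT.
bdp=$(( 10000000000 / 8 / 10 ))
echo "A 10Gb/s path with a 100ms RTT needs ${bdp} bytes in flight"

# The 128MiB maximum buffer covers this worst case with headroom.
[ "$bdp" -le 134217728 ] && echo "134217728 byte maximum is sufficient"
```

Shorter RTTs or slower links need proportionally less, which is why the minimum and initial values in `tcp_rmem`/`tcp_wmem` stay small and the kernel grows buffers on demand.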
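If you script these changes, it can be worth guarding the congestion control switch so it only happens when the kernel actually lists `bbr`. This is a sketch, and `has_cc` is a helper name of my own, not something from the gist; it just matches an algorithm against a space-separated list like the one `net.ipv4.tcp_available_congestion_control` returns:

```
# has_cc LIST ALGO: succeed if ALGO appears in the space-separated LIST,
# e.g. the value of net.ipv4.tcp_available_congestion_control.
has_cc() {
    case " $1 " in
        *" $2 "*) return 0 ;;
        *)        return 1 ;;
    esac
}

# Using the post-modprobe output shown above:
if has_cc "reno cubic bbr" bbr; then
    echo "bbr is available"        # prints this for the list above
else
    echo "bbr missing; run: modprobe tcp_bbr"
fi
```

On a live system you would feed it `$(cat /proc/sys/net/ipv4/tcp_available_congestion_control)` instead of the literal list.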