    # FreeBSD 10.2 -- /etc/sysctl.conf version 0.47
    # https://calomel.org/freebsd_network_tuning.html
    #
    # low latency is important so we highly recommend that you disable hyper
    # threading on Intel CPUs as it has an unpredictable effect on latency, CPU
    # cache misses and load.
    #
    # These settings are specifically tuned for a "low" latency FIOS (300/300) and
    # gigabit LAN connections. If you have 10gig or 40gig you will need to increase
    # the network buffers as proposed. "man tuning" for more information.
    #
    # Before tuning the following two(2) sections on maxsockbuf and buf_max take
    # some time to read PSC's tips on Enabling High Performance Data Transfers.
    # http://www.psc.edu/index.php/networking/641-tcp-tune

    # A standard guide to the socket buffer size is: latency to the host times
    # bandwidth in megabits per second divided by 8 bits = socket buffer size. For
    # a 150 megabit network which pings 0.1 seconds from our server we calculate
    # 0.1 seconds * 150 Mbps / 8 bits = 1.875 megabyte buffer which is below the
    # default of 2MB (2097152). If the host is farther away then the latency will
    # be higher and the buffer will need to be larger. You may want to increase to
    # 4MB if the upload bandwidth is greater than 150 Mbit and latency is over
    # 200ms. For 10GE hosts set to at least 16MB and also increase the TCP
    # window size to 65535 and the window scale to 9. For 10GE hosts with RTT over
    # 100ms you will need to set a buffer of 150MB and a wscale of 12. "2097152 =
    # 2*1024*1024".

    # network: 1 Gbit maxsockbuf: 2MB wsize: 6 2^6*65KB = 4MB (default)
    # network: 2 Gbit maxsockbuf: 4MB wsize: 7 2^7*65KB = 8MB
    # network: 10 Gbit maxsockbuf: 16MB wsize: 9 2^9*65KB = 32MB
    # network: 40 Gbit maxsockbuf: 150MB wsize: 12 2^12*65KB = 260MB
    # network: 100 Gbit maxsockbuf: 600MB wsize: 14 2^14*65KB = 1064MB
    kern.ipc.maxsockbuf=4194304 # (default 2097152)
    #kern.ipc.maxsockbuf=16777216 # (default 2097152)
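    # Worked example of the rule of thumb above (the 30ms/1Gbit figures are only
    # an assumption for illustration): a 1 Gbit path with a 30ms RTT needs about
    # 0.030 sec * 1000 Mbps / 8 bits = 3.75 MB of buffer, so the 4MB (4194304)
    # value set above is sufficient; a longer RTT or faster link would call for
    # the commented 16MB line instead.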

    # set auto tuning maximums to the same value as the kern.ipc.maxsockbuf above.
    # Use at least 16MB for 10GE hosts with RTT of less than 100ms. For 10GE hosts
    # with RTT of greater than 100ms set buf_max to 150MB.
    net.inet.tcp.sendbuf_max=4194304 # (default 2097152)
    net.inet.tcp.recvbuf_max=4194304 # (default 2097152)
    #net.inet.tcp.sendbuf_max=16777216 # (default 2097152)
    #net.inet.tcp.recvbuf_max=16777216 # (default 2097152)
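    # A quick sanity check once the values are loaded (simply reads them back):
    #   sysctl kern.ipc.maxsockbuf net.inet.tcp.sendbuf_max net.inet.tcp.recvbuf_max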

    # maximum segment size (MSS) specifies the largest payload of data in a single
    # TCP segment not including TCP headers or options. mssdflt is also called MSS
    # clamping. With an interface MTU of 1500 bytes we suggest a
    # net.inet.tcp.mssdflt of 1460 bytes. 1500 MTU minus 20 byte IP header minus 20
    # byte TCP header is 1460. With net.inet.tcp.rfc1323 enabled, tcp timestamps
    # are added to the packets and the mss is automatically reduced from 1460 bytes
    # to 1448 bytes total payload. Note: if you are using PF with an outgoing scrub
    # rule then PF will re-package the data using an MTU of 1460 by default, thus
    # overriding this mssdflt setting and PF scrub might slow down the network.
    # http://www.wand.net.nz/sites/default/files/mss_ict11.pdf
    net.inet.tcp.mssdflt=1460 # (default 536)
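    # To confirm the path really carries 1500 byte frames before relying on an MSS
    # of 1460, a hedged check (example.com is only a placeholder host): send a
    # don't-fragment ping whose payload fills the MTU, 1472 data + 8 ICMP + 20 IP = 1500.
    #   ping -D -s 1472 example.com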

    # minimum, maximum segment size (mMSS) specifies the smallest payload of data
    # in a single TCP segment our system will agree to send when negotiating with
    # the client. By default, FreeBSD limits the maximum segment size to no lower
    # than 216 bytes. RFC 791 defines the minimum IP packet size as 68 bytes, but
    # in RFC 793 the minimum MSS is specified to be 536 bytes which is the same
    # value Windows Vista uses. The attack vector: a malicious client negotiates a
    # very small MSS, which can turn our server into the source of a packet flood
    # DoS attack. The attack scales with the available bandwidth and quickly
    # saturates the CPU and network interface with packet generation and
    # transmission. By default, if the client asks for a one(1) megabyte file with
    # an MSS of 216 we have to send back 4,630 packets. If the minimum MSS is set
    # to 1300 we send back only 769 packets which is six times more efficient. For
    # standard Internet connections we suggest a minimum mss of 1300 bytes. 1300
    # will even work on networks making a VOIP (RTP) call using a TCP connection with
    # TCP options over IPSEC through a GRE tunnel on a mobile cellular network with
    # the DF (don't fragment) bit set.
    net.inet.tcp.minmss=1300 # (default 216)

    # H-TCP congestion control: The Hamilton TCP (HighSpeed-TCP) algorithm is a
    # packet loss based congestion control that is more aggressive about pushing up
    # to the maximum bandwidth (total BDP) and favors hosts with lower TTL / VARTTL
    # than the default "newreno". Understand that "newreno" works well in most
    # conditions and enabling H-TCP may only gain you a few percentage points of
    # throughput.
    # http://www.sigcomm.org/sites/default/files/ccr/papers/2008/July/1384609-1384613.pdf
    # make sure to also add 'cc_htcp_load="YES"' to /boot/loader.conf then check
    # available congestion control options with "sysctl net.inet.tcp.cc.available"
    net.inet.tcp.cc.algorithm=htcp # (default newreno)
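    # A minimal sketch of the pieces referenced above: the boot-time module load
    # belongs in /boot/loader.conf, and the two sysctl reads verify the result.
    #   cc_htcp_load="YES"                  (in /boot/loader.conf, not this file)
    #   sysctl net.inet.tcp.cc.available
    #   sysctl net.inet.tcp.cc.algorithm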

    # H-TCP congestion control: adaptive backoff will increase bandwidth
    # utilization by adjusting the additive-increase/multiplicative-decrease (AIMD)
    # backoff parameter according to the amount of buffers available on the path.
    # adaptive backoff ensures no queue along the path will remain completely empty
    # after a packet loss event which increases buffer efficiency.
    net.inet.tcp.cc.htcp.adaptive_backoff=1 # (default 0 ; disabled)

    # H-TCP congestion control: RTT scaling will increase the fairness between
    # competing TCP flows traversing different RTT paths through a common
    # bottleneck. rtt_scaling increases the Congestion Window Size (CWND)
    # independent of path round-trip time (RTT) leading to lower latency for
    # interactive sessions when the connection is saturated by bulk data
    # transfers. Default is 0 (disabled)
    net.inet.tcp.cc.htcp.rtt_scaling=1 # (default 0 ; disabled)

    # IP forwarding allows packets to traverse between interfaces and is used by
    # firewalls, bridges and routers. When fast IP forwarding is also enabled, IP packets
    # are forwarded directly to the appropriate network interface with direct
    # processing to completion, which greatly improves the throughput. All packets
    # for local IP addresses, non-unicast, or with IP options are handled by the
    # normal IP input processing path. All features of the normal (slow) IP
    # forwarding path are supported by fast forwarding including firewall (through
    # pfil(9) hooks) checking, except ipsec tunnel brokering. The IP fast
    # forwarding path does not generate ICMP redirect or source quench messages
    # though. Compared to normal IP forwarding, fast forwarding can give a speedup
    # of 40 to 60% in packet forwarding performance which is great for interactive
    # connections like online games or VOIP where low latency is critical.
    #net.inet.ip.forwarding=1 # (default 0)
    #net.inet.ip.fastforwarding=1 # (default 0)
    #net.inet6.ip6.forwarding=1 # (default 0)
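    # The persistent rc.conf way of enabling the same forwarding sysctls at boot
    # (shown only as a hedged alternative to setting them here):
    #   gateway_enable="YES"                (sets net.inet.ip.forwarding=1)
    #   ipv6_gateway_enable="YES"           (sets net.inet6.ip6.forwarding=1)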

    # Reduce the number of SYN/ACKs the server will re-transmit to an IP address
    # that did not respond to the first SYN/ACK. On a client's initial connection
    # our server will always send a SYN/ACK in response to the client's initial
    # SYN. Limiting retransmitted SYN/ACKs reduces local syncache size and a "SYN
    # flood" DoS attack's collateral damage by not sending SYN/ACKs back to spoofed
    # IPs multiple times. If we do continue to send SYN/ACKs to spoofed IPs they
    # may send RSTs back to us and an "amplification" attack would begin against
    # our host. If you do not wish to send retransmits at all then set to zero(0),
    # especially if you are under a SYN attack. If our first SYN/ACK gets dropped
    # the client will re-send another SYN if they still want to connect. Also set
    # "net.inet.tcp.msl" to two(2) times the average round trip time of a client,
    # but no lower than 2000ms (2s). Test with "netstat -s -p tcp" and look under
    # syncache entries.
    # http://people.freebsd.org/~jlemon/papers/syncache.pdf
    # http://www.ouah.org/spank.txt
    net.inet.tcp.syncache.rexmtlimit=0 # (default 3)
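    # To watch the syncache counters mentioned above while testing (the grep only
    # narrows the statistics output):
    #   netstat -s -p tcp | grep -i syncache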

    # Spoofed packet attacks may be used to overload the kernel route cache. A
    # spoofed packet attack uses random source IPs to cause the kernel to generate
    # a temporary cached route in the route table. The route cache is an extraneous
    # caching layer mapping interfaces to routes to IPs and saves a lookup to the
    # Forward Information Base (FIB); a routing table within the network stack. The
    # IPv4 routing cache was intended to eliminate a FIB lookup and increase
    # performance. While a good idea in principle, unfortunately it provided a very
    # small performance boost in less than 10% of connections and opens up the
    # possibility of a DoS vector. Setting rtexpire and rtminexpire to ten(10)
    # seconds should be sufficient to protect the route table from attack.
    # http://www.es.freebsd.org/doc/handbook/securing-freebsd.html
    net.inet.ip.rtexpire=10 # (default 3600)
    #net.inet.ip.rtminexpire=10 # (default 10 )
    #net.inet.ip.rtmaxcache=128 # (default 128 )

    # Syncookies have a certain number of advantages and disadvantages. Syncookies
    # are useful if you are being DoS attacked as this method helps filter the
    # proper clients from the attack machines. But, since the TCP options from the
    # initial SYN are not saved in syncookies, the tcp options are not applied to
    # the connection, precluding use of features like window scale, timestamps, or
    # exact MSS sizing. As the returning ACK establishes the connection, it may be
    # possible for an attacker to ACK flood a machine in an attempt to create a
    # connection. Another downside: if an attacker overflows the syncache to the
    # point where a valid SYN cookie is accepted, they can include a data payload.
    # The attacker can then send data to a FreeBSD network daemon, even using a
    # spoofed source IP address, and have FreeBSD process that data, which is not
    # something the attacker could do without SYN cookies. Even though
    # syncookies are helpful during a DoS, we are going to disable them at this
    # time.
    net.inet.tcp.syncookies=0 # (default 1)

    # TCP segmentation offload (TSO), also called large segment offload (LSO),
    # should be disabled on NAT firewalls and routers. TSO/LSO works by queuing up
    # large buffers and letting the network interface card (NIC) split them into
    # separate packets. The problem is the NIC can build a packet that is the wrong
    # size and would be dropped by a switch or the receiving machine, like for NFS
    # fragmented traffic. If the packet is dropped the overall sending bandwidth is
    # reduced significantly. You can also disable TSO in /etc/rc.conf using the
    # "-tso" directive after the network card configuration; for example,
    # ifconfig_igb0="inet 10.10.10.1 netmask 255.255.255.0 -tso". Verify TSO is off
    # on the hardware by making sure TSO4 and TSO6 are not seen in the "options="
    # section using ifconfig.
    # http://www.peerwisdom.org/2013/04/03/large-send-offload-and-network-performance/
    net.inet.tcp.tso=0 # (default 1)
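    # A quick check that TSO is really off (igb0 is only an example interface
    # name); TSO4/TSO6 should no longer appear in the options= line:
    #   ifconfig igb0 | grep options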

    # Flow control stops and resumes the transmission of network traffic between
    # two connected peer nodes on a full-duplex Ethernet physical link. Ethernet
    # "PAUSE" frames pause transmission of all traffic on a physical Ethernet link.
    # Some ISPs abuse flow control to slow down customers' traffic even though
    # full bandwidth is not being used. By disabling physical link flow control the
    # link instead relies on TCP's internal flow control, which operates per peer
    # (per IP address). The values are: (0=No Flow Control) (1=Receive Pause) (2=Transmit
    # Pause) (3=Full Flow Control, Default). We will be disabling flow control on
    # the igb interfaces.
    # http://virtualthreads.blogspot.com/2006/02/beware-ethernet-flow-control.html
    #dev.igb.0.fc=0 # (default 3)
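    # To read back the current flow control mode per interface (igb and the "0"
    # index are only an example device):
    #   sysctl dev.igb.0.fc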

    # General Security and DoS mitigation
    #net.bpf.optimize_writers=0 # bpf are write-only unless program explicitly specifies the read filter (default 0)
    #net.bpf.zerocopy_enable=0 # zero-copy BPF buffers, breaks dhcpd ! (default 0)
    net.inet.ip.check_interface=1 # verify packet arrives on correct interface (default 0)
    #net.inet.ip.portrange.randomized=1 # randomize outgoing upper ports (default 1)
    net.inet.ip.process_options=0 # ignore IP options in the incoming packets (default 1)
    net.inet.ip.random_id=1 # assign a random IP_ID to each packet leaving the system (default 0)
    net.inet.ip.redirect=0 # do not send IP redirects (default 1)
    #net.inet.ip.accept_sourceroute=0 # drop source routed packets since they can not be trusted (default 0)
    #net.inet.ip.sourceroute=0 # if source routed packets are accepted the route data is ignored (default 0)
    #net.inet.ip.stealth=1 # do not reduce the TTL by one(1) when a packet goes through the firewall (default 0)
    #net.inet.icmp.bmcastecho=0 # do not respond to ICMP packets sent to IP broadcast addresses (default 0)
    #net.inet.icmp.maskfake=0 # do not fake reply to ICMP Address Mask Request packets (default 0)
    #net.inet.icmp.maskrepl=0 # replies are not sent for ICMP address mask requests (default 0)
    #net.inet.icmp.log_redirect=0 # do not log redirected ICMP packet attempts (default 0)
    net.inet.icmp.drop_redirect=1 # no redirected ICMP packets (default 0)
    #net.inet.icmp.icmplim=200 # number of ICMP/TCP RST packets/sec, increase for bittorrent or many clients. (default 200)
    #net.inet.icmp.icmplim_output=1 # show "Limiting open port RST response" messages (default 1)
    #net.inet.tcp.abc_l_var=2 # increment the slow-start Congestion Window (cwnd) after two(2) segments (default 2)
    net.inet.tcp.always_keepalive=0 # disable tcp keep alive detection for dead peers, keepalive can be spoofed (default 1)
    net.inet.tcp.drop_synfin=1 # SYN/FIN packets get dropped on initial connection (default 0)
    net.inet.tcp.ecn.enable=1 # explicit congestion notification (ecn) warning: some ISP routers may abuse ECN (default 0)
    net.inet.tcp.fast_finwait2_recycle=1 # recycle FIN/WAIT states quickly (helps against DoS, but may cause false RST) (default 0)
    net.inet.tcp.icmp_may_rst=0 # icmp may not send RST to avoid spoofed icmp/udp floods (default 1)
    #net.inet.tcp.maxtcptw=50000 # max number of tcp time_wait states for closing connections (default ~27767)
    net.inet.tcp.msl=5000 # Maximum Segment Lifetime is the time a TCP segment can exist on the network and is
    # used to determine the TIME_WAIT interval, 2*MSL (default 30000 which is 60 seconds)
    net.inet.tcp.path_mtu_discovery=0 # disable MTU discovery since many hosts drop ICMP type 3 packets (default 1)
    #net.inet.tcp.rfc3042=1 # on packet loss trigger the fast retransmit algorithm instead of tcp timeout (default 1)
    net.inet.udp.blackhole=1 # drop udp packets destined for closed sockets (default 0)
    net.inet.tcp.blackhole=2 # drop tcp packets destined for closed ports (default 0)
    security.bsd.see_other_gids=0 # groups only see their own processes. root can see all (default 1)
    security.bsd.see_other_uids=0 # users only see their own processes. root can see all (default 1)
    vfs.zfs.min_auto_ashift=12 # ZFS 4k alignment
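    # To apply the directives in this file without a reboot (loader.conf tunables
    # and read-only sysctls still require a reboot):
    #   service sysctl restart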

    ###
    ######
    ######### OFF BELOW HERE #########
    #
    # Other options not enabled, but included for future reference. The following
    # may be needed in high load environments or against DDOS attacks. Take a look
    # at the detailed comments for more information and make an informed decision.

    # NOTE: The packet limit of net.inet.icmp.icmplim controls the following
    # methods: ICMP echo-reply, ICMP timestamp reply, ICMP port unreachable
    # (generated as a response to a packet received on a UDP port with no listening
    # application) and for limiting the transmission of TCP reset packets on open
    # and closed TCP ports. net.inet.icmp.icmplim limits the number of ICMP/TCP RST
    # packets per second, but may have to be increased for BitTorrent or many
    # connecting clients if you see the log message, "response from xxx to 200
    # packets per second". (default 200)

    # CUBIC congestion control: is a time based congestion control algorithm
    # optimized for high speed, high latency networks and a decent choice for
    # networks with minimal packet loss; most internet connections are in this
    # category. CUBIC can improve startup throughput of bulk data transfers and
    # burst transfers of a web server by up to 2x compared to packet loss based
    # algorithms like newreno and H-TCP. Make sure to also add
    # 'cc_cubic_load="YES"' to /boot/loader.conf then check available congestion
    # control options with "sysctl net.inet.tcp.cc.available". If you have a
    # network with greater than one percent packet loss then the packet loss based
    # H-TCP congestion control above should be tested instead.
    #net.inet.tcp.cc.algorithm=cubic # (default newreno)

    # The TCP window scale option (rfc1323) is used to increase the TCP receive
    # window size above its otherwise maximum value of 65,535 bytes (64k), and the
    # same RFC's time stamps allow nearly every segment, including retransmissions,
    # to be accurately timed at negligible computational cost. rfc3390 raises TCP's
    # initial congestion window. Both options should be enabled by default.
    #net.inet.tcp.rfc1323=1 # (default 1)
    #net.inet.tcp.rfc3390=1 # (default 1)

    # somaxconn is the OS buffer, backlog queue depth for accepting new TCP
    # connections. Your application will have its own separate max queue length
    # (maxqlen) which can be checked with "netstat -Lan". The default is 128
    # connections per application thread. Let's say your Nginx web server normally
    # receives 100 connections/sec and is a single-threaded application. If clients
    # burst in at a total of 250 connections/sec you may want to set somaxconn to
    # 512 so the connection buffer is 512 deep and the extra 122 clients over the
    # default (250-128=122) are not denied service, since the deeper queue leaves
    # 412 (512-100=412) spare slots. Also, a large listen queue will do a better
    # job of absorbing Denial of Service (DoS) attacks if, and only if, your
    # application can handle the TCP load, at the cost of more RAM and CPU time.
    # Nginx sets its backlog queue to the same as the OS somaxconn by default.
    # Note: "kern.ipc.somaxconn" is not shown in "sysctl -a" output, but searching
    # for "kern.ipc.soacceptqueue" gives the same value and both directives stand
    # for the same buffer value.
    #kern.ipc.soacceptqueue=1024 # (default 128 ; same as kern.ipc.somaxconn)
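    # To compare per-application listen queue depths against this limit, as noted
    # above (columns show qlen/incqlen/maxqlen per listening socket):
    #   netstat -Lan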

    # Selective Acknowledgment (SACK) allows the receiver to inform the sender of
    # packets which have been received and if any packets were dropped. The sender
    # can then selectively retransmit the missing data without needing to
    # retransmit entire blocks of data that have already been received
    # successfully. SACK option is not mandatory and support must be negotiated
    # when the connection is established using TCP header options. An attacker
    # downloading large files can abuse SACK by asking for many random segments to
    # be retransmitted. The server in response wastes system resources trying to
    # fulfill superfluous requests. If you are serving small files to low latency
    # clients then SACK can be disabled. If you see issues of flows randomly
    # pausing, try disabling SACK to see if there is equipment in the path which
    # does not handle SACK correctly.
    #net.inet.tcp.sack.enable=1 # (default 1)

    # Intel PRO/1000 network card maximum receive packet processing limit. Make
    # sure to enable hw.igb.rxd and hw.igb.txd in /boot/loader.conf as well.
    # https://fasterdata.es.net/host-tuning/nic-tuning/
    #hw.igb.rx_process_limit="4096" # (default 100)
    #dev.igb.0.rx_processing_limit="4096" # (default 100)
    #dev.igb.1.rx_processing_limit="4096" # (default 100)
    #dev.em.0.rx_processing_limit="4096" # (default 100)
    #dev.em.1.rx_processing_limit="4096" # (default 100)
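    # The matching /boot/loader.conf ring sizes mentioned above (a hedged sketch;
    # 4096 is only an example and must not exceed what the NIC supports):
    #   hw.igb.rxd="4096"
    #   hw.igb.txd="4096"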

    # SlowStart Flightsize is TCP's initial congestion window as the number of
    # packets on the wire at the start of the connection or after congestion.
    # Google recommends ten(10), so an MSS of 1460 bytes times an initial
    # congestion window of ten(10) is 14.6 kilobytes. If you are running FreeBSD 9.1 or
    # earlier we recommend testing with a value of 44. A window of 44 packets of
    # 1460 bytes easily fits into a client's 64 kilobyte receive buffer space.
    # Note, slowstart_flightsize was removed from FreeBSD 9.2 and now we can only
    # set the initial congestion window to 10.
    # http://www.igvita.com/2011/10/20/faster-web-vs-tcp-slow-start/
    #net.inet.tcp.experimental.initcwnd10=1 # (default 1 for FreeBSD 10.1)
    #net.inet.tcp.experimental.initcwnd10=1 # (default 0 for FreeBSD 9.2)
    #net.inet.tcp.local_slowstart_flightsize=44 # (default 4 for FreeBSD 9.1)
    #net.inet.tcp.slowstart_flightsize=44 # (default 4 for FreeBSD 9.1)

    # control the amount of send and receive buffer space allowed for any given TCP
    # connection. The default sending buffer is 32K; the default receiving buffer
    # is 64K. You can often improve bandwidth utilization by increasing the default
    # at the cost of eating up more kernel memory for each connection. We do not
    # recommend increasing the defaults if you are serving hundreds or thousands of
    # simultaneous connections because it is possible to quickly run the system out
    # of memory. To calculate: bandwidth in Mbit divided by 8 bits divided by the
    # MSS times 1 million gives the size of the buffer in bytes. For a 60 Mbit FIOS
    # connection the buffer should be at least 60/8/1460*1000000 = 5136 bytes.
    # Since the default buffers are larger than 5136 bytes we stay with the default.
    #net.inet.tcp.sendspace=32768 # (default 32768 )
    #net.inet.tcp.recvspace=65536 # (default 65536 )

    # Increase auto-tuning TCP step size of the TCP transmit and receive buffers.
    # The TCP buffer starts at "net.inet.tcp.sendspace" and
    # "net.inet.tcp.recvspace" and increases by these increments up to
    # "net.inet.tcp.recvbuf_max" and "net.inet.tcp.sendbuf_max" as auto tuned by
    # FreeBSD. http://fasterdata.es.net/host-tuning/freebsd/
    #net.inet.tcp.sendbuf_inc=32768 # (default 8192 )
    #net.inet.tcp.recvbuf_inc=65536 # (default 16384 )

    # host cache is the client's cached tcp connection details and metrics (TTL,
    # SSTRESH and VARTTL) the server can use to improve future performance of
    # connections between the same two hosts. When a tcp connection is completed,
    # our server will cache information about the connection until an expire
    # timeout. If a new connection from the same client is initiated before the
    # cache has expired, the connection will use the cached connection details to
    # setup the connection's internal variables. This pre-cached setup allows the
    # client and server to reach optimal performance significantly faster because
    # the server will not need to go through the usual steps of re-learning the
    # optimal parameters for the connection. Unfortunately, this can also make
    # performance worse because the hostcache will apply the exception case to
    # every new connection from a client within the expire time. In other words, in
    # some cases, one person surfing your site from a mobile phone who has some
    # random packet loss can reduce your server's performance to this visitor even
    # when their temporary loss has cleared. 3900 seconds allows clients who
    # connect regularly to stay in our hostcache. To view the current host cache
    # stats use "sysctl net.inet.tcp.hostcache.list" . If you have
    # "net.inet.tcp.hostcache.cachelimit=0" like in our /boot/loader.conf example
    # then this expire time is negated and not used.
    #net.inet.tcp.hostcache.expire=3900 # (default 3600)
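    # To inspect or flush the host cache while testing (the purge knob marks all
    # entries for removal at the next prune pass):
    #   sysctl net.inet.tcp.hostcache.list | head
    #   sysctl net.inet.tcp.hostcache.purge=1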

    # By default, acks are delayed by 100 ms or sent every other packet in order to
    # improve the chance of being added to another returned data packet which is
    # full. This method can cut the number of tiny packets flowing across the
    # network and is efficient. But, delayed ACKs cause issues on modern, short
    # hop, low latency networks. TCP works by increasing the congestion window,
    # which is the amount of data currently traveling on the wire, based on the
    # number of ACKs received per time frame. Delaying the timing of the ACKs
    # received results in less data on the wire, time in TCP slowstart is doubled
    # and in congestion avoidance after packet loss the congestion window growth is
    # slowed. Setting delacktime higher than 100 will slow downloads as ACKs
    # are queued too long. On low latency 10gig links we find a value of 20ms is
    # optimal. http://www.tel.uva.es/personales/ignmig/pdfs/ogonzalez_NOC05.pdf
    #net.inet.tcp.delayed_ack=1 # (default 1)
    #net.inet.tcp.delacktime=20 # (default 100)

    # Do not create a socket or compressed tcpw entry for TCP connections restricted
    # to the local machine connecting to itself on localhost. An example connection
    # would be a web server and a database server running on the same machine or
    # FreeBSD jails connecting to each other.
    #net.inet.tcp.nolocaltimewait=1 # (default 0)

    # The number of frames the NIC's receive (rx) queue will accept before sending
    # a kernel interrupt. If the queue is full and the kernel can not process the
    # packets then the packets are dropped. Use "sysctl
    # net.inet.ip.intr_queue_drops" and "netstat -Q" and increase if queue_drops is
    # greater than zero(0). The real problem is the machine is simply not fast
    # enough to handle the traffic. Upgrading the hardware is a better solution.
    #net.inet.ip.intr_queue_maxlen=256 # (default 256)
    #net.route.netisr_maxqlen=256 # (default 256)
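    # The checks referenced above; non-zero drops suggest raising the queue length
    # or, better, using faster hardware:
    #   sysctl net.inet.ip.intr_queue_drops
    #   netstat -Q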

    # security settings for jailed environments. it is generally a good idea to
    # separately jail any service which is accessible by an external client like
    # the web or mail server. This is especially true for public facing services.
    # take a look at ezjail, http://forums.freebsd.org/showthread.php?t=16860
    #security.jail.allow_raw_sockets=1 # (default 0)
    #security.jail.enforce_statfs=2 # (default 2)
    #security.jail.set_hostname_allowed=0 # (default 1)
    #security.jail.socket_unixiproute_only=1 # (default 1)
    #security.jail.sysvipc_allowed=0 # (default 0)
    #security.jail.chflags_allowed=0 # (default 0)

    # decrease the scheduler maximum time slice for lower latency program calls.
    # by default we use stathz/10 which equals thirteen(13). also, decrease the
    # scheduler maximum time for interactive programs as this is a dedicated
    # server (default 30). Also make sure you look into "kern.hz=100" in /boot/loader.conf
    #kern.sched.interact=5 # (default 30)
    #kern.sched.slice=3 # (default 12)

    # increase localhost network buffers. For example, if you run many high
    # bandwidth services on lo0 like an http or local DB server and forward public
    # external traffic using Pf. Also, if running many jails on lo0 then these may
    # help. set to 10x(lo0 mtu 16384 + 40 bytes for header) = 164240
    #net.local.stream.sendspace=164240 # (default 8192)
    #net.local.stream.recvspace=164240 # (default 8192)

    # threads per process
    #kern.threads.max_threads_per_proc=9000

    # create core dump file on "exited on signal 6"
    #kern.coredump=1 # (default 1)
    #kern.sugid_coredump=1 # (default 0)
    #kern.corefile="/tmp/%N.core" # (default %N.core)

    # ZFS L2ARC tuning - If you have read intensive workloads and limited RAM make
    # sure to use an SSD for your L2ARC. Verify noprefetch is enabled(1) and
    # increase the speed at which the system can fill the L2ARC device. By default,
    # when the L2ARC is being populated FreeBSD will only write at 16MB/sec to the
    # SSD. 16MB calculated by adding the speed of write_boost and write_max.
    # 16MB/sec is too slow as many SSDs made today can easily sustain
    # 500MB/sec. It is recommended to set both write_boost and write_max to at least
    # 256MB each so the L2ARC can be quickly seeded. Contrary to myth, enterprise
    # class SSDs can last for many years under constant read/write abuse of a web
    # server.
    #vfs.zfs.l2arc_noprefetch=1 # (default 1)
    #vfs.zfs.l2arc_write_boost=268435456 # (default 8388608)
    #vfs.zfs.l2arc_write_max=268435456 # (default 8388608)
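    # To watch the L2ARC device fill while tuning these (cache devices are listed
    # at the bottom of the per-vdev output):
    #   zpool iostat -v 5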

    # ZFS - Set TXG write limit to a lower threshold. This helps "level out" the
    # throughput rate (see "zpool iostat"). A value of 256MB works well for
    # systems with 4 GB of RAM, while 1 GB works well for us w/ 8 GB on disks which
    # have 64 MB cache.
    #vfs.zfs.write_limit_override=1073741824

    # For slow drives, set outstanding vdev I/O to "1" to prevent parallel
    # reads/writes per ZFS vdev. By limiting read/write streams we effectively force
    # drive access into long sequential disk access for drives like a single
    # 5400rpm disk. A value of one is not good for multiple disk spindles.
    #vfs.zfs.vdev.min_pending="1"
    #vfs.zfs.vdev.max_pending="1"

    # TCP keep alive can help detect network errors and signal connection
    # problems. Keep alives will increase the signaling bandwidth used, but as the
    # bandwidth utilized by signaling channels is low by nature, the increase
    # is insignificant. The system will disconnect a dead TCP connection when the
    # remote peer is dead or unresponsive for: 10000 + (5000 x 8) = 50000 msec (50
    # sec)
    #net.inet.tcp.keepidle=10000 # (default 7200000 )
    #net.inet.tcp.keepintvl=5000 # (default 75000 )
    #net.inet.tcp.always_keepalive=1 # (default 1)


    # UFS hard drive read ahead equivalent to 4 MiB at 32KiB block size. Easily
    # increases read speeds from 60 MB/sec to 80 MB/sec on a single spinning hard
    # drive. Samsung 830 SSD drives went from 310 MB/sec to 372 MB/sec (SATA 6).
    # use Bonnie++ to performance test file system I/O
    #vfs.read_max=128
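    # A rough before/after sequential read test (ada0 is only an example device;
    # this reads raw from the disk, so run it on an otherwise idle system):
    #   dd if=/dev/ada0 of=/dev/null bs=1m count=4096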

    # global limit for number of sockets in the system. If kern.ipc.numopensockets
    # plus net.inet.tcp.maxtcptw is close to kern.ipc.maxsockets then increase this
    # value
    #kern.ipc.maxsockets=25600

    # spread tcp timer callout load evenly across cpus. We did not see any speed
    # benefit from enabling per cpu timers. The default is off(0)
    #net.inet.tcp.per_cpu_timers=0

    # Increase maxdgram length for jumbo frames (9000 mtu) OSPF routing. Safe for
    # 1500 mtu too.
    #net.inet.raw.maxdgram=9216
    #net.inet.raw.recvspace=9216

    # Seeding cryptographic random number generators is provided by the /dev/random
    # device, which provides pseudo "real" randomness. The arc4random(3) library call
    # provides a pseudo-random sequence which is generally reckoned to be suitable
    # for simple cryptographic use. The OpenSSL library also provides functions for
    # managing randomness via functions such as RAND_bytes(3) and RAND_add(3). Note
    # that OpenSSL uses the random device /dev/random for seeding automatically.
    # http://manpages.ubuntu.com/manpages/lucid/man4/random.4freebsd.html
    #kern.random.yarrow.gengateinterval=10 # default 10 [4..64]
    #kern.random.yarrow.bins=10 # default 10 [2..16]
    #kern.random.yarrow.fastthresh=192 # default 192 [64..256]
    #kern.random.yarrow.slowthresh=256 # default 256 [64..256]
    #kern.random.yarrow.slowoverthresh=2 # default 2 [1..5]
    #kern.random.sys.seeded=1 # default 1
    #kern.random.sys.harvest.ethernet=1 # default 1
    #kern.random.sys.harvest.point_to_point=1 # default 1
    #kern.random.sys.harvest.interrupt=1 # default 1
    #kern.random.sys.harvest.swi=0 # default 0 and actually does nothing when enabled

    # IPv6 Security
    # For more info see http://www.fosslc.org/drupal/content/security-implications-ipv6
    # Disable Node info replies
    # To see this vulnerability in action run `ping6 -a sglAac ::1` or `ping6 -w ::1` on an unprotected node
    #net.inet6.icmp6.nodeinfo=0
    # Turn on IPv6 privacy extensions
    # For more info see proposal http://unix.derkeiler.com/Mailing-Lists/FreeBSD/net/2008-06/msg00103.html
    #net.inet6.ip6.use_tempaddr=1
    #net.inet6.ip6.prefer_tempaddr=1
    # Disable ICMP redirect
    #net.inet6.icmp6.rediraccept=0
    # Disable acceptance of router advertisements (RA) and automatic link-local address generation if you don't use them
    ##net.inet6.ip6.accept_rtadv=0
    ##net.inet6.ip6.auto_linklocal=0

    #
    ##
    ### EOF ###