$ git clone https://github.com/ceph/ceph.git
$ cd ceph
$ export CEPH_HOME="$(pwd)"
$ # build ceph following the instructions from https://docs.ceph.com/docs/master/dev/quick_guide/
$ ./install-deps.sh
$ ./do_cmake.sh -DWITH_MANPAGE=OFF -DWITH_BABELTRACE=OFF -DWITH_MGR_DASHBOARD_FRONTEND=OFF
$ cd build

Since the Cuttlefish release, Ceph has included the objecter_inflight_ops/objecter_inflight_op_bytes throttles, which tell the objecter to throttle outgoing ops according to its budget. When you run rados bench, you usually specify the object size with the -b parameter and the number of threads used in the test with the -t parameter. If you don't update objecter_inflight_ops/objecter_inflight_op_bytes to match these test parameters, you will see multiple lines of 0 output at the beginning of the rados bench output. Here is an example result from an incorrect configuration:
$ bin/rados bench -p rados 60 write -b 4M -t 128 --show-time --write-object --no-hints
hints = 0
2020-01-22 15:24:47.443047 Maintaining 128 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 60 seconds or 0 objects
2020-01-22 15:24:47.443106 Object prefix: benchmark_data_pl2_2784201
2020-01-22 15:24:47.852077 sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
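The stalls disappear once the throttle budget covers the in-flight data implied by the test parameters: -t 128 concurrent ops of -b 4 MiB each need a budget of at least 128 ops and 128 * 4 MiB = 536870912 bytes. Here is a minimal sketch of a corrected run, assuming the defaults are too small for this workload; like most Ceph configuration options, these throttles can be overridden directly on the command line:

$ # sketch: 1024 ops and 1 GiB of in-flight bytes, leaving headroom above the 512 MiB minimum
$ bin/rados bench -p rados 60 write -b 4M -t 128 --show-time --write-object --no-hints \
      --objecter_inflight_ops=1024 --objecter_inflight_op_bytes=1073741824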
This procedure applies only to connections with bandwidth below 40 Gbps. For evaluating higher-bandwidth links, check out the post iperf3 at 40Gbps and above.
Start the iperf3 server, binding it to the server's IP address:

iperf3 -B <server_ip> -s

For example:
iperf3 -B 10.0.1.20 -s
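The matching client invocation is not shown above; a sketch, assuming the example server at 10.0.1.20, a 60-second test, and four parallel streams:

# run on the client machine: connect to the server, 60 s duration, 4 parallel streams
iperf3 -c 10.0.1.20 -t 60 -P 4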