bin/kafka-topics.sh --zookeeper localhost:2181 --list
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic mytopic
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic mytopic --config retention.ms=1000
... wait a minute ...
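Once the shortened retention has had time to purge the old messages, you normally want to drop the temporary override again so the topic returns to its default retention. A sketch of that step, assuming the same ZooKeeper address and topic; depending on your Kafka version you may need kafka-configs.sh instead of the --delete-config option:

bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic mytopic --delete-config retention.ms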
For this configuration you can use any web server you like; I decided to use nginx because it is what I work with most.
Generally, a properly configured nginx can handle up to 400K-500K requests per second (clustered); the most I have seen myself is 50K-80K requests per second (non-clustered) at around 30% CPU load. Granted, that was on 2 x Intel Xeon with Hyper-Threading enabled, but it works without problems on slower machines too.
Keep in mind that this config is used in a testing environment, not in production, so you will need to work out how best to adapt these settings for your own servers.
The standard names for indexes in PostgreSQL are:
{tablename}_{columnname(s)}_{suffix}
where the suffix is one of the following:
- pkey for a Primary Key constraint;
- key for a Unique constraint;
- excl for an Exclusion constraint;
- idx for any other kind of index.
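As a quick illustration of these defaults (the table and column names below are made up for the example), PostgreSQL generates names following this pattern when you let it name the objects itself:

CREATE TABLE users (
    id integer PRIMARY KEY,   -- backing index is named users_pkey
    email text UNIQUE,        -- unique constraint index is named users_email_key
    created_at timestamptz
);
CREATE INDEX ON users (created_at);  -- unnamed index defaults to users_created_at_idx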
wrk.method = "POST"
-- this assumes that we're sending to an HTML form endpoint with CSRF protection disabled
-- wrk.body and wrk.headers would be different if we were sending a JSON payload to a JSON API endpoint
wrk.body = "message[body]=wat"
wrk.headers["Content-Type"] = "application/x-www-form-urlencoded"
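To drive the benchmark with this script, save it to a file and pass it to wrk via -s; the script name, thread/connection counts and URL below are only placeholders for the example:

wrk -t4 -c100 -d30s -s post.lua http://localhost:8080/messages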