For educational reasons I've decided to create my own CA. Here is what I learned.
Let's get some context first.
```
ssh-keygen -t rsa -b 4096 -N '' -C "[email protected]" -f ~/.ssh/id_rsa
ssh-keygen -t rsa -b 4096 -N '' -C "[email protected]" -f ~/.ssh/github_rsa
ssh-keygen -t rsa -b 4096 -N '' -C "[email protected]" -f ~/.ssh/mozilla_rsa
```
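With separate keys per service, OpenSSH needs to know which one to offer where. A minimal `~/.ssh/config` sketch (the host patterns are assumptions based on the key names above):

```
Host github.com
    IdentityFile ~/.ssh/github_rsa
Host *.mozilla.org
    IdentityFile ~/.ssh/mozilla_rsa
Host *
    IdentityFile ~/.ssh/id_rsa
```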
```
$ sudo sysctl net.bridge.bridge-nf-call-iptables=1
$ sudo sysctl net.bridge.bridge-nf-call-ip6tables=1
```
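Values set with `sysctl` this way are lost on reboot. One way to persist them, assuming a distro that reads `/etc/sysctl.d/` and has the `br_netfilter` module loaded:

```
cat <<'EOF' | sudo tee /etc/sysctl.d/99-bridge-nf.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system    # reload settings from all configuration files
```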
Just plug in your own values for the registry and the repo/image name.

```
registry='localhost:5000'
name='my-image'
tag='latest'    # assumption: the original snippet breaks off before showing how the tag is chosen
curl -v -sSL -X DELETE "http://${registry}/v2/${name}/manifests/$(
    curl -sSL -I \
        -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
        "http://${registry}/v2/${name}/manifests/${tag}" \
    | awk '$1 == "Docker-Content-Digest:" { print $2 }' \
    | tr -d $'\r' \
)"
```

The inner `curl` resolves the tag to its content digest, since the registry API only accepts DELETE by digest, not by tag.
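Note that the registry refuses manifest deletion unless it is switched on; with the stock `registry:2` image that's an environment variable (a sketch, adapt to your deployment):

```
docker run -d -p 5000:5000 -e REGISTRY_STORAGE_DELETE_ENABLED=true registry:2
```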
-H "Accept: application/vnd.docker.distribution.manifest.v2+json" \| ### java -jar | |
```
java -Xdebug -Xrunjdwp:transport=dt_socket,server=y,address=8001,suspend=y -jar target/cxf-boot-simple-0.0.1-SNAPSHOT.jar
```

### Maven

Debug Spring Boot app with Maven:

```
mvn spring-boot:run -Drun.jvmArguments="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8001"
```
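With `suspend=y` the JVM blocks at startup until a debugger attaches. Any IDE remote-debug configuration pointed at the port above works; as a quick check from the command line you can also use `jdb`:

```
jdb -attach localhost:8001
```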
For this configuration you can use any web server you like; I decided to use nginx because it's what I work with most.
Generally, a properly configured nginx can handle up to 400K–500K requests per second (clustered); the most I have seen myself is 50K–80K requests per second (non-clustered) at around 30% CPU load. Granted, that was on 2x Intel Xeon CPUs with Hyper-Threading enabled, but it works without problems on slower machines too.
Keep in mind that this config is used in a testing environment, not in production, so you will need to work out how best to adapt these features to your own servers.
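As a taste of what follows, the kind of worker and connection tuning such a config builds on looks roughly like this (the numbers are illustrative assumptions, not benchmarked values):

```
worker_processes auto;           # one worker per CPU core
events {
    worker_connections 4096;     # per-worker connection cap
    multi_accept on;             # accept as many new connections as possible at once
}
http {
    sendfile on;                 # serve static files from kernel space
    tcp_nopush on;
    keepalive_timeout 30;
    gzip on;
}
```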
```
cd /opt
wget http://apache-mirror.rbc.ru/pub/apache/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz
tar xvzf kafka_2.11-0.10.1.0.tgz
ln -s kafka_2.11-0.10.1.0/ kafka
vi /etc/systemd/system/kafka-zookeeper.service
```
```
[Unit]
Description=Apache Zookeeper server (Kafka)
Documentation=http://zookeeper.apache.org
```
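The snippet breaks off after the `[Unit]` section. A minimal sketch of how the unit presumably continues, assuming the `/opt/kafka` symlink created above and the `zookeeper-server-start.sh` script that ships with Kafka:

```
[Service]
ExecStart=/opt/kafka/bin/zookeeper-server-start.sh /opt/kafka/config/zookeeper.properties
ExecStop=/opt/kafka/bin/zookeeper-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After saving the file, `systemctl daemon-reload` followed by `systemctl start kafka-zookeeper` should bring Zookeeper up.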
### Gist
This gist details how to inner join two large datasets on the map side, leveraging the join capability in MapReduce. Such a join makes sense if both input datasets are too large to qualify for distribution through the DistributedCache, and it can be implemented if the datasets share a join key and both are sorted in the same order by that key.
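Before getting into those specifics, here is a rough sketch (not the gist's actual code) of how such a map-side join is typically wired up with Hadoop's `CompositeInputFormat`; the paths, input format, and class name below are assumptions:

```
// MapSideJoinDriver.java — illustrative driver only
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MapSideJoinDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "map-side inner join");
        job.setJarByClass(MapSideJoinDriver.class);
        // The join expression names the operation ("inner") and the two inputs,
        // which must be sorted and identically partitioned on the join key.
        job.getConfiguration().set("mapreduce.join.expr",
                CompositeInputFormat.compose("inner",
                        SequenceFileInputFormat.class,
                        new Path("/data/left"), new Path("/data/right")));
        job.setInputFormatClass(CompositeInputFormat.class);
        // The default (identity) mapper then receives each key once, paired with
        // a TupleWritable holding the matching value from each input.
        job.setNumReduceTasks(0);   // map-only: the join happens before the mapper
        FileOutputFormat.setOutputPath(job, new Path("/data/joined"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```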
There are two critical pieces to engaging the join behavior: