Pulling images from Docker Hub inside mainland China can sometimes be slow or unreliable; in that case you can configure a registry mirror (accelerator).
Dockerized in practice: https://github.com/y0ngb1n/dockerized
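A common way to configure a mirror is via Docker's `/etc/docker/daemon.json`; a minimal sketch (the mirror URL below is a placeholder, substitute the accelerator endpoint your provider gives you):

```json
{
  "registry-mirrors": ["https://your-mirror.example.com"]
}
```

After editing the file, restart the daemon (`sudo systemctl restart docker`) and confirm the endpoint appears under "Registry Mirrors" in the output of `docker info`.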
```
#!/bin/bash
# Harbor on Ubuntu 18.04
# Prompt the user to ask if the install should use the IP address
# or the Fully Qualified Domain Name of the Harbor server
PS3='Would you like to install Harbor based on IP or FQDN? '
select option in IP FQDN
do
  case $option in
    IP)
```
Copy the locale.md file below into the same directory as your Dockerfile.

```
FROM alpine:3.9

# Install language pack (glibc build from sgerrand/alpine-pkg-glibc)
RUN apk --no-cache add ca-certificates wget && \
    wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub && \
    wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.25-r0/glibc-2.25-r0.apk && \
    apk add glibc-2.25-r0.apk
```

1. Generate a new domain name for the logstash server.
   For this tutorial:
   - domain name = logstash-prod.xyz.com
   - ip = 1.2.3.4
   * Enter the following directory:
   ```
   $ sudo mkdir /etc/pki
   $ cd /etc/pki
   ```
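The `/etc/pki` directory typically holds the TLS certificate that Filebeat uses to verify the Logstash server. A sketch of generating a self-signed certificate for the tutorial domain (the `tls/` subdirectory layout and file names are assumptions, not from the original):

```shell
# Sketch: self-signed cert for logstash-prod.xyz.com (assumed layout/names).
sudo mkdir -p /etc/pki/tls/certs /etc/pki/tls/private
cd /etc/pki/tls
# -nodes: unencrypted key; -batch: no interactive prompts
sudo openssl req -x509 -batch -nodes -days 3650 -newkey rsa:2048 \
  -subj '/CN=logstash-prod.xyz.com/' \
  -keyout private/logstash.key -out certs/logstash.crt
```

Point Logstash's beats input at the `.crt`/`.key` pair, and ship the `.crt` to each Filebeat host so it can validate the connection.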
```
# -*- mode: ruby -*-
# vi: set ft=ruby :
# This script to install Kubernetes will get executed after we have provisioned the box
$script = <<-SCRIPT
apt-get update
apt-get install -y docker.io docker-compose python
curl -s https://storage.googleapis.com/harbor-releases/release-1.7.0/harbor-online-installer-v1.7.1.tgz | tar zxv
cd harbor
```
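In a Vagrantfile, a `$script` heredoc like the one above is typically wired in as an inline shell provisioner; a minimal sketch (the box name and private-network IP are assumptions, not from the original):

```ruby
# Sketch: wiring the $script heredoc into a Vagrant shell provisioner.
# Box name and IP below are assumptions.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  config.vm.network "private_network", ip: "192.168.56.10"
  # Run the install script once the box has been provisioned
  config.vm.provision "shell", inline: $script
end
```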
https://github.com/urfave/cli
https://github.com/mreiferson/go-httpclient
https://github.com/crewjam/rfc5424
https://github.com/kubernetes/heapster
https://github.com/go-openapi/spec
https://github.com/andygrunwald/go-gerrit
https://github.com/openshift/ci-secret-mirroring-controller
https://github.com/fsnotify/fsnotify
https://github.com/BurntSushi/toml
https://github.com/kubernetes-csi/drivers
```
#!/bin/bash
# k8setup script v1.2019.4.11
# RHEL or CentOS 7.4+
# Direct any questions to [email protected]
# 01101000 01110100 01110100 01110000 01110011 00111010 00101111 00101111 01101100 01100001 01101110 01100100 01101111 01101110 01101011 01100101 01111001 00101110 01100011 01101111 01101101 00101111
# Watch how it is used on youtube: https://youtu.be/KWehrWGjkm4
#
```
This is an example configuration that has nginx output JSON logs, making them easier for Logstash to process. I was trying to get nginx > Filebeat > Logstash > ES working, and it wasn't until I connected Filebeat directly to Elasticsearch that I saw the expected data. Google led me to ingest-convert.sh, and I realized `filebeat setup` works for Filebeat > ES but not for Filebeat > Logstash > ES. This is because Logstash does not use ingest pipelines by default; you have to enable them in the elasticsearch output block.
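Enabling ingest pipelines in Logstash's elasticsearch output looks roughly like the sketch below, following Elastic's documented pattern of routing on Filebeat's `[@metadata][pipeline]` field (the host and index naming here are assumptions, not from the original):

```
# Sketch: elasticsearch output block with ingest pipelines enabled.
# hosts/index values are assumptions.
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    pipeline => "%{[@metadata][pipeline]}"
  }
}
```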
Having nginx log JSON in the format required for Elasticsearch means there's very little processing (i.e. grok) to be done in Logstash. nginx can only output JSON for access logs; the error_log format cannot be changed.
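A JSON access-log format in nginx is declared with `log_format`; a minimal sketch (the exact field selection is an assumption, trim or extend it to match your dashboards):

```
# Sketch: JSON access-log format; field list is an assumption.
# escape=json requires nginx >= 1.11.8.
log_format json_access escape=json
  '{'
    '"time_iso8601":"$time_iso8601",'
    '"remote_addr":"$remote_addr",'
    '"request":"$request",'
    '"status":"$status",'
    '"body_bytes_sent":"$body_bytes_sent",'
    '"http_referer":"$http_referer",'
    '"http_user_agent":"$http_user_agent"'
  '}';

access_log /var/log/nginx/access.json.log json_access;
```

With `escape=json`, nginx escapes quotes and control characters in variable values, so the emitted lines stay valid JSON.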
Extra fields are output that the Kibana dashboards do not use; I included them in case they might be useful. Since they are not declared in the filebeat setup, their default is "string" when yo