start new:
tmux
start new with session name:
tmux new -s myname
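A few related session commands that usually follow (standard tmux usage, added here for completeness):
tmux ls                        # list running sessions
tmux attach -t myname          # re-attach to the named session
tmux kill-session -t myname    # end the named session
# detach from inside a session with Ctrl-b, then d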
a4b.amazonaws.com
access-analyzer.amazonaws.com
account.amazonaws.com
acm-pca.amazonaws.com
acm.amazonaws.com
airflow-env.amazonaws.com
airflow.amazonaws.com
alexa-appkit.amazon.com
alexa-connectedhome.amazon.com
amazonmq.amazonaws.com
# -----------------------------------------------------------------------------------------
# Purpose: To manage system users:
#   - create/update a user who is allowed to access the host over ssh
#   - add the user's public ssh key to their authorized_keys
#   - allow the user to use sudo by putting a config file into /etc/sudoers.d/
#   - remove the authorized_keys of inactive users
#   - remove inactive users
#   - remove sudo permission by removing the user's config file in /etc/sudoers.d/, if any
# Maintenance: Nghia Le [at] INFOTECHVIET
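The script body itself is not included in this note; a minimal bash sketch of the operations listed above, using a hypothetical user name and key, could look like this:
USERNAME=deploy                                 # hypothetical user name
PUBKEY="ssh-ed25519 AAAA... deploy@example"     # hypothetical public key
# create the user if it does not exist yet
id "$USERNAME" >/dev/null 2>&1 || sudo useradd -m -s /bin/bash "$USERNAME"
# install the public key into the user's authorized_keys
sudo install -d -m 700 -o "$USERNAME" -g "$USERNAME" "/home/$USERNAME/.ssh"
echo "$PUBKEY" | sudo tee "/home/$USERNAME/.ssh/authorized_keys" >/dev/null
sudo chown "$USERNAME:$USERNAME" "/home/$USERNAME/.ssh/authorized_keys"
sudo chmod 600 "/home/$USERNAME/.ssh/authorized_keys"
# grant sudo via a drop-in file under /etc/sudoers.d/
echo "$USERNAME ALL=(ALL) NOPASSWD:ALL" | sudo tee "/etc/sudoers.d/$USERNAME" >/dev/null
# deactivation: drop the key and the sudoers file, then the user itself
# sudo rm -f "/home/$USERNAME/.ssh/authorized_keys" "/etc/sudoers.d/$USERNAME"
# sudo userdel -r "$USERNAME"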
## Reference :
### https://phoenixnap.com/kb/how-to-install-virtualbox-centos-7
sudo yum update -y
### Install vboxdrv Kernel Module
sudo yum install -y patch gcc kernel-headers kernel-devel make perl wget
sudo reboot
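The note stops after the reboot; the remaining steps in the referenced guide are essentially adding the Oracle yum repository and installing the package (the version number below is only an example of what the repo may offer):
wget https://download.virtualbox.org/virtualbox/rpm/el/virtualbox.repo
sudo mv virtualbox.repo /etc/yum.repos.d/
sudo yum install -y VirtualBox-6.1   # pick whichever version the repo provides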
We need to run multiple versions of kubectl for compatibility with various Kubernetes clusters.
It's a bit easier to get and use multiple versions of kubectl with Homebrew than with MacPorts.
With brew, it's simple to install several versions side by side and switch between them, for example:
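A sketch of one common approach; the versioned formula name below is an example and depends on what Homebrew currently ships:
brew install kubernetes-cli             # latest kubectl
brew install kubernetes-cli@1.25        # an older, versioned formula (example name)
brew unlink kubernetes-cli
brew link --force kubernetes-cli@1.25   # make the older version the active kubectl
kubectl version --client                # confirm which version is on the PATH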
// Set up AWS credentials
sc.hadoopConfiguration.set("fs.s3a.access.key", "AKI*****************")
sc.hadoopConfiguration.set("fs.s3a.secret.key", "kd8***********************************")
def wordcount() = {
  // Read & process S3 file - word count
  val abc_file = sc.textFile("s3a://prms-s3/data/abc.txt")
  val counts = abc_file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
  //counts.saveAsTextFile("s3a://s3-to-ec2/output")
  counts.toDF().show()
}
wordcount()   // run it
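These lines are intended for an interactive spark-shell session; if the S3A connector is not already on the cluster classpath, it usually has to be pulled in when launching the shell (the hadoop-aws version here is an assumption and must match the cluster's Hadoop build):
spark-shell --packages org.apache.hadoop:hadoop-aws:2.7.3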
## Reference : https://docs.cloudera.com/documentation/enterprise/5-9-x/topics/spark_s3.html
## PySpark / Python:
## Step 1 : Generate a Hadoop AWS credential file (run this on a cluster node)
# hadoop credential create fs.s3a.access.key -provider jceks://hdfs/user/centos/awskeyfile.jceks -value AKI*****************
# hadoop credential create fs.s3a.secret.key -provider jceks://hdfs/user/centos/awskeyfile.jceks -value kd8**********************************
## Step 2 : Simple PySpark program to access an S3 file (s3_access.py)
from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession
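The rest of s3_access.py is not shown in this note. When submitting it, the credential file from Step 1 is typically handed to Spark through a spark.hadoop.* property; the jceks path below matches Step 1, the rest of the command is an assumption:
spark-submit \
  --conf spark.hadoop.hadoop.security.credential.provider.path=jceks://hdfs/user/centos/awskeyfile.jceks \
  s3_access.py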
## macOS install
brew install pdsh
## Skip strict host key checking - add this to the .zshrc file
export PDSH_SSH_ARGS_APPEND="-q -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o PreferredAuthentications=publickey"
## Set the default ssh key file in the "~/.ssh/config" file
IdentityFile "~/.ssh/xyz.pem"
## Add alias helper commands to the .zshrc file, for example:
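The original aliases are not shown; a hypothetical one, assuming a centos login user and an example host range, might be:
alias pdsh-all='pdsh -R ssh -l centos -w "10.0.0.[1-3]"'   # user and host range are examples
pdsh-all uptime                                            # run a command on every host in the list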
# Note (pre-reqs): install & configure the AWS CLI, and install jq
aws ec2 describe-instances | tee /tmp/ins |
  jq --raw-output '.Reservations[].Instances[0] | [ .InstanceId, .PublicIpAddress, .PrivateIpAddress, .PrivateDnsName,
    (.Tags[] | select(.Key == "Name") | .Value ) ] | @tsv'
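To keep this handy, it can be wrapped in a small shell function for the .zshrc file; the function name and the column -t formatting are additions:
ec2-list() {
  aws ec2 describe-instances |
    jq --raw-output '.Reservations[].Instances[0] | [ .InstanceId, .PublicIpAddress, .PrivateIpAddress, .PrivateDnsName,
      (.Tags[] | select(.Key == "Name") | .Value ) ] | @tsv' | column -t
}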
## Reference : https://www.tecmint.com/test-network-throughput-in-linux/
## Install iperf
sudo apt install iperf3   # Debian/Ubuntu
sudo yum install iperf3   # RHEL/CentOS
sudo dnf install iperf3   # Fedora 22+
## Start the server on port 3000
iperf3 -s -f K -p 3000
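The measurement itself is run from a second machine, pointing the iperf3 client at that server; the IP address below is a placeholder:
iperf3 -c 192.168.1.10 -f K -p 3000   # replace 192.168.1.10 with the server's address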