Changyuan Lin linncy

@linncy
linncy / ufw_allow_from_ubc_asn.sh
Created July 17, 2024 22:07
ufw_allow_from_ubc_asn.sh
sudo ufw allow from 128.189.128.0/18
sudo ufw allow from 128.189.16.0/20
sudo ufw allow from 128.189.192.0/18
sudo ufw allow from 128.189.64.0/19
sudo ufw allow from 128.189.96.0/19
sudo ufw allow from 137.82.0.0/16
sudo ufw allow from 142.103.0.0/16
sudo ufw allow from 142.231.160.0/19
sudo ufw allow from 142.231.20.0/24
sudo ufw allow from 142.231.24.0/21
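The prefixes above look like the route objects registered for a UBC ASN. As a hypothetical, dependency-free sketch (the ASN is not stated in the gist, and the `whois -h whois.radb.net -- '-i origin <ASN>'` query in the comment is an assumption), the rule list could be regenerated from such whois output like this:

```python
# Hypothetical helper: turn RADb route objects into ufw rules.
# 'sample' mimics output of: whois -h whois.radb.net -- '-i origin <ASN>'
sample = """\
route:      137.82.0.0/16
route:      142.103.0.0/16
"""

rules = [
    "sudo ufw allow from " + line.split()[1]
    for line in sample.splitlines()
    if line.startswith("route:")
]
print("\n".join(rules))
# sudo ufw allow from 137.82.0.0/16
# sudo ufw allow from 142.103.0.0/16
```

Piping live whois output through the same filter (and into `sh`) would keep the firewall in sync when the ASN announces new prefixes.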
@linncy
linncy / gist:8ed3c176f34bd954138b9e18d3c5592e
Created May 6, 2022 21:20 — forked from corck/gist:d4c63e3908548963ce2e58b45d9976e2
Restart NetworkManager on connection reset, restart VPN after NetworkManager restart
Monitors the NetworkManager connection and restarts it on a dropped connection (ethernet/Wi-Fi). Make sure to specify the right interface below (eth0, wlan0, ...).
# /etc/init/reconnect.conf
start on started network-manager
stop on runlevel [016]
script
    while true; do
        if ifconfig eth0 | grep -q "inet addr:"; then
            # echo "all ok!"
            sleep 60
        else
            # No address on the interface: restart NetworkManager.
            service network-manager restart
            sleep 10
        fi
    done
end script
@linncy
linncy / yarn_applications_kill.sh
Created March 19, 2022 17:55 — forked from y2k-shubham/yarn_applications_kill.sh
Kill all running YARN applications
`yarn application -list | awk '{print $1}' | grep application_ | xargs -n1 yarn application -kill`

**How to Implement a Movie Recommendation System - P5 - Spark**

What is Spark?

Apache Spark is an open-source, distributed processing system used for big data workloads. It utilizes in-memory caching and optimized query execution for fast queries against data of any size. Simply put, Spark is a fast and general engine for large-scale data processing.

  • Resilient Distributed Dataset (RDD): The RDD has been the primary user-facing API in Spark since its inception. At its core, an RDD is an immutable, distributed collection of your data's elements, partitioned across the nodes in your cluster so that it can be operated on in parallel through a low-level API offering transformations and actions.
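The key idea in that API is that transformations are lazy while actions trigger execution. As a dependency-free sketch (plain Python generators standing in for an RDD, not actual pyspark):

```python
# Plain-Python analogy for the RDD execution model (not pyspark itself):
# transformations build a lazy pipeline; an action forces evaluation.
data = range(1, 6)                       # source "dataset"
squared = map(lambda x: x * x, data)     # transformation: lazy, nothing runs yet
doubled = map(lambda x: 2 * x, squared)  # transformations chain without executing
total = sum(doubled)                     # action: the whole pipeline runs here
print(total)  # 110
```

In real pyspark the same shape would be `rdd.map(...).map(...)` followed by an action such as `reduce` or `collect`, with the work distributed across partitions.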

Code example:

A. Spark docker-compose
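The compose file itself is not included in this preview. A minimal sketch of what a local Spark cluster's docker-compose could look like, assuming the community bitnami/spark image (the image name, service names, and ports here are illustrative, not the gist's actual file):

```yaml
version: "3"
services:
  spark-master:
    image: bitnami/spark:3            # assumed image; original file not shown
    environment:
      - SPARK_MODE=master
    ports:
      - "8080:8080"                   # master web UI
      - "7077:7077"                   # cluster manager port
  spark-worker:
    image: bitnami/spark:3
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://spark-master:7077
    depends_on:
      - spark-master
```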

@linncy
linncy / gist:8705ab5277082230a4eca8412eea7d39
Last active December 6, 2020 22:30 — forked from ryogesh/gist:37ee2273c5fe0d38d8128feb78d7812a
Ubuntu 18.x: jupyterhub init.d script
#!/bin/bash
### BEGIN INIT INFO
# Provides: jupyterhub
# Required-Start: $local_fs $syslog
# Required-Stop: $local_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Jupyterhub service
# Description: Place the script as /etc/init.d/jupyterhub
#
### END INIT INFO
#!/bin/bash
ROOT=~/.certbot-src
# Remove older versions if existing, for a simple re-install/update.
rm -rf "$ROOT"
# Clone certbot source.
git clone https://github.com/certbot/certbot "$ROOT"
@linncy
linncy / gist:7391991d5d41726d8db93778a4735c12
Created March 20, 2019 03:09
[Flake8] google-api-python-client
./google-api-python-client/setup.py:33:1: E402 module level import not at top of file
./google-api-python-client/setup.py:52:1: E402 module level import not at top of file
./google-api-python-client/describe.py:76:80: E501 line too long (99 > 79 characters)
./google-api-python-client/describe.py:144:80: E501 line too long (80 > 79 characters)
./google-api-python-client/describe.py:152:1: E303 too many blank lines (3)
./google-api-python-client/describe.py:196:36: E226 missing whitespace around arithmetic operator
./google-api-python-client/describe.py:198:28: E226 missing whitespace around arithmetic operator
./google-api-python-client/describe.py:200:28: E226 missing whitespace around arithmetic operator
./google-api-python-client/describe.py:205:5: E306 expected 1 blank line before a nested definition, found 0
./google-api-python-client/describe.py:212:23: W605 invalid escape sequence '\s'