start new:
tmux
start new with session name:
tmux new -s myname
# Deploy to AWS using docker-machine and docker-compose

## 1. Prepare your machine

Please follow these steps (a quick verification sketch follows the list):

1.1 Install docker on your local machine. [Installation link](https://docs.docker.com/engine/installation/)
1.2 Install docker-machine on your local machine. [Installation link](https://docs.docker.com/machine/install-machine/)
1.3 Install docker-compose on your local machine. [Installation link](https://docs.docker.com/compose/install/)
1.4 Create an account on AWS (skip this step if you already have one). [AWS](https://aws.amazon.com/)
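Before moving on, you can confirm that the tools from steps 1.1–1.3 are installed and on your PATH. The sketch below is illustrative only (it is not part of the original guide); it simply queries each CLI's `--version`:

```python
import shutil
import subprocess

# Tools required by steps 1.1-1.3 above; the AWS account (step 1.4) is checked manually.
REQUIRED_TOOLS = ["docker", "docker-machine", "docker-compose"]

def check_tools():
    """Print the version of each required CLI tool, or flag it as missing."""
    for tool in REQUIRED_TOOLS:
        if shutil.which(tool) is None:
            print(f"{tool}: NOT FOUND - install it before continuing")
            continue
        # All three CLIs support a --version flag; print the first line of its output.
        result = subprocess.run([tool, "--version"], capture_output=True, text=True)
        lines = result.stdout.strip().splitlines()
        print(lines[0] if lines else f"{tool}: installed (no version output)")

if __name__ == "__main__":
    check_tools()
```

If any tool is reported missing, revisit the corresponding installation link before continuing.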
// Overwrite settings by placing them in this settings file.
// See http://go.microsoft.com/fwlink/?LinkId=808995 for the most commonly used settings.
{
    // Editor
    // Controls the font family.
    "editor.fontFamily": "Consolas, 'Courier New', monospace",
    // Controls the font weight.
| Filter | Description | Example |
|---|---|---|
| allintext | Finds pages whose text contains every one of the given keywords. | allintext:"keyword" |
| intext | Finds pages whose text contains the given keyword. | intext:"keyword" |
| inurl | Finds pages whose URL contains the given keyword. | inurl:"keyword" |
| allinurl | Finds pages whose URL contains every keyword in the query. | allinurl:"keyword" |
| intitle | Finds pages whose title contains the given keyword. | intitle:"keyword" |
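As a quick illustration of how these filters read in practice, here is a small sketch; the keyword choices are made-up examples, not queries from the original article:

```python
# Hypothetical examples combining the filters from the table above.
# Each entry maps a filter to a sample query you could paste into a Google search box.
sample_dorks = {
    "allintext": 'allintext:"confidential report"',  # every word must appear in the page text
    "intext":    'intext:"password"',                # keyword must appear in the page text
    "inurl":     'inurl:"admin"',                    # keyword must appear in the URL
    "allinurl":  'allinurl:"login php"',             # every word must appear in the URL
    "intitle":   'intitle:"index of"',               # keyword must appear in the page title
}

for filter_name, query in sample_dorks.items():
    print(f"{filter_name:>10}: {query}")
```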
Original Source: https://securitytrails.com/blog/google-hacking-techniques
Some time ago we wrote an interesting post about the [OSINT](https://securitytrails.com/blog/what
| " _ _ " | |
| " _ /|| . . ||\ _ " | |
| " ( } \||D ' ' ' C||/ { % " | |
| " | /\__,=_[_] ' . . ' [_]_=,__/\ |" | |
| " |_\_ |----| |----| _/_|" | |
| " | |/ | | | | \| |" | |
| " | /_ | | | | _\ |" | |
| It is all fun and games until someone gets hacked! |
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy.crawler import CrawlerProcess
import tldextract
from whoischeck import main

# A spider that crawls a website and saves all the links it finds to a JSON file.

def get_root_domains(file_name):
    # Body not in the original; a plausible reading given the tldextract import:
    # return the unique registered (root) domains of the URLs listed in file_name.
    with open(file_name) as f:
        return {tldextract.extract(line.strip()).registered_domain
                for line in f if line.strip()}
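The spider referenced in the comment above is not included in the fragment. Below is a minimal sketch of what such a spider could look like, built only from the imports already shown; the class name, start URL, and output file name are illustrative assumptions:

```python
class LinkSpider(CrawlSpider):
    """Illustrative sketch: crawl a site and record every link it visits."""
    name = "link_spider"
    start_urls = ["https://example.com"]  # assumed placeholder start URL
    rules = (
        # Follow every extracted link and hand each response to parse_item.
        Rule(LinkExtractor(), callback="parse_item", follow=True),
    )

    def parse_item(self, response):
        # Yield one record per crawled page; Scrapy's feed export writes them out.
        yield {"url": response.url}

if __name__ == "__main__":
    # The FEEDS setting tells Scrapy to export the yielded items to a JSON file.
    process = CrawlerProcess(settings={"FEEDS": {"links.json": {"format": "json"}}})
    process.crawl(LinkSpider)
    process.start()
```

Running the script crawls the assumed start URL, follows every extracted link, and writes one record per page to `links.json` via Scrapy's feed export.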
# git aliases - taken from oh-my-zsh's git plugin and translated to bash
# https://github.com/robbyrussell/oh-my-zsh/wiki/Cheatsheet#helpful-aliases-for-common-git-tasks
# https://github.com/robbyrussell/oh-my-zsh/blob/master/plugins/git/git.plugin.zsh

# Print the current branch name (falls back to the short commit hash when HEAD is detached).
function git_current_branch() {
  ref=$(git symbolic-ref HEAD 2> /dev/null) || \
  ref=$(git rev-parse --short HEAD 2> /dev/null) || return
  echo ${ref#refs/heads/}
}

function git_current_repository() {
  ref=$(git symbolic-ref HEAD 2> /dev/null) || \