/* open up Chrome dev tools (Menu > More tools > Developer tools)
 * go to the Network tab, refresh the page, and wait for images to load
 *   (on some sites you may have to scroll down to the images for them to start loading)
 * right click/ctrl click on any entry in the network log, select Copy > Copy All as HAR
 * open up the JS console and enter: var har = [paste]
 *   (pasting could take a while if there are a lot of requests)
 * paste the following JS code into the console
 * copy the output and paste it into a text file
 * open a terminal in the same directory as the text file, then: wget -i [that file]
 */
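The JS code referenced above isn't captured in this note. A minimal sketch, assuming the standard HAR layout (log.entries[].request.url and response.content.mimeType), would be:

var urls = har.log.entries
  .filter(function (e) {
    // keep only responses the server labelled as images
    var t = e.response.content.mimeType;
    return t && t.indexOf('image/') === 0;
  })
  .map(function (e) { return e.request.url; });
// one URL per line, ready to save as the text file for wget -i
console.log(urls.join('\n'));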
A list of useful commands for the ffmpeg command line tool.
Download FFmpeg: https://www.ffmpeg.org/download.html
Full documentation: https://www.ffmpeg.org/ffmpeg.html
# Re-encode to 15 fps at 480px width (height auto-calculated from the aspect ratio), with 450k video / 50k audio bitrate
$ ffmpeg -i source.mp4 -vf fps=fps=15,scale=480:-1 -b:v 450k -b:a 50k out.mp4
-vf scale: https://ffmpeg.org/ffmpeg-filters.html#scale
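Note that scale=480:-1 can produce an odd height, which some encoders reject; per the scale filter docs, a value of -2 keeps the aspect ratio while rounding the height to an even number:
$ ffmpeg -i source.mp4 -vf fps=fps=15,scale=480:-2 -b:v 450k -b:a 50k out.mp4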
# add function to .zshrc
function download-web() {
  # -r: recursive; -nH: don't create a host-name directory;
  # --no-parent: never ascend above the starting URL;
  # --reject: skip the auto-generated directory index pages
  wget -r -nH --no-parent --reject='index.html*' "$@" ;
}
# then run
download-web http://example.com/dl/
# POST a JSON file and redirect output to stdout
wget -q -O - --header="Content-Type:application/json" --post-file=foo.json http://127.0.0.1
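foo.json can be any JSON payload; for a quick test:
echo '{"name": "test"}' > foo.json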
# Download a complete website
# (-m mirror, -k convert links for local viewing, -p fetch page requisites, -E add .html extensions)
wget -m -r -l inf -k -p -q -E -e robots=off http://127.0.0.1
# The shorter form below is often sufficient
wget -mpk http://127.0.0.1
# Download all images of a website
wget -P /destination/ -mpck --user-agent="" -e robots=off --random-wait -E http://example.com/
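If you only want the images and not the rest of the mirror, wget's standard accept-list options can filter by extension instead (this flag combination is my own, not from the original note):
wget -nd -r -P /destination/ -A jpg,jpeg,png,gif -e robots=off http://example.com/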
You can easily scrape (or download) a site with a CLI tool called wget. It's available for Linux, Mac and Windows. If you're on a Mac, I recommend installing it with Homebrew:
brew install wget
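On Linux, the distribution package manager does the same job, e.g. on Debian/Ubuntu:
sudo apt-get install wget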
List files modified in the last hour - find
Use find to list files modified within the last hour:
$ find . -mmin -60
the . is the search path
-mmin -60 matches files modified less than 60 minutes ago (-mtime -1, often seen for this, actually matches the last 24 hours)
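For example, to restrict the search to regular files under /var/log (a path chosen purely for illustration):
$ find /var/log -type f -mmin -60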
Spider Websites with Wget – 20 Practical Examples
Wget is extremely powerful, but like most other command line programs, the plethora of options it supports can be intimidating to new users. What we have here is a collection of wget commands you can use to accomplish common tasks, from downloading single files to mirroring entire websites. It will help if you can read through the wget manual, but for the busy souls, these commands are ready to execute.
1. Download a single file from the Internet
wget http://example.com/file.iso
2. Download a file but save it locally under a different name
wget --output-document=filename.html example.com
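-O is the short form of --output-document, so the equivalent shorthand is:
wget -O filename.html example.com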