Scrape An Entire Website with wget
This worked very nicely for a single-page site:

```
wget \
  --recursive \
  --page-requisites \
  --convert-links \
  [website]
```
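As a quick sketch of what a run looks like (example.com is a placeholder, not from the original notes):

```
# Hypothetical invocation; substitute the real site for example.com.
# wget writes the mirror into a directory named after the host.
wget \
  --recursive \
  --page-requisites \
  --convert-links \
  https://example.com/

# Open the local copy (the exact entry file depends on the site):
xdg-open example.com/index.html
```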
wget options:

```
wget \
  --recursive \
  --no-clobber \
  --page-requisites \
  --html-extension \
  --convert-links \
  --restrict-file-names=windows \
  --domains website.org \
  --no-parent \
  www.website.org
```

- `--recursive`: download the entire website.
- `--domains website.org`: don't follow links outside website.org.
- `--no-parent`: don't ascend to the parent directory, so only pages at or below the starting URL are fetched.
- `--page-requisites`: get all the elements that compose the page (images, CSS and so on).
- `--html-extension`: save files with the `.html` extension (newer wget releases call this `--adjust-extension`; the old name still works as an alias).
- `--convert-links`: convert links so that they work locally, off-line.
- `--restrict-file-names=windows`: modify filenames so that they also work on Windows.
- `--no-clobber`: don't overwrite any existing files (useful when the download is interrupted and resumed).
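On a recent wget the same mirror can be written more compactly. This is a minimal sketch; the politeness values for `--wait` and `--limit-rate` are arbitrary assumptions, not from the original notes:

```
# --mirror expands to -r -N -l inf --no-remove-listing;
# --adjust-extension is the current name for --html-extension.
# The --wait and --limit-rate values below are arbitrary examples,
# included only to stay polite toward the server.
wget \
  --mirror \
  --page-requisites \
  --adjust-extension \
  --convert-links \
  --no-parent \
  --wait=1 \
  --limit-rate=200k \
  https://www.website.org/
```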
There is also [node-wget](https://github.com/wuchengwei/node-wget) if you need to drive downloads from Node.js.