
@noscripter forked this gist from stvhwrd/website-dl.md on October 20, 2021 at 08:44.

Revisions

  1. @stvhwrd revised this gist Jun 30, 2019. 1 changed file with 10 additions and 10 deletions.
    20 changes: 10 additions & 10 deletions website-dl.md
    @@ -1,12 +1,12 @@
    ##The best way to download a website for offline use, using wget
    ## The best way to download a website for offline use, using wget

    There are two ways - the first way is just one command run plainly in front of you; the second one runs in the background and in a different instance so you can get out of your ssh session and it will continue.

    First make a folder to download the websites to and begin your downloading: (note if downloading `www.SOME_WEBSITE.com`, you will get a folder like this: `/websitedl/www.SOME_WEBSITE.com/`)

    <br>

    ###STEP 1:
    ### STEP 1:

    ````bash
    mkdir ~/websitedl/
    @@ -17,31 +17,31 @@ Now choose for Step 2 whether you want to download it simply (1st way) or if you

    <br>

    ###STEP 2:
    ### STEP 2:

    ####1st way:
    #### 1st way:

    ````bash
    wget --limit-rate=200k --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://www.SOME_WEBSITE.com
    ````
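
    Once that run finishes, the copy can be browsed straight from disk, because `--convert-links` rewrites links to point at the local files. A quick sanity check might look like this (the `index.html` filename is an assumption; the actual name depends on the site and on `-E` renaming):

    ````bash
    # List what was mirrored, then open the landing page from the local copy.
    ls ~/websitedl/www.SOME_WEBSITE.com/
    open ~/websitedl/www.SOME_WEBSITE.com/index.html   # macOS "open"; on Linux use xdg-open
    ````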

    ####2nd way:
    #### 2nd way:

    #####TO RUN IN THE BACKGROUND:
    ##### TO RUN IN THE BACKGROUND:

    ````bash
    nohup wget --limit-rate=200k --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://www.SOME_WEBSITE.com &
    ````

    #####THEN TO VIEW OUTPUT (there will be a nohup.out file in whichever directory you ran the command from):
    ##### THEN TO VIEW OUTPUT (there will be a nohup.out file in whichever directory you ran the command from):

    ````bash
    tail -f nohup.out
    ````
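
    If you want to check whether the background download is still running, or stop it early, ordinary process tools work; this `pgrep`/`kill` pattern is an addition here, not part of the original gist:

    ````bash
    # Show any running wget processes with their PIDs and full command lines.
    pgrep -fl wget
    # Stop the download if needed (replace 12345 with the PID printed above).
    kill 12345
    ````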

    <br>

    ####WHAT DO ALL THE SWITCHES MEAN:
    #### WHAT DO ALL THE SWITCHES MEAN:

    `--limit-rate=200k` limit download to 200 Kb /sec

    @@ -62,7 +62,7 @@ resumed).

    `-U mozilla` pretends to be just like a browser Mozilla is looking at a page instead of a crawler like wget

    ####PURPOSELY **DIDN'T** INCLUDE THE FOLLOWING:
    #### PURPOSELY **DIDN'T** INCLUDE THE FOLLOWING:

    `-o=/websitedl/wget1.txt` log everything to wget_log.txt - didn't do this because it gave me no output on the screen and I don't like that.

    @@ -75,6 +75,6 @@ resumed).
    <br>
    <br>

    ######tested with zsh 5.0.5 (x86_64-apple-darwin14.0) on Apple MacBook Pro (Late 2011) running OS X 10.10.3
    ###### tested with zsh 5.0.5 (x86_64-apple-darwin14.0) on Apple MacBook Pro (Late 2011) running OS X 10.10.3

    [credit](http://www.kossboss.com/linux---wget-full-website)
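
    For convenience, here is the same command laid out with each switch annotated, using the descriptions from the notes above; this is only a recap of the documented flags, not a change to the gist:

    ````bash
    # --limit-rate=200k   cap the transfer at roughly 200 kilobytes per second
    # --no-clobber        don't overwrite files that already exist (safe if the run is interrupted and resumed)
    # --convert-links     rewrite links so the copy works offline
    # --random-wait       insert random pauses between requests
    # -r                  recurse through the whole site
    # -p                  fetch page requisites (images, CSS, and so on)
    # -E                  save files with the proper extensions
    # -e robots=off       ignore robots.txt restrictions
    # -U mozilla          send a browser-like User-Agent string instead of wget's default
    wget --limit-rate=200k --no-clobber --convert-links --random-wait \
         -r -p -E -e robots=off -U mozilla \
         http://www.SOME_WEBSITE.com
    ````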
  2. @stvhwrd revised this gist Apr 17, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion website-dl.md
    @@ -2,7 +2,7 @@

    There are two ways - the first way is just one command run plainly in front of you; the second one runs in the background and in a different instance so you can get out of your ssh session and it will continue.

    First make a folder to download the websites to and begin your downloading: (note if downloading `www.SOME_WEBSITE.com`, you will get a folder like this: `/websitedl/www.SOME_WEBSITE.com/` )
    First make a folder to download the websites to and begin your downloading: (note if downloading `www.SOME_WEBSITE.com`, you will get a folder like this: `/websitedl/www.SOME_WEBSITE.com/`)

    <br>

  3. @stvhwrd revised this gist Apr 17, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion website-dl.md
    @@ -2,7 +2,7 @@

    There are two ways - the first way is just one command run plainly in front of you; the second one runs in the background and in a different instance so you can get out of your ssh session and it will continue.

    First make a folder to download the websites to and begin your downloading: (note if downloading www.SOME_WEBSITE.com, you will get a folder like this: /websitedl/www.SOME_WEBSITE.com/ )
    First make a folder to download the websites to and begin your downloading: (note if downloading `www.SOME_WEBSITE.com`, you will get a folder like this: `/websitedl/www.SOME_WEBSITE.com/` )

    <br>

  4. @stvhwrd revised this gist Apr 17, 2015. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions website-dl.md
    @@ -22,15 +22,15 @@ Now choose for Step 2 whether you want to download it simply (1st way) or if you
    ####1st way:

    ````bash
    wget --limit-rate=200k --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://www.WEBSITE_YOU_WISH_TO_DOWNLOAD.com
    wget --limit-rate=200k --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://www.SOME_WEBSITE.com
    ````

    ####2nd way:

    #####TO RUN IN THE BACKGROUND:

    ````bash
    nohup wget --limit-rate=200k --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://www.kossboss.com &
    nohup wget --limit-rate=200k --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://www.SOME_WEBSITE.com &
    ````

    #####THEN TO VIEW OUTPUT (there will be a nohup.out file in whichever directory you ran the command from):
  5. @stvhwrd revised this gist Apr 17, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion website-dl.md
    @@ -2,7 +2,7 @@

    There are two ways - the first way is just one command run plainly in front of you; the second one runs in the background and in a different instance so you can get out of your ssh session and it will continue.

    First make a folder to download the websites to and begin your downloading: (note if downloading www.steviehoward.com, you will get a folder like this: /websitedl/www.steviehoward.com/ )
    First make a folder to download the websites to and begin your downloading: (note if downloading www.SOME_WEBSITE.com, you will get a folder like this: /websitedl/www.SOME_WEBSITE.com/ )

    <br>

  6. @stvhwrd revised this gist Apr 17, 2015. 1 changed file with 2 additions and 0 deletions.
    2 changes: 2 additions & 0 deletions website-dl.md
    @@ -39,6 +39,8 @@ nohup wget --limit-rate=200k --no-clobber --convert-links --random-wait -r -p -E
    tail -f nohup.out
    ````

    <br>

    ####WHAT DO ALL THE SWITCHES MEAN:

    `--limit-rate=200k` limit download to 200 Kb /sec
  7. @stvhwrd revised this gist Apr 17, 2015. 1 changed file with 4 additions and 0 deletions.
    4 changes: 4 additions & 0 deletions website-dl.md
    @@ -4,6 +4,8 @@ There are two ways - the first way is just one command run plainly in front of y

    First make a folder to download the websites to and begin your downloading: (note if downloading www.steviehoward.com, you will get a folder like this: /websitedl/www.steviehoward.com/ )

    <br>

    ###STEP 1:

    ````bash
    @@ -13,6 +15,8 @@ cd ~/websitedl/

    Now choose for Step 2 whether you want to download it simply (1st way) or if you want to get fancy (2nd way).

    <br>

    ###STEP 2:

    ####1st way:
  8. @stvhwrd revised this gist Apr 17, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion website-dl.md
    @@ -29,7 +29,7 @@ wget --limit-rate=200k --no-clobber --convert-links --random-wait -r -p -E -e ro
    nohup wget --limit-rate=200k --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://www.kossboss.com &
    ````

    #####THEN TO VIEW OUTPUT ( it will put a nohup.out file in whichever directory you ran the command from):
    #####THEN TO VIEW OUTPUT (there will be a nohup.out file in whichever directory you ran the command from):

    ````bash
    tail -f nohup.out
  9. @stvhwrd revised this gist Apr 17, 2015. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions website-dl.md
    @@ -23,13 +23,13 @@ wget --limit-rate=200k --no-clobber --convert-links --random-wait -r -p -E -e ro

    ####2nd way:

    #####IN THE BACKGROUND DO WITH NOHUP IN FRONT AND & IN BACK
    #####TO RUN IN THE BACKGROUND:

    ````bash
    nohup wget --limit-rate=200k --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://www.kossboss.com &
    ````

    #####THEN TO VIEW OUTPUT ( it will put a nohup.out file where you ran the command):
    #####THEN TO VIEW OUTPUT ( it will put a nohup.out file in whichever directory you ran the command from):

    ````bash
    tail -f nohup.out
  10. @stvhwrd revised this gist Apr 17, 2015. 1 changed file with 4 additions and 2 deletions.
    6 changes: 4 additions & 2 deletions website-dl.md
    @@ -4,14 +4,16 @@ There are two ways - the first way is just one command run plainly in front of y

    First make a folder to download the websites to and begin your downloading: (note if downloading www.steviehoward.com, you will get a folder like this: /websitedl/www.steviehoward.com/ )

    ###(STEP1)
    ###STEP 1:

    ````bash
    mkdir ~/websitedl/
    cd ~/websitedl/
    ````

    ###(STEP2)
    Now choose for Step 2 whether you want to download it simply (1st way) or if you want to get fancy (2nd way).

    ###STEP 2:

    ####1st way:

  11. @stvhwrd revised this gist Apr 17, 2015. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions website-dl.md
    @@ -13,13 +13,13 @@ cd ~/websitedl/

    ###(STEP2)

    ####Simple way:
    ####1st way:

    ````bash
    wget --limit-rate=200k --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://www.WEBSITE_YOU_WISH_TO_DOWNLOAD.com
    ````

    ####Background-download way:
    ####2nd way:

    #####IN THE BACKGROUND DO WITH NOHUP IN FRONT AND & IN BACK

  12. @stvhwrd revised this gist Apr 17, 2015. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions website-dl.md
    @@ -13,13 +13,13 @@ cd ~/websitedl/

    ###(STEP2)

    ####1st way:
    ####Simple way:

    ````bash
    wget --limit-rate=200k --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://www.WEBSITE_YOU_WISH_TO_DOWNLOAD.com
    ````

    ####2nd way:
    ####Background-download way:

    #####IN THE BACKGROUND DO WITH NOHUP IN FRONT AND & IN BACK

  13. @stvhwrd revised this gist Apr 17, 2015. 1 changed file with 9 additions and 7 deletions.
    16 changes: 9 additions & 7 deletions website-dl.md
    @@ -1,25 +1,25 @@
    #The best way to download a website for offline use, using wget
    ##The best way to download a website for offline use, using wget

    There are two ways - the first way is just one command run plainly in front of you; the second one runs in the background and in a different instance so you can get out of your ssh session and it will continue.

    First make a folder to download the websites to and begin your downloading: (note if downloading www.steviehoward.com, you will get a folder like this: /websitedl/www.steviehoward.com/ )

    ##(STEP1)
    ###(STEP1)

    ````bash
    mkdir /websitedl/
    cd /websitedl/
    mkdir ~/websitedl/
    cd ~/websitedl/
    ````

    ##(STEP2)
    ###(STEP2)

    ###1st way:
    ####1st way:

    ````bash
    wget --limit-rate=200k --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://www.WEBSITE_YOU_WISH_TO_DOWNLOAD.com
    ````

    ###2nd way:
    ####2nd way:

    #####IN THE BACKGROUND DO WITH NOHUP IN FRONT AND & IN BACK

    @@ -67,4 +67,6 @@ resumed).
    <br>
    <br>

    ######tested with zsh 5.0.5 (x86_64-apple-darwin14.0) on Apple MacBook Pro (Late 2011) running OS X 10.10.3

    [credit](http://www.kossboss.com/linux---wget-full-website)
  14. @stvhwrd revised this gist Apr 17, 2015. 1 changed file with 13 additions and 15 deletions.
    28 changes: 13 additions & 15 deletions website-dl.md
    @@ -35,36 +35,34 @@ tail -f nohup.out

    ####WHAT DO ALL THE SWITCHES MEAN:

    ````bash--limit-rate=200k```` limit download to 200 Kb /sec
    `--limit-rate=200k` limit download to 200 Kb /sec

    ````bash--no-clobber```` don't overwrite any existing files (used in case the download is interrupted and
    `--no-clobber` don't overwrite any existing files (used in case the download is interrupted and
    resumed).

    ````bash
    --convert-links
    ```` convert links so that they work locally, off-line, instead of pointing to a website online
    `--convert-links` convert links so that they work locally, off-line, instead of pointing to a website online

    ````bash--random-wait```` random waits between download - websites dont like their websites downloaded
    `--random-wait` random waits between download - websites dont like their websites downloaded

    ````bash-r```` recursive - downloads full website
    `-r` recursive - downloads full website

    ````bash-p```` downloads everything even pictures (same as --page-requsites, downloads the images, css stuff and so on)
    `-p` downloads everything even pictures (same as --page-requsites, downloads the images, css stuff and so on)

    ````bash-E```` gets the right extension of the file, without most html and other files have no extension
    `-E` gets the right extension of the file, without most html and other files have no extension

    ````bash-e robots=off```` act like we are not a robot - not like a crawler - websites dont like robots/crawlers unless they are google/or other famous search engine
    `-e robots=off` act like we are not a robot - not like a crawler - websites dont like robots/crawlers unless they are google/or other famous search engine

    ````bash-U mozilla```` pretends to be just like a browser Mozilla is looking at a page instead of a crawler like wget
    `-U mozilla` pretends to be just like a browser Mozilla is looking at a page instead of a crawler like wget

    ####PURPOSELY **DIDN'T** INCLUDE THE FOLLOWING:

    ````bash-o=/websitedl/wget1.txt```` log everything to wget_log.txt - didn't do this because it gave me no output on the screen and I don't like that.
    `-o=/websitedl/wget1.txt` log everything to wget_log.txt - didn't do this because it gave me no output on the screen and I don't like that.

    ````bash-b```` runs it in the background and I can't see progress... I like "nohup <commands> &" better
    `-b` runs it in the background and I can't see progress... I like "nohup <commands> &" better

    ````bash--domain=steviehoward.com```` didn't include because this is hosted by Google so it might need to step into Google's domains
    `--domain=steviehoward.com` didn't include because this is hosted by Google so it might need to step into Google's domains

    ````bash--restrict-file-names=windows```` modify filenames so that they will work in Windows as well. Seems to work okay without this.
    `--restrict-file-names=windows` modify filenames so that they will work in Windows as well. Seems to work okay without this.

    <br>
    <br>
  15. @stvhwrd revised this gist Apr 17, 2015. 1 changed file with 3 additions and 1 deletion.
    4 changes: 3 additions & 1 deletion website-dl.md
    @@ -40,7 +40,9 @@ tail -f nohup.out
    ````bash--no-clobber```` don't overwrite any existing files (used in case the download is interrupted and
    resumed).

    ````bash--convert-links```` convert links so that they work locally, off-line, instead of pointing to a website online
    ````bash
    --convert-links
    ```` convert links so that they work locally, off-line, instead of pointing to a website online

    ````bash--random-wait```` random waits between download - websites dont like their websites downloaded

  16. @stvhwrd revised this gist Apr 17, 2015. 1 changed file with 7 additions and 6 deletions.
    13 changes: 7 additions & 6 deletions website-dl.md
    @@ -1,6 +1,6 @@
    #The best way to download a website for offline use, using wget

    There are two ways - the first way is just one command run plainly in front of you; the second one runs in the background and in a different "shell" so you can get out of your ssh session and it will continue.
    There are two ways - the first way is just one command run plainly in front of you; the second one runs in the background and in a different instance so you can get out of your ssh session and it will continue.

    First make a folder to download the websites to and begin your downloading: (note if downloading www.steviehoward.com, you will get a folder like this: /websitedl/www.steviehoward.com/ )

    @@ -35,7 +35,7 @@ tail -f nohup.out

    ####WHAT DO ALL THE SWITCHES MEAN:

    ````bash--limit-rate=200k```` Limit download to 200 Kb /sec
    ````bash--limit-rate=200k```` limit download to 200 Kb /sec

    ````bash--no-clobber```` don't overwrite any existing files (used in case the download is interrupted and
    resumed).
    @@ -55,13 +55,14 @@ resumed).
    ````bash-U mozilla```` pretends to be just like a browser Mozilla is looking at a page instead of a crawler like wget

    ####PURPOSELY **DIDN'T** INCLUDE THE FOLLOWING:
    ````bash-o=/websitedl/wget1.txt```` log everything to wget_log.txt - didnt do this because it gave me no output on the screen and I dont like that id rather use nohup and & and tail -f the output from nohup.out

    ````bash-b```` because it runs it in background and cant see progress I like "nohup <commands> &" better
    ````bash-o=/websitedl/wget1.txt```` log everything to wget_log.txt - didn't do this because it gave me no output on the screen and I don't like that.

    ````bash--domain=steviehoward.com```` didn't include because this is hosted by google so it might need to step into googles domains
    ````bash-b```` runs it in the background and I can't see progress... I like "nohup <commands> &" better

    ````bash--restrict-file-names=windows```` modify filenames so that they will work in Windows as well. Seems to work okay without it
    ````bash--domain=steviehoward.com```` didn't include because this is hosted by Google so it might need to step into Google's domains

    ````bash--restrict-file-names=windows```` modify filenames so that they will work in Windows as well. Seems to work okay without this.

    <br>
    <br>
  17. @stvhwrd revised this gist Apr 15, 2015. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions website-dl.md
    @@ -16,7 +16,7 @@ cd /websitedl/
    ###1st way:

    ````bash
    wget --limit-rate=200k --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://www.kossboss.com
    wget --limit-rate=200k --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://www.WEBSITE_YOU_WISH_TO_DOWNLOAD.com
    ````

    ###2nd way:
    @@ -59,7 +59,7 @@ resumed).

    ````bash-b```` because it runs it in background and cant see progress I like "nohup <commands> &" better

    ````bash--domain=steviehoward.com```` didnt include because this is hosted by google so it might need to step into googles domains
    ````bash--domain=steviehoward.com```` didn't include because this is hosted by google so it might need to step into googles domains

    ````bash--restrict-file-names=windows```` modify filenames so that they will work in Windows as well. Seems to work okay without it

  18. @stvhwrd revised this gist Apr 15, 2015. 1 changed file with 34 additions and 17 deletions.
    51 changes: 34 additions & 17 deletions website-dl.md
    @@ -1,8 +1,8 @@
    #BEST WAY TO DOWNLOAD FULL WEBSITE WITH WGET
    #The best way to download a website for offline use, using wget

    I show two ways, the first way is just one command that doesnt run in the background - the second one runs in the background and in a different "shell" so you can get out of your ssh session and it will continue either way
    There are two ways - the first way is just one command run plainly in front of you; the second one runs in the background and in a different "shell" so you can get out of your ssh session and it will continue.

    First make a folder to download the websites to and begin your downloading: (note if downloading www.kossboss.com, you will get a folder like this: /websitedl/www.kossboss.com/ )
    First make a folder to download the websites to and begin your downloading: (note if downloading www.steviehoward.com, you will get a folder like this: /websitedl/www.steviehoward.com/ )

    ##(STEP1)

    @@ -34,19 +34,36 @@ tail -f nohup.out
    ````

    ####WHAT DO ALL THE SWITCHES MEAN:

    ````bash--limit-rate=200k```` Limit download to 200 Kb /sec
    --no-clobber: don't overwrite any existing files (used in case the download is interrupted and

    ````bash--no-clobber```` don't overwrite any existing files (used in case the download is interrupted and
    resumed).
    --convert-links: convert links so that they work locally, off-line, instead of pointing to a website online
    --random-wait: Random waits between download - websites dont like their websites downloaded
    -r: Recursive - downloads full website
    -p: downloads everything even pictures (same as --page-requsites, downloads the images, css stuff and so on)
    -E: gets the right extension of the file, without most html and other files have no extension
    -e robots=off: act like we are not a robot - not like a crawler - websites dont like robots/crawlers unless they are google/or other famous search engine
    -U mozilla: pretends to be just like a browser Mozilla is looking at a page instead of a crawler like wget

    (DIDNT INCLUDE THE FOLLOWING AND WHY)
    -o=/websitedl/wget1.txt: log everything to wget_log.txt - didnt do this because it gave me no output on the screen and I dont like that id rather use nohup and & and tail -f the output from nohup.out
    -b: because it runs it in background and cant see progress I like "nohup <commands> &" better
    --domain=kossboss.com: didnt include because this is hosted by google so it might need to step into googles domains
    --restrict-file-names=windows: modify filenames so that they will work in Windows as well. Seems to work good without it

    ````bash--convert-links```` convert links so that they work locally, off-line, instead of pointing to a website online

    ````bash--random-wait```` random waits between download - websites dont like their websites downloaded

    ````bash-r```` recursive - downloads full website

    ````bash-p```` downloads everything even pictures (same as --page-requsites, downloads the images, css stuff and so on)

    ````bash-E```` gets the right extension of the file, without most html and other files have no extension

    ````bash-e robots=off```` act like we are not a robot - not like a crawler - websites dont like robots/crawlers unless they are google/or other famous search engine

    ````bash-U mozilla```` pretends to be just like a browser Mozilla is looking at a page instead of a crawler like wget

    ####PURPOSELY **DIDN'T** INCLUDE THE FOLLOWING:
    ````bash-o=/websitedl/wget1.txt```` log everything to wget_log.txt - didnt do this because it gave me no output on the screen and I dont like that id rather use nohup and & and tail -f the output from nohup.out

    ````bash-b```` because it runs it in background and cant see progress I like "nohup <commands> &" better

    ````bash--domain=steviehoward.com```` didnt include because this is hosted by google so it might need to step into googles domains

    ````bash--restrict-file-names=windows```` modify filenames so that they will work in Windows as well. Seems to work okay without it

    <br>
    <br>

    [credit](http://www.kossboss.com/linux---wget-full-website)
  19. @stvhwrd renamed this gist Apr 15, 2015. 1 changed file with 0 additions and 0 deletions.
    File renamed without changes.
  20. @stvhwrd created this gist Apr 15, 2015.
    52 changes: 52 additions & 0 deletions website-dl.txt
    @@ -0,0 +1,52 @@
    #BEST WAY TO DOWNLOAD FULL WEBSITE WITH WGET

    I show two ways, the first way is just one command that doesnt run in the background - the second one runs in the background and in a different "shell" so you can get out of your ssh session and it will continue either way

    First make a folder to download the websites to and begin your downloading: (note if downloading www.kossboss.com, you will get a folder like this: /websitedl/www.kossboss.com/ )

    ##(STEP1)

    ````bash
    mkdir /websitedl/
    cd /websitedl/
    ````

    ##(STEP2)

    ###1st way:

    ````bash
    wget --limit-rate=200k --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://www.kossboss.com
    ````

    ###2nd way:

    #####IN THE BACKGROUND DO WITH NOHUP IN FRONT AND & IN BACK

    ````bash
    nohup wget --limit-rate=200k --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://www.kossboss.com &
    ````

    #####THEN TO VIEW OUTPUT ( it will put a nohup.out file where you ran the command):

    ````bash
    tail -f nohup.out
    ````

    ####WHAT DO ALL THE SWITCHES MEAN:
    ````bash--limit-rate=200k```` Limit download to 200 Kb /sec
    --no-clobber: don't overwrite any existing files (used in case the download is interrupted and
    resumed).
    --convert-links: convert links so that they work locally, off-line, instead of pointing to a website online
    --random-wait: Random waits between download - websites dont like their websites downloaded
    -r: Recursive - downloads full website
    -p: downloads everything even pictures (same as --page-requsites, downloads the images, css stuff and so on)
    -E: gets the right extension of the file, without most html and other files have no extension
    -e robots=off: act like we are not a robot - not like a crawler - websites dont like robots/crawlers unless they are google/or other famous search engine
    -U mozilla: pretends to be just like a browser Mozilla is looking at a page instead of a crawler like wget

    (DIDNT INCLUDE THE FOLLOWING AND WHY)
    -o=/websitedl/wget1.txt: log everything to wget_log.txt - didnt do this because it gave me no output on the screen and I dont like that id rather use nohup and & and tail -f the output from nohup.out
    -b: because it runs it in background and cant see progress I like "nohup <commands> &" better
    --domain=kossboss.com: didnt include because this is hosted by google so it might need to step into googles domains
    --restrict-file-names=windows: modify filenames so that they will work in Windows as well. Seems to work good without it
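
    One practical consequence of the notes above: because `--no-clobber` skips files that already exist, an interrupted mirror can be resumed by re-running the same command from the same directory. A minimal sketch of that resume workflow, based on the final revision's background variant (paths and placeholder domain as used there):

    ````bash
    cd ~/websitedl/    # same directory as the original run, so existing files are found
    nohup wget --limit-rate=200k --no-clobber --convert-links --random-wait \
         -r -p -E -e robots=off -U mozilla http://www.SOME_WEBSITE.com &
    tail -f nohup.out  # watch progress; files already on disk are skipped
    ````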