When starting on a new website for a client, it’s often helpful to back up their old website. As part of this process, I often need to crawl the old website in order to generate a complete list of valid URLs. This list is later useful for building out a sitemap of pages that need to be designed and coded, and, just as importantly, for mapping the old links to their corresponding pages on the new website. Enter this simple shell script.

How To Use

  1. Download the script and save it to the desired location on your machine.
  2. You’ll need wget installed on your machine in order to continue. To check whether it’s already installed (if you’re on Linux or a Mac, chances are you already have it), open Git Bash, Terminal, etc. and run the command: $ wget. If wget is installed, you’ll see a usage message complaining about a missing URL; if you see command not found instead, you’re probably on Windows (a slightly cleaner version check is shown just after these steps). Here are the Windows installation instructions:
    1. Download the latest wget binary for Windows from https://eternallybored.org/misc/wget/ (builds are available as a zip with documentation, or as a standalone exe)
    2. If you downloaded the zip, extract all of the files (if the built-in Windows zip utility gives an error, use 7-Zip). If you downloaded the 64-bit version, rename the wget64.exe file to wget.exe
    3. Move wget.exe to C:\Windows\System32\
  3. Now that you have wget, open Git Bash, Terminal, etc. and run the fetchurls.sh script:
        $ bash /path/to/script/fetchurls.sh
    
  4. You will be prompted to enter the full URL (including HTTPS/HTTP protocol) of the site you would like to crawl:
        #
        #    Fetch a list of unique URLs for a domain.
        #
        #    Enter the full URL ( http://example.com )
        #    URL:
    
  5. When complete, the script will show a message and the location of your output file:
        #
        #    Fetch a list of unique URLs for a domain.
        #
        #    Enter the full URL ( http://example.com )
        #    URL: https://www.example.com
        #
        #    Fetching URLs for example.com
        #    Finished!
        #
        #    File Location: ~/Desktop/example-com.txt
        #
    
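As mentioned in step 2, running bare wget only proves the command exists; a slightly cleaner check is to ask for its version, which prints version and build information when wget is installed (if it isn’t, your shell will report command not found):

        $ wget --version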

The script will crawl the site and compile a list of valid URLs into a text file that will be placed on your Desktop.
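For the curious, the general approach behind a crawl like this can be sketched in just a few lines. The snippet below is only a rough outline of that idea, not the script’s actual code; the URL, log file name, and output path are all placeholders:

        #!/usr/bin/env bash
        # Rough sketch only; fetchurls.sh itself adds prompts, filtering, and cleanup.

        URL="https://www.example.com"            # placeholder: site to crawl
        OUTPUT="$HOME/Desktop/example-com.txt"   # placeholder: output file

        # Spider the site without saving pages, logging every URL wget visits,
        # then pull the unique URLs back out of the log.
        wget --spider --recursive --level=inf --no-verbose \
             --output-file=crawl.log "$URL"

        grep -oE 'https?://[^ ]+' crawl.log | sort -u > "$OUTPUT"
        rm crawl.log

The --spider flag tells wget to check pages without saving them to disk, which keeps the crawl lightweight and leaves only the log behind.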

Extra Info

    • To change the default file output location, edit line #18. **Default**: ~/Desktop
    • Ensure that you enter the correct protocol and subdomain for the URL, or the output file may be empty or incomplete. For example, entering the incorrect protocol (HTTP) for https://adamdehaven.com generates an empty file; entering the proper protocol (HTTPS) allows the script to run successfully.
    • The script, by default, filters out the following file extensions:
      • css
      • js
      • map
      • xml
      • png
      • gif
      • jpg
      • JPG
      • bmp
      • txt
      • pdf
    • The script filters out several common WordPress files and directories such as:
      • /wp-content/uploads/
      • /feed/
      • /wp-json/
      • xmlrpc
    • To change or edit the regular expressions that filter out certain pages, directories, and file types, you may edit lines #24 through #29; a rough idea of what these filters look like is sketched below. **Caution**: If you’re not familiar with grep, you can easily break the script.
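If you do decide to edit them, it helps to know that filters like these are essentially a chain of grep exclusions. The snippet below is only a hypothetical illustration of that idea; the actual patterns and line numbers in the script may differ, and urls.txt and filtered-urls.txt are placeholder file names:

        # Hypothetical illustration; not the script's exact patterns.
        # The first grep drops URLs ending in the unwanted file extensions;
        # the second drops common WordPress paths and endpoints.
        grep -vE '\.(css|js|map|xml|png|gif|jpg|JPG|bmp|txt|pdf)(\?.*)?$' urls.txt \
          | grep -vE '/wp-content/uploads/|/feed/|/wp-json/|xmlrpc' > filtered-urls.txt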