General Tips

Core Tools to Know: ffuf

[Image: the ffuf "run" logo, from the GitHub repo]

ffuf (“Fuzz Faster U Fool”) is a powerful, open-source tool designed for web application enumeration and fuzzing. Whether you’re performing vhost, directory, page, or parameter enumeration, ffuf can help you identify and exploit vulnerabilities effectively. In this overview tutorial, we’ll cover how to install ffuf and use it for various enumeration tasks. Right off the bat, I want to stress that you should never run tools like ffuf against any machine, network, system, etc. that you don’t own and control or don’t have explicit permission to run these sorts of programs against. If you venture outside of that boundary, you get into pretty murky legal territory, and I’m officially suggesting that you don’t do that.

Installing ffuf

Before diving into its usage, let’s install ffuf on your system. ffuf works on Linux, macOS, and Windows. According to the ffuf room on TryHackMe, ffuf is already included in the following Linux distributions: BlackArch, Pentoo, Kali, and Parrot. If you’re on Windows, macOS, or a Linux version without ffuf preinstalled, the easiest thing to do is download one of the prebuilt releases from ffuf’s GitHub. If you want to install ffuf from source, you’ll need to install Go 1.16 (or higher) and follow the instructions on ffuf’s readme on their GitHub.
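
For reference, the from-source route usually boils down to a single go install (the exact command lives in the README, but for ffuf v2 it looks like the line below), and some distros also package it:

# Install from source (requires Go 1.16 or higher, per the ffuf README)
$ go install github.com/ffuf/ffuf/v2@latest

# Or, on distros that package it (Kali already ships it preinstalled)
$ sudo apt install ffuf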

Once you’re sure you’ve got the program, let’s dig in. We can make sure that ffuf is on our path by running the help command shown in the next section and getting the help menu in response. If that fails and you’re sure you’ve got it installed, add its location to your path, or use the fully qualified path when calling it from the command line (like /opt/ffuf/ffuf or c:\Tools\ffuf.exe instead of just ffuf).
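
If you’d rather not scroll through the full help output just to confirm the install, the -V flag is a quicker sanity check (using the same illustrative paths from above):

$ ffuf -V                # prints just the version string if ffuf is on your PATH
$ /opt/ffuf/ffuf -V      # or call it by its fully qualified path if it isn't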

So How Does it Work?

Let’s take a look at the built-in help to understand what all it can do. Like many security tools, ffuf is command-line only, so pop open a terminal and issue the following command (the output is really long, feel free to skim it or just scroll past this block to not be overwhelmed).

$ ffuf -h
Fuzz Faster U Fool - v2.1.0-dev

HTTP OPTIONS:
  -H                  Header `"Name: Value"`, separated by colon. Multiple -H flags are accepted.
  -X                  HTTP method to use
  -b                  Cookie data `"NAME1=VALUE1; NAME2=VALUE2"` for copy as curl functionality.
  -cc                 Client cert for authentication. Client key needs to be defined as well for this to work
  -ck                 Client key for authentication. Client certificate needs to be defined as well for this to work
  -d                  POST data
  -http2              Use HTTP2 protocol (default: false)
  -ignore-body        Do not fetch the response content. (default: false)
  -r                  Follow redirects (default: false)
  -raw                Do not encode URI (default: false)
  -recursion          Scan recursively. Only FUZZ keyword is supported, and URL (-u) has to end in it. (default: false)
  -recursion-depth    Maximum recursion depth. (default: 0)
  -recursion-strategy Recursion strategy: "default" for a redirect based, and "greedy" to recurse on all matches (default: default)
  -replay-proxy       Replay matched requests using this proxy.
  -sni                Target TLS SNI, does not support FUZZ keyword
  -timeout            HTTP request timeout in seconds. (default: 10)
  -u                  Target URL
  -x                  Proxy URL (SOCKS5 or HTTP). For example: http://127.0.0.1:8080 or socks5://127.0.0.1:8080

GENERAL OPTIONS:
  -V                  Show version information. (default: false)
  -ac                 Automatically calibrate filtering options (default: false)
  -acc                Custom auto-calibration string. Can be used multiple times. Implies -ac
  -ach                Per host autocalibration (default: false)
  -ack                Autocalibration keyword (default: FUZZ)
  -acs                Custom auto-calibration strategies. Can be used multiple times. Implies -ac
  -c                  Colorize output. (default: false)
  -config             Load configuration from a file
  -json               JSON output, printing newline-delimited JSON records (default: false)
  -maxtime            Maximum running time in seconds for entire process. (default: 0)
  -maxtime-job        Maximum running time in seconds per job. (default: 0)
  -noninteractive     Disable the interactive console functionality (default: false)
  -p                  Seconds of `delay` between requests, or a range of random delay. For example "0.1" or "0.1-2.0"
  -rate               Rate of requests per second (default: 0)
  -s                  Do not print additional information (silent mode) (default: false)
  -sa                 Stop on all error cases. Implies -sf and -se. (default: false)
  -scraperfile        Custom scraper file path
  -scrapers           Active scraper groups (default: all)
  -se                 Stop on spurious errors (default: false)
  -search             Search for a FFUFHASH payload from ffuf history
  -sf                 Stop when > 95% of responses return 403 Forbidden (default: false)
  -t                  Number of concurrent threads. (default: 40)
  -v                  Verbose output, printing full URL and redirect location (if any) with the results. (default: false)

MATCHER OPTIONS:
  -mc                 Match HTTP status codes, or "all" for everything. (default: 200-299,301,302,307,401,403,405,500)
  -ml                 Match amount of lines in response
  -mmode              Matcher set operator. Either of: and, or (default: or)
  -mr                 Match regexp
  -ms                 Match HTTP response size
  -mt                 Match how many milliseconds to the first response byte, either greater or less than. EG: >100 or <100
  -mw                 Match amount of words in response

FILTER OPTIONS:
  -fc                 Filter HTTP status codes from response. Comma separated list of codes and ranges
  -fl                 Filter by amount of lines in response. Comma separated list of line counts and ranges
  -fmode              Filter set operator. Either of: and, or (default: or)
  -fr                 Filter regexp
  -fs                 Filter HTTP response size. Comma separated list of sizes and ranges
  -ft                 Filter by number of milliseconds to the first response byte, either greater or less than. EG: >100 or <100
  -fw                 Filter by amount of words in response. Comma separated list of word counts and ranges

INPUT OPTIONS:
  -D                  DirSearch wordlist compatibility mode. Used in conjunction with -e flag. (default: false)
  -e                  Comma separated list of extensions. Extends FUZZ keyword.
  -enc                Encoders for keywords, eg. 'FUZZ:urlencode b64encode'
  -ic                 Ignore wordlist comments (default: false)
  -input-cmd          Command producing the input. --input-num is required when using this input method. Overrides -w.
  -input-num          Number of inputs to test. Used in conjunction with --input-cmd. (default: 100)
  -input-shell        Shell to be used for running command
  -mode               Multi-wordlist operation mode. Available modes: clusterbomb, pitchfork, sniper (default: clusterbomb)
  -request            File containing the raw http request
  -request-proto      Protocol to use along with raw request (default: https)
  -w                  Wordlist file path and (optional) keyword separated by colon. eg. '/path/to/wordlist:KEYWORD'

OUTPUT OPTIONS:
  -debug-log          Write all of the internal logging to the specified file.
  -o                  Write output to file
  -od                 Directory path to store matched results to.
  -of                 Output file format. Available formats: json, ejson, html, md, csv, ecsv (or, 'all' for all formats) (default: json)
  -or                 Don't create the output file if we don't have results (default: false)

EXAMPLE USAGE:
  Fuzz file paths from wordlist.txt, match all responses but filter out those with content-size 42.
  Colored, verbose output.
    ffuf -w wordlist.txt -u https://example.org/FUZZ -mc all -fs 42 -c -v

  Fuzz Host-header, match HTTP 200 responses.
    ffuf -w hosts.txt -u https://example.org/ -H "Host: FUZZ" -mc 200

  Fuzz POST JSON data. Match all responses not containing text "error".
    ffuf -w entries.txt -u https://example.org/ -X POST -H "Content-Type: application/json" \
      -d '{"name": "FUZZ", "anotherkey": "anothervalue"}' -fr "error"

  Fuzz multiple locations. Match only responses reflecting the value of "VAL" keyword. Colored.
    ffuf -w params.txt:PARAM -w values.txt:VAL -u https://example.org/?PARAM=VAL -mr "VAL" -c

  More information and examples: https://github.com/ffuf/ffuf

That's quite a bit, and everything above is the actual output. Fortunately, you don't often need all of it. I'm going to go over some of the biggest use cases that you're likely to encounter. Honestly, any request where you can think of swapping out one piece for something else works great with ffuf, but these are the highlights.

Using ffuf for Enumeration

Vhost Enumeration

Virtual host (vhost) enumeration is essential for identifying additional subdomains or services hosted on the same server. Public-facing servers can often be discovered through DNS queries, but if you're fuzzing an internal application where you may not have access to full DNS, this technique is great.

I set up a stupid little node script to act as a simple web server to work out some of these examples, so some may seem a little contrived, but I also wanted to respect the rule about not fuzzing where you aren't supposed to. Also, ffuf's output is verbose (banner, parameter restatement, etc.); I'll show it the first time, but subsequent samples will omit it. Just know that you'll see the banner et al. every time you're using the tool.

Our command is ffuf -w /usr/share/wordlists/seclists/Discovery/DNS/subdomains-top1million-5000.txt -u http://ffuf.hack:8000 -H "Host: FUZZ.ffuf.hack"

In this case, -w specifies the wordlist to use when fuzzing, -u is the URL to hit (I just put ffuf.hack in my /etc/hosts pointing at my local machine), and -H is an HTTP header to send along, in this case the Host I'm trying to reach. Note that I put FUZZ in the spot where I'd like each word from the wordlist to go: foo.ffuf.hack, bar.ffuf.hack, baz.ffuf.hack, etc.

$ ffuf -w /usr/share/wordlists/seclists/Discovery/DNS/subdomains-top1million-5000.txt -u http://ffuf.hack:8000 -H "Host: FUZZ.ffuf.hack"

        /'___\  /'___\           /'___\
       /\ \__/ /\ \__/  __  __  /\ \__/
       \ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
        \ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
         \ \_\   \ \_\  \ \____/  \ \_\
          \/_/    \/_/   \/___/    \/_/

       v2.1.0-dev
________________________________________________

 :: Method           : GET
 :: URL              : http://ffuf.hack:8000
 :: Wordlist         : FUZZ: /usr/share/wordlists/seclists/Discovery/DNS/subdomains-top1million-5000.txt
 :: Header           : Host: FUZZ.ffuf.hack
 :: Follow redirects : false
 :: Calibration      : false
 :: Timeout          : 10
 :: Threads          : 40
 :: Matcher          : Response status: 200-299,301,302,307,401,403,405,500
________________________________________________

ns1                     [Status: 403, Size: 23, Words: 3, Lines: 1, Duration: 6ms]
mail                    [Status: 403, Size: 23, Words: 3, Lines: 1, Duration: 7ms]
// ****** SNIP ******
support                 [Status: 403, Size: 23, Words: 3, Lines: 1, Duration: 5ms]

I hit Ctrl-C here to break out since we have our file size. One of the first tricks to learn about ffuf is understanding what it is going to consider a success or a failure. You can see from the "Matcher" section that it will accept any of those response codes; 400 and 404 responses, for instance, won't be returned. You can add additional codes to match, or you can filter by response size (which is what I end up doing the majority of the time). I snipped some results out, but you can see that every negative response is 23 bytes. So I reissued the command with -fs 23 to filter out 23-byte responses and got the correct results. I left part of the header in this time so you can see that the matched response codes were the same, but a filter was added for response size.

$ ffuf -w /usr/share/wordlists/seclists/Discovery/DNS/subdomains-top1million-5000.txt -u http://ffuf.hack:8000 -H "Host: FUZZ.ffuf.hack" -fs 23

 :: Matcher          : Response status: 200-299,301,302,307,401,403,405,500
 :: Filter           : Response size: 23
________________________________________________

www                     [Status: 200, Size: 103732, Words: 14137, Lines: 948, Duration: 27ms]
lms                     [Status: 200, Size: 103732, Words: 14137, Lines: 948, Duration: 9ms]
:: Progress: [4989/4989] :: Job [1/1] :: 0 req/sec :: Duration: [0:00:00] :: Errors: 0 ::

This is the correct result. I had set up my script to serve pages based on going to either ffuf.hack, www.ffuf.hack, or lms.ffuf.hack.
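
One more variation before moving on: if the interesting responses had come back as 404s or some other code outside the default matcher, you could widen the match with -mc all and still filter on the known-bad size. I didn't need it here, but it would look like this:

$ ffuf -w /usr/share/wordlists/seclists/Discovery/DNS/subdomains-top1million-5000.txt \
    -u http://ffuf.hack:8000 -H "Host: FUZZ.ffuf.hack" -mc all -fs 23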

Directory Enumeration

Directory enumeration uncovers hidden directories on a web server. This is an even more important use case for fuzzing, since there is no equivalent of DNS to help you find these directories otherwise. There are no new flags to learn here; I've just moved the FUZZ keyword into the path. I already knew the file size for a bad request and confirmed it, so that filter is included right off the bat, too. In regular situations, you likely wouldn't even need the -H flag, but I'm using a janky simple web server script that I didn't want to keep changing, so I had to tag the Host header along. I'm also just enumerating the ffuf.hack domain itself, not any of the other vhosts. I am using a different wordlist this time; choosing the right wordlist is more art than science. You kind of get a feel for it, or you'll build up some favorites that always come through for you. All of the lists that I'm using come with the base Kali Linux install that I'm running.

$ ffuf -w /usr/share/wordlists/seclists/Discovery/Web-Content/raft-small-words-lowercase.txt -u http://ffuf.hack:8000/FUZZ -H "Host: ffuf.hack" -fs 23

________________________________________________

images                  [Status: 500, Size: 25, Words: 4, Lines: 1, Duration: 38ms]
.                       [Status: 500, Size: 25, Words: 4, Lines: 1, Duration: 26ms]
:: Progress: [38267/38267] :: Job [1/1] :: 3846 req/sec :: Duration: [0:00:10] :: Errors: 0 ::

In this case, this is also correct. I only made one directory in this pretend website, and it was /images. If this were a penetration test, we'd probably wonder whether the wordlist was a poor fit, depending on what kind of site this was and how complex it seemed to be.
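
If the target had nested directories, the recursion flags from the help output above are worth knowing; note that the FUZZ keyword has to be at the very end of the URL for them to work. My single /images directory doesn't need it, so this one is just a sketch for illustration:

$ ffuf -w /usr/share/wordlists/seclists/Discovery/Web-Content/raft-small-words-lowercase.txt \
    -u http://ffuf.hack:8000/FUZZ -H "Host: ffuf.hack" -fs 23 \
    -recursion -recursion-depth 2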

Page Enumeration

Page enumeration targets specific files or pages on the server. In this case, we use the -e flag (extensions) and give it a comma-separated list of extensions for ffuf to tack onto each word in our list. Here, I'm telling it to try .html, .jpg, and .js. I left the new part of the header output in to show that ffuf correctly understood what we're doing, and I used the same wordlist that I used for the directory enumeration.

$ ffuf -w /usr/share/wordlists/seclists/Discovery/Web-Content/raft-small-words-lowercase.txt -u http://ffuf.hack:8000/FUZZ -H "Host: ffuf.hack" -e ".html,.jpg,.js" -fs 23

 :: Extensions       : .html .jpg .js
________________________________________________

images                  [Status: 500, Size: 25, Words: 4, Lines: 1, Duration: 31ms]
index.html              [Status: 200, Size: 103732, Words: 14137, Lines: 948, Duration: 27ms]
main.js                 [Status: 200, Size: 59, Words: 8, Lines: 3, Duration: 22ms]
.                       [Status: 500, Size: 25, Words: 4, Lines: 1, Duration: 21ms]
server.js               [Status: 200, Size: 2294, Words: 594, Lines: 70, Duration: 30ms]
:: Progress: [153068/153068] :: Job [1/1] :: 4878 req/sec :: Duration: [0:00:41] :: Errors: 0 ::

Again, this is all correct. In fact, because I don't have this set up like a real node app or anything, it also found my server.js web server script. In a penetration test, if you could download this file, you could read the source code for the application. That would be a huge finding, since it should be excluded by the web server configuration or by the way the application is set up.
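
If you want to confirm an interesting hit like that by hand, plain curl with the same Host header the fuzz run used will do it:

$ curl -H "Host: ffuf.hack" http://ffuf.hack:8000/server.js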

Parameter Enumeration

Parameter enumeration identifies potential GET or POST parameters, and ffuf can be used to find both parameter names and values. Which one you're trying to isolate depends on where you put the FUZZ keyword. I'll show two examples here: first fuzzing the parameter name (with a junk value), and then fuzzing the values once I've identified the names. In this case, I'm fuzzing GET parameters. If you were fuzzing POST parameters, you'd leave the -u URL as normal, include a -d (data) flag like -d "param1=FUZZ", and add -X POST to the command (there's a sketch of that after the first example below).

$ ffuf -w /usr/share/wordlists/seclists/Discovery/Web-Content/raft-small-words-lowercase.txt -H "Host: www.ffuf.hack" -u "http://ffuf.hack:8000/?FUZZ=foobar"
________________________________________________

first                   [Status: 200, Size: 119, Words: 20, Lines: 1, Duration: 12ms]
username                [Status: 200, Size: 119, Words: 20, Lines: 1, Duration: 8ms]
:: Progress: [38267/38267] :: Job [1/1] :: 904 req/sec :: Duration: [0:00:47] :: Errors: 0 ::
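
As an aside, the POST flavor of that same name-fuzzing run would look something like the sketch below. I didn't actually run this one against my demo app, and the form-encoded Content-Type header is an assumption about whatever target you'd point it at:

$ ffuf -w /usr/share/wordlists/seclists/Discovery/Web-Content/raft-small-words-lowercase.txt \
    -u http://ffuf.hack:8000/ -H "Host: www.ffuf.hack" \
    -H "Content-Type: application/x-www-form-urlencoded" \
    -X POST -d "FUZZ=foobar"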

For value fuzzing, I made a short list to try things out. When fuzzing parameter values, you'll almost always want a "smart list" tailored to the parameter: that could be a list of numbers (like when fuzzing an id parameter), a list of known usernames, a list of LFI escapes, etc. In my case, I made a list of 14 values that would make sense for trying to hack into an app written by the egotistical author of this blog 😉

// First Parameter Value Fuzzing
$ ffuf -w curated_values.txt -H "Host: www.ffuf.hack" -u "http://ffuf.hack:8000?username=FUZZ"
________________________________________________

peteonsoftware          [Status: 200, Size: 52, Words: 7, Lines: 1, Duration: 11ms]
:: Progress: [14/14] :: Job [1/1] :: 0 req/sec :: Duration: [0:00:00] :: Errors: 0 ::
// Second Parameter Value Fuzzing
$ ffuf -w curated_values.txt -H "Host: www.ffuf.hack" -u "http://ffuf.hack:8000?first=FUZZ"
________________________________________________

pete                    [Status: 200, Size: 42, Words: 7, Lines: 1, Duration: 10ms]
:: Progress: [14/14] :: Job [1/1] :: 0 req/sec :: Duration: [0:00:00] :: Errors: 0 ::

So ffuf identified that my application takes two parameters: username and first. The only valid value for username seems to be peteonsoftware, and the only valid value for first is pete. I can confirm that that's correct.
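
Along the same lines, when the value you're after is numeric (an id, a year, a port), you don't even need a premade wordlist; generating one on the fly works fine. Here's a sketch using a hypothetical id parameter that my demo app doesn't actually have:

# Quick numeric list for a hypothetical id parameter (a pattern, not something I ran here)
$ seq 1 1000 > ids.txt
$ ffuf -w ids.txt -H "Host: www.ffuf.hack" -u "http://ffuf.hack:8000?id=FUZZ"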

Parting Tips

If you don't care about how "loud" you are, you can increase the number of threads ffuf uses to bang on the server with -t. Also, if you're fuzzing an https URL with a self-signed or otherwise broken certificate, ffuf won't fight you over it; unlike some similar tools, there's no -k flag to remember (you won't find one in the help output above), because ffuf doesn't enforce certificate validation.
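
For example, cranking up the thread count and saving the results off for later review might look like the line below (results.json is just a name I picked, and the output flags are the ones from the help output above):

$ ffuf -w /usr/share/wordlists/seclists/Discovery/Web-Content/raft-small-words-lowercase.txt \
    -u http://ffuf.hack:8000/FUZZ -H "Host: ffuf.hack" -fs 23 \
    -t 100 -o results.json -of json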

As I touched on earlier, wordlists are key here, just like they are for password cracking and just about everything else. Daniel Miessler's SecLists can be a good place to get started.
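
If you don't have SecLists yet, it's easy to grab; on Kali it's packaged (which is where the /usr/share/wordlists/seclists paths in my examples come from), or you can clone it straight from GitHub:

# Kali (and other distros that package it)
$ sudo apt install seclists

# Or grab the repo directly
$ git clone https://github.com/danielmiessler/SecLists.git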

Lastly, you are basically only limited by your imagination when using ffuf. If you can think of a creative way to build a request where you isolate one piece to constantly be replaced, you can use ffuf to do it. This overview only just gets you started. A good next place to go might be the TryHackMe ffuf room. It is a free room and anyone can do it (you don't have to have a paid subscription!). While I've had a paid subscription to TryHackMe for about a year and a half, they offer a ton of content for free and it is worth checking out!

Happy Fuzzing!