Directory Bruteforcing

One thing you learn when you start a career pentesting is:

Never assume anything.

In my experience, hacks aren’t always elegant and elaborate. Sometimes something simple and effective is your avenue of penetration, which brings us to today’s topic: directory bruteforcing.

Directory bruteforcing is a favorite of mine. I can’t tell you how many times a directory listing has broken open a pentest for me. Whether it’s that ever-elusive web admin panel or a directory listing containing a database full of passwords, there’s almost always something hiding beneath that tidy little web server.

Before we start bashing away (and let’s be honest, that’s what we’re doing), we have to mention that this kind of enumeration can get you blacklisted. If your scope doesn’t have you whitelisted for the engagement, we recommend you scan low and slow to get a feel for the target’s response. It never hurts to have a backup IP (or a few) to scan from as well; dealing with customer blacklisting is a pain.
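The “low and slow” approach can be sketched as a throttled request loop. This is our own minimal illustration, not part of either tool; the TARGET URL, the three directory names, and the 5–15 second delay range are all placeholder values:

```shell
# "Low and slow" sketch: randomized pause between probes to stay under
# shunning/blacklisting thresholds. TARGET and the names are placeholders.
TARGET="http://target.example"

throttle_delay() {   # random 5-15 second pause
    awk 'BEGIN { srand(); print int(rand() * 11) + 5 }'
}

for dir in admin backup login; do   # stand-in for reading a real wordlist
    echo "probing $TARGET/$dir"
    # curl -s -o /dev/null "$TARGET/$dir"   # live probe, commented out here
    # sleep "$(throttle_delay)"             # uncomment on an engagement
done
```

Randomizing the delay (rather than a fixed interval) also makes the probe pattern less obvious to an IDS watching for regular timing.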

Today, we’re showcasing the Python-based Wfuzz by Edge-Security and the Java-based Dirbuster maintained by the OWASP project. Both are excellent directory and file brute-forcing tools that come complete with lists of common (and sometimes not-so-common) directories and files. Both support recursion, multi-threading, and output to useful file formats. They also have solid built-in false-positive detection and support proxies… excellent. We use Wfuzz on our *nix boxes and Dirbuster from Windows, and we interchange lists frequently.

We hate to regurgitate verbatim, but Wfuzz actually ships with pretty good usage and feature documentation, which can be found on the Edge-Security website.

“Wfuzz is a tool designed for bruteforcing Web Applications, it can be used for finding resources not linked (directories, servlets, scripts, etc), bruteforce GET and POST parameters for checking different kind of injections (SQL, XSS, LDAP, etc), bruteforce Forms parameters (User/Password), Fuzzing, etc.”

Usage:

# wfuzz.py -c -z file -f wordlists/commons.txt --hc 404 --html 2> results.html http://target.com/FUZZ

This does a basic directory bruteforce, throwing HTTP GETs at the web server for every line in the wordlists/commons.txt file. It strips out the 404 Not Found responses and sends the output to an HTML file for later use.
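The same hide-the-404s idea can be done by hand with curl’s status-code output, which is handy for sanity-checking a tool’s results. This is our own sketch, not part of Wfuzz; probe(), hide_404(), and the demo values are illustrative names:

```shell
# Hedged sketch: filter candidates by HTTP status code, the same idea as
# Wfuzz's --hc 404. probe() and the target URL are placeholders.
probe() {   # prints "word status" for one candidate path
    printf '%s %s\n' "$1" "$(curl -s -o /dev/null -w '%{http_code}' "$2/$1")"
}

hide_404() {   # drop 404s from any "word status" stream
    grep -v ' 404$'
}

# Demo on canned probe output (no network needed):
printf 'admin 401\nbackup 404\nlogin 200\n' | hide_404
```

Note that a 401 survives the filter on purpose: as mentioned below, credential-protected directories are exactly the kind of thing we want to find.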

Wfuzz is actually a far more robust tool allowing you to fuzz web parameters to identify SQL injection, XSS, and bruteforce usernames and passwords. The lists for these injection strings are included with wfuzz. We will showcase Wfuzz in more detail in a future write-up.

Dirbuster is very similar. It uses a clean Java GUI that lets you specify the number of threads and tune that number on the fly (which is actually really handy). It also supports pausing, which is useful. Another great feature is selective recursion: if Dirbuster finds a directory it will automagically queue it for recursive scanning, but if we want to skip that directory we can un-check the tick box next to it, again on the fly. The GUI itself is pretty self-explanatory, and you can see basic usage in the video.

So what are we looking for?

Some of our favorites are:

* JBoss admin panels
* Backend web administration (think VPN, firewall, and website management logins)
* OWA servers
* FrontPage config files
* Citrix Portals
* Directories with databases
* Directories with readable web app scripts
* Webcam portals
* Development/stage versions of software/sites
* Default PHP Config files
* 401 credential protected directories
* Directories containing documents we can mine for metadata
* Scripts we can fiddle with (list below)

For file types, we want to look for things like scripts we might be able to manipulate, log files, and so on:

* .log, .phtml, .php, .php3, .php4, .php5, .inc, .asp, .aspx, .pl, .pm, .cgi, .lib, .jsp, .jspx, .jsw, .jsv, .jspf, .cfm, .cfml, .cfc, .dbm, .mdb
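One easy way to hunt for these file types is to cross your directory wordlist with a short extension list. This nested loop is a sketch of our own; the three base names are placeholders (in practice you would read them from a wordlist file):

```shell
# Build a file wordlist by crossing base names with interesting extensions.
# The base names here are placeholders; feed in your real list instead.
exts=".log .php .inc .asp .jsp .cfm .mdb"

for name in config backup index; do
    for ext in $exts; do
        echo "${name}${ext}"
    done
done
```

Redirect the output to a file and feed it to Wfuzz or Dirbuster like any other wordlist.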

Even resources that give you 403 Forbidden responses are valuable in identifying the web server’s structure and the apps that run on it.

Earlier we said that we interchange lists. Here’s why. Below are the sizes, in words, of the lists supplied with Wfuzz and Dirbuster (as well as another favorite tool of ours, Grendel-Scan). The whopping difference here is that Dirbuster’s lists are huge comparatively. The reason is that Dirbuster uses a large number of numeric-only resource requests; Dirbuster also seems to really take the word “bruteforce” to heart, requesting less-than-technical directory names.

We’d love to say “use X list over Y list,” but we really can’t: we have garnered valuable findings from all of these lists. If you aren’t under the blacklisting/shunning gun per se, you can cat these into a “masterdirs” file and then sort and uniq it. Just be aware that these lists are deliberately ordered by likelihood rather than alphabetically; if you have time to run the full list it doesn’t matter, but if your testing time frame is short, it will.

Wfuzz: common 947 | medium 1,660 | big 3,037
Dirbuster: small 81,643 | medium 207,631 | big 1,185,252
Grendel-Scan: small 100 | medium 300 | large 500 | XL 819
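The cat-then-sort-and-uniq merge described above looks like this. The two sample lists here are tiny stand-ins for the real Wfuzz/Dirbuster files, just to show the mechanics:

```shell
# Merge several wordlists into one deduplicated "masterdirs" file.
# Sample lists are stand-ins; note this sorts away the lists' original
# likelihood-based ordering, as warned above.
printf 'admin\nlogin\nimages\n'  > wfuzz_common.txt
printf 'login\nbackup\nimages\n' > dirbuster_small.txt

cat wfuzz_common.txt dirbuster_small.txt | sort | uniq > masterdirs.txt
cat masterdirs.txt
```

The duplicate entries (login, images) collapse to one line each, so the merged list never wastes a request on the same name twice.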

In our video we show the basic usage of both tools.

If you’re working with a scope that limits the tools you can install, or you want to comb over some could-be false positives from a tool’s output, you can do it with a small bash script (be easy on our Bash foo!):

# cat dircurl.sh

#!/bin/bash

if [[ $# -ne 2 ]]; then
echo "usage: $0 <directorylist> <target>"
exit 1
fi

for i in $(cat "$1")
do
echo -ne "directory: $i\tcount: "
echo $(curl "$2/$i" 2> /dev/null | wc -l)
done

This makes a curl request for each line in the supplied “directorylist” against the “target” and then does a line count (wc -l) on the response. Look at the output: what is the most common count?

# bash dircurl.sh scanneroutput http://target.com

directory: sitemap count: 266
directory: archives count: 266
directory: wp-admin count: 7
directory: links count: 0
directory: login count: 266
directory: articles count: 266
directory: support count: 266
directory: keygen count: 266
directory: article count: 266
directory: help count: 266
directory: events count: 266
directory: archive count: 266
directory: register count: 266
directory: en count: 266
directory: forum count: 266
directory: wp-includes count: 7
directory: software count: 266
directory: downloads count: 266
directory: security count: 0
directory: category count: 266
directory: content count: 266
directory: main count: 266
directory: press count: 266
directory: media count: 266
directory: templates count: 266
directory: services count: 266
directory: icons count: 266
directory: wp-content count: 7
directory: resources count: 0
directory: info count: 0
directory: overnment count: 266
directory: corrections count: 266
directory: ajax count: 266
directory: icom_includes count: 266
directory: rules count: 266
directory: tr count: 266
directory: server count: 266
directory: mirrors count: 266
directory: government count: 266
directory: corrections count: 266

Looks like my error page (or in some cases my redirects) has about 266 newlines. Let’s pipe that into grep -v 266, removing all lines containing 266:

# bash dircurl.sh scanneroutput http://target.com | grep -v 266

directory: wp-admin count: 7
directory: links count: 0
directory: wp-includes count: 7
directory: security count: 0
directory: wp-content count: 7
directory: resources count: 0
directory: info count: 0
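If you don’t want to eyeball the output for the magic number, a small pipeline can find the modal count for you. This is our own helper, not part of the original script; modal_count() and the canned sample lines are illustrative:

```shell
# Find the most common trailing "count" value in dircurl.sh-style output,
# so you know what to grep -v away. modal_count() is our own helper.
modal_count() {
    awk '{print $NF}' | sort | uniq -c | sort -rn | awk 'NR==1 {print $2}'
}

# Demo on a canned sample of the output format:
printf 'directory: a count: 266\ndirectory: b count: 7\ndirectory: c count: 266\n' | modal_count
```

Here the pipeline prints 266, which you would then feed to grep -v as shown above.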
 