February 21, 2007

Nmap Secrets...

Are you a big fan of Nmap like me? If so, keep reading; else blogID += 1.
If you're a real Nmap kind of guy, you've probably checked the most complete material released about Nmap, titled "Secrets of Network Cartography". The book contains many interesting and useful tips on how to use Nmap more professionally; even if you consider yourself an expert, give it a try. But wait, don't go for the old version if you've missed it: a new release of the book, plus some more gifts, is on the way, and you can upgrade your Nmap knowledge with brand-new material which will be published in less than 4 hours (at the time of writing this). The upcoming event is a free webinar which will announce some new features of Nmap and some quick tips on using it. The webinar also has a gift for you. I'm not going to leak it, of course :p . If you really enjoy Nmap-related gifts, join the webinar, and at the end you'll thank me for announcing it to you ;)

February 20, 2007

Snort Nightmare 2007!

We all remember the bright "Yellow" of the SANS threat meter while everyone was coding his own version of an exploit for CVE-2005-3252 (AKA the Back Orifice preprocessor overflow) to blindly target running Snort and Sourcefire appliances. It was a really cool and, at the same time, dangerous flaw, which was used to compromise MANY targeted and random victims. Although the Snort team was fast enough to release a fixed version of Snort, as always tons of administrators left the upgrade process for the next working week, and guess what? Most of them ended up with a crash dump of Snort, ready for analysis! This was the major flaw of Snort in 2005.

The second major flaw in Snort was announced as CVE-2006-6931 (AKA the Rule Matching Backtrack DoS), when three researchers from the University of Wisconsin-Madison released a paper describing how it's possible to take down most of the current brands in IDS technology with a technique called "Backtracking Algorithmic Complexity Attacks". Snort was one of the vulnerable brands, and could be DoSed more easily than some of the others, by sending a single crafted packet. 2006 finished without any other major flaw in Snort getting publicly announced (oh, thank God!).

Guess what? Right, another major flaw in Snort for 2007 is keeping many 1337 c0d3rs out there busy writing another remote for Snort. Once again ISS (Neel Mehta) is credited for the flaw, which seems to be a result of his previous research on Snort back in 2006, getting published in 2007. CVE-2006-5276 is the placeholder of the mentioned flaw, affecting the DCE/RPC preprocessor of Snort 2.6.1.x and 2.7.0 Beta 1. I just wonder why ISS and Snort (Sourcefire) waited that long to publish this one. Maybe not enough Snorts got owned last time... ;)
I'm not sure when we will see the first public PoC, but the black market has already released new toys. Take this one seriously and update your Snort/Sourcefire ASAP, as this flaw can be reliably exploited and it's not hard to discover where it can be triggered. SANS handler J. Esler posted a useful diary describing a quick workaround for the flaw. Don't forget to take a look at his post.

[Updated on February 23]
It seems the first public PoC is out. The bad news for kids is that it's a DoS code. So far, three working exploits have been released commercially by different consultancy companies. The ones I'm aware of can target Sourcefire appliances and Snort running on SUSE, Debian, RHEL 3/4 and FreeBSD.

February 17, 2007

Sniffing Oracle Authentication: Downgrade Attack

Once again, Oracle is the case.
If you've reviewed my previous post about Oracle authentication, you've learned that eavesdropping attacks against Oracle authentication mechanisms are now a documented technique and should be taken seriously while hardening your RDBMS and network design. Following Litchfield's technical details, Laszlo Toth has just published the results of his research on the same case as a demonstration paper. In the paper, four versions of Oracle's native authentication mechanism are tried, and the actual attack becomes possible by downgrading to a vulnerable version of the authentication mechanism. Remember the recent upgrades in Cain 4.2 & 4.3? You're right, I'm talking about the new feature of Cain that lets the attacker downgrade NTLMv2 session security to an easy-to-crack type by injecting a known (attacker-defined) session key with the help of a MITM attack. In both the Oracle and NTLM cases, the client will face a disconnect/failed authentication. Now here's the way a hacker can own your flying Oracle authentication packets. As you'll read in the paper, the author did not disclose any technical details on the implementation of the attack. Disappointing, huh? But if you've been a reader of my blog, you've already got the details on how to implement such an attack ;)
Although the attack described by D. Litchfield is a bit different, the idea is actually the same. I'm not sure who was the first to implement this attack as a working tool, but after a chat with some friends I found that there are already private implementations in the hands of various researchers. Let's see how this new game goes on...
Finally, if you've been reading this new paper carefully, I should remind you of the power of Ettercap: its architecture lets you code your own plugins. Hopefully this attack can be implemented as an Ettercap plugin without going too deep into details. The hard part is the MITM, which Ettercap will nicely handle. The remaining TODO tasks are decoding the transmitted session key and response hash from the captured packets, and finally brute-forcing. Both are already well documented and have open-source tools released for them. I'll leave this final part as an exercise for readers: google the missing puzzle pieces and fit them together.
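Once the session key and response hash are decoded out of the captured packets, the brute-forcing part is conceptually just a dictionary loop. Here is a minimal Python sketch; note that `stand_in_hash` is a purely illustrative stand-in, NOT Oracle's actual (DES-based, version-specific) key-derivation scheme, which you would swap in from the decoding step:

```python
import hashlib

def stand_in_hash(challenge: bytes, password: str) -> bytes:
    # Illustrative stand-in only -- NOT Oracle's real key derivation.
    # Replace with the actual derive/verify logic for the sniffed version.
    return hashlib.md5(challenge + password.upper().encode()).digest()

def brute_force(challenge: bytes, captured: bytes, wordlist):
    """Return the first candidate password matching the captured response."""
    for candidate in wordlist:
        if stand_in_hash(challenge, candidate) == captured:
            return candidate
    return None
```

With the real derivation function dropped in, the same loop runs unchanged against the sniffed challenge/response pair.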

[Updated: 11:41pm]
It seems SecurityFocus.com has listed this topic too.

February 15, 2007

About the recent flaw in Solaris telnetd

Haha, how about multiple blog posts in one day? :)
Well, I have much to write about, but it's really hard to choose the best one for blogging.
Since the first announcement of the authentication-bypass flaw in the telnetd of Solaris 10 & 11 on full-disclosure, I've seen a storm of posts, news and emails here and there talking about the case. Very few of these notes really help an administrator, or anyone else interested in the details (is a >10-year-old bug really interesting at all?!). Skipping Casper Dik's (from Sun) posts on full-disclosure, most of the others look like replies to spam. After checking a few known resources, I finally came across a nice post about the case, which can be considered the most complete resource for people interested in easy-to-understand details.

Waiting for the third post of the day...? :)

Interesting article in INSECURE 10

Due to my workload, I rarely find a chance to update this blog these days. Just like in past years, there are many pending jobs to finish before we all leave our offices for the new-year holidays. But this does not mean I'll leave the blog idle.
Today I downloaded the latest release of INSECURE magazine, trying to review it for interesting notes. I don't consider it a technically useful magazine, nor a good one on security management; however, sometimes you'll find cool articles inside. This month's issue includes an article named "Climbing the security career mountain: how to get more than just a job", which is a good read for any of you out there who just got interested in IT-SEC and are trying to get ready for a position in this field.
I've been asked many times "How do I become a security professional?" or "How did those experts get their current positions?". Well, I guess the mentioned article covers most of these questions, and I liked the way the author explains the challenges ahead of you. As you'll read in the article, the truth is that getting a good position in this field is really challenging and hard to achieve, but after all it's a win-win game: no matter how you look at your job, you're always learning something new. Finally, if you want good results out of this field, you must try your BEST. I think the sentences below from the article are enough to show you what's really going on:

"in developing a real career in security, there
are very few areas of technology that you
won't be required to know and understand at a
significant level. If you meet some of the best
security professionals, you'll quickly realize
that they know Unix like a Unix admin, Cisco
routers like a CCNP, Oracle like a DBA and C++
like a software engineer. And they can keep up
in a conversation with any of those people."

You can get current release (10) from here.

February 8, 2007

Automated Penetration-Testing Frameworks...

Metasploit, SAINTexploit, Core Impact and CANVAS are names you've probably heard while following any conversation covering penetration testing and the frameworks that have been developed to speed up and enhance it. The term "framework" here means a set of tools, code and scripts integrated together to help the user accomplish all, or at least most, of the tasks s/he should focus on in a pen-test session, including but not limited to information gathering (AKA foot-printing), identifying targets, analyzing them for potential vulnerabilities, developing exploits for them, gaining access to the target, and the following post-exploitation tasks, which can be another loop of the mentioned steps.

In a normal pen-test, each step has its own definition, tools and plans to check and try. The agent should have a deep level of knowledge and experience to be able to manage and finish every step, and to summarize the results of each one to feed the next step. After any step, the agent is usually faced with tons of results which should be cleaned up, identifying false positives and missed items. In a legacy pen-test session, it's the agent who takes care of everything and matches the pieces of the puzzle together. But in an automated session the story is a bit different. Assuming the way the market describes these frameworks (as automated ethical hackers) is true, an automated tool they sell should be able to automatically finish every step, summarize the results and move to the next step, identify flaws, successfully exploit them, and finally leave the user with access to the compromised system for post-exploitation tasks. Of course, these tools are not expected to be used by a raw brain who knows nothing about what he's doing.
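That loop of steps can be sketched as a tiny driver: each step is a pluggable function, and hosts compromised in one round become targets for the next. Everything here (the step callables, the depth limit, the return shapes) is hypothetical glue for illustration, not any real framework's API:

```python
def run_pipeline(targets, footprint, analyze, exploit, depth=2):
    """Pen-test loop sketch: footprint -> analyze -> exploit, then pivot.

    `footprint(target)` yields hosts, `analyze(host)` yields findings, and
    `exploit(host, findings)` returns (success, new_targets). Compromised
    hosts feed new targets back into the loop, up to `depth` rounds.
    """
    compromised, seen = [], set()
    for _ in range(depth):
        next_targets = []
        for target in targets:
            if target in seen:
                continue
            seen.add(target)
            for host in footprint(target):
                success, pivots = exploit(host, analyze(host))
                if success:
                    compromised.append(host)
                    next_targets.extend(pivots)
        targets = next_targets
    return compromised
```

The interesting question of this whole post is how much of each callable the vendor actually ships, and how much the end-user has to write himself.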

But how experienced should the end-user be? Reading some of the advertisements makes us think like this: the customer purchases a copy of the software, one of the technical guys in the company launches the framework against the managed IP ranges while following the tool's documentation, and by the end of the working day the company can assume they are aware of any potential vulnerability which may be abused by hackers, while also checking how effective their IPS is. Such descriptions and advertisements, IMO, give a false sense of security and power (of knowledge) to the end-user, leaving some doors open for experienced attackers. The truth is that such advertisements really sell! It's not that hard to make your boss pay for a wizard which is capable of owning your enterprise in a matter of clicks. Why pay $100k each year for a red team when we can bring one into the office for a price usually less than $20k?

Here is exactly where we should take a look under the hood. Let's see what these tools really offer and who the real end-user of such products should be. Skipping some post-exploitation features, most of the public frameworks out there are actually the same idea, with different implementations. As I mentioned at the beginning of this note, there are some basic steps every framework/agent should follow in a pen-test session. Let's see how our so-called automated tools handle each step, and how it really should be done.

Taking foot-printing as the first step, none of the available tools on the market have anything cool to offer, just some simple routines for automated discovery of alive hosts and open ports, and detection of the remote operating system (and version). So basically we feed the tool a range of IP addresses. Seems to work, right? Yes and no. 'Yes' for the condition where the tool is used in a local, unprotected network where every host can be fingerprinted by various methods, including ARP pings, sniffing broadcasts, enumerating RPC interfaces and so on. 'No', which is the better answer, is where we're faced with some serious work and we're outside the borders of the target network.

The basic foot-printing these tools are capable of doing no longer works these days. Most of their features come in handy when you're already inside. Besides this, most of them do not offer more in-depth fingerprinting options like digging into DNS servers, brute-forcing hostnames, etc. Blindly pinging IP ranges and checking for open ports is no longer an interesting way of discovering hosts behind firewalls. The results of fingerprinting can be considered good enough only when we've already tried public search engines, registered domains, digging into any possibly linked DNS server, tracing any available email headers, and brute-forcing hostnames and subdomains. Finishing these tasks requires hours of searching and trial & error for the agent, or nice pieces of fingerprinting tools and scripts mixed with AI. Both (the human resources and the fingerprinting scripts) are available these days, but we cannot find any good sign of them in the available frameworks out there. So in the end we have to actively help our framework finish the first step, either manually or by using 3rd-party tools, to make food for our IP-hungry framework. We finish the first step by compiling the list of IP addresses we should work on in the next step.
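As a taste of what such a 3rd-party helper looks like, here is a minimal hostname brute-forcer in Python. The resolver is injectable (handy for testing or for pointing at a specific DNS server); by default it falls back to plain `socket.gethostbyname` lookups. The wordlist and domain are, of course, yours to supply:

```python
import socket

def brute_subdomains(domain, wordlist, resolve=socket.gethostbyname):
    """Try word.domain for every word; return {hostname: ip} for the hits."""
    found = {}
    for word in wordlist:
        host = "%s.%s" % (word, domain)
        try:
            found[host] = resolve(host)
        except OSError:  # socket.gaierror: the name does not resolve
            pass
    return found
```

Run it with a decent wordlist against the target domain and you get a list of live hostnames that no amount of blind IP-range pinging would have discovered.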

The next step is analyzing the gathered IP addresses to find out the details of the running software, services and their versions. The results of this step are usually our only source of information for the step after it: exploiting flaws, which requires specific details about the targets. Let's see how current frameworks handle this step, and how it should be handled. The process begins with probing hosts for open ports, continues with identifying the software listening behind each of them, and finishes with trying to identify the exact version of each piece of software. Unless we've finished the above correctly, we have no idea whether the remote target is vulnerable to any known/unknown flaw. Of course, there's always the option of assuming the running version is a common one and trying to blindly exploit it in the next step. The available products are good enough to finish this step, but they are not as polished as the other choices we have outside the package. For example, they don't have a massive database for fingerprinting and matching services like Nmap does, though they all support its output. The only one that looks good at this step is Core Impact. Also, they usually fail if the target software is not listening on its default port, except for some protocols like HTTP, RPC and a few others.
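The "identify what's listening, whatever the port" part boils down to banner grabbing plus a signature database, which is exactly where Nmap's huge probe/match files win. A toy Python version, with a hypothetical three-entry signature list standing in for the thousands of real signatures:

```python
import re
import socket

# Hypothetical signature list: (regex over the banner, product name).
SIGNATURES = [
    (re.compile(rb"^SSH-2\.0-OpenSSH_([\w.]+)"), "OpenSSH"),
    (re.compile(rb"^220 .*Postfix"), "Postfix smtpd"),
    (re.compile(rb"^HTTP/1\.[01] \d{3}"), "HTTP server"),
]

def match_banner(banner):
    """Return (product, version-or-None) for the first matching signature."""
    for pattern, product in SIGNATURES:
        m = pattern.match(banner)
        if m:
            return product, (m.group(1).decode() if m.groups() else None)
    return None, None

def grab_banner(host, port, timeout=3.0):
    """Connect and read whatever the service volunteers first."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(1024)
```

Matching the banner regardless of which port it was grabbed from is the whole trick; the frameworks that fail on non-default ports are skipping exactly this step.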

The real game begins here at the third step: trying to exploit the discovered flaws, or, even cooler, developing exploits based on the previous findings and using them to jump in. Here's where we really need frameworks, and where they should show their real power and capabilities. In a normal pen-test, after finishing the previous steps the agent has enough information to focus on a specific piece of software or service as the target and try to find the best possible way of exploiting its vulnerabilities. 'Best' here means the most stable way of exploiting the flaw, in a manner that makes the least impact on the target, while flying under the radar of monitoring, detection and protection mechanisms. Assuming there's no framework out there, the agent will build the exploit from scratch based on every single finding from the previous steps. For example, he will tune addresses to exactly match the remote versions, and choose the best possible payload based on the situation (it may be a simple port-bind or an ACL-flushing payload). As for flying under the radar, the agent must try the best possible/available techniques to stay stealthy; the stealthiest technique is not always the best choice. So there's much to do in preparing the exploit. For a sensitive and special case (or target), we really need to carefully fine-tune the exploit, as there may be no second chance at all. But in many cases (usual tests), the agent is faced with straightforward, already-tried vulnerabilities. All he needs to do is customize an available resource (exploit) to match the new target, version and situation, and that means repeating the exploit-development stages, which can be really time-consuming.

Let's see how the available frameworks can help and speed up this process. Once we know the technical details of the targeted vulnerability (details like how to deliver the payload to the target service, the size of the buffer, heap states, bad chars, the preferred code-execution technique, etc.), all we have to do is write a few lines in the language of the framework to tell it how and where to send a payload, and teach the framework some details about the flaw. If you've been careful enough in providing correct details, the available frameworks are stable enough to give back working results, while hiding many details of exploitation from the user, including generating the payload (taking care of bad chars), encoding it and sending it over the wire. Most of them have recently been armed with advanced techniques to generate and send payloads undetectable by the current market of monitoring and intrusion-detection mechanisms. SMB & RPC fragmentation, or the encrypted sessions and encoded payloads used in client-side attacks, are some good examples available in MSF, CANVAS and Core Impact.
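The bad-char handling those engines hide from the user is easy to illustrate. A minimal single-byte XOR encoder in Python; real framework encoders are far smarter (polymorphic, multi-byte, generating the decoder stub for you), so treat this as the core idea only:

```python
def xor_encode(payload, bad_chars):
    """Pick a single-byte XOR key so the encoded payload avoids bad chars."""
    for key in range(1, 256):
        if key in bad_chars:
            continue  # the decoder stub itself has to carry the key
        encoded = bytes(b ^ key for b in payload)
        if not any(b in bad_chars for b in encoded):
            return key, encoded
    raise ValueError("no single-byte key works; need a smarter encoder")

def xor_decode(key, encoded):
    # What the decoder stub does on the target, byte by byte.
    return bytes(b ^ key for b in encoded)
```

A shellcode full of NUL, CR and LF bytes would die inside most string-handling parsers; after encoding, only the tiny decoder stub has to be bad-char clean by construction.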

Although the frameworks look cool and complete at this step, they are again not a final, ready-to-use solution for discovered vulnerabilities. In cases where the targeted vulnerability has already been added to the framework, the user usually faces incorrect or unmatched versions of the target software. Skipping some specific Windows-related vulnerabilities which can be exploited universally, in most cases the user needs the correct details for exploiting the flaw, and if the end-user isn't experienced enough to find the correct details for his own version of the targeted system, he will be limited to the framework's hardcoded details. To be more clear: the exploits provided in frameworks are useless if the end-user does not have the knowledge to correctly modify them! This level of knowledge means an agent capable of coding (not always) simple exploits for common overflow cases, including but not limited to heap or stack overruns.

In case the user works on exploiting a flaw found during analysis, or on a flaw not previously provided in the framework, he has no choice but to develop his own module for the framework. Current frameworks have enough interesting options to offer for this step.

In order to develop modules for a framework, the user first must get familiar with it. One option is reading the code of the provided exploits, modules and framework core components, which is the hard but more effective way of learning it. The next option would be checking the framework documentation (if there is anything to read!). If you're going to select CANVAS as your choice, be warned that you have nothing to follow but code comments. In case you choose MSF, you'll have a nicely documented API and many public resources available on how to develop modules for it. If you choose Core Impact, a few development guides compiled as CHM files are available in the package. There you have a few fully commented sample exploits and a few other hints for BASIC development. I've not checked the latest versions of the Impact dev guide, but you cannot find anything about the advanced features of the framework; the only resources are, again, the provided Python exploit modules. For example, neither CANVAS nor Core Impact provides documentation about their payload encoders or NOP generators, nor their evasion details (in a documented manner).

Documentation aside, AFAIK Metasploit is the only framework that provides scripts and tools for the primary stages of development, like determining bad chars, determining buffer sizes, locating proper jump points in binaries, etc. In the others you have to extract the required details from your debugger or custom scripts and fill in the blanks in the framework.
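Those helpers are small enough to rebuild yourself. Here is a Python sketch of the classic buffer-size pair, using the same non-repeating Aa0Aa1... cyclic scheme that Metasploit's pattern_create/pattern_offset tools are based on:

```python
import itertools
import string

def pattern_create(length):
    """Non-repeating cyclic pattern: Aa0Aa1Aa2... up to `length` bytes."""
    chunks = []
    for a, b, c in itertools.product(string.ascii_uppercase,
                                     string.ascii_lowercase,
                                     string.digits):
        chunks.append(a + b + c)
        if len(chunks) * 3 >= length:
            break
    return "".join(chunks).encode()[:length]

def pattern_offset(value, length=20280):
    """Offset of `value` (e.g. the 4 bytes landing in EIP) in the pattern."""
    offset = pattern_create(length).find(value)
    return None if offset < 0 else offset
```

Send pattern_create(n) as the overflowing string, read the 4 bytes that end up in EIP from the crash dump, and pattern_offset tells you exactly how far into the buffer the saved return address sits.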

Once again it becomes clear that the end-user of a framework MUST be experienced enough if he wants to get the full benefits of the product. I doubt that every company out there purchasing or downloading one of these frameworks has such a cool guy inside their office! Of course, while talking about 'customers' I skip those companies that purchase/get frameworks to speed up their consultancy services such as penetration testing or research.

If the agent has successfully passed the mentioned steps, it's finally time to own targets. But hey, sometimes it's not as simple as getting a remote shell from a compromised system. The agent may have to get deeper into the network, detect and compromise more hosts from the entry point, grab some data, or fool administrators to reach the final penetration-test target.

In order to finish this step successfully, the agent should have a previously prepared set of tools (home-grown or tools released by the community) ready for the game. Some people prefer a mix of a few generic tools and their coding/scripting experience, while others prefer a complete collection of tools, already customized and tuned for every single task.

Assuming the agent has to jump from one host to another, the portability of tools sometimes causes trouble and annoyance. Not all tools are platform-independent, and the compromised host does not always have all of your tool-set's dependencies. Besides that, the agent should stay stealthy while working; moving every single tool to the remote network is not a good idea, as admins can always be there, monitoring. Transferring the targeted data is not always as simple as copying it to the agent's host or an internet-readable directory. Sometimes it's required to bypass multiple strict firewall rules and policies to extract data from protected back-end servers. And finally, there are always some prying eyes watching every single packet on the network! These are some of the post-exploitation challenges an agent may face in his penetration test.

Let's see what the frameworks have to offer. During recent years, many techniques have been researched by the community, and some of them were cutting-edge techniques showing new aspects of the post-exploitation steps. Syscall proxying is the most notable research, introduced by Oliver Friedrichs and Tim Newsham back in 2001 as a model, and the first implementation was brought to the community by the CORE ST guys as a Linux shellcode. Based on this concept, many other tricks were implemented in frameworks, solving some problems for agents, including moving inward from a chain of compromised hosts, playing with a compromised host without touching the hard disk, or loading required tools directly into the memory of the remote host. This subject is not dead after about 5 years, and once in a while we see new add-ons. The last one I'm aware of is implemented in Core Impact and lets you bypass the non-executable stack mechanism built into recent versions of operating systems like Windows 2003 SP1. Another interesting post-exploitation feature of current frameworks is loading applications directly into the compromised host's memory. CANVAS does this in its unique MOSDEF way, which lets you remotely compile your custom code into memory, while Core Impact and MSF let you load binaries (DLLs) remotely. In the current market, Core Impact and CANVAS are the only products supporting post-exploitation based on syscall proxying. Metasploit is still missing this most-wanted feature, but MSF users are not left alone: Meterpreter is MSF's most advanced payload, providing some cool post-exploitation options; however, its one and worst weakness is that its current version is available only for Windows targets.

As you can see, for this last step, working is as simple as some point-and-clicks. The end-user can enjoy browsing compromised hosts without worrying about the blocks of data moving between them.

Reporting is the last step. No doubt tools are always faster than us at report generation, but the report of a penetration test is not like reporting the open ports and missing patches in a scanned network. No matter how detailed and user-friendly the generated reports are, they cannot be used as the final output. In the best cases, you can grab parts of the generated reports (if any are available) and use them in the final report. The current state of frameworks in report generation is not interesting at all; they come in handy only when you want to mention exact times, dates or some detailed debugging information. Core Impact generates graphical reports, and parts of the generated report are really useful, but the others have only just begun caring about reports! MSF only supports its detailed debugging reports, and CANVAS recently added a few options to the framework, generating pretty raw and simple output. Of course, canvas.log is always out there for your reference, filled with a time-stamped log of the actions you've done in the framework.

Summarizing the above paragraphs clearly shows that current penetration-testing frameworks/products are not what they are announced to be in the market. I don't mean they are not useful, or that they are poor; they are simply not the packages you may read about on vendor sites, in advertisements, or hear about from the community. On the other hand, it's now clear that what the developers of these frameworks consider 'customers' are not who most people think. Developers expect their product to be used by experienced end-users capable of using the provided framework as the base of their tool-set, NOT as their final, ultimate hack-pack. At the same time, a large number of the people interested in such products think that once they purchase one of these, they get the power and knowledge of the framework's developer and nothing is left for them to do. They happily think they will get it, launch the so-called QA'd and brand-new exploits against their targets, and they will be in. Finally, after multiple tries, they may even think that they have been tricked!!! The truth is that such a group of customers should NOT be the end-users. Before wasting your budget, you should qualify yourself and check whether such a product can really help you or your company. If you expect the product to do the magic for you, then you're probably choosing the wrong product; but if you think a framework can boost your already sorted tool-set, and you can modify & enhance its features YOURSELF, then I think it is the right decision to pay for it. All of the available frameworks have support and experienced teams behind the products, but the truth is that you cannot expect much from them. As I've experienced multiple times, they are all cool and great while replying to your requests, but you shouldn't expect them to do all sorts of things for you. Finally, I think it's better to rely on the features of a framework rather than on the exploits of a framework. Yes, their exploits always save the time you might spend working on your own exploit, but it's not a single exploit code that makes frameworks powerful.
Frameworks are considered valuable because they provide a great base and platform for your exploits and your exploitation process.

*Btw, comments are always welcome here :)
Drop me a line in the comments if you'd like to read about specific topics here.

[Updated 08 February 2007]
I was wrong when mentioning syscall proxying and CANVAS. CANVAS actually does not handle pivoting based on syscall proxying; it relies on MOSDEF, and MOSDEF uses its own techniques to implement pivoting, which are equivalent to syscall proxying. Read Dave's comment below. Remember I told you the only resource for knowing CANVAS is to read the code? ;)
Thanks Dave for your review :>