November 14, 2007

Web-Application Vulnerability Scanners War

Do you remember Larry Suto's paper comparing a few commercial web-app scanners?
Suto actually evaluated three scanners (NTOSpider, AppScan, WebInspect) by targeting three web-apps while monitoring the scan process with Fortify's Tracer product.
Well, a paper full of big names!
At first glance, the paper looks interesting to anyone unfamiliar with web-app testing or new to automated tools. I was also surprised how one of the evaluated products (NTOSpider) remained almost unknown before the release of the paper. Believe me or not, MANY people changed their minds and refreshed their ratings based on the published paper. I think including "code coverage" measured with Fortify Tracer in the paper fooled many readers.
After the paper was released, I expected the evaluated companies to respond to it, as it made a lot of noise in the community, but we have seen a strange silence from the release date until today. As the winner (based on the paper's ratings), NTObjectives was simply busy responding to hundreds of trial-request emails and calls, and happy about being rated #1! Why should they care about the accuracy of the published paper when they've got their free marketing chance?
As for Watchfire (IBM) and SPI (HP), two companies backed by world-leading researchers on their teams and practically unlimited budgets, a technical response was to be expected sooner or later. SPI was the first to present theirs.

Today I noticed SPI released a paper revealing some realities about the evaluation process, and what a real evaluation report should look like. At first glance you may assume it is just a paper to save SPI from looking like the bad loser of the test, but if you read it to the end you'll notice how much clearer and more accurate the real results are. To save you time: Jeff Forristal is trying to tell you "WebInspect is not really as bad as the previous paper claims!" and also "Suto's methodology for the evaluation was neither effective nor accurate."

Having used WebInspect for a long time as part of my tool-box for (automated) assessments, I was curious about this new scanner war and the way competitors were going to prove themselves the better one. I have to mention that I've used all of the named scanners in real assessments and personal hard-core evaluations, along with some other scanners not mentioned in this war, so I know very well what I'm talking about here.
Focusing on the technology and logic behind the scans, and of course on accuracy, I'd instantly narrow the field down to WebInspect and NTOSpider, leaving out the other commercial products. Talking about AppScan, I have to say that it really disappointed me every time I tried to get close to it and evaluate the new versions/features released by the vendor.
The day I began using WebInspect (back in 2004), most automated (web-app) scanners were nothing but polished crawlers with the capability to manipulate/inject some parameters into the HTTP protocol and the audited application, armored with a number of hard-coded vulnerability checks for known web applications, web servers, and a few application servers. There was not much intelligence nor an effective methodology behind them, and you got tens of false positives and missed critical vulnerabilities per scan. WebInspect and NTOSpider were a bit better and faster than the other lame scanners and had a few optimization and customization capabilities inside, which set them apart. That was the reason I turned to SPI.
The other interesting point about WebInspect was SPI's focus and continuous work on enhancing the scanner engine and making the scan modules more intelligent, rather than releasing bursts of new vulnerability checks and hard-coded scan signatures. NTO was almost like SPI, but never as active and productive. So while closely monitoring NTOSpider's development and releases, I kept using WebInspect. (Feel free to call me a fan of SPI and their research team!)

Nowadays, scanners have changed a lot. They no longer rely on a massive database of known vulnerabilities checked one by one against the target. They're designed to crawl and parse web application parameters smartly and to check for common vulnerability classes with built-in heuristics and a few hard-coded tricks, alongside common brute-forcing.
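To make the "crawl and parse parameters" idea concrete, here is a minimal sketch of the core of that approach: extracting form input names from a page so each parameter can later be fuzzed. This is an illustrative stand-in, not any vendor's actual engine, and real scanners also follow links, handle JavaScript, cookies, and so on.

```python
# Minimal sketch: harvest form parameter names from HTML so a scanner
# can later inject test payloads into each one. Stdlib only.
from html.parser import HTMLParser

class FormParamParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.params = []

    def handle_starttag(self, tag, attrs):
        # Collect the "name" attribute of every <input> element
        if tag == "input":
            name = dict(attrs).get("name")
            if name:
                self.params.append(name)

page = '<form action="/login"><input name="user"><input name="pass" type="password"></form>'
p = FormParamParser()
p.feed(page)
print(p.params)  # ['user', 'pass']
```

Each harvested name then becomes an injection point for the scanner's checks; this parsing step is what separates a scanner from a dumb signature matcher.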

Needless to say, none of these automated tools is the ultimate answer for a customer trying to audit and assess his web-app. IMO web-app scanners are still not effective enough to be left alone against a target and hand back results we can trust. They're not even close to this point! The truth is that they will never reach this golden point. Web 2.0 technologies, as an example, have made many of these scanners useless and ineffective, and I'm not even counting the tricks and mechanisms recently being used to defeat automated scanning and attack tools. AppScan claimed the honor of being the only automated scanner capable of scanning AJAX technology, but further checks showed that it was just marketing buzz, and its scanning methodology is still very poor.

As you saw in SPI's evaluation paper, from launching the scanner to analyzing the final reports, there must be a knowledgeable expert guiding the scanner, optimizing it at every scan step, and finally confirming the findings to weed out false positives. Even at the final step you can't be sure everything is fully tested, and you must repeat the whole audit process manually to reveal vulnerabilities that cannot be discovered by automated tools. Of course, this last part is for those who have enough budget to hire an expert and are looking for something more than just an automatically generated report with tens of junk notes injected into it.

Here's how I usually rely on my automated tools in an assessment targeting web applications:

Step 1: Try to fingerprint the web server and deployed web applications by lightly querying the application and server, and also by using common online services such as Netcraft.
Step 2: Try to learn as much as possible about the target domain/site/web-app by looking at the contents indexed by Google and other search engines. Google is usually enough, but I never rely on it alone.
Step 3: Try to learn the logic of the web-app by grouping the step-2 findings, manually browsing the web-app, and slightly manipulating parameters.
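The passive part of the fingerprinting in step 1 can be sketched in a few lines: pull the identifying headers out of an HTTP response. To keep the example self-contained the response is supplied as text; in a real assessment you would issue a HEAD or GET request against the live target first. The header list and the sample response are illustrative only.

```python
# Sketch of passive server fingerprinting: extract headers that commonly
# leak server/platform details from a raw HTTP response.
def fingerprint(raw_response: str) -> dict:
    interesting = ("server", "x-powered-by", "x-aspnet-version")
    found = {}
    for line in raw_response.splitlines()[1:]:  # skip the status line
        if not line.strip():
            break  # blank line ends the header section
        name, _, value = line.partition(":")
        if name.strip().lower() in interesting:
            found[name.strip().lower()] = value.strip()
    return found

sample = (
    "HTTP/1.1 200 OK\r\n"
    "Server: Apache/2.0.59 (Unix)\r\n"
    "X-Powered-By: PHP/5.2.3\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html>...</html>"
)
print(fingerprint(sample))
# {'server': 'Apache/2.0.59 (Unix)', 'x-powered-by': 'PHP/5.2.3'}
```

Remember that these headers are trivially faked or stripped, which is exactly why step 1 also cross-checks against services like Netcraft.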

Step 4: Here's where I switch to the selected automated scanner (based on my findings), if one should be used at all! I'll optimize the settings, add customizations, and sometimes develop custom scan/check modules for the scanner if I notice that its current checks do not cover what I've found in the previous steps. Finally I'll define a new scan policy to include/exclude checks available in the vulnerability database.
Three points here.
Point One: There's really no reason to use automated scanners in every assessment you do unless it's really required.
Point Two: You must know the scanner you're using very well and be able to use all of its functionality, not just follow the default settings and the provided scan policies.
Point Three: Do not rely on only one automated scanner (commercial or free), and do not waste your time customizing the scanner: sometimes there are simply better choices out there for the purpose you're customizing it for. This depends heavily on whether point two applies to you. A good example is auditing web sites with very limited dynamic content, or completely static content: light free scanners for crawling and brute-forcing directories are much more effective than your giant commercial scanner. This is a sad truth when you remember the amount of $ spent on the product :) SensePost tools are my favorite replacements at this step.
Step 5: Checking and confirming the automated scanner's findings (if one was used), and identifying missed checks and poorly audited functions.
Step 6: Beginning the manual audit (raw traffic/parameter inspection and manipulation) in order to check for vulnerabilities that are usually missed by automated scanners, or are not possible for scanners to detect at all. Session management and handling vulnerabilities, or flaws in the application's logic, are good examples here. Downloading and auditing the source code of identified web applications (if available) also fits into this step.
Step 7: Documenting findings and exploits, and preparing the initial report. Repeating steps 2, 3, and 6 to discover more items, considering the knowledge gained about the target.
Step 8: Documenting and preparing the final report.
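The "light crawling and brute-forcing" mentioned in Point Three boils down to probing a wordlist of common directory names and keeping the ones that respond. In this sketch the probe is passed in as a callable so the example runs without a live target; in practice it would be an HTTP HEAD/GET checking the status code. The wordlist and the simulated site are made up for illustration.

```python
# Minimal sketch of a light directory brute-forcer of the kind Point
# Three recommends over a heavyweight commercial scanner.
def brute_dirs(wordlist, probe):
    """Return the candidate paths that the probe reports as present."""
    found = []
    for word in wordlist:
        path = "/" + word.strip("/") + "/"
        if probe(path):  # e.g. treat HTTP 200/301/403 as "present"
            found.append(path)
    return found

# Stand-in probe simulating a site that exposes /admin/ and /backup/
site = {"/admin/", "/backup/"}
common = ["admin", "images", "backup", "test"]
print(brute_dirs(common, site.__contains__))  # ['/admin/', '/backup/']
```

Against a mostly static site, this handful of lines plus a good wordlist often finds more than a full commercial scan run does.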
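The raw parameter manipulation in step 6 can also be sketched: given one query parameter, generate a few tampered variants to replay by hand while watching how the application reacts. The probe strings below are a tiny illustrative sample, not a complete injection wordlist, and the function name is mine.

```python
# Sketch of the manual tampering step: produce URL-encoded variants of
# a parameter carrying classic probe payloads (SQLi quote, path
# traversal, XSS tag) for by-hand replay against the target.
from urllib.parse import urlencode

def tamper_variants(param, value):
    probes = ["'", '"', "../../etc/passwd", "<script>x</script>"]
    return [urlencode({param: value + p}) for p in probes]

for qs in tamper_variants("id", "42"):
    print(qs)
# first variant: id=42%27
```

The point of doing this by hand rather than via the scanner is that you, not a signature database, judge whether the error page, redirect, or timing change is meaningful.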

Of course, these steps are for a black-box assessment. In other scenarios things really change.

I'm going to finish this post with a well-known sentence:
"Security tools and applications are only as effective and good as the person using them."
