Tuesday, 28 December 2010

A few words about NuCaptcha

After my recent posting about the cformsII Captcha Bypass Vulnerability, the folks over at NuCaptcha asked me to take a look at their offering. I took a poke around the free version of the product/service they offer.

A Slightly Different Approach:
Lots of people are trying to come up with new and innovative approaches to the Captcha concept, as OCR bots continually demolish many of the products out there. On top of that, there are now pay services where humans will sit and crunch captchas for you all day long. This leaves us with the question of how to change the game enough to keep moving forward. I've seen a lot of different ideas about this, and NuCaptcha's is far from the most innovative. That being said, it works well enough for now. Their captchas are animated, and display text that both is and isn't part of the captcha. You are asked to enter only the text that appears in red.

This is definitely a step in the right direction. That being said, the red text always appears at the end of the string. I would think it might be a little more effective to randomly colourise characters within the string, neither clumping them together nor putting them in a predictable location. Also, since they are colour coded, I can certainly envision an OCR bot capable of distinguishing colours. This is compensated for a bit by all the animation in the background, especially the variants with full advertisements playing behind the text, which provide a lot of 'noise' to help confuse any OCR bots. However, I don't think the NuCaptcha system is going to be impervious to OCR techniques, not by a long shot.

Ways I might suggest to improve this technique:

  1. Use random characters out of the larger text string to colour code
  2. Colourise all characters in the string in different colours, and randomly select the 'correct' colour on each request (one request wants the blue letters, the next the yellows, etc.), sort of adding entropy to both the letters and the colours
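
To make suggestion 2 concrete, here is a minimal sketch, in Python and purely illustrative (NuCaptcha's actual implementation is not public, and every name here is mine): colourise every character, then pick which colour counts as the 'correct' one per request.

```python
import random

# Hypothetical sketch: every character gets a colour, and the colour the
# user must type out is chosen fresh for each challenge.
COLOURS = ["red", "blue", "yellow", "green"]
CHARSET = "abcdefghijkmnpqrstuvwxyz23456789"  # same set cformsII uses

def generate_challenge(length=8, rng=random):
    chars = [rng.choice(CHARSET) for _ in range(length)]
    colours = [rng.choice(COLOURS) for _ in chars]
    # Only ask for a colour that actually appears in this challenge
    target = rng.choice(sorted(set(colours)))
    # The expected answer: the characters rendered in the target colour
    answer = "".join(c for c, col in zip(chars, colours) if col == target)
    return chars, colours, target, answer
```

An OCR bot now has to solve both the character recognition and the per-request colour instruction, rather than always grabbing the red run at the end.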
What about the paid human captcha-solving services? Well, NuCaptcha claims they address this by increasing the amount of time it takes a person to recognise and complete each captcha by a few seconds. Whether this is true or not, I doubt it will make much of a serious impact. That being said, if anyone has any better ideas out there, I'd love to hear them.

Where their technique really shines through is more in usability than security. I have gotten to the point where the mere sight of a captcha irritates me. They are often so illegible that an actual human has a hard time filling the stupid things out correctly on the first try. I do not feel any of that frustration with their system. Also, the idea of blending advertising space into their solution is a pretty savvy business move, in my opinion.

The Pseudo-Technical:

So I took some preliminary dives through their offerings. They offer PHP, Java, and .NET APIs. I subjected the .NET and Java APIs to some static analysis tools and let them run. I then ran some quick PHP examples and their WordPress plugin through a web application scanner and let it fly. I grabbed some of their SWF components and ran them through a decompiler, and didn't find much of interest there. Finally, I poked and prodded from within Burp Suite looking for anything unusual.

The long and short of it is that I found nothing of real interest. I have not, and probably will not, go digging line by line through their source code. For one thing, it looks like a bunch of the real work is offloaded to internal web apps on their side. For another, nothing in my cursory examination turned up anything the least bit indicative of a problem. Maybe someone more determined will come along and find something I missed.

So is NuCaptcha the "most secure" captcha solution out there? Beats me. I don't honestly know how to make such a comparison in the marketplace. What I do know is that it works, it is end-user friendly, and it does not have any glaring defects. Perhaps not as glowing a recommendation as they were hoping for, but anything more definitive is just asking for me to be proven wrong. Cheers!

UPDATE: Christopher Bailey from NuCaptcha wanted me to point out that the red-letters option is only one form the security captcha can take. They have different variants, as can be seen here. The actual captcha text is still always at a predictable location within the string, though, so my suggestion about randomly selecting characters within the string still stands. Thanks to the folks at NuCaptcha for inviting me to take a peek. I certainly appreciate their openness, if nothing else.

Friday, 17 December 2010

Dear Mr Haywood, Welcome to 2010

There has been some controversy over the recent rise of bug bounty programs. One response was issued by Anthony Haywood, CTO of Idappcom; you can find his article here. I read it in disbelief at some of the 'points' espoused within. I will avoid the more mundane trollings of the article and try to stick to the salient points.

At Idappcom, we’d argue that these sorts of schemes are nothing short of a publicity stunt and, infact, can be potentially dangerous to an end users security.
This is the crux of his argument. It is 2010, and we are still hearing the Security through Obscurity argument touted as a valid security strategy?

One concern is that, by inviting hackers to trawl all over a new application prior to its launch, just grants them more time to interrogate it and identify weaknesses which they may decide is more valuable if kept to themselves.
If a company is already at the phase of its security evolution where it is attempting bug bounties, it more than likely has an SDL in place. This SDL should include rigorous review, source code analysis, and even penetration testing by an internal security team. Nobody is suggesting that a company should rely solely on bug bounties to find its security flaws. Intimating that this is happening is a red herring, and this statement is a classic example of FUD in action. Mr Haywood is essentially saying "If you let hackers see your program before your customers get it, they will be even more likely to find ways to abuse it". First of all, to my knowledge these bug bounties do not include distributing pre-release versions of code to hackers on the Internet. They are simply a way of incentivising security researchers and/or hackers toward responsible disclosure by offering a monetary reward for their contribution. Mr Haywood, hackers are already going to be trawling all over these applications. A bug bounty is just trying to bribe them into giving what they find back to the vendor.

Which ties into my second point: what's the difference if they see it now or later? If a company did what you're suggesting, there will be a portion of people who may well hold back the information to use after release. There will, however, also be legitimate security researchers who will turn over what they find, which will likely overlap with the findings of the malicious sorts. This increases the chance that the vendor will be able to issue a fix before going to release. Explain to me again how this is dangerous, or negative in any way?

The hacker would happily claim the reward, promise a vow of silence and then ‘sell’ the details on the black market leaving any user, while the patch is being developed or if they fail to install the update, with a great big security void in their defences just waiting to be exploited.

Yes, some malicious hackers will try to do evil, but we good guys will likely find the same things and report them. Your statement seems to imply that anyone looking over the code would be malicious. Frankly, I find this insulting. I have turned in numerous vulnerabilities to vendors without even the promise of a reward. I have gone full disclosure only in the event that my attempts to elicit a response from the vendor have failed. The same can be said of any number of small-time folk like me, never mind people like Tavis Ormandy, Michal Zalewski, HD Moore, Jeremiah Grossman, Rob Hansen, etc. You are taking a pretty broad shot at the security community in general with statements like these. Moving on.

Sometimes it’s not even a flaw in the software that can cause problems. If an attack is launched against the application, causing it to fail and reboot, then this denial of service (DOS) attack can be just as costly to your organisation as if the application were breached and data stolen.
I'm not even sure what point you are trying to make here. Yes, there are denial of service vulnerabilities out there. What does that have to do with your argument at all?

A final word of warning is that, even if the application isn’t hacked today, it doesn’t mean that tomorrow they’re not going to be able to breach it.
That's exactly right. That is why a continuous security program needs to be in place. Security needs to be a factor from project conception, through the development lifecycle, all the way past release. Testing needs to be done continually. A bug bounty is a way of crowd sourcing continued testing in the wild.

IT’s never infallible and for this reason penetration testing is often heralded as the hero of the hour. That said technology has moved on and, while still valid in certain circumstances, historical penetration testing techniques are often limited in their effectiveness. Let me explain – a traditional test is executed from outside the network perimeter with the tester seeking applications to attack.
Wow. You take one possible portion of a penetration test and say "this is what a penetration test is", while ignoring all the other factors at play. An external-only black box pen test may go like this, but there are many different ways to perform a pen test, depending upon the engagement.

However, as these assaults are all from a single IP address, intelligent security software will recognise this behaviour as the IP doesn’t change. Within the first two or three attempts the source address is blacklisted or fire walled and all subsequent traffic is immaterial as all activities are seen and treated as malicious.
If you are really, really bad at performing penetration tests, this may be true. A real penetration tester will pivot whenever possible. Since we are specifically talking about AppSec (that's short for application security, Mr Haywood), this becomes even more relevant. In pen testing web apps it is extremely easy to disguise yourself as a perfectly normal user. A standard IPS is mostly ineffective in this realm, and WAFs are notoriously hard to configure in any meaningful way that does not break a complex application's functionality. Also, remembering that we are talking AppSec, a good pen tester will probably have proxies he can flow through, so if an IP gets blocked, he just comes in from a different one.

I was a little perplexed by this strange attack on penetration testing. Then I found this article:

Idappcom seeks to displace penetration testers

Where you claim that your nifty little appliance will somehow replace penetration testers. So we can read your entire position as "don't trust manual testing, buy our product instead". Hardly the first time we've seen such a tactic from a vendor. Let's take a look at this for a moment, though. Will your appliance detect someone exploiting a business logic flaw? Will it shut down an attacker connecting to a file share with an overly permissive ACL? Will it be able to detect multi-step attacks against web applications? Will it really notice a SQL injection attack, and if so, how does it know the difference between a valid query and an injected one? These are the sorts of questions that present the burning need for manual human review on a repeat basis. No matter how hard you try, you will never be able to fully automate this. Actual humans will always find things a program can't. Let's move back to the techjournalsouth.com article, though.

 Instead you need two and both need to be conducted simultaneously if your network’s to perform in perfect harmony:

   Application testing combined with intrusion detection

Congratulations, we have all been saying there is no magic bullet for a long time. However, you present only two layers of defence in depth. Application testing and IPS by themselves are not enough. You need a full Security Development Lifecycle. You need firewalls and IPS systems that are properly configured and audited on a regular basis. You need policies governing change management and configuration management. You need proper network segmentation and separation of duties. You need hands-on testers who know how to tear an application or system apart and find the weak points.

Intrusion detection, capable of spotting zero day exploits, must be deployed to audit and test the recognition and response capabilities of your corporate security defences. It will substantiate that, not only is the network security deployed and configured correctly, but that it’s capable of protecting the application that you’re about to make live or have already launched irrespective of what the service it supports is – be it email, a web service, anything.

First of all, see my previous points about IPS/WAFs and protecting against web application attacks. Secondly, let's talk about your 'zero day' protection. This protection is only as good as the signatures loaded into the device. I could write an entire book on why signature-based security mechanisms are doomed to fail, and I would be far from the first person to speak at length on this subject. For some of the high points, just look back at my posts with Michal Zalewski about the anti-virus world. I'll leave it there.

While we wait with baited breath to see who will lift Deutsche Post’s Security Cup we mustn’t lose sight of our own challenges. My best advice would be that, instead of waiting for the outcome and relying on others to keep you informed of vulnerabilities in your applications, you must regularly inspect your defences to make sure they’re standing strong with no chinks. If you don’t the bounty may as well be on your head.
Yes, and one of the ways you inspect these defences is to have skilled people testing them on a regular basis. Relying on a magic bullet security appliance or application to save you is irresponsible and foolish. Don't buy into vendor FUD.

Special thanks to Dino Dai Zovi (found here and here) for pointing out this article.

Wednesday, 15 December 2010

cformsII CAPTCHA Bypass Vulnerability

The cformsII plugin for WordPress contains a vulnerability within its captcha verification functionality. The vulnerability exists due to an inherent trust of user-controlled input. An attacker could utilise it to completely bypass the captcha security mechanism on any WordPress forms created with this plugin.

Captcha Generation:
CformsII generates its captcha by randomly selecting characters from the set a-k, m, n, p-z, and 2-9. I assume the letters l and o and the numerals 1 and 0 were excluded to avoid confusion when rendered as an image. It selects a random number of these characters based on preset minimum and maximum limits and assembles them into a string. It then creates an MD5 hash of this string, prepends 'i+' to the hash, and sets it as a cookie called 'turing_string_'. See the code excerpts below:
$min = prep( $_REQUEST['c1'], 4 );
$max = prep( $_REQUEST['c2'], 5 );
$src = prep( $_REQUEST['ac'], 'abcdefghijkmnpqrstuvwxyz23456789' );

### captcha random code
$srclen = strlen($src) - 1;
$length = mt_rand($min, $max);

$turing = '';
for ($i = 0; $i < $length; $i++)
    $turing .= substr($src, mt_rand(0, $srclen), 1);

$tu = ($_REQUEST['i'] == 'i') ? strtolower($turing) : $turing;

setcookie('turing_string_'.$no, $_REQUEST['i'].'+'.md5($tu), (time()+60*60*5), "/");

This cookie is set when the user is presented with the generated captcha image. When they submit the completed form, the captcha code is submitted in a POST parameter titled 'cforms_captcha'. This parameter is then MD5'd and compared to the MD5 value from the turing_string_ cookie. If the two hashes match, the submission is considered valid.

else if( $field_type == 'captcha' ){  ### captcha verification

    $validations[$i+$off] = 1;

    $a = explode('+', $_COOKIE['turing_string_'.$no]);
    $a = $a[1];
    $b = md5( ($captchaopt['i'] == 'i') ? strtolower($_REQUEST['cforms_captcha'.$no]) : $_REQUEST['cforms_captcha'.$no] );

    if ( $a <> $b ) {
        $validations[$i+$off] = 0;
        $err = !($err) ? 2 : $err;

The end result is that an attacker can pre-set a 'valid' captcha string. They take the MD5 hash of their chosen string, prepend "i%2b" (URL-encoded 'i+') to the value, and set that as the turing_string_ cookie for their POST requests. Every request sent with this parameter and cookie combination will be inherently trusted as valid from the captcha standpoint.
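
A quick sketch of the forgery, in Python standing in for whatever the attacker would actually script (the answer string here is arbitrary and attacker-chosen):

```python
import hashlib
from urllib.parse import quote

# The attacker picks the "captcha" answer themselves, then builds the
# cookie value exactly the way cforms does: 'i+' plus md5 of the
# lowercased answer.
chosen = "abc123"
cookie_value = "i+" + hashlib.md5(chosen.lower().encode()).hexdigest()

# URL-encoded for the Cookie header ('+' percent-encodes)
encoded = quote(cookie_value, safe="")

# The forged POST then carries cforms_captcha=abc123 together with
# Cookie: turing_string_<no>=<encoded>. Since the server just hashes
# the submitted parameter and compares it to the cookie it was handed,
# the comparison always succeeds.
```

No interaction with the real captcha image is ever needed; the client supplies both halves of the comparison.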

The problem here is twofold. The first issue is that the captcha codes are not one-time-use codes, as they should be. So even without tricking the captcha system in the first place, it would be possible to launch a replay attack against it to generate large numbers of submissions. Each captcha code should be valid for only one use, and only during a very limited time window.

The second problem is the trust of user-supplied data. The process is meant to validate entered data against another piece of data, yet both pieces are freely offered up to the client side for tampering. This completely negates the verification process, as the server side is not truly in control of the validation at this point.

The take-away:
Using cookies to store captcha data and then comparing against user-supplied input is not an appropriate method of validation, for a number of reasons. The captcha code, whether in raw or hashed form, should be stored server side for validation, should be valid for only one use, and should be valid for only a limited timeframe. This could be done using an in-memory array, a database, or even a flat file.
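
Here is a minimal sketch of that server-side approach. It's Python rather than the plugin's PHP, uses a plain in-memory dict (a database or flat file would work the same way), and every name in it is mine:

```python
import secrets
import time

CHARSET = "abcdefghijkmnpqrstuvwxyz23456789"  # same set cformsII uses
_store = {}  # token -> (answer, expiry timestamp); lives server side only

def issue_captcha(ttl=300):
    """Create a captcha; the client only ever sees the opaque token
    and the rendered image, never the raw answer or its hash."""
    token = secrets.token_hex(16)
    answer = "".join(secrets.choice(CHARSET) for _ in range(5))
    _store[token] = (answer, time.time() + ttl)
    return token, answer  # answer goes to the image renderer, not the client

def validate(token, submitted):
    """One-time, time-limited check. pop() removes the entry, so a
    replayed token fails even with the correct answer."""
    entry = _store.pop(token, None)
    if entry is None:
        return False
    answer, expiry = entry
    return time.time() < expiry and submitted.lower() == answer
```

Because the server holds the answer and discards it on first use, there is nothing in the client's hands to pre-set or replay.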