Friday 17 December 2010

Dear Mr Haywood, Welcome to 2010

There has been some controversy over the recent rise in bug bounty programs. One response was issued by Anthony Haywood, CTO of Idappcom; you can find his article here. I read it in disbelief at some of the 'points' it espouses. I will avoid the more mundane trollings of the article and try to stick to the salient points.

At Idappcom, we’d argue that these sorts of schemes are nothing short of a publicity stunt and, in fact, can be potentially dangerous to an end user's security.
This is the crux of his argument. It is 2010, and we are still hearing the Security through Obscurity argument touted as a valid security strategy?

One concern is that, by inviting hackers to trawl all over a new application prior to its launch, just grants them more time to interrogate it and identify weaknesses which they may decide is more valuable if kept to themselves.
If a company is already at the phase of its security evolution where it is attempting bug bounties, it more than likely has an SDL in place. This SDL should include rigorous review, source code analysis, and even penetration testing by an internal security team. Nobody is suggesting that a company should rely solely on bug bounties to find its security flaws. Intimating that this is happening is a red herring, and this statement is a classic example of FUD in action. Mr Haywood is essentially saying, "If you let hackers see your program before your customers get it, they will be even more likely to find ways to abuse it." First of all, to my knowledge these bug bounties do not involve distributing pre-release versions of code to hackers on the Internet. A bounty is simply a way of incentivising security researchers and/or hackers toward responsible disclosure by offering a monetary reward for their contribution. Mr Haywood, hackers are already going to be trawling all over these applications. A bug bounty is just trying to bribe them into giving what they find back to the vendor.

Which ties into my second point: what's the difference if they see it now or later? If a company did what you're suggesting, there would be a portion of people who may well hold back the information to use after release. There would, however, also be legitimate security researchers who will turn over what they find, which will likely overlap with the findings of the malicious sorts. This increases the chance that the vendor will be able to issue a fix before going to release. Explain to me again how this is dangerous, or negative in any way?

The hacker would happily claim the reward, promise a vow of silence and then ‘sell’ the details on the black market leaving any user, while the patch is being developed or if they fail to install the update, with a great big security void in their defences just waiting to be exploited.

Yes, some malicious hackers will try to do evil, but we good guys will likely find the same things and report them. Your statement seems to imply that anyone looking over the code would be malicious. Frankly, I find this insulting. I have turned in numerous vulnerabilities to vendors even without any promise of reward. I have gone full disclosure only in the event that my attempts to elicit a response from the vendor have failed. The same can be said of any number of small-time folk like me, never mind people like Tavis Ormandy, Michal Zalewski, HD Moore, Jeremiah Grossman, Rob Hansen, etc. You seem to be taking a pretty broad shot at the security community in general with statements such as these. Moving on.

Sometimes it’s not even a flaw in the software that can cause problems. If an attack is launched against the application, causing it to fail and reboot, then this denial of service (DOS) attack can be just as costly to your organisation as if the application were breached and data stolen.
I'm not even sure what point you are trying to make here. Yes, there are denial-of-service vulnerabilities out there. What does that have to do with your argument at all?

A final word of warning is that, even if the application isn’t hacked today, it doesn’t mean that tomorrow they’re not going to be able to breach it.
That's exactly right. That is why a continuous security program needs to be in place. Security needs to be a factor from project conception, through the development lifecycle, all the way past release. Testing needs to be done continually. A bug bounty is a way of crowdsourcing continued testing in the wild.

IT’s never infallible and for this reason penetration testing is often heralded as the hero of the hour. That said technology has moved on and, while still valid in certain circumstances, historical penetration testing techniques are often limited in their effectiveness. Let me explain – a traditional test is executed from outside the network perimeter with the tester seeking applications to attack.
Wow. You take one possible portion of a penetration test and say "this is what a penetration test is" while ignoring all the other factors at play. An external-only black box pen test may go like this, but there are many different ways to perform a pen test, depending upon the engagement.

However, as these assaults are all from a single IP address, intelligent security software will recognise this behaviour as the IP doesn’t change. Within the first two or three attempts the source address is blacklisted or fire walled and all subsequent traffic is immaterial as all activities are seen and treated as malicious.
If you are really, really bad at performing penetration tests, this may be true. A real penetration tester will pivot whenever possible. Since we are specifically talking about AppSec (that's short for Application Security, Mr Haywood), this becomes even more relevant. In pen testing web apps it is extremely easy to disguise yourself as a perfectly normal user. A standard IPS is mostly ineffective in this realm, and WAFs are notoriously hard to configure in any meaningful way that does not break a complex application's functionality. Also, remembering that we are talking AppSec, a good pen tester will probably have proxies he can flow through. So if an IP gets blocked, he just comes from a different IP.
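The "single IP gets blacklisted" objection is trivial to defeat, and the rotation logic fits in a few lines. A minimal sketch, assuming a hypothetical pool of proxy endpoints (the addresses and the probe strings here are placeholders, not real infrastructure):

```python
from itertools import cycle

# Hypothetical pool of proxy endpoints the tester controls or rents.
# These addresses are placeholders, not real services.
PROXIES = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

def rotate_source(probes, proxies=PROXIES):
    """Pair each outgoing probe with the next proxy in round-robin
    order, so no single source IP accumulates enough suspicious
    traffic to trip a blacklist."""
    pool = cycle(proxies)
    return [(probe, next(pool)) for probe in probes]

plan = rotate_source(["GET /login", "GET /search?q=a", "GET /admin", "GET /api"])
# Consecutive probes leave from different addresses; the fourth
# probe wraps back around to the first proxy.
```

An appliance that blacklists by source address sees three (or three hundred) unrelated low-volume visitors, not one noisy attacker.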

I was a little perplexed by this strange attack on penetration testing. Then I found this article:

Idappcom seeks to displace penetration testers


Where you claim that your nifty little appliance will somehow replace penetration testers. So we can read your entire position as "don't trust manual testing, buy our product instead". Hardly the first time we've seen such a tactic from a vendor. Let's take a look at this for a moment, though. Will your appliance detect someone exploiting a business logic flaw? Will it shut down an attacker connecting to a file share with an overly permissive ACL? Will it be able to detect multi-step attacks against web applications? Will it really notice a SQL injection attack, and if so, how does it know the difference between a valid query and an injected one? These are the sorts of questions that present the burning need for manual human review on a repeat basis. No matter how hard you try, you will never be able to fully automate this. Actual humans will always find things a program can't. Let's move back to the techjournalsouth.com article, though.
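To make the SQL injection question concrete, here is a toy signature matcher of my own construction (the pattern list is illustrative, not taken from any real product) next to an obfuscated payload that slips straight past it:

```python
import re

# A naive, illustrative signature set -- the kind of literal pattern
# list a purely signature-driven appliance might rely on.
SIGNATURES = [
    re.compile(r"union\s+select", re.IGNORECASE),
    re.compile(r"or\s+1\s*=\s*1", re.IGNORECASE),
]

def flagged(value: str) -> bool:
    """Return True if any signature matches the input."""
    return any(sig.search(value) for sig in SIGNATURES)

# The textbook payload is caught...
assert flagged("' OR 1=1 --")

# ...but an obfuscated equivalent is not: inline comments break up
# the keywords, yet many SQL dialects strip them and parse the query
# exactly the same way.
assert not flagged("' UN/**/ION SEL/**/ECT password FROM users --")

# A tautology that avoids the literal '1=1' also sails through.
assert not flagged("' OR 'a'='a")
```

The appliance sees bytes on the wire; only something that understands what the application considers a valid query for that parameter can reliably tell the two apart, and that is exactly the kind of judgment a human tester applies.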

Instead you need two and both need to be conducted simultaneously if your network’s to perform in perfect harmony:

   Application testing combined with intrusion detection

Congratulations, we have all been saying there is no magic bullet for a long time. However, you present only two layers of defense in depth. Application testing and IPS by themselves are not enough. You need a full Security Development Lifecycle. You need firewalls and IPS systems that are properly configured and audited on a regular basis. You need policies governing change management and configuration management. You need proper network segmentation and separation of duties. You need hands-on testers who know how to tear an application or system apart and find the weak points.


Intrusion detection, capable of spotting zero day exploits, must be deployed to audit and test the recognition and response capabilities of your corporate security defences. It will substantiate that, not only is the network security deployed and configured correctly, but that it’s capable of protecting the application that you’re about to make live or have already launched irrespective of what the service it supports is – be it email, a web service, anything.

First of all, see my previous points about IPS/WAFs and protecting against web application attacks. Secondly, let's talk about your 'zero day' protection. This protection is only as good as the signatures loaded into the device. I could write an entire book on why signature-based security mechanisms are doomed to fail, and I would be far from the first person to speak at length on this subject. For some of the high points, just look back at my posts with Michal Zalewski about the anti-virus world. I'll leave it there.
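One of those high points is simple arithmetic: the space of trivially equivalent payloads explodes while an exact-match signature covers exactly one of them. A toy sketch of my own (real products do normalize case, but the same blow-up applies to encodings, whitespace, and comment placement):

```python
from itertools import product

def case_variants(word: str):
    """Generate every upper/lower-case spelling of a keyword."""
    return {"".join(chars) for chars in
            product(*((c.lower(), c.upper()) for c in word))}

variants = case_variants("select")
# 2**6 = 64 distinct spellings of one six-letter keyword.
print(len(variants))  # 64

# An exact-match blocklist holding only the two obvious spellings
# misses the other 62.
blocklist = {"select", "SELECT"}
print(len(variants - blocklist))  # 62
```

Each evasion dimension multiplies with the others, so the defender's signature list chases a combinatorial space while the attacker changes one character. That is the structural reason a signature database is always a step behind, 'zero day' marketing notwithstanding.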

While we wait with baited breath to see who will lift Deutsche Post’s Security Cup we mustn’t lose sight of our own challenges. My best advice would be that, instead of waiting for the outcome and relying on others to keep you informed of vulnerabilities in your applications, you must regularly inspect your defences to make sure they’re standing strong with no chinks. If you don’t the bounty may as well be on your head.
Yes, and one of the ways you inspect these defences is to have skilled people testing them on a regular basis. Relying on a magic-bullet security appliance or application to save you is irresponsible and foolish. Don't buy into vendor FUD.

Special thanks to Dino Dai Zovi(found here and here) for pointing out this article.


