Tuesday, 9 March 2010

Who can you trust?

So by now, everybody has heard about the whole Energizer DUO affair. Couple that with the news that Vodafone shipped out some Android phones with Windows malware loaded on them. If you haven't heard about this bit yet, I recommend reading here and here. The ZDNet post is especially nice because it includes links to posts about other incidents just like this. You just have to ignore the Linux vs. Windows flamewar, which I'm sorry to say I let myself get dragged into the middle of. I think it's a shame that the post devolved into that when there's a serious security concern brewing here. It has nothing to do with OSes, or even software. It has to do with trust.

We spend a lot of time talking about trust in the security world. "Don't download software from an untrusted source." "Don't open emails from people you don't trust." "Don't plug untrusted USB devices into your computer." Then we get very condescending when people fail to obey these simple tenets of trust. What do we do when the trust betrays us, though? These two most recent examples show cases where the users had every right to trust the infection vector. They downloaded software directly from Energizer's site; why wouldn't it be safe? I just bought this phone, it's brand new; how could it possibly have malware on it? The phone example would be exactly the same as if you went to a store like Staples and bought a thumb drive. You open that horrid plastic bubble packaging, insert it in your computer, and then your antivirus starts setting off alarms like a 1940s air raid siren. The device was brand new, had not been tampered with in the store as far as you could tell, and came from a trusted source.

So now what if we take our hypothetical situation one step further? What if the malware isn't recognized by your AV? Now we have an infected computer. Your friend brings his USB drive over a couple of days later to copy some files. It's his USB drive; he knows where it's been. He knows you're a smart guy, so your computer should be safe. He takes the infected drive home, and now infects his machine. The cycle is obvious, of course. Yes, these hypothetical people should have autorun turned off, we all know that by now, and so this example is not perfect. The issue is the trust factor, though. In these situations, there is no "blame it on the user". They had every reason to trust these sources. It seems like the only answer is "don't trust anyone or anything". I'd love to see people's thoughts on this.

Monday, 8 March 2010

This is just sad

So I was taking a poke at a friend's server, doing a preliminary sweep for them. I noticed that they were running FileZilla 0.9.33, and so I did a quick Google search for "filezilla 0.9.33 vuln". What I came up with scared me a little bit. It wasn't that I found some huge gaping vulnerability, but rather a level of ignorance from one of FileZilla's forum admins that was simply astounding. You can see the forum thread here, and find the CVE for the vulnerability being discussed here. The vulnerability being discussed is an information disclosure in the getcwd() function.

The site admin, botg, replies "What is FTP getcwd()? There's no such thing". Botg seems to think that this posting is about misuse of an FTP protocol command. He is then presented, by another user, with the CVE for this vulnerability. He then replies, "Thank you, I know how to use Google. Doesn't change the fact that there's no such thing as FTP getcwd(), whatever that means". This is the statement that, more than anything else, blows me away.

In the scan results the original user posted it says
Details: The FTP daemon exhibits a descriptor leak in the getcwd (get current working directory) function.
Extra info: None.
Fix: Upgrade your libc C library to the current version.
And in botg's reply, he even includes the function brackets when referring to getcwd. Funny, botg, that sure looks like a programming function call, now doesn't it? His snarky reply even sows the seeds of his own demise. "I know how to use Google." Oh really? Let me help you out. As the first link describes the C function getcwd(), I would say you seem to have some problems using Google after all. I would also say that you obviously have no understanding of how software vulnerabilities happen. If you think that vulnerabilities happen by some command the user can just type in and "hack the gibson", you need to stop watching TV, mate. "It's not my job to know these things," you might say. No, but you are in the position of helping users, and this one came to you with a question. Rather than doing any decent amount of research, you opened your mouth and inserted your foot. Let's forget the whole Google bit, or the fact that it is immediately obvious that this is a C function call. I once again point you to the scan results the user posted:

Fix: Upgrade your libc C library to the current version.

Hrm, I wonder if that might provide a clue as to what's going on here? If this is the level of support a filezilla user can expect, I feel very sorry for them.
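For anyone else confused by the exchange: getcwd() is a standard C library call (it lives in libc, which is exactly why the suggested fix is a libc upgrade), and Perl even exposes it directly through the POSIX module. A trivial sketch:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(getcwd);   # thin Perl wrapper over the libc getcwd() call

# getcwd() is "get current working directory" from the C library,
# not an FTP protocol command, which is the entire point of the CVE.
my $cwd = getcwd();
print "Current working directory: $cwd\n";
```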

Update: I decided to register for their forums so I could post some useful advice to this thread. I would take the high road, instead of just sitting back and being snarky myself. Imagine my surprise when my confirmation email came in to activate my account, and my username and password were both on it in plaintext... uggggg. These people make me want to cry!

Friday, 5 March 2010

Monitoring those NTLM authentication Proxies

So, now that we have discussed how to overcome the challenge of testing those NTLM proxies, we move on to a better use. Load testing is fine and good, but how often do you really need to load test? Let's say, though, that you have a couple dozen of these proxies spread out all over the globe, and for some reason MOM just doesn't cut it for monitoring the actual request performance on these proxies.

Using the base design of the previous script, I created one that is set to test each proxy in the environment once, through the same URL, and measure the delay in response. This is not 100% accurate, as internal networking issues can cause some unaccounted-for fluctuation, but it is good enough for general purposes. So I created a MySQL database with two tables. One is a status table, which contains the proxy, a counter, and the current known status. This is especially useful because the script pulls the proxies to test from this table, so adding or removing proxies is just a matter of doing it in the database instead of altering code. The other table is a simple log.
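As a rough sketch, the two tables might look like this. The column names and their order come from the SQL the script runs (it reads status rows as proxy, status, count, and logs proxy and delay to chklog); the types and the timestamp column are my own guesses:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical DDL for the two tables the monitoring script queries.
# Column names match the script's SQL; types and the timestamp column
# are assumptions.
my @ddl = (
    q{CREATE TABLE status (
        proxy  VARCHAR(255) PRIMARY KEY,
        status VARCHAR(16)  NOT NULL DEFAULT 'GOOD',
        count  INT          NOT NULL DEFAULT 0
    )},
    q{CREATE TABLE chklog (
        proxy   VARCHAR(255),
        delay   FLOAT,
        checked TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )},
);

print "$_;\n\n" for @ddl;
```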

The script times the delay from the initiation of the request to the final response, and then assigns a status based on this result. It compares that to the current status listed for that proxy; if it is different, it updates the table and emails out an alert. If the proxy continues in a persistent bad state, the script will send out a new alert on the 12th straight return of that bad status. This ensures we are notified that the status is persisting, but doesn't flood us every 10 minutes, which is how frequently the script runs. Anyways, without further ado, here is my simplistic little proxy monitoring script.

----------------code--------------------

#!/usr/bin/perl -w
use threads;
use DBI;
use LWP;
use LWP::UserAgent;
use HTTP::Request;
use Authen::NTLM(nt_hash, lm_hash);
use Authen::NTLM::HTTP;
use Time::HiRes qw( gettimeofday );
use Math::Round;
use Net::SMTP;

#Opens the connection to the database and prepares the statement handles we will need to call on.
our $dbh = DBI->connect("DBI:mysql:Proxy_Health", , ) or die "Unable to connect to database $DBI::errstr";
our $statuschk = $dbh->prepare("SELECT * from status WHERE proxy=?");
our $statusupd = $dbh->prepare("UPDATE status SET status=? , count=? where proxy=?");
our $logsth=$dbh->prepare("INSERT INTO chklog(proxy,delay) VALUES(?,?)");

#pulls the list of proxies from the database and maps them to a hash
%proxies= map {$_ => 0 } @{ $dbh->selectcol_arrayref("Select proxy from status" )};

#generates a worker thread for each proxy to test, keyed by proxy so each result is matched back to the right proxy on join
foreach (keys %proxies){
$thrs{$_}= threads->create(\&Test, $_);
}
#performs blocking joins for the threads, and returns the result of each test and inserts them into the chklog table
foreach (keys %proxies){
$proxies{$_}= $thrs{$_}->join;
$proxy_human = $_ ;
$proxy_human=~s/http:\/\///;
$proxy_human=~s/:80//;
$logsth->execute($proxy_human, $proxies{$_});
}

#Takes the results, and compares the current status of the proxy to the last recorded status of the proxy. If the status has changed, it updates the status table and sends an alert. If the status has remained the same but is in a negative state, it increments a counter. Every 12 checks that return that negative result will generate a new alert.
foreach (keys %proxies){
my $scount = 0;
if ($proxies{$_}>= 120){ $status = 'DOWN';}
elsif ($proxies{$_}>= 90){ $status = 'CRITICAL';}
elsif ($proxies{$_}>= 60){ $status = 'MAJOR';}
elsif ($proxies{$_}>= 40){ $status = 'MINOR';}
elsif ($proxies{$_}>= 20){ $status = 'SLOW';}
else{$status = 'GOOD';}
$statuschk->execute($_);
my @statusline = $statuschk->fetchrow_array;

if ($status eq $statusline[1]){
if ($status eq 'GOOD'){next;}
elsif ($statusline[2]==11){
#12th straight check with this bad status: reset the counter so a fresh alert goes out below
$scount = 1;
}
else{
$scount= $statusline[2] +1;
}
if ($scount==1){
&Alert($_, $status);
print "ALERT $_ !\n";
}
$statusupd->execute($status,$scount,$_);
}
else{
if ($status eq'GOOD'){$scount=0;}
else{$scount=1;}
$statusupd->execute($status,$scount,$_);
&Alert($_, $status);
print "ALERT $_ !\n";
}
}


 #This function is what the worker threads run to test their given proxy.
sub Test{
#pulls the proxy from the passed parameters, sets the target as maps.google.com because that site is set to 'private' meaning the proxy will not cache it. It then retrieves the hostname of the local machine and the login credentials, so that it can properly negotiate NTLM authentication with the proxy server
my $proxy=$_[0];
my $url="http://maps.google.com";
our $workstation = `hostname` ;
chomp $workstation;
my $user=;
my $my_pass = ;

#instantiates the LWP user agent, sets the proxy, and sets the timeout to 120 seconds, because this is the timeout used on our ISA installs
my $ua =  new LWP::UserAgent(keep_alive=>1);
$ua->proxy('http', $proxy);
$ua->timeout(120);

#Creates the first request for the target website, starts the counter running and then fires off the request
my $req = HTTP::Request->new(GET => $url);
my $start = gettimeofday();
my $res = $ua->request($req);


#Sets up the data about the client to send the NTLM Authentication Negotiation Message
$client = new_client Authen::NTLM::HTTP(lm_hash($my_pass), nt_hash($my_pass),Authen::NTLM::HTTP::NTLMSSP_HTTP_PROXY, $user, , , $workstation, );

$flags = Authen::NTLM::NTLMSSP_NEGOTIATE_ALWAYS_SIGN | Authen::NTLM::NTLMSSP_NEGOTIATE_OEM_DOMAIN_SUPPLIED | Authen::NTLM::NTLMSSP_NEGOTIATE_OEM_WORKSTATION_SUPPLIED | Authen::NTLM::NTLMSSP_NEGOTIATE_NTLM | Authen::NTLM::NTLMSSP_NEGOTIATE_OEM ;
$negotiate_msg = $client->http_negotiate($flags);

#Takes the negotiation message and sets it as a header in the request and resends the request
$negotiate_msg = "Proxy-" . $negotiate_msg ;
@pa = split(/:/,$negotiate_msg);
$req->header($pa[0] => $pa[1]);
$res = $ua->request($req);

#Strips the NTLM challenge message from the response header and parses it
my $challenge_msg = "Proxy-Authenticate: " . $res->header("Proxy-Authenticate");

($domain, $flags, $nonce, $ctx_upper, $ctx_lower) = $client->http_parse_challenge($challenge_msg);

if ($domain or $ctx_upper or $ctx_lower){$placeholder=1;}

#Takes the nonce and flags from the challenge message, calculates the final authentication message, sets it as a header and sends it in the final request, receiving the originally requested page in response
$flags = Authen::NTLM::NTLMSSP_NEGOTIATE_ALWAYS_SIGN | Authen::NTLM::NTLMSSP_NEGOTIATE_NTLM | Authen::NTLM::NTLMSSP_REQUEST_TARGET;
$auth_msg = $client->http_auth($nonce, $flags);

@pa = split(/:/,$auth_msg);
$req->header($pa[0] => $pa[1]);
$res = $ua->request($req);

#Stops the timer, calculates the elapsed time rounding to the nearest hundredth of a second and returns that value to the main thread
my $end = gettimeofday();
my $delta = ($end - $start);
$delta= nearest(.01,$delta);
print "Finished getting $url through $proxy in $delta seconds! \n";
return $delta;

}

#This function actually handles the generation of the email alert for a status change. Depending on the status it picks from different wordings in the email subject and message.
sub Alert{
my $proxy = $_[0];
my $status=$_[1];


if ($status eq 'GOOD'){
$subject="Subject: $proxy has returned to Normal Operation";
$message = "The ProxyHealth Monitor has detected that proxy $proxy has returned to a 'GOOD' status and is retrieving pages within an acceptable timeframe.";
}
elsif ($status eq 'SLOW'){
$subject="Subject: $proxy is experiencing delay";
$message="The ProxyHealth Monitor has detected that the proxy $proxy is experiencing slowness in processing web requests. The system will continue to monitor and will send an update when the status changes.";
}
elsif($status eq 'MINOR'){
$subject="Subject: $proxy is experiencing a Performance Problem";
$message="The ProxyHealth Monitor has detected that the proxy $proxy is suffering noticeable slowness in processing web requests. Its current status is rated as 'MINOR'. The system will continue to monitor and will send an update when the status changes.";
}
elsif($status eq 'MAJOR'){
$subject="Subject: $proxy is experiencing a Major Performance Problem";
$message="The ProxyHealth Monitor has detected that the proxy $proxy is suffering serious slowness in processing web requests. Its current status is rated as 'MAJOR'. The system will continue to monitor and will send an update when the status changes.";
}
elsif($status eq 'CRITICAL'){
$subject="Subject: $proxy is experiencing a Critical Performance Problem";
$message="The ProxyHealth Monitor has detected that the proxy $proxy is facing a 'CRITICAL' performance decrease. Web traffic through this proxy will be extremely slow. The system will continue to monitor and will send an update when the status changes.";
}
elsif($status eq 'DOWN'){
$subject="Subject: $proxy is DOWN!";
$message="The ProxyHealth Monitor has detected that traffic through $proxy is exceeding the timeout limit of 2 minutes. This has led to the system declaring the proxy as being 'DOWN'. Web requests through this proxy will FAIL due to timeout. The system will continue to monitor and will send an update when the status changes.";
}



my $mailer= Net::SMTP->new(, Hello=> );
$mailer->mail();
$mailer->to();
$mailer->data();
#Sets the UK and US Security Team Distribution lists as the Recipients
$mailer->datasend('To: , ');
$mailer->datasend("\n");
$mailer->datasend('Return-Path:');
$mailer->datasend("\n");
#Sets a header that will tell the mail client that replies are to go to the Security Distribution lists and not back to the fake address used to send the alert.
$mailer->datasend('Reply-To:, ');
$mailer->datasend("\n");
$mailer->datasend('FROM:');
$mailer->datasend("\n");
#Sets the message importance to high
$mailer->datasend('Importance: High');
$mailer->datasend("\n");
$mailer->datasend($subject);
$mailer->datasend("\n\n");
$mailer->datasend($message);
$mailer->dataend();
$mailer->quit;



}


LWP and NTLM Proxy Authentication

During the course of my duties, I had a need to load test some proxy servers. To do this, we decided to use ISA logs as sources for test traffic. So the objective was seemingly simple: write a quick LWP script that parses an ISA log for URLs, then goes and tries to retrieve them through the target proxy. Oh, and of course, make it multi-threaded so we can send tons and tons of traffic at a time. Where it gets a little more complicated is this: the proxies in question all use NTLM authentication. I wasn't discouraged at first, but soon discovered that I could not find anyone who had managed to make LWP work with an NTLM proxy. Sure, I could have kludged it together with something like CNTLM, but that didn't feel right, and didn't provide for solid reusability.

Fortunately, I did find Yee Man Chan's Authen::NTLM module, which I was able to adapt to my purposes. It is important to note that this is Yee Man Chan's module, not the other one with the same namespace. You can tell which is which by the version numbers. Anyways, the script I wrote takes a proxy address, an ISA log file, and a number of threads as arguments, and proceeds to slam said proxy into oblivion. Here it is. Please feel free to leave comments and/or feedback.


-------------------------------------code-------------------------------------------------------------

#!/usr/bin/perl


use threads;
use Thread::Queue;
use LWP;
use LWP::UserAgent;
use HTTP::Request;
use Authen::NTLM(nt_hash, lm_hash);
use Authen::NTLM::HTTP;


#Checks to ensure the user has invoked the script correctly
unless(scalar(@ARGV) ==3){
print "Proper usage is proxytest.pl <logfile> <# of threads> <proxy>\n";
print "Proxy must be entered as http://<hostname>:<port> or http://<ip>:<port>\n";
exit;
}

#Begin instantiating our queues
our $users = new Thread::Queue;
our $urls = new Thread::Queue;

#Takes the passed parameters and sets them. This is the ISA log file being parsed for test URLs, the number of threads to use in testing, and the proxy being tested
my $logfile = $ARGV[0];
my $numthreads = $ARGV[1];
my $proxy = $ARGV[2];

#Collect the hostname for the local machine, this is important for the NTLM Negotiation that will be happening later
our $workstation = `hostname` ;
chomp $workstation;
our $placeholder = 0;

#Verifies that the proxy was entered in the correct format
unless ($proxy=~/^http:\/\/[A-Za-z0-9\.]+:\d+$/){
print "Proxy must be entered as http://<hostname>:<port> or http://<ip>:<port>\n";
exit;
}

#Enqueues the test accounts to use
$users->enqueue();

#Reads through the supplied log file, and collects all of the URLs and enqueues them for the worker threads to use
open ISALOG, "<$logfile" or die "Unable to open $logfile: $!";

while (<ISALOG>){
chomp;
if($_=~/\banonymous\b/i){next;}
if($_=~/\bhttp:\/\/\S+\b/i){$urls->enqueue($&);}
}

close ISALOG;
print "\n Done Reading Log! \n\n";

#Instantiates a number of worker threads based on the parameter passed when invoking the script
for($tcount=1; $tcount<=$numthreads;$tcount++){
$thrs[$tcount]= threads->create(\&printoff, $tcount );
}
#Sets blockings joins for each one of these asynchronous worker threads
for($tcount=1; $tcount<=$numthreads;$tcount++){
$thrs[$tcount]->join;
}

#foreach(@thrs){$_->join;}


#The meat and potatoes
sub printoff{
#Dequeues a URL and username to use. It then re-enqueues the username, sticking it back at the end of the queue to be used over again
my $tid = $_[0];
my $url = $urls->dequeue_nb;
my $user = $users->dequeue;
$users->enqueue($user);

#While it has a valid URL, it will perform the below tests
while ($url){


#Password is set here. This password is static for all of the used test accounts
my $my_pass = ;

#Creates the LWP User Agent, tells it to use the supplied proxy, and sends the initial HTTP GET request for the supplied URL and takes in a response
my $ua =  new LWP::UserAgent(keep_alive=>1);
$ua->proxy('http', $proxy);
$ua->timeout(30);
my $req = HTTP::Request->new(GET => $url);
my $res = $ua->request($req);

#Once the initial request has been sent out, the proxy will send back an NTLM negotiate message
#We set up the NTLM authentication client response by passing ntlm hashes of the username, password, domain, and workstation hostname
$client = new_client Authen::NTLM::HTTP(lm_hash($my_pass), nt_hash($my_pass),Authen::NTLM::HTTP::NTLMSSP_HTTP_PROXY, $user, , , $workstation, );
#Here we set the NTLM protocol flags that we wish to be accepted
$flags = Authen::NTLM::NTLMSSP_NEGOTIATE_ALWAYS_SIGN | Authen::NTLM::NTLMSSP_NEGOTIATE_OEM_DOMAIN_SUPPLIED | Authen::NTLM::NTLMSSP_NEGOTIATE_OEM_WORKSTATION_SUPPLIED | Authen::NTLM::NTLMSSP_NEGOTIATE_NTLM | Authen::NTLM::NTLMSSP_NEGOTIATE_OEM ;

#We then take the client data, and the flags and jam them into a header, and add it back to the original request, and resend it.
$negotiate_msg = $client->http_negotiate($flags);

$negotiate_msg = "Proxy-" . $negotiate_msg ;
@pa = split(/:/,$negotiate_msg);

$req->header($pa[0] => $pa[1]);
#The proxy then sends back an NTLM challenge response, which we strip from the message and parse using the NTLM methods provided by the module
$res = $ua->request($req);

my $challenge_msg = "Proxy-Authenticate: " . $res->header("Proxy-Authenticate");

($domain, $flags, $nonce, $ctx_upper, $ctx_lower) = $client->http_parse_challenge($challenge_msg);
#Kludged-together fix; for some reason it generates errors if you do not do this. Possibly an oddity in the way we are using the NTLM module
if ($domain or $ctx_upper or $ctx_lower){$placeholder=1;}

#We set the next round of flags, take the nonce which we gained from parsing the challenge message, and send back a final authentication message. Once the proxy receives this, it processes the original GET request
$flags = Authen::NTLM::NTLMSSP_NEGOTIATE_ALWAYS_SIGN | Authen::NTLM::NTLMSSP_NEGOTIATE_NTLM | Authen::NTLM::NTLMSSP_REQUEST_TARGET;
$auth_msg = $client->http_auth($nonce, $flags);

@pa = split(/:/,$auth_msg);
$req->header($pa[0] => $pa[1]);
$res = $ua->request($req);
print "Finished getting $url \n";
#my $bytes = length $res->content;
#print " $url was $bytes bytes \n";
#print $res->code;
#print "\n\n" . $res->content;
#We then dequeue the next URL and continue on until there are no more URLs. The worker thread will then attempt to join. when all worker threads have joined, the code exits.
$url = $urls->dequeue_nb;


}

}






Thursday, 4 March 2010

Defrauding the fantasy economy

There is an interesting story developing about World of Warcraft account fraud. The original articles I found are over at Sunbelt Software and El Reg. Apparently, the latest round of WoW account hacks is using malware that intercepts the multi-factor authentication credentials, transmits them to a MitM server, and replays a failed login to the user. Meanwhile, the MitM box replays the login data to the WoW authentication servers and promptly empties the victim's characters of their hard-farmed gold. I would imagine that by the time the user successfully logged in, their characters would all be broke.

I feel that there are a couple of important take-aways from this story. The first is one that plenty of other people have been saying for a long time now: the fraudsters are getting better. They are smart, they are dedicated, and they are engaged in an arms race with the security industry. It raises serious concerns over our ability to stay on top of this arms race. Along those lines is the second point. This is nothing new either, but the end users are the weakest link. Yes, from a technical perspective the vector is a Trojan. Realistically, though, it's a social engineering attack: the initial con where you get the user to download and install the new "add-on". Both sides of this attack vector are hard to stay on top of. Firstly, malware authors are very good at creating variants to escape AV definitions, so AV alone cannot be relied upon. Secondly, how do you make sure users don't fall for these traps? Many would-be pundits will say it is the fault of "stupid users". In some cases this may be accurate, but let's be honest here: fraudsters have gotten VERY good at social engineering.

This is probably the biggest lesson of the over-hyped Aurora incident. Social engineering can hit anyone. Users at Google may not have had any reason to doubt the authenticity of the emails they received. They had no way to sense the malignant payload carried in those innocent-looking PDFs. Sure, intellectually we all know PDFs can have bad things in them. We also knew as kids that some people put razor blades in candy apples. I don't think most people tear apart their fruit before they eat it, especially if it's someone they trust handing it to them. So how were these WoW users to know that this add-on was no good? There are known 'safe' repositories of add-ons, you will undoubtedly say. We have seen how much of a fallacy even that can be. There is no reliable system of trust on the internet. It's a best-guess effort. You might check around the forums to see if other people say anything about the add-on. You might do a Google search for the add-on and see what comes up, or even ask people in the game about it. If you're particularly in the know, you might even check the sites hosting it against something along the lines of McAfee's SiteAdvisor. What do you do if all of these come up dry? Chances are, you're going to take a chance and install it. Conventional wisdom says that if you notice anything strange during the install, you panic, remove it, and run an anti-virus. Malware authors are not so sloppy as to make it obvious anymore, though. So now you have installed software that, as far as you can tell, is exactly what it says it is. By the time you might realize you were wrong, it's already too late.

So the question becomes: how do we fight this attack vector? There is no silver bullet answer. It is still just a best-effort game. So we rely on the things that we know help protect us. We use only known trusted sources. We do some research on software before we install it. We might check SiteAdvisor, or upload the binary to VirusTotal. We make sure our anti-virus is up to date and our boxes are patched. Every once in a while, we may still get nailed.
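On the "do some research first" front, even hashing a download costs nothing: services like VirusTotal let you look up a file by its hash without uploading the whole thing. A small sketch using Perl's core Digest::SHA module (the choice of hashing this script itself is just an example):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Digest::SHA;   # in core since Perl 5.9.3

# Compute a SHA-256 fingerprint of a file so it can be checked
# against a scanning service before you actually run the thing.
sub file_sha256 {
    my ($path) = @_;
    my $sha = Digest::SHA->new(256);
    $sha->addfile($path);
    return $sha->hexdigest;
}

my $file = shift @ARGV || $0;   # hash this script itself if no argument given
print "SHA-256 of $file: ", file_sha256($file), "\n";
```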

There is a third take-away point in all of this that I'd like to discuss briefly. This is perhaps the most bizarre piece of it. We have seen this sort of thing all before. Spend a month reading the security blogs and news sites out there, and you'll be flooded with plenty of stories about targeted malware and banking trojans. You'll see reports of botnets that stole millions of logins. What is truly strange about this particular case, at least to me, is the target. Remember that we are talking about World of Warcraft here, a video game. We are seeing the same amount of effort put into stealing video game logins as is put in by the people who break into bank accounts. People are breaking into a virtual world, committing fantasy identity theft, and using it to empty imaginary bank accounts of money that doesn't exist and cannot be used outside of the confines of this imaginary world. And yet, they take this imaginary money, and they turn it into real money. It is all well rooted in the theory of supply and demand, I suppose. I, however, cannot shake the sensation that this is a truly strange situation we find ourselves in. It's rather like if we were playing a game of Monopoly, and when you weren't looking I stole some of your play money. I then turned around and sold that play money to another player for $100. Is it just me, or does anyone else find this to be insane?

Wednesday, 3 March 2010

Lessons Learned: Self-referencing local file includes...

So I had a small incident at work today. I found a Perl CGI script that had a local file include/OS command injection vulnerability in it. After confirming this vulnerability, I decided to try to pull the source code for the vulnerable script, and the system choked. When I went to try something else, I was greeted by an ugly Apache 500 server error. At first I just frowned and went back to a command string I had already validated worked. 500 error again. Apparently something in the mix (I am unsure if it was Apache itself, mod_perl, or a condition created at the OS level) did not like the script trying to read itself and return its own source back out through Apache. I suppose you could class this as an inadvertent denial of service attack.
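For the curious, the bug class looks something like this. To be clear, this is a hypothetical sketch, not the actual script from the incident: a CGI-style handler that interpolates an unvalidated query parameter straight into a shell command, which yields both OS command injection and arbitrary local file reads, including the self-referencing read described above.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch of the vulnerability class, NOT the real script.
sub handle_request {
    my ($query_string) = @_;
    my ($page) = $query_string =~ /page=([^&]*)/;
    $page = 'index.txt' unless defined $page and length $page;
    # VULNERABLE: $page reaches the shell unfiltered, so
    #   page=index.txt;id    runs `id` (command injection), and
    #   page=script.cgi      is the self-referencing source read
    return `cat $page 2>/dev/null`;
}

# demonstrate the injection with a harmless command; prints INJECTED
print handle_request('page=/dev/null;echo INJECTED');
```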