Category Archives: Security

Background Noise on the Internet

Not too long ago there was a reasonable amount of press ( in the IT world anyhoo, meatspace pretty much ignored it ) regarding attacks against the SSH protocol. The “SSHPsychos” group has been responsible for a large number of coordinated brute-force attempts against well-known usernames with a variety of common passwords. These aren’t long-term attempts against a particular target – rather a scatter-gun approach at anything that’s running an SSH daemon on port 22, using a short-ish list of dumb passwords.

SSH "hack" from the Matrix

To be honest, I’d known about this sort of background noise for a long time – and it came as no great surprise to me. It’s been going on as long as I’ve had an SSH server running on a public IP, though to be fair the volume _has_ increased. It has been a great example for students when I’ve been teaching Linux security – pointing out the reasons for carrying out the basics of securing SSH:

  1. No remote root login
  2. Complex passwords
  3. Specific IP firewall rules if/where possible

And also some of the more complicated ones:

  1. Fail2Ban
  2. Chroot Jails
  3. Multi-Factor Authentication
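Several of those basics translate directly into /etc/ssh/sshd_config – a minimal sketch ( the option names are standard OpenSSH; the values, and the “admin” account, are my illustrative choices ):

```
# /etc/ssh/sshd_config -- a minimal hardening sketch, my suggested values
PermitRootLogin no           # basic no. 1 -- no remote root login
MaxAuthTries 3               # limit password guesses per connection
PasswordAuthentication yes   # pair this with complex passwords -- or set
                             # it to "no" and use key-based logins instead
AllowUsers admin             # "admin" is a placeholder account name
```

Remember to restart the SSH daemon after editing for the changes to take effect.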

Even now, logging into my webserver ( “www.thinking-security.co.uk” ) via SSH on port 22, there have been approximately 2000 illegitimate login attempts over the last 20 hours. Quite often, when I re-connect after a weekend or more than a few days, this number is in the tens or even hundreds of thousands. I’ll be honest, it doesn’t particularly bother me – it is so much rattling of windows and testing of locks – there are much easier fish to fry on the interwebs than that particular machine.

It did cause me to ask two particular questions though:

1) Where are all the attacks coming from ?

2) What usernames and passwords are they trying ?

Turns out that question 1 is easy, and question 2 is half easy …

On any Linux server, the connections made against SSH are logged. On Red Hat-family systems these go into /var/log/secure ( on Debian-family systems, /var/log/auth.log ), and here is a prime example:

May 15 14:35:55 ts-one sshd[23429]: Invalid user bankid from 37.59.230.138
May 15 14:35:55 ts-one sshd[23429]: input_userauth_request: invalid user bankid [preauth]
May 15 14:35:55 ts-one sshd[23429]: pam_unix(sshd:auth): check pass; user unknown
May 15 14:35:55 ts-one sshd[23429]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=37.59.230.138
May 15 14:35:57 ts-one sshd[23429]: Failed password for invalid user bankid from 37.59.230.138 port 59470 ssh2
May 15 14:35:57 ts-one sshd[23429]: Received disconnect from 37.59.230.138: 11: Bye Bye [preauth]

This is one connection attempt – the source ( from ) is 37.59.230.138, and it has presented the invalid user “bankid”. Authentication has actually failed at this point, but SSH won’t let the attacker know that – it will still allow them three password attempts before terminating the connection. This inability to tell whether it is the username or the password that has failed is actually quite important – if you could tell that an account was valid, you could stop wasting time and effort on the ones which are not. The non-specific failure message – “either the username or the password is wrong” – leaves the whole search space open, requiring a far greater number of attempts to find a valid username _and_ password combination.
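As a taste of the extraction work to come later in this series, here is a minimal Python sketch that pulls the attempted username and source IP out of “Invalid user” lines like the ones above ( the regular expression and function name are mine, not from any particular tool ):

```python
import re
from collections import Counter

# Match "Invalid user <name> from <ip>" as seen in the log excerpt above.
INVALID_USER = re.compile(r"Invalid user (\S+) from (\d+\.\d+\.\d+\.\d+)")

def parse_attempts(lines):
    """Count (username, source_ip) pairs seen in sshd log lines."""
    attempts = Counter()
    for line in lines:
        m = INVALID_USER.search(line)
        if m:
            attempts[(m.group(1), m.group(2))] += 1
    return attempts

log = [
    "May 15 14:35:55 ts-one sshd[23429]: Invalid user bankid from 37.59.230.138",
    "May 15 14:35:55 ts-one sshd[23429]: pam_unix(sshd:auth): check pass; user unknown",
]
print(parse_attempts(log))  # Counter({('bankid', '37.59.230.138'): 1})
```

Feed it the whole of /var/log/secure and the Counter gives you the attack volume per username / source pair.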

37.59.230.138 – great IP address – I’m sure that there are some savants out there who can look at that and tell me where it is from – but I assure you, I am _not_ one of them. I have to look it up – and even then the sources occasionally disagree ( not that it actually really matters as I’m not sending a drone over to wreak revenge ).  For the purposes of the remainder of this process I’ll be using the MaxMind database, and, for the sake of legal compliance:

This product includes GeoLite2 data created by MaxMind, available from http://www.maxmind.com.
Creative Commons Licence
This data is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

There is a direct interface to the data on the front page of the MaxMind website which allows you to query a single IP address; for the case above, this tells us:

IP Address = 37.59.230.138
Country Code = FR
Location = France, Europe
Coordinates = 48.86, 2.35
ISP = OVH SAS
Organization = OVH SAS

So it is a French IP address hosted with the OVH Company.

Our friends in France …

As a hosting company, they aren’t directly responsible for the attack – it is just being launched from a machine that has been allocated an IP address within their scope. Having said that, a quick Google search about them suggests that this is far from the first time they have been used as a stepping stone to other things …

The IP address lookup also gave us the lat / long ( a long-shot estimate ! ) of the address. We can plug these into Google Maps to have a look-see at the rough area of operation _of the IP address_. This isn’t, most likely, where our perpetrator is sitting – it’s more likely the registered head office of the hosting company …
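Turning those coordinates into a map view can be scripted too – a tiny sketch ( the helper name is mine; the `?q=lat,lng` query form is a long-standing Google Maps convention ):

```python
# Build a Google Maps URL from the latitude / longitude returned by the
# GeoIP lookup above.
def maps_url(lat, lng):
    return f"https://www.google.com/maps?q={lat},{lng}"

print(maps_url(48.86, 2.35))  # https://www.google.com/maps?q=48.86,2.35
```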

Our location in Paris...

That’s great – one IP address down, 2000 to go …

In the next articles in this series, I’m going to extract all of the IP addresses & usernames from the logs ( across multiple servers ! ), and then plot these against a map to show both historic and real-time data … And then we’re going to move on to finding out what passwords are being attempted using a “honeypot” !


Windows XP – Looming End of Life (EOL) – What are the risks ?

As I’m fairly sure you will have gathered, Windows XP is going end of life very, very soon – on the 8th of April this year (2014), in fact. This is proving to be a bit of an issue for more than one organisation. Many people have come to love Windows XP, and the old mantra of “if it ain’t broke, don’t fix it” has, until now, meant that there was little reason to move to Windows 7 – and even less to the poorly received Windows 8(.1). This means that, right now, across the world there are IT departments with a little bit of a problem: how to upgrade _all_ the machines in the organisation to Windows 7 as soon as is humanly possible. I have to say, though, that of the multiple organisations I know of, not a single one is going to have finished their upgrade by the 8th …

So, realistically, where does this leave them in terms of risk ? I was actually asked this by a customer this morning – “please quantify our risk”. Well, I’m a big fan of statistics – not a great mathematician, but the concept definitely amuses me. So, thought I, what is the probability of there being a certain type of vulnerability in a given month ? A quick Google didn’t throw up many sites with statistical data for XP patches, and I didn’t want to go through all of the Microsoft advisories myself, so I’ve borrowed the data from the excellent guys over at Secunia1. The following graph shows the vulnerability severities ( 356 in total over the last 10 years )2.

Criticality

This isn’t entirely helpful, as these don’t map directly to the Microsoft classifications ( Critical, Important, Moderate, Low ). Let’s make one or two assumptions then: we’ll map the top four Secunia categories to the equivalent Microsoft ones, and we’ll assume that the vulnerabilities were evenly distributed over the 10-year period ( 120 months ). So, 1% of the vulnerabilities is equivalent to 3.5 vulnerabilities ( roughly – it’s 3.56, and I know that I should round up, but this is all assumption anyhoo ! ). Each of the segments above then equates as follows:

  • Critical ( Extremely) – 4% or 14 vulnerabilities
  • Important ( Highly) – 38% or 133 vulnerabilities
  • Moderate (Moderately) – 24% or 84 vulnerabilities
  • Low (Less) – 28% or 98 vulnerabilities

If we continue with our assumption that these have been evenly distributed over the lifetime of XP ( 10 years / 120 months ), the percentage probability of a vulnerability of a given criticality occurring in a given month is ( total number of vulnerabilities / 120 ) * 100, which gives us the following:

  • Critical – Approx 10%
  • Important – Approx 110% ( more than one expected every month – over 100%, this is really an expected count rather than a probability ! )
  • Moderate – Approx 70%
  • Low – Approx 80%

Well, that’s not very good news – it would suggest that each month moving forward would increase the number of vulnerabilities by these amounts, so at the end of 1 year you’d expect to see, on average:

  • Critical – 1.4 vulnerabilities
  • Important – 13.3 vulnerabilities
  • Moderate – 8.4 vulnerabilities
  • Low – 9.8 vulnerabilities
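Those figures are easy to sanity-check – a quick sketch of the arithmetic, using the vulnerability counts from the list further up ( the unrounded results differ slightly from my rounded “approx” figures ):

```python
# Expected vulnerabilities per month (expressed as a percentage) and per
# year, assuming an even distribution over XP's 120-month lifetime.
counts = {"Critical": 14, "Important": 133, "Moderate": 84, "Low": 98}
MONTHS = 120

for severity, total in counts.items():
    per_month_pct = total / MONTHS * 100  # e.g. Important: ~110%
    per_year = total / MONTHS * 12        # e.g. Important: ~13.3
    print(f"{severity}: {per_month_pct:.0f}% per month, {per_year:.1f} per year")
```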

Ok, that’s fine – but what we are actually interested in is the residual risk, isn’t it ? What are we left with after we have considered our controls and countermeasures – what mitigation is in place ? Well, the following information gives us a bit of a better idea:

location

Using the base assumption of even distribution we get the following probabilities of it occurring in a single month:

  • Remotely exploitable – 180% ( again an expected count – nearly two remote exploits every month ! )
  • Local Network – 50%
  • Local System – 60%

Or as above, that’s:

  • Remotely exploitable – 1.8 a month, or 21.7 a year
  • Local Network – 0.5 a month, or 6 a year
  • Local System – 0.6 a month, or 7.2 a year

You can keep going if you want to, the following graph shows the actual impacts:

impact

I’m not going to do that here – I don’t really think that it has much value. The point is that there is a distinct bias towards the first vulnerability announced being an “Important, Remotely Exploitable” one.

So what ? Well, that’s interesting actually. Microsoft has given up patching XP, but that’s not the same thing as being left defenceless. Both anti-virus and firewall technology for XP will continue to be supported for some time, and if these countermeasures have been implemented, there is a good chance that any given vulnerability will be completely mitigated by them. The trouble is that until the vulnerabilities are actually announced, you won’t be able to tell how effective your controls will be – and you may need to do some fairly rapid reconfiguration of your firewalls and/or AV signatures to ensure that you are detecting and preventing those attacks.

Please don’t take that as permission to slack off on your upgrades or, even worse, to decide that you can simply accept the risk – the best course of action is to upgrade to a patched and supported OS. However, the above at least has a stab3 at quantifying the scale of the problem !

Just for the record, by the way, I have confirmed with Microsoft that there will be patches for XP released on Tuesday the 8th of April – these will be the last ever XP patches – but for those of you with a monthly patching policy, you won’t actually breach your policy until the following month …


1. Guys, if you read this, you should definitely bring back the free Vulnerability Alert mailings – but if you don’t, a free subscription in exchange for the plug would be welcome 😉

2. Ok, I realise that this is not 100% legitimate, there wouldn’t be an even distribution over the years, so this really is a generalisation. The distribution over the years actually looks like this …

distribution

If I had paid attention in University Mathematics lectures, I would remember how to do this more accurately, but I didn’t and I don’t…

3. You should look carefully at what your organisational risk appetite is, and also the full business impact of a vulnerability being exploited. Also, please remember that you may have obligations under other things (PCI/DSS for example) that you need to meet…

Raspberry Pi Toybox – The bits …

English: The Castel Sant’Angelo looms in the background from a bridge overlooking the Tiber River in Rome, Italy. (Photo credit: Wikipedia)

Well, it has all arrived ( Thank You Amazon ! ) and so here, without further ado, are the components:

I haven’t photographed them properly yet – [ I haven’t assembled them properly yet ! ] – but this is a rough look:*

DSC_0606_edited-1

The only things that I’m using other than the above are:

  • A Laptop1 with a SD Card Reader ( LINDY 46-in-1 PCMCIA Card Reader )
  • A keyboard and mouse (USB)
  • A monitor with an HDMI input
  • Elgato Game Capture HD ( See here for more information on this )
  • An 8GB Thinking Security USB Memory Stick
  • A 16GB SD card of one sort or another that I had lying around …

I’m planning on using the Fedora 17 Remix – at least to start with – it shouldn’t be a problem to obtain / compile pretty much anything to run on it ( famous last words ! ). So it seems like a reasonable way forward.

I’ve been a long-time RedHat / Fedora fan – it was my first Linux back in the day ( when RedHat was still free … I don’t recall exactly which, but probably RedHat 2.0 ) – I had it installed on my Pentium at University and used it, with a 14400 modem, to avoid the Edinburgh weather instead of having to go to the AI and CS labs for assignments … Sigh … The good old days …

Getting it onto the card is pretty straightforward: once you have your uncompressed image, use Win32ImageWriter to write it to the card.2 This worked just fine, and it booted up beautifully.
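As footnote 2 suggests, budget some time for the write – the back-of-envelope arithmetic, as a sketch ( the 1 MB/s and 3 GB figures are the ones I actually saw ):

```python
# Back-of-envelope: how long a raw image write takes at a given speed.
def write_minutes(image_gb, mb_per_sec):
    return image_gb * 1024 / mb_per_sec / 60

print(f"{write_minutes(3, 1.0):.0f} minutes")  # ~51 minutes
```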

For screenshots & video of the Pi, I’m using the Elgato Game Capture HD ( see above ) – this works brilliantly, it has a USB connection to my laptop, an HDMI from the Pi and an HDMI to the monitor. It introduces no lag on the monitor side, but quite neatly captures – in full HD – the image on the way through. It’s a very neat way of getting screenshots off the Pi, which otherwise would prove a little troublesome. I’ve attached the video of the first boot ( and setup configuration ) below – more information and details will follow in due course !


*. The astute and keen eyed amongst you may have noticed that in this picture the two USB WiFi devices aren’t showing – that’s because they are currently in my Ubuntu PenTest laptop running aircrack-ng as a proof of concept for this project …

1. We’ve had some laptop issues at home, my other half’s MacBook Pro croaked – and seeing as I have an issued laptop from my current client, and she doesn’t – she’s taken my MacBook Pro with her SSD. I’ve spent the last few weeks turning an old Lenovo T61 into a usable computer again. First off – out with the old spinny platters and in with an SSD for the primary HD. Doubled the RAM again ( past the quoted manufacturer maximum ) to 8GB and got rid of the CD-RW drive ( never used it anyway ) and replaced it with a 750GB hybrid disk to hold my VM images, oh and, missing my screen real estate from my 17″ MBP I also acquired a portable Lenovo second screen – I really don’t know why I’ve not seen these around more – they are brilliant ! I’m not sure that I couldn’t have bought another laptop for the cost of all the upgrades, but – it was fun to do, and there is something quite stylish about the older Lenovos – that IBM feel still I think !3

2. Be prepared, this is a definite “cup of tea” part of the process. In my case unload and load the dish washer, make and drink cup of tea, have chat with Brother-in-Law on phone, get high score on Temple Run 2 and, finally, just to be sure, go and get the kids from school. But hey, it finished ! ( In all seriousness, I was getting about 1MB per second for 3GB – that’s about 50 minutes )

3. Slight update on the laptop front, picked up a sale Acer Aspire i3, 6GB RAM, 500GB HD which is currently running Ubuntu. Neat little bit of kit … Dirt cheap too !

Tagged , , , , , ,

Raspberry Pi Toybox

Roman depiction of the Tiber as a river-god (Tiberinus) with cornucopia at the Campidoglio, Rome. (Photo credit: Wikipedia)

I must admit a certain love for the Raspberry Pi – we have two in the house just now. One was doing service as an XBMC box on the TV ( something it was OK at, but not great – it has now been replaced by a PS3, which just works better, and I can play BioShock1 on it too ), and a second was left by Santa to take up a role as a Python training device for the smaller members of the household ( although, having discovered yesterday Raspberry Pi Assembly Language Beginners: Hands On Guide: 1 and RISC OS for Pi2, they may well find themselves learning Assembly instead ). With the retirement of the first Pi from media player duties, though, I’ve started to contemplate what it might become – it doesn’t pack a huge amount of punch, but for all that, it’s small, light and exceedingly power efficient – so much so that it is feasible to run it from batteries.

A few years ago I went through a similar Mini-ITX phase, building a small footprint machine which ran very serviceably ( and the components still do I believe –  they were carved up for an Arcade project which is still uncompleted [ although the controller with two good arcade joysticks and some good buttons to thump was running very nicely over USB with MAME and Gauntlet !  Anyhoo, I digress more than usual ] ) at the time I was frequenting the rather good Mini-ITX.com and enjoying their project pages ( sadly no longer updated much – they used to be fun … )  – they had a link to “The Janus Project” – a self-contained wireless security test rig in a Pelican case.

Now, I always liked this idea – I didn’t have the money or the time, but I thought it was cool. Well, time and technology wait for no man, and since then we have had much in the way of efficiency and miniaturisation, not to mention some much more refined ways of cracking WiFi. To this end, I intend to build a mini-Janus, a son of Janus – “The Tiberinus3 Project”, if you will.

Given that time has moved on so much though, I find, that I have an opportunity to work on a smaller scale, and to be portable … So to that end, I have started to assemble the parts – to wit :

  • 1 x Raspberry Pi, OS & SD Card
  • 1 x Power Source ( 12000mAH battery pack )
  • 1 x GPS dohicky
  • 2 x WiFi dohickys
  • 1 x 3G Modem
  • 1 x Waterproof Case
  • 1 x USB Hub

The idea is to contain all of the above in a box which will be self-contained for a period ( 12000mAh – not sure, but I reckon in excess of 8 hours runtime, although that will depend on the peripherals … ) and fairly autonomous in its collection of data – e.g. while it is on, it will constantly seek out WiFi sources. The device can then be left on a client site for a period to perform an unobtrusive wireless audit as part of a PenTest. There are currently two WiFi dongles on the list – simply one to scan and one to manage – although, depending on power consumption, it may be possible to run more than two through a powered USB hub, or to run two in scanning mode and leave management out of the issue, or possibly even use the 3G modem over USB to provide management and use both to scan … All experimental theory at the moment !
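The 8-hour guess can be roughed out – the current draws below are my assumptions rather than measured figures ( a Model B is commonly quoted at around 500-700mA, and each USB peripheral adds its own load ):

```python
# A rough runtime estimate for the 12000mAh pack.
def runtime_hours(capacity_mah, draw_ma):
    return capacity_mah / draw_ma

draw = 700 + 2 * 250 + 100  # Pi + two WiFi dongles + GPS (all assumed figures)
print(f"{runtime_hours(12000, draw):.1f} hours")  # ~9.2 hours
```

Comfortably over 8 hours on those assumptions – but the 3G modem, and any conversion losses in the battery pack, will eat into that.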

Obviously, you should try this at home – what’s the point in writing it up otherwise ? – but remember the legal requirements ( in the UK4 at least, the Computer Misuse Act ): you shouldn’t make use of anyone’s computer systems without their prior authorisation.

Parts are on order, and I’ll update as things assemble ! ( For the record though, I’ve been looking at doing some of the development work on the QEMU Pi Emulator … Not sure how that’s going to pan out either … )


1.  A game I _really_ enjoy, although, like most games – I suck. I’ve also been infuriated by the constant delays surrounding BioShock Infinite which has switched from a birthday present to a Christmas present and back again since it was supposed to be released …

2. My Junior School had just switched to Archimedes computers when I left; the Senior school had RM IBM clones. I actually never really got to play with them properly, although they always held a certain fascination – I’ve eyed up various 2nd-hand bits of kit in the Vintage section of eBay, and have even bid, but never to a winning outcome – the port to the Pi has got me all of a flutter !

3. “One tradition states that he came from Thessaly and that he was welcomed by Camese in Latium, where they shared a kingdom. They married and had several children, among which the river god Tiberinus (after whom the river Tiber is named).” – Encyclopaedia Mythica – I would so love to claim I knew that, but it was Google.

4. Other countries are available, and I could even recommend one or two as being nice places to go. However, make sure that what you are doing is acceptable under your local jurisdiction – fines, prison or worse awaits those who overstep the mark.


Security Mindset

User big brother 1984 (Photo credit: Wikipedia)

I’m a big fan of Derren Brown – perhaps not so much of his actual performance stuff, but rather his later work on psychology and human manipulation. I’ve not seen all of his programmes, although I plan on going looking for some, since I found out they existed through the Wikipedia link above, but I did just finish watching the “Fear & Faith” pair that I had recorded a few weeks back from Channel 4 in the UK. There was one particular point he made that was of interest to me:

People behave better when they have the impression that they are being watched.

Now, after an earlier discussion about AUPs on Forensic Focus, where I wrote a draft, simple AUP, I realise that this is what I left out: there is neither mention of consequences nor mention of monitoring – an oversight which, I acknowledge, leaves the policy toothless. In my defence, that wasn’t the point I was trying to make at the time !

The research study by Max Ernest-Jones, Daniel Nettle and Melissa Bateson at Newcastle University on “Effects of eye images on everyday cooperative behaviour: a field experiment” further builds on previous research by Terrence Burnham and Brian Hare ( here ) showing that even computer generated “eyes” watching will influence behaviour.

I recall, from my first ( and last ! ) permanent role, a Government-issued poster, hanging in what very much resembled Chernobyl ( unsurprisingly really, as it was Hangar 4 at Harwell, home of GLEEP ). We kept our backup tapes in a room which used to house a Cray – I’d be lying if I said I knew which one, it was long gone by the time I arrived, but I do know that it was one with integral seating … – and it had all of the security that you’d have expected of a data centre on a nuclear site – man-trap doors, security office, etc. – and some of these posters. I wish now that I’d “redistributed” them before we left the building and it was pulled down – but I was young and foolish, and had no idea that I’d be writing this blog now … The one that sticks in my mind was rather creepy: hanging between the two doors of the man-trap as it was, bored people had messed with it, picking out the eyes with pins and giving the poster a very unnatural stare. I don’t know if I behaved any better for it – all I had to do was collect and drop off tapes, and as the room was cold, empty and unfriendly, I didn’t hang around long enough to misbehave. I’ve tried my best to find a copy of it online now, but with no success. I did get these though:

security_poster_1960 security_poster_1962

The first one ( “Don’t Brag” ) is from 1960 ( I’m told ), and the second from 1962 ( again, I’m told ).

Both are notable for their lack of eyes – as, oddly, are many, if not all, of the ones I could find that are currently in circulation.

CESG

I rather like these Welsh ones by Rebecca Lloyd – as she says herself, inspired by the very popular iPod adverts.

welsh1 welsh2 welsh3

Quite entertainingly, the most intimidating poster by far – and the one with the most eyes, with its massive reference to 1984 and a horrendous secret state – is this one from Transport for London. Nothing to do with InfoSec per se, but with the general CCTV surveillance of society.

TFL_CCTV

That’s the sort of thing that nightmares are made of ! On the other hand, if that was stuck before me on a bus, I might well not misbehave – which is a win on the part of the designer !

So there are two things that we should consider then – first off, my oversight on the AUP with regard to consequences and monitoring should be resolved, with the addition of something like:

We like to be sure that nothing untoward is happening on the machines which are our responsibility, so we do monitor them for the things that we have said we don't like. If, once you have signed this document to signify your understanding, you choose to break the agreement you've made, we will have to take disciplinary action; depending on the seriousness of the breach, this could include losing your job.

Secondly, as ongoing awareness of Information Security is a requirement of pretty much every set of best-practice guidelines ( and if it isn’t, it should be ! ), perhaps we should make sure that we make use of strategically placed posters with eyes in order to get our point across with the maximum uptake ? How about the following:

Poster1Poster2Poster3

I know that for two out of three, they aren’t exactly “watching” eyes, but there needs to be a line drawn on the amount one intimidates one’s employees !

I leave you with a seasonal poster – courtesy of the US Archives ( which are fabulous by the way – can we have a UK one of these, please ? ). You’ll need to view it full size to see what the “security” message is.

US Christmas InfoSec Poster

 [Actually, you know what, if people send me UK posters, I’ll make an online collection available to everyone myself … ]

Tagged , , , , , , ,

VirtualBox Install on Mac OS X

I thought that I’d give this a go – this is a very short run-through of an install of VirtualBox on Mac OS X. Comments on production issues are very welcome – I’d like to improve these to the point of them being usable !

Thanks !


NWrap Version 0.05 – NMap Wrapper with OPRP Database Dump

sshnuke hack in Matrix II 03 (Photo credit: guccio@文房具社)

Just a quick post. A few years ago ( in 2004 ! ), I wrote a Perl wrapper for NMap, on behalf of ISECOM, that incorporates the data from the Open Protocol Resource Database (OPRP). It was featured in Professional Pen Testing for Web Applications by Andres Andreu, which was nice. However, it hasn’t been updated since then, and the ISECOM page has some issues with the OPRP download. I just thought I would (a) check that it still works and (b) bring it up to date if it doesn’t … First, a quick example of it running ( first without, and then with, NWrap ):


[root@perl ~]# nmap localhost
Starting Nmap 5.50 ( http://nmap.org ) at 2012-07-17 17:50 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000018s latency).
Not shown: 999 closed ports
PORT STATE SERVICE
22/tcp open ssh
Nmap done: 1 IP address (1 host up) scanned in 0.11 seconds
[root@perl ~]# ./nwrap.pl localhost
#########################################
# nwrap.pl - Nmap and OPRP combined !   #
# (C) Simon Biles TS Ltd. '04           #
# http://www.isecom.org                 #
# http://www.thinking-security.co.uk    #
#########################################

Starting Nmap 5.50 ( http://nmap.org ) at 2012-07-17 17:50 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000018s latency).
Not shown: 999 closed ports
PORT STATE SERVICE
22/tcp : open 
 - Adore worm 
 - SSH 
 - Shaft DDoS
Nmap done: 1 IP address (1 host up) scanned in 0.10 seconds
[root@perl ~]#

Now for the code:

#! /usr/bin/perl
# Nmap wrapper for OPRP data.
# (C) Simon Biles
# http://www.isecom.org
# Version 0.05
$version = "0.05";
# History - 0.01 Working version.
# 0.02 Changed use of ``s for output to opening a pipe.
# 0.03 Use the OPRP database dump directly, not through
# pre-parsed file
# 0.04 Included output switches and file writing stuff
# 0.05 Updated for CSO to TS name change and checked working (2012)
# OPRP Dump file has changed to HTML, converted to CSV and
# rewrote parser to work with CSV.
# Read in from the OPRP data file created earlier
# and fill in an internal table.
# Give us a little credit :) and show that it is running ...
print "\n#########################################\n";
print "# nwrap.pl - Nmap and OPRP combined ! #\n"; 
print "# (C) Simon Biles TS Ltd. '04 #\n";
print "# http://www.isecom.org #\n";
print "# http://www.thinking-security.co.uk #\n";
print "#########################################\n\n";
%services=();
open (DATA, "< oprp_services_dump.csv") or die "Can't open OPRP dump file: $!\n";
# New CSV parser code
while (<DATA>){
# Split the data at comma separations
 ($port_no,$port_type,$name,$reference) = split(/,/, $_);
if ($port_type =~ /^UDP/){
 $port_prot = $port_no."/udp";
 push( @{$services{$port_prot}},$name);
 }
 elsif ($port_type =~ /^BOTH/){
 $port_prot = $port_no."/tcp";
 push( @{$services{$port_prot}},$name);
 $port_prot = $port_no."/udp";
 push( @{$services{$port_prot}},$name);
 }
 elsif ($port_type =~ /^TCP$/){
 $port_prot = $port_no."/tcp";
 push( @{$services{$port_prot}},$name);
 }
 elsif ($port_type =~ /^$/){
 $port_prot = $port_no."/unknown";
 push( @{$services{$port_prot}},$name);
 }
}
# Just to keep things tidy !
close DATA;
# There are some output to file arguments that I hadn't thought about !
# Check for them here and set up some variables ...
# They then are pulled from the arguments so that we can do the output ...
# If more than one output option is specified ( which I'm not sure is legal anyway )
# the final switch will take priority
for($i = 0;$i < @ARGV;$i++){
 if (@ARGV[$i] =~ m/-o/){
 if (@ARGV[$i] =~ m/-oN/){$out_normal = 1; $out_xml = 0; $out_grep = 0; $arguments = $arguments." -oN - "; $i++; $filename = @ARGV[$i];}
 if (@ARGV[$i] =~ m/-oX/){$out_xml = 1; $out_normal = 0; $out_grep = 0; $arguments = $arguments." -oX - "; $i++; $filename = @ARGV[$i];}
 if (@ARGV[$i] =~ m/-oG/){$out_grep = 1; $out_xml = 0; $out_normal = 0; $arguments = $arguments." -oG - "; $i++; $filename = @ARGV[$i];}
 } else {
 $arguments = $arguments.@ARGV[$i];
 }
}
# O.k. ... So if there is a file specified, we had better open it to write to ...
if ($out_normal == 1 || $out_xml == 1 || $out_grep == 1){
 open(OUT,"> $filename") or die "Can't open $filename to write to ! $! \n";
}
# Run nmap with the provided command line args.
# doing it this way rather than with backticks, means that the output is "live"
open(NMAP, "nmap $arguments |") or die "Can't run nmap: $!\n";
# If necessary warn the user that they shouldn't expect to see any output ...
if ($out_xml == 1){
 print "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n";
 print "! Sorry. The XML output option only !\n";
 print "! ouputs to the filename specified !\n";
 print "! not to the screen. !\n";
 print "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n";
}
if ($out_grep == 1){
 print "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n";
 print "! Sorry. The Grep output option only !\n";
 print "! ouputs to the filename specified !\n";
 print "! not to the screen. !\n";
 print "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n";
}
# Modify the output as required.
while(<NMAP>){
 if ($out_normal == 0 && $out_xml == 0 && $out_grep == 0){
 if ($_ =~ m/(^\d+\/)(tcp|udp)/){
 ($port,$state,$service)= split (/\s+/, $_);
 print "$port : $state \n";
 foreach $service ( sort @{$services{$port}}){
 print " - $service \n";
 }
 } else {
 print $_;
 }
 } elsif ( $out_normal == 1 && $out_xml == 0 && $out_grep == 0){
 if ($_ =~ m/(^\d+\/)(tcp|udp)/){
 ($port,$state,$service)= split (/\s+/, $_);
 print "$port : $state \n";
 foreach $service ( sort @{$services{$port}}){
 print " - $service \n";
 }
 print OUT "$port : $state \n";
 foreach $service ( sort @{$services{$port}}){
 print OUT " - $service \n";
 }
 } else {
 print $_;
 print OUT $_;
 }
 } elsif ( $out_xml == 1 && $out_normal == 0 && $out_grep == 0){
 if ($_ =~ /port /){
 # Strip the XML angle brackets and slashes so we can split on whitespace
 $_ =~ s/[<>\/]/ /g;
 $_ =~ s/\"//g;
 (@array) = split (" ",$_);
 foreach (@array){
if ($_ =~ m/portid/){
 ($a, $port) = split ("=",$_);
 }
 if ($_ =~ m/state/){
 ($a,$state) = split ("=",$_);
 }
 if ($_ =~ m/protocol/){
 ($a,$protocol) = split ("=",$_);
 }
 if ($_ =~ m/conf/){
 ($a,$conf) = split ("=",$_);
 }
 if ($_ =~ m/method/){
 ($a,$meth) = split ("=",$_);
 }
 }
 $port_prot = $port."/".$protocol;
 foreach $service ( sort @{$services{$port_prot}}){
 # Re-emit the port element, one line per known service on this port
 print OUT "<port protocol=\"$protocol\" portid=\"$port\"><state state=\"$state\"/><service name=\"$service\" method=\"$meth\" conf=\"$conf\"/></port>\n";
 }
 } else {
 print OUT $_;
 }
 } elsif ( $out_grep == 1 && $out_normal == 0 && $out_xml == 0){
# This is all one bloody long line, so this should be fun ...
# Send the comments straight through ...
 if ( $_ =~ /^\#/ ){
 print OUT $_;
 } else {
 @array = split(",",$_);
 for($i=0;$i < @array; $i++){
 if(@array[$i] =~ /Host:/){
 ($a,$host_ip,$host_name,$b,$remainder)= split(" ",@array[$i]); 
 @array[$i] = $remainder;
 }
 if(@array[$i] =~ /Ignored/){
 ($port_data,@therest)= split(" ",@array[$i]);
 @array[$i] = $port_data;
 }
 }
 print OUT "$a $host_ip $host_name $b ";
 foreach (@array){
 $_ =~ s/\// /g;
 $_ =~ s/\,//g;
 $_ =~ s/\s+/:/g;
 ($nada,$port,$state,$protocol,$name) = split(":",$_);
 $port_prot = $port."/".$protocol;
 foreach $service ( sort @{$services{$port_prot}}){
 print OUT "$port/$state/$protocol//$service///,";
 } 
 }
 print OUT " ".join(" ",@therest)."\n";
 }
 }
}
# Tidy up the open files ... if they exist ...
if ($out_normal == 1 || $out_xml == 1 || $out_grep == 1){
 close OUT;
}
# That's it really !

 

In order to make it work you’ll need to download the CSV file of the OPRP database here.

 

Incidentally, if you are interested in Port Scanning and Penetration Testing and the like, you might find this series on Forensic Focus interesting.

 

Tagged , ,

Why are statistics useful in Security ? ( Part 1 )

I have a fascination with Statistics. To be honest, it tends to be a fascination with its misuse, but it is a fascination none the less. I was reminded of this over the weekend twice – once on Sunday morning, before coffee, when I retweeted a statistic:

@MarkMazza1 93% of companies that lost their data center for 10 days or more due to a disaster, filed for bankruptcy within one year. @dpoecompany

It may or may not be true – I have no idea – but because it sounded good, I retweeted it anyway ( it doesn’t actually harm my business case either ). A few minutes later, halfway down the first coffee of the day, it occurred to me that this wasn’t quite right and I tweeted the following in penance:

“Nothing like retweeting an unsubstantiated statistic first thing on a Sunday morning. 95% of people agree ;-)”

The second thing, was courtesy of my son, who forwarded me the following – genuine and true – statistic:

[ I’m sorry for the lack of attribution – I don’t know whence it came – if anyone wants to tell me, I’ll happily give a credit ]

The statistics of advertising fascinate me too – the variable, and selective sample size that returns just the right percentage of “dogs that prefer” muttfood™. ( That involves finding the 8 dogs who have no sense of smell and 2 who do – to give a believable 80% … )

The point is that, as someone said ( generally attributed to Disraeli – but apparently not his ):

“There are three kinds of lies: lies, damned lies, and statistics.”

When selling things, statistics are warped and presented in such a way as to scare us, to emphasise our need for the product or even to show us that our peers are using it, so why aren’t we ? Obviously there is a huge overlap between psychology and statistics here, but none the less, the point stands.

When we know about statistics though, we can turn them to our advantage. Not only are we in a position to treat what we are told with more care, but we can start to ask questions that might actually enlighten what the reality is. Let’s go back to our first example:

“93% of companies that lost their data center for 10 days or more due to a disaster, filed for bankruptcy within one year.”

What can we ask about this data ? Well, let’s start with asking where it came from. Who has admitted that their data centre was down for 10 days ? What happens to those whose data centre was only down for 9 days ? 5 days ? 2 days ? Is there a direct linear correlation between data centre down time and probability of bankruptcy ? Were all the companies in good financial shape beforehand ? Were they skimping on data-centre maintenance because of poor cash flow ? Was their main stock warehouse in the same building as the data-centre when it burnt down ?

A key thing to remember is that correlation isn’t causation. This is important, and why the placebo effect is an issue in medical trials. If A rises and B rises, is A the cause of B – or is there an unseen, or more to the point, unmeasured, C that is causing the rise of B ?
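This trap is easy to demonstrate with a short simulation – a hedged Python sketch, where the variable names and numbers are entirely invented for illustration. A hidden factor C drives both A and B; A and B end up strongly correlated even though neither causes the other:

```python
import random

random.seed(42)

# Unmeasured factor C (say, overall network activity) drives both
# A (alerts from a new security product) and B (helpdesk tickets).
C = [random.gauss(100, 15) for _ in range(1000)]
A = [c * 0.5 + random.gauss(0, 5) for c in C]
B = [c * 0.3 + random.gauss(0, 5) for c in C]

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A and B correlate strongly, yet neither causes the other
print(round(correlation(A, B), 2))
```

Measure C directly and the apparent A-to-B link evaporates – which is exactly why excluding the unmeasured C matters before acting on a correlation.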

However we can bring forward even more questions. Ok, so ( more or less ) 1 in 10 companies that have a 10 day outage survive – what are they doing right that we can emulate ? Did they have a better business continuity plan ? ( Almost certainly – but I don’t have the data to back that statement up. ) Is there a commonality amongst the companies that survived ? ( Are they all in the same industry – all consultancies, for example ? Does this mean that my business is at less risk ? )

I hope you see my point: too little information about the data is, whilst not a bad thing per se, not exactly conducive to sensible decision making.

So where does that leave us within our own organisations ? Well, it leaves us with a necessity to collect the right data. That’s easier said than done, to be honest, because we’re back up against the correlation/causation barrier again – we need to be sure that we are gathering data that does actually relate to what we are seeking to study. Ensuring that A is related to B involves verifying that C has nothing to do with it – nothing acts in isolation, so excluding C can save a wild goose chase and a waste of money pursuing the wrong track.

Much as it may seem unscientific, I really do recommend the idea of getting together a few people and brainstorming possible data sources and other connections between the possible influencing factors. Everyone has a perspective, and often it is the perspectives of others that add the most value !

Don’t forget the human factor in this – it could be that there are fewer viruses during the summer, not only because of your new AV product, but because the staff are away, surfing the net less and bringing less into the network – in fact your trial data is useless because the product is actually worse, but it had less to find, and thus looks more effective … More effective decision making is enabled with good statistics, and effective decision making saves money.

This is where historical data has a value – don’t discard old reports and metrics, use them to show year on year growth and annual, monthly, weekly, daily and hourly trends. You’ll be able to make more sense of any new data in light of this information. You can also spot anomalies in the data, and, if you get to the stage of doing this in real time, you can find problems and security incidents as they happen, and that is the holy grail of information systems and security management.
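As a sketch of how historical baselines enable that real-time anomaly spotting, here is a minimal Python example – the hourly failed-login counts are fabricated – that flags any hour sitting more than three standard deviations from the historical mean:

```python
# Hourly failed-login counts from historical logs (fabricated example data)
history = [210, 195, 220, 205, 198, 215, 202, 208, 199, 212]

mean = sum(history) / len(history)
variance = sum((x - mean) ** 2 for x in history) / len(history)
std = variance ** 0.5

def is_anomaly(count, threshold=3.0):
    """Flag a count more than `threshold` standard deviations from the mean."""
    return abs(count - mean) > threshold * std

print(is_anomaly(207))   # -> False : a normal hour of background noise
print(is_anomaly(2000))  # -> True  : a brute-force spike worth investigating
```

A real deployment would use a rolling window and account for daily and weekly seasonality, but the principle – old metrics define "normal", so deviations stand out – is the same.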

In part two, next week, we’ll start to decompose some basic things to collect, potential sources and analysis of the data.

Why don’t you subscribe, either to my Twitter feed (@si_biles) or to the Blog, and you’ll be notified of that post and other things of interest as time goes on ?

Tagged , , , ,

Five free ways to improve your security

Peer Review

Peer Review (Photo credit: AJC1)

We’re in recession, lest we forget – it isn’t like the press is going to let it slip from our minds – so money in a tight field is getting tighter. However, even for large businesses improving security need not cost the earth, or indeed anything at all ( apart from some time, and we must recall that time is equal to money ). To that end, I thought that I’d put down five very cost-effective and pragmatic ways to significantly improve your security.

1. Patching

Certainly at a desktop or server OS level, patches are mostly available for free. ( If you have devices, operating systems or applications that require a maintenance contract for patch updates, this isn’t quite free – however let’s, for the time being, assume that this cost is covered off already. ) Patching up to date ensures that, with the exception of those pesky “zero-day” problems, your system is protected against known vulnerabilities. I’ve been to many, many organisations where patching is so out of date that it is measured in years – that’s seriously wrong. The excuse is often “our application is so unstable we can’t”. Let us think carefully about that statement and consider what we should do under these circumstances: if, and only if, this is true and there is nothing that you can do to get the application maintained, then it can remain as is – however the device or server should be isolated behind other mitigations. ( So much so that if I am scanning your network in a vulnerability or penetration test, I don’t want to be able to see the patch level. )

2. Review your Firewall Rules

When was the last time you reviewed your firewall rules ? You’ve added some recently, I’m willing to bet, but have you purged old entries ? Do you have a process for deleting rules when they are no longer needed ? Each “allow” rule is a doorway into your network – if it isn’t needed, lock the door. Incidentally, it is wise at this point to pre-empt the next section: is there supporting documentation surrounding your firewall ruleset ? At a minimum, you need to know what each rule is for in English ( e.g. “allow port 80 tcp to 123.234.123.234 from 123.235.0.0” doesn’t tell me anything; “http website access to the stock server from the warehouse subnet” does ) and who owns it ( John Smith from Warehouse Control ). That way, a review involves going through the list, calling John and asking him if he still needs that rule.
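One lightweight way to make such a review tractable is to keep each rule's plain-English purpose and owner in a machine-readable form and flag anything undocumented. A hedged Python sketch – the rule data below is invented for illustration:

```python
# Each firewall rule carries its plain-English purpose and a named owner
rules = [
    {"rule": "allow tcp/80 to 123.234.123.234 from 123.235.0.0/16",
     "purpose": "http website access to the stock server from the warehouse subnet",
     "owner": "John Smith, Warehouse Control"},
    {"rule": "allow tcp/3389 to 123.234.123.240 from any",
     "purpose": "", "owner": ""},  # nobody remembers why this is here ...
]

def review_candidates(ruleset):
    """Return rules with no documented purpose or owner - review these first."""
    return [r["rule"] for r in ruleset if not r["purpose"] or not r["owner"]]

for r in review_candidates(rules):
    print("UNDOCUMENTED:", r)
```

Anything this flags has no owner to call, which makes it the prime candidate for removal ( after a suitable "scream test", of course ).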

3. Documentation

Review your docs – dry run through processes and procedures – do they still work ? Update them if not. Are there any documents that are clearly missing ? Write them. Review your policies, you are of course doing this annually anyway aren’t you, but IT moves faster than on a yearly basis, and I’m pretty sure that a mid-term review wouldn’t do you any harm – issue errata if you don’t want to actively change the policy at this stage – but keep the changes to hand for the updates and it will save time later. Check that your supporting documentation is up-to-date and relevant – such as your firewall rules above – if it isn’t in English, make a translation – you might know what it means, however if you get hit by the proverbial bus ( or get an offer you can’t refuse ) – then your successor will need to figure it out – the more uncertainty there is in that time the higher the risks of an incident – if you want an incentive a public breach that might be blamed on you after you’ve left ( “My predecessor left such a mess it was impossible to manage” ) might haunt you for a long time. It never ceases to amaze me how small this industry actually is.

4. Cull dead accounts

Like old firewall rules, old, unused accounts are opportunities for an external attacker. Hopefully you have a policy in place for removing accounts when an employee leaves, but it is still well worth going through and auditing. Look for test accounts, administrator accounts, contractor or supplier accounts and system accounts that wouldn’t be identified by a leavers process, and may well not have the same lockout or expiry controls. At the same time, have a quick check to make sure that all accounts have the correct settings – there are many tools and scripts freely available on the net for walking AD or other directories to look for specific settings.
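The audit itself can be very simple once you have last-login data out of AD or `lastlog`. A hedged Python sketch – the account names and dates are invented – that flags anything idle beyond a cutoff:

```python
from datetime import date, timedelta

# Account name -> last login date (invented data; in practice pull this
# from AD, `lastlog`, or whatever directory you run)
last_login = {
    "jsmith":        date.today() - timedelta(days=12),
    "test01":        date.today() - timedelta(days=400),
    "contractor_db": date.today() - timedelta(days=180),
}

def stale_accounts(logins, max_idle_days=90):
    """Accounts idle longer than max_idle_days are candidates for removal."""
    cutoff = date.today() - timedelta(days=max_idle_days)
    return sorted(name for name, last in logins.items() if last < cutoff)

print(stale_accounts(last_login))  # -> ['contractor_db', 'test01']
```

The 90-day threshold is just an example – pick one that matches your leavers process and account-lockout policy.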

5. Educate a bit

I’m not talking about a huge CBT on security here – that’s hardly free. However, writing and sending an e-mail to all staff is. Give some thought to what your major concerns and issues are, write a positive statement of ways to manage these risks ( one per e-mail, send a few ) and get it out there. Creating awareness, putting ideas into the heads of staff and giving them details of whom to contact with concerns or questions is going to reap long-term benefits. This is probably the largest return on investment that you can imagine – proactive staff will head off problems you have yet to conceive, and, given a voice, they’ll give you ideas and suggestions that will not only improve security, but could well make your business more profitable overall.

These are just five simple suggestions – you could extrapolate a little I’m sure to find a few other things that won’t cost a thing, but will improve your security ( here’s a clue – if you start with the word “review” or “audit” and follow with things like “running services”, “configurations” or “file/folder/group permissions” you’ll probably come up with another few ). It’s an interesting time to be in Security – budgets are down, but threats are up – pro-active low-cost work could be the difference between success and failure – these things really should be part of a security routine anyway – but we are so often firefighting or implementing the next new thing that we don’t get much of a chance – this breathing space might actually be what the doctor ordered …

Tagged , , , , , , ,

It’s all about managing risk …

Well, what an interesting turn up for the books, it seems that a group of CISOs have gotten together, in what I am sure were challenging circumstances ;-), in Hong Kong to figure out that Information Security isn’t all about technology ( http://www.theregister.co.uk/2012/04/25/ciso_advice_risk_management/ ) … I’d like to take this opportunity to point you to my article on “What is ‘good enough’ information security ?” (http://articles.forensicfocus.com/2011/09/19/what-is-good-enough-information-security/) from a while ago.

I reiterate – as security consultants we are risk managers – security needs to be fit for purpose, not a technical solution to a problem that doesn’t exist !

Tagged , , , , , ,