Tag Archives: Linux

Resurrection

I can’t stand to see an old computer be thrown out – a fact which means that I’ve far too much clutter and plenty of things collecting dust in some corner or another. A quick mental audit throws up the following:


In my store room, no-one can hear you scream …

And yet I still can’t refuse when someone says: “I’ve replaced my desktop / laptop / server – do you want the old one ?” To this end, I’m grateful to my Brother-in-Law ( he of the clown ) who, when he replaced his old MacBook (A1181), asked exactly that. To be fair, he was replacing it because it had slowed to a crawl with OS X, the battery had died ( permanent power required ) and the optical drive had either been removed or died – possibly both, but not necessarily in that order !

It has also come to pass that my eldest child is just about to finish GCSEs and start on A-Levels, and, with mostly (all!) essay subjects plus a relaxation of the school rules on the use of computers, we had a discussion about giving her a laptop.

The machines listed above are things that I own that are in storage – there are plenty of working, switched-on computers around the house – the oldest two have desktops, the youngest has another old laptop re-purposed to run Ubuntu 14.04 – there is also at least one “communal” Apple laptop plus my better half’s and my personal laptops ( a distinction that appears to be completely lost on the kids … ). Because of this, I really don’t feel that buying a new laptop – even for the few hundred pounds that you can pick one up for from PC World today – is worth it. It’s likely to be kicked around, abused, used and left in a bag in a school locker / shelf / corridor – so something new and shiny isn’t going to stay that way for long. Say what you like about the speed of old computers, but their build quality is something else altogether – I can use the IBM RS6000 to get things off high shelves by standing on it – I’d like to see you do that with a new Mac Pro trash can !

So, given the “new” laptop at my disposal, and a new found determination to help everyone see the light about Linux – a solution dawned. A quick survey about user requirements elicited the following:

  1. Word Processing
  2. Music – listening, not composing
  3. Movies – watching, not filming
  4. E-mail
  5. Skype
  6. And the usual suspects of social media … ( Facebook … )

Ok, well there is nothing in the list that should challenge the processor overly, so the plan still looks pretty sound. Ordered a new battery from Amazon for the grand total of £17 and set to work.

One of the inspirations for this has been a recent episode of one of the pod-casts that I listen to – “The Linux Action Show” – they have recently migrated an Apple user from her MacBook Pro to Linux – originally on the Mac hardware, but ultimately to a Lenovo Yoga. For them, after several attempts at getting Linux installed – over various flavours – they opted for Antergos – this is actually the distro that I currently have running on my desktop, and thus I (a) have a DVD already burnt and (b) it seems fine to me (!).

It was at this point that I found out that the optical drive didn’t exist … Slightly troublesome – I slid the disc into what turned out to be an empty slot and then spent 5 minutes trying to extract it from the case without an eject mechanism. Fine, stage 2 – put it onto a USB thumb drive – piece of cake – had the image, had the drive, done. Hold down the option key to select boot media – nothing to choose from except the hard disk … Methinks – ok, being daft here, no trouble to try a different OS, I have the images so it won’t take long – let’s have Fedora 22 on the USB … Still nada …

Right – Google time. All became clear, and I kicked myself for my forgetfulness. The Mac needs to have the EFI adjusted to allow it to boot other operating systems. The tool to do this is called rEFIt – this hasn’t been supported since it was forked in 2013, but as the A1181 Mac model range is from 2006 – 2009 I figured that there would be a pretty good chance that the last published version ( thankfully still available for download ) would work. It’s a quick and easy install, although I did carry out the following command at the command line to be sure:

sudo /efi/rEFIt/enable.sh

( This is courtesy of this page – I don’t know if it was necessary, but I didn’t see the harm in being safe ! )

Ah hah ! Brilliant – I can now see the USB boot device and even select it to boot from ! Will it now install, will it heck …

So, Antergos – nope, Fedora 22 – nope ( although it got further than Antergos ), Elementary OS – nope, Ubuntu 14.04 – nope …

Urm … Back to Google – unfortunately I can’t now find the vague comment that caught my attention, but the gist was that this model doesn’t have a 64-bit EFI – it’s a 64-bit chip, but with only a 32-bit EFI. The common factor up to this point was that all of the versions that I had tried were 64-bit. So, after a quick scout, the first distro that came up with a 32-bit version was …

(insert drum roll here)

Elementary OS

Right, now we are cooking with gas … sort of … Wrote the downloaded 32-bit image to the USB key, tried it and … nope.

Ok, so, back to the drawing board – going over the process from the beginning, back to the EFI. Lo and behold:

rEFIt Troubleshooting USB Disks

Note: The following applies not just to USB hard disks, but to any
storage device that is not considered "internal". That includes USB
flash drives, SD cards and other memory cards, as well as hard drives
attached through Firewire or other connections.

Booting Windows or Linux from an external disk is not well-supported
by Apple’s firmware. It may work for you, but if it does not work, 
there is nothing rEFIt can do about it.

So, I don’t have an “internal” CD drive at all, so I might have hit a problem here … But anyhoo, let’s give it a shot. Off I go to find my USB optical drive, plug that in, burn a 32-bit Elementary ISO to a DVD-R and give it a go …
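As an aside, if you’re burning the disc from a Linux box, growisofs – part of the dvd+rw-tools package – does it in one line. A sketch, with the ISO filename as a placeholder:

# burn the ISO to the DVD-R sitting in /dev/dvd
growisofs -dvd-compat -Z /dev/dvd=elementaryos-32bit.iso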

Success ! Woo hoo. It boots like swimming through treacle – so I must admit that I was a little concerned about the performance of the OS once it was installed, but actually it turned out fine once it was running off the hard disk rather than over the Mac’s lousy USB interface.

I have to say that it looks great. The requirements list has been met:

  1. Word processing – the indomitable LibreOffice is providing this function.
  2. Music – Spotify – through the web-player rather than the client – I’d have liked the client to install, but for a 32-bit OS there doesn’t seem to be a package available – I might revisit and see if the source is there and re-compilable, but as the web-player works so well, I don’t really see the point.
  3. Movies – Netflix / iPlayer / 4oD etc. through Google Chrome – ( this was also required for Spotify for the built in Flash ) and also VLC for viewing things that are held on the media NAS – I have to say that I was pleasantly surprised by the performance of VLC streaming over the WiFi to the laptop, wasn’t something that I was expecting to be so smooth.
  4. E-mail – Thunderbird and, also, the built in Elementary OS Calendar – both of these were synchronised with the Gmail account and this was seamless ( on the client – had to adjust the Google Security Settings to allow for this to actually work )
  5. Skype – well, I installed Skype – fortunately this is mostly used in chat mode, as hardware support for the camera and microphone seems to be non-existent at the moment. Sound out is fine; sound in – it doesn’t even seem to detect a microphone – one for the bug list …
  6. Social Media – thank goodness for Google Chrome – I don’t have to worry about any clients as the web interface to everything else is only a click away !

So far the laptop seems to have hit the mark – when I went to say “Goodnight” last night it was in active use – and so far there have been no complaints. There are still some things that need to be sorted out – the microphone hardware is one; another – if this is going to be used for A-level coursework – will be backups. Perhaps that will be tied in with synchronisation with the desktop machine – currently a Windows machine, but maybe going to migrate to Linux off the back of this so that the environment can be common on both. At least that will be 64-bit !


Building a Linux based Digital Forensic Workflow – Part 2 – OS & Communications

The overall title of this series has been based around the “Digital Forensic Workflow” – I mean this in its broadest sense. This isn’t just about the imaging / examination side of things – but the full life cycle, from first client contact to the final report ( and billing ! )

Possibly you read a pair of articles that I wrote on Forensic Focus – Part 1 and Part 2 here – in which I mentioned that I’ve “put out to pasture” my old MacBook Pro and obtained a shiny Lenovo X1 Carbon – just before the Superfish scandal hit. That particular issue didn’t bother me overly as I never even booted the machine into Windows – as soon as it came out of the box it had Fedora 21 installed on it from a USB stick ( yep, no optical media drive on the Lenovo ). Everything on the laptop seems to work without any tinkering – even the fingerprint reader flashes away when authentication is required, although I have not made use of it … yet …

I’m a fan of Fedora – this is news to no-one by now I suspect – and it has been my choice of Linux flavour since it came out. My youngest daughter’s laptop runs Ubuntu ( 14.04 I think – I can’t remember what I installed ) and my workhorse desktop is currently running Antergos ( an Arch based distro – to be fair, as a mix between an experiment and the fact that I couldn’t actually get Ubuntu running satisfactorily on it with a dual-Nvidia dual-monitor setup ). There are many more machines kicking around – not least a rather substantial1 HP rack-mount server that is currently running VMWare ESXi – that are awaiting conversion.

For now though we are going to focus on my mobile computing platform (!) – the others will get their own write up in due course.

So, what exactly does one use a laptop for then ? Well, in my case the list pans out pretty much as follows:

  1. E-mail – a lot !
  2. Writing – which breaks down to:
    1. Blogging ( like this )
    2. Word-Processing
  3. Social Media type stuff – Twitter, Facebook etc.
  4. System Administration of other things
  5. Research ( web, but also other more “hands-on” things )
  6. Coding
  7. Skype (?) / Instant Messaging (?)

I don’t really play games on my laptop – not even solitaire, so I’ve left that off the list – although, again, I may come back to that one later2 !

Starting at the top, but not particularly promising to continue in any given order …

Thunderbird E-Mail

I used to use Outlook on the Mac and on Windows – clearly this isn’t one that I can transfer across, it being unavailable for Linux3 – so I need to find something else. That something else is Thunderbird ( at least it is at the moment, and so far, so good ). Thunderbird is the sister application of the Firefox web browser from Mozilla. Mozilla, for those of you who are interested in this sort of thing4, has a heck of a pedigree – created in 1998 when the source to the Netscape browser was released ( in the ’90s Netscape Communicator was the dominant browser by a long shot ) – it has since grown into a powerhouse with Firefox, Thunderbird and even Firefox OS on phones.

Thunderbird is great for me – not dissimilar in its tabbed approach to e-mails to that which I’ve seen in Lotus Notes 9+ lately. I’d like tabbed composition of e-mails as well, rather than pop-outs, but I think I’ll either have to wait or write it ! To be honest, an e-mail client is pretty much an e-mail client – the basic concepts have to be there, otherwise it isn’t an e-mail client ! So effectively it is the enhancements – in the case of Thunderbird, called “Extensions” – that make the difference. I’m working with four that I like, and that are managing to make things easier for me.

Enigmail

Enigmail is the extension that manages the PGP encryption and signing of e-mails. Provided that you have PGP installed on your system, when you kick it off it talks you through the creation of a public / private key pair, and even uploads the public part for you to a key server. Then it is a matter of selecting the sign or encrypt icons in the compose window and you are done. Emails that come in, either encrypted or signed, are managed automagically and the decryption and / or authentication of signature is seamless.
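Under the bonnet this is all GnuPG, so if you ever need to do the key management by hand, the equivalent commands look something like this – a sketch, with the key ID and keyserver as examples:

# create a public / private key pair interactively
gpg --gen-key

# upload the public half to a keyserver ( Enigmail normally does this for you )
gpg --keyserver pool.sks-keyservers.net --send-keys DEADBEEF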

Thunderbird Conversations

This is more of a pretty-fication than an actual tool – it sorts e-mails into conversations – so that you can see the back and forth of an e-mail chain all in the same place, rather than scattered over time. I like it – others may not. It also adds a “quick reply” feature in the same tab so that e-mails can quickly be responded to. The problem with that is that Conversations and Enigmail don’t want to talk to each other, so there are no signing / encryption options on the replies.

TaQuilla

This is an interesting one – I’m experimenting with this at the moment. Tagging allows you to assign a given message to a specific category, both for visual separation ( you specify the colour of the tag ) and for tag filtering. E.g. ( as in mine ) Personal = green, Work = orange, Social Media = purple etc. It makes it easy to carry out a quick scan. TaQuilla is a Bayesian tag adder … This means that rather than you tagging the message, or having set rules to tag a message ( all e-mail from the better 1/2 becomes “Personal” ), it learns from the e-mails that you have already tagged. So, because I have tagged all messages from better 1/2, children, sibling and parents as green – it recognises the common features of _that type of e-mail_, so that when an e-mail arrives that doesn’t necessarily match a specific rule ( e.g. a child sends from a school e-mail address rather than a personal one ) it can recognise it with a degree of certainty and tag it as personal … Still training this one a little, but it is getting there.

Lightning


This is the calendar extension for Thunderbird – required in order to replace Outlook in my opinion. I could make use of a separate calendaring application, but I happen to like this as I tend to add things to my calendar most often when I’m in my e-mail, so it makes sense for me ! I happen to have bound my calendar in Thunderbird to the online one that I have with my Google Account – it wasn’t my first choice, I actually wanted to continue to use my hosted Exchange calendar as my primary source – but it turns out that Microsoft doesn’t want to play particularly nicely with open standards for calendars, and Apple won’t let me use iCal unless I make it public to everyone. So fine, I’m using Google. This means that I can sync the calendar across all my devices – laptop, iPhone & Android – which really is great as I’m bound to have one of the above around at any given time !

Thunderbird itself is configured to pick up and send my e-mail to and from Exchange over SSL/TLS IMAP and SMTP. So far, I have to say that it is proving to be a most viable option.
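If you want to sanity-check the encrypted endpoints before pointing Thunderbird at them, openssl’s s_client is handy – a quick sketch, with the hostnames as placeholders for your own provider’s servers:

# test an IMAP over SSL endpoint directly ( port 993 )
openssl s_client -connect imap.example.com:993

# test SMTP submission with STARTTLS ( port 587 )
openssl s_client -connect smtp.example.com:587 -starttls smtp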


1. For home use anyhoo …

2. With the streaming features of Steam, even though the laptop itself doesn’t have the graphical oomph to pull off gaming. I may, once I get the main desktop machine running to my liking, give this a go …

3. Yes, I know about Wine and CrossOver … but why would I want to in this case ?

4. https://www.mozilla.org/en-US/about/history/details/


Background Noise on the Internet

Not too long ago there was a reasonable amount of press ( in the IT world anyhoo – meatspace pretty much ignored it ) regarding attacks against the SSh protocol. The “SShPsychos” group has been responsible for a large number of coordinated brute force attempts against well known usernames with a variety of common passwords. These aren’t long-term targeted attempts against a particular victim – rather a scatter-gun approach at anything that’s running an SSh daemon on Port 22, using a short-ish list of dumb passwords.

SSh “hack” from the Matrix

To be honest, I’d known about this sort of background noise for a long time – and it came as no great surprise to me. It’s been going on for as long as I’ve had an SSh server running on a public IP, though to be fair the volume _has_ increased. It has been a great example to students when I’ve been teaching Linux security – pointing out the reasons for carrying out the basics of securing SSh ( a minimal config sketch follows the lists below ):

  1. No remote root login
  2. Complex passwords
  3. Specific IP firewall rules if/where possible

And also some of the more complicated ones:

  1. Fail2Ban
  2. Chroot Jails
  3. Multi-Factor Authentication
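For reference, the basics mostly boil down to a handful of lines in /etc/ssh/sshd_config plus a firewall rule – a minimal sketch ( the usernames and the source subnet are examples, substitute your own ):

# /etc/ssh/sshd_config
PermitRootLogin no         # 1. no remote root login
MaxAuthTries 3             # fewer guesses per connection
AllowUsers alice bob       # only named accounts may log in

# 3. firewall: SSh only from a known subnet, drop the rest
iptables -A INPUT -p tcp --dport 22 -s 203.0.113.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP

Don’t forget to restart the SSh daemon after editing the config – and keep an existing session open while you test, lest you lock yourself out !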

Even now, logging into my webserver ( “www.thinking-security.co.uk” ) via SSh on Port 22, there have been approximately 2000 illegitimate login attempts over the last 20 hours. Quite often when I re-connect after a weekend or more than a few days, this number is in the 10s or even 100s of thousands. I’ll be honest, it doesn’t particularly bother me – it is so much rattling of windows and testing of locks – there are much easier fish to fry on the interwebs than that particular machine.

It did cause me to ask two particular questions though:

1) Where are all the attacks coming from ?

2) What usernames and passwords are they trying ?

Turns out that question 1 is easy, and question 2 is half easy …

On any Linux server, the connections made against SSh are logged. These go into /var/log/secure and here is a prime example:

May 15 14:35:55 ts-one sshd[23429]: Invalid user bankid from 37.59.230.138
May 15 14:35:55 ts-one sshd[23429]: input_userauth_request: invalid user bankid [preauth]
May 15 14:35:55 ts-one sshd[23429]: pam_unix(sshd:auth): check pass; user unknown
May 15 14:35:55 ts-one sshd[23429]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=37.59.230.138
May 15 14:35:57 ts-one sshd[23429]: Failed password for invalid user bankid from 37.59.230.138 port 59470 ssh2
May 15 14:35:57 ts-one sshd[23429]: Received disconnect from 37.59.230.138: 11: Bye Bye [preauth]

This is one connection attempt – the source ( from ) is 37.59.230.138, and it has presented the invalid user “bankid” – the attempt has actually failed at this point – but SSh won’t let the attacker know that; it will still allow them to enter three password attempts before terminating the connection. This inability to tell whether it is the username or the password that has failed is actually quite important – realise that if you could tell that an account is valid, you could easily stop wasting time and effort on ones which are not. The non-specific failure message – “either the username or the password is wrong” – leaves the whole possible space open, requiring a far greater number of attempts to find a valid username _and_ password combination.
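Given that format, pulling the attacking addresses and usernames back out of the log is a one-liner apiece – a quick sketch ( GNU grep assumed ):

# top attacking IP addresses, most prolific first
grep "Failed password" /var/log/secure | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | sort | uniq -c | sort -rn | head

# usernames being attempted
grep "Invalid user" /var/log/secure | awk '{ print $8 }' | sort | uniq -c | sort -rn | head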

37.59.230.138 – great IP address – I’m sure that there are some savants out there who can look at that and tell me where it is from – but I assure you, I am _not_ one of them. I have to look it up – and even then the sources occasionally disagree ( not that it really matters, as I’m not sending a drone over to wreak revenge ). For the purposes of the remainder of this process I’ll be using the MaxMind database, and, for the sake of legal compliance:

This product includes GeoLite2 data created by MaxMind, available from http://www.maxmind.com.
Creative Commons Licence
This data is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

There is a direct interface to the data that is available on the front page of the MaxMind website which allows for the query of a single IP address, and, for us in the case above, this tells us:

IP Address = 37.59.230.138
Country Code = FR
Location = France, Europe
Coordinates = 48.86, 2.35
ISP = OVH SAS
Organization = OVH SAS

So it is a French IP address hosted with the OVH Company.


Our friends in France …

As a hosting company, they aren’t directly responsible for the attack – rather it is just being launched from a machine that has been allocated an IP address within their scope. Having said that, a quick Google search about them suggests that this is far from the first time that they have been used as a stepping stone to other things …
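The web form obviously won’t scale to 2000 lookups. MaxMind also publish the GeoLite2 databases for download, and libmaxminddb ships with a command line client, so the same query can be scripted – a sketch, with the database path as an example:

# query the local GeoLite2 database for the country of a single address
mmdblookup --file /usr/share/GeoIP/GeoLite2-City.mmdb --ip 37.59.230.138 country names en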

The IP address lookup also gave us the Lat / Long ( estimated – by a long shot ! ) of the address. We can plug these into Google Maps to have a look-see at the rough area of operation _of the IP address_. This isn’t, most likely, where our perpetrator is sitting – more likely the recorded head office of the company …

Our location in Paris...

That’s great – one IP address down, 2000 to go …

In the next articles in this series, I’m going to extract all of the IP addresses & usernames from the logs ( across multiple servers ! ), and then plot these against a map to show both historic and real-time data … And then we’re going to move on to finding out what passwords are being attempted using a “honeypot” !


Running your own DNS server (Part 2)

Right, you need to read part one in order for this (a) to make sense or (b) to work – so if you haven’t done that already – Go !


Looking at a domain …

So, right now you have a working domain name server – just the one, mind you – and it is configured to allow you to replicate it over to another server … We are going to set up that second server now, so that we have some resiliency in case something horrible happens to the first one. Just as a side note – my two servers are both hosted at Digital Ocean – a great, cheap provider of virtual hosts. They have a number of data centres worldwide, so it would make no sense at all to have my resilient server located in the same place as my main server. To that end – one is in London, the other in Frankfurt. I figure that as the majority of my business is European, sticking them anywhere else is a bit pointless ! There is nothing, however, to stop me from either migrating or adding additional servers in future …

Anyhoo, onto the configuration. The first part is identical:

yum update -y
yum install bind bind-utils bind-chroot -y

Back into /etc/named.conf we go … ( vi 😉 ) and this time we make the following changes:

options {
 listen-on-v6 port 53 { ::1; };
 directory "/var/named";
 dump-file "/var/named/data/cache_dump.db";
 statistics-file "/var/named/data/named_stats.txt";
 memstatistics-file "/var/named/data/named_mem_stats.txt";
 allow-query { any; };

 recursion no;

 dnssec-enable yes;
 dnssec-validation yes;
 dnssec-lookaside auto;

 /* Path to ISC DLV key */
 bindkeys-file "/etc/named.iscdlv.key";

 managed-keys-directory "/var/named/dynamic";
 
 pid-file "/run/named/named.pid";
 session-keyfile "/run/named/session.key";
};

Note that the only real difference here is the removal of the “allow-transfer” line. This server shouldn’t allow transfers to anyone anywhere, so it is omitted entirely and it defaults to off.

Then, as with the master, we need to add the relevant zone entry so the server knows what it is looking after.

zone "security-intelligence.uk" IN {
 type slave;
 masters { www.xxx.yyy.zzz; };
 file "security-intelligence.uk.zone";
};

You can see the difference from the earlier one – this is a slave entry and it points at the master’s IP address ( fill in your own domain and IPs here … )

This being done we can kick off the BIND process and get it added into the boot sequence as we did on the first server, troubleshooting as required. ( Which, go figure, I needed to do again, because I missed a darn “;” ! )

service named restart
chkconfig named on

Again, we should now be able to query that nameserver directly about our domain, so, using nslookup as we did before, double-check that it works.
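Same drill as before – point nslookup directly at the slave ( substituting its actual IP for the placeholder ):

nslookup - aaa.bbb.ccc.ddd
> www.security-intelligence.uk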

Assuming it did, congratulations, you now have redundant name servers managing your DNS for your domain.

There is one last thing to cover, and that is making changes.

To make a change to an existing zone:

1) Edit the zone file on the master changing:

a) the serial ( incrementing )

b) the entry that you need to alter

2) Reload the zone files using the command

rndc reload

This will reload the zone file and the changes will propagate over to the slave ( if, and only if, you increment the serial though ! )
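By way of a worked example, using the zone file from Part 1:

# in /var/named/security-intelligence.uk.zone on the master:
#   2015051401 ;Serial   becomes   2015051402 ;Serial
#   ... then change the A record ( or whatever ) that you are updating ...

# and push the change out:
rndc reload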

To add / remove a whole zone, basically follow the same steps as for adding the first zone in Part 1. You’ll need to create the new zone file and populate it with the information that you require, and add the zone into the named.conf. You’ll also need to let the slave know about the zone in its named.conf. In this case use:

service named restart

To restart the service so that it reloads the named.conf file and is aware of the new zones.

I hope that your own DNS server gives you back some of the control that you would like of your digital estate. I will write more on this as I progress through the migration of the other 81 domains and also cover off things like round-robin DNS load balancing and MX entries for e-mail … Along with the trials and tribulations of getting everything migrated !

References:

https://www.digitalocean.com/community/tutorials/how-to-install-the-bind-dns-server-on-centos-6


Running your own DNS server (Part 1)

Somewhere over time I seem to have acquired 82 ( yes, eighty-two ) domain names. A small number have been bought on behalf of other people ( relatives, friends & children ), some have been bought sensibly ( business related ) and some have been bought on a whim as I thought that I might get around to doing something with them at some point. I’ve made use of the rather good ( and cheap ) service at 123reg – which in terms of registration is great, and I’m sure that if you are managing the DNS of one or two domains the admin interface is pretty good for that too – however, for the full 82 it is excessively painful.


Cover of the O’Reilly Book on the subject

The recent – “I’m going to move everything to Linux” – decision has left me thinking that I should get on and tidy up everything else. The company website runs on Linux already – and I’m planning to point all of the pertinent domains at it. At some point I’ll be migrating the e-mail from the hosted MS Exchange server as well – although I have to admit that’s one thing that I don’t fancy doing – partly because other people beyond me rely on it.

So, as part of this “phase” I’m going to take back control of my domains and host my own DNS servers ( yes, plural, for redundancy purposes ) on Digital Ocean droplets across two data centres – one in Frankfurt the other in London. ( I figure that I’ll remain in Europe for these, rather than the US or Middle/Far East ).

As is my wont, I’m going to be using Fedora 21 x64 as the base OS – this isn’t to bad-mouth any other distros ( except Ubuntu – I don’t like Ubuntu, or Debian … ) – I just like Fedora ! This could / would work equally well with CentOS, which is the other sane option on Digital Ocean – FreeBSD is pretty cool, but it isn’t Linux, so it doesn’t count … [ please address hate mail to /dev/null ]

I used to work at an ISP – it was my first job, back when I was still at University – and managing DNS on BIND was one of my roles. I have now forgotten absolutely everything that I ever knew about it. So this is going to be a little bit of a learning curve !

We begin0:

First off – are we up-to-date ? Just installed, so unlikely – a quick:

yum update -y

To bring it all up to speed and make sure that all the packages are at their latest and greatest.

Then, you need to install BIND, BIND tools and, for the sake of security BIND chroot1:

yum install bind bind-utils bind-chroot -y

The main configuration file for BIND is /etc/named.conf, so using your editor of choice ( which will, of course, be vi ) edit the options section to look like the following:

[ replacing aaa.bbb.ccc.ddd with the ip of your secondary if you have one, and removing it if you don’t ]

 options {
 listen-on-v6 port 53 { ::1; };
 directory "/var/named";
 dump-file "/var/named/data/cache_dump.db";
 statistics-file "/var/named/data/named_stats.txt";
 memstatistics-file "/var/named/data/named_mem_stats.txt";
 allow-query { any; };
 allow-transfer { localhost; aaa.bbb.ccc.ddd; };
 recursion no;

 dnssec-enable yes;
 dnssec-validation yes;
 dnssec-lookaside auto;

 /* Path to ISC DLV key */
 bindkeys-file "/etc/named.iscdlv.key";

 managed-keys-directory "/var/named/dynamic";
 
 pid-file "/run/named/named.pid";
 session-keyfile "/run/named/session.key";
};

One of the important things here is that – if you are running an authoritative DNS server for your domain(s) – you _turn off recursion_ – this prevents your server being made part of a Distributed Denial of Service (DDoS) attack.2

N.B. Watch your “;”s – BIND is painfully picky about syntax !

Once you’ve got this part configured, it is time to start adding domains !

Further down in the named.conf file are a list of all the “zones” that your nameserver will know about.

zone "." IN {
 type hint;
 file "named.ca";
};

zone "security-intelligence.uk" IN {
 type master;
 file "security-intelligence.uk.zone";
 allow-update { none; };
};

The first zone is the default for the server; the second has been added by me for the domain “security-intelligence.uk” – the syntax above makes this server the master for the zone ( the definitive record ), the file is where the actual information about the zone is held, and the allow-update setting controls which machines are allowed to make dynamic updates to the DNS entries for this domain – for use in DHCP scenarios. You can repeat this as many times as you like ( in my case it will be 82 times by the time I’m finished ! Although I suspect that a script may well come into play to take the downloadable CSV file from 123reg and do the majority of this for me – something like the sketch below !3 )
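A minimal sketch of such a script – assuming, and this is a guess on my part until I’ve seen the real export format, that the CSV boils down to “domain,ip” pairs:

#!/bin/bash
# append a master zone stanza to named.conf for every domain in the CSV
# ( the matching zone files would be generated in much the same way, using $ip )
while IFS=, read -r domain ip; do
 cat >> /etc/named.conf <<EOF
zone "$domain" IN {
 type master;
 file "$domain.zone";
 allow-update { none; };
};
EOF
done < domains.csv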

At this point we move on to create the associated zone file … Ok, so again, using vi your editor of choice, create the file that you named in the zone definition above in /var/named/ e.g.

vi /var/named/security-intelligence.uk.zone

And then populate it 🙂

$TTL 86400
@ IN SOA ns1.security-intelligence.uk. root.security-intelligence.uk. (
     2015051401 ;Serial
     3600 ;Refresh
     1800 ;Retry
     604800 ;Expire
     86400 ;Minimum TTL
)
; Specify our two nameservers
                IN     NS     ns1.security-intelligence.uk.
                IN     NS     ns2.security-intelligence.uk.

; Resolve nameserver hostnames to IP, replace with your two DNS server IP addresses.
ns1             IN     A      www.xxx.yyy.zzz
ns2             IN     A      aaa.bbb.ccc.ddd

; Define hostname -> IP pairs which you wish to resolve
@               IN     A      qqq.rrr.sss.ttt
www             IN     A      qqq.rrr.sss.ttt

I’m not going to go through this in detail at the moment – will come back to that later, but there are a few things here that you need to consider:

(1) The Serial : this needs to be changed each time you make an update to the record, incrementing with each modification. This is a point where a big-endian date format makes sense, as yyyymmdd will always increment. If you are making changes more than once a day then append an additional couple of digits so that you can run through 99 changes before requiring a new date – the serial above, 2015051401, is 14 May 2015, change 01.

(2) Change the bits that refer to my domain to refer to yours …

(3) Change the IP address “www.xxx.yyy.zzz” to your main DNS server, “aaa.bbb.ccc.ddd” to your secondary ( if you have one – remove the second nameserver from the list if you are only doing one ) and “qqq.rrr.sss.ttt” to whatever you want your domain records to point at. In this case they both point at my webserver, so a URL of “security-intelligence.uk” or “www.security-intelligence.uk” will both go to the website.

Once that’s all done for your domain, you are actually good to go. Kick off BIND by entering the following command:

service named restart

If you get an error at this point ( like I did :-/ ) then:

systemctl status named.service

May well point you in the direction of your missing “;” !

Assuming that, unlike me, you can get it right – you should now have your primary/master nameserver up and running.

Give it a quick test:

nslookup - www.xxx.yyy.zzz

This will put you into nslookup’s interactive mode, querying your server at the IP “www.xxx.yyy.zzz” ( substitute your server’s actual IP here … ). Enter one of the domain names that you are serving – in my case “www.security-intelligence.uk” – and you should see back the response with the correct IP address as specified in your zone file.
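( dig will do the same job non-interactively, if you prefer: )

dig @www.xxx.yyy.zzz www.security-intelligence.uk A +short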

The other thing that you’ll want to be doing is setting up BIND to start on each reboot, so a quick:

chkconfig named on

Will sort this out for you.

Well Done !

Part 2 along shortly detailing the configuration of the secondary …

References:

https://www.digitalocean.com/community/tutorials/how-to-install-the-bind-dns-server-on-centos-6


0. All commands here need to be run as root … so either get a root prompt or sudo your way through them …

1. chroot – changed root – running in a limited environment so that, if compromised, access to the system is limited. May write more on that later !

2. https://blogs.akamai.com/2013/06/dns-reflection-defense.html

3. Which if it does, I’ll post more about later !


Building a Linux based Digital Forensic workflow – Introduction

For the whole time that I’ve been doing Digital Forensics, I’ve been using Windows for it. This seriously irks me ! I’ve been in love with open source / free software since I installed my first Linux box at University. The original reason for the install was to avoid having to walk to the CS/AI labs in winter, in Edinburgh. I like being warm and dry as much as the next person – something that doesn’t happen often outside in Scotland in Winter. Linux emulated the SunOS / IRIX environment well enough that I could carry out my C / Prolog work without hypothermia.

Since then, I’ve always had _at least_ one Linux machine running at any one point in time – but since I stopped being a UNIX SysAdmin and started being a Security / Forensics Consultant, usually not as my main machine. I tried for a while to assuage my guilt by using Macs – well documented below – ‘cos at least they are “UNIX” machines when running OS X. Windows though has been an ever present thorn in my side, firstly for the running of proprietary forensic tools ( Oxygen, XWays Forensics & other odds and ends ), secondly for the running of games ( I don’t play many, but enough … ) and finally for the suite that is Office – something that has been required day-in-day-out for far, far too long …

Until now, I haven’t actually _tried_ to get rid of it though – having enough bits of hardware around to run Windows, MacOS and Linux both physically and in virtualised environments has meant that I haven’t needed to. The gnawing feeling that this is wrong has been exacerbated by tuning into a number of Linux podcasts ( I recommend Jupiter Broadcasting’s Linux Action Show and Linux Unplugged ), which have drawn to my attention that perhaps Linux is now “desktop ready”. And now that Steam ( at least in theory ) works on Linux for some games in my library ( Bioshock Infinite ), there really is no excuse any longer.

This is it, I’m biting the bullet and removing Microsoft and Apple from my day-to-day workflow – for _everything_ forensics, security, documents, e-mails, IM/VoIP, games, calendar, phone synchronisation (but not phones themselves – I am aware of the Ubuntu phone and may make the switch at some point, but for now my iPhone remains) etc. etc. etc.

I think that there are some things that won’t be straightforward – I’ll admit that up front – but I sincerely hope that the Open Source Eco-System has solutions to all problems, and I’m not unwilling to dust off the few coding skills that I have in order to get to the end goal.

More to follow as this progresses …


Raspberry Pi Toybox – The bits …


Well, it has all arrived ( Thank You Amazon ! ) and so here, without further ado, are the components:

I haven’t photographed them properly yet – [ I haven’t assembled them properly yet ! ] – but this is a rough look:*


The only things that I’m using other than the above are:

  • A Laptop1 with an SD Card Reader ( LINDY 46-in-1 PCMCIA Card Reader )
  • A keyboard and mouse (USB)
  • A monitor with an HDMI input
  • Elgato Game Capture HD ( See here for more information on this )
  • An 8GB Thinking Security USB Memory Stick
  • A 16GB SD card of one sort or another that I had lying around …

I’m planning on using the Fedora 17 Remix – at least to start with – it shouldn’t be a problem to obtain / compile pretty much anything to run on it ( famous last words, those ! ). So it seems like a reasonable way forward.

I’ve been a long time RedHat / Fedora fan – was my first Linux back in the day ( when RedHat was still free … I don’t recall, but probably RedHat 2.0 ) – I had it installed on my Pentium at University and used it, with a 14400 modem, to avoid the Edinburgh weather instead of having to go to the AI and CS labs for assignments … Sigh … The good old days …

Getting it onto the card is pretty straightforward: once you have your uncompressed image, use Win32DiskImager to write it to the card.2 This worked just fine, and it booted up beautifully.
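The equivalent from a Linux box is dd – a sketch, with the image name as a placeholder; triple-check the device name first, as dd will cheerfully overwrite the wrong disk:

# write the uncompressed image to the SD card ( /dev/sdX here ) and flush it
dd if=fedora-remix.img of=/dev/sdX bs=4M
sync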

For screenshots & video of the Pi, I’m using the Elgato Game Capture HD ( see above ) – this works brilliantly, it has a USB connection to my laptop, an HDMI from the Pi and an HDMI to the monitor. It introduces no lag on the monitor side, but quite neatly captures – in full HD – the image on the way through. It’s a very neat way of getting screenshots off the Pi, which otherwise would prove a little troublesome. I’ve attached the video of the first boot ( and setup configuration ) below – more information and details will follow in due course !


*. The astute and keen eyed amongst you may have noticed that in this picture the two USB WiFi devices aren’t showing – that’s because they are currently in my Ubuntu PenTest laptop running aircrack-ng as a proof of concept for this project …

1. We’ve had some laptop issues at home – my other half’s MacBook Pro croaked – and seeing as I have an issued laptop from my current client, and she doesn’t, she’s taken my MacBook Pro with her SSD. I’ve spent the last few weeks turning an old Lenovo T61 into a usable computer again. First off – out with the old spinny platters and in with an SSD for the primary HD. Doubled the RAM again ( past the quoted manufacturer maximum ) to 8GB, got rid of the CD-RW drive ( never used it anyway ) and replaced it with a 750GB hybrid disk to hold my VM images. Oh, and missing my screen real estate from my 17″ MBP, I also acquired a portable Lenovo second screen – I really don’t know why I’ve not seen these around more – they are brilliant ! I’m not sure that I couldn’t have bought another laptop for the cost of all the upgrades, but it was fun to do, and there is something quite stylish about the older Lenovos – that IBM feel still, I think !3

2. Be prepared, this is a definite “cup of tea” part of the process. In my case unload and load the dish washer, make and drink cup of tea, have chat with Brother-in-Law on phone, get high score on Temple Run 2 and, finally, just to be sure, go and get the kids from school. But hey, it finished ! ( In all seriousness, I was getting about 1MB per second for 3GB – that’s about 50 minutes )

3. Slight update on the laptop front, picked up a sale Acer Aspire i3, 6GB RAM, 500GB HD which is currently running Ubuntu. Neat little bit of kit … Dirt cheap too !


Five free ways to improve your security


Peer Review (Photo credit: AJC1)

We’re in recession, lest we forget – it isn’t like the press is going to let it slip from our minds – so money in a tight field is getting tighter. However, even for large businesses improving security need not cost the earth, or indeed anything at all ( apart from some time, and we must recall that time is equal to money ). To that end, I thought that I’d put down five very cost-effective and pragmatic ways to significantly improve your security.

1. Patching

Certainly at a desktop or server OS level, patches are mostly available for free. ( If you have devices, operating systems or applications that require a maintenance contract for patch updates, this isn’t quite free – however let’s, for the time being, assume that this cost is covered off already. ) Being patched up to date ensures that, with the exception of those pesky “zero-day” problems, your system is protected against known vulnerabilities. I’ve been to many, many organisations where patching is so out of date that the measure is in years – that’s seriously wrong. The excuse is often “our application is so unstable we can’t” – let us think carefully about this statement and consider, under these circumstances, what we should do about it … If, and only if, this is true and there is nothing that you can do to get the application maintained, then it can remain as is – however the device or server should be isolated behind other mitigations. ( So much so that if I am scanning your network in a vulnerability or penetration test, I don’t want to be able to see the patch level. )

2. Review your Firewall Rules

When was the last time you reviewed your firewall rules ? You’ve added some recently, I’m willing to bet, but have you purged old entries ? Do you have a process for deleting rules when they are no longer needed ? Each “allow” rule is a doorway into your network – if it isn’t needed, lock the door. Incidentally, it is wise to pre-empt the next point here: is there supporting documentation surrounding your firewall ruleset ? At a minimum, you need to know what the rule is for in English ( e.g. “allow port 80 tcp to 123.234.123.234 from 123.235.0.0” doesn’t tell me anything; “http website access to the stock server from the warehouse subnet” does ) and who owns it ( John Smith from Warehouse Control ). That way, a review involves going through the list, calling John and asking him if he still needs that rule. ( On a Linux firewall, the sketch below is a reasonable place to start. )
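Listing every rule along with its packet counters makes the dead wood stand out – rules that have counted nothing for months are prime candidates for the chop:

# list all rules, numbered, with packet / byte counters
iptables -L -n -v --line-numbers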

3. Documentation

Review your docs – dry run through processes and procedures – do they still work ? Update them if not. Are there any documents that are clearly missing ? Write them. Review your policies – you are of course doing this annually anyway, aren’t you – but IT moves faster than on a yearly basis, and I’m pretty sure that a mid-term review wouldn’t do you any harm. Issue errata if you don’t want to actively change the policy at this stage, but keep the changes to hand for the updates and it will save time later. Check that your supporting documentation is up-to-date and relevant – such as your firewall rules above – and if it isn’t in English, make a translation. You might know what it means, however if you get hit by the proverbial bus ( or get an offer you can’t refuse ) then your successor will need to figure it out – the more uncertainty there is in that time, the higher the risk of an incident. If you want an incentive: a public breach that gets blamed on you after you’ve left ( “My predecessor left such a mess it was impossible to manage” ) might haunt you for a long time. It never ceases to amaze me how small this industry actually is.

4. Cull dead accounts

Like old firewall rules, old, unused accounts are opportunities for an external attacker. Hopefully you have a policy in place for removing accounts when an employee leaves, but it is still well worth going through and auditing. Look for test accounts, administrator accounts, contractor or supplier accounts and system accounts that wouldn’t be identified by a leavers process, and may well not have the same lockout or expiry controls. At the same time, have a quick check to make sure that all accounts have the correct settings – there are many tools and scripts freely available on the net for walking AD or other directories to look for specific settings.

5. Educate a bit

I’m not talking about a huge CBT on security here – that’s hardly free. However, writing and sending an e-mail to all staff is. Give some thought to what your major concerns and issues are, write a positive statement of ways to manage these risks ( one per e-mail – send a few ) and get it out there. Creating awareness, putting ideas into the heads of staff and giving them details of whom to contact with concerns or questions is going to reap long-term benefits. This is probably the largest return on investment available – proactive staff will head off problems you have yet to conceive, and, given a voice, they’ll give you ideas and suggestions that will not only improve security, but could well make your business more profitable overall.

These are just five simple suggestions – you could extrapolate a little I’m sure to find a few other things that won’t cost a thing, but will improve your security ( here’s a clue – if you start with the word “review” or “audit” and follow with things like “running services”, “configurations” or “file/folder/group permissions” you’ll probably come up with another few ). It’s an interesting time to be in Security – budgets are down, but threats are up – pro-active low-cost work could be the difference between success and failure – these things really should be part of a security routine anyway – but we are so often firefighting or implementing the next new thing that we don’t get much of a chance – this breathing space might actually be what the doctor ordered …


Ports to Promisc Linux

It may be that you need to configure your network ports to listen in promiscuous mode – packet sniffing, IDS etc. Quick and easy configuration on Linux is available through /etc/network/interfaces ( the Debian / Ubuntu convention ), and the addition of the following lines will do it, assuming your interface is eth2:

# bring the interface up with no IP address ...
auto eth2
iface eth2 inet manual
up ifconfig $IFACE 0.0.0.0 up
# ... and toggle promiscuous mode with the interface state
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down
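Once the interface is up, a quick check that the flag actually stuck:

# PROMISC should appear in the interface flags
ip link show eth2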

Just a quick tip 😉


SSh Tunnelling for fun and profit …

Firewalls are good – firewalls that are outside of your control aren’t. I’ve been working with a client to install a network monitoring device within their network – unfortunately they have no sensible way of giving me access to it through the firewall – no available routable IPs, no port forwarding, nothing useful whatsoever. This has somewhat cramped my style – making it a pain to get to the device in any way other than being in their offices. Well, I had to be there for a few days anyway – but I finally got round to implementing the solution to the problem today. I’ve used SSh tunnels for over 15 years now, originally between university Unix boxes and Linux servers at the ISP that I worked for part-time, so that I could do things all round ( Uni work in the office, office work from Uni … both from home via dial-up to work … nothing from the student union because mobile computing hadn’t been invented & the beer was cheap … ) – and every so often I end up revisiting them to either (a) bypass other people’s security controls or (b) tunnel unencrypted protocols over a secure channel. The really nice thing about SSh tunnelling is that it is pretty platform agnostic – PuTTY & Cygwin on Windows, MacOS X, Linux, UNIX and even Android – all have support for it one way or another.

I have always admired the programmer’s virtues – laziness, impatience and hubris – despite not being much of a programmer myself; I feel that they should apply to all who work in IT. And in the spirit of the first, on this occasion, rather than reading the man pages and trying to recall how it all hangs together, I went to the ultimate lazy resource ( Google ) and found this script here:

#!/bin/sh

# $REMOTE_HOST is the name of the remote system
REMOTE_HOST=my.home.system

# $REMOTE_PORT is the remote port number that will be used to tunnel
# back to this system
REMOTE_PORT=5000

# $COMMAND is the command used to create the reverse ssh tunnel
COMMAND="ssh -q -N -R $REMOTE_PORT:localhost:22 $REMOTE_HOST"

# Is the tunnel up? Perform two tests:

# 1. Check for relevant process ($COMMAND)
pgrep -f -x "$COMMAND" > /dev/null 2>&1 || $COMMAND

# 2. Test tunnel by looking at "netstat" output on $REMOTE_HOST
ssh $REMOTE_HOST netstat -an | egrep "tcp.*:$REMOTE_PORT.*LISTEN" \
   > /dev/null 2>&1
if [ $? -ne 0 ] ; then
   pkill -f -x "$COMMAND"
   $COMMAND
fi

This, coupled with a cron job to run it every five minutes and shared keys, means that my tunnel now remains open on my server, allowing me to get in remotely, fiddle with things, move files etc. etc. etc.
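The cron entry itself is nothing fancy – a sketch, assuming the script was saved as /usr/local/bin/tunnel.sh, with the username a placeholder:

# run the keep-alive check every five minutes
*/5 * * * * /usr/local/bin/tunnel.sh

Then, from my.home.system, it’s just a matter of coming back down the tunnel:

ssh -p 5000 user@localhost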

Ironically, though, rather than making my life easier this now means that I can worry about what it is doing at 3am _and find out_ !

