I'd like to begin by apologising if the title makes this sound at all interesting - it drove me completely fucking crazy for most of last week. The problem was a packet capture probe which would stream its data elsewhere, and I didn't have the luxury of caching the gigabytes of data it was putting out every minute. I first tried NetFlow with ntop (having had absolutely no experience with flow analysers), and this gave me a good start, but there was pretty much nothing I could do to manipulate the data apart from a few hall-of-fame style charts.
The next step was to try Snort with some rules based on IP ranges (a step ahead of ntop in NetFlow mode), running the output through Snortalog to make it a bit easier to view - but Snort doesn't take data directly from a TCP port. I tried dumping the stream from netcat into a named pipe, but Snort won't read "special" files either... so the next option was to start playing around with tap interfaces.
The tap and tun interfaces are virtual NICs which you can, in theory, send data directly to. I had no luck getting this to work - creating tap0 didn't give me a /dev/tap0 file to pipe to, so I ended up back at square one... almost. The final piece was using tcpreplay to replay from the named pipe onto tap0, with Snort attached to the interface. It worked, but because the replay wasn't synchronised with the capture, Snort ended up missing half the packets while the system was busy piping data to this virtual interface that nobody else could read from...
And in the end? We all lived happily ever after. I found the tcpdump filter I wanted, set up tshark to read from the named pipe, and it's all working - no thanks to the almost unusable tun/tap interfaces.
Sunday, May 15, 2011
Monday, May 9, 2011
Two part network authentication
We've just changed to using two-factor authentication for Google at work, and it seems to do the job well - unobtrusive, unless you lose your phone. The idea is simple: retain your current password, but add a second, one-time password generated by an app on your phone. When you first set it up, you either scan a barcode off the screen (if you're lucky enough to have an iPhone 3GS or higher, with a camera capable of reading it) or paste a massive shared secret into the phone by hand. This acts as the seed for the code generator, which combines it with the current time, rounded to a short fixed window, to produce one-time passwords that can only be predicted if both sides have synchronised clocks and share the same secret.
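The scheme described above is essentially TOTP (RFC 6238, which is what the Google Authenticator app implements - with 30-second windows rather than whole minutes). A minimal sketch in Python, using a made-up secret:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int, digits: int = 6) -> str:
    """One-time password for a given time window (RFC 4226 truncation)."""
    counter = struct.pack(">Q", timestep)            # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # "dynamic truncation"
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both sides derive the same code as long as their clocks agree and they
# share the secret (this one is just the RFC test key, not a real seed)
print(totp(b"12345678901234567890", int(time.time()) // 30))
```

The server simply computes the same function (usually allowing a window or two of clock drift) and compares.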
This is cool in itself, but after a conversation with a colleague I thought it would be fun to extend the idea and combine it with the concept of port knocking.
Port knocking, for those who don't know, is at worst another layer of security through obscurity, and at best another channel for confirming knowledge of a shared secret. A normal firewall tries to make life difficult for potential attackers by detecting when they scan for open ports (which correspond to network services), and neither confirming nor denying whether any given port is open. Port knocking goes further, making all ports appear closed unless the IP attempting to connect has recently queried a list of ports in the correct order (a sort of secret knock, if you like).
A traditional port knock is a predictable sequence, which can be easily inspected by routers along the way. To add a further layer of security, you can set the TCP sequence number to a hash of the packet combined with a shared secret - ensuring that a port knock from one IP can't be replayed later from another.
But what if we want to make the sequence itself unpredictable? If we restrict ourselves to just 256 ports and make our knock sequence 16 ports long, we can convert the output of a cryptographic hash into a sequence of ports to query. Sharing a secret in advance and salting it with the current time, to the nearest minute, gives us per-session port knocks. And the icing on the cake? Add the client's IP as a second salt, allowing the client to perform the knock in plain sight and then be granted access to a totally hidden port.
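A sketch of that derivation, assuming SHA-1 and a salt layout of my own choosing (secret, then minute, then client IP - none of this is from a real implementation):

```python
import hashlib
import time

PORT_WINDOW_BASE = 10000  # an arbitrary 256-port window: 10000-10255

def knock_sequence(secret: bytes, minute: int, client_ip: str):
    """Derive a 16-port knock from SHA-1(secret | minute | IP).

    Each of the first 16 digest bytes selects one of 256 ports, so the
    sequence is unpredictable without the shared secret, changes every
    minute, and is bound to the knocking IP.
    """
    salt = f"{minute}|{client_ip}".encode()
    digest = hashlib.sha1(secret + salt).digest()
    return [PORT_WINDOW_BASE + b for b in digest[:16]]

# The client knocks these ports in order; the server computes the same list
# and opens the hidden port only after seeing the full sequence from that IP.
ports = knock_sequence(b"shared secret", int(time.time()) // 60, "192.0.2.17")
print(ports)
```

An eavesdropper who records today's knock learns nothing useful: next minute, or from another IP, the sequence is completely different.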
Or... we could just use IPsec AH.
Saturday, April 9, 2011
Server upgrade: day 2
After deleting a few packages that had been installed from the Ubuntu stream and were causing major issues, the upgrade somewhat took care of itself. tftpd-hpa isn't incredibly happy at the moment, but I'll sort that out next time I feel the need to PXE boot anything (there may be a tutorial on exactly how I got BartPE to PXE boot, which would just be a mix of all the information already floating around on the internets).
I already had a working config for WIDE DHCPv6, so migrating to the ISC version wasn't going to be too difficult - fortunately I found a sweet tutorial which made life easier. Once I'd got my new config set up, all that needed to change was /etc/init.d/dhcp, with a couple of extra lines (with $DHCPD6PID and $INTERFACES defined appropriately near the top of the script):
echo -n "Starting DHCPv6 server: "
start-stop-daemon --start --pidfile $DHCPD6PID \
    --exec /usr/sbin/dhcpd -- -q -6 -cf /etc/dhcp/dhcpd6.conf $INTERFACES
sleep 2
if [ -f "$DHCPD6PID" ] && ps h `cat "$DHCPD6PID"` >/dev/null; then
    echo "dhcpd6."
else
    echo "dhcpd6 failed to start - check syslog for diagnostics."
fi
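For reference, a minimal dhcpd6.conf of the sort ISC's dhcpd expects looks something like this (2001:db8:: is a documentation prefix - the addresses here are placeholders, not my actual config):

```
default-lease-time 600;
max-lease-time 7200;

subnet6 2001:db8:0:1::/64 {
    # Addresses handed out to clients on the LAN
    range6 2001:db8:0:1::1000 2001:db8:0:1::2000;
    # DNS server to advertise to clients
    option dhcp6.name-servers 2001:db8:0:1::1;
}
```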
It hasn't crashed yet, and I'm hoping that it will be a bit more resilient than the WIDE package. I'll let you know how my testing goes with all my different operating systems here!
Friday, April 8, 2011
Nagios, DHCPv6, and migrating Debian servers
Migrating debian
I convinced myself I'd keep this updated after I started work, but that assumed I'd still have time to play with my server alongside having a real job and trying to maintain some sort of social life. I did, however, end up a little bit tipsy last Thursday, which prompted me to start migrating my server from its original 8GB home to the 250GB drive that has sat there for the last two years under the name /usr/bigdisk, for obvious reasons.
My first hurdle was underestimating how long it takes to copy 8GB over IDE on an old Pentium 4 (on a side note, describing a Pentium 4 as old still makes me feel like a bit of a dinosaur, given I vividly remember my first 586). After unscrewing the case and plugging in the CD-ROM without electrocuting myself, and booting up BackTrack (version 2, because 4 is on a DVD, and 3 was a bit shit), I started what I thought was a sensible cp /mnt/sda1 /mnt/sdc2
Fast forward an hour and a half - ignoring the part where I converted the original partition to ext2 (by deleting the journal) and shrunk it, which took about half an hour by itself (and is already adequately documented everywhere else on the internet), then created the new partition (ten seconds' work) - and I'd installed GRUB (after a chroot to /mnt/sdc2, because BT2 only ships with LILO) and was ready for a reboot. Only to find that half of my stuff wouldn't work, because EVERYTHING was 755 and owned by root.
This was about the time I decided it was good to go to bed, since I had to get up for work in less than five hours. The next day I did the whole procedure again, this time with the -a switch to cp (archive mode: recurses, and preserves ownership, permissions and timestamps - don't leave home without it), made sure to set up fstab, converted the partitions back to ext3, and pointed GRUB at the correct partitions. Finally, everything booted, and I was good to go.
Why did I migrate? The drive was only 95% full, and I thought maybe that was why my wide-dhcpv6-server was crashing... but this wasn't the case in the end.
DHCPv6
I'd had no problems with wide-dhcpv6-server until recently, so I figured the nearly-full hard drive was why it seemed to crash for no reason every second day. That wasn't the case, however, and a bit of research showed that the WIDE DHCPv6 project stopped about three years ago. It turns out the official ISC DHCP server now handles both IPv4 and IPv6 (although they need to run as separate instances), but to install it I needed to upgrade to Squeeze - so once all 2.4GB of packages come down, I'll let you know how this goes for me.
Nagios
We use this at work to keep an eye on the network, and after writing a plugin to report light levels on SFPs, I decided it made sense to install it at home. It helped me track the DHCPv6 problem, but also made it easier to see the moment anything went wrong. It's only a simple setup at the moment - periodic pings to Google through my IPv4 and IPv6 gateways, plus checks on the HTTP and DHCP servers - but already I feel like I have a much closer eye on the health of my server.
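The home checks look roughly like the following Nagios object definitions (host and command names are from my own setup and are illustrative - check_ping6 here is assumed to be a locally defined command wrapping check_ping -6):

```
define service {
    use                     generic-service
    host_name               gateway
    service_description     IPv4 ping to Google
    check_command           check_ping!100.0,20%!500.0,60%
}

define service {
    use                     generic-service
    host_name               gateway
    service_description     IPv6 ping to Google
    check_command           check_ping6!100.0,20%!500.0,60%
}
```

The two arguments after the command are the warning and critical thresholds: round-trip time in milliseconds and packet loss percentage.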
Saturday, February 26, 2011
PhotoMagnetic
This is a project I've been wanting to do for a while, and I finally got around to putting the code together yesterday.
It's based on a project called JSteg, the idea of which is simply to alter the JPEG data after the lossy compression phase has completed.
Microsoft has a fairly good explanation of how JPEG works, and it can be simplified into three main phases:
- Downsampling
- Transforming and quantising
- Lossless compression
Downsampling
The first phase represents the image in terms of brightness and chrominance - brightness measured on a one-dimensional scale, and chrominance on a two-dimensional scale with one axis running red-green and the other blue-yellow. Experiments have shown that the brightness channel is what affects our perception of images the most - the JSteg page has an example of how far the chrominance channels can be downsampled before noticeable changes start to appear.
Transforming and quantising
The data is then subjected to a discrete cosine transform, which effectively replaces the pixel values with frequency values. These are then quantised (rounding off the fine detail, if you like) and rearranged so that the resulting 0 values end up next to each other. You can imagine what the next step is now.
Lossless encoding
The adjacent zeroes are perfect fodder for run-length encoding, followed by Huffman coding. The resulting space savings come not only from the direct reduction in space used by the chrominance values, but also from a simplification of the data that lets traditional compression do its job.
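The rearrangement is the famous zigzag scan, and a toy Python version of it plus the run-length stage looks like this (real JPEG applies this per 8x8 block to the AC coefficients and then Huffman-codes the run/value pairs - this just shows the shape of the idea):

```python
def zigzag(block):
    """Read an 8x8 block in JPEG zigzag order, so trailing zeros cluster."""
    coords = [(i, j) for i in range(8) for j in range(8)]
    # Walk the anti-diagonals, alternating direction on each one
    coords.sort(key=lambda p: (p[0] + p[1],
                               p[0] if (p[0] + p[1]) % 2 else -p[0]))
    return [block[i][j] for i, j in coords]

def run_length(coeffs):
    """Collapse runs of zeros into (run, value) pairs; (0, 0) marks end-of-block."""
    out, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            out.append((run, c))
            run = 0
    out.append((0, 0))  # everything after the last nonzero value was zero
    return out

# A typical quantised block: one DC value, a couple of low-frequency ACs, zeros elsewhere
block = [[0] * 8 for _ in range(8)]
block[0][0], block[0][1], block[1][0] = 50, -3, 2
print(run_length(zigzag(block)))   # [(0, 50), (0, -3), (0, 2), (0, 0)]
```

Sixty-one trailing zeros collapse into a single end-of-block marker, which is exactly the simplification that makes the lossless stage so effective.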
Hiding a 160-bit hash
This is actually the easiest part. All the complicated work is done in the lossy stage, so all that needs to be done to hide our hash is to alter the file before it is losslessly compressed. The basic operation is as follows:
- Decompression
- Altering the coefficients
- Compression
You can download the project from here if you'd like to try it out for yourself.
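A toy version of the coefficient-altering step, with a plain Python list standing in for the decoded DCT coefficients (PhotoMagnetic/JSteg do this inside the JPEG codec itself - the functions and data here are purely illustrative):

```python
import hashlib

def embed_bits(coeffs, payload: bytes):
    """Hide payload bits in the LSBs of usable coefficients, JSteg-style.

    Coefficients equal to 0 or 1 are skipped: flipping them would create or
    destroy zero runs and noticeably change the compressed output.
    """
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    out = list(coeffs)
    slots = [i for i, c in enumerate(out) if c not in (0, 1)]
    if len(slots) < len(bits):
        raise ValueError("not enough usable coefficients for the payload")
    for i, bit in zip(slots, bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_bits(coeffs, n_bytes: int) -> bytes:
    """Read the hidden bytes back out of the coefficient LSBs."""
    bits = [c & 1 for c in coeffs if c not in (0, 1)][:n_bytes * 8]
    return bytes(sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
                 for k in range(0, len(bits), 8))

# A 160-bit SHA-1 digest hidden in some stand-in coefficient values
digest = hashlib.sha1(b"message to prove possession of").digest()
fake_coeffs = [((-1) ** i) * (i % 50 + 2) for i in range(400)]
stego = embed_bits(fake_coeffs, digest)
assert extract_bits(stego, 20) == digest
```

Because a coefficient that was 0 or 1 can never become one after embedding (only the LSB changes), the extractor sees exactly the same set of usable slots as the embedder.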
Monday, February 14, 2011
Ściągnij z iply
A colleague recommended a program called "IPLA" to me so I could watch Polish TV series. Naturally I liked it, but I don't like watching online - I prefer to download first and watch later, so I don't use up my internet allowance.
So I decided to write my own program that would let me do just that. Luckily it was straightforward, because IPLA uses XML, so I could write some XSLT to help me along. In the end it worked, but handling UTF-8 from stdin on Windows is hard - I couldn't simply use fgets() or scanf(), and instead needed ReadConsoleW(), SetConsoleCP() and SetConsoleOutputCP().
You can download the ściągnij z iply program, so give it a try and let me know what you think!
Thursday, February 3, 2011
ASFDump: I'm sick of it
The plugin is dead in the water. XPCOM and NSPR lack documentation and working samples, and I've already spent enough time getting this project up and running. The command-line app works, but as for the plugin, somebody else will have to make that.
The project can be found here, feel free to let me know what you think of it