Is the privacy of your email a real concern?

So, users of Google Apps for Business, are you OK with Google reading your commercial email?

Last week’s revelation that users of Google’s email services should have no expectation of privacy caused quite a stir. Even when this was clarified as applying only to the privacy of emails sent to Google users, it was still a shock to some.

In case you missed all this, Google was submitting a Motion to Dismiss in response to a class action suit. The salient words are…

Just as a sender of a letter to a business colleague cannot be surprised that the recipient’s assistant opens the letter, people who use web-based email today cannot be surprised if their communications are processed by the recipient’s ECS provider in the course of delivery. Indeed, “a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties.” (my emphasis)

This was later clarified and justified on the basis that:

  • the text is not about users of Gmail, but rather people who send emails to users of Gmail (presumably including other Gmail users)
  • this is indeed what US law dictates.

For a fuller explanation of the issues, visit Naked Security.

My view is that if it prompts more businesses, and individuals, to pay attention to their information security, then it will have been a good thing.

Misconceptions about information security

I find that most people suffer from a number of misconceptions when it comes to the privacy of their data in the online world:

  1. They think that sending an email is like sending a letter: i.e. the contents are sealed. It isn’t, unless you have taken additional measures like encryption.
  2. They adopt the view that if you haven’t done anything wrong then you have nothing to hide.

Both statements are plainly not true if you are using Gmail for business purposes. Do you really want Google looking through your correspondence with clients, with accountants, or, perhaps most worrying, your legal representatives?

Of course there’s no suggestion that a human being is trawling through your emails. In fact it’s an automaton that is trying to profile you in order to target advertising more effectively.

In and of itself, this may be a good thing. After all, if we must be bombarded with adverts, then relevant ones are perhaps not as bad as random ads of no interest. I appreciate this is being a bit too forgiving; but after all, everybody’s got to make a living 🙂

Of greater concern is the potential for the automaton to get it wrong.

How it can all go wrong

Take a look at this Forbes article. It illustrates how Target used data collected from their website to predict that a teenage girl was pregnant and then used that knowledge to target[sic] her with maternity products. It may be an anecdote, but it shows how everything you do online can be aggregated and perhaps used against you.

The lesson

The lesson is to be more aware of the implications of living your life online. Whilst you can take measures to reduce your digital footprint, as Tom Henderson did, for most people this will be over the top. However, at least realise that the problem exists and take measures where you feel they are appropriate: e.g.

  • by installing something like OpenPGP and using it to encrypt and digitally sign sensitive email correspondence (see the sketch after this list);
  • or by installing TrueCrypt to create an encrypted virtual disk on your computer, or on your cloud storage service.
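As a flavour of the first option, here is a minimal GnuPG (an implementation of OpenPGP) session from the command line; the correspondent’s address and the filename are just placeholders:

gpg --gen-key                                              #create your own key pair (a one-off task)
gpg --import colleague.asc                                 #import your correspondent's public key
gpg --encrypt --sign -r colleague@example.com report.pdf   #produces an encrypted, signed report.pdf.gpg

In practice most people would use a mail-client plugin (Enigmail for Thunderbird, for example) rather than the raw commands, but the principle is the same.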

Can I help you?

If you’ve found any of this interesting, or if you disagree, let me know in the comments. And if I can advise you further on your specific issues, you can reach me the same way.

Using OpenIndiana as a file server

I’ve talked before about using ZFS as the basis for a file server that contains valuable information; like family photos and music collections. My first attempt at this was to use Ubuntu as the basis for the server and add the ubuntu-zfs package so that I could create and manage ZFS pools. The performance wasn’t great with this solution, so I’m taking another approach: using OpenIndiana to deliver a set of ZFS volumes to a separate installation of Debian (I’ve become disillusioned with Ubuntu of late. Basically, I don’t like the new UI).

Architecture

  • The base hardware is the same as before, except that I am adding 2 x 64GB Solid State Disks (SSDs) and a 16GB SSD
  • VMware vSphere is installed on the 16GB SSD
  • A new Host Bus Adapter (HBA), with the current 2 x 3TB disks attached, is passed through ESXi to the OpenIndiana VM
  • OpenIndiana is installed into the first VM and configured to use the 2 x 64GB SSDs as a ZFS mirror for its root pool
  • The existing ZFS pool is then imported into OpenIndiana and the appropriate filesystems exported as NFS mounts
  • A new ZFS filesystem is created to hold further VMs and exported back to ESXi as an NFS datastore (see the sketch after this list)
  • Debian is then installed into a new VM and used to serve the various NFS resources to the house clients.
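To give a flavour of the NFS step, the OpenIndiana side amounts to little more than the following. This is a sketch only; the pool name comes from my existing setup and the filesystem names are assumptions:

zpool import rdata                      #bring the existing pool into OpenIndiana
zfs create rdata/vmstore                #new filesystem to hold the additional VMs
zfs set sharenfs=on rdata/vmstore       #publish it so ESXi can mount it as an NFS datastore
zfs set sharenfs=on rdata/media         #likewise for the existing media filesystem
zfs get -r sharenfs rdata               #confirm what is being shared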

This may sound a bit complex, but it gives some significant advantages:

  • ALL filesystems (apart from the very small ESXi filesystem) are created on redundant ZFS volumes.
  • Client operating systems like Debian (and Windows Server 2012) can benefit from the speed and integrity of ZFS without being aware of it.
  • There’s no need to add non-standard additions into Debian for it to take advantage of ZFS.

Progress

So far, I have been playing with OpenIndiana to get used to it. I’ve replaced the Ubuntu/ubuntu-zfs combination with OpenIndiana to gain speed and more functionality.

This is only a first step. Once I have the SSDs and a replacement for the existing HBA, I’ll install vSphere and build up.

HOWTO upgrade an Ubuntu zpool that has its root filesystem on ZFS

My attempts and final success at upgrading the disk space on my media server

Why use ZFS

ZFS is, in my view, the best filesystem to use on a file server that is going to contain a large amount of data. This is a sweeping statement, but I think it is backed up by 1, 2 and 3.

Background

For the purposes of this HOWTO, I’ll assume the reader knows about vdevs, zpools and zfs filesystems. If not, take a look at this tutorial on the Oracle web site.

The media server at Chez Howell comprises a 64-bit Ubuntu 12.04 (Precise Pangolin) Linux distribution with the added ubuntu-zfs package. To get this far I followed the HOWTO on GitHub.

The initial build had 2 x 1TB drives because that’s what I had available from my previous media server, a Fujitsu Scaleo Windows Home Server. However, that was never going to be enough, so I knew that I would need to upgrade the disks.

One of the beauties of ZFS is the ease with which you can expand a zpool (in theory anyway). In my case, I have two zpools. “rpool” is built from one vdev, a 160GB SATA drive. “rdata” is built from 2 x 1TB drives to provide a 1TB mirror. rpool has the root filesystem on it and rdata has all the media files.
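For reference, the shape of the two pools is as if they had been created like this (the real root-on-ZFS setup follows the GitHub HOWTO above, and the disk names here are placeholders):

zpool create rpool *160GB-disk*                       #single-vdev pool for the root filesystem
zpool create rdata mirror *1TB-disk-1* *1TB-disk-2*   #two-way mirror for the media files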

In theory, to expand the size of rdata, all I needed to do was to shut down the system, replace one of the 1TB drives with a 3TB drive, power on and execute the command:

zpool replace rdata *old-drive-name* *replacement-drive-name*

This would break the mirror temporarily and then add the new drive. The system would then automatically “resilver” the mirror: i.e. copy all the data from the remaining old disk to the new disk and re-establish redundancy. I could then repeat the process for the other drive.

The magic occurs when I add the second drive. ZFS recognises that the zpool now has two larger drives and automatically expands it; the filesystems within it can then grow into the new capacity.
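A word of caution: depending on the ZFS build, the extra space may only appear once the pool’s autoexpand property is set (or the devices are expanded explicitly), so it is worth knowing about. A quick sketch, assuming the rdata pool above:

zpool set autoexpand=on rdata                    #let the pool grow when its disks get bigger
zpool online -e rdata *replacement-drive-name*   #or expand a specific device by hand
zpool list rdata                                 #the SIZE column should show the new capacity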

Note: in case you weren’t aware, you can have multiple filesystems on a zpool, each of which is limited in size only by the available capacity of the zpool itself. Unlike most filesystems, you don’t normally pre-define the size of a ZFS filesystem. You can set a quota on a filesystem if you need to, but it can be changed at any time. Neat huh?
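For example (the filesystem name below is hypothetical):

zfs create rdata/photos             #a new filesystem in the pool; no size is specified
zfs set quota=500G rdata/photos     #cap it at 500GB if you want to
zfs set quota=none rdata/photos     #and remove the cap again whenever you like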

Upgrade attempt one

Anyway, when I tried to do this, I hit a problem, caused by a lack of foresight on my part when I created the rdata zpool.

When you create a zpool, one of the parameters to the zpool create command is ashift. ashift defines, as a power of two, the sector size that ZFS assumes when writing to the vdevs. It defaults to the sector size reported by the vdev (i.e. the disk) and must be the same for all vdevs in the zpool.

When I created rdata, ashift defaulted to 0, which means “use the sector size reported by the vdevs”: i.e. 512 bytes, the physical and logical sector size of the Western Digital disks used. The new Seagate 3TB disks use a 4K physical sector size (i.e. ashift=12) and thus cannot be used as replacements in the rdata zpool.
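If you want to check what a pool is actually using, zdb can usually tell you (assuming the pool names above):

zdb -C rdata | grep ashift          #ashift: 9 means 512-byte sectors, ashift: 12 means 4K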

After some digging, I found this good explanation on the IBM website.

Upgrade attempt two

Thus I needed to change the approach. Rather than replacing the disks in the existing rdata zpool, I had to create a new zpool, copy the data across and then replace the zpools. The relevant info and instructions are:

# Power the system down, remove one element of rdata and install a new disk
# Power on
zpool create -o ashift=12 datapool *replacement-disk-1* #create a new temporary zpool called datapool
cd /
rsync -av /rdata/ /datapool #copy the contents of rdata into datapool (note the trailing slash)
zpool export rdata #export both zpools so the system forgets about them
zpool export datapool
zpool import datapool rdata #import the newly created zpool under a new name

At this point, there is a new rdata zpool with no redundancy. All I now needed to do was power off, replace the other disk and recreate the mirror. Unfortunately, I hit another problem.

When the system was rebooted, it came back with the old rdata, even though I had exported it and replaced it with the renamed datapool. I was stumped for ages until I asked a question over on AskUbuntu. Luckily the answer came back fairly quickly (thanks jonathon).

Prior to rebooting, I needed to execute the following to update the Ubuntu initial RAM filesystem (initramfs) and the GRUB configuration.

update-initramfs -c -k all #rebuild the initramfs for all installed kernels
update-grub #regenerate the GRUB configuration

After I did this, I was able to power off, replace the disk, power back on and execute:

zpool attach rdata *replacement-disk-1* *replacement-disk-2* #attach the second new disk to the first, turning the single-disk pool back into a mirror

This upgraded the zpool to a mirror using the two new disks and automatically resilvered it. Two hours later I had a redundant 3TB media pool.
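If you are impatient, the resilver can be watched as it runs:

zpool status rdata                  #shows resilver progress and an estimated completion time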

Result!

I’ve just migrated to Virgin Media


I’ve just moved from one ISP to Virgin Media. So far the experience has been pretty painless.

Two guys turned up at 8AM as promised and simply got on with their work after confirming where I wanted the telephone point and SuperHub to be located. Two hours later and they were done.

I did have to spend a bit of time re-configuring the SuperHub to fit in with the network configuration here, but as it’s actually a Netgear under the hood, that was pretty easy too. And the results?

www.speedtest.net shows that I’m getting pretty good performance right now.

speedtest.net results

I’ll keep an eye on it over the next few days; particularly at peak time.

Three disks to compile a program. Those were the days.


I’ve just been reading an article on one of my favourite software development blogs: Joel on Software.  It was a post on the importance of doing Daily Builds when you are developing software, but the point that jumped out was a reference back to the days when PC workstations were equipped with floppy disks.

Joel referred to the IBM Pascal compiler that came on three floppy disks, with each compiler pass on a separate floppy.

Blimey, I remember those days too. Compiling a non-trivial program took ages. I was developing programs that modelled real-world events like packet-switching networks, and these were certainly non-trivial!

Does your software development history go back to the 80s? Do you remember the agony and the ecstasy of developing software in those days?

At last – the missing Chrome addon appears


As everybody knows, I use a MacBook Air for most of my work and leisure. I love it, and I did love the built-in Safari browser until Chrome came along. The one feature I missed when moving from Safari to Chrome was the “Reader” feature.

The Reader feature let you re-display a web page with just the main content: i.e. sans all the trimmings that surround blog articles and the like.

Now, Evernote has produced an extension for Chrome that appears to deliver the same experience. Called Evernote Clearly, the extension delivers the same functionality as Reader did in Safari, plus you can clip the page straight into Evernote if you wish. As a long-time Evernote user, I can see this getting great use.

Try it out for yourself.