How Distributed Grid Computing Could Cut Costs and Help the Environment


The dream of distributed computing (or grid computing) is that it can cut both the costs of computing and carbon emissions. In this post, I aim to explain how it works.

Let us imagine a scenario where both Carl and Daniel have computers. Carl’s computer is twice as efficient – that is, it costs him half as much to do the same work. Let’s say it costs £1 in electricity for Carl to run a computer model, and £2 for Daniel. In total, it costs society £3 to run the computer model once for Carl and once for Daniel.

Cost to Carl: £1
Cost to Daniel: £2
Total Cost to Society: £3

With Distributed Computing

Now imagine the same scenario but with one addition: distributed computing. As it costs Carl less to run the model on his computer than it would cost Daniel, Daniel could pay Carl to run the model for him. Imagine that Daniel paid Carl £1.50. It only costs Carl £1 to run Daniel’s model for him, but he has gained £1.50 for his effort, giving him a profit of 50p. Daniel only spends £1.50 to have his model run, as opposed to the £2 it would have cost him to run the model himself.

Everybody benefits by saving money and the end result is the same: Carl and Daniel have both had their model run.

Cost to Carl: 50p (£1 to run his own model, subtract 50p profit from running Daniel’s model)
Cost to Daniel: £1.50 (He pays Carl £1.50 to run his model for him)
Total Cost to Society: £2
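The arithmetic can be written out as a quick sketch, using the figures straight from the scenario above:

```python
# Figures from the scenario: Carl's run costs £1, Daniel's costs £2,
# and Daniel pays Carl £1.50 to run his model for him.
cost_carl, cost_daniel, price = 1.00, 2.00, 1.50

# Without trading, each runs his own model.
total_without = cost_carl + cost_daniel   # £3 to society

# With trading, both models run on Carl's cheaper machine.
carl_profit = price - cost_carl           # 50p for Carl's effort
carl_net = cost_carl - carl_profit        # 50p net cost to Carl
daniel_net = price                        # £1.50 net cost to Daniel
total_with = carl_net + daniel_net        # £2 to society

print(f"Without: £{total_without:.2f}  With: £{total_with:.2f}")
```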

What assumptions have we made?


Firstly, we assumed there are no costs involved in the transaction itself. Imagine instead that it costs £2 for Daniel to send a copy of the computer model to Carl and then to receive the results – perfectly conceivable if Daniel had to print out instructions on how to use the model, FedEx them to Carl and wait several weeks for the results. In that case, the cost to Daniel of asking Carl to run the model would be £3 (£1 for Carl to run the model on his computer and £2 in transaction costs), and he might as well have run it himself. Real-world transaction costs include slow network connections and incompatibilities between different computer systems. So for distributed computing to work, we need fast, reliable network connections and software compatibility.
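The break-even condition can be sketched like this (the function name is mine; the figures are from the scenario):

```python
def trade_worthwhile(own_cost, partner_cost, price, transaction_cost):
    """Both sides gain only if the buyer's all-in cost beats running the
    model himself AND the price covers the seller's cost of running it."""
    buyer_gains = price + transaction_cost < own_cost
    seller_gains = price > partner_cost
    return buyer_gains and seller_gains

# Daniel (£2 to run it himself) pays Carl (£1) a price of £1.50:
print(trade_worthwhile(2.00, 1.00, 1.50, 0.00))  # frictionless: the trade pays off
print(trade_worthwhile(2.00, 1.00, 1.50, 2.00))  # £2 of FedEx-style overhead kills it
```

The trade only makes sense while the transaction cost stays below the £1 efficiency gap between the two machines.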

Secondly, we assumed that Daniel would happily allow Carl to run the model for him. Are there privacy implications, for example? Daniel must be confident that Carl can run the model without having a peek at the results – after all, there might be trade secrets in there. Similarly, Carl must be confident that Daniel isn’t sending him malicious software which could break his computer. For distributed computing to work, there must be a foolproof and hack-proof way for Carl and Daniel to trust each other to keep to their side of the bargain.


Thirdly, Daniel must actually be able to cut his costs. Let me explain. It’s possible that Daniel will have his computer on 24/7 anyway – that is, it’ll cost him £2 whether he runs the model or not. If he leaves his computer drawing full power but idle and still asks Carl to run the model for him, he essentially pays for the model to be run twice. My computer doesn’t dynamically underclock, so whether or not I’m using it, it eats up the same amount of energy. For distributed computing to work, our own computers must make much more efficient use of resources. We need thin-client computers with negligible running costs.
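The idle-power point can be made numerically. This is a sketch; `idle_draw_ratio` is my own illustrative parameter for how much power the machine draws while idle:

```python
def outsourcing_saves(own_run_cost, price_paid, idle_draw_ratio):
    """idle_draw_ratio: fraction of full power the machine draws while idle
    (0.0 = scales right down, 1.0 = full power even when idle).
    Outsourcing only saves money if the price paid is less than the cost
    actually avoided by letting the machine idle instead."""
    avoided_cost = own_run_cost * (1 - idle_draw_ratio)
    return price_paid < avoided_cost

# A machine that scales down when idle: paying Carl £1.50 beats a £2 run.
print(outsourcing_saves(2.00, 1.50, idle_draw_ratio=0.0))   # saves money
# A machine that draws full power even when idle: any payment is a pure loss.
print(outsourcing_saves(2.00, 1.50, idle_draw_ratio=1.0))   # does not
```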

The real world

Distributed computing hasn’t yet taken off on any large scale. The three conditions aren’t yet met:

  • We need fast, reliable network connections and software compatibility. This definitely doesn’t exist at the moment: I don’t trust my own network connection to be 99.9999% reliable. It’s OK for downloading files and sending e-mails, but it needs to be good enough for me to send entire computer programmes over the network in under a second. Additionally, software isn’t at the stage where it’s “write once, run anywhere”. We need standards, standards and standards.
  • There must be a foolproof and hack-proof way for Carl and Daniel to trust each other to keep to their side of the bargain. There is no way I would let anybody run a piece of software on my computer without checking it first, and if I had to pre-approve every single piece of software, that adds to the transaction costs discussed above. Virtual machines are one way around this issue, creating safe ways to isolate software and to track its progress. Still, I’m not sure there is a secure way to run software on a computer with the confidence that the owner of the computer can’t take a peek. And I’m not sure we’ll ever reach the point where people will happily allow third parties to run software on their computer with no possible way to find out what it’s doing.
  • We need thin-client computers with negligible running costs. I’ve already debunked this one: my computer uses exactly the same amount of power whether it’s active or idle. I don’t believe people will drop the idea of “a computer on every desk in every home” until they are confident the first two criteria have been met. Only then will they accept owning a thin-client computer.

It’s already being used…

Last year I worked at a company which employed distributed computing on a smaller scale. They had a small cluster (~20 computers) with identical hardware, each linked with Gigabit Ethernet. Software ran inside virtual machines and those virtual machines moved around between computers depending on the amount of spare capacity each one had.
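The placement logic presumably looked something like this toy sketch – the host names and load figures here are made up for illustration:

```python
def pick_host(loads):
    """Return the host with the most spare capacity.
    loads: host name -> current load (0.0 = idle, 1.0 = saturated)."""
    return min(loads, key=loads.get)

# A hypothetical snapshot of the cluster's load:
cluster = {"node01": 0.80, "node02": 0.35, "node03": 0.60}
print(pick_host(cluster))  # the least-loaded host receives the next VM
```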

They could employ distributed computing because, within their own system, they knew that:

  • They had a reliable intranet connection, and because all the computers were identical, software worked on every single one.
  • They only ran a limited number of programmes and all the computers were under their own control, so there were no trust issues.


Distributed and grid computing aren’t yet practical on a worldwide scale, but I think we’re making progress. Networks are becoming more reliable. Software platforms appear to be becoming more standardised. Virtual machines are coming of age. And our computers are becoming more environmentally conscious, adapting their resource usage to the amount of processing power required.

So there is the blueprint for how we can lower the costs of computing. You might be wondering what that’s got to do with the environment. Well, simply replace the £ sign with joules of energy. Just as free-market trade can lead to an efficient allocation of resources in the real world, the trade of computer time in a distributed grid of computers leads to a more efficient allocation of computing resources. And that lowers the energy consumption of computing and its environmental impact.

Ubuntu Redux

Since my posting yesterday, I’ve spent a bit of time configuring Ubuntu. I had to manually set my DNS server in Ubuntu because my D-Link DSL-G604T was resolving addresses all wrong (I don’t know why).
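For anyone in the same boat: setting the DNS server by hand on that era of Ubuntu meant editing /etc/resolv.conf. The address below is a placeholder – substitute your ISP’s DNS server:

```
# /etc/resolv.conf – the address here is a placeholder, not a real server
nameserver 192.168.0.1
```

Bear in mind that DHCP clients of the time would overwrite this file on each lease renewal, so the change may need repeating.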

I also installed X-Chat, which is a decent IRC client (xchat-gnome isn’t worth trying; it sucks), and I gave aMSN a go as Gaim was segfaulting. I got rid of aMSN and went back to Gaim (it seems to work now).

If you are setting up Ubuntu, Easy Ubuntu is really, really good. It’s four lines of commands which you simply paste into the terminal. It downloads some files and gives you an interface where you can install proprietary and restricted multimedia codecs (MP3, DVD), drivers for your graphics card (ATi, nVidia), browser plugins such as Flash and Java, extra repositories and Microsoft fonts.

Web pages looked a lot better after installing these fonts, and the system works much better overall. This program saved me tons of time: I’ve got a decent working system in far less time than it took me with previous distributions.

A few friends suggested Automatix which is a bit like Easy Ubuntu. Opinion is divided over Automatix – some people said it can harm your system or can be a security risk. I decided to go with Easy Ubuntu as it did everything I needed and it did it well.

With my extra codecs, I opened up RhythmBox and added my Windows "My Music" folder. It worked beautifully; not as pretty as Windows Media Player or as featureful as amaroK but it worked, and worked well.

On all of my previous Linux installs I’ve used KDE and most often used aRts which was a real pain. I was really glad that Ubuntu had a decent sound system out of the box which allowed multiple sounds to be played and it even supported my Multimedia Keyboard buttons out of the box!

Updating the system was also a lot easier than in previous distributions I’ve tried – I updated about 120 packages and it was a flawless update, a ton faster than Gentoo’s emerge, and it didn’t break anything. I also decided to try multiple desktops, which are really nice – much better than KDE’s implementation.

At the moment, I’m hooked. When it works, Ubuntu is a beautiful operating system which looks fantastic and is usable. The installation wasn’t as smooth as it could have been, but once those issues were ironed out it was fantastic. By far my favourite Linux distribution.

Is it good enough to replace Windows? Probably not. It still lacks hardware and software support:

  • My PC is connected to 2 printers (LPT + USB) which are shared over the network. When I’m in Linux these don’t work, so I’ve got to switch to Windows just to allow someone else to print. Perhaps CUPS and Samba could let me do this, but it’ll take quite a lot of work.
  • It doesn’t support MSN Messenger/Windows Live Messenger. Sure, Gaim allows you to chat over them but it doesn’t let you play games, share files at a decent speed or send sounds.
  • My TV card doesn’t work (AFAIK).
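For what it’s worth, the CUPS-plus-Samba route mentioned above usually comes down to an smb.conf along these lines. This is a hedged sketch – exact options depend on the Samba version, and the spool path is an assumption:

```
[global]
   printing = cups
   printcap name = cups

[printers]
   comment = All Printers
   path = /var/spool/samba
   printable = yes
   browseable = no
   guest ok = yes
```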

Application-wise, apart from "Windows Live Messenger" and perhaps Office, there is nothing stopping me from switching to Linux full-time. Hardware-wise, there are plenty of reasons to stay on Windows.

Quite a U-turn in a day. 


Since I managed to obtain an Ubuntu CD (which survived a Coke spill, so it would make a pretty good coaster), I decided I’d wipe my existing install of Gentoo (which hadn’t been used in well over a year) and replace it with Ubuntu 6.06 – the "free forever" operating system with 3 years of free support.

I was expecting Ubuntu to be a distribution which redefined desktop Linux – one which just worked, was straightforward, and wasn’t such a pain to install and manage. Ubuntu did do better than many other distributions in that it detected and configured all my hardware, but I didn’t find the install any nicer than the time I installed Mandrake 9.

  • First of all, during the installation process, it whined about something wrong with my FAT16 partition. This is a small partition of around 100MB which Microsoft or Dell or someone put there for recovery. I dismissed the error.
  • The installation process got stuck for about 10 minutes setting up apt-get. I believe this is due to an issue with my router as it gets confused with IPv6. I’ve had the same issue with many other Linux distributions but it’s never actually frozen the install. I decided to disconnect the network interface in the middle of the install and it resumed.
  • The first time I went on Ubuntu I had to disable IPv6. This was done by editing /etc/modprobe.d/aliases and changing "alias net-pf-10 ipv6" to "alias net-pf-10 off". A reboot allowed Firefox to successfully connect to the network.
  • For some reason, applications such as GNOME and Synaptic still don’t work, as they resolve everything to the wrong address. I’ve tried manually changing my DNS server and it didn’t seem to do anything.
  • Ubuntu stupidly mounted the Windows recovery partition, so I had to remove it from fstab. It also annoyed me with a dosfsck check every time it started up, which took ages, so I had to manually modify fstab to remove the check (set the sixth field, the fsck pass number, to 0).
  • Ubuntu’s GRUB menu put itself as the default and didn’t offer any graphical interface to make Windows the default. I had to manually edit /boot/grub/menu.lst to move Windows to the top of the list and make it the default. You can only push usability and simplification so far – simplify too much and you make it hard to do something you actually want to do.
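For reference, the manual fixes above boil down to a few config edits. The fstab device and mount point here are placeholders for wherever your recovery partition lives:

```
# /etc/modprobe.d/aliases – disable IPv6
alias net-pf-10 off

# /etc/fstab – the sixth field is the fsck pass number; 0 skips the check
/dev/hda1  /media/recovery  vfat  defaults  0  0

# /boot/grub/menu.lst – boot the first listed entry (here, Windows) by default
default 0
```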

I haven’t really used Ubuntu that much as I’m still trying to make it work properly, but it feels like a pretty decent operating system. Still, I can’t see what all the fuss is about, and I can’t see any particular reason why I should use Ubuntu.

Anyone know or recommend any Linux distributions which are worth trying? 

Me, I stick to the shadows, use Windows.

Operating System Screencasts features screencasts of a range of operating systems, from Vista to well-known Linux distributions such as Ubuntu. It also has casts of some more unusual operating systems such as Symphony OS (Linux, but with a radical GUI) and ReactOS (an open-source Windows clone).

The casts show the installation process of many operating systems and some of their features and programs. They’re not particularly entertaining except for OS bods, but they’re a nice way to sample and get a feel for an operating system in a way screenshots can’t.

Certainly not as interesting as Long Zheng’s Windows Vista screencasts but it’s worth a visit. 


LugRadio is a great radio show, or "podcast" as some insist on calling it. It’s mainly about Linux and open source, but they sometimes venture outside that and talk about Google and other technology. The show often has interviews with people in the open-source industry – this week Mark Shuttleworth of Ubuntu talks about running Dapper Drake on your camera.

The fortnightly show seems to have quite a loyal following; it’s very funny and well recommended.

Thanks to Ryan for pointing me towards LugRadio.

SUSE better than Vista?

A writer at Desktop Linux (biased, obviously) compares Vista and SUSE Linux and concludes that SUSE is the better operating system. He cites issues getting wireless networking working on Vista and the ease of getting it working on SUSE (although it does sound quite technical).

The reviewer also says Aero Glass (apparently Aero stands for Authentic, Energetic, Reflective and Open) runs quite badly, with issues such as blurriness and artifacts. Now, I’ve not used Vista, so I’m not going to comment on specifics beyond what I’ve read through the blogosphere and on sites such as Win SuperSite.

The UI

Vista’s glass might look nice, but it doesn’t help the user achieve the end task – whether that’s finding information on the internet or finishing that essay. The UI may even slow users down, as they can’t see which window is selected as easily as on Windows XP. From screenshots, the title-bar text can also be a lot harder to read on Vista with glass turned on.

Vista’s "Aero Glass" is said to use quite a lot of system resources – you can turn it off to increase readability and possibly make your system run a bit better but I much prefer the look of Luna Element.

Ubuntu and SUSE don’t have such nice-looking interfaces, but they’re practical and usable, which is the main thing.


The main thing that has been preventing me from switching to Linux in the past has been the lack of support for Windows applications. There are about 8 programs I use every day:

  • Mozilla Firefox – This is available on Linux.
  • Microsoft Office – Open Office is *OK* but I much prefer Microsoft Office
  • mIRC – Xchat is satisfactory or mIRC can be run through WINE
  • MSN Messenger – I have yet to find a good MSN Messenger client for Linux
  • Media Player – amaroK is better than Windows Media Player
  • Crimson Editor – There is no shortage of fantastic text editors on Linux
  • Paint Shop Pro – I have yet to find a good image editor for Linux
  • Wolfenstein: Enemy Territory – This is available on Linux

The main issues are the lack of Microsoft Office, a good image editor and a good IM client for Linux. Many more applications these days are cross-platform, which is fantastic, and open-source applications get better by the day. Some still fall short on the usability front (GIMP), but I think Linux applications are coming close to their Windows counterparts.

Anyway, WINE is pretty good these days and many Windows applications will run without any issues.

Other Reasons

On security and speed, Linux beats Windows outright. On Linux you can pretty much live without the anti-spyware and anti-virus programs which can slow your computer to a crawl on Windows.

My current PC is over two years old and it’s still got quite a bit of life in it. I’m not planning on upgrading the hardware, but I’d like to change the operating system: Windows XP has way too many security issues and runs quite slowly when AVG is scanning the system. At the moment, Linux looks a lot more attractive to me than XP or Vista.

Xgl Live CD

Just tried out the Kororaa Xgl Live CD and it is pretty damn impressive. It's a nice way to try out Xgl and the uber cool effects without risking your existing setup.

There is a file on the desktop which lists some of the things you can do: rotate desktops on a kind of carousel, show all windows by pressing F12, add transparency to windows, get the wibbly-wobbly window effect when you drag, etc.

The effects themselves are fairly useless and would annoy me in day-to-day use, but Xgl is impressive: opening up glxgears and moving the window around like mad, I still got around the same number of fps.

This live CD comes with nVidia (en-vee-dah) and ATi drivers so it's uber easy to try. Most distros will require you to install these drivers yourself.