Facebook leads to lower grades!?!

'Red Spiral'
Creative Commons License photo: ishrona

In what must be one of the most ridiculously alarmist and inaccurate articles I’ve read in a while, career website Milkround is claiming that Facebook users risk getting lower grades as a result of using the social networking site. Unfortunately, it looks like another instance of a journalist falling for the “correlation implies causation” fallacy.

According to Milkround:

Researchers at Ohio State University found students who enjoy communicating via cyberspace spend less time studying and risk getting a whole grade lower than their peers as a result despite more than three quarters of Facebook users claiming their interaction with friends on the site didn’t interfere with their work.

The study claims Facebook users averaged one to five hours a week studying, while non-users studied 11 to 15 hours per week.

The article and study imply that a typical student would do four times as much work without Facebook and would, on average, achieve one grade higher.

College Football
Creative Commons License photo: rdesai

Here’s a far more likely explanation: extroverted people who go to more parties and join more societies are more likely to use Facebook, while students who work around the clock are more likely to have little use for a Facebook account, or to refuse one altogether. Whether a student has a Facebook account depends on their participation in college life and on how hard-working they are.

Of course, students do use Facebook as a procrastination tool – I won’t argue with that. But correlations prove nothing. A more rigorous way to test the hypothesis would be to compare students’ results before and after they signed up to Facebook (assuming a constant level of how hard-working or sociable each student is). Alternatively, you’d need a control group of people who are roughly as sociable and hard-working as the Facebook group but don’t use Facebook (good luck finding one).
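To see how a confounder alone can produce the correlation Milkround reported, here’s a toy simulation (all numbers invented): “sociability” drives both Facebook use and study hours, Facebook itself has no causal effect, and yet Facebook users still study less on average.

```python
import random

# Hypothetical model: sociability is a confounder. Facebook use has NO
# causal effect on study hours here, yet a correlation still appears.
random.seed(42)

students = []
for _ in range(10_000):
    sociability = random.random()                 # 0 = bookish, 1 = very social
    uses_facebook = random.random() < sociability # social people sign up more
    study_hours = 15 - 10 * sociability + random.gauss(0, 1)
    students.append((uses_facebook, study_hours))

fb = [h for uses, h in students if uses]
no_fb = [h for uses, h in students if not uses]

print(f"Facebook users study {sum(fb)/len(fb):.1f} h/week on average")
print(f"Non-users study {sum(no_fb)/len(no_fb):.1f} h/week on average")
```

The gap between the two groups is entirely down to sociability, which is exactly why a before-and-after comparison or a matched control group is needed before blaming Facebook.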

Getting an informed, balanced political debate about science

What can be done to ensure an informed and balanced public and political debate of Science and Technology?

We're at the tipping point for climate change (bonus: face in the clouds)
Creative Commons License photo: kevindooley

The Oxford English Dictionary describes science as “…the systematic study of the structure and behaviour of the physical and natural world through observation and experiment.”1 But science is much more than that: most scientists enter the profession, and most scientific research is funded, because they believe science can greatly improve society. Usually, these goals of learning more about the world and improving society are coupled. However, cutting-edge research raises ever-increasing concerns about whether it is benefitting or destroying society.

Scientists occasionally argue that their work should be judged on a purely scientific basis without consideration for the ethics and consequences of the research; but this neglects science’s responsibility to the community at large. Because these issues affect all of us, it is important that a public debate is held about them and that scientists properly engage in it.

I believe there are three main barriers to a reasoned public debate.

Emotional responses blinding a scientific debate

In recent years, there has been a rapid increase in the number of scientific controversies involving ethical issues of “playing God”. These include stem cell research, cloning and genetically modified “Frankenstein” foods. For many people, the initial emotional response is disgust2 – something dubbed the “yuk response”. In order to have a more productive debate about the science, consequences and ethics behind the work, we must get past this initial emotional response.

Surgeons at work
Creative Commons License photo: salimfadhley

In one study, psychologist Philip Tetlock asked people to comment on the proposal to set up regulated markets to trade organs3. For most people, the response was that of moral outrage at treating sacred body parts as secular commodities to be traded. However, when the debate was reframed to neutralise the moral outrage (e.g. by exploring how organ trading would lead to a greater number of scarce organs being available to save more sacred lives), 40% of people toned down their opposition. Neutralising the moral outrage encouraged people to critically analyse the arguments in the debate and come to a more reasoned decision.

On many scientific issues, the “yuk response” is preventing a reasoned scientific debate from happening in the first place. As scientists, we should not ignore our moral guidance, but we must not allow the debate to be blinded by it.

Ensuring the media covers the debate accurately

The media plays an important role in informing society about issues which may affect them7. However, two factors lead to poor and inaccurate coverage of scientific issues:

1. To maximise readership, the press likes to present scientific issues as a series of horror stories. We’re told that cloning will lead to designer armies of obedient soldiers and that nanotechnology robots will turn the entire world into a blob of “grey goo”4,5. These poignant scenarios lead people to make up their minds before considering the scientific uncertainties, risks and benefits, to the detriment of a good debate.

2. To provide “balanced” coverage, journalists will try to cover both sides of the story. In a political or social dispute, such as whether the UK should join the Euro, it is reasonable that both sides should be treated equally and receive equal press coverage. However, it is inappropriate to treat the arguments of both sides in a “scientific-fact” controversy as equal. For example, the weight of evidence in favour of climate change is much greater than that against. Attempts by journalists to be “balanced” and present both sides equally give an inaccurate impression that there is still a great deal of scientific controversy about climate change6.

The Louvre
Creative Commons License photo: L.Brumm Photography

Scientists should be aware of the importance of the media in shaping debate and public opinions and that communicating the science can be as important as the science itself.

Engaging the people: Capturing the popular imagination

Because of cultural differences8 between science and art (e.g. science being concerned with truth; art with opinions), scientists tend to avoid the arts. However, given that debate is all about opinions, scientists should not be afraid to utilise the arts to catalyse debate about issues of scientific importance.

For example, Dan Brown’s book “The Da Vinci Code” inspired a large amount of “real world” interest, television documentaries9 and archaeological research about the Holy Grail in Christian theology. In the same way, the arts can provoke discussion about important scientific issues10.

The arts will, of course, never replace the rigour of peer-reviewed papers and the scientific process; but as a complement they can outline the major issues to the public in an interesting and engaging way without undermining the practice of science itself.


As science affects the whole of society, scientists have a moral obligation to inform, involve and engage the public in a debate about science. This should be achieved by focusing the debate on the important issues, ensuring they are portrayed accurately and inspiring discussion about them.


1. Concise Oxford English Dictionary: Science
Online edition. Available from: http://www.askoxford.com/concise_oed/science?view=uk. (Retrieved 1 March 2009)

2. New Scientist: Immoral advances: Is science out of control?
Jones, D. New Scientist, issue 2690, pp. 22-33. Available from: http://www.newscientist.com/article/mg20126905.100-immoral-advances-is-science-out-of-control.html?full=true

3. Trends in Cognitive Sciences: Thinking the unthinkable: sacred values and taboo cognitions
Tetlock, Philip E. Trends in Cognitive Sciences, vol. 7 issue 7, pp. 320-324.

4. The Guardian: Brave new world or miniature menace? Why Charles fears grey goo nightmare
Radford, Tim. The Guardian, 29 April 2003. Available from: http://www.guardian.co.uk/science/2003/apr/29/nanotechnology.science

5. Institute of Physics Press Release: ‘Grey goo’ misconceptions could harm poor in developing world
Institute of Physics Press Release, 27 January 2004. Available from: http://www.eurekalert.org/pub_releases/2004-01/iop-gm012704.php

6. The Independent: Reporters feel the heat over climate change
Ward, B. The Independent, 10 March 2008. Available from: http://www.independent.co.uk/news/media/reporters-feel-the-heat-over-climate-change-793586.html

7. Public Opinion Quarterly: The Agenda-Setting Role of the Mass Media in the Shaping of Public Opinion
McCombs, M. Public Opinion Quarterly, vol. 36, pp. 176-187. Available from: http://sticerd.lse.ac.uk/dps/extra/McCombs.pdf

8. The University Blog: Science vs. Art
Blog. 14 March 2008. Available from: http://theuniversityblog.co.uk/2008/03/14/science-vs-art/. (Retrieved 1 March 2009)

9. Priory-of-sion.com: Da Vinci Code Documentaries
Website. Available from: http://priory-of-sion.com/dvc/documentaries.html. (Retrieved 1 March 2009)

10. The Guardian: ‘Space flight can be as luminous as any novel’
Radford, Tim. The Guardian, 11 April 2008. Available from: http://www.guardian.co.uk/education/2008/apr/11/highereducation.comment


This essay was originally prepared for an essay writing competition at my college. I have decided to share it here as I feel it could be of interest to regular readers. Comments and thoughts very welcome.

"Carbon cost" of Google search same as boiling a kettle

Google Lego 50th Anniversary Inspiration
Creative Commons License photo: manfrys

The BBC reports today on a study by Harvard physicist Alex Wissner-Gross. Wissner-Gross claims that performing a standard Google search on a desktop computer produces 7g of CO2. A quick session with two searches will produce 14g of CO2 – the same as that from boiling a kettle.

From the BBC article:

Although the American search engine is renowned for returning fast results, Dr Wissner-Gross says it can only do so because it uses several data banks at the same time.

Speaking to the BBC, he said a combination of clients, networks, servers and people’s home computers all added up to a lot of energy usage.

“Google isn’t any worse than any other data centre operator. If you want to supply really great and fast result, then that’s going to take extra energy to do so,” he said.

According to Google Web History, I’ve performed 9,308 Google searches – and that’s only counting the searches I performed whilst logged in.

I’m guesstimating I perform about 40 searches a day; that’s roughly 15,000 Google searches per year (sounds scary when you put it like that). At 7g each, my annual Google carbon footprint would be 105kg of CO2 (about 0.1 tonnes).

Google have disputed this figure, saying that a search produces only 0.2g of CO2.

I’m not able to comment on the methodology as I don’t know how either figure was reached. But I think it is important to point out the difference between average cost and marginal cost.

As an example, imagine a server farm which was responsible for 100g of CO2 emissions every day. If ten people perform searches, the average carbon cost of a search is 100g divided by 10 searches = 10g of CO2 per search. This is the average cost of the search.

Beijing smog
Creative Commons License photo: kevindooley

The marginal cost, on the other hand, is the CO2 cost of performing one more search. If we then performed an 11th search, the CO2 emissions of the server farm would stay the same (we assume it’s running with spare capacity). The marginal cost of performing a search is zero grams of CO2.

With eleven searches, you could claim each search had an average carbon cost of about 9g. But that’s a bit unfair – comparing the CO2 output of the server farm if you had made the search with its output if you had not, you find the two are exactly the same. Your search had a marginal cost of zero grams of carbon.
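The distinction is easy to put in code, using the toy numbers from the example above:

```python
# Toy server farm from the example: it emits a fixed 100 g of CO2 per day
# regardless of load, while it has spare capacity.
FIXED_DAILY_EMISSIONS_G = 100

def average_cost(searches: int) -> float:
    """Fixed emissions spread over every search performed that day."""
    return FIXED_DAILY_EMISSIONS_G / searches

def marginal_cost(searches: int) -> float:
    """Extra emissions caused by one more search: zero, because the
    farm's output doesn't change with load."""
    return 0.0

print(average_cost(10))   # 10 g per search with ten searches
print(average_cost(11))   # roughly 9 g per search with eleven
print(marginal_cost(11))  # 0 g: the 11th search added nothing
```

The average cost falls as more searches share the fixed emissions, but the marginal cost stays at zero throughout.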

Whether Wissner-Gross and Google stated the average cost or the marginal cost I don’t know (although I suspect the first may have been the average cost and the second the marginal cost).

With Google’s server farms, we know that they will be running regardless of whether we perform searches or not. What matters, then, is the marginal cost of a search – and as this is so close to zero, I don’t think any of us should have a guilty conscience about using Google.

Evolving the Mona Lisa by natural selection

Fantastic experiment and write-up by Roger Alsing. Using an evolutionary algorithm, Roger wrote a programme that attempts to paint the Mona Lisa using 50 transparent polygons. The “fitness” of each candidate was tested by comparing it pixel by pixel to the actual Mona Lisa. Wonderful.
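Roger’s actual code isn’t reproduced here, but the core mutate-and-select loop might look something like this toy sketch, with a list of grey values standing in for the polygon image:

```python
import random

# Toy version of the idea: evolve a candidate towards a target by random
# mutation, keeping a mutation only when it improves the pixel-by-pixel
# "fitness". The "image" here is a list of grey values, not polygons.
random.seed(0)
TARGET = [random.randint(0, 255) for _ in range(64)]

def fitness(candidate):
    # Lower is better: total per-pixel difference from the target.
    return sum(abs(c - t) for c, t in zip(candidate, TARGET))

candidate = [128] * len(TARGET)
best = fitness(candidate)
for _ in range(20_000):
    i = random.randrange(len(candidate))
    old = candidate[i]
    candidate[i] = random.randint(0, 255)   # mutate one "pixel"
    new = fitness(candidate)
    if new < best:
        best = new          # keep the improvement
    else:
        candidate[i] = old  # revert the mutation

print(best)  # far below the starting fitness after 20,000 generations
```

Roger’s version mutates polygon vertices and colours instead of single grey values, but the selection step – keep the child only if it matches the Mona Lisa better – is the same.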

Richard Dawkins’ “The Blind Watchmaker” is a fantastic book to read if you’re interested in evolution. I read it fairly recently; he describes a computer programme that simulates evolution and the results are remarkable. Worth a read.

Predicting the future popularity of a web page

Balloons in Trafalgar Square
Creative Commons License photo: wili_hybrid

New Scientist reports this week that a new tool developed at HP Labs could potentially predict the popularity of a web page in 30 days’ time. Essentially, they say that the rate at which a web page picks up views in its first few days predicts the page’s subsequent popularity 90% of the time. It doesn’t seem too radical an idea – after all, the pages which are more popular in the first few days are likely to get bookmarked more, linked to more, ranked higher on Google, etc.

The research focused on the sites Digg and YouTube, so it would be interesting to see how it could be applied to other sites. You can download the paper online at arXiv.org.

On a similar note, I’ve found that I can get some surprisingly useful information from the popularity of pages on my own site. For example, one of my posts about MSN Messenger downtime gets a lot of hits whenever MSN Messenger goes down. When the number of visitors to that page is significantly above normal, I know that MSN really is down; if the number of visitors is normal, it’s typically just an issue with my connection or my local server. In fact, I’ve found this method much more reliable than Microsoft’s own service status page for Messenger. Similarly, I saw a huge spike in the number of visitors to my post on the possibility of VAT cuts straight after the recent pre-budget report. If only there were a way of exposing these statistics in a useful way!
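A crude version of that downtime heuristic fits in a few lines (the function name and the traffic numbers are made up for illustration): flag the service as probably down when today’s hits on the relevant page are well above the recent baseline.

```python
# Hypothetical sketch of the heuristic described above: compare today's
# page hits against the average of recent days.
def probably_down(recent_daily_hits, today_hits, factor=3.0):
    """True if today's traffic is `factor` times the recent average."""
    baseline = sum(recent_daily_hits) / len(recent_daily_hits)
    return today_hits > factor * baseline

normal_week = [40, 55, 48, 52, 45, 50, 47]
print(probably_down(normal_week, 51))    # False: ordinary traffic
print(probably_down(normal_week, 400))   # True: MSN is probably down
```

The threshold factor is the judgment call: too low and every busy day looks like an outage, too high and you miss small incidents.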

Could Free Starbucks Win the Election for Obama?

barack obama
Creative Commons License photo: patrick dentler

The day of the US presidential election is approaching. There is an expected turnout of 80%. Both parties have worked very hard to register as many new voters as possible and companies such as MTV and Starbucks have been encouraging people to register to vote.

Starbucks is offering a free cup of coffee to those who vote on November 4th. How could this distort the results of the election?

Well, it seems pretty logical that the people who feel strongly about being Republican or Democrat, or who have a strong preference for either Obama or McCain, are the people who would vote anyway, regardless of incentives such as free coffee. So this promotion probably wouldn’t affect whether they vote.

Swing and undecided voters, on the other hand, may not vote without an additional incentive such as free coffee. If, say, undecided voters mostly lean towards Obama – the incentive of free coffee at Starbucks would benefit Obama in the polls by encouraging the undecided voters to go to the polling station and to vote for him.

Two shots of espresso please!
Creative Commons License photo: aubrey arenas

It would certainly be interesting to research whether this promotion or other incentives could distort the results of the election. The number of Starbucks stores per capita differs immensely between states. The District of Columbia, for example, has 1.18 Starbucks for every 10,000 people – nearly 22 times as many stores per capita as Arkansas, which has 0.054 per 10,000 people. Swing state Virginia has the 11th highest Starbucks-per-capita figure. If the Starbucks promotion does have an effect on swing-voter turnout, we would expect the biggest effects in a) the states with the highest concentration of Starbucks and b) cities (which are of course more liberal than small-town America), where people are more likely to have a Starbucks nearby.

You might say I’m exaggerating the effect of a free cup of coffee on the election results. Perhaps so. But research has shown it can be quite easy to “prime” people and affect who and what they vote for. For example, one study found that people who used a church as their local polling station were less supportive of gay marriage.

Another piece of research looked at a 2000 ballot initiative in Arizona to increase spending on education:

The authors…divided the precincts between schools and non-schools, and found that voters who voted in a school had a marginal preference (3 points) for the initiative.

I am all yours...
Creative Commons License photo: HAMED MASOUMI

And when I spoke to some local activists for the Labour Party (UK) earlier this year, they suggested that Gordon Brown should call an election before 2009. Not because they believe he is more likely to win: they believe that Gordon Brown losing the next general election is already a done deal. It’s because the local elections are also due to be held in 2009, and holding the general election at the same time as the local one would mean Brown’s personal unpopularity rubbing off onto the rest of the Labour party and their local councillors.

There are many subtle ways of affecting the results of an election. Could free Starbucks have a significant one?

The Large Hadron Collider and the End of the World?

The Large Hadron Collider/ATLAS at CERN
Creative Commons License photo: Image Editor

It’s been rather, well, amusing to see the news coverage of the test firing of the Large Hadron Collider over the last few days. As somebody who has worked in physics and may occasionally classify myself as a “physicist”, it’s really nice to see physics making the headlines! But I thought the news coverage was absolutely sensationalist and ridiculous.

The downmarket British tabloid The Sun was ridiculously sensationalist with its headline “End of the world due in nine days”. It wrote:

SCIENTISTS are trying to stop the most powerful experiment ever – saying the black holes it will create could destroy the world.

That is why boffins are now trying to stop the project with a last-ditch challenge in the courts.

They fear the LHC experimenters are tinkering with the unknown and putting mankind — and our whole planet — at risk.

The Black Hole
Creative Commons License photo: lautsu

It wasn’t just the downmarket tabloids at it. BBC News discussed the outlandish theory, and ITV originally reported the story as so on their website:

Scientists at the European Centre for Nuclear Research (Cern) are pressing ahead with the experiment despite warnings that it could destroy the universe.

And I woke up in the morning to a long discussion on the radio about how the LHC would cause the end of the world.

I’m sorry, but no serious scientist thought the LHC would lead to the end of the world. It’s a totally ridiculous theory.

But I’m more than happy to be proved wrong. Check out the live webcams from the LHC and let me know if you see anything 😉

How Distributed Grid Computing Could Cut Costs and Help the Environment

Cat-5 Cable
Creative Commons License photo: Darren Hester

The dream of distributed computing (or grid computing) is that it can cut the costs of computing and cut carbon emissions. In this post, I aim to explain how it works.

Let us imagine a scenario where both Carl and Daniel have computers. Carl’s computer is twice as efficient – that is, it costs him half as much to do the same work. Let’s say it costs £1 in electricity for Carl to run a computer model, and £2 for Daniel. In total, it costs society £3 to run the computer model once for Carl and once for Daniel.

Cost to Carl: £1
Cost to Daniel: £2
Total Cost to Society: £3

With Distributed Computing

Now imagine the same scenario with one addition: distributed computing. As it costs Carl less to run the model on his computer than it would cost Daniel, Daniel could pay Carl to run the model for him. Imagine that Daniel pays Carl £1.50. It only costs Carl £1 to run Daniel’s model, but he has gained £1.50 for his effort, giving him a profit of 50p. Daniel only spends £1.50 to have his model run, as opposed to the £2 it would have cost him to run it himself.

Everybody benefits by saving money and the end result is the same: Carl and Daniel have both had their model run.

Cost to Carl: 50p (£1 to run his own model, subtract 50p profit from running Daniel’s model)
Cost to Daniel: £1.50 (He pays Carl £1.50 to run his model for him)
Total Cost to Society: £2
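The worked example above can be checked in a few lines of code; the price of £1.50 is just one point between the two costs at which both parties gain:

```python
# The Carl-and-Daniel example in code. Money changing hands cancels out
# at the level of society as a whole; only the electricity is a real cost.
CARL_COST = 1.00    # £ for Carl to run one model
DANIEL_COST = 2.00  # £ for Daniel to run one model himself
PRICE = 1.50        # £ Daniel pays Carl for the favour

# Without trading: each runs his own model.
society_before = CARL_COST + DANIEL_COST

# With trading: Carl runs both models.
carl_net = CARL_COST + (CARL_COST - PRICE)  # own model, minus 50p profit
daniel_net = PRICE                          # just pays Carl
society_after = carl_net + daniel_net

print(f"Carl: £{carl_net:.2f}, Daniel: £{daniel_net:.2f}")
print(f"Society: £{society_after:.2f} (was £{society_before:.2f})")
```

Any price strictly between £1 and £2 leaves both Carl and Daniel better off; £1.50 simply splits the £1 efficiency saving evenly.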

What assumptions have we made?

Creative Commons License photo: Ack Ook

There are no costs involved in the transaction itself. Imagine instead that it costs £2 for Daniel to send a copy of the computer model to Carl and then receive the results – perfectly conceivable if Daniel had to print out instructions on how to use the model, FedEx them to Carl and wait several weeks for the results. The cost of Daniel asking Carl to run the model would then be £3 (£1 for Carl to run the model on his computer plus £2 in transaction costs), and Daniel might as well have run it himself. Real-world transaction costs include slow network connections and incompatibilities between different computer systems. So for distributed computing to work, we need fast, reliable network connections and software compatibility.
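Once transaction costs enter the picture, the decision rule is simple (the function name here is my own, for illustration): outsource only when the peer’s running cost plus the transaction cost beats running the model yourself.

```python
# Sketch of the outsourcing decision with transaction costs included.
def should_outsource(own_cost, peer_cost, transaction_cost):
    """True if paying a peer (at cost) plus the transaction overhead
    is cheaper than running the job yourself."""
    return peer_cost + transaction_cost < own_cost

print(should_outsource(2.00, 1.00, 0.10))  # True: cheap, fast network
print(should_outsource(2.00, 1.00, 2.00))  # False: the FedEx scenario
```

This is why fast, reliable networks and software compatibility matter: they push the transaction cost towards zero, widening the range of jobs worth trading.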

Daniel would happily allow Carl to run the model for him. Are there privacy implications, for example? Daniel must be confident that Carl can run the model for him without being able to peek at the results – after all, there might be trade secrets in there. Similarly, Carl must be confident that Daniel isn’t sending him malicious software which could break his computer. For distributed computing to work, there must be a foolproof and hackproof way for Carl and Daniel to trust each other to keep to their side of the bargain.

Creative Commons License photo: JohnSeb

Thirdly, Daniel must actually be able to cut his costs. Let me explain. It’s possible that Daniel has his computer on 24/7 anyway – that is, it’ll cost him £2 whether he runs the model or not. If he leaves his computer running but idle and still asks Carl to run the model for him, he essentially pays for the model to be run twice. My computer doesn’t dynamically underclock, so whether or not I’m using it, it eats up the same amount of energy. For distributed computing to work, our own computers must make much more efficient use of resources. We need thin-client computers with negligible costs.

The real world

Distributed computing hasn’t taken off yet on any large scale. The three conditions don’t yet exist:

  • We need fast, reliable network connections and software compatibility. This definitely doesn’t exist at the moment: I don’t trust my own network connection to be 99.9999% reliable. It’s OK for downloading files and sending e-mails but it needs to be good enough for me to be able to send entire computer programmes over the network in under a second. Additionally, software isn’t at the stage where it’s “write once, run anywhere”. We need standards, standards and standards.
  • There must be a foolproof and hackproof way for Carl and Daniel to trust each other to keep to their side of the bargain. There is no way I would let anybody run a piece of software on my computer without checking it first, and if I had to pre-approve every single piece of software, that adds to the transaction costs discussed above. Virtual machines are one way around this issue, creating safe ways to isolate software and track its progress. Still, I’m not sure there is a secure way to run software on a computer with the confidence that the owner of the computer can’t take a peek. And I’m not sure we’ll ever reach the point where people will happily allow third parties to run software on their computer with no possible way to find out what it’s doing.
  • We need thin-client computers with negligible costs. I’ve already explained why this doesn’t hold yet: my computer uses exactly the same amount of power whether it’s active or idle. I don’t believe people will drop the idea of “a computer on every desk in every home” until they are confident the first two criteria have been met. Only then will they accept owning a thin-client computer.

It’s already being used…

Last year I worked at a company which employed distributed computing on a smaller scale. They had a small cluster (~20 computers) with identical hardware, each linked with Gigabit Ethernet. Software ran inside virtual machines and those virtual machines moved around between computers depending on the amount of spare capacity each one had.

The reason why they could employ distributed computing is because within their own system, they knew that:

  • They had a reliable intranet connection, and because all the computers were identical, software worked on every single one.
  • Because they only ran a limited number of programmes and all the computers were under their own control, there were no trust issues.
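The scheduling idea behind that cluster can be sketched in a few lines (the host names, capacities and VM loads below are invented): place each virtual machine on whichever host currently has the most spare capacity.

```python
# Hypothetical sketch of capacity-based VM placement, as described above.
def assign_vms(hosts, vm_loads):
    """hosts: dict of host name -> capacity.
    Returns a dict of host name -> list of VM loads placed there."""
    placement = {name: [] for name in hosts}
    for load in sorted(vm_loads, reverse=True):   # place biggest VMs first
        # Pick the host with the most spare capacity right now.
        name = max(hosts, key=lambda h: hosts[h] - sum(placement[h]))
        placement[name].append(load)
    return placement

cluster = {"node1": 100, "node2": 100, "node3": 100}
print(assign_vms(cluster, [60, 40, 30, 30, 20]))
```

A real system like the one described would also migrate running VMs as loads change, rather than placing them once, but the greedy “most spare capacity wins” rule is the core of it.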

Creative Commons License photo: Petrick2008

Distributed and grid computing isn’t yet practical on a worldwide scale, but I think we’re making progress. Networks are becoming more reliable. Software platforms appear to be becoming more standardised. Virtual machines are coming of age. And our computers are becoming more environmentally conscious, adapting their resource usage to the amount of processing power required.

So there is the blueprint for how we can lower the costs of computing. You might be wondering what that’s got to do with the environment. Well, simply replace the £ sign with joules of energy. Just as free-market trade leads to an efficient allocation of resources in the real world, the trade of computer time in a distributed grid leads to a more efficient allocation of computing resources. And that lowers the energy consumption of computing and its environmental impact.

UFOs & Why Aliens Haven't Made Contact

Hi guys! I’m back in England… I spent the last two weeks in sunny America and had the delight of seeing Yellowstone National Park and the Grand Canyon, as well as spending July 4th in America (odd being British and in America on Independence Day, but there you go). You’ve been treated to a couple of scheduled posts over the last two weeks; hopefully we’ll be back to full operation soon!

Hovering Lights
Creative Commons License photo: Todd Huffman

When I was in America, I had plenty of opportunity to experience the wonders of American cable television (which also seems to have a ridiculous number of commercial breaks).

I saw a discussion programme about UFOs, and one theory about why “aliens” haven’t yet made contact struck me as pretty contrived and quite funny.

Now, the standard theory about UFOs is that they are aliens of extra-terrestrial origin. They crash landed at Roswell and the government has been covering it up ever since, either because they feel we are not ready to know or because the military feel it is a tactical advantage to keep such information secret.

They’re actually time travelling humans…

The theory put across by one contributor to the TV show was that they are not actually aliens, but humans from the future. The believers claim that time travel was discovered in the Philadelphia Experiment. The laws of physics don’t actually prevent time travel; it is apparently possible using wormholes, and some people have claimed to have made small particles travel in time. It is perfectly conceivable that a time travel device could be constructed in the future for humans or spacecraft. Building such a device might arguably be easier than developing the capability to traverse the great distances needed to contact extra-terrestrial civilisations (in this case, us). They argue that UFOs are actually time travellers from the future who have come back to prevent us from making big mistakes which would impact the future (like the temporal agents of “Star Trek: Enterprise”).

The reason we can never know the truth about UFOs is that if we knew, we’d change the future. We might panic and destroy ourselves.

Why has it been documented that people in stressful circumstances (e.g. during wars) see more UFOs? Psychologists say that these people are more likely to “imagine” or make up stories about being abducted by aliens. But the theorists say it’s because those are also the occasions on which we might need more help to stop us from doing what is wrong.

Creative Commons License photo: Jami Dwyer

If there are indeed aliens swarming around in UFOs everywhere, why aren’t there more documented cases, especially with the number of camera phones around these days? Because the future humans in their UFOs wouldn’t visit us unless there was something in history to correct.

I thought this was a totally genius theory because it manages to explain why governments can’t disclose anything about UFOs, answers the paradoxes of believing UFOs are of extra-terrestrial origin, and even finds a solution to a paradox of time travel.

I thought it was certainly an interesting, if rather contrived, theory for the origin of UFOs. Of course, I’m very skeptical about it, as I am about the existence of UFOs. The beauty of the theory is that it can’t be disproved. But I’m sure my future self will be able to come back in time and stop me from making this post, to spare me the embarrassment of having to admit I’m wrong.

Friends turn mountains into molehills

Desert Leader
Creative Commons License photo: Hamed Saber

The New Scientist reports on a study at the University of Plymouth where students were asked to assess the slope of a hill. Those with friends estimated the slope at 10 to 15 degrees less steep than those who were alone during the experiment. They also found that the same effect could be achieved just by thinking of somebody close to you.

Fascinating stuff. I guess it’s pretty metaphorical too… how we can do tough and seemingly impossible things when we’re surrounded by great people who give us support all the way.

I wish the paper were freely available. It would have been really interesting to find out exactly how the experiment was done and how other factors were accounted for – e.g. people saying a slope looks easier than they perceive it to be in order to impress their peers.