Red Links on Webpages – Avoid them?

I was catching up on Firefox 3 (which is an awesome browser, by the way) today when I stumbled across a blog post about smooth image scaling in Firefox 3. The post flagged up something really interesting to me.

The web page in question styles all of its hyperlinks in red text without underlines:

[Screenshot: the blog post's red, non-underlined hyperlinks]

As I read through the article, I noticed a certain reluctance to follow any of the hyperlinks it referenced. I couldn't work out why at first, but I realised it was due to Wikipedia, which uses the same styling for links to non-existent articles:

[Screenshot: a red Wikipedia link to a non-existent article]

I noticed that I purposely avoid red hyperlinks on Wikipedia because they never lead to any useful information. It was slightly strange that the same behaviour then extended to a totally different web page which I had never visited before.

Of course, it's common knowledge amongst web designers that people expect hyperlinks to be blue and visited hyperlinks to be either purple or a more saturated blue, but I've never heard of any expectations regarding red links.

It’d be interesting to know whether any of my readers had the same feeling about the links on this page or whether it’s because I spend too much time on Wikipedia.

Another interesting experience recently: one of the posters at school used red wavy underlines for all the titles. Again, it was one of those things that annoys you without your being able to work out why. Eventually I figured out it was because Microsoft Word highlights misspellings with a red wavy underline, and I'd developed some kind of “learned behaviour” in Word to correct typos and remove the red wavy underlines as soon as they appear.

Data Visualisation in Javascript

Processing.js is quite something. It's a port of the Processing data visualisation programming language to Javascript, and a great use of the <canvas> functionality in recent browsers. Because it pushes the browser to its limits, it is recommended you use it only with the latest beta browsers: Firefox 3, the WebKit nightlies and Opera 9.5.

If this doesn't sound impressive to you, check out some of the demos.

There really are all kinds of things from physics to fractals, from clocks to modern art.

What can I say? It's left me speechless, and quite amazed at how processing.js can weigh in at over 5,000 lines yet compress to under 10 KB.

I suppose the sluggish performance in current browsers is the main downside, but I expect this will get better in time: I haven't seen enough intensive <canvas> applications around yet to warrant browser makers putting time into improving performance.

Encoding Javascript in a PNG through canvas

I think this is a hilarious way to compress and perhaps obfuscate your Javascript code. How does it work? Well, in ASCII text, which is how Javascript is encoded, 8 bits are used to encode a single character (this gives 2^8 combinations or 256 possible characters).

In an 8-bit binary image file, the 8 bits per pixel represent a colour: 256 different colours, all the way from white to black. In colour images, 3 bytes are used for each pixel, storing the amounts of red, green and blue as values from 0 to 255, which combine to give the final colour.

This script works by encoding the 256 possible characters of an ASCII Javascript file as the 256 possible colours of a binary PNG file. <canvas> is used to read the “values”, or colours, of each pixel, turning them back into their ASCII equivalents. The result is then eval()-ed.
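As a rough sketch of the decoding step (my own illustration, not the actual script; it assumes one character per pixel, stored in the red channel and read left to right, with zero used as padding):

// A minimal sketch of the decoding step, not the original script.
// Assumes the source was written into the PNG one character per
// pixel (stored in the red channel), left to right, top to bottom.
var img = new Image();
img.onload = function () {
  var canvas = document.createElement("canvas");
  canvas.width = img.width;
  canvas.height = img.height;
  var ctx = canvas.getContext("2d");
  ctx.drawImage(img, 0, 0);

  var data = ctx.getImageData(0, 0, img.width, img.height).data;
  var source = "";
  for (var i = 0; i < data.length; i += 4) {
    var charCode = data[i];     // red channel holds the character code
    if (charCode === 0) break;  // treat zero as end-of-script padding
    source += String.fromCharCode(charCode);
  }
  eval(source);                 // run the decoded Javascript
};
img.src = "script.png";         // hypothetical file name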

The appeal of this technique is that you get PNG compression for free. Of course, most servers will gzip your Javascript files anyway, meaning you don't actually gain a reduction in file size.

I got this idea of stuffing Javascript into a PNG image and then reading it out using the getImageData() method on the canvas element. Unfortunately, for now, that means it only works in Firefox, the Opera beta and the recent WebKit nightlies. And before anyone points out how gzip is superior: this is in no way meant as a realistic alternative.

Anyway, since the getImageData method on the canvas element isn't widely supported yet, I guess this remains a curiosity for now and just another way to use (or misuse) canvas. It's meant only as a thing of interest, not something you should use in any real-life application, where something like gzip will outperform it.

This reminds me of steganography and techniques for hiding files inside JPEG images. You could perhaps use this technique to add a digital watermark to an image. Or you could store metadata inside the pixels themselves and then use <canvas> to read it out. The advantage of that is your metadata can never get separated from the actual image.

Javascript Image Effects

The Javascript Image Effects script is stunning. It works by using VML filters in Internet Explorer and the <canvas> tag in Firefox and Opera – the same methods used by Reflection.js.

This library tries to enable different client-side image effects. IE has long had its filters, which provide a few basic effects. With canvas, some of these effects can also be achieved in Firefox and Opera.

Safari lacks the getImageData/putImageData methods on the canvas 2D context, so only the very basic flipping effects work in that browser (the functions are present in the latest WebKit nightlies, however). Likewise, Opera only supports get/putImageData in the beta version (9.50b), so you need the beta to see most of the effects.

Among the effects available are blur, sharpen, flip, invert colours, find edges, emboss and brightness/contrast adjustment.

I think it's fantastic that all of these effects can be achieved using <canvas>. However, it appears the effects are achieved by manipulating individual pixels, and unless there are dramatic improvements in performance I don't think this would be practical as unobtrusive Javascript. It could, however, make a fantastic web-based AJAX image editor which would let you tweak an image before submitting it using toDataURL().
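To give a flavour of the pixel manipulation involved, here is a minimal invert-colours effect of my own; it is a sketch of the general approach, not the library's code:

// A minimal invert-colours sketch, not the library's code.
// img is an already-loaded, same-origin HTMLImageElement.
function invert(img) {
  var canvas = document.createElement("canvas");
  canvas.width = img.width;
  canvas.height = img.height;
  var ctx = canvas.getContext("2d");
  ctx.drawImage(img, 0, 0);

  var pixels = ctx.getImageData(0, 0, img.width, img.height);
  var data = pixels.data;
  for (var i = 0; i < data.length; i += 4) {
    data[i]     = 255 - data[i];      // red
    data[i + 1] = 255 - data[i + 1];  // green
    data[i + 2] = 255 - data[i + 2];  // blue (alpha left untouched)
  }
  ctx.putImageData(pixels, 0, 0);
  img.src = canvas.toDataURL();       // swap the inverted result back in
}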

If you use the Javascript Image Effects script, be aware of the terms of use, which prohibit commercial use without permission.

A rap about search engine optimization

This video made me smile!

My favourite part:

don’t use bold, please use strong
if you use bold that’s old and wrong

It looks like this guy (the SEO rapper) is totally serious, and looking through his other YouTube videos, he's also got raps about social networking, link building and web advertising.

This reminds me of some really lame HTML jokes:

Why did the XHTML actress turn down an Oscar?
Because she refused to be involved in the presentation.

Why was the XHTML bird an invalid?
Because it wasn’t nested properly.

Boom boom.

Pure Javascript/Ajax Video Player

The Javascript video player is pretty cool and fun, even if it is, as the author describes it, “semi-useless”! See the demo.

So how does it work? A script exports every single frame of an MPEG movie as individual JPEG files. These are then collected together, base64-encoded and exported into a JSON file. The script then creates an Image object for each frame and shows them in succession. There is no support for sound.

The first strategy was to create an Image object for each frame and render it on a canvas element using drawImage(). That worked fine and performance was nice (although Opera used a lot of CPU), but I figured I'd try using a regular image tag and just changing the src property to another data: URI for each frame. The change was barely noticeable in Firefox and Safari, and it ran a bit better in Opera, so I lost the canvas and stuck with plain old images.

Now, it seems that Firefox will eat all the memory in the world if you keep throwing new data: URIs at the same image tag, which led to another change: for each frame a new Image object was created and saved for later, and as the video played, the previous frame's Image was replaced by the new Image object. That seemed to work, but it introduced an annoying delay while all these Image objects were created before playing. So I ended up moving the Image creation into the actual render cycle, which simply checks whether the frame's Image has already been created and, if not, creates it.
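Pieced together from that description (the variable names and frame rate are my guesses, not the author's code), the lazy render cycle might look something like this, where frames is the array of base64-encoded JPEG frames from the JSON file:

// A sketch of the lazy render cycle described above; the names and
// frame rate are assumptions, not the author's actual code.
var images = [];   // Image objects, created on demand
var current = 0;

function renderFrame() {
  if (!images[current]) {  // decode this frame only when first needed
    images[current] = new Image();
    images[current].src = "data:image/jpeg;base64," + frames[current];
  }
  var screen = document.getElementById("screen");
  screen.parentNode.replaceChild(images[current], screen);
  screen.removeAttribute("id");   // avoid duplicate ids on old frames
  images[current].id = "screen";  // keep the hook for the next frame
  current = (current + 1) % frames.length;
}

setInterval(renderFrame, 1000 / 24);  // roughly 24 frames per second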

So this is totally impractical but who cares: it’s cool and a fun experiment. I wish I still had time for experiments like these!

Evolving the perfect website through natural selection

The Daily Telegraph covers an experiment to breed the perfect web page design through Darwinian natural selection.

Matthew Hockenberry and Ernesto Arroyo of Creative Synthesis, a non-profit organisation in Cambridge, Massachusetts, have created evolutionary software that alters colours, fonts and hyperlinks of pages in response to what seems to grab the attention of the people who click on the site.

Evolutionary algorithms have in the past been used to design aircraft wings and boat hulls. Because there are so many competing and interacting factors in how well a given design works, a human designer is unlikely to find the optimum design. The idea of an evolutionary algorithm is that you start off with thousands of random, different designs. Each design then undergoes a form of natural selection – for aircraft wings, each design would be evaluated in a computer model. The best designs are then “bred” together to create the next generation of wings, and this process is repeated thousands of times to produce an optimum wing.

The software treated each feature as a “gene” that was randomly changed as a page was refreshed.

After evaluating what seemed to work, it killed the genes associated with lower-scoring features – say, a link in an Arial font that was being ignored – and replaced them with those from higher-scoring ones – say, Helvetica.
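As a toy illustration of the general idea (entirely my own sketch, nothing to do with the Creative Synthesis software), one generation of such an algorithm might look like this:

// A toy sketch of the evolutionary idea, not the Creative Synthesis
// software. Each design is a set of "genes"; fitness would come from
// an external measure such as click-through counts.
function mutate(design) {
  var fonts = ["Arial", "Helvetica", "Georgia", "Verdana"];
  var copy = { font: design.font, linkColour: design.linkColour };
  if (Math.random() < 0.5) {
    copy.font = fonts[Math.floor(Math.random() * fonts.length)];
  } else {
    var rgb = Math.floor(Math.random() * 0x1000000).toString(16);
    copy.linkColour = "#" + ("00000" + rgb).slice(-6);
  }
  return copy;
}

function evolve(population, fitness) {
  // keep the best half, then "breed" it (here, simply mutate copies)
  population.sort(function (a, b) { return fitness(b) - fitness(a); });
  var survivors = population.slice(0, Math.floor(population.length / 2));
  return survivors.concat(survivors.map(mutate));
}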

Evolutionary algorithms are certainly a useful addition to the engineer's toolbox. There are of course limitations. Creationists sometimes ask “what use is half an eye?” as a rebuttal to Darwinian natural selection. In that example, half an eye is still better than no eye at all: you might be able to see, but you won't get a sharp picture.

But at least you won’t walk into a wall. But say… why haven’t humans evolved wheels? “What use is half a wheel?” is a good question, and there isn’t a use for half a wheel. If any organism did evolve half a wheel, it would be selected against and therefore natural selection would never lead to a whole wheel.

I think this illustrates well why evolutionary algorithms can never be the be-all and end-all; we'll still need human input and design. But they're great as a way to refine a design.

Read the experiment write up at Creative Synthesis and have a look at the end result.

document.getElementsByClassName compatibility

Firefox 3 and Safari 3.1 introduce an important new compatibility problem for many web pages. The problem originates from Prototype.js's document.getElementsByClassName function, which is implemented like this:

if (!document.getElementsByClassName)
  document.getElementsByClassName = function(instanceMethods){
    // …
  };

The problem is that the native browser implementation behaves differently from Prototype's function, which was created before the document.getElementsByClassName specification was written. The native implementation returns a live NodeList, whilst Prototype's function returned a plain array.
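A quick illustration of how that difference bites (my own example, not code taken from Prototype):

// My own illustration of the breakage, not code from Prototype.
var items = document.getElementsByClassName("highlight");

// With Prototype before 1.6 this returned a plain array, and since
// Prototype extends Array.prototype with Enumerable methods, this worked:
items.each(function (el) { el.hide(); });

// A browser with a native implementation returns a live NodeList
// instead, which has no each() method, so the same call now throws
// a TypeError.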

This compatibility issue affects websites using versions of Prototype.js older than 1.6, as well as other scripts which defined their own document.getElementsByClassName function (including Reflection.js versions 1.8 and older).

Prototype.js users should use $$ or Element#select instead.

For my script, Reflection.js, I have renamed my function to document.myGetElementsByClassName. Sure, it's ugly, but it preserves compatibility with older browsers which don't support the new document.getElementsByClassName function natively, and we don't need to test whether the browser supports the function natively (and use a different code path depending on whether a NodeList or an array is returned). The downside, of course, is that we can't benefit from the faster native implementation.
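The renamed function itself is just a straightforward class-name scan. Here is a sketch of the approach (the real Reflection.js code differs in its details):

// A sketch of the renaming approach; the real Reflection.js code
// differs in its details. Because this function is always ours, it
// always returns a plain array, whatever the browser.
document.myGetElementsByClassName = function (className) {
  var results = [];
  var elements = document.getElementsByTagName("*");
  for (var i = 0; i < elements.length; i++) {
    var classes = " " + elements[i].className + " ";
    if (classes.indexOf(" " + className + " ") > -1) {
      results.push(elements[i]);
    }
  }
  return results;
};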

Anyway, hope this helps someone out there.

Dynamic Gradient Background (Canvas)

A really clever script here which makes use of Javascript and canvas to give dynamic gradient backgrounds.

Here is the problem: when you use the <canvas> tag to manipulate an image or a graphic, it is treated in HTML as an inline object (similar to a super-charged <img>). This means you can’t set a canvas as the background image of a <div> for example.

This script gets around this limitation by using the toDataURL() function of canvas (supported in Firefox and Opera). toDataURL() essentially takes the contents of a canvas and turns it into an image, which can then be made into the background of a <div>.
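A minimal sketch of the technique (my own, not the script's actual code):

// A minimal sketch of the technique, not the script's actual code.
function applyGradientBackground(el) {
  var canvas = document.createElement("canvas");
  canvas.width = 1;  // 1px-wide strip, repeated horizontally via CSS
  canvas.height = el.offsetHeight;
  var ctx = canvas.getContext("2d");

  var gradient = ctx.createLinearGradient(0, 0, 0, canvas.height);
  gradient.addColorStop(0, "#6699cc");  // example colours
  gradient.addColorStop(1, "#003366");
  ctx.fillStyle = gradient;
  ctx.fillRect(0, 0, canvas.width, canvas.height);

  el.style.backgroundImage = "url(" + canvas.toDataURL() + ")";
  el.style.backgroundRepeat = "repeat-x";
}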

It sounds complicated; check out the JS source if Javascript is your preferred language as opposed to English.

I can see Reflection.js being adapted to work for background images using a similar method, but it’s likely only to work in Firefox and Opera.

Developing for Facebook

I remember falling in love with Facebook when I first became a user: it was very easy to use, uncluttered and such an amazing time saver. Additionally, you can log in any morning after a night out and pictures will already be online: enough to jog your memory of the night before. Anyway, I’ve now fallen in love with Facebook again: this time as a developer.

Recently a friend and I have been exploring developing Facebook applications. The best-known Facebook applications are things like Scrabulous, Texas Hold Em and Superwall: pretty banal and useless. Some applications have even been downright annoying – I got so much spam from applications such as Likeness, Super Wall, Funwall and Flixster that I've uninstalled them.

Anyway: this was our problem. We run a website for a local educational establishment, where we've been trying to improve two-way communication between students and staff. So we did the traditional thing and set up an online PHP community – forums and so on. Unfortunately, it never took off. Why? First of all, it's another URL and login to remember; it needs to reach a critical mass before it even becomes useful; and it's not exactly the first thing you think of doing when you've got a web browser open in front of you.

Facebook already has a critical mass, and its networks and relationships map onto existing acquaintances. In many ways, we've switched to using Facebook as a communication channel for all social events because so many more people will see information about them. It also means not needing to visit another website, not needing another login, and so on.

The next stage of this has been to develop a Facebook application to disseminate information but also to extend Facebook's features. Facebook is fantastic, but it won't let you see a list of the people in your classes, for example. With a Facebook application, this can be built. Additionally, Facebook relies on “peer-to-peer” distribution of information: somebody posts a notice or a photo, and then “shares” it with their friends using their wall, messages or invites. The advantage of having a central application is that staff could write a notice which is then automatically disseminated to all students without this peer-to-peer step. And it saves the embarrassment of having your headteacher on your “friends list”!

Of course, I can see this being a fantastic tool for alumni too. And it probably won’t be long until we start offering people $200 on Texas Holdem for inviting their friends to a school open day…

Anyway, I’m rambling. The beauty was how easy it was to develop for Facebook. It is in many ways an extension of everything I wanted Geneone to be: a platform for developing a community. But what Facebook has is an amazing worldwide community already there with one central login system and hub.

I see Facebook as a way to hook into existing real-life relationships and networks without having to reinvent the wheel by attempting to replicate them.

I wouldn't hesitate to suggest that Facebook will overtake Google as the web's hottest property within the next two years. In fact, screw the rest of the internet: let's build everything into Facebook. I've never been so excited about Web 2.0.