18 November 2008

A rant

I have tried hard not to fill this blog with random off-topic rants. I am about to break that rule. If you do not wish to read my rant, please move on - normal service will be resumed shortly.

There is some (tenuous) relevancy about this rant. I will mention user interfaces!

My rant is about iPods. No no, sorry, it's about iPod users and iPod headphones.

I use an iPod. I use it because it was given to me and is 8GB whereas my MP3 player of choice (a Creative Zen Stone Plus) is only 4GB. It is a 3rd gen nano and is truly wondrous to hold and behold. I find the sound quality lacking in comparison to the two Creative players I have had, but it serves my needs adequately.

I am not such a fan of using the iPod. I often find myself getting frustrated at the imprecise and slow controls - I just want to move to a position within a track and change volume without waiting for the interface to allow me!

Anyway, I was sitting on the train this morning - not listening to my iPod - and the woman sitting next to me was listening to hers with the rather poor, but iconic iPod headphones.

She felt she had to have the volume turned to maximum just to be able to hear her music through those rubbish and very leaky headphones! This resulted in the whole carriage "enjoying" her music along with her. And I got the brunt of it.

In short - if you have an iPod, buy some decent headphones!

</rant>

3 November 2008

Browser wars

To finish up this little series of posts about browser standards, I wanted to refer to the browser wars and ask this simple question:

Do we need another browser war?

Well? What do you think?

I think we cannot have another browser war in quite the same way. When it was Netscape vs Internet Explorer, every user spoke (or perhaps didn't speak) and chose (or perhaps was simply given) Internet Explorer. What then happened was that all the proprietary features of IE could be safely used whilst knowing there was a very high support rate.

After IE6 came out and the browser war ended, there was a little stagnation, but really what happened was that people got a bit bored and wanted more. So Firefox came along as a viable competitor. That was the time to have a browser war. The moment has passed.

Right now there is too much competition (or maybe none - it's hard to tell) between the browser vendors. The market is too segmented. We now need the vendors to follow a standard spec. And so we get into the sordid world of the W3C and standards.

Standards?

Yesterday, I posted about some of my thoughts on web standards. It raised some questions for me. It was a post I was reluctant to make but only because I could not reasonably say in one post how I feel and what I think about the current state of affairs. I don't want this blog to descend into a series of rants about the W3C and standards etc etc.

I will attempt now to clarify my thoughts on where we should go from here.

Firstly, I think we do need to make progress. We need to move things forwards. We cannot stagnate. We must find ways to innovate. However, I don't think we have even scratched the surface of what is commercially possible and viable given today's technology. The browser has proven itself to be a remarkable platform capable of producing almost any sort of UI.

So here are some thoughts for each proposed standard:


  1. HTML 5
    I am unconvinced by large parts of the spec, and I think it is far too large a specification. I want to see greater interoperability between the various browsers and as such I believe simplification would be better. I also strongly believe that the next spec should enforce an XML-type syntax. It's easier to maintain and it makes errors easier to find. I would encourage the browser vendors to simplify their parsing engines, not force them into more complicated solutions.

  2. CSS 3
    The power of CSS is in its expressive simplicity. This will not be lost within CSS 3, but do we really need much more on top of CSS 2.1? Do we need to have 4 rendering engines in wide use which are trying to implement a load of new CSS properties that will not be widely used? Are we not better off getting the current specs properly implemented with perhaps a few additions (multiple background images come to mind) which can be progressively applied using the principles of progressive enhancement?

  3. JavaScript 2
    I do not think JavaScript should gain classes and I do not think the language should be greatly extended. Interfaces may be a useful addition, but generally I just think a little tidying is needed. It's in the DOM that work is required. Let's get the DOM objects of the various browsers more closely aligned with each other - the sort of bridging code sketched below should simply not need to exist.
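
To illustrate the gap, here is a minimal sketch (my own illustration, not taken from any particular library) of the bridging code that every developer or library author must write today just to attach an event handler across browsers:

function addEvent(node, type, handler) {
    if (node.addEventListener) {
        //the W3C DOM standard model
        node.addEventListener(type, handler, false);
    } else if (node.attachEvent) {
        //Internet Explorer's proprietary model
        node.attachEvent("on" + type, handler);
    }
}

With properly aligned DOM objects, the first branch would be all that is needed.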



What I am generally advocating is a simplification of the specs to make them manageable for browser vendors. Let's give the browser vendors an easier task to get themselves up to scratch, and then let's ask them to add new features. As always, these new features should be applicable using the principles of progressive enhancement.

The saviour for us all could be JavaScript libraries. They transcend the browser differences, conveniently hiding them. Of course that creates a level of ignorance about those issues amongst some developers, but the libraries are undeniably popular and successful.

But the world of JavaScript libraries is amazingly similar to the early world of the browser. They all do basically the same things but in quite different ways. Some libraries do some things well, others do other things well. Funnily enough, libraries need to converge on something of a unified API. Just like the browsers.

So for the foreseeable future, the web developer will continue to bridge the gap with ingenuity, cunning, blood, sweat and tears. It's OK for me, but will it still be OK in 10 years' time? Will we still be in the same situation? Sadly, I think yes. Does this dull my passion for my job? Absolutely not! I enjoy dealing with these vagaries and having to think on my feet every day!

Surely though, we can sort this out, one way or another...

31 October 2008

Standard nonsense

My thoughts on web standards change on an almost weekly basis. I now consider myself a sceptical evangelist! I believe in web standards as an end but I do not agree with the means by which we seem to be getting there. I think we should be using web standards, but I live and work in the real world where web standards remain nothing more than a laudable goal and an occasional convenient excuse.

Currently, we have HTML 5 and CSS 3 and JavaScript 2 as the shining examples of where we are heading with web standards. I am aware that JavaScript 2 is no more (the end of a great folly in my opinion), however, these three serve to demonstrate my point.

The vast majority of users on the web - and surely it's the users who are most important, not us developers - are using Internet Explorer as their primary web browser. IE is a fine product for browsing the web today and I suspect it fulfils the needs of those users in the vast majority of cases.

A good proportion of users are using Firefox with many using Firefox 3. This too is a fine browser. As is Safari which dominates the remaining small proportion of users.

At this point, others may launch into a diatribe about how users should switch browsers. Some may wish to analyse the statistics on a deeper level. But I wish to just re-iterate that for users, their current browser works fine! Why are we - the developers - getting so upset about users not changing their browser?

HTML 5, CSS 3 and JavaScript 2 are (or were) all meant to be innovations and additions to the interface developer's toolkit, allowing us to do exciting new and innovative things: creating new types of web pages, allowing the user to interact in new and exciting ways. Hmm. Writing that makes me think of AJAX. AJAX was meant to be an innovation and an addition to the interface developer's toolkit allowing us to do exciting new and innovative things. AJAX was meant to allow us to create new types of web pages, allowing the user to interact in new and exciting ways.

Why did it work with AJAX? It's because support was already in the user's browser. When AJAX started to become popular, there were some older browsers in use almost as much as Firefox 2 is in use today. Well, those older browsers disappeared as users could not access the websites that used advanced AJAX techniques. The other very helpful factor for AJAX was that computers became very cheap and very powerful, so users were upgrading more easily and more frequently. Most users only change browser when they change computer.

I believe there are three main reasons why HTML 5 and CSS 3 are not in common use today and will not be for at least another five years:

  1. The W3C
    A large, slow-moving organisation that seemingly cannot decide in which direction to lumber. There has been some progress over the last year, but the W3C has absolutely set the (slow) pace. The gap in time between HTML 4 and HTML 5 becoming proper standards will be mirrored by the gap before users are running browsers which implement those standards.

  2. The browser vendors
    More slow-moving organisations. The era of co-operation never began and never will. There seems to be more effort being made to improve the performance of existing browser features than to implement anything new. This in itself is a positive, but it does not really move the game forwards. This is the vendors playing catch-up so the websites we already build can run in a more reasonable fashion.

    There appears to be so little real competition in this market that vendor-specific features are becoming acceptable again within the web development community. As developers, we just want something shiny and new to play with!

  3. The Users
    The only group that should really matter. Users have not been given any real incentive to change browsers or to support those vendors attempting to implement and push forwards new standards. Users will only shift their position on this in response to catastrophic security issues or a family member who knows better and has a convincing voice.

    Users still use older browsers such as IE 6 and Firefox 2 because there is nothing that the newer browsers offer which is compelling enough to make the change. Users are not technically savvy and nor should they have to be. Users only demand reliability, usable performance and security - even IE 6 can provide that.

Ultimately, there is no business case for building a website implementing new standards. Indeed, the opposite is true - it is better for businesses to ensure their websites work on the widest cross-section of browsers out there which means implementing old standards.

So is this all a cause for depression? Absolutely not. Do we need new standards? Not really! Wow, that was a controversial statement. I would like new and more appropriate standards, I would like those standards to be adhered to so I don't have to deal with cross browser issues. But, I am still doing really exciting and innovative work, I still find new bugs and issues with the browsers and I am yet to see a design or specification for a website which I could not build with today's existing technologies.

30 October 2008

The Quiet

It's been pretty quiet on this blog for the last six weeks or so. I have been very busy working on an interesting, fairly complex and fairly large JavaScript application during the day and then recovering from it at night! I also had the pleasure of a lovely week away in this period of quiet.

I have also recently got my hands on a rather sexy new phone - an HTC Touch Diamond - and have spent a lot of time playing with it. It's a Windows Mobile PocketPC and it's been an interesting experience, particularly with Opera 9.5 Mobile, which is the default browser.

Browsing on a handheld device has been a new experience for me. My phone comes with Pocket IE installed as well, but that is rather archaic. I have done all I can to avoid using it.

Opera Mobile, however, is a fantastic browser. It's fairly fast, pretty standards-compliant and renders most pages rather well. It has also given me a useful insight into a different type of user agent for browsing the web. It's quite a different experience, and it's easy to see which sites are well designed and which are not.

I have taken away a couple of important lessons from my experience. Firstly, clear interface design is very important on this type of device. The browser can zoom into pages but when a page loads, it is zoomed out and most text cannot be read clearly at all. However, a site with good design is very usable as I can zoom in on just the right area. Clearly defined navigation really helps with browsing on a mobile device.

The second lesson I have learnt is that the easiest sites to use employ a liquid or semi-liquid layout. My phone has a resolution of 480x640 and if a site can flow its content to fit into either of those widths, that helps greatly. If I were designing a website today, I would definitely ensure it works in a browser width of 640 pixels!

The final lesson I have taken with me thus far is that the size of elements on screen really matters. I use my finger to navigate a website, and links tightly packed together with a small font size make life very difficult. Small form fields, or form fields with a small font size and another link right next to or just underneath them, can make it difficult to select the element I want to interact with. Finally, heavily styled buttons, especially image buttons, make submitting a form harder than it needs to be, particularly if pressing enter on the keyboard does not work.

For me, the most noteworthy new aspect of interface development that I will carry with me is the importance of keyboard interaction with a form - never ever stop a form being submitted with the enter key!
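
As a brief sketch of the safe approach (the element ID here is invented for illustration): attach any custom behaviour to the form's submit event rather than to a click on a button, so that submission via the enter key is treated identically.

var searchForm = document.getElementById("searchForm");
if (searchForm) {
    //fires for a button click AND for the enter key
    searchForm.onsubmit = function() {
        //perform any custom submission logic here
        //return false only if submission must be cancelled
        return true;
    };
}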

There is much more to say about this phone and browsing the web using Opera, but at least for now, the quiet is no more!

17 September 2008

Another JavaScript pattern - private members shared between object instances

In this post, I will introduce another pattern to add to my previous post about my JavaScript patterns. In fact, this is an extension of those patterns and combines both of them! This is a singleton pattern that creates a constructor!

In this post I will also make use of some aspects of JSquared, so please refer to the JSquared website for more information on those.

So, private members shared between object instances. What does that mean? What it means is that I want to have some private variables and functions which are accessible to multiple instances of a constructor but without them being part of the constructor or the objects the constructor will create. A fairly well known way of achieving this is by making the prototype of my constructor an object built using something similar to my singleton pattern. This would allow public and private members on the prototype of my object. But that will not give me all I wish to achieve this time and it can be a clumsy syntax.

The code I will use to help explain this concept is designed to be a simple panel object. The object will manage a series of panels on a webpage which are linked. Only one panel can be open at a time. Typically one would construct this with a singleton object which finds all instances of the panels in the DOM and adds some handling to each DOM node accordingly. I will do this in a different way. This code example is of course merely a skeleton.

var Panel = new (function() {

    //add a DOM load event
    addLoadEvent( function() {
        //get all DIV elements with a class of "panel"
        document.getElementsByClassName( {cssClass: "panel", tags: "div", callback: function() {
            //create a new instance of the Panel constructor for each panel
            new Panel(this);
        } } );
    } );

    var panels = [];

    function closeAll() {
        for (var i = panels.length-1; i>=0; i--) {
            panels[i].close();
        }
    }

    //return the Panel constructor
    return function(panelNode) {
        this.open = function() {
            closeAll();
            //perform the open logic
        };
        this.close = function() {
            //perform the close logic
        };
        //add this instance to the all instances array inside the closure
        panels.push(this);
    };
});



Let's step through each part of this example and see what it does:


var Panel = new (function() {


Create a new variable called Panel using the singleton pattern.



    //add a DOM load event
    addLoadEvent( function() {
        //get all DIV elements with a class of "panel"
        document.getElementsByClassName( {cssClass: "panel", tags: "div", callback: function() {
            //create a new instance of the Panel constructor for each panel
            new Panel(this);
        } } );
    } );


Using JSquared methods, add an event handler for when the document loads. The handler uses another JSquared method to find all elements in the document which are DIVs with a class of panel and, for each one, runs the supplied function, which creates a new instance of Panel, passing in the DIV node that was found (see the JSquared docs for more info on how getElementsByClassName is used).



    var panels = [];

    function closeAll() {
        for (var i = panels.length-1; i>=0; i--) {
            panels[i].close();
        }
    }


These are the private members which each instance of Panel will have access to. We have an array of panels which will be filled with each instance of Panel that is created, and we have a closeAll function that loops through each instance of Panel and calls its close method.



    //return the Panel constructor
    return function(panelNode) {


We are going to return a constructor (using the standard constructor pattern). The variable Panel that we created at the top of the code example will now take the value of this constructor. In other words, Panel becomes a constructor which we can create instances of using the new keyword.



        this.open = function() {
            closeAll();
            //perform the open logic
        };
        this.close = function() {
            //perform the close logic
        };


Create open and close methods which will perform those actions. In the open method, we first want to close all the panels ensuring only one can be open at any time. To do that we call the private closeAll method which is available through the closure around Panel.



        //add this instance to the all instances array inside the closure
        panels.push(this);


Add this new instance (this line of code is still part of the Panel constructor) to the private panels array also available through the closure we have created.


To recap, we use the singleton pattern to execute some logic before returning a constructor which is then available to us later on in the page execution. We can use the closure this creates to make private members, declared inside the self-executing function which forms the singleton pattern. These private members are available to each instance of the constructor, but the members are not available anywhere else within any JavaScript - as is usual for a closure of this type.

This can be a very powerful and useful pattern. When building a large application, I believe it is good to keep public members of all objects to a minimum and I also prefer not to use the prototype of an object unless I am using inheritance. This pattern achieves both of these aims in an elegant and encapsulated way.
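
As a hypothetical usage sketch (the element ID is invented for illustration), note how the shared members remain unreachable from outside:

var extraPanel = new Panel(document.getElementById("extraPanel"));
extraPanel.open();   //closes every other panel via the shared, private closeAll
typeof closeAll;     //"undefined" - the shared function cannot be reached from here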

2 September 2008

Google Chrome

So, it's finally happened. Google's next step towards world domination is here, and it's a web browser! It's also pretty clever. Indeed, I am writing this post in the new browser.

It's called Google Chrome. The Google Chrome chrome is extremely minimal and I like it. The tabs have certainly taken centre stage and have a nice feel to them. It is extremely fast to use, the JavaScript engine appears to be quick and, well, I am impressed!

It's well worth a download and a play, and don't forget the developer tools - yes, they are included in the browser.

I won't go into any technical details as Google have done a good (if somewhat unusual) job of explaining things. Check out the website for more information.

My favourite feature so far is the way the browser remembers searches you perform on other sites and makes it so easy for new searches to be performed.

If this is how Google are approaching their application development - and let's face it, no-one expected anything different - then I have high hopes for Android. On that subject, I have recently become the proud owner of an HTC Touch Diamond, which is a Windows Mobile phone, and I am keeping my fingers crossed that Chrome Mobile is coming soon!

As an aside, since Chrome is a WebKit-based browser, I was hoping that JSquared would "just work". However, I will be doing some testing and you can expect another report soon, especially given the new JavaScript engine.

21 August 2008

The future of JavaScript

For those who have not heard, ECMAScript 4 is no more. That also means that JavaScript 2 is currently at best paused. Some will mourn this as a great loss. I am not one of those.

JavaScript 2 had some very interesting ideas in it, but I am far from convinced that it was the future. I firmly believe that JavaScript 1.x has a long long way to run yet.

It is not often that I will link to an article on Slashdot, but this one piqued my interest.

The future of JavaScript is very much tied up in the present and future of browsers. Much of what I have to say here is also true about future versions of CSS.

The problems we as web professionals face today are not due to shortcomings of JavaScript or CSS or HTML. They are due to shortcomings of the browsers. Until browsers support the standards, there may as well not be standards.

So we have options. We can introduce new ideas and new technologies which will only be supported by a subset of users for a fairly long period and will then fall into a category of technologies we are reluctant to use due to the need to support older browsers.

Or, we can wait for the browsers to catch up with the standards so that we can exploit them to the full, with guaranteed compatibility.

Well, I don't fancy waiting something like four, five or even six years for browsers to catch up with the standards; I want to do cool, new and interesting things now. So we are left with the former of the two options presented above.

Of course, the thing which allows us to push forward and use the new technologies of CSS and JavaScript which are being introduced and still provide a good level of cross browser support is JavaScript! Look at how the innovative use of JavaScript, largely by library authors, has driven the W3C and browser vendors to introduce new native features to the browsers. This is only of benefit to us.

I firmly believe that changing the core nature of JavaScript now would undermine a lot of the good work of the last few years and it would also be to deny the true power of JavaScript as it stands today. Yes, there is a barrier to people wanting to learn JavaScript - it can be complex, it is difficult to master - but this also is a good thing. Ultimately the standard of JavaScript development will improve and we don't need a new version of the language to make this happen.

JavaScript is an amazingly flexible language and finding new ways of bending it to our will is going to define the next set of standards, the next set of tools that we want to see, and it is the next set of standards that are far more likely to be adhered to than the current generation.

We must not stifle innovation, we must encourage it. Keeping a massive level of embedded knowledge while JavaScript is still in its infancy of true professionalism is the right way forward. Maintaining the core of the language as it stands must be a good thing. Yes, let's play with other languages in the browser, let's even use and support them, but where JavaScript is concerned, my message is clear - don't go changing!

30 July 2008

Complex assignments

I find my coding style continues to evolve as time goes on. This suggests to me that I am still a long way off getting it right! However, it also gives me the chance to play with new ideas and introduce new concepts to my code.

One new pattern that I seem to now be using more and more is one I call Complex Assignments.

To introduce this, it is useful to look at simple assignment code first. An assignment is simply setting one thing equal to another:

var myVariable = 1;

this.myMember = "Some text";

this.myMethod = function() {
    ....
};



These are all simple assignments. Sometimes, though, this is not enough. Sometimes you need to set the value of a variable based on some parameter, e.g.:

function myFunction( myParam ) {
    var myVariable;
    if (myParam === 1) {
        myVariable = "Some text";
    } else {
        myVariable = "Some other text";
    }
}



Now of course this is a trivial example, but you may find yourself needing a more complex set of rules. In that case, the logic around this assignment can become complex. To alleviate this problem, we will often delegate the assignment logic to a function. This is perfectly acceptable, and actually desirable, and in a JavaScript object we can make a private function to do the work:

function myObject() {
    this.someValue = getSomeValue();

    function getSomeValue() {
        //do some work
        .....
        return someValue;
    }
}



However, if this function is only going to get called once, there is no need to make it a named function. In fact, I find it neater to make it an anonymous function. Of course, if it's an anonymous function that only gets run once, we can make it a self-invoking function. To rewrite the example above:

function myObject() {
    this.someValue = (function() {
        //do some work
        .....
        return someValue;
    })();
}



In this instance, the code looks very similar (if somewhat neater). But with this anonymous self-invoked function, we are actually creating another level of scope which will get destroyed once the function has been executed. This can be very useful and can be very efficient. I also think the code is simple to understand and even better encapsulated.

This is particularly useful for setting constant-like variables. I find myself using this pattern more and more as I find more occasions where I want a function to be used to assign a value to a variable.
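
As a concrete sketch of the constant-like case (my own example, using the classic XMLHttpRequest feature test), the test runs exactly once at assignment time rather than on every call:

var createRequest = (function() {
    //the feature test runs once; the appropriate factory is assigned
    if (window.XMLHttpRequest) {
        return function() { return new XMLHttpRequest(); };
    }
    return function() { return new ActiveXObject("Microsoft.XMLHTTP"); };
})();

//later, as often as needed:
var request = createRequest();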

17 July 2008

Coupling, API surface area and change change change

Recently I started a new job focusing much more heavily on JavaScript than before. Having been in this new role for a little while, and having got the release of JSquared 1.1 out of the way, I have had some time to think about what I have learnt so far.

Firstly it must be said that I am thoroughly enjoying working more heavily with JavaScript and I have already learnt some wonderful and interesting things about the way that JavaScript operates and how to make it go a bit faster. No, a LOT faster! I will be talking about some of these things in the future and applying that knowledge to JSquared!

Also, I have recently read the excellent book from Douglas Crockford espousing the good parts of JavaScript. However, what I saw in my new role did not conform too closely, if at all, to the Crockford way of thinking (which I happen to largely share).

First off, the code that exists here is of a very high quality and is well organised, but I have started to realise that there are a number of things I would look to do differently if we could re-write all the code from scratch.

The code has become fairly tightly coupled over time, with one object requiring and expecting another object to exist, and the second object creating a back-reference to the first. There are a number of circular references if one looks through the various objects closely enough.

References to objects are being passed around freely, there is little control being exercised over how values should be accessed, and there is a distinct lack of singletons where I believe they should be used.

This leads to a very large API surface area. By this I mean that there are a large number of methods that each object needs to expose and a large number of object references being passed around when a smaller API with singletons where appropriate would be simpler.

The combination of the above two things leads to a problem. If you want to rebuild all or part of an object, it becomes more difficult as the objects have very little private code. If an object exposes just a few methods and properties as its API, and retains the rest of its workings as private internals, it becomes much easier and simpler to re-write or add to it.
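
To illustrate (a minimal sketch of my own, not code from the application in question), using the singleton pattern I have discussed elsewhere on this blog:

var DataStore = new (function() {
    //private internals - free to be rewritten without breaking any caller
    var cache = {};
    function normaliseKey(key) {
        return String(key).toLowerCase();
    }
    //the entire public API: two methods
    this.get = function(key) {
        return cache[normaliseKey(key)];
    };
    this.set = function(key, value) {
        cache[normaliseKey(key)] = value;
    };
});

Everything private can change at will; only get and set must remain stable.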

This is a JavaScript application which undergoes constant change by multiple people. Whilst this is good, it does mean that there are more and more objects with wider and wider surface areas. I personally believe that this will lead to the need for a complete re-write of large parts of the application sooner.

This brings me to my final point, which is both a positive and a negative. The application is harder to learn, and takes longer to become confident with, because of this large API surface area. I enjoy that though, as I want the challenge!

12 July 2008

JSquared 1.1

It is with pleasure that earlier today I was able to announce the launch of JSquared 1.1. See the blog and the website for more details.

30 June 2008

Not lovely...

Well, it's not often that I have had a bad word to say about Mozilla Firefox. However, for the second time in a week, I now do.

I have been aware of issues with the Mozilla implementation of eval for some time, but the latest exposure was news to me and seemingly many others.

Full details can be found here. There is no reliable workaround that I can find as yet, so I will continue under the assumption that, in the JavaScript world, nothing is safe.

I will still use a module pattern (as I have discussed in a previous post) and I will still call "private" members private. I will also continue to discourage the use of evil eval.

Douglas Crockford has a few things to say about Firefox in general and had his own comments about this latest issue.

I am deeply disappointed frankly. Firefox 3 ruined my week last week (well, truthfully, Firebug was as much to blame) and now this. I suppose it's nice to see that other browser vendors make mistakes!

29 June 2008

Lovely...

This is a lovely little tool.

There is nothing else one can really say about it!

24 June 2008

Firefox 3

Well, it has been a good few days now that I have been running the release version of Firefox 3 and I thought I would share some of my experiences - both positive and negative.

First off, I like the interface updates, the "awesome bar" and the speed improvements. I have been very impressed with its stability as well.

However, I have had difficulty using a number of plugins, in particular Firebug. This is simply unacceptable. I have tried a number of different versions of Firebug but to no avail - all were slow and often crashed Firefox.

Whilst this has proven to be just about acceptable for working at home on things such as JSquared, at work it has proven utterly useless and I have downgraded to Firefox 2 and Firebug 1.05. The improvement in performance of Firebug was immense.

How are you finding Firefox 3 and Firebug? Have you found a combination that works for you?

9 June 2008

My JavaScript patterns

Since posting about my JavaScript "rules", I have had a number of requests to expand on the patterns I use for developing my JavaScript objects.

I use two main patterns, a singleton pattern and a constructor pattern. A singleton is an object which will have only one instance within an application. A constructor is a function which when invoked with the new keyword will create an object based on its definition. These objects are known as instances.

Constructor Pattern
A constructor pattern is something which JavaScript has native support for. Douglas Crockford has previously talked about this pattern at length.

function EventHandler() {
    //constructor

    //private members
    var registeredEvents = [];

    //public methods
    this.registerHandler = function(func) {
        if (typeof func === "function") {
            registeredEvents.push(func);
        }
    };
    this.fire = function() {
        for (var i = registeredEvents.length-1; i >= 0; i--) {
            registeredEvents[i]();
        }
    };
}



A constructor is simply a function which defines various members and contains its own variables and indeed other functions. You can also define constructor logic with a series of statements which do not declare methods or private variables or functions.

It's a fairly simple pattern and it is just as easy to use. To declare a new instance of this constructor, the code is simply this:

var myEventHandler = new EventHandler();



Singleton Pattern
My aim with a singleton is to have a highly readable object-formation pattern which allows for private members and the exposure of a public interface. The syntax must also ensure that only one instance of the singleton can exist, though of course there is nothing to stop the object being inherited from.

I also wanted the pattern to look similar to the constructor pattern above. This should make it easier to code and maintain.

For this example, I am using a simplified version of an object in JSquared. JSquared makes extensive use of this pattern as much of the library is built up from singleton objects. The object I am using here is the cookies object. For details on how to use this within an application of your own, look out for a post on the JSquared blog.

var cookies = new (function() {
    //private members
    var cookies = {};

    //constructor
    //get the current set of cookies
    var currentCookies = document.cookie.split(";");
    var cookie;
    //loop through current cookies and add their values to the cookies collection
    for (var i = currentCookies.length - 1; i >= 0; i--) {
        cookie = currentCookies[i].split("=");
        if (cookie.length >= 2) {
            cookies[cookie[0]] = cookie[1];
        }
    }

    //public methods
    this.set = function(name, value, expiration) {
        document.cookie = name + "=" + value + ";expires=" + expiration + ";path=/";
        cookies[name] = value;
    };
    this.get = function(name) {
        return cookies[name];
    };
    //"delete" is a reserved word, so bracket notation keeps older parsers happy
    this["delete"] = function(name) {
        this.set(name, "", -1);
        delete cookies[name];
    };
});



You will, I am sure, notice how similar this singleton pattern is to the constructor pattern above. The major difference is that the singleton pattern creates the constructor as an anonymous function and then immediately creates an instance of that anonymous function.

Just like when creating a new instance of a constructor, an object defined as per the constructor function will automatically be returned from the invocation. So in the example above, the variable cookies will contain the singleton.

The syntax may look odd, but it has proven itself very resilient and extremely effective. It allows for private members and functions, and the public methods have access to those private members. It contains constructor logic and, of course, parameters can be passed into the constructor just as they could be in the constructor pattern shown above - as the sketch below shows.
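
For example (a hypothetical sketch, not the real JSquared code), a prefix parameter could be passed in at the moment the singleton is created:

var prefixedCookies = new (function(prefix) {
    var cookies = {};
    //...constructor logic as above...
    this.set = function(name, value, expiration) {
        document.cookie = prefix + name + "=" + value + ";expires=" + expiration + ";path=/";
        cookies[prefix + name] = value;
    };
    this.get = function(name) {
        //the parameter remains available through the closure
        return cookies[prefix + name];
    };
})("myApp-");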

As I seem to often say, it's as simple as that!

27 May 2008

My JavaScript "rules"

In my day to day life, I see a lot of JavaScript code. I see a lot of code written by others and a lot of code which I have written in the past. As I look back over my own code and my coding style over the last 2 years or so, it is easy to see how far I have come and how much better my code is now.

None of this is meant to suggest that it cannot be improved. Nor am I saying that it is even that good! But, I thought it might be useful to talk about some of the ideas I have employed. These are my JavaScript "rules". I try to apply them in everything that I do:

1. Be unobtrusive
This is an imperative, key rule. Unobtrusive JavaScript code is easier to read, easier to write and is encapsulated within a JavaScript layer in your application. Writing unobtrusive code means having no JavaScript in the markup of your documents. Make use of browser events (onLoad, DOMContentLoaded etc) to apply the JavaScript enhancements to your pages. Use IDs and, if necessary, classes to pick DOM nodes from the document to work with.
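
A brief sketch of the idea (the element ID is invented, and addLoadEvent is assumed to be a cross-browser load-event helper such as the one used elsewhere on this blog):

addLoadEvent(function() {
    //no JavaScript in the markup - all behaviour is attached from here
    var loginForm = document.getElementById("loginForm");
    if (loginForm) {
        loginForm.onsubmit = function() {
            //enhance the form
        };
    }
});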

2. Name your spaces
Namespaces make for much friendlier JavaScript. If nothing else, a namespace will stop you polluting the global scope of the document. I always have one root namespace for an application and then other namespaces off that to define the functionally different regions of the application. Always group similar functionality. If, for example, you have a bunch of objects which are used for handling forms and form validation, group them all under a common namespace. Creating a namespace is as easy as creating an object. Here is an example of creating a root namespace and two sub-namespaces:

var Nortools = {};
Nortools.Login = {};
Nortools.Forms = {};
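
If there is any chance a namespace will be declared in more than one file, a guarded variant (a common idiom, not part of my original example) avoids clobbering an object that already exists:

var Nortools = Nortools || {};
Nortools.Forms = Nortools.Forms || {};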



3. Cry over spilt milk
I always encapsulate functionality within relevant objects and namespaces. Don't let that functionality spill out of the object or the namespace. Use private members in objects to hide the secrets of an object and strengthen your code. This will prevent your application from being changed or interfered with in any way by other JavaScripts on the page. I use self-executing functions to create completely closed and hidden objects, neatly encapsulating my code. It is even possible to have private members on the prototype of a constructor using this technique:

var myConstructor = function() {};

myConstructor.prototype = new (function() {
    var myPrivateMember = "some value";
    this.getPrivateMember = function() {
        return myPrivateMember;
    };
});


This code may look odd, but the prototype of myConstructor is now a singleton object, created by a self-executing function, which cannot be forced to give up its secrets. I use this technique for creating all singleton objects - it's encapsulated!

4. Separate
I always separate the various types of code in my applications strictly. If I am dealing with data, I create a data layer to manage all that logic with as few public methods as possible. If I have business rules, the same applies. All presentation logic should also be self contained and independent of the data and the business rules which translate that data into objects which can then be displayed.

5. I object
Use objects and use them extensively. I never store any value within global scope and with everything in a well organised object model, you always know where things are. Use the singleton pattern above (or the many others which are just as good) to create singleton objects - an object that only has one instance.

6. JSON
JSON is extremely useful and powerful. I always use JSON for storing and transferring data in my applications. I also make extensive use of object notation for providing options to my objects. Passing an object literal (in JSON style) to a constructor can be much easier than handling optional parameters. This is one method for handling optional parameters:

var myConstructor = function(param1, param2, param3) {
    param1 = param1 || "defaultValue";
    param2 = param2 || "defaultValue";
    param3 = param3 || "defaultValue";
};

var myInstance = new myConstructor( "value", null, "value" );


With this method, on creating an instance, I have to know which parameter is which and what order they are in. It's also slightly clumsy-looking.

My preferred method is:

var myConstructor = function(options) {
    var param1 = options.param1 || "defaultValue";
    var param2 = options.param2 || "defaultValue";
    var param3 = options.param3 || "defaultValue";
};

var myInstance = new myConstructor( {
    param2: "value",
    param3: "value"
} );


Much neater and I find it much easier to understand and read.

7. Be a detective
I always try to code defensively. I don't want errors to even be possible in my code. To this end, I always want to detect whether the operation I am about to perform is possible. Sometimes this means checking my own objects for a feature, sometimes a built-in object. This is much more effective than user agent sniffing for browser features. Not only do I not have to know what each specific browser supports, but I also do not have to keep changing my code each time there is a new version of a browser. It makes my code future-proof!

if (document.getElementById) { //check the browser supports getElementById
    var myElement = document.getElementById("myElement");
    if (myElement) { //check an element was found
        //perform some action
    }
}



8. Lets go native
Always use native functionality when it's there. It's the fastest option. If you want to have a getElementsByClassName function in your application, check whether the browser already has it built in and use the native method if it is there. Replacing it will only make your application slower.
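
A simplified sketch of that very example (my own illustration, not the JSquared implementation):

function getByClassName(root, className) {
    //use the native version when the browser provides one
    if (root.getElementsByClassName) {
        return root.getElementsByClassName(className);
    }
    //otherwise fall back to a slower, manual search
    var results = [], all = root.getElementsByTagName("*");
    for (var i = 0, length = all.length; i < length; i++) {
        if ((" " + all[i].className + " ").indexOf(" " + className + " ") > -1) {
            results.push(all[i]);
        }
    }
    return results;
}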

9. Delegation's what you need
Events are very important when building web applications. But adding event handlers to lots of nodes can be bad for performance and lead to management overheads. If, for instance, you have a list of items and clicking on an item in the list performs an action, consider adding the click event to the parent of the list items and using a delegation pattern to handle the event. For a more detailed explanation see this excellent article.
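
A minimal sketch of the idea (the element ID is invented for illustration):

var list = document.getElementById("itemList");
list.onclick = function(e) {
    //normalise the event object and its target across browsers
    e = e || window.event;
    var target = e.target || e.srcElement;
    if (target && target.nodeName.toLowerCase() === "li") {
        //one handler on the parent serves every item, present and future
    }
};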

10. Get organised
I always try to organise my code well. It is vital during a build phase of a project to be well organised. I only ever put one object or constructor in a JavaScript file and I name the file accordingly. If the full path (taking into account namespaces etc) to my object is MyApplication.ApplicationSection.ObjectName, then I will name the file MyApplication.ApplicationSection.ObjectName.js. In a large application, my folder structure will represent the namespace hierarchy as well.

Choose a coding style for your application and stick to it. I have my own coding style with naming conventions etc, and I always stick to it. It makes the code more consistent and easier to read and understand.

Comment code wherever it is needed to help understand why the code does what it does. Don't comment code when the comments will not add to the understanding of the code.

11. Don't panic
Writing JavaScript is hard. Writing web applications is harder still. Don't panic. Help is at hand. Use a library.

24 May 2008

JSquared 1.1 Beta 1

Just a quick note to point you to information regarding the release of JSquared 1.1 Beta 1.

14 May 2008

Usability and accessibility

In recent weeks I have been spending much of my time at work thinking about usability, commenting on it and trying to convince others of my ideas.

This thinking has been focussed almost solely on two tasks. The first of these was an interesting proposition - to demonstrate that projects previously completed using Flash could be built using JavaScript. As a useful by-product, I have been able to make some more use of JSquared and particularly my alpha build of FXSquared!

There were two particular products that I have been looking at, the first of which was a finance tool for Fiat. I spent two days working on this task and as such was only able to build part of the tool.

I started by building the dials which are the main control for the tool. This involved building a number of animations and some simple interactions. I then worked on the panel which flashes as the values change. Amazingly, in only two days, I was able to match all the functionality of the dials and get the values updating in the panel and get the panel to flash.

Accessibility was not an important consideration; however, when reviewing the work I was able to complete, the markup I had written was valid and semantic and the product fully accessible. It was certainly no less usable than the Flash version and, because the tools could be controlled with the keyboard, it was perhaps more usable. It was accessible, worked without CSS and could very simply be made to work without JavaScript. It was a triumph.

The other product I worked on, was in a similar vein and was a similar success.

The original project manager on the Fiat project could not tell the difference between the two products and it was entirely cross-browser.

The point I am making here is that by writing the tool using semantic HTML and progressively enhancing the code with CSS and JavaScript, I was able to make something more accessible and more usable all at the same time.

My second key experience was joining a project that had a significant development effort behind it already. I was asked to point out where I thought the product was not usable. I found myself pointing out eight areas that I had issues with. When I took a step back and looked at these comments, I realised how each of them could also have appeared on a list of changes to make the product more accessible!

These experiences have really driven home for me how closely linked accessibility and usability are and how investing in one will inevitably be investment in the other. What a great selling point for spending more effort on these vital areas that are sometimes overlooked by clients.

11 May 2008

The JSquared blog

I am excited to announce the launch of the JSquared blog - http://blog.j-squared.info/.

From now on, this blog will discuss general web interface development issues whilst the JSquared blog will discuss features of JSquared and provide news of updates etc. You will see the latest few items in the left pane of this blog as well for convenience.

I have copied over the relevant posts to the JSquared blog. Please let me know if you have any comments.

8 May 2008

JSquared update

First of all, apologies for the sparsity of posts in the last few weeks. It's been a busy couple of weeks, but there is plenty to discuss. More of that to come in the next week or so.

The big news however is that JSquared has a new home - www.j-squared.info. There will be a major update to the website to coincide with the release of JSquared 1.1 in late June or early July. JSquared will have its own blog, freeing this blog for non-JSquared related topics generally. More on that in time to come.

An update on the progress of the goals for release 1.1:

JSquared Testing Platform
Using JSUnit, I have had much success testing JSquared. I have about 20% code coverage in the core library so far. My aim is still to have full coverage of the core library for version 1.1

Documentation
Using JSDocs, I have documented around 40% of the core library and I am getting close to a full API reference. This will be available along with a quick start guide for JSquared 1.1

FXSquared
The core FX library is complete. It is based on plugins and I am working on two plugins. I have been using the FX library for a few weeks now and it is nearly ready for release. It will be released with JSquared 1.1

IE 8 Beta Support
There has been some work on general compatibility updates and JSquared continues to improve. It is unlikely that JSquared 1.1 will be fully tested and working on any beta platform, but there will be compatibility updates to help with IE 8 and Firefox 3 support as well as improved Opera 9.5 support.

I hope you are as excited as I am by these updates and new developments. JSquared work continues at a good pace (despite protests from my wife) and I am hoping to increase the usage of the library over the remainder of the year.

Just to keep you all excited, Chris Heilmann gave a talk at AKQA last week and he has posted about it.

27 April 2008

Enhancements to CSS

This post presents an interesting idea, that of CSS variables. On the surface, this could be a brilliant idea, though there are some potential flaws.

Firstly, there is the issue of redefinition - if a variable is defined and then redefined. This is particularly pertinent if the variable has been used before it is redefined.

Then there is the issue of CSS injected via JavaScript. The final issue I will raise about this is browser support.

Browser support is an issue which is close to my heart and about which I feel strongly. I firmly believe that with progressive enhancement, sensible design and tolerance of some differences between browsers from the client, a website can be made to support multiple browsers with minimal effort.

New innovations are fantastic and should be encouraged, but changes to the CSS specs can cause issues. We cannot start using this sort of new feature until a major proportion of users are accessing our websites with a browser which supports it.

So, what can we do about this? It is hard to reconcile the need for general cross-browser support and the desire for improvements and more features in the underlying specifications we use to build websites.

As I see it, there are a number of things that can be done. Firstly, we can educate our clients that it is OK for there to be differences. Secondly, we could build websites in "the classic manner" and also include the CSS style rules introduced by updates to the specification - but this may not work that successfully, for obvious reasons.

As always, we interface developers will generally overcome these issues with a combination of techniques based on the principles of building to web standards and of progressive enhancement.

What do you think we can do to overcome these issues? How can we bring about changes and enhancements as wide ranging as CSS variables without compromising the installed base of users who do not support new features and who can take many years to upgrade?

23 April 2008

When perfect is not perfect

I must recommend this post by Marcus Alexander. He moves the argument against pixel perfection - which I have already talked about in a previous post - forwards.

It is my belief that achieving pixel perfection across a wide range of different browsers costs a project a disproportionate amount of effort and distracts interface developers from the truly important aspects of a website, those which should be perfect. I am talking here about minor differences, generally related to the rendering of standard elements - form fields, for instance.

We should be aiming for valid markup, high quality well organised CSS and unobtrusive object oriented JavaScript - we should never have JavaScript errors. We should be working towards best practices and we should be implementing web standards wherever possible.

We should be listening to what our clients want and delivering a high quality solution which successfully addresses the problems the client wants solved.

However, we should not be expected to work around every single minor layout issue or to change system defaults. Users understand how their default system controls work; they are inherently usable and accessible. A user also does not care if a bit of text is a few pixels out.

Let's take an example from another aspect of life - television. Television producers will ensure that their programme is executed perfectly with a fantastic script, perfect camera work, flawless sound etc. However, if you are viewing a programme being broadcast in widescreen on a non-widescreen television, you will miss part of the picture - the edges are removed to fit onto the television. This is a good example, as the producer can decide which part of the picture gets cut off.

I must re-iterate, however, that it is vital to produce high quality work which conforms to industry best practices and is accessible to as many users as possible. There is a big difference between accessible to all users and perfect for all users.

As Marcus states, no end user of the website is going to be aware of small differences between browsers - and nor should they be. They will only be unhappy if the website does not operate or is so poorly laid out as to be unusable on their browser of choice.

I am not suggesting that anything will do. I am suggesting quite the opposite. We need to be perfect in most things. But as long as the differences are small, the basic site layout is not compromised and the full website is usable to all, achieving pixel perfection does not produce the returns its cost surely demands.

22 April 2008

Asynchronous JavaScript Part 5 - The JSquared AJAX Object

In my previous post in this series, I introduced some of my thoughts on AJAX. I will now go into detail as to how to use the JSquared AJAX object.

The AJAX object, just like ADIJ, is an instance-based object. That is to say, for each AJAX request you wish to make, you create an instance of the object and then call a method on it to send the request. The same object can be reused, or new instances can be created.

The AJAX object requires certain parameters to be provided, and generally it is easiest to do so in the constructor; however, the object has methods for setting these values later.

Only one parameter is required and that is the URL the request is going to. The full list of parameters is:

URL - must be provided either in the constructor or using the setUrl method
method - the HTTP verb to be used for the request. Defaults to GET
onSuccess - the callback function if the request is successful
onFail - the callback function if the request fails
scope - the scope in which to run the onSuccess or onFail handler (the scope sets the value of this within the callback function). Defaults to the handler function itself
timeoutLength - the maximum time to wait for a server response to the request before timing out the request and calling the onFail handler. Defaults to 12 seconds
headers - an array of objects containing key and value pairs which are the additional headers to add to the request

Each of these parameters can also be set via a method once the object has been created.

To send the request, simply call the send method:

var myAjaxRequest = new J2.AJAX( {url: "myAJAXUrl.html"} );
myAjaxRequest.send();



The send method accepts an argument which is the string of data to add to the request. This argument is optional.

As you can clearly see, the parameters are passed into the constructor as a JavaScript object using object notation.

If you provide an onSuccess handler, that will be called if the request completes successfully and will have the AJAX object passed in as the first argument to the function. This will allow all the properties of the request to be accessed.

If the request should fail, the onFail handler will be called. The first argument passed to the fail handler is the AJAX object. The second argument will be the failure code. The code can then be examined by comparing it against the value of the AJAX failure codes object which looks as follows:

J2.AJAX.FailureCodes = {
    general: "xx1",
    unauthorised: 401,
    notFound: 404,
    timeout: 408,
    server: 500
}



If you do not provide an onSuccess or onFail handler, then the object will simply do nothing - there will be no error shown.
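
Putting the pieces together, a typical usage sketch might look like this (handler signatures as described above; the URL and handler bodies are illustrative):

var myAjaxRequest = new J2.AJAX( {
    url: "myAJAXUrl.html",
    onSuccess: function(ajax) {
        //the AJAX object is passed in - examine its properties here
    },
    onFail: function(ajax, failureCode) {
        if (failureCode === J2.AJAX.FailureCodes.timeout) {
            //the server took too long to respond
        }
    }
} );
myAjaxRequest.send();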

Once again, I have to say that it is as simple as that. This is a really powerful but easy to use AJAX object.

This is the end of the series on Asynchronous JavaScript. I will be writing more about the other features of JSquared in the near future.

21 April 2008

JSquared road map and update

I have literally been swamped by a request for a road map for JSquared. In response, I have created this wiki page.

I have some ambitious plans for JSquared and it may not all be possible - much depends on how much my wife puts up with me working late into the night. But my aims are stated there.

I will be releasing periodic compatibility updates in between these major releases.

The single most requested feature for JSquared thus far is documentation, and it is high on my priority list. I intend to provide this as a set of object models output using JSDocs, plus an accompanying guide. Following that, I hope to be able to get a JSquared website up and running, full of example code.

A set of unit tests and indeed a full testing platform for JSquared is also very high on the priority list and progress is being made on this. The intention for JSquared 1.1 is to get the core functions unit tested.

FXSquared is making good progress with a basic FX module ready for some heavy duty testing. FXSquared is built around plugins to allow for maximum flexibility.

IE 8 support continues to improve with each commit of the code. I expect IE 8 support to match that of all other browsers for JSquared 1.1.

Please use this post or the wiki on the current JSquared home to comment on the road map, especially the form that the documentation should take.

17 April 2008

CSS reset and pixel perfection

This recent post from Jonathan Snook nicely sums up my feeling on CSS reset files.

I have often argued that CSS reset files will end up causing more problems than they solve. Indeed, I am not convinced they solve a real problem.

The problem seems to be that each browser I support does not apply the same default styling to some elements that I may use in my website.

The solution proposed by CSS reset files is to create a baseline set of CSS rules which, on first inspection, solves this problem. However, I am not sure I agree that the problem is as stated above.

I believe the problem is that each website I code looks and behaves differently and does not use the same set of default styles as other websites I have developed.

The solution I propose is to reset only the styles I actually need to make the website look correct, using as few rules as possible. Only then am I interested in the differences between browsers. I would still use the * reset myself, as I find it generally useful and it makes a fairly large difference when designing my CSS, but I would not employ any more of a reset than that.

As an example, my website may not use any level 3 headings. In that instance, my CSS code will not try to select any level 3 headings at all. If later during development I need to use a level 3 heading, I will use the tag and add the relevant CSS code. I have always employed this approach and it has so far never let me down.
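
A minimal sketch of the approach in CSS (the rules here are illustrative only, not from any real project):

/* the only reset I use */
* {
    margin: 0;
    padding: 0;
}

/* then style only the elements the design actually uses */
h1 { font-size: 2em; }
h2 { font-size: 1.5em; }
/* no h3 rule until the site actually uses a level 3 heading */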

For more details on my approach to structuring CSS, I suggest reading this post. Of course ideas change over time, but it is still relevant.

I always believe in code being as lightweight as possible, not purely for speed of download but for ease of maintenance and the retention of my own sanity!

The other point raised by the post I mention above is that of producing cross-browser pixel perfect layouts. This is a topic that probably deserves its own post, if not a series of posts; however, I rarely strive for pixel perfection. I just do not believe it is that important any more. I would argue that users are becoming aware that websites will look very slightly different on different platforms. I did not perfectly explain my position in my previous post about browser support, so I will clarify it here.

In that post, for level 1 browsers (the only relevant browsers in this discussion) I state:

All features of the website are fully functional. All content is available. The web pages will match the designs provided completely. The web pages will look the same across all browsers at this level.


What I should have said was:

All features of the website are fully functional. All content is available. The web pages will match the designs provided completely with the exception of system entities such as form fields and rendered fonts which may cause a layout to differ slightly. The web pages will look the same across all browsers at this level within the constraints of the different rendering methods for each platform.


That more fully expresses how I feel about pixel perfection. Sometimes it can be done, sometimes it cannot. We should educate our clients that pixel perfection is not always important.

9 April 2008

WOW - Seriously

I saw this post on Ajaxian and could not resist posting it here myself.

It is truly quite awesome!

Super Mario in 14Kb of JavaScript

8 April 2008

outline:0

There is nothing more annoying for those who wish to navigate a website using the keyboard than the extensive use of outline:0 in CSS code. For a more detailed explanation of what this is and what it means, I suggest you read this blog post.

Although I am lucky enough not to suffer from any form of disability, and do not strictly require any accessibility effort from web developers, I do like to browse using the keyboard on occasion, and this can be a highly frustrating problem.

CSS reset files such as the one mentioned in the post can leave developers without a full understanding of the nuances of the platform they are developing for. It is at least in part for this reason that I don't use a CSS reset at all.

My message to all would be not to remove this extremely useful built-in behaviour but actually to enhance it. Try navigating the London 2012 website (a site I led the development of) with the keyboard to see how nice it is to have the outline behaviour enhanced rather than removed.
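
As a sketch of what I mean by enhancing rather than removing the behaviour (the colour and width here are arbitrary):

/* keep the built-in focus indicator, but make it more obvious */
a:focus {
    outline: 2px solid #f90;
}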

7 April 2008

Documenting JSquared

One of the biggest missing aspects from JSquared is documentation.

I have been looking at using an automated tool to build documentation from comments in the code (another thing which is currently lacking) and this is going to be a major push for me over the next 2 months.

I have almost settled on using JSDoc. I would be very thankful if the readers of this blog could give any thoughts on JSDoc or any alternatives that I should be looking at.

Thanks in advance.

Safari 3.1 and IE8 Beta 1 support in JSquared

With the release of Safari 3.1 on PC and Mac, the handling of keyboard events has changed, as detailed in this post on Ajaxian.

I am delighted to say that the auto filter object in JSquared 1.0 is fully compliant with Safari 3.1 and no changes to the code are required. This is the only object in JSquared with keyboard handling at present.

The big news recently though is the release of IE8 Beta 1. I am currently testing JSquared with this release and things are looking good. I expect there to be some minor changes to the code which will precipitate the release of a JSquared compatibility update. More news on this in the next few weeks.

5 April 2008

Asynchronous JavaScript Part 4 - AJAX (a quick introduction)

In the previous 3 parts of this series on asynchronous JavaScript, I have talked at length about ADIJ. I suggest starting at part 1 to catch up!

AJAX is a term which has been bastardised to describe any sort of interactive behaviour in a web page. A number of years ago, these interactions were known as dHTML (dynamic HTML). Neither AJAX nor dHTML perfectly describes these types of interaction, but I prefer the latter name.

AJAX to me should mean Asynchronous JavaScript and XML. This is not perfect as it is often the case that XML is not appropriate and sometimes JavaScript, JSON, HTML or even plain text is required instead. Nonetheless, the acronym AJAX still has one definite meaning to me!

For the purposes of this post and the remainder of this series I will use it to mean any form of asynchronous HTTP request whose response will be handled using JavaScript code whatever form that response takes.

AJAX is a very useful and powerful technique. The underlying XMLHttpRequest object was invented by Microsoft for Outlook Web Access and later adopted by the other browser vendors. When used in a fairly light-touch manner, AJAX techniques can give the user a greatly enhanced experience.

AJAX involves sending an HTTP request to a web server and getting a response asynchronously which is handled in JavaScript. This means more data can be requested from the server or passed to the server without the user seeing the web page refresh.

An asynchronous request happens in the background - it allows other operations to be performed whilst the HTTP cycle completes. You provide a callback function to the AJAX object which is called once the HTTP cycle is complete. I won't go into more detail around implementation and what it all means as there are many excellent tutorials already written. W3Schools has one of its own.
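
For those who want to see the raw mechanics before reading a full tutorial, here is a minimal sketch using the standards-based XMLHttpRequest object (older versions of IE need an ActiveX equivalent, which I omit here, and the "Content" element id is purely hypothetical):

var xhr = new XMLHttpRequest();
xhr.open("GET", "data.html", true); // true = asynchronous
xhr.onreadystatechange = function() {
    // readyState 4 means the HTTP cycle is complete
    if (xhr.readyState === 4 && xhr.status === 200) {
        document.getElementById("Content").innerHTML = xhr.responseText;
    }
};
xhr.send(null);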

Using AJAX it is possible to do things such as posting a form or updating the content of a web page without the user going through a page refresh cycle. This makes the web page seem more like an application and can make it better for users to interact with.

Of course, JSquared has an AJAX component which is extremely simple to use yet offers flexibility and power. In part 5 of this series, I will walk through the AJAX object and then discuss how to use it.

3 April 2008

Future proofing

With updates to some of the most popular and widely used browsers soon to appear - namely IE 8 and Firefox 3 - the question has to be posed about how and when to support them.

It is a fairly simple matter to test your websites in the new versions of these browsers as there are beta versions of both available; however, it can be very time-consuming to fix issues, and clients can find it a bitter pill to swallow.

The biggest problem here is a lack of overall transparency about how and when these browsers will be released. This issue is much much greater where IE is concerned.

We know that Firefox is slated for a June release and it is likely that the automatic update system will offer users the new version. It is therefore reasonable to suppose that there will be a fairly rapid surge in the number of users for Firefox 3.

As far as IE is concerned, we cannot be sure of a release date or the mechanism by which the update will be delivered. It is possible that there will be a split between IE 7 and IE 8 users for some time until IE 8 is pushed through automatic updates as a high priority update. This in a way is what has happened with the IE 6 to IE 7 transition.

I think it is reasonable for an interface developer to support 2 versions of a popular web browser (perhaps at different levels of support), but I do not like the idea of having to support 3 versions at any level, particularly when one of those is IE 6!

So, the question is: what to do? I am unsure at the moment, though I am considering testing my websites in Firefox 3 from now on and attempting a good level of support. As far as IE 8 goes, if its IE 7 mode works well, that is a very good reason not to support IE 8 until it is actually released.

If Microsoft released a road map with release dates for IE 8 and the version afterwards, then I could plan my browser support strategy much more easily. So come on Microsoft, talk to me....

31 March 2008

Progressive degradation

I should start this post with a disclaimer. I did not discover or invent this technique. It was suggested to me by a now ex-colleague, Marcus. He also does not like what I have named this technique. But you can make your own mind up.

We should by now all be aware of progressive enhancement, what it is and why to use it. Well, progressive degradation actually assists with progressive enhancement and in some ways turns the idea on its head to make life easier and simpler. Whenever I have employed progressive degradation, I have found things easier to develop and maintain.

Progressive enhancement generally involves writing high quality semantic markup and then layering onto it additional elements which do not themselves add meaning, but which enhance the visual presentation of the content of a web page or make it easier and nicer to use.

Generally progressive enhancement will mean layering CSS on to the markup to make the web page look nice, and JavaScript to make the web page easier and nicer to use. Progressive degradation turns that idea on its head.

With progressive degradation, we still need high quality semantic markup. That is always the base. We are still going to layer CSS on to the markup to make the web page look nice. The big change is with the JavaScript.

Using progressive enhancement, we would write the CSS in such a way that if the user does not have a JavaScript enabled browser, the site works well and looks correct. If the user does have a JavaScript enabled browser, the JavaScript code itself would make changes to the way the CSS is applied (and perhaps to the markup as well) to give the web pages a slightly different look and additional behaviours.

Using progressive degradation, we apply the CSS to the markup on the assumption that the user has a JavaScript enabled browser. Let me make that clear: we assume the user has JavaScript enabled within their browser.

So where is the magic? The trick is that we include an additional CSS file for those users who DO NOT have JavaScript enabled in their browser which will re-style the content to be usable without the JavaScript enhancements.

This technique is not suitable for all occasions, but when I have employed it within a website, I have found a way to use it for every enhanced element.

The major advantage of progressive degradation is that there is no page flicker when JavaScript alters the content on page load, because the CSS loaded with the page already sets the page up for the JavaScript-enhanced version.

It's an extremely simple technique to apply and the following code is all you need:

<noscript>
    <link rel="stylesheet" href="noscript.css" type="text/css" media="screen" />
</noscript>


As you can see, all that is involved is the addition of a CSS file inside noscript tags. Whilst this does make the markup for your web page invalid, I believe it is a small price to pay for the power it brings. To demonstrate that power, an example could be useful.

The example I will use is that of a form which will add an item to a list. Without JavaScript, the form will always be visible. With JavaScript, there will be an add button which when clicked will show the form. How the form submission is handled here is not important but one could use an AJAX request for those user agents which support it and a postback cycle for those which do not.

The 2 versions of this page element look as follows:

With JavaScript enabled: [screenshot - the input field and add button are hidden; only the list and the "Add item" label show]

Without JavaScript enabled: [screenshot - the list and the complete form are visible]

The markup is as follows:

<div id="ListContainer">
    <ul>
        <li>Item 1</li>
        <li>Item 2</li>
    </ul>
    <form action="page.html" method="post">
        <label for="NewItem" id="NewItemLabel">Add item</label>
        <input id="NewItem" name="NewItem" type="text" />
        <input type="submit" id="SubmitForm" value="add" />
    </form>
</div>



The pertinent CSS for the purposes of this article is:
#NewItem, #SubmitForm {
    display: none;
}


This CSS will hide the input field and the submit button.

The following (somewhat rough) JavaScript will make the progressive enhancement work so that when clicking on the label for the form field, the input field and submit button will show:
<script type="text/javascript">
    window.onload = function() {
        document.getElementById("NewItemLabel").onclick = function() {
            this.style.display = "none";
            document.getElementById("NewItem").style.display = "block";
            document.getElementById("SubmitForm").style.display = "block";
        };
    };
</script>


So, we now have a working form module as described above. When the page loads, some elements are hidden by the CSS - there is no flicker while the JavaScript kicks in on page load because the relevant elements are already hidden! Clicking on the label will hide the label and show the relevant elements.

Now for the progressive degradation:
<noscript>
    <style type="text/css">
        #NewItem, #SubmitForm {
            display: block;
        }
    </style>
</noscript>


And that is all there is to it. Without JavaScript enabled in the web browser, the user will see the complete form. With JavaScript enabled, the user gets an enhanced experience. If you re-read the last 2 sentences, you will notice how right that sounds, and yet we are mixing the concepts of progressive enhancement with those of progressive degradation.

Of course we could take this example much further, writing some JavaScript to allow the user to return the page to its load state if they change their mind and do not actually want to add another list item, or to handle the form submission itself. All of this would be progressive enhancement, but progressive enhancement made even easier by starting with the concept of progressive degradation.
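
As a rough sketch of that first enhancement (hypothetical code, building on the example above - it adds a cancel link which restores the load state):

document.getElementById("NewItemLabel").onclick = function() {
    var label = this;
    label.style.display = "none";
    document.getElementById("NewItem").style.display = "block";
    document.getElementById("SubmitForm").style.display = "block";

    // add a cancel link which puts the page back to its load state
    var cancel = document.createElement("a");
    cancel.href = "#";
    cancel.appendChild(document.createTextNode("cancel"));
    cancel.onclick = function() {
        document.getElementById("NewItem").style.display = "none";
        document.getElementById("SubmitForm").style.display = "none";
        label.style.display = "inline";
        this.parentNode.removeChild(this);
        return false;
    };
    label.parentNode.appendChild(cancel);
};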

The key to how progressive degradation makes this easier is that we know the non-JavaScript version of the page will work perfectly. We can then add as much progressive enhancement as we like, concentrating on writing fantastic JavaScript to enable exciting but, as always, simple and light-touch enhancements to the web pages we build.