From reading the title of this post you might assume that I am about to talk about what I do every day and have always done. You are only partly right.
The specific topic I am interested in is the building of completely flat websites: that is, the type of delivery to a client made up of HTML, CSS and JavaScript that will never be integrated into a back-end, will never have any other code wrapped around it, but will be kept as a series of flat pages.
This is something I have had to do recently for the first time in a while, and the question of how to achieve such a thing elegantly, keeping the trials of development to a minimum, has been occupying some of my time.
What I did in this instance was use server-side includes to pull all the common components of each page from a central resource - things like global navigation, the footer and so on. When it came time to send the code to the client, I then ran an ANT script (as supplied by the esteemed Mr Alexander, once more a colleague) to output the flat pages.
This seemed to work well for me. However, I was not happy with the way this worked for the CSS and JavaScript code.
During development, I wanted the CSS and JavaScript code broken down into many files to help me find where code was, organise new code and fix bugs. I wanted the convenience of loading Firebug and being able to see exactly where my style declarations came from on any particular node.
For the final delivery, I wanted one CSS file and just a few JavaScript files. But how to make that happen without breaking my ideals during development? Well, I could end up using ANT to piece it all together, just like the HTML code. But that felt a bit messy and, frankly, hand-crafting XML is about the worst thing I can think of doing.
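To be fair, the ANT needed for this is small. A sketch of the kind of target I mean - the paths and file names here are illustrative, not from a real build:

```xml
<!-- Concatenate the many development CSS files into the single delivery file.
     Paths are made up for this example; adjust to taste. -->
<target name="build-css">
    <concat destfile="build/css/all.css">
        <fileset dir="src/css" includes="*.css" />
    </concat>
</target>
```

The same concat task works for rolling the JavaScript down to a few delivery files, though note that fileset ordering is not guaranteed - a filelist naming the files in order is safer when cascade or load order matters.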
This is where I stop providing solutions and pose a question. How would you have prepared your build? How would you have made sure that during development you could pull in only the bits of JavaScript that should have been on each page, and then built the flat files correctly? It's more complex than it first appears...
19 November 2009
22 September 2009
Reflecting on a return to action
I have now been with my new employer - EMC Consulting - for four weeks and, to be frank, I now have a lot more of interest to discuss on this blog. So, I thought I would restart my posting here with some thoughts on these first four weeks.
I have spent the last fourteen months working almost exclusively on a large and complex JavaScript application and I won't deny that I learnt quite a bit during that time. But I still find myself glad to be back working with all of my skills in a more balanced fashion. It's been refreshing to see how much people care about UX and accessibility, and it is a pleasure to be able to combine that with my passion for high quality, minimal CSS and HTML code.
Already, pragmatism has reared its friendly head and is guiding my path. This is something I have to reacquaint myself with and I am enjoying that process. There is little to rival the feeling of writing code to fulfil a purpose, getting it done and working and moving on.
Yes, I now have to fix IE6 bugs again, but at least I am able to do so in the shadow of Borough Market and the delights that can be found within!
10 August 2009
jQuery browser history manager
Despite being the creator of JSquared, I have recently had cause to use jQuery.
For the project where it is in use, I needed a browser history manager and, after a cursory search, none came up that caught my eye. So, I decided to port my browser history manager from JSquared to jQuery and I would like to make it available for you to use as well.
It works like all other browser history managers, appending named items with a value onto the hash portion of the URL in a QueryString-like format.
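To illustrate the format, the hash can be parsed and built along these lines. This is a sketch of the idea only - the function names are mine, not the plugin's actual internals:

```javascript
// Parse a QueryString-style hash ("#name=value&other=value") into an object.
function parseHashItems(hash) {
    var items = {};
    hash.replace(/^#/, "").split("&").forEach(function (pair) {
        if (!pair) { return; }
        var parts = pair.split("=");
        items[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || "");
    });
    return items;
}

// Build the hash back up from an object of named items.
function buildHash(items) {
    return "#" + Object.keys(items).map(function (name) {
        return encodeURIComponent(name) + "=" + encodeURIComponent(items[name]);
    }).join("&");
}
```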
The port is based on the new browser history manager that will be part of the next version of JSquared and it extends the jQuery object. You can download it here.
Using it could not be simpler. You set up named items you wish to listen for changes on and provide a callback function for when the change occurs. You can also update the value of an item.
To listen for changes, simply do the following:
$.url.listen("myItemName", function(e, itemValue, itemName) {
    // perform some actions
});
Your callback function will be called when the user goes back and forwards and the value of your named item changes. The callback function will also get called if the user comes to your site from a bookmark and your item has a value.
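As an aside, history managers of this era generally have to poll the hash on a timer, since the hashchange event is not yet widely supported. A bare-bones sketch of that technique - illustrative only, not the plugin's actual code:

```javascript
// Returns a function that, when called (e.g. on a setInterval), compares the
// current hash against the last seen value and fires the callback on change.
function createHashPoller(getHash, onChange) {
    var lastHash = getHash();
    return function poll() {
        var current = getHash();
        if (current !== lastHash) {
            lastHash = current;
            onChange(current);
        }
    };
}
```

In a browser you would call it as `setInterval(createHashPoller(function () { return window.location.hash; }, handler), 100)`.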
To update the value of an item and create an entry in the user's history, just do the following:
$.url.update( "myItemName", "myNewValue", false);
The third parameter to the update method is called "dontUpdateUrl". When set to true, the value shown to the user in the URL will not change, but the value internally within the browser history manager will. This is useful if you are updating many items and only wish to create one history point.
The new value can be any string. Note that the object does not check that the length of the URL is safe - most browsers restrict URLs to around 2000 characters.
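If you are worried about that limit, a trivial guard of your own is easy enough. This is a hypothetical helper, not part of the plugin, and the 2000-character figure is the rough browser restriction mentioned above rather than an exact specification value:

```javascript
// Conservative limit taken from the rough browser restriction noted above.
var MAX_SAFE_URL_LENGTH = 2000;

// Check that the base URL plus the proposed hash stays under the limit
// before calling update().
function isUrlLengthSafe(baseUrl, hash) {
    return (baseUrl.length + hash.length) <= MAX_SAFE_URL_LENGTH;
}
```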
Please do feedback on whether this is working well for you.
17 July 2009
position:fixed
Well well, I have learnt something new this morning. I think this new thing comes under the category of "how did I not already know this" but equally there is always the category of "kill IE6, please!".
I suspect that had IE6 died off a good few years ago, I would already know this new technique, as it is not supported in IE6. But I am very much of the opinion that we no longer need to support IE6 perfectly. I do not want to go into detail about browser support - I have spoken often enough about my thoughts on the matter.
So, onto the interesting part. Have you ever wanted to create a layout with an element which has a height of 100% minus some pixels? Or an equivalent for the width of an element? I know that I have on many occasions. Each time, I have had to either accept it can't be done or utilise some seemingly clever JavaScript to achieve my goal.
But, no more! position:fixed can now be our saviour. Picture the situation where you have a two column layout with your navigation in the left column and content in the right column. You want the left column to have a solid background colour and be 100% in height but leave a 50 pixel space at the top of the column. You might use the following markup:
<!DOCTYPE html>
<html>
<body>
    <div id="LeftNavigation">
        <ul>
            ....a series of links
        </ul>
    </div>
    <div id="Content">
        ....some content
    </div>
</body>
</html>
Now if you apply the following CSS:
#LeftNavigation {
    position:fixed;
    left:0; /* pin to the left edge and give the column an explicit width, */
    width:300px; /* otherwise the fixed element shrinks to fit its content */
    top:50px;
    bottom:0;
    background:#cccccc;
    overflow:scroll;
}
#Content {
    margin-left:310px;
}
You should see things exactly as I described above. The key styles are on the LeftNavigation DIV: position, top and bottom. You should be aware that this is not supported in IE6, but I have found it works in all the other browsers I have tried.
If this is also new to you, have a little play - it has certainly brightened my day!
3 July 2009
Jamesnorton.com
I have launched my all new website at http://www.jamesnorton.com.
I wanted to get something out there to represent me and who I am and what I do, but the site is not complete. It will move forwards and change and be enhanced over time.
One thing you can all help me with is a potential Webkit bug particularly prevalent in Chrome but also an issue in Safari which I just cannot get to the bottom of. If you view the site in a Webkit browser and you get weird large white patches at the bottom of the screen, please do reply to this post with details of which browser you are using.
Otherwise, enjoy the site and feel free to provide any feedback you like.
Macro to micro blogging
You will obviously have noticed the quiet on this blog over the past 6 weeks.
Yes, it's true, I have moved most of my activity onto Twitter now.
I have neglected my blog. I have found that I have a little less to say on here than I would like because I now only talk in units of 140 characters! I would encourage you to follow me on Twitter - that way you won't miss out!
I will endeavour to make more updates to my blog, but I will always keep updating Twitter...
14 May 2009
Process
There is nothing like a discussion about process. Everyone has their own ideas, every company works differently.
I have recently taken a lot more interest in process. Last month I became a Certified Scrum Master, and those who know me will know that I am a Scrum advocate. I think it's a fantastic process. I have worked on a couple of projects that used Scrum and it has proven itself to be very successful.
It's a deceptively simple process which is far too often badly implemented. In my experience, a poor implementation of Scrum is more damaging than not using Scrum at all.
I am not going to attempt to explain how Scrum works; I will leave that to Wikipedia. Whilst not the best explanation, it will suffice. I personally enjoy using Scrum as it involves doing lots of small pieces of work which are well understood and can be fairly accurately estimated, leading to reasonable expectations from clients and a higher quality end product. I also believe that the speed with which issues can be surfaced is a massive benefit.
Doing Scrum badly will lead to more problems than if you were not doing Scrum at all. It is for this reason that most Scrum trainers will suggest that until you are experienced at using Scrum and implementing it well, you should follow the Scrum process pretty much to the letter.
A bad implementation of Scrum can force issues down and stop them being surfaced, and can lead to inefficiencies as everyone attempts to work out what is going wrong; instead of opening up the process for all to see, it closes down the Scrum team and cloaks the detail in useless rhetoric.
I want to work on projects using Agile with Scrum. I do not want to be agile when I should be Agile, nor vice-versa. I make a distinction between agile and Agile, and it's an important one.
When talking about agile with a lower case "a" I mean that I will show a more literal agility - responding to change, trying to prototype quickly and generally being flexible. This is great for a client but dangerous. Too much agility will lead to timelines slipping or the dreaded overtime!
Agile with a capital "A", on the other hand, is a process and a set of techniques including but not limited to Scrum and XP. When I am being Agile, I will not respond to change in the same way: the person requesting the change has to follow a specific process, a simple one which is totally transparent. I have never heard a client complain about having to do that when the result turns out to be a better product. The process protects me and my Scrum team from being distracted, allowing quality to rise whilst also forcing the client's priorities to be my priorities.
From this realisation I have chosen to be Agile not agile.
JSquared 2.1
I am very pleased to announce that JSquared 2.1 has been released as of this morning.
Check out the website for yourself, follow the JSquared blog, download the code and read the docs.
24 April 2009
Recent times
I am sure you will have noticed that posting has been light on here the last few weeks and I thought it might be interesting to talk a little about what I have been doing.
I have spent most of my spare time working on JSquared. The code base is coming on nicely now. I have completely re-written some parts of it even since the latest release and I am getting close to being code complete for the next version. Some more demos will come soon as well - you can get more information on the JSquared blog over the coming week or two.
Recently I have achieved the status of Certified Scrum Master. This is something about which I am very excited. I believe that Agile with Scrum is a fantastic engineering process. I have seen it in action myself whilst I worked at LBi and I am actively looking to assist Betfair in making the transition to this process. It can be painful and without doubt we are not yet getting it right, but with time, I am sure we will.
Now that I am officially certified, I feel more able to push things forward with Betfair and I have gained a great understanding of the process. I even think that I might bring some of the principles to my "normal" life, though I am not sure my wife will like a daily scrum meeting!
Hopefully I should be able to work on some more exciting things over the coming months than in the last few and I will be sure to blog about any and all the exciting happenings in the world of Betfair. For now, it seems I am far more an observer than participant in this great interface development world in which I live and work.
5 April 2009
JSquared 2.0
For those of you that might have missed the news, I released JSquared 2.0 today, April 5 2009.
This version represents a massive step forwards for the library. There is a whole new website to go along with the massively revamped code base. The documentation has become much more extensive and the website now features demos of how some of the library works.
The best thing to do is to check out the website for yourself, follow the JSquared blog, download the code and read the docs.
2 April 2009
IE6
IE6 is a browser that all web developers hate. Or at least that is the case if you toe the line.
IE6 used to be by far the most common browser and once was the most advanced. However, things have moved on since 2001. Web developers hate IE6 for its poor implementation of standards. Web developers hate the vagaries of the IE6 rendering engine and the performance of its JavaScript engine. These problems are well documented elsewhere.
However, the important people, the ones who matter, the ones who web developers do all their hard work for - the users - don't seem to care nearly as much. This can be evidenced by the proportion of users who still use IE6. Even today, 8 years after its launch, over 15% of users typically visit a website with IE6. If I were Microsoft, I might just be congratulating myself for making a product so beloved of its users. Of course, were I to do that, I would naively be ignoring the massive proportion of users who have no idea what browser they are using, that there are other browsers out there, or how to switch.
We as web developers must also not ignore this set of users. There have been a lot of campaigns started recently to end IE6 support. Some have gone even further, such as this one I just came across. I support the idea of getting our users onto a more modern browser, but it's up to them, not us.
I don't get the big deal here. I stopped treating IE6 as a "fully supported browser" about 2 years ago. I think the problem here is this idea that a website must look identical in all supported browsers. What is wrong with a website looking subtly different in different browsers?
Of all the websites I have built from scratch in the last 2 years, not one has looked identical in different browsers unless it has been demanded from the client. They all worked in an identical way, but they looked subtly different - generally because browsers have different default rendering styles.
These differences have never been an issue. In IE6 you might get simpler button styles, you might get a more linear visual layout or you might find text looking slightly different. When I have shown them to a client and explained why they are different, the client does not care either - in fact they are thankful for the money and time saved!
I have only had problems with IE6 when I have been maintaining a site where the client has it in writing that browser support will include IE6 and that the site will be identical in all browsers.
In the many many demonstrations to clients that I have made in the last 2 years, I have never once been asked to use any browser other than the one I picked for the demo.
So, I just don't get the big deal with IE6 these days. Let's make a big deal about why we need to make websites look identical in different browsers on different platforms. Let's make a big deal about making websites more accessible. Let's make a big deal about building standards-compliant websites with the simplest possible code.
If we spend our energies doing this, then we are more likely to have built-in, zero-cost support for many, many browsers including mobile browsers, not just a narrow set often defined well before the release of a website and therefore out of date at its release.
Labels:
Browser support,
Browsers,
Internet Explorer,
rant
1 April 2009
CSS Selector Engines
CSS selectors are a great way to pick DOM nodes off a page and many JavaScript libraries use them extensively.
For those that have used JSquared, you will know that there is no CSS selector engine built in. Some people have asked why, and I have always said that I think it's simpler, faster, easier and makes for a better performing application to use standard DOM methods to pick elements.
Well, yesterday I saw TaskSpeed and it got me thinking. I was about to start making a JSquared version of the tests in order to see what the performance was like when I came across this post, which proved the point I wanted to make without me having to do any work!
Andrea shows that using native DOM methods is far and away the fastest way to complete the tests. Whilst this isn't totally related to a CSS selector engine, it does show that native methods are the best, and that certainly holds true for the few parts of the test where a selector engine would be used if it existed.
Of course, once all browsers in use support the Selectors API, all will be much improved.
31 March 2009
What do you use?
Every now and then I like to play with some new tools. Try a few Firefox plugins, maybe a new dev environment. This week is one of those times.
So make your recommendations, what do you use that is cool and why?
Just to get things started and to pre-emptively give something back, I have recently been playing with Spotify which I can recommend as a great way to listen to some new and old music.
I have also been playing with XUL Profiler which is a pretty cool Firefox plugin (a nightly of Minefield works best with it I believe) for profiling your JavaScript and analysing how much work the browser is doing to render your page.
Now it's your turn...
25 March 2009
Twitter is something I never looked at until very recently and, for some reason, I have suddenly decided to use it!
So rather excitingly, I am now on Twitter and you can follow my antics (or whatever I bother to say on there) at http://twitter.com/nortools.
I hope you have as much fun reading as I do writing.
17 March 2009
IE8 - right, wrong or something else...?
For those that don't know I was recently struck down by a rather severe bout of bronchitis which has laid me low for 3 full weeks. I am delighted to say that I am almost fully recovered. Though just to be sure, I am still "taking the pills" - a form of prevention as much as cure.
It's amazing how similar that situation is to Internet Explorer's.
For many years, IE was suffering from an infection - non-standards compliance. There was a lack of antibodies to kill it off. But Microsoft finally took its sick child to the doctor, who prescribed a welcome initial course of antibiotics, and so IE7 was born. But all was not fully well, so a second course was required.
As Microsoft now force-feeds its child the last dose of that medicine, we see IE8 emerge: standards compliant, much improved performance and an all-round more open approach to development.
But the virus remains, dormant for some but nonetheless very real and there. It has mutated into compatibility mode. Luckily there is a vaccination, a form of prevention, and that is a valid doctype and standards compliance.
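For illustration, the vaccination might look something like this in the head of a page (the doctype shown is just one valid choice; the X-UA-Compatible meta tag is Microsoft's documented opt-in for IE8's most standards-compliant mode):

```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<!-- A valid, strict doctype triggers standards mode. In addition, IE8 can
     be asked explicitly for its most standards-compliant rendering: -->
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
```

Note that nothing (not even a comment) should precede the doctype, as that can itself push IE into quirks mode.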
Of course, you could stop reading the drivel I am writing and read about what Jon von Tetzchner from Opera thinks of IE8.
Whatever you think of Microsoft and IE, you have to acknowledge that they are trying. They cannot please everyone and certainly the remaining prevalence of IE6 does not please me, but I do believe they are trying. Now if only they could speed up their release schedules, I could finally breathe easy.
16 March 2009
Standards - again
As if to reinforce what I was saying 4 months ago, the unique and unparalleled Douglas Crockford has articulated in part my own thoughts about HTML 5 in this blog post.
As I said back in November:
I want to see greater interoperability between the various browsers and as such I believe simplification would be better.
16 February 2009
Changing style regarding changing style
I was having a conversation with the esteemed Mr Alexander today about changing CSS styles based on whether JavaScript is enabled in the user's browser.
This problem had consumed some of my thoughts in the past and eventually I chose to use the technique I termed progressive degradation, something I have blogged about in a previous post.
Progressive degradation involves writing your CSS with the assumption that JavaScript is enabled. You then include an additional style sheet in this way:
<noscript>
<link rel="stylesheet" href="noscript.css" type="text/css" media="screen" />
</noscript>
This has generally worked well for me, with the one caveat that this is invalid HTML. This has concerned me a little. The argument which both Mr Alexander and I have used is that this is easily the simplest way to adjust styles depending on whether JavaScript is enabled or not.
I prefer to write my CSS with the assumption that the best possible experience is available and then include CSS to enhance the experience for those users with a less capable user agent as required.
Mr Alexander proposed another potentially neat solution:
<link rel="stylesheet" href="noscript.css" type="text/css" media="screen" id="noScriptCSS" />
<script type="text/javascript">
// Remove the fallback stylesheet as soon as this inline script runs
var noScriptCSS = document.getElementById("noScriptCSS");
noScriptCSS.parentNode.removeChild(noScriptCSS);
</script>
In this example, the noscript CSS file is always in the document and is then immediately removed using JavaScript. I haven't tested this, but my first thought was that I would need to investigate whether there would be a flicker on screen as the page is reflowed, affecting the rendering performance of the page. My second thought is that this technique incurs an additional HTTP request for all users, not just those who need it.
There is another well known technique which involves writing your CSS assuming that JavaScript is not available and then including a script enabled CSS file using JavaScript as the page is being rendered. I am not a fan of this technique as I prefer not to write my CSS in this way.
There are a variety of other techniques as well such as changing a class on the BODY tag using JavaScript and cascading your selectors from that. If using this technique I would always have a noscript class on the BODY and then remove that (again, it fits better with the way I like to write my CSS).
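A minimal sketch of that body class technique, assuming the markup starts with <body class="noscript"> (the helper name and class handling here are my own illustration, not tested in production):

```javascript
// Strip the "noscript" marker from a space-separated class string.
function removeNoscriptClass(classString) {
    var classes = classString.split(/\s+/);
    var kept = [];
    for (var i = 0; i < classes.length; i++) {
        if (classes[i] !== "noscript") {
            kept.push(classes[i]);
        }
    }
    return kept.join(" ");
}

// In the page itself, run this as early as possible after BODY opens:
// document.body.className = removeNoscriptClass(document.body.className);
```

A plain loop is used rather than Array filter methods so the sketch would work in older browsers too.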
This issue often raises its head and frankly I am now not fully satisfied with any of these techniques. So what do you think? What is the best way of handling this issue?
Labels:
CSS,
Interface development,
Progressive enhancement
20 January 2009
Browser detection
I am not a fan of user agent sniffing. I think it is a wholly inappropriate way to find out the capabilities of a user agent accessing your website.
However, it has its uses. Steve Souders built UA Profiler, for gathering browser performance characteristics. This is undoubtedly an important and intriguing project. In order to detect which browser you are using, he has written his own user agent sniffing code as he describes in his latest blog post.
He also offers to make this code available through a web service. This is an intriguing possibility. My first reaction was to think that this was an excellent move. A high quality user agent sniffer, improved upon by the community and its breadth of use that we could all access via a web service meaning we would not have to ask our users to download the code to do the detection.
Having calmed down slightly, I started to think more clearly. Overall, I am not convinced this is the best idea. Generally, I feel that detecting a user agent from its user agent string is dangerous. It assumes that no user agent spoofing is taking place, that your detection is sufficiently granular and accurate, and that browser features and support will not change. None of these is necessarily true.
I prefer to use feature detection. If I want to know whether the browser I am using is capable of applying a particular method, does that method exist?
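Feature detection in its simplest form is just that existence check (the helper name here is illustrative):

```javascript
// Does the named method actually exist on the object we intend to call it on?
function hasMethod(obj, name) {
    return !!obj && typeof obj[name] === "function";
}

// Typical usage in a browser (not run here):
// if (hasMethod(document, "getElementsByClassName")) { /* safe to use it */ }
```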
The only reliable browser detection I have found is for Internet Explorer, using conditional comments, both in HTML (useful for including CSS and JavaScript files for specific versions of IE) and in JavaScript itself.
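For illustration, both forms might look something like this (the stylesheet filename is hypothetical):

```html
<!--[if lt IE 8]>
<link rel="stylesheet" href="ie-legacy.css" type="text/css" media="screen" />
<![endif]-->
<script type="text/javascript">
var isIE = false;
/*@cc_on
    isIE = true;    // only IE's JScript engine executes conditional compilation blocks
@*/
</script>
```

Other browsers treat both constructs as plain comments, so this fails safe by design.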
I notice that the latest version of jQuery (1.3) is deprecating its user agent sniffing. This decision is to be applauded.
I also realised that the next version of JSquared will actually add the ability to determine if the browser you are using is IE and if it is version 6 or another version. Whilst I might be adding a limited set of user agent support to my library, I am trying to do it in a fail-safe way.
7 January 2009
Using CSS3
Whatever one's personal feelings about it, I am really keen to start using CSS3. There are a number of browsers with support for some of the interesting parts of CSS3, and I think there is a new opportunity to use our skills of progressive enhancement to create some really exciting new effects.
If we wisely apply the appropriate properties, we can enhance the user experience for some users - and an increasing number of users - whilst offering a more than acceptable experience for the remainder of our users.
There may be a useful side effect here as well. When people see a site enhanced by the latest web browsers, we may get more users switching from older, less standards-compliant browsers to more modern, more capable ones.
We may have to use vendor specific prefixes to get some effects to work, but I think that is a small price to pay to help push forwards the use of the good parts of CSS3.
The best part of all this is that as new browsers are released and adopted, our websites will simply look better. Safari 3 already has a good level of support, IE8 will have some and Firefox 3.1 will too. With Safari having decent support for CSS3, all modern WebKit browsers should as well - read Chrome and the iPhone browser in particular!
So which properties might be used in this way? Here are a few that I think could be useful:
box-shadow
With this effect, we can create drop shadows around boxes. When this property is not understood, there is no detrimental effect and when it is recognised, a nice enhancement can be achieved.
border-radius
This effect allows us to create rounded corners using only CSS. I think this is useful as it's simpler than any other technique, and support can be achieved for IE by using conditional comments to include other stylesheets. Firefox 3 and Safari 3 already support this property - definitely one of the most useful to start implementing.
text-shadow
Similar to box-shadow, this effect will create a drop shadow effect around text. Another nice enhancement. Not strictly a new addition in CSS3, but support seems to be coming with CSS3 support.
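Applied with progressive enhancement in mind, the three properties above might look something like this (the selector and values are purely illustrative; vendor prefixes as at the time of writing):

```css
/* Browsers that do not understand these properties simply ignore them. */
.panel {
    -moz-border-radius: 6px;           /* Firefox 3 */
    -webkit-border-radius: 6px;        /* Safari 3, Chrome */
    border-radius: 6px;                /* CSS3 */
    -moz-box-shadow: 0 2px 4px #999;
    -webkit-box-shadow: 0 2px 4px #999;
    box-shadow: 0 2px 4px #999;
    text-shadow: 0 1px 1px #ccc;
}
```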
There are other interesting effects coming, but I think these are the most potentially useful when applied using the principles of progressive enhancement. There are also a couple of useful new selectors coming, but it's hard to see how one could use them with the principles of progressive enhancement.
I am excited about some of the new features of CSS3 and it's great to be able to start using some. Indeed, the new JSquared website will feature some new CSS3 effects applied in this way.
What we do need to be cautious of, though, is creating a complete maintenance headache. Let's start using new features, but sparingly at first. We don't want to make development even harder with yet another class of browser to support.
Labels:
Browser support,
CSS,
CSS 3,
Progressive enhancement
Happy new year
I wish you all a (slightly) belated happy new year.
2008 was a tough but amazing year. Some of its highlights for me included getting married, having an amazing honeymoon and having 3 different jobs!
Some of the less good parts included having 3 different jobs! I am working harder than ever now at home and at work and have had much less time in the last 6 months to blog or work on JSquared than I would have liked.
But things can change for 2009 and so I hope to only have 1 job this year, not get married again (as amazing as that was) and give a bit more time to blogging here.
As you can see on the JSquared blog, work has resumed on the project after a period of quiet and I have a new team member on board to make the project even better and an even bigger success.
This is hardly going to be a quiet year though as I hope to move house - here's hoping Mrs N and I can find the right place!
I hope you all have a happy, healthy and successful year and keep reading....