Is progressive enhancement dead?

Written by  on September 22, 2015 

When I discovered the magic of JavaScript and DOM manipulation at school in 2006, I instantly thought of using it to build stunning, dynamic user experiences in the browser and to get rid of the odd white flash you see when browsing a website from page to page.

JavaScript in the browser: a love and hate story

I was a bit disappointed to learn, from my well-intentioned and experienced teachers, that the best practice was to use JavaScript only to enhance features that already worked without it. This is what is called progressive enhancement.

It was true: a fair number of users (a 2010 Yahoo study puts it at around 2%) were still using a browser that did not comply with the ECMAScript standard, or did not implement it at all. For an online store, losing 2% of its users could mean losing 2% of its profit, which was unthinkable. Not to mention that Google’s web crawlers were not able to parse a page whose content was generated and/or inserted through JS.

The problem was that progressive enhancement comes at a cost: if I have to develop a feature without JavaScript, why on earth would I develop another, JS-powered version of it? And why would I maintain two versions of the same feature? That could potentially double the time spent building the feature, and therefore double its allocated budget.
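To make the double work concrete, here is a minimal sketch (the URL, element IDs and field name are all made up): the plain HTML form works on its own with a full page reload, and the script, when JavaScript is available, takes over to avoid the refresh.

// Assume the page already contains a plain HTML form that works without JS:
// <form id="search-form" action="/search" method="get"> ... </form>
// The JS layer below, when present, replaces the reload with an in-page request.
var form = document.getElementById('search-form');
if (form && window.XMLHttpRequest) {
  form.addEventListener('submit', function (event) {
    event.preventDefault(); // cancel the normal page-refreshing submission
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/search?q=' + encodeURIComponent(form.elements.q.value));
    xhr.onload = function () {
      document.getElementById('results').innerHTML = xhr.responseText;
    };
    xhr.send();
  });
}
// Without JS, the same form still submits and the server renders the results page:
// two code paths to build and maintain for one feature.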

Besides, there was a performance problem that prevented us from relying heavily on JavaScript. JS code was usually interpreted or compiled at run time by the virtual machine, and there was a significant overhead that varied with the JS engine’s implementation and the user’s system.

That was the state of JavaScript at the beginning of the millennium: a fancy ornament, used with caution.

The wonders of progress never cease

The situation did not stop web developers from trying to offer a better user experience through JavaScript. There were not many alternatives, apart from Flash and, later, around 2012, CSS3, for providing something as simple as a carousel.

So vendors kept optimising JS engines, in a period of time sometimes called “the JavaScript engine race”. Browsers were competing to become the fastest at executing JS, using just-in-time compilation that mixed ahead-of-time compilation and interpretation, when, in 2008, a disruptive project arrived: the V8 engine.

V8 is a project developed by a Google team in Denmark, aiming to be the most efficient way of executing JS code. One of the main techniques used is plain compilation: for the first time, JS code was fully compiled to machine code (code directly executable by the CPU) instead of bytecode (generic code that is then translated to machine code), removing the significant overhead that had been preventing developers from writing heavier algorithms. In theory, with V8, JavaScript was performant enough to be back in the dynamic languages race.

We first enjoyed this new engine in Chromium and Google Chrome, but this big push was also the cornerstone of several upcoming improvements in JS technologies:

- Other engines had to adapt and compete

- V8 is released under the open-source BSD licence, making it available to various other projects

There we were: we finally had an environment allowing us developers to write competitive JS code, and as it became more and more performant, numerous libraries and frameworks appeared to take advantage of it.

V8’s upheaval

Historically, JavaScript was mainly used in web pages, to manipulate the Document Object Model. With the power of V8, it was just a matter of time before it was used outside the browser. This became possible in 2009 with the Node.js project, a cross-platform runtime environment based on V8. For the first time, we were able to write almost any kind of software using JS. One of the main features Node.js brought was a network socket API, which means the ability to write client-server applications. Unsurprisingly, it did not take long before the first web servers written in JavaScript appeared.
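As a minimal sketch of what that looks like, using nothing but Node’s built-in http module:

// server.js - a bare-bones web server written entirely in JavaScript
var http = require('http');

var server = http.createServer(function (request, response) {
  // every incoming HTTP request is handled by this single callback
  response.writeHead(200, { 'Content-Type': 'text/plain' });
  response.end('Hello from JavaScript on the server\n');
});

// start listening; the process stays alive, driven by the event loop
server.listen(3000, function () {
  console.log('Server listening on http://localhost:3000');
});

Run it with node server.js, and the same engine that powers Chrome is now answering HTTP requests on the server.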

The strength of Node.js is its asynchronous, event-driven I/O architecture. Using only one thread (one core), all input/output operations are made without blocking: if Node has to perform a filesystem or database call for one request and that call is pending, it jumps to the execution of the next request and comes back to the first one later, therefore never blocking the thread on the first request. This is the asynchronous nature of Node. It is perfect for use cases where all we need is to read from and write to a database or filesystem and serve data, without performing much computation.
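Here is a small sketch of that non-blocking style, using the built-in fs module (the file name is just a placeholder):

var fs = require('fs');

// the callback runs later, once the read completes; in the meantime
// the single thread is free to handle other requests
fs.readFile('data.txt', 'utf8', function (err, contents) {
  if (err) {
    console.error('Read failed:', err.message);
    return;
  }
  console.log('File read finished:', contents.length, 'characters');
});

// this line executes immediately, before the file has been read
console.log('Read scheduled, moving on to other work...');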

On the other hand, it is still possible to block this one and only thread by running a CPU-intensive task, which is why it is commonly said that Node.js cannot be used as a web server alongside heavy algorithms. This is probably the main problem Node developers are working on, and some solutions are being proposed, such as giving the platform a thread pool it can choose to use. This is still fairly young, but I am confident Node will get there one day.
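To make the problem concrete, here is a sketch of the kind of CPU-bound loop that freezes the event loop, together with one workaround already available at the time: offloading the work to a separate process with the built-in child_process module (the file names and the prime-counting task are only illustrations):

// prime-worker.js - the child process does the CPU-heavy work,
// so the parent's event loop is never blocked by it
process.on('message', function (limit) {
  var count = 0;
  for (var n = 2; n < limit; n++) {
    var isPrime = true;
    for (var d = 2; d * d <= n; d++) {
      if (n % d === 0) { isPrime = false; break; }
    }
    if (isPrime) count++;
  }
  process.send(count); // report the result back to the parent
});

// main.js - the parent forks the worker and stays responsive
var fork = require('child_process').fork;
var worker = fork(__dirname + '/prime-worker.js');

worker.on('message', function (count) {
  console.log('Primes below the limit:', count);
});

// running this loop in main.js itself would stall every other request
worker.send(10000000);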

The state of JavaScript in 2015

So that is it: JavaScript has spread from the browser to our servers. Frameworks like Ember, Backbone, Meteor and AngularJS are becoming standards in the web industry. Some even embrace this change by proposing a way to code web apps using JS only, server side and client side. I am thinking of the trendy one everyone is talking about: Meteor.

Meteor is an open-source JS web framework that aims to produce real-time client-server applications. It depends on Node on the server side and jQuery on the client side. It is cross-platform and can produce Android and iOS executables.
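As a rough sketch of what “JS only, server side and client side” means in Meteor (the collection and method names here are invented, and the real API has more to it than this):

// tasks.js - one file, loaded by both the client and the server
Tasks = new Mongo.Collection('tasks'); // same collection API on both sides

if (Meteor.isServer) {
  // server-only branch: expose a method the client can call
  Meteor.methods({
    addTask: function (text) {
      Tasks.insert({ text: text, createdAt: new Date() });
    }
  });
}

if (Meteor.isClient) {
  // client-only branch: call the server method over the wire;
  // subscribed views update reactively when the data changes
  Meteor.call('addTask', 'Write JavaScript everywhere');
}

The boundary between client and server becomes little more than an if statement inside a shared codebase.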

Another framework, Reaction, built on top of Meteor, even proposes an e-commerce solution. Yes, someone finally did it: an online store solution that turns its back on users with JS disabled, at least until Meteor supports server-side rendering, in other words generating plain HTML/CSS on the server. Bang! JS mandatory and accessibility secondary. Is progressive enhancement dead? Or should we call it reverse progressive enhancement?

Conclusion: Is progressive enhancement dead?


As much as I enjoy the possibilities of a “refresh-free” web user experience, progressive enhancement still makes sense for something like a public government website, because it needs to be accessible to everyone. But if you need something as simple as Google Maps, you need JS enabled anyway.

More and more websites state that they require JavaScript to work properly. Some still follow the progressive enhancement principle; some do not even fail gracefully and just show a broken page.

For example, with JS disabled:

- Gmail offers a pure HTML/CSS version, but it does not have the same set of features

- Twitter seems to work until you want to tweet…

- Facebook redirects to a version with only the most basic features, like displaying three posts per page.

Practices are changing; businesses are more and more inclined to leave behind users with JS disabled. In 2010, a study from Tobyho, testing 19 of the most popular websites, found that:

16% of sites that are broken let their users know that the site is broken

A more recent study, from the UK Government Digital Service, reports numbers between 0.2% and 1.1% of visitors for whom JS did not work. The authors attribute this to the growth in the use of mobile devices, which ship with modern, JS-capable browsers.

Uniting skill sets, with just one language across the different layers of development, is definitely appealing. Back-end and front-end teams could blend, and companies could benefit from a more flexible and versatile workforce.

The use of JavaScript has become a standard, and with companies like Google pushing Angular and releasing JS-friendly web crawlers, I think the trend is unlikely to reverse.

Sources:

Yahoo Developer Network: How many users have JS disabled (part 1 and part 2)

Is graceful degradation dead?

Tobyho’s Blog: How much of the web works without JavaScript

UK Government Digital Service’s Blog
