Regressive Disenfranchisement: Enhance, Fallback, or Something Else?

My previous post, “This is Hurting Us All: It’s Time to Stop…”, seems to have caused some debate because in it I mentioned delivering users of old/unsupported browsers a 403 page.  This is unfortunate, as the 403 suggestion was not the thrust of the article but a minor comment at the end.  This post takes a look at the history of how we’ve tried to deal with this problem, successes and failures alike, and offers some ideas on how an evergreen future might affect the problem space and the solutions going forward.

A History of Evolving Ideas

Religious debates are almost always wrong: almost no approach is entirely meritless, and the more ideas we mix and the more things change, the more we make things progressively better.  Let’s take a look back at the history of the problem.

In the Beginning…

In the early(ish) days of the Web there was some chaos: vendors were adding features quickly, often before they were even proposed as a standard. The things you could do with a Web page in any given browser varied wildly. Computers were also more expensive and bandwidth considerably lower, so it wasn’t uncommon to have a significant number of users without those capabilities, even if they had the right “brand”.

As a Web developer (or a company hiring one), you had essentially two choices:

  • Create a website that worked everywhere but was dull and uncompelling, using techniques and approaches the community had already agreed were outdated and problematic – essentially hurting marketability and creating tech debt.
  • Develop better code with more features and whiz-bang – write for the future now, wait for the Internet to catch up (maybe even help encourage it), and not worry about all of the complexity and hassle.

“THIS SITE BEST VIEWED WITH NETSCAPE NAVIGATOR 4.7 at 800×600” 

Many people opted for the latter choice and, while we balk at it, it wasn’t exactly a stupid business decision.  Getting a website wasn’t a cheap proposition and it was a wholly new business expense; lots of businesses didn’t even have internal networks or significant business software.  How could they justify paying people good money for code that was intended to be replaced as soon as possible?

Very quickly, however, people realized that even if they put up a notice with a “get a better browser” kind of link, that link was delivered along with a really awful page which made the company look bad.

Browser Detection

To deal with this problem, sites started detecting your browser via the User-Agent header and giving you some simpler version of the “your browser sucks” page, which at least didn’t make them look unprofessional: a broken page is the worst thing your company can put in front of users… Some people might even read the need for a “modern browser” as a sign that the site was “ahead of the curve”.
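
Detection at that point was essentially just string matching on the browser’s identification.  A rough, illustrative sketch of the client-side version (the string being matched and the upgrade page are hypothetical):

  // Illustrative sketch of era-typical user-agent sniffing; the version
  // string and the "/upgrade.html" destination are hypothetical.
  var ua = navigator.userAgent;
  if (ua.indexOf("Mozilla/4") === -1) {
    // Not the "right" browser: show a deliberate "please upgrade" page
    // instead of letting a broken page render.
    window.location = "/upgrade.html";
  }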

LIAR!:  Vendors game the system

Netscape (at this point) was the de facto standard of the Web and Microsoft was trying desperately to break into the market – but lots of sites were just telling IE users “no”.  The solution was simple:  Lie.  And so it was that Microsoft got a fake ID and walked right past the bouncer, publicly answering the question “Who’s asking?” with “Netscape!”.

Instead of really fixing that system, we simply decided that it was too easy to game and moved on to other ideas, like checking for Microsoft-specific APIs such as document.all to differentiate on the client.
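
The client-side version of that differentiation looked something like the sketch below – a pattern, not any particular library’s code:

  // Object detection circa the late 1990s: branch on vendor-specific APIs.
  // As discussed later, this breaks as soon as another vendor copies one API
  // but not the rest, because it infers the whole browser from a single property.
  function getElement(id) {
    if (document.all) {             // IE's proprietary collection
      return document.all[id];
    } else if (document.layers) {   // Netscape 4's proprietary collection
      return document.layers[id];
    }
    return document.getElementById(id);  // the standard way
  }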

Falling Back

As HTML began to grow and pages became increasingly interactive, we introduced the idea of fallback: if a user agent didn’t support script, or object/embed, or something else, give them some alternative content.  In user interface and SEO terms, that is a pretty smart business decision.

One problem: very often, fallback content wasn’t used.  When it was, the fallback usually said essentially “Your browser sucks, so you don’t get to see this; you should upgrade”.

The Cross-Browser Era and the Great Stagnation

OK, so now we had to deal with more than one browser, and at some point they both had competing ideas which weren’t standard but were far too useful to ignore.  We created a whole host of solutions:

We came up with safe subsets of supported CSS and learned all of the quirks of the browsers and doctypes, and we developed libraries that created new APIs which could switch code paths internally and do the right thing with the available script APIs.
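
The classic example of such a library-created API is an event helper that switches between the underlying implementations at runtime (a simplified sketch; real cross-browser libraries normalized far more than this):

  // Simplified sketch of the "switch code paths internally" approach.
  function addEvent(el, type, handler) {
    if (el.addEventListener) {        // standards-based browsers
      el.addEventListener(type, handler, false);
    } else if (el.attachEvent) {      // older Internet Explorer
      el.attachEvent("on" + type, function () {
        handler.call(el, window.event);   // normalize 'this' and the event object
      });
    } else {
      el["on" + type] = handler;      // last-ditch fallback
    }
  }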

As you would expect, we learned things along the way that seem obvious in retrospect: Certain kinds of assumptions are just wrong.  For example:

  • Unexpected vendor actions that increase the number of sites a user can view with a given browser aren’t unique to Microsoft.  Lots of solutions that switched code paths based on document.all started breaking when Opera copied it, but not all of Microsoft’s APIs.  Feature detection is better than basing logic on assumptions about the current state of vendor APIs.
  • All “support” is not the same – feature detection alone can be wrong.  Sometimes a standard API or feature is there, but it is so woefully incomplete or wrong that you really shouldn’t use it (see the sketch below).
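
A sketch of that difference: the first check only asks whether something with the right name exists, while the second actually exercises the feature to confirm the “support” is usable.  The sessionStorage case is just one illustrative example – some browsers have exposed the API yet throw when you try to use it:

  // Existence check: is there *something* called sessionStorage?
  var looksSupported = typeof window.sessionStorage !== "undefined";

  // Functional check: can we actually use it?  Some browsers expose the API
  // but throw on write (for example in certain private-browsing modes).
  function storageActuallyWorks() {
    try {
      sessionStorage.setItem("__test__", "1");
      sessionStorage.removeItem("__test__");
      return true;
    } catch (e) {
      return false;
    }
  }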

And all of them still involved some sense of developing for a big market share rather than “everyone”.  You were almost always developing for the latest browser or two for the same reasons listed above – only the justification was even greater as there were more APIs and more browser versions.  The target market share was increasing, but not aimed at everyone – that would be too expensive.

Progressive Enhancement

Then, in 2003 a presentation at SXSW entitled “Inclusive Web Design For the Future” introduced the idea of “progressive enhancement” and the world changed, right?

We’re all familiar with the example of a list of links that uses some unobtrusive JavaScript to add a more pleasant experience for people with JavaScript-enabled browsers.  We’re all familiar with examples that take that a step further and do some feature testing, making the experience a little better still if your browser has additional features, while still delivering the crux of the content.  It gets progressively better along with capabilities.
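
That canonical example might look something like this sketch: the markup is a plain list of links that works anywhere, and the script, if it runs and the needed features exist, layers the nicer experience on top (the element ids here are hypothetical):

  // Progressive enhancement sketch: the plain links work with no script at all.
  // Only if script runs AND the needed features exist do we enhance in place.
  var nav = document.getElementById("gallery-links");   // hypothetical list of links
  if (nav && document.addEventListener && window.XMLHttpRequest) {
    nav.addEventListener("click", function (e) {
      var link = e.target;
      if (link.tagName === "A") {
        e.preventDefault();
        // Fetch the linked content and show it inline instead of navigating away.
        var xhr = new XMLHttpRequest();
        xhr.open("GET", link.href, true);
        xhr.onreadystatechange = function () {
          if (xhr.readyState === 4 && xhr.status === 200) {
            document.getElementById("gallery-view").innerHTML = xhr.responseText;
          }
        };
        xhr.send();
      }
    }, false);
  }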

Hold that Thought…

Let’s skip ahead a few years and think about what happened: use of libraries like jQuery exploded, and so did interactivity on the Web; new browsers became more mainstream and we started getting some forward progress and competition again.

In 2009, Remy Sharp introduced the idea of polyfills – code that fills the cracks and provides slightly older browsers with the same standard capabilities as the newer ones.  I’d like to cite his Google Plus post on the history:

I knew what I was after wasn’t progressive enhancement because the baseline that I was working to required JavaScript and the latest technology. So that existing term didn’t work for me.

I also knew that it wasn’t graceful degradation, because without the native functionality and without JavaScript (assuming your polyfill uses JavaScript), it wouldn’t work at all.
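
In code, a polyfill is typically just a guarded assignment: if the native feature is missing, supply a script implementation of the same standard API.  A minimal sketch (production polyfills handle far more of the spec’s edge cases):

  // Minimal polyfill sketch: define the standard API only if it is missing,
  // and match the standard behavior so calling code can't tell the difference.
  if (!Array.prototype.indexOf) {
    Array.prototype.indexOf = function (searchElement, fromIndex) {
      var i = fromIndex || 0;
      if (i < 0) { i = Math.max(0, this.length + i); }
      for (; i < this.length; i++) {
        if (this[i] === searchElement) { return i; }
      }
      return -1;
    };
  }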

In the past few years, all of these factors have increased, not decreased.  We have more browsers, more common devices with variant needs, more OS variance, and an explosion of new features and UX expectations.

Let’s get to the point already…

Progressive Enhancement: I do not think it means what you think it means.  The presentation at SXSW aimed to “leave no one behind” by starting from literally text only and progressively enhancing from there.  It was in direct opposition to the previous mentality of “graceful degradation” – fall back to a known quantity if the minimum requirements are not met.

What we’re definitely not generally doing, however, is actually living up to the full principles laid out in that presentation for anything more than the most trivial kinds of websites.

Literally every site I have ever known has “established a baseline” of which browsers it will “support” based on market share.  Once a browser drops below some arbitrary percentage, they stop testing for or considering those browsers to some extent.  Here’s the thing: this is not what that original presentation was about.  You can pick and choose your metrics, but the net result is that people will hit your site or app with browsers you no longer support – and what will they get?

IE<7 is “dead”.  Quite a large number of sites/apps/libraries have announced that they no longer support IE7, and many are beginning to drop support for IE8.  When we add in all of the users that we are no longer testing for, it becomes a significant number of people… So what happens to those users?

In an ideal, progressively enhanced world they would get some meaningful content, progressively graded according to their browsers’ abilities, right?

But in Reality…

What does the online world of today look like to someone, for example, still using IE5?

Here’s Twitter:

Twitter is entirely unusable…

And Reddit:

Reddit is unusable… 

Facebook is all over the map.  Most of the public pages that I could get to (I couldn’t log in) had too much DOM or required too much scrolling to get a good screenshot – but it was also unusable.

Amazon was at least partially navigable, but I think that is partly luck, because a whole lot of it was just an incoherent jumble:

Oh the irony.

I’m not cherry-picking either – most sites (even ones you’d think would work because they aren’t very feature-rich or “single page app”-like) just don’t work at all.  Ironically, even some that are about design and progressive enhancement just cause that browser to crash.

FAIL?

Unless your answer to the question “Which browsers can I use on your site and still have a meaningful experience?” is “all of them”, you have failed the original goals of progressive enhancement.

Here’s something interesting to note: a lot of people mention that Yahoo was quick to pick up on the better ideas about progressive enhancement and introduced “graded browser support” in YUI.  In it, it states:

Tim Berners-Lee, inventor of the World Wide Web and director of the W3C, has said it best:

“Anyone who slaps a ‘this page is best viewed with Browser X’ label on a Web page appears to be yearning for the bad old days, before the Web, when you had very little chance of reading a document written on another computer, another word processor, or another network.”

However, if you read it, you will note that it also states:

C-grade browsers should be identified on a blacklist.

and if you visit Yahoo.com today with Internet Explorer 5.2 on the Mac, here is what you will see:

Your browser sucks.

Likewise, here’s what happens on Google Plus:

You must be at least this tall to ride this ride…

In Summary…

So what am I saying exactly?  A few things:

  • We do have to recognize that there are business realities and costs to supporting browsers to any degree.  Real “progressive enhancement” could be extremely costly in cases with very rich UI, and sometimes it might not make economic sense.  In some cases, the experience is the product.  To be honest, I’ve never really seen it done completely myself, but that’s not to say it doesn’t exist.
  • We are right on the cusp of an evergreen world, which is a game changer.  In an evergreen world, we can use ideas like polyfills, prollyfills and “high-end progressive enhancement” very efficiently, as there are no more “far behind laggards” entering the system.
  • There are still laggards in the system, and there likely will be for some time to come – we should do what we can to get as many of those who can update to do so, decreasing the scope of this problem.
  • We are still faced with choices that are unpleasant from a business perspective for how to deal with those laggards in terms of new code we write.  There is no magic “right” answer.
  • It’s not entirely wrong to avoid showing your users totally broken stuff that you’d prefer they not experience and associate with you.  If you have effectively written them off anyway (as the examples above do), it is considerably friendlier to tell them so, and there is at least a chance that you can get them to upgrade.
  • In most cases, however, the Web is about access to content – so writing anyone off might not be the best approach.  Instead, it might be worth investigating a new approach; here’s one suggestion that might work even for complex sites:  design a single, universal fallback (hopefully one which still unobtrusively notifies users why they are getting it and prompts them to go evergreen) which works on even very old browsers and delivers meaningful, though probably comparatively uncompelling, content and interactions – and serve that to non-evergreen browsers and search engines.  Draw the line at evergreen and enhance/fill from there (a rough sketch follows this list).
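
A rough sketch of what “draw the line at evergreen and enhance from there” could look like in practice: the server sends the universal fallback markup to everyone (including search engines), and a small script loads the rich experience only when the browser passes a capability check.  The specific checks, file name and wording below are illustrative, not a recommendation:

  // Illustrative only.  The baseline HTML is the universal fallback content;
  // this script (assumed to run at the end of the body) upgrades capable
  // browsers and gently nudges the rest toward evergreen.
  var capable = "querySelector" in document &&
                "addEventListener" in window &&
                "localStorage" in window;

  if (capable) {
    // Capable browser: load the enhanced application code.
    var s = document.createElement("script");
    s.src = "/js/enhanced-app.js";        // hypothetical bundle name
    document.getElementsByTagName("head")[0].appendChild(s);
  } else {
    // Older browser: keep the working fallback content, but unobtrusively
    // explain why things look simpler and suggest going evergreen.
    var note = document.createElement("p");
    note.className = "upgrade-note";      // hypothetical class name
    note.appendChild(document.createTextNode(
      "You're seeing the basic version of this site. " +
      "A current browser will give you the full experience."));
    document.body.insertBefore(note, document.body.firstChild);
  }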

W3C Extensible Web Community Group

The Web requires stability and a high degree of client ubiquity; that’s why the W3C is important.  However, that’s also part of the reason that standards develop at a comparatively (sometimes seemingly painfully) slow rate: they take time, deliberation, testing and mutual agreement between a number of experts and organizations with significantly different interests.

While we frequently complain, that is actually a Really Good Thing™. Standards take a while to “cook” and the Web would be a real mess if every idea became a native standard haphazardly and with great speed:  Native implementations have a way of being hard to kill (or change significantly) once the public has free access to them – the general mantra is “don’t break it” – and that makes iteration/significant evolution artificially harder than it needs to be.  It seems that the pressures and incentives are mixed up.

The W3C Extensible Web Community Group was founded with a vision toward supplementing the traditional model with a new one which we believe is a better path forward in most cases.

Polyfills and Prollyfills…

Polyfills are a well-known concept, and it’s an excellent, visualizable name: something that “fills the holes” here and there in browsers which are behind the curve with regard to implementation of a pretty mature spec.  However, since Remy Sharp coined the term a few years ago and the practice has become increasingly popular, its meaning has become somewhat diluted.

No longer are they merely “filling in a few holes” in implementations based on mature specs – more and more often they are building whole new annexes based on a napkin sketch.  Within a very short time of the first announcement of a draft, we have “fills” that provide an implementation.

Given this contrast, it seems we could use a name which differentiates between the two concepts.  For a while now a few of us have been using different terms to try to describe it.  I’ve written about the concept in the past and it is the subject of Boris Smus’ excellent article How the Web Should Work.  Until recently the best term we could come up with was “forward polyfill”.  Then, on October 12, 2012, Alex Sexton coined the term “prollyfill” on Twitter.

October 12, 2012 “@SlexAxton: Prollyfill: a polyfill for a not yet standardized API”  

The Benefits of Prollyfilling

One thing is clear about the idea of prollyfills: if we get them “right”, they could radically improve the traditional model of standards evolution, because they have a few very important benefits by their very nature.  Most of the benefit comes from simply being decoupled from browser releases.  Since many more people can contribute to their creation and you only need one implementation, they can be developed with much greater speed and by a larger community.

Such an approach also puts the author of a site in control of what is and isn’t supported.  In the traditional model, an author has no ability to change the speed at which native features are implemented, nor which browsers users will view the site with.  Using prollyfills allows the author to rely on JavaScript to implement the features they need, at the cost of only degraded performance – a huge advantage in general.  An even bigger advantage, though, is that this scenario allows multiple competing APIs, or variants of an API, to co-exist while we round off the edges and iterate, because the author can choose which one to use – something that is pretty much impossible in the traditional model.  Iteration in the traditional model is likely to cause breakage, which deters use and therefore severely limits how many use cases will be considered and how many authors can give meaningful feedback.  Iteration in this model breaks nothing: APIs and drafts can compete for users and get lots of feedback, and if there is a clear winner it is evident from actual data before native code ever has to be written.
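
As a concrete (and entirely hypothetical) illustration of how competing drafts can coexist under this model: a prollyfill can expose its experimental API under a clearly unofficial name, so an author opts in explicitly and nothing native is ever shadowed.  The “xToast” name and its options below are invented purely for illustration:

  // Hypothetical prollyfill sketch.  The experimental API lives in
  // author-controlled space, so competing drafts can coexist and iterate
  // without breaking anything native.
  (function (global) {
    if (global.xToast) { return; }   // a competing fill already claimed the name

    global.xToast = function (message, options) {
      options = options || {};
      var el = document.createElement("div");
      el.className = "x-toast";
      el.appendChild(document.createTextNode(message));
      document.body.appendChild(el);
      setTimeout(function () {
        document.body.removeChild(el);
      }, options.duration || 3000);
    };
  }(window));

  // The author chooses this draft explicitly; switching to a competing
  // proposal later is a change to the page, not to every browser.
  xToast("Saved.", { duration: 2000 });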

The value of the ability to compete and iterate freely should not be underestimated – it is key to successful evolution.  Take, for example, the “nth” family of selectors.  They took a long time to come to be, and it would appear that most people aren’t especially happy with them – they aren’t often used and they are often misunderstood.  Given the ability to create prollyfills for selector pseudo-classes, it seems unlikely that this is the approach that would have won out for those use cases, yet those limited resources ultimately spent an awful lot of time working out details, drafting prose, presenting, getting agreement, implementing, going through processes about prefixing, optimizing implementations, and so on.  In fact, the concept of nth was being discussed at the W3C at least as early as 1999, and parts of what was discussed were punted to CSS Selectors Level 4.  There was a definite effort to “answer all of the nth questions now,” and while what we have might be an “academically really good answer”, it’s hard to argue that it has really caught on and been as useful as some other things.  It’s easy to speculate about what might have been better, but the truth is, we’ll never know.

The truth is, a very small number of people at the W3C currently do an enormous amount of work.  As of the most recent TPAC, the W3C CSS Working Group alone has 58 “in process” drafts and only a handful of editors.  This means that a significant number of them are going to be de-prioritized for now in order to focus on the ones with the most support or that are further along, and new ideas probably won’t be undertaken for a time.  While they are trying to streamline the process, it does tend to go in fits and starts like this… Without a doubt, several of those 58 will be years and years in the making.

If, instead, these same individuals could participate in a less demanding, more advisory fashion with a satellite group of Web developers submitting experimental works, and after a certain threshold could take over a fairly robust and reasonably mature draft, it is easy to imagine that things could evolve faster and those resources could focus a lot more.  Of course, browser manufacturers and standards bodies could participate in the same manner: Adobe recently followed this model with their Draft Proposal on CSS Regions, Mozilla is working on x-tags, which implements part of the still very early Web Components API via script, and as a general practice the ECMA team does this pretty often.  These are very positive developments.

The Challenges of *lyfilling

Currently every fill is generally implemented as a “from the ground up” undertaking, despite the fact that there is potentially a lot of overlap in the sorts of things you need in order to implement them.  For example: if you are filling something in CSS, you have to (at a minimum) parse CSS.  If you are filling a selector pseudo-class, you’ve got a lot of figuring out, plumbing and work to do to make it function and be efficient enough.  If you are filling a property, it potentially has a lot of overlap with the selector bit.  Or perhaps you have a completely new proposal that is merely “based on” what is essentially a subset of some existing Web technology, like Tab Atkins’ Cascading Attribute Sheets.  It’s pretty hard to start at square one, and that means these fills are often comparatively low fidelity.
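
To make the scale of that plumbing concrete, here is a deliberately naive sketch of what filling a single custom selector pseudo-class involves: finding the rules that use it, matching the elements yourself, and rewriting things into terms the browser already understands.  The “:x-any-heading” pseudo-class is invented for illustration, and a real fill would need a proper CSS parser, re-evaluation on DOM changes, attention to specificity, and much more:

  // Deliberately naive sketch: "fill" a hypothetical ":x-any-heading"
  // pseudo-class by tagging the elements it should match with a real class
  // and inserting an equivalent rule the browser understands natively.
  function fillAnyHeadingPseudo() {
    var PSEUDO = ":x-any-heading";
    var CLASS = "x-any-heading";

    // Tag every element the pseudo-class is meant to match.
    var headings = document.querySelectorAll("h1, h2, h3, h4, h5, h6");
    for (var i = 0; i < headings.length; i++) {
      headings[i].className += " " + CLASS;
    }

    // Find rules that use the pseudo-class and add class-based equivalents.
    for (var s = 0; s < document.styleSheets.length; s++) {
      var sheet = document.styleSheets[s], rules;
      try { rules = sheet.cssRules; } catch (e) { continue; }  // cross-origin sheets throw
      if (!rules) { continue; }
      var count = rules.length;   // don't re-process rules inserted below
      for (var r = 0; r < count; r++) {
        var sel = rules[r].selectorText;
        if (sel && sel.indexOf(PSEUDO) !== -1) {
          sheet.insertRule(
            sel.replace(PSEUDO, "." + CLASS) + " { " + rules[r].style.cssText + " }",
            rules.length);
        }
      }
    }
  }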

Another challenge with polyfills and prollyfills is how to write them effectively (best practices), how to make them discoverable, and how to communicate the degree of parity they provide with a spec and their method/degree of forward compatibility.

Charting the Uncharted Waters

It seems clear that we could do with some cooperation, and potentially with some robust, well-thought-out and tested prollyfills for native APIs, which would make some of this easier.

There really is nothing like “caniuse” for polyfills detailing compatibility, parity with a draft, or method/degree of forward compatibility.  Likewise, there is no such thing for prollyfills – nor is there a “W3C”-like organization where you can post your proposal, discuss it, get people to look at and contribute to your prose and examples, ask questions, and so on.  There is no group creating and maintaining test cases, or helping to collect data and work with willing W3C members to make these things real or prioritized in any way.

In short, there is no community acting as a group interested in this subject.  These are gaps we hope to fill (pun intended) with the W3C Extensible Web Community Group.  We’re just getting started, but we have created a GitHub organization and registered prollyfill.org.  Participate, or just follow along with the conversations on our public mailing list.