My previous post, “This is Hurting Us All: It’s time to stop…”, seems to have caused some debate because in it I mentioned delivering users of old/unsupported browsers a 403 page. This is unfortunate, as the 403 suggestion was not the thrust of the article, but a minor comment at the end. This post takes a look at the history of how we’ve tried dealing with this problem, successes and failures alike, and offers some ideas on how an evergreen future might impact the problem space and the solutions going forward.
A History of Evolving Ideas
Religious debates are almost always wrong: almost no approach is entirely without merit, and the more ideas we mix and the more things change, the more we make things progressively better. Let’s take a look back at the history of the problem.
In the Beginning…
In the early(ish) days of the Web there was some chaos: vendors were adding features quickly, often before they were even proposed as a standard. The things you could do with a Web page in any given browser varied wildly. Computers were also more expensive and bandwidth considerably lower, so it wasn’t uncommon to have a significant number of users without those capabilities, even if they had the right “brand”.
As a Web developer (or a company hiring one), you had essentially two choices:
- Create a website that worked everywhere, but was dull and uncompelling, and used techniques and approaches which the community had already agreed were outdated and problematic – essentially hurting marketability and creating tech debt.
- Choose to develop better code with more features and whiz-bang – write for the future now, wait for the internet to catch up (maybe even help encourage it), and not worry about all of the complexity and hassle.
“THIS SITE BEST VIEWED WITH NETSCAPE NAVIGATOR 4.7 at 800×600”
Many people opted for the latter choice and, while we balk at it, it wasn’t exactly a stupid business decision. Getting a website wasn’t a cheap proposition, and it was a wholly new business expense; lots of businesses didn’t even have internal networks or significant business software. How could they justify paying people good money for code that was intended to be replaced as soon as possible?
Very quickly, however, people realized that even though they put up a notice with a “Get a better browser” kind of link, that link was delivered along with a really awful page which made the company look bad.
To deal with this problem, sites started detecting your browser via the user-agent string and giving you some simpler version of the “Your browser sucks” page which at least didn’t make them look unprofessional: a broken page is the worst thing your company can put in front of users… Some people might even read the requirement for a “modern browser” as a sign the site was “ahead of the curve”.
LIAR!: Vendors Game the System
Netscape (at this point) was the de facto standard of the Web, and Microsoft was trying desperately to break into the market – but lots of sites were just telling IE users “no”. The solution was simple: lie. And so it was that Microsoft got a fake ID and walked right past the bouncer, publicly answering the question “Who’s asking?” with “Netscape!”.
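To make that concrete, here is roughly what the naive server-side check looked like, sketched in JavaScript (the exact strings and logic varied from site to site; this is illustrative, not any particular site’s code):

```js
// Naive detection keyed on the user-agent string. Netscape identified
// itself with a leading "Mozilla/" token, so sites checked for that.
function isSupportedBrowser(userAgent) {
  return userAgent.indexOf('Mozilla/') === 0;
}

// Netscape Navigator announced itself like this:
isSupportedBrowser('Mozilla/4.7 [en] (Win98; U)');                    // true
// ...so IE simply claimed to be "Mozilla" too and walked right in:
isSupportedBrowser('Mozilla/4.0 (compatible; MSIE 5.0; Windows 98)'); // true
```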
Instead of really fixing that system, we simply decided that it was too easy to game and moved on to other ideas, like checking for Microsoft-specific APIs such as document.all to differentiate on the client.
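That style of client-side differentiation typically looked something like this (a simplified sketch of a very common pattern of the era):

```js
// Branching on vendor-specific DOM APIs rather than the user-agent string.
function getElement(id) {
  if (document.getElementById) {   // the eventual W3C standard
    return document.getElementById(id);
  } else if (document.all) {       // IE 4's proprietary collection
    return document.all[id];
  } else if (document.layers) {    // Netscape 4's proprietary collection
    return document.layers[id];
  }
  return null;
}
```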
As HTML began to grow and pages became increasingly interactive, we introduced the idea of fallback: if a user agent didn’t support script, or object/embed, or some other feature, give it some alternative content. In user interface and SEO terms, that is a pretty smart business decision.
One problem: very often, fallback content wasn’t used at all. When it was, the fallback usually said essentially “Your browser sucks, so you don’t get to see this; you should upgrade”.
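For illustration, here’s what well-intentioned fallback looks like in markup (the file names here are made up; the point is that nested content renders only when the outer mechanism is unsupported):

```html
<!-- The object's nested content is the fallback, shown only when
     the browser can't render the object itself. -->
<object data="chart.svg" type="image/svg+xml">
  <img src="chart.png" alt="Quarterly sales, by region">
</object>

<!-- noscript content appears only when scripting is unavailable. -->
<noscript>
  <p>Interactive features are off, but the content is still here.</p>
</noscript>
```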
The Cross-Browser Era and the Great Stagnation
OK, so we had to deal with more than one browser, and at some point they both had competing ideas which weren’t standard but were far too useful to ignore. We created a whole host of solutions:
We came up with safe subsets of supported CSS and learned all of the quirks of the browsers and doctypes; we developed libraries that created new APIs which could switch code paths internally and do the right thing with script APIs.
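The heart of those libraries was tiny abstractions along these lines (a sketch of the classic event-attachment shim, not any particular library’s code):

```js
// One function, two code paths: the standard API where it exists,
// the legacy IE API otherwise.
function addEvent(el, type, handler) {
  if (el.addEventListener) {
    el.addEventListener(type, handler, false); // W3C standard path
  } else if (el.attachEvent) {
    el.attachEvent('on' + type, function () {
      handler.call(el, window.event);          // normalize `this` and the event
    });
  }
}
```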
As you would expect, we learned things along the way that seem obvious in retrospect: Certain kinds of assumptions are just wrong. For example:
- Unexpected vendor actions that might increase the number of sites a user can view with a given browser aren’t unique to Microsoft. Lots of solutions that switched code paths based on document.all started breaking as Opera copied it, but not all of Microsoft’s APIs. Feature detection is better than basing logic on assumptions about the current state of vendor APIs (see the sketch after this list).
- All “support” is not the same – feature detection alone can be wrong. Sometimes a standard API or feature is there, but it is so woefully incomplete or wrong that you really shouldn’t use it.
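Both lessons show up in a pattern that is still with us: detect the feature rather than the vendor, and then verify that it actually works before trusting it. A minimal sketch, using localStorage purely as an illustrative feature:

```js
// Existence checks alone can lie: some browsers expose an API that
// throws on use. Probe the behavior, not just the name.
function supportsLocalStorage() {
  try {
    var key = '__probe__';
    window.localStorage.setItem(key, key);
    window.localStorage.removeItem(key);
    return true;
  } catch (e) {
    return false; // present-but-broken counts as unsupported
  }
}
```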
And all of them still involved some sense of developing for a big market share rather than for “everyone”. You were almost always developing for the latest browser or two, for the same reasons listed above – only the justification was even greater, as there were more APIs and more browser versions. The target market share was increasing, but it was not aimed at everyone – that would be too expensive.
Then, in 2003 a presentation at SXSW entitled “Inclusive Web Design For the Future” introduced the idea of “progressive enhancement” and the world changed, right?
Hold that Thought…
Let’s skip ahead a few years and think about what happened: Use of libraries like jQuery exploded and so did interactivity on the Web, new browsers became more mainstream and we started getting some forward progress and competition again.
In 2009, Remy Sharp introduced the idea of polyfills – code that fills the cracks, providing slightly older browsers with the same standard capabilities as the newer ones. His Google Plus post on the history is worth reading.
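The concept in miniature (a hedged sketch – String.prototype.trim is just a handy example of the shape):

```js
// A polyfill supplies a missing standard API so that newer code can
// run unchanged on older browsers.
if (!String.prototype.trim) {
  String.prototype.trim = function () {
    return this.replace(/^\s+|\s+$/g, '');
  };
}
```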
In the past few years, all of these factors have increased, not decreased. We have more browsers, more common devices with variant needs, more OS variance, and an explosion of new features and UX expectations.
Let’s get to the point already…
The presentation at SXSW aimed to “leave no one behind” by starting from literally text-only and progressively enhancing from there. It was in direct opposition to the previously prevailing mentality of “graceful degradation” – falling back to a known quantity if the minimum requirements are not met.
What we’re definitely not generally doing, however, is actually living up to the full principles laid out in that presentation for anything more than the most trivial kinds of websites.
Literally every site I have ever known has “established a baseline” of what browsers they will “support” based on market-share. Once a browser drops below some arbitrary percentage, they stop testing/considering those browsers to some extent. Here’s the thing: This is not what that original presentation was about. You can pick and choose your metrics, but the net result is that people will hit your site or app with browsers you no longer support and what will they get?
IE<7 is “dead”. Quite a large number of sites/apps/libraries have announced that they no longer support IE7, and many are beginning to drop support for IE8. When we add in all of the users that we are no longer testing for, it becomes a significant number of people… So what happens to those users?
In an ideal, progressively enhanced world they would get some meaningful content, progressively graded according to their browsers’ abilities, right?
But in Reality…
What does the online world of today look like to someone, for example, still using IE5?
- Twitter is entirely unusable…
- Reddit is unusable…
- Facebook is all over the map. Most of the public pages that I could get to (I couldn’t log in) had too much DOM/required too much scrolling to get a good screenshot of – but it was also unusable.
- Amazon was at least partially navigable, but I think that is partly luck, because a whole lot of it was just an incoherent jumble.
I’m not cherry-picking either – most sites (even ones you’d think would work because they aren’t very feature-rich or ‘single page app’-like) just don’t work at all. Ironically, even some that are about design and progressive enhancement just cause that browser to crash.
Unless your answer to the question “Which browsers can I use on your site and still have a meaningful experience?” is “all of them”, you have failed the original goals of progressive enhancement.
Here’s something interesting to note: a lot of people mention that Yahoo was quick to pick up on the better ideas about progressive enhancement and introduced “graded browser support” in YUI. In it, it states:
“Anyone who slaps a ‘this page is best viewed with Browser X’ label on a Web page appears to be yearning for the bad old days, before the Web, when you had very little chance of reading a document written on another computer, another word processor, or another network.”
However, if you read it, you will note that it specifies:
C-grade browsers should be identified on a blacklist.
And if you visit Yahoo.com today with Internet Explorer 5.2 on the Mac, here is what you will see:
Likewise, here’s what happens on Google Plus:
So what am I saying exactly? A few things:
- We do have to recognize that there are business realities and cost to supporting browsers to any degree. Real “progressive enhancement” could be extremely costly in cases with very rich UI, and sometimes it might not make economic sense. In some cases, the experience is the product. To be honest, I’ve never really seen it done completely myself, but that’s not to say it doesn’t exist.
- We are right on the cusp of an evergreen world, which is a game changer. In an evergreen world, we can use ideas like polyfills, prollyfills and “high end progressive enhancement” very efficiently, as there are no more far-behind laggards entering the system.
- There are still laggards in the system and there likely will be for some time to come – we should do what we can to get as many of them who can update to do so and decrease the scope of this problem.
- We are still faced with choices that are unpleasant from a business perspective for how to deal with those laggards in terms of new code we write. There is no magic “right” answer.
- It’s not entirely wrong to prevent your users from seeing totally broken stuff that you’d prefer they not experience and associate with you. It is considerably friendlier to them than literally writing them off (as the examples above do), and there is at least a chance that you can get them to upgrade.
- In most cases, however, the Web is about access to content – so writing anyone off might not be the best approach. Instead, it might be worth investigating a new approach. Here’s one suggestion that might work for even complex sites: design a single, universal fallback (one which still unobtrusively notifies users why they are getting it and prompts them to go evergreen) which should work on even very old browsers, delivering meaningful but probably comparatively uncompelling content/interactions, and serve that to non-evergreen browsers and search engines. Draw the line at evergreen and enhance/fill from there (sketched below).
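As a rough illustration only – the evergreen test below is a deliberately naive, hypothetical heuristic, and a real site would want something far more considered – the serving logic might look like this in Node:

```js
var http = require('http');

// Hypothetical heuristic: treat auto-updating browsers as evergreen.
// Purely illustrative; a real implementation needs real criteria.
function looksEvergreen(userAgent) {
  return /Chrome|Firefox/.test(userAgent || '');
}

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  if (looksEvergreen(req.headers['user-agent'])) {
    // Full experience: enhance/fill from the evergreen baseline.
    res.end('<p>Rich, enhanced experience goes here.</p>');
  } else {
    // One universal fallback: meaningful content plus an unobtrusive
    // note explaining why, and a prompt to go evergreen.
    res.end('<p>Simple, universally renderable content goes here.</p>');
  }
}).listen(8080);
```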