A Web for the Next Century


The Web Platform
Chapter 1.

1: In the beginning, Tim created the Web.
2: And the platform was without form, and void; and confusion was upon the face of the Internet. And the mind of Tim moved upon the face of the problem.
3: And Tim said, Let there be a new protocol: and there was HTTP.
4: And Tim saw the protocol, that it was good: and he divided the network by domains and subdomains.
5: And he called the network the World Wide Web.
6: And Tim said, Let there be a browser, that pages delivered by this Web might be viewed.
7: And it was so.
8: And Tim separated the structure of the content from its style.
9: And the structured content he called HTML and the means of styling he called CSS. And he saw that it was good.
10: And Tim said, Let us describe this structured content in the form of a tree and make it scriptable; and it was so.
11: And from the dust of the Interwebs were created developers, to whom he gave dominion over the platform.

If you’ve read any of the numerous articles about The Extensible Web, heard about it in conference presentations, or seen The Extensible Web Manifesto, you’ve likely seen (or heard) three phrases repeated: “Explain the magic,” “fundamental primitives” and “evolution of the platform”. I thought it might be worth (another) piece explaining why I think these are at the heart of it all…

For thousands of years the commonly accepted answer to the question “where did dolphins come from?” (or sharks or giraffes or people) was essentially that they were specially created in their current form by a deity, as part of a complex and perfect plan. Almost all cultures had some kind of creation myth to explain the complex, high-level things they couldn’t understand.

Turns out that this very simplified view was wrong (as is much of the cute creation myth I’ve created for the Web Platform) and I’d like to use this metaphor a bit to explain…

Creation and Evolution: Concrete and Abstract

It’s certainly clear that Sir Tim’s particular mix of ideas became the dominant paradigm:  We don’t spend a lot of time talking about SGML or Gopher.

It seems straightforward enough to think of the mix of ideas that made up the original Web as evolutionary raw materials, and to think of users as providing some kind of fitness function under which it became the dominant species/paradigm, but that is a pretty abstract framing and it misses a subtle but, I think, important distinction.

The Web Platform/Web browsers are not an idea, they are now a concrete thing.  The initial creation of the Web was an act of special creation – engineering that introduced not just new ideas, but new software and infrastructure.  The Web is probably the grandest engineering effort in the history of mankind – browsers as a technology outstrip any operating system or virtual machine in terms of ubiquity, and they are increasingly capable systems.  There are many new systems with concrete ideas to supplant the Web browser and replace it with something new.  People are asking themselves: is it even possible for the Web to hang on?  Replacing it is no easy task, technically or socially – and this is a huge advantage for the Web.  So how do we make it thrive?  Not just today, but years from now?

Some more history…

In Tim’s original creation, HTTP supported only GET; in HTML there were no forms, no images, no separate idea of style.  There was no DOM and there were no async requests – indeed, there was no script at all.  Style was a pretty loosely defined thing – there wasn’t much of it – and CSS wasn’t a thing.  There was just: GET me that very simple HTML document – markup which was mediocre at displaying text – and display it when I give you a URL, and make sure there is this special concept of a “link”.

This is at the heart of what we have today, but it is not nearly all of it:  What we have today has become an advanced Platform – so how did we get here?  Interestingly, there are two roads we’ve followed at different times – and it is worth contrasting them.

In some cases, we’ve gone off and created entirely new high-level ideas like CSS or AppCache which were, well, magic.  That is, they did very, very complex things and provided a high-level, declarative API designed to solve very specific use cases.  And at other times (as with DOM, XMLHttpRequest and CSSOM) we have explained some of the underlying magic by taking some of those high-level APIs and providing imperative APIs underneath them.

Looking at those lists, it seems to me that were it not for those small efforts to explain some of the magic, the Web would already be lost by now.

Creating a Platform for the Next 100 Years

The real strength of life itself derives from the fact that it is not specifically designed to perfectly fill one particular niche; rather, complex pressures at a high level judge relatively minor variance at a low level, and this simple process inevitably yields the spread of things that are highly adaptive and able to survive changes in those complex pressures.

Sir Tim Berners-Lee couldn’t have foreseen iPhones and Retina displays, and had he been able to account for them in his original designs, the environment itself (that is, users who choose to use or author for the Web) would likely have rejected it.  Such are the complex pressures changing our system, and we could learn something from nature and from the history of technology here: perfectly designed things are often not the same as “really widely used” things, and either can be really inflexible to change.

Explaining the magic means digging away at the capabilities that underlie this amazing system and describing their relationships to one another in order to add adaptability (extensibility).  At the bottom are a number of necessary and fundamental primitives that only the platform (the browser, generally) can provide.  When we think about adding something new, let’s try to explain it “all the way down” until we reach a fundamental primitive, and then work up.

All of this allows for small mutations – new things which can compete for a niche in the very real world – and, unlike academic and closed committees, it can help create new high-level abstractions based on real, verified, shared need, acceptance and understanding.  In other words, we will have a Platform which, like life itself, is highly adaptive, able to survive complex changes in pressures and able to last beyond any of our lifetimes.

Tagging in a new partner…

At the beginning of this year, we helped make Web history…

But there is a fork in the road ahead… 

For the unfamiliar, the W3C Technical Architecture Group (TAG) is a kind of steering committee for the direction of the Web and its architecture, and for how that relates to Web Standards.  It is composed of 9 members chaired by Sir Tim Berners-Lee: 3 appointees and 5 elected members. Since its inception, this has been a closed process and, although it maintained a public mailing list, the simple fact is that few people were even aware of it. How could such an important group be largely unknown? Surely we should be well aware of its work, right? As it turns out, not really – but we have felt its effects. What’s more, only a small percentage of eligible representatives even cast their votes in these elections.

Until now…

This year we changed that by taking the case for reform to the public. We got reform candidates to blog about their vision: what they would advocate and where they thought the TAG and the W3C itself needed improvement. We tweeted and blogged and shared, made public opinion clear, and lobbied representatives to exercise their right and vote with us. What resulted was the most participatory election in W3C history, with every open seat filled by one of our candidates.

We delivered a mandate for significant change…

In the few months since then, things have shaped up nicely.

  • We have seen unprecedented coordination and consultation with ECMA, the group in charge of JavaScript standards.
  • We have seen great advice on improving APIs and coordination across Working Groups.  Excellent new things like Promises/Futures being introduced across the spectrum.
  • The TAG, as a group, is better known and more visible than ever: things are moving to GitHub, you can follow it on Twitter, and they have even had a Meet the TAG event.
  • TAG members are out there in public – developers know who they are, they know the sorts of things they stand for and they are interested.
  • All 4 of our candidates and both new TAG co-chairs helped author and/or signed the Extensible Web Manifesto, laying out a new, more open vision for architecture and process surrounding standards.
  • Work to replace WebIDL in standards descriptions has begun.
  • When it was time for Tim Berners-Lee to appoint a co-chair, he appointed one from among our reformers (we had more candidates than seats available).

The TAG is listening to everyday developers – and delivering…

But another thing happened: one of our elected representatives switched employers, and because this would leave the TAG with two members from the same organization, he is no longer eligible to serve. As such, there is a special election happening to fill his seat.

Reaffirm the mandate
There are only two nominees to fill this seat:  Frederick Hirsch from Nokia and Sergey Konstantinov from Yandex.

Frederick is a pretty traditional sort of TAG nominee – he has lots of W3C experience under his belt and credentials from projects like XKMS, SAML, WS-I Security and a bunch of other stuff.

Sergey is pretty new to the official Web Standards game – Yandex has only been a member of the W3C for about a year.  He comes from the trenches – most recently in charge of Yandex Maps.  He works directly with developers a great deal, and he has written two public posts explaining some of the reasons he is running.  I have spoken to him myself and I believe that electing him will reaffirm the mandate we sent at the beginning of the year.

If you supported reform candidates in the last TAG election, please join me in spreading the word – share via whatever means you like – and let AC reps know.


Regressive Disenfranchisement: Enhance, Fallback or Something Else.

My previous post, “This is Hurting Us All: It’s time to stop…”, seems to have caused some debate because in it I mentioned delivering users of old/unsupported browsers a 403 page.  This is unfortunate, as the 403 suggestion was not the thrust of the article but a minor comment at the end.  This post takes a look at the history of how we’ve tried dealing with this problem, successes and failures alike, and offers some ideas on how an evergreen future might impact the problem space and solutions going forward.

A History of Evolving Ideas

Religious debates are almost always wrong: almost no approach is entirely without merit, and the more ideas we mix and the more things change, the more we make things progressively better.  Let’s take a look back at the history of the problem.

In the Beginning…

In the early(ish) days of the Web there was some chaos: vendors were adding features quickly, often before they were even proposed as a standard. The things you could do with a Web page in any given browser varied wildly. Computers were also more expensive and bandwidth considerably lower, so it wasn’t uncommon to have a significant number of users without those capabilities, even if they had the right “brand”.

As a Web developer (or a company hiring one), you had essentially two choices:

  • Create a website that worked everywhere, but was dull and uncompelling, and used techniques and approaches which the community had already agreed were outdated and problematic – essentially hurting marketability and creating tech debt.
  • Choose to develop better code with more features and whiz-bang – write for the future now, wait for the internet to catch up (maybe even help encourage it), and not worry about all of the complexity and hassle.

“THIS SITE BEST VIEWED WITH NETSCAPE NAVIGATOR 4.7 at 800×600” 

Many people opted for the latter choice and, while we balk at it, it wasn’t exactly a stupid business decision.  Getting a website wasn’t a cheap proposition, and it was a wholly new business expense; lots of businesses didn’t even have internal networks or significant business software.  How could they justify paying people good money for code that was intended to be replaced as soon as possible?

Very quickly, however, people realized that even though they put up a notice with a “Get a better browser” kind of link, that link was delivered along with a really awful page which made their company look bad.

Browser Detection

To deal with this problem, sites started detecting your browser via the user-agent string and giving you some simpler version of the “Your browser sucks” page, which at least didn’t make them look unprofessional: a broken page is the worst thing your company can put in front of users… Some people might even interpret the need for a “modern browser” as being “ahead of the curve”.

LIAR!:  Vendors game the system

Netscape (at this point) was the de-facto standard of the Web and Microsoft was trying desperately to break into the market – but lots of sites were just telling IE users “no”.  The solution was simple:  Lie.  And so it was that Microsoft got a fake ID and walked right past the bouncer, by publicly answering the question “Who’s asking?” with “Netscape!”.

Instead of really fixing that system, we simply decided that it was too easy to game and moved on to other ideas, like checking for Microsoft-specific APIs such as document.all to differentiate on the client.

Falling Back

As HTML began to grow and pages became increasingly interactive, we introduced the idea of fallback: if a user agent didn’t support script, or object/embed, or something else, give its users some alternate content. In user interface and SEO terms, that is a pretty smart business decision.

One problem: very often, fallback content wasn’t provided at all. When it was, it usually said essentially “Your browser sucks, so you don’t get to see this; you should upgrade”.

The Cross-Browser Era and the Great Stagnation

OK, so we had to deal with more than one browser, and at some point they both had competing ideas which weren’t standard but were far too useful to ignore.  We created a whole host of solutions:

We came up with safe subsets of supported CSS and learned all of the quirks of the browsers and doctypes, and we developed libraries that created new APIs which could switch code paths and do the right thing with each browser’s script APIs.

As you would expect, we learned things along the way that seem obvious in retrospect: Certain kinds of assumptions are just wrong.  For example:

  • Unexpected vendor actions that might increase the number of sites a user can view with a given browser aren’t unique to Microsoft. Lots of solutions that switched code paths based on document.all started breaking when Opera copied it, but not all of Microsoft’s APIs.  Feature detection is better than basing logic on assumptions about the current state of vendor APIs (see the sketch after this list).
  • All “support” is not the same – feature detection alone can be wrong.  Sometimes a standard API or feature is there, but it is so woefully incomplete or wrong that you really shouldn’t use it.
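
To make that first lesson concrete, here is a minimal sketch in JavaScript – XMLHttpRequest, ActiveXObject and document.all are the real objects involved, but the surrounding logic is illustrative only, not anyone’s production code:

```js
// Prefer feature detection: test for the thing you actually need.
function createRequest() {
  if (typeof XMLHttpRequest !== 'undefined') {
    return new XMLHttpRequest();                    // standards-based path
  }
  if (typeof ActiveXObject !== 'undefined') {
    return new ActiveXObject('Microsoft.XMLHTTP');  // legacy IE path
  }
  return null;                                      // no support – degrade gracefully
}

// The anti-pattern: the presence of document.all tells you nothing reliable
// about which other vendor APIs exist – Opera's partial copy broke exactly
// this kind of logic.
if (document.all) {
  // ...assume every Microsoft-specific API is present... (don't do this)
}
```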

And all of them still involved some sense of developing for a big market share rather than “everyone”.  You were almost always developing for the latest browser or two, for the same reasons listed above – only the justification was even greater, as there were more APIs and more browser versions.  The target market share was increasing, but it was never aimed at everyone – that would be too expensive.

Progressive Enhancement

Then, in 2003 a presentation at SXSW entitled “Inclusive Web Design For the Future” introduced the idea of “progressive enhancement” and the world changed, right?

We’re all familiar with examples of a list of links that use some unobtrusive JavaScript to add a more pleasant experience for people with JavaScript-enabled browsers.  We’re all familiar with examples that take that a step further and do some feature testing to make the experience a little better still if your browser has additional features, while still delivering the core content.  It gets better progressively along with capabilities.
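
As a small, hypothetical sketch of that pattern: imagine the page contains a plain <ul id="gallery"> of ordinary links that works in any browser, and the script below only layers an enhancement on top (openInlineViewer stands in for whatever richer experience you might build):

```js
// Unobtrusive enhancement: the links keep working with no script at all,
// and browsers too old to run this block simply keep the plain links.
if (document.addEventListener && document.querySelector) {
  document.addEventListener('DOMContentLoaded', function () {
    var gallery = document.getElementById('gallery');
    if (!gallery) return;                     // nothing to enhance

    gallery.addEventListener('click', function (e) {
      var node = e.target;
      while (node && node.nodeName !== 'A') {
        node = node.parentNode;               // walk up to the clicked link
      }
      if (!node) return;
      e.preventDefault();
      openInlineViewer(node.href);            // hypothetical richer UI
    });
  });
}
```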

Hold that Thought…

Let’s skip ahead a few years and think about what happened:  Use of libraries like jQuery exploded and so did interactivity on the Web, new browsers became more mainstream and we started getting some forward progress and competition again.

In 2009, Remy Sharp introduced the idea of polyfills – code that fills the cracks and provides slightly older browsers with the same standard capabilities as newer ones.  I’d like to cite his Google Plus post on the history:

I knew what I was after wasn’t progressive enhancement because the baseline that I was working to required JavaScript and the latest technology. So that existing term didn’t work for me.

I also knew that it wasn’t graceful degradation, because without the native functionality and without JavaScript (assuming your polyfill uses JavaScript), it wouldn’t work at all.
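
In its original sense, a polyfill is tiny and strictly conditional – here is a minimal sketch, using the long-standardized Date.now as the example:

```js
// Classic polyfill shape: only fill the hole if the native API is missing,
// and match the standardized behavior exactly.
if (!Date.now) {
  Date.now = function now() {
    return new Date().getTime();
  };
}
```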

In the past few years, all of these factors have increased, not decreased.  We have more browsers, more common devices with variant needs, more OS variance, and an explosion of new features and UX expectations.

Let’s get to the point already…

Progressive Enhancement: I do not think it means what you think it means. The presentation at SXSW aimed to “leave no one behind” by starting from literally text-only and progressively enhancing from there.  It was in direct opposition to the previous mentality of “graceful degradation” – falling back to a known quantity if the minimum requirements are not met.

What we’re definitely not generally doing, however, is actually living up to the full principles laid out in that presentation for anything more than the most trivial kinds of websites.

Literally every site I have ever known has “established a baseline” of which browsers they will “support”, based on market share.  Once a browser drops below some arbitrary percentage, they stop testing for or considering those browsers to some extent.  Here’s the thing: this is not what that original presentation was about.  You can pick and choose your metrics, but the net result is that people will hit your site or app with browsers you no longer support – and what will they get?

IE<7 is “dead”.  Quite a large number of sites/apps/libraries have announced that they no longer support IE7, and many are beginning to drop support for IE8.  When we add in all of the users that we are no longer testing for, it becomes a significant number of people… So what happens to those users?

In an ideal, progressively enhanced world they would get some meaningful content, progressively graded according to their browser’s abilities, right?

But in Reality…

What does the online world of today look like to someone, for example, still using IE5?

Here’s Twitter:

Twitter is entirely unusable…

And Reddit:

Reddit is unusable… 

Facebook is all over the map.  Most of the public pages that I could get to (I couldn’t log in) had too much DOM/required too much scrolling to get a good screenshot – but it was also unusable.

Amazon was at least partially navigable, but I think that is partially luck because a whole lot of it was just an incoherent jumble:

Oh the irony.

I’m not cherry-picking either – most sites (even ones you’d think would work because they aren’t very feature-rich or ‘single page app’-like) just don’t work at all.  Ironically, even some that are about design and progressive enhancement just cause that browser to crash.

FAIL?

Unless your answer to the question “which browsers can I use on your site and still have a meaningful experience?” is “all of them”, you have failed the original goals of progressive enhancement.

Here’s something interesting to note: a lot of people mention that Yahoo was quick to pick up on the better ideas about progressive enhancement and introduced “graded browser support” in YUI.  In it, it states:

Tim Berners-Lee, inventor of the World Wide Web and director of the W3C, has said it best:

“Anyone who slaps a ‘this page is best viewed with Browser X’ label on a Web page appears to be yearning for the bad old days, before the Web, when you had very little chance of reading a document written on another computer, another word processor, or another network.”

However, if you read it you will note that it identifies:

C-grade browsers should be identified on a blacklist.

and if you visit Yahoo.com today with Internet Explorer 5.2 on the Mac here is what you will see:

Your browser sucks.

Likewise, here’s what happens on Google Plus:

You must be at least this tall to ride this ride…

In Summary…

So what am I saying exactly?  A few things:

  • We do have to recognize that there are business realities and cost to supporting browsers to any degree.  Real “progressive enhancement” could be extremely costly in cases with very rich UI, and sometimes it might not make economic sense.  In some cases, the experience is the product.  To be honest, I’ve never really seen it done completely myself, but that’s not to say it doesn’t exist.
  • We are right on the cusp of an evergreen world, which is a game changer.  In an evergreen world, we can use ideas like polyfills, prollyfills and “high-end progressive enhancement” very efficiently, as there are no more “far behind laggards” entering the system.
  • There are still laggards in the system, and there likely will be for some time to come – we should do what we can to get as many of those who can update to do so, and decrease the scope of this problem.
  • We are still faced with choices that are unpleasant from a business perspective for how to deal with those laggards in terms of new code we write.  There is no magic “right” answer.
  • It’s not entirely wrong to prevent your users from seeing totally broken stuff that you’d prefer they not experience and associate with you.  It is considerably friendlier to them than a broken page, even if you are literally writing them off (as the examples above do), and there is at least a chance that you can get them to upgrade.
  • In most cases, however, the Web is about access to content – so writing anyone off might not be the best approach.  Instead, it might be worth investigating a new approach; here’s one suggestion that might work for even complex sites (see the sketch after this list): design a single, universal fallback experience (hopefully one which still unobtrusively notifies users why they are getting it and prompts them to go evergreen) which should work on even very old browsers, delivering meaningful but comparatively less compelling content/interactions, and serve that to non-evergreen browsers and search engines.  Draw the line at evergreen and enhance/fill from there.
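
Here is a minimal sketch of what “drawing the line at evergreen” could look like – the capability tests, bundle path and upgrade link are all illustrative (in the spirit of the BBC’s “cuts the mustard” check), not a definitive recipe:

```js
// Everyone gets the simple, universal page that is already in the markup.
// Only browsers that pass the capability check get the enhanced app.
if ('querySelector' in document &&
    'addEventListener' in window &&
    'localStorage' in window) {
  var script = document.createElement('script');
  script.src = '/js/app.enhanced.js';          // hypothetical enhanced bundle
  script.async = true;
  document.getElementsByTagName('head')[0].appendChild(script);
} else {
  // Assumes this runs at the end of <body>: add an unobtrusive note that
  // explains the simplified experience and prompts an upgrade.
  var note = document.createElement('p');
  note.innerHTML = 'You are seeing a simplified version of this site. ' +
    '<a href="http://browsehappy.com">Upgrade your browser</a> for the full experience.';
  document.body.insertBefore(note, document.body.firstChild);
}
```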

This is Hurting Us All: It’s time to stop…

The status quo with regard to how users receive browser upgrades has changed dramatically in the last few years – modern browsers are now evergreen: they are never significantly out of date with regard to standards support (generally a matter of weeks), and the ones that are not (generally mobile) are statistically never more than a year or so behind.  This means that in 2013 we can write software using standards and APIs created in, say, 2012 instead of 2000 – but by and large we don’t – and that is hurting us all.

It’s not about a specific version…

Historically, a large number of users only got their browser upgraded as a side effect of purchasing a new computer.  Internet Explorer 6 was released toward the end of 2001 – well over a decade ago, pre-9/11/2001.  It wasn’t fully compliant with the standards of the day – and for five years nothing much happened in IE land, so anyone who bought a new machine until about 2007 was still getting IE6 for the first time.  This meant that until very recently many businesses and Web authors still supported IE6 – and it was only with active campaigning that we were finally able to put a nail in that coffin.

So, now we’re all good – right?  Wrong.

A great number of companies have followed their traditional policies and taken this to mean that they must now offer support back to IE7; some have even jumped to IE8, which seems like a quantum leap in version progress, but…  If you write Web sites, or run a company that does, this is now hurting us all: it’s time to stop.

The problem is in the gaps not the versions…

A small chunk of the gaps in support between old IE and evergreen browsers.

Let me add some data and visuals to put this into perspective…  If you were to create a giant table of features/abilities in the Web Platform today in most evergreen and the vast majority of mobile browsers, compare them with the abilities of non-evergreen IE browsers, remove the features that have been there for more than a decade and then zoom WAY out, you would see something like the image here (and then you’d have a lot of scrolling to do).  Rows represent features, the green columns on the left represent evergreen browsers and the red columns on the right represent IE10, 9, 8, 7 and 6 (in that order).

If you want the actual dirty details and to see just how long my (incomplete) list of features really is, you can take a look at:

Now, if you really look at the data, both are slightly flawed, but the vast majority of the picture is crystal clear…

  • There are an ever-increasing number of gaps in ability between evergreen browsers and old IE.  More important is where the gaps are: many are among the most powerful and important sorts of features to come to the Web in a really long time.
  • We can’t wait years for these features, because meanwhile user/business expectations for the features made possible by these APIs keep increasing.
  • Solutions/workarounds for missing features are non-standard.  They get bulky as you increase their number, they are less maintainable, and they require more code and testing (anecdotally, I have worked on a few small projects that spent almost as much time working out cross-browser problems as on the entire rest of the project).

How it hurts us all…

Directly, this is hurting you or your company because it means you spend a significant amount of time and money developing for and supporting browsers which are so far behind the curve – and the work you are doing is already out of date.

What’s more, by supporting them you are effectively enabling these users to continue, inadvertently, bad behavior which is free to fix – and the longer they stay around, the more other companies follow suit.  You are creating a self-fulfilling prophecy: my users won’t upgrade.

What can we do?

Luckily, this is actually a really solvable problem… All we have to do is get users to upgrade their silly browsers – and it’s free.

I propose that we set two dates as a community and do the following:

  • Try to get cooperation from major sites (like we did for SOPA) and make sections of the interwebs go semi-dark for those users on the first date – while we tell them about the second date (next bullet).
  • Upon the second date, we collectively stop serving them content and start serving them 403s (I’d propose July 4th – Independence Day in the US, but that doesn’t internationalize well).

I think there are a very small number of users who still don’t have high-speed access; for those people, you could point them to Best Buy, where they can pick up a copy of Google Chrome on CD.  If you have other ideas, want to help coordinate a date, want to submit scripts that people can use to black out their site, or even want to share another way that users can get disks by mail, leave a comment.

Clarification: I am not suggesting that the entire Web start serving 403 responses to anyone with an older version of IE – I am suggesting that sites stop support/new development for those browsers.  Eventually, for some classes of sites, this will mean that the site is actually unusable (try using Twitter or Google Plus with IE4 – I haven’t, but I bet you won’t get far).  If that is the case, a 403 page asking users to upgrade is an entirely appropriate response, and it is actually better than a broken user experience.  This is about helping users to help us all.
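
For sites that do reach that point, here is a minimal sketch of what such a response might look like, assuming a Node.js/Express server – the user-agent pattern and message are illustrative only and should match whatever you have actually stopped supporting:

```js
var express = require('express');
var app = express();

// Answer browsers we no longer support with a friendly 403 and an upgrade
// prompt instead of a broken page; everyone else falls through to the site.
app.use(function (req, res, next) {
  var ua = req.headers['user-agent'] || '';
  if (/MSIE [2-6]\./.test(ua)) {               // roughly IE6 and older
    return res.status(403).send(
      '<h1>Please upgrade your browser</h1>' +
      '<p>This site no longer supports very old browsers. ' +
      'Upgrading is free and only takes a few minutes.</p>'
    );
  }
  next();
});

app.listen(3000);
```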

Our support of non-evergreen browsers is killing us all; let’s work together to spread the word and fix that proactively, as a community.

Thanks to Paul Irish and Cody Lindley for helping to hook me up with some data.

The New Gang Of Four

For programmers, when we hear “Gang of Four” we pretty much immediately conjure up an image of the blue and white book that most of us have sitting on a shelf somewhere, in which four very smart people sat down and documented design patterns which would shape the way people think about writing software for many years to come.  However, the concept of a “Gang of N” is used frequently in politics the world round to denote a group of at least somewhat like-minded people who form a voting bloc big enough to matter, and who thus wield a greater degree of power than any one of them could individually… power which is often used to bring about significant changes rapidly.

The Powers that Be

Most developers have probably never heard of the W3C Technical Architecture Group, so if you haven’t, here it is in a nutshell: this is a very small group of people at the W3C – a mere 9 people, to be exact, chaired by Tim Berners-Lee himself – responsible for envisioning the architecture of the Web itself and championing it.  I’m not sure exactly how much formal power it has, but at the very least it wields the power of the bully pulpit at the W3C, and therefore has significant influence over the actual focus that a lot of big groups drive toward.

The Need for Change

I will let you read more in the links below, but to sum up: most of the stuff that this group has historically concerned itself with has little to do with the reality of some of the most important pieces of the Web platform that most of you reading this live and breathe every day.  If you want to think deep thoughts about how the Web is documents full of data interlinked through URIs – this is traditionally the group for you.  Don’t get me wrong – I’m not trying to trivialize that, but the Web is about so, so much more than this group has actively discussed, and in my opinion this is creating a sad new reality in which the once proud W3C seems less and less relevant to the everyday developer. If you now reference the WHATWG HTML Living Standard over the current W3C version, prefer JSON despite the W3C’s enormous push of the “technically superior” XML family of languages and tools, and really want more focus on the things you actually deal with programmatically – then you see the problem.

One GIANT Chance to Make a Big Change

Only 5 of the 9 positions in that group are elected (Sir Tim is the chair and 3 positions are appointed – hmmm).  However, it just so happens that this time around 4 of those 5 spots are up for grabs… and somehow miraculously we managed to get 4 people officially nominated who could really shake things up!

  • Anne van Kesteren
  • Alex Russell
  • Yehuda Katz
  • Marcos Caceres

If you don’t know who these guys are – I think you just aren’t paying a whole lot of attention…   They are smart, super active (dozens of influential working groups and important open source projects) and not the sort of guys who sit quietly and go along if they disagree.  A few of them have written some good articles about what they think is wrong with TAG now and what they plan to do if elected [1][2][3][4][5].

Basically – Make it concentrate more on the stuff you really care about:

  • Layered Architecture – increasing levels of abstraction
  • Close coordination with ECMA
  • Extensibility – Web Components / Shadow DOM – new APIs including composition and templates
  • Bring a level of “from the trenches” representation.
  • In short:  Advocate and use those powers for what people like us really care most about.

Let’s Get These Guys Elected!

Here’s what you can’t do: Cast an actual vote.  Only the 383 member organizations of W3C actually get to vote – but they get to vote once for every open slot.

Here is what you can do:  Lobby…. Use the power of social media to promote this article saying HEY MEMBER ORGS — THESE FOUR ARE WHO WE WANT  YOU TO CAST YOUR VOTE FOR… Tweet… +1… Like… Blog… Make Memes…

C’mon interwebs – do what you do so well…

W3C Extensible Web Community Group

The Web requires stability and a high degree of client ubiquity; that’s why the W3C is important. However, that’s also part of the reason that standards develop at a comparatively (sometimes seemingly painfully) slow rate: they take time, deliberation, testing and mutual agreement between a number of experts and organizations with significantly different interests.

While we frequently complain, that is actually a Really Good Thing™. Standards take a while to “cook” and the Web would be a real mess if every idea became a native standard haphazardly and with great speed:  Native implementations have a way of being hard to kill (or change significantly) once the public has free access to them – the general mantra is “don’t break it” – and that makes iteration/significant evolution artificially harder than it needs to be.  It seems that the pressures and incentives are mixed up.

The W3C Extensible Web Community Group was founded with a vision toward supplementing the traditional model with a new one which we believe is a better path forward in most cases.

Polyfills and Prollyfills…

Polyfills are a well-known concept, and it’s an excellent, visualizable name: something that “fills the holes” here and there in browsers which are behind the curve with regard to implementing a pretty mature spec.  However, since the term was coined by Remy Sharp a few years ago and the practice has become increasingly popular, its meaning has become somewhat diluted.

No longer are they merely “filling in a few holes” in implementations based on mature specs – more and more often they are building whole new annexes based on a napkin sketch.  Within a very short time of the first announcement of a draft, we have “fills” that provide an implementation.

Given this contrast, it seems we could use a name which differentiates between the two concepts.  For a while now a few of us have been using different terms to try to describe it.  I’ve written about the concept in the past, and it is the subject of Boris Smus’ excellent article How the Web Should Work.  Until recently the best term we could come up with was “forward polyfill”.  Then, on October 12, 2012, Alex Sexton coined the term “prollyfill” on Twitter:

“@SlexAxton: Prollyfill: a polyfill for a not yet standardized API” – October 12, 2012
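
To make the distinction concrete, here is an illustrative sketch – the first half fills a real, standardized ES5 method; the “proposed” API in the second half is entirely made up:

```js
// Polyfill: the API is already standardized; we only fill the gap in
// engines that lack a native implementation.
if (!Array.prototype.forEach) {
  Array.prototype.forEach = function (fn, thisArg) {
    for (var i = 0; i < this.length; i++) {
      if (i in this) fn.call(thisArg, this[i], i, this);
    }
  };
}

// Prollyfill: the API only exists as a draft or an idea, so we implement the
// proposal in script – expecting it to change (or vanish) as it iterates and
// as competing proposals gather real-world feedback.
if (!window.proposedObserveChanges) {          // hypothetical draft API
  window.proposedObserveChanges = function (target, callback) {
    // ...script implementation of whatever the draft describes...
  };
}
```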

The Benefits of Prollyfilling

One thing is clear about the idea of prollyfills: if we get them “right” they could radically improve the traditional model for standards evolution, because by their very nature they have a few very important benefits.  Most of this benefit comes from the simple decoupling from a browser release itself.  Since lots more people can contribute to their creation and you only need one implementation, they can be developed with much greater speed and by a larger community.  Such an approach also puts the author of a site in control of what is and isn’t supported.  In the traditional model, an author has no ability to change the speed at which native features are implemented, nor which browser users will use to view their site.  Using prollyfills allows the author to rely on JavaScript to implement the features they need, with only degraded performance, and this is a huge advantage in general.

An even bigger advantage in that scenario is that it allows multiple competing, or even variant, APIs to co-exist while we round the edges and iterate, because the author can choose which one they are going to use – that is pretty much impossible in the traditional model.  Iteration in the traditional model is likely to cause breakage, which is a deterrent to use and therefore severely limits how many use cases will be considered or how many authors can give meaningful feedback.  Iteration in this model breaks nothing: APIs and drafts can compete for users, get lots of feedback and, ultimately, if there is a clear winner it is evident from actual data before native code ever has to be written.

The value of the ability to compete and iterate freely should not be under-estimated – it is key to successful evolution.  Take, for example, the “nth” family of selectors.  They took a long time to come to be, and it would appear that most people aren’t especially happy with them – they aren’t often used and they are often misunderstood.  Given the ability to create prollyfills for selector pseudo-classes, it seems unlikely that this particular design would have won out as the way to accomplish those use cases, yet limited resources were ultimately spent working out details, drafting prose, presenting, getting agreement, implementing, going through processes about prefixing, optimizing implementations, etc.  In fact, the concept of nth was being discussed at the W3C at least as early as 1999, and parts of what was discussed were punted to CSS Selectors Level 4.  There was a definite effort to “answer all of the nth questions now,” and while what we have might be an “academically really good answer”, it’s hard to argue that it has really caught on or been as useful as some other things.  It’s easy to speculate about what might have been better but the truth is, we’ll never know.

The truth is, a very small number of people at the W3C currently do an enormous amount of work.  As of the most recent TPAC, the W3C CSS Working Group alone has 58 “in process” drafts and only a handful of editors.  This means that a significant number of them are going to be de-prioritized for now in order to focus on the ones with the most support or that are further along, and new ideas probably won’t be undertaken for a time.  While they are trying to streamline the process, it does tend to go in fits and starts like this… Without a doubt, several of those 58 will be years and years in the making.

If, instead, these same individuals could participate in a less demanding, more advisory fashion with a satellite group of Web developers submitting experimental works, and after a certain threshold could take over a fairly robust and reasonably mature draft, it is easy to imagine that things could evolve faster and those resources could focus a lot more.  Of course, browser manufacturers and standards bodies can participate in the same manner: Adobe recently followed this model with their draft proposal for CSS Regions, Mozilla is working on x-tags, which implements part of the still very early Web Components API via script, and, as a general practice, the ECMA team does this pretty often.  These are very positive developments.

The Challenges of *lyfilling.

Currently, every fill is generally implemented as a “from the ground up” undertaking, despite the fact that there is potentially a lot of overlap in the sorts of things you need in order to implement them.  For example: if you are filling something in CSS, you have to (at a minimum) parse CSS.  If you are filling a selector pseudo-class, you’ve got a lot of figuring out, plumbing and work to do to make it function efficiently enough.  If you are filling a property, that potentially has a lot of overlap with the selector bit.  Or perhaps you have a completely new proposal that is merely “based on” a subset of some existing Web technology, like Tab Atkins’ Cascading Attribute Sheets – it’s pretty hard to start at square one, and that means these fills are often comparatively low fidelity.
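
As a minimal sketch of that shared plumbing: any CSS-ish fill first has to get at the raw stylesheet text and re-parse it itself, because the browser has already discarded whatever it didn’t understand (the -x-masonry-layout property here is made up, standing in for whichever feature is being filled):

```js
// Fetch each stylesheet's raw text and scan for an experimental declaration
// the browser ignored, so the fill can act on it from script.
function findExperimentalDeclarations(callback) {
  var links = document.querySelectorAll('link[rel="stylesheet"]');
  for (var i = 0; i < links.length; i++) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', links[i].href, true);
    xhr.onload = function () {
      // A real fill needs a real CSS parser here; a naive regex only goes so far.
      var matches = this.responseText.match(/-x-masonry-layout\s*:\s*[^;]+/g);
      if (matches) callback(matches);
    };
    xhr.send();
  }
}
```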

Another challenge with polyfills and prollyfills is how to write them effectively (best practices), how to make them discoverable, and how to communicate the degree of parity they provide with a draft and their method/degree of forward compatibility.

Charting the Uncharted Waters

It seems clear that we could do with some cooperation and potentially some robust and well thought out and tested prollyfills for native APIs that would make some of this easier.

There really is nothing like “caniuse” for polyfills detailing compatibility, parity with a draft, or method/degree of forward compatibility.  Likewise, there is no such thing for prollyfills – nor is there a “W3C”-like organization where you can post your proposal, discuss it, get people to look at/contribute to your prose/examples, ask questions, etc.  There is no group creating and maintaining test cases, or helping to collect data and work with willing W3C members to help make these things real or prioritized in any way.

In short, there is no community acting as a group around this subject.  These are gaps we hope to fill (pun intended) with the W3C Extensible Web Community Group.  We’re just getting started, but we have created a GitHub organization and registered prollyfill.org.  Participate, or just follow along in the conversations on our public mailing list.