The Web requires stability and a high degree of client ubiquity; that's why the W3C is important. However, that's also part of the reason standards develop at a comparatively (sometimes painfully) slow rate: they take time, deliberation, testing, and mutual agreement among a number of experts and organizations with significantly different interests.
While we frequently complain about it, that is actually a Really Good Thing™. Standards take a while to “cook”, and the Web would be a real mess if every idea became a native standard haphazardly and at great speed: native implementations have a way of being hard to kill (or change significantly) once the public has free access to them – the general mantra is “don’t break it” – and that makes iteration and significant evolution harder than it needs to be. The pressures and incentives, it seems, are mixed up.
The W3C Extensible Web Community Group was founded with a vision toward supplementing the traditional model with a new one that we believe is a better path forward in most cases.
Polyfills and Prollyfills…
Polyfills are a well-known concept, and the name is an excellent, visual one: something that “fills the holes” here and there in browsers that are behind the curve in implementing a reasonably mature spec. However, since Remy Sharp coined the term a few years ago, the practice has grown increasingly popular and the term’s meaning has become somewhat diluted.
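For example, a classic polyfill is little more than a feature test plus a script implementation of the missing piece. The sketch below fills String.prototype.trim (a real ECMAScript 5 method) on engines that lack it:

    // A minimal polyfill: detect the hole, then fill it with script.
    // String.prototype.trim is part of ECMAScript 5; older engines lack it.
    if (!String.prototype.trim) {
      String.prototype.trim = function () {
        // Strip leading and trailing whitespace, as the spec describes.
        return String(this).replace(/^\s+|\s+$/g, '');
      };
    }

Because the method is only added when it is genuinely missing, conforming browsers are untouched and laggards catch up.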
It is no longer the case that fills are merely “filling in a few holes” in implementations of mature specs – more and more often, they are building whole new annexes from a napkin sketch. Within a very short time of a draft’s first announcement, “fills” appear that provide an implementation of it.
Given this contrast, it seems we could use a name that differentiates between the two concepts. For a while now a few of us have been using various terms to try to describe it. I’ve written about the concept in the past, and it is the subject of Boris Smus’ excellent article How the Web Should Work. Until recently the best term we could come up with was “forward polyfill”. Then, on October 12, 2012, Alex Sexton coined the term “prollyfill” on Twitter:
“Prollyfill: a polyfill for a not yet standardized API” – @SlexAxton, October 12, 2012
The Benefits of Prollyfilling
One thing is clear about the idea of prollyfills: if we get them “right”, they could radically improve the traditional model for standards evolution, because they have a few very important benefits by their very nature. Most of this benefit comes from the simple decoupling from browser releases. Since many more people can contribute to their creation and only one implementation is needed, they can be developed far faster and by a much larger community.

Such an approach also puts the author of a site in control of what is and isn’t supported. In the traditional model, an author has no ability to change the speed at which native features are implemented, nor which browser users will view the site with. Prollyfills let the author rely on JavaScript to implement the features they need, at worst with degraded performance, and that alone is a huge advantage.

An even bigger advantage is that this allows multiple competing APIs, or variants of an API, to co-exist while we round the edges and iterate, because the author chooses which one to use – something that is pretty much impossible in the traditional model. Iteration in the traditional model is likely to cause breakage, which deters use and therefore severely limits how many use cases are considered and how many authors can give meaningful feedback. Iteration in this model breaks nothing: APIs and drafts can compete for users and gather lots of feedback, and if a clear winner emerges, it is evident from actual data before any native code has to be written.
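As a rough sketch of what that could look like in practice, a prollyfill can live under its own prefixed name so that competing or evolving drafts never collide with one another or with an eventual native API; the xMatches name below is invented purely for illustration:

    // A hypothetical prollyfill sketch: the experimental API lives under a
    // prefixed name ("xMatches" is invented here), so the draft can iterate
    // freely without ever colliding with a future native implementation.
    if (!Element.prototype.xMatches) {
      Element.prototype.xMatches = function (selector) {
        // Naive check (for elements in the document): does this element
        // match the given selector?
        var nodes = this.ownerDocument.querySelectorAll(selector);
        for (var i = 0; i < nodes.length; i++) {
          if (nodes[i] === this) return true;
        }
        return false;
      };
    }

If the draft later changes, authors who opted in update their fill, and nothing in the platform breaks.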
The value of the ability to compete and iterate freely should not be underestimated – it is key to successful evolution. Take, for example, the “nth” family of selectors. They took a long time to come into being, and it would appear that most people aren’t especially happy with them – they aren’t often used and they are frequently misunderstood. Had we been able to create prollyfills for selector pseudo-classes, it is unlikely that this particular design would have won out as the way to accomplish those use cases – yet scarce resources were ultimately spent working out the details, drafting prose, presenting, getting agreement, implementing, going through processes about prefixing, optimizing implementations, and so on. In fact, the concept of nth was being discussed at the W3C at least as early as 1999, and parts of what was discussed were punted to CSS Selectors Level 4. There was a definite effort to “answer all of the nth questions now”, and while what we have might be an “academically really good answer”, it’s hard to argue that it has really caught on or been as useful as some other things. It’s easy to speculate about what might have been better, but the truth is, we’ll never know.
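To make the mechanism concrete, here is one hedged sketch of how a selector pseudo-class could be prollyfilled and iterated on in the wild; the :x-every(n) pseudo-class and its class-tagging strategy are invented for illustration:

    // Emulate a hypothetical :x-every(n) pseudo-class in script by tagging
    // matching elements with a real class that ordinary CSS can style.
    function applyEvery(containerSelector, n, className) {
      var containers = document.querySelectorAll(containerSelector);
      for (var i = 0; i < containers.length; i++) {
        var children = containers[i].children;
        for (var j = 0; j < children.length; j++) {
          // Tag every nth child (1-based), mirroring nth-style counting.
          if ((j + 1) % n === 0) children[j].className += ' ' + className;
        }
      }
    }

    // Usage: authors opt in explicitly, then style .x-every-3 { ... } in CSS.
    applyEvery('ul.data', 3, 'x-every-3');

A dozen variations on a design like this could have competed for authors’ affection, and actual usage data would have shown which counting model people really wanted.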
The reality is that a very small number of people at the W3C do an enormous amount of work. As of the most recent TPAC, the W3C CSS Working Group alone has 58 “in process” drafts and only a handful of editors. This means that a significant number of them will be de-prioritized for now in order to focus on the ones with the most support or that are furthest along, and new ideas probably won’t be undertaken for a while. While the group is trying to streamline the process, it does tend to go in fits and starts like this… Without a doubt, several of those 58 will be years and years in the making.
If, instead, these same individuals could participate in a less demanding, more advisory fashion with a satellite group of Web developers submitting experimental work, and after a certain threshold take over a fairly robust, reasonably mature draft, it is easy to imagine things evolving faster and those resources being far more focused. Of course, browser manufacturers and standards bodies can participate in the same manner: Adobe recently followed this model with its Draft Proposal on CSS Regions, Mozilla is working on x-tags, which implements part of the still very early Web Components API via script, and the ECMA team does this pretty often as a general practice. These are very positive developments.
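As a very rough sketch of the general pattern such script-based libraries follow – register a custom tag name and “upgrade” matching elements – note that the names below are illustrative and do not reflect x-tags’ actual API surface:

    // A rough sketch of script-based custom tag registration; the names
    // here are illustrative, not x-tags' actual API.
    function registerTag(tagName, lifecycle) {
      var found = document.getElementsByTagName(tagName);
      for (var i = 0; i < found.length; i++) {
        // "Upgrade" each existing unknown element by running its hook.
        if (lifecycle.created) lifecycle.created.call(found[i]);
      }
    }

    registerTag('x-clock', {
      created: function () {
        this.textContent = new Date().toLocaleTimeString();
      }
    });

A real implementation would also watch for elements inserted later; this sketch only upgrades what is already in the document.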
The Challenges of *lyfilling
One challenge with polyfills and prollyfills is how to write them effectively (best practices), how to make them discoverable, and how to communicate the degree of parity they provide with a spec and the method and degree of their forward compatibility.
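One could imagine a lightweight convention for fills to advertise exactly that – everything below, from the registry name to its fields, is invented here for illustration:

    // An invented convention: each fill pushes a descriptor into a shared
    // registry so tooling could surface coverage and compatibility data.
    window.fillRegistry = window.fillRegistry || [];
    window.fillRegistry.push({
      name: 'x-matches',                          // which fill this is
      spec: 'http://example.org/drafts/matching', // hypothetical draft URL
      parity: 'partial',                          // e.g. full | partial | experimental
      forwardCompatible: true                     // safe to leave in once native ships?
    });

Nothing like this exists today, which is precisely the gap discussed below.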
Charting the Uncharted Waters
It seems clear that we could do with some cooperation – and with some robust, well-thought-out, well-tested prollyfills for native APIs – to make much of this easier.
There really is nothing like “caniuse” for polyfills detailing compatibility, parity with a draft, or the method and degree of forward compatibility. Likewise, there is no such thing for prollyfills – nor is there a “W3C”-like organization where you can post your proposal, discuss it, get people to look at and contribute to your prose and examples, ask questions, and so on. There is no group creating and maintaining test cases, or helping to collect data and working with willing W3C members to make these things real or get them prioritized.
In short, there is no organized community acting on behalf of those interested in this subject. These are gaps we hope to fill (pun intended) with the W3C Extensible Web Community Group. We’re just getting started, but we have created a GitHub organization and registered prollyfill.org. Participate, or just follow along in the conversations on our public mailing list.