Extend The Web Forward: This is an intervention…

Today marks what I hope will be a turning point in the way we deal with standards, one which will usher in a new wave of innovative enterprise on the Web, a smoother and more sensible process, and a better architecture. It's no small dream.

This morning, along with 19 other signatories representing a broad coalition of individuals from standards bodies, organizations, browser vendors, groups, and library creators, I helped to launch The Extensible Web Manifesto. In it, we outline four core principles for change that will create a more robust Web Platform and describe their value. For a more complete introduction and the reasoning behind it, see Yehuda Katz's explanatory post.

We hope you will join us in creating a movement to #extendthewebforward. If you agree, pass it along and help us spread the word.

Brian Kardell
Chair, W3C Extensible Web Community Group

W3C Extensible Web Community Group

The Web requires stability and a high degree of client ubiquity; that's why the W3C is important. However, that's also part of the reason that standards develop at a comparatively (sometimes seemingly painfully) slow rate: they take time, deliberation, testing, and mutual agreement among a number of experts and organizations with significantly different interests.

While we frequently complain, that is actually a Really Good Thing™. Standards take a while to "cook", and the Web would be a real mess if every idea became a native standard haphazardly and with great speed: native implementations have a way of being hard to kill (or change significantly) once the public has free access to them – the general mantra is "don't break it" – and that makes iteration and significant evolution artificially harder than they need to be. It seems that the pressures and incentives are mixed up.

The W3C Extensible Web Community Group was founded with a vision toward supplementing the traditional model with a new one which we believe is a better path forward in most cases.

Polyfills and Prollyfills…

Polyfills are a well-known concept, and it's an excellent, visualizable name: something that "fills the holes" here and there in browsers which are behind the curve with regard to implementing a fairly mature spec. However, since Remy Sharp coined the term a few years ago, and the practice has become increasingly popular, its meaning has become somewhat diluted.
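
To make the distinction concrete, here is roughly what a polyfill in the classic sense looks like – feature-detect something that is already well specified and fill it only where it's missing (a minimal sketch, using ES5's String.prototype.trim as the example):

```js
// Classic polyfill: the spec is mature, we're just papering over engines that lag.
if (!String.prototype.trim) {
  String.prototype.trim = function () {
    // \uFEFF and \xA0 cover whitespace oddities some older engines miss
    return this.replace(/^[\s\uFEFF\xA0]+|[\s\uFEFF\xA0]+$/g, '');
  };
}
```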

No longer are they merely "filling in a few holes" in implementations based on mature specs – more and more often, they are building whole new annexes based on a napkin sketch. Within a very short time of a draft's first announcement, we have "fills" that provide an implementation.

Given this contrast, it seems we could use a name which differentiates between the two concepts. For a while now, a few of us have been using different terms to try to describe it. I've written about the concept in the past, and it is the subject of Boris Smus' excellent article How the Web Should Work. Until recently, the best term we could come up with was "forward polyfill". Then, on October 12, 2012, Alex Sexton coined the term "prollyfill" on Twitter:

"@SlexAxton: Prollyfill: a polyfill for a not yet standardized API"
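
The shape of a prollyfill is similar, but the thing being filled doesn't really exist yet, so it pays to be defensive. Here's a minimal sketch – the proposed "last element" method is purely hypothetical, and the prefixed name is deliberate so that whatever eventually ships natively can't collide with it:

```js
// Prollyfill sketch: implement a *proposed*, not-yet-standard API in script.
// The "x" prefix leaves room for the draft (and any eventual native name) to change.
if (!Array.prototype.xLast) {
  Object.defineProperty(Array.prototype, 'xLast', {
    value: function () {
      return this.length ? this[this.length - 1] : undefined;
    },
    writable: true,
    configurable: true,  // easy to remove or replace as the draft iterates
    enumerable: false    // keep it out of for...in loops
  });
}

[1, 2, 3].xLast(); // 3
```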

The Benefits of Prollyfilling

One thing is clear about the idea of prollyfills: if we get them "right", they could radically improve the traditional model for standards evolution, because they have a few very important benefits by their very nature. Most of this benefit comes from the simple decoupling from a browser release itself. Since many more people can contribute to their creation and you only need one implementation, they can be developed with much greater speed and by a larger community.

Such an approach also puts the author of a site in control of what is and isn't supported. In the traditional model, an author has no ability to change the speed at which native features are implemented, nor which browser users will view the site with. Using prollyfills allows the author to rely on JavaScript to implement the features needed, at the cost of degraded performance, and that is a huge advantage in general.

An even bigger advantage is that this allows multiple competing APIs, or even variants of an API, to co-exist while we round the edges and iterate, because the author can choose which one they are going to use – that is pretty much impossible with the traditional model. Iteration in the traditional model is likely to cause breakage, which is a deterrent to use and therefore severely limits how many use cases will be considered and how many authors can give meaningful feedback. Iteration in this model breaks nothing: APIs and drafts can compete for users and get lots of feedback, and if there is a clear winner, it is evident from actual data before native code ever has to be written.
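
As a rough sketch of what "the author is in control" means in practice – the page decides which implementation of a proposed feature it wants rather than waiting on the browser (the file name and the myProposedAPI global below are hypothetical placeholders):

```js
// The author opts in: detect native support, otherwise load whichever
// competing prollyfill suits this site's use case.
function loadScript(src, onload) {
  var s = document.createElement('script');
  s.src = src;
  s.onload = onload;
  document.head.appendChild(s);
}

function start() {
  // ...use window.myProposedAPI here, whether native or filled...
}

if (window.myProposedAPI) {
  start();                                    // native support (someday)
} else {
  loadScript('/fills/proposal-a.js', start);  // script-based, slower, but usable today
}
```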

The value of the ability to compete and iterate freely should not be underestimated – it is key to successful evolution. Take, for example, the "nth" family of selectors. They took a long time to come to be, and it would appear that most people aren't especially happy with them – they aren't often used and they are often misunderstood. Given the ability to create prollyfills for selector pseudo-classes, it is unlikely that this is the approach to those use cases that would have won out, yet limited resources were ultimately spent on working out details, drafting prose, presenting, getting agreement, implementing, going through processes about prefixing, optimizing implementations, and so on. In fact, the concept of nth was being discussed at the W3C at least as early as 1999, and parts of what was discussed were punted to CSS Selectors Level 4. There was a definite effort to "answer all of the nth questions now," and while what we have might be an "academically really good answer", it's hard to argue that it has really caught on and been as useful as some other things. It's easy to speculate about what might have been better, but the truth is, we'll never know.
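
For a sense of what a prollyfill in this space might look like, one simple (if low-fidelity) approach is to do the selection in script and tag the matches with an ordinary class that real CSS can target. Everything in this sketch – the helper, the class name, the "every nth item" behavior – is hypothetical, purely to illustrate the shape:

```js
// Emulate a not-yet-native selector idea: pick elements in script,
// then hand off to plain CSS via a generated class.
function markEveryNth(selector, n, className) {
  var nodes = document.querySelectorAll(selector);
  for (var i = 0; i < nodes.length; i++) {
    if ((i + 1) % n === 0) {
      nodes[i].classList.add(className);
    }
  }
}

// CSS elsewhere:  ul.data > li.x-every-3 { background: #eee; }
markEveryNth('ul.data > li', 3, 'x-every-3');
```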

The reality is that a very small number of people at the W3C currently do an enormous amount of work. As of the most recent TPAC, the W3C CSS Working Group alone has 58 "in process" drafts and only a handful of editors. This means that a significant number of them are going to be de-prioritized for now in order to focus on the ones with the most support or that are further along, and new ideas probably won't be undertaken for a time. While they are trying to streamline the process, it does tend to go in fits and starts like this… Without a doubt, several of those 58 will be years and years in the making.

If, instead, these same individuals could participate in a less demanding, more advisory fashion with a satellite group of Web developers submitting experimental work, and after a certain threshold take over a fairly robust and reasonably mature draft, it is easy to imagine that things could evolve faster and those resources could be focused far better. Of course, browser manufacturers and standards bodies could participate in the same manner: Adobe recently followed this model with their draft proposal on CSS Regions, Mozilla is working on x-tags, which implements part of the still very early Web Components API via script, and the ECMA team does this pretty often as a general practice. These are very positive developments.

The Challenges of *lyfilling.

Currently, every fill is generally implemented as a "from the ground up" undertaking, despite the fact that there is potentially a lot of overlap in the sorts of things you need in order to implement them. If you are filling something in CSS, for example, you have to (at a minimum) parse CSS. If you are filling a selector pseudo-class, you've got a lot of figuring out, plumbing, and work to do to make it function efficiently enough. If you are filling a property, there is potentially a lot of overlap with the selector bit. Or perhaps you have a completely new proposal that is merely "based on" essentially a subset of some existing Web technology, like Tab Atkins' Cascading Attribute Sheets – it's pretty hard to start at square one, and that means these fills are often comparatively low fidelity.
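
To give a flavor of that plumbing: a CSS fill usually has to get at the raw stylesheet text itself, because the browser silently drops declarations it doesn't understand from the CSSOM. A crude sketch follows – the property name is hypothetical, and a real fill would want an actual CSS parser rather than a regex:

```js
// Find declarations of an experimental property that the browser threw away.
// Only inline <style> elements are scanned here, for brevity.
function findExperimentalDeclarations(propName) {
  var found = [];
  var styles = document.querySelectorAll('style');
  var re = new RegExp('([^{}]+)\\{[^}]*' + propName + '\\s*:\\s*([^;}]+)', 'g');
  for (var i = 0; i < styles.length; i++) {
    var text = styles[i].textContent;
    re.lastIndex = 0;
    var m;
    while ((m = re.exec(text))) {
      found.push({ selector: m[1].trim(), value: m[2].trim() });
    }
  }
  return found;
}

findExperimentalDeclarations('my-experimental-prop');
```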

Another challenge with polyfills and prollyfills is how to write them effectively (best practices), how to make them discoverable, and how to communicate what needs to be communicated about the degree of parity they provide, or their method and degree of forward compatibility.

Charting the Uncharted Waters

It seems clear that we could do with some cooperation, and potentially some robust, well-thought-out, and well-tested prollyfills for native APIs that would make some of this easier.

There really is nothing like "caniuse" for polyfills, detailing compatibility, parity with a draft, or the method and degree of forward compatibility. Likewise, there is no such thing for prollyfills – nor is there a "W3C"-like organization where you can post your proposal, discuss it, get people to look at and contribute to your prose and examples, ask questions, and so on. There is no group creating and maintaining test cases, or helping to collect data and work with willing W3C members to make these things real or prioritized in any way.

In short, there is no community acting as an organized group interested in this subject. These are gaps we hope to fill (pun intended) with the W3C Extensible Web Community Group. We're just getting started, but we have created a GitHub organization and registered prollyfill.org. Participate, or just follow along with the conversations on our public mailing list.

The Blind Architect

In my previous post, Tim Berners-Lee Needs Revision, I laid out a general premise: in the late 90s, Tim Berners-Lee identified numerous problems, laid out a grand vision for "the next evolution of the Web", and proposed some fairly specific solutions which would remain, in many ways, the central point of focus at the W3C for many years to come. Many (perhaps most) of the solutions put forward by the W3C have failed to take hold (or to keep it). None have changed the world in quite the same way that HTTP/HTML did. Perhaps more importantly, a good "grand philosophy" for the focus and organization of the W3C, and a strategy for helping move the Web infinitely forward, was never really adopted.

I was asked, in response to my first article, whether I was asking for (or favored) evolution or revolution. It is a deceptively simple-seeming question, and it makes a good metaphor to use as a segue…

Generally speaking, it is my contention that in the time following those papers, despite the very best intentions of its creator (whose entire, laudable goal seems to have been to provide a model in which the system could evolve more naturally) and despite the earnest efforts of its members (some of whom I interact with and respect greatly), the W3C has, perhaps inadvertently, wound up working against a truly evolutionary model for the Web rather than for it.

The single most valuable thing that I would like to see is a simple change in focus and philosophy – some of which is already beginning to happen. More specifically, I would like to propose changes which more readily embrace a few concepts from evolutionary theory: namely, competition and failure. Given the way things have worked for a long time, I am tempted to say that embracing evolution might actually be a revolution.

On Design

As humans (especially as engineers or architects) we like to think of ourselves as eliminators of chaos;  intelligent designers of beautifully engineered, efficient solutions to complex problems that need solving.  We like to think that we can create solutions that will change things, and last a long time because they are so good.  However, more often than not, and by a very wide margin, we are proven wrong. Statistically speaking, we absolutely suck at that game.

Why?  Very simply:  Reality is messy and ever changing… 

Ultimately, the most beautiful solution in the world is worthless if no one uses it.   History is replete with examples of  elegantly designed solutions that went nowhere and not-so-beautifully designed, but “good enough” solutions that became smash hits.  “Good” and “successful” are not the same thing.  At the end of the day it takes users to make something successful.  By and large, it seems that we are not good judges of what will succeed in today’s environment.  Worse still, the environment itself is constantly changing:  Users’ sense of what is valuable (and how valuable that is) is constantly changing.

Research seems to show that it’s not  just a problem with producers/problem solvers either:  If you ask potential users, it would appear that most don’t actually know what they really want – at least not until they see it.  It’s an incredibly difficult problem in general, but the bigger the problem space, the more difficult it becomes.

What this means to W3C…

Now imagine something the scale of the Web: how sure am I that the W3C is not exempt from all of the failures and forces listed above and won't "just get it right"? 100%.

It’s OK.  Embrace it.

Think about this: the W3C is in a sticky spot. On the one hand, they are in charge of standards; as such, they frown on things that are not standards. However, change is a given. How do you change and stay the same simultaneously? The current process of taking an innovation to a standard is exceptionally complicated and problematic.

Browser manufacturers are in an even stickier spot. Worse, the whole process is full of perverse risk/reward scenarios for browser vendors, depending on what is going on at any point in time. In the end, there is a fuzzy definition of "standard" that to this day remains elusive to the average user.

Think not? Have you ever considered which browsers your users have, and then used references and shims to determine exactly which parts of which specs were supported in "enough" of the browsers you expected to encounter? If so, then you see what I mean.
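
In concrete terms, that exercise usually boils down to per-feature checks like these, hand-rolled or pulled in via a library such as Modernizr – a small illustrative sketch, since "it's a standard" says nothing about what a given browser actually shipped:

```js
// "It's a standard" vs. "it actually shipped": authors end up testing feature by feature.
var support = {
  querySelector: 'querySelector' in document,
  classList:     'classList' in document.documentElement,
  pushState:     !!(window.history && window.history.pushState),
  json:          typeof JSON !== 'undefined' && typeof JSON.parse === 'function'
};

if (!support.json) {
  // load a fill, degrade gracefully, or bail - the author's call
}
```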

But What About The Browser Wars?

Much of the above was set up explicitly to avoid a repeat of "the browser wars".

However, I think that we drew the wrong lessons from that experience.

Was the real problem with the browser wars the rapid innovation and the lack of a more or less single global standard? Or was it that this made it difficult for authors to write code that worked in any browser? Is there a difference? Yes, actually, there is, and we know it well in programming. It is the difference between the DOM and jQuery, between Java and Spring. One is a standard; the other is a library chosen by the author. Standards are great – important – I fully agree. The question is how you arrive at them, and evolution through libraries has many distinct advantages.

A lot of the JavaScript APIs that we have natively in the browser today are less than ideal in most authors' minds, even if they were universally implemented (they still aren't). Yet, having them, we do _amazing_ things with them, because in JS we can provide something better on top with libraries. Libraries compete in the wild – authors choose them, not the browsers or the W3C. Ideas from libraries get mixed and mutated, and anything that gains a foothold tries to adapt and survive. After a while, you start to see common themes rise to the top – dominant species. Selectors are a good example; so is a better event API. Standards just waiting to be written, right?
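
Selectors are the clearest illustration of that cycle: the idiom proved itself first as an author-chosen library API, and a native API of the same shape followed. A rough sketch of the before and after:

```js
// What won in the wild, as an author-chosen library:
//   $('.menu a').addClass('active');               // jQuery-style

// What the platform later offered natively, in the same shape:
var links = document.querySelectorAll('.menu a');   // Selectors API
for (var i = 0; i < links.length; i++) {
  links[i].classList.add('active');                 // classList gives the same convenience
}
```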

In my opinion, the W3C in its current arrangement is unprepared for this, and that is a fundamental problem. Efforts to capitalize on it haven't been ideal. There is hesitancy to "pick a winner" (which is weird, since browsers are already the winners), or to give credence to one approach over another rather than starting over. There is always an effort to reconsider the idea holistically as part of the browser and do a big design. In the end, if we get something, it's not bad, but it's usually fundamentally different, and the disparity is confusing, sometimes even unhelpful to authors trying to improve the code in their existing use cases and models.

It doesn't have to be that way. In the end, a lot of recommendations wind up being detailed accounts of things that went out into the world and became pervasive, planned or not. Even many of the new semantic tags were derived from a study of the most-used CSS classes. There is a lot of potential power in the idea of exploiting this and building a system around it.

There is finally movement on Web Components – that could be something of a game changer with the right model at the W3C, allowing ideas to compete and adapt in the wild, decoupled from the browser, before becoming a standard. Likewise, some extension abilities could let certain kinds of CSS changes compete in the wild before standardizing, too. If we start thinking about defining simple, natural extension points, I think it shapes how we think about growing standards: decoupling these advancements from a particular browser helps things grow safely and stay sane – it reduces a lot of risks all around.
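
To make "extension point" concrete: Web Components let an idea ship as a tag defined entirely in script, compete for users, and only later (if it earns it) get baked in natively. Here is a minimal sketch using the custom element registration that eventually standardized – at the time of writing, libraries like x-tags offered similar registration on top of the early drafts, so treat the exact API shape as illustrative:

```js
// A brand new tag, defined entirely in script, usable like any built-in element.
class UserCard extends HTMLElement {
  connectedCallback() {
    this.textContent = 'Hello, ' + (this.getAttribute('name') || 'anonymous');
  }
}
customElements.define('user-card', UserCard);

// In markup:  <user-card name="Brian"></user-card>
```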

(Re)Evolution happens anyway…

One closing example illustrates how it doesn't have to be this way, and how evolution will continue to happen even if we fight it.

While prescriptive standards put forward by the W3C had an advantage, in the long run they are subject to the same forces. I think no idea in history can be said to have had a greater advantage than XML; in many respects, it was the linchpin of the larger plans. Then a lone guy at a startup noticed a really nice coincidence in the way that all C-family languages think about and express data – and it just happened to be super easy to parse and serialize in JavaScript. He registered a domain, threw out a really basic specification which fits neatly on an index card along with a simple JavaScript library, and more or less forgot about it… and JSON was born.

Technically speaking, it is inferior on any number of levels in terms of what it can and can't express. Just ask anyone who worked on XML. But that was all theory. In practice, most people didn't care about those things so much. JSON had no upfront advantages, except that in the browser some people just eval'ed it initially. No major companies or standards bodies created it or pushed it. Numerous libraries weren't already available for every programming language known to man.

The thing is, people liked it… Now there are implementations in every language; now it is a recognized standard… It is a standard because it was defined at ECMA, not the W3C. It was scrutinized and pushed and pulled at the edges – and that's what standards committees are really good at. Now it is natively supported in Web browsers, safely (and more efficiently), and in an increasing number of other programming languages as well. Now it is inspiring new ideas to address things originally addressed by XSL, XPath, and RDF. Maybe someday it will fall out of favor to some spinoff with better advantages for its time. That's reality, and it's OK. The important thing to note is that it achieved all this because it was good enough to survive – not because it was perfect.
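
The whole arc fits in a few lines – the early, trusting approach versus the standardized, native one (a minimal sketch):

```js
var text = '{"name":"JSON","native":true}';

// Early days: just eval the text (it works, but it executes whatever it's handed)
// var data = eval('(' + text + ')');

// Today: native, safe, and faster, in every engine
var data = JSON.parse(text);
data.name;              // "JSON"
JSON.stringify(data);   // '{"name":"JSON","native":true}'
```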

In 1986, Richard Dawkins elegantly wrote "The Blind Watchmaker" to explain how the simple process of evolution creates beautiful solutions to complex problems: blindly – it doesn't have a greater plan for twelve steps down the road in mind.