What Would Bruce Lee Do? The Tao of the Extensible Web.

Recently I participated in a panel on The Standards Process and the Extensible Web Manifesto at Edge Conf.  While ours was the very last session of the day (you can watch the video here), nearly every topic that day touched on the Extensible Web Manifesto in some way.  Between sessions, there were still plenty of people who privately said “So, what exactly is this Manifesto thing I keep hearing about? Tell me more…”, and my day was filled with discussions at all levels about how exactly we should apply its words in practice, what exactly we meant by x, or specifically what we want to change about standards.

Given this, I thought it worth a post to share my own thoughts and say some of the things I didn’t have an opportunity to say on the panel.  I’ll begin in the obvious place:  Bruce Lee.


What Would Bruce Lee Do?

What Would Bruce Lee Do? T-shirt from partyonshirts.com who totally agrees with me (I think, they didn’t actually say it, but I suspect it).

Let’s talk about Bruce Lee.  Chances are pretty good that you’ve heard of him from his film work and pop-culture status, but it’s (slightly at least) less commonly known that he was more than just an exceptional martial artist:  He was a trained philosopher and deep thinker – and he did something really new:  He questioned the status quo of traditional martial arts.  He published a book entitled “The Tao of Jeet Kune Do” and laid out an entirely new approach to fighting.  He taught, trained and fought in a way unlike anyone before him. 

He kicked ass in precisely the same way that standards don’t…  It started with being both intelligent and gutsy enough to make some observations…

There is value in principles, but nothing is too holy to be questioned…

You know, all I keep hearing is “the fight took too long,” “too much tradition, too much classical mess, too many fixed positions and Wing Chun”. You know everything that’s wrong, so fix it. – Linda Lee’s character to Bruce in “Dragon: The Bruce Lee Story” (a fictional, but inspiring scene)

Ok, so we all agree that standards are painful to develop and have sometimes gone very wrong.  From the outside they look, at times, almost comically inefficient, illogical or even self-defeating.  Many comments in our panel were about this in some form:  Too much classical mess – too much Wing Chun.

Bruce Lee saw through the strict rules of forms to what they were trying to accomplish, and he said “yeah, but that could be better – what about this instead?”… and opponents fell.

Likewise, standards bodies and processes are not a religion.  Sometimes process is just process – it can be in the way, unproductive – even harmful if it isn’t serving its true purpose.  There are no holy texts (including the Manifesto itself) and there should be no blind faith in them:  We are thinking, rational people.  The Manifesto lays out guiding principles, with what we hope is a well-reasoned rationale, not absolutes.  Ideological absolutes suck, if you ask me, and Bruce’s approach makes me think he’d agree.

The Manifesto is pretty short, though; it begs follow-up.  One recurring theme was that many people at the conference seemed interested in discussing what they perceived to be a contradiction: that we have not redirected absolutely everything toward the lowest primitives possible (and, for others, a fear that we would).  I’d like to explain my own perspective:

It is often more important to ask the question than it is to answer it perfectly.

Again, the Manifesto lays out principles to strive for, not laws: Avoid large, deep sinks of high-level decisions which involve many new abstractions and eliminate the possibility of competition and innovation – we have finite resources and this is not a good use of them.  Favor lower-level APIs that allow competition and innovation to occur instead – they will – and try to help explain existing features.  Where to draw the line between pragmatism and perfection is debatable, and that’s ok – we don’t have to have all of the perfect answers right now, and that’s good, because we don’t.


Paralysis is bad.  Perfect, as they say, can be the enemy of good:  Progress matters, and ‘perfect’ is actually a non-option because it’s an optimization problem involving many things, including an ever-changing environment.  Realize that there are many kinds of advantage that have historically trumped theoretical purity every time – the technically best answer has never, to the best of my knowledge, ‘won’ – and shipping something is definitely way up there in terms of things you need to succeed.  More importantly, we should do what we can to avoid bad.  At some very basic level, all the Manifesto is really saying is:  “Hey standards bodies — Let’s find ways to manage risk and complexity, to work efficiently and to use data for the really big choices”.

In my mind, the W3C TAG has done a great job of attempting to redirect spec authors, without paralyzing progress, toward asking the right questions: “Is this at an appropriately low level?”, “If it isn’t quite at the bottom, does it look like we can get there from here – or is what’s here too high-level and too full of magic to be explained?”, “Can we explain existing magic with this?” and “Is it consistent in the platform, so that authors can re-apply common lessons and reasoning?”

I’m sure there are some who would disagree, but in the end, I think that the Web Audio API is a nice example of this kind of pragmatism at play.  There are things that we know already:  We’ve had an <audio> tag, for example, for a while.  We have collected use cases which should be plausible, but currently aren’t.  The Web Audio API tries to address this at a considerably lower level, but definitely not the lowest one possible.  However… it has been purposely planned and reviewed to make sure that it gives good answers to the questions above.  While it’s possible that this could lead to something unexpected beneath, we’ve done a lot to mitigate that risk and admitted that we don’t yet have all of the information we need to make good choices there.  It was built with the idea that it will have further low-level fleshing out, and we think we know enough about it to say that we can use it to explain the mechanics of the <audio> tag.  We got the initial spec and now, nearly immediately, work has begun on the next steps.  This has the advantage of steady progress, it draws a boundary around the problem, and it gives a lot of developers new tools with which they’ll help ask the right questions and, through use, imagine new use cases which feed into a better process.  It gives us tools that we need so that efforts like HTML as Custom Elements can begin to contribute to explaining the higher-level feature.  The real danger is only in stopping the progressive work.
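To make the “lower level, but not the lowest possible” idea a bit more concrete, here’s a minimal sketch (mine, not from the spec discussion) of the kind of imperative control the Web Audio API exposes – roughly what a declarative <audio src="clip.mp3" autoplay> asks the browser to do, expressed as a tiny audio graph.  The URL is just a placeholder.

```typescript
// A minimal sketch of playing a clip imperatively with the Web Audio API:
// fetch the bytes, decode them, wire a source node to the speakers, start it.
// (Autoplay policies may require a user gesture before audio actually plays.)
async function playClip(url: string): Promise<void> {
  const ctx = new AudioContext();                       // the low-level audio graph
  const data = await (await fetch(url)).arrayBuffer();  // raw encoded bytes
  const buffer = await ctx.decodeAudioData(data);       // decode into PCM samples
  const source = ctx.createBufferSource();              // a node that plays a buffer
  source.buffer = buffer;
  source.connect(ctx.destination);                      // route it to the output
  source.start();                                       // begin playback now
}

playClip("clip.mp3"); // placeholder URL
```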

Similarly, someone asked whether Beacon “flies in the face of the Extensible Web Manifesto” because it could be described by still further low-level primitives (it might be possible to implement with Service Workers, for example).  Again, I’d like to argue that in fact, it doesn’t.  It’s always been plausible, but prohibitively cumbersome, to achieve roughly the same effect – this is why HTML added a declarative ping attribute to links:  Simple use cases should be simple and declarative – the problem isn’t that high-level, declarative things exist, we want those – the Manifesto is explicit about this – the problem is in how we go about identifying them.  Because this is a really common case that’s been around for a long time, we already have a lot of good data on it.  As it happens, ping wasn’t implemented by some browsers, but they were interested in sendBeacon, which – hooray – can actually be used to describe what ping does, and maybe polyfill it too!  It’s simple, constrained, concise – and it’s getting implemented.  It’s enough for new experimentation, and it’s also different enough from what you typically do with other network-level APIs that maybe it’s fine that it has no further explanation.  If you read my posts, you know that I like to make the analogy of evolution, so I’ll point to something similar in the natural world:  This may simply be a case of convergent evolution – differently adapted DNA that has a similar-looking effect but shares none of the primitives you might think, and that too can actually be ok.
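Since the claim above is that sendBeacon can describe – and maybe polyfill – ping, here is a rough sketch of what that might look like.  It’s an illustration only: the exact ping semantics (the POST body, the special headers) are glossed over, and it only fills where native ping support is absent.

```typescript
// A rough sketch of approximating the declarative ping attribute on links
// with navigator.sendBeacon (not a production polyfill).
document.addEventListener("click", (event) => {
  if (!(event.target instanceof Element)) return;
  const link = event.target.closest("a[ping]") as HTMLAnchorElement | null;
  if (!link || "ping" in link) return;                // no ping-ed link, or native support exists
  const urls = (link.getAttribute("ping") ?? "").trim().split(/\s+/);
  for (const url of urls) {
    if (url) navigator.sendBeacon(url, "PING");       // fire-and-forget; survives navigation
  }
});
```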

The important part isn’t that we achieve perfection as much as that we ask the questions, avoid bad mistakes and make good progress that we can iterate on: Everyone actually shipping something that serves real needs and isn’t actively bad is a killer feature.

The Non-Option

Bruce had a lot of excellent quotes about the pitfalls of aiming too low or defeating yourself.

It’s easy to look at the very hard challenges ahead of us – distilling the cumbersome sky-castles we’ve built into sensible layers and fundamentals – and just say “too hard” or “we can’t, because explaining it locks out our ability to improve performance”.

Simultaneously, one of the most urgent needs is that we fill the gaps – if the Web lacks features that exist in native, we’re losing.  Developers don’t want to make that choice, but we force them to.
We’re at our best when we are balancing both ends of this to move ever forward.

Don’t give up.  It’s going to be hard but we can easily defeat ourselves before we ever get started if we’re not careful.  We can’t accept that – we need to convince ourselves that it’s a non-option, because it is.  If we don’t do the hard work, we won’t adapt – and eventually this inability will be the downfall of the Web as a competitive platform.

Jump in the Fire

Good things happen when we ship early and ship often while focusing this way.  As we provide lower-level features we’ll see co-evolution with what’s happening in the developer community, and this data can help us make intelligent choices – lots of hard choices have already been made, we can see what’s accepted, and we can see new use cases.

Some have made the case – to me at least – that we shouldn’t try applying any of this to standards until we have everything laid out with very precise rules about what to apply when and where.  I disagree.  Sometimes, the only way to figure some of it out is to jump in, have the discussions and do some work.

Standards processes, and even the bodies themselves, have evolved over time, and we should continue to adapt where we see things that aren’t working.  It won’t be perfect – reality never is – it’s messy, and that’s ok.

Very special thanks to Brendan Eich and Bruce Lawson for kindly reviewing and providing useful thoughts, commentary and corrections ❤.

A Web for the Next Century


The Web Platform
Chapter 1.

1: In the beginning, Tim created the Web.
2: And the platform was without form, and void; and confusion was upon the face of the Internet. And the mind of Tim moved upon the face of the problem.
3: And Tim said, Let there be a new protocol: and there was HTTP.
4: And Tim saw the protocol, that it was good: and he divided the network by domains and subdomains.
5: And he called the network the World Wide Web.
6: And Tim said, Let there be a browser for pages delivered by this Web, that they might be viewed.
7: And it was so.
8: And Tim separated the structure of the content from its style.
9: And the structured content he called HTML and the means of styling he called CSS. And he saw that it was good.
10: And Tim said, Let us describe this structured content in the form of a tree and make it scriptable, and it was so.
11: And from the dust of the Interwebs were created developers, to whom he gave dominion over the platform.

If you’ve read any of the numerous articles about The Extensible Web, heard about it in conference presentations, or seen The Extensible Web Manifesto, you’ve likely seen (or heard) three phrases repeated: “Explain the magic,” “fundamental primitives” and “evolution of the platform”.  I thought it might be worth (another) piece explaining why I think these are at the heart of it all…

For thousands of years the commonly accepted answer to the question “where did dolphins come from?” (or sharks, or giraffes, or people) was essentially that they were specially created in their current form by a deity, as part of a complex and perfect plan.  Almost all cultures had some kind of creation myth to explain the complex, high-level things they couldn’t understand.

Turns out that this very simplified view was wrong (as is much of the cute creation myth I’ve created for the Web Platform) and I’d like to use this metaphor a bit to explain…

Creation and Evolution: Concrete and Abstract

It’s certainly clear that Sir Tim’s particular mix of ideas became the dominant paradigm:  We don’t spend a lot of time talking about SGML or Gopher.

It seems straightforward enough to think of the mix of ideas that made up the original Web as evolutionary raw material, and to think of users as providing some kind of fitness function in which it became the dominant species/paradigm, but that is a pretty abstract thing and misses a subtle but, I think, important distinction.

The Web Platform/Web browsers are not an idea, they are now a concrete thing.  The initial creation of the Web was an act of special creation – engineering that introduced not just new ideas, but new software and infrastructure.  The Web is probably the grandest effort in the history of mankind – browsers as a technology outstrip any operating system or virtual machine in terms of ubiquity, and they are increasingly capable systems.  There are many new systems with concrete ideas to supplant the Web browser and replace it with something new.  People are asking themselves:  Is it even possible for the Web to hang on?  Replacing it is no easy task, technically or socially – this is a huge advantage for the Web.  So how do we make it thrive?  Not just today, but years from now?

Some more history…

In Tim’s original creation, HTTP supported only GET; in HTML there were no forms, no images, no separate idea of style.  There was no DOM and no async requests – indeed, there was no script.  Style was a pretty loosely defined thing – there wasn’t much of it – and CSS wasn’t a thing.  There was just: GET me that very simple HTML document (markup which was mediocre at displaying text) when I give you a URL, display it, and make sure there is this special concept of a “link”.

This is at the heart of what we have today, but it is not nearly all of it:  What we have today has become an advanced Platform – so how did we get here?  Interestingly, there are two roads we’ve followed at different times – and it is worth contrasting them.

In some cases, we’ve gone off and created entirely new high-level ideas like CSS or AppCache which were, well, magic.  That is, they did very, very complex things and provided a high-level, declarative API designed to solve very specific use cases.  At other times (with DOM, XMLHttpRequest and CSSOM, for example) we have explained some of the underlying magic by taking some of those high-level APIs and providing imperative ones.
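As a simplified illustration of what “explaining some of the underlying magic with imperative APIs” means in practice: the same request-a-document work that a link or form triggers declaratively can be expressed, one step at a time, with XMLHttpRequest.  The URL and element id below are placeholders of my own.

```typescript
// The imperative XMLHttpRequest API exposes a piece of what links and forms
// have always done declaratively: issue an HTTP request and do something
// with the response - except here, script decides what "something" is.
const xhr = new XMLHttpRequest();
xhr.open("GET", "/fragment.html");          // placeholder URL
xhr.onload = () => {
  // Instead of navigating the whole page, inject the markup where we choose.
  document.getElementById("target")!.innerHTML = xhr.responseText;
};
xhr.send();
```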

Looking at those lists, it seems to me that were it not for those small efforts to explain some of the magic, the Web would already be lost by now.

Creating a Platform for the Next 100 Years

The real strength of life itself derives from the fact that it is not specifically designed to perfectly fill one particular niche; rather, complex pressures at a high level judge relatively minor variance at a low level, and this simple process inevitably yields the spread of things that are highly adaptive and able to survive changes in those complex pressures.

Sir Tim Berners-Lee couldn’t have foreseen iPhones and Retina displays, and had he been able to account for them in his original designs, the environment itself (that is, users who choose to use or author for the Web) would likely have rejected it.  Such are the complex pressures changing our system, and we could learn something from nature and from the history of technology here:  Perfectly designed things are often not the same as “really widely used” things, and either can be really inflexible to change.

Explaining the magic means digging away at the capabilities that underlie this amazing system and describing their relationships to one another in order to add adaptability (extensibility).  At the bottom are a number of necessary and fundamental primitives that only the platform (the browser, generally) can provide.  When we think about adding something new, let’s try to explain it “all the way down” until we reach a fundamental primitive, and then work up.

All of this allows for small mutations – new things which can compete for a niche in the very real world – and, unlike academic and closed committees, can help create new, high-level abstractions based on real, verified shared need, acceptance and understanding.  In other words, we will have a Platform which, like life itself, is highly adaptive, able to survive complex changes in pressures, and able to last beyond any of our lifetimes.

Extend The Web Forward: This is an intervention…

Today marks what I hope will be a turning point in the way we deal with standards, one which will usher in a new wave of innovative enterprise on the Web, a smoother and more sensible process, and better architecture.  It’s no small dream.

This morning, along with 19 other signatories representative of a broad coalition of individuals across standards bodies, organizations, browser vendors, groups and library creators, I helped to launch The Extensible Web Manifesto. In it, we outline four core principles for change that will create a more robust Web Platform and describe their value.  For a more complete introduction, and reasons behind it, see Yehuda Katz’s explanatory post.

We hope you will join us in creating a movement to #extendthewebforward. If you agree, pass it along and help us spread the word.

Brian Kardell
Chair, W3C Extensible Web Community Group

W3C Extensible Web Community Group

The Web requires stability and a high degree of client ubiquity; that’s why the W3C is important. However, that’s also part of the reason that standards develop at a comparatively (sometimes seemingly painfully) slow rate: They take time, deliberation, testing and mutual agreement between a number of experts and organizations with significantly different interests.

While we frequently complain, that is actually a Really Good Thing™. Standards take a while to “cook” and the Web would be a real mess if every idea became a native standard haphazardly and with great speed:  Native implementations have a way of being hard to kill (or change significantly) once the public has free access to them – the general mantra is “don’t break it” – and that makes iteration/significant evolution artificially harder than it needs to be.  It seems that the pressures and incentives are mixed up.

The W3C Extensible Web Community Group was founded with a vision toward supplementing the traditional model with a new one which we believe is a better path forward in most cases.

Polyfills and Prollyfills…

Polyfills are a well-known concept, and “polyfill” is an excellent, visualizable name for it:  Something that “fills the holes” here and there in browsers which are behind the curve with regard to implementation of a pretty mature spec.  However, since the term was coined by Remy Sharp a few years ago, the practice has become increasingly popular and its meaning has become somewhat diluted.
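For anyone who hasn’t written one, a polyfill in this original sense is small and conditional – it detects a missing, already-specified feature and fills only that hole, deferring to the native implementation everywhere it exists.  A minimal sketch, using Array.prototype.includes purely as an example and deliberately simplified:

```typescript
// Classic polyfill pattern: only fill the hole when the native feature is missing.
if (!Array.prototype.includes) {
  Object.defineProperty(Array.prototype, "includes", {
    configurable: true,
    writable: true,
    // Simplified: ignores the optional fromIndex argument and NaN handling.
    value: function includes(this: unknown[], searchElement: unknown): boolean {
      return Array.prototype.indexOf.call(this, searchElement) !== -1;
    },
  });
}
```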

No longer is it the case that they are merely “filling in a few holes” in implementations based on mature specs – more and more often, it is as if they are building whole new annexes based on a napkin sketch.  Within a very short period of time from the first announcement of a draft, we have some “fills” that provide an implementation.

Given this contrast, it seems we could use a name which differentiates between the two concepts.  For a while now a few of us have been using different terms trying to describe it.  I’ve written about the concept in the past, and it is the subject of Boris Smus’ excellent article How the Web Should Work.  Until recently the best term we could come up with was “forward polyfill”.  Then, on October 12, 2012, Alex Sexton coined the term “prollyfill” on Twitter:

“Prollyfill: a polyfill for a not yet standardized API” – @SlexAxton, October 12, 2012

The Benefits of Prollyfilling

One thing is clear about the idea of prollyfills:  If we get them “right”, they could radically improve the traditional model for standards evolution, because they have a few very important benefits by their very nature.  Most of this benefit comes from the simple decoupling from a browser release itself.  Since lots more people can contribute to their creation and you only need one implementation, they can be developed with much greater speed and by a larger community.  Such an approach also puts the author of a site in control of what is and isn’t supported.  In the traditional model, an author has no ability to change the speed at which native features are implemented, nor which browser users will view the site with.  Using prollyfills could allow the author to rely on JavaScript to implement the features needed, with only degraded performance, which is a huge advantage in general.  An even bigger advantage in that scenario is that it allows multiple competing APIs, or even variant APIs, to co-exist while we round the edges and iterate, because the author can choose which one they are going to use – that is pretty much impossible with the traditional model.  Iteration in the traditional model is likely to cause breakage, which is a deterrent to use and therefore severely limiting in terms of how many use cases will be considered or how many authors can give meaningful feedback.  Iteration in this model breaks nothing: APIs and drafts can compete for users and get lots of feedback, and ultimately, if there is a clear winner, it is evident from actual data before native code ever has to be written.
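To illustrate the decoupling and author control described above, here is a deliberately hypothetical sketch – every name in it is invented for illustration and is not a real proposal – of how two competing experimental APIs could co-exist on the author’s terms:

```typescript
// Hypothetical sketch: two competing, clearly-experimental drafts can coexist
// because the page author explicitly opts into one. "QueryObserverProposal",
// "proposalA"/"proposalB" and "xObserveQuery" are all invented names.
interface QueryObserverProposal {
  observe(selector: string, callback: (el: Element) => void): void;
}

const proposals: Record<string, QueryObserverProposal> = {
  proposalA: { observe: (sel, cb) => document.querySelectorAll(sel).forEach(cb) },
  proposalB: { observe: (_sel, _cb) => { /* a competing strategy could live here */ } },
};

// The author, not the browser release cycle, decides which draft this page uses;
// switching or abandoning a draft breaks nothing outside pages that opted in.
const xObserveQuery = proposals["proposalA"];
xObserveQuery.observe(".card", (el) => el.setAttribute("data-enhanced", "true"));
```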

The value of the ability to compete and iterate freely should not be underestimated – it is key to successful evolution.  Take, for example, the “nth” family of selectors.  They took a long time to come to be, and it would appear that most people aren’t especially happy with them – they aren’t often used and they are often misunderstood.  Given the ability to create prollyfills for selector pseudo-classes, it is unlikely that this is the means of accomplishing those use cases that would have won out, yet limited resources ultimately spent an awful lot of time on working out details, drafting prose, presenting, getting agreement, implementing, going through processes about prefixing, optimizing implementations, etc.  In fact, the concept of nth was being discussed at the W3C at least as early as 1999, and parts of what was discussed were punted to CSS Selectors Level 4.  There was a definite effort to “answer all of the nth questions now,” and while what we have might be an “academically really good answer”, it’s hard to argue that it’s really caught on and been as useful as some other things.  It’s easy to speculate about what might have been better, but the truth is, we’ll never know.

The truth is, currently a very small number of people at W3C do an enormous amount of work.  As of the most recent TPAC, the W3C CSS Working Group alone currently has 58 “in process” drafts and only a handful of editors.   This means that a significant number of them are going to be de-prioritized for now in order to focus on the ones with the most support or that are further along and new ideas probably won’t be undertaken for a time.  While they are trying to streamline the process, it does tend to go in fits and starts like this… Without a doubt, several of those 58 will be years and years in the making.

If, instead, these same individuals could participate in a less demanding, more advisory fashion with a satellite group of Web developers submitting experimental work, and after a certain threshold could take over a fairly robust and reasonably mature draft, it is easy to imagine that things could evolve faster and those resources could focus a lot more.  Of course, browser manufacturers and standards bodies could participate in the same manner:  Adobe recently followed this model with their draft proposal for CSS Regions, Mozilla is working on x-tags, which implements part of the still very early Web Components API via script, and, as a general practice, the ECMA team does this pretty often.  These are very positive developments.

The Challenges of *lyfilling.

Currently, every fill is generally implemented as a “from the ground up” undertaking, despite the fact that there is potentially a lot of overlap in the sorts of things you need in order to implement them.  For example:  If you are filling something in CSS, you have to (at a minimum) parse CSS.  If you are filling a selector pseudo-class, you’ve got a lot of figuring out, plumbing and work to do to make it function and be efficient enough.  If you are filling a property, it potentially has a lot of overlap with the selector bit.  Or perhaps you have a completely new proposal that is merely “based on” what is essentially a subset of some existing Web technology, like Tab Atkins’ Cascading Attribute Sheets – it’s pretty hard to start at square one, and that means that these fills are often comparatively low fidelity.
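To show why that plumbing is so costly, here is a deliberately naive sketch of filling a made-up selector pseudo-class (“:--unsupported” is hypothetical, as is the substitute class name).  Because browsers drop rules they can’t parse, a fill has to fetch and scan the stylesheet text itself – and leaning on a regex instead of a real CSS parser is exactly the kind of shortcut that makes these fills comparatively low fidelity.

```typescript
// Naive sketch: find rules using a hypothetical ":--unsupported" pseudo-class
// in the raw stylesheet text, rewrite them against a class the fill can toggle,
// and inject the rewritten rules. A real fill would need an actual CSS parser.
async function fillPseudo(stylesheetUrl: string): Promise<void> {
  const cssText = await (await fetch(stylesheetUrl)).text();
  const ruleRe = /([^{}]+):--unsupported\s*\{([^}]*)\}/g;  // regex "parsing" = low fidelity
  let rewritten = "";
  for (const match of cssText.matchAll(ruleRe)) {
    const [, baseSelector, body] = match;
    rewritten += `${baseSelector.trim()}.__unsupported { ${body} }\n`;
  }
  const style = document.createElement("style");
  style.textContent = rewritten;
  document.head.appendChild(style);
  // ...the rest of the fill would watch the DOM and toggle the .__unsupported
  // class on the right elements, which is where most of the plumbing lives.
}
```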

Another challenge with polyfills and prollyfills is how to write them effectively (best practices), how to make them discoverable, and how to communicate the degree of parity they provide with a draft and their method/degree of forward compatibility.

Charting the Uncharted Waters

It seems clear that we could do with some cooperation, and potentially some robust, well-thought-out and tested prollyfills for native APIs, which would make some of this easier.

There really is nothing like “caniuse” for polyfills, detailing compatibility, parity with a draft, or method/degree of forward compatibility.  Likewise, there is no such thing for prollyfills – nor is there a “W3C” sort of organization where you can post your proposal, discuss it, get people to look at or contribute to your prose and examples, ask questions, etc.  There is no group creating and maintaining test cases, or helping to collect data and work with willing W3C members to help make these things real or prioritized in any way.

In short, there is no community acting as an organized group interested in this subject.  These are gaps we hope to fill (pun intended) with the W3C Extensible Web Community Group.  We’re just getting started, but we have created a GitHub organization and registered prollyfill.org.  Participate, or just follow along in the conversations on our public mailing list.