Extend The Web Forward: This is an intervention…

Today marks what I hope will be a turning point in the way we deal with standards, one that will usher in a new wave of innovative enterprise on the Web, a smoother and more sensible process, and a better architecture.  It’s no small dream.

This morning, along with 19 other signatories representing a broad coalition of individuals from standards bodies, organizations, browser vendors, groups and library creators, I helped to launch The Extensible Web Manifesto. In it, we outline four core principles for change that will create a more robust Web Platform and describe their value.  For a more complete introduction and the reasoning behind it, see Yehuda Katz’s explanatory post.

We hope you will join us in creating a movement to #extendthewebforward. If you agree, pass it along and help us spread the word.

Brian Kardell
Chair, W3C Extensible Web Community Group

Off With Their Heads: Disband the W3C?

[Illustration: John Tenniel’s Red Queen with Alice]

Just a few days ago, Matthew Butterick presented a talk entitled “The Bomb in the Garden” at TYPO in San Francisco (poor title timing given recent events in Boston). In it, he concludes: “the misery exists because of the W3C—the World Wide Web Consortium… So, respectfully, but quite seriously, I suggest: let’s Disband the W3C.” Ultimately he suggests that “...the alternative is a web that’s organized entirely as a set of open-source software projects.”

Butterick’s Points:

  • It takes a really long time for standards to reach Recommendation status (“the Web is 20 years old”)
  • The W3C doesn’t enforce standards
  • Browser vendors eventually implement the same standards differently
  • We fill pages with hacks and black magic to make it work
  • Ultimately, what we wind up with still isn’t nearly good enough
  • There is no good revenue model
  • Newspaper and magazine sites all look roughly the same and are somewhat ‘low design’.

His presentation is definitely interesting and worth a read/view. In general, if you have been working on the Web a long time, you will probably experience at least some moments where you can completely relate to what he is saying.

Still, it seems a little Red Queen/over-the-top to me so I hope you’ll humor a little Alice in Wonderland themed commentary…

Why is a Raven Like a Writing Desk?

Michael Smith (@sideshowbarker to some) replied with some thoughts on the W3C Blog in a post entitled “Getting agreements is hard (some thoughts on Matthew Butterick’s ‘The Bomb in the Garden’ talk at TYPO San Francisco)”, in which he points out, in short bullet-list form, several ways in which Butterick’s statements misportray the W3C. The post is short enough and already bulleted, so I won’t summarize it here; instead, I encourage you to go have a read yourself.  He closes with the point that “Nowhere in Matthew Butterick’s talk is there a real proposal for how we could get agreements any quicker or easier or less painfully than we do now by following the current standards-development process.” (emphasis mine).

Indeed, the open source projects mentioned by Butterick are about as much like standards as a raven is like a writing desk and, in my opinion, to replace a standards body with a vague “bunch of open source projects” would send us down a nasty rabbit hole (or through the looking glass) into a confusing and disorienting world: Curiouser and curiouser.

“Would you tell me, please, which way I ought to go from here?”
“That depends a good deal on where you want to get to.”
“I don’t much care where –”
“Then it doesn’t matter which way you go.”
― Lewis Carroll, Alice in Wonderland

Still, I don’t think Butterick really means it quite so literally.  After all, he holds up PDF as an ISO standard that “just works”, and ISO is anything but an open source project like WordPress.  In fact, some of the same charges could be laid against PDF and ISO.  For example, from the ISO website:

Are ISO standards mandatory?

ISO standards are voluntary. ISO is a non-governmental organization and it has no power to enforce the implementation of the standards it develops.

It seems to me that ISO and the W3C have a whole lot more in common than not:  standards are proposed by stakeholders, they go before technical committees, they have mailing lists and working groups, they have to reach consensus, and so on.  Most of this is stated in Michael’s post.  Additionally, though, all PDF readers are not alike either: different readers have different levels of support for reflow, and there is a separate thing called “PDF/A” which extends the standard (it isn’t the only extension) and adds DRM (make it expensive?).  Some readers/authors can handle links to places outside the file, some can’t.  Some can contain comments or markings added by reviewers, others can’t.  And so on.

You used to be much more…”muchier.”

I think that, instead, Butterick is simply (over) expressing his frustration and loss of hope in the W3C:  they’ve lost their “muchness”.  You know what?  It really does suck that we have experienced all of this pain, and to be honest, Butterick’s technical examples aren’t even scratching the surface.  After 20 years, you’d really think we’d be a little further along.

“I can’t go back to yesterday because I was a different person then.”
― Lewis Carroll, Alice in Wonderland

A lot of the pain we’ve experienced is due to a really big detour in the history of Internet standards: the ones we really use and care about were basically put on hold while effort mostly went toward “something else”.  Precisely which something else would have made the Web super awesome is a little fuzzy, but whatever it was, you could bet that it would have contained at least one of the letters “x”, “m” or “l” and lots of “<”s and “>”s.  The browser maker with the largest market share disbanded their team and another major one split up.  It got so contentious at one point that the WHATWG was established to carry on the specs that the W3C was abandoning.

Re-muchifying…

While we can’t go back and fix that now, the question is:  Can we prevent the problems from happening again and work together to make the Web a better place?  I think we can.

“Why, sometimes I’ve believed as many as six impossible things before breakfast.”
― Lewis Carroll, Alice in Wonderland

The W3C is an established standards body with a great infrastructure and all of the members you’d really need to make something happen.  Mozilla CTO Brendan Eich had some good advice in 2004:

What matters to web content authors is user agent market share. The way to crack that nut is not to encourage a few government and big company “easy marks” to go off on a new de-jure standards bender. That will only add to the mix of formats hiding behind firewalls and threatening to leak onto the Internet.

Luckily, it seems that the W3C has learned some important lessons recently.  More has happened to drive Web standards and browser development/interoperability forward in the past 2–3 years than in the previous 6–7 years combined, and more is queued up than I can even wrap my head around.  We have lots of new powers in HTML and lots of new APIs in the DOM and CSS.  We have efforts like Test the Web Forward uncovering interoperability problems, and nearly all browsers are becoming evergreen – pushing out improvements and fixes all the time.  We also recently managed to get some great reformers elected to the W3C Technical Architecture Group who are presenting some great ideas, and partnership and cooperation between the W3C and other standards bodies like ECMA/TC-39 (also making excellent progress) is beginning.   I believe that we can all win with community participation and evolution through ideas like prollyfill.org, which is trying to team up the community with standards groups and implementers to create a more nimble and natural process based on evolutionary and open ideas… Perhaps that sounds like the marriage of open source ideas and standards that Matthew Butterick would be happier with… Maybe I should send him an email.

So what do you think?

“Do you think I’ve gone round the bend?”
“I’m afraid so. You’re mad, bonkers, completely off your head. But I’ll tell you a secret. All the best people are.”
― Lewis Carroll, Alice in Wonderland

W3C Extensible Web Community Group

The Web requires stability and a high degree of client ubiquity; that’s why the W3C is important. However, that’s also part of the reason that standards develop at a comparatively (sometimes seemingly painfully) slow rate: they take time, deliberation, testing and mutual agreement between a number of experts and organizations with significantly different interests.

While we frequently complain, that is actually a Really Good Thing™. Standards take a while to “cook” and the Web would be a real mess if every idea became a native standard haphazardly and with great speed:  Native implementations have a way of being hard to kill (or change significantly) once the public has free access to them – the general mantra is “don’t break it” – and that makes iteration/significant evolution artificially harder than it needs to be.  It seems that the pressures and incentives are mixed up.

The W3C Extensible Web Community Group was founded with a vision toward supplementing the traditional model with a new one which we believe is a better path forward in most cases.

Polyfills and Prollyfills…

Polyfills are a well-known concept with an excellent, visualizable name:  something that “fills the holes” here and there in browsers which are behind the curve with regard to implementing a pretty mature spec.  However, since the term was coined by Remy Sharp a few years ago, the practice has become increasingly popular and the term’s meaning has become somewhat diluted.

No longer are they merely “filling in a few holes” in implementations based on mature specs – more and more often they are building whole new annexes from a napkin sketch.  Within a very short time of the first announcement of a draft, we have “fills” that provide an implementation.
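To keep the contrast clear, here is a minimal sketch of the classic “fill a hole” case – a polyfill for a long-stable feature that simply defers to the native implementation whenever it exists:

// Minimal polyfill sketch: only fill the hole when the native,
// long-standardized feature is actually missing in this browser.
if (!String.prototype.trim) {
  String.prototype.trim = function () {
    // mirror the standard behavior: strip leading and trailing whitespace
    return String(this).replace(/^\s+|\s+$/g, '');
  };
}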

Given this contrast, it seems we could use a name which differentiates between the two concepts.  For a while now a few of us have been using different terms to try to describe it.  I’ve written about the concept in the past and it is the subject of Boris Smus’ excellent article How the Web Should Work.  Until recently the best term we could come up with was “forward polyfill”.  Then, on October 12, 2012, Alex Sexton coined the term “prollyfill” on Twitter:

“Prollyfill: a polyfill for a not yet standardized API” – @SlexAxton, October 12, 2012

The Benefits of Prollyfilling

One thing is clear about the idea of prollyfills:  if we get them “right”, they could radically improve the traditional model for standards evolution, because they have a few very important benefits by their very nature. Most of this benefit comes from simply being decoupled from a browser release.  Since lots more people can contribute to their creation and you only need one implementation, they can be developed with much greater speed and by a larger community.  Such an approach also puts the author of a site in control of what is and isn’t supported.  In the traditional model, an author has no ability to change the speed at which native features are implemented, nor which browser users will view the site with.  Using prollyfills allows the author to rely on JavaScript to implement the features needed, with only degraded performance, and that is a huge advantage in general.

An even bigger advantage, though, is that it allows multiple competing (or variant) APIs to co-exist while we round the edges and iterate, because the author can choose which one they are going to use – that is pretty much impossible in the traditional model.  Iteration in the traditional model is likely to cause breakage, which is a deterrent to use and therefore severely limits how many use cases will be considered and how many authors can give meaningful feedback.   Iteration in this model breaks nothing: APIs and drafts can compete for users and get lots of feedback, and if there is a clear winner it is evident from actual data before native code ever has to be written.
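As a purely illustrative, hedged sketch of that “competing drafts can coexist” point (DraftA and DraftB below are invented names, not real libraries), the author – not the browser – decides which experimental shape to write against:

// Hypothetical: two rival prollyfills implement the same idea under
// different names; the page author picks one and can switch later
// without waiting on any browser release.
var observeChanges =
      (window.DraftA && window.DraftA.observe) ||  // draft A's API shape
      (window.DraftB && window.DraftB.watch);      // draft B's competing shape

if (observeChanges) {
  observeChanges(document.body, function (change) {
    console.log('something changed:', change);
  });
}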

The value of the ability to compete and iterate freely should not be underestimated – it is key to successful evolution.  Take, for example, the “nth” family of selectors.  They took a long time to come to be, and it would appear that most people aren’t especially happy with them – they aren’t often used and they are often misunderstood.  Given the ability to create prollyfills for selector pseudo-classes, it seems unlikely that this particular design is the one that would have won out for those use cases; yet those limited resources ultimately spent an awful lot of time working out details, drafting prose, presenting, getting agreement, implementing, going through processes about prefixing, optimizing implementations, and so on.  In fact, the concept of nth was being discussed at the W3C at least as early as 1999, and parts of what was discussed were punted to CSS Selectors Level 4.  There was a definite effort to “answer all of the nth questions now”, and while what we have might be an “academically really good answer”, it’s hard to argue that it has really caught on or been as useful as some other things.  It’s easy to speculate about what might have been better, but the truth is, we’ll never know.

The truth is, a very small number of people at the W3C currently do an enormous amount of work.  As of the most recent TPAC, the W3C CSS Working Group alone has 58 “in process” drafts and only a handful of editors.   This means that a significant number of them are going to be de-prioritized for now in order to focus on the ones with the most support or that are further along, and new ideas probably won’t be undertaken for a while.  While the group is trying to streamline the process, it does tend to go in fits and starts like this… Without a doubt, several of those 58 will be years and years in the making.

If, instead, these same individuals could participate in a less demanding, more advisory fashion with a satellite group of Web developers submitting experimental work, and after a certain threshold take over a fairly robust and reasonably mature draft, it is easy to imagine that things could evolve faster and those resources could be far more focused.  Of course, browser manufacturers and standards bodies could participate in the same manner:  Adobe recently followed this model with their Draft Proposal on CSS Regions, Mozilla is working on x-tags, which implements part of the still very early Web Components API via script, and as a general practice the ECMA team does this pretty often.  These are very positive developments.

The Challenges of *lyfilling.

Currently every fill is generally implemented as a “from the ground up” undertaking, despite the fact that there is potentially a lot of overlap in the sorts of things you need in order to implement them.  If you are filling something in CSS, for example, you have to (at a minimum) parse CSS.  If you are filling a selector pseudo-class, you have a lot of figuring out, plumbing and work to do to make it function and perform well enough.  If you are filling a property, there is potentially a lot of overlap with the selector bit.  Or perhaps you have a completely new proposal that is merely “based on” essentially a subset of some existing Web technology, like Tab Atkins’ Cascading Attribute Sheets.  It’s pretty hard to start at square one, and that means these fills are often comparatively low fidelity.
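For illustration only – the pseudo-class below is invented, and this is nowhere near what a production fill would need – the kind of plumbing a selector fill ends up writing looks roughly like this, and very little of it is reusable across fills:

// Rough sketch: we can't teach the CSS engine a new pseudo-class, so a fill
// for a hypothetical ':-my-local-link' emulates it by toggling a real class
// and asking authors to write their rules against that class instead.
// A serious fill would also need to parse author stylesheets to find usages.
function fillLocalLinkPseudo() {
  var links = document.querySelectorAll('a[href]');
  for (var i = 0; i < links.length; i++) {
    if (links[i].hostname === window.location.hostname) {
      links[i].className += ' -my-local-link';
    }
  }
}
document.addEventListener('DOMContentLoaded', fillLocalLinkPseudo);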

Another challenge with polyfills and prollyfills is how to write them effectively (best practices), how to make them discoverable, and how to communicate the degree of parity they provide with a draft and their method/degree of forward compatibility.

Charting the Uncharted Waters

It seems clear that we could do with some cooperation, and potentially some robust, well-thought-out and tested prollyfills of native APIs, which would make some of this easier.

There really is nothing like “caniuse” for polyfills, detailing compatibility, parity with a draft, or method/degree of forward compatibility.  Likewise, there is no such thing for prollyfills – nor is there a “W3C”-like organization where you can post your proposal, discuss it, get people to look at and contribute to your prose and examples, ask questions, and so on.  There is no group creating and maintaining test cases, or helping to collect data and working with willing W3C members to help make these things real or prioritized in any way.

In short, there is no community acting as a group interested in this subject.  These are gaps we hope to fill (pun intended) with the W3C Extensible Web Community Group.  We’re just getting started, but we have created a GitHub organization and registered prollyfill.org.  Participate, or just follow along in the conversations on our public mailing list.

Properties: The New Variables

Problematic History

Variables are among the oldest and most frequently requested features in CSS.  For well over a decade, numerous W3C proposals for them have come and gone.  To answer a number of the most common use cases, several preprocessors have sprung up over the years, more recently and most notably LESS and SASS.  Once those were in place, a lot of great ideas were experimented with and, on a few occasions, it even looked like we might be building toward something which might become a standard. But the result in the end has always been the same: an eventual agreement by a lot of members that the things we keep speccing out just don’t “fit” within CSS. Generally, the consensus view has been that these things are, frankly, better left to a preprocessor which can be “compiled” into CSS: it is more efficient (potentially quite a bit), requires no changes to CSS and allows competition of ideas.

New Hope

That is, until recently, when a fortunate confluence of new ideas (like HTML data-* attributes) opened the door to a brand new way of looking at it all, and thus the new draft of CSS Variables was born. The principles laid out in this new draft really do “fit” CSS quite nicely, and it addresses most of the common cases as well as several that preprocessors cannot. Further, it should be reasonably easy to implement, won’t require drastic changes to complex existing implementations, and ultimately should be pretty performant. In short, it really answers all of the concerns that have historically held this up.

Confusion

But… it seems to be causing no end of confusion and debate among people familiar with variables in existing preprocessor-based systems like LESS or SASS. It has all been very dramatic and full of heated debates about why things don’t “seem like” variables and how to make them seem more so.  All of this discussion, however, misses the real point. There is a clear reason for the confusion: what the draft describes as “variables” (largely because of its history, it would seem) is actually entirely unlike any existing concept of preprocessor variables (for the reasons already explained).  Instead, it describes something else entirely: custom properties.

Enter: Custom Properties

When described in terms of “properties” and “values” rather than “variables”, it is actually quite simple not only to understand the new draft without the confusion, but also to see how it fits the CSS model so much better than all of the previous attempts: it provides the means to solve a large number of known use cases and fertile ground for new, innovative ideas.

To this end, at the suggestion of a few folks involved in the ongoing W3C discussions, Francois Remy and I have forked the draft and proposed a rewrite presenting the idea in the more appropriate terms of “custom properties”, rather than continuing to attempt to shoehorn an explanation into the now overloaded idea of “variables”.

You can view the proposal and even fork it yourself on GitHub and suggest changes. As with any draft, it’s full of necessary technical mumbo jumbo that won’t interest a lot of people, but the gist can be explained very simply:

1.  Any property in a CSS rule beginning with the prefix “my-” defines a custom (author-defined) property which can hold any valid CSS value production.  It has no impact on rendering and no meaning at the point of declaration; it simply holds a named value (a stream of tokens).

2. Custom properties work (from the author’s perspective) pretty much like any other CSS property.  They follow the same cascade, calculation and DOM structure inheritance models; however, their values are only resolved when they are applied by reference.

3. Custom properties may be referenced via a function in order to provide a value to another property (or to a function which provides a value). All referencing functions begin with the $ character. Reference functions, like the attr() function, accept an optional second argument: a default/fallback to use in the case where the named value is not present.

A Fun Example…

Putting it all together, you can see an extremely simple example which illustrates some of the features:

/* 
   Set some custom properties specific to media which hold a 
   value representing rgb triplets 
*/
@media all{ 
    .content{
        my-primary-rgb: 30, 60, 120;
        my-secondary-rgb: 120, 80, 20;
     }
}
@media print{ 
     .content{
        my-primary-rgb: 10, 10, 120;
        my-secondary-rgb: 120, 10, 10;
     }
}

/* 
   Reference the values via $()
   The background of nav will be based on primary rgb 
   color with 20% alpha.  Note that the 3 values in the 
   triplet are resolved appropriately as if the tokens 
   were there in the original, not as a single value. 
   The actual values follow the cascade rules of CSS.  
*/
nav{
   background-color:  rgba($(my-primary-rgb), 0.20);
}

/* 
   The background of .foo will be based on primary 
   rgb color with 60% alpha 
*/
.foo{
   background-color:  rgba($(my-primary-rgb), 0.60);
}

/* 
    The foreground color of h1s will be based on the 
    secondary rgb color or red if the h1 isn't inside 
    .content - note an amazing thing here that the 
    optional default can also be any valid value - 
    in this case it is an rgb triplet! 
*/
h1{
 color: rgb($(my-secondary-rgb, 200, 0, 0));
}

Both drafts describe exactly the same thing…

The Blind Architect

In my previous post, Tim Berners-Lee Needs Revision, I laid out a general premise: in the late 90’s, Tim Berners-Lee identified numerous problems, laid out a grand vision for “the next evolution of the Web”, and proposed some fairly specific solutions which would remain in many ways the central point of focus at the W3C for many years to come.  Many (perhaps most) of the solutions put forward by the W3C have failed to take hold (and keep it).  None have changed the world in quite the same way that HTTP/HTML did.  Perhaps more importantly, a good “grand philosophy” for the focus and organization of the W3C – a strategy for helping move the Web infinitely forward – was never really adopted.

In response to my first article, I was asked whether I was asking for (or favored) evolution or revolution.  It is a deceptively simple question, and it makes a good segue…

Generally speaking, it is my contention that in the time following those papers, despite the very best intentions of its creator (whose entire laudable goal seems to have been to provide a model in which the system could evolve more naturally) and despite the earnest efforts of its members (some of whom I interact with and respect greatly), the W3C (perhaps inadvertently) wound up working against a true evolutionary model for the Web rather than for one.

The single most valuable thing that I would like to see is a simple change in focus and philosophy – some of which is already beginning to happen.  More specifically, I would like to propose changes which more readily embrace a few concepts from evolutionary theory, namely competition and failure.   Given the way things have worked for a long time, I am tempted to say that embracing evolution might actually be a revolution.

On Design

As humans (especially as engineers or architects) we like to think of ourselves as eliminators of chaos: intelligent designers of beautifully engineered, efficient solutions to complex problems that need solving.  We like to think that we can create solutions that will change things and last a long time because they are so good.  However, more often than not, and by a very wide margin, we are proven wrong. Statistically speaking, we absolutely suck at that game.

Why?  Very simply:  Reality is messy and ever changing… 

Ultimately, the most beautiful solution in the world is worthless if no one uses it.   History is replete with examples of  elegantly designed solutions that went nowhere and not-so-beautifully designed, but “good enough” solutions that became smash hits.  “Good” and “successful” are not the same thing.  At the end of the day it takes users to make something successful.  By and large, it seems that we are not good judges of what will succeed in today’s environment.  Worse still, the environment itself is constantly changing:  Users’ sense of what is valuable (and how valuable that is) is constantly changing.

Research seems to show that it’s not  just a problem with producers/problem solvers either:  If you ask potential users, it would appear that most don’t actually know what they really want – at least not until they see it.  It’s an incredibly difficult problem in general, but the bigger the problem space, the more difficult it becomes.

What this means to W3C…

Now imagine something the scale of the Web:  how sure am I that the W3C is not exempt from all of the failures and forces listed above and won’t “just get it right”?  100%.

It’s OK.  Embrace it.

Think about this:  the W3C is in a sticky spot.  On the one hand, they are in charge of standards; as such, they frown on things that aren’t standards.  However, change is a given.  How do you change and stay the same simultaneously? The current process of getting from innovation to standard is exceptionally complicated and problematic.

Browser manufacturers are in an even stickier spot.  Worse, the whole process is full of perverse risk/reward scenarios for browser vendors, depending on what is going on at any point in time.  In the end, there is a fuzzy definition of “standard” that to this day remains elusive to the average user.

Think not?  Have you ever considered which browsers your users have, and then used references and shims to determine exactly which parts of which specs were supported in “enough” of the browsers you expected to encounter?  If so, then you see what I mean.
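That ritual, in miniature (the feature tested below is just a stand-in for whatever you happen to need):

// Feature-detect first, then decide whether a script-based shim is needed.
var testInput = document.createElement('input');
var supportsPlaceholder = ('placeholder' in testInput);

if (!supportsPlaceholder) {
  // load or enable a fallback here – the point is that every author
  // ends up owning this decision, per feature, per browser matrix
}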

But What About The Browser Wars?

Much of the above was set up explicitly to avoid a repeat of “the browser wars”.

However, I think that we drew the wrong lessons from that experience.

Was the real problem with the browser wars the rapid innovation and the lack of a more or less single global standard?  Or was it that this made it difficult for authors to write code that worked in any browser?  Is there a difference?  Yes, actually there is, and we know it well in programming.  It is the difference between the DOM and jQuery, between Java and Spring. One is a standard; the other is a library chosen by the author.  Standards are great – important – I fully agree.  The question is how you arrive at them, and evolution through libraries has many distinct advantages.

A lot of the JavaScript APIs that we have natively in the browser today are less than ideal in most authors’ minds, even where they are universally implemented (many still aren’t).  Yet, having them, we do _amazing_ things with them, because in JS we can provide something better on top with libraries.  Libraries compete in the wild – authors choose them, not the browsers or the W3C.  Ideas from libraries get mixed and mutated, and anything that gains a foothold tries to adapt and survive.  After a while, you start to see common themes rise to the top – dominant species.  Selectors are a good example; so is a better event API.  Standards just waiting to be written, right?
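The standard-versus-library contrast above fits in two lines (assuming, for the sake of the example, that jQuery is loaded on the page):

// The same question asked two ways: the standard API the browser ships,
// and the library idiom an author chooses to layer on top of it.
var viaStandard = document.querySelectorAll('nav a.active');  // DOM standard
var viaLibrary  = jQuery('nav a.active');                     // author's chosen library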

In my opinion, the W3C in its current arrangement is unprepared for this, and that is a fundamental problem.   Efforts to capitalize on it haven’t been ideal.  There is hesitancy to “pick a winner” (weird, since browsers are already the winners) or to give credence to one approach over another rather than starting over.  There is always an effort to reconsider the idea holistically as part of the browser and do a big design.   In the end, if we get something, it’s not bad, but it’s usually fundamentally different, and the disparity is confusing – sometimes even unhelpful – to authors trying to improve the code in their existing use cases and models.

It doesn’t have to be that way.  In the end, a lot of recs wind up being detailed accounts of things that went out into the world and permeated a lot, planned or not.  Even many of the new semantic tags were derived from a study of the most-used CSS class names.  There is a lot of potential power in the idea of exploiting this and building a system around it.

There is finally movement on Web Components – that could be something of a game changer with the right model at the W3C, allowing ideas to compete and adapt in the wild, decoupled from the browser, before becoming a standard.  Likewise, some extension abilities could let certain kinds of CSS changes compete in the wild before standardizing too.  If we start thinking about defining simple, natural extension points, I think it shapes how we think about growing standards:  decoupling these advancements from a particular browser helps things grow safely and stay sane – it reduces a lot of risks all around.

(Re)Evolution happens anyway…

Here is one closing example which illustrates that it doesn’t have to be this way, and that evolution will continue to happen even if we fight it.

While the prescriptive standards put forward by the W3C had an advantage, in the long run they are subject to the same forces. I think no idea in history can be said to have had a greater advantage than XML; in many respects, it was the lynchpin of the larger plans. Then a lone guy at a startup noticed a really nice coincidence in the way that all C-family languages think about and express data – and it just happened to be super easy to parse and serialize in JavaScript. He registered a domain, threw out a really basic specification which fits neatly on an index card and a simple JavaScript library, and more or less forgot about it… and JSON was born.
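The “super easy to parse and serialize” claim really is this short (the object below is just an example):

// JSON's appeal in three lines: plain data, serialized and parsed with
// nothing more than what the language already ships with.
var draft = { name: "custom properties", status: "unofficial", open: true };
var wire  = JSON.stringify(draft);  // '{"name":"custom properties","status":"unofficial","open":true}'
var back  = JSON.parse(wire);       // a plain object again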

Technically speaking, it is inferior on a whole number of levels in terms of what it can and cannot express – just ask anyone who worked on XML. But that was all theory. In practice, most people didn’t care about those things so much. JSON had no upfront advantages, except that in the browser some people initially just eval’ed it. No major companies or standards bodies created it or pushed it. Numerous libraries weren’t already available for every programming language known to man.

The thing is, people liked it… Now there are implementations in every language, and now it is a recognized standard – a standard because it was defined in ECMA, not the W3C.  It was scrutinized and pushed and pulled on the edges – but that’s what standards committees are really good at.  Now it is natively supported in Web browsers, safely (and more efficiently), and in an increasing number of other programming languages as well.  Now it is inspiring new ideas to address things originally addressed by XSL, XPath and RDF.  Maybe someday it will fall out of favor to some spinoff with better advantages for its time.  That’s reality, and it’s OK.  The important thing to note is that it achieved this because it was good enough to survive – not because it was perfect.

In 1986 Richard Dawkins elegantly wrote “The Blind Watchmaker” to explain how the simple process of evolution creates beautiful solutions to complex problems:  Blindly – it doesn’t have a greater plan for twelve steps down the road in mind.