W3C Extensible Web Community Group

The Web requires stability and a high degree of client ubiquity; that’s why the W3C is important. However, that’s also part of the reason that standards develop at a comparatively (sometimes seemingly painfully) slow rate: they take time, deliberation, testing and mutual agreement among a number of experts and organizations with significantly different interests.

While we frequently complain, that is actually a Really Good Thing™. Standards take a while to “cook”, and the Web would be a real mess if every idea became a native standard haphazardly and at great speed: native implementations have a way of being hard to kill (or change significantly) once the public has free access to them – the general mantra is “don’t break it” – and that makes iteration and significant evolution artificially harder than it needs to be. It seems that the pressures and incentives are mixed up.

The W3C Extensible Web Community Group was founded with a vision toward supplementing the traditional model with a new one which we believe is a better path forward in most cases.

Polyfills and Prollyfills…

Polyfills are a well-known concept, and the name is excellent and visualizable: a polyfill “fills the holes” here and there in browsers which are behind the curve with regard to implementation of a pretty mature spec. However, since Remy Sharp coined the term a few years ago, the practice has become increasingly popular and the term’s meaning has become somewhat diluted.

No longer are fills merely “filling in a few holes” in implementations of mature specs – more and more often they are building whole new annexes based on a napkin sketch. Within a very short time of the first announcement of a draft, we have “fills” that provide an implementation.

Given this contrast, it seems we could use a name which differentiates between the two concepts. For a while now a few of us have been using different terms to try to describe it. I’ve written about the concept in the past, and it is the subject of Boris Smus’ excellent article How the Web Should Work. Until recently the best term we could come up with was “forward polyfill”. Then, on October 12, 2012, Alex Sexton coined the term “prollyfill” on Twitter:

“@SlexAxton: Prollyfill: a polyfill for a not yet standardized API”

The Benefits of Prollyfilling

One thing is clear about the idea of prollyfills: if we get them “right”, they could radically improve the traditional model for standards evolution, because by their very nature they have a few very important benefits. Most of these come from simply being decoupled from browser releases. Since many more people can contribute to their creation and only one implementation is needed, they can be developed with much greater speed and by a larger community. Such an approach also puts the author of a site in control of what is and isn’t supported. In the traditional model, an author has no ability to change the speed at which native features are implemented, nor which browser users will view the site with. Using prollyfills allows the author to rely on JavaScript to implement the features needed, at the cost of only some performance – a huge advantage in general.

An even bigger advantage is that this allows multiple competing APIs, or even variants of one API, to co-exist while we round the edges and iterate, because the author can choose which one to use – something that is pretty much impossible in the traditional model. Iteration in the traditional model is likely to cause breakage, which deters use and therefore severely limits how many use cases are considered and how many authors can give meaningful feedback. Iteration in this model breaks nothing: APIs and drafts can compete for users and gather lots of feedback, and if there is a clear winner, it is evident from actual data before native code ever has to be written.
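
That author control is just ordinary feature detection. A minimal sketch of the idea (the API and script names here are hypothetical, not from any real draft):

if (!window.someProposedAPI) {
    // No native implementation: load a script-based one instead.
    // Slower than native, but the feature works today, everywhere.
    var script = document.createElement('script');
    script.src = '/js/some-proposal-prollyfill.js';
    document.head.appendChild(script);
}
// Either way, page code is written against the proposed API, and
// swapping in a competing fill is a one-line change.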

The value of the ability to compete and iterate freely should not be under-estimated – it is key to successful evolution. Take, for example, the “nth” family of selectors. They took a long time to come to be, and it would appear that most people aren’t especially happy with them – they aren’t often used and they are often misunderstood. Had we had the ability to create prollyfills for selector pseudo-classes, it seems unlikely that this design is the one that would have won out; yet those limited resources ultimately spent an awful lot of time working out details, drafting prose, presenting, getting agreement, implementing, going through processes about prefixing, optimizing implementations, etc. In fact, the concept of nth was being discussed at the W3C at least as early as 1999, and parts of what was discussed were punted to CSS Selectors Level 4. There was a definite effort to “answer all of the nth questions now,” and while what we have might be an “academically really good answer”, it’s hard to argue that it has really caught on or been as useful as some other things. It’s easy to speculate about what might have been better, but the truth is, we’ll never know.

The truth is, a very small number of people at the W3C currently do an enormous amount of work. As of the most recent TPAC, the W3C CSS Working Group alone has 58 “in process” drafts and only a handful of editors. This means that a significant number of them are going to be de-prioritized for now in order to focus on the ones with the most support or that are furthest along, and new ideas probably won’t be undertaken for a time. While the group is trying to streamline the process, it does tend to go in fits and starts like this… Without a doubt, several of those 58 will be years and years in the making.

If, instead, these same individuals could participate in a less demanding, more advisory fashion with a satellite group of Web developers submitting experimental works, and after a certain threshold could take over a fairly robust and reasonably mature draft, it is easy to imagine that things could evolve faster and those resources could focus a lot more. Of course, browser manufacturers and standards bodies could participate in the same manner: Adobe recently followed this model with their Draft Proposal on CSS Regions, Mozilla is working on x-tags, which implements part of the still very early Web Components API via script, and as a general practice the ECMA team does this pretty often. These are very positive developments.

The Challenges of *lyfilling

Currently every fill is generally implemented as a “from the ground up” undertaking, despite the fact that there is potentially a lot of overlap in the sorts of things you need to implement them. If you are filling something in CSS, for example, you have to (at a minimum) parse CSS. If you are filling a selector pseudo-class, you’ve got a lot of figuring out and plumbing to do to make it work and be efficient enough. If you are filling a property, that potentially has a lot of overlap with the selector bit. Or perhaps you have a completely new proposal that is merely “based on” what is essentially a subset of some existing Web technology, like Tab Atkins’ Cascading Attribute Sheets. It’s pretty hard to start at square one, and that means these fills are often comparatively low fidelity.
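
To make the plumbing point concrete, here is a rough sketch of about the smallest possible selector pseudo-class fill. The :-my-local-link pseudo-class is hypothetical (say, links pointing at the current origin), and the approach is deliberately naive:

// Rewrite the unknown pseudo-class into a plain class the engine
// understands, then tag the elements that should match it.
var PSEUDO = /:-my-local-link\b/g;

var styles = document.querySelectorAll('style');
for (var i = 0; i < styles.length; i++) {
    styles[i].textContent =
        styles[i].textContent.replace(PSEUDO, '.-my-local-link');
}

var links = document.querySelectorAll('a[href]');
for (var j = 0; j < links.length; j++) {
    if (links[j].host === location.host) {
        links[j].className += ' -my-local-link';
    }
}

Even this toy shows why fidelity suffers when everyone starts at square one: a regular expression stands in for a real CSS parser, external stylesheets are ignored, and links added to the document later are never tagged.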

Another challenge with polyfills and prollyfills is how to write them effectively (best practices), how to make them discoverable, and how to communicate what needs to be communicated: the degree of parity they provide with a draft, and their method and degree of forward compatibility.

Charting the Uncharted Waters

It seems clear that we could do with some cooperation, and potentially with some robust, well-thought-out and tested prollyfills for native APIs that would make some of this easier.

There really is nothing like “caniuse” for polyfills, detailing compatibility, parity with a draft, or method/degree of forward compatibility. Likewise, there is no such thing for prollyfills – nor is there a “W3C” sort of organization where you can post your proposal, discuss it, get people to look at and contribute to your prose and examples, ask questions, etc. There is no group creating and maintaining test cases, or helping to collect data and work with willing W3C members to make these things real or prioritized in any way.

In short, there is no community acting as a group interested in this subject. These are gaps we hope to fill (pun intended) with the W3C Extensible Web Community Group. We’re just getting started, but we have created a GitHub organization and registered prollyfill.org. Participate, or just follow along with the conversations on our public mailing list.

Properties: The New Variables

Problematic History

Variables are among the oldest and most often requested features in CSS. For well over a decade, numerous W3C proposals for them have come and gone. To answer a number of the most common use cases, several preprocessors have sprung up over the years – most recently and notably LESS and SASS. Once those were in place, a lot of great ideas were experimented with, and on a few occasions it even looked like we might just be building toward something which might become a standard. But the end result has always been the same: an eventual agreement by a lot of members that the things we keep speccing out just don’t “fit” within CSS. Generally, the consensus view has been that these things are, frankly, better left to a preprocessor which can be “compiled” into CSS: it is more efficient (potentially quite a bit), requires no changes to CSS, and allows competition of ideas.

New Hope

That is, until recently, when a fortunate confluence of new ideas (like HTML data-* attributes) opened the door to a brand new way of looking at it all, and thus was born the new draft of CSS Variables. The principles laid out in this new draft really do “fit” CSS quite nicely, and it addresses most of the common cases as well as several that preprocessors cannot. Further, it should be reasonably easy to implement, won’t require drastic changes to complex existing implementations, and ultimately should be pretty performant. In short, it really answers all of the concerns that have historically held variables up.

Confusion

But… it seems to be causing no end of confusion and debate among people familiar with variables in existing preprocessor-based systems like LESS or SASS. It has all been very dramatic and full of heated debate about why things don’t “seem like” variables and how to make them seem more so. All of this discussion, however, misses the real point. There is a clear reason for the confusion: what the draft describes as “variables” (largely because of its history, it would seem) is entirely unlike any existing concept of preprocessor variables (for the reasons already explained). Instead, it describes something else entirely: custom properties.

Enter: Custom Properties

When described in terms of “properties” and “values” rather than “variables”, it is actually quite simple not only to understand the new draft without the confusion, but also to see how it fits the CSS model so much better than all of the previous attempts – it not only provides a means to solve a large number of known use cases, but also provides fertile ground for innovative new ideas.

To this end, at the suggestion of a few folks involved in the ongoing W3C discussions, Francois Remy and I have forked the draft and proposed a rewrite presenting the idea in the more appropriate terms of “custom properties”, instead of continuing to attempt to shoe-horn an explanation into the now overloaded idea of “variables”.

You can view the proposal on GitHub, fork it yourself, and suggest changes. As with any draft, it’s full of necessary technical mumbo jumbo that won’t interest a lot of people, but the gist can be explained very simply:

1. Any property in a CSS rule beginning with the prefix “my-” defines a custom (author-defined) property, which can hold any valid CSS value production. It has no impact on rendering and no meaning at the point of declaration; it simply holds a named value (a series of tokens).

2. Custom properties work (from the author’s perspective) pretty much like any other CSS property. They follow the same cascade, calculation and DOM-structure inheritance models; however, their values are only resolved when they are applied by reference.

3. Custom properties may be referenced via a function in order to provide a value to another property (or to a function which provides a value). All referencing functions begin with the $ character. Reference functions, like the attr() function, take an optional second argument: a default/fallback to use in the case where the named value is not present.

A Fun Example…

Putting it all together, you can see an extremely simple example which illustrates some of the features:

/* 
   Set some custom properties specific to media which hold a 
   value representing rgb triplets 
*/
@media all {
    .content {
        my-primary-rgb: 30, 60, 120;
        my-secondary-rgb: 120, 80, 20;
    }
}
@media print {
    .content {
        my-primary-rgb: 10, 10, 120;
        my-secondary-rgb: 120, 10, 10;
    }
}

/* 
   Reference the values via $()
   The background of nav will be based on primary rgb 
   color with 20% alpha.  Note that the 3 values in the 
   triplet are resolved appropriately as if the tokens 
   were there in the original, not as a single value. 
   The actual values follow the cascade rules of CSS.  
*/
nav {
    background-color: rgba($(my-primary-rgb), 0.20);
}

/*
   The background of .foo will be based on primary
   rgb color with 60% alpha
*/
.foo {
    background-color: rgba($(my-primary-rgb), 0.60);
}

/*
   The foreground color of h1s will be based on the
   secondary rgb color, or red if the h1 isn't inside
   .content - note an amazing thing here: the
   optional default can also be any valid value -
   in this case it is an rgb triplet!
*/
h1 {
    color: rgb($(my-secondary-rgb, 200, 0, 0));
}
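
Fittingly, this gist is also the kind of thing the first half of this post argues for: it can be prollyfilled. Below is a toy sketch of just the reference-substitution step (the substitute() helper is hypothetical, not part of either draft); a real fill would also have to work out which my-* declarations win the cascade for a given element:

// Replace $(name) / $(name, fallback) references in a value, given
// the custom properties that apply to the element (cascade already
// resolved into a plain name -> value map).
function substitute(value, custom) {
    return value.replace(
        /\$\(\s*(my-[a-z0-9-]+)\s*(?:,\s*([^)]+))?\)/gi,
        function (whole, name, fallback) {
            if (custom.hasOwnProperty(name)) return custom[name];
            return fallback !== undefined ? fallback : whole;
        });
}

// For the nav rule above:
//   substitute('rgba($(my-primary-rgb), 0.20)',
//              { 'my-primary-rgb': '30, 60, 120' })
// returns 'rgba(30, 60, 120, 0.20)'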

Both drafts describe exactly the same thing…