The Future Web Wants You.

A few weeks ago I gave a talk in Pittsburgh, PA at Code and Supply. It was recorded if you would rather watch the video (the actual talk is ~40 minutes; the video captures some of the Q&A afterward as well), but I’m more comfortable writing, and I thought it might be worth a stab at a companion piece that makes the same points and arguments at blog-post size – so here it is.


A photo of me in the mid-1990s dreaming about standards.

In the mid-1990s, I didn’t really know much about standards, but it seemed obvious that they existed.  So, when I heard that there was going to be a “standards organization” set up to bring together all sorts of powerful tech interests, and that it would be led by Tim Berners-Lee (creator of the Web) himself, it didn’t take much more to win my confidence.

I suppose I imagined something between Michelangelo’s Creation panel on the Sistine Chapel ceiling and Raphael’s School of Athens. Somewhere, perhaps high in a tower on a proverbial Mount Olympus, the gods of programming – wise and benevolent beings – would debate and design, and the outcome would be an elegant, simple and beautiful solution. They would show us the One True Way. For all I imagined, specifications might have been handed down on tablets of stone.  The future was bright. They would lead me to the promised land.


My imagination of what a W3C meeting was like.

By around 2009, I guess you could say that my outlook had “matured”.


Me, circa 2009 expressing my feelings on Web standards.

I was jaded, yes.

What had happened?  I had decided to try following standards a little more “from the inside”, and I learned a lot.  I talk more about it in the video, but here is the most important takeaway I can give you:  There is no standard for standards.

That is:  we really don’t know what we’re doing.  Standards are a really “young” idea.  In the roughly 100 years we’ve been trying to deal with them, you can sum up a brief history something like this:

  • Countries established national standards organizations – here in the US, ANSI.
  • National Standards really weren’t good enough for some things, so we got an international standards organization: ISO.
  • ISO tweaked how they approached things a few times along the way, but when it came to networks and software, they were kind of abysmal.  After a decade of working on the OSI 7 Layer Model, Vint Cerf and some others left and created the IETF.  We got the internet. The IETF works very differently from ISO/ANSI.
  • When Tim Berners-Lee came along he could have taken things to ISO, or the IETF – and in fact, he did choose the latter.  Some things were standardized there; others languished and never actually reached what you could call, in IETF terms, a standard.  After some mulling, the W3C was created.  It works differently than ANSI, ISO or the IETF.
  • When Internet Explorer began reverse engineering JavaScript and Netscape wanted to standardize it, they could have taken it to any of the above.  Instead, they took it to ECMA – a body previously dedicated to manufacturing standards for computers in Europe.  Why?  Because historical events led them to believe that Microsoft would wield less influence in that venue and that ECMA would be fairer to the creators.  It works differently than all of the above.
  • After a period in which much of the world (including major players like Microsoft, who controlled 95% of the browser market share at the time) decided that perhaps HTML wasn’t the future we wanted after all, and spent a decade trying to influence a different possible future in the W3C, a group defected and created the WHATWG – which, again, works very differently than all of the above.  The WHATWG was spun up in 2004; the first draft of HTML5 was published in 2008.

Along the way we’ve seen features that were disappointing (AppCache), things that aren’t quite interoperable (IndexedDB/WebSQL), things that failed to materialize (native dialogs, the document outline) and battles over control of the “really official standard” – as well as what that even means.  In late 2014, HTML5 finally reached the W3C status that we might call ‘standard’ – however, there’s still a lot that doesn’t work in all browsers – HTML input type support, for example.  So it would be foolish to say that the process really “worked well” in total either.


It’s not blasphemous to suggest that we can do better.

The interesting point here is that the reason there are so many venues is simple: the ones that came before weren’t working well, and each new one has tried to adapt and get better.

Lessons Learned

In a nutshell: we’ve moved around a lot of variables a lot of times trying to figure this out – but the one thing we haven’t figured out is how to tap into developers.  This is strange because, ultimately, it is developers who decide the fate of it all.  Over the years, standards bodies have come to say “we have businesses, we have academia, we have government.”


Bring your army, we have developers.

Yay.  That’s great.  But the truth is: we have developers.  Developers are like the Hulk: their potential power is nearly limitless – it’s just untapped!  If you want to win the day, you need the Hulk on your side.

Think about it.  Microsoft quite literally “owned” the browser market and disbanded its browser team.  When work continued on HTML, that created what might have been an impossible impasse.  There was no obvious way to get there from here.

What happened?  Polyfills. Remy Sharp coined the term and developers stepped up and filled the gap, providing a way forward.
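
To make that concrete: a polyfill is just feature detection plus a gap-filling implementation, so existing code can rely on the standard API everywhere.  Here’s a minimal sketch in JavaScript – Array.prototype.includes is used purely as an illustration, and the spec’s edge cases (negative fromIndex, for example) are simplified:

// Minimal polyfill sketch: only fill the gap if the browser lacks the feature.
if (!Array.prototype.includes) {
  Object.defineProperty(Array.prototype, 'includes', {
    configurable: true,
    writable: true,
    value: function (searchElement, fromIndex) {
      var list = Object(this);
      var length = list.length >>> 0;
      // Simplified: a negative fromIndex is just clamped to 0 here.
      var i = Math.max(fromIndex | 0, 0);
      for (; i < length; i++) {
        var current = list[i];
        // SameValueZero-ish comparison so that NaN matches NaN.
        if (current === searchElement ||
            (searchElement !== searchElement && current !== current)) {
          return true;
        }
      }
      return false;
    }
  });
}

console.log([1, 2, NaN].includes(NaN)); // true, with or without native support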

When virtually every major tech company on earth was focused on “how can we imagine a new, better ‘Web’ based on XML?” – when billions of dollars in R&D had been spent over a decade and everyone was desperately trying to figure it out – developers said “JSON: I choose you!”.  Guess who carried the day?


JSON: I choose you!

It’s not that standards bodies are “bad” at making standards. The problem, at its core, is how we approach and view standards, and whether we set up the right economics.

Fixing the economics


The Extensible Web Manifesto Logo by the great Bruce Lawson

Around 2010, a lot of people began talking about what was wrong with Web Standards and how we might fix it. This led to, not a new standards body, but a joint statement of core principles by people involved at many levels: The Extensible Web Manifesto.

Since it was published in 2013, it has become a statement of “core principles” for all of the major standards bodies involved with the Web.

The Extensible Web Manifesto is a short document, but it comes from considerably more detailed discussions and a bigger vision – one that says the economics are broken.  Failure isn’t avoidable; it’s inevitable.  Experiments are necessary in order to get there from here.


Early electrical appliances plugged into light sockets!

As my friend Matt Griffin explains well, both in his A List Apart article The Future of the Web and in his documentary on the Web, The Future is Next: you can’t do it right until you’ve done it wrong.

History – both of the Web and of physical standards – proves that an evolutionary result is inevitable.  When homes were first electrified, for example, it was for the purpose of artificial light.  There weren’t outlets – there was nothing to plug in.  Companies were battling over lights.  The result?  Early appliance inventors stepped up and filled the gap – they made cords that screwed into light sockets and birthed a whole new industry!

The Extensible Web Manifesto simply argues that while we’re busy arguing about light bulbs, the really amazing stuff is what you can do given electricity – and that we’ll very likely miss it.  It’s impossible to anticipate.  We will try, and we will fail.  Not all failures are dead ends, though.  Service Workers, for example, are the result of many failed experiments.

Some failures persist and only look like failure for a time – the sum of the DNA, however, ultimately provides new possibilities far, far beyond any of our plans.  If you were busy trying to “design” the perfect canine, you’d never come up with a maned wolf.  Chances are you’ve never seen a maned wolf, since they only evolved in a certain environment in South America.  But they are amazing, and kind of a testament to the power of evolution to create something that survives.  Ultimately, we need things that survive in all of the environments, even the ones we aren’t thinking of – and to do that we need to be adaptable.


The maned wolf is real, and it is awesome.

So, experiments and failures to reach “standard” are actually good things – that’s how we get better, by exploring the edges and learning.  But the original Web plan made it the norm that experiments ship with browsers – out in the open, and usually very high level.  That led to serious problems of miscommunication, frustration and interoperability challenges.

Polyfills showed us a different way forward, though, by mixing what little DNA was exposed to us in order to fill the gaps.  If you could polyfill a feature because a few browsers didn’t support it, you could just as easily fill it before any browser supported it.  Instead of proposing something that only works in a single browser, why not use the power of the Web to propose something that works in all browsers?  A prollyfill (will it become a standard? I dunno, prolly something like it).  Given lower-level DNA, we can experiment.

The Extensible Web Manifesto calls this DNA “fundamental primitives” and encourages standards bodies to focus the majority of their efforts on them.  Sometimes this may mean introducing new ones, but there’s already a lot of rich DNA locked away within the existing higher-level APIs of the platform.  Exposing it means we have more raw materials and can prollyfill more and better experiments.  Beneath the existing features are all sorts of things that deal with network fetching, streaming, parsing, caching, layout, painting and so on.  Each of these is currently being ‘excavated’.
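
In code, a prollyfill looks almost exactly like a polyfill – the difference is that no browser ships the feature yet, so the name is deliberately marked as experimental rather than squatting on a possible future standard.  A minimal sketch, where pfSum is an entirely hypothetical stand-in for whatever proposal is actually being incubated:

// Minimal prollyfill sketch. 'pfSum' is entirely hypothetical – a stand-in
// for a proposed feature that no browser implements yet. The deliberate
// prefix signals "experiment", so pages never come to depend on an
// unprefixed name that may change (or never arrive) during incubation.
if (!('pfSum' in Array.prototype)) {
  Object.defineProperty(Array.prototype, 'pfSum', {
    configurable: true,
    writable: true,
    value: function () {
      return this.reduce(function (total, n) { return total + n; }, 0);
    }
  });
}

// Developers can try the proposal in every browser today – and report back:
console.log([1, 2, 3].pfSum()); // 6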

The huge shift in economics that this could create is amazing.

In the mid-2000s, a lot of people wanted something like flexbox.  It’s only now, in 2016, that we can really begin to get broad feedback from developers, who are largely just starting to see what they can use it for.  In all likelihood, they will find some faults and have some better ideas.  But if we could have given developers flexbox in a fashion that at least many of them could use to accomplish real things, that would have been a good incentive to be involved.  The feedback loop could be tightened up considerably, and it’s possible because even if a proposal fails to become a standard, it still works to accomplish something.

Wait a minute.  Hold the presses.  Think about that for a moment: why do developers want to learn about standards?  To feel smart?  Shit no.  Developers want standards because they have work to get done.  A standard way is portable.  “Being standard” means it’s had a lot of eyeballs, and ultimately it winds up being “free”.  But if they can’t use a standard, they’ll use a library.  Why?  Because things have to get done.  Libraries have many of the same benefits, but not all.  A lot of people ask me “why didn’t we just standardize library X?”.  The answer is generally simple: no library has been proposed as a standard in a fashion compatible with standardization.  They’re usually too big, there are IP issues, and at the end of the day lots of people feel like they didn’t get a say.  But… if a proposal is delivered like a prollyfill that works everywhere, it’s roughly that – only in the right form!

What we need, then, is a way to incubate ideas, build prollyfills and somehow get lots of eyeballs, use and participation.  We need to see what sticks, and what can be better.  We need ideas to fail safely, without breaking the economics of participation or breaking the Web.  And we need standards bodies to act more like dictionary editors than startups.  I explain in the presentation that, in fact, most of their successes have worked this way and that the idea is not at all radical, but I’ll spare you that here.

A million voices cried out…

Meetup.com has 4 million people signed up who call themselves Web developers.  How can we involve even thousands of them the way that we do standards now?  The answer is simple: we can’t.  Discussing things on mailing lists while we wait forever for consensus and implementation doesn’t scale.  An incubator would need people helping the cream rise to the top – it needs networked communication.  Just as in networking, not all noise needs to be in all places.

Chapters.io is the answer (or at least a first attempt at an answer) to that problem.  Chapters is an effort to pair people who are involved with standards with Web-focused meetups, helping their members find, try, and discuss things that are in incubation (or proposed for incubation).  Chapters provide a “safe” space for noisy and potentially less formal discussion.  Ideas can be collected, summarized and championed.

This is not a far-flung dream: it is happening. The Extensible Web has helped shape ideas like the Web Incubator Community Group (WICG), which provides just such an outlet for incubation, and the Houdini Task Force. Browser makers and standardistas are making proposals there, and we’re figuring out how to incubate them.  Good ideas find champions and move forward.  The WICG also provides a Discourse instance that developers can subscribe to and participate in – a lot more plausible than a mailing list.  Very recently, the jQuery Foundation announced that its standards team will be helping to establish, manage and champion chapters.

So what do you say?  Are you in?  The future Web wants you.


Don’t be silent: Join the rebellion and help us organize a chapters.io near you.  Tweet interest to me @briankardell or open an issue like this with the jQuery standards team and we’ll see if we can help you get something started!  If you are in Pittsburgh, PA – join us!

 

Very special thanks to my friends the great Bruce Lawson and (the also great) Greg Rewis for proofreading this piece.

 

 


Desperately Seeking Jimi: The Web Standards Mountain

Over the past year or so, a number of very smart people have asked me privately or made comments about how I expend my energies with regard to standards: I’ve dedicated a lot of time toward elections to two advisory groups within the W3C (currently AB). Most of the comments can be distilled down to “why bother?” While there are varying arguments, the most prevalent can be summed up as “The groups in question have no real power…” Others argue that W3C is increasingly irrelevant, and that such efforts are a lost cause. More generally, that my time and talents would be better spent in other ways that didn’t deal with seemingly mundane details or bureaucracy. I thought it worth a short post to explain why I do what I do, and why I want you to care too…

Creating standards historically has a way of making one feel a little like Sisyphus in Greek mythology – forever compelled to push an immense boulder up a mountain, only to watch it roll back down again and repeat.  Time and again, we’ve seen struggles: small groups of very bright people with nothing but the best intent work hard – of that there can be no doubt. They discuss and debate a LOT – progress and consensus take a long time to arrive, implementations don’t always arrive at all – and frequently what we get isn’t really as usable as we’d like.  A change of venue alone doesn’t solve the problem either: the WHATWG has struggles and conflicts of its own – and has created failures as well. Maybe it’s a smaller mountain, I’ll give you that – but it’s still a big task.  And the smaller mountain lacks the Titan-sized support of the W3C membership.  While I think the WHATWG makes a good argument about the importance of this, it takes a planet to really make a good standard, and having the support of the W3C’s membership is still a good thing.  It’s not the fault of the people (ok, sometimes it is); it’s mainly the fault of the process we’ve created, based on an outlook of where standards come from. The W3C organization and its processes are geared toward serving this model, and that outlook is in need of revision.

A few years ago, several of us converged upon similar ideas which, almost exactly one year ago, were written down and agreed to in The Extensible Web Manifesto. It presents an alternative vision – one that recognizes the importance of developer feedback, contribution and ultimate adoption, as well as the evolutionary nature of successful and valuable standards that can both compete in an ever-changing market and stand the test of time. Getting a bunch of smart people to sign a document about core principles is an excellent start, but effectively convincing the larger standards world that it needs to change and embrace important elements of this vision is, to stick with my earlier Greek mythology theme, a Herculean task.

It takes movement on many, many fronts – but interestingly, they all involve people. All of these member orgs, all of the reps, all of the developers out there – it’s just people… people like me and you. The first step is getting all of those members to pay attention, and then getting everyone to agree that there is a problem that they actually want to solve. It involves realizing that sometimes change requires some brave souls to stand up and make a statement. Sometimes just hearing someone else you respect say it can lead to others standing up too, or at least to opening a dialog.

There are lots of ways to accomplish this, I suppose, but it seems efficient to capitalize on the fact that we’re all really connected and, ultimately, want the same basic goals. If only there were a way to create a network effect – a call for change and a perception of value that picks up steam rather than fizzles out after hitting the brick wall. One way to accomplish this might be to give the right people some microphones and stick them in front of the right folks. The W3C has two major “positions” (Director and CEO) and two advisory bodies (the TAG and the AB) that have their ear – and, more directly, the role of communicating with technical folks on matters related to the long-term architecture (the TAG) and with AC members on direction and process (the AB) – so those seem like excellent intersections. Getting people elected to these positions involves us reaching out to get enough ACs to vote for them in the first place; electing many gives them perceived volume. It makes people sit up and take notice. It stimulates the debate. It means we have to work together to find agreement about which pieces we are willing to pursue together. It means making sure someone is looking out for developers, and finding people who are willing to put in incredible efforts to help make that change happen. And these are all Really Good Things™.

We’re technically minded people.  We don’t generally like this sort of thing.

So yes, it’s true that I might see more immediately tangible results on a particular API or set of APIs if I very actively focused my efforts in that direction, but that wouldn’t deal with the bigger problem – the mountain.  What I’d really like to do is help change the whole world’s mind; that seems like a bigger win for us all in the long run. What I really want to do is convince us all to channel Jimi and say…

Well, I stand up next to a mountain
And I chop it down with the edge of my hand…

If any of this makes any sense to you – or if you’re just willing to humor me…

If you are an AC rep in the W3C – lend your voice and vote for a reform candidate (below).  If you’re not an AC rep, but know one – reach out.  If you don’t know one – tweet or share your support; we’re all connected, and chances are pretty good that someone in your social network does.  While I believe that all of the nominees below are excellent, I think there is something critical about sticking one or more developers directly into one of these slots, for the reasons explained above. If I’m wrong and these groups don’t matter, you’ve lost nothing.


Lea is an actual developer and active in the community — you can find her at talks and conferences all the time. It should be obvious why I think some real practitioner voice is important, but she also is an invited expert in the CSS WG and worked for the W3C for a while in developer relations — so she has a unique perspective, having seen it from all sides, and brings a direct connection to represent developers in the dialog.

Boaz is also a developer and the flâneur at http://bocoup.com/ — a passionate advocate of the open web who has done quite a bit of outreach, hosting (recorded/live) TC-39 and TAG events and whose employees are especially active in the community. He brings a drive and understanding of what it’s like for developers and non-megalithic companies to be involved and has a serious interest and skills with process. (Update: Boaz has since posted about why he is running).

Art is from Nokia — he has been working with standards for a long time; he’s been effective and involved, and is outspoken and thoughtful on issues of process, involvement, licensing, etc. He has led efforts to streamline the ability to keep track of what is going on and how to get involved, and to open things up to developers.

Virginie is from Gemalto and active in a number of groups (she chairs the security group) and I think she can sum up why she is running and why you should vote for her much better than I can. Suffice it to say for purposes here: She sees the problems discussed here (as well as others), brings a unique perspective and has been increasingly involved in efforts to figure out how to help give developers a voice.

David is from Apple and he’s also been working on standards for a long time. He knows the process, the challenges and the history. I’ve not always agreed with him, but he has expressed a number of good things in relation to things mentioned above in his candidate statement which make me especially hopeful that he would be an excellent choice and a good voice to round things out.

While I left him out of my original post, it was an oversight – Soohong Daniel Park from Samsung also wrote an excellent AB statement which is full of great things, and I believe he would be a great choice as well.  I’ll leave you to read it.

Off With Their Heads: Disband the W3C?


Just a few days ago, Matthew Butterick presented a talk entitled “The Bomb in the Garden” at TYPO in San Francisco (poor title timing, given recent events in Boston). In it, he concludes “the misery exists because of the W3C—the World Wide Web Consortium… So, respectfully, but quite seriously, I suggest: let’s Disband the W3C“. Ultimately he suggests that “…the alternative is a web that’s organized entirely as a set of open-source software projects.”

Butterick’s Points:

  • It takes a really long time for standards to reach Recommendation status (“the Web is 20 years old”)
  • The W3C doesn’t enforce standards
  • Browser vendors eventually implement the same standards differently
  • We fill pages with hacks and black magic to make it work
  • Ultimately, what we wind up with still isn’t nearly good enough
  • There is no good revenue model
  • Newspaper and magazine sites all look roughly the same and are somewhat ‘low design’.

His presentation is definitely interesting and worth a read/view. In general, if you have been working on the Web a long time, you will probably experience at least some moments where you can completely relate to what he is saying.

Still, it seems a little Red Queen/over-the-top to me, so I hope you’ll humor a little Alice in Wonderland-themed commentary…

Why is a Raven Like a Writing Desk?

Michael Smith (@sideshowbarker to some) replied with some thoughts on the W3C Blog in a post entitled “Getting agreements is hard (some thoughts on Matthew Butterick’s “The Bomb in the Garden” talk at TYPO San Francisco)”, in which he points out, in short bullet-list form, several ways in which Butterick’s statements misportray the W3C. The post is short enough and already bulleted, so I won’t summarize here; instead I encourage you to go have a read yourself.  He closes with the point that “Nowhere in Matthew Butterick’s talk is there a real proposal for how we could get agreements any quicker or easier or less painfully than we do now by following the current standards-development process.” (emphasis mine).

Indeed, the open source projects mentioned by Butterick are about as much like standards as a raven is like a writing desk and, in my opinion, to replace a standards body with a vague “bunch of open source projects” would send us down a nasty rabbit hole (or through the looking glass) into a confusing and disorienting world: Curiouser and curiouser.

“Would you tell me, please, which way I ought to go from here?”
“That depends a good deal on where you want to get to.”
“I don’t much care where –”
“Then it doesn’t matter which way you go.”
― Lewis Carroll, Alice in Wonderland

Still, I don’t think Butterick really means it quite so literally.  After all, he holds up PDF as an ISO standard that “just works”, and ISO is anything but an open source project like WordPress.  In fact, PDF and ISO could have some of the same challenges laid against them.  For example, from the ISO website:

Are ISO standards mandatory?

ISO standards are voluntary. ISO is a non-governmental organization and it has no power to enforce the implementation of the standards it develops.

It seems to me that ISO and the W3C have a whole lot more in common than not: standards are proposed by stakeholders, they go before technical committees, they have mailing lists and working groups, they have to reach consensus, etc.  Most of this is stated in Michael’s post.  Additionally, though, all PDF readers are not alike either: different readers have different levels of support for reflow, and there is a separate thing called “PDF/A” which extends the standard (it isn’t the only one) and adds DRM (make it expensive?).  Some readers/authors can accept links to places outside the file; some can’t.  Some can contain comments added by reviewers or markings; others can’t.  Etc.

You used to be much more…”muchier.”

I think that, instead, Butterick is simply (over-)expressing his frustration and loss of hope in the W3C: they’ve lost their “muchness”.  You know what?  It really does suck that we have experienced all of this pain, and to be honest, Butterick’s technical examples aren’t even scratching the surface.  After 20 years, you’d really think we’d be a little further along.

“I can’t go back to yesterday because I was a different person then.”
― Lewis Carroll, Alice in Wonderland

A lot of the pain we’ve experienced has taken place due to a really big detour in the history of Internet standards: the ones we really use and care about were basically put on hold while efforts mostly went toward a “something else”.  Precisely which something else would have made the Web super awesome is a little fuzzy, but whatever it was, you could bet that it would have contained at least one of the letters “x”, “m” or “l” and lots of “<“s and “>”s.  The browser maker with the largest market share disbanded their team and another major one split up.  It got so contentious at one point that the WHATWG was established to carry on the specs that the W3C was abandoning.

Re-muchifying…

While we can’t go back and fix that now, the question is:  Can we prevent the problems from happening again and work together to make the Web a better place?  I think we can.

“Why, sometimes I’ve believed as many as six impossible things before breakfast.”
― Lewis Carroll, Alice in Wonderland

The W3C is an established standards body with a great infrastructure and all of the members you’d really need to make something happen.  Mozilla CTO Brendan Eich had some good advice in 2004:

What matters to web content authors is user agent market share. The way to crack that nut is not to encourage a few government and big company “easy marks” to go off on a new de-jure standards bender. That will only add to the mix of formats hiding behind firewalls and threatening to leak onto the Internet.

Luckily, it seems that the W3C has learned some important lessons recently.  More has happened to drive Web standards and browser development/interoperability forward in the past 2-3 years than in the previous 6-7 years combined, and more is queued up than I can even wrap my head around.  We have lots of new powers in HTML and lots of new APIs in the DOM and CSS.  We have efforts like Test the Web Forward uncovering problems with interoperability, and nearly all browsers becoming evergreen – pushing out improvements and fixes all the time.  We also managed to get some great reformers elected to the W3C Technical Architecture Group recently, who are presenting some great ideas, and partnership and cooperation between the W3C and other standards bodies like ECMA/TC39 (also making excellent progress) are beginning.   I believe that we can all win with community participation and evolution, through ideas like prollyfill.org, which is trying to team up the community with standards groups and implementers to create a more nimble and natural process based on evolutionary and open ideas… Perhaps that might sound like a marriage of open source ideas and standards that Matthew Butterick would be more happy with… Maybe I should send him an email.

So what do you think?

“Do you think I’ve gone round the bend?”
“I’m afraid so. You’re mad, bonkers, completely off your head. But I’ll tell you a secret. All the best people are.”
― Lewis Carroll, Alice in Wonderland

Properties: The New Variables

Problematic History

Variables are among the oldest and most often requested features in CSS.  For well over a decade, numerous W3C proposals for them have come and gone.  To answer a number of the most common use cases, several preprocessors have sprung up over the years – more recently, and most notably, LESS and SASS.  Once those were in place, a lot of great ideas were experimented with and, on a few occasions, it even looked like we might just be building toward something which might become a standard. But the results in the end have always been the same: an eventual agreement by a lot of members that the things we keep specing out just don’t “fit” within CSS. Generally, the consensus view has been that these things are, frankly, better left to a preprocessor which can be “compiled” into CSS: it is more efficient (potentially quite a bit), requires no changes to CSS and allows competition of ideas.

New Hope

That is, until recently, when a fortunate confluence of new ideas (like HTML data-* attributes) opened the door to a brand new way of looking at it all, and thus was born the new draft of CSS Variables. The principles laid out in this new draft really do “fit” CSS quite nicely, and it addresses most of the common cases as well as several that preprocessors cannot. Further, it should be reasonably easy to implement, won’t require drastic changes to complex existing implementations, and ultimately should be pretty performant. In short, it really answers all of the concerns that have historically held the idea up.

Confusion

But… it seems to be causing no end of confusion and debate among people familiar with variables in existing preprocessor-based systems like LESS or SASS. It has all been very dramatic and full of heated debates about why things don’t “seem like” variables and how to make them seem more so.  All of this discussion, however, misses the real point. There is a clear reason for the confusion: what the draft describes as “variables” (largely because of its history, it would seem) is actually entirely unlike any existing concept of preprocessor variables (for reasons already explained).  Instead, it describes something else entirely: custom properties.

Enter: Custom Properties

When described in terms of “properties” and “values” rather than “variables”, it is actually quite simple not only to understand the new draft without the confusion, but also to see how it fits the CSS model so much better than all of the previous attempts – not only providing the means to solve a large number of known use cases, but also providing fertile ground for new, innovative ideas.

To this end, at the suggestion of a few folks involved in the ongoing W3C discussions, Francois Remy and I have forked the draft and proposed a rewrite presenting the idea in the more appropriate terms of “custom properties”, instead of continuing to attempt to shoehorn an explanation into the now-overloaded idea of “variables”.

You can view the proposal, and even fork it yourself on GitHub and suggest changes. As with any draft, it’s full of necessary technical mumbo jumbo that won’t interest a lot of people, but the gist can be explained very simply:

1.  Any property in a CSS rule beginning with the prefix “my-” defines a custom (author-defined) property which can hold any valid CSS value production.  It has no impact on rendering and no meaning at the point of declaration; it simply holds a named value (tokens).

2. Custom properties work (from the author’s perspective) pretty much like any other CSS property.  They follow the same cascade, calculation and DOM-structure inheritance models; however, their values are only resolved when they are applied by reference.

3. Custom properties may be referenced via a function in order to provide a value to another property (or to a function which provides a value). All referencing functions begin with the $ character. Reference functions, like the attr() function, provide an optional second argument: a default/fallback to use in the case where the named value is not present.

A Fun Example…

Putting it all together, you can see an extremely simple example which illustrates some of the features:

/* 
   Set some custom properties specific to media which hold a 
   value representing rgb triplets 
*/
@media all{ 
    .content{
        my-primary-rgb: 30, 60, 120;
        my-secondary-rgb: 120, 80, 20;
     }
}
@media print{ 
     .content{
        my-primary-rgb: 10, 10, 120;
        my-secondary-rgb: 120, 10, 10;
     }
}

/* 
   Reference the values via $()
   The background of nav will be based on primary rgb 
   color with 20% alpha.  Note that the 3 values in the 
   triplet are resolved appropriately as if the tokens 
   were there in the original, not as a single value. 
   The actual values follow the cascade rules of CSS.  
*/
nav{
   background-color:  rgba($(my-primary-rgb), 0.20);
}

/* 
   The background of .foo will be based on primary 
   rgb color with 60% alpha 
*/
.foo{
   background-color:  rgba($(my-primary-rgb), 0.60);
}

/* 
    The foreground color of h1s will be based on the 
    secondary rgb color, or red if the h1 isn't inside 
    .content - note an amazing thing here: the 
    optional default can also be any valid value - 
    in this case it is an rgb triplet! 
*/
h1{
 color: rgb($(my-secondary-rgb, 200, 0, 0));
}

Both drafts describe exactly the same thing…

Tim Berners-Lee Needs Revision

In the movie A Beautiful Mind there is a scene in which a brilliant but awkward young John Nash has an epiphany in a bar while attending Princeton University. A bevy of young women walk into the bar, and all eyes are drawn to a particular, very attractive blonde. His friends say something along the lines of “every man for himself” and “recall the lessons of Adam Smith, the father of modern economics: ‘In competition, individual ambition serves the common good.'” Nash grows silent, lost in thought, and then states “Incomplete… Adam Smith needs revision…” and goes on to explain that in acting with only selfish interests, they achieve a negative result, but in considering the group as well as their own interests, they yield the most positive result. In other words, Smith was close, but not quite right, as Nash illustrated with a very simple model. In case you’ve never seen it, you can read the relevant script if you are interested.

As Smith has been called “The Father of Modern Economics,” Berners-Lee has been called “The Father of the Web”. In addition to formalizing the fundamentals that make the Web as we know it possible today, he is the founder of the recognized standards body of the Web, the W3C, and in the late 1990s he posted a series of interesting drafts laying out a future vision of the Web. Particularly interesting among them was one titled “The Evolution of a Specification — Commentary on Web architecture”. For much of the following 14 years, the W3C focused and aligned largely along the vision it lays out. Berners-Lee’s insights were brilliant, but his vision was ultimately incomplete. Tim Berners-Lee (and therefore the direction/vision of the W3C) needs revision.

Who are you to criticize Tim Berners-Lee!?

Well, no one special really – and it’s not exactly a criticism of the ideas as much as a statement which should seem almost self-evident.

When writing, he frequently references what might or might not work “in the real world” (his words), explains why (pointing to historical lessons learned) and finally proposes some solutions (a grand vision): XML, schemas, XSL, RDF and a giant Semantic Web. For the most part, the W3C has recently shifted away from this focus and back toward something more like the Web we are mostly familiar with.

With over a decade of hindsight, we can leave aside judgements of “good” or “bad” and state pretty simply that many of Berners-Lee’s ideas presented in this “new vision” just don’t seem to have panned out or proven as sellable as he might have hoped in “the (really) real world”.

So what are you saying?

Very simply, I guess I am saying two things. First, that many of the problems described and observations made by Berners-Lee all the way back then are still as true today as they ever were:

  • We still don’t have a truly “great” model at the W3C (or anywhere else) for how we can simultaneously be independently innovative and interoperable. We still don’t avoid the problems he describes in order to achieve standardization; in fact, looking at the record of the W3C, it is my estimation that this process and the models themselves might have gotten worse during that time (CSS 2.1, for example, went in and out of Recommendation status numerous times over the course of well over a decade). Certainly it seems that there has been a turning point more recently and progress has begun to be made again, but even today, discussions about the actual standard model of how to get from nothing to interoperability and standardization are regular occurrences on the W3C (and now WHATWG) lists and in their meetings (the minutes of which are all published).
  • We still don’t have the kinds of “great” ways Berners-Lee envisioned for authors to easily express both local and global semantics, or to deal with these as well as we’d like to. While I would argue that we are much closer, we still aren’t there.
  • We still don’t have great ways to mix languages, in the sense that it sounds to me like he is describing in the Evolution draft.

Second, I am saying that perhaps some of the goals and ideas from almost a decade and a half ago could use review. As one extremely simple example, I think the model of “partial understanding” as he describes it could use revision, because that general philosophy still affects some decisions. His analogies often make perfect sense: if you get a bunch of data, you really only need to understand the bits you intend to use. Obviously. On the other hand, this principle is very widely applied throughout Web technologies, and in Berners-Lee’s vision he imagines that all HTML documents should be readable, and largely hold up, in a simple text-based browser over an anciently slow connection in some only semi-connected country. While this may have seemed logical in 1998, I know of no site that lives up to that level of compatibility. Indeed, “interoperability” between browsers and versions of browsers is something we consciously choose. A great number of websites would cease to make sense without the CSS that displays them, and simply wouldn’t work without the JavaScript that drives them past the initial state. Given his own admitted lack of imagination for just how unexpectedly things will develop in the future, it seems to me unreasonable to expect that a site written to take advantage of those leaps today could usefully be viewed by a browser written in 1998, even with some kind of partial experience. It feels like we could do better by shaping that expectation a little more explicitly, so that authors could simply say “sorry, you need to update” or even “here is a really simplistic version of some data for your sad, sad browser – maybe you should update.”
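
For illustration only, here’s a minimal sketch of what “shaping that expectation explicitly” might look like in JavaScript – the specific capability checks (fetch and Promise) are just assumed stand-ins for whatever a given site actually requires:

// A sketch of degrading deliberately instead of pretending every page works
// everywhere: detect the capabilities this site actually requires up front.
// The checks below are illustrative assumptions, not a recommended list.
var required = {
  fetch: typeof window.fetch === 'function',
  Promise: typeof window.Promise === 'function'
};

var missing = Object.keys(required).filter(function (name) {
  return !required[name];
});

if (missing.length > 0) {
  // "Sorry, you need to update" – or swap in a simplified experience here.
  document.body.innerHTML =
    '<p>This site needs a newer browser (missing: ' +
    missing.join(', ') + '). Maybe you should update.</p>';
}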

Where do we go from here?

I will be dedicating some future posts to sharing some of my own thoughts, and how some of the projects that I am involved with (like HitchJS) are shaped by them – but as I said: I’m really not anyone special; that’s not the point. Regardless of your specialness, I think we could use some new ideas, and I propose an active conversation… Leave a comment… Write your own post(s)… Tweet your thoughts about it. Use social media to get some active discussion going if you like the idea: what are your thoughts on how to revise the vision/model for the evolution/architecture of the Interwebs, standards, or even browsers, since they are our window to the world, as it were?