Properties: The New Variables

Problematic History

Variables are among the oldest and most frequently requested features in CSS.  For well over a decade, numerous W3C proposals for them have come and gone.  To answer a number of the most common use cases, several preprocessors have sprung up over the years, most recently and notably LESS and SASS.  Once those were in place, a lot of great ideas were experimented with, and on a few occasions it even looked like we might just be building toward something which could become a standard.  But the results in the end have always been the same: an eventual agreement by a lot of members that the things we keep speccing out just don’t “fit” within CSS.  Generally, the consensus view has been that these things are, frankly, better left to a preprocessor which can be “compiled” into CSS: it is more efficient (potentially quite a bit), requires no changes to CSS, and allows competition of ideas.

New Hope

That is, until recently, when a fortunate confluence of new ideas (like HTML data-* attributes) opened the door to a brand new way of looking at it all, and thus was born the new draft of CSS Variables.  The principles laid out in this new draft really do “fit” CSS quite nicely, and it addresses most of the common cases as well as several that preprocessors cannot.  Further, it should be reasonably easy to implement, won’t require drastic changes to complex existing implementations, and ultimately should be quite performant.  In short, it really answers all of the concerns that have historically held things up.

Confusion

But… it seems to be causing no end of confusion and debate among people familiar with variables in existing preprocessor-based systems like LESS or SASS.  It has all been very dramatic and full of heated debates about why things don’t “seem like” variables and how to make them seem more so.  All of this discussion, however, misses the real point.  There is a clear reason for the confusion: what the draft describes as “variables” (largely because of its history, it would seem) is actually entirely unlike any existing concept of preprocessor variables (for the reasons already explained).  Instead, it describes something else entirely: custom properties.

Enter: Custom Properties

When described in terms of “properties” and “values” rather than “variables”, it is actually quite simple not only to understand the new draft without the confusion, but also to see how it fits the CSS model so much better than all of the previous attempts: it provides the means to solve a large number of known use cases, and it also provides fertile ground for new, innovative ideas.

To this end, at the suggestion of a few folks involved in the ongoing W3C discussions, Francois Remy and I have forked the draft and proposed a rewrite presenting the idea in the more appropriate terms of “custom properties”, instead of continuing to attempt to shoehorn an explanation around the now overloaded idea of “variables”.

You can view the proposal and even fork it yourself on GitHub and suggest changes.  As with any draft, it’s full of necessary technical mumbo jumbo that won’t interest a lot of people, but the gist can be explained very simply:

1.  Any property in a CSS rule beginning with the prefix “my-” defines a custom (author-defined) property which can hold any valid CSS value production.  It has no impact on rendering and no meaning at the point of declaration; it simply holds a named value (a series of tokens).

2.  Custom properties work (from the author’s perspective) pretty much like any other CSS property.  They follow the same cascade, calculation and DOM structure inheritance models; however, their values are only resolved when they are applied by reference.

3.  Custom properties may be referenced via a function in order to provide a value to another property (or to a function which provides a value).  All referencing functions begin with the $ character.  Reference functions, like the attr() function, accept an optional second argument: a default/fallback to use in the case where the named value is not present (see the minimal sketch just below).
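
Just to restate those three points in code form before the fuller example below, here is a minimal sketch (the selector and the “my-brand-rgb” name are invented purely for illustration; the syntax is the one described above):

body{
   /* declare a custom property - by itself this has no effect on rendering */
   my-brand-rgb: 30, 60, 120;
}
a{
   /* reference it; if my-brand-rgb is not set, fall back to 0, 0, 0 */
   color: rgb($(my-brand-rgb, 0, 0, 0));
}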

A Fun Example…

Putting it all together, you can see an extremely simple example which illustrates some of the features:

/* 
   Set some custom properties specific to media which hold a 
   value representing rgb triplets 
*/
@media all{
    .content{
        my-primary-rgb: 30, 60, 120;
        my-secondary-rgb: 120, 80, 20;
    }
}
@media print{
    .content{
        my-primary-rgb: 10, 10, 120;
        my-secondary-rgb: 120, 10, 10;
    }
}

/* 
   Reference the values via $()
   The background of nav will be based on primary rgb 
   color with 20% alpha.  Note that the 3 values in the 
   triplet are resolved appropriately as if the tokens 
   were there in the original, not as a single value. 
   The actual values follow the cascade rules of CSS.  
*/
nav{
   background-color:  rgba($(my-primary-rgb), 0.20);
}

/* 
   The background of .foo will be based on primary 
   rgb color with 60% alpha 
*/
.foo{
   background-color:  rgba($(my-primary-rgb), 0.60);
}

/* 
    The foreground color of h1s will be based on the 
    secondary rgb color or red if the h1 isn't inside 
    .content - note an amazing thing here that the 
    optional default can also be any valid value - 
    in this case it is an rgb triplet! 
*/
h1{
   color: rgb($(my-secondary-rgb, 200, 0, 0));
}

Both drafts describe exactly the same thing… the only difference is in how it is explained.


The Blind Architect

In my previous post, Tim Berners-Lee Needs Revision, I laid out a general premise: in the late 90’s, Tim Berners-Lee identified numerous problems, laid out a grand vision for “the next evolution of the Web”, and proposed some fairly specific solutions which would remain, in many ways, the central point of focus at the W3C for many years to come.  Many (perhaps most) of the solutions put forward by the W3C have failed to take hold (or to keep it).  None have changed the world in quite the same way that HTTP/HTML did.  Perhaps more importantly, a good “grand philosophy” for the focus and organization of the W3C, and a strategy for helping move the Web infinitely forward, were never really adopted.

I was asked, in response to my first article, whether I was asking for (or favored) evolution or revolution.  It is a deceptively simple question, and it makes a good segue…

Generally speaking, it is my contention that in the time following these papers, despite the very best intentions of its creator (whose entire laudable goal seems to have been to provide a model in which the system could evolve more naturally) and despite the earnest efforts of its members (some of whom I interact with and respect greatly), the W3C (perhaps inadvertently) wound up working against a true evolutionary model for the Web instead of for it.

The single most valuable thing that I would like to see occur is a simple change in focus and philosophy – some of which is already beginning to happen.  More specifically, I would like to propose changes which embrace a few concepts of evolutionary theory more readily: namely, competition and failure.  Given the way things have worked for a long time, I am tempted to say that embracing evolution might actually be a revolution.

On Design

As humans (especially as engineers or architects) we like to think of ourselves as eliminators of chaos: intelligent designers of beautifully engineered, efficient solutions to complex problems that need solving.  We like to think that we can create solutions that will change things and last a long time because they are so good.  However, more often than not, and by a very wide margin, we are proven wrong.  Statistically speaking, we absolutely suck at that game.

Why?  Very simply:  Reality is messy and ever changing… 

Ultimately, the most beautiful solution in the world is worthless if no one uses it.  History is replete with examples of elegantly designed solutions that went nowhere, and of not-so-beautifully-designed but “good enough” solutions that became smash hits.  “Good” and “successful” are not the same thing.  At the end of the day it takes users to make something successful.  By and large, it seems that we are not good judges of what will succeed in today’s environment.  Worse still, the environment itself is constantly changing: users’ sense of what is valuable (and how valuable it is) is constantly changing.

Research seems to show that it’s not  just a problem with producers/problem solvers either:  If you ask potential users, it would appear that most don’t actually know what they really want – at least not until they see it.  It’s an incredibly difficult problem in general, but the bigger the problem space, the more difficult it becomes.

What this means to W3C…

Now imagine something the scale of the Web: how sure am I that the W3C is not exempt from all of the failures and forces listed above and won’t “just get it right”?  100%.

It’s OK.  Embrace it.

Think about this: the W3C is in a sticky spot.  On the one hand, they are in charge of standards.  As such, they frown on things that are not standard.  However, change is a given.  How do you change and stay the same simultaneously?  The current process of moving from innovation to standard is exceptionally complicated and problematic.

Browser manufacturers are in an even stickier spot.  Worse, the whole process is full of perverse risk/reward scenarios for browser vendors, depending on what is going on at any point in time.  In the end, there is a fuzzy definition of “standard” that to this day remains elusive to the average user.

Think not?  Have you ever considered which browsers your users have, and then used references and shims to determine exactly which parts of which specs were supported in “enough” of the browsers you expected to encounter?  If so, then you see what I mean.

But What About The Browser Wars?

Much of the above was set up explicitly to avoid a repeat of “the browser wars”.

However, I think that we drew the wrong lessons from that experience.

Was the real problem with the browser wars the rapid innovation and the lack of a more or less single global standard?  Or was it the fact that this made it difficult for authors to write code that worked in any browser?  Is there a difference?  Yes, actually there is, and we know it well in programming.  It is the difference between the DOM and jQuery, between Java and Spring.  One is a standard; the other is a library chosen by the author.  Standards are great – important – I fully agree.  The question is how you arrive at them, and evolution through libraries has many distinct advantages.

A lot of the JavaScript APIs that we have natively in the browser today are less than ideal in most authors’ minds, even if they were universally implemented (they still aren’t).  Yet, having them, we do _amazing_ things, because in JS we can provide something better on top with libraries.  Libraries compete in the wild – authors choose them, not the browsers or the W3C.  Ideas from libraries get mixed and mutated, and anything that gains a foothold tries to adapt and survive.  After a while, you start to see common themes rise to the top – dominant species.  Selectors are a good example; so is a better event API.  Standards just waiting to be written, right?

The W3C, in its current arrangement, is unprepared for this in my opinion, and that is a fundamental problem.  Efforts to capitalize on it haven’t been ideal.  There is hesitancy to “pick a winner” (weird, since browsers are already the winners) or to give credence to one approach over another rather than starting over.  There is always an effort to reconsider the idea holistically as part of the browser and do a big design.  In the end, if we get something, it’s not bad, but it’s usually fundamentally different, and the disparity is confusing – sometimes even unhelpful – to authors trying to improve the code in their existing use-cases/models.

It doesn’t have to be that way.  In the end, a lot of recs wind up being detailed accounts of things that went out into the world and permeated it widely, planned or not.  Even many of the new semantic tags were derived from a study of the most commonly used CSS classes.  There is a lot of potential power in the idea of exploiting this and building a system around it.

There is finally movement on Web Components – that could be something of a game changer with the right model at the W3C, allowing ideas to compete and adapt in the wild, decoupled from the browser, before becoming a standard.  Likewise, some extension abilities could let certain kinds of CSS changes compete in the wild before standardizing, too.  If we start thinking about defining simple, natural extension points, I think it shapes how we think about growing standards: decoupling these advancements from a particular browser helps things grow safely and stay sane – it reduces a lot of risks all around.

(Re)Evolution happens anyway…

Here is one closing example which illustrates how it doesn’t have to be this way, and how evolution will continue to happen even if we fight it.

While prescriptive standards put forward by the W3C had an advantage, in the long run they are subject to the same forces.  I think no idea in history can be said to have had a greater advantage than XML.  In many respects, it was the lynchpin of the larger plans.  Then a lone guy at a startup noticed a really nice coincidence in the way that all C-family languages think about and express data – and it just happened to be super easy to parse and serialize in JavaScript.  He registered a domain, threw out a really basic specification which fits neatly on an index card along with a simple JavaScript library, and more or less forgot about it… and JSON was born.

Technically speaking, it is inferior on a whole number of levels in terms of what it can and can’t express.  Just ask anyone who worked on XML.  But that was all theory.  In practice, most people didn’t care about those things so much.  JSON had no upfront advantages, except that in the browser some people just eval’ed it initially.  No major companies or standards bodies created it or pushed it.  Numerous libraries weren’t already available for every programming language known to man.

The thing is, people liked it… Now there are implementations in every language; now it is a recognized standard… It became a standard because it was defined at ECMA, not the W3C.  It was scrutinized and pushed and pulled at the edges – but that’s what standards committees are really good at.  Now it is natively supported in Web browsers, safely (and more efficiently), and in an increasing number of other programming languages as well.  Now it is inspiring new ideas to address things originally addressed by XSL, XPath and RDF.  Maybe someday it will fall out of favor to some spinoff with better advantages for its time.  That’s reality, and it’s ok.  The important thing to note is that it achieved this because it was good enough to survive – not because it was perfect.

In 1986, Richard Dawkins elegantly wrote “The Blind Watchmaker” to explain how the simple process of evolution creates beautiful solutions to complex problems: blindly – it doesn’t have a greater plan for twelve steps down the road in mind.