The Blind Architect

In my previous post, Tim Berners-Lee Needs Revision, I laid out a general premise: in the late 90's, Tim Berners-Lee identified numerous problems and proposed a grand vision for "the next evolution of the Web," along with some fairly specific solutions which would remain in many ways the central point of focus at the W3C for many years to come. Many (perhaps most) of the solutions put forward by the W3C have failed to take hold (or to keep it). None have changed the world in quite the same way that HTTP/HTML did. Perhaps more importantly, a good "grand philosophy" for the focus and organization of the W3C, and a strategy for helping move the Web continually forward, was never really adopted.

I was asked in response to my first article whether I was asking for (or favored) an evolution or a revolution. It is a deceptively simple question, and it makes a good metaphor to use as a segue…

Generally speaking, it is my contention that in the time following these papers, despite the very best intentions of its creator (whose entire laudable goal seems to have been to provide a model in which the system can evolve more naturally), and despite the earnest efforts of its members (some of whom I interact with and respect greatly), the W3C (perhaps inadvertently) wound up working against a truly evolutionary path forward for the Web rather than for it.

The single most valuable thing that I would like to see is a simple change in focus and philosophy – some of which is already beginning to happen. More specifically, I would like to propose changes which more readily embrace a few concepts from evolutionary theory. Namely: competition and failure. Given the way things have worked for a long time, I am tempted to say that embracing evolution might actually be a revolution.

On Design

As humans (especially as engineers or architects) we like to think of ourselves as eliminators of chaos: intelligent designers of beautifully engineered, efficient solutions to complex problems that need solving. We like to think that we can create solutions that will change things and last a long time because they are so good. However, more often than not, and by a very wide margin, we are proven wrong. Statistically speaking, we absolutely suck at that game.

Why?  Very simply:  Reality is messy and ever changing… 

Ultimately, the most beautiful solution in the world is worthless if no one uses it. History is replete with examples of elegantly designed solutions that went nowhere, and of not-so-beautifully designed but "good enough" solutions that became smash hits. "Good" and "successful" are not the same thing. At the end of the day it takes users to make something successful. By and large, it seems that we are not good judges of what will succeed in today's environment. Worse still, the environment itself is constantly changing: users' sense of what is valuable (and how valuable it is) is constantly changing.

Research seems to show that it’s not  just a problem with producers/problem solvers either:  If you ask potential users, it would appear that most don’t actually know what they really want – at least not until they see it.  It’s an incredibly difficult problem in general, but the bigger the problem space, the more difficult it becomes.

What this means to W3C…

Now imagine something the scale of the Web: how sure am I that the W3C is not exempt from all of the failures and forces listed above, and won't "just get it right"? 100%.

It’s OK.  Embrace it.

Think about this: the W3C is in a sticky spot. On the one hand, they are in charge of standards. As such, they frown on things that are not standards. However, change is a given. How do you change and stay the same simultaneously? The current process of taking something from innovation to standard is exceptionally complicated and problematic.

Browser manufacturers are in an even stickier spot. Worse, the whole process is full of perverse risk/reward scenarios for browser vendors, depending on what is going on at any point in time. In the end, there is a fuzzy definition of "standard" that to this day remains elusive to the average user.

Think not? Have you ever considered which browsers your users have, and then used references and shims to determine exactly which parts of which specs were supported in "enough" of the browsers you expected to encounter? If so, then you see what I mean.
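
To make that concrete, here's roughly what that dance looks like – a minimal sketch, where the shim name is made up purely for illustration:

    // Check whether the piece of the spec we need is there natively;
    // otherwise hand off to whatever shim/library we shipped with the page.
    // "mySelectorShim" is a hypothetical stand-in, not a real project.
    function getMatches(selector, root) {
      var scope = root || document;
      if (scope.querySelectorAll) {
        return scope.querySelectorAll(selector); // native support
      }
      return window.mySelectorShim(selector, scope); // fall back to the shim
    }

    var items = getMatches('.menu-item');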

But What About The Browser Wars?

Much of the above was set up explicitly to avoid a repeat of "the browser wars".

However, I think that we drew the wrong lessons from that experience.

Was the real problem with the browser wars the rapid innovation and the lack of a more or less single global standard? Or was it that those things made it difficult for authors to write code that worked in any browser? Is there a difference? Yes, actually, there is, and we know it well in programming. It is the difference between the DOM and jQuery, between Java and Spring. One is a standard; the other is a library chosen by the author. Standards are great – important – I fully agree. The question is how you arrive at them, and evolution through libraries has many distinct advantages.

A lot of the JavaScript APIs that we have natively in the browser today are less than ideal in most authors' minds, even if they were universally implemented (they still aren't). Yet having them, we do _amazing_ things, because in JS we can provide something better on top with libraries. Libraries compete in the wild – authors choose them, not the browsers or the W3C. Ideas from libraries get mixed and mutated, and anything that gains a foothold tries to adapt and survive. After a while, you start to see common themes rise to the top – dominant species. Selectors are a good example; so is a better event API. Standards just waiting to be written, right?
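
To illustrate the pattern (this is a toy wrapper, not any real library – the shape is what matters): selection plus a friendlier, chainable event API layered on top of the native DOM.

    // A toy "library" layered on the native DOM: nicer selection and events.
    function $(selector) {
      var nodes = Array.prototype.slice.call(document.querySelectorAll(selector));
      return {
        on: function (type, handler) {
          nodes.forEach(function (el) { el.addEventListener(type, handler, false); });
          return this;
        },
        addClass: function (name) {
          nodes.forEach(function (el) { el.classList.add(name); });
          return this;
        }
      };
    }

    // Authors pick this shape because it reads well, not because a standard said so.
    $('nav a').on('click', function (e) { e.preventDefault(); }).addClass('enhanced');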

In my opinion, the W3C in its current arrangement is unprepared for this, and that is a fundamental problem. Efforts to capitalize on it haven't been ideal. There is hesitancy to "pick a winner" (weird, since browsers are already the winners) or to give credence to one approach over another rather than starting over. There is always an effort to re-consider the idea holistically as part of the browser and do a big design. In the end, if we get something, it's not bad, but it's usually fundamentally different, and the disparity is confusing – sometimes even unhelpful – to authors trying to improve the code in their existing use-cases/models.

It doesn't have to be that way. In the end, a lot of recs wind up being detailed accounts of things that went out into the world and permeated it widely, planned or not. Even many of HTML5's new semantic tags were derived from a study of the most commonly used CSS class names. There is a lot of potential power in the idea of exploiting this and building a system around it.

There is finally movement on Web Components – that could be something of a game changer, with the right model at the W3C, allowing ideas to compete and adapt in the wild, decoupled from the browser, before becoming a standard. Likewise, some extension abilities could let certain kinds of CSS changes compete in the wild before standardizing too. If we start thinking about defining simple, natural extension points, I think it shapes how we think about growing standards: decoupling these advancements from a particular browser helps things grow safely and stay sane – it reduces a lot of risks all around.
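
As a rough sketch of the kind of extension point that means (written against the custom elements API in its current form; the tag name is made up):

    // A custom tag whose behavior ships as a library the author chose,
    // not as something baked into a particular browser.
    class UserCard extends HTMLElement {
      connectedCallback() {
        var strong = document.createElement('strong');
        strong.textContent = this.getAttribute('name') || 'Anonymous';
        this.appendChild(strong);
      }
    }
    customElements.define('user-card', UserCard);

    // <user-card name="Ada"></user-card> now works like any other element,
    // and competing implementations can be swapped without waiting on a spec.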

(Re)Evolution happens anyway…

One closing example illustrates that it doesn't have to be this way, and that evolution will continue to happen even if we fight it.

While prescriptive standards put forward by the W3C had an advantage, in the long run they are subject to the same forces. I think no idea in history can be said to have had a greater advantage than XML. In many respects, it was the lynchpin of the larger plans. Then a lone guy at a startup noticed a really nice coincidence in the way that all C-family languages think about and express data – and it just happened to be super easy to parse and serialize in JavaScript. He registered a domain, threw out a really basic specification which fits neatly on an index card and a simple JavaScript library, and more or less forgot about it… and JSON was born.

Technically speaking, it is inferior on a whole number of levels in terms of what it can and can't express. Just ask anyone who worked on XML. But that was all theory; in practice, most people didn't care about those things so much. JSON had no upfront advantages, except that in the browser some people just eval'ed it initially. No major companies or standards bodies created it or pushed it. Numerous libraries weren't already available for every programming language known to man.

The thing is, people liked it… Now there are implementations in every language; now it is a recognized standard… It is a standard because it was defined at ECMA, not the W3C. It was scrutinized and pushed and pulled at the edges – but that's what standards committees are really good at. Now it is natively supported in Web browsers, safely (and more efficiently), and in an increasing number of other programming languages as well. Now it is inspiring new ideas to address things originally addressed by XSL, XPath and RDF. Maybe someday it will fall out of favor to some spinoff with better advantages for its time. That's reality, and it's OK. The important thing to note is that it achieved this because it was good enough to survive – not because it was perfect.
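
To see the difference in practice, a minimal sketch:

    var text = '{"name": "Ada", "langs": ["JS", "C"]}';

    // Early on: the payload is (almost) a JavaScript literal, so some people just eval'ed it.
    // It works, but it executes whatever the server happened to send.
    var risky = eval('(' + text + ')');

    // Now: browsers parse it natively – safer, and faster.
    var safe = JSON.parse(text);
    console.log(safe.name, safe.langs[1]); // "Ada" "C"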

In 1986 Richard Dawkins elegantly wrote “The Blind Watchmaker” to explain how the simple process of evolution creates beautiful solutions to complex problems:  Blindly – it doesn’t have a greater plan for twelve steps down the road in mind.

