About briankardell

* Developer Advocate, Igalia
* Original Co-author/Co-signer of The Extensible Web Manifesto
* Chair, W3C Extensible Web CG
* Member, W3C CSS WG (Open JS Foundation)
* Co-author of HitchJS
* Blogger, Standards Geek

Potentially Scattered Thoughts on Web Components and Frameworks

Chances are pretty good that you’ve seen at least one of the many articles and tweets flying around lately about Web Components and Frameworks. For the past week I’ve watched as all sorts of arguments have been made by friends of mine on both sides and wanted to weigh in and share some thoughts, but my life hasn’t been cooperating. Finally, I have a moment so I tried to put down some thoughts that have been banging around in my head… No guarantee they are coherent and I’m out of time, so here they are….

A lot of developers (by far the vast majority) don’t work for browser vendors, and they aren’t involved in building frameworks or in standards development. They don’t have the time to spend a few years considering all of the gory little details and discussions that got us to where we are today.  A lot of us stay in the loop by looking to, and taking cues from, a comparatively small number of people who do, and who boil it down for us. We all have to do this with at least some things.  Then we further categorize into “good” and “bad” piles mentally, and we use that to determine what’s worth spending time on or what will be a waste. Right now, for most people, it’s probably a little overwhelming. Which pile do Web Components belong in? Are they going to be Shangri-La or a real shit show?

The answer is: No. Neither. A little of both at times. I’m sorry if that sounds wishy-washy but real life is like that.  It’s full of grays. We move forward in fits and starts.  We frequently can’t even predict what people will do with things until they get them – I’ve written about all of this a lot.  Reality is messy.  Getting agreement is hard.  Truth is often subtler than we imagine and non-binary.

Let me explain what I mean though because without more, that’s kind of a non-answer…

Web Components are super cool, and also – believe it or not – imperfect. I think that they’re a fundamental step toward us getting much better at a lot of things, and I think we’re kidding ourselves if we think it’s not going to be bumpy for a while as we sort some things out.  I believe that we can make them work with very popular frameworks, and I also believe that they are not ideally suited to them today for a number of reasons, and that people may choose not to use them.  This creates challenges that I think we are not articulating well and… that actually might matter.

Web Components or Custom Elements?

Since I’m talking about articulating to developers… We’re not always clear on this.  Sometimes we’re even conflating Web Components with Polymer. We’re doing better than we were in 2013, but we’ve got to do better still.  Polymer isn’t Web Components in the way that birds aren’t ducks.  Polymer is conceptually built around Web Components, but it’s a library.  Because it is a library created by Google, and because it includes polyfills and prollyfills that have not always been clearly labeled as such, that’s gotten a little confusing at times.  Occasionally, that’s caused bad feelings.  Use Polymer, React, Angular or Ember.  They’re all good; I’m not trying to pick on one.  Just be aware of what’s what and what isn’t.  Your mileage may vary.

So, in an effort to be clear – in this piece, when I say “Web Components” I am talking about Custom Elements and Shadow DOM, as those are the two pieces that are near shipping.
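For anyone who hasn’t seen those two pieces yet, here’s a minimal sketch of how they fit together (the tag and class names below are made up purely for illustration):

```html
<my-greeting></my-greeting>

<script>
  // Custom Elements: author-defined tags backed by a class.
  class MyGreeting extends HTMLElement {
    connectedCallback() {
      // Shadow DOM: an encapsulated subtree whose styles don't leak in or out.
      const shadow = this.attachShadow({ mode: 'open' });
      shadow.innerHTML =
        '<style>p { font-weight: bold; }</style>' +
        '<p>Hello from a Custom Element!</p>';
    }
  }
  // Registering the name is what upgrades matching tags already in the document.
  customElements.define('my-greeting', MyGreeting);
</script>
```

That’s the whole contract: a class, a registration, and (optionally) a shadow tree.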

Can/should you use the current incarnation of Web Components with all of the popular frameworks?

Yes, probably you can… Sort of.

Believe it or not, I think this is where most of the trouble lies.  “Can you” isn’t the question to ask.  “Should you” or “Will people” are the questions.  One group says “resoundingly yes you should, because it is native and therefore is most widely usable.”  Another counters “potentially you can, but you really might not want to with a framework because it won’t be as convenient as you might think.  A lot of components will either not work or will require wrappers/adapters in practice.”  “Can” has a definite answer, and I’m pretty sure the answer is “yes”, because we can make just about anything work if we try hard enough (see: the entire Web).

That’s where “should you” comes in, and I’m afraid that the answer there doesn’t seem so cut and dried – and that “will people” is kind of up in the air.  Why?  Because “can you” isn’t the only variable.  You’ve probably had experiences where it gets pretty hard and you think “is this really worth it?”  I’ll tell you about one of those I had below that makes me sensitive to this one.  I’m happy to be shown “it’s worth it” in actual practice and “people will (more than once)” in the real world, but my own experiences thus far don’t entirely bear that out.  I’ll also explain why I’m ok with that.

It’s a pickle. But it’s a pickle like “should you do x” and we have a lot of those.

Will people use Web Components with their framework?

Yes, or at least they will try if a whole bunch of people are all telling them that this should surely work – and that’s part of what people are worried about.  I’m not sure this is being well articulated, but the worry that some have is: it’s not going to be the walk in the park that they imagine, and that’s going to turn people off to Web Components. Even framework people think that would be a shame, because we all agree that Web Components are a great idea that we absolutely want. Web developers have been disappointed and frustrated a few times already with challenges in the morphing of Web Components as we struggled to get consensus. Regardless of reasons or blame, that seems true and we probably can’t afford a lot more of that.

So, should we go back to the drawing board and re-think Web Components from the ground up?

Hellz No! But, believe it or not, I don’t actually think anyone really meant to imply this literally regardless of what they might have said in 140 characters one day after lunch.

Why not?

Because they’re a good step forward and not shipping something fits just as neatly under the “we probably can’t afford a lot more of that” heading.

For those who might not be aware, “Web Components” is actually a cold reboot of many previously failed attempts to do many of the same things – HTC, XUL, XAML, Flex and XBL were all trying to do the same things. These efforts were restarted, to the best of my recollection, around 2009 or 2010 under the banner “Web Components” – but you can track the general idea of “custom elements” (lowercase because they were more like “Web Components”) in some fashion back practically as far as the Web itself. In all that time, we have never been so close. Why? Because the problem is hard.  Because new things come along and give us pause. Because consensus is hard. Even just between those who actually make a browser, it is, in a word: fucking hard. Standards are like this Weird Al song. On anything that asks for the ability for authors to mint elements and define them, doubly so.

It’s not the end of the game but we can’t move on further until we beat this level.

So what should we do?

Just be realistic, I think. We move forward in steps. If we are pitching an all-singing, all-dancing, no-challenges-to-be-found future, that’s probably not realistic.  A de facto implementation that will be used everywhere regardless of framework might be over-selling a vision until we actually see it broadly happen.  Sure, it might be possible, but possible isn’t the only variable.  It’s also entirely likely that even if someone posted such a thing – even one with an adapter for Ember or React, or even Polymer, of a very popular custom element with “Web Components underneath” – within a few days someone will say “yeah, but that’s inefficient”.  It’s pretty easy for them to make a derivative component in their framework that does the same thing but cheaper, and it’s very likely that many people with that framework will gravitate toward the “more efficient for their framework” version.  I’m not sure.  We’ll see.  That’s all I’m saying.  That and…



Yeah, meh. I’m ok with that for now.  We’re not there yet, but we’ll figure it out as long as we work together.  I know that might sound a little disappointing if you were hoping for total Nirvana – but it’s hella better than what we had a few years back.  Declarative solutions are very easy to use, and thank goodness we all agree on that much, even to the basic point of how we express them. In fact, all of the major frameworks have a solution that looks a lot like Custom Elements. Here’s some markup – is it an Ember component or a Custom Element?  You can’t tell.

<common-button id="btn">Statu</common-button>
<common-badge icon="favorite" for="btn" label="favorite icon"></common-badge>


Yes, it means there might be N implementations of the same thing instead of the one true implementation — and yes that kinda sucks.  But… meh.

We have armies of developers who are probably willing to port a popular custom element to their framework of choice until we sort this out.  Within my own company we have some things implemented twice – once as a Custom Element and once as an Angular directive.  It didn’t start that way; it wound up that way because it turned out to be easier and more efficient to say to Angular “Ok – you be you”.  I fought it until I could fight no more.  Now that we did it, it’s not the end of the world.  The folks who write markup still don’t need to learn something new moving from frameworkless pages to Angular pages.  Their knowledge is portable.  That’s not as amazing as I’d originally hoped, but in practice, it’s actually been pretty damned good.  And I’ll take it over all of the N-solution, very complex alternatives we had before any day.

I believe we will get to the day when that isn’t necessary, but – if someone says “that day might not be today” I don’t think they’re just being difficult.  I’m super happy to be shown – in reality – that this isn’t likely, but only time will tell.  Dueling speculations won’t tell us, only real experience will.

The mere fact that we can mint tags and easily transfer understanding between all of these is, in itself, a pretty significant leap in my opinion.  That it allows us to begin the process of developers helping to establish the “slang”, wresting creative power away from being the sole domain of browser vendors and letting standards play the role of dictionary editors, is, in and of itself, pretty significant.  We should throw this in the win column regardless of any of that other stuff.

What next?  Are we resigned to this forever?

Nah, I don’t think so.  We should manage expectations for now, but React, Ember and Angular all have some really interesting observations about declarative serialization and how we express things and why that’s a challenge for Web Components within their frameworks.  We should listen.  We should gain some experience.  We should also write the hell out of some Web Components and see what we can do.  Given new abilities I can’t wait to see what developers come up with.  New capabilities inevitably breed new ideas and solutions that it’s nearly impossible to imagine until it happens.

Will it be as pretty as it might be if we started over?  Probably not, but, I think it’ll actually get done this way.  If we make some progress, we inch the impossible destination ever closer.


The Future Web Wants You.

A few weeks ago I gave a talk in Pittsburgh, PA at Code and Supply. It was recorded if you would rather watch the video (the actual talk is ~40 minutes, the video captures some of the Q&A afterward as well), but I’m more comfortable writing and I thought it might be worth a stab at a companion piece that tries to make the same points/arguments in blog-post-size, so – here it is.


A photo of me in the mid-1990’s dreaming about standards.

In the mid-1990’s, I didn’t really know much about standards, but it seemed that they obviously existed.  So, when I heard that there was going to be a “standards organization” setup to bring together all sorts of powerful tech interests and that it would be led by Tim Berners-Lee (creator of the Web) himself, it didn’t take much more to win my confidence.

I suppose I imagined something between Michelangelo’s Creation panel on the Sistine Chapel ceiling and Raphael’s School of Athens. Somewhere, perhaps high in a tower on a proverbial Mount Olympus, the Gods of programming – wise and benevolent beings – would debate and design, and the outcome would be an elegant, simple and beautiful solution. They would show us the One True Way. Specifications might as well have been handed down on tablets of stone, for all I imagined.  The future was bright. They would lead me to the promised land.


My imagination of what a W3C meeting was like..

By around 2009, I guess you could say that my outlook had “matured”.


Me, circa 2009 expressing my feelings on Web standards.

I was jaded, yes.

What had happened?  I decided to begin to try following standards a little more “from the inside” and I learned a lot.  I talk more about it in the video, but here is the most important takeaway I can give you:  There is no standard for standards.

That is:  We really don’t know what we’re doing.  Standards are a really “young” idea.  In the roughly 100 years we’ve been trying to deal with them, you can sum up a brief history something like this:

  • Countries established national standards organizations – here in the US, ANSI.
  • National Standards really weren’t good enough for some things, so we got an international standards organization: ISO.
  • ISO tweaked how they approached things a few times along the way, but when it came to networks and software, they were kind of abysmal.  After a decade of working on the OSI 7 Layer Model, Vint Cerf and some others left and created the IETF.  We got the internet. The IETF works very differently from ISO/ANSI.
  • When Tim Berners-Lee came along he could have taken things to ISO, or the IETF – and in fact, he did choose the latter.  Some things were standardized there, others languished and never actually reached what you could call, in IETF terms, a standard.  After some mulling, the W3C was created.  It works differently than ANSI, ISO or the IETF.
  • When Internet Explorer began reverse-engineering JavaScript and Netscape wanted to standardize it, they could have taken it to any of the above.  Instead, they took it to ECMA – a body previously dedicated to manufacturing standards for computers in Europe.  Why?  Because historical events led them to believe that Microsoft would wield less powerful influence in this venue and that ECMA would be more fair to the creators.  It works differently than all of the above.
  • After a period in which much of the world (including major players like Microsoft who controlled 95% of the browser market share at the time) decided that perhaps HTML wasn’t the future we wanted after all and spent a decade trying to influence a different possible future in the W3C, a group defected and created the WHATWG which – again – works very differently than all of the above.  The WHATWG was spun up in 2004, the first draft of HTML was published in 2008.

Along the way we’ve seen features that were disappointing (AppCache), things that aren’t quite interoperable (IndexedDB/WebSQL), things that failed to materialize (native dialogs, the document outline) and battles over control of the “really official standard”, as well as what that even means.  In late 2014, HTML5 reached a W3C status that we might call ‘standard’ – however, there’s still a lot that doesn’t work in all browsers – HTML input type support, for example.  So it would be foolish to say that the process really “worked well” in total either.


It’s not blasphemous to suggest that we can do better.

The interesting point here is that the reason there are many venues is simple: the ones that came before weren’t working well, and each new venue has tried to adapt and get better.

Lessons Learned

In a nutshell: we’ve moved around a lot of variables a lot of times trying to figure this out – but the one thing we haven’t figured out is how to tap into developers.  This is strange because, ultimately, it is developers who decide the fate of it all.  Over the years, standards bodies have come to say “we have businesses, we have academia, we have government.”


Bring your army, we have developers.

Yay.  That’s great.  But, the truth is: we have developers.  Developers are like the Hulk, their potential power is nearly limitless, it’s just untapped!  If you want to win the day – you need the Hulk on your side.

Think about it.  Microsoft quite literally “owned” the browser market and disbanded the team.  When work continued on HTML, it created what might have been an impossible impasse.  There was no obvious way to get there from here.

What happened?  Polyfills. Remy Sharp coined the term and developers stepped up and filled the gap, providing a way forward.

When virtually every major tech company on earth was focused on “how on earth can we imagine a new, better ‘Web’ based on XML?” – when billions of dollars in R&D had been spent over a decade and everyone was desperately trying to figure it out, developers said “JSON: I choose you!”.  Guess who carried the day?


JSON: I choose you!

It’s not that standards bodies are “bad” at making standards. The problem, at its core, is how we approach/view standards and how we set up the right economics.

Fixing the economics


The Extensible Web Manifesto Logo by the great Bruce Lawson

Around 2010, a lot of people began talking about what was wrong with Web Standards and how we might fix it. This led to, not a new standards body, but a joint statement of core principles by people involved at many levels: The Extensible Web Manifesto.

Since it was published in 2013 it has become a statement of “core principles” for all of the major standards bodies involved with the Web.

The Extensible Web Manifesto is a short document, but it comes from considerably more detailed discussions and a bigger vision. It’s a vision that says that the economics are broken.  Failure isn’t avoidable, it’s inevitable.  Experiments are necessary in order to get there from here.


Early electrical appliances plugged into light sockets!

As my friend Matt Griffin explains well, both in his A List Apart article The Future of the Web and in his documentary on the Web, The Future is Next – you can’t do it right until you’ve done it wrong.

History, of both the Web and physical standards, proves that an evolutionary result is inevitable.  When homes were first electrified, for example, it was for the purpose of artificial light.  There weren’t outlets – there was nothing to plug in.  Companies were battling over lights.  The result?  Early appliance inventors stepped up and filled the gap – they made cords that screwed into light sockets and birthed a whole new industry!

The Extensible Web Manifesto simply argues that while we’re busy arguing about light bulbs, the really amazing stuff is what you can do given electricity – and that we’ll very likely miss it.  It’s unanticipatable.  We will try, and we will fail.  All failures aren’t dead ends though.  Service Workers, for example, are the result of many failed experiments.

Some failure persists and only looks like failure for a time – the sum of the DNA, however, ultimately provides new possibilities far, far beyond any of our plans.  If you were busy trying to “design” the perfect canine, you’d never come up with a maned wolf.  Chances are you’ve never seen a maned wolf, since they only evolved in a certain environment in South America.  But they are amazing, and kind of a testament to the power of evolution to create something that survives.  Ultimately, we need things that survive in all of the environments, even the ones we aren’t thinking of – and to do that we need to be adaptable.


The maned wolf is real, and it is awesome.

So, experiments and failure to reach “standard” are actually good things – that’s how we get better, by exploring the edges and learning.  But the original Web plan made it the norm that experiments ship with browsers – out in the open, and usually very high-level. That led to serious problems of miscommunication, frustration and interoperability challenges.

Polyfills showed us a different way forward, though, by mixing what little DNA we had exposed to us to fill the gaps.  If you could polyfill a feature because a few browsers didn’t support it, you could just as easily fill it before any browser supported it.  Instead of proposing something that only works in a single browser, why not use the power of the Web to propose something that works in all browsers?  A prollyfill (will it become a standard? I dunno, prolly something like it).

Given lower-level DNA, we can experiment.  The Extensible Web Manifesto calls this DNA “fundamental primitives” and encourages standards to focus the majority of their efforts on them.  Sometimes this may mean introducing new ones, but there’s already a lot of rich DNA locked away within the existing higher-level APIs of the platform.  Exposing it means we have more raw materials and can prollyfill more and better experiments.  Beneath the existing features are all sorts of things that deal with network fetching, streaming, parsing, caching, layout, painting and so on.  Each of these is currently being ‘excavated’.
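To make the pattern concrete, here’s a minimal sketch of the prollyfill idea in plain JavaScript. The `last` getter is a hypothetical feature, invented purely for illustration (not a real proposal): feature-detect first, and only fill the gap where the platform doesn’t already provide it.

```javascript
// Prollyfill pattern sketch: propose a feature by implementing it everywhere.
// `Array.prototype.last` is a hypothetical feature, purely for illustration.
if (!('last' in Array.prototype)) {
  // Fill the gap only when no native implementation exists.
  Object.defineProperty(Array.prototype, 'last', {
    get() { return this[this.length - 1]; },
    configurable: true, // a future native implementation could replace this
  });
}

console.log([1, 2, 3].last); // 3
```

Because it works in every browser today, people can actually use it in real work – which is exactly what generates the feedback a proposal needs.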

The huge shift in economics that this could create is amazing.

In the mid-2000’s, a lot of people wanted something like flexbox.  It’s only now, in 2016, that we can really begin to get broad feedback from developers, who are largely just starting to see what they can use it for.  In all likelihood, they will find some faults and have some better ideas.  But if we could have given developers flexbox in a fashion that at least many of them could use to accomplish real things – that’s a good incentive to be involved. The feedback loop could be tightened up considerably, and it’s possible because even if it fails to become a standard, it still works to accomplish something.

Wait a minute.  Hold the presses.  Think about that for a moment:  Why do developers want to learn about standards?  To feel smart?  Shit no.  Developers want standards because they have work to get done.  A standard way is portable. “Being standard” means it’s had a lot of eyeballs and ultimately it winds up being “free”.  But if they can’t use a standard, they’ll use a library.  Why?  Because things have to get done.  Libraries have many of the same benefits, but not all.  A lot of people ask me “why didn’t we just standardize library X”.  The answer is generally simple: No library has been proposed as a standard, in a fashion compatible with standardization.  They’re usually too big, there are IP issues, and at the end of the day lots of people feel like they didn’t get a say.  But… If a proposal is delivered like a prollyfill that works everywhere, it’s roughly that – only in the right form!

What we need, then, is a way to incubate ideas, build prollyfills and somehow get lots of eyeballs, use and participation.  We need to see what sticks, and what can be better.  We need ideas to fail safely without breaking the economics of participation or breaking the Web.  And we need standards to act more like dictionary editors than startups.  I explain in the presentation that, in fact, most of their successes have been this and that the idea is not at all radical, but I’ll spare you that here.

A million voices cried out…

Meetup.com has 4 million people signed up who call themselves Web developers.  How can we involve even thousands of them the way that we do standards now?  The answer is simple: We can’t.  Discussing things on mailing lists while we wait forever for consensus and implementation doesn’t scale.  An incubator would need people helping the cream rise to the top – it needs networked communication.  Just as in networking, not all noise needs to be in all places.

Chapters.io is the answer (or at least the first attempt at an answer) to that problem.  Chapters is an effort to pair people who are involved with standards with meetups about the Web who can help them find, try, and discuss things that are in incubation (or proposed for incubation).  They provide a “safe” space for noisy and potentially less formal discussion.  Ideas can be collected, summarized and championed.

This is not a far flung dream: It is happening. The Extensible Web has helped shape ideas like the Web Incubator Community Group (WICG) which provides just such an outlet for incubation and the Houdini Task Force. Browser makers and standardistas are making proposals there, and we’re figuring out how to incubate them.  Good ideas find champions and move forward.  The WICG also provides a Discourse instance where developers can subscribe to and participate in a way a lot more plausible than a mailing list.  Very recently, the jQuery Foundation announced that the standards team will be helping to establish, manage and champion chapters.

So what do you say?  Are you in?  The future Web wants you.


Don’t be silent: Join the rebellion and help us organize a chapters.io near you.  Tweet interest to me @briankardell or open an issue like this with the jQuery standards team and we’ll see if we can help you get something started!  If you are in Pittsburgh, PA – join us!


Very special thanks to my friends the great Bruce Lawson and (the also great) Greg Rewis for proofreading this piece.



X-Web: Days of Future Past

If you’re not a comic book nerd, or a comic movie nerd, I suppose an explanation is necessary:  In the Marvel Universe, there is a genetic mutation in some humans called “the X-gene”.  This gene leads to development of an exotic protein which can radically affect other genes in unpredictable ways.  The result is mutants with all sorts of different “super” abilities.  This is the basis for the story lines in the X-Men comics.  Some humans, without the X-gene, see mutants as freaks; there is prejudice against them.  Others would like to recruit them as weapons.  Still others are friends.  The mutant community is divided too.  One group of mutants believes that they are a new Homo superior destined to become the dominant species, and they’re willing to earn this by fear and force.  Another group of mutants, the X-Men, led by Professor Xavier, believe that mutants and non-mutants should live in harmony.  Mutants’ powers can even aid humanity.  They work very hard for the advancement of peace and tolerance. And then there are quite a lot of people – both mutants and everyday humans – who are torn back and forth somewhere in the middle on a lot of the subtleties that arise. One of the most amazing parts of the narrative is that the leaders of the two groups are actually best friends.  They actually agree on much.  Most of their followers are not similarly friendly.  It’s kind of a hot mess, and it reminds me a little of the Web community.

I don’t think it should be controversial to say “we aren’t really where we’d like to be” on the Web.  For some reason there seems to be more division than we’d like.  Some would like to paint the future, or even the present, as being dark and desolate…

The future: a dark, desolate world. A world of war, suffering, loss on both sides. Mutants, and the humans who dared to help them, fighting an enemy we cannot defeat. – Professor Xavier

However, I don’t think that this is the case.  It strikes me that our debates are as much about the past as they are about the future.  Given this, it’s worth stopping for a moment and assessing where we are, considering how we got to this place, and whether (and how) we will adapt.

Are we destined down this path, destined to destroy ourselves like so many species before us? Or can we evolve fast enough to change ourselves… change our fate? Is the future truly set?- Professor Xavier

For a little over 20 years, we have attempted to develop the vocabulary and features of HTML via a standards body.  HTML was designed to give you the structure and function with a default look and feel.  Author provided CSS prettied up the presentation.  Because of this, with CSS disabled a user should, in theory, get a perfectly usable (if not very pretty) interface.  Both HTML and CSS had forward-compatible parsers which effectively just ignore the stuff they don’t understand.  All of this is designed to make HTML work on the greatest number of browsers.
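As a tiny illustration of that forward compatibility (the tag name below is made up): a parser that has never heard of an element doesn’t throw it away – it just treats it as a generic element and keeps going.

```html
<!-- An unknown element is treated as a generic element rather than an error,
     so its content still renders, and CSS/JS can still target it. -->
<made-up-element>This text renders even in browsers that predate the tag.</made-up-element>
```

The same tolerance applies to unknown attributes and unknown CSS declarations, which is what makes shipping new features to old browsers tractable at all.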

Our lexicon has indeed increased, though, admittedly, slowly.  Unfortunately, even after reaching ‘standard’, some of the authors don’t agree on the proper use and meaning of some elements – <main> is an example of this.  In other cases, things that made it through to the “standard” (recommendation) are implemented by no one (the document outline is an example of this), so writing markup assuming that is actually wrong.  In still other cases, even when elements pass standards and are widely implemented and experts agree on how they should be used, actual developers misunderstand and use them differently on an all too common basis – <address> and <article> are examples of this.  Some very common UI metaphors still have no native definition to use (tabs are an example of this).  Most native elements have some styling limitations; sometimes these are trivial, but other times not:  <select>, for example, is unstylable in non-trivial ways – it is incapable of doing multi-line options or including any text formatting inside them.  Some, like table-oriented elements, lack the semantics on their own (without ARIA attributes) to be meaningful enough for a screen reader to make sense of in a lot of cases, yet this is infrequently taught.

But there’s more… let’s go to the video…

In still other cases, combinations of these factors conspire against us, like the universe itself trying to keep us from success.  For example:  The vast majority of browsers have supported the <video> element for some time.  Someone following lots of conventional advice would think that markup like this would be a pretty safe bet.

<video crossorigin="anonymous" controls
  src="http://cdn.example.com/link/to/video.blah"
  poster="http://cdn.example.com/link/to/poster.jpg">
  <track kind="captions"
    label="English captions"
    src="http://cdn.example.com/link/to/captions.vtt"
    srclang="en">
  <a href="http://cdn.example.com/link/to/video.blah">
    Download my video of a cat playing with a grape.
  </a>
</video>
In theory at least, unsupporting browsers would show a link.  A user clicking it would be prompted to download the video, and then they could play it with their favorite media player (even a relatively recent OS would help them find one if they didn’t have one). Browsers that did support <video> would just go ahead and use it.  With no JavaScript and no author-provided CSS at all, a simple, declarative HTML document can give you a pretty valid experience on just about every browser ever made, and accessibility would be free.

But actually, that’s wrong.  In fact, regardless of whether you had a new browser or an old one in early 2016, your native <video> element would be inaccessible!  To one extent or another, it didn’t really “work”.  That is, if the user of a screen reader had an old browser, they could download the video, but not connect it to the captions files.

Ouch, that is suck.

But even if the user of a screen reader had a “modern” browser it was still potentially problematic: IE11 would fail to download the captions, regardless of whether you included the right CORS headers.

Sigh, IE.  Double Suck.

Even outside of that, however, the player in all browsers but Chrome was not keyboard accessible.

Triple suck.

Oh yeah, did I mention that you can’t style the controls of the player when it is embedded?

Quadruple suck.


We left out an important detail.  If this was really the only option, HTML 5 might still be lingering as something we ‘hoped’ to achieve.  The truth is that there are still commonly used HTML 5 bits that are not universally implemented even in modern browsers.  The implementation space is, was, and always will be ragged.  The aspect that launched a thousand ships with HTML uptake was the polyfill.  Key to the polyfill, of course, is JavaScript.  
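The polyfill pattern itself is simple enough to sketch: feature-detect, and patch only when the native implementation is missing.  A generic illustration (the feature chosen here is arbitrary, just to show the shape):

```javascript
// The basic polyfill pattern: detect a missing feature, patch it in.
// Native implementations, where present, are left untouched.
if (!String.prototype.startsWith) {
  String.prototype.startsWith = function (search, pos) {
    pos = pos || 0;
    return this.substring(pos, pos + search.length) === search;
  };
}

console.log("hypertext".startsWith("hyper")); // true
```

The same shape – test, then fill the gap – is what let authors write to the HTML 5 feature set years before every browser actually shipped it.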

If the Web had an “X-Gene” it would be JavaScript.

Yes, the X-Gene can give you powers that are seriously abused.  Yes, people might use their powers in not super ways.  They might even do something like try to drop a stadium on the White House…


But it can also give us powers to do good things, like, you know… save the world.  You might have noticed that there are a number of sites using the <video> element that are perfectly accessible when used with a modern browser, and styled when embedded as well.  That’s possible because of JavaScript.  JavaScript made a more accessible solution possible.  It’s a nice thing to say “It should work in really old browsers, even without JavaScript enabled,” but the truth is, that doesn’t match where we are in a lot of cases.  Saying “you don’t have to use a modern browser or have JavaScript enabled… unless you are disabled” seems like… I don’t know, kind of a dick move.

This is the reality of where we actually are as I see it.

Enter: Xavier’s School for Gifted Youngsters

In the comics, Professor Xavier runs a “School for Gifted Youngsters” – a place whose purpose, Marvel’s wiki says, is to “train young mutants in controlling their powers and help foster a friendly human-mutant relationship.”  What if we could learn to harness our powers for good?  What would it look like?  If, as in Days of Future Past, we could go back in time and apply those same lessons, what kind of difference could that make – and is it worth applying them now?

The past: a new and uncertain world. A world of endless possibilities and infinite outcomes. Countless choices define our fate: each choice, each moment, a moment in the ripple of time. Enough ripple, and you change the tide… for the future is never truly set. – Professor Xavier

The Extensible Web Manifesto stresses a duality of ideas: we want developers to write more declarative code, not less, and we want browsers to focus on low-level features.  Those things can seem at odds unless you understand the intent.  JavaScript is already a superpower.  Much of the Web as we know it could not rightly exist without it, no matter how much we’d like to pretend otherwise.  Without JavaScript, there are no polyfills.  Without JavaScript, there are unresolvable accessibility issues.  We might wish it weren’t so, but it is.

What good is spending years debating whether we should have a high-level element, followed by years of ragged implementation that we couldn’t patch, followed by years of unhappiness that it still doesn’t give us what we want?

Imagine then, that we could go back in time – low-level features exposed and the ability to mint custom elements.  Imagine, if you can, that this was part of the Web from Day One.  It’s a big ask of the imagination, I realize, but it requires no more suspension of disbelief than time travel or a genetic mutation that somehow grants its possessor control over the weather, and we find that entertaining, so try to follow along…

Would we have the vocabulary that we have today?  Doubtful.  We’d have addressed these issues with experience by now.  This is the ultimate goal of prioritizing the low-level features – so that we can actually figure out the high level in a sane way.  We want low-level features exposed because we want developers to help us define new vocabulary.  Would we have debates about whether you should count on JavaScript?  I don’t think so.  It can fail to download – but so can your page midway through, and that’s no good either.  There are ways we can reduce that likelihood and improve performance too, and we’re working on it.  It can have an error, but in truth, so can a Web browser’s native code.  The trouble is that we currently have no good, common way to find and evaluate high-quality custom elements.  However, if this became common practice – if it had been a thing since Day One – we’d have solved that by now, 25 years in.

Given a better vocabulary, things that currently require authors to write complex HTML, CSS and JavaScript can be simplified.  We can say more with less.

That’s good for authors.

That’s good for accessibility.

That’s good for performance.

That’s good for slow connections.

JavaScript is not the enemy, it’s the killer asset.  I’d also like authors to have to write less of it, by harnessing common, high-level, preferably declarative code.

This is the present I wish that we had.  Unfortunately, changing the past is only an option in fantasy.  What we can do, however, is learn the lessons and make today the new Day 0, and begin a new timeline.  Not disconnected from the past, but with a different path forward.

We have the powers, now let’s learn to use them really well.  What we really need is a Professor Xavier’s Home for Gifted Youngsters where we can go to find the quality X-Web.  This problem is the one I’m most interested in. wicg.io is a step in the right direction. More soon, I hope.


Prognostication & the failure of the Web


This awesome stock art depicting me predicting failure thanks to http://obliviate-stock.deviantart.com/

For almost as long as there has been a Web to write about, people have been prognosticating the death of the Web.  Mine is a different kind of prediction, but as I look into my crystal ball, I’m absolutely certain of it:  The Web will fail.   Let me tell you what I see through the glass…

A Story of Failure


Scott’s phonautograph on display in the Smithsonian

In school, most of us learn to associate the recording of sound with Thomas Edison who invented the phonograph in 1877.  What a lot of people don’t know, however, is that two decades before Edison, a man named Édouard-Léon Scott de Martinville invented, used and patented a means for recording sound waves.  The “phonautograph,”  as he called it, used vibrations from a brush to capture the sound waves from within a cone to make lines on soot covered paper as it passed along a rotating drum.

If you’ve never heard of the phonautograph, you’re probably thinking “wow… that’s amazing!?  How did one play it back?” Of course, that’s a natural question.  What could be more natural?  So you’ll probably be shocked to learn “you couldn’t”.

Let that sink in for a moment.  The man invented a sound recorder that couldn’t be played back.

Why would someone even invent such a thing?  To understand that, you’ll have to do some mental gymnastics and put yourself back in Scott’s world.  Until that time, people had been recording things for thousands of years – by writing them down.  Sound was an expression of thoughts from your brain, through your voice.  There were lots of alphabets, and even things like stenography (which Scott had studied) that made writing more efficient.  Words were made of phonemes.  In all these cases, by transforming the phonemes into an alphabet, our brains learned to make sense of these markings via our eyes, rather than our ears.

Scott just assumed that if he could manage to get the soundwaves “written down,” then people could learn to “read” soundwaves.  He spent a lot of time on this and, in terms of reaching his proposed destination, he failed kind of epically: you can’t read soundwaves with your eyes.

Except that, as we see from this vantage point, he didn’t.  While he couldn’t even see or conceive of “playback,” his work enabled others to see further.  As a side note – many, many years later, once we had computers that could be made to ‘read’ a sound wave, a team actually did just that and ‘played back‘ some of Scott’s recordings.  The quality is pretty terrible, as you might expect, but it is undeniable that he was successfully recording.

Our ability to plan the future

Our ability to see the future is, as we just saw, very tied to our place in time and our current perspective.  Very frequently, in trying to solve one problem, we are affecting another.  If we measure “failure” by intent to complete what we set out to do, it’s truly astonishing how much around us that we’d typically view as huge success was actually rooted in “failure”.  When he created sound recording and playback, Edison imagined that people would use it for correspondence.  Again, his world was a world of writing and mail.  He prognosticated that a fine and probably common use of his technology would be for someone to record a “letter” and pop it in the mail.  When Bell invented the telephone, he imagined that it might be used to pipe music from a performance to an audience remotely.  When people got their hands on both, they were basically used inversely.  Meanwhile, Lee de Forest, who helped pioneer AM radio, had imagined something so different that he sent a letter to the National Association of Broadcasters saying “What have you done with my child, the radio broadcast? You have debased this child, dressed him in rags of ragtime, tatters of jive and boogie-woogie.”  Ever used super glue?  That isn’t what its inventor was trying to invent.  Not remotely.

Have a look-see here…

That’s pretty relevant, because in many ways the Web today is hardly what Tim Berners-Lee was trying to accomplish at all.  Tim imagined that browsers would be equal parts viewing and authoring environments and that the Web would be a very two-way medium.  Instead, it has been, until recently, very one-way.  And it still isn’t a real editing environment as imagined.  Tim imagined annotations; we’ve still not got those, really.  In fact, some early browsers had them and took them out.  Tim didn’t initially see scripting, nor CSS, nor HTML being the content so much.  His initial use case was research, yet still, I can’t find most scientific research available in HTML format.  By the measure of accomplishing the goals of the time, the Web is kind of a failure… But is it?  No way.

Failure vs Death

The death of the Web would mean it was over, done.  But failure is something different, or at least it can be.  Failure can be good. It’s necessary even.  If you are capable of seeing or hearing this post, it’s only the result of millions of failures in DNA and the consequence of a process with no long-term vision at all – just an inherent bias toward not dying long enough to reproduce.

But the really interesting thing to note is that this is possible only because failures breed opportunity.  The entire universe (but especially humanity) seems very adept at finding opportunity and exploiting it.  The general trend then is usually, at least from the perspective of those viewing it, “up” as we capitalize on an increasing number of opportunities.

By these measures, the Web already is a spectacular failure.  Emphasis on the word “spectacular”.  It’s absolutely great in some ways, but not quite exactly as intended in the first draft.  Even Tim’s vision of what the Web could, should or might possibly be is affected by what’s all around.


Seeing the difference between the two (death and failure), however, kind of matters.  The reason that the Web isn’t going to die, in my opinion, is that it is learning this lesson.  All of the major Web standards bodies have “adopted” the Extensible Web Manifesto and they’re hard at work taking the stuff that works and exposing more opportunity DNA.  The W3C Technical Architecture Group (TAG) is helping to make sure that W3C projects keep this in mind and don’t go off track, with spec reviews and community- and working-group-coordinated efforts like The Extensible Web Report Card and task forces like Houdini.  Friends at the WHATWG are doing lots of hard work explaining nitty-gritty details and getting those hard last-mile agreements on ideas like Streams and Fetch.  ECMA provides prollyfills and transpilers before specs are finished and accepts feedback.  The W3C Web Incubator Community Group provides a place for lots of ideas from anywhere to take root, be discussed, prollyfilled and, frequently, usefully used before they take deep root in standards… and, importantly, to fail.  It offers value along the way and provides new opportunities, which begin to let us see the future more clearly.  Finally, the W3C Advisory Board (AB) is working on changes to the W3C Process which would require all of these things to be incubated.

I think that the Web has a bright and vibrant future ahead, full of beautiful failures.

Note: if you enjoyed the stories of failure in this piece, I’d highly recommend “How We Got to Now: Six Innovations That Made the Modern World” by Steven Johnson.


A Brief(ish) History of the Web Universe: Part IV New Hope(s)

It’s early 1994. AT&T has purchased Go and PenOS (see Part I) and now they are pulling the plug.  SmartSketch FutureSplash (again, Part I) won’t be released. Its makers Jonathan Gay and Charlie Jackson briefly try porting to the desktop, but there they would have to compete with well-funded and mature products and that isn’t practical.  Keep them in the back of your mind, we’ll come back to them.

For the past few years, Tim Berners-Lee’s “Web” concept has experienced increasing growth month over month thanks to a large number of contributing factors (see Part III). Still, its audience is comparatively almost microscopic by 2016 standards and mainly composed of academics and hard-core internet and markup enthusiasts. But, to put this into perspective: Just having a computer at home remains a pretty novel concept, even in America. Studies reveal that where they do exist they are frequently underused. While it’s true that from 1984 to 1994 the percentage of homes with a PC tripled, still only 1 in 4 US homes has a computer. Of those, many don’t have modems or any kind of internet access. Only about 2% of Americans have access to the internet from home. Of those, only a small fraction are using the Web.



Note carefully that that 2% represents all internet access – not just the Web.  Very many people are getting on the “internet” through an online service.  Just a few months ago, America Online began direct-mailing disks to people encouraging them to get online. It now has advertisers and 1 million members, many of whom are not the same crowd as the Web right now. The world is confused about the distinctions between the Internet, online services like AOL and the Web, but this isn’t all bad news for the Web: it introduces a whole lot of people to the idea of getting “online”, Moore’s law continues to lessen the price of computers, and more and more of them are coming with increasingly capable modems.  Even now, however, the Web itself (including work and home) accounts for only about 2.5% of Internet traffic.

Given the small size, the number of browsers that have sprung up is truly astonishing, and each has extended Tim’s original definition of HTML with their own ideas. A few people, like Dan Connolly, have been actively working on trying to hammer down agreements and a base standard. Over the past year however a new dominant factor has emerged: Mosaic.

Of those that are using the Web at this point, estimates are as high as 97% of them are now using the Mosaic browser (see Part III), which is available on many platforms and comparatively easy to install thanks to the hard work of folks like Marc Andreessen, Eric Bina, Chris Wilson, Jon Mittelhauser and Aleks Totic. Its name has become so synonymous with using the Web that the line between the two has blurred: Even people who are interested in the Web begin to ask if you’re “on Mosaic” instead of “on the Web”.

“Web Time…”

The next few months are almost a blur – so much happened so fast.

In March 1994, Silicon Graphics, Inc. (SGI) founder Jim Clark begins working on a business venture with Mosaic creator Marc Andreessen. They’ll hire up a lot of talent from both SGI and NCSA, and they’re working on some server products and, more importantly, a commercial browser built to kill Mosaic.  Initially the company is called “MCOM” (Mosaic Communications).  Internally, the browser project is called “Mozilla” (a name given to it by another co-founder, Jamie Zawinski, who would go on to write much of the Unix version of their 1.0 browser) or “Mosaic-killer”.  As you might imagine, the MCOM name was a problem, and so “Netscape” was born.  The very early Netscape website (while still in the mcom.com domain) even featured a “Mozilla” character originally created by employee Dave Titus.

Two months later in May, NCSA hands off licensing of Mosaic to Spyglass, Inc. – a commercial offshoot of University of Illinois at Urbana-Champaign built to monetize research there. Spyglass would license the Mosaic product for modification and distribution.  At least one of those licenses will come back and change things again very soon…

That same month, the first World Wide Web Conference is held, attended by 380 people from around the world – unsurprisingly, overwhelmingly made up of technically enthusiastic academics. Thanks in large part to Dan Connolly’s presentation entitled “Interoperability: Why Everyone Wins”, it launched the first very serious efforts behind creating an HTML standard. However, by this point, just what the baseline should be is very fuzzy: Almost no one stuck to Tim’s original specifications to begin with, and some aspects of them already seem defunct.  HTML has been re-spec’ed into an actual application of SGML, except still not quite really… Some browsers (even Tim’s now) have inline images, thanks to hard-pushing, pioneering work by Marc Andreessen. Some have tables, some have forms.  Dave Raggett has a proposal called HTML+ which contains a lot of this.  Further adding to the challenge, Tim Berners-Lee has also begun talks of starting a consortium, modeled after the X Consortium, designed to help the Web remain open, competitive and interoperable.

Meanwhile, Sun Microsystems’ Green Team (see Parts II and III) has been given a new lease on life by the success of Mosaic. They imagine a Web in which “write once, run anywhere” applications could be delivered regardless of the user’s operating system because of their Oak VM (later “Java”) – and their foot in the door will also be a Mosaic competitor: “WebRunner” (soon “HotJava”), which the team is getting increasingly excited about.

The summer was abuzz with talks of styling too.  With more authors creating content and finding new uses, a typewritten page seemed pretty insufficient.  SGML didn’t really have a single standard but it had helped create a small group of what appeared to be reasonably successful ideas and from these discussions there came a myriad of proposals.

Two months later, in July 1994, Dan Connolly presents HTML 2.0 at an IETF meeting in Toronto and an IETF working group is formed.  But in October, two key factors emerge: First, an agreement to start the World Wide Web Consortium (W3C) is signed, setting up the eventual future question of where the HTML standard will live and what the relevant roles will be.  Second, Håkon Wium Lie proposes “CSS”, which will become one of the first targets of the new W3C.

HTML 2.0 will eventually receive an IETF identifier: RFC 1866 (https://tools.ietf.org/html/rfc1866), formally published in November 1995.


There was a lot of speculation about how one might make money with the Web, but 1994 put some real ideas to the test.


On October 27, 1994, HotWired.com launched the first major banner ads.


It was the first real, modern(ish) attempt to apply the advertising model that had worked for TV, newspapers, magazines, etc – to the Web. If you’re using the Web in 2016 as you’re reading this, you’re probably aware that this is how the vast majority of Web content is funded, and it’s pretty debatable whether the Web would still exist in anything remotely resembling its current form without ads.  Why?  Because the Web had no payment or compensation model.  In fact, not only did it have no concept of how you could monetize, the insecure nature of HTTP, the protocol that made it possible, effectively made things like privacy, which are necessary for transactions, nearly impossible.

Licensing, Commerce and the Netscape Factor.

One could reasonably make the case that Netscape changed everything.  It was about this same time (late 1994) that Netscape began releasing beta versions of Netscape Navigator. Within just a few months of 1.0, it overtook Mosaic as the dominant browser. Very much like all browsers before it, Netscape innovated and added its own tags to HTML which are part of neither HTML 2.0 nor the new HTML 3.0 specifications (notably <font> and <center>). “What’s that,” you say, “3.0?! Where did that come from?!” As you might expect, history doesn’t stand still – things that weren’t included in the pending HTML 2.0 standard are being collated into a possible HTML 3.0 standard. As the now-dominant force, though, and with an ever-growing community being introduced to HTML without past knowledge and theory, Netscape’s extensions stuck – and this is critical: they actually made it possible to do things that everyone actually wanted.  Regardless of how it accomplished it, these things made the Web a lot more interesting.  A hack that works and can get used, as it turns out, is of considerably more practical value than theoretical purity that doesn’t.

As a commercial endeavor, Netscape realizes that they need to make e-commerce possible and the Web commercially viable. As plain text, HTTP wasn’t going to cut it. Netscape/Dr. Taher Elgamal’s SSL is the result, but before version 1.0 can ship it’s realized that there are serious concerns and so, a lot like it was with HTML, SSL 2.0 is the first one most people hear about. It is released in February 1995.

That same month the HTML 2.0 specifications are revised at IETF in hopes of actually getting something passed. Changes are mostly around MIME, encoding, an improved DTD and simple formatting.

By March 1995 Sun was giving some demos to people outside the company and “Java” was starting to make news.  The San Jose Mercury News ran a front-page piece entitled “Why Sun thinks Hot Java will give you a lift“.  Sun released Java for public download and in short order Sun’s T1 lines were so saturated that developers weren’t able to download it. The piece explains that browsers are dumb (not dumb to have, just not especially capable) and in it, Marc Andreessen is quoted giving Java praise.

Thinking that a browser that viewed HTML was “dumb” wasn’t new.  As explained in Part I, there were plenty of more capable hypermedia systems before HTML.  As explained in Part II, Tim didn’t expect HTML to be the whole thing; he just thought that the “dumb” parts would be the ones that connected everything.  As explained in Part III, others like Pei Wei had already shipped a browser capable of running embedded programs very much like applets. And Midas, developed at Stanford, had sort of pioneered the idea of plugins (it could display PostScript).  Again, this wasn’t shocking, but rather seemed exciting to Tim.  So, unbeknownst to developers at Sun, but certainly unsurprisingly: certain players at Netscape have also been thinking about the need for a less-dumb imperative language in the browser.

Netscape folks from SGI like Jim Clark and Kipp Hickman have been courting an enterprising engineer named Brendan Eich (also an SGI alumnus) with the lure of doing “Scheme in the browser”. You might think that this means that Brendan was an advanced Scheme developer, but that’s not the case at all. In fact, he had no practical professional experience with Scheme at all. Instead, Brendan was something of a language nerd and saw a lot of promising things in Scheme that were lacking in other languages, and he was interested in seeing what he could do. Quietly, and without fanfare, he left MicroUnity for Netscape in April 1995 – just as Netscape was releasing Navigator 1.1 which, as noted in Information Today, contained what would be a new bit of DNA:

Netscape Navigator 1.1 includes sophisticated new features such as the Netscape Client Application Programming Interface (NCAPI) for easy integration with third party applications, advanced layout capabilities for more visually compelling pages, dynamic document updating for changing information and enhanced security features.

– Information Today, April 1995

The plugin architecture, developed in large part by John Giannandrea (now working on Search / Deep AI at Google) spurred a lot of excitement: A number of companies quickly got to work, including Macromedia – the maker of the popular Director software (see Part I), Real Networks and Sun.

Unfortunately, by the time Brendan started, Marc Andreessen had also begun talks with Sun to license their Java Virtual Machine (VM) for redistribution within the Navigator browser, for embedding applications. Given this and some unfortunate headcount issues, Brendan is initially placed on the server team.

It’s May 1995 before Brendan is moved to the client team and begins work on a project code-named “Mocha”. Certain factions within Netscape are scheming for a possible alternative to Java. If your first reaction is “why?”, let me explain: People like Brendan and Andreessen realize Java is rather close to C++, and this is very unlike the Web so far, which is pretty easy for amateurs and beginners to get started with. Writing code in Java requires a developer with an understanding of compilation, classes, data types, a main method, packages and so on. Microsoft, by contrast, is making a lot of headway in expanding the number of people who can program, with Visual Basic. “The Web needs something like that”, they argued. For that kind of idea to really work, they decided that it would be necessary to be able to code right there in the page. It would have to be interpreted, not compiled. It would have to have simpler qualities.  In order to keep the project alive, get funding and not destroy other potentially lucrative avenues with the Sun agreement, they realize that Brendan will have to pretty quickly show that it is possible to do a language in the browser and that it isn’t redundant with Java.

While he hasn’t started anything, Brendan has been quietly thinking about this since before he came on: What kind of language would work?  Being a language buff, he knows the history of languages and bits about their theory.  He is drawn particularly to languages like Self.  Most programmers have heard references to Xerox PARC’s Smalltalk as being way ahead of its time and elegant, but certain people at Xerox PARC went on to work on better models, among them Self.  David Ungar pioneered the idea of reducing the number of concepts and maximizing their utility.  Self was prototypal, and that was appealing.  Brendan was also drawn to aspects of HyperTalk, the programming language available for the wildly successful HyperCard (see Part I), for inspiration on how you could put these things together in a way that let amateurs glue things together.

And so, in 10 days in May 1995, Brendan put together just enough: the core language parser, interpreter, decompiler, and a minimal standard library.  He integrated it via Lou Montulli’s protocol handler (Lou Montulli was another early Netscaper who also co-authored the Lynx browser).  Using these, Brendan created a demo in which you could use it as a protocol in the browser’s URL bar.  If you’ve ever done javascript:alert('hello world') or something similar, you’re using the same idea. Here, however, typing the protocol mocha: would launch a primitive frameset-based console.  Using this, Brendan was able to give a demo that would both generate enough interest and allay concerns of redundancy with Java.  It wouldn’t officially become “JavaScript” until Bill Joy of Sun signed the trademark license with Marc, Brendan and Rick Schell (Netscape VP of Engineering) later, on December 4, 1995.  For convenience, we’ll still refer to it as JavaScript.

A few more challenges for JavaScript:  First, neither the Netscape browser (nor really any browser, save perhaps Viola) had been written to have an embedded scripting language intertwined in the mix. As a result, Brendan compared trying to shoehorn this in in a rush to juggling with chainsaws.  Why the rush?  Well, that’s the second challenge: it would have to ship in the same release as Java (Netscape 2).  Here’s why: while the tech world is abuzz about both Netscape and Sun, yesterday’s talk about the rising power that was Microsoft has turned into speculation that Bill Gates has missed the boat.  Everyone knows that it’s just a matter of time before Microsoft leans in and attempts to stake a claim, and the best way to prevent that is to be too good and too well entrenched by the time it arrives.  People at Netscape realize that if they can’t get a firm grasp on the market, Microsoft will just eat them up.  That same month, Bill Gates sends a lengthy internal memo titled “The Internet Tidal Wave” setting precisely those priorities – Microsoft licenses Mosaic from Spyglass and “Internet Explorer” is conceived.

Netscape 2 also shipped with NPAPI – a plugin architecture that used the protocol handler to associate another, helper program for understanding, handling and (potentially) rendering other kinds of non-HTML content.

At the August SIGGRAPH (Special Interest Group on GRAPHics and Interactive Techniques) conference that year, Steve Jobs gave the keynote.  He had bought Pixar and they were doing incredibly interesting things – including working with Disney to produce the first full-length computer-animated feature: Toy Story.  But a lot of the talk was about the internet and the Web.  James Gosling of Java fame sat on a panel called “Set-Top Boxes – The Next Platform”, in which he said:

It’s madness out there… My personal guess is that there will never be ‘intelligent set top boxes’.
– James Gosling, SIGGRAPH 1995 (page 5)

Another panel was called “Visualizing the Internet: Putting the User in the Driver’s Seat” and it really centered a lot on “Wow, this WWW thing is really taking off and it kind of looks like shit.”  Jonathan Gay, who opened our piece, was there too and heard a lot of people saying that the internet needed a really good vector-based animation product.  Netscape’s NPAPI gave him the tool that he needed and his product was rebranded as “FutureSplash Animator” and the “FutureSplash plugin”.  Just start removing letters and you might sense where this is evolving before we get there in Part V.  Note: if you want to get a jump on it, it turns out that, after posting, I discovered that my friend Rick Waldron wrote a whole piece on exactly this subject about 16 years ago.

On August 9, 1995, Netscape made an initial public offering and the world, to put it mildly, went nuts.

Special thanks to friends like Brendan Eich, Chris Wilson and Simon St Laurent for helping fill gaps, stay honest and fix typos.  Especially Brendan, without whom this post would not have been possible on many levels.

TokenLists: Missing Web DNA

A long long time ago, in a browser far far away, Brendan Eich introduced what would become known as “DOM Level 0” – basically: Simple reflective properties that allowed you to access useful bits of what would later become “DOM” and twiddle with them.  It looked something like this…

document.forms[0].firstName.value = "Brian";

However, there is a long, complex and twisted history that led us to where we are today (see my A Brief(-ish) History of the Web Universe series of posts).  To sum up some key bits: CSS and the actual DOM were conceived of separately from thoughts of DOM0 and JavaScript.  Unlike its predecessor, the DOM was intended to be generic – in trying to bring a lot of people together and “fix” the Web, there was a lot of focus on trying to address the problems of SGML that made HTML so appealing in the first place – so we got DTDs for HTML and work began on all things XML.

The DOM was intended to serve all masters, and as such it dealt with basic attributes in a tree which could be serialized, manipulated, parsed and rewritten in any language with a common interface.  This meant that authors would use getAttribute(attributeName) and setAttribute(attributeName, value) to get and set attribute values respectively.  It seemed absolutely logical, then, to those spec writers to create an attribute called “class” and allow a user to type:

element.setAttribute("class", "intro")

This was problematic in the browser though, because DOM Level 0 was not only better known, but far more terse/convenient for authors who really just wanted to deal with reflective properties and type something like:

element.class = "intro";

What’s more, not all attributes were reflected back by properties.  Properties were just easier to deal with – they made sense for runtime state; attributes seemed to less so.  To resolve this (and the inconvenient fact that class is a reserved word in JavaScript, so element.class was a non-starter) we got the .className property, which reflects the class attribute.  Problem solved… Except, not.

Simple Enough?

CSS says that any element can specify 0…N classes, not 0 or 1, in a space separated list.  In SGML/XML terms these were “NMTokens”.  It sounds quite simple – a space separated list of values with some simple constraints should work everywhere, and it does… kind of.

In the browser world, however, where we were messing with classes at runtime all over the place that needed to be reflected back (CSS wasn’t based on runtime properties, it was based on attributes) we began facing issues.  Someone would come along and write code like the above example, which assumed that it was a single value:

element.className = "intro";

The net result being that any existing class names at the time of execution were replaced with just one.  Some other person would assume they wanted to toggle a value and write something like:

// Toggle the 'selected' class
element.className = (element.className === "selected") ? "" : "selected";

The problem being two-fold: first, it assumes the className could === a single value; second, it overwrites all the others.  We had problems adding classes, removing them, toggling them, finding out if the list contained something.  It sounds trivial but it turns out that it wasn’t: each time you wanted to touch the className you had to deal with deserializing the string, doing your work and re-serializing it without stepping on any of a number of landmines.  The net result was, as one might expect, we came up with libraries to help with this – however, they varied in quality and assumptions. It was still a mess.
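To make that concrete, here’s a rough sketch – mine, not from any particular library – of the deserialize/work/re-serialize dance that every one of those helpers had to reinvent:

```javascript
// Sketch of the pre-classList string juggling: every operation
// deserializes the className, does its work on the token array,
// and re-serializes.  (Illustrative; real libraries varied.)

function getTokens(className) {
  // Split on runs of whitespace, dropping empty strings.
  return className.split(/\s+/).filter(Boolean);
}

function addClass(className, token) {
  var tokens = getTokens(className);
  if (tokens.indexOf(token) === -1) tokens.push(token);
  return tokens.join(" ");
}

function removeClass(className, token) {
  return getTokens(className).filter(function (t) {
    return t !== token;
  }).join(" ");
}

function hasClass(className, token) {
  return getTokens(className).indexOf(token) !== -1;
}

function toggleClass(className, token) {
  return hasClass(className, token)
    ? removeClass(className, token)
    : addClass(className, token);
}
```

With helpers like these, element.className = addClass(element.className, "selected") preserves the other classes – exactly what the naive assignments above stomped on – but notice that every library had to get every one of those little details (leading spaces, duplicates, empty strings) right on its own.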

When jQuery joined the W3C after becoming the most widely used solution, they lobbied to improve this situation (disclaimer: I represent jQuery in several W3C groups).  It wasn’t long before we had the .classList interface.  The world is much better with .classList at our disposal – finally we can be rid of the above problem.  Now users can write:

element.classList.add("intro");
element.classList.remove("intro");
element.classList.toggle("selected");
element.classList.contains("selected");

It’s the missing interface developers always needed.

Problem Solved?

Sadly, I think not quite.  While it’s a major improvement, the trouble is that the NMTokens issue does not solely affect the class attribute, or even just JavaScript.  It’s quite possible that you are thinking “Well, this probably isn’t something I need to worry about because I’ve never come across it”.  However, I think you will, and that’s the problem.  With efforts like WAI-ARIA and others, there are other NMTokens issues that you’ve probably not thought about before but eventually will.  The aria-describedby attribute is one example – a control can be described by multiple elements for different purposes.  For example, an <input> element may have associated helpful advice that appears in a tooltip popup and associated constraint validation errors.  Further, it works a lot like the class attribute and has a similar challenge in JavaScript in that it frequently has to be actively maintained, not just written in markup, and that’s deceptively hard.  For example, an author should not associate an input with an errors collection until there are actually errors.  This sucks.  ARIA is hard enough without simple challenges like the one we faced prior to having .classList.

Good News and Bad

The good news is that standards makers had the foresight to create an interface for this type of problem called DOMTokenList with all the useful methods and properties that .classList exposes.  The .classList property holds a DOMTokenList.

The bad news is that it’s pretty much locked away and there’s no way to easily re-apply it to new things as they emerge.  We could continue to identify spec properties and create new things like .classList each time we find them.  For example, we could expose .ariaDescribedByList – and we might want to occasionally do that – but it’s not great.  It’s just additive.  Each time we do, the API of things to learn gets bigger, it also doesn’t expose these abilities to custom elements, and it doesn’t help with anything that isn’t specifically HTML (if you care about that sort of thing).

Alternatively, however, we could define a single new DOM method to expose any attribute this way.  This is actually pretty easy to do, requires minimal new API to learn and should reasonably work for everyone.  Jonathan Neal and I are providing a prollyfill for this (public domain) which allows people to ask for an attribute as a DOMTokenList and deal with it the same way they would .classList.  Because it’s a proposal, and any standard that ultimately arrives may differ, we’ve underscored the method name – here’s an example of its use…
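To give a flavor of the idea, here is my own toy sketch – not the actual prollyfill.  The underscored name _asTokenList mirrors the proposal, and a real implementation would return an actual live DOMTokenList bound to the element; this stand-in just reads and writes the attribute through a classList-like interface:

```javascript
// Toy sketch of the proposal's idea (NOT the actual prollyfill):
// ask for any attribute as a token list and use it like classList.
// A real implementation would hang this off Element and return a
// live DOMTokenList; here it's a plain function returning an object.

function _asTokenList(element, attr) {
  function read() {
    // Deserialize the attribute into an array of tokens.
    return (element.getAttribute(attr) || "").split(/\s+/).filter(Boolean);
  }
  function write(tokens) {
    // Re-serialize the tokens back onto the attribute.
    element.setAttribute(attr, tokens.join(" "));
  }
  return {
    contains: function (token) {
      return read().indexOf(token) !== -1;
    },
    add: function (token) {
      var tokens = read();
      if (tokens.indexOf(token) === -1) {
        tokens.push(token);
        write(tokens);
      }
    },
    remove: function (token) {
      write(read().filter(function (t) { return t !== token; }));
    },
    toggle: function (token) {
      if (this.contains(token)) {
        this.remove(token);
        return false;
      }
      this.add(token);
      return true;
    }
  };
}
```

So the ARIA case above becomes something like _asTokenList(input, "aria-describedby").add("errors") when (and only when) errors actually exist, and .remove("errors") once they’re resolved – no string juggling required (the “errors” id is, of course, just illustrative).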


In Extensible Web terms, this isn’t asking for new additive functionality at all – it is explaining existing magic that already exists, but lies mostly dormant and unexposed in the bowels of the platform.  Given this interface, the .classList property, for example, is then merely legacy sugar for its equivalent .asTokenList(attr) accessor (which needs no className-style ‘name’ distinction either, and deals with dasherized attributes just fine too):

element._asTokenList("class");  // effectively what .classList hands you
I’d like to know what you think about this proposal – please provide comments here, find me on twitter (@briankardell) or find the topic “asTokenList” on Specifiction and let us know.

Update: Given that this has now gotten some discussion I’ve created a repository for it so that individuals can send pull requests and track issues – from this new location I have also changed the name of the method based on feedback to _tokenListFor(attr).

Thanks to the many people who proofread, looked at demos, discussed or gave thoughts on this as it developed, including Jonathan Neal, Bruce Lawson, Mathias Bynens, Simon St Laurent, Jake Archibald, and Alice Boxhall.

A Brief(ish) History of the Web Universe: Part III The Early Web

Part I and Part II of this series attempt to set the creation of the Web, the first Web browser and early attempts to publicly share Sir Tim Berners-Lee’s idea of “the Web” into some historical context.  They attempt to illustrate that there were ideas and forks of ideas along the way, each considering different aspects of different problems, and that each had varying degrees of success. Some were popular and increasingly mature desktop products, others created standards upon which many products would be built.  Some were technically interesting but completely failed to take off, and some were academically fascinating but largely vapor.  All of these were still “in motion” at the time that the Web was being conceived, and it’s important to realize that they didn’t stop just because Tim said “wait, I’ve got it”.  Those posts also attempt to explain how Tim really wanted the Web to be read/write, wanted to tap into existing mature products, and which bits he imagined were most and least important.  I described the landscape of deployed hardware and technology at the time – the world was close, but not yet “ready” to be wired in richer form.  We weren’t connected by and large – in fact, the entire amount of information sent across the Internet monthly at that time would easily fit on a single sub-$100 (US) external USB hard-drive in 2016.  All of this helped shape the Web.

It would be an understatement to say that early on, the vast majority of folks in the mature bits of the industry didn’t really take the Web seriously yet.  As explained in previous posts, the authors of mature hypertext products turned down early opportunities to integrate the Web outright.  In 1992, Tim’s talk proposal for the Hypertext Conference was actually rejected.  Even in terms of new “Internet ideas” the Web didn’t seem like the popular winner. SGML enthusiasts by and large shunned it as a crude interpretation of a much better idea with little hope of success.  Gopher, which was created at almost exactly the same time, was gaining users far faster than this “World Wide Web”. What did happen for the Web, however, is that a huge percentage of the small number of early enthusiasts involved started building browsers… Meanwhile, existing ideas kept evolving independently, new ideas started emerging too, and these would continue to help shape and inspire.

1992-1993:  WWWWI

1992 saw the birth of a lot of browsers and little actual hypertext.  For perspective, by the end of 1992 there were seven web browsers which allowed users to surf the vast ocean of what was at the time only 22 known websites.  As each browser came online, it built on Tim’s original idea that the parser could just ignore tags it didn’t understand, and each attempted to “adjust” what we might call the political map of the Web’s features.  Each brought with it its own ideas, innovations, bugs and so on.  Effectively, this was the first “Browser War”, or what I’ll call “World Wide Web War I”.

There is one in particular worth calling out: ViolaWWW. It was created by a student named Pei Wei and it included innovations like stylesheets, tables, inline images, a scripting language and even the ability to embed small applications.  Remember that nearly all the popular non-Web, desktop hypertext and hypermedia products of the time had many of these features.  What made ViolaWWW different was that it was so much more than text.  Viola (not ViolaWWW) was an object-oriented programming language and a bytecode VM.  The ViolaWWW browser was just a VM application (though, it was the killer one that made most people care about it) – this made it possible to do all sorts of incredibly interesting things.


Screenshots of the Viola Web Browser courtesy of viola.org

Some people reading this in 2016 are likely simultaneously impressed and perhaps just a little horrified by the idea that attempts to “move away from” nice clean declarative, semantic markup with scripts and programs came so early on.  Tim must have been absolutely horrified, right?

Well, no, that doesn’t seem to be quite accurate – at least from what I read of the record.  History is a lot more nuanced than we frequently tend to present it.  Ideas are complex.  Understanding nuances can be hard, but I’d like to try.

As I described in Part II, Tim didn’t imagine HTML would be “the solution” for content but rather

I expected HTML to be the basic waft and weft of the Web but documents of all types: video, computer aided design, sound, animation and executable programs to be the colored threads that would contain much of the content. – Tim Berners-Lee in Weaving the Web

In fact, regarding Pei and Viola, he spoke in generally glowing terms in Weaving the Web and in numerous interviews.  In his address at CERN in 1998, upon accepting a fellowship, he said:

It’s worth saying that I feel a little embarrassed accepting a fellowship when there are people like Pei Wei …[who] read about the World Wide Web on a newsgroup somewhere and had some interesting software of his own; an interpreted language which could be moved across the NET and could talk to a screen.. in fact what he did was really ahead of his time.

As questions came in related to similar ideas on the www-talk mailing list Tim answered and explained a number of related concepts.  Here’s one that might capture his feelings at the time from May 1992 (Tim’s is the reply):

> I would like to know, whether anybody has extended WWW such, that it is possible to start arbitrary programs by hitting a button in a WWW browser.

Very good question. The problem is that of programming language. You need something really powerful, but at the same time ubiquitous. Remember a facet of the web is universal readership. There is no universal interptreted [sic] programming language. But there are some close tries. (lisp, sh). You also need something which can run in a very safe mode, to prevent virus attacks…. [It should be] public domain. A pre-compiled standard binary form would be cool too.  It isn’t here yet.

While I don’t know precisely what Tim was thinking in the early 1990’s, I do think it is worth noting the description, his use of the words “cool” and “yet”, as well as the absence of any sort of all caps/head exploding response.  In fact, if you wade through the archives, it turns out that a lot of early talk and efforts in 1991-1992 were already specifically surrounding this weird line between documents and applications, or HyperText and HyperMedia.  Traditional desktop system makers had recognized the gap and early Web enthusiasts did too.

The point of this observation is simple:  “We” didn’t really fully know what we were doing then, and in many ways we’re still figuring it out today... and that’s ok.

Even the inclusion of inline images brought new questions – MIME wasn’t a given, content-negotiation was still kind of a rough dream and requests were very expensive.

Others were starting to add things like <input> and annotation and comment systems and variable ‘slots’ into the markup for some rough idea of ‘templating’, to address the fact that a whole lot of sites, even then, would have to be concerned with repeating things.

Some people wanted to make servers really smart.  They saw the investment in a server as a thing which could query databases, convert document types on the fly, run business logic, etc.  These folks thought that modifying HTML would give them something akin to the IBM 3270 model which allowed the pushing of values and the pulling of page-based responses.  Others, like Pei, wanted to make clients smarter, or at least the cooperation between the two.  At some level, these are conversations nearly as old as computing itself and we’re still having them today.

Tim continues in that same post above to say that:

In reality, what we would be able to offer you real soon now with document format negotiation is the ability to return a document in some language for execution.

He mentions that, for example, the server might send either a shell script or a Viola script by way of negotiation – which he explains would “cover most Unix systems”. For Tim it seems (from my reading) that the first problem was that there wasn’t a standard language, the second was that it might not be safe, the third was that if there was one it should be in the public domain.  If you were on a Unix machine you’d be covered.  If you had Viola, you’d be covered.  If you had neither… well… it’s complicated.  But Viola had already begun tackling the safety issue too.

But even this early on, it didn’t seem to be a question of whether applications should be part of the Web, but more like how they should be.  Should it be one way, or maybe a few?

There was so much innovation and variance in browsers themselves that in December 1992 Tim sent an email entitled “Let’s keep the web together” simultaneously praising the wealth of ideas and innovations and stressing that they should work to begin the process of future standardization/alignment lest the Web fragment too far.

1993-1994: The Perfect Storm

When Marc Andreessen was shown Viola, it helped inspire him to start an effort at NCSA – at least that’s one story.

For all its value, Viola had what turned out to be a critical flaw: it was hard to get it working.  Despite all of its advantages, you had to install the runtime itself (the VM) and then run the browser in the runtime. Getting it set up and running proved problematic, even for a number of pretty technically savvy people. There were issues with permissions and bugs, and Pei was kind of alone in building and maintaining it.

But this wasn’t unique.  Browsing the Web in 1993 was still a kind of painful endeavor to get going. The line mode browser was placed into the public domain in May 1993. There wasn’t a terrible lot there to browse either – even by the end of that year there were only 130 known websites. Setup was difficult and things were generally very slow.  Even finding something on that small number of sites was hard – forget searching Google, even Yahoo wasn’t a thing yet. Even if you could get something working with one browser, chances were pretty decent that you might come across something for which yours wasn’t the right browser to get the whole experience.

Marc Andreessen was the first one to start treating the browser like a modern product – really taking in feedback and smoothing out the bumps.  The team at NCSA quickly ported his UNIX work to Mac and PC; there was even one for the Commodore Amiga.  Mosaic was a good browser, but feature-wise it probably wasn’t even the best.  Aside from ViolaWWW’s notable work, some browsers had already pioneered forms, for example, and Mosaic initially didn’t support forms.  But there was at least one thing they nailed: they created an easy to install and set up browser on many platforms and drove the barrier to entry way down.

Moore’s Law had finally created faster and cheaper machines, with modems entering a more mainstream market; some notable regulation changes happened; and when the makers of Gopher announced that maybe, just possibly, in some circumstances you might be charged a very small fee, Tim convinced CERN to make a statement that the Web wouldn’t do that.

In other words: When Mosaic was released publicly in late 1993 “free for non-commercial use” it was in the midst of the perfect storm.  Timing matters.

Suddenly the Web really started to hit and growth began to really explode.  By 1994, there were an estimated 2800 sites.  Regular People™ were being introduced to the Web for the first time and they were using Mosaic.  After only roughly a year since being placed in the public domain, it is estimated that less than 2% of Web users (still a comparatively small total by today’s measures) were getting around using the line mode browser. In April of that year, James Clark and Marc Andreessen established what would become Netscape.

Previous efforts to standardize didn’t get there – it wasn’t until November 28, 1994 that the draft of what would become RFC 1866 was finally sent to the IETF to begin to create an HTML “standard”, which we’ll talk about in Part IV, but it wouldn’t ultimately arrive for another year, and then in debatable form.

The VM

Pei wasn’t the only one thinking about a runtime VM; Sun Microsystems was too.  As described in Part II, Project Green spawned Oak, thinking they’d found the next big market in consumer devices with embedded systems.  By 1992 they had a working demo for set-top boxes which created a read/write interactive internet-like experience for television – MovieWood – which they had hoped to sell to cable companies.  But the cable companies didn’t bite.

This confluence of timing and ideas left the Green Team at Sun wondering what to do with the rest of their lives and their ideas.  Over the course of 3 days they discussed the success of Mosaic and decided their future.  As James Gosling would later describe:

Mosaic… revolutionized people’s perceptions. The Internet was being transformed into exactly the network that we had been trying to convince the cable companies they ought to be building. All the stuff we had wanted to do, in generalities, fit perfectly with the way applications were written, delivered, and used on the Internet. It was just an incredible accident. And it was patently obvious that the Internet and Java were a match made in heaven.

So the Sun team went back to the drawing board to build a browser like Mosaic – but in Java (which was a lot like Pei’s approach) and in 1994 they had a demo called “WebRunner”, which would later be renamed “HotJava™”. Within just a few months everyone, not just techies, was going crazy imagining the possibilities.

Nearly everyone seemed to think that this would change the world. As we’ll see in Part IV, they might have been right, but not how they thought….

A Brief(ish) History of The Web Universe: Part II Time

Part II of “A Brief(ish) History of the Web Universe” aka “The Boring Posts”.  No themes, no punch, just history that I hope I can use to help explain where my own perspectives on a whole bunch of things draw from…

Tim Berners-Lee was working at CERN which was, by most measures, pretty large. Budgets and official policies were, as they are in many large organizations, pretty rigid and a little bureaucratic.  CERN was about particle physics, not funding Tim’s idea.  More than that, many didn’t recognize the value of lots of things which were actually necessary in some way.

The paradox of the Web was that this very very hard problem to connect heterogenous information from heterogeneous computers on heterogenous networks all over the world — a very very hard problem, was solved by a small, non-official, open approach by a team with no resources, or, practically none. – Ben Segal from CERN who brought in TCP/IP ‘under the radar’

In 1989, despite TCP/IP being by that point actually necessary for CERN to function, a memo went out reminding everyone that it was “unsupported”.  A lot of the best things in history turn out to effectively have been people of good will working together outside the system to get things done that needed getting done.

So, in September 1990, Mike Sendall, Tim Berners-Lee’s boss at CERN, found a way to give Tim the approval to develop his idea in a way that many good bosses through history tend to: while no one wanted to fund development of “Tim’s idea,” there was some interest in Steve Jobs’ new NeXT computer (which it appears was also brought in initially despite, rather than as part of, CERN policy and plans).  And so, under the guise of testing the capabilities/suitability of the NeXT computer for development at CERN, in 1990 Tim would be able to create a prototype browser and server.

The NeXT had great tools for developing GUI applications and Tim was able to build a pretty nice prototype GUI with read-write facilities pretty quickly.  It let him figure out precisely what he was proposing.


A screenshot of the NeXT browser from 1993, courtesy of CERN – it looked very similar in 1991.

As explained in Part I of this history, there was already a lot going on, standardized or in place by then – for example SGML.  Because of this, CERN, somewhat unsurprisingly, already had a bunch of SGML documents in the form of a thing called “SGMLGuid” (sometimes just GUID).  Unfortunately, the earliest capture of this I can find is from 1992, but here’s what SGMLGuid looked like.

SGML itself had gotten quite complex as it tackled ever more problems, and Tim didn’t really know SGML.  But he saw the clear value in having a language that was at least familiar looking for SGML authors, and of having some existing corpus of information.  As he said later:

Who would bother to install a client if there wasn’t already exciting information on the Web? Getting out of this chicken and egg situation was the task before us….

Thus, he initially started with a kind of subset of ~15 GUID tags (plus the critical <a> tag for expressing hyperlinks, which was Tim’s own creation and at the very core of the idea – an exact number is hard to say because the earliest document on this isn’t until 1992).  As explained on w3.org’s origins page:

The initial HTML parser ignored tags which it did not understand, and ignored attributes which it did not understand from the CERN-SGML tags. As a result, existing SGML documents could be made HTML documents by changing the filename from xxx.sgml to xxx.html. The earliest known HTML document is dated 3 December 1990:

There was not a lot of discussion of this at <a href=Introduction.html>ECHT90</a>, but there seem to be two leads:
<li><a href=People.html#newcombe>Steve newcombe’s</a> and Goldfarber’s “Hytime” committee
looking into SGML, and
<li>An ISO working group known as MHEG, “Multimedia/HyperText Expert Group”.
led by one Francis Kretz (Thompsa SA? Rennes?).

There’s a lot of history hiding out in this first surviving HTML actually.

First, note that this and many others weren’t “correct” documents by many counts we’d think of today: there was no doctype, no <html> element, no <head>, <title> or <body> etc.  There actually wasn’t an HTML standard yet at that point, so at some level it’s kind of amazing how recognizable it remains today – not just to humans, but to browsers.  Your browser will display that page just fine.  HTML, as it was being defined, however, also wasn’t necessarily valid SGML.  The W3C site points out that the final closing tag is an error (it transposes the letters).

More interestingly still for purposes here, I’d like to note that the very first surviving HTML document was quite literally about HyperMedia. It’s part of notes from what they called the hypertext conference and, unsurprisingly, that is what everyone was talking about.  To understand why I think this matters, let’s rewind just a little and tie in some things from Part I with some things that weren’t….

The Timing of the Web

Is that title about Tim or Time?  Both.

Remember VideoWorks/Director from Part I?  It illustrated that perhaps the concept of Time was really important to hypermedia.  However, they weren’t the only ones to see this. In fact, even as early as 1984 people were already seeing a gap between documents and hypermedia/multimedia and talking about how to solve it. It turns out SGML, the reigning standard approach of the time for documents as markup, actually wasn’t quite suited to the task. It needed revision.

So, Goldfarb (original GML creator, presumably mistyped above) went back to the drawing board with some more people to figure out how to apply it to a musical score. What they came up with was called SMDL (Standard Music Description Language), an effort that was ANSI approved in late 1985.  However, no practical demonstration of SMDL was completed until 1990 and as part of a Master’s thesis rather than a product (this dissonance over what makes a “standard” appears and reappears over and over in standards history).

It’s key though, because you could definitely say that by the mid-late 80’s it was becoming obvious to many that the problem of time and linking objects in time was a more generalized problem.  Video, for example, might have been a neat thing on the desktop, but don’t forget that in the 1980’s, cable television was spreading along with computers and multimedia — and much faster. By this time, a number of folks were beginning to imagine something like “interactive TV” as the potentially Next Really Big Thing (even before the Web).  Sun Microsystems established a group, “Green”, to figure out the next big thing; they thought it would be interactive consumer electronics (like interactive TVs).

And so in 1989, just about the time Tim was putting together his ideas on the Web, the grander problem of Time/SGML was moved out of SMDL into a new ANSI project known as “HyTime” (Hypermedia/Time-based Structuring Language), which had a lot of key players and support from major businesses.

It really looked like maybe it was going somewhere.  Remember Ted Nelson from Part I?  In 1988, AutoDesk had decided to fund him directly and commercialize his ideas which had become known as Project Xanadu. An AutoDesk press release said:

In 1964, Xanadu was a dream in a single mind. In 1980, it was the shared goal of a small group of brilliant technologists. By 1989, it will be a product. And by 1995 it will begin to change the world.

Nelson/Autodesk were some of the big names on that HyTime committee.  Ironically, I think they got the years pretty close, but the technology wrong.

At approximately the same time, MPEG (Moving Pictures Experts Group) and MHEG (Multimedia and Hypertext Experts Group – also mentioned in that initial post above) were established.  MHEG’s focus, like a lot of other things, included hypermedia documents, but unlike SGML it required an MHEG engine – basically, a VM.  The files they’d trade would be compiled binary rather than text-based. While they were authorable as documents, they were documents about interactive objects.

And so this is what people were talking about at the conference which Tim was summarizing in that early surviving HTML document.  Both HyTime and MHEG were already thinking about how to standardize this quality, in part because there was a lot of media – and an interesting thing about media is that people were building multimedia applications with it.


So the world around him was moving forward and there were lots of interesting ideas on all fronts. Tim had a prototype in hand.  HTML as understood by the NeXT had no forms, no tables; you couldn’t even nest things – it was flat.  Not only did it have no CSS, it had no colors (his screen was black and white).  But, for the most part, many of his tags were simple formatting.  You can debate that an H1 is semantic, but in Tim’s interface it was under styling.  That is, you could “style” things as an H1, more or less WYSIWYG style, and the editor would flatten it all out in serializing markup.

Tim imagined (and has repeated since) that the most important thing was the URI, then HTTP, then stuff like HTML and later CSS.  URIs, in theory, can work for anything as long as you have a concept of a file that is addressable.  HTTP was built with a feature called ‘content type negotiation’ which allows the sender to say what it’s prepared to handle and the server to give back something appropriate.  As Tim explains this feature in Weaving the Web:

In general … the client and server have to agree on the format of data they both will understand.  If they both knew WordPerfect for example, they would swap WordPerfect documents directly.  If not, they could try to translate to HTML as a default.

So the weird intricacies of HTML or things above weren’t drastically important at the time, because Tim didn’t imagine HTML would be for everything. In fact, to help address his chicken and egg problem described above, Tim just made his browser take URIs for some popular existing protocols like NNTP, Gopher and WAIS and auto-translate them to HTML.  But perhaps even this is over-simplifying just a bit – as he also explained:

I expected HTML to be the basic waft and weft of the Web but documents of all types: video, computer aided design, sound, animation and executable programs to be the colored threads that would contain much of the content.  It would turn out that HTML would become amazingly popular for the content as well…

It would turn out…

One of the most interesting things about invention is the stuff that the inventor didn’t expect people would do with it.  It would turn out that HTML would become really popular for content for a number of reasons.  One reason, undoubtedly, is that the simplest thing to do is simply to provide HTML in the first place with no alternatives.  More importantly, perhaps,  to re-iterate the point from part I: The line between documents and ‘more than documents’ was clearly fuzzy.

To illustrate: even with the NeXT browser “in hand”, it was very hard to show people value. Very few people had a NeXT, even at CERN – after all, it was a pilot for establishing whether the new-fangled machines would be useful.  Lugging it around only went so far.  There was a new project at CERN to provide a directory, and Tim and early partners like Robert Cailliau convinced CERN to publish the directory via the Web.


Mac HyperCard address book application thanks to http://www.atarimagazines.com

This is interesting because address book applications were something that a lot of the modern computers of the time had, but a phonebook was a bunch of printed pages.  Who wouldn’t have liked that application?  It might have been “easy” to create a nice HyperCard stack and auto-transform it to HTML based on content type negotiation – but which part was document and which part was application?  It was actually much easier to just deliver HTML, which could be generated any number of ways – and given the digital expectations of the day, on the machines they were using, that was just fine.  Thus, the simple line mode browser that made the fewest assumptions possible was born as something that could be distributed to all the CERN machines (and all the world’s machines – more on this below).

The line mode browser was, frankly, boring.  It was wildly inferior to the NeXT interface which was itself wildly inferior to something like OWL’s Guide.    But it worked, and as usual, that matters.  Let me repeat that:  Shipping something useful matters.  

If you’ve never heard of Maslow’s Hammer, you’re probably at least familiar with the software version of it:  “If the only tool you have is a hammer, everything looks like a nail.”  Usually when we say it we’re trying to say “use the right tool for the job”.  However, there’s a corollary there that is just as true and often goes unnoticed:  if all someone has is a butter knife, it turns out they can suddenly screw in some kinds of screws.

It would also turn out that that’s not entirely a bad thing:  If you need to unclog something, a butter knife works.  If you need a lever to lift something small in a tight spot, a butter knife works in a pinch.  If you need a paperweight on a windy day, guess what turns out to work pretty well? Perhaps that wasn’t the butter knife’s original intent, but it’s true all the same. And guess what else turns out to be true?  A butter knife and some other things were probably an “almost” approximation for some tool that didn’t yet exist.  What’s more, having a few of those “almost” tools frequently helps inspire something better.  Steven Johnson calls this “the adjacent possible” in his TED Talk “Where good ideas come from” and I think it’s as true of the Web as it is of anything.

Adjacent Possibles

However it came about, it turns out as well that the line mode browser was kind of perfect in time for a number of reasons.  To keep things in perspective, this was 1990.  While computers were starting to catch on, in 1990 they were still very expensive.  As deployed, many of them didn’t even have OSes with anything remotely like what we would call graphical UIs yet.  Of those that did, few even had modems.  And of those with modems, many still connected at 1200 or 2400 baud.  We weren’t a connected world yet, nor even a fully graphical one.  Those who did connect were often doing so through large, expensive and frequently outdated systems which had been a really big investment years before.

Because of this, what the line mode browser definitely did was allow Tim and others to demonstrate the Web to the people who would, in short order, start writing the modern browsers with GUIs and increasingly recognizable features – and to keep a small but steady stream of new potential enthusiasts checking it out.  Sadly perhaps, another thing it did was omit the authoring piece that was present on the NeXT machine, setting in motion a trend where people perceived the Web as a way to consume rather than publish and contribute, and likely putting off a greater focus on authoring HTML.  “Sadly perhaps,” but then again, perhaps that’s precisely what was necessary in order for it to mature.  It’s hard to say in retrospect.

With a few new enthusiasts, in 1991 he created a mailing list: www-talk.  For a while a very, very small but steadily growing group of people discussed the early “Web” they were trying to build.  As more people came into the group they wanted more and different things – it should be more like HyTime, links should really work differently, it should actually be SGML rather than just “inspired by” or “look like” it and so on.

What happened next just keeps getting more interesting.  Continue to Part III: The Early Web.




A Brief(ish) History of the Web Universe – Part I: The Pre-Web

There are a couple of posts that I’ve been wanting to write, but in each of them I keep finding myself wanting to talk about historical context.  Explaining it in place turns out to be too hard and I’ve been unable to find a resource that unifies the bits I want to talk about.  I’ve decided, then, that it might be easier to write this history separately, mostly so that I can refer back to it.  So here it is, “A Brief(ish) History of the Web Universe” aka “The Boring Posts” in a few parts.  No themes, no punch, just history that I hope I can use to help explain where my own perspectives on a whole bunch of things draw from…


Businesses, knowledge, government and correspondence  were, for literally hundreds of years, built on paper documents.   How did we get from that world to this?  And how has that world, and our path, helped shape this one?  I’m particularly interested in the question of whether some of those implications are good or bad – what we can learn from the past in order to improve on our future or understand our present.  So how did we get here from there?

Arguably the first important step was industrialization.  This changed the game in transforming the size of challenges and created new needs for efficiency.  This gave rise to the need for increasing agreement beginning with standards around physical manufacture – first locally, and then nationally around 1916.  World War II placed intense pressures and funded huge research budgets and international cooperation.  A whole lot of important things shook out in the 1940s and each developed kind of independently. I won’t go into them much here except to note a few key points to help set the mental stage of what the world was like going into the story.

The word “computer” in anything resembling really modern terms wasn’t even a thing until 1946.

The First Digital Computer – ENIAC “For a decade, until a 1955 lightning strike, ENIAC may have run more calculations than all mankind had done up to that point.” from computerhistory.org

In 1947 ISO, the International Organization for Standardization, was founded.  That same year, the transistor was invented at Bell Labs.  In the late 1940s the Berlin Airlift transported over 2.3 million tons of food, fuel and supplies by developing a “standard form” document that could be transmitted over just about any medium – including, for example, by telegraph.  Later this basic technique would become “EDI” (Electronic Data Interchange) and become the standard for commercial shipping and trade at scale, but it required very tight agreement and coordination on standard business documents and processes.

Transistors revolutionized things, but the silicon chips which really began the revolution weren’t yet a thing.  Intel, which pioneered and popularized them, wouldn’t even be founded until 1968.

During these interim decades, the number of people exposed to the idea of computers began, very slowly, to expand – and that gets pretty interesting because we start to see some interesting forks in the road…

1960’s Interchange, SGML and HyperStuff

In the mid 1960’s Ted Nelson noted the flaw with the historical paper model:

Systems of paper have grave limitations for either organizing or presenting ideas. A book is never perfectly suited to the reader; one reader is bored, another confused by the same pages. No system of paper– book or programmed text– can adapt very far to the interests or needs of a particular reader or student.

He imagined a very new possibility with computers, first in scholarly papers and then in books. He imagined this as an aid to authors as well, one which he explained could evolve a work from random notes to outlines to advanced works.  He had a big vision.  In his description, he coined three important terms: Hypertext, Hypermedia (originally Hyperfilm) and Hyperlink.  For years, the terms Hypertext and Hypermedia would cause some problems (some – including, it seems to me, Nelson – considered media as part of the text because it was serialized, while others considered text a subset of media). But this was all way ahead of its time. While it was going down, the price-point and capabilities just weren’t really there. As he described in the same paper:

The costs are now down considerably. A small computer with mass memory and video-type display now costs $37,000;

Another big aspect of his idea was unifying some underlying systems about files.  Early computers just didn’t agree on anything.  There weren’t standard chipsets much less standard file types, programs, protocols, etc.


In 1969, in this early world of incompatibility, three men at IBM (Goldfarb, Mosher and Lorie) worked on the idea of using markup documents to allow machines to trade and deal with a simple understanding of “documents”, upon which they could specialize understanding, storage, retrieval or processing.  It was called GML after their initials, though later this stood for “Generalized Markup Language”.  It wasn’t especially efficient.  It had nothing to do with HyperAnything, nor even EDI in a technical sense.  But it was comparatively easy to get enough rough agreement around, and flexible enough, to actually achieve real things.  For example, you could send a GML document to a printer and define separately how precisely it would print.  Here’s what it looked like:

:h1.Chapter 1:  Introduction
   :p.GML supported hierarchical containers, such as
   :ol
   :li.Ordered lists (like this one),
   :li.Unordered lists, and
   :li.Definition lists
   :eol.
   as well as simple structures.
   :p.Markup minimization (later generalized and formalized in SGML),
   allowed the end-tags to be omitted for the "h1" and "p" elements.

But GML was actually a script – the tags indicated macros which could be implemented differently.  Over time, GML would inspire a lot, get pretty complicated and philosophical about its declarative nature, and eventually become SGML (Standard Generalized Markup Language).  This would continue growing in popularity – especially in the print industry.

The changing landscape

For the next decade, computers got faster, cheaper, and smaller and more popular in business, science and academia and these all matured and evolved on their own.

Networks were arising too and the need for standardization there seemed obvious.  For about 10 years there was an attempt to create a standard network stack in international committees, but it was cumbersome, complex and not really getting there.  Then, late in this process, Vint Cerf left the committee.  He led work focused on rough consensus and running code for the protocol and, in very short order, the basic Internet was born.

Around this same time, a hypertext system based on Nelson’s ideas, called “Guide”, was created by Peter Brown at the University of Kent for Unix workstations.

Rise of the Computers

In the 1980’s Macs and PCs, while still expensive, were finally becoming affordable enough that some regular people could hope to purchase them.

Return of the HyperStuff

Guide was commercially sold, ported to the Mac and PC, and later very much improved on by a company called OWL (Office Workstations Limited), led by a man named Ian Ritchie.  It introduced one of the first real hypertext systems to desktops in 1985.  In 1986 Guide won a British Computer Society award. Remember “Guide” and “Ian Ritchie” because they’re going to come back up.

Two years later, people were really starting to take notice of “HyperText” (and beginning to get a little over-generalized with the term – frequently this was really becoming “HyperMedia”).  In 1987, an application called “HyperCard” was introduced and the authors convinced Apple to give it away free to all Mac users.  It was kind of a game changer.

HyperCard was a lot like Guide in many ways, but with a few important differences we’ll come back to.  The world of HyperCard was built of “decks” – each deck a stack of “cards” full of GUI: forms, animations, information and interactions which could be linked together and scripted to do all sorts of useful things.  Cards had an innate “order” and could link to and play other media; the state of things in a card/deck was achieved through scripting.  They were bitmap based in presentation and cleverly scalable.

Screenshot of VideoWorks and the “score”

That same year, in 1987, a product called VideoWorks by a company named Macromind was released.  In fact, if you got a Mac, you saw it because it was used to create the interactive guided tour that shipped with it.   You could purchase it for your own authorship.

One interesting aspect of VideoWorks was its emphasis on time.  Time is kind of an important quality of non-static media, so if you’re interested in the superset of hypermedia, you need the concept of time.  Thus, the makers of VideoWorks included a timeline upon which things were ‘scored’.  With this HyperMedia, authors could allow a user to move back and forth in the score at will.  This was kind of slick, it made sense to a lot of people and it caught on.  Their product later became “Director” and it became a staple for producing high-end, wow’ing multimedia content on the desktop, CD-ROMs for kiosks and so on.

By the late 1980’s, OWL’s Guide had really come to focus on the PC version.  HyperCard was free on the Mac, and as OWL’s Ian Ritchie would say later…

You can compete on a lot of things, but it’s hard to compete with free…

The emergence of an increasingly fuzzy line…

Note that most of the existing systems of this time actually dealt, in a way, with hypermedia, in the sense that they didn’t hew so closely to the fundamental primitive idea based on paper.  All of them could be scripted and, as you might imagine, applications were a natural outcropping.  The smash game Myst was actually just a really nice and super advanced HyperCard stack!

The Stage is Set…

The Internet was there.  It was available – in use, even – but it was pretty basic and kind of cumbersome.  Many people who used the internet perhaps didn’t even realize what the internet really was – they were doing so through an online service like CompuServe or Prodigy.

But once again, as new doors were opened, some saw new promise.

The Early 1990s

I’ve already mentioned that SGML had really nothing at first to do with HyperMedia, but here’s where it comes back in.  The mindset around HyperCard was stacks and cards.  The mindset around VideoWorks/Director was Video.  The mindset of OWL was documents.  SGML was a mature thing and having something to do with SGML was kind of a business advantage.

Guide had SGML-based documents.  More than that, it called these “Hypertext Documents” and their SGML definition was called “HyperText Markup Language” (which they abbreviated HML) and it could deliver them over networks.  Wow.  Just WOW, right?  Have you even heard the term “Web” yet?  No.

But wait – there’s more!  Looking at what else was going on, OWL had advanced Guide considerably on the PC, and by now it had integrated sound, video and advanced scripting ability too.  While it was “based” on documents, it was so much more.  What’s more, while all of this was possible, it was hidden from the average author – it had a really nice GUI that allowed both creation and use. That it was SGML underneath was, to many, a minor point or even a mystery.

Web Conception

This is the world into which HTML was conceived.  I say conceived rather than “born” or “built” because Tim had been working out his idea for a few years and refining it.  He talked to anyone who would listen about a global, decentralized, read-write system that would really change the Internet.  He had worked out the idea of an identifier, a markup language and a protocol, but that was about it.

And here’s the really interesting bit:  He didn’t really want to develop it, he wanted someone else to.  As Tim has explained it several times…

There were several commercial hypertext editors and I thought we could just add some internet code, so that the hypertext documents could then be sent over the internet. I thought the companies engaged in the then fringe field of hypertext would immediately grasp the possibilities of the web.

Screenshot of OWL’s Guide in 1990

Remember OWL and Guide? Tim thought they would be the perfect vehicle; they were his first choice.  So, in November 1990, when Ian Ritchie took Guide to a trade show in Versailles to show off its HyperMedia, Tim approached him and tried hard to convince him to make OWL the browser of the Web.  However, as Tim notes, “Unfortunately, their reaction was quite the opposite…”
Note that nearly all of the applications discussed thus far, including OWL’s, were commercial endeavors.  In 1986, authors who wanted to use OWL to publish bought a license for about $500 and then viewers licensed readers for about $100.  To keep this in perspective, in adjusted dollars this is roughly $1,063 for writers and $204 for readers.  This is just how software was done, but it’s kind of different from the open source ethos of the Internet.  A lot of people initially assumed that making browsers would be a profitable endeavor.  In fact, later Marc Andreessen would make a fortune on the name Netscape in part because there was an assumption that people would buy it.  It was, after all, a killer app.

It’s pretty interesting to imagine where we would be now if Ritchie had been able to see it: what the Web would have been like with OWL’s capabilities as a starting point, and what impact the commercial nature of the product might have had on its history and ability to catch on.  Hard to say.

However, with this rejection (and others like it), Tim realized:

…it seemed that explaining the vision of the web was exceedingly difficult without a web browser in hand, people had to be able to grasp the web in full, which meant imagining a whole world populated with websites and browsers. It was a lot to ask.


He was going to have to build something in order to convince people.  Indeed, Ian Ritchie would later give a Ted Talk about this in which he admits that two years later when he saw Mosaic he realized “yep, that’s it” – he’d missed it.

A Final Sidebar…

PenPoint Tablet in 1991

At very nearly the same time something that was neither HyperStuff nor SGML nor Internet related entered the picture.   It was called “PenPoint”.  Chances are pretty good that you’ve never heard of it and it’s probably going to be hard to see how, but it will play importantly into the story later.  PenPoint was, in 1991, a tablet computer with a stylus and gesture support and vector graphics.
Think about what you just read for a moment and let it sink in.

If you’ve never seen PenPoint, you should check out this video from 1991 because it’s kind of amazing.   And here’s what it has to do with our story: It failed.  It was awesomely ahead of its time and it just… failed.  But not before an application was developed for it called “SmartSketch FutureSplash” (remember the name) – a vector based drawing tool which would have been kind of sweet for that device in 1991.

I’ll explain in Part II how this plays very importantly into the story.

Many thanks to folks who slogged through and proofread this dry post for me: @mattur @simonstl and @sundress.


Pandora’s Box

This post is part of my personal notes in a larger effort thinking about benefits that are currently specified in Shadow DOM but contentious and held up in committee.  We’ll work it out in standards, I’m sure – but given the number of things Shadow DOM was addressing, it may still be several years until we have solutions widely implemented and deployed that solve all of them.  This has me doing a lot of thought exercises about what can be done in the meantime.  This post reflects one such exercise:  specifically, what would it mean to solve just the styling end of this on its own?  Warning: it may be mildly crazy, so the title is perhaps doubly relevant. It was originally posted on radar.oreilly.com.

The Pandora Box

CSS works really well if you can follow good patterns and have nice rich markup. It lets you define broad rules and inherit and override selectively, and if used well it cleanly maintains a separation of concerns — it’s pretty elegant actually.

On the other hand, in the real world, things are often not that simple: until pretty recently, HTML wasn’t especially expressive natively, so we worked around it – many times on our own, by adding classes like “article”.  But there wasn’t a standard.  Likewise, our ideas about the patterns we should follow or best practices continue to change as we gain new powers or spend more time with the technology. Of course, there are a vast number of cases where you’ll just go and upgrade your philosophy and be the better for it, but there are a number of times when this just isn’t an option. I described this in a recent post as the “friendly fire” problem – when you have numerous teams operating on different time schedules and budgets across many code bases that attempt to mash together into various forms. This is sometimes referred to in standards circles, less colorfully, as “the composition problem”.

When it comes to these sorts of cases, quality and philosophy are inevitably mixed and not entirely in the control of any one team. In these sorts of cases, CSS selectors are kind of like live hand-grenades, it’s just way too easy to do damage you didn’t intend to do because it requires a level of coordination of approach which is, for business reasons, highly impractical. And so you try really hard to analyze the problem, get the right select/cascade/descend/inherit/specificity, pull the pin, lob it into the pages and… hope. Frequently, all hell breaks loose. The more you mix it up, the harder it gets to reason about and the more our stylesheets explode in complexity. You’ve opened the proverbial Pandora’s Box.

You can see this in play in some simple examples. If the page begins with assumptions and a philosophy of its own and then injects markup created by another team with a differing philosophy without adopting their CSS as well, the results are frequently bad. Here’s one such (overly simplified) example in which you’ll notice that the taxonomies of classes overlap, and the result is that the inner context has bits that not only themselves become unreadable but obscure the page’s (already pretty bad) content. If the page does include the other team’s CSS it can easily get worse, as seen in this example in which you’ll notice both ends have harmed each other to the point of complete uselessness.

At a minimum, one proposal that seems to keep coming up in various forms (most recently Shadow DOM) is to provide a way, in cases like these, for authors to isolate the styling of various pieces so that the default is to do no harm in either direction, thereby simplifying the problem.  But, for now, the platform doesn’t provide you an easy way to do that… it does, however, provide a way to fake it, and that might be useful.  At the very least it can help us figure out exactly what it is we need: without data, standardization is hard and often a bad idea.  So much discussion is about what we think developers will find intuitive or confusing.  A better way is to know what developers understand or don’t.  So let’s see if we can try to tamp down a cow path.

Thinking about how CSS works

initial-values: All elements have all of the CSS properties from the get-go; every CSS property has a default, or initial, value which is specified in the specification.

A screenshot of the CSS2.1 spec showing the initial value for the display property was 'inline' at the time (it's grown more complicated but the net effect is the same)

The initial value for each property is provided in the spec.

For simplicity, you can sort of imagine that without anything more, all elements begin their life with properties that would look to most of us like a <span>. I’m guessing this is news to a lot of long time users of CSS because we pretty much never experience this world because browsers (“user-agents” in standards speak) come with a stylesheet of their own…

user-agent stylesheets: All browsers come with a default, or “user-agent” stylesheet. This is the thing that tells them how to display an element in the first place, and it does so with the same kinds of CSS rules that you and I write every day. Without it, you’d be able to see the contents of those <style> and <script> tags, your <h1> would look just like any other piece of text, and so on.  You can see one such example in Firefox’s stylesheet. So: initial values + user-agent sheet yields what you’d get if you started editing an HTML document, before you started adding your own CSS (“author CSS” is what we generally call that stuff).

specificity: The CSS runtime is a rules engine – all of the rules are loaded in order and maintain a stable sort based on a concept called “specificity”. That is, each rule gets a weighted “score” according to the selector it is based on. A “*” in the selector is worth 0 “points”, a tag is worth 1, a class is worth an order of magnitude more (it’s not specifically base-10, but for illustration you can assume 10 points), and an id is worth 100. So the reason author CSS trumps the user-agent stylesheet is simply that the user-agent stylesheet is very low specificity and it came first – authors will always either be more specific or have come later.
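As a rough illustration, here is a toy scorer for simple selectors only (real CSS compares ids/classes/tags as a tuple rather than a single number, and pseudo-classes complicate the counting, so treat this as a sketch):

```javascript
// Toy specificity scorer using the base-10 approximation from the text.
// Handles only ids, classes, type selectors and "*" – nothing fancier.
function toySpecificity(selector) {
  const ids = (selector.match(/#[\w-]+/g) || []).length;
  const classes = (selector.match(/\.[\w-]+/g) || []).length;
  // Count bare element names after stripping ids, classes and "*".
  const tags = (selector
    .replace(/#[\w-]+|\.[\w-]+|\*/g, ' ')
    .match(/[a-zA-Z][\w-]*/g) || []).length;
  return ids * 100 + classes * 10 + tags;
}

toySpecificity('*');        // → 0
toySpecificity('h1');       // → 1
toySpecificity('.warning'); // → 10
toySpecificity('#main h1'); // → 101
```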

And this is where things get sticky…

In situations like the ones described above, if you’ve got independently developed components working on differing models or timelines, no side can really know what they’re likely to inherit, what to override or how to do it (because specificity). Reasoning about this is really hard – many big companies or products have stylesheets of many thousands of rules, and frequently not just one, that must work with a vast array of content of varying age, quality and philosophy.

Keep a lid on it…

At some level, we can imagine that we’d like to identify a container as a special thing, let’s call it a “pandora-box” – the kind of integration point we’re talking about- and have it do just what the browser does with a fresh page by default for that container.  In this model, we’d like to say “Give me a clean slate (reset the defaults and provide a user-agent-like sheet for this sort of container).  Let my rules automatically trump the page within this container, by default, just like you trump the user agent… Likewise, keep my rules inside this Pandora’s Box by default”.  Essentially, we’d like a new specificity context.


If you’re thinking this seems a little like a Rube Goldberg machine, it’s not quite so out there – but, welcome to the Web and why we need the sorts of primitives explained in the Extensible Web Manifesto

Well, to some extent, we can kind of do this with a not-too-hard specificity strategy that removes most of the hard-core coordination concerns: we can use CSS’s own specificity rules to create an incredibly specific set of default rules, and use that as our box’s equivalent of a ‘default stylesheet’ – basically resetting things.  And that should work pretty much everywhere today.

The :not() pseudo-class counts the simple selector inside it for specificity. So, if we created an attribute called, for example, pandora-box, and then picked an id which would be incredibly unlikely to exist in the wild (say #-_-, because it’s not just unique but short and shows my ‘no comment’ face), then it would – using CSS’s own theory – provide a clean slate within the box. And, importantly, this would work everywhere today: we wind up with a single “pandora-box stylesheet” with rules like [pandora-box] h1:not(#-_-):not(#-_-):not(#-_-) { ... }, which has a specificity of 3 ids, 1 attribute and 1 tag (311 in base-10 – again, CSS isn’t base-10, but assigning a meaningful number helps for visualization) – enough to trump virtually any sane stylesheet’s most ambitious selector.
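The rewriting itself is mechanical. A minimal sketch (the attribute and id are the ones assumed above; a real tool would use a CSS parser rather than string splicing):

```javascript
// Boost a selector with three :not(#-_-) clauses (3 ids' worth of
// specificity) and scope it to the [pandora-box] container.
const BOOST = ':not(#-_-):not(#-_-):not(#-_-)';

function boostSelector(selector) {
  return selector
    .split(',')
    .map(s => '[pandora-box] ' + s.trim() + BOOST)
    .join(', ');
}

boostSelector('h1');
// → "[pandora-box] h1:not(#-_-):not(#-_-):not(#-_-)"
```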

Given this, all you need is a way to shove the component’s rules into the box and shut the lid and you have a pretty easy coordination strategy.  You’ve basically created a new “context”.

Essentially, this is the same pattern and model as the page has with user-agent sheets, at least mentally (initial values are still initial values and they could have been modified, so we’ll have to reset those in our pandora sheet as well).

In simple terms, a pandora box has two sets of rules – the default and the author-provided – paralleling the user-agent (default) and author sheets of the page.  The ordering of specificity is naturally such that the automatic pattern for resolution is: user-agent default stylesheet, then normal author stylesheets, then the pandora-box default stylesheet, then pandora-box author stylesheets.
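That resolution order falls out of specificity and source order alone – no new machinery. A toy resolver makes the point (the specificity numbers are illustrative, using the base-10 approximation from earlier):

```javascript
// Pick the winning declaration the way CSS would within one origin:
// highest specificity wins; ties go to the rule that came later.
function winner(rules) {
  return rules.slice().sort((a, b) =>
    (b.specificity - a.specificity) || (b.order - a.order))[0];
}

winner([
  { name: 'ua-default',      specificity: 1,   order: 0 },
  { name: 'page-author',     specificity: 10,  order: 1 },
  { name: 'pandora-default', specificity: 311, order: 2 },
  { name: 'pandora-author',  specificity: 321, order: 3 },
]).name;
// → "pandora-author"
```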

If you can follow this model, you should be able to work cooperatively without the risks and figure out how well that approach works.  Of course, no one wants to write a ton of :nots but we have preprocessors which can help and, maybe we can write something helpful for basic use that doesn’t even require that.

So, here’s a quick test pandora-box.js which does just that. Including it will inject a “pandora-box stylesheet” (just once) and give you a method CSS._specifyContainer(containerElement) which will shove any <style> tags within the container into a more specific ruleset (adding the pandora/specificity boosts). The file has a couple of other methods too, but they are all just alternate forms of this for trying out a few ways I’m slicing up the problem – one allows me to hoist a string of arbitrary cssText, another I’m playing with in a larger custom element pattern.  You can see this in use in this example; it is identical to the previous one in terms of HTML and CSS.
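To give a feel for what such a helper does, here is a hypothetical sketch – not the actual pandora-box.js, and a real implementation would parse the CSS rather than regex it, to survive @media blocks and the like:

```javascript
// Rewrite every selector in a flat block of component CSS text with the
// [pandora-box] scope and the :not(#-_-) specificity boost.
function specifyCssText(cssText) {
  const BOOST = ':not(#-_-):not(#-_-):not(#-_-)';
  // Naive rule splitter: fine for flat rules, wrong for nested at-rules.
  return cssText.replace(/([^{}]+)\{/g, (m, sel) =>
    sel.split(',')
       .map(s => '[pandora-box] ' + s.trim() + BOOST)
       .join(', ') + ' {');
}

specifyCssText('h1 { color: red }');
// → "[pandora-box] h1:not(#-_-):not(#-_-):not(#-_-) { color: red }"
```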

Interestingly, CSS has an all property for unwinding the cascaded values simply.  Unfortunately, it doesn’t have a value to ‘default’ things back to the fresh-page state (the one that includes the user-agent sheet) – only one to nuke all styles back to their initial values – which, in my mind, leaves you in a pretty bad spot.  However, if it did, our pandora sheet could be essentially a single declaration: [pandora-box]:not(#-_-):not(#-_-):not(#-_-) * { all: user-agent; }. Luckily, that’s under discussion by the CSSWG as you read this.

One question that is semi-unresolved is just what should that pandora sheet actually contain. How much is really necessary and, should it perhaps include common reset rules. I’ve taken what I think is a pretty sane stab at it, but all of this is easily tweaked for experimentation, so feel free to fork or make suggestions. The real goal of this exercise is to allow for experimentation and feedback from real authors, expand the conversation and highlight just how badly we need to solve some of these problems.

What about performance?!

The injection of rules happens just once, so I don’t think it is likely a big perf problem. Besides the pandora-box stylesheet, you have exactly the same number of rules (your component’s rules are replaced, not duplicated), and because it’s easier to reason about you should be able to have generally less crazy sheets. At least, that’s what I hope. Only time and use will really tell. If you’re worried about the complexity of the selectors slowing things down dramatically, I doubt that’s much of a problem either – browsers use a bloom filter and such cleverness that evaluating the pandora sheet should have little effect, and the triple :not()s are placed on the subject element of the selector, so there shouldn’t be much computational difference between something no one would have trouble writing, like [pandora-box] .foo, and the rule it rewrites to, [pandora-box] .foo:not(#-_-):not(#-_-):not(#-_-) – it never walks anywhere.

What about theming?!

Pandora shut the box vowing never to open it again. But when a tiny voice begged to be let out, she opened it one last time. As well she did, for the creature left inside was the one we call Hope.

Pandora’s box art from http://wallpapershd3d.com/fantasy-art-wallpaper/ by Marta Dahlig

This seems to be one of the great debates – clearly there *are* certain kinds of advice you want an outer page to provide to the stuff in these various pandora boxes – things like “the background of this calendar component should be blue”.

In case it’s not obvious, the basic principle at play here is that just as selectors are managed with orders of magnitude, this technique employs those orders of magnitude to create ‘contexts’. So, since we created the new context with specificity, you can simply trump it, just like anything else.  That is to say, if you really, really wanted to pierce through from the outside, you’d just have to make those rules _really_ specific (or place them after with the same specificity).  The key is that it has the right default behaviors – the calendar defines what it is to be a calendar and the page has to specifically say “except in these ways”.  If you look back at pandora-box.js, you can use the same API to allow the page to add rules, or there is a simple pattern which was also demonstrated in the last codepen example: if the thing you contain has an id, it will look for a matching style tag in your page with a special type <style type="text/theme-{id}">, where {id} is the id of the container. If it finds one, all of those rules will be placed after the component’s rules and will therefore theme it in the same way an author customizes native elements.
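The lookup side of that convention is simple enough to sketch (hypothetical, mirroring the description above rather than the actual file; the style tags are modeled as plain objects so the idea stands alone without a DOM):

```javascript
// Given a container's id and the page's <style> blocks (as {type, cssText}
// pairs), collect the matching "text/theme-{id}" rules so they can be
// appended after the component's own rules and win by source order.
function themeRulesFor(containerId, styleBlocks) {
  const want = 'text/theme-' + containerId;
  return styleBlocks
    .filter(block => block.type === want)
    .map(block => block.cssText)
    .join('\n');
}

themeRulesFor('cal', [
  { type: 'text/css',       cssText: 'body { margin: 0 }' },
  { type: 'text/theme-cal', cssText: '.day { background: blue }' },
]);
// → ".day { background: blue }"
```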

So, be part of the conversation. Let’s figure out what works and doesn’t in solving the friendly fire problem in CSS. Would the ability to put a lid back on Pandora’s Box be helpful?