Part I and Part II of this series attempt to place the creation of the Web, the first Web browser and early attempts to publicly share Sir Tim Berners-Lee’s idea of “the Web” into some historical context. They attempt to illustrate that there were ideas and forks of ideas along the way, each considering different aspects of different problems, and that each had varying degrees of success. Some were popular and increasingly mature desktop products; others created standards upon which many products would be built. Some were technically interesting but completely failed to take off, and some were academically fascinating but largely vapor. All of these were still “in motion” at the time that the Web was being conceived, and it’s important to realize that they didn’t stop just because Tim said “wait, I’ve got it.” Those posts also attempt to explain how Tim really wanted the Web to be read/write, wanted to tap into existing mature products, and which bits he imagined were most and least important. I described the landscape of deployed hardware and technology at the time: the world was close, but not yet “ready” to be wired in richer form. We weren’t connected, by and large. In fact, the entire amount of information sent across the Internet monthly at that time would easily fit on a single sub-$100 (US) external USB hard-drive in 2016. All of this helped shape the Web.
It would be an understatement to say that early on, the vast majority of folks in the mature bits of the industry didn’t really take the Web seriously yet. As explained in previous posts, the authors of mature hypertext products turned down early opportunities to integrate the Web outright. In 1992, Tim’s talk proposal for the Hypertext Conference was actually rejected. Even in terms of new “Internet ideas” the Web didn’t seem like the popular winner. SGML enthusiasts by and large shunned it as a crude interpretation of a much better idea with little hope of success. Gopher, which was created at almost exactly the same time, was gaining users far faster than this “World Wide Web”. What did happen for the Web, however, is that a huge percentage of the small number of early enthusiasts involved started building browsers… Meanwhile, existing ideas kept evolving independently, and new ideas started emerging too; these would continue to help shape and inspire.
1992 saw the birth of a lot of browsers and little actual hypertext. For perspective, by the end of 1992 there were seven web browsers which allowed users to surf the vast ocean of what was at the time only 22 known Websites. As each browser came online, it built on Tim’s original idea that the parser could just ignore tags it didn’t understand, and each attempted to “adjust” what we might call the political map of the Web’s features. Each brought with it its own ideas, innovations, bugs and so on. Effectively, this was the first “Browser War”, or what I’ll call “World Wide Web War I”.
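That “ignore what you don’t understand” rule is worth pausing on, because it’s what let each browser extend the markup without breaking the others. Here’s a minimal sketch of the idea in Python – not any 1992 browser’s actual code, and the set of “known” tags is a hypothetical subset for illustration:

```python
from html.parser import HTMLParser

class ForgivingParser(HTMLParser):
    """Render text, honor the few tags you know, silently skip the rest."""
    KNOWN_TAGS = {"h1", "p", "a"}  # hypothetical subset a tiny early browser might support

    def __init__(self):
        super().__init__()
        self.output = []

    def handle_starttag(self, tag, attrs):
        if tag in self.KNOWN_TAGS:
            self.output.append(f"<{tag}>")
        # Unknown tags (some other browser's invention) are dropped entirely.

    def handle_endtag(self, tag):
        if tag in self.KNOWN_TAGS:
            self.output.append(f"</{tag}>")

    def handle_data(self, data):
        # Text content always survives, even inside unknown tags.
        self.output.append(data)

parser = ForgivingParser()
parser.feed("<h1>Hi</h1><blink>new stuff</blink>")
print("".join(parser.output))  # → <h1>Hi</h1>new stuff
```

The payoff: a browser that had never heard of `<blink>` still showed the text inside it, so authors could use new tags without locking out older browsers.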
There is one in particular worth calling out: ViolaWWW. It was created by a student named Pei Wei and it included innovations like stylesheets, tables, inline images, a scripting language and even the ability to embed small applications. Remember that nearly all the popular non-Web desktop hypertext and hypermedia products of the time had many of these features. What made ViolaWWW different was that it was so much more than text. Viola (not ViolaWWW) was an object-oriented programming language and a bytecode VM. The ViolaWWW browser was just a VM application (though it was the killer one that made most people care about the VM) – this made it possible to do all sorts of incredibly interesting things.
Some people reading this in 2016 are likely simultaneously impressed and perhaps just a little horrified by the idea that attempts to “move away from” nice, clean, declarative, semantic markup with scripts and programs came so early on. Tim must have been absolutely horrified, right?
Well, no, that doesn’t seem to be quite accurate – at least from what I read of the record. History is a lot more nuanced than we frequently tend to present it. Ideas are complex. Understanding nuances can be hard, but I’d like to try.
As I described in Part II, Tim didn’t imagine HTML would be “the solution” for content but rather
I expected HTML to be the basic waft and weft of the Web but documents of all types: video, computer aided design, sound, animation and executable programs to be the colored threads that would contain much of the content. – Tim Berners-Lee in Weaving the Web
In fact, regarding Pei and Viola, he spoke in generally glowing terms in Weaving the Web and in numerous interviews. In his address at CERN in 1998, upon accepting a fellowship, he said:
It’s worth saying that I feel a little embarrassed accepting a fellowship when there are people like Pei Wei …[who] read about the World Wide Web on a newsgroup somewhere and had some interesting software of his own; an interpreted language which could be moved across the NET and could talk to a screen.. in fact what he did was really ahead of his time.
As questions came in related to similar ideas on the www-talk mailing list, Tim answered and explained a number of related concepts. Here’s one exchange from May 1992 that might capture his feelings at the time (the reply is Tim’s):
> I would like to know, whether anybody has extended WWW such, that it is possible to start arbitrary programs by hitting a button in a WWW browser.
Very good question. The problem is that of programming language. You need something really powerful, but at the same time ubiquitous. Remember a facet of the web is universal readership. There is no universal interptreted [sic] programming language. But there are some close tries. (lisp, sh). You also need something which can run in a very safe mode, to prevent virus attacks…. [It should be] public domain. A pre-compiled standard binary form would be cool too. It isn’t here yet.
While I don’t know precisely what Tim was thinking in the early 1990s, I do think it is worth noting the description, his use of the words “cool” and “yet”, as well as the absence of any sort of all-caps, head-exploding response. In fact, if you wade through the archives, it turns out that a lot of early talk and effort in 1991-1992 already specifically surrounded this weird line between documents and applications, or HyperText and HyperMedia. Traditional desktop system makers had recognized the gap, and early Web enthusiasts did too.
The point of this observation is simple: “We” didn’t really fully know what we were doing then, and in many ways we’re still figuring it out today... and that’s ok.
Even the inclusion of inline images brought new questions – MIME wasn’t a given, content-negotiation was still kind of a rough dream and requests were very expensive.
Others were starting to add things like <input> and annotation and comment systems and variable ‘slots’ in the markup for some rough idea of ‘templating’ – to help address the problem that a whole lot of sites, even then, would have to be concerned with repeating things.
Some people wanted to make servers really smart. They saw the investment in a server as a thing which could query databases, convert document types on the fly, run business logic, etc. These folks thought that modifying HTML would give them something akin to the IBM 3270 model, which allowed the pushing of values and the pulling of page-based responses. Others, like Pei, wanted to make clients smarter, or at least to improve the cooperation between the two. At some level, these are conversations nearly as old as computing itself, and we’re still having them today.
Tim continues in that same post above to say that:
In reality, what we would be able to offer you real soon now with document format negotiation is the ability to return a document in some language for execution.
He mentions that, for example, the server might send either a shell script or a Viola script by way of negotiation – which he explains would “cover most Unix systems”. For Tim, it seems (from my reading), the first problem was that there wasn’t a standard language, the second was that it might not be safe, and the third was that if there were one it should be in the public domain. If you were on a Unix machine you’d be covered. If you had Viola, you’d be covered. If you had neither… well… it’s complicated. But Viola had already begun tackling the safety issue too.
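The negotiation idea Tim describes is simple at its core: the client advertises what it can render or run, and the server picks the best representation it has. Here’s a minimal sketch of that logic; the media type names (like “application/x-viola”) and the helper function are illustrative assumptions, not the actual 1992 wire format:

```python
def negotiate(accept: str, available: dict):
    """Pick the first representation the client says it accepts.

    `accept` is a comma-separated list of media types, in the spirit of
    an HTTP Accept header; `available` maps media types to bodies.
    """
    accepted = [part.split(";")[0].strip() for part in accept.split(",")]
    for media_type in accepted:
        if media_type in available:
            return media_type, available[media_type]
    return None, None  # nothing in common: the client can't use anything we have

# The same "document" offered as a shell script or a Viola script.
representations = {
    "application/x-sh": "echo 'hello from a shell script'",
    "application/x-viola": "(viola script would go here)",
}

media_type, body = negotiate("application/x-viola, text/html", representations)
print(media_type)  # → application/x-viola
```

A Unix-only client would ask for (and get) the shell script; a Viola client gets the Viola script; a client with neither gets nothing usable – exactly the “it’s complicated” case above.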
But even this early on, it didn’t seem to be a question of whether applications should be part of the Web, but rather how they should be. Should it be one way, or maybe a few?
There was so much innovation and variance in browsers themselves that in December 1992 Tim sent an email entitled “Let’s keep the web together” simultaneously praising the wealth of ideas and innovations and stressing that they should work to begin the process of future standardization/alignment lest the Web fragment too far.
1993-1994: The Perfect Storm
When Marc Andreessen was shown Viola, it helped inspire him to start an effort at NCSA – at least that’s one story.
For all its value, Viola had what turned out to be a critical flaw: it was hard to get working. Despite all of its advantages, you had to install the runtime itself (the VM) and then run the browser in the runtime. Getting it set up and running proved problematic, even for a number of pretty technically savvy people. There were issues with permissions and bugs, and Pei was kind of alone in building and maintaining it.
But this wasn’t unique. Browsing the Web in 1993 was still a kind of painful endeavor to get going. The line mode browser was placed into the public domain in May 1993. There wasn’t terribly much there to browse either – even by the end of that year there were only 130 known websites. Setup was difficult and things were generally very slow. Even finding something on that small number of sites was hard – forget searching Google, even Yahoo wasn’t a thing yet. And even if you could get something working with one browser, chances were pretty decent you’d come across something for which yours wasn’t the right browser to get the whole experience.
Marc Andreessen was the first one to start treating the browser like a modern product – really taking in feedback and smoothing out the bumps. The team at NCSA quickly ported his UNIX work to Mac and PC; there was even one for the Commodore Amiga. Mosaic was a good browser, but feature-wise, it probably wasn’t even the best. Aside from ViolaWWW’s notable work, some browsers had already pioneered forms, for example, which Mosaic initially didn’t support. But there was at least one thing they nailed: they created an easy-to-install, easy-to-set-up browser on many platforms and drove the barrier to entry way down.
Moore’s Law had finally created faster and cheaper machines, with modems entering a more mainstream market; some notable regulation changes happened; and when the makers of Gopher announced that maybe, just possibly, in some circumstances you might be charged a very small fee, Tim convinced CERN to make a statement that the Web wouldn’t do that.
In other words: When Mosaic was released publicly in late 1993 “free for non-commercial use” it was in the midst of the perfect storm. Timing matters.
Suddenly the Web really started to hit and growth began to really explode. By 1994, there were an estimated 2800 sites. Regular People™ were being introduced to the Web for the first time, and they were using Mosaic. Only roughly a year after it was placed in the public domain, it is estimated that less than 2% of Web users (still a comparatively small total by today’s measures) were getting around using the line mode browser. In April of that year, James Clark and Marc Andreessen established what would become Netscape.
Previous efforts to standardize didn’t get there – it wasn’t until November 28, 1994 that the draft that would become RFC 1866 was finally sent to the IETF to begin creating an HTML “standard”, which we’ll talk about in Part IV, but which wouldn’t ultimately arrive for another year, and then in debatable form.
Pei wasn’t the only one thinking about a runtime VM; Sun Microsystems was too. As described in Part II, Project Green spawned Oak, thinking they’d found the next big market in consumer devices with embedded systems. By 1992 they had a working demo for set-top boxes which created a read/write, interactive, internet-like experience for television – MovieWood – which they had hoped to sell to cable companies. But the cable companies didn’t bite.
This confluence of timing and ideas left the Green Team at Sun wondering what to do with the rest of their lives and their ideas. Over the course of three days they discussed the success of Mosaic and decided their future. As James Gosling would later describe:
Mosaic… revolutionized people’s perceptions. The Internet was being transformed into exactly the network that we had been trying to convince the cable companies they ought to be building. All the stuff we had wanted to do, in generalities, fit perfectly with the way applications were written, delivered, and used on the Internet. It was just an incredible accident. And it was patently obvious that the Internet and Java were a match made in heaven.
So the Sun team went back to the drawing board to build a browser like Mosaic – but in Java (which was a lot like Pei’s approach) – and in 1994 they had a demo called “WebRunner”, which would later be renamed “HotJava™”. Within just a few months everyone, not just techies, was going crazy imagining the possibilities.
Nearly everyone seemed to think that this would change the world. As we’ll see in Part IV, they might have been right, but not how they thought….