93

OK, so I paraphrased. The full quote:

The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs. -- Alan Kay.

I am trying to understand the history of the Internet and the web, and this statement is hard to make sense of. I have read elsewhere that the Internet is now used for very different things than it was designed for, and perhaps that factors in.

What makes the Internet so well done, and what makes the web so amateurish?

(Of course, Alan Kay is fallible, and no one here is Alan Kay, so we can't know precisely why he said that, but what are some possible explanations?)

*See also the original interview*.

Kyle L

10 Answers

80

In a sense he was right. The original (pre-spec) versions of HTML, HTTP and URL were designed by amateurs (not standards people). And there are aspects of the respective designs ... and the subsequent (original) specs ... that are (to put it politely) not as good as they could have been. For example:

  • HTML did not separate structure/content from presentation, and it has required a series of revisions ... and extra specs (CSS) ... to remedy this.

  • HTTP 1.0 was very inefficient, requiring a fresh TCP connection for each "document" fetched.

  • The URL spec was actually an attempt to reverse engineer a specification for something that was essentially ad hoc and inconsistent. There are still holes in the area of definition of schemes, and the syntax rules for URLs (e.g. what needs to be escaped where) are baroque. (See the sketch just below.)
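
To make the escaping point concrete, here is a small sketch using Python's standard urllib.parse module (the example strings are made up): the same character is handled differently depending on which URL component you are writing.

    from urllib.parse import quote, quote_plus, urlencode

    # Path segments: spaces become %20, while "/" is left alone by default.
    print(quote("my docs/report 1.html"))          # my%20docs/report%201.html

    # Query strings: spaces conventionally become "+", and "&" / "=" must be escaped.
    print(quote_plus("a&b = c"))                    # a%26b+%3D+c
    print(urlencode({"q": "alan kay", "page": 2}))  # q=alan+kay&page=2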

And if there had been more "professional" standards people involved earlier on, many of these "missteps" might not have been made. (Of course, we will never know.)

However, the web has succeeded magnificently despite these things. And all credit should go to the people who made it happen. Whether or not they were "amateurs" at the time, they are definitely not amateurs now.

Stephen C
63

He actually elaborates on that very topic on the second page of the interview. It's not the technical shortcomings of the protocol he's lamenting; it's the vision of the web browser designers. As he put it:

You want it to be a mini-operating system, and the people who did the browser mistook it as an application.

He gives some specific examples, like the Wikipedia page on a programming language being unable to execute any example programs in that language, and the lack of WYSIWYG editing, even though it was available in desktop applications long before the web existed. Twenty-three years later, we are only just starting to work around the limitations imposed by the original web browser design decisions.

Karl Bielefeldt
32

It seems to be due to a fundamental disagreement between Alan Kay and the people (primarily Tim Berners-Lee) who designed the web about how such a system should work.

The ideal browser, according to Kay, should really be a mini operating system with only one task: to safely execute code downloaded from the internet. In Kay's design, the web does not consist of pages, but of black-box "objects" which can contain any kind of code (as long as it is safe). This is why he says a browser shouldn't have features. A browser wouldn't need, say, an HTML parser or a rendering engine, since all of this would be implemented by the objects. This is also the reason he doesn't seem to like standards. If content is not rendered by the browser but by the object itself, there is no need for a standard.
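
As a purely illustrative toy (not from Kay or from any real system; the class names here are invented), the contrast is roughly this: in one model the browser owns the parser and renderer, in the other it merely provides a safe place for downloaded code to present itself.

    class DeclarativeBrowser:
        """Today's model: the browser must understand the markup itself."""
        def show(self, html: str) -> None:
            # Stand-in for a real HTML parser and rendering engine.
            print(html.replace("<b>", "").replace("</b>", ""))

    class BlackBoxObject:
        """Kay's model: content arrives as code that knows how to present itself."""
        def __init__(self, program):
            self.program = program

        def run(self, display) -> None:
            # The "browser" only supplies a (hopefully sandboxed) display capability.
            self.program(display)

    DeclarativeBrowser().show("Hello <b>web</b>")
    BlackBoxObject(lambda display: display("Hello web, rendered by my own code")).run(print)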

Obviously this would be immensely more powerful than the web today, where pages are constrained by the bugs and limitations of the current browsers and web standards.

The philosophy of Tim Berners-Lee, the inventor of the web, is almost the exact opposite. The document "The Principle of Least Power" outlines the design principles underlying HTTP, HTML, URLs, etc. He points out the benefit of limitations. For example, a well-specified declarative language like HTML is easier to analyze, which makes search engines like Google possible. Indexing is not really possible in Kay's web of Turing-complete black-box objects. So the lack of constraints on the objects actually makes them much less useful. How valuable are powerful objects if you can't find them? And without a standard notion of links and URLs, Google's PageRank algorithm couldn't work, and neither would bookmarks for that matter. Of course, the black-box web would also be totally inaccessible to disabled users.
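
As a small illustration of that last point (a sketch using Python's standard html.parser module; the sample page is made up), a few lines of code can pull every link out of declarative markup, which is exactly what a crawler needs and exactly what an opaque program would not expose:

    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        """Collects the href attribute of every <a> tag it sees."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links.extend(value for name, value in attrs if name == "href")

    page = '<p>See <a href="https://example.org/kay">Kay</a> and <a href="/tbl">TBL</a>.</p>'
    extractor = LinkExtractor()
    extractor.feed(page)
    print(extractor.links)  # ['https://example.org/kay', '/tbl']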

Another issue is content production. Now we have various tools, but even from the beginning any amateur could learn to author an HTML page in Notepad. This is what kickstarted the web and made it spread like wildfire. Imagine if the only way to make a web page required you to program your own rendering engine: the barrier to entry would be immense.

Java applets and Silverlight resemble Kay's vision to some extent. Both systems are much more flexible and powerful than the web (since you could implement a browser in them), but they suffer from the problems outlined above. And both technologies are basically dead in the water.

Tim Berners-Lee was a computer scientist who had experience with networks and information systems before inventing the web. It seems that Kay does not understand the ideas behind the web, and therefore he believes the designers are amateurs without knowledge of computing history. But Tim Berners-Lee certainly wasn't an amateur.

JacquesB
21

I read this as Kay being unfamiliar enough with the lower-level protocols to assume they're significantly cleaner than the higher-level web. The "designed by professionals" era he's talking about still had major problems with security (spoofing is still too easy), reliability and performance, which is why there's still new work being done tuning everything for high-speed or high-packet-loss links. Go back just a little further and hostnames were resolved by searching a text file that people had to distribute!

Both are complex heterogeneous systems with significant backwards-compatibility challenges any time you want to fix a wart. It's easy to spot problems, hard to fix them, and as the array of failed competitors to either shows, it's surprisingly hard to design something equivalent without going through the same learning curve.

As a biologist might tell an intelligent design proponent, if you look at either one and see genius design you're not looking closely enough.

11

Ahh yes, I've asked Alan this question a number of times, for example when he was in Potsdam and on the fonc mailing list. Here is a more recent quote from the list which to me summed it up quite well:

After literally decades of trying to add more and more features and not yet matching up to the software that ran on the machines the original browser was done on, they are slowly coming around to the idea that they should be safely executing programs written by others. It has only been in the last few years -- with Native Client in Chrome -- that really fast programs can be safely downloaded as executables without having to have permission of a SysAdmin.

My understanding of his various answers is that he thinks web browsers should not display (HTML) documents, possibly enriched, but simply run programs. I personally think he is wrong in this, though I can see where he is coming from. We already had this sort of thing with ActiveX, Java Applets, Flash and now "rich" JavaScript apps, and the experience generally wasn't good; my personal opinion is that even now most JavaScript-heavy sites are a step back from good HTML sites, not a step forward.

Theoretically, of course, it all makes sense: trying to add interactivity piecemeal to what is basically a document description language is backwards, akin to adding more and more epicycles to the Ptolemaic system, whereas the "right" answer is figuring out that (rich) text is a special case of a program and therefore we should just send programs.

However, given the practical success of the WWW, I think it's wise to modify our theories rather than slam the WWW for having the gall not to conform to our theories.

mpw
5

I think he was pointing to something less obscure: TBL knew nothing about the hypertext work that had gone on since the 1960s, so this work didn't inform the design of the web. He often talks of computing as a pop culture, where practitioners don't know their history and continually "reinvent the flat tire".

4

You cannot really say that the Internet or the Web was invented by amateurs or professionals, because those fields were entirely new; everyone was an amateur in Internet protocols before they were invented, so from that point of view the inventors of the Internet were amateurs too.

If we were to be really judgmental, the Internet was not so great after all: IPv6 is needed. And it is not only about the address space; IPv6 has a new header with fewer and different fields.

Another big difference between the Internet and the Web is how they are perceived by the programmer; a programmer rarely interacts with the Internet directly. From his point of view, IP gives you addresses, TCP adds a port, and you are assured that the packets are delivered. That's about it... With the Web, on the other hand, the programmer has a much more intense interaction: HTTP methods, headers, HTML, URLs, etc. It is normal to see the limits of something with many more possibilities than of something with almost no possibilities at all. With this I don't want to say that the Internet is simple: underneath it is quite complex, but that complexity is handled by network and telecommunications engineers and amounts to configuring within a limited space of possibilities, whereas on the Web you have nearly unlimited possibilities and the task of building complex applications on top of nothing more than packet delivery.
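
To illustrate that asymmetry (a sketch using Python's standard socket module; example.org is just a placeholder host), everything the programmer sees of "the Internet" is a byte pipe to a host and port, while everything recognisably "web" (the method, the headers, the path) is text written on top of it:

    import socket

    # The transport layer hands us a reliable byte pipe; the HTTP request line
    # and headers are just text we choose to write into it.
    with socket.create_connection(("example.org", 80)) as conn:
        conn.sendall(b"GET / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n")
        response = b""
        while chunk := conn.recv(4096):
            response += chunk

    print(response.split(b"\r\n", 1)[0])  # e.g. b'HTTP/1.1 200 OK'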

Regarding the greatness of these two technologies, the Internet is so appreciated because it is a very scalable technology and the idea of layering was a very good one: at the lower levels you can use any technology you want (WLAN, Ethernet, Token Ring, etc.), with IP as a standard intermediate protocol upon which TCP and UDP are placed, and above which you can add whatever application protocol you want.
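
A deliberately crude sketch of that layering idea (illustrative only; real stacks do this in the kernel with real header formats, and the addresses shown are placeholders): each layer treats everything above it as an opaque payload, which is why the link technology underneath can be swapped freely.

    def wrap(payload: bytes, header: bytes) -> bytes:
        """Each layer simply prepends its own header to an opaque payload."""
        return header + payload

    app_data = b"GET / HTTP/1.1\r\n\r\n"                   # any application protocol
    segment  = wrap(app_data, b"[TCP dst=80]")              # transport: TCP or UDP
    packet   = wrap(segment, b"[IP dst=192.0.2.1]")         # the common intermediate layer
    frame    = wrap(packet, b"[Ethernet]")                  # or WLAN, Token Ring, ...
    print(frame)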

The greatness of the Web is strictly related to the greatness of the Internet, because the Web relies heavily on the Internet, with the TCP/IP stack underneath. But I would say the Internet is dependent on the Web too; the Internet existed for 20 years before the Web and was relatively obscure, but 20 years after the Web it is ubiquitous, and all of this thanks to the Web.

Random42
4

The Internet has worked remarkably well as a prototype of the packet-switching concept discovered by Baran, Pouzin and contemporaries. Contrary to popular opinion, this does not mean that IPv4 as handed down is the perfect protocol architecture, or that IPv6 is the way to go. John Day, who was deeply involved in the development of ARPANET and IP, explains this in his 2008 book Patterns in Network Architecture.

As for the Web, in the words of Richard Gabriel, "Worse is Better". Tim Berners-Lee's account, Weaving The Web, is decent. How The Web Was Born by Gillies & Cailliau is denser and less readable but has lots of detail and some fascinating links with other events in personal computing at the time. I don't think Kay gives it enough credit.

vdm
1

I dunno, some parts of the non-web Internet have some horrible warts. Email predates the web and is part of the Internet, and although the standard is very open, it requires a lot of hacks on top to tackle (but not solve) the spam problem.

0

"Amateur" does not refer to the lack of programming skills, but the lack of imagination.

The underlying problem with Tim Berners-Lee's web is that it was never built for developers. (This is in stark contrast to Alan Kay's web.)

Tim's web was built for non-coders who would publish on the web directly by dabbling with files containing their journals/articles interspersed with HT-markup-language: It's like 1980s WordPerfect and MS-Word, except they would use "<b></b>" instead of clicking on the B icon, and would save it as an open ".htm" format instead of a proprietary ".doc" format. The invention here is the "<a>" tag, which allows these static journals/articles to be globally interlinked.

And that's it, that's the entire web vision by Tim: his web is a mere global highway of interlinked static articles. Maybe, if you had the money, you could buy an editor like Dreamweaver, Nexus, Publisher, Citydesk(?), etc., which would help you generate all those "<b></b>" tags by clicking on the B icon.

..And we see how his vision didn't work as intended. Indeed, there were mighty red flags right from the start that the world wanted far more than what Tim's vision offered:

  • Red flag 1: The rapid rise of "smart CGI" (PHP).

  • Red flag 2: The rapid rise of "smart HTML" (Javascript).

These days, we have even more red flags like the rise of Chrome-OS-is-the-browser-is-the-OS (exactly what Alan Kay had intended the browser to be btw) and WASM / browser-extensions.


In contrast to Tim's web, Alan Kay's web is a dynamic web built for programmers: a global highway of interlinked dynamic programs. Non-coders who need a "page" would simply publish one by using a program on the web. (And the program itself was obviously written by programmers, not HTML-dabblers.)

..This is exactly the status quo of Tim's web in the 2000s, but if we had had Alan's web, it would have happened in the 1990s: instead of the world getting "wordpress and friendster" only in the 2000s, we would have had them right when the web started in the 1990s.

..Similarly, instead of having programs like Steam, Visual Studio, Warcraft and VMware on the web in the 2040s, we would have them right now in the 2010s. (The multi-decade delay is due to these programs already having been built for the OS-is-not-the-browser, thus reducing the economic incentive for them to be rebuilt on the OS-is-the-browser-is-the-OS.)

So this is what people mean when they say Tim Berners-Lee had killed the True Dynamic Web by pushing his "shabby static web" onto the world. Ever heard of the terms "web 2.0", "web 3.0"? They would have simply been called "The Web" if we had Alan's web instead of Tim's web. But Tim's web needs constant revision because it is so static.

Obviously, all hope is not lost, as the Web can be remodeled however browser vendors choose to define it. But the point is that all this "bleeding edge" stuff they are "inventing" on the web now was already invented a long time ago. We could have it all today, not tomorrow.

Pacerier