Hyperliterature - the Web as a text

Originally published in the PN Review, November-December 2001

There is a great deal of writing now available on the World Wide Web. Books can be ordered on-line, of course, but there has also been an explosion of on-line magazines, often known as webzines or e-zines. Various e-publishers have appeared, some of them selling electronic literature which has never appeared in book form; increasing numbers of writers are setting up their own websites; and there is also the phenomenon known as hyperliterature, which is literature that could never be reproduced on paper, because it takes advantage in one way or another of the special properties of the Web.

In spite of all this, the Web may not strike many readers as a natural home for writing. Literature, it might be argued, belongs on the page, not on the screen. Reading from a book is in many ways a more pleasurable and convenient experience than reading from a monitor. Writers and publishers who have sought to establish themselves on the Web may seem to be jumping on a bandwagon, and an inappropriate bandwagon at that.

Yet the affinity between literature and the Web goes deeper than may at first be apparent. Many people imagine that the World Wide Web came into existence as a commercial application - a new way of shopping. This is understandable, since for the public at large "buying on-line" is both the most common type of web-usage and their most likely reason to start using the Web in the first place. As a matter of fact, however, the Internet and the World Wide Web were both originally created for research purposes, and the Web was designed as a means of retrieving and displaying documents. In its earliest incarnation, the World Wide Web was nothing but a gigantic array of interconnected texts.

The terms "Internet" and "World Wide Web" are not synonymous, although they are often treated as such. The Internet is a system for connecting computers to one another, and it began a surprisingly long time ago, in the 1960s in America. After the USSR launched Sputnik in 1957 the Eisenhower administration, determined not to be outdone in the space-race, set up ARPA, the Advanced Research Projects Agency, in 1958 (its space programmes were soon handed over to the newly created NASA). ARPA began a computer research programme in 1962, under a scientist called J. C. R. Licklider, whose background was in psychology. It was Licklider who first had the notion of linking computers together into a network - what became known as the ARPANet. The idea behind this was that a scientist working on one computer could have access to the files and capabilities of another, instead of either having to visit the place where that computer was housed, or obtain a similar one for himself. In those days, of course, personal computers were unknown: the machines in question were very big and expensive, so the advantages of being able to share resources in this way were obvious. Once the ARPANet was established, however, one of its most popular uses quickly proved to be a primitive form of e-mail, which was really a side-product of the system.

Over the next couple of decades great strides were made both in telecommunications technology as a whole, and in the various protocols required to allow computers to communicate with one another. By the time the World Wide Web came into existence, the Internet was already well-established across the developed world. The WWW concept was born in 1989, originated by Tim Berners-Lee and his colleagues at CERN (Geneva), the European centre for high-energy physics, who were interested in making it easier to pool research documentation. Again, the original applications were research-based and academic rather than commercial.

Anyone who has ever read a research paper knows that such papers have traditionally come heavily loaded with footnotes and references - pointers to other learned texts on similar subjects. But anyone who has ever researched a dissertation or a thesis will know that the process of following up such references can be frustrating and time-consuming. It often consists of filling out forms in university libraries, then waiting for weeks while obscure books and magazines are fetched by courier system from other university libraries up and down the country. Part of the aim of the World Wide Web was to obviate this process. When authors published their research papers on the Web, instead of appending references to them in the usual way, they could use hyperlinks, which meant that the reader would only have to click on the link for the source document to appear on his monitor.

There are plenty of documents on the Web - on medical sites, for example - which are still presented in exactly this way. They are traditional research papers, complete with references at the end, but some or all of the references double as hyperlinks. And this potentiality has been implicit in the Web from the start: it can function as an archive of texts, academic or otherwise, with links from one text to another.
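The mechanics are simple enough to show in miniature. Here is a minimal sketch (written in Python purely for illustration; the function name and the numbering scheme are invented for the example, though the address given really is that of Berners-Lee's 1989 proposal) of how a bibliography entry can be wrapped in an HTML anchor tag, so that the citation itself becomes the link:

    # Illustrative only: render a research paper's reference so that
    # the citation text doubles as a clickable hyperlink.
    def reference_as_hyperlink(number, citation, url):
        """Wrap one bibliography entry in an HTML anchor tag, so that
        clicking the citation fetches the source document itself."""
        return f'<li id="ref{number}"><a href="{url}">{citation}</a></li>'

    print(reference_as_hyperlink(
        1,
        "Berners-Lee, T., 'Information Management: A Proposal', CERN, 1989",
        "https://www.w3.org/History/1989/proposal.html",
    ))

A reader who clicks the resulting entry retrieves the source document at once, in place of the library courier system described above.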

But the CERN project went further than this. Tim Berners-Lee was not merely interested in creating an archive, a record of completed research. He wanted to establish the hypertext system as a flexible and dynamic way of pooling contributions towards ongoing projects. His original submission to CERN is reproduced on the Web, and its phraseology is revealing: "If a CERN experiment were a static once-only development, all the information could be written in a big book. As it is, CERN is constantly changing... Keeping a book up to date becomes impractical, and the structure of the book needs to be constantly revised... In providing a system for manipulating this sort of information, the hope would be to allow a pool of information to develop which could grow and evolve with the organisation and the projects it describes. For this to be possible, the method of storage must not place its own restraints on the information. This is why a 'web' of notes with links (like references) between them is far more useful than a fixed hierarchical system..."

From the start, Berners-Lee envisaged that hypertext might involve not just text but graphics and other media as well. He uses the term "hypermedia" several times. That having been said, however, he goes on to admit that the state of the art as it then stood would mean that most of the information being exchanged would be in the form of ASCII files - what are now commonly known as "text-only" files. So the Web would be text-based, but the text would be organised in a particular way. The contrast he makes is between a "big book" which is "static" and a linked "web" which can "grow and evolve". Later on he spells out the difference between information organised in a "tree" structure and a looser system of interlinked "nodes". The disadvantage of the tree structure is that as you follow a particular branch through its subdivisions towards a terminal twig, you get further away from the central trunk and also further away from the other branches. The advantage of a web of interlinked nodes is that from any given node you can jump off in a number of different directions, depending on where you want to go.
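The difference can be dramatised in miniature. In the toy sketch below (Python again, with every node name invented for the purpose), the tree permits movement only along its branches, so a terminal twig leads nowhere; in the web, any node may point to any other:

    # Toy illustration of Berners-Lee's contrast (all names invented).
    # In a tree, each node hangs from a single parent, so a reader at
    # a terminal twig has no onward path except back towards the trunk.
    tree = {
        "library": ["science", "arts"],
        "science": ["physics"],
        "arts": ["poetry"],
    }

    # In a web of nodes, any page may link to any other, so from a
    # given node the reader can jump off in several directions.
    web = {
        "physics": ["poetry", "library"],
        "poetry": ["physics"],
        "library": ["physics", "poetry"],
    }

    def one_step_destinations(graph, node):
        """Where can a reader go directly from this node?"""
        return graph.get(node, [])

    print(one_step_destinations(tree, "physics"))  # [] - a terminal twig
    print(one_step_destinations(web, "physics"))   # ['poetry', 'library']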

In a traditional library system, the books are divided into various subjects, which are then subdivided and subdivided again. But Berners-Lee is suggesting a different model of organisation, a model where the "method of storage" does not "place its own restraints on the information": a web in which any node can potentially be linked to any other node. He is imagining a library without aisles and shelves, where the books all communicate with each other: a library, in fact, which can be seen as a single text, and not a dead text either, but a text which is evolving and expanding all the time.

This is one of the first points which should be made about the Web. If we want to see it as a huge text - which in some ways it is - then we should bear in mind that it isn't organised in a traditional way. One of the most disconcerting things about the Web, when you first begin to browse it, is that there is far too much information available, and far too little guidance as regards where you should look. There are too many paths and too few signposts. You are obliged to create your own order out of the chaos. But this state of affairs was implicit from the first, because the Web was deliberately set up not as a finished structure with strict divisions and subdivisions, but as a dynamic pool of information to which a potentially infinite number of additions and interconnections could be added.

In many ways the library-method of categorising information has come to dominate our thinking. Our education system is an example: pupils are expected to specialise, then specialise further, as they work their way from primary through secondary and into further education. In effect each pupil is expected to leave the central trunk of the information-tree, and find his or her way into a particular branch. But Berners-Lee also detects the influence of the library-system elsewhere: he prefaces his proposals for the Web by implying a connection between a "tree-structure" of information-control, and a hierarchical method of organising workers within the CERN project:

CERN is a wonderful organisation. It involves several thousand people, many of them very creative, all working toward common goals. Although they are nominally organised into a hierarchical management structure, this does not constrain the way people will communicate, and share information, equipment and software across groups. The actual observed working structure of the organisation is a multiply connected "web" whose interconnections evolve with time...

As with information, so with people. Hierarchies are artificial: the real relationships between people are too dynamic and diverse to be categorised or contained within a tree-structure.

There is a hint of political radicalism here which is worth emphasising, because it has played an important part in the development of the Web, as well as other computer technologies. The first modern web-browser was NCSA Mosaic, launched in 1993 by the NCSA (National Center for Supercomputing Applications); and like many internet innovations, it was made available free of charge. In the history of computer development there is a strong tradition of "open-source" software, which means not only that anyone can use the software in question without charge, but also that anyone is welcome to suggest improvements. Unix (in its free variants), Linux and Apache (the last being another offspring of the NCSA) are all examples. Linux and the free versions of Unix are operating systems, the main alternatives to Windows, and unlike Windows they cost nothing. Apache, the world's leading internet server software, is free as well. Many computer experts, particularly in the United States, feel very strongly that software innovations should not be bought up and controlled by any particular corporation. There should be a "level playing field", with open access for all. Hence the widespread opposition to Microsoft's attempts to establish a virtual monopoly of the PC and web-browsing markets.

Some of this radicalism has been carried across into the field of hyperliterature, where it has encouraged a re-examination of the relationship between reader and writer. We have already seen that the Web as a whole is more dynamic and fluid than a traditional text. Hyperliterature - literature written specifically for the Web - defines itself, or separates itself from literature of the more traditional sort, by adopting this fluidity as an essential component of its style. Pages on the Web are not laid out in a fixed sequence but in a "cloud", and within that cloud, potentially, every page can be linked to every other page. This means that the reader is not given the pages of a website, or a hyperliterature text, in a fixed sequence, with the implication that he ought to start at the beginning and read them one after the other until he gets to the end. He is given all the pages at once, as a group, and left to explore them for himself - to make his own connections, and to construct his own reading experience. Authorial control is correspondingly lessened.
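Reduced to its bare bones, then, a hyperliterature text is less a numbered sequence of pages than a small cloud of them, through which each reader traces a path of his own. The sketch below (Python once more; the page titles, texts and links are all invented) makes the point that the same cloud yields a different reading for every sequence of choices:

    # A minimal "cloud" of pages (titles, texts and links invented).
    pages = {
        "shore":   {"text": "...", "links": ["storm", "harbour"]},
        "storm":   {"text": "...", "links": ["shore", "harbour"]},
        "harbour": {"text": "...", "links": ["storm"]},
    }

    def one_reading(start, choices):
        """Follow a reader's choices through the cloud: the sequence
        produced is constructed by the reader, not fixed by the author."""
        path, current = [start], start
        for pick in choices:
            if pick in pages[current]["links"]:
                current = pick
                path.append(current)
        return path

    print(one_reading("shore", ["storm", "harbour"]))  # one reading
    print(one_reading("shore", ["harbour", "storm"]))  # another, from the same pages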

Of course, when Tim Berners-Lee talks about "a big book" and "a fixed hierarchical system" he is referring to a method of organising information rather than a style of creative writing. Poetry and fiction have never been organised into tree-structures and hierarchies. The extent to which traditional authors are oppressing their readers by asking them to read texts in fixed sequences is open to doubt, as is the extent to which web-writers are setting readers free by presenting them with "clouds" of pages rather than readymade books. Yet the apologists of hyperliterature are given to stressing its advantages over conventional writing in exactly these terms. And whether we find such arguments convincing or not, it is hard to deny that hyperliterature at its best reflects the fluidity and dynamism of the Web as a whole, or that the Web has presented writers with a genuine opportunity to extend the possibilities of their art.

References

Internet for Historians: History of the Internet, by R. T. Griffiths (a link to Tim Berners-Lee's original web-concept document is included on this page)

IC Online: Len Kleinrock (an interview with one of the founding fathers of the Internet about the establishment of the ARPANet, the Internet itself and the origins of e-mail)

Pioneers of the Net


© Edward Picot, June 2001