The Gutenberg Elegies

The Fate of Reading in an Electronic Age

Sven Birkerts

Faber and Faber

BOSTON • LONDON

All rights reserved under International and Pan-American Copyright Conventions, including the right of reproduction in whole or in part in any form. First published in 1994 in the United States by Faber and Faber, Inc., 50 Cross Street, Winchester, MA 01890.

Copyright © 1994 by Sven Birkerts


part ii

The Electronic Millennium

(Selected Fragments)

8 Into the Electronic Millennium

9 Perseus Unbound

10 Close Listening

11 Hypertext: Of Mouse and Man


8 Into the Electronic Millennium

[.....]

Think of it. Fifty to a hundred million people (maybe a conservative estimate) form their ideas about what is going on in America and in the world from the same basic package of edited images–to the extent that the image itself has lost much of its once-fearsome power. Daily newspapers, with their long columns of print, struggle against declining sales. Fewer and fewer people under the age of fifty read them; computers will soon make packaged information a custom product. But if the printed sheet is heading for obsolescence, people are tuning in to the signals. The screen is where the information and entertainment wars will be fought. The communications conglomerates are waging bitter takeover battles in their zeal to establish global empires. As Jonathan Crary has written in "The Eclipse of the Spectacle," "Telecommunications is the new arterial network, analogous in part to what railroads were for capitalism in the nineteenth century. And it is this electronic substitute for geography that corporate and national entities are now carving up." Maybe one reason why the news of the change is not part of the common currency is that such news can only sensibly be communicated through the more analytic sequences of print.

To underscore my point, I have been making it sound as if we were all abruptly walking out of one room and into another, leaving our books to the moths while we settle ourselves in front of our state-of-the-art terminals. The truth is that we are living through a period of overlap; one way of being is pushed athwart another. Antonio Gramsci's often-cited sentence comes inevitably to mind: "The crisis consists precisely in the fact that the old is dying and the new cannot be born; in this interregnum a great variety of morbid symptoms appears." The old surely is dying, but I'm not so sure that the new is having any great difficulty being born. As for the morbid symptoms, these we have in abundance.

The overlap in communications modes, and the ways of living that they are associated with, invites comparison with the transitional epoch in ancient Greek society, certainly in terms of the relative degree of disturbance. Historian Eric Havelock designated that period as one of "proto-literacy," of which his fellow scholar Oswyn Murray has written:

To him [Havelock] the basic shift from oral to literate culture was a slow process; for centuries, despite the existence of writing, Greece remained essentially an oral culture. This culture was one which depended heavily on the encoding of information in poetic texts, to be learned by rote and to provide a cultural encyclopedia of conduct. It was not until the age of Plato in the fourth century that the dominance of poetry in an oral culture was challenged in the final triumph of literacy.

That challenge came in the form of philosophy, among other things, and poetry has never recovered its cultural primacy. What oral poetry was for the Greeks, printed books in general are for us. But our historical moment, which we might call "proto-electronic," will not require a transition period of two centuries. The very essence of electronic transmissions is to surmount impedances and to hasten transitions. Fifty years, I'm sure, will suffice. As for what the conversion will bring–and mean–to us, we might glean a few clues by looking to some of the "morbid symptoms" of the change. But to understand what these portend, we need to remark a few of the more obvious ways in which our various technologies condition our senses and sensibilities.

I won't tire my reader with an extended rehash of the differences between the print orientation and that of electronic systems. Media theorists from Marshall McLuhan to Walter Ong to Neil Postman have discoursed upon these at length. What's more, they are reasonably commonsensical. I therefore will abbreviate.

The order of print is linear, and is bound to logic by the imperatives of syntax. Syntax is the substructure of discourse, a mapping of the ways that the mind makes sense through language. Print communication requires the active engagement of the reader's attention, for reading is fundamentally an act of translation. Symbols are turned into their verbal referents and these are in turn interpreted. The print engagement is essentially private. While it does represent an act of communication, the contents pass from the privacy of the sender to the privacy of the receiver. Print also posits a time axis; the turning of pages, not to mention the vertical descent down the page, is a forward-moving succession, with earlier contents at every point serving as a ground for what follows. Moreover, the printed material is static–it is the reader, not the book, that moves forward. The physical arrangements of print are in accord with our traditional sense of history. Materials are layered; they lend themselves to rereading and to sustained attention. The pace of reading is variable, with progress determined by the reader's focus and comprehension.

The electronic order is in most ways opposite. Information and contents do not simply move from one private space to another, but they travel along a network. Engagement is intrinsically public, taking place within a circuit of larger connectedness. The vast resources of the network are always there, potential, even if they do not impinge on the immediate communication. Electronic communication can be passive, as with television watching, or interactive, as with computers. Contents, unless they are printed out (at which point they become part of the static order of print), are felt to be evanescent. They can be changed or deleted with the stroke of a key. With visual media (television, projected graphs, highlighted "bullets"), impression and image take precedence over logic and concept, and detail and linear sequentiality are sacrificed. The pace is rapid, driven by jump-cut increments, and the basic movement is laterally associative rather than vertically cumulative. The presentation structures the reception and, in time, the expectation about how information is organized.

Further, the visual and nonvisual technology in every way encourages in the user a heightened and ever-changing awareness of the present. It works against historical perception, which must depend on the inimical notions of logic and sequential succession. If the print medium exalts the word, fixing it into permanence, the electronic counterpart reduces it to a signal, a means to an end.

Transitions like the one from print to electronic media do not take place without rippling or, more likely, reweaving the entire social and cultural web. The tendencies outlined above are already at work. We don't need to look far to find their effects. We can begin with the newspaper headlines and the millennial lamentations sounded in the op-ed pages: that our educational systems are in decline; that our students are less and less able to read and comprehend their required texts, and that their aptitude scores have leveled off well below those of previous generations. Tag-line communication, called "bite-speak" by some, is destroying the last remnants of political discourse; spin doctors and media consultants are our new shamans. As communications empires fight for control of all information outlets, including publishers, the latter have succumbed to the tyranny of the bottom line; they are less and less willing to publish work, however worthy, that will not make a tidy profit. And, on every front, funding for the arts is being cut while the arts themselves appear to be suffering a deep crisis of relevance. And so on.

Every one of these developments is, of course, overdetermined, but there can be no doubt that they are connected, perhaps profoundly, to the transition that is underway.

Certain other trends bear watching. One could argue, for instance, that the entire movement of postmodernism in the arts is a consequence of this same macroscopic shift. For what is postmodernism at root but an aesthetic that rebukes the idea of an historical time line, as well as previously uncontested assumptions of cultural hierarchy. The postmodern artifact manipulates its stylistic signatures like Lego blocks and makes free with combinations from the formerly sequestered spheres of high and popular art. Its combinatory momentum and relentless referencing of the surrounding culture mirror perfectly the associative dynamics of electronic media.

One might argue, likewise, that the virulent debate within academia over the canon and multiculturalism may not be a simple struggle between the entrenched ideologies of white male elites and the forces of formerly disenfranchised gender, racial, and cultural groups. Many of those who would revise the canon (or end it altogether) are trying to outflank the assumption of historical tradition itself. The underlying question, avoided by many, may be not only whether the tradition is relevant, but whether it might not be too taxing a system for students to comprehend. Both the traditionalists and the progressives have valid arguments, and we must certainly have sympathy for those who would try to expose and eradicate the hidden assumptions of bias in the Western tradition. But it also seems clear that this debate could only have taken the form it has in a society that has begun to come loose from its textual moorings. To challenge repression is salutary. To challenge history itself, proclaiming it to be simply an archive of repressions and justifications, is idiotic.*

Then there are the more specific sorts of developments. Consider the multibillion-dollar initiative by Whittle Communications to bring commercially sponsored education packages into the classroom. The underlying premise is staggeringly simple: If electronic media are the one thing that the young are at ease with, why not exploit the fact? Why not stop bucking television and use it instead, with corporate America picking up the tab in exchange for a few minutes of valuable airtime for commercials? As the Boston Globe reports:

Here's how it would work:

Participating schools would receive, free of charge, $50,000 worth of electronic paraphernalia, including a satellite dish and classroom video monitors. In return, the schools would agree to air the show.

The show would resemble a network news program, but with 18- to 24-year-old anchors.

A prototype includes a report on a United Nations Security Council meeting on terrorism, a space shuttle update, a U2 music video tribute to Martin Luther King, a feature on the environment, a "fast fact" ('Arachibutyrophobia is the fear of peanut butter sticking to the roof of your mouth') and two minutes of commercial advertising.

"You have to remember that the children of today have grown up with the visual media," said Robert Calabrese [Billerica School Superintendent]. "They know no other way and we're simply capitalizing on that to enhance learning."

Calabrese's observation on the preconditioning of a whole generation of students raises troubling questions: Should we suppose that American education will begin to tailor itself to the aptitudes of its students, presenting more and more of its materials in newly packaged forms? And what will happen when educators find that not very many of the old materials will "play"–that is, capture student enthusiasm? Is the what of learning to be determined by the how? And at what point do vicious cycles begin to reveal their viciousness?

A collective change of sensibility may already be upon us. We need to take seriously the possibility that the young truly "know no other way," that they are not made of the same stuff that their elders are. In her Harper's magazine debate with Neil Postman, Camille Paglia observed:

Some people have more developed sensoriums than others. I've found that most people born before World War II are turned off by the modern media. They can't understand how we who were born after the war can read and watch TV at the same time. But we can. When I wrote my book, I had earphones on, blasting rock music or Puccini and Brahms. The soap operas–with the sound turned down–flickered on my TV. I'd be talking on the phone at the same time. Baby boomers have a multilayered, multitrack ability to deal with the world.

I don't know whether to be impressed or depressed by Paglia's ability to disperse her focus in so many directions. Nor can I say, not having read her book, in what ways her multitrack sensibility has informed her prose. But I'm baffled by what she means when she talks about an ability to "deal with the world." From the context, "dealing" sounds more like a matter of incessantly repositioning the self within a barrage of onrushing stimuli.

Paglia's is hardly the only testimony in this matter. A New York Times article on the cult success of Mark Leyner (author of I Smell Esther Williams and My Cousin, My Gastroenterologist) reports suggestively:

His fans say, variously, that his writing is like MTV, or rap music, or rock music, or simply like everything in the world put together: fast and furious and intense, full of illusion and allusion and fantasy and science and excrement.

Larry McCaffery, a professor of literature at San Diego State University and co-editor of Fiction International, a literary journal, said his students get excited about Mr. Leyner's writing, which he considers important and unique: "It speaks to them, somehow, about this weird milieu they're swimming through. It's this dissolving, discontinuous world." While older people might find Mr. Leyner's world bizarre or unreal, Professor McCaffery said, it doesn't seem so to people who grew up with Walkmen and computers and VCR's, with so many choices, so much bombardment, that they have never experienced a sensation singly.

The article continues:

There is no traditional narrative, although the book is called a novel. And there is much use of facts, though it is called fiction. Seldom does the end of a sentence have any obvious relation to the beginning. "You don't know where you're going, but you don't mind taking the leap," said R. J. Cutler, the producer of "Heat," who invited Mr. Leyner to be on the show after he picked up the galleys of his book and found it mesmerizing. "He taps into a specific cultural perspective where thoughtful literary world view meets pop culture and the TV generation."

My final exhibit–I don't know if it qualifies as a morbid symptom as such–is drawn from a Washington Post Magazine essay on the future of the Library of Congress, our national shrine to the printed word. One of the individuals interviewed in the piece is Robert Zich, so-called "special projects czar" of the institution. Zich, too, has seen the future, and he is surprisingly candid with his interlocutor. Before long, Zich maintains, people will be able to get what information they want directly off their terminals. The function of the Library of Congress (and perhaps libraries in general) will change. He envisions his library becoming more like a museum: "Just as you go to the National Gallery to see its Leonardo or go to the Smithsonian to see the Spirit of St. Louis and so on, you will want to go to libraries to see the Gutenberg or the original printing of Shakespeare's plays or to see Lincoln's hand-written version of the Gettysburg Address."

Zich is outspoken, voicing what other administrators must be thinking privately. The big research libraries, he says, "and the great national libraries and their buildings will go the way of the railroad stations and the movie palaces of an earlier era which were really vital institutions in their time . . . Somehow folks moved away from that when the technology changed."

And books? Zich expresses excitement about Sony's hand-held electronic book, and a miniature encyclopedia coming from Franklin Electronic Publishers. "Slip it in your pocket," he says. "Little keyboard, punch in your words and it will do the full text searching and all the rest of it. Its limitation, of course, is that it's devoted just to that one book." Zich is likewise interested in the possibility of memory cards. What he likes about the Sony product is the portability: one machine, a screen that will display the contents of whatever electronic card you feed it.

I cite Zich's views at some length here because he is not some Silicon Valley research and development visionary, but a highly placed executive at what might be called, in a very literal sense, our most conservative public institution. When men like Zich embrace the electronic future, we can be sure it's well on its way.

Others might argue that the technologies cited by Zich merely represent a modification in the "form" of reading, and that reading itself will be unaffected, as there is little difference between following words on a pocket screen and on a printed page. Here I have to hold my line. The context cannot but condition the process. Screen and book may exhibit the same string of words, but the assumptions that underlie their significance are entirely different depending on whether we are staring at a book or a circuit-generated text. As the nature of looking–at the natural world, at paintings–changed with the arrival of photography and mechanical reproduction, so will the collective relation to language alter as new modes of dissemination prevail.

Whether all of this sounds dire or merely "different" will depend upon the reader's own values and priorities. I find these portents of change depressing, but also exhilarating–at least to speculate about. On the one hand, I have a great feeling of loss and a fear about what habitations will exist for self and soul in the future. But there is also a quickening, a sense that important things are on the line. As Heraclitus once observed, "The mixture that is not shaken soon stagnates." Well, the mixture is being shaken, no doubt about it. And here are some of the kinds of developments we might watch for as our "proto-electronic" era yields to an all-electronic future:

1. Language erosion. There is no question but that the transition from the culture of the book to the culture of electronic communication will radically alter the ways in which we use language on every societal level. The complexity and distinctiveness of spoken and written expression, which are deeply bound to traditions of print literacy, will gradually be replaced by a more telegraphic sort of "plainspeak." Syntactic masonry is already a dying art. Neil Postman and others have already suggested what losses have been incurred by the advent of telegraphy and television–how the complex discourse patterns of the nineteenth century were flattened by the requirements of communication over distances. That tendency runs riot as the layers of mediation thicken. Simple linguistic prefab is now the norm, while ambiguity, paradox, irony, subtlety, and wit are fast disappearing. In their place, the simple "vision thing" and myriad other "things." Verbal intelligence, which has long been viewed as suspect as the act of reading, will come to seem positively conspiratorial. The greater part of any articulate person's energy will be deployed in dumbing-down her discourse.

Language will grow increasingly impoverished through a series of vicious cycles. For, of course, the usages of literature and scholarship are connected in fundamental ways to the general speech of the tribe. We can expect that curricula will be further streamlined, and difficult texts in the humanities will be pruned and glossed. One need only compare a college textbook from twenty years ago to its contemporary version. A poem by Milton, a play by Shakespeare–one can hardly find the text among the explanatory notes nowadays. Fewer and fewer people will be able to contend with the so-called masterworks of literature or ideas. Joyce, Woolf, Soyinka, not to mention the masters who preceded them, will go unread, and the civilizing energies of their prose will circulate aimlessly between closed covers.

2. Flattening of historical perspectives. As the circuit supplants the printed page, and as more and more of our communications involve us in network processes–which of their nature plant us in a perpetual present–our perception of history will inevitably alter. Changes in information storage and access are bound to impinge on our historical memory. The depth of field that is our sense of the past is not only a linguistic construct, but is in some essential way represented by the book and the physical accumulation of books in library spaces. In the contemplation of the single volume, or mass of volumes, we form a picture of time past as a growing deposit of sediment; we capture a sense of its depth and dimensionality. Moreover, we meet the past as much in the presentation of words in books of specific vintage as we do in any isolated fact or statistic. The database, useful as it is, expunges this context, this sense of chronology, and admits us to a weightless order in which all information is equally accessible.

If we take the etymological tack, history (cognate with "story") is affiliated in complex ways with its texts. Once the materials of the past are unhoused from their pages, they will surely mean differently. The printed page is itself a link, at least along the imaginative continuum, and when that link is broken, the past can only start to recede. At the same time it will become a body of disjunct data available for retrieval and, in the hands of our canny dream merchants, a mythology. The more we grow rooted in the consciousness of the now, the more it will seem utterly extraordinary that things were ever any different. The idea of a farmer plowing a field–an historical constant for millennia–will be something for a theme park. For, naturally, the entertainment industry, which reads the collective unconscious unerringly, will seize the advantage. The past that has slipped away will be rendered ever more glorious, ever more a fantasy play with heroes, villains, and quaint settings and props. Small-town American life returns as "Andy of Mayberry"–at first enjoyed with recognition, later accepted as a faithful portrait of how things used to be.

3. The waning of the private self. We may even now be in the first stages of a process of social collectivization that will over time all but vanquish the ideal of the isolated individual. For some decades now we have been edging away from the perception of private life as something opaque, closed off to the world; we increasingly accept the transparency of a life lived within a set of systems, electronic or otherwise. Our technologies are not bound by season or light–it's always the same time in the circuit. And so long as time is money and money matters, those circuits will keep humming. The doors and walls of our habitations matter less and less–the world sweeps through the wires as it needs to, or as we need it to. The monitor light is always blinking; we are always potentially on-line.

I am not suggesting that we are all about to become mindless, soulless robots, or that personality will disappear altogether into an oceanic homogeneity. But certainly the idea of what it means to be a person living a life will be much changed. The figure-ground model, which has always featured a solitary self before a background that is the society of other selves, is romantic in the extreme. It is ever less tenable in the world as it is becoming. There are no more wildernesses, no more lonely homesteads, and, outside of cinema, no more emblems of the exalted individual.

The self must change as the nature of subjective space changes. And one of the many incremental transformations of our age has been the slow but steady destruction of subjective space. The physical and psychological distance between individuals has been shrinking for at least a century. In the process, the figure-ground image has begun to blur its boundary distinctions. One day we will conduct our public and private lives within networks so dense, among so many channels of instantaneous information, that it will make almost no sense to speak of the differentiations of subjective individualism.

We are already captive in our webs. Our slight solitudes are transected by codes, wires, and pulsations. We punch a number to check in with the answering machine, another to tape a show that we are too busy to watch. The strands of the web grow finer and finer–this is obvious. What is no less obvious is the fact that they will continue to proliferate, gaining in sophistication, merging functions so that one can bank by phone, shop via television, and so on. The natural tendency is toward streamlining: The smart dollar keeps finding ways to shorten the path, double-up the function. We might think in terms of a circuit-board model, picturing ourselves as the contact points. The expansion of electronic options is always at the cost of contractions in the private sphere. We will soon be navigating with ease among cataracts of organized pulsations, putting out and taking in signals. We will bring our terminals, our modems, and menus further and further into our former privacies; we will implicate ourselves by degrees in the unitary life, and there may come a day when we no longer remember that there was any other life.

While I was brewing these somewhat melancholy thoughts, I chanced to read in an old New Republic the text of Joseph Brodsky's 1987 Nobel Prize acceptance speech. I felt as though I had opened a door leading to the great vault of the nineteenth century. The poet's passionate plea on behalf of the book at once corroborated and countered everything I had been thinking. What he upheld in faith were the very ideals I was saying good-bye to. I greeted his words with an agitated skepticism, fashioning from them something more like a valediction. Here are four passages:

If art teaches anything . . . it is the privateness of the human condition. Being the most ancient as well as the most literal form of private enterprise, it fosters in a man, knowingly or unwittingly, a sense of his uniqueness, of individuality, of separateness–thus turning him from a social animal into an autonomous "I."

The great Baratynsky, speaking of his Muse, characterized her as possessing an "uncommon visage." It's in acquiring this "uncommon visage" that the meaning of human existence seems to lie, since for this uncommonness we are, as it were, prepared genetically.

Aesthetic choice is a highly individual matter, and aesthetic experience is always a private one. Every new aesthetic reality makes one's experience even more private; and this kind of privacy, assuming at times the guise of literary (or some other) taste, can in itself turn out to be, if not a guarantee, then a form of defense, against enslavement.

In the history of our species, in the history of Homo sapiens, the book is an anthropological development, similar essentially to the invention of the wheel. Having emerged in order to give us some idea not so much of our origins as of what that sapiens is capable of, a book constitutes a means of transportation through the space of experience, at the speed of a turning page. This movement, like every movement, becomes flight from the common denominator . . . This flight is the flight in the direction of "uncommon visage," in the direction of the numerator, in the direction of autonomy, in the direction of privacy.

Brodsky is addressing the relation between art and totalitarianism, and within that context his words make passionate sense. But I was reading from a different vantage. What I had in mind was not a vision of political totalitarianism, but rather of something that might be called "societal totalism"–that movement toward deindividuation, or electronic collectivization, that I discussed above. And from that perspective our era appears to be in a headlong flight from the "uncommon visage" named by the poet.

Trafficking with tendencies–extrapolating and projecting as I have been doing–must finally remain a kind of gambling. One bets high on the validity of a notion and low on the human capacity for resistance and for unpredictable initiatives. No one can really predict how we will adapt to the transformations taking place all around us. We may discover, too, that language is a hardier thing than I have allowed. It may flourish among the beep and the click and the monitor as readily as it ever did on the printed page. I hope so, for language is the soul's ozone layer and we thin it at our peril.


9 Perseus Unbound

Like it or not, interactive video technologies have muscled their way into the formerly textbound precincts of education. The videodisc has mated with the microcomputer to produce a juggernaut: a flexible and encompassing teaching tool that threatens to overwhelm the linearity of print with an array of option-rich multimedia packages. And although we are only in the early stages of implementation–institutions are by nature conservative–an educational revolution seems inevitable.

Several years ago in Harvard Magazine, writer Craig Lambert sampled some of the innovative ways in which these technologies have already been applied at Harvard. Interactive video programs at the Law School allow students to view simulated police busts or actual courtroom procedures. With a tap of a digit they can freeze images, call up case citations, and quickly zero in on the relevant fine points of precedent. Medical simulations, offering the immediacy of video images and instant access to the mountains of data necessary for diagnostic assessment, can have the student all but performing surgery. And language classes now allow the learner to make an end run around tedious drill repetitions and engage in protoconversations with video partners.

The hot news in the classics world, meanwhile, is Perseus 1.0, an interactive database developed and edited by Harvard associate professor Gregory Crane. Published on CD-ROM and videodisc, the program holds, according to its publicists, "the equivalent of 25 volumes of ancient Greek literature by ten authors (1 million Greek words), roughly 4,000 glosses in the on-line classical encyclopedia, and a 35,000-word on-line Greek lexicon." Also included are an enormous photographic database (six thousand images), a short video with narration, and "hundreds of descriptions and drawings of art and archeological objects." The package is affordable, too: Perseus software can be purchased for about $350. Plugged in, the student can call up a text, read it side by side with its translation, and analyze any word using the Liddell-Scott lexicon; he can read a thumbnail sketch on any mythic figure cited in the text, or call up images from an atlas, or zoom in on color Landsat photos; he can even study a particular vase through innumerable angles of vantage. The dusty library stacks have never looked dustier.

Although skepticism abounds, most of it is institutional, bound up with established procedures and the proprietorship of scholarly bailiwicks. But there are grounds for other, more philosophic sorts of debate, and we can expect to see flare-ups of controversy for some time to come. For more than any other development in recent memory, these interactive technologies throw into relief the fundamental questions about knowledge and learning. Not only what are their ends, but what are their means? And how might the means be changing the ends?

From the threshold, I think, we need to distinguish between kinds of knowledge and kinds of study. Pertinent here is German philosopher Wilhelm Dilthey's distinction between the natural sciences (Naturwissenschaften), which seek to explain physical events by subsuming them under causal laws, and the so-called sciences of culture (Geisteswissenschaften), which can only understand events in terms of the intentions and meanings that individuals attach to them.

To the former, it would seem, belong the areas of study more hospitable to the new video and computer procedures. Expanded databases and interactive programs can be viewed as tools, pure and simple. They give access to more information, foster cross-referentiality, and by reducing time and labor allow for greater focus on the essentials of a problem. Indeed, any discipline where knowledge is sought for its application rather than for itself could only profit from the implementation of these technologies. To the natural sciences one might add the fields of language study and law.

But there is a danger with these sexy new options–and the rapture with which believers speak warrants the adjective–that we will simply assume that their uses and potentials extend across the educational spectrum into realms where different kinds of knowledge, and hence learning, are at issue. The realms, that is, of Geisteswissenschaften, which have at their center the humanities.

In the humanities, knowledge is a means, yes, but it is a means less to instrumental application than to something more nebulous: understanding. We study history or literature or classics in order to compose and refine a narrative, or a set of narratives about what the human world used to be like, about how the world came to be as it is, and about what we have been–and are–like as psychological or spiritual creatures. The data–the facts, connections, the texts themselves–matter insofar as they help us to deepen and extend that narrative. In these disciplines the process of study may be as vital to the understanding as are the materials studied.

Given the great excitement generated by Perseus, it is easy to imagine that in the near future a whole range of innovative electronic-based learning packages will be available and, in many places, in use. These will surely include the manifold variations on the electronic book. Special new software texts are already being developed to bring us into the world of, say, Shakespeare, not only glossing the literature, but bathing the user in multimedia supplements. The would-be historian will step into an environment rich in choices, be they visual detailing, explanatory graphs, or suggested connections and sideroads. And so on. Moreover, once the price is right, who will be the curmudgeons who would deny their students access to the state-of-the-art?

Being a curmudgeon is a dirty job, but somebody has to do it. Someone has to hoist the warning flags and raise some issues that the fast-track proselytizers might overlook. Here are a few reservations worth pondering.

1. Knowledge, certainly in the humanities, is not a straightforward matter of access, of conquest via the ingestion of data. Part of any essential understanding of the world is that it is opaque, obdurate. To me, Wittgenstein's famous axiom, "The world is everything that is the case," translates into a recognition of otherness. The past is as much about the disappearance of things through time as it is about the recovery of traces and the reconstruction of vistas. Say what you will about books, they not only mark the backward trail, but they also encode this sense of obstacle, of otherness. The look of the printed page changes as we regress in time; under the orthographic changes are the changes in the language itself. Old-style textual research may feel like an unnecessarily slow burrowing, but it is itself an instruction: It confirms that time is a force as implacable as gravity.

Yet the multimedia packages would master this gravity. For opacity they substitute transparency, promoting the illusion of access. All that has been said, known, and done will yield to the dance of the fingertips on the terminal keys. Space becomes hyperspace, and time, hypertime ("hyper-" being the fashionable new prefix that invokes the nonlinear and nonsequential "space" made possible by computer technologies). One gathers the data of otherness, but through a medium which seems to level the feel–the truth–of that otherness. The field of knowledge is rendered as a lateral and synchronic enterprise susceptible to collage, not as a depth phenomenon. And if our media restructure our perceptions, as McLuhan and others have argued, then we may start producing generations who know a great deal of "information" about the past but who have no purchase on pastness itself.

Described in this way, the effects of interactive programs on users sound a good deal like the symptoms of postmodernism. And indeed, this recent cultural aesthetic, distinguished by its flat, bright, and often affectless assemblages of materials, may be a consequence of a larger transformation of sensibility by information-processing technologies. After all, our arts do tend to mirror who we are and anticipate what we might be becoming. Changes of this magnitude are of course systemic, and their direction is not easily dictated. Whether the postmodern "vision" can be endorsed as a pedagogic platform, however, is another question.

2. Humanistic knowledge, as I suggested earlier, differs from the more instrumental kinds of knowledge in that it ultimately seeks to fashion a comprehensible narrative. It is, in other words, about the creation and expansion of meaningful contexts. Interactive media technologies are, at least in one sense, anticontextual. They open the field to new widths, constantly expanding relevance and reference, and they equip their user with a powerful grazing tool. One moves at great rates across subject terrains, crossing borders that were once closely guarded. The multimedia approach tends ineluctably to multidisciplinarianism. The positive effect, of course, is the creation of new levels of connection and integration; more and more variables are brought into the equation.

But the danger should be obvious: The horizon, the limit that gave definition to the parts of the narrative, will disappear. The equation itself will become nonsensical through the accumulation of variables. The context will widen until it becomes, in effect, everything. On the model of Chaos science, wherein the butterfly flapping its wings in China is seen to affect the weather system over Oklahoma, all data will impinge upon all other data. The technology may be able to handle it, but will the user? Will our narratives–historical, literary, classical–be able to withstand the data explosion? If they cannot, then what will be the new face of understanding? Or will the knowledge of the world become, perforce, a map as large and intricate as the world itself?

3. We might question, too, whether there is not in learning as in physical science a principle of energy conservation. Does a gain in one area depend upon a loss in another? My guess would be that every lateral attainment is purchased with a sacrifice of depth. The student may, through a program on Shakespeare, learn an immense amount about Elizabethan politics, the construction of the Globe theater, the origins of certain plays in the writings of Plutarch, the etymology of key terms, and so on, but will this dazzled student find the concentration, the will, to live with the often burred and prickly language of the plays themselves? The play's the thing–but will it be? Wouldn't the sustained exposure to a souped-up cognitive collage begin to affect the attention span, the ability, if not the willingness, to sit with one text for extended periods, butting up against its cruxes, trying to excavate meaning from the original rhythms and syntax? The gurus of interaction love to say that the student learns best by doing, but let's not forget that reading a work is also a kind of doing.

4. As a final reservation, what about the long-term cognitive effects of these new processes of data absorption? Isn't it possible that more may be less, and that the neural networks have one speed for taking in–a speed that can be increased–and quite another rate for retention? Again, it may be that our technologies will exceed us. They will make it not only possible but irresistible to consume data at what must strike people of the book as very high rates. But what then? What will happen as our neural systems, evolved through millennia to certain capacities, modify themselves to hold ever-expanding loads? Will we simply become smarter, able to hold and process more? Or do we have to reckon with some other gain/loss formula? One possible cognitive response–call it the "S.A.T. cram-course model"–might be an expansion of the short-term memory banks and a correlative atrophying of long-term memory.

But here our technology may well assume a new role. Once it dawns on us, as it must, that our software will hold all the information we need at ready access, we may very well let it. That is, we may choose to become the technicians of our auxiliary brains, mastering not the information but the retrieval and referencing functions. At a certain point, then, we could become the evolutionary opposites of our forebears, who, lacking external technology, committed everything to memory. If this were to happen, what would be the status of knowing, of being educated? The leader of the electronic tribe would not be the person who knew most, but the one who could execute the broadest range of technical functions. What, I hesitate to ask, would become of the already antiquated notion of wisdom?

I recently watched a public television special on the history of the computer. One of the many experts and enthusiasts interviewed took up the knowledge question. He explained how the formerly two-dimensional process of book-based learning is rapidly becoming three-dimensional. The day will come, he opined, when interactive and virtual technologies will allow us to more or less dispense with our reliance on the sequence-based print paradigm. Whatever the object of our study, our equipment will be able to get us there directly: inside the volcano or the violin-maker's studio, right up on the stage. I was enthralled, but I shuddered, too, for it struck me that when our technologies are all in place–when all databases have been refined and integrated–that will be the day when we stop living in the old hard world and take up residence in some bright new hyperworld, a kind of Disneyland of information. I have to wonder if this is what Perseus and its kindred programs might not be edging us toward. That program got its name, we learn from the brochure, from the Greek mythological hero Perseus, who was the explorer of the limits of the known world. I confess that I can't think of Perseus without also thinking of Icarus, heedless son of Daedalus, who allowed his wings to carry him over the invisible line that was inscribed across the skyway.


10 Close Listening

[.....]

Reading, because we control it, is adaptable to our needs and rhythms. We are free to indulge our subjective associative impulse; the term I coin for this is deep reading: the slow and meditative possession of a book. We don't just read the words, we dream our lives in their vicinity. The printed page becomes a kind of wrought-iron fence we crawl through, returning, once we have wandered, to the very place we started. Deep listening to words is rarely an option. Our ear, and with it our whole imaginative apparatus, marches in lockstep to the speaker's baton.

When we read with our eyes, we hear the words in the theater of our auditory inwardness. The voice we conjure up is our own–it is the sound-print of the self. Bringing this voice to life via the book is one of the subtler aspects of the reading magic, but hearing a book in the voice of another amounts to a silencing of that self–it is an act of vocal tyranny. The listener is powerless against the taped voice, not at all in the position of my five-year-old daughter, who admonishes me continually, "Don't read it like that, Dad." With the audio book, everything–pace, timbre, inflection–is determined for the captive listener. The collaborative component is gone; one simply receives.

[.....]


11 Hypertext: Of Mouse and Man

I have a friend, R., who is not only an excellent short-story writer and philosopher of the art, but who is also a convert to the sorcery of the microchip. R. has had a nibbling interest in hypertext–for some the cutting edge in writing these days–and he had me over to his studio recently so that I could get a look at this latest revolutionary development. Our text was Stuart Moulthrop's Victory Garden, an interactive novel by a writer who has been called one of the leading theoreticians of the genre. R. sat me down in a chair in front of his terminal, booted up, and off we went.

Or did we? In fact it was not one of those off-you-go kinds of things at all. What we had in front of us was a spatialized table of contents in the form of a map of an elaborate garden. There were mazelike paths and benches and nooks, each representing some element, or strand, of the novel. This was the option board. The reader was invited to proceed by inclination, choosing a character, focusing on a relationship, engaging (or not) a relevant subplot, and deciding whether to snap backward or forward in time. A kind of paralysis crept over me. I was reminded of Julio Cortázar's Hopscotch, where the reader learns that he can follow the chapters in a number of different sequences. But this was stranger, denser. The extent of the text was concealed (and in that sense lifelike). It was also stylistically uninspired. I felt none of the tug I had felt with Cortázar's novel, none of the subtle suction exerted by masterly prose. Still, I did not give up. I tipped up and back in my chair, clicked and clicked again, waiting patiently for the empowering rush that ought to come when worlds open upon other worlds and old limits collapse.

It was hard, I confess, to square my experience with the hype surrounding hypertext and multimedia. Extremists–I meet more and more of them–argue that the printed page has been but a temporary habitation for the word. The book, they say, is no longer the axis of our intellectual culture. There is a kind of aggressiveness in their proselytizing. The stationary arrangement of language on a page is outmoded. The word, they say, has broken from that corral, is already galloping in its new element, jumping with the speed of electricity from screen to screen. Indeed, the revolution is taking place even as I type with the antediluvian typewriter onto the superseded sheet of paper. I am proof of the fact that many of us are still habit-bound, unable to grasp the scope of the transformation that is underway all around us. But rest assured, we will adjust to these changes, as we do to all others, by increments; we will continue to do so until everything about the way we do business is different. So they say. Those with a lesser stake in the printed word, for whom the technologies are exciting means to necessary ends–to speed and efficiency–will scarcely notice what they are leaving behind. But those of us who live by the word, who are still embedded in the ancient and formerly stable reader-writer relationship, will have to make our difficult peace.

In a widely discussed essay in the New York Times Book Review–entitled, terrifyingly, "The End of Books" (June 21, 1992)–Robert Coover addressed the new situation. He began boldly:

In the real world nowadays, that is to say, in the world of video transmissions, cellular phones, fax machines, computer networks, and in particular out in the humming digitalized precincts of avant-garde computer hackers, cyberpunks and hyperspace freaks, you will often hear it said that the print medium is a doomed and outdated technology, a mere curiosity of bygone days and destined soon to be consigned forever to those dusty unattended museums we now call libraries. Indeed, the very proliferation of books and other print-based media, so prevalent in this forest-harvesting, paper-wasting age, is held to be a sign of its feverish moribundity, the last futile gasp of a once-vital form before it finally passes away forever, dead as God.

His ground set out, Coover soon focuses his attention on hypertext, which is, in this newly enormous landscape, focus enough. Here is his description of the term:

"Hypertext" is not a system but a generic term, coined a quarter of a century ago by a computer populist named Ted Nelson to describe the writing done in the nonlinear or nonsequential space made possible by the computer. Moreover, unlike print text, hypertext provides multiple paths between text segments, now often called "lexias" in a borrowing from the pre-hypertextual but prescient Roland Barthes. With its webs of linked lexias, its networks of alternate routes (as opposed to print's fixed unidirectional page-turning) hypertext presents a radically divergent technology, interactive and polyvocal, favoring a plurality of discourses over definitive utterance and freeing the reader from domination by the author. Hypertext reader and writer are said to become co-learners or co-writers, as it were, fellow travelers in the mapping and remapping of textual (and visual, kinetic, and aural) components, not all of which are provided by what used to be called the author.

This is the new picture, background and foreground, and we members of the literary community had better stop thinking of it as a science-fiction fantasy.

Ground zero: The transformation of the media of communication maps a larger transformation of consciousness–maps it, but also speeds it along; it changes the terms of our experience and our ways of offering response. Transmission determines reception determines reaction. Looking broadly at the way we live–on many simultaneous levels, under massive stimulus loads–it is clear that mechanical-linear technologies are insufficient. We require swift and obedient tools with vast capacities for moving messages through networks. As the tools proliferate, however, more and more of what we do involves network interaction. The processes that we created to serve our evolving needs have not only begun to redefine our experience, but they are fast becoming our new cognitive paradigm. It is ever more difficult for us to imagine how people ever got along before fax, e-mail, mobile phones, computer networks, etc.

What is the relevance of all this to reading and writing? This must now be established–from scratch.

Words read from a screen or written onto a screen–words which appear and disappear, even if they can be retrieved and fixed into place with a keystroke–have a different status and affect us differently from words held immobile on the accessible space of a page. Marshall McLuhan set out the principles decades ago, charting the major media shifts from orality to print and from print to electronic as cultural watersheds. The basic premise holds, but McLuhan's analysis of the print-to-electronics transformation centered upon television and the displacement of the printed word by transmissions of image and voice. What about the difference between print on a page and print on a screen? Are we dealing with a change of degree, or a change of kind? It may be too early to tell. At present, while we are still poised with one foot in each realm, it would seem a difference of degree. But as electronic communications eventually supplant the mechanical, degree may attain critical mass and become kind. Or less than kind.

Reading over Coover's description of hypertext, we have to wonder: Are our myriad technological innovations to be seen as responses to collective needs and desires, or are they simply logical developments in the inexorable evolution of technology itself? Do the hypertext options arrive because we want out of the prison-house of tradition (linearity, univocality, stylistic individuality), or are they a by-product of breakthroughs in the field? Is hypertext a Hula-Hoop fad or the first surging of a wave that will swell until it sweeps away everything in its path? If it is indeed a need-driven development–a reflection of a will to break out of a long confinement, to redefine the terms and processes of expression–then we may be in for an epic battle that will transform everything about reading, writing, and publishing.

The subject comes up a great deal in conversation these days. Disputants, many of them writers, say to me, "Words are still words–on a page, on a screen–what's the difference?" There is much shrugging of the shoulders. But this will never do. The changes are profound and the differences are consequential. Nearly weightless though it is, the word printed on a page is a thing. The configuration of impulses on a screen is not–it is a manifestation, an indeterminate entity both particle and wave, an ectoplasmic arrival and departure. The former occupies a position in space–on a page, in a book–and is verifiably there. The latter, once dematerialized, digitalized back into storage, into memory, cannot be said to exist in quite the same way. It has potential, not actual, locus. (Purists would insist that the coded bit, too, exists and can be found, but its location is not evident to the unassisted and uninstructed senses.) And although one could argue that the word, the passage, is present in the software memory as surely as it sits on page x, the fact is that we register a profound difference. One is outside and visible, the other "inside" and invisible. A thing and, in a sense, the idea of a thing. The words on the page, however ethereal their designation, partake of matter. The words removed to storage, rendered invisible, seem to have reversed expressive direction and to have gone back into thought. Their entity dissolves into a kind of neural potentiality. This fact–or, rather, this perception on the part of the screen reader–cannot but affect the way the words are registered when present. They may not be less, but they are as different as the nearly identical pieces of paper currency, the one secured by bullion-holdings at Fort Knox, the other by the abstract guarantees of the Federal Reserve System.

The shape of a word–its physical look–is only its outer garb. The impulse, the pulse of its meaning, is the same whether that word is incised in marble, scratched into mud, inscribed onto papyrus, printed onto a page, or flickered forth on a screen. Or is it? Wouldn't we say that the word cannot really exist outside the perception and translation by its reader? If this is the case, then the mode of transmission cannot be disregarded. The word cut into stone carries the implicit weight of the carver's intention; it is decoded into sense under the aspect of its imperishability. It has weight, grandeur–it vies with time. The same word, when it appears on the screen, must be received with a sense of its weightlessness–the weightlessness of its presentation. The same sign, but not the same.

Seeing is believing–or so they say. In fact, the proposition is nonsensical. Seeing is knowing, whereas believing is trusting to the existence of something we cannot see. But belief can be stronger than knowing. When we trust to the unseen, we confer power. Deities and subatomic particles and, more recently, the silicon pathways webbed into microchips–all of these we invest with a potency that we do not always grant to more objectively verifiable phenomena. Thus, the words on the page, though they issue from the invisible force field of another's mind, are insulated between covers, while the words on the screen seem to arrive from some collective elsewhere that feels more profound, deeper than a mere writer's subjectivity. But this does not necessarily invest the words themselves with a greater potency, for the unseen creative self of the writer is conflated with the unseen depth of the technology and, in the process, the writer's independent authority is subtly undermined. The site of veneration shifts; in the reader's subliminal perception some measure of the power belonging to the writer is handed over to the machine. The words on the screen, in other words, are felt to issue from a void deeper than language, and this, not the maker of the sentences, claims any remnant impulse to belief.

The page is flat, opaque. The screen is of indeterminate depth–the word floats on the surface like a leaf on a river. Phenomenologically, that word is less absolute. The leaf on the river is not the leaf plucked out and held in the hand. The words that appear and disappear on the screen are naturally perceived less as isolated counters and more as the constituent elements of some larger, more fluid process. This is not a matter of one being better or worse, but different.

There is a paradox lurking in this metamorphosis of the word. The earlier historical transition from orality to script–a transition greeted with considerable alarm by Socrates and his followers–changed the rules of intellectual procedure completely. Written texts could be transmitted, studied, and annotated; knowledge could rear itself upon a stable base. And the shift from script to mechanical type and the consequent spread of literacy among the laity is said by many to have made the Enlightenment possible. Yet now it is computers, in one sense the very apotheosis of applied rationality, that are destabilizing the authority of the printed word and returning us, although at a different part of the spiral, to the process orientation that characterized oral cultures.

Process. As a noun, "a series of actions, changes, or functions that bring about an end or result." As a verb, "to put through the steps of a prescribed procedure." Although the word is both noun and verb, in this context its verbal attributes are dominant. The difference between words on a page and words on a screen is the difference between product and process, noun and verb. The word processor is not, never mind what some writers say, "just a better typewriter." It is a modification of the relation between the writer and the language.

The dual function of print is the immobilization and preservation of language. To make a mark on a page is to gesture toward permanence; it is to make a choice from an array of expressive possibilities. In former days, the writer, en route to a product that could be edited, typeset, and more or less permanently imprinted on paper, wrestled incessantly with this primary attribute of the medium. If he wrote with pencil or pen, then he had to erase or scratch out his mistakes; if he typed, then he either had to retype or use some correcting tool. The path between impulse and inscription was made thornier by the knowledge that errors meant having to retrace steps and do more work. The writer was more likely to test the phrasing on the ear, to edit mentally before committing to the paper. The underlying momentum was toward the right, irrevocable expression.

This ever-present awareness of fixity, of indelibility, is no longer so pressing a part of the writer's daily struggle. That is, the writing technology no longer enforces it. Words now arrive onto the screen under the aspect of provisionality. They can be transferred with a stroke or deleted altogether. And when they are deleted it is as if they had never been. There is no physical reminder of the wrong turn, the failure. At a very fundamental and obvious level, the consequentiality of bringing forth language has been altered. Where the limitations of the medium once encouraged a very practical resistance to the spewing out of the unformulated expression, that responsibility has now passed to the writer.

To theorize along these lines is to court ridicule. Present the average reader with prose originally written onto the screen and prose typed onto the page, and he will wonder what is the difference. The words are the same, of course. More or less. Yet at some level, perhaps molecular, they are not the same. The difference? It must originate in the writer, more precisely in the writer in the act of composition. A change in procedure must be at least subtly reflected in the result. How could it not? More than a few writers have explained to me just how the fluidity and alterability made possible by the medium have freed them to write more, to venture their sentences with less inhibition. And the fact that one can readily move sentences, paragraphs, even whole sections, from one place to another has allowed them to conceive of their work–the process of it–in more spatial terms. These would seem to be gains; but gains, we know, always come with a price. Which in this case would be the removal of focus from the line and a sacrifice of some of the line-driven refinements of style. With a change in potential, an incorporation of a greater awareness of the whole, the tendency of stylistic attention to be local and detail-oriented decreases. I'm talking about abstract tendencies, not about the practice of individual writers. One can still be a consummate fabricator of phrases and sentences–but one must be willing to work against the grain of the technology.

Writing on the computer promotes process over product and favors the whole over the execution of the part. As the writer grows accustomed to moving words, sentences, and paragraphs around–to opening his lines to insertions–his sense of linkage and necessity is affected. Less thought may be given to the ideal of inevitable expression. The expectation is no longer that there should be a single best way to say something; the writer accepts variability and is more inclined to view the work as a version. The Flaubertian tyranny of le mot juste is eclipsed, and with it, gradually, the idea of the author as a sovereign maker.

Roland Barthes once wrote an influential essay entitled "The Death of the Author" (which chimes, I see, with Coover's "The End of Books") in which he argued, in essence, that the individual writer is not so much the creative originator as he is the site for certain proliferations of language; that the text, by the same token, is a variegated weaving of strands from prior texts and not a freestanding entity. Barthes's pitch was extreme, calculated to provoke, and he did not really have electronic communications in mind–but it is in part the arrival of the new technologies that has made his writings so prescient.

The changes brought about by the wholesale implementation of the word processor and, more radically, the various hypertext options, are really just part of a much larger set of societal circumstances, all of which are modifying the traditional roles of writer and reader. The decline of the prestige of authorship–something all writers feel and lament–has much to do with the climate of our current intellectual culture, a climate in which all manifestations of author-ity are seen as suspect. Deconstruction and multiculturalism advance arm in arm, the former bent upon undermining the ideological base upon which aesthetic and cultural hierarchies have been erected, the latter proposing a lateral and egalitarian renovation of the canon. Together they convincingly expose the "greatness" of authors and works as complex constructs, not so much unimpeachable artistic attainments as triumphs of one set of cultural forces over others.

The idea of individual authorship–that one person would create an original work and have historical title to it–did not really become entrenched in the public mind until print superseded orality as the basis of cultural communication (and maybe this "public mind" only came into being at this point). So long as there was a spoken economy, the process, the transmission, had precedence over the thing transmitted. The speaker passed along what had been gathered and distilled from other oral sources. As the print technology gained ground, however, all that changed. Fixity brought imprimatur. Verbal perfectibility, style, and the idea of ownership followed. The words on the page, chiseled and refined by a single author, aspired to permanence. The more perfect, the more inevitable the expression seemed, the greater the claim that the author could lay upon posterity. Think of the bold boasting in Shakespeare's sonnets, born of the recognition that so long as words survived (were read) the subject and the poet would both enjoy a kind of afterlife. Everything hinged upon the artistic power of the work itself.

In literary legend, Gustave Flaubert is seen as the paradigmatic maker and his Madame Bovary as the ultimate made thing. His contortions on the way to writing the perfect book, a book meditated down to its least syllable, a book that would suffer from the slightest modification of word order or punctuation, are legendary. His belief in the adequacy of language to experience had to be absolute; without it he would have had to go mad from the contemplation of unrealized possibility. Style–word order, word sound, periodic rhythm, etc.–was arbitrariness surmounted. The printed page was an objective, immutable thing; the book was an artifact. With the divestment of the creator's authority and the attenuation of the stylistic ideal, the emphasis in writing has naturally moved from product to process. The work is not intended to be absolute, nor is it received as such. Writing tends to be seen not so much as an objective realization as an expressive instance. A version. Looking from the larger historical vantage, it almost appears as if we are returning to the verbal orientation that preceded the triumph of print.

The word processor can be seen as a kind of ice-breaker for that inchoate thing that is hypertext–which is, as Coover notes, a "generic" term for writing on a computer that avails itself of some of the capacities of that technology. Hypertext is more than just an end run around paper; it is a way of giving the screen, the computer software, and the modem a significant role in the writing process.

In some ways hypertext resembles the now-familiar word processor operation. Text does not visibly accumulate, but scrolls in from and back out to oblivion. Words do not lie fixed against the opaque page but float in the quasidimensional hyperspace. Not only can they be moved or altered at will, but any part of the text can theoretically mark the beginning of another narrative or expository path. The text can be programmed to accommodate branching departures or to incorporate visual elements and documents. The lone user can sculpt texts as she wishes, breaking up narratives, arranging lines in diverse patterns, or creating "windows" that allow readers to choose how much information or description they want. And on and on.

No less significantly, the hypertext writer need not work alone. The technology affords the option of interactive or collaborative writing. And this, even more than the fluidity or the candy-store array of choices offered by the medium, promises to change our ideas about reading and writing enormously in the years to come. Already users can create texts in all manner of collaborative ways–trading lines, writing parallel texts that merge, moving independently created sets of characters in and out of communal fictional space. Coover described in his essay how he and his students established a "hypertext hotel," a place where the writers were free to "check in, to open new rooms, new corridors, new intrigues, to unlink texts or create new links, to intrude upon or subvert the texts of others, to alter plot trajectories, manipulate time and space, to engage in dialogue through invented characters, then kill off one another's characters or even sabotage the hotel's plumbing."

But while Coover sustains an attitude of exploratory optimism throughout, he does concede that he is himself enough a creature of the book to feel a certain skepticism about this brave new world. He notes what he sees as certain obvious problems:

Navigational procedures: how do you move around in infinity without getting lost? The structuring of the space can be so compelling and confusing as to utterly absorb the narrator and to exhaust the reader. And there is the related problem of filtering. With an unstable text that can be intruded upon by other author-readers, how do you, caught in the maze, avoid the trivial? How do you duck the garbage? Venerable novelistic values like unity, integrity, coherence, vision, voice, seem to be in danger. Eloquence is being redefined. "Text" has lost its canonical certainty. How does one judge, analyze, write about a work that never reads the same way twice?

Worthy, commonsensical questions. The problem is similar to that uncovered by Nietzsche: How do we ascertain or uphold values if God is dead and everything is permitted? In the case of hypertext it is not God who is gone, but the author, the traditional originator of structure and engineer of meanings. The creator who derived his essential prestige from the power of fiat: Let there be no world but this. If the game is wide open, if everything is possible between reader and writer, then how do we begin to define that game? Or do we define it at all? Does the idea of literature vanish altogether in the new gratification system of exchanged and shared impulses?

I sat in R.'s studio and did my dutiful best to get in past the wall of my resistance to hypertext. But I was still stymied. The battery of directions and option signals all but short-circuited any capacity I may have had to enter the life of the words on the screen. I was made so fidgety by the knowledge that I was positioned in a designed environment, with the freedom to rocket from one place to another with a keystroke, that I could scarcely hold still long enough to read what was there in front of me. Granted, what prose I did browse was not of a quality to compel entry by itself–it needed the enticement of its "hyper" element–but I realized that it would be the same if Pynchon or Gass had written the sentences. For the effect of the hypertext environment, the ever-present awareness of possibility and the need to either make or refuse choice, was to preempt my creating any meditative space for myself. When I read I do not just obediently move the eyes back and forth, ingesting verbal signals, I also sink myself into a receptivity. But sitting at my friend's terminal I experienced constant interruption–the reading surface was fractured, rendered collagelike by the appearance of starred keywords and suddenly materialized menu boxes. I did not feel the exhilarating freedom I had hoped to feel. I felt, rather, an assault upon what I had unreflectingly assumed to be my reader's prerogatives.

This is a matter that has not been sufficiently addressed–the ungainliness of the interaction. Not only is the user affronted aesthetically at every moment by ugly type fonts and crude display options, but he has to wheel and click the cumbersome mouse to keep the interaction going. This user, at least, has not been able to get past the feeling of being infantilized. No matter how serious the transaction taking place, I feel as though my reflexes are being tested in a video arcade. I have been assured that this will pass, but it hasn't yet. I still register viscerally the differential between the silken flow of information within the circuits and the fumbles and fidgets required to keep it from damming up. The interactive text, I suppose, cannot be any better than its reader's capabilities allow it to be.

Granted, the technology is still in its infancy. Many of the irritants will in time be refined away, and skilled writers will generate works of great cunning and suggestiveness. And readers will eventually acclimate themselves to texts encoded with signals. But even then, when trained reader encounters skilled writer, will that reader ever achieve the meditative immersion that is, for me, one of the main incentives for reading?

My guess is that the "revolution" scenarios, staple features of the New-Age "hacker" magazines, are premature and do not take into account the conservative retraction of the elastic. Innumerable possibilities will be tested–vast interactive collaborations, video inserts, much entrepreneurial fizz–but most of them will blow away like smoke in the wind. Remaining behind will be the incentives that really work–the brilliant, ingenious, artistic productions that are not merely technical tours de force but which have something to communicate, which reach the interactive reader in something more than just a cerebral way. As with all systemic processes, a natural ecology will assert itself, preserving what is useful and eliminating what is not.

Still, if the shift from typewriter to word processor altered the writer's sense of stylistic imperative, then hypertext can be seen as delivering a mighty blow to the long-static writer-reader relationship. It changes the entire system of power upon which the literary experience has been predicated; it rewrites the contract from start to finish. Coover states that hypertext "presents a radically divergent technology, interactive and polyvocal, favoring a plurality of discourses over definitive utterance and freeing the reader from domination by the author," but his tonal matter-of-factness belies the monumentality of the assertion. This "domination by the author" has been, at least until now, the point of writing and reading. The author masters the resources of language to create a vision that will engage and in some way overpower the reader; the reader goes to the work to be subjected to the creative will of another. The premise behind the textual interchange is that the author possesses wisdom, an insight, a way of looking at experience, that the reader wants.

A change in this relation is therefore not superficial. Once a reader is enabled to collaborate, participate, or in any way engage the text as an empowered player who has some say in the outcome of the game, the core assumptions of reading are called into question. The imagination is liberated from the constraint of being guided at every step by the author. Necessity is dethroned and arbitrariness is installed in its place.

Consider the difference. Text A, old-style, composed by a single author on a typewriter, edited, typeset, published, distributed through bookstores, where it is purchased by the reader, who ingests it the old way, turning pages, front to back, assembling a structure of sense deemed to be the necessary structure because from among the myriad existing possibilities the author selected it. Now look at Text B, the hypertext product composed by one writer, or several, on a computer, using a software program that facilitates options. The work can be read in linear fashion (the missionary position of reading), but it is also open. That is, the reader can choose to follow any number of subnarrative paths, can call up photographic supplements to certain key descriptions, can select from among a number of different kinds of possible endings. What is it that we do with B? Do we still call it reading? Or would we do better to coin a new term, something like "texting" or "word-piloting"?

We do not know yet whether hypertext will ever be accepted by a mass readership as something more than a sophisticated Nintendo game played with language. It could be that, faced with the choice between univocal and polyvocal, linear and "open," readers will opt for the more traditional package; that the reading act will remain rooted in the original giver-receiver premise because this offers readers something they want: a chance to subject the anarchic subjectivity to another's disciplined imagination, a chance to be taken in unsuspected directions under the guidance of some singular sensibility.

I stare at the textual field on my friend's screen and I am unpersuaded. Indeed, this glimpse of the future–if it is the future–has me clinging all the more tightly to my books, the very idea of them. If I ever took them for granted, I do no longer. I now see each one as a portable enclosure, a place to which I can repair to release the private, unsocialized, dreaming self. A book is solitude, privacy; it is a way of holding the self apart from the crush of the outer world. Hypertext–at least the spirit of hypertext, which I see as the spirit of the times–promises to deliver me from this, to free me from the "liberating domination" of the author. It promises to spring me from the univocal linearity which is precisely the constraint that fills me with a sense of possibility as I read my way across fixed acres of print.

