Sep 18 2007
 

Here’s some Q&A from the Encyclopedia Britannica online:

Quick Facts about Bellow, Saul
Biography

Q: Who is the author of “The Adventures of Augie March”?
A: Saul Bellow is the author of “The Adventures of Augie March”

Q: Who is the author of “Dangling Man”?
A: Saul Bellow is the author of “Dangling Man”

Q: Who is the author of “The Victim”?
A: Saul Bellow is the author of “The Victim”

….
Quick Facts about Plath, Sylvia
Biography
Q: Who is the author of “The Bell Jar”?

A: Sylvia Plath is the author of “The Bell Jar”

Q: Who is the author of “Ariel”?
A: Sylvia Plath is the author of “Ariel”
…..

Q: Who is the author of “The Collected Poems”?
A: Sylvia Plath is the author of “The Collected Poems”

Does the Britannica believe that anyone of any age would be interested in this nonsense? It certainly makes one wonder why Andrew Keen, in his the cult of the amateur: how today’s internet is killing our culture, is quite exercised that the “professional” Britannica may be replaced by the “amateur” Wikipedia. (In passing, Keen exults that one college’s history department banned references to Wikipedia in papers, but most college teachers would look askance at citations of any encyclopedia, I think. I certainly would.) But the online version of the Britannica is guilty of a number of quite amateurish moves, the idiotic Q&A above being just one of them.

Keen does not appear to have made any actual comparisons of the Britannica and Wikipedia, perhaps relying on publicity handouts from the former. He mentions, for instance, that Albert Einstein, Madame Curie and George Bernard Shaw all once wrote for the Britannica. He neglects to point out that all these authors have been dead for at least half a century. He further ignores the fact that the Britannica has been significantly dumbed down since those days. One can, indeed, still read Albert Einstein’s article on “Spacetime,” but rather than being part of the current edition, it is described as from “classic Britannica.” It is prefaced by remarks that it is probably going to be a hard read.

2. The Once and Future King of Reference Works

The encyclopedia, as a form, made its appearance in the eighteenth century as a multi-volume compendium of knowledge that the rich might put in their personal libraries. But by the 1960s, at the latest, the full-fledged encyclopedia as a tool for adults was largely outmoded. For one thing, significant knowledge was multiplying at too rapid a rate to be confined in a reasonable number of volumes. The Britannica hit twenty volumes long ago. By now, to keep up the same level of coverage of various fields, it might well require a thousand volumes and cost something over $50,000 in print. That would be pretty much impossible, of course. Besides that, substantial revisions would be needed very frequently to keep the knowledge at the forefront. Relatively low-cost books and large public and university libraries have meant that other sources of knowledge are just as readily accessible. The Britannica, in its 15th edition of some 40 volumes, became at best a slightly odd status symbol, one that most educated people found they could well do without.

If one adopts Keen’s outlook of opposition to “today’s Internet,” it is ironic that with its arrival the problems of excessive length and the need for rapid revision can be addressed in new ways, so that once again the idea of an all-encompassing encyclopedia becomes feasible, at least in part. To a degree, the web itself is an encyclopedia, with every search engine some sort of index. But this is a difficult-to-correct set of articles, with no necessary or obvious indications of bias, lack of knowledge on the part of the authors, etc. In this situation, the wiki method is a brilliant innovation, though perhaps it could be improved slightly. Anyone who believes she has something to say about a topic can address it in the Wikipedia led by Jimmy Wales, but if others disagree, they can make alterations. When there is great dispute, Wales or the council working with him and somehow led by him can enter in, as of course can any other outsiders. As long as the overall council acts on more or less reasonable principles, the articles tend to get better, and revisions tend to settle down — most of the time.
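
As a rough illustration of that edit-and-revise cycle, here is a minimal sketch in Python. It is not Wikipedia's actual software, and the names and the dispute handling are invented purely for illustration: every edit is simply appended to a history, and a contested change can be rolled back to an earlier revision.

```python
# Minimal sketch of the wiki method described above (hypothetical, not MediaWiki):
# anyone may edit, every edit is kept as a revision, disputes are handled by reverting.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class Revision:
    author: str                     # any contributor, expert or not
    text: str                       # full article text after this edit
    timestamp: datetime = field(default_factory=datetime.now)


@dataclass
class Article:
    title: str
    revisions: List[Revision] = field(default_factory=list)

    def edit(self, author: str, new_text: str) -> None:
        """Anyone can edit; the change is appended to the history."""
        self.revisions.append(Revision(author, new_text))

    def revert(self, steps: int = 1) -> None:
        """A disputed edit is handled by restoring an earlier revision."""
        if len(self.revisions) > steps:
            restored = self.revisions[-(steps + 1)]
            self.revisions.append(Revision("reverter", restored.text))

    def current(self) -> str:
        return self.revisions[-1].text if self.revisions else ""


# Usage: an edit, a disputed edit, and a revert back to the earlier text.
page = Article("String theory")
page.edit("grad_student", "String theory emerged from work by Veneziano in about 1968.")
page.edit("vandal", "Nonsense.")
page.revert()
print(page.current())
```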

There are problems inherent in any work being written by a committee: lack of literary style, repetitiveness, lack of overall organization in individual articles, great variation in quality between articles, some bias, etc. But these problems are also present to some degree in the EB or in any encyclopedia. The Wikipedia is very manifestly a work in progress, and it is only going to improve on the average, while already encompassing a larger swath of knowledge than the EB, with a reasonable degree of accuracy and with more currency. It is a key to Wikipedia’s success that one does not have to pay to use it, so that anyone interested can check an article and add their expertise. Because people believe that what they know and care about deserves attention, they are eager to make sure articles that matter to them are correct. The average Wikipedia writer is less good at making sure that non-experts can understand, but on the whole such writers do not do such a bad job with this.

I looked at a few dozen articles in a wide range of fields, comparing online EB with W, all being articles about which I have considerable background knowledge. These include articles about such subjects as feudalism, string theory and other aspects of current physics, various modern writers, art, recent European history, botany and zoology, American politics, simple geometry and some other math, philosophy, auto mechanics, and more. On the whole, I learned more from W than from EB, though, in general, EB articles were better shaped and less repetitive.

Keen sneeringly suggests that an auto mechanic is just as likely to write or “correct” an article on physics as a physicist. (He fails to consider articles on auto mechanics, which might also be of considerable interest to the average reader. W is far superior to EB on the subject of anti-lock brakes, for example.) Why anyone not an expert would choose to write on a subject, Keen does not explain. In truth, as far as I can tell, the W articles on string theory and other recent or contemporary physics topics were probably written by physics graduate students, up on the latest, but not so knowledgeable about history, for example. The EB String Theory article was written by Brian Greene, a Columbia physics professor and author of a couple of “popular” books on the subject. His article is too brief, but what it does say it says well. However, anyone patient enough to follow the somewhat more awkward W article will learn more about the contemporary situation and the background as well, and even the very earliest history.

String theory emerged from an interpretation of a formula offered for different purposes by Gabriele Veneziano in about 1968. He offered this as satisfying a set of requirements for what was known as the S-Matrix, which had been proposed by Geoffrey Chew. When I looked up S-Matrix theory in W, I found an article that W’s editing mechanism informed all readers was not appropriately written and should be cleaned up. The obvious reason was too many unexplained terms. The article was basically correct, in my view, but poorly written. No one reading W can be in doubt that this article has problems, and every reader gets some sense of what those problems are. On the other hand, when I looked up S-matrix in EB online, I found nothing, and the same happened when I searched for Geoffrey Chew, who at least has a brief entry in W.

W’s article on feudalism is also better than EB’s because the latter is written by an especially biased writer, whose main claim to fame is disputing that the term “feudalism” should ever be used at all. W’s article references her work, but also that of many others.

Still, W does have some pretty foolish articles. Anyone who has ever read the works of Thomas Pynchon knows that their plots are pretty much secondary to the telling of the story, but in a bizarre effort to match Cliff’s Notes, the article on Pynchon’s novel V offers  a plot outline that covers about half the chapters. Some dolt might use this as the basis for a paper to submit to a college class, but less harm would thereby be done than in most cases of the use of actual Cliff’s Notes. EB says almost nothing about V, so, while not informative, it does no harm in this case either. On the other hand, the main article about Pynchon in W is better than the one in EB.

And so on.

Wikipedia is already better than Britannica, in my view. While it will continue to have some eccentric articles, it will almost surely get better and better, and it will do so precisely because it does not charge set fees. It might do even better if it figures out a way for most contributors to have their names attached. That will increase the rewards of attention coming to the writers, and that attention will encourage care and accuracy where that has meaning — and perhaps better organization and writing.  At least this is worth some experimentation.

For instance, Wikipedia could start allowing authors who submit a brief bio and a photo to list their names, and then allow readers to judge what they have written. Those authors whose work is rated highest, and who have therefore contributed the most to an article, would have their names listed first among authors. There could be separate listings for editing. There could also be an honor roll of the best contributors to the most articles, and so on. Writing a good article remains a difficult task, but it is also a wonderful exercise in understanding and learning how to explain. I think any offer of monetary rewards should be rejected, but the attention one gets for good article writing could logically carry over into the rest of one’s life. Of course, this would create some new problems, with attempts to gain unmerited attention, claims of precedence, and many other well-known problems of scholarly infighting. But one good aspect of W is that it already leads to a degree of protective watchfulness on the part of a large number of readers.
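
To make the proposal a bit more concrete, here is a hypothetical sketch in Python of how reader ratings might be pooled into bylines and an honor roll. Nothing like this exists in Wikipedia; the rating scale, the averaging rule, and all the names are assumptions of mine, offered only to show the shape of the idea.

```python
# Hypothetical attribution scheme: opted-in contributors are rated by readers,
# bylines are ordered by average rating, and an honor roll spans all articles.

from collections import defaultdict
from statistics import mean


class AttributedArticle:
    def __init__(self, title):
        self.title = title
        self.ratings = defaultdict(list)   # contributor name -> list of reader ratings

    def rate(self, contributor, score):
        """A reader rates one contributor's work on this article (say, 1-5)."""
        self.ratings[contributor].append(score)

    def byline(self):
        """Contributors ordered by average reader rating, highest first."""
        return sorted(self.ratings, key=lambda c: mean(self.ratings[c]), reverse=True)


def honor_roll(articles, top_n=3):
    """Contributors with the best average ratings pooled across all articles."""
    pooled = defaultdict(list)
    for art in articles:
        for contributor, scores in art.ratings.items():
            pooled[contributor].extend(scores)
    ranked = sorted(pooled, key=lambda c: mean(pooled[c]), reverse=True)
    return ranked[:top_n]


# Usage: two articles, a few reader ratings, then the byline and the honor roll.
a = AttributedArticle("Feudalism")
a.rate("historian_x", 5)
a.rate("historian_x", 4)
a.rate("generalist_y", 3)

b = AttributedArticle("String theory")
b.rate("physics_grad", 4)
b.rate("generalist_y", 2)

print(a.byline())          # ['historian_x', 'generalist_y']
print(honor_roll([a, b]))  # ['historian_x', 'physics_grad', 'generalist_y']
```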

With or without that change, W will revolutionize the notion of a single source of comprehensive knowledge in the Internet era. And it will do that in dialogical fashion, which is of course the source of all worthwhile knowledge except for individual expression and autobiography. The recognition that everyone is to some degree an expert in something and that that expertise can be of value to others is one of the implicit glories of the Internet.

Sep 17 2007
 

A couple of years ago, the philosophy professor Harry Frankfurt made publishing history of a sort by allowing his 7,000-word paper “On Bullshit” — which lives up pretty well to the second word in its title — to be published as a book. Bind some printed pieces of paper together, preferably in hard covers, distribute them via bookstores at a cost of around $20, and voila, you have a book. If you choose not to buy that one, however, you can read the paper free online. A book is thus a cultural artifact, the form and meaning of which has changed throughout history. Books today tend to be printed words on paper, bound together, and thick enough that they can be located on the shelf by reading their spines. They are held together sturdily enough that you can carry them around, and today they are, especially in paperback, cheap enough that price is not the main preventative of reading them. Also today, books tend to have one author, and at least some pretense at coherence (though the occasional volume of selected or collected shorter works can be quite incoherent, and a number of books are edited collections, justified not as the work of a single author but as the selection of one or two editors). Books are of course only one way that printed works are presented; other common modes are newspapers, where articles, editorials, letters to the editor and columns can all be quite short, and magazines or scholarly journals. Pamphlets exist too. But books stand a better chance of being read from cover to cover, and of making a deep impact on the reader — at times.

Andrew Keen’s the cult of the amateur: how today’s internet is killing our culture is somewhat longer than On Bullshit, at some 40,000 words, but it is still closer to a pamphlet than a book. However, the future of the book is one of Keen’s deep concerns. Here’s a key quote:

“Silicon Valley utopian Kevin Kelly wants to kill off the book entirely — as well as the intellectual property rights of writers and publishers. In fact, he wants to rewrite the definition of the book, digitalizing all books into a single universal and open-source free hypertext — like a huge literary Wikipedia. In a May 2006 New York Times Magazine ‘manifesto,’ Kelly describes this as the ‘Liquid Version’ of the book, a universal library in which ‘each is cross-linked, clustered, cited, extracted, indexed, analyzed, annotated, remixed, reassembled, and woven deeper into the culture than ever before.’ And Kelly couldn’t care less whether the contributor to this hyper-utopia is Dostoyevsky or one of the seven dwarfs.

“’Once digitized,’ Kelly says, ‘books can be unraveled into single pages or be reduced further, into snippets of a page. These snippets will be remixed into reordered books and virtual bookshelves.’ It is the digital equivalent of tearing out the pages of all the books in the world, shredding them line by line, and pasting them back together in infinite combinations. In his view, this results in ‘a web of names and a community of ideas.’ “

Who is right? Keen or Kelly, or neither? Here I, Goldhaber, just snipped Keen, who snipped Kelly. It’s not so alarming. The practice is as old as literature itself, or even older. The “Five Books of Moses,” or the Pentateuch or Torah, better known to Christians as the first five books of the Old Testament, is clearly a compilation of texts with a variety of authors and origins. Some of these come from still earlier traditions such as the Epic of Gilgamesh, the Code of Hammurabi, and no doubt a variety of tales handed down orally. Later, Jewish rabbis wove a huge series of comments and interpretations, and further comments on and interpretations of those, into a lengthy, multi-volume text known as the Talmud. Today’s theologians keep this up with further commentary, and lay authors weave aspects of all these into countless texts, songs, plays, movie scripts, derivative music, etc.

(The snipping has sometimes gone even further, down to the level of letters. The medieval Aramaic Zohar was put together by Jewish mystics who believed the meanings of the biblical texts were to be found by viewing the letters of the words as a kind of code. Much later, around 1900, supporters of the idea that Francis Bacon wrote the works attributed to Shakespeare argued that Shakespeare’s First Folio was printed with two different sets of type, and that the two were placed so as to encode, in binary, statements about the actual authorship.)

Similar things happened with Greek mythology, woven into the oral tales later written down as the works of Homer, which were culled, added to and re-snipped to become the basis of the works of the great Greek tragedians, Aeschylus, Sophocles and Euripides. The Roman poet Virgil used Homer’s Iliad and Odyssey as a basis and model for his Aeneid (a dreadful piece of gore, in my view), and the greatest poet of Italy, Dante Alighieri, used Odysseus’s passage into the underworld as one source of his Divina Commedia or Divine Comedy (which, in the translations I’ve seen, gets boring as he leaves tough, cynical Hell — Inferno — and ascends towards sweeter-than-sugar Heaven — Paradiso). Not long after Dante, his fellow Italian, Giovanni Boccaccio, wrote a collection of tales probably based in part on earlier works, which he called the Decameron. Geoffrey Chaucer soon stole many of its stories — some by direct translation, with no authorial credit — for his own Canterbury Tales.

To jump to a later time and another medium, the earliest-produced installment of George Lucas’s space epic, Star Wars, was based in several ways on famed Japanese director Akira Kurosawa’s 1958 Kakushi-toride no san-akunin, or The Hidden Fortress, a samurai tale of Shogun-era Japan. A later Kurosawa film, Ran, is in turn a Japanese version of Shakespeare’s King Lear.

Here is what Alfred Harbage, in his 1958 introduction to the Pelican Shakespeare edition, says about King Lear itself:
“The story of Lear and his three daughters was given written form four centuries before Shakespeare’s birth. How much older its components may be we do not know. Cordelia [Lear’s loving but mistreated daughter] in one guise or another, including Cinderella’s, has figured in the folklore of most cultures, perhaps originally expressing what [Ralph Waldo] Emerson saw as the conviction of every human being of his worthiness to be loved and chosen, if only his true self were truly known. The figure of the ruler asking a question, often a riddle, with disastrous consequences to himself is equally old and dispersed. In his Historia Regum Britanniae [History of the Kings of Britain] (1136) Geoffrey of Monmouth converted folklore to history and established Lear and his daughters as rulers of ancient Britain, thus bequeathing them to the chronicles. Raphael Holinshed’s [chronicle] (1587) declared that ‘Leir, the sonne of Baldud,’ came to the throne ‘in the year of the world 3105, at which time Joas reigned in Juda,’ but belief in the historicity of such British kings was now beginning to wane, and Shakespeare could deal freely with the record. He read the story also in John Higgins’s lamentable verses in A Mirrour for Magistrates (1574), and in Edmund Spenser’s Faerie Queene, II, 10, 27-32. He knew, and may even have acted in, a bland dramatic version, The True Chronicle History of King Leir, published anonymously in 1605 but staged at least as early as 1594.

“…. [the earliest date  for Shakespeare’s version is after ] March 16, 1603, when Samuel Harsnett’s Declaration Of Egregious Popishe Impostures was registered for publication. That this excursion in ‘pseudo-demonology’ was available to Shakespeare is evident in various ways, most clearly in the borrowed inventory of devils imbedded in Edgar’s jargon as Tom o’ Bedlam….”

It is a good thing copyright had not yet been invented when Chaucer or Shakespeare worked, or we wouldn’t have much of their work. Besides, if eternal copyright were the law, as some have suggested, we would not have the numerous careful, scholarly editions of Shakespeare now available to us, along with the numerous adaptations and even bowdlerizations (such as those by Thomas Bowdler himself in the early nineteenth century). Probably Chaucer’s and Shakespeare’s works would have been long lost, as some heir, abashed, denied permission to reprint. No publisher could be quite sure who the rightful heirs were, and would certainly receive legal advice not to risk the lawsuits inherent in putting out an edition.

2. Attention Leads to New Works

In any medium, any expression, worthwhile or not, if anyone at all pays attention to it, has been influenced by earlier expressions and in turn often influences later ones, so that none stands in a vacuum. Expressive works of all sorts have always been transmitted, copied, riffed on, varied, quoted, translated, honored, given homages, lovingly or unlovingly parodied, satirized, pastiched, collaged, sampled, anthologized, excerpted, used as background, restated, adapted, and so on. Sometimes the whole work is lavishly reproduced, sometimes only a plot outline is kept, sometimes there are extensive quotes, sometimes only loose paraphrases. Everything of this sort took place long before the Web was a gleam in anyone’s eye. It is an inevitable result of paying attention to any work that it influences one, for better or worse, even if one is an artist seeking to do something brand new.

3. Sitting by the Samovar

Keen specifically mentions Dostoyevsky. Few non-Russians can fluently read his original words, instead having to settle for some translation. Which translation should you choose? One way to decide is to compare them. It might be ideal to have many different translations available, so that you could flip from one to the other. It would also help to have at your disposal knowledgeable commentaries by Russian speakers very familiar with Dostoyevsky, though they will not necessarily agree among themselves. An average reader could not afford to buy all the necessary works, and it would be cumbersome to get them from a library, or even to make use of them if you had them all. You would have to open all the books, keep the pages turned to the right point, pick up each one when you want to make a comparison, etc. It would be much handier if all the translations, all the critiques, all the bits of historical or biographical background, as well as the original, were on the Internet, and you had handy ways to access them, much as Kevin Kelly proposes.
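
A toy sketch, in Python, of what such side-by-side access might look like in data terms. The structure, the translator labels, and the placeholder text are all invented; nothing here corresponds to any real archive or API.

```python
# Hypothetical parallel-text structure: several versions of the same passage,
# keyed so a reader can flip between them or lay chosen ones side by side.

passage = {
    "work": "The Brothers Karamazov",
    "location": "Part I, Book 1, Chapter 1",
    "versions": {
        "Translator A": "Placeholder rendering of the opening sentence ...",
        "Translator B": "A second placeholder rendering of the same sentence ...",
        "Russian original": "Placeholder for the original text ...",
    },
}


def side_by_side(passage, names):
    """Return the chosen versions of one passage, labeled, for comparison."""
    return [f"{name}: {passage['versions'][name]}" for name in names]


for line in side_by_side(passage, ["Translator A", "Translator B"]):
    print(line)
```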

Andrew Keen is frightened of this, because he imagines it somehow means that the original version of, say, The Brothers K (no, not Keen and Kelly, but Karamazov) would not remain itself, within easy reach of anyone who sought it for itself alone. Or even that the good translations would not remain whole. I doubt that Kelly intended that, and, even if he did, the Internet does not need to work that way. There are plenty of ways that what each person expresses can be kept separate, even if someone’s expression is a mishmash of other people’s expressions, a sampling or collage or dictionary of quotations.

As long as an author has any sort of audience, there will be those who want to bask a bit in her reflected glory, getting attention through the attention that goes to the master. In effect, whatever their conscious motives, this has long been the case for all those who prepare new translations, or who seek to edit critical editions or write biographies, or who even find the work sufficiently interesting that they want to mention, discuss or brag about having read it. This group has a vested interest in ensuring that what they consider unadulterated versions of the master’s works will be available and easily discoverable online. Where they disagree, to be sure, they will put up variant versions, but these will all be available, accessible, searchable, and so on. Each work anyone cares about will be enriched, not lost at all. If anyone took the trouble to mislead, by putting up a phony or adulterated version, fans of the author would quickly discover and denounce this, while making sure the versions they consider authentic would remain findable.

I would rather trust in that kind of certainty than have to place my reliance on the local librarian, who might decide to clear the shelves of works that somehow no longer fit with local mores, limited shelf space, cataloguing requirements, or idiosyncratic policies. And I certainly would not be willing to rely on giant publishing conglomerates whose main motive is making a buck or increasing annual profits. Today printed books are commonly remaindered within a year of publication, and remain available only by dint of the Internet market in used books. An actual all-encompassing Internet library would be far more usable.

4. A Camel is Still a Horse Designed by a Committee

Keen implies that Kelly favors readers and — possibly — clumsy authors taking apart great works and rearranging them as multiple-author messes. I do think Kelly might have gotten a little carried away in that particular direction, but we don’t have to worry, partly for the reasons I just gave, and partly because of the nature of attention.

The glued-together kind of works that Keen thinks Kelly favors are usually not very attention-holding. In paying attention, as I have emphasized before, it is much easier to align one’s own mind with one other specific mind than with a whole crew, especially if the participants in that crew are not highly coordinated. A small group of very good jazz musicians may be able to jam together beautifully and coherently, but that sort of collaboration is rare, and rarely works well. You never hear a whole orchestra just jamming, because it would be impossible to follow. We do not find novels, plays, poems, paintings, sculptures or musical compositions with fifteen authors, and usually not even as many as two, unless their tasks are strictly sub-divided, or there is one clear leader for the whole work. Members of dance troupes work in coordination, not by individual whim, with one director or choreographer overseeing the totality of movement. Sports teams larger than those in doubles tennis have coaches who coordinate their practice sessions, decide on the range of plays they can handle, and instruct them when to use different ones. We could not follow the plays otherwise.

What about movies? Anyone who sits through the credits rolling at the end of current ones sees that hundreds or even thousands of people are often involved. But they do not each work autonomously or have equal say. Rather, one, or sometimes two or still more rarely three equal collaborators shape each movie by directing and coordinating all the rest. Often the key person is the director, sometimes a screenwriter, sometimes a producer, or even an actor. But whenever more than one person is the key, conflicts can arise and the work loses coherence, to the point that virtually no one can pay close attention to it.

That was not always so, of course. Early books were simply collections of anything that could be copied and seemed to hold the copyists’ attention (as in fact Kelly points out in his article). But with the advent of printing, and in fact somewhat earlier, the idea of the author took pretty strict form, and as books became common, the one-author work predominated. The fact that each book is a single physical item, visible for itself, whether on one’s bedside table, in a backpack or on a shelf, is a goad to reading it, picking it up again if one has started it, and basically reminding oneself of its separate and hopefully coherent existence. If you have access to all the books that have ever been written, even on a handy book-sized device you can carry around with you as conveniently as a paperback, you will not have the same physical goad to continue reading where you left off. At the very least, a different kind of mental discipline than has been common will be required.

In today’s world, with so many calls on our attention, it is quite possible that many readers will lack the sustained concentration to get through an entire book. Though more novels are written than ever, the readership of “serious” novels seems in any case to be getting smaller. People buy thrillers to read on plane trips and then throw them away. Even that habit is under threat from onboard movie or video watching, whether on screens provided by airlines or on laptops one takes along. But none of that implies the absence of a steady and even growing audience of truly dedicated novel readers, sub-divided into groups with different kinds of tastes, following different “schools” of literature, which also include comic-style “graphic novels,” such as Art Spiegelman’s Maus.

There is also an audience developing for extremely short fiction. Heretofore, the short story could not stand alone. Keen refers to an article by the great Argentine fiction writer Jorge Luis Borges, which was in fact a precursor to one of his typically very short stories, “The Library of Babel.” Borges made clear he thought novels were excessively long, and many of his stories were intended to imply that each described an actual much longer work. However, because his stories were so short, they simply could not be published individually, and had to appear either in magazines or as parts of collections. With the Internet, extremely short fiction a la Borges — or even shorter — can stand alone, as can mini-essays, poems, etc. (As with texts, since the ’60s or so, our styles of moviegoing and CD distribution have left no room for what used to be known as short subjects; now they can burgeon once more. YouTube-style movies, a few minutes long, could one day have all the sophistication of a full-length film, compressed into a very short space.)

For this shortening, the web provides a new means, but insofar as shorter attention spans are now perhaps normal, the web is merely a symptom, not a cause. The “Western Canon” was under merciless attack in the groves of academe long before “today’s Internet.” With the death of must-read literature has also come the fall of “Reader’s Digest Condensed Books,” the “Book of the Month Club,” and their ilk, which chose each month what “middle-brow” readers needed to read. Intense calls on our attention come from sources such as the numerous TV channels, ubiquitous phoning, and much else that would exist even without an Internet.

Are all these trends terrible? Of course, in one way they are, in the sense that the pleasure and personal growth that come from immersing oneself in serious novels of some length are different from — and in some ways richer than — the obvious substitutes. It’s possible that people who do not take up and get through the challenge of serious literature will be shallower people with less-developed mental capacities than those who do. It is also possible — and indeed likely — that other attention-getting modes, even possibly including computer games, will take up the slack. In any event, since we cannot return to some glorious earlier time (nor would we really want to if we could), it still strikes me that the best way to hold on to what was good about the past is to increase opportunities to latch onto it, much more as Kelly suggests than as Keen does.

Sep 05 2007
 

“Prostitutes and gigolos are sexual professionals. Through hard work and experience, they are now masters of their craft. The best surely deserve excellent pay for what they do. If we have sex with amateurs, without paying for it, how will the professionals be able to continue to offer their vital services? Our culture will be destroyed. Ancient traditions will come to a halt. And the masters, the real pros, have yet another vital function: they help spread much-needed venereal diseases that keep our medical workers employed; how can we hope to maintain our way of life without the pros?” I suspect that is roughly what Andrew Keen would have written had he been around to comment on the ’60s sexual revolution.

At least that is the impression I get when he warns in his new book, the cult of the amateur: how today’s internet is killing our culture, against bloggers, video uploaders and Wikipedia writers. To him they are amateurs who will displace “professional” journalists, ad copywriters, encyclopedia writers, political consultants, and so on. The trouble is, he seems basically to define “professional” simply by the fact that, whatever the people in question do, like prostitutes they insist on being paid for it.

It’s true that most of us would be rightfully suspicious of amateur airline mechanics or brain surgeons, but not all so-called professions are the same. When we partially professionalize sports down to the level of Little League, we lose much of what active games once offered: free play, enjoyment for the participants, and a role for everyone regardless of skill. Professionalized athletes are good at starring, at showing off for the rest of us, and even at entertaining, but excluding the duffers is not necessarily a good side effect. Similarly, today’s politicians are professional at the art of getting elected, rather than at keeping the interests of the public at large at heart, at having the courage to do the right thing, or at leading opinion by making clear cases for the common weal. Professional journalists know how to write an article, how to interview “the usual suspects,” and how to repeat what passes for common wisdom among their fellow journalists and those they most often interview. However, they often lack the wide knowledge of a field such as history, political movements or science that is a necessary background for writing sensibly about the topic at hand. Journalism schools do not teach such subjects, at least not in any depth. (I will get to encyclopedia writers and ad writers in the next installment of this review.)

Keen offers only two examples of “professional journalists” — Thomas Friedman of the NY Times and Robert Fisk of the London Independent. These are not reassuring examples. They both are, in Keen’s view, experts on the Middle East. One would expect two professional and highly reputed brain surgeons to agree most of the time on the broad outlines of how to treat particular cases. But Friedman and Fisk hardly ever come out the same. Both have very strong — but differing — ideological biases, along with quite different ideas of whom to talk to. Depending on which newspaper you read, therefore, you would get a markedly different sense of how the world is. I trust neither of them, as it happens. They both lack wider judgment. I don’t want either shaping my mind too fully, and even both together would make a hash of things. (Of course, there are millions of other “experts on the Middle East” — those who grew up or live there permanently. They of course would vociferously disagree with most of the others about anything related to the topic. But that is just the nature of geographical area “expertise”; there are few objective truths.)

2. We could use a Thomas with more doubts
In the run-up to the current Iraq war, which Keen admits is a huge folly, Friedman was one of the main cheerleaders, continually arguing that Iraq could easily become a democracy that would then be a beacon and a model for the entire Middle East (meaning Southwest Asia plus North Africa), which would then undercut support for Islamic terrorism. No step of that argument ever made the least sense, as many observers, expert and non-expert on the Mideast, blogger and non-blogger, said at the time. In the past week, almost five years since his war-cheerleading days, Friedman finally has suggested that the person needed to keep peace in Iraq was none other than Saddam Hussein, the dictator he was so eager to depose.

The problem with Friedman, as with Judith Miller, another NY Times Middle East “expert,” Michael Gordon, their military affairs guy, and Howell Raines, the Times’s editor at that time (along with hundreds of others with different employers), is that they are part of an establishment, in Washington and elsewhere, whose members get attention through access to others who also get attention, and are likely to be excluded if they happen to note that the emperor has no clothes. So they tend to find elaborate reasons why what appears to the unaided eye to be nakedness is really the most subtle and skillful finery.

The Washington DC equivalent of the Academy Awards is the annual dinner of the White House Correspondents’ Association, at which the President is always the most honored guest, and which is usually attended by assorted movie and other stars. The main difference from the Oscars is that it is not widely televised, but as in Hollywood’s turn at self-celebration, there is entertainment. In 2006, the standard joke-telling role was assigned — apparently by someone who had never watched him — to Stephen Colbert of Comedy Central’s Colbert Report. He did not keep to the expected harmless one-liners, but instead dared, in the President’s presence, to declare at last, and very funnily, that Bush was wearing not even a (metaphorical) stitch. The regular White House reporters, including Elizabeth Bumiller of the Times, were incensed, describing Colbert’s shtick as decidedly unfunny and rude. But it was captured on YouTube, and the jig was up. In a democracy, certainly, rudeness to a president can be a higher civic duty.

3. Professions and Attention

Every profession — that is, any group whose members are all viewed by the public as proceeding in some particular way with some basis in common skill and knowledge — gets some attention and shares some internally as well. But the degree to which this is central to their activities varies a lot. An excellent brain surgeon or airline mechanic may never be known to the larger public and may not much care. Near the other extreme are reporters and politicians. Like movie stars, novelists and other artists, they would not fare well without nearly constant attention from quite large audiences. Unlike artists, however, but like many business leaders and others, they find themselves in an intrinsically compromised role: they get attention in part by claiming to provide a kind of objectivity that goes along more with the old order of the Money-thing World than the new.

Those professions that are farther from the attention extreme tend to do something whose success can be measured strictly on the basis of individual achievement. An oil well’s success can be measured strictly in monetary terms if you know the output in barrels and the price of oil that day. The geologist who determined this was a good place to drill can measure her success by the same standard. Similarly, a factory that turns out standard 100-watt light bulbs can measure the worth of the bulbs with fair accuracy, and the manager’s success should be related to that. A land surveyor’s accuracy or a surgeon’s success rate with a certain kind of operation is also pretty independent of audience attention.

But a reporter’s success or a singer’s or even that of an encyclopedia writer or an ad copywriter cannot be determined without taking into account the attention the work gets. And that attention, as I have discussed before, flows through the work to the writer or performer herself. Accuracy matters for a reporter’s work, for example, but in a news article, accuracy alone does not make the article worthy of attention. News matters if the audience cares about it, which will be less true if they have heard it before or if the subject matter does not grab them.

Bylines matter too; reporters strive to get attention on the basis of the ways they cover topics and what topics they specialize in, but they often need to share the attention of the people they interview or write about, and building those people up can enhance their own stardom very easily as well.

4. This just in! We have less news!

“Stop the presses!” That was a great line in old movies, yelled by an actor playing a reporter rushing into a newsroom. But would that scene seem realistic today? In truth, less and less news nowadays is simply the reporting of clear, objective facts that “matter,” rather than interpretation, regurgitated press releases, attempts to dig up a story based on mere shreds of evidence, or near-essays on hopefully interesting topics. No wonder more and more citizens tune out.

If we imagine the world of a thousand years ago, say in Western Europe, though there were certainly no newspapers, “news” could be of vital importance. What marauders or invading groups of knights might be coming this way? Which lord has died recently; which has interlinked his fortune with another lord through marriage; which overlord might be traveling through, surveying his and his vassals’ estates? What epidemics have been heard of? And, in the few active ports, what ships have come in, or which might have foundered? And so on.

By the nineteenth century, when daily newspapers were beginning to take on some of the characteristics still present today (and from which many current newspapers trace their origins), the news of the day still consisted of reports of fronts advancing in frequent wars (such as the Civil War, the Mexican and Spanish-American Wars, and numerous battles against American Indians); riots; land rushes; gold and other “strikes” that led to numerous gold and silver rushes in California, Colorado and Alaska, for instance; labor strife; epidemics; assassinations; nation-states coming into being — Italy, Germany, and all the nations of Latin America among them; train and ship wrecks; news of ships safely but unexpectedly arriving in port; and discoveries by explorers trekking through uncharted spots, which, as little as a hundred years ago, included the North and South Poles. (Much of that news, by the way, was without bylines, except perhaps “from our correspondent” — as anonymous, and sometimes as venomous or libelous, as anything decried by Keen on the Internet now.)

As recently as the 1950s and ’60s, for Americans, such news, though referring to more distant events, had the same kind of daily importance. Reports of advances or retreats by armies (in the Korean War), of ship or train wrecks or plane crashes were common. It was even still of some relevance in a place like New York to know which ships had docked that day, because passage time was unreliable. Epidemics such as polio were still serious and unpredictable scourges affecting many families. Labor strikes were big enough to have major impact on daily life. So were civil rights struggles in the South, riots in major cities, student actions, assassinations, frequent coups abroad, anti-colonial and other revolutions, etc. The Cuban missile crisis of 1962 apparently had the world poised on the brink of nuclear war.

Today, on the whole, such newsy news is a thing of the past. Despite “embedded reporters” in the initial invasion of Iraq, the war in the traditional form of an advancing front did not last long, nor was the outcome of that phase in doubt. Daily reports of suicide bombings, etc., fade into a constant background noise, with nothing specifically newsworthy apart from the specifics of the latest outrages. Who is “winning,” if anyone, is not something reporters can readily discover. This is more or less a repeat of Vietnam, where there were no real fronts most of the time.

9-11 itself was a shocking and unprecedented event, to be sure, but it has not actually presaged anything like the battles of major wars. Despite many claims that we are in a long war with terrorists, so far there is only that one extremely traumatic event to demonstrate it. Six years later, little actual news can be reported that bears on the progress of that war. Similarly, though we are treated to scares about a variety of epidemics that could possibly prove highly lethal, in reality very few Americans die of them, or they are fairly quickly stopped in their tracks (at least here at home). AIDS was a scourge, and is still certainly a danger, but it no longer has widespread impact in the advanced countries.

Even political leaders seem to be less available as targets of assassins than they once were. It would seem, then, that actual “professionals” such as professional administrators, Secret Service agents, or air traffic controllers (along with airline pilots and mechanics), who prevent most air disasters, do their jobs so well that the world has become, from a news point of view, a much more known and therefore duller place. A much smaller percentage of daily reporting refers to unexpected occurrences that are especially newsworthy on the day the stories happen to be published.

Yet we have more professional journalists (that is, those who are paid, and who have studied journalism — or media — in college or graduate school) than ever. Press conferences for even minor events or entirely staged happenings are often crowded. One of the most familiar scenes is of someone standing before a huge bank of microphones, with dozens of news photographers jammed together to shoot pictures and reporters trying to hear what is said and to sneak in one or two lines of “exclusive interview.”

5. News Stars Rock! (They hope.)

Why are there so many reporters now if there is less news than ever? Only because, I would argue, journalism is exciting as a potential way to get attention. Where once many news reporters were anonymous, most today get bylines, and can become quite famous, at least in news circles, for their reports or columns. We all have heard of Woodward and Bernstein, and as a result “investigative reporting” has become a desirable calling, even though it often amounts to little more than seeking out criminal behavior on the part of politicians; the reporters often have little real understanding of what might be important to probe in order to reveal worrisome aspects of the larger society, and such news needs a hook if editors are to run it.

Allegations of even minor criminal matters capture reporters’ imaginations, and sometimes do pull in large audiences. A politician like Senator Larry Craig can be a great and useless nincompoop, of little interest to anyone but his constituents, until caught doing something slightly weird in a public restroom. Would any professional reporter have thought to report on his mediocrity were it not for this bizarre irrelevancy? Andrew Keen suggests that only bloggers report such trivialities. This is the opposite of the truth. (Keen falsely cites the Swift-boaters’ attack on Kerry in 2004 as if it were mostly done by blogging. In fact, the main effort was a series of TV ads.) During the 1988 presidential campaign, Gary Hart was forced to abandon his run because reporters for the Miami Herald discovered him apparently shacking up with someone not his wife. There were no bloggers then.

In an earlier era, to be sure, reporters kept quiet about JFK’s numerous liaisons, because they took them as a matter of course — and perhaps, in those days, there was enough real news to go around. Today’s professional reporters are much hungrier, since there are now so many of them and less newsy news to report, so they would eagerly pick up on almost anything, even if the source were a blogger. Yet editors seem afraid to stick out their necks by allowing reporters to report on anything that other reporters haven’t caught up to, so news people often travel in packs. Bloggers who are not professionals can take up issues where reporters dare not tread, and thus have become a vital resource.

Keen quotes at face value a business reporter for the San Francisco Chronicle who self-servingly claims that the difference between professional reporters and bloggers is that only the former risk going to jail over their stories. This is utter bilge. In fact, reporters are at least somewhat protected by shield laws from going to jail for keeping their notes out of the hands of prosecutors. Bloggers at present have no such protection. Bloggers also risk suits for libel, just as reporters do. To be sure, reporters can risk jail or even death in places like Iraq, but certainly so do bloggers. Reporters have newspapers and professional associations to protect them and stand up for their rights. Bloggers are much more naked. Even in Iraq, indeed, it is much safer to be a journalist for a major US paper than to be someone interviewed by one, and bloggers are in as much danger in such locales as the average person who dares speak to the press.

Many bloggers of course have little to contribute, but so do many reporters. Often bloggers are just editorializing, but editorials can be important in newspapers, and if bloggers have a freer rein, that in itself can be of value. To be much read as a blogger, one has to have a distinctive voice and some specific point of view, some specialty or other, so bloggers can at times be much deeper than news reporters. Beholden to no one, they can say the truth as they see it.

6. Wanted: Want Ads

One of Keen’s main concerns is that if bloggers replace newspapers, journalism will die because it will not be rewarded. Of course, what is rewarded and how that occurs does change as we move into Attention-World. Still, there are no sacrosanct methods of rewarding reporting anyway. Some reporters have always been considered so valuable that they earned a living through subscriptions to their newsletters. Others have been one hundred-percent supported by advertisers, but that can be a tricky source of reward, since one dare not bite the hand that feeds.

Generally speaking, in Attention-World, those who pay attention will strive to satisfy the attent’s needs, to the extent they can. (All of this is explained in my draft chapter on attention.) Why is it better that this be done indirectly, as Keen would like, say by continuing to have newspapers supported substantially by classified ad payments? When mass-distribution newspapers first appeared on the scene with their high-speed presses, high circulation and delivery to many neighborhoods of a city, it worked out well for newspapers to bundle classified ads in, since they could be printed when the presses would otherwise be idle.

The cost of those ads, however, was set artificially high. The advertisers could not usefully complain, since the newspapers had a relative monopoly, and the classified advertisers themselves were in no position to unite to fight the rates for what were often one-time ads. So the cost of reporting was largely borne by folks looking for jobs, places to live or used cars. These were not necessarily the same people as those interested in the news stories the paper carried. (I am assuming the claim is true that customers of products or workers getting jobs ultimately pay for the advertising costs.) The relatively poor in effect subsidized the relatively better off who read the news, as well as the reporters themselves.

When a new technology such as the Internet makes possible classified advertising essentially for free, why should we not look at that as something positive? We will just have to find new methods to reward those to whom we actually pay attention. How we do this can vary, and we might have to invent new modes, but there is no reason to suppose we won’t want to. True, our attention may not continue to go to the “traditional” news media, but our traditions on this score have constantly changed anyway. High-speed presses and major city-wide dailies came in only towards the end of the nineteenth century; news magazines started in the 1920s; radio news came to prominence in the ’30s and ’40s; TV broadcast news became a nightly staple for most in the ’60s; cable news networks grew in the ’80s and ’90s; and now the Internet, with bloggers of various kinds and YouTube, is playing a larger role.

Ordinary people have always found some way to discuss whatever news seemed important to them. Today, a considerable proportion of that news, like this essay, is in itself a kind of meta-news. We may be more interested in issues surrounding who gets attention, or how to get it or restrict it, than in anything else.  But since a very large number of us are interested in that to some degree, being part of the conversation is of growing value. People eagerly — and sometimes very intelligently and articulately — add their comments to news articles, news columns, and blogs. They e-mail each other articles of interest, or  engage in detailed discussions on listservs.

Of course, much of what is said is not so intelligent or articulate, by whatever standards you choose to use, but it is no worse than what is said on some of the cable-news channels or on talk radio, or what formerly got into many letters-to-the-editor columns, or even was said by “professional” columnists and reporters on smaller papers, etc. Nor is it worse than the conversations people used to have in the local pub or their neighbor’s kitchen or in college dorms.