Dec 06, 2007

In a recent post, Gwen Bell cites my work as a partial basis for her thoughts about “personal branding.” She has some sensible suggestions, but I think the idea of personal branding — common though it is — gets things backwards.

1. Meet the Smith Brothers, Trade and Mark

Of course, most of us know or would recognize hundreds or even thousands of brands: Heinz Ketchup, Toyota Prius, Apple iPod, AT&T, Hilton Hotels, Pepsi, Dr. Martens boots, Republic of Tea, the Tamagotchi “pet,” and on and on. But what is a brand? Why do brands relate to persons differently than persons relate to each other? Two aspects of brands are important.

The first is their linguistic role. Superficially, a brand name is an example of a “proper noun,” similar to the name of a place or person. As it happens, I just returned from Spain. The word “Spain” is a proper noun or name, signifying a particular place, a particular country, and a particular society, all at once, and all with the same complex history. But Spain, at least in this sense, is not a brand similar to those I mentioned above. There is only one Spain, while there are many, many identical bottles of Heinz Ketchup on tables throughout the world, just as there are many, many Toyota Priuses on the roads, and millions of iPods plugged into people’s ears, etc.

Linguistically then, a brand (name) is more like an ordinary noun. It differs from a common noun, such as “table,” “cow,” or “strawberry,” in that it is supposed to refer only to a line of pretty much identical products that all are associated with a particular company. Heinz Ketchup, Toyota Prius, and iPod, for instance, are to be kept separate in our thoughts from similar, more generic ones such as no-name ketchup (or catsup), just any hybrid-drive car, or any old MP3 player. But you or I or any person is as singular and unique as Spain. There is only one of you, not many identical versions. Even if your name happens to be as commonplace as “John Smith,” you are a particular John Smith, found only in one place at one time, specific in yourself, and except perhaps for relatively rare cases of identity theft, in no danger of not being specific. (Common “identity theft” does not steal anything more than financial identity; it does nothing to the individuality of your thoughts, feelings, personal relationships, physical movements, expressions, etc.) You are already much more individual than a brand would make you.

(Philosophers sometimes say that common nouns have a definition or sense, whereas proper names only refer to or label individuals. For instance, your name singles you out, but does not define you — except perhaps by implying you are a human being. Here again, brands are intermediate between definable things such as cars, and mere labels.)

2. Which Smith Brother, again?

The second point is that regular brands — far from being something that individuals need to emulate — are actually reminders of the singular persons or personalities who originated or stand behind the branded products or services. Microsoft has thus far meant Bill Gates as its driving force. It’s possible that it will come to mean Steve Ballmer as well or instead, but that remains to be seen. Apple represents Steve Jobs; Ford once represented Henry Ford; Kodak represented George Eastman, who invented the notion of easy photography; etc. The founder or re-founder’s personality sometimes continues to inform and shape the company in question long after she or he is gone. In the terms I have offered before, your attention passes through the brand (or the branded object) to the prime person or persons behind it, just as your attention right now goes through the words you see on the screen to me.

As the influence of the prime personality or personalities fades over time, the brand itself almost always tends towards the generic. It eventually becomes meaningless, except in those cases where it clearly comes to represent a new personality, who gives it new singleness of meaning. The process of dry photo-reproduction by electrostatic means was invented by Chester Carlson and introduced to the world under the Xerox brand. But soon enough, the word “xeroxing” just came to mean generic dry photo-copying, using any machine that could do it. Xerox, Inc., had to go to ludicrous lengths to “defend” its brand, but it still cannot prevent ordinary people from using the verb “to xerox” in a generic sense, with no regard for what company produced the machine being used. The company has had a hard time finding a strong personality to give new meaning to the brand. When you use a copier without thinking of the brand, whatever attention you pay to it may go, without your knowledge, to Chester Carlson, but not via the Xerox brand particularly; that is, it goes to no instigator in the Xerox company or any other.

Without such new personalities, the best a company can do is to stick rigorously to the initial impetus of the founder, say in the case of an unchanging product like Tabasco sauce. But even this is going to be a matter of interpretation, especially if there is to be any further innovation at all; someone has to at least be the “high priest” who interprets the founder’s or founders’ intent in new circumstances, so that personality will then be the one behind the name. This will be true, of course, even when most customers have never heard of this leader, just as they may never have heard of the original founder by name.

3. Why Pablo Picasso is Not a Brand

Of course, in being the brainchild of relatively few distinct personalities that still influence the “culture,” corporations are far from alone. The US still is partly shaped by the visions of the likes of Jefferson and Hamilton, and, more distantly, Locke and Montesquieu, though the details have certainly changed. The Roman Catholic Church to some degree still represents the historical Jesus and some of his early interpreters.
On a still more personal basis, the great artistic, musical, literary and architectural creators are still represented by their works, and to some degree by the styles they introduced. In Spain, I went to see Picasso’s famous painting Guernica. It used to be in New York, but now you have to go to Madrid to see it, because it is unique, and totally unlike Heinz Ketchup in being so. A Rembrandt is a Rembrandt; a J. S. Bach cantata is clearly Bach; Hamlet is a Shakespeare play. Much the same goes for philosophical, political and even scientific innovators. Also for actors, singers, and other performers, especially since around 1900, when it became possible to record their work.

The point is that by being who they are, taking their own expressive and creative impulses and thoughts seriously, all these people do not need to pay any attention to the concept of branding. They are themselves, and they reveal themselves in everything they do. As selves, they all evolved throughout their entire creative lives. A late Rembrandt or Picasso painting or drawing has some connection with earlier ones by the same artist, but also substantial differences that can only be connected by studying the works in chronological relationship. The late Wittgenstein takes off from the earlier one but is profoundly different as well. Much the same holds for more recent or living creators in whatever medium, even minor ones.   Being true to one’s evolving inner sense of what is right and important and what comes forth now — this minute —  is what one has to keep at, not some superficial characteristics.

4. Moving the Goalposts

Gwen Bell says that a blogger, to be distinctive, needs to be clear about “goals.” In what sense she does not make clear: ultimate goals or the immediate creation at hand? I think goal setting is mostly a misleading concept taken over from crude business how-to texts. A significant creator pours all of who she is into the work, and the goal of the moment can be as specific as finding the right word to use in a sentence or poem, or the right gesture in acting a role, or can be much more long-range, but the goal emerges from the intensity of what one is about and who one is, not the other way round. As a portraitist, the artist Giacometti had the goal of actually capturing the face of the sitter as he saw it, but he was really never satisfied with his attempts. The intensity of his striving is what counted for him. Goals constantly change as one proceeds, just as in creating anything one continually redefines oneself. (A corporation can have goals only to the degree that its few real leaders do, and they too are less in need of defining these than of maintaining personal intensity and expression.)

The vast majority of would-be artists, novelists, actors, essayists, columnists, journalists, political leaders, etc., get much less attention than they might like. There simply is not enough attention from other people to go around. No matter how they try to conform to some model of good blogging behavior, such as Bell’s, most bloggers will face the same problem. Being successful as a blogger or at any other form of attention getting is primarily a question of luck, and after that, I think, of being as fully yourself at the moment as you can be. Even this rule is not any guarantee of success, and breaking the rules, whatever they are, is often a good way to stand out.

5. Look Before you Leap — or Don’t

Bell mentions thinking carefully about what you are doing. That may work. (It certainly appeals to me, except I already do too much of it.) But being very spontaneous and not thinking consciously at all might work better, for some. Personal branding, though, is a red herring, not worth worrying over. Don’t give it another thought. (Except, perhaps, the one that follows….)

6. Addendum: Subtleties about Copies

One possible response to the above would be to point out that while each creative person may be a unique individual, nonetheless, just as there are many identical but distinct Toyota Priuses on the road, defined or guaranteed as such by their brand, so the particular creative works of a person — a novel, say — might exist in many, many identical but distinct copies. Why then should we distinguish between the author’s “personal brand” and something like a car brand?

You could even say a blog exists in many copies, on the computer screen of every different person who web surfs to it. You want to read only “a John Grisham,” let’s say, just as you may want to drive only a Prius. But when you get to the bookstore, it turns out that all the John Grishams they have, you have already read. You do not mean by that that you have read the exact copy that is on the bookstore’s shelf. To have read the book is not to have read the specific copy. The book exists more basically as an idea, separate from its physical manifestation. So in a very real sense, each novel is as unique as its author, in a way that each car of a particular brand and model is not. A painting or sculpture or building has a physical manifestation, of course, meaning it can only be experienced in a particular place at any one time, and that it is not the same as copies of it. You cannot confuse having Picasso’s Guernica in your living room with having a reproduction of it, though you would be unlikely to think you drive a reproduction of the Prius instead of the real thing.

The idea of the Prius is of course just as singular, just as much the creation of a single person or small group of people as a blog entry or a novel or a movie or a painting is. But a Prius, like most branded objects, is a product of the industrial system, valuable not just for its idea, but for its utility in transporting you — in reality and not just in imagination — from point A to point B. It required many, many people’s efforts to make each individual one, whereas the human effort needed to create a copy of a blog entry on your screen is very, very small. Blogs, like other works of mental value, are part of the attention system, not the industrial one. That is why personal branding does not make sense.

Oct 24, 2007

0. Preface

In this and the next few installments, I will be addressing a number of connected ideas: the changing role in our lives of material things; the changing nature of firms; the rise of what I shall term hyper-creativity; how it interacts with slower moving institutions such as government; some examples; and the connection of all these with advertising.  All these are involved in the change from what I will now call the “Money-Thing Epoch” to what I will call the “Attention Epoch.” The terms in quotes are my latest attempts to find suitably apt and evocative terms to replace my earlier coinage: the Money-Market-Industrial Economy on one hand and the Attention Economy on the other.  

1. Who’s Riding Now?

According to  a poem by Ralph Waldo Emerson,  the “sage of Concord” (Massachusetts) and  the pre-eminent nineteenth-century  transcendentalist, “Things are in the saddle, and ride Mankind.”  Nowadays, as I’ve said, American and other advanced societies are passing beyond the Thing Epoch to enter the Attention Epoch, in which relations between minds tend to dominate, and the scarcity and desire for attention are what mainly structure our relations with each other. That doesn’t mean that we suddenly do without things, of course, but rather that their main function changes. Emerson’s observation does not hold any longer in the way he probably meant it. Where once the main value of a human-produced thing was utility, convenience or comfort, now it is its role as an attention focuser or intermediary.

Once you have first paid attention to someone you can then pay further attention merely by recalling that experience to mind. Mostly, that recalling will be triggered by some jog to your attention, as, for example, some situation that reminds you of some aspect of the earlier experience. Often, and perhaps reassuringly, that will be through objects you surround yourself with.

People put photos of close friends and relations — or pets or celebrities — on their desks and around their houses for this reason. But reminders of past attention paid certainly need not be images of the person in question. They can be anything that even slightly triggers recall. Any gift that you keep around can remind you of the gift-giver and turn attention in her direction. Sometimes this can be very subtle, even unconscious. The object need not be material, it could be a tune or an idea, a quote, almost anything. Still, material objects, just because they occupy the space in which we live our lives, are particularly likely to engage our senses and thus serve as reminders. As if they were windows, we pay attention through such objects to the people that seem to us to be behind them.

2. Through the Thing to the Mind Behind

Take the case of this blog, for instance (though it is only a material presence for you while you tune it in on your computer,   unless you happen to have printed out this entry). It’s pretty obvious that you are paying attention to me through it, or, in the terms I commonly use, you are temporarily aligning your mind to mine. If you happen to see the name of this blog on a list you keep, that might remind you to check it, and in even considering doing so, you might very, very briefly return to alignment with me. The same would take place if you had a book written by me and happened to see it in your house.  If you had read only a bit of it, you might be reminded by seeing it to read more. Whether you open it to read on or not, seeing the object in this case clearly would remind you of prior attention to me, and that recall would be an additional act of attention.

Whereas a book or written work quite obviously connects you to the mind of the writer, whose name you can easily discover by examining the book, in the case of many other objects, there is a distinct mind behind it, even if that is all you know. Say you have —  and like having — a Rabbit corkscrew. If so, you are somewhat aligned with the mind of the ingenious person who thought about and found a neat way to solve the minor problem of how to uncork a wine bottle smoothly, elegantly and nearly effortlessly. In having the object, sometimes using it, possibly showing it off to your guests, perhaps prizing it for its esthetic qualities, you are drawing attention to the connection between its unknown inventor and you, and both can gain. (A guest sufficiently impressed might obtain her own, and then bring you to mind a little along with the unknown designer whenever she thinks of or sees the corkscrew. Even if she never buys one or even intends to, every time she opens a wine bottle or sees her own corkscrew, she might  recall yours and you, as well as the Rabbit instigator.) It would probably be difficult to have an iPhone, and not be aware of the connection to Steve Jobs. In a slightly more complex chain, seeing, driving, or riding in a Prius might help focus your attention not only on its unknown design leader but on Al Gore.

3. Mmm-mm Good

Infants start out life generally connecting the objects around them to their parents, and in many cases find an object such as a blanket or pacifier the presence of which seems to include a parent’s attention. Similarly, food represents a parent’s loving attention, at least when it is liked. The whole category known as “comfort food” like macaroni and cheese owes its comforting status to its resemblance to what was provided by a loving caretaker in childhood. If you never were given mac and cheese in your early years, you are unlikely to find it particularly comforting now. (Remarkably, a study of medical students has shown that these college graduates are more likely to trust a drug salesman who plies them with foods like pizza than one who does not. Perhaps more teachers should feed their students if they want them paying attention. That should include medical school professors explaining why drug salesmen have ulterior motives,  in my view.)

In the case of a parent and child, the attention tied with the food passes both ways. The parent is certainly showing attention to the child in feeding her, especially when feeding her what she likes. When you buy macaroni and cheese in a store, most of the attention you might feel coming to you is illusory; the chef who possibly provided the recipe probably doesn’t know of your existence, and if the store is a supermarket probably nobody will be paying much actual attention to you. What you get instead is the illusion of attention coming your way. Very often today, material objects tend to serve as repositories for this kind of illusory attention.

4. Star-infested Underwear and Prada Bags

Children a little older, if they watch children’s TV, want items associated with the apparent stars of these programs, such as Elmo or Sponge-Bob Squarepants. Adults do much the same thing, in a slightly more sophisticated way. We associate food or songs or even furniture with the original times we paid attention to this particular item or sort of item, and thus with whoever first fed it to us or sang it or showed it to us, or perhaps simply with whomever we thought we were paying attention to when we first noticed the item. (For instance, it might have been in an ad associated with a particular TV show, possibly starring a favorite actor, or simply a show we love that came from the mind of a certain producer, whether we know her name or not.) In the case of food, it can be a chef or cookbook writer or star on the food channel who gets our attention as we eat or even cook.

5. The Future of the Present

As I mentioned earlier, many items that are purchased in America today are bought as gifts. In fact, our lavish level of gift-giving, including not only Christmas and birthdays but all sorts of occasions including weddings, baby showers, Bar Mitzvahs and the equivalent, account for perhaps nearly half of all sales in American retailing. Some of the items are purely functional, of course, and some of the gifts are more or less exchanges between equals or merely what seems to be required.  Yet, even then, the giving is intended as an act of mutual attention. The recipient is supposed to feel that Uncle Clarence, having paid attention to her, thus aligned with her mind, is aware of her needs and wants. Once the gift is given, Uncle Clarence need think of it no more, but as long as it stays out of a dark closet somewhere, the niece so gifted will be paying attention to Clarence whenever she notices or even thinks of  the object in question.

6. Object Lessons

We have been thinking mainly of material objects, but the word “object” is commonly used in two other, more specialized senses. Many followers of the psychoanalytic tradition speak of “introjected objects,” meaning persons one has paid enough attention to that they are in some sense present in one’s mind at all times. In the terms I prefer, that means that one can easily, and often even unconsciously, align or reshape one’s mind in the image of that personage’s. Psychoanalysts tend to think of “significant others” and especially parents and primary caretakers as the main sources of such objects, but anyone one pays enough attention to will be internalized as well. Meanwhile, in computer programming, there is the quite different notion of semi-independent objects that in some way can be approached as units “inside” computers. These objects can be connected bits of code that perform a certain function, or, very often, things such as images that would appear on the computer screen.

While there is no necessary connection between these different usages, my point is that there very well could be, and, if you consider something like a blog or a YouTube video to be an “object,” the connection can be strong.  I suggest that an almost inevitable future direction of computing and the  Internet will be  to make virtual objects that are more like material objects in that one frequently encounters or glances at  or feels as if one is touching them even while doing something quite different. This would make them like objects you have in your home, very much as if they were physically present. They would also be likely to evoke memories of attention paid in specific ways to specific people in the past, and incline you to pay more attention to those same people.

7. Virtual Things

Simple versions of this already exist: the lists of “buddies” available for instant messaging, or the links to individuals one knows or feels as if one does on social-networking sites such as MySpace or Facebook. But computer operating systems could go much further, incorporating something rather more like Second Life, or any computer game in which three-dimensional objects of all sorts seem to exist in a 3-D space you can move through. Such virtual things could remind you of specific “objects” (people) to whom you have given your attention, and that in some way demand more, just as a half-finished book lying beside your bed might beckon. It might be a moving image of Beethoven or Bruce Springsteen, invoking memories of their music and maybe a desire to hear more, which might be accomplished, in part, by clicking on these images. An image of an Eames chair might connect you with Charles and Ray Eames, so the virtual space you inhabit through your computer would radically revise the details of how you pay attention. The virtual world would be a sort of pictorial encyclopedia organized around what you had paid attention to before, but always opening up new avenues as well.

Today, already, many of us walk around listening to iPods or talking on iPhones or sending and receiving Blackberry messages. So in the not-too-distant future, we will be even further immersed in the enhanced virtual world, perhaps with virtual objects appearing next to “real” ones through the computerized spectacles we will wear. Purely attentional objects will then increasingly replace material things.

Oct 22, 2007

As readers of this blog already know, I first came up with the phrase “Attention Economy” to describe the entirely new kind of economic system that I see as increasingly dominating our lives. It is an economy in the sense that it involves allocating what is most scarce and precious in the present period, namely the attention that can come to each of us from other human beings. As you also know, ever since Thomas H. Davenport and John C. Beck appropriated my term for their own, different purpose in their book with my title, my usage has gotten lost in the more unreflective usage they proposed. They do not mean a new kind of economy, basically, but really refer still to the economy based on money, the market, etc. This is utterly mistaken. More and more of the activity in which we engage involves paying, passing along, receiving or seeking attention. Even the money economy is ever more tightly an appendage to such efforts, and no longer a free-standing economy in its own right. (Even D & B’s usage has been further downgraded to refer mostly to the collection of so-called “attention data” via the Internet, mostly for the purpose of advertising, a misusage that nonetheless led me to the investigations that will be forthcoming on this blog shortly.)

Overall, the book Davenport and Beck put together with my title has been very hard for me to read, though lately I have gone through it. As it happens, the very makeup of their book reveals they have barely a clue about attention, not to mention writing. (See my draft chapter on attention for a better understanding. An additional annoyance I feel is that book editors inanely tell me that there is already a book “on my subject,” namely D & B.)

The design of D & B’s book includes as many distractions as possible on every page, leading to hundreds of reasons to stop reading. Further, like most books with two or more authors, it has no single mind behind it with which the reader can hope to align. Rather, it reads as a middling sort of high-school textbook, put together by a committee and with no real goal other than making the publisher, and perhaps the authors, some money (though D & B are probably smart enough to realize that they want attention as well). As to the contents, the authors occasionally make quite astute comments, but their level of self-reflection is amazingly low, while the amount of nonsense they include is quite high.

The book has no overall point or even a consistent point of view.  Unlike even a better-quality high-school text, D & B’s  does not call upon the reader ever to think critically or reflectively or ever to have to struggle to get a key concept.  Any time a flaccid half-thought can be introduced, they put it in, as they bounce around nearly randomly from topic to topic. They never consider just why attention or its economics should be of particular importance now, partly because they seemingly have no concept of history or historical changes, of the kinds of changing motivations that arise at different times or even of the desirability of attention and why that should be.

D&B are both apparently psychologists, and there is of course a huge but problematic psychological literature on the subject of attention. (One reason it is problematic: psychologists, in doing experiments on how people or even animals pay attention, rarely consider that the experimental subjects’ attention may mostly be focused on the experimenters themselves. The subject, especially any human one, continually understands she should be doing what the experimenters ask, and that is the primary attention focus.)

D & B introduce and misuse Abraham Maslow’s 1970’s “hierarchy of needs,” which, taken literally, is nonsense anyway, just made up without any attempt to verify that needs actually occur in such a hierarchy, or in the order he proposed. It in fact conflicts with much that psychologists and ethologists (students of animal behavior) had already discovered when Maslow wrote. According to Maslow, the need for food is more fundamental than the need for attention; this is a reductive falsehood. Virtually every mammalian infant has parental attention as at least as primary a need as food; anorexia is only one sign that attention-seeking can come even before physical survival. The historical fact that the Thing Epoch came before the Attention Epoch is a matter of historical and perhaps technical contingency, not biological fact. Of course, D & B don’t have any particular point to make in introducing Maslow’s thought, other, perhaps, than to impress the gullible reader that they are saying something weighty.

At several points, D & B imply that attention may simply be bought for money, though in other places they make fairly clear that they do not themselves believe this foolishness. They do not ever seem to offer the simple truth that all that can be bought is some chance to get and hold attention, which then depends entirely on the abilities of the would-be attention-getter to connect with the audience; nor do they have a coherent theory as to how the latter might happen.

Perhaps I should not be surprised at D&B’s low-brow approach. Their book is after all intended to be read by business people. The average business person probably coasted through high school without being much interested in any complex thought that did not have to do with making money. The current occupant of the White House was in fact touted as the first President with an M.B.A. (from Harvard Business School, incidentally, the very same school whose Press published D&B). By now almost everyone can see what a disaster that has been. Despite the aura surrounding this degree, few of the best leaders in any field — including even business itself — hold it as the actual pinnacle of their formal education.

I would guess that most business people simply flip through D&B’s book, get the idea that, as they put it, “paying attention to attention” is somehow important, and probably leave it at that. Then, every time this reader notices the book, she gives a tiny bit of extra thought to attention, which is an example of how objects do focus attention. My next blog entry, in fact, will discuss just how material things — of all sorts, but especially human-made ones — now have precisely such attention focusing as their primary role. D&B’s book may not really be worth reading — if indeed it can be read — but it does serve as a model of how a certain part of the actual Attention Economy, while a mystery to them, operates.

Sep 18, 2007

Here’s some Q &A from the Encyclopedia Britannica online:

Quick Facts about Bellow, Saul

Q: Who is the author of “The Adventures of Augie March”?
A: Saul Bellow is the author of “The Adventures of Augie March”

Q: Who is the author of “Dangling Man”?
A: Saul Bellow is the author of “Dangling Man”

Q: Who is the author of “The Victim”?
A: Saul Bellow is the author of “The Victim”

Quick Facts about Plath, Sylvia

Q: Who is the author of “The Bell Jar”?
A: Sylvia Plath is the author of “The Bell Jar”

Q: Who is the author of “Ariel”?
A: Sylvia Plath is the author of “Ariel”

Q: Who is the author of “The Collected Poems”?
A: Sylvia Plath is the author of “The Collected Poems”

Does the Britannica believe that anyone of any age would be interested in this nonsense? It certainly makes one wonder why Andrew Keen, in his the cult of the amateur: how today’s internet is killing our culture, is quite exercised that the “professional” Britannica may be replaced by the “amateur” Wikipedia. (In passing, Keen exults that one college’s history department banned references to Wikipedia in papers, but most college teachers would look askance at citations of any encyclopedia, I think. I certainly would.) But the online version of the Britannica is guilty of a number of quite amateurish moves, the above idiotic Q & A being just one example.

Keen does not appear to have made any actual comparisons of the Britannica and Wikipedia, perhaps relying on publicity handouts from the former. He mentions, for instance, that Albert Einstein, Madame Curie and George Bernard Shaw all once wrote for the Britannica. He neglects to point out that all these authors have been dead for at least half a century. He further ignores the fact that the Britannica has been significantly dumbed down since those days. One can, indeed, still read Albert Einstein’s article on “Spacetime,” but rather than being part of the current edition, it is described as from the “classic Britannica.” It is prefaced by remarks warning that it is probably going to be a hard read.

2. The Once and Future King of Reference Works

The encyclopedia, as a form, made its appearance in the eighteenth century as a multi-volume compendium of knowledge that the rich might put in their personal libraries. But by the 1960s at the latest, the full-fledged encyclopedia as a tool for adults was largely outmoded. For one thing, significant knowledge was multiplying at too rapid a rate to be confined to a reasonable number of volumes. The Britannica hit twenty volumes long ago. By now, to keep up the same level of coverage of various fields, it might well require a thousand volumes and would have to cost something over $50,000 in a print version. That would be pretty much impossible, of course. Besides that, substantial revisions would be needed very frequently to keep the knowledge at the forefront. Relatively low-cost books and large public and university libraries have meant that other sources of knowledge are just as readily accessible. The Britannica, in its 15th edition of some 40 volumes, at best became a slightly odd status symbol, one that most educated people found they could well do without.

If one adopts Keen’s outlook of opposition to “today’s Internet,” it is ironic that with the Internet’s arrival the problems of too great length and the need for rapid revision can be addressed in new ways, so that once again the idea of an all-encompassing encyclopedia becomes feasible, at least in part. To a degree, the web itself is an encyclopedia, with every search engine a sort of index. But it is a difficult-to-correct set of articles, with no necessary or obvious indications of bias, lack of knowledge on the part of the authors, etc. In this situation, the wiki method is a brilliant innovation, though perhaps it could be improved slightly. Anyone who believes she has something to say about a topic can address it in the Wikipedia led by Jimmy Wales, but if others disagree, they can make alterations. When there is great dispute, Wales or the council working with him and somehow led by him can step in, as of course can any other outsiders. As long as the overall council acts on more or less reasonable principles, the articles tend to get better, and revisions tend to settle down — most of the time.

There are problems inherent in any work written by a committee: lack of literary style, repetitiveness, lack of overall organization in individual articles, great variation in quality between articles, some bias, etc. But these problems are also present to some degree in the EB or in any encyclopedia. The Wikipedia is very manifestly a work in progress, and it is only going to improve on average, while already encompassing a larger swath of knowledge than the EB, with a reasonable degree of accuracy and with more currency. It is a key to Wikipedia’s success that one does not have to pay to use it, so that anyone interested can check an article and add their expertise. Because people believe that what they know and care about deserves attention, they are eager to make sure articles that matter to them are correct. The average Wikipedia writer is less good at making sure that non-experts can understand, but on the whole such writers do not do such a bad job even of this.

I looked at a few dozen articles in a wide range of fields, comparing online EB with W, all being articles about which I have considerable background knowledge. These include articles about such subjects as feudalism, string theory and other aspects of current physics, various modern writers, art, recent European history, botany and zoology, American politics, simple geometry and some other math, philosophy, auto mechanics, and more. On the whole, I learned more from W than from EB, though, in general, EB articles were better shaped and less repetitive.

Keen sneeringly suggests that an auto mechanic is just as likely to write or “correct” an article on physics as a physicist. (He fails to consider articles on auto mechanics, which might also be of considerable interest to the average reader. W is far superior to EB on the subject of anti-lock brakes, for example.) Why anyone who is not an expert would choose to write on a subject, Keen does not explain. In truth, as far as I can tell, the W article on string theory, like other articles on recent or contemporary physics topics, was probably written by physics graduate students, up on the latest but not so knowledgeable about history, for example. The EB string theory article was written by Brian Greene, a Columbia physics professor and author of a couple of “popular” books on the subject. His article is too brief, but what it does say it says well. However, anyone patient enough to follow the somewhat more awkward W article will learn more about the contemporary situation and the background as well, and even the very earliest history.

String theory emerged from an interpretation of a formula offered for different purposes by Gabriele Veneziano in about 1968. He offered it as satisfying a set of requirements for what was known as the S-Matrix, which had been proposed by Geoffrey Chew. When I looked up S-Matrix theory in W, I found an article that W‘s editing mechanism informed all readers was not appropriately written and should be cleaned up. The obvious reason was too many unexplained terms. The article was basically correct, in my view, but poorly written. No one reading W can be in doubt that this article has problems, and the reader gets some sense of what those problems are. On the other hand, when I looked up S-matrix in EB online, I found nothing, and the same happened when I searched for Geoffrey Chew, who does at least have a brief entry in W.

W’s article on feudalism is also better than EB’s because the latter is written by an especially biased writer, whose main claim to fame is disputing that the term “feudalism” should ever be used at all. W’s article references her work, but also that of many others.

Still, W does have some pretty foolish articles. Anyone who has ever read the works of Thomas Pynchon knows that their plots are pretty much secondary to the telling of the story, but in a bizarre effort to match Cliff’s Notes, the article on Pynchon’s novel V offers a plot outline that covers about half the chapters. Some dolt might use this as the basis for a paper to submit to a college class, but less harm would thereby be done than in most cases of the use of actual Cliff’s Notes. EB says almost nothing about V, so, while not informative, it does no harm in this case either. On the other hand, the main article about Pynchon in W is better than the one in EB.

And so on.

Wikipedia is already better than Britannica, in my view. While it will continue to have some eccentric articles, it will almost surely get better and better, and it will do so precisely because it does not charge set fees. It might do even better if it figures out a way for most contributors to have their names attached. That will increase the rewards of attention coming to the writers, and that attention will encourage care and accuracy where that has meaning — and perhaps better organization and writing.  At least this is worth some experimentation.

For instance, Wikipedia could start allowing authors who submit a brief bio and a photo to list their names, and then allow readers to judge what they have written. Those authors whose work is rated highest, and who have therefore contributed the most to an article, would have their names listed first among the authors. Separate listings could cover editing. There could also be an honor roll of the best contributors to the most articles, and so on. Writing a good article remains a difficult task, but it is also a wonderful exercise in understanding and in learning how to explain. I think any offer of monetary rewards should be rejected, but the attention one gets for good article writing could logically carry over into the rest of one’s life. Of course, this would create some new problems, with attempts to gain unmerited attention, claims of precedence, and many other well-known problems of scholarly infighting. But one good aspect of W is that it already leads to a degree of protective watchfulness on the part of a large number of readers.

With or without that change, W will revolutionize the notion of a single source of comprehensive knowledge in the Internet era. And it will do that in dialogical fashion, which is of course the source of all worthwhile knowledge except for individual expression and autobiography. The recognition that everyone is to some degree an expert in something and that that expertise can be of value to others is one of the implicit glories of the Internet.

Sep 17 2007

A couple of years ago, the philosophy professor Harry Frankfurt made publishing history of a sort by allowing his 7,000-word paper “On Bullshit” — which lives up pretty well to the second word in its title — to be published as a book. Bind some printed pieces of paper together, preferably in hard covers, distribute them via bookstores at a cost of around $20, and voilà, you have a book. If you choose not to buy that one, however, you can read the paper free online. A book is thus a cultural artifact, the form and meaning of which have changed throughout history. Books today tend to be printed words on paper, bound together, and thick enough that they can be located on the shelf by reading their spines. They are held together sturdily enough that you can carry them around, and today they are, especially in paperback, cheap enough that price is not the main deterrent to reading them. Also today, books tend to have one author and at least some pretense of coherence (though the occasional volume of selected or collected shorter works can be quite incoherent, and a number of books are edited collections, justified not as the work of a single author but as the selection of one or two editors). Books are of course only one way that printed works are presented; other common modes are newspapers, where articles, editorials, letters to the editor and columns can all be quite short, and magazines or scholarly journals. Pamphlets exist too. But books stand a better chance of being read from cover to cover, and of making a deep impact on the reader — at times.

Andrew Keen’s the cult of the amateur: how today’s internet is killing our culture is somewhat longer than On Bullshit — some 40,000 words — but it is still closer to a pamphlet than to a book. However, the future of the book is one of Keen’s deep concerns. Here’s a key quote:

“Silicon Valley utopian Kevin Kelly wants to kill off the book entirely — as well as the intellectual property rights of writers and publishers. In fact, he wants to rewrite the definition of the book, digitalizing all books into a single universal and open-source free hypertext — like a huge literary Wikipedia. In a May 2006 New York Times Magazine ‘manifesto,’ Kelly describes this as the ‘Liquid Version’ of the book, a universal library in which ‘each is cross-linked, clustered, cited, extracted, indexed, analyzed, annotated, remixed, reassembled, and woven deeper into the culture than ever before.’ And Kelly couldn’t care less whether the contributor to this hyper-utopia is Dostoyevsky or one of the seven dwarfs.

“’Once digitized,’ Kelly says, ‘books can be unraveled into single pages or be reduced further, into snippets of a page. These snippets will be remixed into reordered books and virtual bookshelves.’ It is the digital equivalent of tearing out the pages of all the books in the world, shredding them line by line, and pasting them back together in infinite combinations. In his view, this results in ‘a web of names and a community of ideas.’ “

Who is right? Keen or Kelly, or neither? Here I, Goldhaber, just snipped Keen, who snipped Kelly. It’s not so alarming. The practice is as old as literature itself, or even older. The “Five Books of Moses,” the Pentateuch or Torah, better known to Christians as the first five books of the Old Testament, is clearly a compilation of texts with a variety of authors and origins. Some of these come from still earlier traditions, such as the Epic of Gilgamesh, the Code of Hammurabi, and no doubt a variety of tales handed down orally. Later, Jewish rabbis wove a huge series of comments and interpretations, and further comments on and interpretations of those, into a lengthy, multi-volume text known as the Talmud. Today’s theologians keep this up with further commentary, and lay authors weave aspects of all these into countless texts, songs, plays, movie scripts, derivative music, etc.

(The snipping has sometimes gone even further, down to the level of letters. The medieval Aramaic Zohar was put together by Jewish mystics who believed the meanings of the biblical texts were to be found by viewing the letters of the words as a kind of code. Much later, around 1900, supporters of the idea that Francis Bacon wrote the works attributed to Shakespeare argued that Shakespeare’s First Folio was printed with two different sets of type, and that the two were placed so as to encode, in binary, statements about the actual authorship.)

Similar things happened with Greek mythology, woven into the oral tales later written down as the works of Homer, which were culled, added to and re-snipped to become the basis of the works of the great Greek tragedians, Aeschylus, Sophocles and Euripides. The Roman poet Virgil used Homer’s Iliad and Odyssey as a basis and model for his Aeneid (a dreadful piece of gore, in my view), and the greatest poet of Italy, Dante Alighieri, used Odysseus’s passage into the underworld as one source of his Divina Commedia or Divine Comedy (which, in the translations I’ve seen, gets boring as he leaves tough, cynical Hell — Inferno — and ascends toward sweeter-than-sugar Heaven — Paradiso). Not long after Dante, his fellow Italian Giovanni Boccaccio wrote a collection of tales, probably based in part on earlier works, which he called the Decameron. Geoffrey Chaucer soon stole many of its stories — some by direct translation, with no authorial credit — for his own Canterbury Tales.

To jump to a later time and another medium, the earliest-produced installment of George Lucas’s space epic, Star Wars, was based in several ways on famed Japanese director Akira Kurosawa’s 1958 Kakushi-toride no san-akunin or The Hidden Fortress, a samurai tale of Shogun-era Japan.  A later Kurosawa film, Ran, in turn is a Japanese version of Shakespeare’s King Lear.

Here is what Alfred Harbage, in his 1958 introduction to the Pelican Shakespeare edition says about King Lear itself:
“The story of Lear and his three daughters was given written form four centuries before Shakespeare’s birth. How much older its components may be we do not know. Cordelia [Lear’s loving but mistreated daughter] in one guise or another, including Cinderella’s, has figured in the folklore of most cultures, perhaps originally expressing what [Ralph Waldo] Emerson saw as the conviction of every human being of his worthiness to be loved and chosen, if only his true self were truly known. The figure of the ruler asking a question, often a riddle, with disastrous consequences to himself is equally old and dispersed. In his Historia Regum Britanniae [History of the Kings of Britain] (1136) Geoffrey of Monmouth converted folklore to history and established Lear and his daughters as rulers of ancient Britain, thus bequeathing them to the chronicles. Raphael Holinshed’s [chronicle] (1587) declared that ‘Leir, the sonne of Baldud,’ came to the throne ‘in the year of the world 3105, at which time Joas reigned in Juda,’ but belief in the historicity of such British kings was now beginning to wane, and Shakespeare could deal freely with the record. He read the story also in John Higgins’s lamentable verses in A Mirrour for Magistrates (1574), and in Edmund Spenser’s Faerie Queene, II, 10, 27-32. He knew, and may even have acted in, a bland dramatic version, The True Chronicle History of King Leir, published anonymously in 1605 but staged at least as early as 1594.

“…. [the earliest date  for Shakespeare’s version is after ] March 16, 1603, when Samuel Harsnett’s Declaration Of Egregious Popishe Impostures was registered for publication. That this excursion in ‘pseudo-demonology’ was available to Shakespeare is evident in various ways, most clearly in the borrowed inventory of devils imbedded in Edgar’s jargon as Tom o’ Bedlam….”

It is a good thing copyright had not yet been invented when Chaucer or Shakespeare worked, or we wouldn’t have much of their work. Besides, if eternal copyright were the law, as some have suggested it should be, we would not have the numerous careful, scholarly editions of Shakespeare now available to us, along with the numerous adaptations and even bowdlerizations (such as those by Thomas Bowdler himself in the early nineteenth century). Probably Chaucer’s and Shakespeare’s works would have been long lost, as some heir, abashed, denied permission to reprint. No publisher could be quite sure who the rightful heirs were, and would certainly receive legal advice not to risk the lawsuits inherent in putting out an edition.

2. Attention Leads to New Works

In any medium, any expression that anyone at all pays attention to, whether worthwhile or not, has been influenced by earlier expressions and in turn often influences later ones, so that none stands in a vacuum. Expressive works of all sorts have always been transmitted, copied, riffed on, varied, quoted, translated, honored, given homages, lovingly or unlovingly parodied, satirized, pastiched, collaged, sampled, anthologized, excerpted, used as background, restated, adapted, and so on. Sometimes the whole work is lavishly reproduced, sometimes only a plot outline is kept, sometimes there are extensive quotes, sometimes only loose paraphrases. Everything of this sort took place long before the Web was a gleam in anyone’s eye. It is an inevitable result of paying attention to any work that it influences one, for better or worse, even if one is an artist seeking to do something brand new.

3. Sitting by the Samovar

Keen specifically mentions Dostoyevsky. Few non-Russians can fluently read his original words, and they must instead settle for some translation. Which translation should you choose? One way to decide is to compare them. It might be ideal to have many different translations available, so that you could flip from one to the other. It would also help to have at your disposal knowledgeable commentaries by Russian speakers very familiar with Dostoyevsky, though they will not necessarily agree among themselves. An average reader could not afford to buy all the necessary works, and it would be cumbersome to get them from a library, or even to make use of them if you had them all. You would have to open all the books, keep the pages turned to the right points, pick up each one when you want to make a comparison, etc. It would be much handier if all the translations, all the critiques, all the bits of historical or biographical background, as well as the original, were on the Internet, and if you had handy ways to access them, much as Kevin Kelly proposes.

Andrew Keen is frightened of this, because he imagines it somehow means that the original version of, say, The Brothers K (no, not Keen and Kelly, but Karamazov) would not remain itself, in easy reach for anyone who sought it in itself alone. Or even that the good translations would not remain whole. I doubt that Kelly intended that, and, even if he did, the Internet does not need to work that way. There are plenty of ways that what each person expresses can be kept separate, even if someone’s expression is a mishmash of other people’s expressions, a sampling or collage or dictionary of quotations.

As long as an author has any sort of audience, there will be those who want to bask a bit in her reflected glory, getting attention through the attention that goes to the master. In effect, whatever their conscious motives, this has long been the case for all those who prepare new translations, or who seek to edit critical editions or write biographies, or who even find the work sufficiently interesting that they want to mention, discuss or brag about having read it. This group has a vested interest in ensuring that what they consider unadulterated versions of the master’s works will be available and easily discoverable online. Where they disagree, to be sure, they will put up variant versions, but these will all be available, accessible, searchable, and so on. Each work anyone cares about will be enriched, not lost at all. If anyone took the trouble to mislead, by putting up a phony or adulterated version, fans of the author would quickly discover and denounce this, while making sure the versions they consider authentic remained findable.

I would rather trust in that kind of certainty than have to place my reliance on the local librarian, who might decide to clear the shelves of works that somehow no longer fit with local mores, limited shelf space, cataloguing requirements, or idiosyncratic policies. And I certainly would not be willing to rely on giant publishing conglomerates whose main motive is making a buck or increasing annual profits. Today printed books are commonly remaindered within a year of publication, and remain available only by dint of the Internet market in used books. An actual all-encompassing Internet library would be far more usable.

4. A Camel is Still a Horse Designed by a Committee

Keen implies that Kelly favors readers and — possibly — clumsy authors taking apart great works and rearranging them as multiple-author messes. I do think Kelly might have gotten a little carried away in that particular direction, but we don’t have to worry, partly for the reasons I just gave, and partly because of the nature of attention.

The glued-together kind of works that Keen thinks Kelly favors are usually not very attention-holding. In paying attention, as I have emphasized before, it is much easier to align one’s own mind with one other specific mind than with a whole crew, especially if the participants in that crew are not highly coordinated. A small group of very good jazz musicians may be able to jam together beautifully and coherently, but that sort of collaboration is rare, and rarely works well. You never hear a whole orchestra just jamming, because it would be impossible to follow. We do not find novels, plays, poems, paintings, sculptures or musical compositions with fifteen authors, and usually not even as many as two, unless their tasks are strictly subdivided or there is one clear leader for the whole work. Members of dance troupes work in coordination, not by individual whim, with one director or choreographer overseeing the totality of movement. Sports teams larger than those in doubles tennis have coaches who coordinate their practice sessions, decide on the range of plays they can handle, and instruct them when to use different ones. We could not follow the plays otherwise.

What about movies? Anyone who sits through the credits rolling at the end of current ones sees that hundreds or even thousands of people are often involved. But they do not each work autonomously or have equal say. Rather, one, or sometimes two, or still more rarely three equal collaborators shape each movie by directing and coordinating all the rest. Often the key person is the director, sometimes a screenwriter, sometimes a producer, or even an actor. But whenever more than one person is key, conflicts can arise and the work loses coherence, to the point that virtually no one can pay close attention to it.

That was not always so, of course. Early books were simply collections of anything that could be copied and seemed to hold the copyists’ attention (as in fact Kelly points out in his article). But with the advent of printing, and in fact somewhat earlier, the idea of the author took pretty strict form, and as books became common, the one-author work predominated. The fact that each book is a single physical item, visible in itself, whether on one’s bedside table, in a backpack or on a shelf, is a goad to reading it, to picking it up again if one has started it, and basically to reminding oneself of its separate and hopefully coherent existence. If you have access to all the books that have ever been written, even on a handy book-sized device you can carry around as conveniently as a paperback, you will not have the same physical goad to continue reading where you left off. At the very least, a different kind of mental discipline than has been common will be required.

In today’s world, with so many calls on our attention, it is quite possible that many readers will lack the sustained concentration to get through an entire book. Though more novels are written than ever, the readership of “serious” novels seems in any case to be getting smaller. People buy thrillers to read on plane trips and then throw them away. Even that habit is under threat from onboard movie or video watching, whether on screens provided by airlines or on laptops one takes along. But none of that implies the absence of a steady and even growing audience of truly dedicated novel readers, subdivided into groups with different tastes, following different “schools” of literature, which now also include comic-style “graphic novels,” such as Art Spiegelman’s Maus.

There is also an audience developing for extremely short fiction. Heretofore, the short story could not stand alone. Keen refers to an article by the great Argentine fiction writer Jorge Luis Borges that was in fact a precursor to one of his typically very short stories, “The Library of Babel.” Borges made clear he thought novels were excessively long, and many of his stories were intended to imply that each described an actual, much longer work. However, because his stories were so short, they simply could not be published individually, and either had to appear in magazines or as parts of collections. With the Internet, extremely short fiction à la Borges — or even shorter — can stand alone, as can mini-essays, poems, etc. (As with texts, since the ’60s or so, our styles of moviegoing and CD distribution left no room for what used to be known as “short subjects”; now they can burgeon once more. YouTube-style movies, a few minutes long, could one day pack all the sophistication of a full-length film into a very short space.)

For this shortening, the web provides a new means, but insofar as shorter attention spans are now perhaps normal, the web is merely a symptom, not a cause. The “Western Canon” was under merciless attack in the groves of academe long before “today’s Internet.” With the death of must-read literature has also come the fall of “Reader’s Digest Condensed Books,” the “Book of the Month Club,” and their ilk, which chose each month what “middle-brow” readers needed to read. Intense calls on our attention come from sources such as the numerous TV channels, ubiquitous phoning, and much else that would exist even without an Internet.

Are all these trends terrible? Of course, in one way they are, in the sense that the pleasure and the personal growth that come from immersing oneself in serious novels of some length are different from — and in some ways richer than — what the obvious substitutes offer. It’s possible that people who do not take up and get through the challenge of serious literature will be shallower people with less-developed mental capacities than those who do. It is also possible — and indeed likely — that other attention-getting modes, even possibly including computer games, will take up the slack. In any event, since we cannot return to some glorious earlier time (nor would we really want to if we could), it still strikes me that the best way to hold on to what was good about the past is to increase opportunities to latch onto it, much more as Kelly suggests than as Keen does.

Sep 05 2007

“Prostitutes and gigolos are sexual professionals. Through hard work and experience, they are now masters of their craft. The best surely deserve excellent pay for what they do. If we have sex with amateurs and without paying for it, how will the professionals be able to continue to offer their vital services? Our culture will be destroyed. Ancient traditions will come to a halt. And the masters, the real pros, have yet another vital function: they help spread much-needed venereal diseases that keep our medical workers employed; how can we hope to maintain our way of life without the pros?” I suspect that is roughly what Andrew Keen would have written had he been around to comment on the ’60s sexual revolution.

At least that is the impression I get when he warns, in his new book the cult of the amateur: how today’s internet is killing our culture, against bloggers, video uploaders and Wikipedia writers. To him they are amateurs who will displace “professional” journalists, ad copywriters, encyclopedia writers, political consultants, and so on. The trouble is, he seems basically to define “professional” simply by the fact that, whatever the people in question do, like prostitutes they insist on being paid for it.

It’s true that most of us would be rightfully suspicious of amateur airline mechanics or brain surgeons, but not all so-called professions are the same. When we partially professionalize sports, down to the level of Little League, we lose much of what active games once offered: free play, enjoyment for the participants, and a role for everyone regardless of skill. Professionalized athletes are good at starring, at showing off for the rest of us, and even at entertaining, but excluding the duffers is not necessarily a good side effect. Similarly, today’s politicians are professionals at the art of getting elected rather than at keeping the interests of the public at large at heart, at having the courage to do the right thing, or at leading opinion by making clear cases for the common weal. Professional journalists know how to write an article, how to interview “the usual suspects,” and how to repeat what passes for common wisdom among their fellow journalists and those they most often interview. However, they often lack the wide knowledge of a field such as history, political movements or science that is a necessary background for writing sensibly about the topic at hand. Journalism schools do not teach such subjects, at least not in any depth. (I will get to encyclopedia writers and ad writers in the next installment of this review.)

Keen offers only two examples of “professional journalists” — Thomas Friedman of the NY Times and Robert Fisk of the London Independent. These are not reassuring examples. Both are, in Keen’s view, experts on the Middle East. One would expect two professional and highly reputed brain surgeons to agree most of the time on the broad outlines of how to treat particular cases. But Friedman and Fisk hardly ever come out the same. Both have very strong — but differing — ideological biases, along with quite different ideas of whom to talk to. Depending on which newspaper you read, therefore, you would get a markedly different sense of how the world is. I trust neither of them, as it happens. They both lack wider judgment. I don’t want either shaping my mind too fully, and even both together would make a hash of things. (Of course, there are millions of other “experts on the Middle East” — those who grew up or live there permanently. They, of course, would vociferously disagree with most of the others about anything related to the topic. But that is just the nature of geographical-area “expertise”; there are few objective truths.)

2. We could use a Thomas with more doubts
In the run-up to the current Iraq war, which Keen admits is a huge folly, Friedman was one of the main cheerleaders, continually arguing that Iraq could easily become a democracy that would then be a beacon and a model for the entire Middle East (meaning Southwest Asia plus North Africa), which would then undercut support for Islamic terrorism. No step of that argument ever made the least sense, as many observers, expert and non-expert on the Mideast, blogger and non-blogger, said at the time. In the past week, almost five years since his war-cheerleading days, Friedman finally has suggested that the person needed to keep peace in Iraq was none other than Saddam Hussein, the dictator he was so eager to depose.

The problem with Friedman — as with Judith Miller, another NY Times Middle East “expert,” Michael Gordon, their military affairs reporter, and Howell Raines, the Times’s editor at that time, along with hundreds of others with different employers — is that they are part of an establishment, in Washington and elsewhere, whose members get attention through access to others who also get attention, and are likely to be excluded if they happen to note that the emperor has no clothes. So they tend to find elaborate reasons why what appears to the unaided eye to be nakedness is really the most subtle and skillful finery.

The Washington DC equivalent of the Academy Awards is the annual dinner of the White House Correspondents’ Association, at which the President is always the most honored guest, and which is usually attended by assorted movie and other stars. The main difference from the Oscars is that it is not widely televised, but as in Hollywood’s turn at self-celebration, there is entertainment. In 2006, the standard joke-telling role was assigned — apparently by someone who had never watched him — to Stephen Colbert of Comedy Central’s Colbert Report. He did not keep to the expected harmless one-liners, but instead dared, in the President’s presence, to declare at last, and very funnily, that Bush was wearing not even a (metaphorical) stitch. The regular White House reporters, including Elisabeth Bumiller of the Times, were incensed, describing Colbert’s shtick as decidedly unfunny and rude. But it was captured on YouTube, and the jig was up. In a democracy, certainly, rudeness to a president can be a higher civic duty.

3. Professions and Attention

Every profession — that is, any group whose members all are viewed by the public as proceeding in some particular way with some basis in common skill and knowledge — gets some attention and shares some internally as well. But the degree to which this is central to their activities varies a lot. An excellent brain surgeon or airline mechanic may never be known to the larger public and not much care. Near the other extreme are reporters and politicians. Like movie stars, novelists and other artists, they would not fare well without nearly constant attention from quite large audiences. Unlike artists, however, but like many business leaders and others, they find themselves in an intrinsically compromised role: they get attention in part by claiming to provide a kind of objectivity that goes along more with the old order of Money-and-Thing-World than with the new.

Those professions that are farther away from the attention extreme tend to do something whose success can be measured strictly on the basis of individual achievement. An oil well’s success can be measured strictly in monetary terms if you know the output in barrels and the price of oil that day. The geologist who determined this was a good place to drill can measure her success by the same standard. Similarly, a factory that turns out standard 100-watt light bulbs can measure the worth of the bulbs with fair accuracy, and the manager’s success should be related to that. A land surveyor’s accuracy or a surgeon’s success rate with a certain kind of operation is also pretty independent of audience attention.
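
The contrast can be put in toy form: the well’s worth reduces to a closed formula with no attention term in it, while a reporter’s does not. A minimal sketch (the function name and the figures are my own illustration, not from the text):

```python
# Toy illustration of "success measurable without audience attention":
# an oil well's daily worth is just output times price -- no term for
# how many people are watching. (Numbers here are hypothetical.)
def well_revenue(barrels_per_day: float, price_per_barrel: float) -> float:
    """Daily revenue of a well, computable without any audience at all."""
    return barrels_per_day * price_per_barrel

# A reporter's or singer's "worth", by contrast, admits no analogous
# closed formula: any measure must include the attention the work draws.
print(well_revenue(1000, 90.0))  # 1000 barrels at $90 each -> 90000.0
```

The point of the sketch is only that the formula’s inputs are facts about the well and the market, not about any audience.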

But a reporter’s success or a singer’s or even that of an encyclopedia writer or an ad copywriter cannot be determined without taking into account the attention the work gets. And that attention, as I have discussed before, flows through the work to the writer or performer herself.
Accuracy matters for a reporter’s work, for example, but in a news article, accuracy alone does not make the article worthy of attention. News matters if the audience cares about it, which will be less true if they have heard it before or if the subject matter does not grab them.

Bylines matter too; reporters strive to get attention on the basis of the ways they cover topics and what topics they specialize in, but they often need to share the attention of the people they interview or write about, and building those people up can enhance their own stardom very easily as well.

4. This just in! We have less news!

“Stop the presses!” That was a great line in old movies, yelled by an actor playing a reporter rushing into a newsroom. But would that scene seem realistic today? In truth, less and less news nowadays is simply the reporting of clear objective facts that “matter,” rather than interpretation, regurgitated press releases, attempts to dig up a story based on mere shreds of evidence, or near-essays on hopefully interesting topics. No wonder more and more citizens tune out.

If we imagine the world of a thousand years ago, say in Western Europe, though there were certainly no newspapers, “news” could be of vital importance. What marauders or invading groups of knights might be coming this way? Which lord has died recently; which has interlinked his fortune with another lord through marriage; which overlord might be traveling through, surveying his and his vassals’ estates? What epidemics have been heard of? And, in the few active ports, what ships have come in, or which might have foundered? And so on.

By the nineteenth century, when daily newspapers were beginning to take on some of the characteristics still present today — and from which many current newspapers trace their origins — the news of the day still consisted of reports of fronts advancing in frequent wars (such as the Civil War, the Mexican and Spanish-American Wars, and numerous battles against American Indians); riots; land rushes; gold and other “strikes” that led to numerous gold and silver rushes in California, Colorado and Alaska, for instance; labor strife; epidemics; assassinations; nation-states coming into being — Italy, Germany, and all the nations of Latin America among them; train and ship wrecks; news of ships safely but unexpectedly arriving in port; and discoveries by explorers trekking through uncharted spots — which, as little as a hundred years ago, included the North and South Poles. (Much of that news, by the way, was without bylines, except perhaps “from our correspondent” — as anonymous, and sometimes as venomous or libelous, as anything decried by Keen on the Internet now.)

As recently as the 1950’s and 60’s, for Americans, such news, though referring to more distant events, had the same kind of daily importance. Reports of advances or retreats by armies (in the Korean War), of ship or train wrecks or plane crashes were common. It was even still of some relevance in a place like New York to know which ships had docked that day, because passage time was unreliable. Epidemics such as polio were still serious and unpredictable scourges affecting many families. Labor strikes were big enough to have major impact on daily life. So were civil rights struggles in the South, riots in major cities, student actions, assassinations, frequent coups abroad, anti-colonial and other revolutions, etc. The Cuban missile crisis of 1962 apparently had the world poised on the brink of nuclear war.

Today, on the whole, such newsy news is a thing of the past. Despite “embedded reporters” in the initial invasion of Iraq, the war in the traditional form of an advancing front did not last long, nor was the outcome of that phase in doubt. Daily reports of suicide bombings, etc., fade into a constant background noise, with nothing specifically newsworthy apart from the specifics of the latest outrages. Who is “winning,” if anyone, is not something reporters can readily discover. This is more or less a repeat of Vietnam, where there were no real fronts most of the time.

9-11 itself was a shocking and unprecedented event, to be sure, but it has not actually presaged anything like the battles of major wars. Despite many claims that we are in a long war with terrorists, so far there is only that one extremely traumatic event to demonstrate it. Six years later, little actual news can be reported that bears on the progress of that war. Similarly, though we are treated to scares of a variety of epidemics that could possibly prove highly lethal, in reality very few Americans die of them, or they are fairly quickly stopped in their tracks (at least here at home). AIDS was a scourge, and is still certainly a danger, but it no longer has widespread impact in the advanced countries.

Even political leaders seem to be less available as targets of assassins than they once were. It would seem, then, that actual “professionals,” such as professional administrators, Secret Service agents, or the air controllers (along with airline pilots and mechanics) who prevent most air disasters, do their jobs so well that the world has become, from a news point of view, a much more known and therefore duller place. A much smaller percentage of daily reporting refers to unexpected occurrences that are especially newsworthy on the day the stories happen to be published.

Yet we have more professional journalists (that is, those who are paid, and who have studied journalism — or media — in college or graduate school) than ever. Press conferences for even minor events or entirely staged happenings are often crowded. One of the most familiar scenes is of someone standing before a huge bank of microphones with dozens of news photographers jammed together to shoot pictures and reporters trying to hear what is said and to sneak in one or two lines of “exclusive interview.”

5. News Stars Rock! (They hope.)

Why are there so many reporters now if there is less news than ever? Only because, I would argue, journalism is exciting as a potential way to get attention. Where once many news reporters were anonymous, most today get bylines, and can become quite famous, at least in news circles, for their reports or columns. We all have heard of Woodward and Bernstein, and as a result “investigative reporting” has become a desirable calling, even though it often amounts to little more than hunting for criminal behavior on the part of politicians, partly because reporters often have little real understanding of what might be important to probe in order to reveal worrisome aspects of the larger society, and partly because such news needs a hook if editors are to run it.

Allegations of even minor criminal matters capture reporters’ imaginations, and sometimes do pull in large audiences. A politician like Senator Larry Craig can be a great and useless nincompoop, of little interest to anyone but his constituents, until caught doing something slightly weird in a public restroom. Would any professional reporter have thought to report on his mediocrity were it not for this bizarre irrelevancy? Andrew Keen suggests that only bloggers report such trivialities. This is the opposite of the truth. (Keen falsely cites the Swift-boaters’ attack on Kerry in 2004 as if it were mostly done by blogging. In fact, the main effort was a series of TV ads.) In 1987, Gary Hart was forced to abandon his presidential campaign because reporters for the Miami Herald discovered him apparently shacking up with someone not his wife. There were no bloggers then.

In an earlier era, to be sure, reporters kept quiet about JFK’s numerous liaisons, because they took it as a matter of course — and perhaps, in those days, there was enough real news to go around. Today’s professional reporters are much hungrier, since there are now so many of them and less newsy news to report, so they would eagerly pick up on almost anything, even if the source were a blogger. Yet editors seem afraid to stick out their necks to allow reporters to report on anything that other reporters haven’t caught up to, so news people often travel in packs. Bloggers who are not professionals can take up issues where reporters dare not tread, and thus have become a vital resource.

Keen quotes at face value a business reporter for the San Francisco Chronicle who self-servingly claims that the difference between professional reporters and bloggers is that only the former risk going to jail over their stories. This is utter bilge. In fact, reporters are at least somewhat protected by shield laws from going to jail for keeping their notes out of the hands of prosecutors. Bloggers at present have no such protection. Bloggers also risk suits for libel, just as reporters do. To be sure, reporters can risk jail or even death in places like Iraq, but certainly so do bloggers. Reporters have newspapers and professional associations to protect them and stand up for their rights. Bloggers are much more naked. Even in Iraq, indeed, it is much safer to be a journalist for a major US paper than to be someone interviewed by them, and bloggers are in as much danger in such locales as the average person who dares speak to the press.

Many bloggers of course have little to contribute, but so do many reporters. Often bloggers are just editorializing, but editorials can be important in newspapers, and if bloggers have a freer rein, that in itself can be of value. To be much read as a blogger, one has to have a distinctive voice, some specific point of view, some specialty or other, so bloggers can at times be much deeper than news reporters. Beholden to no one, they can say the truth as they see it.

6. Wanted: Want Ads

One of Keen’s main concerns is that if bloggers replace newspapers, journalism will die because it will not be rewarded. Of course, what is rewarded and how that occurs does change as we move into Attention-World. Still, there are no sacrosanct methods of rewarding reporting anyway. Some reporters have always been considered so valuable that they earned a living through subscriptions to their newsletters. Others have been one hundred percent supported by advertisers, but that can be a tricky source of reward, since one dare not bite the hand that feeds.

Generally speaking, in Attention-World, those who pay attention will strive to satisfy the attent’s needs, to the extent they can. (All of this is explained in my draft chapter on attention.) Why is it better that this be done indirectly, as Keen would like, say by continuing to have newspapers supported substantially by classified ad payments? When mass-distribution newspapers first appeared on the scene with their high-speed presses, high circulation and delivery to many neighborhoods of a city, it worked out well for newspapers to bundle classified ads in, since they could be printed when the presses would otherwise be idle.

The cost of those ads, however, was set artificially high. The advertisers could not usefully complain, since the newspapers had a relative monopoly, and the classified advertisers themselves were in no position to unite to fight the rates for what were often one-time ads. So the cost of reporting was largely borne by folks looking for jobs, places to live or used cars. These were not necessarily the same people as those interested in the news stories the paper carried. (I am assuming the claim is true that customers of products or workers getting jobs ultimately pay for the advertising costs.) The relatively poor in effect subsidized the relatively better-off who read the news, as well as the reporters themselves.

When a new technology such as the Internet makes classified advertising possible essentially for free, why should we not look at that as something positive? We will just have to find new methods to reward those to whom we actually pay attention. How we do this can vary, and we might have to invent new modes, but there is no reason to suppose we won’t want to. True, our attention may not continue to go to the “traditional” news media, but our traditions on this score have constantly changed anyway. High-speed presses and major city-wide dailies came in only towards the end of the nineteenth century; news magazines started in the 1920’s; radio news came to prominence in the 30’s and 40’s; TV broadcast news became a nightly staple for most in the 60’s; cable news networks grew in the 80’s and 90’s; and now the Internet, with bloggers of various kinds and YouTube, is playing a larger role.

Ordinary people have always found some way to discuss whatever news seemed important to them. Today, a considerable proportion of that news, like this essay, is in itself a kind of meta-news. We may be more interested in issues surrounding who gets attention, or how to get it or restrict it, than in anything else. But since a very large number of us are interested in that to some degree, being part of the conversation is of growing value. People eagerly — and sometimes very intelligently and articulately — add their comments to news articles, news columns, and blogs. They e-mail each other articles of interest, or engage in detailed discussions on listservs.

Of course, much of what is said is not so intelligent or articulate, by whatever standards you choose to use, but it is no worse than what is said on some of the cable-news channels or on talk radio, or what formerly got into many letters-to-the-editor columns, or even was said by “professional” columnists and reporters on smaller papers, etc. Nor is it worse than conversations people used to have in the local pub or their neighbor’s kitchen or in college dorms.

Aug 292007

1. Howdy, Pardner!
Andrew Keen, in his diatribe, the cult of the amateur: how today’s internet is killing our culture, claims to be mainly concerned about “Web 2.0,” though he lards his list of ills with e-mail spam, phishing, online porn and gambling, which don’t really fit. The Internet at present is somewhere between a wild west, a playground, and an experimental laboratory. All sorts of things get tried, standards are few and unevenly enforceable, and certainly there are problems. But what Web 2.0 really offers is the host of opportunities for ordinary people with modest technical skills to seek attention and to play around with related issues, including intimacy and friendship. It is thus very much part of the move away from what we can call the Money-and-Thing-World (or just plain Money-World, for short) to the new Attention-World, which is what I (but not others) have meant all along by the phrases “Attention Economy” or “Attention Society.”

In Money-World, which is also loosely the same as capitalism, the main human interaction is the cash nexus, buying and selling or deal-making. The appropriate attitude in such a world is the poker face — not revealing your inner feelings about whether a proposed deal immensely pleases you or is barely acceptable. What you do not want to do is “lay all your cards on the table” — at least not until all bets are in. If someone acts too friendly, watch your wallet.

Attention-World is entirely different. The less you reveal of who you are and what you think and imagine, the less interesting you are, and so the less it is possible for anyone else to align their mind with yours. Even at the height of the industrial economy, the worlds of family and friendship and neighborliness were outside the market. They were mini-attention worlds of intimacy where things were rarely bought or sold, but where how much attention you got, while at issue, was not a huge problem.

2. Shrinking Families

Let us recall too that under the conditions that prevailed in most of the world until quite recently, it made sense to have large families. There was always a considerable risk that children would die very young; at the same time, unless some children lived to maturity, there would be no one to help out on or eventually take charge of the family farm or workshop, no one to take care of the parents in their old age, should they happen to survive. Large nuclear families meant large extended families as well, most commonly, with plenty of aunts and uncles still alive from the parents’ generation when kids were small, and numerous cousins, etc. And families didn’t tend to move very far from their ancestral spots. So children grew up surrounded by relatives with whom to be close.

Today, in contrast, it is not uncommon to have only one child, or no children. Only children, when they have children of their own, introduce them into a world without uncles, aunts or first cousins. Because of greater geographical mobility, what family remains is often far away. That has two consequences. Today’s young people need attention from non-family more than ever, and the Internet certainly has become a major avenue for seeking this. At the same time, even though the average child now has a much higher chance of living a long time than her ancestors did at birth, parents of only one or two children are likely to feel far more worried about slight dangers to them than parents a few generations ago would have been. As a parent himself, Keen exhibits such anxieties.

As attention becomes the leading scarcity and what is most sought after, it is natural that the domains of friendship and intimacy are scenes of play and experimentation. People want to extend their circles of friendship, as these are among the main ways to feel connected, that is, to get attention. Hence: blogs focusing on personal life and intimate feelings; social networking sites in which “friending” is a major exercise; and sites like “Second Life” and some online games in which it is possible to play-act, adopting a personality perhaps different from one’s normal one.

“A personality different from one’s normal one” hints that even one’s “normal” persona is in some degree a construct, a way of acting and even thinking that accords with others’ and one’s own expectations about who one should be. Taking on different personas, with great intimacy, is exactly what novel writing, play or movie acting, much art and much poetry and musicianship are all about. It is the very stuff of “our culture,” the very one that Keen is afraid is being killed. On the contrary, this move toward greater involvement with the attention world shows great promise of enlarging this culture.

If Keen does want to point to something that might indeed be culture-killing, he would do better to decry the elimination of art, music and sometimes even sports in public schools that are simplemindedly trying to enhance learning by focusing on the “three R’s” and on preparing for tests mandated by the equally idiotic “No Child Left Behind” law, none of which has much to do with the Internet. The actual effect of all this, however, seems to be to make the contents of schooling all the less relevant to most students. Increasingly little of what teachers offer seems to connect with what matters. No wonder students’ lives and actual learning become more Internet-connected. School remains a locus of attention getting and paying, but more and more only informally, only outside the class(room) structure.

3. Friends Don’t Let Friends Miss Out on Culture

With the attention world on the horizon and the growth of experimentation with new kinds of contact and connection, new aspects of friendship and intimacy, comes a sharing of the kinds of things we pay attention to. By apparently befriending, or at least pointing to, one’s favorite stars, one seeks some of the attention they pull in. That occurs in lists of favorite songs, books, TV shows, etc., on MySpace and Facebook. One can also seek attention through one’s own creations, say by uploading pictures and videos to the same social networking sites or to Flickr, Picasa, Google Video, Yahoo Video or YouTube, among others. It also occurs on the straight file-sharing sites, in common use and closely interlinked with these.

The social networking sites in particular have some of the aspects of a large party, where attention goes back and forth with banter and sharing of whatever can be shared. The file-sharing aspects especially bother Keen, because he sees in them a violation of the intellectual property laws. Here, Keen mistakes legality for culture. Laws that go against culture don’t usually work. Despite heavy enforcement of the US Prohibition laws from 1920 to 1933, the culture of drinking alcohol certainly did not disappear. The same goes currently for the use of a variety of illicit drugs. One can wonder, in fact, how much of our Culture with a capital “C” would survive the utter end of the drug culture.

Even though culture changes rapidly, there is a way in which the alignments behind it work together, with not a strict logic but nonetheless some kind of rough agreement among its various aspects. If a state legislature were to pass a law that, before leaving a parking place, every driver had to salute the flag three times, who would obey unless the police were watching? The law would simply not fit in with current ideas of the relationship between parking and the flag (i.e., no relationship at all).
In the case of property, we have a strong cultural sense of what material property means, as well as, say, what it means to hand over something (such as money) to someone (say, a bank) for safekeeping, and also of our right to our own identity. There are people who regularly steal all these things, of course, but far fewer than, say, illicit drug users or underage drinkers. In all these cases, we understand from a very early age that if we take something, the original owner loses it.

This is not the kind of taking involved in the “theft” of intellectual property such as a copyrighted bit of text or music. If I own a physical book, and you are my friend, if you take the book without asking, that is (minor) theft, which you probably feel obliged to refrain from most of the time. On those rare occasions when you give in to temptation you might well feel pangs of guilt. (That guilt has nothing to do with fear of going to jail, of course. Who would try to prosecute a friend for stealing an ordinary book, after all? What police department or court would bother with this?) But that does not amount to theft of intellectual property at all. Only if you published the book without the author’s or original publisher’s consent would you run afoul of this law. Until very recently, that was just not something you had to worry about doing.

In other words, the intellectual property laws, as they have long been understood, were not at all a matter of concern for ordinary citizens. Until recently no one could easily distribute copies of books or records in large numbers without having to use considerably complex equipment, the kind of equipment then found in commercial presses of one sort or another but certainly not in kids’ bedrooms.

Between law-abiding firms, intellectual property laws could fairly easily be enforced and made some sense. But for private individuals they are completely counter to what makes sense. We are constantly passing on ideas, recipes, stories, news, opinions, and more, often with some hope that others will in fact pass them still further on. If we send them to our friends, whether by snail mail or e-mail, pretty much the same applies. If I have a book that I think is worth reading, I could lend you my copy with no thought that this is remotely wrong.

If I record a song and you pass it on to all your friends and they pass it further, why shouldn’t I be delighted, especially if you make clear that I’m the one singing? If you admire a star, who therefore feels rather like a friend, why shouldn’t you feel exactly the same way about passing on a song of hers that you especially love? It’s not theft at all; it’s devotion. Why would she not regard this as a favor?

If you consider that the very same move towards Attention World (or in other words the true Attention Economy) enlarges the motives to exhibit friendliness and intimacy and increases the attention to cultural expressions of all sorts, then you should also see that inevitably these move towards weakening the kinds of constraints that are summed up in intellectual property laws and especially copyright. The Internet extends rather than diminishes the reach of expressive people who make “Culture” with a capital “C” through precisely the same motives and mechanisms that undercut intellectual property laws. Keen is completely wrong when he suggests that weakening these laws will weaken culture.

4. Artists Without (Stock) Portfolios!

Keen’s concern is that having, e.g., music passed along at no charge would lead to an enormous problem. He implies the stream of new compositions, performances, songs, and the like would run dry. But certainly if you include all postings on the Internet, the exact opposite is happening so far. More music is being recorded than ever before, in more styles, both new and old. Even old recordings are now available to a wider audience than ever. Keen cites his own favorite Tower Records store as the world’s largest, but clearly the Internet must offer far more, if we include all the offerings of any sort, paid and free, MP3 format or net radio, along with mail-ordered CD’s, tapes and records, new and used. The low-paid clerks who took jobs at Tower because they loved music, and whose advice Keen cherished, have been replaced by tens of thousands of fans, reviewers, etc., mostly unpaid, but nonetheless deeply enthusiastic about their favorite stars. By listing venues of live performances as well, the Internet has undoubtedly helped new audiences form. Fans of new musical niches, and the desire to get their attention, will probably ensure an even larger supply of musical performance.

Is all this music good? Of course not, whatever your standards of goodness. But it was not all good before, either. There was a period in my life when I felt no good music had been written since Bach died in 1750. Others have felt that the pinnacle of music was Gregorian chant before the monasteries were closed, or the early days of New Orleans jazz or the period of Mississippi rural blues. Some people would insist the best was Arturo Toscanini conducting Beethoven, or the mid-period Beatles, or the Grateful Dead, or Tuvan throat singers before they were spoiled by fame. There have to be some who insist it was Britney Spears before marriage ruined her or Puff Daddy before he became too much of a mogul. Audiophiles might insist the only good recordings are old LPs or even 78s, while others might argue that nothing but a live performance is real music. Most of these views have nothing to do with the existence of the Internet, one way or the other. It only adds a new wrinkle to a very long debate.

Contra Keen, people who love music are not going to stop making it just because they can’t make millions of dollars anymore. (Assuming they can’t, which is far from clear. J.K. Rowling is a writer, not a musician, but her works are heavily pirated, yet she has apparently grossed about a billion dollars from the Harry Potter heptad. Some musicians too still have their bling, their Lamborghinis and their jets.) There have always been plenty of highly talented, hardworking and even innovative musicians who never made enough money through their music to live on, just as, for the last century at least, wonderful poets have kept writing even though very few have been able to make even a meager living thereby.

5. Steelworkers Don’t Sweat

Keen’s repeated theme is that “professionals,” that is, those who in the past earned a good living from their attention-getting activities, deserve the same living now. After all, they have earned it by the famous “sweat of their brow” and their talent. But how do we tell how much they deserve? Presumably Keen would say that the number should be determined in the marketplace. But markets change too, based on changed conditions. The market is in this sense a tautology: whatever people in fact earn is what the market decides. Keen is not at all bothered, it seems, that factory workers who gave various brand names their reputations constantly lose their jobs to automation or to China. I guess they don’t sweat.

To return to Attention-World, the conditions by which people get attention and what material needs or desires they reap as a result are not permanent and never have been. Certainly some refuse to adjust to new changes, or do adjust, but not with pleasure. When the baby-boom generation was growing up in the late ’50s and early ’60s, and rock music came into fashion, quite a few in the business world realized that the record companies were sitting on a potential goldmine. Like book publishers a little later, these companies were bought up and consolidated, and a vast number of artists were signed to multi-record, multi-year contracts, which often contained clauses highly unfavorable to those artists, many of whom signed when they were young and unsophisticated. The luckier and cannier singers, musicians, composers, etc., became very rich, while others fared much less well. This had little to do with what Keen sees as the reason for the rich rewards for those who got them: “the sweat of their brow” and their talent.

Keen quotes the singer-songwriter Paul Simon to the effect that “Web 2.0” has ruined music, because it is now impossible to get record companies to front a million dollars for him to produce one of his new records. This merely shows his lack of economic imagination. If his fans want to hear a new record from him, and if there are enough of them, it would be easy for him to appeal to them directly over the Internet and raise that money, and that’s only one possible means. (Another would be to get volunteers to work together over the Internet to help produce the album at much less cost. Or maybe he could simply dig into his own pocket — perish the thought.)

I hate to say this, but Simon is spoiled by the particular era in which he made his name. Before recordings existed, and for several generations afterwards, his custom of spending a million on producing an album would have been viewed as meaningless, absurd, or ridiculously excessive. Mozart ended up in a pauper’s grave, but we still have his wonderful music. The great blues pioneer Robert Johnson was even poorer. No one alive today has ever heard Beethoven play the piano. In my book, Paul Simon’s efforts, while enjoyable, are not in their league or anywhere near it. I shed no tears over his plight.

Another thing that changed the appreciation of music was the advent of television. Elvis was talented and had a good voice, but what made him a star was the way he wiggled his hips, just out of camera range, on the Ed Sullivan show. Soon every band on TV had to jump, twist and gyrate or remain unseen by the vast TV and music video audience. Today, that style influences even grand opera. I recently caught a truly wonderful version of Don Giovanni by the San Francisco Opera, in which the singer playing the Don had to leap about athletically and only one soprano remained immobile in the traditional style. We may not see the like of Pavarotti for some time, for the simple reason that no tenor of such unathletic girth will be considered right for any part, no matter how wonderful and expressive his singing. Opera lovers will both lose and gain by this, which is just another example of the ceaseless transformations of culture before and during the rise of Attention World and the time of the Internet, some having to do with the latter, and some not.

6. Destroying Culture in Order to Save It

One of the great failings of Soviet and Chinese Communism was the hope of creating a new culture. “New Soviet Man,” or the people who had been “reeducated” in the Chinese Cultural Revolution, were to have attitudes and feelings different from those that had been transmitted, or that emerged without special pushing, out of the previously prevailing culture. Getting rid of old patterns and habits proved hard; one result was sending people to gulags or reeducation camps for extensive punishment or enforced attitude changes. Whatever was done there did not much stick. In calling for a dramatic cultural change around intellectual property, which is what he really does, Keen seems to endorse nearly as draconian measures, as do the major corporate holders of copyrights. Threatening jail for individuals who ignore copyright is unlikely to work any better than the gulag. It is in reality destroying culture in the name of saving it. It is death.

Aug 17, 2007

——Part I of a review of (and riff on) Andrew Keen’s the cult of the amateur.

A hundred and eleven years ago, the “modern” Olympic games were born, emphasizing what could have been criticized as a conservative “cult of the amateur.” There were strict rules that only pure amateurs could compete, which meant, of course, that only people of independent means could enter. This neatly kept out representatives of the “great unwashed” or, in other words, the laboring classes. They of course did not have spare money to throw around, so they could only afford to participate successfully in sports if they somehow found a way to be paid to do so. Only quite recently did the International Olympic Committee alter this. We now have the “cult of the professional” in sports. One of its dire effects may be that in order to win or even to join a good team, with all that money (and attention) at stake, athletes are too tempted to use some variety of performance-enhancing drug.

However, today’s conservatives, as exemplified by Andrew Keen, have also come a long way. Instead of criticizing the whole notion of professionalism, Keen earnestly endorses it, because he doesn’t like what “amateurs” are doing on the Internet. His subtitle is how today’s internet is killing our culture. [His or his publisher’s chi-chi lack of caps, btw]

1. Whose culture?

If Keen means the culture that most Americans now participate in, which definitely includes the Internet, his sub-title simply makes no sense. The Internet is hardly killing whatever it fosters. Does he mean the gentleman’s culture of a century ago? No, evidently not. He exhibits no awareness of its very existence. His prime example of how everything is being ruined now is the closing of his favorite Tower Records mega-store in San Francisco. As many people do, he thinks back with more nostalgia than realism to the “golden age” that just happened to coincide with his being, I would guess, about twenty.

For someone a little older than Keen, Tower could be viewed very differently than with deep nostalgia. It was part of the replacement of purely local record stores with larger, deeper-pocketed and more profitable stores that were part of national chains. (My personal favorite once was Leopold’s Records in Berkeley, an offshoot of the Associated Students of UC, a store that actually was replaced on the same spot by the Berkeley branch of Tower, which many of my friends long boycotted as a result.) The same fate also befell local bookstores, as chains such as B. Dalton, Waldenbooks, and Crown crowded them out by treating books like canned spaghetti. These chains in turn fell prey to mega-stores such as Borders and Barnes & Noble, who simply went the earlier chains a few better before falling in turn to the likes of Amazon. Up to the last step, all this was before the Internet had much impact, but it was in a way part of the same process that has now led to Internet retailing of books and records, and even to the free “file-sharing” of recorded music that Keen so much decries.

Even local record and bookstores haven’t always been with us. Before about 1900, for instance, live music was the only kind that could be heard. Orchestras, bands and choirs abounded, and amateur musicians playing at home for parties or just the family were common. Many have rued their decline. Bookshops go back longer, perhaps to the eighteenth century, but before that, one could mostly buy a particular book only from its printer. And of course, when printed Bibles first became available, they were decried by the “professionals” of the era, the Roman Catholic hierarchy who saw the right to interpret scripture as far too dangerous for amateurs — that is, lay people. Go back a couple of millennia from that and you come to Socrates objecting to the invention of writing as debasing memory abilities. (Ironically, we would have long forgotten Socrates’s plaints if Plato hadn’t written them down.)

2. Wading into culture

“You cannot step in the same river twice,” says Plato’s Socrates, quoting the even older Heracleitus. That is certainly true of culture. It has to be in constant flux. Let’s think about culture a bit and see why.

“Culture” has several meanings, some much contested. But one meaning is pretty well established by now: A particular moment’s “culture” refers to all knowledge the humans in question currently have, along with their full repertoire of meaningful practices — excepting only those that inevitably result from genetic endowment or from physical laws such as the law of gravity. You may fall asleep for purely biological reasons; that you sleep in a bed is an aspect of culture. That you set an alarm and get up because you have places to be is also part of culture. So is your understanding of why you do this. Every kind of intentional practice you thus engage in has meaning, and that meaning too is part of your culture.

This wide meaning of culture, we should note, encompasses practices of all kinds, certainly including economic ones. An economy is an aspect of a culture. Yet at the same time, economic patterns tremendously influence all sorts of cultural possibilities.

The word “culture” derives from a Latin word that means tending or attending to or worshipping, but it took on its own current meaning in a roundabout way. Farmers attend to the plants they are thereby “cultivating.” Metaphorically, parents also attend to and cultivate their children, by teaching them by explicit lesson and by example what the world is. (In order to be thus cultivated, children have to pay attention, they have to align their minds to those of the adults, as best they can.) Through some degree of mutual attention, meaning gets passed on, until children are capable of paying attention on their own to other than their parents.

One’s culture is thus the residual matrix of prior alignments — prior attention that has shaped one’s mind. Its presence allows the individual to create new meaning of some sort, for instance as a way of getting attention and/or acting in the world. Imagine the first generation that developed, say, language. Their children would have grown up in a very different milieu from that first generation’s. As a result, those children would have seen and understood the world differently from the prior generation, and would have thus had a different culture to pass down to their own children, who in turn would again have been raised in a different environment, and accordingly have grown up with a different culture. A culture can only cease changing if it ceases being culture, it would seem, and becomes, in essence, totally stereotyped knowledge and practices, no different from instinct. You don’t live exactly your parents’ lives, so you cannot keep exactly their culture.

Language is one aspect of culture. It is a scaffolding allowing — indeed almost requiring — new sentences, never before heard, and so the passing on of new thoughts. That process acts on language itself, adding new meanings, along with new words and ideas, while altering pronunciation and grammar too. Words form a network of meanings that depend on each other, and anything new added to this network alters it, changing near meanings slightly and then further off ones. As the relationships among meanings change, new words and new combinations of words must come into play. Since a word’s sound is affected by the adjacent word, the sounds change along with the new patterns. Old grammar no longer works or sounds quite right, and new grammatical rules are born.

The study of comparative linguistics reveals such changes throughout the recorded history of (mostly written) language. Archaeology shows the same sort of thing seems to have happened with tools and artifacts of all sorts. Though records are far scantier, the same seems to hold for music, dance, and every variety of mundane practice — from travel from one village to the next, to tree-pruning, to shoemaking. Nothing ever remained just how it had been.

Not all cultural change has moved at the rapid pace of today, of course. In the past, many innovations were purely local, and often on a very small scale. But even Egyptologists see differences in the output of the many different dynasties that followed each other for thousands of years. Even before that, going back tens of thousands of years, there were steady — if usually small — changes in the artifacts left behind.

3. “Culture” as in Vulture

There is, of course, another — actually older — meaning of the word “Culture,” though it comes from the same source. It is not so much human knowledge and practices in general, but rather knowledge of what are considered great and significant works of art, philosophy, science, and other things that have to be learned through lengthy and careful study, or at least through reasonably detailed and close attention. (This often comes via a formal and elite system of education.) This meaning of culture is also highly contentious. Are there “high culture” and “low culture,” “mass culture” or “highbrow culture”? And where do “geek culture,” “game culture,” and so on, fit in?

Certainly, in the past, a considerable exposure to what was labeled high culture was a sort of ornament that entitled those so exposed to claim social leadership and superiority. Such was knowledge of the “classical languages” of Latin and Greek and the ancient works written in those languages. (Andrew Keen claims to have been “classically educated,” and this may be what he means, though that kind of classical culture is certainly not what he is striving to save.)
It makes no real difference whether the works in question are paintings, novels, videos, musical compositions, scientific theories or even computer software or games. Whichever ones are considered essential to any sort of being cultured or cultivated, or simply “in,” attention to them does shape the minds of those so cultured.

Every small child who has had stories told or read to her will make up stories of her own; every artist who admires works by past artists will be inspired in some way as a result. Alignment with any significant work will itself take some degree of dedication, and it will almost always lead to some desire to try to do something like what the creator of that work has done. This is a nearly inevitable part of paying attention: aligning your mind to someone else’s includes feeling some of the drive they did to create in that or some similar medium; it also involves wanting to get attention in somewhat the same way they seem to have wanted it. Again, the inevitable outcome is new Culture — now an outpouring of would-be homages, variations, pastiches, parodies, responses, negations, or works intended to break the boundaries of whatever conventions first inspired them.

The greater our access to Culture, the more attempts at more of it there will be, and the sooner past will become prologue and the old forms will give way to new. In Keen’s terms then, the more intense our cultural life, the faster we “kill” it, by overwhelming it with the new. Even though most attempts at emulation or response don’t live up to their models, plenty still do, and we need not worry about culture or Culture drying up.

4. “Today’s Internet”

In all human history, the rate of social and cultural change has never been as fast, as intense, as widespread as it is now, as humans become linked and connected through the Internet and related means. Change at this pace is naturally confusing, difficult to evaluate, often confounding and disturbing. So Keen’s anxious jeremiad is only to be expected, and perhaps is even useful as an exhaustive compendium of complaints about the Internet. One of the problems, though, with what he has to say is that he lacks all sense of the flow of history.

Keen takes it that technological change is inevitable. That is much too simple. Technological changes matter and become common only when the new inventions strike a chord. Keen just does not like the chord struck. He is a firm advocate of greed for money as a motivator. He even once went so far as to host a conference about the Internet called “Where’s the Money?” However, much as he honors monetary greed, he is disgusted by the desire for attention.

Like a feudal lord who saw lust for fighting and loyalty as primary virtues but decried “mere” commerce as loathsome and petty, Keen stands up for the capitalist virtues, but does not get that a new kind of economy is growing robustly, and that the desires that hold sway in this new economy are mostly what determine which new Internet offerings are likely to catch on. Blogs, social networking sites, and sites that allow easy uploading of and searching through pictures, videos, music, or blogs themselves are the very stuff of the “Web 2.0” that Keen especially opposes. But they catch on — that is, are adopted by many — because they hold out the potential of considerable attention, even though the sheer arithmetic means that in most cases they cannot really deliver it.

I have dealt with these subjects many times before — not least in an Internet radio interview conducted and “broadcast” by Keen himself. [The site has now been taken down.] For convenience, I will reprise the argument in outline here. In all past history, the great majority of people were engaged, in one way or another, in wresting from nature and then forming for human use material things, from food and clothing to machinery, etc. The incredible increases in productivity brought on by industrial capitalism have now ended that mode of life. Human energies, whether we like it or not, have thus been freed to move in new directions. The primary direction taken has to do with the new prime scarcity: that of attention from other human beings. An increasing percentage of the world’s people, wherever they are, and in whatever part of their waking day they find themselves, devote their energies to paying attention, to receiving attention or to seeking it.

The new technologies make these quests possible on an ever-enlarging scale. One day soon, all six or seven or eight billion people on earth might form one huge potential audience for each of us. More than ever, our culture as well as our new economy of attention becomes a system of creating more culture. A culture of cultural intensification, in other words. And since each of us has only limited capacity for paying attention, that means, inevitably, a faster giving up of part of the old to attend to the new.

Naturally, this is disconcerting to anyone who has put energy and thought into becoming adept in what was. To some degree, cultural learning is about retention. No one could learn to speak, if every day the people around her had abandoned yesterday’s words, meanings, and grammar for entirely new ones. Or suppose you looked in your closet and discovered that the clothes that somehow had entered it overnight did not have the sleeves and legs and fasteners that you were used to and had to be put on in some way you had to newly discover. Just getting dressed would be a significant obstacle. We can pay attention to the new only to the extent we master a set of habits or routines we can rely on that allow us not to pay attention just to navigating the “background”. Too rapid cultural change is akin to one of those nightmares in which you find yourself in a somewhat familiar place but cannot manage to locate people, items or doors you expect to find.

This may suggest that cultural change is a problem akin to global warming that will destroy us if we do not find some way to rein it in. We can imagine that cultural change alone could become comparable to the chaos experienced today by the inhabitants of Baghdad as a result of America’s ill-considered invasion and the opposition it engendered. That example suggests cultural change foisted from outside on a population helpless to deal with or control it. Clearly, that can happen, but I would argue it is not the main mode in which change is occurring now. Instead, the main forces that change our culture are limited by the degree to which we — or at least many of us — adopt the new culture. Inevitably, the young can adopt new culture faster than the old, but, given that the median age is climbing as children per capita decline and life expectancy grows, teenagers alone are in no position to dictate to all of us. We adopt new culture fast, but not faster than we can.

Cultural conservatives always have another argument, of course. It is that the prior culture contains inherited wisdom that will be lost if we abandon its specifics. Andrew Keen does not really spell out this argument, but I think he implies it. The problem is that if we look at past history, we do not see eras with monopolies on wisdom. The Stone Age? The Roman Empire? The World War II era with its touted “greatest generation”? The sixties? Hardly. 1990 is equally suspect.

Certainly, old wisdom may be lost, but new wisdom can also be gained. In fact, what is wise depends on context, so that much of what was the old wisdom would be today’s stupidity. Inheritance alone cannot tell us what is wise; we have to keep coming up with new ways to do that. And we have no way to measure relative wisdom, so we can only keep striving for wisdom in current terms. Critics would have to argue that we can’t do that, or aren’t trying. If that’s Keen’s point, he is not convincing — as I will explain further in the next installment.

Jul 29, 2007

Pay attention! This is IMPORTANT, not just my usual blather!

In response to an earlier post of mine, Paul Salomone writes, in part:
“…it’s not just that ALL people need to have a more equal share of the attention wealth, but IMPORTANT people and ideas (read: necessary for the healthy and happy functioning of global society) do even more so.”

I don’t think this demand makes real sense, understandable though it may be. Meanwhile, in an attempt to deal with too many calls on our attention, people or companies such as Seriosity try to come up with ways to quantify how much attention something ought to be worth, presumably based on how “important” it is. These efforts too are doomed, but why?

How can we understand what importance is, from an Attention Economic perspective? In this perspective, recall, the attention that matters is only what goes, directly or indirectly, from human to human.

Saying something is important is first of all a ploy — conscious or not — for getting attention. You ask others to see the world through your eyes, urging that in so doing they will better be able to pay attention to themselves or others. However, the way you put it is that you are just pointing out the way the world truly is, not just how you see it.

(If you are being truthful, of course, you do report the world is as you truly believe. However, inevitably, in demanding attention for your version of the world, you are also demanding attention for yourself and perhaps for some of those who see as you do.)

If you are one of the main people who draws attention to this aspect of the world, many others may align with you, and you become an “important person.” If it happens you are already a star of some sort — and thus massively attention-getting anyway —already a VIP, or “Very Important Person” — by insisting that something is important, you automatically make it so. And you become still more of a VIP.

But can’t things be intrinsically important, without humans deciding so? They cannot. Even if the world were scheduled to blow up tomorrow, that would only be important if we cared, though in this extreme case we certainly would, because it would affect us in an extreme way. A planet the size of Earth two million light years away can blow up without our caring, and if so it would be unimportant. To put it another way, we humans evolved in a world essentially without meaning, until we invented meaning and imparted it to things we are capable of noting and categorizing and having feelings about. We pass on such meanings when getting the attention of the next generation and getting them to align with us, and that has a lot to do with how we pass on importance too.

Deciding that something is important is a social process, depending on at least a shared alignment as to the urgency of a certain action or viewpoint, usually in response to someone’s capacity to get and hold attention around this, and sometimes to a shared perception that gets translated into immediate action (panicking in a flood or an earthquake, let’s say).

“The sky is falling!” “Terrorists are coming!” “Humans are causing global warming!” “The US is in danger of falling behind China!” “Remember, never use dangling participles!” “Barry Bonds’s records don’t count; he took steroids!” “Wolves are at the door!” “We are ‘Amusing Ourselves to Death’ [the title of Neil Postman’s 1980s book]!” Almost anything can seem deeply important to someone.

A baby can get attention by smiling, cooing or crying. A slightly older child can say “pay attention to me” or pull on a parent’s sleeve, or even slap someone to get attention. At a slightly later age, a child becomes aware that pointing out things about the world can be attention demanding, especially if it is in the form of claiming a certain urgency. That’s importance. Other ways to get attention include making funny jokes, learning and showing off skills, making things, etc. But if just being entertaining or artistic or interesting doesn’t do it, there remains the importance ploy.

In the early cave period, when I was a child, I was repeatedly told the story of “the boy who cried wolf.” The point was that you must not falsely claim you have something important to say, because if you do that often enough, when there actually is a wolf you will not get anyone’s attention and will end up eaten. As the repeated telling of this fable itself illustrates, adults of course use the importance ploy to get attention from children, with varying success. And they use it towards each other, with ever-greater frequency.

As I have argued elsewhere, attention is not synonymous with time. Nonetheless, like every human action the act of paying attention must take place in time, and so is limited by the time available. Suppose you decided to spend your entire waking life paying equal attention to everyone else in the world, all six billion of us. You would have only about a third of a second to devote to each person. If you happen to be American and limit your attention to the three hundred million Americans, that would still afford you only about 7 seconds for each. Thus, it is almost inevitable that many will want more than their fair share. A third of a second or seven seconds just does not seem enough. Many might settle for much less than the whole world’s attention, but as long as some do not, and there is no intrinsic personal limit stopping them, the competition for attention will certainly continue to increase at all levels.
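The back-of-the-envelope arithmetic behind these figures is easy to check. Here is a minimal sketch, assuming a 70-year life and 16 waking hours a day (figures I am supplying for illustration; the text does not state its assumptions, and with these the world figure comes out closer to a quarter of a second, while longer lives push it toward a third):

```python
# Rough check of the attention-per-person arithmetic.
# Assumed inputs (not stated in the text): 70-year life, 16 waking hours/day.
YEARS = 70
WAKING_HOURS_PER_DAY = 16

# Total waking seconds in a lifetime.
waking_seconds = YEARS * 365 * WAKING_HOURS_PER_DAY * 3600

# Divide that lifetime equally among every other person.
per_person_world = waking_seconds / 6_000_000_000  # six billion people
per_person_us = waking_seconds / 300_000_000       # three hundred million Americans

print(f"whole world: {per_person_world:.2f} seconds each")
print(f"US only:     {per_person_us:.1f} seconds each")
```

Either way, the order of magnitude is the point: a fraction of a second per person worldwide, a handful of seconds even within one large country.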

As the competition for attention heats up, and as the possible world audience keeps growing, the number of claims of importance keeps rising and diversifying. They often get shriller, as well, as they must in the face of the growing competition.

If you pay attention to someone who says something is important, that is if you at least partially align with her, you do want to do something about it. Importance claims demand action of some sort. But often this action will consist of trying to get the attention of still more important people, people who somehow can actually “change things.” That gets frustrating, in several ways. It’s hard to get their attention, first of all. And these people who can change things want to get and keep your and others’ attention. So they find it easier and easier to give lip service to “important” topics, but not necessarily easier to do anything about them.

Take the Iraq war. A preponderance of the American public wants it to end and thinks that is important. But there seems to be nothing they can do individually to get attention from those who can effect that outcome, and it is only one of many topics that get some attention. So it’s easier to focus on something else.

Suppose everyone in the world agrees that such and such is important. Then little or no attention can be gotten by merely re-iterating that.

Whenever some people get attention for an issue they think is important, they automatically create an opening for those who choose to say that the opposite, or perhaps some variant, is more important, hinting that this alternative, if attended to, will help the audience pay more attention to themselves or to those they want to pay attention to (say, their children, friends or stars) than the other choice will.

In truth, a large number of people with very disparate mindsets have their own issues, their own pet peeves or pet hopes, their own sense of what is most important and most “necessary for the healthy and happy functioning of global society,” as Salomone puts it. That’s not always so good. Even Hitler justified the Holocaust on the basis that Jews were an “unhealthy” presence in the world. Hitler was an anti-Semite of long standing, but it seems one reason he so emphasized this was that he found, when speaking to German audiences angry about their defeat in WWI, that he got much bigger crowds and applause, much more attention, in effect, when he trotted out this hatred.

Paul Salomone is disgusted that people waste their time paying attention to the Paris Hiltons of the world. Hitler, after taking power, rid German museums and art galleries of what he considered “degenerate art” including cubist, abstract, and expressionist works. When he decreed that these works should be put on display so that the populace could share in his disgust, instead Germans flocked to the show out of genuine interest in this art.

Of course, it is vast hyperbole to put Salomone and Hitler in the same paragraph. Yet the comparison indicates the problem. How do we decide what is important? Is there any way we can just pay attention to truly important things? Attention equality (that is, equal attention to each and every person) would clearly be impossible to enforce, but is attention to what is important in any way a more usable criterion? Who would apply it?

Importance is going to continue to be decided on the basis of what positions and statements get attention and through and by whom. But we can expect endless logjams as important issues and personalities pile up. Things that are deemed important in this way are first seen as problems. Then, possible solutions seem important. Then come reservations about the solutions.

Meanwhile, many will change the channel, preferring amusement to frustration. That’s better by far than starting wars or genocides, a cruder way to try to be important or see to it that “important” changes actually happen.

I don’t mean at all to suggest that nothing is important. I have my own collection of issues that I view as such. I prefer diplomacy instead of war-making, oppose arms sales, favor energy conservation improvements to stop global warming, want better teaching of arts and music in public schools, want all drugs legalized but sold for low cost by the government, want everyone to have broadband Net access, want better protection for citizens of the Eastern Congo, and better protection for elephants, and on and on. And of course I think proper understanding of the Attention Economy is important. Some of these are more important to me than others. But these reflections leave me with fewer illusions that what I want is likely to permeate enough minds to make a difference.

Each of us has an implicit or explicit list of what is most important, of what we most want fellow citizens of the world, or those who are already most important, to pay attention to. But my reflections here suggest that actual change is most likely when it seems pretty unimportant, happening almost in the shadows, or at best merely micro-important.

Jul 162007

Friendship might be defined as a state of more-or-less mutual attention paying. From little acts of attention, including times when you are just together, talking, walking, or engaging in some joint activity, your minds get into sync so that you can align easily (that is, pay attention) to what the other is saying or doing, feeling or thinking. Long friendship makes attention that much easier and fuller.

No wonder people clamor for friends, and even claim friendship in crude ways. For example, the social networking site Facebook makes it easy for people to try to establish a somewhat ersatz friendship with you when all they may know of you is your presence on the public friends list of someone who is indeed your friend. And that can quickly accelerate to claiming friendship at even more degrees of separation. Friend-of-friend makes some sense, of course. If someone is genuinely friends with both of you, you can align with the second-degree friend in part through the ways you both align with the friend you have in common.

What you know about your friends includes funny little things, little admissions, somewhat scandalous actions on their parts, and their little annoying habits as well as their pleasant, enjoyable ones; it all helps make them real, making it easier for you to align with them.

Take this one step further. What you don’t happen to know about a friend of yours you will eagerly want to learn from some other friend, and in so learning, you can feel not only a surge of guilt over stumbling upon a secret, but also a new sense of connection, both with the friend you are gossiping about and with the friend (or would-be friend) who passes on the gossip to you.
And now one more step. Suppose you don’t personally know the object of the gossip, but are familiar with them as a star. It could be Paris Hilton, David Letterman, Angelina Jolie, Barack Obama, Venus Williams, or even a dead celebrity like Jean-Paul Sartre or Sylvia Plath. You have occasionally aligned with this person by listening to them on the radio, seeing them on TV, reading their words, hearing of their performances, repeating a joke they have told, and so forth. They have or had no knowledge of your existence, but they act or acted, in some way, like a friend. Anything that adds to this sense of familiarity will only make it easier for you to pay more attention to them, because the more real and human they seem, the easier it is to align with them.

That is why we soak up memoirs, biographies, interviews, and other less formal kinds of connection with stars, such as gossip. The better it feels as if we know them, the easier alignment becomes. That’s true even as we cluck over some scandal involving behavior we would not dare engage in ourselves, but still find in some degree enticing. And it’s equally true if they are caught in a behavior considered scandalous that we ourselves, or those actually near us, have engaged in many times.
Some celebrities may hate being gossiped about, being followed by paparazzi and all the rest, though often they also realize that gossip only helps build fan interest, gaining them more attention of a completely desirable type.

For a fan, even a mild one, gossip about the appropriate celebrities is an avenue to getting attention from other fans of the same stars, in the same way that gossip about close friends is. We must be careful before too easily accepting part of Paul Salomone’s response to my previous post: “I know of plenty of folks who waste hours a day charting the obscure maneuvers of far-off celebrities, whilst their personal lives are in high disarray. Were they actually to put some energy into their local communities, paying attention to local causes rather than watching Access Hollywood and collecting memorabilia, we’d all be better off.”

Unfortunately, Salomone seems to ignore how the Attention Economy (or Attention Society) generally works, and the real value that many people find in such gossip. Attention equality is certainly in some ways a desirable end, but it is also hard to achieve, and not without some significant costs. For example, a purely local outlook would still leave us blind to many issues in the global system we inhabit. If Angelina Jolie, for instance, tries to acquaint us with suffering in Africa, her effort would accomplish far less if she could not get massive attention.

(In my next post, I will address the issue of what concerns are “important,” in response to other lines in Salomone’s comment.)