‘Underworld’ on the iPhone

Underworld is a novel by Don DeLillo that is 827 pages long. I have a very nice remaindered first edition that I purchased several years ago. On a whim last winter, I decided to read it. I felt — this will probably tell you more about me than I want you to know — that I was finally “ready” to read Underworld, that I had read enough other DeLillo to be able to absorb it. And so I dove in, but I quickly decided to download an electronic copy so that I wouldn’t have to tote around the two-hander hardback. I downloaded a Kindle version, which conveniently appeared on my phone and on my Kindle. I thought being able to have the novel with me at all times would increase my odds of finishing it. And then, out of a fit of perversity more than anything, I decided to see if I could read the entire beast just on my phone.

And I did it, though I should immediately confess that I cheated a little. I read most of the first novella-length chapter and a couple of bits in the middle third from the hardback. And I read the last 20 percent of the book on the Kindle, as I was on a trip and didn’t want to use up my phone’s battery, about whose level of fullness I am in a constant state of anxiety. So, 827 “pages,” three versions, three sets of marginalia — a half-adventuresome, half-grumpy sack-race into the future.

After several hundred thumb-flips on my iPhone, the palatial spread of the actual hard copy was resplendent. The book made more sense as a structured object when I read the hardback. I also had a better sense of where I was in the book and how much terrain I still had left to cover. Perhaps this point is obvious. I am normally highly concerned with the number of pages left when I read a story or a novel. I am not sure how to account for this anxiety. One almost begins to question whether I like reading at all if I’m always concerned with how much of it I have left. But this anxiety was amplified by reading the novel on my phone, and not because the phone doesn’t tell you where you are in the text. In fact, it has multiple, frequent, and nefarious expressions of your progress, which might explain my disposition.

When you first open your book on the iPhone, stuck still somewhere in the middle, the information that appears at the bottom of the screen for a brief moment is something like “697 of 827 pages left,” or sometimes the percentage read, as well as the “position,” which reads something like “Loc 2729 of 12607.” This position is important but confusing: the non-page-number-like page number that the Kindle software uses to determine location in the absence of real page numbers. As we move inexorably toward more e-books, or e-books as the first step in publishing book-length bodies of text, it becomes increasingly important to have some other locator aside from page number. It all depends on what edition comes first and if there is some concrete analog referent out there in the world.1 In fact, as formats proliferate, one could see the need for some standard type of locator data. I realize that you can search words within the book, but it would be nice to be able to pinpoint the — well — position you’re at.

Also, the phone displays a progress bar that shows where you are in the book. But after this brief blip of locative information, the only remaining pieces of logistical info are the page number information on the bottom left and the percentage info on the bottom right. At some point in the many moons it took me to finish Underworld, that bit of page number info changed into time info.2 For instance, “9 hrs and 8 mins left in book,” which on the one hand is nice info to have (I can almost plan my weekend around it), but the problem I soon found was that it seemed terribly inaccurate, and no matter how fast or slow I read, I couldn’t seem to affect my personal reading-speed prediction. Occasionally it would chip away at the time, but I couldn’t tell if it was improving because I had a particularly successful reading lunch hour or because I hadn’t touched the book in a week. And it just made me more self-conscious about how slowly I read, and I kept trying to impress the little machine with an improved flip-rate. As is perhaps obvious, this is a ridiculously stupid way to go about reading a book.

This same “time left in book” trope appeared on my Kindle when I began reading the final section on the airplane. But being as I was trapped aboard a modern aircraft and was relieved of the constant pressure to read email, send email, read tweets, read articles that are found within tweets, destructively compare my daily activities to my “friends” on Facebook, or otherwise look up stupid crap, I read on the Kindle with increasing pleasure. I read on a second-generation Kindle Paperwhite. This is my second Kindle, and what I like about reading on these devices is just how incredibly stupid they are. All you can do is read, and if you drop it, 9 times out of 10, the device is fine, and on that 10th time, you can replace it cheaply and all of your stuff appears back on it.3 I have to say I prefer my old button-keyboarded Kindle to the newer, fancier touch screens, mainly because highlighting and note taking are now more difficult. As I have read more on the Kindle, I have slowly given up typing notes unless I am extremely provoked. It is just too clumsy. Likewise, the highlighting feature is such crap that I often just highlight the general area of interest rather than the exact words I want. I would lament this more, but these note-taking gestures are really just ways to commemorate my own enthusiasm more than anything else. Though I do, I must narcissistically admit, enjoy going back to old paper books and seeing what I was provoked to highlight. I haven’t used the Kindle enough to see if I will ever go back to enjoy my digital traces.

Strangely, as a pure reading/highlighting/note taking experience, the Kindle app on the iPhone is much better. Perhaps I am just more used to typing with my thumbs on this device. Of course, after a while one grows tired of constantly flicking to a new page on the phone. The screen is just too small, the paragraphs too scrunched. One begins to daydream of the palatial white beaches of your everyday trade paperback. (I realize I could solve this problem by buying one of Apple’s new little kneeboards, but I have my own personal planned obsolescence geared around when I drop my phone, and I’m still waiting for that to happen again.)

Whenever emphatically pro–e-book people crow about carrying around a library in their pocket, I think about Tom Petty, who owns some unknown multitude of beautiful vintage guitars. He was showing off his glory room in some television segment, and he said, self-mockingly, “Of course, you can only play one of them at a time.” You can carry around a library in your device, but why would you? I mean, after about 10 or so titles, what’s the point? A book is not a mixed tape. And if you want to read lightly curated brief sections of text, why wouldn’t you just go online, where that is the default mode?

All of my middle-aged griping aside, it was awfully nice to be able to switch devices as mood or circumstance arose. And I surprised myself by not really having any trouble switching between the three versions. The syncing to the “last place read” between the electronic devices worked well, and, except for that evil little timer, I could easily jump ahead to a given page number if I’d snuck off for some old-fashioned hard copy action.

But didn’t the meaning of the book change? This is the question I’ve been chewing over. What I comprehended and didn’t comprehend reading Underworld is mostly not tied to which device I read it on. At least, I don’t think so. My misunderstandings are for the most part due to the slowness and fracturedness of my reading — infrequent sections split over months. I did have trouble remembering some characters’ names, and whereas if I were reading it only on paper, I would just flip to an earlier section of the book to check myself, here I just plowed ahead and eventually figured it out. This doesn’t really make sense, because you can jump easily between sections or just search for a name, but this slight speed bump, this just barely foreign process, kept me from doing it. I would have understood more if I hadn’t been so lazy, but this goes for more than simply reading a fat novel on a thin computer. Turns out I need to practice reading on an electronic device.

But all these devices did confirm just how nice it is to read a paper book. Perhaps this is the part of the post where I will just become old fashioned and sentimental. Aside from the nice physical properties, a book’s self-sustaining independence is comforting in and of itself, and marches in opposition to the networked world. Heck, a good book marches in opposition to the physical world, too. Do not disturb is the caption under every person absorbed with a book. (I suppose the caption for a person absorbed in a device would be, Hold, please.) Despite the appearance of some real life historical figures, the world of Underworld is its own phantasm. It’s an analogous existence. And the virtual reality of any fictional world is complemented by the disconnected nature of the physical book. The physical restriction blossoms into an epistemological freedom. You can’t look up the definitions of the words in the online dictionary, not because the physical book is disconnected from the network, but because the book is its own network; any good work of fiction provides its own definitions. To go outside of it, you necessarily break the spell. Just by opening the cover of a book, you shut the door on so much else.

And here’s the part of the post where I grow increasingly prescriptive: If the novel is to remain relevant, or to function as its own distinct narrative species in our new networked reading life, it has to become the island, blissful in its own self-sustaining ecosystem, within the rising sea-level of text.

1. And does anyone think that we’re not moving inexorably in this direction? Now that our short texts have moved toward HTML and the web, it surely feels like it’s just a matter of time before our longer chunks of text, the organization of text formerly known as books, move toward an e-form of distribution. This doesn’t mean that some books won’t always be print books first or more themselves as print books, and this doesn’t mean that some books won’t evolve into print books. Print still seems the natural and certainly the most stable archival mechanism. And even that’s getting easier: I’m now capable of designing and printing a paper book that will last 1,000 years (if I put it on a shelf and don’t eat lunch over it), and I can barely design a business card. But in saying this (that e-books might become the primary distribution mechanism for books), I don’t think the Kindle is the end-all of electronic book distribution. It is simply the first instance of mass success. I think — and here I am predicting, which I am horrible at doing — that the e-reader, as a distinct device (can we please come up with a more elegant name for this?), will continue only as a niche product, and our main reading devices will be some small portable computer formerly known as your cell phone, and I think that other platforms will develop to distribute e-books, be they free or charged, or some combination thereof. Some books will strive for the prestige of print because that particular audience (poetry, for example) craves print and feels that print and print alone substantiates its existence. But it seems that the human population as a mass moves toward lower fidelity and increased efficiency, and it seems foolish to ignore the gigantic convenience of e-books. This all might be screamingly obvious, but I find it useful to write it down, if just for myself.

2. My guess is this was a software update. I have previously written about the “minutes to read” phenomenon here.

3. I’ll save addressing Amazon as the publishing world’s chief innovator/bully for a future post.

D.G. Myers, RIP

I did not know D.G. Myers personally, and except for a couple of Twitter exchanges, I never communicated with him directly. I knew him only via his writing, which I read with steady attention for approximately the past six years. I can’t remember now what link pushed me in his direction, but after reading just a little bit of his literary criticism, I had the singular question that so much good writing throws off: who does this guy think he is?

I was in my first year as a “visiting writer,” teaching various creative writing courses to undergraduates, when I found his A Commonplace Blog, and I was immediately taken — his seemingly encyclopedic knowledge of the novel, his generosity toward various writers I knew nothing about, his hostility toward political correctness and fashion, his sense of literary standards in a standard-less world. One of his ideas in particular has become lodged within my own life so much that I quote it to myself almost weekly. Here is the long version:

Literature is just the writing that arouses the impulse to preserve it and pass it on. (I call that the “canonical impulse.” Canons are inseparable from literature. To call something literature is to start a canon.) “When an inability to stay interested in Sappho lasted longer than the parchment she was copied on,” Hugh Kenner says, “the poems of Sappho were lost.” There are many reasons to keep something from being lost, however.

These many reasons cannot be contained by a list of genres, no matter how long it is extended; nor by distinguishing fiction from non-fiction (because there are whole literatures, of which Jewish literature is only one, to which this distinction is an utter stranger); nor by “privileged criteria” like sublimity or irony or artistry or “stylistic range” or “bravura performance” or anything else that can be humanly imagined (because exceptions to the rule will immediately suggest themselves).

Literature is simply good writing — where “good” has, by definition, no fixed definition.

I often want to emblazon that last line in my office — perhaps scrawled onto the surface of my desk with a knife. What it did when I first read it, and what it does now, is relieve me from the narcissism of minor differences that so much contemporary American literature finds itself embroiled within. Is it realism or magical realism? Is this novel thoroughly postmodern enough? Is this “experimental”? Are fairy tales de facto bad and non-adult? Does this novel contain just the right amount of autobiographical confessionalism? Does this novel attempt to contain all of contemporary American culture? Is this novel new and different according to these obscure criteria?

Furthermore, Myers’s definition of literature forces me to come up with my own definition of what “good” is — articulate it, defend it, proclaim it, try to manufacture it myself.

I resolved to read Myers’s book, The Elephants Teach: Creative Writing Since 1880, during my first summer break from teaching. It was a revelation. It made what I was doing — pretending to be a Writer, so that I could fund my own attempt to write — historically coherent within the broader institution of American higher ed. I had come to the book with a short, convenient notion of creative writing’s history: that it had begun after WWII and the GI Bill in order to deal with the influx of students, some of whom wanted to be poets and novelists, etc.

Myers wrote that there was an increase in creative writing as a consequence of the GI Bill but that the pedagogy had begun much earlier, at the beginning of the century at Harvard, and was a manifestation of the broader impulses of progressive education: the idea that every student had something to express and that part of education was providing the means and the context to express it. The book also taught me that poetry and fiction, sequestered at the high-art end of the hall, were above neither freshman composition nor literary scholarship. (I heard one senior professor refer to comp once as the “gutter of the profession.”) Freshman comp was the moat you had to swim through to get to the castle of courses that “counted toward the major,” and I had finally made that transition, or so I thought. But Myers showed that in the beginning the courses came out of the same philosophical impulse, and that the subsequent battles were over turf and prestige, and that I should be much less cavalier in my pose of artistic importance. All of us teaching creative writing were merely teaching comp’s kin.

Myers didn’t take away my gargantuan level of self-satisfaction at being a visiting writer, but he did build a lot of context under my feet, and he made me a better teacher. I began to tell every student who approached me about going to graduate school to read Myers’s book. It’s one of those brief, historically stuffed books that makes sense of an entire cultural phenomenon and relieves the amnesiac MFA vs. NYC debates of most of their self-puffed importance.

If he had only written that one book, I would have reason enough to be grateful toward Myers, but I had the regular appearance of his prose to contend with as well. Lord knows I didn’t agree with all of his literary judgments (no patience for or inclination toward DFW), or with his politics (extremely conservative), or with his religious beliefs (Orthodox Judaism), and at times I thought he was just being cranky (which of course I never am), but the cumulative effect of reading his prose over several years was unambiguously inspiring. I began to read him the way I have come to read the essays of Cynthia Ozick — as a balm and a provocation. When I am feeling down, either about the literature I’m reading or the literature I am trying to write, I go to Ozick and now Myers to be reminded why I’m doing what I’m doing, and to see an eloquent encounter with literature in action.

Not only was Myers’s writing motivational and provocative in its discrete installments, he was also a model of how one might write today. As a professor who stood in opposition to almost all of the directions of contemporary academic scholarship, and as a writer who had written for various publications but who was eventually fired from his blog and regular review slot at Commentary when he published “The Conservative Case for Gay Marriage” after the 2012 presidential election, and as someone who in his last year did not have his teaching contract renewed at Ohio State, so that he became a teacher without a classroom — as all of the institutional contextual girders that supported his regular writing fell away — Myers still continued to write. He showed what one person with a library card, a Blogger account, and an internet connection can accomplish.

And what did he accomplish? Well, he became a permanent fixture in my literary sensibility, and he did the same for several other writers currently working. You don’t have to do much detective work to find a wide swath of contemporary writers and academics who read Myers avidly, who did not necessarily agree with him but recognized the excellence he embodied.

Myers wrote that “the sum and substance of what it means to respect the institution of literature” was manifested in the “moral obligation to write well.” What’s so burdensome about this obligation is that it must be borne every time you set down a sentence. But Myers bore that burden as if it were a blessing.

He died last Friday after living with prostate cancer for several years. He was married and a father to four children.

Mechanisms of prestige

Yesterday, I was reading this excellent post from Rohan Maitzen at her Novel Readings blog, which led me to another excellent post where she succinctly describes the predicament of literary criticism at the present time. Namely, where should a professional critic publish her criticism in the age of easy online self-publishing, aka blogs? Should one publish via the slow, vetted, and prestigious venue of professional scholarly journals? Or via one’s own personal blog? Or somewhere in between?

Maitzen quite calmly and intelligently says it should be a mixture and that different forms of writing are better suited to different contexts, but that each has its place. She traffics neither in blog triumphalism nor in professional old-school, rear-guard defensiveness. Blogs are neither the only place for literary criticism nor merely a venue for networking and personal commercials. They are another avenue for writing and thought, and the practice of regular blogging can be its own valuable contribution to literary culture. What’s more, writing via a personal blog in some ways fixes the problems of professional scholarship: its slowness, its almost autistic inability to deal with a non-professional audience, its theoretical architecture and prior-scholarship throat-clearing, its restriction to printed journals located only in college libraries, etc. (Of course, many of these problems are also intentional benefits; such is life.)

What piqued me personally about Maitzen’s post is how much of it gives shape to thoughts I’ve felt but have been unable to articulate regarding the publishing of short fiction. I am not in any way a “professional literary critic,” but as someone who was, for a brief time, a teacher of creative writing, I see many of the mechanisms of prestige and professional publication for literary criticism mirrored in the world of creative writing. In fact, since fiction and poetry writing became activities of instruction within the English department, the publication of those types of works (via literary journals often run by graduate students at large universities) has modeled itself on scholarly peer-reviewed articles. The consequences are sometimes similar: extremely long publishing cycles, prestige from publication combined with a kind of sequestration from day-to-day literary life, creating a kind of slow-moving museum of prose, etc. To be sure, not all literary journals are like this; many of the livelier journals are disconnected from university life entirely, or they are run by permanent editors. I don’t think it’s a coincidence that the journals with better editorial consistency aren’t changing out their student mastheads every 2-3 years. (Of course there are exceptions to my exceptions, but go with me.)

But what this means is that literary fiction and poetry are even further decontextualized from everyday literary life. They exist solely on the reservation of the campus. (And they’re extremely hard to get into! At least, I’ve found them extremely hard to get into, but perhaps I’m simply not talented or diligent enough — a distinct possibility.) It becomes a country club of staid fashion and values.

And all of this professional rigmarole is rendered even more ridiculous when you take into account the absurd ease of online publication. Why spend years submitting a 14-page story so that it can be published in a modestly respectable print journal that will (under the most wildly optimistic of circumstances) be read by 1,500 subscribers, if, in the span of one afternoon, it can be fairly nicely presented on the world of worldwide webs, where it can be read by anyone (or no one!) for as long as you’re able to effectuate the maintenance of the software? The answer, of course, is the prestige of the print publication. It means something to publish in State University Quarterly, whereas it means almost nothing to publish here at my blog, even though the words themselves could be exactly the same. The problem here, which I think is even more acute for so-called “creative work,” as opposed to literary criticism, is one of context. So much of art depends upon its context to determine its value. A urinal in a bathroom is something you piss into, but placed sideways in an exhibit and signed, it’s a sculpture. In a realm of no context, it’s both, but there is no realm without some kind of context. What the context of prestige provides is legitimacy. In fact, these mechanisms of prestige often take the very place of having to read a story. It’s in the New Yorker; it almost doesn’t matter what it says. The container is more important than what is contained. Or take the Harper’s “Readings” section, which picks and chooses pieces of found text and re-contextualizes them within the well-justified columns of that magazine. What a fortuitous changing of context!

(I feel like I have said all this before, probably just to myself but also perhaps in some form on this website. Here’s hoping I can turn redundancy into a charming quirk.)

But until a piece of online self-publication is afforded attention to its own self-generated context and potential worth, online writing will exist in the eyes of professionals as a type of never-ending graffiti.

I think that this will end or at least develop when some kind of literary critical version of Jason Kottke comes along, who will not publish the good literary criticism but will draw attention to the worthwhile, already-published literary criticism. Publishing will come to seem less important and the drawing attention to, the congealing of attention around, what is already available will become much more valuable. The context will become being picked up by Kottke, or some such.

Of course, with all of the rapid linking going on now and the fact that many lit blogs have been running strong for ten years, we’re already in that world; it’s just not recognized as the primary distinction. Getting in print is still the primary distinction, when it should be the attention of a respected editorial eye: a Kottke of the literary world, or a Maitzen, or a James Wood, or a Dan Green, etc.

Notes on ‘The Free and the Antifree’

I almost always enjoy n+1’s The Intellectual Situation, which usually appears at the beginning of each issue. The typical format is a kind of flaneur-diary, where the editors collectively embark on some errand and interpolate essays on contemporary matters along the way. It’s a peculiar form with an old-fashioned feel, and I am not sure of its historical precedent, and being as this is a dashed-off, devil-may-care blog post, I’m not going to research this. What I’m going to do instead is annotate the first half of this issue’s Intellectual Situation, the essay “The Free and the Antifree.”

Lately, online, depending on what sites you read, the topic of what writers should or shouldn’t be paid has become contentious. I’ve been thinking about writing about this topic for a while, but n+1 has helpfully done it for me, thus preserving my cherished blog torpor. The Intellectual Situation, as a genre, often does this: it coalesces the vapors swirling around contemporary discussion into a (usually) coherent argument.

Though it’s perhaps not immediately obvious, all of the following half-thoughts contribute to the simmering stew of a question that’s been bothering me for the past year or so: what is the point of the literary magazine now? Or, more specifically, what is the point in submitting (on spec) to (small but prestigious) literary magazines in hopes of getting published (in print, but with no hope of any real pay)?

These writers and copy editors were among the many who, faced with limited resources and their own cultural omnivorousness, came home each night eager to download MP3s, PDFs, and other digital copies of artworks and research they would otherwise be unable to access. Around the reality of these thefts a powerful ideological movement emerged, taking as its inspiration not just facts on the ground but also the libertarian, antigovernment, “hacker” spirit of the earliest personal computing and internet communities. The apostles of the Free Culture movement, as it came to be called, argued that stealing digital content was a progressive politics and should be brought into the open. Some of these apostles were hucksters and profiteers, others were merely hypocrites (who preached the virtues of free from their perches as well-paid magazine editors or college or law school professors), but still others, like the freeware hacker Aaron Swartz, were true believers. Congress had allowed copyright protections to be rewritten by huge corporations (most notably Disney) to become a parody of a law. If what was being illegally downloaded was some of the best that had been thought or said by human beings, and the downloaders were people who couldn’t afford the purchase price of the books or movies (some of which were expensive) — wasn’t that a good thing?

This too swiftly equates all participation in internet culture with a type of thievery, which is detrimentally simplistic. It ignores the more obvious point that the net is based on frictionless sharing of data files (as in, that is how it literally works) and that much of what was “stolen” in this regard was freely provided. See, e.g., the newspaper industry. (Whether or not that was ultimately a good decision on the part of the newspaper industry is another discussion.) The editors (the piece, per usual, is unsigned and its mode is ex cathedra) seem to be lamenting the disappearance of somewhat writing-related, somewhat available hack work rather than the actual artwork this was originally meant to support. This feels to me like a misplaced nostalgia — Blues for Editorial Assistant.

Out of this necessity, conventional magazine journalism came to be marketed as an endangered art form. Nowhere was this more evident than in talk about the influential online aggregators Longreads and Longform. As nearly every article about Longreads’ founders said, they were “passionate about longform storytelling”—in other words, commercial journalism had become a passion project. Its producers, mostly old-fashioned magazines like GQ, eagerly took to this as well, tweeting their #longform and #longreads, and on every front advancing the idea that their writers were artists, in need of public support. Of course, there was a catch: in order to be selected as a “longread,” the work had to be available online for free. Eventually, Longreads launched a $3 monthly membership, which would not go to editors and writers but “contribute to our editorial budget, which goes toward finding and sharing outstanding storytelling from around the world.”

This is a good point, and one can see at the same time a valorization of the magazine profile writer as not just a sharp journalist but as a true artist. Cultural reportage became the new novel. A good example here would be the extremely vigorous lauding of John Jeremiah Sullivan a couple of years ago when he published his very good collection of reportage, Pulphead, which was greeted, in a kind of collective grasp for excitement by the publishing industry, as a proto-DFW-level achievement. (It’s a good book, don’t get me wrong, but the excitement had a strange, willed, meta-quality.) It’s useful to remember here that on college campuses journalism has traditionally been a separate school from the English department/creative writing program. The creative writing program is a child of the English department in administrative, budgetary, and philosophical terms, while the journalism school has always conceived of itself as a trade school. Its closest departmental sibling would probably be the business school. In other words, the creative writing program has always been ideologically aligned with art over money/utility. But now, with the utter tanking of the newspaper industry over the past decade and the slower but still fraught decline of the magazine industry, journalism has been turned into art, aka that which we want to keep around but which is no longer visibly useful or profitable. But we’d like to keep it around just in case. It will be interesting to see how journalism tackles the issue of contest entry fees.

For little magazines (like ours), these conversations were painful, for the critics had homed in on a particular problem. The little magazine always originates as an image of utopia that it then betrays. It starts with love but very little money, and because it is edited for free (mostly), it gets writing for free (mostly) in a nonexploitative way, since no one is extracting any surplus value. This is the utopian stage, where writing as a competitive enterprise, as a sphere rife with greed and envy, disappears. It is replaced by a pure and purely unnecessary (in the sense of not being directly useful to the reproduction of biological life and material needs) contemplation of essential, fundamental problems — that is to say, it becomes art. But then, almost immediately, the little magazine becomes a way to “graduate” to the world of hackery — for its editors and writers to become journalists, novelists, overpaid business school speakers — and in this way can serve more as an instrument than an opponent of the hack world.

And so, strangely enough, it was smaller publications that seemed most vulnerable to the shaming critique produced by Who Pays Writers. Not only the publications but the writers, too, had to be shamed, as full-time freelancer Yasmin Nair did, when in a controversial blog post she called academics and others with steady jobs who wrote for small fees “scabs.” Both the people who gave and the people who accepted unpaid internships at these publications, further perpetuating their existence, would have to be shamed as well. As someone wrote to n+1 about its (unpaid) internship program, “It’s typical that you would advertise an unpaid internship. You should be aware that this is no longer done.”

The key phrase here is “But then, almost immediately,” which indicates that actually there is no distinct transition of phases in literary magazine production between the art-drunk utopia and the grubby world of hackery; these circles overlap simultaneously and always. The editors are trying to turn a complex phenomenon into a binary. The divisions between hackery and art are provisional and fluid, like game lines on a soccer field that have to be redrawn each week.

The argument between free and antifree may be framed in many ways; one would be as an argument between the American scholar Lewis Hyde and the French Marxist sociologist Pierre Bourdieu. In his great book The Gift (1983), Hyde tried to explain, against an American intellectual background of economic rationalism, why people would do something like write poetry. Bourdieu, whose work was beginning to be translated into English around this same time, had already prepared an answer to this question: people make art for the same reason people do everything — because they want to gain capital. In the case of art this capital was often symbolic rather than financial, but it was still capital. For Hyde, art-making looked more like the premodern gift economies described by anthropologists like Mauss and Lévi-Strauss — the creation of something without obvious utility that could be presented to the world as a gift. (Bourdieu had also written about gift economies; for him they were, like art, a winnable game with rules and strategies.) For Hyde, the secret of art was that there was no secret — art-making was what made us human. It was what we did for free.

As it happens, Hyde’s book is often cited as an argument against payment for writing — “Art is a gift,” these people say, as they pick up their paychecks from Princeton or Iowa or Columbia. Antifree responds with some variant of Bourdieu’s old unmasking: Nothing exists outside the realm of exchange. If a writer is not paid in money, she is paid in “cultural capital” that translates into improved standing and, eventually, cash. So why (asks antifree) should the writer be forced to wait? Why shouldn’t she be paid right now?

Again, what I disagree with here is the apparent editorial certainty. The problem I have with Bourdieu (whom I have not read except in n+1–style summaries) is that he sounds as grimly dismal as conventional economists, nothing done or left undone outside the cold light of capital and rational self-interest. And just because Hyde’s book (which I have read and am a huge fan of) has been misused in this way, it doesn’t mean that’s the correct way to read him. As someone who himself attempts to make art, I side with Hyde, if for no other reason than that Hyde makes me feel better about trying to make art. Bourdieu strikes me the way many other French literary theorists do — provocative and challenging but ultimately rather empty. I am speaking in terms of my personal artistic ambition. French theory guts my motivation, because if the market doesn’t value your art in monetary terms, theory devalues it in intellectual terms. Though Bourdieu’s notion of “distinction” is fascinating, it’s also a kind of harsh economics of the spirit. One doesn’t want to be a dreamy romantic all the time, but being woozy from the vapors of your own self-importance turns out to be a better condition for making art, at least in my limited experience. It keeps the fires burning, when otherwise art-making seems like a ridiculous game, a kind of meaningless middle school politics.

But as usual we have some qualms. Sometimes antifree can feel like it has invested too much of its energy and passion in the fight for an extra $50. Which is not to scoff at $50. It’s a way station to making a living. But for the moment it’s just $50. The conversation shouldn’t stop there. On the money side, perhaps the next step for antifree is to create and strengthen a union — one that can demand standards for contracts, reprimand institutions for reneging on terms or norms of conduct, and otherwise represent the interests of culture workers before the ultimate bearers of responsibility for the diminishing of salaries and security: media conglomerates, corporate boards, and shareholders. And what about tax reform? In Ireland, artists are exempt from taxes on the first 40,000 euros they earn from their work — whereas artists and freelancers here are faced, among many other obstacles, with onerous self-employment taxes that punish anyone who tries to stay clear of the corporate system. We could do better.

Of the two ideas, I think amending the self-employment tax is more practical. I think trying to create a union is doomed. If we can’t sustain viable unions for auto workers and teachers, there’s no way we’re going to establish a union for writers, especially since the designation of “writer” has become so diffuse as to be almost meaningless. You would have to professionalize everyone.

The Intellectual Situation as a genre is always willingly provocative and a bit simplistic. This is just a necessary rhetorical trade-off. However, I will admit that lately these essays feel less charged and more often simply vague. I’ve been a big, subscribing fan of n+1 since its first issue. Though of course I don’t always agree with everything in the magazine, it’s a welcome regular presence in my intellectual life, a kind of seasonal astringent, a somewhat demanding houseguest. Much of this astringency comes from the magazine’s deft deployment of various binaries. But now, almost 10 years into the magazine’s run, everything seems to me more complicated and muddled than these provocative, brief essays allow. Previously The Intellectual Situation felt as if the editors seized on a current discussion and pushed it forward, and by their eloquence and force of vision became the touchstone for all future discussion of a given topic. But lately The Intellectual Situation essays feel more like wan summaries of missives I’ve already read online. Perhaps I’m just ten years older and saddled with my own increased poundage. As my margins have increased, so has an appetite for admixture.

Notes on ‘Bluets’

1. I haven’t read Leslie Jamison’s new book of essays The Empathy Exams, but even if the slew of reviews is only half right, it’s an amazing book. This past weekend, Jamison had an essay in the Guardian talking about confessional writing and how it was not in fact a primary venue for narcissism but an arena for so much more. Even though I haven’t read her book, it struck me as an odd argument to make; we seem indisputably awash in personal confessional writing. This is not to disparage it. Confessional writing, like writing of the nonconfessional sort, can be both excellent and not. It all depends. As a mode of writing it seems currently in very little need of defense. The essay struck me almost as an attempted high-art rescue of confessional writing, a gentrification of an allegedly seedy neighborhood.

2. I read this in light of recent mental chewing of a different, much-praised essay book, Bluets by Maggie Nelson, which I finished still wanting to like much more than I actually did.*

3. Maggie Nelson, according to her jacket bio, is most often classified as a poet. The book is divided into 240 numbered sections, most of which are less than a page in length. These sections total 95 pages.

4. I should say that I am not against numbered brief sections nor am I against essays that read like poems or vice versa.

5. Here’s how the book begins:

1. Suppose I were to begin by saying that I had fallen in love with a color. Suppose I were to speak this as though it were a confession; suppose I shredded my napkin as we spoke. It began slowly. An appreciation, an affinity. Then, one day, it became more serious. Then (looking into an empty teacup, its bottom stained with thin brown excrement coiled into the shape of a sea horse) it became somehow personal.

6. This beginning is indicative: the conversational tone that surges to stylization and heightened poeticism, the performance of her thinking as though it were a confession, information she is pretending to confess.

7. Nelson really likes the color blue, finds a basically religious soothing in its myriad appearances in her life. This might seem odd but hardly amounts to a confession. It might rather be a slightly askew preoccupation with religious or spiritual undertones, or at least undertones of the meaningful, but those undertones are not audible here. She tries to dress up a relatively banal personal preoccupation as hidden, private wisdom expressed at great psychic expense. But it’s just her riffing on the color blue.

8. Perhaps what I’m really trying to say is that I wish the book took itself less seriously.

9. The effect of the short numbered sections is an agglomeration of riffs. Some topics covered include: a friend who has been paralyzed by an accident, never described; memories of a recently departed lover; scholarly snippets from previous writers on the color blue; her accumulation of blue objects and moments in a lifetime of being consistently moved by the color blue.

10. I wish I could say this added up to something or that it satisfactorily didn’t add up to something. I wish I could say it was provocatively fractured.

11. What this quickly leads to is a kind of faux confession, confession as performance and rhetorical mode, rather than any information actually being confessed. And faux confession is basically a form of bragging, a type of willfully glamorous information offered strictly to impress the reader.

12. The two main relationships in the book — the paralyzed friend and the departed lover — feel opaque. For example, the sections about her friend feel like bids for seriousness, but the contrast between her friend’s paralysis and her own preoccupation with the color blue undermines her color enthusiasm. It makes Nelson seem trivial, and it makes her friend seem like she was used as a rhetorical device, as ballast for our narrator’s serious whimsy. Here’s an example:

119. My friend was a genius before her accident, and she remains a genius now. The difference is that these days it is nearly impossible to discount her pronouncements. Something about her condition has bestowed upon her the quality of an oracle, perhaps because now she generally stays in one place, and one must go unto her. Eventually you will have to give up this love, she told me one night while I made us dinner. It has a morbid heart.

The high lyricism of “go unto her” on first reading seems deft and complex, but on a second reading seems enormously callous — as if the friend is being mocked for the sake of a poetic conceit.

13. The same goes for the central relationship with the unnamed man, which provides the closest thing to a through-line in the book. It’s here where the book is interestingly confessional and interestingly contemporary in its confessionalism. Nelson is frank about the sexual satisfaction she experiences with her lover. She revels in the sex between them, and on the one hand it is bracing to read a woman narrator disclose her pleasure like this under no veil of shame, under no societally constructed architecture of landing Mr. Right by the end of the movie, etc. There is simply the self-renewing mystery of her own desire. But as the confessions increase without any compensatory revelations about her or him or the texture or context of the relationship as a whole, aside from the sporadic but great sex, the confessions take on a different light. Again, it all starts to sound like a kind of bragging. Here is an example:

116. One of the last times you came to see me, you were wearing a pale blue button-down shirt, short-sleeved. I wore this for you, you said. We fucked for six hours straight that afternoon, which does not seem precisely possible but that is what the clock said. We killed the time. You were on your way to a seaside town, a town of much blue, where you would be spending a week with the other woman you were in love with, the woman you are with now. I’m in love with you both in completely different ways, you said. It seemed unwise to contemplate this statement any further.

14. By mentioning this aspect of the book, I do not mean to judge Nelson in some kind of moralistic, nanny-pants way. What I am interested in is the cumulative rhetorical effect. What’s more, if this were a piece of fiction we might interpret this one-sided portrait of a relationship as interestingly flawed; that is, we might be tempted to judge the narrator for thinking of the character in this way or to interpret her myopia. But since it’s purportedly an essay, it seems like Nelson is simply being willfully obtuse. Either Nelson shares this seemingly shallow appreciation of other people (doubtful), or the presentation of herself has gotten out of her authorial control.

15. Theory: it seems that confessional writing only really works when the author is fully willing to exploit herself and her friends and neighbors. Any kind of authorial reticence, out of politeness or fear, cripples the memoir, turns it into a narcissist’s cave where we’re watching wall projections. To make it a really riveting piece of writing, you still have to go outside the self; you have to transgress against all the other people to make them characters.

16. Phillip Lopate wrote in an essay a few years ago about how his students were basically turning their personal nonfiction pieces into a type of short story where they let versions of themselves pretend to be stupid in order to make the stories more “dramatic.” That is, they dumbed down their own intelligence to turn their essays into a kind of fiction, while still calling them essays. Essays, if they are to mean anything, are the full-flowing tide of one’s intellect, not strategically dammed for dramatic effect. The very premise of an essay, or a piece of personal nonfiction, is that one is not unnecessarily dramatizing what happened, but dealing with what actually happened, as close as possible, and when what actually happened is obscured for some reason, such as the identity of a lover, that the hiddenness be dealt with as frankly as possible.

17. A counterargument: All of this is on purpose. That is, Nelson is deliberately constructing this authorial persona for Bluets. A few months ago, Will Wilkinson wrote a really wonderful post** about the “implied author” in fiction and how this relates to an author’s persona in nonfiction as well, how nonfiction authors inevitably construct slightly exaggerated versions of themselves, and that we, as readers, should be totally familiar with this move; comedians do it all the time. Seinfeld in Seinfeld is not really Seinfeld. So I’m not trying to be a rube in misunderstanding Nelson’s moves. But in that case, I can’t figure out why Nelson would portray herself this way — why she would, in effect, make herself appear shallow. I realize that hand-in-hand with adopting a persona, nonfiction narrators often degrade themselves — becoming more obtuse, more curmudgeonly, more daft, more neurotic, etc. — as a way to humble themselves before the reader. And again: that’s fine. It reminds me in a way of Jonathan Franzen’s nonfiction; he often portrays himself as a smarty-pants jerk and then ironizes that depiction in a way that seemingly attempts to absolve himself of being a smarty-pants jerk or to make us see that he understands this about himself, but the whole procedure turns into a failure of irony and a failure of authorial control. Jerk will out. It’s akin to listening to a humorless person tell you joke after joke.

18. Re: the Jamison editorial: The tendency in confessional writing to turn into a kind of arms race of trauma. I think this is why we get fake Holocaust memoirs. It’s the ultimate trauma — world history’s transgressions made personal or “relatable.”

19. Whereas in fiction the reader is encouraged to empathize with the character in spite of their differences, in this kind of confessional nonfiction the trauma becomes a solder between author and reader, so that the exchange of readerly trust becomes a form of identity politics: I get you because I am like you. You “represent” me and my troubles. This is empathy as demography.

20. Can you empathize with confessional nonfiction writing if you can’t “relate” to the trauma? Does it all come down to how well everything is depicted in language? That is, to good writing? But tell me again what exactly is the definition of good writing?

21. I’m going to stop writing now before I am tempted to make a definitive point or discuss reality television, whichever may come first.

*I would also simultaneously like to recognize a counter-feeling of not really caring much what my own opinion is re: a book I read. In short, I’ve grown tired of liking books or disliking books. Or, if you prefer, “liking” them. My own opinions feel stale, not to be trusted.

**Seriously. Just go read it. It’s wonderfully well articulated and clarifying and much better than this paragraph-bingo.

Notes on blogging from a non-blogger

Recently, the New York Times decided to shutter a few of its “blogs,” with promises to wind down many more of them over the next year, the idea being that blogs as stand-alone journalistic entities at the paper had run their course. This sparked a brief discussion of the definition of blogs, one that had echoes of 2004.

I’ve been thinking about what blogs are and what they could be used for, while not actually doing any blogging myself, since about that time, so I was keenly interested. I was especially interested in what Dave Winer said on the subject, and in the fact that I disagreed with him. Now, Dave Winer knows more about blogging than I do (obv), but I thought: what could airing my ill-informed and underbaked disagreement with the “protoblogger” hurt? And why not express this disagreement via my never-updated blog?

Winer defines a blog as “the unedited voice of a person.” This defines blogs from an editorial standpoint; that is, a blog is writing that has not been edited, at least by anyone not the author. In this way a blog becomes the (almost) unmediated voice of the writer. (I say “almost” because all of this still must occur in language.) Winer says that what the Times had on its hands were not actually blogs because they by definition were too beholden to the institutional voice of the New York Times (excepting maybe Paul Krugman). Winer sees blogging as a reporter’s sources speaking directly (something which could and should then be culled and refined into a journalist’s own activity).

I always defined blogging from a technical perspective: blogging is a series of posts listed in reverse chronological order. What this really means is that blogging, or a blog, is a certain type of website, a certain format of publishing online (a format that might very well be past its point of usefulness). I can see right away that my own definition is slightly rickety, because a permalink for a blog post is itself its own individual web page, just like an article page from the Times or a post on Medium. A blog is really just a method for organizing individual web pages. Medium is the latest revision to this idea, functioning as a collection and, more importantly, a promotional device for a bunch of blog posts. Medium is a new form of blog that de-emphasizes the author and her chronology, and re-emphasizes the platform through which an author publishes her writing.* The platform is the consistent thread, whereas in an old-form blog (Winer’s conception of the blog) the consistent thread was the single author’s voice.
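The technical definition really is that small. As a toy sketch (the Post fields and the sample posts here are invented for illustration, not drawn from any actual blogging system): a blog’s front page is just its posts sorted newest-first, while each permalink remains its own standalone page.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Post:
    title: str
    published: date
    permalink: str  # each post is also its own individual web page

def blog_index(posts):
    """The whole 'technical' definition of a blog: posts, newest first."""
    return sorted(posts, key=lambda p: p.published, reverse=True)

posts = [
    Post("First post", date(2004, 1, 5), "/2004/01/first-post"),
    Post("On Medium", date(2014, 6, 2), "/2014/06/on-medium"),
    Post("Why I don't blog", date(2009, 3, 14), "/2009/03/why-i-dont-blog"),
]

for p in blog_index(posts):
    print(p.published.isoformat(), p.title)
```

Everything else — themes, comments, platforms — is layered on top of that one sorting decision.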

My main problem with Winer’s definition is that it stretches the definition of blogging out almost to the point of uselessness. If a blog is just my language unedited by any outside entity, then there’s no difference between what I’m writing here and a grocery list, if I put my grocery list online.

On yr way home, get:
— milk
— pickles
— the good ice cream

But what both definitions do is point to how blogging is tied up historically and technologically with the moment when publishing writing on the web by individuals with no special technical skill became a mass phenomenon. When, in essence, writing a blog post became as easy as sending an email. When that intersection occurred, “writing online” became almost synonymous with blogging, and perhaps it’s time to retire the term. I tend to feel that when there is energetic and inflamed debate about an endeavor’s vocabulary, there is something deeper afoot. Journalists are (I assume) loath to describe themselves merely as bloggers, simply because that invites professional class distinctions. Likewise, if I were to publish a short story on my blog, it would not “count” in the academic-literary-publishing sense, because it didn’t appear in an edited journal, either in print (still preferable!) or online. The joy and the problem of online publishing is that you can publish anything, but the context clues that we are familiar with vanish, and the accompanying shorthand indications of quality and/or prestige vanish at the same time. The ranting of a lunatic and your next favorite novel look the same in your feed reader or as a Facebook post, or however you get your “content” these days. That is, you have to read them to determine if the person’s a ranting lunatic. This is one reason academic peer review and scholarly journals exist; theoretically, it means that you don’t have to read a person’s scholarship to determine if they are a good scholar.

Add to this what Winer calls “the pressure on blogging,” the multitude of new and different formats for people to publish their unedited voices/grocery lists: Twitter, Facebook, Tumblr, etc. Blogging stopped being a meaningful term as that particular format stopped being the primary way people communicated on the internet.

So why stick with the reverse chronological scheme (with dated archive) at this point in time, when blogging, such as it is, seems in flux or meaningless or passé? Because it’s a reasonably understood structural convention of web publishing. And, at this point, it’s easy. A couple of years ago the writer Dan Baum issued a barrage of tweets narrating his short and fraught time as a staff writer for the New Yorker. In that fractured chain of tweets (I would use the phrase “tweet storm” but that just seems gooby), he said his editor John Bennet told him, “This is the New Yorker, so you can use any narrative structure you like. Just know that when I get it, I’m going to take it apart and make it all chronological.” Perhaps that radical simplicity, reversed, is why the blogging format might live on.

p.s. “Bloggy content with a conversational tone” is a phrase that appears in the Poynter article. I maintain that “conversational tone” in a blog or elsewhere is a ruse, a mirage of rhetoric. Anyone who reads for any amount of time (say, a week?) or who writes for an even shorter amount of time (say, a couple of hours?) realizes that “conversational” in written prose is an effect; that is, you create the effect of breeziness through diligent editing. Also, one writer’s calculated sloppiness or attempt at conversational prose is another person’s affected period slang. See The Catcher in the Rye.

One related mistake that Andrew Sullivan makes in his definition of blogging (most fully outlined in a long magazine article for the Atlantic, “Why I Blog”) is that blogging is a form of broadcast prose. (He calls blogging “the spontaneous expression of instant thought” and “writing out loud.”) I think this is fundamentally mistaken. Writing online, because of its nearly instantaneous ease (I can tweet from my phone faster than I can do almost anything else in the world, an amazing accomplishment) and because of its almost instantaneous potential response from an audience, feels like talking. But while the technology makes it feel like talking, it is always still writing, with all of the inconveniences of writing. The primary inconvenience of writing (also its fundamental blessing) is that it exists independent of my presence during your reception. You read this writing whenever you want, at whatever pace you want, and without my somatic gestures to indicate how you should interpret various sentences. You read it in the isolation of your own consciousness. So if blogging feels like a form of talking as you produce and publish it, you’re still stuck reading it at the other end of the screen.

*From an editorial perspective, my main criticism of Medium’s content can be derived from the percentage of its articles that begin with the word “How.” All of the contributors’ rhetorical excesses spring from this word — the author assuming the role of expert to talk down to the (presumed) idiot.

And now this

I’ve written another short essay and posted it to Medium. It’s called “Coincidental Religion” and it’s about the JFK assassination, Don DeLillo’s novel Libra, the Boston Marathon bombing, and Twitter.

Here are three bits of film I discovered after I had finished the essay: two short films by Errol Morris about different aspects of the JFK assassination — The Umbrella Man and November 22, 1963; and a BBC documentary on DeLillo from the early 90s, which is endearingly hokey.

Why did I post it over on Medium and not here? Honestly, I don’t know. I’m still trying to work out my own logic.

My Life as a Mannequin

Dear friendly people of the Internet,

Are we still capitalizing “Internet”? I refuse to hyphenate “email” and feel increasingly gooby capitalizing “Web.” Surely all linguistic acceptance leads toward lowercase.

Anyway, I have a new essay out in the world. It’s called “My Life as a Mannequin” and it’s about Philip Roth, getting lost, Washington, D.C., good bookstores, and more Philip Roth. It’s in the latest issue of Open Letters Monthly.

I originally read part of this essay at the Roth@80 Conference in Newark, NJ, this past spring, an event that was put on by the Philip Roth Society in honor of Roth’s 80th birthday. You can read more about the extravaganza in this New York Times article.

Do I feel smug linking to a New York Times article? Yes, I do.

Anyway, this essay wouldn’t have made it out of the gate if it weren’t for Roth scholar extraordinaire and friend and all-around badass David Gooblar. If you want to know more about Roth, you should read his book, The Major Phases of Philip Roth.

p.s. For the extra diligent, here is David Remnick’s recap in the New Yorker of the same Roth event.

p.p.s. The essay in Open Letters features perhaps my second favorite Roth photo of all time. My first favorite is the Hot Dog Photo, which I can’t find in my preliminary internet searching (little “i”), but which was definitely included in the recent photo exhibit in the Newark Public Library.

A re-design is afoot

Sometimes I endeavor to explore the magnitude of my own ignorance. Hence, I have started to fiddle with the look and feel of this website. How should one tweak the most obscure corner of the Internet?

The previous theme for this site was a riff on the Marber Grid, a graphic design schema used in the old Penguin paperbacks, and I admired its thorough bookishness. But this current and most likely temporary theme is “responsive,” in the lingo of the day, which means it will fluff your pillow and serve you tea no matter what kind of screened device you use to visit A Public Address System.

We’ll see how it goes. Progress will no doubt be almost invisible. Which is all for the best, because I think I pulled a muscle in this last iteration.