Just the other day, I was bellyaching about wanting a book by Mark Greif, one of the founding editors of the literary magazine n+1. He was, as far as I could tell, the only remaining founding editor who hadn’t published a book, and since he was always my preferred Beatle, I wanted to read his book most of all.

And then today some random internet gardening revealed that Greif has a book coming out this very month! It’s called The Age of the Crisis of Man: Thought and Fiction in America, 1933-1973. I made this discovery while skimming this essay by Leon Wieseltier, whose name is quite difficult to spell, which I found via Nick Carr’s blog Rough Type.

So: mad props to Mark Greif, and mad props to myself for having book dreams come true.

Notes on Notes

I was in my first job after grad school when I discovered Gawker, which in turn helped me discover n+1. The two publications have always seemed like each other’s evil twin. I mean this as a compliment.

I went on to subscribe to n+1 and have been happily almost continuously subscribing ever since. I flaked at one point. (Come on: treat yourself.)

So I am extremely happy that my annotation of the last issue has been picked up and condensed into a letter to the editor in the newest issue, “Throwback.” There’s a lot more interesting stuff to read in that issue, of course, than my handful of paragraphs, but still: it’s nice to be there. Happy winter solstice festival of your choosing!

Now: where is the Mark Greif book I’ve always wanted?

Review of ‘Loitering’

I’ve got a review of the new Charles D’Ambrosio essay collection Loitering in today’s new issue of the Quarterly Conversation. It’s been a while since I’ve had a review in QC and it feels great to be back.

Are we living through some kind of surprise golden age of the personal essay, or at least the book-length collection of personal essays? Beats me, but after reading books like D’Ambrosio’s, it sure feels like it.

Kitsch Revelation, Pt. 1

Note: This is the first in a series of posts on kitsch. (Hopefully.)

The man and woman appear on stage like a couple cast out of time. She wears a dress — call it a prairie dress — with cowboy boots, a look so incongruous it must be deliberate. The man wears a grey suit without tie. The top and bottoms match but they look well rumpled, thoroughly slept in. They carry guitar cases and set them down at the rear of the stage. There’s a table on which they place a small, narrow case. It looks like a little suitcase, except tall, like it might contain a dollhouse or a couple bottles of wine. It has rounded corners and a handle on top. They unlatch the front of the case, which swings open door-like, and inside are four shelves. It’s a miniature cabinet of curiosities. From my position in the crowd I can’t see what’s in those little shelves though I’m terribly curious. These two people have walked out on stage and promptly turned their backs to us, to tune up and explore their tiny wardrobe.

For early May in Alabama, it’s strangely cold. No one is properly dressed for this festival. It’s only in the 40s, but we’re all in flip-flops and jeans, T-shirts and mini-skirts. The shivering crowd is ready for some serious entertainment.

Eventually they turn around and make preliminary sidelong glances to one another. She plays a big-bellied sunburst Gibson acoustic. He plays a much smaller archtop, a nameless, historically vague instrument, like something retrieved from a junk store. These festivals always occur in May, just before the pestilence of summer. But today it might as well be February. We add to their strumming the percussion of our chattering teeth.

I don’t remember the first song they play. Let’s say it’s “Elvis Presley Blues,” the mid-tempo, drunk-a-loping ballad from their (at that point) latest album Time (the Revelator). The song begins, “I was thinking that night about Elvis, the day that he died . . .” and the singer’s voice — her name is Gillian Welch — breaks like it’s sliding off key, but as she gets the motor of the song running and reaches the lines “he shook it like a chorus girl, he shook it like a Harlem queen,” her companion — his name is David Rawlings — slides under her with his sympathetic harmony, and the song hits this weirdly soothing glassine lilt. I am unprepared. I’ve stopped paying attention to my girlfriend, to whom I’ve promised a sweatshirt. I’ve forgotten even that I’m cold.


I had first heard about them from my girlfriend, who in her first year out of college had become a sponge for new music. It was a fortuitous situation because my acquisition of new music had completely atrophied. I had made it to Aimee Mann, I had made it to Radiohead, but then I had surrendered. My tastes had been set, like an aesthetic Jello mold. But my girlfriend was then like a miniature A&R person. That year she was in a valley of roots music — mandolins and crooning backwoods harmony, delicate finger picking and strange, moonshiny lyrics. Bands she brought in included Welch, Hem, Alison Krauss, Nanci Griffith, Nickel Creek, Wilco (whom she hated), Emmylou Harris. . . She was a bit evangelical as well, buying CDs for everyone for Christmas. I remember when she gave the Welch CD to her parents and her mother called it mournful, which I thought was accurate. The music seemed bleak and scrabbled, dustbowl sad and not, you know, sad in a deep way. Like Radiohead was.

But seeing them in concert was entirely different. There was something in the simplicity of Welch and Rawlings standing alone singing and playing, the utter starkness of it, the utter lack of equipment and gear that was pleasant and challenging. As the son of a musician, a drummer no less, I’ve always been obsessed with how much material stuff it takes to create the simplest pop songs, not just in the studio but also when replicated live. And here were two people with no more gear than what many people have in their closets. They had an intentional primitivism that excited me.

But I had been down this road before. I had seen many a show where the act was interesting live, invigorating, and then I bought their CD — this was, yes, during a time when one still bought CDs — and over the next week or two was slowly educated about my misspent enthusiasm. What happened under the veil of performative darkness was all well and good, transfixing in its seeming excellence, but in the cold harsh light of the morning’s car stereo, the songs didn’t sound nearly as alluring.

But what happened was that when I went back to Time (the Revelator), it only got better. Perhaps it’s because the record is so close to their live performance — still just two guitars and two voices. But this still doesn’t make total sense, since an audience always overcompensates. So many concerts can reach liftoff merely from the cushion of an audience’s sympathy. But this record rewarded obsessive re-listenings. Like Radiohead did.

I was living in Tuscaloosa, Alabama, fifty miles away from Birmingham, and I would commute over to see my girlfriend several times a week. I don’t think at any other point in my life had I spent as much time alone in the car. I would listen again and again to Time. And Time incidentally stopped time in that car, turned it into a moving box of air that Welch and Rawlings strummed. And yet listening to the record was like going back in time. I wasn’t sure what the lyrics referred to but it was odd, non-pop song territory: Okies and Casey Jones and John Henry.


In the decade-plus since then, the album has only gotten better, historically deeper. It’s one of those albums that comes with hidden hooks that tug at you at least once a year, reeling you back to deal with it again. And I realized that Time came out at the end of July 2001, just before 9/11 happened, and in one of those historical shiverings of coincidence and premonition, the album now seems to be a comment upon 9/11.

First, the album seems obsessed with American myths: in the course of its songs Welch mentions Elvis, the Titanic, the Lincoln assassination, rock and roll, the “road” as mythic and existential stage, etc. The repeated allusions to various American disasters make one think that the album in particular, and American history more broadly, is just one long string of intermittent catastrophe, a graph of history plotted along points of public violence. So that even though the album of course couldn’t possibly “mention” 9/11, the almost simultaneous occurrence of the two events makes room for the album to be about 9/11 without ever saying it. 9/11 is just another dark dot on the slowly escalating graph of national carnage and despair.

At the same time, the album is soaked in Americana. Though it came out just before 9/11, the record received its primary promotional push from the Coen brothers movie O Brother, Where Art Thou? Welch was one of the neo-Appalachian singers who contributed to its soundtrack, and it was the popularity of the soundtrack as much as the movie that helped spread the word about Welch. The old master of American folk music on the soundtrack is Ralph Stanley, the bluegrass singer, who sings an a cappella version of “O Death,” and Stanley is himself like the grim reaper. Emmylou Harris, Alison Krauss, and Welch sing “I’ll Fly Away” together like country music’s new muses. They will resurrect the tradition.

But Welch and Rawlings also benefited from a more general resurgence of the traditional white male, both in music and in culture, which occurred in the wake of 9/11. President George W. Bush’s west Texas disposition, the mustachioed firefighters of NYC, New York City Mayor Rudy Giuliani, the phrase and sentiment behind “let’s roll,” the general post-catastrophe grip of fear and paranoia that often manifested itself in an old-fashioned jingoist racism — all of this was the historical context for the renaissance of rootsy American music, which is, it must be said, the music of old white people, a pre-rock-and-roll music, prior to the mass cultural miscegenation represented by rock and roll. It’s a more purely innocent, more purely rural type of music. I’m not saying that Welch and Rawlings in any way tried to co-opt this sentiment, merely that this was the sentiment in the air when they hit public consciousness in a larger way. This brief stretch of “roots music” could possibly be seen as the last truly popular efflorescence of white Americana, a ballad with which to end the Empire.

Which is, of course, a lot to lay on an acoustic duo. I’m also not saying that I thought all of this at the time, while I was shivering out in the darkness watching them work their magic. It’s only in hindsight, with the accumulated grime of time, that this group and this album in particular seem so emblematic.


About two thirds of the way through their set, they acknowledge the cold. They blow on their hands in between songs and grin at one another. Rawlings cinches his coat tighter and Welch buttons up a jean jacket. They apologize for not playing more ballads, say they’re trying to stick to the fast ones to keep the blood going.

At one point, they return to their strange little suitcase. They open the drawers and rustle around inside. I still can’t see what the drawers contain. Then they return to the microphones, Welch holding a harmonica and one of those headgear-like contraptions that allows you to play the harmonica while still playing the guitar. So, it’s their small bag of tricks, their minimal haul of accoutrements. They suddenly seem to me like traveling salesmen, but like time-traveling salesmen, a musical American Gothic, salesmen who come back through time not to sell you goods, or even their music, but to sell you time itself, to sell you an idea of time, to reveal an idea of the past, which isn’t a true duplication of a past time but its bent reflection, a warped convex mirror version of the past, bent by their own two hands, own two voices.

Notes on the new DFW reader

Last week I noticed via Kottke that the publisher Little, Brown has just published a David Foster Wallace Reader. This makes me happy, as I’ve thought that since his death the two Wallace books that “needed” to exist were a) a collection of his nonfiction and interviews and b) a reader, so that he would be more easily teachable in college courses. This last idea came from my own teaching days, when I occasionally longed to include a kind of DFW Swiss Army knife on a syllabus.

Now that it’s out there and I can inspect the table of contents, I, of course, have opinions. Harumph, harumph. Why have they decided to include syllabi and teaching materials from the college courses Wallace himself taught? I’m of two minds about this. On the one hand, I’ve often thought that one of the aspects that made Wallace so interesting and compelling a writer was that he was always himself, his sensibility burned through whatever genre he was working in, so that his fiction and nonfiction and even his interviews had the same grain of energy. And yes, even his syllabi. The ones that I’ve read online have the same level of wit and attention to language. It sounds weird — it sounds kind of creepy, given his cultish status — but the syllabi are interesting in and of themselves as written artifacts.

As pieces of art? I dunno. That takes more interpretive energy than I’m willing to muster currently. But I’m not sure they really belong in a reader. It feels a little funny to re-contextualize them this way, folded into a kind of Whitman sampler of the Great Man’s Work. Reading his syllabi and class correspondence feels like it should be the next level of Wallace interest, something for intrigued autodidacts to seek out. Like the re-publishing of his graduation speech in book form, the class materials seem to sanctify the person, to burnish the icon. Though without the book in front of me and not having read the syllabi in question, I am merely blowing out thought bubbles here. I’m just being opinionated.

Which reminds me: is it possible to write online without falling back on opinion bubbles? The writer Paul Ford has said that the engine of all internet activity comes down to the self-righteous question “why wasn’t I consulted?” And you don’t have to do much exploring to see how online writing has degenerated into a series of “takes” on the subjects of the day. So how does one (or perhaps more accurately, why does one) write on the internet without devolving into an editorial writer hepped up on speed? Because quickly formed opinions on complex matters do not typically lead to graceful prose, or even just interesting prose, much less well-built ideas. They lead to a kind of performative morality, a kind of keyword call and response, rather than actual debate or searing sentence construction.

Reading and writing online, I’ve started to realize how tired I am of everyone’s opinions on everything. And I don’t exclude my own opinions from this. My own little thought bubbles are tired, slightly shriveled — like grocery balloons four days after the party, huddling in the corner of the dining room. They only float when kicked.

And yet here I am contributing to the very problem by having and now articulating my own personal bubble re: the arrival of this new collection of Wallace’s writing. I often wonder: if Buddhism is based in part on removing desire from oneself, then might a corresponding Buddhistic internet mode be something like removing one’s opinions from oneself? What if having an opinion were basically a manifestation of desire, a desire to be consulted on a topic, and internet writing the rage made manifest of that thwarted desire? What if one could write on the internet without recourse to expressing an opinion about everything?

I’m not sure I’m strong enough.

Time of your life

Why “minutes to read” is wrong

I originally wrote this over on Medium a while back, but for reasons too boring and idiosyncratic to go into, I wanted to post it here as well.

So I downloaded iOS 7 a few days ago, and I noticed that the icon on the home screen for the clock actually tells time. That is, you can see the excited red second hand busily crop-circling its way around the clock. I don’t think this icon has ever moved like this before, though I could be mistaken. I remember that when I discovered that the calendar app actually indicated the day you were currently living through, I thought it was a remarkably useful mirroring of the “real world,” as it’s commonly referred to.

But that also got me thinking about reading and timing and how many of the current “reading platforms,” for lack of a better phrase, indicate the amount of time it will take to read a piece of writing. So far I think this happens here on Medium (hullo, you strange publication-platform centaur!), the new “Netflix-for-books” app Oyster, and Readability. Time to read has also become a sorting feature for the latest update to Instapaper, though I’m not sure if it gives you an actual minute estimate. My point is that “minutes to read,” as a functionality, seems to be slowly growing as a standard, and I think this is wrong.
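For what it’s worth, the number itself is trivial arithmetic. As far as I can tell (this is my guess, not any platform’s documented method), you count the words and divide by an assumed average reading speed, something like this:

```python
# Hypothetical sketch of how a "minutes to read" badge might be computed.
# The 250-wpm figure is an assumed average, not any platform's documented value.
AVERAGE_WPM = 250

def minutes_to_read(text: str, wpm: int = AVERAGE_WPM) -> int:
    """Round a naive word-count estimate up to whole minutes."""
    words = len(text.split())
    return max(1, -(-words // wpm))  # ceiling division, minimum of 1 minute

article = "word " * 1000  # stand-in for a 1,000-word article
print(minutes_to_read(article))  # 1000 words / 250 wpm = 4 minutes
```

Ceiling division against a spectral average reader: that is the entire technology.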

Why? First, there is my admittedly liberal arts-y objection to timing reading: that’s not what reading is supposed to be about. Reading is about escaping the clock or stopping time, not racing to beat some spectral average. And while I admit that this is a bit of a rarefied concern — one could simply ignore the “minutes to read” information — I do think it puts the emphasis on the wrong metric. Reading should be about the pleasure of reading. It’s not a baking recipe. If you are enamored or perplexed by a certain paragraph, then you should take as long as you need to stare at that paragraph. It’s like when I was in high school and was delighted to see that I was only supposed to read three poems for homework that night. Done in six minutes. Of course, 20 years later I still haven’t figured out what those poems were about; their sedimentary knowledge has outlasted the changeable weather of my attention.

This points to a larger concern with how reading is often discussed: as a way to learn empathy, as a preventative against the flood of age-related mental deterioration, as the best way to be informed about the world, etc. Reading, of course, can be all those things but what gets lost is how the act of reading itself is what is primarily pleasurable. Sure, there are all these sub-benefits, but the primary joy lies in decoding these strings of letters and sometimes feeling the sound of the voice behind them. So often reading, as a human activity, suffers from the nutritional grid we place upon it — subdivisions of one’s daily allowance of information.

I admit that part of my opposition comes from my own slowness as a reader. I am horribly slow; I think my average is about 30 pages of prose an hour. I haven’t timed myself in a long while (and god forbid I actually do so again just to provide data for this little essay) and I’m wary of sharing this information in public. I have friends who can read circles around me, chewing through books like self-powered lawn mowers; all they have to do is hold on. I wish I could say that my pace was part of a deliberate Slow Food-like campaign to savor what’s really worth enjoying in life, but the truth is that I am merely slow. Think fast! The ball’s already hit me in the chest. So when I see “minutes to read” near my little “read later” article, I feel like an intellectual jock is glaring at me, laughing.

However, despite my own read-speed insecurities, and despite the fact that as a reading duration metric, it’s simply inaccurate, I do realize that the “minutes” issue is a tangible attempt to deal with a usability problem. (Whenever I hear the word “usability,” I can’t help but think of drug paraphernalia.) As our reading moves device-ward, as column inches and finger-feel fall away as the constant way to gauge how much one has left in a book or magazine, we need a different method to measure the length of a piece of writing. I will be the first to admit that I’m not smart enough to figure this solution out. Part of me likes how the scrolling bar in some browsers shrinks proportionally to indicate how much scrolling one has left, but this also becomes quickly meaningless if it’s a seriously long document.

I feel as if there is some icon out there waiting to be developed that will solve this problem. Relatedly, who thought up the “hamburger icon” and does she or he get to retire now and receive regular royalties for making our lives infinitesimally but still perceptibly easier? Can she, or whoever, collectively or individually, solve this abstract sizing problem, graphically render it in a condensed form that makes thinking easier by making it almost unnecessary?

My favorite solution to this problem thus far is the Kindle for iPhone app, which shows both a horizontal “completion bar” (I’m sure this has some technical name) as well as a percentage number for how much you’ve read. Now I actually like the percentage number. It turns your reading activity into data, to be sure, but it’s more useful, more applicable. It shows how much of the word pie you’ve eaten, not how late you are to the party that is the next article in your list of homework. Of course, even this number is usually inaccurate, since the end-of-book information, like an index, is included in this percentage, but that part of the book doesn’t count as the “finish line” to my hyperventilating mind. Let me know when I can truly, honestly stop reading.
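The inflation is easy to see in miniature. If the percentage is simply your position divided by the total length of the file, then an index or a notes section pads the denominator, and the real finish line sits below 100%. A toy illustration, with entirely made-up numbers:

```python
# Toy illustration of how back matter skews a percent-read figure.
# All numbers are invented for the example.
total_locations = 12000      # whole file, including index and notes
body_locations = 10800       # where the actual text ends
position = 10800             # reader has just finished the last page of prose

naive_pct = 100 * position / total_locations
honest_pct = 100 * position / body_locations

print(round(naive_pct))   # 90 -- the app says you still have 10% to go
print(round(honest_pct))  # 100 -- but the book, as a book, is done
```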

And perhaps that’s the key to this reading anxiety. I wish I could say, when considering my own reading habits, that I am never concerned with how long it takes to read something. But in fact I am obsessed with the length of what I’m reading. I’ll often skip ahead in a magazine just to see how much longer I have to endure this round of pleasure. What I need then is a way to soothe the reading-time anxiety in a way that makes me feel better about the anxiety’s existence and doesn’t actively stoke it.

Surely, that’s not too much to ask?

‘Underworld’ on the iPhone

Underworld is a novel by Don DeLillo that is 827 pages long. I have a very nice remaindered first edition that I purchased several years ago. On a whim last winter, I decided to read it. I felt — this will probably tell you more about me than I want you to know — that I was finally “ready” to read Underworld, that I had read enough other DeLillo to be able to absorb it. And so I dove in, but I quickly decided to download an electronic copy so that I wouldn’t have to tote around the two-hander hardback. I downloaded a Kindle version, which conveniently appeared on my phone and on my Kindle. I thought being able to have the novel with me at all times would increase my odds of finishing it. And then, out of a fit of perversity more than anything, I decided to see if I could read the entire beast just on my phone.

And I did it, though I should immediately confess that I cheated a little. I read most of the first novella-length chapter and a couple of bits in the middle third from the hardback. And I read the last 20 percent of the book on the Kindle, as I was on a trip and didn’t want to use up my phone’s battery, about whose level of fullness I am in a constant state of anxiety. So, 827 “pages,” three versions, three sets of marginalia — a half-adventuresome, half-grumpy sack-race into the future.

After several hundred thumb-flips on my iPhone, the palatial spread of the actual hard copy was resplendent. The book made more sense as a structured object when I read the hardback. I also had a better sense of where I was in the book and how much terrain I still had left to cover. Perhaps this point is obvious. I am normally highly concerned with the number of pages left when I read a story or a novel. I am not sure how to account for this anxiety. One almost begins to question whether I like reading at all if I’m always concerned with how much of it I have left. But this anxiety was amplified by reading the novel on my phone, and it’s not because the phone doesn’t tell you where you are in the text. In fact, it has multiple, frequent, and nefarious expressions of your progress, which might explain my disposition.

When you first open your book on the iPhone, stuck still somewhere in the middle, the information that appears at the bottom of the screen for a brief moment is something like “697 of 827 pages left,” or sometimes the percentage read, as well as the “position,” which reads something like “Loc 2729 of 12607.” This position is important but confusing, the non-page-number-like page number that the Kindle software uses to determine location in the absence of real page numbers. As we move inexorably toward more e-books, or e-books as the first step in publishing book-length bodies of text, it becomes increasingly important to have some other locator aside from page number. It all depends on what edition comes first and whether there is some concrete analog referent out there in the world.1 In fact, as formats proliferate, one could see the need for some standard type of locator data. I realize that you can search words within the book, but it would be nice to be able to pinpoint the — well — position you’re at.

Also, the phone displays a progress bar that shows where you are in the book. But after this brief blip of locative information, the only remaining pieces of logistical info are the page number information on the bottom left and the percentage info on the bottom right. At some point in the many moons it took me to finish Underworld, that bit of page number info changed into time info.2 For instance, “9 hrs and 8 mins left in book,” which on the one hand is nice info to have (I can almost plan my weekend around it), but the problem I soon found was that it seemed to be terribly inaccurate, and no matter how fast or slow I read, I couldn’t seem to affect my personal reading-speed prediction. Occasionally it would chip away at the time, but I couldn’t tell if it was improving because I had a particularly successful reading lunch hour or because I hadn’t touched the book in a week. And also, it just made me more self-conscious about how slow I read, and I kept trying to impress the little machine with an improved flip-rate. As is perhaps obvious, this is a ridiculously stupid way to go about reading a book.
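My hunch (and it is only a hunch; Amazon doesn’t document the method) is that the device keeps a slow-moving average of your seconds per page turn and multiplies by the pages remaining, which would explain both the sluggishness and why one brisk lunch hour barely moves the number. A sketch of that kind of estimator, with an assumed smoothing factor:

```python
# Hypothetical "time left in book" estimator: an exponential moving average
# of seconds per page turn, multiplied by pages remaining. The smoothing
# factor (alpha) is an assumption; the Kindle's actual method is undocumented.
class TimeLeftEstimator:
    def __init__(self, alpha: float = 0.05):
        self.alpha = alpha
        self.avg_seconds_per_page = None

    def record_turn(self, seconds: float) -> None:
        """Fold one observed page turn into the running average."""
        if self.avg_seconds_per_page is None:
            self.avg_seconds_per_page = seconds
        else:
            # A small alpha means each new page turn barely nudges the
            # average, so a single fast session hardly registers.
            self.avg_seconds_per_page = (
                self.alpha * seconds
                + (1 - self.alpha) * self.avg_seconds_per_page
            )

    def minutes_left(self, pages_remaining: int) -> float:
        return pages_remaining * self.avg_seconds_per_page / 60
```

With an alpha that small, even twenty uncharacteristically quick pages move the average only slightly, which would square with my failure to impress the little machine.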

This same “time left in book” trope appeared on my Kindle when I began reading the final section on the airplane. But being as I was trapped aboard a modern aircraft and was relieved of the constant pressure to read email, send email, read tweets, read articles that are found within tweets, destructively compare my daily activities to my “friends” on Facebook, or otherwise look up stupid crap, I read on the Kindle with increasing pleasure. I read on a second generation Kindle Paperwhite. This is my second Kindle, and what I like about reading on these devices is just how incredibly stupid they are. All you can do is read, and if you drop it, 9 times out of 10, the device is fine, and on that 10th time, you can replace it cheaply and all of your stuff appears back on it.3 I have to say I prefer my old button-keyboarded Kindle to the newer, fancier touch screens, mainly because highlighting and note taking are now more difficult. As I have read more on the Kindle, I have slowly given up typing notes unless I am extremely provoked. It is just too clumsy. Likewise the highlighting feature is such crap that I often just highlight the general area of interest rather than the exact words I want. I would lament this more, but these note-taking gestures are really just ways to commemorate my own enthusiasm more than anything else. Though I do, I must narcissistically admit, enjoy going back to old paper books and seeing what I was provoked to highlight. I haven’t used the Kindle enough to see if I will ever go back to enjoy my digital traces.

Strangely, as a pure reading/highlighting/note taking experience, the Kindle app on the iPhone is much better. Perhaps I am just more used to typing with my thumbs on this device. Of course, after a while one grows tired of constantly flicking to a new page on the phone. The screen is just too small, the paragraphs too scrunched. One begins to daydream of the palatial white beaches of your everyday trade paperback. (I realize I could solve this problem by buying one of Apple’s new little kneeboards, but I have my own personal planned obsolescence geared around when I drop my phone, and I’m still waiting for that to happen again.)

Whenever emphatically pro–e-book people crow about carrying around a library in their pocket, I think about Tom Petty, who owns some unknown multitude of beautiful vintage guitars. He was showing off his glory room in some television segment, and he said, self-mockingly, “Of course, you can only play one of them at a time.” You can carry around a library in your device, but why would you? I mean, after about 10 or so titles, what’s the point? A book is not a mixed tape. And if you want to read lightly curated brief sections of text, why wouldn’t you just go online, where that is the default mode?

All of my middle-aged griping aside, it was awfully nice to be able to switch devices as mood or circumstance arose. And I surprised myself by not really having any trouble switching between the three versions. The syncing to the “last place read” between the electronic devices worked well, and except for that evil little timer I could easily jump ahead to a given page number if I’d snuck off for some old-fashioned hard copy action.

But didn’t the meaning of the book change? This is the question I’ve been chewing over. What I comprehended and didn’t comprehend reading Underworld is mostly not tied to which device I read it on. At least, I don’t think so. My misunderstandings are for the most part due to the slowness and fracturedness of my reading — infrequent sections split over months. I did have trouble remembering some characters’ names, and whereas if I was only reading it on paper, I would just flip to an earlier section of the book to check myself, here I just plowed ahead and eventually figured it out. This doesn’t really make sense, because you can jump easily between sections or just search for a name, but this slight speed bump, this just barely foreign process, kept me from doing it. I would have understood more if I hadn’t been so lazy, but this goes for more than simply reading a fat novel on a thin computer. Turns out I need to practice reading on an electronic device.

But all these devices did confirm just how nice it is to read a paper book. Perhaps this is the part of the post where I will just become old fashioned and sentimental. Aside from the nice physical properties, a book’s self-sustaining independence is comforting in and of itself, and marches in opposition to the networked world. Heck, a good book marches in opposition to the physical world, too. Do not disturb is the caption under every person absorbed with a book. (I suppose the caption for a person absorbed with their device would be, Hold, please.) Despite the appearance of some real life historical figures, the world of Underworld is its own phantasm. It’s an analogous existence. And the virtual reality of any fictional world is complemented by the disconnected nature of the physical book. The physical restriction blossoms into an epistemological freedom. You can’t look up the definitions of the words in the online dictionary, not because the physical book is disconnected from the network, but because the book is its own network; any good work of fiction provides its own definitions. To go outside of it, you necessarily break the spell. Just by opening the cover of a book, you shut the door on so much else.

And here’s the part of the post where I grow increasingly prescriptive: If the novel is to remain relevant, or to function as its own distinct narrative species in our new networked reading life, it has to become the island, blissful in its own self-sustaining ecosystem, within the rising sea-level of text.

1. And does anyone think that we’re not moving inexorably in this direction? Now that our short texts have moved toward HTML and the web, it surely feels like it’s just a matter of time before our longer chunks of text, the organization of text formerly known as books, move toward an e-form of distribution. This doesn’t mean that some books won’t always be print books first, or won’t be more themselves as print books, and this doesn’t mean that some books won’t evolve into print books. Print still seems the natural and certainly the most stable archival mechanism. And even that’s getting easier: I’m now capable of designing and printing a paper book that will last 1,000 years (if I put it on a shelf and don’t eat lunch over it), and I can barely design a business card. But in saying this (that e-books might become the primary distribution mechanism for books), I don’t think the Kindle is the end-all of electronic book distribution. It is simply the first instance of mass success. I think — and here I am predicting, which I am horrible at doing — that the e-reader, as a distinct device (can we please come up with a more elegant name for this?), will continue only as a niche product, that our main reading devices will be some small portable computer formerly known as your cell phone, and that other platforms will develop to distribute e-books, be they free or paid, or some combination thereof. Some books will strive for the prestige of print because that particular audience (poetry, for example) craves print and feels that print and print alone substantiates its existence. But it seems that the human population as a mass moves toward lower fidelity and increased efficiency, and it seems foolish to ignore the gigantic convenience of e-books. This all might be screamingly obvious, but I find it useful to write it down, if just for myself.

2. My guess is this was a software update. I have previously written about the “minutes to read” phenomenon here.

3. I’ll save addressing Amazon as the publishing world’s chief innovator/bully for a future post.

D.G. Myers, RIP

I did not know D.G. Myers personally, and except for a couple of Twitter exchanges, I never communicated with him directly. I knew him only via his writing, which I read with steady attention for approximately the past six years. I can’t remember now what link pushed me in his direction, but after reading just a little bit of his literary criticism, I had the singular question that so much good writing throws off: who does this guy think he is?

I was in my first year as a “visiting writer,” teaching various creative writing courses to undergraduates, when I found his A Commonplace Blog, and I was immediately taken — his seemingly encyclopedic knowledge of the novel, his generosity toward various writers I knew nothing about, his hostility toward political correctness and fashion, his sense of literary standards in a standard-less world. One of his ideas in particular has become lodged within my own life so much that I quote it to myself almost weekly. Here is the long version:

Literature is just the writing that arouses the impulse to preserve it and pass it on. (I call that the “canonical impulse.” Canons are inseparable from literature. To call something literature is to start a canon.) “When an inability to stay interested in Sappho lasted longer than the parchment she was copied on,” Hugh Kenner says, “the poems of Sappho were lost.” There are many reasons to keep something from being lost, however.

These many reasons cannot be contained by a list of genres, no matter how long it is extended; nor by distinguishing fiction from non-fiction (because there are whole literatures, of which Jewish literature is only one, to which this distinction is an utter stranger); nor by “privileged criteria” like sublimity or irony or artistry or “stylistic range” or “bravura performance” or anything else that can be humanly imagined (because exceptions to the rule will immediately suggest themselves).

Literature is simply good writing — where “good” has, by definition, no fixed definition.

I often want to emblazon that last line in my office — perhaps scrawled onto the surface of my desk with a knife. What it did when I first read it, and what it does now, is relieve me from the narcissism of minor differences that so much contemporary American literature finds itself embroiled in. Is it realism or magical realism? Is this novel thoroughly postmodern enough? Is this “experimental”? Are fairy tales de facto bad and non-adult? Does this novel contain just the right amount of autobiographical confessionalism? Does this novel attempt to contain all of contemporary American culture? Is this novel new and different according to these obscure criteria?

Furthermore, Myers’s definition of literature forces me to come up with my own definition of what “good” is — articulate it, defend it, proclaim it, try to manufacture it myself.

I resolved to read Myers’s book, The Elephants Teach: Creative Writing Since 1880, during my first summer break from teaching. It was a revelation. It made what I was doing — pretending to be a Writer, so that I could fund my own attempt to write — historically coherent within the broader institution of American higher ed. I had come to the book with a short, convenient notion of creative writing’s history: that it began after WWII with the GI Bill, in order to deal with the influx of students, some of whom wanted to be poets and novelists, etc.

Myers wrote that there was an increase in creative writing as a consequence of the GI Bill but that the pedagogy had begun much earlier, at the beginning of the century at Harvard, and was a manifestation of the broader impulses of progressive education: the idea that every student had something to express and that part of education was providing the means and the context to express it. The book also taught me that poetry and fiction, sequestered at the high-art end of the hall, were above neither freshman composition nor literary scholarship. (I heard one senior professor refer to comp once as the “gutter of the profession.”) Freshman comp was the moat you had to swim through to get to the castle of courses that “counted toward the major,” and I had finally made that transition, or so I thought. But Myers showed that in the beginning the courses came out of the same philosophical impulse, and that the subsequent battles were over turf and prestige, and that I should be much less cavalier in my pose of artistic importance. All of us teaching creative writing were merely teaching comp’s kin.

Myers didn’t take away my gargantuan level of self-satisfaction at being a visiting writer, but he did build a lot of context under my feet, and he made me a better teacher. I began to tell every student who approached me about going to graduate school to read Myers’s book. It’s one of those brief, historically stuffed books that makes sense of an entire cultural phenomenon and relieves the amnesiac MFA vs. NYC debates of most of their self-puffed importance.

If he had only written that one book, I would have reason enough to be grateful toward Myers, but I had the regular appearance of his prose to contend with as well. Lord knows I didn’t agree with all of his literary judgments (no patience for or inclination toward DFW), or agree with his politics (extremely conservative), or agree with his religious beliefs (Orthodox Judaism), and at times he was just being cranky (which of course I am never), but the cumulative effect of reading his prose over several years was unambiguously inspiring. I began to read him the way I have come to read the essays of Cynthia Ozick — as a balm and a provocation. When I am feeling down, either about the literature I’m reading or the literature I am trying to write, I go to Ozick and now Myers to be reminded why I’m doing what I’m doing, and to see an eloquent encounter with literature in action.

Not only was Myers’s writing motivational and provocative in its discrete installments, he was also a model of how one might write today. As a professor who stood in opposition to almost all of the directions of contemporary academic scholarship, and as a writer who had written for various publications but who was eventually fired from his blog and regular review slot at Commentary when he published “The Conservative Case for Gay Marriage” after the 2012 presidential election, and as someone who in his last year did not have his teaching contract renewed at Ohio State, so that he became a teacher without a classroom — as all of the institutional contextual girders that supported his regular writing fell away — Myers still continued to write. He showed what one person with a library card, a Blogger account, and an internet connection can accomplish.

And what did he accomplish? Well, he became a permanent fixture in my literary sensibility, and he did the same for several other writers out there currently working. You don’t have to do much detective work to find a wide swath of contemporary writers and academics who read Myers avidly, who did not necessarily agree with him but who recognized the excellence he embodied.

Myers wrote that “the sum and substance of what it means to respect the institution of literature” was manifested in the “moral obligation to write well.” What’s so burdensome about this obligation is that it must be borne every time you set down a sentence. But Myers bore that burden as if it were a blessing.

He died last Friday after living with prostate cancer for several years. He was married and a father to four children.

Mechanisms of prestige

Yesterday, I was reading this excellent post from Rohan Maitzen at her Novel Readings blog, which led me to another excellent post where she succinctly describes the predicament of literary criticism at the present time. Namely, where should a professional critic publish her criticism in the age of easy online self-publishing, aka blogs? Should one publish via the slow, vetted, and prestigious venue of professional scholarly journals? Or should one publish via their own personal blog? Or somewhere in between?

Maitzen quite calmly and intelligently says it should be a mixture and that different forms of writing are better suited to different contexts, but that each has its place. She traffics neither in blog triumphalism nor in professional, old-school, rear-guard defensiveness. Blogs are neither the only place for literary criticism nor merely a venue for networking and personal commercials. They are another avenue for writing and thought, and the practice of regular blogging can be its own valuable contribution to literary culture. What’s more, writing via a personal blog in some ways fixes the problems of professional scholarship: its slowness, its almost autistic inability to deal with a non-professional audience, its theoretical architecture and prior-scholarship throat-clearing, its restriction to printed journals located only in college libraries, etc. (Of course, many of these problems are also intentional benefits; such is life.)

What piqued me personally about Maitzen’s post is how much of it articulates thoughts I’ve felt but have been unable to express regarding the publishing of short fiction. I am not in any way a “professional literary critic,” but I was, for a brief time, a teacher of creative writing, and many of the mechanisms of prestige and professional publication for literary criticism are mirrored in the world of creative writing. In fact, since fiction and poetry writing became activities of instruction within the English department, the publication of those types of works (via literary journals often run by graduate students at large universities) has modeled itself off of scholarly peer-reviewed articles. The consequences are sometimes similar: extremely long publishing cycles, prestige from publication combined with a kind of sequestration from day-to-day literary life, creating a kind of slow-moving museum of prose, etc. To be sure, not all literary journals are like this; many of the livelier journals are disconnected from university life entirely, or they are run by a permanent series of editors. I don’t think it’s a coincidence that the journals with better editorial consistency aren’t changing out their student mastheads every 2-3 years. (Of course there are exceptions to my exceptions, but go with me.)

But what this means is that literary fiction and poetry are even further decontextualized from everyday literary life. They exist solely on the reservation of the campus. (And the journals are extremely hard to get into! At least, I’ve found them extremely hard to get into, but perhaps I’m simply not talented or diligent enough — a distinct possibility.) The whole enterprise becomes a country club of staid fashion and values.

And all of this professional rigmarole is rendered even more ridiculous when you take into account the absurd ease of online publication. Why spend years submitting a 14-page story so that it will be published in a modestly respectable print journal that will (under the most wildly optimistic of circumstances) be read by 1,500 subscribers when, in the span of one afternoon, it can be fairly nicely presented on the world of worldwide webs, where it can be read by anyone (or no one!) for as long as you’re able to maintain the software? The answer, of course, is the prestige of the print publication. It means something to publish in State University Quarterly, while it means almost nothing to publish here at my blog, even though the words themselves could be exactly the same. The problem here, which is I think even more acute for so-called “creative work,” as opposed to literary criticism, is one of context. So much of art depends upon its context to determine its value. A urinal in a bathroom is something you piss into, but placed sideways in an exhibit and signed, it’s a sculpture. In a realm of no context, it’s both, but there is no realm without some kind of context. But what the context of prestige provides is legitimacy. In fact these mechanisms of prestige often take the very place of having to read a story. It’s in the New Yorker; it almost doesn’t matter what it says. The container is more important than what is contained. Or take the Harper’s “readings” section, which picks and chooses pieces of found text and re-contextualizes them within the well-justified columns of that magazine. What a fortuitous changing of context!

(I feel like I have said all this before, probably just to myself but also perhaps in some form on this website. Here’s hoping I can turn redundancy into a charming quirk.)

But until online self-publication is afforded the same attention to each iteration’s own self-generated context and potential worth, online writing will exist in the eyes of professionals as a type of neverending graffiti.

I think that this will end or at least develop when some kind of literary critical version of Jason Kottke comes along, who will not publish the good literary criticism but will draw attention to the worthwhile, already-published literary criticism. Publishing will come to seem less important and the drawing attention to, the congealing of attention around, what is already available will become much more valuable. The context will become being picked up by Kottke, or some such.

Of course, with all of the rapid linking going on now and the fact that many lit blogs have been running strong for ten years, we’re already in that world; it’s just that this isn’t recognized as the primary distinction. Getting in print is still the primary distinction, when it should be the attention of a respected editorial eye: a Kottke of the literary world, or a Maitzen, or a James Wood, or a Dan Green, etc.