Category Archives: internet culture

Dispatches from the Digital Revolution

I know right now that I am partly subject to the enthusiasm of the new convert in seeing my object of adoration everywhere I turn, but truly, it seems that everywhere I turn these days I see the landslide toward a total digitalization of the world of the humanities.  Like a landslide, it may have looked a long way off at first, but it’s upon us now, and the rumble has become a roar.  As I said in this previous post, I think we’re a long way past a print-plus world, and we had better figure out how digital tools, whether simple things like e-books or the complex tools and methodologies associated with digitalization, are going to change what we are doing with ourselves and our students.  A few rumblings:

1. Robert Darnton announces that the Digital Public Library of America will be up and running by 2013.  Darnton, an advocate of public digitalization efforts that would prevent private entities like Google from controlling access to information, has spearheaded the effort to bring together the digitalization efforts of libraries around the globe.  According to the DPLA’s website, its purpose is described as follows:

Many universities, public libraries, and other public-spirited organizations have digitized materials that could be brought together under the frame of the DPLA, but these digital collections often exist in silos. Compounding this problem are disparate technical standards, disorganized and incomplete metadata, and a host of legal issues. No project has yet succeeded in bringing these different viewpoints, experiences, and collections together with leading technical experts and the best of private industry to find solutions to these complex challenges. Users have neither coherent access to these materials nor tools to use them in new and exciting ways, and institutions have no clear blueprint for creating a shared infrastructure to serve the public good. The time is right to launch an ambitious project to realize the great promise of the Internet for the advancement of sharing information and of using technology to enable new knowledge and discoveries in the United States.

2. Appearance of the Journal of Digital Humanities:  I already mentioned this yesterday, but I’ll go ahead and do it again.  It seems to me that Digital Humanities is coalescing into a force in academe–rather than a marginalized crew on the ragtag end–not unlike the massive changes that occurred in humanistic studies after 1966 and the advent of deconstruction and its step-children.  In my estimation the change may be even more massive–and perhaps more painful and more exciting–than those earlier changes, since deconstruction did not essentially change the tools of the trade–we still read books (and gradually included film, pop culture, and other media) and we still wrote papers about them.  While deconstruction may have been a more sophisticated and nifty-looking hammer, it was still basically a hammer.  Digital Humanities is changing humanistic work at the level of the tool, creating houses without hammers.

3. People who read e-books read more books than those who do not--A new Pew Research Center study suggests the following:

a survey from the Pew Research Center’s Internet & American Life Project shows that e-book consumers in the U.S. are reading over a third more books than their print-only counterparts. According to the report, titled “The Rise of E-Reading,” the average reader of e-books says he or she has read 24 books in the past 12 months, compared with an average of 15 books by non–e-book consumers.

Overall, Pew found that the number of American adults who say they have read an e-book rose to 21%, compared to 17% reported just a few months ago in December 2011. That jump comes following a holiday season that saw a spike in the ownership of both tablet computers and dedicated e-readers.

I admit that I want to cavil a bit about this news.  It’s also been demonstrated that e-readers so far are overwhelmingly dominated by pulp-fiction romances and mysteries, the kind of thing you can read easily in a day.  On the other hand, book selling and reading in general have ALWAYS been dominated by the romance and mystery genres, so that’s nothing new.

The same Publishers Weekly article points to a study saying that e-readers are poised to take off with a massive global spike.  We’ve heard this before, but….Well, I asked my boss the other day if I could purchase a Kindle so I could experiment with the Kindle library program.  I am over the edge and into the dark side of the abyss.

4. The New York Public Library has opened up an amazing new database tool for the 1940 census–itself an amazing database just released by the U.S. government.  I haven’t totally figured out how to use it yet, but you can search for persons in the census, tag their locations on GIS-based maps of New York City, and do multilayered searching of NYC based on the crowd-sourced effort at developing a digital social history of New York City.  According to this article in the Gothamist,

Kate Stober at the NYPL tells us it’s “more than just a research tool, we’ll be helping New Yorkers create a social history map of buildings and neighborhoods in the five boroughs. When you find an address, the tool pins it to both a 1940 map and a contemporary map, so you can see how the area has changed. You’re then invited to leave a note attached to the pin—memories, info about who lived there, what the neighborhood was like, questions… As people use the site, we’ll build a cultural map of New York in 1940 that will assist both professional historians and laypeople alike.” And that’s pretty amazing.

I’m especially fond of this article because it goes on to point out that the famous recluse J.D. Salinger was indeed living in plain sight on Park Avenue in New York City in 1940.  You just had to know his first name was Jerome and have faith that there couldn’t be more than one Jerome D. Salinger in Manhattan.  I think the question for humanist scholars will be this: what responsible teacher of the culture, art, history, politics, et cetera, of America in the 1940s would not want to use this tool and insist that their students use it too?

It’s more than a rumble.

Journal of Digital Humanities: The Community as Gatekeeper

Earlier today I posted on my ongoing sense of mild disorientation in making my way through the thickets of Digital Humanities, noting with complaint that roads and pathways toward destinations were none too clearly marked, and that gateways “in” seemed obscured by a resistance to the notion that there were insiders and outsiders to begin with.  It’s probably a good thing I posted this morning, since this evening I was pleasantly surprised by the arrival of a roadmap and a gateway on my iPad screen in the form of the newly minted Journal of Digital Humanities.  Not only does it look like a really fantastic read, with articles ranging from theory to the problems related to specific projects and tools to the question of the privileging of racial and gender stereotypes in DH discourse, it actually has an article written just for me and my fellow “noobs” whom I evoked in my post earlier today: Lisa Spiro’s “Getting Started in the Digital Humanities”.  It’s really a little bit more of a catalogue than an article, and I would have liked a little more reflective or evaluative analysis, serving perhaps as a kind of bibliographic essay.  The very large number of possibilities, and the fact that they all exist on a more or less equal plane, still leaves one groping just a bit.  But still, mostly I found it really informative.  I also found it comforting, because I recognized a lot of the resources and felt that I and my group here at Messiah College had been pursuing the right things, consulting the right sources, looking in the right places–feeling a little like one who has been wandering around in the woods for several hours and crests a hill to discover she’d been going the right way all along.

Perhaps more than that article, however, the fact of the journal itself struck me as a kind of beacon–although I know there are other journals related to DH, and I’ve looked at some of them.  Perhaps I felt this way because of its unique editorial and publishing agenda, embodying an open-review ethos and practice.  From the editors’ introduction to the journal:

Nothing herein has been submitted to the Journal of Digital Humanities. Instead, as is now common in this emerging discipline, works were posted on the open web. They were then discovered and found worthy of merit by the community and by our team of editors.

The works in this issue were first highlighted on the Digital Humanities Now site and its related feeds. Besides taking the daily pulse of the digital humanities community—important news and views that people are discussing—Digital Humanities Now serves, as newspapers do for history, as a rough draft of the Journal of Digital Humanities. Meritorious new works were linked to from Digital Humanities Now, thus receiving the attention and constructive criticism of the large and growing digital humanities audience—approaching a remarkable 4,000 subscribers as we write this. Through a variety of systems we continue to refine, we have been able to spot articles, blog posts, presentations, new sites and software, and other works that deserve a broader audience and commensurate credit.

Once highlighted as an “Editors’ Choice” on Digital Humanities Now, works were eligible for inclusion in the Journal of Digital Humanities. By looking at a range of qualitative and quantitative measures of quality, from the kinds of responses a work engendered, to the breadth of the community who felt it was worth their time to examine a work, to close reading and analyses of merit by the editorial board and others, we were able to produce the final list of works. For the inaugural issue, more than 15,000 items published or shared by the digital humanities community last quarter were reviewed for Digital Humanities Now. Of these, 85 were selected as Editors’ Choices, and from these 85 the ones that most influenced the community, as measured by interest, transmission, and response, have been selected for formal publication in the Journal. The digital humanities community participated further in the review process through open peer review of the pieces selected for the Journal. Authors selected for inclusion were given time to revise their work to answer criticisms and suggestions from the community and editors, prior to a round of careful editing to avoid typographical errors and other minor mistakes.

This strikes me as ingenious, since it combines a high standard of quality control with a community-based ethos.  Theoretically, this produces a work that is neither the idiosyncratic preference of an editor nor simply a scattershot collection of the individual preferences of readers or writers.  It really is in some ways the embodiment of the values of a particular academic community, demonstrating and enacting the standards by which membership and participation in that community are determined.  In my post earlier today I discussed the importance of gatekeepers as a “way in,” even though the presence of gatekeepers can feel exclusionary or hierarchical.  This kind of approach to an academic journal strikes me as a way of embodying the community as gatekeeper, something that comes closer to the kind of egalitarian ideals that DH folks obviously hold dear.

In any case, kudos to the editors and the community that built this journal.  I’m looking forward to the read.

Literacy in the Digital Humanities: Or, a clueless “noob” in digital academe

Today my faculty group focused on the Digital Humanities here at Messiah College had a great session with Ryan Cordell from St. Norbert’s College.  Ryan blogs regularly for ProfHacker at the Chronicle of Higher Education, and holds down the Digital Humanities fort (or perhaps leads the insurgency) at St. Norbert’s.  He’s also done some work advising liberal arts colleges on projects in the Digital Humanities, so I thought he’d be a good choice as a consultant.  I’m happy with the choice: Ryan was practical and down-to-earth, while also pointing to really challenging and exciting places we could take some of our nascent ideas.  I think we came away with some good possibilities for next steps that will lead to concrete action in the next year.  I highly recommend Ryan if you’re looking for a consultant for starting or managing digital humanities projects in a smaller school setting.

Earlier in the day I had had the good luck to look in on a massive Twitter debate that was, unbeknownst to the participants, about–or at least precipitated by–me and a brief conversation I’d had with Ryan.  I’d told Ryan that one of my biggest concerns was professional development for faculty and getting them over some of the immediate humps of alienation that traditional humanistic scholars feel when confronted with what amounts to an alien DH world.  I mentioned that I and one of my colleagues, David Pettegrew–who is himself much more versed in technical know-how than I am–went to a THATCamp and spent the first two or three hours feeling completely lost and at sea, unable to fully comprehend half the language that was being used or the tasks we were being asked to implement.  I mentioned to Ryan that I felt I probably needed half a semester of a coding class before I could have gotten everything out of the THATCamp that I should have.  Although that improved as things went along and we got into concrete projects–and I found everyone very gracious and the atmosphere enthusiastic–I was worried that faculty who were only tentatively interested in investigating (and perhaps then only after my pleading) would be discouraged or uninterested in engaging with DH if a THATCamp was their first experience.

Ryan mentioned this in a tweet yesterday.

All-twitter-hell broke loose.

Well, not really.  In fact it was a really fascinating and intellectually complex conversation–one I wouldn’t have thought could happen via Twitter.  I won’t try to replicate that conversation completely here; you can go to Ryan’s Twitter feed and find the essentials for yourself.  It was clear, though, that Ryan’s tweet had touched what amounted to a raw digital nerve.  Some tweeters were flabbergasted that anyone would find a THATCamp daunting or that it could ever be alienating.  Others assumed that the problem must have been with me–that I was too shy to ask for help.  Ultimately the conversation turned to a pretty serious engagement with the question of whether there were genuinely exclusive insider groups and hierarchies within DH.

As a “noob”–which, I discovered in the course of the Twitter conversation yesterday, is what I am–I am here to say, without a hint of condemnation, “Yes, yes, yes there are.”

For me, this is not a moral or even a political statement, though it was very clear that for many people in the conversation it was a moral or political concern.  To admit to hierarchies and exclusivity was a betrayal of the collaborative and radically democratic spirit that many feel is at the heart of DH work.  I will say that these collaborative aspects are part of what most attracts me to what’s going on in DH–as little as I actually know.  I see it as a superb fit for some of the commitments my school has to the public humanities and to public service more generally, besides moving students into more collaborative learning environments that will be useful to them in the world they are entering.

However, any academic discourse that is imaginable–maybe any discourse that is imaginable at all–operates by exclusion and inclusion, simply given the facts that there are those who know the language and those who do not, those who are literate in the language and those who are not, those who are fluent in the language and those who are not, and those who are creators in, with, and of the language and those who are not.  It is impossible for me to imagine how this could be otherwise.

The reason DH can be difficult and alienating for beginners like me is that we don’t know enough of the language even to know what to ask for.  I will say I mused over the question of whether I had just been too shy to ask for help at the THATCamp.  Being a shrinking violet is not really a quality that will get you terribly far in administration, so I doubt it, but it may be that I could have asked for more help.  The problem was, I felt so lost that I wasn’t entirely sure what kind of help to ask for.  This is a basic function of discourse: to understand the parameters of the language games you are playing, to know what questions to ask, what moves to make and when, and where to go for the vocabulary you need.  It’s why you need consultants like Ryan, or teachers who are in the know.  It’s the rationale for the title of my post referencing Gerald Graff’s Clueless in Academe.  DH is obviously a part of academe, even in its alt-academic forms, and it is increasingly central to academic work in the humanities, and there are an awful lot of people who are clueless about where to begin.

There is nothing morally or politically wrong with this, or with being a part of an in-group.  To say there is would be to say there is something morally or politically wrong with being alive.  Hyper-Calvinists aside, I don’t think this is a tenable position.

The problem, however, from an administrator’s point of view–and I speak into this conversation primarily as an administrator who is trying to facilitate the work of others and promote the well-being of our students–is that the pathways toward accessing the language and practices of this world aren’t always terribly clear.  Indeed, ironically, I think some of the laudable democratic ethos in DH work and culture may contribute to this obscurity.  Because a THATCamp–and so much other DH work–is so democratically organized, one experience, conference, or workshop may in fact work really well for rank beginners, while another may require attendees to be a little more versed in the basics before attending.

For me as a person and as a thinker, that’s fine.  I actually look forward to going to another THATCamp someday, even if I am just as lost as I was the first time around. My tenure no longer depends upon it–which gives me a freedom my junior faculty do not have.

However, as an administrator, that democratic quality is a disaster as I consider what kinds of professional development efforts to try to support with my faculty.  I would not be able to tell whether a particular experience would be appropriate for a rank beginner who is hesitantly interested or at least willing to give this a try.  Alternatively, I wouldn’t be able to know ahead of time whether a particular experience would be appropriate for a more advanced colleague who might go and get an iteration of the basics she already knows.  My ability to manage my budgets in a responsible fashion is hampered by my inability to gauge what kinds of professional development experiences I should pursue or promote with my colleagues who are at very different places in their experience of and expertise in DH methodologies and practices.

The traditional life of a humanist academic is elitist in its own obvious ways, with its own arcana and exclusionary practices.  But the pathway toward access to its languages is fairly well marked, even if it is now travelled successfully by only a very lucky few.  I could tell junior faculty members 10 years ago that if they wanted to succeed at my college they needed to do three or four things, and I could outline how they should go about doing them.  I don’t sense that kind of pathway to DH work, yet, so while I want mightily to get my faculty more involved with some of these efforts, I’m also aware that without a clearer path for their own professional development, I may be as likely to facilitate confusion as to promote professional development.

This problem may simply disappear as DH becomes more and more central to the humanist enterprise, but I suspect that as it does, the pathways to access will have to become more and more clearly marked.  This means the development of disciplinary (or quasi-disciplinary–I am aware of the angst over thinking of DH as a discipline) protocols and expectations, and, as importantly, the expected means by which those elements of professional life are effectively accessed by beginners.

This means the recognition of certain gateways and the appointment of their gatekeepers, which all smacks a little bit of hierarchy and exclusion.  However, while it’s true that roadmaps undemocratically dominate a landscape, they also get you where you need to go.  And while gateways mark a boundary, they also let you in.

What is the future of the book?–Anthony Grafton’s Keynote lecture at Messiah College

This past February we had the privilege of hearing from Dr. Anthony Grafton of Princeton University at our Humanities Symposium at Messiah College.  Grafton is a formidable scholar and intellect, and a generous soul–a too-rare combination.  The following video is his keynote lecture for the Symposium.  Grafton’s instincts are conservative; he readily admits his undying love for the codex and its manifold cultural institutions (libraries, used bookstores, even Barnes and Noble).  At the same time, he is under no illusions: the future of the book lies elsewhere.  His lecture looks at what is threatened, at what should be valued and protected from the past, but also at what might be the potential future of the book, and at what values we should bring to bear in shaping the book, which is, after all, a human institution.

Many thanks to Derick Esch, my work-study student, for his work in filming and producing this video.  Other videos from the symposium can be found at the same Vimeo page.

Dr. Anthony Grafton: 2012 Humanities Symposium Keynote Address from Messiah Humanities on Vimeo.

Why digital humanities is already a basic skill, not just a specialist niche–Matthew Kirschenbaum

Sometimes I think we humanists “of a certain age,” to put the issue politely, imagine digital humanities as an optional activity carried on by an interesting coterie of young professors who take their place in the academy as yet another niche sub-discipline–something that research universities hire for and small colleges struggle hopelessly to replicate.  It may indeed be that small colleges will struggle to integrate digital humanities into their own infrastructures, but I think the general picture of Digital Humanities as an optional sub-discipline will simply be unsustainable.  The argument smells a little of the idea that e-books are a nice sub-genre of texts, but not something the average humanist has to worry that much about.  I think, to the contrary, that digital humanities, and the multitude of techniques it entails, will become integrated in a fundamental way with the basic methodologies of how we go about doing business, akin to knowing how to do close reading or how to maneuver our way through libraries.

Although pointing this out is not his main point, Matthew Kirschenbaum–already a Digital Humanities patron saint in many respects–has an essay in The Chronicle that points to this fact.  Kirschenbaum is currently interested in how we preserve digital material, and the problems are just as complex, if not more so, than the general question of how and when to save print materials–more so to the degree that we cannot be sure the current forms in which we place our digital intelligence will actually be usable five years from now.  The consequences for humanities research and writing are profound and must be considered.  From Kirschenbaum:

Digital preservation is the sort of problem we like to assume others are thinking about. Surely someone, somewhere, is on the job. And, in lots of ways, that is true. Dire warnings of an approaching “digital dark ages” appear periodically in the media: Comparisons are often made to the early years of cinema—roughly half of the films made before 1950 have been lost because of neglect. 

But the fact is that enormous resources—government, industry, and academic—are being marshaled to attack the problem. In the United States, for example, the Library of Congress has been proactive through its National Digital Information Infrastructure and Preservation Program. Archivists of all stripes now routinely receive training in not only appraisal and conservation of digital materials but also metadata (documentation and description) and even digital forensics, through which we can stabilize and authenticate electronic records. (I now help teach such a course at the University of Virginia’s renowned Rare Book School.) Because of the skills of digital archivists, you can read former presidents’ e-mail messages and examine at Emory University Libraries a virtual recreation of Salman Rushdie’s first computer. Jason Scott’s Archive Team, meanwhile, working without institutional support, leaps into action to download and redistribute imperiled Web content.

What this suggests is that Rushdie’s biographers will need to know not so much how to sift through piles of letters as how to recreate digital archives that authors themselves may not be interested in preserving.  Biographers of the present, and surely of the future, will have to be digital technicians as well as close readers of the digital archives they are able to recover.
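Kirschenbaum mentions digital forensics as the means by which archivists “stabilize and authenticate electronic records.”  As a minimal illustration of what that can mean in practice–my own sketch, not any particular archive’s workflow–one basic step is fingerprinting a recovered file with a cryptographic hash, so that anyone consulting the archive later can verify the bytes have not silently changed:

```python
# A minimal sketch of one digital-forensics step: fixing the contents
# of a recovered file with a SHA-256 digest so later readers can
# verify that the archived copy is unaltered.
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest identifying these exact bytes."""
    return hashlib.sha256(data).hexdigest()

# A hypothetical recovered draft (stand-in data, for illustration only).
draft = b"recovered draft, floppy disk 3, file DRAFT1.WPD"
digest = fingerprint(draft)

# The same bytes always yield the same digest; changing even one
# character yields a different one.
assert fingerprint(draft) == digest
assert fingerprint(draft + b" (edited)") != digest
```

The same idea underlies checksums in institutional repositories: the digest is recorded alongside the file, and re-computed whenever authenticity is in question.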

Kirschenbaum goes on to suggest that most of us must do this work on our own, and must do this work for ourselves, in preserving our own archives.

But despite those heroic efforts, most individuals must still be their own digital caretakers. You and I must take responsibility for our own personal digital legacy. There are no drive-through windows (like the old photo kiosks) where you can drop off your old floppies and pick up fresh files a day or two later. What commercial services are available tend to assume data are being recovered from more recent technology (like hard drives), and these also can be prohibitively expensive for average consumers. (Organizations like the Library of Congress occasionally sponsor public-information sessions and workshops to teach people how to retrieve data from old machines, but those are obviously catch as catch can.)

Research shows that many of us just put our old disks, CD’s, and whatnot into shoeboxes and hope that if we need them again, we’ll figure out how to retrieve the data they contain when the time comes. (In fact, researchers such as Cathy Marshall, at Microsoft Research, have found that some people are not averse to data loss—that the mishaps of digital life provide arbitrary and not entirely unwelcome opportunities for starting over with clean slates.)

This last, of course, is an interesting problem.  Authors have often been notoriously averse to having their mail probed and prodded for signs of conflicts and confessions, preferring that the “work” stand on its own.  Stories of authors burning their letters and manuscripts are legion–nightmarishly so for the literary scholar.  Such literary self-immolations are both harder and easier in a digital world.  My drafts and emails can disappear at the touch of a button.  On the other hand, I am told that, for those who are really in the know, a hard drive is never actually erased.  Then again, the task of the scholar who sees a writer’s computer as his archive is in some ways vastly more difficult than that of the scholar whose writer was an assiduous collector of his typewritten drafts.  Does every deletion and spell-correction count as a revision?  What should we trace as an important change, and what should we disregard as detritus?  These are, of course, the standard archival questions, but it seems to me they are exponentially more complicated in a digital archive, where a text may change a multitude of times in a single sitting–something not so possible in a typewritten world.

Well, these are the kinds of things Kirschenbaum takes up.  And having the tools to apply to such questions will be the task for every humanist in the future, not for a narrow coterie.

Living in an e-plus world: Students now prefer digital texts when given a choice

A recent blog post by Nick DeSantis in the Chronicle points to a survey by the Pearson Foundation suggesting that tablet ownership is on the rise.  That’s not surprising, but more significant is the fact that among tablet users there’s a clear preference for digital texts over the traditional paper codex–something we haven’t seen before, even among college students of this wired generation:

One-fourth of the college students surveyed said they owned a tablet, compared with just 7 percent last year. Sixty-three percent of college students believe tablets will replace textbooks in the next five years—a 15 percent increase over last year’s survey. More than a third said they intended to buy a tablet sometime in the next six months.

This year’s poll also found that the respondents preferred digital books over printed ones. It’s a reversal of last year’s results and goes against findings of other recent studies, which concluded that students tend to choose printed textbooks. The new survey found that nearly six in 10 students preferred digital books when reading for class, compared with one-third who said they preferred printed textbooks.

I find this unsurprising, as it matches up pretty well with my own experience.  Five years ago I could never have imagined doing any significant reading on a tablet.  Now I do all my reading of scholarly journals and long-form journalism–e.g., The Atlantic, the New York Review of Books, The Chronicle Review–on my iPad.  And while I still tend to prefer the codex for reading novels and other book-length works, the truth is that this preference is slowly eroding as well.  As I become more familiar with the forms of e-reading, the notions of its inherent inferiority, like the notions of any unreflective prejudice, gradually fade in the face of familiarity.

And yet I greet the news of this survey with a certain level of panic–not panic that it should happen at all, but panic that the pace of change is quickening and we are hardly prepared; by “we” I mean we in the humanities, here in small colleges and elsewhere.  I’ve blogged on more than one occasion about my doubts about e-books, and yet my sense of their inevitable ascendancy: for instance, here on the question of whether e-books are being foisted on students by a cabal of publishers and administrators like myself out to save a buck (or make a buck, as the case may be), and here on the nostalgic but still real feeling I have that print codex forms of books have an irreplaceable individuality and physicality that the mere presence of text in a myriad of e-forms does not suffice to replace.

But though I’ve felt the ascendancy of e-books was inevitable, I think I imagined a 15- or 20-year span in which print and e-books would mostly live side by side.  Our own librarians here at Messiah College talk about a “print-plus” model for libraries, as if e-books will remain primarily an add-on for some time to come.  I wonder.  Just as computing power increases exponentially, it seems to me that the half-life of print books is rapidly diminishing.  I now wonder whether we have even five years before students will expect their books to be digital–all their books, not just their hefty tomes for CHEM 101 that can be more nicely illustrated with iBook Author, but their books for English and History classes as well.  This is an “e-plus” world, where print will increasingly be not the norm but the supplement, filling whatever gaps e-books have not yet bridged, whatever textual landscapes have not yet been digitized.

Despite warnings, we aren’t yet ready for an e-plus world.  Not only do we not know how to operate the apps that make these books available, we don’t even know how to critically study books in tablet form.  Yet learning what forms of critical engagement are possible and necessary will be required.  I suspect, frankly, that our current methods developed out of what was made possible by the forms that texts took, rather than forms following our methodological urgencies.  This means that the look of critical study in the classroom will change radically in the next ten years.  What will it look like?

Should Humanities students learn to code?

One of the big questions that’s been on our minds in the digital humanities working group is whether and to what degree humanities students (and faculty!) need to have digital literacy or even fluency.  Should students be required to move beyond the ability to write a blog or create a wiki toward understanding, and even implementing, the digital tools that make blogs and wikis and databases possible?  This essay from Anastasia Salter takes up the issue, largely in the affirmative, in a review of Douglas Rushkoff’s Program or Be Programmed:  Read more at…Should Humanities students learn to code?.

Discussion of Henry Jenkins and Lev Manovich

I’m also blogging occasionally over at digitalhumanity.wordpress.org, our site for discussing Digital Humanities at Messiah College.  You’re invited to check in and see our flailing around as we try to get our minds around whatever it is that goes on with this field and try to think about how we might contribute.  Today we had a discussion of some work by Henry Jenkins and Lev Manovich.  A few of my notes can be found here.

Grading the Crowd

Can the wisdom of crowds apply to grading student papers, or to the evaluation of culture more generally?  What about the quality of a theological argument, or a decision about foreign policy?  We’re taken a lot with the idea of crowds and collaboration lately, and not without good reason.  I think there’s a great deal to be said for getting beyond the notion of the isolated individual at work in his study; especially in the humanities, I think we need to learn something from our colleagues in the sciences and think through what collaborative engagement as a team of scholars might look like as a norm rather than an exception.  At the same time, is there a limit to collective learning and understanding?  Can we detect the difference between the wisdom of the crowd and the rather mindless preferences of a clique, or a mob?  I found myself thinking about these things again this evening as I read Cathy Davidson’s latest piece in The Chronicle Review, “Please Give Me Your Divided Attention: Transforming Learning for the Digital Age.”

Cathy Davidson, "Now You See It"

I wrote about Davidson a couple of days ago–she’s around a lot lately, as authors tend to be when a publisher has decided to push a new book–and I feel almost bad at taking up only my crabbiest reactions to her recent work.  First, let me say that I briefly crossed paths with Davidson at Duke, where she was hired the year I finished my doctorate in English.  She seemed like a breath of fresh and genuine air in a department that could sometimes choke on its collective self-importance, and the enthusiasm and generosity and love of teaching that Davidson evinces in this essay were evident then as well, though I never had her for class.  And, as this comment suggests, I think there’s a lot in this essay that’s really important to grapple with.  First, her account of the ways that she and some of her colleagues at Duke trusted students with an experiment in iPod pedagogy, which paid off in so many unexpected ways–we now know a good bit of that was far ahead of its time.  Moreover, she paints a wonderful picture of students as collaborative teachers in the learning process in her course on the way neuroscience is changing everything.  Still, as with a lot of these things that focus on student-centeredness, I find that promising insights are blinded by what amounts to a kind of ideology that may not be as deeply informed about human action as it really ought to be.  I felt this way in Davidson’s discussion of grading.

 There are many ways of crowdsourcing, and mine was simply to extend the concept of peer leadership to grading. The blogosphere was convinced that either I or my students would be pulling a fast one if the grading were crowdsourced and students had a role in it. That says to me that we don’t believe people can learn unless they are forced to, unless they know it will “count on the test.” As an educator, I find that very depressing. As a student of the Internet, I also find it implausible. If you give people the means to self-publish—whether it’s a photo from their iPhone or a blog—they do so. They seem to love learning and sharing what they know with others. But much of our emphasis on grading is based on the assumption that learning is like cod-liver oil: It is good for you, even though it tastes horrible going down. And much of our educational emphasis is on getting one answer right on one test—as if that says something about the quality of what you have learned or the likelihood that you will remember it after the test is over.

Grading, in a curious way, exemplifies our deepest convictions about excellence and authority, and specifically about the right of those with authority to define what constitutes excellence. If we crowdsource grading, we are suggesting that young people without credentials are fit to judge quality and value. Welcome to the Internet, where everyone’s a critic and anyone can express a view about the new iPhone, restaurant, or quarterback. That democratizing of who can pass judgment is digital thinking. As I found out, it is quite unsettling to people stuck in top-down models of formal education and authority.

Davidson’s last-minute veering into ad hominem covers over the fact that she doesn’t provide any actual evidence for the superiority of her method; offers a cultural fact as a substantive good–if this is how things are done in the age of digital thinking, it must be good, you old fogies; seems crassly to assume that any theory of judgment that does not rely on the intuitions of 20-year-olds is necessarily anti-democratic and authoritarian; and glibly overlooks the social grounding within which her own experiment was even possible.  All of this does sound like a lot of stuff that comes out of graduate departments in English, Duke not least of all, but I wonder if the judgment is really warranted.

An alternate example would be a class I occasionally teach, when I have any time to teach at all anymore, on book reviewing.  In the spirit of democratizing the classroom, I usually set this course up as a kind of book contest in which students choose books to review, and on the basis of those reviews books proceed through a process of winnowing until at last, with two books left, we write reviews of the finalists and then vote for our book of the year.  The wisdom of the crowd does triumph in some sense, because through a process of persuasion students have to convince their classmates which books are worth reading next.  The class is partly about the craft of book reviewing, partly about the business of book publishing, and partly about theories of value and evaluation.  We spend time not only thinking about how to write effective book reviews for different markets; we also discuss how theorists from Kant to Pierre Bourdieu to Barbara Herrnstein Smith treat the nature of value, all in an effort to think through what we are saying when we finally sit down and say one thing is better than another.

The first two times I taught this class, I gave the students different lists of books.  One list included books that had been shortlisted for book awards, one included first-time authors, and one included other books from notable publishers that I had collected during the previous year.  I told them that to begin the class they had to choose three books to read and review from the lists I had provided, and that at least one book had to be by a writer of color (my field of expertise being ethnic literature of the U.S., I reserved the right).  They could also choose one book through their own research to substitute for a book on one of my lists.  Debates are always spirited, and the reading is always interesting.  Students sometimes tell me that this was one of their favorite classes.

The most recent time I taught the class, I decided to take the democratizing one step further by allowing the students to choose three books entirely of their own accord.  Before class began I told them how to go about finding books through the use of major industry organs like Publishers Weekly, as well as how to use search engines on Amazon and elsewhere (their digital knowledge notwithstanding, students are often surprised at what you can do with a search engine).  The only other guidance was that students would ultimately have to justify their choices by defending in their reviews why they liked the books and thought they could be described as good works of literature, leaving open what we meant by terms like “good” and “literature,” since that was part of the purpose of the course.

The results were probably predictable but left me disheartened nonetheless.  Only one book out of the fifty-some in the first round was by a writer of color–a predictable problem, but one I had kept my fingers crossed would not occur.  More than half the books my students chose were from the romance, mystery, fantasy, and science fiction genres.  Strictly speaking I didn’t have a problem with that, since I think great works of fiction can be written in all kinds of genres, and most works of what we call literary fiction bear the fingerprints of their less reputable cousins (especially mystery writing, in my view, but that’s another post).  I thought there might be a chance that there would be some undiscovered gem in the midst.  I do not have the time to read all fifty books, of course, but rely on students to winnow for me and then try to read every book that makes the round of eight.  It’s fair to say that in my personal authoritarian aesthetic, none of the books that fell into those genres could have been called a great work of fiction, though several of them were decent enough reads.  Still, I was happy to go along and see where things would take us, relatively sure that as things went on and we had to grapple with what it meant to evaluate prose, we would probably still come out with some pretty good choices.

Most of the works I would have considered literary were knocked out by the second round, though Jonathan Franzen’s Freedom did make it all the way to the finals, paired against Lisa Unger’s entry from 2010, whose title I can’t even remember now.  In the end the class split almost down the middle, but chose Unger’s book as the best book they had read during the course of the semester.  Not that I thought it was a terrible book.  It was a nice enough read, and Unger is a decent enough mystery writer.  But being asked to remember the book is a little like being asked to remember my last trip to McDonald’s.  One doesn’t go there for a memorable dining experience, and one doesn’t read Lisa Unger in order to come up with books that we will care to remember several weeks after having set them down.  But what was perhaps most intriguing to me was that after an hour-long discussion of the two books, in which students offered spirited defenses of each writer, I asked them: if they could project themselves into the year 2020 and had to choose only one book to include on a syllabus for a course on the best books of 2010, which book would it be?  Without exception the students voted for Franzen’s book.  When I asked the students who had changed their votes why this would be, they said, “We think that Franzen is more important; we just liked reading Unger more.”

This is the nub.  Can the wisdom of crowds decide what is most important?  To that, the answer can only be “sometimes.”  As often, crowds choose what is conveniently at hand, what satisfies a sweet tooth, or even the desire for revenge.  Is there a distinction between what is important or what is true and what is merely popular?  Collaboration can lead us past blindnesses, but it is not clear that the subjectivity of a crowd is anything but blind (in my original draft I typed “bling,” a telling typographical slip, and one that may be truer and more interesting than “blind”).  It is not clear that crowds can consistently be relied upon to decide by intuition what ought to last.  This may not be digital thinking, but at least it is thinking, something crowds cannot always be relied upon to do.

If we could really rely on crowds to make our choices, we would discover that there is very little to choose between almost anything.  Going on Amazon, what is amazing is that four stars is the average score for the hundreds of thousands of books that are catalogued.  And popularity trumps everything: Lisa Scottoline scores higher in a lot of cases than Jane Austen.  Literally everything is above average and worth my time.  This is because in the world of the crowd, people mostly choose to be with those crowds that are most like themselves and read those things that are most likely to reinforce the sense that they are in the right crowd to begin with.  This is true, as even elementary studies of internet usage have pointed out.  Liberals read other liberals, and delight in their wisdom and the folly of conservatives.  Conservatives read other conservatives and do likewise.  This too is digital thinking, and in this case it is quite easy to see that crowds can become authoritarian over and against the voice of the marginalized.  My students’ choice not to read writers of color unless I tell them to is only one small reminder of that.

Which leads to one last observation.  I wonder, indeed, whether this experiment worked so well at Duke because students at Duke already know what it takes to get an A.  That is, in some sense Davidson is not really crowdsourcing at all but is relying on educational processes that deliver students well attuned to certain forms of cultural excellence, able to create effectively and “challenge” the status quo because they are already deeply embedded within those forms of cultural excellence and all the assumptions they entail.  That is, as with many pedagogical theories offered by folks at research institutions, Davidson isn’t theorizing from the crowd but from a tiny, elite, and extremely accomplished sample.  As Gerald Graff points out, most of us most want to teach the students who don’t need us, which means most of us want to teach at places where none of the students actually need us.  Davidson’s lauding of her students for not really needing her to learn may merely be an index of their privilege, not the inherent wisdom of crowds or the superiority of her pedagogical method.  Her students have already demonstrated that they know what it takes to get A’s in almost everything, because they have them–and massively high test scores besides–or they are unusually gifted in other ways, or they wouldn’t be at Duke.  These are the students who not only read their entire AP reading list the summer before class started; they also read all the books related to the AP reading list and attended tutoring sessions to learn to write about them besides.

Let me hasten to say that there is absolutely nothing wrong with their being accomplished.  On some level, I was one of them, having gone to good private colleges and elite graduate programs.  But it is a mistake to assume that the well-learned practices of the elite, the cultural context that reinforces those practices, and the habits of mind that enable the kinds of things Davidson accomplished actually form the basis for a democratizing pedagogy for everyone.  Pierre Bourdieu 101.

Tchotchkes R’ Us: Formerly Known as Barnes & Noble, Booksellers

Like a beaten dog wagging its tail as it returns to its master for one more slap, I keep returning to Barnes & Noble, hoping for a dry bone or at least a condescending pat on the head, and mostly getting the slap.  I’m wondering lately how much longer B&N can hold on to the subtitle of its name with a straight face.  I admit that for purists Barnes & Noble was never much of a bookseller in the first place, the corporate ambiance just a bit too antiseptic for the crowd that prefers its books straight, preferably with the slightest scent of dust and mold.  But for a person who has spent the entirety of his life in flyover country, Barnes & Noble and its recently deceased cousin Borders seemed something like salvation.  If the ambiance was corporate, the books were real, and they were many.  If there were too few from independent publishers, there were more than enough good books to last any reader a lifetime, and I spent many hours on my own wandering the shelves, feeling that ache all readers know: the realization that there are too many good books and one lifetime will never be enough.

Barnes & Noble became a family affair for us.  One way I induced the habit of reading in my kids was to promise I’d take them to B&N anytime they finished a book and buy them another one.  The ploy paid off.  My kids read voraciously, son and daughter alike, putting the lie to the notion that kids today have to choose between reading and surfing.  My kids do both just fine, and I think this is attributable in no small part to the fact that our family time together was spent wandering the endless aisles of bookstores, imagining the endless possibilities–what the world would be like if we only had enough time to read them all.  Other families go on kayak trips; we read books.  I’m not sorry for the tradeoffs.

All that is mostly over, for paper books anyway.  My son and I still go over to Barnes & Noble, but the last three trips we’ve come out saying the same thing to one another without prompting: worthless.  Aisle after aisle of bookshelves in our local store is being replaced by toys and tchotchkes designed to… do what?  It’s not even clear.  At least when the place was dominated by books it was clear that this was where you went for books.  Now it seems like a vaguely upscale Walmart with a vast toy section.  I’m waiting for the clothing section to open up soon.

I don’t think we should underestimate the consequences of these changes for booksellers and book readers.  Although readers will still be able to get books via your local Amazon.com, the place of books is changing in radical ways.  The advent of e-books is completely reordering the business of bookselling–and I would say the culture of books as well.  An article in the Economist notes that books are following into the sucking vortex that has swallowed the music and newspaper industries all but whole.  Among the biggest casualties is the books-and-mortar bookstore, and this is of no small consequence to the effort to sell books in general:

Perhaps the biggest problem, though, is the gradual disappearance of the shop window. Brian Murray, chief executive of HarperCollins, points out that a film may be released with more than $100m of marketing behind it. Music singles often receive radio promotion. Publishers, on the other hand, rely heavily on bookstores to bring new releases to customers’ attention and to steer them to books that they might not have considered buying. As stores close, the industry loses much more than a retail outlet. Publishers are increasingly trying to push books through online social networks. But Mr Murray says he hasn’t seen anything that replicates the experience of browsing a bookstore.

Confession: I actually enjoy browsing Amazon, and I read book reviews endlessly.  But I think this article is right that there is nothing quite like the experience of browsing the shelves at a bookstore, in part because it is a kind of communal event.  It is not that there are more choices–there aren’t; there are far more choices online.  Nor is it necessarily that things are better organized.  I have a better chance of finding things relevant to my interests through a search engine than by a chance encounter in the stacks.  And, indeed, The Strand is a bookstore that leaves me vaguely nauseated and dizzy, both because there is too much choice and too little organization.  But the physical fact of browsing with one’s companions through the stacks, the chance encounter with a book you had heard about but never seen, the excitement of discovery, the anxious calculations–at least if you are an impoverished graduate student or new parent–as to whether you have enough cash on hand to make the purchase now or take a chance that the book will disappear if you wait: all of these get left behind in the sterility of the online exchange.  The bookstore is finally a cultural location, a location of culture, where book-minded people go for the buzz they get from being around other book-minded people.  I can get my books from Amazon, and I actually don’t mind getting them as e-books, avoiding all the fuss of going down and having a face-to-face transaction with a seller.  But that face-to-face is part of the point, it seems to me.  Even though book-buying has always fundamentally been an exchange of cash for commodity, the failure to see that it was also more than that is the cultural poverty of the world that Amazon creates.  With bookstores dying a rapid death and libraries close upon their heels, I’m feeling a little like a man without a country, since the country of books is the one to which I’ve always been most loyal.

I am, of course, sounding old and crotchety.  The same article in the Economist notes that IKEA is now changing its bookshelf line in anticipation that people will no longer use bookshelves for books.

TO SEE how profoundly the book business is changing, watch the shelves. Next month IKEA will introduce a new, deeper version of its ubiquitous “BILLY” bookcase. The flat-pack furniture giant is already promoting glass doors for its bookshelves. The firm reckons customers will increasingly use them for ornaments, tchotchkes and the odd coffee-table tome—anything, that is, except books that are actually read.

I suspect this may be true.  Bookshelves and books alike may become craft items, things produced by those curious folks who do things by hand, items you can only buy at craft fairs and auctions, something you can’t find at Walmart–or Barnes & Noble.