These are the slides for my Sept. 29th featured presentation at the NFAIS Humanities Roundtable in New York. View the full program here.
Full text of this presentation will be added to this post soon.
Drucker, Johanna. “Humanistic Theory and Digital Scholarship.” Debates in the Digital Humanities. Ed. Matt Gold. U of Minnesota P, 2012.
This contribution by Johanna Drucker opens with two pointed questions for the DH world: 1) “are [humanists] actually doing anything different or just extending the activities that have always been their core concerns, enabled by advantages of networked digital technology?” and 2) “have the humanities made any impact on the digital environment? Can we create graphical interfaces and digital platforms from humanistic models?” (85).
I am particularly interested in these questions because I am co-designing a platform specifically intended to aid humanities research – the Writing Studies Tree. Putting a research question, or problem, first and designing to meet that goal seems to me the correct order – and it also addresses Drucker’s inquiry. However, Drucker’s point stands that “protocols for information visualization, geospatial representation, and other research instruments have been absorbed from disciplines whose epistemological foundations and fundamental values are at odds with, or even hostile to, the humanities” (85-86). Data in the humanities tends to rely on context and interpretation – it does not stand alone as purely quantitative. For instance, in the Writing Studies Tree we are visualizing relationships between people and institutions, but these visualizations show only the quantitative data concerning dates, locations, and titles (mentor, chair, instructor, student); they do not, and in my opinion should not, show the qualitative data concerning the nature of those relationships. When we presented this tool to our colleagues at the Conference on College Composition and Communication, audience members wanted the program to show “negative” relationships, for example a falling out between an adviser and advisee. This subjective information can be visualized (I jokingly suggested changing the colored line to look like yellow and black striped caution tape), but should it be? How does representing those emotional or affective descriptions advance research in the field of writing studies?
When Drucker claims that “the ideology of almost all current information visualization is anathema to humanistic thought,” she seems to assume that the humanities have no interest in quantifiable data and do not question the presentation of “what is.” To be honest, I think she is playing devil’s advocate here; it is clear that humanists do grapple with quantifiable data in many fields, especially history, art history, and classics, where dates, times, locations, and quantities (numbers of people, words, paintings, books, etc.) are continuously vital to their work. And, as Drucker rightfully points out, after Father Busa’s first wave of concordance work, many humanities scholars are successfully “counting, sorting, searching, and finding nonambiguous instances of discrete and identifiable string of information coded in digital form” (87). Franco Moretti’s work is probably the best and most well-known example of this in literary studies. Also, coming from a background in composition and rhetoric, training in analyzing visual rhetoric was a critical part of my education and is now an objective of my teaching. It is not a fair criticism of humanities scholars that we do not analyze maps, graphs, charts, and images; however, I do agree that we fail to train emerging scholars to create accurate representations in various media. This is why I fully embrace a constructivist approach to teaching and learning; in fact, it is the foundation of my teaching philosophy. I believe the rise of the digital humanities ethos and the availability of open access (free as in beer and speech) tools have encouraged others to concentrate on pedagogy based on making and building as well. But I still see a lack of training in humanities graduate programs in collecting, visualizing, and reporting data accurately.
Drucker’s vision of this training reimagines traditional forms of visualization, in particular maps, to reveal the cultural regimes that constructed our knowledge of those places, spaces, and events. Although I cannot fully picture how this vision would materialize, I am persuaded by her warning that, “without minimizing the complexity involved in realizations and aware of the risk of cartoonish special effects, my point is that by not addressing these possibilities we cede the virtual ground to a design sensibility based in engineering and games” (92). Along with Drucker, I wonder how the humanities can create new forms of digital scholarship that push the boundaries of how and why we can and should be building and making.
Nunberg, Geoffrey. “Farewell to the Information Age.” The Future of the Book. U of California P, 1996.
I agree with Nunberg’s founding claim here: discussions concerning the future of the book are plagued by “misapprehensions.” The general public, spurred on by journals and critics, seems convinced that new media technologies are causing the death of the book. But as Nunberg argues, although some print-based forms will be transferred entirely online – and for good reason – there is no evidence that the printed book is at risk. For example, works published in many volumes, such as scholarly journals, or those that are printed daily and rely heavily on the prominence of advertising, such as newspapers, seem to function better online. As electronic texts, these genres are easy to search, index, archive, and most importantly access, unlike their paper-bound predecessors. These are not arguments original to Nunberg. What I am interested in is how he supports the claim that “There will be a digital revolution, but the printed book will be an important participant in it” and that “the introduction of these media is bound to be accompanied by sweeping changes…including the relation between the author and reader, the nature of the public, the conception of intellectual property, and the nature of the text itself.” I do not see ample evidence for the former in this excerpt, but there is a slew of interesting support for the latter. For instance, I think Nunberg is right on when he criticizes theorists for applying “old media” terminology and functions to new media formats – such as “author” and “publication.” Neither of these concepts transfers interchangeably into the digital realm. This is easily proven to anyone who has worked on a blog – those who write posts are given the role of “author,” and in order to post their writing they click the button labeled “publish.” This is clearly not the same process as publishing a paperbound book (although in some cases it will reach a greater audience; see Nunberg’s example of the tenure case).
Of course this is why born-digital texts are rightfully the target of suspicion. Without the policing agents involved in traditional publication, how can we be sure the texts we read online are authentic, reliable, authoritative… and protected under copyright? Obviously these are legitimate concerns, but are they exclusive to born-digital texts? I would argue these are the same concerns we must bring to every text we encounter – and that is what I teach my students.
Similarly, Nunberg’s concern with intertextuality may not be unique to electronic texts. His claim that electronic texts have no boundaries holds some inherent truth if you qualify the medium as those digital texts that are dynamic, contain links, and are published under the most liberal Creative Commons licensing. Certainly a locked PDF distributed by a publishing company, or a Google Books image, does not follow this logic. The idea of “a domain where there can be intertextuality without transgression” is compelling, but I do not think it “rests on an anachronistic sense of the text that is carried over from our experience of print.” Most print-bound texts do in fact introduce the reader to many forms of intertextuality – footnotes, citations, references, allusions, definitions, etc. Readers rarely encounter a text in isolation, and we educate our students to read with the aid of reference material. Whether I am reading his article in a paper-bound book, a Kindle version of the text, or a copy found on the web, I am still going to look up “propaedeutic”; it’s just that my Kindle does it with a tap of my finger and the website with two right-clicks.
I am most persuaded by Nunberg’s argument that new media technologies force us to examine the notion of content. In Nunberg’s estimation, this rests on our insufficient definition of “information,” indeed a difficult term that has not evolved to meet its modern usage. As Nunberg notes, the OED definition contrasts information with data, an obvious issue for any user of modern technology. Furthermore, citing Warren Weaver and Claude Shannon, “information must not be confused with meaning”; rather, we should understand “information as a property of a signal relative to an interpreter.” This leads us to the title phrase “The Information Age,” which indicates not the verb (to inform) but the abstract noun (information received). Nunberg attributes this shift to the difference between the seventeenth-century notion of information, which indicated published, not private, content, “a step that implies the commoditization of content that is central to the cultural role we ask information to play,” and the nineteenth-century view that “resituated the agency of instruction in text and its producers, and resituated the reader to the role of a passive consumer of content.” This leads to Michel de Certeau’s “public shaped by writing.” Information thus became associated with the dissemination of content through free exchange in a democratic society.
In this view, information is produced by society and its “instruments,” in Walter Benjamin’s sense of the word. One example is the “news,” or journalism; another is reference works. Nunberg writes, “Each after its fashion, these forms impose a particular registration on their content, with characteristic syntax and semantics, which in turn elicits a particular mode of reading from its consumers.” Interestingly, Nunberg questions the transferability of information, not from text to person or person to person, but from one medium to another (e.g., a novel to a comic book). This raises important questions concerning visualization, presentation, and access. It also draws the distinction between knowledge and information.
So my question is: what is lost or gained when we move a print text online? What new genres emerge? How can we view Nunberg’s argument in light of e-readers and similar applications for mobile devices? How does this change the nature of both information and the process of knowledge-making?
This past weekend, November 12th-14th, I attended THATcamp New England. THATcamp is a humanities and technology unconference attempting to subvert the traditional academic conference model in remarkably rebellious ways. THATcamp is free: free of keynote speakers, free of formal sessions, free of hierarchical labels, and free to attend. THATcamps take place in many cities all over the world, with the expectation that those interested in attending may do so with minimal travel and lodging costs. THATcamp is also extremely participant-driven. Every participant is expected to propose a topic to tackle during their particular THATcamp. Some camps have a set schedule, some have a narrow focus, and still others are extremely free-form. The 2010 THATcamp New England set up a blog where participants posted their session proposals in advance, and those proposals were then printed and voted on during breakfast the first morning. The schedule was then created in a Google Doc that everyone accessed via their laptops, tablets, or cell phones, and edits were made until everyone was content. This is another way THATcamp keeps overhead costs low: everyone brings their own technology, and the organizers do not print out snazzy pamphlets or paper abstracts. However, participants do still get a nametag and t-shirt, along with breakfast, lunch, and snacks. These perks are orchestrated by the local organizers, who are usually students such as Lincoln Mullen and Stephanie Cheney, and are funded through grants from the Mellon Foundation and the NEH (National Endowment for the Humanities). THATcamp New England took place at the Wentworth Institute of Technology in Boston, but THATcamp headquarters is the Center for History and New Media at George Mason University (the fine folks who brought us Zotero), and Amanda French is the coordinator (and my THATcampNE roomie).
The only pre-scheduled events at THATcampNE were the BootCamps. BootCamps are essentially training seminars for humanists interested in learning technological tools. Fortunately, I received a $500 micro-fellowship to participate in these BootCamps (THANK YOU). This allowed me to attend a THATcamp outside my region, which was very rewarding because I was able to reconnect with people I met at dh09, as well as meet new scholars representing some of the most prestigious schools in the country (after all, this is Boston). Also, I was able to hitch a ride with Lauren Klein, a student at the dissertation level of my program at CUNY. Our conversations in the car were extremely enlightening, and I thank Lauren for her guidance. One disappointing aspect of THATcampNE, pointed out over Vietnamese food Sunday afternoon, was the lack of prominent senior faculty members from the Boston area. However, the participants did include faculty, graduate students, archivists, librarians, instructional technologists, administrators, and people in many other alternative academic careers. The BootCamp instructors were extremely knowledgeable and well known in the digital humanities community. CUNY’s own Boone Gorges led a BootCamp on Anthologize, his one-week, one-tool WordPress application. I attended a BootCamp session on “Introduction to Programming” led by Dr. Julie Meloni, a post-doc at the University of Victoria, whom I know from the dh09 conference and from her prolific presence on Twitter. I own @jcmeloni’s book on HTML and CSS, because I was one of the first five followers to tweet her one day. Although reading papers or presenting PowerPoints is completely forbidden at THATcamp, Julie did use a slideshow to keep the pace of her 75-minute class. The true genius of Julie’s BootCamp was her ability to approach programming from the perspective of a humanist.
As Julie said in her session, “Humanities scholars are uniquely poised to ask the right questions on technology.” She explained the need for programming as a way of creating tools that “do things that take a long time or are difficult for humans to do,” and the need to learn programming languages as a way to communicate with programmers who can help us create the tools we “need and desire.” Even as someone who has done some basic programming, understanding programming languages in terms of semantics and syntax gave me a new vocabulary with which to present my ideas. Julie said to “ask the computer to act in a call and response way,” which for a professor of Harlem Renaissance literature was crystal clear. I also learned that in the ever-changing world of programming, PHP and Ruby are currently the languages preferred in the digital humanities community. Julie’s concluding remarks: “What do we want to do? Figure it out, ask a programmer, then ask NEH for funding and chocolate.”
The next BootCamp session was Text Encoding, led by Dr. Vika Zafrin, aka @veek, who happened to be my roommate at dh09. As Vika clarified at the beginning, XML is not a language; it is a framework. The most important concept Vika conveyed is that you can communicate any information you want through text encoding. Previously, I imagined the primary function of text encoding, such as OCR or TEI, to be searchability. If you have the word “madness” in a text, encoding that word would allow a user to find every text in a database or archive that contains it. However, encoding text with more than empirical information opens up a world of theoretical possibilities. My example was to encode Jerome McGann’s “The Rationale of Hypertext” and tag the name Tanselle with terms such as “author,” “male,” and “textual scholar,” but also as “opposition,” which would imply that I, as the encoder, align myself with McGann’s side of this theoretical debate. This would enable a literary scholar not only to create critical editions of texts that are easily accessible and affordable, but also to annotate them in a new space. There is quite literally a new layer to the text.
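A toy sketch of what that interpretive layer might look like in practice (the element and attribute names here are my own illustration, not actual TEI markup):

```python
# A minimal sketch of encoding an interpretive claim alongside a name,
# then querying it by machine. The markup below is hypothetical, not TEI.
import xml.etree.ElementTree as ET

encoded = """
<p>In this debate, <persName role="textual-scholar"
      stance="opposition">Tanselle</persName> answers
   <persName role="textual-scholar" stance="aligned">McGann</persName>.</p>
"""

root = ET.fromstring(encoded)

# A query can now target the interpretive layer, not just the character string:
opponents = [el.text for el in root.iter("persName")
             if el.get("stance") == "opposition"]
print(opponents)  # ['Tanselle']
```

The point is the one Vika made: the same mechanism that enables searching for “madness” can carry the encoder’s own scholarly position, readable by both humans and machines.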
The Semantic Web was a thread that ran through all of my THATcamp sessions. As Wikipedia explains, “Tim Berners-Lee defined the Semantic Web as ‘a web of data that can be processed directly and indirectly by machines.’ The key element is that the application in context will try to determine the meaning of the text or other data and then create connections for the user.” For me this holds the potential of connecting human-directed semantic tags with image recognition and machine-driven tools. For scholars, this capability holds the possibility of real collaboration across barriers of geography, discipline, and profession. For example, in the first session of my THATcamp experience, we explored how best to support DH research. This session was driven by archivists hoping to crowdsource ideas on how to make their collections not only accessible to the scholars who need them, but also to connect their information to other archives and databases so that archivists and researchers do not unnecessarily duplicate the same work. The discussion turned to RDF, the Resource Description Framework, as well as OWL, the Web Ontology Language. Two campers took the stage, plugged a personal MacBook into the projector, and began explaining how RDF works and possible applications for this technology in terms of archives and academic research. This is a perfect example of why the THATcamp model works so well: when a term or tool needs to be explained, someone steps in to demonstrate without insulting the audience or a presenter. However, significant questions were raised about programmatically discovered connections versus taxonomies and ontologies created by humans. What is trustworthy? How do we assess authenticity? For instance, academic databases that have relevancy ranks and “find articles like these” buttons create “fuzzy” connections between materials, yet novice academic researchers trust these connections, and the results can lead researchers to certain conclusions.
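To give a concrete sense of what the campers demonstrated, an RDF description in Turtle syntax might look something like this (the example.org names and the letter itself are hypothetical; dcterms is the real Dublin Core vocabulary):

```turtle
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ex:      <http://example.org/archive/> .

# Three machine-readable statements about one hypothetical archival item:
ex:letter42  dcterms:creator  ex:ZoraNealeHurston ;
             dcterms:subject  "Harlem Renaissance" ;
             dcterms:relation ex:letter17 .   # an item held in another archive
```

Because each statement is a simple subject-predicate-object triple, a machine can follow a link like dcterms:relation across collections, which is exactly how two archives could connect their holdings without duplicating each other’s cataloging work.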
However, as one camper pointed out, this is not just a technical problem; historians always have to question the authenticity of a source, examining its bias, trustworthiness, etc. The group then broke into smaller groups interested in different facets of this debate, another positive aspect of the unconference model.
THATcamp sessions tend to follow a pattern: first participants present ideas, then questions are raised about the role of technology, the limitations of technological tools, and the access users have to the technology, which is followed by brainstorming solutions or sharing anecdotal success and failure stories relevant to the topic. I attended sessions on “The Book and Monograph Re-mixed,” “The Paperless Professor,” and “Information Overload.” You can follow the entire conversation on Twitter here: http://bit.ly/cvKy2d. Twitter is an essential part of THATcamp. Most participants tweet key ideas and questions throughout every session so that those who are attending other sessions can stay informed, and so that the greater twitterverse can participate in THATcamp. As I write this blog post, I am contributing to a conversation on composition and computation emerging from the twitterstream of THATcamp Chicago.
I would like to wrap up by posting some of the questions raised in the sessions I attended at THATcampNE:
What can e-books do for us in the classroom?
Is having a paperless class punitive in institutions where students do not have equal access to technology?
How can we assess digital literacy in our students? How can we best address the needs of students who have never had access to information technology?
How do we address the needs of transfer students, continuing education students, and older faculty who do not have experience working with the digital tools that are increasingly becoming an integral part of higher education?
Do hypertext or multimedia research papers take away from writing and academic research skills?
What are the best tools for peer review?
How do we get other faculty members to accept and implement educational technology?
What is the future of the monograph? Will the monograph change in response to online scholarly journals and the creation of new digital tools? How will this affect the dissertation and tenure application process?
These are not new questions, nor are they easy to answer. However, I have a new perspective on these topics after participating in THATcamp. Thank you to everyone who was involved in making my first THATcamp experience a memorable one.