Alexander on Multiple Literacies

Alexander, Bryan. 2008. “Web 2.0 and Emergent Multiliteracies.” Theory Into Practice 47, no. 2: 150-160. ISSN 0040-5841.

Although it already seems dated (was 2008 really that long ago?), this article presents a solid overview of the many ways in which students write and create online – their “multiple literacies,” as the title suggests. In the epigraph, Alexander refers to one of my favorite Kathleen Blake Yancey points: “Note that no one is making anyone do any of this writing” (from Yancey, K. B. (2004). “Made not only in words: Composition in a new key.” CCC, 56, 297-328). Despite the trend of using online composing in schools (where you can argue someone is making them write), the truth remains that people of all ages are composing text messages, Facebook posts, tweets, and blog posts every day outside of any institutional framework. This is important for instructors who work in institutions where many students claim never to have written an essay or read a book before college (my CC composition students frequently write this in diagnostic essays – I realize this does not mean teachers have not assigned these tasks…).

In explicating how to integrate these multiple literacies in the classroom, Alexander does an excellent job defining terms from both a historical and a technical perspective. For instance, in “What is Web 2.0?” Alexander reminds us that the term originally appeared in Tim O’Reilly’s 2005 post of the same name, and that to qualify as Web 2.0 (versus 1.0) “projects must abide by a fairly coherent set of digital strategies,” which include social software or social networking and microcontent in conjunction with openness and social filtering. To further define the first of these terms, Alexander conjures Licklider’s “dream of using networked computing to connect people and boost their knowledge and ability to learn,” as well as a long list of now obsolescent social networks. I also find Alexander’s definition of microcontent useful in that it is about small posts, not entire pages, but I am not sure I agree that they are small in terms of user effort. For social networking sites Alexander refers to, such as Flickr or Facebook, the effort is indeed minimal. However, while you don’t need to “build a page layout, design menus” to blog, most bloggers do design their blog space – and, as is true for this blog, many posts involve a considerable amount of intellectual labor as well. (Alexander 152)

Perhaps most useful to me is Alexander’s discussion of social filtering: “Drawing on the wisdom of the crowds, users contribute content to the work of others, leading to multiple-authored works, whose authorship grows over time” (153). This “digital strategy” includes commenting on, tagging, and distributing microcontent. In building commenting and tagging features into my course blogs and my digital project the Writing Studies Tree, I hope to harness this exact notion – collective wisdom. Within these very different digital spaces, my expectations of social filtering are also different: in the context of an undergraduate course blog, I want the students to learn from each other by commenting on each other’s drafts and discussion posts, and tagging exists to show patterns between texts (both the course material and student work), whereas on the Writing Studies Tree, I hope to increase searchability and enhance the filtering on my visualizations by encouraging a folksonomy, and commenting primarily exists to gather information that cannot be represented visually. In both cases the interactivity generates new knowledge. I am very interested in studying the results of that knowledge creation process (vis-à-vis the process itself). As the title of this article suggests, I think the ability to create and understand these new forms of writing within digital spaces is a vital skill, one that needs to be addressed in our system of education.

Note on future uses of this article:

For my research project on the Writing Studies Tree tagging feature, I will use Alexander’s distinction that “folksonomies consist of single words (FOOTNOTE in my case this could be titles or phrases) that users choose and apply to microcontent. In contrast, traditional metadata is usually hierarchical (topics nested within topics), structured (traditional library sanctioned standards such as Dublin Core), and predetermined by content authorities (bibliographers, catalogers)” (153-4). This is an important distinction because I am looking to the contributors as the experts in the field, whose knowledge will become apparent through the use of the site.
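Alexander’s flat-versus-hierarchical distinction can be sketched in a few lines of code. The tags, entry names, and nesting below are purely hypothetical illustrations, not the Writing Studies Tree’s actual schema or vocabulary:

```python
# Folksonomy: contributors attach free-form labels directly to items;
# the vocabulary emerges from use rather than being fixed in advance.
folksonomy = {
    "entry-42": {"mentorship", "CUNY", "rhet/comp"},      # tags chosen by users
    "entry-43": {"CUNY", "dissertation advising"},
}

# Traditional metadata: topics nested within topics, predetermined
# by a content authority (in the spirit of a controlled vocabulary).
controlled_vocabulary = {
    "Education": {
        "Higher Education": ["Mentorship", "Advising"],
    },
}

def items_tagged(tag, tags_by_item):
    """Return the items a given user-applied tag filters to."""
    return sorted(item for item, tags in tags_by_item.items() if tag in tags)

print(items_tagged("CUNY", folksonomy))  # → ['entry-42', 'entry-43']
```

The point of the sketch is that filtering a folksonomy is a flat lookup across whatever labels contributors happened to choose, while the controlled vocabulary requires knowing the authority’s hierarchy before you can classify or retrieve anything.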

Drucker on Humanistic Theory and Digital Scholarship

Drucker, Johanna. “Humanistic Theory and Digital Scholarship.” Debates in the Digital Humanities. Matthew K. Gold, ed. U of Minnesota P, 2012.

This contribution by Johanna Drucker opens with two very pointed questions for the DH world: 1) “are [humanists] actually doing anything different or just extending the activities that have always been their core concerns, enabled by advantages of networked digital technology?” and 2) “have the humanities made any impact on the digital environment? Can we create graphical interfaces and digital platforms from humanistic models?” (85).

I am particularly interested in these questions because I am co-designing a platform specifically intended to aid humanities research – the Writing Studies Tree. Putting a research question, or problem, first, and designing to meet that goal, seems to me the correct order – and it also addresses Drucker’s inquiry. However, Drucker points out that “protocols for information visualization, geospatial representation, and other research instruments have been absorbed from disciplines whose epistemological foundations and fundamental values are at odds with, or even hostile to, the humanities” (85-86). Data in the humanities tends to rely on context and interpretation – it does not stand alone as purely quantitative. For instance, in the Writing Studies Tree we are visualizing relationships between people and institutions, but these visualizations only show the quantitative data concerning dates, locations, and titles (mentor, chair, instructor, student); they do not, and in my opinion should not, show the qualitative data concerning the nature of those relationships. When we presented this tool to our colleagues at the Conference on College Composition and Communication, audience members wanted the program to show “negative” relationships, for example a falling out between an adviser and advisee. This subjective information can be visualized (I jokingly suggested changing the colored line to look like yellow-and-black striped caution tape), but should it be? How does representing those emotional or affective descriptions advance research in the field of writing studies?

When Drucker claims “the ideology of almost all current information visualization is anathema to humanistic thought,” it seems she is assuming that the humanities have no interest in quantifiable data, nor any inclination to question the presentation of “what is.” To be honest, I think she is playing devil’s advocate here, but it is clear that humanists do grapple with quantifiable data in many fields, especially history, art history, and classics, where dates, times, locations, and quantities (numbers of people, words, paintings, books, etc.) are continuously vital to their work. And, as Drucker rightfully points out, after Father Busa’s first wave of concordance work, many humanities scholars are successfully “counting, sorting, searching, and finding nonambiguous instances of discrete and identifiable string of information coded in digital form” (87). Franco Moretti’s work is probably both the best and the most well-known example of this work in literary studies. Also, coming from a background in composition and rhetoric, I found training in analyzing visual rhetoric to be a critical part of my education, and it is now an objective of my teaching. I do not think it is a fair criticism of humanities scholars that we do not analyze maps, graphs, charts, and images; however, I do agree that we fail to train emerging scholars to create accurate representations in various media. This is why I fully embrace a constructivist approach to teaching and learning; in fact, it is the foundation of my teaching philosophy. I believe the rise of the digital humanities ethos and the availability of open-access (free as in beer and speech) tools have encouraged others to concentrate on pedagogy based on making/building as well. But I still see a lack of training in humanities graduate programs in collecting, visualizing, and reporting data accurately.

Drucker’s vision of this training reimagines traditional forms of visualization, in particular maps, to reveal the cultural regimes that constructed our knowledge of those places, spaces, and events. Although I cannot fully comprehend how this vision would materialize, I am convinced by her point: “without minimizing the complexity involved in realizations and aware of the risk of cartoonish special effects, my point is that by not addressing these possibilities we cede the virtual ground to a design sensibility based in engineering and games” (92). Along with Drucker, I wonder how the humanities can create new forms of digital scholarship that push the boundaries of how and why we can and should be building and making.

Gardiner and Musto on The Electronic Book

Gardiner, Eileen, and Ronald G. Musto. “The Electronic Book.” The Oxford Companion to the Book. Michael F. Suarez, S.J., and H. R. Woudhuysen, eds. Vol. 1. Oxford UP. Web.

In defining the e-book, Gardiner and Musto write, “The e-book is a young medium and its definition is a work in progress, emerging from the history of the print book and evolving technology. In this context it is less useful to consider the book as object-particularly as commercial object-than to view it as cultural practice, with the e-book as one manifestation of this practice” (164). I appreciate this distinction because it directs the reader away from the now tired arguments about the e-book “killing” the print book. I continue to hope it is clear that digital publication formats are not replacing printed texts; however, this fallacy seems to live on in both popular and scholarly debates. Gardiner and Musto’s definition highlights the effects of digital publication on the act of reading, rather than on the physical objects (be it paper and ink or PDAs).

That said, this article also provides a history of electronic publishing that, while reliant on discussions of medium, serves as a useful reminder that the digital book has been evolving for almost 50 years now. In fact, the history begins with Vannevar Bush’s prophetic description of the Memex in 1945 and Ted Nelson’s introduction of the term “hypertext” in 1965. An important date to remember is the 1968 “Mother of All Demos,” during which Douglas Engelbart demonstrated e-mail, teleconferencing, videoconferencing, and the mouse. Gardiner and Musto note “[m]ost importantly for the future of the book, it demonstrated hypertext and introduced the ‘paper paradigm’, which embodied the current standard experience of a computer: windows, black text on white background, files, folders, and a desktop” (165). A convenient timeline of the history of hypertext can be found here: http://www.useit.com/papers/hypertext-history/. One interesting project on this timeline is If Monks Had Macs: “The choices in If Monks Had Macs … were prophetic: its metaphors of pre-print MS, early print, and marginal publishing not only introduced a new medium, but also set the intellectual and cultural paradoxes within which the e-book still operates: an essentially nonlinear, multiple medium that most readers and producers approach with the cultural apparatus developed for the *codex. It was also both retrospective and prescient in terms of production and distribution: like early print, it was produced and distributed outside the mainstream of academic and large business institutions” (Gardiner and Musto 165).

Initially this project was on CD-ROM; it is now housed here: http://rivertext.com/monks.html (although many of the links are broken). Although there are a few significant projects designed for CD-ROM – including, though not mentioned in the article, the American History Project by Steve Brier, Director of the ITP program at the Graduate Center – it was the invention of the World Wide Web that vastly increased the prevalence of e-books.

With the growth of personal computers and the Internet came libraries of electronic texts. One of my favorite resources, Project Gutenberg, was the forerunner in this pursuit. It began with students typing in texts by hand (can you imagine typing out an entire Victorian novel?!), but it now uses OCR and currently offers 36,000 free online texts in multiple formats. By 2003 almost all texts were born digital, even if printed for distribution and consumption. This shift led to the rise of dedicated e-readers (Kindle, Nook, Sony Reader) and e-reading applications for PDAs and mobile devices. Unfortunately, “[t]o accomplish this conversion, most enterprises relied either on automated digitization or on the low wages and long working hours of thousands of centres in the Global South (primarily in India), where an enormous new workforce could produce encoded text, images, and links. This raised ethical and economic issues that were virtually nonexistent for the print book” (Gardiner and Musto 166).

The potential of the e-book lies in what makes it different from the print book: hyperlinking and coding. As Robert Darnton described in 1999, the e-book is multilayered, giving the reader access to potentially infinite information as they interact with the text. I believe these capabilities should be the focus of scholarly debate at this point: not whether e-books should be used for academic purposes, but how to improve digital texts to maximize their usefulness in scholarly pursuits. For example, one of the drawbacks to e-books, as I see it, is inconsistent or non-existent pagination – mainly for the purpose of formal citation, although I question the relevancy of those antiquated models anyway. But this problem also affects the readers’ ability to navigate the text – it is difficult to go to a specific page or direct other readers to a specific passage when discussing the text in a group. Also, as famously demonstrated in the ironic case of Orwell’s 1984, I have serious concerns about the ownership of digital material purchased through proprietary vendors such as Amazon, which can be rescinded without the permission of the consumer. But with these concerns come conveniences. More people have access to more texts. This is a point that should not be undervalued. And the portability and adaptability of e-reading platforms mean more people are reading more often. For use in academia, this gives students access to texts for all of their classes in multiple places (e-readers, cell phones, laptops, cloud storage), and all of their notes and marks are synced across platforms. One interesting development that I consider to have great potential is that these marks and notes can be shared – like picking up a used book and seeing what everyone else who has read that book highlighted and scribbled in the margin. In my Kindle version of John Dewey’s Experience and Education, one sentence is marked as “highlighted by 46 users” in the text.
This helps novice readers and researchers understand the annotation process and presents reading as a communal, albeit asynchronous, activity. These developments should not be viewed as a threat to the traditional book; they should be heralded as progress toward a more literate public.

Nunberg on The Information Age

Nunberg, Geoffrey. “Farewell to the Information Age.” From The Future of the Book. U of California P, 1996.

I agree with Nunberg’s founding claim here: discussions concerning the future of the book are plagued by “misapprehensions.” The general public, spurred on by journals and critics, seems convinced that new media technologies are causing the death of the book. But as Nunberg argues, although some print-based forms will be transferred entirely online – and for good reason – there is no evidence that the printed book is at risk. For example, works published in many volumes, such as scholarly journals, or those that are printed daily and rely heavily on the prominence of advertising, such as newspapers, seem to function better online. As electronic texts, these genres are easy to search, index, archive, and, most importantly, access, unlike their paper-bound predecessors. These are not arguments original to Nunberg. What I am interested in is how he supports the claim that “There will be a digital revolution, but the printed book will be an important participant in it” and that “the introduction of these media is bound to be accompanied by sweeping changes…including the relation between the author and reader, the nature of the public, the conception of intellectual property, and the nature of the text itself.” I do not see ample evidence for the former in this excerpt, but there is a slew of interesting support for the latter. For instance, I think Nunberg is right on when he criticizes theorists for applying “old media” terminology and functions to new media formats – such as “author” and “publication.” Neither of these concepts transfers interchangeably into the digital realm. This is obvious to anyone who has worked on a blog – those who write posts are given the role of “author,” and in order to post their writing they click the button labeled “publish.” This is clearly not the same process as publishing a paperbound book (although in some cases it will reach a greater audience; see Nunberg’s example of the tenure case).
Of course, this is why born-digital texts are rightfully the target of suspicion. Without the policing agents involved in traditional publication, how can we be sure the texts we read online are authentic, reliable, authoritative… and protected under copyright? Obviously these are legitimate concerns, but are they exclusive to born-digital texts? I would argue these are the same concerns we must address with every text we encounter… and that is what I teach my students.

Similarly, Nunberg’s concern with intertextuality may not be unique to electronic texts. His claim that electronic texts have no boundaries holds some inherent truth if you qualify the medium as those digital texts that are dynamic, contain links, and are published under the most liberal Creative Commons licensing. Certainly a locked PDF distributed by a publishing company, or a Google Books image, does not follow this logic. The idea of “a domain where there can be intertextuality without transgression” is compelling, but I do not think it “rests on an anachronistic sense of the text that is carried over from our experience of print.” Most print-bound texts do in fact introduce the reader to many forms of intertextuality – footnotes, citations, references, allusions, definitions, etc. Readers rarely encounter a text in isolation, and we educate our students to read with the aid of reference material. Whether I am reading his article in a paper-bound book, a Kindle version of the text, or a copy found on the web, I am still going to look up “propaedeutic”; it’s just that my Kindle does it with a tap of my finger and the website with two right-clicks.

I am most persuaded by Nunberg’s argument that new media technologies force us to examine the notion of content. In Nunberg’s estimate, this rests on our insufficient definition of “information” – indeed a difficult term that has not evolved to meet its modern usage. As Nunberg notes, the OED definition contrasts information with data, an obvious issue for any user of modern technology. Furthermore, citing Warren Weaver and Claude Shannon, he reminds us that “information must not be confused with meaning,” describing instead “information as a property of a signal relative to an interpreter.” This leads us to the title phrase “The Information Age,” which treats information not as an action (to inform) but as an abstract noun (content to be received). Nunberg attributes this shift to the difference between the seventeenth-century notion of information, which indicated published, not private, content – “a step that implies the commoditization of content that is central to the cultural role we ask information to play” – and the nineteenth-century view that “resituated the agency of instruction in text and its producers, and resituated the reader to the role of a passive consumer of content.” This leads to Michel de Certeau’s “public shaped by writing.” Information thereby became associated with the dissemination of content through free exchange in a democratic society.

In this view, information is produced by society and its “instruments,” in Walter Benjamin’s sense of the word. One example of this is the “news,” or journalism; another is reference works. Nunberg writes, “Each after its fashion, these forms impose a particular registration on their content, with characteristic syntax and semantics, which in turn elicits a particular mode of reading from its consumers.” Interestingly, Nunberg questions the transferability of information not from text to person, or person to person, but from one medium to another (e.g., from a novel to a comic book). This raises important questions concerning visualization, presentation, and access. It also draws the distinction between knowledge and information.

So my question is what is lost or gained when we move a print text online? What new genres emerge? How can we view Nunberg’s argument in light of e-readers and similar applications for mobile devices? How does this change the nature of both information and the process of knowledge-making?