Alexander on Multiple Literacies

Alexander, Bryan. 2008. “Web 2.0 and Emergent Multiliteracies.” Theory Into Practice 47, no. 2: 150–160. ISSN 0040-5841.

Although it already seems dated (was 2008 really that long ago?), this article presents a solid overview of the many ways in which students write and create online – their “multiple literacies,” as the title suggests. In the epigraph, Alexander refers to one of my favorite Kathleen Blake Yancey points: “Note that no one is making anyone do any of this writing” (from Yancey, K. B. 2004. “Made Not Only in Words: Composition in a New Key.” College Composition and Communication 56: 297–328). Despite the trend of using online composing in schools (where you can argue someone is making them write), the truth remains that people of all ages are composing text messages, Facebook posts, tweets, and blog posts every day outside of any institutional framework. This is important for instructors who work in institutions where many students claim never to have written an essay or read a book prior to college (my CC composition students frequently write this in diagnostic essays – I realize this does not mean teachers have not assigned these tasks…).

In explicating how to integrate these multiple literacies in the classroom, Alexander does an excellent job defining terms from both a historical and a technical perspective. For instance, in “What is Web 2.0?” Alexander reminds us that the term originally appeared in Tim O’Reilly’s 2005 post of the same name, and that to qualify as Web 2.0 (versus 1.0) “projects must abide by a fairly coherent set of digital strategies,” which include social software or social networking and microcontent in conjunction with openness and social filtering. To further define the first of these terms, Alexander conjures Licklider’s “dream of using networked computing to connect people and boost their knowledge and ability to learn,” as well as a long list of now-obsolete social networks. I also find Alexander’s definition of microcontent useful in that it is about small posts, not entire pages, but I am not sure I agree that they are small in terms of user effort. For the social networking sites Alexander refers to, such as Flickr or Facebook, the effort is indeed minimal. However, while you don’t need to “build a page layout, design menus” to blog, most bloggers do design their blog space – and, as is true for this blog, many posts involve a considerable amount of intellectual labor as well (Alexander 152).

Perhaps most useful to me is Alexander’s discussion of social filtering: “Drawing on the wisdom of the crowds, users contribute content to the work of others, leading to multiple-authored works, whose authorship grows over time” (153). This “digital strategy” includes commenting on, tagging, and distributing microcontent. In building commenting and tagging features into my course blogs and my digital project, the Writing Studies Tree, I hope to harness this exact notion – collective wisdom. Within these very different digital spaces my expectations of social filtering are also different. In the context of an undergraduate course blog, I want the students to learn from each other by commenting on each other’s drafts and discussion posts, and tagging exists to show patterns between texts (both the course material and student work). On the Writing Studies Tree, by contrast, I hope to increase searchability and enhance the filtering on my visualizations by encouraging a folksonomy, and commenting primarily exists to gather information that cannot be represented visually. In both cases the interactivity generates new knowledge. I am very interested in studying the results of that knowledge-creation process (vis-à-vis the process itself). As the title of this article suggests, I think the ability to create and understand these new forms of writing within digital spaces is a vital skill, one that needs to be addressed in our system of education.
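To make concrete how tagging can surface patterns between texts, here is a minimal sketch of tag co-occurrence counting in Python. The post IDs and tags are hypothetical, and this is not the actual implementation behind my course blogs or the Writing Studies Tree:

```python
from collections import Counter
from itertools import combinations

# Hypothetical posts mapped to the tags users applied to them.
posts = {
    "draft-1": {"revision", "audience", "thesis"},
    "draft-2": {"revision", "evidence"},
    "discussion-3": {"audience", "evidence", "thesis"},
}

# Count how often each pair of tags appears on the same post;
# frequently co-occurring pairs suggest patterns across the texts.
co_occurrence = Counter()
for tags in posts.values():
    for pair in combinations(sorted(tags), 2):
        co_occurrence[pair] += 1

for pair, count in co_occurrence.most_common(3):
    print(pair, count)
```

Even this crude count shows how user-applied microcontent, once aggregated, becomes a form of social filtering: no single tagger sees the pattern, but the collection reveals it.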

Note on future uses of this article:

For my research project on the Writing Studies Tree tagging feature, I will use Alexander’s distinction that “folksonomies consist of single words [in my case these could be titles or phrases] that users choose and apply to microcontent. In contrast, traditional metadata is usually hierarchical (topics nested within topics), structured (traditional library sanctioned standards such as Dublin Core), and predetermined by content authorities (bibliographers, catalogers)” (153–4). This is an important distinction because I am looking to the contributors as the experts in the field, whose knowledge will become apparent through the use of the site.
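To illustrate the distinction in data-structure terms, here is a hypothetical sketch (not the Writing Studies Tree’s actual schema): a folksonomy is a flat, open-ended set of user-chosen labels, while traditional metadata nests predetermined values inside a hierarchy:

```python
# Folksonomy: flat labels attached directly to an entry; contributors
# may invent any tag (in my case, a title or phrase would also count).
folksonomy = {
    "entry-42": ["writing center director", "WAC", "first-generation student"],
}

# Traditional metadata: hierarchical (topics nested within topics),
# structured, and predetermined by content authorities, in the spirit
# of library standards such as Dublin Core.
catalog_record = {
    "subject": {
        "discipline": "Writing Studies",
        "subfield": {"name": "Writing Across the Curriculum"},
    },
    "contributor": {"role": "advisor"},
}
```

The folksonomy grows from whatever vocabulary contributors actually use, which is exactly why it can reveal the field’s own expert knowledge rather than a cataloger’s categories.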

Drucker on Humanistic Theory and Digital Scholarship

Drucker, Johanna. “Humanistic Theory and Digital Scholarship.” Debates in the Digital Humanities. Ed. Matthew K. Gold. U of Minnesota P, 2012.

This contribution by Johanna Drucker opens with two very pointed questions for the DH world: 1) “are [humanists] actually doing anything different or just extending the activities that have always been their core concerns, enabled by advantages of networked digital technology?” and 2) “have the humanities made any impact on the digital environment? Can we create graphical interfaces and digital platforms from humanistic models?” (85).

I am particularly interested in these questions because I am co-designing a platform specifically intended to aid in humanities research – the Writing Studies Tree. Putting a research question, or problem, first, and designing to meet this goal, seems to me the correct order – and also addresses Drucker’s inquiry. However, Drucker points out that “protocols for information visualization, geospatial representation, and other research instruments have been absorbed from disciplines whose epistemological foundations and fundamental values are at odds with, or even hostile to, the humanities” (85–86). Data in the humanities tends to rely on context and interpretation – it does not stand alone as purely quantitative. For instance, in the Writing Studies Tree we are visualizing relationships between people and institutions, but these visualizations only show the quantitative data concerning dates, locations, and titles (mentor, chair, instructor, student); they do not, and in my opinion should not, show the qualitative data concerning the nature of those relationships. When we presented this tool to our colleagues at the Conference on College Composition and Communication, audience members wanted the program to show “negative” relationships, for example a falling out between an adviser and advisee. This subjective information can be visualized (I jokingly suggested changing the colored line to look like yellow-and-black-striped caution tape), but should it be? How does representing those emotional or affective descriptions advance research in the field of writing studies?

When Drucker claims “the ideology of almost all current information visualization is anathema to humanistic thought,” it seems she is assuming that the humanities have no interest in quantifiable data, nor question the presentation of “what is.” To be honest, I think she is playing devil’s advocate here, but it is clear that humanists do grapple with quantifiable data in many fields, especially history, art history, and classics, where dates, times, locations, and quantities (numbers of people, words, paintings, books, etc.) are continuously vital to their work. And, as Drucker rightfully points out, after Father Busa’s first wave of concordance work, many humanities scholars are successfully “counting, sorting, searching, and finding nonambiguous instances of discrete and identifiable string of information coded in digital form” (87). Franco Moretti’s work is probably both the best and the most well-known example of this work in literary studies. Also, coming from a background in composition and rhetoric, training in analyzing visual rhetoric was a critical part of my education and is now an objective of my teaching. I do not think it is a fair criticism of humanities scholars that we do not analyze maps, graphs, charts, and images; however, I do agree that we fail to train emerging scholars to create accurate representations in various media. This is why I fully embrace a constructivist approach to teaching and learning; in fact, it is the foundation of my teaching philosophy. I believe the rise of the digital humanities ethos and the availability of open-access (free as in beer and speech) tools have encouraged others to concentrate on pedagogy based on making/building as well. But I still see a lack of training in graduate programs in the humanities to collect, visualize, and report data accurately.

Drucker’s vision of this training reimagines traditional forms of visualization, in particular maps, to reveal the cultural regimes that constructed our knowledge of those places, spaces, and events. Although I cannot fully comprehend how this vision would materialize, I am convinced by her argument: “without minimizing the complexity involved in realizations and aware of the risk of cartoonish special effects, my point is that by not addressing these possibilities we cede the virtual ground to a design sensibility based in engineering and games” (92). Along with Drucker, I wonder how the humanities can create new forms of digital scholarship that push the boundaries of how and why we can and should be building and making.

Gardiner and Musto on The Electronic Book

Gardiner, Eileen, and Ronald G. Musto. “The Electronic Book.” The Oxford Companion to the Book. Michael F. Suarez, S.J., and H. R. Woudhuysen, eds. Vol. 1. Oxford UP. Web.

In defining the e-book, Gardiner and Musto write: “The e-book is a young medium and its definition is a work in progress, emerging from the history of the print book and evolving technology. In this context it is less useful to consider the book as object – particularly as commercial object – than to view it as cultural practice, with the e-book as one manifestation of this practice” (164). I appreciate this distinction because it directs the reader away from the now tired arguments about the e-book “killing” the print book. I continue to hope it is clear that digital publication formats are not replacing printed texts; however, this fallacy seems to live on in both popular and scholarly debates. Gardiner and Musto’s definition highlights the effects of digital publication on the act of reading, rather than on the physical objects (be they paper and ink or PDAs).

That said, this article also provides a history of electronic publishing that, while reliant on discussions of medium, serves as a useful reminder that the digital book has been evolving for almost 50 years now. In fact, the history begins with Vannevar Bush’s prophetic description of the Memex in 1945 and Ted Nelson’s introduction of the term “hypertext” in 1965. An important date to remember is the 1968 “Mother of All Demos,” during which Douglas Engelbart demonstrated e-mail, teleconferencing, videoconferencing, and the mouse. Gardiner and Musto note that “[m]ost importantly for the future of the book, it demonstrated hypertext and introduced the ‘paper paradigm’, which embodied the current standard experience of a computer: windows, black text on white background, files, folders, and a desktop” (165). A convenient timeline of the history of hypertext can be found here: http://www.useit.com/papers/hypertext-history/. One interesting project on this timeline is If Monks Had Macs: “The choices in If Monks Had Macs … were prophetic: its metaphors of pre-print MS, early print, and marginal publishing not only introduced a new medium, but also set the intellectual and cultural paradoxes within which the e-book still operates: an essentially nonlinear, multiple medium that most readers and producers approach with the cultural apparatus developed for the *codex. It was also both retrospective and prescient in terms of production and distribution: like early print, it was produced and distributed outside the mainstream of academic and large business institutions” (Gardiner and Musto 165).

Initially this project was on CD-ROM; it is now housed here: http://rivertext.com/monks.html (although many of the links are broken). Although there are a few significant projects designed for CD-ROM – including, though not mentioned in the article, the American History Project by Steve Brier, Director of the ITP program at the Graduate Center – the invention of the World Wide Web vastly increased the prevalence of e-books.

With the growth of personal computers and the Internet came libraries of electronic texts. One of my favorite resources, Project Gutenberg, was the forerunner in this pursuit. It began with students typing in texts by hand (can you imagine typing out an entire Victorian novel?!), but now uses OCR and currently offers 36,000 free online texts in multiple formats. By 2003 almost all texts were born digital, even if printed for distribution and consumption. This shift led to the rise of dedicated e-readers (Kindle, Nook, Sony Reader) and e-reading applications for PDAs and mobile devices. Unfortunately, “[t]o accomplish this conversion, most enterprises relied either on automated digitization or on the low wages and long working hours of thousands of centres in the Global South (primarily in India), where an enormous new workforce could produce encoded text, images, and links. This raised ethical and economic issues that were virtually nonexistent for the print book” (Gardiner and Musto 166).

The potential of the e-book lies in what makes it different from the print book: hyperlinking and coding. As Robert Darnton described in 1999, the e-book is multilayered, giving the reader access to potentially infinite information as they interact with the text. I believe these capabilities should be the focus of scholarly debate at this point: not whether e-books should be used for academic purposes, but how to improve digital texts to maximize their usefulness in scholarly pursuits. For example, one of the drawbacks to e-books as I see it is inconsistent or nonexistent pagination – mainly for the purpose of formal citation, although I question the relevance of these antiquated models anyway. But this problem also affects the reader’s ability to navigate the text – it is difficult to go to a specific page or direct other readers to a specific passage when discussing the text in a group. Also, as famously demonstrated in the ironic case of Orwell’s 1984, I have serious concerns about the ownership of digital material purchased through proprietary vendors such as Amazon, which can be rescinded without the permission of the consumer. But with these concerns come conveniences. More people have access to more texts. This is a point that should not be undervalued. And the portability and adaptability of e-reading platforms mean more people are reading more often. For use in academia, this gives students access to texts for all of their classes in multiple places (e-readers, cell phones, laptops, cloud storage), and all of their notes and marks are synced across platforms. One interesting development that I consider to have great potential is that these marks and notes can be shared – like picking up a used book and seeing what everyone else who has read that book highlighted and scribbled in the margin. In my Kindle version of John Dewey’s Experience and Education, one sentence is marked as “highlighted by 46 users.” This helps novice readers and researchers understand the annotation process and presents reading as a communal, albeit asynchronous, activity. These developments should not be viewed as a threat to the traditional book; they should be heralded as progress toward a more literate public.
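Because pagination is inconsistent or absent, shared e-book annotations have to be anchored to something more stable than a page number. One common approach, sketched hypothetically here (this is not Amazon’s actual format), is to key each highlight to character offsets in the text and then aggregate identical spans across readers:

```python
from collections import Counter

# Hypothetical highlights: each reader anchors a passage by character
# offsets into the text, since page numbers vary across devices.
highlights = [
    {"reader": "r1", "start": 1040, "end": 1102},
    {"reader": "r2", "start": 1040, "end": 1102},
    {"reader": "r3", "start": 2310, "end": 2388},
]

# Collapse identical spans to produce "highlighted by N users" counts.
counts = Counter((h["start"], h["end"]) for h in highlights)
for (start, end), n in counts.most_common():
    print(f"span {start}-{end}: highlighted by {n} users")
```

The same offset-based anchoring is what lets notes sync across an e-reader, a phone, and a laptop even though each renders different “pages.”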

Nunberg on The Information Age

Nunberg, Geoffrey. “Farewell to the Information Age.” The Future of the Book. Ed. Geoffrey Nunberg. U of California P, 1996.

I agree with Nunberg’s founding claim here: discussions concerning the future of the book are plagued by “misapprehensions.” The general public, spurred on by journals and critics, seems convinced that new media technologies are causing the death of the book. But as Nunberg argues, although some print-based forms will be transferred entirely online – and for good reason – there is no evidence that the printed book is at risk. For example, works published in many volumes, such as scholarly journals, or those that are printed daily and rely heavily on the prominence of advertising, such as newspapers, seem to function better online. As electronic texts these genres are easy to search, index, archive, and, most importantly, access, unlike their paper-bound predecessors. These are not arguments original to Nunberg. What I am interested in is how he supports the claims that “There will be a digital revolution, but the printed book will be an important participant in it” and that “the introduction of these media is bound to be accompanied by sweeping changes…including the relation between the author and reader, the nature of the public, the conception of intellectual property, and the nature of the text itself.” I do not see ample evidence for the former in this excerpt, but there is a slew of interesting support for the latter. For instance, I think Nunberg is right on when he criticizes theorists for applying “old media” terminology and functions to new media formats – such as “author” and “publication.” Neither of these concepts transfers interchangeably into the digital realm. This is easily apparent to anyone who has worked on a blog – those who write posts are given the role of “author,” and in order to post their writing they click the button labeled “publish.” This is clearly not the same process as publishing a paper-bound book (although in some cases it will reach a greater audience; see Nunberg’s example of the tenure case). Of course, this is why born-digital texts are rightfully the target of suspicion. Without the policing agents involved in traditional publication, how can we be sure the texts we read online are authentic, reliable, authoritative…and protected under copyright? Obviously these are legitimate concerns, but are they exclusive to born-digital texts? I would argue that these are the same concerns we must address with every text we encounter… and that is what I teach my students.

Similarly, Nunberg’s concern with intertextuality may not be unique to electronic texts. His claim that electronic texts have no boundaries holds some inherent truth if you qualify the medium as those digital texts that are dynamic, contain links, and are published under the most liberal Creative Commons licensing. Certainly a locked PDF distributed by a publishing company, or a Google Books image, does not follow this logic. The idea of “a domain where there can be intertextuality without transgression” is compelling, but I do not think it “rests on an anachronistic sense of the text that is carried over from our experience of print.” Most print-bound texts do in fact introduce the reader to many forms of intertextuality – footnotes, citations, references, allusions, definitions, etc. Readers rarely encounter a text in isolation, and we educate our students to read with the aid of reference material. Whether I am reading his article in a paper-bound book, a Kindle version of the text, or a copy found on the web, I am still going to look up “propaedeutic”; it’s just that my Kindle does it with a tap of my finger and the website with two right-clicks.

I am most persuaded by Nunberg’s argument that new media technologies force us to examine the notion of content. In Nunberg’s estimate, this rests on our inadequate definition of “information” – indeed, a difficult term that has not evolved to meet its modern usage. As Nunberg notes, the OED definition contrasts information with data, an obvious issue for any user of modern technology. Furthermore, citing Warren Weaver and Claude Shannon, “information must not be confused with meaning”; rather, we should think of “information as a property of a signal relative to an interpreter.” This leads us to the title phrase “The Information Age,” which indicates not a verb (to inform), but an abstract noun (information received). Nunberg attributes this shift to the difference between the seventeenth-century notion of information, which indicated published, not private, content, “a step that implies the commoditization of content that is central to the cultural role we ask information to play,” and the nineteenth-century view that “resituated the agency of instruction in text and its producers, and resituated the reader to the role of a passive consumer of content.” This leads to Michel de Certeau’s “public shaped by writing.” Information thus became associated with the dissemination of content through free exchange in a democratic society.

In this view, information is produced by society and its “instruments,” in Walter Benjamin’s sense of the word. One example of this is the “news,” or journalism; another is reference works. Nunberg writes: “Each after its fashion, these forms impose a particular registration on their content, with characteristic syntax and semantics, which in turn elicits a particular mode of reading from its consumers.” Interestingly, Nunberg questions the transferability of information, not from text to person, or person to person, but from one medium to another (e.g., a novel to a comic book). This raises important questions concerning visualization, presentation, and access. It also draws the distinction between knowledge and information.

So my question is what is lost or gained when we move a print text online? What new genres emerge? How can we view Nunberg’s argument in light of e-readers and similar applications for mobile devices? How does this change the nature of both information and the process of knowledge-making?

Turkle on Life on the Screen

Turkle, Sherry. Life on the Screen: Identity in the Age of the Internet. New York: Simon and Schuster, 1997. http://www.amazon.com/Life-Screen-Identity-Age-Internet/dp/0684833484

Are you a Mac or a PC?

http://www.youtube.com/watch?v=C5z0Ia5jDt4

Besides the brilliantly clever advertising campaign Apple launched asking this very question (with the costs passed directly to the customer), this question has a complicated history that could make you rethink your answer. As Turkle rightfully points out, the first personal computers of the 1970s engaged users with a more mechanical, technical inclination than the GUI- or simulation-driven models ubiquitous in today’s market (hacker vs. hobbyist). The hardware was designed so that these computers could be dismantled, manipulated, and rebuilt by the user. Furthermore, early IBMs and very early Apple models necessitated that the user write basic code – or commands – in order to execute functions. Even word processing programs in DOS (MS, PC, DR) required the user to frame their text in code. Throughout the evolution of these early models, this “transparency” and need to “see inside” remained. Turkle claims these computers “presented themselves as open, ‘transparent,’ potentially reducible to their underlying mechanisms. These were systems that invited users to imagine that they could understand its ‘gears’ as they turned, even if very few people ever tried to reach that level of understanding.” However, with the introduction of the Macintosh and its contemporaries, processing applications negated the need to understand how the hardware and software communicate to execute functions. As the efficiency of applications advanced, with increased ease of use, the hardware became more invisible and harder to access. Even the term “users” evolved with the changing landscape of simulation-based interfaces – a “user,” Turkle argues, becomes someone who is hands-on but “not interested in the technology except as it enables an application.” By the mid-1980s Apple users could no longer open the machine at all (only authorized personnel had access to the tool needed to break into the hardware).

I am sure you can see where I, via Turkle, am going with this: Apple = submission. Whereas the technology itself once encouraged us to build, manipulate, understand, and communicate, it now encourages us to consume, regurgitate, and conform. Turkle puts it beautifully: “[c]hanges in both technology and culture encourage certain styles of technology and of representing technology to dominate others.” We are easily dominated in this world of consumer technology. My only hope is that the resurgence of the open-access movement will help us advocate for access to the code, processes, and gears that allow us to understand how these machines work, and how to create new, custom machines that stimulate creativity rather than conformity. Take, for example, the Berlin Declaration: http://oa.mpg.de/lang/en-uk/berlin-prozess/berliner-erklarung/

I think it is the need to make things, to be creative, to communicate, that drives the desire to integrate technology into our lives and classrooms. It is the same reason we write. As Turkle expresses, “We paint, we work, we keep journals, we start companies, we build things that express the diversity of our personal and intellectual sensibilities. Yet the computer offers us new opportunities as a medium that embodies our ideas and expresses our diversity.” However, it is the desire to know how things work, to understand processes, and to understand the effects of our creations that leads us to be more than just users – we want to “see inside.” This is the same reason I study writing. And technology.

A few more notes that should lead to further discussion. The first is discipline specific, and the second is a general provocation:

Important questions are raised for those of us who write, and teach writing, by Turkle’s casual observations that:
1) What she once thought of as editing – cutting and pasting – is now, with the ease of computer software, simply part of writing.
2) When she wants to write, she waits until there is a computer around – she feels she must have a computer in order to write.
So my question is: how does composing in digital media, especially through applications such as word processors, change the way we write? For better or worse?

Turkle’s book is, as she claims, “about the intense relationships people have with computers and how these relationships are changing the way we think and feel” and her thesis can be summarized in this sentence, “Computers don’t just do things for us, they do things to us, including to our ways of thinking about ourselves and other people.” How has ubicom (ubiquitous computing) changed the way we think? If we no longer desire to know how things work, and we are content to let computers do things for us and to us, what will be the fate of our civilization?

For now, I sign-off as a proud (code-writing, program building, open-access advocating) PC.

Rietje van Vliet on “Print and Public in Europe 1600-1800”

Van Vliet, Rietje. “Print and Public in Europe 1600–1800.” In Eliot, Simon, and Jonathan Rose, eds. A Companion to the History of the Book. Oxford/Malden, MA: Blackwell, 2007. Z4 C73 2007; ISBN 978-1-4051-2765-3.

The rise of the print industry:
This chapter is framed by the claim that while bookstores – with title-page advertisements in the windows and shelves full of unbound books (in folios, quartos, octavos, and duodecimos) – remained the same throughout the seventeenth and eighteenth centuries in Europe, the book industry was growing and changing rapidly. This change occurred unevenly due to “the extent of urbanization, the Church’s influence, and the sovereign’s ambitions.” In the sixteenth century Venice was the center of the publication market, but that center transitioned to Spain, and then Russia, due to religious censorship. By the eighteenth century the Dutch Republic was the largest player in the international book trade due to its “political structure, enormous economic growth, excellent commercial relations, comparatively broad religious tolerance, and the presence of trade and investment capital.” However, in 1750 the French government under Louis XV loosened its censorship restrictions to allow the publishing industry to become economically competitive. At the same time, under Frederick the Great, Berlin grew to become the center of scholarly publishing. The sheer volume of books published in Germany was well ahead of the rest of Europe: in 1755, there were just 1,231 titles; in 1775, there were 2,025 titles; and by 1795 the number had risen to 3,368 titles.
The changing economy of the print industry:
The system of trade also shifted during this period. First, the rise of commission selling, in which publishers sent retailers books on commission, made it possible to stock more new titles more frequently. Also, up until this point it was customary for payments to be made in printed sheets of the same value, so printers were generally also publishers. That tradition gradually changed, however, with booksellers becoming publishers who contracted out their printing to countries with lower salaries and cheaper paper and ink. In fact, between 1751 and 1825 only 13.7% of publisher-booksellers did their own printing.
The role of the writer:
Pushing away from the tradition of dedicating their texts to a patron in order to receive remuneration for their work, writers began to seek publication rights in the seventeenth century. Up until this point publishers were free to do whatever they saw fit with copies of a text within their geographical boundaries. This created a system of piracy that, in many cases, was supported by the governing bodies. In some ways piracy prevented “monopolies, inadequate distribution, and high prices”; however, it also caused writers who intended to live off their writing to begin negotiating with publishers to ensure they had control over their work. This led some writers to print their work at their own expense, or to join together to establish collective printing companies.
The audience:
In The Structural Transformation of the Public Sphere, Jürgen Habermas defines the public sphere as “the medium by means of which private persons can debate in public. In doing so, they make use of a rich array of cultural media: reading societies, literary societies, learned societies, libraries, theaters, museums, coffee houses, salons, and so on. Free debate could happen orally, of course, but also in books, newspapers, and other periodicals.” The demand for these printed public forums spread with the rapidly increasing literacy rate. It is difficult for scholars to give accurate numbers regarding literacy rates in this period, but it is certain that they varied between countries, and that there were disparities between men and women, Catholics and Protestants. Yet, based on the volume and diversity of titles published, sociologists claim that the second half of the eighteenth century saw a reading revolution, especially in Germany, France, England, and Italy – all countries included in the Republic of Letters. It was also at this time that new genres entered the market, such as novels, encyclopedias, scholarly journals, and popular magazines. And new circulating libraries reached new demographics, including the middle and lower classes. Instead of a small portion of the population re-reading a few books intensely with the intention of memorization, a large portion of the population read books, magazines, and newspapers extensively. By the end of the eighteenth century every European country was familiar with a wide range of periodicals in the fields of fashion, literature, music, theater, and the fine arts, as well as having access to journals dedicated to special interests such as educational theory, technology, or physics.

Eliot and Rose on “North America and Transatlantic Book Culture to 1800”

“North America and Transatlantic Book Culture to 1800.” In Eliot, Simon, and Jonathan Rose, eds. A Companion to the History of the Book. Oxford/Malden, MA: Blackwell, 2007. Z4 C73 2007; ISBN 978-1-4051-2765-3.

Shortly after Gutenberg invented the printing press with movable type, the discovery of America was documented through this technology. For instance, the account of arguably the most famous voyage to the “new world,” Epistola Christofori Colom, was reprinted eleven times and published in multiple European cities. Travel narratives such as Hakluyt’s three-part Principall Navigations, Voiages and Discoveries of the English Nation served as templates for a new genre of popular literature. These texts not only described the perils of transatlantic exploration, they also gave a coveted glimpse of the landscape and native cultures that defined this wild frontier. These travel narratives continued to be popular throughout the 1700s and were often published in serial monthly installments in colonial newspapers and journals. Some of these tales, such as Antonio de Solís’s Historia de la conquista de México, pushed the boundaries of nonfiction to the point of being “like novels.” (Perhaps this is why the term “history” was repurposed in the eighteenth century by authors pioneering the generic form we now call the “novel.”)

These narratives were followed by more scientific bibliographic guides such as Bibliotheca Americana and European Americana: A Chronological Guide to Works Printed in Europe Relating to the Americas. The North American Imprints Program (NAIP) is now trying to create a digital archive of these records. These records allow us to compare which texts were reprinted in both Britain and America, and to question why others remained contained in one country.

Printing in the New World was established in Mexico City in 1539. In 1638 a press was established in Cambridge, MA, where the famed Bay Psalm Book became the first book printed in British North America. After this, the print industry developed rapidly. By the end of the seventeenth century there were permanent presses in all of the major colonial cities, and by 1740 five printers in Boston alone were issuing their own newspapers: “More than any other factor, the rise of newspapers changed the nature of printing in the British colonies.” (In 21st-century America can we say the same for blogs? E-books? Online newspapers?)

In America the market, rather than the government, drove the printing industry. This meant a wider variety of materials were printed in greater numbers than in England (and most of Europe). Shorter works were far more common; religious texts such as sermons or psalms, government pamphlets, and broadsides were cheaper and therefore printed in greater numbers at a faster pace. One of the most popular genres was the almanac. Many almanacs were also diaries, giving the reader an intimate portrait of figures such as George Washington and Benjamin Franklin. Almanacs were also used as propaganda during the American Revolution.

While printed material indisputably circulated propaganda, it also facilitated education in the United States. In the first issue of The Gazette of the United States, in April 1789, John Fenno claimed: “The middling and lower class of citizens will therefore find their account in becoming subscribers for this Gazette, should it pay a particular regard to this great subject.” This sentiment expressed that through their patronage of a paper on the subject of education, readers would be receiving an education. Newspapers also helped publicize the issue of slavery: both political statements on freedom and advertisements for runaway slaves often occupied the same space. Likewise, Johns writes, “the printed word gave women a broader audience as well and helped to focus attention on their altered roles in the newly established republican order.”

But newspapers were not the only affordable way to access reading material. Benjamin Franklin wrote that subscription libraries “have improved the general Conversation of Americans, made the common Tradesman and Farmers as intelligent as most Gentlemen from other Countries, and perhaps have contributed in some Degree to the Stand so generally made throughout the Colonies in the Defence of their Priviledges.” In fact, more than 100 book catalogs of various kinds were published in America before 1801. From both academic and popular catalogs it can be determined that religious titles, textbooks, professional manuals, guides, personal narratives, and a small number of imaginative works were frequently reprinted and circulated.

Johns on Piracy

Johns, Adrian. Piracy: The Intellectual Property Wars from Gutenberg to Gates. Chicago: U of Chicago P, 2009.

Part 1: To be continued in a future post…

The rise of piracy in the seventeenth century correlates with the ability to print – raising important questions about the nature of knowledge: can knowledge be authored, owned, and stolen? Interestingly, the rise of print culture coincides with the “scientific revolution,” and in fact many claim this revolution would not have been possible without this new ability to disseminate information.

In early modern times, scholars studied the relationship between words and things, which Isaac Newton and his contemporaries claimed to “revolution[ize] in terms of a fundamental recasting of that relation, or even as a discarding of the former in favor of the latter.” However, even with the nature of science being to experiment with things, the results are communicated with words. Furthermore, the natural philosophers of the sixteenth and seventeenth centuries still based their methodology on reading the work of their predecessors and building new theories from their groundwork. This practice of reading and writing is obviously still the basis of scholarly inquiry across the disciplines today.

“Experimenting with print as well as nature, the experimentalists created the distant origins of peer review, journals, and archives – the whole gallimaufry that is often taken as distinctive of science, and that is now in question once again in the age of open access and digital distribution.”

Experimental philosophy was based on doing things, but it also relied on the acts of writing, printing, and reading. Printed texts were distributed by trade companies. The Royal Society depended on the system of printing, distributing, and then reporting and reviewing in registries to maintain its system of knowledge making. Experiments had to be witnessed by an audience of trained notetakers who wrote and registered these reports – “ideally on repeated occasions.”

Perhaps the most fascinating aspect of this practice is the “learned sociability” developed through reading these reports in public. The educated gentlemen present at Royal Society gatherings brought diverse interpretations to these experimental readings, which helped to solidify their findings, disseminate knowledge, and advance the rate of scientific progress. Furthermore, these readings helped to cement social bonds and sustain the community. However, the Society did not lay claim to authorship. The laborers who recorded and often read the experiments were not attributed as authors. To recognize a gentleman as an author was viewed as “immodest,” and when Edward Tyler was given this title it was deemed an “allowable boldness.” These public readings became the progenitors of scholarly peer review, although at this point they were still based on “civility rather than expertise.”

Hayles on Cybernetics

Hayles, N. Katherine. “Cybernetics.” Mitchell, W. J. T., and Mark B. N. Hansen, eds. Critical Terms for Media Studies. Chicago/London: University of Chicago Press, 2010. 146–156.

In this article Hayles suggests that media can be understood through materiality, technology, semiotics, and social contexts. The article examines these aspects of media through systems theory, exploring how information is produced, processed, and consumed in a world of new media. Hayles cites Gordon Pask’s definition of cybernetics “as the field concerned with information flows in all media, including biological, mechanical, and even cosmological systems.” Hayles further develops this definition through Claude Shannon and Norbert Wiener’s definition of information “as a function of message probabilities…detached from context and consequently from meaning.” Therefore information is disembodied. Hans Moravec goes so far as to suggest it will be possible to upload the human brain to a computer, allowing humans to move into a “postbiological era,” an idea that clearly influences much of Hayles’s work (such as this book).
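To make “information as a function of message probabilities…detached from context and consequently from meaning” concrete, here is a minimal sketch of Shannon’s measure in Python; the probabilities are invented for illustration:

```python
import math

def surprisal(p):
    """Information carried by a message of probability p, in bits."""
    return -math.log2(p)

def entropy(probs):
    """Shannon entropy: average information per message, in bits."""
    return sum(p * surprisal(p) for p in probs if p > 0)

# A rare message carries more information than a common one,
# regardless of what either message *means*.
print(surprisal(0.5))   # 1.0 bit
print(surprisal(0.01))  # ~6.64 bits

# A fair coin is maximally uncertain; a biased coin carries less
# information per toss because its outcomes are more predictable.
print(entropy([0.5, 0.5]))    # 1.0 bit
print(entropy([0.99, 0.01]))  # ~0.081 bits
```

On this definition the measure never consults content at all, which is precisely the sense in which information, so defined, is disembodied.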

Hayles breaks the history of cybernetics into a three-part progression covering 1943 to 1996: 1) 1943–1960: first-order cybernetics, focusing on the separation between the organism/mechanism and its environment; 2) 1960–1985: reflexivity and autopoietic theory, introducing the observer as part of the system; 3) 1985–1996: virtuality (coined by Hayles in 1996), in which, Hayles claims, “human and animal bodies are media because they have the capacity to store, transmit and process information.” This moves into the current fourth phase, in which modern technology causes virtual environments, or cyberspace, to become integrated into the “real world.” Examples of this include the GUI (the graphical user interface, often created to mimic physical workspaces), augmented reality, and semantic web possibilities. These manifestations of virtuality are termed “mixed reality.” The third and fourth phases are imagined as a third-order cybernetics concentrated on the social and linguistic environments occupied by the observer (of these I am particularly interested in the construction of social networks).

Another interesting shift occurs through Edward Fredkin’s claim that “the meaning of information is given by the processes that interpret it,” including mechanical, nonhuman processes. Hayles reads this as significant because it “enables us to see these sub-cognitive and non-cognitive processes not just as contributing to a conscious thought but as themselves acts of interpretation and meaning.” This theory has radical repercussions within both literary and writing studies, because this shift would refocus attention on the process of interpreting rather than on the interpretation as a product.