Computers in the Classroom? A Critique of the Digital Computer as a Metaphor for Mind

Gibson, Keith

Because we can design computers that follow rules when they process information, and because apparently human beings also follow rules when they think, then [some argue that] there is some unitary sense in which the brain and the computer are functioning in a similar—and indeed maybe the same—fashion.

— John Searle

For many years, literacy theorists have sounded the call for a complex understanding of their subject. Though literacy is often viewed as the “simple” ability to read and write, many in academia are encouraging a perspective that recognizes its interactional aspects (see, for instance, Oxenham, Robinson, Gee, and Ogbu). These teachers and theorists have persuaded many in the academic community of this point of view, but perhaps the most important audience—the general public, specifically the parents of the students—remains unconvinced. The demand for school accountability in this country is growing, as seen in the political platforms of both major parties over the past several years as well as in President Bush’s No Child Left Behind policy. This demand generally takes the form of a clamor for higher standardized test scores, a desire driven by panic-stricken reports detailing the ignorance of our youth (based on declining test scores) and the disastrous consequences certain to follow (Copperman). These reports, given ominous titles like A Nation at Risk in 1983 or the current anxiety-inducing “Nation’s Report Card,” tend to fuel the notion that the numbers are the important thing, that our chief educational objective should be breaking 1400 on the SAT, despite many studies illustrating and many educators decrying the inadequacy of standardized tests to measure much more than students’ ability to fill in small circles completely (as indicated in Kaestle and Breland).

The conclusion we are to draw from this state of affairs is an unsurprising one: the general public has very different views on education than educators do. At the same time, the public wields a great amount of power over our educational system (and rightly so, since their children are involved), exercised in the form of electing officials who write government checks. I propose that the way around this difficulty is not to remove this power from the people, but rather to bring their views more in line with those of “the experts” in the field.

This is clearly not going to be an easy task, but I believe we can take a great step in the right direction by looking briefly at the power of metaphor. In Metaphors We Live By, George Lakoff and Mark Johnson argue that “metaphors allow us to understand one domain of experience in terms of another” (117). This generally occurs when we are trying to understand a concept that is fairly ambiguous; to help us grasp it, we search for a concept that is more concrete and attempt to explain it in those terms. The two concepts will necessarily not be identical, so when we explain one in terms of another, we are not going to be perfectly accurate. We are willing to put up with this for the sake of our understanding, and, in many cases, the presence of the metaphor is not forgotten; we are always aware, for instance, that time is not actually flying. There are instances, however, when metaphors become so entrenched in our language that we begin to mistake the metaphor for the reality, and it is my contention that this phenomenon is at the heart of our troubles with literacy. Specifically, Americans have gotten used to the idea that the mind is a digital computer, and this idea has had a negative impact on literacy theorists’ attempts to introduce a complex understanding of the subject.

The notion that the mind is a digital computer has been around for nearly half a century since artificial intelligence researchers began supposing that a computer could potentially duplicate all the actions of the brain. In the beginning, there was certainly nothing wrong with this idea; as John Searle has pointed out, this is simply the latest in a long line of explanatory metaphors:

Because we do not understand the brain very well we are constantly tempted to use the latest technology as a model for trying to understand it. In my childhood we were always assured the brain was a telephone switchboard. (‘What else could it be?’) I was amused to see that Sherrington, the great British neuroscientist, thought that the brain worked like a telegraph system. Freud often compared the brain to hydraulic and electro-magnetic systems. Leibniz compared it to a mill, and I am told that some of the ancient Greeks thought the brain functions like a catapult. (Science, 44)

The danger has come in the past few decades; as computers have come to resemble, and in certain areas outperform, our brains (in a way that catapults certainly never did), we have been much more accepting of the metaphor as an actual physical explanation. Indeed, cognitive scientists have formulated what is called the “computational theory of mind”; Steven Pinker describes it as the idea that “beliefs and desires are information, incarnated as configurations of symbols. The symbols are physical states of bits of matter, like chips in a computer or neurons in the brain” (25). Scholars who hold PhDs in fields like artificial intelligence or cognitive psychology certainly understand the limitations of such a metaphor; Pinker himself qualifies his statement: “The claim is not that the brain is like commercially available computers. Rather, the claim is that brains and computers embody intelligence for some of the same reasons” (26–27). Most of us, however, do not have Pinker’s level of expertise, and the more we hear about speech-recognition programs for our PCs and see Deep Blue beating Garry Kasparov at chess, the more likely we are to fully accept the idea that our brains really are computers. The effects of this metaphor are often quite subtle, but I believe they are pervasive and influential. I claim a substantial percentage of American parents have come to believe that the minds of their children can be taught in ways fundamentally similar to programming a digital computer, and this belief is having serious effects on our education strategies.

These effects come from extending the mind-as-digital-computer metaphor to cover learning. If we want to improve the capacity of a computer, the method is simple and straightforward: we load a program or upgrade the memory. Furthermore, a single set of instructions will work perfectly for any computer of a similar type. To the extent that we believe our minds are simply digital computers, we are also led to some questionable ideas about teaching. We are led to believe that literacy is an easily identifiable thing that students either have or do not have. We determine whether or not they have it by running a (standardized) test, and if they do not have it, we can simply give it to them. We are also led to believe that imparting this literacy “program” is basically the same for all students; once we determine the proper method, it will work universally.[1]

I claim that our stubborn insistence on this metaphor has reinforced three distinct educational ideas: 1) because one virus-checker can diagnose an unlimited number of machines using a simple objective test, we believe standardized tests are effective measures of achievement; 2) because a word processor can be successfully installed with or without a spreadsheet program, we believe that composition can be bracketed off from the other disciplines; and 3) because identical copies of a CD-ROM can load the same data on any brand of PC, we believe a single educational strategy can reach a student of almost any cultural background. These are notions that rhetoric and composition scholars (among others in academia) have been trying to overthrow for quite some time (see Crowley for an excellent discussion of this struggle), and the good news is that some headway has been made: a recent Public Agenda report indicates that, though 92 percent of parents and 87 percent of teachers believe that students should be required to pass a standardized test to graduate, only 22 percent of parents believe that test scores are “the best way to measure achievement,” with most (61 percent) believing that classroom work and homework offer the best indicator. The latter two beliefs, however, remain common, and it is important that we, as writing instructors, confront these issues. Thorough, academic explanations of cause-and-effect in student learning have not yet been effective, so I am suggesting we try a new approach: attack the problem at the source. Since the digital computer metaphor strongly reinforces those beliefs, an examination of that metaphor could help rhet/comp scholars deal more directly with them. Thus, in this paper, I will conduct an analysis of the fitness of our current metaphor for mind, and I will show that much of the latest research in artificial intelligence argues against the reality of that model. To accomplish this goal, I will first examine the theoretical background of digital computing; I will then describe two of the most influential objections to that theory; and I will conclude by suggesting approaches writing instructors can employ in the classroom to reduce the pervasiveness of this counter-productive metaphor for mind.

This argument will necessarily be a complex one, and many of the details will be unfamiliar to most rhet/comp specialists. But I believe an understanding of the issues at hand can go a long way toward success in our drive to assist our students in obtaining the rhetorical skills they need and to provide the American electorate with a more sophisticated view of literacy for themselves and for their children. In this endeavor, I hope to be accurate and persuasive, but I also hope to present the argument in a way non-specialists can be comfortable with. If this notion of mine—changing Americans’ minds about education by changing their metaphors for mind—is to work, it must be accessible to those unfamiliar with the subject of artificial intelligence. With that in mind, I will quickly define some of my terms.

Classical v. Quantum Physics: Classical physics is used to explain the behavior of objects that are roughly the size of cannonballs. When I say “roughly,” I am intentionally giving myself a fair amount of leeway; in this context, objects are roughly cannonball-sized if they are anywhere between the size of a water molecule and the size of the earth. Objects on either side of this cannonball-ness require quantum physics (if smaller) or relativity (if larger).[2] Thus, digital computers, and all their inner workings, fall squarely in the realm of classical physics; if we want to claim that the mind is a digital computer (and some of us clearly do), we are claiming that it can be understood and duplicated (and that all of its important parts can be understood and duplicated) using only classical physics. I am arguing that human-like intelligence cannot be re-created on a digital computer, and, more generally, that a thorough understanding of mind is not possible using only classical physics.

Consciousness, Mind, and Intelligence: In philosophical circles, there is a clear lack of consensus regarding the meanings of these terms, and I do not find it particularly easy to draw sharp boundaries for them, either. Fortunately, my discussion does not depend as much on an independent meaning of each term as it does on the relationship the terms have with each other. I see mind as the housing of any sort of consciousness, and consciousness as a necessary prerequisite of intelligence. Thus, the real issue for me is consciousness: if that is present, there must be a mind, and we must understand it on the way to understanding intelligence. As I discuss the issues, I will keep focused on what aspects are needed to explain consciousness, as opposed to either mind or intelligence.

The Theoretical Foundations of Digital Computing

In 1928, the German mathematician David Hilbert suggested that one of the great unsolved problems in mathematics was what he called the Entscheidungsproblem, or decision problem, a puzzle that Turing would recast as the halting problem.[3] Eight years later, Alan Turing, still in his early twenties, solved it—sort of. Turing actually proved that the halting problem could never be solved, and his proof was so thorough and so persuasive that it quickly settled the debate; now, some seventy years later, not even mathematicians think much about the halting problem. But the paper in which the proof appeared, “On Computable Numbers,” remains important to this day because in it, as a vehicle to establish his proof, Turing developed the idea of a Turing machine.

The Turing machine employs (theoretically) a series of squares on a tape; in each square there can exist one of two possible markings—let us say a 0 or a 1. An apparatus moves along the tape reading one square at a time; it either changes the marking to its opposite or leaves it the same, and then moves to a neighboring square as its instructions dictate. In Turing’s paper, he demonstrated the ability of this machine to perform any computation for which it could be given instructions. Earlier, an Austrian mathematician named Kurt Gödel had proven that any message that can be symbolized (for instance, a list of directions) can be translated into numbers, or, more specifically, into a series of 0s and 1s (a process which later came to be called Gödel-numbering). Turing combined this conclusion with his own work, and the result—reached independently by Alonzo Church—is now known as the Church-Turing thesis: any computation that can be carried out by mechanical means can be performed by some Turing machine. This led to Turing’s conception of the Universal Turing Machine, a machine capable of performing any computable function. Turing did not live to witness the full effect his theories would have, but now the entire world is familiar with them: modern digital computers are based on the theory he described over seventy years ago.
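
To make the mechanism concrete, here is a minimal sketch, in Python, of the device just described: a tape of marked squares, a head that reads exactly one square at a time, and a table of instructions telling the head what to write, which way to move, and what state to enter next. The function name, the state labels, and the little bit-flipping program are my own illustrative choices, not anything drawn from Turing’s paper.

    # A minimal Turing machine sketch: one tape, one head, one table of rules.
    # The rules map (current state, symbol read) -> (symbol to write, move, next state).
    def run_turing_machine(tape, rules, state="start", halt_state="halt", max_steps=10000):
        tape = list(tape)              # the tape: a sequence of marked squares
        head = 0                       # the head considers exactly one square at a time
        for _ in range(max_steps):
            if state == halt_state:
                break
            symbol = tape[head]                          # read the single current square
            write, move, state = rules[(state, symbol)]  # look up the instruction
            tape[head] = write                           # overwrite (or keep) the marking
            head += 1 if move == "R" else -1             # step one square right or left
            if head == len(tape):
                tape.append("_")                         # the tape is unbounded in principle
        return "".join(tape)

    # One illustrative program: flip every bit, halting at the first blank square.
    flip_rules = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }

    print(run_turing_machine("10110_", flip_rules))      # prints "01001__"

Even this toy version exhibits the feature that will matter later in the argument: at every step the machine consults one square and nothing else.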

Clearly, then, Turing machines are very powerful instruments, but we want to know exactly what they are capable of. We will be concerned with three relevant features: they contain an apparatus capable of considering a single piece of input at any one time; their potential input is unrestricted, as is their output; and they are capable of performing (in principle) any computation which can be carried out by mechanical means. Taken together, these concepts have led many to assume Universal Turing Machines generally (and their modern instantiations as digital computers particularly) possess the same capacities as the human mind. Turing himself believed this; in 1950 he published a paper entitled “Computing Machinery and Intelligence” in which he described a test whereby one of his machines’ intelligence could be measured in relation to a person’s.[4] Furthermore, Turing’s work is clearly reliant on classical physics; to the extent that a Turing machine can explain consciousness, then, we must accept that consciousness is a classical phenomenon.

Despite the obvious power of a Turing machine, it has clear limitations. As mentioned previously, Turing’s initial exposition of his theory was in a paper which explained the impossibility of using it to solve the halting problem. This restriction points to the Turing machine’s single greatest liability: its inability to consider more than one piece of information at any one time. In this way, a Turing machine is a serial processor; it must do one thing at a time and finish it completely before moving on to the next item on the list. If the machine wants to remember what happened at any time previously, it must go back to that particular location on the tape; the reading apparatus itself has no memory capabilities.

Problems with the Turing Machine Duplicating Human Thought

Any theory is bound to have detractors, but as Turing’s thought experiment moved from conjecture to reality, more and more scientists and philosophers began realizing the potential power of the Turing machine. As artificial intelligence research became a serious endeavor in the late 1950s and early 1960s, optimism was high that this time our metaphor for mind could become more than just a figure of speech. Many researchers issued bold predictions about digital computers equaling and surpassing human intelligence in rather short periods of time; even when these predictions fell short, the focus was on the progress, and nearly everyone involved with AI work saw intelligent Turing machines as a question of “when,” not “if.”

But progress slowed considerably in the 1970s, and the critics’ voices began getting louder and more influential. There have been many individuals with many different critiques of Turing machines; in this piece, I will represent the group by describing the grounds of two of the most common objections: semantics and incompleteness.

The Turing Machine’s Trouble with Semantics

The Turing machine received one of its first serious challenges as a potential bearer of consciousness over 40 years after Turing first proposed it. In 1980, John Searle wrote an article, “Minds, Brains, and Programs,” in which he proposed a thought experiment which, he claimed, showed the inability of a Turing machine to possess intelligence. He called his thought experiment the Chinese Room, and it consists of a person inside a room communicating with the outside world by passing cards back and forth through a slot in one wall. The cards to be passed have various assortments of “squiggles” and “squaggles” on them. These designs on the cards are unintelligible to the person in the room, but that person has an instruction book telling her which card to pass back through the slot when a certain card is received. As it turns out, the squiggles and squaggles on the cards represent actual Chinese characters, and, as the person passes them back and forth, she is holding an actual conversation with Chinese speakers on the outside. Searle then asks his readers to imagine themselves as that person: would you actually understand Chinese? The answer is, he hopes, no, and he argues that a digital computer, in all its complexity, is no more than the person inside the Chinese Room. Manipulating meaningless symbols according to rules it does not understand does not make the machine intelligent.

For Searle, a philosopher of language and mind, the key term is semantics, and it is precisely this that digital computers lack. He is even uncomfortable allowing that computers employ “formal symbol manipulation” since the word “symbol” implies something is being symbolized. But for computers, there is no meaning at all in the symbols; “In the linguistic jargon, they have only a syntax, but no semantics” (Searle, “Programs,” 83). If the computers are simply moving around meaningless symbols, there cannot possibly be any intentionality (which, for Searle, is mandatory for intelligence), just as the person in the Chinese Room cannot be said to have any actual intentionality even if one of the cards she passes through the slot reads “I want a hamburger.” Any meaning derived from the conversation would be inferred by those with whom she was conversing since the person in the room had no idea what any of the cards actually meant. For digital computers, the story is the same: “such intentionality as computers appear to have is solely in the minds of those who program them and use them” (Searle, “Programs,” 83).
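
A toy version of the Chinese Room makes this distinction vivid. The sketch below, in Python, “converses” by pure table lookup, pairing an incoming string of characters with an outgoing one; nothing in the program represents what either string means. The rule book, its two entries (one of which echoes Searle’s hamburger card), and the function name are invented here purely for illustration.

    # A toy "Chinese Room": the program pairs incoming symbol strings with outgoing
    # ones by table lookup alone. Nothing in it represents what the symbols mean.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thank you."
        "你想吃什么？": "我想要一个汉堡。",    # "What do you want to eat?" -> "I want a hamburger."
    }

    def chinese_room(card_passed_in: str) -> str:
        # The "person in the room" consults the rule book for a matching squiggle
        # and passes back the prescribed squaggle -- syntax only, no semantics.
        return RULE_BOOK.get(card_passed_in, "对不起，我不明白。")   # "Sorry, I don't understand."

    print(chinese_room("你想吃什么？"))   # a sensible-looking reply the room never "understood"

Whatever meaning the exchange has is supplied entirely by the Chinese speakers outside; the lookup itself, like Searle’s computer, has only syntax.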

Another perspective on this issue of semantics can come from a look back at Ferdinand de Saussure’s Course in General Linguistics. There he argued that a word’s linguistic value is “not determined merely by that concept or meaning for which it is a token. It must also be assessed against comparable values, by contrast with other words. The content of a word is determined in the final analysis not by what it contains but by what exists outside it” (Saussure 114). Thus, words have meaning for us as a result of our consideration of many simultaneous concepts. As we understand all the things a word is not, we begin to form a picture of what it is. In this way, linguistic value is an emergent property; a single concept emerges from many others.

This is something of which a Turing machine is not capable, even in principle. If we return to our discussion of Turing machines, we recall that they are set up in such a way that the apparatus considers a single square on the tape at any one time: it is a serial processor. But Saussure reminds us that we do not achieve semantics by thinking about one concept at a time; we consider many concepts, and the single one emerges from them. Digital computers have no way of dealing with this difficulty.

The Effects of Gödel’s Incompleteness Theorem

A complaint of similar character to Searle’s was articulated in 1989 by Roger Penrose. In his book The Emperor’s New Mind, he put his considerable ethos behind an argument that had been made in many forms, dating back to a 1961 article by J.R. Lucas, “Minds, Machines, and Gödel.” Though distinct in some very important ways, Penrose and Lucas both argued that a Turing machine is necessarily a victim of Kurt Gödel’s incompleteness theorem. Gödel’s landmark 1931 paper (later published in its own volume) not only established the existence of Gödel-numbering (mentioned above), but put an end once and for all to the Russell-Whitehead project of deriving all mathematical truth from a small set of axioms in the Principia Mathematica. Gödel’s discovery was that any formal system of mathematical propositions rich enough to express arithmetic will be either incomplete or inconsistent; in Gödel’s own words, “If c be a given recursive, consistent class of formulae, then the propositional formula which states that c is consistent is not c-provable” (70). The intricacies of the math used to prove this theorem are beyond the scope of this paper, but, since its original publication, there have been no serious challenges to it, and it is now wholly accepted in the mathematical community.

The implications of Gödel’s Incompleteness Theorem, however, enjoy no such consensus among mathematicians; Roger Penrose has staked his claim with a handful of others in arguing that Gödel’s work demonstrates the incapacity of a Turing machine to possess human intelligence. The argument hinges on the difference between formal and non-formal systems; a formal system is one that begins with a set of axioms and, from those axioms, produces propositions which are true because they can be created from the system. A system is consistent if it never produces two contradictory statements, and it is complete if it can produce every true statement expressible in its language. Turing machines are without question formal systems, and, as such, Gödel’s Incompleteness Theorem applies to them; in the mathematical jargon, Turing machines cannot understand the truth of their own Gödel-sentences.[5] Penrose’s claim is that human minds are not formal systems, and thereby not bound by the theorem. This exhibits a qualitative difference between human intelligence and the possibilities of Turing machine intelligence: “Whatever (consistent) formal system is used for arithmetic, there are statements that we [humans] can see are true but which do not get assigned the truth-value true by the formalist’s proposed procedure [embodied in Turing machines]” (Penrose, Emperor’s, 108). Admittedly, this is a rather small result, and its implications may seem limited to a very small set of circumstances, but, Penrose argues, it indicates a fundamental obstacle that Turing machines cannot overcome. Perceiving the truth of a Gödel statement is something a computational machine cannot in principle do; since humans can perceive the truth of these statements, there is at least one part of our consciousness that cannot be duplicated by a Turing machine.
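
For readers who want the result in symbols, the standard modern formulation runs as follows (this is a textbook rendering, not Gödel’s own 1931 wording or Penrose’s):

    % Gödel's first incompleteness theorem, standard modern statement.
    % Assume F is a consistent, recursively axiomatizable formal system strong
    % enough to express elementary arithmetic. Then there is a sentence G_F
    % (the Gödel-sentence of F) such that
    \[
      F \nvdash G_F \qquad \text{and} \qquad F \nvdash \lnot G_F ,
    \]
    % although, on the standard interpretation, G_F is true. G_F is exactly the
    % sort of statement a human reader can "see" to be true while F cannot prove it.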

In many ways, Penrose’s argument against Turing machines is like Searle’s. The inherent incompleteness Gödel demonstrated in formal systems[6] is structurally similar to the deficiency that denies a Turing machine semantics; it implies an inability to consider more than one thing at a time. We can perceive the truth of Gödel sentences because we can understand the implications of an infinite sequence, and we can do this because we can consider lists as more than simply individual items. A Turing machine does not have this ability; when considering an infinite sequence, it must take one item at a time, never grasping the connection between them.

Confronting the Metaphor

Thus far, I have focused on making the case that the digital computer is a poor metaphor for the human mind, but there is more at stake in this discussion than the future direction of AI research. As writing instructors, we are confronted by students, parents, principals, university administrators, and politicians who (though often subconsciously) wholly accept this metaphor and many of the implications that go with it. Many of the struggles we have with those who hold simplistic views of education and assessment can be traced directly or indirectly to this inaccurate figure of speech. And I believe if we can change opinions about this metaphor, we can change opinions about literacy.

Of course, to change opinions about the metaphor, we must first identify it, and that is not always a simple task. Explanatory metaphors are like warrants in the Toulmin schema: they are rarely stated, but, if faulty, can cause all kinds of problems. No one ever comes right out and says they believe in the SAT because they think our brains are just meaty PCs; they simply hold their beliefs as long as this basic assumption is not addressed. And as with an unstated assumption, we can find this metaphor by looking for its implications, namely the three beliefs about education described above: standardized tests are effective measures of achievement, composition can be successfully bracketed off from the other disciplines, and a single educational strategy can reach a student of almost any cultural background. When we see one or more of these ideas in someone who has some influence over our teaching, we may find it necessary to address the core of the disagreement: the inaccurate metaphor for mind.

Confronting this metaphor will require that we familiarize ourselves with a discussion of the problem that will be palatable to non-technical audiences. To that end, I suggest the following three points that can help explain why this metaphor is unsound, i.e., why classical physics and the digital computer are unfit to understand consciousness.

  1. There is a history of conflict between consciousness and classical physics. As Henry Stapp explains, “Classical mechanics arose from the banishment of consciousness from our conception of the physical universe. Hence it should not be surprising to find that the readmission of consciousness requires going beyond that theory.” Classical physics has never quite known what to do with the mind, a confusion ultimately expressed in Descartes’ mind-body dualism. We are now rightly skeptical of this binary, but Newton’s laws are no better equipped to describe consciousness now than they were in the 17th century.
  2. Classical physics is no longer in the business of providing fundamental explanations. As the twentieth century progressed, classical physics lost more and more of its explanatory grip on the physical universe: beginning with electromagnetism, quantum theory has taken over more and more of the territory once apportioned to Newtonian physics. Classical physics is clearly still useful, but physicists now accord it little more than “approximation” status. It does not, then, seem reasonable that the handful of Newtonian principles that explain digital computers can also explain human consciousness.
  3. Consciousness is going global. One of the biggest difficulties classical physics has in attempting to explain consciousness is its inability to handle non-local effects. Stapp reminds us that “the fundamental principle in classical mechanics is that any physical entity can be decomposed into a collection of simple independent local elements each of which only interacts with its immediate neighbors.” But consciousness is much more holistic than that; there is a qualitative difference between thoughts and the neuron firings that cause them which cannot be explained by strict classical reductionism.

Armed with this information, we must be prepared to talk about this problem with anyone who will listen. The coming years are only going to feature more debates over writing and literacy and their place in education, and I believe a discussion of this metaphor can be a helpful component of such debates. We have reason for optimism: the Public Agenda figures mentioned above, and the (relatively) rhetorically sophisticated writing assessment employed by the Department of Education for the “Nation’s Report Card.” There are still gains to be made, however, and I believe some attention on this metaphor will move the conversation in the right direction. Even though this strategy may seem to require a discussion too technical to hold our audience’s attention, in my experience, most non-specialists are willing to give such an argument a chance. There is a fair amount of public interest in artificial intelligence, and we all want to understand our minds better; an explanation of how we think about our consciousness and how we could understand it more is often very effective.

The Implications for the Writing Classroom

Of course, as writing teachers we are able to do more than simply talk about this approach to literacy and writing: we are able to implement it in our classrooms. I would like to suggest three specific activities built around this more complex view of mind that can help us better serve our students as they become skilled writers: hypertext writing, real-world projects, and assignments built around our students’ own experiences.

Hypertext Writing

It may seem odd that one of my suggestions for overcoming the metaphor of mind as a computer is turning to computers in the classroom. Hypertext writing, however, can be a way for students to see that writing and literacy are much more than sitting alone in front of a blank page (or screen). Hypertext writing invites (almost demands) collaboration, and this collaboration takes place at all stages of the writing process and by writers with widely divergent ideas about writing. Ideally, these writers may not even know each other: they could be in different classes, different schools, or different states. This kind of writing will necessarily change the way our students think about composition; as Johndan Johnson-Eilola points out, hypertext writing can help demonstrate how “the activities of writing and reading are transformed and appropriated by widely divergent communities, each of which reconstructs general characteristics of hypertext in relationship to that community’s goals” (7). As the writing process becomes more collaborative, our students will begin to understand that theirs is not the only way of writing and theirs is not the only way of seeing the world.

Real-World Projects

One of the biggest obstacles in the writing classroom is the lack of exigence for the students: their writing seems important to no one, least of all, it turns out, the students themselves. Many advanced college composition courses (particularly those in professional and technical communication) deal with this by offering outreach programs in which instructors can connect their students with actual writing projects for an actual audience. I propose we pursue these opportunities earlier, in the first-year writing course and possibly even in junior or senior-level courses in high school. There are many advantages to these programs, one of which is the way students begin thinking about invention. For many beginning writers, the pre-writing stage consists of a handful of activities (freewriting, brainstorming, bubble diagrams, etc.) that they endure no matter what type of writing they plan to do: it’s like the start-up sequence for Windows. Real-life writers are much more flexible because the rhetorical situations often dictate the type of writing and pre-writing that will be the most helpful. Getting young writers into these kinds of situations will help them understand the reason we run through a variety of approaches to writing—because there really are different types of writing and different types of literacy.

Assignments Built Around Students’ Own Experiences

Reading and writing are very personal activities for our students; much of their identity is tied up in the experiences they have had constructing their own literacy. Daniel Wagner argues we must take this into account as we interact with our students, that for them becoming a better writer “means change not only in a set of skills . . . but also in behaviors, attitudes, and beliefs that define each individual, the rest of his or her community, and, ultimately, the structure of communities and societies themselves” (300). We can make this transition easier for them if we are more flexible in the subject matter of their essays. There are some advantages to constructing themed first-year composition courses in which all students write essays on the same topics: I have taught many of those courses myself. But while we gain some coherence in class lectures and save some time while grading, we lose some of our students’ ability to focus on the rhetorical principles we want them to learn while they are familiarizing themselves with what may be a completely new topic. If we build our assignments around specific skills and principles we want them to learn, rather than around a particular topic, we allow them to more easily negotiate the difficult transition from beginning to experienced writers.[7]

Conclusion

Changing a metaphor is a large undertaking, especially one as entrenched as the mind-as-digital-computer metaphor. The fight we have ahead—in schools, in legislatures, and in public forums—is sure to be a long one, but it is just as sure to be an important one. I believe a discussion of metaphors, mind, and artificial intelligence can help us, in some small way, change the way we think about literacy and change the way we think about writing. Similarly, an awareness of the differences inherent in our students and in the way they achieve literacy will make us better, more effective teachers of writing.

Notes

1 It is clearly true that these ideas are older than digital computers, but though this metaphor may not have originally inspired these ideas, it has strengthened them in a time when many of our older notions are being updated. Thus, they can be viewed as being at the “source” of the trouble.

2 Relativity is usually classified with classical physics, but it is much newer than (and quite different from) most aspects of classical theory. I have thus left the two separate, but the distinction is not a vital one for this argument, anyway.

3 The halting problem is a formal symbolization of the attempt to determine whether a search will end. For instance, an algorithm can be written to search for the largest prime number; however, since there is no largest prime number, that algorithm will never yield a result. Hilbert wanted to know whether an algorithm could be written to determine, without carrying out a search, whether that search would ever end. Turing proved that this is impossible.
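
A minimal sketch, in Python, of the situation just described; the function names are my own, and the “largest prime” search is simply the example above made literal. We can always run such a search, but Turing showed that no general procedure can inspect an arbitrary program and its input and report in advance whether the run will ever finish.

    # The note's example made literal: a search that can never yield a result,
    # because (as Euclid proved) there is no largest prime number.
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    def search_for_largest_prime():
        largest = None
        n = 2
        while True:               # no largest prime exists, so this loop never ends
            if is_prime(n):
                largest = n       # keeps being updated; a final answer never arrives
            n += 1
        return largest            # unreachable

    # What Hilbert's question asks for, and what Turing proved cannot exist in general:
    #   def halts(program, program_input) -> bool:
    #       """Return True if program(program_input) eventually stops, False otherwise."""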

4 The Turing Test is a blind test of a computer’s responses to inquiry. Without knowing if he is communicating with a person or a computer, an interrogator is allowed to question the other party for a given length of time. If at the end of the time the interrogator cannot tell whether it was a person or computer, the computer is said to be intelligent.

5 One of the results Gödel demonstrated is the existence, for every formal system, of a Gödel-sentence for the system. In rough terms, a system’s Gödel-sentence states “I am not a theorem of this system” (Hofstadter 272). Caught in its own formal rigidity, the system can do nothing; if it can prove the sentence, it has proved a falsehood and is thereby inconsistent. If it cannot prove the sentence, the system is incomplete because there exists a true statement it cannot prove.
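
Put symbolically (a standard textbook rendering, not a quotation from Gödel or Hofstadter), a system F’s Gödel-sentence G_F is constructed so that, within F itself, it is equivalent to the claim that it has no proof in F:

    % Prov_F is F's provability predicate; the corner brackets denote the
    % Gödel number of the sentence they enclose.
    \[
      F \vdash \bigl( G_F \leftrightarrow \lnot \mathrm{Prov}_F(\ulcorner G_F \urcorner) \bigr)
    \]
    % If F proves G_F, it proves a falsehood (inconsistency); if it does not,
    % G_F is a true statement F cannot prove (incompleteness).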

6 Mathematically, this “inherent” deficiency is known as ω-incompleteness. Hofstadter defines a system with this particular feature as one in which “all the strings in a pyramidal family are theorems, but the universally quantified summarizing string is not a theorem” (222). In other words, a system that could prove, for each particular prime number, that it is not the largest prime, but could never prove in general that there is no largest prime number, would be ω-incomplete.
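
In symbols (again a standard rendering rather than Hofstadter’s wording), the note’s example looks like this:

    % Let P(n) abbreviate "n is not the largest prime":
    %   P(n) := \exists y\,(y > n \wedge \mathrm{Prime}(y)).
    % An omega-incomplete system F proves every numerical instance,
    \[
      F \vdash P(\underline{n}) \quad \text{for each particular numeral } \underline{n},
    \]
    % yet fails to prove the single universally quantified summary:
    \[
      F \nvdash \forall x\, P(x).
    \]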

7 An additional benefit of these varied essays is that they expose your students to a range of new cultures and experiences when they workshop each other’s drafts. I have had several students comment on how much they learned simply by reading what their fellow students had written about themselves or their backgrounds.

Works Cited

Breland, Hunter, ed. Challenges in College Admissions: Report of a Survey of Undergraduate Admissions Policies, Practices, and Procedures. New York: PMDS-AACRAO Distribution Service, 1996.

Copperman, P. “The Achievement Decline of the 1970s.” Phi Delta Kappan (1979): 736–39.

Crowley, Sharon. Composition in the University: Historical and Polemical Essays. Pittsburgh: U of Pittsburgh P, 1998.

De Saussure, Ferdinand. Course in General Linguistics. Trans. Roy Harris. La Salle, IL: Open Court, 1983.

Gee, James Paul. Social Linguistics and Literacies: Ideology in Discourses. 2nd ed. London: Falmer Press, 1996.

Gödel, Kurt. On Formally Undecidable Propositions of Principia Mathematica and Related Systems. Trans. B. Meltzer. New York: Dover Publications, Inc., 1992.

Hofstadter, Douglas R. Gödel, Escher, Bach: an Eternal Golden Braid. New York: Vintage Books, 1979.

Johnson-Eilola, Johndan. Nostalgic Angels: Rearticulating Hypertext Writing. Norwood, NJ: Ablex Publishing Corp., 1997.

Kaestle, Carl F., et al. Literacy in the United States. New Haven, CT: Yale UP, 1991.

Lakoff, George, and Mark Johnson. Metaphors We Live By. Chicago: U of Chicago P, 1980.

Ogbu, John. “Literacy and Black Americans: Comparative Perspectives.” Literacy Among African-American Youths: Issues in Learning, Teaching, and Schooling. Ed. Vivian L. Gadsden and Daniel A. Wagner. Cresskill, NJ: Hampton Press, 1995.

Oxenham, John. Literacy: Writing, Reading, and Social Organization. London: Routledge, 1980.

Penrose, Roger. The Emperor’s New Mind. New York: Penguin Books, 1989.

—. Shadows of the Mind. New York: Penguin Books, 1993.

Pinker, Steven. How the Mind Works. New York: W.W. Norton & Company, 1997.

Public Agenda. An Assessment of Survey Data on Attitudes About Teaching. 25 Aug. 2003. 1 May 2004. <http://www.publicagenda.org/research/research_reports_details.cfm?list=4>.

Robinson, Jay L. “The Social Context of Literacy.” Perspectives on Literacy. Ed. Eugene R. Kintgen, Barry M. Kroll, and Mike Rose. Carbondale, IL: Southern Illinois UP, 1998. 243–52.

Searle, John R. “Minds, Brains, and Programs.” The Philosophy of Artificial Intelligence. Ed. Margaret A. Boden. Oxford: Oxford UP, 1990. 67–88.

—. Minds, Brains, and Science. Cambridge, MA: Harvard UP, 1984.

—. The Mystery of Consciousness. New York: The New York Review of Books, 1997.

Stapp, Henry. “Why Classical Mechanics Cannot Naturally Accommodate Consciousness but Quantum Mechanics Can.” Psyche 2.5 (May 1995). 10 Apr. 2000. <http://psyche.cs.monash.edu.au/v2/psyche-2-05-stapp.html>.

Turing, Alan M. “Computing Machinery and Intelligence.” The Philosophy of Artificial Intelligence. Ed. Margaret A. Boden. Oxford: Oxford UP, 1990. 40–66.

—. “On Computable Numbers, with an Application to the Entscheidungsproblem.” Proceedings of the London Mathematical Society 42 (1937): 230–65.

Wagner, Daniel A. “Literacy and Cultural Differences: An Afterword.” Literacy Among African-American Youth. Ed. Vivian L. Gadsden and Daniel A. Wagner. Cresskill, NJ: Hampton Press, 1995.

Provenance: 
This text was accepted for publication after an anonymous peer review process.
This work is licensed under a Creative Commons Attribution-Share Alike 3.0 License.
Publication date: 
2007-09