Paige Morgan

#CLOCread: Ch. 1: The Cultural Function of Computation

In this chapter, David Golumbia sets up his argument, and lays out his goals. I’ll recap those goals, but I think it’s also useful if I use this post to explicitly identify how I’m likely to respond to his ideas, based on my own background as a reader with experience in literary history and digital humanities. I should probably add that I’m interested in keeping this series of posts in the realm of low-stakes writing — formality isn’t the goal. Instead, I’m aiming for something a bit more like the Mark Reads series.

I think it’s significant that I’ve heard this book referred to as The Cultural Logic of Computing rather than Computation. Right away in the intro chapter, Golumbia clarifies that his book is about the latter — computation, rather than computers or the specific computing that people do with them. Computation refers to “the methods computers use to operate,” and it’s the “rhetoric of computation” in society today that is a major focus for this book. To illustrate: computing is the program I write to produce a list of prime numbers from 1 to 100; computation is the way I get this list: dropping the numbers 1-100 into the program and having it quickly run a series of tests (is this number divisible by 2, 3, 4, 5, 6, 7, 8, or 9?) that eliminates most of the numbers and lists the rest, all in about 3 seconds. In short, by asking questions that sort and eliminate until a sufficiently small answer remains. The rhetoric of computation is (to give one example) the view that if you allow everyone access to edit Wikipedia, then the process of their interaction will naturally provide, test, and eliminate data to produce a reliable source of information, while simultaneously revealing which people are most usefully knowledgeable as either well-trained scholars or savage savants. Or, to give another example, the rhetoric of computation influences the educational system to promulgate the idea that if you make sure that students are given a steady four-year diet of information (30% humanities, 20% science and math, and 50% concentration in a subject of their choice), they will emerge as fully autonomous, intellectually and emotionally functional members of society. Or even the idea that if you try to write an essay and it isn’t coming, the answer is to try harder, because you must not be working hard enough. (This last one is a topic that’s contentious in my classroom — the revolutionary idea that the best way to write an essay is not, perhaps, to bolt yourself to the desk and force yourself to bleed the words out.)
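(If you’re curious what that prime-listing program might look like, here’s a minimal sketch of the sort-and-eliminate process I just described. It’s in Python, and it’s entirely my own illustration, not anything from Golumbia’s book.)

```python
# A toy version of the prime-listing program described above: take the
# numbers 1-100, test each one for divisibility by 2 through 9, eliminate
# the numbers that fail a test, and keep the survivors.

def primes_up_to_100():
    survivors = []
    for n in range(2, 101):  # 1 isn't prime, so the testing starts at 2
        # n is eliminated if any test divisor smaller than n divides it evenly
        if any(n % d == 0 for d in range(2, 10) if d < n):
            continue
        survivors.append(n)
    return survivors

print(primes_up_to_100())  # the 25 primes below 100, ending with 97
```

The point isn’t the code itself, but the shape of the process: ask a fixed series of yes-or-no questions, eliminate, and keep whatever survives.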

The problems with computationalism, then, are much larger than the rise of smartphones in the classroom (or Twitter being assigned to the syllabus). Thus, in regard to Wong’s NYT article, it might be better to say, “the world is changing the way that students are, and it is no simple thing to know how to adapt our teaching to meet their needs.” To describe the situation in those terms in no way lessens the seriousness of the issue.

Would it be easier for the technology enthusiasts and skeptics to find common ground and work towards solutions if both were aware of the idea of computationalism? I suspect so, just as it’s easier for my first-year comp students to craft a careful and delicate discussion when they become aware that they are allowed to think and write about abstract concepts, and that it is worthwhile to write about the ways that two different people define love. (I say this because my favorite way to introduce this idea is to ask them about the differing concepts of love expressed in songs by Justin Bieber and Lady Gaga, and to describe the significance of these different definitions. It works. Almost instantaneously.)

Golumbia treads very carefully in explaining the problem and significance of computation, without fear-mongering. The problem with computationalism is that it “underwrites and reinforces a surprisingly traditionalist conception of human being, society, and politics.” His goal is

to show the functions of that discourse in our society, to think about how and why it is able to rule out viable alternative views, and to argue that it is legitimate and even necessary to operate as if it is possible that computationalism will eventually fail to bear the philosophical-conceptual burden that we today put on it.

That burden, of course, is the idea that our current and future internet and associated computing tools will promote a primarily beneficial and democratic societal structure, along the lines described by Clay Shirky in Here Comes Everybody (to name just one title).

I would be lying through my teeth if I said anything other than how much I loved this chapter, or how excited I was to read through it and be treated to a discussion of how computationalism has developed over the last 400 years, one that is helping me to consider Enlightenment thinking in more complex terms than rationality and anti-rationality. I see that so rarely in academic texts, even those that I consider to be very smart on other subjects. I find it difficult to track discussions that treat this as a simple opposition, because I’m usually frowning over the foundational assumption that rationality has a simple and conventionally agreed-upon meaning. But then, my own research deals with the confluence of rational and irrational thinking and imaginative writing in regard to issues of value and economics in 18th- and 19th-century England — and so, to my great delight, the scope of this argument actually looks as though it’ll connect beautifully with the dissertation I’m currently revising. (My director will be thrilled.)

And as long as I’m squeeing, I’ll praise the writing for discussing theory without alienating people who haven’t spent years reading and studying it under the guidance of teachers who are adept at teaching it, and without making me feel as though I’m going to be hopelessly lost unless I rush right out and read 600+ pages of Deleuze & Guattari and Derrida right this minute.

What I understand about the rest of the book from this chapter is that it will involve an exploration of the places in our society where computationalism has had a strong influence, including philosophy, linguistics, corporate practice and structure, academia, and politics — and that Golumbia will be considering both computationalism as an everyday social practice or discourse, and computational protocol — the computing network and infrastructure that many people picture as “digital technology/Teh Internets.” Doing this will require a discussion of how those phenomena have affected the way we exist as humans, and the way we consider what it means to be human. I’m really looking forward to digging into that exploration.

Three years ago I attended a talk by the digital-physical performance artist Stelarc. I was both startled by what he had done and struck by how much some of his precepts and experiments reminded me of William Blake’s ideas and art, and the talk left me puzzling over whether Blake’s ideas on perception and thought meant the same thing in the 21st century as they had in the 18th. Said puzzling mostly melted my brain before I figured out what I was trying to say (and turned it into an article coming soon to a Palgrave collection near you!); but suffice it to say, I really wish I’d had The Cultural Logic of Computation handy when I was struggling with it.

I really wish I could say more, but I’ve got a conference presentation to work on.

Next up (Sunday, I hope): Ch. 2: Chomsky’s Computationalism. Will I be able to follow the discussion? I’m a bit nervous, but definitely optimistic, given the first chapter.
