For the past month and a half, I've been burying myself in IRS records, GAO files, and Office of Technology Assessment reports, trying to piece together a history of the Individual Master File, the 60-year-old collection of IBM assembly that runs large parts of the Internal Revenue Service. For a variety of reasons, it's a story that defies easy narration. The most obvious one is that it's an enormously complicated system, one which interfaces with nearly every part of the agency, so conceiving of this as an article-scale project that I could knock out quickly involved a certain level of hubris on my part. But there's another, deeper reason, one that I thought was important enough to devote some space to: namely, narrative practice and the kinds of stories we tell about technology, especially computing technology. Looking at a sixty-year-old system—a continually modified, continually extended, continually contested computing system—defies some of our expectations about computing technology and upends some of our assumptions about it. Let me explain.
One common way to tell a technology story is through a two-part "development-and-impact" model. In the first part of the story, when a technology is being developed, our heroes (or, depending on your narrative and political commitments, our villains) create something new and revolutionary. In the process, these creators imbue a technology with political, cultural, and moral values. They may do this intentionally: think, for instance, of the way that Richard Stallman attempted to bootstrap a new intellectual property regime with the GNU General Public License, or the way that Bitcoin’s architect(s) encoded normative assumptions about monetary policy, inflation, and the legitimate functions of the state into the technology itself. They may also do this unintentionally: consider the numerous stories about bias in training data for machine learning systems and large language models. When we talk about values being “baked in” to a technology, it is during this phase that the baking happens.
In the second part of these stories, the new technology is released into the world and "impacts" it. Whether the consequences of these new technologies are salutary or harmful depends only on the kind of story you want to tell. Whether you're telling a cautionary tale in the mold of Mary Shelley's Frankenstein or a treacly hagiography about the way the Apple II transformed the computer scene, the basic outline is the same: technology is the cause and society absorbs the impacts. "After all," as the German media and technology theorist Friedrich Kittler once succinctly put it, "it is we who adapt to the machine. The machine does not adapt to us."
This narrative model and its assumptions have always been subjected to criticism by a subset of researchers in the fields of the History of Technology and Science and Technology Studies. Despite these criticisms, however, the development-and-impact model remains a popular way to tell stories about technology, and one which, at least from my perspective, has become even more entrenched and pervasive over the past decade. It provides the structural undergirding for nearly every side of the ChatGPT wars, from the boosters, to the “AI Ethics” crew, to the technomystic doomers concerned with “existential risk.” Whether the prescription is audits, alignment, or an appeal to the ethical responsibility of engineers, the answers usually involve the enlightened, technocratic choices of a small set of actors early in the process.
There are several possible reasons for this popularity. One is that over the past couple of decades, technology criticism, always something of a reactive field, glommed on to some tropes popular among its ideological enemies. In the early 2010s, at the height of the popularity of Clayton Christensen's theory of "disruptive innovation" among Silicon Valley founders and the business press, technology critics and scholars created an inverted form of the theory you might call "dark disruption." In these stories, the basic outlines of Christensen's model were adopted, but the value judgments about the nature of "disruption" were inverted. Instead of “disrupting” old business models and incumbents within a sector, what is being “disrupted” gets reinterpreted as “society,” “democracy,” or “basic humanity.” It’s a nice rhetorical trick, but one which I always felt left a lot of the most important questions unaddressed, including whether the technology is even capable of effecting the changes attributed to it.
Another reason has to do with the origins of the ML fairness and AI ethics fields themselves. Both of these interconnected fields have their origins in the efforts of in-house researchers and industry-supported academics to understand and limit some of the negative consequences of (then) emerging machine learning systems. Since their goal was to identify and mitigate exactly the kinds of problems easily understood through the "development-and-impact" model, it made a lot of sense that they’d adopt it. For industry's part, it made good business sense to have people working on these questions. These efforts could easily be understood as building trust around new, socially unproven technologies and as evidence of corporate social responsibility. But by 2017 this arrangement between industry and researchers had begun to show signs of strain, and by late 2020 it had (infamously) collapsed entirely at Google.
After the collapse of this relationship, we were left with something of a critical monoculture: an ecosystem of researchers, journalists, and tech workers primed to see technology as "political," but with very narrow ideas of what that means and with decreasing opportunities to actually intervene. Their ways of conceiving of technological politics allowed for a rich understanding of the formative stages of a technology, but provided only unsatisfying, static visions of the social world it "impacts."1 In effect, they offered a strange kind of technological determinism, one with a narrow opening for enlightened experts to intervene at the beginning of the story, but nothing for the rest of us. And where their relationship with industry had once given them an uncommon level of access to the internals of these technologies and the ear of power, with the collapse of the bargain, their politics offered only a set of false affordances.
In the issues to come, I hope to point to some ways forward, to explore the ways that, contra Kittler, the machine does adapt to us, but this will require looking at the problem in a different way. It will require looking at technology less as an autonomous, ahistorical force capable of durably constitutionalizing sets of arrangements and more as a site of ongoing struggles and conflicts. Artifacts do have politics, but it’s a mistake to think that these politics are stable, immutable, and not subject to reinterpretation and renegotiation.
Here, a longer view helps. The use of computers, software, and “artificial intelligence” to automate decision making, to deskill and eliminate professions, to serve as pretexts for organizational changes, and to undermine worker autonomy and discretion is not a new phenomenon. And, frankly, these uses usually have less to do with the “baked-in” politics of any given technology and much more to do with a desire for power, profit, and control. The fight isn't over when a technology ships, and—in most of the ways that actually matter—the fight hasn't even started. But fighting will require us to abandon some of our one-dimensional views of technological politics, to talk less about impacts and more about social processes. As David Noble once put it, "Viewing technological development as a social process rather than as an autonomous, transcendent, and deterministic force can be liberating...because it opens up a realm of freedom too long denied. It restores people once again to their proper role as subjects of the story, rather than mere pawns of technology."
As I write this, there are signs that this "realm of freedom" is, in fact, reopening. The current preoccupation with Luddism is less a sign that people are enamored with the movement's historical values, methods, or aims than a sign that we’re yearning for a more expansive vision of what technological politics even means, one less centered on the politics of expertise and more open to other forms of political engagement. The excitement surrounding the hard-won tentative agreement between the Writers Guild of America and Hollywood studios shows this. It establishes a set of firm rules governing the use of LLMs in the workplace, not through a technical critique of a model's internals, not through “baking” new politics into the model, but through traditional, organized labor power. It shows that the social meaning of technology is never finally settled. We always have a choice. Always. It just takes organization and a will to fight.
I’m painting with a broad brush here. There are exceptions to this characterization, and they know who they are. (Hi, Moritz)