Cast iron cookware inspires endless, and usually insipid, debates online. How do you season it? What’s the best kind of oil to season it with? How do you clean it? Should you use soap? A nearly indestructible slab of metal has, in the minds of many people, become a focal point for anxieties about precision, optimization, and the life-ending catastrophe of potentially making a mistake.
I always skim these threads with the same lurid, semi-intellectual sense of fascination with which I might read Daniel Paul Schreber's memoirs or—more to the point—OCD forums. The constant checking, the desperate search for the "right" way, the idea that the use and care of your $25 Lodge Pre-Seasoned Cast Iron Skillet demands Extreme Vigilance, all follow the same pattern. I recognize the tune. These debates aren't just about cookware; they reveal deeper anxieties about certainty, confidence, and self-trust—ones that also define our relationship with technology.
As you might have noticed, I’ve been struggling with this follow-up piece for quite some time. Part of this is for the normal, dreary, broken-brained reasons. But part of it is because what I want to talk about, namely “computers and human psychology,” feels to me like an absolutely scorched discursive landscape. It is the terrain on which the big culture wars about platforms, identity, and information are fought. And because it’s a battlefield on which global dramas play out, it tends to be treated with about the same level of care and consideration that Poland was given in the twentieth century. And this carelessness has left us with some bad habits when it comes to talking about these issues.
An earlier generation of technology critics, people like Sherry Turkle in the 80s and 90s, understood something we seem to have forgotten: that the relationship between humans and computers isn't one of "impact," a term that implies a one-way force exerted by technology onto passive users. Instead, these critics recognized that human-computer interaction is reciprocal and deeply intertwined. Identity, emotion, and cognition are reshaped through a constant interaction between the user and system. The pathologies that develop, the truly bad ones, are expressions of the precise nature of this relationship.
One of the ways that relationship has played out is through what I can only describe as "a delegation of confidence." The machines we've built to help us have become the tools we lean on to confront our doubts. They promise reassurance, certainty, and validation in situations where our intuition falters. We reach for our phones to verify things we probably already know, seeking confirmation for even the simplest decisions. With each verification, each surrendered decision, we're participating in a process that Kenneth Burke, writing in 1935, would have recognized as a new and distinctive form of "trained incapacity."
Burke, borrowing from Veblen, developed this concept to explain how specialized expertise could become a kind of cognitive prison. He saw how institutions and professions cultivated habits of mind so thoroughly that they rendered individuals blind to possibilities outside their specialized domains. These "occupational psychoses"—a term Burke adapted from John Dewey for the ways of seeing entrenched by our jobs and institutions—could become so ingrained that they actually limited our ability to perceive or act outside those patterns.
But Burke's insight takes on a different resonance in an age where the very act of technological verification has become its own kind of expertise. Where he saw occupational specialization limiting perspectives—the banker reducing everything to financial transactions, the engineer seeing only technical solutions—we've developed a more pervasive form of trained incapacity. Our reflexive reach for technological verification has become a cognitive framework that actively reshapes what we consider judgment to be. The cast iron pan exemplifies this perfectly: what should be a process of developing practical knowledge through use and observation has been transformed into an endless series of verification checks, each one reinforcing our inability to trust direct experience. We've become experts at technological mediation, and like all forms of expertise Burke described, this proficiency reshapes our understanding in ways that make other forms of knowing increasingly inaccessible.
What emerges is not just a crisis of verification, but a more profound retreat from unmediated experience. Technological systems don't merely offer verification; they provide a comprehensive architecture of avoidance. Where once we might have confronted uncertainty directly—learning through trial, error, and embodied experience—we now have elaborate mechanisms for circumventing discomfort.
Consider again the landscape of contemporary food discourse. What was once a domain of personal exploration and cultural transmission has become a labyrinth of anxieties and algorithmic interventions. Food blogs, recipe sites, and YouTube cooking tutorials have transformed cooking from an embodied practice into a performance of expertise. Each recipe comes with a meticulously documented backstory, a litany of potential failures, and an armory of specialized equipment. The message is clear, and it's the same lesson you get from watching The Bear: cooking is not something you do, but something you might catastrophically fail at.
This might not be the way you interact with online cooking culture, but that's precisely the point. Where the traditional impact model sees technology as a unidirectional force actively reshaping human experience, a relational lens reveals the more intricate ways these systems and human practices co-construct each other. What emerges is not a universal truth, but a particular mode of being online—a specific configuration where networked anxieties find their perfect market expression.
Let’s stay with the perennial subjects of online food discourse. For the right kind of user, DoorDash isn't just a meal delivery service—it's an elegant solution to a carefully manufactured problem. What begins as occasional convenience morphs into a subtle restructuring of domestic life. The app doesn't simply deliver food; it reframes cooking itself as a high-cognitive-load activity, a potential minefield of performance anxiety. For those already wrestling with time pressures, social anxieties, or diminished culinary confidence, the platform offers more than meals—it provides a form of psychological outsourcing. The app doesn't force dependency; it creates an environment where avoidance feels not just acceptable, but rational. Each delivered meal becomes a small negotiation with one's own perceived limitations—a way of managing not hunger, but one’s lack of motivation, of confidence, of tolerance for mistakes.
Take note of what I am not saying: I am not saying that no one ever has any reason to use these services. I am not saying that this dynamic captures everyone who orders a burrito. What I am saying is that our current discursive frameworks insist on treating two truths—“some disabled people find Uber Eats useful” and “some people have exploitative, incapacitating relationships with delivery apps”—as if they cannot coexist.
This complexity—the story's refusal to resolve into a simple narrative of technological harm or liberation—is precisely what makes our contemporary moment so resistant to easy diagnosis. We've been using the wrong metaphors, the wrong models. We've been seeing impacts where we should be seeing processes, and now we have to face down all that annoying dialectical stuff—the mutual constitution, the recursive transformations, the way systems and subjects co-evolve in ways that defy linear causality. I'm going to try to figure out a way of talking that doesn’t sound like a Marxist learning LISP.
Sooner next time,
Kevin