Kill Chain
On the automated bureaucratic machinery that killed 175 children

On the first morning of Operation Epic Fury, February 28, 2026, American forces struck the Shajareh Tayyebeh elementary school in Minab, in southern Iran, hitting the building at least twice during the morning session.1 The strikes killed between 175 and 180 people, most of them girls between the ages of seven and twelve. Within days, the question that organized the coverage was whether Claude had selected the school as a target. Congress wrote to the Secretary of Defense about the extent of AI use in the strikes. The New Yorker asked whether Claude could be trusted to obey orders in combat, whether it might resort to blackmail as a self-preservation strategy, and whether the Pentagon’s chief concern should be that the chatbot had a personality.2 Almost none of this had any relationship to reality. The targeting for Operation Epic Fury ran on a system called Maven. Nobody was arguing about Maven.
Eight years ago, Maven was the most contested project in Silicon Valley. In 2018, more than four thousand Google employees signed a letter opposing the company’s contract to build artificial intelligence for the Pentagon’s targeting systems.3 Workers organized a walkout. Engineers quit. And Google ultimately abandoned the contract. Palantir Technologies took it over and spent the next six years building it into a targeting infrastructure that fuses satellite imagery, signals intelligence, and sensor feeds into target packages and moves them from nomination to strike. The building in Minab had been classified as a military facility in a Defense Intelligence Agency database that had not been updated since at least 2013, years after it had been walled off from the adjacent IRGC compound and converted into a school.4 Maven processed that list. This is what the 2018 protesters were afraid of. By the start of the Iran War, Maven had sunk into the plumbing and become part of the military’s infrastructure, and the argument was all about Claude. This obsession with Claude is a kind of AI psychosis, though not of the kind we normally talk about, and it afflicts critics and opponents of the technology as fiercely as it does its boosters. You do not have to use a language model for it to organize your attention or distort your thinking.
In 2019, the STS scholar Morgan Ames published The Charisma Machine, a study of how certain technologies draw attention, resources, and attribution toward themselves and away from everything else in the system they inhabit.5 The usual framework for understanding this dynamic is “hype,” but hype only describes what boosters do, and it assigns critics a privileged debunking role that still leaves the technology at the center of every argument. A charismatic technology shapes the whole field around it, the way a magnet organizes iron filings. LLMs may be the most powerful instance of this type in history. By the time the war began, the discourse had already become magnetized. “AI safety” and “alignment” and “hallucination” and “stochastic parrots” had become the terms of every argument about artificial intelligence, structuring and limiting what we could even say. Worse, “artificial intelligence” itself had come to be synonymous with LLMs. When the school was bombed, those were the terms people reached for, despite the fact that this critical apparatus offered a poor fit for the older, more mature stack of technologies involved in targeting. The real question, the question almost nobody was asking, is not about Claude or any language model. It is a bureaucratic question about what happened to the kill chain, and the answer is Palantir.
As military jargon goes, “kill chain” is a remarkably honest term. In essence, it refers to the bureaucratic framework for organizing the steps between detecting something and destroying it. The oldest reference to the term itself I can find is from the 1990s, but the idea is much older, dating at least to the 1760s, when French artillery reformers began replacing the gunner’s experienced eye with ballistic tables, elevation screws, and standardized firing procedures.6 The steps in the “kill chain” are subject to constant change, to keep pace with changes in targeting doctrine, but also to incorporate whatever management fads the military’s strategic thinkers have become afflicted by. The U.S. military has named and renamed the steps for eighty years. In the Second World War the sequence was Find, Fix, Fight, Finish. By the 1990s the Air Force had stretched it to Find, Fix, Track, Target, Engage, Assess, or F2T2EA. Every generation of military technology has been sold on the promise of making everything about kill chains shorter, except for the acronyms.
Palantir’s Maven Smart System is the latest iteration of this compression, and it grew out of a shift in strategic thinking during Obama’s second term. In 2014, Secretary of Defense Chuck Hagel and his deputy, Robert Work, announced what they called the “Third Offset Strategy.”7 An “offset” in this line of thinking is essentially a bet that a technological advantage can compensate for a strategic weakness the country cannot fix directly. The first two “offsets” addressed the same problem: the United States could not match the Soviet Union in conventional forces. The thinking was that the Red Army could just continue to throw personnel at a problem, as they did at Stalingrad, or, to be a little anachronistic, as the contemporary Russian Army did at Bakhmut and Avdiivka. Nuclear weapons, the first offset, made the personnel advantage irrelevant in the 1950s. When the Soviets reached nuclear parity in the 1970s, precision-guided munitions and stealth offered the promise that a smaller force could defeat a larger one. By 2014, that advantage was eroding. China and Russia had spent two decades acquiring precision-guided munitions and building anti-access systems designed to neutralize the ones the US already had. Work insisted that the third offset was not about any particular technology but about operational and organizational constructs that would let the United States make decisions faster than China and Russia, overwhelming and disorienting the enemy by maintaining a faster operational tempo than they could match.8
In April 2017, early in the first Trump administration, Work signed a memo establishing the Algorithmic Warfare Cross-Functional Team, designated “Project Maven.”9 Lieutenant General Jack Shanahan, who oversaw Maven, put the problem plainly: thousands of intelligence analysts were spending eighty percent of their time on mundane tasks, drowning in footage from surveillance drones that no one had time to watch. A single Predator mission could generate hundreds of hours of video, and the analysts tasked with making sense of it faced an information-overload problem.10 “We’re not going to solve it by throwing more people at the problem,” Shanahan said, “that’s the last thing that we actually want to do.”11 The core conceit of the project was that the machine could watch so that the analyst could think.
The Pentagon needed someone to build it. Google took the contract, and what happened next became the most visible labor action in the history of Silicon Valley.
After Google abandoned the Maven contract in 2018, Palantir took it over. In 2020, the XVIII Airborne Corps began testing the system in an exercise called “Scarlet Dragon,” which started as a tabletop wargaming exercise in a windowless basement at Fort Bragg.12 Its commander, Lieutenant General Michael Erik Kurilla, wanted to build what he called the first “AI-enabled Corps” in the Army.13 The goal was to test whether the system could give a small team the targeting capacity of a full theater operation. Over the next five years, Scarlet Dragon grew through more than ten iterations into a joint live-fire exercise spanning multiple states, with “forward-deployed engineers” from Palantir and other contractors embedded alongside soldiers.14 Each iteration was meant to answer the same question: how fast the system could move from detection to decision. The benchmark was the 2003 invasion of Iraq, where roughly two thousand people worked the targeting process for the entire theater.15 During Scarlet Dragon, twenty soldiers using Maven handled the same volume of work. By 2024, the stated goal was a thousand targeting decisions in an hour. That is 3.6 seconds per decision for the system as a whole, or, split among twenty operators at fifty decisions each per hour, one decision every 72 seconds from the individual “targeteer’s” perspective.
The Maven Smart System is the platform that came out of those exercises, and it, not Claude, is what is being used to produce “target packages” in Iran. There are real limits to what a civilian like myself can know about this system, and what follows is based on publicly available information, assembled from Palantir product demos and conference presentations, as well as instructional material produced for military users. But we can know quite a bit. The interface looks like a tacticool, dark-mode send-up of enterprise software paired with the features of a geospatial application like ArcGIS. What the operator sees are either maps with GIS-like overlays or a screen organized like a project management board. There are columns representing stages of the targeting process, with individual targets moving across them from left to right, as in a Kanban board.
Before Maven, operators worked across eight or nine separate systems simultaneously, pulling data from one, cross-referencing in another, manually moving detections between platforms to build a targeting case. Maven consolidated and orchestrated all of these behind a single interface. Cameron Stanley, the Pentagon’s chief digital and AI officer, called it an “abstraction layer,” a common software-engineering term for a system that hides the complexity underneath it.16 Humans run the targeting; the ML systems underneath produce confidence intervals. Three clicks convert a data point on the map into a formal detection and move it into a targeting pipeline. These targets then move through columns representing different decision-making processes and rules of engagement. The system evaluates factors and presents ranked options for which platform and munition to assign, what the military calls a Course of Action. The officer selects from the ranked options, and the system, depending on who is using it, either sends the target package to an officer for approval or moves it to execution.
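To make the shape of that workflow concrete, here is a minimal sketch of a targeting pipeline as a Kanban-style state machine, with the F2T2EA steps from the doctrine above as the columns. Everything in it (the field names, the confidence threshold, the gating rule) is my illustrative assumption, not Palantir’s actual schema; the point is only how naturally the kill chain reduces to cards advancing through columns.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

# The F2T2EA steps described above, treated as Kanban columns.
class Stage(Enum):
    FIND = auto()
    FIX = auto()
    TRACK = auto()
    TARGET = auto()
    ENGAGE = auto()
    ASSESS = auto()

@dataclass
class TargetPackage:
    name: str
    confidence: float                 # score from an upstream ML detector
    stage: Stage = Stage.FIND
    history: list = field(default_factory=list)

    def advance(self, approved_by: str) -> None:
        """Move the card one column to the right, recording who approved."""
        stages = list(Stage)
        i = stages.index(self.stage)
        if i + 1 < len(stages):
            self.history.append((self.stage, approved_by))
            self.stage = stages[i + 1]

# A hypothetical gate: cards above the threshold advance on a click;
# cards below it wait for discretionary review. Nothing in the pipeline
# forces anyone to look past the score itself.
REVIEW_THRESHOLD = 0.85

def process(board: list, operator: str) -> None:
    for card in board:
        if card.confidence >= REVIEW_THRESHOLD:
            card.advance(approved_by=operator)

board = [TargetPackage("facility-0041", confidence=0.93)]
process(board, operator="targeteer-07")
print(board[0].stage)  # Stage.FIX
```

What the sketch makes visible is where judgment can live in such a system: in a single numeric threshold. The rest is cards moving right.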
The AI underneath the interface is not a language model, or at least the AI that counts is not. The systems that detect targets in satellite imagery, fuse data from radar and drone footage, and track objects across multiple intelligence sources are computer vision and sensor fusion.17 They predate large language models by years. Neither Claude nor any other LLM detects targets, processes radar, fuses sensor data, or pairs weapons to targets. LLMs are late additions to Palantir’s ecosystem: “AIP,” added in late 2024, years after the core system was operational, is a natural language layer that summarizes documents or constructs and answers queries.18 When Anthropic was blacklisted, the Pentagon signed a replacement contract with OpenAI within hours. Replacing one language model with another is often just a simple configuration change; all you really have to do is change the API endpoint.
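To see why, consider a sketch of what vendor-swapping looks like when the language model sits behind a thin integration layer. This assumes an OpenAI-compatible HTTP API, which many vendors now expose; Anthropic’s native API differs in its details, which is part of why the swap is “often” simple rather than always. The configuration values and the function below are hypothetical illustrations, not anyone’s actual integration.

```python
import os
import requests

# The entire vendor dependency reduced to three configuration values.
# Swapping models means changing these, not rewriting the system.
LLM_CONFIG = {
    "base_url": os.environ.get("LLM_BASE_URL", "https://api.openai.com/v1"),
    "model": os.environ.get("LLM_MODEL", "gpt-4o"),
    "api_key": os.environ.get("LLM_API_KEY", ""),
}

def summarize(text: str) -> str:
    """Send a document to whichever model the config points at."""
    resp = requests.post(
        f"{LLM_CONFIG['base_url']}/chat/completions",
        headers={"Authorization": f"Bearer {LLM_CONFIG['api_key']}"},
        json={
            "model": LLM_CONFIG["model"],
            "messages": [{"role": "user", "content": f"Summarize: {text}"}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

Change two environment variables and the system talks to a different company’s model. Nothing in the targeting stack notices.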
The language model was never what mattered about this system. What mattered was what Maven did to the process: it consolidated the systems, compressed the time, and reduced the people. That is not a new idea. The United States military has been trying to close the gap between seeing something and destroying it for as long as that gap has existed, and every attempt has produced the same failure. Maven may not even be the most extreme case.
In the late 1960s, the United States faced a version of the same problem in Vietnam. Supplies were moving south along the Ho Chi Minh Trail through jungle the military could not see into. The solution was Operation Igloo White, a billion-dollar-a-year program that scattered twenty thousand acoustic and seismic sensors along the trail.19 These sensors transmitted data to relay aircraft overhead, which fed the signals to IBM 360 computers at Nakhon Phanom Air Base in Thailand. The computers analyzed the sensor data and predicted where convoys would be, and strike aircraft were directed to those coordinates.20 The system could sense but it could not see. It could detect a vibration but it could not tell a truck from an ox cart. The North Vietnamese figured this out. They played recordings of truck engines, herded animals near the sensors to trigger vibration detection, and hung buckets of urine in trees to set off the chemical detectors.21 The system could be fooled because nobody in the process could look at what it was sensing. The Air Force claimed forty-six thousand trucks destroyed or damaged over the course of the campaign. The CIA reported that the claims for a single year exceeded the total number of trucks believed to exist in all of North Vietnam.22 The system’s own output was the only measure of its performance, and nobody outside the system had standing to challenge it. Air Force historian Bernard Nalty later called the service’s casualty computations “an exercise in metaphysics rather than mathematics,” and his colleague Earl Tilford concluded that “the Air Force succeeded only in fooling itself” that the program was working.23 When daytime reconnaissance flights failed to find the wreckage of all those trucks, Air Force personnel invented a creature to explain the absence. They called it “the Great Laotian Truck Eater.”24
The pattern here seems new, but it is older than digital computing, rooted in a problem that emerged as soon as the United States started fighting air wars. Michael Sherry’s The Rise of American Air Power (1987) traces it to the founding doctrine of precision bombing, whose confidence in its own methods made examining what those methods produced seem unnecessary.25 “Belief in success,” Sherry wrote, “encouraged imprecision about how to achieve it.” By 1944, operations analysts on both sides of the Atlantic were measuring bombing in a shared language of industrial optimization. Workers were “dehoused.” Man-hours lost were calculated per ton dropped. One British evaluation treated the bomber itself as a capital asset: a single sortie against a German city wiped off the cost of building the aircraft, and everything after that was “clear profit.” Sherry called the resulting mindset “technological fanaticism.” Kenneth Burke would have called it a “technological psychosis.”26 Either way, Sherry’s point was not that anyone chose destruction. It was that the people refining the technique of bombing stopped asking what the bombing was for. And by the time the operations researchers got their hands on targeting, this logic was already taking shape. As William Thomas has argued, the operations analysts did not impose this logic on the military; the military was already converting operational experience into systematic procedure, and had been for decades.27 Nobody stopped making judgments. But the judgments were no longer about whether the bombing served a strategic purpose. They were about how to measure it and how to optimize around those measurements.
Clausewitz had a word for everything the optimization leaves out. He called it “friction,” the accumulation of uncertainty, error, and contradiction that ensures no operation goes as planned. But friction is also where judgment forms. Clausewitz observed that most intelligence is false and that reports contradict each other. The commander who has worked through this learns to see the way an eye adjusts to darkness, not by getting better light but by staying long enough to use what light there is. The staying is what takes time. Compress the time and the friction does not disappear. You just stop noticing it. Clausewitz called what unfolds when you refuse to notice a “war on paper,” a plan that proceeds without resistance because everything that connected it to the world it was supposed to act on has been taken out.28
Air power is uniquely vulnerable to this. The pilot never sees what the bomb hits. The analyst works from imagery, coordinates, databases. The entire enterprise is mediated by representations of the target, not the target itself, which means the gap between the package and the world can widen without anyone in the process feeling it. The 2003 invasion of Iraq, the operation that Scarlet Dragon would later use as its benchmark, was a case in point. Marc Garlasco, the Pentagon’s chief of high-value targeting during the invasion, ran the fastest targeting cycle the US had operated to that point. He recommended fifty leadership strikes. The bombs were precise. The intelligence behind them was not. None of the fifty killed its intended target. Two weeks after the invasion, Garlasco left the Pentagon for Human Rights Watch, went to Iraq, and stood in the crater of a strike he had targeted himself. “These aren’t just nameless, faceless targets,” he said later. “This is a place where people are going to feel ramifications for a long time.”29 The targeting cycle had been fast enough to hit fifty buildings and too fast to discover it was hitting the wrong ones.
The Air Force’s own targeting guide, in effect during the Iraq War, said this was never supposed to happen. Published in 1998, it described the six functions of targeting as “intertwined,” with the targeteer moving “back” to refine objectives and “forward” to assess feasibility. “The best analysis,” the manual stated, “is reasoned thought with facts and conclusions, not a checklist.”30 But Jon Lindsay, who served as a Navy intelligence officer in Kosovo and later studied special operations targeting in Iraq, found something different. Once a target was reified on a PowerPoint slide — the target intelligence package, or TIP — it became a black box. Questioning the assumptions behind it got harder as the hunt gained momentum, as collection effort accumulated, as the folder thickened with what Lindsay calls “representational residua.” There was more machinery for building up a target than for inspecting the quality of its construction. Personnel became disinclined to ask whether some targets were potential allies, or not actually bad guys at all, because producing targets meant participating in the hunt.31 The targeting guide had warned about this too. “If targeteers don’t provide full targeting service,” it read, “then other well meaning but under trained and ill-experienced groups will step in.”32 Maven eventually would.
Lindsay’s book, Information Technology and Military Power, is the most careful study I’ve found of how targeting actually works, not least because it was written by someone who did the work. During the Kosovo air war, General Wesley Clark demanded two thousand targets, which made it easy to justify any target’s connection to the Milošević government. The CIA assembled its only target nomination of the war, a package on the Federal Directorate of Supply and Procurement.33 Analysts had a street address but not coordinates, so they tried to reverse-engineer a location from three outdated maps, and hit the Chinese embassy — which had recently relocated — 440 meters from the building they were aiming for.34 The State Department knew about the move. The military’s facilities database did not. Target reviews failed to notice, because each validation relied on the last. Lindsay calls this circular reporting: an accumulation of supporting documents that “created the illusion of multiple validations” while amplifying a single error. The PowerPoint slide looked as well vetted as the hundreds of others that NATO struck without incident. On the night of the strike, an intelligence analyst phoned headquarters to express doubts. Asked specifically about collateral damage, he could not articulate a concern. The strike proceeded. It killed three Chinese journalists. Lindsay, writing in his journal at the time, called the result “an immense error, perfectly packaged.”
Lt Col John Fyfe’s study of time-sensitive targeting during the 2003 invasion found the exception. In the Combined Air Operations Center, Royal Air Force officers served in key leadership positions alongside their American counterparts. They operated under more restricted rules of engagement. Fyfe noted that their “more reserved, conservative personalities” produced what he called “a very positive dampening effect on the sometimes harried, chaotic pace of offensive operations.” The contrast between shifts was visible: American leaders pressed ahead full bore, while British officers methodically reconsidered risk and cost-benefit tradeoffs before approving execution. On UK-led shifts, there were no friendly fire incidents and no significant collateral damage. On numerous occasions, Fyfe notes, the British officer in charge prevented the operation from getting ahead of itself. What the next generation of reformers would measure as latency was the mechanism that caught mistakes.35
From inside the efficiency frame, every feature Fyfe describes registered as a defect. The UK shifts were slower. The restricted rules of engagement added constraints. The dampening effect added time. Speed saves lives, the argument goes, but the fastest targeting cycle before Maven was Garlasco’s, and it struck fifty buildings without hitting a single intended target. Scarlet Dragon eliminated all of it. The disagreements about targeting stopped. So did everything the disagreements were doing.
Organizations that run on formal procedure need someone inside the process to interpret the rules, notice exceptions, recognize when the categories no longer fit the case. But the procedural form cannot admit this. If the organization concedes that its outcomes depend on the discretion of the people executing it, then the procedure is not a procedure but a suggestion, and the authority the organization derives from appearing rule-governed collapses. So the judgment has to happen, and it has to look like something else. It has to look like following the procedure rather than interpreting it. I’ve come to think of this as the “bureaucratic double bind”: the organization cannot function without the judgment, and it cannot acknowledge the judgment without undermining itself and being seen as “political.” One solution to this problem is to replace the judgment with a number. Theodore Porter, in Trust in Numbers (1995), argued that organizations adopt quantitative rules not because numbers are more accurate but because they are more defensible.36 Judgment is politically vulnerable. Rules are not. The procedure exists to make discretion disappear, or seem to. The system’s actual flexibility lives entirely in this unacknowledged interpretive work, which means it can be removed by anyone who mistakes it for inefficiency.
Harry Braverman argued in Labor and Monopoly Capital (1974) that managers gain control of a process not by making the worker faster but by separating conception from execution. The worker who holds both can exercise judgment the institution cannot govern. Move conception into the system and the worker becomes an operator, executing decisions that arrive from somewhere else.37 Alex Karp, the CEO of Palantir, describes exactly this achievement in The Technological Republic (2025).38 “Software is now at the helm,” he writes, with AI systems that “metabolize data and make targeting recommendations” and hardware “serving as the means by which the recommendations of AI are implemented in the world.” His model for what this should look like comes from nature: bee swarms, murmurations of starlings. “There is no mediation of the information captured by the scouts once they return to the hive,” Karp writes. No weekly reports, no presentations to senior leaders, no meetings to prepare for other meetings. Braverman would have recognized this. The signal that passes without mediation is the signal that nobody can question.
Karp thinks he is destroying bureaucracy. He is encoding it. He treats the objects of his contempt, the meetings and weekly reports and presentations to senior leaders, as the bureaucratic process itself. They were not. They were where people interpreted the procedure, the place where someone could notice that the categories no longer fit the case. The targeting doctrine is still there. The rules of engagement are still there. They are the columns on the Kanban board. What Karp eliminated was the discretion the institution could never admit it depended on, and what he got was a bureaucracy that runs exactly as written. Encoded bureaucracy does not bend. It shatters.
The target package for the Shajareh Tayyebeh school presented a military facility. Lucy Suchman, whose Plans and Situated Actions (1987) remains the sharpest account of how formal procedures obscure the work that actually produces their outcomes, would not have been surprised.39 Plans always look complete afterward. They achieve completeness by filtering out everything that wasn’t legible to their categories. This package looked like every other package in the queue. But outside the package, the school appeared in Iranian business listings. It was visible on Google Maps. A search engine could have found it. Nobody searched. At a thousand decisions an hour, nobody was going to. A former senior defense official asked the obvious question: “The building was on a target list for years. Yet this was missed, and the question is how.”40 How indeed.
Congress did not authorize this war. In three weeks, American forces struck six thousand targets. The school was one of them. American forces killed almost 200 people, and the reporting reached for “AI error,” which domesticated the event into something a better algorithm or better guardrails could have prevented. In the days after the strike, the charisma of AI organized the entire political conversation around the technology: whether Claude hallucinated, whether the model was aligned, whether Anthropic bore responsibility for its deployment. The constitutional question of who authorized this war and the legal question of whether this strike constitutes a war crime were displaced by a technical question that is easier to ask and impossible to answer in the terms it set. The Claude debate absorbed the energy. That is what charisma does.
It has also occluded something deeper. The philosopher Mark Wilson has observed that concepts often appear stable while the work they do shifts entirely as they move between domains. He calls this “wandering significance.”41 “Decision” is wandering now. In the military context it means the system’s output, a targeting nomination scored and forwarded. In journalism and tech criticism it means much the same, the thing the AI got wrong. In neither of these does it mean what it used to mean, which is a person choosing to do something for which they could be held accountable. People are still making decisions in that sense. Someone decided to compress the kill chain. Someone decided that deliberation was latency. Someone decided to build a system that produces a thousand targeting decisions per hour and call them high-quality. Someone decided to start this war. Several hundred people are sitting on Capitol Hill, refusing to stop it. Calling it an “AI problem” gives those decisions, and those people, a place to hide.
“Al Jazeera Investigation: Iran Girls’ School Targeting Likely ‘Deliberate,’” Al Jazeera, March 3, 2026, https://www.aljazeera.com/news/2026/3/3/questions-over-minab-girls-school-strike-as-israel-us-deny-involvement.
Gideon Lewis-Kraus, “The Pentagon Went to War with Anthropic. What’s Really at Stake?,” The New Yorker, March 2026.
Scott Shane and Daisuke Wakabayashi, “The Business of War: Google Employees Protest Work for the Pentagon,” New York Times, April 4, 2018.
Scott Shane and Daisuke Wakabayashi, “The Business of War: Google Employees Protest Work for the Pentagon,” New York Times, April 4, 2018.
Morgan G. Ames, The Charisma Machine: The Life, Death, and Legacy of One Laptop per Child (Cambridge, MA: MIT Press, 2019).
Ken Alder, Engineering the Revolution: Arms and Enlightenment in France, 1763–1815 (Princeton, NJ: Princeton University Press, 1997).
Chuck Hagel, “Reagan National Defense Forum Keynote” (speech, Ronald Reagan Presidential Library, Simi Valley, CA, November 15, 2014). Full text published in War on the Rocks, November 20, 2014, https://warontherocks.com/2014/11/a-game-changing-third-offset-strategy/.
Robert O. Work, remarks at the Air Force Association Air, Space and Cyber Conference, National Harbor, MD, September 21, 2016. Reported in Sydney J. Freedberg Jr., “Air Force Leading Way to 3rd Offset: Bob Work,” Breaking Defense, September 21, 2016, https://breakingdefense.com/2016/09/air-force-ops-centers-lead-way-to-3rd-offset-bob-work/.
Sydney J. Freedberg Jr., “People, Not Tech: DepSecDef Work on 3rd Offset, JICSPOC,” Breaking Defense, February 12, 2016, https://breakingdefense.com/2016/02/its-not-about-technology-bob-work-on-the-3rd-offset-strategy/.
Robert O. Work, “Memorandum: Establishment of an Algorithmic Warfare Cross-Functional Team (Project Maven),” Office of the Deputy Secretary of Defense, April 26, 2017, https://www.govexec.com/media/gbc/docs/pdfs_edit/establishment_of_the_awcft_project_maven.pdf.
Marcus Weisgerber, “The Pentagon’s New Artificial Intelligence Is Already Hunting Terrorists,” Defense One, December 21, 2017, https://www.defenseone.com/technology/2017/12/pentagons-new-artificial-intelligence-already-hunting-terrorists/144777/.
Marcus Weisgerber, “Pentagon Will Use Artificial Intelligence to Find New Targets in the Fight against ISIS,” Defense One, May 14, 2017.
Maj. Matthew St. Clair and Sgt. Hermon Whaley Jr., “Scarlet Dragon Exercises: XVIII Airborne Corps Using AI to Share Data More Efficiently,” AUSA, December 1, 2025, https://www.ausa.org/articles/scarlet-dragon-exercises-xviii-airborne-corps-using-ai-share-data-more-efficiently.
Sydney J. Freedberg Jr., “Army AI Gets Live Fire Test Next Week,” Breaking Defense, February 26, 2021, https://breakingdefense.com/2021/02/army-ai-gets-live-fire-test-next-week/.
Todd South, “This System May Allow Small Army Teams to Probe 1,000 Targets per Hour,” Army Times, August 21, 2024, https://www.armytimes.com/news/your-army/2024/08/21/this-system-could-allow-small-army-teams-to-hit-1000-targets-per-hour/.
Emelia Probasco, Building the Tech Coalition: How Project Maven and the U.S. 18th Airborne Corps Operationalized Software and Artificial Intelligence for the Department of Defense (Washington, DC: Center for Security and Emerging Technology, Georgetown University, August 2024), https://cset.georgetown.edu/publication/building-the-tech-coalition/.
Cameron Stanley, “Multi-Domain AI: The Future of Command and Control” (presentation, Palantir AIPCon 9, March 13, 2026), YouTube video, posted by Palantir.
Katrina Manson, “AI Warfare Becomes Real for US Military with Project Maven,” Bloomberg, February 28, 2024.
Richard H. Shultz, “Big Data at War: Special Operations Forces, Project Maven, and Twenty-First-Century Warfare,” Modern War Institute at West Point, March 25, 2021, https://mwi.westpoint.edu/big-data-at-war-special-operations-forces-project-maven-and-twenty-first-century-warfare/.
“Palantir Expands Maven Smart System AI/ML Capabilities to Military Services,” Palantir, 2024, https://investors.palantir.com/news-details/2024/Palantir-Expands-Maven-Smart-System-AIML-Capabilities-to-Military-Services/.
Paul Dickson, The Electronic Battlefield (Bloomington: Indiana University Press, 1976).
Paul N. Edwards, The Closed World: Computers and the Politics of Discourse in Cold War America (Cambridge, MA: MIT Press, 1996).
John T. Correll, “Igloo White,” Air Force Magazine, November 2004, https://www.airandspaceforces.com/article/1104igloo/.
Henry S. Shields, Project CHECO Southeast Asia Report: Igloo White, July 1968–December 1969 (Hickam AFB, HI: HQ PACAF, Directorate of Operations Analysis, CHECO Division, 1970), https://apps.dtic.mil/sti/citations/ADA485166.
Henry S. Shields, Project CHECO Southeast Asia Report: Igloo White, January 1970–September 1971 (Hickam AFB, HI: HQ PACAF, Directorate of Operations Analysis, CHECO Division, November 1971), https://apps.dtic.mil/sti/citations/ADA485194.
Bernard C. Nalty, The War against Trucks: Aerial Interdiction in Southern Laos, 1968–1972 (Washington, DC: Air Force History and Museums Program, 2005).
Thomas R. Yarborough, “Truck Hunting on the Ho Chi Minh Trail,” Vietnam 26, no. 3 (October 2013), https://historynet.com/truck-hunting-ho-chi-minh-trail/.
Central Intelligence Agency, “North Vietnam’s Trucks and the War,” n.d., CIA FOIA Electronic Reading Room, https://www.cia.gov/readingroom/document/loc-hak-558-13-2-6.
US Senate, Committee on Foreign Relations, Subcommittee on United States Security Agreements and Commitments Abroad, Laos: April 1971; A Staff Report, 92nd Cong., 1st sess. (Washington, DC: Government Printing Office, 1971), https://books.google.com/books?id=LkngM5NGvfoC.
Bernard C. Nalty, The War against Trucks: Aerial Interdiction in Southern Laos, 1968–1972 (Washington, DC: Air Force History and Museums Program, 2005).
Earl H. Tilford Jr., Crosswinds: The Air Force’s Setup in Vietnam (College Station: Texas A&M University Press, 1993).
Jack S. Ballard, Development and Employment of Fixed-Wing Gunships, 1962–1972 (Washington, DC: Office of Air Force History, 1974), 181, https://media.defense.gov/2011/Mar/23/2001330095/-1/-1/0/AFD-110323-040.pdf.
Michael S. Sherry, The Rise of American Air Power: The Creation of Armageddon (New Haven, CT: Yale University Press, 1987).
Kenneth Burke, Permanence and Change: An Anatomy of Purpose (New York: New Republic, 1935).
William Thomas, Rational Action: The Sciences of Policy in Britain and America, 1940–1960 (Cambridge, MA: MIT Press, 2015).
Carl von Clausewitz, On War, ed. and trans. Michael Howard and Peter Paret (Princeton, NJ: Princeton University Press, 1976).
Josh White, “The Man on Both Sides of Air War Debate,” Washington Post, February 13, 2008, https://www.washingtonpost.com/wp-dyn/content/article/2008/02/12/AR2008021202692.html.
Human Rights Watch, Off Target: The Conduct of the War and Civilian Casualties in Iraq (New York: Human Rights Watch, December 2003), https://www.hrw.org/report/2003/12/11/target/conduct-war-and-civilian-casualties-iraq.
US Air Force, USAF Intelligence Targeting Guide, Air Force Pamphlet 14-210 (Washington, DC: Department of the Air Force, February 1, 1998), https://irp.fas.org/doddir/usaf/afpam14-210/.
Jon R. Lindsay, Information Technology and Military Power (Ithaca, NY: Cornell University Press, 2020).
Jon R. Lindsay, “Target Practice: Counterterrorism and the Amplification of Data Friction,” Science, Technology, & Human Values 42, no. 6 (2017): 1061–99.
US Air Force, USAF Intelligence Targeting Guide, Air Force Pamphlet 14-210 (Washington, DC: Department of the Air Force, February 1, 1998), https://irp.fas.org/doddir/usaf/afpam14-210/.
Jon R. Lindsay, Information Technology and Military Power (Ithaca, NY: Cornell University Press, 2020).
US Department of State, “Oral Presentation to the Chinese Government Regarding the Accidental Bombing of the P.R.C. Embassy in Belgrade,” July 6, 1999.
Eric Schmitt, “In a Fatal Error, C.I.A. Picked a Bombing Target Only Once: The Chinese Embassy,” New York Times, July 23, 1999.
John M. Fyfe, The Evolution of Time Sensitive Targeting: Operation Iraqi Freedom Results and Lessons (Maxwell AFB, AL: Air University Press, 2003), https://apps.dtic.mil/sti/tr/pdf/ADA476994.pdf.
Theodore M. Porter, Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (Princeton, NJ: Princeton University Press, 1995).
Harry Braverman, Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century (New York: Monthly Review Press, 1974).
Alexander C. Karp and Nicholas W. Zamiska, The Technological Republic: Hard Power, Soft Belief, and the Future of the West (New York: Crown Currency, 2025).
Lucy A. Suchman, Plans and Situated Actions: The Problem of Human-Machine Communication (Cambridge: Cambridge University Press, 1987).
Madhumita Murgia, Charles Clover, Jacob Judah, and Alison Killing, “The AI-Driven ‘Kill Chain’ Transforming How the US Wages War,” Financial Times, March 12, 2026, https://www.ft.com/content/fedb262e-e6db-40bc-a4d0-080812f0f82b.
Mark Wilson, Wandering Significance: An Essay on Conceptual Behavior (Oxford: Oxford University Press, 2006).

