Is the U.S. Ready for the Next War?

Late this spring, I was led into a car in Kyiv, blindfolded, and driven to a secret factory in western Ukraine. The facility belongs to TAF Drones, founded three years ago by Oleksandr Yakovenko, a young Ukrainian businessman who wanted to help fend off the Russian invasion. When the war started, Yakovenko was busy running a logistics company in Odesa, but his country needed all the help it could get. Ukraine was overmatched—fighting a larger, wealthier adversary with a bigger army and more sophisticated weapons. “The government said to me, ‘We need you to make drones,’ ” Yakovenko told me. “So I said to my guys, ‘You have four hours to make up your minds. Leave or stay—and, if you stay, promise me that you’ll do your best to help our military.’ ”

Yakovenko’s task was to set up factories to mass-produce unmanned vehicles, designed to overwhelm whatever Russia sent across the border. When I visited his fab, as the plants are called, more than a hundred employees, many of them women, were working intently in a setting that seemed more college campus than munitions factory. With techno music humming in the background, they tended to 3-D printers, assembled carbon-fibre components, carried out flight simulations, adjusted video cameras and radio transmitters. “It’s quite meditative,” one of the women told me.

The TAF fabs are part of a constellation of similar facilities, hidden in basements, warehouses, and old factories, which have helped the Ukrainians battle the Russian Army to a stalemate. The one that I visited makes about a thousand drones a day. They are sophisticated and lethal and, above all, cheap, produced for about five hundred dollars apiece. Some are used for surveillance and some to ferry supplies, but most of them, laden with explosives and directed by an operator through a video screen, are crashed directly into their targets. One of Yakovenko’s managers showed me a fuzzy black-and-white video, taken in April, of a night operation behind enemy lines. Onscreen, a drone equipped with a thermal camera dived toward a TOS-1 rocket launcher, and then the screen exploded in a white flash. Russia builds TOS-1 units for about five million dollars apiece. “One of our drones costs a tiny fraction of what it destroys,” the manager told me. “That’s our advantage.”

When the Russian Army rolled into Ukraine, it was equipped for a conflict from an earlier era: an old-fashioned land war prosecuted by tanks and heavy artillery. In response, Ukraine devised a futuristic take on hit-and-run guerrilla operations. Now, when a Russian column tries to advance, it is met by a swarm of buzzing bombs. Russia has suffered about a million casualties in its attempt to invade. Since early 2024, according to an estimate by Mykhailo Samus, a researcher in Kyiv, about eighty per cent of its losses in men and matériel have been inflicted by drones.

The most dramatic application of this asymmetric approach came in June, when a fleet of more than a hundred Ukrainian drones struck targets as far away as Siberia, destroying or damaging some twenty Russian warplanes. It was the most militarily significant attack on Russia since the Second World War. The Ukrainians released a taunting video, in which first-person views of the drones careering into the planes were set to a pulsing techno soundtrack. The videos were stamped “Failsafe,” a military term that suggests immunity to harm.

While the future of warfare is being invented in places like Ukraine, U.S. officials are looking on with a growing sense of urgency. For decades, the American armed forces have relied on highly sophisticated, super-expensive weapons, like nuclear-powered aircraft carriers and stealth fighters, which take years to design and cost billions of dollars to produce. (The country’s failures in Iraq and Afghanistan were not for a lack of technical prowess.) Since the end of the Cold War, these munitions have given the U.S. near-total dominance on land, sea, and air. But now the technological shifts that have stymied the Russian invasion of Ukraine are threatening to undermine America’s global military preëminence. David Ochmanek, a former Pentagon official and a defense analyst at the Rand Corporation, told me that the American way of war is no longer viable. “We are not moving fast enough,” he said.

Throughout history, technological advantages have altered the course of wars, sometimes suddenly. In the late nineteenth century, railways displaced horses as a way of moving and supplying armies, and the Prussians exploited them to overwhelm their French opponents. In the first Gulf War, the U.S. used precision-guided cruise missiles that could be steered into an office window from a thousand miles away. The Ukrainians argue that they represent a similar technological vanguard. “We are inventing a new way of war,” Valeriy Borovyk, the founder of First Contact, whose drones carried out the strike on the Russian warplanes, told me. “Any country can do what we are doing to a bigger country. Any country!”

America’s best approximation of Oleksandr Yakovenko is Palmer Luckey, who helped found the defense startup Anduril in 2017. Not long ago, he met me at the company’s headquarters, in Costa Mesa, California, amid an array of high-tech weapons: drones, missiles, pilotless planes. Anduril is housed in a cavernous building that once contained the Orange County offices of the Los Angeles Times, whose faded logo is still visible on the exterior walls. At thirty-two, Luckey embodies the stereotype of a cocky, gnomic tech mogul: shorts and a Hawaiian shirt, flip-flops, a mullet and a soul patch. As we talked, he snacked from a bag of chocolate-chip cookies.

He wanted to show off his creations, autonomous weapons that he believes will upend many of the American military’s most cherished notions of strategy and defense. He walked over to a model of the Dive-XL, an unmanned submarine that can go a thousand miles without surfacing and is designed to be produced as quickly as an IKEA couch. “I can make one of these in a matter of days,” he said.

The U.S. military is accustomed to doing business with huge, entrenched players: companies like Lockheed Martin and Northrop Grumman that employ tens of thousands of engineers and military veterans in a culture not unlike the one inside the Pentagon. Luckey, by contrast, built an early career in video games and virtual reality. At nineteen, working from his parents’ home, in Long Beach, he created a V.R. headset called Oculus, a technology that he promised would “transport us into worlds we cannot hope to experience in real life.” He sold the company for two billion dollars to Facebook, whose founder, Mark Zuckerberg, brought him on to oversee the Oculus team. Their collaboration was brief. In 2016, following a controversy over a contribution that Luckey made to a pro-Trump group, Zuckerberg fired him. “I had a real chip on my shoulder,” Luckey said. “I wanted to prove that Oculus wasn’t a fluke.”

A few months later, Luckey met with Trae Stephens, a principal at Founders Fund, a venture-capital firm led by Peter Thiel, the billionaire investor and libertarian political activist. Thiel had helped found Palantir, which was transforming the American defense establishment by integrating computer operations and simplifying tasks like tracking and destroying enemy targets. At Founders Fund, he and Stephens were searching for fledgling companies that could bring the breakthroughs of the tech world to the military.

Anduril has secured billions of dollars in defense contracts, as the Pentagon has been swept up in a wave of enthusiasm for unmanned systems. But many questions remain, including the fundamental one of whether such weapons work as well as Luckey says they do. Even with the Pentagon pouring cash into experiments, the vast majority of the budget still goes to the same kinds of programs that it has been pursuing for decades. A growing consensus of defense experts holds that the United States is dangerously unprepared for the conflicts it might face. In the past, the country’s opponents were likely to be terrorist groups or states with armies far smaller than ours. Now planners must contend with considerably different threats. On the one hand, there is the prospect of insurgents who can field swarms of armed drones. On the other, there is the rise of China—a “peer competitor,” which by some measures has surpassed the U.S. as a military force. There is no guarantee that we have the right matériel to prevail against either. “Shit,” Luckey said. “We’re like a gun store with no stock.”

During the Second World War and the decades after, the American armed forces devised technologies far more advanced than anything made in the private sector. “The military produced an astonishing amount of innovation,” Bill Greenwalt, a fellow at the American Enterprise Institute and a former staffer on the Senate Armed Services Committee, told me.

Facing an existential threat, the Pentagon adopted a free-form procurement process, with senior leaders often assigning several contractors to make prototypes for a single weapon and then giving a contract to the most successful contestant. “The generals threw money at good people, broke furniture, and picked winners,” Greenwalt said. This unconstrained methodology helped lead to the first reconnaissance satellites, the first integrated circuits, the first atomic weapon. “The important thing to remember about the Manhattan Project is that there were multiple pathways to success,” Greenwalt pointed out. “It was incredibly competitive.” In 1949, Admiral Hyman Rickover was assigned to oversee an effort to use the newly harnessed atomic energy to power a submarine—an idea that many observers considered fanciful. Five years later, the first nuclear submarine entered service.

Over time, though, the process became more regular and rules-bound. In 1960, President John F. Kennedy appointed a new Secretary of Defense, Robert McNamara, who had built his reputation by bringing organizational discipline to the Ford Motor Company. Under the system he helped implement, weapons were conceived not by industry but by the Pentagon, where planners were typically following five-year prospectuses drawn up by other Pentagon planners. It usually took years to design a new weapon—and only once the specifications were agreed upon did the Pentagon solicit input from defense companies and finally select a contractor to produce it. The new system was more orderly, but it was also less competitive and far less dynamic. “We stopped innovating,” Greenwalt said.

The combination of limited production capacity and expensive weapons sometimes constrained the government’s options. In March, President Trump vowed that the Houthis, an Iran-backed militia that was menacing global shipping in the Red Sea, would be “completely annihilated.” The Navy and the Air Force launched more than eleven hundred strikes, at a cost of at least a billion dollars in the first month. The Houthis, who sometimes operated out of speedboats and skiffs, kept on harassing ships. They shot down several American MQ-9 Reaper drones—which cost thirty million dollars apiece—and fired on two U.S. carriers. After seven weeks of fighting, they agreed to stop attacking American vessels, and Trump called off the campaign. But the Houthi force remains largely intact, and has attacked ships from other countries. Even this brief engagement left senior Pentagon officers worried that they had dangerously depleted the country’s stores of weapons.

Earlier this year, a group of Ukrainian officers stood in the lobby of a civilian building in Kyiv. Among them was Kyrylo Budanov, the country’s head of military intelligence—a hulking, baby-faced figure, instantly recognizable even though he was partly masked. He and his colleagues had gathered to boast. About a week before, a pair of Magura V7 pilotless attack boats had ventured into the Black Sea and shot down two Russian Su-30 fighter jets. It was the first time in history that combat aircraft had been shot down by maritime drones, the Ukrainians said. One of Budanov’s officers, a masked man who went by Thirteen, stepped forward and spoke through an electronic device that scrambled his voice. He pointed to a Magura V7 that had been wheeled in for the occasion: a sleek, low-slung craft made of fibreglass and polyethylene. It looked like a miniature speedboat with missiles attached. “The Ukrainian intelligence service has made a revolution in war in the sea,” he said.

As the conflict began, Russian warships roamed the Black Sea from their base in Sevastopol, a Ukrainian port captured in 2014. Ukraine hardly had a navy. When Russia blockaded the port of Odesa, a crucial outlet for grain and other agricultural commodities, it threatened to devastate an already battered economy. “We were desperate,” Thirteen told me.

Ukraine began attacking Russian naval vessels with missiles and aerial drones, and struck the Sevastopol base. Around the same time, it implemented two parallel programs to launch a fleet of naval drones. Group Thirteen, a newly created military-intelligence unit, oversaw the making of the Magura, a fast, maneuverable craft that would go after ships at sea. The country’s counter-intelligence agency put forth the Sea Baby, designed to carry heavier payloads and strike such targets as bridges and ships in harbor. With ranges of more than five hundred miles, the two could threaten adversaries almost anywhere in the Black Sea.

Ukraine released them into service, and, in the course of a few weeks in early 2024, swarms of Magura drones sank three Russian warships—the Ivanovets, the Tsezar Kunikov, and the Sergey Kotov. The rest of Russia’s Black Sea fleet soon retreated from Sevastopol and began dispersing from Novorossiysk, on the eastern shore. This March, the Russians agreed to a ceasefire in the Black Sea. “They didn’t have a choice,” Thirteen said.

At the beginning of the war, Ukraine used drones mostly for reconnaissance. But, as they showed their worth as weapons, their use expanded. Last year, by some estimates, Ukraine’s factories turned out more than three million drones. The key to successful operations, TAF workers told me, was that the manufacturers of the drones and the soldiers using them were in the same place, allowing the software and components to be continually tweaked. The drones that I examined were remarkably simple: a lightweight square frame, four propellers, a video camera, a battery-powered motor, and room for a bomb. The attack drones, known as F.P.V.s, for “first-person view,” are guided by an operator watching a video screen that shows what the drone is seeing; other members of the unit monitor feeds from reconnaissance drones. Yakovenko described a recent attack in which a Ukrainian pilot crashed his drone into a Russian tank, forcing the crew inside to flee. Other F.P.V. drones chased down the Russian soldiers. “We killed all of them,” he said.

The Russians are terrorizing the Ukrainians with drone attacks of their own. Towns and hamlets have been largely pulverized along the front lines and for miles beyond; even American air defenses are mostly useless, because setting them up invites an immediate Russian attack. Iranian-made Shahed drones, capable of carrying large warheads long distances, have pummelled Kyiv and other cities with hundreds of strikes. Under the constant threat of attack, the Ukrainians have found it difficult to supply their front lines, and evacuation is sometimes impossible.

The effects of Ukraine's June strike on Russian airfields were devastating, crippling about a dozen long-range bombers that were equipped to carry nuclear weapons. Borovyk, whose company made the drones, told me that the key was the element of surprise. Russia hadn't anticipated drone strikes so far from the border, and had no time to put jamming systems into place. "They were not prepared for that type of attack," Borovyk said.

Ukraine’s fighters have not yet been able to regularly deploy autonomous drones—the kind that can find targets without human help—but they are getting closer. Some of Borovyk’s drones were steered manually, but others were equipped with A.I. technology that could help them find their marks. According to reports in the Ukrainian press, the A.I. had been trained to recognize targets using images of old Soviet warplanes on display in an aviation museum east of Kyiv.

When Palmer Luckey began tinkering in a camper in his parents’ driveway, the kind of rapid innovation that is flourishing in Ukraine was almost unthinkable in the American defense establishment. Silicon Valley was producing a string of technological breakthroughs, but its leaders shied away from working on defense projects. The reasons were partly ideological—the tech business retained some of its roots in the seventies counterculture, which was revolted by the Vietnam War. But mostly the hesitation was pragmatic: the Pentagon’s development process was so slow that it typically took contractors years to receive any money. Many big Silicon Valley companies weren’t willing to wait, and smaller ones couldn’t afford to. Meanwhile, the technology that the Pentagon developed on its own often became obsolete before a weapon was even deployed. “By the time the F-35 came out, some of the microprocessors it used were slower than an iPhone,” a former Pentagon official who worked on tech issues told me.

In 2015, Ash Carter, President Obama’s Secretary of Defense, set out to bring the two communities together. Carter, who had a doctorate in theoretical physics, dispatched a team of officers to the Bay Area to set up an outpost—officially called the Defense Innovation Unit, but known at the Pentagon as Unit X. Its job was to find fledgling technology companies with interesting ideas and give them contracts. One of Unit X’s first initiatives was to do an end run around the Pentagon’s procurement process. By invoking an obscure paragraph buried in a budget-authorization bill, it was able to award contracts to companies as soon as they completed a successful pilot program. “Our goal was to shrink the Pentagon’s contracting process from ten years to six weeks,” Chris Kirchhoff, a founder of the unit, said. “We were able to do that.”

The Pentagon was also under pressure from Silicon Valley, which increasingly regarded itself as a rival power center to the government. In 2014 and 2016, the tech companies SpaceX and Palantir sued the government, claiming that it prevented private firms from competing for contracts; the companies argued that they could offer products at much lower costs. Both prevailed, and went on to receive billions of dollars’ worth of federal contracts, clearing the way for others.

As the Pentagon was opening up, Palmer Luckey got fired from Facebook and started Anduril. Among the first ideas that he brainstormed with his co-founders was an A.I. system that, by synthesizing enormous amounts of data, could learn to identify objects and track them in real time. Once it locked on, it could guide a mass-produced, disposable weapon to strike the target nearly anywhere on earth. They named the system Lattice, and a few months later they won their first government work: a contract, for U.S. Customs and Border Protection, to use Lattice in towers that tracked people moving across the U.S.-Mexico border.

The system worked, and Border Protection soon bought more. But Luckey believed that the ideal client for Lattice was the Pentagon. He explained to me that if the military needed to track a Chinese destroyer across the Pacific, Lattice could provide a real-time picture of the ship, using data from more than a hundred sources—a mix of classified and public channels that included geospatial satellites, ship beacons, radar, signal intercepts, and thermal sensors. With precise targeting, the military could sink the destroyer with a much smaller, cheaper missile than the ones it was using. “I can tell you, not only is that a Chinese destroyer, I can tell you which one it is—it’s a Luyang destroyer!” Luckey said. “I can tell you that because of the particular equipment it is configured with. And I know that, to achieve my objective—mission kill—I need to target either the bridge or its radar. I can put a missile right there.”

Rather than wait for military leaders to announce the kinds of weapons they needed, Anduril’s engineers would build sophisticated devices and offer them to the Pentagon. If the generals wanted something slightly different, Luckey’s team could simply rewrite the code. The weapons themselves would be little more than shells for software, making them much easier to build. “Our cruise missile has fifty per cent fewer parts than what the military uses now, and it can be put together with ten simple hand tools that I can put in a small bag,” Luckey said.

In 2018, with most of their ideas still inchoate, Luckey and Stephens walked into the office of Christian Brose, the defense adviser to Senator John McCain, who was then the chairman of the Senate Armed Services Committee. At the time, Anduril was a startup with twenty-five employees, hoping to break into the defense business. Brose, like his boss, had grown deeply frustrated with the Pentagon. He quickly realized that his objectives and Luckey's were aligned. Later that year, after McCain died of cancer, he joined Anduril as the head of defense strategy.

The way Brose saw it, the Pentagon had to be transformed. Not only did it need a new strategy; it also needed to supplant many of its most coveted weapons. “The U.S. used to have a system that worked, but it’s broken,” he told me. “We spend a ton on defense, but if we don’t change we’re going to lose the wars of the future.” The war that worried him and his peers most was with China.

Earlier this year, at the Center for Strategic and International Studies, in Washington, a dozen or so experts gathered to conduct a simulated war between the United States and China over the island of Taiwan. Though most discussion about such a conflict centers on an all-out Chinese invasion, the C.S.I.S. war game was built around what many observers regard as a more likely scenario: a blockade, designed to box out the American Navy and squeeze Taiwan into submission.

The experts split into teams representing the U.S. and China, and each side was armed with the weapons that its country is thought to possess. As the game began, a crisis was already under way; China had encircled the island, and its sailors had sunk ships that attempted to run the blockade. U.S. forces announced that they would protect Taiwanese vessels, and American and Chinese ships began exchanging fire.

The scenario felt alarmingly plausible. In 2021, President Biden broke with decades of “strategic ambiguity” by publicly committing the United States to Taiwan’s defense. Biden called America’s support for Taiwan “sacred”—but the island also produces the world’s most sophisticated microchips, which are considered essential to the global economy. Although President Trump has been less declarative, Defense Secretary Pete Hegseth recently warned China that any attempt to conquer Taiwan would have “devastating consequences.”

From the game’s opening moves, the conflict escalated rapidly. The Taiwanese Air Force began attacking Chinese ships and mining the Taiwan Strait, and Chinese warplanes struck ports on the island and shot down two American planes. The U.S. retaliated by sinking a squadron of Chinese warships in harbor. After China fired ballistic missiles at American bases in Japan, the fighting exploded, with the U.S. launching massive strikes against the mainland and Japanese jets attacking Chinese ships. China’s missiles sank three aircraft carriers—drowning as many as fifteen thousand sailors—and destroyed a quarter of the American Air Force.

By the time the game was stopped, each side had lost tens of thousands of people. Seth Jones, a C.S.I.S. president who took part in the game, seemed taken aback by the ferocity of the fighting. “I’m surprised how rapidly things got out of control,” he said. Still, it could have been worse. The Chinese didn’t strike the U.S. mainland, as they do in other war games. In some simulations, the two countries have traded nuclear assaults, with hundreds of thousands of casualties.

The branches of the military, which maintain their own communication networks, have their own obstacles. Navy ships typically cannot communicate directly with Air Force jets, even when they are operating in the same theatre. Even within the Air Force, many planes cannot talk to one another; the pilot of an F-22 fighter jet can’t communicate directly with the pilot of an F-18. “If you flew the two aircraft next to each other, the only way the pilots could communicate would be to wave to each other,” the retired Air Force general Scott Stapp, who spent several years working on such concerns as a senior Pentagon official, said.

Experts see the issue of “joint command and control” as one of the military’s biggest, most underpublicized problems. The former Senate staffer imagined what might happen during a crisis in the Western Pacific. A satellite could detect a radio signal sent by what the N.S.A. believes is a Chinese warship. To make a precise identification, the N.S.A. would need the National Geospatial-Intelligence Agency, which oversees imaging satellites, to take a photo. “You have to make the request through a tasking mechanism,” the former staffer said. “And then it gets shipped over to the N.G.A., to take a picture of this ship. That can take several minutes. There’s a war going on, and you’re asking yourself, ‘Do I have to shoot this thing?’ But by then the ship has moved.” He continued, “It has to be boom, boom, boom, boom. And people have to be making split-second decisions, and you have to get the latencies down, because it’s not just one fucking ship but hundreds of targets, all at the same time.”

For two decades, senior legislators and military leaders have been working, mostly without success, to overcome these problems. In Pentagon jargon, the goal is known as Joint All-Domain Command and Control, a term that has become so familiar that it has acquired a shorthand—JADC2. “It’s not a technology issue,” Stapp, the former Air Force general, said. “It’s a cultural issue. The commercial world solved these kinds of problems years ago, and we have made the choice to run on separate networks with separate capabilities.”

The prospect of armed drones limited only by the capacities of artificial intelligence raises a disturbing question: Could they escape our control? Ever since humans began to dream of intelligent machines, they have feared that their creations would turn on them. In “R.U.R.,” Karel Čapek’s play from 1920, androids created to do humankind’s drudge work rise up and wipe out their makers. In “The Terminator,” from 1984, an A.I. defense system called Skynet becomes self-aware and triggers a nuclear war. This year’s “Mission: Impossible” sequel has basically the same theme: a rogue A.I. known as the Entity seizes control of nuclear weapons and comes within a tenth of a second of obliterating life on earth.

Similar warnings have come from more sober sources. Demis Hassabis, a prominent A.I. innovator at Google, has warned, “A bad actor could repurpose those same technologies for a harmful end.” Yet the Pentagon seems more concerned with making A.I. systems work effectively. Under a 2012 Defense Department order updated by Biden and left intact by Trump, the military may employ autonomous systems as long as they succeed in tests and their use is consistent with international humanitarian law.

The most prominent real-time laboratory for using A.I. in warfare is in Israel. When Hamas-led fighters crossed the border on October 7th, 2023, and launched a bloody assault, hundreds of thousands of Israelis were called to military duty. Among them was a technology entrepreneur from Tel Aviv, who asked me to refer to him as Michael. For four months, Michael told me recently, he commanded a group of sixteen targeters for the Israel Defense Forces, taking advantage of powerful computer programs that helped select targets. “We called ourselves warriors of the keyboard,” he said.

A former senior Israeli military official said that the programs were constantly refining their own methods. “It’s not just finding the targets that’s important but how to locate the people more quickly as they move around,” he told me. “We are learning all the time. The A.I. is learning.” At one point, he said, intelligence officers determined that they could find places Hamas had buried rockets by identifying where the soil had shifted after heavy rains. So they used a program to scan hundreds of hours of drone footage and find disturbed soil, “even if it had moved only two centimetres. And then, like that, we created another two hundred targets.”

Much of the targeting work was done by Unit 8200, a wing of the I.D.F. whose function was to gather signal intelligence. For most of the war, it was run by General Yossi Sariel, who oversaw a team of twelve thousand, including targeters and linguists who worked from a desert airbase in Nevatim. The former senior Israeli military official told me that the targeters were meant to see artificial intelligence as a tool, not as a moral arbiter. "The purpose of the machine was to support the soldier, not replace him," he said. "Our A.I. programs never took the decision to attack anyone. Only humans made those decisions."

Michael, the targeter, described the process: The A.I., sifting the data, would suggest a target and list the factors, such as telephone contacts and video evidence, that supported a link to Hamas. Based on those, it would give an estimated likelihood that the person or building should be struck. “What we have is a priority queue,” Michael said. “The A.I. will say, ‘You should watch this guy.’ ”

Michael told me that his team was required to attempt to verify each target: examining video footage from drones, listening to telephone conversations. “My job in the targeting room was to put together all the indications and decide, What am I looking at?” he said. He added that he was also required to estimate how many civilians would be killed or wounded in an attack. If a suspected militant was in an apartment building, he would examine property records and drone footage to determine how many people lived there. “The A.I. thing says, ‘You should pay attention to this,’ and then I gotta do this whole checklist,” Michael said. “Who else is in the building? When did they leave?” In the course of a typical workday, the programs that Michael used would give his team about a hundred suggestions. He would select about five of them and send the recommendations to superior officers. “Usually two will be accepted,” he said.

As the battle raged, though, there were times when he felt pressure to decide on targets too quickly. “Sometimes I couldn’t do all the preparation and all of the checks that I should have,” he said. “Obviously, there were mistakes.” He added that he was comfortable with the final outcome of his work. But Adam Raz, an Israeli writer and activist, said other I.D.F. targeters had told him that in the most intense periods of the war their efforts were merely pro forma. “Most times, it took thirty seconds to a minute to get the target from Lavender or Gospel, verify it, and then give it to the Air Force to strike,” he said.

An estimated sixty thousand Palestinians have died in the conflict, prompting widespread accusations that Israel has committed war crimes. Yet Israeli authorities show little concern about the targeting systems. “We ended up, I believe, with about twenty-five thousand Hamas killed and twenty-five thousand civilians,” a former political leader told me earlier this year. “This is a better proportion than was ever achieved by a modern military.” When I ran that argument by John Spencer, a professor at West Point, he concurred that similar attempts to expunge enemies from densely populated areas had often resulted in higher proportions of civilian deaths. In 2016, the U.S. military initiated a campaign to root out ISIS from Mosul, Iraq, which killed about five thousand militants and twice as many civilians; the fighting ended up razing a city of two million people. In the Second World War, when the U.S. retook Manila from the Japanese, about seventeen thousand soldiers and a hundred thousand civilians were killed.

In Gaza, though, no one knows precisely how many people have died, or what proportion were innocents; the Gaza Health Ministry, which is run by Hamas, maintains that more than half were women and children. American officials suggested that the essential issue was one of human judgment. “The civilian casualties in Gaza were not an A.I. issue—they appear to be a rules-of-engagement issue,” Michael Horowitz, a Deputy Assistant Secretary of Defense in the Biden Administration, said. Michael, the targeter, told me that in the early stages of the conflict he was permitted to recommend a strike that could result in as many as twenty civilian deaths for one suspected militant. The strictures tightened and loosened over time, but they were typically relaxed for high-ranking figures. “When we killed Nasrallah, there were a lot of people in that building, you know?” he said.

The former senior U.S. defense official told me that Israel permitted civilian casualties in numbers that, by American standards, were greatly disproportionate to the value of the militants being attacked. “During the invasion of Iraq, if we were contemplating hitting a target that might result in twenty-five civilians being killed, that’s a decision that would have gone all the way to the President or the Secretary of Defense,” he said. “In Gaza, that was happening every day.”

Sebastian Ben Daniel, a lecturer at Ben-Gurion University and a critic of the I.D.F., told me that the claims of precise targeting can’t be verified, because the way the A.I. systems function is largely a mystery. “How do we know that this person was a legitimate target?” he said. “We don’t know, because nobody can check. The algorithm is a black box. The military says it looks at millions of parameters. But what parameters? We don’t know.” A.I. systems like the ones that the I.D.F. used often fail to understand context; if someone says “watermelon” on the phone, the A.I. can’t tell if he’s making an oblique reference to a bomb or just talking about fruit. “You think this person was Hamas, because he met somebody in Hamas, or he called somebody in Hamas—so you kill him,” he said.

Ultimately, Ben Daniel argued, the purpose of the A.I. systems was to lend a veneer of legitimacy to a preconceived policy. “The goal was not to kill this guy or that guy, for which A.I. was sometimes useful,” he said. “The goal was the destruction of Gaza. A.I. gives you that effect without the public outcry.”

Even within the I.D.F., there is some concern that A.I. will displace human intelligence. The former senior I.D.F. officer told me that Israel had used a combination of technical and human means to track Hezbollah leaders in Lebanon, some of them for years. “We knew where Nasrallah was almost every single day,” he said. “We could have killed him whenever we were asked.” Right until the end, he said, Nasrallah was convinced that Israel would never strike him. He was only forty feet underground when the bomb hit.

The former senior Israeli official spoke proudly of a case in which targeters believed that a Hezbollah leader was hiding in a Beirut apartment, and wanted to gather details of its layout and surroundings. “We can send someone to the street to take photos,” he said. “We have people on the ground—but not inside.” To acquire more precise information, the I.D.F. developed a telephone that appeared to be registered in Lebanon. Then an agent posing as a wealthy expatriate called a real-estate broker in Beirut and said that she was interested in several properties on the same street. The former officer described the scheme: “Some nice woman will start with you on the phone. She’s very rich. Her father was from Lebanon. And she wants to buy the entire block.” The woman asked to hear details about the street, the apartment, the specific room where the target was thought to be; the broker provided all the information the I.D.F. needed. “Do you know how many people work for us without knowing that they work for us?” the former officer said.

Still, for Israeli security officials, small victories do not assuage the sense that they missed intelligence that might have forestalled a war altogether. The failure to prevent the October 7th attacks still weighs heavily. One cause, some officers told me, was an overreliance on intelligence gathered by technological means. Cameras set up along the border were easily disabled, and warnings from intelligence officers were ignored. The former Israeli military official told me that, during the attack, some militants switched off their cellphones to make themselves harder to track. Others simply left their phones at home.

Indeed, the former official said, Israel had largely given up trying to cultivate human sources inside Hamas and Hezbollah. He said that the I.D.F., himself included, had fallen in love with technological methods because they seemed so easy to use, compared with the tedious and dangerous process of cultivating spies. “How many human souls did we have to describe the reality for us in Gaza and Lebanon on the night of October 7th?” he said. “Zero.”

The former official continued, “This is the main reason that created this great failure and caused us not to see what Hamas was planning to do. The feeling was ‘I don’t need to know you. I don’t need to know where you are going to pray, or what is your ideological way of thinking—I don’t need them because I have your phone.’ The trouble is, on the night they attacked, their devices were turned off.”

So far, Anduril has secured several billion dollars’ worth of military contracts, including one for sending drones to Taiwan. Early this year, the company announced that it was taking over a twenty-two-billion-dollar project, formerly run by Microsoft, to develop “augmented reality” headsets for the Army to use in combat. To produce its weapons, Anduril is planning to open a sprawling factory near Columbus, Ohio. Luckey told me that, in order to build a secure supply chain, none of the components would come from China.

Financial analysts have been speculating that Anduril will soon open investment to the public. Still, the essential question remains: In the uncertainties of combat, will Luckey’s unmanned systems work? Even admirers of the company evince some skepticism about weapons built around A.I. “I would take any claims of success with a grain of salt,” a former senior Pentagon official told me. “The Pentagon needs to do its own testing.”

On a lonely stretch of chaparral near Fort Stockton, Texas, I watched two Anduril engineers make their last adjustments before test-firing a Roadrunner—a five-foot-tall interceptor similar to the company’s attack drones, except that it is designed to crash into such airborne targets as jets, missiles, and drones. At about a hundred thousand dollars each, the Roadrunner isn’t Ukrainian-style cheap, but in the Pentagon’s arms bazaar it qualifies as a bargain. If it misses its target, it returns to base, to be fired again. “It lands just like a spaceship,” an engineer named Jackson Wiggs told me.

The Roadrunner is built to be launched out of its own packing crate; before the test flight, the engineers placed one of those crates in the scrub, as tumbleweeds skittered by. A low buzz from an intruding drone echoed from the other side of a nearby ridge. As the sound drew closer, Wiggs and his colleague pressed a button on a console. The sides fell from the packing crate, and the Roadrunner, a squat device that looked a little like a penguin, was propelled upward by two turbojets. It climbed to about three hundred feet before it turned and flattened until its fuselage was parallel to the earth. Then, like its namesake, the Roadrunner took off, sailing over the ridgeline. Seconds later, it shot past the intruding drone, missing it by a precisely calibrated distance. It circled back, righted itself, and landed neatly next to its packing crate. “Perfect,” Wiggs said.

Even as the Anduril engineers congratulated themselves on a successful test, people elsewhere were scrambling to create new advantages, under the messy conditions of war. Ukraine launched autonomous craft from catapults and snared Russian drones in fishing nets. Israel, in its recent conflict with Iran, deployed lasers to blast drones from the sky by burning up their guidance systems. An American company called BlueHalo is testing a similar device. It’s carried on a truck, and, after an investment of nearly a hundred million dollars, can fire individual shots for three dollars each. One day, it, too, will be eclipsed. ♦