A new technological front has opened in the war in Ukraine, as both Russian and Ukrainian forces accelerate the development and deployment of drones equipped with artificial intelligence. A recently captured Russian drone capable of autonomous targeting highlights a significant shift in military technology, creating weapons that can operate without human control and resist traditional electronic countermeasures like jamming.
Key Takeaways
- Both Russia and Ukraine are actively using AI-powered drones for targeting, intelligence gathering, and autonomous attacks.
- A newly intercepted Russian drone uses AI to find and attack targets on its own, making it resistant to jamming during its final attack phase.
- Ukraine processes over 50,000 drone video streams monthly using AI to identify and map targets quickly.
- Developers are creating fully autonomous weapons, raising ethical concerns about human oversight and the rules of war.
- Ukrainian President Volodymyr Zelensky has called for global regulations on AI in weaponry, comparing the urgency to nuclear non-proliferation.
A New Generation of Autonomous Weapons
The conflict in Ukraine is serving as a testing ground for advanced military hardware, particularly in the realm of unmanned aerial vehicles (UAVs). Serhiy Beskrestnov, a consultant for the Ukrainian defence forces, recently examined an intercepted Russian drone that represents a major technological leap.
"This technology is our future threat," Beskrestnov stated, explaining that the device was unlike previous models. He discovered that the drone uses artificial intelligence to find and engage targets independently, without sending or receiving signals during its final attack phase. This capability makes it impervious to jamming, a common tactic used to disable conventional drones.
What is an Autonomous Weapon?
An autonomous weapon system is a military robot designed to search for, identify, and engage targets without direct human control. These systems use AI to make targeting decisions on the battlefield, which distinguishes them from remotely operated drones that require a human pilot.
Ukraine's Reliance on AI for Defence
For Ukraine's military, artificial intelligence has become a critical tool for processing vast amounts of battlefield information. According to Yuriy Myronenko, Ukraine's deputy defence minister, AI is essential for managing the sheer volume of data collected from the front lines.
"Our military gets more than 50,000 video streams [from the front line] every month which are analysed by artificial intelligence," Myronenko explained. "This helps us quickly process this massive data, identify targets and place them on a map."
This analysis improves strategic planning and helps commanders allocate resources more efficiently. Ukrainian forces also use AI-based software that lets drones lock onto a target and complete the final, most critical stage of an attack autonomously. Because the drones are small and fast, they are very difficult to shoot down during this terminal approach, making them highly effective.
The Push Towards Full Autonomy
The technology is rapidly evolving from semi-autonomous assistance to fully autonomous weapon systems. Ukrainian developers are at the forefront of this shift, creating systems that could one day operate with minimal human input.
Yaroslav Azhnyuk, the chief executive of Ukrainian development firm The Fourth Law, described a future where a soldier could simply press a button on a smartphone application to launch a mission. "The drone will do the rest," he said, explaining that it would find the target, deploy its payload, assess the damage, and return to its base automatically. "And it would not even require piloting skills from the soldier," Azhnyuk added.
The Advantage of AI Interceptors
Azhnyuk believes AI-guided interceptor drones could dramatically improve Ukraine's air defences against long-range Russian attack drones, such as the Iranian-designed Shaheds. "A computer-guided autonomous system can be better than a human in so many ways," he noted. "It can be more perceptive. It can see the target sooner than a human can. It can be more agile."
Deputy Defence Minister Myronenko confirmed that Ukraine is close to finalizing the development of such systems, stating, "We have partly implemented it in some devices." Azhnyuk is optimistic, claiming that thousands of such autonomous systems could be deployed by the end of 2026.
Ethical Boundaries and the Human Factor
Despite the technological advancements, significant ethical concerns remain. Ukrainian developers are proceeding with caution, particularly regarding the risk of autonomous systems making fatal errors, such as misidentifying friendly forces.
Vadym, a representative from the company DevDroid, which produces AI-powered remote-controlled machine guns, highlighted these dangers. His company's systems use AI to automatically detect and track human targets, but they do not have an automatic firing option enabled. The risk of friendly fire is too high, as soldiers on both sides may wear similar uniforms.
"We can enable it, but we need to get more experience and more feedback from the ground forces in order to understand when it is safe to use this feature," Vadym said, emphasizing the need for proven reliability before removing human decision-making.
Key questions remain unanswered: How will an AI system distinguish a surrendering soldier from an active combatant? How will it avoid harming civilians in complex urban environments? Myronenko insists that the final decision to use lethal force should remain with a human, even if AI makes that decision easier.
A Global Call for Regulation
The rapid escalation of AI weaponry has prompted calls for international oversight. One growing fear is AI-assisted "drone swarms", in which large numbers of coordinated drones overwhelm air defences. Ukraine's "Spider Web" operation, which reportedly used around 100 drones against Russian air bases, may have been aided by AI, and Russia could replicate the tactic.
Recognizing the danger, Ukrainian President Volodymyr Zelensky addressed the United Nations, warning that AI is fueling "the most destructive arms race in human history." He urged the international community to establish global rules for the use of AI in weapons systems, stressing that the issue is "just as urgent as preventing the spread of nuclear weapons."
As the technology continues to advance on the battlefield, the line between human and machine control is becoming increasingly blurred, creating a new and unpredictable chapter in modern warfare.