(NaturalNews) In a Navy-funded study, researchers at the Georgia Institute of Technology have successfully designed robots that are capable of deception. The study is part of a wider trend toward smarter and more autonomous battlefield robots. It was published in the journal IEEE Intelligent Systems.
Lead researcher Ronald Arkin and his students programmed robots for deception in two separate experiments, each modeled on the natural behavior of a different animal. The first experiment was based on the behavior of squirrels that store acorns in specific caches throughout their range and then patrol the area to keep scavengers away. Normally, these squirrels will travel from cache to cache in order to check on them. In the presence of another squirrel, however, they will instead begin visiting empty caches.
The researchers used this model to design an experiment in which one robot attempted to raid hidden locations guarded by another robot. Using the same strategy as the squirrels, the guard robot successfully deceived the predator robot.
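The squirrel-inspired strategy can be sketched in a few lines of code. This is purely illustrative, not the Georgia Tech implementation: the class name, cache labels, and threshold-free detection flag are all assumptions.

```python
import random

class GuardRobot:
    """Toy sketch of the squirrel-style deceptive patrol strategy."""

    def __init__(self, real_caches, decoy_caches):
        self.real_caches = real_caches    # locations actually worth guarding
        self.decoy_caches = decoy_caches  # empty sites used for deception

    def next_patrol_stop(self, observer_present):
        # When an intruder is watching, lead it to an empty decoy cache.
        if observer_present:
            return random.choice(self.decoy_caches)
        # Otherwise, check on a real cache as usual.
        return random.choice(self.real_caches)

guard = GuardRobot(real_caches=["A", "B"], decoy_caches=["X", "Y"])
print(guard.next_patrol_stop(observer_present=True))   # a decoy site
print(guard.next_patrol_stop(observer_present=False))  # a real cache
```

The key idea is the single branch on `observer_present`: the robot's observable patrol route carries false information about where its resources are.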
"This application could be used by robots guarding ammunition or supplies on the battlefield," Arkin said. "If an enemy were present, the robot could change its patrolling strategies to deceive humans or another intelligent machine, buying time until reinforcements are able to arrive."
The second experiment was based upon the behavior of a bird known as an Arabian babbler. Like many birds, babblers will often join together to harass ("mob") larger predators such as hawks, thereby chasing them away. Using computer modeling, the researchers concluded that a weak babbler would benefit most from enthusiastically joining in mobbing behavior, because predatory birds in such situations are overwhelmed more by the number of attackers than by their individual strengths. In contrast, a weak bird that is "honest" about its strength and hangs back is more likely to be eaten.
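The mobbing trade-off described above can be captured in a toy payoff model. The threshold value and survival rules here are hypothetical simplifications for illustration, not the researchers' actual computer model.

```python
def predator_driven_off(mob_size, threshold=3):
    # In this toy model, numbers matter more than individual strength:
    # the predator retreats once the mob reaches the threshold size.
    return mob_size >= threshold

def weak_bird_survives(joins_mob, other_mobbers):
    mob_size = other_mobbers + (1 if joins_mob else 0)
    if predator_driven_off(mob_size):
        return True
    # If the mob fails, the bird that hung back is the likeliest prey.
    return joins_mob

# With two other mobbers, a weak bird's "dishonest" participation
# tips the mob over the threshold and saves it:
print(weak_bird_survives(True, 2))   # True: bluffing pays off
print(weak_bird_survives(False, 2))  # False: the honest holdout is eaten
```

Under these assumptions, the weak bird always does at least as well by joining, which mirrors the article's point that feigned strength can beat honesty about one's abilities.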
"In military operations, a robot that is threatened might feign the ability to combat adversaries without actually being able to effectively protect itself," said Arkin. "Being honest about the robot's abilities risks capture or destruction. Deception, if used at the right time in the right way, could possibly eliminate or minimize the threat."
Arkin acknowledges that giving robots the ability to deceive humans creates serious ethical questions.
"When these research ideas and results leak outside the military domain, significant ethical concerns can arise," he said.
Lethal robots are already in use by the U.S. military, although to date they all require human controllers. Arkin is among the researchers who have declared the eventual development and deployment of autonomous killing robots to be "inevitable."
Some of Arkin's other research involves designing programs to teach robots to distinguish between appropriate and inappropriate targets, as defined by the rules of war. But his program presupposes a battlefield free of civilians, where every human is a legitimate target. Other researchers have criticized this condition as being essentially impossible.
"I challenge you to find a war with no civilians," said Colin Allen of Indiana University.