The researchers collaborated with automobile manufacturer BMW to test ways in which humans and robots can work together on a car assembly line. They built a replica of a factory floor and rigged a robot on rails whose task was to deliver auto parts between work stations. It was programmed to stop briefly to let a person walk by. But the team observed that the robot would freeze well before a person had actually crossed its path, hurting workplace efficiency.
To solve this problem, they examined the robot's existing algorithms, which were drawn from music and speech processing. These algorithms were designed to align two sets of related data, such as an audio track of a musical performance and a scrolling video of that piece's notation. A robot, in turn, aligns fresh motion-trajectory data with previously recorded reference data in order to make a prediction.
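Dynamic time warping (DTW) is the best-known alignment algorithm of this kind in music and speech processing. The sketch below illustrates the general technique only; the article does not name the specific algorithms the team examined:

```python
import math

def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Classic dynamic-time-warping cost between two sequences.

    D[i][j] holds the cheapest cost of aligning the first i items
    of a with the first j items of b, allowing items to be
    stretched (repeated) so sequences at different tempos still match.
    """
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j],      # a[i-1] stretched over extra steps
                D[i][j - 1],      # b[j-1] stretched over extra steps
                D[i - 1][j - 1],  # one-to-one match
            )
    return D[n][m]

# The same melody played at two tempos aligns with zero cost,
# because warping matches the repeated note.
print(dtw([0, 1, 2, 3], [0, 1, 1, 2, 3]))  # -> 0.0
```

This works well when the two sequences really are the same signal at different speeds, which is exactly the assumption human motion breaks.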
But most of these algorithms account only for the distance traveled, which makes them reliable only for predictable sets of data. Predicting a person's movement trajectory is harder because human motions are messy and highly variable. For example, when a person pauses mid-path for a moment, that delay can easily confuse a robot that maps trajectories by distance alone.
Therefore, the team developed a “partial trajectory” algorithm, which combines data on a person's real-time movement with a library of reference trajectories gathered beforehand. The algorithm considers both distance and time.
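The core idea can be sketched in a few lines: match the partially observed trajectory against same-length prefixes of the reference library, scoring in both space and time, so a pause (time advances while distance does not) is itself informative. This is a minimal illustration with hypothetical data, not the researchers' actual implementation:

```python
import math

def score(partial, ref_prefix):
    """Sum of spatiotemporal distances between paired samples.

    Each sample is (t, x, y); both time and position contribute,
    so a person who pauses still aligns best with a reference
    trajectory that also pauses.
    """
    return sum(math.dist(p, r) for p, r in zip(partial, ref_prefix))

def predict_next(partial, references):
    """Return the next sample of the best-matching reference as the
    predicted position of the person."""
    n = len(partial)
    best_ref = min(
        (r for r in references if len(r) > n),
        key=lambda r: score(partial, r[:n]),
    )
    return best_ref[n]

# Hypothetical reference library: one person walks steadily,
# another pauses at x = 1.0 before continuing.
walk = [(0, 0.0, 0.0), (1, 1.0, 0.0), (2, 2.0, 0.0), (3, 3.0, 0.0)]
pause = [(0, 0.0, 0.0), (1, 1.0, 0.0), (2, 1.0, 0.0), (3, 2.0, 0.0)]

# Observed person has paused at x = 1.0: time advances, distance doesn't,
# so the "pause" reference wins and the robot expects a slow advance.
observed = [(0, 0.0, 0.0), (1, 1.0, 0.0), (2, 1.0, 0.0)]
print(predict_next(observed, [walk, pause]))  # -> (3, 2.0, 0.0)
```

A distance-only matcher would treat the paused person as identical to someone just starting out; including the timestamp in the distance removes that ambiguity.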
The researchers tested the new algorithm and found that it produced better estimates than previous alignment algorithms. When it was incorporated into motion predictors, the robot could more accurately anticipate when and where a person was headed. In a factory floor setting, for example, it was less prone to freezing in place; instead, it resumed its task shortly after a person crossed its path.
According to the team, the algorithm can also serve as a preprocessing step for other techniques used in human-robot interaction, such as action recognition and gesture detection. In turn, it can pave the way for robots and humans working together in structured environments, such as a factory floor or a domestic setting.
"The key is that the [robotic] system can observe patterns that occur over and over, so that it can learn something about human behavior," said co-researcher Julie Shah.
In an opinion column for UNESCO's Courier, arms expert and journalist Vasily Sychev wrote of the threat posed by killer robots. He observed that artificial intelligence has become increasingly ubiquitous in the field of combat. For one, Russia and Japan developed planes manned by human pilots but equipped with elements of AI. These planes autonomously survey the environment or scan the aircraft for signs of damage. But AI can have a more pronounced role in the battlefield, suggested Sychev.
"Its speed of analysis and its ability to learn make AI attractive for combat systems," he wrote. These combat systems operate in a fully autonomous manner – it can identify a target, open fire, move around and choose optimal trajectories, much like a human soldier but without the weaknesses of one. Many experts, including Elon Musk, believe that autonomous weapons run the risk of inflicting widespread damage on both military and civilian populations. (Related: Robot-controlled vehicles could soon be restocking military front lines with ammo, food, fuel.)
Sychev noted that no such combat system has been built so far, and countries have announced that creating fully autonomous combat systems is not a priority. But he's keeping his eyes peeled: "[Nuclear] weapons – which should never have seen the light of day, and which have faced opposition from the earliest phase of their conception – ha[ve] nevertheless been well and truly used."
Robots.news has more on workplace and killer robots.