
Researchers at the UAE’s TII home in on drone swarm behaviour

They developed metrics and terminology to characterize robot swarm democracies that could lead to better swarm control and could perhaps one day lead to fairer elections

Stubborn behaviour from a few individuals can lead to bad decisions, among humans and drones alike. But if a group contains even a few well-informed members, whether human or robotic, outcomes can improve.

Researchers at the Technology Innovation Institute (TII) in the UAE experimented with hybrid swarms composed of a few informed robots with greater environmental awareness and a larger mass of stubborn robots committed to a fixed opinion, called zealots. When the number of informed individuals fell too low, the zealots steered all the drones to the worst location.

Giulia De Masi, Principal Scientist at TII, said: “This research promises to improve the safety of real-world applications of robot swarms.”

The team developed metrics and terminology to characterize robot swarm democracies, work that could lead to better swarm control and perhaps, one day, to fairer elections. One concern is that bad actors could maliciously design rogue robots to drive the behaviour of a swarm. Another is that a few robots might unintentionally drive the behaviour of the others, causing the whole swarm to malfunction.

TII senior researcher Eliseo Ferrante said: “When there are too many zealots preferring bad decisions and not enough informed agents, the swarm becomes insensitive to quality.” Things began to change as the researchers added more drones able to accurately rate features of the environment against a given goal. As might be expected, sensitivity was greatest when all the drones were informed, even in the presence of a larger number of zealots voting for bad decisions.

The current research was conducted in a highly simplified version of the real world, in which drones were either sensitive to their environment or not, in a binary way. In a real-world scenario, sensitivity varies along a spectrum between individuals.
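The dynamic the researchers describe can be illustrated with a toy voter-style model. This is a minimal sketch, not the TII team’s actual model: the update rule, the agent labels, and the parameters below are all illustrative assumptions. Naive drones copy the opinion of a randomly chosen swarm member, informed drones always advocate the genuinely better site, and zealots always push the bad one.

```python
import random

def simulate(n_naive, n_informed, n_zealots, steps=50000, seed=1):
    """Toy opinion dynamics (illustrative, not the TII model).

    Naive drones start with a random opinion and repeatedly copy a
    randomly chosen swarm member; informed drones always advocate
    "good" (they can sense the better site); zealots always advocate
    "bad". Returns the final fraction of naive drones holding "good".
    """
    rng = random.Random(seed)
    naive = [rng.choice(["good", "bad"]) for _ in range(n_naive)]
    total = n_naive + n_informed + n_zealots
    for _ in range(steps):
        i = rng.randrange(n_naive)   # one naive drone updates its opinion...
        j = rng.randrange(total)     # ...by sampling a random swarm member
        if j < n_naive:
            naive[i] = naive[j]      # copy another naive drone
        elif j < n_naive + n_informed:
            naive[i] = "good"        # informed drones advocate the good site
        else:
            naive[i] = "bad"         # zealots advocate the bad site
    return naive.count("good") / n_naive
```

Running this sketch reproduces the qualitative finding from the article: with no informed drones, the zealots are the only fixed influence and the naive majority drifts to the bad option, while adding enough informed drones pulls the collective decision back toward the good one.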

This research builds on prior work on bird flocks, which found that strongly held, well-informed preferences could pull the group in a particular direction, acting somewhat like an oligarchy or dictatorship. Uninformed individuals, by contrast, acted as a democratizing force. In that work, researchers found that restoring what they characterized as democracy, with all the birds flocking together, required enough uninformed birds willing to follow the flock without question. However, the flock could disperse if too many individuals went off in a new direction, such as chasing food.

Swarm democracy research could also shape the way we think about and address fake news. Ferrante said that if there are too many naïve people, and too many zealots pushing in the wrong direction, the collective tends to make the wrong decision no matter how much better the superior option is. “You need to have enough people that have studied, that are able to discern good from bad with their own brain, to make a good collective choice,” he said.

He observed that politics, flocks, and robot swarms operate at different scales and under different constraints. Even so, there is value in standard ways of quantifying and measuring phenomena such as zealots, informed individuals, and decision quality.

“We should explore how the same mechanisms emerge in social networks of humans,” Ferrante said. However, humans form far more complex networks of interactions that evolve over time. What’s more, human social networks take on more nuanced topologies as some individuals manage to become “hubs” by having way more connections than others (think of politicians, for example).

In the meantime, these ideas could lead to better techniques for creating and guiding swarms of drones for practical tasks, like delivering medicine, finding accident survivors, and pollinating flowers. 

De Masi said: “This research opens the door for multidisciplinary collaborations, from social and political institutions to robotics and security research centres. Social and political institutions are more interested in applications of this model to opinion formation and the role of committed minorities, like anti-vax groups or politically oriented minorities in elections. Robotics centres are more interested in the impact of stubborn robots on the security of systems of many autonomous robots. This is in fact one of the most important emerging issues for the widespread deployment of multi-robot systems in practical applications.”