SMU Office of Research – Although computing power has increased exponentially in keeping with Moore’s Law, most computers remain ‘dumb’: able to do only exactly what humans program them to do, and unable to think for themselves. But what if computer programs were trained to act independently, instead of simply following a hard and fast set of pre-determined rules, and then allowed to interact with each other?
The possibilities are endless, if the many presentations at the Autonomous Agents and Multiagent Systems (AAMAS) International Conference are anything to go by. The flagship event of the non-profit International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), AAMAS 2016 was hosted by the Singapore Management University (SMU) School of Information Systems from 9–13 May 2016.
Cutting across the disciplines of distributed computing systems, artificial intelligence and social science, the emerging field of autonomous agents and multiagent systems makes use of ‘intelligent’ agents to model real-world scenarios that can be dynamic and unpredictable. These agents—be they software, robots or even humans—interact with each other to solve complex problems that single agents cannot handle.
The conference saw delegates from around the globe coming together to share how they have used such multiagent systems to solve problems ranging from determining the optimal prices for electric vehicle charging, to identifying the most influential teenagers in a social network for spreading HIV awareness messages, to programming self-driving cars.
Gaming the system, preempting the poachers
One particularly exciting area of research highlighted during the conference was the role of game theory in guiding the development of multiagent systems for security applications.
Traditionally used in economics and psychology to understand human behaviour, game theory is a branch of mathematics that describes the strategies used to deal with competitive situations involving multiple participants. In a security scenario, the attacker(s) and defender(s) can be thought of as different agents involved in a game where the attackers seek to maximise their rewards, and the defenders aim to minimise their losses.
When applied to the real world, the stakes of such ‘games’ can be very high, said Ms. Thanh Hong Nguyen, a PhD student in the Teamcore Research Group led by the University of Southern California’s Professor Milind Tambe. Citing the threat of extinction, Nguyen pointed out that the international tiger population has declined from more than 100,000 tigers a hundred years ago to fewer than 3,200 individuals today.
To help wildlife rangers make better use of their limited patrolling resources to protect endangered animals, Nguyen and her team framed the problem as a Stackelberg security game whereby the defenders are the rangers and the attackers are poachers who observe the patrols and make attacks based on their observations.
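In a Stackelberg security game, the defender (the leader) commits to a randomised patrol strategy, and the attacker (the follower) observes that strategy and attacks wherever his expected payoff is highest. The toy Python sketch below illustrates this structure for just two patrol sites with invented payoff numbers; real deployments solve far larger games with optimisation solvers:

```python
# Toy Stackelberg security game: one ranger patrol split across two sites.
# All payoff numbers are illustrative, not taken from the CAPTURE study.

def attacker_utility(cov, reward, penalty):
    """Poacher's expected payoff for attacking a site covered with prob `cov`."""
    return cov * penalty + (1 - cov) * reward

def defender_utility(cov, reward, penalty):
    """Ranger's expected payoff when the attacked site is covered with prob `cov`."""
    return cov * reward + (1 - cov) * penalty

# per-site payoffs: (attacker reward, attacker penalty, defender reward, defender penalty)
sites = {
    "waterhole": (5, -1, 2, -5),   # many animals: attractive to poachers
    "forest":    (2, -1, 1, -2),
}

best = None
# The defender (leader) commits to a coverage split; the attacker (follower)
# observes it and attacks the site with the highest expected payoff.
for c in [i / 100 for i in range(101)]:
    coverage = {"waterhole": c, "forest": 1 - c}
    target = max(sites, key=lambda s: attacker_utility(coverage[s], sites[s][0], sites[s][1]))
    u_d = defender_utility(coverage[target], sites[target][2], sites[target][3])
    if best is None or u_d > best[0]:
        best = (u_d, c, target)

print(f"optimal waterhole coverage: {best[1]:.2f}, poacher attacks: {best[2]}")
```

Note how the defender benefits from committing first: she concentrates coverage on the high-value site right up to the point where driving the poacher to the low-value site would hurt her more.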
Their solution, a predictive model called Comprehensive Anti-Poaching tool with Temporal and observation Uncertainty REasoning (CAPTURE), used game theory to devise an optimal patrolling plan. Firstly, the researchers estimated the probability that the poachers would go to a certain location based on their knowledge of the rangers’ patrols, the number of animals, and features of the landscape such as distance to roads and villages. Secondly, they predicted the probability that the rangers would be able to detect signs of poaching using information on the extent of patrol coverage and data on past poaching events.
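The two prediction steps can be sketched as a pair of probability models, one for where poachers strike and one for whether rangers detect it. The weights and feature values below are illustrative placeholders, not the parameters CAPTURE actually learns from patrol data:

```python
import math

# Sketch of CAPTURE's two-stage prediction, with made-up weights and features;
# the real model is fitted to years of ranger patrol data.

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def attack_probability(animal_density, patrol_coverage, dist_to_village_km, w):
    """Stage 1: probability that poachers target this cell, given what they
    observe of the rangers' patrols and features of the landscape."""
    score = (w["animals"] * animal_density
             - w["patrols"] * patrol_coverage    # poachers avoid well-patrolled cells
             - w["distance"] * dist_to_village_km)
    return sigmoid(score)

def detection_probability(patrol_coverage, w):
    """Stage 2: probability that rangers detect poaching signs, given coverage."""
    return sigmoid(w["coverage"] * patrol_coverage + w["bias"])

# hypothetical weights
w_attack = {"animals": 2.0, "patrols": 3.0, "distance": 0.5}
w_detect = {"coverage": 4.0, "bias": -2.0}

p_attack = attack_probability(animal_density=0.8, patrol_coverage=0.3,
                              dist_to_village_km=2.0, w=w_attack)
p_detect = detection_probability(patrol_coverage=0.3, w=w_detect)

# probability that a poaching event occurs AND is recorded by a patrol
print(f"P(attack) = {p_attack:.3f}, P(observed poaching) = {p_attack * p_detect:.3f}")
```

The second stage matters because patrol records understate poaching: an attack in a lightly patrolled cell often goes unrecorded, so the model must separate "no poaching happened" from "poaching happened but was not seen".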
“However, learning the model parameters was too computationally expensive, due to the large number of parameters and large number of targets,” Nguyen said. “To overcome these computational challenges, we used two approaches to simplify the problem: parameter separation and target extraction.”
The researchers then tested this simplified model on 12 years’ worth of data collected by rangers from the Queen Elizabeth National Park in Uganda. The CAPTURE model was more accurate than other models tested using the same data, and will be used to plan patrols in the park in a pilot project starting later this year.
Defending against the unknown
Game theory can also help researchers make predictions even when the behaviour of the poachers is largely unknown. Working on the related environmental problem of preventing illegal fishing, Teamcore Research PhD student Mr. Xu Haifeng presented work done in collaboration with Professor Nicholas Jennings from Imperial College London that showed how predictions of the poachers’ behaviour could be made even without assuming any prior knowledge of the attackers.
“In the case of illegal fishing, the coastguard won’t know which part of the ocean has more fish and which part has less fish. In fact, the illegal fishermen may sometimes have better knowledge in this regard,” Xu explained. “To make the problem more complicated, the number of fish in each area keeps dynamically changing.”
“The second major challenge is that the attacker’s behaviour is unknown, largely because the payoff structure is not known. The attacker may not behave rationally and may act in an opportunistic manner, so the traditional best response assumption doesn’t hold,” he added.
“Finally, the defender only gains very limited information in each round of the game as they can only observe feedback from the areas they have visited. This is unlike other security scenarios such as terror attacks, where the attack will be known even if the area was not patrolled.”
To overcome this lack of information on the attackers, Xu and his team used strategies from adversarial machine learning, specifically a modified algorithm called Follow the Perturbed Leader with Uniform Exploration (FPL-UE). This computationally efficient algorithm maintains an estimated reward for each target, and in each round the defender either patrols a target chosen uniformly at random or plays a best response to the perturbed reward estimates. The estimates are then updated using the feedback observed in that round of the game.
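A minimal sketch of the FPL-UE idea is given below, with invented rewards and a simple running-average estimator standing in for the geometric-resampling estimator used in the published algorithm:

```python
import random

random.seed(0)

# Simplified sketch of FPL-UE for one defender patrolling n sites.
# True rewards are hidden; only the visited site's outcome is observed
# (bandit feedback). A running average replaces the unbiased
# geometric-resampling estimator of the published algorithm.

N_SITES = 4
GAMMA = 0.1          # probability of uniform exploration
ETA = 0.3            # perturbation scale

true_reward = [0.2, 0.5, 0.8, 0.3]   # hypothetical defender payoffs per site
est_reward = [0.0] * N_SITES
visits = [0] * N_SITES

def choose_site():
    if random.random() < GAMMA:
        # uniform exploration: visit any site with equal probability
        return random.randrange(N_SITES)
    # follow the perturbed leader: add random noise to each estimate,
    # then patrol the site with the highest perturbed estimate
    perturbed = [est_reward[i] + ETA * random.expovariate(1.0) for i in range(N_SITES)]
    return max(range(N_SITES), key=lambda i: perturbed[i])

for t in range(5000):
    site = choose_site()
    # noisy feedback from the visited site only
    feedback = true_reward[site] + random.gauss(0, 0.1)
    visits[site] += 1
    est_reward[site] += (feedback - est_reward[site]) / visits[site]

print("visit counts:", visits)   # the highest-reward site should be visited most
```

The uniform-exploration step is what guards against an opportunistic attacker: even sites that currently look unrewarding keep being sampled, so the defender's estimates cannot be permanently misled by early noise.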
When compared against other algorithms—including the Subjective Utility Quantal Response (SUQR) model, a state-of-the-art behavioural model that has been validated in real-world environmental security games—the FPL-UE algorithm was found to be more robust, even though it required less information.
“CAPTURE and FPL-UE are just two examples of the many real-world applications of multiagent systems in security challenges,” said SMU Assistant Professor Pradeep Varakantham, who served as local arrangements chair of AAMAS 2016. “The presentations, talks and demonstrations at AAMAS 2016 showcase the myriad ways in which autonomous agents and multiagent systems are already playing an important role in addressing complex and challenging problems.”
By Rebecca Tan