Clue Gamesmanship Leads to New Remote-Sensing Algorithm

February 19, 2009

A new mathematical model inspired by a classic board game could help remote sensors do their work more swiftly and efficiently.

Chenghui Cai and Silvia Ferrari of Duke University made the potential breakthrough thanks to the whodunnit game CLUE©. In the game, players roll dice to move from room to room, gathering clues to identify the murderer, the room where the crime took place, and the weapon used.

"One night we were playing CLUE©," Ferrari said. "You can't visit all the rooms by the end of the game, so you need to come up with a way to minimize the amount of movement but maximize the ability to reach your targets," she said.

Similarly, Ferrari noted, when a robotic mine sweeper searches for mines, "you want the robot to spend as little time as possible on the ground and maximize its information reward function."

Ferrari and Cai developed mathematical ways of representing choices and the acquisition of information to come up with a strategy for winning CLUE©. They pitted this algorithm against experienced CLUE© players and other game-playing strategies.
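The release does not spell out the authors' actual formulation, but the trade-off Ferrari describes, weighing expected information against movement cost, can be sketched in rough terms. In the illustrative Python below, the entropy-based information measure, the function names, and the weighting factor lam are assumptions made for the sake of the example, not the authors' model.

    import math

    def entropy(belief):
        """Shannon entropy (bits) of a belief: {hypothesis: probability}."""
        return -sum(p * math.log2(p) for p in belief.values() if p > 0)

    def expected_info_gain(belief, observation_model):
        """Expected entropy reduction from the observations a location can yield.

        observation_model maps each possible observation to a pair
        (probability_of_observation, posterior_belief_given_observation).
        """
        h_prior = entropy(belief)
        return sum(p_obs * (h_prior - entropy(posterior))
                   for p_obs, posterior in observation_model.values())

    def score_move(belief, observation_model, distance, lam=0.1):
        """Illustrative score: expected information gain minus a travel-cost penalty.

        distance is the cost of reaching the location (moves for the CLUE pawn,
        travel time for a robot); lam trades information against movement.
        The best next move is the one with the highest score.
        """
        return expected_info_gain(belief, observation_model) - lam * distance

In this sketch, a planner would evaluate score_move for each reachable room or search location and head toward the highest-scoring one.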

Pitted against two opponents using an artificial intelligence strategy known as constraint satisfaction, the new algorithm won 7 out of 10 games. Against another artificial intelligence strategy based on Bayesian networks, it did almost as well. And against one player using a Bayesian network approach and another using a neural network strategy, it won 72 percent of its games.

The algorithm's success is "due to its strategy of selecting movements and optimizing its ability to incorporate new information, while minimizing the distance traveled by the pawn" in CLUE©, Ferrari concluded. "In this manner, it was able to win the game as quickly as possible."

"The key to success, both for the CLUE© player and the robots, is to not only take in the new information it discovers, but to use this new information to help guide its next move," Cai said. "This learning-adapting process continues until either the player has won the game, or the robot has found the mines."
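Cai's description amounts to a sense, update, act loop: observe, fold the observation into a belief over the hypotheses (suspect, room, and weapon in CLUE; mine locations for the robot), then let the updated belief choose the next move, repeating until one hypothesis stands out. Below is a minimal sketch of such a loop, again with assumed names and caller-supplied models rather than anything taken from the paper.

    def bayes_update(belief, likelihood, observation):
        """Posterior over hypotheses after one observation.

        belief is {hypothesis: prior probability}; likelihood(obs, h) is an
        assumed clue/sensor model giving P(obs | hypothesis h).
        """
        unnormalized = {h: p * likelihood(observation, h) for h, p in belief.items()}
        total = sum(unnormalized.values())
        return {h: p / total for h, p in unnormalized.items()}

    def search_loop(belief, choose_move, observe, likelihood, done):
        """Illustrative learn-adapt loop: move, observe, update, repeat.

        choose_move(belief) could use a score like the one sketched earlier;
        done(belief) stops once a single hypothesis is (near-)certain.
        """
        while not done(belief):
            move = choose_move(belief)      # pick the most informative, cheapest move
            observation = observe(move)     # clue card shown / sensor reading
            belief = bayes_update(belief, likelihood, observation)  # fold in the new evidence
        return max(belief, key=belief.get)  # most probable hypothesis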

The results appear in the article "Information-Driven Search Strategies in the Board Game of CLUE©," published online in the IEEE Transactions on Systems, Man, and Cybernetics—Part B.

Source: Duke University, Jan. 27, 2009
