Document details

DipBlue: A Diplomacy Agent with Strategic and Trust Reasoning

Author(s): André Filipe da Costa Ferreira

Date: 2014

Persistent ID: https://hdl.handle.net/10216/72469

Origin: Repositório Aberto da Universidade do Porto

Subject(s): Electrical engineering, Electronic engineering, Information engineering


Description

Diplomacy is a turn-based military strategy board game, set at the turn of the 20th century, in which seven world powers fight for dominion over Europe. The game can be played by 2 to 7 players and is characterized by having no random factors and by being a zero-sum game. When played by humans, it has a very important component that has largely been set aside in games typically addressed by Artificial Intelligence techniques: before making their moves, players can negotiate among themselves and discuss issues such as alliances, move propositions and the exchange of information. Since the players act simultaneously and the number of units and movements is extremely large, the result is a vast game tree that cannot be searched effectively. The majority of existing artificial players for Diplomacy do not make use of the negotiation opportunities the game provides and instead try to solve the problem through solution search and complex heuristics. This dissertation proposes an approach to the development of an artificial player named DipBlue, which uses negotiation to gain advantage over its opponents through peace treaties, the formation of alliances and the suggestion of actions to allies. Trust is used as a tool to detect and react to possible betrayals by allied players. DipBlue has a flexible architecture that allows the creation of different variations of the bot, each with a particular configuration and behaviour. The player was built to work with the multi-agent systems testbed DipGame and was tested against other players of the same platform and against variations of itself. The results of the experiments show that negotiation increases the performance of the bots involved in alliances when all of them are trustworthy; when betrayed, however, a bot's efficiency decreases drastically. In this scenario, the ability to perform trust reasoning proved to successfully reduce the impact of betrayals.
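The trust mechanism summarised above can be illustrated with a minimal sketch: each ally holds a trust score that rises when agreements are kept and drops sharply on betrayal, and a low score dissolves the alliance. The class name, update rules and thresholds below are illustrative assumptions, not taken from the thesis itself.

```python
class TrustTracker:
    """Hypothetical per-opponent trust bookkeeping, in the spirit of DipBlue's
    trust reasoning: betrayal is punished more heavily than cooperation is
    rewarded, so a single broken promise outweighs several kept ones."""

    def __init__(self, initial=1.0, reward=0.1, penalty=0.5, ally_threshold=0.6):
        self.trust = {}                      # power name -> trust score in [0, 1]
        self.initial = initial               # score assigned to unknown powers
        self.reward = reward                 # increment when a promise is kept
        self.penalty = penalty               # decrement when a promise is broken
        self.ally_threshold = ally_threshold # minimum score to stay allied

    def score(self, power):
        return self.trust.setdefault(power, self.initial)

    def record(self, power, promise_kept):
        s = self.score(power)
        if promise_kept:
            self.trust[power] = min(1.0, s + self.reward)
        else:
            self.trust[power] = max(0.0, s - self.penalty)

    def is_trusted_ally(self, power):
        return self.score(power) >= self.ally_threshold
```

Under these assumed parameters, two consecutive betrayals drop a fully trusted power below the alliance threshold, which matches the abstract's point that trust reasoning lets the bot detect and react to betrayal rather than keep honouring a compromised alliance.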

Document Type Master thesis
Language English