Emergence of anti-coordination through reinforcement learning in generalized minority games
Abstract
In this paper we propose adaptive strategies to resolve coordination failures in a prototype generalized minority game model with a multi-agent, multi-choice environment. We illustrate the model with an application to large-scale distributed processing systems with a large number of agents and servers. In our setup, agents are assigned responsibility for completing tasks that require unit time. They request servers to process these tasks. Servers can process only one task at a time. Agents must choose servers independently and simultaneously, and have access only to the outcomes of their own past requests. Coordination failure occurs when more than one agent requests the same server at the same time while other servers remain idle. Since agents are independent, this leads to multiple coordination failures. We propose strategies based on reinforcement learning that minimize such coordination failures. We also prove a null result: a large class of probabilistic strategies that attempt to combine information about other agents' strategies asymptotically converges to uniformly random choices over the servers.
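The following is a minimal illustrative sketch of the setting described above, not the paper's actual algorithm: agents repeatedly choose among servers, succeed only when they are the sole requester of their chosen server, and reinforce choices using only their own outcome history. The number of agents and servers, the softmax temperature, and the learning rate are all assumed values chosen for illustration.

```python
import numpy as np

# Illustrative sketch (assumed parameters, not the paper's exact strategy):
# N agents repeatedly choose one of N servers. An agent "wins" a round only
# if it is the sole requester of its chosen server. Each agent keeps a score
# per server and softly prefers servers that have worked for it in the past,
# using only its own past outcomes.

rng = np.random.default_rng(0)
N = 20            # number of agents and servers (assumed equal here)
T = 5000          # number of rounds
beta = 2.0        # assumed softmax inverse temperature
lr = 0.1          # assumed reinforcement step size
scores = np.zeros((N, N))   # scores[i, s]: agent i's reinforcement for server s

success_rate = []
for t in range(T):
    # Each agent samples a server from a softmax over its own scores.
    probs = np.exp(beta * scores)
    probs /= probs.sum(axis=1, keepdims=True)
    choices = np.array([rng.choice(N, p=probs[i]) for i in range(N)])

    # A request succeeds only when no other agent chose the same server.
    counts = np.bincount(choices, minlength=N)
    success = counts[choices] == 1

    # Reinforce successful choices, penalize collisions (own outcomes only).
    scores[np.arange(N), choices] += np.where(success, 1.0, -1.0) * lr
    success_rate.append(success.mean())

print(f"mean success rate over last 500 rounds: {np.mean(success_rate[-500:]):.3f}")
```

Under these assumptions, agents gradually specialize on distinct servers, which is the anti-coordination outcome the paper studies; the exact reinforcement rule and convergence guarantees are given in the article itself.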