In the last year I have used at least three different simulators for multi-agent systems. The first two simulate a specific environment: an intergalactic battle between three races, and a soccer game. The third is NetLogo, a general-purpose multi-agent simulator. If you open its library you will see several simulations, from the social sciences to computer science. Here is a quick summary of the simulation approach each one takes.


BWAPI is an API for controlling units in the StarCraft: Brood War video game. With the API, a program can query for information about all of its units and about the environment (limited by the fog of war), and emit actions to be performed by the army. It is not exactly a multi-agent system, since all the units live in the same process and are not completely independent. However, with C++ you can encapsulate behavior in classes and simulate a multi-agent system inside your program, where each unit (agent) queries the environment and decides which action to run in the next simulation step. I used BWAPI in my master's degree thesis, with an implementation of a flocking algorithm to control a platoon of marines.
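The per-unit pattern above can be sketched in a few lines. BWAPI itself is a C++ API; the class and method names here are hypothetical Python stand-ins for the idea of encapsulating each unit's behavior in its own object.

```python
# Sketch of the per-unit agent pattern (hypothetical names, not the
# real BWAPI interface): each agent observes the environment and
# decides one action per simulation step.

class MarineAgent:
    """Wraps one unit: queries the environment, then picks an action."""

    def __init__(self, unit_id, position):
        self.unit_id = unit_id
        self.position = position

    def decide(self, visible_enemies):
        # Attack the nearest visible enemy (Manhattan distance);
        # otherwise hold position.
        if not visible_enemies:
            return ("hold", None)
        nearest = min(
            visible_enemies,
            key=lambda e: abs(e[0] - self.position[0]) + abs(e[1] - self.position[1]),
        )
        return ("attack", nearest)

# One simulation step: every agent observes and emits an action.
agents = [MarineAgent(i, (i, 0)) for i in range(3)]
visible_enemies = [(2, 5), (10, 10)]
actions = [agent.decide(visible_enemies) for agent in agents]
```

The units share one process, but because each decision is local to its agent object, the program behaves like a multi-agent system from the inside.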


RoboCup2D takes a completely different approach from BWAPI. In this case, a central server runs the simulation and exposes a port where each agent connects, receives updates about the environment, and sends actions to be performed in the simulation. The server sends and listens for UDP packets, so each player (agent) can be programmed in any language that supports UDP communication. The agents can even be distributed across a network: all they need is the address and port where the server is running, and each player can then use all the resources of the machine it runs on. I used RoboCup2D in one of my published research papers, where I compared a rule-based statistical localization method against a particle filter approach.
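The connection pattern is easy to sketch. RoboCup2D players announce themselves with an s-expression message like `(init TeamName (version N))`; treat the exact message format here as illustrative, and note the "server" below is a loopback stand-in so the sketch runs without a real rcssserver instance.

```python
import socket

# Sketch of the UDP handshake pattern described above. The "(init ...)"
# message follows RoboCup2D conventions, but the details here are
# illustrative, not a faithful protocol implementation.

def init_message(team, version=15):
    return f"(init {team} (version {version}))".encode()

# Stand-in "server" bound to the loopback interface on an OS-assigned
# port, so this sketch is self-contained and runnable.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
host, port = server.getsockname()

# A player is just another process with a UDP socket: any language,
# any machine that can reach the server's address and port.
player = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
player.sendto(init_message("MyTeam"), (host, port))

data, addr = server.recvfrom(4096)
print(data.decode())

player.close()
server.close()
```

Because the coupling is just "a socket and a message format", the language independence and network distribution fall out for free.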

NetLogo lets the researcher create a world with patches (non-mobile agents) and turtles (mobile agents), and control the environment through an observer: an all-seeing agent. The observer can query the environment and ask the turtles and patches to perform actions according to certain rules. The approach taken by NetLogo is more similar to that of BWAPI, but with multi-agent simulation as a first-class concern by design.
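The observer/turtle split can be mimicked in plain Python to make the pattern concrete (illustrative only; NetLogo programs are written in NetLogo's own language, where the last line would read something like `ask turtles [ forward 1 ]`).

```python
import math

# Sketch of NetLogo's observer/turtle model in Python (illustrative;
# not NetLogo code or its API).

class Turtle:
    """A mobile agent with a position and a heading."""

    def __init__(self, x, y, heading):
        self.x, self.y, self.heading = x, y, heading

    def forward(self, step):
        # NetLogo headings are degrees clockwise from north.
        self.x += step * math.sin(math.radians(self.heading))
        self.y += step * math.cos(math.radians(self.heading))

class Observer:
    """All-seeing agent: queries the world and asks turtles to act."""

    def __init__(self, turtles):
        self.turtles = turtles

    def ask(self, action):
        for t in self.turtles:
            action(t)

world = Observer([Turtle(0, 0, 90), Turtle(0, 0, 0)])
world.ask(lambda t: t.forward(1))   # like: ask turtles [ forward 1 ]
```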

Now I am working on my own simulator for my PhD research in multi-agent and distributed systems. I am setting up a self-driving-car environment, where each car is modelled as an autonomous agent that can communicate with other cars or manned vehicles. I am basing the communication channel on that of RoboCup2D: there is a server with the simulated world, and each agent is an independent process that communicates with the server through WebSockets. The server is developed in Node.js. The motion engine is very basic: a car can move forward or turn in each simulation step, and each agent receives its sensor readings every step.
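That forward-or-turn motion model amounts to very little code. Here is a minimal sketch of the state update (names and the exact kinematics are illustrative assumptions, not the simulator's actual API; the real server is Node.js):

```python
import math

# Sketch of the minimal motion engine described above: in each step a
# car either moves forward along its heading or turns in place.
# State is (x, y, heading_in_radians); names are illustrative.

def step(state, action):
    x, y, heading = state
    if action[0] == "forward":
        distance = action[1]
        return (x + distance * math.cos(heading),
                y + distance * math.sin(heading),
                heading)
    if action[0] == "turn":
        return (x, y, heading + action[1])
    raise ValueError(f"unknown action: {action[0]}")

s = (0.0, 0.0, 0.0)
s = step(s, ("turn", math.pi / 2))   # face the +y direction
s = step(s, ("forward", 2.0))        # move two units forward
```

Keeping the motion model this small makes each simulation step cheap, which matters when every agent is a separate process waiting on the server.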

I think there are at least two possible interaction models for the sensors. The one I am using right now is a push model, where the server sends the sensor data in each simulation step, but this ties the sensing frequency to the simulation rate. The other is a pull model, where each agent can request data from its own sensors; in this case the server should enforce some trade-off (frequency vs. accuracy). This latter approach seems a more accurate representation of an autonomous agent, but the development could take more time. How do these approaches affect the outcome of the simulations? Stay tuned.
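One way to picture the pull model's trade-off is a sensor whose readings get noisier the more often it is polled within a step. The specific rule below (noise grows linearly with the number of reads per step) is purely an illustrative assumption, not a design I have settled on:

```python
import random

# Sketch of a pull-model sensor with a frequency-vs-accuracy trade-off
# (the linear noise rule is an illustrative assumption).

class PulledSensor:
    def __init__(self, true_value, base_noise=0.1):
        self.true_value = true_value
        self.base_noise = base_noise
        self.reads_this_step = 0

    def new_step(self):
        # The server resets the budget at each simulation step.
        self.reads_this_step = 0

    def read(self, rng):
        # Each extra read in the same step widens the noise band.
        self.reads_this_step += 1
        noise = self.base_noise * self.reads_this_step
        return self.true_value + rng.uniform(-noise, noise)

rng = random.Random(42)          # seeded for reproducibility
sensor = PulledSensor(true_value=10.0)
readings = [sensor.read(rng) for _ in range(3)]
```

Under a rule like this, an agent that polls greedily learns less per reading, so sensing policy itself becomes part of the agent's behavior.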