This environment is part of the MAgent environments. Please read that page first for general information.
Name | Value |
---|---|
Actions | Discrete |
Agents | 495 |
Parallel API | Yes |
Manual Control | No |
Action Shape | (33) |
Action Values | Discrete(33) |
Observation Shape | (15,15,5) |
Observation Values | [0,2] |
Import | from pettingzoo.magent import gather_v3 |
Agents | agents= [omnivore_[0-494]] |
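Not shown on this page, but a minimal random-policy rollout using the standard PettingZoo AEC loop might look like the sketch below. The agent_iter/last/step API comes from PettingZoo itself, not from this page; note that newer PettingZoo releases return separate termination and truncation flags from env.last() instead of a single done flag.

```python
from pettingzoo.magent import gather_v3

env = gather_v3.env(max_cycles=500)
env.reset()

for agent in env.agent_iter():
    obs, reward, done, info = env.last()
    # Dead or finished agents must step with a None action.
    action = None if done else env.action_space(agent).sample()
    env.step(action)

env.close()
```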
In gather, the agents gain reward by eating food. Food must be broken down by 5 "attacks" before it is absorbed. Since there is a finite amount of food on the map, there is competitive pressure between agents over it. We expect agents to coordinate by not attacking each other until food is scarce; when it is, agents may attack each other in an attempt to monopolize the remaining food. Agents can kill each other with a single attack.
Key: move_N means N separate actions, one to move to each of the N nearest squares on the grid.
Action options: [do_nothing, move_28, attack_4]
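A quick sanity check of that decomposition (1 do_nothing + 28 move + 4 attack = 33), using the PettingZoo action_space accessor; on older PettingZoo releases the equivalent is the env.action_spaces dict.

```python
from pettingzoo.magent import gather_v3

env = gather_v3.env()
env.reset()

# Every omnivore shares the same flat Discrete(33) action space.
space = env.action_space(env.agents[0])
print(space)    # Discrete(33)
print(space.n)  # 33 == 1 (do_nothing) + 28 (move) + 4 (attack)
```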
Reward is given as:

- -0.01 reward every step (step_reward option)
- -0.1 reward for attacking (attack_penalty option)
- -1 reward for dying (dead_penalty option)
- 0.5 reward for attacking a food (attack_food_reward option)
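These terms are configurable via the env() arguments documented below. A back-of-the-envelope tally, under the assumption (not stated on this page) that the terms simply sum within a step:

```python
# Hypothetical per-step tally, ASSUMING the reward terms are additive.
step_reward = -0.01        # added unconditionally every step
attack_penalty = -0.1      # added for taking any attack action
attack_food_reward = 0.5   # added when the attack lands on food

# Net reward for a step spent attacking food, under that assumption:
print(step_reward + attack_penalty + attack_food_reward)  # 0.39
```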
The observation space is a 15x15 map with the below channels (in order):
name | number of channels |
---|---|
obstacle/off the map | 1 |
omnivore_presence | 1 |
omnivore_hp | 1 |
omnivore_minimap(minimap_mode=True) | 1 |
food_presence | 1 |
food_hp | 1 |
food_minimap(minimap_mode=True) | 1 |
one_hot_action(extra_features=True) | 33 |
last_reward(extra_features=True) | 1 |
agent_position(minimap_mode=True) | 2 |
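As an illustration, with the defaults (minimap_mode=False, extra_features=False) only the five non-optional channels above remain, matching the (15,15,5) observation shape. A sketch of pulling them apart, with the channel ordering assumed from the table:

```python
from pettingzoo.magent import gather_v3

env = gather_v3.env()  # defaults: minimap_mode=False, extra_features=False
env.reset()

obs = env.observe(env.agents[0])
print(obs.shape)  # (15, 15, 5)

# Channel ordering assumed from the table above:
obstacle          = obs[..., 0]  # obstacle / off-the-map mask
omnivore_presence = obs[..., 1]
omnivore_hp       = obs[..., 2]
food_presence     = obs[..., 3]
food_hp           = obs[..., 4]
```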
```python
gather_v3.env(minimap_mode=False, step_reward=-0.01, attack_penalty=-0.1, dead_penalty=-1, attack_food_reward=0.5, max_cycles=500, extra_features=False)
```
minimap_mode
: Turns on global minimap observations. These observations include your and your opponents' piece densities binned over the 2d grid of the observation space. Also includes agent_position, the absolute position on the map (rescaled from 0 to 1).
step_reward
: reward added unconditionally
dead_penalty
: reward added when killed
attack_penalty
: reward added for attacking
attack_food_reward
: reward added for attacking a food
max_cycles
: number of frames (a step for each agent) until game terminates
extra_features
: Adds additional features to observation (see table). Default False
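Putting the optional channels together: if the channel table above is exhaustive, enabling both minimap_mode and extra_features should grow the per-agent observation from 5 to 43 channels. A sketch under that assumption:

```python
from pettingzoo.magent import gather_v3

# All optional channels enabled. Expected channel count, assuming the
# table above is complete:
# 1 obstacle + 1 omnivore_presence + 1 omnivore_hp + 1 omnivore_minimap
# + 1 food_presence + 1 food_hp + 1 food_minimap
# + 33 one_hot_action + 1 last_reward + 2 agent_position = 43
env = gather_v3.env(minimap_mode=True, extra_features=True)
env.reset()
print(env.observe(env.agents[0]).shape)  # expected: (15, 15, 43)
```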