
This environment is part of the magent environments. Please read that page first for general information.

| Name | Value |
| --- | --- |
| Actions | Discrete |
| Agents | 495 |
| Parallel API | True |
| Manual Control | No |
| Action Shape | (33) |
| Action Values | Discrete(33) |
| Observation Shape | (15, 15, 43) |
| Observation Values | [0, 2] |
| Import | `from pettingzoo.magent import gather_v1` |
| Agents | `agents= [omnivore_[0-494]]` |

Agent Environment Cycle

In gather, the agents gain reward by eating food or fighting each other. Agents don't die unless attacked. You would expect agents to coordinate by not attacking each other until food becomes scarce.

Action options: [do_nothing, move_28, attack_4]
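The 33 discrete actions thus break down as 1 do-nothing action, 28 move actions, and 4 attack actions. A minimal sketch of decoding an action index into its group, assuming the groups are laid out in the order listed above (the exact index layout is an assumption, not confirmed by this page):

```python
def action_group(action: int) -> str:
    """Map a Discrete(33) action index to its group.

    Assumes the layout [do_nothing, move_28, attack_4] is ordered:
    index 0 -> do_nothing, 1-28 -> move, 29-32 -> attack.
    """
    if not 0 <= action < 33:
        raise ValueError("action must be in [0, 33)")
    if action == 0:
        return "do_nothing"
    if action <= 28:
        return "move"
    return "attack"
```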

Reward is given as (using the default parameter values below):

* -0.01 reward every step (step_reward)
* -0.1 reward for attacking (attack_penalty)
* -1 reward for dying (dead_penalty)
* 0.5 reward for attacking food (attack_food_reward)

Observation space: [empty, obstacle, omnivore, food, omnivore_minimap, food_minimap, one_hot_action, last_reward, agent_position]
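Each agent's observation is a (15, 15, 43) tensor whose channels correspond to the features listed above, with values in [0, 2]. A small sketch of validating an observation's shape and value range with NumPy; the all-zero dummy array here is illustrative, not real environment output:

```python
import numpy as np

OBS_SHAPE = (15, 15, 43)  # shape from the table above

def check_observation(obs: np.ndarray) -> None:
    """Validate the shape and value range of a gather_v1 observation."""
    assert obs.shape == OBS_SHAPE, f"unexpected shape {obs.shape}"
    assert obs.min() >= 0 and obs.max() <= 2, "values outside [0, 2]"

# A dummy all-zero observation passes both checks.
dummy = np.zeros(OBS_SHAPE, dtype=np.float32)
check_observation(dummy)
```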

Map size: 200x200


```python
gather_v1.env(step_reward=-0.01, attack_penalty=-0.1, dead_penalty=-1, attack_food_reward=0.5, max_frames=500)
```

`step_reward`: reward added unconditionally at every step

`dead_penalty`: reward added when an agent is killed

`attack_penalty`: reward added when an agent attacks

`attack_food_reward`: reward added for attacking food

`max_frames`: number of frames (a step for each agent) until the game terminates
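With the default parameters, an agent's episode return can be estimated by summing these terms. A sketch with a hypothetical helper, assuming each reward term simply adds once per event (the engine's exact accounting, e.g. whether a food attack also incurs the attack penalty, is not specified here):

```python
def episode_return(steps, attacks, food_attacks, died,
                   step_reward=-0.01, attack_penalty=-0.1,
                   dead_penalty=-1, attack_food_reward=0.5):
    """Tally the default gather_v1 reward terms for one agent.

    Assumes each term adds independently per event; `attacks`
    counts non-food attacks, `food_attacks` counts attacks on food.
    """
    total = steps * step_reward
    total += attacks * attack_penalty
    total += food_attacks * attack_food_reward
    if died:
        total += dead_penalty
    return total

# 100 steps, 2 attacks on other agents, 1 attack on food, survives:
# 100*-0.01 + 2*-0.1 + 1*0.5 = -0.7
episode_return(100, 2, 1, False)
```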