
This environment is part of the MAgent environments. Please read that page first for general information.

| Name | Value |
|------|-------|
| Actions | Discrete |
| Agents | 162 |
| Parallel API | True |
| Manual Control | No |
| Action Shape | (21) |
| Action Values | Discrete(21) |
| Observation Shape | (13,13,41) |
| Observation Values | [0,2] |
| Import | `from pettingzoo.magent import battle_v1` |
| Agents | `agents = [red_[0-80], blue_[0-80]]` |
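The 162 agents are split evenly between the two teams. A quick sketch of the naming scheme implied by the table above (assuming zero-based indices, as the `red_[0-80]`/`blue_[0-80]` notation suggests):

```python
# Build the agent name list implied by agents = [red_[0-80], blue_[0-80]].
red_agents = [f"red_{i}" for i in range(81)]    # red_0 ... red_80
blue_agents = [f"blue_{i}" for i in range(81)]  # blue_0 ... blue_80
agents = red_agents + blue_agents

print(len(agents))         # 162 agents in total, matching the table
print(agents[0], agents[-1])
```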

Agent Environment Cycle



A large-scale team battle.

Like all MAgent environments, agents can either move or attack each turn. An attack against another agent on their own team will not be registered.

Action options: [do_nothing, move_12, attack_8]
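The action options above account for the full `Discrete(21)` action space. A sketch of the arithmetic:

```python
# The action space decomposes as do_nothing (1) + move_12 (12) + attack_8 (8).
DO_NOTHING = 1
MOVE = 12    # 12 reachable movement cells
ATTACK = 8   # 8 attackable neighboring cells
n_actions = DO_NOTHING + MOVE + ATTACK
print(n_actions)  # 21, matching Discrete(21)
```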

Reward is given as:

- -0.005 reward every step (step_reward option)
- -0.1 reward for dying (dead_penalty option)
- -0.1 reward for attacking (attack_penalty option)
- 0.2 reward for attacking an opponent (attack_opponent_reward option)

If multiple options apply, rewards are added together.
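As an illustration of how rewards stack, consider an agent that attacks an opponent on a given step, using the default values from the env signature: the unconditional step reward, the attack penalty, and the attack-opponent reward all apply and are summed.

```python
# Default reward values from the battle_v1.env signature.
step_reward = -0.005
attack_penalty = -0.1
attack_opponent_reward = 0.2

# An agent that attacks an opponent this step triggers all three options,
# and the rewards are simply added together.
reward = step_reward + attack_penalty + attack_opponent_reward
# reward is 0.095 (up to floating-point rounding)
```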

Observation space: [empty, obstacle, red, blue, minimap_red, minimap_blue, binary_agent_id(10), one_hot_action, last_reward, agent_position]

Map size: 45x45


battle_v1.env(step_reward=-0.005, dead_penalty=-0.1, attack_penalty=-0.1, attack_opponent_reward=0.2, max_frames=1000)

step_reward: reward added unconditionally every step

dead_penalty: reward added when an agent is killed

attack_penalty: reward added for attacking

attack_opponent_reward: reward added for attacking an opponent

max_frames: number of frames (a step for each agent) until the game terminates
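The reward arguments above can be summarized as a small function. This is an illustrative sketch of how the options combine per agent step (the function name and event flags are hypothetical, not part of the library's API):

```python
def battle_step_reward(attacked=False, hit_opponent=False, died=False,
                       step_reward=-0.005, dead_penalty=-0.1,
                       attack_penalty=-0.1, attack_opponent_reward=0.2):
    """Illustrative model of how the reward options stack for one agent step."""
    reward = step_reward                  # added unconditionally
    if attacked:
        reward += attack_penalty          # any attack action
    if hit_opponent:
        reward += attack_opponent_reward  # the attack hit an opponent
    if died:
        reward += dead_penalty            # the agent was killed this step
    return reward

# An idle step: only the unconditional step reward applies.
print(battle_step_reward())  # -0.005
```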