# Simple


This environment is part of the MPE environments. Please read that page first for general information.

| Name | Value |
| --- | --- |
| Actions | Discrete/Continuous |
| Agents | 1 |
| Parallel API | Yes |
| Manual Control | No |
| Action Shape | (5) |
| Action Values | Discrete(5)/Box(0.0, 1.0, (5,)) |
| Observation Shape | (4) |
| Observation Values | (-inf, inf) |
| Import | `from pettingzoo.mpe import simple_v2` |
| Agents | `agents= ['agent_0']` |
| State Shape | (4,) |
| State Values | (-inf, inf) |

## Agent Environment Cycle



In this environment, a single agent observes a landmark's position and is rewarded based on how close it gets to the landmark (Euclidean distance). This is not a multi-agent environment; it is intended primarily for debugging purposes.
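The reward idea described above can be sketched in plain Python. This is an illustration only, not the library's implementation (the actual MPE code may use a different scaling, such as the squared distance, but the sign convention is the same: closer is better):

```python
import math

def simple_reward(agent_pos, landmark_pos):
    # Hypothetical sketch: reward is the negative Euclidean distance,
    # so it increases toward 0 as the agent approaches the landmark.
    return -math.dist(agent_pos, landmark_pos)

print(simple_reward((0.0, 0.0), (3.0, 4.0)))  # → -5.0
```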

Observation space: `[self_vel, landmark_rel_position]`
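A quick sketch of how this 4-dimensional observation is assembled, using made-up positions and velocities (the concrete numbers below are hypothetical, not taken from the environment):

```python
# The agent's own 2-D velocity, followed by the landmark's
# position expressed relative to the agent.
self_vel = [0.1, -0.2]      # (vx, vy), hypothetical values
agent_pos = [0.5, 0.5]
landmark_pos = [1.0, 0.0]
landmark_rel_position = [l - a for l, a in zip(landmark_pos, agent_pos)]

obs = self_vel + landmark_rel_position
print(obs)       # → [0.1, -0.2, 0.5, -0.5]
print(len(obs))  # → 4, matching the (4,) observation shape
```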

### Arguments

```python
simple_v2.env(max_cycles=25, continuous_actions=False)
```

`max_cycles`: number of frames (a step for each agent) until the game terminates

`continuous_actions`: whether agent action spaces are discrete (default) or continuous