Added Docstrings and Updated manual_control.py #73

Merged (10 commits) on Nov 4, 2022
1 change: 1 addition & 0 deletions README.md
@@ -118,3 +118,4 @@ Alternatively, if this doesn't work, you can also try running MiniWorld with `xv
```
xvfb-run -a -s "-screen 0 1024x768x24 -ac +extension GLX +render -noreset" python3 your_script.py
```

34 changes: 32 additions & 2 deletions gym_miniworld/envs/collecthealth.py
@@ -4,15 +4,45 @@

class CollectHealth(MiniWorldEnv):
"""
## Description

Environment where the agent has to collect health kits and stay
alive as long as possible. This is inspired by the VizDoom
`HealthGathering` environment. Note, however, that the rewards
produced by this environment are not directly comparable to those
of the VizDoom environment.

## Action Space

| Num | Action |
|-----|-----------------------------|
| 0 | turn left |
| 1 | turn right |
| 2 | move forward |
| 3 | move back |
| 4 | pick up |
| 5 | drop |
| 6 | toggle / activate an object |
| 7 | complete task |

## Observation Space

The observation space is an `ndarray` with shape `(obs_height, obs_width, 3)`
representing the view the agent sees.

## Rewards

+2 for each time step
-100 for dying

## Arguments

```python
CollectHealth(size=16)
```

`size`: size of the room

"""

def __init__(self, size=16, **kwargs):
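A minimal usage sketch of the reward scheme above. It assumes the pre-0.26 gym `step`/`reset` API (4-tuple from `step`, bare observation from `reset`); newer gym/gymnasium releases return different tuples.

```python
# Sketch: accumulate the documented rewards under a random policy.
from gym_miniworld.envs.collecthealth import CollectHealth

env = CollectHealth(size=16)
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
    total_reward += reward  # +2 per surviving step, -100 once on death
print("episode return:", total_reward)
```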
29 changes: 27 additions & 2 deletions gym_miniworld/envs/fourrooms.py
@@ -6,8 +6,33 @@

class FourRooms(MiniWorldEnv):
"""
## Description

Classic four rooms environment. The agent must reach the red box to get a reward.

## Action Space

| Num | Action |
|-----|-----------------------------|
| 0 | turn left |
| 1 | turn right |
| 2 | move forward |

## Observation Space

The observation space is an `ndarray` with shape `(obs_height, obs_width, 3)`
representing the view the agent sees.

## Rewards

+(1 - 0.2 * (step_count / max_episode_steps)) when the red box is reached

## Arguments

```python
FourRooms()
```

"""

def __init__(self, **kwargs):
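The completion reward above (shared by several of these docstrings) decays linearly from 1.0 toward 0.8 over the episode; a standalone worked example, pure Python, no env required:

```python
def success_reward(step_count, max_episode_steps):
    # The docstring's formula: earlier success -> larger reward.
    return 1 - 0.2 * (step_count / max_episode_steps)

print(success_reward(125, 250))  # 0.9 (box reached halfway through)
print(success_reward(250, 250))  # 0.8 (box reached on the last step)
```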
28 changes: 28 additions & 0 deletions gym_miniworld/envs/hallway.py
@@ -8,8 +8,36 @@

class Hallway(MiniWorldEnv):
"""
## Description

Environment in which the goal is to go to a red box
at the end of a hallway.

## Action Space

| Num | Action |
|-----|-----------------------------|
| 0 | turn left |
| 1 | turn right |
| 2 | move forward |

## Observation Space

The observation space is an `ndarray` with shape `(obs_height, obs_width, 3)`
representing the view the agent sees.

## Rewards

+(1 - 0.2 * (step_count / max_episode_steps)) when the red box is reached

## Arguments

```python
Hallway(length=12)
```

`length`: length of the entire space

"""

def __init__(self, length=12, **kwargs):
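A quick construction sketch using the documented `length` argument. The printed shape assumes MiniWorldEnv's default observation size of `obs_height=60`, `obs_width=80` (an assumption; check miniworld.py):

```python
from gym_miniworld.envs.hallway import Hallway

env = Hallway(length=12)
obs = env.reset()
print(obs.shape)  # (60, 80, 3) with the default obs_height/obs_width
```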
30 changes: 30 additions & 0 deletions gym_miniworld/envs/maze.py
@@ -7,7 +7,37 @@

class Maze(MiniWorldEnv):
"""
## Description

Maze environment in which the agent has to reach a red box.

## Action Space

| Num | Action |
|-----|-----------------------------|
| 0 | turn left |
| 1 | turn right |
| 2 | move forward |

## Observation Space

The observation space is an `ndarray` with shape `(obs_height, obs_width, 3)`
representing the view the agent sees.

## Rewards

+(1 - 0.2 * (step_count / max_episode_steps)) when the red box is reached

## Arguments

```python
MazeS2()
# or
MazeS3()
# or
MazeS3Fast()
```

"""

def __init__(
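The variants listed above can also be created through the gym registry. The id below assumes this repo's usual `MiniWorld-<ClassName>-v0` naming pattern (verify against the registration module):

```python
import gym
import gym_miniworld  # noqa: F401 (importing registers the MiniWorld-* envs)

env = gym.make("MiniWorld-MazeS3-v0")  # assumed id; see gym_miniworld's registry
obs = env.reset()
```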
28 changes: 28 additions & 0 deletions gym_miniworld/envs/oneroom.py
@@ -7,8 +7,36 @@

class OneRoom(MiniWorldEnv):
"""
## Description

Environment in which the goal is to go to a red box
placed randomly in one big room.

## Action Space

| Num | Action |
|-----|-----------------------------|
| 0 | turn left |
| 1 | turn right |
| 2 | move forward |

## Observation Space

The observation space is an `ndarray` with shape `(obs_height, obs_width, 3)`
representing the view the agent sees.

## Rewards

+(1 - 0.2 * (step_count / max_episode_steps)) when the red box is reached

## Arguments

```python
OneRoomS6()
# or
OneRoomS6Fast()
```

"""

def __init__(self, size=10, max_episode_steps=180, **kwargs):
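Since the only positive reward is the terminal one above, the last step's reward tells you whether the box was reached. A sketch estimating a random policy's success rate, assuming the classic 4-tuple `step` API and that `OneRoomS6` (from the docstring's examples) is defined in this same module:

```python
from gym_miniworld.envs.oneroom import OneRoomS6

env = OneRoomS6()
successes, episodes = 0, 10
for _ in range(episodes):
    obs, done, reward = env.reset(), False, 0.0
    while not done:
        obs, reward, done, info = env.step(env.action_space.sample())
    successes += reward > 0  # positive terminal reward == box reached
print("success rate:", successes / episodes)
```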
32 changes: 32 additions & 0 deletions gym_miniworld/envs/pickupobjs.py
@@ -6,8 +6,40 @@

class PickupObjs(MiniWorldEnv):
"""
## Description

Room with multiple objects. The agent collects +1 reward for picking up
each object. Objects disappear when picked up.

## Action Space

| Num | Action |
|-----|-----------------------------|
| 0 | turn left |
| 1 | turn right |
| 2 | move forward |
| 3   | move back                   |
| 4   | pick up                     |

## Observation Space

The observation space is an `ndarray` with shape `(obs_height, obs_width, 3)`
representing the view the agent sees.

## Rewards

+1 for each object picked up

## Arguments

```python
PickupObjs(size=12, num_objs=5)
```

`size`: size of world

`num_objs`: number of objects

"""

def __init__(self, size=12, num_objs=5, **kwargs):
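Because each pickup is worth exactly +1, the episode return doubles as an object counter. A sketch under the classic gym API:

```python
from gym_miniworld.envs.pickupobjs import PickupObjs

env = PickupObjs(size=12, num_objs=5)
obs, picked, done = env.reset(), 0, False
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
    picked += int(reward)  # +1 per object picked up
print(f"picked {picked} of 5 objects")
```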
33 changes: 33 additions & 0 deletions gym_miniworld/envs/putnext.py
@@ -4,8 +4,41 @@

class PutNext(MiniWorldEnv):
"""
## Description

Single-room environment where a red box must be placed next
to a yellow box.

## Action Space

| Num | Action |
|-----|-----------------------------|
| 0 | turn left |
| 1 | turn right |
| 2 | move forward |
| 3 | move back |
| 4 | pick up |
| 5 | drop |
| 6 | toggle / activate an object |
| 7 | complete task |

## Observation Space

The observation space is an `ndarray` with shape `(obs_height, obs_width, 3)`
representing the view the agent sees.

## Rewards

+(1 - 0.2 * (step_count / max_episode_steps)) when the red box is placed next to the yellow box

## Arguments

```python
PutNext(size=12)
```

`size`: size of world

"""

def __init__(self, size=12, **kwargs):
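The indices in the action table correspond, in order, to the `MiniWorldEnv.Actions` IntEnum in miniworld.py (assuming that enum's ordering matches the table), so actions can be passed by name rather than by raw index:

```python
from gym_miniworld.envs.putnext import PutNext
from gym_miniworld.miniworld import MiniWorldEnv

env = PutNext(size=12)
obs = env.reset()
# Named actions instead of magic numbers (2 and 4 in the table above):
obs, reward, done, info = env.step(MiniWorldEnv.Actions.move_forward)
obs, reward, done, info = env.step(MiniWorldEnv.Actions.pickup)
```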
3 changes: 0 additions & 3 deletions gym_miniworld/envs/remotebot.py
@@ -23,9 +23,6 @@

from gym_miniworld.miniworld import MiniWorldEnv

# from gym.utils import seeding


try:
import zmq
except ImportError:
33 changes: 33 additions & 0 deletions gym_miniworld/envs/roomobjs.py
@@ -6,10 +6,43 @@

class RoomObjs(MiniWorldEnv):
"""
## Description

Single room with multiple objects. Inspired by the single-room
environment of the Generative Query Networks paper:
https://deepmind.com/blog/neural-scene-representation-and-rendering/

## Action Space

| Num | Action |
|-----|-----------------------------|
| 0 | turn left |
| 1 | turn right |
| 2 | move forward |
| 3 | move back |
| 4 | pick up |
| 5 | drop |
| 6 | toggle / activate an object |
| 7 | complete task |

## Observation Space

The observation space is an `ndarray` with shape `(obs_height, obs_width, 3)`
representing the view the agent sees.

## Rewards

0 (no reward is given)

## Arguments

```python
RoomObjs(size=10)
```

`size`: size of world

"""

def __init__(self, size=10, **kwargs):
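With no reward signal, this env is mainly useful for gathering observations (in the spirit of the GQN paper linked above). A collection sketch, classic gym API assumed:

```python
import numpy as np

from gym_miniworld.envs.roomobjs import RoomObjs

env = RoomObjs(size=10)
frames, obs = [], env.reset()
for _ in range(100):
    obs, reward, done, info = env.step(env.action_space.sample())
    frames.append(obs)
    if done:
        obs = env.reset()
dataset = np.stack(frames)  # (100, obs_height, obs_width, 3)
```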
26 changes: 26 additions & 0 deletions gym_miniworld/envs/sidewalk.py
@@ -9,8 +9,34 @@

class Sidewalk(MiniWorldEnv):
"""
## Description

Walk on a sidewalk up to an object to be collected.
Don't walk into the street.

## Action Space

| Num | Action |
|-----|-----------------------------|
| 0 | turn left |
| 1 | turn right |
| 2 | move forward |

## Observation Space

The observation space is an `ndarray` with shape `(obs_height, obs_width, 3)`
representing the view the agent sees.

## Rewards

+(1 - 0.2 * (step_count / max_episode_steps)) when the object is reached

## Arguments

```python
Sidewalk()
```

"""

def __init__(self, **kwargs):
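The documented observation shape is configurable: since `__init__` takes `**kwargs`, custom sizes should pass straight through to `MiniWorldEnv` (an assumption worth verifying):

```python
from gym_miniworld.envs.sidewalk import Sidewalk

env = Sidewalk(obs_width=160, obs_height=120)
obs = env.reset()
print(obs.shape)  # expected (120, 160, 3), i.e. (obs_height, obs_width, 3)
```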