Wall Jump

In this project we'll implement agents that learn to play Unity Wall Jump using several deep RL algorithms. Unity ML-Agents is a toolkit for developing and comparing reinforcement learning algorithms. We'll be using the PyTorch library for the implementation.

Libraries Used

  • Unity ML-Agents
  • PyTorch
  • NumPy
  • Matplotlib
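
As a rough sketch of how these libraries fit together, the snippet below connects to a local Wall Jump build through the ML-Agents Python API (mlagents_envs) and inspects its behavior spec. The file_name path is an assumption, and attribute names have shifted slightly across ML-Agents releases (older versions expose observation_shapes instead of observation_specs).

```python
from mlagents_envs.environment import UnityEnvironment

# Path to a local Wall Jump build; adjust for your machine (assumption).
env = UnityEnvironment(file_name="./WallJump", no_graphics=True)
env.reset()

# Wall Jump registers one behavior per policy (small and big wall);
# inspect the first one.
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

print("Behavior:", behavior_name)
print("Observation shapes:", [obs.shape for obs in spec.observation_specs])
print("Discrete branches:", spec.action_spec.discrete_branches)  # e.g. (3, 3, 3, 2)

env.close()
```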

About the Environment

  • Set-up: A platforming environment where the agent can jump over a wall.
  • Goal: The agent must use the block to scale the wall and reach the goal.
  • Agents: The environment contains one agent linked to two different models. The policy the agent is linked to changes depending on the height of the wall, and the switch is done in the WallJumpAgent class.
  • Agent Reward Function:
    • -0.0005 for every step.
    • +1.0 if the agent touches the goal.
    • -1.0 if the agent falls off the platform.
  • Behavior Parameters:
    • Vector Observation space: Size of 74, corresponding to 14 ray casts each detecting 4 possible objects, plus the global position of the agent and whether or not the agent is grounded.
    • Vector Action space: (Discrete) 4 branches (see the policy sketch after this list):
      • Forward Motion (3 possible actions: Forward, Backwards, No Action)
      • Rotation (3 possible actions: Rotate Left, Rotate Right, No Action)
      • Side Motion (3 possible actions: Left, Right, No Action)
      • Jump (2 possible actions: Jump, No Action)
    • Visual Observations: None
  • Float Properties: Four
  • Benchmark Mean Reward (Big & Small Wall): 0.8
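
To make the observation and action layout concrete, here is a minimal PyTorch sketch (not the repository's actual model) of a policy for this environment: a 74-dimensional vector observation in, and one categorical distribution per discrete action branch out. The network depth and hidden width are illustrative assumptions.

```python
import torch
import torch.nn as nn

OBS_SIZE = 74                # vector observation size from the spec above
BRANCHES = (3, 3, 3, 2)      # forward motion, rotation, side motion, jump

class BranchedPolicy(nn.Module):
    """Toy multi-branch discrete policy; hidden width is an assumption."""
    def __init__(self, obs_size=OBS_SIZE, branches=BRANCHES, hidden=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_size, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One logits head per action branch.
        self.heads = nn.ModuleList(nn.Linear(hidden, n) for n in branches)

    def forward(self, obs):
        h = self.body(obs)
        # Sample one action index per branch from its own categorical distribution.
        samples = [
            torch.distributions.Categorical(logits=head(h)).sample()
            for head in self.heads
        ]
        return torch.stack(samples, dim=-1)  # shape: (batch, 4)

policy = BranchedPolicy()
actions = policy(torch.randn(1, OBS_SIZE))   # e.g. tensor([[1, 0, 2, 1]])
print(actions)
```

Because the environment switches between two policies depending on the wall height, a training loop would typically instantiate two such networks (small wall and big wall) and route each episode to the right one. To send the sampled actions to the environment, wrap them in an mlagents_envs ActionTuple, e.g. ActionTuple(discrete=actions.numpy()), and pass it to env.set_actions before calling env.step().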

Deep RL Agents

Any questions

If you have any questions, feel free to ask me.

Don't forget to follow me on Twitter, GitHub, and Medium to be alerted of new articles I publish.

How to help

  • Clap on articles: Clapping on Medium means you really like my articles, and the more claps an article gets, the more it is shared, making it more visible to the deep learning community.
  • Improve our notebooks: If you find a bug or have a better implementation, you can send a pull request.