The Wayback Machine - https://web.archive.org/web/20200525143811/https://github.com/topics/ppo

ppo

Here are 187 public repositories matching this topic...

daniel-fudge
daniel-fudge commented Sep 2, 2019

The OpenAI Gym installation instructions are missing a reference to the "Build Tools for Visual Studio 2019" from the following site:

https://visualstudio.microsoft.com/downloads/

I also found this out by reading the following article:
https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30

Even though this is an issue in OpenAI Gym, a note in this RE
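As a rough sketch of the Windows setup the comment is describing (the exact package extras needed depend on which environments you install; this is an assumption, not taken from the Gym docs): install the MSVC build tools first, then install Gym via pip.

```shell
# 1. Install "Build Tools for Visual Studio 2019" (C++ workload) from
#    https://visualstudio.microsoft.com/downloads/ -- several Gym
#    environments ship C/C++ extensions that need a compiler on Windows.
# 2. Then, in a fresh virtual environment:
pip install gym
# Extras such as the Atari environments also compile native code,
# which is where the missing build tools typically surface as errors:
pip install "gym[atari]"
```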

PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO), Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR) and Generative Adversarial Imitation Learning (GAIL).
  • Updated Mar 3, 2020
  • Python
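The algorithm the topic is named after, PPO, centers on a clipped surrogate objective. As a minimal NumPy sketch (the function name, array shapes, and default `eps` are illustrative, not taken from any of the listed repositories):

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Negative of PPO's clipped surrogate objective:
    L^CLIP = E[min(r_t * A_t, clip(r_t, 1 - eps, 1 + eps) * A_t)],
    where r_t is the new/old policy probability ratio and A_t the advantage."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Minimize the negative objective with a gradient-based optimizer.
    return -np.mean(np.minimum(unclipped, clipped))

# A ratio above 1 + eps is clipped, so a large positive advantage
# cannot push a single update arbitrarily far:
loss = ppo_clip_loss(np.array([2.0]), np.array([1.0]))
# min(2.0 * 1.0, clip(2.0, 0.8, 1.2) * 1.0) = 1.2  ->  loss = -1.2
```

The `min` makes the clipping one-sided in the pessimistic direction: the objective only ignores the clipped term when doing so makes the update smaller, which is what keeps PPO's updates trust-region-like without TRPO's constrained optimization.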

This repository contains PyTorch implementations of classic deep reinforcement learning algorithms, including DQN, DDQN, Dueling Network, DDPG, SAC, A2C, PPO, and TRPO. (More algorithms are in progress.)
  • Updated Nov 15, 2019
  • Python
michaelschaarschmidt
michaelschaarschmidt commented Dec 28, 2018

Understanding the build process is currently quite difficult because it happens partly in the graph builder, in static and non-static parts of Component, and in various utils.

We should:

  • Make the purpose of each build op fully clear
  • Fully document the structure of the IR generated by the two builds (potentially revive the visualisation project for this)
  • Clarify the use of build ops in gra
