
Environment Usage

The structure of gym_cityflow follows the rules for gym environments; see here for details.

Before using it, cd into the gym_cityflow directory and run pip install -e . to install the environment as a package. Otherwise, gym_cityflow cannot be imported directly in ray_dqn_agent.py.

Ray allows you to define custom environments and neural network models; see here for details.
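As a hedged illustration of the gym environment rules mentioned above (the class and parameter names here are hypothetical and not taken from this repo), a custom environment exposes reset() and step() with the classic gym signatures:

```python
# Minimal sketch of a gym-style environment interface. A real
# environment would subclass gym.Env and also define
# observation_space / action_space; this toy version only shows
# the reset/step contract that Ray expects.
class ToyTrafficEnv:
    def __init__(self, max_steps=10):
        self.max_steps = max_steps
        self.t = 0

    def reset(self):
        # Return the initial observation.
        self.t = 0
        return [0.0]

    def step(self, action):
        # Advance one timestep and return (obs, reward, done, info),
        # as the classic gym API requires.
        self.t += 1
        obs = [float(self.t)]
        reward = 1.0 if action == 0 else 0.0
        done = self.t >= self.max_steps
        return obs, reward, done, {}
```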

Configuration

The config consists of agent_config and env_config:

  1. agent_config: follows DEFAULT_CONFIG in Ray's agents/[algorithm].py and COMMON_CONFIG in agents/trainer.py; the two are related by inheritance, with DEFAULT_CONFIG inheriting from COMMON_CONFIG.
  2. env_config: an entry inside the agent_config dictionary, i.e. a small dict nested in the larger one, used to configure the Cityflow environment parameters.
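The nesting described above can be sketched as a plain Python dict. The env_config key is the one RLlib actually reserves; the other keys and values here are illustrative, not this repo's real settings:

```python
# agent_config is the outer dict; env_config is a smaller dict
# nested inside it, which Ray passes to the environment on creation.
config_agent = {
    "lr": 1e-3,              # illustrative agent-level hyperparameter
    "env_config": {          # inner dict: Cityflow environment parameters
        "config_path": "examples/config.json",  # hypothetical key
        "num_steps": 1000,                      # hypothetical key
    },
}
```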

Future Research

To apply a different algorithm, just change this part

    trainer = DQNTrainer(
        env=CityflowGymEnv,
        config=config_agent)

and

import ray.rllib.agents.dqn as dqn
from ray.rllib.agents.dqn import DQNTrainer

to match your algorithm, and it will work.

Remember: follow the DEFAULT_CONFIG of each algorithm and adjust your own configuration accordingly.
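The relationship between COMMON_CONFIG, DEFAULT_CONFIG, and your own settings can be sketched as plain dict merging. The values below are illustrative, not RLlib's actual defaults:

```python
# COMMON_CONFIG: settings shared by all trainers (agents/trainer.py).
COMMON_CONFIG = {"lr": 1e-4, "gamma": 0.99}

# DEFAULT_CONFIG: per-algorithm config that extends COMMON_CONFIG
# (agents/[algorithm].py), analogous to inheritance.
DEFAULT_CONFIG = {**COMMON_CONFIG, "buffer_size": 50000}

# Your own configuration: override only the keys you need;
# values merged last take precedence.
config_agent = {**DEFAULT_CONFIG, "lr": 1e-3}
```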

TensorBoard

For TensorBoard, just run

tensorboard --logdir=~/ray_results

Note: the location of the ray_results directory depends on your machine; an absolute path can be used directly.
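If you launch TensorBoard from a script rather than a shell, ~ can be resolved portably so the same code works on any machine; a small sketch:

```python
import os

# Expand "~" into the user's home directory so the path is valid
# regardless of the shell; "ray_results" is Ray's default output
# directory name under the home directory.
logdir = os.path.expanduser("~/ray_results")
print(logdir)
```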