# Introduction to Reinforcement Learning

Find the lecture slides here.

The exercises used during the lecture are in the `RL-lecture.ipynb` notebook; their solutions are in `RL-lecture-solutions.ipynb`.

The lecture contains exercises on the following topics:

  1. Interacting with RL environments using Gymnasium.
  2. Markov Decision Processes (MDPs).
  3. Dynamic programming (DP): the Policy Evaluation and Value Iteration algorithms.
  4. Maintaining exploration with epsilon-greedy policies.
  5. TD(0) methods for sample-based RL: a comparison of SARSA and Q-learning.
  6. Function approximation: training a deep reinforcement learning agent to control a Lunar Lander.
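As a taste of topics 4 and 5, here is a minimal sketch of an epsilon-greedy action choice and the two tabular TD(0) updates compared in the lecture. This is an illustrative sketch, not the notebooks' actual code: it assumes Q-values are stored as a plain dict mapping states to lists of per-action values, and the function names are our own.

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon pick a random action, otherwise the greedy one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def q_learning_update(q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Off-policy TD(0): bootstrap from the best action in the next state."""
    target = r + gamma * max(q[s_next])
    q[s][a] += alpha * (target - q[s][a])

def sarsa_update(q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    """On-policy TD(0): bootstrap from the action actually taken next."""
    target = r + gamma * q[s_next][a_next]
    q[s][a] += alpha * (target - q[s][a])
```

The only difference between the two updates is the bootstrap target: Q-learning uses the maximum over next-state actions, while SARSA uses the action the behaviour policy actually selected, which is why the two can learn different policies under exploration.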

## Environment setup

In this presentation you can find more details on how to set up the execution environment for the exercises of the RL lecture. It covers two alternatives: CERN's KubeFlow cluster or Google Colaboratory.

There is a third option if you want to work on your own laptop: use a Python virtual environment.

```shell
# Create a virtual env called ".venv"
python3 -m venv .venv

# Activate it
source .venv/bin/activate

# Install the required libraries
pip install -r requirements.txt

# Launch the Jupyter notebook server
jupyter notebook
```

A new tab should open in your browser. If it does not, paste this into your browser's address bar: http://localhost:8888/

The Dockerfile is meant to generate a Docker image to be used as a notebook server in CERN's KubeFlow (essentially a JupyterLab image). You will not need it for these exercises.