Hi. I am planning to use your framework for my reinforcement learning task. I saw that there is Betaflight support. My question is: how can I achieve sim2real transfer with Betaflight? Should I write a new flight mode and build new firmware? Also, Betaflight uses its own controller, so how does my RL code integrate on top of it? Are there any successful examples?
The instructions to run the example using Betaflight (in simulation) are detailed here. It leverages the SITL (software-in-the-loop) build of Betaflight to compute motor commands from CTBR (collective thrust and body rates) setpoints.
The simulation environment in gym_pybullet_drones/envs/BetaAviary.py has a CTBR action space, so a policy trained in it outputs actions that are already compatible with Betaflight. However, for sim2real transfer you also need an estimate of the drone's state on the real system. Are you going to use a MoCap system?
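For reference, here is a minimal rollout sketch for BetaAviary, assuming it follows the gymnasium-style reset/step API used by the repo's other environments; the `gui` keyword, the action shape, and the placeholder CTBR values are assumptions, so check the class docstring in gym_pybullet_drones/envs/BetaAviary.py for the exact interface.

```python
# Minimal sketch: stepping BetaAviary with CTBR actions.
# Betaflight SITL must already be running and reachable over UDP
# before the environment is stepped (see the linked instructions).
import numpy as np

from gym_pybullet_drones.envs.BetaAviary import BetaAviary

env = BetaAviary(gui=True)  # assumed keyword; defaults may differ

obs, info = env.reset(seed=42)
for _ in range(240):
    # CTBR action: collective thrust plus roll/pitch/yaw body rates.
    # A placeholder hover-ish command stands in for an RL policy output here.
    action = np.array([[0.5, 0.0, 0.0, 0.0]])  # assumed shape (num_drones, 4)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```

A trained RL policy would simply replace the placeholder action with its own CTBR output at each step, which Betaflight's SITL then converts to motor commands.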