Objective
I am attempting to develop a navigation task inspired by omni.isaac.tasks > manager based > navigation, with a similar training approach. The task involves a robot navigating to a specified x, y position and orientation. However, during inference, I want to integrate real-time camera outputs. For instance, a robot (anymal_c) would use camera images to set a goal pose (x, y, orientation); upon reaching the initial goal, it would determine subsequent goals dynamically based on new camera images.
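To make this concrete, the inference loop I have in mind looks roughly like the sketch below. Nothing here is an existing Isaac Lab API beyond the scene/sensor access: `policy`, `goal_from_image`, and `reached_goal` are placeholders I would have to supply myself, and I am assuming a camera sensor named "camera" in the scene.

```python
import torch


def navigate(env, policy, goal_from_image, reached_goal, max_steps=10_000):
    """Sketch: camera images pick goal poses for a trained navigation policy.

    `policy`, `goal_from_image`, and `reached_goal` are placeholders, not
    Isaac Lab APIs; the sensor name "camera" is my own assumption.
    """
    obs, _ = env.reset()
    camera = env.unwrapped.scene["camera"]
    goal = goal_from_image(camera.data.output["rgb"])  # -> (num_envs, 3): x, y, heading
    for _ in range(max_steps):
        with torch.inference_mode():
            action = policy(obs["policy"], goal)  # placeholder: policy conditioned on the goal
        obs, _, terminated, truncated, _ = env.step(action)
        if reached_goal(env, goal):  # placeholder arrival check
            goal = goal_from_image(camera.data.output["rgb"])  # next goal from a fresh image
        if (terminated | truncated).any():
            obs, _ = env.reset()
```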
Current Progress

I have created a MyScene(InteractiveScene) class with a camera and render the camera's output as a tensor in the simulation while-loop in play.py. However, this output is not yet treated as an observation.
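What I believe is missing is an observation term that reads the camera. My current guess at the pattern, based on the manager-based workflow docs, is below; `camera_rgb` is my own helper, the sensor name "camera" is my assumption, and the import paths follow the omni.isaac.lab naming (older Orbit releases use omni.isaac.orbit instead):

```python
import torch

from omni.isaac.lab.managers import ObservationGroupCfg as ObsGroup
from omni.isaac.lab.managers import ObservationTermCfg as ObsTerm
from omni.isaac.lab.managers import SceneEntityCfg
from omni.isaac.lab.utils import configclass


def camera_rgb(env, sensor_cfg: SceneEntityCfg = SceneEntityCfg("camera")) -> torch.Tensor:
    """My own helper: return the camera's RGB output as a flat per-env tensor."""
    sensor = env.scene[sensor_cfg.name]
    return sensor.data.output["rgb"].flatten(start_dim=1).float()


@configclass
class ObservationsCfg:
    """Observation groups for my (hypothetical) camera navigation task."""

    @configclass
    class PolicyCfg(ObsGroup):
        # image term alongside the usual proprioceptive/command terms
        image = ObsTerm(func=camera_rgb)

    policy: PolicyCfg = PolicyCfg()
```

Is attaching the image as an observation term like this the intended pattern, or should the camera be consumed outside the observation manager entirely?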
Questions

I am uncertain about the necessary modifications or additions to the file structure to properly register this new gym environment for the navigation task. Despite reviewing all the related tutorials three times and reading the API documentation over several months, I am still unclear about this.
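For reference, my current guess at the registration step, copied from the pattern I see in the existing tasks' __init__.py files, is below; the task id, module paths, and config class names are placeholders for my own task:

```python
# my_project/tasks/navigation_camera/__init__.py -- sketch of how I believe
# a new manager-based task is registered; all names below are my own inventions.
import gymnasium as gym

gym.register(
    id="Isaac-Navigation-Camera-Anymal-C-v0",  # hypothetical task id
    entry_point="omni.isaac.lab.envs:ManagerBasedRLEnv",
    disable_env_checker=True,
    kwargs={
        # points at my (hypothetical) env config class
        "env_cfg_entry_point": "my_project.tasks.navigation_camera.navigation_camera_env_cfg:NavigationCameraEnvCfg",
        # agent config for the RL library, e.g. rsl_rl
        "rsl_rl_cfg_entry_point": "my_project.tasks.navigation_camera.agents.rsl_rl_ppo_cfg:NavigationCameraPPORunnerCfg",
    },
)
```

If this is roughly right, what else does the file structure need (agent configs, mdp module, etc.) for the task to show up in train.py/play.py?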
Could anyone provide guidance or point towards relevant examples or documentation that could assist in setting up this environment?
I have been stuck simply trying to understand this library and its APIs for more than a month, working on this alone without any guidance. Any help would be greatly appreciated.
Thank you very much for your time and help.