
Started the project to understand how a robot can be simulated in the Gazebo environment and how to use it to get camera images of an object for next-best-view planning.

Next_View_Based_Plannig

Started the project to understand how a robot can be simulated and how to use it to get camera images of an object for next-best-view planning.
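To make the goal concrete, a minimal next-best-view loop can be sketched without any simulator: score each candidate camera direction by how many previously unseen surface points it would reveal, then greedily pick the best. Everything here (the synthetic object, the `visible` test, the candidate directions) is illustrative, not part of the project's actual pipeline.

```python
import numpy as np

def visible(normals, view_dir, cos_thresh=0.2):
    """A surface point counts as visible if its normal faces the camera."""
    return normals @ (-view_dir) > cos_thresh

def best_next_view(candidate_dirs, normals, observed):
    """Index of the candidate view that reveals the most unseen points."""
    gains = [np.count_nonzero(visible(normals, d) & ~observed)
             for d in candidate_dirs]
    return int(np.argmax(gains))

# Synthetic object: 500 unit surface normals sampled on a sphere.
rng = np.random.default_rng(0)
normals = rng.normal(size=(500, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

# Six axis-aligned candidate viewing directions (camera looks along d).
candidates = np.array([[ 1, 0, 0], [-1, 0, 0], [0,  1, 0],
                       [0, -1, 0], [0, 0,  1], [0, 0, -1]], dtype=float)

observed = np.zeros(len(normals), dtype=bool)
plan = []
for _ in range(3):                      # plan three views greedily
    i = best_next_view(candidates, normals, observed)
    observed |= visible(normals, candidates[i])
    plan.append(i)

print(plan, int(observed.sum()))
```

In a simulator the `visible` check would be replaced by rendered camera images or depth maps, but the greedy select-observe-update structure stays the same.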

STEP 1 : Did a survey of robotic simulation software and tools suitable for reinforcement learning. Found that NVIDIA's Isaac Gym has many robotic environments with APIs for running reinforcement learning on those agents. Decided to go with Isaac Gym.

STEP 2 : Followed the tutorial at https://learningreinforcementlearning.com/setting-up-isaac-gym-on-an-ubuntu-laptop-785b5a15e5a9 to install all the prerequisites and Isaac Gym itself. Isaac Gym was installed successfully and the monkygym environment was tested.

STEPS TO BE DONE :

STEP 3 : Find a way to simulate a 6-axis robot with a camera as the end effector and a static object.
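One way to model the camera-as-end-effector setup is directly in the robot's URDF: attach a camera link to the arm's last link with a fixed joint. The fragment below is only a sketch; the link and joint names (`link_6`, `camera_link`, `camera_mount`) and the mounting offset are placeholders, not taken from any particular robot description.

```xml
<!-- Illustrative URDF fragment: a camera rigidly mounted on the flange
     (link_6) of a 6-axis arm. Names and dimensions are placeholders. -->
<link name="camera_link">
  <visual>
    <geometry><box size="0.03 0.03 0.02"/></geometry>
  </visual>
</link>
<joint name="camera_mount" type="fixed">
  <parent link="link_6"/>
  <child link="camera_link"/>
  <origin xyz="0 0 0.05" rpy="0 0 0"/>
</joint>
```

With the camera as a rigid child of the flange, any joint trajectory that positions `link_6` also determines the camera pose, which is exactly what a next-best-view planner needs to control.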

LEARNINGS TILL NOW : Isaac Sim is for robotic simulation. It has ROS and ROS 2 compatibility but needs an NVIDIA RTX graphics card to run. Isaac Sim could not be installed on my PC, which has a GTX graphics card. So I am trying to figure out whether I can use URDF files and run reinforcement learning algorithms on them. Isaac Gym runs in a Python environment and can be used for reinforcement learning.
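Since a URDF is plain XML, its actuated joints and limits (the pieces an RL action space needs) can be inspected with the Python standard library alone, no simulator required. The two-joint URDF below is made up for the example.

```python
import xml.etree.ElementTree as ET

# A toy two-joint URDF, inlined so the example is self-contained.
URDF = """
<robot name="toy_arm">
  <link name="base"/>
  <link name="link_1"/>
  <link name="tool"/>
  <joint name="shoulder" type="revolute">
    <parent link="base"/><child link="link_1"/>
    <limit lower="-3.14" upper="3.14" effort="10" velocity="1"/>
  </joint>
  <joint name="wrist" type="revolute">
    <parent link="link_1"/><child link="tool"/>
    <limit lower="-1.57" upper="1.57" effort="5" velocity="1"/>
  </joint>
</robot>
"""

def actuated_joints(urdf_text):
    """Return (name, lower, upper) for every non-fixed joint in a URDF."""
    root = ET.fromstring(urdf_text)
    joints = []
    for j in root.findall("joint"):
        if j.get("type") == "fixed":
            continue  # fixed joints are not actuated
        lim = j.find("limit")
        joints.append((j.get("name"),
                       float(lim.get("lower")), float(lim.get("upper"))))
    return joints

print(actuated_joints(URDF))
# → [('shoulder', -3.14, 3.14), ('wrist', -1.57, 1.57)]
```

The joint limits extracted this way give the bounds of a continuous action space for an RL policy driving the arm.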
