Use case multi robot carry #124
Comments
Anyone facing this issue, please install this package: https://pypi.org/project/motion-planning-scenes/. Thank you.
@josyulakrishna Thanks for your interest in the project. And great to see that you managed to solve the problem yourself as well. The motion planning scenes are optional. You can install them using pip, either on the cloned repository or through poetry.
Thank you @maxspahn. I would need some help/advice: I am planning to build a multi-agent payload-carrying simulation on top of the existing multi-robot simulation with point robots in it. This is the simulation rendered with two point robots. The idea is to train these robots with RL to carry the load to a goal. However, I'm not exactly sure where to start, as I'm very new to pybullet and URDF. How do I add this load onto the robots, so that it is essentially latched to them? Can you please give me some suggestions? Any help is deeply appreciated. Thank you!
Hi @josyulakrishna, Nice use case, especially as it will directly use the brand-new feature of multiple robots in the scene. example_carry_multi_robots.zip There is only one thing you will have to do: define an appropriate reward function for this case, probably based on the location of the obstacle. We are currently developing a method to access the position of all obstacles in the scene at runtime, which might help you a lot in doing that. Let me know how it goes, and feel free to open a branch in your fork so we can directly discuss your code (and eventually merge the use case in here as an example 🔥).
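The reward function mentioned above could be shaped roughly like this (a minimal sketch, not part of the project; the function name, the `d_latch` threshold, and the penalty value are all illustrative assumptions — in the real setup the positions would come from the environment observation):

```python
import numpy as np

def carry_reward(load_pos, goal_pos, robot_positions, d_latch=0.5):
    """Hypothetical shaped reward for the multi-robot carry task:
    negative distance from the load to the goal, plus a penalty
    whenever a robot drifts too far from the load (i.e. drops it).
    All names and parameters here are illustrative assumptions."""
    dist_to_goal = np.linalg.norm(np.asarray(load_pos) - np.asarray(goal_pos))
    reward = -dist_to_goal
    for p in robot_positions:
        # Penalize a robot that is further than d_latch from the load.
        if np.linalg.norm(np.asarray(p) - np.asarray(load_pos)) > d_latch:
            reward -= 1.0
    return reward

print(carry_reward([1.0, 0.0], [0.0, 0.0], [[1.0, 0.0], [0.9, 0.0]]))
```

Dense distance-based shaping like this tends to be easier for PPO-style algorithms to learn from than a sparse goal-reached reward.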
Thank you so much @maxspahn. I have cloned the repo and will update my progress there. I am actually looking into making the load latch to the robot. I have modified the pointRobot.urdf, which can be seen in this code, and the simulation can be run with point_robot.py in the examples folder; a picture is attached below. This also wouldn't block the lidar's view. Also, I need to have the load latched to the robots, so I did try to make a new urdf file with two robots, which would be carrying the load; the file is here. However, robot_2's revolute joint is not rotating around the robot's axis. I have tried asking on Slack and the ROS forums, but I had no luck. It'd be really helpful if you could get a chance to take a look at it. Thank you again for the advice!
Hi again, I have looked at your fork and the corresponding implementations. There are several things I should clarify:
So, given that you have already designed a nice point robot with a latch that does not mess up with the lidar, there is very little left to do.
```python
# Import paths may vary with the installed versions of urdfenvs and
# motion-planning-scenes; vel0 is a placeholder for the initial velocities.
import os

import gym
import numpy as np
from urdfenvs.robots.generic_urdf import GenericUrdfReacher
from MotionPlanningEnv.urdfObstacle import UrdfObstacle

render = True
robots = [
    GenericUrdfReacher(urdf="loadPointRobot.urdf", mode="vel"),
    GenericUrdfReacher(urdf="loadPointRobot.urdf", mode="vel"),
]
env = gym.make(
    "urdf-env-v0",
    dt=0.01, robots=robots, render=render
)
action = np.array([0.1, 0.0, 1.0, 0.1, 0.0, 1.0])
base_pos = np.array(
    [
        [1.0, 0.1, 0.0],
        [1.5, 0.1, 0.0],
    ]
)
vel0 = np.zeros(6)  # initial velocities for both point robots
ob = env.reset(vel=vel0, base_pos=base_pos)
urdf_obstacle_dict = {
    "type": "urdf",
    "geometry": {"position": [0.2, -0.0, 1.05]},
    "urdf": os.path.join(os.path.dirname(__file__), "block.urdf"),
}
urdf_obstacle = UrdfObstacle(name="carry_object", content_dict=urdf_obstacle_dict)
env.add_obstacle(urdf_obstacle)
```

There you obviously have to play around with the position of the obstacle so that it actually falls into the robot's latch. I hope this helps a bit. Let me know if something remains unclear. Best,
By the way: I have re-opened the issue and changed its name.
Thank you @maxspahn, indeed this does give me some ideas! |
@maxspahn Hey Max! Hope you're doing well. Is there any way I can suppress these kinds of warnings? They seem to be coming from the lidar sensor link.
Any tips on making simulation faster for training? I have disabled rendering too. |
Concerning the warnings: I think you could add a mass in the corresponding link of the urdf file. It never really bothered me, as it is only invoked once. For speed: I am afraid that the physics engine is the limiting factor here. Unless you are ready to increase the time step, there is not much more speed-up to be expected. We are currently looking into replacing the physics engine with Isaac Gym, as it allows better parallelization.
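For reference, adding a mass to a link means giving it an `<inertial>` element in the URDF. A minimal sketch (the link name and the numeric values are assumptions; they would need to match the actual lidar link in pointRobot.urdf):

```xml
<!-- Hypothetical lidar link with a small non-zero mass and inertia,
     which should silence pybullet's missing-mass warnings. -->
<link name="lidar_sensor_link">
  <inertial>
    <mass value="0.01"/>
    <inertia ixx="1e-5" ixy="0" ixz="0" iyy="1e-5" iyz="0" izz="1e-5"/>
  </inertial>
</link>
```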
Resetting the environment removes the robots from the environment, keeping only one robot. For example, this simple code
before reset: After the first reset of the environment, only one robot exists. I'm not sure why this is happening. I noticed this after running PPO, which seemed to fail consistently; env.reset() is called after every episode terminates, which keeps only one robot in the environment. Can you please help regarding this? All the code I'm using is here: https://github.com/josyulakrishna/gym_envs_urdf Edit: Found a workaround: every time env.reset(base_pos) is called, one can rebuild the environment.
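The rebuild-on-reset workaround described above can be sketched generically (a minimal sketch with a stand-in environment class; in the actual project the factory would call gym.make("urdf-env-v0", ...) with the two GenericUrdfReacher robots — the class and method names here are illustrative):

```python
class EpisodeRunner:
    """Works around the disappearing-robot behavior by rebuilding the
    environment at the start of every episode instead of calling
    env.reset() on the existing instance."""

    def __init__(self, env_factory):
        # env_factory: zero-argument callable returning a fresh environment.
        self.env_factory = env_factory
        self.env = None

    def begin_episode(self):
        # Close the previous environment (if any) and build a fresh one,
        # so that all robots are present again.
        if self.env is not None and hasattr(self.env, "close"):
            self.env.close()
        self.env = self.env_factory()
        return self.env


# Stand-in environment to demonstrate the pattern without pybullet.
class DummyEnv:
    instances = 0

    def __init__(self):
        DummyEnv.instances += 1

    def close(self):
        pass


runner = EpisodeRunner(DummyEnv)
runner.begin_episode()  # first episode: builds env #1
runner.begin_episode()  # second episode: closes #1, builds #2
```

Rebuilding the environment is slower than an in-place reset, but it guarantees a clean multi-robot scene at the start of each episode.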
@maxspahn Hey Max, can you please give me an environment with the walls set up like this? Thank you.
Anyone interested may like this result: I used MAPPO to get this behavior, and will extend it to obstacle avoidance (passing through gaps in walls). video.mp4
Installing the extra package also fixes the following:
I have followed all the instructions and tried to run the code; however, I am getting this error.
I did a search, and it looks like there isn't a MotionPlanningGoal in the project. How can I resolve this error? Thank you!