
IndexError: tensors used as indices must be long, int, byte or bool tensors #2

Open
logic110 opened this issue Oct 11, 2023 · 11 comments

Comments

@logic110

Hi, thank you for your great work on parkour. I encountered a problem when I ran "barrier_track.py": "IndexError: tensors used as indices must be long, int, byte or bool tensors." What do I need to do?

@ZiwenZhuang
Owner

Hi,
Could you explain your problem in more detail?
The file barrier_track.py is not designed to be run alone.
As I remember, "IndexError: tensors used as indices must be long, int, byte or bool tensors." happens when some field in the configuration becomes a float rather than an integer, but I need more details to help you.
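
For reference, a minimal standalone sketch (not from the repo) of how a float-typed index tensor triggers this error and how casting it resolves it:

import torch

info_map = torch.zeros(4, 4, 2)        # stand-in for a lookup table like track_info_map
idx = torch.tensor([1.0, 2.0])         # a float tensor, e.g. produced from a float config field

# info_map[idx] raises:
#   IndexError: tensors used as indices must be long, int, byte or bool tensors
print(info_map[idx.long()].shape)      # casting the index to long works -> torch.Size([2, 4, 2])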

Could you show how you run the script and what the error message says?

Best,

@logic110
Author

Thank you for your fast reply! Here are the error details.
python play.py --task a1_crawl --load_run /home/robot/parkour/parkour/legged_gym/legged_gym/field_a1/crawl_raw_919_ok
Importing module 'gym_38' (/home/robot/parkour/IsaacGym_Preview_4_Package/isaacgym/python/isaacgym/_bindings/linux-x86_64/gym_38.so)
Setting GYM_USD_PLUG_INFO_PATH to /home/robot/parkour/IsaacGym_Preview_4_Package/isaacgym/python/isaacgym/_bindings/linux-x86_64/usd/plugInfo.json
PyTorch version 2.1.0+cu121
Device count 1
/home/robot/parkour/IsaacGym_Preview_4_Package/isaacgym/python/isaacgym/_bindings/src/gymtorch
Using /home/.cache/torch_extensions/py38_cu121 as PyTorch extensions root...
Emitting ninja build file /home/.cache/torch_extensions/py38_cu121/gymtorch/build.ninja...
Building extension module gymtorch...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module gymtorch...
Setting seed: 1
Using LeggedRobotField.__init__, num_obs and num_privileged_obs will be computed instead of assigned.
Not connected to PVD
+++ Using GPU PhysX
Physics Engine: PhysX
Physics Device: cuda:0
GPU Pipeline: enabled
/home/anaconda3/envs/parkour/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3526.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Total number of volume estimation points for each robot is: 2909
Traceback (most recent call last):
File "play.py", line 340, in
play(args)
File "/home/anaconda3/envs/parkour/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "play.py", line 165, in play
env.reset()
File "/home/robot/parkour/parkour/legged_gym/legged_gym/envs/base/base_task.py", line 114, in reset
obs, privileged_obs, _, _, _ = self.step(torch.zeros(self.num_envs, self.num_actions, device=self.device, requires_grad=False))
File "/home/robot/parkour/parkour/legged_gym/legged_gym/envs/base/legged_robot.py", line 97, in step
self.post_physics_step()
File "/home/robot/parkour/parkour/legged_gym/legged_gym/envs/base/legged_robot_field.py", line 121, in post_physics_step
return super().post_physics_step()
File "/home/robot/parkour/parkour/legged_gym/legged_gym/envs/base/legged_robot.py", line 135, in post_physics_step
self.check_termination()
File "/home/robot/parkour/parkour/legged_gym/legged_gym/envs/base/legged_robot_field.py", line 135, in check_termination
stepping_obstacle_info = self.terrain.get_stepping_obstacle_info(self.volume_sample_points.view(-1, 3))
File "/home/robot/parkour/parkour/legged_gym/legged_gym/utils/terrain/barrier_track.py", line 720, in get_stepping_obstacle_info
obstacle_info = self.track_info_map[
IndexError: tensors used as indices must be long, int, byte or bool tensors

@ZiwenZhuang
Owner

What is your PyTorch version? It is probably because of line 698 in barrier_track.py. Maybe try forcing the data type on every possible value?

@jdluuu

jdluuu commented Dec 20, 2023

I met this issue too. It can be fixed by changing your PyTorch to version 1.10.0.

@jdluuu

jdluuu commented Dec 20, 2023

I met this issue too. It can be fixed by changing your PyTorch to version 1.10.0.

Besides, mismatched versions of PyTorch and CUDA worked fine on my computer (PyTorch 1.10.0 + CUDA 11.7, RTX 3090).

@Shifters1

I got the same error using PyTorch 1.10.0 + CUDA 11.3 on an RTX 4090. Is there any fix?

@guyo-shifters

I fixed it by adding .long() to the indices in this line; then it works with newer versions of PyTorch.
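
For anyone who wants to see the pattern, here is a rough standalone sketch of that kind of fix. The names (track_info_map, block_length, points) only loosely mirror the ones in barrier_track.py, and the exact index expression in the repo may differ:

import torch

track_info_map = torch.zeros(8, 8, 3)     # hypothetical per-block obstacle info table
points = torch.rand(5, 3) * 8.0           # hypothetical sampled points in the track frame
block_length = 1.0

# Dividing positions by the block length yields a float tensor ...
track_idx = (points[:, :2] / block_length).floor()

# ... so it has to be cast to long before it can be used as an index:
obstacle_info = track_info_map[track_idx[:, 0].long(), track_idx[:, 1].long()]
print(obstacle_info.shape)                # torch.Size([5, 3])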

@sandorfelber

I fixed it by adding .long() to the indices in this line; then it works with newer versions of PyTorch.

Thanks! This worked for me too.

@CrazyWan528

CrazyWan528 commented Mar 31, 2024

I fixed it by adding .long() to the indices in this line; then it works with newer versions of PyTorch.

Are you using a 4090 GPU? I tried PyTorch 2.1.0 + CUDA 12.1, PyTorch 2.0.0 + CUDA 11.8, and PyTorch 1.10.0 + CUDA 11.3, and they all reported the same error! I'm not sure whether my change to "torch.zeros_like(track_idx[0]).long()" on line 698 is correct.

@CrazyWan528

CrazyWan528 commented Mar 31, 2024

I fixed it by adding .long() to the indices in this line; then it works with newer versions of PyTorch.

Based on your suggestion, I changed the .to(int) on line 737 to .long() and added .long() after line 742 as well, and it then worked fine with PyTorch 2.0.0 + CUDA 11.8 on an RTX 4090.
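
If it helps anyone debugging the remaining lines, a small hypothetical helper (not part of the repo) can confirm that every candidate index tensor already has an allowed dtype before the lookup:

import torch

def check_index_dtypes(*tensors):
    # Hypothetical debugging helper: assert each candidate index tensor has a
    # dtype that PyTorch accepts for indexing (long, int, byte or bool).
    for t in tensors:
        assert t.dtype in (torch.long, torch.int, torch.uint8, torch.bool), \
            f"index tensor has dtype {t.dtype}; call .long() on it first"

# Example: calling check_index_dtypes(track_idx[:, 0], track_idx[:, 1]) right
# before the track_info_map lookup would point at the tensor that still holds floats.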

@AlorithmKing

I changed to a newer version of PyTorch (PyTorch 2.0.0 + CUDA 11.8) and added .long().
