bug in 'jvrc_walk' env #35

Open
formoree opened this issue Jul 12, 2024 · 2 comments

Comments

@formoree

Traceback (most recent call last):
  File "E:/3_code/4_lab/3.规划/10_具身智能/6_RA/Codeset/Mujoco/LearningHumanoidWalking-main/run_experiment.py", line 133, in <module>
    run_experiment(args)
  File "E:/3_code/4_lab/3.规划/10_具身智能/6_RA/Codeset/Mujoco/LearningHumanoidWalking-main/run_experiment.py", line 83, in run_experiment
    procs=args.num_procs))
  File "E:\3_code\4_lab\3.规划\10_具身智能\6_RA\Codeset\Mujoco\LearningHumanoidWalking-main\rl\envs\normalize.py", line 71, in get_normalization_params
    states.extend(ray.get(ready_ids[0]))
  File "D:\4_code\1_anaconda\envs\walk\lib\site-packages\ray\_private\client_mode_hook.py", line 105, in wrapper
    return func(*args, **kwargs)
  File "D:\4_code\1_anaconda\envs\walk\lib\site-packages\ray\worker.py", line 1713, in get
    raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(AttributeError): ray::_run_random_actions() (pid=35092, ip=127.0.0.1)
  File "python\ray\_raylet.pyx", line 625, in ray._raylet.execute_task
  File "python\ray\_raylet.pyx", line 629, in ray._raylet.execute_task
  File "E:\3_code\4_lab\3.规划\10_具身智能\6_RA\Codeset\Mujoco\LearningHumanoidWalking-main\rl\envs\normalize.py", line 36, in _run_random_actions
    state, _, done, _ = env.step(action.data.numpy())
  File "E:\3_code\4_lab\3.规划\10_具身智能\6_RA\Codeset\Mujoco\LearningHumanoidWalking-main\rl\envs\wrappers.py", line 13, in step
    state, reward, done, info = self.env.step(action[0])
  File "E:\3_code\4_lab\3.规划\10_具身智能\6_RA\Codeset\Mujoco\LearningHumanoidWalking-main\envs\jvrc\jvrc_walk.py", line 118, in step
    rewards = self.task.calc_reward(self.robot.prev_torque, self.robot.prev_action, applied_action)
  File "E:\3_code\4_lab\3.规划\10_具身智能\6_RA\Codeset\Mujoco\LearningHumanoidWalking-main\tasks\walking_task.py", line 55, in calc_reward
    action_penalty=0.050 * rewards._calc_action_reward(self, prev_action),
  File "E:\3_code\4_lab\3.规划\10_具身智能\6_RA\Codeset\Mujoco\LearningHumanoidWalking-main\tasks\rewards.py", line 17, in _calc_action_reward
    action = self._client.get_pd_target()[0]
AttributeError: 'RobotInterface' object has no attribute 'get_pd_target'

Then I checked the 'rewards.py' and 'robot_interface.py' files, and confirmed that there is indeed no function named get_pd_target. So for training, I just commented out that code. But I don't know whether it will have an impact on trained models.

@skr3178

skr3178 commented Jul 15, 2024

@formoree The missing function is present in a fork of this repo: https://github.com/mhming/LearningHumanoidWalking/blob/main/envs/common/robot_interface.py

Add or substitute this code in the downloaded repo's file /LearningHumanoidWalking/envs/common/robot_interface.py.

I included the surrounding methods in the snippet so you know where to add it. I can create a pull request if it doesn't work for you.

    def check_self_collisions(self):
        """
        Returns True if there are collisions other than any-geom-floor.
        """
        contacts = [self.data.contact[i] for i in range(self.data.ncon)]
        floor_contacts = []
        floor_id = mujoco.mj_name2id(self.model, mujoco.mjtObj.mjOBJ_BODY, self.floor_body_name)
        for i, c in enumerate(contacts):
            geom1_is_floor = (self.model.geom_bodyid[c.geom1] == floor_id)
            geom2_is_floor = (self.model.geom_bodyid[c.geom2] == floor_id)
            if geom1_is_floor or geom2_is_floor:
                floor_contacts.append((i, c))
        return len(floor_contacts) != self.data.ncon

    def get_pd_target(self):  ## this line is added
        return [self.current_pos_target, self.current_vel_target]  ## this line is added

    def set_pd_gains(self, kp, kv):
        assert kp.size==self.model.nu
        assert kv.size==self.model.nu
        self.kp = kp.copy()
        self.kv = kv.copy()
        return

    def step_pd(self, p, v):
        self.current_pos_target = p.copy()  ## this line is added
        self.current_vel_target = v.copy()  ## this line is added
        target_angles = self.current_pos_target
        target_speeds = self.current_vel_target

        assert type(target_angles)==np.ndarray
        assert type(target_speeds)==np.ndarray

        curr_angles = self.get_act_joint_positions()
        curr_speeds = self.get_act_joint_velocities()

        perror = target_angles - curr_angles
        verror = target_speeds - curr_speeds

        assert self.kp.size==perror.size
        assert self.kv.size==verror.size
        assert perror.size==verror.size
        return self.kp * perror + self.kv * verror
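For reference, here is a minimal standalone sketch of why the fix works: step_pd() must cache the PD targets so that get_pd_target() can later return them to rewards.py, which reads the position target via get_pd_target()[0]. The class name PDTargetDemo and the stand-in joint state are hypothetical, not the repo's actual RobotInterface:

```python
import numpy as np

class PDTargetDemo:
    """Minimal stand-in for RobotInterface, showing only the patched methods."""

    def set_pd_gains(self, kp, kv):
        self.kp = kp.copy()
        self.kv = kv.copy()

    def step_pd(self, p, v):
        # Cache the targets so get_pd_target() can report them later.
        self.current_pos_target = p.copy()
        self.current_vel_target = v.copy()
        perror = self.current_pos_target - self.curr_angles
        verror = self.current_vel_target - self.curr_speeds
        return self.kp * perror + self.kv * verror

    def get_pd_target(self):
        # What rewards._calc_action_reward() calls; [0] is the position target.
        return [self.current_pos_target, self.current_vel_target]

robot = PDTargetDemo()
robot.curr_angles = np.zeros(2)   # stand-in for get_act_joint_positions()
robot.curr_speeds = np.zeros(2)   # stand-in for get_act_joint_velocities()
robot.set_pd_gains(np.array([10.0, 10.0]), np.array([1.0, 1.0]))

torque = robot.step_pd(np.array([0.5, -0.5]), np.zeros(2))
pos_target = robot.get_pd_target()[0]  # [0.5, -0.5], no AttributeError
```

Without the two caching lines in step_pd(), current_pos_target is never set, and the reward call fails exactly as in the traceback above.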

@formoree
Copy link
Author

OK, let me have a try. With this fix I can uncomment the code and it runs. But I don't know whether it will have an impact on the tasks.
