Hi there, I am using MuJoCo in a basic simulation setup (a single ball that is moved in the y-z plane by a parallel gripper).
My setup
I am rendering a MuJoCo scene on a server running Ubuntu 24.04 Server with an NVIDIA GPU attached. I managed to get rendering to work for egl and mesa, rendering to a virtual X server spawned via xvfb-run.
What's happening? What did you expect?
The rendering speed with %env MUJOCO_GL=egl is the same as with %env MUJOCO_GL=mesa and %env MUJOCO_GL=osmesa. I would expect the speeds to be quite different. Does anyone have an idea why this could be the case?
Note that when I set the environment variable to something else like:
import os
os.environ['MUJOCO_GL']='foo'
this still seems to work (no error when importing mujoco). I therefore wonder whether the setting is picked up at all.
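One way to check whether the setting is actually picked up (a minimal sketch; it assumes PyOpenGL is installed and uses mujoco.GLContext from the Python bindings) is to make a GL context current and print the OpenGL vendor/renderer strings:

import os
os.environ['MUJOCO_GL'] = 'egl'  # must be set before importing mujoco

import mujoco
from OpenGL import GL  # PyOpenGL, assumed installed

# Create an offscreen GL context with whichever backend MuJoCo selected,
# make it current, and ask OpenGL who is actually doing the rendering.
ctx = mujoco.GLContext(640, 480)
ctx.make_current()
print(GL.glGetString(GL.GL_VENDOR).decode())    # e.g. 'NVIDIA Corporation' for EGL on a GPU
print(GL.glGetString(GL.GL_RENDERER).decode())  # e.g. 'llvmpipe (LLVM ...)' for software rendering
ctx.free()

If the renderer string reports llvmpipe or softpipe even with MUJOCO_GL=egl, rendering has fallen back to Mesa's software rasterizer, which would explain identical timings across backends.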
Steps for reproduction
See the code and setup described above. See how I managed to get mesa to work on this issue: https://github.com/KyberRobot #2053 (comment)
Code (run from jupyter lab, kernel spawned using xvfb-run -a python ...):
import os
os.environ['MUJOCO_GL'] = 'egl'  # must be set before mujoco is imported

import time
import matplotlib.pylab as plt
import mujoco
from mujoco import viewer

model = mujoco.MjModel.from_xml_path('model/scene.xml')
data = mujoco.MjData(model)
renderer = mujoco.Renderer(model)

tic = time.time()
for i in range(100):
    mujoco.mj_step(model, data)
    renderer.update_scene(data)
    pixels = renderer.render()
print(f"{time.time() - tic:.2f} s")  # Prints around 1.63 s for mesa and egl
Correction: os.environ['MUJOCO_GL']='foo' does give an error. That said, when I use os.environ['MUJOCO_GL']='egl' or os.environ['MUJOCO_GL']='osmesa', I still get the same rendering performance :/
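Worth noting: as far as I understand, MUJOCO_GL is consulted when mujoco is first imported, so changing it afterwards in the same Jupyter kernel has no effect. A minimal sketch for comparing backends cleanly, launching a fresh interpreter per backend (it reuses the model path from the snippet above):

import os
import subprocess
import sys
import textwrap

# Timing script run in a child process; MUJOCO_GL comes from the child's environment.
BENCH = textwrap.dedent("""
    import time
    import mujoco
    model = mujoco.MjModel.from_xml_path('model/scene.xml')  # path from the setup above
    data = mujoco.MjData(model)
    renderer = mujoco.Renderer(model)
    tic = time.time()
    for _ in range(100):
        mujoco.mj_step(model, data)
        renderer.update_scene(data)
        renderer.render()
    print(f"{time.time() - tic:.2f} s")
""")

for backend in ('egl', 'osmesa'):
    env = dict(os.environ, MUJOCO_GL=backend)
    result = subprocess.run([sys.executable, '-c', BENCH],
                            env=env, capture_output=True, text=True)
    print(backend, result.stdout.strip() or result.stderr.strip())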
My Google Cloud VM was missing the NVIDIA drivers. After installing the graphics drivers, the egl backend renders around 10x faster than the osmesa one.