ViSP community results obtained with ViSP #1053
The following video shows the results of the dISAS demonstrator for the H2020 PULSAR project, for which Magellium was the project coordinator.
The ViSP library has been used throughout this work. Related videos:
- Final project video
- More in-depth details and results about the dPAMT demonstrator (precise assembly of mirror tiles)
- Underwater demonstrator (dLSAFFE) for large-scale robotic manipulation in a simulated micro-gravity environment
The following video demonstrates an accurate pointing task performed by a UR10 robot on a mock-up of an aircraft part. This work was carried out in the framework of the ROB4FAM joint lab between Airbus and CNRS, in collaboration with INRIA. The demonstration includes:
https://peertube.laas.fr/videos/watch/9632ae06-2466-46cf-9d4d-6f45ee8b4d91
Reactive.and.accurate.pointing.of.holes.in.a.part.by.localization.using.vision-1080p.mp4
The following video demonstrates deburring operations performed by a Tiago mobile robot on a mock-up of an aircraft pylon. This work was carried out in the framework of the ROB4FAM joint lab between Airbus and CNRS. The demonstration includes:
https://peertube.laas.fr/videos/watch/6f40ea79-abcd-490e-a616-3a67bf297d93
Tiago.mobile.manipulator.performs.deburring.tasks.on.an.aircraft.part-540p.mp4
The following video is associated with the paper Integrating Features Acceleration in Visual Predictive Control, published in the IEEE Robotics and Automation Letters.
2020-Fusco_RAL.mp4
The following video is associated with the paper Defocus-based Direct Visual Servoing, published in the IEEE Robotics and Automation Letters in 2021. ViSP's luminance visual feature (vpFeatureLuminance) was used and extended to take the defocus variation into account in the control law (an interaction matrix involving the Laplacian of the image).
Defocus-based.Direct.Visual.Servoing.mp4
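For readers curious about how such a dense feature is set up, here is a minimal sketch of building the luminance feature in ViSP. The defocus-aware interaction matrix introduced in the paper is not reproduced here; the image, camera, and depth names are placeholders.

```cpp
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpImage.h>
#include <visp3/visual_features/vpFeatureLuminance.h>

// Build the current (sI) and desired (sId) luminance features from a live
// image I and a reference image Id, assuming a constant scene depth Z.
void buildLuminanceFeatures(vpImage<unsigned char> &I, vpImage<unsigned char> &Id,
                            vpCameraParameters &cam, double Z,
                            vpFeatureLuminance &sI, vpFeatureLuminance &sId)
{
  sI.init(I.getHeight(), I.getWidth(), Z); // one feature value per pixel
  sI.setCameraParameters(cam);
  sI.buildFrom(I);

  sId.init(Id.getHeight(), Id.getWidth(), Z);
  sId.setCameraParameters(cam);
  sId.buildFrom(Id); // the error sI - sId drives the control law
}
```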
Model-based tracking with TORO for visual navigation
The following video is a screen capture showing the model-based tracking of an aircraft panel, allowing the TORO humanoid robot to reach a predefined pose (a minimal pose-based servoing sketch is given after these two videos).
Model-based_tracking_with_TORO_Comanoid.mp4

Model-based tracking with TORO for bracket grasping
The following video is also a screen capture, showing the model-based tracking of a bracket feeder so that the TORO humanoid robot can grasp different brackets.
Model-based_tracking_with_TORO_bracket_grasping_Comanoid.mp4
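As an illustration of the "reach a predefined pose" step, here is a minimal pose-based visual servoing sketch with ViSP, assuming the model-based tracker provides the current pose cMo and a desired pose cdMo is given; both names are placeholders for this illustration.

```cpp
#include <visp3/core/vpColVector.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/visual_features/vpFeatureThetaU.h>
#include <visp3/visual_features/vpFeatureTranslation.h>
#include <visp3/vs/vpServo.h>

// One iteration of pose-based visual servoing: cMo comes from the
// model-based tracker, cdMo is the predefined goal pose.
vpColVector poseBasedControl(const vpHomogeneousMatrix &cMo,
                             const vpHomogeneousMatrix &cdMo, double lambda)
{
  // Pose of the current camera frame w.r.t. the desired one.
  vpHomogeneousMatrix cdMc = cdMo * cMo.inverse();

  vpFeatureTranslation t(vpFeatureTranslation::cdMc);
  vpFeatureThetaU tu(vpFeatureThetaU::cdRc);
  t.buildFrom(cdMc);
  tu.buildFrom(cdMc);

  vpServo task;
  task.setServo(vpServo::EYEINHAND_CAMERA);
  task.setInteractionMatrixType(vpServo::CURRENT);
  task.setLambda(lambda);
  task.addFeature(t);  // desired feature is zero for the cdMc representation
  task.addFeature(tu);

  return task.computeControlLaw(); // 6-DoF camera velocity driving cdMc to identity
}
```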
The following three videos show eye-in-hand visual servoing performed in simulation to assemble an in-space primary mirror, in the framework of the PULSAR H2020 project.
Eye-in-Hand_Visual_Servoing_PULSAR_1.mp4
Eye-in-Hand_Visual_Servoing_PULSAR_2.mp4
Eye-in-Hand_Visual_Servoing_PULSAR_3.mp4
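For reference, a minimal eye-in-hand image-based visual servoing skeleton with ViSP looks as follows. The four points and their desired positions are placeholder values; in a real setup, the current features would be updated from a tracker inside the servo loop.

```cpp
#include <iostream>
#include <visp3/core/vpColVector.h>
#include <visp3/visual_features/vpFeaturePoint.h>
#include <visp3/vs/vpServo.h>

int main()
{
  vpServo task;
  task.setServo(vpServo::EYEINHAND_CAMERA);        // velocity expressed in the camera frame
  task.setInteractionMatrixType(vpServo::CURRENT);
  task.setLambda(0.5);                             // proportional gain

  // Four point features; coordinates are normalized image coordinates (meters).
  double xd[4] = {-0.1, 0.1, 0.1, -0.1};
  double yd[4] = {-0.1, -0.1, 0.1, 0.1};
  vpFeaturePoint p[4], pd[4];
  for (int i = 0; i < 4; i++) {
    pd[i].buildFrom(xd[i], yd[i], 1.0);            // desired configuration, depth 1 m
    p[i].buildFrom(1.2 * xd[i], 1.2 * yd[i], 1.2); // placeholder current measurements
    task.addFeature(p[i], pd[i]);
  }

  // In a real servo loop, p[i] would be updated from a tracker at each
  // iteration before recomputing the camera velocity below.
  vpColVector v = task.computeControlLaw();
  std::cout << "v: " << v.t() << std::endl;
  return 0;
}
```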
This video shows results obtained with ViSP in a joint work between the COSMER lab and the PRAO team at Ifremer: visual servoing on a chain with the BlueROV underwater vehicle.
Test.asservissement.visuel.sur.une.chainette.avec.le.bluerov.1.mp4
The following video shows a highly accurate automatic positioning task (accuracy of a few tens of nanometers) performed in a microrobotic workcell using a direct visual servoing method based on photometry [1]. The vision system consisted of a high-magnification optical microscope, while the robotic system was a lab-made microrobotic positioning platform. The whole pipeline was implemented within the ViSP framework.
[1] C. Collewet, E. Marchand. Photometric visual servoing. IEEE Transactions on Robotics, 27(4):828-834, 2011.
video_icra_2011.mp4
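A minimal sketch of one iteration of the photometric control law of [1] is given below, assuming the current and desired luminance features are already built; the paper uses a Levenberg-Marquardt variant, while a plain Gauss-Newton step is shown here.

```cpp
#include <visp3/core/vpColVector.h>
#include <visp3/core/vpMatrix.h>
#include <visp3/visual_features/vpFeatureLuminance.h>

// One Gauss-Newton iteration of photometric visual servoing: the error is
// the raw difference of pixel intensities between current and desired images.
vpColVector photometricControlLaw(vpFeatureLuminance &sI,
                                  vpFeatureLuminance &sId, double lambda)
{
  vpColVector e;
  sI.error(sId, e);   // photometric error e = I - I*

  vpMatrix L;
  sI.interaction(L);  // interaction matrix of the luminance feature

  vpColVector v = L.pseudoInverse() * e;
  return v * (-lambda); // camera velocity minimizing the intensity error
}
```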
The following multimedia illustrates a 6-DoF positioning task achieved using wavelet-coefficient-based direct visual servoing. Instead of the geometric visual features used in standard vision-based approaches, this controller uses wavelet coefficients as control inputs, i.e., the multi-resolution coefficients of the wavelet transform of the image in the spatial domain. The implementation was done with ViSP, and the experimental evaluation covered different conditions of use (nominal conditions, 2D/3D scenes, lighting variations, and partial occlusions).
Icra2016_c.mp4
The multimedia below illustrates a weakly calibrated three-view visual servoing control law for laser steering, ultimately aimed at surgical procedures. The conventional trifocal constraints governing three-view geometry are revisited for a more suitable use in the design of an efficient trifocal vision-based control, from which an explicit control law is derived without any matrix inversion or complex matrix manipulation. Several ViSP functions were used; for instance, the vpDot visual tracker was used to track the laser spot.
video_ijrr.mp4
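For those who want to reproduce the laser-spot tracking part, here is a minimal vpDot tracking loop; the frame grabber is omitted, and a display must be attached to the image for the initial click.

```cpp
#include <visp3/blob/vpDot.h>
#include <visp3/core/vpImage.h>
#include <visp3/core/vpImagePoint.h>

// Track a bright blob such as a laser spot with vpDot. A display (e.g.
// vpDisplayX) must be attached to I so the user can click on the spot.
void trackLaserSpot(vpImage<unsigned char> &I)
{
  vpDot spot;
  spot.setGraphics(true); // draw the tracked pixels when the image is displayed
  spot.initTracking(I);   // initialization by a user click on the spot

  while (true) {
    // Acquire the next frame into I here (grabber omitted in this sketch).
    spot.track(I);
    vpImagePoint cog = spot.getCog(); // spot center fed to the control law
    (void)cog;
  }
}
```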
The following video shows the automatic control of a laser spot using a visual servoing approach. This work was developed in the context of minimally invasive surgery of the middle ear, where residual pathological tissue called cholesteatoma is removed by laser burring. The approach combines an optimal path generation method based on the well-known Traveling Salesman Problem with an image-based visual servoing scheme to treat the residual cholesteatoma, which looks like debris spread all over the middle ear cavity. ViSP was used for the visual tracking of both the laser spot and the cholesteatoma.
ieee_iros_2022.mp4
The video below illustrates a path-following method for laser steering in the context of vocal fold surgery. In this work, non-holonomic control of the unicycle model is used to implement velocity-independent visual path following for laser surgery. The controller was tested in simulation as well as experimentally under several conditions: different velocity inputs (step input, successive step inputs, sinusoidal inputs), optimized/non-optimized gains, a time-varying path (simulating patient breathing), and complex curves with varying curvature. The experiments, performed at 587 Hz, showed an average path-following accuracy below 0.22 pixels (≈ 10 µm) with a standard deviation of 0.55 pixels (≈ 25 µm), and a relative velocity distortion of less than 10⁻⁶ %.
t-ro_video.mp4
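The exact controller of the paper is not reproduced here, but the following generic, textbook-style unicycle path-following law illustrates why such schemes can be velocity-independent; the gains k_d and k_theta are placeholders.

```cpp
#include <cmath>

// Generic unicycle path-following law (textbook form, not the paper's exact
// controller). d is the signed lateral distance to the path, theta the
// heading error, v the forward velocity.
double pathFollowingAngularVelocity(double d, double theta, double v,
                                    double k_d, double k_theta)
{
  // The commanded curvature omega / v depends only on the geometric errors,
  // which makes the resulting path independent of the velocity profile.
  return -v * (k_d * d + k_theta * std::sin(theta));
}
```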
The following video shows the operation of a vision-based control law achieving an automatic 6-DoF positioning task. The objective of this work was to reposition a biological sample under an optical device for non-invasive depth examination at any given time, i.e., performing repetitive and accurate optical characterizations of the sample. The optical examination, also called optical biopsy, is performed with an optical coherence tomography (OCT) system. The OCT device is used both to perform a 3-dimensional optical biopsy and as a sensor to control the robot motion during the repositioning process. The visual servoing controller uses the 3D pose of the biological sample, estimated directly from the C-scan OCT images using a Principal Component Analysis (PCA) framework.
pca_multimedia.mp4
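As a rough illustration of the PCA step (not the paper's actual pipeline), the sketch below computes the centroid and principal axes of a 3-D point set with ViSP's linear algebra; the segmentation of the points from the OCT C-scan is assumed to be done elsewhere.

```cpp
#include <vector>
#include <visp3/core/vpColVector.h>
#include <visp3/core/vpMatrix.h>

// Centroid and principal axes of a 3-D point set via PCA; each input
// vector has size 3. Centroid and axes together define a sample pose.
void pcaPose(const std::vector<vpColVector> &points,
             vpColVector &centroid, vpMatrix &axes)
{
  const unsigned int n = static_cast<unsigned int>(points.size());
  centroid.resize(3); // resize() zero-fills the vector
  for (unsigned int i = 0; i < n; i++)
    centroid += points[i];
  centroid = centroid * (1.0 / n);

  // Centered data matrix, one point per row.
  vpMatrix M(n, 3);
  for (unsigned int i = 0; i < n; i++)
    for (unsigned int j = 0; j < 3; j++)
      M[i][j] = points[i][j] - centroid[j];

  vpMatrix C = M.t() * M; // 3x3 scatter matrix
  vpColVector w;          // singular values (variance along each axis)
  vpMatrix V;
  C.svd(w, V);            // columns of V are the principal axes
  axes = V;
}
```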
Model-based tracking with HRP-4 for circuit breaker manipulation
The following video shows, on the left, the model-based tracking of a circuit breaker and, on the right, the model-based tracking of the HRP-4 hand. This allows servoing the robot hand and tool to reach a predefined pose w.r.t. one of the circuit breaker switches.
Model-based_tracking_with_HRP-4_Comanoid.mp4

Model-based tracking of a circuit breaker using edge + KLT + depth features
The following video shows the model-based tracking of a circuit breaker. The combination of edge, KLT and depth features allows a stable and robust tracking and a precise pose computation of the circuit breaker.
Model-based_tracking_of_a_circuit_breaker_Comanoid.mp4

Model-based tracking comparison between edge + KLT and edge + depth features to track a printer
The following video shows the model-based tracking of a printer. It compares tracking and pose computation between edge + KLT features and edge + depth features. The use of depth features, provided by an ASUS Xtion sensor, allows a more stable and precise tracking and pose computation.
Model-based_tracking_of_a_printer_Comanoid.mp4
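For reference, here is a minimal single-camera sketch of ViSP's generic model-based tracker configured with edge + KLT features; file names are placeholders, and the KLT part requires ViSP built with OpenCV.

```cpp
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/mbt/vpMbGenericTracker.h>

// Single-camera hybrid model-based tracking with edge + KLT features.
// A display must be attached to I for the initial clicks.
void trackObject(vpImage<unsigned char> &I)
{
  vpMbGenericTracker tracker;
  tracker.setTrackerType(vpMbGenericTracker::EDGE_TRACKER |
                         vpMbGenericTracker::KLT_TRACKER);
  tracker.loadConfigFile("object.xml"); // camera and feature settings
  tracker.loadModel("object.cao");      // CAD model of the object
  tracker.setDisplayFeatures(true);
  tracker.initClick(I, "object.init");  // initial pose from user clicks

  vpHomogeneousMatrix cMo;
  while (true) {
    // Acquire the next frame into I here (grabber omitted in this sketch).
    tracker.track(I);
    tracker.getPose(cMo); // pose of the object in the camera frame
  }
}
```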
This thread was created to allow all ViSP users to post videos of results obtained in research, industrial, European or other projects...
It complements the videos that the team regularly publishes on the vispTeam YouTube channel and the Rainbow team's channel.
To contribute to this thread, please indicate the name of your laboratory, company or other entity, add your video or a link to it, and provide a short description.
Feel free to contribute to this thread to promote ViSP usage.