
Last updated 2019-04-08

Where we are (high level)

We have a generic robot arm designed with D-H parameters.

We have a Sixi2 robot built from the generic robot.

We have full solving of forward kinematics (FK).
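
The FK solve is just a chain of Denavit-Hartenberg transforms, one per link, multiplied base to tip. A minimal sketch of the idea in plain Java (the d/r/alpha values come from the robot's D-H table; nothing here is the actual Sixi2 table):

```java
// Forward kinematics sketch: one standard D-H transform per link, multiplied in order.
public class FKSketch {
    // Standard D-H transform for one link: rotate theta about Z, translate d along Z,
    // translate r along X, rotate alpha about X.
    static double[][] dh(double theta, double d, double r, double alpha) {
        double ct = Math.cos(theta), st = Math.sin(theta);
        double ca = Math.cos(alpha), sa = Math.sin(alpha);
        return new double[][] {
            { ct, -st * ca,  st * sa, r * ct },
            { st,  ct * ca, -ct * sa, r * st },
            {  0,       sa,       ca,      d },
            {  0,        0,        0,      1 },
        };
    }

    static double[][] mul(double[][] a, double[][] b) {
        double[][] out = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    out[i][j] += a[i][k] * b[k][j];
        return out;
    }

    // End-effector pose = product of link transforms, base to tip.
    // dhTable holds { d, r, alpha } per link; theta comes from the live joint angle.
    static double[][] forwardKinematics(double[][] dhTable, double[] jointAngles) {
        double[][] t = { {1,0,0,0}, {0,1,0,0}, {0,0,1,0}, {0,0,0,1} };
        for (int i = 0; i < jointAngles.length; i++) {
            double[] row = dhTable[i];
            t = mul(t, dh(jointAngles[i], row[0], row[1], row[2]));
        }
        return t;
    }
}
```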

We have full solving of inverse kinematics (IK). 2019-04-04: we can solve the spherical wrist, but there is nothing yet to compensate for flipping around the wrist singularity. 2019-06-25: the spherical wrist is now solved correctly.
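
For a 6-DOF arm with a spherical wrist, the usual approach is to decouple the problem: find the wrist center from the requested tool pose, solve the first three joints for position, then solve the last three for orientation. A hedged sketch of the wrist-center step (the d6 offset and the approach-axis convention are assumptions for illustration, not verified Sixi2 values):

```java
// Spherical-wrist decoupling sketch.  Back off from the requested tool position
// along the tool approach axis by the wrist-to-tool distance d6.  Joints 1-3
// place that point; joints 4-6 then only have to produce the remaining orientation.
// (The singularity mentioned above happens when joint 5 is near zero and joints
// 4 and 6 become collinear.)
static double[] wristCenter(double[] eePos, double[][] eeRot, double d6) {
    // Assumption: the tool approach axis is the third column of the rotation matrix.
    double[] approach = { eeRot[0][2], eeRot[1][2], eeRot[2][2] };
    return new double[] {
        eePos[0] - d6 * approach[0],
        eePos[1] - d6 * approach[1],
        eePos[2] - d6 * approach[2],
    };
}
```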

We can load and display static models in the world.

We have joystick support #27 for joystick control #28.

We read sensor input from the real robot and use that to adjust the FK of the robot (sensor feedback). #29
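
Sensor feedback here means driving the simulated model with the joint angles the real arm reports instead of the commanded values. A rough sketch, assuming a comma-separated line of angles in degrees (the real Sixi2 wire format may differ):

```java
// Sensor-feedback sketch: parse one reported line of joint angles and feed them
// into the FK model.  The comma-separated format is an assumption for illustration.
static double[] parseJointReport(String line) {
    // e.g. "0.0,12.5,-30.0,0.0,45.0,0.0" -- six angles in degrees
    String[] parts = line.trim().split(",");
    double[] degrees = new double[parts.length];
    for (int i = 0; i < parts.length; i++) {
        degrees[i] = Double.parseDouble(parts[i]);
    }
    return degrees;  // convert to radians and hand to forwardKinematics() above
}
```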

We can record, save, load, and play back recordings of joystick input, which gives us drive-to-teach #32.
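
Drive-to-teach boils down to sampling the joint angles while a human drives the arm with the joystick, time-stamping them, and replaying the same sequence later. A minimal data-structure sketch (names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Drive-to-teach sketch: a recording is a time-stamped list of joint poses.
class Recording {
    static class Keyframe {
        final double timeSeconds;
        final double[] jointAngles;
        Keyframe(double t, double[] q) { timeSeconds = t; jointAngles = q.clone(); }
    }

    private final List<Keyframe> frames = new ArrayList<>();

    // Called every control tick while recording.
    void record(double timeSeconds, double[] jointAngles) {
        frames.add(new Keyframe(timeSeconds, jointAngles));
    }

    // Playback: return the most recent keyframe at or before the given time.
    double[] sample(double timeSeconds) {
        double[] last = frames.isEmpty() ? null : frames.get(0).jointAngles;
        for (Keyframe k : frames) {
            if (k.timeSeconds > timeSeconds) break;
            last = k.jointAngles;
        }
        return last;
    }
}
```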

Concurrently we can add a tool on the end of the arm and build a system to switch tools (tooling). #34

When we have tooling THEN we can pick up and drop off models with the tool (part handling) #35. When we have tooling THEN we might add gripper open/close (tool animation) #36.

We have linear paths over time and we can record, save, load, edit, and play back those paths (path control) #4.
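
Path control over a linear segment is interpolation: given a start pose, an end pose, and a duration, the target at time t is a blend of the two. A sketch interpolating joint angles (a Cartesian-space path would interpolate the tool pose instead and run IK each step):

```java
// Linear path sketch: interpolate joint angles from start to end over a fixed duration.
static double[] lerpJoints(double[] start, double[] end, double t, double duration) {
    double s = Math.max(0.0, Math.min(1.0, t / duration));  // clamp progress to [0,1]
    double[] out = new double[start.length];
    for (int i = 0; i < start.length; i++) {
        out[i] = start[i] + (end[i] - start[i]) * s;
    }
    return out;
}
```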

We can find Jacobians for Sixi2 and solve force, velocity, acceleration, and dynamics. #30
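
The Jacobian relates joint velocities to end-effector velocity (v = J·q̇) and, transposed, maps tool forces back to joint torques (τ = Jᵀ·F). A numeric finite-difference sketch built on the FK routine above (an analytic Jacobian would be faster and exact):

```java
// Numeric Jacobian sketch: perturb each joint slightly, run FK, and measure how
// the end-effector position moves.  Only the 3 translational rows are shown;
// the rotational rows would come from the change in orientation.
static double[][] numericJacobian(double[][] dhTable, double[] q) {
    final double h = 1e-6;
    double[][] jac = new double[3][q.length];
    double[][] base = FKSketch.forwardKinematics(dhTable, q);
    for (int j = 0; j < q.length; j++) {
        double[] qPlus = q.clone();
        qPlus[j] += h;
        double[][] moved = FKSketch.forwardKinematics(dhTable, qPlus);
        for (int i = 0; i < 3; i++) {
            // column 3 of the 4x4 transform is the end-effector position
            jac[i][j] = (moved[i][3] - base[i][3]) / h;
        }
    }
    return jac;
}
```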

Where we are going (high level)

When we have Jacobians AND a stiff enough robot THEN we can safely move in linear paths over time. #31 2019-06-28: Well, no, that's not what Jacobians are for. We can detect forces on the arm as well as push things with the arm at a controlled amount of force. We can also detect the difference between the force expected and the force sensed, and stop if the error is too high.
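
In practice that last check is: predict the joint torques for the tool force we expect (τ = Jᵀ·F), compare against what the joint sensors report, and halt if the difference is too large. A sketch (the threshold and units are placeholders):

```java
// Force-watchdog sketch: stop if sensed joint torque strays too far from the
// torque expected for the commanded tool force.  tauExpected = J^T * F.
static boolean withinForceLimits(double[][] jacobian, double[] toolForce,
                                 double[] sensedTorque, double maxError) {
    int joints = jacobian[0].length;
    for (int j = 0; j < joints; j++) {
        double expected = 0.0;
        for (int i = 0; i < toolForce.length; i++) {
            expected += jacobian[i][j] * toolForce[i];  // (J^T F)_j
        }
        if (Math.abs(expected - sensedTorque[j]) > maxError) {
            return false;  // error too high -- caller should stop the arm
        }
    }
    return true;
}
```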

When we have sensor feedback THEN we can have push-to-teach #33.

When we can switch tools AND pick up and move parts AND record the process THEN we can gamify programming tasks for the arm and build scripts to begin manufacturing (Gamification 1) #37.

Where we are going (code level)

We have OpenCV camera calibration in reach, and we can start to reconstruct scenes to identify collidable shapes.
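
Camera calibration with OpenCV follows the standard chessboard recipe: detect board corners in several views, then fit the camera matrix and distortion coefficients. A hedged sketch using the OpenCV Java bindings (board size and the source of the images are placeholders):

```java
import java.util.ArrayList;
import java.util.List;
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;

// Camera-calibration sketch using a printed chessboard.  A real pipeline would
// also check the RMS reprojection error that calibrateCamera() returns.
public class CalibrationSketch {
    static double calibrate(List<Mat> grayImages, Size boardSize, Size imageSize) {
        // One set of 3D board points (Z=0 plane), reused for every view.
        MatOfPoint3f boardPoints = new MatOfPoint3f();
        List<Point3> corners3d = new ArrayList<>();
        for (int y = 0; y < boardSize.height; y++)
            for (int x = 0; x < boardSize.width; x++)
                corners3d.add(new Point3(x, y, 0));
        boardPoints.fromList(corners3d);

        List<Mat> objectPoints = new ArrayList<>();
        List<Mat> imagePoints = new ArrayList<>();
        for (Mat img : grayImages) {
            MatOfPoint2f found = new MatOfPoint2f();
            if (Calib3d.findChessboardCorners(img, boardSize, found)) {
                objectPoints.add(boardPoints);
                imagePoints.add(found);
            }
        }

        Mat cameraMatrix = new Mat();
        Mat distCoeffs = new Mat();
        List<Mat> rvecs = new ArrayList<>();
        List<Mat> tvecs = new ArrayList<>();
        // Returns the RMS reprojection error; cameraMatrix and distCoeffs are filled in.
        return Calib3d.calibrateCamera(objectPoints, imagePoints, imageSize,
                cameraMatrix, distCoeffs, rvecs, tvecs);
    }
}
```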