This is an evaluation trial of Robot Operating System (ROS), Robot Operating System 2 (ROS2), and Robot Raconteur (RR) using the Python `turtle` module. Participants are provided with ready-to-run example scripts; the goal is to integrate these standalone Python scripts with ROS/ROS2/RR and control the on-screen turtle with a webcam.
For the ROS trial:
- Requires Ubuntu 20.04 (native)
- Python 3.8
- OpenCV (`sudo apt-get install python3-opencv`)
- git (`sudo apt-get install git`)
- ROS Noetic (follow the installation instructions to download)
For the ROS2 trial on Ubuntu:
- Requires Ubuntu 20.04 (native)
- Python 3
- OpenCV (`pip3 install opencv-python --user`)
- git (`sudo apt-get install git`)
- ROS2 Foxy (follow the installation instructions to download)
- colcon: simply run `sudo apt install python3-colcon-common-extensions`
For the ROS2 trial on Windows (works for both Windows 10 and 11):
- Follow this link to install all requirements (ROS2 Foxy/Python 3.8/OpenCV). Note: use the ROS2 build installation from aka.ms/ros to install ROS2 instead of downloading the pre-built packages. Your environment setup command will be `call C:\opt\ros\foxy\x64\local_setup.bat`.
- Install RTI Connext (link).
- Go to `C:\Python38`, then copy `python` and paste it with the new name `python_d`.
- colcon: open a command prompt as an administrator, then run `pip install colcon-common-extensions`.
- keyboard module: open a command prompt as an administrator, then run `pip install keyboard`.
- Set the tcl environment variable: open a command prompt as an administrator, then run `setx -m TCL_LIBRARY "C:\Python38\tcl\tcl8.6"`.
For the Docker-based ROS2 setup:
- Follow these instructions to install docker-engine on Ubuntu.
- Fire up the container with the following commands:

  ```
  cd ~/robotics_middleware_trial_python_turtle_ros2
  source docker_run_ros2.sh
  ```
- Join a running container with the following commands:

  ```
  cd ~/robotics_middleware_trial_python_turtle_ros2
  source docker_join_ros2.sh
  ```
For the RR trial:
- Requires Ubuntu 20.04 (native)
- Python 3.8
- OpenCV (`sudo apt-get install python3-opencv`)
- git (`sudo apt-get install git`)
- Robot Raconteur 0.14 (follow the installation instructions to download)
Clone this repository before continuing the trial:

```
cd ~
git clone https://github.com/hehonglu123/Robotics_Middleware_Trial_Python_Turtle.git
```
During the trial, if you encounter questions specific to this trial, feel free to ask in the workspace. For questions about ROS/ROS2, please go to https://answers.ros.org, and for RR, https://www.robotraconteur.com/forum. Please include the following when posting your questions:
- Operating system (Ubuntu 18.04/20.04/Windows 10/11)
- ROS/RR Version (Noetic, Foxy, RR 0.14)
- ROS/ROS2/RR Error Message
- Related code snippets
Inside the `Examples/` folder there are three basic ready-to-run example scripts; each can be run simply by executing `python <script name>`. Please run each one individually and make sure you have a good understanding of it.
- `turtlebot.py`: a simple Python turtle script that drives the turtle around on screen and changes its color.
- `keyboard.py`: a simple Python script that reads arrow-key presses from the keyboard and prints a message for each one.
- `detection.py`: a Python script using the OpenCV module that reads in the image `Examples/images/red.jpeg` as an OpenCV object, filters it with a red filter (`cv2.inRange()`), displays the filtered image, and runs Connected Component Labeling to find connected regions as red objects.
In `turtlebot.py`, the Python turtle module is imported by:

```python
import turtle
```

and the display and turtle are initialized by:

```python
screen = turtle.Screen()
t1 = turtle.Turtle()
t1.shape("turtle")
screen.bgcolor("lightblue")
```
After that, we have a `drive()` function and a `setpose()` function, which drive the turtle at a given speed and set its absolute pose on screen, respectively:

```python
def drive(turtlebot, move_speed, turn_speed):   # drive function: update the turtle's position
    turtlebot.forward(move_speed)
    turtlebot.left(turn_speed)

def setpose(turtlebot, x, y, angle):            # set a new absolute pose for the turtle
    turtlebot.setpos(x, y)
    turtlebot.seth(angle)
```
Inside the first `for` loop, the turtle is driven forward:

```python
for i in range(50):
    drive(t1, 10, 0)
```
Then the turtle's pen color is changed to red:

```python
t1.pencolor("red")
```
In the second `for` loop, the turtle is driven in a spiral:

```python
for i in range(50):
    drive(t1, 10, i)
```
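The `setpose()` helper is not exercised by these loops; as a one-line sketch (using the `t1` and `setpose()` defined above), it can teleport the turtle back to the origin facing right:

```python
setpose(t1, 0, 0, 0)  # jump to (0, 0) with heading 0 degrees (facing right)
```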
For `keyboard.py`, the terminal is set up and initialized to read key presses:

```python
import termios, fcntl, sys, os

# keyboard reading settings
fd = sys.stdin.fileno()
oldterm = termios.tcgetattr(fd)
newattr = termios.tcgetattr(fd)
newattr[3] = newattr[3] & ~termios.ICANON & ~termios.ECHO
termios.tcsetattr(fd, termios.TCSANOW, newattr)
oldflags = fcntl.fcntl(fd, fcntl.F_GETFL)
fcntl.fcntl(fd, fcntl.F_SETFL, oldflags | os.O_NONBLOCK)
```
The above settings set the terminal to read characters in nonblocking mode. In the `while` loop, the keyboard buffer is read repeatedly: arrow-key escape sequences start with `\x1b[`, and if `q` is pressed, the loop is exited.
```python
try:
    while True:
        try:
            # read input and print the corresponding command
            c = sys.stdin.read()
            if "\x1b[A" in c:
                print("drive forward")      # up arrow
            if "\x1b[B" in c:
                print("drive backward")     # down arrow
            if "\x1b[C" in c:
                print("drive right")        # right arrow
            if "\x1b[D" in c:
                print("drive left")         # left arrow
            if "q" in c:
                break
        except IOError:
            pass
        except TypeError:
            pass
```
The two `except` clauses here prevent errors when the input buffer is empty. Finally, upon exit, the terminal is restored to its original settings:
```python
# finish reading keyboard input
finally:
    termios.tcsetattr(fd, termios.TCSAFLUSH, oldterm)
    fcntl.fcntl(fd, fcntl.F_SETFL, oldflags)
```
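The same setup/teardown pattern can be packaged as a context manager so the terminal is always restored, even after an exception. This is an illustrative sketch (the `raw_nonblocking_stdin` name is made up for this example, not part of the trial code):

```python
import termios, fcntl, sys, os
from contextlib import contextmanager

@contextmanager
def raw_nonblocking_stdin():
    # save the current terminal attributes and file-status flags
    fd = sys.stdin.fileno()
    oldterm = termios.tcgetattr(fd)
    oldflags = fcntl.fcntl(fd, fcntl.F_GETFL)
    try:
        # disable canonical mode and echo, and make reads nonblocking
        newattr = termios.tcgetattr(fd)
        newattr[3] = newattr[3] & ~termios.ICANON & ~termios.ECHO
        termios.tcsetattr(fd, termios.TCSANOW, newattr)
        fcntl.fcntl(fd, fcntl.F_SETFL, oldflags | os.O_NONBLOCK)
        yield fd
    finally:
        # always restore the original terminal state
        termios.tcsetattr(fd, termios.TCSAFLUSH, oldterm)
        fcntl.fcntl(fd, fcntl.F_SETFL, oldflags)
```

With this helper, the reading loop above can run inside `with raw_nonblocking_stdin():` and the cleanup happens automatically.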
`detection.py` is an example that detects whether an image contains a large region of red. First, the libraries are imported:

```python
import cv2
import numpy as np
```
And then the image is read in as an OpenCV object, along with metadata variables:

```python
image = cv2.imread("images/red.jpeg")                    # read in the image
image_size = len(image) * len(image[0])                  # total pixel count
image_dimension = np.array([len(image), len(image[0])])  # image dimensions (rows, columns)
```
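Equivalently, the same numbers can be taken from the image's NumPy `shape` attribute, which is often clearer:

```python
height, width = image.shape[:2]              # rows, columns
image_size = height * width                  # same as len(image) * len(image[0])
image_dimension = np.array([height, width])  # same as above
```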
Then the image is filtered with a red filter, and the filtered result is displayed on screen:

```python
filtered_red = cv2.inRange(image, np.array([5, 5, 200]), np.array([200, 200, 255]))  # filter with lower and upper bounds in BGR format
# show the filtered image
cv2.namedWindow("Image")
cv2.imshow("Image", filtered_red)
cv2.waitKey()
```
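As an aside (this is not what `detection.py` does), red filtering is often more robust in HSV space, where hue is less sensitive to lighting. A rough sketch, with threshold values that are assumptions rather than tuned constants:

```python
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# red wraps around hue 0, so two ranges are combined (bounds here are assumed, not tuned)
lower_red = cv2.inRange(hsv, np.array([0, 100, 100]), np.array([10, 255, 255]))
upper_red = cv2.inRange(hsv, np.array([160, 100, 100]), np.array([179, 255, 255]))
filtered_red = cv2.bitwise_or(lower_red, upper_red)
```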
Back in the example script: it reads in the original image and outputs the filtered one. The second step is to filter out the noise, so Connected Component Labeling is applied to the filtered image:
```python
# run connected component labeling to get component stats and centroids
retval, labels, stats, centroids = cv2.connectedComponentsWithStats(filtered_red)  # run CCL on the filtered image
idx = np.where(np.logical_and(stats[:, 4] >= 0.01 * image_size, stats[:, 4] <= 0.1 * image_size))[0]  # keep components of plausible size
for i in idx:
    # centroids are (x, y) while image_dimension is (rows, cols), so flip before comparing
    if np.linalg.norm(centroids[i] - image_dimension[::-1] / 2.) < 50:  # keep only components near the image center
        print("red detected")
```
The detailed trial instructions for each middleware:
- ROS (https://github.com/robotraconteur-contrib/Robotics_Middleware_Trial_Python_Turtle/blob/master/ROS/src/Trial_instruction.md)
- ROS2 (https://github.com/robotraconteur-contrib/Robotics_Middleware_Trial_Python_Turtle/blob/master/ROS2/Trial_instruction.md)
- RR (https://github.com/robotraconteur-contrib/Robotics_Middleware_Trial_Python_Turtle/blob/master/RR/Trial_instruction.md)
The final goal is to integrate these standalone Python examples with ROS/ROS2/RR, such that the python turtle drives based on what the webcam sees.
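As a conceptual sketch only: the middleware plumbing is the point of the trial and is deliberately omitted here, and the `red_detected()` helper below is an assumed name, not trial code. The intended data flow looks roughly like this:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # open the default webcam

def red_detected(frame):
    # same red filter + connected components idea as detection.py
    mask = cv2.inRange(frame, np.array([5, 5, 200]), np.array([200, 200, 255]))
    retval, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    size = frame.shape[0] * frame.shape[1]
    return np.any(stats[1:, 4] >= 0.01 * size)  # skip label 0 (the background)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    if red_detected(frame):
        # in the trial, this is where a ROS/ROS2/RR message or service call
        # would command the turtle, e.g. drive(t1, 10, 0)
        pass
```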