
AiKit UI Instructions

  • Applicable models and devices: myPalletizer 260 for M5, myCobot 280 for M5, ultraArm P340, mechArm 270 for M5, myCobot 280 for Pi, mechArm 270 for Pi, myCobot 280 for JN, myPalletizer 260 for Pi

Required environment

Raspberry Pi Ubuntu 20.04 system, Windows 10 or Windows 11, Jetson Nano Ubuntu 20.04 system

Python dependency packages

Before use, make sure the following third-party libraries are installed. opencv-python and opencv-contrib-python must be pinned to version 4.6.0.66; in principle, the other libraries do not need a specific version.

opencv-python==4.6.0.66
opencv-contrib-python==4.6.0.66
pymycobot==3.6.3
PyQt5==5.15.10

If they are not installed, refer to the following commands to install them:

pip install pymycobot
pip install opencv-python==4.6.0.66
pip install opencv-contrib-python==4.6.0.66
pip install pyqt5
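
If you want to confirm the pinned versions before launching the UI, a quick check like the one below (not part of the project; it assumes Python 3.8+ for importlib.metadata) prints what is actually installed:

from importlib.metadata import version  # Python 3.8+ standard library

import cv2  # confirms the OpenCV bindings import cleanly

for pkg in ("opencv-python", "opencv-contrib-python", "pymycobot", "PyQt5"):
    print(pkg, version(pkg))
print("cv2 reports:", cv2.__version__)  # expected to start with 4.6.0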

Install

git clone https://github.com/elephantrobotics/AiKit_UI.git

How to start

Enter the project directory (the path where the repository was cloned) and run:

cd AiKit_UI
python main.py

After a successful startup, the window looks like the figure below:

img

Features

language switch

Click the button in the upper right corner of the window to switch between languages (Chinese, English).
img

device connection

  1. Select the serial port, device, and baud rate
    img

  2. Click the 'CONNECT' button to connect. After the connection succeeds, the 'CONNECT' button changes to 'DISCONNECT' (see the connection sketch after this list)
    img

  3. Clicking the 'DISCONNECT' button will disconnect the robot arm
    img

  4. After the robotic arm is successfully connected, the grayed-out buttons light up and become clickable.
    img
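
The UI handles the connection itself, but if you want to sanity-check the serial link outside the UI, a minimal pymycobot sketch looks like the following; the port name and baud rate below are placeholders and should match the values shown in the UI's drop-downs for your device:

from pymycobot.mycobot import MyCobot
from serial.tools import list_ports

# List candidate serial ports (Windows: COMx, Linux/Raspberry Pi: /dev/ttyUSB* or /dev/ttyAMA0).
print([p.device for p in list_ports.comports()])

mc = MyCobot("/dev/ttyUSB0", 115200)  # placeholder port and baud rate
print(mc.get_angles())                # a simple read to confirm the link is alive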

Turn on the camera

  1. Set the camera serial number (the camera index). The default is 0; on Windows it is usually 1, and on Linux it is usually 0 (see the camera-probing sketch after this list).
    img

  2. Click the 'Open' button to try to open the camera. If it fails to open, try a different camera serial number. A successfully opened camera is shown in the figure below. Note: before use, position the camera directly above the QR-code whiteboard, with the straight line on the whiteboard pointing toward the robotic arm.
    img

  3. After successfully opening the camera, click the 'Close' button to close the camera
    img
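
If you are not sure which camera serial number to enter, a small OpenCV probe like the one below (not part of the UI; the range of indices tried is arbitrary) reports which indices can be opened:

import cv2

for index in range(4):  # try the first few indices; adjust the range as needed
    cap = cv2.VideoCapture(index)
    ok, _frame = cap.read()
    print(f"camera index {index}:", "available" if ok else "not available")
    cap.release()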

algorithm control

  1. Fully automatic mode: after clicking the 'Auto Mode' button, recognition, grabbing, and placing run continuously; click the 'Auto Mode' button again to turn fully automatic mode off.
    img

  2. Return to the initial grabbing point: clicking the 'Go' button stops the current operation and returns the arm to the initial point.
    img

  3. Step-by-step mode. Recognition: click the 'Run' button to start recognition; 'Algorithm' shows the algorithm currently in use.
    img Pick: click the 'Run' button to start grabbing. After a successful grab, recognition and grabbing are automatically turned off and must be started again for the next use.
    img Placement: click the 'Run' button to start placing. The BinA, BinB, BinC, and BinD checkboxes correspond to the four storage boxes BinA, BinB, BinC, and BinD; after selection, the object is placed in the chosen storage box.
    img

  4. Grab point adjustment: X offset, Y offset, and Z offset correspond to the X-axis, Y-axis, and Z-axis positions of the robot arm's coordinates and can be modified according to actual needs. Click the 'Save' button to save; after a successful save, grabbing follows the latest point position (see the offset sketch after this list).
    img
    img

  5. Open the file location. Our code is open source, so you can modify it according to your needs; click the 'Open File' button to open the file location.
    img Open the 'main.py' file and modify it.
    img Note: the 'main.py.bak' file is a backup of the 'main.py' file. To restore it, delete the modified 'main.py' file and rename 'main.py.bak' to 'main.py'; then back up the new 'main.py' again as 'main.py.bak'. You can also choose to re-download the project.

  6. Algorithm selection includes color recognition, shape recognition, QR code recognition, and Keypoints recognition; selecting an algorithm performs the corresponding recognition (see the color-recognition sketch after this list).
    img

  7. How to use yolov5: after successfully connecting the robotic arm, select 'yolov5' as the algorithm,
    image-20230202145832134
    then turn on the camera.
    image-20230202150049121
    Place the picture to be recognized in view, then click the 'Cut' button.
    image-20230202150221140
    Frame the whiteboard area of the QR-code board and press Enter to confirm (the selection can be repeated; see the region-selection sketch after this list).
    image-20230202150752804
    Then run recognition and grabbing.

  8. Add a picture for 'Keypoints'
    img Click the 'Add' button; the camera opens and a prompt appears.
    img Click the 'Cut' button; the current camera frame is captured, and you are prompted to frame the content to save and then press the ENTER key.

    img Frame the content to be saved and press the ENTER key to select the saved area; the saved pictures correspond to the four storage boxes BinA, BinB, BinC, and BinD.

    img The captured content is displayed here.
    img

    You can enter the following path to view the saved pictures
    img

  9. Click the 'Exit' button to exit adding pictures. Note: if you have started capturing, exit only after capturing is finished; you can choose not to save the captured pictures.
    img
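
The X/Y/Z offsets in step 4 amount to shifting the detected grab coordinates before the arm moves. The sketch below only illustrates that idea with pymycobot directly; it is not the UI's own code, and the port, baud rate, orientation values, and speed are assumptions:

from pymycobot.mycobot import MyCobot

mc = MyCobot("/dev/ttyUSB0", 115200)  # placeholder port and baud rate

def move_to_target(x, y, z, x_offset=0.0, y_offset=0.0, z_offset=0.0):
    # send_coords expects [x, y, z, rx, ry, rz] (mm / degrees), a speed and a mode;
    # the fixed orientation below is an assumed example, not the UI's value.
    coords = [x + x_offset, y + y_offset, z + z_offset, -175.0, 0.0, -90.0]
    mc.send_coords(coords, 40, 1)

move_to_target(150.0, -60.0, 100.0, x_offset=5.0, z_offset=-3.0)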
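
For the algorithms in step 6, color recognition is the simplest to illustrate. The HSV-threshold sketch below shows the general technique with the pinned OpenCV build; it is not the UI's implementation, and the red color range and size threshold are assumptions:

import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # camera index as set in the UI
ok, frame = cap.read()
cap.release()
if ok:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Assumed HSV range for a red block; real thresholds depend on lighting.
    mask = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
    contours, _hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h > 1000:  # ignore small noise; threshold is arbitrary
            print("red region centered at pixel:", x + w // 2, y + h // 2)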
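
The 'Cut' step in items 7 and 8 (frame a region, then press Enter to confirm) can be reproduced stand-alone with OpenCV's cv2.selectROI, shown below as a rough illustration rather than the UI's actual code:

import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    # Drag a rectangle over the region to keep, then press ENTER (or SPACE) to confirm.
    x, y, w, h = cv2.selectROI("cut", frame, showCrosshair=False)
    cv2.destroyWindow("cut")
    if w and h:
        roi = frame[y:y + h, x:x + w]
        cv2.imwrite("cut_example.png", roi)  # file name is just an example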

coordinate display

  1. Real-time coordinate display of the robotic arm: click the 'current coordinates' button to open it (see the polling sketch after this list)
    img

  2. Recognition coordinate display: click the 'image coordinates' button to open it
    image-20230106180304086
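
The values in the real-time coordinate display correspond to what the arm reports; with pymycobot this can be polled directly as sketched below (port and baud rate are placeholders):

import time
from pymycobot.mycobot import MyCobot

mc = MyCobot("/dev/ttyUSB0", 115200)  # placeholder port and baud rate
for _ in range(5):
    print("coords [x, y, z, rx, ry, rz]:", mc.get_coords())
    time.sleep(0.5)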
