In this tutorial, we show how InfraredTags can be fabricated and decoded using low-cost, infrared-based 3D printing and imaging tools. While any 2D marker can be embedded as InfraredTags, we demonstrate the process for QR codes and ArUco markers. This research project has been published at the 2022 ACM CHI Conference on Human Factors in Computing Systems. Learn more about the project here.
By Mustafa Doga Dogan*, Ahmad Taka*, Veerapatr Yotamornsunthorn*, Michael Lu*, Yunyi Zhu*, Akshat Kumar*, Aakar Gupta†, Stefanie Mueller*
*MIT and †Facebook Reality Labs
The tutorial consists of three steps: (1) embedding the marker into the 3D model of an object, (2) 3D printing the object, (3) decoding the tag from the printed object.
If you use InfraredTags as part of your research, you should cite it as follows:
Mustafa Doga Dogan, Ahmad Taka, Michael Lu, Yunyi Zhu, Akshat Kumar, Aakar Gupta, and Stefanie Mueller. 2022. InfraredTags: Embedding Invisible AR Markers and Barcodes Using Low-Cost, Infrared-Based 3D Printing and Imaging Tools. In CHI Conference on Human Factors in Computing Systems (CHI '22). Association for Computing Machinery, New York, NY, USA, Article 269, 1–12. https://doi.org/10.1145/3491102.3501951

Our method allows users to embed (a) QR codes and (b) ArUco markers to store information in objects or to track them. The hidden markers are decoded from infrared camera images (c) using a convolutional neural network based on U-Net.
The tagged objects can be 3D printed using a single-material approach (only IR PLA) or a multi-material approach (IR PLA + regular white PLA). While both methods are described in the paper, we highly recommend that you use the multi-material approach.
- Rhino 3D (make sure it is version 6) and Grasshopper 3D. Once installed, follow the instructions to install the Pufferfish plugin.
- Python and IDE (Any IDE will work, however, we use PyCharm)
- The SVGs have to be in a specific format in order for our Grasshopper code to parse them (see the section below).
4) Double-click the Find Mesh Centroid panel (the green box in the image below), then click OK on the pop-up window shown below.
- For the single-material approach, right-click the "Single Material" panel shown below, then click Preview.
- For the multi-material approach, right-click the "Multi Material" panel shown below and click Preview. Similarly, right-click the "IR Filament" panel and click Preview.
- After this, you should see a Rhino object with a code embedded on it.
Rhino object with code embedded in it. Point is used to orient the code over the object.
- Change the xyz coordinates of the point to move the code around on the surface of the object.
- The best way to move the point is to set its coordinates directly: right-click "Pt" on the inputs panel, go to Manage Point Collection, and type a new point.
- Due to a bug in the code, it is best to keep the point's z-coordinate positive.
- We used 1.38 mm and 1.92 mm for the white PLA and IR PLA, respectively.
- However, we recommend that you calibrate these values by first printing a test checkerboard as shown in the CHI'22 paper.
- For single-material, right-click "single material" and click "bake". For multi-material, right-click both "multi material" and "IR filament" and bake each.
- A black wire mesh should appear in the perspective screen.
- Simply highlight it with your mouse, then navigate to File > Export Selected and save it somewhere in your file system.
- Note: For multi-material, you need to bake and export each mesh separately. That way, you will have both the internal PLA component and the outer IR PLA component.
- To format the SVG you have two options:
- Difficult: Take the original SVG and parse it into paths of the following format (a parsing sketch follows this list):
<path d="Mx,yhavbh-az"></path>
- x,y is the starting position
- a is the horizontal length, b is the vertical length
- Ex:
<path d="M0,0h5v6h-5z"></path>
- Easier solution: Use websites that generate the codes automatically and process them with our Python script:
- For QR codes, use SVGs generated by this page (https://www.nayuki.io/page/qr-code-generator-library).
- For ArUco, get SVGs from this page (https://chev.me/arucogen/). Save the SVG and pass it into the "Aruco_to_Path.py" file, changing the paths on lines 84 and 85.
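Below is a minimal sketch of a parser for the path format described in the "Difficult" option above. The function name and return structure are our own illustration, not part of the repository's code:

```python
# Hypothetical helper: parse a rectangle path such as "M0,0h5v6h-5z"
# into (x, y, width, height). Not part of the InfraredTags repo.
import re

PATH_RE = re.compile(
    r"M(?P<x>-?[\d.]+),(?P<y>-?[\d.]+)"  # M x,y : starting position
    r"h(?P<a>-?[\d.]+)"                  # h a   : horizontal length
    r"v(?P<b>-?[\d.]+)"                  # v b   : vertical length
    r"h(?P<a2>-?[\d.]+)z"                # h -a z: close the rectangle
)

def parse_rect_path(d):
    m = PATH_RE.fullmatch(d.replace(" ", ""))
    if m is None:
        raise ValueError("Unexpected path format: " + d)
    return (float(m["x"]), float(m["y"]), float(m["a"]), float(m["b"]))

print(parse_rect_path("M0,0h5v6h-5z"))  # -> (0.0, 0.0, 5.0, 6.0)
```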
- Although our technique can be used with many filaments, we recommend using a standard white PLA for the tag, and IR PLA for the main geometry of the object, including the top layer (link to IR PLA).
This is a preview of the object in Cura Slicer
Now you can send the job to the 3D printer!
You will need a near-infrared (NIR) camera to be able to image and read the tags. Below we describe how you can build your own NIR camera and use our image processing pipeline to use the NIR stream as input for detection. Alternatively, you can use a smartphone that has an NIR camera, such as the OnePlus 8 Pro (see this section).
- Raspberry Pi NoIR Camera (link)
- Raspberry Pi Zero (link)
- Micro-USB to USB type A cable (link)
- (Optional) 3D printed camera case to house all parts (see Section 4)
- Once you have a Raspberry Pi and a near-infrared camera, follow the instructions in Section 4 to set up the Pi + camera as a USB camera
- It is recommended that you use PyCharm to run the decoder demos for both QR and ArUco; however, the code can also be run from a terminal
- Have Python 3 and pip3 pre-installed on your system (the link for this is here); version 3.6 or greater should work just fine
- Run the following command in terminal:
pip install opencv-python numpy dbr opencv-contrib-python pyzbar
- Or, in PyCharm, navigate to File > Settings > Project > Python Interpreter > Install packages (click the plus sign) and install the following packages:
- opencv-python
- numpy
- dbr
- opencv-contrib-python
- pyzbar
You can use our image processing scripts to binarize the raw infrared image using several filters and subsequently pass it to the QR code or ArUco marker reader. Please note that these filters may need manual calibration based on physical conditions (e.g., camera distance, lighting). We have also developed a machine learning (ML) model to make the binarization process more robust by collecting a dataset and training a neural network. You can find more details on the ML approach here.
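As a rough illustration of this filter-then-decode idea, the sketch below blurs and adaptively thresholds one IR frame before handing it to pyzbar. The filename and filter parameters are placeholder assumptions; our actual scripts use their own (calibratable) filter chain:

```python
# Illustrative filter-then-decode pipeline; the parameters are placeholders
# that typically need calibration for your camera distance and lighting.
import cv2
from pyzbar import pyzbar

frame = cv2.imread("ir_frame.png")  # example path to one raw IR frame
assert frame is not None, "could not read the example frame"

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress sensor noise
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 31, 5)  # local contrast -> B/W

for code in pyzbar.decode(binary):  # hand the binarized image to the reader
    print(code.type, code.data.decode("utf-8"))
```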
- Navigate to qr_demo > qr_demo.py
- Open the file in an editor
- Navigate to line 22 and confirm that CAMERA_STREAM is the same as the IR webcam ID
- CAMERA_STREAM = 1 often works, but the correct ID depends on your computer; it can be 0, 2, etc., depending on how many internal webcams you have (see the probe snippet after this list)
- You should see a window pop up on your screen if everything went alright
- The terminal should also output whether or not a code was detected
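If you are unsure which ID belongs to the IR webcam, a quick probe like the following (our own convenience snippet, not part of the repo) lists which indices deliver frames:

```python
# Probe the first few camera indices to find the IR webcam's ID.
import cv2

for idx in range(4):
    cap = cv2.VideoCapture(idx)
    ok = cap.isOpened() and cap.read()[0]
    print("camera index", idx, "->", "available" if ok else "not available")
    cap.release()
```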
- Navigate to aruco_demo > aruco_demo.py
- Open the file in an editor
- Navigate to line 20 and confirm that CAMERA_STREAM is the same as the IR webcam ID
- CAMERA_STREAM = 1 often works, but the correct ID depends on your computer; it can be 0, 2, etc., depending on how many internal webcams you have
- You should see a window pop up on your screen if everything went alright
- The terminal should also output whether or not a code was detected (a minimal detection sketch follows this list)
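For reference, a minimal ArUco detection loop looks roughly like the sketch below. This is not aruco_demo.py itself: it assumes a 4x4 dictionary and an opencv-contrib-python version older than 4.7, where cv2.aruco.detectMarkers is a module-level function (newer versions use cv2.aruco.ArucoDetector instead):

```python
# Minimal ArUco detection loop (illustrative, not the actual demo).
import cv2

CAMERA_STREAM = 1  # set to your IR webcam ID (see above)
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)  # assumed dict

cap = cv2.VideoCapture(CAMERA_STREAM)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is not None:
        print("detected marker IDs:", ids.flatten())
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("aruco", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```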
You should only do this if you want to change the parameters for the ArUco detection.
- Navigate to infrared_python_api and open irtags_calib.py
- Navigate to line 17 and confirm that VIDEO_STREAM is the same as the IR webcam ID
- VIDEO_STREAM = 1 often works, but the correct ID depends on your computer; it can be 0, 2, etc., depending on how many internal webcams you have
- A window with a panel should open on the right; play around with the values until a code is detected
- Take note of these values; they can be used to change the parameters for the image transforms (an illustrative panel sketch follows this list)
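To give a sense of what such a calibration panel does, here is an illustrative slider panel built with OpenCV trackbars. The actual irtags_calib.py may expose different parameters; the adaptive-threshold sliders below are our own example:

```python
# Illustrative calibration panel: drag the sliders until the thresholded
# preview makes the marker readable, then note the values.
import cv2

VIDEO_STREAM = 1  # set to your IR webcam ID

cv2.namedWindow("panel")
cv2.createTrackbar("block size", "panel", 31, 101, lambda v: None)
cv2.createTrackbar("C", "panel", 5, 30, lambda v: None)

cap = cv2.VideoCapture(VIDEO_STREAM)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    block = max(3, cv2.getTrackbarPos("block size", "panel") | 1)  # odd, >= 3
    c = cv2.getTrackbarPos("C", "panel")
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, block, c)
    cv2.imshow("panel", binary)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```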
We have also developed machine learning (ML) modules for turning a low-resolution IR image into a binary image in which the code is more easily detected, using a convolutional neural network (CNN). You can access the ML tutorials in the related subdirectory.
Data-driven approach without ML: In case you prefer not to use ML, we have another data-driven method, which is slightly less robust. For this method, we use the same data that was used for training the ML module to find the filters that work best on the data using a greedy algorithm. We select the filter that works on the largest number of images, then repeat the process on the images that still cannot be detected after applying the selected filter. We also utilize an image pyramid; that is, each filter is applied to the image at several different sizes. Note that this method only works well if your data contain only a few different markers. You can find the code here (a sketch of the greedy loop follows below).
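The greedy loop can be summarized as in the sketch below. `filters` (a list of candidate image transforms, where trying a filter at several pyramid scales can simply be modeled as extra candidates) and `detects` (returns True when the code reader succeeds) are placeholders for your own implementations:

```python
# Greedy filter selection: repeatedly pick the filter that unlocks the most
# still-undetected images, then recurse on whatever remains undetected.
def greedy_filter_selection(images, filters, detects):
    selected = []
    remaining = list(range(len(images)))  # indices of undetected images
    while remaining:
        best = max(filters,
                   key=lambda f: sum(detects(f(images[i])) for i in remaining))
        covered = {i for i in remaining if detects(best(images[i]))}
        if not covered:  # no candidate helps the rest; stop
            break
        selected.append(best)
        remaining = [i for i in remaining if i not in covered]
    return selected
```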
- OnePlus 8 Pro (found here) with Android 11. This phone has an embedded near-infrared camera. You can buy a used one from Amazon.
- ADB shell (installation guide)
- Follow the steps to enable wireless debugging on the OnePlus and pair with your PC (here)
- Once everything is installed and you have paired the OnePlus phone to your computer via adb, you can run this command to get the IR camera to show up:
adb shell am start -n com.oneplus.factorymode/.camera.manualtest.CameraManualTest
(more detail here)
- You should see the IR stream open on the OnePlus:
- Note: if you do not see the IR camera, you may have to change the camera view to "Fourth rear camera(4)" as seen in the top right of the image
- It is important that once you are in the "Fourth rear camera(4)" view, you do not change views again. Otherwise, the app will freeze and you will need to restart the phone and resend the command to open the factory camera mode.
- Finally, after all this is done, navigate to the oneplus folder (here) and run oneplus.py. This should open up a window on your PC, livestreaming the phone's screen (one way such a stream can work is sketched below).
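We do not reproduce oneplus.py here, but one way such a livestream can work is to repeatedly pull screenshots over adb and display them, as in this sketch (`adb exec-out screencap -p` emits the current screen as a PNG):

```python
# Illustrative screen mirror over adb; oneplus.py may work differently.
import subprocess
import numpy as np
import cv2

while True:
    # Grab one screenshot from the phone as PNG bytes.
    png = subprocess.check_output(["adb", "exec-out", "screencap", "-p"])
    frame = cv2.imdecode(np.frombuffer(png, np.uint8), cv2.IMREAD_COLOR)
    if frame is None:
        continue
    cv2.imshow("OnePlus IR stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cv2.destroyAllWindows()
```

Note that polling screencap is slow (a few frames per second); the snippet only illustrates the idea.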
- All the demo code above for detecting QR codes uses Dynamsoft Barcode Reader (DBR) in the backend. Our code comes with a 1-day public trial license which must be renewed after expiration.
- If you do not renew the license, you will get only partial decoding of messages.
- To update the license key after obtaining a new license from Dynamsoft, navigate to the dbr_decode.py file for each demo and change the license key variable (line 4 of dbr_decode.py); a placeholder snippet is shown below.
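For orientation, setting the key typically looks like the lines below; the exact call depends on your dbr SDK version, and "YOUR-NEW-LICENSE-KEY" is of course a placeholder:

```python
# dbr 9.x style license initialization; replace the placeholder key.
from dbr import BarcodeReader

BarcodeReader.init_license("YOUR-NEW-LICENSE-KEY")
reader = BarcodeReader()
```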
Fully assembled custom IR camera module with IR LEDs
Optional items and tools are only needed for the addition of IR LEDs.
- 1 Raspberry Pi NoIR Camera (link)
- 1 Raspberry Pi Zero (link)
- 1 Micro-USB to USB type A cable (link)
- 8 3mm x 2mm Neodymium magnets (link)
- 8 6mm x 3mm Neodymium magnets (link)
- (Optional) 2 OSRam IR LEDs 4716AS (link)
- (Optional) Male to Female Jumper Wires (link)
- (Optional) 2 2.2K Ohm Resistors (link)
- (Optional) 8 M3x4mm screws and nuts (link)
- 8 M2x6mm and 2 M2x8mm screws and nuts (link)
- IR Filter (link)
- Small Hammer
- (Optional) Super Glue
- (Optional) Soldering Iron and Solder
- Print the STLs in the Camera Case V2 folder (hardware > Camera_Case > STL > V2)
- It is recommended you use 20% infill with any choice of filament
- Mount Raspberry Pi to the camera case body using 4 M2x6mm screws and 4 M2 nuts.
- Next, place the magnets in the case: using a small hammer, gently tap them into the holes at the top of the case and in the bottom of the filter mount. Make sure the magnets in the case and the filter mount are oppositely polarized! Take your time with this step.
- Use 4 hex M2x6mm screws and 4 M2 nuts to mount the camera to the 3D printed camera mount.
- Plug the camera into the Raspberry Pi Zero, making sure the cable stays within the camera case body.
- Screw the camera mount into the main case for the Raspberry Pi with two M2x8mm screws.
- As with the earlier magnet step, place the magnets for the filter cover. Again, make sure the magnets between the cover and mount are oppositely polarized, and take your time with this step.
- Next, follow the instructions to make the Pi a USB camera (link)
- If everything went well, you should be able to plug a Micro-USB to USB-Type-A cable into the Pi and access the camera as a regular USB camera
It is recommended that you have some experience with circuits prior to this assembly.
- First, cut 4 male-to-female jumper wires in half, then strip the ends of the wires.
- Next, pre-tin the half-wires.
- Next, take the 2 tiny 4716AS IR LEDs and pre-tin the pads on each IR LED, making sure not to short them.
- Next, solder two of the male-half jumper wires (from the first step) to the anode and cathode pads of one IR LED, keeping track of which wire is which.
- Repeat the previous step for the second IR LED.
- Next, solder the two 2.2K Ohm resistors to the +5V pad on the Pi Zero, resulting in two parallel current-limiting branches (one per LED).
- Now solder two female-half jumper wires (from the first step) to the ground (GND) pad.
- Solder one female-half jumper wire to one of the resistors on the +5V rail, and do the same for the other resistor.
- The result should be 4 female pins soldered to the Raspberry Pi, into which you can plug the IR LEDs.
- Next, follow the steps laid out in Assembly Instructions without IR LEDs.
- Next, glue the IR LEDs to the IR LED cover (with super glue).
- Screw the IR LED covers, with glued LEDs, onto the 3D printed filter mount with 8 M3x4mm screws.
- If everything went well, you should be able to plug a Micro-USB to USB-Type-A cable into the Pi and access the camera as a regular USB camera.