Feature/structured light (#9)
* Add structured light support for main camera

* Add FAQ section to usage documentation

* Update version to 1.1.0

* Add structured light output documentation
aelmiger committed Apr 5, 2024
1 parent 50d11c0 commit 264ba68
Showing 8 changed files with 288 additions and 2 deletions.
2 changes: 2 additions & 0 deletions .github/workflows/integration_test.yaml
Original file line number Diff line number Diff line change
@@ -56,6 +56,8 @@ jobs:
"./output/*/main_camera_annotations/object_volume/*metadata.yaml"
"./output/*/main_camera_annotations/keypoints/*.json"
"./output/*/main_camera_annotations/keypoints/*metadata.yaml"
"./output/*/main_camera_annotations/structured_light/*.png"
"./output/*/main_camera_annotations/structured_light/*metadata.yaml"
"./output/*/object_positions/*.json"
"./output/*/object_positions/*metadata.yaml"
)
Binary file added docs/docs/img/docs/dot_projection.png
47 changes: 47 additions & 0 deletions docs/docs/usage/faq.md
@@ -0,0 +1,47 @@
# Frequently Asked Questions

Here are some common questions and issues that users may encounter when using Syclops:

## Installation

### Q: I'm having trouble installing Syclops. What should I do?

A: Make sure you have the correct version of Python installed (3.9 or higher) and that you're using a virtual environment to avoid package conflicts. If you're still having issues, please open an issue on the [GitHub repository](https://github.com/DFKI-NI/syclops/issues) with details about your operating system, Python version, and the error messages you're seeing.

## Assets

### Q: How do I add new assets to my project?

A: To add new assets, create an `assets.yaml` file in your project directory that defines the asset library and its assets. Then, run `syclops -c` to crawl the assets and update the catalog. For more information, see the [Assets documentation](/usage/assets/assets).
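A minimal `assets.yaml` might look like the following. This is an illustrative sketch only; the library name, asset name, and field names here are placeholders, so check the [Assets documentation](/usage/assets/assets) for the exact schema:

```yaml
# Hypothetical asset library definition (field names are illustrative)
name: my_asset_library
assets:
  corn_plant:
    type: model
    filepath: models/corn_plant.blend
```

After saving the file, `syclops -c` crawls the project and adds the library to the asset catalog.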

### Q: I'm getting an error message saying an asset file is missing. What should I do?

A: Check that the file paths in your `assets.yaml` file are correct and that the files exist in the specified locations. If you've recently added or moved assets, make sure to run `syclops -c` to update the asset catalog.

## Job Configuration

### Q: My job configuration isn't working as expected. How can I debug it?

A: You can use the `-d` flag to enable debugging mode in Syclops. Use `-d scene` to open the scene in Blender for visual debugging, or `-d blender-code` and `-d pipeline-code` to debug the Blender and pipeline code, respectively. For more information, see the [Debugging documentation](/developement/debugging).

### Q: How do I use dynamic evaluators in my job configuration?

A: Dynamic evaluators allow you to randomize parameter values for each frame in your scene. To use them, replace a fixed value in your job configuration with a dynamic evaluator expression, such as `uniform: [0, 1]` for a uniform random value between 0 and 1. For more examples, see the [Dynamic Evaluators documentation](/usage/job_description/dynamic_evaluators).
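As a sketch, swapping a fixed parameter for a dynamic evaluator could look like this (the `intensity` key is just an illustrative parameter, not a specific Syclops setting):

```yaml
# Fixed value: the same for every frame
intensity: 0.5

# Dynamic evaluator: re-sampled uniformly from [0, 1] each frame
intensity:
  uniform: [0, 1]
```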

## Rendering

### Q: My renders are taking a long time. How can I speed them up?

A: To speed up rendering, you can try reducing the number of samples per pixel in your sensor configuration, or using a lower resolution for your output images. You can also make sure you're using GPU rendering if you have a compatible graphics card. For more tips, see the [Sensor Configuration documentation](/usage/job_description/sensor_configuration).

### Q: I'm getting artifacts or noise in my rendered images. What can I do?

A: Increase the number of samples per pixel in your sensor configuration to reduce noise and artifacts. You can also try enabling denoising in your job configuration by setting `denoising_enabled: True` and choosing an appropriate denoising algorithm, such as `OPTIX` or `OPENIMAGEDENOISE`.
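Based on the keys read by the sensor output code in this commit, the denoising settings in the job configuration look like this:

```yaml
denoising_enabled: True
denoising_algorithm: "OPENIMAGEDENOISE"  # or "OPTIX" on supported NVIDIA GPUs
```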

## Postprocessing

### Q: How do I create custom postprocessing plugins?

A: To create a custom postprocessing plugin, define a new Python class that inherits from `PostprocessorInterface` and implement the required methods, such as `process_step` and `process_all_steps`. Then, register your plugin in the `pyproject.toml` file under the `[project.entry-points."syclops.postprocessing"]` section. For more details, see the [Postprocessing documentation](/usage/job_description/config_descriptions/postprocessing).
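As a sketch of the plugin shape, the snippet below defines a stand-in base class so the example is self-contained; the real `PostprocessorInterface` lives in the syclops package and its constructor and method signatures may differ:

```python
from abc import ABC, abstractmethod


class PostprocessorInterface(ABC):
    """Stand-in for syclops' real interface (signatures assumed)."""

    def __init__(self, config: dict):
        self.config = config

    @abstractmethod
    def process_step(self, step: int) -> None:
        """Process the outputs of a single step/frame."""

    @abstractmethod
    def process_all_steps(self) -> None:
        """Process every step of the job."""


class StepLogger(PostprocessorInterface):
    """Toy plugin: records which steps were processed."""

    def __init__(self, config: dict):
        super().__init__(config)
        self.processed: list[int] = []

    def process_step(self, step: int) -> None:
        self.processed.append(step)

    def process_all_steps(self) -> None:
        for step in range(self.config.get("steps", 0)):
            self.process_step(step)
```

The plugin is then registered in `pyproject.toml` under `[project.entry-points."syclops.postprocessing"]`, following the same pattern as the `syclops_postprocessing_bounding_boxes` entry shown elsewhere in this commit.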

If you have any other questions or issues that aren't covered here, please open an issue on the [GitHub repository](https://github.com/DFKI-NI/syclops/issues) or reach out to the Syclops community for help.
@@ -0,0 +1,39 @@
# Structured Light Output Documentation

The Structured Light Output generates structured light patterns projected onto the scene, which can be used for 3D reconstruction and depth estimation.

![Dot projected image with stereo reconstruction on the right](/img/docs/dot_projection.png)

## Configuration Parameters

The following table describes each configuration parameter for the Structured Light Output:

| Parameter | Type | Description | Requirement |
|-----------|------|-------------|-------------|
| `id` | string | A unique identifier for the output. | **Required** |
| `frame_id` | string | The ID of the transformation frame to which the structured light projector is attached. | **Required** |
| `intensity` | float | The intensity of the projected light pattern. | **Required** |
| `scale` | float | The scale of the light pattern, controlling the density of the dots. | **Required** |
| `samples` | integer | The number of samples per pixel for rendering the structured light image. Higher values result in better quality but slower rendering. | **Required** |
| `debug_breakpoint` | boolean | If set to `true` and the [scene debugging](/developement/debugging/#visually-debug-a-job-file) is active, the rendering process will pause and open Blender before proceeding. | Optional |

## Example Configuration

```yaml
syclops_output_structured_light:
- id: main_cam_structured_light
frame_id: "camera_link"
intensity: 10000
scale: 60
samples: 256
debug_breakpoint: True
```
In this example, a structured light output is configured with the identifier `main_cam_structured_light`. The light projector is attached to the `camera_link` transformation frame. The intensity of the projected pattern is set to 10000, and the scale is set to 60. The image will be rendered with 256 samples per pixel. If scene debugging is active, the rendering process will pause and open Blender before proceeding.

## Output Format

The structured light output is saved as a grayscale PNG image for each frame, with the projected dot pattern visible on the scene objects. The images are stored in the `<sensor_name>_annotations/structured_light/` folder.
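The rendering code in this commit writes each frame with a zero-padded four-digit name (e.g. `0007.png`). A small helper to resolve a frame's expected path (the function name and `output_root` parameter are ours, the folder layout comes from the docs above):

```python
from pathlib import Path


def structured_light_path(output_root: str, sensor_name: str, frame: int) -> Path:
    """Build the expected path of a structured-light frame.

    Mirrors the commit's naming: the frame index is left-padded to
    4 digits and stored under <sensor_name>_annotations/structured_light/.
    """
    filename = str(frame).rjust(4, "0") + ".png"
    return Path(output_root) / f"{sensor_name}_annotations" / "structured_light" / filename
```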
Structured light output from multiple cameras can be combined to simulate stereo vision and generate depth maps for 3D reconstruction.
2 changes: 2 additions & 0 deletions docs/mkdocs.yml
@@ -65,9 +65,11 @@ nav:
- Pixel Annotation: usage/job_description/config_descriptions/pixel_annotation.md
- Object Position: usage/job_description/config_descriptions/object_position.md
- Keypoint Annotation: usage/job_description/config_descriptions/keypoint_annotation.md
- Structured Light: usage/job_description/config_descriptions/structured_light.md
- Post Processing:
- Bounding Boxes: usage/job_description/config_descriptions/bounding_box.md
- Assets: usage/assets/assets.md
- FAQ: usage/faq.md
- Developement:
- Architecture: developement/architecture.md
- Add Functionality:
3 changes: 2 additions & 1 deletion pyproject.toml
@@ -12,7 +12,7 @@ readme = "README.md"
requires-python = ">=3.8"
license = {text = "GPLv3"}

version = "1.0.0"
version = "1.1.0"

dynamic = ["dependencies"]

@@ -31,6 +31,7 @@ syclops_output_rgb = "syclops.blender.sensor_outputs.rgb:RGB"
syclops_output_keypoints = "syclops.blender.sensor_outputs.keypoints:Keypoints"
syclops_output_pixel_annotation = "syclops.blender.sensor_outputs.pixel_annotation:PixelAnnotation"
syclops_output_object_positions = "syclops.blender.sensor_outputs.object_positions:ObjectPositions"
syclops_output_structured_light = "syclops.blender.sensor_outputs.structured_light:StructuredLight"
[project.entry-points."syclops.postprocessing"]
syclops_postprocessing_bounding_boxes = "syclops.postprocessing.bounding_boxes:BoundingBoxes"

8 changes: 7 additions & 1 deletion syclops/__example_assets__/test_job.syclops.yaml
@@ -120,7 +120,7 @@ sensor:
duration: 0.03 # s (Scanline "exposure" time)
outputs:
syclops_output_rgb:
- samples: 16
- samples: 4
compositor:
chromatic_aberration: 0.007 #Strong aberration can cause shift between ground truth and rgb
bloom:
@@ -141,6 +141,12 @@ sensor:
id: main_cam_depth
object_volume:
id: main_cam_object_volume
syclops_output_structured_light:
- id: main_cam_structured_light
frame_id: "camera_link"
intensity: 10000
scale: 200
samples: 4

postprocessing:
syclops_postprocessing_bounding_boxes:
189 changes: 189 additions & 0 deletions syclops/blender/sensor_outputs/structured_light.py
@@ -0,0 +1,189 @@
import logging

import bpy
from syclops import utility
from syclops.blender.sensor_outputs.output_interface import OutputInterface

# META DESCRIPTION
meta_description = {
"type": "STRUCTURED_LIGHT",
"format": "PNG",
    "description": "Grayscale images with structured light patterns from the camera",
}


class StructuredLight(OutputInterface):
"""Generate structured light images"""

def generate_output(self, parent_class: object = None):
with utility.RevertAfter():
utility.configure_render()

job_description = utility.get_job_conf()
bpy.context.scene.cycles.use_denoising = job_description[
"denoising_enabled"
]
try:
bpy.context.scene.cycles.denoiser = job_description[
"denoising_algorithm"
]
except TypeError as e:
logging.error(
f"Could not set denoiser to {job_description['denoising_algorithm']}. Try 'OPENIMAGEDENOISE'."
)
raise e
bpy.context.scene.cycles.samples = self.config["samples"]

if "image_compression" in job_description:
bpy.data.scenes["Scene"].render.image_settings.compression = (
job_description["image_compression"]
)

bpy.context.scene.render.image_settings.color_mode = "RGB"

# Get the camera
cam_name = bpy.context.scene.camera["name"]
# Create subfolders
output_path = self._prepare_output_folder(cam_name)
# Set filename
curr_frame = bpy.context.scene.frame_current

file_string = str(curr_frame).rjust(4, "0") + ".png"
output_path = utility.append_output_path(output_path)
utility.append_output_path(file_string)
self.compositor()

# Turn off all lights
self.turn_off_all_lights()
# Add a spotlight with nodes
self.add_spotlight_with_nodes()

self.check_debug_breakpoint()
logging.info(f"Rendering Structured Light for sensor {cam_name}")
bpy.ops.render.render(write_still=True)

with utility.AtomicYAMLWriter(str(output_path / "metadata.yaml")) as writer:
# Add metadata
writer.data.update(meta_description)
# Add current step
writer.add_step(
step=curr_frame,
step_dicts=[{"type": "STRUCTURED_LIGHT", "path": str(file_string)}],
)
# Add expected steps
writer.data["expected_steps"] = job_description["steps"]
writer.data["sensor"] = cam_name
writer.data["id"] = self.config["id"]
logging.info("Structured Light output for sensor %s", cam_name)

def turn_off_all_lights(self):
# Turn off all lamps
for light in bpy.data.lights:
light.energy = 0

# Turn off all emission nodes in all materials
for material in bpy.data.materials:
if material.use_nodes:
for node in material.node_tree.nodes:
if node.type == "EMISSION":
node.inputs["Strength"].default_value = 0

# Set the lighting strength of the environment to 0
if bpy.context.scene.world:
bpy.context.scene.world.node_tree.nodes["Background"].inputs[
"Strength"
].default_value = 0

def add_spotlight_with_nodes(self):
"""Add a spotlight with nodes, that will generate a random dot pattern"""

collection = utility.create_collection(self.config["id"])
utility.set_active_collection(collection)
# Create a new spotlight
bpy.ops.object.light_add(type="SPOT")
spotlight = bpy.context.object
spotlight.data.energy = self.config["intensity"]
spotlight.data.spot_size = 3.14159 # Set cone angle to 180 degrees (in radians)

# Setting frame_id as parent
frame_id = self.config["frame_id"]
parent_frame_id = bpy.data.objects[frame_id]
spotlight.parent = parent_frame_id

# Enable use_nodes for this spotlight
spotlight.data.use_nodes = True
nodes = spotlight.data.node_tree.nodes

# Clear existing nodes
for node in nodes:
nodes.remove(node)

# Create the required nodes and recreate the structure
texture_coordinate_node = nodes.new("ShaderNodeTexCoord")
separate_xyz_node = nodes.new("ShaderNodeSeparateXYZ")
divide_node = nodes.new("ShaderNodeVectorMath")
divide_node.operation = "DIVIDE"
mapping_node = nodes.new("ShaderNodeMapping")
voronoi_texture_node = nodes.new("ShaderNodeTexVoronoi")
voronoi_texture_node.feature = "F1"
voronoi_texture_node.distance = "EUCLIDEAN"
voronoi_texture_node.voronoi_dimensions = "2D"
color_ramp_node = nodes.new("ShaderNodeValToRGB")
color_ramp_node.color_ramp.interpolation = "EASE"
emission_node = nodes.new("ShaderNodeEmission")
light_output_node = nodes.new("ShaderNodeOutputLight")

# Link nodes together as per the provided structure
links = spotlight.data.node_tree.links
links.new(
texture_coordinate_node.outputs["Normal"],
separate_xyz_node.inputs["Vector"],
)
links.new(texture_coordinate_node.outputs["Normal"], divide_node.inputs[0])
links.new(separate_xyz_node.outputs["Z"], divide_node.inputs[1])
links.new(divide_node.outputs[0], mapping_node.inputs["Vector"])
links.new(mapping_node.outputs["Vector"], voronoi_texture_node.inputs["Vector"])
links.new(
voronoi_texture_node.outputs["Distance"], color_ramp_node.inputs["Fac"]
)
links.new(color_ramp_node.outputs["Color"], emission_node.inputs["Strength"])
links.new(
emission_node.outputs["Emission"], light_output_node.inputs["Surface"]
)

# Set specific values for nodes based on the image
voronoi_texture_node.inputs["Scale"].default_value = self.config["scale"]
voronoi_texture_node.inputs["Randomness"].default_value = 1.0
color_ramp_node.color_ramp.elements[1].position = 0.3
color_ramp_node.color_ramp.elements[1].color = (0, 0, 0, 1)
color_ramp_node.color_ramp.elements[0].color = (1, 1, 1, 1)

def compositor(self):
bpy.context.scene.use_nodes = True
tree = bpy.context.scene.node_tree

# Clear existing nodes
for node in tree.nodes:
tree.nodes.remove(node)

# Create input node
render_layers_node = tree.nodes.new(type="CompositorNodeRLayers")

# Create RGB to BW node
rgb_to_bw_node = tree.nodes.new(type="CompositorNodeRGBToBW")

# Create output node
composite_node = tree.nodes.new(type="CompositorNodeComposite")

# Link nodes
links = tree.links
link = links.new(render_layers_node.outputs["Image"], rgb_to_bw_node.inputs["Image"])
link = links.new(rgb_to_bw_node.outputs["Val"], composite_node.inputs["Image"])

def _prepare_output_folder(self, sensor_name):
"""Prepare the output folder and return its path."""
output_folder = utility.append_output_path(
f"{sensor_name}_annotations/structured_light/"
)
utility.create_folder(output_folder)
return output_folder
