Orthorectification with OptiX
Signed-off-by: Julien Jomier <[email protected]>

Co-authored-by: Brent Bartlett <[email protected]>
jjomier and bbartlett-nv committed Sep 11, 2023
1 parent 483400e commit 5c5da21
Showing 11 changed files with 1,686 additions and 0 deletions.
59 changes: 59 additions & 0 deletions applications/orthorectification_with_optix/Dockerfile
@@ -0,0 +1,59 @@
# syntax = docker/dockerfile:1.2

ARG FROM_IMAGE="nvcr.io/nvidia/clara-holoscan/holoscan:v0.6.0-dgpu"

# ============ Stage: base ============
FROM ${FROM_IMAGE} AS base

# Configure the base conda environment
ARG CONDA_ENV_NAME=ortho_holoscan
ARG PYTHON_VER=3.8

# Install miniconda
ENV CONDA_DIR /opt/conda
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh && \
/bin/bash ~/miniconda.sh -b -p /opt/conda

# Install libarchive
RUN apt update && DEBIAN_FRONTEND=noninteractive apt install -y libarchive13

# Put conda in path so we can use conda activate
ENV PATH=$CONDA_DIR/bin:$PATH
RUN conda init bash
RUN ln -sf /bin/bash /bin/sh

# Install mamba to speed up the dependency solve
RUN conda config --set ssl_verify false &&\
conda config --add pkgs_dirs /opt/conda/pkgs &&\
conda config --env --add channels conda-forge &&\
/opt/conda/bin/conda install -y -n base -c conda-forge "mamba >=0.22" "boa >=0.10" python=${PYTHON_VER}

# Create a base environment
RUN --mount=type=cache,id=conda_pkgs,target=/opt/conda/pkgs,sharing=locked \
# Create the environment and install as few dependencies as possible
CONDA_ALWAYS_YES=true /opt/conda/bin/mamba create -n ${CONDA_ENV_NAME} -c conda-forge python=${PYTHON_VER}

RUN source activate ${CONDA_ENV_NAME} && mamba install -c conda-forge -y numpy cupy pillow pytest gdal shapely && \
pip install pynvrtc tqdm opencv-python odm_report_shot_coverage

# The build command must be run from this application directory, so the build context contains the extracted OptiX SDK
WORKDIR /work
COPY ./NVIDIA-OptiX-SDK-7.4.0-linux64-x86_64 /work/NVIDIA-OptiX-SDK-7.4.0-linux64-x86_64

RUN git clone https://github.com/NVIDIA/optix-toolkit.git
WORKDIR /work/optix-toolkit
RUN git checkout v0.8.1
RUN git submodule update --init --recursive PyOptiX
WORKDIR /work/optix-toolkit/PyOptiX/optix

RUN source activate ${CONDA_ENV_NAME} &&\
export PYOPTIX_CMAKE_ARGS="-DOptiX_INSTALL_DIR=/work/NVIDIA-OptiX-SDK-7.4.0-linux64-x86_64" &&\
export PYOPTIX_STDDEF_DIR="/usr/include/linux" &&\
python setup.py install
WORKDIR /work

# now install holoscan python bindings
RUN source activate ${CONDA_ENV_NAME} && pip install holoscan
WORKDIR /work

RUN echo "conda activate ${CONDA_ENV_NAME}" >> ~/.bashrc
56 changes: 56 additions & 0 deletions applications/orthorectification_with_optix/README.md
@@ -0,0 +1,56 @@
# HoloHub Orthorectification Application

This application is an example of using the NVIDIA OptiX SDK via the PyOptiX bindings to create per-frame orthorectified imagery. In this example, one can create a visualization of mapping frames from a drone mapping mission processed with [Open Drone Map](https://opendronemap.org/). A typical output of a mapping mission is a single merged mosaic. While this product is useful for GIS applications, it is difficult to apply algorithms to such a large single image without incurring additional steps like image chipping. Additionally, the mosaic process introduces image artifacts which can negatively impact algorithm performance.

Since this Holoscan pipeline processes each frame individually, it opens the door to applying an algorithm to the original, unmodified imagery and then mapping the result. If custom image processing is desired, insert custom operators before the Ray Trace Ortho operator in the application flow.
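As a sketch of what such a pre-orthorectification step might look like, the function below applies a simple NumPy unsharp mask to a raw sensor frame. The function name and the sharpening itself are hypothetical illustrations, not part of this application; in practice a routine like this would be wrapped in a custom Holoscan operator placed ahead of the Ray Trace Ortho operator.

```python
import numpy as np

def sharpen_frame(frame: np.ndarray) -> np.ndarray:
    """Hypothetical per-frame step: unsharp masking on an HxWx3 uint8 image."""
    img = frame.astype(np.float32)
    # 3x3 box blur built from shifted sums (reflect padding at the borders)
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="reflect")
    blurred = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= 9.0
    # Unsharp mask: boost the difference between the original and the blur
    sharp = img + 1.5 * (img - blurred)
    return np.clip(sharp, 0, 255).astype(np.uint8)
```

Because the frame has not yet been warped onto the map grid, such a filter operates in the sensor's native pixel geometry rather than on resampled mosaic pixels.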


![](docs/odm_ortho_pipeline.png)<br>
Fig. 1 Orthorectification sample application workflow

Steps for running the application:

a) Download and Prep the ODM Dataset<br>
1. Download the [Lafayette Square Dataset](https://www.opendronemap.org/odm/datasets/) and place into ~/Data.

2. Process the dataset with ODM via docker command: <br>
```docker run -ti --rm -v ~/Data/lafayette_square:/datasets/code opendronemap/odm --project-path /datasets --camera-lens perspective --dsm```

If you run out of memory, add the following argument to reduce memory usage: ```--feature-quality medium```

b) Clone HoloHub and navigate to this application directory

c) Download [OptiX SDK 7.4.0](https://developer.nvidia.com/optix/downloads/7.4.0/linux64-x86_64) and extract the package in the same directory as the source code
(i.e. applications/orthorectification_with_optix).

d) Build the development container <br>
1. ```DOCKER_BUILDKIT=1 docker build -t holohub-ortho-optix:latest .```

You can now run the Docker container: <br>
1. ```xhost +local:docker```
2. ```nvidia_icd_json=$(find /usr/share /etc -path '*/vulkan/icd.d/nvidia_icd.json' -type f 2>/dev/null | grep .) || (echo "nvidia_icd.json not found" >&2 && false)```
3. ```docker run -it --rm --net host --runtime=nvidia -v ~/Data:/root/Data -v .:/work/ -v /tmp/.X11-unix:/tmp/.X11-unix -v $nvidia_icd_json:$nvidia_icd_json:ro -e NVIDIA_DRIVER_CAPABILITIES=graphics,video,compute,utility,display -e DISPLAY=$DISPLAY holohub-ortho-optix```

Finish prepping the input data: <br>
1. ```gdal_translate -tr 0.25 0.25 -r cubic ~/Data/lafayette_square/odm_dem/dsm.tif ~/Data/lafayette_square/odm_dem/dsm_small.tif```
2. ```gdal_fillnodata.py -md 0 ~/Data/lafayette_square/odm_dem/dsm_small.tif ~/Data/lafayette_square/odm_dem/dsm_small_filled.tif```

Finally, run the application: <br>
1. ```python ./python/ortho_with_pyoptix.py```

You can modify the application's settings in the file `ortho_with_pyoptix.py`:

```
sensor_resize = 0.25   # resize factor applied to the raw sensor pixels
ncpu = 8               # number of cores used to load the sensor simulation
gsd = 0.25             # ground sample distance; controls how many pixels are rendered
iterations = 425       # number of frames to render from the source images (425 is the maximum for this dataset)
use_mosaic_bbox = True # render to static bounds on the ground as defined by the DEM
write_geotiff = False  # optionally write each rendered frame to a GeoTIFF
nb = 3                 # number of bands to write to the GeoTIFF
render_scale = 0.5     # scale the visualization window up or down
fps = 8.0              # rate-limit the simulated sensor feed to this many frames per second
```
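Among these settings, `gsd` (ground sample distance) determines the rendered raster size: the pixel count follows from the ground extent covered by the DEM divided by the GSD. A minimal sketch of that relationship, where the extent values are invented for illustration and the real bounds come from the DSM GeoTIFF:

```python
import math

def render_raster_size(xmin, ymin, xmax, ymax, gsd):
    """Pixels needed to cover a ground extent (meters) at a given GSD (meters/pixel)."""
    width = math.ceil((xmax - xmin) / gsd)
    height = math.ceil((ymax - ymin) / gsd)
    return width, height

# A 500 m x 300 m scene rendered at 0.25 m/pixel
print(render_raster_size(0.0, 0.0, 500.0, 300.0, 0.25))  # -> (2000, 1200)
```

Halving `gsd` doubles each raster dimension, so render time and memory grow quadratically as the GSD shrinks.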

![](docs/holohub_ortho_app.gif)<br>
Fig. 2 Running the orthorectification sample application
152 changes: 152 additions & 0 deletions applications/orthorectification_with_optix/cpp/src/optix/optixOrtho.cu
@@ -0,0 +1,152 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: Apache-2.0
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

#include <optix.h>
#include <builtin_types.h>
#include <optix_device.h>

#include "optixOrtho.h"
#include <cuda/helpers.h>

#include <sutil/vec_math.h>

extern "C" {
__constant__ Params params;
}

static __forceinline__ __device__ void setPayload(float3 p) {
optixSetPayload_0(float_as_int(p.x));
optixSetPayload_1(float_as_int(p.y));
optixSetPayload_2(float_as_int(p.z));
}

static __forceinline__ __device__ float2
computeSensorUV(const float3 hit_point) {
const float3 origin_to_hit_point =
hit_point - params.sensor_focal_plane_origin;
const float dist_x = length(cross(params.sensor_up, origin_to_hit_point));
const float dist_y = length(cross(params.sensor_right, origin_to_hit_point));
return make_float2(dist_x, dist_y) / params.sensor_focal_plane_size;
}

static __device__ __inline__ float3 traceSensorRay(float3 ray_origin,
float3 ray_direction) {
unsigned int p0 = 0, p1 = 0, p2 = 0;
optixTrace(params.handle, ray_origin, ray_direction,
0.0f, // Min intersection distance
1e16f, // Max intersection distance
0.0f, // rayTime -- used for motion blur
OptixVisibilityMask(255), // Specify always visible
OPTIX_RAY_FLAG_NONE,
0, // SBT offset -- See SBT discussion
1, // SBT stride -- See SBT discussion
0, // missSBTIndex -- See SBT discussion
p0, p1, p2);

const float3 sensor_col =
make_float3(int_as_float(p0), int_as_float(p1), int_as_float(p2));
return sensor_col;
}

extern "C" __global__ void __raygen__rg() {
// Lookup our location within the launch grid
const uint3 idx = optixGetLaunchIndex();
const uint3 dim = optixGetLaunchDimensions();

// for nearest
const float3 ray_origin = make_float3(
idx.x * params.image_gsd + params.image_corner_coords.x,
idx.y * params.image_gsd + params.image_corner_coords.y, -10.f);

const float3 ray_direction = make_float3(0.f, 0.f, 1.f);

// Trace the ray against our scene hierarchy
unsigned int p0, p1, p2;
optixTrace(params.handle, ray_origin, ray_direction,
0.0f, // Min intersection distance
1e16f, // Max intersection distance
0.0f, // rayTime -- used for motion blur
OptixVisibilityMask(255), // Specify always visible
OPTIX_RAY_FLAG_NONE,
0, // SBT offset -- See SBT discussion
1, // SBT stride -- See SBT discussion
0, // missSBTIndex -- See SBT discussion
p0, p1, p2);

if ((p0 > 0) && (p1 > 0) && (p2 > 0)) {
const float3 result =
make_float3(int_as_float(p0), int_as_float(p1), int_as_float(p2));

const uchar4 clr = make_color(result);
// Record results in the output raster
params.image[idx.y * params.image_width + idx.x] = clr;
}
}

extern "C" __global__ void __miss__ms() {
const float3 sensor_col = make_float3(0., 0., 0.);
setPayload(sensor_col);
}

extern "C" __global__ void __closesthit__terrain_ch() {
// When built-in triangle intersection is used, a number of fundamental
  // attributes are provided by the OptiX API, including barycentric
// coordinates.
const float3 ray_orig = optixGetWorldRayOrigin();
const float3 ray_dir = optixGetWorldRayDirection(); // incident direction
const float ray_t = optixGetRayTmax();

// Lookup our location within the launch grid
const uint3 idx = optixGetLaunchIndex();
const uint3 dim = optixGetLaunchDimensions();

const float3 hit_point = ray_orig + ray_t * ray_dir;
const int index = idx.y * params.image_width + idx.x;

if (hit_point.z < params.terrain_zmax) { // We hit terrain, cast ray to sensor
if (ray_orig.z > 0.f) {
const float3 sensor_col = make_float3(0., 0., 0.);
setPayload(sensor_col);
} else {
// // if you want to cast from sensor back to terrain
// const float3 to_terrain = normalize(params.sensor_pos - hit_point);
// const float3 sensor_col = traceSensorRay(params.sensor_pos,
// to_terrain);

// from terrain to sensor
const float3 to_sensor = normalize(params.sensor_pos - hit_point);
const float3 sensor_col =
traceSensorRay(hit_point + 0.01f * to_sensor, to_sensor);

setPayload(sensor_col);
}
} else { // We hit the sensor plane
const float2 sensor_uv = computeSensorUV(hit_point);
// for nearest lookup
const uchar4 sensor_rgba =
tex2D<uchar4>(params.sensor_tex, sensor_uv.x, sensor_uv.y);
const float3 sensor_col =
make_float3(sensor_rgba.x, sensor_rgba.y, sensor_rgba.z) / 255.;

// // for linear lookup
// const float4 sensor_rgba = tex2D<float4>(params.sensor_tex, sensor_uv.x,
// sensor_uv.y); const float3 sensor_col = make_float3(sensor_rgba.x,
// sensor_rgba.y, sensor_rgba.z);

setPayload(sensor_col);
}
}
@@ -0,0 +1,45 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: Apache-2.0
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

struct Params {
OptixTraversableHandle handle;
cudaTextureObject_t sensor_tex;
uchar4 *image;
unsigned int image_width;
unsigned int image_height;
float2 image_corner_coords;
float image_gsd;
float sensor_focal_length;
float terrain_zmax;
float3 sensor_pos;
float3 sensor_up;
float3 sensor_right;
float3 sensor_focal_plane_origin;
float2 sensor_focal_plane_size;
};

struct RayGenData {
// No data needed
};

struct MissData {
float3 bg_color;
};

struct HitGroupData {
// No data needed
};
39 changes: 39 additions & 0 deletions applications/orthorectification_with_optix/python/metadata.json
@@ -0,0 +1,39 @@
{
"application": {
"name": "Orthorectification with OptiX",
"authors": [
{
"name": "Brent Bartlett",
"affiliation": "NVIDIA"
}
],
"language": "Python",
"version": "1.0",
"changelog": {
"1.0": "Initial Release"
},
"holoscan_sdk": {
"minimum_required_version": "0.6.0",
"tested_versions": [
"0.6.0"
]
},
"platforms": [
"amd64"
],
"tags": [
"Orthorectification",
"Drone",
"OptiX"
],
"ranking": 4,
"dependencies": {
"OptiX-SDK": {
"version": "7.4.0"
},
"OptiX-Toolkit": {
"version": "0.8.1"
}
}
}
}