
[WIP] #220 calibration #324

Draft: wants to merge 95 commits into master

Conversation

@ErichZimmer (Contributor) commented Jun 25, 2024

Preface

This rather large pull request adds a geometric camera calibration module that can be used for de-warping planar PIV images and for calibrating stereo-PIV and tomo-PIV experiments. With over 25 new functions and three of the most common camera models, nearly any PIV experiment can be calibrated before further processing and reconstruction. This pull request contains the following additions (a standalone sketch of the underlying projection math follows the list):

  • calibration (namespace module)
    • DLT camera model
      • forward projection
      • backward projection
      • analytical two-camera 3D point triangulation
      • least-squares multi-camera 3D point triangulation
    • Pinhole camera model (with Brown and polynomial distortion models)
      • forward projection
      • backward projection
      • analytical two-camera 3D point triangulation
      • least-squares multi-camera 3D point triangulation
    • Soloff polynomial camera model
      • forward projection
      • backward projection
      • iterative robust least-squares multi-camera 3D point triangulation
    • marker detection
      • correlated template detection
      • labeled blob detection
    • marker grid generation and matching
      • create rectangular symmetric and asymmetric grids
      • match image points to world points using DLT homographies
      • match image points to world points using camera projection estimates
    • epipolar utilities
      • plot epipolar lines
    • reprojection error utilities
      • get rms error of residuals
      • get reprojection error of camera models
      • get line-of-sight (LOS) error of camera models
    • mapping functions for de-warping planar PIV images
      • get de-warping mapping and pixel-to-world scale ratio
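
To make the projection terminology above concrete, here is a minimal, self-contained NumPy sketch of what pinhole "forward projection" (world points to image points) means mathematically. It is not the module's API: the intrinsic matrix, extrinsics, and the `project` helper below are hypothetical values chosen purely for illustration, and lens distortion is omitted.

```python
import numpy as np

# Standalone illustration of pinhole forward projection (world -> image).
# This is NOT the calibration module's API; it only shows the underlying math.

# Hypothetical intrinsics: 1000 px focal length, principal point at (512, 512).
K = np.array([[1000.0,    0.0, 512.0],
              [   0.0, 1000.0, 512.0],
              [   0.0,    0.0,   1.0]])

# Hypothetical extrinsics: no rotation, camera translated 1 m along z.
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])

def project(world_points):
    """Project Nx3 world points to Nx2 pixel coordinates (no distortion)."""
    cam = world_points @ R.T + t      # world frame -> camera frame
    img = cam @ K.T                   # apply intrinsics (homogeneous pixels)
    return img[:, :2] / img[:, 2:3]   # perspective divide

pts = np.array([[0.0, 0.0, 1.0],
                [0.1, 0.0, 1.0]])
print(project(pts))  # [[512. 512.]
                     #  [562. 512.]]
```

Roughly speaking, backward projection inverts this mapping for a known object plane or depth, while the DLT and Soloff models replace K, R, and t with a single direct linear transform or polynomial fit, respectively.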

Along with all these new functions, a large number of tests have been added as well (more than 50 for the camera models alone). However, this module is still a work in progress and will remain so for weeks to months until the final product is considered production worthy.

Note

I added tomographic PIV test data to the openpiv/data/test7 directory. This portion of the dataset is around 40 MB.

Important

Documentation will have to be rebuilt once the pull request is merged and a new version is released.

How to test this

  • Clone fork
  • Run tests
  • Play around with calibration utilities

What issue does this PR address?

Closes #220

Additional Context

This module is primarily for 3D imaging systems such as

  • Stereo-PIV
  • Tomo-PIV
  • 3D PTV (mainly used for volume self calibration)

However, it can also be used for

  • De-warping image distortions
  • Aligning camera projection with planar light sheet (mainly for off-axis cameras)

Checklist

Additionally, here is a checklist of future tasks to be completed:

  • Debug image mapping and de-warping functions (something funny is going on)
  • Add changes to documentation generation to add new module
  • Fix relative import on test files (should be openpiv.calibration, not .calibration)
  • Fix examples on docstrings (they were originally from the draft phase)
  • Add an object-oriented wrapper over the procedural module
  • Minimize user parameters per function (in theory, minimizes user error and confusion)
  • Add more testing to calibration utilities
  • Optimize correlated template-based marker detection
  • Proof-read docstrings (I know there are a lot of mistakes)
  • Add de-warping capabilities to windef
  • Add an automated grid and image marker detection system
  • Add checkerboard detection
  • Add asymmetric grid detection capabilities to DLT homography-based matching function
  • Automate most of the calibration routines
  • Add volume self calibration (may be pushed towards the tomo-PIV pull request)
  • Add optical transfer functions (may be pushed towards the tomo-PIV pull request)
  • Test the calibration module on a synthetically generated 3D volume with 2 to 6 cameras (needed for tomo-PIV, also will test all projection functions for the camera models)
  • Move pull request out of draft

@ErichZimmer (Contributor, Author)

@alexlib
The new camera module is ready for your testing. I'm sure there are still a few hiccups here and there, though.

ErichZimmer requested review from alexlib and removed the request for alexlib on June 30, 2024 18:39
@alexlib (Member) commented Jun 30, 2024

@ErichZimmer all the tests pass (when I run them from the /openpiv/tests folder). What's missing is an example of how to use it. Do you want to Zoom and then figure it out together? I'd be happy to learn what you plan to use it for and how.

@ErichZimmer (Contributor, Author)

We could do a Zoom call in the future (I'll have to allocate time in advance so it won't be at some wacky hour). Additionally, I'll be adding some examples on my off days, along with working on the rest of the calibration utilities.

On the calibration utilities: these are still mostly in the draft phase; it is just the camera models that are mostly finished, after lots of planning and hand drawings. However, the calibration utilities and models are already enough to successfully perform MLOS-based tomo-PIV and other tomo-PIV reconstruction algorithms that do not utilize optical transfer functions (OTFs). This is mostly because OTFs require a basic PTV implementation (segmentation and particle matching); I'll be using either an advanced iterative particle reconstruction algorithm or epipolar matching to locate particles for the volume self-calibration (VSC) and OTF calibrations. VSC breaks down into two parts, a coarse and a fine algorithm. The coarse VSC algorithm uses a correlation-based approach to minimize large camera drifts (the literature reports drifts of up to 18 pixels), and the fine VSC uses a PTV-based algorithm with ghost-particle suppression to refine the calibration as much as possible. Ideally, this would allow a camera system to be calibrated with a greater tolerance towards camera misalignment (something I desperately need in my attic and backyard-shed lab).
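
As a purely conceptual illustration of the correlation-based coarse VSC step mentioned above (not the planned implementation), the sketch below estimates a global pixel drift by locating the peak of the cross-correlation between a recorded image and an image synthesized from reprojected particles; all names and values are hypothetical.

```python
import numpy as np
from scipy.signal import fftconvolve

# Conceptual sketch only: estimate a coarse camera drift (disparity) by
# cross-correlating a recorded image with a synthetic image rendered from
# reprojected particle positions. Not the planned OpenPIV implementation.

def coarse_disparity(recorded, synthetic):
    """Return the (dy, dx) shift that best aligns `synthetic` onto `recorded`."""
    a = recorded - recorded.mean()
    b = synthetic - synthetic.mean()
    corr = fftconvolve(a, b[::-1, ::-1], mode="same")   # cross-correlation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return np.asarray(peak) - center                    # drift in pixels

# Hypothetical check: a recorded image drifted by (5, -3) pixels.
rng = np.random.default_rng(0)
synthetic = rng.random((128, 128))
recorded = np.roll(synthetic, shift=(5, -3), axis=(0, 1))
print(coarse_disparity(recorded, synthetic))            # ~ [ 5 -3 ]
```

The fine stage described above would then refine residual disparities per particle (with ghost suppression) rather than as a single global shift.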

Oh, on a final note: I plan on incorporating wand-based calibration utilities and possibly refraction interfaces for the pinhole camera, based on OpenPTV. This is mostly for the extrinsic parameters, since Zhang's algorithm handles the calibration of the intrinsic parameters (except for distortion, which I just realized I forgot).

@ErichZimmer (Contributor, Author) commented Aug 23, 2024

@alexlib
Do we know how the AI source-code reviewer might handle a 10k+ line pull request (it should be no more than 15k lines of code by completion) for the camera calibration module, and an expected 4k+ line pull request for a tomo-PIV module? The diff shouldn't be too terrible, since most lines of code (>99.996%) would be new; the only existing file that should be modified is windef.py, where a few lines of code allow a pre-computed mapping array to be used in the image deformation process. If possible, I would like it in this draft pull request, but it appears that drafts may not be supported, since I do not see an option for it in the reviewers tab (although this contradicts the few articles I read on the internet).

Never mind; it looks like it only applies to pull requests that are capable of being merged.

@alexlib (Member) commented Aug 24, 2024

I have no idea. I suggest you try it in a separate branch, not in master.

@alexlib (Member) commented Aug 24, 2024

What is really needed here, @ErichZimmer, are tutorials: what the use cases for calibration are, how to use it, how to choose between the different models, and what the user gets at the end of the story (a calibrated stereo setup? how to use it with OpenPIV?), etc.
I'm here to help - we can discuss the various tasks and distribute them among us.

@ErichZimmer (Contributor, Author)

Indeed, documentation and notebooks are really important. In fact, I am currently working on a few notebooks that mainly cover the following:

  • Basics of camera calibration
  • How to image a calibration target
  • Calibrating a single camera
  • Calibrating multiple cameras
  • A very simple stereo camera setup as an example (not to be confused with stereo-PIV)

As I work on the notebooks, I am constantly refactoring the calibration module, since the only way to improve such a module is to actually use it. For instance, using Zhang's calibration method on a free-form calibration target allowed a simple stereo-vision system to be set up with which I can calculate the distance of an object in my room via triangulation. However, things have been slow for the past month or so, since I unfortunately caught a weird strain of SARS-CoV-2 in addition to taking some rather time-consuming university courses.
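
As an aside, here is a minimal, self-contained NumPy sketch of the kind of linear (DLT-style) two-view triangulation such a stereo setup relies on. It is independent of the module's API, and the projection matrices and point below are hypothetical values chosen only for illustration.

```python
import numpy as np

# Minimal sketch of linear two-view (DLT) triangulation: recover a 3D point
# from its pixel coordinates in two calibrated cameras with 3x4 projection
# matrices P1 and P2. Not the calibration module's API.

def triangulate(P1, P2, uv1, uv2):
    """Least-squares (SVD) triangulation of one point from two views."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                          # de-homogenize

# Hypothetical stereo pair: same intrinsics, second camera offset 0.2 m in x.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

X_true = np.array([0.05, -0.02, 1.5, 1.0])       # point 1.5 m in front of camera 1
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, uv1, uv2))             # ~ [ 0.05 -0.02  1.5 ]
```

With noisy detections, the same least-squares system simply returns the point that best satisfies both views.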

@alexlib (Member) commented Aug 25, 2024

Wish you a quick and full recovery. If I can help, please share the details with me.

Successfully merging this pull request may close: 3rd order image calibration.