eEcoLiDAR/laserchicken


Please cite the software if you are using it in a scientific publication.


laserchicken is a toolkit for handling point clouds created using airborne laser scanning (ALS). It finds neighboring points in your point cloud and computes feature values that describe them. Read our user manual and our (very modest) tutorial.

Installation

Prerequisites:

  • Python 3.7 or higher
  • pip

pip install laserchicken
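
Once installed, the typical workflow is: load a point cloud, find neighborhoods, compute features, export. Below is a minimal sketch, assuming the top-level functions shown in the project tutorial (load, build_volume, compute_neighborhoods, compute_features, export); exact names, signatures and available feature names can differ between laserchicken versions, so treat this as an illustration rather than a reference.

    # Sketch only; function names follow the laserchicken tutorial and may differ per version.
    from laserchicken import (load, export, build_volume,
                              compute_neighborhoods, compute_features)

    point_cloud = load('testdata/AHN3.las')        # example input path
    targets = point_cloud                          # compute features for every point
    volume = build_volume('sphere', radius=5)      # neighborhood definition
    neighborhoods = compute_neighborhoods(point_cloud, targets, volume)
    compute_features(point_cloud, neighborhoods, targets, ['mean_z', 'std_z'], volume)
    export(point_cloud, 'output.ply')              # the point cloud now carries the new features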

Necessary steps for making a new release

  • Check CITATION.cff, using the general DOI that covers all versions (optionally create the file via 'cffinit')
  • Create .zenodo.json file from CITATION.cff (using cffconvert)
    cffconvert --validate
    cffconvert --ignore-suspect-keys --outputformat zenodo --outfile .zenodo.json
  • Set new version number in laserchicken/_version.txt
  • Check that documentation uses the correct version
  • Edit Changelog (based on commits in https://github.com/eecolidar/laserchicken/compare/v0.3.2...master)
  • Test that the package can be installed with pip (pip install .)
  • Create Github release
  • Upload to pypi (now implemented via GitHub Actions):
    python setup.py sdist bdist_wheel
    python -m twine upload --repository-url https://upload.pypi.org/legacy/ dist/*
    (or python -m twine upload --repository-url https://test.pypi.org/legacy/ dist/* to test first)
  • Check doi on zenodo

Feature testing

All features were tested for the following general conditions:

  • Output consistent point clouds and do not crash on artificial data, real data, all-zero data (x, y or z), data without points, and data with a very low number of neighbors (0, 1 or 2)
  • Input should not be changed by the feature extractor (a minimal sketch of both checks follows below)
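
A minimal, self-contained sketch of this testing pattern, using a toy mean-height function in place of a real laserchicken extractor (everything below is illustrative, not project code):

    import numpy as np

    def toy_mean_z(z, neighborhood):
        """Toy stand-in for a feature extractor: mean height of the neighbors."""
        if len(neighborhood) == 0:
            return np.nan                      # degenerate neighborhood: return a value, do not crash
        return float(np.mean(z[neighborhood]))

    def test_toy_extractor_general_conditions():
        z = np.array([0.0, 1.0, 2.0, 3.0])
        z_before = z.copy()
        for neighborhood in ([], [0], [0, 1], [0, 1, 2, 3]):   # 0, 1, 2 and all neighbors
            toy_mean_z(z, neighborhood)                        # must not crash
        assert np.array_equal(z, z_before)                     # input left unchanged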

The specific features were tested as follows.

Echo ratio

A test was written with artificial data to check the calculation against a manually calculated ratio. The feature was also tested on real data to make sure it does not crash, without checking for correctness. We could add a correctness test with real data, but we would need both that data and a verified ground truth.
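
For reference, a sketch of the kind of calculation such a test checks, assuming the echo ratio is the percentage of neighbors inside a sphere relative to those inside the vertical cylinder with the same radius (that definition is an assumption, not stated in this README):

    import numpy as np

    def echo_ratio(xyz, center, radius):
        """Sphere count over cylinder count, as a percentage (assumed definition)."""
        xyz = np.asarray(xyz, dtype=float)
        center = np.asarray(center, dtype=float)
        d_xy = np.linalg.norm(xyz[:, :2] - center[:2], axis=1)   # horizontal distance
        d_xyz = np.linalg.norm(xyz - center, axis=1)             # 3D distance
        n_cylinder = np.count_nonzero(d_xy <= radius)            # infinite vertical cylinder
        n_sphere = np.count_nonzero(d_xyz <= radius)
        return 100.0 * n_sphere / n_cylinder if n_cylinder else np.nan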

Eigenvalues

Only sanity tests (l1 > l2 > l3) were done on real data and corner cases; there is no actual test for correctness. The code is very simple, though, and mainly calls numpy.linalg.eig.
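
A sketch of the underlying computation, assuming the eigenvalues are taken from the covariance matrix of the neighborhood coordinates (how laserchicken builds that matrix internally is an assumption here):

    import numpy as np

    def eigenvalues(xyz):
        """Eigenvalues of the 3x3 covariance matrix of the neighborhood, largest first."""
        cov = np.cov(np.asarray(xyz, dtype=float), rowvar=False)
        values = np.linalg.eigvals(cov).real     # the feature code mainly calls numpy.linalg.eig
        return np.sort(values)[::-1]             # sanity check: l1 >= l2 >= l3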

Height statistics (max_z, min_z, mean_z, median_z, std_z, var_z, coeff_var_z, skew_z, kurto_z)

Tested on real data for correctness. It is, however, unclear where the ground truth values come from. The code mainly calls numpy methods that do all the work already; the only calculations in our own code are:

range_z = max_z - min_z
coeff_var_z = np.std(z) / np.mean(z)

We are not aware of any package that provides the coefficient of variation out of the box; this is probably because the calculation is so simple.
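
A sketch of how the full set of height statistics could be computed with numpy and scipy; which calls laserchicken makes internally is not spelled out here, so treat the mapping as an assumption:

    import numpy as np
    from scipy import stats

    def height_statistics(z):
        z = np.asarray(z, dtype=float)
        return {
            'max_z': np.max(z),
            'min_z': np.min(z),
            'mean_z': np.mean(z),
            'median_z': np.median(z),
            'std_z': np.std(z),
            'var_z': np.var(z),
            'range_z': np.max(z) - np.min(z),          # computed directly, as above
            'coeff_var_z': np.std(z) / np.mean(z),     # computed directly, as above
            'skew_z': stats.skew(z),
            'kurto_z': stats.kurtosis(z),
        }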

Pulse penetration ratio

Tested for correctness using artificial data against manually calculated values. No comparison was made with other implementations.
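
As an illustration of the quantity under test, assuming the pulse penetration ratio is the fraction of neighborhood points classified as ground (ASPRS class 2); which classification codes laserchicken counts as ground is an assumption here:

    import numpy as np

    GROUND_CLASSES = (2,)   # ASPRS 'ground' code; the exact set is an assumption

    def pulse_penetration_ratio(classification):
        classification = np.asarray(classification)
        if classification.size == 0:
            return np.nan                              # no points: nothing to compute
        is_ground = np.isin(classification, GROUND_CLASSES)
        return np.count_nonzero(is_ground) / classification.size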

Sigma_z

Tested for correctness using artificial data against manually calculated values. No comparison was made with other implementations.
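
As an illustration only, assuming sigma_z is the standard deviation of the residuals of a least-squares plane fitted to the neighborhood (this definition is an assumption, not stated in this README):

    import numpy as np

    def sigma_z(xyz):
        xyz = np.asarray(xyz, dtype=float)
        x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
        design = np.column_stack([x, y, np.ones_like(x)])         # fit z ~ a*x + b*y + c
        coefficients, *_ = np.linalg.lstsq(design, z, rcond=None)
        residuals = z - design @ coefficients
        return float(np.std(residuals))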

Percentiles

Tested for correctness using a simple case with artificial data against manually calculated values.
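
The shape of such a test, using numpy's percentile as a stand-in (whether laserchicken interpolates percentiles exactly like numpy's default is an assumption):

    import numpy as np

    def test_percentile_against_manually_calculated_values():
        z = np.arange(1.0, 11.0)               # artificial data: 1, 2, ..., 10
        assert np.percentile(z, 50) == 5.5     # manually calculated median
        assert np.percentile(z, 100) == 10.0   # manually calculated maximum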

point_density

Tested for correctness on artificial data.
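
A sketch of the quantity, assuming point density is the number of points divided by the ground area of a cylindrical (or square) neighborhood; both the definition and the handling of other volume types are assumptions here:

    import numpy as np

    def point_density(n_points, radius):
        """Points per unit ground area for a cylindrical neighborhood (assumed definition)."""
        return n_points / (np.pi * radius ** 2)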