"HARDy: Handling Arbitrary Recognition of Data in python" A package to assist in discovery, research, and classification of YOUR data, no matter who you are!
Numerical and visual transformation of experimental data to improve its classification and cataloging
This project was part of the DIRECT Capstone Project at the University of Washington and was presented at the showcase; follow this link for the presentation.
The package HARDy has the following main dependencies:
- Python = 3.7
- Tensorflow = 2.0
The detailed list of dependencies is reflected in the `environment.yml` file.
The package HARDy can be installed using the following command:

```
conda install -c pozzorg hardy
```
Alternatively, you can install it from the GitHub repository using the following steps:

*Please note that v1.0 is currently the most stable release*

- In your terminal, run `git clone https://github.com/EISy-as-Py/hardy.git`
- Change into the hardy root directory by running `cd hardy`
- Run `git checkout v1.0`
- Run `python setup.py install`
- To check the installation, run `python -c "import hardy"` in your terminal
For other installation methods, such as using an environment file or pip, please visit the Installation page.
HARDy uses Keras for training convolutional neural networks and Keras Tuner for hyperparameter optimization. The flow of information is shown in the image below:
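As a rough illustration of the kind of model involved (not HARDy's internal API; the layer sizes and names here are purely illustrative), a minimal Keras image classifier might look like:

```python
# Minimal sketch of a Keras CNN classifier, similar in spirit to the models
# HARDy trains on transformed data images. All hyperparameters are examples.
from tensorflow.keras import layers, models


def build_cnn(input_shape=(64, 64, 3), n_classes=2):
    model = models.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In practice, HARDy builds and tunes such models from the `.yaml` configuration files described below rather than from hand-written code.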
An example Jupyter notebook that runs HARDy from a single script is available at this link: Example Notebook
To perform the various transformations, neural network training, and hyperparameter optimization, HARDy utilizes the following `.yaml` configuration files:
Instructions for modifying or writing your own configuration file can be accessed by clicking on the configuration files listed above.
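To give a flavor of the approach (the actual keys and file names are defined in HARDy's configuration guide; the ones below are hypothetical), a transformation configuration is plain YAML that can be parsed with PyYAML:

```python
# Hypothetical sketch of a HARDy-style .yaml configuration; the real key
# names are documented in the package's configuration instructions.
import yaml

config_text = """
transformations:
  - name: log_x
    apply_to: x
  - name: derivative
    apply_to: y
"""

config = yaml.safe_load(config_text)
print(len(config["transformations"]))  # 2
```

Keeping the run definition in configuration files means new transformation sets can be tried without touching the package code.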
The notebooks and documentation can also be accessed at this link: Documentations
In order to increase the density of data presented to the convolutional neural network and add a visual transformation of the data, we adopted a new plotting technique that takes advantage of how images are read by computers. Using color images, we encode the experimental data in the pixel values, using a different series for each image channel. The result is data-dense images, which are also pretty to look at.
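The idea can be sketched with NumPy: each data series, normalized to the 0-255 pixel range, fills one channel of an RGB image (the function name, series, and image size here are illustrative, not HARDy's actual implementation):

```python
import numpy as np


def series_to_rgb(series_r, series_g, series_b, size=64):
    """Encode three 1-D data series as the channels of one RGB image."""
    def normalize(s):
        s = np.asarray(s, dtype=float)
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)  # scale to [0, 1]
        return np.uint8(255 * s)                          # map to pixel values
    channels = [np.resize(normalize(s), (size, size))
                for s in (series_r, series_g, series_b)]
    return np.dstack(channels)  # shape (size, size, 3)


x = np.linspace(0, 6, 64 * 64)
img = series_to_rgb(np.sin(x), np.cos(x), x)
print(img.shape)  # (64, 64, 3)
```

Because each channel carries an independent series, one image can present three aligned measurements to the network at once.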
We have been commissioned by Professor Lilo Pozzo to create a new tool for research and discovery, for her lab and for high-throughput researchers everywhere. Our vision of the final product:
- A package which can approach any large, labeled dataset (such as those familiar to High Throughput Screening (HTS) researchers).
- Perform a (procedurally generated and data-guided) wide array of transformations on the data, producing completely novel ways of examining it: perhaps not human-readable, but certainly machine-readable.
- Train "A Machine Learning Algorithm" (We currently focus on Visual-Processing CNNs but are open to anything!) to classify the existing labled data based on each of the aforementioned transformations.
- Report back to the user:
- Which versions of the Model/Algorithm worked best?
- Which transformations appeared the most useful? (AKA were used across many of the most successful models)
- What Data "Fingerprints" should we pay the most attention to?
- Present a user interface, to allow non-programmers to interact with and use the chosen classifier(s) in their work.
The package is designed to deal with a diverse set of labeled data. These are some of the use cases we see benefiting from the HARDy package.
- handling.py : Functions related to configuration, importing/exporting, and other sorts of back-end useful tasks.
- arbitrage.py : Data Pre-Analysis, Transformations, and other preparation to be fed into the learning algorithm.
- recognition.py : Setup, training, and testing of a single convolutional neural network (CNN), or hyperparameter optimization across CNNs.
- data_reporting.py : Output and reporting of any/all results: tabular summaries of runs, visual performance comparisons, as well as parallel coordinate plots and feature maps.
We welcome members of the open-source community to extend the functionality of HARDy, submit feature requests, and report bugs.
If you would like to suggest a feature or start a discussion on a possible extension of HARDy, please feel free to raise an issue.
If you would like to report a bug, please follow this link.
If you would like to contribute to HARDy, you can fork the repository, add your contribution, and generate a pull request. The complete guide to making contributions can be found at this link.
Maria Politi acknowledges support from the National Science Foundation through NSF-CBET grant 1917340.