Project HARDy

"HARDy: Handling Arbitrary Recognition of Data in python" A package to assist in discovery, research, and classification of YOUR data, no matter who you are!

Project Objective

Numerical and visual transformation of experimental data to improve its classification and cataloging

This project was part of the DIRECT Capstone Project at the University of Washington and was presented at the showcase; follow this link for the presentation

Requirements:

The package HARDy has the following main dependencies:

  1. Python = 3.7
  2. TensorFlow = 2.0

The detailed list of dependencies can be found in the environment.yml file

Installation:

The package HARDy can be installed using the following command:

conda install -c pozzorg hardy

Alternatively, you can install it from the GitHub repository using the following steps:

*Please note that v1.0 is currently the most stable release*

  1. In your terminal, run `git clone https://github.com/EISy-as-Py/hardy.git`
  2. Change to the hardy root directory by running `cd hardy`
  3. Run `git checkout v1.0`
  4. Run `python setup.py install`
  5. To check the installation, run `python -c "import hardy"` in your terminal

For other methods of installation, such as using the environment file or pip, please visit the Installation page.

Usage:

HARDy uses Keras for training convolutional neural networks and Keras Tuner for hyperparameter optimization. The flow of information is shown in the image below:

*Information flow of how the package works*

An example Jupyter notebook that runs HARDy using a single script is available at this link: Example Notebook

To perform the various transformations, train the neural network, and run the hyperparameter optimization, HARDy uses a set of .yaml configuration files.

Instructions for modifying the configuration files, or writing your own, can be found in the documentation.
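The numerical transformations applied before classification can be as simple as elementwise functions selected by name. The sketch below is purely illustrative: the transform names and the `apply_transform` helper are hypothetical, not HARDy's actual API, which is driven by its .yaml configuration files.

```python
import numpy as np

# Hypothetical catalog of named numerical transformations; HARDy's real
# transforms are declared in its .yaml configuration files instead.
TRANSFORMS = {
    "raw": lambda x: x,
    "log10": lambda x: np.log10(x),
    "reciprocal": lambda x: 1.0 / x,
}

def apply_transform(data, name):
    """Apply a named transformation to a 1-D data series."""
    return TRANSFORMS[name](np.asarray(data, dtype=float))

x = np.array([1.0, 10.0, 100.0])
print(apply_transform(x, "log10"))  # -> [0. 1. 2.]
```

Keeping the transforms in a declarative mapping like this is what makes it possible to enumerate many data representations from a single configuration file.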

The notebooks and documentation can also be accessed at this link: Documentation

Visualization

To increase the density of data presented to the convolutional neural network and add a visual transformation of the data, we adopted a new plotting technique that takes advantage of how computers read images. Using color images, we encoded the experimental data in the pixel values, assigning a different data series to each image channel. The result is data-dense images, which are also pretty to look at.

*Details on the proposed visual transformation used to increase the image data density*
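The channel-encoding idea can be sketched in a few lines of numpy: min-max normalize up to three data series and write each one into a separate channel of an RGB image, so the pixel values themselves carry the data rather than a line plot. This is a simplified illustration, not HARDy's actual plotting code.

```python
import numpy as np

def encode_rgb(series_r, series_g, series_b, size=64):
    """Encode three 1-D data series into the channels of one RGB image.

    Each series is min-max normalized to [0, 255] and resampled to fill
    its own channel. Simplified sketch; HARDy's real visual
    transformations differ in detail.
    """
    img = np.zeros((size, size, 3), dtype=np.uint8)
    for ch, s in enumerate((series_r, series_g, series_b)):
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        s = (s - s.min()) / (rng if rng else 1.0)   # min-max normalize
        # Resample the series so it exactly fills size*size pixels.
        resampled = np.interp(np.linspace(0, 1, size * size),
                              np.linspace(0, 1, s.size), s)
        img[..., ch] = (resampled * 255).reshape(size, size).astype(np.uint8)
    return img

x = np.linspace(0, 1, 100)
image = encode_rgb(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x), x)
print(image.shape)  # -> (64, 64, 3)
```

Because each channel holds a full series, a single small image can present several curves to the CNN at once, which is the data-density gain described above.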

Mission:

We have been commissioned by Professor Lilo Pozzo to create a new tool for research and discovery, for her lab and for high-throughput researchers everywhere. Our vision of the final product:

  • A package which can approach any large, labeled dataset (such as those familiar to High Throughput Screening (HTS) researchers).
  • Perform a wide, procedurally generated and data-guided array of transformations on the data, producing completely novel ways of examining it: perhaps not human-readable, but certainly machine-readable.
  • Train "A Machine Learning Algorithm" (we currently focus on visual-processing CNNs but are open to anything!) to classify the existing labeled data based on each of the aforementioned transformations.
  • Report back to the user:
    • Which versions of the model/algorithm worked best?
    • Which transformations appeared the most useful (i.e., were used across many of the most successful models)?
    • Which data "fingerprints" should we pay the most attention to?
  • Present a user interface that allows non-programmers to interact with and use the chosen classifier(s) in their work.

Use Cases:

The package is designed to deal with a diverse set of labeled data. These are some of the use cases we see benefiting from the HARDy package.

*Possible use cases for the HARDy package*

Modules Overview:

  • handling.py : functions related to configuration, importing/exporting, and other back-end tasks.
  • arbitrage.py : data pre-analysis, transformations, and other preparation before feeding the data into the learning algorithm.
  • recognition.py : setup, training, and testing of a single convolutional neural network (CNN), or hyperparameter optimization for CNNs.
  • data_reporting.py : output and reporting of results: tabular summaries of runs, visual performance comparisons, parallel coordinate plots, and feature maps.

Community Guidelines:

We welcome members of the open-source community to extend the functionalities of HARDy, submit feature requests, and report bugs.

Feature Request:

If you would like to suggest a feature or start a discussion on a possible extension of HARDy, please feel free to raise an issue.

Bug Report:

If you would like to report a bug, please follow this link.

Contributions:

If you would like to contribute to HARDy, you can fork the repository, add your contribution, and generate a pull request. The complete guide to making contributions can be found at this link.

Acknowledgment

Maria Politi acknowledges support from the National Science Foundation through NSF-CBET grant 1917340.