
evaluation metrics #1

Open
freeman-lab opened this issue Aug 1, 2016 · 5 comments

@freeman-lab
Member

Let's discuss evaluation metrics for spikefinder!

The data for evaluation will be, for each neuron, a fluorescence trace and an estimated spike rate. Presumably the rate will be either binary (a spike), an integer (a count), or continuous-valued (a probability).

Options include information gain, area under an ROC curve, and correlation. If you have a lot of experience with these statistics, it'd be great if you could share it here (cc @philippberens).

Correlation might be simplest because it's easy to compute and still gives reasonable results. But there's also no reason not to compute multiple metrics, as we currently do for neurofinder.
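
To show how little is involved, here's a minimal sketch of the correlation option, assuming the true spikes and the estimated rate come in as two aligned 1-D arrays on the same time grid (the binning/alignment and the function name are just placeholders, not a settled choice):

import numpy as np

def corr_score(spikes, rate):
    # Pearson correlation between the true spike train and the predicted
    # rate, both assumed to be 1-D arrays of equal length
    spikes = np.asarray(spikes, dtype=float)
    rate = np.asarray(rate, dtype=float)
    return np.corrcoef(spikes, rate)[0, 1]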

@philippberens
Collaborator

@lucastheis would be good to add here, too

I think the correlation coefficient is a good compromise between easy to compute and informative.

The metrics used in our paper are implemented here:

https://github.com/lucastheis/c2s/blob/master/c2s/c2s.py

This should be relatively easy to port.

@philippberens
Collaborator

We should also implement a rank correlation measure for completeness.
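
Something along these lines, using scipy's Spearman implementation (the inputs are assumed to be two aligned 1-D arrays, same caveat as above about binning):

from scipy.stats import spearmanr

def rank_score(spikes, rate):
    # Spearman rank correlation between the true spike train and the
    # predicted rate; spearmanr returns (rho, p-value), keep only rho
    rho, _ = spearmanr(spikes, rate)
    return rho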

@philippberens
Collaborator

@freeman-lab could you let me know where the metrics need to go? I think we are settled on the data format, so I could get started with the implementation. I will definitely need some input from Python pros, though.

@freeman-lab
Member Author

freeman-lab commented Aug 7, 2016

@philippberens I just made a commit that adds stubs for the methods; you can find them here: https://github.com/codeneuro/spikefinder-python/blob/master/spikefinder/main.py#L33-L46, and you should be able to test them out by calling

python example.py

For now that returns

corr: [0.66730837118203057, 1.0]
auc: [None, None]
loglik: [None, None]
info: [None, None]
rank: [None, None]

Most aren't implemented, but it should be clear where to insert the implementations. Also feel free to expand the test data files a.csv and b.csv to have some structure if it would help.

We can track what needs to be implemented here:

  • corr
  • auc
  • loglik
  • info
  • rank
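
To be concrete about the pattern, here's roughly how one of these could be filled in, assuming the inputs are parsed into pandas DataFrames with one column per neuron; the exact signature in main.py may differ, so treat this as a sketch rather than the real stub:

import numpy as np

def corr(a, b):
    # one score per column, so the result stays a list like the
    # per-column corr values printed by example.py
    return [np.corrcoef(a[c], b[c])[0, 1] for c in a.columns]

The other metrics should be able to follow the same per-column pattern.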

@philippberens
Collaborator

@freeman-lab

I added the code for rank correlation, as well as loglik and info gain. These were adapted from @lucastheis's code in c2s.

I looked at the AUC code as well, but I didn't fully understand where he gets the ROC module from; potentially it comes from the Cython file. Could you look at that?
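
If porting that module turns out to be painful, one alternative (a substitute suggestion on my part, not what c2s does) would be scikit-learn's roc_auc_score, treating bins with at least one true spike as the positive class and the predicted rate as the ranking score:

import numpy as np
from sklearn.metrics import roc_auc_score

def auc_score(spikes, rate):
    # binarize the true spike train per time bin and use the predicted
    # rate as the score for the ROC analysis
    labels = (np.asarray(spikes) > 0).astype(int)
    return roc_auc_score(labels, np.asarray(rate))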
