Identify our criteria for grading each dataset #2
I'd like to make the verifiability of datasets a small grading factor. If there is some mechanism to determine that a copy of a dataset is the same as the original (e.g., a hash), and SSL is in place to ensure that it's not tampered with in transit, that's got to be worth something.
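As a concrete illustration of that check, here is a minimal sketch in Python, assuming the publisher posts a SHA-256 digest alongside the dataset; the function name and the example URL/hash are hypothetical, not from any existing census tooling:

```python
import hashlib
import urllib.request

def verify_dataset(url: str, expected_sha256: str) -> bool:
    """Download a dataset and compare its SHA-256 digest to a published hash.

    Requiring an https:// URL covers the "not tampered with in transit"
    half of the check; the digest comparison covers "same as the original".
    """
    if not url.startswith("https://"):
        return False  # no SSL protecting the transfer
    with urllib.request.urlopen(url) as resp:
        digest = hashlib.sha256(resp.read()).hexdigest()
    return digest == expected_sha256

# Hypothetical usage; the URL and digest are placeholders:
# verify_dataset("https://data.example.gov/budget.csv", "9f86d081884c7d65...")
```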
This is the scoring methodology used by the OKFN's global census:
The Local Open Data Census does not have a "Methodology" section, at least not one that I can find, but it seems to use the same criteria. I do not know if it uses the same weighting, and I do not know if it is possible for us to use different criteria or different weighting.
I worry about scoring. At first blush, I'm not sure that a stated open license is really twice as important as being machine readable. I also don't know that charging for data is necessarily a binary thing (yes, 0 points; no, 15 points): if data costs $1M, that seems like it'd be worth 0 points, but if it costs 5¢, that's not great, yet it's a different level of not-great than $1M. It seems like it'd be good to actually gather some data, test out the scoring, see how the results look, and then adjust the weighting. But I don't think it looks great right now.
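To make the "test out the scoring" idea concrete, here is one way the cost criterion could be made graduated rather than binary. This is purely a hypothetical sketch; the point values and log-scale constants are invented for illustration:

```python
import math

def cost_score(price_usd: float, max_points: int = 15) -> float:
    """Score the cost criterion on a sliding scale instead of 0-or-15.

    Free data earns full points; the score decays with the log of the
    price, so a 5-cent fee loses only a little while a $1M fee scores 0.
    """
    if price_usd <= 0:
        return float(max_points)
    return max(0.0, max_points - 2.5 * math.log10(price_usd * 100))

# With these invented constants: cost_score(0) == 15.0,
# cost_score(0.05) ≈ 13.25, and cost_score(1_000_000) == 0.0
```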
Here are the two additional metrics that I'd like to score against:
I don't think these are terribly important (yet), so I imagine I'd award just 5 points apiece.
Another one: Is it available in their central repository?
This is definitely flexible; see how Code for America has modified the questions for their Local Digital Services Census: https://service-census.herokuapp.com/ (rolled out for CodeAcross 2015).
Over on issue #5, I've come to the same conclusion. As soon as OKFN gives our account the thumbs-up, I think we can start entering any criteria that we like.
Does it include key elements? (A variant of: is it complete?)
For future: does it adhere to the accepted schema?
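If schema adherence does become a criterion, a check along these lines could be automated. This is a minimal sketch using the jsonschema package; the required fields and the schema itself are invented for illustration, standing in for whatever field definitions the census actually agrees on:

```python
from jsonschema import ValidationError, validate

# Hypothetical schema: the "key elements" every dataset entry must include.
DATASET_SCHEMA = {
    "type": "object",
    "required": ["title", "license", "url"],
    "properties": {
        "title": {"type": "string"},
        "license": {"type": "string"},
        "url": {"type": "string"},
    },
}

def adheres_to_schema(record: dict) -> bool:
    """Return True if a dataset record contains the agreed key elements."""
    try:
        validate(instance=record, schema=DATASET_SCHEMA)
        return True
    except ValidationError:
        return False

# adheres_to_schema({"title": "Budget", "license": "CC0", "url": "https://..."})
# -> True
```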
And, relatedly, whether it's possible for us to use our own criteria; CKAN's platform may not support anything other than its own.