-
A great place to start is with the two most common tools for ML explainability, LIME and SHAP: here is a nice overview.
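To make the LIME idea concrete, here is a minimal numpy-only sketch of its core trick: perturb the input around one point, weight the perturbations by proximity, and fit a weighted linear surrogate whose coefficients act as local feature attributions. The `black_box` function and all parameter choices are hypothetical stand-ins, not the real `lime` library API.

```python
import numpy as np

# Hypothetical "black box" model: a nonlinear function of 3 features.
def black_box(X):
    return 3.0 * X[:, 0] - 2.0 * X[:, 1] + np.sin(X[:, 2])

def lime_style_explanation(f, x, n_samples=5000, scale=0.1, seed=0):
    """Fit a weighted linear surrogate around x (the core idea behind LIME).

    Perturb x with Gaussian noise, weight samples by proximity to x,
    and solve a weighted least-squares problem; the coefficients are
    local feature attributions.
    """
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=scale, size=(n_samples, x.size))
    y = f(X)
    # Proximity kernel: perturbations closer to x count more.
    d2 = ((X - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * scale ** 2))
    # Weighted least squares with an intercept column appended.
    A = np.hstack([X, np.ones((n_samples, 1))])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return coef[:-1]  # per-feature local slopes (intercept dropped)

x0 = np.array([0.5, -1.0, 0.0])
attributions = lime_style_explanation(black_box, x0)
print(attributions)  # approximately the local gradient [3, -2, 1]
```

The real `lime` package adds interpretable binary features, sampling strategies for images/text, and sparse regression, but the weighted local surrogate above is the heart of the method.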
-
A potentially simple hackathon idea could be to essentially reproduce the Wolf/Husky experiment but with some geophysical data. For example, a network could be trained to simply classify seismic shot gathers as noisy or clean. Then, using some explainable AI methods (LIME, activation maps, etc.), comparisons could be made between how a geophysicist determines a noisy shot gather vs. how the model does. Where do the ML model and the geophysicist agree or disagree? This is my first hackathon, so this may be laughably simple, stupidly ambitious, or maybe just boring. I'm not sure :)
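One cheap explainability method that would fit this idea is occlusion sensitivity: mask patches of the gather and see how much the "noisy" score drops. A minimal sketch under invented assumptions — the synthetic "shot gather" and the hand-written `noise_score` stand in for real data and a trained classifier:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "shot gather": quiet everywhere except a noisy top-left quadrant.
gather = np.zeros((32, 32))
gather[:16, :16] += rng.normal(scale=1.0, size=(16, 16))

def noise_score(img):
    """Stand-in 'model': mean absolute sample-to-sample difference.
    (A real hackathon entry would use a trained CNN's class score here.)"""
    return np.abs(np.diff(img, axis=0)).mean()

def occlusion_map(img, score_fn, patch=8):
    """Occlusion sensitivity: zero out each patch and record the score drop.
    Large drops mark the regions the model relies on for its decision."""
    base = score_fn(img)
    h, w = img.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

heat = occlusion_map(gather, noise_score)
# The hottest patch should fall inside the noisy top-left quadrant.
print(np.unravel_index(heat.argmax(), heat.shape))
```

The nice thing for a comparison study is that the resulting heatmap lives in the same space as the gather, so you can overlay it directly on the regions a geophysicist would point to.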
-
@EdwinB12 There's a repository that was recently made available for classifying unstructured data, which comes with a dataset and a pretrained model. Maybe this is somewhere you could try the Wolf/Husky experiment? Here is the GitHub repository: There are also some great repositories out there to start from for heatmap / Grad-CAM style visualizations.
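For anyone new to Grad-CAM, the computation itself is tiny once you have the last conv layer's activations and the gradient of the class score with respect to them. Here is a numpy-only sketch of that step; in a real model these two arrays would come from a forward/backward pass (e.g. via framework hooks), and the synthetic arrays below are purely illustrative:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Core Grad-CAM computation (Selvaraju et al., 2017).

    activations: (K, H, W) feature maps from the last conv layer.
    gradients:   (K, H, W) gradient of the class score w.r.t. those maps.
    """
    # Channel weights: global-average-pooled gradients.
    weights = gradients.mean(axis=(1, 2))             # (K,)
    cam = np.tensordot(weights, activations, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)                        # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # scale to [0, 1] for display
    return cam

# Synthetic example: channel 0 fires top-left with positive gradient;
# channel 1 fires bottom-right with negative gradient.
acts = np.zeros((2, 8, 8))
acts[0, :4, :4] = 1.0
acts[1, 4:, 4:] = 1.0
grads = np.stack([np.full((8, 8), 0.5), np.full((8, 8), -0.5)])

cam = grad_cam(acts, grads)
print(cam[:4, :4].mean(), cam[4:, 4:].mean())  # prints 1.0 0.0
```

The low-resolution map is then upsampled to the input size and overlaid as a heatmap; the ReLU is what restricts the map to evidence *for* the class rather than against it.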
-
There is also the FORCE lithology prediction model from 2020: https://github.com/olawaleibrahim/2020_FORCE_Lithology_Prediction
-
This thread will provide some ideas and help for getting started with this year's EAGE hackathon theme: Explainable AI.
How do we ensure that our models are behaving as they should? How can we easily demonstrate this to those who don't have a data science background? Here are some examples to get your team thinking.