OxCGRT data merge, npi model computation docker deployment (#523)
* 504 - Extending the NPI model data
* Added a dummy feature for every intervention, turned on when the intervention is turned off for the first time and turned off again when the original intervention is put back in place
* Extended the data for another month by using the last known countermeasures in each region and data from Johns Hopkins
* Fixed workflow YAML
* Fixed workflow YAML
* Maybe resolving failing dependency installation in GitHub Actions by updating pip and setuptools
* Fixed the extension of the data, removed the cancelled columns from the intervention JSON
* Introduced new intervention icons
* Fixed linting
* Show the NPI model chart only for the model channel
* Extrapolating data
* Fixed data preprocessing - removing deaths from countermeasures
* Short-term model improvements
* Extrapolation
* Added extrapolation date
* Merge OxCGRT countermeasure data
* Poetry update
* Merged NPI data from OxCGRT
  - Prepared a Docker container which can be run on GCP compute instances from GitHub pipelines
  - Defined a conda environment to accelerate the computation of the NPI model
* Fixed invalid GitHub workflow YAML
* Added upload-data step, removed decrypt-secrets step from the workflow
* Fixed upload-data step
* Moved GCP setup into the workflow, added extrapolation period to the model
* Fixed extrapolation period argument
* Removed steps from the compute-npi workflow
* Don't run previous steps in workflows; instead, download the latest r_estimates.csv inside Docker (the other steps are quick)
  - Created a script which deletes the instance when Docker exits. This script is copied to the GCP console, but I included it in the repo for consistency
* Fixed create-with-container command in the workflow
* Added env file in compute-npi-model to hopefully fix the gcloud command
* Fixed workflow
* Fixed workflow
* Set region for compute
* Use a different service account
* Updated gcloud in the workflow
* Changed order of arguments
* Screw instance templates, they just refuse to work - defining the instance directly in the command
* Set GCP project in the workflow
* Fixed typo
* Pass the Foretold channel env variable directly
* Fixed parameter name
* Appending a newline to the env file
* Debugging workflow
* Redefined the machine
* The CPU has to be specified as well
* Dropped the machine type, added the VM type instead
* Added scopes to the VM instance so that it can pull the Docker image
* Fixed syntax
* Building the Docker container, debugging the startup script
* Use a URL to pass the startup script to the instance
* Extracting the branch name, refactoring the workflow
* Reformatted workflow YAML
* Fixed workflow step
* Make sure the startup script won't block the model
* Trying it without the startup script
* Another try at not blocking the NPI model with the startup script
* More disk space (the conda image is large), debugging the startup script
* Fixed preprocessing of countermeasures, debugging the startup script
* Removed the startup script, killing the instance from the Docker container instead, reformatted code
* Fixed linting after the black update
* Filter out subregions from OxCGRT data
  - OxCGRT added data for subregions (e.g. US states), which broke the pipeline. The fix is to filter them out, but we might use them in the future
* Fixed key passing to the container
  - Strip the quotes from the key - they are necessary when passing it in the env file
* Fixed run-model script
* Fixed extrapolation date, changed channel, run on 40 countries
* Small fixes
* Triggering the NPI-model computing workflow manually
* More tuning of the model interactions (to hopefully shrink the confidence interval)
* Made sure that each NUTS sampling process created by pymc3 uses only one thread - the parallelization doesn't work, and this greatly speeds up the computation
* Fixed linting
* Pre-PR clean-up
* LGTM-based fixes

Co-authored-by: Marek Pukaj <[email protected]>
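The dummy-feature trick for cancelled interventions described above can be sketched in pandas as follows; the function and column names are illustrative, not the repo's actual code:

```python
import pandas as pd

def add_cancellation_features(df: pd.DataFrame, intervention_cols: list) -> pd.DataFrame:
    """For each 0/1 intervention column, add a dummy column that switches
    on when the intervention is lifted after having been active, and
    switches off again once the intervention is reinstated."""
    out = df.copy()
    for col in intervention_cols:
        was_active = out[col].cummax()  # 1 from the first activation onward
        out[f"{col}_cancelled"] = ((was_active == 1) & (out[col] == 0)).astype(int)
    return out

# Example: closures active on days 1-2, lifted on days 3-4, reinstated on day 5
df = pd.DataFrame({"school_closed": [0, 1, 1, 0, 0, 1]})
print(add_cancellation_features(df, ["school_closed"])["school_closed_cancelled"].tolist())
# [0, 0, 0, 1, 1, 0]
```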
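The per-region extrapolation by the last known countermeasures amounts to a forward fill over an extended date index. A minimal sketch, where the `Code` and `Date` column names are assumptions:

```python
import pandas as pd

def extrapolate_countermeasures(df: pd.DataFrame, until: str) -> pd.DataFrame:
    """Extend each region's time series to `until` by carrying the last
    observed countermeasure values forward (a per-region forward fill)."""
    pieces = []
    for code, grp in df.groupby("Code"):
        grp = grp.set_index("Date").sort_index()
        full_index = pd.date_range(grp.index.min(), until, freq="D")
        grp = grp.reindex(full_index).ffill()
        grp["Code"] = code  # restore the region code on the new rows explicitly
        pieces.append(grp.rename_axis("Date").reset_index())
    return pd.concat(pieces, ignore_index=True)
```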
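Filtering out the OxCGRT subregion rows reduces to keeping rows without a region code; a sketch assuming the OxCGRT export's `RegionCode` column marks subnational rows:

```python
import pandas as pd

def drop_subregions(oxcgrt: pd.DataFrame) -> pd.DataFrame:
    """Keep only national-level rows: OxCGRT marks subnational entries
    (e.g. US states) with a non-empty RegionCode."""
    if "RegionCode" not in oxcgrt.columns:
        return oxcgrt  # assumption: older exports had no subregion rows at all
    return oxcgrt[oxcgrt["RegionCode"].isna()].copy()
```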
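Pinning each pymc3 NUTS process to a single thread is typically done by capping the BLAS/OpenMP thread pools before theano/pymc3 are imported; the log does not show the repo's exact mechanism, so this is one plausible way to do it:

```python
import os

# These variables are read once, at import time, by the underlying
# BLAS/OpenMP runtimes - so they must be set before importing theano or pymc3.
for var in ("OMP_NUM_THREADS", "MKL_NUM_THREADS", "OPENBLAS_NUM_THREADS"):
    os.environ[var] = "1"

# Chain-level parallelism still works afterwards: pm.sample(cores=4) then
# runs four single-threaded sampling processes instead of oversubscribing.
```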