Several related issues here:
If you create a custom N1 VM, it can have at most 6.5 GB of memory per vCPU, and custom machine types only allow a vCPU count of 1 or an even number. So, for example, selecting 32 GB for the mutect step forces us to request 6 vCPUs: 32 / 6.5 rounds up to 5, which then gets bumped to the next even count. A list of places where we've been bumped up to more cores is here:
/storage1/fs1/mgriffit/Active/griffithlab/pipeline_test/gcp_wdl_test/saved_results/final_results_v1_fusions_ens95/workflow_artifacts/extra_cpu_requests.txt
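A minimal sketch of that rounding logic, assuming the 6.5 GB/vCPU ceiling and the 1-or-even vCPU rule for custom N1 machine types (the function name is illustrative, not existing pipeline code):

```python
import math

# GCP custom N1 machine types allow at most 6.5 GB of memory per vCPU,
# and the vCPU count must be 1 or an even number.
MAX_GB_PER_VCPU = 6.5

def vcpus_for_memory(mem_gb: float, requested_vcpus: int = 1) -> int:
    """Return the vCPU count a custom N1 VM needs for a memory request."""
    needed = max(requested_vcpus, math.ceil(mem_gb / MAX_GB_PER_VCPU))
    # An odd requirement above 1 (e.g. 5) is bumped to the next even count.
    if needed > 1 and needed % 2 == 1:
        needed += 1
    return needed

# The mutect example: a 32 GB request rounds up to 5, then to 6 vCPUs.
assert vcpus_for_memory(32) == 6
```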
WGS runs require more resources than exomes in many cases, but our memory/CPU values are currently set so that the largest data sets will run. Either a) provide a top-level parameter for specifying WGS or exome, or b) use the BAM size directly to estimate memory usage in some of these steps.
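A rough sketch of option (b). The function name, coefficients, and caps here are placeholders that would need to be calibrated per step against real runs; nothing below is existing pipeline code:

```python
import os

def estimate_mem_gb(bam_path: str,
                    base_gb: float = 4.0,
                    gb_per_input_gb: float = 0.5,
                    cap_gb: float = 32.0) -> int:
    """Scale a step's memory request with input BAM size instead of
    hard-coding the WGS worst case for every run."""
    bam_gb = os.path.getsize(bam_path) / 1e9
    # Linear scaling from a per-step baseline, capped at the current
    # worst-case value so we never exceed what is provisioned today.
    return int(min(cap_gb, base_gb + gb_per_input_gb * bam_gb))
```

If the sizing were done inside the workflow itself rather than in a wrapper script, WDL's built-in `size()` function could compute the input BAM size directly in each task's runtime block to the same effect.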