This repo contains the code for the Human Tumor Atlas Network (HTAN) Data Portal.
This is a Next.js project bootstrapped with create-next-app.
All data comes from Synapse. A Python script generates a JSON file that contains all the metadata. There is currently no backend; the site is fully static, i.e. all filtering happens on the frontend.
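Because there is no backend, the frontend loads the generated JSON once and filters it in memory. A minimal sketch of that pattern, assuming a hypothetical atlasid field (the real schema in public/syn_data.json differs):

```typescript
// Minimal sketch of client-side filtering. `FileRecord` and `atlasid`
// are hypothetical; the actual schema in public/syn_data.json differs.
interface FileRecord {
  atlasid: string;
  [key: string]: unknown;
}

async function loadFiles(): Promise<FileRecord[]> {
  // The whole dataset is fetched once; there is no server-side query API.
  const res = await fetch("/syn_data.json");
  return res.json();
}

// All filtering happens in the browser:
loadFiles().then((files) => {
  const atlasFiles = files.filter((f) => f.atlasid === "HTA1");
  console.log(`${atlasFiles.length} files for atlas HTA1`);
});
```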
Only certain metadata rows and data files on Synapse are released. We keep track of this information in Google BigQuery. You can get the latest dump of it with these commands (requires access to the htan-dcc Google project):
bq extract --destination_format CSV released.entities_v4 gs://htan-release-files/entities_v4.csv
bq extract --destination_format CSV released.metadata_v4 gs://htan-release-files/metadata_v4.csv
gsutil cp gs://htan-release-files/entities_v4.csv entities_v4.csv
gsutil cp gs://htan-release-files/metadata_v4.csv metadata_v4.csv
cd data
# Run the script that pulls all the HTAN metadata
# It outputs a JSON in public/syn_data.json and a JSON with links to metadata in data/syn_metadata.json
python get_syn_data.py
cd ..
# Find and replace certain values (a temporary fix)
yarn findAndReplace
# we store the result of this in gzipped format
gzip -c public/syn_data.json > public/syn_data.json.gz
# Convert the resulting JSON to a more efficient structure for visualization
# Note: we write stdout and stderr to files so they can be shared with
# others for data QC debugging purposes
# TODO: there is an SSL legacy provider hack in place for
# https://stackoverflow.com/questions/69692842/error-message-error0308010cdigital-envelope-routinesunsupported
yarn processSynapseJSON > data/processSynapseJSON.log 2> data/processSynapseJSON.error.log
# we also store the processed data in gzipped format
gzip -c public/processed_syn_data.json > public/processed_syn_data.json.gz
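To sanity-check the gzipped output before publishing it, something like this Node snippet works (a sketch; the top-level shape of processed_syn_data.json is an assumption):

```typescript
// Quick QC check of the processed, gzipped output.
import { readFileSync } from "fs";
import { gunzipSync } from "zlib";

const raw = gunzipSync(readFileSync("public/processed_syn_data.json.gz"));
const data = JSON.parse(raw.toString("utf8"));
// Inspect the top-level structure (exact keys depend on processSynapseJSON).
console.log("top-level keys:", Object.keys(data));
```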
At the moment all data is hosted on S3 for production, because Vercel imposes a file size limit. To update the data:
- gzip the file (note that it's already gzipped in the repo)
- Remove the ".gz" extension so the name is just .json, and rename the file to include the current date.
- Upload the file to the S3 bucket "htanfiles" (part of the schultz AWS org)
- The file needs two meta settings: Content-Encoding=gzip and Content-Type=application/json
- Once the file is up, change the path in /lib/helpers.ts (see the sketch after the command below)
Or steps 1-4 as a single command:
MY_AWS_PROFILE=inodb
aws s3 cp processed_syn_data.json.gz s3://htanfiles/processed_syn_data_$(date "+%Y%m%d_%H%M").json --profile=${MY_AWS_PROFILE} --content-encoding gzip --content-type=application/json --acl public-read
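For the last step, the constant to update in /lib/helpers.ts looks roughly like this (hypothetical name and date; check the file for the actual identifier):

```typescript
// lib/helpers.ts (sketch): point the data URL at the newly uploaded,
// date-stamped file on the htanfiles bucket. Name and date are examples.
export const PROCESSED_SYN_DATA_URL =
  "https://htanfiles.s3.amazonaws.com/processed_syn_data_20230101_0000.json";
```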
There are currently no automated tests other than building the project, so be careful when merging to master.
First, run the development server:
npm run dev
# or
yarn dev
Open http://localhost:3000 with your browser to see the result.
You can start editing any page. The page auto-updates as you edit the file.
Add a `debugger;` statement somewhere in the code, then run:
./node_modules/.bin/ncc build --source-map --no-source-map-register data/processSynapseJSON.ts
Followed by:
node --inspect-brk dist/index.js
Now you can attach to it in, e.g., VSCode.
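For reference, the `debugger;` statement from the first step can sit anywhere in data/processSynapseJSON.ts; a hypothetical placement:

```typescript
// Hypothetical placement: node --inspect-brk pauses at startup, and the
// attached debugger (e.g. VSCode) then stops again at this statement.
function processAtlas(atlas: unknown) {
  debugger; // inspect `atlas` in the attached debug session
  // ...rest of the processing logic
}
```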
To learn more about Next.js, take a look at the following resources:
- Next.js Documentation - learn about Next.js features and API.
- Learn Next.js - an interactive Next.js tutorial.
The app is deployed using the Vercel platform (formerly ZEIT Now) from the creators of Next.js.