This is a demonstration of how New York City employees can use DuckDB and dbt to build and analyze data.
This repo was created to accompany a presentation for Analytics Exchange Summit 2024 [Google Slides].
A fictional dataset about local agriculture called Farm to Market was created for the purposes of this repo.
The code is organized into stages to show a progression from simple queries to a robust data pipeline.
- Stage 0: confirm everything is set up correctly
- Stage 1: load and explore data
- Stage 2: use a data pipeline to build Farm to Market
- Stage 3: use dbt to build Farm to Market
Farm to Market captures where local food is sold and might be grown in New York City.
This dataset combines the locations of farmers markets and potential farms (community gardens) to highlight availability and potential local suppliers of healthy produce in NYC.

| Table | Description |
| --- | --- |
| `markets` | Each row is a farmers market |
| `farms` | Each row is a potential farm |
| `farm_to_market` | Each row is a market and farm pair |
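Once the dataset is built (Stages 2 and 3 below), it can be queried directly with DuckDB. A minimal sketch; the database path and column names here are assumptions, not the repo's actual schema:

```python
import duckdb

# Query the finished dataset; the path and columns are assumptions.
con = duckdb.connect("data/database.duckdb", read_only=True)
con.sql("""
    SELECT market, farm
    FROM farm_to_market
    LIMIT 5
""").show()
```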
Source data:
- NYC Borough Boundaries [source]
- NYC Farmers Markets [source]
- GreenThumb Garden Info [source]
- GreenThumb Block-Lot [source]
**Important**

For City employees, installing software on work computers can be difficult due to a lack of administrative access. But you may be able to install software "only for me" rather than for "anyone who uses this computer". All of these prerequisites can be (and by default are) installed only for the user rather than system-wide.
**Required**
- Python 3.12 for running Python code (any version >= 3.9 works)
- git for cloning this repo; installing it also provides a `bash` terminal
**Optional**
- VS Code for an integrated development environment (IDE) and the Python extension
- DBeaver for querying a database
**Important**

For City employees, most work computers access the internet through a firewall. This causes issues when writing and running code (e.g. `pip install` fails).
To allow code to access the internet on a PC:

- Search for and select `Edit environment variables for your account` in the Start menu
- Add two environment variables named `http_proxy` and `https_proxy`, each with the same value of `http://bcpxy.nycnet:8080`
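If it's more convenient to set the proxy from code (e.g. at the top of a notebook), the same values can be applied to just the current Python process; a minimal sketch:

```python
import os

# Set the proxy for this Python process (and any child processes) only,
# using the same values as the environment-variable instructions above.
os.environ["http_proxy"] = "http://bcpxy.nycnet:8080"
os.environ["https_proxy"] = "http://bcpxy.nycnet:8080"
```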
- Clone this repo and navigate to the new folder:

  ```bash
  git clone https://github.com/NYCPlanning/anex-summit-2024-demo.git
  cd anex-summit-2024-demo
  ```
- Create a python virtual environment named `.venv`, either using the commands below or using the VS Code command `Python: create environment`:

  ```bash
  which python
  python --version
  python -m venv .venv
  ```
- Activate the virtual environment

  Windows bash:

  ```bash
  source .venv/Scripts/activate
  ```

  Windows PowerShell:

  ```powershell
  .venv\Scripts\Activate.ps1
  ```

  POSIX bash/zsh:

  ```bash
  source .venv/bin/activate
  ```
- Install packages:

  ```bash
  python -m pip install --requirement requirements.txt
  pip list
  ```
Run a python script to confirm everything is set up:

```bash
python -m stage_0.sanity_check
```
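The contents of `stage_0/sanity_check.py` aren't reproduced here; a minimal sketch of that kind of check, assuming it only needs to prove the interpreter and DuckDB work:

```python
import sys

import duckdb

# Report the interpreter in use and confirm DuckDB can run an in-memory query.
print(f"Python {sys.version.split()[0]}")
print(duckdb.sql("SELECT 'DuckDB is working' AS status").fetchone()[0])
```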
Load and explore data from various sources
These datasets were chosen to show some of the ways source data can be imported with DuckDB.
- PLUTO from the NYC Department of City Planning (source)
- NYC Airbnb data (source)
- Trip record data from the NYC Taxi and Limousine Commission (TLC) (source)
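The load script in the steps below does the actual work, but as background, here is a hedged sketch of two of the DuckDB import paths these datasets exercise; the file name and URL are illustrative assumptions:

```python
import duckdb

con = duckdb.connect()  # in-memory database

# Local CSV: DuckDB infers column names and types from the file itself.
con.sql("CREATE TABLE pluto AS SELECT * FROM read_csv_auto('data/source_data/pluto.csv')")

# Remote Parquet over HTTP via the httpfs extension (placeholder URL;
# the TLC publishes its trip records as Parquet files).
con.sql("INSTALL httpfs")
con.sql("LOAD httpfs")
con.sql(
    "CREATE TABLE trips AS SELECT * "
    "FROM read_parquet('https://example.com/yellow_tripdata_2024-01.parquet')"
)
```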
- Download PLUTO from NYC Open Data here by navigating to `Export` -> `Download file` -> `Export format: CSV`
**Tip**

If the Open Data download takes too long, try downloading from DCP's site Bytes of the Big Apple (link).
- Rename the downloaded csv file to `pluto.csv` and move it to `data/source_data/`
- Run a python script to download the other two sources and load all three sources into a database:

  ```bash
  python -m stage_1.load
  ```
- Use the Jupyter notebook `stage_1/explore.ipynb` or DBeaver to explore the data
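To explore from a plain Python session instead of the notebook or DBeaver, a minimal sketch; the database path is an assumption about how the load script names its output:

```python
import duckdb

# Open the database built by stage_1.load read-only and peek at its contents.
con = duckdb.connect("data/database.duckdb", read_only=True)  # path is an assumption
con.sql("SHOW TABLES").show()
con.sql("SELECT * FROM pluto LIMIT 5").show()
```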
Use a data pipeline to build Farm to Market
- Download all source data from their Open Data pages by navigating to `Export` -> `Download file`. Depending on the dataset, either download a CSV or a GeoJSON file.
- Rename the downloaded files to remove the dates and move them to `data/source_data/`
- Run a python script to load all source data into a database:

  ```bash
  python -m stage_2.load
  ```
- Run python scripts to transform and export data (a sketch of what such a step can look like follows this list):

  ```bash
  python -m stage_2.transform
  python -m stage_2.export
  ```
- Use the Jupyter notebook `stage_2/analyze.ipynb` to review and analyze the dataset
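As referenced above, here is a hypothetical sketch of a transform-then-export step in the spirit of `stage_2.transform` and `stage_2.export`; the database path, column names, output path, and pairing rule (same borough) are assumptions, not the repo's actual logic:

```python
import duckdb

con = duckdb.connect("data/database.duckdb")

# Transform: pair each market with each farm in the same borough.
# Column names and the pairing rule are assumptions, not the repo's logic.
con.sql("""
    CREATE OR REPLACE TABLE farm_to_market AS
    SELECT m.name AS market, f.name AS farm
    FROM markets AS m
    JOIN farms AS f USING (borough)
""")

# Export: write the result to a CSV with a header row.
con.sql("COPY farm_to_market TO 'data/farm_to_market.csv' (HEADER)")
```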
Use dbt to build Farm to Market
- Download and rename source data as described in Stage 2
- Navigate to the `stage_3` folder:

  ```bash
  cd stage_3
  ```
- Install dbt packages and confirm setup:

  ```bash
  dbt deps
  dbt debug
  ```
- Test source data:

  ```bash
  dbt test --select "source:*"
  ```
- Build the dataset:

  ```bash
  dbt build
  ```
- Generate and view data documentation:

  ```bash
  dbt docs generate
  dbt docs serve
  ```
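The same dbt commands can also be invoked from Python via dbt's programmatic entry point; a minimal sketch, assuming dbt-core 1.5 or later and that it is run from the `stage_3` folder:

```python
from dbt.cli.main import dbtRunner

# Run the same dbt steps as above programmatically (dbt-core 1.5+).
dbt = dbtRunner()
for args in (["deps"], ["debug"], ["test", "--select", "source:*"], ["build"]):
    result = dbt.invoke(args)
    if not result.success:
        raise SystemExit(f"dbt {' '.join(args)} failed")
```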