Epoch, the natural evolution of Solana infrastructure. Turn data into dollars without being a Solana expert.
Epoch provides every program, every account, everything decoded, at every slot.
All accounts are decoded into human-readable form. Existing solutions tend to provide raw, encoded data. This forces the end user to understand Solana programming to decode the data themselves 🤮. Or worse, they can't find the account data they need from certain lesser-known programs.
Existing historical data solutions all missed the mark.
Epoch exists to provide a better solution to accessing historical data.
Go data mine some alpha. This is what Epoch was built for.
cargo install cargo-make
Set up a managed Timescale database here if you don't have one. Once you have a managed database, you can create tables and define configurations. The following command resets and recreates the database, creates migrations, copies them to the proper directory, and loads them into the database. Note: this requires having a managed Timescale database.
cargo make setup-timescale
Epoch reads the backfill.yaml config file, which defines the snapshots to pull from Google Cloud Storage (GCS), the GCS bucket to pull from, the number of workers/threads used to parallelize tasks, and the Solana programs to filter for.
Example:
# Maximum number of cores to use for processing.
max_workers: 4
# Only these programs will be uploaded to Timescale.
programs:
- dRiftyHA39MWEi3m9aunc5MzRF1JYuBsbn6VPcn33UH # drift v2
- jupoNjAxXgZ4rjzxzPMP4oxduvQsQtZzyknqvzYNrNu # Jupiter limit
- PERPHjGBqRHArX4DySjwM6UJHiR3sWAatqfdBS2qQJu # Jupiter perp
- JUP6LkbZbjS1jKKwapdHNy74zcZ3tLUZoi5QNyVTaV4 # Jupiter swap
- PhoeNiXZ8ByJGLkxNfZRnkUfjvmuYqLR89jjFHGqdXY # Phoenix
# Earliest date to backfill
# Format is yyyy-mm-dd
start_date: 2024-02-05
# Most recent date to backfill
# Format is yyyy-mm-dd
end_date: 2024-05-08
gcs_bucket: mainnet-beta-ledger-us-ny5
# Optional local file with GCS snapshot object data. If not provided, the snapshot list will be fetched from GCS
gcs_local_file: gcs_snapshots.json
# Optional if you want to dump accounts to a Timescale database
timescale_db: postgres://[username]:[password]@[host]:[port]/[db]?sslmode=require
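For reference, the fields above map naturally onto a config struct. The sketch below is illustrative and stdlib-only (the real client presumably deserializes the YAML with a crate such as serde_yaml); the `BackfillConfig` struct and hand-rolled parser are assumptions, not Epoch's actual code:

```rust
// Illustrative sketch: the shape of backfill.yaml as a Rust struct, with a
// minimal hand-rolled parser for the `key: value` and `programs:` list
// syntax shown above. Hypothetical stand-in, not the real Epoch parser.
#[derive(Debug, Default)]
struct BackfillConfig {
    max_workers: usize,
    programs: Vec<String>,
    start_date: String,
    end_date: String,
    gcs_bucket: String,
}

fn parse(yaml: &str) -> BackfillConfig {
    let mut cfg = BackfillConfig::default();
    let mut in_programs = false;
    for raw in yaml.lines() {
        // Strip `# ...` comments, then surrounding whitespace.
        let t = raw.split('#').next().unwrap_or("").trim();
        if t.is_empty() {
            continue;
        }
        // List items under `programs:` look like `- <program-id>`.
        if let Some(id) = t.strip_prefix("- ") {
            if in_programs {
                cfg.programs.push(id.trim().to_string());
            }
            continue;
        }
        in_programs = false;
        if let Some((key, val)) = t.split_once(':') {
            let val = val.trim().to_string();
            match key.trim() {
                "max_workers" => cfg.max_workers = val.parse().unwrap_or(1),
                "programs" => in_programs = true,
                "start_date" => cfg.start_date = val,
                "end_date" => cfg.end_date = val,
                "gcs_bucket" => cfg.gcs_bucket = val,
                _ => {} // ignore keys this sketch doesn't model
            }
        }
    }
    cfg
}

fn main() {
    let cfg = parse(
        "max_workers: 4\nprograms:\n  - dRiftyHA39MWEi3m9aunc5MzRF1JYuBsbn6VPcn33UH # drift v2\nstart_date: 2024-02-05\nend_date: 2024-05-08\ngcs_bucket: mainnet-beta-ledger-us-ny5\n",
    );
    assert_eq!(cfg.max_workers, 4);
    assert_eq!(cfg.programs.len(), 1);
    println!("{cfg:?}");
}
```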
Run this command to start uploading decoded accounts from BigTable to Timescale:
cargo make backfill
This requires the config yaml file epoch.yaml to be set. It needs the local path to the Google service account JSON; if you don't have this service account file, you can get one from the Google Cloud Console. It also requires a managed Redis database, which you can easily set up on the Redis website. The yaml file will look something like this:
gcs_sa_key: epoch_sa_key.json
redis_username: default
redis_password: password
redis_host: redis-17359.c284.us-east1-2.gce.cloud.redislabs.com
redis_port: 17359
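The four Redis fields above are presumably combined into a standard redis:// connection URL at runtime. A small sketch under that assumption (the exact URL form used by Epoch is not confirmed from its source):

```rust
// Illustrative: build a standard redis:// connection URL from the
// epoch.yaml fields. The real client may assemble this differently.
fn redis_url(user: &str, pass: &str, host: &str, port: u16) -> String {
    format!("redis://{user}:{pass}@{host}:{port}")
}

fn main() {
    let url = redis_url(
        "default",
        "password",
        "redis-17359.c284.us-east1-2.gce.cloud.redislabs.com",
        17359,
    );
    assert!(url.starts_with("redis://default:"));
    println!("{url}");
}
```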
After running the backfill client to dump accounts into the Timescale database, you may run the Epoch server:
cargo make epoch
The idls directory contains bindings auto-generated by anchor-gen using an IDL.
Since fetching an IDL can sometimes fail, you must fetch the IDL manually via the Anchor CLI:
anchor idl fetch -o <where-to-store.json> <program-id> --provider.cluster mainnet
The JSON location should be the root of the new program crate in idls, such as idls/drift/idl.json.
In the new crate's lib.rs file, such as idls/drift/src/lib.rs, you must define the enum of the program's accounts. These are imported from anchor-gen using use crate::typedefs::*;.
See idls/drift/src/lib.rs for an example of using the decode_account! macro to automatically handle deserialization. You need a list of the accounts in the enum, which are sourced from the anchor-gen macro.
Some IDEs don't provide macro expansion, so you can use the #[test] in the lib.rs file to print the IDL accounts.
Next, see idls/drift/src/lib.rs for an example of defining PROGRAM_NAME and PROGRAM_ID. Do this for the new program crate.
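Pieced together, a new program crate ends up exposing roughly the following surface. This is a hand-written sketch: the enum variants, discriminator bytes, and decode() body are hypothetical stand-ins for what anchor-gen and the decode_account! macro actually generate; only the PROGRAM_NAME/PROGRAM_ID constants and the AccountType enum mirror the conventions described above:

```rust
// Sketch of the surface a new program crate exposes. Variant names,
// discriminator bytes, and decode() are illustrative stand-ins for
// what anchor-gen and the decode_account! macro generate.
pub const PROGRAM_NAME: &str = "drift";
pub const PROGRAM_ID: &str = "dRiftyHA39MWEi3m9aunc5MzRF1JYuBsbn6VPcn33UH";

#[derive(Debug, PartialEq)]
pub enum AccountType {
    User,
    PerpMarket,
    SpotMarket,
}

// Anchor accounts are tagged by an 8-byte discriminator; real values are
// sha256("account:<Name>")[..8]. The bytes below are fakes for illustration.
pub fn decode(data: &[u8]) -> Option<AccountType> {
    let disc: [u8; 8] = data.get(..8)?.try_into().ok()?;
    match disc {
        [1, 0, 0, 0, 0, 0, 0, 0] => Some(AccountType::User),
        [2, 0, 0, 0, 0, 0, 0, 0] => Some(AccountType::PerpMarket),
        [3, 0, 0, 0, 0, 0, 0, 0] => Some(AccountType::SpotMarket),
        _ => None,
    }
}

fn main() {
    let mut data = vec![1u8, 0, 0, 0, 0, 0, 0, 0];
    data.extend_from_slice(&[0u8; 32]); // fake account body
    assert_eq!(decode(&data), Some(AccountType::User));
    assert_eq!(decode(&[0xff; 8]), None);
    println!("decoded: {:?}", decode(&data));
}
```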
Next, modify decoder/src/program_decoder.rs to support the new program. First, modify Decoder with the new enum of accounts. For example, in idls/drift/src/lib.rs there is the AccountType enum.
pub enum Decoder {
Drift(drift_cpi::AccountType),
}
Next, add the program ID of the new program you defined to pub static ref PROGRAMS in decoder/src/program_decoder.rs.
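Conceptually, PROGRAMS plus the Decoder enum let the decoder route each account by its owner program: look the owner up in the registry and dispatch to the crate that can decode it. A hypothetical stdlib-only sketch of that dispatch (the real registry is a pub static ref map; everything besides the Drift variant name and program ID is illustrative):

```rust
// Hypothetical sketch of the dispatch that PROGRAMS + Decoder enable:
// map an account's owner program ID to the crate that can decode it.
// The Drift variant mirrors the Decoder example above; in the real code
// it carries a payload, e.g. Drift(drift_cpi::AccountType).
#[derive(Debug, PartialEq, Clone, Copy)]
enum Decoder {
    Drift,
}

fn decoder_for(owner: &str) -> Option<Decoder> {
    // Stand-in for the `pub static ref PROGRAMS` registry.
    const PROGRAMS: &[(&str, Decoder)] = &[
        ("dRiftyHA39MWEi3m9aunc5MzRF1JYuBsbn6VPcn33UH", Decoder::Drift),
    ];
    PROGRAMS
        .iter()
        .find(|(id, _)| *id == owner)
        .map(|(_, d)| *d)
}

fn main() {
    let drift = decoder_for("dRiftyHA39MWEi3m9aunc5MzRF1JYuBsbn6VPcn33UH");
    assert_eq!(drift, Some(Decoder::Drift));
    assert_eq!(decoder_for("unknown-program"), None);
    println!("{drift:?}");
}
```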
Install Rust buildpack: link
Install SSH key buildpack: link
Follow the directions on the link to generate the SSH key.
Add the BUILDPACK_SSH_KEY to the Heroku config vars (env).
If it doesn't exist, create a file in the root of this repo called .cargo/config.toml. Add the following to the file and replace "Token you created on Shipyard" with your token.
# For more config options:
# https://doc.rust-lang.org/cargo/reference/config.html
[registries.epoch]
index = "ssh://git@ssh.shipyard.rs/epoch/crate-index.git"
token = "Token you created on Shipyard"
[registry]
global-credential-providers = ["cargo:token"]
[net.ssh]
known-hosts = ["ssh.shipyard.rs ecdsa-sha2-nistp256 someKeyHere"]
[net]
git-fetch-with-cli = true