The format is based on Keep a Changelog, and this project adheres to Semantic Versioning. See MAINTAINERS.md for instructions to keep up to date.
- Update to `firehose-core` version v1.6.5.
- Added support for multiple endpoints through the `--endpoints` flag for `firesol fetch rpc`.
- Fixed fetching "huge" blocks: no longer panics; increased the default buffer size and added the `--reader-node-line-buffer-size` flag to control it.
- Bumped `firehose-core` to version v1.5.6.
- Bumped `firehose-core` to version v1.5.1.
- Fixed `firesol fetch rpc` to report fetch error(s) at least once every 30s.
- Bumped the `firehose-core` binary bundled in the `firehose-solana` Docker image.
- Bumped `firehose-core` to the latest version.
- Fixed some edge cases around skipped blocks.
- Fixed `tools check merged-blocks` default range when `-r <range>` is not provided to now be `[0, +∞]` (was previously `[HEAD, +∞]`).
- Fixed `tools check merged-blocks` to be able to run without a block range provided.
- Added API key based authentication to `tools firehose-client` and `tools firehose-single-block-client`; specify the value through the environment variable `FIREHOSE_API_KEY` (you can use the flag `--api-key-env-var` to read the key from a variable named something other than `FIREHOSE_API_KEY`). A usage sketch follows this list.
- Fixed `tools check merged-blocks` examples using a block range (the range should be specified as `[<start>]?:[<end>]`).
- Added `--substreams-tier2-max-concurrent-requests` to limit the number of concurrent requests to the tier2 Substreams service.
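A minimal invocation sketch for the API-key flow above; the endpoint, key value and block range are placeholders, and the exact positional arguments of `tools firehose-client` may differ in your version (check `--help`):

```bash
# Placeholders: endpoint, key value and block range are examples only.
export FIREHOSE_API_KEY="my-api-key"
firesol tools firehose-client your-firehose-endpoint:443 100000000:100000010

# Or read the key from a differently named environment variable:
export MY_FIREHOSE_KEY="my-api-key"
firesol tools firehose-client --api-key-env-var=MY_FIREHOSE_KEY your-firehose-endpoint:443 100000000:100000010
```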
Important

- All firehose processes have been removed from this binary. You will need to run them from the firecore binary.
  - The previous `firesol start ...` command becomes `firecore start ...`
- New poller: firesol no longer gets blocks from a Bigtable instance, it fetches the blocks using RPC calls.
  - Run `firecore start reader` with `--reader-node-path=/path/to/firesol` and `--reader-node-arguments=fetch rpc <https://your.solana.rpc/path> <start-block>` (see the sketch after this list).
- The new block format requires either fetching all the merged blocks again or converting them.
  - Convert old blocks by running: `ACCEPT_SOLANA_LEGACY_BLOCK_FORMAT=true firesol upgrade-merged-blocks <source-store> <dest-store> <start-num:stop-num>`
- Upgrading your deployment will require a "stop the world" upgrade, where you start the new binaries, pointing to the new blocks, without any contact with the previous blocks or components.
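A sketch of the new RPC poller setup described above, assuming `firesol` is installed at `/usr/local/bin/firesol`; the RPC endpoint and start block are placeholders:

```bash
# Placeholder path, endpoint and start block; adapt to your deployment.
firecore start reader \
  --reader-node-path=/usr/local/bin/firesol \
  --reader-node-arguments="fetch rpc https://your.solana.rpc/path 250000000"
```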
- All the `firesol start ...` commands have been removed. Use the firecore binary to run the reader, merger, relayer, firehose and substreams services.
- All the existing `firesol tools` commands
- Added `fetch rpc <endpoint> <start_block>` command: it fetches and prints the blocks in protobuf format, to be consumed by the `firecore start reader` command.
- Added `upgrade-merged-blocks` command to perform the upgrade on previous Solana merged-blocks.
- Bumped firecore version to v1.2.0.
- Fixed Substreams scheduler sometimes taking a long time to spawn more than a single worker.
- Bumped firehose-core to `v0.2.2`
- Firehose logs now include auth information (userID, keyID, realIP) along with blocks + egress bytes sent.
- Filesource validation of block order in merged-blocks now works correctly when using indexes in firehose `Blocks` queries (regression in v0.2.6)
- Flag `substreams-rpc-endpoints` removed, it was present by mistake and actually unused.
- Flag `substreams-rpc-cache-store-url` removed, it was present by mistake and actually unused.
- Flag `substreams-rpc-cache-chunk-size` removed, it was present by mistake and actually unused.
- Bumped firehose-core to `v0.2.1`
Important
We have had reports of older versions of this software creating corrupted merged-blocks files (with duplicate or extra out-of-bound blocks). This release adds additional validation of merged-blocks to prevent serving duplicate blocks from the firehose or substreams service. This may cause a service outage if you have produced those blocks or downloaded them from another party who was affected by this bug.
- Find the affected files by running the following command (it can be run multiple times in parallel, over smaller ranges):
  `tools check merged-blocks-batch <merged-blocks-store> <start> <stop>`
- If you see any affected range, produce fixed merged-blocks files with the following command, on each range:
  `tools fix-bloated-merged-blocks <merged-blocks-store> <output-store> <start>:<stop>`
- Copy the merged-blocks files created in the output store over to your merged-blocks store, replacing the corrupted files.
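Putting these steps together, a hedged end-to-end sketch; the store URLs and block ranges are placeholders, and the commands are shown with the `firesol` prefix:

```bash
# 1. Scan for corrupted bundles (can be parallelized over smaller ranges).
firesol tools check merged-blocks-batch gs://my-bucket/merged-blocks 0 10000000

# 2. For each affected range reported, produce fixed bundles into a separate store.
firesol tools fix-bloated-merged-blocks gs://my-bucket/merged-blocks gs://my-bucket/fixed-blocks 5000000:5100000

# 3. Copy the fixed files from the output store over the corrupted ones in your merged-blocks store.
```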
- Added `tools check merged-blocks-batch` to simplify checking block continuity in batched mode, optionally writing results to a store.
- Added the command `tools fix-bloated-merged-blocks` to try to fix merged-blocks that contain duplicates and blocks outside of their range.
- Commands `tools print one-block` and `tools print merged-blocks` now support a new `--output-format jsonl` format. Bytes data can now be printed as hex or base58 strings instead of base64 strings (see the sketch after this list).
- Added a retry loop for the merger when walking one-block files. In some use cases the bundle reader was sending files too fast and the merger was not waiting to accumulate enough files to start bundling merged files.
- Bumped `bstream`: the `filesource` will now refuse to read blocks from a merged-blocks file if they are not ordered or if there are any duplicates.
- The command `tools download-from-firehose` will now fail if it is being served blocks "out of order", to prevent any corrupted merged-blocks from being created.
- The command `tools print merged-blocks` did not print the whole merged-blocks file and its arguments were confusing: it now parses `<start_block>` as a uint64.
- The command `tools unmerge-blocks` did not cover the whole given range, now fixed.
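A minimal sketch of the new JSONL output, assuming a local merged-blocks store; the store path and block number are placeholders:

```bash
# Print a merged-blocks bundle as JSON lines; <start_block> is parsed as a uint64.
firesol tools print merged-blocks ./firehose-data/storage/merged-blocks 200000000 --output-format jsonl
```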
- Breaking: The `reader-node-log-to-zap` flag has been removed. This was a source of confusion for operators reporting Firehose bugs, because the node's logs were merged with the normal Firehose logs and it was not obvious which was which. Now, logs from the node are printed to `stdout` unformatted, exactly as emitted by the chain. Filtering of such logs must now be delegated to the node's implementation and depends on the node's binary; refer to it to determine how you can tweak the logging verbosity emitted by the node.
- Bumped firehose-core to `v0.1.11` with a regression fix for when a substreams has a start block in the reversible segment
- Bumped firehose-core to `v0.1.10` with new metrics `substreams_active_requests` and `substreams_counter`
Important
The Substreams service exposed from this version will send progress messages that cannot be decoded by substreams clients prior to v1.1.12. Streaming of the actual data will not be affected. Clients will need to be upgraded to properly decode the new progress messages.
- Bumped firehose-core to `0.1.8`
- Bumped substreams to `v1.1.12` to support the new progress message format. Progression now relates to stages instead of modules. You can get stage information using the `substreams info` command starting at version `v1.1.12`.
- Migrated to firehose-core
- Changed reader-node block encoding from hex to base64
- Removed `--substreams-tier1-request-stats` and `--substreams-tier2-request-stats` (substreams request stats are now always sent to clients)
- More tolerant retries/timeouts on filesource (prevents "context deadline exceeded" errors)
This release candidate is a hotfix for an issue introduced in v0.2.1 and affecting `production-mode`, where the stream would hang and some `map_outputs` would not be produced over some specific ranges of the chains.
The `substreams` scheduler has been improved to reduce the number of required jobs for parallel processing. This affects backprocessing (preparing the states of modules up to a "start-block") and forward processing (preparing the states and the outputs to speed up streaming in `production-mode`).

Jobs on `tier2` workers are now divided into "stages", each stage generating the partial states for all the modules that have the same dependencies. A `substreams` that has a single store won't be affected, but one that has 3 top-level stores, which used to run 3 jobs for every segment, now runs only a single job per segment to get all the states ready.
The `substreams` server now accepts the `X-Sf-Substreams-Cache-Tag` header to select which Substreams state store URL should be used by the request. When performing a Substreams request, the server will optionally pick the state store based on the header. This enables consumers to stay on the same cache version when the operators need to bump the data version (a reason for this could be a bug in the Substreams software that caused some cached data to be corrupted or invalid).

To benefit from this, operators that currently have a version in their state store URL should move the version part from `--substreams-state-store-url` to the new flag `--substreams-state-store-default-tag`. For example, if today you have this in your config:
start:
...
flags:
substreams-state-store-url: /<some>/<path>/v3
You should convert to:
start:
...
flags:
substreams-state-store-url: /<some>/<path>
substreams-state-store-default-tag: v3
- The apps `substreams-tier1` and `substreams-tier2` should be upgraded concurrently. Some calls will fail while versions are misaligned.
- Remove the flag `--substreams-tier1-subrequests-size` from your config, it is not used anymore.
- Authentication plugin `trust` can now specify an exclusive list of `allowed` headers (all lowercase), ex: `trust://?allowed=x-sf-user-id,x-sf-api-key-id,x-real-ip,x-sf-substreams-cache-tag` (see the config sketch after this list).
- The `tier2` app no longer uses the `common-auth-plugin`; `trust` will always be used, so that `tier1` can pass down its headers (ex: `X-Sf-Substreams-Cache-Tag`).
- Added support for continuous authentication via the grpc auth plugin (allowing cutoff triggered by the auth system).
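As an illustration only, a config sketch for `tier1` using the `trust` plugin with an explicit allow-list. The `common-auth-plugin` flag name and the config layout are assumptions borrowed from firehose-core and should be verified against your version:

```yaml
start:
  flags:
    # Assumed flag name; only the listed (lowercase) headers are trusted and passed down to tier2.
    common-auth-plugin: "trust://?allowed=x-sf-user-id,x-sf-api-key-id,x-real-ip,x-sf-substreams-cache-tag"
```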
- Bumped substreams from v1.0.x to v1.1.1: the RPC protocol changed from `sf.substreams.v1.Stream/Blocks` to `sf.substreams.rpc.v2.Stream/Blocks`. See the release notes from github.com/streamingfast/substreams for details.
- Added support for "requester pays" buckets on Google Storage in URLs, ex: `gs://my-bucket/path?project=my-project-id`
- Config values `substreams-stores-save-interval` and `substreams-output-cache-save-interval` have been merged into a single value to avoid potential bugs that would arise when the two values differ. The new configuration value is called `substreams-cache-save-interval`.
  - To migrate, remove usage of `substreams-stores-save-interval: <number>` and `substreams-output-cache-save-interval: <number>` if defined in your config file and replace them with `substreams-cache-save-interval: <number>`. If you had two different values before, pick the bigger of the two as the new value. We are currently setting it to `1000` for Ethereum Mainnet.
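Following the pattern of the state-store example above, a before/after sketch of this migration; the old values shown are placeholders and `1000` is the Ethereum Mainnet value mentioned above:

```yaml
# Before
start:
  flags:
    substreams-stores-save-interval: 100
    substreams-output-cache-save-interval: 1000

# After (keep the bigger of the two previous values)
start:
  flags:
    substreams-cache-save-interval: 1000
```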
- Updated to Substreams `v0.2.0`, please refer to the release page for further info about Substreams changes.
- Updated `--substreams-output-cache-save-interval` default value to 1000.
- Added `tools bt blocks --bt-project=<bigtable_project> --bt-instance=<bigtable_instance> <start-block-num> <stop-block-num>` command to scan bigtable rows
  - Added `--firehose-enabled` flag to output FIRE log
- Added `reader-bt` application to sync directly from bigtable
  - Added `--reader-bt-readiness-max-latency` flag
  - Added `--reader-bt-data-dir` flag
  - Added `--reader-bt-debug-firehose-logs` flag
  - Added `--reader-bt-log-to-zap` flag
  - Added `--reader-bt-shutdown-delay` flag
  - Added `--reader-bt-working-dir` flag
  - Added `--reader-bt-blocks-chan-capacity` flag
  - Added `--reader-bt-one-block-suffix` flag
  - Added `--reader-bt-startup-delay` flag
  - Added `--reader-bt-grpc-listen-addr` flag
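A hedged config sketch enabling the new `reader-bt` app with a couple of the flags listed above; the `args`/`flags` layout and all values are assumptions to adapt to your setup:

```yaml
start:
  args:
    - reader-bt
  flags:
    # Placeholder values.
    reader-bt-grpc-listen-addr: ":9000"
    reader-bt-readiness-max-latency: 30s
    reader-bt-working-dir: /data/reader-bt
```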
- Removed `dgraphql` application and all associated flags
- Removed `tools reproc`, replaced with `tools bt blocks`
- The repo name has changed from `sf-solana` to `firehose-solana`
- The binary name has changed from `sfsol` to `firesol` (aligned with https://firehose.streamingfast.io/references/naming-conventions)
- All config environment variables that started with `SFSOL_` now start with `FIRESOL_`
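For example (hypothetical variable name derived from the `config-file` flag, and placeholder paths; verify the exact names your deployment uses):

```bash
# Before: export SFSOL_CONFIG_FILE=/etc/sf-solana/sf.yaml
export FIRESOL_CONFIG_FILE=/etc/firehose-solana/firehose.yaml
```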
- Changed `config-file` default from `./sf.yaml` to `""`, preventing failure without this flag.
- Renamed `common-blocks-store-url` to `common-merged-blocks-store-url`
- Renamed `common-oneblock-store-url` to `common-one-block-store-url`
- Renamed `common-blockstream-addr` to `common-live-blocks-addr`
- Renamed `common-protocol-first-streamable-block` to `common-first-streamable-block`
- Added `common-forked-blocks-store-url`
Renamed the
mindreader
application toreader
- Renamed
mindreaderPlugin
toreaderPlugin
- Renamed
-
Renamed all the
mindreader-node-*
flags toreader-node-*
- Renamed
mindreader-node-start-block-num
toreader-node-start-block-num
- Renamed
mindreader-node-stop-block-num
toreader-node-stop-block-num
- Renamed
mindreader-node-blocks-chan-capacity
toreader-node-blocks-chan-capacity
- Renamed
mindreader-node-wait-upload-complete-on-shutdown
toreader-node-wait-upload-complete-on-shutdown
- Renamed
mindreader-node-oneblock-suffix
toreader-node-one-block-suffix
- Renamed
mindreader-node-deepmind-batch-files-path
toreader-node-firehose-batch-files-path
- Renamed
mindreader-node-purge-account-data
toreader-node-purge-account-data
- Added
reader-node-arguments
- Removed
reader-node-merge-and-store-directly
- Removed
reader-node-block-data-working-dir
- Removed
reader-node-extra-arguments
- Removed
reader-node-merge-threshold-block-age
- Renamed
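A small before/after config sketch for these flag renames; the flags shown and their values are placeholders:

```yaml
# Before
mindreader-node-start-block-num: 100000000
mindreader-node-blocks-chan-capacity: 100

# After
reader-node-start-block-num: 100000000
reader-node-blocks-chan-capacity: 100
```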
- Renamed all instances of `deepmind` to `firehose`
  - Renamed `path-to-deepmind-batch-files` to `path-to-firehose-batch-files`
  - Renamed `mindreader-node-deepmind-batch-files-path` to `reader-node-firehose-batch-files-path`
- Renamed `debug-deepmind` to `debug-firehose-logs`
  - Renamed `mindreader-node-debug-deep-mind` to `reader-node-debug-firehose-logs`
- Renamed `dmlog` to `firelog`
  - Flag `<path_to_dmlog.dmlog>` changed to `<path_to_firelog.firelog>`
- Renamed `DMLOG` prefix to `FIRE`
- Added/Removed `merger-*` flags
  - Removed `merger-writers-leeway`
  - Removed `merger-one-block-deletion-threads`
  - Removed `merger-max-one-block-operations-batch-size`
  - Added `merger-time-between-store-pruning`
  - Added `merger-prune-forked-blocks-after`
  - Added `merger-stop-block`
- Added/Removed `firehose-*` flags
  - Removed `firehose-blocks-store-urls`
  - Removed `firehose-real-time-tolerance`
- Removed `relayer-*` flags
  - Removed `relayer-source-request-burst`
  - Removed `relayer-merger-addr`
  - Removed `relayer-buffer-size`
  - Removed `relayer-min-start-offset`