# new_york_pink_weimaraner
Welcome to Qri 0.9.9! We've got a lot of internal changes that speed up the work you do on Qri every day, as well as a bunch of new features and key bug fixes!
## Config Overhaul
We've taken a hard look at our config to make sure not only that every field is being used, but also that the config can serve us well as we progress down our roadmap and create future features.
To that effect, we removed many unused fields, switched to using multiaddresses for all network configuration (replacing any port fields), formalized the hierarchy of different configuration sources, and added a new `Filesystems` field.

This new `Filesystems` field allows users to choose the supported filesystems on which they want Qri to store their data. For example, in the future, when we support s3 storage, the `Filesystems` field is where the user can go to configure the path to the storage, whether it's the default save location, etc. More immediately, however, exposing the `Filesystems` configuration also allows folks to point to a non-default location for their IPFS storage. This leads directly to our next change: moving the default IPFS repo location.
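As a quick sketch of how this looks in practice, the new field can be read back with `qri config get` (the backend names mentioned in the comment are illustrative; check your own config for the exact entries):

```
# Print the Filesystems section of the Qri config. It lists the storage
# backends (e.g. ipfs, local, http) Qri is configured to use; the ipfs
# entry is where a non-default IPFS repo location would be configured.
qri config get filesystems
```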
## Migration
One big change we've been working on behind the scenes is upgrading our IPFS dependency. IPFS recently released version 0.6.0, and that's the version we are now relying on! This was a very important upgrade, as users relying on older versions of IPFS (below 0.5.0) would not be seen by the larger IPFS network.
We also wanted to move the Qri-associated IPFS node off the default `IPFS_PATH` and into a location that advertises a bit more clearly that this is the IPFS node Qri relies on. And since our new configuration allows users to explicitly set the path to the IPFS repo, anyone who prefers to keep their repo at the old location can still do so. By default, the IPFS node that Qri relies on will now live on the `QRI_PATH`.

Migrations can be rough, so we took the time to ensure that upgrading to the newest version of IPFS, adjusting the Qri config, and moving the IPFS repo onto the `QRI_PATH` would go off without a hitch!
## JSON Schema
Qri now relies on a newer draft (draft2019_09) of JSON Schema. Our golang implementation of `jsonschema` now has better support for the spec, equal or better performance depending on the keyword, and the option to extend it with your own keywords.
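In Qri, this is the library that checks dataset bodies against the JSON Schema stored in a dataset's structure component, so the upgrade is exercised by ordinary validation (the dataset reference below is illustrative):

```
# Validate a dataset's body against its structure's JSON Schema.
qri validate me/example_dataset
```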
## Removed Update
This was a real kill-your-darlings situation! The functionality of `update` - scheduling and running `qri save` commands - can be done more reliably using other schedulers/task managers. Our upcoming roadmap expands many Qri features, and we realized we couldn't justify the planning/engineering time to keep `update` up to our standards. Rather than letting this feature weigh us down, we decided it would be better to remove `update` and instead point users to docs on how to schedule updates. One day we may revisit updates as a plugin or wrapper.
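For anyone who needs scheduled saves today, any standard scheduler will do. A minimal sketch using cron, where the schedule, body file, and dataset reference are all illustrative:

```
# Re-save a dataset every morning at 6:00 with a fresh body file - roughly
# the job `qri update` used to schedule for you.
0 6 * * * /usr/local/bin/qri save --body /data/body.csv me/example_dataset
```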
## Merkledag error
Some users were getting `Merkledag not found` errors when trying to add some popular datasets from Qri Cloud (for example, `nyc-transit-data/turnstile_daily_counts_2019`). This should no longer be the case!
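If this bit you before, adding the dataset again should now succeed, e.g.:

```
# Previously this could fail with a "Merkledag not found" error.
qri add nyc-transit-data/turnstile_daily_counts_2019
```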
## Specific Command Line Features/Changes
- `qri save`
  - use the `--drop` flag to remove a component from that dataset version
- `qri log`
  - use the `--local` flag to only get the logs of the dataset that are stored locally
  - use the `--pull` flag to only get the logs of the dataset from the network (explicitly not local)
  - use the `--remote` flag to specify a remote off of which you want to grab that dataset's log. This defaults to the qri cloud registry
- `qri get`
  - use the `--zip` flag to export a zip of the dataset
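A few of the new flags in action (the dataset references, component name, and remote name are illustrative):

```
# Drop the transform component from the next saved version
qri save --drop transform me/example_dataset

# Show only locally-stored history, or only history pulled from the network
qri log --local me/example_dataset
qri log --pull me/example_dataset

# Ask a specific remote for the dataset's log (defaults to the qri cloud registry)
qri log --remote registry me/example_dataset

# Export a dataset version as a zip archive
qri get --zip me/example_dataset
```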
## Specific API Features/Changes
- `/fetch`
  - removed, use `/history?pull=true`
- `/history`
  - use the `local=true` param to only get the logs of a dataset that are stored locally
  - use the `pull=true` param to get the logs of a dataset from the network only (explicitly not local)
  - use the `remote=REMOTE_NAME` param to specify a remote off of which you want to grab that dataset's log. This defaults to the qri cloud registry
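The same options over HTTP, assuming a local node with the API listening on its default port (2503); the dataset reference and remote name are illustrative:

```
# Logs stored locally only
curl "http://localhost:2503/history/me/example_dataset?local=true"

# Logs from the network only (explicitly not local) - the replacement for /fetch
curl "http://localhost:2503/history/me/example_dataset?pull=true"

# Logs from a specific remote (defaults to the qri cloud registry)
curl "http://localhost:2503/history/me/example_dataset?remote=registry"
```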
## BREAKING CHANGES
- the `update` command and all of its api endpoints are removed
- removed the `/fetch` endpoint - use `/history` instead. The `local=true` param ensures that the logbook data is only what you have locally in your logbook