Changed:
- The project name was changed to ``shed-streaming``.
- The package name was changed to ``shed_streaming``.
- Updated the simple_parallel tests to use asyncio and the latest dask utilities.
- Pinned the bluesky version to 1.6.4.
Fixed:
- Fixed the issue where ``import shed`` stops (but does not crash) Python
Changed:
- Future releases will be published to the nsls-ii-forge channel on conda
Fixed:
- Fixed syntax errors that arose in the environment with the latest versions of the dependencies
- Added missing dependencies to the requirements
Fixed:
- Use ``xonsh.lib.subprocess`` rather than ``subprocess`` so it does the right thing on Windows
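  A minimal sketch of the drop-in usage (hedged: the command shown is illustrative; ``xonsh.lib.subprocess`` exposes stdlib-style helpers such as ``run`` and ``check_call``)::

      # xonsh.lib.subprocess mirrors the stdlib subprocess helpers but routes
      # through xonsh so shell/path handling also behaves on Windows.
      from xonsh.lib import subprocess

      # Same call shape as the stdlib equivalent; the command is illustrative.
      subprocess.run(["git", "status"], check=True)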
Fixed:
- ``original_time`` metadata now goes to ``original_start_time`` so file saving works properly
Added:
- ``shed.simple.simple_to_event_stream_new_api``, which has a new API for describing the data that will be converted to the event model
Changed:
- ``AlignEventStreams`` clears its buffers on stop documents, not start documents
Added:
- Notebook examples
- ``shed.simple.LastCache``, a stream node type which caches the last event and emits it under its own descriptor (a usage sketch follows this list)
- Merkle-tree-like hashing capability for checking if two pipelines are the same
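  A minimal usage sketch (assumptions: ``LastCache`` is a rapidz ``Stream`` node that takes its upstream positionally and consumes ``(name, document)`` pairs)::

      from rapidz import Stream
      from shed.simple import LastCache

      source = Stream()
      # Cache the most recent event and re-emit it under its own descriptor
      # when the run finishes.
      last = LastCache(source)
      last.sink(print)

      # Feed (name, document) pairs into the pipeline, e.g.:
      # source.emit(('start', start_doc)); source.emit(('event', event_doc)); ...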
Changed:
- ``replay.rebuild_node`` creates ``Placeholder`` streams so that we can build the pipeline properly.
- If no ``stream_name`` is provided for ``SimpleFromEventStream`` then a name is produced from the ``data_address``.
- ``translation.FromEventStream`` now captures environment information via an iterable of function calls; it defaults to capturing the conda environment.
- ``translation.ToEventModel`` issues a ``RuntimeError`` if the pipeline contains a lambda function, as lambdas are not capturable (a sketch follows this list).
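  Because lambdas cannot be captured for provenance, pipelines feeding the translation nodes should use named functions. A hedged sketch (argument order for the translation nodes is assumed from shed's examples)::

      from rapidz import Stream
      from shed.translation import FromEventStream, ToEventStream

      source = Stream()
      img = FromEventStream('event', ('data', 'img'), source, principle=True)

      def double(x):  # a named function is capturable for provenance
          return 2 * x

      out = ToEventStream(img.map(double), ('doubled',))
      out.sink(print)

      # Using img.map(lambda x: 2 * x) instead would raise RuntimeError,
      # since lambda functions cannot be captured.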
Removed:
- The ``export`` kwarg of ``replay.replay``. We no longer automatically add data to a databroker.
Fixed:
- Check for hashability before checking whether a node is in the graph
Added:
- Start documents now have their own ``scan_id``
Changed:
- Don't validate start documents
- ``SimpleFromEventModel`` nodes give themselves descriptive names if none is given
Fixed:
- ``AlignEventStream`` properly drops buffers when start documents come in on the same buffer
Changed:
- ``AlignEventStream`` now supports stream-specific joins (see the sketch below)
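  For orientation, a minimal sketch of joining two document streams with ``AlignEventStreams`` (only the general positional-upstream usage is shown; the keyword that restricts the join to specific event streams is not spelled out here)::

      from rapidz import Stream
      from shed.simple import AlignEventStreams

      a = Stream()
      b = Stream()
      # Pair up events from the two document streams and re-emit them as a
      # single combined (name, document) stream.
      merged = AlignEventStreams(a, b)
      merged.sink(print)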
Fixed:
- Flush ``AlignEventStream``, which makes certain that even in the event of an error we have fresh ``AlignEventStream`` buffers
Added:
- Descriptor ``data_keys`` metadata can be added
Changed:
- ``AlignEventStreams`` keeps track of the first map's start uid (for file saving)
Fixed:
- Protect Parallel nodes behind a ``try``/``except``
Added:
- ``examples/best_effort.py`` as an example of using shed-streaming with ``BestEffortCallback``.
- ``ToEventStream`` can now take no ``data_keys``. This assumes that the incoming data will be a dict and that the keys of the dict are the data keys (a sketch follows this list).
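  A minimal sketch of the no-``data_keys`` form, assuming the serial nodes in ``shed.simple``; the node upstream of ``SimpleToEventStream`` emits plain dicts whose keys become the data keys::

      from rapidz import Stream
      from shed.simple import SimpleFromEventStream, SimpleToEventStream

      source = Stream()
      det = SimpleFromEventStream('event', ('data', 'det'), source, principle=True)

      def summarize(value):
          # Return a dict; its keys ('mean', 'max') become the data keys.
          return {'mean': float(value.mean()), 'max': float(value.max())}

      out = SimpleToEventStream(det.map(summarize))  # no data_keys passed
      out.sink(print)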
Changed:
- Get ``ChainDB`` from xonsh
- Use common ``DocGen`` for document generation
- Exchanged the ``zstreamz`` dep for ``rapidz``
Removed:
- Removed ``event_streams`` and ``databroker_utils`` and associated tests
Fixed:
- Run package-level imports so that ``ToEventStream`` and others default to serial
- A ``SimpleToEventStream`` node can now have multiple principle nodes
- The same header can be run into a pipeline multiple times
- Multiple principle nodes are now properly handled
- ``AlignEventStreams`` now works with resource and datum docs
- File writers work properly
Fixed:
- ``FromEventStream`` now looks for ``uid`` or ``datum_id``
Added:
- Type mapping for ``ToEventStream``
- Convert ``ChainDB`` to dict
Fixed:
- Carve out an if statement for numpy ufuncs to get the numpy module
Changed:
- Readme now reflects the current design architecture
- Provenance example is now in the examples folder
- ``hash_or_uid`` is now ``_hash_or_uid``
Deprecated:
- ``EventStream`` nodes in favor of ``streamz`` nodes and ``translation`` nodes
Fixed:
- ``ToEventStream`` now tracks the time that data was received
- ``ToEventStream`` is now executed before the rest of the graph so graph times match the execution time
Added:
- conda forge activity to rever
- template back to news
Added:
- Nodes for Databroker integration
- Setup Rever changelog
Fixed:
- Fixed the tests after the move to ophyd.sim from bluesky.examples