archiveexport is a Python module to read files generated by the EPICS Channel Archiver.
It is available as a self-contained Conda package and can be installed via:

```
conda install -c paulscherrerinstitute archiveexport
```

An example Jupyter Notebook can be found in examples/Example.ipynb.
The module exposes two functions: archiveexport.list() to extract channel names and archiveexport.get_data() to extract the data.
```python
import archiveexport as ae
import datetime

# find channels by pattern
index_file = "/mnt/archiver/index"
channels = ae.list(index_name=index_file, pattern="ARIDI.*BPM1")

# calculate start and end time
now = datetime.datetime.now()
end = now - datetime.timedelta(days=1)
start = end - datetime.timedelta(minutes=1)

# query data
data = ae.get_data(index_name=index_file, channels=channels, start=start, end=end,
                   get_units=True, get_status=True, get_info=True)
print(data)
```
archiveexport.list(index_name, pattern="")

Searches the index file for channel names.

Parameters:
- index_name ... file path of the index file as a string.
- pattern (optional) ... regular expression to match channel names.

Returns: A list of channel names.
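As a rough illustration of how a pattern selects channel names, the sketch below applies the same style of regular expression with Python's re module. The channel names are invented for illustration, and the search semantics are an assumption — the module's exact matching rules may differ:

```python
import re

# Hypothetical channel names; real names come from the archiver index file.
names = ["ARIDI-BPM1:X", "ARIDI-BPM2:X", "OTHER-BPM1:Y"]

# The same style of pattern that would be passed to ae.list(pattern=...).
pattern = re.compile("ARIDI.*BPM1")
matches = [n for n in names if pattern.search(n)]
print(matches)  # ['ARIDI-BPM1:X']
```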
archiveexport.get_data(index_name, channels=[], start=..., end=..., get_units=False, get_status=False, get_info=False)

Queries archived data.

Parameters:
- index_name ... file path of the index file as a string.
- channels ... a list of channel names, e.g. ["CHANNEL1", "CHANNEL2", ...]
- start (optional) ... query data from this point in time (Python datetime object).
- end (optional) ... query data until this point in time (Python datetime object).
- get_units (optional) ... also return units for numeric data (boolean).
- get_status (optional) ... also return status and severity information (boolean).
- get_info (optional) ... also return limit information for numeric data, or the enum string for enums (boolean).

Return value: Returns the following structure:
```
{
    "CHANNEL1":
    [
        {"value": value, "seconds": seconds, "nanoseconds": nanoseconds, "unit": "unit", ...},
        {"value": value, "seconds": seconds, "nanoseconds": nanoseconds, ...},
        ...
    ],
    "CHANNEL2": [...],
    ...
}
```
Returns a dictionary where every key is a channel name and every value is a list of dictionaries describing the stored data points. One data point before start and one after end are also returned if they exist. The keys and values available in each value dictionary are the following:
"value" ... For scalar values, depending on the EPICS data type, the following Python type is returned:

| EPICS DBR type | Python type |
|---|---|
| DBR_TIME_STRING | PyUnicodeObject |
| DBR_TIME_CHAR | PyLongObject |
| DBR_TIME_ENUM | PyLongObject |
| DBR_TIME_SHORT | PyLongObject |
| DBR_TIME_LONG | PyLongObject |
| DBR_TIME_FLOAT | PyFloatObject |
| DBR_TIME_DOUBLE | PyFloatObject |

In case of an array, a list of values is returned with the same conversion as above, with one exception: DBR_TIME_CHAR is converted to PyByteArrayObject.
- "seconds" ... number of seconds passed since the EPICS epoch, January 1, 1990 (PyLongObject).
- "nanoseconds" ... number of nanoseconds passed since the last full second (PyLongObject).
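Since these timestamps count from the EPICS epoch rather than the Unix epoch, a small helper is handy for turning them into Python datetime objects. This is a sketch — the helper name is ours, and datetime only resolves microseconds, so the last three nanosecond digits are truncated:

```python
import datetime

# EPICS epoch: 1990-01-01 00:00:00 UTC (20 years after the Unix epoch).
EPICS_EPOCH = datetime.datetime(1990, 1, 1, tzinfo=datetime.timezone.utc)

def epics_to_datetime(seconds, nanoseconds):
    # datetime resolves microseconds only, so nanoseconds are truncated.
    return EPICS_EPOCH + datetime.timedelta(
        seconds=seconds, microseconds=nanoseconds // 1000)

print(epics_to_datetime(0, 0))  # 1990-01-01 00:00:00+00:00
```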
Optional keys:

With get_units=True:
- "unit" ... engineering unit (PyUnicodeObject). Included only if get_units=True and the value is of a numeric type (CHAR, SHORT, LONG, FLOAT or DOUBLE).

With get_status=True:
- "status" ... numeric representation of the status (PyLongObject).
- "status_string" ... string representation of the status (PyUnicodeObject), or None if the string representation does not exist.
- "severity" ... numeric representation of the severity (PyLongObject).
- "severity_string" ... string representation of the severity (PyUnicodeObject), or None if the string representation does not exist.
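When severity_string is None, the numeric code can still be interpreted: the standard alarm severities are fixed by EPICS base. A minimal lookup such as the following sketch (the helper name is ours) can serve as a fallback:

```python
# Standard EPICS alarm severity codes, as defined by EPICS base.
SEVERITIES = {0: "NO_ALARM", 1: "MINOR", 2: "MAJOR", 3: "INVALID"}

def severity_name(severity):
    # Returns None for codes without a standard name
    # (e.g. archiver-specific severities).
    return SEVERITIES.get(severity)

print(severity_name(2))  # MAJOR
```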
With get_info=True, if the value is numeric, limits and precision are added to the dictionary. Limits are stored as C float by the Channel Archiver, independent of the actual value type, which can lead to discrepancies between the actual limit value and the stored one.
- "low_alarm" ... (PyFloatObject)
- "low_warn" ... (PyFloatObject)
- "high_warn" ... (PyFloatObject)
- "high_alarm" ... (PyFloatObject)
- "disp_low" ... (PyFloatObject)
- "disp_high" ... (PyFloatObject)
- "precision" ... (PyLongObject)
With get_info=True, if the value is an (EPICS) enumeration, the enum string is added to the dictionary.
- "enum_string" ... (PyUnicodeObject), or None if the string representation does not exist.
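To show how the returned structure might be consumed, here is a sketch over a hand-written sample dictionary in the shape described above. The values are invented for illustration, not real archiver output:

```python
# Hand-written sample in the documented shape (hypothetical values).
data = {
    "CHANNEL1": [
        {"value": 1.23, "seconds": 100, "nanoseconds": 500, "unit": "mA"},
        {"value": 1.25, "seconds": 101, "nanoseconds": 0, "unit": "mA"},
    ],
}

# Collect plain value lists per channel, e.g. for plotting.
values = {name: [point["value"] for point in points]
          for name, points in data.items()}
print(values["CHANNEL1"])  # [1.23, 1.25]
```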
The package can be installed via:

```
conda install -c paulscherrerinstitute archiveexport
```
This package relies on the following packages:
- python
- epics-base
As epics-base is only available on the paulscherrerinstitute channel, append it to your Conda channels list:

```
conda config --append channels paulscherrerinstitute
```
Sources in the folders Tools, Storage and manual are copied from the original ChannelArchiver installation used at PSI. For more information about the ChannelArchiver itself, please read the manual.

Sources in PyExport provide the Python bindings that are the core of this package.
Build the package from the root folder of the project:

```
conda build conda-recipe
```

To install the package for testing purposes, you can use:

```
conda install --use-local archiveexport
```

A test example is provided in the examples directory.
If you are not on a machine where the files are directly available, SSHFS can be used to remote-mount the archiver directories over an SSH connection:

```
sshfs <username>@gfalc:/net/slsmcarch-navf/export/archiver-data-mc/archive_ST /mnt/archiver/ -o "StrictHostKeyChecking=no" -o UserKnownHostsFile=/dev/null
```
It is possible to build the module library using system libraries on Linux. Change into the system-build folder and correct the paths in the Makefile. Then run:

```
make
```

which builds the library and copies it to the system-build folder. The file test.py can be used for testing.

To clean the build files, run:

```
make clean
```

from the system-build directory.