v4 to v5 Port #48
Commits:
- Removed Fragment Separation From TAData & TPData Classes
- Remove C Python Binding Import
- Also included a link to the channel maps table.
- Added and using an option to use integer x-ticks in PDFPlotter histograms.
- Log minor ticks were not being colored correctly; previously invisible.
- Users should define which maker/plugin to use in a config JSON.
- The last TP is sometimes missing, so skip it. Removed a pandas deprecation warning.
- This is more correct per PEP 484.
- Indexing the reader class now returns the indexed content of the primary data.
- Follows the flow change that happened in `trigger`.

Hi Alex, thanks for the work on this. Here is my first round of review:
LGTM.
One suggestion made, but it's not related to v4->v5 movement so fine to ignore.
```diff
 char* payload = static_cast<char*>(malloc(payload_size));

 size_t offset(0);
 for ( const auto& ta : ta_buffer ) {
-  triggeralgs::write_overlay(ta, payload+offset);
+  triggeralgs::write_overlay(ta, static_cast<void*>(payload+offset));
   offset += triggeralgs::get_overlay_nbytes(ta);
 }

-daqdataformats::Fragment ta_frag(payload, payload_size);
+daqdataformats::Fragment ta_frag(static_cast<void*>(payload), payload_size);

-free(payload);
+free(static_cast<void*>(payload));
```
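A minimal, stand-alone illustration (not project code) of why the casts are needed: standard C++ forbids pointer arithmetic on `void*`, so the buffer has to be viewed through a `char*` to apply byte offsets.

```cpp
#include <cstdlib>

int main() {
  void* vp = std::malloc(16);
  // vp + 4;                          // ill-formed: no arithmetic on void*
  char* cp = static_cast<char*>(vp);  // view the buffer as bytes
  char* shifted = cp + 4;             // well-defined char* arithmetic
  (void)shifted;
  std::free(vp);
  return 0;
}
```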
Both here and in EmulationUnit, I would totally avoid using `malloc` and `free`. This is low-level C that should only be used if it really cannot be avoided. In this case we could simply do:

```cpp
std::vector<char> payload(payload_size);
size_t offset(0);
for ( const auto& ta : ta_buffer ) {
  triggeralgs::write_overlay(ta, static_cast<void*>(payload.data()+offset));
  offset += triggeralgs::get_overlay_nbytes(ta);
}
daqdataformats::Fragment ta_frag(static_cast<void*>(payload.data()), payload_size);
```

And no `free` would be needed, since the vector would go out of scope anyway (but you could clear it if you must).

BUT this is not related to the v4-to-v5 movement, so I'm fine if that's done at a later date in a separate PR.
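One side note on the suggestion above: `std::vector<char>(n)` value-initializes, i.e. zeroes, its bytes, while `malloc` leaves them indeterminate; this is harmless here assuming `write_overlay` fills the whole buffer. A minimal illustration:

```cpp
#include <cstdlib>
#include <vector>

int main() {
  char* raw = static_cast<char*>(std::malloc(8));  // contents indeterminate
  std::vector<char> buf(8);                        // all bytes zeroed
  bool zeroed = (buf[0] == 0);                     // guaranteed true
  (void)zeroed;
  std::free(raw);
  return 0;
}
```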
```diff
@@ -80,18 +80,19 @@ EmulationUnit<T, U, V>::emulate_vector(const std::vector<T>& inputs) {
   if (payload_size == 0)
     return nullptr;

-  void* payload = malloc(payload_size);
+  // Awkward type conversion to avoid compiler complaints on void* arithmetic.
+  char* payload = static_cast<char*>(malloc(payload_size));
```
Ditto re. malloc & free; could use a vector here instead.
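If the payload has to outlive the function (`emulate_vector` returns `nullptr` on an empty payload, which suggests it hands the buffer to its caller), `std::unique_ptr<char[]>` keeps the RAII benefit while still transferring ownership. A sketch under that assumption — `make_payload` is a hypothetical name and the overlay-writing step is elided:

```cpp
#include <cstddef>
#include <memory>

// Hypothetical shape of an RAII-friendly emulate_vector-style function;
// the real signature and filling logic live in EmulationUnit.
std::unique_ptr<char[]> make_payload(std::size_t payload_size) {
  if (payload_size == 0)
    return nullptr;  // same early-out as the original code
  auto payload = std::make_unique<char[]>(payload_size);
  // ... write overlays into payload.get() + offset, as in the malloc version ...
  return payload;   // caller owns the buffer; no free() anywhere
}
```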
```diff
@@ -212,6 +212,7 @@ int main(int argc, char const *argv[])
   if (latencies) {
     std::filesystem::path output_path(output_file_path);
     ta_emulator->set_timing_file((output_path.parent_path() / ("ta_timings_" + output_path.stem().string() + ".csv")).string());
+    ta_emulator->write_csv_header("TP Time Start,TP ADC Integral,Time Diffs,Is Last TP In TA");
```
I like this, although I wish it was done in a separate PR, not in a simple "v4 to v5". I'm now having to do PR review commit-by-commit :(
I understand the out-of-scopeness, but it was part of the requested changes. I tried to keep the commits well separated for what they changed.
```diff
@@ -7,6 +7,7 @@
 import trgdataformats

 import numpy as np
+from numpy.typing import NDArray
```
Had no idea numpy has its own typing module. Nice.
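For illustration only (this function is not from the repo), a minimal use of `numpy.typing.NDArray` to parametrize array dtypes in a signature:

```python
import numpy as np
from numpy.typing import NDArray

def peak_indices(adc: NDArray[np.uint16]) -> NDArray[np.intp]:
    """Index of the per-channel ADC maximum; NDArray lets static checkers see the dtype."""
    return np.argmax(adc, axis=0)
```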
Only removes it from the build; the .cxx is kept for reference.
Thanks @aeoranday for the improvements, it's already better.
Here are more comments after another round of review:
- Running `process_tpstream` over the same tpstream hd5 file, same configuration (barring the change in algo names, but both prescale):

V4:
```
Average TA Time Process (19702 TAs): 7410 ns.
Average TC Time Process (197 TCs): 7758 ns.
-- Finished TSL 118490:0
```

V5:
```
Average TA Time Process (19695 TAs): 18111 ns.
Average TC Time Process (197 TCs): 21811 ns.
-- Finished TSL 118490:0
```

=> different number of TAs, and much, much slower?
(The output file is also very slightly smaller - guessing that's the 'missing' TAs?)
- Running `process_tpstream` with `--latencies`:
  -- same issue as above with missing TAs
  -- the 'Time Diffs' parameters differ for the same TA(s); a comparison sketch follows the excerpts below:

V4:
```
108017674586074187,16506,7949,1 <- ??
108017674565021675,26052,147,0
```

V5:
```
108017674586074187,16506,8115,0
108017674565021675,26052,953,0
```
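A minimal sketch of how the two latency CSVs could be diffed, assuming the header written by `write_csv_header` above; the file names are placeholders:

```python
import pandas as pd

# Placeholder file names; the real files are named ta_timings_<stem>.csv.
v4 = pd.read_csv("ta_timings_v4.csv")
v5 = pd.read_csv("ta_timings_v5.csv")

# Assume (TP Time Start, TP ADC Integral) identifies a TP across both runs.
merged = v4.merge(v5, on=["TP Time Start", "TP ADC Integral"], suffixes=("_v4", "_v5"))
merged["Time Diffs Delta"] = merged["Time Diffs_v5"] - merged["Time Diffs_v4"]
print(merged[["TP Time Start", "Time Diffs_v4", "Time Diffs_v5", "Time Diffs Delta"]])
```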
- `convert_tplatencies`:
  What exactly is the point of `convert_tplatencies`? It does not seem that there are any scripts available that would use its output. And if what it produces is the desired result, why don't we get it straight from tp_process with `--latencies`? Is that in-between output used for anything else other than the input to this converter?
- `tc_dump`:
  When using it on the output file made with `process_tpstream` (the input to that processing step is the same in both versions): even though the # of TCs made IS the same, the TCs are different, as the TC stats differ:
V4:
```
time_candidate
Reference Statistics:
    Total # TCs = 10,
    Mean = 2.13,
    Std = 1.37,
    Min = 0.0,
    Max = 4.220520496368408.
Anomalies:
    # of >3 Sigma TCs = 0,
    # of >2 Sigma TCs = 0.

time_end
Reference Statistics:
    Total # TCs = 10,
    Mean = 2.13,
    Std = 1.37,
    Min = 0.0,
    Max = 4.220515489578247.
Anomalies:
    # of >3 Sigma TCs = 0,
    # of >2 Sigma TCs = 0.

time_start
Reference Statistics:
    Total # TCs = 10,
    Mean = 2.13,
    Std = 1.37,
    Min = 0.0,
    Max = 4.220520496368408.
Anomalies:
    # of >3 Sigma TCs = 0,
    # of >2 Sigma TCs = 0.
```

V5:
```
time_candidate
Reference Statistics:
    Total # TCs = 10,
    Mean = 2.02,
    Std = 1.46,
    Min = 0.0,
    Max = 4.214995384216309.
Anomalies:
    # of >3 Sigma TCs = 0,
    # of >2 Sigma TCs = 0.

time_end
Reference Statistics:
    Total # TCs = 10,
    Mean = 2.02,
    Std = 1.46,
    Min = 0.0,
    Max = 4.214994430541992.
Anomalies:
    # of >3 Sigma TCs = 0,
    # of >2 Sigma TCs = 0.

time_start
Reference Statistics:
    Total # TCs = 10,
    Mean = 2.02,
    Std = 1.46,
    Min = 0.0,
    Max = 4.214995384216309.
Anomalies:
    # of >3 Sigma TCs = 0,
    # of >2 Sigma TCs = 0.
```
  (Using this dump on the same raw hdf5, without the process_ step, the output is identical.)
- `py-trgtools`:
  -- The very last command block (Plotting -> Example) is missing the command to load data: `tpr.read_all_fragments()` (or something similar that takes a subset). Without that (a corrected session is sketched below):
```
>>> print(tpr['adc_peak'])
[]
```
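A sketch of the corrected session; the reader class and input file name are assumptions (only `read_all_fragments()` is taken from the comment above):

```python
# Hypothetical session: TPReader and the file name are placeholders.
from trgtools import TPReader

tpr = TPReader("tpstream.hdf5")  # placeholder input file
tpr.read_all_fragments()         # load fragment data before plotting
print(tpr["adc_peak"])           # now non-empty
```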
  -- The output example.pdf seems to be a broken / corrupted (empty) file (tried 3 different inputs).
  -- There is an error message when exiting the interactive Python session; one common mitigation is sketched after the traceback:
```
Exception ignored in: <function PDFPlotter.__del__ at 0x7fb183882680>
Traceback (most recent call last):
  File "/nfs/home/mrigan/dune-daq/v5.2-241010/install/trgtools/lib64/python/trgtools/plot/PDFPlotter.py", line 91, in __del__
  File "/nfs/home/mrigan/dune-daq/v5.2-241010/.venv/lib/python3.10/site-packages/matplotlib/backends/backend_pdf.py", line 2721, in close
  File "/nfs/home/mrigan/dune-daq/v5.2-241010/.venv/lib/python3.10/site-packages/matplotlib/backends/backend_pdf.py", line 828, in finalize
  File "/nfs/home/mrigan/dune-daq/v5.2-241010/.venv/lib/python3.10/site-packages/matplotlib/backends/backend_pdf.py", line 974, in writeFonts
  File "/nfs/home/mrigan/dune-daq/v5.2-241010/.venv/lib/python3.10/site-packages/matplotlib/backends/backend_pdf.py", line 1451, in embedTTF
  File "/nfs/home/mrigan/dune-daq/v5.2-241010/.venv/lib/python3.10/site-packages/matplotlib/backends/backend_pdf.py", line 1182, in embedTTFType3
ImportError: sys.meta_path is None, Python is likely shutting down
```
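For reference, a common way to avoid this class of shutdown error is to close the underlying `PdfPages` deterministically rather than in `__del__`; `PDFPlotterLike` below is a hypothetical minimal analogue, not the trgtools class (note `PdfPages` is itself a context manager, so the wrapper simply delegates):

```python
from matplotlib.backends.backend_pdf import PdfPages

class PDFPlotterLike:
    """Hypothetical minimal analogue of a PDF plotter with deterministic cleanup."""

    def __init__(self, path: str) -> None:
        self._pdf = PdfPages(path)

    def close(self) -> None:
        # Idempotent close; called explicitly (or via 'with'), never from __del__,
        # so it cannot run while the interpreter is shutting down.
        if self._pdf is not None:
            self._pdf.close()
            self._pdf = None

    def __enter__(self) -> "PDFPlotterLike":
        return self

    def __exit__(self, *exc) -> None:
        self.close()
```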
I understand that a lot (if not all) of these are not caused by your port. I'm just listing my findings here as I was going through testing.
Please, for those points that you are not fixing for this PR, make issues.
This simply ports this repository from v4 to v5. Most of the dependencies for this port were data format related, and those have already been ported. Testing was done by viewing `t*_dump.py` for various files from both v4 and v5 and confirming the results are sensible. This also features a slight change to docstrings and an enhancement that was missed in v4.

The usage of `process_tpstream` is consistent and was previously addressed when the naming change happened in `triggeralgs` for v5.