I have some initial doubts about the correctness of Task 8. I can't pinpoint yet where the problem is, and it could be in a different implementation, but I suspect you are much faster at verifying than I am.
I am running the Coffea implementation, the Groot implementation, and one that I am writing in SQL on the first 100001 entries of the original file. The resulting histogram in Coffea has 719 (input) entries, while the other two have 8644. While this is only a weak indication, since the latter two implementations agree with each other, I suspect that the former is the one that is incorrect. To make the argument slightly stronger: I hadn't looked at the Groot implementation while writing the SQL-based one.
The best way to show that the problem is indeed in the Coffea implementation is to find an individual event that is falsely not qualified as having a trilepton. I'll try that next and report back.
OK, this is a bit embarrassing -- I found it after three minutes. The difference comes from two additional tests in the Coffea implementation: `(diele.mass > 50) & (diele.mass < 160)` and `(electrons.pt > 10) & (np.abs(electrons.eta) < 2.5)` (plus the corresponding tests for muons). If I comment them out, I get the same number of entries as the other two implementations.
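For reference, the two extra selections can be sketched with plain NumPy (this is not the actual Coffea code; the arrays here are made-up illustrative values, and the field names `pt`, `eta`, and the dielectron mass merely mirror the expressions quoted above):

```python
import numpy as np

# Hypothetical per-electron kinematics (illustrative values only).
pt = np.array([5.0, 12.0, 30.0, 45.0])    # GeV
eta = np.array([0.1, 2.7, -1.2, 0.4])

# Extra per-lepton cut present in the Coffea implementation:
# keep electrons with pt > 10 GeV and |eta| < 2.5.
lepton_mask = (pt > 10) & (np.abs(eta) < 2.5)
print(lepton_mask)   # [False False  True  True]

# Hypothetical dielectron invariant masses (GeV), illustrative only.
diele_mass = np.array([40.0, 91.2, 170.0])

# Extra dilepton cut: 50 GeV < mass < 160 GeV.
pair_mask = (diele_mass > 50) & (diele_mass < 160)
print(pair_mask)     # [False  True False]
```

Commenting out the application of these two masks is what brings the Coffea entry count in line with the other two implementations.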
This does raise the question of whether this inconsistency should be resolved. Should the benchmark, in fact, impose these conditions? And if not, why does the Coffea implementation apply them?