[v2] [Feature] Support for customLabels in the evaluation output images #208

Open
Udayraj123 opened this issue Sep 15, 2024 · 0 comments
Labels
enhancement (New feature or request), hacktoberfest, help wanted (Extra attention is needed), intermediate

Comments


Udayraj123 commented Sep 15, 2024

Note: this enhancement should be built on top of the dev branch, as the feature is not yet available in the master branch. It is part of OMRChecker v2.

Is your feature request related to a problem? Please describe.
We now support outputting an evaluation summary on the marked bubble images (reference).

However, this is currently limited to individual field labels and ignores the customLabels provided in template.json. One common use of customLabels is a two-digit integer question, where both marked bubbles should be shown as green if the multi-column answer was correct.
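For context, a two-digit integer question might declare a custom label in template.json along these lines (a hypothetical snippet; the field names are illustrative, not from an actual sample):

```json
{
  "customLabels": {
    "q5": ["q5_digit1", "q5_digit2"]
  }
}
```

Here evaluation runs on the concatenated value of "q5", so both digit columns should be colored based on the single verdict for "q5".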

Describe the solution you'd like

  • Find a way to visually display the correct/incorrect verdicts without ambiguity for customLabels, where multiple fields are concatenated before evaluation runs.
  • Document all possible verdict cases when using custom labels.
  • Add a test sample containing these different cases (any existing sample that contains customLabels can also be reused).

Describe alternatives you've considered
N/A

Additional context
Here's the todo comment corresponding to the above enhancement:

# linked_custom_labels = [custom_label for (custom_label, field_labels) in template.custom_labels.items() if field_label in field_labels]
# is_part_of_custom_label = len(linked_custom_labels) > 0
# TODO: replicate verdict: question_has_verdict = any(custom_label in questions_meta for custom_label in linked_custom_labels)
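The TODO above can be expanded into a small sketch. The names `custom_labels` and `questions_meta` follow the comment; the helpers below are illustrative, assuming `custom_labels` maps each custom label to its list of constituent field labels and `questions_meta` is keyed by evaluated question label. This is not the actual OMRChecker v2 implementation.

```python
def get_linked_custom_labels(custom_labels, field_label):
    """Return the custom labels whose concatenated fields include field_label."""
    return [
        custom_label
        for custom_label, field_labels in custom_labels.items()
        if field_label in field_labels
    ]


def field_has_custom_verdict(custom_labels, questions_meta, field_label):
    """A field inherits a verdict if any custom label it belongs to was evaluated."""
    linked_custom_labels = get_linked_custom_labels(custom_labels, field_label)
    return any(
        custom_label in questions_meta for custom_label in linked_custom_labels
    )


# Hypothetical data: "q5" concatenates two digit fields and received a verdict.
custom_labels = {"q5": ["q5_digit1", "q5_digit2"]}
questions_meta = {"q5": {"verdict": "correct"}}

print(field_has_custom_verdict(custom_labels, questions_meta, "q5_digit1"))  # True
```

With this mapping, both digit fields of "q5" would resolve to the same verdict, which is what the image-drawing code needs in order to color both bubbles consistently.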

Note: please share your queries and approaches on Discord for quicker discussions.
