
[Task Submission] Quantifier Understanding (quantifier_understanding) #18

Closed
wants to merge 3 commits into from

Conversation


@lerow lerow commented Aug 1, 2023

Quantifier Understanding

The task evaluates generalization in the understanding of quantifiers. It aims to measure
how well language models can capture the semantics of logical quantifiers in natural language.

Authors

Implementation

The task re-implements the evaluation function to compute accuracy scores.
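A minimal sketch of what such an evaluation function might look like, assuming predictions and gold labels are plain lists of label strings (the function signature and field layout are assumptions for illustration, not necessarily the submitted code):

```python
from typing import Dict, List


def evaluate_predictions(
    predictions: List[str], gold: List[str]
) -> Dict[str, float]:
    """Compute accuracy as the fraction of predictions that exactly match the gold labels."""
    if len(predictions) != len(gold):
        raise ValueError("predictions and gold must have the same length")
    if not gold:
        return {"accuracy": 0.0}
    correct = sum(pred == label for pred, label in zip(predictions, gold))
    return {"accuracy": correct / len(gold)}
```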

Usage

Given predictions and gold labels, evaluate_predictions() outputs the accuracy score.
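For example, with hypothetical model outputs and gold labels (the values below are illustrative only):

```python
predictions = ["true", "false", "true"]  # hypothetical model outputs
gold = ["true", "true", "true"]          # hypothetical gold labels

scores = evaluate_predictions(predictions, gold)
print(scores)  # e.g. {'accuracy': 0.6666666666666666}
```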

Checklist

  • I and my co-authors agree that, if this PR is merged, the code will be available under the same license as the genbench_cbt repository.
  • Prior to submitting, I have run the GenBench CBT test suite using the genbench-cli test-task tool.
  • I have read the description of what should be in the doc.md of my task, and have added the required arguments.
  • I have submitted or will submit an accompanying paper to the GenBench workshop.

@lerow lerow changed the title Quantifier understanding [Task Submission] Quantifier Understanding (quantifier_understanding) Aug 1, 2023
@lerow lerow changed the title [Task Submission] Quantifier Understanding (quantifier_understanding) [Task Submission] Quantifier Understanding (quantifier_understanding) Aug 1, 2023
@lerow lerow closed this Aug 1, 2023