
feat: add grading method support #33911

Merged: 1 commit merged into openedx:master on Apr 4, 2024

Conversation

Contributor

@BryanttV BryanttV commented Dec 11, 2023

Description

This PR adds a new field to the Problem settings for choosing a Grading Method. Currently, the only grading method is Last Score. With this change, the following grading methods are available (an illustrative sketch of how each is computed appears after the list):

  1. Last Score (Default): The most recent score is used for grading.
  2. First Score: The first score is used for grading.
  3. Highest Score: The highest score is used for grading.
  4. Average Score: The average of all scores is used for grading.
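
For illustration only, the four methods can be thought of as reductions over the learner's score history. The sketch below is not this PR's actual implementation; the function name and method keys are hypothetical:

    # Illustrative sketch only; names are hypothetical, not identifiers from this PR.
    def apply_grading_method(score_history, method):
        """Reduce a list of raw scores (oldest first) to the grade to record."""
        if not score_history:
            return None
        if method == "first_score":
            return score_history[0]
        if method == "highest_score":
            return max(score_history)
        if method == "average_score":
            return sum(score_history) / len(score_history)
        # "last_score" (default): keep the most recent score.
        return score_history[-1]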

Important

This feature is only available if you enable the following feature flag:

ENABLE_GRADING_METHOD_IN_PROBLEMS
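
For example, in a Python settings override (a sketch; the exact settings file depends on your deployment):

    FEATURES["ENABLE_GRADING_METHOD_IN_PROBLEMS"] = True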

Supporting information

These changes are part of the effort to implement Configurable grading method for problems with multiple attempts.

Dependencies

Note

This PR works with the legacy Problem Editor interface, but if you use the new interface with the Course Authoring MFE, you also need the changes in this PR.

Demo

grading-method-legacy-demo.mp4

Testing Instructions

  1. Add this setting to your environment to activate the feature: FEATURES["ENABLE_GRADING_METHOD_IN_PROBLEMS"] = True

  2. Go to Studio and create a problem component in a Unit, e.g. Advanced > Blank Problem. In the editor you can add this example problem:

    <problem>
      <multiplechoiceresponse>
        <div>Grading Method Sample</div>
        <choicegroup>
          <choice correct="true">
            <div>Correct</div>
          </choice>
          <choice correct="false">
            <div>Incorrect</div>
          </choice>
          <choice correct="false">
            <div>Incorrect</div>
          </choice>
          <choice correct="false">
            <div>Incorrect</div>
          </choice>
        </choicegroup>
      </multiplechoiceresponse>
    </problem>
  3. In the component settings, you should see a new field: Grading Method. This field is a dropdown with the grading methods listed above.

  4. Choose a Grading Method and save changes.

  5. From the LMS, answer the problem and check the resulting score with the different grading methods. You can also check it from the Progress section.

Rescoring

  1. From Studio, edit the problem's correct answer, or change the Grading Method.
  2. From the LMS, as an instructor, rescore the problem via STAFF DEBUG INFO > Rescore Learner's Submission.
  3. The final score should change depending on the new answer or the new Grading Method.

@openedx-webhooks openedx-webhooks added the open-source-contribution PR author is not from Axim or 2U label Dec 11, 2023

openedx-webhooks commented Dec 11, 2023

Thanks for the pull request, @BryanttV! Please note that it may take us up to several weeks or months to complete a review and merge your PR.

Feel free to add as much of the following information to the ticket as you can:

  • supporting documentation
  • Open edX discussion forum threads
  • timeline information ("this must be merged by XX date", and why that is)
  • partner information ("this is a course on edx.org")
  • any other information that can help Product understand the context for the PR

All technical communication about the code itself will be done via the GitHub pull request interface. As a reminder, our process documentation is here.

Please let us know once your PR is ready for our review and all tests are green.

@BryanttV BryanttV force-pushed the bav/add-grading-strategy branch 2 times, most recently from cf084d3 to d3416c9 on December 14, 2023 16:45
@BryanttV BryanttV force-pushed the bav/add-grading-strategy branch 2 times, most recently from 3afaeae to ae8a024 on January 9, 2024 19:26
@BryanttV BryanttV changed the title feat: add grading strategy support feat: add grading method support Feb 19, 2024
@BryanttV BryanttV force-pushed the bav/add-grading-strategy branch 2 times, most recently from cd8296b to c467117 on February 19, 2024 19:58
@BryanttV BryanttV marked this pull request as ready for review February 19, 2024 21:44
@BryanttV
Contributor Author

Hi @mariajgrimaldi, I have already made the last changes you requested. Thanks for the review!

If any other changes are needed in this PR, I will make them once I am back in a week.

This method is based on `get_grade_from_current_answers` but it is used when
`ENABLE_GRADING_METHOD_IN_PROBLEMS` feature flag is enabled.

This method optionally receives a `correct_map` to be used instead
Contributor

@bszabo bszabo Mar 27, 2024


There is a subtlety in this new feature that the current comment doesn't capture well. Namely, with the new feature we need to be able to recalculate a student's score for this problem every time the grading policy changes, as well as every time the student adds a new answer. It follows that the problem's state information needs to be rich enough to support both cases.

With the new state fields added for this feature, every time a student submits a new answer to the problem, a snapshot of the current answer key is retained. Put another way, the applicable grade on the day the problem is answered is grandfathered in, and a subsequent change to the answer key won't change that grade: the overall grade may change with changes to the answer key, or with changes to the grading policy, but the grade assigned to an answer submitted today should remain today's grade.

It's possible the above is not entirely correct, but that just speaks to the subtlety of this feature. I'm happy to work with you here to zero in on a description that's correct. The essential point is that the comment should make clear why keeping a correct map history is needed.
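
A rough sketch of the snapshot idea being discussed (hypothetical names; the PR's actual state fields and methods may differ):

    # Hypothetical illustration of the "correct map history" idea; these names
    # are not the PR's actual state fields.
    submission_history = []

    def record_submission(student_answers, correct_map, score):
        # Retain the answer key ("correct map") exactly as it was when this
        # answer was graded; a later change to the key won't alter this entry.
        submission_history.append(
            {"answers": student_answers, "correct_map": dict(correct_map), "score": score}
        )

    def scores_for_grading():
        # Rescoring under a grading method (last / first / highest / average)
        # reduces over these recorded scores instead of re-grading old answers
        # against the current answer key.
        return [entry["score"] for entry in submission_history]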

Member


I agree some context is missing here. As far as I understand this, your method's description is accurate. So we can use it as the docstring after some tweaks if needed. @BryanttV will let us know.

Also, get_grade_from_answers doesn't give much away; what about get_grade_from_answers_history? Changing the name to that, or to something else you folks might suggest, would make it clearer.

Contributor Author


To avoid code repetition, I opted to remove the get_grade_from_answers method and instead updated the existing get_grade_from_current_answers method to accept an optional correct_map parameter. Likewise, I added a conditional to check whether the feature is active; in that case, the correct_map and student_answers passed as arguments to the method are used instead of those of the instance.
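
A minimal sketch of the shape described above (simplified and hypothetical; the real method in capa_problem.py contains more logic, and the _grade helper below is invented for illustration):

    # Sketch only; simplified from the description above, not the actual code.
    def get_grade_from_current_answers(self, student_answers=None, correct_map=None):
        """Recalculate the grade, optionally against a supplied answer/correct-map snapshot."""
        if self.is_grading_method_enabled and correct_map is not None:
            # Use the per-submission snapshot passed in by the caller.
            return self._grade(student_answers, correct_map)
        # Default behaviour: grade against the instance's current state.
        return self._grade(self.student_answers, self.correct_map)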

Member


That looks better; thank you! However, we still need the comment explaining the nuances of the implementation; could you add that? Thanks!

Contributor Author


Done. I documented the behavior when the grading method is enabled in the docstring. I think this is clearer; any additional suggestions?

@@ -732,3 +733,101 @@ def test_get_question_answer(self):
        # Ensure that the answer is a string so that the dict returned from this
        # function can eventually be serialized to json without issues.
        assert isinstance(problem.get_question_answers()['1_solution_1'], str)

    def test_get_grade_from_answers_with_student_answers(self):
Contributor

@bszabo bszabo Mar 27, 2024


The essence of the changes in this PR is the introduction of new fields in the CAPA problem that allow for grading differently per a grading policy. These tests focus on which methods are called, and how. Don't we need tests that prove that the newly added fields are being used consistently with the new policies? i.e., wouldn't you expect to have at least one test per grading policy, to ensure that calculated grades, per the new fields, are correct?

Member


I hope I'm not missing anything, but I believe we are adding them here: https://github.com/openedx/edx-platform/pull/33911/files#diff-950d99d7bc471ac2295f68305d8fd00d15834c97283f6c7352b7d33421d6a24a

But we could separate them into grading policies if that's easier to find.

Contributor Author


These tests were added in the test_capa_block.py module. There is already a test for each grading method, and there is also a test for switching between grading methods. You can see them here.
Would this be enough? @mariajgrimaldi @bszabo
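
For illustration, a per-method test could look roughly like the sketch below (the helper and test names are hypothetical; the real tests in test_capa_block.py are structured differently):

    # Illustrative only; grade_with_method is a stand-in for the logic under test.
    import pytest

    def grade_with_method(scores, method):
        return {
            "first_score": scores[0],
            "highest_score": max(scores),
            "average_score": sum(scores) / len(scores),
            "last_score": scores[-1],
        }[method]

    @pytest.mark.parametrize("method, expected", [
        ("last_score", 0.55),
        ("first_score", 0.25),
        ("highest_score", 1.0),
        ("average_score", 0.6),
    ])
    def test_grade_per_method(method, expected):
        # One case per grading method, exercised against the same score history.
        assert grade_with_method([0.25, 1.0, 0.55], method) == pytest.approx(expected)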

Comment on lines 1825 to 1828
if self.enable_grading_method:
    self.set_score_with_grading_method(current_score)
else:
    self.set_score(current_score)
Member


We could apply the same suggestion here as in get_grade_from_answers:

def set_score(...):
    if self.enable_grading_method:
        self.set_score_with_grading_method(current_score)
        return
    ...

Contributor Author

@BryanttV BryanttV Apr 1, 2024


I couldn't make that change here, since the set_score method is used in different parts of the code, and changing it would affect their behavior. Instead, I reworked the call site to use the new get_score_with_grading_method method and removed the else, like this:

...

if self.enable_grading_method:
    current_score = self.get_score_with_grading_method(current_score)
self.set_score(current_score)

...

Comment on lines 2241 to 2245
if self.enable_grading_method:
    calculated_score = self.get_rescore_with_grading_method()
else:
    self.update_correctness()
    calculated_score = self.calculate_score()
Member


I suggest doing the same as in set_score and get_grade_from_answers here, but I think the implementation is fairly different.


Comment on lines 487 to 530
if not correct_map:
    correct_map = self.correct_map
Member


So it'd be something like:

  • The usual flow calls for get_grade_from_current_answers(), but now with the correct_map as argument
  • If the feature is enabled, then call get_grade_from_answers() after line 532
  • Inside get_grade_from_answers(), for each responder check for the extra condition added here
  • Then update the results with the correct map, and return

Am I correct?

Comment on lines 505 to 548
elif student_answers is not None:
    results = responder.evaluate_answers(student_answers, correct_map)
Member


That sounds better than duplicating these complex conditions. Thanks.

xmodule/capa/capa_problem.py (outdated comment thread, resolved)

@BryanttV BryanttV force-pushed the bav/add-grading-strategy branch 8 times, most recently from 233064c to f834916 on April 1, 2024 23:46
Contributor Author

BryanttV commented Apr 2, 2024

Hi @bszabo, I have addressed the comments. Could you review again? Thank you!

Member

@mariajgrimaldi mariajgrimaldi left a comment


Just a few comments left to address on my part. Thank you!

@@ -232,6 +239,15 @@ def __init__(self, problem_text, id, capa_system, capa_block, # pylint: disable
        if extract_tree:
            self.extracted_tree = self._extract_html(self.tree)

    @property
    def enable_grading_method(self) -> bool:
Member


What about:

Suggested change:
-    def enable_grading_method(self) -> bool:
+    def is_grading_method_enabled(self) -> bool:

Contributor Author


Done


Contributor

bszabo commented Apr 3, 2024

Thank you very much for the changes you made in response to my requests. The code as it presently sits is fine with me. I don't seem to be able to approve, but am fine with approval by one of you.

Member

@mariajgrimaldi mariajgrimaldi left a comment


@BryanttV: thank you for addressing all of our comments. Thank you, @bszabo, also for all the help!

LGTM

@mariajgrimaldi mariajgrimaldi merged commit 85620ec into openedx:master Apr 4, 2024
68 checks passed
@openedx-webhooks

@BryanttV 🎉 Your pull request was merged! Please take a moment to answer a two question survey so we can improve your experience in the future.

@edx-pipeline-bot

2U Release Notice: This PR has been deployed to the edX staging environment in preparation for a release to production.

@edx-pipeline-bot

2U Release Notice: This PR has been deployed to the edX production environment.

@edx-pipeline-bot

2U Release Notice: This PR has been deployed to the edX staging environment in preparation for a release to production.

@edx-pipeline-bot

2U Release Notice: This PR has been deployed to the edX production environment.

KyryloKireiev pushed a commit to raccoongang/edx-platform that referenced this pull request Apr 24, 2024
…penedx#33911)

A new field in the Problem settings for choosing a Grading Method. Currently, the only grading method is Last Score. From now on, when the feature flag is turned on, the new grading methods available for configuration in Studio are:
- Last Score (Default): The most recent score is used for grading.
- First Score: The first score is used for grading.
- Highest Score: The highest score is used for grading.
- Average Score: The average of all scores is used for grading.