feat: add grading method support #33911
Conversation
Thanks for the pull request, @BryanttV! Please note that it may take us up to several weeks or months to complete a review and merge your PR. Feel free to add as much of the following information to the ticket as you can:
All technical communication about the code itself will be done via the GitHub pull request interface. As a reminder, our process documentation is here. Please let us know once your PR is ready for our review and all tests are green.
Hi @mariajgrimaldi, I have already made the last changes you requested. Thanks for the review! If any other changes are needed in this PR, I will make them once I get back in 1 week.
xmodule/capa/capa_problem.py
Outdated

```python
    This method is based on `get_grade_from_current_answers` but it is used when
    `ENABLE_GRADING_METHOD_IN_PROBLEMS` feature flag is enabled.

    This method optionally receives a `correct_map` to be used instead
```
There is a subtlety in this new feature that the current comment doesn't capture well. Namely, with the new feature we need to be able to recalculate a student's score for this problem every time the grading policy changes, as well as every time the student adds a new answer. It follows that the problem's state information needs to be rich enough to support both cases.
With the new state fields added for this feature, every time a student submits a new answer to the problem, a snapshot of the current answer key is retained. Put another way, the applicable grade on the day the problem is answered is grandfathered in, and a subsequent change to the answer key won't change that grade: the overall grade may change with changes to the answer key, or with changes to the grading policy, but the grade assigned to an answer submitted today should remain today's grade.
It's possible the above is not entirely correct, but that just speaks to the subtlety of this feature. I'm happy to work with you here to zero in on a description that's correct. The essential point is that the comment should make clear why keeping a correct map history is needed.
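To make the idea concrete, here is a rough sketch of the per-submission snapshotting described above. The class, field, and method names (`CapaProblemState`, `correct_map_history`, `record_submission`) are illustrative, not necessarily those in this PR:

```python
# Illustrative sketch only; names are hypothetical, not the PR's actual fields.
class CapaProblemState:
    """Holds per-submission snapshots so past grades are 'grandfathered in'."""

    def __init__(self):
        self.student_answers_history = []  # one entry per submission
        self.correct_map_history = []      # how that submission was graded, at the time

    def record_submission(self, student_answers, correct_map):
        # Snapshot both the answers and how they were graded *today*, so a
        # later change to the answer key does not retroactively change this grade.
        self.student_answers_history.append(dict(student_answers))
        self.correct_map_history.append(dict(correct_map))
```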
I agree some context is missing here. As far as I understand this, your method's description is accurate. So we can use it as the docstring after some tweaks if needed. @BryanttV will let us know.

Also, `get_grade_from_answers` doesn't give much away; what about `get_grade_from_answers_history`? Changing the name to that, or something else you folks might suggest, would make it clearer.
To avoid code repetition I opted to remove the `get_grade_from_answers` method, and instead updated the current `get_grade_from_current_answers` method to accept an optional parameter that receives a `correct_map`. Additionally, I added a new conditional to check whether the feature is active, in which case the `correct_map` and `student_answers` passed as arguments to the method will be used, rather than those of the instance.
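For illustration, a minimal sketch of the signature change described here; the body is simplified, and the `is_grading_method_enabled` property name is assumed from later discussion in this thread:

```python
def get_grade_from_current_answers(self, student_answers=None, correct_map=None):
    """
    Simplified sketch of the described change, not the exact PR code.

    When the grading-method feature is enabled, the `student_answers` and
    `correct_map` passed as arguments (snapshots from the stored history)
    are used; otherwise the instance's own state is used.
    """
    if self.is_grading_method_enabled and correct_map is not None:
        answers, cmap = student_answers, correct_map
    else:
        answers, cmap = self.student_answers, self.correct_map
    # ... evaluate each responder against (answers, cmap) and return the results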
That looks better; thank you! However, we still need the comment explaining the nuances of the implementation; could you add that? Thanks!
Done. I added the behavior when the grading method is enabled to the docstring. I think this is clearer; any additional suggestions?
```python
@@ -732,3 +733,101 @@ def test_get_question_answer(self):
        # Ensure that the answer is a string so that the dict returned from this
        # function can eventually be serialized to json without issues.
        assert isinstance(problem.get_question_answers()['1_solution_1'], str)

    def test_get_grade_from_answers_with_student_answers(self):
```
The essence of the changes in this PR is the introduction of new fields in the CAPA problem that allow for grading differently per grading policy. These tests focus on which methods are called, and how. Don't we need tests that prove that the newly added fields are being used consistently with the new policies? That is, wouldn't you expect to have at least one test per grading policy, to ensure that calculated grades, per the new fields, are correct?
I hope I'm not missing anything, but I believe we are adding them here: https://github.com/openedx/edx-platform/pull/33911/files#diff-950d99d7bc471ac2295f68305d8fd00d15834c97283f6c7352b7d33421d6a24a
But we could separate them into grading policies if that's easier to find.
These tests were added in the `test_capa_block.py` module. There is already a test for each grading method, and there is also a test for switching between grading methods. You can see them from here.

Would this be enough? @mariajgrimaldi @bszabo
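For instance, a per-method test could look roughly like this; `_create_block` and `submit_answer` are hypothetical helpers standing in for the real test utilities in `test_capa_block.py`:

```python
# Hypothetical shape of a per-grading-method test; helper names are illustrative.
def test_highest_score_grading_method(self):
    block = self._create_block(grading_method='highest_score', max_attempts=3)
    attempt_scores = []
    for answer in ['wrong', 'right', 'wrong']:
        self.submit_answer(block, answer)
        attempt_scores.append(block.score.raw_earned)
    # With the "highest score" method, the retained grade is the best attempt,
    # even though the last attempt was wrong.
    assert block.score.raw_earned == max(attempt_scores)
```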
xmodule/capa_block.py
Outdated

```python
if self.enable_grading_method:
    self.set_score_with_grading_method(current_score)
else:
    self.set_score(current_score)
```
We could apply here the same suggestion as in `get_grade_from_answers`:

```python
def set_score(...):
    if self.enable_grading_method:
        self.set_score_with_grading_method(current_score)
        return
    ...
```
Here I could not make such a change, since the `set_score` method is used in different parts of the code, which would affect the behavior. Instead, I modified the code to use the new `get_score_with_grading_method` method and removed the else, like this:

```python
...
if self.enable_grading_method:
    current_score = self.get_score_with_grading_method(current_score)
self.set_score(current_score)
...
```
xmodule/capa_block.py
Outdated

```python
if self.enable_grading_method:
    calculated_score = self.get_rescore_with_grading_method()
else:
    self.update_correctness()
    calculated_score = self.calculate_score()
```
I suggest doing the same as in `set_score` and `get_grade_from_answers` here, but I think the implementation is fairly different; see the sketch below.
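If it helps, this is what the early-return shape might look like on the rescore path. It is a sketch under the assumption that the score is set right after calculation; the real method takes more parameters and does more work:

```python
def rescore(self):
    # Sketch only, following the quoted diff plus the early-return suggestion.
    if self.enable_grading_method:
        calculated_score = self.get_rescore_with_grading_method()
        self.set_score(calculated_score)
        return
    self.update_correctness()
    calculated_score = self.calculate_score()
    self.set_score(calculated_score)
```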
xmodule/capa/capa_problem.py
Outdated

```python
if not correct_map:
    correct_map = self.correct_map
```
So it'd be something like:

- The usual flow calls for `get_grade_from_current_answers()`, but now with the correct_map as argument
- If the feature is enabled, then call `get_grade_from_answers()` after line 532
- Inside `get_grade_from_answers()`, for each responder check for the extra condition added here
- Then update the results with the correct map, and return

Am I correct?
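A rough sketch of the flow described in the list above, with the extra per-responder condition; names follow this discussion and the quoted diffs, not necessarily the final code:

```python
def get_grade_from_current_answers(self, student_answers=None, correct_map=None):
    if not correct_map:
        correct_map = self.correct_map
    results = CorrectMap()  # from xmodule.capa.correctmap
    for responder in self.responders.values():
        if self.is_grading_method_enabled and student_answers is not None:
            # Extra condition: grade the supplied answer snapshot against
            # the supplied correct-map snapshot.
            results.update(responder.evaluate_answers(student_answers, correct_map))
        else:
            results.update(responder.evaluate_answers(self.student_answers, correct_map))
    return results
```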
xmodule/capa/capa_problem.py
Outdated

```python
elif student_answers is not None:
    results = responder.evaluate_answers(student_answers, correct_map)
```
That sounds better than duplicating these complex conditions. Thanks.
Hi @bszabo, I have already addressed the comments. Could you review again? Thank you!
Just a few comments left to address on my part. Thank you!
xmodule/capa/capa_problem.py
Outdated

```python
@@ -232,6 +239,15 @@ def __init__(self, problem_text, id, capa_system, capa_block,  # pylint: disable
        if extract_tree:
            self.extracted_tree = self._extract_html(self.tree)

    @property
    def enable_grading_method(self) -> bool:
```
What about:

```diff
- def enable_grading_method(self) -> bool:
+ def is_grading_method_enabled(self) -> bool:
```
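For reference, the renamed property might look like this, assuming it simply delegates to the block-level flag; the exact wiring is an assumption, not the PR's code:

```python
@property
def is_grading_method_enabled(self) -> bool:
    # Assumed delegation to the capa_block passed into __init__.
    return self.capa_block.enable_grading_method
```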
Done
Thank you very much for the changes you made in response to my requests. The code as it presently sits is fine with me. I don't seem to be able to approve, but am fine with approval by one of you.
@BryanttV 🎉 Your pull request was merged! Please take a moment to answer a two-question survey so we can improve your experience in the future.
2U Release Notice: This PR has been deployed to the edX staging environment in preparation for a release to production.
2U Release Notice: This PR has been deployed to the edX production environment.
…penedx#33911)

A new field in the Problem settings for choosing a Grading Method. Currently, the only grading method is Last Score. From now on, when the feature flag is turned on, the new grading methods available for configuration in Studio are:

- Last Score (Default): the last submitted score is used for grading.
- First Score: the first submitted score is used for grading.
- Highest Score: the highest submitted score is used for grading.
- Average Score: the average of all submitted scores is used for grading.
Description

This PR adds a new field in the Problem settings for choosing a Grading Method. Currently, the only grading method is Last Score. From now on, the grading methods allowed are: Last Score (default), First Score, Highest Score, and Average Score.
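As an illustration of the four behaviors, here is a minimal sketch of how a history of per-attempt scores could be reduced to a single grade under each method; the function name and method keys are hypothetical, not the PR's actual identifiers:

```python
def grade_from_history(scores, grading_method):
    """Reduce a list of per-attempt scores to one grade (illustrative only)."""
    if not scores:
        return 0.0
    if grading_method == "first_score":
        return scores[0]                      # First Score: first attempt counts
    if grading_method == "highest_score":
        return max(scores)                    # Highest Score: best attempt counts
    if grading_method == "average_score":
        return sum(scores) / len(scores)      # Average Score: mean of all attempts
    return scores[-1]                         # Last Score (default): latest attempt
```

For example, `grade_from_history([0.0, 1.0, 0.5], "average_score")` returns 0.5, while the same history under `"highest_score"` returns 1.0.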
Important
This feature is only available if you enable the following feature flag:
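```python
FEATURES["ENABLE_GRADING_METHOD_IN_PROBLEMS"] = True
```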
Supporting information
These changes are part of the effort to implement Configurable grading method for problems with multiple attempts.
Dependencies
Note
This PR works with the Legacy Problem Editor interface, but if you use the new interface with the Course Authoring MFE, you need the changes in this PR.
Demo
grading-method-legacy-demo.mp4
Testing Instructions
1. Add this setting to your environment to activate the feature: `FEATURES["ENABLE_GRADING_METHOD_IN_PROBLEMS"] = True`
2. Go to Studio and create a problem component from a Unit, e.g. Advanced > Blank Problem. In the settings you can add this example of a problem:
3. In the component settings, you should see a new field: Grading Method. This field is a dropdown list with the grading methods mentioned above.
4. Choose a Grading Method and save changes.
5. From the LMS, answer the problem and check the behavior with the different grading methods. You can also check from the Progress section.

Rescoring

STAFF DEBUG INFO > Rescore Learner's Submission