feat: add grading method support #33911

Merged · 1 commit · Apr 4, 2024
9 changes: 9 additions & 0 deletions cms/envs/common.py
@@ -570,6 +570,15 @@
# .. toggle_creation_date: 2024-03-14
# .. toggle_tickets: https://github.com/openedx/edx-platform/pull/34173
'ENABLE_HOME_PAGE_COURSE_API_V2': True,

# .. toggle_name: FEATURES['ENABLE_GRADING_METHOD_IN_PROBLEMS']
# .. toggle_implementation: DjangoSetting
# .. toggle_default: False
# .. toggle_description: Enables the grading method feature in capa problems.
# .. toggle_use_cases: open_edx
# .. toggle_creation_date: 2024-03-22
# .. toggle_tickets: https://github.com/openedx/edx-platform/pull/33911
'ENABLE_GRADING_METHOD_IN_PROBLEMS': False,
}

# .. toggle_name: ENABLE_COPPA_COMPLIANCE
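The same `ENABLE_GRADING_METHOD_IN_PROBLEMS` toggle is added to the LMS defaults later in this diff, defaulting to `False` in both services. A minimal sketch of declaring and reading the flag, with `FEATURES` stubbed locally so the snippet stands alone (the `grading_method_enabled` helper is hypothetical, not part of this PR):

```python
# Minimal sketch (not part of this PR) of how the new toggle is declared and
# read.  In a real deployment FEATURES comes from cms/envs/common.py or
# lms/envs/common.py; here it is stubbed so the snippet stands alone.
FEATURES = {
    'ENABLE_GRADING_METHOD_IN_PROBLEMS': False,  # shipped default from this PR
}

# A site operator's private settings module would flip the flag:
FEATURES['ENABLE_GRADING_METHOD_IN_PROBLEMS'] = True

# Application code reads it the same way the is_grading_method_enabled property
# added to xmodule/capa/capa_problem.py does, via settings.FEATURES.get(...):
def grading_method_enabled(features: dict) -> bool:
    return features.get('ENABLE_GRADING_METHOD_IN_PROBLEMS', False)

assert grading_method_enabled(FEATURES) is True
```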
3 changes: 3 additions & 0 deletions lms/djangoapps/instructor/enrollment.py
@@ -391,6 +391,9 @@ def _reset_module_attempts(studentmodule):
problem_state = json.loads(studentmodule.state)
# old_number_of_attempts = problem_state["attempts"]
problem_state["attempts"] = 0
problem_state["score_history"] = []
problem_state["correct_map_history"] = []
problem_state["student_answers_history"] = []
Comment on lines +394 to +396

@mariajgrimaldi (Member) commented on Feb 29, 2024:

So this means that the score_history, which saves students' scores for calculations (like average, highest, ...), depends on the number of attempts. For example:

  • A problem has 3 attempts
  • The student submitted their 3 attempts
  • The average score is 0.5
  • The instructor resets the student's attempt to 0
  • The student's score is still 0.5. This is how the platform works; scores are not reset when attempts are.
  • The student's score history is empty
  • The student submits another answer (their first attempt after the reset), and the answer is incorrect.
  • The student's score history contains the 1st attempt score: 0
  • The student's average score is 0

And so on. Although it makes sense, the product proposal doesn't specify what should happen after resetting attempts. It only mentions this:

This feature will require storing, as part of the problem state, the series of scores obtained by the learner in all their attempts, so that the average can always be computed and the grading strategy can be changed at some point in the future and the problem rescored correctly.

So the premise of this implementation is:
"All the attempts" means the attempts made since the last reset, not all-time submissions.

Is this the correct behavior?

A Contributor replied:

I think so. It would be the correct behavior, because when an instructor resets the learner's attempts, they are giving the learner the option (not the obligation) to try the problem again from the start, even if this creates an inconsistency between the learner's grade and the reported number of attempts.
With the new grading methods, the instructor will also be giving the learner the option, not the obligation, to get a new score based on a fresh set of attempts.
Also, trying to include the submissions prior to the reset in the score calculation would make things very confusing.

A Member replied:

I agree! Including all-time submissions would not be very clear for the learner. Thanks!


# save
studentmodule.state = json.dumps(problem_state)
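The thread above turns on the history lists being cleared together with the attempt counter, while the learner's persisted grade lives elsewhere and is untouched by the reset (which is why the score stays at 0.5 until a new submission arrives). A self-contained sketch of that premise, with a hypothetical `average_score` helper and a simplified state dict standing in for the real `StudentModule` state:

```python
import json

def reset_module_attempts(state_json: str) -> str:
    """Mirror the reset above: zero the attempts and empty the history lists."""
    problem_state = json.loads(state_json)
    problem_state["attempts"] = 0
    problem_state["score_history"] = []
    problem_state["correct_map_history"] = []
    problem_state["student_answers_history"] = []
    return json.dumps(problem_state)

def average_score(problem_state: dict) -> float:
    """Hypothetical 'average' grading method computed over the recorded history."""
    history = problem_state.get("score_history", [])
    return sum(history) / len(history) if history else 0.0

state = {"attempts": 3, "score_history": [0.0, 0.5, 1.0],
         "correct_map_history": [{}, {}, {}], "student_answers_history": [{}, {}, {}]}
assert average_score(state) == 0.5   # average over the 3 pre-reset attempts
state = json.loads(reset_module_attempts(json.dumps(state)))
assert average_score(state) == 0.0   # history is empty after the reset
state["score_history"].append(0.0)   # first (incorrect) post-reset attempt
assert average_score(state) == 0.0   # the average now reflects only that attempt
```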
5 changes: 4 additions & 1 deletion lms/djangoapps/instructor/tests/test_enrollment.py
@@ -543,7 +543,10 @@ def setup_team(self):
'attempts': 1,
'saved_files_descriptions': ['summary', 'proposal', 'diagrams'],
'saved_files_sizes': [1364677, 958418],
'saved_files_names': ['case_study_abstract.txt', 'design_prop.pdf', 'diagram1.png']
'saved_files_names': ['case_study_abstract.txt', 'design_prop.pdf', 'diagram1.png'],
'score_history': [],
'correct_map_history': [],
'student_answers_history': [],
}
team_state = json.dumps(self.team_state_dict)

11 changes: 10 additions & 1 deletion lms/envs/common.py
@@ -1049,7 +1049,16 @@
# .. toggle_use_cases: opt_in
# .. toggle_creation_date: 2023-10-10
# .. toggle_tickets: https://github.com/openedx/openedx-events/issues/210
'SEND_LEARNING_CERTIFICATE_LIFECYCLE_EVENTS_TO_BUS': False
'SEND_LEARNING_CERTIFICATE_LIFECYCLE_EVENTS_TO_BUS': False,

# .. toggle_name: FEATURES['ENABLE_GRADING_METHOD_IN_PROBLEMS']
# .. toggle_implementation: DjangoSetting
# .. toggle_default: False
# .. toggle_description: Enables the grading method feature in capa problems.
# .. toggle_use_cases: open_edx
# .. toggle_creation_date: 2024-03-22
# .. toggle_tickets: https://github.com/openedx/edx-platform/pull/33911
'ENABLE_GRADING_METHOD_IN_PROBLEMS': False,
}

# Specifies extra XBlock fields that should be available when requested via the Course Blocks API
5 changes: 4 additions & 1 deletion lms/templates/problem.html
@@ -1,7 +1,7 @@
<%page expression_filter="h"/>
<%!
from django.utils.translation import ngettext, gettext as _
from openedx.core.djangolib.markup import HTML
from openedx.core.djangolib.markup import HTML, Text
%>

<%namespace name='static' file='static_content.html'/>
@@ -90,6 +90,9 @@ <h3 class="hd hd-3 problem-header" id="${ short_id }-problem-title" aria-describ
% if attempts_allowed and (not submit_disabled_cta or attempts_used == 0):
${ngettext("You have used {num_used} of {num_total} attempt", "You have used {num_used} of {num_total} attempts", attempts_allowed).format(num_used=attempts_used, num_total=attempts_allowed)}
% endif
% if grading_method:
<div>${Text(_("Grading method: {grading_method}")).format(grading_method=grading_method)}</div>
% endif
<span class="sr">${_("Some problems have options such as save, reset, hints, or show answer. These options follow the Submit button.")}</span>
</div>
</div>
38 changes: 34 additions & 4 deletions xmodule/capa/capa_problem.py
@@ -20,6 +20,7 @@
from collections import OrderedDict
from copy import deepcopy
from datetime import datetime
from typing import Optional
from xml.sax.saxutils import unescape

from django.conf import settings
@@ -172,6 +173,12 @@ def __init__(self, problem_text, id, capa_system, capa_block,  # pylint: disable
self.has_saved_answers = state.get('has_saved_answers', False)
if 'correct_map' in state:
self.correct_map.set_dict(state['correct_map'])
self.correct_map_history = []
for cmap in state.get('correct_map_history', []):
correct_map = CorrectMap()
correct_map.set_dict(cmap)
self.correct_map_history.append(correct_map)

self.done = state.get('done', False)
self.input_state = state.get('input_state', {})

@@ -232,6 +239,15 @@ def __init__(self, problem_text, id, capa_system, capa_block, # pylint: disable
if extract_tree:
self.extracted_tree = self._extract_html(self.tree)

@property
def is_grading_method_enabled(self) -> bool:
"""
Returns whether the grading method feature is enabled. If the
feature is not enabled, the grading method field will not be shown in
Studio settings and the default grading method will be used.
"""
return settings.FEATURES.get('ENABLE_GRADING_METHOD_IN_PROBLEMS', False)

def make_xml_compatible(self, tree):
"""
Adjust tree xml in-place for compatibility before creating
@@ -299,8 +315,10 @@ def do_reset(self):
Reset internal state to unfinished, with no answers
"""
self.student_answers = {}
self.student_answers_history = []
self.has_saved_answers = False
self.correct_map = CorrectMap()
self.correct_map_history = []
self.done = False

def set_initial_display(self):
@@ -328,6 +346,7 @@ def get_state(self):
'student_answers': self.student_answers,
'has_saved_answers': self.has_saved_answers,
'correct_map': self.correct_map.get_dict(),
'correct_map_history': [cmap.get_dict() for cmap in self.correct_map_history],
'input_state': self.input_state,
'done': self.done}

@@ -434,6 +453,7 @@ def grade_answers(self, answers):
self.student_answers = convert_files_to_filenames(answers)
new_cmap = self.get_grade_from_current_answers(answers)
self.correct_map = new_cmap # lint-amnesty, pylint: disable=attribute-defined-outside-init
self.correct_map_history.append(deepcopy(new_cmap))
return self.correct_map

def supports_rescoring(self):
@@ -455,7 +475,7 @@ def supports_rescoring(self):
"""
return all('filesubmission' not in responder.allowed_inputfields for responder in self.responders.values())

def get_grade_from_current_answers(self, student_answers):
def get_grade_from_current_answers(self, student_answers, correct_map: Optional[CorrectMap] = None):
"""
Gets the grade for the currently-saved problem state, but does not save it
to the block.
@@ -468,9 +488,14 @@ def get_grade_from_current_answers(self, student_answers):
For rescoring, `student_answers` is None.

Calls the Response for each question in this problem, to do the actual grading.

When the grading method is enabled, this method is also used for rescoring. In that case,
the `correct_map` and `student_answers` passed as arguments are used, corresponding to each
pair in the fields that store the history (`correct_map_history` and `student_answers_history`).
The correct map is always recomputed from the student answers; the student answers themselves
never change over time.
"""
# old CorrectMap
oldcmap = self.correct_map
oldcmap = correct_map if self.is_grading_method_enabled else self.correct_map

# start new with empty CorrectMap
newcmap = CorrectMap()
@@ -487,7 +512,12 @@

# use 'student_answers' only if it is provided, and if it might contain a file
# submission that would not exist in the persisted "student_answers".
if 'filesubmission' in responder.allowed_inputfields and student_answers is not None:
# If the grading method is enabled, we need to pass each pair of student answers and
# correct map stored in the history fields.
if (
"filesubmission" in responder.allowed_inputfields
and student_answers is not None
) or self.is_grading_method_enabled:
results = responder.evaluate_answers(student_answers, oldcmap)
else:
results = responder.evaluate_answers(self.student_answers, oldcmap)
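The new `correct_map` parameter and the history fields are the hooks that block-level rescoring is expected to use, but that caller is not part of this diff. The helper below is only a hedged sketch of replaying the paired histories, assuming a `LoncapaProblem` with the fields added above (`rescore_with_history` is a hypothetical name):

```python
from copy import deepcopy

def rescore_with_history(problem):
    """Regrade every stored attempt of a LoncapaProblem in place (sketch only)."""
    new_history = []
    for answers, old_cmap in zip(problem.student_answers_history,
                                 problem.correct_map_history):
        # Each saved answer dict is re-evaluated against its saved CorrectMap;
        # the answers themselves are never modified, per the docstring above.
        new_cmap = problem.get_grade_from_current_answers(answers, old_cmap)
        new_history.append(deepcopy(new_cmap))
    problem.correct_map_history = new_history
    if new_history:
        # One possible choice: keep the latest attempt as the current correct
        # map.  The real implementation would pick according to the configured
        # grading method (e.g. highest or average).
        problem.correct_map = new_history[-1]
    return new_history
```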
110 changes: 109 additions & 1 deletion xmodule/capa/tests/test_capa_problem.py
@@ -4,17 +4,24 @@
import textwrap
import unittest

from django.conf import settings
from django.test import override_settings
import pytest
import ddt
from lxml import etree
from markupsafe import Markup
from mock import patch
from mock import patch, MagicMock

from xmodule.capa.correctmap import CorrectMap
from xmodule.capa.responsetypes import LoncapaProblemError
from xmodule.capa.tests.helpers import new_loncapa_problem
from openedx.core.djangolib.markup import HTML


FEATURES_WITH_GRADING_METHOD_IN_PROBLEMS = settings.FEATURES.copy()
FEATURES_WITH_GRADING_METHOD_IN_PROBLEMS['ENABLE_GRADING_METHOD_IN_PROBLEMS'] = True


@ddt.ddt
class CAPAProblemTest(unittest.TestCase):
""" CAPA problem related tests"""
@@ -732,3 +739,104 @@ def test_get_question_answer(self):
# Ensure that the answer is a string so that the dict returned from this
# function can eventually be serialized to json without issues.
assert isinstance(problem.get_question_answers()['1_solution_1'], str)

@override_settings(FEATURES=FEATURES_WITH_GRADING_METHOD_IN_PROBLEMS)
def test_get_grade_from_current_answers(self):
"""
Verify that `responder.evaluate_answers` is called with `student_answers`
and `correct_map` sent to `get_grade_from_current_answers`.

When both arguments are provided, it means that the problem is being rescored.
"""
student_answers = {'1_2_1': 'over-suspicious'}
correct_map = CorrectMap(answer_id='1_2_1', correctness="correct", npoints=1)
problem = new_loncapa_problem(
"""
<problem>
<multiplechoiceresponse>
<choicegroup>
<choice correct="true">Answer1</choice>
<choice correct="false">Answer2</choice>
<choice correct="false">Answer3</choice>
<choice correct="false">Answer4</choice>
</choicegroup>
</multiplechoiceresponse>
</problem>
"""
)
responder_mock = MagicMock()

with patch.object(problem, 'responders', {'responder1': responder_mock}):
responder_mock.allowed_inputfields = ['choicegroup']
responder_mock.evaluate_answers.return_value = correct_map

result = problem.get_grade_from_current_answers(student_answers, correct_map)
self.assertDictEqual(result.get_dict(), correct_map.get_dict())
responder_mock.evaluate_answers.assert_called_once_with(student_answers, correct_map)

@override_settings(FEATURES=FEATURES_WITH_GRADING_METHOD_IN_PROBLEMS)
def test_get_grade_from_current_answers_without_student_answers(self):
"""
Verify that `responder.evaluate_answers` is called with appropriate arguments.

When `student_answers` is None and the grading method is enabled,
`responder.evaluate_answers` is called with `None` and the provided `correct_map`.
"""
correct_map = CorrectMap(answer_id='1_2_1', correctness="correct", npoints=1)
problem = new_loncapa_problem(
"""
<problem>
<multiplechoiceresponse>
<choicegroup>
<choice correct="true">Answer1</choice>
<choice correct="false">Answer2</choice>
<choice correct="false">Answer3</choice>
<choice correct="false">Answer4</choice>
</choicegroup>
</multiplechoiceresponse>
</problem>
"""
)
responder_mock = MagicMock()

with patch.object(problem, 'responders', {'responder1': responder_mock}):
problem.responders['responder1'].allowed_inputfields = ['choicegroup']
problem.responders['responder1'].evaluate_answers.return_value = correct_map

result = problem.get_grade_from_current_answers(None, correct_map)

self.assertDictEqual(result.get_dict(), correct_map.get_dict())
responder_mock.evaluate_answers.assert_called_once_with(None, correct_map)

@override_settings(FEATURES=FEATURES_WITH_GRADING_METHOD_IN_PROBLEMS)
def test_get_grade_from_current_answers_with_filesubmission(self):
"""
Verify that an exception is raised when `get_grade_from_current_answers` is called with
`student_answers` as None and a `correct_map`, for a problem with a file submission.

This ensures that rescoring is not allowed if the problem has a filesubmission.
"""
correct_map = CorrectMap(answer_id='1_2_1', correctness="correct", npoints=1)
problem = new_loncapa_problem(
"""
<problem>
<multiplechoiceresponse>
<choicegroup>
<choice correct="true">Answer1</choice>
<choice correct="false">Answer2</choice>
<choice correct="false">Answer3</choice>
<choice correct="false">Answer4</choice>
</choicegroup>
</multiplechoiceresponse>
</problem>
"""
)
responder_mock = MagicMock()

with patch.object(problem, 'responders', {'responder1': responder_mock}):
responder_mock.allowed_inputfields = ['filesubmission']
responder_mock.evaluate_answers.return_value = correct_map

with self.assertRaises(Exception):
problem.get_grade_from_current_answers(None, correct_map)
responder_mock.evaluate_answers.assert_not_called()