[libc++] Add input validation for set_intersection() in debug mode. #101508

Open. Wants to merge 3 commits into base: main.
14 changes: 7 additions & 7 deletions libcxx/include/__algorithm/is_sorted_until.h
@@ -20,18 +20,18 @@

Member: Can you add a release note in 20.rst?

_LIBCPP_BEGIN_NAMESPACE_STD

-template <class _Compare, class _ForwardIterator>
+template <class _Compare, class _ForwardIterator, class _Sent>
 _LIBCPP_HIDE_FROM_ABI _LIBCPP_CONSTEXPR_SINCE_CXX20 _ForwardIterator
-__is_sorted_until(_ForwardIterator __first, _ForwardIterator __last, _Compare __comp) {
+__is_sorted_until(_ForwardIterator __first, _Sent __last, _Compare&& __comp) {
if (__first != __last) {
_ForwardIterator __i = __first;
Member: Suggested change:
-    _ForwardIterator __i = __first;
+    _ForwardIterator __prev = __first;
This makes the code quite a bit clearer.

-    while (++__i != __last) {
-      if (__comp(*__i, *__first))
-        return __i;
-      __first = __i;
+    while (++__first != __last) {
Member: Question: what is the reason to swap first and i here?

+      if (__comp(*__first, *__i))
+        return __first;
+      __i = __first;
     }
   }
-  return __last;
+  return __first;
}

template <class _ForwardIterator, class _Compare>
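To make the two review comments above concrete, here is a standalone sketch of how the refactored loop reads with the suggested __prev naming (an illustration written for this review, not the libc++ source). It also suggests an answer to the "why swap" question: with a sentinel, only the iterator can be returned, so making __first the walking cursor lets a single return __first; cover both the empty and the fully-sorted case.

// A standalone sketch (not the libc++ source) of the refactored loop, using the
// __prev naming the reviewer suggests; "Sent" stands in for the _Sent sentinel type,
// which only needs to be equality-comparable with the iterator.
template <class It, class Sent, class Comp>
constexpr It is_sorted_until_sketch(It first, Sent last, Comp comp) {
  if (first != last) {
    It prev = first; // trails one position behind the walking iterator
    while (++first != last) {
      if (comp(*first, *prev)) // descending step: first points at the first out-of-order element
        return first;
      prev = first;
    }
  }
  return first; // equals the end position when the range is sorted; the sentinel itself cannot be returned
}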
11 changes: 11 additions & 0 deletions libcxx/include/__algorithm/set_intersection.h
@@ -11,12 +11,15 @@

#include <__algorithm/comp.h>
#include <__algorithm/comp_ref_type.h>
#include <__algorithm/is_sorted_until.h>
#include <__algorithm/iterator_operations.h>
#include <__algorithm/lower_bound.h>
#include <__assert>
#include <__config>
#include <__functional/identity.h>
#include <__iterator/iterator_traits.h>
#include <__iterator/next.h>
#include <__type_traits/is_constant_evaluated.h>
#include <__type_traits/is_same.h>
#include <__utility/exchange.h>
#include <__utility/move.h>
@@ -95,6 +98,14 @@ __set_intersection(
_Compare&& __comp,
std::forward_iterator_tag,
std::forward_iterator_tag) {
#if _LIBCPP_HARDENING_MODE == _LIBCPP_HARDENING_MODE_DEBUG
if (!__libcpp_is_constant_evaluated()) {
Member: Why is this required?

Contributor Author (@ichaer, Aug 6, 2024): Because __builtin_expect(), which _LIBCPP_ASSERT() expands to, can't be constant-evaluated. I learned that from a compilation error, btw.

Edit: Sorry, now that I've said it I'm not sure. Maybe it wasn't __builtin_expect() but _LIBCPP_VERBOSE_ABORT()? In any case, something inside _LIBCPP_ASSERT() can't be constant-evaluated. I had been using __check_strict_weak_ordering_sorted() as my blueprint for this change, but I left that bit out and the compiler explained to me why I couldn't.
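For background on the guard being questioned here: runtime-only diagnostic machinery (a verbose-abort hook, __builtin_expect-based macros, logging) generally cannot appear in a constant expression, so a constexpr algorithm that wants to run such a check has to skip it during constant evaluation. Below is a minimal standalone sketch of that pattern using C++20 std::is_constant_evaluated(); report_violation is a hypothetical stand-in for whatever the assertion macro ultimately calls, not a libc++ function. libc++'s internal __libcpp_is_constant_evaluated(), used in the diff, plays the same role and also works in pre-C++20 modes.

#include <cstdio>
#include <cstdlib>
#include <iterator>
#include <type_traits>

// Hypothetical stand-in for the runtime-only machinery an assertion macro expands to
// (e.g. a verbose-abort hook). It is not constexpr, so it cannot be called during
// constant evaluation.
void report_violation(const char* msg) {
  std::fprintf(stderr, "%s\n", msg);
  std::abort();
}

template <class It, class Comp>
constexpr void check_sorted(It first, It last, Comp comp) {
  if (std::is_constant_evaluated())
    return; // skip the runtime-only check during constant evaluation
  if (first == last)
    return;
  for (It prev = first, cur = std::next(first); cur != last; prev = cur, ++cur)
    if (comp(*cur, *prev))
      report_violation("range is not sorted"); // reaching this in a constant expression would be ill-formed
}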

Member: I still don't understand. Can you try removing the if (is-constant-evaluated) and let's see what the CI says? You're probably right, but I'd like to see what the error is.

Member: +1 -- also curious to see the error.

_LIBCPP_ASSERT_INTERNAL(
Member: This should likely be _LIBCPP_ASSERT_ARGUMENT_WITHIN_DOMAIN, but I'd like @var-const to chime in.

Contributor Author: I should have added a comment: I didn't do that because _LIBCPP_ASSERT_ARGUMENT_WITHIN_DOMAIN is enabled in _LIBCPP_HARDENING_MODE_EXTENSIVE, and I thought the cost wasn't appropriate. More about this in my response to https://github.com/llvm/llvm-project/pull/101508/files#r1702696227.

Member: Ah, I think this should be _LIBCPP_ASSERT_SEMANTIC_REQUIREMENT instead, actually. That matches what we do for __check_strict_weak_ordering_sorted.

Member (@var-const, Sep 12, 2024):

  1. +1 to Louis' comment -- internal is meant for checks that aim to catch bugs in our own implementation, bugs that are independent of user input. Since this checks the user-provided arguments, we need to find some other category.
  2. My first intuition is that argument-within-domain is a somewhat better match than semantic-requirement. In __check_strict_weak_ordering_sorted, we're checking the resulting (presumably) sorted sequence as a way to validate the given comparator -- the comparator has the semantic requirement to provide strict weak ordering, but we cannot check that without resorting to an imperfect heuristic. Here, however, we are checking the given argument directly, and the check is very straightforward, just expensive. argument-within-domain is essentially a catch-all for "the given argument is valid, and if it's not, it won't cause UB within our code (but it will produce an incorrect result that might well cause UB in user code)", which seems to apply to the situation here.
  3. Since we're wrapping the whole thing in a conditional, it's not really important which modes enable the assertion category we choose -- e.g. if we choose argument-within-domain, which is enabled in both extensive and debug, the check for _LIBCPP_HARDENING_MODE_DEBUG still makes sure it only runs in debug. It's a little inelegant, but we already have precedent in __check_strict_weak_ordering_sorted, so I wouldn't try to fix that within this patch.

std::__is_sorted_until(__first1, __last1, __comp) == __last1, "set_intersection: input range 1 must be sorted");
_LIBCPP_ASSERT_INTERNAL(
std::__is_sorted_until(__first2, __last2, __comp) == __last2, "set_intersection: input range 2 must be sorted");
}
#endif
_LIBCPP_CONSTEXPR std::__identity __proj;
bool __prev_may_be_equal = false;

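To make the category discussion concrete, this is roughly the shape the reviewers seem to be converging on: keep the explicit debug-mode guard so the linear scan never runs in the cheaper hardening modes, but use the argument-within-domain category instead of the internal one. This is a sketch assembled from the diff and the comments above; the macro name comes from the discussion and is not necessarily the wording of the final commit.

// Debug-only validation of set_intersection's precondition, sketched with the
// assertion category suggested in the review rather than the internal one.
#if _LIBCPP_HARDENING_MODE == _LIBCPP_HARDENING_MODE_DEBUG
  if (!__libcpp_is_constant_evaluated()) {
    _LIBCPP_ASSERT_ARGUMENT_WITHIN_DOMAIN(
        std::__is_sorted_until(__first1, __last1, __comp) == __last1,
        "set_intersection: input range 1 must be sorted");
    _LIBCPP_ASSERT_ARGUMENT_WITHIN_DOMAIN(
        std::__is_sorted_until(__first2, __last2, __comp) == __last2,
        "set_intersection: input range 2 must be sorted");
  }
#endif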
@@ -43,33 +43,31 @@

#include "test_iterators.h"

-namespace {
-
-// __debug_less will perform an additional comparison in an assertion
-static constexpr unsigned std_less_comparison_count_multiplier() noexcept {
-#if _LIBCPP_HARDENING_MODE == _LIBCPP_HARDENING_MODE_DEBUG
-  return 2;
+// Debug mode provides no complexity guarantees, testing them would be a waste of effort.
Contributor: I disagree here. While we do not conform to the standard's requirements, we do try to avoid increasing the complexity. A constant factor is usually OK, and it would be good to set the current increase here to avoid increasing it accidentally.

Member: I agree with @philnik777. @var-const should chime in, but basically, while we are OK with larger performance penalties in the debug mode, we don't "not care" about changing the complexity.

@var-const It turns out this is probably something we should have documented in the Hardening mode documentation, and we should go ahead and document it now.

Contributor Author: I agree with what you're saying; I unfortunately have a ton of first-hand experience with not being able to use the best tool for the job because it's too slow. But this isn't as simple as it may seem. I'll argue the case for what I did a little bit out of pride, but mostly because I believe I made this choice for valid reasons, which I probably should have shared in the commit description or a code comment.

You'll see we previously had a function called std_less_comparison_count_multiplier(), which was already doing some of what we're continuing here: adding a constant factor to an operation count when in debug mode. When I added it I was very uncomfortable with how much it looked like a change detector, but it was just the one, and it looked harmless enough, so I went with it. I could make it a less effective change detector by adding some padding to my constant factor and using it for all operations, but the improvement we made to set_intersection() has a twist: the worst-case complexity is linear, but in most cases the effective complexity is logarithmic, and we do check some of those cases in the complexity test (because it's an improvement and we don't want to break it, right?). Input validation, however, is linear. So, conceptually, using a larger constant doesn't really work. In practice it would, because we only test with (small) fixed-size inputs, but it's conceptually wrong. Then there is the question of the arbitrarily padded constant: if it's arbitrarily padded, how much is it really helping? It's still a change detector, a test which will at some point break when someone adds more debug validations. With all that, is the test still a net positive? And if we changed the complexity validation in debug mode to check that it's linear with a less-padded constant, would that even make sense?

I agree that performance in debug mode is not something we want to ignore, but I don't have a good solution for any of this, and since I don't have a good solution for automating the validation, I would personally prefer relying on the safety net of code reviews to catch this sort of thing. Validating the number of operations is really strict, and it's a tricky thing to mix with debug instrumentation.

Having said all that, I'm happy to change this proposal if you disagree. We could have a single constant multiplier used for all operations, or one constant for comparisons + projections and another one for iterator operations.

What do you think?

Member: Thanks a lot for writing this out!

I think we've been somewhat consciously postponing the decision on what kind of performance guarantees the debug mode should provide (if any) -- there was no pressing need to make a commitment, and I wanted to gain some usage experience to inform the decision. From what I've seen so far, I'm leaning towards making the debug mode guarantee big-O complexity (the Standard sometimes mandates the exact number of operations, which would be too strict, IMO). I'm very concerned that unless we keep even the debug mode relatively lightweight and performant, it will end up a mode that no one enables and is thus effectively useless; I'm especially concerned about a potential "death by a thousand cuts" scenario where many checks, each one not that heavyweight on its own, add up to something that is unacceptably slow for too many users.

IIUC, the new checks don't really change the complexity here either (they make the average case worse, but the worst case is the same).

We do need to test for the exact complexity in the regular modes (since it's mandated by the Standard), but I can relate to your perspective that these tests aren't really well suited to the debug mode. Not checking complexity in the debug mode makes sense to me; we should change the comment, however, as Louis suggested.

Member: In light of the discussion above (thanks for the detailed explanation BTW), I would change to this:

Suggested change:
-// Debug mode provides no complexity guarantees, testing them would be a waste of effort.
+// We don't check number of operations in Debug mode because they are not stable enough due to additional validations

That way we're not making a statement about whether the complexity is supposed to be the same or not. I'm basically sweeping this whole thing under the rug.

+#ifdef _LIBCPP_HARDENING_MODE_DEBUG
Contributor: This is wrong. _LIBCPP_HARDENING_MODE_DEBUG is always defined.

Member: Suggested change:
-#ifdef _LIBCPP_HARDENING_MODE_DEBUG
+#if defined(_LIBCPP_HARDENING_MODE_DEBUG) && _LIBCPP_HARDENING_MODE_DEBUG

We still need the defined(_LIBCPP_HARDENING_MODE_DEBUG) check to avoid -Wundef when testing non-libc++ libraries.

+#  define ASSERT_COMPLEXITY(expression) (void)(expression)
 #else
-  return 1;
+#  define ASSERT_COMPLEXITY(expression) assert(expression)
 #endif
-}
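For illustration, one way the guard could look after addressing both comments above: compare the active hardening mode against the debug mode (since _LIBCPP_HARDENING_MODE_DEBUG is always defined, an #ifdef test is always true), and keep the defined() checks so the test stays -Wundef-clean when run against other standard libraries. This is a sketch of the review feedback, not necessarily the final wording of the patch.

// Debug-mode validations add extra comparisons and iterator operations, so exact
// operation counts are not stable there; the complexity checks are compiled out in that case.
#if defined(_LIBCPP_HARDENING_MODE) && defined(_LIBCPP_HARDENING_MODE_DEBUG) &&                                        \
    _LIBCPP_HARDENING_MODE == _LIBCPP_HARDENING_MODE_DEBUG
#  define ASSERT_COMPLEXITY(expression) (void)(expression)
#else
#  define ASSERT_COMPLEXITY(expression) assert(expression)
#endif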

namespace {

struct [[nodiscard]] OperationCounts {
std::size_t comparisons{};
struct PerInput {
std::size_t proj{};
IteratorOpCounts iterops;

-    [[nodiscard]] constexpr bool isNotBetterThan(const PerInput& other) {
+    [[nodiscard]] constexpr bool isNotBetterThan(const PerInput& other) const noexcept {
return proj >= other.proj && iterops.increments + iterops.decrements + iterops.zero_moves >=
other.iterops.increments + other.iterops.decrements + other.iterops.zero_moves;
}
};
std::array<PerInput, 2> in;

-  [[nodiscard]] constexpr bool isNotBetterThan(const OperationCounts& expect) {
-    return std_less_comparison_count_multiplier() * comparisons >= expect.comparisons &&
-           in[0].isNotBetterThan(expect.in[0]) && in[1].isNotBetterThan(expect.in[1]);
+  [[nodiscard]] constexpr bool isNotBetterThan(const OperationCounts& expect) const noexcept {
+    return comparisons >= expect.comparisons && in[0].isNotBetterThan(expect.in[0]) &&
+           in[1].isNotBetterThan(expect.in[1]);
}
};

@@ -80,16 +78,17 @@ struct counted_set_intersection_result {

constexpr counted_set_intersection_result() = default;

-  constexpr explicit counted_set_intersection_result(std::array<int, ResultSize>&& contents) : result{contents} {}
+  constexpr explicit counted_set_intersection_result(std::array<int, ResultSize>&& contents) noexcept
+      : result{contents} {}

-  constexpr void assertNotBetterThan(const counted_set_intersection_result& other) {
+  constexpr void assertNotBetterThan(const counted_set_intersection_result& other) const noexcept {
assert(result == other.result);
-    assert(opcounts.isNotBetterThan(other.opcounts));
+    ASSERT_COMPLEXITY(opcounts.isNotBetterThan(other.opcounts));
}
};

template <std::size_t ResultSize>
-counted_set_intersection_result(std::array<int, ResultSize>) -> counted_set_intersection_result<ResultSize>;
+counted_set_intersection_result(std::array<int, ResultSize>) noexcept -> counted_set_intersection_result<ResultSize>;

template <template <class...> class InIterType1,
template <class...>
@@ -306,7 +305,7 @@ constexpr bool testComplexityBasic() {
std::array<int, 5> r2{2, 4, 6, 8, 10};
std::array<int, 0> expected{};

-  const std::size_t maxOperation = std_less_comparison_count_multiplier() * (2 * (r1.size() + r2.size()) - 1);
+  const std::size_t maxOperation = 2 * (r1.size() + r2.size()) - 1;

// std::set_intersection
{
@@ -321,7 +320,7 @@
std::set_intersection(r1.begin(), r1.end(), r2.begin(), r2.end(), out.data(), comp);

assert(std::ranges::equal(out, expected));
-    assert(numberOfComp <= maxOperation);
+    ASSERT_COMPLEXITY(numberOfComp <= maxOperation);
}

// ranges::set_intersection iterator overload
@@ -349,9 +348,9 @@
std::ranges::set_intersection(r1.begin(), r1.end(), r2.begin(), r2.end(), out.data(), comp, proj1, proj2);

assert(std::ranges::equal(out, expected));
-    assert(numberOfComp <= maxOperation);
-    assert(numberOfProj1 <= maxOperation);
-    assert(numberOfProj2 <= maxOperation);
+    ASSERT_COMPLEXITY(numberOfComp <= maxOperation);
+    ASSERT_COMPLEXITY(numberOfProj1 <= maxOperation);
+    ASSERT_COMPLEXITY(numberOfProj2 <= maxOperation);
}

// ranges::set_intersection range overload
Expand Down Expand Up @@ -379,9 +378,9 @@ constexpr bool testComplexityBasic() {
std::ranges::set_intersection(r1, r2, out.data(), comp, proj1, proj2);

assert(std::ranges::equal(out, expected));
-    assert(numberOfComp < maxOperation);
-    assert(numberOfProj1 < maxOperation);
-    assert(numberOfProj2 < maxOperation);
+    ASSERT_COMPLEXITY(numberOfComp < maxOperation);
+    ASSERT_COMPLEXITY(numberOfProj1 < maxOperation);
+    ASSERT_COMPLEXITY(numberOfProj2 < maxOperation);
}
return true;
}
@@ -40,44 +40,45 @@ constexpr bool test_all() {
constexpr auto operator<=>(const A&) const = default;
};

-  std::array in  = {1, 2, 3};
-  std::array in2 = {A{4}, A{5}, A{6}};
+  const std::array in  = {1, 2, 3};
+  const std::array in2 = {A{4}, A{5}, A{6}};

std::array output = {7, 8, 9, 10, 11, 12};
auto out = output.begin();
std::array output2 = {A{7}, A{8}, A{9}, A{10}, A{11}, A{12}};
auto out2 = output2.begin();

-  std::ranges::equal_to eq;
-  std::ranges::less less;
-  auto sum   = [](int lhs, A rhs) { return lhs + rhs.x; };
-  auto proj1 = [](int x) { return x * -1; };
-  auto proj2 = [](A a) { return a.x * -1; };
+  const std::ranges::equal_to eq;
+  const std::ranges::less less;
+  const std::ranges::greater greater;
+  const auto sum   = [](int lhs, A rhs) { return lhs + rhs.x; };
+  const auto proj1 = [](int x) { return x * -1; };
+  const auto proj2 = [](A a) { return a.x * -1; };

#if TEST_STD_VER >= 23
test(std::ranges::ends_with, in, in2, eq, proj1, proj2);
#endif
test(std::ranges::equal, in, in2, eq, proj1, proj2);
test(std::ranges::lexicographical_compare, in, in2, eq, proj1, proj2);
test(std::ranges::is_permutation, in, in2, eq, proj1, proj2);
-  test(std::ranges::includes, in, in2, less, proj1, proj2);
+  test(std::ranges::includes, in, in2, greater, proj1, proj2);
test(std::ranges::find_first_of, in, in2, eq, proj1, proj2);
test(std::ranges::mismatch, in, in2, eq, proj1, proj2);
test(std::ranges::search, in, in2, eq, proj1, proj2);
test(std::ranges::find_end, in, in2, eq, proj1, proj2);
test(std::ranges::transform, in, in2, out, sum, proj1, proj2);
test(std::ranges::transform, in, in2, out2, sum, proj1, proj2);
-  test(std::ranges::partial_sort_copy, in, in2, less, proj1, proj2);
-  test(std::ranges::merge, in, in2, out, less, proj1, proj2);
-  test(std::ranges::merge, in, in2, out2, less, proj1, proj2);
-  test(std::ranges::set_intersection, in, in2, out, less, proj1, proj2);
-  test(std::ranges::set_intersection, in, in2, out2, less, proj1, proj2);
-  test(std::ranges::set_difference, in, in2, out, less, proj1, proj2);
-  test(std::ranges::set_difference, in, in2, out2, less, proj1, proj2);
-  test(std::ranges::set_symmetric_difference, in, in2, out, less, proj1, proj2);
-  test(std::ranges::set_symmetric_difference, in, in2, out2, less, proj1, proj2);
-  test(std::ranges::set_union, in, in2, out, less, proj1, proj2);
-  test(std::ranges::set_union, in, in2, out2, less, proj1, proj2);
+  test(std::ranges::partial_sort_copy, in, output, less, proj1, proj2);
+  test(std::ranges::merge, in, in2, out, greater, proj1, proj2);
+  test(std::ranges::merge, in, in2, out2, greater, proj1, proj2);
+  test(std::ranges::set_intersection, in, in2, out, greater, proj1, proj2);
+  test(std::ranges::set_intersection, in, in2, out2, greater, proj1, proj2);
+  test(std::ranges::set_difference, in, in2, out, greater, proj1, proj2);
+  test(std::ranges::set_difference, in, in2, out2, greater, proj1, proj2);
+  test(std::ranges::set_symmetric_difference, in, in2, out, greater, proj1, proj2);
+  test(std::ranges::set_symmetric_difference, in, in2, out2, greater, proj1, proj2);
+  test(std::ranges::set_union, in, in2, out, greater, proj1, proj2);
+  test(std::ranges::set_union, in, in2, out2, greater, proj1, proj2);
#if TEST_STD_VER > 20
test(std::ranges::starts_with, in, in2, eq, proj1, proj2);
#endif