[libc++] Unify the benchmarks with the test suite #101399
Conversation
@llvm/pr-subscribers-libunwind @llvm/pr-subscribers-libcxx

Author: Louis Dionne (ldionne)

Changes

Instead of building the benchmarks separately via CMake and running them separately from the rest of the test suite, this patch integrates the benchmarks into the Lit test suite, so that both are handled uniformly.

When running the tests via ``check-cxx``, the benchmarks are only dry-run, to make sure they don't rot. I am not really satisfied with the layering violation of adding the […]

Patch is 64.39 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/101399.diff

77 Files Affected:
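The workflow described above could look roughly like the following (the build directory name and the exact benchmark path are assumptions for illustration, not part of this patch):

```shell
# Dry-run the benchmarks along with the rest of the test suite; under
# check-cxx they are only smoke-tested, not measured.
ninja -C build check-cxx

# Run a single benchmark for real, the same way an individual Lit test
# is run (configure with optimizations enabled for meaningful numbers).
./build/bin/llvm-lit -sv libcxx/test/benchmarks/algorithms/find.bench.cpp
```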
diff --git a/.github/workflows/libcxx-build-and-test.yaml b/.github/workflows/libcxx-build-and-test.yaml
index 1456f245cf7c0..94889bb243f3c 100644
--- a/.github/workflows/libcxx-build-and-test.yaml
+++ b/.github/workflows/libcxx-build-and-test.yaml
@@ -154,9 +154,7 @@ jobs:
'generic-no-rtti',
'generic-optimized-speed',
'generic-static',
- # TODO Find a better place for the benchmark and bootstrapping builds to live. They're either very expensive
- # or don't provide much value since the benchmark run results are too noise on the bots.
- 'benchmarks',
+ # TODO Find a better place for the bootstrapping build to live, since it's very expensive.
'bootstrapping-build'
]
machine: [ 'libcxx-runners-8-set' ]
diff --git a/libcxx/CMakeLists.txt b/libcxx/CMakeLists.txt
index 6168c76bff6d9..d58c485110bd6 100644
--- a/libcxx/CMakeLists.txt
+++ b/libcxx/CMakeLists.txt
@@ -156,13 +156,8 @@ message(STATUS "Using libc++ testing configuration: ${LIBCXX_TEST_CONFIG}")
set(LIBCXX_TEST_PARAMS "" CACHE STRING
"A list of parameters to run the Lit test suite with.")
-# Benchmark options -----------------------------------------------------------
option(LIBCXX_INCLUDE_BENCHMARKS "Build the libc++ benchmarks and their dependencies" ON)
-set(LIBCXX_BENCHMARK_TEST_ARGS_DEFAULT --benchmark_min_time=0.01)
-set(LIBCXX_BENCHMARK_TEST_ARGS "${LIBCXX_BENCHMARK_TEST_ARGS_DEFAULT}" CACHE STRING
- "Arguments to pass when running the benchmarks using check-cxx-benchmarks")
-
option(LIBCXX_INCLUDE_DOCS "Build the libc++ documentation." ${LLVM_INCLUDE_DOCS})
set(LIBCXX_LIBDIR_SUFFIX "${LLVM_LIBDIR_SUFFIX}" CACHE STRING
"Define suffix of library directory name (32/64)")
diff --git a/libcxx/docs/BuildingLibcxx.rst b/libcxx/docs/BuildingLibcxx.rst
index 5c224689e0f9f..ebc5513d40d67 100644
--- a/libcxx/docs/BuildingLibcxx.rst
+++ b/libcxx/docs/BuildingLibcxx.rst
@@ -381,7 +381,8 @@ libc++ Feature Options
**Default**: ``ON`` (or value of ``LLVM_INCLUDE_TESTS``)
- Build the libc++ tests.
+ Build the libc++ test suite, which includes various types of tests like conformance
+ tests, vendor-specific tests and benchmarks.
.. option:: LIBCXX_INCLUDE_BENCHMARKS:BOOL
@@ -390,15 +391,6 @@ libc++ Feature Options
Build the libc++ benchmark tests and the Google Benchmark library needed
to support them.
-.. option:: LIBCXX_BENCHMARK_TEST_ARGS:STRING
-
- **Default**: ``--benchmark_min_time=0.01``
-
- A semicolon list of arguments to pass when running the libc++ benchmarks using the
- ``check-cxx-benchmarks`` rule. By default we run the benchmarks for a very short amount of time,
- since the primary use of ``check-cxx-benchmarks`` is to get test and sanitizer coverage, not to
- get accurate measurements.
-
.. option:: LIBCXX_ASSERTION_HANDLER_FILE:PATH
**Default**:: ``"${CMAKE_CURRENT_SOURCE_DIR}/vendor/llvm/default_assertion_handler.in"``
diff --git a/libcxx/docs/TestingLibcxx.rst b/libcxx/docs/TestingLibcxx.rst
index 65a7e1ec30c96..6ee8b476bba1d 100644
--- a/libcxx/docs/TestingLibcxx.rst
+++ b/libcxx/docs/TestingLibcxx.rst
@@ -392,6 +392,10 @@ Test Filenames`_ when determining the names for new test files.
of Lit test to be executed. This can be used to generate multiple Lit tests from a single source file, which is useful for testing repetitive properties
in the library. Be careful not to abuse this since this is not a replacement for usual code reuse techniques.
+ * - ``FOO.bench.cpp``
+ - A benchmark test. These tests are linked against the GoogleBenchmark library and generally contain micro-benchmarks of individual
+ components of the library.
+
libc++-Specific Lit Features
----------------------------
@@ -438,48 +442,20 @@ Libc++ contains benchmark tests separately from the test of the test suite.
The benchmarks are written using the `Google Benchmark`_ library, a copy of which
is stored in the libc++ repository.
-For more information about using the Google Benchmark library see the
+For more information about using the Google Benchmark library, see the
`official documentation <https://github.com/google/benchmark>`_.
-.. _`Google Benchmark`: https://github.com/google/benchmark
-
-Building Benchmarks
--------------------
-
-The benchmark tests are not built by default. The benchmarks can be built using
-the ``cxx-benchmarks`` target.
-
-An example build would look like:
-
-.. code-block:: bash
-
- $ ninja -C build cxx-benchmarks
-
-This will build all of the benchmarks under ``<libcxx>/test/benchmarks`` to be
-built against the just-built libc++. The compiled tests are output into
-``build/libcxx/test/benchmarks``.
+The benchmarks are located under ``libcxx/test/benchmarks``. Running a benchmark
+works in the same way as running a test. Both the benchmarks and the tests share
+the same configuration, so make sure to enable the relevant optimization level
+when running the benchmarks.
-Also See:
-
- * :ref:`Building Libc++ <build instructions>`
- * :ref:`CMake Options`
-
-Running Benchmarks
-------------------
-
-The benchmarks must be run manually by the user. Currently there is no way
-to run them as part of the build.
-
-For example:
-
-.. code-block:: bash
-
- $ cd build/libcxx/test/benchmarks
- $ ./find.bench.out # Runs all the benchmarks
- $ ./find.bench.out --benchmark_filter="bm_ranges_find<std::vector<char>>" # Only runs that specific benchmark
-
-For more information about running benchmarks see `Google Benchmark`_.
+Note that benchmarks are only dry-run when run via the ``check-cxx`` target since
+we only want to make sure they don't rot. Do not rely on the results of benchmarks
+run through ``check-cxx`` for anything, instead run the benchmarks manually using
+the instructions for running individual tests.
+.. _`Google Benchmark`: https://github.com/google/benchmark
.. _testing-hardening-assertions:
@@ -523,4 +499,3 @@ A toy example:
Note that error messages are only tested (matched) if the ``debug``
hardening mode is used.
-
diff --git a/libcxx/test/CMakeLists.txt b/libcxx/test/CMakeLists.txt
index b25712b0e363b..ddff951e435d5 100644
--- a/libcxx/test/CMakeLists.txt
+++ b/libcxx/test/CMakeLists.txt
@@ -1,15 +1,21 @@
include(HandleLitArguments)
add_subdirectory(tools)
-if (LIBCXX_INCLUDE_BENCHMARKS)
- add_subdirectory(benchmarks)
-endif()
-
set(AUTO_GEN_COMMENT "## Autogenerated by libcxx configuration.\n# Do not edit!")
set(SERIALIZED_LIT_PARAMS "# Lit parameters serialized here for llvm-lit to pick them up\n")
serialize_lit_string_param(SERIALIZED_LIT_PARAMS compiler "${CMAKE_CXX_COMPILER}")
+if (LIBCXX_INCLUDE_BENCHMARKS
+ AND LIBCXX_ENABLE_LOCALIZATION AND LIBCXX_ENABLE_THREADS AND LIBCXX_ENABLE_FILESYSTEM AND LIBCXX_ENABLE_RANDOM_DEVICE
+ AND LIBCXX_ENABLE_EXCEPTIONS AND LIBCXX_ENABLE_RTTI) # TODO: The benchmarks should work with exceptions/RTTI disabled
+ add_subdirectory(benchmarks)
+ set(_libcxx_benchmark_mode "dry-run")
+else()
+ serialize_lit_string_param(SERIALIZED_LIT_PARAMS enable_benchmarks "no")
+ set(_libcxx_benchmark_mode "no")
+endif()
+
if (NOT LIBCXX_ENABLE_EXCEPTIONS)
serialize_lit_param(SERIALIZED_LIT_PARAMS enable_exceptions False)
endif()
@@ -46,4 +52,5 @@ configure_lit_site_cfg(
add_lit_testsuite(check-cxx
"Running libcxx tests"
${CMAKE_CURRENT_BINARY_DIR}
+ PARAMS enable_benchmarks="${_libcxx_benchmark_mode}"
DEPENDS cxx-test-depends)
diff --git a/libcxx/test/benchmarks/CMakeLists.txt b/libcxx/test/benchmarks/CMakeLists.txt
index d61367a367738..a505bdb0cacb6 100644
--- a/libcxx/test/benchmarks/CMakeLists.txt
+++ b/libcxx/test/benchmarks/CMakeLists.txt
@@ -1,10 +1,8 @@
-include(ExternalProject)
-include(CheckCXXCompilerFlag)
-
#==============================================================================
# Build Google Benchmark
#==============================================================================
+include(ExternalProject)
set(BENCHMARK_COMPILE_FLAGS
-Wno-unused-command-line-argument
-nostdinc++
@@ -39,169 +37,4 @@ ExternalProject_Add(google-benchmark
-DBENCHMARK_USE_LIBCXX:BOOL=ON
-DBENCHMARK_ENABLE_TESTING:BOOL=OFF)
-#==============================================================================
-# Benchmark tests configuration
-#==============================================================================
-add_custom_target(cxx-benchmarks)
-set(BENCHMARK_OUTPUT_DIR ${CMAKE_CURRENT_BINARY_DIR})
-set(BENCHMARK_INSTALL_DIR ${CMAKE_CURRENT_BINARY_DIR}/google-benchmark)
-
-add_library( cxx-benchmarks-flags INTERFACE)
-
-# TODO(cmake): remove. This is a workaround to prevent older versions of GCC
-# from failing the configure step because they don't support C++23.
-if (CMAKE_CXX_COMPILER_ID STREQUAL "GNU" AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS "13.0")
- return()
-endif()
-#TODO(cmake): remove the `add_compile_options`. Currently we have to explicitly
-# pass the `std:c++latest` flag on Windows to work around an issue where
-# requesting `cxx_std_23` results in an error -- somehow CMake fails to
-# translate the `c++23` flag into `c++latest`, and the highest numbered C++
-# version that MSVC flags support is C++20.
-if (MSVC)
- add_compile_options(/std:c++latest)
-# ibm-clang does not recognize the cxx_std_23 flag, so use this as a temporary
-# workaround on AIX as well.
-elseif (${CMAKE_SYSTEM_NAME} MATCHES "AIX")
- add_compile_options(-std=c++23)
-else()
- target_compile_features( cxx-benchmarks-flags INTERFACE cxx_std_23)
-endif()
-
-target_compile_options(cxx-benchmarks-flags INTERFACE -fsized-deallocation -nostdinc++
- ${SANITIZER_FLAGS} -Wno-user-defined-literals -Wno-suggest-override)
-target_include_directories(cxx-benchmarks-flags INTERFACE "${LIBCXX_GENERATED_INCLUDE_DIR}"
- INTERFACE "${BENCHMARK_INSTALL_DIR}/include"
- INTERFACE "${LIBCXX_SOURCE_DIR}/test/support")
-target_link_options(cxx-benchmarks-flags INTERFACE -lm -nostdlib++
- "-L${BENCHMARK_INSTALL_DIR}/lib" "-L${BENCHMARK_INSTALL_DIR}/lib64"
- ${SANITIZER_FLAGS})
-
-set(libcxx_benchmark_targets)
-
-function(add_benchmark_test name source_file)
- set(libcxx_target ${name}_libcxx)
- list(APPEND libcxx_benchmark_targets ${libcxx_target})
- add_executable(${libcxx_target} EXCLUDE_FROM_ALL ${source_file})
- target_link_libraries(${libcxx_target} PRIVATE cxx-benchmarks-flags)
- add_dependencies(${libcxx_target} cxx google-benchmark)
- add_dependencies(cxx-benchmarks ${libcxx_target})
- if (LIBCXX_ENABLE_SHARED)
- target_link_libraries(${libcxx_target} PRIVATE cxx_shared)
- else()
- target_link_libraries(${libcxx_target} PRIVATE cxx_static)
- endif()
- target_link_libraries(${libcxx_target} PRIVATE cxx_experimental benchmark)
- if (LLVM_USE_SANITIZER)
- target_link_libraries(${libcxx_target} PRIVATE -ldl)
- endif()
- set_target_properties(${libcxx_target}
- PROPERTIES
- OUTPUT_NAME "${name}.bench.out"
- RUNTIME_OUTPUT_DIRECTORY "${BENCHMARK_OUTPUT_DIR}"
- CXX_EXTENSIONS NO)
- cxx_link_system_libraries(${libcxx_target})
-endfunction()
-
-
-#==============================================================================
-# Register Benchmark tests
-#==============================================================================
-set(BENCHMARK_TESTS
- algorithms.partition_point.bench.cpp
- algorithms/count.bench.cpp
- algorithms/equal.bench.cpp
- algorithms/find.bench.cpp
- algorithms/fill.bench.cpp
- algorithms/for_each.bench.cpp
- algorithms/lower_bound.bench.cpp
- algorithms/make_heap.bench.cpp
- algorithms/make_heap_then_sort_heap.bench.cpp
- algorithms/min.bench.cpp
- algorithms/minmax.bench.cpp
- algorithms/min_max_element.bench.cpp
- algorithms/mismatch.bench.cpp
- algorithms/pop_heap.bench.cpp
- algorithms/pstl.stable_sort.bench.cpp
- algorithms/push_heap.bench.cpp
- algorithms/ranges_contains.bench.cpp
- algorithms/ranges_ends_with.bench.cpp
- algorithms/ranges_make_heap.bench.cpp
- algorithms/ranges_make_heap_then_sort_heap.bench.cpp
- algorithms/ranges_pop_heap.bench.cpp
- algorithms/ranges_push_heap.bench.cpp
- algorithms/ranges_sort.bench.cpp
- algorithms/ranges_sort_heap.bench.cpp
- algorithms/ranges_stable_sort.bench.cpp
- algorithms/set_intersection.bench.cpp
- algorithms/sort.bench.cpp
- algorithms/sort_heap.bench.cpp
- algorithms/stable_sort.bench.cpp
- atomic_wait.bench.cpp
- atomic_wait_vs_mutex_lock.bench.cpp
- libcxxabi/dynamic_cast.bench.cpp
- libcxxabi/dynamic_cast_old_stress.bench.cpp
- allocation.bench.cpp
- deque.bench.cpp
- deque_iterator.bench.cpp
- exception_ptr.bench.cpp
- filesystem.bench.cpp
- format_to_n.bench.cpp
- format_to.bench.cpp
- format.bench.cpp
- formatted_size.bench.cpp
- formatter_float.bench.cpp
- formatter_int.bench.cpp
- function.bench.cpp
- join_view.bench.cpp
- lexicographical_compare_three_way.bench.cpp
- map.bench.cpp
- monotonic_buffer.bench.cpp
- numeric/gcd.bench.cpp
- ordered_set.bench.cpp
- shared_mutex_vs_mutex.bench.cpp
- stop_token.bench.cpp
- std_format_spec_string_unicode.bench.cpp
- std_format_spec_string_unicode_escape.bench.cpp
- string.bench.cpp
- stringstream.bench.cpp
- system_error.bench.cpp
- to_chars.bench.cpp
- unordered_set_operations.bench.cpp
- util_smartptr.bench.cpp
- variant_visit_1.bench.cpp
- variant_visit_2.bench.cpp
- variant_visit_3.bench.cpp
- vector_operations.bench.cpp
- )
-
-foreach(test_path ${BENCHMARK_TESTS})
- get_filename_component(test_file "${test_path}" NAME)
- string(REPLACE ".bench.cpp" "" test_name "${test_file}")
- if (NOT DEFINED ${test_name}_REPORTED)
- message(STATUS "Adding Benchmark: ${test_file}")
- # Only report the adding of the benchmark once.
- set(${test_name}_REPORTED ON CACHE INTERNAL "")
- endif()
- add_benchmark_test(${test_name} ${test_path})
-endforeach()
-
-if (LIBCXX_INCLUDE_TESTS)
- include(AddLLVM)
-
- configure_lit_site_cfg(
- ${CMAKE_CURRENT_SOURCE_DIR}/lit.cfg.py.in
- ${CMAKE_CURRENT_BINARY_DIR}/lit.cfg.py)
-
- configure_lit_site_cfg(
- ${CMAKE_CURRENT_SOURCE_DIR}/lit.site.cfg.py.in
- ${CMAKE_CURRENT_BINARY_DIR}/lit.site.cfg.py)
-
- set(BENCHMARK_LIT_ARGS "--show-all --show-xfail --show-unsupported ${LIT_ARGS_DEFAULT}")
-
- add_lit_target(check-cxx-benchmarks
- "Running libcxx benchmarks tests"
- ${CMAKE_CURRENT_BINARY_DIR}
- DEPENDS cxx-benchmarks cxx-test-depends
- ARGS ${BENCHMARK_LIT_ARGS})
-endif()
+add_dependencies(cxx-test-depends google-benchmark)
diff --git a/libcxx/test/benchmarks/CartesianBenchmarks.h b/libcxx/test/benchmarks/CartesianBenchmarks.h
index eca4e15cd009b..c712230843ebf 100644
--- a/libcxx/test/benchmarks/CartesianBenchmarks.h
+++ b/libcxx/test/benchmarks/CartesianBenchmarks.h
@@ -27,11 +27,11 @@ constexpr auto makeEnumValueTuple(std::index_sequence<Idxs...>) {
}
template <class B>
-static auto skip(const B& Bench, int) -> decltype(Bench.skip()) {
+auto skip(const B& Bench, int) -> decltype(Bench.skip()) {
return Bench.skip();
}
template <class B>
-static auto skip(const B& Bench, char) {
+auto skip(const B&, char) {
return false;
}
@@ -51,7 +51,7 @@ void makeBenchmarkFromValues(const std::vector<std::tuple<Args...> >& A) {
}
template <template <class...> class B, class Args, class... U>
-void makeBenchmarkImpl(const Args& A, std::tuple<U...> t) {
+void makeBenchmarkImpl(const Args& A, std::tuple<U...>) {
makeBenchmarkFromValues<B<U...> >(A);
}
diff --git a/libcxx/test/benchmarks/ContainerBenchmarks.h b/libcxx/test/benchmarks/ContainerBenchmarks.h
index 744505b439985..e572792d9fe5f 100644
--- a/libcxx/test/benchmarks/ContainerBenchmarks.h
+++ b/libcxx/test/benchmarks/ContainerBenchmarks.h
@@ -150,7 +150,7 @@ void BM_EmplaceDuplicate(benchmark::State& st, Container c, GenInputs gen) {
}
template <class Container, class GenInputs>
-static void BM_Find(benchmark::State& st, Container c, GenInputs gen) {
+void BM_Find(benchmark::State& st, Container c, GenInputs gen) {
auto in = gen(st.range(0));
c.insert(in.begin(), in.end());
benchmark::DoNotOptimize(&(*c.begin()));
@@ -164,7 +164,7 @@ static void BM_Find(benchmark::State& st, Container c, GenInputs gen) {
}
template <class Container, class GenInputs>
-static void BM_FindRehash(benchmark::State& st, Container c, GenInputs gen) {
+void BM_FindRehash(benchmark::State& st, Container c, GenInputs gen) {
c.rehash(8);
auto in = gen(st.range(0));
c.insert(in.begin(), in.end());
@@ -179,7 +179,7 @@ static void BM_FindRehash(benchmark::State& st, Container c, GenInputs gen) {
}
template <class Container, class GenInputs>
-static void BM_Rehash(benchmark::State& st, Container c, GenInputs gen) {
+void BM_Rehash(benchmark::State& st, Container c, GenInputs gen) {
auto in = gen(st.range(0));
c.max_load_factor(3.0);
c.insert(in.begin(), in.end());
@@ -193,7 +193,7 @@ static void BM_Rehash(benchmark::State& st, Container c, GenInputs gen) {
}
template <class Container, class GenInputs>
-static void BM_Compare_same_container(benchmark::State& st, Container, GenInputs gen) {
+void BM_Compare_same_container(benchmark::State& st, Container, GenInputs gen) {
auto in = gen(st.range(0));
Container c1(in.begin(), in.end());
Container c2 = c1;
@@ -208,7 +208,7 @@ static void BM_Compare_same_container(benchmark::State& st, Container, GenInputs
}
template <class Container, class GenInputs>
-static void BM_Compare_different_containers(benchmark::State& st, Container, GenInputs gen) {
+void BM_Compare_different_containers(benchmark::State& st, Container, GenInputs gen) {
auto in1 = gen(st.range(0));
auto in2 = gen(st.range(0));
Container c1(in1.begin(), in1.end());
diff --git a/libcxx/test/benchmarks/algorithms.partition_point.bench.cpp b/libcxx/test/benchmarks/algorithms.partition_point.bench.cpp
index ed2e337fa6ea3..42ebce8ad2f4a 100644
--- a/libcxx/test/benchmarks/algorithms.partition_point.bench.cpp
+++ b/libcxx/test/benchmarks/algorithms.partition_point.bench.cpp
@@ -6,6 +6,8 @@
//
//===----------------------------------------------------------------------===//
+// UNSUPPORTED: c++03, c++11, c++14
+
#include <algorithm>
#include <array>
#include <cassert>
diff --git a/libcxx/test/benchmarks/algorithms/count.bench.cpp b/libcxx/test/benchmarks/algorithms/count.bench.cpp
index 7370293fd6efd..46b85e909efa5 100644
--- a/libcxx/test/benchmarks/algorithms/count.bench.cpp
+++ b/libcxx/test/benchmarks/algorithms/count.bench.cpp
@@ -6,6 +6,8 @@
//
//===----------------------------------------------------------------------===//
+// UNSUPPORTED: c++03, c++11, c++14, c++17
+
#include <algorithm>
#include <benchmark/benchmark.h>
#include <cstring>
diff --git a/libcxx/test/benchmarks/algorithms/equal.bench.cpp b/libcxx/test/benchmarks/algorithms/equal.bench.cpp
index 6d63d8c48ce1e..2dc11585c15c7 100644
--- a/libcxx/test/benchmarks/algorithms/equal.bench.cpp
+++ b/libcxx/test/benchmarks/algorithms/equal.bench.cpp
@@ -6,6 +6,8 @@
//
//===----------------------------------------------------------------------===//
+// UNSUPPORTED: c++03, c++11, c++14, c++17
+
#include <algorithm>
#include <benchmark/benchmark.h>
#include <vector>
diff --git a/libcxx/test/benchmarks/algorithms/fill.bench.cpp b/libcxx/test/benchmarks/algorithms/fill.bench.cpp
index 40f37425c394c..c157b5e5c9862 100644
--- a/libcxx/test/benchmarks/algorithms/fill.bench.cpp
+++ b/libcxx/test/benchmarks/algorithms/fill.bench.cpp
@@ -6,6 +6,8 @@
//
//===----------------------------------------------------------------------===//
+// UNSUPPORTED: c++03, c++11, c++14, c++17, c++20
+
#include <algorithm>
#include <benchmark/benchmark.h>
#include <vector>
diff --git a/libcxx/test/benchmarks/algorithms/find.bench.cpp b/libcxx/test/benchmarks/algorithms/find.bench.cpp
index 6ff2d95ab4353..43d103474ebdf 100644
--- a/libcxx/test/benchmarks/algorithms/find.bench.cpp
+++ b/libcxx/test/benchmarks/algorithms/find.bench.cpp
@@ -6,6 ...
[truncated]
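For reference, a minimal benchmark following the ``FOO.bench.cpp`` convention documented in this patch might look like the sketch below. It uses the standard Google Benchmark API, and the ``UNSUPPORTED:`` Lit directive mirrors the ones added to existing benchmarks in the diff; the benchmark name and workload are illustrative, not taken from the patch:

```cpp
// UNSUPPORTED: c++03, c++11, c++14, c++17

#include <algorithm>
#include <cstddef>
#include <vector>

#include <benchmark/benchmark.h>

// Micro-benchmark for std::find over a std::vector<int>. The element
// searched for sits at the end, so the whole range is always scanned.
static void bm_find(benchmark::State& state) {
  std::vector<int> v(static_cast<std::size_t>(state.range(0)), 0);
  v.back() = 1;
  for (auto _ : state) {
    auto it = std::find(v.begin(), v.end(), 1);
    benchmark::DoNotOptimize(it); // keep the result observable
  }
}
BENCHMARK(bm_find)->Range(8, 1 << 16);

BENCHMARK_MAIN();
```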
@llvm/pr-subscribers-github-workflow

Author: Louis Dionne (ldionne)
- string(REPLACE ".bench.cpp" "" test_name "${test_file}")
- if (NOT DEFINED ${test_name}_REPORTED)
- message(STATUS "Adding Benchmark: ${test_file}")
- # Only report the adding of the benchmark once.
- set(${test_name}_REPORTED ON CACHE INTERNAL "")
- endif()
- add_benchmark_test(${test_name} ${test_path})
-endforeach()
-
-if (LIBCXX_INCLUDE_TESTS)
- include(AddLLVM)
-
- configure_lit_site_cfg(
- ${CMAKE_CURRENT_SOURCE_DIR}/lit.cfg.py.in
- ${CMAKE_CURRENT_BINARY_DIR}/lit.cfg.py)
-
- configure_lit_site_cfg(
- ${CMAKE_CURRENT_SOURCE_DIR}/lit.site.cfg.py.in
- ${CMAKE_CURRENT_BINARY_DIR}/lit.site.cfg.py)
-
- set(BENCHMARK_LIT_ARGS "--show-all --show-xfail --show-unsupported ${LIT_ARGS_DEFAULT}")
-
- add_lit_target(check-cxx-benchmarks
- "Running libcxx benchmarks tests"
- ${CMAKE_CURRENT_BINARY_DIR}
- DEPENDS cxx-benchmarks cxx-test-depends
- ARGS ${BENCHMARK_LIT_ARGS})
-endif()
+add_dependencies(cxx-test-depends google-benchmark)
diff --git a/libcxx/test/benchmarks/CartesianBenchmarks.h b/libcxx/test/benchmarks/CartesianBenchmarks.h
index eca4e15cd009b..c712230843ebf 100644
--- a/libcxx/test/benchmarks/CartesianBenchmarks.h
+++ b/libcxx/test/benchmarks/CartesianBenchmarks.h
@@ -27,11 +27,11 @@ constexpr auto makeEnumValueTuple(std::index_sequence<Idxs...>) {
}
template <class B>
-static auto skip(const B& Bench, int) -> decltype(Bench.skip()) {
+auto skip(const B& Bench, int) -> decltype(Bench.skip()) {
return Bench.skip();
}
template <class B>
-static auto skip(const B& Bench, char) {
+auto skip(const B&, char) {
return false;
}
@@ -51,7 +51,7 @@ void makeBenchmarkFromValues(const std::vector<std::tuple<Args...> >& A) {
}
template <template <class...> class B, class Args, class... U>
-void makeBenchmarkImpl(const Args& A, std::tuple<U...> t) {
+void makeBenchmarkImpl(const Args& A, std::tuple<U...>) {
makeBenchmarkFromValues<B<U...> >(A);
}
diff --git a/libcxx/test/benchmarks/ContainerBenchmarks.h b/libcxx/test/benchmarks/ContainerBenchmarks.h
index 744505b439985..e572792d9fe5f 100644
--- a/libcxx/test/benchmarks/ContainerBenchmarks.h
+++ b/libcxx/test/benchmarks/ContainerBenchmarks.h
@@ -150,7 +150,7 @@ void BM_EmplaceDuplicate(benchmark::State& st, Container c, GenInputs gen) {
}
template <class Container, class GenInputs>
-static void BM_Find(benchmark::State& st, Container c, GenInputs gen) {
+void BM_Find(benchmark::State& st, Container c, GenInputs gen) {
auto in = gen(st.range(0));
c.insert(in.begin(), in.end());
benchmark::DoNotOptimize(&(*c.begin()));
@@ -164,7 +164,7 @@ static void BM_Find(benchmark::State& st, Container c, GenInputs gen) {
}
template <class Container, class GenInputs>
-static void BM_FindRehash(benchmark::State& st, Container c, GenInputs gen) {
+void BM_FindRehash(benchmark::State& st, Container c, GenInputs gen) {
c.rehash(8);
auto in = gen(st.range(0));
c.insert(in.begin(), in.end());
@@ -179,7 +179,7 @@ static void BM_FindRehash(benchmark::State& st, Container c, GenInputs gen) {
}
template <class Container, class GenInputs>
-static void BM_Rehash(benchmark::State& st, Container c, GenInputs gen) {
+void BM_Rehash(benchmark::State& st, Container c, GenInputs gen) {
auto in = gen(st.range(0));
c.max_load_factor(3.0);
c.insert(in.begin(), in.end());
@@ -193,7 +193,7 @@ static void BM_Rehash(benchmark::State& st, Container c, GenInputs gen) {
}
template <class Container, class GenInputs>
-static void BM_Compare_same_container(benchmark::State& st, Container, GenInputs gen) {
+void BM_Compare_same_container(benchmark::State& st, Container, GenInputs gen) {
auto in = gen(st.range(0));
Container c1(in.begin(), in.end());
Container c2 = c1;
@@ -208,7 +208,7 @@ static void BM_Compare_same_container(benchmark::State& st, Container, GenInputs
}
template <class Container, class GenInputs>
-static void BM_Compare_different_containers(benchmark::State& st, Container, GenInputs gen) {
+void BM_Compare_different_containers(benchmark::State& st, Container, GenInputs gen) {
auto in1 = gen(st.range(0));
auto in2 = gen(st.range(0));
Container c1(in1.begin(), in1.end());
diff --git a/libcxx/test/benchmarks/algorithms.partition_point.bench.cpp b/libcxx/test/benchmarks/algorithms.partition_point.bench.cpp
index ed2e337fa6ea3..42ebce8ad2f4a 100644
--- a/libcxx/test/benchmarks/algorithms.partition_point.bench.cpp
+++ b/libcxx/test/benchmarks/algorithms.partition_point.bench.cpp
@@ -6,6 +6,8 @@
//
//===----------------------------------------------------------------------===//
+// UNSUPPORTED: c++03, c++11, c++14
+
#include <algorithm>
#include <array>
#include <cassert>
diff --git a/libcxx/test/benchmarks/algorithms/count.bench.cpp b/libcxx/test/benchmarks/algorithms/count.bench.cpp
index 7370293fd6efd..46b85e909efa5 100644
--- a/libcxx/test/benchmarks/algorithms/count.bench.cpp
+++ b/libcxx/test/benchmarks/algorithms/count.bench.cpp
@@ -6,6 +6,8 @@
//
//===----------------------------------------------------------------------===//
+// UNSUPPORTED: c++03, c++11, c++14, c++17
+
#include <algorithm>
#include <benchmark/benchmark.h>
#include <cstring>
diff --git a/libcxx/test/benchmarks/algorithms/equal.bench.cpp b/libcxx/test/benchmarks/algorithms/equal.bench.cpp
index 6d63d8c48ce1e..2dc11585c15c7 100644
--- a/libcxx/test/benchmarks/algorithms/equal.bench.cpp
+++ b/libcxx/test/benchmarks/algorithms/equal.bench.cpp
@@ -6,6 +6,8 @@
//
//===----------------------------------------------------------------------===//
+// UNSUPPORTED: c++03, c++11, c++14, c++17
+
#include <algorithm>
#include <benchmark/benchmark.h>
#include <vector>
diff --git a/libcxx/test/benchmarks/algorithms/fill.bench.cpp b/libcxx/test/benchmarks/algorithms/fill.bench.cpp
index 40f37425c394c..c157b5e5c9862 100644
--- a/libcxx/test/benchmarks/algorithms/fill.bench.cpp
+++ b/libcxx/test/benchmarks/algorithms/fill.bench.cpp
@@ -6,6 +6,8 @@
//
//===----------------------------------------------------------------------===//
+// UNSUPPORTED: c++03, c++11, c++14, c++17, c++20
+
#include <algorithm>
#include <benchmark/benchmark.h>
#include <vector>
diff --git a/libcxx/test/benchmarks/algorithms/find.bench.cpp b/libcxx/test/benchmarks/algorithms/find.bench.cpp
index 6ff2d95ab4353..43d103474ebdf 100644
--- a/libcxx/test/benchmarks/algorithms/find.bench.cpp
+++ b/libcxx/test/benchmarks/algorithms/find.bench.cpp
@@ -6,6 ...
[truncated]
Review note: I will split the first commit into a separate PR once I have addressed all the issues that arise from running the benchmarks as part of the test suite.
You can test this locally with the following command:

darker --check --diff -r 6563ed3162d16e7f067dda554e96d0c9d476f207...55800ff2f0a20258762da79494febc2ae10b9424 libcxx/utils/libcxx/test/config.py libcxx/utils/libcxx/test/format.py libcxx/utils/libcxx/test/params.py

View the diff from darker here:

--- config.py	2024-10-30 18:38:40.000000 +0000
+++ config.py 2024-10-30 21:17:34.312862 +0000
@@ -50,10 +50,17 @@
action.pretty(config, lit_config.params), feature.pretty(config)
)
)
# Print the basic substitutions
- for sub in ("%{cxx}", "%{flags}", "%{compile_flags}", "%{link_flags}", "%{benchmark_flags}", "%{exec}"):
+ for sub in (
+ "%{cxx}",
+ "%{flags}",
+ "%{compile_flags}",
+ "%{link_flags}",
+ "%{benchmark_flags}",
+ "%{exec}",
+ ):
note("Using {} substitution: '{}'".format(sub, _getSubstitution(sub, config)))
# Print all available features
note("All available features: {}".format(", ".join(sorted(config.available_features))))
--- format.py 2024-10-30 18:38:40.000000 +0000
+++ format.py 2024-10-30 21:17:34.424040 +0000
@@ -27,11 +27,18 @@
return tmpDir, tmpBase
def _checkBaseSubstitutions(substitutions):
substitutions = [s for (s, _) in substitutions]
- for s in ["%{cxx}", "%{compile_flags}", "%{link_flags}", "%{benchmark_flags}", "%{flags}", "%{exec}"]:
+ for s in [
+ "%{cxx}",
+ "%{compile_flags}",
+ "%{link_flags}",
+ "%{benchmark_flags}",
+ "%{flags}",
+ "%{exec}",
+ ]:
assert s in substitutions, "Required substitution {} was not provided".format(s)
def _executeScriptInternal(test, litConfig, commands):
"""
Returns (stdout, stderr, exitCode, timeoutInfo, parsedCommands)
After this patch, where are the benchmark executables stored? When benchmarking I typically want to run the before and after versions.
Are the benchmarks still executed in the benchmark CI configuration?
libcxx/docs/TestingLibcxx.rst (Outdated)

The benchmarks are located under ``libcxx/test/benchmarks``. Running a benchmark
works in the same way as running a test. Both the benchmarks and the tests share
the same configuration, so make sure to enable the relevant optimization level
when running the benchmarks.
This feels really error-prone to me and might run benchmarks at lower optimization levels than expected.
I feel you, but it still seems like the right thing to do. We want to be able to run the benchmarks under arbitrary optimization levels so we can e.g. evaluate the performance of an algorithm when optimizations are disabled. This calls for not hardcoding any optimization level when running the benchmarks, just like we don't when we run the tests.
The benchmarks are stored in the same location as a normal test executable is stored. For example:

However, the biggest benefit of this patch is that you don't need to care anymore. You can simply run a benchmark like you run a test, with
I removed that CI configuration. After this patch, the benchmarks are always dry-run as part of the test suite, but we don't have a dedicated CI job that runs the full benchmarks. We were not gaining anything from having that since we never looked at those results. Instead, as a follow-up I would like to work on setting up a dedicated machine that we can run the benchmarks on and ideally even have something to prevent regressions. That CI job could be triggered by a comment in the PR or something like that: we probably don't want to run it all the time since we'd just be wasting cycles. |
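The invocation pattern being described can be sketched as follows; the build directory name and the exact benchmark path are assumptions, adjust them to your checkout:

```shell
# Sketch only: run a single benchmark through the lit wrapper, exactly like a
# test, selecting the optimization level explicitly via a test-suite parameter.
# "<build>" is a placeholder for your CMake build directory.
./libcxx/utils/libcxx-lit <build> -sv \
    libcxx/test/benchmarks/algorithms/find.bench.cpp \
    --param optimization=speed
```

Since the benchmark is treated as a regular test, the same command also honors the usual lit options (`-sv`, filters, and other `--param` settings).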
I do care. I sometimes like to compare benchmarks from different iterations. Currently, when I build a benchmark, I get a message saying where the executable is stored. The new approach makes it harder to find the location of the executable. I wonder whether we can show a message saying where the benchmark is stored, maybe always print the location in

In the case where I only want to run it once, the new approach is indeed nicer.
I see. |
I really like this. Not having to figure out how to build and run the benchmarks every time would be quite nice. It's also quite nice to be able to run the benchmarks in different configurations more easily.
This patch fixes a few warnings and errors when running the benchmarks as part of the test suite. It also adds most of the necessary Lit annotations to make it pass on various configurations.
Instead of building the benchmarks separately via CMake and running them separately from the test suite, this patch merges the benchmarks into the test suite and handles both uniformly. As a result:

- It is now possible to run individual benchmarks like we run tests (e.g. using libcxx-lit), which is a huge quality-of-life improvement.
- The benchmarks will be run under exactly the same configuration as the rest of the tests, which is a nice simplification. This does mean that one has to be careful to enable the desired optimization flags when running benchmarks, but that is easy with e.g. `libcxx-lit <...> --param optimization=speed`.
- Benchmarks can use the same annotations as the rest of the test suite, such as `// UNSUPPORTED` & friends.

When running the tests via `check-cxx`, we only compile the benchmarks because running them would be too time consuming. This introduces a bit of complexity in the testing setup, and instead it would be better to allow passing a --dry-run flag to GoogleBenchmark executables, which is the topic of a GoogleBenchmark issue.

I am not really satisfied with the layering violation of adding the %{benchmark_flags} substitution to cmake-bridge, however I believe this can be improved in the future.
Instead of building the benchmarks separately via CMake and running them
separately from the test suite, this patch merges the benchmarks into
the test suite and handles both uniformly.

As a result:

- It is now possible to run individual benchmarks like we run tests (e.g. using libcxx-lit), which is a huge quality-of-life improvement.
- The benchmarks will be run under exactly the same configuration as the rest of the tests, which is a nice simplification. This does mean that one has to be careful to enable the desired optimization flags when running benchmarks, but that is easy with e.g. `libcxx-lit <...> --param optimization=speed`.
- Benchmarks can use the same annotations as the rest of the test suite, such as `// UNSUPPORTED` & friends.

When running the tests via `check-cxx`, we only compile the benchmarks
because running them would be too time consuming. This introduces a bit
of complexity in the testing setup, and instead it would be better to
allow passing a --dry-run flag to GoogleBenchmark executables, which is
the topic of google/benchmark#1827.

I am not really satisfied with the layering violation of adding the
`%{benchmark_flags}` substitution to cmake-bridge, however I believe
this can be improved in the future.
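As a concrete sketch of the unified workflow described above (the build directory name and the benchmark path are assumptions, not part of the patch):

```shell
# Build the library and compile (but not run) the benchmarks together with
# the rest of the test suite:
ninja -C build check-cxx

# Execute an individual benchmark like any other test, opting into
# optimizations explicitly:
./libcxx/utils/libcxx-lit build -sv \
    libcxx/test/benchmarks/string.bench.cpp \
    --param optimization=speed
```

Because the benchmarks share the test-suite configuration, any other test-suite parameter (standard version, sanitizers, etc.) applies to them in the same way.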