[libc] Correctly Run Multiple Benchmarks in the Same File #98467

Merged: 1 commit into llvm:main on Jul 11, 2024

Conversation

jameshu15869 (Contributor)

Previously, registering multiple benchmarks in the same file reported results only for the last benchmark that ran. This PR fixes the issue.

@jhuber6
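
To illustrate the failure mode, here is a minimal, self-contained host C++ sketch (not the actual libc GPU runner; the Benchmark struct, bench_a/bench_b, and the single result slot are simplified stand-ins for the benchmarks list, results[id], and reduce_results used in the diff below):

#include <cstdint>
#include <cstdio>
#include <vector>

struct Benchmark {
  const char *name;
  uint64_t (*func)();
  uint64_t run() const { return func(); }
};

static uint64_t bench_a() { return 10; }
static uint64_t bench_b() { return 42; }

int main() {
  std::vector<Benchmark> benchmarks = {{"A", bench_a}, {"B", bench_b}};
  uint64_t result = 0; // stands in for the shared results[id] slot

  // Old shape: every run() overwrites the shared slot, and reporting happens
  // only after the loop, so both reports show benchmark B's value.
  for (const Benchmark &b : benchmarks)
    result = b.run();
  for (const Benchmark &b : benchmarks)
    std::printf("[old] %s: %llu cycles\n", b.name, (unsigned long long)result);

  // New shape (as in this PR): report inside the loop, so each benchmark's
  // own result is consumed before the next benchmark overwrites the slot.
  for (const Benchmark &b : benchmarks) {
    result = b.run();
    std::printf("[new] %s: %llu cycles\n", b.name, (unsigned long long)result);
  }
  return 0;
}

In the real GPU runner the fix also moves gpu::sync_threads() inside the loop, so every thread finishes a given benchmark before thread 0 reduces and prints that benchmark's results.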

@llvmbot llvmbot added the libc label Jul 11, 2024
llvmbot (Collaborator) commented Jul 11, 2024


@llvm/pr-subscribers-libc

Author: None (jameshu15869)

Changes

Previously, registering multiple benchmarks in the same file reported results only for the last benchmark that ran. This PR fixes the issue.

@jhuber6


Full diff: https://github.com/llvm/llvm-project/pull/98467.diff

2 Files Affected:

  • (modified) libc/benchmarks/gpu/LibcGpuBenchmark.cpp (+3-4)
  • (modified) libc/benchmarks/gpu/src/ctype/isalnum_benchmark.cpp (+13-1)
diff --git a/libc/benchmarks/gpu/LibcGpuBenchmark.cpp b/libc/benchmarks/gpu/LibcGpuBenchmark.cpp
index 7f60c9cc4a2f4..3dd83cef6d4df 100644
--- a/libc/benchmarks/gpu/LibcGpuBenchmark.cpp
+++ b/libc/benchmarks/gpu/LibcGpuBenchmark.cpp
@@ -52,11 +52,10 @@ void Benchmark::run_benchmarks() {
   uint64_t id = gpu::get_thread_id();
   gpu::sync_threads();
 
-  for (Benchmark *b : benchmarks)
+  for (Benchmark *b : benchmarks) {
     results[id] = b->run();
-  gpu::sync_threads();
-  if (id == 0) {
-    for (Benchmark const *b : benchmarks) {
+    gpu::sync_threads();
+    if (id == 0) {
       BenchmarkResult all_results = reduce_results(results);
       constexpr auto GREEN = "\033[32m";
       constexpr auto RESET = "\033[0m";
diff --git a/libc/benchmarks/gpu/src/ctype/isalnum_benchmark.cpp b/libc/benchmarks/gpu/src/ctype/isalnum_benchmark.cpp
index 4050bc0ec77b9..6f8d247902f76 100644
--- a/libc/benchmarks/gpu/src/ctype/isalnum_benchmark.cpp
+++ b/libc/benchmarks/gpu/src/ctype/isalnum_benchmark.cpp
@@ -6,4 +6,16 @@ uint64_t BM_IsAlnum() {
   char x = 'c';
   return LIBC_NAMESPACE::latency(LIBC_NAMESPACE::isalnum, x);
 }
-BENCHMARK(LlvmLibcIsAlNumGpuBenchmark, IsAlnumWrapper, BM_IsAlnum);
+BENCHMARK(LlvmLibcIsAlNumGpuBenchmark, IsAlnum, BM_IsAlnum);
+
+uint64_t BM_IsAlnumCapital() {
+  char x = 'A';
+  return LIBC_NAMESPACE::latency(LIBC_NAMESPACE::isalnum, x);
+}
+BENCHMARK(LlvmLibcIsAlNumGpuBenchmark, IsAlnumCapital, BM_IsAlnumCapital);
+
+uint64_t BM_IsAlnumNotAlnum() {
+  char x = '{';
+  return LIBC_NAMESPACE::latency(LIBC_NAMESPACE::isalnum, x);
+}
+BENCHMARK(LlvmLibcIsAlNumGpuBenchmark, IsAlnumNotAlnum, BM_IsAlnumNotAlnum);

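With the runner fix above, each BENCHMARK registered in a file now gets its own reduced report. For illustration only, a further benchmark would follow the same pattern as the ones added in this diff (the IsAlnumDigit name and the '2' input are hypothetical, not part of this PR):

// Hypothetical example only, mirroring the registrations above.
uint64_t BM_IsAlnumDigit() {
  char x = '2';
  return LIBC_NAMESPACE::latency(LIBC_NAMESPACE::isalnum, x);
}
BENCHMARK(LlvmLibcIsAlNumGpuBenchmark, IsAlnumDigit, BM_IsAlnumDigit);
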
@jhuber6 jhuber6 merged commit eeed589 into llvm:main Jul 11, 2024
6 of 7 checks passed
llvm-ci (Collaborator) commented Jul 11, 2024


LLVM Buildbot has detected a new failure on builder premerge-monolithic-linux running on premerge-linux-1 while building libc at step 7 "test-build-unified-tree-check-all".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/153/builds/2642

Here is the relevant piece of the build log for reference:

Step 7 (test-build-unified-tree-check-all) failure: test (failure)
...
PASS: lit :: allow-retries.py (92737 of 92746)
PASS: lit :: discovery.py (92738 of 92746)
PASS: lit :: shtest-external-shell-kill.py (92739 of 92746)
PASS: lit :: selecting.py (92740 of 92746)
PASS: lit :: googletest-timeout.py (92741 of 92746)
PASS: lit :: shtest-timeout.py (92742 of 92746)
PASS: lit :: shtest-shell.py (92743 of 92746)
PASS: lit :: shtest-define.py (92744 of 92746)
PASS: lit :: max-time.py (92745 of 92746)
TIMEOUT: LLVM :: ExecutionEngine/OrcLazy/multiple-compile-threads-basic.ll (92746 of 92746)
******************** TEST 'LLVM :: ExecutionEngine/OrcLazy/multiple-compile-threads-basic.ll' FAILED ********************
Exit Code: -9
Timeout: Reached timeout of 60 seconds

Command Output (stderr):
--
RUN: at line 1: /build/buildbot/premerge-monolithic-linux/build/bin/lli -jit-kind=orc-lazy -compile-threads=2 -thread-entry hello /build/buildbot/premerge-monolithic-linux/llvm-project/llvm/test/ExecutionEngine/OrcLazy/multiple-compile-threads-basic.ll | /build/buildbot/premerge-monolithic-linux/build/bin/FileCheck /build/buildbot/premerge-monolithic-linux/llvm-project/llvm/test/ExecutionEngine/OrcLazy/multiple-compile-threads-basic.ll
+ /build/buildbot/premerge-monolithic-linux/build/bin/lli -jit-kind=orc-lazy -compile-threads=2 -thread-entry hello /build/buildbot/premerge-monolithic-linux/llvm-project/llvm/test/ExecutionEngine/OrcLazy/multiple-compile-threads-basic.ll
+ /build/buildbot/premerge-monolithic-linux/build/bin/FileCheck /build/buildbot/premerge-monolithic-linux/llvm-project/llvm/test/ExecutionEngine/OrcLazy/multiple-compile-threads-basic.ll

--

********************
********************
Timed Out Tests (1):
  LLVM :: ExecutionEngine/OrcLazy/multiple-compile-threads-basic.ll


Testing Time: 340.20s

Total Discovered Tests: 118117
  Skipped          :     46 (0.04%)
  Unsupported      :   2869 (2.43%)
  Passed           : 114906 (97.28%)
  Expectedly Failed:    295 (0.25%)
  Timed Out        :      1 (0.00%)
FAILED: CMakeFiles/check-all /build/buildbot/premerge-monolithic-linux/build/CMakeFiles/check-all 
cd /build/buildbot/premerge-monolithic-linux/build && /usr/bin/python3.10 /build/buildbot/premerge-monolithic-linux/build/./bin/llvm-lit -v --param USE_Z3_SOLVER=0 --param flang_site_config=/build/buildbot/premerge-monolithic-linux/build/tools/flang/test/lit.site.cfg.py --param bolt_site_config=/build/buildbot/premerge-monolithic-linux/build/tools/bolt/test/lit.site.cfg --param polly_site_config=/build/buildbot/premerge-monolithic-linux/build/tools/polly/test/lit.site.cfg --param polly_unit_site_config=/build/buildbot/premerge-monolithic-linux/build/tools/polly/test/Unit/lit.site.cfg /build/buildbot/premerge-monolithic-linux/build/utils/mlgo-utils /build/buildbot/premerge-monolithic-linux/build/tools/lld/test /build/buildbot/premerge-monolithic-linux/build/tools/mlir/test /build/buildbot/premerge-monolithic-linux/build/tools/clang/tools/extra/include-cleaner/test /build/buildbot/premerge-monolithic-linux/build/tools/clang/tools/extra/pseudo/test /build/buildbot/premerge-monolithic-linux/build/tools/clang/tools/extra/clangd/test/../unittests /build/buildbot/premerge-monolithic-linux/build/tools/clang/tools/extra/clangd/test /build/buildbot/premerge-monolithic-linux/build/tools/clang/tools/extra/test /build/buildbot/premerge-monolithic-linux/build/tools/clang/test /build/buildbot/premerge-monolithic-linux/build/tools/flang/test /build/buildbot/premerge-monolithic-linux/build/tools/bolt/test /build/buildbot/premerge-monolithic-linux/build/tools/polly/test @/build/buildbot/premerge-monolithic-linux/build/runtimes/runtimes-bins/lit.tests /build/buildbot/premerge-monolithic-linux/build/utils/lit /build/buildbot/premerge-monolithic-linux/build/test
ninja: build stopped: subcommand failed.

aaryanshukla pushed a commit to aaryanshukla/llvm-project that referenced this pull request Jul 14, 2024
There was previously an issue where registering multiple benchmarks in
the same file would only give the results for the last benchmark to run.
This PR fixes the issue.

@jhuber6