Welcome to the llvm-project-cxxjit wiki! This is a fork of LLVM in which Clang is enhanced with just-in-time (JIT) compilation functionality.
Clang JIT extensions are designed to address two key challenges of C++ programming:
- **Long compile times** - Instantiating many C++ templates in order to provide high-performance, customized behaviors necessarily generates long compile times. In some cases, only a subset of these instantiations is actually used at runtime. In such cases, compile time can be reduced by delaying the relevant template instantiations until runtime.
- **Performance** - While C++ is generally one of the highest-performance languages available, there are cases where runtime-specialized code can obtain significantly better performance than a more-generic, ahead-of-time-compiled C++ implementation. In C++, a programmer can choose to instantiate specialized instances of particular algorithms and then dispatch at runtime between those pre-defined choices (see the sketch below). This, however, is a major contributor to the aforementioned compile-time problem.
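To make that traditional pattern concrete, here is a minimal sketch (not from the original page; `kernel` and `kernel_generic` are illustrative placeholders): a handful of specializations are instantiated ahead of time, and a runtime switch dispatches between them.

```cpp
#include <iostream>

// Specialized, high-performance version for a compile-time-known size.
template <int N> void kernel() {
  std::cout << "specialized kernel, N = " << N << "\n";
}

// Generic fallback for sizes that were not pre-instantiated.
void kernel_generic(int n) {
  std::cout << "generic kernel, n = " << n << "\n";
}

// Runtime dispatch between the pre-defined choices; every case here adds
// another instantiation that must be compiled ahead of time.
void dispatch(int n) {
  switch (n) {
  case 4:  kernel<4>();  break;
  case 8:  kernel<8>();  break;
  case 16: kernel<16>(); break;
  default: kernel_generic(n); break;
  }
}

int main() {
  dispatch(8); // hits the pre-instantiated kernel<8>
  dispatch(5); // falls back to the generic version
}
```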
To enable JIT support when using Clang, add the command-line flag

```
-fjit
```

when both compiling and linking the application or shared library.
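Code that must also build with compilers lacking this extension can test for the `clang_cxx_jit` feature: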
```cpp
#ifndef __has_feature // For compatibility with non-Clang compilers.
#define __has_feature(x) 0
#endif

#if __has_feature(clang_cxx_jit)
// Code here depending on the JIT
#endif
```
By itself, the command-line flag affects some linking details, but it does not enable any use of just-in-time compilation. Function templates can be tagged for just-in-time compilation by using the attribute:

```cpp
[[clang::jit]]
```
A function template with this attribute gains additional features and is subject to additional restrictions. Features:

- Instantiations of this function template will not be constructed at compile time; rather, calling a specialization of the template, or taking the address of a specialization of the template, will trigger the instantiation and compilation of the template at runtime.
- Non-constant expressions may be provided for the non-type template parameters, and these values will be used at runtime to construct the type of the requested instantiation. For example:
```console
$ cat /tmp/jit.cpp
#include <iostream>
#include <cstdlib>

template <int x>
[[clang::jit]] void run() {
  std::cout << "I was compiled at runtime, x = " << x << "\n";
}

int main(int argc, char *argv[]) {
  int a = std::atoi(argv[1]);
  run<a>();
}

$ clang++ -O3 -fjit -o /tmp/jit /tmp/jit.cpp
$ /tmp/jit 5
I was compiled at runtime, x = 5
```
- Type arguments to the template can be provided as strings. If the argument is implicitly convertible to a `const char *`, then that conversion is performed, and the result is used to identify the requested type. Otherwise, if an object is provided, and that object has a member function named `c_str()`, and the result of that function can be converted to a `const char *`, then the call and conversion (if necessary) are performed in order to get a string used to identify the type. The string is parsed and analyzed to identify the type in the declaration context of the parent of the function triggering the instantiation. Whether types defined after the point in the source code that triggers the instantiation are available is not specified. For example:
```console
$ cat /tmp/jit-t.cpp
#include <iostream>
#include <string>

struct F {
  int i;
  double d;
};

template <typename T, int S>
struct G {
  T arr[S];
};

template <typename T>
[[clang::jit]] void run() {
  std::cout << "I was compiled at runtime, sizeof(T) = " << sizeof(T) << "\n";
}

int main(int argc, char *argv[]) {
  std::string t(argv[1]);
  run<t>();
}

$ clang++ -O3 -fjit -o /tmp/jit-t /tmp/jit-t.cpp
$ /tmp/jit-t '::F'
I was compiled at runtime, sizeof(T) = 16
$ /tmp/jit-t 'F'
I was compiled at runtime, sizeof(T) = 16
$ /tmp/jit-t 'float'
I was compiled at runtime, sizeof(T) = 4
$ /tmp/jit-t 'double'
I was compiled at runtime, sizeof(T) = 8
$ /tmp/jit-t 'size_t'
I was compiled at runtime, sizeof(T) = 8
$ /tmp/jit-t 'std::size_t'
I was compiled at runtime, sizeof(T) = 8
$ /tmp/jit-t 'G<F, 5>'
I was compiled at runtime, sizeof(T) = 80
```
Restrictions:

- Because the body of the template is not instantiated at compile time, `decltype(auto)` and any other type-deduction mechanism depending on the body of the function are not available.
- Because the template specializations are not compiled until runtime, they're not available at compile time for use as non-type template arguments, etc.
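As a hypothetical illustration of the first restriction (this snippet is not from the original page), the return type of a JIT function template must be spelled out rather than deduced:

```cpp
// OK: the return type is explicit, so the body is not needed at compile time.
template <int x>
[[clang::jit]] int get() { return x; }

// Expected to be rejected: deducing the return type requires the body,
// which a [[clang::jit]] template does not instantiate until runtime.
// template <int x>
// [[clang::jit]] auto get2() { return x; }
```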
Note: Explicit specializations of a JIT function template are not just-in-time compiled.
Note: A JIT template with a pointer/reference non-type template parameter which is provided with a runtime pointer value will generate a different instantiation for each pointer value. If the pointer provided points to a global object, no attempt is made to map that pointer value back to the name of the global object when constructing the new type.
Note: In general, pointer/reference-type non-type template arguments are not permitted to point to subobjects. This restriction still applies formally to the templates instantiated at runtime using runtime-provided pointer values. This has important optimization benefits: pointers that can be traced back to distinct underlying objects are known not to alias, and these template parameters appear to the optimizer to have this unique-object property.
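A minimal sketch of the pointer case described in these notes (hypothetical code, assuming the behavior described above):

```cpp
#include <iostream>

int x = 1, y = 2; // globals with distinct addresses

template <int *P>
[[clang::jit]] void report() {
  // Each distinct runtime value of P yields its own instantiation, and the
  // optimizer may treat P as having the unique-object property.
  std::cout << "instantiated for pointer " << static_cast<void *>(P) << "\n";
}

int main(int argc, char *[]) {
  int *p = (argc > 1) ? &x : &y; // a runtime-provided pointer value
  report<p>(); // no attempt is made to map p back to the name of x or y
}
```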
To illustrate the feature, we'll adapt a benchmark from the Eigen library, specifically this one: https://github.com/eigenteam/eigen-git-mirror/blob/master/bench/benchmark.cpp

We want to look at two aspects: compile time and runtime performance. Eigen provides a matrix type which can have either compile-time-specified or runtime-specified sizes (i.e., the number of rows and columns). The adapted benchmark uses the following common preamble:
```cpp
#include <iostream>
#include <string>
#include <chrono>
#include <cstdlib>

#include <Eigen/Core>

using namespace std;
using namespace Eigen;
```
If we wish to support a variant of this benchmark that handles float, double, and long double, with the size specified at runtime, we can adapt the code as follows:
```cpp
template <typename T>
void test_aot(int size, int repeat) {
  Matrix<T,Dynamic,Dynamic> I = Matrix<T,Dynamic,Dynamic>::Ones(size, size);
  Matrix<T,Dynamic,Dynamic> m(size, size);
  for (int i = 0; i < size; i++)
    for (int j = 0; j < size; j++) {
      m(i,j) = (i+size*j);
    }

  auto start = chrono::system_clock::now();
  for (int r = 0; r < repeat; ++r) {
    m = Matrix<T,Dynamic,Dynamic>::Ones(size, size) + T(0.00005) * (m + (m*m));
  }
  auto end = chrono::system_clock::now();

  cout << "AoT: " << chrono::duration<double>(end - start).count() << " s\n";
}

void test_aot(std::string &type, int size, int repeat) {
  if (type == "float")
    test_aot<float>(size, repeat);
  else if (type == "double")
    test_aot<double>(size, repeat);
  else if (type == "long double")
    test_aot<long double>(size, repeat);
  else
    cout << type << " not supported for AoT\n";
}
```
To do the same thing with the Clang JIT extensions, we can write:
```cpp
template <typename T, int size>
[[clang::jit]] void test_jit_sz(int repeat) {
  Matrix<T,size,size> I = Matrix<T,size,size>::Ones();
  Matrix<T,size,size> m;
  for (int i = 0; i < size; i++)
    for (int j = 0; j < size; j++) {
      m(i,j) = (i+size*j);
    }

  auto start = chrono::system_clock::now();
  for (int r = 0; r < repeat; ++r) {
    m = Matrix<T,size,size>::Ones() + T(0.00005) * (m + (m*m));
  }
  auto end = chrono::system_clock::now();

  cout << "JIT: " << chrono::duration<double>(end - start).count() << " s\n";
}

void test_jit(std::string &type, int size, int repeat) {
  return test_jit_sz<type, size>(repeat);
}
```
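A driver for these functions might look like the following minimal sketch (the argument order here is an assumption):

```cpp
#include <cstdlib>
#include <string>

// Hypothetical driver: ./bench <type> <size> <repeat>
int main(int argc, char *argv[]) {
  std::string type(argv[1]);
  int size = std::atoi(argv[2]);
  int repeat = std::atoi(argv[3]);

  test_aot(type, size, repeat); // runtime dispatch over pre-built types
  test_jit(type, size, repeat); // instantiated and compiled on demand
}
```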
And we can use very similar code to construct explicit instantiations at compile time, but of course, then we're limited to the explicit sizes we have selected, as in the sketch below.
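For reference, a hypothetical sketch of that compile-time approach (`test_fixed` is an illustrative name; the real body would mirror `test_jit_sz` above):

```cpp
#include <iostream>

// Fixed-size variant without the [[clang::jit]] attribute.
template <typename T, int size>
void test_fixed(int repeat) {
  std::cout << "fixed size = " << size << ", repeat = " << repeat << "\n";
}

// Explicit instantiations selected ahead of time; only these sizes are
// available at runtime, and each one adds to the compile time.
template void test_fixed<double, 16>(int repeat);
template void test_fixed<double, 7>(int repeat);
```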
Compiling on an Intel Xeon E5-2699, using the flags `-march=native -ffast-math -O3` and measuring compile time as the "user" time reported by the Linux `time` command, gives:
| Configuration | Time |
|---|---|
| JIT only | 3.5 s |
| Single specialization (double, size = 16) | 4.95 s |
| Single specialization (double, size = 7) | 3.3 s |
| Single specialization (double, size = 3) | 3.2 s |
| Single specialization (double, size = 1) | 2.95 s |
| Two specializations (double, size = 16) and (double, size = 7) | 5.7 s |
| AoT only (three floating-point types with dispatch) | 9.7 s |
| AoT only (double only) | 5.3 s |
| Nothing (just the includes and a main function) | 2.58 s |
This research was supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of two U.S. Department of Energy organizations (Office of Science and the National Nuclear Security Administration) responsible for the planning and preparation of a capable exascale ecosystem, including software, applications, hardware, advanced system engineering, and early testbed platforms, in support of the nation’s exascale computing imperative.