From a541785b091a51b4cd6aae42026ae68d9135f7c2 Mon Sep 17 00:00:00 2001
From: Istvan Kiss
Date: Sat, 25 May 2024 01:25:48 +0200
Subject: [PATCH] WIP

---
 docs/how-to/faq.md | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/docs/how-to/faq.md b/docs/how-to/faq.md
index 489e239a3f..7422301eed 100644
--- a/docs/how-to/faq.md
+++ b/docs/how-to/faq.md
@@ -8,7 +8,7 @@ HIP provides the following:
 * Memory management (`hipMalloc()`, `hipMemcpy()`, `hipFree()`, etc.)
 * Streams (`hipStreamCreate()`, `hipStreamSynchronize()`, `hipStreamWaitEvent()`, etc.)
 * Events (`hipEventRecord()`, `hipEventElapsedTime()`, etc.)
-* Kernel launching (hipLaunchKernel/hipLaunchKernelGGL is the preferred way of launching kernels. hipLaunchKernelGGL is a standard C/C++ macro that can serve as an alternative way to launch kernels, replacing the CUDA triple-chevron (<<< >>>) syntax).
+* Kernel launching (`hipLaunchKernel`/`hipLaunchKernelGGL` is the preferred way of launching kernels. `hipLaunchKernelGGL` is a standard C/C++ macro that can serve as an alternative way to launch kernels, replacing the CUDA triple-chevron (`<<< >>>`) syntax).
 * HIP Module API to control when and how code is loaded.
 * CUDA-style kernel coordinate functions (`threadIdx`, `blockIdx`, `blockDim`, `gridDim`)
 * Cross-lane instructions including shfl, ballot, any, all
@@ -73,12 +73,12 @@ However, we can provide a rough summary of the features included in each CUDA SD
 
 ## What libraries does HIP support?
 
-HIP includes growing support for the four key math libraries using hipBLAS, hipFFt, hipRAND and hipSPARSE, as well as MIOpen for machine intelligence applications.
+HIP includes growing support for the four key math libraries using hipBLAS, hipFFT, hipRAND and hipSPARSE, as well as MIOpen for machine intelligence applications.
 These offer pointer-based memory interfaces (as opposed to opaque buffers) and can be easily interfaced with other HIP applications.
 The hip interfaces support both ROCm and CUDA paths, with familiar library interfaces.
 
 * [hipBLAS](https://github.com/ROCmSoftwarePlatform/hipBLAS), which utilizes [rocBlas](https://github.com/ROCmSoftwarePlatform/rocBLAS).
-* [hipFFt](https://github.com/ROCmSoftwarePlatform/hipfft)
+* [hipFFT](https://github.com/ROCmSoftwarePlatform/hipfft)
 * [hipsSPARSE](https://github.com/ROCmSoftwarePlatform/hipsparse)
 * [hipRAND](https://github.com/ROCmSoftwarePlatform/hipRAND)
 * [MIOpen](https://github.com/ROCmSoftwarePlatform/MIOpen)
@@ -93,7 +93,7 @@ HIP offers several benefits over OpenCL:
 * Developers can code in C++ as well as mix host and device C++ code in their source files. HIP C++ code can use templates, lambdas, classes and so on.
 * The HIP API is less verbose than OpenCL and is familiar to CUDA developers.
 * Because both CUDA and HIP are C++ languages, porting from CUDA to HIP is significantly easier than porting from CUDA to OpenCL.
-* HIP uses the best available development tools on each platform: on NVIDIA GPUs, HIP code compiles using NVCC and can employ the nSight profiler and debugger (unlike OpenCL on NVIDIA GPUs).
+* HIP uses the best available development tools on each platform: on NVIDIA GPUs, HIP code compiles using NVCC and can employ the Nsight profiler and debugger (unlike OpenCL on NVIDIA GPUs).
 * HIP provides pointers and host-side pointer arithmetic.
 * HIP provides device-level control over memory allocation and placement.
 * HIP offers an offline compilation model.
@@ -274,7 +274,7 @@ One symptom of this problem is the message "error: 'unknown error'(11) at square
 
 ## On CUDA, can I mix CUDA code with HIP code?
 
-Yes. Most HIP data structures (hipStream_t, hipEvent_t) are typedefs to CUDA equivalents and can be intermixed. Both CUDA and HIP use integer device ids.
+Yes. Most HIP data structures (`hipStream_t`, `hipEvent_t`) are typedefs to CUDA equivalents and can be intermixed. Both CUDA and HIP use integer device ids.
 One notable exception is that `hipError_t` is a new type, and cannot be used where a `cudaError_t` is expected. In these cases, refactor the code to remove the expectation. Alternatively, hip_runtime_api.h defines functions which convert between the error code spaces: `hipErrorToCudaError`
@@ -376,7 +376,10 @@ HIP_VERSION=HIP_VERSION_MAJOR * 10000000 + HIP_VERSION_MINOR * 100000 + HIP_VERS
 ```
 
 HIP version can be queried from HIP API call,
+
+```cpp
 hipRuntimeGetVersion(&runtimeVersion);
+```
 
 The version returned will always be greater than the versions in previous ROCm releases.
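Note on the FAQ lines touched above (this note is not part of the patch itself): the first hunk rewords the `hipLaunchKernelGGL` bullet and the last hunk fences the `hipRuntimeGetVersion()` call. Below is a minimal, self-contained sketch of how both might be used together; the kernel, buffer size, and error handling are illustrative assumptions, not code taken from the FAQ.

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Illustrative kernel (an assumption for this sketch): doubles each element in place.
__global__ void doubleElements(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        data[i] *= 2.0f;
    }
}

int main() {
    // Runtime version query, as fenced by the last hunk of this patch.
    int runtimeVersion = 0;
    if (hipRuntimeGetVersion(&runtimeVersion) != hipSuccess) {
        std::fprintf(stderr, "hipRuntimeGetVersion failed\n");
        return 1;
    }
    std::printf("HIP runtime version: %d\n", runtimeVersion);

    constexpr int n = 256;
    std::vector<float> host(n, 1.0f);

    // Allocate device memory and copy the input over.
    float* device = nullptr;
    hipMalloc(reinterpret_cast<void**>(&device), n * sizeof(float));
    hipMemcpy(device, host.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // hipLaunchKernelGGL replaces the CUDA triple-chevron launch:
    //   doubleElements<<<dim3(1), dim3(n), 0, 0>>>(device, n);
    hipLaunchKernelGGL(doubleElements, dim3(1), dim3(n), 0, 0, device, n);

    // Copy the result back and release the device buffer.
    hipMemcpy(host.data(), device, n * sizeof(float), hipMemcpyDeviceToHost);
    hipFree(device);

    std::printf("host[0] after kernel: %f\n", host[0]);  // Expected 2.0
    return 0;
}
```

The commented triple-chevron line shows the CUDA-style launch that the macro replaces, which is the point the reworded bullet in the first hunk makes.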