Enable float32 model with FP16 precision for QNN HTP backend (#19863) #735

Triggered via push on March 13, 2024 at 15:35
Status: Success
Total duration: 47s
Artifacts: 1
Job: Generate C/C++ API docs (36s)

Artifacts

Produced during runtime

Name                   Size    Status
onnxruntime-c-apidocs  1.3 MB  Expired