From 238c3e8463d33606fc722bef7d6dc9a64dcc4208 Mon Sep 17 00:00:00 2001
From: "Haotian (Ken) Tang"
Date: Wed, 3 Jan 2024 13:56:08 -0500
Subject: [PATCH] Update README.md

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index d0fb163..d708b71 100644
--- a/README.md
+++ b/README.md
@@ -77,6 +77,8 @@ TorchSparse-MLsys on cloud GPUs. It also improves the latency of SpConv 2.3.5 by
 
 TorchSparse achieves superior mixed-precision training speed compared with MinkowskiEngine, TorchSparse-MLSys and SpConv 2.3.5. Specifically, it is **1.16x** faster on Tesla A100, **1.27x** faster on RTX 2080 Ti than state-of-the-art SpConv 2.3.5. It also significantly outperforms MinkowskiEngine by **4.6-4.8x** across seven benchmarks on A100 and 2080 Ti. Measured with batch size = 2.
 
+You may find our benchmarks at [this link](https://zenodo.org/records/8311889). To access the preprocessed datasets, please contact the authors. We cannot publicly release the raw data from SemanticKITTI, nuScenes, and Waymo due to license requirements.
+
 ## Team