TorchSparse: Efficient Point Cloud Inference Engine

Haotian Tang*, Zhijian Liu*, Xiuyu Li*, Yujun Lin, Song Han
Massachusetts Institute of Technology (MIT)
(* indicates equal contributions)

Abstract

Deep learning on point clouds has received increased attention thanks to its wide applications in AR/VR and autonomous driving. These applications require low latency and high accuracy to provide a real-time user experience and ensure user safety. Unlike conventional dense workloads, the sparse and irregular nature of point clouds poses severe challenges to running sparse CNNs efficiently on general-purpose hardware. Furthermore, existing sparse acceleration techniques for 2D images do not translate to 3D point clouds. In this paper, we introduce TorchSparse, a high-performance point cloud inference engine that accelerates the sparse convolution computation on GPUs. TorchSparse directly optimizes the two bottlenecks of sparse convolution: irregular computation and data movement. It applies adaptive matrix multiplication grouping to trade computation for better regularity, achieving 1.4-1.5x speedup for matrix multiplication. It also optimizes the data movement by adopting vectorized, quantized, and fused locality-aware memory access, reducing the memory movement cost by 2.7x. Evaluated on seven representative models across three benchmark datasets, TorchSparse achieves 1.6x and 1.5x measured end-to-end speedup over the state-of-the-art MinkowskiEngine and SpConv, respectively.
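
To make the two bottlenecks concrete, below is a minimal PyTorch sketch of the gather-GEMM-scatter dataflow that sparse convolution is built on, together with a toy size-based grouping heuristic in the spirit of adaptive matrix multiplication grouping. This is an illustrative sketch only, not TorchSparse's actual implementation or API; the function names, the `tolerance` parameter, and the grouping rule are assumptions made for exposition.

import torch

def gather_gemm_scatter(feats, weights, maps, n_out):
    # NOTE: illustrative sketch, not TorchSparse's implementation.
    # feats:   (N_in, C_in) features of the active (non-zero) input points
    # weights: (K, C_in, C_out) one weight matrix per kernel offset
    # maps:    list of K (in_idx, out_idx) tensor pairs recording which
    #          inputs contribute to which outputs under each kernel offset
    out = feats.new_zeros(n_out, weights.shape[-1])
    for w, (in_idx, out_idx) in zip(weights, maps):
        if in_idx.numel() == 0:
            continue
        gathered = feats[in_idx]              # irregular gather (data movement)
        partial = gathered @ w                # dense GEMM on the gathered rows
        out.index_add_(0, out_idx, partial)   # irregular scatter-accumulate
    return out

def group_maps_by_size(maps, tolerance=0.25):
    # Toy stand-in for adaptive grouping: bucket kernel offsets whose map
    # sizes are within `tolerance` of the bucket's largest, so each bucket
    # can be zero-padded to a common row count and run as one batched GEMM,
    # trading a little redundant (padded) computation for GPU regularity.
    if not maps:
        return []
    order = sorted(range(len(maps)), key=lambda k: maps[k][0].numel(), reverse=True)
    groups, current = [], [order[0]]
    for k in order[1:]:
        largest = maps[current[0]][0].numel()
        if largest > 0 and maps[k][0].numel() >= (1.0 - tolerance) * largest:
            current.append(k)
        else:
            groups.append(current)
            current = [k]
    groups.append(current)
    return groups

# Toy usage: 5 input points, 4 output points, 3 kernel offsets, C_in = C_out = 8.
feats = torch.randn(5, 8)
weights = torch.randn(3, 8, 8)
maps = [
    (torch.tensor([0, 1, 2]), torch.tensor([0, 1, 2])),
    (torch.tensor([1, 3]), torch.tensor([0, 3])),
    (torch.tensor([4]), torch.tensor([2])),
]
out = gather_gemm_scatter(feats, weights, maps, n_out=4)  # -> (4, 8) output features
print(group_maps_by_size(maps))  # [[0], [1], [2]] at the default tolerance

The per-offset loop above exposes both bottlenecks the paper targets: the gather/scatter steps are the irregular data movement, and the many small, differently sized GEMMs are the irregular computation that grouping regularizes.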

Citation

@inproceedings{tang2022torchsparse,
  title={TorchSparse: Efficient Point Cloud Inference Engine},
  author={Tang, Haotian and Liu, Zhijian and Li, Xiuyu and Lin, Yujun and Han, Song},
  booktitle={Conference on Machine Learning and Systems (MLSys)},
  year={2022}
}

Acknowledgments: We would like to thank Hanrui Wang and Ligeng Zhu for their feedback on the artifact evaluation. This research was supported by NSF CAREER Award #1943349, Hyundai, and Ford. Zhijian Liu and Yujun Lin were partially supported by the Qualcomm Innovation Fellowship.