
StarNet: Targeted Computation for Object Detection in Point Clouds

Jiquan Ngiam*†, Benjamin Caine*†, Wei Han†, Brandon Yang†, Yuning Chai‡, Pei Sun‡, Yin Zhou‡, Xi Yi, Ouais Alsharif‡, Patrick Nguyen, Zhifeng Chen†, Jonathon Shlens*†, Vijay Vasudevan†
†Google Brain, ‡Waymo
{weihan, [email protected]}

Abstract

We present an object detection system designed for point cloud data that enables targeted and adaptive computation. We observe that objects in point clouds are quite distinct from objects in traditional camera images: they are sparse and vary widely in location, but do not exhibit the scale distortions observed in single-camera perspective. These two observations suggest that simple, inexpensive data-driven object proposals, designed to maximize spatial coverage or to match the observed density of point cloud data, may suffice. Pairing this recognition with a local point cloud-based network permits building an object detector that can adapt to different computational settings and target spatial regions. We demonstrate this flexibility and the targeted detection strategies it enables on the large-scale Waymo Open Dataset.

1 Introduction

Detecting and localizing objects forms a critical component of any autonomous driving platform [1, 2]. Self-driving cars (SDCs) are equipped with a variety of sensors such as cameras, LiDARs, and radars [3, 4], where LiDAR is one of the most critical as it natively provides high-resolution, accurate 3D data about the environment. However, object detection systems for LiDAR look remarkably similar to systems designed for generic camera imagery. Despite large modality- and task-specific differences, the best-performing methods for 3D object detection re-purpose camera-based detection architectures. Several methods apply convolutions to discretized representations of point clouds in the form of a projected Birds Eye View (BEV) image [5, 6, 7, 8] or a 3D voxel grid [9, 10]. Alternatively, methods that operate directly on point clouds have re-purposed two-stage object detector designs [11, 12, 13].

We start by recognizing that 3D region proposals are fundamentally distinct: every reflected point (above the road) must belong to an object or surface. We demonstrate that efficient sampling schemes that match the data distribution are sufficient for generating region proposals. Each proposed region is then processed independently. To avoid discretization, we featurize each local point cloud directly [14, 15] in order to classify objects and regress bounding box locations [16, 17].

The resulting detector is as accurate as the state of the art at lower inference cost, and more accurate at similar inference cost. The model does not waste computation on empty regions because the proposal method naturally exploits the sparse distribution of the point cloud. One can also dynamically vary the number of proposals and the number of points per proposal at inference time, since the model operates locally. This allows the cost of inference to scale linearly and permits a single trained model to operate at different computational budgets. Finally, because each region is completely independent, one may select at run time where to allocate region proposals based on context.

Machine Learning for Autonomous Driving Workshop at the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
Figure 1: StarNet overview (pipeline stages: sample centers; gather and featurize cells; project to anchor offsets; box suppression yields predicted boxes and scores). After obtaining a proposal location, we featurize the local point cloud around the proposal. We randomly select K points within a radius of R meters of each proposal center. In our experiments, K is typically between 32 and 1024, and R is 2-3 meters. All local points are re-centered to an origin for each proposal.

For example, a deployed system could exploit priors (e.g., HD maps or temporal information) to target where in the scene to run the detector.

2 Related work

Object detection in point clouds started by porting ideas from the image-based object detection literature. By voxelizing a point cloud (i.e., identifying a grid location for each individual point) into a series of stacked image slices describing occupancy, one may employ CNN techniques for object detection on the resulting voxel grids [5, 9, 10, 6, 7].

Employing a grid representation for point clouds in object detection presents potential drawbacks. Even when ignoring the height dimension in 3D by using a Birds Eye View (BEV) representation, convolutions can be expensive, and the computational demand grows roughly as O(hw), where h and w are the height and width of the image. In practice, this constraint requires that CNNs operate at no larger than ∼1000 pixel input resolution [18]. Given the large spatial range of LiDAR, selecting a grid resolution to achieve this pixel resolution (e.g., 0.16 ∼ 0.33 meters/pixel [8]) discards detailed spatial information. This often results in systematically worse performance on smaller objects such as pedestrians [5, 9, 10, 7], which may occupy only a few pixels in a voxelized image.

For these reasons, many authors have explored building detection systems that operate directly on representations of the point cloud data. For instance, VoxelNet partitions 3D space and encodes LiDAR points within each partition with a point cloud featurization [9]. The result is a fixed-size feature map, on which a conventional CNN-based object detection architecture may be applied. Likewise, PointPillars [8] proposes an object detector that employs a point cloud featurization, providing input into a grid-based feature representation for use in a feature pyramid network [19]; the resulting per-pillar features are combined with anchors for every pillar to perform joint classification and regression. The resulting network achieves a high level of predictive performance with minimal computational cost on small scenes, but its fixed grid increases notably in cost on larger scenes and cannot adapt to each scene's unique data distribution.

In the vein of two-stage detection systems, PointRCNN [12] employs a point cloud featurizer [15] to generate proposals via a per-point segmentation network that separates foreground from background. Subsequently, a second stage operates on cropped featurizations to perform classification and localization. Finally, other works propose bounding boxes through a computationally intensive, learned proposal system operating on paired camera images [11, 13], with the goal of improving predictive performance by leveraging a camera image to seed a proposal system that maximizes recall.

3 Methods

Our goal is to construct a detector that takes advantage of the sparsity of LiDAR data and allows us to target where to spend computation.
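As a rough illustration of this flow (summarized in Figure 1), the following minimal NumPy sketch composes center sampling, neighborhood gathering, and per-proposal prediction. The helper names and the featurize_and_predict stub are illustrative placeholders, not the authors' implementation:

```python
import numpy as np

def sample_centers(points_xy, num_centers, rng):
    """Random uniform center proposals: draw (x, y) locations from observed points."""
    idx = rng.choice(len(points_xy), size=num_centers, replace=False)
    return points_xy[idx]

def gather_neighborhood(points_xyz, center_xy, k, radius, rng):
    """Gather up to k points within `radius` meters (in x-y) of a center, re-centered."""
    dist_xy = np.linalg.norm(points_xyz[:, :2] - center_xy, axis=1)
    nearby = np.flatnonzero(dist_xy < radius)
    if len(nearby) == 0:
        return np.zeros((k, 3))  # empty region: pad with zeros
    chosen = rng.choice(nearby, size=k, replace=len(nearby) < k)
    local = points_xyz[chosen].copy()
    local[:, :2] -= center_xy  # re-center to a per-proposal origin
    return local

def featurize_and_predict(local_points):
    """Placeholder for the learned featurizer and classification/regression heads."""
    score = 0.0        # classification logit (stub)
    box = np.zeros(7)  # e.g. (x, y, z, length, width, height, heading) offsets (stub)
    return score, box

def detect(points_xyz, num_centers=64, k=128, radius=3.0, seed=0):
    """Sample centers, gather local neighborhoods, and predict per proposal."""
    rng = np.random.default_rng(seed)
    centers = sample_centers(points_xyz[:, :2], num_centers, rng)
    detections = []
    for center in centers:  # every proposal is processed independently
        local = gather_neighborhood(points_xyz, center, k, radius, rng)
        score, box = featurize_and_predict(local)
        detections.append((center, score, box))
    return detections

if __name__ == "__main__":
    cloud = np.random.default_rng(1).uniform(-50.0, 50.0, size=(20000, 3))
    print(len(detect(cloud)))  # -> 64 proposals as (center, score, box) tuples
```

Because each proposal is handled independently inside the loop, the per-proposal work can be parallelized or restricted to a targeted subset of centers.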
We propose a sparse targeted object detector, termed StarNet: from a sparse sampling of locations (centers) in the point cloud, the model extracts a local subset of neighboring points. The model featurizes the local point cloud [14], classifies the region, and regresses bounding box parameters. Importantly, the object location is predicted relative to the selected location. Each spatial location may be processed by the detector completely independently. An overview of this method is depicted in Figure 1 and Appendix C.

The structure of the proposed system confers two advantages. First, inference on each cell proposal occurs completely independently, enabling the computation for each center location to be parallelized. Second, heuristics or side information [20, 7] may be used to rank the locations to process. This permits the system to focus its computation budget on the most important locations.

3.1 Center location selection

We propose using an inexpensive, data-dependent algorithm to generate proposals from LiDAR with high recall. In contrast to prior work [5, 8, 10], we do not base proposals on fixed grid locations, but instead generate proposals that respect the observed data distribution in a scene.

Concretely, we sample N points from the point cloud and use their (x, y) coordinates as proposals. In this work, we explore two sampling algorithms: random uniform sampling and farthest point sampling (FPS), which are compared in Appendix D and visualized in Appendix F. Random uniform sampling provides a simple and effective baseline because the sampling is biased towards densely populated regions of space. In contrast, farthest point sampling (FPS) selects individual points sequentially such that the next point selected is maximally far away from all previously selected points, maximizing the spatial coverage across the point cloud. This approach permits varying the number of proposals from a small, sparse set to a large, dense set that covers the point cloud space.

Figure 2: StarNet Block. We annotate edges with tensor dimensions for clarity: (# points, feature dimension) represents a point cloud with # points with attached features. The block maps an input of shape (# points, 64) through Max → (64), Concat with the per-point features → (# points, 128), BN - Linear - ReLU → (# points, 256), and BN - Linear - ReLU → (# points, 64).

3.2 Featurizing local point clouds

We experimented with several architectures for featurizations of native point cloud data [15, 21], but the StarNet featurizer most closely follows [22]. The resulting architecture is agnostic to the number of points provided as input [15, 21, 22].

StarNet blocks (Figure 2) take as input a set of points, where each point has an associated feature vector. Each block first computes aggregate statistics (max) across the point cloud. Next, the global statistics are concatenated back to each point's feature. Finally, two fully-connected layers are applied, each composed of batch normalization (BN), linear projection, and ReLU activation. StarNet blocks are stacked to form a featurizer (Figure 8).

3.3 Anchor boxes

We use a grid of G × G total anchor offsets relative to each cell center, and each offset can employ different rotations or anchor dimension priors. We project each featurized cell to a D-dimensional feature vector at each location offset, from which we predict classification logits and bounding box regression logits, following [10, 8].
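To make the center selection step of Section 3.1 concrete, the following is a minimal NumPy sketch of greedy farthest point sampling; it is a simple O(N · num_samples) reference implementation under the assumptions noted in the comments, not the authors' code:

```python
import numpy as np

def farthest_point_sampling(points_xy, num_samples, rng=None):
    """Greedy farthest point sampling over (x, y) coordinates.

    Each new center is the point farthest from all centers selected so far,
    maximizing spatial coverage of the point cloud.
    """
    rng = rng or np.random.default_rng(0)
    n = len(points_xy)
    selected = [int(rng.integers(n))]  # arbitrary starting point
    min_dist = np.linalg.norm(points_xy - points_xy[selected[0]], axis=1)
    for _ in range(1, min(num_samples, n)):
        next_idx = int(np.argmax(min_dist))  # farthest from the current set
        selected.append(next_idx)
        d = np.linalg.norm(points_xy - points_xy[next_idx], axis=1)
        min_dist = np.minimum(min_dist, d)   # distance to nearest selected center
    return points_xy[selected]

if __name__ == "__main__":
    cloud_xy = np.random.default_rng(1).uniform(-50.0, 50.0, size=(20000, 2))
    centers = farthest_point_sampling(cloud_xy, 256)
    print(centers.shape)  # -> (256, 2)
```

Random uniform sampling corresponds to replacing the greedy loop with a single draw of point indices, which naturally biases centers toward densely populated regions of the scene.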