Face-MagNet: Magnifying Feature Maps to Detect Small Faces

Pouya Samangouei*  Mahyar Najibi*  Rama Chellappa  Larry S. Davis
University of Maryland Institute for Advanced Computer Studies
Baltimore Ave, College Park, MD, 20742
pouya,lsd,[email protected]  [email protected]
*Authors contributed equally

arXiv:1803.05258v1 [cs.CV] 14 Mar 2018

Abstract

In this paper, we introduce the Face Magnifier Network (Face-MagNet), a face detector based on the Faster-RCNN framework which enables the flow of discriminative information of small-scale faces to the classifier without any skip or residual connections. To achieve this, Face-MagNet deploys a set of ConvTranspose (also known as deconvolution) layers in the Region Proposal Network (RPN) and another set before the Region of Interest (RoI) pooling layer to facilitate detection of finer faces. In addition, we design, train, and evaluate three other well-tuned architectures that represent the conventional solutions to the scale problem: context pooling, skip connections, and scale partitioning. Each of these three networks achieves results comparable to state-of-the-art face detectors. With extensive experiments, we show that Face-MagNet, based on a VGG16 architecture, achieves better results than the recently proposed ResNet101-based HR [7] method on the task of face detection on the WIDER dataset, and also achieves results on the hard set similar to our other recently proposed method, SSH [17]. The code is available at github.com/po0ya/face-magnet.

Figure 1: An example of a face detection result using the proposed algorithm. The green boxes are the confident detection results and the blue ones are just above the decision boundary.

1. Introduction

Object detection has always been an active area of research in computer vision, where researchers compare their methods on established benchmarks such as MS-COCO [13]. The main challenge in these benchmarks is the unconstrained environment in which the objects appear. Although the number of small objects in these benchmarks has increased recently, it is not enough to affect the final aggregated detection results. Therefore, most of the focus is on challenges other than the scale of the objects. Besides the WIDER [25] dataset, other face detection benchmarks such as FDDB [8] and Pascal Faces [3] were no exception in this regard. However, as shown in [25] and Figure 2, even well-performing object/face detection methods fail badly on the WIDER dataset. We investigate different aspects of detecting small faces on this dataset.

One of the earliest solutions to the scale problem is "skip connections" [6, 1]. Skip connections resize, normalize, scale, and concatenate feature maps of possibly different resolutions to transfer feature information from earlier convolutional layers to the final classifier. They have already been employed for face detection in HyperFace [20] and CMS-RCNN [27]. We train our variant of this solution as the SkipFace network. Compared to CMS-RCNN, our normalization of the concatenated layers is much simpler. This network achieves a better result than CMS-RCNN and HR-VGG16 [7] on the WIDER benchmark.

The second practical solution for detecting small objects is employing "context pooling" [6]. CMS-RCNN [27] pools a larger box around each box proposed by a Faster-RCNN detector to account for context information. We build on the same idea but use a different network architecture, and we obtain better results than CMS-RCNN and HR-VGG16 using just our context-augmented VGG16-based network.

Another way to tackle the scale challenge is to have multiple proposal and detection sub-networks. [2, 17] use this strategy to distribute object detection at different scales over three sub-networks on a shared VGG16 core. We design the SizeSplit architecture, which has two branches for small and big faces. Each branch has its own RPN and classifier. SizeSplit achieves results comparable to HR-VGG16 on the WIDER dataset.

Finally, we propose the Face Magnifier Network (Face-MagNet) with the idea of discriminative magnification of feature maps before face proposal and classification. We show that this computationally efficient approach is as effective as size splitting and skip connections and, when tested with an image pyramid, beats the recently proposed ResNet101-based HR detector [7]. It is as efficient as a simple VGG16-based face detector and can detect faces at a speed of 10 frames per second.
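To make the magnification idea concrete, the following is a minimal PyTorch sketch (not the authors' released implementation; the 512 channels, kernel size, and single 2x upsampling step are illustrative assumptions rather than the paper's exact configuration): a learned ConvTranspose2d ("deconvolution") layer enlarges the conv5 feature map before it reaches the RPN and the RoI pooling stage, so that small faces occupy more feature-map cells.

```python
# Illustrative sketch of feature-map magnification with a deconvolution
# (ConvTranspose2d) layer. Channel counts, kernel size, and the 2x factor
# are assumptions for clarity, not the paper's exact configuration.
import torch
import torch.nn as nn


class FeatureMagnifier(nn.Module):
    """Upsamples a conv5-like feature map so small faces cover more cells."""

    def __init__(self, in_channels=512, out_channels=512):
        super().__init__()
        # kernel_size=4, stride=2, padding=1 exactly doubles H and W.
        self.deconv = nn.ConvTranspose2d(in_channels, out_channels,
                                         kernel_size=4, stride=2, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, conv5_feats):
        return self.relu(self.deconv(conv5_feats))


if __name__ == "__main__":
    feats = torch.randn(1, 512, 38, 63)   # conv5 map of a ~600x1000 image
    magnified = FeatureMagnifier()(feats)
    print(magnified.shape)                # torch.Size([1, 512, 76, 126])
```

Per the description above, Face-MagNet places one such set of layers in the RPN branch and another before RoI pooling, rather than sharing a single magnified map.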
The rest of the paper is organized as follows. In Section 2, we review related work. Section 3 describes the baseline detectors. Section 4 presents our approach in detail. Ablation studies and benchmark results are presented in Section 5. Finally, we conclude the paper in Section 6.

2. Related Works

There is a vast body of research on face detection, as it is one of the oldest applications in which machine learning proved to be practically useful, through the Viola-Jones face detector [24]. In this section, we only go over recent DCNN-based approaches. Most of the early DCNN-based face detectors are based on R-CNN [5] object detection, which scores regions proposed by an external object proposal method. HyperFace [20] and DP2MFD [19] are some of the first face detectors based on this idea. After R-CNN, Fast-R-CNN [21] removed the need to run the detector multiple times on object proposals by performing pooling on the feature maps. Faceness [22] improved the object proposals by re-ranking the boxes based on feature map responses of the base network. The next development was removing the need for external object proposal methods. Many methods, such as [16] and the Region Proposal Network in Faster-RCNN [21], succeeded in eliminating the need for pre-computed object proposals. Faster-RCNN trains the object proposal and classifier at the same time on the same base network. [10] is one of the earliest applications of Faster-RCNN to face detection. [27] as well as our work are also face detectors based on this framework.

Figure 2: The distribution of the hits (True Positives) and misses (False Negatives) of a VGG16 network trained with the default settings of the Faster-RCNN framework, for average face sizes between 0 and 50 pixels, on the validation set of the WIDER dataset.

There are also other detectors, like those recently proposed in [17] and [7], that are object-proposal-less, i.e. they perform detection in a fully convolutional manner, as the RPN does for proposing bounding boxes. Other methods, like [11] and [26], employ a cascade of classifiers to produce the final detection results. Our final detector beats the cascade detectors and HR [7] while producing results comparable to SSH [17] on the WIDER face detection benchmark. However, these proposal-free methods are faster than Faster-RCNN-based methods, since they are equivalent to only the first part of such algorithms.

3. Baseline Detectors

The base object detector that we build on is Faster-RCNN [21]. It is a CNN that performs object detection in a single pass. Given an input image, it first generates a set of object proposals with its Region Proposal Network (RPN) branch. Then it performs spatial max-pooling on the last convolutional feature maps to obtain the features for each proposed bounding box. The features are then used to classify and regress the dimensions of the candidate box. Our base architecture is a VGG16 network [23] with convolutional blocks conv-1 to conv-5 and fully connected layers fc6 and fc7. Each of the convolutional blocks is followed by a 2x2 max-pooling operator. Each block internally has two or three 3x3 convolutional layers of the same resolution. We briefly describe the choice of hyper-parameters in the experiment section and show the effectiveness of each design decision in Section 5.4.
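For readers less familiar with this pipeline, the sketch below (an illustration with assumed input size and example proposal boxes, not the paper's training code) shows the two ingredients just described using PyTorch/torchvision: a single pass through the VGG16 trunk up to conv5_3 (overall stride 16), followed by 7x7 RoI pooling, the standard Faster-RCNN setting, to extract per-box features for the fc6/fc7 classifier.

```python
# Sketch of the Faster-RCNN-style feature extraction used by the baselines:
# one forward pass through the VGG16 trunk, then RoI pooling on conv5_3.
# Input size and the example proposals are assumptions for illustration.
import torch
import torchvision
from torchvision.ops import roi_pool

vgg = torchvision.models.vgg16()        # pre-trained weights would be loaded in practice
conv_trunk = vgg.features[:30]          # conv1_1 ... conv5_3 + ReLU, overall stride 16

image = torch.randn(1, 3, 600, 1000)    # a resized input image
feats = conv_trunk(image)               # (1, 512, 37, 62) conv5 feature map

# Proposals as (batch_index, x1, y1, x2, y2) in image coordinates,
# e.g. produced by the RPN branch.
proposals = torch.tensor([[0.,  50.,  60., 180., 220.],
                          [0., 300., 100., 420., 260.]])

# 7x7 RoI pooling; spatial_scale maps image coordinates onto the stride-16 map.
pooled = roi_pool(feats, proposals, output_size=(7, 7), spatial_scale=1.0 / 16)
print(pooled.shape)                     # torch.Size([2, 512, 7, 7]) -> fed to fc6/fc7
```

The architectural variants studied in this paper largely differ in what is fed into this pooling stage and into the RPN.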
3.1. Context Classifier

The final classifier can also make use of context information to detect smaller faces more accurately. We pool a context region around each proposal and pass it through its own fully connected layers (context-fc6 and context-fc7), whose output is concatenated with the features of the proposal itself, as shown in Figure 3.

Figure 3: The architecture of the context network.

Figure 4: The architecture of the SkipFace network.

Figure 5: The architecture of the SizeSplit network.

For the SkipFace network, let X denote the pooled features of the n-th block, where C is the number of channels. Then we define the normalized feature maps as $Y = X / \|X\|_2$, which is the globally normalized feature vector, and its derivative is computed as

$$\frac{\partial Y}{\partial X} = \frac{I}{\|X\|_2} - \frac{XX^T}{\|X\|_2^3}. \qquad (1)$$

The dimensions of these normalized features are then reduced using a 1x1 convolution layer, chosen so that the weights of a pre-trained model can still be transferred. The architecture of this method is shown in Figure 4. We employ the same strategy for the input of the RPN.
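Equation (1) is easy to verify numerically. The sketch below (a PyTorch illustration with assumed channel counts and RoI size, not the paper's implementation) normalizes a feature vector globally by its L2 norm, checks the analytic Jacobian of equation (1) against autograd, and then applies a 1x1 convolution for channel reduction, as described above.

```python
# Sketch of global L2 normalization (eq. 1) followed by a 1x1 convolution
# for channel reduction. Channel counts and RoI size are assumptions.
import torch
import torch.nn as nn


def l2_normalize(x):
    """Y = X / ||X||_2 (global normalization of the feature vector)."""
    return x / x.norm()


# Check eq. (1): dY/dX = I / ||X||_2 - X X^T / ||X||_2^3
x = torch.randn(16, dtype=torch.double)
autograd_jac = torch.autograd.functional.jacobian(l2_normalize, x)
norm = x.norm()
analytic_jac = torch.eye(16, dtype=torch.double) / norm - torch.outer(x, x) / norm**3
assert torch.allclose(autograd_jac, analytic_jac)


class NormalizedSkip(nn.Module):
    """Globally normalizes RoI-pooled block features, then reduces channels via 1x1 conv."""

    def __init__(self, in_channels=256, out_channels=128):
        super().__init__()
        self.reduce = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, pooled):                      # pooled: (N, C, 7, 7)
        flat = pooled.flatten(1)                    # one feature vector per RoI
        normed = flat / flat.norm(dim=1, keepdim=True)
        return self.reduce(normed.view_as(pooled))  # 1x1 conv channel reduction


print(NormalizedSkip()(torch.randn(4, 256, 7, 7)).shape)   # torch.Size([4, 128, 7, 7])
```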