Deep Miner: a Deep and Multi-Branch Network Which Mines Rich and Diverse Features for Person Re-Identification

Abdallah Benzine¹, Mohamed El Amine Seddik², Julien Desmarais¹
¹Digeiz AI Lab  ²Ecole Polytechnique

arXiv:2102.09321v1 [cs.CV] 18 Feb 2021

Abstract

Most recent person re-identification approaches are based on the use of deep convolutional neural networks (CNNs). These networks, although effective in multiple tasks such as classification or object detection, tend to focus on the most discriminative part of an object rather than retrieving all its relevant features. This behavior penalizes the performance of a CNN on the re-identification task, since it should identify diverse and fine-grained features. It is therefore essential to make the network learn a wide variety of finer characteristics in order to make the re-identification of people effective and robust to fine changes. In this article, we introduce Deep Miner, a method that allows CNNs to "mine" richer and more diverse features about people for their re-identification. Deep Miner is specifically composed of three types of branches: a Global branch (G-branch), a Local branch (L-branch) and an Input-Erased branch (IE-branch). The G-branch corresponds to the initial backbone, which predicts global characteristics, while the L-branch retrieves part-level resolution features. The IE-branch, for its part, receives partially suppressed feature maps as input, thereby allowing the network to "mine" new features (those ignored by the G-branch) as output. For this purpose, a dedicated suppression procedure for identifying and removing features within a given CNN is introduced. This suppression procedure has the major benefit of being simple, while producing a model that significantly outperforms state-of-the-art (SOTA) re-identification methods. Specifically, we conduct experiments on four standard person re-identification benchmarks and observe an absolute performance gain of up to 6.5% mAP compared to SOTA.

Figure 1: Deep Miner model architecture. Given a standard CNN backbone, several branches are created to enrich and diversify the features for the purpose of person re-identification. The proposed Deep Miner model is made of three types of branches: (i) the main branch G (in orange) is the original backbone and predicts the standard global features fg; (ii) several Input-Erased (IE) branches (in green) take as input erased feature maps and predict the mined features fe1 and fe2; (iii) the local branch (in blue) outputs local features fl and employs a uniform partition strategy for part-level feature resolution, as proposed by [38]. In the global branch and the bottom IE branch, attention modules are used in order to improve the feature representation.

1. Introduction

In recent years, person re-identification (Re-ID) has attracted major interest due to its important role in various computer vision applications: video surveillance, human authentication, human-machine interaction, etc. The main objective of person Re-ID is to determine whether a given person has already appeared over a network of cameras, which technically implies a robust modelling of the global appearance of individuals. The Re-ID task is particularly challenging because of significant appearance changes, often caused by variations in the background, the lighting conditions, the body pose and the subject orientation w.r.t. the recording camera.

In order to overcome these issues, one of the main goals of person Re-ID models is to produce rich representations of any input image for person matching. Notably, CNNs are known to be robust to appearance changes and spatial location variations, as their global features are invariant to such transformations. Nevertheless, these global features are prone to ignore detailed and potentially relevant information for identifying specific persons. To enforce the learning of detailed features, attention mechanisms and aggregated part-based representations were introduced in the literature, yielding very promising results [5, 7, 31]. Specifically, attention mechanisms reduce the influence of background noise and help the network focus on relevant features, while part-based models divide feature maps into horizontal spatial parts, thereby allowing the network to focus on fine-grained and local features.

Despite their observed effectiveness in various tasks, these approaches do not provide ways to enrich and diversify an individual's representation. In fact, deep learning models are known to exhibit a biased learning behavior [4, 3, 32], in the sense that they retrieve the partial attribute concepts that suffice to reduce the training loss over the seen classes, rather than learning all-sided details and concepts. Basically, deep networks tend to focus on surface statistical regularities rather than more general abstract concepts. This behavior is problematic in the context of re-identification, since the network is required to provide the richest and most diverse representations possible.

In this paper, we propose to address this problem by adding Input-Erased branches (IE-branches) to a standard backbone. Precisely, an IE-branch takes partially erased feature maps as input with the aim of producing (that is, "mining") more diversified features as output (as depicted in Figure 2). In particular, the erased regions correspond, quite intuitively, to areas where the network has strong activations, and are determined by a simple suppression operation (see subsection 3.2). The proposed Deep Miner model is therefore built as the combination of IE-branches with local and global branches. The multi-branch architecture of Deep Miner is depicted in Figure 1.

Figure 2: Feature visualization for three examples. Warmer colors denote higher values. The global branch (second column) focuses only on some features but ignores other important ones. Thanks to the Input-Erased branches (third and fourth columns), Deep Miner discovers new important features (localized by the black boxes). For instance, in the first row, the IE-branches are more attentive to the person's pants. In the second row, they discover some patterns on the coat and become attentive to its cap. In the third row, they find the plastic bag. The local branch (last column) helps the network focus on local features such as the shoes of the subject, or the object handled by the subject in the second row.

The main contributions of this work may be summarized as follows: (i) we provide a multi-branch model allowing the mining of rich and diverse features for person re-identification; the proposed model includes three types of branches: a Global branch (G-branch), a Local branch (L-branch) and an Input-Erased branch (IE-branch), the latter being responsible for mining extra features; (ii) IE-branches are constructed by applying an erase operation to the global branch feature maps, thereby allowing the network to discover new relevant features; (iii) extensive experiments were conducted on Market1501 [44], DukeMTMC-ReID [24], CUHK03 [17] and MSMT17 [37], demonstrating that our model significantly outperforms the existing SOTA methods on all benchmarks.

2. Related Work

There has been an extensive amount of work on the problem of person re-identification. This section recalls the main advances and techniques used to tackle this task, which we group into different approaches:

Part-level features essentially push the model to discover fine-grained information by extracting features at a part-level scale. Specifically, this approach consists in dividing the input image into multiple overlapping parts in order to learn part-level features [40, 17]. In the same vein, other methods of body division were also proposed: Pose-Driven Deep Convolutional (PDC) leverages human pose information to transform a global body image into normalized part regions, while the Part-based Convolutional Baseline (PCB) [31] learns part-level features by dividing the feature map equally; essentially, the network has a six-branch structure obtained by dividing the feature maps into six horizontal stripes, and an independent loss is applied to each stripe. Based on PCB, stronger part-based methods were notably developed [43, 22, 36]. However, these division strategies usually suffer from misalignment between corresponding parts, because of large variations in poses, viewpoints and scales. In order to avoid this misalignment, [38] suggest concatenating the part-level feature vectors into a single vector and then applying a single loss to the concatenated vector. This strategy is more effective than applying an individual loss to each part-level feature vector. As shall be seen subsequently, we employ this strategy to learn the local branch of our Deep Miner model.

The method of [8] suppresses the salient features learned in the previous cascaded stage, thereby extracting other potential salient features. Still, our method differs from [8] in several aspects: (i) the erasing operation used in Deep Miner is conceptually simpler, since it consists of averaging and thresholding operations yielding an erasing mask, while being very efficient in terms of outcome; (ii) the obtained erasing mask is then multiplied by the feature maps of the initial CNN backbone, allowing the creation of an Input-Erased branch that will discover new relevant features; (iii) by simply incorporating such simple Input-Erased branches into a standard ResNet50 backbone, Deep Miner achieves the same mAP on Market1501 as [8], which includes many other complex modules (e.g. attention modules); (iv) Deep …

Metric learning methods also contribute to enhancing the representation power of Re-ID features, as notably shown in [6, 12, 30, 24].
