
Person Re-Identification



Hierarchical Gaussian Descriptor


Abstract
Describing the color and textural information of a person image is one of the most crucial aspects of person re-identification. In this paper, we present a novel descriptor based on hierarchical distributions of pixel features. A hierarchical covariance descriptor has been successfully applied to image classification. However, the mean information of pixel features, which the covariance discards, tends to carry major discriminative information in person images. To solve this problem, we describe a region via a hierarchical Gaussian distribution in which both means and covariances are included as parameters. More specifically, we model a region as a set of multiple Gaussian distributions, each of which represents the appearance of a local patch. The characteristics of this set of Gaussians are again described by another Gaussian distribution. In both steps, unlike the hierarchical covariance descriptor, the proposed descriptor can model both the mean and the covariance of pixel features properly. The results of experiments conducted on five databases indicate that the proposed descriptor exhibits remarkably high performance, surpassing state-of-the-art descriptors for person re-identification.
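The two-step modeling above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's exact formulation: the particular SPD embedding of a Gaussian (combining mean and covariance into one symmetric matrix, then taking its matrix logarithm) and the regularization constant are assumptions made for the sketch.

```python
import numpy as np

def gaussian_of(features):
    """Summarize a set of feature vectors (n, d) by a Gaussian:
    mean vector (d,) and covariance matrix (d, d)."""
    return features.mean(axis=0), np.cov(features, rowvar=False)

def embed_gaussian(mean, cov, eps=1e-6):
    """Embed N(mean, cov) as a symmetric positive-definite matrix that
    retains BOTH the mean and the covariance (unlike a covariance-only
    descriptor), then flatten it via the matrix logarithm."""
    d = mean.shape[0]
    P = np.empty((d + 1, d + 1))
    P[:d, :d] = cov + np.outer(mean, mean) + eps * np.eye(d)
    P[:d, d] = mean
    P[d, :d] = mean
    P[d, d] = 1.0
    # Matrix logarithm via eigendecomposition (P is symmetric).
    w, V = np.linalg.eigh(P)
    w = np.maximum(w, eps)            # guard against tiny/negative eigenvalues
    logP = (V * np.log(w)) @ V.T
    iu = np.triu_indices(d + 1)       # half-vectorize the symmetric matrix
    return logP[iu]

def hierarchical_gaussian_descriptor(patches):
    """patches: list of (n_pixels, d) arrays of per-pixel features.
    Step 1: model each patch by a Gaussian and embed it as a vector.
    Step 2: model the set of embedded patch Gaussians by another Gaussian."""
    patch_vecs = np.stack([embed_gaussian(*gaussian_of(p)) for p in patches])
    return embed_gaussian(*gaussian_of(patch_vecs))
```

Because the patch-level Gaussians are embedded as vectors, the region-level step can reuse exactly the same mean-plus-covariance machinery, which is what lets both levels keep the mean information a covariance-only hierarchy would discard.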


Download
Publication
  • Tetsu Matsukawa, Takahiro Okabe, Einoshin Suzuki, Yoichi Sato
    Hierarchical Gaussian Descriptor for Person Re-Identification
    in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR2016), pp.1363--1372, 2016
    [paper][supp][poster][slide]


CNN Features


Abstract
This paper presents fine-tuned CNN features for person re-identification. Recently, features extracted from the top layers of a Convolutional Neural Network (CNN) pre-trained on a large annotated dataset, e.g., ImageNet, have been proven to be strong off-the-shelf descriptors for various recognition tasks. However, the large disparity between the pre-training task, i.e., ImageNet classification, and the target task, i.e., person image matching, limits the performance of the CNN features for person re-identification. In this paper, we improve the CNN features by fine-tuning on a pedestrian attribute dataset. In addition to the classification loss for multiple pedestrian attribute labels, we propose new labels formed by combining different attribute labels and use them in an additional classification loss. This combination-attribute loss forces the CNN to distinguish more person-specific information, yielding more discriminative features. After extracting features from the learned CNN, we apply conventional metric learning on a target re-identification dataset to further increase discriminative power. Experimental results on four challenging person re-identification datasets (VIPeR, CUHK, PRID450S and GRID) demonstrate the effectiveness of the proposed features.
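The combination labels can be illustrated as follows. This is a hypothetical helper, not the paper's code: the choice of pairwise combinations and the string encoding of a combined value are assumptions made for the sketch, and each resulting combined label set would define one extra classification head during fine-tuning.

```python
import itertools

def combination_labels(attribute_labels):
    """attribute_labels: dict mapping an attribute name to a list of
    per-sample values, e.g. {"gender": [...], "hair": [...]}.
    For every pair of attributes, form a combined label per sample by
    joining the two values. Each combined label set is a finer-grained
    classification task: e.g. separating "male|short" from "male|long"
    requires more person-specific cues than gender alone."""
    names = sorted(attribute_labels)
    combos = {}
    for a, b in itertools.combinations(names, 2):
        combos[f"{a}+{b}"] = [
            f"{va}|{vb}"
            for va, vb in zip(attribute_labels[a], attribute_labels[b])
        ]
    return combos
```

For example, `combination_labels({"gender": ["male", "female"], "hair": ["short", "long"]})` yields one extra task with classes `"male|short"` and `"female|long"`; the cross-product of attribute values grows the number of classes, which is what pushes the network toward more discriminative features.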


Download
  • Extracted features: features_FTCNN.zip (including VIPeR, CUHK01, PRID450S, GRID datasets)
  • CMC curves: CMCs_FTCNN.zip (including VIPeR, CUHK01, PRID450S, GRID datasets)
Publication
  • Tetsu Matsukawa, Einoshin Suzuki,
    Person Re-Identification Using CNN Features Learned from Combination of Attributes
    in Proceedings of International Conference on Pattern Recognition (ICPR2016), pp.2429--2434, 2016
    [paper][slide]


Discriminative Accumulation of Local Features


Abstract
Metric learning, which learns a distance metric that distinguishes different people while being insensitive to intra-person variations, is widely applied to person re-identification. In previous works, local histograms are densely sampled to extract spatially localized information from each person image. The extracted local histograms are then concatenated into one vector that is used as the input to metric learning. However, the dimensionality of such a concatenated vector often becomes large while the number of training samples is limited, which leads to an overfitting problem. In this work, we argue that this overfitting arises because each local histogram dimension (e.g., a color brightness bin) at the same position is treated separately when examining which part of the image is more discriminative. To solve this problem, we propose a method that analyzes discriminative image positions shared by different local histogram dimensions. A common weight map shared by the different dimensions and a distance metric that emphasizes discriminative dimensions of the local histogram are jointly learned with a unified discriminative criterion. Our experiments using four different public datasets confirmed the effectiveness of the proposed method.
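The distance parameterization described above, a per-position weight map shared across histogram dimensions combined with a metric on histogram space, can be sketched as follows. This only shows how such a distance is evaluated; in the paper the weight map and the metric are learned jointly with a discriminative criterion, and the function name and shapes here are assumptions for illustration.

```python
import numpy as np

def weighted_histogram_distance(hists_a, hists_b, w, M):
    """hists_a, hists_b: (P, D) arrays of local histograms at P image
    positions, each D-dimensional. w: (P,) nonnegative weight map shared
    by ALL histogram dimensions at a position. M: (D, D) PSD metric that
    emphasizes discriminative histogram dimensions.
    Returns the squared distance: sum over positions of the position's
    shared weight times the Mahalanobis-style local histogram distance."""
    diff = hists_a - hists_b                            # (P, D)
    per_pos = np.einsum('pi,ij,pj->p', diff, M, diff)   # (P,) local distances
    return float(np.sum(w * per_pos))
```

Because `w` has one entry per position rather than one per concatenated-vector dimension, the number of spatial parameters drops from P*D to P, which is the key to reducing overfitting when training samples are scarce.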


Download
  • CMC curves: CMCs_DALF.zip (including VIPeR, PRID2011, GRID, CAVIAR datasets)
Publication
  • Tetsu Matsukawa, Takahiro Okabe, Yoichi Sato
    Person Re-Identification via Discriminative Accumulation of Local Features
    in Proceedings of International Conference on Pattern Recognition (ICPR2014), pp.3975--3980, 2014
    [pdf][slide]

Copyright (c) Tetsu Matsukawa, All Rights Reserved.