About

Areas of specialism

Machine Learning; Computer Vision; Multimodal Learning

Publications

Sauradip Nag, Anran Qi, Xiatian Zhu, Ariel Shamir (2023) PersonalTailor: Personalizing 2D Pattern Design from 3D Garment Point Clouds, In: arXiv.org

Garment pattern design aims to convert a 3D garment to the corresponding 2D panels and their sewing structure. Existing methods rely either on template fitting with heuristics and prior assumptions, or on model learning with complicated shape parameterization. Importantly, both approaches do not allow for personalization of the output garment, which today has increasing demands. To fill this demand, we introduce PersonalTailor: a personalized 2D pattern design method, where the user can input specific constraints or demands (in language or sketch) for personal 2D panel fabrication from 3D point clouds. PersonalTailor first learns multi-modal panel embeddings based on unsupervised cross-modal association and attentive fusion. It then predicts binary panel masks individually using a transformer encoder-decoder framework. Extensive experiments show that our PersonalTailor excels on both personalized and standard pattern fabrication tasks.

Sauradip Nag, Xiatian Zhu, Yi-Zhe Song, Tao Xiang (2023) Post-Processing Temporal Action Detection, In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 18837-18845, IEEE

Existing Temporal Action Detection (TAD) methods typically take a pre-processing step in converting an input varying-length video into a fixed-length snippet representation sequence, before temporal boundary estimation and action classification. This pre-processing step would temporally downsample the video, reducing the inference resolution and hampering the detection performance in the original temporal resolution. In essence, this is due to a temporal quantization error introduced during the resolution downsampling and recovery. This could negatively impact the TAD performance, but is largely ignored by existing methods. To address this problem, in this work we introduce a novel model-agnostic post-processing method without model redesign and retraining. Specifically, we model the start and end points of action instances with a Gaussian distribution for enabling temporal boundary inference at a sub-snippet level. We further introduce an efficient Taylor-expansion based approximation, dubbed as Gaussian Approximated Post-processing (GAP). Extensive experiments demonstrate that our GAP can consistently improve a wide variety of pre-trained off-the-shelf TAD models on the challenging ActivityNet (+0.2%∼0.7% in average mAP) and THUMOS (+0.2%∼0.5% in average mAP) benchmarks. Such performance gains are already significant and highly comparable to those achieved by novel model designs. Also, GAP can be integrated with model training for further performance gain. Importantly, GAP enables lower temporal resolutions for more efficient inference, facilitating low-resource applications. The code will be available in https://github.com/sauradip/GAP
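
To illustrate the sub-snippet boundary refinement described in this abstract, the sketch below refines a discrete boundary estimate with a Taylor expansion of the log-scores around the peak, assuming an approximately Gaussian score profile; it is a minimal reconstruction, not the authors' released code.

```python
# Illustrative sketch of Gaussian/Taylor-expansion boundary refinement
# in the spirit of GAP (not the authors' implementation).
import numpy as np

def refine_boundary(scores: np.ndarray) -> float:
    """Refine a discrete boundary estimate to sub-snippet precision.

    scores: 1D array of per-snippet boundary confidences, assumed to be
    approximately Gaussian around the true boundary.
    """
    m = int(np.argmax(scores))
    if m == 0 or m == len(scores) - 1:
        return float(m)  # cannot take a centred derivative at the ends
    logp = np.log(np.clip(scores, 1e-8, None))
    d1 = 0.5 * (logp[m + 1] - logp[m - 1])            # first derivative
    d2 = logp[m + 1] - 2.0 * logp[m] + logp[m - 1]    # second derivative
    if d2 >= 0:                                       # not a proper peak
        return float(m)
    return m - d1 / d2                                # Taylor-expansion offset

# Example: a peak lying between snippets 3 and 4 is recovered at ~3.4
scores = np.exp(-0.5 * ((np.arange(8) - 3.4) / 1.2) ** 2)
print(refine_boundary(scores))
```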

Xiao Han, Xiatian Zhu, Lincheng Yu, Li Zhang, Yi-Zhe Song, Tao Xiang (2023) FAME-ViL: Multi-Tasking Vision-Language Model for Heterogeneous Fashion Tasks, In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2669-2680, IEEE

In the fashion domain, there exists a variety of vision-and-language (V+L) tasks, including cross-modal retrieval, text-guided image retrieval, multi-modal classification, and image captioning. They differ drastically in each individual input/output format and dataset size. It has been common to design a task-specific model and fine-tune it independently from a pre-trained V+L model (e.g., CLIP). This results in parameter inefficiency and inability to exploit inter-task relatedness. To address such issues, we propose a novel FAshion-focused Multi-task Efficient learning method for Vision-and-Language tasks (FAME-ViL) in this work. Compared with existing approaches, FAME-ViL applies a single model for multiple heterogeneous fashion tasks, therefore being much more parameter-efficient. It is enabled by two novel components: (1) a task-versatile architecture with cross-attention adapters and task-specific adapters integrated into a unified V+L model, and (2) a stable and effective multi-task training strategy that supports learning from heterogeneous data and prevents negative transfer. Extensive experiments on four fashion tasks show that our FAME-ViL can save 61.5% of parameters over alternatives, while significantly outperforming the conventional independently trained single-task models. Code is available at https://github.com/BrandonHanx/FAME-ViL

Mengmeng Xu, Juan-Manuel Perez-Rua, Victor Escorcia, Brais Martinez, Xiatian Zhu, Li Zhang, Bernard Ghanem, Tao Xiang (2021) Boundary-Sensitive Pre-Training for Temporal Localization in Videos
Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip H. S. Torr, Li Zhang (2021) Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers, In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 6881-6890, Institute of Electrical and Electronics Engineers (IEEE)

Most recent semantic segmentation methods adopt a fully-convolutional network (FCN) with an encoder-decoder architecture. The encoder progressively reduces the spatial resolution and learns more abstract/semantic visual concepts with larger receptive fields. Since context modeling is critical for segmentation, the latest efforts have been focused on increasing the receptive field, through either dilated/atrous convolutions or inserting attention modules. However, the encoder-decoder based FCN architecture remains unchanged. In this paper, we aim to provide an alternative perspective by treating semantic segmentation as a sequence-to-sequence prediction task. Specifically, we deploy a pure transformer (i.e., without convolution and resolution reduction) to encode an image as a sequence of patches. With the global context modeled in every layer of the transformer, this encoder can be combined with a simple decoder to provide a powerful segmentation model, termed SEgmentation TRansformer (SETR). Extensive experiments show that SETR achieves new state of the art on ADE20K (50.28% mIoU), Pascal Context (55.83% mIoU) and competitive results on Cityscapes. Particularly, we achieve the first position in the highly competitive ADE20K test server leaderboard on the day of submission.
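
The sequence-to-sequence view described above can be illustrated with a toy model: patches are embedded as tokens, encoded by a plain transformer, and a simple decoder reshapes and upsamples them to per-pixel predictions. Layer sizes, the decoder, and the omission of positional embeddings are simplifications for brevity, not the SETR configuration.

```python
# Toy sketch of treating segmentation as patch-sequence encoding plus a
# simple decoder (illustrative sizes, not the SETR model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySeqSeg(nn.Module):
    def __init__(self, num_classes=21, patch=16, dim=256, depth=4):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Conv2d(dim, num_classes, kernel_size=1)

    def forward(self, x):
        b, _, hh, ww = x.shape
        tokens = self.patch_embed(x)                    # (B, D, H/p, W/p)
        h, w = tokens.shape[-2:]
        tokens = tokens.flatten(2).transpose(1, 2)      # (B, N, D) patch sequence
        tokens = self.encoder(tokens)                   # global context per layer
        feat = tokens.transpose(1, 2).reshape(b, -1, h, w)
        return F.interpolate(self.head(feat), size=(hh, ww),
                             mode="bilinear", align_corners=False)

print(TinySeqSeg()(torch.randn(1, 3, 224, 224)).shape)  # (1, 21, 224, 224)
```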

Shuaifeng Li, Mao Ye, Xiatian Zhu, Lihua Zhou, Lin Xiong (2022) Source-Free Object Detection by Learning to Overlook Domain Style, In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022), pp. 8014-8023, Institute of Electrical and Electronics Engineers (IEEE)

Source-free object detection (SFOD) needs to adapt a detector pre-trained on a labeled source domain to a target domain, with only unlabeled training data from the target domain. Existing SFOD methods typically adopt the pseudo labeling paradigm with model adaption alternating between predicting pseudo labels and fine-tuning the model. This approach suffers from both unsatisfactory accuracy of pseudo labels due to the presence of domain shift and limited use of target domain training data. In this work, we present a novel Learning to Overlook Domain Style (LODS) method with such limitations solved in a principled manner. Our idea is to reduce the domain shift effect by enforcing the model to overlook the target domain style, such that model adaptation is simplified and becomes easier to carry on. To that end, we enhance the style of each target domain image and leverage the style degree difference between the original image and the enhanced image as a self-supervised signal for model adaptation. By treating the enhanced image as an auxiliary view, we exploit a student-teacher architecture for learning to overlook the style degree difference against the original image, also characterized with a novel style enhancement algorithm and graph alignment constraint. Extensive experiments demonstrate that our LODS yields new state-of-the-art performance on four benchmarks.

Anindya Mondal, Sauradip Nag, Joaquin Prada, Xiatian Zhu, Anjan Dutta (2023) Actor-agnostic Multi-label Action Recognition with Multi-modal Query, In: arXiv.org

Existing action recognition methods are typically actor-specific due to the intrinsic topological and apparent differences among the actors. This requires actor-specific pose estimation (e.g., humans vs. animals), leading to cumbersome model design complexity and high maintenance costs. Moreover, they often focus on learning the visual modality alone and single-label classification whilst neglecting other available information sources (e.g., class name text) and the concurrent occurrence of multiple actions. To overcome these limitations, we propose a new approach called 'actor-agnostic multi-modal multi-label action recognition,' which offers a unified solution for various types of actors, including humans and animals. We further formulate a novel Multi-modal Semantic Query Network (MSQNet) model in a transformer-based object detection framework (e.g., DETR), characterized by leveraging visual and textual modalities to represent the action classes better. The elimination of actor-specific model designs is a key advantage, as it removes the need for actor pose estimation altogether. Extensive experiments on five publicly available benchmarks show that our MSQNet consistently outperforms the prior arts of actor-specific alternatives on human and animal single- and multi-label action recognition tasks by up to 50%. Code will be released at https://github.com/mondalanindya/MSQNet.

Jiaqi Chen, Jiachen Lu, Xiatian Zhu, Li Zhang (2023) Generative Semantic Segmentation, In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7111-7120, IEEE

We present Generative Semantic Segmentation (GSS), a generative learning approach for semantic segmentation. Uniquely, we cast semantic segmentation as an image-conditioned mask generation problem. This is achieved by replacing the conventional per-pixel discriminative learning with a latent prior learning process. Specifically, we model the variational posterior distribution of latent variables given the segmentation mask. To that end, the segmentation mask is expressed with a special type of image (dubbed as maskige). This posterior distribution allows to generate segmentation masks unconditionally. To achieve semantic segmentation on a given image, we further introduce a conditioning network. It is optimized by minimizing the divergence between the posterior distribution of maskige (i.e. segmentation masks) and the latent prior distribution of input training images. Extensive experiments on standard benchmarks show that our GSS can perform competitively to prior art alternatives in the standard semantic segmentation setting, whilst achieving a new state of the art in the more challenging cross-domain setting.

Jiachen Lu, Jinghan Yao, Junge Zhang, Xiatian Zhu, Hang Xu, Weiguo Gao, Chunjing Xu, Tao Xiang, Li Zhang (2021) SOFT: Softmax-free Transformer with Linear Complexity, In: M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, J. W. Vaughan (eds.), Advances in Neural Information Processing Systems 34 (NeurIPS 2021), pp. 21297-21309, Neural Information Processing Systems (NeurIPS)

Vision transformers (ViTs) have pushed the state-of-the-art for various visual recognition tasks by patch-wise image tokenization followed by self-attention. However, the employment of self-attention modules results in a quadratic complexity in both computation and memory usage. Various attempts on approximating the self-attention computation with linear complexity have been made in Natural Language Processing. However, an in-depth analysis in this work shows that they are either theoretically flawed or empirically ineffective for visual recognition. We further identify that their limitations are rooted in keeping the softmax self-attention during approximations. Specifically, conventional self-attention is computed by normalizing the scaled dot-product between token feature vectors. Keeping this softmax operation challenges any subsequent linearization efforts. Based on this insight, for the first time, a softmax-free transformer or SOFT is proposed. To remove softmax in self-attention, Gaussian kernel function is used to replace the dot-product similarity without further normalization. This enables a full self-attention matrix to be approximated via a low-rank matrix decomposition. The robustness of the approximation is achieved by calculating its Moore-Penrose inverse using a Newton-Raphson method. Extensive experiments on ImageNet show that our SOFT significantly improves the computational efficiency of existing ViT variants. Crucially, with a linear complexity, much longer token sequences are permitted in SOFT, resulting in superior trade-off between accuracy and complexity.
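
A minimal sketch of the softmax-free attention idea described above, using a Gaussian kernel over token distances in place of softmax-normalised dot products; the paper's low-rank decomposition and Newton-Raphson Moore-Penrose inverse are omitted here.

```python
# Gaussian-kernel (softmax-free) self-attention sketch; the full SOFT method
# further approximates this matrix with a low-rank decomposition.
import torch

def gaussian_kernel_attention(q, k, v):
    """q, k, v: (batch, tokens, dim). Returns (batch, tokens, dim)."""
    # Pairwise squared Euclidean distances between query and key tokens.
    d2 = torch.cdist(q, k, p=2.0) ** 2
    # Gaussian kernel similarity; no softmax normalisation is applied.
    attn = torch.exp(-0.5 * d2 / q.shape[-1])
    return attn @ v

x = torch.randn(2, 16, 64)
out = gaussian_kernel_attention(x, x, x)
print(out.shape)  # torch.Size([2, 16, 64])
```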

Junting Pan, Ziyi Lin, Yuying Ge, Xiatian Zhu, Renrui Zhang, Yi Wang, Yu Qiao, Hongsheng Li Retrieving-to-Answer: Zero-Shot Video Question Answering with Frozen Large Language Models, In: arXiv.org

Video Question Answering (VideoQA) has been significantly advanced from the scaling of recent Large Language Models (LLMs). The key idea is to convert the visual information into the language feature space so that the capacity of LLMs can be fully exploited. Existing VideoQA methods typically take two paradigms: (1) learning cross-modal alignment, and (2) using an off-the-shelf captioning model to describe the visual data. However, the first design needs costly training on many extra multi-modal data, whilst the second is further constrained by limited domain generalization. To address these limitations, a simple yet effective Retrieving-to-Answer (R2A) framework is proposed. Given an input video, R2A first retrieves a set of semantically similar texts from a generic text corpus using a pre-trained multi-modal model (e.g., CLIP). With both the question and the retrieved texts, a LLM (e.g., DeBERTa) can be directly used to yield a desired answer. Without the need for cross-modal fine-tuning, R2A allows for all the key components (e.g., LLM, retrieval model, and text corpus) to plug-and-play. Extensive experiments on several VideoQA benchmarks show that despite having only 1.3B parameters and no fine-tuning, our R2A can outperform the 61 times larger Flamingo-80B model even additionally trained on nearly 2.1B multi-modal data.
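
The retrieve-then-answer pipeline can be sketched as follows; embed_video, embed_text, and answer_with_llm are placeholder callables standing in for a frozen multi-modal encoder and a frozen language model, not real APIs.

```python
# Schematic sketch of a retrieval-augmented zero-shot VideoQA pipeline
# following the description above (all components are placeholders).
import numpy as np

def retrieve_to_answer(video, question, corpus, embed_video, embed_text,
                       answer_with_llm, top_k=5):
    v = embed_video(video)                              # (d,) video embedding
    t = np.stack([embed_text(s) for s in corpus])       # (N, d) corpus embeddings
    sims = t @ v / (np.linalg.norm(t, axis=1) * np.linalg.norm(v) + 1e-8)
    retrieved = [corpus[i] for i in np.argsort(-sims)[:top_k]]
    # The retrieved texts act as a textual surrogate for the visual content.
    prompt = " ".join(retrieved) + " Question: " + question
    return answer_with_llm(prompt)
```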

Jay Gala, Sauradip Nag, Huichou Huang, Ruirui Liu, Xiatian Zhu Adaptive-Labeling for Enhancing Remote Sensing Cloud Understanding

Cloud analysis is a critical component of weather and climate science, impacting various sectors like disaster management. However, achieving fine-grained cloud analysis, such as cloud segmentation, in remote sensing remains challenging due to the inherent difficulties in obtaining accurate labels, leading to significant labeling errors in training data. Existing methods often assume the availability of reliable segmentation annotations, limiting their overall performance. To address this inherent limitation, we introduce an innovative model-agnostic Cloud Adaptive-Labeling (CAL) approach, which operates iteratively to enhance the quality of training data annotations and consequently improve the performance of the learned model. Our methodology commences by training a cloud segmentation model using the original annotations. Subsequently, it introduces a trainable pixel intensity threshold for adaptively labeling the cloud training images on the fly. The newly generated labels are then employed to fine-tune the model. Extensive experiments conducted on multiple standard cloud segmentation benchmarks demonstrate the effectiveness of our approach in significantly boosting the performance of existing segmentation models. Our CAL method establishes new state-of-the-art results when compared to a wide array of existing alternatives.
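
A much-simplified sketch of one adaptive-labelling round as described above, assuming tau is the pixel intensity threshold (trainable in the paper, a scalar placeholder here) and the segmentation model and fine-tuning routine exist elsewhere.

```python
# Simplified Cloud Adaptive-Labeling step: regenerate cloud masks from an
# adaptive intensity threshold, then reuse them for fine-tuning (sketch only).
import torch

def adaptive_relabel(images: torch.Tensor, tau: float) -> torch.Tensor:
    """images: (B, C, H, W) satellite images. Returns (B, H, W) cloud masks."""
    intensity = images.mean(dim=1)          # per-pixel grey-level proxy
    return (intensity > tau).long()         # 1 = cloud, 0 = background

# Usage inside a training loop (model and fine_tune assumed to exist):
#   new_masks = adaptive_relabel(batch_images, tau)
#   fine_tune(model, batch_images, new_masks)
```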

Sauradip Nag, Xiatian Zhu, Jiankang Deng, Yi-Zhe Song, Tao Xiang DiffTAD: Temporal Action Detection with Proposal Denoising Diffusion

We propose a new formulation of temporal action detection (TAD) with denoising diffusion, DiffTAD in short. Taking as input random temporal proposals, it can yield action proposals accurately given an untrimmed long video. This presents a generative modeling perspective, against previous discriminative learning manners. This capability is achieved by first diffusing the ground-truth proposals to random ones (i.e., the forward/noising process) and then learning to reverse the noising process (i.e., the backward/denoising process). Concretely, we establish the denoising process in the Transformer decoder (e.g., DETR) by introducing a temporal location query design with faster convergence in training. We further propose a cross-step selective conditioning algorithm for inference acceleration. Extensive evaluations on ActivityNet and THUMOS show that our DiffTAD achieves top performance compared to previous art alternatives. The code will be made available at https://github.com/sauradip/DiffusionTAD.
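
The forward (noising) step on ground-truth proposals follows the standard denoising-diffusion formulation referred to above; a minimal sketch is given below (the learned DETR-style denoiser is not shown).

```python
# Forward diffusion of ground-truth temporal proposals (standard DDPM form).
import torch

def noise_proposals(proposals, t, alphas_cumprod):
    """proposals: (N, 2) normalised (start, end) pairs in [0, 1];
    t: integer diffusion step; alphas_cumprod: (T,) cumulative products."""
    a_bar = alphas_cumprod[t]
    noise = torch.randn_like(proposals)
    return a_bar.sqrt() * proposals + (1.0 - a_bar).sqrt() * noise

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
gt = torch.tensor([[0.10, 0.25], [0.60, 0.80]])
print(noise_proposals(gt, t=500, alphas_cumprod=alphas_cumprod))
```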

Song Tang, An Chang, Fabian Zhang, Xiatian Zhu, Mao Ye, Changshui Zhang (2023) Source-Free Domain Adaptation via Target Prediction Distribution Searching, In: International Journal of Computer Vision

Existing Source-Free Domain Adaptation (SFDA) methods typically adopt the feature distribution alignment paradigm via mining auxiliary information (e.g., pseudo-labelling, source domain data generation). However, they are largely limited because the auxiliary information is usually error-prone whilst lacking effective error-mitigation mechanisms. To overcome this fundamental limitation, in this paper we propose a novel Target Prediction Distribution Searching (TPDS) paradigm. Theoretically, we prove that in the case of sufficiently small distribution shift, the domain transfer error can be well bounded. To satisfy this condition, we introduce a flow of proxy distributions that facilitates the bridging of the typically large distribution shift from the source domain to the target domain. This results in a progressive search on the geodesic path where adjacent proxy distributions are regularized to have small shift so that the overall errors can be minimized. To account for the sequential correlation between proxy distributions, we develop a new pairwise alignment with category consistency algorithm for minimizing the adaptation errors. Specifically, a manifold geometry guided cross-distribution neighbour search is designed to detect the data pairs supporting the Wasserstein distance based shift measurement. Mutual information maximization is then adopted over these pairs for shift regularization. Extensive experiments on five challenging SFDA benchmarks show that our TPDS achieves new state-of-the-art performance. The code and datasets are available at https://github.com/tntek/TPDS

Lihua Zhou, Nianxin Li, Mao Ye, Xiatian Zhu, Song Tang (2024) Source-free domain adaptation with Class Prototype Discovery, In: Pattern Recognition 145, 109974, Elsevier Ltd

Source-free domain adaptation requires no access to the source domain training data during unsupervised domain adaptation. This is critical for meeting particular data sharing, privacy, and license constraints, whilst raising novel algorithmic challenges. Existing source-free domain adaptation methods rely on either generating pseudo samples/prototypes of source or target domain style, or simply leveraging pseudo-labels (self-training). They suffer from low-quality generated samples/prototypes or noisy pseudo-label target samples. In this work, we address both limitations by introducing a novel Class Prototype Discovery (CPD) method. In contrast to all alternatives, our CPD is established on a set of semantic class prototypes each constructed for representing a specific class. By designing a classification score based prototype learning mechanism, we reformulate the source-free domain adaptation problem to class prototype optimization using all the target domain training data, and without the need for data generation. Then, class prototypes are used to cluster target features to assign them pseudo-labels, which highly complements the conventional self-training strategy. Besides, a prototype regularization is introduced for exploiting well-established distribution alignment based on pseudo-labeled target samples and class prototypes. Along with theoretical analysis, we conduct extensive experiments on three standard benchmarks to validate the performance advantages of our CPD over the state-of-the-art models.
• We propose a novel Class Prototype Discovery method for solving the SFDA problem.
• A prototype regularization is introduced based on distribution alignment strategy.
• CPD outperforms a wide variety of state-of-the-art methods, often by a large margin.
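
The core pseudo-labelling step, assigning each target feature to its nearest class prototype, can be sketched as below; this is illustrative only and omits the prototype learning and regularization described above.

```python
# Prototype-based pseudo-labelling sketch (cosine similarity to class prototypes).
import torch
import torch.nn.functional as F

def prototype_pseudo_labels(features, prototypes):
    """features: (N, d) target features; prototypes: (C, d) class prototypes.
    Returns (N,) pseudo-labels from cosine similarity to each prototype."""
    f = F.normalize(features, dim=1)
    p = F.normalize(prototypes, dim=1)
    return (f @ p.t()).argmax(dim=1)
```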

Yanbei Chen, Massimiliano Mancini, Xiatian Zhu, Zeynep Akata (2022) Semi-Supervised and Unsupervised Deep Visual Learning: A Survey, In: IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE

State-of-the-art deep learning models are often trained with a large amount of costly labeled training data. However, requiring exhaustive manual annotations may degrade the model's generalizability in the limited-label regime. Semi-supervised learning and unsupervised learning offer promising paradigms to learn from an abundance of unlabeled visual data. Recent progress in these paradigms has indicated the strong benefits of leveraging unlabeled data to improve model generalization and provide better model initialization. In this survey, we review the recent advanced deep learning algorithms on semi-supervised learning (SSL) and unsupervised learning (UL) for visual recognition from a unified perspective. To offer a holistic understanding of the state-of-the-art in these areas, we propose a unified taxonomy. We categorize existing representative SSL and UL with comprehensive and insightful analysis to highlight their design rationales in different learning scenarios and applications in different computer vision tasks. Lastly, we discuss the emerging trends and open challenges in SSL and UL to shed light on future critical research directions.

Mantun Chen, Yongjun Wang, Zhiquan Qin, Xiatian Zhu (2021) Few-Shot Website Fingerprinting Attack with Data Augmentation, In: Security and Communication Networks, 2840289, Hindawi

This work introduces a novel data augmentation method for few-shot website fingerprinting (WF) attack where only a handful of training samples per website are available for deep learning model optimization. Moving beyond earlier WF methods relying on manually-engineered feature representations, more advanced deep learning alternatives demonstrate that learning feature representations automatically from training data is superior. Nonetheless, this advantage is subject to an unrealistic assumption that there exist many training samples per website, without which it will disappear. To address this, we introduce a model-agnostic, efficient, and harmonious data augmentation (HDA) method that can improve deep WF attacking methods significantly. HDA involves both intrasample and intersample data transformations that can be used in a harmonious manner to expand a tiny training dataset to an arbitrarily large collection, therefore effectively and explicitly addressing the intrinsic data scarcity problem. We conducted extensive experiments to validate our HDA for boosting state-of-the-art deep learning WF attack models in both closed-world and open-world attacking scenarios, in the absence and presence of strong defense. For instance, in the more challenging and realistic evaluation scenario with WTF-PAD-based defense, our HDA method surpasses the previous state-of-the-art results by nearly 3% in classification accuracy in the 20-shot learning case.
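
The following sketch illustrates the intra-sample and inter-sample transformation idea described above; the concrete operations shown (random masking and pairwise mixing) are assumptions for illustration, not the paper's exact augmentation recipe.

```python
# Illustrative intra-/inter-sample augmentations for traffic traces (assumed
# operations, not the HDA recipe itself).
import numpy as np

def intra_sample_aug(trace, mask_ratio=0.1, rng=np.random):
    """Randomly zero out a fraction of a single traffic trace."""
    out = trace.copy()
    idx = rng.choice(len(out), size=int(mask_ratio * len(out)), replace=False)
    out[idx] = 0
    return out

def inter_sample_aug(trace_a, trace_b, lam=0.7):
    """Blend two traces from the same website into a new synthetic sample."""
    return lam * trace_a + (1.0 - lam) * trace_b
```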

Zhiyi Cheng, Xiatian Zhu, Shaogang Gong (2020) Characteristic Regularisation for Super-Resolving Face Images, In: 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 2424-2433, IEEE

Existing facial image super-resolution (SR) methods focus mostly on improving "artificially down-sampled" lowresolution (LR) imagery. Such SR models, although strong at handling artificial LR images, often suffer from significant performance drop on genuine LR test data. Previous unsupervised domain adaptation (UDA) methods address this issue by training a model using unpaired genuine LR and HR data as well as cycle consistency loss formulation. However, this renders the model overstretched with two tasks: consistifying the visual characteristics and enhancing the image resolution. Importantly, this makes the end-to-end model training ineffective due to the difficulty of back-propagating gradients through two concatenated CNNs. To solve this problem, we formulate a method that joins the advantages of conventional SR and UDA models. Specifically, we separate and control the optimisations for characteristics consistifying and image super-resolving by introducing Characteristic Regularisation (CR) between them. This task split makes the model training more effective and computationally tractable. Extensive evaluations demonstrate the performance superiority of our method over state-of-the-art SR and UDA models on both genuine and artificial LR facial imagery data.

Jiabo Huang, Shaogang Gong, Xiatian Zhu (2020) Deep Semantic Clustering by Partition Confidence Maximisation, In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8846-8855, IEEE

By simultaneously learning visual features and data grouping, deep clustering has shown impressive ability to deal with unsupervised learning for structure analysis of high-dimensional visual data. Existing deep clustering methods typically rely on local learning constraints based on inter-sample relations and/or self-estimated pseudo labels. This is susceptible to the inevitable errors distributed in the neighbourhoods and suffers from error-propagation during training. In this work, we propose to solve this problem by learning the most confident clustering solution from all the possible separations, based on the observation that assigning samples from the same semantic categories into different clusters will reduce both the intra-cluster compactness and inter-cluster diversity, i.e. lower partition confidence. Specifically, we introduce a novel deep clustering method named PartItion Confidence mAximisation (PICA). It is established on the idea of learning the most semantically plausible data separation, in which all clusters can be mapped to the ground-truth classes one-to-one, by maximising the "global" partition confidence of clustering solution. This is realised by introducing a differentiable partition uncertainty index and its stochastic approximation as well as a principled objective loss function that minimises such index, all of which together enables a direct adoption of the conventional deep networks and mini-batch based model training. Extensive experiments on six widely-adopted clustering benchmarks demonstrate our model's performance superiority over a wide range of the state-of-the-art approaches. The code is available online.

Lihua Zhou, Siying Xiao, Mao Ye, Xiatian Zhu, Shuaifeng Li (2023) Adaptive Mutual Learning for Unsupervised Domain Adaptation, In: IEEE Transactions on Circuits and Systems for Video Technology, Institute of Electrical and Electronics Engineers (IEEE)

Unsupervised domain adaptation aims to transfer knowledge from labeled source domain to unlabeled target domain. The semi-supervised method based on mean-teacher framework is one of the main stream approaches. By enforcing consistency constraints, it is hoped that the teacher network will distill useful source domain knowledge to the student network. However, in practice negative transfer often emerges because the performance of the teacher network is not guaranteed to be always better than the student network. To address this limitation, a novel Adaptive Mutual Learning (AML) strategy is proposed in this paper. Specifically, given a target sample, the network with worse prediction will be optimized by pushing its prediction close to the better prediction. This is in the spirit of traditional knowledge distillation. On the other hand, the network with better prediction is further refined by requiring its prediction to stay away from the worse prediction. This can be regarded conceptually as reverse knowledge distillation. In this way, two networks learn from each other according to their respective performance. At inference phase, the averaged output of these two networks can be taken as the final prediction. Experimental results demonstrate that our AML achieves competitive results.
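
A simplified sketch of the two mutual-learning losses described above: the worse prediction is pulled towards the better one, and the better prediction is pushed away from the worse one. Deciding which network is "better" is reduced here to prediction confidence, a simplification of the paper's criterion.

```python
# Simplified adaptive mutual learning losses (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

def aml_losses(p1, p2):
    """p1, p2: (B, C) softmax predictions of the two networks for one batch."""
    conf1, conf2 = p1.max(dim=1).values, p2.max(dim=1).values
    better_is_1 = (conf1 >= conf2).float().unsqueeze(1)
    better = better_is_1 * p1 + (1 - better_is_1) * p2
    worse = better_is_1 * p2 + (1 - better_is_1) * p1
    # Worse network: pull its prediction towards the better one (distillation).
    loss_worse = F.kl_div(worse.log(), better.detach(), reduction="batchmean")
    # Better network: push it away from the worse prediction (reverse step).
    loss_better = -F.kl_div(better.log(), worse.detach(), reduction="batchmean")
    return loss_worse, loss_better
```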

Peng Xu, Xiatian Zhu, David A. Clifton (2023) Multimodal Learning With Transformers: A Survey, In: IEEE Transactions on Pattern Analysis and Machine Intelligence 45(10), pp. 12113-12132, IEEE

Transformer is a promising neural network learner, and has achieved great success in various machine learning tasks. Thanks to the recent prevalence of multimodal applications and Big Data, Transformer-based multimodal learning has become a hot topic in AI research. This paper presents a comprehensive survey of Transformer techniques oriented at multimodal data. The main contents of this survey include: (1) a background of multimodal learning, Transformer ecosystem, and the multimodal Big Data era, (2) a systematic review of Vanilla Transformer, Vision Transformer, and multimodal Transformers, from a geometrically topological perspective, (3) a review of multimodal Transformer applications, via two important paradigms, i.e., for multimodal pretraining and for specific multimodal tasks, (4) a summary of the common challenges and designs shared by the multimodal Transformer models and applications, and (5) a discussion of open problems and potential research directions for the community.

Mantun Chen, Yongjun Wang, Hongzuo Xu, Xiatian Zhu (2021) Few-shot website fingerprinting attack, In: Computer Networks 198, 108298, Elsevier B.V.

Website fingerprinting (WF) attack stands in opposition to privacy protection when using the Internet, even when the content details are encrypted, such as in Tor networks. Given the existing difficulty in preparing many training samples, we study a more realistic problem: few-shot website fingerprinting attack, where only a few training samples per website are available. We introduce a novel Transfer Learning Fingerprinting Attack (TLFA) that can transfer knowledge from the labeled training data of websites disjoint and independent to the target websites. Specifically, TLFA trains a stronger embedding model with the training data collected from non-target websites, which is then leveraged in a task-agnostic manner with a task-specific classifier model fine-tuned on a small set of labeled training data from target websites. We conducted extensive experiments to validate the superiority of our TLFA over the state-of-the-art methods in both closed-world and open-world attacking scenarios, in the absence and presence of strong defense.
• We study a realistic and difficult few-shot website fingerprinting attack problem.
• We propose a novel Transfer Learning Fingerprinting Attack (TLFA) method.
• Experiments show TLFA significantly outperforms previous state-of-the-art methods.
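
The transfer pipeline can be sketched as follows: a trace-embedding network, pre-trained on non-target websites, stays frozen, and only a light-weight classifier is fitted on the few labelled target samples. The architecture and sizes below are placeholders, not those of the paper.

```python
# Few-shot fine-tuning on top of a frozen, pre-trained trace embedder (sketch).
import torch
import torch.nn as nn

embedder = nn.Sequential(nn.Conv1d(1, 32, 8), nn.ReLU(),
                         nn.AdaptiveAvgPool1d(1), nn.Flatten())  # pre-trained elsewhere
for p in embedder.parameters():
    p.requires_grad = False                       # task-agnostic, kept frozen

classifier = nn.Linear(32, 100)                   # e.g. 100 target websites
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

def fine_tune_step(traces, labels):
    """traces: (B, 1, L) few-shot target traces; labels: (B,) website ids."""
    with torch.no_grad():
        feats = embedder(traces)
    loss = nn.functional.cross_entropy(classifier(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```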

Wei Li, Xiatian Zhu, Shaogang Gong (2020) Scalable Person Re-Identification by Harmonious Attention, In: International Journal of Computer Vision 128(6), pp. 1635-1653, Springer Nature

Existing person re-identification (re-id) deep learning methods rely heavily on the utilisation of large and computationally expensive convolutional neural networks. They are therefore not scalable to large scale re-id deployment scenarios with the need of processing a large amount of surveillance video data, due to the lengthy inference process with high computing costs. In this work, we address this limitation via jointly learning re-id attention selection. Specifically, we formulate a novel harmonious attention network (HAN) framework to jointly learn soft pixel attention and hard region attention alongside simultaneous deep feature representation learning, particularly enabling more discriminative re-id matching by efficient networks with more scalable model inference and feature matching. Extensive evaluations validate the cost-effectiveness superiority of the proposed HAN approach for person re-id against a wide variety of state-of-the-art methods on four large benchmark datasets: CUHK03, Market-1501, DukeMTMC, and MSMT17.

Zeyu Yang, Jiaqi Chen, Zhenwei Miao, Wei Li, Xiatian Zhu, Li Zhang DeepInteraction: 3D Object Detection via Modality Interaction, In: arXiv (Cornell University)

Existing top-performance 3D object detectors typically rely on the multi-modal fusion strategy. This design is however fundamentally restricted due to overlooking the modality-specific useful information and finally hampering the model performance. To address this limitation, in this work we introduce a novel modality interaction strategy where individual per-modality representations are learned and maintained throughout for enabling their unique characteristics to be exploited during object detection. To realize this proposed strategy, we design a DeepInteraction architecture characterized by a multi-modal representational interaction encoder and a multi-modal predictive interaction decoder. Experiments on the large-scale nuScenes dataset show that our proposed method surpasses all prior arts often by a large margin. Crucially, our method is ranked at the first position at the highly competitive nuScenes object detection leaderboard.

Jiachen Lu, Zheyuan Zhou, Xiatian Zhu, Hang Xu, Li Zhang (2022) Learning Ego 3D Representation as Ray Tracing, In: S. Avidan, G. Brostow, M. Cisse, G. M. Farinella, T. Hassner (eds.), Computer Vision - ECCV 2022, Part XXVI, vol. 13686, pp. 129-144, Springer Nature

A self-driving perception model aims to extract 3D semantic representations from multiple cameras collectively into the bird's-eye-view (BEV) coordinate frame of the ego car in order to ground downstream planner. Existing perception methods often rely on error-prone depth estimation of the whole scene or learning sparse virtual 3D representations without the target geometry structure, both of which remain limited in performance and/or capability. In this paper, we present a novel end-to-end architecture for ego 3D representation learning from an arbitrary number of unconstrained camera views. Inspired by the ray tracing principle, we design a polarized grid of "imaginary eyes" as the learnable ego 3D representation and formulate the learning process with the adaptive attention mechanism in conjunction with the 3D-to-2D projection. Critically, this formulation allows extracting rich 3D representation from 2D images without any depth supervision, and with the built-in geometry structure consistent w.r.t BEV. Despite its simplicity and versatility, extensive experiments on standard BEV visual tasks (e.g., camera-based 3D object detection and BEV segmentation) show that our model outperforms all state-of-the-art alternatives significantly, with an extra advantage in computational efficiency from multi-task learning.

W. Li, S. Gong, X. Zhu (2020) Neural Graph Embedding for Neural Architecture Search, Association for the Advancement of Artificial Intelligence

Existing neural architecture search (NAS) methods often operate in discrete or continuous spaces directly, which ignores the graphical topology knowledge of neural networks. This leads to suboptimal search performance and efficiency, given the fact that neural networks are essentially directed acyclic graphs (DAG). In this work, we address this limitation by introducing a novel idea of neural graph embedding (NGE). Specifically, we represent the building block (i.e. the cell) of neural networks with a neural DAG, and learn it by leveraging a Graph Convolutional Network to propagate and model the intrinsic topology information of network architectures. This results in a generic neural network representation integrable with different existing NAS frameworks. Extensive experiments show the superiority of NGE over the state-of-the-art methods on image classification and semantic segmentation.

G. Wu, X. Zhu, S. Gong (2020) Tracklet Self-Supervised Learning for Unsupervised Person Re-Identification, Association for the Advancement of Artificial Intelligence

Existing unsupervised person re-identification (re-id) methods mainly focus on cross-domain adaptation or one-shot learning. Although they are more scalable than the supervised learning counterparts, relying on a relevant labelled source domain or one labelled tracklet per person initialisation still restricts their scalability in real-world deployments. To alleviate these problems, some recent studies develop unsupervised tracklet association and bottom-up image clustering methods, but they still rely on explicit camera annotation or merely utilise suboptimal global clustering. In this work, we formulate a novel tracklet self-supervised learning (TSSL) method, which is capable of capitalising directly from abundant unlabelled tracklet data, to optimise a feature embedding space for both video and image unsupervised re-id. This is achieved by designing a comprehensive unsupervised learning objective that accounts for tracklet frame coherence, tracklet neighbourhood compactness, and tracklet cluster structure in a unified formulation. As a pure unsupervised learning re-id model, TSSL is end-to-end trainable at the absence of source data annotation, person identity labels, and camera prior knowledge. Extensive experiments demonstrate the superiority of TSSL over a wide variety of the state-of-the-art alternative methods on four large-scale person re-id benchmarks, including Market-1501, DukeMTMC-ReID, MARS and DukeMTMC-VideoReID.

Wei Li, Shaogang Gong, Xiatian Zhu (2023) Neural operator search, In: Pattern Recognition 136, 109215, Elsevier Ltd

• A novel heterogeneous search space for NAS with richer primitive operations (e.g., feature self-calibration).
• A novel Neural Operator Search (NOS) method dedicated for NAS in the proposed heterogeneous search space.
• Our approach is highly competitive on both CIFAR and ImageNet-mobile image classification tests.
Existing neural architecture search (NAS) methods usually explore a limited feature-transformation-only search space, ignoring other advanced feature operations such as feature self-calibration by attention and dynamic convolutions. This prevents the NAS algorithms from discovering more advanced network architectures. We address this limitation by additionally exploiting feature self-calibration operations, resulting in a heterogeneous search space. To solve the challenges of operation heterogeneity and significantly larger search space, we formulate a neural operator search (NOS) method. NOS presents a novel heterogeneous residual block for integrating the heterogeneous operations in a unified structure, and an attention guided search strategy for facilitating the search process over a vast space. Extensive experiments show that NOS can search novel cell architectures with highly competitive performance on the CIFAR and ImageNet benchmarks.

Feng Zhang, Xiatian Zhu, Chen Wang (2022) A Comprehensive Survey on Single-Person Pose Estimation in Social Robotics, In: International Journal of Social Robotics 14(9), pp. 1995-2008, Springer Nature

With the development of the economy and the improvement of people's living standard, social robotics gradually enter into daily lives of individuals. Human-robot interaction is the basic function of social robotics, and how to achieve better experience of human-robot interaction is an important issue in the field of social robotics. Single-person pose estimation is the core technology for human-robot interaction in social robots. Benefiting from the development of deep learning, single-person pose estimation has made great progress. This paper reviews the development of single-person pose estimation from four aspects: data augmentation, the evolution of SPPE model, learning target and post-processing. Besides, we give the commonly used datasets and evaluation metrics. Finally, the problems of SPPE are discussed and the future research trends are given.

Yanqin Jiang, Li Zhang, Zhenwei Miao, Xiatian Zhu, Jin Gao, Weiming Hu, Yu-Gang Jiang (2023) PolarFormer: Multi-camera 3D Object Detection with Polar Transformer, In: arXiv.org

3D object detection in autonomous driving aims to reason "what" and "where" the objects of interest present in a 3D world. Following the conventional wisdom of previous 2D object detection, existing methods often adopt the canonical Cartesian coordinate system with perpendicular axis. However, we conjecture that this does not fit the nature of the ego car's perspective, as each onboard camera perceives the world in the shape of a wedge intrinsic to the imaging geometry with a radial (non-perpendicular) axis. Hence, in this paper we advocate the exploitation of the Polar coordinate system and propose a new Polar Transformer (PolarFormer) for more accurate 3D object detection in the bird's-eye-view (BEV), taking as input only multi-camera 2D images. Specifically, we design a cross attention based Polar detection head without restriction to the shape of input structure to deal with irregular Polar grids. For tackling the unconstrained object scale variations along Polar's distance dimension, we further introduce a multi-scale Polar representation learning strategy. As a result, our model can make best use of the Polar representation rasterized via attending to the corresponding image observation in a sequence-to-sequence fashion subject to the geometric constraints. Thorough experiments on the nuScenes dataset demonstrate that our PolarFormer outperforms significantly state-of-the-art 3D object detection alternatives.

Xian Shi, Xun Xu, Wanyue Zhang, Xiatian Zhu, Chuan Sheng Foo, Kui Jia (2022) Open-Set Semi-Supervised Learning for 3D Point Cloud Understanding, In: 2022 26th International Conference on Pattern Recognition (ICPR), pp. 5045-5051, IEEE

Semantic understanding of 3D point cloud relies on learning models with massively annotated data, which, in many cases, are expensive or difficult to collect. This has led to an emerging research interest in semi-supervised learning (SSL) for 3D point cloud. It is commonly assumed in SSL that the unlabeled data are drawn from the same distribution as that of the labeled ones; this assumption, however, rarely holds true in realistic environments. Blindly using out-of-distribution (OOD) unlabeled data could harm SSL performance. In this work, we propose to selectively utilize unlabeled data through sample weighting, so that only conducive unlabeled data would be prioritized. To estimate the weights, we adopt a bi-level optimization framework which iteratively optimizes a meta-objective on a held-out validation set and a task-objective on a training set. Faced with the instability of efficient bi-level optimizers, we further propose three regularization techniques to enhance the training stability. Extensive experiments on 3D point cloud classification and segmentation tasks verify the effectiveness of our proposed method. We also demonstrate the feasibility of a more efficient training strategy. Our code is released on GitHub.

J. Huang, Q. Dong, S. Gong, X. Zhu (2020) Unsupervised Deep Learning via Affinity Diffusion, Association for the Advancement of Artificial Intelligence

Convolutional neural networks (CNNs) have achieved unprecedented success in a variety of computer vision tasks. However, they usually rely on supervised model learning with the need for massive labelled training data, limiting dramatically their usability and deployability in real-world scenarios without any labelling budget. In this work, we introduce a general-purpose unsupervised deep learning approach to deriving discriminative feature representations. It is based on self-discovering semantically consistent groups of unlabelled training samples with the same class concepts through a progressive affinity diffusion process. Extensive experiments on object image classification and clustering show the performance superiority of the proposed method over the state-of-the-art unsupervised learning models using six common image recognition benchmarks including MNIST, SVHN, STL10, CIFAR10, CIFAR100 and ImageNet.

Mengmeng Xu, Juan-Manuel Perez-Rua, Xiatian Zhu, Bernard Ghanem, Brais Martinez (2021) Low-Fidelity Video Encoder Optimization for Temporal Action Localization, In: M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, J. W. Vaughan (eds.), Advances in Neural Information Processing Systems 34 (NeurIPS 2021), Neural Information Processing Systems (NeurIPS)

Most existing temporal action localization (TAL) methods rely on a transfer learning pipeline, first optimizing a video encoder on a large action classification dataset (i.e., source domain), followed by freezing the encoder and training a TAL head on the action localization dataset (i.e., target domain). This results in a task discrepancy problem for the video encoder - trained for action classification, but used for TAL. Intuitively, joint optimization with both the video encoder and TAL head is an obvious solution to this discrepancy. However, this is not operable for TAL subject to the GPU memory constraints, due to the prohibitive computational cost in processing long untrimmed videos. In this paper, we resolve this challenge by introducing a novel low-fidelity (LoFi) video encoder optimization method. Instead of always using the full training configurations in TAL learning, we propose to reduce the mini-batch composition in terms of temporal, spatial or spatio-temporal resolution so that jointly optimizing the video encoder and TAL head becomes operable under the same memory conditions of a mid-range hardware budget. Crucially, this enables the gradients to flow backwards through the video encoder conditioned on a TAL supervision loss, favourably solving the task discrepancy problem and providing more effective feature representations. Extensive experiments show that the proposed LoFi optimization approach can significantly enhance the performance of existing TAL methods. Encouragingly, even with a lightweight ResNet18 based video encoder in a single RGB stream, our method surpasses two-stream (RGB + optical flow) ResNet50 based alternatives, often by a good margin. Our code is publicly available at https://github.com/saic-fi/lofi_action_localization.
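
The low-fidelity mini-batch idea can be illustrated by reducing a clip's temporal and spatial resolution before the joint encoder-plus-head forward pass; the stride and scale below are arbitrary examples, not the paper's configurations.

```python
# Sketch of building a low-fidelity clip so that encoder and TAL head fit
# jointly in a fixed memory budget (illustrative settings only).
import torch
import torch.nn.functional as F

def to_low_fidelity(clip, temporal_stride=2, spatial_scale=0.5):
    """clip: (B, C, T, H, W) video tensor. Returns a reduced-resolution clip."""
    clip = clip[:, :, ::temporal_stride]                          # temporal LoFi
    b, c, t, h, w = clip.shape
    clip = F.interpolate(clip.reshape(b, c * t, h, w),
                         scale_factor=spatial_scale, mode="bilinear",
                         align_corners=False)
    return clip.reshape(b, c, t, clip.shape[-2], clip.shape[-1])  # spatial LoFi
```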

Hui Tang, Xiatian Zhu, Ke Chen, Kui Jia, C. L. Philip Chen (2022) Towards Uncovering the Intrinsic Data Structures for Unsupervised Domain Adaptation Using Structurally Regularized Deep Clustering, In: IEEE Transactions on Pattern Analysis and Machine Intelligence 44(10), pp. 6517-6533, IEEE

Unsupervised domain adaptation (UDA) is to learn classification models that make predictions for unlabeled data on a target domain, given labeled data on a source domain whose distribution diverges from the target one. Mainstream UDA methods strive to learn domain-aligned features such that classifiers trained on the source features can be readily applied to the target ones. Although impressive results have been achieved, these methods have a potential risk of damaging the intrinsic data structures of target discrimination, raising an issue of generalization particularly for UDA tasks in an inductive setting. To address this issue, we are motivated by a UDA assumption of structural similarity across domains, and propose to directly uncover the intrinsic target discrimination via constrained clustering, where we constrain the clustering solutions using structural source regularization that hinges on the very same assumption. Technically, we propose a hybrid model of Structurally Regularized Deep Clustering, which integrates the regularized discriminative clustering of target data with a generative one, and we thus term our method as H-SRDC. Our hybrid model is based on a deep clustering framework that minimizes the Kullback-Leibler divergence between the distribution of network prediction and an auxiliary one, where we impose structural regularization by learning domain-shared classifier and cluster centroids. By enriching the structural similarity assumption, we are able to extend H-SRDC for a pixel-level UDA task of semantic segmentation. We conduct extensive experiments on seven UDA benchmarks of image classification and semantic segmentation. With no explicit feature alignment, our proposed H-SRDC outperforms all the existing methods under both the inductive and transductive settings. We make our implementation codes publicly available at https://github.com/huitangtang/H-SRDC.

Z. Cheng, Q. Dong, S. Gong, X. Zhu (2020) Inter-task association critic for cross-resolution person re-identification, In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE

Person images captured by unconstrained surveillance cameras often have low resolutions (LR). This causes the resolution mismatch problem when matched against the high-resolution (HR) gallery images, negatively affecting the performance of person re-identification (re-id). An effective approach is to leverage image super-resolution (SR) along with person re-id in a joint learning manner. However, this scheme is limited due to dramatically more difficult gradients backpropagation during training. In this paper, we introduce a novel model training regularisation method, called Inter-Task Association Critic (INTACT), to address this fundamental problem. Specifically, INTACT discovers the underlying association knowledge between image SR and person re-id, and leverages it as an extra learning constraint for enhancing the compatibility of SR model with person re-id in HR image space. This is realised by parameterising the association constraint which enables it to be automatically learned from the training data. Extensive experiments validate the superiority of INTACT over the state-of-the-art approaches on the cross-resolution re-id task using five standard person re-id datasets.

Junting Pan, Adrian Bulat, Fuwen Tan, Xiatian Zhu, Lukasz Dudziak, Hongsheng Li, Georgios Tzimiropoulos, Brais Martinez (2022) EdgeViTs: Competing Light-Weight CNNs on Mobile Devices with Vision Transformers, In: S. Avidan, G. Brostow, M. Cisse, G. M. Farinella, T. Hassner (eds.), Computer Vision - ECCV 2022, Part XI, vol. 13671, pp. 294-311, Springer Nature

Self-attention based models such as vision transformers (ViTs) have emerged as a very competitive architecture alternative to convolutional neural networks (CNNs) in computer vision. Despite increasingly stronger variants with ever higher recognition accuracies, due to the quadratic complexity of self-attention, existing ViTs are typically demanding in computation and model size. Although several successful design choices (e.g., the convolutions and hierarchical multi-stage structure) of prior CNNs have been reintroduced into recent ViTs, they are still not sufficient to meet the limited resource requirements of mobile devices. This motivates a very recent attempt to develop light ViTs based on the state-of-the-art MobileNet-v2, but still leaves a performance gap behind. In this work, pushing further along this under-studied direction we introduce EdgeViTs, a new family of light-weight ViTs that, for the first time, enable attention based vision models to compete with the best light-weight CNNs in the tradeoff between accuracy and on-device efficiency. This is realized by introducing a highly cost-effective local-global-local (LGL) information exchange bottleneck based on optimal integration of self-attention and convolutions. For device-dedicated evaluation, rather than relying on inaccurate proxies like the number of FLOPs or parameters, we adopt a practical approach of focusing directly on on-device latency and, for the first time, energy efficiency. Extensive experiments on image classification, object detection and semantic segmentation validate high efficiency of our EdgeViTs when compared to the state-of-the-art efficient CNNs and ViTs in terms of accuracy-efficiency tradeoff on mobile hardware. Specifically, we show that our models are Pareto-optimal when both accuracy-latency and accuracy-energy tradeoffs are considered, achieving strict dominance over other ViTs in almost all cases and competing with the most efficient CNNs. Code is available at https://github.com/saic-fi/edgevit.

Learning discriminative spatio-temporal representation is the key for solving video re-identification (re-id) challenges. Most existing methods focus on learning appearance features and/or selecting image frames, but ignore optimising the compatibility and interaction of appearance and motion attentive information. To address this limitation, we propose a novel model for learning Spatio-Temporal Associative Representation (STAR). We design local frame-level spatio-temporal association to learn discriminative attentive appearance and short-term motion features, and global video-level spatio-temporal association to form compact and discriminative holistic video representation. We further introduce a pyramid ranking regulariser for facilitating end-to-end model optimisation. Extensive experiments demonstrate the superiority of STAR against state-of-the-art methods on four video re-id benchmarks, including MARS, DukeMTMC-VideoReID, iLIDS-VID and PRID-2011.

Hengyuan Ma, Li Zhang, Xiatian Zhu, Jianfeng Feng (2022) Accelerating Score-Based Generative Models with Preconditioned Diffusion Sampling, In: S. Avidan, G. Brostow, M. Cisse, G. M. Farinella, T. Hassner (eds.), Computer Vision - ECCV 2022, Part XXIII, vol. 13683, pp. 1-16, Springer Nature

Score-based generative models (SGMs) have recently emerged as a promising class of generative models. However, a fundamental limitation is that their inference is very slow due to a need for many (e.g., 2000) iterations of sequential computations. An intuitive acceleration method is to reduce the sampling iterations which however causes severe performance degradation. We investigate this problem by viewing the diffusion sampling process as a Metropolis adjusted Langevin algorithm, which helps reveal the underlying cause to be ill-conditioned curvature. Under this insight, we propose a model-agnostic preconditioned diffusion sampling (PDS) method that leverages matrix preconditioning to alleviate the aforementioned problem. Crucially, PDS is proven theoretically to converge to the original target distribution of a SGM, no need for retraining. Extensive experiments on three image datasets with a variety of resolutions and diversity validate that PDS consistently accelerates off-the-shelf SGMs whilst maintaining the synthesis quality. In particular, PDS can accelerate by up to 29x on more challenging high resolution (1024x1024) image generation.
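
A single preconditioned Langevin-style update illustrating the matrix-preconditioning idea described above; score_fn stands for a trained score network and M for a fixed diagonal preconditioner, both assumed rather than taken from the paper.

```python
# Preconditioned Langevin-style sampling step (illustrative sketch).
import torch

def preconditioned_langevin_step(x, score_fn, M, step_size=1e-4):
    """x: current sample; M: per-coordinate preconditioner (same shape as x)."""
    noise = torch.randn_like(x)
    return (x
            + 0.5 * step_size * M * score_fn(x)        # preconditioned drift
            + (step_size ** 0.5) * M.sqrt() * noise)    # preconditioned noise
```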

Feng Zhang, Xiatian Zhu, Hanbin Dai, Mao Ye, Ce Zhu (2020) Distribution-Aware Coordinate Representation for Human Pose Estimation, In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7091-7100, IEEE

While being the de facto standard coordinate representation for human pose estimation, heatmap has not been investigated in-depth. This work fills this gap. For the first time, we find that the process of decoding the predicted heatmaps into the final joint coordinates in the original image space is surprisingly significant for the performance. We further probe the design limitations of the standard coordinate decoding method, and propose a more principled distribution-aware decoding method. Also, we improve the standard coordinate encoding process (i.e. transforming ground-truth coordinates to heatmaps) by generating unbiased/accurate heatmaps. Taking the two together, we formulate a novel Distribution-Aware coordinate Representation of Keypoints (DARK) method. Serving as a model-agnostic plug-in, DARK brings about significant performance boost to existing human pose estimation models. Extensive experiments show that DARK yields the best results on two common benchmarks, MPII and COCO. Besides, DARK achieves the 2nd place entry in the ICCV 2019 COCO Keypoints Challenge. The code is available online.
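
A 2D illustration of the distribution-aware decoding step, refining the heatmap argmax with a Taylor expansion of the log-heatmap around the peak; this is a reconstruction from the description above rather than the authors' exact implementation.

```python
# Distribution-aware heatmap decoding sketch: sub-pixel refinement of the
# argmax via first- and second-order finite differences of the log-heatmap.
import numpy as np

def dark_decode(heatmap: np.ndarray) -> np.ndarray:
    """heatmap: (H, W) predicted joint heatmap. Returns refined (x, y)."""
    h = np.log(np.clip(heatmap, 1e-10, None))
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    if not (0 < x < h.shape[1] - 1 and 0 < y < h.shape[0] - 1):
        return np.array([x, y], dtype=float)
    dx = 0.5 * (h[y, x + 1] - h[y, x - 1])
    dy = 0.5 * (h[y + 1, x] - h[y - 1, x])
    dxx = h[y, x + 1] - 2 * h[y, x] + h[y, x - 1]
    dyy = h[y + 1, x] - 2 * h[y, x] + h[y - 1, x]
    dxy = 0.25 * (h[y + 1, x + 1] - h[y + 1, x - 1]
                  - h[y - 1, x + 1] + h[y - 1, x - 1])
    hess = np.array([[dxx, dxy], [dxy, dyy]])
    if np.linalg.det(hess) == 0:
        return np.array([x, y], dtype=float)
    offset = -np.linalg.solve(hess, np.array([dx, dy]))
    return np.array([x, y], dtype=float) + offset
```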

Y Chen, X Zhu, W Li, S Gong (2020)Semi-Supervised Learning under Class Distribution Mismatch, Association for the Advancement of Artificial Intelligence (www.aaai.org)

Semi-supervised learning (SSL) aims to avoid the need for collecting prohibitively expensive labelled training data. Whilst demonstrating impressive performance boosts, existing SSL methods artificially assume that the small labelled set and the large unlabelled set are drawn from the same class distribution. In a more realistic scenario with class distribution mismatch between the two sets, they often suffer severe performance degradation due to error propagation introduced by irrelevant unlabelled samples. Our work addresses this under-studied and realistic SSL problem with a novel algorithm named Uncertainty-Aware Self-Distillation (UASD). Specifically, UASD produces soft targets that avoid catastrophic error propagation, and empowers effective learning from unconstrained unlabelled data with out-of-distribution (OOD) samples. This is based on joint self-distillation and OOD filtering in a unified formulation. Without bells and whistles, UASD significantly outperforms six state-of-the-art methods in more realistic SSL under class distribution mismatch on three popular image classification datasets: CIFAR10, CIFAR100, and TinyImageNet.
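
As a rough sketch of the two ingredients named in the abstract, soft targets plus OOD filtering (the threshold, function name and ensembling scheme below are assumptions for illustration, not the exact UASD algorithm):

import torch
import torch.nn.functional as F

def uasd_style_unlabelled_loss(logits, accumulated_probs, conf_threshold=0.5):
    """Illustrative self-distillation on unlabelled data: regress current predictions
    towards accumulated (ensembled) soft targets, discarding likely-OOD samples whose
    soft-target confidence is low. A sketch, not the exact UASD recipe."""
    log_probs = F.log_softmax(logits, dim=1)
    confidence, _ = accumulated_probs.max(dim=1)
    keep = confidence >= conf_threshold            # filter out-of-distribution samples
    if keep.sum() == 0:
        return logits.new_zeros(())
    # soft-target cross-entropy (distillation) on the retained samples
    return -(accumulated_probs[keep] * log_probs[keep]).sum(dim=1).mean()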

Hang Su, Shaogang Gong, Xiatian Zhu (2020)Scalable logo detection by self co-learning, In: Pattern recognition97107003 Elsevier Ltd

•The literature lacks large-scale logo detection benchmarks due to expensive data selection and label annotation.
•We contribute a large-scale dataset collected automatically for scalable logo detection.
•We present a scalable logo detection solution characterised by joint co-learning and self-learning in a unified framework, without the tedious need for manually labelling any training data.
Existing logo detection methods usually consider a small number of logo classes and limited images per class, and assume fine-grained object bounding box annotations. This limits their scalability to real-world dynamic applications. In this work, we tackle these challenges by exploring a web data learning principle without the need for exhaustive manual labelling. Specifically, we propose a novel incremental learning approach, called Scalable Logo Self-co-Learning (SL2), capable of automatically self-discovering informative training images from noisy web data for progressively improving model capability in a cross-model co-learning manner. Moreover, we introduce a very large (2,190,757 images of 194 logo classes) logo dataset “WebLogo-2M” by designing an automatic data collection and processing method. Extensive comparative evaluations demonstrate the superiority of SL2 over the state-of-the-art strongly and weakly supervised detection models and contemporary web data learning approaches.

Victor Escorcia, Ricardo Guerrero, Xiatian Zhu, Brais Martinez (2022)SOS! Self-supervised Learning over Sets of Handled Objects in Egocentric Action Recognition, In: S Avidan, G Brostow, M Cisse, G M Farinella, T Hassner (eds.), COMPUTER VISION, ECCV 2022, PT XIII13673pp. 604-620 Springer Nature

Learning an egocentric action recognition model from video data is challenging due to distractors in the background, e.g., irrelevant objects. Further integrating object information into an action model is hence beneficial. Existing methods often leverage a generic object detector to identify and represent the objects in the scene. However, several important issues remain. Object class annotations of good quality for the target domain (dataset) are still required for learning good object representation. Moreover, previous methods deeply couple existing action models with object representations, and thus need to retrain them jointly, leading to costly and inflexible integration. To overcome both limitations, we introduce Self-Supervised Learning Over Sets (SOS), an approach to pre-train a generic Objects In Contact (OIC) representation model from video object regions detected by an off-the-shelf hand-object contact detector. Instead of augmenting object regions individually as in conventional self-supervised learning, we view the action process as a means of natural data transformations with unique spatiotemporal continuity and exploit the inherent relationships among per-video object sets. Extensive experiments on two datasets, EPIC-KITCHENS-100 and EGTEA, show that our OIC significantly boosts the performance of multiple state-of-the-art video classification models.
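
The abstract describes contrasting per-video object sets rather than individually augmented crops; below is a rough, hypothetical sketch of an InfoNCE-style objective in which object embeddings from the same video act as positives (the function name, temperature and masking scheme are assumptions, not the SOS objective itself):

import torch
import torch.nn.functional as F

def set_infonce(obj_embeddings, video_ids, temperature=0.07):
    """Contrast object embeddings: objects from the same video are positives,
    objects from other videos are negatives. Illustrative sketch only.
    obj_embeddings: (N, D); video_ids: (N,) integer video index per object."""
    z = F.normalize(obj_embeddings, dim=1)
    sim = z @ z.t() / temperature                          # (N, N) cosine similarities
    same_video = video_ids.unsqueeze(0) == video_ids.unsqueeze(1)
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = (same_video & ~self_mask).float()
    sim = sim.masked_fill(self_mask, float('-inf'))        # never contrast with itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_counts  # mean over positives per anchor
    return loss[pos_mask.sum(dim=1) > 0].mean()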

Minxian Li, Xiatian Zhu, Shaogang Gong (2020)Unsupervised Tracklet Person Re-Identification, In: IEEE transactions on pattern analysis and machine intelligence42(7)8658110pp. 1770-1782 IEEE

Most existing person re-identification (re-id) methods rely on supervised model learning on per-camera-pair manually labelled pairwise training data. This leads to poor scalability in practical re-id deployment, due to the lack of exhaustive identity labelling of positive and negative image pairs for every camera pair. In this work, we present an unsupervised re-id deep learning approach. It is capable of incrementally discovering and exploiting the underlying re-id discriminative information from automatically generated person tracklet data, end-to-end. We formulate an Unsupervised Tracklet Association Learning (UTAL) framework. It jointly learns within-camera tracklet discrimination and cross-camera tracklet association in order to maximise the discovery of tracklet identity matching both within and across camera views. Extensive experiments demonstrate the superiority of the proposed model over the state-of-the-art unsupervised learning and domain adaptation person re-id methods on eight benchmarking datasets.

Junting Pan, Ziyi Lin, Xiatian Zhu, Jing Shao, Hongsheng Li, ST-Adapter: Parameter-Efficient Image-to-Video Transfer Learning, In: arXiv (Cornell University)

Capitalizing on large pre-trained models for various downstream tasks of interest has recently emerged as a promising approach. Due to ever-growing model sizes, the standard full fine-tuning based task adaptation strategy becomes prohibitively costly in terms of model training and storage. This has led to a new research direction in parameter-efficient transfer learning. However, existing attempts typically focus on downstream tasks from the same modality (e.g., image understanding) as the pre-trained model. This is limiting because for some specific modalities (e.g., video understanding), such a strong pre-trained model with sufficient knowledge is scarce or unavailable. In this work, we investigate such a novel cross-modality transfer learning setting, namely parameter-efficient image-to-video transfer learning. To solve this problem, we propose a new Spatio-Temporal Adapter (ST-Adapter) for parameter-efficient fine-tuning per video task. With a built-in spatio-temporal reasoning capability in a compact design, ST-Adapter enables a pre-trained image model without temporal knowledge to reason about dynamic video content at a small (~8%) per-task parameter cost, requiring approximately 20 times fewer updated parameters compared to previous work. Extensive experiments on video action recognition tasks show that our ST-Adapter can match or even outperform the strong full fine-tuning strategy and state-of-the-art video models, whilst enjoying the advantage of parameter efficiency. The code and model are available at https://github.com/linziyi96/st-adapter
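
A minimal sketch of the kind of adapter the abstract describes, a bottleneck with a depthwise 3D convolution for temporal reasoning inserted residually into a frozen image backbone (the bottleneck width, kernel size and placement below are assumptions; see the linked repository for the authors' actual design):

import torch
import torch.nn as nn

class STAdapterSketch(nn.Module):
    """Simplified spatio-temporal adapter sketch, not the released ST-Adapter module."""
    def __init__(self, dim, bottleneck=128, kernel=(3, 1, 1)):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        # depthwise 3D convolution adds temporal reasoning at low parameter cost
        self.dwconv = nn.Conv3d(bottleneck, bottleneck, kernel,
                                padding=tuple(k // 2 for k in kernel),
                                groups=bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x, t, h, w):
        # x: (B, T*H*W, C) patch tokens from a frozen pre-trained image backbone
        b, n, c = x.shape
        z = self.down(x)
        z = z.reshape(b, t, h, w, -1).permute(0, 4, 1, 2, 3)   # (B, C', T, H, W)
        z = self.act(self.dwconv(z))
        z = z.permute(0, 2, 3, 4, 1).reshape(b, n, -1)
        return x + self.up(z)   # residual connection keeps the pre-trained path intact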

Zhiyi Cheng, Xiatian Zhu, Shaogang Gong (2020)Face re-identification challenge: Are face recognition models good enough?, In: Pattern recognition107107422 Elsevier Ltd

•We construct the largest and only face re-identification benchmark with native surveillance facial imagery, the Surveillance Face Re-ID Challenge (SurvFace).
•We benchmark representative deep learning face recognition models on the SurvFace challenge, in a more realistic open-set scenario missing from previous studies.
•We investigate extensively the performance of existing models on SurvFace by jointly exploiting image super-resolution and face recognition models.
•We provide extensive discussion of future research directions for face re-identification.
Face re-identification (Re-ID) aims to track the same individuals over space and time, using subtle identity class information in automatically detected face images captured by unconstrained surveillance camera views. Despite significant advances in face recognition systems for constrained social media facial images, face Re-ID is more challenging due to poor-quality surveillance face imagery and remains under-studied. However, solving this problem enables a wide range of practical applications, ranging from law enforcement and information security to business, entertainment and e-commerce. To facilitate more studies on face Re-ID towards practical and robust solutions, a truly large-scale Surveillance Face Re-ID benchmark (SurvFace) is introduced, characterised by natively low resolution, motion blur, uncontrolled poses, varying occlusion, poor illumination, and background clutter. This new benchmark is the largest and, more importantly, the only true surveillance face Re-ID dataset to our best knowledge, where facial images are captured and detected under realistic surveillance scenarios. We show that the current state-of-the-art face recognition (FR) methods are surprisingly poor for face Re-ID. Besides, face Re-ID is generally more difficult in an open-set setting, as naturally required in surveillance scenarios, owing to the large number of non-target people (distractors) appearing in open-ended scenes. Moreover, the low-resolution problem inherent to surveillance facial imagery is investigated. Finally, we discuss open research problems that need to be solved in order to overcome the under-studied face Re-ID problem.

Guile Wu, Xiatian Zhu, Shaogang Gong (2022)Learning hybrid ranking representation for person re-identification, In: Pattern recognition [e-journal]121108239 Elsevier

Contemporary person re-identification (re-id) methods mostly compute a feature representation of each person image in the query set and the gallery set independently. This strategy fails to consider any ranking context information of each probe image in the query set represented implicitly by the whole gallery set. Some recent re-ranking re-id methods therefore propose a post-processing strategy to exploit such contextual information for improving re-id matching performance. However, post-processing is independent of model training, without jointly optimising the re-id feature and the ranking context information for better compatibility. In this work, for the first time, we show that the appearance feature and the ranking context information can be jointly optimised for learning more discriminative representations and achieving superior matching accuracy. Specifically, we propose to learn a hybrid ranking representation for person re-id with a two-stream architecture: (1) in the external stream, we use the ranking list of each probe image to learn plausible visual variations among the top ranks from the gallery as the external ranking information; (2) in the internal stream, we employ the part-based fine-grained feature as the internal ranking information, which mitigates the harm of incorrect matches in the ranking list. Assembling these two streams generates a hybrid ranking representation for person matching. Extensive experiments demonstrate the superiority of our method over the state-of-the-art methods on four large-scale re-id benchmarks (Market-1501, DukeMTMC-ReID, CUHK03 and MSMT17), under both supervised and unsupervised settings.

Wei Li, Shaogang Gong, Xiatian Zhu (2021)Hierarchical distillation learning for scalable person search, In: Pattern recognition [e-journal]114107862 Elsevier

Existing person search methods typically focus on improving person detection accuracy. This ignores the model inference efficiency, which however is fundamentally significant for real-world applications. In this work, we address this limitation by investigating the scalability problem of person search involving both model accuracy and inference efficiency simultaneously. Specifically, we formulate a Hierarchical Distillation Learning (HDL) approach. With HDL, we aim to comprehensively distil the knowledge of a strong teacher model with strong learning capability to a lightweight student model with weak learning capability. To facilitate the HDL process, we design a simple and powerful teacher model for joint learning of person detection and person re-identification matching in unconstrained scene images. Extensive experiments show the modelling advantages and cost-effectiveness superiority of HDL over the state-of-the-art person search methods on three large person search benchmarks: CUHK-SYSU, PRW, and DukeMTMC-PS.
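
The abstract does not detail the distillation objectives; as a generic illustration of the basic teacher-to-student knowledge distillation step such a hierarchy builds on (the temperature and scaling are conventional assumptions, not the HDL formulation):

import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    """Standard soft-label knowledge distillation loss (illustrative building block only)."""
    t = temperature
    p_teacher = F.softmax(teacher_logits / t, dim=1)
    log_p_student = F.log_softmax(student_logits / t, dim=1)
    # scale by t*t to keep gradient magnitudes comparable across temperatures
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)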

Xu Lan, Xiatian Zhu, Shaogang Gong (2022)Unsupervised cross-domain person re-identification by instance and distribution alignment, In: Pattern recognition [e-journal]124108514 Elsevier

Most existing person re-identification (re-id) methods assume supervised model training on a separate large set of training samples from the target domain. While performing well in the training domain, such trained models are seldom generalisable to a new independent unsupervised target domain without further labelled training data from the target domain. To solve this scalability limitation, we develop a novel Hierarchical Unsupervised Domain Adaptation (HUDA) method. It can transfer labelled information of an existing dataset (a source domain) to an unlabelled target domain for unsupervised person re-id. Specifically, HUDA is designed to model jointly global distribution alignment and local instance alignment in a two-level hierarchy for discovering transferable source knowledge in unsupervised domain adaptation. Crucially, this approach aims to overcome the under-constrained learning problem of existing unsupervised domain adaptation methods. Extensive evaluations show the superiority of HUDA for unsupervised cross-domain person re-id over a wide variety of state-of-the-art methods on four re-id benchmarks: Market-1501, DukeMTMC, MSMT17 and CUHK03.
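
The abstract leaves the alignment losses unspecified; as one common, hypothetical choice for a global distribution alignment component, here is a linear-kernel maximum mean discrepancy between source and target feature batches (a generic stand-in, not necessarily what HUDA uses):

import torch

def linear_mmd(source_feats, target_feats):
    """Squared MMD with a linear kernel: distance between batch feature means.
    A generic distribution-alignment term used purely for illustration."""
    return (source_feats.mean(dim=0) - target_feats.mean(dim=0)).pow(2).sum()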

Chen Wang, Feng Zhang, Xiatian Zhu, Shuzhi Sam Ge (2022)Low-resolution human pose estimation, In: Pattern recognition126108579

Human pose estimation has achieved significant progress on images with high imaging resolution. However, low-resolution imagery data bring nontrivial challenges which are still under-studied. To fill this gap, we start by investigating existing methods and reveal that the dominant heatmap-based methods suffer more severe performance degradation at low resolution, and that offset learning is an effective strategy. Building on this observation, in this work we propose a novel Confidence-Aware Learning (CAL) method which further addresses two fundamental limitations of existing offset learning methods: inconsistent training and testing, and decoupled heatmap and offset learning. Specifically, CAL selectively weighs the learning of heatmap and offset with respect to the ground truth and the most confident prediction, whilst capturing the statistical importance of the model output in a mini-batch learning manner. Extensive experiments conducted on the COCO benchmark show that our method significantly outperforms the state-of-the-art methods for low-resolution human pose estimation.
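
As a loose illustration of confidence-aware weighting of heatmap and offset objectives (the weighting rule, tensor shapes and loss choices below are assumptions for exposition, not the paper's exact CAL formulation):

import torch
import torch.nn.functional as F

def confidence_weighted_loss(pred_heatmap, gt_heatmap, pred_offset, gt_offset):
    """Weigh offset regression by per-pixel heatmap confidence so offsets are trusted
    where the keypoint evidence is strong. Illustrative only.
    pred_heatmap, gt_heatmap: (B, K, H, W); pred_offset, gt_offset: (B, K, 2, H, W)."""
    heatmap_loss = F.mse_loss(pred_heatmap, gt_heatmap)
    weight = gt_heatmap.detach()                                  # confidence proxy
    offset_loss = (weight.unsqueeze(2) * (pred_offset - gt_offset).abs()).mean()
    return heatmap_loss + offset_loss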

Wei-Shi Zheng, Jincheng Hong, Jiening Jiao, Ancong Wu, Xiatian Zhu, Shaogang Gong, Jiayin Qin, Jian-Huang Lai (2021)Joint Bilateral-Resolution Identity Modeling for Cross-Resolution Person Re-Identification, In: International Journal of Computer Vision130pp. 136-156 Springer

Person images captured by public surveillance cameras often have low resolutions (LRs), along with uncontrolled pose variations, background clutter and occlusion. These issues cause the resolution mismatch problem when matched with high-resolution (HR) gallery images (typically available during collection), harming the person re-identification (re-id) performance. While a number of methods have been introduced based on the joint learning of super-resolution and person re-id, they ignore specific discriminant identity information encoded in LR person images, leading to ineffective model performance. In this work, we propose a novel joint bilateral-resolution identity modeling method that concurrently performs HR-specific identity feature learning with super-resolution, LR-specific identity feature learning, and person re-id optimization. We also introduce an adaptive ensemble algorithm for handling different low resolutions. Extensive evaluations validate the advantages of our method over related state-of-the-art re-id and super-resolution methods on cross-resolution re-id benchmarks. An important discovery is that leveraging LR-specific identity information enables a simple cascade of super-resolution and person re-id learning to achieve state-of-the-art performance, without elaborate model design nor bells and whistles, which has not been investigated before.

Mantun Chen, Yongjun Wang, Xiatian Zhu (2022)Few-shot Website Fingerprinting attack with Meta-Bias Learning, In: Pattern Recognition130108739 Elsevier

Website fingerprinting (WF) attacks aim to identify which website a user is visiting from traffic data patterns. Whilst existing methods assume many training samples, we investigate a more realistic and scalable few-shot WF attack with only a few labeled training samples per website. To solve this problem, we introduce a novel Meta-Bias Learning (MBL) method for few-shot WF learning. Taking a meta-learning strategy, MBL simulates and optimizes for the target tasks during training. Moreover, a new model parameter factorization idea is introduced for facilitating meta-training with superior task adaptation. Extensive experiments show that our MBL significantly outperforms existing hand-crafted feature and deep learning based alternatives in both closed-world and open-world attack scenarios, in both the absence and presence of defenses.

Xiangping Zhu, Xiatian Zhu, Minxian Li, Pietro Morerio, Vittorio Murino, Shaogang Gong (2021)Intra-Camera Supervised Person Re-Identification, In: International journal of computer vision129pp. 1580-1595 Springer

Existing person re-identification (re-id) methods mostly exploit a large set of cross-camera identity labelled training data. This requires a tedious data collection and annotation process, leading to poor scalability in practical re-id applications. On the other hand, unsupervised re-id methods do not need identity label information, but they usually suffer from much inferior model performance. To overcome these fundamental limitations, we propose a novel person re-identification paradigm based on the idea of independent per-camera identity annotation. This eliminates the most time-consuming and tedious inter-camera identity labelling process, significantly reducing the amount of human annotation effort. Consequently, it gives rise to a more scalable and more feasible setting, which we call Intra-Camera Supervised (ICS) person re-id, for which we formulate a Multi-tAsk mulTi-labEl (MATE) deep learning method. Specifically, MATE is designed for self-discovering the cross-camera identity correspondence in a per-camera multi-task inference framework. Extensive experiments demonstrate the cost-effectiveness superiority of our method over the alternative approaches on three large person re-id datasets. For example, MATE yields 88.7% rank-1 score on Market-1501 in the proposed ICS person re-id setting, significantly outperforming unsupervised learning models and closely approaching conventional fully supervised learning competitors.

Hang Su, Shaogang Gong, Xiatian Zhu (2021)Multi-perspective cross-class domain adaptation for open logo detection, In: Computer vision and image understanding204103156 Elsevier

Existing logo detection methods mostly rely on supervised learning with a large quantity of labelled training data from a limited number of classes. This restricts their scalability to a large number of logo classes under a limited labelling budget. In this work, we consider a more scalable open logo detection problem where only a fraction of logo classes are fully labelled whilst the remaining classes are only annotated with a clean icon image (e.g. 1-shot icon supervised). To generalise and transfer knowledge from fully supervised logo classes to the 1-shot icon supervised classes, we propose a Multi-Perspective Cross-Class (MPCC) domain adaptation method. Following a data augmentation principle, MPCC conducts feature distribution alignment from two perspectives. Specifically, we concurrently align the feature distribution between synthetic logo images of 1-shot icon supervised classes and genuine logo images of fully supervised classes, and that between logo images and non-logo images. This mitigates the domain shift between model training and testing on 1-shot icon supervised logo classes, while simultaneously reducing model overfitting towards the fully labelled logo classes. Extensive comparative experiments show the advantage of MPCC over existing state-of-the-art competitors on the challenging QMUL-OpenLogo benchmark (Su et al., 2018).

Xiangtai Li, Li Zhang, Guangliang Cheng, Kuiyuan Yang, Yunhai Tong, Xiatian Zhu, Tao Xiang (2021)Global Aggregation Then Local Distribution for Scene Parsing, In: IEEE transactions on image processing : a publication of the IEEE Signal Processing Society30pp. 6829-6842 IEEE

Modelling long-range contextual relationships is critical for pixel-wise prediction tasks such as semantic segmentation. However, convolutional neural networks (CNNs) are inherently limited in modelling such dependencies due to the naive structure of their building modules (e.g., the local convolution kernel). While recent global aggregation methods are beneficial for modelling long-range structural information, they can oversmooth and bring noise to regions that contain fine details (e.g., boundaries and small objects), which matter greatly in semantic segmentation. To alleviate this problem, we propose to explore the local context so that the aggregated long-range relationships are distributed more accurately within local regions. In particular, we design a novel local distribution module which adaptively models the affinity map between global and local relationships for each pixel. Integrated with existing global aggregation modules, our approach can be modularised as an end-to-end trainable block and easily plugged into existing semantic segmentation networks, giving rise to the GALD networks. Despite its simplicity and versatility, our approach allows us to set a new state of the art on major semantic segmentation benchmarks including Cityscapes, ADE20K, Pascal Context, Camvid and COCO-stuff. Code and trained models are released at https://github.com/lxtGH/GALD-DGCNet to foster further research.
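
To make the "global aggregation then local distribution" idea concrete, here is a highly simplified sketch of a local gating module that redistributes globally aggregated context per pixel (the channel layout and gating form are assumptions, not the released GALD block):

import torch
import torch.nn as nn

class LocalDistributionSketch(nn.Module):
    """Predict a per-pixel affinity/gate from local features and use it to modulate
    globally aggregated context before fusing it back. Illustrative sketch only."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels),
            nn.Sigmoid(),
        )

    def forward(self, local_feat, global_feat):
        # local_feat, global_feat: (B, C, H, W)
        affinity = self.gate(local_feat)        # per-pixel, per-channel gate in [0, 1]
        return local_feat + affinity * global_feat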