EZ-HOI: VLM Adaptation via Guided Prompt Learning for Zero-Shot HOI Detection

NeurIPS 2024
1National University of Singapore, 2University of Mississippi, 3ASUS Intelligent Cloud Services (AICS)

EZ-HOI: an efficient zero-shot HOI detection method with strong generalization to unseen classes and low computational cost.

Abstract

Detecting human-object interactions (HOI) in zero-shot settings, where models must handle unseen classes, poses significant challenges. Existing methods that align visual encoders with large Vision-Language Models (VLMs) to tap into their extensive knowledge require large, computationally expensive architectures and are difficult to train. Adapting VLMs with prompt learning offers an alternative to direct alignment. However, fine-tuning on task-specific datasets often leads to overfitting to seen classes and suboptimal performance on unseen classes, due to the absence of unseen-class labels.

To address these challenges, we introduce a novel prompt learning-based framework for Efficient Zero-Shot HOI detection (EZ-HOI). First, we introduce Large Language Model (LLM) and VLM guidance for learnable prompts, integrating detailed HOI descriptions and visual semantics to adapt VLMs to HOI tasks. However, because training datasets contain only seen-class labels, fine-tuning VLMs on them tends to optimize the learnable prompts for seen classes rather than unseen ones. We therefore design prompt learning for unseen classes using information from related seen classes, with LLMs used to highlight the differences between unseen classes and their related seen classes. Quantitative evaluations on benchmark datasets demonstrate that EZ-HOI achieves state-of-the-art performance across various zero-shot settings with only 10.35% to 33.95% of the trainable parameters of existing methods.

Pipeline

Overview of our proposed EZ-HOI framework.

Framework overview

Learnable text prompts capture detailed HOI class information from the LLM. To improve their generalization, we introduce the Unseen Text Prompt Learning (UTPL) module. Meanwhile, learnable visual prompts are guided by a frozen VLM visual encoder. The learnable text and visual prompts are then fed into the text and visual encoders, respectively. Finally, HOI predictions are obtained by computing the cosine similarity between the text encoder outputs and the HOI image features, as sketched below.
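
Below is a minimal sketch of this prediction step, assuming a CLIP-style backbone. The names used here (text_encoder, visual_encoder, text_prompts, visual_prompts, class_token_embeds, image_tokens) are hypothetical placeholders for illustration only, not the released implementation.

    import torch
    import torch.nn.functional as F

    def hoi_scores(text_encoder, visual_encoder,
                   text_prompts,          # (P, D) learnable, LLM-guided text prompt tokens
                   class_token_embeds,    # (C, T, D) embedded HOI class descriptions
                   visual_prompts,        # (Q, D) learnable visual prompt tokens
                   image_tokens,          # (N, M, D) tokens for N human-object pairs
                   temperature=0.07):
        # Prepend the learnable text prompts to every HOI class description
        # before passing them through the text encoder.
        text_in = torch.cat(
            [text_prompts.expand(class_token_embeds.size(0), -1, -1), class_token_embeds], dim=1)
        text_feats = text_encoder(text_in)                 # (C, D)

        # Prepend the learnable visual prompts, guided by the frozen VLM visual
        # encoder, to the image tokens of each human-object pair.
        vis_in = torch.cat(
            [visual_prompts.expand(image_tokens.size(0), -1, -1), image_tokens], dim=1)
        hoi_image_feats = visual_encoder(vis_in)           # (N, D)

        # HOI prediction: cosine similarity between text features and HOI image
        # features, scaled by a temperature as in CLIP-style classification.
        text_feats = F.normalize(text_feats, dim=-1)
        hoi_image_feats = F.normalize(hoi_image_feats, dim=-1)
        return hoi_image_feats @ text_feats.t() / temperature   # (N, C) class scores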

Qualitative and Quantitative Results

Qualitative Results

Here are some qualitative comparisons between our method and MaPLe.

Qualitative Results

Quantitative Results

Here are the key quantitative results for zero-shot HOI detection.

Quantitative Results

BibTeX

    @inproceedings{lei2024efficient,
      title     = {EZ-HOI: VLM Adaptation via Guided Prompt Learning for Zero-Shot HOI Detection},
      author    = {Lei, Qinqian and Wang, Bo and Tan, Robby T.},
      booktitle = {The Thirty-eighth Annual Conference on Neural Information Processing Systems},
      year      = {2024}
    }