SU-SAM: A Simple Unified Framework for Adapting SAM in Underperformed Scene

Yiran Song, Qianyu Zhou, Xuequan Lu, Zhiwen Shao, Lizhuang Ma*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Segment Anything Model (SAM) excels in common vision tasks but struggles with specialized data. Recent methods fine-tune SAM using parameter-efficient techniques and task-specific designs, but they rely heavily on handcrafting and pre/post-processing, limiting their generalizability. In this paper, we propose SU-SAM, a simple and unified framework that adapts SAM efficiently without task-specific designs, improving its adaptability to underperforming scenes. SU-SAM abstracts parameter-efficient modules into basic design elements, offering four variants: series, parallel, mixed, and LoRA structures. Experiments across nine datasets and six tasks, including medical and defect segmentation, demonstrate SU-SAM's superior performance. We analyze the effectiveness of different parameter-efficient designs and present a generalized model and benchmark, highlighting SU-SAM's adaptability across diverse datasets.
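The abstract names four parameter-efficient variants (series, parallel, mixed, LoRA) without detailing them. As a rough illustration of the kind of modules involved, the following NumPy sketch shows a generic bottleneck adapter attached in series or in parallel to a frozen layer, plus a LoRA-style low-rank update. All names, shapes, and initializations are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4  # hidden width and bottleneck/low-rank dimension (illustrative)

W = rng.standard_normal((d, d)) / np.sqrt(d)  # frozen pretrained weight

def base(x):
    # Stand-in for a frozen SAM sub-layer
    return x @ W.T

# Bottleneck adapter: trainable down-projection, nonlinearity, up-projection
A_down = rng.standard_normal((r, d)) * 0.01
A_up = rng.standard_normal((d, r)) * 0.01

def adapter(x):
    return np.maximum(x @ A_down.T, 0.0) @ A_up.T  # ReLU bottleneck

def series_variant(x):
    # Adapter applied after the frozen layer, with a residual connection
    h = base(x)
    return h + adapter(h)

def parallel_variant(x):
    # Adapter runs alongside the frozen layer on the same input
    return base(x) + adapter(x)

# LoRA-style variant: frozen W plus a trainable low-rank update B @ A
B = np.zeros((d, r))  # zero-init, so training starts exactly at W
A = rng.standard_normal((r, d)) * 0.01

def lora_variant(x):
    return x @ (W + B @ A).T

x = rng.standard_normal((2, d))
print(series_variant(x).shape, parallel_variant(x).shape, lora_variant(x).shape)
```

With the zero-initialized `B`, the LoRA branch contributes nothing at the start of training, so `lora_variant` initially matches the frozen layer exactly; the series and parallel forms differ only in whether the adapter sees the layer's output or its input.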

Original language: English
Title of host publication: 2025 IEEE International Conference on Multimedia and Expo
Subtitle of host publication: Journey to the Center of Machine Imagination, ICME 2025 - Conference Proceedings
Publisher: IEEE Computer Society
ISBN (Electronic): 9798331594954
State: Published - 2025
Externally published: Yes
Event: 2025 IEEE International Conference on Multimedia and Expo, ICME 2025 - Nantes, France
Duration: 30 Jun 2025 - 4 Jul 2025

Publication series

Name: Proceedings - IEEE International Conference on Multimedia and Expo
ISSN (Print): 1945-7871
ISSN (Electronic): 1945-788X

Conference

Conference: 2025 IEEE International Conference on Multimedia and Expo, ICME 2025
Country/Territory: France
City: Nantes
Period: 30/06/25 - 4/07/25

Keywords

  • Adapter
  • Foundation Models
  • Generalizability
  • Segment Anything Model
  • Underperformed Scenes
