TY - JOUR
T1 - Developing Evolving Adaptability in Biological Intelligence
T2 - A Novel Biologically-Inspired Continual Learning Model for Video Saliency Prediction
AU - Zhu, Dandan
AU - Zhang, Kaiwei
AU - Zhu, Kun
AU - Zhang, Nana
AU - Min, Xiongkuo
AU - Zhai, Guangtao
AU - Yang, Xiaokang
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
AB - In the era of deep learning, the video saliency prediction task still poses a major challenge due to catastrophic forgetting during feature learning. Most prior works employ generative replay strategies to generate pseudo-samples from previous tasks, enabling the model to recall earlier data distributions. However, scaling generative replay to class-incremental and task-incremental settings is difficult, since low-quality generated data can severely degrade performance. Additionally, existing advances mainly focus on preserving memory stability to alleviate catastrophic forgetting, but they struggle to flexibly adapt to incremental changes in dynamic scenes. To achieve a better balance between memory stability and learning plasticity, we propose a novel biologically-inspired continual learning (BICL) model tailored to effectively predict human attention in dynamic scenes while mitigating catastrophic forgetting. In particular, inspired by the function of the hippocampus in the human nervous system, we design a visual saliency memory bank module that explicitly stores and retrieves representative features from previous tasks. Furthermore, drawing inspiration from the Drosophila γMB (mushroom body) system, we propose an active forgetting strategy equipped with multiple parallel adaptive learner modules, which appropriately attenuates old memories in the parameter distribution to enhance learning plasticity for new tasks while ensuring compatibility among the learners. Notably, without compromising performance on old tasks, our proposed model achieves a better trade-off between memory stability and learning plasticity. Extensive experiments on several benchmark datasets show that our model not only improves performance in task-incremental settings, but also potentially offers deeper insight into neurological adaptive mechanisms.
KW - Continual learning
KW - active forgetting strategy
KW - memory bank
KW - multiple parallel learners
KW - saliency prediction
UR - https://www.scopus.com/pages/publications/105024821052
U2 - 10.1109/TPAMI.2025.3643517
DO - 10.1109/TPAMI.2025.3643517
M3 - Article
AN - SCOPUS:105024821052
SN - 0162-8828
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
ER -