Biologically motivated local contextual modulation improves low-level visual feature representations

Abstract

This paper describes a biologically motivated local context operator that improves low-level visual feature representations. The computation borrows from the primate visual system the idea that different visual features are computed at different speeds, and can therefore positively affect one another via early recurrent modulation. The modulation improves visual representations by suppressing responses at background pixels, cluttered scene parts, and image noise. The proposed local contextual computation is fundamentally different from existing approaches that rely on whole-scene perspectives. Context-modulated visual feature representations are tested in a variety of existing saliency algorithms. Using real images and videos, we quantitatively compare the output saliency representations of modulated and non-modulated architectures against human experimental data. The results clearly demonstrate that local contextual modulation has a positive and consistent impact on saliency computation.
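To make the mechanism concrete, the sketch below illustrates one plausible form of such local contextual modulation: a coarse surround signal, computed quickly over a local neighborhood, divisively suppresses a feature map wherever local context activity is high, attenuating background and clutter while preserving locally distinctive responses. The function name, the divisive form, and all parameters are illustrative assumptions, not the paper's actual operator.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def contextual_modulation(feature_map, surround_sigma=5.0, strength=1.0):
    """Suppress a feature map by its local surround activity.

    Hypothetical divisive form: responses embedded in strong local
    context (background, clutter, noise) are attenuated, while
    responses that stand out from their surround are preserved.
    """
    # Local context: smoothed surround activity around each pixel.
    surround = gaussian_filter(feature_map, sigma=surround_sigma)
    # Divisive suppression: high surround activity damps the response.
    return feature_map / (1.0 + strength * surround)

# Usage: modulate a noisy feature map containing one salient patch.
rng = np.random.default_rng(0)
fmap = rng.random((64, 64)).astype(np.float32)  # background noise
fmap[24:40, 24:40] += 2.0                       # salient region
modulated = contextual_modulation(fmap)
```

A divisive (rather than subtractive) form keeps responses non-negative and mirrors normalization models of early visual cortex; the paper itself may use a different modulation function.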

Publication
Lecture Notes in Computer Science