Salient Object Detection using a Context-Aware Refinement Network

Md Amirul Islam       Mahmoud Kalash       Mrigank Rochan       Neil Bruce       Yang Wang

Abstract

Recently there has been remarkable progress in pushing the state of the art in salient object detection, with most improvements driven by deeper, end-to-end feed-forward networks. However, precisely detecting salient regions often requires representing fine details. Combining high-level and low-level features through skip connections has been proposed as a remedy, but this strategy sometimes fails to select the right contextual features. To overcome this limitation, we propose an end-to-end encoder-decoder network that employs recurrent refinement to generate a saliency map in a coarse-to-fine fashion, incorporating finer details into the detection framework. The proposed approach uses refinement units within each stage of the decoder that refine the saliency map produced by earlier layers by learning context-aware features. Experimental results on several challenging saliency detection benchmarks validate the effectiveness of the proposed architecture, which provides a significant improvement over current state-of-the-art methods.

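To make the refinement idea concrete, below is a minimal PyTorch sketch of one decoder refinement stage. It is an illustrative assumption of how such a unit might look, not the paper's exact architecture: the class name RefinementUnit, the channel sizes, and the fusion scheme (upsample the coarse map, concatenate with encoder skip features, convolve, and predict a residual correction) are all chosen here for illustration.

# Hypothetical sketch of one coarse-to-fine refinement stage; names and
# channel sizes are assumptions for illustration, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RefinementUnit(nn.Module):
    """Refines a coarse saliency map using skip features from the encoder."""
    def __init__(self, skip_channels, hidden_channels=64):
        super().__init__()
        # Fuse the upsampled coarse map (1 channel) with encoder features
        # to learn context-aware features at this stage's resolution.
        self.fuse = nn.Sequential(
            nn.Conv2d(skip_channels + 1, hidden_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden_channels, hidden_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Predict a correction to the coarse map rather than a fresh map.
        self.predict = nn.Conv2d(hidden_channels, 1, 3, padding=1)

    def forward(self, coarse_map, skip_feat):
        # Upsample the coarse prediction to the skip feature's resolution.
        coarse_up = F.interpolate(coarse_map, size=skip_feat.shape[2:],
                                  mode='bilinear', align_corners=False)
        x = self.fuse(torch.cat([coarse_up, skip_feat], dim=1))
        # Coarse-to-fine: refine the earlier prediction residually.
        return coarse_up + self.predict(x)

# Usage: chain one unit per decoder stage, from coarsest to finest scale.
if __name__ == "__main__":
    coarse = torch.randn(1, 1, 14, 14)    # coarsest saliency logits
    skip = torch.randn(1, 256, 28, 28)    # encoder features at next scale
    refined = RefinementUnit(skip_channels=256)(coarse, skip)
    print(refined.shape)                  # torch.Size([1, 1, 28, 28])

Chaining such units across decoder stages yields progressively finer saliency maps, matching the coarse-to-fine refinement described in the abstract.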

Publication

Salient Object Detection using a Context-Aware Refinement Network
Md Amirul Islam, Mahmoud Kalash, Mrigank Rochan, Neil D. B. Bruce, Yang Wang
British Machine Vision Conference (BMVC), 2017  
[pdf]   [poster]  



Bibtex

@inproceedings{Islam2017SalientOD,
  author    = {Md Amirul Islam and Mahmoud Kalash and Mrigank Rochan and Neil D. B. Bruce and Yang Wang},
  title     = {Salient Object Detection using a Context-Aware Refinement Network},
  booktitle = {The 28th British Machine Vision Conference (BMVC)},
  year      = {2017}
}


Contact

For any questions regarding this work, please contact the corresponding author at amirul@cs.umanitoba.ca.