Title

Multiscale discriminant saliency for visual attention

Document Type

Conference Proceeding

Publisher

Springer

Faculty

Faculty of Health, Engineering and Science

School

School of Engineering/Centre for Communications and Electronics Research

RAS ID

17528

Comments

This article was originally published as: Le Ngo, A., Ang, L., Qiu, G., & Seng, K. (2013). Multiscale discriminant saliency for visual attention. Computational science and its applications - 2013: 13th international conference, Ho Chi Minh City, Vietnam, June 24-27, 2013, proceedings (pp. 464-484). Heidelberg, Germany: Springer.

Abstract

Bottom-up saliency, an early stage of human visual attention, can be considered a binary classification problem between center and surround classes. The discriminant power of features for this classification is measured as the mutual information between the features and the two class distributions. Because the estimated discrepancy between the two feature classes depends strongly on the scale levels considered, multiscale structure and discriminant power are integrated by employing discrete wavelet features and a hidden Markov tree (HMT). From the wavelet coefficients and HMT parameters, quad-tree-like label structures are constructed and used in the maximum a posteriori (MAP) estimation of the hidden class variables at the corresponding dyadic sub-squares. The saliency value for each dyadic square at each scale level is then computed from the discriminant power principle and the MAP estimate. Finally, the final saliency map is integrated across multiple scales by an information-maximization rule. Both standard quantitative tools (NSS, LCC, AUC) and qualitative assessments are used to evaluate the proposed multiscale discriminant saliency method (MDIS) against the well-known information-based saliency method AIM on the Bruce database with eye-tracking data. Simulation results are presented and analyzed to verify the validity of MDIS, as well as to point out its disadvantages for further research directions.
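The core idea of measuring a feature's discriminant power as mutual information between its responses and the center/surround class label can be sketched as follows. This is an illustrative histogram-based estimate only, not the authors' HMT-based implementation; function and variable names are hypothetical.

```python
import numpy as np

def discriminant_power(center, surround, bins=16):
    """Estimate I(F; C): mutual information (in bits) between quantized
    feature responses F and the binary center/surround label C.
    Higher values mean the feature better separates center from surround."""
    # Shared bin edges so both classes are quantized identically.
    edges = np.histogram_bin_edges(np.concatenate([center, surround]), bins=bins)
    h_center, _ = np.histogram(center, bins=edges)
    h_surround, _ = np.histogram(surround, bins=edges)
    joint = np.stack([h_center, h_surround]).astype(float)
    joint /= joint.sum()                       # joint P(C, F)
    p_f = joint.sum(axis=0, keepdims=True)     # marginal P(F)
    p_c = joint.sum(axis=1, keepdims=True)     # marginal P(C)
    nz = joint > 0                             # skip empty cells (0 log 0 = 0)
    return float((joint[nz] * np.log2(joint[nz] / (p_c @ p_f)[nz])).sum())

rng = np.random.default_rng(0)
# A feature distributed differently in center vs. surround is discriminant ...
mi_distinct = discriminant_power(rng.normal(3.0, 1.0, 5000),
                                 rng.normal(0.0, 1.0, 5000))
# ... while an identically distributed feature carries almost no information.
mi_same = discriminant_power(rng.normal(0.0, 1.0, 5000),
                             rng.normal(0.0, 1.0, 5000))
```

In the paper this quantity is evaluated per dyadic square and per scale, with class posteriors supplied by the HMT MAP step rather than raw histograms.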

DOI

10.1007/978-3-642-39637-3_37

Access Rights

free_to_read
