Denis Filimonov

MSc Semester Project

Title: Automatic Extraction of Interesting Image Content

Candidate: Denis Filimonov

Supervisor: Prof. Dr. Touradj Ebrahimi

Expected: June 11, 2010

Place: EPFL

Description

Objects in a scene differ in their importance for scene interpretation. The ability of humans to fixate on the specific parts of an image that carry most of the information needed for scene interpretation is known as visual attention.
The task is to examine how the most salient image regions can be automatically extracted using content analysis techniques. The potential of using a computational bottom-up visual attention model for image tagging will be explored. Such a model predicts the locations of interesting objects in a given image. Although the bottom-up approach is considered a very simple approximation of attention, it has proven quite successful in computer vision, where it is modeled by a saliency map highlighting regions that "catch the eye" in terms of low-level image properties. Identifying the most salient regions in images is helpful in many applications; e.g., one can assume that people spontaneously tag the most important objects in a picture.
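To give a flavour of what a saliency map computes, the center-surround idea underlying bottom-up models can be sketched as follows. This is a deliberately minimal illustration, not the full Itti et al. model (which uses Gaussian pyramids and multiple feature channels); the box blur and the scale parameters are simplifying assumptions.

```python
import numpy as np

def blur(img, radius):
    """Box blur: each pixel becomes the mean of its
    (2*radius+1)^2 neighbourhood. A crude stand-in for the
    Gaussian pyramid levels used in real attention models."""
    h, w = img.shape
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += padded[radius + dy:radius + dy + h,
                          radius + dx:radius + dx + w]
    return out / (2 * radius + 1) ** 2

def center_surround(intensity, c=1, s=6):
    # "center" = fine scale, "surround" = coarse scale;
    # conspicuous regions are where the two scales differ strongly
    return np.abs(blur(intensity, c) - blur(intensity, s))

# toy intensity image: dark background with one bright blob
img = np.zeros((64, 64))
img[30:34, 30:34] = 1.0
sal = center_surround(img)
print(sal[31, 31] > sal[5, 5])  # the blob pops out against the background
```

A full model would repeat this over several center/surround scale pairs and over colour and orientation channels, then normalize and sum the resulting maps.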
The goal of this project is thus to study different approaches to visual attention in still images and to assess and compare their performance. The following tasks have to be performed:

  • Study different approaches for visual attention analysis,
  • Experiment with the approach by Itti et al. “A Model of Saliency-Based Visual Attention for Rapid Scene Analysis”,
  • Experiment with the approach by Harel et al. “Graph-Based Visual Saliency”,
  • Experiment with the approach by Achanta et al. “Frequency-tuned Salient Region Detection”,
  • Implement and validate a visual attention system in software running on a PC, which takes an image as input, applies the above-mentioned approaches, and suggests the regions in the image that are most salient,
  • Assess and compare the performance of the above-mentioned approaches.
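As a starting point for the implementation task, the simplest of the three approaches, the frequency-tuned method of Achanta et al., can be sketched as the distance of each (slightly blurred) pixel from the mean image colour. The sketch below is a simplified assumption-laden version: it operates on a generic 3-channel colour array rather than the CIE Lab space used in the paper, and uses a small separable Gaussian blur.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(channel, sigma=1.0):
    # separable 2-D Gaussian blur: convolve rows, then columns
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, channel)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def frequency_tuned_saliency(img):
    """Achanta-style saliency: Euclidean distance of each blurred
    pixel vector from the mean image colour vector.
    img: H x W x 3 float array (the paper works in CIE Lab;
    this sketch treats the channels generically)."""
    mean_vec = img.reshape(-1, 3).mean(axis=0)
    blurred = np.stack([blur(img[..., c]) for c in range(3)], axis=-1)
    return np.sqrt(((blurred - mean_vec) ** 2).sum(axis=-1))

# toy image: grey background with one coloured square (the "salient" object)
img = np.full((64, 64, 3), 0.5)
img[24:40, 24:40] = [1.0, 0.2, 0.2]
sal = frequency_tuned_saliency(img)
print(sal[32, 32] > sal[5, 5])  # the square scores higher than the background
```

Thresholding such a map (e.g. at its mean value, as done in the paper's segmentation experiments) yields candidate salient regions that the comparison task can evaluate against the other two models.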