Tuesday, October 21, 2008

Multi-level Semantic Segmentation and Self-grouping in Scene Understanding (ongoing)

The basic idea is: for each pixel in the image, find its neighbors in feature space (color, MRF, SIFT, ...). The feature space is arranged in a way that reflects structural 'importance'. For ocean and grass, for example, in the case of color and a Markov random field model, water (or grass) pixels will be put together as neighbors because they share a similar color (or pattern). The groups of pixels formed in the different feature spaces will jointly unveil the underlying properties of the scene.
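As a minimal sketch of the grouping step, assuming only color features and using k-means as a simple stand-in for the feature-space neighbor search (the function name and the input file below are hypothetical; richer features such as texture/MRF statistics or SIFT would just be concatenated into the same per-pixel feature vector):

import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def group_pixels_by_color(image_path, n_groups=8):
    """Cluster pixels in color space; returns a label map of shape (H, W)."""
    img = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float64)
    h, w, _ = img.shape
    features = img.reshape(-1, 3) / 255.0              # one 3-D color feature per pixel
    labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(features)
    return labels.reshape(h, w)                        # pixels sharing a label form one group

if __name__ == "__main__":
    label_map = group_pixels_by_color("scene.jpg")     # hypothetical input image
    print("group sizes:", np.bincount(label_map.ravel()))

Each label here plays the role of one pixel group; combining the label maps from several feature spaces is what would let the groups jointly reveal structure.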

Nov 19 2008:

Sadly, I found out today in Antonio's course 6.870 that this idea has already been explored by D. Hoiem in "Geometric Context from a Single Image", ICCV 2005 (http://www.cs.uiuc.edu/homes/dhoiem/projects/context/index.html). On the other hand, their work shows that my idea works! It is exciting to see this method generate good segmentations, which is what I expected. But a more principled way to organize those 'over-segmented patches' is still needed, instead of the rather 'ad-hoc' approach in Hoiem's work.
