Wednesday, November 22, 2006

Edge Detection

The commonly held belief that edge detection is the first step in vision processing has fueled a long search for a good edge detection algorithm. Edge detection is the process of identifying and locating sharp discontinuities in an image: abrupt changes in image intensity caused by changes in scene structure. These discontinuities originate from different scene features and carry much of the information that an image of the external world contains. Smoothing and enhancement steps attempt to make these discontinuities apparent to the detector so that the desired edges can be extracted.
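To make this concrete, the sketch below shows one common gradient-based detector, the Sobel operator, with a Gaussian smoothing step ahead of the gradient computation. It is a minimal illustration rather than a reference implementation; the sigma and threshold values are assumptions chosen for demonstration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, convolve

def sobel_edges(image, sigma=1.0, threshold=0.2):
    """Binary edge map from a grayscale image (floats in [0, 1])."""
    # Smoothing suppresses noise so that only structural
    # discontinuities produce strong gradient responses.
    smoothed = gaussian_filter(image, sigma=sigma)

    # Sobel kernels approximate the intensity gradient in x and y.
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
    gx = convolve(smoothed, kx)    # horizontal intensity changes
    gy = convolve(smoothed, kx.T)  # vertical intensity changes

    # The gradient magnitude peaks at sharp discontinuities;
    # thresholding it keeps the strong responses as edge pixels.
    magnitude = np.hypot(gx, gy)
    if magnitude.max() > 0:
        magnitude /= magnitude.max()
    return magnitude > threshold
```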
Where ground truth is not available, edge detectors are evaluated by their ability to produce edges that allow quick and accurate recognition, as judged by humans, of a three-dimensional object from a grayscale image of the object in its natural setting. Such evaluation shows a statistically significant difference in the relative performance of edge detection algorithms. That relative performance depends on how the input parameters are selected: the detectors performed significantly better when each one's parameters were optimized individually for each image than when a single set of parameters was optimized for the entire set of images.
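The two parameter-selection strategies can be written as a small procedure. In the sketch below, `detect` and `score` are hypothetical stand-ins: `detect(image, t)` runs an edge detector with parameter `t`, and `score(edges, image)` plays the role of the human recognition judgment used in the evaluation.

```python
def select_parameters(images, thresholds, detect, score):
    """Contrast per-image with single global parameter selection."""
    # Per-image: choose the parameter that scores best on each image.
    per_image = [max(thresholds, key=lambda t: score(detect(img, t), img))
                 for img in images]

    # Global: choose the one parameter with the best total score
    # over the entire set of images.
    global_t = max(thresholds,
                   key=lambda t: sum(score(detect(img, t), img)
                                     for img in images))
    return per_image, global_t
```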
Edge detection is important for feature extraction and for subsequent vision tasks such as texture analysis, motion detection and estimation, stereopsis, and recognition, in both machine and biological vision systems.
