An edge can be defined as an abrupt change in brightness as we move from one pixel to its neighbour in an image.
In digital image processing, each image is quantized into pixels. In a gray-scale image, each pixel indicates the
level of brightness of the image at a particular spot: 0 represents black and, with 8-bit pixels, 255 represents white.
An edge is an abrupt change in the brightness (gray-scale level) of neighbouring pixels. Detecting edges is an important task
in boundary detection, motion detection/estimation, texture analysis, segmentation, and object identification.

Edge information for a particular pixel is obtained by examining the brightness of the pixels in the
neighborhood of that pixel. If all of the pixels in the neighborhood have almost the same brightness,
then there is probably no edge at that point. However, if some of the neighbors are much brighter than the others,
then there probably is an edge at that point.
Measuring the relative brightness of pixels in a neighborhood is mathematically analogous to
calculating the derivative of brightness. The image illustrates an example of hard and soft edges.
Because brightness values are discrete rather than continuous, the derivative function must be approximated.
Different edge detection methods (Prewitt, Laplacian, Roberts, Sobel, and Canny) use different discrete
approximations of the derivative function.
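As a minimal sketch of one such discrete approximation, the code below applies the 3 x 3 Sobel kernels to a tiny synthetic image with a sharp vertical edge; the image, the hand-rolled convolution helper, and the function names are illustrative assumptions, not part of the original text.

```python
import numpy as np

# Sobel kernels: discrete approximations of the horizontal (X) and
# vertical (Y) brightness derivative.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(image, kernel):
    """Valid-mode 2-D convolution (no padding), in plain NumPy."""
    kh, kw = kernel.shape
    h, w = image.shape
    flipped = kernel[::-1, ::-1]  # convolution flips the kernel
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

def sobel_magnitude(image):
    gx = convolve2d(image, SOBEL_X)  # response to vertical edges
    gy = convolve2d(image, SOBEL_Y)  # response to horizontal edges
    return np.hypot(gx, gy)         # gradient magnitude

# A hard vertical edge: dark left half, bright right half.
img = np.zeros((5, 5))
img[:, 3:] = 255.0
mag = sobel_magnitude(img)
print(mag)  # large values only along the dark/bright boundary
```

The gradient magnitude is zero in the flat regions and large only where the brightness changes abruptly, which is exactly the behaviour the derivative analogy predicts.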

For example, consider a random discrete 9 x 9 pixel image.

The examples show a horizontal edge, a vertical edge, and a diagonal edge.

The X difference is calculated as |I(i+1, j) - I(i, j)|
The Y difference is calculated as |I(i, j+1) - I(i, j)|
where I denotes the intensity value, in the range [0, 255].
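These two difference formulas can be sketched directly in NumPy. The random 9 x 9 image and the choice of i as the column index and j as the row index are assumptions made for illustration.

```python
import numpy as np

# Random 9 x 9 image with intensity values in [0, 255] (an assumed input).
rng = np.random.default_rng(0)
I = rng.integers(0, 256, size=(9, 9)).astype(int)

# |I(i+1, j) - I(i, j)|: difference between horizontally adjacent pixels.
x_diff = np.abs(np.diff(I, axis=1))
# |I(i, j+1) - I(i, j)|: difference between vertically adjacent pixels.
y_diff = np.abs(np.diff(I, axis=0))

print(x_diff.shape)  # (9, 8): one fewer column than the image
print(y_diff.shape)  # (8, 9): one fewer row than the image
```

Large entries in `x_diff` or `y_diff` mark pixel pairs where the brightness changes abruptly, i.e. candidate edge locations.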
