Computer Vision Algorithms

Get to know the algorithms that enable computers to perceive

Image by Daniil Kuželev


The term computer vision refers to the ability of a computer to see and perceive its surroundings. Computer vision covers a lot of applications — object detection and recognition, self-driving cars, facial recognition, ball tracking, photo tagging, and many more. Before diving into the technical jargon, let’s first discuss the entire computer vision pipeline.

computer vision pipeline, image by Author

The entire pipeline is divided into 5 basic steps, each with a specific function. Firstly, the algorithm needs input to process, which can be an image or a stream of images (image frames). The next step is pre-processing. In this step, functions are applied to the incoming image(s) so that the algorithm can better understand them. Some of these functions involve noise reduction, image scaling, dilation and erosion, removing color spots, etc. The next step is selecting the area of interest, or region of interest. Under this lie the object detection and image segmentation algorithms. Further, we have feature extraction, which means retrieving relevant information/features from the images that are necessary for accomplishing the end goal. The final step is recognition or prediction, where we recognize objects in a given image frame or predict the probability of an object being present in it.
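The five steps above can be sketched as a chain of functions. The helpers below are hypothetical placeholders (a center crop stands in for detection, mean brightness for features, a threshold rule for the trained model) — the point is only how each stage hands data to the next:

```python
import numpy as np

def preprocess(frame):
    # Stage 2: e.g. normalize intensities to [0, 1] to tame lighting changes
    return frame.astype(float) / 255.0

def select_roi(frame):
    # Stage 3: placeholder "detector" — crop the central region of the frame
    h, w = frame.shape
    return frame[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

def extract_features(roi):
    # Stage 4: toy features — mean brightness and contrast of the region
    return np.array([roi.mean(), roi.std()])

def predict(features):
    # Stage 5: toy rule standing in for a trained classifier
    return "bright" if features[0] > 0.5 else "dark"

def pipeline(frame):
    # Stage 1 is the input frame itself
    return predict(extract_features(select_roi(preprocess(frame))))

frame = np.full((64, 64), 200, dtype=np.uint8)  # a bright dummy frame
print(pipeline(frame))  # → bright
```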


Let’s look at a real-world application of the computer vision pipeline. Facial expression recognition is an application of computer vision that is used a lot in research labs to get an idea of what effect a particular product has on its users. Again, we have input data to which we apply the pre-processing algorithms. The next step involves detecting faces in a particular frame and cropping that part of the frame. Once this is achieved, facial landmarks like the mouth, eyes, and nose are identified — key features for emotion recognition.

Pipeline for face expression recognition, Image by Author

In the end, a prediction model (a trained model) classifies the images based on the features extracted in the intermediary steps.


Before I start describing the algorithms in computer vision, I want to stress the term ‘frequency’. The frequency of an image is the rate of change of intensity. High frequency images have large changes in intensity; a low frequency image is relatively uniform in brightness, or its intensity changes slowly. On applying a Fourier transform to an image, we get a magnitude spectrum that yields information about the image’s frequency content. A concentrated point in the center of the frequency-domain image means a lot of low frequency components are present in the image. High frequency components include edges, corners, stripes, etc. We know that an image is a function f(x, y) of the coordinates x and y. To measure the intensity change, we just take the derivative of the function f(x, y).
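As a quick sketch of the idea, NumPy’s FFT can produce the magnitude spectrum described above. A nearly uniform (low frequency) image concentrates almost all of its spectral energy at the center of the shifted spectrum, while a striped (high frequency) image does not:

```python
import numpy as np

def magnitude_spectrum(img):
    # 2D Fourier transform; fftshift moves the zero-frequency
    # (DC) component to the center of the spectrum
    f = np.fft.fftshift(np.fft.fft2(img))
    return np.abs(f)

size = 64
uniform = np.full((size, size), 128.0)              # low frequency: flat image
stripes = np.tile([0.0, 255.0], (size, size // 2))  # high frequency: 1px stripes

for name, img in [("uniform", uniform), ("stripes", stripes)]:
    spec = magnitude_spectrum(img)
    center = spec[size // 2, size // 2]
    # fraction of total spectral energy sitting at the exact center
    print(name, center / spec.sum())
```

For the flat image this fraction is essentially 1.0; for the stripes, half of the energy moves out to a high-frequency bin.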

Sobel Filter

The Sobel operator is used in image processing and computer vision for edge detection. The filter creates an image emphasizing edges. It computes an approximation of the gradient of the image intensity function. At each pixel in the image, the output of the Sobel operator is the corresponding gradient vector and the norm of this vector. The Sobel operator convolves the image with a small integer-valued filter in the horizontal and vertical directions, which makes the operator inexpensive in terms of computational complexity. The Sx filter responds to intensity changes in the horizontal direction (vertical edges) and the Sy filter to changes in the vertical direction (horizontal edges). It is a high pass filter.

Sobel Filters, Image from Google
Applying Sx to the image, Image by Author
Applying Sy to the image, Image by Author
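A minimal NumPy sketch of the operator on a grayscale image (in practice you would call a library routine such as OpenCV’s `cv2.Sobel`; the naive loop below is just for illustration):

```python
import numpy as np

Sx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
Sy = Sx.T  # the vertical-direction filter is the transpose

def convolve(img, kernel):
    # Naive 'valid' correlation — enough to illustrate the operator
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

# A vertical step edge: dark left half, bright right half
img = np.zeros((8, 8))
img[:, 4:] = 255.0

gx = convolve(img, Sx)      # responds to horizontal intensity change
gy = convolve(img, Sy)      # responds to vertical intensity change
grad = np.hypot(gx, gy)     # gradient magnitude (norm of the vector)
print(grad.max(), gy.max()) # strong Sx response, zero Sy response
```

Note the convention: Sx measures change along x, so it fires on this vertical edge while Sy stays silent.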

Averaging Filter

The average filter is a normalized filter used to smooth an image. It moves across the image pixel by pixel, replacing each pixel value with the average value of the neighboring pixels, including itself. Average (or mean) filtering smooths the image by reducing the amount of variation in intensity between neighboring pixels.

Average filter, Image by Google
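A sketch of a normalized 3×3 mean filter in NumPy (border pixels are handled here by replicating the edge, one common choice):

```python
import numpy as np

def average_filter(img, k=3):
    # Normalized kernel: every weight is 1 / (k * k),
    # which is the same as taking the neighborhood mean
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            # Replace each pixel with the mean of its k x k neighborhood
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

noisy = np.array([[10., 10., 10.],
                  [10., 100., 10.],
                  [10., 10., 10.]])
print(average_filter(noisy))  # the 100 spike is spread out and damped
```

The isolated spike of 100 becomes (8·10 + 100) / 9 = 20, illustrating how the filter reduces intensity variation between neighbors.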

Gaussian Blur Filter

The Gaussian blur filter is a low pass filter with the following functions:

  1. Smooths an image
  2. Blocks the high frequency parts of an image
  3. Preserves edges better than a plain averaging filter of the same size

Mathematically, applying a Gaussian blur to an image means convolving the image with a Gaussian function.

2D Gaussian function, Image by Google

In the above formula, x is the horizontal distance from the origin, y is the vertical distance from the origin, and σ is the standard deviation of the Gaussian distribution. In two dimensions, the formula describes a surface whose contours are concentric circles with a Gaussian distribution around the origin.

3x3 Gaussian Blur Filter, Image by Google
5x5 Gaussian Blur Filter, Image by Google

One thing to note here is the importance of choosing the right kernel size. If the kernel dimension is too large, small features present in the image may disappear and the image will look blurred. If it is too small, the noise in the image will not be eliminated.
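The kernels shown above can be generated directly from the 2D Gaussian formula: sample G(x, y) on a grid centered at the origin and normalize so the weights sum to 1 (a rough sketch — libraries like OpenCV’s `cv2.GaussianBlur` build such kernels internally):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # Distances from the kernel center along x and y
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    # 2D Gaussian: G(x, y) ∝ exp(-(x^2 + y^2) / (2 * sigma^2))
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    # Normalize so blurring does not change overall image brightness
    return kernel / kernel.sum()

k3 = gaussian_kernel(3, sigma=1.0)
print(np.round(k3, 3))  # center weight is the largest
print(k3.sum())         # weights sum to 1
```

A larger `size` (or `sigma`) smooths more aggressively — exactly the kernel-size trade-off described above.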

Canny Edge Detector

It is an algorithm that uses four filters to detect horizontal, vertical, and diagonal edges in the blurred image. The algorithm performs the following steps:

  1. It is a widely used and accurate edge detection algorithm
  2. Filters out noise using a Gaussian blur
  3. Finds the strength and direction of edges using Sobel filters
  4. Applies non-max suppression to isolate the strongest edges and thin them to one-pixel-wide lines
  5. Uses hysteresis (a double thresholding method) to isolate the best edges

Canny Edge detector on a steam engine photo, Image by Wikipedia
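In practice you would call OpenCV’s `cv2.Canny`, but the steps can be sketched in plain NumPy. This is a simplified version: non-max suppression only compares horizontal/vertical neighbors (real Canny interpolates along the gradient direction), and hysteresis keeps weak pixels that touch a strong one:

```python
import numpy as np

def conv(img, k):
    # Naive 3x3 convolution with edge-replicated borders
    p = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return out

def canny_lite(img, low=100.0, high=400.0):
    # Steps 1-2: blur, then gradient strength/direction via Sobel
    g = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 16.0
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    blurred = conv(img, g)
    gx, gy = conv(blurred, sx), conv(blurred, sx.T)
    mag = np.hypot(gx, gy)
    # Step 3: non-max suppression — keep only local maxima across the edge
    nms = np.zeros_like(mag)
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            if abs(gx[i, j]) >= abs(gy[i, j]):   # mostly horizontal gradient
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            else:                                # mostly vertical gradient
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                nms[i, j] = mag[i, j]
    # Steps 4-5: double threshold + hysteresis — weak pixels survive
    # only if an 8-connected neighbor is strong
    strong, weak = nms >= high, (nms >= low) & (nms < high)
    p = np.pad(strong, 1)
    near_strong = np.zeros_like(strong)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            near_strong |= p[1 + di:1 + di + strong.shape[0],
                             1 + dj:1 + dj + strong.shape[1]]
    return strong | (weak & near_strong)

# Vertical step edge: the detector should fire only near the step
img = np.zeros((16, 20))
img[:, 10:] = 255.0
edges = canny_lite(img)
print(sorted(set(np.where(edges)[1])))  # edge columns
```

On this synthetic step edge the output is a thin two-column line at the step, with the weaker gradient responses on either side suppressed.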

Haar Cascade

This is a machine learning based approach in which a cascade function is trained to solve a binary classification problem. The function is trained on a plethora of positive and negative images and is then used to detect objects in other images. It detects the following:

  1. Edges
  2. Lines
  3. Rectangular patterns

To detect the above patterns, the following features are used:

Haar Cascade Features, Image by OpenCV
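These rectangular features are cheap to evaluate thanks to the integral image: the sum of any rectangle costs just four lookups. Below is a sketch of a two-rectangle (edge) feature in NumPy, independent of OpenCV’s trained cascade files:

```python
import numpy as np

def integral_image(img):
    # ii[i, j] = sum of all pixels above and to the left of (i, j);
    # padded with a zero row/column so lookups need no special cases
    ii = img.cumsum(axis=0).cumsum(axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, r0, c0, r1, c1):
    # Sum of img[r0:r1, c0:c1] in four lookups
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def edge_feature(ii, r0, c0, r1, c1):
    # Two-rectangle Haar feature: right half minus left half
    cm = (c0 + c1) // 2
    return rect_sum(ii, r0, cm, r1, c1) - rect_sum(ii, r0, c0, r1, cm)

# Patch with a dark left half and bright right half -> strong response
patch = np.zeros((8, 8))
patch[:, 4:] = 255.0
ii = integral_image(patch)
print(edge_feature(ii, 0, 0, 8, 8))  # 8 * 4 * 255 = 8160
```

A trained cascade evaluates thousands of such features at many positions and scales, rejecting non-object windows early — which is what makes the detector fast.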

Convolutional layers

In this approach, the neural network learns the features of a group of images belonging to the same category. The learning takes place by updating the weights of the neurons using the backpropagation technique, with gradient descent as the optimizer. It is an iterative process that aims to decrease the error between the actual output and the ground truth. The convolution layers/blocks obtained in the process act as feature layers that are used to distinguish a positive image from a negative one. An example of a convolutional network is given below.

Convolutional Neural Network, Image by Google

The fully connected layers, along with a softmax function at the end, categorize the incoming image into one of the categories it was trained on. The output score is a probability between 0 and 1.
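A toy forward pass in NumPy makes the flow concrete: one convolution layer producing a feature map, flattened into a fully connected layer, finished with softmax. The weights here are random stand-ins for the values backpropagation would learn:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    # Single-channel 'valid' convolution -> one feature map
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    # Subtract the max for numerical stability
    e = np.exp(x - x.max())
    return e / e.sum()

img = rng.random((8, 8))               # dummy input image
kernel = rng.standard_normal((3, 3))   # untrained conv filter
fc = rng.standard_normal((36, 2))      # fully connected: 6x6 map -> 2 classes

features = relu(conv2d(img, kernel))   # feature layer
scores = softmax(features.flatten() @ fc)
print(scores, scores.sum())            # two class probabilities summing to 1
```

Training would adjust `kernel` and `fc` by gradient descent so that the score of the correct class approaches 1.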


This blog has covered an overview of the most common algorithms used in computer vision, along with a general pipeline. These algorithms form the basis of more complicated algorithms like SIFT, SURF, ORB, and many more.


