Abstract—Face detection and recognition are challenging research topics in computer vision. Several algorithms have been proposed to address problems caused by changes in environment and lighting conditions. In this research, we introduce a new algorithm for face detection and identification. The proposed method uses the well-known local binary patterns (LBP) algorithm together with K-means clustering for face segmentation and maximum likelihood for classifying the output data. The method detects and recognizes faces on the basis of the distribution of feature-vector amplitudes over six levels, that is, three for positive amplitudes and three for negative amplitudes. Detection is conducted by classifying the distribution values and deciding whether these values compose a face.
Keywords—Face detection; LBP; K-means clustering; HoG; PADL.
Face detection and recognition are major topics in computer vision, robotic vision, and biometrics, not only for their challenging nature but also for their wide range of applications. Several biometric modalities are available, such as fingerprint and iris identification, but facial recognition is one of the few that can identify a particular individual directly from a digital image. Face recognition is used in many applications, for example in banking and passport offices. One difficulty in face recognition is distinguishing identical twins. The local binary patterns (LBP) algorithm has been applied to this case because LBP describes well the micro-patterns present in a face.
Over the past 15 years, several algorithms have been introduced after the well-known Viola–Jones algorithm [1]. Face detection algorithms can be categorized into two main schemes, namely, rigid templates, which are learned by boosting methods or deep neural networks [2], and deformable models that describe a face by parts. Face detection is an important field of study because it is a cornerstone for many applications that depend on facial features, such as face recognition, verification, and tracking, and gender, age, and emotion recognition. Facial recognition is a tough challenge owing to the variation in size, shape, color, and texture of human faces, and no unique method exists for recognizing faces among humans. Therefore, building a fully automated system requires a robust and efficient face recognition method. A face recognition system matches the faces given as input against database images. Several methods are available for recognizing faces, such as appearance-based methods, support vector machines, and hidden Markov models. This paper analyzes face recognition based on local binary patterns, which is an appearance-based method.
In the existing system, many algorithms are used to recognize faces. An important face detection algorithm was introduced by Viola and Jones; it made face detection feasible and is still extensively applied in many real-world systems. The Viola–Jones method is referred to as boosting-based face detection and paved the way for several achievements in the field. The past few years have witnessed a number of face detection methods based on deep learning, which significantly outperform classical computer vision methods. Li et al. presented a new method for detecting faces in the wild, which integrates a ConvNet and a 3D mean face model in an end-to-end multi-task discriminative learning framework. Recently, another study applied the faster region-based convolutional neural network (R-CNN), a state-of-the-art generic object detector, and achieved promising results. Considerable work has been conducted to improve the faster R-CNN architecture. In one study, joint training was conducted on the CNN cascade and the region proposal network (RPN), thereby realizing end-to-end optimization for faster R-CNN. Wan et al. combined the faster R-CNN face detection algorithm with hard negative mining and ResNet and significantly boosted performance on the face detection dataset and benchmark (FDDB). Generally, these methods reduce the dimensionality of the image; however, a major drawback is that they cannot capture complete information about the face.
Modern face detection algorithms have benefited from feature extraction methodologies, such as the scale-invariant feature transform [3], local binary patterns (LBP) [4] and their variations, fast counterparts such as speeded-up robust features [5], and histograms of oriented gradients (HoG) [6]. These feature extraction methods have been used to describe and detect faces. Face detection algorithms can be classified into four groups depending on their methods: feature invariant, knowledge based, template matching, and appearance based.
In the proposed method, the targeted images were collected, and K-means clustering was applied to separate the colors that form these images, thereby separating a face from its background. The LBP algorithm is then applied to the faces included in an image. Afterward, the amplitudes of the LBP feature vector are collected and separated into six groups, namely, three for positive amplitudes and three for negative amplitudes. Fig. 1 shows the symmetry of the amplitude distribution of a training face in comparison with the targeted region, which reveals the existence of a face in that region.
Our algorithm consists of the following procedures:
A. LBP Concept
LBP is the process of encoding local image information in such a way that each pixel is represented by a unique binary pattern derived from its neighborhood.
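As a concrete illustration, the basic operator can be sketched in a few lines of Python. This is a minimal sketch, assuming an 8-neighbor window and a clockwise bit ordering; both are conventions not fixed by the text above.

```python
def lbp_code(center, neighbors):
    """Return the 8-bit LBP code for one pixel.

    neighbors: the 8 surrounding gray values, listed clockwise
    (the starting neighbor and direction are a convention).
    """
    code = 0
    for i, value in enumerate(neighbors):
        # Each neighbor greater than or equal to the center sets one bit.
        if value >= center:
            code |= 1 << i
    return code

# Center 90; the neighbors at indices 0, 2, 3, 6, 7 are >= 90.
print(lbp_code(90, [100, 80, 120, 95, 60, 85, 130, 90]))  # -> 205
```

Thresholding at the center value is what makes the code invariant to monotonic gray-level changes, which is why LBP is robust to lighting variation.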
B. LBP Extension and Modification
Many modifications have been made to LBP to improve its performance for certain applications, such as texture description [4]. These modifications were made by changing the neighborhood sizes. Another extension adds the so-called uniform patterns, which reduce the length of the feature vector and provide a simple rotation-invariant descriptor. A bitwise transition is a change from 0 to 1 or from 1 to 0, and a pattern is called uniform if it contains at most two such transitions. For example, the pattern 00000000 is uniform because it has no bitwise transitions. The pattern 01111110 is also uniform because the bitwise transition occurs two times. By contrast, 01010111 is not uniform because the bitwise transition occurs five times.
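The uniformity test can be sketched as follows. Note that this sketch follows the examples above by comparing adjacent bits only; the standard uniform-LBP formulation additionally counts the wrap-around transition from the last bit back to the first.

```python
def bitwise_transitions(pattern):
    """Count 0->1 and 1->0 changes between adjacent bits of a bit string.

    The standard uniform-LBP definition also counts the wrap-around
    transition from the last bit back to the first; here only adjacent
    pairs are compared, matching the worked examples in the text.
    """
    return sum(pattern[i] != pattern[i + 1] for i in range(len(pattern) - 1))

def is_uniform(pattern):
    # A pattern is uniform when it has at most two bitwise transitions.
    return bitwise_transitions(pattern) <= 2

print(bitwise_transitions("01111110"))  # -> 2, uniform
print(bitwise_transitions("01010111"))  # -> 5, not uniform
```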
C. Clustering Techniques
Clustering can be defined as the task of grouping a set of objects in such a way that those in the same cluster are more similar to one another than to those in other clusters.
D. K-means Clustering
K-means clustering is a popular clustering algorithm, where clustering is defined as the process of separating information into groups such that the information within each group is highly similar and less similar to the information of other groups. K-means clustering was first proposed by Stuart Lloyd in 1957. It is an unsupervised algorithm, where K represents the number of clusters and is supplied by the user, although some variants can estimate the value of K automatically. K-means clustering works for numerical data only and is easy to implement.
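A minimal one-dimensional sketch of the algorithm is given below; the function name, the use of scalar intensity values, and the fixed iteration count are illustrative choices, not taken from the paper.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal 1-D K-means: returns final centroids and point labels.

    points: list of scalar values (e.g. pixel intensities);
    k: number of clusters, supplied by the user.
    """
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        labels = [min(range(k), key=lambda j: abs(p - centroids[j]))
                  for p in points]
        # Update step: each centroid moves to the mean of its members.
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids, labels

# Two well-separated intensity groups end up in two clusters.
centroids, labels = kmeans([10, 12, 11, 200, 205, 198], 2)
```

In the segmentation step of the paper, the same assign-then-update loop runs on pixel colors instead of scalars, so that the face region separates from the background cluster.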
Detecting a face on the basis of the similarity between LBP histograms is the main concept of this algorithm. The comparison is based mainly on the number of histogram amplitude distribution points rather than on the full feature vector.
In the histogram that forms the feature vector, the space of amplitudes is separated into six levels, that is, three for positive and three for negative amplitudes. On the basis of the distribution of these points, the classifier can separate a face from a non-face area.
A. Face Segmentation
The first procedure of our algorithm involves segmenting an image to separate the face from the scene by using K-means clustering, which also reduces the time consumed for face detection. We took a sample of images that contain a face in the scene.
B. LBP Feature Vector
As stated previously, the main concept of face detection here is to identify the differences between the LBP of the face sample and that of the block taken from the image. The similarity between the two feature vectors is then calculated, and face and non-face regions are segregated.
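The paper does not name its similarity measure; one common choice for comparing LBP histograms is the chi-square distance, sketched here as an illustrative stand-in.

```python
def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two histograms (lower = more similar).

    eps avoids division by zero in empty bins; it is an implementation
    detail, not part of the paper's method.
    """
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

# Identical histograms have distance zero; disjoint ones score high.
print(chi_square([1, 2, 3], [1, 2, 3]))  # -> 0.0
```

A detection threshold on this distance would then decide whether a candidate block is close enough to the face sample.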
C. Image Classification
Image classification is achieved using a proposed method called “probability of amplitude distribution levels of difference of LBP” (PADL). This method divides the amplitude values of a feature vector histogram into six levels (three for positive and three for negative amplitudes). Classification is then based on the distribution of amplitudes over these levels and decides whether an object is a face or a non-face.
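The level split can be sketched roughly as follows. The band thresholds are assumptions, since the paper does not specify them; here the amplitude range is split into three equal bands on each side of zero.

```python
def padl_levels(amplitudes):
    """Count histogram amplitudes falling into six signed levels.

    Thresholds are illustrative (the paper does not give them): the
    ranges below and above zero are each split into three equal bands,
    scaled by the largest absolute amplitude.
    """
    peak = max(abs(a) for a in amplitudes) or 1
    counts = [0] * 6   # [neg3, neg2, neg1, pos1, pos2, pos3]
    for a in amplitudes:
        if a == 0:
            continue
        band = min(int(abs(a) / peak * 3), 2)       # 0, 1, or 2
        index = (2 - band) if a < 0 else (3 + band)  # mirror negatives
        counts[index] += 1
    return counts

# One amplitude per band on each side yields a flat level count.
print(padl_levels([-9, -4, -1, 2, 5, 9]))  # -> [1, 1, 1, 1, 1, 1]
```

The resulting six counts are what the classifier compares against the counts of a known face sample.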
D. Face Boundaries
We put together a simple dataset consisting of 10 faces and 10 non-face objects or scenes. Each face sample was cropped from the eyebrow area to the area under the lips instead of the whole face.
There are three important steps to be performed for face recognition:
1. FACE DETECTION: This is an important initial step in the facial recognition system, performed to obtain pure facial images with normalized intensity and uniform size and shape.
2. FEATURE EXTRACTION: Extracting the important features of a face image is done to obtain meaningful information that is useful for identifying the similarities between different faces.
3. VERIFICATION: The obtained face image is then compared with the images available in the database. If the obtained image matches a database image, the face is recognized; otherwise, it is not identified.
Table -1: Comparison of Various Methods and their performance


Here, the input image is loaded from the database and divided into ? 2 regions. The local binary pattern is applied to each region, a histogram is calculated for each block separately, and finally the histograms are concatenated into a single feature vector.
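This region-wise pipeline can be sketched as follows, assuming 8-bit LBP codes (256 histogram bins); the helper names and the number of regions are illustrative, since the region grid is not specified above.

```python
def lbp_histogram(codes, bins=256):
    """Histogram of 8-bit LBP codes for one region."""
    hist = [0] * bins
    for c in codes:
        hist[c] += 1
    return hist

def regional_feature_vector(regions):
    """Concatenate per-region LBP histograms into one feature vector.

    regions: list of lists of LBP codes, one inner list per region.
    The result has 256 entries per region, preserving where in the
    face each pattern occurred.
    """
    vector = []
    for codes in regions:
        vector.extend(lbp_histogram(codes))
    return vector

# Two regions -> a 512-dimensional concatenated feature vector.
vec = regional_feature_vector([[0, 0, 255], [1]])
```

Concatenating per-region histograms, rather than pooling one histogram over the whole image, is what lets the descriptor retain coarse spatial layout.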

Here, the output image above shows that the given face image is recognized at different orientations.
The proposed algorithm detects faces in images on the basis of facial texture after extracting the face from its surroundings. The PADL algorithm locates faces based on the distribution of points over six levels. Results show the efficiency of the algorithm in detecting faces with a high rate of accuracy. Detecting faces in many poses, such as turned and upside-down faces, is our main contribution to the literature.