Performance of the proposed normalization algorithm for iris recognition

A typical biometric recognition system mainly consists of four modules: acquisition and detection of biometric characteristics, extraction of a feature set, representation of the extracted feature set, and matching between this feature set and the template features in the database. Initially, the biometric features of the user are enrolled as a template in a database. For each subsequent use, the biometric features are acquired again and compared with the template features previously stored in the database. If the similarity between the acquired features and the template features exceeds a predetermined threshold, the user is identified as the enrolled person. Recently, personal identification using physical characteristics has become a prominent research topic because such characteristics are hard to forge, lose, or share. Most biometric applications extract features from individuals, including facial features, fingerprints, hand veins, the retina, and the iris. The capability of these applications has been published in the cited papers [1]–[4]. These applications are convenient and secure compared with traditional personal identification methods.

Iris recognition has very high recognition accuracy in comparison with many other biometric features. Even the right and left irises of the same person are not the same; each pattern is different and unique. This paper proposes an algorithm to recognize people based on iris images. The algorithm consists of three stages. In the first stage, the segmentation process uses circular Hough transforms to find the region of interest (ROI) of the given eye images. After that, a proposed normalization algorithm generates the polar images and then enhances them using a modified Daugman's rubber sheet model. The last step of the proposed algorithm divides the enhanced polar image into 16 partitions of the iris region, so the normalized image consists of 16 blocks of small, constant dimensions. The Gray-Level Co-occurrence Matrices (GLCM) technique calculates and extracts the normalized image's texture features. Here, the features extracted are the contrast, correlation, energy, and homogeneity of the iris. In the last stage, a classification technique, discriminant analysis (DA), is employed to analyze the proposed normalization algorithm. We have compared the proposed normalization algorithm with nine other normalization algorithms. The DA technique produces an excellent classification performance with 100% accuracy. We also compare our results with previous results and find that the proposed iris recognition algorithm is an effective system for detecting and recognizing a person digitally; thus it can be used for security in buildings, airports, and many other automated applications.

International Journal of Advances in Intelligent Informatics, ISSN 2442-6571, Vol. 6, No. 2, July 2020, pp. 161-172

As of now, iris recognition is one of the most popular techniques for personal identification. Several properties make the iris a good biometric trait for identification. The iris pattern is stable from about the age of eight months, and its location is protected by the cornea and aqueous humor. No two irises are the same; even the right and left eyes of the same person have different, unique patterns. Identical twins have the same DNA pattern yet have different, genuinely unique iris patterns. Since the iris recognition system has been shown to be a more reliable and capable technique, with a lower recognition error rate, than face, palm-print, vein, and fingerprint recognition, as reported in Mansfield et al. [5], iris recognition has received increasing attention in recent years.
The research in the area of iris recognition has received considerable attention, and several techniques and algorithms have been proposed over the last few years. For iris recognition, the method for converting an iris image into an easily manipulated code is a critical process. Thus, we first give a brief overview of the techniques used in recent work. Several approaches have been used for iris recognition systems, the significant difference being the method used for extracting and analyzing iris features [6]–[10]. In general, iris recognition approaches can be roughly divided into four main categories: phase-based approaches [11][12], zero-crossing representation [13], intensity variation analysis-based methods [14][15], and texture analysis [10].
Besides iris image quality in the acquisition and detection step, feature extraction plays an essential role in the performance of an iris recognition system [1][10]. Designing a robust method for iris recognition is challenging. The main difficulty comes from the fact that there is no quick and efficient technique to extract unique features. Traditionally, decomposition techniques such as Fourier or wavelet decomposition using basis functions are selected to analyze real-world signals [16]. Fourier and wavelet descriptors have also been used for feature extraction [17]. However, the main drawback of those approaches is that the basis functions are fixed and do not necessarily match the varying nature of the signals. Also, to improve accuracy, most biometric authentication systems store multiple templates per user to account for variations in biometric data. Therefore, these systems suffer from storage space and computational overheads.
Furthermore, suitable feature comparison and classification systems for iris patterns must be developed. This paper presents a proposed iris recognition algorithm which consists of segmentation, the proposed normalization, feature extraction, and classification techniques to optimize the performance of the recognition system. In the first step, the segmentation process uses circular Hough transforms to find the region of interest (ROI) of the given eye images. The second step is normalization by employing the proposed algorithm: generate the polar images, enhance them using a modified Daugman's rubber sheet model, and then divide the normalized image into partitions of small, constant dimensions. Then, the GLCM technique is used for texture feature extraction and discriminant analysis is used as the classification technique.
The rest of this paper is organized as follows: Section 1 explores the introduction and briefly reviews previous work on iris recognition techniques. Section 2 highlights the methods combined in the iris recognition algorithm. The results and discussion are tabulated and explored in Section 3. Finally, Section 4 gives the conclusion.

Iris database
In this work, we have used a large, publicly and freely available iris database, CASIA-Iris version 3-Interval. The CASIA iris database is an extensive open iris database, and we use only a subset for performance evaluation. This database includes 249 different eyes (hence, 249 different classes) with 1664 images. Each image has a resolution of 320x280 pixels in 8-bit gray level. In the preprocessing stage, we checked the segmentation accuracy of the iris boundaries subjectively and excluded the images that failed iris localization for various causes (185 images; 11 classes are not used).

The proposed iris recognition algorithm
A new iris recognition algorithm is proposed in this paper. The algorithm consists of segmentation, the proposed normalization, feature extraction, and classification, as presented in Fig. 1. The algorithm is developed as a combination of several algorithms: segmentation using circular Hough transforms to find the region of interest (ROI) of the given eye images, a modified Daugman's rubber sheet model as the proposed normalization algorithm, division of the area into 16 regions, Gray-Level Co-occurrence Matrices (GLCM) for the texture feature extraction process, and classification using a discriminant analysis (DA) classifier. Within this combination, we propose a normalization algorithm that adds an enhancement step and divides the enhanced image into 16 regions. The proposed normalization algorithm is then analyzed by comparison with nine other methods.

Segmentation
For this paper, iris segmentation is achieved by the following three main steps. The first step locates the center and radius of the iris in the input image by using a circular Hough transform. Then a set of points near the iris center is taken as the pupil initialization. The last step locates the pupil boundary points by using region-based active contours. Several segmentation techniques have been applied for iris recognition: the Hough transform [18][19]–[21], Daugman's integro-differential operator [22], active contour models [12], and eyelash and noise detection [23]. The Hough transform is a standard computer vision algorithm that can be used to determine the parameters of simple geometric objects, such as lines and circles, present in an image. However, it suffers some limitations: (i) it requires a threshold value to be chosen for the edge map, and (ii) it may not be suitable for real-time applications due to limitation (i). Thus, an enhancement of the Hough transform, the circular Hough transform, was proposed by Wildes [20]. Wildes [20] applied a Gaussian smoothing function to obtain an edge map of the image. The edge map is obtained by thresholding the magnitude of the image intensity gradient. From the edge map, votes are cast in Hough space for the parameters of circles passing through each edge point. These parameters are the center coordinates xc and yc, and the radius r, which are able to define any circle according to equation 1.
A maximum point in the Hough space will correspond to the radius and center coordinates of the circle best defined by the edge points. Wildes [20] also makes use of the parabolic Hough transform to detect the eyelids, approximating the upper and lower eyelids with parabolic arcs, which are represented as equation 2,
where aj controls the curvature, (hj, kj) is the peak of the parabola, and θj is the angle of rotation relative to the x-axis.
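Equations (1) and (2) are referenced but not reproduced in the text above. For reference, the standard circle parameterization used by the circular Hough transform, and a rotated-parabola form consistent with the stated parameters aj, (hj, kj), and θj, are (the exact sign convention for the parabola varies between sources):

```latex
% Equation (1): a circle with center (x_c, y_c) and radius r
(x - x_c)^2 + (y - y_c)^2 = r^2

% Equation (2): a parabolic arc with curvature a_j, peak (h_j, k_j),
% rotated by angle \theta_j relative to the x-axis.
% In rotated coordinates u_j, v_j the arc is the parabola v_j = a_j u_j^2, with
u_j = (x - h_j)\cos\theta_j + (y - k_j)\sin\theta_j, \qquad
v_j = -(x - h_j)\sin\theta_j + (y - k_j)\cos\theta_j
```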
In this paper, it was decided to use a circular Hough transform for detecting the iris and pupil boundaries. This involves first employing Canny edge detection to generate an edge map. Gradients were biased in the vertical direction for the outer iris/sclera boundary, while vertical and horizontal gradients were weighted equally for the inner iris/pupil boundary. A modified version of Kovesi's Canny edge detection MATLAB function was implemented, which allowed for the weighting of the gradients. The circular Hough transform can then be employed to deduce the radius and center coordinates of the pupil and iris regions. In the eye images, a circle is recognized by treating the sharp edges in the image as local patterns and searching for the maximum value of the circular Hough transform.
An automatic segmentation algorithm based on the circular Hough transform is employed in references [18]–[21]. The localization method, similar to Daugman's approach, is also based on the first derivative of the image. In this paper, the enhanced circular Hough transform based on Masek [24] is used. The improved algorithm for segmentation purposes is presented in Fig. 2.
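To make the voting step concrete, the following is a minimal sketch of a circular Hough transform in Python with NumPy. The function name and the fixed angular sampling are our illustrative choices, not the paper's implementation; a real pipeline (e.g. Masek's MATLAB code) would first produce the edge map with a Canny detector rather than receive it ready-made.

```python
import numpy as np

def circular_hough(edge_map, r_min, r_max, n_theta=120):
    """Accumulate votes in (y_c, x_c, r) Hough space for circles that pass
    through each edge point; return the highest-voted circle parameters."""
    h, w = edge_map.shape
    radii = np.arange(r_min, r_max + 1)
    acc = np.zeros((h, w, len(radii)), dtype=np.int64)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    ys, xs = np.nonzero(edge_map)
    for y, x in zip(ys, xs):
        for k, r in enumerate(radii):
            # candidate centers lie on a circle of radius r around the edge point
            yc = np.rint(y - r * np.sin(thetas)).astype(int)
            xc = np.rint(x - r * np.cos(thetas)).astype(int)
            ok = (yc >= 0) & (yc < h) & (xc >= 0) & (xc < w)
            np.add.at(acc, (yc[ok], xc[ok], np.full(ok.sum(), k)), 1)
    yc, xc, k = np.unravel_index(np.argmax(acc), acc.shape)
    return yc, xc, radii[k]
```

The maximum of the accumulator corresponds to the circle best supported by the edge points, matching the description of equation (1) above.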

Proposed Normalization Algorithm
Once the iris region is successfully segmented from an eye image, the next stage is to transform each iris image into a representation with fixed dimensions to allow comparisons. The normalization also helps eliminate noise from eyelashes. One obstacle to fixed dimensions is the stretching of the iris caused by pupil dilation under varying levels of illumination. The normalization process produces iris regions that have the same constant dimensions, so that two photographs of the same iris under different conditions will have their characteristic features at the same spatial locations. Another point of note is that the pupil region is not always concentric within the iris region and is usually slightly nasal. This must be taken into account when normalizing the 'doughnut'-shaped iris region to have a constant radius [24].
Several normalization techniques have been applied for iris recognition, such as the Daugman rubber sheet model [24], image registration [20], and virtual circles [25]. Modifications of Daugman's normalization technique were proposed in references [26]–[28]. The homogeneous rubber sheet model devised by Daugman remaps each point within the iris region to a pair of polar coordinates (r, θ), where r lies on the interval [0, 1] and θ is the angle over [0, 2π]. The remapping of the iris region from (x, y) Cartesian coordinates to the normalized non-concentric polar representation is modeled as I(x(r, θ), y(r, θ)) → I(r, θ), with x(r, θ) = (1 − r)xp(θ) + r·xl(θ) and y(r, θ) = (1 − r)yp(θ) + r·yl(θ), where I(x, y) is the iris region image, (x, y) are the original Cartesian coordinates, (r, θ) are the corresponding normalized polar coordinates, and (xp, yp) and (xl, yl) are the coordinates of the pupil and iris boundaries along the θ direction. The rubber sheet model takes into account pupil dilation and size inconsistencies in order to produce a normalized representation with constant dimensions. In this way the iris region is modelled as a flexible rubber sheet anchored at the iris boundary with the pupil center as the reference point.
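A minimal sketch of this remapping in Python follows. The function name, the circular boundary models, and the nearest-neighbour sampling are our simplifying assumptions; the linear interpolation between the pupil boundary (xp, yp) and the iris boundary (xl, yl) is the rubber-sheet mapping itself, and the 20x240 default matches the dimensions used later in the paper.

```python
import numpy as np

def rubber_sheet(img, pupil_xy, pupil_r, iris_xy, iris_r,
                 n_radial=20, n_angular=240):
    """Remap the iris annulus to a fixed n_radial x n_angular polar strip:
    x(r, th) = (1 - r) * x_p(th) + r * x_l(th), and likewise for y."""
    out = np.zeros((n_radial, n_angular), dtype=img.dtype)
    rs = np.linspace(0.0, 1.0, n_radial)
    for j, th in enumerate(np.linspace(0.0, 2.0 * np.pi, n_angular,
                                       endpoint=False)):
        # pupil and iris boundary points along direction theta
        xp = pupil_xy[0] + pupil_r * np.cos(th)
        yp = pupil_xy[1] + pupil_r * np.sin(th)
        xl = iris_xy[0] + iris_r * np.cos(th)
        yl = iris_xy[1] + iris_r * np.sin(th)
        xs = np.clip(np.rint((1 - rs) * xp + rs * xl).astype(int),
                     0, img.shape[1] - 1)
        ys = np.clip(np.rint((1 - rs) * yp + rs * yl).astype(int),
                     0, img.shape[0] - 1)
        out[:, j] = img[ys, xs]  # nearest-neighbour sampling
    return out
```

Because the pupil and iris boundaries are sampled independently along each θ, the mapping also handles a non-concentric (slightly nasal) pupil, as discussed above.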
Even though the homogeneous rubber sheet model accounts for pupil dilation, imaging distance, and non-concentric pupil displacement, it does not compensate for rotational inconsistencies. In the Daugman system, rotation is accounted for during matching by shifting the iris templates in the θ direction until the two iris templates are aligned. Other sources of dimensional inconsistency include varying imaging distance, rotation of the camera, head tilt, and rotation of the eye within the eye socket [29].
Daugman's rubber sheet model is one of the most popular normalization techniques due to its ease of implementation. Several studies have modified normalization techniques based on Daugman's rubber sheet model to obtain excellent recognition performance [28]. In our work, a normalized image based on the Daugman rubber sheet model is enhanced and half of it is cropped away, as shown in Fig. 3; the remaining half of the improved normalized image is then divided into 16 partitions. The main purpose of enhancement and cropping is to obtain a clear picture and to remove noise from the normalized image along with the deleted half. Thus, the ROI of the enhanced normalized image is a better image with the same constant dimensions.

Features extraction
In this work, the GLCM technique is implemented to extract texture features that differentiate the iris of each person for classification. This technique has been used in our previous work [30][31]. It depends on second-order statistics of the pixel intensities. The main aim of this technique is to obtain the feature matrix used for comparison purposes. The co-occurrence matrix estimates the joint probability distribution function of gray-level pairs in an image. It provides a simple approach to capture the spatial relationship between two points in a textured pattern. In this paper, the matrices are constructed using a MATLAB function for the GLCM at a distance of d = 2 and at angles θ = 0°, 45°, 90°, and 135°. The contrast, correlation, energy, and homogeneity of the pixel values are extracted as features for this research. They are calculated from the partitions of the enhanced normalized iris image using pixels as the primary information.
The algorithm for feature extraction using the GLCM is presented below:
1) Obtain the pixel values of the 16 partitions of the enhanced normalized image from Subsection 2.2.2.
2) Apply the GLCM technique in terms of distance and angle, d = 2 and θ = 0° to 135°.
3) Calculate the contrast, correlation, energy, and homogeneity values for the partitions simultaneously using equations (6) to (9).
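The steps above can be sketched as follows: a NumPy implementation of a symmetric, normalized GLCM for one pixel offset, together with the four properties named in the text. The paper builds its matrices with a MATLAB GLCM function, so this Python version is only an illustrative stand-in, and the property definitions follow the common Haralick-style formulas.

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Symmetric, normalized gray-level co-occurrence matrix for offset (dx, dy).
    img must contain integer gray levels in [0, levels)."""
    h, w = img.shape
    P = np.zeros((levels, levels), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                P[img[y, x], img[y2, x2]] += 1
    P = P + P.T          # make the matrix symmetric
    return P / P.sum()   # normalize to a joint probability distribution

def glcm_features(P):
    """Contrast, correlation, energy, and homogeneity of a normalized GLCM."""
    n = P.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    contrast = np.sum(P * (i - j) ** 2)
    energy = np.sum(P ** 2)
    homogeneity = np.sum(P / (1.0 + np.abs(i - j)))
    mu_i, mu_j = np.sum(i * P), np.sum(j * P)
    sd_i = np.sqrt(np.sum(P * (i - mu_i) ** 2))
    sd_j = np.sqrt(np.sum(P * (j - mu_j) ** 2))
    correlation = (np.sum(P * (i - mu_i) * (j - mu_j)) / (sd_i * sd_j)
                   if sd_i * sd_j > 0 else 1.0)
    return contrast, correlation, energy, homogeneity
```

For the d = 2 offsets at θ = 0°, 45°, 90°, and 135° described in the text, the offsets (dx, dy) would be (2, 0), (2, -2), (0, -2), and (-2, -2) in image coordinates.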

Discriminant analysis
Several researchers in iris recognition employ Hamming distance, Euclidean distance, or neural networks. As reviewed, these have an excellent capability to classify iris classes. They are compared with our approach in terms of accuracy with different data, amounts of data, and classifiers. Several techniques are also tested using our data and the same classifier. In this study, discriminant analysis (DA) is chosen as the classifier tool for differentiating the iris classes of the used eye images. DA is a method implemented in numerous software packages and is easy to apply as a classification tool. The technique has a direct analytical solution and is very good at detecting global phenomena. The method is simply defined and implemented, especially when there is insufficient data to define sample means and covariance matrices adequately.
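For concreteness, the following is a minimal linear discriminant classifier of the kind described: Gaussian classes with a pooled within-class covariance matrix, yielding the direct analytical solution mentioned above. The class and method names are ours, and the small regularization term is our addition for numerical stability; this is a sketch, not the paper's implementation.

```python
import numpy as np

class LinearDA:
    """Minimal linear discriminant classifier: Gaussian classes sharing one
    pooled covariance matrix, with linear discriminant scoring."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.means = np.array([X[y == c].mean(axis=0) for c in self.classes])
        d = X.shape[1]
        Sw = np.zeros((d, d))
        for c, m in zip(self.classes, self.means):
            Xc = X[y == c] - m
            Sw += Xc.T @ Xc
        Sw /= (len(X) - len(self.classes))       # pooled covariance estimate
        self.Sw_inv = np.linalg.pinv(Sw + 1e-6 * np.eye(d))  # regularized
        self.priors = np.array([(y == c).mean() for c in self.classes])
        return self

    def predict(self, X):
        # linear score: x' S^-1 mu_c - 0.5 mu_c' S^-1 mu_c + log pi_c
        scores = (X @ self.Sw_inv @ self.means.T
                  - 0.5 * np.sum(self.means @ self.Sw_inv * self.means, axis=1)
                  + np.log(self.priors))
        return self.classes[np.argmax(scores, axis=1)]
```

In practice the GLCM feature vectors of each partition would form the rows of X and the subject identity the labels y.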
Based on Section 2.2, the proposed iris recognition algorithm includes segmentation, the proposed normalization, feature extraction, and classification: segmentation using a circular Hough transform, a new normalization technique based on the Daugman rubber sheet model, texture feature extraction using the GLCM technique, and classification using discriminant analysis. As in Table 1, there are nine other normalization techniques for comparison. Each is followed by the GLCM technique and the DA classifier for feature extraction and classification, respectively. Therefore, ten kinds of input data based on the ten different normalization techniques are fed into the DA classifier to test each performance. The comparison of the normalization techniques is conducted on four features: contrast, correlation, energy, and homogeneity. The entries of Table 1 recoverable here are:
T3: 20x120 pixels for the first half of the iris
T4: 20x120 pixels plus enhancement for the first half of the iris
T5: 20x120 pixels for the second half of the iris
T6: 20x120 pixels plus enhancement for the second half of the iris
T7: 10x240 pixels
T8: 10x240 pixels using 8 partitions
T9: 10x240 pixels using 8 partitions plus enhancement

Results and Discussion

Segmentation, normalization, and features extraction
The segmentation model proved to be successful. The CASIA version 3 Interval database yielded good segmentation, since those eye images had been taken specifically for iris recognition research and the boundaries between the iris, pupil, and sclera were clearly distinguished. For the CASIA database, the segmentation technique managed to accurately segment the iris region in 1484 out of the 1664 used eye images, which corresponds to a success rate of around 89.2%. The problem images had small intensity differences between the iris region and the pupil region. Eyelid detection also proved quite successful and managed to isolate most occluding eyelid regions. One problem was that it would sometimes remove too much of the iris region, which could make the recognition process less accurate, since there is less iris information. However, this is preferred over including too much of the iris region if there is a high chance of also including undetected eyelash and eyelid regions. The eyelash detection implemented for the CASIA version 3 Interval database also proved successful in isolating most of the eyelashes occurring within the iris region, as shown in Fig. 4. A slight problem was that areas where the eyelashes were light, such as at the tips, were not detected. However, these undetected areas were small when compared with the size of the iris region. Therefore, cropping the iris area after the normalization process is essential to overcome some of these problems in segmentation using a circular Hough transform.
The normalization process proved to be successful, as presented in Fig. 4. The rectangular representation is constructed from 4,800 data points, or 20x240 pixels, in each iris region. As stated in the previous section, the rectangular representation is a suitable form for comparison purposes because its dimensions are fixed. The normalized image is enhanced and the region of interest (ROI) is determined to handle the problems arising from the segmentation process and to further improve the accuracy of the subsequent feature extraction.

Fig. 4. Original iris images and their segmented images
The region of interest in this study is 10x240 pixels. The ROI is divided into 16 partitions, and the representative features (contrast, correlation, energy, and homogeneity) are extracted in each partition. Therefore, the features for each enhanced normalized image are 16x8 contrast values, 16x8 correlation values, 16x8 energy values, and 16x8 homogeneity values; that is, each partition contributes 8 values for each of the contrast, correlation, energy, and homogeneity features. Therefore, each image has 128 contrast features, 128 correlation features, 128 energy features, and 128 homogeneity features.
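The partitioning described here can be sketched as below, with the dimensions taken from the text: a 10x240 ROI split along the angular (width) axis into 16 blocks of 10x15 pixels. The function name is ours, and the even split along the width is our reading of the 16-partition scheme.

```python
import numpy as np

def partition_roi(roi, n_parts=16):
    """Split the ROI into n_parts equal blocks along the angular (width) axis."""
    h, w = roi.shape
    assert w % n_parts == 0, "width must divide evenly into the partitions"
    step = w // n_parts
    return [roi[:, k * step:(k + 1) * step] for k in range(n_parts)]
```

With a 10x240 ROI this yields 16 blocks of 10x15 pixels; the four GLCM properties are then computed per block to assemble the feature vector described above.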

Classification
To present the comparison described in Section 2.3, nine other types of normalization are used to compare the capability and performance of the proposed normalization algorithm, as tabulated in Table 2. The datasets are fed into DA, employed as the classifier tool to differentiate the used data. To demonstrate the robustness of the proposed approach, in our experiments the large, publicly and freely available CASIA iris database is used, as stated in the previous section. Based on Table 2, the proposed normalization algorithm is the best among the ten techniques. All of the used features achieve more than 95% accuracy when used as input features to the classifier. The T8 and T9 techniques also achieve high accuracy, 91.2% to 99.9%, for all used features using the GLCM technique. This confirms that the GLCM technique is an excellent technique for extracting texture features.
At present, most algorithms use only small data sets to evaluate iris recognition performance, as tabulated in Table 3. As presented in Table 3, several researchers use a characteristic similarity distance as the classifier [14][15][22][32]. Daugman [22] achieved good accuracy by using his own captured data, a Gabor filter for feature extraction, and the Hamming distance as the classifier. In 2004, Ma et al. [14][15], using 2,255 iris images, a particular class of wavelets and Gaussian-Hermite moments extracting 50 features, and a Euclidean distance or nearest-center classifier based on a cosine similarity measure, achieved 98.25% and 98.73% accuracy, respectively. Chang et al. [32] used Empirical Mode Decomposition (EMD) and the mean of the Euclidean distances (MED) for the 249 classes of the CASIA database (reduced by the images that failed iris localization) with 16384 features, and obtained good performance. On the other hand, several studies use neural networks for classification and also obtain excellent performance [17][33][34]. Huang et al. [17] extracted features using a Fourier-wavelet technique and achieved 94.37% accuracy, while Abiyev & Altunkaya [33], applying a linear Hough transform to extract features, reached 99.25% accuracy. Ant colony optimization (ACO) based segmentation with a self-organized feature map (SOFM) neural network was applied by Ma et al. [34] to 900 images, achieving 93.9% accuracy. Farouk [19] used a circular Hough transform as the feature extraction technique for the 249 classes of the CASIA database (reduced by the images that failed iris localization) and elastic graph matching (EGM) as the classifier; this study achieved excellent performance with 98.7% accuracy.
Only the approaches proposed by Ma et al. [14], Chang et al. [32], Ma et al. [34], Farouk [19], and our approach have been tested on large image sets involving more than 200 subjects. In this study, we achieved excellent performance using the proposed system as presented in this paper. However, a fully objective comparison is difficult because the data conditions differ.

Conclusion
Here we have presented a new and practical approach for iris recognition, which combines several algorithms to produce an excellent performance. This paper presents a proposed iris recognition algorithm which consists of segmentation, the proposed normalization, feature extraction, and classification techniques. In the first step, the segmentation process uses circular Hough transforms to find the region of interest (ROI) of the given eye images. The second step is normalization by employing the proposed algorithm: generate the polar images, then enhance them using a modified Daugman's rubber sheet model, and then divide the normalized image into partitions of small, constant dimensions. Then, the GLCM technique is used for texture feature extraction, and discriminant analysis is used as the classification technique. The algorithm is tested on the CASIA iris database. All recognition rates are more than 95%. Therefore, the proposed iris recognition algorithm has been demonstrated to be promising for iris recognition and is suitable for the identification process. In future work, we will improve the processing method for iris segmentation to reduce the influence of light, eyelids, and eyelashes. Furthermore, feature selection will be employed to further improve the performance of the system.