I. INTRODUCTION
Information security is increasingly important in the digital era. Traditional numeric passwords are commonly used for user authentication. Unlike possessions and passwords, biometric features, i.e., the inherent characteristics of the human body and behavior, are hardly lost, forgotten, or cracked. Biometrics therefore overcomes the defects of traditional authentication methods and has become an authentication technology with high security and reliability. Compared with other biometric modalities, the palmprint has several remarkable advantages, including rich discriminative features, few restrictions, low application cost, and high privacy, so it has become an outstanding biometric modality [1].
Online palmprint recognition can be categorized into contact and contactless modes [2]. In contact mode, the user's hand touches the equipment surface; in contactless mode, the hand does not need to touch any equipment surface [3].
In contact palmprint recognition systems, both the location and the gesture of the hand can be strictly controlled with pegs or other assistant devices, so preprocessing is not difficult and the accuracy can be high [4]. Unfortunately, contact systems raise problems such as hygiene risks, limited flexibility, contamination of the acquisition sensor, and cultural resistance [5]. Non-contact palmprint recognition systems overcome these problems [6]; however, they face several severe technical challenges, including complex backgrounds, varying illumination, and uncontrolled hand locations and gestures [7]. Ong et al. proposed a competitive hand valley detection method [8]: a Gaussian skin-color model was used to segment the non-contact palm region, the hand boundary was detected, all boundary points were judged according to certain rules, and finally the four valley points between the five fingers were found. This method heavily depends on the quality of the skin-color segmentation. Tang et al. designed a non-contact palmprint key-point localization method suitable for both open and closed fingers [9]. Leng et al. proposed triple-perpendicular-translation residuals for key-point localization with low computational complexity [10]. Aykut and Ekinci applied an active appearance model to palm detection [11]; however, its computational complexity is high. Javidnia developed an efficient illumination normalization algorithm to suppress illumination disturbances [12].
A non-contact palmprint recognition system is developed on the personal computer (PC) platform in this paper. To address the severe technical challenges of non-contact palmprint systems, three methods are implemented for palmprint localization. Among them, the "double-line-single-point" (DLSP) [13] and "double-assistant-crosshair" (DAC) [14] methods constrain the hand location and gesture during palmprint image acquisition and effectively help localize palmprint key points and lines. In addition, a novel method named "none-assistant-graphic" (NAG) is designed for palmprint localization, in which hand segmentation and the cropping of the region of interest (ROI) are performed without any assistant graphics. The convex hull of the hand contour helps detect the outside contour of the little finger as well as the valley bottom between the thumb and index finger. The three palmprint localization methods have good operating efficiency and meet the performance requirements of a real-time system. Furthermore, an attendance system on the PC platform is designed and developed based on non-contact palmprint recognition.
The rest of this paper is organized as follows. In Section II, the three methods for palmprint localization are introduced. The system design and development are explained in Section III and Section IV, respectively. Finally, the conclusions are drawn in Section V.
II. PALMPRINT LOCALIZATION
The main task of preprocessing for palmprint recognition is to crop the ROI from the original image. Three methods, namely NAG, DLSP, and DAC, are implemented in the system for palmprint localization.
No assistant graphics are employed for palm localization in the NAG method. Since it is not easy to control the stretch degree between the fingers, the four fingers, i.e., the index, middle, ring, and little fingers, are held together, and only the thumb is stretched out. The whole hand region is inside the acquisition window, and the four fingertips all point upward. The results of the steps in NAG, performed as follows, are shown in Fig. 1.
The skin-color likelihood map is binarized with the maximum inter-class variance (Otsu) method, as shown in Fig. 1(a).
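A minimal OpenCV sketch of this binarization step is given below; it assumes the likelihood map produced by the Gaussian skin-color model is available as an 8-bit single-channel image scaled to [0, 255].

```cpp
#include <opencv2/imgproc.hpp>

// Binarize a skin-color likelihood map (CV_8UC1) with Otsu's method.
// The threshold value is chosen automatically by maximizing the
// inter-class variance, as described above.
cv::Mat binarizeSkinLikelihood(const cv::Mat& likelihood)
{
    cv::Mat binary;
    cv::threshold(likelihood, binary, 0, 255,
                  cv::THRESH_BINARY | cv::THRESH_OTSU);
    return binary;
}
```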
The holes in the binarized image are filled to obtain the complete hand region, as shown in Fig. 1(b).
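For the hole-filling step, one common approach (a sketch, not necessarily the exact implementation used here) is to flood-fill the background from an image corner and merge the unreached pixels back into the mask.

```cpp
#include <opencv2/imgproc.hpp>

// Fill interior holes of the binarized hand mask (CV_8UC1, foreground = 255).
// Flood-fill the background from a border corner, invert the result, and OR
// it with the original mask so that enclosed holes become foreground.
cv::Mat fillHoles(const cv::Mat& mask)
{
    cv::Mat flooded = mask.clone();
    cv::floodFill(flooded, cv::Point(0, 0), cv::Scalar(255));  // assumes (0,0) is background
    cv::Mat holes;
    cv::bitwise_not(flooded, holes);   // pixels not reached by the flood fill = holes
    return mask | holes;
}
```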
The convex and concave points of the hand region are detected, as shown in Fig. 1(d).
Fig. 2 is a part of Fig. 1(d). The convex points, A and B, are the crossover points of the convex hull and the edge contour. An outside region is enclosed by a pair of convex points and the contour segment between them; for example, one outside region is enclosed by the line segment AB and the contour segment between A and B. The point-line distance between each point on the contour segment and the line through the pair of convex points is computed. The point with the largest point-line distance in each outside region is taken as the concave point of that region. The concave point with the largest point-line distance among all concave points is taken as the valley bottom between the thumb and index finger. In Fig. 2, C is a concave point and is also the valley bottom point.
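This valley-bottom search maps naturally onto OpenCV's convexity-defect analysis. The sketch below assumes the hand contour has already been extracted from the filled mask and, following the description above, selects the deepest defect as the thumb-index valley.

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

// Locate the valley bottom between thumb and index finger as the deepest
// convexity defect of the hand contour. For each "outside region" between
// two hull (convex) points, cv::convexityDefects reports the contour point
// farthest from the hull line together with its fixed-point distance.
cv::Point valleyBottom(const std::vector<cv::Point>& handContour)
{
    std::vector<int> hullIdx;
    cv::convexHull(handContour, hullIdx, /*clockwise=*/false, /*returnPoints=*/false);

    std::vector<cv::Vec4i> defects;            // {start, end, farthest, depth*256}
    cv::convexityDefects(handContour, hullIdx, defects);

    int bestIdx = -1;
    float bestDepth = 0.0f;
    for (const cv::Vec4i& d : defects) {
        float depth = d[3] / 256.0f;           // point-line distance in pixels
        if (depth > bestDepth) { bestDepth = depth; bestIdx = d[2]; }
    }
    return bestIdx >= 0 ? handContour[bestIdx] : cv::Point(-1, -1);
}
```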
The bounding rectangle of the hand region is computed, as shown in Fig. 3.
The coordinate system is established with the positive x-axis pointing rightward and the positive y-axis pointing downward. The four sides of the bounding rectangle, namely the left, right, top, and bottom sides, are determined by four points: the outermost convex point on the outside of the little finger, the valley bottom point between the thumb and index finger, the top of the middle finger, and the bottom of the palm. The width of the bounding rectangle is W, and the position of C, the valley bottom point between the thumb and index finger, is (x_C, y_C). D is the top-left corner of the ROI in Fig. 4, and its position (x_D, y_D) is computed by:
The side length of the ROI is 0.6 × W.
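A sketch of the ROI cropping is given below. The offsets of D from C come from the equation above and are therefore passed in as parameters (ax, ay, expressed as fractions of W) rather than hard-coded, while the side length follows 0.6 × W.

```cpp
#include <opencv2/imgproc.hpp>

// Crop the square ROI once the bounding rectangle of the hand and the valley
// bottom point C are known. The offset fractions ax and ay are supplied by
// the caller according to the corner equation; they are not fixed here.
cv::Mat cropRoi(const cv::Mat& image, const cv::Rect& handBox,
                const cv::Point& C, double ax, double ay)
{
    const double W = handBox.width;
    const int side = static_cast<int>(0.6 * W);
    const int xD = static_cast<int>(C.x + ax * W);   // x_D from the corner equation
    const int yD = static_cast<int>(C.y + ay * W);   // y_D from the corner equation
    cv::Rect roi(xD, yD, side, side);
    roi &= cv::Rect(0, 0, image.cols, image.rows);   // clamp to image bounds
    return image(roi).clone();
}
```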
For comfort, palmprint images are usually captured with the built-in rear camera on a smartphone, whereas users tend to capture palmprint images with the front camera on a PC. Thus, in the DLSP method, the directions of both assistant lines are adjusted to the vertical (upward) direction on the PC platform for comfort.
When the four fingers (index, middle, ring, and little fingers) are held together, i.e., there is almost no gap between them, the outside boundaries of the four fingers are approximately two parallel straight lines. The placement of the right hand is shown in Fig. 5; the assistant graphics of the right hand can be flipped horizontally to obtain those of the left hand. The four fingers are held together naturally, while the thumb is stretched out. The two vertical assistant lines are used to align the two outside boundaries of the four fingers, respectively, and the red assistant point should be aligned with the intersection of the outside boundary and the bottom line of the index finger.
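One possible way to render the DLSP assistant graphics on the camera preview with Qt is sketched below; the line positions and the assistant-point location are layout parameters chosen by the caller, not values taken from the system.

```cpp
#include <QImage>
#include <QPainter>
#include <QPen>

// Draw the DLSP assistant graphics for the right hand on a preview frame:
// two vertical lines for aligning the outside boundaries of the four closed
// fingers, and a red assistant point at the position supplied by the caller.
void drawDlspOverlay(QImage& frame, int x1, int x2, const QPoint& assistPoint)
{
    QPainter painter(&frame);
    painter.setPen(QPen(Qt::green, 2));
    painter.drawLine(x1, 0, x1, frame.height());   // left vertical assistant line
    painter.drawLine(x2, 0, x2, frame.height());   // right vertical assistant line

    painter.setPen(Qt::NoPen);
    painter.setBrush(Qt::red);
    painter.drawEllipse(assistPoint, 4, 4);        // red assistant point
}
```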
In the DAC method, the centers of the two crosshairs in the preview screen are the two assistant points, which are also the centers of the two assistant boxes. The user should stretch the five fingers and align the two key points, i.e., the valley bottoms between the index and middle fingers and between the ring and little fingers, with the assistant points, as shown in Fig. 6. The square ROI is localized from the two assistant points.
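The DAC ROI localization can be sketched as follows: the image is rotated so that the segment between the two key points becomes horizontal, and a square is cropped below it. The side ratio and offset used here are illustrative assumptions, not the system's exact values.

```cpp
#include <opencv2/imgproc.hpp>
#include <cmath>

// Localize the square ROI from the two DAC key points (the valley bottoms
// between index/middle and ring/little fingers).
cv::Mat cropDacRoi(const cv::Mat& image, cv::Point2f p1, cv::Point2f p2)
{
    const cv::Point2f mid = (p1 + p2) * 0.5f;
    const double dist  = std::hypot(p2.x - p1.x, p2.y - p1.y);
    const double angle = std::atan2(p2.y - p1.y, p2.x - p1.x) * 180.0 / CV_PI;

    // Rotate around the midpoint so the key-point segment becomes horizontal.
    cv::Mat rot = cv::getRotationMatrix2D(mid, angle, 1.0);
    cv::Mat aligned;
    cv::warpAffine(image, aligned, rot, image.size());

    // Square ROI below the segment (assumed offset of 0.25 * dist, side = dist).
    cv::Rect roi(static_cast<int>(mid.x - 0.5 * dist),
                 static_cast<int>(mid.y + 0.25 * dist),
                 static_cast<int>(dist), static_cast<int>(dist));
    roi &= cv::Rect(0, 0, aligned.cols, aligned.rows);
    return aligned(roi).clone();
}
```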
The ROI is resized to a uniform size of 128×128. State-of-the-art palmprint features can then be extracted from the ROI and matched for palmprint recognition, such as Palm Code, Fusion Code, Competitive Code, Ordinal Code, Robust Line Orientation Code, Binary Orientation Co-occurrence Vector, and Extended Binary Orientation Co-occurrence Vector [15]. The palmprint feature can be selected according to the trade-off between accuracy and computational complexity.
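As an example of this feature-extraction stage, a Competitive Code style sketch is given below: the ROI is resized to 128×128, filtered with real Gabor kernels at six orientations, and each pixel is coded with the winning orientation index. The Gabor parameters are illustrative choices, not tuned values from the system.

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

// Competitive Code style feature: each pixel is encoded with the index of
// the orientation giving the strongest (most negative) Gabor response.
cv::Mat competitiveCode(const cv::Mat& roiGray)
{
    cv::Mat roi;
    cv::resize(roiGray, roi, cv::Size(128, 128));
    roi.convertTo(roi, CV_32F, 1.0 / 255.0);

    const int numOrientations = 6;
    std::vector<cv::Mat> responses(numOrientations);
    for (int k = 0; k < numOrientations; ++k) {
        double theta = k * CV_PI / numOrientations;
        cv::Mat kernel = cv::getGaborKernel(cv::Size(35, 35), /*sigma=*/5.0,
                                            theta, /*lambda=*/12.0,
                                            /*gamma=*/0.5, /*psi=*/0.0, CV_32F);
        cv::filter2D(roi, responses[k], CV_32F, kernel);
    }

    // Winner-take-all coding of the dominant line orientation per pixel.
    cv::Mat code(roi.size(), CV_8U);
    for (int y = 0; y < roi.rows; ++y)
        for (int x = 0; x < roi.cols; ++x) {
            int best = 0;
            for (int k = 1; k < numOrientations; ++k)
                if (responses[k].at<float>(y, x) < responses[best].at<float>(y, x))
                    best = k;
            code.at<uchar>(y, x) = static_cast<uchar>(best);
        }
    return code;   // matched downstream with an angular (Hamming-style) distance
}
```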
III. SYSTEM DESIGN
The development environments are shown in Table 1.
The flow charts of registration/re-registration and authentication/attendance check are shown in Fig. 8 and Fig. 9, respectively. To re-register, a user has to input the ID that has already been registered in the system and then pass authentication to obtain the authority to update the information. For re-registration, the new palmprint template replaces the previous one in the database.
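The registration/re-registration branch of the flow chart can be summarized by the following sketch, in which templateExists, verifyPalmprint, and storeTemplate are hypothetical helpers standing in for the system's actual database and matching routines.

```cpp
#include <QString>
#include <QByteArray>

// Hypothetical helpers assumed to exist elsewhere in the system.
bool templateExists(const QString& id);
bool verifyPalmprint(const QString& id);                       // live palmprint vs. stored template
bool storeTemplate(const QString& id, const QByteArray& tmpl); // insert or replace the record

// Registration / re-registration: a user whose ID is already registered must
// pass authentication before the new template overwrites the old one.
bool registerUser(const QString& id, const QByteArray& newTemplate)
{
    if (templateExists(id)) {            // re-registration
        if (!verifyPalmprint(id))
            return false;                // no authority to update
    }
    return storeTemplate(id, newTemplate);
}
```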
IV. SYSTEM DEVELOPMENT
The main interface is shown in Fig. 10.
The controls on main interface are introduced in Table 2.
The three assistant methods are provided as alternatives for ROI localization, as shown in Fig. 11.
As shown in Fig. 12, entering an already registered ID and clicking the registration button on the main interface means that the user wants to re-register, i.e., to update his/her palmprint template; in this case, he/she has to pass authentication first.
Fig. 13 shows the registration interface. The "Capture" button captures the palmprint image. The left/right hand switch button is provided on the registration interface of DLSP. The "Open" and "Close" buttons launch and turn off the camera, respectively. Both the prompt box shown in Fig. 14 and a voice prompt notify the user once his/her registration is completed successfully.
The interface of authentication/attendance check is similar to that of registration/re-registration, so only the authentication/attendance-check interface of NAG is shown in Fig. 15. The difference is that there is no "Capture" button on the authentication/attendance-check interface. Frames of the video stream are automatically captured in real time for palmprint authentication until the authentication is passed or the timer expires. The processing time of each frame is about 0.3 s, so the real-time requirement is met.
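This authentication loop can be sketched as follows, with matchFrame standing in for the actual palmprint matching routine and a 15 s timeout as an illustrative default (at roughly 0.3 s per frame this allows about 50 attempts).

```cpp
#include <QElapsedTimer>
#include <QString>
#include <opencv2/videoio.hpp>

// Hypothetical hook assumed to be provided by the recognition module.
bool matchFrame(const cv::Mat& frame, const QString& id);

// Grab frames from the camera and match them until either a match succeeds
// or the timeout expires.
bool authenticateUntilTimeout(cv::VideoCapture& camera, const QString& id,
                              int timeoutMs = 15000)
{
    QElapsedTimer timer;
    timer.start();
    cv::Mat frame;
    while (timer.elapsed() < timeoutMs) {
        if (!camera.read(frame) || frame.empty())
            continue;
        if (matchFrame(frame, id))
            return true;                 // authentication passed
    }
    return false;                        // timer ended without a match
}
```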
If the user’s palmprint template is not found in the database, the system prompts the user to register his/her palmprint template, as shown in Fig. 16. The authentication results are shown in Fig. 17. The attendance date and time are recorded and stored in the database if the authentication is passed.
Clicking the "Record" button on the main interface opens a pop-up query window. After an ID is input and the "Query" button is clicked, the attendance records, including attendance date and time, are displayed, as shown in Fig. 18.
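A sketch of this record query with QtSql is shown below; the table and column names (attendance, user_id, check_date, check_time) are hypothetical and may differ from the actual schema.

```cpp
#include <QSqlQuery>
#include <QStringList>
#include <QVariant>

// Query the attendance records of one user by ID and return them as
// "date time" strings for display.
QStringList queryAttendance(const QString& id)
{
    QStringList records;
    QSqlQuery query;
    query.prepare("SELECT check_date, check_time FROM attendance WHERE user_id = ?");
    query.addBindValue(id);
    if (query.exec()) {
        while (query.next())
            records << query.value(0).toString() + " " + query.value(1).toString();
    }
    return records;
}
```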
Because the Qt toolkit provides a convenient access interface and efficient data processing for MySQL, a MySQL database is used to store all user data, including ID, palmprint template, and attendance date and time.
The Qt toolkit provides the QtSql module, which enables connection to and usage of various databases. First, the database driver is set through the QSqlDatabase class, whose instance represents a connection to a database. After the driver instance is created, several parameters need to be configured, including the IP address, database name, port, and the username and password for login. To guarantee normal operation of the database, the connection in this system logs in with the highest-level root privilege. The database is connected when the program is launched, and the user is notified by the prompt box shown in Fig. 19.
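A minimal sketch of this connection procedure with QSqlDatabase is shown below; the host, database name, and credentials are placeholders.

```cpp
#include <QSqlDatabase>
#include <QSqlError>
#include <QDebug>

// Connect to the MySQL database at program start-up using the QMYSQL driver.
bool connectDatabase()
{
    QSqlDatabase db = QSqlDatabase::addDatabase("QMYSQL");   // MySQL driver
    db.setHostName("127.0.0.1");                             // placeholder IP address
    db.setDatabaseName("palmprint_attendance");              // placeholder database name
    db.setPort(3306);
    db.setUserName("root");                                  // root login, as described above
    db.setPassword("********");                              // placeholder password
    if (!db.open()) {
        qDebug() << "Database connection failed:" << db.lastError().text();
        return false;
    }
    return true;
}
```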
V. CONCLUSIONS AND FUTURE WORKS
The developed non-contact palmprint recognition system works well on the PC platform. Users can flexibly select among the three palmprint localization methods, namely DLSP, DAC, and NAG, according to their preferences. The three methods have good operating efficiency and meet the performance requirements. The developed system can be used for both authentication and attendance check. The accuracy of NAG depends on the performance of the skin-color model, so it may fail against complex backgrounds. How to segment the hand region more accurately from complex backgrounds will be intensively studied in future work. In addition, the computational complexity can be further reduced to improve the real-time performance.