I. INTRODUCTION
In a pattern recognition system, an image is represented by its feature vectors. The number of features, i.e., the dimension of the feature vectors, strongly affects the recognition rate: as the number of features grows, the system is weakened and the learning phase becomes slower.
The learning and recognition rates of the images depend on this choice, and a training set is needed for the modeling. In most pattern recognition systems, performance improves as the number of features, i.e., the dimension of the feature vectors, increases, but only up to a certain point; beyond that point it decreases rather than increases. Principal component analysis (PCA) provides an optimal reduction from high-dimensional feature vectors to low-dimensional ones. PCA is a multivariate data processing method that reduces data from a high dimension to a low dimension. Feature data are represented on as many coordinate axes as there are dimensions in the feature vectors: n-dimensional data are represented on n axes. Reducing the dimension reduces the number of axes, so the multivariate data are projected onto a smaller set of feature vectors. PCA rearranges the feature vectors and transforms the coordinate axes into a set of uncorrelated variates. In this paper, we analyze the extraction of the feature vectors and the corresponding algorithm in the parallel case.
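As a minimal illustration of this reduction (a sketch only, not the implementation used in our experiments; the array shapes, the random data, and the function name pca_reduce are assumptions for the example), the feature vectors can be projected onto the leading eigenvectors of their covariance matrix:

import numpy as np

def pca_reduce(X, k):
    """Project n-dimensional feature vectors onto their k leading principal axes."""
    mean = X.mean(axis=0)                 # average feature vector
    Xc = X - mean                         # centered data
    C = np.cov(Xc, rowvar=False)          # n-by-n covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)  # ascending eigenvalues, orthonormal eigenvectors
    top = eigvecs[:, ::-1][:, :k]         # k axes with the largest eigenvalues
    return Xc @ top                       # reduced (num_samples, k) representation

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))           # 1000 feature vectors of dimension 16
Y = pca_reduce(X, 3)                      # reduced to dimension 3
print(Y.shape)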
II. DEVELOPMENT OF A PARALLEL ALGORITHM FOR IMAGE RECOGNITION
Why do we need this method to rearrange the feature vectors? A simple, concrete example makes the case clear. We have a simple example of PCA below.
A set of random vectors is generated and distributed according to the following conditions:
- Generate the set of random vectors.
- Recenter the vectors from (1) so that the average of all vectors is zero.
- Keep only the vectors whose distance from the origin is less than 0.5 and discard the others, i.e., those with radius greater than 0.5. The resulting data set of 1000 such 2-dimensional random vectors is shown as a scatter plot in Figure 1.
We also consider transforming the distribution of the given set of vectors from x-y coordinates to polar coordinates (Fig. 2 and Fig. 3).
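A short sketch of this construction follows (a uniform distribution on the unit square is assumed here; the distribution actually used for Figure 1 may differ):

import numpy as np

rng = np.random.default_rng(1)

# (1) Generate 2-dimensional random vectors.
V = rng.uniform(0.0, 1.0, size=(5000, 2))

# (2) Recenter so that the average of all vectors is zero.
V = V - V.mean(axis=0)

# (3) Keep only vectors whose distance from the origin is less than 0.5.
r = np.hypot(V[:, 0], V[:, 1])
V = V[r < 0.5][:1000]                      # 1000 such vectors, as in Figure 1

# Transform the x-y coordinates to polar coordinates (cf. Fig. 2 and Fig. 3).
radius = np.hypot(V[:, 0], V[:, 1])
angle = np.arctan2(V[:, 1], V[:, 0])
print(V.shape, radius.max())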
In this work, we compute the eigenvalues of a covariance matrix to obtain the principal components of the images, and we use the corresponding eigenvectors to extract the principal component information.
There are several methods for solving the eigenproblem, but they compute all eigenvalues through sequential processes and break the structure of the given matrix in doing so. In our work, we want to preserve the structure of the given matrix and develop a parallel algorithm. We compare the complexity of computing all eigenvalues of the given matrix with an existing algorithm against that of our algorithm.
Each pixel of a face image corresponds to a coordinate in the face space. The eigenfaces are the basis vectors of this face space; they represent the characteristics shared by all candidate face images. Eigenfaces are produced from the difference vectors between the average face image and each candidate face image: they are the eigenvectors of the resulting covariance matrix. Since the eigenvalues of the covariance matrix measure the spread of the distribution around the average face image, the eigenface associated with the largest eigenvalue is the one closest to the original face. As the eigenvalue decreases, the corresponding eigenface carries less of the face image's characteristics. Therefore, in this section we consider the largest and the smallest eigenvalues of our face image and study an algorithm to compute them.
We apply the algorithm to compute the largest and the smallest eigenvalues independently, compare the resulting largest and smallest eigenfaces, and analyze the algorithm in the parallel case. First of all, we develop a method to compute the extreme eigenvalues; it is a Newton method modified with a parameter t. The algorithm is developed by the following procedure.
We denote by Mn the space of n-by-n symmetric matrices.
We denote by σ(A) the set of eigenvalues of A ∈ Mn.
Let A ∈ Mn be a symmetric matrix. Then there is an orthogonal matrix U = [U1, …, Un] ∈ Mn such that UT A U = diag(λ1, …, λn).
The following is the usual Newton method for obtaining an eigenpair of a symmetric matrix A ∈ Mn [3]:
Consider G: Rn × R → Rn × R built from the eigen-equation Ax − λx together with a normalization condition on x. Then L, the set of normalized eigenpairs of A, is the solution set of G(x, λ) = 0.
Assuming the Jacobian matrix of G at the current iterate is invertible, the usual Newton iteration is
It is well known that Newton's method has a local quadratic convergence rate [3]; that is, for each eigenpair there is a small neighborhood Nεk such that
for all i = 0, 1, …, where C < ∞ is a positive constant.
We call Nεk the quadratic convergence neighborhood of the eigenpair. Although the specific determination of each Nεk is an extremely difficult task, if the method converges to a point in L, we know that the rate of convergence will eventually be quadratic. It can easily be shown that Newton's method is not globally convergent.
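For concreteness, the following is a minimal sketch of the local behavior of such a Newton iteration on a small test matrix. It assumes the common formulation G(x, λ) = (Ax − λx, (1 − xTx)/2), which may differ in detail from the formulation above, and it illustrates only the unmodified, locally convergent method:

import numpy as np

def newton_eigen_step(A, x, lam):
    """One Newton step for G(x, lam) = (A x - lam x, (1 - x.T x) / 2) = 0."""
    n = A.shape[0]
    # Jacobian of G at (x, lam):
    #   [ A - lam I   -x ]
    #   [   -x^T       0 ]
    J = np.zeros((n + 1, n + 1))
    J[:n, :n] = A - lam * np.eye(n)
    J[:n, n] = -x
    J[n, :n] = -x
    g = np.concatenate([A @ x - lam * x, [(1.0 - x @ x) / 2.0]])
    delta = np.linalg.solve(J, -g)
    return x + delta[:n], lam + delta[n]

A = np.array([[2.0, 1.0], [1.0, 3.0]])
x, lam = np.array([1.0, 0.0]), A[0, 0]     # start from a coordinate vector and a diagonal entry
for _ in range(10):
    x, lam = newton_eigen_step(A, x, lam)
print(lam, x / np.linalg.norm(x))          # an eigenpair of A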
We modify the Newton method in order to obtain global convergence [2]. Several considerations guide the modification.
First, under the modification we want each iterate to move closer to an eigenpair at every step, i.e., the distance dL should decrease, where dL is a suitable distance measure from a point to L; this ensures that the iterates remain in D. Second, we want to modify the method as little as possible in order to preserve the original properties of Newton's method, for example local quadratic convergence. Third, the modified method should be simple to implement and should require almost the same procedures as the original Newton iteration.
Consider the usual Newton iteration (2):
Choose a parameter t > 0 so that the method takes the form
Then
and
Then the parameterized Newton method takes the form:
and
Let L be the set of all eigenpairs of A. Define a distance measure dL from a point to L by
Clearly, dL(x, λ) = 0 implies (x, λ) ∈ L, and dL is continuous (since D is compact, dL is in fact uniformly continuous).
We have the following.
Lemma 1 [4]: Let A ∈ Mn be symmetric, and consider the parameterized Newton method above with parameter t, where
Therefore, we have the following modification of Newton's method:
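A sketch of one damped-Newton reading of this modification is given below; it is an illustration under assumptions rather than the exact MNM update: the residual of G is used here as a stand-in for the distance measure dL, and the halving rule for t is assumed for the example.

import numpy as np

def damped_newton_step(A, x, lam):
    """One parameterized Newton step: shrink t until the residual of G decreases."""
    n = A.shape[0]
    J = np.zeros((n + 1, n + 1))
    J[:n, :n] = A - lam * np.eye(n)
    J[:n, n] = -x
    J[n, :n] = -x
    g = np.concatenate([A @ x - lam * x, [(1.0 - x @ x) / 2.0]])
    delta = np.linalg.solve(J, -g)
    t = 1.0
    while t > 1e-8:
        x_new, lam_new = x + t * delta[:n], lam + t * delta[n]
        g_new = np.concatenate([A @ x_new - lam_new * x_new,
                                [(1.0 - x_new @ x_new) / 2.0]])
        if np.linalg.norm(g_new) < np.linalg.norm(g):
            return x_new, lam_new          # accept the damped step
        t *= 0.5                           # otherwise reduce the parameter t and retry
    return x + delta[:n], lam + delta[n]   # fall back to the full step

A = np.array([[2.0, 1.0], [1.0, 3.0]])
x, lam = np.array([1.0, 0.0]), A[0, 0]
for _ in range(15):
    x, lam = damped_newton_step(A, x, lam)
print(lam, x / np.linalg.norm(x))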
In this subsection (2.3), we discuss an application of eigenfaces to 25 different expressions of a face.
The construction for the face images consists of five steps.
Step 1: Input of the original images
Each face image has size N × N, and there are M candidate face images to be recognized, each reshaped into an N2 × 1 column vector. Denote the set of candidate face vectors by S (Fig. 4).
Step 2: Normalization of the image
We normalize each image by fixing its mean and variance in order to reduce errors caused by lighting and background (Figure 5).
Step 3: Computation of the average face vector
We compute the average face vector from the face set S of recognition candidates (Figure 6).
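A brief sketch of Steps 1 to 3 on synthetic data (the image size N, the number of candidates M, and the random stand-in images are assumptions for illustration):

import numpy as np

N, M = 32, 25                                   # assumed image size and number of candidates
rng = np.random.default_rng(2)
faces = rng.uniform(0.0, 1.0, size=(M, N, N))   # stand-ins for the candidate face images

# Step 1: each N x N image becomes an N^2 x 1 column of S.
S = faces.reshape(M, N * N).T                   # S has shape (N^2, M)

# Step 2: normalize each image to zero mean and unit variance.
S = (S - S.mean(axis=0)) / S.std(axis=0)

# Step 3: average face vector and difference vectors.
psi = S.mean(axis=1, keepdims=True)             # average face, shape (N^2, 1)
Phi = S - psi                                   # difference vectors, shape (N^2, M)
print(psi.shape, Phi.shape)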
In the computation of the eigenpairs we develop a parallel algorithm in which a modified Newton method is applied to each eigenpair. Even when the given matrix is ill-conditioned, we can obtain all eigenpairs. The following is an ill-conditioned matrix case:
Example 1: Let H be an n-by-n Hilbert matrix. The Hilbert matrix is a well-known example of an ill-conditioned positive definite matrix. Because the smallest eigenvalue λ12 of H ∈ M12 is so close to zero, many conventional algorithms return λ12 = 0. Our method gives the following experimental results:
Set the convergence criterion to ε = 2 × 10^−16.
Suppose {[ei, hi,i]T : i = 1, …, 12} is the initial set of points, where ei is the ith column of the identity matrix and hi,i is the ith diagonal entry of H.
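A sketch of this setting is shown below; for brevity the ill-conditioning is checked with a dense symmetric eigensolver rather than with the parallel MNM itself:

import numpy as np

n = 12
i, j = np.indices((n, n))
H = 1.0 / (i + j + 1.0)                 # Hilbert matrix: H[i, j] = 1 / (i + j + 1)

eigvals = np.linalg.eigvalsh(H)         # eigenvalues in ascending order
print(eigvals[0])                       # computed smallest eigenvalue: tiny, near machine epsilon
print(eigvals[-1])                      # largest eigenvalue, of order one; the spread shows the ill-conditioning

# Initial points: columns of the identity paired with the diagonal entries of H.
initial_pairs = [(np.eye(n)[:, k], H[k, k]) for k in range(n)]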
Even when the target matrix has clustered eigenvalues, we can still recover all eigenpairs (Figure 7).
Eigenpairs missed by the MNM are recovered by the Gram-Schmidt orthogonalization process:
Suppose both [x1, α1]T and [x2, α2]T converge to the same eigenpair [U1, λ1]T. Then:
- Choose a vector x = x1 (or x2).
- Apply the Gram-Schmidt orthogonalization process: z = x − <U1, x> U1, xx = z/‖z‖2.
- Apply the MNM with [xx, α1]T (or [xx, α2]T); it will converge to a different eigenpair [U2, λ2]T.
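A short sketch of the deflation step (the test matrix, the converged vector U1, and the starting vector x are assumptions; after deflation the new pair would be handed back to the MNM):

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
eigvals, eigvecs = np.linalg.eigh(A)
U1 = eigvecs[:, -1]                    # the eigenvector both iterates converged to

x = np.array([1.0, 0.0, 0.0])          # choose x = x1 (or x2)
z = x - (U1 @ x) * U1                  # Gram-Schmidt: remove the U1 component
xx = z / np.linalg.norm(z)             # normalized deflated starting vector

print(abs(U1 @ xx))                    # ~0: xx is orthogonal to U1
# Restarting the MNM from [xx, alpha1]^T will now converge to a different eigenpair [U2, lambda2]^T.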
The MNM is shown in (10) and (11); the algorithm is as follows:
Modified Newton’s Algorithm
The initial eigenpairs are [ek, ak,k]T, k = 1, …, n, where ek is the kth column of the identity matrix and ak,k are the diagonal entries of the given matrix.
For k = 1, …, n do (in parallel)
For j = 1, … do until convergence
End
End
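A minimal sketch of this parallel outer loop is given below. A plain Newton update stands in for the full MNM step, the test matrix is an assumption chosen to be strongly diagonally dominant, and a thread pool is used only to show that the n iterations are independent:

import numpy as np
from concurrent.futures import ThreadPoolExecutor

def newton_step(A, x, lam):
    """One Newton update for an eigenpair; stands in for the full MNM update."""
    n = A.shape[0]
    J = np.block([[A - lam * np.eye(n), -x[:, None]],
                  [-x[None, :], np.zeros((1, 1))]])
    g = np.concatenate([A @ x - lam * x, [(1.0 - x @ x) / 2.0]])
    d = np.linalg.solve(J, -g)
    return x + d[:n], lam + d[n]

def solve_pair(args):
    """Iterate one eigenpair to convergence, independently of the others."""
    A, k, tol = args
    x, lam = np.eye(A.shape[0])[:, k], A[k, k]     # initial pair [e_k, a_kk]^T
    for _ in range(100):
        x_new, lam_new = newton_step(A, x, lam)
        converged = abs(lam_new - lam) < tol
        x, lam = x_new, lam_new
        if converged:
            break
    return lam, x / np.linalg.norm(x)

A = np.array([[4.0, 0.1, 0.05],
              [0.1, 3.0, 0.1],
              [0.05, 0.1, 2.0]])
with ThreadPoolExecutor() as pool:
    pairs = list(pool.map(solve_pair, [(A, k, 1e-12) for k in range(3)]))
print(sorted(lam for lam, _ in pairs))
print(np.linalg.eigvalsh(A))                       # reference eigenvalues for comparison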
We apply the MNM (modified Newton method) to generate eigen-images at each step of the image recognition problem.
In this case, we find all eigenpairs of the covariance matrix generated by the feature vectors.
Step 4: Solving the eigenproblem of the covariance matrix C
We compute the eigenvalues λi and the corresponding eigenvectors μi of the covariance matrix C using our algorithm. Each eigenvector describes a deviation from the average face of the given images and produces an eigenface (Fig. 8).
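A compact sketch of this step on synthetic data (np.linalg.eigh stands in here for the parallel MNM, and the image size and number of candidates are assumptions):

import numpy as np

N, M = 16, 10                               # assumed image size and number of candidates
rng = np.random.default_rng(3)
S = rng.uniform(size=(N * N, M))            # candidate face vectors as columns
psi = S.mean(axis=1, keepdims=True)         # average face
Phi = S - psi                               # difference vectors

C = (Phi @ Phi.T) / M                       # N^2 x N^2 covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)        # stand-in for the parallel modified Newton method
order = np.argsort(eigvals)[::-1]           # largest eigenvalues first
eigenfaces = eigvecs[:, order]              # each column, reshaped to N x N, is an eigenface
print(eigvals[order][:5])
print(eigenfaces[:, 0].reshape(N, N).shape)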
III. CONCLUSION
After a new face is input, we obtain each component ωk by projecting it onto the eigenface associated with the eigenvalue λk. Each eigenface component is computed as follows:
Here M′ is the number of eigenfaces retained after excluding the eigenfaces corresponding to the smallest eigenvalues. From these weights we obtain the component vector Ω of eigenface coefficients that represents the input face image. Given the set Ω = {ω1, ω2, …, ωM′}, the errors are computed as εk = ‖Ω − Ωk‖2.
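A sketch of the projection and of the error computation (the synthetic faces, the value of M′, and the stored candidate components Ωk are assumptions for illustration):

import numpy as np

N, M, M_prime = 16, 10, 6                              # M_prime = M' retained eigenfaces
rng = np.random.default_rng(4)
S = rng.uniform(size=(N * N, M))                       # candidate face vectors as columns
psi = S.mean(axis=1, keepdims=True)
Phi = S - psi
eigvals, eigvecs = np.linalg.eigh((Phi @ Phi.T) / M)
U = eigvecs[:, np.argsort(eigvals)[::-1][:M_prime]]    # eigenfaces with the largest eigenvalues

new_face = rng.uniform(size=(N * N, 1))                # a new input face vector
omega = U.T @ (new_face - psi)                         # component vector Omega = (w_1, ..., w_M')

Omegas = U.T @ Phi                                     # stored component vectors Omega_k of the candidates
errors = np.linalg.norm(Omegas - omega, axis=0)        # eps_k = ||Omega - Omega_k||_2
print(errors.argmin(), errors.min())                   # closest candidate and its error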
Step 5: Reconstruction of the given image
We compare the original image with the reconstructed image obtained by applying our algorithm (Figure 9).
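A short sketch of the reconstruction and comparison (continuing the assumed synthetic setup, with the reconstruction built from the M′ retained eigenfaces):

import numpy as np

N, M, M_prime = 16, 10, 6
rng = np.random.default_rng(5)
S = rng.uniform(size=(N * N, M))
psi = S.mean(axis=1, keepdims=True)
Phi = S - psi
eigvals, eigvecs = np.linalg.eigh((Phi @ Phi.T) / M)
U = eigvecs[:, np.argsort(eigvals)[::-1][:M_prime]]    # M' retained eigenfaces

original = S[:, [0]]                                   # one of the given images
omega = U.T @ (original - psi)                         # its eigenface components
reconstructed = psi + U @ omega                        # reconstruction from the retained eigenfaces

print(np.linalg.norm(original - reconstructed))        # reconstruction error (cf. Figure 9)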