WO1999049414A1 - Image recognition method - Google Patents

Image recognition method

Info

Publication number
WO1999049414A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
image
components
feature
attribute
Prior art date
Application number
PCT/US1998/005443
Other languages
French (fr)
Inventor
Takami Satonaka
Takaaki Baba
Koji Asari
Original Assignee
Matsushita Electronics Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electronics Corporation filed Critical Matsushita Electronics Corporation
Priority to PCT/US1998/005443 priority Critical patent/WO1999049414A1/en
Priority to JP54819999A priority patent/JP2002511175A/en
Priority to US09/147,592 priority patent/US6236749B1/en
Publication of WO1999049414A1 publication Critical patent/WO1999049414A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507Summing image-intensity values; Histogram projection analysis

Definitions

  • Initial values of the reference and variance vectors for an arbitrary class p are derived from an average and a variance assuming a normal distribution of each component of the DCT coefficients.
  • this patent provides means to select the optimal number of coefficients in the DCT representation of an object feature.
  • the feature pattern h2dct(u, v, z) is obtained by the DCT transform of the block histogram density hp(x, y, z) described in step M3 of Example 2.
  • Figure 5 shows the relation of the number of DCT coefficients to the recognition error rate and the mean absolute error for the coefficients h(a)2dct(u, v, z) and h(b)2dct(u, v, z), which are specified by lines 22 and 23.
  • the invented neural network is trained by using 400 images of the ORL (Olivetti Research Laboratory) database, consisting of 40 persons with 10 distinct poses each. 5 poses are used for training and the remaining 5 for testing. An input image of 114x88 pixels is divided into 8x8x8 rectangular blocks.
  • Figure 6 illustrates the difference in learning performance between the nonlinear table with histogram equalization and that without it.
  • the error rate is significantly increased with luminance variation.
  • the error rate exceeds 0.1 in the ranges γ ≤ 0.925 and γ ≥ 1.2.
  • the error rate obtained by the quantization method with histogram equalization is maintained at a relatively low 0.04–0.09 over the same range.
  • FIG. 7 shows training examples obtained by image synthesis to construct a neural network for face classification according to this invention.
  • the images are obtained from image synthesis using two different facial images of persons A and B with several mixing ratios. These two images are specified by q and m(q).
  • the image specified by m(q) is selected from the facial image database to have the minimal distance of the reference pattern, defined by Equation 4, with regard to that of an arbitrary image q:
  • $m(q)=\arg\min_{m} d\left(\mu_{q(u,v,z)},\,\mu_{m(u,v,z)}\right)$
  • the mixing ratio is decreased with a decay constant as the learning proceeds.
  • the recognition performance is obtained from face recognition using the ORL database, consisting of 400 facial images with 10 different expressions for 40 persons, with distinct variations such as open/closed eyes, smiling/non-smiling faces, glasses/no glasses, and rotation up to 20 degrees.
  • Figure 8 shows the error rate dependence on the number of training samples, ranging from 1 to 5 images for each person. The remaining images (from 9 down to 5 per person) are used for testing the performance of an image recognition system.
  • the result specified by 27 is obtained from the two-dimensional DCT with image synthesis using the images q and m(q) described above.
  • the decay factor is 200.
  • FIG. 13 shows system performance obtained by various methods.
  • the results for SOM (Self-Organizing Map) + CN (Convolution Network), the pseudo two-dimensional HMM (Hidden Markov Model) and the eigenface model are excerpted from Face Recognition: A Hybrid Neural Network Approach (S. Lawrence et al., Technical Report CS-TR-3608, University of Maryland, 1996).
  • an error rate of 2.29% using our method is achieved, which is better than the 3.8%, 5.3%, 5% and 10.5% obtained using SOM+CN, the KLT convolution network, the pseudo-2D hidden Markov model and the eigenface model, respectively.
  • Equation 4 presents a decision function using a reference vector and a variance vector, which are derived by assuming a proper distribution of input feature patterns, to estimate the relative distance between an input feature pattern and a reference pattern representing object features.
  • the assumption on the feature pattern distribution is crucial for the performance of image recognition when the number of samples is very small.
  • a mixture model of local prior distributions is invented, which assigns a variable number of mixture classes to each class and then determines the mixture number and the local metric parameters so as to minimize the cross entropy between the real distribution and the modeled one.
  • Figure 9 is a schematic structure of mixture classes defined by a likelihood function of the number of mixture classes.
  • the m-th mixture class k(m) is assigned according to each reference class k based on the likelihood measure lk(u,v,z)(m) [Equation 25].
  • the minimal size of the mixture model is uniquely derived from the minimization of the likelihood function.
  • Figure 10 shows the relation of the average variance and the error rate to the number of mixture classes.
  • the performance using this patent, specified by 29 and 31, is compared with results derived from a fixed size of mixture classes for all the components, specified by 28 and 30.
  • the minimal error rates of 7.65% and 8.24%, and the variances of 618 and 960, are obtained at sizes of 3 and 4.
  • the error rate and average variance decrease and saturate to 6.28% and 516 with an increase in the number of mixture classes.
  • the essence of this patent is to allow a variable size of the mixture classes.
  • the error rate is significantly improved from 7.78% to 6.28% by allowing an optimal size of mixture classes defined by the entropy distance.
  • FIG. 11 illustrates an example of generating luminance images from a rotating object.
  • luminance signals of all the pixels of the two-dimensional image, which have been obtained by scanning a rotating object in a three-dimensional space, are input into a nonlinear table and are transformed into quantified ones.
  • the quantified two-dimensional image is divided into several rectangular elements, and the number of pixels belonging to each scale representing a corresponding intensity of the quantified luminance signal is obtained for each rectangular element.
  • feature patterns with a three-dimensional matrix structure obtained from six rotated images per person are presented for training the neural network.
  • this method, using a three-dimensional representation of a rotating object, assigns feature patterns, obtained by sampling the different luminance signals at the same location of a rectangular element of a rotating object, to different scales.
  • this enables the neural network to store feature patterns without interference among rotated images.
  • Figure 12 shows the recognition performance for rotating objects with the angle varied over the range from 0 to 60 degrees.
  • input feature patterns obtained from rotating facial objects of 40 persons at 0, 60, 120, 180 and 240 degrees are used for constructing a neural network.
  • without such a representation, each image represented by using luminance signals per rectangular element would interfere with the other images.
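As an illustrative reading of this rotation-invariant training scheme (the helper names, block counts and the LUT argument below are assumptions, not the patent's own implementation):

```python
import numpy as np

def rotation_training_set(rotated_views, lut, nx=8, ny=8, nz=8):
    """One Nx x Ny x Nz block-histogram pattern per rotated view (six
    views per person in the text). A rotated view changes the luminance
    seen at a given block, so its pixels fall into different quantified
    scales z and the views are stored with little interference."""
    patterns = []
    for view in rotated_views:
        q = lut[view]                            # quantified image
        h = np.zeros((nx, ny, nz))
        bh, bw = q.shape[0] // nx, q.shape[1] // ny
        for x in range(nx):
            for y in range(ny):
                blk = q[x * bh:(x + 1) * bh, y * bw:(y + 1) * bw]
                h[x, y] = np.bincount(blk.ravel(), minlength=nz)[:nz]
        patterns.append(h)
    return patterns
```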

Abstract

The method provides an object recognition procedure and a neural network by using the discrete cosine transform (DCT) (4) and histogram-adaptive quantization (5). The method employs the DCT, with the added advantage of having a computationally efficient and data-independent matrix, as an alternative to the Karhunen-Loeve transform or principal component analysis, which require data-dependent eigenvectors as a priori information. Since the set of learning samples (1) may be small, we employ a mixture model of prior distributions for accurate estimation of the local distribution of feature patterns obtained from several two-dimensional images. A model selection method based on the mixture classes is presented to optimize the mixture number and the local metric parameters. The method also provides image synthesis to generate a set of image databases to be used for training a neural network.

Description

DESCRIPTION
Image recognition method
FIELD OF THE INVENTION
The present invention relates to an image recognition method for recognizing an object in a three-dimensional space based on the feature patterns of a two-dimensional image.
BACKGROUND OF THE INVENTION
In the methods for recognizing an object in a three-dimensional space known before the present invention, feature patterns representing the attributes of the object by using chromatic or luminance signals are extracted from several two-dimensional images. The object having such attributes is recognized by using the feature patterns thus extracted.
Specifically, an image region including the object is cut out from the two-dimensional image, and the image region is divided into several rectangular elements. Then, the average values of the pixels which are included in the rectangular elements are used as the feature patterns identifying the object. Thus, since the dimension of the feature patterns becomes very high, it has been required to reduce the dimension of the feature patterns.
In order to efficiently represent such high-dimensional feature patterns in a low-dimensional feature pattern space, various orthogonal transformation techniques such as principal component analysis and the Karhunen-Loeve transform have been employed. A previous method (reported in Face Recognition: A Hybrid Neural Network Approach, Technical Report CS-TR-3608, University of Maryland, 1996) proposed a neural network in which feature patterns are extracted by principal component analysis. Such a method provides a basis suitable for representing higher-dimensional feature patterns in a lower-dimensional space.
In order to identify the type of an arbitrary object by using the feature patterns represented in such a lower-dimensional space, a neural network has been used. In the neural network, neuron units recognize the feature patterns by using decision functions based on weighted summation, convolution or distance calculations.
A multi-layered neural network consisting of neural recognition units performing weighted summation calculations includes terminals provided for an input or an output layer, to which the feature patterns are input or output, and an intermediate layer. The dimension of the terminals in the input layer is the same as that of a feature pattern input thereto, and the number of neuron units in the output layer is the same as the number of objects having different attributes. When a feature pattern is input to the input terminals, the neuron units in the output layer output the likelihood of the feature pattern belonging to each class. The type of the object to which the feature pattern belongs is determined by the neuron unit in the output layer which has the maximum likelihood output. The connection coefficients between these layers are modified by a learning technique such that the neuron unit that should classify a feature pattern outputs the maximum likelihood output. The learning coefficients of the network are modified supervisedly by the error back-propagation method, in which a set of input patterns and expected output patterns are presented to the input and output terminals of the neural network as teaching signals. For a neural unit having a sigmoid function, a teaching signal of 0 or 1 is presented to each neuron output.
Kohonen proposed a single-layered network based on distance calculations, which stores reference feature patterns representing the attribute of an object. The network determines the Euclidean distance between an arbitrary feature pattern and a reference feature pattern (T. Kohonen, The self-organizing map, Proc. of the IEEE 78, 1464-1480, 1990). To improve the performance further, the Mahalanobis distance, which considers the statistical distribution of input feature patterns, is introduced as a metric measure to determine the likelihood of a feature pattern belonging to an object class.
In an image recognition method utilizing an image communication technique, an image which has been compressed by using the MPEG or JPEG standard is transmitted, received and then decompressed. After the image has been decompressed, a feature extraction is performed, thereby determining the features of the image.
The problems of the previous methods for recognizing an object in a three-dimensional space by using the image of the object will be described. First, in the image of an object in a three-dimensional space, the object has a high degree of freedom. Specifically, the recognition precision is dependent on the rotation and the position of the object and on a variation in the illumination surrounding the object.
In addition, if the three-dimensional object is to be represented by using several two-dimensional images, at least a memory having a large capacity is required. Moreover, in the principal component analysis and the orthogonal transformation which are used for reducing the amount of information, the base representing the features of an object to be recognized is dependent on the distribution of the data. Furthermore, it is required to perform an enormous amount of calculation to obtain the data-dependent base. Thus, in a conventional multi-layered neural network, the network structure is too complicated and the time required for learning is too long. For example, in the method by Lawrence et al., it takes a learning time of about 4 hours to make the network learn half of the data stored in a database in which 10 distinct facial images are stored for each of 40 persons. Thus, the problem is to make a complicated network efficiently classify an enormous amount of data by using a smaller number of learning image samples.
Furthermore, the statistical distribution of the feature patterns of an object deviates significantly from a normal distribution when the available training samples in an image database are relatively few. It is particularly difficult to estimate the variance of the statistical distribution when the number of training samples is small. The error rate of an object recognition that considers the variances is sometimes higher than that of one that neglects them, due to the failure to estimate the variances precisely.
SUMMARY OF THE INVENTION
In order to solve the problems described above, the present invention provides the following image recognition methods.
The first means is quantization of luminance or chromatic signals. The luminance or chromatic signals of all the pixels of the two-dimensional image, which have been obtained by scanning an object in a three-dimensional space, are input into a nonlinear table and are transformed into quantified ones. The nonlinear table is configured by using the integral histogram distribution of the luminance or chromatic signals, which has been obtained from all the pixels of the two-dimensional image of an object having an attribute, such that, in the histogram distribution of scales which has been obtained by quantizing the input image signals of all the pixels of the two-dimensional image, the histogram of the quantified pixels belonging to each of the scales is equalized.
The second means is a feature pattern extraction using a rectangular element as a basic element. The two-dimensional image obtained by the first means is divided into several rectangular elements, and the number of pixels belonging to each scale representing a corresponding intensity of the quantified luminance or chromatic signal is obtained for each rectangular element. Herein, the feature pattern obtained from the two-dimensional image is denoted by a pixel number h(x, y, z) at a horizontal coordinate (x), a vertical coordinate (y) and a coordinate (z) representing an intensity of the quantified image signal within each said rectangular element.
The third means is a reduction in dimension of the feature patterns. The feature patterns are obtained by transforming the pixel number components h(x, y, z) of the three-dimensional matrix, which have been obtained from a two-dimensional image of an arbitrary object by using the first and the second means. The pixel number components corresponding to a two-dimensional matrix consisting of a horizontal coordinate and a vertical coordinate are taken out from the pixel number components of the three-dimensional matrix with respect to each scale of the intensity of the quantified image signals. The pixel number components of the two-dimensional matrix are transformed by using a two-dimensional discrete cosine transform or a two-dimensional discrete sine transform, thereby obtaining frequency components in a two-dimensional space. The feature pattern represented as a three-dimensional matrix (u, v, z) of frequency components, which have been extracted as low-frequency ones from the frequency components in the two-dimensional space such that the number of the components maximizes recognition precision, is used as a feature pattern of the two-dimensional image for the object in the three-dimensional space.
The fourth means is also a reduction in dimension of the feature pattern. The feature patterns are obtained by transforming the pixel number components h(x, y, z) of the three-dimensional matrix, which have been obtained from a two-dimensional image by using the first and the second means.
The pixel number components of the three-dimensional matrix are transformed by a three-dimensional discrete cosine transform or a three-dimensional discrete sine transform with respect to a horizontal coordinate, a vertical coordinate and a coordinate representing an intensity of the quantified image signal, thereby obtaining frequency components in a three-dimensional space.
The feature pattern represented as a three-dimensional matrix (u, v, w) of frequency components, which have been extracted as low-frequency ones from the frequency components in the three-dimensional space such that the number of the components maximizes recognition precision, is used as a feature input pattern of the two-dimensional image for the object.
The fifth means is feature pattern recognition based on relative distance. The feature pattern h(p1, p2, p3) is represented as the three-dimensional matrix of pixel number components or frequency components of the two-dimensional image, which have been obtained by the first and the second means, by the first, the second and the third means, or by the first, the second and the fourth means, from an image database obtained by scanning an object in a three-dimensional space belonging to an arbitrary type of attribute. The relative distance between the reference vector μq(p1, p2, p3) and the feature pattern vector h(p1, p2, p3) is calculated by using the relative distance:
$$d\left(h_{(p1,p2,p3)},\,\mu_{q(p1,p2,p3)}\right)=\sum_{p1=1}^{N_{1}}\sum_{p2=1}^{N_{2}}\sum_{p3=1}^{N_{3}}\left[\left(\frac{h_{(p1,p2,p3)}-\mu_{q(p1,p2,p3)}}{\sigma_{q(p1,p2,p3)}}\right)^{2}+\log\left(2\pi\,\sigma_{q(p1,p2,p3)}^{2}\right)\right]$$
(Where the coordinate (p1, p2, p3) is identical to (x, y, z) for three-dimensional pixel number components, to (u, v, z) for frequency components obtained by the two-dimensional DCT, or to (u, v, w) for frequency components obtained by the three-dimensional DCT. N1, N2 and N3 denote the numbers of components along the three coordinates.) The distance is equal to -2 times the logarithm of the probability function P(μq(p1,p2,p3), σq(p1,p2,p3)) represented by a normal distribution:
$$d\left(h_{(p1,p2,p3)},\,\mu_{q(p1,p2,p3)}\right)=-2\log P\left(\mu_{q(p1,p2,p3)},\,\sigma_{q(p1,p2,p3)}\right)$$
The reference vector μq(p1, p2, p3) and the variance vector σq(p1, p2, p3) are obtained as an average and a variance in a distribution of pixel number components or frequency components for each element of each three-dimensional matrix from the database of two-dimensional images having the same attribute q of the object.
The feature pattern of the object in the three-dimensional space belonging to the arbitrary type of attribute is estimated by the relative distance to the reference vector obtained from a set of two-dimensional images of objects having Mq types of attributes. The feature pattern vector h(p1, p2, p3) is determined to belong to the type m* represented by the reference vector which has the minimum relative distance:
$$m^{*}=\arg\min_{m}\,d\left(h_{(p1,p2,p3)},\,\mu_{m(p1,p2,p3)}\right)\qquad(m=1,\cdots,M_{q})$$
The sixth means is a feature pattern recognition based on distance, utilizing neural network learning. The neural network consists of a set of neural recognition units. The number of the neural recognition units is the same as that of the attributes of the objects to be classified. Each neural recognition unit has the same number of input terminals as the number of components of a three-dimensional matrix, and a single output terminal. When a feature pattern input vector obtained from the two-dimensional image of an arbitrary object is input, the neural recognition unit storing the reference vector representing the attribute of the object outputs the minimum output value.
The neural recognition unit stores the reference vector and the variance vector of the statistical distribution of the feature pattern of each component of the three-dimensional matrix as learning coefficients and estimates relative distance between the feature pattern of the three-dimensional matrix, which has been obtained from the two-dimensional image of an arbitrary object, and the reference vector.
The reference and variance vectors, corresponding to the average and variance of the distribution of the pixel number components or the frequency components with respect to each element of the three-dimensional matrix, are obtained by means of neural network learning.
The seventh means is an image recognition using a mixture model of prior distributions of feature patterns of pixel number components or frequency components of the three-dimensional matrix, which have been obtained by the first and the second means, by the first, the second and the third means, or by the first, the second and the fourth means, from an image database obtained by scanning the object belonging to the same type of attribute. To derive the maximum likelihood variance for the relative distance calculation, a mixture class is constructed by using a set of classes. For each class of feature patterns having an attribute of an object, feature patterns having the other attribute are included in the mixture class. The optimal number of classes included in the mixture classes is selected.
The component of the maximum likelihood variance vector is obtained from a mixture distribution of pixel number components or frequency components with respect to each element of the three-dimensional matrix, by minimizing a measure of the amount of mutual information between a component of the variance vector and a component of another variance vector having a different attribute from that of the former variance vector.
The measure value is defined by using the logarithm of the variance of the feature pattern input vectors, which have been obtained from the two-dimensional images of an arbitrary object, and the logarithm of the variance of the feature pattern distribution which has been obtained from the two-dimensional images of a different object. A rough sketch of this selection procedure is given below.
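The sketch assumes a pool-and-score loop and a log-variance stand-in for the patent's measure (its Equation 25 is not legible in this text); all names are illustrative.

```python
import numpy as np

def select_mixture_size(own_samples, rival_samples, max_classes=8):
    """Sketch of the seventh means for one component of the 3-D matrix:
    grow a mixture class by pooling the samples of nearby classes and
    keep the size whose pooled variance best separates, in log scale,
    from that of a class with a different attribute. A negative absolute
    log-variance distance is used as the measure to be minimized."""
    log_var_rival = np.log(np.var(np.concatenate(rival_samples)) + 1e-12)
    best_size, best_score = 1, np.inf
    for m in range(1, min(max_classes, len(own_samples)) + 1):
        pooled = np.concatenate(own_samples[:m])    # mixture of m classes
        log_var = np.log(np.var(pooled) + 1e-12)
        score = -abs(log_var - log_var_rival)       # measure to minimize
        if score < best_score:
            best_size, best_score = m, score
    return best_size
```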
The eighth means is production of feature pattern vectors for training a neural network to obtain the reference vector and the variance vector, respectively corresponding to the average and the variance of the distribution of pixel number components or frequency components of each element of the three-dimensional matrix. For the feature pattern input vector of an arbitrary object, the feature pattern vector of a different object having the minimal distance, as defined by the fifth means, to the former feature pattern vector is selected. The feature input pattern obtained by mixing these feature input patterns with each other is input to the neural network, and the mixing ratio is varied in accordance with the number of times of learning.
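A minimal sketch of this mixing schedule follows; the exponential decay form is an assumption (the examples later quote a decay factor of 200), and the function names are illustrative.

```python
import numpy as np

def synthesize_training_pattern(h_q, h_mq, t, decay=200.0):
    """Eighth means: mix the feature pattern of class q with that of its
    nearest rival m(q); the mixing ratio shrinks as learning iteration t
    grows, so early training sees strongly blended samples and late
    training sees nearly pure ones."""
    alpha = 0.5 * np.exp(-t / decay)        # assumed decay schedule
    return (1.0 - alpha) * h_q + alpha * h_mq
```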
The ninth means is determination of the attribute of object feature patterns, which have been transmitted through a hardwired or wireless medium, by using the fifth means at a remote site. The determination of the attribute of the object feature patterns is performed at a remote site by receiving feature patterns represented as the three-dimensional matrix of pixel number components or frequency components of the two-dimensional image, which have been obtained by the first and the second means, by the first, the second and the third means, or by the first, the second and the fourth means, from a set of several two-dimensional image data or from an image database obtained by scanning an object in a three-dimensional space belonging to an arbitrary type of attribute. The reference pattern vector and the variance pattern vector, obtained from an average and a variance in a distribution of feature components for each element of each three-dimensional matrix, are stored at the receiver site of a hardwired or wireless network. The feature input pattern of the two-dimensional image, which is represented by the pixel number components or the frequency components of the three-dimensional matrix obtained by the first and the second means, by the first, the second and the third means, or by the first, the second and the fourth means, is encoded and encrypted by the transmitter site.
The transmitted feature input pattern of the two-dimensional image is decrypted and decoded by the receiver site, which determines the attribute of the object represented by the feature pattern of the two-dimensional image by using the neural network.
The tenth means is construction of the object image database whose images are used by the first and the second means. The object image database consists of a set of two-dimensional images, which have been obtained by rotating the object in the three-dimensional space several times, each time by a predetermined angle, and by scanning the object every time.
The present invention can solve the above-described problems through these means, thereby attaining the following functions or effects.
First, the neural network used in the image recognition method of the present invention can advantageously extract an invariant feature pattern, even if any variation is caused in the rotation, position or illumination of the object. According to the present invention, since a feature pattern which is invariant with respect to a variation in illumination is used, through a quantization using a histogram, a high-precision recognition which is highly resistant to a variation in external illumination is realized. In addition, in the feature pattern space of the present invention, a variation in the intensity of the image signals of a three-dimensional object can be stored separately, with no interference, in a quantified luminance space. Thus, the statistical variation in the feature patterns of an image can be effectively optimized.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a chart of an embodiment of an object recognition method used in image communication.
Figure 2 illustrates feature detection by means of block subdivision and sampling of a two-dimensional image.
Figure 3 is a structure of the neural network designed for image recognition by this patent.
Figure 4 is a hardware implementation of a neural recognition unit.
Figure 5 shows the relationships between the number of coefficients and the recognition error rate and mean absolute error for the coefficients.
Figure 6 illustrates the difference in learning performance between the nonlinear table with histogram equalization and that without it.
Figure 7 illustrates image synthesis for training the neural network.
Figure 8 shows recognition performance using training image samples obtained by image synthesis.
Figure 9 is a schematic structure of mixture classes obtained by a likelihood function of the number of mixture classes.
Figure 10 shows the relation of the average variance and the error rate to the number of mixture classes.
Figure 11 is an illustration of training image generation realizing rotation-invariant recognition.
Figure 12 shows the recognition performance for rotating objects with the angle varied from 0 to 60 degrees.
Figure 13 is a table showing system performance obtained by various methods.
DESCRIPTION OF THE PREFERRED EMBODIMENT
Example 1
A flow chart of the preferred embodiment of a method according to the invention is shown in Figure 1. Figure 1 shows an example of an object recognition method used for image communication.
(1) An original image is captured by a CCD (Charge Coupled Device) camera.
(2) The scale of the luminance or chromatic signal of the original image is transformed into a quantified scale by a nonlinear look-up table.
(3) The quantified image is divided into several rectangular elements. A sampling unit calculates the number of pixels per quantified scale in each rectangular element.
(4) The number of pixels, represented by a three-dimensional matrix structure obtained by the sampling in step 3, is transformed by a three-dimensional or two-dimensional DCT (discrete cosine transform) according to the methods of claims 1 to 3. Low-frequency components of the DCT coefficients are selected and used as three-dimensional feature patterns for object recognition.
(5) The three-dimensional feature patterns extracted from the low-frequency components of the DCT coefficients are encoded and encrypted for transmission via wired or wireless channels.
(6) At the receiver site, the transmitted three-dimensional feature patterns are decrypted and decoded as low-frequency components of DCT coefficients.
(7) The decoded low-frequency components of the DCT coefficients are input to the neural network for object classification.
The transmitter has a function of feature extraction and compression, which enables remote object recognition at the receiver site without transmitting the original image. The DCT used in this patent is compatible with the JPEG and MPEG image compression algorithms.
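A compact sketch of this transmit/receive path follows; zlib stands in for the entropy coder and a repeating-key XOR stands in for a real cipher (both are assumptions, as the patent does not specify them), and the function names are illustrative.

```python
import zlib
import numpy as np

def transmit_features(features: np.ndarray, key: bytes) -> bytes:
    """Transmitter side of steps (4)-(5): serialize the low-frequency DCT
    feature pattern, compress (encode) it, then 'encrypt' it."""
    payload = zlib.compress(features.astype(np.float32).tobytes())
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

def receive_features(blob: bytes, key: bytes, shape) -> np.ndarray:
    """Receiver side of step (6): decrypt, decode, and restore the
    three-dimensional feature pattern for classification in step (7)."""
    payload = bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))
    return np.frombuffer(zlib.decompress(payload), dtype=np.float32).reshape(shape)
```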
Example 2
The main procedure of feature detection and classification to recognize an object is embodied in this example using an image database which consists of Mq objects with Np different expressions (Mq = 40 and Np = 10).
(M1) Quantization of the luminance or chromatic signal of all the pixels of the two-dimensional image by nonlinear transformation.
(M2) Object feature representation by using a three-dimensional memory structure obtained from block subdivision and sampling.
(M3) DCT transformation of the three-dimensional feature patterns.
(M4) Classification of the feature patterns by using the distance measures invented in this patent.
In step Ml, luminance or chromatic scale j is transformed by using normalized integral histogram function G(j).
[Equationl]
g( )=255-^-.
K J G(255) where
G(j) = ∑ f(i). i=l
(fq is histogram function for 256 scales. Signal j with 256 scales is transformed to quantified signal y with Nz ones. [Equation2]
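A short sketch of step M1 follows; since Equation 2 is not preserved in this text, the final fold from the 256-level equalized scale down to the Nz quantified scales is an assumption, and the function name is illustrative.

```python
import numpy as np

def build_quantization_lut(image: np.ndarray, nz: int = 8) -> np.ndarray:
    """Histogram-equalizing nonlinear table (step M1 / Equation 1):
    g(j) = 255 * G(j) / G(255), with G the cumulative histogram f;
    the equalized scale is then folded down to nz quantified scales."""
    hist = np.bincount(image.ravel(), minlength=256)   # f(i)
    G = np.cumsum(hist)                                # G(j)
    g = 255.0 * G / G[255]                             # Equation 1
    return np.minimum((g / 256.0 * nz).astype(int), nz - 1)

# Usage: lut = build_quantization_lut(img); quantified = lut[img]
```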
Figure 2 shows an example of object feature representation using a three-dimensional memory structure obtained from block subdivision and sampling. In step M2, the two-dimensional quantified image 11 is subdivided into Nx x Ny rectangular elements 10. In each rectangular element, the number of pixels whose luminance or chromatic signal belongs to each scale is calculated. Then, the histogram density at each rectangular element is represented by the three-dimensional matrix hp(x, y, z), where x and y denote the horizontal and vertical locations of the rectangular element and z denotes the quantified scale of the chromatic or luminance intensity. In the case that a quantified image with Nz scales is divided into Nx x Ny rectangular elements, the histogram density is represented by an Nx x Ny x Nz matrix.
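A minimal sketch of this block-histogram sampling (step M2), assuming an image whose sides divide evenly into the Nx x Ny blocks:

```python
import numpy as np

def block_histogram(quantified: np.ndarray, nx: int = 8, ny: int = 8,
                    nz: int = 8) -> np.ndarray:
    """Count, per rectangular element (x, y), the pixels falling into each
    quantified scale z, giving the Nx x Ny x Nz histogram density
    h_p(x, y, z)."""
    h = np.zeros((nx, ny, nz))
    bh, bw = quantified.shape[0] // nx, quantified.shape[1] // ny  # block size
    for x in range(nx):
        for y in range(ny):
            block = quantified[x * bh:(x + 1) * bh, y * bw:(y + 1) * bw]
            h[x, y] = np.bincount(block.ravel(), minlength=nz)[:nz]
    return h
```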
For dimension reduction, the histogram density is transformed by a two- or three-dimensional DCT in step M3. By applying the two-dimensional DCT, we obtain
[Equation 3]
$$h^{2dct}_{p(u,v,z)}=\sum_{x=0}^{N_{x}-1}\sum_{y=0}^{N_{y}-1}c(u)\,c(v)\,h_{p(x,y,z)}\cos\frac{(2x+1)u\pi}{2N_{x}}\cos\frac{(2y+1)v\pi}{2N_{y}}$$
where c(u) = 1/(2√2) (u = 0) and c(u) = 1/2 (u ≠ 0), with u = 0 to Nx-1, v = 0 to Ny-1 and z = 0 to Nz-1. Here h2dct_p(u, v, z) is the discrete cosine transform of hp(x, y, z). Low-frequency components h2dct_p(u, v, z) derived from the Nx x Ny DCT coefficients are used as the feature pattern for pattern classification. For example, with u = 0 to Ndctx-1, v = 0 to Ndcty-1 and z = 0 to Nz-1, Ndctx x Ndcty x Nz DCT coefficients are derived. The optimal numbers Ndctx and Ndcty are selected so as to maximize recognition performance by using the methods explained in Example 4.
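A sketch of this transform-and-truncate step is given below, using the standard orthonormal DCT basis constants; the kept sizes Ndctx = Ndcty = 4 are illustrative, since the patent selects them to maximize recognition performance.

```python
import numpy as np

def dct_features(h: np.ndarray, n_dctx: int = 4, n_dcty: int = 4) -> np.ndarray:
    """Step M3 / Equation 3: apply the 2-D DCT to each luminance scale z of
    the block histogram h(x, y, z) and keep only the low-frequency
    n_dctx x n_dcty coefficients as the feature pattern."""
    nx, ny, nz = h.shape

    def basis(n):
        c = np.full(n, 0.5)
        c[0] = 1.0 / (2.0 * np.sqrt(2.0))   # c(0) = 1/(2*sqrt(2)) for n = 8
        k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        return c[:, None] * np.cos((2 * i + 1) * k * np.pi / (2 * n))

    Bx, By = basis(nx), basis(ny)
    # separable 2-D DCT over (x, y) for every scale z
    h2dct = np.einsum("ux,vy,xyz->uvz", Bx, By, h)
    return h2dct[:n_dctx, :n_dcty, :]
```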
In step M4, the feature patterns are classified by using a decision function, which is obtained from a negative log-likelihood representation of the normal distribution P(h(u,v,z) | μq(u,v,z), σq(u,v,z)).
[Equation 4]
$$d\left(h^{2dct},\,\mu^{2dct}_{q}\right)=\sum_{u=0}^{N_{dctx}-1}\sum_{v=0}^{N_{dcty}-1}\sum_{z=0}^{N_{z}-1}\left[\left(\frac{h^{2dct}_{(u,v,z)}-\mu^{2dct}_{q(u,v,z)}}{\sigma^{2dct}_{q(u,v,z)}}\right)^{2}+\log\left(2\pi\,\sigma^{2dct\,2}_{q(u,v,z)}\right)\right]$$
where μ2dct_q(u, v, z) and σ2dct_q(u, v, z) denote components of the reference and variance vectors derived from the averages and variances of the classes, assuming normal distributions.
[Equation 5]
$$\mu^{2dct}_{q(u,v,z)}=\frac{1}{N_{p}}\sum_{p=1}^{N_{p}}h^{2dct(p)}_{(u,v,z)}$$
[Equation 6]
$$\sigma^{2dct\,2}_{q(u,v,z)}=\frac{1}{N_{p}}\sum_{p=1}^{N_{p}}\left(h^{2dct(p)}_{(u,v,z)}-\mu^{2dct}_{q(u,v,z)}\right)^{2}$$
( Np is number of training samples and p denotes . ) To classify Mq objects, this invention constructs a set of Mq decision functions and identifies the most appropriate class m*, for an arbitrary input vector h 2dct (uvw > as the class for 0 which the decision function has the minimal value.
[Equation 7]

$$m^{*} = \arg\min_{q}\, d\!\left(h^{2dct}_{(u,v,z)}, \mu^{2dct}_{q(u,v,z)}\right).$$
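A sketch of the step M4 classifier: references and variances are assumed to hold μ and σ² for each of the Mq classes, and the names are ours.

```python
import numpy as np

def decision_distance(h: np.ndarray, mu: np.ndarray, var: np.ndarray) -> float:
    """Negative log-likelihood distance of Equation 4 for one class (var = sigma**2)."""
    return float(np.sum((h - mu) ** 2 / var + np.log(2.0 * np.pi * var)))

def classify(h: np.ndarray, references, variances) -> int:
    """Equation 7: index of the class whose decision function is minimal."""
    return int(np.argmin([decision_distance(h, mu, var)
                          for mu, var in zip(references, variances)]))
```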
In place of Equation 3, a three-dimensional DCT can be used for dimension reduction of the histogram density h_{p(x,y,z)}.

[Equation 8]

$$h^{3dct}_{p(u,v,w)} = \sum_{x=0}^{N_x-1}\sum_{y=0}^{N_y-1}\sum_{z=0}^{N_z-1} c(u)\,c(v)\,c(w)\,h_{p(x,y,z)} \cos\frac{(2x+1)u\pi}{2N_x} \cos\frac{(2y+1)v\pi}{2N_y} \cos\frac{(2z+1)w\pi}{2N_z},$$

where c(u) = 1/(2√2) (u ≠ 0) and c(u) = 1/2 (u = 0).
Example 3
Figure 3 illustrates the structure of a neural network designed for image recognition according to this patent. The neural network consists of a set of neural recognition units 14. Each neural recognition unit has the same number of input terminals 13 as the number of components of the three-dimensional matrix, and a single output terminal 14.
The number of neural recognition units is the same as the number of attributes of the objects to be classified. To classify Mq objects, a neural network is constructed from a set of Mq neural recognition units, each of which implements the decision function defined by Equation 4.
When a feature pattern vector obtained from the two-dimensional image of an arbitrary object is input, the neural recognition unit storing the reference vector representing the attribute of that object outputs the minimum value. The neural network thus determines the most appropriate class m* by identifying the minimal value of the neuron decision functions for an input feature pattern vector.
Figure 4 illustrates a hardware implementation of a neural recognition unit. A neural recognition unit stores a reference vector and a variance vector, derived from a normal distribution of feature input pattern vectors, in memory registers 20 and 21. It then determines the relative distance between the reference vector and an input feature pattern vector of the three-dimensional matrix, which is loaded into input register 16.

The difference between each component of the input feature pattern vector and the corresponding component of the reference pattern vector is calculated by subtracting arithmetic unit 17. Multiplier 18 weights the difference of each component by the inverse of the corresponding component of the variance vector and squares the weighted difference. By adding up the squared weighted differences of all components, accumulator 19 obtains the relative distance between an input feature pattern vector obtained from a two-dimensional image of an arbitrary object and the reference vector. A reference vector and a variance vector, corresponding to the average and the variance of the distribution of the pixel number components or the frequency components with respect to each element of the three-dimensional matrix, are obtained by means of neural network learning.
Here we present an example of a training method for the neural network using the two-dimensional DCT. For each presentation of an input vector h^{2dct}_{p(u,v,z)} of class p, we estimate the relative distance and update each component of the reference vector with the minimal distance in the following manner. When the input feature pattern vector belongs to class k (p = k),
[Equation 9]

$$\mu^{(t+1)}_{k(u,v,z)} = \mu^{(t)}_{k(u,v,z)} + \beta_1\left(h_{p(u,v,z)} - \mu^{(t)}_{k(u,v,z)}\right).$$
When the input vector does not belong to class l (p ≠ l),
[Equation 10]

$$\mu^{(t+1)}_{l(u,v,z)} = \mu^{(t)}_{l(u,v,z)} - \beta_2\left(h_{p(u,v,z)} - \mu^{(t)}_{l(u,v,z)}\right).$$
(μ^{(t)}_{k(u,v,z)} and μ^{(t)}_{l(u,v,z)} denote the components of the reference vectors of classes k and l at learning iteration t; β1 and β2 are modification factors.)
[Equation 11]

$$\sigma^{(t+1)}_{k(u,v,z)} = \left(1 + \zeta\!\left(\left|h_{p(u,v,z)} - \mu^{(t)}_{k(u,v,z)}\right| - \sigma^{(t)}_{k(u,v,z)}\right)\right)\sigma^{(t)}_{k(u,v,z)}.$$

[Equation 12]

$$\sigma^{(t+1)}_{l(u,v,z)} = \left(1 - \zeta\!\left(\left|h_{p(u,v,z)} - \mu^{(t)}_{l(u,v,z)}\right| - \sigma^{(t)}_{l(u,v,z)}\right)\right)\sigma^{(t)}_{l(u,v,z)}.$$
(σ^{(t)}_{k(u,v,z)} and σ^{(t)}_{l(u,v,z)} denote the components of the variance vectors of classes k and l at learning iteration t.)

ζ(λ) is a modification function:

[Equation 13]

$$\zeta(\lambda) = \zeta_0 \quad (\lambda > 0).$$

[Equation 14]

$$\zeta(\lambda) = 0 \quad (\lambda \le 0).$$
Initial values of the reference and variance vectors for an arbitrary class are derived from the average and the variance, assuming a normal distribution of each component of the DCT coefficients.
[Equation 15]

$$\mu^{(0)}_{q(u,v,z)} = \sum_{p=1}^{N_p} \frac{h^{2dct(p)}_{(u,v,z)}}{N_p}.$$

[Equation 16]

$$\left(\sigma^{(0)}_{q(u,v,z)}\right)^2 = \sum_{p=1}^{N_p} \frac{\left(h^{2dct(p)}_{(u,v,z)} - \mu^{(0)}_{q(u,v,z)}\right)^2}{N_p}.$$
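As a sketch of one training presentation (Equations 9 to 14) with initialization per Equations 15 and 16: the hyperparameter values, the array shapes, and the ordering of the mean and variance updates are our assumptions.

```python
import numpy as np

def init_vectors(samples_by_class):
    """Equations 15-16: per-class mean and standard deviation of DCT features."""
    mu = np.stack([s.mean(axis=0) for s in samples_by_class])
    sigma = np.stack([s.std(axis=0) for s in samples_by_class])
    return mu, sigma

def train_step(h, p, mu, sigma, beta1=0.05, beta2=0.01, zeta0=0.01):
    """One presentation of input h of class p; updates the nearest reference in place."""
    dists = np.array([np.sum((h - m) ** 2 / s ** 2) for m, s in zip(mu, sigma)])
    k = int(np.argmin(dists))               # unit with the minimal relative distance
    zeta = np.where(np.abs(h - mu[k]) - sigma[k] > 0, zeta0, 0.0)  # Equations 13-14
    if k == p:                              # correct class: Equations 9 and 11
        mu[k] += beta1 * (h - mu[k])
        sigma[k] *= 1.0 + zeta
    else:                                   # wrong class: Equations 10 and 12
        mu[k] -= beta2 * (h - mu[k])
        sigma[k] *= 1.0 - zeta
    return k
```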
Example 4
This patent provides a means to select the optimal number of coefficients in the DCT representation of an object feature. In this example, h^{2dct}_{(u,v,z)} is obtained by the DCT transform of the block histogram density h_{p(x,y,z)} described in step M3 of Example 2.
[Equation 17]

$$E_p = \sum_{x=0}^{N_x-1}\sum_{y=0}^{N_y-1}\sum_{z=0}^{N_z-1}\left|\,h_{p(x,y,z)} - \sum_{u=0}^{N_{dct}-1}\sum_{v=0}^{N_{dct}-1} c(u)\,c(v)\,h^{2dct}_{p(u,v,z)} \cos\frac{(2x+1)u\pi}{2N_x} \cos\frac{(2y+1)v\pi}{2N_y}\,\right|,$$

the absolute error of the truncated DCT approximation of h_{p(x,y,z)}.
Figure 5 shows the relation of the number of DCT coefficients to the recognition error rate and the mean absolute error for the coefficient sets h^{(a)2dct}_{(u,v,z)} and h^{(b)2dct}_{(u,v,z)}, which are specified by lines 22 and 23.
[Equation 18]

$$\left\{\,h^{(a)2dct}_{(u,v,z)} \mid 0 \le u, v \le N^{a}_{dct}\,\right\}.$$

[Equation 19]

$$\left\{\,h^{(b)2dct}_{(u,v,z)} \mid 0 \le u+v \le N^{b}_{dct}\,\right\}.$$
There is a trade-off between error rate and complexity in the DCT approximation. Increasing the number of coefficients reduces the approximation error but increases the complexity of the learning network. The recognition error rates reach their minimum at around 36 coefficients (N^a_dct = 5, N^b_dct = 7). According to this patent, the number of DCT coefficients is selected so as to minimize the error rate determined by the trade-off between model complexity and mean absolute error.
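In practice, this selection can be realized as a simple search over candidate coefficient counts scored on held-out data; the sketch below assumes an evaluate_error_rate(ndct) helper, a placeholder of ours that would train and test the classifier with the given truncation.

```python
def select_ndct(candidates, evaluate_error_rate):
    """Example 4: pick the coefficient count minimizing held-out recognition error."""
    return min(candidates, key=evaluate_error_rate)

# Usage sketch: best = select_ndct(range(2, 9), evaluate_error_rate)
```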
Example 5
The invented neural network is trained by using the 400 images of the ORL (Olivetti Research Laboratory) database, consisting of 40 persons with 10 distinct poses each. Five poses per person are used for training and the remaining five for testing. An input image of 114×88 pixels is divided into 8×8×8 rectangular blocks.
Figure 6 illustrates the difference in learning performance between the nonlinear table with histogram equalization and that without it. Test images for estimating luminance variation characteristics are generated by using the gamma transformation G(y) = 255·(y/255)^{1/γ}. At gamma factor γ = 1, the error rate obtained by quantization without histogram equalization is lower than that of this patent. However, that error rate increases significantly with luminance variation, exceeding 0.1 in the ranges γ ≤ 0.925 and γ ≥ 1.2. The quantization method with histogram equalization, on the other hand, maintains a relatively low error rate between 0.04 and 0.09 over the range γ = 0.4 to 3.0.
To demonstrate the effectiveness of the invented image synthesis method, face recognition results based on the two- or three-dimensional DCT with image synthesis are compared with those of other known methods on the same data. The dimension of the block histogram inputs is reduced from 8×8×8 to 4×4×8 or 4×4×4 by the 2D DCT or the 3D DCT, respectively. Figure 7 shows training examples obtained by image synthesis to construct a neural network for face classification according to this invention. The images are obtained by synthesis from two different facial images of persons A and B with mixing ratio α. These two images are specified by q and m(q). The image specified by m(q) is selected from the facial image database so as to have the minimal reference-pattern distance, in the sense of Equation 4, with respect to an arbitrary image q (Equations 21 and 22).
[Equation 21]

$$d\!\left(\mu_q, \mu_{m(q)}\right) = \sum_{u=0}^{N_{dct}-1}\sum_{v=0}^{N_{dct}-1}\sum_{z=0}^{N_z-1}\left[\frac{\left(\mu_{q(u,v,z)} - \mu_{m(q)(u,v,z)}\right)^2}{\sigma^2_{q(u,v,z)}} + \log 2\pi\sigma^2_{q(u,v,z)}\right].$$
[Equation 22]

$$m(q) = \arg\min_{m \ne q}\, d\!\left(\mu_{q(u,v,z)}, \mu_{m(u,v,z)}\right).$$
[Equation 23]

$$\tilde{h}_{q(u,v,z)} = (1-\alpha)\,h_{q(u,v,z)} + \alpha\,h_{m(q)(u,v,z)}.$$
The mixing ratio is decreased with decay constant η as learning proceeds.

[Equation 24]

$$\alpha = \alpha_0 \exp\!\left(-\frac{t}{\eta}\right),$$

where t is the learning iteration.
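A sketch of the synthesis rule (Equations 21 to 24): mu and var are assumed to hold the per-class reference and variance vectors, and the default values follow Example 5 below (α0 = 0.5, decay factor 200).

```python
import numpy as np

def nearest_class(q: int, mu: np.ndarray, var: np.ndarray) -> int:
    """Equations 21-22: the class m(q) nearest to q under the reference-pattern distance."""
    def dist(m):
        return np.sum((mu[q] - mu[m]) ** 2 / var[q] + np.log(2.0 * np.pi * var[q]))
    return min((m for m in range(len(mu)) if m != q), key=dist)

def synthesize(h_q: np.ndarray, h_mq: np.ndarray, t: int,
               alpha0: float = 0.5, eta: float = 200.0) -> np.ndarray:
    """Equations 23-24: mix the features of q and m(q) with a decaying ratio."""
    alpha = alpha0 * np.exp(-t / eta)
    return (1.0 - alpha) * h_q + alpha * h_mq
```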
The recognition performance is obtained from face recognition using the ORL database, consisting of 400 facial images covering 10 different expressions for each of 40 persons, with distinct variations such as open/closed eyes, smiling/non-smiling faces, glasses/no glasses, and rotation up to 20 degrees. Figure 8 shows the dependence of the error rate on the number of training samples, ranging from 1 to 5 images per person. The remaining images (from 9 down to 5 per person) are used for testing the performance of the image recognition system. The result specified by 27 is obtained from the two-dimensional DCT with image synthesis using the images q and m(q) described above; the decay factor is 200. The result specified by 26 is obtained from the two-dimensional DCT without image synthesis. With 5 learning samples, the error rate of the 2D DCT combined with image synthesis (α0 = 0.5) is 2.4%, which is much lower than the 13.6% obtained using the 2D DCT without image synthesis (α0 = 0.0). The error rate of the 2D DCT is therefore improved roughly sixfold by the invented image synthesis. This result demonstrates the effectiveness of the invented face recognition based on the two-dimensional DCT with image synthesis.

Figure 13 shows the system performance obtained by various methods. The results of SOM (Self-Organizing Map) + CN (Convolution Network), pseudo two-dimensional HMM (Hidden Markov Model) and the eigenface model are excerpted from Face Recognition: A Hybrid Neural Network Approach (S. Lawrence et al., Technical Report CS-TR-3608, University of Maryland, 1996). An error rate of 2.29% using our method is achieved, which is better than the 3.8%, 5.3%, 5% and 10.5% obtained using SOM+CN, the KLT convolution network, the pseudo 2D hidden Markov model and the eigenface model, respectively.
Example 6
Equation 4 presents a decision function using a reference vector and a variance vector, derived by assuming a proper distribution of the input feature patterns, to estimate the relative distance between an input feature pattern and the reference pattern representing the object features. The assumed feature pattern distribution is crucial for the performance of image recognition when the number of samples is very small. For image recognition using a small set of samples, a mixture model of local prior distributions is invented, which assigns a variable number of mixture classes to each class and then determines the mixture number and the local metric parameters so as to minimize the cross entropy between the real distribution and the modeled one. Figure 9 shows a schematic structure of the mixture classes defined by the minimal value of the likelihood function l^{k(m)(u,v,z)}_min. A mixture class M^{k(m)(u,v,z)} is constructed from a set of classes k(0), k(1), ..., k(m−1) and k(m) (where k(0) = k).
$$M^{k(1)(u,v,z)} \subseteq M^{k(2)(u,v,z)} \subseteq \cdots \subseteq M^{k(m)(u,v,z)}.$$
The m-th mixture class k(m) is assigned to each reference class k based on the likelihood measure l^{k(m)(u,v,z)}_min.

[Equation 25]

$$l^{k(m)(u,v,z)} = \sum_{j=0}^{m} \log \sigma_{k(j)(u,v,z)}.$$
For each class, the minimal size of the mixture model is uniquely derived from the minimization of the likelihood function.
[Equation 26]

$$l^{k(0)(u,v,z)}_{min} \ge l^{k(1)(u,v,z)}_{min} \ge \cdots \ge l^{k(m)(u,v,z)}_{min}, \qquad l^{k(m)(u,v,z)}_{min} \le l^{k(m+1)(u,v,z)}_{min}.$$
Then an optimal variance is obtained by

[Equation 27]

$$\left(\sigma_{k(u,v,z)}\right)^2 = \frac{\sum_{j=0}^{m}\left(\sigma_{k(j)(u,v,z)}\right)^2}{m+1}.$$
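As a per-component sketch of this selection, under our reading of Equation 25 as a cumulative log-variance measure and with the candidate classes assumed pre-ordered by closeness to k:

```python
import numpy as np

def pooled_variance(sigmas) -> float:
    """Pick the mixture size m minimizing the Equation 25 measure (as read above)
    and return the pooled variance of Equation 27 for one component (u, v, z)."""
    sigmas = np.asarray(sigmas, dtype=float)      # sigma_{k(0)}, sigma_{k(1)}, ...
    likelihood = np.cumsum(np.log(sigmas))        # Equation 25 for m = 0, 1, ...
    m = int(np.argmin(likelihood))                # Equation 26: the minimal measure
    return float(np.sum(sigmas[: m + 1] ** 2) / (m + 1))  # Equation 27
```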
Figure 10 shows the relation of the average variance and the error rate to the number of mixture classes. The performance using this patent, specified by 29 and 31, is compared with results derived from a fixed size of mixture classes for all the components, specified by 28 and 30. With the fixed size, minimal error rates of 7.65% and 8.24%, with variances of 618 and 960, are obtained at sizes 3 and 4. Using this patent, the error rate and the average variance decrease and saturate at 6.28% and 516 as the number of mixture classes increases. The essence of this patent is that it allows a variable size of the mixture classes: the error rate is significantly improved from 7.78% to 6.28% by allowing an optimal size of the mixture classes defined by the entropy distance.

Example 7
A neural network for recognizing rotation-invariant object features is constructed according to this patent by presenting several images of a rotating object to the neural network. Figure 11 illustrates an example of generating luminance images from a rotating object. An object 34 is rotated about the intersection point of two central lines 32 and 33 by angle θ = 60·n degrees (n = 0 to 5). The luminance signals of all the pixels of the two-dimensional image, obtained by scanning the rotating object in a three-dimensional space, are input into the nonlinear table and transformed into quantified signals. The quantified two-dimensional image is divided into several rectangular elements, and the number of pixels belonging to each scale representing a corresponding intensity of the quantified luminance signal is obtained at each rectangular element.
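For illustration, the six rotated views might be generated in software as below; scipy.ndimage.rotate stands in for the patent's scanning of a physically rotated object.

```python
import numpy as np
from scipy.ndimage import rotate

def rotated_views(image: np.ndarray, n_views: int = 6):
    """Views at theta = 60*n degrees (n = 0..5), as in Figure 11."""
    return [rotate(image, angle=60 * n, reshape=False) for n in range(n_views)]
```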
Feature patterns with the three-dimensional matrix structure obtained from the six rotated images per person are presented for training the neural network. This method, using a three-dimensional representation of a rotating object, assigns the feature patterns obtained by sampling different luminance signals at the same location of a rectangular element of a rotating object to different scales. This enables the neural network to store the feature patterns without interference among the rotated images. Figure 12 shows the recognition performance for rotating objects with the angle varied over a range of 0 to 60 degrees. Input feature patterns obtained from the rotating facial objects of 40 persons at 0, 60, 120, 180 and 240 degrees are used for constructing the neural network. This patent employs image synthesis using the training samples (then θ = 0, 60, 120, 180, 240). The test images are generated by changing the angle by Δθ = 1 to 60 degrees; the rotation angle of a test image is specified by θ = θi + Δθ. The recognition error rates of this patent are lower than those of the conventional method without image synthesis. This example demonstrates the effectiveness of recognizing rotating objects by this patent.
By contrast, if the six rotated images were simply superimposed in luminance or chromatic signal space, the image represented by the luminance signals in each rectangular element would interfere with the other images.

Claims

1. An image recognition method comprising: a first step of obtaining quantified image signals by inputting luminance or chromatic signals of all the pixels of a two-dimensional image into a nonlinear table, the two-dimensional image having been obtained by scanning an object in a three-dimensional space; and a second step of dividing the two-dimensional image, obtained by the first step, into several rectangular elements and obtaining the number of pixels belonging to each quantified scale of the luminance or chromatic signal in each of the rectangular elements, wherein the nonlinear table is constructed such that, in the histogram distribution of scales which has been obtained by quantizing the input image signals of all the pixels of the two-dimensional image, the histogram of the quantified pixels belonging to each of the scales is equalized, and wherein a feature pattern of the two-dimensional image obtained from the object in the three-dimensional space is represented by a three-dimensional matrix (x, y, z) by using a pixel number at a horizontal coordinate (x), a pixel number at a vertical coordinate (y) and a coordinate (z) representing an intensity of the quantified image signal within each said rectangular element obtained by performing the first and the second steps.
2. The image recognition method according to Claim 1, wherein, instead of obtaining the feature pattern as the three-dimensional matrix (x, y, z), consisting of the pixel number components, from the two-dimensional image by performing the first and the second steps of Claim 1 on the image signals of the two-dimensional image which has been obtained from the object in the three-dimensional space, pixel number components corresponding to a two-dimensional matrix consisting of a horizontal coordinate and a vertical coordinate are taken out from the pixel number components of the three-dimensional matrix with respect to each scale of the intensity of the quantified image signals, the pixel number components of the two-dimensional matrix are transformed by using a two-dimensional discrete cosine transform or a two-dimensional discrete sine transform, thereby obtaining frequency components in a two-dimensional space, and a feature pattern represented as a three-dimensional matrix (u, v, z) of frequency components, which have been extracted as low-frequency ones from the frequency components in the two-dimensional space such that a number of the components maximizes recognition precision, is used as a feature pattern of the two-dimensional image for the object in the three-dimensional space.
3. The image recognition method according to Claim 1, wherein, instead of obtaining the feature pattern as the three-dimensional matrix (x, y, z), consisting of pixel number components, from the two-dimensional image by performing the first and the second steps of Claim 1 on the image signals of the two-dimensional image which has been obtained from the object in the three-dimensional space, the pixel number components of the three-dimensional matrix are transformed by a three-dimensional discrete cosine transform or a three-dimensional discrete sine transform with respect to a horizontal coordinate, a vertical coordinate and a coordinate representing an intensity of the quantified image signal, thereby obtaining frequency components in a three-dimensional space, and a feature pattern represented as a three-dimensional matrix (u, v, w) of frequency components, which have been extracted as low-frequency ones from the frequency components in the three-dimensional space such that a number of the components maximizes recognition precision, is used as a feature input pattern of the two-dimensional image for the object in the three-dimensional space.
4. The image recognition method according to Claim 1, wherein, in a set of feature patterns, each pattern being represented as the three-dimensional matrix of pixel number components or frequency components of the two-dimensional image, which have been obtained by the method of Claim 1 from a set of several two-dimensional image data or from an image database obtained by scanning an object in a three-dimensional space belonging to an arbitrary type of attribute, a reference vector and a variance vector are obtained from an average and a variance in a distribution of pixel number components or frequency components for each element of each three-dimensional matrix; the feature pattern of the object in the three-dimensional space belonging to the arbitrary type of attribute is represented by these vectors; a relative distance between the feature pattern of the three-dimensional matrix, which has been obtained from the two-dimensional image of the object of the arbitrary attribute by the method of Claim 1 with reference to the reference vector and the variance vector, and the reference vector is determined; and the attribute of the object is determined from an arbitrary two-dimensional image other than the image database.
5. The image recognition method according to Claim 4, wherein, in each set of feature patterns having the same attribute of the object in the two-dimensional image database, the database being represented as an image feature pattern of pixel number components or frequency components of the three-dimensional matrix, which has been obtained by the method of Claim 1, from a set of several two-dimensional images or the image database obtained from the object of Claim 4 in the three-dimensional space belonging to the arbitrary type of attribute, a variance vector is obtained for a distribution of pixel number components or frequency components with respect to each element of the three-dimensional matrix by means of a statistical calculation or neural network learning; and a reference vector corresponding to an average of the distribution of the pixel number components or the frequency components with respect to each element of the three-dimensional matrix of Claim 4 is obtained by means of neural network learning; the neural network storing the reference vector and the variance vector, which represent the objects having the same attribute of the image database of Claim 4, as learning coefficients, determining a relative distance between the feature pattern of the three-dimensional matrix, which has been obtained from the two-dimensional image other than the image database by the method of Claim 1, and the reference vector; and determining the attribute of the object from an arbitrary two-dimensional image other than the image database.
6. The image recognition method according to Claim 4, wherein, in each set of feature patterns having the same attribute of the object in the two-dimensional image database, the database being represented as an image feature pattern of pixel number components or frequency components of the three-dimensional matrix, which has been obtained by the method of Claim 1, from a set of several two-dimensional images or an image database obtained from the object of Claim 4 in the three-dimensional space belonging to the arbitrary type of attribute, a variance vector is obtained for a distribution of pixel number components or frequency components with respect to each element of the three-dimensional matrix; and a reference value of an amount of mutual information between a component of the variance vector and a component of another variance vector having a different attribute from that of the former variance vector is obtained instead of the variance vector of each said attribute; and the component of the variance vector of each said attribute is determined by using the components of several variance vectors having respective attributes different from the attribute so as to minimize the reference value of the amount of mutual information.
7. The image recognition method according to Claim 5, wherein, in order to obtain the reference vector and the variance vector, respectively corresponding to the average and the variance of the distribution of pixel number components or frequency components of each said element of the three-dimensional matrix of Claim 5, by making a neural network learn, for the distribution of pixel number components or frequency components of each said element of the three-dimensional matrix which has been obtained by the statistical calculation of Claim 5, not the variance vectors, but a feature pattern, which is separated by a minimum relative distance from a feature pattern belonging to an object having an attribute in the feature pattern space of Claim 5 and which has a different attribute from the attribute, is obtained from the feature patterns having the attributes which are represented by the pixel number components or the frequency components of the three-dimensional matrix, obtained by the method of Claim 1, for the two-dimensional image in the image database of Claim 5, a feature input pattern, obtained by mixing these feature input patterns with each other, is input to the neural network, and a mixing ratio thereof is varied in accordance with a number of times of learning.
8. The image recognition method according to Claim 5, wherein, in a set of feature patterns, each pattern being represented as the three-dimensional matrix of pixel number components or frequency components of the two-dimensional image, which have been obtained by the method of Claim 1 from a set of several two-dimensional image data or from an image database obtained by scanning an object in a three-dimensional space belonging to an arbitrary type of attribute, a reference pattern vector and a variance pattern vector obtained from an average and a variance in a distribution of feature components for each element of each said three-dimensional matrix are stored by a receiver site of a hardwired network or a wireless network, the feature input pattern of the two-dimensional image, which is represented by the pixel number components or the frequency components of the three-dimensional matrix obtained by the method of Claim 1, is encrypted and encoded by a transmitter site, and the transmitted feature input pattern of the two-dimensional image is decoded and decrypted by the receiver site, thereby determining the attribute of the object represented by the feature pattern of the two-dimensional image by using the neural network of Claim 5.
9. The image recognition method according to Claim 1, wherein a set of two-dimensional images, which have been obtained by rotating the object in the three-dimensional space several times, each time by a predetermined angle, and by scanning the object every time, is used instead of the set of two-dimensional images or the image database which has been obtained by scanning the object in the three-dimensional space belonging to the attribute of the arbitrary type as described in Claim 1, thereby using the feature pattern of pixel number components or frequency components of the three-dimensional matrix of Claim 1.
PCT/US1998/005443 1998-03-23 1998-03-23 Image recognition method WO1999049414A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/US1998/005443 WO1999049414A1 (en) 1998-03-23 1998-03-23 Image recognition method
JP54819999A JP2002511175A (en) 1998-03-23 1998-03-23 Image recognition method
US09/147,592 US6236749B1 (en) 1998-03-23 1998-03-23 Image recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US1998/005443 WO1999049414A1 (en) 1998-03-23 1998-03-23 Image recognition method

Publications (1)

Publication Number Publication Date
WO1999049414A1 true WO1999049414A1 (en) 1999-09-30

Family

ID=22266637

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1998/005443 WO1999049414A1 (en) 1998-03-23 1998-03-23 Image recognition method

Country Status (2)

Country Link
JP (1) JP2002511175A (en)
WO (1) WO1999049414A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5468824B2 (en) 2009-06-22 2014-04-09 株式会社豊田自動織機 Method and apparatus for determining shape match in three dimensions
US20130022282A1 (en) * 2011-07-19 2013-01-24 Fuji Xerox Co., Ltd. Methods for clustering collections of geo-tagged photographs
JP2019036899A (en) * 2017-08-21 2019-03-07 株式会社東芝 Information processing unit, information processing method and program
KR102516198B1 (en) * 2021-12-22 2023-03-30 호서대학교 산학협력단 Apparatus for vision inspection using artificial neural network and method therefor


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5150433A (en) * 1989-12-01 1992-09-22 Eastman Kodak Company Histogram/variance mechanism for detecting presence of an edge within block of image data
US5274714A (en) * 1990-06-04 1993-12-28 Neuristics, Inc. Method and apparatus for determining and organizing feature vectors for neural network recognition
US5568568A (en) * 1991-04-12 1996-10-22 Eastman Kodak Company Pattern recognition apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KULKARNI A.D., et al., "Neural Nets for Invariant Object Recognition", IEEE 1991 APPLIED COMPUTING CONFERENCE PROCEEDINGS, April 1991, pages 336-344. *
PODILCHUK C. et al., "Face Recognition Using DCT-Based Feature Vectors Acoustics", SPEECH AND SIGNAL PROCESSING 1996, IEEE CONFERENCE PROCEEDINGS, May 1996, pages 2144-2147. *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7609893B2 (en) 2004-03-03 2009-10-27 Trw Automotive U.S. Llc Method and apparatus for producing classifier training images via construction and manipulation of a three-dimensional image model
CN111881920A (en) * 2020-07-16 2020-11-03 深圳力维智联技术有限公司 Network adaptation method of large-resolution image and neural network training device
CN111881920B (en) * 2020-07-16 2024-04-09 深圳力维智联技术有限公司 Network adaptation method of large-resolution image and neural network training device
CN112580666A (en) * 2020-12-18 2021-03-30 北京百度网讯科技有限公司 Image feature extraction method, training method, device, electronic equipment and medium

Also Published As

Publication number Publication date
JP2002511175A (en) 2002-04-09

Similar Documents

Publication Publication Date Title
US6236749B1 (en) Image recognition method
Liu et al. PQA-Net: Deep no reference point cloud quality assessment via multi-view projection
Cvejic et al. Region-based multimodal image fusion using ICA bases
Ramadan et al. Face recognition using particle swarm optimization-based selected features
US6430307B1 (en) Feature extraction system and face image recognition system
US6996257B2 (en) Method for lighting- and view -angle-invariant face description with first- and second-order eigenfeatures
Sezer et al. Approximation and compression with sparse orthonormal transforms
AU2004273275A1 (en) Object posture estimation/correlation system using weight information
CN111709313B (en) Pedestrian re-identification method based on local and channel combination characteristics
Mukhedkar et al. Fast face recognition based on Wavelet Transform on PCA
Chowdhary et al. Singular value decomposition–principal component analysis-based object recognition approach
WO1999049414A1 (en) Image recognition method
CN115169415A (en) Communication radiation source open set identification method and system
Ueda et al. Motion analysis using 3D high-resolution frequency analysis
Farnebäck Orientation Estimation Based on Weighted Projection onto Quadratic Polynomials.
Ebrahimpour-Komleh et al. Robustness to expression variations in fractal-based face recognition
JP2020112900A (en) Device associating depth image based on human body and composition value
Singh et al. Recognizing faces under varying poses with three states hidden Markov model
Kaya An algorithm for image clustering and compression
WO2009088524A1 (en) System for and method of enhancing images using fractals
Sun et al. Image comparison by compound disjoint information with applications to perceptual visual quality assessment, image registration and tracking
CN113065579A (en) Method and device for classifying target object
Tang et al. Robust video hashing based on multidimensional scaling and ordinal measures
Abdel-Kader et al. Rotation invariant face recognition based on hybrid LPT/DCT features
Liu Research on improved fingerprint image compression and texture region segmentation algorithm

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 09147592

Country of ref document: US

AK Designated states

Kind code of ref document: A1

Designated state(s): JP US