WO1996030893A1 - Pattern recognition system - Google Patents

Pattern recognition system

Info

Publication number
WO1996030893A1
WO1996030893A1 (PCT/US1996/004341)
Authority
WO
WIPO (PCT)
Prior art keywords
rectangle
value
array element
integer
swath
Prior art date
Application number
PCT/US1996/004341
Other languages
French (fr)
Inventor
Gabriel Ilan
Jacob Goldberger
Original Assignee
Advanced Recognition Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced Recognition Technologies, Inc. filed Critical Advanced Recognition Technologies, Inc.
Priority to AU53789/96A priority Critical patent/AU5378996A/en
Publication of WO1996030893A1 publication Critical patent/WO1996030893A1/en

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/08 - Speech classification or search
    • G10L15/12 - Speech classification or search using dynamic programming techniques, e.g. dynamic time warping [DTW]

Abstract

A pattern recognition method of dynamic time warping of two sequences of feature sets onto each other for use in a distance pattern recognition system is provided. The system converts the audio pattern (50) to be analyzed into a feature set in an integer format using processor (52). The integer feature set for the test pattern is then compared with a database (54) containing reference patterns, by measuring the spectral distance between the test pattern and each reference pattern, using dynamic time warping unit (56). The processor (52) digitizes audio pattern (50) using an analog-to-digital converter (60) and then detects the endpoints of each audio pattern (50) using detector (62). The output is an audio word. The processor breaks the word into frames and then extracts the features of each frame via a feature extractor (64). Feature extractor (64) comprises a linear prediction coefficient calculator (66), a cepstrum converter (68) and an integer formatter (70).

Description

PATTERN RECOGNITION SYSTEM
FIELD OF THE INVENTION
The present invention relates generally to pattern recognition systems and, in particular, to pattern recognition systems using a weighted cepstral distance measure.
BACKGROUND OF THE INVENTION
Pattern recognition systems are used, for example, for the recognition of characters and speech patterns.
Pattern recognition systems are known which are based on matching
the pattern being tested against a reference database of pattern templates. The spectral distance between the test pattern and the database of reference patterns is measured and the reference pattern having the closest spectral distance to the test pattern is chosen as the recognized pattern.
An example of the prior art pattern recognition system using a
distance measure calculation is shown in Figs. 1, 2 and 3, to which reference is now made. Fig. 1 is a flow chart illustrating the prior art pattern recognition system for speech patterns using a conventional linear predictor coefficient (LPC) determiner and a distance calculator via dynamic time warping (DTW). Fig. 2 illustrates the relationship between two speech patterns A and B, along the i-axis and j-axis, respectively. Fig. 3 illustrates the relationship between two successive points of pattern matching between speech patterns A and B.
Referring to Fig. 1 , the audio signal 10 being analyzed, has within it a plurality of speech patterns. Audio signal 10 is digitized by an analog/digital converter 12 and the endpoints of each speech pattern are detected by a detector 14. The digital signal of each speech pattern is broken into frames and for each frame, analyzer 16 computes the linear predictor coefficients (LPC) and converts them to cepstrum coefficients, which are the feature vectors of the test
pattern. Reference patterns, which have been prepared as templates, are stored in a database 18. A spectral distance calculator 20 uses a dynamic time warping (DTW) method to compare the test pattern to each of the reference patterns stored in database 18. The DTW method measures the local spectral distance between the test pattern and the reference pattern, using a suitable method of measuring spectral distance, such as the Euclidean distance between the cepstral coefficients or the weighted cepstral distance measure. The template whose reference pattern is closest in distance to the analyzed speech pattern, is then selected as being the recognized speech pattern.
In a paper entitled "Dynamic Programming Algorithm Optimization for Spoken Word Recognition", published in the IEEE Transactions on Acoustics, Speech and Signal Processing in February 1978, Sakoe and Chiba reported on a dynamic programming (DP) based algorithm for recognizing spoken words. DP techniques are known to be an efficient way of matching speech patterns. Sakoe and Chiba introduced the technique known as "slope constraint", wherein the warping function slope is restricted so as to discriminate between words in different categories.
Numerous spectral distance measures have been proposed including the Euclidean distance between cepstral coefficients which is widely used with
LPC-derived cepstral coefficients. Furui, in a paper entitled "Cepstral Analysis Techniques for Automatic Speaker Verification", published in the IEEE Transactions on Acoustics, Speech and Signal Processing in April 1981, proposed a weighted cepstral distance measure which further reduces the percentage of errors in recognition.
In a paper entitled "A Weighted Cepstral Distance Measure for Speech Recognition", published in the IEEE Transactions on Acoustics, Speech and Signal Processing in October 1987, Tohkura proposed an improved weighted cepstral distance measure as a means to improve the speech recognition rate.
Referring now to Fig. 2, the operation of the DTW method will be explained. In Fig. 2, speech patterns A and B are shown along the i-axis and
j-axis, respectively. Speech patterns A and B are expressed as sequences of feature vectors a1, a2, a3, ..., am and b1, b2, b3, ..., bn, respectively.
The timing differences between two speech patterns A and B, can be
depicted by a series of 'points' Ck(i,j). A 'point' refers to the intersection of frame i of pattern A with frame j of pattern B. The sequence of points C1, C2, C3, ..., Ck represents a warping function 30 which maps the time axis of pattern A, having a length m, onto the time axis of pattern B, having a length n. In the example of Fig. 2, function 30 is represented by points c1(1,1), c2(1,2), c3(2,2), c4(3,3), c5(4,3), ..., ck(m,n). Where timing differences do not exist between speech patterns A and B, function 30 coincides with the 45 degree diagonal line (j = i). The greater the timing differences, the further function 30 deviates from the 45 degree diagonal line.
Since function 30 is a model of time axis fluctuations in a speech pattern, it must abide by certain physical conditions. Function 30 can only advance forward and cannot move backwards and the patterns must advance together. These restrictions can be expressed by the following relationships:
i(k) - i(k-1) ≤ 1 and j(k) - j(k-1) ≤ 1; and i(k-1) ≤ i(k) and j(k-1) ≤ j(k).     (1)
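The monotonicity and continuity restrictions of equation (1) can be illustrated by a small validity check; the sketch below assumes a path is a list of (i,j) pairs, and the function name is ours, not the patent's:

```python
def is_valid_warping_path(points):
    """Check the conditions of equation (1): each step may advance
    i and/or j by at most 1, neither index may move backwards, and
    the patterns must advance together (no repeated point)."""
    for (i_prev, j_prev), (i_cur, j_cur) in zip(points, points[1:]):
        if not (0 <= i_cur - i_prev <= 1 and 0 <= j_cur - j_prev <= 1):
            return False
        if (i_cur, j_cur) == (i_prev, j_prev):
            return False
    return True

# The example path of Fig. 2 satisfies the conditions:
print(is_valid_warping_path([(1, 1), (1, 2), (2, 2), (3, 3), (4, 3)]))
```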
Warping function 30 moves one step at a time from one of three
possible directions. For example, to move from C3(2,2) to C4(3,3), function 30 can either move directly in one step from (2,2) to (3,3) or indirectly via the points at (2,3) or (3,2).
Function 30 is further restricted to remain within a swath 32 having a
width r. The outer borders 34 and 36 of swath 32 are defined by (j = i+r) and (j = i-r), respectively.
A fourth boundary condition is defined by:
i(1) = 1, j(1) = 1, and i(end) = m, j(end) = n.
Referring now to Fig. 3, the relationship between the successive points C10(10,10) and C11(11,11) of pattern matching between speech patterns A and B is illustrated, for example. In accordance with the conditions described hereinbefore, there are three possible ways to arrive at point C11(11,11): directly from C10(10,10), indicated by line 38; from C10(10,10) via the point (10,11), indicated by lines 40 and 42; or from C10(10,10) via the point (11,10), indicated by lines 44 and 46.
Furthermore, associated with each arrival point (i,j), such as point C11(11,11), is a weight Wij, such as the Euclidean or cepstral distance between the ith frame of pattern A and the jth frame of pattern B. By applying a weight Wij to each of the indirect paths 40, 42, 44 and 46 and a weight of 2Wij to the direct path 38, the path value Sij at the point (i,j) can be recursively ascertained from the equation:

Sij = min( 2Wij + Si-1,j-1 ,  Wij + Si,j-1 ,  Wij + Si-1,j )     (2)
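A minimal full-matrix sketch of the prior-art recursion of equation (2), assuming a Euclidean local distance between feature vectors; the function names are illustrative, not the patent's:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def dtw_path_value(A, B, dist=euclidean):
    """Full-matrix form of equation (2):
    S[i][j] = min(2*W + S[i-1][j-1], W + S[i][j-1], W + S[i-1][j]),
    where W is the local distance between frame i of A and frame j of B.
    Returns the best total path value at the endpoint (m, n)."""
    m, n = len(A), len(B)
    INF = float("inf")
    S = [[INF] * (n + 1) for _ in range(m + 1)]
    S[1][1] = 2 * dist(A[0], B[0])  # boundary condition i(1)=1, j(1)=1
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if i == 1 and j == 1:
                continue
            W = dist(A[i - 1], B[j - 1])
            S[i][j] = min(2 * W + S[i - 1][j - 1],
                          W + S[i][j - 1],
                          W + S[i - 1][j])
    return S[m][n]
```

For identical patterns every local distance on the diagonal is zero, so the path value is zero.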
In order to arrive at the endpoint Smn, it is necessary to calculate the best path value Sij at each point. The rectangle is scanned row by row, and the values of Sij for the complete previous row, plus the values of the present row up to the present point, are stored. The value of Smn is the best path value.
SUMMARY OF THE INVENTION
It is thus the general object of the present invention to provide an improved pattern recognition method.
According to the invention there is provided a method of dynamic time warping of two sequences of feature sets onto each other. The method includes the steps of creating a rectangular graph having the two sequences on its two axes, defining a swath of width r, where r is an odd number, centered about a diagonal line connecting the beginning point at the bottom left of the rectangle to the endpoint at the top right of the rectangle, and also defining r-1 lines within the swath. The lines defining the swath are parallel to the diagonal line. Each array element k of an r-sized array is associated with a separate one of the r lines within the swath and, for each row of the rectangle, the dynamic time warping method recursively generates new path values for each array element k as a function of the previous value of the array element k and of at least one of the current values of the two neighboring array elements k-1 and k+1 of the array element k. The latter step of recursively generating new path values is repeated for all of the rows of the rectangle, and the value of the middle array element is selected as the output value sought.
Furthermore, according to the invention there is provided a method of dynamic time warping of two sequences of feature sets onto each other, where the first sequence has a length L1, the second sequence has a length L2 and L1 is greater than L2. The method includes the steps of creating a rectangular graph having the first, longer sequence on its horizontal axis and the second sequence on its vertical axis, defining a swath of width r, where r is an odd number, centered about a diagonal line connecting the beginning point at the bottom left of the rectangle to the endpoint at the top right of the rectangle, and also defining r-1 lines, which are parallel to the diagonal line, within the swath. The method further includes the steps of associating each array element k of an r-sized array with a separate one of the r lines within the swath and, for each row of the rectangle, recursively generating new path values for each array element k as a function of the previous value of array element k and of at least one of the current values of the two neighboring array elements k-1 and k+1. The latter step is repeated for all of the rows of the rectangle. For every L1/(L1-L2) rows of the rectangle, a new path value for an array element k=max(k)+1 is also generated and, for each of the array elements k, the new path value is replaced by the value of its neighboring array element k+1. The value of the middle array element is selected as the output value sought.
Furthermore, in accordance with a preferred embodiment of the invention, the step of selecting the output value is replaced by the step of selecting, as output, the smallest value stored in the array elements and the array element number associated therewith.
Furthermore, in accordance with a preferred embodiment of the invention, the feature sets have integer values. Additionally, in accordance with a preferred embodiment of the invention, the step of defining a swath of width r, is replaced by the step of defining a swath connecting the beginning point at the top right of the rectangle to the endpoint at the bottom left of the rectangle.
Furthermore, in accordance with a preferred embodiment of the invention, there is provided a method of pattern recognition including the steps of generating floating point feature sets of a set of reference patterns, normalizing the feature sets by their standard deviations across the set of reference patterns and selecting only the integer portions of the result, storing the portions as integer feature sets for the reference patterns, generating, for every input pattern, a feature set and formatting an integer value in accordance with the normalizing step described above, and comparing the integer feature set of the input pattern to at least one of the integer feature sets of the reference patterns.
Additionally, in accordance with a preferred embodiment of the invention, the step of formatting an integer value includes the steps of calculating the average value of the input patterns, calculating the standard deviation of each of the feature sets, dividing each of the feature sets by the calculated standard deviation, multiplying by a factor q, and calculating the integer value.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will be understood and appreciated more fully from the following detailed description taken in conjunction with the accompanying drawings in which:
Fig. 1 is a flow chart illustration of a prior art pattern recognition system using conventional cepstrum coefficients and a distance calculator via dynamic time warping (DTW);
Fig. 2 is a schematic illustration of the relationship between two speech
patterns A and B, along i-axis and j-axis, respectively, in accordance with the prior art;
Fig. 3 is a schematic illustration of the relationship between two successive
points of pattern matching between the two speech patterns A and B;
Fig. 4 is a flow chart illustration of a distance pattern recognition system,
constructed and operative in accordance with a preferred embodiment of the
present invention;
Fig. 5 is a schematic illustration of the relationship between two speech patterns X and Y, of approximately equal length, along the i-axis and j-axis, respectively;
Fig. 6 is a schematic illustration detailing the end and start points between two speech patterns X and Y; and
Fig. 7 is a schematic illustration of the relationship between two speech patterns X and Y, of unequal lengths.
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT
Reference is now made to Fig. 4 which is a flow chart representation of the distance pattern recognition system (DPR), constructed and operative in accordance with a preferred embodiment of the present invention. The following description relates to audio or speech patterns, though it should be understood that the present invention is not restricted thereto and can apply to any kind of
pattern.
The DPR system converts the audio pattern 50 to be analyzed, into a feature set in an integer format, using a processor, generally designated 52. The integer
feature set for the test pattern is then compared with a database 54 containing reference patterns, by measuring the spectral distance between the test pattern and each reference pattern, using dynamic time warping (DTW) by the DTW unit 56. The reference pattern which is closest to the test pattern, is then selected as the recognized speech pattern 58. It should be noted that integer format requires less storage space than floating point format.
Processor 52 digitizes audio pattern 50 using an analog/digital converter 60 and then detects the endpoints of each audio pattern 50 using detector 62. The output is an audio word. Processor 52 breaks the word into frames and then
extracts the features of each frame via a feature extractor, generally designated 64. Feature extractor 64 comprises a linear prediction coefficient (LPC)
calculator 66, a cepstrum converter 68 and an integer formatter 70. LPC calculator 66 computes the linear prediction coefficients (LPC) for each frame. Cepstrum converter 68 converts the LPC coefficients of each frame into a set of cepstrum coefficients. Finally integer formatter 70 normalizes and converts the
cepstrum coefficients of each frame into integer format. The integer coefficients are the feature set of the frame. Database 54 comprises reference patterns,
which have been previously prepared, using the process hereinbefore described.
Prior to operation, for each cepstrum coefficient, D1 , D2, etc. in the feature
sets of the database, integer formatter 70 calculates its average value and the standard deviation. Then, for each cepstrum coefficient in each feature set (whether of the reference database or of an incoming feature set), integer
formatter 70 divides each cepstrum coefficient D, by its associated standard deviation σ,, multiplies the result by a factor q and saves the integer portion of
the result. The constant q is any number which results in the integer portions for all the cepstrum coefficients being within a range of -100 to +100. Thus, the integer coefficient does not require storage of more than one byte of 8 bits.
Using integer formatter 70 enables the full dynamic range of the resolution to be
used.
Thus, for example, for five cepstrum coefficients Di of 5.2, -4.0, 5.4, 6.4 and 20, the standard deviation σ is 6.6513. If q=20, dividing each cepstrum coefficient by σ and multiplying by q results in values of 15.64, -12.03, 16.23, 19.24 and 60.14, respectively. The integer coefficients are thus 15, -12, 16, 19 and 60, respectively.
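The formatting step can be sketched as follows; σ = 6.6513 is taken from the worked example above (in practice each coefficient has its own σ, computed across the reference database, so the single shared σ here is a simplification):

```python
def to_integer_features(coeffs, sigma, q=20):
    """Divide each cepstrum coefficient by its standard deviation,
    multiply by the factor q, and keep only the integer portion.
    int() truncates toward zero, which the example's -12 requires."""
    return [int(d / sigma * q) for d in coeffs]

# Worked example from the description (single sigma for brevity):
print(to_integer_features([5.2, -4.0, 5.4, 6.4, 20], sigma=6.6513))
# [15, -12, 16, 19, 60]
```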
Reference is now made to Fig. 5 which is a schematic illustration of the
relationship between two audio patterns X and Y, of equal length, along the i-axis and j-axis, respectively. Patterns X and Y have sequences of frames, with which are associated integer feature vectors, designated x1, x2, ..., xm and y1, y2, ..., yn,
respectively. Fig. 5 is useful for understanding the operation of the DTW unit 56 and is similar to Fig. 2.
For identical speech patterns, that is, where timing differences do not exist, the warping function F coincides with the 45 degree diagonal line D (where x=y); otherwise, the warping function approximates the 45 degree diagonal line D. The DTW unit of the present invention scans row by row through a swath of width r.
In the present invention, the points in a scan row are labeled Sp, where p is defined by:

-r/2 ≤ p ≤ +r/2
Thus, for example, for a swath width of r = 5, p is -2, -1, 0, +1 or +2. Thus, each line contains five points, designated S-2, S-1, S0, S+1 and S+2, centered about point S0 which lies on the ideal diagonal line D. The beginning and end of the path through the space of Fig. 5 are represented by Sb and Se and also lie on diagonal line D.
It is a feature of the present invention that DTW unit 56 measures the spectral distance between the test pattern X and the reference pattern Y by calculating the best path value Sp at each point centered about S0.
As hereinbefore described with respect to the prior art, weightings can be applied to the distance measurements. Any weighting formulation can be utilized. A weight Wij is applied to the indirect paths and a weight of 2Wij is applied to the direct path.
It is noted that since p is centered about the diagonal line D, j = i+p.
At point T0, the path values which are used for calculating the best value at T0 are along direct path S0 and indirect paths T-1 and S+1. Similarly, at point T+1, the path values which are used for calculating the best value at T+1 are T0, S+1 and S+2. Thus, at point T0, the path values which need to be retained for calculating subsequent best values are S0, S+1, S+2, T-2 and T-1.
It is noted that, in the case of the present invention, once the best path value for T0 is calculated, the value S0 is no longer required and the value T0 can be stored 'in place' of S0. Thus, at point T+1, the path values which are required for calculating the best value can be rewritten as S0, S+1 and S+2, where S0 is the 'new' value which equals the value for T0. Similarly, the values T-1 and T-2 are stored 'in place' of S-1 and S-2, respectively. The final value of S0 for endpoint Se yields the required path value for the test pattern X, vis-a-vis the reference pattern Y.
The above description can be written recursively as the equation:

Sp = min( Sp + 2Wx,x+p ,  Sp-1 + Wx,x+p ,  Sp+1 + Wx,x+p )     (3)

where Wx,x+p is the distance between frame x of pattern X and frame x+p of pattern Y.
For test audio pattern X, having a length m, the best path value Sp to arrive at any point in row x, for x = 1...m, is the minimum distance of three possibilities. Points outside the swath, that is, with p > +2 or p < -2 for r = 5, equal infinity.
In summary, the only values which need to be stored for subsequent calculations of best path values are the path values for S-2, S-1, S0, S+1 and S+2.
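The row-by-row, in-place update described above can be sketched in Python. Updating p in ascending order means S[p-1] already holds the current row's value while S[p] and S[p+1] still hold the previous row's, exactly as required; the Euclidean local distance and the function name are our assumptions:

```python
import math

def swath_dtw(X, Y, r=5):
    """In-place swath DTW for equal-length patterns: a single r-sized
    array holds the path values S_p for the diagonals j = i + p,
    p = -r//2 .. +r//2, and each row overwrites it in place."""
    dist = lambda u, v: math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    assert len(X) == len(Y) and r % 2 == 1
    h, m = r // 2, len(X)
    INF = float("inf")
    S = [INF] * r                    # S[p + h] holds path value S_p
    S[h] = 2 * dist(X[0], Y[0])      # startpoint Sb lies on diagonal D
    for i in range(1, m):
        for p in range(-h, h + 1):   # ascending p keeps the update in place
            j = i + p
            if j < 0 or j >= m:
                S[p + h] = INF       # point falls outside the rectangle
                continue
            W = dist(X[i], Y[j])
            diag = S[p + h]                          # old S_p (previous row)
            left = S[p + h - 1] if p > -h else INF   # new T_{p-1}, current row
            up = S[p + h + 1] if p < h else INF      # old S_{p+1} (previous row)
            S[p + h] = min(diag + 2 * W, left + W, up + W)
    return S[h]                      # final S0 at endpoint Se
```

Only r values are ever stored, instead of a full row of the rectangle as in the prior art.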
Reference is now made to Fig. 6, which schematically details the end and start points, Se and Sb, respectively, between two patterns X and Y.
The startpoint Sb which lies on the diagonal line D is assumed to have a path value S0 and similarly the final best value for S0 coincides with endpoint Se.
When endpoint Se is reached, the final five values retained (S-2, S-1, S0, S+1 and S+2) refer to the five points, designated E-2, E-1, E0, E+1 and E+2, along the boundary of the warping function. Since r/2=2 and the warping function follows a 45 degree line, the last row only contains the path values E-2, E-1 and E0. All other points in the row would have to utilize points outside the swath, which is not allowed. The previous row retains the value of E+1, which has not been overwritten, since the new path values for the last row are outside the swath. Similarly, the value stored in E+2 refers to the second to last row.
Since the endpoint detector 62 may have incorrectly selected the endpoints of the audio pattern, the start and end points, Sb and Se, respectively, are not necessarily correct. Therefore, even if the startpoint Sb is known, the final value of S0 corresponding with endpoint Se may not accurately reflect the end point and may not have the best path value.
If the endpoint Se is known and the startpoint Sb is unknown, the best path value process, described hereinabove, can be carried out in reverse. Thus, the final path value for Sb is the best of the five boundary values B-2, B-1, B0, B+1 and B+2, illustrated.
If the best overall path value is found to be E+1, for example, the assumed length for the test pattern is shorter than previously and thus is not equal in length to the reference pattern. Thus, the path values for E-2, E-1, E0, E+1 and E+2 have to be normalized by their path lengths and only then compared.
If neither start nor end points are known, the startpoint Sb is assumed with a value S0 and the final best path value (one of the five values E-2, E-1, E0, E+1 and E+2) is found. The point having the best total path value is then taken as the startpoint and the process is carried out in reverse to find the best path value for Sb. Therefore, in accordance with the present invention, the path value for the reference pattern is the best path value from among the boundary path values B-2, B-1, B0, B+1 and B+2.
Reference is now made to Fig. 7 which is a schematic illustration of the
relationship between two audio patterns X and Y, of unequal length, along the
i-axis and j-axis, respectively. The relationship between the lengths of X and Y is shown, for example, as being 8:12 (2:3); that is, pattern Y is 1.5 times longer than pattern X. For non-identical speech patterns, the straight line G, connecting the start and end points Sb and Se, respectively, does not coincide with the 45 degree diagonal line D, shown dashed. In the example of Fig. 7, path values coincide with line G only every third row. That is, points i=2,j=3 and i=4,j=6 lie on line G.
The path values S-2, S-1, S0, S+1 and S+2 are shown for each of rows x=1, x=2 and x=3. Each group of path values is designated with a prefix indicating the row, such as the prefix "1" for x=1. Thus, path values 1S-2, 1S-1, 1S0, 1S+1 and 1S+2 refer to the row x=1.
The best path value process is carried out as described hereinbefore for patterns of equal length. Thus, startpoint Sb assumes a value of S0. Values are calculated for each row. Every z rows, where z=n/(n-m), it is necessary to adjust for the inequality of the test pattern lengths. In the example, where z=3 (12/(12-8)), an extra path value S+3 is calculated every third row. Thus, for the first two rows (x=0 and x=1), the five Sk values (S-2, S-1, S0, S+1 and S+2) are calculated, as hereinbefore described. For the third row, an extra value for 2S+3 is calculated. Then value 2S-2 is discarded and the value for 2S-1 is stored 'in place' of 2S-2. Similarly, each of the stored values, 2S0, 2S+1, 2S+2 and 2S+3, are stored 'in place' of their neighbors, 2S-1, 2S0, 2S+1 and 2S+2, respectively.
Every z rows, the path value stored in S0 'jumps' back on track and coincides with the straight line G. Thus, in the example, a 'jump' is made on rows x=2, x=5 and the final row x=8. The final value of S0 will then coincide with the endpoint Se
and yield the total path value for the two patterns. The path values for patterns of unequal length may be represented by the
following equation:

Sk = min( Sk + 2Wx,x+k+l ,  Sk-1 + Wx,x+k+l ,  Sk+1 + Wx,x+k+l )     (4)

where l = the number of 'jumps' performed to date, which is updated every z rows, and z = n/(n-m).
The track of the path value S0 is shown by double line H.
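For the Fig. 7 example, the jump schedule can be sketched as a short helper; the convention that a jump falls on every zth row counting from x=0 is our assumption, chosen so as to reproduce the rows x=2, 5 and 8 quoted above:

```python
def jump_rows(m, n):
    """Rows at which the path value S0 'jumps' back onto the straight
    line G, for patterns of lengths m and n with n > m: the jumps are
    z = n/(n-m) rows apart (illustrative convention, not the patent's
    exact wording)."""
    z = n // (n - m)
    return [x for x in range(m + 1) if (x + 1) % z == 0]

print(jump_rows(8, 12))  # [2, 5, 8], as in the example of Fig. 7
```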
As will be appreciated by persons knowledgeable in the art, the various embodiments hereinbefore referred to are given by way of example only and do not in any way limit the present invention.
Those skilled in the art will readily appreciate that various changes, modifications and variations may be applied to the preferred embodiments without departing from the scope of the invention as defined in and by the appended claims.

Claims

1. A method of dynamic time warping of two sequences of feature sets onto each other, the method comprising the steps of: a. creating a rectangular graph having the two sequences on its two axes; b. defining a swath of width r, where r is an odd number, centered about a diagonal line connecting the beginning point at the bottom left of the rectangle to the endpoint at the top right of the rectangle;
c. also defining r-1 lines within said swath, said lines being parallel to said diagonal line; d. associating each array element k of an r-sized array with a separate one of the r lines within said swath;
e. for each row of said rectangle, recursively generating new path values
for each array element k as a function of the previous value of said array element k and of at least one of the current values of the two neighboring array elements k-1 and k+1 of said array element k; f. repeating step e) for all of the rows of said rectangle; g. selecting, as output, the value of the middle array element.
2. A method according to claim 1 and wherein said step of selecting is replaced by the step of selecting, as output, the smallest value stored in the array elements and the array element number associated therewith.
3. A method according to claim 1 and wherein said function is defined by the equation:
Sk = min( Sk + 2Wi,i+k ,  Sk-1 + Wi,i+k ,  Sk+1 + Wi,i+k )

where Wi,i+k is the distance between the ith frame of the first sequence and the jth frame (j = i+k) of the second sequence.
4. A method according to claim 1 and wherein said feature sets have integer values.
5. A method according to claim 1 and wherein said step of defining a swath of width r, is replaced by the step of defining a swath connecting the beginning point at the top right of the rectangle to the endpoint at the bottom left of the rectangle.
6. A method of dynamic time warping of a first and second sequence of feature sets onto each other, said first sequence set having a length L1 , said second sequence set having a length L2 and L1 being greater than L2, the method comprising the steps of: a. creating a rectangular graph having said first sequence on its horizontal axis and said second sequence on its vertical axis; b. defining a swath of width r, where r is an odd number, centered about a diagonal line connecting the beginning point at the bottom left of the
rectangle to the endpoint at the top right of the rectangle; c. also defining r-1 lines within said swath, said lines being parallel to said diagonal line;
d. associating each array element k of an r-sized array with a separate one of the r lines within said swath; e. for each row of said rectangle, recursively generating new path values for each array element k as a function of the previous value of said array element k and of at least one of the current values of the two
neighboring array elements k-1 and k+1; f. repeating step e) for all of the rows of said rectangle; g. for every L1/(L1-L2) rows of said rectangle: i. performing step e); ii. generating a new path value for an array element k=max(k)+1 of said array element k; and iii. replacing the new path values from step i) for each of said array elements k by the value of its neighboring array element k+1; h. selecting, as output, the value of the middle array element.
7. A method according to claim 6 and wherein said step of selecting is replaced by the step of selecting, as output, the smallest value stored in
the array elements and the array element number associated therewith.
8. A method according to claim 6 and wherein said function is defined by the equation:
Sk = min( Sk + 2Wi,i+k ,  Sk-1 + Wi,i+k ,  Sk+1 + Wi,i+k )

where Wi,i+k is the distance between the ith frame of the first sequence and the jth frame (j = i+k) of the second sequence.
9. A method according to claim 6 and wherein said feature sets have integer
values.
10. A method according to claim 6 and wherein said step of defining a swath of width r, is replaced by the step of defining a swath connecting the beginning point at the top right of the rectangle to the endpoint at the
bottom left of the rectangle.
11. A method of pattern recognition comprising the steps of: a. generating feature sets, having floating points, of a set of reference
patterns; b. normalizing said feature sets by their standard deviations across the
set of reference patterns and selecting only the integer portions of the result; c. storing said portions as integer feature sets for said reference patterns;
d. for every input pattern, generating a feature set and formatting an integer value in accordance with step b); e. comparing said integer feature set of said input pattern to at least one of the integer feature sets of said reference patterns.
12. A method according to claim 11 and wherein said step of formatting an integer value comprises the steps of:
a. calculating the average value of said input patterns; b. calculating the standard deviation of each of said feature sets; c. dividing each of said feature sets by said calculated standard deviation and multiplying by a factor q; and d. calculating the integer value of the result of step c).
PCT/US1996/004341 1995-03-30 1996-03-29 Pattern recognition system WO1996030893A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU53789/96A AU5378996A (en) 1995-03-30 1996-03-29 Pattern recognition system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IL11320495A IL113204A (en) 1995-03-30 1995-03-30 Pattern recognition system
IL113204 1995-03-30

Publications (1)

Publication Number Publication Date
WO1996030893A1 true WO1996030893A1 (en) 1996-10-03

Family

ID=11067302

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1996/004341 WO1996030893A1 (en) 1995-03-30 1996-03-29 Pattern recognition system

Country Status (4)

Country Link
US (1) US5809465A (en)
AU (1) AU5378996A (en)
IL (1) IL113204A (en)
WO (1) WO1996030893A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109192223A (en) * 2018-09-20 2019-01-11 广州酷狗计算机科技有限公司 Method and apparatus for audio alignment

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
US6731803B1 (en) 1999-07-12 2004-05-04 Advanced Recognition Technologies, Ltd Points based handwriting recognition system
DK2364495T3 (en) * 2008-12-10 2017-01-16 Agnitio S L Method of verifying the identity of a speaker and associated computer-readable medium and computer
JP5728918B2 (en) * 2010-12-09 2015-06-03 ヤマハ株式会社 Information processing device
CN103871412B (en) * 2012-12-18 2016-08-03 联芯科技有限公司 Dynamic time warping method and system based on scrolling along 45-degree diagonal lines
CN116821713B (en) * 2023-08-31 2023-11-24 山东大学 Shock insulation efficiency evaluation method and system based on multivariable dynamic time warping algorithm

Citations (5)

Publication number Priority date Publication date Assignee Title
US4592086A (en) * 1981-12-09 1986-05-27 Nippon Electric Co., Ltd. Continuous speech recognition system
US4667341A (en) * 1982-02-01 1987-05-19 Masao Watari Continuous speech recognition system
US4742547A (en) * 1982-09-03 1988-05-03 Nec Corporation Pattern matching apparatus
US4910783A (en) * 1983-03-22 1990-03-20 Matsushita Electric Industrial Co., Ltd. Method and apparatus for comparing patterns
US5121465A (en) * 1987-03-16 1992-06-09 Nec Corporation Pattern matching system

Family Cites Families (16)

Publication number Priority date Publication date Assignee Title
FR2262834B1 (en) * 1973-04-09 1977-10-21 Calspan Corp
JPS57147781A (en) * 1981-03-06 1982-09-11 Nec Corp Pattern matching device
US4384273A (en) * 1981-03-20 1983-05-17 Bell Telephone Laboratories, Incorporated Time warp signal recognition processor for matching signal patterns
US4570232A (en) * 1981-12-21 1986-02-11 Nippon Telegraph & Telephone Public Corporation Speech recognition apparatus
US4488243A (en) * 1982-05-03 1984-12-11 At&T Bell Laboratories Dynamic time warping arrangement
US4509187A (en) * 1982-06-14 1985-04-02 At&T Bell Laboratories Time warp signal recognition processor using recirculating and/or reduced array of processor cells
US4918733A (en) * 1986-07-30 1990-04-17 At&T Bell Laboratories Dynamic time warping using a digital signal processor
US4906940A (en) * 1987-08-24 1990-03-06 Science Applications International Corporation Process and apparatus for the automatic detection and extraction of features in images and displays
JPH01183793A (en) * 1988-01-18 1989-07-21 Toshiba Corp Character recognizing device
JPH07104952B2 (en) * 1989-12-28 1995-11-13 シャープ株式会社 Pattern matching device
KR950001601B1 (en) * 1990-07-09 1995-02-27 니폰 덴신 덴와 가부시끼가시야 Neural network circuit
JPH07117950B2 (en) * 1991-09-12 1995-12-18 株式会社エイ・ティ・アール視聴覚機構研究所 Pattern recognition device and pattern learning device
CA2077274C (en) * 1991-11-19 1997-07-15 M. Margaret Withgott Method and apparatus for summarizing a document without document image decoding
CA2077969C (en) * 1991-11-19 1997-03-04 Daniel P. Huttenlocher Method of deriving wordshapes for subsequent comparison
US5682464A (en) * 1992-06-29 1997-10-28 Kurzweil Applied Intelligence, Inc. Word model candidate preselection for speech recognition using precomputed matrix of thresholded distance values
US5459798A (en) * 1993-03-19 1995-10-17 Intel Corporation System and method of pattern recognition employing a multiprocessing pipelined apparatus with private pattern memory


Non-Patent Citations (1)

Title
IEEE, NEURAL NETWORKS FOR SIGNAL PROCESSING IV. PROCEEDINGS OF THE 1994 IEEE WORKSHOP, issued 06-08 September 1994, MATSUURA et al., "Word Recognition Using a Neural Network and a Phonetically Based DTW", pages 329-334. *


Also Published As

Publication number Publication date
US5809465A (en) 1998-09-15
IL113204A0 (en) 1995-06-29
AU5378996A (en) 1996-10-16
IL113204A (en) 1999-03-12

Similar Documents

Publication Publication Date Title
US5684925A (en) Speech representation by feature-based word prototypes comprising phoneme targets having reliable high similarity
US5315689A (en) Speech recognition system having word-based and phoneme-based recognition means
US20030154075A1 (en) Knowledge-based strategies applied to n-best lists in automatic speech recognition systems
US7324941B2 (en) Method and apparatus for discriminative estimation of parameters in maximum a posteriori (MAP) speaker adaptation condition and voice recognition method and apparatus including these
JPH01167896A (en) Voice input device
US5825977A (en) Word hypothesizer based on reliably detected phoneme similarity regions
WO1993013519A1 (en) Composite expert
US4937870A (en) Speech recognition arrangement
US5309547A (en) Method of speech recognition
EP1005019A2 (en) Segment-based similarity measurement method for speech recognition
EP0648366A1 (en) Speech regognition system utilizing vocabulary model preselection
US6314392B1 (en) Method and apparatus for clustering-based signal segmentation
US5809465A (en) Pattern recognition system
US5682464A (en) Word model candidate preselection for speech recognition using precomputed matrix of thresholded distance values
EP0344017B1 (en) Speech recognition system
US6195638B1 (en) Pattern recognition system
US4908864A (en) Voice recognition method and apparatus by updating reference patterns
JPH0247760B2 (en)
JP3403838B2 (en) Phrase boundary probability calculator and phrase boundary probability continuous speech recognizer
JPH0792989A (en) Speech recognizing method
JP3469375B2 (en) Method for determining certainty of recognition result and character recognition device
EP1414023A1 (en) Method for recognizing speech
KR100755483B1 (en) Viterbi decoding method with word boundary detection error compensation
KR100293465B1 (en) Speech recognition method
JP3353334B2 (en) Voice recognition device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BB BG BR BY CA CH CN CZ DE DK EE ES FI GB GE HU IS JP KE KG KP KR KZ LK LR LS LT LU LV MD MG MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG UZ VN

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA