WO2002098169A1 - Detecting voiced and unvoiced speech using both acoustic and nonacoustic sensors - Google Patents

Detecting voiced and unvoiced speech using both acoustic and nonacoustic sensors

Info

Publication number
WO2002098169A1
Authority
WO
WIPO (PCT)
Prior art keywords
acoustic signals
speech
acoustic
noise
difference parameters
Application number
PCT/US2002/017251
Other languages
French (fr)
Inventor
Gregory C. Burnett
Original Assignee
Aliphcom
Priority claimed from US09/905,361 external-priority patent/US20020039425A1/en
Priority claimed from US09/990,847 external-priority patent/US20020099541A1/en
Application filed by Aliphcom filed Critical Aliphcom
Priority to EP02739572A priority Critical patent/EP1415505A1/en
Priority to KR1020037015511A priority patent/KR100992656B1/en
Priority to JP2003501229A priority patent/JP2005503579A/en
Priority to CA002448669A priority patent/CA2448669A1/en
Publication of WO2002098169A1 publication Critical patent/WO2002098169A1/en

Classifications

    • G10L 13/04: Details of speech synthesis systems, e.g. synthesiser structure or memory management
    • G10L 19/0204: Speech or audio analysis-synthesis techniques for redundancy reduction using spectral analysis with subband decomposition
    • G10L 21/0208: Speech enhancement; noise filtering
    • G10L 2021/02082: Noise filtering where the noise is echo or reverberation of the speech
    • G10L 2021/02161: Noise estimation characterised by the number of inputs available containing the signal or the noise to be suppressed
    • G10L 2021/02168: Noise estimation taking place exclusively during speech pauses
    • G10L 25/78: Detection of presence or absence of voice signals

Abstract

Systems and methods are provided for detecting voiced and unvoiced speech in acoustic signals having varying levels of background noise. The systems (Fig. 3) receive acoustic signals at two microphones (Mic 1, Mic 2), and generate difference parameters between the acoustic signals received at each of the two microphones (Mic 1, Mic 2). The difference parameters are representative of the relative difference in signal gain between portions of the received acoustic signals. The systems identify information of the acoustic signals as unvoiced speech when the difference parameters exceed a first threshold, and identify information of the acoustic signals as voiced speech when the difference parameters exceed a second threshold. Further, embodiments of the systems include non-acoustic sensors (20) that receive physiological information to aid in identifying voiced speech.

Description

DETECTING VOICED AND UNVOICED SPEECH USING BOTH ACOUSTIC
AND NONACOUSTIC SENSORS
TECHNICAL FIELD The disclosed embodiments relate to the processing of speech signals.
BACKGROUND
The ability to correctly identify voiced and unvoiced speech is critical to many speech applications including speech recognition, speaker verification, noise suppression, and many others. In a typical acoustic application, speech from a human speaker is captured and transmitted to a receiver in a different location. In the speaker's environment there may exist one or more noise sources that pollute the speech signal, or the signal of interest, with unwanted acoustic noise. This makes it difficult or impossible for the receiver, whether human or machine, to understand the user's speech.
Typical methods for classifying voiced and unvoiced speech have relied mainly on the acoustic content of microphone data, which is plagued by problems with noise and the corresponding uncertainties in signal content. This is especially problematic now with the proliferation of portable communication devices like cellular telephones and personal digital assistants because, in many cases, the quality of service provided by the device depends on the quality of the voice services offered by the device. There are methods known in the art for suppressing the noise present in the speech signals, but these methods demonstrate performance shortcomings that include unusually long computing time, requirements for cumbersome hardware to perform the signal processing, and distorting the signals of interest.
BRIEF DESCRIPTION OF THE FIGURES
Figure 1 is a block diagram of a NAVSAD system, under an embodiment. Figure 2 is a block diagram of a PSAD system, under an embodiment.
Figure 3 is a block diagram of a denoising system, referred to herein as the Pathfinder system, under an embodiment.
Figure 4 is a flow diagram of a detection algorithm for use in detecting voiced and unvoiced speech, under an embodiment. Figure 5A plots the received GEMS signal for an utterance along with the mean correlation between the GEMS signal and the Mic 1 signal and the threshold for voiced speech detection.
Figure 5B plots the received GEMS signal for an utterance along with the standard deviation of the GEMS signal and the threshold for voiced speech detection.
Figure 6 plots voiced speech detected from an utterance along with the GEMS signal and the acoustic noise.
Figure 7 is a microphone array for use under an embodiment of the PSAD system.
Figure 8 is a plot of ΔM versus d1 for several Δd values, under an embodiment.
Figure 9 shows a plot of the gain parameter as the sum of the absolute values of H1(z) and the acoustic data or audio from microphone 1. Figure 10 is an alternative plot of acoustic data presented in Figure 9.
In the figures, the same reference numbers identify identical or substantially similar elements or acts.
Any headings provided herein are for convenience only and do not necessarily affect the scope or meaning of the claimed invention.
DETAILED DESCRIPTION
Systems and methods for discriminating voiced and unvoiced speech from background noise are provided below including a Non-Acoustic Sensor Voiced Speech Activity Detection (NAVSAD) system and a Pathfinder Speech Activity Detection (PSAD) system. The noise removal and reduction methods provided herein, while allowing for the separation and classification of unvoiced and voiced human speech from background noise, address the shortcomings of typical systems known in the art by cleaning acoustic signals of interest without distortion.

Figure 1 is a block diagram of a NAVSAD system 100, under an embodiment. The NAVSAD system couples microphones 10 and sensors 20 to at least one processor 30. The sensors 20 of an embodiment include voicing activity detectors or non-acoustic sensors. The processor 30 controls subsystems including a detection subsystem 50, referred to herein as a detection algorithm, and a denoising subsystem 40. Operation of the denoising subsystem 40 is described in detail in the Related Applications. The NAVSAD system works extremely well in any background acoustic noise environment.
Figure 2 is a block diagram of a PSAD system 200, under an embodiment. The PSAD system couples microphones 10 to at least one processor 30. The processor 30 includes a detection subsystem 50, referred to herein as a detection algorithm, and a denoising subsystem 40. The PSAD system is highly sensitive in low acoustic noise environments and relatively insensitive in high acoustic noise environments. The PSAD can operate independently or as a backup to the NAVSAD, detecting voiced speech if the NAVSAD fails. Note that the detection subsystems 50 and denoising subsystems 40 of both the NAVSAD and PSAD systems of an embodiment are algorithms controlled by the processor 30, but are not so limited. Alternative embodiments of the NAVSAD and PSAD systems can include detection subsystems 50 and/or denoising subsystems 40 that comprise additional hardware, firmware, software, and/or combinations of hardware, firmware, and software. Furthermore, functions of the detection subsystems 50 and denoising subsystems 40 may be distributed across numerous components of the NAVSAD and PSAD systems.
Figure 3 is a block diagram of a denoising subsystem 300, referred to herein as the Pathfinder system, under an embodiment. The Pathfinder system is briefly described below, and is described in detail in the Related Applications. Two microphones Mic 1 and Mic 2 are used in the Pathfinder system, and Mic 1 is considered the "signal" microphone. With reference to Figure 1, the Pathfinder system 300 is equivalent to the NAVSAD system 100 when the voicing activity detector (VAD) 320 is a non-acoustic voicing sensor 20 and the noise removal subsystem 340 includes the detection subsystem 50 and the denoising subsystem 40. With reference to Figure 2, the Pathfinder system 300 is equivalent to the PSAD system 200 in the absence of the VAD 320, and when the noise removal subsystem 340 includes the detection subsystem 50 and the denoising subsystem 40.

The NAVSAD and PSAD systems support a two-level commercial approach in which (i) a relatively less expensive PSAD system supports an acoustic approach that functions in most low- to medium-noise environments, and (ii) a NAVSAD system adds a non-acoustic sensor to enable detection of voiced speech in any environment. Unvoiced speech is normally not detected using the sensor, as it normally does not sufficiently vibrate human tissue. However, in high noise situations detecting the unvoiced speech is not as important, as it is normally very low in energy and easily washed out by the noise. Therefore in high noise environments the unvoiced speech is unlikely to affect the voiced speech denoising. Unvoiced speech information is most important in the presence of little to no noise and, therefore, the unvoiced detection should be highly sensitive in low noise situations, and insensitive in high noise situations. This is not easily accomplished, and comparable acoustic unvoiced detectors known in the art are incapable of operating under these environmental constraints.

The NAVSAD and PSAD systems include an array algorithm for speech detection that uses the difference in frequency content between two microphones to calculate a relationship between the signals of the two microphones. This is in contrast to conventional arrays that attempt to use the time/phase difference of each microphone to remove the noise outside of an "area of sensitivity". The methods described herein provide a significant advantage, as they do not require a specific orientation of the array with respect to the signal.
Further, the systems described herein are sensitive to noise of every type and every orientation, unlike conventional arrays that depend on specific noise orientations. Consequently, the frequency-based arrays presented herein are unique as they depend only on the relative orientation of the two microphones themselves with no dependence on the orientation of the noise and signal with respect to the microphones. This results in a robust signal processing system with respect to the type of noise, microphones, and orientation between the noise/signal source and the microphones. The systems described herein use the information derived from the
Pathfinder noise suppression system and/or a non-acoustic sensor described in the Related Applications to determine the voicing state of an input signal, as described in detail below. The voicing state includes silent, voiced, and unvoiced states. The NAVSAD system, for example, includes a non-acoustic sensor to detect the vibration of human tissue associated with speech. The non-acoustic sensor of an embodiment is a General Electromagnetic Movement Sensor (GEMS) as described briefly below and in detail in the Related Applications, but is not so limited. Alternative embodiments, however, may use any sensor that is able to detect human tissue motion associated with speech and is unaffected by environmental acoustic noise.
The GEMS is a radio frequency device (2.4 GHz) that allows the detection of moving human tissue dielectric interfaces. The GEMS includes an RF interferometer that uses homodyne mixing to detect small phase shifts associated with target motion. In essence, the sensor sends out weak electromagnetic waves (less than 1 milliwatt) that reflect off of whatever is around the sensor. The reflected waves are mixed with the original transmitted waves and the results analyzed for any change in position of the targets. Anything that moves near the sensor will cause a change in phase of the reflected wave that will be amplified and displayed as a change in voltage output from the sensor. A similar sensor is described by Gregory C. Burnett (1999) in "The physiological basis of glottal electromagnetic micropower sensors (GEMS) and their use in defining an excitation function for the human vocal tract"; Ph.D. Thesis, University of California at Davis.
Figure 4 is a flow diagram of a detection algorithm 50 for use in detecting voiced and unvoiced speech, under an embodiment. With reference to Figures 1 and 2, both the NAVSAD and PSAD systems of an embodiment include the detection algorithm 50 as the detection subsystem 50. This detection algorithm 50 operates in real-time and, in an embodiment, operates on 20 millisecond windows and steps 10 milliseconds at a time, but is not so limited. The voice activity determination is recorded for the first 10 milliseconds, and the second 10 milliseconds functions as a "look-ahead" buffer. While an embodiment uses the 20/10 windows, alternative embodiments may use numerous other combinations of window values.
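As a minimal sketch of the 20 ms window / 10 ms step scheme just described (assuming the 8 kHz sample rate quoted later for the cross-correlation; the function name and framing details are illustrative, not the patent's implementation):

```python
import numpy as np

def frame_signal(x, fs=8000, window_ms=20, step_ms=10):
    """Split a signal into overlapping analysis windows.

    Sketch of the 20 ms window / 10 ms step scheme described above:
    the first 10 ms of each window carries the voice activity
    decision and the second 10 ms serves as the look-ahead buffer.
    """
    x = np.asarray(x, dtype=float)
    win = int(fs * window_ms / 1000)   # 160 samples at 8 kHz
    step = int(fs * step_ms / 1000)    # 80 samples at 8 kHz
    n_frames = 1 + max(0, (len(x) - win) // step)
    return np.stack([x[i * step : i * step + win] for i in range(n_frames)])
```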
Consideration was given to a number of multi-dimensional factors in developing the detection algorithm 50. The biggest consideration was maintaining the effectiveness of the Pathfinder denoising technique, described in detail in the Related Applications and reviewed herein. Pathfinder performance can be compromised if the adaptive filter training is conducted on speech rather than on noise. It is therefore important not to exclude any significant amount of speech from the VAD to keep such disturbances to a minimum.
Consideration was also given to the accuracy of the characterization between voiced and unvoiced speech signals, and distinguishing each of these speech signals from noise signals. This type of characterization can be useful in such applications as speech recognition and speaker verification.
Furthermore, the systems using the detection algorithm of an embodiment function in environments containing varying amounts of background acoustic noise. If the non-acoustic sensor is available, this external noise is not a problem for voiced speech. However, for unvoiced speech (and voiced if the non-acoustic sensor is not available or has malfunctioned) reliance is placed on acoustic data alone to separate noise from unvoiced speech. An advantage inheres in the use of two microphones in an embodiment of the Pathfinder noise suppression system, and the spatial relationship between the microphones is exploited to assist in the detection of unvoiced speech. However, there may occasionally be noise levels high enough that the speech will be nearly undetectable and the acoustic-only method will fail. In these situations, the non-acoustic sensor (or hereafter just the sensor) will be required to ensure good performance. In the two-microphone system, the speech source should be relatively louder in one designated microphone when compared to the other microphone. Tests have shown that this requirement is easily met with conventional microphones when the microphones are placed on the head, as any noise should result in an H1 with a gain near unity.

Regarding the NAVSAD system, and with reference to Figures 1 and 3, the NAVSAD relies on two parameters to detect voiced speech. These two parameters include the energy of the sensor in the window of interest, determined in an embodiment by the standard deviation (SD), and optionally the cross-correlation (XCORR) between the acoustic signal from microphone 1 and the sensor data. The energy of the sensor can be determined in any one of a number of ways, and the SD is just one convenient way to determine the energy.
For the sensor, the SD is akin to the energy of the signal, which normally corresponds quite accurately to the voicing state, but may be susceptible to movement noise (relative motion of the sensor with respect to the human user) and/or electromagnetic noise. To further differentiate sensor noise from tissue motion, the XCORR can be used. The XCORR is only calculated to 15 delays, which corresponds to just under 2 milliseconds at 8000 Hz.
The XCORR can also be useful when the sensor signal is distorted or modulated in some fashion. For example, there are sensor locations (such as the jaw or back of the neck) where speech production can be detected but where the signal may have incorrect or distorted time-based information. That is, they may not have well-defined features in time that will match with the acoustic waveform. However, XCORR is more susceptible to errors from acoustic noise, and in high-noise (<0 dB SNR) environments is almost useless. Therefore it should not be the sole source of voicing information.
The sensor detects human tissue motion associated with the closure of the vocal folds, so the acoustic signal produced by the closure of the folds is highly correlated with the closures. Therefore, sensor data that correlates highly with the acoustic signal is declared as speech, and sensor data that does not correlate well is termed noise. The acoustic data is expected to lag behind the sensor data by about 0.1 to 0.8 milliseconds (or about 1-7 samples) as a result of the delay time due to the relatively slower speed of sound (around 330 m/s). However, an embodiment uses a 15-sample correlation, as the acoustic wave shape varies significantly depending on the sound produced, and a larger correlation width is needed to ensure detection.
The SD and XCORR signals are related, but are sufficiently different so that the voiced speech detection is more reliable. For simplicity, though, either parameter may be used. The values for the SD and XCORR are compared to empirical thresholds, and if both are above their threshold, voiced speech is declared. Example data is presented and described below.
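Before turning to the example data, here is a rough sketch of the two-parameter test as it could be coded; the threshold arguments stand in for the empirical thresholds (the T2 and T1 of Figures 5B and 5A below), and the normalized-correlation form is an assumption, since the text does not specify a normalization:

```python
import numpy as np

def detect_voiced(gems_win, mic1_win, sd_thresh, xcorr_thresh, n_delays=15):
    """Hypothetical per-window voiced-speech test.

    XCORR is evaluated over 15 delays, just under 2 ms at 8 kHz,
    with the acoustic data allowed to lag the sensor data.
    """
    gems_win = np.asarray(gems_win, dtype=float)
    mic1_win = np.asarray(mic1_win, dtype=float)
    sd = gems_win.std()                          # sensor energy proxy
    g = gems_win - gems_win.mean()
    m = mic1_win - mic1_win.mean()
    denom = gems_win.std() * mic1_win.std() * len(gems_win) + 1e-12
    xcorr = max(abs(np.dot(g[: len(g) - k], m[k:])) / denom
                for k in range(n_delays))
    return sd > sd_thresh and xcorr > xcorr_thresh
```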
Figures 5A, 5B, and 6 show data plots for an example in which a subject twice speaks the phrase "pop pan", under an embodiment. Figure 5A plots the received GEMS signal 502 for this utterance along with the mean correlation 504 between the GEMS signal and the Mic 1 signal and the threshold T1 used for voiced speech detection. Figure 5B plots the received GEMS signal 502 for this utterance along with the standard deviation 506 of the GEMS signal and the threshold T2 used for voiced speech detection. Figure 6 plots voiced speech 602 detected from the acoustic or audio signal 608, along with the GEMS signal 604 and the acoustic noise 606; no unvoiced speech is detected in this example because of the heavy background babble noise 606. The thresholds have been set so that there are virtually no false negatives, and only occasional false positives. A voiced speech activity detection accuracy of greater than 99% has been attained under any acoustic background noise conditions.

The NAVSAD can determine when voiced speech is occurring with high degrees of accuracy due to the non-acoustic sensor data. However, the sensor offers little assistance in separating unvoiced speech from noise, as unvoiced speech normally causes no detectable signal in most non-acoustic sensors. If there is a detectable signal, the NAVSAD can be used, although use of the SD method is dictated as unvoiced speech is normally poorly correlated. In the absence of a detectable signal, use is made of the system and methods of the Pathfinder noise removal algorithm to determine when unvoiced speech is occurring. A brief review of the Pathfinder algorithm is given below, while a detailed description is provided in the Related Applications.
With reference to Figure 3, the acoustic information coming into Microphone 1 is denoted by m1(n), the information coming into Microphone 2 is similarly labeled m2(n), and the GEMS sensor is assumed available to determine voiced speech areas. In the z (digital frequency) domain, these signals are represented as M1(z) and M2(z). Then

$$M_1(z) = S(z) + N_2(z)$$
$$M_2(z) = N(z) + S_2(z)$$

with

$$N_2(z) = N(z)H_1(z), \qquad S_2(z) = S(z)H_2(z)$$

so that

$$M_1(z) = S(z) + N(z)H_1(z)$$
$$M_2(z) = N(z) + S(z)H_2(z) \qquad \text{(Equation 1)}$$

This is the general case for all two-microphone systems. There is always going to be some leakage of noise into Mic 1, and some leakage of signal into Mic 2. Equation 1 has four unknowns and only two relationships and cannot be solved explicitly. However, there is another way to solve for some of the unknowns in Equation 1. Examine the case where the signal is not being generated, that is, where the GEMS signal indicates voicing is not occurring. In this case, s(n) = S(z) = 0, and Equation 1 reduces to

$$M_{1n}(z) = N(z)H_1(z), \qquad M_{2n}(z) = N(z)$$

where the n subscript on the M variables indicates that only noise is being received. This leads to

$$M_{1n}(z) = M_{2n}(z)H_1(z) \quad\Longrightarrow\quad H_1(z) = \frac{M_{1n}(z)}{M_{2n}(z)} \qquad \text{(Equation 2)}$$

H1(z) can be calculated using any of the available system identification algorithms and the microphone outputs when only noise is being received. The calculation can be done adaptively, so that if the noise changes significantly H1(z) can be recalculated quickly.

With a solution for one of the unknowns in Equation 1, a solution can be found for another, H2(z), by using the amplitude of the GEMS or similar device along with the amplitude of the two microphones. When the GEMS indicates voicing, but the recent (less than 1 second) history of the microphones indicates low levels of noise, assume that n(n) = N(z) ≈ 0. Then Equation 1 reduces to

$$M_{1s}(z) = S(z), \qquad M_{2s}(z) = S(z)H_2(z)$$

which in turn leads to

$$M_{2s}(z) = M_{1s}(z)H_2(z) \quad\Longrightarrow\quad H_2(z) = \frac{M_{2s}(z)}{M_{1s}(z)}$$

which is the inverse of the H1(z) calculation, but note that different inputs are being used.

After calculating H1(z) and H2(z) above, they are used to remove the noise from the signal. Rewrite Equation 1 as

$$S(z) = M_1(z) - N(z)H_1(z)$$
$$N(z) = M_2(z) - S(z)H_2(z)$$
$$S(z) = M_1(z) - [M_2(z) - S(z)H_2(z)]H_1(z)$$
$$S(z)[1 - H_2(z)H_1(z)] = M_1(z) - M_2(z)H_1(z)$$

and solve for S(z) as

$$S(z) = \frac{M_1(z) - M_2(z)H_1(z)}{1 - H_2(z)H_1(z)}$$

In practice H2(z) is usually quite small, so that H2(z)H1(z) ≪ 1, and

$$S(z) \approx M_1(z) - M_2(z)H_1(z),$$

obviating the need for the H2(z) calculation.
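The Related Applications describe the full Pathfinder implementation; as a hedged sketch only, the following fragment shows one way the adaptive H1(z) estimate and the simplified subtraction could be realized. NLMS is used here as one of the "available system identification algorithms" mentioned above; the patent does not mandate this choice, and the tap count and step size are illustrative assumptions.

```python
import numpy as np

def pathfinder_denoise(m1, m2, vad, taps=20, mu=0.1, eps=1e-8):
    """Sketch of the noise-removal step: adapt H1(z) on noise-only
    samples (vad == 0) and always form s ~= m1 - h1 * m2, i.e. the
    simplified S(z) = M1(z) - M2(z)H1(z) with H2(z) neglected.
    """
    m1 = np.asarray(m1, dtype=float)
    m2 = np.asarray(m2, dtype=float)
    h1 = np.zeros(taps)
    buf = np.zeros(taps)                 # delay line of recent m2 samples
    s = np.zeros_like(m1)
    for n in range(len(m1)):
        buf = np.roll(buf, 1)
        buf[0] = m2[n]
        e = m1[n] - h1 @ buf             # subtract noise estimate N(z)H1(z)
        if not vad[n]:                   # train only while no speech is present
            h1 += mu * e * buf / (buf @ buf + eps)
        s[n] = e                         # cleaned output
    return s
```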
With reference to Figure 2 and Figure 3, the PSAD system is described. As sound waves propagate, they normally lose energy as they travel due to diffraction and dispersion. Assuming the sound waves originate from a point source and radiate isotropically, their amplitude will decrease as a function of 1/r, where r is the distance from the originating point. This 1/r falloff in amplitude is the worst case; if the propagation is confined to a smaller area, the reduction will be less. However, it is an adequate model for the configurations of interest, specifically the propagation of noise and speech to microphones located somewhere on the user's head.
Figure 7 is a microphone array for use under an embodiment of the PSAD system. Placing the microphones Mic 1 and Mic 2 in a linear array with the mouth on the array midline, the difference in signal strength in Mic 1 and Mic 2 (assuming the microphones have identical frequency responses) will be proportional to both d1 and Δd. Assuming a 1/r (or in this case 1/d) relationship, it is seen that

$$\Delta M = \frac{\mathrm{Mic\,1}}{\mathrm{Mic\,2}} = \frac{d_1 + \Delta d}{d_1}$$

where ΔM is the difference in gain between Mic 1 and Mic 2 and therefore H1(z), as above in Equation 2. The variable d1 is the distance from Mic 1 to the speech or noise source.
Figure 8 is a plot 800 of ΔM versus d1 for several Δd values, under an embodiment. It is clear that as Δd becomes larger and the noise source is closer, ΔM becomes larger. The variable Δd will change depending on the orientation to the speech/noise source, from the maximum value on the array midline to zero perpendicular to the array midline. From the plot 800 it is clear that for small Δd and for distances over approximately 30 centimeters (cm), ΔM is close to unity. Since most noise sources are farther away than 30 cm and are unlikely to be on the midline of the array, it is probable that when calculating H1(z) as above in Equation 2, ΔM (or equivalently the gain of H1(z)) will be close to unity. Conversely, for noise sources that are close (within a few centimeters), there could be a substantial difference in gain depending on which microphone is closer to the noise.
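To make the 1/d behavior concrete, a two-line numeric check; the 50 cm and 1 cm distances and the 2 cm spacing are illustrative values chosen to match the 30 cm discussion above, not figures from the patent:

```python
def delta_m(d1, dd):
    """Gain ratio between Mic 1 and Mic 2 under the 1/d model."""
    return (d1 + dd) / d1

# 2 cm microphone spacing; distances in centimeters (illustrative)
print(delta_m(d1=50.0, dd=2.0))  # ~1.04: distant noise, gain near unity
print(delta_m(d1=1.0, dd=2.0))   # 3.0: mouth near Mic 1, sharp gain rise
```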
If the "noise" is the user speaking, and Mic 1 is closer to the mouth than Mic 2, the gain increases. Since environmental noise normally originates much farther away from the user's head than speech, noise will be found during the time when the gain of Hι(z) is near unity or some fixed value, and speech can be found after a sharp rise in gain. The speech can be unvoiced or voiced, as long as it is of sufficient volume compared to the surrounding noise. The gain will stay somewhat high during the speech portions, then descend quickly after speech ceases. The rapid increase and decrease in the gain of H-ι(z) should be sufficient to allow the detection of speech under almost any circumstances. The gain in this example is calculated by the sum of the absolute value of the filter coefficients. This sum is not equivalent to the gain, but the two are related in that a rise in the sum of the absolute value reflects a rise in the gain. As an example of this behavior, Figure 9 shows a plot 900 of the gain parameter 902 as the sum of the absolute values of Hι(z) and the acoustic data 904 or audio from microphone 1. The speech signal was an utterance of the phrase "pop pan", repeated twice. The evaluated bandwidth included the frequency range from 2500 Hz to 3500 Hz, although 1500Hz to 2500 Hz was additionally used in practice. Note the rapid increase in the gain when the unvoiced speech is first encountered, then the rapid return to normal when the speech ends. The large changes in gain that result from transitions between noise and speech can be detected by any standard signal processing techniques. The standard deviation of the last few gain calculations is used, with thresholds being defined by a running average of the standard deviations and the standard deviation noise floor. The later changes in gain for the voiced speech are suppressed in this plot 900 for clarity.
Figure 10 is an alternative plot 1000 of acoustic data presented in Figure 9. The data used to form plot 900 is presented again in this plot 1000, along with audio data 1004 and GEMS data 1006 without noise to make the unvoiced speech apparent. The voiced signal 1002 has three possible values: 0 for noise, 1 for unvoiced, and 2 for voiced. Denoising is only accomplished when V = 0. It is clear that the unvoiced speech is captured very well, aside from two single dropouts in the unvoiced detection near the end of each "pop". However, these single-window dropouts are not common and do not significantly affect the denoising algorithm. They can easily be removed using standard smoothing techniques.
What is not clear from this plot 1000 is that the PSAD system functions as an automatic backup to the NAVSAD. This is because the voiced speech (since it has the same spatial relationship to the mics as the unvoiced) will be detected as unvoiced if the sensor or NAVSAD system fails for any reason. The voiced speech will be misclassified as unvoiced, but the denoising will still not take place, preserving the quality of the speech signal.
However, this automatic backup of the NAVSAD system functions best in an environment with low noise (approximately 10+ dB SNR), as high amounts (10 dB SNR or less) of acoustic noise can quickly overwhelm any acoustic-only unvoiced detector, including the PSAD. This is evident in the difference in the voiced signal data 602 and 1002 shown in plots 600 and 1000 of Figures 6 and 10, respectively, where the same utterance is spoken, but the data of plot 600 shows no unvoiced speech because the unvoiced speech is undetectable. This is the desired behavior when performing denoising, since if the unvoiced speech is not detectable then it will not significantly affect the denoising process. Using the Pathfinder system to detect unvoiced speech ensures detection of any unvoiced speech loud enough to distort the denoising.

Regarding hardware considerations, and with reference to Figure 7, the configuration of the microphones can have an effect on the change in gain associated with speech and the thresholds needed to detect speech. In general, each configuration will require testing to determine the proper thresholds, but tests with two very different microphone configurations showed the same thresholds and other parameters to work well. The first microphone set had the signal microphone near the mouth and the noise microphone several centimeters away at the ear, while the second configuration placed the noise and signal microphones back-to-back within a few centimeters of the mouth. The results presented herein were derived using the first microphone configuration, but the results using the other set are virtually identical, so the detection algorithm is relatively robust with respect to microphone placement.
A number of configurations are possible using the NAVSAD and PSAD systems to detect voiced and unvoiced speech. One configuration uses the NAVSAD system (non-acoustic only) to detect voiced speech along with the PSAD system to detect unvoiced speech; the PSAD also functions as a backup to the NAVSAD system for detecting voiced speech. An alternative configuration uses the NAVSAD system (non-acoustic correlated with acoustic) to detect voiced speech along with the PSAD system to detect unvoiced speech; the PSAD also functions as a backup to the NAVSAD system for detecting voiced speech.
Another alternative configuration uses the PSAD system to detect both voiced and unvoiced speech.
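As an illustration only (not claim language), the first configuration could be reduced to a per-frame decision that also reproduces the 0/1/2 coding of the voiced signal 1002 in Figure 10:

```python
def classify_frame(navsad_voiced, psad_speech):
    """One possible frame decision for the first configuration above.

    If the sensor or NAVSAD fails, navsad_voiced stays False and the
    PSAD backup reports voiced frames here as unvoiced (value 1).
    """
    if navsad_voiced:
        return 2    # voiced
    if psad_speech:
        return 1    # unvoiced (or voiced caught by the PSAD backup)
    return 0        # noise; denoising/adaptation happens only here
```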
While the systems described above have been described with reference to separating voiced and unvoiced speech from background acoustic noise, there is no reason more complex classifications cannot be made. For more in-depth characterization of speech, the system can bandpass the information from Mic 1 and Mic 2 so that it is possible to see which bands in the Mic 1 data are more heavily composed of noise and which are more weighted with speech. Using this knowledge, it is possible to group the utterances by their spectral characteristics similar to conventional acoustic methods; this method would work better in noisy environments.
As an example, the "k" in "kick" has significant frequency content form 500 Hz to 4000 Hz, but a "sh" in "she" only contains significant energy from 1700-4000 Hz. Voiced speech could be classified in a similar manner. For instance, an l\l ("ee") has significant energy around 300 Hz and 2500 Hz, and an lal ("ah") has energy at around 900 Hz and 1200 Hz. This ability to discriminate unvoiced and voiced speech in the presence of noise is, thus, very useful.
Each of the steps depicted in the flow diagrams presented herein can itself include a sequence of operations that need not be described herein. Those skilled in the relevant art can create routines, algorithms, source code, microcode, program logic arrays, or otherwise implement the invention based on the flow diagrams and the detailed description provided herein. The routines described herein can be implemented in one or more of the following ways, alone or in combination: stored in non-volatile memory (not shown) that forms part of an associated processor or processors; implemented using conventional programmed logic arrays or circuit elements; stored in removable media such as disks; downloaded from a server and stored locally at a client; or hardwired or preprogrammed in chips such as EEPROM semiconductor chips, application-specific integrated circuits (ASICs), or digital signal processing (DSP) integrated circuits.
Unless described otherwise herein, the information described herein is well known or described in detail in the Related Applications. Indeed, much of the detailed description provided herein is explicitly disclosed in the Related Applications; most or all of the additional material of aspects of the invention will be recognized by those skilled in the relevant art as being inherent in the detailed description provided in such Related Applications, or well known to those skilled in the relevant art. Those skilled in the relevant art can implement aspects of the invention based on the material presented herein and the detailed description provided in the Related Applications.
Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise," "comprising," and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in a sense of "including, but not limited to." Words using the singular or plural number also include the plural or singular number respectively. Additionally, the words "herein," "hereunder," and words of similar import, when used in this application, shall refer to this application as a whole and not to any particular portions of this application. The above description of illustrated embodiments of the invention is not intended to be exhaustive or to limit the invention to the precise form disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. The teachings of the invention provided herein can be applied to signal processing systems, not only for the speech signal processing described above. Further, the elements and acts of the various embodiments described above can be combined to provide further embodiments.
All of the above references and Related Applications are incorporated herein by reference. Aspects of the invention can be modified, if necessary, to employ the systems, functions and concepts of the various references described above to provide yet further embodiments of the invention.
These and other changes can be made to the invention in light of the above detailed description. In general, in the following claims, the terms used should not be construed to limit the invention to the specific embodiments disclosed in the specification and the claims, but should be construed to include all speech signal systems that operate under the claims. Accordingly, the invention is not limited by the disclosure; instead, the scope of the invention is to be determined entirely by the claims.
While certain aspects of the invention are presented below in certain claim forms, the inventor contemplates the various aspects of the invention in any number of claim forms. Thus, the inventor reserves the right to add additional claims after filing the application to pursue such additional claim forms for other aspects of the invention.

Claims

What I claim is:
1. A system for detecting voiced and unvoiced speech in acoustic signals having varying levels of background noise, comprising:
at least two microphones for receiving the acoustic signals; and
at least one processor coupled among the microphones, wherein the at least one processor:
generates difference parameters between the acoustic signals received at each of the two microphones, wherein the difference parameters are representative of the relative difference in signal gain between portions of the received acoustic signals;
identifies information of the acoustic signals as unvoiced speech when the difference parameters exceed a first threshold; and
identifies information of the acoustic signals as voiced speech when the difference parameters exceed a second threshold.
2. A method for detecting voiced and unvoiced speech in acoustic signals having varying levels of background noise, comprising:
receiving the acoustic signals at two receivers;
generating difference parameters between the acoustic signals received at each of the two receivers, wherein the difference parameters are representative of the relative difference in signal gain between portions of the received acoustic signals;
identifying information of the acoustic signals as unvoiced speech when the difference parameters exceed a first threshold; and
identifying information of the acoustic signals as voiced speech when the difference parameters exceed a second threshold.
3. The method of claim 2, further comprising generating the first and second thresholds using standard deviations corresponding to the generation of the difference parameters.
4. The method of claim 2, further comprising:
identifying information of the acoustic signals as noise when the difference parameters are less than the first threshold; and
performing denoising on the identified noise.
5. The method of claim 2, further comprising receiving physiological information associated with human voicing activity, wherein receiving the physiological information comprises receiving physiological data associated with human voicing using at least one detector selected from a group including radio frequency devices, electroglottographs, ultrasound devices, acoustic throat microphones, and airflow detectors.
6. A system for detecting voiced and unvoiced speech in acoustic signals having varying levels of background noise, comprising:
at least two microphones that receive the acoustic signals;
at least one voicing sensor that receives physiological information associated with human voicing activity; and
at least one processor coupled among the microphones and the voicing sensor, wherein the at least one processor:
generates cross correlation data between the physiological information and an acoustic signal received at one of the two microphones;
identifies information of the acoustic signals as voiced speech when the cross correlation data corresponding to a portion of the acoustic signal received at the one microphone exceeds a correlation threshold;
generates difference parameters between the acoustic signals received at each of the two microphones, wherein the difference parameters are representative of the relative difference in signal gain between portions of the received acoustic signals;
identifies information of the acoustic signals as unvoiced speech when the difference parameters exceed a gain threshold; and
identifies information of the acoustic signals as noise when the difference parameters are less than the gain threshold.
7. A method for removing noise from acoustic signals, comprising:
receiving the acoustic signals at two receivers and receiving physiological information associated with human voicing activity at a voicing sensor;
generating cross correlation data between the physiological information and an acoustic signal received at one of the two receivers;
identifying information of the acoustic signals as voiced speech when the cross correlation data corresponding to a portion of the acoustic signal received at the one receiver exceeds a correlation threshold;
generating difference parameters between the acoustic signals received at each of the two receivers, wherein the difference parameters are representative of the relative difference in signal gain between portions of the received acoustic signals;
identifying information of the acoustic signals as unvoiced speech when the difference parameters exceed a gain threshold; and
identifying information of the acoustic signals as noise when the difference parameters are less than the gain threshold.

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP02739572A EP1415505A1 (en) 2001-05-30 2002-05-30 Detecting voiced and unvoiced speech using both acoustic and nonacoustic sensors
KR1020037015511A KR100992656B1 (en) 2001-05-30 2002-05-30 Detecting voiced and unvoiced speech using both acoustic and nonacoustic sensors
JP2003501229A JP2005503579A (en) 2001-05-30 2002-05-30 Voiced and unvoiced voice detection using both acoustic and non-acoustic sensors
CA002448669A CA2448669A1 (en) 2001-05-30 2002-05-30 Detecting voiced and unvoiced speech using both acoustic and nonacoustic sensors

Applications Claiming Priority (15)

Application Number Priority Date Filing Date Title
US60/294,383 2001-05-30
US09/905,361 2001-07-12
US09/905,361 US20020039425A1 (en) 2000-07-19 2001-07-12 Method and apparatus for removing noise from electronic signals
US60/335,100 2001-10-30
US09/990,847 US20020099541A1 (en) 2000-11-21 2001-11-21 Method and apparatus for voiced speech excitation function determination and non-acoustic assisted feature extraction
US09/990,847 2001-11-21
US60/332,202 2001-11-21
US60/362,162 2002-03-05
US60/362,170 2002-03-05
US60/362,161 2002-03-05
US60/361,981 2002-03-05
US60/362,103 2002-03-05
US60/368,343 2002-03-27
US60/368,208 2002-03-27
US60/368,209 2002-03-27

Publications (1)

Publication Number Publication Date
WO2002098169A1 (en) 2002-12-05

Family

ID=27129427

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2002/017251 WO2002098169A1 (en) 2001-05-30 2002-05-30 Detecting voiced and unvoiced speech using both acoustic and nonacoustic sensors

Country Status (1)

Country Link
WO (1) WO2002098169A1 (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5212764A (en) * 1989-04-19 1993-05-18 Ricoh Company, Ltd. Noise eliminating apparatus and speech recognition apparatus using the same
US5917921A (en) * 1991-12-06 1999-06-29 Sony Corporation Noise reducing microphone apparatus
US5539859A (en) * 1992-02-18 1996-07-23 Alcatel N.V. Method of using a dominant angle of incidence to reduce acoustic noise in a speech signal
US5400409A (en) * 1992-12-23 1995-03-21 Daimler-Benz Ag Noise-reduction method for noise-affected voice channels
US5633935A (en) * 1993-04-13 1997-05-27 Matsushita Electric Industrial Co., Ltd. Stereo ultradirectional microphone apparatus
US5414776A (en) * 1993-05-13 1995-05-09 Lectrosonics, Inc. Adaptive proportional gain audio mixing system
US5835608A (en) * 1995-07-10 1998-11-10 Applied Acoustic Research Signal separating system
US6009396A (en) * 1996-03-15 1999-12-28 Kabushiki Kaisha Toshiba Method and system for microphone array input type speech recognition using band-pass power distribution for sound source position/direction estimation

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7120477B2 (en) 1999-11-22 2006-10-10 Microsoft Corporation Personal mobile computing device having antenna microphone and speech detection for improved speech recognition
US9196261B2 (en) 2000-07-19 2015-11-24 Aliphcom Voice activity detector (VAD)—based multiple-microphone acoustic noise suppression
US9066186B2 (en) 2003-01-30 2015-06-23 Aliphcom Light-based detection for acoustic applications
US7512245B2 (en) 2003-02-25 2009-03-31 Oticon A/S Method for detection of own voice activity in a communication device
US9099094B2 (en) 2003-03-27 2015-08-04 Aliphcom Microphone array with rear venting
US7697699B2 (en) 2004-04-12 2010-04-13 Sony Corporation Method of and apparatus for reducing noise
EP1587064A1 (en) * 2004-04-12 2005-10-19 Sony Corporation Method of and apparatus for reducing noise
CN1684189B (en) * 2004-04-12 2010-09-29 索尼株式会社 Method of and apparatus for reducing noise
EP1638084A1 (en) * 2004-09-17 2006-03-22 Microsoft Corporation Method and apparatus for multi-sensory speech enhancement
US7283850B2 (en) 2004-10-12 2007-10-16 Microsoft Corporation Method and apparatus for multi-sensory speech enhancement on a mobile device
EP1891627A4 (en) * 2005-06-20 2009-07-22 Microsoft Corp Multi-sensory speech enhancement using a clean speech prior
NO339834B1 (en) * 2005-06-20 2017-02-06 Microsoft Technology Licensing Llc Multisensory speech enhancement using the probability of pure speech
EP1891627A2 (en) * 2005-06-20 2008-02-27 Microsoft Corporation Multi-sensory speech enhancement using a clean speech prior
US7680656B2 (en) 2005-06-28 2010-03-16 Microsoft Corporation Multi-sensory speech enhancement using a speech-state model
US7406303B2 (en) 2005-07-05 2008-07-29 Microsoft Corporation Multi-sensory speech enhancement using synthesized sensor signal
US7930178B2 (en) 2005-12-23 2011-04-19 Microsoft Corporation Speech modeling and enhancement based on magnitude-normalized spectra
KR100857877B1 (en) 2006-09-14 2008-09-17 유메디칼 주식회사 pure tone audiometer with automated masking
EP2908550B1 (en) 2014-02-13 2018-07-25 Oticon A/s A hearing aid device comprising a sensor member
US10524061B2 (en) 2014-02-13 2019-12-31 Oticon A/S Hearing aid device comprising a sensor member
US11128961B2 (en) 2014-02-13 2021-09-21 Oticon A/S Hearing aid device comprising a sensor member
US11533570B2 (en) 2014-02-13 2022-12-20 Oticon A/S Hearing aid device comprising a sensor member
US11889265B2 (en) 2014-02-13 2024-01-30 Oticon A/S Hearing aid device comprising a sensor member
EP2996314A1 (en) * 2014-08-27 2016-03-16 Fujitsu Limited Voice processing device, voice processing method, and computer program for voice processing
US9847094B2 (en) 2014-08-27 2017-12-19 Fujitsu Limited Voice processing device, voice processing method, and non-transitory computer readable recording medium having therein program for voice processing

Similar Documents

Publication Publication Date Title
US7246058B2 (en) Detecting voiced and unvoiced speech using both acoustic and nonacoustic sensors
US20070233479A1 (en) Detecting voiced and unvoiced speech using both acoustic and nonacoustic sensors
US8321213B2 (en) Acoustic voice activity detection (AVAD) for electronic systems
US8326611B2 (en) Acoustic voice activity detection (AVAD) for electronic systems
US9263062B2 (en) Vibration sensor and acoustic voice activity detection systems (VADS) for use with electronic systems
US10230346B2 (en) Acoustic voice activity detection
EP2633519B1 (en) Method and apparatus for voice activity detection
US20140126743A1 (en) Acoustic voice activity detection (avad) for electronic systems
US8942383B2 (en) Wind suppression/replacement component for use with electronic systems
US8488803B2 (en) Wind suppression/replacement component for use with electronic systems
JP3812887B2 (en) Signal processing system and method
WO2002098169A1 (en) Detecting voiced and unvoiced speech using both acoustic and nonacoustic sensors
US11627413B2 (en) Acoustic voice activity detection (AVAD) for electronic systems
AU2016202314A1 (en) Acoustic Voice Activity Detection (AVAD) for electronic systems
EP1415505A1 (en) Detecting voiced and unvoiced speech using both acoustic and nonacoustic sensors
US20230379621A1 (en) Acoustic voice activity detection (avad) for electronic systems

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US US UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2448669

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 028109724

Country of ref document: CN

Ref document number: 2003501229

Country of ref document: JP

Ref document number: 1020037015511

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 2002739572

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWP Wipo information: published in national office

Ref document number: 2002739572

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 2002739572

Country of ref document: EP