US20100198593A1 - Speech Enhancement with Noise Level Estimation Adjustment

Info

Publication number
US20100198593A1
Authority
US
United States
Prior art keywords
level
subband
audio signal
speech
signal
Prior art date
Legal status
Granted
Application number
US12/677,087
Other versions
US8538763B2
Inventor
Rongshan Yu
Current Assignee
Dolby Laboratories Licensing Corp
Original Assignee
Dolby Laboratories Licensing Corp
Priority date
2007-09-12
Filing date
2008-09-10
Publication date
2010-08-05
Application filed by Dolby Laboratories Licensing Corp
Priority to US12/677,087
Assigned to DOLBY LABORATORIES LICENSING CORPORATION; Assignors: YU, RONGSHAN
Publication of US20100198593A1
Application granted
Publication of US8538763B2
Status: Active; adjusted expiration

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • G10L21/0216 Noise filtering characterised by the method used for estimating noise
    • G10L2021/02168 Noise filtering characterised by the method used for estimating noise, the estimation exclusively taking place during speech pauses
    • G10L21/0232 Processing in the frequency domain

Abstract

Enhancing speech components of an audio signal composed of speech and noise components includes controlling the gain of the audio signal in ones of its subbands, wherein the gain in a subband is reduced as the level of estimated noise components increases with respect to the level of speech components, wherein the level of estimated noise components is determined at least in part by (1) comparing an estimated noise components level with the level of the audio signal in the subband and increasing the estimated noise components level in the subband by a predetermined amount when the input signal level in the subband exceeds the estimated noise components level in the subband by a limit for more than a defined time, or (2) obtaining and monitoring the signal-to-noise ratio in the subband and increasing the estimated noise components level in the subband by a predetermined amount when the signal-to-noise ratio in the subband exceeds a limit for more than a defined time.

Description

    TECHNICAL FIELD
  • The invention relates to audio signal processing. More particularly, it relates to speech enhancement of a noisy audio speech signal. The invention also relates to computer programs for practicing such methods or controlling such apparatus.
  • INCORPORATION BY REFERENCE
  • The following publications are hereby incorporated by reference, each in its entirety.
    • [1] S. F. Boll, “Suppression of acoustic noise in speech using spectral subtraction,” IEEE Trans. Acoust., Speech, Signal Processing, vol. 27, pp. 113-120, April 1979.
    • [2] Y. Ephraim, H. Lev-Ari and W. J. J. Roberts, “A brief survey of Speech Enhancement,” The Electronic Handbook, CRC Press, April 2005.
    • [3] Y. Ephraim and D. Malah, “Speech enhancement using a minimum mean square error short time spectral amplitude estimator,” IEEE Trans. Acoust., Speech, Signal Processing, vol. 32, pp. 1109-1121, December 1984.
    • [4] Thomas, I. and Niederjohn, R., “Preprocessing of Speech for Added Intelligibility in High Ambient Noise”, 34th Audio Engineering Society Convention, March 1968.
    • [5] Villchur, E., “Signal Processing to Improve Speech Intelligibility for the Hearing Impaired”, 99th Audio Engineering Society Convention, September 1995.
    • [6] N. Virag, “Single channel speech enhancement based on masking properties of the human auditory system,” IEEE Tran. Speech and Audio Processing, vol. 7, pp. 126-137, March 1999.
    • [7] R. Martin, “Spectral subtraction based on minimum statistics,” in Proc. EUSIPCO, 1994, pp. 1182-1185.
    • [8] P. J. Wolfe and S. J. Godsill, “Efficient alternatives to Ephraim and Malah suppression rule for audio signal enhancement,” EURASIP Journal on Applied Signal Processing, vol. 2003, Issue 10, Pages 1043-1051, 2003.
    • [9] B. Widrow and S. D. Stearns, Adaptive Signal Processing. Englewood Cliffs, N.J.: Prentice Hall, 1985.
    • [10] Y. Ephraim and D. Malah, “Speech enhancement using a minimum mean square error Log-spectral amplitude estimator,” IEEE Trans. Acoust., Speech, Signal Processing, vol. 33, pp. 443-445, December 1985.
    • [11] E. Terhardt, “Calculating Virtual Pitch,” Hearing Research, pp. 155-182, 1, 1979.
    • [12] ISO/IEC JTC1/SC29/WG11, Information technology—Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s—Part 3: Audio, IS 11172-3, 1992
    • [13] J. Johnston, “Transform coding of audio signals using perceptual noise criteria,” IEEE J. Select. Areas Commun., vol. 6, pp. 314-323, February 1988.
    • [14] S. Gustafsson, P. Jax, and P. Vary, “A novel psychoacoustically motivated audio enhancement algorithm preserving background noise characteristics,” Proceedings of the 1998 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '98), 1998.
    • [15] Yi Hu and P. C. Loizou, “Incorporating a psychoacoustic model in frequency domain speech enhancement,” IEEE Signal Processing Letters, pp. 270-273, vol. 11, no. 2, February 2004.
    • [16] L. Lin, W. H. Holmes, and E. Ambikairajah, “Speech denoising using perceptual modification of Wiener filtering,” Electronics Letters, pp. 1486-1487, vol. 38, November 2002.
    • [17] A. M. Kondoz, “Digital Speech: Coding for Low Bit Rate Communication Systems,” John Wiley & Sons, Ltd., 2nd Edition, 2004, Chichester, England, Chapter 10: Voice Activity Detection, pp. 357-377.
    DISCLOSURE OF THE INVENTION
  • According to a first aspect of the invention, speech components of an audio signal composed of speech and noise components are enhanced. An audio signal is changed from the time domain to a plurality of subbands in the frequency domain. The subbands of the audio signal are subsequently processed. The processing includes controlling the gain of the audio signal in ones of said subbands, wherein the gain in a subband is reduced as the level of estimated noise components increases with respect to the level of speech components, wherein the level of estimated noise components is determined at least in part by comparing an estimated noise components level with the level of the audio signal in the subband and increasing the estimated noise components level in the subband by a predetermined amount when the input signal level in the subband exceeds the estimated noise components level in the subband by a limit for more than a defined time. The processed subband audio signal is changed from the frequency domain to the time domain to provide an audio signal in which speech components are enhanced. The estimated noise components may be determined by a voice-activity-detector-based noise-level-estimator device or process. Alternatively, the estimated noise components may be determined by a statistically-based noise-level-estimator device or process.
  • According to another aspect of the invention, speech components of an audio signal composed of speech and noise components are enhanced. An audio signal is changed from the time domain to a plurality of subbands in the frequency domain. The subbands of the audio signal are subsequently processed. The processing includes controlling the gain of the audio signal in ones of said subbands, wherein the gain in a subband is reduced as the level of estimated noise components increases with respect to the level of speech components, wherein the level of estimated noise components is determined at least in part by obtaining and monitoring the signal-to-noise ratio in the subband and increasing the estimated noise components level in the subband by a predetermined amount when the signal-to-noise ratio in the subband exceeds a limit for more than a defined time. The processed subband audio signal is changed from the frequency domain to the time domain to provide an audio signal in which speech components are enhanced. The estimated noise components may be determined by a voice-activity-detector-based noise-level-estimator device or process. Alternatively, the estimated noise components may be determined by a statistically-based noise-level-estimator device or process.
  • DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a functional block diagram showing an exemplary embodiment of the invention.
  • FIG. 2 is an idealized hypothetical plot of actual noise level for estimated noise level for a first example.
  • FIG. 3 is an idealized hypothetical plot of actual noise level for estimated noise level for a second example.
  • FIG. 4 is an idealized hypothetical plot of actual noise level for estimated noise level for a third example.
  • FIG. 5 is a flowchart relating to the exemplary embodiment of FIG. 1.
  • BEST MODE FOR CARRYING OUT THE INVENTION
  • FIG. 1 is a functional block diagram showing an exemplary embodiment of aspects of the present invention. The input is generated by digitizing an analog speech signal that contains both clean speech as well as noise. This unaltered audio signal y(n) (“Noisy Speech”), where n=0,1, . . . is the time index, is then sent to an analysis filterbank device or function (“Analysis Filterbank”) 2, producing K multiple subband signals, Yk(m), k=1, . . . , K, m=0,1, . . . ,∞, where k is the subband number, and m is the time index of each subband signal. Analysis Filterbank 2 changes the audio signal from the time domain to a plurality of subbands in the frequency domain.
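  • The document leaves the choice of analysis filterbank open. Purely as an illustrative sketch (Python/NumPy; the 256-sample frame, Hann window, and 64-sample hop are assumptions, not taken from the patent), a windowed-DFT analysis producing subband samples Yk(m) might look like the following.

```python
import numpy as np

def analysis_filterbank(y, frame_len=256, hop=64):
    """Split a time-domain signal y(n) into subband samples Y_k(m).

    Illustrative STFT-style analysis only; the patent does not prescribe a
    particular filterbank.  Returns an array of shape
    (num_frames, num_subbands), indexed as Y[m, k].
    """
    window = np.hanning(frame_len)
    num_frames = 1 + max(0, (len(y) - frame_len) // hop)
    Y = np.empty((num_frames, frame_len // 2 + 1), dtype=complex)
    for m in range(num_frames):
        frame = y[m * hop : m * hop + frame_len] * window
        Y[m] = np.fft.rfft(frame)   # one complex sample per subband k
    return Y
```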
  • The subband signals are applied to a noise-reducing device or function (“Speech Enhancement”) 4, a noise-level estimator or estimation function (“Noise Level Estimator”) 6, and a noise-level estimator adjuster or adjustment function (“Noise Level Adjustment”) (“NLA”) 8.
  • In response to the input subband signals and in response to an adjusted estimated noise level output of Noise Level Adjustment 8, Speech Enhancement 4 controls a gain scale factor GNRk(m) that scales the amplitude of the subband signals. Such an application of a gain scale factor to a subband signal is shown symbolically by a multiplier symbol 10. For clarity in presentation, the figures show the details of generating and applying a gain scale factor to only one of multiple subband signals (k).
  • The value of gain scale factor GNRk(m) is controlled by Speech Enhancement 4 so that subbands that are dominated by noise components are strongly suppressed while those dominated by speech are preserved. Speech Enhancement 4 may be considered to have a “Suppression Rule” device or function 12 that generates a gain scale factor GNRk(m) in response to the subband signals Yk(m) and the adjusted estimated noise level output from Noise Level Adjustment 8.
  • Speech Enhancement 4 may include a voice-activity detector or detection function (VAD) (not shown) that, in response to the input subband signals, determines whether speech is present in noisy speech signal y(n), providing, for example, a VAD=1 output when speech is present and a VAD=0 output when speech is not present. A VAD is required if Speech Enhancement 4 is a VAD-based device or function. Otherwise, a VAD may not be required.
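  • The document does not specify a particular VAD and points to reference [17] for examples. The sketch below is only a crude frame-energy detector added for illustration; the threshold factor of 3 is an arbitrary assumption, and this is not the detector of reference [17].

```python
import numpy as np

def simple_energy_vad(Y_frame, noise_energy, factor=3.0):
    """Toy VAD: return 1 (speech present) when the total frame energy exceeds
    the estimated noise energy by `factor`; return 0 otherwise.
    `factor` is an arbitrary illustrative choice, not taken from the patent."""
    return 1 if np.sum(np.abs(Y_frame) ** 2) > factor * np.sum(noise_energy) else 0
```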
  • Enhanced subband speech signals Ỹk(m) are provided by applying gain scale factor GNRk(m) to the unenhanced input subband signals Yk(m). This may be represented as:

  • Ỹk(m) = GNRk(m) · Yk(m)   (1)
  • The dot symbol (“·”) indicates multiplication.
  • The processed subband signals Ỹk(m) may then be converted to the time domain by using a synthesis filterbank device or process (“Synthesis Filterbank”) 14 that produces the enhanced speech signal ỹ(n). The synthesis filterbank changes the processed audio signal from the frequency domain to the time domain.
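  • Continuing the illustrative STFT framing sketched earlier (the overlap-add synthesis and the frame parameters are assumptions, not the patent's filterbank), Eqn. (1) followed by the synthesis step could be written as:

```python
import numpy as np

def apply_gain_and_synthesize(Y, GNR, frame_len=256, hop=64):
    """Apply the per-subband suppression gains of Eqn. (1) and resynthesize.

    Y   : complex subband samples Y_k(m), shape (num_frames, num_subbands)
    GNR : real gains GNR_k(m), same shape as Y
    Returns the enhanced time-domain signal by overlap-add.
    """
    window = np.hanning(frame_len)
    num_frames = Y.shape[0]
    y_tilde = np.zeros(frame_len + (num_frames - 1) * hop)
    for m in range(num_frames):
        frame = np.fft.irfft(GNR[m] * Y[m], n=frame_len)  # Eqn. (1), then inverse DFT
        y_tilde[m * hop : m * hop + frame_len] += window * frame
    return y_tilde
```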
  • It will be appreciated that various devices, functions and processes shown and described in various examples herein may be shown combined or separated in ways other than as shown in FIGS. 1 and 5. For example, although Speech Enhancement 4, Noise Level Estimator 6, and Noise Level Adjustment 8 are shown as separate devices or functions, they may, in practice be combined in various ways. Also, for example, when implemented by computer software instruction sequences, functions may be implemented by multithreaded software instruction sequences running in suitable digital signal processing hardware, in which case the various devices and functions in the examples shown in the figures may correspond to portions of the software instructions.
  • Subband audio devices and processes may use either analog or digital techniques, or a hybrid of the two techniques. A subband filterbank can be implemented by a bank of digital bandpass filters or by a bank of analog bandpass filters. For digital bandpass filters, the input signal is sampled prior to filtering. The samples are passed through a digital filter bank and then downsampled to obtain subband signals. Each subband signal comprises samples which represent a portion of the input signal spectrum. For analog bandpass filters, the input signal is split into several analog signals each with a bandwidth corresponding to a filterbank bandpass filter bandwidth. The subband analog signals can be kept in analog form or converted into digital form by sampling and quantizing.
  • Subband audio signals may also be derived using a transform coder that implements any one of several time-domain to frequency-domain transforms that functions as a bank of digital bandpass filters. The sampled input signal is segmented into “signal sample blocks” prior to filtering. One or more adjacent transform coefficients or bins can be grouped together to define “subbands” having effective bandwidths that are sums of individual transform coefficient bandwidths.
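  • As a small illustration of grouping adjacent transform bins into wider subbands (the band edges below are an arbitrary example, not taken from the patent):

```python
import numpy as np

def group_bins_into_subbands(Y_frame, band_edges=(0, 4, 8, 16, 32, 64, 129)):
    """Sum the energy of adjacent transform bins into wider subbands.
    `band_edges` lists the bin indices delimiting each subband; the default is
    an arbitrary example for a 129-bin rfft frame."""
    return np.array([np.sum(np.abs(Y_frame[lo:hi]) ** 2)
                     for lo, hi in zip(band_edges[:-1], band_edges[1:])])
```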
  • Although the invention may be implemented using analog or digital techniques or even a hybrid arrangement of such techniques, the invention is more conveniently implemented using digital techniques and the preferred embodiments disclosed herein are digital implementations. Thus, Analysis Filterbank 2 and Synthesis Filterbank 14 may be implemented by any suitable filterbank and inverse filterbank or transform and inverse transform, respectively.
  • Although the gain scale factor GNRk(m) is shown controlling subband amplitudes multiplicatively, it will be apparent to those of ordinary skill in the art that equivalent additive/subtractive arrangements may be employed.
  • Speech Enhancement 4
  • Various spectral enhancement devices and functions may be useful in implementing Speech Enhancement 4 in practical embodiments of the present invention. Among such spectral enhancement devices and functions are those that employ VAD-based noise-level estimators and those that employ statistically-based noise-level estimators. Such useful spectral enhancement devices and functions may include those described in references [1], [2], [3], [6] and [7], listed above, and in the following two United States Provisional Patent Applications:
      • (1) “Noise Variance Estimator for Speech Enhancement,” of Rongshan Yu, Ser. No. 60/918,964, filed Mar. 19, 2007; and
      • (2) “Speech Enhancement Employing a Perceptual Model,” of Rongshan Yu, Ser. No. 60/918,986, filed Mar. 19, 2007.
        Other spectral enhancement devices and functions may also be useful. The choice of any particular spectral enhancement device or function is not critical to the present invention.
  • The speech enhancement gain factor GNRk(m) may be referred to as a “suppression gain” because its purpose is to suppress noise. One way of controlling suppression gain is known as “spectral subtraction” (references [1], [2] and [7]), in which the suppression gain GNRk(m) applied to the subband signal Yk(m) may be expressed as:
  • GNRk(m) = 1 − a · λk(m) / |Yk(m)|²,   (2)
  • where |Yk(m)| is the amplitude of subband signal Yk(m), λk(m) is the noise energy in subband k, and a>1 is an “over subtraction” factor chosen to assure that a sufficient suppression gain is applied. “Over subtraction” is explained further in reference [7] at page 2 and in reference [6] at page 127.
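  • A minimal sketch of the spectral-subtraction rule of Eqn. (2); clipping the gain to the range [0, 1] and the example value a=2 are assumptions added for the illustration, not requirements stated here.

```python
import numpy as np

def spectral_subtraction_gain(Y_frame, noise_energy, a=2.0):
    """Per-subband suppression gain GNR_k(m) = 1 - a*lambda_k(m)/|Y_k(m)|^2
    (Eqn. (2)), with a > 1 the over-subtraction factor; clipped to [0, 1]."""
    signal_energy = np.abs(Y_frame) ** 2 + 1e-12  # tiny constant avoids division by zero (assumption)
    return np.clip(1.0 - a * noise_energy / signal_energy, 0.0, 1.0)
```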
  • In order to determine appropriate amounts of suppression gain, it is important to have an accurate estimation of the noise energy for subbands in the incoming signal. However, it is not a trivial task to do so when the noise signal is mixed together with the speech signal in the incoming signal. One way to solve this problem is to use a voice-activity-detection-based noise level estimator that uses a standalone voice activity detector (VAD) to determine whether a speech signal is present in the incoming signal or not. Many voice activity detectors and detector functions are known. A suitable such device or function is described in Chapter 10 of reference [17] and in the bibliography thereof. The use of any particular voice activity detector is not critical to the invention. The noise energy is updated during the period when speech is not present (VAD=0). See, for example, reference [3]. In such a noise estimator, the noise energy estimation λk(m) for time m may be given by:
  • λk(m) = β · λk(m−1) + (1−β) · |Yk(m)|²   if VAD = 0;   λk(m−1)   if VAD = 1.   (3)
  • The initial value of the noise energy estimation λk(−1) can be set to zero, or set to the noise energy measured during the initialization stage of the process. The parameter β is a smoothing factor having a value 0 < β < 1. When speech is not present (VAD=0), the estimation of the noise energy may be obtained by performing a first-order time-smoothing operation (sometimes called a “leaky integrator”) on a power of the input signal Yk(m) (squared in this example). The smoothing factor β may be a positive value that is slightly less than one. Usually, for a stationary input signal, a β value closer to one will lead to a more accurate estimation. On the other hand, the value of β should not be too close to one, to avoid losing the ability to track changes in the noise energy when the input becomes non-stationary. In practical embodiments of the present invention, a value of β=0.98 has been found to provide satisfactory results. However, this value is not critical. It is also possible to estimate the noise energy by using a more complex time smoother that may be non-linear or linear (such as a multipole lowpass filter).
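  • Eqn. (3) amounts to a VAD-gated leaky integrator. A one-function sketch follows; β=0.98 echoes the value mentioned above, while the surrounding framing is illustrative.

```python
import numpy as np

def update_noise_estimate(noise_energy, Y_frame, vad, beta=0.98):
    """Eqn. (3): update the per-subband noise energy with a first-order smoother
    only when no speech is present (VAD = 0); otherwise hold the previous value."""
    if vad == 0:
        return beta * noise_energy + (1.0 - beta) * np.abs(Y_frame) ** 2
    return noise_energy
```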
  • There is a tendency for VAD-based noise level estimators to underestimate the noise level. FIG. 2 is an idealized illustration of the noise level underestimation problem for a VAD-based noise level estimator. For simplicity in presentation, noise is shown at constant levels in this figure and also in related FIGS. 3 and 4. In FIG. 2, the actual noise level increases from λ0 to λ1 at time m0. However, because speech is present (VAD=1) throughout the entire time period shown in FIG. 2, starting at m=0, a VAD-based noise estimator does not update the noise level estimation when the actual noise level increases at time m0. Therefore, the noise level is underestimated for m>m0. Such a noise level underestimation, if unaddressed, leads to an insufficient amount of suppression of the noise components in the incoming signal. As a result, strong residual noise is present in the enhanced speech signal, which may be annoying to a listener.
  • It is possible to improve the noise level underestimation problem to some extent by using a different noise level estimation process, e.g., the minimum statistics process of reference [7]. In principle, the minimum statistics process keeps a record of historical samples for each subband, and estimates the noise level based on the minimum signal-level samples from the record. The rationale behind this approach is that the speech signal in general is an on/off process and naturally has pauses. In addition, the signal level is generally much higher when the speech signal is present. Therefore, the minimum signal-level samples from the record are likely to be from a speech pause section if the record is sufficiently long in time, and the noise level can be reliably estimated from such samples. Because the minimum statistics method does not rely on explicit VAD detection, it is less subject to the noise level underestimation problem described above. If one goes back to the example shown in FIG. 2, and assumes that the minimum statistics process keeps W samples in its record, it can be seen from FIG. 3, which shows a solution of the noise level underestimation problem with the minimum statistics process, that after m>m0+W, all the samples from time m<m0 will have been shifted out of the record. Therefore, the noise estimation will be based entirely on samples from m ≥ m0, from which a more accurate noise level estimation may be obtained. Thus, the use of the minimum statistics process provides some improvement to the problem of noise level underestimation.
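  • A bare-bones sketch of the minimum-statistics idea only (reference [7] additionally uses optimal smoothing and bias compensation, which are omitted here; the window length W and the smoothing constant are arbitrary assumptions): keep the last W smoothed-energy samples per subband and take their minimum as the noise estimate.

```python
import numpy as np
from collections import deque

class MinimumStatisticsEstimator:
    """Toy minimum-statistics noise estimator: the noise level in each subband
    is taken as the minimum of the last W smoothed-energy samples
    (no bias correction, unlike reference [7])."""

    def __init__(self, num_subbands, W=100, beta=0.9):
        self.history = [deque(maxlen=W) for _ in range(num_subbands)]
        self.smoothed = np.zeros(num_subbands)
        self.beta = beta

    def update(self, Y_frame):
        self.smoothed = self.beta * self.smoothed + (1 - self.beta) * np.abs(Y_frame) ** 2
        for k, hist in enumerate(self.history):
            hist.append(self.smoothed[k])
        return np.array([min(hist) for hist in self.history])
```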
  • In accordance with aspects of the present invention, an appropriate adjustment to the estimated noise level is made to overcome the problem of noise level underestimation. Such an adjustment, as may be provided by Noise Level Adjustment device or process 8 in the example of FIG. 1, may be employed with speech enhancer devices and processes employing either VAD-based or minimum-statistics-type noise level estimators or estimator functions.
  • Referring again to FIG. 1, Noise Level Adjustment 8 monitors the time during which the energy level in each of a plurality of subbands is larger than the estimated noise energy level in each such subband. Noise Level Adjustment 8 then decides that the noise level is underestimated if that time period is longer than a pre-determined maximum value, and increases the noise energy level estimation by a small pre-determined adjustment step size, such as 3 dB. Noise Level Adjustment 8 iteratively increases the estimated noise level until the measured time period no longer exceeds the maximum time period, resulting in a noise level estimation that in most cases is larger than the actual noise level by an amount no larger than the adjustment step size.
  • Noise Level Adjustment 8 measures the energy of the input signal ηk(m) as follows:

  • ηk(m) = κ · ηk(m−1) + (1−κ) · |Yk(m)|²,   (4)
  • in which κ is a smoothing factor having a value 0 < κ < 1. The initial value ηk(−1) may be set to zero. The parameter κ plays the same role as the parameter β in Eqn. (3). However, κ may be set to a value that is slightly smaller than β because the energy of the input signal usually changes rapidly when speech is present. It has been found that κ=0.9 gives satisfactory results, although the value of κ is not critical to the invention.
  • The parameter dk denotes the time during which the incoming signal has a level exceeding the estimated noise level for subband k. At each time m, it is updated as shown in Eqn. (5). The time period of each m, as in any digital system, is determined by the sampling rate of the subband, so it may vary depending on the sampling rate of the input signal and the filterbank used. In a practical implementation, the time period for each m is 1 s/8000 × 32 = 4 ms (an 8 kHz speech signal and a filterbank with a downsampling factor of 32).
  • dk = dk + 1   if ηk(m) > μ · λ′k(m) or hk > 0;   0 otherwise.   (5)
  • where μ is a pre-determined constant and dk is set to 0 at the initialization stage of the process. Here hk is a hand-off counter introduced to improve the robustness of the process, which is calculated at every time index m as:
  • hk = hmax   if ηk(m) > μ · λ′k(m);   hk − 1   if ηk(m) ≤ μ · λ′k(m) and hk > 0,   (6)
  • where hmax is a pre-determined integer and hk is also set to zero at the process initialization stage. The parameter μ is a constant larger than one that scales up the estimated noise level in the comparison with the level of the incoming signal, in order to avoid false alarms (that is, the level of the incoming signal temporarily exceeding the estimated noise level by a small amount due to signal fluctuation). In a practical embodiment, μ=2 was found to be a useful value. The value of the parameter μ is not critical to the invention. Similarly, the hand-off counter is introduced to avoid resetting the counter dk when the level of the incoming signal falls below the estimated noise level temporarily due to signal fluctuation. In a practical embodiment, a maximum hand-off period of hmax=5 (20 ms) was found to be a useful value. The value of the parameter hmax is not critical to the invention.
  • If Noise Level Adjustment 8 detects that dk is larger than a pre-selected maximum time duration D, usually some value larger than the maximum possible duration of a phoneme in normal speech, it will then decide that the noise level of subband k is underestimated. In a practical embodiment of the invention, a value of D=150 (600 ms) was found to be useful. The value of the parameter D is not critical to the invention. In that case, Noise Level Adjustment 8 updates the estimated noise level for subband k as:

  • λ′k(m) ← α · λ′k(m),   (7)
  • where α>1 is a pre-determined adjustment step size, and resets the counter dk to zero. Otherwise, it keeps the value of λ′k(m) unchanged. The value of α determines the trade-off between the accuracy of the noise level estimation after the adjustment and the speed of adjustment when noise level underestimation is detected. In a practical embodiment of the invention, a value of α=2 (3 dB) was found to be useful. The value of the parameter α is not critical to the invention. A flowchart showing an example of a process suitable for use by Noise Level Adjustment 8 is shown in FIG. 5. The flowchart of FIG. 5 shows the process underlying the exemplary embodiment of FIG. 1. The final step indicates that the time index m is then advanced by one (“m←m+1”) and the process of FIG. 5 is repeated. The flowchart applies also to the alternative implementation of the invention if the condition ηk(m)>μλ′k(m) is replaced by ξk>1+μ, where ξk is the estimated signal-to-noise ratio for subband k (see the alternative implementation described below).
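  • Putting Eqns. (4) through (7) together, one time index m of the adjustment process of FIG. 5 might be sketched as follows. The parameter values echo those mentioned in the text (κ=0.9, μ=2, hmax=5, D=150, α=2); keeping the state in per-subband NumPy arrays is an implementation assumption.

```python
import numpy as np

def noise_level_adjustment_step(Y_frame, eta, lam, d, h,
                                kappa=0.9, mu=2.0, h_max=5, D=150, alpha=2.0):
    """One time index m of the Noise Level Adjustment process (FIG. 5).

    Y_frame : complex subband samples Y_k(m) for this frame
    eta     : smoothed input energies eta_k                 (Eqn. (4) state)
    lam     : adjusted noise level estimates lambda'_k(m)
    d, h    : per-subband duration and hand-off counters    (Eqns. (5), (6))
    The four state arrays are updated in place and returned.
    """
    # Eqn. (4): first-order smoothing of the input signal energy.
    eta[:] = kappa * eta + (1.0 - kappa) * np.abs(Y_frame) ** 2

    above = eta > mu * lam                     # input level exceeds the scaled noise estimate

    # Eqn. (6): hand-off counter, reloaded while "above", otherwise counted down to zero.
    h[:] = np.where(above, h_max, np.maximum(h - 1, 0))

    # Eqn. (5): duration counter, reset only when neither condition holds.
    d[:] = np.where(above | (h > 0), d + 1, 0)

    # Eqn. (7): when the duration exceeds D, raise the estimate by alpha and reset d_k.
    under = d > D
    lam[under] *= alpha
    d[under] = 0

    return eta, lam, d, h
```

  • In this sketch dk and hk are updated before the Eqn. (7) test within each time index; that ordering is an assumption, since FIG. 5 is not reproduced here.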
  • When a noise level underestimation occurs, Noise Level Adjustment 8 keeps increasing the estimated noise level until dk has a value smaller than D. In that case, the estimated noise level λ′k(m) will have a value:

  • λk ≤ λ′k(m) < α · λk,   (8)
  • where λk is the actual noise level in the incoming signal. The second inequality comes from the fact that Noise Level Adjustment 8 stops increasing the estimated noise level as soon as λ′k(m) has a value larger than λk.
  • As an alternative implementation, advantage is taken of the fact that many speech enhancement processes actually estimate the signal-to-noise ratio (SNR) ξk for each subband, which also gives a good indication of noise level underestimation if it has a large value persistently over a long time period. Therefore, the condition ηk(m)>μλ′k(m) in the above process can be replaced by ξk>1+μ and the rest of the process remains unchanged.
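  • In this variant only the trigger condition changes. A sketch follows, assuming ξk is the per-subband SNR already produced by the enhancement process (the name and shape of that quantity are assumptions here):

```python
import numpy as np

def nla_trigger_snr(xi, mu=2.0):
    """Alternative trigger for the adjustment process: flag subbands whose
    estimated SNR xi_k exceeds 1 + mu, replacing the comparison
    eta_k(m) > mu * lambda'_k(m); the rest of the process is unchanged."""
    return np.asarray(xi) > 1.0 + mu
```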
  • Finally, one may use the same example as in FIGS. 2 and 3 to illustrate how the present invention addresses the problem of noise level underestimation. As shown in FIG. 4, Noise Level Adjustment 8 detects that the incoming signal has a level persistently higher than the estimated noise level after time m0 because the actual noise level increases from λ0 to λ1 at time m0. As a result, Noise Level Adjustment 8 increases the estimated noise level at times m0+kD, where k=1,2, . . . , until the noise level estimate is close enough to the actual noise level λ1. In this particular example, this happens after m>m0+3D, when the estimated noise level has a value α³·λ′0 that is slightly larger than λ1. By comparison with FIGS. 2 and 3, it will be seen that the present invention provides a more accurate noise estimation, thus providing an improved enhanced speech output.
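  • As a purely hypothetical numeric illustration of FIG. 4: if the noise floor rises 9 dB above the current estimate and the adjustment step is α=2 (3 dB), three adjustments, i.e. roughly 3·D time indices, are needed before the estimate again exceeds the actual level.

```python
import math

alpha_db = 3.0                # adjustment step size, alpha = 2  ->  3 dB
underestimation_db = 9.0      # hypothetical jump of the actual noise level
steps = math.ceil(underestimation_db / alpha_db)
print(steps)                  # -> 3 adjustments, reached after about 3*D time indices
```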
  • IMPLEMENTATION
  • The invention may be implemented in hardware or software, or a combination of both (e.g., programmable logic arrays). Unless otherwise specified, the processes included as part of the invention are not inherently related to any particular computer or other apparatus. In particular, various general-purpose machines may be used with programs written in accordance with the teachings herein, or it may be more convenient to construct more specialized apparatus (e.g., integrated circuits) to perform the required method steps. Thus, the invention may be implemented in one or more computer programs executing on one or more programmable computer systems each comprising at least one processor, at least one data storage system (including volatile and non-volatile memory and/or storage elements), at least one input device or port, and at least one output device or port. Program code is applied to input data to perform the functions described herein and generate output information. The output information is applied to one or more output devices, in known fashion.
  • Each such program may be implemented in any desired computer language (including machine, assembly, or high level procedural, logical, or object oriented programming languages) to communicate with a computer system. In any case, the language may be a compiled or interpreted language.
  • Each such computer program is preferably stored on or downloaded to a storage media or device (e.g., solid state memory or media, or magnetic or optical media) readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage media or device is read by the computer system to perform the procedures described herein. The inventive system may also be considered to be implemented as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer system to operate in a specific and predefined manner to perform the functions described herein.
  • A number of embodiments of the invention have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the invention. For example, some of the steps described herein may be order independent, and thus can be performed in an order different from that described.

Claims (8)

1. A method for enhancing speech components of an audio signal composed of speech and noise components, comprising:
changing the audio signal from the time domain to a plurality of subbands in the frequency domain producing K multiple subband signals, Yk(m), k=1, . . . , K, m=0,1, . . . , ∞, where k is the subband number, and m is the time index of each subband signal,
processing the subbands of the audio signal,
said processing including controlling the gain of the audio signal in ones of said subbands, wherein the gain in a subband is reduced as the level of estimated noise components increases with respect to the level of speech components, the change of the said gain being performed according to a set of parameters continuously updated for each time index m, said parameters being dependent only on their respective prior value at time index (m−1), characteristics of the subband at time index m, and a set of predetermined constants,
wherein the level of estimated noise components is determined at least in part by comparing an estimated noise components level with the level of the audio signal in the subband and increasing the estimated noise components level in the subband by a predetermined amount when the input signal level in the subband exceeds the estimated noise components level in the subband by a limit for more than a defined time,
wherein said defined time is updated according to a counter, said counter being robust with respect to false alarms and resets due to temporary signal fluctuations by introducing a hand-off counter, and
changing the processed audio signal from the frequency domain to the time domain to provide an audio signal in which speech components are enhanced.
2. A method according to claim 1 wherein the estimated noise components are determined by a voice-activity-detector-based noise-level-estimator device or process.
3. A method according to claim 1 wherein the estimated noise components are determined by a statistically-based noise-level-estimator device or process.
4. A method for enhancing speech components of an audio signal composed of speech and noise components, comprising:
changing the audio signal from the time domain to a plurality of subbands in the frequency domain, producing K multiple subband signals, Yk(m), k=1, . . . , K, m=0, 1, . . . , ∞, where k is the subband number, and m is the time index of each subband signal,
processing the subbands of the audio signal,
said processing including controlling the gain of the audio signal in ones of said subbands, wherein the gain in a subband is reduced as the level of estimated noise components increases with respect to the level of speech components, wherein the level of estimated noise components is determined at least in part by obtaining and monitoring the signal-to-noise ratio in the subband and increasing the estimated noise components level in the subband by a predetermined amount when the signal-to-noise ratio in the subband exceeds a limit for more than a defined time, the change of the said gain being performed according to a set of parameters continuously updated for each time index m, said parameters being dependent only on their respective prior value at time index (m−1), characteristics of the subband at time index m, and a set of predetermined constants, and said defined time being updated according to a counter, said counter being robust with respect to false alarms and resets due to temporary signal fluctuations by introducing a hand-off counter, and
changing the processed audio signal from the frequency domain to the time domain to provide an audio signal in which speech components are enhanced.
5. A method according to claim 4 wherein the estimated noise components are determined by a voice-activity-detector-based noise-level-estimator device or process.
6. A method according to claim 4 wherein the estimated noise components are determined by a statistically-based noise-level-estimator device or process.
7. Apparatus comprising means adapted to perform the method of any one of claims 1 through 6.
8. A computer program, stored on a computer-readable medium for causing a computer to perform the method of any one of claims 1 through 6.
US12/677,087 (priority 2007-09-12, filed 2008-09-10): Speech enhancement with noise level estimation adjustment. Granted as US8538763B2; Active, adjusted expiration 2031-01-17.

Priority Applications (1)

US12/677,087 (priority 2007-09-12, filed 2008-09-10): Speech enhancement with noise level estimation adjustment (US8538763B2)

Applications Claiming Priority (3)

US99354807P (filed 2007-09-12)
PCT/US2008/010589 (filed 2008-09-10): Speech enhancement with noise level estimation adjustment (WO2009035613A1)
US12/677,087 (priority 2007-09-12, filed 2008-09-10): Speech enhancement with noise level estimation adjustment (US8538763B2)

Publications (2)

US20100198593A1 (published 2010-08-05)
US8538763B2 (published 2013-09-17)

Family

ID: 40028506

Family Applications (1)

US12/677,087 (priority 2007-09-12, filed 2008-09-10): Speech enhancement with noise level estimation adjustment; granted as US8538763B2; Active, adjusted expiration 2031-01-17

Country Status (7)

US: US8538763B2
EP: EP2191465B1
JP: JP4970596B2
CN: CN101802909B
AT: ATE501506T1
DE: DE602008005477D1
WO: WO2009035613A1

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8804977B2 (en) 2011-03-18 2014-08-12 Dolby Laboratories Licensing Corporation Nonlinear reference signal processing for echo suppression
US9449609B2 (en) 2013-11-07 2016-09-20 Continental Automotive Systems, Inc. Accurate forward SNR estimation based on MMSE speech probability presence
US9449610B2 (en) 2013-11-07 2016-09-20 Continental Automotive Systems, Inc. Speech probability presence modifier improving log-MMSE based noise suppression performance
US9449615B2 (en) 2013-11-07 2016-09-20 Continental Automotive Systems, Inc. Externally estimated SNR based modifiers for internal MMSE calculators
GB201401689D0 (en) 2014-01-31 2014-03-19 Microsoft Corp Audio signal processing
WO2015130283A1 (en) 2014-02-27 2015-09-03 Nuance Communications, Inc. Methods and apparatus for adaptive gain control in a communication system
US10020002B2 (en) * 2015-04-05 2018-07-10 Qualcomm Incorporated Gain parameter estimation based on energy saturation and signal scaling
CN106920559B (en) * 2017-03-02 2020-10-30 奇酷互联网络科技(深圳)有限公司 Voice communication optimization method and device and call terminal
CN108922523B (en) * 2018-06-19 2021-06-15 Oppo广东移动通信有限公司 Position prompting method and device, storage medium and electronic equipment
US11605392B2 (en) * 2020-03-16 2023-03-14 Google Llc Automatic gain control based on machine learning level estimation of the desired signal
CN112102818B (en) * 2020-11-19 2021-01-26 成都启英泰伦科技有限公司 Signal-to-noise ratio calculation method combining voice activity detection and sliding window noise estimation

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04230798A (en) * 1990-05-28 1992-08-19 Matsushita Electric Ind Co Ltd Noise predicting device
JP3418855B2 (en) * 1996-10-30 2003-06-23 京セラ株式会社 Noise removal device
US6108610A (en) * 1998-10-13 2000-08-22 Noise Cancellation Technologies, Inc. Method and system for updating noise estimates during pauses in an information signal
US6618701B2 (en) 1999-04-19 2003-09-09 Motorola, Inc. Method and system for noise suppression using external voice activity detection
US6910011B1 (en) * 1999-08-16 2005-06-21 Harman Becker Automotive Systems - Wavemakers, Inc. Noisy acoustic signal enhancement
JP3454206B2 (en) * 1999-11-10 2003-10-06 三菱電機株式会社 Noise suppression device and noise suppression method
US20030023429A1 (en) 2000-12-20 2003-01-30 Octiv, Inc. Digital signal processing techniques for improving audio clarity and intelligibility
JP3574123B2 (en) * 2001-03-28 2004-10-06 三菱電機株式会社 Noise suppression device
CA2354755A1 (en) 2001-08-07 2003-02-07 Dspfactory Ltd. Sound intelligibility enhancement using a psychoacoustic model and an oversampled filterbank
US7447631B2 (en) * 2002-06-17 2008-11-04 Dolby Laboratories Licensing Corporation Audio coding system using spectral hole filling
CN100517298C (en) * 2003-09-29 2009-07-22 新加坡科技研究局 Method for performing a domain transformation of a digital signal from the time domain into the frequency domain and vice versa
CN1322488C (en) * 2004-04-14 2007-06-20 华为技术有限公司 Method for strengthening sound
JP4519169B2 (en) * 2005-02-02 2010-08-04 富士通株式会社 Signal processing method and signal processing apparatus
US8744844B2 (en) * 2007-07-06 2014-06-03 Audience, Inc. System and method for adaptive intelligent noise suppression
JP4454591B2 (en) * 2006-02-09 2010-04-21 学校法人早稲田大学 Noise spectrum estimation method, noise suppression method, and noise suppression device
JP4836720B2 (en) * 2006-09-07 2011-12-14 株式会社東芝 Noise suppressor
JP4746533B2 (en) * 2006-12-21 2011-08-10 日本電信電話株式会社 Multi-sound source section determination method, method, program and recording medium thereof
JP5034735B2 (en) * 2007-07-13 2012-09-26 ヤマハ株式会社 Sound processing apparatus and program
JP4886715B2 (en) * 2007-08-28 2012-02-29 日本電信電話株式会社 Steady rate calculation device, noise level estimation device, noise suppression device, method thereof, program, and recording medium

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4811404A (en) * 1987-10-01 1989-03-07 Motorola, Inc. Noise suppression system
US6477489B1 (en) * 1997-09-18 2002-11-05 Matra Nortel Communications Method for suppressing noise in a digital speech signal
US6415253B1 (en) * 1998-02-20 2002-07-02 Meta-C Corporation Method and apparatus for enhancing noise-corrupted speech
US6993480B1 (en) * 1998-11-03 2006-01-31 Srs Labs, Inc. Voice intelligibility enhancement system
US6289309B1 (en) * 1998-12-16 2001-09-11 Sarnoff Corporation Noise spectrum tracking for speech enhancement
US6732073B1 (en) * 1999-09-10 2004-05-04 Wisconsin Alumni Research Foundation Spectral enhancement of acoustic signals to provide improved recognition of speech
US7191122B1 (en) * 1999-09-22 2007-03-13 Mindspeed Technologies, Inc. Speech compression system and method
US20050027520A1 (en) * 1999-11-15 2005-02-03 Ville-Veikko Mattila Noise suppression
US6760435B1 (en) * 2000-02-08 2004-07-06 Lucent Technologies Inc. Method and apparatus for network speech enhancement
US7117145B1 (en) * 2000-10-19 2006-10-03 Lear Corporation Adaptive filter for speech enhancement in a noisy environment
US20070094017A1 (en) * 2001-04-02 2007-04-26 Zinser Richard L Jr Frequency domain format enhancement
US20040078200A1 (en) * 2002-10-17 2004-04-22 Clarity, Llc Noise reduction in subbanded speech signals
US20050240401A1 (en) * 2004-04-23 2005-10-27 Acoustic Technologies, Inc. Noise suppression based on Bark band weiner filtering and modified doblinger noise estimate
US20060206320A1 (en) * 2005-03-14 2006-09-14 Li Qi P Apparatus and method for noise reduction and speech enhancement with microphones and loudspeakers

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8280731B2 (en) * 2007-03-19 2012-10-02 Dolby Laboratories Licensing Corporation Noise variance estimator for speech enhancement
US20100100386A1 (en) * 2007-03-19 2010-04-22 Dolby Laboratories Licensing Corporation Noise Variance Estimator for Speech Enhancement
US20100104113A1 (en) * 2008-10-24 2010-04-29 Yamaha Corporation Noise suppression device and noise suppression method
US8515098B2 (en) * 2008-10-24 2013-08-20 Yamaha Corporation Noise suppression device and noise suppression method
US9502048B2 (en) 2010-04-19 2016-11-22 Knowles Electronics, Llc Adaptively reducing noise to limit speech distortion
US9343056B1 (en) 2010-04-27 2016-05-17 Knowles Electronics, Llc Wind noise detection and suppression
US9438992B2 (en) 2010-04-29 2016-09-06 Knowles Electronics, Llc Multi-microphone robust noise suppression
US9431023B2 (en) 2010-07-12 2016-08-30 Knowles Electronics, Llc Monaural noise suppression based on computational auditory scene analysis
US8761410B1 (en) * 2010-08-12 2014-06-24 Audience, Inc. Systems and methods for multi-channel dereverberation
US20130191118A1 (en) * 2012-01-19 2013-07-25 Sony Corporation Noise suppressing device, noise suppressing method, and program
WO2013142723A1 (en) 2012-03-23 2013-09-26 Dolby Laboratories Licensing Corporation Hierarchical active voice detection
US20150325253A1 (en) * 2014-05-09 2015-11-12 Fujitsu Limited Speech enhancement device and speech enhancement method
US9779754B2 (en) * 2014-05-09 2017-10-03 Fujitsu Limited Speech enhancement device and speech enhancement method

Also Published As

Publication number Publication date
CN101802909B (en) 2013-07-10
DE602008005477D1 (en) 2011-04-21
EP2191465B1 (en) 2011-03-09
CN101802909A (en) 2010-08-11
WO2009035613A1 (en) 2009-03-19
US8538763B2 (en) 2013-09-17
EP2191465A1 (en) 2010-06-02
JP4970596B2 (en) 2012-07-11
JP2010539538A (en) 2010-12-16
ATE501506T1 (en) 2011-03-15

Similar Documents

Publication Publication Date Title
US8538763B2 (en) Speech enhancement with noise level estimation adjustment
US8583426B2 (en) Speech enhancement with voice clarity
US8560320B2 (en) Speech enhancement employing a perceptual model
US7953596B2 (en) Method of denoising a noisy signal including speech and noise components
US8280731B2 (en) Noise variance estimator for speech enhancement
EP1065656B1 (en) Method for reducing noise in an input speech signal
US6289309B1 (en) Noise spectrum tracking for speech enhancement
US8244523B1 (en) Systems and methods for noise reduction
US20090254340A1 (en) Noise Reduction
US20080167866A1 (en) Spectro-temporal varying approach for speech enhancement
WO2001073751A9 (en) Speech presence measurement detection techniques
US7885810B1 (en) Acoustic signal enhancement method and apparatus
Bai et al. Two-pass quantile based noise spectrum estimation
da Silva et al. Speech enhancement using a frame adaptive gain function for Wiener filtering
Afolabi et al. Speech Enhancement of a Mobile Car-Noisy Speech Using Spectral Subtraction Algorithms
Tun An Approach for Noise-Speech Discrimination Using Wavelet Domain

Legal Events

Date Code Title Description
AS Assignment

Owner name: DOLBY LABORATORIES LICENSING CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:YU, RONGSHAN;REEL/FRAME:024046/0778

Effective date: 20071107

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8