WO2004056298A1 - Method and apparatus for removing noise from electronic signals - Google Patents
- Publication number
- WO2004056298A1 (PCT/US2002/037399)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- signal
- receiving device
- acoustic
- noise
- transfer function
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/78—Detection of presence or absence of voice signals
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
- G10L2021/02161—Number of inputs available containing the signal or the noise to be suppressed
- G10L2021/02165—Two microphones, one receiving mainly the noise signal and the other one mainly the speech signal
Definitions
- the invention is in the field of mathematical methods and electronic systems for removing or suppressing undesired acoustical noise from acoustic transmissions or recordings.
- Figure 1 is a block diagram of a denoising system, under an embodiment.
- Figure 2 is a block diagram illustrating a noise removal algorithm, under an embodiment assuming a single noise source and a direct path to the microphones.
- Figure 3 is a block diagram illustrating a front end of a noise removal algorithm of an embodiment generalized to n distinct noise sources (these noise sources may be reflections or echoes of one another).
- Figure 4 is a block diagram illustrating a front end of a noise removal algorithm of an embodiment in a general case where there are n distinct noise sources and signal reflections.
- Figure 5 is a flow diagram of a denoising method, under an embodiment.
- Figure 6 shows results of a noise suppression algorithm of an embodiment for an American English female speaker in the presence of airport terminal noise that includes many other human speakers and public announcements.
- Figure 7 is a block diagram of a physical configuration for denoising using unidirectional and omnidirectional microphones, under the embodiments of Figures 2, 3, and 4.
- Figure 8 is a denoising microphone configuration including two omnidirectional microphones, under an embodiment.
- Figure 9 is a plot of the C required versus distance, under the embodiment of Figure 8.
- Figure 10 is a block diagram of a front end of a noise removal algorithm under an embodiment in which the two microphones have different response characteristics.
- Figure 11A is a plot of the difference in frequency response (percent) between the microphones (at a distance of 4 centimeters) before compensation.
- Figure 11B is a plot of the difference in frequency response (percent) between the microphones (at a distance of 4 centimeters) after DFT compensation, under an embodiment.
- Figure 11C is a plot of the difference in frequency response (percent) between the microphones (at a distance of 4 centimeters) after time-domain filter compensation, under an alternate embodiment.
- Figure 1 is a block diagram of a denoising system of an embodiment that uses knowledge of when speech is occurring derived from physiological information on voicing activity.
- the system includes microphones 10 and sensors 20 that provide signals to at least one processor 30.
- the processor includes a denoising subsystem or algorithm 40.
- FIG. 2 is a block diagram illustrating a noise removal algorithm of an embodiment, showing the system components used. A single noise source and a direct path to the microphones are assumed. Figure 2 includes a graphic description of the process of an embodiment, with a single signal source 100 and a single noise source 101. This algorithm uses two microphones: a "signal" microphone ("MIC 1") and a "noise" microphone ("MIC 2"), but is not so limited. MIC 1 is assumed to capture mostly signal with some noise, while MIC 2 captures mostly noise with some signal. The data from the signal source 100 to MIC 1 is denoted by s(n), where s(n) is a discrete sample of the analog signal from the source 100.
- the data from the signal source 100 to MIC 2 is denoted by s 2 (n).
- the data from the noise source 101 to MIC 2 is denoted by n(n).
- the data from the noise source 101 to MIC 1 is denoted by n 2 (n).
- the data from MIC 1 to noise removal element 105 is denoted by m1(n).
- the data from MIC 2 to noise removal element 105 is denoted by m2(n).
- the noise removal element 105 also receives a signal from a voice activity detection (VAD) element 104.
- the VAD 104 uses physiological information to determine when a speaker is speaking.
- the VAD includes a radio frequency device, an electroglottograph, an ultrasound device, an acoustic throat microphone, and/or an airflow detector.
- the transfer functions from the signal source 100 to MIC 1 and from the noise source 101 to MIC 2 are assumed to be unity.
- the transfer function from the signal source 100 to MIC 2 is denoted by H2(z), and the transfer function from the noise source 101 to MIC 1 is denoted by H1(z).
- H2(z) the transfer function from the signal source 100 to MIC 2
- H1(z) the transfer function from the noise source 101 to MIC 1
- the assumption of unity transfer functions does not inhibit the generality of this algorithm, as the actual relations between the signal, noise, and microphones are simply ratios and the ratios are redefined in this manner for simplicity.
- the information from MIC 2 is used to attempt to remove noise from MIC 1.
- an unspoken assumption is that the VAD element 104 is never perfect, and thus the denoising must be performed cautiously, so as not to remove too much of the signal along with the noise.
- if the VAD 104 is assumed to be perfect, such that it is equal to zero when there is no speech being produced by the user and equal to one when speech is produced, a substantial improvement in the noise removal can be made.
- the total acoustic information coming into MIC 1 is denoted by m1(n).
- the total acoustic information coming into MIC 2 is similarly labeled m2(n).
- in the z (digital frequency) domain, these are represented as M1(z) and M2(z).
- N2(z) = N(z)H1(z) and S2(z) = S(z)H2(z), so that M1(z) = S(z) + N(z)H1(z) and M2(z) = N(z) + S(z)H2(z) (Equation 1)
- Equation 1 has four unknowns and only two known relationships and therefore cannot be solved explicitly.
- when the VAD indicates that no speech is being produced, S(z) = 0 and Equation 1 reduces to M1n(z) = N(z)H1(z) and M2n(z) = N(z), so that H1(z) = M1n(z)/M2n(z).
- H1(z) can be calculated using any of the available system identification algorithms and the microphone outputs when the system is certain that only noise is being received. The calculation can be done adaptively, so that the system can react to changes in the noise.
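As a sketch of this noise-only system identification step, H1 can be estimated per DFT bin as an averaged spectral ratio of the two microphone signals while only noise is present. This is a hypothetical implementation (the function name, frame length, and least-squares averaging are my own choices, not taken from the patent):

```python
import numpy as np

def estimate_h1(mic1_noise, mic2_noise, nfft=256, eps=1e-12):
    """Estimate H1(z) bin-by-bin from noise-only data (VAD = 0) as a
    least-squares frequency-domain fit: the ratio of the averaged
    cross-spectrum M1n*conj(M2n) to the averaged power |M2n|^2."""
    frames = len(mic1_noise) // nfft
    num = np.zeros(nfft, dtype=complex)   # accumulated M1n * conj(M2n)
    den = np.zeros(nfft)                  # accumulated |M2n|^2
    for i in range(frames):
        m1 = np.fft.fft(mic1_noise[i * nfft:(i + 1) * nfft])
        m2 = np.fft.fft(mic2_noise[i * nfft:(i + 1) * nfft])
        num += m1 * np.conj(m2)
        den += np.abs(m2) ** 2
    return num / (den + eps)              # H1[k] ~ M1n[k] / M2n[k]
```

Averaging over many frames makes the estimate robust to any single noisy frame, matching the adaptive recalculation the text describes.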
- Equation 1 A solution is now available for one of the unknowns in Equation 1.
- FIG. 3 is a block diagram of a front end of a noise removal algorithm of an embodiment, generalized to n distinct noise sources. These distinct noise sources may be reflections or echoes of one another, but are not so limited.
- H1 depends only on the noise sources and their respective transfer functions and can be calculated any time there is no signal being transmitted.
- n subscripts on the microphone inputs denote only that noise is being detected, while an s subscript denotes that only signal is being received by the microphones.
- rewriting Equation 4, using H1 defined in Equation 6, provides
- if H1 can be estimated to a high enough accuracy, and the above assumption of only one path from the signal to the microphones holds, the noise may be removed completely.
- the most general case involves multiple noise sources and multiple signal sources.
- Figure 4 is a block diagram of a front end of a noise removal algorithm of an embodiment in the most general case where there are n distinct noise sources and signal reflections. Here, reflections of the signal enter both microphones. This is the most general case, as reflections of the noise source into the microphones can be modeled accurately as simple additional noise sources.
- the direct path from the signal to MIC 2 has changed from H0(z) to H00(z), and the reflected paths to MIC 1 and MIC 2 are denoted by H01(z) and H02(z), respectively.
- the input into the microphones now becomes
- when only signal is being received, Equation 9 reduces to
- M1s = S + S·H01
- M2s = S·H00 + S·H02.
- rewriting Equation 9 again using the definition for H1 (as in Equation 7) provides
- Equation 12 is the same as Equation 8, with the replacement of H0 by H2, and the addition of the (1 + H01) factor on the left side.
- This extra factor means that S cannot be solved for directly in this situation, but a solution can be generated for the signal plus the addition of all of its echoes. This is not such a bad situation, as there are many conventional methods for dealing with echo suppression, and even if the echoes are not suppressed, it is unlikely that they will affect the comprehensibility of the speech to any meaningful extent.
- the more complex calculation of H2 is needed to account for the signal echoes in MIC 2, which act as noise sources.
- Figure 5 is a flow diagram of a denoising method of an embodiment.
- the acoustic signals are received 502. Further, physiological information associated with human voicing activity is received 504.
- a first transfer function representative of the acoustic signal is calculated upon determining that voicing information is absent from the acoustic signal for at least one specified period of time 506.
- a second transfer function representative of the acoustic signal is calculated upon determining that voicing information is present in the acoustic signal for at least one specified period of time 508. Noise is removed from the acoustic signal using at least one combination of the first transfer function and the second transfer function, producing denoised acoustic data streams 510.
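The flow above (502 through 510) can be sketched as a frame loop that gates transfer-function training on the voicing decision: train the first transfer function when voicing is absent, the second when it is present, and denoise every frame with the current estimates. The callable names are hypothetical placeholders, not the patent's API:

```python
def denoise_stream(frames_m1, frames_m2, vad_flags,
                   update_h1, update_h2, apply_denoise):
    """Frame-by-frame sketch of the Figure 5 flow: H1 is trained on
    noise-only frames (506), H2 on voiced frames (508), and each frame
    is denoised with the latest estimates (510). The three callables
    stand in for the system-identification and noise-removal steps."""
    h1, h2 = None, None
    denoised = []
    for m1, m2, voiced in zip(frames_m1, frames_m2, vad_flags):
        if not voiced:
            h1 = update_h1(m1, m2, h1)   # voicing absent: refine H1
        else:
            h2 = update_h2(m1, m2, h2)   # voicing present: refine H2
        denoised.append(apply_denoise(m1, m2, h1, h2))
    return denoised
```

A toy run with scalar "frames" and trivial update rules shows the gating: noise-only frames refresh H1, voiced frames refresh H2, and every frame is cleaned.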
- An algorithm for noise removal, or denoising algorithm, is described herein, from the simplest case of a single noise source with a direct path to the general case of multiple noise sources with reflections and echoes.
- the algorithm has been shown herein to be viable under any environmental conditions. The type and amount of noise are inconsequential if a good estimate has been made of H1 and H2, and if one does not change substantially while the other is calculated. If the user environment is such that echoes are present, they can be compensated for if coming from a noise source. If signal echoes are also present, they will affect the cleaned signal, but the effect should be negligible in most environments. In operation, the algorithm of an embodiment has shown excellent results in dealing with a variety of noise types, amplitudes, and orientations.
- in Equation 3, H2(z) is assumed small and therefore H2(z)H1(z) ≈ 0, so that Equation 3 reduces to S(z) = M1(z) - M2(z)H1(z).
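When H2(z) is negligible, that reduction amounts to a per-bin subtraction, S(z) = M1(z) - M2(z)H1(z). A minimal frame-level sketch (function name mine, not the patent's):

```python
import numpy as np

def denoise_frame(m1, m2, H1):
    """Apply the reduced Equation 3 (valid when H2 ~ 0):
    S(z) = M1(z) - M2(z)H1(z), evaluated per DFT bin, then return the
    time-domain frame. H1 is a complex array of per-bin estimates."""
    M1 = np.fft.fft(m1)
    M2 = np.fft.fft(m2)
    S = M1 - M2 * H1
    return np.real(np.fft.ifft(S))
```

With a perfect H1 estimate and no signal leakage into MIC 2, the noise cancels exactly; in practice the quality of the H1 estimate sets the suppression floor.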
- the acoustic data was divided into 16 subbands, with the lowest frequency at 50 Hz and the highest at 3700 Hz.
- the denoising algorithm was then applied to each subband in turn, and the 16 denoised data streams were recombined to yield the denoised acoustic data. This works very well, but any combination of subbands (e.g., 4, 6, 8, or 32 bands, equally spaced, perceptually spaced, etc.) can be used and has been found to work as well.
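The subband split-and-recombine step can be sketched with a simple DFT-domain filter bank. This is an assumed implementation (the patent does not specify its filter bank; function names and the masking approach are mine):

```python
import numpy as np

def split_subbands(x, n_bands=16, fs=8000, lo=50.0, hi=3700.0):
    """Split a signal into n_bands equally spaced subbands between lo
    and hi Hz via DFT-domain masking. Returns a list of time-domain
    band signals whose sum reconstructs the band-limited input."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    edges = np.linspace(lo, hi, n_bands + 1)
    bands = []
    for b in range(n_bands):
        mask = (freqs >= edges[b]) & (freqs < edges[b + 1])
        Xb = np.where(mask, X, 0.0)       # keep only this band's bins
        bands.append(np.fft.irfft(Xb, n=len(x)))
    return bands

def recombine(bands):
    """Sum the per-band (individually denoised) streams back together."""
    return np.sum(bands, axis=0)
```

Each band would be denoised independently (with its own H1 estimate) before recombination, which is what lets the suppression adapt across frequency.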
- the amplitude of the noise was constrained in an embodiment so that the microphones used did not saturate (that is, operate outside a linear response region). It is important that the microphones operate linearly to ensure the best performance. Even with this restriction, very low signal-to-noise ratio (SNR) signals can be denoised (down to -10 dB or less).
- SNR signal-to-noise ratio
- the calculation of H1(z) is accomplished every 10 milliseconds using the Least-Mean-Squares (LMS) method, a common adaptive filtering technique.
- LMS Least-Mean-Squares
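The periodic LMS update can be sketched as one normalized-LMS pass over a block of samples. The normalization, step size, and tap count here are illustrative choices, not values stated in the patent:

```python
import numpy as np

def lms_update(h, x_ref, d, mu=0.1, eps=1e-8):
    """One normalized-LMS pass over a block: adapt FIR taps h so that
    h filtered against x_ref tracks d. During noise-only periods,
    x_ref is the MIC 2 data and d is the MIC 1 data, so h converges
    toward the impulse response of H1."""
    h = h.copy()
    L = len(h)
    for n in range(L - 1, len(x_ref)):
        xv = x_ref[n - L + 1:n + 1][::-1]      # x[n], x[n-1], ..., x[n-L+1]
        e = d[n] - h @ xv                      # instantaneous prediction error
        h += (mu / (eps + xv @ xv)) * e * xv   # normalized gradient step
    return h
```

Running this every 10 ms block, only when the VAD reports no speech, gives the adaptive, noise-tracking behavior the text describes.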
- the VAD for an embodiment is derived from a radio frequency sensor and the two microphones, yielding very high accuracy (>99%) for both voiced and unvoiced speech.
- the VAD of an embodiment uses a radio frequency (RF) interferometer to detect tissue motion associated with human speech production, but is not so limited. It is therefore completely free of acoustic noise and is able to function in any acoustic noise environment.
- RF radio frequency
- a simple energy measurement of the RF signal can be used to determine if voiced speech is occurring.
- Unvoiced speech can be determined using conventional acoustic-based methods, by proximity to voiced sections determined using the RF sensor or similar voicing sensors, or through a combination of the above. Since there is much less energy in unvoiced speech, its detection accuracy is not as critical as that for voiced speech.
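The "simple energy measurement" for voiced-speech detection can be sketched as a frame-energy threshold. The frame length, the noise-floor reference, and the 10x threshold factor below are illustrative assumptions, not the patent's calibration:

```python
import numpy as np

def energy_vad(sensor, frame=80, threshold=None):
    """Flag 10 ms frames (80 samples at 8 kHz) as voiced when their
    mean-square energy exceeds a threshold. Here the threshold defaults
    to a simple multiple of the quietest frame's energy; a real system
    would calibrate it to the RF sensor's noise floor."""
    n = len(sensor) // frame
    energies = np.array([np.mean(sensor[i * frame:(i + 1) * frame] ** 2)
                         for i in range(n)])
    if threshold is None:
        threshold = 10.0 * (energies.min() + 1e-12)
    return energies > threshold
```

Because the RF sensor is unaffected by acoustic noise, even this crude detector remains reliable in loud environments, which is the property the text emphasizes.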
- the algorithm of an embodiment can be implemented with a variety of voicing sensors. Once again, it is useful to repeat that the noise removal algorithm does not depend on how the VAD signal is obtained, only that it is accurate, especially for voiced speech. If speech is not detected and training occurs on the speech, the subsequent denoised acoustic data can be distorted.
- the speaker is uttering the numbers 406-5562 in the midst of moderate airport terminal noise.
- the dirty acoustic data was denoised 10 milliseconds at a time, and before denoising the 10 milliseconds of data were prefiltered from 50 to 3700 Hz.
- a reduction in the noise of approximately 17 dB is evident. No post filtering was done on this sample; thus, all of the noise reduction realized is due to the algorithm of an embodiment. It is clear that the algorithm adjusts to the noise instantly, and is capable of removing the very difficult noise of other human speakers. Many different types of noise have all been tested with similar results, including street noise, helicopters, music, and sine waves, to name a few. Also, the orientation of the noise can be varied substantially without significantly changing the noise suppression performance.
- the distortion of the cleaned speech is very low, ensuring good performance for speech recognition engines and human receivers alike.
- the noise removal algorithm of an embodiment has been shown to be viable under any environmental conditions. The type and amount of noise are inconsequential if a good estimate has been made of H1 and H2. If the user environment is such that echoes are present, they can be compensated for if coming from a noise source. If signal echoes are also present, they will affect the cleaned signal, but the effect should be negligible in most environments.
- Figure 7 is a block diagram of a physical configuration for denoising using a unidirectional microphone M2 for the noise and an omnidirectional microphone Ml for the speech, under the embodiments of Figures 2, 3, and 4.
- the path from the speech to the noise microphone (MIC 2) is approximated as zero, and that approximation is realized through the careful placement of omnidirectional and unidirectional microphones.
- the performance can drop to only 10-20 dB of noise suppression. This drop in suppression ability can be attributed to the steps taken to ensure that H 2 is close to zero.
- Figure 8 is a denoising microphone configuration including two omnidirectional microphones, under an embodiment. The same effect can be achieved through the use of two unidirectional microphones, oriented in the same direction (toward the signal source). Yet another embodiment uses one unidirectional microphone and one omnidirectional microphone. The idea is to capture similar information from acoustic sources in the direction of the signal source. The relative locations of the signal source and the two microphones are fixed and known.
- H2 can be fixed to be of the form Cz⁻ⁿ, where C is the difference in amplitude of the signal data at MIC 1 and MIC 2.
- C the difference in amplitude of the signal data at MIC 1 and MIC 2
- n = 1, although any integer other than zero may be used.
- the use of positive integers is recommended.
- the C required can be estimated by
- Figure 9 is a plot of the C required versus distance, under the embodiment of Figure 8.
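One hedged way to sketch the C-versus-distance relationship, assuming simple spherical (1/r) amplitude spreading from a point source to two in-line microphones. This 1/r model is my assumption, not the patent's stated formula, and the geometry (MIC 2 directly behind MIC 1) is the fixed arrangement of Figure 8:

```python
def amplitude_ratio(d1, mic_spacing):
    """Under an assumed 1/r spherical-spreading model, the signal
    amplitude at MIC 2 relative to MIC 1 is the ratio of the two
    source-to-microphone distances: C = d1 / (d1 + spacing)."""
    d2 = d1 + mic_spacing
    return d1 / d2
```

This reproduces the qualitative shape of a Figure 9-style plot: C is small when the source is close to MIC 1 and approaches 1 as the source moves far away, where both microphones see nearly the same amplitude.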
- FIG. 10 is a block diagram of a front end of a noise removal algorithm under an embodiment in which the two microphones MIC 1 and MIC 2 have different response characteristics.
- Figure 10 includes a graphic description of the process of an embodiment, with a single signal source 1000 and a single noise source 1001.
- This algorithm uses two microphones: a "signal” microphone 1 ("MIC1") and a “noise” microphone 2 ("MIC 2"), but is not so limited.
- MIC 1 is assumed to capture mostly signal with some noise, while MIC 2 captures mostly noise with some signal.
- the data from the signal source 1000 to MIC 1 is denoted by s(n), where s(n) is a discrete sample of the analog signal from the source 1000.
- the data from the signal source 1000 to MIC 2 is denoted by s 2 (n).
- the data from the noise source 1001 to MIC 2 is denoted by n(n).
- the data from the noise source 1001 to MIC 1 is denoted by n 2 (n).
- a transfer function A(z) represents the frequency response of MIC 1 along with its filtering and amplification responses.
- a transfer function B(z) represents the frequency response of MIC 2 along with its filtering and amplification responses.
- the output of the transfer function A(z) is denoted by m1(n), and the output of the transfer function B(z) is denoted by m2(n).
- the signals m1(n) and m2(n) are received by a noise removal element 1005, which operates on the signals and outputs "cleaned speech".
- the frequency response of MIC X will include the combined effects of the microphone and any amplification or filtering processes that occur during the data recording process for that microphone.
- H̃x represents the measured response and Hx the actual response.
- Hx the actual response
- the magnitude of the DFT for MIC 2 in each frequency bin is then set equal to C multiplied by the magnitude of the DFT for MIC 1. If M1[n] represents the nth frequency bin magnitude of the DFT for MIC 1, then the factor that is multiplied by M2[n] would be F[n] = C·M1[n]/M2[n].
- MIC 2 is resynthesized so that the relationship between the two microphone magnitudes holds in each frequency bin.
- This transformation could also be performed in the time domain, using a filter that would emulate the properties of F as closely as possible (for example, the Matlab function fir2.m could be used with the calculated values of F[n] to construct a suitable FIR filter).
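The DFT compensation step can be sketched as follows; the per-bin factor F[n] = C·|M1[n]|/|M2[n]| and the choice to keep MIC 2's phase are my reading of the text, and the function name is hypothetical:

```python
import numpy as np

def compensate_mic2(m1, m2, C=1.0, eps=1e-12):
    """Resynthesize a MIC 2 frame so that each DFT bin's magnitude
    equals C times the corresponding MIC 1 magnitude, while keeping
    MIC 2's phase. This implements the per-bin scaling
    F[n] = C * |M1[n]| / |M2[n]| described in the text."""
    M1 = np.fft.rfft(m1)
    M2 = np.fft.rfft(m2)
    F = C * np.abs(M1) / (np.abs(M2) + eps)  # per-bin magnitude factor
    return np.fft.irfft(M2 * F, n=len(m2))
```

After this compensation the two channels have matched magnitude responses, which is what Figures 11A through 11C quantify before and after the correction.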
- Figure 11A is a plot of the difference in frequency response (percent) between the microphones (at a distance of 4 centimeters) before compensation.
- Figure 11B is a plot of the difference in frequency response (percent) between the microphones (at a distance of 4 centimeters) after DFT compensation.
- Figure 11C is a plot of the difference in frequency response (percent) between the microphones (at a distance of 4 centimeters) after time-domain filter compensation.
- routines described herein can include any of the following, or one or more combinations of the following: a routine stored in nonvolatile memory (not shown) that forms part of an associated processor or processors; a routine implemented using conventional programmed logic arrays or circuit elements; a routine stored in removable media such as disks; a routine downloaded from a server and stored locally at a client; and a routine hardwired or preprogrammed in chips such as electrically erasable programmable read only memory (“EEPROM”) semiconductor chips, application specific integrated circuits (ASICs), or by digital signal processing (DSP) integrated circuits.
- EEPROM electrically erasable programmable read only memory
- ASICs application specific integrated circuits
- DSP digital signal processing
Abstract
Description
Claims
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020047007752A KR100936093B1 (en) | 2001-11-21 | 2002-11-21 | Method and apparatus for removing noise from electronic signals |
CA002465552A CA2465552A1 (en) | 2001-11-21 | 2002-11-21 | Method and apparatus for removing noise from electronic signals |
EP02793985A EP1480589A1 (en) | 2001-11-21 | 2002-11-21 | Method and apparatus for removing noise from electronic signals |
JP2004562239A JP2005529379A (en) | 2001-11-21 | 2002-11-21 | Method and apparatus for removing noise from electronic signals |
AU2002359445A AU2002359445A1 (en) | 2001-11-21 | 2002-11-21 | Method and apparatus for removing noise from electronic signals |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US32220201P | 2001-11-21 | 2001-11-21 | |
US60/322,202 | 2001-11-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004056298A1 true WO2004056298A1 (en) | 2004-07-08 |
Family
ID=32680708
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2002/037399 WO2004056298A1 (en) | 2001-11-21 | 2002-11-21 | Method and apparatus for removing noise from electronic signals |
Country Status (6)
Country | Link |
---|---|
EP (1) | EP1480589A1 (en) |
JP (1) | JP2005529379A (en) |
KR (1) | KR100936093B1 (en) |
CN (1) | CN1589127A (en) |
AU (1) | AU2002359445A1 (en) |
WO (1) | WO2004056298A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005029468A1 (en) * | 2003-09-18 | 2005-03-31 | Aliphcom, Inc. | Voice activity detector (vad) -based multiple-microphone acoustic noise suppression |
US6961623B2 (en) | 2002-10-17 | 2005-11-01 | Rehabtronics Inc. | Method and apparatus for controlling a device or process with vibrations generated by tooth clicks |
EP2202721A3 (en) * | 2008-12-26 | 2014-12-10 | Panasonic Corporation | Noise control device |
US9066186B2 (en) | 2003-01-30 | 2015-06-23 | Aliphcom | Light-based detection for acoustic applications |
US9099094B2 (en) | 2003-03-27 | 2015-08-04 | Aliphcom | Microphone array with rear venting |
US9338574B2 (en) | 2011-06-30 | 2016-05-10 | Thomson Licensing | Method and apparatus for changing the relative positions of sound objects contained within a Higher-Order Ambisonics representation |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB0725110D0 (en) * | 2007-12-21 | 2008-01-30 | Wolfson Microelectronics Plc | Gain control based on noise level |
JP5555987B2 (en) | 2008-07-11 | 2014-07-23 | 富士通株式会社 | Noise suppression device, mobile phone, noise suppression method, and computer program |
US8189799B2 (en) * | 2009-04-09 | 2012-05-29 | Harman International Industries, Incorporated | System for active noise control based on audio system output |
JP2014194437A (en) * | 2011-06-24 | 2014-10-09 | Nec Corp | Voice processing device, voice processing method and voice processing program |
KR101968158B1 (en) * | 2017-05-29 | 2019-08-13 | 주식회사 에스원 | Appartus and method for separating valid signal |
CN112889110A (en) * | 2018-10-15 | 2021-06-01 | 索尼公司 | Audio signal processing apparatus and noise suppression method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5473702A (en) * | 1992-06-03 | 1995-12-05 | Oki Electric Industry Co., Ltd. | Adaptive noise canceller |
US5649055A (en) * | 1993-03-26 | 1997-07-15 | Hughes Electronics | Voice activity detector for speech signals in variable background noise |
US5754665A (en) * | 1995-02-27 | 1998-05-19 | Nec Corporation | Noise Canceler |
US6266422B1 (en) * | 1997-01-29 | 2001-07-24 | Nec Corporation | Noise canceling method and apparatus for the same |
US6430295B1 (en) * | 1997-07-11 | 2002-08-06 | Telefonaktiebolaget Lm Ericsson (Publ) | Methods and apparatus for measuring signal level and delay at multiple sensors |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5406622A (en) * | 1993-09-02 | 1995-04-11 | At&T Corp. | Outbound noise cancellation for telephonic handset |
-
2002
- 2002-11-21 JP JP2004562239A patent/JP2005529379A/en active Pending
- 2002-11-21 CN CNA028231937A patent/CN1589127A/en active Pending
- 2002-11-21 EP EP02793985A patent/EP1480589A1/en not_active Withdrawn
- 2002-11-21 WO PCT/US2002/037399 patent/WO2004056298A1/en not_active Application Discontinuation
- 2002-11-21 AU AU2002359445A patent/AU2002359445A1/en not_active Abandoned
- 2002-11-21 KR KR1020047007752A patent/KR100936093B1/en not_active IP Right Cessation
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5473702A (en) * | 1992-06-03 | 1995-12-05 | Oki Electric Industry Co., Ltd. | Adaptive noise canceller |
US5649055A (en) * | 1993-03-26 | 1997-07-15 | Hughes Electronics | Voice activity detector for speech signals in variable background noise |
US5754665A (en) * | 1995-02-27 | 1998-05-19 | Nec Corporation | Noise Canceler |
US6266422B1 (en) * | 1997-01-29 | 2001-07-24 | Nec Corporation | Noise canceling method and apparatus for the same |
US6430295B1 (en) * | 1997-07-11 | 2002-08-06 | Telefonaktiebolaget Lm Ericsson (Publ) | Methods and apparatus for measuring signal level and delay at multiple sensors |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9196261B2 (en) | 2000-07-19 | 2015-11-24 | Aliphcom | Voice activity detector (VAD)—based multiple-microphone acoustic noise suppression |
US6961623B2 (en) | 2002-10-17 | 2005-11-01 | Rehabtronics Inc. | Method and apparatus for controlling a device or process with vibrations generated by tooth clicks |
US9066186B2 (en) | 2003-01-30 | 2015-06-23 | Aliphcom | Light-based detection for acoustic applications |
US9099094B2 (en) | 2003-03-27 | 2015-08-04 | Aliphcom | Microphone array with rear venting |
WO2005029468A1 (en) * | 2003-09-18 | 2005-03-31 | Aliphcom, Inc. | Voice activity detector (vad) -based multiple-microphone acoustic noise suppression |
EP2202721A3 (en) * | 2008-12-26 | 2014-12-10 | Panasonic Corporation | Noise control device |
US9020159B2 (en) | 2008-12-26 | 2015-04-28 | Panasonic Intellectual Property Management Co., Ltd. | Noise reduction device |
US9338574B2 (en) | 2011-06-30 | 2016-05-10 | Thomson Licensing | Method and apparatus for changing the relative positions of sound objects contained within a Higher-Order Ambisonics representation |
Also Published As
Publication number | Publication date |
---|---|
KR20040077661A (en) | 2004-09-06 |
EP1480589A1 (en) | 2004-12-01 |
JP2005529379A (en) | 2005-09-29 |
CN1589127A (en) | 2005-03-02 |
KR100936093B1 (en) | 2010-01-11 |
AU2002359445A1 (en) | 2004-07-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20020039425A1 (en) | Method and apparatus for removing noise from electronic signals | |
US9196261B2 (en) | Voice activity detector (VAD)—based multiple-microphone acoustic noise suppression | |
Zelinski | A microphone array with adaptive post-filtering for noise reduction in reverberant rooms | |
US5574824A (en) | Analysis/synthesis-based microphone array speech enhancer with variable signal distortion | |
JP4195267B2 (en) | Speech recognition apparatus, speech recognition method and program thereof | |
KR100549133B1 (en) | Noise reduction method and device | |
EP1169883B1 (en) | System and method for dual microphone signal noise reduction using spectral subtraction | |
US20030179888A1 (en) | Voice activity detection (VAD) devices and methods for use with noise suppression systems | |
US20070088544A1 (en) | Calibration based beamforming, non-linear adaptive filtering, and multi-sensor headset | |
WO2004077407A1 (en) | Estimation of noise in a speech signal | |
CN108172231A (en) | A kind of dereverberation method and system based on Kalman filtering | |
WO2004056298A1 (en) | Method and apparatus for removing noise from electronic signals | |
WO2003096031A9 (en) | Voice activity detection (vad) devices and methods for use with noise suppression systems | |
US20030128848A1 (en) | Method and apparatus for removing noise from electronic signals | |
US7890319B2 (en) | Signal processing apparatus and method thereof | |
Spriet et al. | Stochastic gradient-based implementation of spatially preprocessed speech distortion weighted multichannel Wiener filtering for noise reduction in hearing aids | |
CN108344501A (en) | Resonance identification and removing method and device in a kind of application of signal correlation | |
Huang et al. | Dereverberation | |
CA2465552A1 (en) | Method and apparatus for removing noise from electronic signals | |
KR101537653B1 (en) | Method and system for noise reduction based on spectral and temporal correlations | |
Cheng et al. | Speech Enhancement Based on Beamforming and Post-Filtering by Combining Phase Information. | |
Gustafsson et al. | Dual-Microphone Spectral Subtraction | |
KELAGADI et al. | REDUCTION OF ENERGY FOR IOT BASED SPEECH SENSORS IN NOISE REDUCTION USING MACHINE LEARNING MODEL. | |
Moir | Cancellation of noise from speech using Kepstrum analysis | |
Zhang et al. | Speech enhancement using improved adaptive null-forming in frequency domain with postfilter |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 2465552 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1347/DELNP/2004 Country of ref document: IN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2004562239 Country of ref document: JP Ref document number: 20028231937 Country of ref document: CN Ref document number: 1020047007752 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2002793985 Country of ref document: EP |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWP | Wipo information: published in national office |
Ref document number: 2002793985 Country of ref document: EP |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 2002793985 Country of ref document: EP |