US20160066088A1 - Utilizing level differences for speech enhancement - Google Patents

Utilizing level differences for speech enhancement

Info

Publication number
US20160066088A1
Authority
US
United States
Prior art keywords
estimate
acoustic signal
primary
filter
speech
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/477,761
Inventor
Carlos Avendano
Lloyd Watts
Peter Santos
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Knowles Electronics LLC
Original Assignee
Knowles Electronics LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Knowles Electronics LLC filed Critical Knowles Electronics LLC
Priority to US14/477,761
Assigned to AUDIENCE, INC. reassignment AUDIENCE, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: WATTS, LLOYD, SANTOS, PETER, AVENDANO, CARLOS
Assigned to AUDIENCE LLC reassignment AUDIENCE LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: AUDIENCE, INC.
Assigned to KNOWLES ELECTRONICS, LLC reassignment KNOWLES ELECTRONICS, LLC MERGER (SEE DOCUMENT FOR DETAILS). Assignors: AUDIENCE LLC
Publication of US20160066088A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/002Damping circuit arrangements for transducers, e.g. motional feedback circuits
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/20Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
    • H04R1/406Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers microphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/005Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2410/00Microphones
    • H04R2410/01Noise reduction using microphones having different directional characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00Signal processing covered by H04R, not provided for in its groups
    • H04R2430/20Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10General applications
    • H04R2499/11Transducers incorporated or for use in hand-held devices, e.g. mobile phones, PDA's, camera's

Definitions

  • the signals are forwarded to an energy module 304 which computes energy level estimates during an interval of time.
  • the energy estimate may be based on bandwidth of the cochlea channel and the acoustic signal.
  • the exemplary energy module 304 is a component which, in some embodiments, can be represented mathematically.
  • the energy level of the acoustic signal received at the primary microphone 106 may be approximated, in one embodiment, by the following equation: E1(t,ω) = λE·|X1(t,ω)|² + (1−λE)·E1(t−1,ω)
  • λE is a number between zero and one that determines an averaging time constant; X1(t,ω) is the acoustic signal of the primary microphone 106 in the cochlea domain; ω represents the frequency; and t represents time.
  • As shown, a present energy level of the primary microphone 106, E1(t,ω), is dependent upon a previous energy level of the primary microphone 106, E1(t−1,ω).
  • the value of λE can be different for different frequency channels. Given a desired time constant T (e.g., 4 ms) and the sampling frequency fs (e.g., 16 kHz), the value of λE can be approximated as λE = 1 − e^(−1/(T·fs)).
  • the energy level of the acoustic signal received from the secondary microphone 108 may be approximated by a similar exemplary equation: E2(t,ω) = λE·|X2(t,ω)|² + (1−λE)·E2(t−1,ω)
  • where X2(t,ω) is the acoustic signal of the secondary microphone 108 in the cochlea domain. Similar to the calculation of energy level for the primary microphone 106, the energy level for the secondary microphone 108, E2(t,ω), is dependent upon a previous energy level of the secondary microphone 108, E2(t−1,ω).
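The recursive energy estimate and the time-constant relation above can be sketched as follows. This is a minimal illustration, not the patented implementation; the function names and scalar per-channel treatment are assumptions.

```python
import math

def smoothing_constant(T, fs):
    # lambda_E for a desired averaging time constant T (seconds) at sampling
    # rate fs (Hz), via the standard one-pole approximation 1 - e^(-1/(T*fs)).
    return 1.0 - math.exp(-1.0 / (T * fs))

def energy_estimate(x, prev_energy, lambda_e):
    # E(t,w) = lambda_E * |X(t,w)|^2 + (1 - lambda_E) * E(t-1,w),
    # evaluated independently for each cochlea channel w.
    return lambda_e * abs(x) ** 2 + (1.0 - lambda_e) * prev_energy
```

With T = 4 ms and fs = 16 kHz, λE ≈ 0.0155, so each new frame nudges the running energy estimate only slightly toward the instantaneous power |X|².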
  • an inter-microphone level difference may be determined by an ILD module 306 .
  • the ILD module 306 is a component which may be approximated mathematically, in one embodiment, as
  • ILD(t,ω) = [1 − 2·E1(t,ω)·E2(t,ω) / (E1²(t,ω) + E2²(t,ω))] · sign(E1(t,ω) − E2(t,ω))
  • E 1 is the energy level of the primary microphone 106 and E 2 is the energy level of the secondary microphone 108 , both of which are obtained from the energy module 304 .
  • This equation provides a bounded result between −1 and 1. For example, ILD goes to 1 when E2 goes to 0, and ILD goes to −1 when E1 goes to 0.
  • In an alternative embodiment, the ILD may be approximated by the ratio ILD(t,ω) = E1(t,ω) / E2(t,ω). This ILD, however, is not bounded and may go to infinity as the energy level of the secondary microphone gets smaller.
  • In other embodiments, the ILD may be approximated by ILD(t,ω) = (E1(t,ω) − E2(t,ω)) / (E1(t,ω) + E2(t,ω)).
  • This ILD calculation is also bounded between −1 and 1. Therefore, this alternative ILD calculation may be used in one embodiment of the present invention.
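Both bounded ILD forms above can be sketched directly. The zero-energy conventions below are assumptions added for numerical safety, not part of the source.

```python
def ild_bounded(e1, e2):
    # [1 - 2*E1*E2 / (E1^2 + E2^2)] * sign(E1 - E2), bounded in [-1, 1].
    if e1 == 0.0 and e2 == 0.0:
        return 0.0  # silence in both channels (assumed convention)
    sign = 1.0 if e1 > e2 else (-1.0 if e1 < e2 else 0.0)
    return (1.0 - 2.0 * e1 * e2 / (e1 ** 2 + e2 ** 2)) * sign

def ild_normalized_difference(e1, e2):
    # Alternative bounded form: (E1 - E2) / (E1 + E2).
    if e1 + e2 == 0.0:
        return 0.0
    return (e1 - e2) / (e1 + e2)
```

Both forms hit +1 when only the primary microphone carries energy and −1 when only the secondary does, matching the limits discussed in the text.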
  • a Wiener filter is used to suppress noise/enhance speech.
  • specific inputs are required. These inputs comprise a power spectral density of noise and a power spectral density of the source signal.
  • a noise estimate module 308 may be provided to determine a noise estimate for the acoustic signals.
  • the noise estimate module 308 attempts to estimate the noise components in the microphone signals.
  • the noise estimate is based only on the acoustic signal received by the primary microphone 106 .
  • the exemplary noise estimate module 308 is a component which can be approximated mathematically by
  • N ( t, ⁇ ) ⁇ 1 ( t, ⁇ ) E 1 ( t, ⁇ )+(1 ⁇ 1 ( t, ⁇ ))min[ N ( t ⁇ 1, ⁇ ), E 1 ( t, ⁇ )]
  • the noise estimate in this embodiment is based on minimum statistics of a current energy estimate of the primary microphone 106, E1(t,ω), and a noise estimate of a previous time frame, N(t−1,ω). Therefore the noise estimation is performed efficiently and with low latency.
  • ⁇ 1 (t, ⁇ ) in the above equation is derived from the ILD approximated by the ILD module 306 , as
  • ⁇ I ⁇ ( t , ⁇ ) ⁇ ⁇ 0 if ⁇ ⁇ ILD ⁇ ( t , ⁇ ) ⁇ threshold ⁇ 1 if ⁇ ⁇ ILD ⁇ ( t , ⁇ ) > threshold
  • exemplary embodiments of the present invention may use a combination of minimum statistics and voice activity detection to determine the noise estimate.
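One way to realize the minimum-statistics update with the ILD gate is sketched below. The hard 0/1 switch and the threshold value are illustrative assumptions consistent with the behavior described in the text (noise estimation frozen when the ILD is high).

```python
def noise_estimate(e1, prev_noise, ild, threshold=0.5):
    # N(t,w) = lambda_1*E1(t,w) + (1 - lambda_1)*min[N(t-1,w), E1(t,w)].
    # When the ILD exceeds the threshold, speech is likely present, so
    # lambda_1 = 0 and the estimate is held at the running minimum (frozen
    # while E1 is large); otherwise the estimate tracks the primary energy.
    lambda_1 = 0.0 if ild > threshold else 1.0
    return lambda_1 * e1 + (1.0 - lambda_1) * min(prev_noise, e1)
```

During speech frames E1 is large, so min[N(t−1), E1] returns the previous noise estimate and the estimate stays put; during noise-only frames it follows the primary energy.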
  • a filter module 310 then derives a filter estimate based on the noise estimate.
  • the filter is a Wiener filter.
  • Alternative embodiments may contemplate other filters. Accordingly, the Wiener filter estimate may be approximated, according to one embodiment, as: W(t,ω) = [Ps(t,ω) / (Ps(t,ω) + γ·Pn(t,ω))]^φ
  • where Ps is a power spectral density of speech and Pn is a power spectral density of noise.
  • In exemplary embodiments, Pn is the noise estimate, N(t,ω), which is calculated by the noise estimate module 308.
  • In an exemplary embodiment, Ps = E1(t,ω) − N(t,ω), where E1(t,ω) is the energy estimate of the primary microphone 106 from the energy module 304, and N(t,ω) is the noise estimate provided by the noise estimate module 308. Because the noise estimate changes with each frame, the filter estimate will also change with each frame.
  • γ is an over-subtraction term which is a function of the ILD. γ compensates for the bias of the minimum statistics of the noise estimate module 308 and forms a perceptual weighting. Because time constants are different, the bias will be different between portions of pure noise and portions of noise and speech. Therefore, in some embodiments, compensation for this bias may be necessary. In exemplary embodiments, γ is determined empirically (e.g., 2-3 dB at a large ILD, and 6-9 dB at a low ILD).
  • φ in the above exemplary Wiener filter equation is a factor which further suppresses the noise estimate. φ can be any positive value. In one embodiment, nonlinear expansion may be obtained by setting φ to 2.
  • In other embodiments, φ is determined empirically and applied when a body of
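A sketch of the filter estimate with the over-subtraction term γ and the suppression exponent φ follows. The exact placement of γ and φ in the formula, and the clamping of the speech estimate at zero, are assumptions consistent with the description above.

```python
def wiener_filter_estimate(e1, noise, gamma=2.0, phi=1.0):
    # W = (Ps / (Ps + gamma*Pn))**phi, with Ps = E1 - N and Pn = N.
    ps = max(e1 - noise, 0.0)  # clamp: estimated speech power cannot be negative
    denom = ps + gamma * noise
    if denom == 0.0:
        return 0.0
    return (ps / denom) ** phi
```

Raising φ from 1 to 2 widens the gap between high and low filter values, deepening suppression in noise-dominated time-frequency cells.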
  • an optional filter smoothing module 312 is provided to smooth the Wiener filter estimate applied to the acoustic signals as a function of time.
  • the filter smoothing module 312 may be mathematically approximated as: M(t,ω) = λs(t,ω)·W(t,ω) + (1 − λs(t,ω))·M(t−1,ω)
  • where λs is a function of the Wiener filter estimate and the primary microphone energy, E1.
  • the filter smoothing module 312 at time (t) will smooth the Wiener filter estimate using the values of the smoothed Wiener filter estimate from the previous frame at time (t ⁇ 1).
  • the filter smoothing module 312 performs less smoothing on quickly changing signals, and more smoothing on slowly changing signals. This is accomplished by varying the value of λs according to a weighted first-order derivative of E1 with respect to time. If the first-order derivative is large and the energy change is large, then λs is set to a large value. If the derivative is small, then λs is set to a smaller value.
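The time smoothing and an energy-dependent choice of λs can be sketched as below. The mapping from the energy change to λs, including the lo/hi bounds, is an illustrative assumption.

```python
def smooth_filter(w, prev_m, lambda_s):
    # M(t,w) = lambda_s*W(t,w) + (1 - lambda_s)*M(t-1,w)
    return lambda_s * w + (1.0 - lambda_s) * prev_m

def adaptive_lambda_s(e1, prev_e1, lo=0.2, hi=0.9):
    # Less smoothing (large lambda_s) when the primary energy E1 changes
    # quickly; more smoothing (small lambda_s) when it changes slowly.
    change = abs(e1 - prev_e1) / max(e1, prev_e1, 1e-12)
    return lo + (hi - lo) * min(change, 1.0)
```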
  • the primary acoustic signal is multiplied by the smoothed Wiener filter estimate to estimate the speech.
  • the speech estimation occurs in a masking module 314 .
  • the speech estimate is converted back into time domain from the cochlea domain.
  • the conversion comprises taking the speech estimate, S(t,ω), and multiplying this with an inverse frequency of the cochlea channels in a frequency synthesis module 316. Once conversion is completed, the signal is output to the user.
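The masking step reduces, per frame, to a per-channel multiply; the list representation of cochlea channels here is an assumed simplification.

```python
def estimate_speech(mask, x1_channels):
    # S(t,w) = M(t,w) * X1(t,w) for each cochlea channel w; the frequency
    # synthesis module then converts the masked channels back to time domain.
    return [m * x for m, x in zip(mask, x1_channels)]
```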
  • the system architecture of the audio processing engine 204 of FIG. 3 is exemplary. Alternative embodiments may comprise more components, less components, or equivalent components and still be within the scope of embodiments of the present invention.
  • Various modules of the audio processing engine 204 may be combined into a single module.
  • the functionalities of the frequency analysis module 302 and energy module 304 may be combined into a single module.
  • the functions of the ILD module 306 may be combined with the functions of the energy module 304 alone, or in combination with the frequency analysis module 302 .
  • the functionality of the filter module 310 may be combined with the functionality of the filter smoothing module 312 .
  • In step 402, audio signals are received by a primary microphone 106 and a secondary microphone 108 (FIG. 2).
  • the acoustic signals are converted to digital format for processing.
  • Frequency analysis is then performed on the acoustic signals by the frequency analysis module 302 ( FIG. 3 ) in step 404 .
  • the frequency analysis module 302 utilizes a filter bank to determine individual frequencies present in the complex acoustic signal.
  • In step 406, energy estimates for acoustic signals received at both the primary and secondary microphones 106 and 108 are computed.
  • the energy estimates are determined by an energy module 304 ( FIG. 3 ).
  • the exemplary energy module 304 utilizes a present acoustic signal and a previously calculated energy estimate to determine the present energy estimate.
  • inter-microphone level differences are computed in step 408 .
  • the ILD is calculated based on the energy estimates of both the primary and secondary acoustic signals.
  • the ILD is computed by the ILD module 306 ( FIG. 3 ).
  • noise is estimated in step 410 .
  • the noise estimate is based only on the acoustic signal received at the primary microphone 106 .
  • the noise estimate may be based on the present energy estimate of the acoustic signal from the primary microphone 106 and a previously computed noise estimate.
  • the noise estimation is frozen or slowed down when the ILD increases, according to exemplary embodiments of the present invention.
  • a filter estimate is computed by the filter module 310 ( FIG. 3 ).
  • the filter used in the audio processing engine 204 ( FIG. 3 ) is a Wiener filter.
  • the filter estimate may be smoothed in step 414 . Smoothing prevents fast fluctuations which may create audio artifacts.
  • the smoothed filter estimate is applied to the acoustic signal from the primary microphone 106 in step 416 to generate a speech estimate.
  • the speech estimate is converted back to the time domain.
  • Exemplary conversion techniques apply an inverse frequency of the cochlea channel to the speech estimate.
  • the audio signal may now be output to the user in step 420 .
  • the digital acoustic signal is converted to an analog signal for output. The output may be via a speaker, earpieces, or other similar devices.
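The per-frame, per-channel flow of steps 406 through 416 above can be tied together in a single sketch. All constants, the hard ILD gate, and the state layout are illustrative assumptions rather than the patented implementation.

```python
def enhance_frame(x1, x2, state, lambda_e=0.3, threshold=0.5, lambda_s=0.7):
    """One pass over steps 406-416 of FIG. 4 for a single sub-band channel.

    x1, x2: sub-band samples from the primary/secondary microphones.
    state:  dict holding previous-frame values (e1, e2, noise, mask).
    """
    # Step 406: recursive energy estimates
    e1 = lambda_e * abs(x1) ** 2 + (1 - lambda_e) * state["e1"]
    e2 = lambda_e * abs(x2) ** 2 + (1 - lambda_e) * state["e2"]
    # Step 408: bounded inter-microphone level difference
    ild = (e1 - e2) / (e1 + e2) if (e1 + e2) else 0.0
    # Step 410: noise estimate, frozen at the running minimum when ILD is high
    noise = min(state["noise"], e1) if ild > threshold else e1
    # Step 412: Wiener filter estimate from speech and noise spectral densities
    ps = max(e1 - noise, 0.0)
    w = ps / (ps + noise) if (ps + noise) else 0.0
    # Step 414: smooth the filter over time to limit audio artifacts
    mask = lambda_s * w + (1 - lambda_s) * state["mask"]
    # Step 416: apply the mask to the primary signal to estimate speech
    s = mask * x1
    state.update(e1=e1, e2=e2, noise=noise, mask=mask)
    return s
```

A frame with a strong primary signal and weak secondary signal passes nearly untouched, while a frame with equal levels at both microphones (ILD near zero) is suppressed as noise.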
  • the above-described modules can be comprised of instructions that are stored on storage media.
  • the instructions can be retrieved and executed by the processor 202 ( FIG. 2 ).
  • Some examples of instructions include software, program code, and firmware.
  • Some examples of storage media comprise memory devices and integrated circuits.
  • the instructions are operational when executed by the processor 202 to direct the processor 202 to operate in accordance with embodiments of the present invention. Those skilled in the art are familiar with instructions, processor(s), and storage media.

Abstract

Systems and methods for utilizing level differences to attenuate noise and enhance speech are provided. In exemplary embodiments, energy estimates of acoustic signals, representing captured sound, are determined in order to determine a level difference. This level difference, in combination with a noise estimate based only on a primary acoustic signal, allows a filter estimate to be derived. In some embodiments, the derived filter estimate may be smoothed. The filter estimate is then applied to the primary acoustic signal to generate a speech estimate.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present application is a continuation of U.S. patent application Ser. No. 13/705,132, filed Dec. 4, 2012, which, in turn, is a continuation of U.S. patent application Ser. No. 11/343,524, filed on Jan. 30, 2006 (now U.S. Pat. No. 8,345,890), which, in turn, claims the benefit of U.S. Provisional Patent Application No. 60/756,826, filed Jan. 5, 2006, all of which are incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • Presently, there are numerous methods for reducing background noise in speech recordings made in adverse environments. One such method is to use two or more microphones on an audio device. These microphones are localized and allow the device to determine a difference between the microphone signals. For example, due to a space difference between the microphones, the difference in times of arrival of the signals from a speech source to the microphones may be utilized to localize the speech source. Once localized, the signals can be spatially filtered to suppress the noise originating from different directions.
  • Beamforming techniques utilizing a linear array of microphones may create an “acoustic beam” in a direction of the source, and thus can be used as spatial filters. This method, however, suffers from many disadvantages. First, it is necessary to identify the direction of the speech source. The time delay, however, is difficult to estimate due to such factors as reverberation which may create ambiguous or incorrect information. Second, the number of sensors needed to achieve adequate spatial filtering is generally large (e.g., more than two). Additionally, if the microphone array is used on a small device, such as a cellular phone, beamforming is more difficult at lower frequencies because the distance between the microphones of the array is small compared to the wavelength.
  • Spatial separation and directivity of the microphones provides not only arrival-time differences but also inter-microphone level differences (ILD) that can be more easily identified than time differences in some applications. Therefore, there is a need for a system and method for utilizing ILD for noise suppression and speech enhancement.
  • SUMMARY OF THE INVENTION
  • Embodiments of the present invention overcome or substantially alleviate prior problems associated with noise suppression and speech enhancement. In general, systems and methods for utilizing inter-microphone level differences (ILD) to attenuate noise and enhance speech are provided. In exemplary embodiments, the ILD is based on energy level differences.
  • In exemplary embodiments, energy estimates of acoustic signals received from a primary microphone and a secondary microphone are determined for each channel of a cochlea frequency analyzer for each time frame. The energy estimates may be based on a current acoustic signal and an energy estimate of a previous frame. Based on these energy estimates the ILD may be calculated.
  • The ILD information is used to determine time-frequency components where speech is likely to be present and to derive a noise estimate from the primary microphone acoustic signal. The energy and noise estimates allow a filter estimate to be derived. In one embodiment, a noise estimate of the acoustic signal from the primary microphone is determined based on minimum statistics of the current energy estimate of the primary microphone signal and a noise estimate of the previous frame. In some embodiments, the derived filter estimate may be smoothed to reduce acoustic artifacts.
  • The filter estimate is then applied to the cochlea representation of the acoustic signal from the primary microphone to generate a speech estimate. The speech estimate is then converted into time domain for output. The conversion may be performed by applying an inverse frequency transformation to the speech estimate.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIGS. 1 a and 1 b are diagrams of two environments in which embodiments of the present invention may be practiced;
  • FIG. 2 is a block diagram of an exemplary communication device implementing embodiments of the present invention;
  • FIG. 3 is a block diagram of an exemplary audio processing engine; and
  • FIG. 4 is a flowchart of an exemplary method for utilizing inter-microphone level differences to enhance speech.
  • DESCRIPTION OF EXEMPLARY EMBODIMENTS
  • The present invention provides exemplary systems and methods for recording and utilizing inter-microphone level differences to identify time frequency regions dominated by speech in order to attenuate background noise and far-field distractors. Embodiments of the present invention may be practiced on any communication device that is configured to receive sound such as, but not limited to, cellular phones, phone handsets, headsets, and conferencing systems. Advantageously, exemplary embodiments are configured to provide improved noise suppression on small devices where prior art microphone arrays will not function well. While embodiments of the present invention will be described in reference to operation on a cellular phone, the present invention may be practiced on any communication device.
  • Referring to FIGS. 1 a and 1 b, environments in which embodiments of the present invention may be practiced are shown. A user provides a speech source 102, hereinafter audio source, to a communication device 104. The communication device 104 comprises at least two microphones: a primary microphone 106 relative to the speech source 102 and a secondary microphone 108 located a distance away from the primary microphone 106. In exemplary embodiments, the microphones 106 and 108 are omni-directional microphones. Alternative embodiments may utilize other forms of microphones or acoustic sensors.
  • While the microphones 106 and 108 receive sound information from the speech source 102, the microphones 106 and 108 also pick up noise 110. While the noise 110 is shown coming from a single location, the noise may comprise any sounds from one or more locations different than the speech and may include reverberations and echoes.
  • Embodiments of the present invention exploit level differences (e.g., energy differences) between the two microphones 106 and 108 independent of how the level differences are obtained. In FIG. 1a, because the primary microphone 106 is much closer to the speech source 102 than the secondary microphone 108, the intensity level is higher for the primary microphone 106, resulting in a larger energy level during a speech/voice segment. In FIG. 1b, because the directional response of the primary microphone 106 is highest in the direction of the speech source 102 and the directional response of the secondary microphone 108 is lower in the direction of the speech source 102, the level difference is highest in the direction of the speech source 102 and lower elsewhere.
  • The level differences may then be used to discriminate speech and noise in the time-frequency domain. Further embodiments may use a combination of energy level difference and time delays to discriminate speech. Based on binaural cue decoding, speech signal extraction or speech enhancement may be performed.
  • Referring now to FIG. 2, the exemplary communication device 104 is shown in more detail. The exemplary communication device 104 is an audio receiving device that comprises a processor 202, the primary microphone 106, the secondary microphone 108, an audio processing engine 204, and an output device 206. The communication device 104 may comprise further components necessary for communication device 104 operation but not related to noise suppression or speech enhancement. The audio processing engine 204 is discussed in more detail in connection with FIG. 3.
  • As previously discussed, the primary and secondary microphones 106 and 108, respectively, are spaced a distance apart in order to allow for an energy level difference between them. It should be noted that the microphones 106 and 108 may comprise any type of acoustic receiving device or sensor, and may be omni-directional, unidirectional, or have other directional characteristics or polar patterns. Once received by the microphones 106 and 108, the acoustic signals are converted by an analog-to-digital converter (not shown) into digital signals for processing in accordance with some embodiments. In order to differentiate the acoustic signals, the acoustic signal received by the primary microphone 106 is herein referred to as the primary acoustic signal, while the acoustic signal received by the secondary microphone 108 is herein referred to as the secondary acoustic signal.
  • The output device 206 is any device which provides an audio output to the user. For example, the output device 206 may be an earpiece of a headset or handset, or a speaker on a conferencing device.
  • FIG. 3 is a detailed block diagram of the exemplary audio processing engine 204, according to one embodiment of the present invention. In one embodiment, the acoustic signals (i.e., X1 and X2) received from the primary and secondary microphones 106 and 108 (FIG. 2) are converted to digital signals and forwarded to a frequency analysis module 302. In one embodiment, the frequency analysis module 302 takes the acoustic signals and mimics the frequency analysis of the cochlea (i.e., the cochlea domain) using a filter bank. Alternatively, other filter banks, such as the short-time Fourier transform (STFT), sub-band filter banks, modulated complex lapped transforms, wavelets, etc., can be used for the frequency analysis and synthesis. Because most sounds (e.g., acoustic signals) are complex and comprise more than one frequency, a sub-band analysis on the acoustic signal determines which individual frequencies are present in the complex acoustic signal during a frame (i.e., a predetermined period of time). In one embodiment, the frame is 4 ms long.
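  • As one concrete illustration of the sub-band analysis described above, the following sketch uses a windowed FFT (the STFT alternative named in the text) rather than a cochlea filter bank; the 4 ms frame length at a 16 kHz sampling rate matches the text, while the function name and window choice are illustrative assumptions.

```python
import numpy as np

def stft_frames(x, frame_len=64, fft_len=64):
    """Split a signal into 4 ms frames (64 samples at 16 kHz) and
    return complex sub-band coefficients X(t, w) per frame via a
    windowed FFT -- the STFT alternative to the cochlea filter bank."""
    n_frames = len(x) // frame_len
    frames = x[:n_frames * frame_len].reshape(n_frames, frame_len)
    return np.fft.rfft(frames * np.hanning(frame_len), n=fft_len, axis=1)

# A pure 1 kHz tone concentrates its energy in one sub-band.
fs = 16000
t = np.arange(fs) / fs
X = stft_frames(np.sin(2 * np.pi * 1000 * t))
peak_bin = int(np.argmax(np.abs(X[10])))
print(peak_bin * fs / 64)  # -> 1000.0
```

A cochlea-style implementation would replace the FFT with a bank of overlapping band-pass filters, but the per-frame, per-channel output layout is the same.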
  • Once the frequencies are determined, the signals are forwarded to an energy module 304 which computes energy level estimates during an interval of time. The energy estimate may be based on bandwidth of the cochlea channel and the acoustic signal. The exemplary energy module 304 is a component which, in some embodiments, can be represented mathematically. Thus, the energy level of the acoustic signal received at the primary microphone 106 may be approximated, in one embodiment, by the following equation

  • E1(t,ω) = λE·|X1(t,ω)|^2 + (1 − λE)·E1(t−1,ω)
  • where λE is a number between zero and one that determines an averaging time constant, X1(t,ω) is the acoustic signal of the primary microphone 106 in the cochlea domain, ω represents the frequency, and t represents time. As shown, a present energy level of the primary microphone 106, E1(t,ω), is dependent upon a previous energy level of the primary microphone 106, E1(t−1,ω). In some other embodiments, the value of λE can be different for different frequency channels. Given a desired time constant T (e.g., 4 ms) and the sampling frequency fs (e.g., 16 kHz), the value of λE can be approximated as
  • λE = 1 − e^(−1/(T·fs))
  • The energy level of the acoustic signal received from the secondary microphone 108 may be approximated by a similar exemplary equation

  • E2(t,ω) = λE·|X2(t,ω)|^2 + (1 − λE)·E2(t−1,ω)
  • where X2(t,ω) is the acoustic signal of the secondary microphone 108 in the cochlea domain. Similar to the calculation of the energy level for the primary microphone 106, the energy level for the secondary microphone 108, E2(t,ω), is dependent upon a previous energy level of the secondary microphone 108, E2(t−1,ω).
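  • The two energy recursions above can be sketched as a single vectorized routine. This is a minimal numpy illustration; the frames-by-channels array layout and the function name are assumptions.

```python
import numpy as np

def energy_estimate(X, T=0.004, fs=16000):
    """Recursive per-channel energy estimate from the text:
    E(t,w) = lamE*|X(t,w)|^2 + (1 - lamE)*E(t-1,w),
    with lamE = 1 - exp(-1/(T*fs)) for time constant T and rate fs.
    X: (frames, channels) complex sub-band signal."""
    lam = 1.0 - np.exp(-1.0 / (T * fs))
    E = np.empty(X.shape, dtype=float)
    prev = np.zeros(X.shape[1])
    for t in range(X.shape[0]):
        prev = lam * np.abs(X[t]) ** 2 + (1.0 - lam) * prev
        E[t] = prev
    return E

# A constant-power input drives the estimate monotonically toward
# that power, at a rate set by the time constant T.
X = np.ones((500, 4), dtype=complex)
E = energy_estimate(X)
```

The same routine serves both microphones; only the input X1 or X2 differs.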
  • Given the calculated energy levels, an inter-microphone level difference (ILD) may be determined by an ILD module 306. The ILD module 306 is a component which may be approximated mathematically, in one embodiment, as
  • ILD(t,ω) = [1 − 2·E1(t,ω)·E2(t,ω) / (E1^2(t,ω) + E2^2(t,ω))] * sign(E1(t,ω) − E2(t,ω))
  • where E1 is the energy level of the primary microphone 106 and E2 is the energy level of the secondary microphone 108, both of which are obtained from the energy module 304. This equation provides a bounded result between −1 and 1. For example, the ILD goes to 1 when E2 goes to 0, and the ILD goes to −1 when E1 goes to 0. Thus, when the speech source is close to the primary microphone 106 and there is no noise, ILD=1; as more noise is added, the ILD changes. Further, as more noise is picked up by both microphones 106 and 108, it becomes more difficult to discriminate speech from noise.
  • The above equation is preferable to an ILD calculated via a ratio of the energy levels, such as
  • ILD(t,ω) = E1(t,ω) / E2(t,ω),
  • where ILD is not bounded and may go to infinity as the energy level of the primary microphone gets smaller.
  • In an alternative embodiment, the ILD may be approximated by
  • ILD(t,ω) = (E1(t,ω) − E2(t,ω)) / (E1(t,ω) + E2(t,ω)).
  • Here, the ILD calculation is also bounded between −1 and 1. Therefore, this alternative ILD calculation may be used in one embodiment of the present invention.
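  • Both bounded ILD formulas behave as described. Below is a minimal sketch of the first form; the small epsilon guarding against division by zero is an added assumption, not part of the text's equation.

```python
import numpy as np

def ild(E1, E2, eps=1e-12):
    """Bounded inter-microphone level difference from the text:
    ILD = [1 - 2*E1*E2/(E1^2 + E2^2)] * sign(E1 - E2),
    which stays within [-1, 1] for any non-negative energies."""
    num = 2.0 * E1 * E2
    den = E1 ** 2 + E2 ** 2 + eps  # eps: illustrative divide-by-zero guard
    return (1.0 - num / den) * np.sign(E1 - E2)

# E2 -> 0 drives the ILD to +1 (speech at the primary microphone);
# E1 -> 0 drives it to -1; equal energies give 0.
speech_dominant = ild(np.array([1.0]), np.array([0.0]))
noise_dominant = ild(np.array([0.0]), np.array([1.0]))
balanced = ild(np.array([1.0]), np.array([1.0]))
```

Unlike the ratio E1/E2, this quantity never diverges as either energy shrinks.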
  • According to an exemplary embodiment of the present invention, a Wiener filter is used to suppress noise/enhance speech. In order to derive a Wiener filter estimate, however, specific inputs are required. These inputs comprise a power spectral density of noise and a power spectral density of the source signal. As such, a noise estimate module 308 may be provided to determine a noise estimate for the acoustic signals.
  • According to exemplary embodiments, the noise estimate module 308 attempts to estimate the noise components in the microphone signals. In exemplary embodiments, the noise estimate is based only on the acoustic signal received by the primary microphone 106. The exemplary noise estimate module 308 is a component which can be approximated mathematically by

  • N(t,ω) = λI(t,ω)·E1(t,ω) + (1 − λI(t,ω))·min[N(t−1,ω), E1(t,ω)]
  • according to one embodiment of the present invention. As shown, the noise estimate in this embodiment is based on minimum statistics of a current energy estimate of the primary microphone 106, E1(t,ω), and a noise estimate of a previous time frame, N(t−1,ω). Therefore, the noise estimation is performed efficiently and with low latency.
  • λI(t,ω) in the above equation is derived from the ILD approximated by the ILD module 306, as
  • λI(t,ω) = { 0 if ILD(t,ω) < threshold; 1 if ILD(t,ω) > threshold }
  • That is, when the ILD is smaller than a threshold value (e.g., threshold = 0.5) above which speech is expected to be, λI is small, and thus the noise estimator follows the noise closely. When the ILD starts to rise (e.g., because speech is detected), however, λI increases. As a result, the noise estimate module 308 slows down the noise estimation process, and the speech energy does not contribute significantly to the final noise estimate. Therefore, exemplary embodiments of the present invention may use a combination of minimum statistics and voice activity detection to determine the noise estimate.
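  • A single step of the noise recursion above can be sketched as follows. The function name is illustrative; the two calls simply demonstrate the recursion's behavior at the two extreme gate values of the piecewise λI.

```python
import numpy as np

def noise_update(N_prev, E1, lam_i):
    """One step of the minimum-statistics noise recursion from the text:
    N(t,w) = lamI*E1(t,w) + (1 - lamI)*min(N(t-1,w), E1(t,w)).
    lam_i is the ILD-derived gate value (0 or 1 per channel)."""
    return lam_i * E1 + (1.0 - lam_i) * np.minimum(N_prev, E1)

# With lam_i = 1 the estimate tracks the current energy directly;
# with lam_i = 0 it can only hold or decay toward the running minimum,
# so a loud frame cannot raise it.
tracking = noise_update(np.array([0.2]), np.array([5.0]), np.array([1.0]))
frozen = noise_update(np.array([0.2]), np.array([5.0]), np.array([0.0]))
```

The min term is what makes this a minimum-statistics tracker: the previous estimate bounds the update from above whenever the gate is off.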
  • A filter module 310 then derives a filter estimate based on the noise estimate. In one embodiment, the filter is a Wiener filter. Alternative embodiments may contemplate other filters. Accordingly, the Wiener filter approximation may be approximated, according to one embodiment, as
  • W = (Ps / (Ps + Pn))^α,
  • where Ps is a power spectral density of speech and Pn is a power spectral density of noise. According to one embodiment, Pn is the noise estimate, N(t,ω), which is calculated by the noise estimate module 308. In an exemplary embodiment, Ps=E1(t,ω)−βN(t,ω), where E1(t,ω) is the energy estimate of the primary microphone 106 from the energy module 304, and N(t,ω) is the noise estimate provided by the noise estimate module 308. Because the noise estimate changes with each frame, the filter estimate will also change with each frame.
  • β is an over-subtraction term that is a function of the ILD. β compensates for the bias of the minimum statistics of the noise estimate module 308 and forms a perceptual weighting. Because the time constants are different, the bias will differ between portions of pure noise and portions of noise and speech; therefore, in some embodiments, compensation for this bias may be necessary. In exemplary embodiments, β is determined empirically (e.g., 2-3 dB at a large ILD, and 6-9 dB at a low ILD).
  • α in the above exemplary Wiener filter equation is a factor that further suppresses the noise estimate. α can be any positive value. In one embodiment, nonlinear expansion may be obtained by setting α to 2. According to exemplary embodiments, α is determined empirically and applied when the value of
  • W = Ps / (Ps + Pn)
  • falls below a prescribed value (e.g., 12 dB down from the maximum possible value of W, which is unity).
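  • The filter estimate above, with Pn = N and Ps = E1 − βN, can be sketched as follows. Clipping Ps at zero and the fixed β and α defaults are illustrative assumptions; the text makes β ILD-dependent and applies α only below a prescribed gain.

```python
import numpy as np

def wiener_estimate(E1, N, beta=2.0, alpha=1.0):
    """Wiener filter estimate from the text: W = (Ps/(Ps + Pn))**alpha,
    with Pn = N (the noise estimate) and Ps = E1 - beta*N, where beta
    is the over-subtraction term. Ps is clipped at zero here (an
    illustrative guard against a negative speech PSD)."""
    Ps = np.maximum(E1 - beta * N, 0.0)
    return (Ps / (Ps + N + 1e-12)) ** alpha

# A channel with strong speech keeps a gain near unity; a channel
# whose energy is explained by the noise estimate is driven to zero.
gains = wiener_estimate(np.array([10.0, 1.0]), np.array([1.0, 1.0]))
```

Raising alpha above 1 deepens the suppression of low-gain channels, which is the nonlinear expansion the text describes.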
  • Because the Wiener filter estimation may change quickly (e.g., from one frame to the next frame) and noise and speech estimates can vary greatly between each frame, application of the Wiener filter estimate, as is, may result in artifacts (e.g., discontinuities, blips, transients, etc.). Therefore, an optional filter smoothing module 312 is provided to smooth the Wiener filter estimate applied to the acoustic signals as a function of time. In one embodiment, the filter smoothing module 312 may be mathematically approximated as

  • M(t,ω) = λs(t,ω)·W(t,ω) + (1 − λs(t,ω))·M(t−1,ω)
  • where λs is a function of the Wiener filter estimate and the primary microphone energy, E1.
  • As shown, the filter smoothing module 312 at time (t) smooths the Wiener filter estimate using the values of the smoothed Wiener filter estimate from the previous frame, at time (t−1). In order to allow a quick response when the acoustic signal changes rapidly, the filter smoothing module 312 performs less smoothing on quickly changing signals and more smoothing on slowly changing signals. This is accomplished by varying the value of λs according to a weighted first-order derivative of E1 with respect to time. If the first-order derivative is large and the energy change is large, then λs is set to a large value. If the derivative is small, then λs is set to a smaller value.
  • After smoothing by the filter smoothing module 312, the primary acoustic signal is multiplied by the smoothed Wiener filter estimate to estimate the speech. In the above Wiener filter embodiment, the speech estimate is approximated by S(t,ω)=X1(t,ω)*M(t,ω), where X1 is the acoustic signal from the primary microphone 106. In exemplary embodiments, the speech estimation occurs in a masking module 314.
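  • The smoothing recursion and the masking step above can be sketched together. A constant λs is used here for simplicity, whereas the text derives λs per frame from the filter estimate and the primary energy E1; the function name is illustrative.

```python
import numpy as np

def smooth_and_mask(W, X1, lam_s):
    """Recursive smoothing of the Wiener filter estimate,
    M(t,w) = lam_s*W(t,w) + (1 - lam_s)*M(t-1,w),
    followed by the masking step S(t,w) = X1(t,w)*M(t,w).
    W, X1: (frames, channels) arrays; lam_s: constant in (0, 1]."""
    M = np.zeros_like(W)
    prev = np.zeros(W.shape[1])
    for t in range(W.shape[0]):
        prev = lam_s * W[t] + (1.0 - lam_s) * prev
        M[t] = prev
    return X1 * M  # speech estimate S(t, w)

# A filter estimate that jumps from 0 to 1 is applied gradually,
# which is what suppresses frame-to-frame artifacts.
S = smooth_and_mask(np.ones((10, 3)), np.ones((10, 3)), 0.5)
```

With a small λs the mask approaches the filter estimate slowly; a λs of 1 disables the smoothing entirely.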
  • Next, the speech estimate is converted back into the time domain from the cochlea domain. The conversion comprises taking the speech estimate, S(t,ω), and multiplying it with an inverse frequency of the cochlea channels in a frequency synthesis module 316. Once the conversion is completed, the signal is output to the user.
  • It should be noted that the system architecture of the audio processing engine 204 of FIG. 3 is exemplary. Alternative embodiments may comprise more components, less components, or equivalent components and still be within the scope of embodiments of the present invention. Various modules of the audio processing engine 204 may be combined into a single module. For example, the functionalities of the frequency analysis module 302 and energy module 304 may be combined into a single module. Furthermore, the functions of the ILD module 306 may be combined with the functions of the energy module 304 alone, or in combination with the frequency analysis module 302. As a further example, the functionality of the filter module 310 may be combined with the functionality of the filter smoothing module 312.
  • Referring now to FIG. 4, a flowchart 400 of an exemplary method for noise suppression utilizing inter-microphone level differences is shown. In step 402, audio signals are received by a primary microphone 106 and a secondary microphone 108 (FIG. 2). In exemplary embodiments, the acoustic signals are converted to digital format for processing.
  • Frequency analysis is then performed on the acoustic signals by the frequency analysis module 302 (FIG. 3) in step 404. According to one embodiment, the frequency analysis module 302 utilizes a filter bank to determine individual frequencies present in the complex acoustic signal.
  • In step 406, energy estimates for acoustic signals received at both the primary and secondary microphones 106 and 108 are computed. In one embodiment, the energy estimates are determined by an energy module 304 (FIG. 3). The exemplary energy module 304 utilizes a present acoustic signal and a previously calculated energy estimate to determine the present energy estimate.
  • Once the energy estimates are calculated, inter-microphone level differences (ILD) are computed in step 408. In one embodiment, the ILD is calculated based on the energy estimates of both the primary and secondary acoustic signals. In exemplary embodiments, the ILD is computed by the ILD module 306 (FIG. 3).
  • Based on the calculated ILD, noise is estimated in step 410. According to embodiments of the present invention, the noise estimate is based only on the acoustic signal received at the primary microphone 106. The noise estimate may be based on the present energy estimate of the acoustic signal from the primary microphone 106 and a previously computed noise estimate. In determining the noise estimate, the noise estimation is frozen or slowed down when the ILD increases, according to exemplary embodiments of the present invention.
  • In step 412, a filter estimate is computed by the filter module 310 (FIG. 3). In one embodiment, the filter used in the audio processing engine 204 (FIG. 3) is a Wiener filter. Once the filter estimate is determined, the filter estimate may be smoothed in step 414. Smoothing prevents fast fluctuations which may create audio artifacts. The smoothed filter estimate is applied to the acoustic signal from the primary microphone 106 in step 416 to generate a speech estimate.
  • In step 418, the speech estimate is converted back to the time domain. Exemplary conversion techniques apply an inverse frequency of the cochlea channel to the speech estimate. Once the speech estimate is converted, the audio signal may now be output to the user in step 420. In some embodiments, the digital acoustic signal is converted to an analog signal for output. The output may be via a speaker, earpieces, or other similar devices.
  • The above-described modules can be comprised of instructions that are stored on storage media. The instructions can be retrieved and executed by the processor 202 (FIG. 2). Some examples of instructions include software, program code, and firmware. Some examples of storage media comprise memory devices and integrated circuits. The instructions are operational when executed by the processor 202 to direct the processor 202 to operate in accordance with embodiments of the present invention. Those skilled in the art are familiar with instructions, processor(s), and storage media.
  • The present invention is described above with reference to exemplary embodiments. It will be apparent to those skilled in the art that various modifications may be made and other embodiments can be used without departing from the broader scope of the present invention. Therefore, these and other variations upon the exemplary embodiments are intended to be covered by the present invention.

Claims (20)

What is claimed is:
1. A method for enhancing speech, comprising:
determining a filter estimate, using at least one hardware processor, during a frame, the filter estimate based on a noise estimate of a primary acoustic signal, an energy estimate of the primary acoustic signal, and a level difference based on the primary acoustic signal and a secondary acoustic signal, the primary acoustic signal and the secondary acoustic signal each representing at least one captured sound; and
applying the filter estimate to the primary acoustic signal to produce a speech estimate.
2. The method of claim 1, further comprising determining an energy estimate for each of the acoustic signals during the frame.
3. The method of claim 2, wherein the energy estimate of the primary acoustic signal is approximated as E1(t,ω)=λE|X1(t,ω)|^2+(1−λE)E1(t−1,ω).
4. The method of claim 2, wherein the energy estimate of the secondary acoustic signal is approximated as E2(t,ω)=λE|X2(t,ω)|^2+(1−λE)E2(t−1,ω).
5. The method of claim 2, further comprising using the energy estimates to determine the level difference for the frame.
6. The method of claim 5, wherein the level difference is approximated by
ILD(t,ω) = [1 − 2·E1(t,ω)·E2(t,ω)/(E1^2(t,ω) + E2^2(t,ω))] * sign(E1(t,ω) − E2(t,ω)).
7. The method of claim 5, wherein the level difference is approximated by
ILD(t,ω) = (E1(t,ω) − E2(t,ω))/(E1(t,ω) + E2(t,ω)).
8. The method of claim 1, wherein the noise estimate is based on the energy estimate of the primary acoustic signal and the level difference.
9. The method of claim 8, wherein the noise estimate is approximated as N(t,ω)=λI(t,ω)E1(t,ω)+(1−λI(t,ω))min[N(t−1,ω),E1(t,ω)].
10. The method of claim 1, further comprising smoothing the filter estimate prior to applying the filter estimate to the primary acoustic signal.
11. The method of claim 10, wherein the smoothing is approximated as M(t,ω)=λs(t,ω)W(t,ω)+(1−λs(t,ω))M(t−1,ω).
12. The method of claim 1, further comprising converting the speech estimate to a time domain.
13. The method of claim 1, further comprising outputting the speech estimate to a user.
14. The method of claim 1, wherein the at least one hardware processor comprises an audio processing engine.
15. A system for enhancing speech on a device, comprising:
a primary microphone configured to receive a primary acoustic signal;
a secondary microphone located a distance away from the primary microphone and configured to receive a secondary acoustic signal; and
an audio processing engine configured to enhance speech in the primary acoustic signal, the audio processing engine comprising:
a noise estimate module configured to determine a noise estimate for the primary acoustic signal based on an energy estimate of the primary acoustic signal and a level difference, the level difference being based on the primary acoustic signal and a secondary acoustic signal; and
a filter module configured to determine a filter estimate to be applied to the primary acoustic signal to generate a filtered acoustic signal, the filter estimate based on (i) the noise estimate of the primary acoustic signal, (ii) the energy estimate of the primary acoustic signal, and (iii) the level difference.
16. The system of claim 15, wherein the audio processing engine further comprises an energy module configured to determine an energy estimate for each of the acoustic signals during a frame.
17. The system of claim 15, wherein the audio processing engine further comprises a filter smoothing module configured to smooth the filter estimate prior to applying the filter estimate to the primary acoustic signal.
18. The system of claim 15, wherein the audio processing engine further comprises a masking module configured to determine the speech estimate.
19. A non-transitory computer readable medium having embodied thereon a program, the program being executable by a machine to perform a method for enhancing speech, the method comprising:
determining a filter estimate, using at least one hardware processor, during a frame, the filter estimate based on:
(i) a noise estimate of a primary acoustic signal,
(ii) an energy estimate of the primary acoustic signal, and
(iii) a level difference based on the primary acoustic signal and a secondary acoustic signal, the primary acoustic signal representing at least one captured sound and the secondary acoustic signal representing at least one other captured sound; and
applying the filter estimate to the primary acoustic signal to produce a speech estimate.
20. The non-transitory computer readable medium of claim 19, wherein the at least one hardware processor comprises an audio processor.
US14/477,761 2006-01-05 2014-09-04 Utilizing level differences for speech enhancement Abandoned US20160066088A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/477,761 US20160066088A1 (en) 2006-01-05 2014-09-04 Utilizing level differences for speech enhancement

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US75682606P 2006-01-05 2006-01-05
US11/343,524 US8345890B2 (en) 2006-01-05 2006-01-30 System and method for utilizing inter-microphone level differences for speech enhancement
US13/705,132 US8867759B2 (en) 2006-01-05 2012-12-04 System and method for utilizing inter-microphone level differences for speech enhancement
US14/477,761 US20160066088A1 (en) 2006-01-05 2014-09-04 Utilizing level differences for speech enhancement

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/705,132 Continuation US8867759B2 (en) 2006-01-05 2012-12-04 System and method for utilizing inter-microphone level differences for speech enhancement

Publications (1)

Publication Number Publication Date
US20160066088A1 true US20160066088A1 (en) 2016-03-03

Family

ID=38224448

Family Applications (3)

Application Number Title Priority Date Filing Date
US11/343,524 Active 2030-06-10 US8345890B2 (en) 2006-01-05 2006-01-30 System and method for utilizing inter-microphone level differences for speech enhancement
US13/705,132 Active US8867759B2 (en) 2006-01-05 2012-12-04 System and method for utilizing inter-microphone level differences for speech enhancement
US14/477,761 Abandoned US20160066088A1 (en) 2006-01-05 2014-09-04 Utilizing level differences for speech enhancement


Country Status (5)

Country Link
US (3) US8345890B2 (en)
JP (1) JP5007442B2 (en)
KR (1) KR101210313B1 (en)
FI (1) FI20080428L (en)
WO (1) WO2007081916A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9699554B1 (en) 2010-04-21 2017-07-04 Knowles Electronics, Llc Adaptive signal equalization

Families Citing this family (141)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8345890B2 (en) 2006-01-05 2013-01-01 Audience, Inc. System and method for utilizing inter-microphone level differences for speech enhancement
US8744844B2 (en) 2007-07-06 2014-06-03 Audience, Inc. System and method for adaptive intelligent noise suppression
US8204252B1 (en) 2006-10-10 2012-06-19 Audience, Inc. System and method for providing close microphone adaptive array processing
US9185487B2 (en) 2006-01-30 2015-11-10 Audience, Inc. System and method for providing noise suppression utilizing null processing noise subtraction
US8194880B2 (en) 2006-01-30 2012-06-05 Audience, Inc. System and method for utilizing omni-directional microphones for speech enhancement
US20070237341A1 (en) * 2006-04-05 2007-10-11 Creative Technology Ltd Frequency domain noise attenuation utilizing two transducers
US8204253B1 (en) 2008-06-30 2012-06-19 Audience, Inc. Self calibration of audio device
US8949120B1 (en) 2006-05-25 2015-02-03 Audience, Inc. Adaptive noise cancelation
US8849231B1 (en) 2007-08-08 2014-09-30 Audience, Inc. System and method for adaptive power control
US8934641B2 (en) 2006-05-25 2015-01-13 Audience, Inc. Systems and methods for reconstructing decomposed audio signals
US8150065B2 (en) 2006-05-25 2012-04-03 Audience, Inc. System and method for processing an audio signal
US8213623B2 (en) * 2007-01-12 2012-07-03 Illusonic Gmbh Method to generate an output audio signal from two or more input audio signals
US8259926B1 (en) 2007-02-23 2012-09-04 Audience, Inc. System and method for 2-channel and 3-channel acoustic echo cancellation
US20090018826A1 (en) * 2007-07-13 2009-01-15 Berlin Andrew A Methods, Systems and Devices for Speech Transduction
US8189766B1 (en) 2007-07-26 2012-05-29 Audience, Inc. System and method for blind subband acoustic echo cancellation postfiltering
JP5141691B2 (en) * 2007-11-26 2013-02-13 富士通株式会社 Sound processing apparatus, correction apparatus, correction method, and computer program
US8180064B1 (en) 2007-12-21 2012-05-15 Audience, Inc. System and method for providing voice equalization
US8143620B1 (en) 2007-12-21 2012-03-27 Audience, Inc. System and method for adaptive classification of audio sources
US8194882B2 (en) * 2008-02-29 2012-06-05 Audience, Inc. System and method for providing single microphone noise suppression fallback
US8355511B2 (en) 2008-03-18 2013-01-15 Audience, Inc. System and method for envelope-based acoustic echo cancellation
US9142221B2 (en) 2008-04-07 2015-09-22 Cambridge Silicon Radio Limited Noise reduction
US8930197B2 (en) * 2008-05-09 2015-01-06 Nokia Corporation Apparatus and method for encoding and reproduction of speech and audio signals
US8521530B1 (en) 2008-06-30 2013-08-27 Audience, Inc. System and method for enhancing a monaural audio signal
US8774423B1 (en) 2008-06-30 2014-07-08 Audience, Inc. System and method for controlling adaptivity of signal modification using a phantom coefficient
US8724829B2 (en) 2008-10-24 2014-05-13 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for coherence detection
US8218397B2 (en) * 2008-10-24 2012-07-10 Qualcomm Incorporated Audio source proximity estimation using sensor array for noise reduction
KR101475864B1 (en) * 2008-11-13 2014-12-23 삼성전자 주식회사 Apparatus and method for eliminating noise
US8620672B2 (en) * 2009-06-09 2013-12-31 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for phase-based processing of multichannel signal
US8948415B1 (en) * 2009-10-26 2015-02-03 Plantronics, Inc. Mobile device with discretionary two microphone noise reduction
US8406430B2 (en) * 2009-11-19 2013-03-26 Infineon Technologies Ag Simulated background noise enabled echo canceller
US9838784B2 (en) 2009-12-02 2017-12-05 Knowles Electronics, Llc Directional audio capture
US8718290B2 (en) * 2010-01-26 2014-05-06 Audience, Inc. Adaptive noise reduction using level cues
US9008329B1 (en) 2010-01-26 2015-04-14 Audience, Inc. Noise reduction using multi-feature cluster tracker
US8473287B2 (en) 2010-04-19 2013-06-25 Audience, Inc. Method for jointly optimizing noise reduction and voice quality in a mono or multi-microphone system
US8538035B2 (en) 2010-04-29 2013-09-17 Audience, Inc. Multi-microphone robust noise suppression
EP2561508A1 (en) 2010-04-22 2013-02-27 Qualcomm Incorporated Voice activity detection
US8781137B1 (en) 2010-04-27 2014-07-15 Audience, Inc. Wind noise detection and suppression
US9558755B1 (en) 2010-05-20 2017-01-31 Knowles Electronics, Llc Noise suppression assisted automatic speech recognition
US9245538B1 (en) * 2010-05-20 2016-01-26 Audience, Inc. Bandwidth enhancement of speech signals assisted by noise reduction
US8447596B2 (en) 2010-07-12 2013-05-21 Audience, Inc. Monaural noise suppression based on computational auditory scene analysis
US8611552B1 (en) * 2010-08-25 2013-12-17 Audience, Inc. Direction-aware active noise cancellation system
US8682006B1 (en) 2010-10-20 2014-03-25 Audience, Inc. Noise suppression based on null coherence
US8898058B2 (en) 2010-10-25 2014-11-25 Qualcomm Incorporated Systems, methods, and apparatus for voice activity detection
US8831937B2 (en) * 2010-11-12 2014-09-09 Audience, Inc. Post-noise suppression processing to improve voice quality
US8908877B2 (en) 2010-12-03 2014-12-09 Cirrus Logic, Inc. Ear-coupling detection and adjustment of adaptive response in noise-canceling in personal audio devices
CN103270552B (en) 2010-12-03 2016-06-22 美国思睿逻辑有限公司 The Supervised Control of the adaptability noise killer in individual's voice device
JP5857403B2 (en) 2010-12-17 2016-02-10 富士通株式会社 Voice processing apparatus and voice processing program
US9264804B2 (en) 2010-12-29 2016-02-16 Telefonaktiebolaget L M Ericsson (Publ) Noise suppressing method and a noise suppressor for applying the noise suppressing method
US8958571B2 (en) * 2011-06-03 2015-02-17 Cirrus Logic, Inc. MIC covering detection in personal audio devices
US9824677B2 (en) 2011-06-03 2017-11-21 Cirrus Logic, Inc. Bandlimiting anti-noise in personal audio devices having adaptive noise cancellation (ANC)
US9318094B2 (en) 2011-06-03 2016-04-19 Cirrus Logic, Inc. Adaptive noise canceling architecture for a personal audio device
US9214150B2 (en) 2011-06-03 2015-12-15 Cirrus Logic, Inc. Continuous adaptation of secondary path adaptive response in noise-canceling personal audio devices
US8948407B2 (en) 2011-06-03 2015-02-03 Cirrus Logic, Inc. Bandlimiting anti-noise in personal audio devices having adaptive noise cancellation (ANC)
US9076431B2 (en) 2011-06-03 2015-07-07 Cirrus Logic, Inc. Filter architecture for an adaptive noise canceler in a personal audio device
US8848936B2 (en) 2011-06-03 2014-09-30 Cirrus Logic, Inc. Speaker damage prevention in adaptive noise-canceling personal audio devices
US8972251B2 (en) * 2011-06-07 2015-03-03 Qualcomm Incorporated Generating a masking signal on an electronic device
WO2013009949A1 (en) 2011-07-13 2013-01-17 Dts Llc Microphone array processing system
CN103907152B (en) * 2011-09-02 2016-05-11 Gn奈康有限公司 The method and system suppressing for audio signal noise
US9325821B1 (en) 2011-09-30 2016-04-26 Cirrus Logic, Inc. Sidetone management in an adaptive noise canceling (ANC) system including secondary path modeling
JP2015502524A (en) * 2011-11-04 2015-01-22 ブリュエル アンド ケアー サウンド アンド ヴァイブレーション メジャーメント エー/エス Computationally efficient broadband filter and sum array focusing
US9408011B2 (en) * 2011-12-19 2016-08-02 Qualcomm Incorporated Automated user/sensor location recognition to customize audio performance in a distributed multi-sensor environment
US9258653B2 (en) 2012-03-21 2016-02-09 Semiconductor Components Industries, Llc Method and system for parameter based adaptation of clock speeds to listening devices and audio applications
US9142205B2 (en) 2012-04-26 2015-09-22 Cirrus Logic, Inc. Leakage-modeling adaptive noise canceling for earspeakers
US9014387B2 (en) 2012-04-26 2015-04-21 Cirrus Logic, Inc. Coordinated control of adaptive noise cancellation (ANC) among earspeaker channels
US9123321B2 (en) 2012-05-10 2015-09-01 Cirrus Logic, Inc. Sequenced adaptation of anti-noise generator response and secondary path response in an adaptive noise canceling system
US9076427B2 (en) 2012-05-10 2015-07-07 Cirrus Logic, Inc. Error-signal content controlled adaptation of secondary and leakage path models in noise-canceling personal audio devices
US9318090B2 (en) 2012-05-10 2016-04-19 Cirrus Logic, Inc. Downlink tone detection and adaptation of a secondary path response model in an adaptive noise canceling system
US9082387B2 (en) 2012-05-10 2015-07-14 Cirrus Logic, Inc. Noise burst adaptation of secondary path adaptive response in noise-canceling personal audio devices
US9319781B2 (en) 2012-05-10 2016-04-19 Cirrus Logic, Inc. Frequency and direction-dependent ambient sound handling in personal audio devices having adaptive noise cancellation (ANC)
CN102801861B (en) * 2012-08-07 2015-08-19 歌尔声学股份有限公司 Sound enhancement method and device applied to a mobile phone
US9532139B1 (en) 2012-09-14 2016-12-27 Cirrus Logic, Inc. Dual-microphone frequency amplitude response self-calibration
US20140095161A1 (en) * 2012-09-28 2014-04-03 At&T Intellectual Property I, L.P. System and method for channel equalization using characteristics of an unknown signal
US9640194B1 (en) 2012-10-04 2017-05-02 Knowles Electronics, Llc Noise suppression for speech processing based on machine-learning mask estimation
US9107010B2 (en) 2013-02-08 2015-08-11 Cirrus Logic, Inc. Ambient noise root mean square (RMS) detector
MX354633B (en) * 2013-03-05 2018-03-14 Fraunhofer Ges Forschung Apparatus and method for multichannel direct-ambient decomposition for audio signal processing.
US20140278393A1 (en) 2013-03-12 2014-09-18 Motorola Mobility Llc Apparatus and Method for Power Efficient Signal Conditioning for a Voice Recognition System
US9369798B1 (en) 2013-03-12 2016-06-14 Cirrus Logic, Inc. Internal dynamic range control in an adaptive noise cancellation (ANC) system
US9106989B2 (en) 2013-03-13 2015-08-11 Cirrus Logic, Inc. Adaptive-noise canceling (ANC) effectiveness estimation and correction in a personal audio device
US9414150B2 (en) 2013-03-14 2016-08-09 Cirrus Logic, Inc. Low-latency multi-driver adaptive noise canceling (ANC) system for a personal audio device
US9215749B2 (en) 2013-03-14 2015-12-15 Cirrus Logic, Inc. Reducing an acoustic intensity vector with adaptive noise cancellation with two error microphones
US9324311B1 (en) 2013-03-15 2016-04-26 Cirrus Logic, Inc. Robust adaptive noise canceling (ANC) in a personal audio device
US9208771B2 (en) 2013-03-15 2015-12-08 Cirrus Logic, Inc. Ambient noise-based adaptation of secondary path adaptive response in noise-canceling personal audio devices
US9635480B2 (en) 2013-03-15 2017-04-25 Cirrus Logic, Inc. Speaker impedance monitoring
US9467776B2 (en) 2013-03-15 2016-10-11 Cirrus Logic, Inc. Monitoring of speaker impedance to detect pressure applied between mobile device and ear
US10206032B2 (en) 2013-04-10 2019-02-12 Cirrus Logic, Inc. Systems and methods for multi-mode adaptive noise cancellation for audio headsets
US9066176B2 (en) 2013-04-15 2015-06-23 Cirrus Logic, Inc. Systems and methods for adaptive noise cancellation including dynamic bias of coefficients of an adaptive noise cancellation system
US9462376B2 (en) 2013-04-16 2016-10-04 Cirrus Logic, Inc. Systems and methods for hybrid adaptive noise cancellation
US9460701B2 (en) 2013-04-17 2016-10-04 Cirrus Logic, Inc. Systems and methods for adaptive noise cancellation by biasing anti-noise level
US9478210B2 (en) 2013-04-17 2016-10-25 Cirrus Logic, Inc. Systems and methods for hybrid adaptive noise cancellation
US9578432B1 (en) 2013-04-24 2017-02-21 Cirrus Logic, Inc. Metric and tool to evaluate secondary path design in adaptive noise cancellation systems
US20180317019A1 (en) 2013-05-23 2018-11-01 Knowles Electronics, Llc Acoustic activity detecting microphone
US9264808B2 (en) 2013-06-14 2016-02-16 Cirrus Logic, Inc. Systems and methods for detection and cancellation of narrow-band noise
US9536540B2 (en) 2013-07-19 2017-01-03 Knowles Electronics, Llc Speech signal separation and synthesis based on auditory scene analysis and speech modeling
US9392364B1 (en) 2013-08-15 2016-07-12 Cirrus Logic, Inc. Virtual microphone for adaptive noise cancellation in personal audio devices
US9666176B2 (en) 2013-09-13 2017-05-30 Cirrus Logic, Inc. Systems and methods for adaptive noise cancellation by adaptively shaping internal white noise to train a secondary path
US9508345B1 (en) 2013-09-24 2016-11-29 Knowles Electronics, Llc Continuous voice sensing
US9620101B1 (en) 2013-10-08 2017-04-11 Cirrus Logic, Inc. Systems and methods for maintaining playback fidelity in an audio system with adaptive noise cancellation
US10382864B2 (en) 2013-12-10 2019-08-13 Cirrus Logic, Inc. Systems and methods for providing adaptive playback equalization in an audio device
US9704472B2 (en) 2013-12-10 2017-07-11 Cirrus Logic, Inc. Systems and methods for sharing secondary path information between audio channels in an adaptive noise cancellation system
US10219071B2 (en) 2013-12-10 2019-02-26 Cirrus Logic, Inc. Systems and methods for bandlimiting anti-noise in personal audio devices having adaptive noise cancellation
US9953634B1 (en) 2013-12-17 2018-04-24 Knowles Electronics, Llc Passive training for automatic speech recognition
US9369557B2 (en) 2014-03-05 2016-06-14 Cirrus Logic, Inc. Frequency-dependent sidetone calibration
US9479860B2 (en) 2014-03-07 2016-10-25 Cirrus Logic, Inc. Systems and methods for enhancing performance of audio transducer based on detection of transducer status
US9648410B1 (en) 2014-03-12 2017-05-09 Cirrus Logic, Inc. Control of audio output of headphone earbuds based on the environment around the headphone earbuds
CN106068654B (en) * 2014-03-17 2020-01-31 罗伯特·博世有限公司 System and method for all electrical noise testing of MEMS microphones in production
US9437188B1 (en) 2014-03-28 2016-09-06 Knowles Electronics, Llc Buffered reprocessing for multi-microphone automatic speech recognition assist
US9319784B2 (en) 2014-04-14 2016-04-19 Cirrus Logic, Inc. Frequency-shaped noise-based adaptation of secondary path adaptive response in noise-canceling personal audio devices
US9609416B2 (en) 2014-06-09 2017-03-28 Cirrus Logic, Inc. Headphone responsive to optical signaling
US10181315B2 (en) 2014-06-13 2019-01-15 Cirrus Logic, Inc. Systems and methods for selectively enabling and disabling adaptation of an adaptive noise cancellation system
WO2016015186A1 (en) 2014-07-28 2016-02-04 华为技术有限公司 Acoustical signal processing method and device of communication device
DE112015003945T5 (en) 2014-08-28 2017-05-11 Knowles Electronics, Llc Multi-source noise reduction
US9478212B1 (en) 2014-09-03 2016-10-25 Cirrus Logic, Inc. Systems and methods for use of adaptive secondary path estimate to control equalization in an audio device
DE112015004185T5 (en) 2014-09-12 2017-06-01 Knowles Electronics, Llc Systems and methods for recovering speech components
US9552805B2 (en) 2014-12-19 2017-01-24 Cirrus Logic, Inc. Systems and methods for performance and stability control for feedback adaptive noise cancellation
CN107112012B (en) 2015-01-07 2020-11-20 美商楼氏电子有限公司 Method and system for audio processing and computer readable storage medium
KR20180044324A (en) 2015-08-20 2018-05-02 시러스 로직 인터내셔널 세미컨덕터 리미티드 A feedback adaptive noise cancellation (ANC) controller and a method having a feedback response partially provided by a fixed response filter
US9578415B1 (en) 2015-08-21 2017-02-21 Cirrus Logic, Inc. Hybrid adaptive noise cancellation system with filtered error microphone signal
US10242689B2 (en) * 2015-09-17 2019-03-26 Intel IP Corporation Position-robust multiple microphone noise estimation techniques
WO2017123814A1 (en) * 2016-01-14 2017-07-20 Knowles Electronics, Llc Systems and methods for assisting automatic speech recognition
US10013966B2 (en) 2016-03-15 2018-07-03 Cirrus Logic, Inc. Systems and methods for adaptive active noise cancellation for multiple-driver personal audio device
US9820042B1 (en) 2016-05-02 2017-11-14 Knowles Electronics, Llc Stereo separation and directional suppression with omni-directional microphones
JP6729187B2 (en) * 2016-08-30 2020-07-22 富士通株式会社 Audio processing program, audio processing method, and audio processing apparatus
CN107026934B (en) * 2016-10-27 2019-09-27 华为技术有限公司 Sound localization method and device
JP7052008B2 (en) * 2017-08-17 2022-04-11 セレンス オペレーティング カンパニー Reduced complexity of voiced voice detection and pitch estimation
WO2019091576A1 (en) 2017-11-10 2019-05-16 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audio encoders, audio decoders, methods and computer programs adapting an encoding and decoding of least significant bits
EP3483882A1 (en) 2017-11-10 2019-05-15 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Controlling bandwidth in encoders and/or decoders
EP3483884A1 (en) 2017-11-10 2019-05-15 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Signal filtering
EP3483886A1 (en) 2017-11-10 2019-05-15 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Selecting pitch lag
EP3483879A1 (en) * 2017-11-10 2019-05-15 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Analysis/synthesis windowing function for modulated lapped transformation
US10885907B2 (en) * 2018-02-14 2021-01-05 Cirrus Logic, Inc. Noise reduction system and method for audio device with multiple microphones
JP2020036214A (en) 2018-08-30 2020-03-05 Tdk株式会社 MEMS microphone
JP2020036215A (en) 2018-08-30 2020-03-05 Tdk株式会社 MEMS microphone
KR102570384B1 (en) * 2018-12-27 2023-08-25 삼성전자주식회사 Home appliance and method for voice recognition thereof
US10891954B2 (en) 2019-01-03 2021-01-12 International Business Machines Corporation Methods and systems for managing voice response systems based on signals from external devices
US10978086B2 (en) * 2019-07-19 2021-04-13 Apple Inc. Echo cancellation using a subset of multiple microphones as reference channels
US11238853B2 (en) 2019-10-30 2022-02-01 Comcast Cable Communications, Llc Keyword-based audio source localization
KR102288182B1 (en) * 2020-03-12 2021-08-11 한국과학기술원 Method and apparatus for speech privacy, and mobile terminal using the same
KR20210125846A (en) * 2020-04-09 2021-10-19 삼성전자주식회사 Speech processing apparatus and method using a plurality of microphones
KR102422495B1 (en) * 2021-03-30 2022-07-20 엔오스 주식회사 Portable personal ontact device and control method thereof
GB2606366B (en) * 2021-05-05 2023-10-18 Waves Audio Ltd Self-activated speech enhancement
CN113689875B (en) * 2021-08-25 2024-02-06 湖南芯海聆半导体有限公司 Dual-microphone speech enhancement method and device for digital hearing aids

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6549586B2 (en) * 1999-04-12 2003-04-15 Telefonaktiebolaget L M Ericsson System and method for dual microphone signal noise reduction using spectral subtraction
US7117145B1 (en) * 2000-10-19 2006-10-03 Lear Corporation Adaptive filter for speech enhancement in a noisy environment
US20080201138A1 (en) * 2004-07-22 2008-08-21 Softmax, Inc. Headset for Separation of Speech Signals in a Noisy Environment

Family Cites Families (234)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3976863A (en) 1974-07-01 1976-08-24 Alfred Engel Optimal decoder for non-stationary signals
US3978287A (en) 1974-12-11 1976-08-31 Nasa Real time analysis of voiced sounds
US4137510A (en) 1976-01-22 1979-01-30 Victor Company Of Japan, Ltd. Frequency band dividing filter
GB2102254B (en) 1981-05-11 1985-08-07 Kokusai Denshin Denwa Co Ltd A speech analysis-synthesis system
US4433604A (en) 1981-09-22 1984-02-28 Texas Instruments Incorporated Frequency domain digital encoding technique for musical signals
JPS5876899A (en) * 1981-10-31 1983-05-10 株式会社東芝 Voice segment detector
US4536844A (en) 1983-04-26 1985-08-20 Fairchild Camera And Instrument Corporation Method and apparatus for simulating aural response information
US5054085A (en) 1983-05-18 1991-10-01 Speech Systems, Inc. Preprocessing system for speech recognition
US4674125A (en) 1983-06-27 1987-06-16 Rca Corporation Real-time hierarchal pyramid signal processing apparatus
US4581758A (en) 1983-11-04 1986-04-08 At&T Bell Laboratories Acoustic direction identification system
GB2158980B (en) 1984-03-23 1989-01-05 Ricoh Kk Extraction of phonemic information
US4649505A (en) * 1984-07-02 1987-03-10 General Electric Company Two-input crosstalk-resistant adaptive noise canceller
GB8429879D0 (en) 1984-11-27 1985-01-03 Rca Corp Signal processing apparatus
US4628529A (en) 1985-07-01 1986-12-09 Motorola, Inc. Noise suppression system
US4630304A (en) 1985-07-01 1986-12-16 Motorola, Inc. Automatic background noise estimator for a noise suppression system
US4658426A (en) * 1985-10-10 1987-04-14 Harold Antin Adaptive noise suppressor
JPH0211482Y2 (en) 1985-12-25 1990-03-23
GB8612453D0 (en) 1986-05-22 1986-07-02 Inmos Ltd Multistage digital signal multiplication & addition
US4812996A (en) 1986-11-26 1989-03-14 Tektronix, Inc. Signal viewing instrumentation control system
US4811404A (en) 1987-10-01 1989-03-07 Motorola, Inc. Noise suppression system
IL84902A (en) * 1987-12-21 1991-12-15 D S P Group Israel Ltd Digital autocorrelation system for detecting speech in noisy audio signal
US5027410A (en) 1988-11-10 1991-06-25 Wisconsin Alumni Research Foundation Adaptive, programmable signal processing and filtering for hearing aids
US5099738A (en) 1989-01-03 1992-03-31 Hotz Instruments Technology, Inc. MIDI musical translator
DE69011709T2 (en) * 1989-03-10 1994-12-15 Nippon Telegraph & Telephone Device for detecting an acoustic signal.
US5187776A (en) 1989-06-16 1993-02-16 International Business Machines Corp. Image editor zoom function
EP0427953B1 (en) * 1989-10-06 1996-01-17 Matsushita Electric Industrial Co., Ltd. Apparatus and method for speech rate modification
US5142961A (en) 1989-11-07 1992-09-01 Fred Paroutaud Method and apparatus for stimulation of acoustic musical instruments
GB2239971B (en) * 1989-12-06 1993-09-29 Ca Nat Research Council System for separating speech from background noise
US5058419A (en) 1990-04-10 1991-10-22 Earl H. Ruble Method and apparatus for determining the location of a sound source
JPH0454100A (en) 1990-06-22 1992-02-21 Clarion Co Ltd Audio signal compensation circuit
US5119711A (en) 1990-11-01 1992-06-09 International Business Machines Corporation Midi file translation
FR2673238B1 (en) * 1991-02-26 1999-01-08 Schlumberger Services Petrol Process for characterizing the texture heterogeneities of geological formations crossed by a borehole
US5224170A (en) * 1991-04-15 1993-06-29 Hewlett-Packard Company Time domain compensation for transducer mismatch
US5210366A (en) 1991-06-10 1993-05-11 Sykes Jr Richard O Method and device for detecting and separating voices in a complex musical composition
US5175769A (en) * 1991-07-23 1992-12-29 Rolm Systems Method for time-scale modification of signals
EP0527527B1 (en) * 1991-08-09 1999-01-20 Koninklijke Philips Electronics N.V. Method and apparatus for manipulating pitch and duration of a physical audio signal
JP3176474B2 (en) 1992-06-03 2001-06-18 沖電気工業株式会社 Adaptive noise canceller device
US5381512A (en) 1992-06-24 1995-01-10 Moscom Corporation Method and apparatus for speech feature recognition based on models of auditory signal processing
US5402496A (en) * 1992-07-13 1995-03-28 Minnesota Mining And Manufacturing Company Auditory prosthesis, noise suppression apparatus and feedback suppression apparatus having focused adaptive filtering
US5381473A (en) * 1992-10-29 1995-01-10 Andrea Electronics Corporation Noise cancellation apparatus
US5732143A (en) * 1992-10-29 1998-03-24 Andrea Electronics Corp. Noise cancellation apparatus
US5402493A (en) 1992-11-02 1995-03-28 Central Institute For The Deaf Electronic simulator of non-linear and active cochlear spectrum analysis
JP2508574B2 (en) * 1992-11-10 1996-06-19 日本電気株式会社 Multi-channel echo removal device
US5355329A (en) 1992-12-14 1994-10-11 Apple Computer, Inc. Digital filter having independent damping and frequency parameters
US5400409A (en) 1992-12-23 1995-03-21 Daimler-Benz Ag Noise-reduction method for noise-affected voice channels
US5473759A (en) 1993-02-22 1995-12-05 Apple Computer, Inc. Sound analysis and resynthesis using correlograms
US5590241A (en) * 1993-04-30 1996-12-31 Motorola Inc. Speech processing system and method for enhancing a speech signal in a noisy environment
DE4316297C1 (en) 1993-05-14 1994-04-07 Fraunhofer Ges Forschung Audio signal frequency analysis method - using window functions to provide sample signal blocks subjected to Fourier analysis to obtain respective coefficients.
DE4330243A1 (en) * 1993-09-07 1995-03-09 Philips Patentverwaltung Speech processing facility
US5675778A (en) 1993-10-04 1997-10-07 Fostex Corporation Of America Method and apparatus for audio editing incorporating visual comparison
US5502211A (en) 1993-10-26 1996-03-26 Sun Company, Inc. (R&M) Substituted dipyrromethanes and their preparation
US5574824A (en) * 1994-04-11 1996-11-12 The United States Of America As Represented By The Secretary Of The Air Force Analysis/synthesis-based microphone array speech enhancer with variable signal distortion
US5471195A (en) 1994-05-16 1995-11-28 C & K Systems, Inc. Direction-sensing acoustic glass break detecting system
US5544250A (en) * 1994-07-18 1996-08-06 Motorola Noise suppression system and method therefor
JPH0896514A (en) * 1994-07-28 1996-04-12 Sony Corp Audio signal processor
US5729612A (en) 1994-08-05 1998-03-17 Aureal Semiconductor Inc. Method and apparatus for measuring head-related transfer functions
SE505156C2 (en) 1995-01-30 1997-07-07 Ericsson Telefon Ab L M Procedure for noise suppression by spectral subtraction
US5682463A (en) 1995-02-06 1997-10-28 Lucent Technologies Inc. Perceptual audio compression based on loudness uncertainty
US5920840A (en) * 1995-02-28 1999-07-06 Motorola, Inc. Communication system and method using a speaker dependent time-scaling technique
US5587998A (en) 1995-03-03 1996-12-24 At&T Method and apparatus for reducing residual far-end echo in voice communication networks
US5706395A (en) 1995-04-19 1998-01-06 Texas Instruments Incorporated Adaptive Wiener filtering using a dynamic suppression factor
US6263307B1 (en) 1995-04-19 2001-07-17 Texas Instruments Incorporated Adaptive Wiener filtering using line spectral frequencies
JP3580917B2 (en) 1995-08-30 2004-10-27 本田技研工業株式会社 Fuel cell
US5809463A (en) 1995-09-15 1998-09-15 Hughes Electronics Method of detecting double talk in an echo canceller
US5694474A (en) * 1995-09-18 1997-12-02 Interval Research Corporation Adaptive filter for signal processing and method therefor
US6002776A (en) * 1995-09-18 1999-12-14 Interval Research Corporation Directional acoustic signal processor and method therefor
US5792971A (en) 1995-09-29 1998-08-11 Opcode Systems, Inc. Method and system for editing digital audio information with music-like parameters
IT1281001B1 (en) 1995-10-27 1998-02-11 Cselt Centro Studi Lab Telecom Procedure and equipment for coding, handling and decoding audio signals
US5956674A (en) 1995-12-01 1999-09-21 Digital Theater Systems, Inc. Multi-channel predictive subband audio coder using psychoacoustic adaptive bit allocation in frequency, time and over the multiple channels
FI100840B (en) 1995-12-12 1998-02-27 Nokia Mobile Phones Ltd Noise attenuator and method for attenuating background noise from noisy speech and a mobile station
US5732189A (en) 1995-12-22 1998-03-24 Lucent Technologies Inc. Audio signal coding with a signal adaptive filterbank
JPH09212196A (en) * 1996-01-31 1997-08-15 Nippon Telegr & Teleph Corp <Ntt> Noise suppressor
US5749064A (en) * 1996-03-01 1998-05-05 Texas Instruments Incorporated Method and system for time scale modification utilizing feature vectors about zero crossing points
US5825320A (en) 1996-03-19 1998-10-20 Sony Corporation Gain control method for audio encoding device
US6222927B1 (en) * 1996-06-19 2001-04-24 The University Of Illinois Binaural signal processing system and method
US6978159B2 (en) * 1996-06-19 2005-12-20 Board Of Trustees Of The University Of Illinois Binaural signal processing using multiple acoustic sensors and digital filtering
US6072881A (en) * 1996-07-08 2000-06-06 Chiefs Voice Incorporated Microphone noise rejection system
US5796819A (en) * 1996-07-24 1998-08-18 Ericsson Inc. Echo canceller for non-linear circuits
US5806025A (en) 1996-08-07 1998-09-08 U S West, Inc. Method and system for adaptive filtering of speech signals using signal-to-noise ratio to choose subband filter bank
JPH1054855A (en) 1996-08-09 1998-02-24 Advantest Corp Spectrum analyzer
US6144711A (en) * 1996-08-29 2000-11-07 Cisco Systems, Inc. Spatio-temporal processing for communication
JP3355598B2 (en) * 1996-09-18 2002-12-09 日本電信電話株式会社 Sound source separation method, apparatus and recording medium
US6097820A (en) 1996-12-23 2000-08-01 Lucent Technologies Inc. System and method for suppressing noise in digitally represented voice signals
JP2930101B2 (en) 1997-01-29 1999-08-03 日本電気株式会社 Noise canceller
US5933495A (en) * 1997-02-07 1999-08-03 Texas Instruments Incorporated Subband acoustic noise suppression
DK1326479T4 (en) 1997-04-16 2018-09-03 Semiconductor Components Ind Llc Method and apparatus for noise reduction, especially in hearing aids.
JP4293639B2 (en) 1997-05-01 2009-07-08 メド−エル・エレクトロメディツィニシェ・ゲラーテ・ゲーエムベーハー Low power digital filter apparatus and method
US6151397A (en) * 1997-05-16 2000-11-21 Motorola, Inc. Method and system for reducing undesired signals in a communication environment
JP3541339B2 (en) * 1997-06-26 2004-07-07 富士通株式会社 Microphone array device
EP0889588B1 (en) 1997-07-02 2003-06-11 Micronas Semiconductor Holding AG Filter combination for sample rate conversion
US6430295B1 (en) * 1997-07-11 2002-08-06 Telefonaktiebolaget Lm Ericsson (Publ) Methods and apparatus for measuring signal level and delay at multiple sensors
JP3216704B2 (en) 1997-08-01 2001-10-09 日本電気株式会社 Adaptive array device
US6216103B1 (en) * 1997-10-20 2001-04-10 Sony Corporation Method for implementing a speech recognition system to determine speech endpoints during conditions with background noise
US6134524A (en) 1997-10-24 2000-10-17 Nortel Networks Corporation Method and apparatus to detect and delimit foreground speech
US20020002455A1 (en) 1998-01-09 2002-01-03 At&T Corporation Core estimator and adaptive gains from signal to noise ratio in a hybrid speech enhancement system
JP3435686B2 (en) * 1998-03-02 2003-08-11 日本電信電話株式会社 Sound pickup device
US6717991B1 (en) * 1998-05-27 2004-04-06 Telefonaktiebolaget Lm Ericsson (Publ) System and method for dual microphone signal noise reduction using spectral subtraction
US5990405A (en) 1998-07-08 1999-11-23 Gibson Guitar Corp. System and method for generating and controlling a simulated musical concert experience
US7209567B1 (en) 1998-07-09 2007-04-24 Purdue Research Foundation Communication system with adaptive noise suppression
JP4163294B2 (en) 1998-07-31 2008-10-08 株式会社東芝 Noise suppression processing apparatus and noise suppression processing method
US6173255B1 (en) * 1998-08-18 2001-01-09 Lockheed Martin Corporation Synchronized overlap add voice processing using windows and one bit correlators
US6223090B1 (en) 1998-08-24 2001-04-24 The United States Of America As Represented By The Secretary Of The Air Force Manikin positioning for acoustic measuring
US6122610A (en) 1998-09-23 2000-09-19 Verance Corporation Noise suppression for low bitrate speech coder
US7003120B1 (en) 1998-10-29 2006-02-21 Paul Reed Smith Guitars, Inc. Method of modifying harmonic content of a complex waveform
US6469732B1 (en) 1998-11-06 2002-10-22 Vtel Corporation Acoustic source location using a microphone array
US6266633B1 (en) 1998-12-22 2001-07-24 Itt Manufacturing Enterprises Noise suppression and channel equalization preprocessor for speech and speaker recognizers: method and apparatus
US6381570B2 (en) * 1999-02-12 2002-04-30 Telogy Networks, Inc. Adaptive two-threshold method for discriminating noise from speech in a communication signal
US6363345B1 (en) * 1999-02-18 2002-03-26 Andrea Electronics Corporation System, method and apparatus for cancelling noise
US6496795B1 (en) 1999-05-05 2002-12-17 Microsoft Corporation Modulated complex lapped transform for integrated signal enhancement and coding
CA2367579A1 (en) 1999-03-19 2000-09-28 Siemens Aktiengesellschaft Method and device for recording and processing audio signals in an environment filled with acoustic noise
GB2348350B (en) 1999-03-26 2004-02-18 Mitel Corp Echo cancelling/suppression for handsets
US6487257B1 (en) 1999-04-12 2002-11-26 Telefonaktiebolaget L M Ericsson Signal noise reduction by time-domain spectral subtraction using fixed filters
GB9911737D0 (en) * 1999-05-21 1999-07-21 Philips Electronics Nv Audio signal time scale modification
US6226616B1 (en) 1999-06-21 2001-05-01 Digital Theater Systems, Inc. Sound quality of established low bit-rate audio coding systems without loss of decoder compatibility
US20060072768A1 (en) 1999-06-24 2006-04-06 Schwartz Stephen R Complementary-pair equalizer
US6355869B1 (en) 1999-08-19 2002-03-12 Duane Mitton Method and system for creating musical scores from musical recordings
GB9922654D0 (en) * 1999-09-27 1999-11-24 Jaber Marwan Noise suppression system
FI116643B (en) 1999-11-15 2006-01-13 Nokia Corp Noise reduction
US6513004B1 (en) 1999-11-24 2003-01-28 Matsushita Electric Industrial Co., Ltd. Optimized local feature extraction for automatic speech recognition
US6549630B1 (en) * 2000-02-04 2003-04-15 Plantronics, Inc. Signal expander with discrimination between close and distant acoustic source
WO2001069968A2 (en) 2000-03-14 2001-09-20 Audia Technology, Inc. Adaptive microphone matching in multi-microphone directional system
US7076315B1 (en) 2000-03-24 2006-07-11 Audience, Inc. Efficient computation of log-frequency-scale digital filter cascade
US6434417B1 (en) 2000-03-28 2002-08-13 Cardiac Pacemakers, Inc. Method and system for detecting cardiac depolarization
US20020009203A1 (en) * 2000-03-31 2002-01-24 Gamze Erten Method and apparatus for voice signal extraction
JP2001296343A (en) 2000-04-11 2001-10-26 Nec Corp Device for setting sound source azimuth, and imager and transmission system with the same
US7225001B1 (en) 2000-04-24 2007-05-29 Telefonaktiebolaget Lm Ericsson (Publ) System and method for distributed noise suppression
WO2001087011A2 (en) * 2000-05-10 2001-11-15 The Board Of Trustees Of The University Of Illinois Interference suppression techniques
ATE288666T1 (en) * 2000-05-26 2005-02-15 Koninkl Philips Electronics Nv METHOD FOR NOISE REDUCTION IN AN ADAPTIVE BEAM SHAPER
US6622030B1 (en) 2000-06-29 2003-09-16 Ericsson Inc. Echo suppression using adaptive gain based on residual echo energy
US7246058B2 (en) * 2001-05-30 2007-07-17 Aliph, Inc. Detecting voiced and unvoiced speech using both acoustic and nonacoustic sensors
US8467543B2 (en) * 2002-03-27 2013-06-18 Aliphcom Microphone and voice activity detection (VAD) configurations for use with communication systems
US8019091B2 (en) 2000-07-19 2011-09-13 Aliphcom, Inc. Voice activity detector (VAD) -based multiple-microphone acoustic noise suppression
US6718309B1 (en) * 2000-07-26 2004-04-06 Ssi Corporation Continuously variable time scale modification of digital audio signals
JP4815661B2 (en) 2000-08-24 2011-11-16 ソニー株式会社 Signal processing apparatus and signal processing method
DE10045197C1 (en) * 2000-09-13 2002-03-07 Siemens Audiologische Technik Operating method for hearing aid device or hearing aid system has signal processor used for reducing effect of wind noise determined by analysis of microphone signals
US7020605B2 (en) 2000-09-15 2006-03-28 Mindspeed Technologies, Inc. Speech coding system with time-domain noise attenuation
US20020116187A1 (en) * 2000-10-04 2002-08-22 Gamze Erten Speech detection
US7092882B2 (en) 2000-12-06 2006-08-15 Ncr Corporation Noise suppression in beam-steered microphone array
US20020133334A1 (en) * 2001-02-02 2002-09-19 Geert Coorman Time scale modification of digitally sampled waveforms in the time domain
US7206418B2 (en) * 2001-02-12 2007-04-17 Fortemedia, Inc. Noise suppression for a wireless communication device
US7617099B2 (en) * 2001-02-12 2009-11-10 FortMedia Inc. Noise suppression by two-channel tandem spectrum modification for speech signal in an automobile
US6915264B2 (en) 2001-02-22 2005-07-05 Lucent Technologies Inc. Cochlear filter bank structure for determining masked thresholds for use in perceptual audio coding
SE0101175D0 (en) 2001-04-02 2001-04-02 Coding Technologies Sweden Ab Aliasing reduction using complex-exponential-modulated filter banks
WO2002082428A1 (en) * 2001-04-05 2002-10-17 Koninklijke Philips Electronics N.V. Time-scale modification of signals applying techniques specific to determined signal types
DE10119277A1 (en) 2001-04-20 2002-10-24 Alcatel Sa Masking noise modulation and interference noise in non-speech intervals in telecommunication system that uses echo cancellation, by inserting noise to match estimated level
EP1253581B1 (en) 2001-04-27 2004-06-30 CSEM Centre Suisse d'Electronique et de Microtechnique S.A. - Recherche et Développement Method and system for speech enhancement in a noisy environment
GB2375688B (en) 2001-05-14 2004-09-29 Motorola Ltd Telephone apparatus and a communication method using such apparatus
JP3457293B2 (en) * 2001-06-06 2003-10-14 三菱電機株式会社 Noise suppression device and noise suppression method
US6493668B1 (en) 2001-06-15 2002-12-10 Yigal Brandman Speech feature extraction system
AUPR612001A0 (en) * 2001-07-04 2001-07-26 Soundscience@Wm Pty Ltd System and method for directional noise monitoring
US7142677B2 (en) * 2001-07-17 2006-11-28 Clarity Technologies, Inc. Directional sound acquisition
US6584203B2 (en) * 2001-07-18 2003-06-24 Agere Systems Inc. Second-order adaptive differential microphone array
KR20040019362A (en) 2001-07-20 2004-03-05 코닌클리케 필립스 일렉트로닉스 엔.브이. Sound reinforcement system having an multi microphone echo suppressor as post processor
CA2354858A1 (en) 2001-08-08 2003-02-08 Dspfactory Ltd. Subband directional audio signal processing using an oversampled filterbank
US20030061032A1 (en) 2001-09-24 2003-03-27 Clarity, Llc Selective sound enhancement
US6937978B2 (en) 2001-10-30 2005-08-30 Chunghwa Telecom Co., Ltd. Suppression system of background noise of speech signals and the method thereof
US6792118B2 (en) 2001-11-14 2004-09-14 Applied Neurosystems Corporation Computation of multi-sensor time delays
US6785381B2 (en) * 2001-11-27 2004-08-31 Siemens Information And Communication Networks, Inc. Telephone having improved hands free operation audio quality and method of operation thereof
US20030103632A1 (en) 2001-12-03 2003-06-05 Rafik Goubran Adaptive sound masking system and method
US7315623B2 (en) 2001-12-04 2008-01-01 Harman Becker Automotive Systems Gmbh Method for suppressing surrounding noise in a hands-free device and hands-free device
US7065485B1 (en) * 2002-01-09 2006-06-20 At&T Corp Enhancing speech intelligibility using variable-rate time-scale modification
US8098844B2 (en) * 2002-02-05 2012-01-17 Mh Acoustics, Llc Dual-microphone spatial noise suppression
US7171008B2 (en) * 2002-02-05 2007-01-30 Mh Acoustics, Llc Reducing noise in audio systems
US20050228518A1 (en) 2002-02-13 2005-10-13 Applied Neurosystems Corporation Filter set for frequency analysis
EP1351544A3 (en) * 2002-03-08 2008-03-19 Gennum Corporation Low-noise directional microphone system
AU2003233425A1 (en) 2002-03-22 2003-10-13 Georgia Tech Research Corporation Analog audio enhancement system using a noise suppression algorithm
JP2004023481A (en) 2002-06-17 2004-01-22 Alpine Electronics Inc Acoustic signal processing apparatus and method therefor, and audio system
US7242762B2 (en) * 2002-06-24 2007-07-10 Freescale Semiconductor, Inc. Monitoring and control of an adaptive filter in a communication system
JP4227772B2 (en) 2002-07-19 2009-02-18 日本電気株式会社 Audio decoding apparatus, decoding method, and program
US7555434B2 (en) * 2002-07-19 2009-06-30 Nec Corporation Audio decoding device, decoding method, and program
US20040078199A1 (en) 2002-08-20 2004-04-22 Hanoh Kremer Method for auditory based noise reduction and an apparatus for auditory based noise reduction
US6917688B2 (en) * 2002-09-11 2005-07-12 Nanyang Technological University Adaptive noise cancelling microphone system
US7062040B2 (en) 2002-09-20 2006-06-13 Agere Systems Inc. Suppression of echo signals and the like
WO2004034734A1 (en) 2002-10-08 2004-04-22 Nec Corporation Array device and portable terminal
US7146316B2 (en) 2002-10-17 2006-12-05 Clarity Technologies, Inc. Noise reduction in subbanded speech signals
US7092529B2 (en) 2002-11-01 2006-08-15 Nanyang Technological University Adaptive control system for noise cancellation
US7174022B1 (en) * 2002-11-15 2007-02-06 Fortemedia, Inc. Small array microphone for beam-forming and noise suppression
EP1432222A1 (en) * 2002-12-20 2004-06-23 Siemens Aktiengesellschaft Echo canceller for compressed speech
US7885420B2 (en) 2003-02-21 2011-02-08 Qnx Software Systems Co. Wind noise suppression system
US7949522B2 (en) * 2003-02-21 2011-05-24 Qnx Software Systems Co. System for suppressing rain noise
US8271279B2 (en) 2003-02-21 2012-09-18 Qnx Software Systems Limited Signature noise removal
FR2851879A1 (en) 2003-02-27 2004-09-03 France Telecom Process for processing compressed sound data for spatialization
GB2398913B (en) 2003-02-27 2005-08-17 Motorola Inc Noise estimation in speech recognition
US7233832B2 (en) * 2003-04-04 2007-06-19 Apple Inc. Method and apparatus for expanding audio data
US7428000B2 (en) 2003-06-26 2008-09-23 Microsoft Corp. System and method for distributed meetings
TWI221561B (en) * 2003-07-23 2004-10-01 Ali Corp Nonlinear overlap method for time scaling
DE10339973A1 (en) 2003-08-29 2005-03-17 Daimlerchrysler Ag Intelligent acoustic microphone frontend with voice recognition feedback
US20070067166A1 (en) 2003-09-17 2007-03-22 Xingde Pan Method and device of multi-resolution vector quantilization for audio encoding and decoding
JP2005110127A (en) 2003-10-01 2005-04-21 Canon Inc Wind noise detecting device and video camera with wind noise detecting device
JP4396233B2 (en) 2003-11-13 2010-01-13 パナソニック株式会社 Complex exponential modulation filter bank signal analysis method, signal synthesis method, program thereof, and recording medium thereof
US6982377B2 (en) * 2003-12-18 2006-01-03 Texas Instruments Incorporated Time-scale modification of music signals based on polyphase filterbanks and constrained time-domain processing
JP4162604B2 (en) 2004-01-08 2008-10-08 株式会社東芝 Noise suppression device and noise suppression method
US7499686B2 (en) * 2004-02-24 2009-03-03 Microsoft Corporation Method and apparatus for multi-sensory speech enhancement on a mobile device
EP1581026B1 (en) 2004-03-17 2015-11-11 Nuance Communications, Inc. Method for detecting and reducing noise from a microphone array
US20050288923A1 (en) 2004-06-25 2005-12-29 The Hong Kong University Of Science And Technology Speech enhancement by noise masking
US8340309B2 (en) 2004-08-06 2012-12-25 Aliphcom, Inc. Noise suppressing multi-microphone headset
US20070230712A1 (en) 2004-09-07 2007-10-04 Koninklijke Philips Electronics, N.V. Telephony Device with Improved Noise Suppression
ATE405925T1 (en) 2004-09-23 2008-09-15 Harman Becker Automotive Sys Multi-channel adaptive voice signal processing with noise cancellation
US7383179B2 (en) 2004-09-28 2008-06-03 Clarity Technologies, Inc. Method of cascading noise reduction algorithms to avoid speech distortion
US8170879B2 (en) * 2004-10-26 2012-05-01 Qnx Software Systems Limited Periodic signal enhancement system
US20060133621A1 (en) * 2004-12-22 2006-06-22 Broadcom Corporation Wireless telephone having multiple microphones
US20070116300A1 (en) 2004-12-22 2007-05-24 Broadcom Corporation Channel decoding for wireless telephones with multiple microphones and multiple description transmission
US20060149535A1 (en) * 2004-12-30 2006-07-06 Lg Electronics Inc. Method for controlling speed of audio signals
US20060184363A1 (en) 2005-02-17 2006-08-17 Mccree Alan Noise suppression
US8311819B2 (en) * 2005-06-15 2012-11-13 Qnx Software Systems Limited System for detecting speech with background voice estimates and noise estimates
WO2007003683A1 (en) 2005-06-30 2007-01-11 Nokia Corporation System for conference call and corresponding devices, method and program products
US7464029B2 (en) 2005-07-22 2008-12-09 Qualcomm Incorporated Robust separation of speech signals in a noisy environment
JP4765461B2 (en) 2005-07-27 2011-09-07 日本電気株式会社 Noise suppression system, method and program
US7917561B2 (en) 2005-09-16 2011-03-29 Coding Technologies Ab Partially complex modulated filter bank
US7957960B2 (en) * 2005-10-20 2011-06-07 Broadcom Corporation Audio time scale modification using decimation-based synchronized overlap-add algorithm
US7565288B2 (en) 2005-12-22 2009-07-21 Microsoft Corporation Spatial noise suppression for a microphone array
US8345890B2 (en) 2006-01-05 2013-01-01 Audience, Inc. System and method for utilizing inter-microphone level differences for speech enhancement
CN1809105B (en) 2006-01-13 2010-05-12 Beijing Vimicro Electronics Co., Ltd. Dual-microphone speech enhancement method and system applicable to mini-type mobile communication devices
US8744844B2 (en) 2007-07-06 2014-06-03 Audience, Inc. System and method for adaptive intelligent noise suppression
US9185487B2 (en) 2006-01-30 2015-11-10 Audience, Inc. System and method for providing noise suppression utilizing null processing noise subtraction
US8194880B2 (en) 2006-01-30 2012-06-05 Audience, Inc. System and method for utilizing omni-directional microphones for speech enhancement
US20070195968A1 (en) 2006-02-07 2007-08-23 Jaber Associates, L.L.C. Noise suppression method and system with single microphone
EP1827002A1 (en) * 2006-02-22 2007-08-29 Alcatel Lucent Method of controlling an adaptation of a filter
JP2007270061A (en) 2006-03-31 2007-10-18 Nippon Oil Corp Method for producing liquid fuel base
US8150065B2 (en) 2006-05-25 2012-04-03 Audience, Inc. System and method for processing an audio signal
US8934641B2 (en) 2006-05-25 2015-01-13 Audience, Inc. Systems and methods for reconstructing decomposed audio signals
JP5053587B2 (en) 2006-07-31 2012-10-17 東亞合成株式会社 High-purity production method of alkali metal hydroxide
KR100883652B1 (en) 2006-08-03 2009-02-18 삼성전자주식회사 Method and apparatus for speech/silence interval identification using dynamic programming, and speech recognition system thereof
JP4184400B2 (en) 2006-10-06 2008-11-19 誠 植村 Construction method of underground structure
TWI312500B (en) * 2006-12-08 2009-07-21 Micro Star Int Co Ltd Method of varying speech speed
US8488803B2 (en) 2007-05-25 2013-07-16 Aliphcom Wind suppression/replacement component for use with electronic systems
US20090012786A1 (en) * 2007-07-06 2009-01-08 Texas Instruments Incorporated Adaptive Noise Cancellation
KR101444100B1 (en) * 2007-11-15 2014-09-26 삼성전자주식회사 Noise cancelling method and apparatus from the mixed sound
US8194882B2 (en) 2008-02-29 2012-06-05 Audience, Inc. System and method for providing single microphone noise suppression fallback
US8355511B2 (en) 2008-03-18 2013-01-15 Audience, Inc. System and method for envelope-based acoustic echo cancellation
US8131541B2 (en) * 2008-04-25 2012-03-06 Cambridge Silicon Radio Limited Two microphone noise reduction system
US20110178800A1 (en) 2010-01-19 2011-07-21 Lloyd Watts Distortion Measurement for Noise Suppression System
US9099077B2 (en) * 2010-06-04 2015-08-04 Apple Inc. Active noise cancellation decisions using a degraded reference
US8744091B2 (en) * 2010-11-12 2014-06-03 Apple Inc. Intelligibility control using ambient noise detection

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6549586B2 (en) * 1999-04-12 2003-04-15 Telefonaktiebolaget L M Ericsson System and method for dual microphone signal noise reduction using spectral subtraction
US7117145B1 (en) * 2000-10-19 2006-10-03 Lear Corporation Adaptive filter for speech enhancement in a noisy environment
US20080201138A1 (en) * 2004-07-22 2008-08-21 Softmax, Inc. Headset for Separation of Speech Signals in a Noisy Environment

Cited By (1)

Publication number Priority date Publication date Assignee Title
US9699554B1 (en) 2010-04-21 2017-07-04 Knowles Electronics, Llc Adaptive signal equalization

Also Published As

Publication number Publication date
US8345890B2 (en) 2013-01-01
KR20080092404A (en) 2008-10-15
FI20080428L (en) 2008-07-04
US20130096914A1 (en) 2013-04-18
JP2009522942A (en) 2009-06-11
WO2007081916A2 (en) 2007-07-19
US20070154031A1 (en) 2007-07-05
JP5007442B2 (en) 2012-08-22
KR101210313B1 (en) 2012-12-10
US8867759B2 (en) 2014-10-21
WO2007081916A3 (en) 2007-12-21

Similar Documents

Publication Title
US8867759B2 (en) System and method for utilizing inter-microphone level differences for speech enhancement
US8194880B2 (en) System and method for utilizing omni-directional microphones for speech enhancement
US8194882B2 (en) System and method for providing single microphone noise suppression fallback
US9437180B2 (en) Adaptive noise reduction using level cues
US8189766B1 (en) System and method for blind subband acoustic echo cancellation postfiltering
US9558755B1 (en) Noise suppression assisted automatic speech recognition
US9076456B1 (en) System and method for providing voice equalization
US8355511B2 (en) System and method for envelope-based acoustic echo cancellation
US9185487B2 (en) System and method for providing noise suppression utilizing null processing noise subtraction
US8682006B1 (en) Noise suppression based on null coherence
KR101449433B1 (en) Noise cancelling method and apparatus from the sound signal through the microphone
US8521530B1 (en) System and method for enhancing a monaural audio signal
US8143620B1 (en) System and method for adaptive classification of audio sources
US8774423B1 (en) System and method for controlling adaptivity of signal modification using a phantom coefficient
EP2701145A1 (en) Noise estimation for use with noise reduction and echo cancellation in personal communication
US20030055627A1 (en) Multi-channel speech enhancement system and method based on psychoacoustic masking effects
US8849231B1 (en) System and method for adaptive power control
Schmidt et al. Signal processing for in-car communication systems
US8761410B1 (en) Systems and methods for multi-channel dereverberation
US8259926B1 (en) System and method for 2-channel and 3-channel acoustic echo cancellation
Yousefian et al. Using power level difference for near field dual-microphone speech enhancement
US20110051955A1 (en) Microphone signal compensation apparatus and method thereof
Martín-Doñas et al. A postfiltering approach for dual-microphone smartphones
Kowalczyk Multichannel Wiener filter with early reflection raking for automatic speech recognition in presence of reverberation
Zhang et al. A frequency domain approach for speech enhancement with directionality using compact microphone array.

Legal Events

Date Code Title Description
AS Assignment

Owner name: AUDIENCE, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:AVENDANO, CARLOS;SANTOS, PETER;WATTS, LLOYD;SIGNING DATES FROM 20060127 TO 20110829;REEL/FRAME:035330/0311

AS Assignment

Owner name: AUDIENCE LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:AUDIENCE, INC.;REEL/FRAME:037927/0424

Effective date: 20151217

Owner name: KNOWLES ELECTRONICS, LLC, ILLINOIS

Free format text: MERGER;ASSIGNOR:AUDIENCE LLC;REEL/FRAME:037927/0435

Effective date: 20151221

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION