WO2012158163A1 - Non-linear post-processing for acoustic echo cancellation - Google Patents

Non-linear post-processing for acoustic echo cancellation

Info

Publication number
WO2012158163A1
Authority
WO
WIPO (PCT)
Prior art keywords
end signal
signal
coherence
suppression factors
echo
Prior art date
Application number
PCT/US2011/036856
Other languages
French (fr)
Inventor
Andrew John Macdonald
Jan Skoglund
Original Assignee
Google Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google Inc. filed Critical Google Inc.
Priority to PCT/US2011/036856 priority Critical patent/WO2012158163A1/en
Priority to EP11721215.9A priority patent/EP2710787A1/en
Priority to CN201180072348.6A priority patent/CN103718538B/en
Publication of WO2012158163A1 publication Critical patent/WO2012158163A1/en

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 9/00 Arrangements for interconnection not involving centralised switching
    • H04M 9/08 Two-way loud-speaking telephone systems with means for conditioning the signal, e.g. for suppressing echoes for one or both directions of traffic
    • H04M 9/082 Two-way loud-speaking telephone systems with means for conditioning the signal, e.g. for suppressing echoes for one or both directions of traffic using echo cancellers

Definitions

  • the present invention relates generally to a method and system for cancellation of echoes in telecommunication systems. It particularly relates to a method and system for removing residual echo from an error signal by non-linear post processing of the error signal.
  • Speech quality is an important factor for telephony system suppliers. Customer demand makes it vital to strive for continuous improvements.
  • An echo, which is a delayed version of what was originally transmitted, is regarded as a severe distraction to the speaker if the delay is long. For short round trip delays of less than approximately 20 ms, the speaker will not be able to distinguish the echo from the side tone in the handset.
  • a remotely generated echo signal often has a substantial delay.
  • the speech and channel coding compulsory in digital radio communications systems and for telephony over the Internet protocol (IP telephony, for short) also result in significant delays which make the echoes generated a relatively short distance away clearly audible to the speaker. Hence, canceling the echo is a significant factor in maintaining speech quality.
  • An echo canceller typically includes a linear filtering part which essentially is an adaptive filter that tries to adapt to the echo path. In this way, a replica of the echo can be produced from the far-end signal and subtracted from the near-end signal, thereby canceling the echo.
  • the filter generating the echo replica may have a finite or infinite impulse response. Most commonly it is an adaptive, linear finite impulse response (FIR) filter with a number of delay lines and a corresponding number of coefficients, or filter delay taps. The coefficients are values, which when multiplied with delayed versions of the filter input signal, generate an estimate of the echo.
  • the filter is adapted, i.e. updated, so that the coefficients converge to optimum values.
  • a traditional way to cancel out the echo is to update a finite impulse response (FIR) filter using the normalized least mean square (NLMS) algorithm.
  • FIR finite impulse response
  • NLMS normalized least mean square
  • the acoustic echo canceller (AEC) employs the linear filter as a first stage to model the system impulse response. An estimated echo signal is obtained by filtering the far-end signal. This estimated echo signal is then subtracted from the near-end signal to cancel the echo. A problem, however, is that some audible echo will generally remain in the residual error signal after this first stage. A second stage post-processor needs to be applied to remove the residual echo.
  • a method for non-linear post processing of an audio signal for acoustic echo cancellation includes receiving as input, by a non-linear processor, at least two of the following signals: a far-end signal to be rendered and a plurality of capture-side signals, transforming the received signals to the frequency domain, and computing, for each frequency band, one or more coherence measures between the received signals.
  • the method also includes deriving suppression factors corresponding to each band based on the one or more coherence measures and applying the suppression factors to one of the capture-side signals to substantially remove echo from the capture-side signal.
  • the plurality of capture-side signals include a near-end captured signal and an error signal containing a residual echo output from a linear adaptive filter.
  • the method includes tracking the coherence measures over a predetermined amount of time to determine whether the near-end signal is in a "no-echo-state" or in an "echo-state".
  • the computing step further includes computing, for each frequency band, a first coherence measure between the far-end signal and the near-end signal; a second coherence measure between the near-end signal and the error signal; and applying the first and second coherence measures to compute the suppression factors.
  • the suppression factors are directly proportional to a combination of the coherence measures.
  • the suppression factors are directly proportional to one of the first coherence measure and the second coherence measure when the near-end signal is in the "no echo state".
  • the suppression factors are directly proportional to a minimum of the first and second coherence measures when the near-end signal is in the "echo state".
  • the first coherence measure is a frequency-domain analog to time-domain correlation between the far-end signal and the near-end signal.
  • the second coherence measure is a frequency-domain analog to time-domain correlation between the near-end signal and the error signal.
  • the method further includes applying suppression factors to the error signal to substantially remove the residual echo from the error signal.
  • the method further includes detecting filter divergence by comparing the energy of the error signal and the near-end signal and applying the suppression factors to the near-end signal based on detected filter divergence.
  • the method also includes accentuating valleys in the suppression factors by raising to a power.
  • the method includes weighting the suppression factors with a curve configured to influence less accurate bands.
  • the method includes tracking a minimum suppression factor and scaling the suppression factors such that the minimum approaches a target value.
  • the method includes transforming the far-end signal, the near-end signal, and the error signal to the frequency domain.
  • the frequency bands correspond to individual Discrete Fourier Transform (DFT) coefficients.
  • DFT Discrete Fourier Transform
  • a system for non-linear postprocessing of an audio signal for acoustic echo cancellation includes a non-linear processor and a transform unit.
  • the non-linear processor receives, as input, at least two of the following signals: a far-end signal to be rendered and a plurality of capture-side signals.
  • the transform unit transforms the received signals to the frequency domain.
  • the non-linear processor is configured to: compute, for each frequency band, one or more coherence measures between the received signals, derive suppression factors corresponding to each band based on the one or more coherence measures, and apply the suppression factors to one of the capture-side signals to substantially remove echo from the capture-side signal.
  • the non-linear processor is configured to track the coherence measures over a predetermined amount of time to determine whether the near-end signal is in a "no-echo-state" or in an "echo-state".
  • the non-linear processor is configured to compute, for each frequency band, a first coherence measure between the far-end signal and the near-end signal; a second coherence measure between the near-end signal and the error signal; and apply the first and second coherence measures to compute the suppression factors.
  • the non-linear processor is configured to apply suppression factors to the error signal to substantially remove the residual echo from the error signal.
  • the non-linear processor is configured to detect a filter divergence by comparing energy of the error signal and the near-end signal and apply the suppression factors to the near-end signal based on the detected filter divergence.
  • the non-linear processor is configured to accentuate valleys in the suppression factors by raising to a power.
  • the non-linear processor is configured to weight the suppression factors with a curve configured to influence less accurate bands.
  • the non-linear processor is configured to track a minimum suppression factor and scale the suppression factors such that the minimum approaches a target value.
  • the transform unit is configured to transform the far-end signal, the near-end signal, and the error signal to the frequency domain.
  • the frequency bands correspond to individual Discrete Fourier Transform (DFT) coefficients.
  • DFT Discrete Fourier Transform
  • a computer-readable storage medium having stored thereon computer executable program for non-linear post-processing of an audio signal for acoustic echo cancellation.
  • the computer program when executed causes a processor to execute the steps of: receiving as input, by a non-linear processor, at least two of the following signals: a far-end signal to be rendered and a plurality of capture-side signals; transforming the received signals to the frequency domain; computing, for each frequency band, one or more coherence measures between the received signals; deriving suppression factors corresponding to each band based on the one or more coherence measures; and applying the suppression factors to one of the capture-side signals to substantially remove echo from the capture-side signal.
  • the computer program when executed causes the processor to further execute the step of tracking the coherence measures over a predetermined amount of time to determine whether the near-end signal is in a "no-echo-state" or in an "echo-state".
  • the computer program when executed causes the processor to further execute the steps of computing, for each frequency band, a first coherence measure between the far-end signal and the near-end signal, a second coherence measure between the near-end signal and the error signal, and applying the first and second coherence measures to compute the suppression factors.
  • the computer program when executed causes the processor to further execute the step of applying suppression factors to the error signal to substantially remove the residual echo from the error signal.
  • the computer program when executed causes the processor to further execute the steps of detecting filter divergence by comparing the energy of the error signal and the near-end signal and applying the suppression factors to the near-end signal based on detected filter divergence.
  • the computer program when executed causes the processor to further execute the step of accentuating valleys in the suppression factors by raising to a power.
  • the computer program when executed causes the processor to execute the step of weighting the suppression factors with a curve configured to influence less accurate bands.
  • the computer program when executed causes the processor to further execute the step of tracking a minimum suppression factor and scaling the suppression factors such that the minimum approaches a target value.
  • the computer program when executed causes the processor to further execute the step of transforming the far-end signal, the near-end signal, and the error signal to the frequency domain.
  • Fig. 1 is a block diagram of an acoustic echo canceller in accordance with an embodiment of the present invention.
  • Fig. 2 illustrates a more detailed block diagram describing the functions that may be performed in the adaptive filter of Fig. 1 in accordance with an embodiment of the present invention.
  • FIG. 3 illustrates computational stages of the adaptive filter of Fig. 2 in accordance with an embodiment of the present invention.
  • FIG. 4 illustrates a more detailed block diagram describing block Gm in Fig. 3 in accordance with an embodiment of the present invention.
  • Fig. 5 illustrates a flow diagram describing computational stages of the nonlinear processor of Fig. 1 in accordance with an embodiment of the present invention.
  • Fig. 6 is a flow diagram illustrating operations performed by the acoustic echo canceller according to an embodiment of the present invention illustrated in Fig. 5.
  • Fig. 7 is a flow diagram illustrating operations performed by the acoustic echo canceller according to an embodiment of the present invention illustrated in Fig. 6.
  • FIG. 8 is a block diagram illustrating an exemplary computing device that is arranged for acoustic echo cancellation in accordance with an embodiment of the present invention.
  • Fig. 1 illustrates an acoustic echo canceller (AEC) 100 in accordance with an exemplary embodiment of the present invention.
  • AEC acoustic echo canceller
  • the AEC 100 is designed as a high quality echo canceller for voice and audio communication over packet switched networks. More specifically, the AEC 100 is designed to cancel acoustic echo 130 that emerges due to the reflection of sound waves emitted by a render device 10 off boundary surfaces and other objects back to a near-end capture device 20. The echo 130 may also exist due to the direct path from render device 10 to the capture device 20.
  • Render device 10 may be any of a variety of audio output devices, including a loudspeaker or group of loudspeakers configured to output sound from one or more channels.
  • Capture device 20 may be any of a variety of audio input devices, such as one or more microphones configured to capture sound and generate input signals.
  • render device 10 and capture device 20 may be hardware devices internal to a computer system, or external peripheral devices connected to a computer system via wired and/or wireless connections.
  • render device 10 and capture device 20 may be components of a single device, such as a microphone, telephone handset, etc.
  • one or both of render device 10 and capture device 20 may include analog-to-digital and/or digital-to-analog transformation functionalities.
  • the echo canceller 100 includes a linear filter 102, a nonlinear processor (NLP) 104, a far-end buffer 106, and a blocking buffer 108.
  • a far-end signal 110 generated at the far-end and transmitted to the near-end is input to the filter 102 via the far-end buffer (FEBuf) 106 and the blocking buffer 108.
  • the far-end signal 110 is also input to a play-out buffer 112 located near the render device 10.
  • the output signal 116 of the far-end buffer 106 is input to the blocking buffer 108 and the output signal 118 of the blocking buffer is input to the linear filter 102.
  • the far-end buffer 106 is configured to compensate for and synchronize to buffering at sound devices (not shown).
  • the blocking buffer 108 is configured to block the signal samples for a frequency-domain transformation to be performed by the linear filter 102 and the NLP 104.
  • the linear filter 102 is an adaptive filter.
  • Linear filter 102 operates in the frequency domain through, e.g., the Discrete Fourier Transform (DFT).
  • the DFT may be implemented as a Fast Fourier Transform (FFT).
  • FFT Fast Fourier Transform
  • the other input to the filter 102 is the near-end signal (Sin) 122 from the capture device 20 via a recording buffer 114.
  • the near-end signal 122 includes near-end speech 120 and the echo 130.
  • the NLP 104 receives three signals as input. It receives (1) the far-end signal via the far-end buffer 106 and blocking buffer 108, (2) the near-end signal via the recording buffer 114, and (3) the output signal 124 of the filter 102.
  • the output signal 124 is also referred to as an error signal. In a case when the NLP 104 attenuates the output signal 124, a comfort noise signal is generated which will be explained later.
  • each frame is divided into 64 sample blocks. Since this choice of block size does not produce an integer number of blocks per frame the signal needs to be buffered before the processing. This buffering is handled by the blocking buffer 108 as discussed above. Both the filter 102 and the NLP 104 operate in the frequency domain and utilize DFTs of 128 samples.
  • the performance of the AEC 100 is influenced by the operation of the play- out buffer 112 and the recording buffer 114 at the sound device.
  • the AEC 100 may not start unless the combined size of the play-out buffer 112 and the recording buffer 114 is reasonably stable within a predetermined limit. For example, if the combined size is stable within +/- 8 ms of the first started size, for four consecutive frames, the AEC 100 is started by filling up the internal far-end buffer 106.
  • FIG. 2 illustrates a more detailed block diagram describing the functions performed in the filter 102 of Fig. 1.
  • Fig. 3 illustrates computational stages of the filter 102 in accordance with an embodiment of the present invention.
  • the adaptive filter 102 includes a first transform section 200, an inverse transform section 202, a second transform section 204, and an impulse response section (H) 206.
  • the far-end signal x(n) 210 to be rendered at the render device 10 is input to the first transform section 200.
  • the output signal X(n, k) of the first transform section 200 is input to the impulse response section 206.
  • the output signal Y(n, k) is input to the inverse transform section 202, which outputs the signal y(n).
  • This signal y(n) is then subtracted from the near-end signal d(n) 220 captured by the capture device 20 to output an error signal e(n) 230 as the output of the linear stage of the filter 102.
  • the error signal 230 is also input to the second transform section 204 the output signal of which, E(n, k), is also input to the impulse response section 206.
  • the above-mentioned adaptive filtering approach relates to an implementation of a standard blocked time-domain Least Mean Square (LMS) algorithm.
  • LMS Least Mean Square
  • the complexity reduction is due to the filtering and the correlations being performed in the frequency domain, where time-domain convolution is replaced by multiplication.
  • the error is formed in the time domain and is transformed to the frequency domain for updating the filter 102 as illustrated in Fig. 2.
  • Fig. 4 illustrates a more detailed block diagram describing block Gm in the FLMS method of Fig. 3 in accordance with an embodiment of the present invention.
  • I_N is an N x N identity matrix
  • 0_N is an N x N zero matrix. This means that the time domain vector is appended with N zeros before the Fourier transform.
  • the far-end samples x(n) 310 are blocked into vectors of 2N samples, i.e. two blocks, at step S312: x(k - m) = [x((k - m - 2)N) ... x((k - m)N - 1)]^T
  • the estimated echo signal is then obtained as the last N coefficients of the inverse transformed sum of the filter products performed at step S320, from which the first block is discarded at step S322.
  • the estimated echo signal is represented as
  • N zeros are inserted into the error vector at step S316, and the augmented vector is transformed at step S318.
  • Fig. 4 illustrates a more detailed block diagram describing block Gm in Fig. 3 in accordance with an embodiment of the present invention, where the filter coefficient update can be expressed as
  • B(k) as shown in Fig. 4 is a modified error vector.
  • the modification includes a power normalization followed by a magnitude limiter 410.
  • the normalized error vector, as also shown in Fig. 4, follows an exponential smoothing recursion of the form B(k) = γ B(k - 1) + (1 - γ) · (power-normalized error vector).
  • the diagonal matrix X(k-m) is conjugated by the conjugate unit 420, and the result is multiplied by the vector B(k) prior to performing an inverse DFT transform by the Inverse Discrete Fourier Transform (IDFT) unit 430. Then the discard last block unit 440 discards the last block. After discarding the last block, a zero block is appended by the append zero block unit 450 prior to performing a DFT by the DFT unit 460. Then, a block delay is introduced by the delay unit 480, which outputs W_m(k).
  • IDFT Inverse Discrete Fourier Transform
  • Fig. 5 illustrates a flow diagram describing computational processes of the NLP 104 of Fig. 1 in accordance with an embodiment of the present invention.
  • the NLP 104 of the AEC 100 accepts three signals as input: i) the far-end signal x(n) 110 to be rendered by the render device 10, ii) the near-end signal d(n) 122 captured by the capture device 20, and iii) the output error signal e(n) 124 of the linear stage performed at the filter 102.
  • the error signal e(n) 124 typically contains residual echo that should be removed for good performance.
  • the objective of the NLP 104 is to remove this residual echo.
  • the first step is to transform all three input signals to the frequency domain.
  • the far-end signal 110 is transformed to the frequency domain.
  • the near-end signal 122 is transformed to the frequency domain and at step S501 ", the error signal 124 is transformed to the frequency domain.
  • the NLP 104 is block-based and shares the block length N of the linear stage, but uses an overlap-add method rather than overlap-save: consecutive blocks are concatenated, windowed and transformed. By defining ∘ as the element-wise product operator, the k-th transformed block is expressed as the DFT of the windowed concatenation, i.e. F(w ∘ x_k), where w is the 2N-sample analysis window and x_k denotes the two concatenated blocks.
  • the analysis window satisfies an overlap-add constraint for n = N, N + 1, . . . , 2N to provide perfect reconstruction.
  • since the signals are real-valued, only N + 1 complex coefficients of the length-2N DFT vectors are retained; the redundant N - 1 complex coefficients are discarded.
  • X_k, D_k and E_k refer to the frequency-domain representations of the k-th far-end, near-end and error blocks, respectively.
  • echo suppression is achieved by multiplying each frequency band of the error signal e(n) 124 with a suppression factor between 0 and 1.
  • each band corresponds to an individual DFT coefficient. In general, however, each band may correspond to an arbitrary range of frequencies. Comfort noise is added and after undergoing an inverse FFT, the suppressed signal is windowed, and overlapped and added with the previous block to obtain the output.
  • the power spectral density (PSD) of each signal is obtained.
  • the PSD of the far-end signal x(n) 110 is computed.
  • the PSD of the near- end signal d(n) 122 is computed and at step S503", the PSD of the error signal e(n) 124 is computed.
  • the PSDs of the far-end signal 110, near-end signal 122, and the error signal 124 are represented by S_x, S_d, and S_e, respectively.
  • the complex-valued cross-PSDs between i) the far-end signal x(n) 110 and near-end signal d(n) 122, and ii) the near-end signal d(n) 122 and error signal e(n) 124 are also obtained.
  • the complex-valued cross-PSD between the far-end signal 110 and the near-end signal 122 is computed and at step S504', the complex-valued cross-PSD between the near-end signal 122 and the error signal 124 is computed.
  • the complex-valued cross-PSD of the far-end signal 110 and near-end signal 122 is represented as S_xd.
  • the complex-valued cross-PSD of the near-end signal 122 and error signal 124 is represented as S_de.
  • the PSDs are exponentially smoothed to avoid sudden erroneous shifts in echo suppression.
  • the PSDs are given by exponentially smoothed estimates of the form S_x(k) = γ S_x(k - 1) + (1 - γ) |X_k|^2, with analogous expressions for S_d, S_e, and the cross-PSDs S_xd and S_de.
  • at step S505, an old far-end block is selected to best synchronize it with the corresponding echo in the near-end signal.
  • in some situations, the linear filter 102 diverges from a good echo path estimate. This tends to result in a highly distorted error signal which, although still useful for analysis, should not be used for output. According to an embodiment of the invention, divergence may be detected fairly easily, as it usually adds energy to, rather than removes energy from, the near-end signal d(n) 122.
  • the divergence state determined at step S511 is utilized to select (S512) either E_k or D_k: if the error energy exceeds the near-end energy, the filter is considered divergent and D_k is selected for output; otherwise E_k is used.
  • the PSDs are used to compute the coherence measures for each frequency band between i) the far-end signal 110 and near-end signal 122, and ii) the near-end signal 122 and error signal 124, at step S513, as the magnitude-squared coherences c_xd = |S_xd|^2 / (S_x S_d) and c_de = |S_de|^2 / (S_d S_e) (a schematic Python sketch of these NLP steps is given after this list).
  • Coherence is a frequency-domain analog to time-domain correlation. It is a measure of similarity with 0 ≤ c(n) ≤ 1, where a higher coherence corresponds to more similarity.
  • the quantity 1 - c_xd is also computed from c_xd.
  • the echo 130 is suppressed while allowing simultaneous near-end speech 120 to pass through.
  • the NLP 104 is configured to achieve this because the coherence is calculated independently for each frequency band. Thus, bands containing echo are fully or partially suppressed, while bands free of echo are not affected.
  • f_s is the sampling frequency.
  • the preferred bands were chosen from frequency regions most likely to be accurate across a range of scenarios.
  • at step S519, the system selects either c_de or c_xd.
  • c_xd is tracked over time to determine the broad state of the system at step S521. The purpose of this is to avoid suppression when the echo path is close to zero (e.g. during a call with a headset).
  • a thresholded minimum of c_xd is computed at step S519; depending on this minimum, the system is judged either to possibly contain echo or to contain no echo.
  • the echo state is provided through an interface for potential use by other audio processing components.
  • suppression is limited by the selection of suppression factors at steps S520, S524 and S518.
  • the overdrive is set at step S531 such that applying it to the minimum suppression factor will result in the target suppression level.
  • the s_h level is computed at step S533.
  • the final suppression factors are produced according to the following algorithm.
  • s is first weighted towards s_h according to a weighting vector v_SN with components 0 ≤ v_SN(n) ≤ 1: in bands where s(n) > s_h, s(n) is replaced by the convex combination v_SN(n) s_h + (1 - v_SN(n)) s(n); otherwise s(n) is left unchanged.
  • the weighting is selected to influence typically less accurate bands more heavily.
  • v_TN is another weighting vector fulfilling a similar purpose as v_SN. Overdriving through raising to a power serves to accentuate valleys in s.
  • the output spectrum is formed as Y_k = s ∘ E_k + N_k, where N_k is artificial comfort noise, and at step S537 an inverse transform is performed to obtain the output signal y(n).
  • the suppression removes near-end noise as well as echo, resulting in an audible change in the noise level. This issue is mitigated by adding generated "comfort noise” to replace the lost noise.
  • the generation of N will be discussed in a later section below.
  • White noise may be produced by generating a random complex vector, u_k, on the unit circle. This is shaped to match N0_k and weighted by the suppression levels to give the comfort noise.
  • Fig. 6 shows a flow diagram illustrating operations performed by the acoustic echo canceller 100 according to the exemplary aspect of the present invention. More specifically, according to an embodiment of the invention, Fig. 6 further describes the algorithms on how echo state and suppression factors are determined in the NLP 104 of the AEC 100 as described above with respect to Fig. 5.
  • both the coherence c_xd between the far-end signal 110 and near-end signal 122 and the coherence c_de between the near-end signal 122 and error signal 124 are tracked over time to determine the state of the AEC 100. Based on the determination of a high or a low coherence, the NLP 104 decides whether to enter or leave the coherent state.
  • coherence is a frequency-domain analog to time-domain correlation. More specifically, as mentioned above with reference to Fig. 5, coherence is a measure of similarity with 0 ≤ c(n) ≤ 1, where a higher coherence corresponds to more similarity.
  • at step S613, if the NLP 104 determines that the AEC 100 is not in the coherent state, the following suppression factor s is output by the NLP 104 at step S621:
  • the suppression factors may then be applied by the NLP 104 to the error signal 124 to substantially remove residual echo from the error signal 124.
  • Fig. 7 is a flow diagram illustrating operations performed by the AEC 100 according to an embodiment of the present invention illustrated in Fig. 1. More specifically, according to an embodiment of the invention, Fig. 7 further describes the algorithms on how to remove residual echo from the error signal 124 by utilizing the echo state information and suppression factors determined in the NLP 104 of the AEC 100 as described above with respect to Figs. 5 and 6.
  • the NLP 104 receives as input the far-end signal 110 to be rendered, the near-end captured signal 122, and the error signal 124 containing a residual echo output from the linear adaptive filter 102.
  • the far-end signal 110, the near-end signal 122, and the error signal 124 are transformed into the frequency domain by the corresponding transform sections as described above with reference to Figs. 2-5.
  • a first coherence measure is computed between the far-end signal 110 and the near-end signal 122 according to the algorithm as described above with reference to Fig. 5.
  • a second coherence measure is computed between the near-end signal 122 and the error signal 124 according to the algorithm as described above with reference to Fig. 5.
  • suppression factors are derived corresponding to each band of frequencies.
  • the suppression factors are applied to the error signal 124 or to the near-end signal 122 to substantially remove echo from the error signal 124 or the near-end signal 122.
  • Fig. 8 is a block diagram illustrating an example computing device 800 that may be utilized to implement the AEC 100 including, but not limited to, the NLP 104, the filter 102, the far-end buffer 106, and the blocking buffer 108 as well as the processes illustrated in Figs. 3 and 5-7 in accordance with the present disclosure.
  • computing device 800 typically includes one or more processors 810 and system memory 820.
  • a memory bus 830 can be used for communicating between the processor 810 and the system memory 820.
  • processor 810 can be of any type including but not limited to a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof.
  • Processor 810 can include one or more levels of caching, such as a level one cache 811 and a level two cache 812, a processor core 813, and registers 814.
  • the processor core 813 can include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP Core), or any combination thereof.
  • a memory controller 815 can also be used with the processor 810, or in some implementations the memory controller 815 can be an internal part of the processor 810.
  • system memory 820 can be of any type including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.) or any combination thereof.
  • System memory 820 typically includes an operating system 821, one or more applications 822, and program data 824.
  • Application 822 includes an echo cancellation processing algorithm 823 that is arranged to remove residual echo from an error signal.
  • Program Data 824 includes echo cancellation routing data 825 that is useful for removing residual echo from an error signal, as will be further described below.
  • application 822 can be arranged to operate with program data 824 on an operating system 821 such that residual echo is removed from an error signal. This described basic configuration is illustrated in Fig. 8 by those components within dashed line 801.
  • Computing device 800 can have additional features or functionality, and additional interfaces to facilitate communications between the basic configuration 801 and any required devices and interfaces.
  • a bus/interface controller 840 can be used to facilitate communications between the basic configuration 801 and one or more data storage devices 850 via a storage interface bus 841.
  • the data storage devices 850 can be removable storage devices 851, non-removable storage devices 852, or a combination thereof.
  • Examples of removable storage and non-removable storage devices include magnetic disk devices such as flexible disk drives and hard-disk drives (HDD), optical disk drives such as compact disk (CD) drives or digital versatile disk (DVD) drives, solid state drives (SSD), and tape drives to name a few.
  • Example computer storage media can include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data.
  • System memory 820, removable storage 851 and non-removable storage 852 are all examples of computer storage media.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 800. Any such computer storage media can be part of device 800.
  • Computing device 800 can also include an interface bus 842 for facilitating communication from various interface devices (e.g., output interfaces, peripheral interfaces, and communication interfaces) to the basic configuration 801 via the bus/interface controller 840.
  • Example output devices 860 include a graphics processing unit 861 and an audio processing unit 862, which can be configured to communicate to various external devices such as a display or speakers via one or more A/V ports 863.
  • Example peripheral interfaces 870 include a serial interface controller 871 or a parallel interface controller 872, which can be configured to communicate with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device, etc.) or other peripheral devices (e.g., printer, scanner, etc.) via one or more I/O ports 873.
  • An example communication device 880 includes a network controller 881, which can be arranged to facilitate communications with one or more other computing devices 890 over a network communication via one or more communication ports 882.
  • the communication connection is one example of a communication media.
  • Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media.
  • a "modulated data signal" can be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media can include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared (IR) and other wireless media.
  • RF radio frequency
  • IR infrared
  • the term computer readable media as used herein can include both storage media and communication media.
  • Computing device 800 can be implemented as a portion of a small-form-factor portable (or mobile) electronic device such as a cell phone, a personal data assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset device, an application-specific device, or a hybrid device that includes any of the above functions.
  • Computing device 800 can also be implemented as a personal computer including both laptop computer and non-laptop computer configurations.
  • ASICs Application Specific Integrated Circuits
  • FPGAs Field Programmable Gate Arrays
  • DSPs digital signal processors
  • Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
  • a typical data processing system generally includes one or more of a system unit housing, a video display device, a memory such as volatile and non-volatile memory, processors such as microprocessors and digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices, such as a touch pad or screen, and/or control systems including feedback loops and control motors (e.g., feedback for sensing position and/or velocity; control motors for moving and/or adjusting components and/or quantities).
  • a typical data processing system may be implemented utilizing any suitable commercially available components, such as those typically found in data computing/communication and/or network computing/communication systems.
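
To make the sequence of NLP operations listed above easier to follow (smoothed PSDs and cross-PSDs, per-band coherence, suppression factors, overdrive toward a target suppression level, and comfort noise), the following Python sketch strings the steps together. It is only a schematic reading of the description: the smoothing factor, target suppression level, noise shaping and the exact way the two coherence measures are combined are placeholder assumptions rather than the patent's precise rules, and the echo-state flag is assumed to be maintained elsewhere.

```python
import numpy as np

GAMMA = 0.93                 # PSD smoothing factor (placeholder value)
TARGET_SUPPRESSION = 0.05    # target minimum suppression factor (placeholder)

def init_state(num_bands):
    """Per-band smoothed (cross-)PSDs plus NLP bookkeeping."""
    return {'Sx': np.zeros(num_bands), 'Sd': np.zeros(num_bands),
            'Se': np.zeros(num_bands),
            'Sxd': np.zeros(num_bands, dtype=complex),
            'Sde': np.zeros(num_bands, dtype=complex),
            'echo_state': False, 's_min': 1.0}

def smooth(prev, a, b=None):
    """Exponentially smoothed (cross-)PSD: S <- g*S + (1-g)*a*conj(b)."""
    b = a if b is None else b
    return GAMMA * prev + (1.0 - GAMMA) * a * np.conj(b)

def nlp_block(Xk, Dk, Ek, state):
    """One NLP block: returns the suppressed output spectrum Y_k."""
    eps = 1e-10
    # 1) Smoothed PSDs and cross-PSDs, one value per band (DFT coefficient)
    state['Sx'] = smooth(state['Sx'], Xk).real
    state['Sd'] = smooth(state['Sd'], Dk).real
    state['Se'] = smooth(state['Se'], Ek).real
    state['Sxd'] = smooth(state['Sxd'], Xk, Dk)
    state['Sde'] = smooth(state['Sde'], Dk, Ek)

    # 2) Magnitude-squared coherences, each in [0, 1]
    c_xd = np.abs(state['Sxd'])**2 / (state['Sx'] * state['Sd'] + eps)
    c_de = np.abs(state['Sde'])**2 / (state['Sd'] * state['Se'] + eps)

    # 3) Per-band suppression factors (0 = suppress, 1 = pass). In the echo
    #    state the more conservative of the two measures is used; this exact
    #    combination is one plausible interpretation, not the patent's rule.
    if state['echo_state']:
        s = np.minimum(1.0 - c_xd, c_de)
    else:
        s = c_de

    # 4) Track the minimum factor and overdrive (raise to a power) so that
    #    the minimum approaches the target level and valleys are accentuated.
    state['s_min'] = 0.9 * state['s_min'] + 0.1 * max(float(np.min(s)), 1e-3)
    tracked = min(max(state['s_min'], 1e-3), 0.99)
    overdrive = max(1.0, np.log(TARGET_SUPPRESSION) / np.log(tracked))
    s = np.clip(s, 0.0, 1.0) ** overdrive

    # 5) Apply suppression and add shaped comfort noise in suppressed bands
    noise_shape = 0.1 * np.sqrt(state['Se'])               # placeholder noise level
    phase = np.exp(2j * np.pi * np.random.rand(len(Ek)))   # unit-circle vector
    return s * Ek + (1.0 - s) * noise_shape * phase
```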

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Cable Transmission Systems, Equalization Of Radio And Reduction Of Echo (AREA)
  • Filters That Use Time-Delay Elements (AREA)

Abstract

A method and system for non-linear post processing of an audio signal for acoustic echo cancellation is disclosed. The system includes a non-linear processor (NLP) (104) that receives, as input, at least two of the following signals: a far-end signal to be rendered and a plurality of capture-side signals. The NLP (104) first computes, for each frequency band, one or more coherence measures between the received signals and derives suppression factors corresponding to each band based on the one or more coherence measures. The NLP (104) also applies the suppression factors to one of the capture-side signals to substantially remove echo from the capture-side signal.

Description

NON-LINEAR POST-PROCESSING FOR ACOUSTIC ECHO CANCELLATION
Technical Field of the Invention
[0001] The present invention relates generally to a method and system for cancellation of echoes in telecommunication systems. It particularly relates to a method and system for removing residual echo from an error signal by non-linear post processing of the error signal.
Background of the Invention
[0002] Speech quality is an important factor for telephony system suppliers. Customer demand makes it vital to strive for continuous improvements. An echo, which is a delayed version of what was originally transmitted, is regarded as a severe distraction to the speaker if the delay is long. For short round trip delays of less than approximately 20 ms, the speaker will not be able to distinguish the echo from the side tone in the handset. However, for long-distance communications, such as satellite communications, a remotely generated echo signal often has a substantial delay. Moreover, the speech and channel coding compulsory in digital radio communications systems and for telephony over the Internet protocol (IP telephony, for short) also result in significant delays which make the echoes generated a relatively short distance away clearly audible to the speaker. Hence, canceling the echo is a significant factor in maintaining speech quality.
[0003] An echo canceller typically includes a linear filtering part which essentially is an adaptive filter that tries to adapt to the echo path. In this way, a replica of the echo can be produced from the far-end signal and subtracted from the near-end signal, thereby canceling the echo.
[0004] The filter generating the echo replica may have a finite or infinite impulse response. Most commonly it is an adaptive, linear finite impulse response (FIR) filter with a number of delay lines and a corresponding number of coefficients, or filter delay taps. The coefficients are values, which when multiplied with delayed versions of the filter input signal, generate an estimate of the echo. The filter is adapted, i.e. updated, so that the coefficients converge to optimum values. A traditional way to cancel out the echo is to update a finite impulse response (FIR) filter using the normalized least mean square (NLMS) algorithm.
[0005] Conventionally, the acoustic echo canceller (AEC) employs the linear filter as a first stage to model the system impulse response. An estimated echo signal is obtained by filtering the far-end signal. This estimated echo signal is then subtracted from the near-end signal to cancel the echo. A problem, however, is that some audible echo will generally remain in the residual error signal after this first stage. A second stage post-processor needs to be applied to remove the residual echo.
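As a point of reference for the NLMS approach mentioned above, a minimal time-domain sketch is shown below. It is not the implementation described later in this document (which operates on blocks in the frequency domain); the filter length, step size mu and regularization eps are arbitrary example values.

```python
import numpy as np

def nlms_echo_canceller(far_end, near_end, num_taps=128, mu=0.5, eps=1e-6):
    """Time-domain NLMS echo canceller sketch (illustrative only).

    far_end:  samples driving the loudspeaker (adaptive filter input)
    near_end: captured samples containing the echo (desired signal)
    Assumes both signals have the same length. Returns the error signal,
    i.e. the near-end signal with the echo estimate removed.
    """
    w = np.zeros(num_taps)       # adaptive FIR coefficients (filter delay taps)
    x_buf = np.zeros(num_taps)   # delay line of recent far-end samples
    error = np.zeros(len(near_end))

    for n in range(len(near_end)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = far_end[n]
        echo_estimate = np.dot(w, x_buf)     # replica of the echo
        e = near_end[n] - echo_estimate      # subtract replica from near-end
        error[n] = e
        # Normalized LMS update: step size scaled by the input power
        w += (mu / (np.dot(x_buf, x_buf) + eps)) * e * x_buf
    return error
```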
Summary of the Invention
[0006] This Summary introduces a selection of concepts in a simplified form in order to provide a basic understanding of some aspects of the present disclosure. This Summary is not an extensive overview of the disclosure, and is not intended to identify key or critical elements of the disclosure or to delineate the scope of the disclosure. This Summary merely presents some of the concepts of the disclosure as a prelude to the Detailed Description provided below.
[0007] According to an aspect of the present invention, a method for non-linear post processing of an audio signal for acoustic echo cancellation is disclosed. The method includes receiving as input, by a non-linear processor, at least two of the following signals: a far-end signal to be rendered and a plurality of capture-side signals, transforming the received signals to the frequency domain, and computing, for each frequency band, one or more coherence measures between the received signals. The method also includes deriving suppression factors corresponding to each band based on the one or more coherence measures and applying the suppression factors to one of the capture-side signals to substantially remove echo from the capture-side signal.
[0008] According to a further aspect of the present invention, the plurality of capture-side signals include a near-end captured signal and an error signal containing a residual echo output from a linear adaptive filter.
[0009] According to another aspect of the present invention, the method includes tracking the coherence measures over a predetermined amount of time to determine whether the near-end signal is in a "no-echo-state" or in an "echo-state".
[0010] According to yet another aspect of the present invention, the computing step further includes computing, for each frequency band, a first coherence measure between the far-end signal and the near-end signal; a second coherence measure between the near-end signal and the error signal; and applying the first and second coherence measures to compute the suppression factors.
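A simple way to realize the state tracking described in the two paragraphs above is sketched here. The threshold, hold time and band averaging are hypothetical values, and the sketch assumes that persistently high far-end/near-end coherence indicates the presence of echo.

```python
import numpy as np

class EchoStateTracker:
    """Tracks a coherence measure over blocks to decide echo-state/no-echo-state.

    Sketch only: `threshold` and `hold_blocks` are hypothetical parameters.
    """
    def __init__(self, threshold=0.6, hold_blocks=25):
        self.threshold = threshold
        self.hold_blocks = hold_blocks   # "predetermined amount of time" in blocks
        self.count = 0
        self.echo_state = False

    def update(self, coherence_xd):
        """coherence_xd: per-band far-end/near-end coherence for one block."""
        level = float(np.mean(coherence_xd))
        if level > self.threshold:       # high coherence: near-end dominated by echo
            self.count = min(self.count + 1, self.hold_blocks)
        else:
            self.count = max(self.count - 1, 0)
        self.echo_state = self.count >= self.hold_blocks // 2
        return self.echo_state
```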
[0011] According to a further aspect of the present invention, the suppression factors are directly proportional to a combination of the coherence measures.
[0012] According to another aspect of the present invention, the suppression factors are directly proportional to one of the first coherence measure and the second coherence measure when the near-end signal is in the "no echo state".
[0013] According to yet another aspect of the present invention, the suppression factors are directly proportional to a minimum of the first and second coherence measures when the near-end signal is in the "echo state".
[0014] In accordance with an aspect of the present invention, the first coherence measure is a frequency-domain analog to time-domain correlation between the far-end signal and the near-end signal.
[0015] According to another aspect of the present invention, the second coherence measure is a frequency-domain analog to time-domain correlation between the near-end signal and the error signal.
[0016] In addition, according to an aspect of the present invention, the method further includes applying suppression factors to the error signal to substantially remove the residual echo from the error signal.
[0017] According to an aspect of the present invention, the method further includes detecting filter divergence by comparing the energy of the error signal and the near-end signal and applying the suppression factors to the near-end signal based on detected filter divergence.
[0018] According to a further aspect of the present invention, the method also includes accentuating valleys in the suppression factors by raising to a power.
[0019] According to yet another aspect of the present invention, the method includes weighting the suppression factors with a curve configured to influence less accurate bands.
[0020] In addition, according to an aspect of the present invention, the method includes tracking a minimum suppression factor and scaling the suppression factors such that the minimum approaches a target value.
[0021] In accordance with another aspect of the present invention, the method includes transforming the far-end signal, the near-end signal, and the error signal to the frequency domain.
[0022] According to a further aspect of the present invention, the frequency bands correspond to individual Discrete Fourier Transform (DFT) coefficients.
[0023] According to another aspect of the present invention, a system for non-linear postprocessing of an audio signal for acoustic echo cancellation is disclosed. The system includes a non-linear processor and a transform unit. The non-linear processor receives, as input, at least two of the following signals: a far-end signal to be rendered and a plurality of capture-side signals. The transform unit transforms the received signals to the frequency domain. The non-linear processor is configured to: compute, for each frequency band, one or more coherence measures between the received signals, derive suppression factors corresponding to each band based on the one or more coherence measures, and apply the suppression factors to one of the capture-side signals to substantially remove echo from the capture-side signal.
[0024] According to a further aspect of the present invention, the non-linear processor is configured to track the coherence measures over a predetermined amount of time to determine whether the near-end signal is in a "no-echo-state" or in an "echo-state".
[0025] According to yet another aspect of the present invention, the non-linear processor is configured to compute, for each frequency band, a first coherence measure between the far-end signal and the near-end signal; a second coherence measure between the near-end signal and the error signal; and apply the first and second coherence measures to compute the suppression factors.
[0026] In addition, according to another aspect of the present invention, the non-linear processor is configured to apply suppression factors to the error signal to substantially remove the residual echo from the error signal.
[0027] In accordance with another aspect of the present invention, the non-linear processor is configured to detect a filter divergence by comparing energy of the error signal and the near-end signal and apply the suppression factors to the near-end signal based on the detected filter divergence.
[0028] According to a further aspect of the present invention, the non-linear processor is configured to accentuate valleys in the suppression factors by raising to a power.
[0029] According to yet another aspect of the present invention, the non-linear processor is configured to weight the suppression factors with a curve configured to influence less accurate bands.
[0030] According to an aspect of the present invention, the non-linear processor is configured to track a minimum suppression factor and scale the suppression factors such that the minimum approaches a target value.
[0031] According to a further aspect of the present invention, the transform unit is configured to transform the far-end signal, the near-end signal, and the error signal to the frequency domain.
[0032] In addition, according to an aspect of the present invention, the frequency bands correspond to individual Discrete Fourier Transform (DFT) coefficients.
[0033] According to an aspect of the present invention, a computer-readable storage medium having stored thereon computer executable program for non-linear post-processing of an audio signal for acoustic echo cancellation is disclosed. The computer program when executed causes a processor to execute the steps of: receiving as input, by a non-linear processor, at least two of the following signals: a far-end signal to be rendered and a plurality of capture-side signals; transforming the received signals to the frequency domain; computing, for each frequency band, one or more coherence measures between the received signals; deriving suppression factors corresponding to each band based on the one or more coherence measures; and applying the suppression factors to one of the capture-side signals to substantially remove echo from the capture-side signal.
[0034] According to a further aspect of the present invention, the computer program when executed causes the processor to further execute the step of tracking the coherence measures over a predetermined amount of time to determine whether the near-end signal is in a "no-echo-state" or in an "echo-state".
[0035] In accordance with an aspect of the present invention, the computer program when executed causes the processor to further execute the steps of computing, for each frequency band, a first coherence measure between the far-end signal and the near-end signal, a second coherence measure between the near-end signal and the error signal, and applying the first and second coherence measures to compute the suppression factors.
[0036] According to a further aspect of the present invention, the computer program when executed causes the processor to further execute the step of applying suppression factors to the error signal to substantially remove the residual echo from the error signal.
[0037] According to a yet another aspect of the present invention, the computer program when executed causes the processor to further execute the steps of detecting filter divergence by comparing the energy of the error signal and the near-end signal and applying the suppression factors to the near-end signal based on detected filter divergence.
[0038] According to another aspect of the present invention, the computer program when executed causes the processor to further execute the step of accentuating valleys in the suppression factors by raising to a power.
[0039] According to a further aspect of the present invention, the computer program when executed causes the processor to execute the step of weighting the suppression factors with a curve configured to influence less accurate bands.
[0040] According to another aspect of the present invention, the computer program when executed causes the processor to further execute the step of tracking a minimum suppression factor and scaling the suppression factors such that the minimum approaches a target value.
[0041] According to yet another aspect of the present invention, the computer program when executed causes the processor to further execute the step of transforming the far-end signal, the near-end signal, and the error signal to the frequency domain.
Brief Description of the Drawings
[0042] The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate several embodiments of the invention and together with the description, serve to explain the principles of the invention.
[0043] Fig. 1 is a block diagram of an acoustic echo canceller in accordance with an embodiment of the present invention.
[0044] Fig. 2 illustrates a more detailed block diagram describing the functions that may be performed in the adaptive filter of Fig. 1 in accordance with an embodiment of the present invention.
[0045] Fig. 3 illustrates computational stages of the adaptive filter of Fig. 2 in accordance with an embodiment of the present invention.
[0046] Fig. 4 illustrates a more detailed block diagram describing block Gm in Fig. 3 in accordance with an embodiment of the present invention.
[0047] Fig. 5 illustrates a flow diagram describing computational stages of the nonlinear processor of Fig. 1 in accordance with an embodiment of the present invention.
[0048] Fig. 6 is a flow diagram illustrating operations performed by the acoustic echo canceller according to an embodiment of the present invention illustrated in Fig. 5.
[0049] Fig. 7 is a flow diagram illustrating operations performed by the acoustic echo canceller according to an embodiment of the present invention illustrated in Fig. 6.
[0050] Fig. 8 is a block diagram illustrating an exemplary computing device that is arranged for acoustic echo cancellation in accordance with an embodiment of the present invention.
Detailed Description
[0048] The following detailed description of the embodiments of the invention refers to the accompanying drawings. The following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims and equivalents thereof.
[0049] Fig. 1 illustrates an acoustic echo canceller (AEC) 100 in accordance with an exemplary embodiment of the present invention.
[0050] The AEC 100 is designed as a high quality echo canceller for voice and audio communication over packet switched networks. More specifically, the AEC 100 is designed to cancel acoustic echo 130 that emerges due to the reflection of sound waves emitted by a render device 10 off boundary surfaces and other objects back to a near-end capture device 20. The echo 130 may also exist due to the direct path from render device 10 to the capture device 20.
[0051] Render device 10 may be any of a variety of audio output devices, including a loudspeaker or group of loudspeakers configured to output sound from one or more channels. Capture device 20 may be any of a variety of audio input devices, such as one or more microphones configured to capture sound and generate input signals. For example, render device 10 and capture device 20 may be hardware devices internal to a computer system, or external peripheral devices connected to a computer system via wired and/or wireless connections. In some arrangements, render device 10 and capture device 20 may be components of a single device, such as a microphone, telephone handset, etc. Additionally, one or both of render device 10 and capture device 20 may include analog-to-digital and/or digital-to-analog transformation functionalities.
[0052] With reference to Fig. 1, the echo canceller 100 includes a linear filter 102, a nonlinear processor (NLP) 104, a far-end buffer 106, and a blocking buffer 108. A far-end signal 110 generated at the far-end and transmitted to the near-end is input to the filter 102 via the far-end buffer (FEBuf) 106 and the blocking buffer 108. The far-end signal 110 is also input to a play-out buffer 112 located near the render device 10. The output signal 116 of the far-end buffer 106 is input to the blocking buffer 108 and the output signal 118 of the blocking buffer is input to the linear filter 102.
[0053] The far-end buffer 106 is configured to compensate for and synchronize to buffering at sound devices (not shown). The blocking buffer 108 is configured to block the signal samples for a frequency-domain transformation to be performed by the linear filter 102 and the NLP 104.
[0054] The linear filter 102 is an adaptive filter. Linear filter 102 operates in the frequency domain through, e.g., the Discrete Fourier Transform (DFT). The DFT may be implemented as a Fast Fourier Transform (FFT).
[0055] The other input to the filter 102 is the near-end signal (Sin) 122 from the capture device 20 via a recording buffer 114. The near-end signal 122 includes near-end speech 120 and the echo 130. The NLP 104 receives three signals as input. It receives (1) the far-end signal via the far-end buffer 106 and blocking buffer 108, (2) the near-end signal via the recording buffer 114, and (3) the output signal 124 of the filter 102. The output signal 124 is also referred to as an error signal. In a case when the NLP 104 attenuates the output signal 124, a comfort noise signal is generated which will be explained later.
[0056] According to an exemplary embodiment, each frame is divided into 64 sample blocks. Since this choice of block size does not produce an integer number of blocks per frame the signal needs to be buffered before the processing. This buffering is handled by the blocking buffer 108 as discussed above. Both the filter 102 and the NLP 104 operate in the frequency domain and utilize DFTs of 128 samples.
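For illustration, the block buffering described above can be sketched as follows. This is a minimal Python/NumPy sketch rather than the patented implementation; the class name BlockingBuffer is illustrative, and the frame length (for example 80 samples, i.e. 10 ms at 8 kHz) is an assumption used only to show why buffering is needed when the frame is not a multiple of 64 samples.

    import numpy as np

    class BlockingBuffer:
        """Accumulate incoming frames and hand out fixed 64-sample blocks."""
        def __init__(self, block_len=64):
            self.block_len = block_len
            self.buf = np.zeros(0)

        def push_frame(self, frame):
            # Append one frame (e.g. 80 samples) to the internal FIFO.
            self.buf = np.concatenate([self.buf, np.asarray(frame, dtype=float)])

        def pop_blocks(self):
            # Return as many complete 64-sample blocks as are available.
            blocks = []
            while self.buf.size >= self.block_len:
                blocks.append(self.buf[:self.block_len].copy())
                self.buf = self.buf[self.block_len:]
            return blocks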
[0057] The performance of the AEC 100 is influenced by the operation of the play-out buffer 112 and the recording buffer 114 at the sound device. The AEC 100 may not start unless the combined size of the play-out buffer 112 and the recording buffer 114 is reasonably stable within a predetermined limit. For example, if the combined size is stable within +/- 8 ms of its initial size for four consecutive frames, the AEC 100 is started by filling up the internal far-end buffer 106.
[0058] Fig. 2 illustrates a more detailed block diagram describing the functions performed in the filter 102 of Fig. 1. Fig. 3 illustrates computational stages of the filter 102 in accordance with an embodiment of the present invention.
[0059] With reference to Fig. 2, the adaptive filter 102 includes a first transform section 200, an inverse transform section 202, a second transform section 204, and an impulse response section (H) 206. The far-end signal x(n) 210 to be rendered at the render device 10 is input to the first transform section 200. The output signal X(n, k) of the first transform section 200 is input to the impulse response section 206. The output signal Y(n, k) is input to the inverse transform section 202, which outputs the signal y(n). This signal y(n) is then subtracted from the near-end signal d(n) 220 captured by the capture device 20 to produce an error signal e(n) 230 as the output of the linear stage of the filter 102. The error signal 230 is also input to the second transform section 204, the output signal of which, E(n, k), is also input to the impulse response section 206.
[0060] The above-mentioned adaptive filtering approach relates to an implementation of a standard blocked time-domain Least Mean Square (LMS) algorithm. According to an embodiment of the invention, the complexity reduction is due to the filtering and the correlations being performed in the frequency domain, where time-domain convolution is replaced by multiplication. The error is formed in the time domain and is transformed to the frequency domain for updating the filter 102 as illustrated in Fig. 2.
[0061] There is a signal delay in the system due to the transform blocking. To reduce this delay, the filter 102 is partitioned into smaller segments, and by overlap-save processing the overall delay is kept to the segment length. This method is referred to as the partitioned block frequency domain method or the multi-delay partitioned block frequency adaptive filter. For simplicity it is referred to here as FLMS.
[0062] The operation of the FLMS method is illustrated in Fig. 3. Fig. 4 illustrates a more detailed block diagram describing block Gm in the FLMS method of Fig. 3 in accordance with an embodiment of the present invention.
[0063] With a total filter length L = M · N partitioned into blocks of N samples and with F the 2N x 2N Discrete Fourier Transform (DFT) matrix, the time domain impulse response of the filter 102, w(n), n = 0, 1, ..., L - 1, can be expressed in the frequency domain as a collection of partitioned filters

$$ W_m(k) = F \begin{bmatrix} I_N \\ 0_N \end{bmatrix} w_m(k), \qquad m = 0, 1, \ldots, M - 1, \tag{1} $$

where $w_m(k) = [w_{mN} \; \ldots \; w_{(m+1)N-1}]^T$, $I_N$ is an N x N identity matrix, and $0_N$ is an N x N zero matrix. This means that the time domain vector is appended with N zeros before the Fourier transform.
[0064] The time domain filter coefficients w(n) are not utilized in the algorithm; equation (1) is presented to establish the relation between the time- and frequency-domain coefficients.
[0065] As illustrated in Fig. 3, the far-end samples, x(n) 310, are blocked into vectors of 2N samples, i.e. two blocks, at step S312,

$$ x(k - m) = [x((k - m - 2)N) \; \ldots \; x((k - m)N - 1)]^T, $$

and transformed into a sequence of DFT vectors at step S314,

$$ X(k - m) = \mathrm{diag}\big(F x(k - m)\big). $$
[0066] This is implemented as a table of delayed DFT vectors, since the diagonal matrix can also be expressed as X(k - m) = D^m X(k), where D is a delay operator. For each delayed block, filtering is performed as the multiplication of the diagonal matrix X(k - m) with a filter partition
$$ Y_m(k) = X(k - m) W_m(k), \qquad m = 0, 1, \ldots, M - 1. $$
[0067] The estimated echo signal is then obtained as the N last coefficients of the inverse transformed sum of the filter products performed at step S320, from which the first block is discarded at step S322. The estimated echo signal is represented as

$$ y(k) = [y((k-1)N) \; \ldots \; y(kN - 1)]^T = [0_N \; I_N] \, F^{-1} \sum_{m=0}^{M-1} Y_m(k). $$
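The partitioned filtering and the overlap-save echo estimate described by the preceding equations can be sketched in NumPy as follows. This is an illustrative sketch, not the patented code; the function name estimate_echo and the number of partitions are assumptions.

    import numpy as np

    N = 64          # block length
    M = 12          # number of filter partitions (illustrative)

    def estimate_echo(x_blocks, W):
        """x_blocks: list of at least M + 1 most recent far-end blocks (length N), newest last.
        W: (M, 2N) complex array of frequency-domain filter partitions.
        Returns the length-N time-domain echo estimate for the newest block."""
        Y = np.zeros(2 * N, dtype=complex)
        for m in range(M):
            # X(k - m): DFT of two consecutive far-end blocks, m blocks in the past.
            xk = np.concatenate([x_blocks[-(m + 2)], x_blocks[-(m + 1)]])
            Y += np.fft.fft(xk) * W[m]
        y = np.fft.ifft(Y).real
        return y[N:]    # keep the last N samples (overlap-save)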
[0068] The error is then formed in the time domain as

$$ e(k) = d(k) - y(k), $$

and this is also the output of the filter 102 of the AEC 100 as shown in Fig. 1. To adjust the filter coefficients, N zeros are inserted into the error vector at step S316, and the augmented vector is transformed at step S318 as

$$ E(k) = F \begin{bmatrix} I_N \\ 0_N \end{bmatrix} e(k). $$
[0069] Fig. 4 illustrates a more detailed block diagram describing block Gm in Fig. 3 in accordance with an embodiment of the present invention where the filter coefficient update can be expressed as
$$ W_m(k+1) = W_m(k) + \mu_0 F \begin{bmatrix} I_N & 0_N \\ 0_N & 0_N \end{bmatrix} F^{-1} X^*(k - m) B(k), $$

with a stepsize $\mu_0 = 0.5$, and where B(k), as shown in Fig. 4, is a modified error vector. The modification includes a power normalization followed by a magnitude limiter 410. The normalized error vector, as also shown in Fig. 4, is

$$ A(k) = \Lambda(k) E(k), $$

where $\Lambda(k) = \mathrm{diag}\big([1/P_0 \; \ldots \; 1/P_{2N-1}]\big)$ is a diagonal step size matrix controlling the adjustment of each frequency component using power estimates

$$ P_j(k) = \lambda_P P_j(k-1) + (1 - \lambda_P) |X_j(k)|^2, \qquad j = 0, 1, \ldots, 2N - 1, $$

recursively calculated with a forgetting factor $\lambda_P = 0.9$ and individual DFT coefficients $X_j(k) = \{X(k)\}_{j,j}$. A(k) is input to the magnitude limiter 410. The component magnitudes are then limited to a constant maximum magnitude, $A_0 = 1.5 \times 10^{-6}$, into the vector B(k) with components

$$ B_j(k) = \begin{cases} A_j(k) & \text{if } |A_j(k)| \le A_0 \\ A_0 \, \dfrac{A_j(k)}{|A_j(k)|} & \text{otherwise.} \end{cases} $$
[0070] As illustrated in Fig. 4, the diagonal matrix X(k-m) is conjugated by the conjugate unit 420 which is then multiplied with vector B(k) prior to performing an inverse DFT transform by the Inverse Discrete Fourier Transform (IDFT) unit 430. Then the discard last block unit 440 discards the last block. After discarding the last block, a zero block is appended by the append zero block unit 450 prior to performing a DFT by the DFT unit 460. Then, a block delay is introduced by the delay unit 480 which outputs Wm(k).
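The operations of block Gm (power normalization, magnitude limiting, gradient constraint and partition update) can be illustrated with the following NumPy sketch. The function name update_partitions and the way the running power estimate is carried between calls are illustrative assumptions, not the patented implementation.

    import numpy as np

    mu0 = 0.5            # step size, as given above
    lambda_p = 0.9       # forgetting factor for the power estimates
    A0 = 1.5e-6          # magnitude limit, as given above

    def update_partitions(W, X_hist, E, P):
        """W: (M, 2N) complex filter partitions; X_hist: (M, 2N) delayed far-end
        DFT vectors with X_hist[m] = X(k - m); E: (2N,) error DFT; P: (2N,)
        running power estimate. Returns the updated (W, P)."""
        # Power normalization of the error (diagonal step-size matrix).
        P = lambda_p * P + (1.0 - lambda_p) * np.abs(X_hist[0]) ** 2
        A = E / np.maximum(P, 1e-10)
        # Magnitude limiter: clamp component magnitudes to A0.
        mag = np.abs(A)
        B = np.where(mag > A0, A0 * A / np.maximum(mag, 1e-10), A)
        M, two_N = W.shape
        N = two_N // 2
        for m in range(M):
            # Conjugate, multiply, inverse DFT, discard last block, zero-pad, DFT.
            grad = np.fft.ifft(np.conj(X_hist[m]) * B).real
            grad[N:] = 0.0
            W[m] = W[m] + mu0 * np.fft.fft(grad)
        return W, P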
[0071] Fig. 5 illustrates a flow diagram describing computational processes of the NLP 104 of Fig. 1 in accordance with an embodiment of the present invention.
[0072] The NLP 104 of the AEC 100 accepts three signals as input: i) the far-end signal x(n) 110 to be rendered by the render device 10, ii) the near-end signal d(n) 122 captured by the capture device 20, and iii) the output error signal e(n) 124 of the linear stage performed at the filter 102. The error signal e(n) 124 typically contains residual echo that should be removed for good performance. The objective of the NLP 104 is to remove this residual echo.
[0073] The first step is to transform all three input signals to the frequency domain. At step S501, the far-end signal 110 is transformed to the frequency domain. At step S501', the near-end signal 122 is transformed to the frequency domain and at step S501'', the error signal 124 is transformed to the frequency domain. The NLP 104 is block-based and shares the block length N of the linear stage, but uses an overlap-add method rather than overlap-save: consecutive blocks are concatenated, windowed and transformed. By defining $\circ$ as the element-wise product operator, the k-th transformed block is expressed as

$$ X_k = F \left( w \circ \begin{bmatrix} x_{k-1} \\ x_k \end{bmatrix} \right), $$

where F is the 2N DFT matrix as before, $x_k$ is a length-N time-domain sample column vector and $w$ is a length-2N square-root Hanning window column vector with entries

$$ w(n) = \sqrt{\tfrac{1}{2}\left(1 - \cos\tfrac{2\pi n}{2N}\right)}, \qquad n = 0, 1, \ldots, 2N - 1. $$
[0074] The window is chosen such that the overlapping segments satisfy $w^2(n) + w^2(n - N) = 1$, $n = N, N + 1, \ldots, 2N - 1$, to provide perfect reconstruction. According to an embodiment of the invention, the length 2N DFT vectors are retained. Preferably, however, the redundant N - 1 complex coefficients are discarded.
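A small NumPy sketch of the analysis windowing follows. The function name analysis is illustrative; the assertion simply checks the perfect-reconstruction property stated above.

    import numpy as np

    N = 64
    n = np.arange(2 * N)
    # Square-root Hanning analysis/synthesis window, as defined above.
    w = np.sqrt(0.5 * (1.0 - np.cos(2.0 * np.pi * n / (2 * N))))

    # The overlapping halves satisfy w^2(n) + w^2(n - N) = 1.
    assert np.allclose(w[N:] ** 2 + w[:N] ** 2, 1.0)

    def analysis(prev_block, cur_block):
        """Concatenate two consecutive length-N blocks, window and transform."""
        return np.fft.fft(w * np.concatenate([prev_block, cur_block]))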
[0075] $X_k$, $D_k$ and $E_k$ refer to the frequency-domain representations of the k-th far-end, near-end and error blocks, respectively.
[0076] According to a further embodiment of the invention, echo suppression is achieved by multiplying each frequency band of the error signal e(n) 124 with a suppression factor between 0 and 1. According to a preferred embodiment, each band corresponds to an individual DFT coefficient. In general, however, each band may correspond to an arbitrary range of frequencies. Comfort noise is added and after undergoing an inverse FFT, the suppressed signal is windowed, and overlapped and added with the previous block to obtain the output.
[0077] For analysis, the power spectral density (PSD) of each signal is obtained. At step S503, the PSD of the far-end signal x(n) 110 is computed. At step S503', the PSD of the near-end signal d(n) 122 is computed and at step S503'', the PSD of the error signal e(n) 124 is computed. The PSDs of the far-end signal 110, near-end signal 122, and the error signal 124 are represented by $S_x$, $S_d$, and $S_e$, respectively.
[0078] In addition, the complex-valued cross-PSDs between i) the far-end signal x(n) 110 and near-end signal d(n) 122, and ii) the near-end signal d(n) 122 and error signal e(n) 124 are also obtained. At step S504, the complex-valued cross-PSD between the far-end signal 110 and the near-end signal 122 is computed and at step S504', the complex-valued cross-PSD between the near-end signal 122 and the error signal 124 is computed. The complex-valued cross-PSD of the far-end signal 110 and near-end signal 122 is represented as $S_{xd}$. The complex-valued cross-PSD of the near-end signal 122 and error signal 124 is represented as $S_{de}$. The PSDs are exponentially smoothed to avoid sudden erroneous shifts in echo suppression. The PSDs are given by

$$ S_{X_k Y_k} = \lambda_s S_{X_{k-1} Y_{k-1}} + (1 - \lambda_s)\, X_k^* \circ Y_k, \qquad k > 0, \quad S_{X_0 Y_0} = 0_N, $$

where the "*" here represents the complex conjugate and $\lambda_s$ is an exponential smoothing factor.
[0079] Note that $X_k = Y_k$ for the "auto" PSDs, which are therefore real-valued, while the cross-PSDs are complex-valued.
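The exponential smoothing of the auto- and cross-PSDs can be sketched as follows. The value 0.9 for the smoothing factor is illustrative only, since the exact value of $\lambda_s$ is not reproduced here.

    import numpy as np

    lambda_s = 0.9   # exponential smoothing factor (illustrative value)

    def update_psd(S_prev, X, Y):
        """One-pole smoothing S_XY = lambda*S_prev + (1 - lambda)*conj(X)*Y.
        With X == Y this yields the real-valued auto-PSD."""
        return lambda_s * S_prev + (1.0 - lambda_s) * np.conj(X) * Y

    # usage: Sxx = update_psd(Sxx, Xk, Xk); Sxd = update_psd(Sxd, Xk, Dk); Sde = update_psd(Sde, Dk, Ek)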
[0080] Rather than using the current input far-end block, an old block is selected at step S505 to best synchronize it with the corresponding echo in the near-end. The index of the partition, m, with maximum energy in the linear filter is chosen as follows:

$$ d = \arg\max_m \big( \| W_m \|^2 \big). $$
[0081] This estimated delay index is used to select the best block at step S507 for use in the far-end PSDs. Additionally, the far-end auto-PSD is thresholded at step S509 in order to avoid numerical instability as follows:

$$ S_{X_k X_k} = \max(S_{X_k X_k}, S_0), \qquad S_0 = 15. $$
[0082] It is sometimes the case that the linear filter 102 diverges from a good echo path estimate. This tends to result in a highly distorted error signal, which although still useful for analysis, should not be used for output. According to an embodiment of the invention, divergence may be detected fairly easily, as it usually adds rather than removes energy from the near-end signal d(n) 122. The divergence state determined at step S511 is utilized to either select (S512) $E_k$ or $D_k$ as follows. If

$$ \| S_{E_k E_k} \|_1 > \| S_{D_k D_k} \|_1, $$

then the "diverge" state is entered, in which the effect of the linear stage is reversed by setting $E_k = D_k$. The diverge state is left if

$$ \sigma_0 \| S_{E_k E_k} \|_1 < \| S_{D_k D_k} \|_1, \qquad \sigma_0 = 1.05. $$

Furthermore, if divergence is very high, such as

$$ \| S_{E_k E_k} \|_1 > \sigma_1 \| S_{D_k D_k} \|_1, \qquad \sigma_1 = 19.95, $$

the linear filter 102 resets to its initial state

$$ W_m(k) = 0_N, \qquad m = 0, 1, \ldots, M - 1. $$
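A hedged sketch of this divergence logic, using the thresholds $\sigma_0$ and $\sigma_1$ given above; the function name check_divergence and the return convention are illustrative.

    import numpy as np

    sigma0 = 1.05
    sigma1 = 19.95

    def check_divergence(See, Sdd, diverged):
        """See, Sdd: auto-PSD vectors of the error and near-end signals.
        Returns (diverged, reset): diverged selects D_k instead of E_k,
        and reset requests a reset of the linear filter partitions."""
        e_energy = np.sum(np.abs(See))      # ||S_EE||_1
        d_energy = np.sum(np.abs(Sdd))      # ||S_DD||_1
        if e_energy > d_energy:
            diverged = True                 # use D_k in place of E_k
        elif sigma0 * e_energy < d_energy:
            diverged = False                # leave the diverge state
        reset = e_energy > sigma1 * d_energy   # very high divergence: reset filter
        return diverged, reset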
The PSDs are used to compute the coherence measures for each frequency band between i) the far-end signal 110 and near-end signal 122 at step S513 as follows:

$$ c_{xd} = \frac{S_{X_k D_k} \circ S_{X_k D_k}^*}{S_{X_k X_k} \circ S_{D_k D_k}}, $$

and ii) the near-end signal 122 and error signal 124 at step S515 as follows:

$$ c_{de} = \frac{S_{D_k E_k} \circ S_{D_k E_k}^*}{S_{D_k D_k} \circ S_{E_k E_k}}, $$

where the "*" here again represents the complex conjugate.
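The per-band coherence computation can be sketched as follows. The small eps term is an illustrative safeguard against division by zero and is not taken from the patent.

    import numpy as np

    def coherence(Sxy, Sxx, Syy, eps=1e-10):
        """Magnitude-squared coherence per frequency band,
        c(n) = |S_xy(n)|^2 / (S_xx(n) * S_yy(n)), bounded to [0, 1]."""
        c = (Sxy * np.conj(Sxy)).real / (Sxx.real * Syy.real + eps)
        return np.clip(c, 0.0, 1.0)

    # c_xd = coherence(Sxd, Sxx, Sdd); c_de = coherence(Sde, Sdd, See)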
[0083] Denote the entry in position n of a coherence vector c as c(n). Coherence is a frequency-domain analog to time-domain correlation. It is a measure of similarity with $0 \le c(n) \le 1$, where a higher coherence corresponds to more similarity.
[0084] The primary effect of the NLP 104 is achieved through directly suppressing the error signal 124 with the coherence measures. Generally speaking, the output is given by

$$ Y_k = E_k \circ c_{de}. $$

Under the assumption that the linear stage is working properly, $c_{de}(n) \approx 1$ when no echo has been removed, allowing the error to pass through unchanged. In the opposite case of the linear stage having removed echo, $1 \gg c_{de}(n) \ge 0$, resulting in a suppression of the error, ideally removing any residual echo remaining after the linear filtering by the filter 102 at the linear stage.
[0085] According to an embodiment of the invention, $c_{xd}$ is also considered to increase robustness, as described below, though $c_{de}$ tends to be more useful in practice. Contrary to $c_{de}$, $c_{xd}$ is relatively high when there is echo 130, and low otherwise. To have the two measures in the same "domain" a modified coherence is defined as follows: $c'_{xd} = 1 - c_{xd}$.
[0086] It is preferred that, to achieve high AEC performance, the echo 130 be suppressed while simultaneous near-end speech 120 is allowed to pass through. The NLP 104 is configured to achieve this because the coherence is calculated independently for each frequency band. Thus, bands containing echo are fully or partially suppressed, while bands free of echo are not affected.
[0087] According to an embodiment of the invention, several data analysis methods are used to tweak the coherence before it is applied as a suppression factor, s. First, the average coherence across a set of preferred bands is computed at step S517 for $c_{de}$, and at step S517' for $c'_{xd}$, as

$$ \bar{c} = \frac{1}{n_1 - n_0 + 1} \sum_{n=n_0}^{n_1} c(n), $$

where $n_0$ and $n_1$ delimit the preferred bands and depend on the sampling frequency $f_s$. The preferred bands were chosen from frequency regions most likely to be accurate across a range of scenarios.
[0088] At step S518, the system either selects $c_{de}$ or $c'_{xd}$. According to an exemplary embodiment, $\bar{c}'_{xd}$ is tracked over time to determine the broad state of the system at step S521. The purpose of this is to avoid suppression when the echo path is close to zero (e.g. during a call with a headset). First, a thresholded minimum of $\bar{c}'_{xd}$ is computed at step S519, with a step-size $\mu_0 = 0.0006\, m_\beta$ and a factor $m_\beta$ that takes one value when $f_s = 8000$ and another value otherwise.

[0089] This thresholded minimum is used to construct two decision variables: $u_c$ (defined for $k > 0$ with initial value $u_{c,0} = 0$), derived from the thresholded minimum, and

$$ u_{e,k} = \begin{cases} 0 & \text{if } \bar{c}'_{xd,k} = 1 \text{ or } u_{c,k} = 1 \\ 1 & \text{otherwise,} \end{cases} \qquad k > 0, \quad u_{e,0} = 0. $$
[0090] The system is considered in the "coherent state" when $u_c = 1$ and in the "echo state" when $u_e = 1$. In the echo state, the system may contain echo and otherwise does not contain echo. The echo state is provided through an interface for potential use by other audio processing components.
[0091] While in the echo state, the suppression factor s is computed at step S520 by selecting the minimum of $c_{de}$ and $c'_{xd}$ in each band as

$$ s = \min(c_{de}, c'_{xd}). $$
[0092] Two overall suppression factors are computed at steps S533 and S527 from order statistics across the preferred bands:

$$ \{s_h, s_l\} = s\big(\{n_h, n_l\}\big), \qquad \{n_h, n_l\} = \big\lfloor n_0 + \{0.5, 0.75\}\,(n_1 - n_0 + 1) \big\rfloor. $$
[0093] This approach of selecting suppression factors is more robust to outliers than the average, and allows tuning through the exact selection of the order statistic position.
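A sketch of the order-statistic selection of the two overall suppression factors follows. The function name overall_factors is illustrative, and sorting the preferred bands in ascending order is an assumption consistent with the order-statistic positions given above.

    import numpy as np

    def overall_factors(s, n0, n1):
        """Pick two overall suppression levels from order statistics of the
        per-band factors s over the preferred bands [n0, n1]."""
        pref = np.sort(s[n0:n1 + 1])
        width = n1 - n0 + 1
        s_h = pref[int(0.5 * width)]       # order statistic at the 0.5 position
        s_l = pref[int(0.75 * width)]      # order statistic at the 0.75 position
        return s_h, s_l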
[0094] While in the "no echo state" (i.e. $u_e = 0$), suppression is limited by selecting the suppression factors as follows at steps S520, S524 and S518:

$$ \{s, s_h, s_l\} = \begin{cases} \{c_{de}, \bar{c}_{de}, \bar{c}_{de}\} & \text{if } u_c = 1 \\ \{c'_{xd}, \bar{c}'_{xd}, \bar{c}'_{xd}\} & \text{otherwise.} \end{cases} $$
[0095] Across most scenarios, there is a typical suppression level required to reasonably remove all residual echo. This is considered to be the target suppression, $s_t$. A scalar "overdrive" is applied to s to weight the bands towards $s_t$. This improves performance in more difficult cases where the coherence measures are not accurate enough by themselves. The minimum $s_l$ level is computed at step S527 and tracked at step S529 over time as

$$ s'_{l,k} = \begin{cases} s_{l,k} & \text{if } s_{l,k} < s'_{l,k-1} \text{ and } s_{l,k} < 0.6 \\ \min\big(s'_{l,k-1} + \mu_s,\, 1\big) & \text{otherwise,} \end{cases} \qquad k > 0, $$

with a step-size $\mu_s = 0.0008\, m_\beta$.
[0096] When the minimum $s'_{l,k}$ is unchanged for two consecutive blocks, the overdrive $\gamma$ is set at step S531 such that applying it to the minimum will result in the target suppression level:

$$ \gamma_k = \frac{s_t}{\log\big(s'_{l,k}\big)}. $$

$\gamma$ is smoothed and thresholded as

$$ \tilde{\gamma}_k = \lambda_\gamma \tilde{\gamma}_{k-1} + (1 - \lambda_\gamma)\, \gamma_k, \qquad \lambda_\gamma = \begin{cases} 0.99 & \text{if } \gamma_k < \tilde{\gamma}_{k-1} \\ 0.9 & \text{otherwise,} \end{cases} $$

such that it will tend to move faster upwards than downwards. $s_t$ and $\gamma_0$ are configurable to control the suppression aggressiveness; by default they are set to -11.5 and 2, respectively. Additionally, when the system determines that no echo is present, the smoothed overdrive is reset to the minimum,

$$ \tilde{\gamma}_k = \gamma_0. $$
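A sketch of the overdrive computation and its asymmetric smoothing follows. The flooring of the overdrive at $\gamma_0$ and the small constant added inside the logarithm are illustrative assumptions.

    import numpy as np

    s_t = -11.5       # target suppression (log domain), default from the description
    gamma_min = 2.0   # minimum overdrive gamma_0, default from the description

    def update_overdrive(s_l_min, gamma_smoothed):
        """Set the overdrive so that raising the tracked minimum to the power
        gamma reaches the target level, then smooth it asymmetrically so it
        moves faster upwards than downwards."""
        gamma = max(s_t / np.log(s_l_min + 1e-10), gamma_min)
        lam = 0.99 if gamma < gamma_smoothed else 0.9
        return lam * gamma_smoothed + (1.0 - lam) * gamma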
[0097] The $s_h$ level is computed at step S533. Next, the final suppression factors $s_\gamma$ are produced according to the following algorithm. At step S525, s is first weighted towards $s_h$ according to a weighting vector $v_{SN}$ with components $0 \le v_{SN}(n) \le 1$:

$$ s(n) = \begin{cases} v_{SN}(n)\, s_h + \big(1 - v_{SN}(n)\big)\, s(n) & \text{if } s(n) > s_h \\ s(n) & \text{otherwise.} \end{cases} $$
[0098] The weighting is selected to influence typically less accurate bands more heavily. Applying the overdrive at step S535, the following is derived:

$$ s_\gamma(n) = s(n)^{\tilde{\gamma}_k\, v_{TN}(n)}, $$

where $v_{TN}$ is another weighting vector fulfilling a similar purpose as $v_{SN}$. Overdriving through raising to a power serves to accentuate valleys in $s_\gamma$. Finally, at step S536 the frequency-domain output block is given by
$$ Y_k = s_\gamma \circ E_k + N_k, $$

where $N_k$ is artificial noise. At step S537, an inverse transform is performed to obtain the output signal y(n). The suppression removes near-end noise as well as echo, resulting in an audible change in the noise level. This issue is mitigated by adding generated "comfort noise" to replace the lost noise. The generation of $N_k$ will be discussed in a later section below.
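A sketch of how the final suppression factors could be shaped and applied to form the output block; the function name shape_and_apply, its argument layout, and the noise argument (the comfort noise described below) are illustrative.

    import numpy as np

    def shape_and_apply(s, s_h, gamma_smoothed, v_sn, v_tn, E, noise):
        """Weight the per-band factors towards s_h, apply the overdrive as an
        exponent, and form the output block Y = s_gamma o E + comfort noise."""
        s = np.where(s > s_h, v_sn * s_h + (1.0 - v_sn) * s, s)
        s_gamma = s ** (gamma_smoothed * v_tn)
        return s_gamma * E + noise, s_gamma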
[0099] The overlap-add transformation is inverted to arrive at the length-N time-domain output signal $y_k$ as

$$ \tilde{y}_k = w \circ \big(F^{-1} Y_k\big), \qquad y_k = [I_N \; 0_N]\, \tilde{y}_k + y'_{k-1}, \qquad y'_k = [0_N \; I_N]\, \tilde{y}_k, \qquad k \ge 0, \quad y'_0 = 0_N. $$
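A sketch of the windowed overlap-add synthesis described by the preceding equation; the function name synthesize is illustrative.

    import numpy as np

    def synthesize(Y, w, out_tail):
        """Inverse transform the suppressed block, window it and overlap-add it
        with the stored second half of the previous block.
        Returns (length-N output block, new tail for the next block)."""
        N = len(Y) // 2
        y_full = w * np.fft.ifft(Y).real
        output = y_full[:N] + out_tail      # overlap-add with previous block
        return output, y_full[N:].copy()    # store the tail for the next block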
[00100] To generate comfort noise, a reliable estimate of the true near-end background noise is required. According to an embodiment of the invention, a minimum statistics method is utilized to generate the comfort noise. More specifically, at every block a modified minimum of the near-end PSD, $N_{d,k}(n)$, is computed for each band, with a step-size $\mu = 0.1$ and a ramp factor of 1.0002. The initial value $N_0(n)$ is set such that it will be greater than a reasonable noise power. The near-end PSD used here is very similar to that discussed above, but is instead computed from the un-windowed DFT coefficients of the linear filter 102 computed at the linear stage.
[00101] White noise may be produced by generating a random complex vector, $u_{2N}$, on the unit circle. This is shaped to match $N_{d,k}$ and weighted by the suppression levels to give the following comfort noise:

$$ N_k = N_{d,k} \circ u_{2N} \circ \sqrt{1 - s_\gamma \circ s_\gamma}. $$
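A sketch of this comfort noise generation; the use of NumPy's default random generator is an illustrative choice.

    import numpy as np

    rng = np.random.default_rng()

    def comfort_noise(noise_est, s_gamma):
        """Random phase on the unit circle, shaped by the noise estimate and
        weighted so that stronger suppression admits more comfort noise."""
        two_N = len(noise_est)
        u = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, two_N))
        weight = np.sqrt(np.maximum(1.0 - s_gamma * s_gamma, 0.0))
        return noise_est * u * weight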
[00102] Fig. 6 shows a flow diagram illustrating operations performed by the acoustic echo canceller 100 according to the exemplary aspect of the present invention. More specifically, according to an embodiment of the invention, Fig. 6 further describes the algorithms on how echo state and suppression factors are determined in the NLP 104 of the AEC 100 as described above with respect to Fig. 5.
[00103] As described earlier, both the coherence $c_{xd}$ between the far-end signal 110 and near-end signal 122 and the coherence $c_{de}$ between the near-end signal 122 and error signal 124 are tracked over time to determine the state of the AEC 100. Based on the determination of a high or a low coherence, the NLP 104 decides whether to enter or leave the coherent state.
[00104] First, a determination is made by the NLP 104 at step S601 whether the coherence is high and at S605 whether the coherence is low as described above with reference to Fig. 5. As mentioned earlier, coherence is a frequency-domain analog to time-domain correlation. More specifically, as mentioned above with reference to Fig. 5, coherence is a measure of similarity with $0 \le c(n) \le 1$, where a higher coherence corresponds to more similarity.
[00105] Accordingly, if the NLP 104 determines that the coherence is high at S601, the AEC 100 enters into the coherent state at step S603. If the NLP 104 determines that the coherence is low at S605, the AEC 100 leaves the coherent state at step S607. As mentioned with reference to Fig. 5, the AEC 100 is considered in the "coherent state" when $u_c = 1$ and in the "echo state" when $u_e = 1$.
[00106] According to an exemplary aspect of the invention, a determination is made by the NLP 104 at step S609 whether $\bar{c}'_{xd} = 1$. If the NLP 104 determines that $\bar{c}'_{xd} = 1$, the AEC 100 leaves the echo state at step S611. Then, a further determination is made by the NLP 104 at step S613 whether the AEC 100 is in the coherent state. If the NLP 104 determines that the AEC 100 is still in the coherent state, the following suppression factors are output by the NLP 104 at step S615:

$$ s = c_{de}, \qquad s_h = \bar{c}_{de}, \qquad s_l = \bar{c}_{de}. $$
[00107] At step S613, if the NLP 104 determines that the AEC 100 is not in the coherent state, the following suppression factors are output by the NLP 104 at step S621:

$$ s = c'_{xd}, \qquad s_h = \bar{c}'_{xd}, \qquad s_l = \bar{c}'_{xd}. $$
[00108] On the other hand, if at S609 the NLP 104 determines that $\bar{c}'_{xd}$ is not equal to 1, a further determination is made at S617 whether the AEC 100 is in the coherent state. As mentioned earlier, the AEC 100 is considered in the "coherent state" when $u_c = 1$. If the AEC 100 is in the coherent state, it leaves the echo state at step S619 and outputs the same suppression factors as output at step S621.
[00109] However, at S617, if the NLP 104 determines that the AEC 100 is not in the coherent state, the AEC 100 enters into the echo state at step S623 ($u_e = 1$) and the following suppression factors are output by the NLP 104 at step S625:

$$ s = \min(c'_{xd}, c_{de}), \qquad s_h = s(n_h), \qquad s_l = s(n_l). $$
[00110] According to an exemplary embodiment of the invention, the suppression factors may then be applied by the NLP 104 to the error signal 124 to substantially remove residual echo from the error signal 124.
[00111] Fig. 7 is a flow diagram illustrating operations performed by the AEC 100 according to an embodiment of the present invention illustrated in Fig. 1. More specifically, according to an embodiment of the invention, Fig. 7 further describes the algorithms on how to remove residual echo from the error signal 124 by utilizing the echo state information and suppression factors determined in the NLP 104 of the AEC 100 as described above with respect to Figs. 5 and 6.
[00112] At step S701, the NLP 104 receives as input the far-end signal 110 to be rendered, the near-end captured signal 122, and the error signal 124 containing a residual echo output from the linear adaptive filter 102. At step S703, the far-end signal 110, the near-end signal 122, and the error signal 124 are transformed into the frequency domain by the corresponding transform sections as described above with reference to Figs. 2-5. At step S705, for each frequency band, a first coherence measure is computed between the far-end signal 110 and the near-end signal 122 according to the algorithm as described above with reference to Fig. 5. At step S707, for each frequency band, a second coherence measure is computed between the near-end signal 122 and the error signal 124 according to the algorithm as described above with reference to Fig. 5. At step S709, suppression factors are derived corresponding to each band of frequencies. Finally, at step S711, the suppression factors are applied to the error signal 124 or to the near-end signal 122 to substantially remove echo from the error signal 124 or the near-end signal 122.
[00113] Fig. 8 is a block diagram illustrating an example computing device 800 that may be utilized to implement the AEC 100 including, but not limited to, the NLP 104, the filter 102, the far-end buffer 106, and the blocking buffer 108 as well as the processes illustrated in Figs. 3 and 5-7 in accordance with the present disclosure. In a very basic configuration 801, computing device 800 typically includes one or more processors 810 and system memory 820. A memory bus 830 can be used for communicating between the processor 810 and the system memory 820.
[00114] Depending on the desired configuration, processor 810 can be of any type including but not limited to a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. Processor 810 can include one or more levels of caching, such as a level one cache 811 and a level two cache 812, a processor core 813, and registers 814. The processor core 813 can include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP Core), or any combination thereof. A memory controller 815 can also be used with the processor 810, or in some implementations the memory controller 815 can be an internal part of the processor 810.
[00115] Depending on the desired configuration, the system memory 820 can be of any type including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.) or any combination thereof. System memory 820 typically includes an operating system 821, one or more applications 822, and program data 824. Application 822 includes an echo cancellation processing algorithm 823 that is arranged to remove residual echo from an error signal. Program data 824 includes echo cancellation routing data 825 that is useful for removing residual echo from an error signal, as will be further described below. In some embodiments, application 822 can be arranged to operate with program data 824 on an operating system 821 such that residual echo from an error signal is removed. This described basic configuration is illustrated in Fig. 8 by those components within dashed line 801.
[00116] Computing device 800 can have additional features or functionality, and additional interfaces to facilitate communications between the basic configuration 801 and any required devices and interfaces. For example, a bus/interface controller 840 can be used to facilitate communications between the basic configuration 801 and one or more data storage devices 850 via a storage interface bus 841. The data storage devices 850 can be removable storage devices 851 , non-removable storage devices 852, or a combination thereof. Examples of removable storage and non-removable storage devices include magnetic disk devices such as flexible disk drives and hard-disk drives (HDD), optical disk drives such as compact disk (CD) drives or digital versatile disk (DVD) drives, solid state drives (SSD), and tape drives to name a few. Example computer storage media can include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data.
[00117] System memory 820, removable storage 851 and non-removable storage 852 are all examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 800. Any such computer storage media can be part of device 800.
[00118] Computing device 800 can also include an interface bus 842 for facilitating communication from various interface devices (e.g., output interfaces, peripheral interfaces, and communication interfaces) to the basic configuration 801 via the bus/interface controller 840. Example output devices 860 include a graphics processing unit 861 and an audio processing unit 862, which can be configured to communicate to various external devices such as a display or speakers via one or more A/V ports 863. Example peripheral interfaces 870 include a serial interface controller 871 or a parallel interface controller 872, which can be configured to communicate with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device, etc.) or other peripheral devices (e.g., printer, scanner, etc.) via one or more I/O ports 873. An example communication device 880 includes a network controller 881 , which can be arranged to facilitate communications with one or more other computing devices 890 over a network communication via one or more communication ports 882. The communication connection is one example of a communication media. Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media. A "modulated data signal" can be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media can include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared (IR) and other wireless media. The term computer readable media as used herein can include both storage media and communication media.
[00119] Computing device 800 can be implemented as a portion of a small-form factor portable (or mobile) electronic device such as a cell phone, a personal data assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset device, an application specific device, or a hybrid device that include any of the above functions. Computing device 800 can also be implemented as a personal computer including both laptop computer and non-laptop computer configurations. [00120] There is little distinction left between hardware and software implementations of aspects of systems; the use of hardware or software is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost vs. efficiency tradeoffs. There are various vehicles by which processes and/or systems and/or other technologies described herein can be effected (e.g., hardware, software, and/or firmware), and that the preferred vehicle will vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware.
[00121] The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof.
[00122] In one embodiment, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one skilled in the art in light of this disclosure.
[00123] In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal bearing medium used to actually carry out the distribution. Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
[00124] Those skilled in the art will recognize that it is common within the art to describe devices and/or processes in the fashion set forth herein, and thereafter use engineering practices to integrate such described devices and/or processes into data processing systems. That is, at least a portion of the devices and/or processes described herein can be integrated into a data processing system via a reasonable amount of experimentation.
[00125] Those having skill in the art will recognize that a typical data processing system generally includes one or more of a system unit housing, a video display device, a memory such as volatile and non-volatile memory, processors such as microprocessors and digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices, such as a touch pad or screen, and/or control systems including feedback loops and control motors (e.g., feedback for sensing position and/or velocity; control motors for moving and/or adjusting components and/or quantities). A typical data processing system may be implemented utilizing any suitable commercially available components, such as those typically found in data computing/communication and/or network computing/communication systems.
[00126] With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
[00127] While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.

Claims

What is claimed is:
1. A method for non-linear post processing of an audio signal for acoustic echo cancellation, comprising:
receiving as input, by a non-linear processor, at least two of the following signals: a far-end signal to be rendered and a plurality of capture-side signals;
transforming the received signals to the frequency domain;
computing, for each frequency band, one or more coherence measures between the received signals;
deriving suppression factors corresponding to each band based on said one or more coherence measures; and
applying said suppression factors to one of said capture-side signals to substantially remove echo from said capture-side signal.
2. The method according to claim 1, wherein said plurality of capture-side signals include a near-end captured signal and an error signal containing a residual echo output from a linear adaptive filter.
3. The method according to claim 2, further comprising: tracking the coherence measures over a predetermined amount of time to determine whether the near-end signal is in a "no echo state" or in an "echo state".
4. The method according to any of claims 2-3, wherein said computing step further comprises: computing, for each frequency band, a first coherence measure between the far-end signal and the near-end signal and a second coherence measure between the near-end signal and the error signal; and applying said first and second coherence measures to compute the suppression factors.
5. The method according to claim 4, wherein said suppression factors are directly proportional to a combination of said coherence measures.
6. The method according to any of claims 3-5, wherein said suppression factors are directly proportional to one of the first coherence measure and the second coherence measure when the near-end signal is in the "no echo state".
7. The method according to any of claims 3-6, wherein said suppression factors are directly proportional to a minimum of the first and second coherence measures when the near-end signal is in the "echo state".
8. The method according to any of claims 4-7, wherein the first coherence measure is a frequency-domain analog to time-domain correlation between the far-end signal and the near-end signal.
9. The method according to any of claims 4-8, wherein the second coherence measure is a frequency-domain analog to time-domain correlation between the near-end signal and the error signal.
10. The method according to any of claims 2-9, wherein said applying step applies suppression factors to the error signal to substantially remove the residual echo from the error signal.
11. The method according to any of claims 2-9, further comprising:
detecting filter divergence by comparing the energy of the error signal and the near-end signal and applying the suppression factors to the near-end signal based on detected filter divergence.
12. The method according to any of claims 1-11, further comprising: accentuating valleys in the suppression factors by raising to a power.
13. The method according to any of claims 1-12, further comprising: weighting the suppression factors with a curve configured to influence less accurate bands.
14. The method according to any of claims 1-13, further comprising: tracking a minimum suppression factor and scaling the suppression factors such that the minimum approaches a target value.
15. The method according to any of claims 2-14, further comprising: transforming the far-end signal, the near-end signal, and the error signal to the frequency domain.
16. The method according to any of claims 1-15, wherein said frequency bands correspond to individual Discrete Fourier Transform (DFT) coefficients.
17. A system for non-linear post processing of an audio signal for acoustic echo cancellation, comprising:
a non-linear processor that receives, as input, at least two of the following signals: a far-end signal to be rendered and a plurality of capture-side signals; and
a transform unit operatively connected to said non-linear processor, said transform unit transforming the received signals to the frequency domain;
wherein said non-linear processor is configured to:
compute, for each frequency band, one or more coherence measures between the received signals;
derive suppression factors corresponding to each band based on said one or more coherence measures; and
apply said suppression factors to one of said capture-side signals to substantially remove echo from said capture-side signal.
18. The system according to claim 17, wherein said plurality of capture-side signals include a near-end captured signal and an error signal containing a residual echo output from a linear adaptive filter.
19. The system according to claim 18, wherein said non-linear processor is configured to track the coherence measures over a predetermined amount of time to determine whether the near-end signal is in a "no echo state" or in an "echo state".
20. The system according to any of claims 18 to 19, wherein said non-linear processor is configured to compute, for each frequency band, a first coherence measure between the far-end signal and the near-end signal; a second coherence measure between the near-end signal and the error signal; and apply said first and second coherence measures to compute the suppression factors.
21. The system according to claim 20, wherein said suppression factors are directly proportional to a combination of said coherence measures.
22. The system according to any of claims 19-21, wherein said suppression factors are directly proportional to one of the first coherence measure and the second coherence measure when the near-end signal is in the "no echo state".
23. The system according to any of claims 19-22, wherein said suppression factors are directly proportional to a minimum of the first and second coherence measures when the near-end signal is in the "echo state".
24. The system according to any of claims 20-23, wherein the first coherence measure is a frequency-domain analog to time-domain correlation between the far-end signal and the near-end signal.
25. The system according to any of claims 20-24, wherein the second coherence measure is a frequency-domain analog to time-domain correlation between the near-end signal and the error signal.
26. The system according to any of claims 18-25, wherein said non-linear processor is configured to apply suppression factors to the error signal to substantially remove the residual echo from the error signal.
27. The system according to any of claims 18-25, wherein said non-linear processor is configured to detect a filter divergence by comparing energy of the error signal and the near-end signal and apply the suppression factors to the near-end signal based on the detected filter divergence.
28. The system according to any of claims 17-27, wherein said non-linear processor is configured to accentuate valleys in the suppression factors by raising to a power.
29. The system according to any of claims 17-28, wherein said non-linear processor is configured to weight the suppression factors with a curve configured to influence less accurate bands.
30. The system according to any of claims 17-29, wherein said non-linear processor is configured to track a minimum suppression factor and scale the suppression factors such that the minimum approaches a target value.
31. The system according to any of claims 18-30, wherein said transform unit is configured to transform the far-end signal, the near-end signal, and the error signal to the frequency domain.
32. The system according to any of claims 17-31, wherein said frequency bands correspond to individual Discrete Fourier Transform (DFT) coefficients.
33. A computer-readable storage medium having stored thereon a computer executable program for non-linear post processing of an audio signal for acoustic echo cancellation, the computer program when executed causing a processor to execute the steps of: receiving as input, by a non-linear processor, at least two of the following signals: a far-end signal to be rendered and a plurality of capture-side signals;
transforming the received signals to the frequency domain;
computing, for each frequency band, one or more coherence measures between the received signals;
deriving suppression factors corresponding to each band based on said one or more coherence measures; and
applying said suppression factors to one of said capture-side signals to substantially remove echo from said capture-side signal.
34. The computer-readable storage medium of claim 33, wherein said plurality of capture-side signals include a near-end captured signal and an error signal containing a residual echo output from a linear adaptive filter.
35. The computer-readable storage medium of claim 34, wherein the computer program when executed causes the processor to further execute the step of tracking the coherence measures over a predetermined amount of time to determine whether the near-end signal is in a "no echo state" or in an "echo state".
36. The computer-readable storage medium of any of claims 34-35, wherein the computer program when executed causes the processor to further execute the steps of computing, for each frequency band, a first coherence measure between the far-end signal and the near-end signal; a second coherence measure between the near-end signal and the error signal; and applying said first and second coherence measures to compute the suppression factors.
37. The computer-readable storage medium of claim 36, wherein said suppression factors are directly proportional to a combination of said coherence measures.
38. The computer-readable storage medium of any of claims 35-37, wherein said suppression factors are directly proportional to one of the first coherence measure and the second coherence measure when the near-end signal is in the "no echo state".
39. The computer-readable storage medium of any of claims 35-38, wherein said suppression factors are directly proportional to a minimum of the first and second coherence measures when the near-end signal is in the "echo state".
40. The computer-readable storage medium of any of claims 36-39, wherein the first coherence measure is a frequency-domain analog to time-domain correlation between the far-end signal and the near-end signal.
41. The computer-readable storage medium of any of claims 36-40, wherein the second coherence measure is a frequency-domain analog to time-domain correlation between the near-end signal and the error signal.
42. The computer-readable storage medium of any of claims 34-41, wherein the computer program when executed causes the processor to further execute the step of applying suppression factors to the error signal to substantially remove the residual echo from the error signal.
43. The computer-readable storage medium of any of claims 34-41, wherein the computer program when executed causes the processor to further execute the steps of detecting filter divergence by comparing energy of the error signal and the near-end signal and applying the suppression factors to the near-end signal based on detected filter divergence.
44. The computer-readable storage medium of any of claims 33-43, wherein the computer program when executed causes the processor to further execute the step of accentuating valleys in the suppression factors by raising to a power.
45. The computer-readable storage medium of any of claims 33-44, wherein the computer program when executed causes the processor to further execute the step of weighting the suppression factors with a curve configured to influence less accurate bands.
46. The computer-readable storage medium of any of claims 33-45, wherein the computer program when executed causes the processor to further execute the step of tracking a minimum suppression factor and scaling the suppression factors such that the minimum approaches a target value.
47. The computer-readable storage medium of any of claims 33-46, wherein the computer program when executed causes the processor to further execute the step of transforming the far-end signal, the near-end signal, and the error signal to the frequency domain.
48. The computer-readable storage medium of any of claims 33-47, wherein said frequency bands correspond to individual Discrete Fourier Transform (DFT) coefficients.