WO2015140113A1 - Implementation of recursive digital filters on parallel computing platforms - Google Patents

Implementation of recursive digital filters on parallel computing platforms

Info

Publication number
WO2015140113A1
Authority
WO
WIPO (PCT)
Prior art keywords: memory block, thread, filter, threads, block
Prior art date
Application number
PCT/EP2015/055455
Other languages
English (en)
Inventor
Jürgen Schmidt
Original Assignee
Thomson Licensing
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing
Publication of WO2015140113A1

Links

Classifications

    • G06F9/544 Buffers; Shared memory; Pipes (interprogram communication, multiprogramming arrangements)
    • G06F9/5066 Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs (allocation of resources)
    • G06F9/5016 Allocation of resources to service a request, the resource being the memory
    • H03H17/04 Recursive filters (frequency-selective networks using digital techniques)
    • G10H2250/121 IIR impulse (filters for musical processing; impulse response features, e.g. for echo or reverberation applications)

Definitions

  • This invention relates to a method for the realization of recursive digital filters on parallel computing platforms that do not support recursion, such as OpenCL, to a corresponding recursive digital filter, and to a device including such a filter.
  • A recursive linear filter is also called an infinite impulse response (IIR) filter.
  • A typical field of application is the filtering of audio signals, e.g. for loudspeaker equalization, loudspeaker crossovers or in mixing consoles.
  • In audio applications like 3D loudspeaker setups or mixing consoles, many audio channels have to be processed in parallel.
  • A 3D loudspeaker setup used for Higher Order Ambisonics sound field playback can consist of 32 loudspeakers and four subwoofers.
  • The desired room response can be achieved by a flat transfer function for each loudspeaker in the room (±1 dB, 80 Hz...16000 Hz).
  • The equalization IIR filter applied to each loudspeaker will have an order at or around 40.
  • The required processing power must therefore be appropriate. If the filter system is implemented on a computer on which the audio/video presentation is also running, the CPU is easily overburdened. An optimal solution is a realization on the computer but outside the CPU.
  • Modern graphic processors (GPUs) or graphic cards have high computation power of up to 2000 GFlops (single-precision floating point) or several hundred GFlops for double-precision floating point.
  • High-order recursive filters can be highly sensitive to quantization of their coefficients, and can easily become unstable. This is much less of a problem with first-order and second-order filters.
  • Each transfer function can be decomposed into products of second-order section transfer functions.
  • A digital biquad filter is a second-order recursive linear filter with two poles and two zeros.
  • "Biquad" is an abbreviation of "biquadratic", which refers to the fact that, in the Z-domain, the filter's transfer function is a quotient of two quadratic functions. Therefore, higher-order filters are typically implemented as serially-cascaded biquad filters.
  • The expression Z-domain is commonly used for the target domain of a Z-transform, which converts a time-discrete signal into a frequency-domain representation.
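The cascade decomposition can be illustrated with a short sketch (not from the patent; all coefficients below are made-up, illustrative examples): running two biquad sections in series produces the same output as a single fourth-order filter whose numerator and denominator polynomials are the products of the section polynomials.

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (ascending powers of z^-1)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def iir(b, a, x):
    """Direct-form difference equation: a[0]*y[k] = sum(b[i]*x[k-i]) - sum(a[j]*y[k-j])."""
    y = []
    for k in range(len(x)):
        acc = sum(b[i] * x[k - i] for i in range(len(b)) if k - i >= 0)
        acc -= sum(a[j] * y[k - j] for j in range(1, len(a)) if k - j >= 0)
        y.append(acc / a[0])
    return y

# two illustrative, stable biquad sections (coefficients are invented for the example)
b1, a1 = [1.0, 0.5, 0.25], [1.0, -0.3, 0.2]
b2, a2 = [0.8, -0.4, 0.1], [1.0, 0.1, -0.05]

x = [1.0] + [0.0] * 15                                  # unit impulse
cascaded = iir(b2, a2, iir(b1, a1, x))                  # section 1, then section 2
direct = iir(poly_mul(b1, b2), poly_mul(a1, a2), x)     # equivalent 4th-order filter
assert all(abs(u - v) < 1e-9 for u, v in zip(cascaded, direct))
```

The equality holds because cascading LTI systems multiplies their transfer functions, which is exactly the product of the numerator and denominator polynomials.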
  • A typical realization is known as "Direct-Form 1", as shown in Fig. 1.
  • The output signal y[k] is used for the calculation of the following output signals via a delay (z⁻¹) and gain cascade.
  • A complementary form is known as the "Direct-Form 2" structure, with only one delay element per step, i.e. two delay elements per biquad.
  • The biquad filter structure in "Direct-Form 2" is shown in Fig. 2.
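The two structures can be sketched as follows (an illustrative sketch, not code from the patent; the coefficients are arbitrary stable examples). For zero initial state, both forms compute the same difference equation and produce identical output.

```python
def biquad_df1(b0, b1, b2, a1, a2, x):
    """Direct-Form 1: separate delay lines for past input and past output samples."""
    x1 = x2 = y1 = y2 = 0.0
    y = []
    for xk in x:
        yk = b0 * xk + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, xk
        y2, y1 = y1, yk
        y.append(yk)
    return y

def biquad_df2(b0, b1, b2, a1, a2, x):
    """Direct-Form 2: a single pair of delay elements holding the internal state w."""
    w1 = w2 = 0.0
    y = []
    for xk in x:
        w = xk - a1 * w1 - a2 * w2
        y.append(b0 * w + b1 * w1 + b2 * w2)
        w2, w1 = w1, w
    return y

x = [1.0, 0.0, 0.5, -0.25, 0.0, 0.0]
c = (1.0, 0.2, 0.1, -0.5, 0.25)   # illustrative stable coefficients
out1 = biquad_df1(*c, x)
out2 = biquad_df2(*c, x)
assert all(abs(u - v) < 1e-12 for u, v in zip(out1, out2))
```

Direct-Form 2 halves the number of delay elements, which matches the document's remark of "two delay elements per biquad".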
  • OpenCL was developed to enable parallel computation on several hundred or thousand processing elements in parallel tasks, e.g. on GPUs. Therefore, task-comprehensive data access is not possible, which may lead to the assumption that GPUs in general, and OpenCL in particular, do not support recursion.
  • A parallelization on sample level would be optimal for the high number of processing elements on GPUs.
  • A problem to be solved is that such parallelization on sample level is not possible, because threads would need access to results y[k-n] of threads running in parallel, but they can only have parallel access to input data. In other words: y[k] is computed in task k, and y[k-n] is computed in task k-n. Therefore, y[k-n] cannot be fed to task k for the calculation of y[k].
  • In the invention, transfer functions are decomposed into products of lower-order (preferably second-order) section transfer functions.
  • Each of the lower-order section transfer functions is implemented as a separate thread.
  • Each of the threads reads data from a first memory block and writes data into a second memory block. All threads operate in parallel, i.e. simultaneously, for one recursion. After all threads are finished, the memory blocks are exchanged for the next recursion, such that each of the first memory blocks becomes a write memory block and each of the second memory blocks becomes a read memory block for the next recursion.
  • The memory blocks are assigned to the threads by a thread management unit within the GPU, according to program instructions. During the exchange, the memory blocks maintain their data.
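The block-exchange scheme can be illustrated with a small single-threaded simulation (a sketch under stated assumptions, not the patent's OpenCL code): each "thread" is a biquad stage that processes its current input buffer into an output buffer; after each round, the buffers are reassigned so that each stage's output becomes the next stage's input. The flushed pipeline output equals a sample-accurate sequential cascade.

```python
def biquad_block(c, state, block):
    """Process one block through a Direct-Form 2 biquad; state persists across blocks."""
    b0, b1, b2, a1, a2 = c
    w1, w2 = state
    out = []
    for x in block:
        w = x - a1 * w1 - a2 * w2
        out.append(b0 * w + b1 * w1 + b2 * w2)
        w2, w1 = w1, w
    state[0], state[1] = w1, w2
    return out

def pipelined_chain(coeffs, blocks):
    """All M stages run concurrently on different blocks; after each round the
    output buffer of stage m is handed to stage m+1 as its new input buffer."""
    M = len(coeffs)
    states = [[0.0, 0.0] for _ in range(M)]
    inputs = [None] * M                    # buffer currently at the input of each stage
    collected = []
    for blk in blocks + [None] * M:        # M extra rounds flush the pipeline
        produced = [biquad_block(coeffs[m], states[m], inputs[m])
                    if inputs[m] is not None else None for m in range(M)]
        collected.append(produced[-1])     # last stage's finished block, if any
        inputs = [blk] + produced[:-1]     # buffer exchange for the next round
    return [s for b in collected if b is not None for s in b]

coeffs = [(0.9, 0.1, 0.0, -0.2, 0.05), (1.0, -0.3, 0.1, 0.1, -0.02)]  # invented examples
sig = [1.0, 0.5, -0.5, 0.25, 0.0, 0.0, 0.0, 0.0]
blocks = [sig[i:i + 4] for i in range(0, len(sig), 4)]   # block size B = 4

# reference: run the cascade sequentially, sample-accurate
ref = sig
for c in coeffs:
    ref = biquad_block(c, [0.0, 0.0], ref)

piped = pipelined_chain(coeffs, blocks)
assert all(abs(u - v) < 1e-12 for u, v in zip(piped, ref))
```

Note that the first finished output block only appears after M rounds, which is exactly the pipeline delay discussed later in the document.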
  • In one aspect, the invention provides a method for implementing an infinite impulse response (IIR) filter on a parallel processing hardware platform, comprising steps of separating the IIR filter into a sequence of a plurality of bi-quad filters, implementing each bi-quad filter as a separate thread with one or more processing elements each, and
  • assigning to each thread, except the first and the last thread, a different input memory block and a different output memory block, such that each output memory block of a thread becomes a new input memory block of the next thread, and each input memory block of a thread becomes a new output memory block of the previous thread (wherein all the memory contents remain unchanged), and
  • The threads correspond to the bi-quad filters and have a sequential order, given by the sequence of the plurality of bi-quad filters.
  • Whenever an input memory block is assigned to a particular thread (except in the initial assignment), it is the output memory block of the thread that directly precedes the particular thread in the sequential order of threads.
  • Such a memory block therefore contains the output data of the preceding bi-quad filter.
  • Whenever an output memory block is assigned to a particular thread, it is a memory block that was, in the previous iteration, the input memory block of the thread that directly succeeds the particular thread in the sequential order of threads.
  • Such a memory block therefore contains data that have already been processed by the subsequent bi-quad filter and that can be overwritten by the particular thread.
  • In a further aspect, the invention relates to a computer-readable storage medium having stored thereon executable instructions to cause a computer with at least one parallel processing hardware platform to perform a method for implementing an infinite impulse response (IIR) filter comprising steps as described above.
  • In one embodiment, double-precision floating point is used for the IIR filter, since double precision may be required for obtaining good signal-to-noise (S/N) ratios.
  • Advantages are at least that recursive processing is enabled on a parallel processing hardware platform, and that processing elements need not be re-configured, so that the total processing chain is accelerated.
  • In a further aspect, the invention relates to an apparatus that comprises a general processor (CPU) and a Graphics Processing Unit (GPU) for separating the IIR (infinite impulse response) filter into a sequence of a plurality of bi-quad filters, implementing each bi-quad filter as a separate thread with one or more processing elements each, assigning to the first thread a first output memory block, and to the last thread a first input memory block, assigning to each thread, except the first and the last thread, an input memory block and an output memory block, executing each of the threads with a first block of data to be processed, and waiting until all threads are finished, assigning to the first thread a different second output memory block, and to the last thread a different second input memory block (memory contents remain unchanged), assigning to each thread, except the first and the last thread, a different input memory block and a different output memory block, such that each output memory block of a thread becomes a new input memory block of the next thread, and each input memory block of a thread becomes a new output memory block of the previous thread, and executing each of the threads with a second block of data to be processed.
  • Fig. 3 a consecutive sequence of M biquad filters;
  • Figs. 8 and 9 a detail according to Fig. 6;
  • Fig. 10 a picture of an audio lab;
  • Fig. 11 a diagram related to loudspeaker equalisation;
  • Fig. 12 a decomposition example of an IIR filter calculation;
  • Fig. 13 a diagram of processing delay for different numbers of biquads and different sample counts;
  • Figs. 14 to 16 diagrams of normalized processing time vs. block size;
  • Figs. 17 to 19 3D graphs of the real-time factor vs. the number of channels and biquads for different blocksizes;
  • Fig. 20 an illustration of an apparatus using the invention;
  • Fig. 21 a flow diagram of a method according to one embodiment of the invention;
  • Fig. 22 a computer-readable storage medium.
  • In Fig. 1, a realization of a recursive linear digital filter known as "Direct-Form 1" is shown.
  • An output signal y[k] is used for the calculation of the following output signals via processing elements 130, 131 of a biquad filter 110, with delay elements 131, denoted z⁻¹, and a gain cascade with gain elements 130.
  • The alpha coefficients of the gain elements 130 determine the position of the zeros, and the beta coefficients determine the position of the poles.
  • A complementary form is known as the "Direct-Form 2" structure mentioned above, with only one delay element per step, i.e. two delay elements per biquad.
  • The biquad filter structure in "Direct-Form 2" is shown in Fig. 2 with two biquad filters 110.
  • A biquad filter 110 as shown in Fig. 2 consists of the following processing elements: two delay elements (each written as z⁻¹), five multiplication operations and four addition operations.
  • When processing blocks of samples, the states behind each delay element have to be saved at the end of the block and restored at the beginning of a new block to ensure continuous filtering of input samples.
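The save/restore of the delay states can be sketched as follows (illustrative Python, not from the patent; coefficients are invented): processing a signal block by block with persistent states yields exactly the same output as filtering the whole signal in one pass.

```python
def biquad_block(c, state, block):
    """One biquad over one block; the two delay states are saved in `state`
    at the end of the block and restored at the start of the next one."""
    b0, b1, b2, a1, a2 = c
    w1, w2 = state
    out = []
    for x in block:
        w = x - a1 * w1 - a2 * w2
        out.append(b0 * w + b1 * w1 + b2 * w2)
        w2, w1 = w1, w
    state[0], state[1] = w1, w2     # save states for the next block
    return out

c = (1.0, 0.4, 0.2, -0.6, 0.3)      # illustrative coefficients
sig = [1.0, -1.0, 0.5, 0.25, 0.0, 0.75, -0.5, 0.1]

whole = biquad_block(c, [0.0, 0.0], sig)   # one pass over the full signal

state = [0.0, 0.0]                         # persists between blocks
blockwise = []
for i in range(0, len(sig), 2):            # block size B = 2
    blockwise += biquad_block(c, state, sig[i:i + 2])

assert blockwise == whole   # continuous filtering across block borders
```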
  • The block diagram of an IIR filter 100 of order 2M, separated into a consecutive sequence of M biquad filters 110 (Fig. 2), is shown in Fig. 3. Between the individual stages of this processing chain, only the results are transmitted. The delay states are kept inside each single biquad 110.
  • A first step is to assign each biquad 110 in the biquad chain to a single thread 120 (see Fig. 5). With this modification alone, cross-thread access to processed data is still impossible.
  • A processing sequence comprises implementing each bi-quad filter as a separate thread 120 with one or more processing elements each,
  • assigning to each thread, except the first and the last thread (BQ11 and BQM1), a different input memory block 150 and a different output memory block 140, wherein each output memory block of a thread becomes a new input memory block of the next thread and each input memory block of a thread becomes a new output memory block of the previous thread, wherein memory contents remain preferably unchanged, and
  • executing each of the threads with a second block of data 160, shown as Block 2, and so on until all blocks of data to be processed are finished.
  • Each bi-quad filter 110 is equipped with two memory blocks 140, 150: one input memory block 150 for the input data and one output memory block 140 for the results.
  • The first bi-quad filter BQ11 in the chain reads an input sequence x(n), and the last bi-quad filter BQM1 delivers the output sequence y(n).
  • All other bi-quad data interfaces BQ21, ..., BQ(M-1)1 are coupled to individually associated memory blocks 140, 150, which are allocated from a common temporary storage S. This processing is executed for one block of size B samples, wherein the first block is called Block 1.
  • Then, an OpenCL kernel re-assigns memory blocks 140, 150 to the processing elements in the form of said bi-quad filters 110 for the next block processing, here Block 2.
  • The bi-quads cross-switch their associated memory blocks such that an output memory block 140 of a bi-quad m-1, which contains intermediate processing results of the previous data block, becomes the input memory block 150 of the next bi-quad m.
  • Each processing element maintains its function, i.e. performs the same bi-quad operation as before.
  • The data flow from the input to the output according to Fig. 6 implies that, for each block, a delay of one block size per biquad is inserted until the end of the biquad chain is reached, as shown in the dataflow diagram of Fig. 7. This behavior is often called 'pipelining'.
  • The pipeline depth is determined by the length M of the biquad chain.
  • The resulting delay d is the product of the blocksize B and the biquad chain length M: d = B · M.
  • The total delay of the biquad chain is an important factor for the application.
  • The selection of the blocksize B will be influenced by this (desired/acceptable) delay and by the processing overhead for the setup and initialization of the processing tasks. This overhead increases as the blocksize is reduced, so that for a balanced system a blocksize of at least a few tens of samples will be optimal.
  • The number of blocks influences the performance of the host thread, since it also determines the number of waits for the host task. As an example, for a complete audio frame of 1024 samples, a block size of 64 samples results in 16 waits, and therefore 16 task switches of the host processor.
  • The parallelization p of the system is determined by the product of the number of channels C and the biquad chain length M: p = C · M.
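Plugging in the benchmark configuration mentioned later in the document (block size B = 64, M = 8 biquads per channel, C = 40 channels) gives a quick sanity check of both formulas; note that the 48 kHz sample rate used below is an assumption, not stated in the document.

```python
B = 64    # block size in samples (as in the benchmark configuration)
M = 8     # biquads per channel
C = 40    # audio channels

d = B * M     # pipeline delay in samples: d = B * M
p = C * M     # degree of parallelism: p = C * M independent biquad threads

assert d == 512 and p == 320
latency_ms = d / 48000 * 1000   # ≈ 10.7 ms at an assumed 48 kHz sample rate
```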
  • Memory 11, which is not part of the temporary storage S shown in Fig. 6, works for bi-quad filter 110, BQ1, as a first output memory block 140, and Memory 12 works as an input memory block 150.
  • Memory 21 works for bi-quad filter 110, BQ2, as an output memory block 140, and Memory 22 works as an input memory block 150.
  • Figure 9 shows, in a next step, the memory switch function (dashed arrows) between Memory 12 and Memory 21.
  • Memory 12 is now the output memory block and gives the data it received from BQ1 to BQ2.
  • Memory 21 is the new output memory block 140 for bi-quad BQ1.
  • Fig. 12 shows an inverse filter function 180 obtained with an IIR filter of this invention.
  • The inverse filter function 180 is convolved with an average signal similar to the signal of the microphones (black line) to reach a very flat target signal 190 with 0 dB in the area of interest.
  • The target audio interface at the Hanover audio lab can be configured for 64, 128 or 192 channels; a good equalization filter will have a length of about 20 biquads.
  • Lab software was constructed using the invention and works very well.
  • A first single benchmark on a single ATI Radeon HD 7950 GPU, for 8 biquads per channel and 40 channels, gave an execution time of about 130 µs per block with a block size of 64 samples, which is an excellent value. This execution time is expected to be nearly constant for longer biquad chains, because the GPU utilization is low in this configuration.
  • Fig. 13 shows the cumulated processing delay in milliseconds versus the number of biquads. It is shown that with block sizes of 8 to 16 samples, a delay between 30 ms and 250 ms can be reached using between 20 and 80 bi-quads.
  • The cumulated time delay for an IIR filter should not be more than 200 ms with regard, e.g., to video processing.
  • In a first step, each bi-quad element in the bi-quad chain is assigned to a single thread. However, this does not yet enable data access across threads.
  • In Figs. 14 to 16, diagrams of normalized processing time vs. block size are shown.
  • Fig. 14 shows a horizontal grey line illustrating a factor of 0.5, which means that 50% of the processing power available on a platform like a GPU or a CPU is used to execute the workflow. This is an acceptable value because the other 50% can be used for other processes.
  • In Figs. 17 to 19, 3D graphs of the real-time factor versus the number of channels and biquads for different blocksizes are shown.
  • Fig. 19 shows the results of IIR filter processing on a GPU, and Figs. 17 and 18 show the results of IIR filter processing on a CPU platform. It is clearly shown that using GPU platforms leads to significantly lower real-time factors compared to CPU usage.
  • A method for implementing an IIR filter 100 (Fig. 6) on a parallel processing hardware platform 200 (Fig. 20), such as a GPU 340, comprises steps of separating the IIR filter 100 into a sequence of bi-quad filters 110, implementing each bi-quad filter 110 as a separate thread 120 with one or more processing elements each, assigning to the first thread 120 a first output memory block 140, and to the last thread 120 a first input memory block 150, assigning to each of the remaining threads an input memory block 150 and an output memory block 140, executing each of the threads with a first block of data 160 to be processed, and waiting until all threads are finished, assigning to the first thread a different second output memory block 140, and to the last thread a different second input memory block 150, assigning to each of the remaining threads a different input memory block 150 and a different output memory block 140 than before, such that each output memory block 140 of a thread becomes a new input memory block 150 of the next thread 120, and each input memory block 150 of a thread becomes a new output memory block 140 of the previous thread, and executing each of the threads with a second block of data 160 to be processed. These steps are repeated for all blocks of data to be processed.
  • Fig. 20 is an illustration of an apparatus 300 comprising a CPU 320 and a GPU 340 working as a parallel processing platform 200 with an IIR filter 100, and Fig. 22 shows a storage medium 400 having stored thereon executable instructions to cause a computer with at least one parallel processing hardware platform to perform a method for implementing an infinite impulse response (IIR) filter (100) comprising the steps according to one of the said methods.
  • Fig. 21 shows a flow diagram of a method for filtering input data using an IIR filter (100) implemented on a parallel processing hardware platform (200), wherein the IIR filter (100) is separated into a sequence of a plurality of bi-quad filters (110) in step a) and each bi-quad filter (110) is implemented as a separate thread (120) with one or more processing elements (130) each in step b), the method comprising steps of
  • assigning to each thread (120), except the first and the last thread, an input memory block (150) different from its previous input memory block and an output memory block (140) different from its previous output memory block, such that each output memory block (140) of a thread that corresponds to a stage of the sequence of bi-quad filters (110) is assigned as a new input memory block (150) of the next thread (120) corresponding to the next stage of the sequence of bi-quad filters (110), and each input memory block (150) of a thread (120) that corresponds to a stage of the sequence of bi-quad filters (110) is assigned as a new output memory block (140) of the previous thread (120) corresponding to the previous stage of the sequence of bi-quad filters (110);

Abstract

The invention relates to a method for implementing an IIR filter (100) on a parallel processing hardware platform (200) such as a GPU (340), comprising separating the IIR filter into a sequence of bi-quad filters (110), implementing each bi-quad filter as a separate thread (120) with one or more processing elements (130), assigning to the first thread a first output memory block (140) and to the last thread a first input memory block (150), assigning to each of the remaining threads an input memory block (150) and an output memory block (140), executing each of the threads with a first block of data (160) to be processed, and, when all threads (120) are finished, assigning to the first thread a different second output memory block and to the last thread a different second input memory block, assigning to each of the remaining threads an input memory block and an output memory block different from before, such that each output memory block of a thread becomes a new input memory block of the next thread and each input memory block of a thread becomes a new output memory block of the previous thread, and executing each of the threads with a second block of data (160) to be processed. The above steps are repeated for all blocks of data to be processed.
PCT/EP2015/055455 2014-03-21 2015-03-16 Implementation of recursive digital filters on parallel computing platforms WO2015140113A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP14305410 2014-03-21
EP14305410.4 2014-03-21

Publications (1)

Publication Number Publication Date
WO2015140113A1 true WO2015140113A1 (fr) 2015-09-24

Family

ID=50478353

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2015/055455 WO2015140113A1 (fr) 2014-03-21 2015-03-16 Implementation of recursive digital filters on parallel computing platforms

Country Status (1)

Country Link
WO (1) WO2015140113A1 (fr)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1991018342A1 (fr) * 1990-05-18 1991-11-28 Star Semiconductor Corporation Programmable signal processor architecture
EP2242044A2 (fr) * 2009-04-17 2010-10-20 Harman International Industries, Incorporated System for active noise control having an infinite impulse response filter
US20140067100A1 (en) * 2012-08-31 2014-03-06 Apple Inc. Parallel digital filtering of an audio channel


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Chao-Huang Wei et al.: "Design and Implementation of Multi-Channel Bandpass Filter for Embedded System", Third IEEE International Workshop on Electronic Design, Test and Applications (DELTA 2006), Kuala Lumpur, Malaysia, 17 January 2006, pages 461-471, XP010883064, ISBN: 978-0-7695-2500-6 *
Schaffer, R. et al.: "Recursive filtering on SIMD architectures", IEEE Workshop on Signal Processing Systems (SIPS 2003), 27 August 2003, pages 263-268, XP010661026, ISBN: 978-0-7803-7795-0 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160112033A1 (en) * 2014-10-15 2016-04-21 Texas Instruments Incorporated Efficient implementation of cascaded biquads
US10114796B2 (en) * 2014-10-15 2018-10-30 Texas Instruments Incorporated Efficient implementation of cascaded biquads


Legal Events

Date Code Title Description
NENP Non-entry into the national phase
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15710174

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 15710174

Country of ref document: EP

Kind code of ref document: A1