US20100329362A1 - Video encoding and decoding apparatus and method using adaptive in-loop filter
- Publication number
- US20100329362A1 US20100329362A1 US12/827,572 US82757210A US2010329362A1 US 20100329362 A1 US20100329362 A1 US 20100329362A1 US 82757210 A US82757210 A US 82757210A US 2010329362 A1 US2010329362 A1 US 2010329362A1
- Authority
- US
- United States
- Prior art keywords
- filter
- information
- image block
- filtering
- unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/86—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/124—Quantisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/139—Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/82—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/24—Systems for the transmission of television signals using pulse code modulation
Definitions
- Apparatuses and methods consistent with exemplary embodiments generally relate to a video encoding and decoding apparatus and method, and more particularly, to a video encoding and decoding apparatus and method using an adaptive in-loop filter, in which the efficiency of video encoding is improved by enhancing the performance of the in-loop filter.
- Encoding techniques for video compression, such as H.261, H.263, Moving Picture Experts Group (MPEG)-1, MPEG-2, MPEG-4, and H.264/Advanced Video Coding (AVC), generally include processes such as motion estimation/compensation, transform and quantization, entropy encoding, and so forth.
- Such video encoding techniques perform encoding in block units.
- due to block-based transform and quantization, visually artificial discontinuity in pixel values occurs at block boundaries, which is referred to as a blocking phenomenon.
- H.264/AVC uses a deblocking filter as an in-loop filter, which allows fine motion estimation/compensation in the encoding process.
- the deblocking filter used in H.264/AVC is designed to be suitable for low-bitrate images.
- for images that do not fit this profile, the deblocking filter may have no effect or may even degrade encoding performance.
- a related art in-loop filter applies a plurality of statistic filters to images, which have been reconstructed after encoding, based on context information.
- the context information includes an encoding mode, an image block boundary unit, a transform coefficient, a motion size, reference frame information, and the like.
- the related art in-loop filter is suitable only for an image having a particular size or particular characteristics, due to the use of the statistic filters. Consequently, such an in-loop filter may have no effect on images having various characteristics, or may even degrade encoding performance for such images.
- aspects of one or more exemplary embodiments provide a method and apparatus for creating an adaptive in-loop filter, taking account of characteristics of an image during video encoding.
- aspects of one or more exemplary embodiments provide a method and apparatus for creating an adaptive in-loop filter and updating the in-loop filter during video encoding.
- aspects of one or more exemplary embodiments provide a method and apparatus for creating an adaptive in-loop filter, taking account of characteristics of an image during video decoding.
- aspects of one or more exemplary embodiments provide a method and apparatus for updating an adaptive in-loop filter during video decoding.
- an in-loop filtering method for video encoding, the method including: determining a type of a boundary of an image block to be filtered by using context information of the image block; adaptively creating a filter for filtering the boundary of the image block according to the determined type; selecting a filter for filtering the image block between the created filter and a previously stored filter; and filtering the image block by using the selected filter.
- an in-loop filtering apparatus for video encoding, the apparatus including: a filter creating unit which determines a type of a boundary of an image block to be filtered by using context information of the image block, and which adaptively creates a filter for filtering the boundary of the image block according to the determined type; a filter selecting unit which selects a filter for filtering the image block between the created filter and a previously stored filter; and a filtering performing unit which filters the image block by using the selected filter.
- an in-loop filtering method for video decoding including: receiving filter information regarding a filter used to filter an image block from a video encoding apparatus; when the received filter information includes filter coefficient information, filtering the image block by using a filter corresponding to the filter coefficient information; and when the received filter information includes information of a previously stored filter, filtering the image block by using a filter corresponding to the information.
- an in-loop filtering apparatus for video decoding, the apparatus including: a filter information receiving unit which receives filter information regarding a filter used to filter an image block from a video encoding apparatus; and a filtering performing unit which filters the image block by using a filter corresponding to filter coefficient information when the received filter information includes the filter coefficient information, and which filters the image block by using a filter corresponding to index information when the received filter information includes the index information of a previously stored filter.
- a method of adaptively creating a filter for an in-loop filtering of an image block including: determining a type of a boundary of the image block by using context information of the image block; and adaptively creating the filter for filtering the boundary of the image block according to the determined type of the boundary.
- an in-loop filtering method for video decoding including: receiving filter information regarding a filter used to filter an image block from a video encoding apparatus; and filtering the image block by using the filter, wherein the filter is adaptively created based on a type of a boundary of the image block.
- FIG. 1 is a block diagram illustrating a video encoding apparatus according to an exemplary embodiment.
- FIG. 2 is a block diagram illustrating an example of an in-loop filter according to an exemplary embodiment.
- FIG. 3 illustrates examples of a two-dimensional (2D) filter having a size of N ⁣ ⁣N and a one-dimensional (1D) filter having a size of N according to an exemplary embodiment.
- FIG. 4 is a flowchart illustrating an in-loop filtering method performed by an encoder according to an exemplary embodiment.
- FIG. 5 is a block diagram illustrating a video encoding apparatus according to an exemplary embodiment.
- FIG. 6 is a block diagram illustrating a video decoding apparatus according to an exemplary embodiment.
- FIG. 7 is a flowchart illustrating an in-loop filtering method performed by a decoder according to an exemplary embodiment.
- one or more exemplary embodiments determine a type of a boundary of an image block by using context information of the image block, create a filter adaptively according to the determined type, and select an optimal filter between the created filter and a previously created and stored filter. By using the selected filter, filtering is performed.
- exemplary embodiments will be described in detail.
- FIG. 1 is a block diagram illustrating a video encoding apparatus 100 according to an exemplary embodiment.
- the video encoding apparatus 100 includes an encoding unit 110 , a reconstructed image generating unit 120 , and an in-loop filter unit 130 .
- the encoding unit 110 encodes a difference signal between an original image to be currently encoded and a predicted image corresponding to the original image.
- the encoding unit 110 encodes the difference signal through at least one of Discrete Cosine Transform (DCT), quantization, entropy encoding, and the like.
- the DCT, quantization, and entropy encoding are widely used in the H.264 standard and thus will not be described in detail.
- the reconstructed image generating unit 120 reconstructs the encoded difference signal and generates a reconstructed image by using the reconstructed difference signal and the predicted image. More specifically, the reconstructed image generating unit 120 reconstructs the encoded difference signal through inverse quantization and Inverse Discrete Cosine Transform (IDCT), and adds the reconstructed signal to the predicted image, thus generating the reconstructed image.
- the in-loop filter unit 130 performs filtering on the reconstructed image in the unit of an image block based on a scheme according to an exemplary embodiment.
- the image block unit may not be fixed and may be variable. For example, an image may be divided into quadtree units and different filters may be applied to blocks having different sizes for filtering.
- the image block unit may be referred to by various names, for example, a coding unit, a prediction unit, and a transform unit.
- the in-loop filter unit 130 determines a type of a block boundary of the reconstructed image by using context information, and creates a filter corresponding to the determined type in order to minimize bitrate distortion between the reconstructed image and the original image corresponding to the reconstructed image. Thereafter, the in-loop filter unit 130 compares the created filter with one or more previously stored filters to select an optimal filter therefrom, and performs filtering by using the selected filter.
- the filter may be created according to at least one of a boundary strength of an image block, a pixel position with respect to a boundary of an image block, a macroblock encoding mode, whether a macroblock encoding mode is a skip mode, a Coded Block Pattern (CBP), a Quantization Parameter (QP), and a motion vector.
- the created deblocking filter may be a one-dimensional (1D) filter having a size of N or a two-dimensional (2D) filter having a size of N ⁇ N. Furthermore, the size of the created deblocking filter may not be fixed and may be variable.
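As an illustrative sketch (not part of the claimed embodiments), the two filter shapes described above can be applied as follows. The function and argument names are assumptions; the description only states that the created filter may be 1D of size N or 2D of size N×N, with N variable.

```python
def apply_1d_filter(samples, coeffs):
    """Apply an odd-length 1D filter of size N along a line of samples
    crossing a block boundary. Edge samples are replicated for padding.
    Names are illustrative, not from the patent."""
    half = len(coeffs) // 2
    padded = [samples[0]] * half + list(samples) + [samples[-1]] * half
    return [sum(c * padded[i + k] for k, c in enumerate(coeffs))
            for i in range(len(samples))]

def apply_2d_filter(block, kernel):
    """Apply an N x N 2D filter to a reconstructed block, again with
    edge-replication padding at the block borders."""
    n = len(kernel)
    half = n // 2
    rows, cols = len(block), len(block[0])

    def px(r, c):
        # Clamp coordinates into the block to realize the padding.
        return block[min(max(r, 0), rows - 1)][min(max(c, 0), cols - 1)]

    return [[sum(kernel[i][j] * px(r + i - half, c + j - half)
                 for i in range(n) for j in range(n))
             for c in range(cols)] for r in range(rows)]
```

Because N is variable, the same routines serve any odd filter size the encoder decides to create.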
- the in-loop filter unit 130 filters the reconstructed image by using the created filter or the stored filter, and information related to the filter used for the filtering is encoded by the encoding unit 110 .
- FIG. 2 is a diagram for describing an example of the in-loop filter unit 130 according to an exemplary embodiment.
- the in-loop filter unit 130 includes a filter creating unit 210 , a filter storing unit 230 , a filter selecting unit 240 , a filtering performing unit 250 , and a filter information generating unit 270 .
- the filter creating unit 210 determines a type of a boundary of an image block by using context information, creates a filter corresponding to the determined type in order to minimize bitrate distortion between a reconstructed image and an original image corresponding thereto, and delivers filter coefficient information of the created filter to the filter selecting unit 240 .
- the context information may include, for example, at least one of a boundary strength of an image block, a pixel position with respect to a boundary of an image block, a macroblock encoding mode, whether or not a macroblock encoding mode is a skip mode, a CBP, a QP, a motion vector, etc. That is, the filter creating unit 210 determines a type of a boundary of an image block by using at least one of the context information and creates a filter corresponding to the determined type.
- a single slice includes a plurality of image blocks, boundaries of which may have different types.
- a plurality of filters may be created for the single slice. For example, if boundaries are classified into a total of 4 types according to context information, 4 filters may be created for a single slice. In this situation, if different filters are created for horizontal boundaries and vertical boundaries, the total number of filters to be created may be 8.
- the number of boundary types classified according to the context information may be selectively determined by the encoding unit 110 .
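A minimal sketch of the boundary classification and filter counting described above. The concrete 4-type mapping from boundary strength and skip mode is purely illustrative; the patent leaves the classification rule to the encoder.

```python
def classify_boundary(boundary_strength, is_skip_mode):
    """Map context information to one of four hypothetical boundary types.
    The thresholds below are assumptions for illustration only."""
    if is_skip_mode:
        return 0
    if boundary_strength >= 3:
        return 1
    if boundary_strength >= 1:
        return 2
    return 3

def filters_needed(num_types, separate_horizontal_vertical):
    """4 boundary types -> 4 filters per slice; 8 if horizontal and
    vertical boundaries receive distinct filters, as in the example."""
    return num_types * (2 if separate_horizontal_vertical else 1)
```

With 4 types and separate horizontal/vertical filters, `filters_needed(4, True)` reproduces the total of 8 filters per slice mentioned above.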
- a size and a dimension of a filter created by the filter creating unit 210 may not be fixed.
- the filter created by the filter creating unit 210 may be a 1D filter having a size of N or a 2D filter having a size of N ⁇ N.
- FIG. 3 illustrates examples of a 2D filter 310 having a size of N ⁇ N and a 1D filter 311 having a size of N.
- the form of the 1D filter may be a horizontal filter or a vertical filter.
- the horizontal filter is applied to left and right boundaries of an image block and the vertical filter is applied to top and bottom boundaries of the image block.
- the filter may also be applied by being set “on” or “off” in the unit of an image block. Therefore, combining both methods, the filtering scheme of the 1D filter may be classified into two methods, as described below.
- in a first method, each image block may use 1-bit flag information.
- in a second method, each image block may use 2 bits of flag information.
- it may be indicated whether the horizontal filter is applied to a current image block according to flag information controlling the horizontal filter, and it may also be indicated whether the vertical filter is applied to the current image block according to flag information controlling the vertical filter.
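The 2-bit flag scheme above can be sketched as follows. The bit layout (bit 0 gating the horizontal filter, bit 1 the vertical filter) is an assumption; the patent only states that one flag controls each filter direction.

```python
def filter_block_with_flags(block, flags, h_filter, v_filter):
    """Apply the horizontal/vertical 1D filters to an image block
    according to per-block flag bits (2-bit scheme). `h_filter` and
    `v_filter` are callables taking and returning a block; the bit
    assignment is hypothetical."""
    out = [row[:] for row in block]
    if flags & 0b01:   # horizontal filter: left/right boundaries
        out = h_filter(out)
    if flags & 0b10:   # vertical filter: top/bottom boundaries
        out = v_filter(out)
    return out
```

The 1-bit variant is the special case where a single flag switches both directions on or off together.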
- filter coefficient information of the created filter is transmitted to a receiving side. Therefore, an amount or size of filter coefficient information may be minimized according to one or more exemplary embodiments.
- filter coefficients of the created filter may be configured to be symmetric and a sum thereof may be set to a predetermined value.
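One way the symmetry and fixed-sum constraints reduce the transmitted side information: only the coefficients on one side of the centre tap need to be signalled, and the centre tap is inferred from the predetermined sum. The function name and convention below are assumptions for illustration.

```python
def expand_symmetric_coeffs(half, total=1.0):
    """Rebuild a full odd-length symmetric filter from its signalled
    half. `half` holds the coefficients left of the centre tap; the
    centre tap is chosen so that all coefficients sum to `total`
    (the predetermined value from the description)."""
    centre = total - 2 * sum(half)
    return list(half) + [centre] + list(reversed(half))
```

Signalling one coefficient instead of three (or two instead of five) roughly halves the filter-coefficient overhead for each created filter.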
- the filter selecting unit 240 selects an optimal filter between the filter created by the filter creating unit 210 and the filter stored in the filter storing unit 230 and delivers filter information regarding the selected filter to the filtering performing unit 250 .
- a method for selecting the optimal filter may, for example, be performed by comparing cost functions of two target filters.
- the cost function is a function generated by considering at least one of overhead at the time of delivering filter information regarding a filter to a receiving side, filtering performance of the filter (e.g., an error rate between a reconstructed image and an original image), and the like. If there is no previously stored filter, the filter selecting unit 240 may select the filter currently created by the filter creating unit 210 .
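The cost comparison between the created and stored filters can be sketched with a Lagrangian rate–distortion form. The patent only states that the cost considers signalling overhead and filtering performance; the specific form D + λ·R and the tuple layout below are assumed for illustration.

```python
def filter_cost(distortion, side_info_bits, lam=0.1):
    """Cost of using a filter: filtering error plus a weighted penalty
    for the bits needed to signal it. The D + lambda * R form is an
    assumed concrete choice, not mandated by the patent."""
    return distortion + lam * side_info_bits

def select_filter(created, stored):
    """Pick the lower-cost filter; fall back to the created filter
    when no filter has been stored yet. Each argument is a
    (name, distortion, side_info_bits) tuple."""
    if stored is None:
        return created[0]
    better = min((created, stored), key=lambda f: filter_cost(f[1], f[2]))
    return better[0]
```

Note that a stored filter costs almost no side information (an index or flag suffices), so it can win even with slightly worse distortion.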
- if the filter selecting unit 240 selects the created filter, filter coefficient information of the created filter is delivered to the filter storing unit 230 and to the filter information generating unit 270 . If the filter selecting unit 240 selects the filter stored in the filter storing unit 230 , index information of the selected filter is delivered to the filter information generating unit 270 .
- the filter storing unit 230 updates a previously stored filter by using the filter coefficient information delivered from the filter selecting unit 240 .
- the update of the filter may be performed by replacing the filter coefficient information of the stored filter with the filter coefficient information of the created filter. If there is no previously stored filter, the filter storing unit 230 may store filter information regarding the currently created filter.
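A minimal sketch of the filter storing unit's replace-style update. Keying stored filters by boundary type is an assumption; the patent only specifies that the stored filter's coefficients are replaced by those of the created filter (or stored outright when no filter exists yet).

```python
class FilterStore:
    """Hypothetical filter storing unit: one stored filter per boundary
    type, updated by replacing its coefficients."""

    def __init__(self):
        self.filters = {}

    def update(self, boundary_type, coeffs):
        # Replaces the stored coefficients, or stores the first filter
        # when none exists yet for this boundary type.
        self.filters[boundary_type] = list(coeffs)

    def get(self, boundary_type):
        return self.filters.get(boundary_type)
```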
- the filtering performing unit 250 performs filtering on an image block by using the filter selected by the filter selecting unit 240 .
- the filter information generating unit 270 generates filter information by using information delivered from the filter selecting unit 240 and delivers the filter information to the encoding unit 110 .
- the filter information is encoded by the encoding unit 110 , and is then transmitted to a decoder of the receiving side.
- the filter information generated by the filter information generating unit 270 may vary according to a type of the selected filter.
- the filter information generating unit 270 may generate filter information by using filter coefficients of the currently created filter.
- the amount or size of the generated filter information may be reduced by using at least one of characteristics of the filter coefficients of the created filter (for example, symmetry) and a sum of the filter coefficients.
- the filter information generating unit 270 may generate, as the filter information, index information for identifying the previously stored filter or information in the form of a flag having information indicating the reuse of the stored filter.
- FIG. 4 is a flowchart illustrating an in-loop filtering method according to an exemplary embodiment. The method illustrated in FIG. 4 may be performed by the in-loop filter unit 130 shown in FIG. 1 .
- in step 410 , a type of a boundary of an image block is determined by using context information, and a filter corresponding to the determined type is created to minimize bitrate distortion between a reconstructed image and an original image corresponding thereto.
- a single slice includes a plurality of image blocks, boundaries of which may be classified into different types according to context information of adjacent image blocks.
- a plurality of filters may be created. For example, when a slice includes 4 image blocks and boundaries of the 4 image blocks are classified into predetermined N block types, a total of N filters may be created for the slice.
- in step 420 , an optimal filter is selected between the filter created for the determined type and a previously created and stored filter.
- a method for selecting the optimal filter may include comparing cost functions of two target filters.
- the cost function is a function generated by considering at least one of overhead at the time of delivering filter information regarding a filter to a receiving side, filtering performance of the filter (e.g., an error rate between a reconstructed image and an original image), and the like. If there is no previously stored filter, the currently created filter may be selected.
- if it is determined in step 430 that the selected filter is the filter created in step 410 , the previously stored filter is updated with the currently created filter in step 440 .
- the update of the filter may be performed by replacing filter coefficient information of the stored filter with filter coefficient information of the created filter. If there is no previously stored filter, filter information regarding the currently created filter may be stored.
- in step 450 , filtering is performed on an image block by using the selected filter.
- if it is determined in step 430 that the selected filter is not the currently created filter (i.e., the selected filter is the previously stored filter), filtering is performed by using the selected filter, i.e., the previously stored filter, in step 450 .
- in step 460 , filter information regarding the filter which is used for filtering is generated and encoded for transmission to a decoder of the receiving side.
- the filter information may vary according to the type of the selected filter.
- the filter information is generated by using filter coefficients of the currently created filter.
- the amount or size of generated filter information may be reduced by using at least one of characteristics of the filter coefficients of the created filter (for example, symmetry) and a sum of the filter coefficients.
- the generated filter information may include at least one of index information for identifying the previously stored filter or information in the form of a flag having information indicating the reuse of the stored filter.
- FIG. 5 is a block diagram illustrating a video encoding apparatus 500 according to an exemplary embodiment.
- the video encoding apparatus 500 includes an image predicting unit 510 , a difference signal generating unit 520 , an encoding unit 530 , a reconstructed image generating unit 540 , and an in-loop filter unit 550 .
- the image predicting unit 510 generates a predicted image of a current frame 501 from a reference frame 519 .
- the current frame 501 corresponds to an original image which is to be currently encoded
- the reference frame 519 corresponds to a reference image to which an in-loop filter has been applied.
- the image predicting unit 510 includes a motion estimating unit 511 , an intra-prediction selecting unit 513 , a motion compensating unit 515 , and an intra-predicting unit 517 .
- the motion estimating unit 511 estimates a motion of the current frame 501 by using the reference frame 519
- the motion compensating unit 515 compensates for a motion of the reference frame 519 .
- the intra-prediction selecting unit 513 generates the predicted image of the current frame 501 from a reconstructed image through the intra-predicting unit 517 .
- the difference signal generating unit 520 subtracts the predicted image generated by the image predicting unit 510 from the current frame 501 , thus generating a difference signal D.
- the encoding unit 530 includes a Discrete Cosine Transform (DCT) unit 531 which performs DCT on the difference signal, a quantizing unit 533 which performs quantization, a re-arranging unit 535 which re-arranges quantized data, and an entropy encoding unit 537 which performs entropy encoding on the re-arranged data.
- the reconstructed image generating unit 540 may include an inverse quantizing unit 541 , an Inverse Discrete Cosine Transform (IDCT) unit 543 , and an adder 545 .
- the reconstructed image generating unit 540 reconstructs an image by compensating for an original block during encoding of a current block or a current frame.
- the in-loop filter unit 550 applies filtering to the reconstructed image based on an adaptive in-loop filter for the reconstructed image output from the reconstructed image generating unit 540 .
- the in-loop filter unit 550 includes a filter creating unit 551 , a filter selecting unit 556 , a filter storing unit 553 , a filtering performing unit 555 , and a filter information generating unit 557 .
- the operations of the filter creating unit 551 , the filter selecting unit 556 , the filter storing unit 553 , the filtering performing unit 555 , and the filter information generating unit 557 are substantially similar to those of the filter creating unit 210 , the filter selecting unit 240 , the filter storing unit 230 , the filtering performing unit 250 , and the filter information generating unit 270 illustrated in FIG. 2 , and thus will not be described in detail herein.
- FIG. 6 is a block diagram illustrating a video decoding apparatus 600 according to an exemplary embodiment.
- the video decoding apparatus 600 includes a decoding unit 610 , a reconstructed image generating unit 620 , and an in-loop filter unit 630 .
- the decoding unit 610 restores a difference signal by decoding an input bitstream.
- the decoding unit 610 includes an entropy decoding unit 611 , a re-arranging unit 613 which re-arranges decoded data, an inverse quantizing unit 615 which performs inverse quantization, and an IDCT unit 617 which performs IDCT.
- the reconstructed image generating unit 620 generates the reconstructed image by using the difference signal and a predicted image.
- the reconstructed image generating unit 620 includes a motion compensating unit 621 which performs motion compensation from a reference frame 625 , an intra-predicting unit 623 which performs intra-prediction, and an adder 627 which generates the reconstructed image by adding the predicted image to the difference signal.
- the in-loop filter unit 630 includes a filter information receiving unit 631 , a filter storing unit 633 , and a filtering performing unit 635 .
- the filter information receiving unit 631 receives filter information included in the input bitstream from the decoding unit 610 . The received filter information may be filter coefficient information, or may include at least one of index information for identifying a previously stored filter and information in the form of, for example, a flag indicating the reuse of the stored filter.
- the filter information receiving unit 631 generates filter coefficients or index information or flag information of a filter stored in the filter storing unit 633 by using the received filter information.
- the filter information receiving unit 631 delivers the filter coefficient information to the filtering performing unit 635 and the filter storing unit 633 .
- the filtering performing unit 635 delivers the index information or the flag information to the filter storing unit 633 .
- the filter storing unit 633 when receiving the filter coefficient information from the filter information receiving unit 631 , updates a previously stored filter by using the filter coefficient information.
- the filter storing unit 633 when receiving index information from the filter information receiving unit 631 , selects a filter corresponding to the index information from among previously stored filters and delivers a filter coefficient of the selected filter to the filtering performing unit 635 .
- the filter storing unit 633 delivers a filter coefficient of a currently stored filter to the filtering performing unit 635 .
- the filtering performing unit 635 performs filtering on the reconstructed image by using the filter coefficient information delivered from the filter information receiving unit 631 or the filter storing unit 633 .
- FIG. 7 is a flowchart illustrating an in-loop filtering method performed by a video decoding apparatus according to an exemplary embodiment. The method illustrated in FIG. 7 may be performed by the in-loop filter unit 630 shown in FIG. 6 .
- step 710 filter information transmitted from a video encoding apparatus is received.
- step 720 it is determined whether the received filter information is filter coefficient information or filter identifying information, e.g., index information for identifying a filter or information in the form of a flag indicating the reuse of a previously stored filter.
- filter identifying information e.g., index information for identifying a filter or information in the form of a flag indicating the reuse of a previously stored filter.
- filtering is performed by using a currently created filter according to the filter coefficient information in step 730 , and a previously stored filter is updated in step 750 .
- the filter information is the filter identifying information
- filtering is performed on an image block by using the previously stored filter according to the filter identification information.
- exemplary embodiments adaptively create a filter which minimizes bitrate distortion between a reconstructed image and an original image by using context information of the reconstructed image, selects an optimal filter between the created filter and a previously stored filter, and performs filtering by using the selected filter, thereby allowing optimal filtering according to characteristics of an image.
- more precise prediction during motion estimation and compensation is provided, together with improvement in encoding efficiency.
- exemplary embodiments can also be embodied as computer-readable code on a computer-readable recording medium.
- the computer-readable recording medium is any data storage device that can store data that can be thereafter read by a computer system. Examples of the computer-readable recording medium include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices.
- the computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion.
- exemplary embodiments may be written as computer programs transmitted over a computer-readable transmission medium, such as a carrier wave, and received and implemented in general-use or special-purpose digital computers that execute the programs.
- one or more units of the video encoding apparatus 500 and the video decoding apparatus 600 can include a processor or microprocessor executing a computer program stored in a computer-readable medium.
Abstract
Description
- This application claims the benefit of U.S. Provisional Application No. 61/221,780, filed on Jun. 30, 2009, the entire disclosure of which is hereby incorporated by reference.
- 1. Field
- Apparatuses and methods consistent with exemplary embodiments generally relate to a video encoding and decoding apparatus and method, and more particularly, to a video encoding and decoding apparatus and method using an adaptive in-loop filter, in which the efficiency of video encoding is improved by enhancing the performance of the in-loop filter.
- 2. Description of Related Art
- Encoding techniques for video compression, such as H.261, H.263, Moving Picture Experts Group (MPEG)-1, MPEG-2, MPEG-4, and H.264/Advanced Video Coding (AVC), generally include processes such as motion estimation/compensation, transform and quantization, entropy encoding, and so forth.
- Such video encoding techniques perform encoding in block units. However, due to block-based transform and quantization, visually artificial discontinuity in pixel values occurs at block boundaries, which is referred to as a blocking phenomenon.
- To alleviate quality degradation caused by the blocking phenomenon, H.264/AVC uses, as an in-loop filter, a deblocking filter which allows fine motion estimation/compensation in an encoding process.
- However, the deblocking filter used in H.264/AVC is designed to be suitable for low-bitrate images. As a result, for high-definition images having a high bitrate, the deblocking filter may have no effect or even degrade an encoding performance.
- That is, a related art in-loop filter applies a plurality of statistic filters to images, which have been reconstructed after encoding, based on context information. The context information includes an encoding mode, an image block boundary unit, a transform coefficient, a motion size, reference frame information, and the like.
- However, the related art in-loop filter is suitable only for an image having a particular size or particular characteristics due to the use of the statistic filters. Consequently, the in-loop filter may have no effect on images having various characteristics or may even degrade encoding performance for such images.
- Aspects of one or more exemplary embodiments provide a method and apparatus for creating an adaptive in-loop filter, taking account of characteristics of an image during video encoding.
- Moreover, aspects of one or more exemplary embodiments provide a method and apparatus for creating an adaptive in-loop filter and updating the in-loop filter during video encoding.
- Furthermore, aspects of one or more exemplary embodiments provide a method and apparatus for creating an adaptive in-loop filter, taking account of characteristics of an image during video decoding.
- In addition, aspects of one or more exemplary embodiments provide a method and apparatus for updating an adaptive in-loop filter during video decoding.
- According to an aspect of an exemplary embodiment, there is provided an in-loop filtering method for video encoding, the method including: determining a type of a boundary of an image block to be filtered by using context information of the image block; adaptively creating a filter for filtering the boundary of the image block according to the determined type; selecting a filter for filtering the image block between the created filter and a previously stored filter; and filtering the image block by using the selected filter.
- According to an aspect of another exemplary embodiment, there is provided an in-loop filtering apparatus for video encoding, the apparatus including: a filter creating unit which determines a type of a boundary of an image block to be filtered by using context information of the image block, and which adaptively creates a filter for filtering the boundary of the image block according to the determined type; a filter selecting unit which selects a filter for filtering the image block between the created filter and a previously stored filter; and a filtering performing unit which filters the image block by using the selected filter.
- According to an aspect of another exemplary embodiment, there is provided an in-loop filtering method for video decoding, the method including: receiving filter information regarding a filter used to filter an image block from a video encoding apparatus; when the received filter information includes filter coefficient information, filtering the image block by using a filter corresponding to the filter coefficient information; and when the received filter information includes information of a previously stored filter, filtering the image block by using a filter corresponding to the information.
- According to an aspect of another exemplary embodiment, there is provided an in-loop filtering apparatus for video decoding, the apparatus including: a filter information receiving unit which receives filter information regarding a filter used to filter an image block from a video encoding apparatus; and a filtering performing unit which filters the image block by using a filter corresponding to filter coefficient information when the received filter information includes the filter coefficient information, and which filters the image block by using a filter corresponding to index information when the received filter information includes the index information of a previously stored filter.
- According to an aspect of another exemplary embodiment, there is provided a method of adaptively creating a filter for an in-loop filtering of an image block, the method including: determining a type of a boundary of the image block by using context information of the image block; and adaptively creating the filter for filtering the boundary of the image block according to the determined type of the boundary.
- According to an aspect of another exemplary embodiment, there is provided an in-loop filtering method for video decoding, the in-loop filtering method including: receiving filter information regarding a filter used to filter an image block from a video encoding apparatus; and filtering the image block by using the filter, wherein the filter is adaptively created based on a type of a boundary of the image block.
- The above and/or other aspects will be more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
-
FIG. 1 is a block diagram illustrating a video encoding apparatus according to an exemplary embodiment; -
FIG. 2 is a block diagram illustrating an example of an in-loop filter according to an exemplary embodiment; -
FIG. 3 illustrates examples of a two-dimensional (2D) filter having a size of N×N and a one-dimensional (1D) filter having a size of N according to an exemplary embodiment; -
FIG. 4 is a flowchart illustrating an in-loop filtering method performed by an encoder according to an exemplary embodiment; -
FIG. 5 is a block diagram illustrating a video encoding apparatus according to an exemplary embodiment; -
FIG. 6 is a block diagram illustrating a video decoding apparatus according to an exemplary embodiment; and -
FIG. 7 is a flowchart illustrating an in-loop filtering method performed by a decoder according to an exemplary embodiment.
- Hereinafter, exemplary embodiments will be described in detail with reference to the accompanying drawings. Throughout the drawings, like components are referred to by like reference numerals. In the following description, well-known functions or constructions are not described in detail since they would obscure the invention in unnecessary detail. Expressions such as "at least one of," when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list.
- Before exemplary embodiments are described in detail, a basic concept of one or more exemplary embodiments will now be described in brief. To create an in-loop filter used for video encoding and decoding, one or more exemplary embodiments determine a type of a boundary of an image block by using context information of the image block, create the filter adaptively according to the determined type, and select an optimal filter between the created filter and a previously created and stored filter. By using the selected filter, filtering is performed. Hereinafter, exemplary embodiments will be described in detail.
-
FIG. 1 is a block diagram illustrating a video encoding apparatus 100 according to an exemplary embodiment. Referring to FIG. 1 , the video encoding apparatus 100 includes an encoding unit 110, a reconstructed image generating unit 120, and an in-loop filter unit 130.
- The encoding unit 110 encodes a difference signal between an original image to be currently encoded and a predicted image corresponding to the original image. The encoding unit 110 encodes the difference signal through at least one of Discrete Cosine Transform (DCT), quantization, entropy encoding, and the like. The DCT, quantization, and entropy encoding are widely used in the H.264 standard and thus will not be described in detail.
- The reconstructed image generating unit 120 reconstructs the encoded difference signal and generates a reconstructed image by using the reconstructed difference signal and the predicted image. More specifically, the reconstructed image generating unit 120 reconstructs the encoded difference signal through inverse quantization and Inverse Discrete Cosine Transform (IDCT), and adds the reconstructed signal to the predicted image, thus generating the reconstructed image.
- The in-loop filter unit 130 performs filtering on the reconstructed image in the unit of an image block based on a scheme according to an exemplary embodiment. The image block unit may not be fixed and may be variable. For example, an image may be divided into quadtree units and different filters may be applied to blocks having different sizes for filtering. The image block unit may be referred to by various names, for example, a coding unit, a prediction unit, and a transform unit.
- To be specific, the in-loop filter unit 130 determines a type of a block boundary of the reconstructed image by using context information, and creates a filter corresponding to the determined type in order to minimize bitrate distortion between the reconstructed image and the original image corresponding to the reconstructed image. Thereafter, the in-loop filter unit 130 compares the created filter with one or more previously stored filters to select an optimal filter therefrom, and performs filtering by using the selected filter.
- To create a filter, various context information may be used. For example, the filter may be created according to at least one of a boundary strength of an image block, a pixel position with respect to a boundary of an image block, a macroblock encoding mode, whether a macroblock encoding mode is a skip mode, a Coded Block Pattern (CBP), a Quantization Parameter (QP), and a motion vector.
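The reconstruction path described above for the reconstructed image generating unit 120 (inverse quantization, IDCT, adder) can be sketched as follows. The scalar quantization step and the identity-transform default are simplifying assumptions for illustration; the H.264 standard uses per-coefficient scaling and a defined integer inverse transform.

```python
def reconstruct_block(residual_q, prediction, qstep, idct=lambda coeffs: coeffs):
    """Sketch of the reconstructed image generating unit 120: inverse-quantize
    the coded residual, inverse-transform it, and add the prediction.

    `qstep` is a simplified scalar quantization step and `idct` a placeholder
    for a real 2D IDCT (both are assumptions, not the standard's exact rules).
    """
    residual = idct([c * qstep for c in residual_q])      # inverse quantization + IDCT
    return [p + r for p, r in zip(prediction, residual)]  # adder: reconstructed samples
```

The reconstructed samples produced this way are what the in-loop filter unit 130 subsequently filters.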
- The created deblocking filter may be a one-dimensional (1D) filter having a size of N or a two-dimensional (2D) filter having a size of N×N. Furthermore, the size of the created deblocking filter may not be fixed and may be variable. The in-loop filter unit 130 filters the reconstructed image by using the created filter or the stored filter, and information related to the filter used for the filtering is encoded by the encoding unit 110.
- Hereinafter, the in-loop filter unit 130 will be described in detail with reference to FIG. 2 . FIG. 2 is a diagram for describing an example of the in-loop filter unit 130 according to an exemplary embodiment. Referring to FIG. 2 , the in-loop filter unit 130 includes a filter creating unit 210, a filter storing unit 230, a filter selecting unit 240, a filtering performing unit 250, and a filter information generating unit 270.
- The filter creating unit 210 determines a type of a boundary of an image block by using context information, creates a filter corresponding to the determined type in order to minimize bitrate distortion between a reconstructed image and an original image corresponding thereto, and delivers filter coefficient information of the created filter to the filter selecting unit 240.
- The context information may include, for example, at least one of a boundary strength of an image block, a pixel position with respect to a boundary of an image block, a macroblock encoding mode, whether or not a macroblock encoding mode is a skip mode, a CBP, a QP, a motion vector, etc. That is, the filter creating unit 210 determines a type of a boundary of an image block by using at least one of the context information and creates a filter corresponding to the determined type.
- Herein, a single slice includes a plurality of image blocks, boundaries of which may have different types. Thus, a plurality of filters may be created for the single slice. For example, if boundaries are classified into a total of 4 types according to context information, 4 filters may be created for a single slice. In this situation, if different filters are created for horizontal boundaries and vertical boundaries, the total number of filters to be created may be 8. The number of boundary types classified according to the context information may be selectively determined by the encoding unit 110.
- A size and a dimension of a filter created by the filter creating unit 210 may not be fixed. For example, the filter created by the filter creating unit 210 may be a 1D filter having a size of N or a 2D filter having a size of N×N. FIG. 3 illustrates examples of a 2D filter 310 having a size of N×N and a 1D filter 311 having a size of N.
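The boundary classification described above (4 types from context information, doubled to 8 when horizontal and vertical boundaries get separate filters) can be sketched as follows. The specific thresholds on boundary strength, QP, and skip mode are illustrative assumptions; the patent leaves the classification rule to the encoder.

```python
def boundary_filter_index(bs, qp, skip_mode, orientation, num_types=4):
    """Map context information of a block boundary to one of num_types * 2
    filter indices.

    The rule below (skip mode first, then boundary strength `bs`, then QP)
    is a made-up example; only the grouping into N types is from the text.
    """
    if skip_mode:      # boundaries of skipped macroblocks
        t = 0
    elif bs >= 3:      # strong boundaries (e.g., intra-coded neighbors)
        t = 1
    elif qp > 30:      # coarse quantization, likely visible blocking
        t = 2
    else:
        t = 3
    # Separate horizontal/vertical filters: 4 boundary types become 8 filters.
    return t if orientation == "horizontal" else t + num_types
```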
- According to a first method, both a horizontal filter and a vertical filter are applied to an “on” image block and neither the horizontal filter nor the vertical filter are applied to an “off” image block. In this case, for indication, each image block may use 1-bit flag information.
- According to a second method, a horizontal filter and a vertical filter are independently applied or not applied. To this end, each image block may use 2-bits of flag information. In this case, it may be indicated whether the horizontal filter is applied to a current image block according to flag information controlling the horizontal filter, and it may also be indicated whether the vertical filter is applied to the current image block according to flag information controlling the vertical filter.
- When filtering is performed by using an adaptively created filter, filter coefficient information of the created filter is transmitted to a receiving side. Therefore, an amount or size of filter coefficient information may be minimized according to one or more exemplary embodiments. To this end, filter coefficients of the created filter may be configured to be symmetric and a sum thereof may be set to a predetermined value.
- For example, for a 5-tap 1D filter having a size of 5, let a sum of filter coefficients A, B, C, B, A be 1 (2A+2B+C=1). In this case, information of only the first filter coefficient A and the second coefficient B among the 5 filter coefficients may be transmitted to a decoder, since the 5 filter coefficients are symmetric and a sum of the 5 filter coefficients is 1. Thus, using the information of the first and second filter coefficients, the 3 other filter coefficients can be decoded.
- The
filter selecting unit 240 selects an optimal filter between the filter created by thefilter creating unit 210 and the filter stored in thefilter storing unit 230 and delivers filter information regarding the selected filter to thefiltering performing unit 250. A method for selecting the optimal filter may, for example, be performed by comparing cost functions of two target filters. The cost function is a function generated by considering at least one of overhead at the time of delivering filter information regarding a filter to a receiving side, filtering performance of the filter (e.g., an error rate between a reconstructed image and an original image), and the like. If there is no previously stored filter, thefilter selecting unit 240 may select the filter currently created by thefilter creating unit 210. - If the
filter selecting unit 240 selects the currently created filter, filter coefficient information of the created filter is delivered to thefilter storing unit 230 and to the filterinformation generating unit 270. If thefilter selecting unit 240 selects the filter stored in thefilter storing unit 230, index information of the selected filter is delivered to the filterinformation generating unit 270. - The
filter storing unit 230 updates a previously stored filter by using the filter coefficient information delivered from thefilter selecting unit 240. The update of the filter may be performed by replacing the filter coefficient information of the stored filter with the filter coefficient information of the created filter. If there is no previously stored filter, thefilter storing unit 230 may store filter information regarding the currently created filter. - The
filtering performing unit 250 performs filtering on an image block by using the filter selected by thefilter selecting unit 240. - The filter
information generating unit 270 generates filter information by using information delivered from thefilter selecting unit 240 and delivers the filter information to theencoding unit 110. The filter information is encoded by theencoding unit 110, and is then transmitted to a decoder of the receiving side. The filter information generated by the filterinformation generating unit 270 may vary according to a type of the selected filter. - If the
filter selecting unit 240 selects the filter created by thefilter creating unit 210, the filterinformation generating unit 270 may generate filter information by using filter coefficients of the currently created filter. The amount or size of the generated filter information may be reduced by using at least one of characteristics of the filter coefficients of the created filter (for example, symmetry) and a sum of the filter coefficients. - If the
filter selecting unit 240 selects the filter stored in thefilter storing unit 230, the filterinformation generating unit 270 may generate, as the filter information, index information for identifying the previously stored filter or information in the form of a flag having information indicating the reuse of the stored filter. -
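The cost-function comparison performed by the filter selecting unit 240 can be sketched as a Lagrangian rate-distortion cost. The exact cost form and the lambda value are assumptions, since the text says only that the cost weighs filtering performance against filter-information overhead:

```python
def select_filter(created, stored, distortion, bits, lam=0.85):
    """Pick the lower-cost filter between the created and the stored one.

    `distortion` maps a filter name to its reconstruction error and `bits`
    to its signaling overhead; cost = D + lambda * R is an assumed
    Lagrangian form. With no stored filter, the created one is selected,
    as described above.
    """
    if stored is None:
        return created
    cost = lambda f: distortion[f] + lam * bits[f]
    return created if cost(created) <= cost(stored) else stored
```

Here a newly created filter can lose to a stored one even with lower distortion, because its coefficients must be transmitted while a stored filter costs only an index.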
FIG. 4 is a flowchart illustrating an in-loop filtering method according to an exemplary embodiment. The method illustrated in FIG. 4 may be performed by the in-loop filter unit 130 shown in FIG. 1 .
- Referring to FIG. 4 , in step 410, a type of a boundary of an image block is determined by using context information, and a filter corresponding to the determined type is created to minimize bitrate distortion between a reconstructed image and an original image corresponding thereto. A single slice includes a plurality of image blocks, boundaries of which may be classified into different types according to context information of adjacent image blocks. For the single slice, a plurality of filters may be created. For example, when a slice includes 4 image blocks and boundaries of the 4 image blocks are classified into predetermined N block types, a total of N filters may be created for the slice.
- In step 420, an optimal filter is selected between the filter created for the determined type and a previously created and stored filter. A method for selecting the optimal filter may include comparing cost functions of two target filters. For example, the cost function is generated by considering at least one of overhead at the time of delivering filter information regarding a filter to a receiving side, filtering performance of the filter (e.g., an error rate between a reconstructed image and an original image), and the like. If there is no previously stored filter, the currently created filter may be selected.
- If it is determined in step 430 that the selected filter is the filter created in step 410, the previously stored filter is updated with the currently created filter in step 440. The update of the filter may be performed by replacing filter coefficient information of the stored filter with filter coefficient information of the created filter. If there is no previously stored filter, filter information regarding the currently created filter may be stored. In step 450, filtering is performed on an image block by using the selected filter.
- If it is determined in step 430 that the selected filter is not the currently created filter (i.e., the selected filter is the previously stored filter), filtering is performed by using the selected filter, i.e., the previously stored filter, in step 450.
- In step 460, filter information regarding the filter used for filtering is generated and encoded for transmission to a decoder of the receiving side. The filter information may vary according to the type of the selected filter.
- If the previously stored filter is selected, the generated filter information may include at least one of index information for identifying the previously stored filter or information in the form of a flag having information indicating the reuse of the stored filter.
-
FIG. 5 is a block diagram illustrating a video encoding apparatus 500 according to an exemplary embodiment. Referring to FIG. 5 , the video encoding apparatus 500 includes an image predicting unit 510, a difference signal generating unit 520, an encoding unit 530, a reconstructed image generating unit 540, and an in-loop filter unit 550.
- The image predicting unit 510 generates a predicted image of a current frame 501 from a reference frame 519. The current frame 501 corresponds to an original image which is to be currently encoded, and the reference frame 519 corresponds to a reference image to which an in-loop filter has been applied.
- The image predicting unit 510 includes a motion estimating unit 511, an intra-prediction selecting unit 513, a motion compensating unit 515, and an intra-predicting unit 517. The motion estimating unit 511 estimates a motion of the current frame 501 by using the reference frame 519, and the motion compensating unit 515 compensates for a motion of the reference frame 519. When a predicted image is to be generated in an intra mode, the intra-prediction selecting unit 513 generates the predicted image of the current frame 501 from a reconstructed image through the intra-predicting unit 517.
- The difference signal generating unit 520 subtracts the predicted image generated by the image predicting unit 510 from the current frame 501, thus generating a difference signal D.
- The encoding unit 530 includes a Discrete Cosine Transform (DCT) unit 531 which performs DCT on the difference signal, a quantizing unit 533 which performs quantization, a re-arranging unit 535 which re-arranges quantized data, and an entropy encoding unit 537 which performs entropy encoding on the re-arranged data.
- The reconstructed image generating unit 540 may include an inverse quantizing unit 541, an Inverse Discrete Cosine Transform (IDCT) unit 543, and an adder 545. For encoding of a next block or a next frame, the reconstructed image generating unit 540 reconstructs an image by compensating for an original block during encoding of a current block or a current frame.
- The in-loop filter unit 550 applies filtering to the reconstructed image based on an adaptive in-loop filter for the reconstructed image output from the reconstructed image generating unit 540. The in-loop filter unit 550 includes a filter creating unit 551, a filter selecting unit 556, a filter storing unit 553, a filtering performing unit 555, and a filter information generating unit 557. It is understood that the operations of the filter creating unit 551, the filter selecting unit 556, the filter storing unit 553, the filtering performing unit 555, and the filter information generating unit 557 are substantially similar to those of the filter creating unit 210, the filter selecting unit 240, the filter storing unit 230, the filtering performing unit 250, and the filter information generating unit 270 illustrated in FIG. 2 , and thus will not be described in detail herein.
-
FIG. 6 is a block diagram illustrating avideo decoding apparatus 600 according to an exemplary embodiment. Referring toFIG. 6 , thevideo decoding apparatus 600 includes adecoding unit 610, a reconstructedimage generating unit 620, and an in-loop filter unit 630. - The
decoding unit 610 restores a difference signal by decoding an input bitstream. Thedecoding unit 610 includes anentropy decoding unit 611, are-arranging unit 613 which re-arranges decoded data, aninverse quantizing unit 615 which performs inverse quantization, and anIDCT unit 617 which performs IDCT. - The reconstructed
image generating unit 620 generates the reconstructed image by using the difference signal and a predicted image. The reconstructed image generating unit 620 includes a motion compensating unit 621 which performs motion compensation from a reference frame 625, an intra-predicting unit 623 which performs intra-prediction, and an adder 627 which generates the reconstructed image by adding the predicted image to the difference signal.
- The in-loop filter unit 630 includes a filter information receiving unit 631, a filter storing unit 633, and a filtering performing unit 635.
- The filter information receiving unit 631 receives filter information included in the input bitstream from the decoding unit 610. If the received filter information is not filter coefficient information, it may include at least one of index information for identifying a previously stored filter and information in the form of, for example, a flag indicating the reuse of the stored filter.
- The filter information receiving unit 631 generates filter coefficient information, or index or flag information identifying a filter stored in the filter storing unit 633, from the received filter information.
- When the generated information is filter coefficient information, the filter information receiving unit 631 delivers the filter coefficient information to the filtering performing unit 635 and the filter storing unit 633. When the generated information is index information or flag information, the filter information receiving unit 631 delivers the index information or the flag information to the filter storing unit 633.
- The filter storing unit 633, when receiving the filter coefficient information from the filter information receiving unit 631, updates a previously stored filter by using the filter coefficient information. The filter storing unit 633, when receiving index information from the filter information receiving unit 631, selects the filter corresponding to the index information from among previously stored filters and delivers the filter coefficients of the selected filter to the filtering performing unit 635. When receiving flag information, the filter storing unit 633 delivers the filter coefficients of the currently stored filter to the filtering performing unit 635.
- The filtering performing unit 635 performs filtering on the reconstructed image by using the filter coefficient information delivered from the filter information receiving unit 631 or the filter storing unit 633.
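The filtering step itself amounts to applying the delivered coefficients to each pixel of the reconstructed image. As a rough sketch only: the function name, the square odd-sized kernel, and the edge-replication padding below are illustrative assumptions, not details prescribed by the patent.

```python
def filter_block(block, coeffs):
    """Apply a 2-D FIR filter (e.g. a 3x3 kernel) to a reconstructed block.

    Pixels outside the block are padded by edge replication, a common
    choice; the patent does not mandate a specific boundary handling.
    """
    h, w = len(block), len(block[0])
    k = len(coeffs)      # square, odd-sized kernel assumed
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy = min(max(y + dy, 0), h - 1)  # edge replication
                    xx = min(max(x + dx, 0), w - 1)
                    acc += coeffs[dy + r][dx + r] * block[yy][xx]
            out[y][x] = acc
    return out
```

With an identity kernel (center tap 1, all others 0) the block passes through unchanged, which is a convenient sanity check for any coefficient set the decoder reconstructs.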
FIG. 7 is a flowchart illustrating an in-loop filtering method performed by a video decoding apparatus according to an exemplary embodiment. The method illustrated in FIG. 7 may be performed by the in-loop filter unit 630 shown in FIG. 6.
- Referring to FIG. 7, in step 710, filter information transmitted from a video encoding apparatus is received.
- In step 720, it is determined whether the received filter information is filter coefficient information or filter identifying information, e.g., index information for identifying a filter or information in the form of a flag indicating the reuse of a previously stored filter.
- If the filter information is the filter coefficient information, filtering is performed by using a filter newly created according to the filter coefficient information in step 730, and a previously stored filter is updated in step 750.
- If the filter information is the filter identifying information, filtering is performed on an image block by using the previously stored filter identified by the filter identifying information.
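The branch taken in steps 720 through 750 can be sketched as follows. The function name, the `info` dictionary layout, and the single-slot `store` are hypothetical, and `apply_filter` stands in for the actual filtering operation; this is a hedged illustration of the control flow, not the patent's implementation.

```python
def decode_in_loop_filter(info, store, image, apply_filter):
    """Dispatch on received filter information (sketch of the FIG. 7 branch)."""
    if info["type"] == "coeff":
        # Filter coefficient information: filter with the newly received
        # coefficients (step 730), then update the stored filter (step 750).
        out = apply_filter(image, info["coeffs"])
        store["current"] = info["coeffs"]
    elif info["type"] == "index":
        # Filter identifying information: select a previously stored filter.
        out = apply_filter(image, store[info["index"]])
    else:
        # A reuse flag: apply the most recently stored filter as-is.
        out = apply_filter(image, store["current"])
    return out
```

Because the encoder and decoder update their stores identically, the index or flag alone is enough for both sides to agree on which filter is applied.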
- As described above, exemplary embodiments adaptively create a filter which minimizes rate-distortion cost between a reconstructed image and an original image by using context information of the reconstructed image, select an optimal filter between the created filter and a previously stored filter, and perform filtering by using the selected filter, thereby allowing optimal filtering according to the characteristics of an image. This enables more precise prediction during motion estimation and compensation, together with improved encoding efficiency.
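The encoder-side choice between a newly created filter and a stored one is a rate-distortion trade-off: new coefficients reduce distortion more but cost many bits to transmit, while a stored filter costs only an index or flag. A minimal sketch, assuming a standard Lagrangian cost J = D + λR (the function and its parameters are illustrative, not the patent's exact cost model):

```python
def select_filter(dist_new, bits_new, dist_stored, bits_stored, lam):
    """Compare Lagrangian costs J = D + lam * R for the two candidate filters."""
    j_new = dist_new + lam * bits_new          # created filter: pay for coefficients
    j_stored = dist_stored + lam * bits_stored # stored filter: pay only index/flag bits
    return ("new", j_new) if j_new <= j_stored else ("stored", j_stored)
```

A larger λ penalizes coefficient transmission more heavily, so at low bitrates the stored filter tends to win even when its distortion is somewhat higher.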
- While not restricted thereto, exemplary embodiments can also be embodied as computer-readable code on a computer-readable recording medium. The computer-readable recording medium is any data storage device that can store data that can thereafter be read by a computer system. Examples of the computer-readable recording medium include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices. The computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion. Also, exemplary embodiments may be written as computer programs transmitted over a computer-readable transmission medium, such as a carrier wave, and received and implemented in general-purpose or special-purpose digital computers that execute the programs. Moreover, while not required in all aspects, one or more units of the video encoding apparatus 500 and the video decoding apparatus 600 can include a processor or microprocessor executing a computer program stored in a computer-readable medium.
- While exemplary embodiments have been shown and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present inventive concept as defined by the appended claims.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/827,572 US20100329362A1 (en) | 2009-06-30 | 2010-06-30 | Video encoding and decoding apparatus and method using adaptive in-loop filter |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US22178009P | 2009-06-30 | 2009-06-30 | |
US12/827,572 US20100329362A1 (en) | 2009-06-30 | 2010-06-30 | Video encoding and decoding apparatus and method using adaptive in-loop filter |
Publications (1)
Publication Number | Publication Date |
---|---|
US20100329362A1 true US20100329362A1 (en) | 2010-12-30 |
Family
ID=43380720
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/827,572 Abandoned US20100329362A1 (en) | 2009-06-30 | 2010-06-30 | Video encoding and decoding apparatus and method using adaptive in-loop filter |
Country Status (2)
Country | Link |
---|---|
US (1) | US20100329362A1 (en) |
KR (1) | KR101749269B1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102014177B1 (en) * | 2011-05-04 | 2019-10-21 | 한국전자통신연구원 | Video encoding/decoding method using error-resilient in-loop filter and signaling method relating to the same |
KR101877867B1 (en) | 2012-02-21 | 2018-07-12 | 삼성전자주식회사 | Apparatus for correcting of in-loop pixel filter using parameterized complexity measure and method of the same |
MX2022007230A (en) | 2019-12-12 | 2022-09-21 | Lg Electronics Inc | Filtering-based image coding device and method. |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5355328A (en) * | 1991-09-27 | 1994-10-11 | Northshore Laboratories, Inc. | Resampling apparatus suitable for resizing a video image |
US5617135A (en) * | 1993-09-06 | 1997-04-01 | Hitachi, Ltd. | Multi-point visual communication system |
US6970597B1 (en) * | 2001-12-05 | 2005-11-29 | Pixim, Inc. | Method of defining coefficients for use in interpolating pixel values |
US20060078052A1 (en) * | 2004-10-08 | 2006-04-13 | Dang Philip P | Method and apparatus for parallel processing of in-loop deblocking filter for H.264 video compression standard |
US7120197B2 (en) * | 2001-12-17 | 2006-10-10 | Microsoft Corporation | Motion compensation loop with filtering |
US20060288065A1 (en) * | 2005-06-17 | 2006-12-21 | Docomo Communications Laboratories Usa, Inc. | Method and apparatus for lapped transform coding and decoding |
US7242815B2 (en) * | 1997-03-13 | 2007-07-10 | Nokia Corporation | Adaptive filter |
US20080084932A1 (en) * | 2006-10-06 | 2008-04-10 | Microsoft Corporation | Controlling loop filtering for interlaced video frames |
US20080247467A1 (en) * | 2007-01-09 | 2008-10-09 | Nokia Corporation | Adaptive interpolation filters for video coding |
US20090034622A1 (en) * | 2007-08-01 | 2009-02-05 | Her Majesty The Queen In Right Of Canada Represented By The Minister Of Industry | Learning Filters For Enhancing The Quality Of Block Coded Still And Video Images |
US20090097547A1 (en) * | 2007-10-14 | 2009-04-16 | Nokia Corporation | Fixed-Point Implementation of an Adaptive Image Filter with High Coding Efficiency |
US7613240B2 (en) * | 2001-09-14 | 2009-11-03 | Sharp Kabushiki Kaisha | Adaptive filtering based upon boundary strength |
US20100008417A1 (en) * | 2008-07-09 | 2010-01-14 | Lidong Xu | Video encoding techniques |
US20100027686A1 (en) * | 2006-12-18 | 2010-02-04 | Koninklijke Philips Electronics N.V. | Image compression and decompression |
US20100074329A1 (en) * | 2008-09-25 | 2010-03-25 | Chih-Ming Fu | Adaptive interpolation filter for video coding |
US20100098345A1 (en) * | 2007-01-09 | 2010-04-22 | Kenneth Andersson | Adaptive filter representation |
US20110286530A1 (en) * | 2009-01-26 | 2011-11-24 | Dong Tian | Frame packing for video coding |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100711725B1 (en) | 2005-11-16 | 2007-04-25 | 엘지전자 주식회사 | Method for filtering of deblocking in video telephony mobile phone |
2010
- 2010-06-30 US US12/827,572 patent/US20100329362A1/en not_active Abandoned
- 2010-06-30 KR KR1020100063211A patent/KR101749269B1/en active IP Right Grant
Cited By (60)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110222597A1 (en) * | 2008-11-25 | 2011-09-15 | Thomson Licensing | Method and apparatus for sparsity-based de-artifact filtering for video encoding and decoding |
US9723330B2 (en) * | 2008-11-25 | 2017-08-01 | Thomson Licensing Dtv | Method and apparatus for sparsity-based de-artifact filtering for video encoding and decoding |
US20110314253A1 (en) * | 2010-06-22 | 2011-12-22 | Jacob Yaakov Jeffrey Allan Alon | System, data structure, and method for transposing multi-dimensional data to switch between vertical and horizontal filters |
US8873617B2 (en) * | 2010-07-15 | 2014-10-28 | Sharp Laboratories Of America, Inc. | Method of parallel video coding based on same sized blocks |
US20120014436A1 (en) * | 2010-07-15 | 2012-01-19 | Sharp Laboratories Of America, Inc. | Parallel video coding based on block size |
US20120014440A1 (en) * | 2010-07-15 | 2012-01-19 | Sharp Laboratories Of America, Inc. | Parallel video coding based on mapping |
US20120014437A1 (en) * | 2010-07-15 | 2012-01-19 | Sharp Laboratories Of America, Inc. | Parallel video coding based on same sized blocks |
US8848779B2 (en) * | 2010-07-15 | 2014-09-30 | Sharp Laboratories Of America, Inc. | Method of parallel video coding based on block size |
US8855188B2 (en) * | 2010-07-15 | 2014-10-07 | Sharp Laboratories Of America, Inc. | Method of parallel video coding based on mapping |
US20120170645A1 (en) * | 2011-01-05 | 2012-07-05 | Qualcomm Incorporated | Video filtering using a combination of one-dimensional switched filter and one-dimensional adaptive filter |
KR101552029B1 (en) | 2011-01-05 | 2015-09-09 | 퀄컴 인코포레이티드 | Video filtering using a combination of one-dimensional switched filter and one-dimensional adaptive filter |
US9445126B2 (en) * | 2011-01-05 | 2016-09-13 | Qualcomm Incorporated | Video filtering using a combination of one-dimensional switched filter and one-dimensional adaptive filter |
WO2012094232A1 (en) * | 2011-01-05 | 2012-07-12 | Qualcomm Incorporated | Video filtering using a combination of one-dimensional switched filter and one-dimensional adaptive filter |
CN103283230A (en) * | 2011-01-05 | 2013-09-04 | 高通股份有限公司 | Video filtering using a combination of one-dimensional switched filter and one-dimensional adaptive filter |
US9807424B2 (en) | 2011-01-10 | 2017-10-31 | Qualcomm Incorporated | Adaptive selection of region size for identification of samples in a transition zone for overlapped block motion compensation |
US9071851B2 (en) | 2011-01-10 | 2015-06-30 | Qualcomm Incorporated | Adaptively performing smoothing operations |
US10257543B2 (en) | 2011-01-10 | 2019-04-09 | Qualcomm Incorporated | Identification of samples in a transition zone |
US9008180B2 (en) * | 2011-04-21 | 2015-04-14 | Intellectual Discovery Co., Ltd. | Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering |
US10785503B2 (en) * | 2011-04-21 | 2020-09-22 | Intellectual Discovery Co., Ltd. | Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering |
US20190158883A1 (en) * | 2011-04-21 | 2019-05-23 | Intellectual Discovery Co., Ltd. | Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering |
US11381844B2 (en) | 2011-04-21 | 2022-07-05 | Dolby Laboratories Licensing Corporation | Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering |
US10237577B2 (en) * | 2011-04-21 | 2019-03-19 | Intellectual Discovery Co., Ltd. | Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering |
US10129567B2 (en) * | 2011-04-21 | 2018-11-13 | Intellectual Discovery Co., Ltd. | Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering |
US9420312B2 (en) | 2011-04-21 | 2016-08-16 | Intellectual Discovery Co., Ltd. | Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering |
US20130208794A1 (en) * | 2011-04-21 | 2013-08-15 | Industry-University Cooperation Foundation Hanyang University | Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering |
US20160330485A1 (en) * | 2011-04-21 | 2016-11-10 | Intellectual Discovery Co., Ltd. | Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering |
US20160345025A1 (en) * | 2011-04-21 | 2016-11-24 | Intellectual Discovery Co., Ltd. | Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering |
US20130003871A1 (en) * | 2011-06-29 | 2013-01-03 | Cisco Technology Inc. | Video compression using an adaptive loop filter |
US9332278B2 (en) * | 2011-06-29 | 2016-05-03 | Cisco Technology, Inc. | Video compression using an adaptive loop filter |
CN102325256A (en) * | 2011-09-14 | 2012-01-18 | 河海大学常州校区 | Loop filtering method based on image macro block coding mode decision |
CN103051892A (en) * | 2011-10-14 | 2013-04-17 | 联发科技股份有限公司 | Method and apparatus for in-loop filtering |
US8913656B2 (en) * | 2011-10-14 | 2014-12-16 | Mediatek Inc. | Method and apparatus for in-loop filtering |
US20130094568A1 (en) * | 2011-10-14 | 2013-04-18 | Mediatek Inc. | Method and Apparatus for In-Loop Filtering |
US9462298B2 (en) | 2011-10-21 | 2016-10-04 | Qualcomm Incorporated | Loop filtering around slice boundaries or tile boundaries in video coding |
US9282344B2 (en) | 2011-11-04 | 2016-03-08 | Qualcomm Incorporated | Secondary boundary filtering for video coding |
US9838718B2 (en) | 2011-11-04 | 2017-12-05 | Qualcomm Incorporated | Secondary boundary filtering for video coding |
RU2716535C1 (en) * | 2011-11-07 | 2020-03-12 | Кэнон Кабусики Кайся | Method and device for optimizing encoding / decoding of compensation offsets for a set of restored image samples |
US10462493B2 (en) | 2011-11-07 | 2019-10-29 | Canon Kabushiki Kaisha | Method and device for optimizing encoding/decoding of compensation offsets for a set of reconstructed samples of an image |
US10771819B2 (en) | 2011-11-07 | 2020-09-08 | Canon Kabushiki Kaisha | Sample adaptive offset filtering |
RU2676410C1 (en) * | 2011-11-07 | 2018-12-28 | Кэнон Кабусики Кайся | Method and device for optimization of encoding/decoding compensation movements for set of restored image samples |
US10575020B2 (en) | 2011-11-07 | 2020-02-25 | Canon Kabushiki Kaisha | Method and device for providing compensation offsets for a set of reconstructed samples of an image |
RU2701130C1 (en) * | 2011-11-07 | 2019-09-24 | Кэнон Кабусики Кайся | Method and apparatus for optimizing encoding / decoding of compensation offsets for a set of restored image samples |
RU2702054C1 (en) * | 2011-11-07 | 2019-10-03 | Кэнон Кабусики Кайся | Method and apparatus for optimizing encoding / decoding of compensation offsets for a set of restored image samples |
US20130121424A1 (en) * | 2011-11-10 | 2013-05-16 | Stmicroelectronics Asia Pacific Pte Ltd. | Motion compensated de-blocking |
US9326007B2 (en) * | 2011-11-10 | 2016-04-26 | Stmicroelectronics Asia Pacific Pte. Ltd. | Motion compensated de-blocking |
US20160105672A1 (en) * | 2014-10-13 | 2016-04-14 | Futurewei Technologies, Inc. | System and Method for Depth Map Coding for Smooth Depth Map Area |
US10097838B2 (en) * | 2014-10-13 | 2018-10-09 | Futurewei Technologies, Inc. | System and method for depth map coding for smooth depth map area |
US11818394B2 (en) | 2016-12-23 | 2023-11-14 | Apple Inc. | Sphere projected motion estimation/compensation and mode decision |
US10999602B2 (en) | 2016-12-23 | 2021-05-04 | Apple Inc. | Sphere projected motion estimation/compensation and mode decision |
US11259046B2 (en) | 2017-02-15 | 2022-02-22 | Apple Inc. | Processing of equirectangular object data to compensate for distortion by spherical projections |
US10924747B2 (en) | 2017-02-27 | 2021-02-16 | Apple Inc. | Video coding techniques for multi-view video |
US11093752B2 (en) | 2017-06-02 | 2021-08-17 | Apple Inc. | Object tracking in multi-view video |
US10754242B2 (en) | 2017-06-30 | 2020-08-25 | Apple Inc. | Adaptive resolution and projection format in multi-direction video |
US20190005709A1 (en) * | 2017-06-30 | 2019-01-03 | Apple Inc. | Techniques for Correction of Visual Artifacts in Multi-View Images |
WO2020062074A1 (en) * | 2018-09-28 | 2020-04-02 | Hangzhou Hikvision Digital Technology Co., Ltd. | Reconstructing distorted images using convolutional neural network |
CN110047118A (en) * | 2019-04-08 | 2019-07-23 | 腾讯科技(深圳)有限公司 | Video generation method, device, computer equipment and storage medium |
US20210250597A1 (en) * | 2020-02-12 | 2021-08-12 | Tencent America LLC | Method and apparatus for cross-component filtering |
US11375221B2 (en) * | 2020-02-12 | 2022-06-28 | Tencent America LLC | Method and apparatus for cross-component filtering |
US20220279198A1 (en) * | 2020-02-12 | 2022-09-01 | Tencent America LLC | Method and apparatus for cross-component filtering |
US11778218B2 (en) * | 2020-02-12 | 2023-10-03 | Tencent America LLC | Method and apparatus for cross-component filtering |
Also Published As
Publication number | Publication date |
---|---|
KR101749269B1 (en) | 2017-06-22 |
KR20110001991A (en) | 2011-01-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20100329362A1 (en) | Video encoding and decoding apparatus and method using adaptive in-loop filter | |
EP3772261B1 (en) | Deblocking filter for video coding and processing | |
CN109417639B (en) | Method and apparatus for video encoding with adaptive cropping | |
US8542736B2 (en) | Method and apparatus for video coding using prediction data refinement | |
RU2575992C2 (en) | Deriving reference mode values and encoding and decoding information representing prediction modes | |
EP2324638B1 (en) | System and method for video encoding using adaptive loop filter | |
CN107347157B (en) | Video decoding device | |
US8432964B2 (en) | Image processing device, method, and program | |
US20100329361A1 (en) | Apparatus and method for in-loop filtering of image data and apparatus for encoding/decoding image data using the same | |
KR101370286B1 (en) | Method and apparatus for encoding and decoding image using modification of residual block | |
EP2299720A1 (en) | Dynamic image encoding/decoding method and device | |
KR101907122B1 (en) | Methods and apparatus for collaborative partition coding for region based filters | |
US20120321206A1 (en) | Image processing apparatus and image processing method | |
US20140119455A1 (en) | Image coding apparatus, image coding method, and program, and image decoding apparatus, image decoding method, and program | |
US20130121407A1 (en) | Video encoding device and video decoding device | |
KR101960470B1 (en) | A rate control method of video coding processes supporting off-line cabac based on a bit estimator and an appratus using it | |
US20130016768A1 (en) | Methods and apparatus for efficient adaptive filtering for video encoders and decoders | |
KR20150105348A (en) | Method and apparatus for encoding/decoding images using transform | |
CN112020860B (en) | Encoder, decoder and methods thereof for selective quantization parameter transmission | |
US9438907B2 (en) | Motion picture encoding apparatus | |
WO2013145174A1 (en) | Video encoding method, video decoding method, video encoding device, and video decoding device | |
KR20200015656A (en) | Method and apparatus of in-loop filtering by adaptive band offset bands, and appparatus for decoding and encoding by adaptive band offset bands | |
EP3244611A1 (en) | Method and apparatus for video coding with adaptive clipping of pixel values | |
KR20140129418A (en) | Method for intra-prediction using residual transform, and apparatus thereof | |
KR20140129409A (en) | Method for encoding and decoding image using transform, and apparatus thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, REPUBLIC OF Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHOI, BYEONG-DOO;CHO, DAE-SUNG;SIM, DONG-GYU;AND OTHERS;REEL/FRAME:024618/0557 Effective date: 20100630 Owner name: KWANGWOON UNIVERSITY INDUSTRY-ACADEMIC COLLABORATI Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHOI, BYEONG-DOO;CHO, DAE-SUNG;SIM, DONG-GYU;AND OTHERS;REEL/FRAME:024618/0557 Effective date: 20100630 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |