US20160014409A1 - Encoding and decoding device and method using intra prediction - Google Patents


Info

Publication number
US20160014409A1
US20160014409A1
Authority
US
United States
Prior art keywords
block
filtering
reference pixels
intra prediction
target block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/859,970
Inventor
Jinhan Song
Jeongyeon Lim
Yoonsik Choe
Yonggoo Kim
Yung Ho Choi
Ji Hong Kang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SK Telecom Co Ltd
Original Assignee
SK Telecom Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SK Telecom Co Ltd filed Critical SK Telecom Co Ltd
Priority to US14/859,970 priority Critical patent/US20160014409A1/en
Assigned to SK TELECOM CO., LTD. reassignment SK TELECOM CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KANG, JI HONG, CHOI, YUNG HO, KIM, YONGGOO, SONG, JINHAN, LIM, JEONGYEON, CHOE, YOONSIK
Publication of US20160014409A1 publication Critical patent/US20160014409A1/en

Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04N: Pictorial communication, e.g. television
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/11: Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N 19/593: Predictive coding involving spatial prediction techniques
    • H04N 19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/117: Filters, e.g. for pre-processing or post-processing
    • H04N 19/14: Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N 19/147: Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N 19/176: Adaptive coding in which the coding unit is an image region that is a block, e.g. a macroblock
    • H04N 19/182: Adaptive coding in which the coding unit is a pixel
    • H04N 19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N 19/46: Embedding additional information in the video signal during the compression process
    • H04N 19/50: Coding using predictive coding
    • H04N 19/61: Transform coding in combination with predictive coding
    • H04N 19/82: Details of filtering operations specially adapted for video compression, involving filtering within a prediction loop

Definitions

  • the present disclosure in one or more embodiments relates to encoding and decoding apparatus and method using intra prediction.
  • the state-of-the-art compression standard H.264/AVC enhances coding performance based on predictions using high correlations between neighboring pixels in intra coding.
  • the intra prediction of H.264/AVC provides a total of nine prediction modes for a block of 4×4 size as illustrated in FIG. 1A, and a total of four prediction modes for a block of 16×16 size as illustrated in FIG. 1B.
  • H.264/AVC provides a total of nine intra prediction modes for a block of 8×8 size, as with the 4×4 block, and first applies a filter that removes high-frequency components to the reference pixels of the neighboring blocks used for the prediction, smoothing the reference pixels prior to performing the prediction [Document 1].
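  • As an illustrative sketch of this kind of high-frequency-removal smoothing, the H.264/AVC 8×8 intra prediction smooths reference samples with a three-tap [1, 2, 1]/4 filter. The function name and the choice to leave boundary pixels unchanged are illustrative, not taken from this disclosure.

```python
def smooth_reference_pixels(ref):
    # Apply the [1, 2, 1] / 4 low-pass filter with rounding to a 1-D row
    # of reference pixels; boundary pixels are left unchanged in this sketch.
    out = list(ref)
    for i in range(1, len(ref) - 1):
        out[i] = (ref[i - 1] + 2 * ref[i] + ref[i + 1] + 2) >> 2
    return out
```

A flat row passes through unchanged, while an isolated spike is attenuated, which is exactly the loss of AC detail discussed below.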
  • the neighboring pixels may be changed to have values more suitable for the intra prediction to decrease intra prediction errors for target blocks.
  • the inventor(s) has experienced that the filtering for removing high-frequency components of the reference pixels attenuates AC components actually present among the original pixels, making it difficult to predict the details of the block data; in some cases, prediction performance or efficiency is degraded relative to the case in which the filtering is not applied.
  • the adaptive filtering method proposed by the Fraunhofer Heinrich Hertz Institute (HHI) compares, for each block to be intra predicted, the prediction error that occurs when the filtering for removing high-frequency components is applied against the prediction error that occurs without that filtering, and applies whichever alternative yields the smaller prediction error to the actual encoding, thereby reducing the prediction errors.
  • the inventor(s) has experienced that this requires the decoding apparatus to receive an additional item of information per prediction block unit indexing whether or not the filtering is applied, which can instead deteriorate compression efficiency for images that show little decrease in prediction error from the adaptive filtering.
  • An embodiment of the present disclosure provides an apparatus for decoding a video using an intra prediction, the apparatus comprising an encoded data extractor, an intra predictor, a residual data decoder, and an adder.
  • the encoded data extractor is configured to extract filtering information, information on an intra prediction mode to be applied to a target block and a transformed and quantized residual block corresponding to the target block.
  • the intra predictor is configured to calculate a reference pixel characteristic by using one or more of reference pixels within neighboring blocks of the target block, determine a filtering type to be applied to the reference pixels within the neighboring blocks, based at least on the extracted filtering information and the calculated reference pixel characteristic, and generate a predicted block, by adaptively filtering the reference pixels within the neighboring blocks depending on the determined filtering type and then predicting the target block from the adaptively-filtered reference pixels according to the information on the intra prediction mode.
  • the residual data decoder is configured to reconstruct a residual block by inversely quantizing and then inversely transforming the transformed and quantized residual block.
  • the adder is configured to add the predicted block to the residual block to reconstruct the target block.
  • Another embodiment of the present disclosure provides a method performed by an apparatus for decoding a video using an intra prediction, the method comprising: extracting filtering information from encoded data; calculating a reference pixel characteristic by using one or more of reference pixels within neighboring blocks of the target block; determining a filtering type to be applied to the reference pixels within the neighboring blocks, based at least on the extracted filtering information and the calculated reference pixel characteristic; generating a predicted block, by adaptively filtering the reference pixels within the neighboring blocks depending on the determined filtering type and then predicting the target block from the adaptively-filtered reference pixels; reconstructing a residual block of the target block from the encoded data; and reconstructing the target block by adding the predicted block to the residual block.
  • FIGS. 1A and 1B are diagrams of intra prediction modes
  • FIG. 2 is a block diagram of an encoding apparatus according to at least one embodiment of the present disclosure
  • FIG. 3 is a block diagram of a configuration of an intra predictor of the encoding apparatus according to at least one embodiment of the present disclosure
  • FIG. 4 is a diagram of a configuration of a reference pixel characteristics extractor 320 according to at least one embodiment of the present disclosure
  • FIG. 5 is a diagram illustrating an area within neighboring blocks to which the Sobel mask is applied for detecting edges
  • FIG. 6 is a block diagram of a configuration of a first intra predictor for performing an adaptive filtering-based intra prediction according to at least one embodiment of the present disclosure
  • FIG. 7 is an exemplary diagram of the first intra predictor for performing the adaptive filtering-based intra prediction according to one or more embodiments of the present disclosure
  • FIG. 8 is a flow chart of an encoding method according to at least one embodiment of the present disclosure.
  • FIG. 9 is a block diagram of a configuration of a decoding apparatus according to at least one embodiment of the present disclosure.
  • FIG. 10 is a block diagram of a configuration of an intra predictor of the decoding apparatus according to at least one embodiment of the present disclosure.
  • FIG. 11 is a flow chart of a decoding method according to at least one embodiment of the present disclosure.
  • Some embodiments of the present disclosure provide encoding and decoding apparatuses and methods which determine the reference pixel characteristics of reference pixels included in neighboring blocks of a target block to be encoded or decoded and, based on that determination, perform either the adaptive filtering-based intra prediction or the typical intra prediction, thereby decreasing the number of bits required to indicate whether a filtering is performed and improving coding efficiency.
  • terms such as first, second, A, B, (a), and (b) are used solely to differentiate one component from another and do not imply or suggest the substance, order, or sequence of the components.
  • when a component is described as ‘connected’, ‘coupled’, or ‘linked’ to another component, the components may be directly ‘connected’, ‘coupled’, or ‘linked’, or indirectly so via a third component.
  • FIG. 2 is a block diagram for illustrating an encoding apparatus according to at least one embodiment of the present disclosure.
  • the encoding apparatus may include an intra predictor 210 , a reference picture memory 220 , a residual data encoder 230 , a residual data decoder 240 , an entropy encoder 250 , and an encoded data generator 260 , and the like.
  • Each of the intra predictor 210 , the residual data encoder 230 , the residual data decoder 240 , the entropy encoder 250 , and the encoded data generator 260 is implemented by, or includes, one or more processors and/or application-specific integrated circuits (ASICs) specified for respectively corresponding operations and functions described herein in the present disclosure.
  • the reference picture memory 220 includes at least one non-transitory computer readable medium.
  • the video encoding apparatus further comprises input units (not shown in FIG. 2 ) such as one or more buttons, a touch screen, a mic and so on, and output units (not shown in FIG. 2 ) such as a display, an indicator and so on.
  • the encoding apparatus may represent a personal computer (PC), a notebook computer, a personal digital assistant (PDA), a portable multimedia player (PMP), a PlayStation Portable (PSP), a mobile communication terminal, and the like, and may mean various apparatuses each including a communication device such as a communication modem that performs communication with various devices or over wired/wireless communication networks (the wired or wireless networks include, for example, one or more network interfaces including, but not limited to, cellular, Wi-Fi, LAN, WAN, CDMA, WCDMA, GSM, LTE and EPC networks, and cloud computing networks), a memory that stores various programs and data for encoding images, and a microprocessor that executes the programs so as to perform calculation and control.
  • a video to be encoded may be input in a block unit, and the block may be a macroblock.
  • the macroblock is defined in a 16×16 form by the same method as the H.264/AVC standard, but a general form of macroblock may be M×N.
  • M and N may each be larger than 16 and may be different integers from each other or the same integer.
  • the intra predictor 210 uses reference pixel values available in a current block and neighboring blocks spatially located around the current block to generate an intra prediction block of the current block.
  • the intra prediction block is generated by calculating error values between the current block and the candidate prediction for each of the available intra prediction modes and applying the intra prediction mode having the minimum error value. Further, the intra prediction mode having the minimum error value is encoded, and the information on the intra prediction mode is transferred to the encoded data generator 260.
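  • The minimum-error mode selection described above can be sketched as follows. The disclosure does not specify the error measure; the sum of absolute differences (SAD) used here is a common choice and is an assumption.

```python
def select_intra_mode(target, candidates):
    # Sum of absolute differences between two equal-sized 2-D blocks.
    def sad(a, b):
        return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

    # Keep the mode whose predicted block has the minimum error value.
    best = min(candidates, key=lambda m: sad(target, candidates[m]))
    return best, sad(target, candidates[best])
```

Here `candidates` is a hypothetical mapping from mode index to predicted block; the selected mode index is what would be encoded and passed to the encoded data generator.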
  • the intra predictor 210 extracts reference pixels included in neighboring blocks of a target block to be encoded from the reference picture memory 220 , prior to generating the intra prediction block to determine the reference pixel characteristics. Further, it is determined whether to perform an adaptive filtering-based intra prediction or a typical intra prediction based on the determined reference pixel characteristics, and the intra prediction block for the target block to be encoded is generated by using any one of the prediction methods based on the determination.
  • the adaptive filtering-based intra prediction means an intra prediction method that outputs the more cost-efficient of two results: the intra prediction performed after applying the high-frequency filtering to the reference pixels included in the neighboring blocks, and the intra prediction performed without that filtering.
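  • A minimal sketch of this two-way trial, assuming 1-D reference pixels, a caller-supplied `predict` function, and SAD as the cost measure (all simplifications not specified by the disclosure):

```python
def adaptive_filtering_intra_prediction(predict, ref, target):
    # [1, 2, 1] / 4 high-frequency-removal filter (boundaries unchanged).
    def smooth(r):
        out = list(r)
        for i in range(1, len(r) - 1):
            out[i] = (r[i - 1] + 2 * r[i] + r[i + 1] + 2) >> 2
        return out

    # Sum of absolute differences as the prediction-error measure.
    def sad(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))

    unfiltered = predict(ref)
    filtered = predict(smooth(ref))
    # Keep whichever result is cost-efficient; the boolean is the filtering
    # information that would be signalled to the decoder.
    if sad(filtered, target) < sad(unfiltered, target):
        return filtered, True
    return unfiltered, False
```

The returned flag corresponds to the filtering information mentioned in the next bullet: it is emitted only when this adaptive path is taken.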
  • in this case, filtering information indicating whether the high-frequency filtering was applied to the intra prediction is output.
  • the intra predictor 210 will be described below in detail with reference to FIGS. 3 to 7 .
  • the result (intra prediction block) output from the intra predictor 210 is subtracted from the block to be encoded to generate a residual block, which is output to the residual data encoder 230.
  • the residual data encoder 230 performs a transform and a quantization operation on the residual blocks to generate an encoded residual block.
  • various transform methods may be used for transforming a signal of a spatial domain into a signal of a frequency domain, such as Hadamard transform and discrete cosine transform, and various quantization methods may be used, such as uniform quantization including a dead zone and quantization matrix.
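  • A sketch of the uniform quantization with a dead zone mentioned above. The rounding offset f = 1/3 is a convention commonly used for intra blocks in H.264 reference encoders; it is an assumption here, not a value taken from this disclosure.

```python
def quantize(coeff, qstep, f=1.0 / 3):
    # Uniform scalar quantization with a dead zone around zero: a rounding
    # offset f < 1/2 widens the zero bin, discarding small coefficients.
    sign = -1 if coeff < 0 else 1
    return sign * int(abs(coeff) / qstep + f)

def dequantize(level, qstep):
    # Reconstruction scales the integer level back by the step width.
    return level * qstep
```

For example, with qstep = 8 a coefficient of 3 falls into the dead zone and is quantized to 0, while 10 survives as level 1.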
  • the transform block may have a size that does not exceed a size of the prediction block.
  • when the prediction block is 16×16, transform blocks of the same 16×16 size may be used, as well as smaller sizes such as 16×8, 8×16, 8×8, 8×4, 4×8, and 4×4.
  • when the prediction block is 8×8, transform blocks such as 8×8, 8×4, 4×8, and 4×4, which do not exceed 8×8, may be used.
  • when the size of the prediction block is 4×4, only the 4×4 transform block may be used.
  • the size of the transform block may be selected on the basis of rate-distortion optimization.
  • the residual data encoder 230 divides the residual blocks into the same size of sub-blocks as the transform block and sequentially transforms and quantizes the sub-blocks.
  • the size of the transform block may exceed the size of the prediction block.
  • transform blocks of sizes such as 32×16 pixels, 16×32 pixels, 32×32 pixels, 64×32 pixels, 32×64 pixels, and 64×64 pixels may be used.
  • the residual data encoder 230 combines a plurality of spatially neighboring residual blocks to generate, transform, and quantize the combined residual block that is equal to the size of the transform block.
  • the residual data decoder 240 dequantizes and inverse transforms the residual blocks that were transformed and quantized by the residual data encoder 230, to reconstruct the residual blocks.
  • the dequantization and the inverse transform reversely perform the transform and quantization processes that are performed by the residual data encoder 230 and may be implemented by various methods.
  • the residual data encoder 230 and the residual data decoder 240 may use the same transform and inverse transform processes or the same quantization and dequantization processes by prior agreement.
  • the residual data decoder 240 may use transform and quantization process information (for example, information on a transform size, a transform shape, a quantization type, and the like) which is generated and transferred by the transform and quantization processes of the residual data encoder 230 , to reversely perform the transform and quantization processes of the residual data encoder 230 , thereby performing the dequantization and the inverse transform.
  • the residual block output from the residual data decoder 240 is added to the prediction block reconstructed by the intra predictor 210 to generate a reconstructed block which is stored in the reference picture memory 220 and the stored reconstructed block is subsequently used as a reference picture for encoding the block to be encoded.
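  • The reconstruction step above can be sketched as an element-wise addition. Clipping to the valid pixel range is standard practice in block-based codecs, though the bullet above does not mention it explicitly; the bit depth of 8 is an assumption.

```python
def reconstruct(predicted, residual, bit_depth=8):
    # Add the decoded residual to the prediction, then clip each sample to
    # the valid pixel range before the block is stored as reference data.
    hi = (1 << bit_depth) - 1
    return [[max(0, min(hi, p + r)) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(predicted, residual)]
```

The resulting block is what would be written into the reference picture memory 220 for predicting subsequent blocks.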
  • the entropy encoder 250 entropy-encodes and outputs the residual block output from the residual data encoder 230 .
  • the entropy encoder 250 may encode a variety of information required to decode encoded bit streams as well as the residual blocks.
  • the variety of information required to decode the encoded bit streams may include information on a macroblock type, information on the intra prediction mode, information on the transform and quantization types, filtering information indicating whether the high-frequency removal filtering is performed on the reference pixels used for the intra prediction, and the like.
  • the entropy encoder 250 may use a variety of entropy encoding methods, such as context adaptive variable length coding (CAVLC) and context adaptive binary arithmetic coding (CABAC).
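  • As a small illustration of the variable-length side of such entropy coding schemes, H.264/AVC encodes many syntax elements with zeroth-order Exp-Golomb codes; the sketch below is generic background, not a method claimed by this disclosure.

```python
def exp_golomb(n):
    # Zeroth-order Exp-Golomb codeword for a non-negative integer n:
    # as many leading zeros as bin(n + 1) has bits minus one, followed by
    # the binary representation of n + 1.
    bits = bin(n + 1)[2:]
    return "0" * (len(bits) - 1) + bits
```

Smaller values get shorter codewords, which suits syntax elements whose small values are most probable.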
  • the encoded data generator 260 aligns the entropy encoded residual block, the information on the macroblock type, the intra prediction mode, and the like so as to be output as the encoded data. Further, when the intra predictor 210 performs the adaptive filtering-based intra prediction, the encoded data generator 260 also outputs as the encoded data the filtering information indicating whether the filtering is performed. However, when the intra predictor 210 does not perform the adaptive filtering-based intra prediction, no filtering information is included in the encoded data.
  • FIG. 3 is a block diagram for illustrating a configuration of the intra predictor according to at least one embodiment of the present disclosure.
  • the intra predictor 210 may include a reference pixel setter 310 , a reference pixel characteristics extractor 320 , a first intra predictor 330 , a second intra predictor 340 , and the like.
  • Each of the reference pixel setter 310 , the reference pixel characteristics extractor 320 , the first intra predictor 330 , and the second intra predictor 340 is implemented by, or includes, one or more processors and/or application-specific integrated circuits (ASICs) specified for respectively corresponding operations and functions described herein in the present disclosure.
  • the reference pixel setter 310 extracts pixels (reference pixels) of neighboring blocks of the target block to be encoded from the reference picture memory 220 .
  • the neighboring blocks of the target block to be currently encoded should have completed their block-based encoding and decoding processes, and the pixel values of those neighboring blocks should be set to be referenceable by the time the current block is encoded.
  • some encoding processes may render the pixel values of the neighboring blocks unusable, and such unusable pixels of the neighboring blocks are processed by the reference pixel setter 310. That is, the reference pixel setter 310 checks whether the pixel values of the neighboring blocks are present and, when there are neighboring pixels that cannot be referenced, fills the corresponding reference values with values produced by an arbitrary operation.
  • the reference pixel characteristics extractor 320 receives the reference pixel values processed by the reference pixel setter 310 to determine reference pixel characteristics and uses the determination as a basis for determining whether to transfer the reference pixel values to the first intra predictor 330 that performs the adaptive filtering-based intra prediction or the second intra predictor 340 that performs the typical intra prediction.
  • the reference pixel characteristics may include statistical characteristics of the reference pixels, intra-image characteristics configured by the reference pixels, or the like and at least one embodiment of the present disclosure uses dispersion as the statistical characteristics and the presence or absence of edges as the intra-image characteristics.
  • this is only one embodiment; any characteristic that makes it possible to determine which of the adaptive filtering-based intra prediction and the typical intra prediction offers the better coding efficiency and performance is to be construed as included in the scope of the present disclosure.
  • FIG. 4 is a diagram illustrating a configuration of the reference pixel characteristics extractor 320 according to at least one embodiment of the present disclosure.
  • the reference pixel characteristics extractor 320 may include a statistical characteristics extractor 410 , an edge detector 420 , and a filtering determiner 430 .
  • Each of the statistical characteristics extractor 410 , the edge detector 420 , and the filtering determiner 430 is implemented by, or includes, one or more processors and/or application-specific integrated circuits (ASICs) specified for respectively corresponding operations and functions described herein in the present disclosure.
  • the statistical characteristics extractor 410 computes the dispersion of the reference pixels to determine whether the dispersion is at or below a preset threshold value.
  • the threshold value may be determined by various methods and as one example, the following Equation 1 may be used.
  • T represents the threshold value and Qstep represents the width of a quantization interval. Further, ⌊x⌋ means the largest integer not exceeding x.
  • the edge detector 420 uses the reference pixels included in the neighboring blocks of the target block to be encoded to detect whether the edges are present in the neighboring blocks.
  • an edge is a feature representing a boundary between regions within an image and corresponds to a discontinuity. Therefore, edges can be detected by obtaining the change in the gradient of the image brightness using a differentiation or partial differentiation operation, or using a mask (operator) that performs the role of a differential operation.
  • a representative method uses the Sobel mask. In addition, there are methods such as the Roberts mask, the Laplacian mask, and the Canny method.
  • At least one embodiment of the present disclosure uses the method for detecting edges using the Sobel Mask, but is not limited thereto, and therefore, various methods for detecting edges of an image are to be construed to be included in the scope of the present disclosure.
  • the method for extracting edges using the Sobel Mask applies the following mask to an image so as to detect edges.
  • the gradient of the image brightness in a vertical direction (y-axis direction) and the gradient of the image brightness in a horizontal direction (x-axis direction) need to be obtained, in which a left mask is to obtain the gradient in a vertical direction and a right mask is to obtain the gradient in a horizontal direction.
  • FIG. 5 illustrates a region to which the Sobel Mask is applied, where different sizes of the blocks used for the prediction may change the size of the neighboring region from which the edges are to be extracted.
  • the magnitude of the gradients may be computed by the following Equation 2, where Gy and Gx are the central element values (element (2, 2) of the matrix) obtained by multiplying each pixel value of the image by the left and right masks, respectively.
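The Sobel-based edge check can be sketched as follows. The two 3×3 masks are the standard Sobel operators; since Equation 2 is not reproduced in this excerpt, summing |Gx| + |Gy| for the gradient magnitude is an assumed (though common) choice.

```python
# Standard Sobel masks: the first yields the vertical gradient Gy,
# the second the horizontal gradient Gx (row-major 3x3 lists).
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]

def apply_mask(image, row, col, mask):
    """Correlate a 3x3 mask centred on image[row][col]."""
    return sum(mask[i][j] * image[row - 1 + i][col - 1 + j]
               for i in range(3) for j in range(3))

def gradient_magnitude(image, row, col):
    """|Gx| + |Gy| at (row, col); an edge is flagged when this
    magnitude exceeds a chosen threshold."""
    gy = apply_mask(image, row, col, SOBEL_Y)
    gx = apply_mask(image, row, col, SOBEL_X)
    return abs(gx) + abs(gy)

# A flat region gives zero gradient; a sharp horizontal boundary
# gives a large vertical gradient.
flat = [[100] * 3, [100] * 3, [100] * 3]
edge = [[0] * 3, [0] * 3, [255] * 3]
assert gradient_magnitude(flat, 1, 1) == 0
assert gradient_magnitude(edge, 1, 1) > 0
```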
  • when the statistical characteristics extractor 410 determines that the dispersion is at the preset threshold value or less, or when the edge detector 420 determines that edges are present, the filtering determiner 430 determines not to apply the adaptive filtering and transfers the reference pixels to the second intra predictor 340 , which performs the typical intra prediction. Otherwise, when the dispersion is larger than the preset threshold value and no edges are detected, the filtering determiner 430 determines to apply the adaptive filtering and transfers the reference pixels to the first intra predictor 330 , which performs the adaptive filtering-based intra prediction.
  • when the dispersion of the reference pixels is at the threshold value or less, the high-frequency filtered reference pixels have values very close to the original reference pixels that are not subjected to the high-frequency filtering. In this case, therefore, the adaptive filtering-based intra prediction is not performed, obviating the need to use index bits for indicating whether the filtering is performed, thereby improving the coding efficiency.
  • further, when edges are present in the neighboring blocks of the target block to be encoded, applying the filtering to the reference pixels may blur the edges and give rise to a large error. Even in this case, therefore, the determination not to use the adaptive filtering-based intra prediction can improve the coding performance, and as the index bits are not used for indicating whether the filtering is performed, the coding efficiency can be improved.
  • in the above description, the reference pixel characteristics are determined using both the statistical characteristics of the reference pixels and the presence or absence of edges, but the present disclosure is not limited thereto, and therefore the reference pixel characteristics may be determined using only one of the two methods.
  • the first intra predictor 330 performs the adaptive filtering-based intra prediction when the reference pixel characteristics extractor 320 determines that the adaptive filtering is to be applied.
  • FIG. 6 is a block diagram for illustrating a configuration of the first intra predictor for performing an adaptive filtering-based intra prediction according to at least one embodiment of the present disclosure.
  • the first intra predictor may include a low pass filter 610 , intra prediction performers 620 and 630 , a cost calculator 640 , and the like.
  • Each of the low pass filter 610 , the intra prediction performers 620 and 630 , and the cost calculator 640 is implemented by, or includes, one or more processors and/or application-specific integrated circuits (ASICs) specified for respectively corresponding operations and functions described herein in the present disclosure.
  • the low pass filter 610 removes the high frequency components from the reference pixels and transfers the filtered reference pixels to the intra prediction performer 620 . The intra prediction performer 620 performs the intra prediction by using the reference pixels with the high frequency components removed and transfers the results to the cost calculator 640 .
  • the intra prediction performer 630 performs the intra prediction by using the values of the original high-frequency-unfiltered reference pixels and transfers the results to the cost calculator 640 .
  • the cost calculator 640 calculates the costs required to encode data using the results (that is, the intra prediction results of using the high-frequency filtered reference pixels) produced by the intra prediction performer 620 and the costs required to encode data using the results (that is, the intra prediction results of using the high-frequency-unfiltered reference pixels) produced by the intra prediction performer 630 , and outputs the cost-efficient results.
  • costs may be obtained by using rate-distortion or a bit amount required to encode data.
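The cost comparison can be sketched as follows; the SAD distortion measure, the lambda weight, and the function names are illustrative assumptions standing in for whatever rate-distortion measure the encoder actually uses.

```python
def sad(block, prediction):
    """Sum of absolute differences: the distortion term D."""
    return sum(abs(o - p) for row_o, row_p in zip(block, prediction)
               for o, p in zip(row_o, row_p))

def rd_cost(block, prediction, bits, lam=1.0):
    """Rate-distortion cost J = D + lambda * R (illustrative form)."""
    return sad(block, prediction) + lam * bits

def choose_prediction(block, filtered_pred, filtered_bits,
                      unfiltered_pred, unfiltered_bits, lam=1.0):
    """Return the cost-efficient result together with the filtering
    information flag that accompanies it in the bitstream."""
    j_f = rd_cost(block, filtered_pred, filtered_bits, lam)
    j_u = rd_cost(block, unfiltered_pred, unfiltered_bits, lam)
    return (filtered_pred, True) if j_f < j_u else (unfiltered_pred, False)

block = [[10, 10], [10, 10]]
filtered = [[10, 10], [10, 10]]    # perfect prediction, more bits
unfiltered = [[12, 12], [12, 12]]  # worse prediction, fewer bits
assert choose_prediction(block, filtered, 8, unfiltered, 6) == (filtered, True)
```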
  • when outputting the intra prediction results, the cost calculator 640 also outputs the filtering information indicating whether the high-frequency filtering was performed.
  • FIG. 7 is an exemplary diagram for explaining the first intra predictor for performing the adaptive filtering-based intra prediction according to at least one embodiment of the present disclosure.
  • the block of size n×m to undergo the intra prediction and the reconstructed reference pixels may be represented as follows.
  • a target block O to be encoded in an n×m sized arrangement is as follows:
  • a prediction block P in the arrangement of n×m generated by predicting the target block O is as follows:
  • the reconstructed pixel values of the previous block are used as reference pixels for the current prediction block P.
  • FIG. 7 illustrates an example of the reconstructed reference pixels that can be referenced.
  • the reconstructed upper pixel values are defined by the vector t = [t0, . . . , tm−1, . . . ] and the reconstructed left pixel values are defined by the vector l = [l0, . . . , ln−1, . . . ].
  • the reconstructed left upper pixel values are defined by ‘a’.
  • the respective pixels of l, t, and a are assumed to be ‘available’.
  • the low pass filter, which has a length of k for smoothing the reference pixels used for the intra prediction prior to performing the intra prediction on the original block O, is defined as follows.
  • the above filter coefficients generate the smoothed reference pixel vectors by applying a convolution operation, as in the following Equation 3, to the reference pixel vectors l and t.
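The smoothing step can be sketched as follows. The filter length k and its coefficients are not reproduced in this excerpt, so the 3-tap [1, 2, 1]/4 filter used by H.264/AVC for 8×8 intra prediction is assumed for illustration, with the endpoints replicated so the output keeps the input length.

```python
def smooth_reference_pixels(v):
    """Apply an assumed [1, 2, 1]/4 low-pass filter to a reference
    pixel vector (an Equation 3 style convolution), replicating the
    endpoints and rounding to integers."""
    padded = [v[0]] + list(v) + [v[-1]]
    return [(padded[i] + 2 * padded[i + 1] + padded[i + 2] + 2) // 4
            for i in range(len(v))]

# A sharp step in the reference pixels is softened by the filter.
t = [100, 100, 200, 200]
assert smooth_reference_pixels(t) == [100, 125, 175, 200]
```

Applying this to both l and t yields the smoothed vectors g1 and g2 referenced in the following paragraphs.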
  • the low pass filter 610 of FIG. 6 performs the high-frequency filtering on the original reference pixels according to Equation 3.
  • the intra prediction performer 620 uses the reference pixel vectors g1 and g2 output from the low pass filter 610 to perform the intra prediction.
  • the intra prediction performer 630 uses the original reference pixel vectors l and t without high-frequency filtering to perform the intra prediction.
  • the cost calculator 640 compares the costs when the encoding is performed using the intra prediction results based on the reference pixel vectors g1 and g2 with the costs when the encoding is performed using the intra prediction results based on the original reference pixel vectors l and t to output the cost-efficient intra prediction results. Further, the intra prediction results output by the cost calculator 640 are output along with the filtering information indicating whether the results are from using the high-frequency filtered reference pixels or not.
  • the second intra predictor 340 uses the high-frequency-unfiltered reference pixels, that is, the original reference pixel vectors l and t, to perform the intra prediction and outputs the results. In this case, as the second intra predictor 340 does not use the adaptive filtering method, there is no need to output the filtering information indicating whether the high-frequency filtering is used for the intra prediction.
  • when the reference pixel characteristics extractor 320 determines to apply the adaptive filtering, the filtering information is required for indicating whether the high-frequency filtering was used for the intra prediction; however, when the reference pixel characteristics extractor 320 determines not to apply the adaptive filtering, the filtering information is not required, such that the bit amount needed to indicate whether a filtering was performed may be saved.
  • FIG. 8 is a flow chart for illustrating an encoding method according to at least one embodiment of the present disclosure.
  • the reference pixel characteristics are determined by extracting at least one reference pixel included in the neighboring blocks of the target block to be encoded in step S 810 .
  • the reference pixel characteristics may include the statistical characteristics of the reference pixels such as dispersion or the intra-image characteristics of the images configured by the reference pixels such as the presence or absence of the edges, and the like.
  • when the reference pixel characteristics are determined, it is determined whether the adaptive filtering is applied to the intra prediction based on the reference pixel characteristics (S 820 ). For example, it may be determined that the adaptive filtering is not applied when the dispersion of the reference pixels is at the preset threshold value or less or when edges are present in the neighboring blocks; otherwise, it may be determined that the adaptive filtering is applied.
  • when it is determined in step S 820 that the adaptive filtering is not applied, the intra prediction is performed by using the original high-frequency-unfiltered reference pixels and the results are output (S 830 ).
  • otherwise, the intra prediction is performed based on the adaptive filtering.
  • the high-frequency filtering is performed on the reference pixels and the intra prediction is performed by using the filtered reference pixels (S 840 and S 850 ). Then, the intra prediction is performed by using the high-frequency-unfiltered reference pixels (S 860 ). Further, calculations are made for the costs when the encoding is performed using the intra prediction results by the steps S 840 and S 850 and the costs when the encoding is performed using the intra prediction results through step S 860 and then the cost-efficient intra prediction results are output. In this case, the intra prediction results are output along with the filtering information for indicating whether the results are with or without the use of the high-frequency filtered reference pixels (S 870 ).
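The steps S810 to S870 above can be condensed into a small decision function; the boolean inputs and the two cost figures stand in for the modules described earlier and are illustrative assumptions.

```python
def encode_intra(dispersion_low, edges_present,
                 cost_filtered, cost_unfiltered):
    """Decision flow of FIG. 8 (S810-S870), with the reference pixel
    analysis results and the two encoding costs supplied by the caller.

    Returns (use_filtered, filtering_info), where filtering_info is
    None when no index bit needs to be coded for the block."""
    # S820: skip the adaptive filtering path entirely when the
    # reference pixels are near-flat or contain an edge.
    if dispersion_low or edges_present:
        return False, None          # S830: typical intra prediction
    # S840-S870: try both variants, keep the cheaper, signal the choice.
    use_filtered = cost_filtered < cost_unfiltered
    return use_filtered, use_filtered

# Flat neighbourhood: no filtering flag is spent in the bitstream.
assert encode_intra(True, False, 0, 0) == (False, None)
# Textured, edge-free neighbourhood: the cheaper variant is signalled.
assert encode_intra(False, False, 5, 9) == (True, True)
assert encode_intra(False, False, 9, 5) == (False, False)
```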
  • Hereinafter, a decoding apparatus according to at least one embodiment of the present disclosure will be described with reference to FIGS. 9 and 10 .
  • FIG. 9 is a block diagram for illustrating a configuration of the decoding apparatus according to at least one embodiment of the present disclosure.
  • the decoding apparatus may include an encoded data extractor 910 , an entropy decoder 920 , a residual data decoder 930 , an intra predictor 940 , and the like.
  • Each of the encoded data extractor 910 , the entropy decoder 920 , the residual data decoder 930 , and the intra predictor 940 is implemented by, or includes, one or more processors and/or application-specific integrated circuits (ASICs) specified for respectively corresponding operations and functions described herein in the present disclosure.
  • the video decoding apparatus further comprises input units (not shown in FIG. 9 ) such as one or more buttons, a touch screen, a mic and so on, and output units (not shown in FIG. 9 ) such as a display, an indicator and so on.
  • the decoding apparatus may represent a personal computer (PC), a notebook computer, a personal digital assistant (PDA), a portable multimedia player (PMP), a PlayStation Portable (PSP), a mobile communication terminal, and the like, and may mean various apparatuses including a communication device such as a communication modem that performs communication with various devices or wired/wireless communication networks (herein, the wire or wireless networks include, for example, one or more network interfaces including, but not limited to, cellular, Wi-Fi, LAN, WAN, CDMA, WCDMA, GSM, LTE and EPC networks, and cloud computing networks), a memory that stores various programs and data to decode images, a microprocessor that executes programs so as to perform calculation and controlling, and the like.
  • the encoded data extractor 910 extracts and analyzes the received encoded data and transfers data for the residual blocks to the entropy decoder 920 and data required for other predictions, for example, the macroblock mode, the encoded prediction information (information on the intra prediction mode, and the like) to the intra predictor 940 .
  • the entropy decoder 920 performs the entropy decoding on the residual blocks input from the encoded data extractor 910 to generate quantized residual blocks.
  • the entropy decoder 920 may decode a variety of information required to decode the encoded data as well as the residual blocks.
  • the variety of information required to decode the encoded data may include information on a block type, information on an intra prediction mode, information on transform and quantization types, and the like.
  • the entropy decoder 920 may be defined by various methods according to the entropy encoding method used for the entropy encoder 250 of the encoding apparatus to which at least one embodiment of the present disclosure is applied.
  • the residual data decoder 930 performs the same process as the residual data decoder 240 of the encoding apparatus according to at least one embodiment of the present disclosure to reconstruct the residual blocks. That is, the residual blocks are reconstructed by dequantizing the quantized residual blocks received from the entropy decoder and inversely transforming the dequantized residual blocks.
  • the intra predictor 940 performs the intra prediction based on the intra prediction mode information extracted from the encoded data extractor to generate an intra prediction block.
  • the intra predictor 940 uses the reference pixels included in the neighboring blocks of the target block to be decoded to determine the reference pixel characteristics and determines whether the encoding apparatus performed the intra prediction to which an adaptive filtering was applied, based on the reference pixel characteristics.
  • when it is determined that the adaptive filtering was applied, the filtering information is extracted from the encoded data.
  • then, based on the extracted filtering information, the intra prediction is performed by using either the high-frequency filtered reference pixels or the original high-frequency-unfiltered reference pixels, and the results are output.
  • when it is determined that the adaptive filtering was not applied, the intra prediction is performed by using the original high-frequency-unfiltered reference pixels and the results are output.
  • the results (intra prediction block) output from the intra predictor 940 are added to the residual blocks reconstructed by the residual data decoder 930 so as to be reconstructed as the blocks of the original image.
  • FIG. 10 is a block diagram for illustrating a configuration of the intra predictor 940 according to at least one embodiment of the present disclosure.
  • the intra predictor 940 may include a reference pixel setter 1010 , a reference pixel characteristics extractor 1020 , a third intra predictor 1030 , and a fourth intra predictor 1040 , and the like.
  • Each of the reference pixel setter 1010 , the reference pixel characteristics extractor 1020 , the third intra predictor 1030 , and the fourth intra predictor 1040 is implemented by, or includes, one or more processors and/or application-specific integrated circuits (ASICs) specified for respectively corresponding operations and functions described herein in the present disclosure
  • the reference pixel setter 1010 extracts from a reference picture memory the pixels (reference pixels) of the neighboring blocks of the target block to be decoded.
  • the reference pixel characteristics extractor 1020 receives reference pixel values transferred from the reference pixel setter 1010 to determine the reference pixel characteristics and determines whether to apply an adaptive filtering. Further, the reference pixel values are transferred to any one of the third intra predictor 1030 and the fourth intra predictor 1040 based on the determined results.
  • the reference pixel characteristics may include the statistical characteristics of the reference pixel, the intra-image characteristics configured by the reference pixels, or the like.
  • the reference pixel setter 1010 and the reference pixel characteristics extractor 1020 each have the same functions as the reference pixel setter 310 and the reference pixel characteristics extractor 320 of the encoding apparatus according to at least one embodiment of the present disclosure, and therefore the detailed description thereof will be omitted so as to avoid the repeated description.
  • the third intra predictor 1030 extracts the filtering information from the encoded data when the reference pixel characteristics extractor 1020 determines that the adaptive filtering is applied.
  • when the extracted filtering information indicates that the high-frequency filtering was performed, the high-frequency filtering is performed on the reference pixels and the intra prediction is performed using the high-frequency filtered reference pixels.
  • when the extracted filtering information indicates that the high-frequency filtering was not performed, the intra prediction is performed using the original reference pixels and the results are output.
  • the fourth intra predictor 1040 performs the intra prediction by using the original high-frequency-unfiltered reference pixels and outputs the results.
  • the reference pixel characteristics extractor 1020 of the decoding apparatus has the same configuration as the reference pixel characteristics extractor 320 of the encoding apparatus.
  • the filtering information is included in the encoded data output by the encoding apparatus. Therefore, the third intra predictor 1030 extracts the filtering information from the encoded data and performs the intra prediction based on the filtering information.
  • the fourth intra predictor 1040 immediately uses the high-frequency-unfiltered original reference pixels to perform the intra prediction without considering the filtering information.
  • FIG. 11 is a flow chart for illustrating a decoding method according to at least one embodiment of the present disclosure.
  • the reference pixel characteristics are determined by extracting one or more reference pixels included in the neighboring blocks of the target block to be decoded (S 1110 ).
  • the reference pixel characteristics may include the statistical characteristics of the reference pixels such as dispersion or the intra-image characteristics configured by the reference pixels such as the presence or absence of the edges, and the like.
  • when the reference pixel characteristics are determined, it is determined whether to apply the adaptive filtering based on the reference pixel characteristics (S 1120 ). For example, when the dispersion of the reference pixels is at the preset threshold value or less or edges are present in the neighboring blocks, it may be determined that the adaptive filtering is not applied; otherwise, it may be determined that the adaptive filtering is applied.
  • when it is determined in step S 1120 that the adaptive filtering is not applied, the intra prediction is performed by using the original high-frequency-unfiltered reference pixels and the results are output (S 1130 ).
  • otherwise, the filtering information is extracted from the encoded data (S 1140 ) and the extracted filtering information is checked (S 1150 ).
  • when the filtering information indicates that the high-frequency filtering was performed, the intra prediction is performed by using the high-frequency filtered reference pixels and the result is output (S 1160 ).
  • when the filtering information indicates that no high-frequency filtering was performed, the intra prediction is performed by using the original high-frequency-unfiltered reference pixels and the result is output (S 1170 ).
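The steps S1110 to S1170 above mirror the encoder-side decision: the filtering flag is read from the bitstream only when the reference pixel analysis says the encoder could have signalled one. A sketch, with the analysis results and a bitstream-reading callable supplied by the caller as illustrative assumptions:

```python
def decode_intra_filtering(dispersion_low, edges_present, read_flag):
    """Decision flow of FIG. 11 (S1110-S1170). `read_flag` is a
    callable that reads the filtering information from the encoded
    data; it is invoked only when the encoder could have signalled
    a filtering choice for this block."""
    if dispersion_low or edges_present:
        return False                 # S1130: unfiltered reference pixels
    return read_flag()               # S1140-S1170: follow the bitstream

# Flat or edged neighbourhood: no flag is read, filtering is skipped.
assert decode_intra_filtering(True, False, lambda: True) is False
# Otherwise the decoder obeys the signalled filtering information.
assert decode_intra_filtering(False, False, lambda: True) is True
assert decode_intra_filtering(False, False, lambda: False) is False
```

Because the decoder repeats the same reference pixel analysis as the encoder, both sides agree on when the flag is present without any extra signalling.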
  • Various embodiments of the present disclosure as described above determine the reference pixel characteristics included in the neighboring blocks of the target block to be encoded and determine whether to perform the adaptive filtering-based intra prediction or the typical intra prediction based on the determined reference pixel characteristics to perform the encoding, thereby reducing the generation frequency of additional information generated by the adaptive filtering.
  • various embodiments of the present disclosure as described above provide the higher compression encoding efficiency, control the strictness and leniency of the determination criterion on whether to apply the adaptive filtering to provide the efficient rate-distortion control factor of the video compression encoder, and decrease the complexity of the encoding and decoding processes occurring due to the repetitive filtering and the intra prediction cycles when the adaptive filtering is omitted.
  • Various embodiments of the present disclosure are able to achieve advantageous effects in effectively decreasing the generation frequency of the index signal for the application of the additional filter generated by the adaptive filtering in the intra prediction to provide the higher compression coding efficiency, controlling the strictness and leniency of the determination criterion on whether to apply the adaptive filtering to provide the efficient rate-distortion control factor of the video compression encoder, and eventually decreasing the complexity of the encoding and decoding processes occurring due to the repetitive filtering and the intra prediction cycles when the adaptive filtering is omitted.
  • non-transitory computer readable recording medium examples include magnetic recording media, such as a hard disk, a floppy disk, and a magnetic tape, and optical recording media, such as a compact disk read only memory (CD-ROM) and a digital video disk (DVD), magneto-optical media, such as a floptical disk, and hardware devices that are specially configured to store and execute program instructions, such as a ROM, a random access memory (RAM), and a flash memory.

Abstract

An apparatus for decoding a video using an intra prediction includes: an encoded data extractor to extract filtering information, information on an intra prediction mode; an intra predictor to calculate a reference pixel characteristic by using one or more of reference pixels within neighboring blocks of the target block, determine a filtering type to be applied to the reference pixels within the neighboring blocks, based at least on the extracted filtering information and the calculated reference pixel characteristic, and generate a predicted block, by adaptively filtering the reference pixels within the neighboring blocks depending on the determined filtering type and then predicting the target block from the adaptively-filtered reference pixels according to the information on the intra prediction mode; a residual data decoder to reconstruct a residual block; and an adder to add the predicted block to the residual block to reconstruct the target block.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present application is a continuation of U.S. patent application Ser. No. 13/819,153 filed Apr. 23, 2013, which is the National Phase application of International Application No. PCT/KR2011/006346, filed Aug. 26, 2011, which is based upon and claims the benefit of priority from Korean Patent Application No. 10-2010-0083026, filed on Aug. 26, 2010. The disclosures of the above-listed applications are hereby incorporated by reference herein in their entirety.
  • TECHNICAL FIELD
  • The present disclosure in one or more embodiments relates to encoding and decoding apparatus and method using intra prediction.
  • BACKGROUND
  • The statements in this section merely provide background information related to the present disclosure and do not constitute prior art.
  • The state-of-the-art compression standard H.264/AVC enhances coding performance based on predictions using high correlations between neighboring pixels in intra coding.
  • The intra prediction by H.264/AVC provides a total of nine prediction modes for a block of a 4×4 sized unit as illustrated in FIG. 1A, and a total of four prediction modes for a block of a 16×16 sized unit as illustrated in FIG. 1B.
  • Meanwhile, H.264/AVC provides a total of nine intra prediction modes for a block of 8×8 sized unit as with the block of 4×4 sized unit, and first applies a filter for removing high frequency components to reference pixels of the neighboring blocks to be used for the prediction to smooth the reference pixels prior to performing the prediction [Document 1]. In the filtering process, the neighboring pixels may be changed to have values more suitable for the intra prediction to decrease intra prediction errors for target blocks.
  • However, the inventor(s) has experienced that the filtering for removing high frequency components of the reference pixels attenuates AC components actually present among original pixels, such that it is difficult to provide details of block data to be predicted and in some cases, prediction performance or efficiency is relatively more degraded than the case in which the filtering is not applied.
  • An adaptive filtering method was proposed by the Fraunhofer Heinrich Hertz Institute (hereinafter, referred to as HHI) in the HEVC standardization effort for the next-generation moving picture compression encoding/decoding device [Document 2].
  • The adaptive filtering method proposed by the HHI compares, for each block for intra prediction, the prediction error occurring when the filtering for removing high frequency components is applied against the prediction error occurring without the filtering applied, and based on the comparison result, applies to the actual encoding the option yielding the smaller prediction error, thereby reducing the prediction errors.
  • However, the inventor(s) has experienced that the decoding apparatus requires an additional amount of information for each prediction block unit to perform indexing of whether or not to apply filtering, which may rather cause deteriorated compression efficiency for such images that would show little decrement of prediction errors with the adaptive filtering.
  • [Document 1]
  • 1. Telecommunication Standardization Sector of ITU, “ITU-T Recommendation H.264, Series H: Audiovisual and Multimedia Systems, Advanced video coding for generic audiovisual services”, ITU-T Recommendation H.264, pp. 132-133, Nov. 2007.
  • 2. Martin Winken, Sebastian Bosse, et al., “Description of video coding technology proposal by Fraunhofer HHI”, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, CfP response proposal JCTVC-A116, Apr. 2010.
  • SUMMARY
  • An embodiment of the present disclosure provides an apparatus for decoding a video using an intra prediction, the apparatus comprises an encoded data extractor, an intra predictor, a residual data decoder and an adder. The encoded data extractor is configured to extract filtering information, information on an intra prediction mode to be applied to a target block and a transformed and quantized residual block corresponding to the target block. The intra predictor is configured to calculate a reference pixel characteristic by using one or more of reference pixels within neighboring blocks of the target block, determine a filtering type to be applied to the reference pixels within the neighboring blocks, based at least on the extracted filtering information and the calculated reference pixel characteristic, and generate a predicted block, by adaptively filtering the reference pixels within the neighboring blocks depending on the determined filtering type and then predicting the target block from the adaptively-filtered reference pixels according to the information on the intra prediction mode. The residual data decoder is configured to reconstruct a residual block by inversely quantizing and then inversely transforming the transformed and quantized residual block. And the adder is configured to add the predicted block to the residual block to reconstruct the target block.
  • Another embodiment of the present disclosure provides a method performed by an apparatus for decoding a video using an intra prediction, the method comprising: extracting filtering information from encoded data; calculating a reference pixel characteristic by using one or more of reference pixels within neighboring blocks of the target block; determining a filtering type to be applied to the reference pixels within the neighboring blocks, based at least on the extracted filtering information and the calculated reference pixel characteristic; generating a predicted block, by adaptively filtering the reference pixels within the neighboring blocks depending on the determined filtering type and then predicting the target block from the adaptively-filtered reference pixels; reconstructing a residual block of the target block from the encoded data; and reconstructing the target block by adding the predicted block to the residual block.
  • DESCRIPTION OF DRAWINGS
  • FIGS. 1A and 1B are diagrams of an intra prediction mode;
  • FIG. 2 is a block diagram of an encoding apparatus according to at least one embodiment of the present disclosure;
  • FIG. 3 is a block diagram of a configuration of an intra predictor of the encoding apparatus according to at least one embodiment of the present disclosure;
  • FIG. 4 is a diagram of a configuration of a reference pixel characteristics extractor 320 according to at least one embodiment of the present disclosure;
  • FIG. 5 is a diagram for illustrating an area applied with Sobel Mask within neighboring blocks for detecting edges;
  • FIG. 6 is a block diagram of a configuration of a first intra predictor for performing an adaptive filtering-based intra prediction according to at least one embodiment of the present disclosure;
  • FIG. 7 is an exemplary diagram of the first intra predictor for performing the adaptive filtering-based intra prediction according to one or more embodiment of the present disclosure;
  • FIG. 8 is a flow chart of an encoding method according to at least one embodiment of the present disclosure;
  • FIG. 9 is a block diagram of a configuration of a decoding apparatus according to at least one embodiment of the present disclosure;
  • FIG. 10 is a block diagram of a configuration of an intra predictor of the decoding apparatus according to at least one embodiment of the present disclosure; and
  • FIG. 11 is a flow chart of a decoding method according to at least one embodiment of the present disclosure.
  • DETAILED DESCRIPTION
  • Some embodiments of the present disclosure provide encoding and decoding apparatus and method which determine reference pixel characteristics of reference pixels included in neighboring blocks of a target block to be encoded or decoded and use the determination as a basis for performing either the adaptive filtering-based intra prediction or the typical intra prediction, thereby decreasing the bit amount required to represent whether a filtering is performed and improving coding efficiency.
  • Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In the following description, like reference numerals designate like elements although they are shown in different drawings. Further, in the following description of the present embodiments, a detailed description of known functions and configurations incorporated herein will be omitted for the purpose of clarity.
  • Additionally, in describing the components of the present disclosure, terms like first, second, A, B, (a), and (b) may be used. These are solely for the purpose of differentiating one component from another, and do not imply or suggest the substances, order or sequence of the components. When a component is described as ‘connected’, ‘coupled’, or ‘linked’ to another component, this may mean the components are not only directly ‘connected’, ‘coupled’, or ‘linked’ but also indirectly ‘connected’, ‘coupled’, or ‘linked’ via a third component.
  • FIG. 2 is a block diagram for illustrating an encoding apparatus according to at least one embodiment of the present disclosure.
  • As illustrated in FIG. 2, the encoding apparatus according to one or more embodiment of the present disclosure may include an intra predictor 210, a reference picture memory 220, a residual data encoder 230, a residual data decoder 240, an entropy encoder 250, and an encoded data generator 260, and the like. Each of the intra predictor 210, the residual data encoder 230, the residual data decoder 240, the entropy encoder 250, and the encoded data generator 260 is implemented by, or includes, one or more processors and/or application-specific integrated circuits (ASICs) specified for respectively corresponding operations and functions described herein in the present disclosure. The reference picture memory 220 includes at least one non-transitory computer readable medium. The video encoding apparatus further comprises input units (not shown in FIG. 2) such as one or more buttons, a touch screen, a mic and so on, and output units (not shown in FIG. 2) such as a display, an indicator and so on.
  • Herein, the encoding apparatus may represent a personal computer (PC), a notebook computer, a personal digital assistant (PDA), a portable multimedia player (PMP), a PlayStation Portable (PSP), a mobile communication terminal, and the like, and may mean various apparatuses including a communication device such as a communication modem that performs communication with various devices or wired/wireless communication networks (herein, the wire or wireless networks include, for example, one or more network interfaces including, but not limited to, cellular, Wi-Fi, LAN, WAN, CDMA, WCDMA, GSM, LTE and EPC networks, and cloud computing networks), a memory that stores various programs and data to encode images, a microprocessor that executes programs so as to perform calculation and control, and the like.
  • A video to be encoded may be input in block units, and the block may be a macroblock. For convenience of explanation, in at least one embodiment of the present disclosure the macroblock is defined in a 16×16 form by the same method as the H.264/AVC standard, but a general form of the macroblock may be M×N. In particular, M and N may each be larger than 16 and may be different integers or the same integer.
  • The intra predictor 210 uses reference pixel values available in a current block and in neighboring blocks spatially located around the current block to generate an intra prediction block of the current block. In this case, the intra prediction block is generated by calculating error values between the current block and the candidate prediction blocks for each of the available intra prediction modes and applying the intra prediction mode having the minimum error value. Further, the intra prediction mode having the minimum error value is encoded, and the information on the intra prediction mode is transferred to the encoded data generator 260.
  • In particular, the intra predictor 210 according to at least one embodiment of the present disclosure extracts, from the reference picture memory 220, the reference pixels included in the neighboring blocks of a target block to be encoded, prior to generating the intra prediction block, and determines the reference pixel characteristics. It then determines, based on the determined reference pixel characteristics, whether to perform an adaptive filtering-based intra prediction or a typical intra prediction, and generates the intra prediction block for the target block by using the chosen prediction method. The adaptive filtering-based intra prediction is an intra prediction method that performs the intra prediction both with and without high-frequency filtering applied to the reference pixels included in the neighboring blocks, and outputs the more cost-efficient of the two results. Herein, upon outputting the intra prediction result, filtering information indicating whether the high-frequency filtering was applied is also output.
  • The intra predictor 210 will be described below in detail with reference to FIGS. 3 to 7.
  • The result (intra prediction block) output from the intra predictor 210 is subtracted from the block to be encoded to generate a residual block, which is output to the residual data encoder 230.
  • The residual data encoder 230 performs a transform and a quantization operation on the residual blocks to generate an encoded residual block. In this case, various transform methods may be used for transforming a signal of a spatial domain into a signal of a frequency domain, such as Hadamard transform and discrete cosine transform, and various quantization methods may be used, such as uniform quantization including a dead zone and quantization matrix.
  • According to at least one embodiment of the present disclosure, the transform block may have a size that does not exceed the size of the prediction block. For example, when the size of the prediction block is 16×16, transform blocks of the same size 16×16 may be used, as well as smaller sizes such as 16×8, 8×16, 8×8, 8×4, 4×8, and 4×4. When the size of the prediction block is 8×8, transform blocks such as 8×8, 8×4, 4×8, and 4×4 that do not exceed 8×8 may be used. When the size of the prediction block is 4×4, only the 4×4 transform block may be used. Further, the size of the transform block may be selected with rate-distortion optimization as a criterion. As described above, when the size of the transform block does not exceed the size of the prediction block, the residual data encoder 230 divides the residual block into sub-blocks of the same size as the transform block and sequentially transforms and quantizes the sub-blocks.
  • According to at least one embodiment of the present disclosure, the size of the transform block may exceed the size of the prediction block. For example, when the size of the prediction block is 16×16, transform blocks of sizes such as 32×16, 16×32, 32×32, 64×32, 32×64, and 64×64 pixels may be used. As such, when the size of the transform block is larger than that of the prediction block, the residual data encoder 230 combines a plurality of spatially neighboring residual blocks to generate a combined residual block equal in size to the transform block, and then transforms and quantizes the combined residual block.
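  • The partitioning of a residual block into transform-sized sub-blocks described above can be sketched as follows. The function name `tile_residual` and its tuple interface are illustrative assumptions, not part of the disclosed apparatus:

```python
def tile_residual(width, height, tw, th):
    """Yield (x, y, tw, th) tiles covering a width x height residual block.

    Illustrative helper: the transform size (tw, th) must not exceed the
    residual/prediction block size, as in the case described above.
    """
    assert tw <= width and th <= height, "transform must not exceed prediction block"
    tiles = []
    for y in range(0, height, th):       # walk sub-blocks row by row
        for x in range(0, width, tw):
            tiles.append((x, y, tw, th))
    return tiles

# a 16x16 prediction block with an 8x4 transform yields 2 x 4 = 8 sub-blocks
print(len(tile_residual(16, 16, 8, 4)))  # 8
```

Each tile would then be transformed and quantized sequentially; the opposite case (transform larger than the prediction block) would instead gather neighboring residual blocks into one combined block.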
  • The residual data decoder 240 dequantizes and inverse transforms the residual blocks that were transformed and quantized by the residual data encoder 230 to reconstruct the residual blocks. The dequantization and the inverse transform reversely perform the transform and quantization processes performed by the residual data encoder 230 and may be implemented by various methods. For example, the residual data encoder 230 and the residual data decoder 240 may use the same transform and inverse transform processes or the same quantization and dequantization processes by prior agreement. Alternatively, the residual data decoder 240 may use transform and quantization process information (for example, information on a transform size, a transform shape, a quantization type, and the like) generated and transferred by the transform and quantization processes of the residual data encoder 230 to reversely perform those processes, thereby performing the dequantization and the inverse transform.
  • The residual block output from the residual data decoder 240 is added to the prediction block reconstructed by the intra predictor 210 to generate a reconstructed block which is stored in the reference picture memory 220 and the stored reconstructed block is subsequently used as a reference picture for encoding the block to be encoded.
  • The entropy encoder 250 entropy-encodes and outputs the residual block output from the residual data encoder 230. Although not illustrated in at least one embodiment of the present disclosure, the entropy encoder 250 may encode a variety of information required to decode the encoded bit streams, as well as the residual blocks. Herein, the variety of information required to decode the encoded bit streams may include information on a macroblock type, information on the intra prediction mode, information on the transform and quantization types, filtering information indicating whether the high-frequency removal filtering is performed on the reference pixels used for the intra prediction, and the like.
  • The entropy encoder 250 may use a variety of entropy encoding methods, such as context adaptive variable length coding (CAVLC) and context adaptive binary arithmetic coding (CABAC).
  • The encoded data generator 260 aligns the entropy-encoded residual block, the information on the macroblock type, the intra prediction mode, and the like, and outputs them as the encoded data. Further, when the intra predictor 210 performs the adaptive filtering-based intra prediction, the encoded data generator 260 also outputs, as part of the encoded data, the filtering information indicating whether the filtering is performed. However, when the intra predictor 210 does not perform the adaptive filtering-based intra prediction, no filtering information is included in the encoded data.
  • Hereinafter, the more detailed configuration of the intra predictor 210 according to at least one embodiment of the present disclosure will be described with reference to FIGS. 3 to 7.
  • FIG. 3 is a block diagram for illustrating a configuration of the intra predictor according to at least one embodiment of the present disclosure.
  • The intra predictor 210 according to at least one embodiment of the present disclosure may include a reference pixel setter 310, a reference pixel characteristics extractor 320, a first intra predictor 330, a second intra predictor 340, and the like. Each of the reference pixel setter 310, the reference pixel characteristics extractor 320, the first intra predictor 330, and the second intra predictor 340 is implemented by, or includes, one or more processors and/or application-specific integrated circuits (ASICs) specified for respectively corresponding operations and functions described herein in the present disclosure.
  • The reference pixel setter 310 extracts the pixels (reference pixels) of the neighboring blocks of the target block to be encoded from the reference picture memory 220. The neighboring blocks of the target block to be currently encoded should have completed their block-based encoding and decoding processes, with the pixel values of the corresponding neighboring blocks set to be referenceable, by the time of encoding the current block. However, some encoding processes may make the pixel values of the neighboring blocks unusable, and such unusable pixels of the neighboring blocks are processed by the reference pixel setter 310. That is, the reference pixel setter 310 checks whether the pixel values of the neighboring blocks are available, and when there are neighboring pixels that cannot be referenced, the corresponding reference values are filled with values produced by a predefined operation.
  • The reference pixel characteristics extractor 320 receives the reference pixel values processed by the reference pixel setter 310 to determine reference pixel characteristics and uses the determination as a basis for determining whether to transfer the reference pixel values to the first intra predictor 330 that performs the adaptive filtering-based intra prediction or the second intra predictor 340 that performs the typical intra prediction.
  • Herein, the reference pixel characteristics may include the statistical characteristics of the reference pixels, the intra-image characteristics configured by the reference pixels, or the like, and at least one embodiment of the present disclosure uses the dispersion as the statistical characteristic and the presence or absence of edges as the intra-image characteristic. However, this is only one embodiment; any characteristic from which it may be determined which of the adaptive filtering-based intra prediction and the typical intra prediction has the better coding efficiency and performance is to be construed as included in the scope of the present disclosure.
  • FIG. 4 is a diagram illustrating a configuration of the reference pixel characteristics extractor 320 according to at least one embodiment of the present disclosure. Referring to FIG. 4, the reference pixel characteristics extractor 320 may include a statistical characteristics extractor 410, an edge detector 420, and a filtering determiner 430. Each of the statistical characteristics extractor 410, the edge detector 420, and the filtering determiner 430 is implemented by, or includes, one or more processors and/or application-specific integrated circuits (ASICs) specified for respectively corresponding operations and functions described herein in the present disclosure.
  • The statistical characteristics extractor 410 computes the dispersion of the reference pixels to determine whether the dispersion is at a preset threshold value or less. Herein, the threshold value may be determined by various methods; as one example, the following Equation 1 may be used.
  • T = ⌊(Qstep² + 8) / 16⌋    Equation 1
  • In the above Equation 1, T represents the threshold value and Qstep represents the width of a quantization interval. Further, ⌊x⌋ denotes the largest integer among the integers smaller than or equal to x.
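  • A minimal sketch of the threshold computation, assuming Equation 1 takes the form T = ⌊(Qstep² + 8)/16⌋ as reconstructed above (the function name is illustrative):

```python
def threshold(qstep):
    # T = floor((Qstep^2 + 8) / 16); integer floor division realises the
    # floor bracket of Equation 1 for non-negative Qstep
    return (qstep * qstep + 8) // 16

print(threshold(4))   # (16 + 8) // 16 = 1
print(threshold(10))  # (100 + 8) // 16 = 6
```

A larger quantization step thus yields a larger threshold, so coarsely quantized blocks tolerate more reference pixel dispersion before the adaptive filtering is considered worthwhile.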
  • The edge detector 420 uses the reference pixels included in the neighboring blocks of the target block to be encoded to detect whether the edges are present in the neighboring blocks.
  • The edge is a feature representing a boundary between regions within an image and corresponds to a discontinuity. Therefore, edges can be detected by obtaining a change in the gradient of the image brightness using a differentiation or partial differentiation operation, or using a mask (operator) that performs the role of a differential operation. Among the edge detection methods using a mask, the representative method uses the Sobel Mask. In addition, there are methods using the Roberts Mask, the Laplacian Mask, and the Canny Mask.
  • At least one embodiment of the present disclosure uses the method for detecting edges using the Sobel Mask, but is not limited thereto, and therefore, various methods for detecting edges of an image are to be construed to be included in the scope of the present disclosure.
  • The method for extracting edges using the Sobel Mask applies the following masks to an image so as to detect edges.
  • [ -1 -2 -1 ]      [ +1  0 -1 ]
    [  0  0  0 ]      [ +2  0 -2 ]
    [ +1 +2 +1 ]      [ +1  0 -1 ]
  • Since the image is two-dimensional, the gradient of the image brightness in the vertical direction (y-axis direction) and the gradient in the horizontal direction (x-axis direction) both need to be obtained; the left mask obtains the gradient in the vertical direction and the right mask obtains the gradient in the horizontal direction.
  • The magnitude of the gradient is calculated by applying the two masks to the neighboring blocks. FIG. 5 illustrates the region to which the Sobel Mask is applied; different sizes of the blocks used for the prediction may change the size of the neighboring region from which the edges are to be extracted.
  • The magnitude of the gradient may be computed by the following Equation 2, where Gy and Gx are the central element values (element (2, 2) of the matrix) calculated by multiplying each pixel value of the image by the left and right masks, respectively.
  • G = √(Gx² + Gy²)    Equation 2
  • When the value of the magnitude of the gradient is larger than the preset threshold value T, it may be determined that an edge is present in the corresponding region.
  • When the statistical characteristics extractor 410 determines that the dispersion is at the preset threshold value or less, or when the edge detector determines that edges are present, the filtering determiner 430 determines not to apply the adaptive filtering and transfers the reference pixels to the second intra predictor 340 that performs the typical intra prediction. Otherwise, when the dispersion is larger than the preset threshold value and no edges are detected, the filtering determiner 430 determines to apply the adaptive filtering and transfers the reference pixels to the first intra predictor 330 that performs the adaptive filtering-based intra prediction.
  • When the dispersion of the reference pixels is small enough to approximate 0, the high-frequency filtered reference pixels have values very close to the original unfiltered reference pixels. In this case, therefore, the adaptive filtering-based intra prediction is not performed, obviating the need for index bits indicating whether the filtering is performed and thereby improving the coding efficiency.
  • In addition, when edges are present in the neighboring blocks of the target block to be encoded, applying the filtering to the reference pixels may blur the edges and give rise to a large error. In this case as well, the decision not to use the adaptive filtering-based intra prediction can improve the coding performance, and since no index bits are used for indicating whether the filtering is performed, the coding efficiency can be improved.
  • Meanwhile, at least one embodiment of the present disclosure describes the reference pixel characteristics as being determined using both the statistical characteristics of the reference pixels and the presence or absence of edges, but the present disclosure is not limited thereto; the reference pixel characteristics may be determined using only one of the two methods.
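  • The decision rule described above (skip adaptive filtering when the dispersion is at or below the threshold, or when an edge is present) can be sketched as follows. The function name and the flat pixel-list interface are illustrative assumptions:

```python
def use_adaptive_filtering(ref_pixels, threshold, edge_present):
    """Filtering determiner sketch: return True when adaptive
    filtering-based intra prediction should be used (a filtering flag is
    then coded), False when the typical intra prediction is used.
    """
    n = len(ref_pixels)
    mean = sum(ref_pixels) / n
    dispersion = sum((p - mean) ** 2 for p in ref_pixels) / n
    if dispersion <= threshold or edge_present:
        return False   # typical intra prediction; no filtering flag coded
    return True        # adaptive filtering-based intra prediction

print(use_adaptive_filtering([100] * 8, 1, False))         # False: dispersion 0
print(use_adaptive_filtering([10, 90, 10, 90], 1, False))  # True
print(use_adaptive_filtering([10, 90, 10, 90], 1, True))   # False: edge present
```

Because the rule depends only on already-reconstructed neighboring pixels, the decoder can reproduce the same decision without any side information, which is exactly what saves the flag bits.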
  • Referring back to FIG. 3, the first intra predictor 330 performs the adaptive filtering-based intra prediction when the reference pixel characteristics extractor 320 determines to apply the adaptive filtering.
  • FIG. 6 is a block diagram for illustrating a configuration of the first intra predictor for performing an adaptive filtering-based intra prediction according to at least one embodiment of the present disclosure. Referring to FIG. 6, the first intra predictor may include a low pass filter 610, intra prediction performers 620 and 630, a cost calculator 640, and the like. Each of the low pass filter 610, the intra prediction performers 620 and 630, and the cost calculator 640 is implemented by, or includes, one or more processors and/or application-specific integrated circuits (ASICs) specified for respectively corresponding operations and functions described herein in the present disclosure.
  • The low pass filter 610 removes the high frequency components from the reference pixels and transfers the filtered reference pixels to the intra prediction performer 620, which performs the intra prediction using the reference pixels with the high frequency components removed and transfers the results to the cost calculator 640.
  • The intra prediction performer 630 performs the intra prediction using the values of the original high-frequency-unfiltered reference pixels and transfers the results to the cost calculator 640.
  • The cost calculator 640 calculates the cost required to encode data using the results produced by the intra prediction performer 620 (that is, the intra prediction results using the high-frequency filtered reference pixels) and the cost required to encode data using the results produced by the intra prediction performer 630 (that is, the intra prediction results using the high-frequency-unfiltered reference pixels), and outputs the more cost-efficient results. Herein, the costs may be obtained by using rate-distortion or the bit amount required to encode data.
  • Meanwhile, when outputting the intra prediction results, the cost calculator 640 also outputs the filtering information indicating whether the high-frequency filtering was performed.
  • FIG. 7 is an exemplary diagram for explaining the first intra predictor for performing the adaptive filtering-based intra prediction according to at least one embodiment of the present disclosure.
  • The block of size n×m to undergo the intra prediction and the reconstructed reference pixels may be represented as follows.
  • A target block O to be encoded in n×m sized arrangement is as follows:
  • O = [ o0,0    ⋯  o0,m−1
          ⋮       ⋱  ⋮
          on−1,0  ⋯  on−1,m−1 ]
  • A prediction block P in the arrangement of n×m generated by predicting the target block O is as follows:
  • P = [ p0,0    ⋯  p0,m−1
          ⋮       ⋱  ⋮
          pn−1,0  ⋯  pn−1,m−1 ]
  • The reconstructed pixel values of the previously coded blocks are used as reference pixels for the current prediction block P. FIG. 7 illustrates an example of the reconstructed reference pixels that can be referenced. The reconstructed left pixel values are represented as follows.
  • l=[l0, . . . , ln−1]
  • Further, the reconstructed upper pixel values are represented as follows.
  • t=[t0, . . . , tm−1, . . . ]
  • Further, the reconstructed left upper pixel value is defined by ‘a’. With their encoding and decoding processes completed prior to the current block, the respective pixels of l, t, and a are regarded as ‘available’.
  • When the adaptive filtering is performed, a low pass filter of length k, defined as follows, is used to smooth the reference pixels used for the intra prediction prior to performing the intra prediction on the original block O.
  • f=[f0, . . . , fk−1]
  • The above filter coefficients generate the smoothed reference pixel vectors by applying a convolution operation, as in the following Equation 3, to the reference pixel vectors l and t.
  • g1(x) = Σ_{τ=0}^{k−1} f(τ)·l(x−τ),  x = 0, 1, …, n−1
    g2(x) = Σ_{τ=0}^{k−1} f(τ)·t(x−τ),  x = 0, 1, …, m−1    Equation 3
  • In this case, in the computation of the first and final elements (x = 0, and x = n−1 or x = m−1) of g1 and g2, the value of f may be exceptionally changed.
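  • The convolution of Equation 3 can be sketched as below. The clamping of out-of-range taps to the nearest valid sample and the [1/4, 1/2, 1/4] kernel are illustrative assumptions; the text only says that the boundary values of f "may be exceptionally changed" without specifying how:

```python
def smooth_refs(ref, f):
    """Equation 3 sketch: g(x) = sum over tau of f(tau) * ref(x - tau).

    Out-of-range taps are clamped to the nearest valid sample, one simple
    way to realise the boundary handling mentioned in the text.
    """
    n, k = len(ref), len(f)
    out = []
    for x in range(n):
        acc = 0.0
        for tau in range(k):
            idx = min(max(x - tau, 0), n - 1)  # clamp at both ends
            acc += f[tau] * ref[idx]
        out.append(acc)
    return out

# an assumed [1/4, 1/2, 1/4] smoothing kernel; the spike at index 2 is spread out
l = [10, 10, 40, 10, 10]
print(smooth_refs(l, [0.25, 0.5, 0.25]))  # [10.0, 10.0, 17.5, 25.0, 17.5]
```

The same routine applied to the left vector l and the upper vector t would produce the smoothed vectors g1 and g2 fed to the intra prediction performer 620.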
  • The low pass filter 610 of FIG. 6 performs the high-frequency filtering on the original reference pixels according to Equation 3. In addition, the intra prediction performer 620 uses the reference pixel vectors g1 and g2 output from the low pass filter 610 to perform the intra prediction.
  • Meanwhile, the intra prediction performer 630 uses the original reference pixel vectors l and t without high-frequency filtering to perform the intra prediction.
  • Further, the cost calculator 640 compares the cost of encoding using the intra prediction results based on the reference pixel vectors g1 and g2 with the cost of encoding using the intra prediction results based on the original reference pixel vectors l and t, and outputs the more cost-efficient intra prediction results. The output intra prediction results are accompanied by the filtering information indicating whether they were produced using the high-frequency filtered reference pixels.
  • Referring back to FIG. 3, when the reference pixel characteristics extractor 320 determines not to apply the adaptive filtering, the second intra predictor 340 performs the intra prediction using the high-frequency-unfiltered reference pixels, that is, the original reference pixel vectors l and t, and outputs the results. In this case, as the second intra predictor 340 does not use the adaptive filtering method, there is no need to output the filtering information indicating whether the high-frequency filtering is used for the intra prediction.
  • According to at least one embodiment of the present disclosure as described above, when the reference pixel characteristics extractor 320 determines to apply the adaptive filtering, the filtering information is required for indicating whether the high-frequency filtering was used for the intra prediction; but when the reference pixel characteristics extractor 320 determines not to apply the adaptive filtering, the filtering information is not required, such that the bit amount required to indicate whether a filtering was performed may be saved.
  • FIG. 8 is a flow chart for illustrating an encoding method according to at least one embodiment of the present disclosure.
  • The reference pixel characteristics are determined by extracting at least one reference pixel included in the neighboring blocks of the target block to be encoded in step S810. Herein, the reference pixel characteristics may include the statistical characteristics of the reference pixels, such as the dispersion, or the intra-image characteristics of the image configured by the reference pixels, such as the presence or absence of edges.
  • When the reference pixel characteristics are determined, it is determined whether an adaptive filtering is applied to the intra prediction based on the reference pixel characteristics (S820). For example, it may be determined that the adaptive filtering is not applied when the dispersion of the reference pixels is at or below the preset threshold value or when edges are present in the neighboring blocks; otherwise, it may be determined that the adaptive filtering is applied.
  • When it is determined that the adaptive filtering is not applied, the intra prediction is performed by using the original high-frequency-unfiltered reference pixels and the results are output (S830).
  • However, upon determining the application of the adaptive filtering, the intra prediction is performed based on the adaptive filtering.
  • That is, the high-frequency filtering is performed on the reference pixels and the intra prediction is performed by using the filtered reference pixels (S840 and S850). Then, the intra prediction is also performed by using the high-frequency-unfiltered reference pixels (S860). Further, the cost of encoding using the intra prediction results from steps S840 and S850 and the cost of encoding using the intra prediction results from step S860 are calculated, and the more cost-efficient intra prediction results are output. In this case, the intra prediction results are output along with the filtering information indicating whether they were produced with or without the high-frequency filtered reference pixels (S870).
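  • The cost comparison of steps S840 through S870 might look as follows in outline. SAD is used here as a simple stand-in for the rate-distortion cost the text mentions, and all names are illustrative:

```python
def sad(a, b):
    """Sum of absolute differences, a simple proxy for prediction cost."""
    return sum(abs(x - y) for x, y in zip(a, b))

def select_prediction(target, pred_filtered, pred_unfiltered):
    """Compare the cost of each candidate prediction against the target
    block and return the cheaper prediction together with the filtering
    flag that would be coded alongside it (S870).
    """
    cost_f = sad(target, pred_filtered)    # S840 + S850 result
    cost_u = sad(target, pred_unfiltered)  # S860 result
    if cost_f < cost_u:
        return pred_filtered, True         # flag: filtered references used
    return pred_unfiltered, False          # flag: unfiltered references used

target = [12, 14, 16, 18]
pred, used_filter = select_prediction(target, [12, 14, 15, 18], [10, 10, 10, 10])
print(used_filter)  # True: the filtered prediction is cheaper
```

In a real encoder the cost would also account for the bits of the residual and the flag itself, but the selection structure is the same.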
  • Hereinafter, a decoding apparatus according to at least one embodiment of the present disclosure will be described with reference to FIGS. 9 and 10.
  • FIG. 9 is a block diagram for illustrating a configuration of the decoding apparatus according to at least one embodiment of the present disclosure.
  • The decoding apparatus according to at least one embodiment of the present disclosure may include an encoded data extractor 910, an entropy decoder 920, a residual data decoder 930, an intra predictor 940, and the like. Each of the encoded data extractor 910, the entropy decoder 920, the residual data decoder 930, and the intra predictor 940 is implemented by, or includes, one or more processors and/or application-specific integrated circuits (ASICs) specified for respectively corresponding operations and functions described herein in the present disclosure. The video decoding apparatus further comprises input units (not shown in FIG. 9) such as one or more buttons, a touch screen, a mic and so on, and output units (not shown in FIG. 9) such as a display, an indicator and so on.
  • Herein, as with the encoding apparatus described with reference to FIG. 2, the decoding apparatus may represent a personal computer (PC), a notebook computer, a personal digital assistant (PDA), a portable multimedia player (PMP), a PlayStation Portable (PSP), a mobile communication terminal, and the like, and may mean various apparatuses including a communication device such as a communication modem that performs communication with various devices or wired/wireless communication networks (herein, the wire or wireless networks include, for example, one or more network interfaces including, but not limited to, cellular, Wi-Fi, LAN, WAN, CDMA, WCDMA, GSM, LTE and EPC networks, and cloud computing networks), a memory that stores various programs and data to decode images, a microprocessor that executes programs so as to perform calculation and controlling, and the like.
  • The encoded data extractor 910 extracts and analyzes the received encoded data and transfers data for the residual blocks to the entropy decoder 920 and data required for other predictions, for example, the macroblock mode, the encoded prediction information (information on the intra prediction mode, and the like) to the intra predictor 940.
  • The entropy decoder 920 performs the entropy decoding on the residual blocks input from the encoded data extractor 910 to generate quantized residual blocks. Although not illustrated in at least one embodiment of the present disclosure, the entropy decoder 920 may decode a variety of information required to decode the encoded data, as well as the residual blocks. Herein, the variety of information required to decode the encoded data may include information on a block type, information on an intra prediction mode, information on transform and quantization types, and the like. The entropy decoder 920 may be implemented by various methods according to the entropy encoding method used by the entropy encoder 250 of the encoding apparatus to which at least one embodiment of the present disclosure is applied.
  • The residual data decoder 930 performs the same process as the residual data decoder 240 of the encoding apparatus according to at least one embodiment of the present disclosure to reconstruct the residual blocks. That is, the residual blocks are reconstructed by dequantizing the quantized residual blocks received from the entropy decoder 920 and inversely transforming the dequantized residual blocks.
  • The intra predictor 940 performs the intra prediction based on the intra prediction mode information extracted by the encoded data extractor 910 to generate an intra prediction block.
  • In particular, the intra predictor 940 according to at least one embodiment of the present disclosure uses the reference pixels included in the neighboring blocks of the target block to be decoded to determine the reference pixel characteristics, and determines, based on those characteristics, whether the encoding apparatus performed the intra prediction with the adaptive filtering applied. When it is determined that the adaptive filtering was applied, the filtering information would have been included in the encoded data received from the encoding apparatus, so the filtering information is extracted from the encoded data. The intra prediction is then performed by using either the high-frequency filtered reference pixels or the original high-frequency-unfiltered reference pixels, according to the extracted filtering information, and the results are output. However, when it is determined that the adaptive filtering was not applied, and the filtering information is therefore absent from the encoded data, the intra prediction is performed by using the original high-frequency-unfiltered reference pixels and the results are output.
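  • The decoder-side behavior described here can be sketched as follows: the filtering flag is parsed from the encoded data only when the reference pixel characteristics, which are reproducible from already-decoded neighbors, permitted adaptive filtering at the encoder. `read_bit` is a hypothetical bitstream reader and the names are illustrative:

```python
def characteristics_allow_filtering(refs, threshold, edge_present):
    """Mirror of the encoder decision: dispersion above threshold and no edge."""
    mean = sum(refs) / len(refs)
    dispersion = sum((p - mean) ** 2 for p in refs) / len(refs)
    return dispersion > threshold and not edge_present

def parse_filtering_flag(refs, threshold, edge_present, read_bit):
    """Return whether filtered reference pixels are to be used.

    The flag is present in the encoded data only when the characteristics
    permitted adaptive filtering; otherwise no flag was coded at all.
    """
    if characteristics_allow_filtering(refs, threshold, edge_present):
        return bool(read_bit())  # encoder wrote a flag; read it
    return False                 # no flag coded; unfiltered refs used

bits = iter([1])
print(parse_filtering_flag([10, 90, 10, 90], 1, False, lambda: next(bits)))  # True
print(parse_filtering_flag([50, 50, 50, 50], 1, False, lambda: 1))           # False
```

Because encoder and decoder evaluate the same reconstructed pixels, both sides agree on whether the flag exists in the stream without any extra signalling.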
  • The results (intra prediction block) output from the intra predictor 940 are added to the residual blocks reconstructed by the residual data decoder 930 so as to be reconstructed as the blocks of the original image.
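The adder stage described above can be expressed as a short sketch. The clipping to the valid sample range and the 8-bit default are assumptions (standard practice for video decoders), not details stated in the disclosure.

```python
import numpy as np

def reconstruct_block(pred, residual, bit_depth=8):
    """Add the intra prediction block to the reconstructed residual block
    and clip the sum to the valid sample range, yielding the reconstructed
    block of the original image."""
    max_val = (1 << bit_depth) - 1
    return np.clip(pred.astype(int) + residual.astype(int), 0, max_val)
```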
  • FIG. 10 is a block diagram for illustrating a configuration of the intra predictor 940 according to at least one embodiment of the present disclosure.
  • As illustrated in FIG. 10, the intra predictor 940 according to at least one embodiment of the present disclosure may include a reference pixel setter 1010, a reference pixel characteristics extractor 1020, a third intra predictor 1030, a fourth intra predictor 1040, and the like. Each of the reference pixel setter 1010, the reference pixel characteristics extractor 1020, the third intra predictor 1030, and the fourth intra predictor 1040 is implemented by, or includes, one or more processors and/or application-specific integrated circuits (ASICs) specified for the respectively corresponding operations and functions described herein in the present disclosure.
  • The reference pixel setter 1010 extracts from a reference picture memory the pixels (reference pixels) of neighboring blocks of the target block to be decoded.
  • The reference pixel characteristics extractor 1020 receives reference pixel values transferred from the reference pixel setter 1010 to determine the reference pixel characteristics and determines whether to apply an adaptive filtering. Further, the reference pixel values are transferred to any one of the third intra predictor 1030 and the fourth intra predictor 1040 based on the determined results. Herein, the reference pixel characteristics may include the statistical characteristics of the reference pixel, the intra-image characteristics configured by the reference pixels, or the like.
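A minimal sketch of the characteristics the extractor 1020 might compute follows. The disclosure names dispersion and edge presence as examples; the specific statistics below (sample variance, a largest-jump edge test) and the `edge_thresh` value are illustrative assumptions, not the disclosed criteria.

```python
import numpy as np

def reference_pixel_characteristics(ref_pixels, edge_thresh=30):
    """Compute illustrative reference pixel characteristics:
    - dispersion: the variance of the reference pixel values;
    - has_edge: whether the largest jump between adjacent reference
      pixels exceeds a threshold (a crude edge-presence test)."""
    ref = np.asarray(ref_pixels, dtype=float)
    dispersion = ref.var()
    has_edge = bool(np.abs(np.diff(ref)).max() > edge_thresh)
    return dispersion, has_edge
```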
  • The reference pixel setter 1010 and the reference pixel characteristics extractor 1020 each have the same functions as the reference pixel setter 310 and the reference pixel characteristics extractor 320 of the encoding apparatus according to at least one embodiment of the present disclosure, and therefore the detailed description thereof will be omitted so as to avoid the repeated description.
  • The third intra predictor 1030 extracts the filtering information from the encoded data when the reference pixel characteristics extractor 1020 determines to apply the adaptive filtering. When the extracted filtering information indicates that the high-frequency filtering is performed, the high-frequency filtering is performed on the reference pixels and the intra prediction is performed using the high-frequency filtered reference pixels. However, when the extracted filtering information indicates that the high-frequency filtering is not performed, the intra prediction is performed using the original reference pixels, and the results are output.
  • When the reference pixel characteristics extractor 1020 determines not to apply the adaptive filtering, the fourth intra predictor 1040 performs the intra prediction by using the original high-frequency-unfiltered reference pixels and outputs the results.
  • The reference pixel characteristics extractor 1020 of the decoding apparatus has the same configuration as the reference pixel characteristics extractor 320 of the encoding apparatus.
  • Therefore, when the reference pixel characteristics extractor 1020 of the decoding apparatus determines to apply the adaptive filtering, the reference pixel characteristics extractor 320 of the encoding apparatus has made the same determination, and the filtering information is accordingly included in the encoded data output by the encoding apparatus. The third intra predictor 1030 therefore extracts the filtering information from the encoded data and performs the intra prediction based on that information.
  • On the other hand, when the reference pixel characteristics extractor 1020 of the decoding apparatus determines not to apply the adaptive filtering, the reference pixel characteristics extractor 320 of the encoding apparatus has likewise not applied it, and no filtering information is included in the encoded data output by the encoding apparatus. In this case, therefore, the fourth intra predictor 1040 immediately performs the intra prediction by using the high-frequency-unfiltered original reference pixels, without considering any filtering information.
  • FIG. 11 is a flow chart for illustrating a decoding method according to at least one embodiment of the present disclosure.
  • The reference pixel characteristics are determined by extracting one or more reference pixels included in the neighboring blocks of the target block to be decoded (S1110). Herein, the reference pixel characteristics may include the statistical characteristics of the reference pixels, such as dispersion, or the intra-image characteristics configured by the reference pixels, such as the presence or absence of edges, and the like.
  • When the reference pixel characteristics are determined, it is determined whether to apply the adaptive filtering based on the reference pixel characteristics (S1120). For example, when the dispersion of the reference pixels is equal to or less than a preset threshold value or edges are present in the neighboring blocks, it may be determined that the adaptive filtering is not applied; otherwise, it may be determined that the adaptive filtering is applied.
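The S1120 decision rule described above can be sketched directly. The variance threshold `var_thresh` is an illustrative placeholder; the disclosure states only that a preset threshold is used, not its value.

```python
def apply_adaptive_filtering(dispersion, has_edge, var_thresh=100.0):
    """S1120 decision: skip the adaptive filtering when the reference
    pixels are nearly flat (dispersion at or below a preset threshold)
    or when edges are present in the neighboring blocks; otherwise
    apply it."""
    if dispersion <= var_thresh or has_edge:
        return False  # typical intra prediction, no filtering information coded
    return True       # adaptive-filtering-based intra prediction
```

Tightening or relaxing `var_thresh` is what the disclosure describes as controlling the strictness or leniency of the criterion, which trades signaling overhead against filtering benefit.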
  • When it is determined that the adaptive filtering is not applied in step S1120, the intra prediction is performed by using the original high-frequency-unfiltered reference pixels and the results are output (S1130).
  • However, when it is determined in step S1120 that the adaptive filtering is applied, the filtering information is extracted from the encoded data (S1140) and the extracted filtering information is checked (S1150). When the filtering information indicates that the high-frequency filtering is to be performed, the intra prediction is performed by using the high-frequency filtered reference pixels and the result is output (S1160). However, when the filtering information indicates that no high-frequency filtering is to be performed, the intra prediction is performed by using the original high-frequency-unfiltered reference pixels and the result is output (S1170).
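Steps S1130 through S1170 amount to selecting between filtered and original reference pixels. The sketch below assumes a simple [1 2 1]/4 low-pass filter as the "high-frequency filtering"; the disclosure does not specify the filter kernel, so this choice is illustrative only.

```python
import numpy as np

def smooth(ref):
    """Illustrative [1 2 1]/4 low-pass filter over the reference pixel
    line, standing in for the unspecified high-frequency filtering."""
    padded = np.pad(np.asarray(ref, dtype=float), 1, mode='edge')
    return (padded[:-2] + 2 * padded[1:-1] + padded[2:]) / 4.0

def select_reference_pixels(ref, adaptive, filtering_flag):
    """Pick the reference pixels to predict from.

    `adaptive` is the S1120 decision; `filtering_flag` stands for the
    filtering information parsed from the encoded data (S1140/S1150)
    and is only consulted when `adaptive` is True."""
    ref = np.asarray(ref, dtype=float)
    if adaptive and filtering_flag:
        return smooth(ref)  # S1160: predict from high-frequency filtered pixels
    return ref              # S1130 / S1170: predict from original pixels
```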
  • Various embodiments of the present disclosure as described above determine the characteristics of the reference pixels included in the neighboring blocks of the target block to be encoded and, based on the determined reference pixel characteristics, decide whether to perform the adaptive-filtering-based intra prediction or the typical intra prediction, thereby reducing the frequency with which the additional information generated by the adaptive filtering is produced. As a result, various embodiments of the present disclosure provide higher compression encoding efficiency, allow the strictness and leniency of the criterion for determining whether to apply the adaptive filtering to be controlled so as to provide an efficient rate-distortion control factor for the video compression encoder, and decrease the complexity of the encoding and decoding processes caused by the repetitive filtering and intra prediction cycles when the adaptive filtering is omitted.
  • Various embodiments of the present disclosure are able to achieve advantageous effects in effectively decreasing the generation frequency of the index signal for the application of the additional filter generated by the adaptive filtering in the intra prediction, thereby providing higher compression coding efficiency, in controlling the strictness and leniency of the determination criterion on whether to apply the adaptive filtering to provide an efficient rate-distortion control factor for the video compression encoder, and eventually in decreasing the complexity of the encoding and decoding processes occurring due to the repetitive filtering and intra prediction cycles when the adaptive filtering is removed.
  • In the description above, although all of the components of the embodiments of the present disclosure have been described as assembled or operatively connected as a unit, the present disclosure is not intended to limit itself to such embodiments. Rather, within the objective scope of the claimed invention, the respective components are able to be selectively and operatively combined in any number. Every one of the components may also be implemented by itself in hardware, while the respective ones can be combined in part or as a whole selectively and implemented in a computer program having program modules for executing functions of the hardware equivalents. Codes or code segments to constitute such a program are easily deduced by a person skilled in the art. The computer program is stored in a non-transitory computer-readable recording medium which, in operation, can realize some embodiments of the present disclosure. Examples of the non-transitory computer-readable recording medium include magnetic recording media, such as a hard disk, a floppy disk, and a magnetic tape; optical recording media, such as a compact disk read only memory (CD-ROM) and a digital video disk (DVD); magneto-optical media, such as a floptical disk; and hardware devices that are specially configured to store and execute program instructions, such as a ROM, a random access memory (RAM), and a flash memory.
  • In addition, terms like ‘include’, ‘comprise’, and ‘have’ should be interpreted by default as inclusive or open rather than exclusive or closed unless expressly defined to the contrary. All technical, scientific, and other terms carry the meanings understood by a person of ordinary skill in the art unless defined to the contrary. Common terms as found in dictionaries should be interpreted in the context of the related technical writings, not too ideally or impractically, unless the present disclosure expressly defines them so.
  • Various embodiments of the present disclosure have been just exemplified. Those of ordinary skill in the art appreciate various modifications and alterations without departing from the spirit and scope of the claimed invention. Specific terms used in this disclosure and drawings are used for illustrative purposes and not to be considered as limitations of the present disclosure. Therefore, exemplary embodiments of the present disclosure have not been described for limiting purposes. Accordingly, the scope of the claimed invention is not to be limited by the above embodiments but by the claims and the equivalents thereof.

Claims (10)

What is claimed is:
1. An apparatus for decoding a video using an intra prediction, the apparatus comprising:
an encoded data extractor configured to extract filtering information, information on an intra prediction mode to be applied to a target block and a transformed and quantized residual block corresponding to the target block;
an intra predictor configured to
calculate a reference pixel characteristic by using one or more of reference pixels within neighboring blocks of the target block,
determine a filtering type to be applied to the reference pixels within the neighboring blocks, based at least on the extracted filtering information and the calculated reference pixel characteristic, and
generate a predicted block, by adaptively filtering the reference pixels within the neighboring blocks depending on the determined filtering type and then predicting the target block from the adaptively-filtered reference pixels according to the information on the intra prediction mode;
a residual data decoder configured to reconstruct a residual block by inversely quantizing and then inversely transforming the transformed and quantized residual block; and
an adder configured to add the predicted block to the residual block to reconstruct the target block.
2. The apparatus of claim 1, wherein the intra-predictor is configured to
predict the target block from high-frequency filtered reference pixels when the filtering type is a first type, and
predict the target block from high-frequency unfiltered reference pixels when the filtering type is a second type.
3. The apparatus of claim 1, wherein the reference pixel characteristic is a variance of the reference pixels within the neighboring blocks.
4. The apparatus of claim 1, wherein a size of the target block is determined among a plurality of block sizes, the plurality of block sizes including a size larger than 8×8.
5. The apparatus of claim 1, wherein the intra-predictor is configured to, when at least one of the reference pixels within the neighboring blocks is unavailable, fill the unavailable reference pixel with a value calculated by a predetermined operation.
6. A method performed by an apparatus for decoding a video using an intra prediction, the method comprising:
extracting filtering information from encoded data;
calculating a reference pixel characteristic by using one or more of reference pixels within neighboring blocks of the target block;
determining a filtering type to be applied to the reference pixels within the neighboring blocks, based at least on the extracted filtering information and the calculated reference pixel characteristic;
generating a predicted block, by adaptively filtering the reference pixels within the neighboring blocks depending on the determined filtering type and then predicting the target block from the adaptively-filtered reference pixels;
reconstructing a residual block of the target block from the encoded data; and
reconstructing the target block by adding the predicted block to the residual block.
7. The method of claim 6, wherein the generating of the predicted block comprises:
predicting the target block from high-frequency filtered reference pixels when the filtering type is a first type, and
predicting the target block from high-frequency unfiltered reference pixels when the filtering type is a second type.
8. The method of claim 6, wherein the reference pixel characteristic is a variance of the reference pixels within the neighboring blocks.
9. The method of claim 6, wherein a size of the target block is determined among a plurality of block sizes, the plurality of block sizes including a size larger than 8×8.
10. The method of claim 6, further comprising before calculating the reference pixel characteristic:
when at least one of the reference pixels within the neighboring blocks is unavailable, filling the unavailable reference pixel with a value calculated by a predetermined operation.
US14/859,970 2010-08-26 2015-09-21 Encoding and decoding device and method using intra prediction Abandoned US20160014409A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/859,970 US20160014409A1 (en) 2010-08-26 2015-09-21 Encoding and decoding device and method using intra prediction

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
KR1020100083026A KR101663764B1 (en) 2010-08-26 2010-08-26 Apparatus and Method for Encoding and Decoding Using Intra Prediction
KR10-2010-0083026 2010-08-26
PCT/KR2011/006346 WO2012026794A2 (en) 2010-08-26 2011-08-26 Encoding and decoding device and method using intra prediction
US201313819153A 2013-04-23 2013-04-23
US14/859,970 US20160014409A1 (en) 2010-08-26 2015-09-21 Encoding and decoding device and method using intra prediction

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
US13/819,153 Continuation US9179146B2 (en) 2010-08-26 2011-08-26 Encoding and decoding device and method using intra prediction
PCT/KR2011/006346 Continuation WO2012026794A2 (en) 2010-08-26 2011-08-26 Encoding and decoding device and method using intra prediction

Publications (1)

Publication Number Publication Date
US20160014409A1 true US20160014409A1 (en) 2016-01-14

Family

ID=45723952

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/819,153 Active 2032-10-30 US9179146B2 (en) 2010-08-26 2011-08-26 Encoding and decoding device and method using intra prediction
US14/859,970 Abandoned US20160014409A1 (en) 2010-08-26 2015-09-21 Encoding and decoding device and method using intra prediction

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/819,153 Active 2032-10-30 US9179146B2 (en) 2010-08-26 2011-08-26 Encoding and decoding device and method using intra prediction

Country Status (3)

Country Link
US (2) US9179146B2 (en)
KR (1) KR101663764B1 (en)
WO (1) WO2012026794A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130301715A1 (en) * 2011-01-14 2013-11-14 Huawei Technologies Co., Ltd. Prediction method in coding or decoding and predictor
US20190037217A1 (en) * 2016-02-16 2019-01-31 Samsung Electronics Co., Ltd. Video encoding method and apparatus, and decoding method and apparatus therefor

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015007200A1 (en) * 2013-07-15 2015-01-22 Mediatek Inc. Method of sample adaptive offset processing for video coding
KR101530774B1 (en) 2013-09-30 2015-06-22 연세대학교 산학협력단 Method, apparatus and system for image encoding and decoding
KR101530782B1 (en) 2013-12-03 2015-06-22 연세대학교 산학협력단 Method, apparatus and system for image encoding and decoding
CN103888764B (en) * 2014-03-14 2017-02-15 西安交通大学 Self-adaptation compensation system and method for video compression distortion
US10390020B2 (en) 2015-06-08 2019-08-20 Industrial Technology Research Institute Video encoding methods and systems using adaptive color transform
EP3767947A1 (en) 2015-11-17 2021-01-20 Huawei Technologies Co., Ltd. Method and apparatus of adaptive filtering of samples for video coding
USD836156S1 (en) 2016-03-04 2018-12-18 Samsung Electronics Co., Ltd. Stand for camera
CN116320494A (en) 2016-11-28 2023-06-23 韩国电子通信研究院 Method and apparatus for filtering
WO2018097700A1 (en) * 2016-11-28 2018-05-31 한국전자통신연구원 Method and device for filtering
CN115442599A (en) * 2017-10-20 2022-12-06 韩国电子通信研究院 Image encoding method, image decoding method, and recording medium storing bit stream
GB2567860A (en) * 2017-10-27 2019-05-01 Sony Corp Image data encoding and decoding
CN112399176B (en) * 2020-11-17 2022-09-16 深圳市创智升科技有限公司 Video coding method and device, computer equipment and storage medium
CN112399177B (en) * 2020-11-17 2022-10-28 深圳大学 Video coding method, device, computer equipment and storage medium

Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6188799B1 (en) * 1997-02-07 2001-02-13 Matsushita Electric Industrial Co., Ltd. Method and apparatus for removing noise in still and moving pictures
US20030053711A1 (en) * 2001-09-20 2003-03-20 Changick Kim Reducing blocking and ringing artifacts in low-bit-rate coding
US20030161407A1 (en) * 2002-02-22 2003-08-28 International Business Machines Corporation Programmable and adaptive temporal filter for video encoding
US6711296B1 (en) * 1998-02-26 2004-03-23 Japan As Represented By Secretary Of Agency Of Industrial Science And Technology Apparatus for performing loss-less compression-coding of adaptive evolution type on image data
US20040158878A1 (en) * 2003-02-07 2004-08-12 Viresh Ratnakar Power scalable digital video decoding
US20050110907A1 (en) * 2003-11-21 2005-05-26 Jae-Han Jung Apparatus and method of measuring noise in a video signal
US20050243911A1 (en) * 2004-04-29 2005-11-03 Do-Kyoung Kwon Adaptive de-blocking filtering apparatus and method for mpeg video decoder
US20060294171A1 (en) * 2005-06-24 2006-12-28 Frank Bossen Method and apparatus for video encoding and decoding using adaptive interpolation
US20070121731A1 (en) * 2005-11-30 2007-05-31 Akiyuki Tanizawa Image encoding/image decoding method and image encoding/image decoding apparatus
US20080002902A1 (en) * 2003-10-30 2008-01-03 Samsung Electronics Co., Ltd. Global and local statistics controlled noise reduction system
US20080279279A1 (en) * 2007-05-09 2008-11-13 Wenjin Liu Content adaptive motion compensated temporal filter for video pre-processing
US20100008430A1 (en) * 2008-07-11 2010-01-14 Qualcomm Incorporated Filtering video data using a plurality of filters
US7720153B2 (en) * 2003-03-27 2010-05-18 Ntt Docomo, Inc. Video encoding apparatus, video encoding method, video encoding program, video decoding apparatus, video decoding method and video decoding program
US7734115B2 (en) * 2005-12-08 2010-06-08 Industry-Academic Cooperation Foundation, Yonsei University Method for filtering image noise using pattern information
US20100272191A1 (en) * 2008-01-14 2010-10-28 Camilo Chang Dorea Methods and apparatus for de-artifact filtering using multi-lattice sparsity-based filtering
US20100284458A1 (en) * 2008-01-08 2010-11-11 Telefonaktiebolaget L M Ericsson (Publ) Adaptive filtering
US20100329341A1 (en) * 2009-06-29 2010-12-30 Hong Kong Applied Science and Technology Research Institute Company Limited Method and apparatus for coding mode selection
US20110038415A1 (en) * 2009-08-17 2011-02-17 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
US20110243222A1 (en) * 2010-04-05 2011-10-06 Samsung Electronics Co., Ltd. Method and apparatus for encoding video by using adaptive prediction filtering, method and apparatus for decoding video by using adaptive prediction filtering
US8090209B2 (en) * 2008-09-25 2012-01-03 Panasonic Corporation Image coding device, digital still camera, digital camcorder, imaging device, and image coding method
US20120014436A1 (en) * 2010-07-15 2012-01-19 Sharp Laboratories Of America, Inc. Parallel video coding based on block size
US20120082224A1 (en) * 2010-10-01 2012-04-05 Qualcomm Incorporated Intra smoothing filter for video coding
US20120147955A1 (en) * 2010-12-10 2012-06-14 Madhukar Budagavi Mode Adaptive Intra Prediction Smoothing in Video Coding
US8559526B2 (en) * 2010-03-17 2013-10-15 Kabushiki Kaisha Toshiba Apparatus and method for processing decoded images
US8805100B2 (en) * 2010-06-03 2014-08-12 Sharp Kabushiki Kaisha Filter device, image decoding device, image encoding device, and filter parameter data structure
US8837577B2 (en) * 2010-07-15 2014-09-16 Sharp Laboratories Of America, Inc. Method of parallel video coding based upon prediction type

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7450641B2 (en) 2001-09-14 2008-11-11 Sharp Laboratories Of America, Inc. Adaptive filtering based upon boundary strength
KR100654436B1 (en) 2004-07-07 2006-12-06 삼성전자주식회사 Method for video encoding and decoding, and video encoder and decoder
KR100711025B1 (en) 2005-02-15 2007-04-24 (주)씨앤에스 테크놀로지 The method for filtering a residual signal to improve performance in the standard coding mode of motion picture
KR101590500B1 (en) * 2008-10-23 2016-02-01 에스케이텔레콤 주식회사 / Video encoding/decoding apparatus Deblocking filter and deblocing filtering method based intra prediction direction and Recording Medium therefor

Also Published As

Publication number Publication date
KR101663764B1 (en) 2016-10-07
US20130215958A1 (en) 2013-08-22
KR20120041287A (en) 2012-05-02
WO2012026794A3 (en) 2012-05-31
WO2012026794A2 (en) 2012-03-01
US9179146B2 (en) 2015-11-03

Similar Documents

Publication Publication Date Title
US20160014409A1 (en) Encoding and decoding device and method using intra prediction
US9900610B2 (en) Method and device for encoding/decoding image by inter prediction using random block
US9386325B2 (en) Method and apparatus for encoding and decoding image by using large transformation unit
US9344732B2 (en) Image encoding and decoding apparatus and method
US8111914B2 (en) Method and apparatus for encoding and decoding image by using inter color compensation
US9497454B2 (en) Method and device for encoding/decoding image using feature vectors of surrounding blocks
CN107347157B (en) Video decoding device
US10034024B2 (en) Method and apparatus for encoding/decoding images considering low frequency components
CN108141612B (en) Apparatus and method for encoding and decoding image
KR101645544B1 (en) Methods of encoding and decoding using multi-level prediction and apparatuses for using the same
US20130230104A1 (en) Method and apparatus for encoding/decoding images using the effective selection of an intra-prediction mode group
US8594189B1 (en) Apparatus and method for coding video using consistent regions and resolution scaling
US20150146776A1 (en) Video image encoding device, video image encoding method
US9532045B2 (en) Method and device for encoding/ decoding image having removable filtering mode
KR101539045B1 (en) Quantization Parameter Determination Method and Apparatus and Video Encoding/Decoding Method and Apparatus
KR101713250B1 (en) Apparatus and Method for Encoding and Decoding Using Intra Prediction
US8442338B2 (en) Visually optimized quantization
CN111684798A (en) Data encoding and decoding
CN112911312B (en) Encoding and decoding method, device and equipment
CN113473129B (en) Encoding and decoding method and device
KR100728032B1 (en) Method for intra prediction based on warping
KR20200134318A (en) Video encoding and decoding

Legal Events

Date Code Title Description
AS Assignment

Owner name: SK TELECOM CO., LTD., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SONG, JINHAN;LIM, JEONGYEON;CHOE, YOONSIK;AND OTHERS;SIGNING DATES FROM 20130306 TO 20130404;REEL/FRAME:036613/0001

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION